This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 686239.
caLIBRAte Final Conference, Copenhagen Nov 2019
Testing and final selection of human risk assessment models for the nano-risk governance framework
Miikka Dal Maso, Tampere University, Finland
caLIBRAte WP 7 team
Strategy for the models and framework testing, calibration and demonstration
[Diagram: identification and assessment of models and tools (WP1-4), data (WP8), the caLIBRAte framework (WP5-6), and testing and demonstration (WP7)]
caLIBRAte performed testing of models on several levels
§ Assessment against needs: WP2 for human risk assessment (HRA), WP3 for environmental risk assessment (ERA)
§ Models passing the assessment were subjected to sensitivity analysis and performance testing
§ Sensitivity analysis: how does a change in an input affect the output?
– Find critical inputs, identify problematic behavior
– Using simulated data with realistic ranges
§ Performance testing: how does the model fulfil its function?
– User testing; comparison to the real world
– Use of real data from measured case studies
[Figure: each input is varied over its min-mean-max range; the resulting min-mean-max range of the output is observed]
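The min-mean-max scheme can be pictured in code. The sketch below (a minimal illustration with hypothetical input ranges, not the project's actual test harness) sweeps one input at a time over its range while the others stay at their means, and records the induced output spread:

```python
# One-at-a-time (OAT) sensitivity sketch: vary one input across its
# plausible range while all other inputs are held at their mean values,
# and record the resulting spread in the model output.
def oat_sensitivity(model, ranges):
    """model: callable taking a dict of inputs; ranges: {name: (lo, mean, hi)}."""
    baseline = {name: mean for name, (lo, mean, hi) in ranges.items()}
    spreads = {}
    for name, (lo, mean, hi) in ranges.items():
        outputs = []
        for value in (lo, mean, hi):
            inputs = dict(baseline, **{name: value})  # perturb one input only
            outputs.append(model(inputs))
        spreads[name] = max(outputs) - min(outputs)  # output range induced by this input
    return spreads
```

For a toy model 2a + b with a over (0, 1, 2) and b over (0, 5, 10), the sweep attributes a spread of 4 to a and 10 to b, flagging b as the more critical input.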
Input and output mapping
§ Inputs required of the models were very heterogeneous, and overlaps were not in the majority
§ 20 models (16 nano-specific):
– 1190 (990) different input values were identified
– only a small subset is shared between different models
§ The data formats used in the models were found to be relatively simple
§ In general, no principal difficulties in transferring data to the models were identified
§ No proprietary data formats were used, and the inputs have low complexity
See poster by Poikkimäki et al. [P19]
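The overlap analysis amounts to simple set arithmetic over each model's input list. A sketch (hypothetical model and input names; the actual inventory covered the 1190 inputs above):

```python
from collections import Counter

# Count how many models request each input, and collect the inputs that
# are shared by more than one model.
def input_overlap(model_inputs):
    """model_inputs: {model_name: set of input names}."""
    counts = Counter(name for inputs in model_inputs.values() for name in inputs)
    shared = {name for name, n in counts.items() if n > 1}
    return counts, shared

# Toy example with two models and invented input names:
counts, shared = input_overlap({
    "model_1": {"particle size", "density", "release rate"},
    "model_2": {"particle size", "room volume"},
})
```

Here only "particle size" ends up in the shared set, mirroring the finding that most inputs are model-specific.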
Sensitivity testing for HRA models – different testing methodologies were applied
§ Guidance, test environments and reporting templates were developed for tool function, sensitivity and performance testing
– To allow testing that is as harmonized as possible; challenging due to the heterogeneity of the tool/model approaches
– Primarily OAT testing, but also diagnostic testing and Monte Carlo were applied
[Figure: example results from OAT, diagnostic, and Monte Carlo sensitivity tests]
See poster by Poikkimäki et al. [P19]
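As an illustration of the Monte Carlo approach (a generic sketch, not the project's implementation): all inputs are sampled simultaneously from their ranges, and each input's correlation with the output ranks its influence.

```python
import random

# Monte Carlo sensitivity sketch: sample every input from a uniform range,
# run the model on each joint sample, and correlate inputs with the output.
def mc_sensitivity(model, ranges, n=1000, seed=0):
    """ranges: {name: (lo, hi)}; returns each input's Pearson correlation with the output."""
    rng = random.Random(seed)
    samples = {name: [rng.uniform(lo, hi) for _ in range(n)]
               for name, (lo, hi) in ranges.items()}
    outputs = [model({name: samples[name][i] for name in samples}) for i in range(n)]

    def corr(xs, ys):
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    return {name: corr(samples[name], outputs) for name in samples}
```

For a toy model 10a + b with both inputs uniform on (0, 1), input a correlates with the output near 1 while b stays near 0.1, reproducing the intuition that a dominates.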
Sensitivity testing results
• Most and least sensitive parameters were identified for each model
• 64 inputs classified as most sensitive
• most are physical-chemical or exposure parameters, only a few hazard
• e.g. release rate/concentration, duration of cycle/process, stability/half-life, …
• 43 inputs classified as least sensitive, e.g. duration of handling/duration of activity, ventilation rate, room volume, …
• Sensitivity testing provided the basis for the further performance testing of the caLIBRAte framework models
See poster by Poikkimäki et al. [P19]
User testing: interactive consultations with stakeholders for evaluating framework tools
§ Stakeholder requirements and recommendations on risk governance models and usability of the risk governance tools (D7.3)
– demonstration and test events were organized as a combination of webinars and workshops, delivered by expert partners
§ Summary documents of key features and application domains of the models were developed
§ Result: tool criteria
– Provide guidance for the safe handling and safe use of NMs by all players along the supply chain
– Comply (as concerns models, inputs, outputs) with existing regulatory frameworks and requirements
– Be linked to databases for continuous updating on scientific and technical progress (above all on NP properties)
– Provide validated and reliable output data for risk governance of NMs
– Be easy to use for a wide range of specialists with different expertise and roles inside the company
– Facilitate foresight analysis and decision-making in planning new developments and applications of NMs
– Provide scenario analysis to predict possible risks of novel products
– Act as an early warning system for high-risk hazard and exposure scenarios
– Be strictly confidential regarding management of proprietary data
– Have an affordable cost
General performance of the tools was good
Performance testing: comparison against available case studies
General methodology:
1. Compilation of parameters requested by the selected models related to human exposure
2. Identification of data sources (databases, data generated in EU projects, or literature)
3. Evaluation of data availability to cover the requirements of the different models
4. Evaluation of data quality
5. Selection of relevant, reliable and complete case studies for model performance testing
See poster by Fonseca et al. [P10]
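Steps 3-5 of the methodology amount to a coverage-and-quality filter over the candidate case studies. A minimal sketch (hypothetical data structures and quality scale, not the project's actual pipeline):

```python
# Keep only case studies whose available parameters cover a model's
# requirements (step 3) and whose quality rank (step 4) meets a chosen
# threshold, yielding the candidates for performance testing (step 5).
def select_case_studies(case_studies, required_params, min_quality=3):
    """case_studies: {name: {"params": set of available inputs, "quality": rank}}."""
    return [name for name, cs in case_studies.items()
            if required_params <= cs["params"] and cs["quality"] >= min_quality]
```

With invented entries, a study missing a required parameter or ranked below the quality threshold is dropped, and only complete, high-quality studies survive.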
WP6 input for testing: collection of data for performance testing
• Data was inventoried, evaluated and ranked for quality
• Missing parameters/data required by the caLIBRAte tools were identified (e.g. dustiness data, SSA, size, …)
• Case studies were parameterized for each model/tool with the data available in each case study, using assumptions, default parameters and/or calculated values
• Where data was missing, it was filled in using data libraries
See poster by Fonseca et al. [P10]
Parameterisation of case studies to match inputs for multiple models
Due to the heterogeneity of model inputs and the available data, a major effort of data harmonization and parameterization was made to generate performance testing data
See posters by Fonseca et al. [P10, P14]
[Diagram: case studies A, B and C each parameterized to provide inputs for Models 1-3]
Applied criteria for performance testing
§ Testing to be done with the intended purpose of each tool
§ Testing of HRA models is performed with DNELs and bulk OELs
§ Aiming to follow the existing minimum criteria to assess the model/tool prediction (adapted from the Dutch Social Economic Council):
– 25 comparisons: a minimum of 25 exposure measurements are conducted and compared to the model outcome
– Definition of application domains: the application domain is known, as well as which processes and substances the model is suitable for (and which processes and substances it is not suitable for)
– The exposure situations for which exposure measurements are conducted are widely spread over the applicability domain of the model
– The Spearman correlation between model estimates and measured exposure values is at least 0.6
– There are no domains of the model where exposure measurements are clearly and consistently higher than the model estimates
– The tool estimates a reasonable worst case, which represents the upper side of occurring exposure values
– Measurements do not exceed the model estimates in more than 10% of the total comparisons
– Evaluation is done separately for solids, liquids and/or gases/fumes
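The quantitative criteria (at least 25 comparisons, Spearman correlation of at least 0.6, measurements exceeding estimates in at most 10% of comparisons) can be sketched as a simple check. The Spearman implementation below assumes no tied values (ties would need average ranks):

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation, assuming no tied values."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order):
            r[i] = rank + 1  # 1-based rank of each element
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def passes_minimum_criteria(measured, modelled, min_n=25, min_rho=0.6, max_exceed=0.10):
    """Check the quantitative part of the minimum criteria for one model."""
    if len(measured) < min_n:
        return False  # fewer than 25 comparisons
    # Fraction of comparisons where the measurement exceeds the model estimate.
    exceed = sum(m > p for m, p in zip(measured, modelled)) / len(measured)
    return spearman_rho(measured, modelled) >= min_rho and exceed <= max_exceed
```

A model whose estimates track the 25+ measurements monotonically and never underestimate them passes both thresholds.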
Example performance testing results: Stoffenmanager (included via Licara Nanoscan)
• Model scores positively correlated with their corresponding measured values
• The model tends to overestimate the exposure at lower levels of exposure. … This performance test concludes that Stoffenmanager Nano, in its current form, is suitable to be included in the caLIBRAte system.
See poster by Franken & Fransman [P23]
Example performance testing results: Nanosafer
Spearman correlation coefficient = 0.6
Recommended protection factor acceptable for a specific work situation?
See poster by Fonseca [P24]
Example performance testing results: ConsExpo Nano
…the ConsExpo tool represents the current state of the science. The results of this performance test increase confidence in the validity of the methods it implements. Therefore, the tool is deemed suitable to include in the caLIBRAte Nano-Risk Governance Portal.
Comparison study: Berger-Preiß, E., et al. (2009). Int. J. Hyg. Environ. Health 212: 505–518
Comparison study: Park, J., Yoon, C. and Lee, K. (2018). International Journal of Hygiene and Environmental Health.
• Evaluated predicted air concentrations with measured data
• Identified 5 suitable studies for model evaluation
• Account for uncertainty in experimental settings
• Compare modeled uncertainty bounds with data
• Overall good agreement between model and experiment
• Critical experiment information often lacking:
• Concentration of ingredient in product
• Particle size distribution of spray
See poster by Delmaar et al. [P20]
Overview of tested models
[Table: overview of tested models; footnote a marks a model included as a reference model]
See poster by Fonseca [P24]
Webpage: www.nanocalibrate.eu
ACKNOWLEDGEMENTS
This work is part of the caLIBRAte project funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No 686239.