
Heather Case and Jay Messer, U.S. EPA

Ispra, Italy, January 2006

Framework for a Comparison of EEA and EPA Indicators

Purposes

• Propose a framework for a cooperative effort to conduct an in-depth comparison of EEA and EPA environmental indicators.

• Present examples of EPA indicators from the upcoming EPA Report on the Environment that illustrate the key comparison issues.

• Set the stage for discussing additional issues that involve electronic augmentation and updating of indicators going forward (next presentation).

EPA’s Report on the Environment

Recent events and future directions

Recent events & future directions

Since we met in May 2005:

• July 2005: Peer review of proposed indicators

• Oct. 2005: Second peer review of indicators newly proposed or significantly revised

Looking Ahead:

• January 2006: Posting of “final” indicators for ROE 2007 on the internet

• September 2006: Scientific peer review of full ROE Technical Document

• Spring 2007: Final release of Technical Document

Background

• At the Washington, DC meeting in May 2005, we decided to pursue a comparison of EEA and EPA indicators (including scaling).

• EPA post-doc Ellen Natesan developed a white paper comparing EEA core indicators with indicators from EPA’s 2003 Draft Report on the Environment.

• Since then, many indicators have been updated and regionalized, and new indicators have been added for the 2007 ROE.

Propositions

• Indicators may fundamentally differ because of purpose, criteria, etc.

• Indicators may fundamentally differ because of monitoring design, methods, averaging period, scale, and reference points.

• To the extent that the indicators are transparent and reproducible, and the data well documented and accessible, two indicators that are not fundamentally different present an opportunity to calibrate one against the other.

Overview of Proposed Criteria for Comparisons

• Purpose of indicators

• Indicator definition, criteria and “ground rules”

• Monitoring design and data comparability

• Quality assurance

• Scaling

• Data management and accessibility

Purpose of Indicators

• ROE indicators answer questions about the state of the environment over time (e.g., are ozone levels decreasing over time?)

• Accountability indicators track the effectiveness of particular programs (e.g., are controls on mobile sources reducing ozone?)
– Must be responsive to early actions
– Must differentiate among causes
– May involve cost-effectiveness

Purpose of Indicators

• Examples

– What are the trends in outdoor air quality and their effects on human health and the environment?

• Sulfur Dioxide Emissions
• Ozone Injury to Forest Plants

– What are the trends in extent and condition of fresh surface waters and their effects on human health and the environment?

• Nitrogen and Phosphorus in Streams in Agricultural Watersheds
• Benthic Invertebrates in Wadeable Streams

Indicator Definition and Criteria

• ROE Indicator – a numerical value derived from actual measurements of a pressure, ambient condition, exposure, or human health or ecological condition over a specified geographic domain, whose trends over time represent or draw attention to underlying trends in the condition of the environment.
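As a rough illustration only (not EPA’s actual data model), the elements named in this definition might be captured in a record like the following sketch:

```python
from dataclasses import dataclass

# Hypothetical sketch of the pieces named in the ROE indicator definition:
# measured values for a condition, over a geographic domain, as a time series
# whose trend is what the indicator conveys. Field names are illustrative.

@dataclass
class RoeIndicator:
    name: str                          # e.g., "Sulfur Dioxide Emissions"
    condition_measured: str            # pressure, ambient condition, exposure, or health/ecological condition
    geographic_domain: str             # e.g., "national" or an EPA Region
    values_by_year: dict[int, float]   # measured time series underlying the trend
```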

Indicator Types

What is currently NOT included

• Administrative indicators (government actions and responses to them)

• Resource use

• Economic and “sustainability” indicators

ROE Indicator Criteria

• The indicator is useful. It answers (or makes an important contribution to answering) a question in the ROE.

• The indicator is objective. It is developed and presented in an accurate, clear, complete, and unbiased manner.

• The indicator is transparent and reproducible. The specific data used and the specific assumptions, analytic methods, and statistical procedures employed are clearly stated.

ROE Indicator Criteria (cont.)

• The underlying data are characterized by sound collection methodologies, data management systems to protect their integrity, and quality assurance procedures.

• Data are available to describe changes or trends and the latest available data are timely.

• The data are comparable across time and space, and representative of the target population. Trends depicted in the indicator accurately represent the underlying trends in the target population.

ROE Indicator Modeling “Ground Rule”

• A model may be used to calculate an indicator value based on a physical measurement that is not itself the indicator, as long as the physical value and the indicator are at the same hierarchical level.

– Permissible: NOx emissions based on fuel consumption and an emissions factor (see the sketch below)

– Not permissible: acid deposition based on SO2 emissions
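A minimal sketch of the permissible case, using made-up numbers and a hypothetical emission factor, to show the arithmetic the ground rule allows (an indicator value calculated from a measured activity at the same hierarchical level):

```python
# Hypothetical example: NOx emissions estimated from measured fuel consumption
# and an emission factor. Values are illustrative, not real EPA factors.

def nox_emissions_tons(fuel_burned_tons: float, emission_factor_lb_per_ton: float) -> float:
    """Estimate NOx emissions (short tons) from fuel burned and an emission factor."""
    return fuel_burned_tons * emission_factor_lb_per_ton / 2000.0  # 2,000 lb per short ton

# 10,000 tons of coal at an assumed 15 lb NOx per ton burned
print(nox_emissions_tons(10_000, 15.0))  # -> 75.0 tons NOx
```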

Monitoring design & data comparability

• What is being measured? Are the methods equivalent? Is guidance available and being followed?

• Where are the monitoring sites located? How were the locations chosen (e.g., purposive vs. probability designs)?

• When are samples collected? What is the averaging period?

• What are the reference points?

Monitoring design & data comparability

• What is being measured? Are the methods equivalent? Is guidance available and being followed?

Examples

• SO2 and VOC Emissions

• Fuel Combustion: Power Generators – coal-, gas-, and oil-fired power plants required to use continuous emissions monitors (SO2 only)

• Fuel Combustion: Other Sources – industrial, commercial, institutional, and residential heaters and boilers not required to use CEMs – emission factors and DOE fuel use data

• Other Industrial Processes – e.g., chemical production and petroleum refining – emission factors, production data

SO2 and VOC Emissions

• On-road Vehicles – e.g., cars, trucks, buses, and motorcycles – FHWA mileage estimates and EPA’s MOBILE6 model

• Non-road Vehicles and Engines – e.g., farm and construction equipment, lawnmowers, chainsaws, boats/ships, aircraft – EPA’s NONROAD model

National Emissions Inventory

• Conducted every three years

• EPA develops some data (electricity generators)

• States develop other data with guidance from EPA

• EPA performs consistency checks

• Methods evolve – only the 1990 inventory is fully reconciled to the latest inventory year

Monitoring design & data comparability

• Where are the monitoring sites located? How were the locations chosen (e.g., purposive vs. probability designs)?

• When are samples collected? What is the averaging period?

• What is the reference point?

Examples

• Nitrogen and Phosphorus in streams

– Nitrate in streams in agricultural watersheds

– Nutrient Concentrations in wadeable streams

Three possibilities

• Section 305(b) of Clean Water Act - States

• National Water Quality Assessment (NAWQA) – U.S. Geological Survey

• Wadeable Streams Assessment (WSA) – EPA and States

Section 305(b) of Clean Water Act

• States determine (attainable) designated uses for each water body

• Monitor against water quality standards appropriate for the designated use

• Report to EPA every two years on percentage of water bodies that meet standards (possible indicator)

Section 305(b) of Clean Water Act

• Only a small fraction of water bodies assessed

• Biases in designation of use and water bodies monitored

• Standards and methods vary from state to state

• Rejected as an indicator in the FY03 Draft ROE for failure to meet indicator criteria

Nutrients in Streams

• NAWQA
– Purposive design (50 watersheds)
– Sampled at many points in the watershed
– Sampled 12-13 times/year
– No reference levels

• WSA
– Probability design (1,392 reaches)
– Sampled at one point on the reach
– Sampled once every 4 years (summer)
– Reference levels based on statistics from regional reference sites

Nitrate in streams in agricultural watersheds, NAWQA (1992-1998)

[Figure: bar chart of percent of stream sites by nitrate concentration category – less than 2 ppm, 2 to 6 ppm, 6 to 10 ppm, 10 ppm or more; values shown: 52.3%, 29.9%, 8.4%, 9.3%. EPA’s drinking water standard is 10 ppm (Maximum Contaminant Level).]

Nutrient Concentrations in Wadeable Streams

[Figure: WSA (1999-2003) bar chart of percentage of stream miles for Total Phosphorus and Total Nitrogen in three categories – Low: below 75th percentile of reference range; Moderate: between 75th and 95th percentiles of reference range; High: above 95th percentile of reference range; values shown: 51%, 45%, 17%, 22%, 32%, 33%. Source: U.S. EPA, Wadeable Streams Assessment.]
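A hedged sketch of the reference-range classification used above: a site’s concentration is compared with the 75th and 95th percentiles of concentrations at regional reference sites. The percentile thresholds come from the slide; the concentrations and the `classify_site` helper are illustrative only.

```python
import numpy as np

def classify_site(concentration: float, reference_concentrations: list[float]) -> str:
    """Classify a nutrient concentration against a regional reference-site distribution."""
    p75, p95 = np.percentile(reference_concentrations, [75, 95])
    if concentration <= p75:
        return "Low"        # below 75th percentile of reference range
    elif concentration <= p95:
        return "Moderate"   # between 75th and 95th percentiles
    else:
        return "High"       # above 95th percentile

# Made-up total phosphorus values (mg/L) at hypothetical reference sites
reference_tp = [0.01, 0.02, 0.02, 0.03, 0.04, 0.05, 0.06, 0.08]
print(classify_site(0.03, reference_tp))  # -> "Low"
print(classify_site(0.15, reference_tp))  # -> "High"
```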

Nutrients in Streams

• NAWQA
– Better characterization of sampled streams and watersheds
But
– Expensive
– Can’t be extrapolated to unsampled streams
– No confidence bounds for national estimates

• WSA
– Unbiased estimates of all wadeable streams, with known confidence (see the sketch below)
– Comparatively inexpensive
But
– Poor characterization of individual reaches
– No data for extreme events or other seasons
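A simplified sketch of why a probability design yields estimates with known confidence. It assumes equal-probability sampling of reaches and a normal approximation; the actual WSA estimation uses design weights, and the counts below are made up.

```python
import math

def proportion_with_ci(n_in_class: int, n_sampled: int, z: float = 1.96):
    """Estimated proportion in a condition class, with an approximate 95% confidence interval."""
    p = n_in_class / n_sampled
    se = math.sqrt(p * (1 - p) / n_sampled)   # standard error under simple random sampling
    return p, (p - z * se, p + z * se)

# Hypothetical: 460 of 1,392 sampled reaches rated "High" for total nitrogen
p, (lo, hi) = proportion_with_ci(460, 1392)
print(f"{p:.1%} (approx. 95% CI {lo:.1%} to {hi:.1%})")
```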

Quality assurance

• Are controls in place to ensure that the data are of adequate and known quality?

• Are the metadata available?

• Links to QA Plans and metadata for ROE indicators in Indicator QA forms (Heather Case’s presentation)

Scaling

• What is the most disaggregated level at which the indicator is meaningful?

• Is the reference level appropriate for the extent and grain size of the indicator? How important are episodes?

• How sensitive is the indicator to the effects of a few very large entities?

Scaling

• What is the most disaggregated level at which the indicator is meaningful?

– SO2 and VOC emissions

• ROE07 - 10 EPA Regions

• 3100 US counties (theoretically)

– N&P in streams

• ROE07 - national only

• NAWQA – 50 predominantly agricultural watersheds

• WSA – 10 EPA Regions (theoretically) or 9 ecoregions

Scaling

• Is the reference level appropriate for the extent and grain size of the indicator? How important are episodes?

– Mean levels of toxic chemicals in a stream may not mean much if storm events do the damage

• How sensitive is the indicator to the effects of a few very large entities?

– A very small percentage of emitters may be responsible for a large fraction of total emissions – to the extent that they are concentrated in a few states or regions, they may skew national statistics
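A small illustrative check of this sensitivity, with entirely made-up facility data: what share of the national total comes from the few largest emitters?

```python
def top_n_share(emissions_by_facility: list[float], n: int) -> float:
    """Fraction of total emissions contributed by the n largest emitters."""
    ranked = sorted(emissions_by_facility, reverse=True)
    return sum(ranked[:n]) / sum(ranked)

# Hypothetical facility emissions (thousand tons): a few large plants dominate
facility_emissions = [120.0, 95.0, 80.0, 12.0, 9.0, 7.0, 5.0, 4.0, 3.0, 2.0]
print(f"Top 3 facilities: {top_n_share(facility_emissions, 3):.0%} of total")  # -> 88%
```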

Data management and accessibility

• The key to transparency and reproducibility

• All ROE indicators have

– Data underlying the figures available in Excel spreadsheets online

– Links to parent databases

• Some ROE indicators have

– Links to datasets (or data in Excel spreadsheets) that underlie the data supporting the figures.

Conclusions

• Indicators may fundamentally differ because of purpose, criteria, etc.

• Indicators may fundamentally differ because of monitoring design, methods, averaging period, scale, and reference points.

• To the extent that the indicators are transparent and reproducible, and the data well documented and accessible, two indicators that are not fundamentally different present an opportunity to calibrate one against the other.