
A review of building/infrastructure sustainability reporting tools (SRTs)

Renard Y. J. Siew, Maria C. A. Balatbat and David G. Carmichael

The University of New South Wales, Sydney, New South Wales, Australia

Smart and Sustainable Built Environment, Vol. 2 No. 2, 2013, pp. 106-139. DOI 10.1108/SASBE-03-2013-0010

    Abstract

Purpose – Buildings/infrastructure are recognised to have a significant impact on the environment and the community, and hence there is pressure on industry practitioners to incorporate environmental and social considerations in addition to the traditional cost, time and quality. The development of sustainability reporting tools (SRTs) to assist in the management of green building/infrastructure projects is pivotal in informing on progress in sustainability practices. However, the rapid growth of SRTs in the last decade, with different criteria and methodology, has created complications for stakeholders.

Design/methodology/approach – The paper provides a comprehensive review of tools to guide practitioners, property investors, policy makers and developers towards making informed choices in green building/infrastructure projects. Comparative analyses, benefits and limitations of these tools are discussed in the paper.

Findings – Some of the findings from the analysis of SRTs include: an emphasis on environmental issues; scoring which does not account for uncertainty or variability in assessors' perceptions; lack of published reasoning behind the allocation of scores; inadequate definition of scales to permit differentiation among projects; and the existence of non-scientific benchmarks.

Originality/value – The paper departs from earlier reviews to include a discussion on infrastructure SRTs, life cycle tools, and issues broader than the environment. Changes and additions, subsequent to earlier reviews, have been made to SRTs, making the updated review provided here useful.

Keywords Sustainable development, Infrastructure, Buildings, Rating tools, Sustainability criteria, Sustainability indicators, Reporting

Paper type General review

1. Introduction

Sustainable development has been internationally agreed as a key goal for policymakers to guide development at global, national and local levels (Singh et al., 2009). The World Economic Forum (2011, p. 11) identifies the building sector as an area which needs to be addressed because it accounts for 40% of the world's energy use and 40% of carbon output, and consumes 20% of available water. The large use of electricity in buildings has been identified as one of the main culprits for high emissions across the globe. The Centre for International Economics Canberra and Sydney (2007) reports that 23 per cent of the total greenhouse gas emissions in Australia come from the energy demand of the building sector, while the US Green Building Council (USGBC, 2011) claims that residential and commercial buildings together account for 39 per cent of total emissions in the USA, more than in any other country in the world except China.

The increased recognition that buildings are substantial carbon dioxide (CO2) emitters (Reed et al., 2009; Urge-Vorsatz and Novikova, 2008; Buchanan and Honey, 1994; Levermore, 2008), and contribute significantly to climate change, puts pressure on construction industry practitioners to incorporate sustainability goals aside from the traditional project goals of cost, time and quality (Fernandez-Sanchez and


Rodriguez-Lopez, 2010). Translating sustainability goals into action at the project level is complicated by the individual characteristics of countries, their cultures, climates and types of buildings (Ugwu and Haupt, 2007).

Against this background, there is a widely recognised need to identify metrics and tools that would help articulate the extent to which current activities are either sustainable or not sustainable (Singh et al., 2009). This has been the key motivator for the development and increased popularity of sustainability reporting tools (SRTs) in the building sector and the civil engineering infrastructure sector. Infrastructure includes transport (roads and bridges, bus and cycle ways, footpaths, railways), water (sewage and drainage, water storage and supply), energy (transmission and distribution) and communication (transmission and distribution), among others (AGIC, 2012). This paper provides a review of available tools used to assess and report sustainability in the infrastructure and building sectors. The tools are commonly used in their country of origin, particularly if this is legislated, but are also adopted in other countries.

Cole (1999) suggests that SRTs, used with the intent to evaluate green performance, usually take on a few common characteristics, such as an emphasis on the assessment of resource use and ecological loadings, the assessment of design intentions and potential through prediction rather than actual real-world performance, the use of performance scoring as an additive process, and a performance summary, certificate or label. Cole (2005) adds that SRTs are not only a means to facilitate the reduction of environmental impacts, but are also increasingly being used as a basis for risk and real estate valuations and in obtaining development approval from the banking industry. SRTs provide industry standard guidelines and allow comparability across projects. For building owners and operators, using SRTs demonstrates commitment to corporate social responsibility (CSR), and permits staying ahead of future government regulations (Green Building Council of Australia (GBCA), 2012b). Developing an ideal SRT is challenging because it needs to be able to satisfy all stakeholders' concerns (Ding, 2008).

SRTs have been in existence for the last decade within a number of countries, and were introduced in an effort to better understand the sustainability level of buildings and infrastructure. While it may be argued that different climates and cultures, and the different nature of buildings/infrastructure in each country, may warrant unique reporting tools, the rapid growth of SRTs has made sustainability comparisons more complicated for stakeholders, for example, property investors (Reed et al., 2009), who rely on such tools to make informed investment decisions. According to Nguyen and Altan (2011), although there are many registered building SRTs, only a few of them are widely acknowledged. Infrastructure SRTs and life cycle tools are less commonly discussed.

This paper departs from other reviews (Ding, 2008; Reed et al., 2009; Mitchell, 2010; Berardi, 2012; Sev, 2011) to include a discussion on infrastructure SRTs, life cycle tools, and issues broader than the environment. Berardi (2012) provides a review of building SRTs in three categories: total quality assessment, life cycle analysis (LCA) and energy consumption evaluation. However, because of the wide scope adopted, the review of life cycle tools is not extensive. Changes subsequent to the above reviews have been made to some SRTs, for example, Leadership in Energy and Environmental Design (LEED), making the updated review provided here useful. As well, this paper summarises the empirical evidence on the benefits of engaging with SRTs, and provides a critique of the tools.

The Global Reporting Initiative (GRI) was set up with the intention of providing an international sustainability reporting framework (Global Reporting Initiative (GRI), 2011).


Under this framework, specific reporting guidelines for the construction and real estate sector are available. However, since the context provided by the GRI guidelines is more applicable at a corporate level, it is not reviewed here. The scope of this paper is SRTs for building/infrastructure projects.

The structure of the paper is as follows. The following sections explore the nature of major building SRTs, infrastructure SRTs and life cycle tools applicable to both buildings and infrastructure. A critique of these tools and suggestions for future research are then given.

This paper acknowledges that the multiplicity of terms in SRTs can be confusing to the reader. As such, it is important to clarify some of this terminology upfront. Typically, for most SRTs, there are hierarchical levels of sustainability criteria. To ensure consistency, the top (highest) level will be referred to here as criteria, and the next (lower) level as subcriteria.

The review provided here will be of interest to a range of stakeholders – construction industry practitioners, real estate investors and developers – involved in making decisions about green building/infrastructure projects. As well, it will serve as a useful reference for the development of the next generation of SRTs.

2. SRTs for buildings

A review of some of the major tools applicable to buildings is given. This is followed, after a similar review for infrastructure and life cycle tools, by a critique.

Building Research Establishment's Environmental Assessment Method (BREEAM)

BREEAM, established in 1990, was first launched in the UK with office buildings in mind (Bonham-Carter, 2010; Sharifi and Murayama, 2013) but later expanded in scope to also include specific schemes for residential housing and neighbourhoods. It is perceived to be one of the world's foremost environmental reporting tools for buildings (Crawley and Aho, 1999). Scores are awarded to ten criteria – management, health and well-being, energy, transport, water, materials, waste, land use and ecology, pollution and innovation – according to performance, and summed to produce an overall score. This score is then matched to an award: pass, good, very good, excellent or outstanding.

Table I highlights both the criteria and subcriteria in BREEAM. Scores are awarded upon meeting the agreed performance targets for each of the subcriteria.

The award benchmarks for new buildings, refurbishments and, where applicable, fit-out projects, are presented in Table II. The BREEAM tool offers a set of weightings to be taken into account as part of the assessment process (see Table III) (BREEAM, 2012).
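To make the scoring mechanics concrete, the following sketch (not an official BREEAM calculator) combines hypothetical per-criterion credit percentages with the Table III weightings for new builds and maps the result to the Table II award bands.

```python
# Illustrative sketch of a BREEAM-style weighted score; not an official calculator.
# Weightings follow Table III (new builds); award thresholds follow Table II.
WEIGHTS = {
    "management": 12, "health_wellbeing": 15, "energy": 19, "transport": 8,
    "water": 6, "materials": 12.5, "waste": 7.5, "land_use_ecology": 10,
    "pollution": 10, "innovation": 10,
}  # per cent contribution of each criterion (innovation is additional, so the maximum is 110%)

AWARDS = [(85, "Outstanding"), (70, "Excellent"), (55, "Very Good"),
          (45, "Good"), (30, "Pass"), (0, "Unclassified")]

def breeam_score(section_pct):
    """Weighted sum of per-criterion credit percentages (0-100 each)."""
    return sum(WEIGHTS[c] / 100.0 * section_pct[c] for c in WEIGHTS)

def breeam_award(score):
    return next(label for threshold, label in AWARDS if score >= threshold)

if __name__ == "__main__":
    hypothetical = {c: 60.0 for c in WEIGHTS}  # 60% of credits in every section
    s = breeam_score(hypothetical)
    print(f"overall {s:.1f}% -> {breeam_award(s)}")  # overall 66.0% -> Very Good
```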

LEED

LEED was developed by the USGBC in 2000. Since its inception, LEED has grown to encompass more than 14,000 projects in the USA and more than 30 countries (Nguyen and Altan, 2011). This tool promotes sustainable building and development practices through a suite of reporting, and recognises projects which are committed to better environmental and health performance (LEED, 2012). Two major building typologies covered by LEED are:

(1) New Construction and Major Renovations v2009. The criteria and scores (included in parentheses) available for each criterion are as follows: sustainable


sites (26), water efficiency (10), energy and atmosphere (35), indoor environmental quality (IEQ) (15), innovation in design (6), regional priority (4) and materials and resources (14) (LEED, 2009a).

(2) Existing Buildings: Operations and Maintenance v2009. The criteria and scores (included in parentheses) available for each criterion are as follows: sustainable sites (26), water efficiency (14), energy and atmosphere (35), IEQ (15), innovation in design (6), regional priority (4) and materials and resources (10) (LEED, 2009b).

For both typologies, scores are accumulated using a base of 100 (innovation in design and regional priority are added separately), and rated according to a scale as shown in Table IV.

Table I. BREEAM criteria and subcriteria

Management: commissioning; construction site impacts; security
Health and well-being: daylight; occupant thermal comfort; acoustics; indoor air and water quality; lighting
Energy: CO2 emissions; low or zero carbon technologies; energy sub-metering; energy efficient building tools
Transport: public transport network connectivity; pedestrian and cyclist facilities; access to amenities; travel plans and information
Water: water consumption; leak detection; water re-use and recycling
Materials: embodied life cycle impact of materials; materials re-use; responsible sourcing; designing for robustness
Waste: construction waste; recycled aggregates; recycling facilities
Pollution: refrigerant use and leakage; flood risk; NOx emissions; watercourse pollution; external light and noise pollution
Land use and ecology: site selection; protection of ecological features; mitigation/enhancement of ecological value
Innovation: new design and construction methods not formally recognised

Source: BREEAM (2012)

Table II. BREEAM awards

Award: Score (%)
Unclassified: <30
Pass: ≥30
Good: ≥45
Very Good: ≥55
Excellent: ≥70
Outstanding: ≥85

Source: BREEAM (2012)


There are embedded prerequisites within each criterion (except for sustainable sites in Existing Buildings: Operations and Maintenance v2009) which must be met before a score is awarded. LEED for Neighbourhood Development (2009) is the latest USGBC reporting tool, incorporating site selection, design and construction elements (Hurley and Horne, 2006) and taking into account both landscape and regional contexts (Sharifi and Murayama, 2013).
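As a minimal sketch of this accumulation, the snippet below caps hypothetical achieved points at the New Construction v2009 category maxima quoted above, adds the innovation and regional priority bonuses, and maps the total to the Table IV awards; the project figures are invented for illustration.

```python
# Sketch of LEED-style point accumulation and award mapping (Table IV).
# Category caps follow the New Construction and Major Renovations v2009 figures
# quoted in the text; the achieved points are hypothetical.
CAPS = {"sustainable_sites": 26, "water_efficiency": 10, "energy_atmosphere": 35,
        "materials_resources": 14, "ieq": 15,                 # 100-point base
        "innovation_in_design": 6, "regional_priority": 4}    # added separately

def leed_award(points):
    if points >= 80:
        return "Platinum"
    if points >= 60:
        return "Gold"
    if points >= 50:
        return "Silver"
    if points >= 40:
        return "Certified"
    return "Not certified"

achieved = {"sustainable_sites": 18, "water_efficiency": 6, "energy_atmosphere": 22,
            "materials_resources": 7, "ieq": 9, "innovation_in_design": 2,
            "regional_priority": 1}
total = sum(min(points, CAPS[cat]) for cat, points in achieved.items())
print(total, leed_award(total))  # 65 Gold
```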

Green Star

Green Star, developed by the Green Building Council of Australia (GBCA), is a comprehensive voluntary building SRT. It was initially developed to accommodate the need for buildings operating in hot climatic areas (Roderick et al., 2009; Tronchin and Fabbri, 2008). It incorporates ideas from other tools, such as BREEAM and LEED, and other environmental criteria specific to the Australian environment (Lockwood, 2006). Green Star covers the nine criteria shown in Table V, where scores are awarded if targets are met.

A single, overall score is calculated through a series of steps. First, a score is determined for each criterion. Then, the given weightings are applied. All the weighted criteria scores are summed. Innovation points can be obtained by either engaging with innovative strategies and technologies or exceeding the Green Star benchmark. Innovation points are added to the weighted criteria scores. This gives an overall score, which is then matched to an award (see Table VI). The GBCA only certifies buildings with 4, 5 or 6 Green Stars.
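A small sketch of the stepwise calculation just described is given below. The award bands follow Table VI, but the criterion weightings and raw scores are hypothetical placeholders, since the paper does not tabulate Green Star's weightings.

```python
# Sketch of the Green Star scoring steps described in the text. Award bands follow
# Table VI; the criterion weightings and per-criterion scores are hypothetical.
def green_star_overall(criterion_pct, weights_pct, innovation_points):
    """Weighted sum of criterion percentages, plus innovation points added on top."""
    weighted = sum(criterion_pct[c] * weights_pct[c] / 100.0 for c in weights_pct)
    return weighted + innovation_points

def green_star_award(score):
    bands = [(75, "6 star"), (60, "5 star"), (45, "4 star"),
             (30, "3 star"), (20, "2 star"), (10, "1 star")]
    for threshold, label in bands:
        if score >= threshold:
            return label
    return "below 1 star"

weights = {"management": 10, "energy": 25, "water": 12, "land_use_ecology": 8,
           "ieq": 20, "transport": 10, "materials": 10, "emissions": 5}  # hypothetical
scores = {c: 55.0 for c in weights}  # 55% of available points in each criterion
overall = green_star_overall(scores, weights, innovation_points=3)
print(overall, green_star_award(overall))  # 58.0 4 star (GBCA certifies 4-6 stars only)
```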

Table III. BREEAM's criteria weightings

Criterion: new builds, extensions and major refurbishments (%) / building fit-out only, where applicable to scheme (%)
Management: 12 / 13
Health and well-being: 15 / 17
Energy: 19 / 21
Transport: 8 / 9
Water: 6 / 7
Materials: 12.5 / 14
Waste: 7.5 / 8
Land use and ecology: 10 / na
Pollution: 10 / 11
Innovation: 10 / 10

Source: BREEAM (2012)

Table IV. LEED awards

Award: Score
Certified: 40-49
Silver: 50-59
Gold: 60-79
Platinum: 80 and above

Source: LEED (2012)


Comprehensive Assessment System for Building Environmental Efficiency (CASBEE)

CASBEE was introduced by the Japan Sustainable Building Consortium in 2002 to promote the concept of sustainable buildings (Sev, 2011). CASBEE defines sustainable buildings as those that are designed to save energy and resources, recycle materials, reduce emissions of toxic substances to the environment, harmonise with the local climate, traditions and culture, and lastly to sustain and improve the quality of human life while maintaining the capacity of the ecosystem at both local and global levels (CASBEE, 2002).

In CASBEE, buildings are assessed according to life cycles. Environmental load (L) and building performance (Q) are distinguished, where scoring uses progressive levels 1 to 5, leading to the Building Environmental Efficiency index (BEE), defined as BEE = Q/L.

Table V. Green Star criteria

Management: scores address the adoption of sustainable development principles from project conception through design, construction, commissioning, tuning and operation
Energy: scores target reduction of greenhouse emissions from building operation by addressing energy demand reduction, use efficiency, and generation from alternative sources
Water: scores address reduction of potable water through efficient design of building services, water reuse and substitution with other water sources (specifically rainwater)
Land use and ecology: scores address a project's impact on its immediate ecosystem, by discouraging degradation and encouraging restoration of flora and fauna
Indoor Environment Quality (IEQ): scores target environmental impact along with occupant wellbeing and performance by addressing heating, ventilation and air conditioning (HVAC), lighting, occupant comfort and pollutants
Transport: scores reward the reduction of demand for individual cars by both discouraging car commuting and encouraging use of alternative transportation
Materials: scores target resource consumption through material selection, reuse initiatives and efficient management practices
Emissions: scores address sources of pollution from buildings and building services to the atmosphere, watercourse and local ecosystems
Innovation: Green Star seeks to reward marketplace innovation that fosters the industry's transition to sustainable building

Source: GBCA (2012a)

Table VI. Green Star awards

1 star: 10-19 (minimum practice)
2 star: 20-29 (average practice)
3 star: 30-44 (good practice)
4 star: 45-59 (best practice)
5 star: 60-74 (Australian excellence)
6 star: ≥75 (world leadership)

Sources: GBCA (2012a), Roderick et al. (2009), Mitchell (2010)


BEE represents the overall environmental performance of a building; Q incorporates quality (consisting of combined scores of various subcriteria, such as indoor environment, quality of services and outdoor environment on site); and L incorporates energy, resources and materials, and off-site environment (CASBEE, 2002).

The BEE graph (Figure 1) shows that the higher the Q value and the lower the L value, the more sustainable the building is. Ordinary buildings are represented by a gradient of BEE = 1.0. Depending on which region of Figure 1 a BEE value falls into, a different CASBEE award is available: C (poor), B− (fairly poor), B+ (good), A (very good) and S (excellent). In 2008, CASBEE included the consideration of global warming by estimating life cycle CO2 emissions as part of its off-site environment subcriterion (Sharifi and Murayama, 2013).
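The BEE calculation itself is simple enough to sketch. The example below computes BEE = Q/L for hypothetical scores and maps the value to the Figure 1 regions using the gradient boundaries alone; the official tool additionally bounds the S region by the Q score, which is omitted here for brevity.

```python
# Sketch of the CASBEE Building Environmental Efficiency calculation, BEE = Q / L,
# with an approximate mapping to the award regions of Figure 1. Banding here uses
# the gradient boundaries (3.0, 1.5, 1.0, 0.5) only; Q and L values are hypothetical.
def bee(q, load):
    """Quality score Q divided by environmental load L."""
    return q / load

def casbee_award(value):
    if value >= 3.0:
        return "S (excellent)"
    if value >= 1.5:
        return "A (very good)"
    if value >= 1.0:
        return "B+ (good)"
    if value >= 0.5:
        return "B- (fairly poor)"
    return "C (poor)"

q, load = 72.0, 40.0  # hypothetical Q and L scores
value = bee(q, load)
print(f"BEE = {value:.2f} -> {casbee_award(value)}")  # BEE = 1.80 -> A (very good)
```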

Hong Kong Building Environmental Assessment Method (HK-BEAM)

HK-BEAM was introduced in 1996 by the Hong Kong BEAM Society, a not-for-profit organisation consisting of professionals within the building industry (Chan and Chu, 2010). It began primarily as a voluntary environmental reporting tool for high-rise buildings, and subsequently branched out into two main typologies covering all local buildings: HK-BEAM Version 4/04 for new buildings (for planning, design, construction and commissioning, with design and specifications for deconstruction) and HK-BEAM Version 5/04 for existing buildings (for management, operation and maintenance) (Lee et al., 2007; Chan and Chu, 2010). From January 2013, HK-BEAM Plus v1.2 became mandatory.

HK-BEAM is comparable to other SRTs. The criteria (scores + bonus points) are as follows: site aspects (18 + 1); material aspects (11 + 2); energy aspects (39 + 2); water aspects (7 + 2); IEQ (30 + 3) and innovative techniques (1 + 5) (HKGBC and BEAM Society, 2012). Suggested weightings for these criteria are shown in Table VII. These weightings differ depending on whether it is an existing or new building.

[Figure 1. The BEE graph (Source: CASBEE (2002)). Axes: Q (Quality) vs L (Load), each 0-100; gradient lines BEE = 3.0, 1.5, 1.0 and 0.5 divide the plane into the award regions S, A, B+, B− and C.]


Similar to BREEAM, the determination of an overall assessment grade is by the percentage of applicable scores obtained under each criterion, including its weighting factor. SA, EU and IEQ are perceived to be important and therefore a minimum percentage must be obtained in these criteria to qualify for an overall grade (see Table VIII). The overall grade (per cent) achieved is mapped to an award (Table VIII).
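The grading logic can be sketched as a weighted percentage (Table VII, new-building weightings) checked against the per-criterion minimums of Table VIII; the project percentages below are hypothetical.

```python
# Sketch of HK-BEAM-style grading: a weighted overall percentage (Table VII, new
# buildings) checked against the minimum criterion percentages of Table VIII.
# The project's per-criterion percentages are hypothetical.
WEIGHTS_NEW = {"SA": 25, "MA": 8, "EU": 35, "WU": 12, "IEQ": 20}
GRADES = [  # (label, minimum overall %, minimum SA %, minimum EU %, minimum IEQ %)
    ("Platinum", 75, 70, 70, 70),
    ("Gold", 65, 60, 60, 60),
    ("Silver", 55, 50, 50, 50),
    ("Bronze", 40, 40, 40, 40),
]

def hkbeam_grade(pct):
    overall = sum(WEIGHTS_NEW[c] / 100.0 * pct[c] for c in WEIGHTS_NEW)
    for label, o_min, sa_min, eu_min, ieq_min in GRADES:
        if (overall >= o_min and pct["SA"] >= sa_min
                and pct["EU"] >= eu_min and pct["IEQ"] >= ieq_min):
            return round(overall, 1), label
    return round(overall, 1), "Unclassified"

project = {"SA": 62, "MA": 80, "EU": 58, "WU": 70, "IEQ": 66}  # % of applicable credits
print(hkbeam_grade(project))  # (63.8, 'Silver')
```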

Nationwide House Energy Rating Scheme (NatHERS)

NatHERS was initiated in 1993, but only implemented under the Energy Smart Home Policy in 1998, as part of the Australian Government's initiative to improve the thermal performance of buildings (Kordjamshidi et al., 2006). The heat energy gains and losses associated with the design of a building are calculated, and the amount of artificial heating and cooling required to maintain a comfortable temperature is determined. The results are used to establish a star award for the building, from 0 (poor performance) through to 10 (requiring virtually no energy to be used for heating or cooling). Awards are usually given before a residential building is occupied (Department of Climate Change and Energy Efficiency, 2010).

Building Sustainability Index (BASIX)

BASIX is a web-based self-assessment tool (Vijayan and Kumar, 2005) which analyses the design of a proposed building (single dwelling, multi-dwelling or alterations and additions) and benchmarks it against anticipated water consumption and greenhouse gas emission targets (New South Wales (NSW) Government Planning and Infrastructure, 2013). These targets are derived based on the average of similar developments. BASIX can be used across all residential building types and is part of the development application process (NSW Government Planning and Infrastructure, 2013).

Table VIII. HK-BEAM awards

Award: Overall (%) / SA (%) / EU (%) / IEQ (%)
Platinum: 75 / 70 / 70 / 70
Gold: 65 / 60 / 60 / 60
Silver: 55 / 50 / 50 / 50
Bronze: 40 / 40 / 40 / 40

Source: HKGBC and BEAM Society (2012)

Table VII. HK-BEAM criteria weightings

Criterion: weighting (%), existing buildings / weighting (%), new buildings
Site aspects (SA): 18 / 25
Material aspects (MA): 12 / 8
Energy use (EU): 30 / 35
Water use (WU): 15 / 12
Indoor environmental quality (IEQ): 25 / 20

Source: HKGBC and BEAM Society (2012)


Building actual performance

While SRTs generally focus on the potential environmental impact of design (design performance), some tools inform only on actual building performance. Typically, assessments of actual building performance are conducted on an annual basis (NABERS, 2011). Two examples are the National Australian Built Environmental Ratings Scheme (NABERS) and Energy STAR, which are reviewed here.

NABERS

NABERS, launched in 1998 in Australia, informs on the actual environmental performance of buildings, tenancies and homes. Criteria that are assessed include water usage, energy usage, waste and indoor environment. There are four types of reporting tools available, for: offices, shopping centres, hotels and homes (NABERS, 2011). The awards range from 1 (worst) to 5 (best) to reflect the point-in-time annual performance of buildings (with reference to data from 12 months of occupation/use). For office buildings, there is a subdivision into tenancy, base building and whole building, as shown in Table IX. The tenancy subdivision covers only tenanted space and is applicable to tenants occupying either a leased or privately owned space within a commercial building. For building owners and property managers, two subdivisions are applicable: base building, which focuses on central building services and common areas; and whole building, which covers tenanted space, central building services and common areas (NABERS, 2011).

Mitchell (2010) describes the assessment process in relation to NABERS' energy criterion. The first step involves converting energy use into greenhouse gas equivalents. This is done with reference to the emissions intensity of the standard energy mix across the relevant state/territory of Australia. For example, if the building is located in Victoria, its emissions relating to electricity are calculated based on Victoria's electricity mix, which comes mainly from high-emitting brown-coal-fired power stations. The calculated raw emissions are then normalised by taking into consideration the hours of use of the premises, the occupant and equipment density, and the local climate. These normalised values are then divided by the area assessed, giving emissions per square metre. Finally, this is compared against the benchmark for the relevant state/territory and type of building, to establish a suitable award. An example of a NABERS award for a base building using normalised emissions per square metre is shown in Table X. The normalisation and benchmarking process is reviewed periodically. Median performance, described in terms of half stars, is allowed.
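The steps Mitchell (2010) describes can be sketched as below. The Table X figures are treated as upper bounds for each star band, which is an interpretation; the emissions factor, normalisation factor and building data are hypothetical.

```python
# Sketch of the NABERS energy steps described by Mitchell (2010): convert energy use
# to greenhouse gas emissions, normalise, divide by floor area and compare against
# benchmarks. Thresholds follow Table X (treated here as per-star upper bounds);
# the emissions factor, normalisation factor and building data are hypothetical.
BENCHMARKS = [(71, "5 star"), (103, "4 star"), (135, "3 star"),
              (167, "2 star"), (199, "1 star")]

def nabers_energy_award(electricity_kwh, emissions_kg_per_kwh,
                        normalisation_factor, floor_area_m2):
    raw_kg = electricity_kwh * emissions_kg_per_kwh   # step 1: raw emissions
    normalised_kg = raw_kg * normalisation_factor     # step 2: hours, occupancy, climate
    intensity = normalised_kg / floor_area_m2         # step 3: kg CO2 per square metre
    for limit, label in BENCHMARKS:                   # step 4: compare against benchmark
        if intensity <= limit:
            return round(intensity, 1), label
    return round(intensity, 1), "below 1 star"

# Hypothetical base building: 1.2 GWh/year on a brown-coal-heavy grid, 10,000 m2.
print(nabers_energy_award(1_200_000, 1.2, 0.9, 10_000))  # (129.6, '3 star')
```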

Confusion may exist between Green Star and NABERS, since the awards are quite similar (Mitchell, 2010). The differences between Green Star and NABERS are summarised in Table XI.

Table IX. NABERS office buildings

Tenancy: tenanted space
Base building: central building services and common areas
Whole building: a combination of the above

Source: NABERS (2011)


Energy STAR

Energy STAR derives from the US Environmental Protection Agency and the US Department of Energy. Essentially, it is a tool used to track and benchmark a building's energy performance. An energy performance scale is developed based on (Energy STAR, 2011):

. Identification of the best available survey data representative of buildings nationwide, differentiated by size, energy use and operation. One such example is the US Department of Energy's Commercial Building Energy Consumption Survey (CBECS), conducted once every four years (see EIA, 2012).

. Assessment of the characteristics of the buildings surveyed, via a statistical analysis.

. From the results of the statistical analysis, development of a model to predict the energy use of a certain type of building, accounting for its location and type of operation.

. For each surveyed building, calculation of an energy efficiency ratio (actual to predicted energy use).

. Use of the energy efficiency ratio to create a distribution of energy performance for the population of buildings. This forms the Energy STAR performance scale from 1 to 100, where a score of 50 means that the building is at an average level (a minimal sketch of this ranking step is given below).
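The last two steps can be illustrated as follows; the reference ratios and the simple ranking rule used here are assumptions for illustration only, whereas the actual scale is derived from the survey-based statistical model described above.

```python
# Illustration of the final Energy STAR steps: an energy efficiency ratio (actual /
# predicted use) is ranked against the ratios of a reference population and expressed
# on a 1-100 scale where 50 is average. The reference ratios and the simple ranking
# rule used here are assumptions for illustration.
def efficiency_ratio(actual_kwh, predicted_kwh):
    return actual_kwh / predicted_kwh

def energy_star_score(ratio, population_ratios):
    """Score = share of the reference population performing worse (higher ratio)."""
    worse = sum(1 for r in population_ratios if r > ratio)
    score = round(100 * worse / len(population_ratios))
    return max(1, min(100, score))

population = [0.6, 0.8, 0.9, 1.0, 1.0, 1.1, 1.2, 1.3, 1.5, 1.8]  # hypothetical survey ratios
mine = efficiency_ratio(actual_kwh=950_000, predicted_kwh=1_000_000)  # ratio 0.95
print(energy_star_score(mine, population))  # 70: better than 7 of the 10 reference buildings
```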

Table XI. Differences between Green Star and NABERS

Environmental impact assessment: Green Star – potential; NABERS – actual
Scope: Green Star – design; NABERS – performance
Phase: Green Star – design phase; NABERS – in operation/use
Owner: Green Star – GBCA; NABERS – Department of Environment, Climate Change and Water NSW
Coverage: Green Star – office, retail, healthcare, education, industrial; NABERS – office, homes, hotel, shopping centres
Certifiable awards: Green Star – 4, 5 or 6 stars; NABERS – 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5 or 5 stars
Legislation: Green Star – accreditation is on a voluntary basis; NABERS – energy rating must be disclosed when leasing or selling

Table X. NABERS awards (base building example)

Award: emissions (kg CO2/m2) – comments
1 star: 199 – Poor (poor energy management)
2 star: 167 – Average building performance
3 star: 135 – Very good (current market best practice)
4 star: 103 – Excellent (strong performance)
5 star: 71 – Exceptional (best building performance)

Source: Mitchell (2010)


3. SRTs for infrastructure

A review of some of the major tools applicable to infrastructure is given. This is followed by a critique.

A Sustainability Poverty Infrastructure Routine for Evaluation (ASPIRE)

ASPIRE was developed by ARUP and Engineers Against Poverty (EAP) for a range of stakeholders committed to the development of sustainable pro-poor infrastructure (Gryc, 2012); it informs on the poverty reduction performance of infrastructure projects. It considers four major criteria – the environment, society, economics and institutions – with breakdowns of four to six subcriteria under each criterion, as depicted in Figure 2.

A primary environmental consideration is how a development reduces impact on natural resources such as air, land, water, biodiversity and materials. Infrastructure is assessed in terms of how well it meets society's needs equitably and how it reduces poverty via public health, culture and accessibility to services. Project viability, macroeconomic effects, livelihood opportunity and the creation of an equitable economy are considered. The criterion of institutions encompasses four subcriteria, namely policy, governance, skills and reporting; these represent the capacity and effectiveness of the institutional environment in supporting the delivery of the infrastructure (Gryc, 2012). In the assessment, the user goes through a series of questions and is provided with illustrations of best- to worst-case scenarios to help in the allocation of non-weighted scores. The aggregated scores for each criterion are then represented using a traffic light scheme, where green indicates strength and red indicates weakness.

Australian Green Infrastructure Council (AGIC)

AGIC officially released its Infrastructure Sustainability Rating Tool v1.0 in December 2012. Compared to the majority of SRTs, AGIC adopts a much broader range of criteria, including: management, procurement and purchasing, climate change adaptation, energy, water, materials, discharges to air, land and water, land, waste, ecology, health, heritage, stakeholder participation, urban design and innovation. There are three types of reporting under AGIC, summarised in Table XII, and the level of sustainability of a project is scored on a 100-point scale (Table XIII).

[Figure 2. ASPIRE's framework (Source: Gryc (2012)). Environment: air, land, water, biodiversity, energy, materials. Society: stakeholders, culture, population, services, health, vulnerability. Economics: viability, macro, livelihoods, equity. Institutions: reporting, policies, skills, structures.]


Non-formal SRTs

Publications to date on infrastructure reporting have focused on the development of sustainability criteria. For example, Fernandez-Sanchez and Rodriguez-Lopez (2010) recommend more than 80 criteria for infrastructure projects in Spain. From their study, they find that 11 of the criteria voted by stakeholders into the top 30 (based on the analytic hierarchy process) largely involve economic and social issues. These criteria (with relative importance as a percentage in parentheses) include: health and safety (3.85 per cent), necessity of work/urgency of work (3.77 per cent), life cycle cost (3.72 per cent), economical cost/economical benefit (3.24 per cent), project declaration of general interest (2.96 per cent), public participation and control on the project (2.59 per cent), barrier effect of the project (2.38 per cent), project governance and strategic management (2.26 per cent), accessibility for human biodiversity (2.26 per cent), respect for local customs (2.05 per cent) and increase in economic value (1.42 per cent). Shen et al. (2007) suggest a sustainability project performance checklist across a project's life cycle: inception, design, construction, operation and demolition. Ugwu and Haupt (2007) identify key performance indicators (KPIs) and assessment methods for infrastructure sustainability from a South African construction industry perspective. Sahely et al. (2005) propose sustainability criteria for urban infrastructure, focusing on key interactions and feedback mechanisms between infrastructure and wider environmental, social and economic concerns. Morrissey et al. (2012) critically appraise project impacts from an ecological limits perspective.

4. Life cycle tools for buildings/infrastructure

Life cycle tools include LCA, input-output (IO) analysis, material flow accounting (MFA), the Pavement Life Cycle Assessment Tool for Environmental and Economic Effects (PaLATE), and Life Cycle Analyzer.

Table XII. AGIC's three reporting types

Design: can be applied at the end of planning and design; awarded based on the inclusion of design elements and construction requirements
As-built: can be applied at the end of construction; includes sustainability performance measured during construction and built into the infrastructure asset
Operation: applied during operation; given after 24 months of operation, based on the measured green performance of the operating infrastructure

Source: AGIC (2012)

Table XIII. AGIC's awards

Not eligible: <25
Commended: 25 to <50
Excellent: 50 to <75
Leading: 75-100

Source: AGIC (2012)


LCA

In LCA, the environmental impacts of activities and raw materials over a building life cycle (manufacture, transportation, deconstruction and recycling) are assessed. The four phases in LCA are: goal and scope, life cycle inventory, life cycle impact assessment and improvement (Hsu, 2010).

In setting a goal and scope, a person may wish to investigate, for example, which structural design has a lesser environmental impact, or how a structure can be further improved to lessen its impact on the environment. Life cycle inventory involves collecting inflow and outflow data and modelling. Life cycle impact assessment involves selecting, classifying and characterising the impact on the environment. The final phase of improvement involves revisiting the earlier phases. This is necessary in order to identify the most important aspects of impact assessment, check the validity of results and redo aspects of the LCA that need more work (Hsu, 2010, p. 15).

Mroueh et al. (2001) use LCA for road and earth construction. Mithraratne and Vale (2004) take into account embodied and operating energy requirements as well as life cycle costs over the useful life of a house. Many countries have also developed specific LCA tools, for example, BEES in the USA, BOUSTEAD and ENVEST in England, Ecoinvent in Switzerland and GaBi in Germany (see Appendix Table AI).

IO analysis

IO analysis, as used in macroeconomic studies of monetary flows, has been adapted to environmental impact analysis (Piluso et al., 2008). IO tables (Xu et al., 2010) have rows representing outputs, and columns representing inputs. From an IO table, a matrix of IO coefficients can be derived. These IO coefficients (also known as technical coefficients) represent the amount of input required to produce one unit of output (see Xu et al., 2010 for a mathematical formulation). For sustainability analysis, typically a simplified IO matrix is adopted (Piluso et al., 2008; Xu et al., 2010), and the technical coefficients could potentially help answer questions such as how much CO2 has been emitted in the production of one tonne of steel (Born, 1996). Norman et al. (2006) use IO and LCA combined to estimate the energy use and greenhouse gas emissions associated with the manufacture of construction materials for infrastructure, buildings and transportation.
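A small numerical sketch of the coefficient derivation follows: each technical coefficient a_ij is the intermediate flow z_ij divided by the total output x_j of the consuming sector. The two-sector flows, outputs and sector names are hypothetical.

```python
# Numerical sketch of deriving technical (IO) coefficients from an input-output
# table: a_ij = z_ij / x_j, the input from sector i required per unit of output of
# sector j. The two-sector flows and outputs below are hypothetical.
import numpy as np

# Intermediate flows z_ij (rows = supplying sector, columns = consuming sector)
Z = np.array([[50.0, 120.0],    # "energy" supplied to (energy, steel)
              [30.0,  40.0]])   # "steel" supplied to (energy, steel)
x = np.array([400.0, 300.0])    # total output of each sector

A = Z / x  # divides each column j by x_j via broadcasting
print(A)
# [[0.125      0.4       ]
#  [0.075      0.13333333]]
# Reading the second column: one unit of steel output requires 0.4 units of energy
# input and about 0.13 units of steel input. Attaching an emissions intensity to each
# input is what allows questions such as the CO2 embodied in one tonne of steel.
```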

MFA

MFA is used to characterise the flow of materials, products and substances in a defined system (Huang and Tsu, 2003; Kahhat and Williams, 2012). It applies a conservation law: total inputs must equal total outputs plus any net accumulation (EUROSTAT, 2001). EUROSTAT (2001, pp. 20-24) suggests making a distinction between material flows: direct vs indirect; used vs unused; domestic vs rest of the world.
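The conservation law is straightforward to express; the sketch below balances hypothetical annual flows for a defined system and recovers the net accumulation as the residual.

```python
# Tiny sketch of the MFA conservation law for a defined system: total inputs equal
# total outputs plus any net accumulation (stock change). All flows, in tonnes per
# year, are hypothetical.
inputs = {"domestic_extraction": 120.0, "imports": 40.0}
outputs = {"exports": 25.0, "emissions_and_wastes": 95.0}

net_accumulation = sum(inputs.values()) - sum(outputs.values())
print(net_accumulation)  # 40.0 tonnes added to stock this year

# Balance check: inputs == outputs + net accumulation
assert abs(sum(inputs.values()) - (sum(outputs.values()) + net_accumulation)) < 1e-9
```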

PaLATE

PaLATE is a spreadsheet-based tool used in the assessment of environmental and economic impacts for pavements and roads. The tool depends on knowing the design, roadway cost, construction materials and transportation information (both mode and distances), as well as any road maintenance involved. Among the environmental effects covered under PaLATE are energy consumption, CO2 emissions, NOx emissions, PM10 emissions, CO emissions and leachate (Horvath et al., 2007).


Life Cycle Analyzer

Life Cycle Analyzer is software developed specifically to analyse the entire life cycle of concrete in all types of production, be it site-poured or used in prefabrication. It allows the calculation of both environmental and cost impacts of different concrete mix designs. The output from this software can feed into major building SRTs such as BREEAM and LEED (BASF, 2012).

5. Benefits of engaging with SRTs

Several studies have highlighted the positive impact of engaging with building/infrastructure SRTs. Lee and Guerin (2010) find that LEED-certified buildings yield positive benefits in relation to employee job performance. Miller et al. (2008) address the question of the benefits of investing in energy savings and environmental design. In their study, they use US-based Energy STAR office buildings as one set of green buildings, together with LEED-certified buildings as an alternative, with large samples of non-Energy STAR and non-LEED-rated buildings included in the analysis. They conclude that going green does pay off, with significant rental rate differentials. Similar conclusions are found across other studies such as Fuerst and McAllister (2008), cited in Miller et al. (2008), who compare rentals for LEED and non-LEED buildings, and Eichholtz et al. (2009), who compare rentals for green and non-green buildings. Ries et al. (2006), through a case study, explore the benefits of moving into a new LEED-certified facility. They find that manufacturing productivity increased by 25 per cent, while energy usage decreased by 30 per cent on a square foot basis. Heerwagen (2000) suggests that green buildings are often linked to higher productivity, better health and well-being, and improvements in organisational performance, such as lower staff turnover.

Wiley et al. (2010) conducted an empirical study to test the relationship between energy-efficient design and leasing markets for commercial real estate, and find that green buildings have superior rents and occupancy rates. Fuerst (2009) finds no reversal of this trend due to the economic downturn for either LEED- or Energy STAR-certified buildings, and also argues that growth for this market segment is likely. Research focusing on the financial performance of green office buildings in Australia finds that a 5-star NABERS energy-rated building delivers a higher premium value compared to a 3- to 4.5-star NABERS energy-rated building. Newell et al. (2011) report that Green Star-certified buildings show a green premium value. Hes (2007) justifies the effectiveness of SRTs using nine measures: reduction in environmental impact, positive social impact, positive effect on occupant comfort, positive effect on productivity, cost savings, ease of use, rating and modelling accuracy, ability to be dynamic and support continuous improvement, and ability to support innovation in design.

6. Critique of building/infrastructure SRTs

There has been some recently published critique of mainstream SRTs such as BREEAM, LEED, CASBEE, HK-BEAM and Green Star (Nguyen and Altan, 2011). Similarities exist between them, as shown in Table XIV. It is observed that all five tools provide some form of guidance on the design and review phases of projects, offer training and certification processes to ensure that assessors are up to date with these tools, use both prescriptive and performance-based criteria in the evaluation process, adopt different schemes (most notably for new construction and existing buildings), and provide case studies and manuals. Verification by third parties exists for all tools.


Table XIV. Similarities across reporting tools

Stages of project covered: BREEAM – design, review and construction; LEED – design, construction and operation; CASBEE – preliminary design, design and post-design; Green Star – design review and as-built; HK-BEAM – design, review and construction
Offer training and certification: all five tools
Use of prescriptive and performance-based criteria: all five tools
Existence of different schemes (i.e. new construction, existing building, etc.): all five tools
Provision of case studies and manuals: all five tools
Verification (by third party): all five tools
End product presentation: BREEAM – percentage of credits achieved (%); LEED – total score; CASBEE – graphical representation (BEE graph); Green Star – total score; HK-BEAM – percentage of credits achieved (%)

Sources: Nguyen and Altan (2011), BREEAM (2012), LEED (2012), CASBEE (2002), GBCA (2012a), HKGBC and BEAM Society (2012)


In terms of the final presentation, BREEAM and HK-BEAM reporting is in the form of the percentage of credits achieved, CASBEE's reporting is done in the form of a BEE graph as illustrated in Figure 1, while the reporting in LEED and Green Star is done in terms of a total score.

Nguyen and Altan (2011) compare several attributes of these five building SRTs by scoring them based on: popularity and influence, availability, user-friendliness, applicability, data collecting process, development and quality of results presentation. The outcomes of their assessment (Table XV) show that BREEAM and LEED may be better in terms of applicability and popularity. CASBEE, on the other hand, receives the highest methodology score, possibly due to its more rigorous nature.

The other strengths of these mainstream SRTs are summarised as follows.

BREEAM:

. encourages energy reduction leading to zero net CO2 emissions;
. sets a minimum standard for sub-metering energy use;
. demonstrates more rigour in terms of public transport accessibility (taking into account routes, hours of service and frequency level); and
. can be independently assessed (Adegbile, 2013).

LEED:

. criteria are publicly reviewed by more than 15,000 members and are arguably more transparent compared to BREEAM (LEED, 2012);
. credits are allocated for the heat island effect (trees as shade, or the specification of high solar reflectance materials) (LEED, 2009a);
. credit is given for verification of thermal comfort (post-occupancy); and
. allows for a credit interpretation request (CIR) in the event that developers have an alternative means of meeting a credit point.

CASBEE:

. Highly rigorous and versatile methodology (Adegbile, 2013).
. Graphical representation of assessments which can be interpreted easily. This is not present in other tools (CASBEE, 2002).

Table XV. Overall scores of reporting tools

Attribute: BREEAM / LEED / CASBEE / Green Star / HK-BEAM
Popularity (/10): 10 / 10 / 6 / 5 / 5
Availability (/10): 7 / 7 / 7 / 8 / 8
Methodology (/15): 11 / 10 / 13 / 9 / 11
Applicability (/20): 13 / 13 / 11.5 / 10 / 9
Data-collecting process (/10): 7 / 7 / 6 / 9 / 8
Accuracy (/10): 8 / 7 / 9 / 5 / 5
User-friendliness (/10): 8 / 10 / 6 / 8 / 8
Development (/10): 8 / 8 / 7 / 8 / 8
Presentation (/5): 3 / 3 / 4 / 3 / 4
Final score (/100): 75 / 75 / 69.5 / 65 / 66

Source: Nguyen and Altan (2011)


HK-BEAM:

. Mandatory adoption of BEAM Plus Version 1.2 from 1 January 2013. This allows for standardisation and better comparability (HKGBC and BEAM Society, 2012).
. Allows for CIR in the event that developers have an alternative means of meeting a credit point.

Green Star:

. Designed specifically to cater for Australia's unique conditions. (Energy modelling is consistent with the NABERS tool.) (Green Building Council of Australia (GBCA), 2012a).
. Allows for a compliance interpretation request (CIR) in the event that developers have an alternative means of meeting a credit point.

These building SRTs have also been criticised for a lack of attention to life cycle perspectives. Bowyer et al. (2006) claim that there is no requirement for consideration of life cycle inventory data in LEED; Scheuer and Keoleian (2002) find LEED to be an unreliable sustainability assessment tool when looked at from a life cycle perspective. However, a majority of SRTs have now started, or are in the process of, incorporating life cycle thinking.

Other concerns regarding SRTs have been raised. Baird (2009) argues for the inclusion of user performance criteria, claiming that buildings which perform poorly from a user's point of view are unlikely to be sustainable. Chew and Das (2007, p. 10) highlight that one of the issues with SRTs is that scores are lost for credits that are beyond the scope of a project. For example, sustainable site development or provisions related to fuel-efficient vehicles are not feasible in the case of a commercial building on a tight downtown site with a well-established public transport system. Fard (2012), as well as Fenner and Ryce (2008), argue that point-hunting or green-washing may become an issue where building owners are only concerned about gaining the required points for certification without actually addressing pertinent issues relating to energy efficiency and resource preservation. Saunders (2008) notes that different standards are used in different SRTs, and this makes it difficult to do comparisons between tools. Based on a normalised set of conditions, Saunders (2008) claims that LEED (USA) uses a less rigorous, and to a certain extent lower, building code standard compared to Green Star (Australia) or BREEAM (UK).

An analysis of 14 SRTs for buildings/infrastructure, carried out by this paper's authors, is shown in Appendix Tables AII-AIV, and covers: criteria and subcriteria; the nature of the scoring used; and identification of the international standards embedded. Note that, for brevity, only the major tools are explained in detail in this paper; more information about the other listed tools is available on the respective web sites indicated in Appendix Table AIII. The findings are summarised here.

First, from Appendix Table AII, it is observed that SRTs have a strong environmental focus, where a majority of them have adopted subcriteria in areas such as energy, water, waste, land use and ecology, as well as materials. Researchers (Fenner and Ryce, 2008; Mateus and Braganca, 2011; Watson et al., 2013; Ding, 2008; Todd et al., 2001) have also highlighted this point across different SRTs. Many SRTs (CASBEE, EPRA, BCA Green Mark for Districts, Estidama Pearl Community, Green Globe, Sustainable Design Scorecard, DGNB-Seal, Protocol ITACA) do not explicitly


mention the incorporation of financial aspects in their assessment (Watson et al., 2013). Only ASPIRE is found to encompass the more balanced triple bottom line criteria of environmental, social and economic. Ding (2008, p. 456) argues for the inclusion of economic criteria, claiming that even though a project may be environmentally sound, it could be very expensive to develop. While it might be reasoned that economic concerns are not downplayed by SRTs because they have been taken into account under environmental impacts (e.g. cost savings that emanate from energy reduction – see LEED, 2009a), this does not appear to be the case for a majority of SRTs. CASBEE, for example, makes no explicit mention of cost issues in its assessment. As well, from Appendix Table AIV, it is clear that ISO 14001, concerning environmental management, is the most commonly embedded standard, appearing across SRTs such as BREEAM, Green Star, AGIC, BCA Green Mark for Districts, Estidama Pearl Community, Green Globe, HK-BEAM and DGNB Seal. Only one SRT out of the 14 analysed, namely HK-BEAM, has included OHSAS 18001, a standard in health and safety.

Second, from Appendix Table AIII, it can be observed that all SRTs analysed have adopted deterministic scoring, which does not account for variability or uncertainty in value judgments. Uncertainty in judgments needs to be accounted for because of the existence of subjective criteria and measurement scales. Some examples follow. In Green Star, the building users' guide criterion gives that 1 point is awarded for an easy-to-use guide that includes information relevant to users, occupants and tenants. However, different people will have different understandings of what constitutes 'easy to use', and they may not arrive at the same score. The conditional requirement set under the energy criterion states that before points can be awarded against greenhouse gas emissions, the project's predicted greenhouse gas emissions must not exceed 110 kg CO2/m2/annum, as determined according to energy modelling tools (referring to either the Australian Building Greenhouse Rating Validation Protocol or the Green Star energy calculator). Uncertainty lies in these energy modelling tools, and calculations may differ depending on how optimistic or pessimistic the inputs of energy or gas consumption are (see GBCA, 2012a). These calculations also depend on the phase of the project, whether it is at the pre-design or post-design phase. The environmental design initiative criterion in Green Star states that 1 point is awarded if an initiative in a project viably addresses a valid environmental concern outside the scope of the tool; opinions may differ as to what constitutes a valid environmental concern. LEED incorporates post-occupancy thermal comfort surveys which are based on the value judgement of users and are carried out in a deterministic manner (LEED, 2012). Insua and French (1991), Wolters and Mareschal (1995) and Hyde et al. (2004) demonstrate that uncertainty in input parameters needs to be incorporated into decision-making processes due to its influence on the ranking of alternatives. In summary, SRTs largely ignore uncertainty in value judgements and behavioural issues, which have the potential to affect a building's overall performance. Fenner and Ryce (2008) share a similar view, arguing that assessors' opinions tend to differ and therefore inconsistencies are unavoidable in sustainability assessments.
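To illustrate this argument, the sketch below replaces a single deterministic score per criterion with a plausible range per assessor judgement and propagates the ranges by Monte Carlo sampling; the weights, ranges and threshold are hypothetical, and this is an illustration of the critique rather than a feature of any existing SRT.

```python
# Illustration of the uncertainty argument: each criterion score is treated as a
# range of plausible assessor judgements rather than a single number, and the
# overall weighted score is obtained by Monte Carlo sampling. Weights, ranges and
# the award threshold are hypothetical; no existing SRT works this way.
import random

random.seed(1)
weights = {"energy": 0.4, "water": 0.2, "ieq": 0.4}
ranges = {"energy": (50, 70), "water": (60, 80), "ieq": (40, 75)}  # plausible % per criterion

def sample_overall():
    return sum(weights[c] * random.uniform(*ranges[c]) for c in weights)

samples = sorted(sample_overall() for _ in range(10_000))
median = samples[len(samples) // 2]
low, high = samples[int(0.05 * len(samples))], samples[int(0.95 * len(samples))]
print(f"median {median:.1f}, 90% interval {low:.1f} to {high:.1f}")
# If the interval straddles an award threshold (say 60), assessor variability alone
# can change the rating a project receives.
```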

Third, not all SRTs consider criteria weighting. From Appendix Table AIII, seven out of 14 SRTs (ASPIRE, AGIC, BCA Green Mark for Districts, Estidama Pearl Community, Green Globe, Sustainable Design Scorecard and Protocol ITACA) do not explicitly mention criteria weighting. There is currently no consensus to guide the derivation of the weightings. CASBEE, for example, derives its weighting from a survey of building owners, operators and designers. BREEAM claims that its


environmental weightings are derived from a combination of consensus-based weightings and ranking by a panel of experts, but makes no mention of who constitutes the panel of experts. Green Star derives its environmental weightings from the following parties:

. The Organisation for Economic Co-operation and Development (OECD) Sustainable Buildings Project Report.

. Australian Greenhouse Office, Environment Australia, CSIRO, the Cooperative Research Centre for Construction and the Commonwealth Department of Environment and Heritage.

. A national survey conducted by the Green Building Council, which informed the development of the tool and assisted in assessing regional variation.

Understanding how the weightings are derived, and understanding the stakeholders involved, is important because this will have a bearing on the overall assessment of buildings/infrastructure. Future harmonisation efforts should consider this aspect in an attempt to develop an overarching standard for weightings or, at the very least, there needs to be specific mention or justification of the process involved in deriving these weightings.

Fourth, there are issues over benchmarks and relative comparisons, based on the scoring used by the tools. Sharifi and Murayama (2013) argue that the benchmarks set are non-scientific. Mitchell (2010) claims that Green Star has been criticised for being too idealistic, showing hallmarks of something developed by architects rather than people with practical experience in the commercial building industry. As an example, for Green Star under the IEQ criterion, 1 point is awarded when a daylight factor of 2 per cent is achieved over 90 per cent of the nominated area, and 1 point is awarded when high-frequency ballasts are installed in fluorescent luminaires over a minimum of 95 per cent of the nominated area. The question remains as to how these standards are set. There is no empirical evidence to justify that achieving a daylight factor of 2 per cent over 90 per cent of the nominated area is actually beneficial to stakeholders. Fenner and Ryce (2008) highlight that building SRTs rely heavily on designers to estimate the amount of energy and resources consumed by building occupiers.

Fifth, there is a lack of published reasoning behind the scores allocated for each criterion, further suggesting that users are merely applying these tools without really understanding what lies behind them. Berardi (2012) and Ding (2008) claim that the reasons behind the selection of criteria and the allocation of scores and weights are not explicit. SRTs are designed based on opinions, as opposed to a rigorous analysis of building/infrastructure effects on the environment, economy and society (Fard, 2012; Fowler and Rauch, 2006; Rumsey and McLellan, 2005 cited in Berardi, 2012; Udall and Schendler, 2005). For example, under the ecological value criterion for Green Star, 4 credit points are available when the site has no threatened or vulnerable species, no reduction of native vegetation cover, or if the ecological value of the site is not diminished. No further explanation is provided as to why these criteria are proposed or the reason behind the allocation of 4 credit points. It is questionable whether rewarding credit points for such criteria in this manner would lead to better environmental impact. It could occur that a project developer has coincidentally acquired land which happens to meet all of the above criteria, without applying additional effort. It would be helpful if more detailed rationalisation and explanation accompanied the criteria proposed. Justification as to why certain criteria are allocated more credit points


compared to others would be helpful to users (e.g. at present there is no explanation in Green Star as to why 4 credit points are available for the ecological value of site criterion vs 1 credit point for the maintenance of topsoil criterion). Having the reasoning behind score allocation would help with efforts to improve SRTs.

Lastly, the tools do not sufficiently account for possible project variety or allow sufficient differentiation between projects. Sharifi and Murayama (2013, p. 80) explain this limitation with reference to BREEAM: "[...] to maintain a minimum point the developer should demonstrate that 50-74% of the development site that was built on previously developed/brownfield land will be brought back to use [...] the problem is there is no justification for setting 50% as the minimum and awarding the same points for two different projects that have corresponding percentages that are in the same range but with significant differences". As a further example, in the LEED materials reused criterion, 1 point is awarded if 5 per cent of materials are reused out of the total value of the project, and 2 points if 10 per cent of materials are reused. This may not sufficiently account for possible project differences; 5 per cent of reused materials in a large project and 5 per cent of reused materials in a small project have very different environmental impacts. Under Green Star, the electric lighting levels criterion states that "[...] one point is awarded where it is demonstrated that the facility lighting design provides a maintenance illuminance of no more than 25%"; a project that has achieved a maintenance illuminance of 5 per cent and another project which has achieved 25 per cent obtain the same score for this criterion. There are many other examples of this blurred project differentiation, which can be observed across SRTs.
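To make the threshold-scoring issue concrete, the following minimal Python sketch (with entirely hypothetical project figures, chosen only for illustration and not drawn from any rated project) shows how step-wise point thresholds of the kind used in the LEED materials reused criterion award identical points to projects whose reuse performance, and whose absolute quantities of reused material, differ substantially.

    def leed_material_reuse_points(reuse_fraction):
        # Step-wise threshold scoring as described above for the LEED
        # 'materials reused' criterion: 1 point at 5% reuse (by value),
        # 2 points at 10% reuse.
        if reuse_fraction >= 0.10:
            return 2
        if reuse_fraction >= 0.05:
            return 1
        return 0

    # Hypothetical projects (reuse fraction, total materials value in dollars),
    # invented purely for illustration.
    projects = {
        "small office, 5.0% reuse": (0.050, 2_000_000),
        "small office, 9.9% reuse": (0.099, 2_000_000),
        "large tower, 5.0% reuse": (0.050, 80_000_000),
    }

    for name, (fraction, materials_value) in projects.items():
        points = leed_material_reuse_points(fraction)
        reused_value = fraction * materials_value
        print(f"{name}: {points} point(s), ${reused_value:,.0f} of materials reused")

    # The two small-office cases differ by nearly a factor of two in reuse
    # performance yet score identically, while the large tower reuses 40 times
    # more material by value than the 5.0% small office but earns the same
    # single point.

A scale defined in finer increments, or one that reflects project scale directly, would avoid collapsing such genuinely different outcomes into the same score.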

In contrast to the claimed benefits of engaging with SRTs, as noted in Section 5, a few researchers have challenged their usefulness. Newsham et al. (2009) find that 28-35 per cent of LEED-certified buildings actually use more energy than traditional buildings. Torcellini et al. (2006) find that actual energy usage in six high-performance buildings is higher than predicted. Williamson et al. (2001) find evidence that there is little correlation between a NatHERS award and actual heating and cooling energy consumption.

This paper's authors investigated whether there is any value in obtaining higher Green Star awards (buildings), as compared with the base award of 4 stars. Green Star only certifies buildings that achieve a 4, 5 or 6 star rating; buildings that do not meet at least the minimum 4-star requirement are not publicly disclosed. Two databases are compared: one from the Green Star website (Green Building Council of Australia (GBCA), 2012c), which rates buildings based on adherence to specific sustainable design specifications, and the second from the NABERS website (NABERS, 2012), which rates buildings by measuring energy and water efficiency. Table XVI shows the comparison, where data were available.

Table XVI. Comparison between Green Star and NABERS

Building occupant   Green Star award (design)   NABERS award (energy)   NABERS award (water)
A                   4 star                      5 star                  5.5 star
B                   5 star                      5 star                  3.5 star
C                   6 star                      4.5 star                na
D                   4 star                      4.5 star                na
E                   6 star                      3.5 star                na
F                   4 star                      3.5 star                4.5 star
G                   6 star                      na                      2 star

Note: As of 20 May 2012; na, not available

From Table XVI, it is seen that a better Green Star award does not necessarily mean better performance in terms of energy and water efficiency (using the NABERS award as a gauge of building performance). For example, although the building occupied by E has a higher Green Star award (6 star) than the building occupied by B (5 star), the NABERS award (energy) is lower for E (3.5 star) than for B (5 star). It could be that the afore-mentioned limitations in SRTs (namely, that they do not sufficiently account for project variety, have subjective benchmarks, etc.) explain this result. Naturally, this casts doubt over the reliability and effectiveness of current SRTs. It also raises the concern that building developers might select the SRT that results in the highest rating. Further investigation is warranted to validate the findings presented here.


    7. Conclusions and future research

This paper provides an overview of SRTs used for sustainable development of building/infrastructure projects. The paper gives a compilation of information about the criteria and scoring used in SRTs. Empirical research on the benefits of SRTs and a detailed critique of SRTs are reviewed. Some of the findings from the analysis of SRTs include: an emphasis on environmental issues; scoring which does not account for uncertainty or variability in assessors' perceptions; a lack of published reasoning behind the allocation of scores; inadequate definition of scales to permit differentiation among projects; and the existence of non-scientific benchmarks.

Future research
In light of this review, much remains to be done to enhance building/infrastructure SRTs and the current understanding of users of these tools. Some suggestions for future research include:

• Expanding the list of criteria to include more measurable social and economic issues. Current SRTs for buildings are predominantly focused on the environment.

• Exploring the possibility of inter-linking different sustainability criteria. Lozano and Huisingh (2011) observe that a majority of the guidelines and standards address sustainability issues through compartmentalisation, that is, separating economic, environmental and social criteria. They argue that as a result of this approach, sustainability efforts are not properly integrated.

• The GRI has been introduced to guide sustainability reporting among corporations. There is a need to bridge the current gap and look at avenues by which building/infrastructure SRTs can interlock with the GRI.

• The need to incorporate uncertainty/variability in SRTs, given that assessors' perceptions differ (a minimal sketch of one possible approach is given after this list).

• Currently, there are tools that report potential environmental impact and tools that report actual performance. Future research could look into harmonising both types of tool to reduce confusion. A lot of work will be required around the integration process. Mitchell (2010) suggests that perhaps a single agency responsible for both tools would speed up this process.



• The varying standards across SRTs developed in different countries make comparability difficult. Having a common standard would assist in better benchmarking of projects internationally. A memorandum of understanding was signed in 2009 between Green Star, BREEAM and LEED to jointly develop common metrics to report CO2 emissions and align the reporting tools (Mitchell, 2010). Future research could work on facilitating this unification.

• More empirical research is needed to assess the validity of any benchmarks proposed in SRTs. Currently, a majority of the benchmarks set are based on perception, which may be inaccurate.
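On the uncertainty/variability suggestion above, the following minimal Python sketch (hypothetical criteria, weights, spreads and scores, invented purely for illustration and not taken from any existing SRT) shows one way variability in assessors' criterion weights could be propagated into a distribution of overall ratings rather than a single deterministic score.

    import random

    # Hypothetical criteria with a nominal weight, an assumed spread reflecting
    # differences in assessors' perceptions, and an assessed score out of 10.
    # All values are invented for illustration only.
    criteria = {
        "energy":    {"weight": 0.30, "spread": 0.05, "score": 7.0},
        "water":     {"weight": 0.20, "spread": 0.05, "score": 5.5},
        "ieq":       {"weight": 0.25, "spread": 0.08, "score": 8.0},
        "materials": {"weight": 0.25, "spread": 0.08, "score": 4.0},
    }

    def sampled_overall_score(criteria, rng):
        # Draw one plausible set of weights around the nominal values,
        # renormalise them to sum to one, and return the weighted overall score.
        weights = {name: max(0.0, rng.gauss(c["weight"], c["spread"]))
                   for name, c in criteria.items()}
        total = sum(weights.values())
        return sum(weights[name] / total * c["score"] for name, c in criteria.items())

    rng = random.Random(42)
    samples = sorted(sampled_overall_score(criteria, rng) for _ in range(10_000))
    low, median, high = samples[500], samples[5_000], samples[9_500]
    print(f"overall score: median {median:.2f}, 90% interval [{low:.2f}, {high:.2f}]")

Reporting a median together with an interval in this way would make explicit how sensitive a rating is to assessors' perceptions; more formal treatments could build on multi-criteria sensitivity-analysis approaches such as those of Hyde et al. (2004) or Wolters and Mareschal (1995).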

    References

Adegbile, M.B.O. (2013), Assessment and adaption of an appropriate green building rating system for Nigeria, Journal of Environment and Earth Science, Vol. 3 No. 1, pp. 1-11.

AGIC (2012), Australian Green Infrastructure Council IS Rating Scheme, Australian Green Infrastructure Council (AGIC), Sydney, available at: www.agic.net.au/ISratingscheme1.htm (accessed 7 January 2013).

Baird, G. (2009), Incorporating user performance criteria into building sustainability rating tools (BSRTs) for buildings in operation, Sustainability, Vol. 1 No. 4, pp. 1069-1086.

    BASF (2012), Life cycle Analyzer, BASF, Ludwigshafen, available at: www.basf.com/group/corporate/en/function/conversions:/publish/content/news-and-media-relations/news-releases/downloads/2012/P210_Life_Cycle_Analizer_e.pdf (accessed 27 January 2013).

Berardi, U. (2012), Sustainability assessment in construction sector: rating systems and rated buildings, Sustainable Development, Vol. 20 No. 6, pp. 411-424.

    Bonham-Carter, C. (2010), Sustainable communities in the UK, cited in Sharifi and Murayama(2013).

Born, P. (1996), Input-output analysis: input of energy, CO2 and work to produce goods, Journal of Policy Modelling, Vol. 18 No. 2, pp. 217-221.

    Bowyer, J., Howe, J., Fernholz, K. and Lindburg, A. (2006), Designation of EnvironmentallyPreferable Building Materials: Fundamental Change Needed Within LEED, DovetailPartners Inc, Minneapolis, MN, available at: www.dovetailinc.org/files/DovetailLEED0606.pdf (accessed 27 January 2013).

BREEAM (2012), What is BREEAM?, Building Research Establishment (BRE), Watford, available at: www.breeam.org/about.jsp?id66 (accessed 20 November 2012).

    Buchanan, A.H. and Honey, B.G. (1994), Energy and carbon dioxide implications of buildingconstruction, Energy and Buildings, Vol. 20 No. 3, pp. 205-217.

    CASBEE (2002), The Assessment Method Employed by CASBEE, Japan Green Build Counciland Japan Sustainable Building Consortium, available at: www.ibec.or.jp/CASBEE/english/methodE.htm (accessed 20 November 2012).

    Centre for International Economics Canberra and Sydney (2007), Embodied carbon metrics willavoid higher than desired carbon content and additional costs, cited in Davis Langdon,available at: www.davislangdon.com/ANZ/Sectors/Sustainability/ECM/ (accessed 24March 2012).

    Chan, P. and Chu, C. (2010), HK-BEAM (Hong Kong Building Environmental AssessmentMethod): Assessing Healthy Buildings, BEAM Society, Hong Kong, available at:www.mixtechnology.com/files/download/HK_BEAM.pdf (accessed 6 January 2012).

    Chew, M.Y.L. and Das, S. (2007), Building grading systems: a review of the state-of-art,Architectural Science Review, Vol. 51 No. 1, pp. 3-13.


    Cole, R.J. (1999), Building environmental assessment methods: clarifying intentions, BuildingResearch and Information, Vol. 27 Nos 4-5, pp. 230-246.

    Cole, R.J. (2005), Building environmental assessment methods: redefining intentions and roles,Building Research and Information, Vol. 35 No. 5, pp. 455-467.

    Crawley, D. and Aho, I. (1999), Building environmental assessment methods: applications anddevelopment trends, Building Research and Information, Vol. 27 Nos 4-5, pp. 300-308.

Department of Climate Change and Energy Efficiency (2010), NatHERS, Department of Climate Change and Energy Efficiency, Canberra, available at: www.climatechange.gov.au/what-you-need-to-know/buildings/homes/B/media/publications/buildings/nationwide-home-energy-rating-scheme.pdf (accessed 23 February 2013).

Ding, G.K.C. (2008), Sustainable construction - the role of environmental assessment tools, Journal of Environmental Management, Vol. 86 No. 3, pp. 451-464.

EIA (2012), 2012 Commercial Buildings Energy Consumption Survey (CBECS), US Department of Energy, Washington, DC, available at: www.eia.gov/survey/form/eia_871/2012/2012%20CBECS%20Form%20871A%20Building%20Questionnaire.pdf (accessed 10 January 2013).

    Eichholtz, P., Kok, N. and Quigley, J.M. (2009), Doing well by doing good?, working paper,Green Office Buildings, Centre for the Study of Energy Markets (CSEM), Berkeley, CA,August.

    Energy STAR (2011),Energy STAR Performance Ratings Technical Manual, US EnvironmentalProtection Agency and US Department of Energy, available at: www.energystar.gov/ia/business/evaluate_performance/General_Overview_tech_methodology.pdf (accessed 7December 2011).

    EUROSTAT (2001), Economy-Wide Material Flow Accounts and Derived Indicators: AMethodological Guide , European Commission, Luxembourg, available at: http://epp.eurostat.ec.europa.eu/portal/page/portal/environmental_accounts/documents/3.pdf(accessed 27 January 2013).

Fard, N.H. (2012), Energy-based sustainability rating system for buildings: case study of Canada, Master's thesis of Applied Science, The University of British Columbia, Okanagan.

Fenner, R.A. and Ryce, T. (2008), A comparative analysis of two building rating systems, part I: evaluation, Engineering Sustainability, Vol. 161 No. 1, pp. 55-63.

Fernandez-Sanchez, G. and Rodríguez-Lopez, F. (2010), A methodology to identify sustainability indicators in construction project management - application to infrastructure projects in Spain, Ecological Indicators, Vol. 10 No. 6, pp. 1193-1201.

    Fowler, K.M. and Rauch, E.M. (2006), Sustainable Building Rating Systems Summary, PacificNorthwest National Laboratory, US Department of Energy, Battelle Washington, DC.

    Fuerst, F. (2009), Building momentum: an analysis of investment trends in LEED andEnergy STAR-certified properties, Journal of Retail and Leisure Property, Vol. 8 No. 4,pp. 285-297.

Fuerst, F. and McAllister, P. (2008), Pricing sustainability: an empirical investigation of the value impacts of green building certification, working paper presented at ARES, cited in Miller et al. (2008).

    Global Reporting Initiative (GRI) (2011), Sustainability Reporting Guidelines v3.1, GRI,Amsterdam, available at: www.globalreporting.org/resourcelibrary/G3.1-Sustainability-Reporting-Guidelines.pdf (accessed 24 January 2013).

    Green Building Council of Australia (GBCA) (2012a), Green Star-Rating Tools, Green BuildingCouncil of Australia (GBCA), Sydney, available at: www.gbca.org.au/green-Star/rating-tools/ (accessed 20 November 2012).


    Green Building Council of Australia (GBCA) (2012b), Green Star-Performance Benefits, GreenBuilding Council of Australia (GBCA), Sydney, available at: www.gbca.org.au/green-Star/green-Star-performance/benefits/ (accessed 3 January 2012).

Green Building Council of Australia (GBCA) (2012c), Green Star Project Directory, Green Building Council of Australia (GBCA), Sydney, available at: www.gbca.org.au/project-directory.asp (accessed 12 December 2012).

    Gryc, H. (2012), Sustainable Structural Engineering-Gaining Greater Benefits to Communities inDeveloping Nations, Structures Congress, ASCE, Chicago, IL, available at: http://ascelibrary.org/doi/pdf/10.1061/9780784412367.178 (accessed 20 November 2012).

    Heerwagen, J. (2000), Green buildings, organizational success and occupant productivity,Building Research and Information, Vol. 28 Nos 5-6, pp. 353-367.

    Hes, D. (2007), Effectiveness of green building rating tools: a review of performance, TheInternational Journal of Environmental, Cultural, Economic & Social Sustainability, Vol. 3No. 4, pp. 143-152.

HKGBC and BEAM Society (2012), BEAM Plus Existing Buildings Version 1.2, BEAM Society, Hong Kong, available at: www.beamsociety.org.hk/files/BEAM_Plus_For_Existing_Buildings_Version_1_2.pdf (accessed 6 January 2013).

Horvarth, A., Pacca, S., Masanet, E. and Canapa, R. (2007), PaLATE, University of California, Berkeley, CA, available at: www.ce.berkeley.edu/Bhorvath/palate.html (accessed 27 December 2012).

Hsu, S.L. (2010), Life cycle assessment of materials and construction in commercial structures: variability and limitations, Master's thesis, Massachusetts Institute of Technology, Cambridge, MA.

Huang, S.-L. and Tsu, W.-L. (2003), Materials flow analysis and Energy evaluation of Taipei's urban construction, Landscape and Urban Planning, Vol. 63 No. 2, pp. 61-74.

Hurley, J. and Horne, R. (2006), Review and analysis of tools for the implementation and assessment of sustainable urban development, Environmental Institute of Australia and New Zealand (EIANZ) report, Adelaide.

Hyde, K.M., Maier, H.R. and Colby, C.B. (2004), Reliability-based approach to multi-criteria decision analysis for water resources, Journal of Water Resources Planning and Management, Vol. 130 No. 6, pp. 429-438.

Insua, D.R. and French, R. (1991), A framework for sensitivity analysis in discrete multi-objective decision making, European Journal of Operational Research, Vol. 54 No. 2, pp. 176-190.

Kahhat, R. and Williams, E. (2012), Materials flow analysis of e-waste: domestic flows and exports of used computers from the United States, Resources, Conservation and Recycling, Vol. 67, pp. 67-74.

Kordjamshidi, M., King, S. and Prasad, D. (2006), Why are rating schemes always wrong? Regulatory frameworks for passive design and energy efficiency, 23rd Conference on Passive and Low Energy Architecture, Geneva, 6-8 September, available at: www.unige.ch/cuepe/html/plea2006/Vol2/PLEA2006_PAPER176.pdf (accessed 23 February 2013).

    Lee, Y.S. and Guerin, D.A. (2010), Indoor environmental quality differences between officetypes in LEED-certified buildings in the US, Building and Environment, Vol. 45 No. 5,pp. 1104-1112.

    Lee, W.L., Yik, F.W.H. and Burnett, J. (2007), Assessing energy performance in the latestversions of Hong Kong building environmental assessment method (HK-BEAM), Energyand Buildings, Vol. 39 No. 3, pp. 343-354.

    LEED (2009a), LEED for new construction and major renovations v2009, available at: http://new.usgbc.org/credits/new-construction/v2009 (accessed 26 January 2013).


    LEED (2009b), LEED for existing buildings: operations and maintenance v2009, available at:http://new.usgbc.org/credits/existing-buildings/v2009 (accessed 26 January 2013).

LEED (2012), What is LEED?, US Green Building Council, Washington, DC, available at: https://new.usgbc.org/leed (accessed 20 November 2012).

Levermore, G.J. (2008), A review of the IPCC assessment report four: part I: the IPCC process and greenhouse gas emission trends from buildings worldwide, Building Services Engineering Research and Technology, Vol. 29 No. 4, pp. 349-361.

Lockwood, C. (2006), Building the green way, Tool kit, Harvard Business Review, Harvard, MA, pp. 1-9, available at: http://ecologicdesignlab.com/files/Eco-Urban/VIII.1_HBR_building_green_way.pdf (accessed 12 December 2012).

    Lozano, R. and Huisingh, D. (2011), Inter-linking issues and dimensions in sustainabilityreporting, Journal of Cleaner Production, Vol. 19 Nos 2-3, pp. 99-107.

    Mateus, R. and Braganca, L. (2011), Sustainability assessment and rating of buildings:developing the methodology SBToolPTH, Building and Environment, Vol. 46 No. 10,pp. 1962-1971.

Miller, N., Spivey, J. and Florance, A. (2008), Does green pay off?, Journal of Real Estate Portfolio Management, Vol. 14 No. 4, pp. 385-399.

Mitchell, L.M. (2010), Green Star and NABERS: learning from the Australian experience with green building rating tools, in Bose, R.K. (Ed.), Energy Efficient Cities: Assessment Tools and Benchmarking Practices, The International Bank for Reconstruction and Development/The World Bank, Washington, DC, pp. 93-124.

    Mithraratne, N. and Vale, B. (2004), Life cycle analysis model for New Zealand houses, Buildingand Environment, Vol. 39 No. 4, pp. 483-492.

Morrissey, J., Iyer-Raniga, U., McLaughin, P. and Mills, A. (2012), A strategic appraisal framework for ecologically sustainable urban infrastructure, Environmental Impact Assessment Review, Vol. 33 No. 1, pp. 55-65.

    Mroueh, U.-M., Eskola, P. and Laine-Ylijoki, J. (2001), Life-cycle impacts of the use of industrialby-products in road and earth construction, Waste Management, Vol. 21 No. 3,pp. 271-277.

NABERS (2011), Preparing for NABERS Office Rating Application, NSW Office of Environment and Heritage, Sydney, available at: www.nabers.gov.au/public/WebPages/DocumentHandler.ashx?docType 3&id 15&attId 0 (accessed 6 January 2013).

NABERS (2012), Rating Register, NSW Office of Environment and Heritage, Sydney, available at: www.nabers.gov.au/public/WebPages/ContentStandard.aspx?module 30& template3&id 310& sideRatingsfrom20July.htm (accessed 7 January 2013).

Newell, G., MacFarlane, J. and Kok, N. (2011), Building Better Returns: A Study of the Financial Performance of Green Office Buildings in Australia, Australian Property Institute and Property Funds Association, University of Western Sydney and the University of Maastricht in conjunction with Jones Lang LaSalle and CBRE, Australian Property Institute, Sydney, available at: www.api.org.au/assets/media_library/000/000/219/original.pdf?1315793106 (accessed 5 January 2013).

Newsham, G.R., Mancini, S. and Birt, B.J. (2009), Do LEED-certified buildings save energy? Yes, but, Energy and Buildings, Vol. 41 No. 8, pp. 897-905.

    New South Wales (NSW) Government (2013), Environmental Planning and AssessmentRegulation 2000, New South Wales (NSW) Government, Sydney, available at:www.legislation.nsw. gov.au/maintop/ view/inforce/ subordleg 557 2000 cd 0N(accessed 24 February 2013).

    Nguyen, B.K. and Altan, H. (2011), Comparative review of five sustainable rating systems,Procedia Engineering, Vol. 21, pp. 376-386.


Norman, J., MacLean, H. and Kennedy, C. (2006), Comparing high and low residential density: life-cycle analysis of energy use and greenhouse gas emissions, Journal of Urban Planning and Development, Vol. 132 No. 1, pp. 10-21.

NSW Government Planning and Infrastructure (2013), Using the BASIX Assessment Tool, NSW Government Planning and Infrastructure, Sydney, available at: www.basix.nsw.gov.au/basixcms/getting-started/using-the-basix-assessment-tool.html (accessed 23 February 2013).

    Piluso, C., Huang, Y. and Lou, H.L. (2008), Ecological input-output analysis-based sustainabilityanalysis of industrial systems, Industrial & Engineering Chemistry Research, Vol. 47No. 6, pp. 1955-1966.

    Reed, R., Bilos, A., Wilkinson, S. and Schulte, K.-W. (2009), International comparison ofsustainable rating tools, Journal of Sustainable Real Estate , Vol. 1 No. 1, pp. 1-22.

    Ries, R., Bilec, M.M., Gokhan, N.M. and Needy, K.L. (2006), The economic benefits of greenbuildings: a comprehensive case study, The Engineering Economist, Vol. 51 No. 3,pp. 259-295.

Roderick, Y., McEwan, D., Wheatley, C. and Alonso, C. (2009), Comparison of energy performance assessment between LEED, BREEAM and Green Star, 11th International IBPSA Conference, Glasgow, July 27-30.

Rumsey, P. and McLellan, J.F. (2005), The green edge - the green imperative, Environmental Design and Construction, pp. 55-56, cited in Berardi (2012).

Sahely, H.R., Kennedy, C.A. and Adams, B.J. (2005), Developing sustainability criteria for urban infrastructure systems, Canadian Journal of Civil Engineering, Vol. 32 No. 1, pp. 72-85.

    Saunders, T. (2008),A Discussion Document Comparing International Environmental AssessmentMethods for Buildings, Building Research Establishment (BRE), Watford, available at:www.dgbc.nl/images/uploads/rapport_vergelijking.pdf (accessed 29 July 2012).

Scheuer, C.W. and Keoleian, G.A. (2002), Evaluation of LEED Using Life Cycle Assessment Methods, National Institute of Standards and Technology (NIST), Gaithersburg, MD, available at: www.fire.nist.gov/bfrlpubs/build02/PDF/b02170.pdf (accessed 27 January 2013).

    Sev, A. (2011), A comparative analysis of building environmental assessment tools andsuggestions for regional adaptations, Civil Engineering and Environmental Systems,Vol. 28 No. 3, pp. 231-245.

    Sharifi, A. and Murayama, A. (2013), A critical review of seven selected neighbourhoodsustainability assessment tools, Environmental Impact Assessment Review, Vol. 38,pp. 73-87.

    Shen, L.-Y., Hao, J.L., Tam, V.W.-Y. and Yao, H. (2007), A checklist for assessing sustainabilityperformance of construction projects, Journal of Civil Engineering and Management,Vol. XIII No. 4, pp. 273-281.

    Singh, R.K., Murty, H.R., Gupta, S.K. and Dikshit, A.K. (2009), An overview of sustainabilityassessment methodologies, Ecological Indicators, Vol. 9 No. 2, pp. 189-212.

Todd, J.A., Crawley, D., Geissler, S. and Lindsey, G. (2001), Comparative assessment of environmental performance tools and the role of the green building challenge, Building Research and Information, Vol. 29 No. 5, pp. 324-335.

    Torcellini, P., Pless, S., Deru, M., Griffith, B., Long, N. and Judkoff, R. (2006), Lessons learnedfrom case studies of six high-performance buildings, Technical Report No. NREL/TP-550-37542, National Renewable Energy Laboratory, Golden, CO, June.

Tronchin, L. and Fabbri, K. (2008), Energy performance building evaluation in Mediterranean countries: comparison between software simulations and opening rating simulation, Energy and Buildings, Vol. 40 No. 7, pp. 1176-1187.


Udall, R. and Schendler, A. (2005), LEED is Broken - Let's Fix It, iGreenBuild.Com, San Mateo, CA, available at: www.igreenbuild.com/cd_1706.aspx (accessed 27 January 2013).

Ugwu, O.O. and Haupt, T.C. (2007), Key performance indicators and assessment methods for infrastructure sustainability - a South African construction industry perspective, Building and Environment, Vol. 42 No. 2, pp. 665-680.

Urge-Vorsatz, D. and Novikova, A. (2008), Potential and costs of carbon dioxide mitigation in the world's buildings, Energy Policy, Vol. 36 No. 2, pp. 642-661.

    USGBC (2011), Buildings and Climate Change, US Green Building Council, Washington, DC,available at: www.documents.dgs.ca.gov/dgs/pio/facts/LA%20workshop/climate.pdf(accessed 20 June 2012).

    Vijayan, A. and Kumar, A. (2005), A review of tools to assess the sustainability in buildingconstruction, Environmental Progress, Vol. 24 No. 2, pp. 125-132.

    Watson, P., Jones, D. and Mitchell, P. (2013), Are Australian building eco-assessment toolsmeeting stakeholder decision making needs?, available at: www.construction-innovation.info/images/pdfs/Research_library/ResearchLibraryB/RefereedConferencePapers/Are_Australian_building_eco-assessement_tools.pdf (accessed 3 May 2013).

Williamson, T.J., O'Shea, S. and Menadue, V. (2001), NatHERS: science and non-science, 35th ANZAScA Conference, Wellington, November, available at: www.pc.gov.au/data/assets/pdf_file/0017/45116/sub028attachment1.pdf (accessed 24 February 2013).

    Wiley, J.A., Benefield, J.D. and Johnson, K.H. (2010), Green design and the market forcommercial office space, Journal of Real Estate Finance and Economics, Vol. 41 No. 2,pp. 228-243.

    Wolters, W.T.M. and Mareschal, B. (1995), Novel types of sensitivity analysis foradditive MCDM methods, European Journal of Operational Research, Vol. 81 No. 2,pp. 281-290.

World Economic Forum (2011), A Profitable and Resource Efficient Future: Catalysing Retrofit Finance and Investing in Commercial Real Estate, World Economic Forum, Geneva, October, available at: www3.weforum.org/docs/WEF_IU_CatalysingRetrofitFinanceInvestingCommercialRealEstate_Report_2011.pdf (accessed 6 January 2013).

    Xu, M., Allenby, B. and Kim, J. (2010), Input-Output Analysis for Sustainability, Centre forSustainable Engineering, Tempe, AZ, available at: www.ce.cmu.edu/Bcse/5aug07%20Allenby%20IO.pdf (accessed 27 January 2013).

    Further reading

AS/NZS:4801 (2001), Occupational Health and Safety Management Systems - Specification With Guidance for Use, Standards Australia, Sydney, available at: http://infostore.saiglobal.com/store2/Details.aspx?ProductID386329 (accessed 16 October 2012).

BSI (2013), OHSAS 18001 Occupational Health and Safety, BSI, London, available at: www.bsigroup.com.au/en-au/Assessment-and-Certification-services/Management-systems/Standards-and-schemes/OHSAS-18001/ (accessed 24 January 2013).

    EMAS (2013), What is EMAS?, European Commission, Brussels, available at: http://ec.europa.eu/environment/emas/index_en.htm (accessed 23 January 2013).

ISO9001 (2008), Quality Management Systems - Requirements, ISO, Geneva, available at: www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber46486 (accessed 16 December 2011).

ISO14001 (2004), Environmental Management Systems - Requirements with Guidance for Use, ISO, Geneva, available at: www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber31807 (accessed 16 December 2011).


    Appendix

Table AI. LCA tools

LCA tool: BEES
Developer: National Institute of Standards and Technology (NIST)
Weblink: http://ws680.nist.gov/bees/
Coverage: All stages in the life of a product (raw material acquisition, manufacture, transportation, installation, recycling, waste management)
Outputs/types of analysis: Economic performance measured using standard life cycle cost method; economic and environmental performance combined into one overall performance using multi-attribute decision analysis

LCA tool: BOUSTEAD
Developer: BOUSTEAD Consulting, UK
Weblink: www.boustead-consulting.co.uk/
Coverage: LCA tool across a number of categories (fuel production, fuel use, process, transport, biomass)
Outputs/types of analysis: Global warming potential; conservation of fossil fuels; acidification; grid electricity use; public water use

LCA tool: ENVEST
Developer: Edge Environment
Weblink: http://edgeenvironment.com.au/envest/
Coverage: LCA tool for earlier phase of building design
Outputs/types of analysis: Reveals operational impacts and embodied impacts of building as design evolves; provides estimates of construction cost and whole life cycle cost

LCA tool: Ecoinvent
Developer: Ecoinvent Centre
Weblink: www.ecoinvent.org/database/
Coverage: Contains datasets in the areas of agriculture, energy supply, transport, biofuels, construction materials, metals processing, electronics and waste treatment
Outputs/types of analysis: Life cycle inventory which can be used with other major LCA tools

LCA tool: GaBi
Developer: PE International
Weblink: www.gabi-software.com/australia/software/gabi-software/gabi-5/
Coverage: Users have the flexibility to construct life cycle of products at any stage
Outputs/types of analysis: Life cycle assessment across different modules (design for environment, eco-efficiency, eco-design, efficient value chains); life cycle cost (designing and optimising products and services for cost reduction); life cycle reporting with modules across sustainable product marketing, sustainability reporting and LCA knowledge sharing; life cycle working environment (developing manufacturing processes that address social responsibilities)


Table AII. Broad comparison of criteria across a selection of SRTs

SRTs compared: BREEAM, LEED, Green Star, CASBEE, ASPIRE, BCA Green Mark for Districts, EPRA, Estidama Pearl Community, Green Globe, Sustainable Design Scorecard, BEAM, DGNB-Seal, Protocol ITACA and AGIC.

Criteria and subcriteria compared:
Environmental - energy; water; waste; pollution/emissions/air; land use and ecology; biodiversity; materials.
Social - management (i.e. integrated process, sustainable procurement, etc.); health and well-being/IEQ.
Economic - innovation; equity of economic opportunity; livelihood opportunity; macroeconomic effects.


SRT / Owner / Weblink / Nature of SRT (deterministic scoring for criteria; weighting for criteria) / Comments

SRT: BREEAM
Owner: Building Research Establishment, UK
Weblink: http://breeam.org
Comments: Environmental weightings exist and are allocated to the 10 criteria identified: management, waste, health and well-being, energy, transport, water, pollution, land use and ecology, materials and innovation

SRT: LEED
Owner: US Green Building Council
Weblink: www.usgbc.org
Comments: Similarities with BREEAM. The management criterion is not present in LEED

SRT: Green Star
Owner: Green Building Council Australia
Weblink: www.gbca.org.au/green-Star
Comments: Similar to BREEAM in terms of all the criteria assessed

SRT: CASBEE
Owner: Japan Green Build Council and Japan Sustainable Building Consortium
Weblink: www.ibec.or.jp