Imaged-based verification of as-built documentation of operational buildings


Automation in Construction 21 (2012) 161–171



Laura Klein a, Nan Li a, Burcin Becerik-Gerber b,⁎

a Sonny Astani Department of Civil and Environmental Engineering, University of Southern California, 3620 S. Vermont Avenue, KAP 217, Los Angeles, CA 90089, United States
b Sonny Astani Department of Civil and Environmental Engineering, University of Southern California, 3620 S. Vermont Avenue, KAP 224C, Los Angeles, CA 90089, United States

⁎ Corresponding author. Tel.: +1 213 740 4383. E-mail addresses: [email protected] (L. Klein), [email protected] (N. Li), [email protected] (B. Becerik-Gerber).

doi:10.1016/j.autcon.2011.05.023

Article history: Accepted 29 May 2011. Available online 21 June 2011.

Keywords: Image-based verification; As-built conditions; Dimension assessment; Photogrammetry; Facilities management; Facade measurement

Abstract

As-built models and drawings are essential documents used during the operations and maintenance (O&M) of buildings for a variety of purposes including the management of facility spaces, equipment, and energy systems. These documents undergo continuous verification and updating procedures, both immediately after construction during the initial handover process to reflect construction changes and during the occupancy stage for the changes that occur throughout the building's lifespan. Current as-built verification and updating procedures involve largely time consuming on-site surveys, where measurements are taken and recorded manually. In an attempt to streamline this process, the paper investigates the advantages and limitations of using photogrammetric image processing to document and verify actual as-built conditions. A test bed of both the interior and exterior of a university building is used to compare the dimensions generated by automated image processing to dimensions gathered through the manual survey process currently employed by facilities management, and strategies for improved accuracy are investigated. Both manual and image-based dimensions are then used to verify dimensions of an existing as-built Building Information Model (BIM). Finally, the potential of the image-based spatial data is assessed for accurately generating 3D models.


1. Introduction

As-built models and drawings are essential documents used during the operations and maintenance (O&M) of buildings for managing facility spaces, equipment, and energy systems. While these documents are typically generated, developed, and used throughout the design and construction phases of new buildings, in the United States, new construction represents only two to three percent of the existing building stock in any given year [1]. Existing facilities are generally in operation for 30 to 50 years, meaning that approximately 87% of current buildings will still be operational in 2050, representing 70% of the building stock [2]. As-built documents are therefore of greatest value to building owners and managers and are used continuously for assessing building performance, managing building repairs and renovations, and assisting building decommissioning [3–5].

Inefficiencies in processing, communicating, and revising as-built documents therefore result in high costs imposed on building owners. A 2004 NIST report found that an estimated $1.5 billion is wasted every year as a result of unavailable and inaccurate as-built documents causing information delays to facilities management (FM) personnel. Changes that occur during construction are often reflected as redline markups or partial drawings that are not transferred to complete as-built documentation handed over to owners during building closeout or after major renovations. Changes that occur during occupancy must also be updated in drawings and equipment databases to ensure accurate records. An additional $4.8 billion is therefore spent annually on FM labor alone to verify and validate existing as-built documentation [5].

Consequently, solutions are being increasingly investigated for improving the capturing and communicating of building information critical to building operations and maintenance. The digitization and exchange of 3D geometric and semantic building data in the form of Building Information Models (BIM) is one of the most promising and fastest developing approaches to as-built information management [4]. BIM has been shown to reduce mismanagement of associated building information throughout the building's lifespan and improve the efficiency of information access and presentation [6–8]. There are a variety of use cases where BIM can be potentially applied to FM, including locating building components [9,10], checking maintainability, providing real-time data access [7], facilitating space management [11], supporting planning and feasibility studies for non-capital construction, monitoring and controlling energy consumption [12], and improving personnel training and development. Realizing the value of 3D as-built models, building owners and managers are now more frequently requiring the handover of BIM models after the design and construction phases [13,14].

Whether relying on 3D or 2D as-built documentation for building operations, facilities managers still rely heavily upon manual processes for developing and verifying drawings and models. Facilities managers typically execute field surveys of existing building conditions including dimensions, materials, and assets using digital cameras, tape measures, and/or laser measuring devices to verify drawings and models or to generate digital as-built documents where they do not exist. Attempts to automate the surveying and modeling of as-built conditions include leveraging new remote sensing technologies, such as 3D laser scanning and photogrammetry, which use sensors to capture 3D spatial information from a distance in a non-disruptive fashion [15–19]. The availability of low cost and effective tools for automated verification and modeling of as-built conditions would allow facilities managers and building owners more frequent and more comprehensive updates to as-built documents to improve their daily operations. Today, photogrammetry offers one of the most promising low cost solutions, but it relies heavily on satisfactory environmental conditions. For the application of photogrammetric image processing for interior and exterior as-built document assessment, it is therefore crucial to conduct tests in realistic field environments.

This paper provides a brief overview of current manual as-built document verification processes and developing remote sensing verification solutions that allow increased automation and increased potential for 3D model development. Specifically, the advantages and limitations of FM personnel using commercial photogrammetric image processing software for as-built document verification are investigated through a test bed building at the University of Southern California. Document verification is focused on dimension measurements and dimension control, representing critical aspects of quality assurance procedures. The proposed image-based dimension verification procedure is compared in terms of accuracy to the current FM procedure employing manual surveying tools. The manual and image-based dimensions are then used to verify an existing as-built BIM model currently undergoing verification by university FM after the design and construction phase handover. Finally, the potential of the image-based spatial data is assessed for accurately generating 3D geometric building models. This is done through a comparison of the 3D relative locations of building elements in the image-based reconstruction and the verified as-built BIM model.

2. Overview of as-built document assessment procedures and solutions

2.1. As-built document assessment in current practice

When a manual verification is required, the FM personnel must first check all as-built documentation to identify changes that are either scheduled or in progress to the building that is under verification. Based on the information collected, a plan for the field survey is prepared, which specifies the changes and the corresponding dimensions that need to be verified. Tools required for measurement and documentation during the field survey must also be prepared, such as large-scale area plans, laser distance meters, measurement tapes, and digital cameras. During the field survey of a building exterior, dimensions of interest, which typically include overall dimensions of all facades and facade openings such as windows and doors, are measured and documented manually on existing building plans. Digital photographs of the facades are also taken for future reference. Field surveys of building interiors are usually executed on a room basis. Every room of interest is surveyed separately, and basic dimensions including room width, length, and height and sizes of doors and windows are measured. FM personnel are also required to observe and document the inventory of permanent furniture and equipment in each room.

After all dimension information is collected by the field survey, it is used by FM personnel as the "ground truth" to verify the accuracy of current as-built documentation. Dimensions of critical building geometry are generally only measured once to limit the time required for the survey. If the difference between any dimension from the existing as-built documentation and the field survey exceeds a pre-determined threshold (approximately 2% for interior dimensions, or the minimum of 2% or 10 cm for exterior dimensions), correction of the as-built documentation is required. Such revision may include revising polylines and square footages of the buildings or rooms in CAD files, assigning room numbers or usage to newly created/remodeled areas or removing demolished areas in the space management system, correcting inaccurate geometrical attributes of objects in as-built BIM models, archiving notes and images from the field survey, and updating any other related documentation and linked databases. Certain types of corrections, such as updates of floor square footages, must be reviewed and authorized by the space manager to ensure the accuracy and appropriateness of the proposed changes to the as-built documentation.
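
To make the correction criterion concrete, the check below is a minimal sketch of the thresholds described above, written as a hypothetical helper rather than the university's actual FM tooling; the field-survey value is treated as ground truth.

    def needs_correction(documented_cm: float, surveyed_cm: float, interior: bool) -> bool:
        # Flag an as-built dimension for correction against the field survey
        # (hypothetical helper; thresholds taken from the procedure described above).
        deviation_cm = abs(documented_cm - surveyed_cm)
        relative = deviation_cm / surveyed_cm          # field survey treated as ground truth
        if interior:
            return relative > 0.02                     # approximately 2% for interior dimensions
        # Exterior: exceeding the minimum of 2% or 10 cm triggers a correction.
        return relative > 0.02 or deviation_cm > 10.0

For example, an exterior wall documented as 500 cm but surveyed at 512 cm would be flagged, since the 12 cm deviation exceeds the 10 cm bound.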

2.2. Remote sensing solutions

As an alternative to the manual survey procedure, remote sensing technologies are increasingly being tested and implemented in academic research and industry practice to capture existing building conditions. 3D laser scanning and photogrammetry are the most common means by which this spatial information is gathered remotely. While very different in terms of equipment costs and sensing processes, both automation technologies use sensors to either directly or indirectly compute relative distances between their locations and points in the sensed scene. Over the past decade, considerable work has covered the application of these technologies for documenting, monitoring, and inspecting construction site progress [20–25], infrastructure [26–28], and the structural health of buildings and bridges [29–31]. Image processing specifically has been used to automatically detect and identify building elements according to shape and materials [32–35]. Remote capture of heritage buildings has also proven a valuable application for historical documentation and preservation purposes [36–39]. For the wide range of applications that capture building as-built conditions, the choice of remote sensing technology depends heavily on the size and complexity of the scene or object, the required accuracy and level of detail, and budgetary constraints [40]. For the sensing of large objects and complex environments, a combination of 3D laser scanning and photogrammetric techniques has been shown to achieve successful results [41–44].

2.2.1. 3D laser scanning

A 3D laser scanner, commonly referred to as light detection and ranging (LiDAR), is an "active" optical sensing technology that directly calculates distances to line-of-sight objects by emitting and measuring the return of laser beams. Laser scanners are capable of obtaining large amounts of spatial information, in the form of dense 3D point clouds (millions of points), in a short amount of time. Commercial software is available for the registering, editing, and texturing of the sensed 3D data [45,46]. Point clouds generated from 3D laser scans can also be exported to CAD, where they can be used to automatically generate meshes and other model geometry [27]. The captured spatial information is generally accurate within millimeters and reflects a high level of detail due to the high resolution of points [27,28]. This technology is therefore well suited for capturing complex shape geometry and objects with small details [43] as well as for projects requiring fully automated 3D data retrieval [47].

While laser scanned spatial data can be very accurate and detailed, this accuracy is often subject to environmental parameters such as object reflectivity, surface texture, lack of line of sight, and weather [48]. For instance, laser scanners are not well suited for capturing facilities with large window areas as the laser beam might not accurately reflect off of glazing [47]. 3D laser scanners also commonly face difficulty in capturing sharp corners and edges, which are critical regions for modeling [49]. For the most accurate laser scan results, artificial targets are used, whose layout and placement in relation to the scanner must be carefully planned and executed [43]. Additionally, deliberate adjustments to targets (necessary for some target types) and accidental adjustments to targets (caused by wind or human interaction) can significantly sacrifice the accuracy of the data [51]. Finally, use of 3D laser scanning for capturing existing conditions is limited mainly by the high costs of the equipment and the high level of skill required for equipment operation. While the price of laser scanning equipment is dropping, scanners still cost tens of thousands of dollars [45], and training is required to ensure proper target placement and operation of the devices and registration software [27,43].

2.2.2. Photogrammetry

In comparison to 3D laser scanning, photogrammetry offers a lower cost, lower skill, portable solution for remote sensing [40,47]. Photogrammetry traditionally refers to the process of deriving geometric information (distances and dimensions) about an object through measurements made on photographs. Photogrammetry can involve one photo or multiple photos, analog or digital images, still-frame or video images (videogrammetry), and manual or automatic processing [51]. Generally, photogrammetry includes selecting common feature points in two or more images; calculating camera positions, orientations, and distortions; and reconstructing 3D information by intersecting feature point locations. Over the past decade, major developments in computer vision and image processing have allowed increased automation in each of these steps, thereby expanding the potential applications for photogrammetry [52–55]. Today, commercial software is available for semi-automatically selecting and matching visual feature points; calculating camera positions, distortions, and orientations; and generating 3D reconstructions of image-captured objects in the form of sparse 3D point clouds [56–59].

The selection and stitching of overlapping feature points between images can be achieved with varying levels of automation. Manual stitching of feature points generally requires fewer images but depends on some input of a priori knowledge of the object or scene. In contrast, automated stitching requires a large number of images taken closely together to provide sufficient overlap and repetition of feature points [41,60]. While automated stitching generates denser point clouds (thousands rather than hundreds of points) and reduces the need for human intervention, it is, at this time, more prone to stitching errors and increased noise [40] caused by the extraction of unwanted background feature points such as trees, surrounding buildings, and the sky. Additionally, automatically generated photogrammetric point cloud data represents only approximate 3D information resulting from the computation of potential feature point correspondences and is therefore generally insufficient for the automatic extraction of primitive geometry such as planes and simple shapes [19].

After feature points are manually or automatically defined and stitched between 2D images, camera positions and orientations are calculated based on corresponding collections of approximated 3D feature point locations. Distortion due to specific camera lenses must be taken into account either manually or automatically according to camera information embedded in digital image files. A method known as bundle adjustment is often employed to simultaneously optimize the calculated structure and camera poses [61]. The final reconstructed scene includes the optimized camera positions and their associated visual data in a 3D representation such as a sparse point cloud. Once cameras are positioned and calibrated for each image, the 3D coordinate of any point or image pixel can be calculated with a relatively high degree of accuracy using triangulation to define the same point in two images taken from different perspectives.
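
To make the triangulation step concrete, the sketch below implements basic linear (direct linear transformation) triangulation of one feature point from two views; the camera matrices and pixel coordinates are hypothetical, and commercial processors combine more robust variants of this step with the bundle adjustment described above.

    import numpy as np

    def triangulate_point(P1, P2, x1, x2):
        # Linear (DLT) triangulation: recover one 3D point from its pixel
        # coordinates x1, x2 in two views with 3x4 projection matrices P1, P2.
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]  # dehomogenize to (x, y, z)

    # Hypothetical example: two identical cameras one meter apart along the x axis.
    K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    point = triangulate_point(P1, P2, (760.0, 410.0), (560.0, 410.0))
    # -> approximately (0.6, 0.25, 5.0) meters in the first camera's frame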

Like 3D laser scanning, image processing for accurate photogrammetry relies on satisfactory environmental and site conditions. For automated processing, dynamic effects such as lighting and moving objects can influence the correspondence of feature points and the stitching of images [40]. Other challenges result from a lack of feature points caused by untextured surfaces or the appearance of similar feature points in images [18,41]. Buildings, especially newer buildings, are often characterized by uniform materials and building features that lack enough visual distinction to be recognized uniquely by computer vision. Surrounding buildings, typical in crowded urban settings, also prevent wide angle views of buildings, thereby limiting the number of feature points in each image or even preventing line-of-sight to parts of buildings. Finally, critical building geometry can often be occluded by common site clutter including vegetation, cars, furniture, and people. Still, many of the environmental challenges facing photogrammetric surveys can be reduced or eliminated through careful planning of the survey procedure. For instance, taking images over a short period of time can minimize environmental variation caused by changes in lighting. Adding unique visual markers to a scene can artificially increase the number of feature points, and adding geometric assumptions such as planar constraints to the modeling process can minimize the impact of occlusions and limited views [41]. The advantages and limitations of both 3D laser scanning and photogrammetry found in related literature are summarized in Table 1.

Table 1. Comparison of 3D laser scanning and photogrammetry for remote sensing.

Technology | 3D laser scanning | Photogrammetry
Accuracy | Millimeter [27,28] | Centimeter [27]
Resolution | Millions of points [28,43] | Hundreds of points [43,47,55]
Equipment cost | Tens of thousands [45] | Hundreds [62]
Required skill | Medium-high [27,43] | Low [40]
Portability | "Bulky" [63] | Hand held [47]
3D data generation | Automatic capture [64] | Post-processing [47]
Commercial software | Yes [45,46] | Yes [56]
3D modeling | Automatic meshing & shape extraction [65] | Manual modeling [43]
Environmental challenges | Reflectivity, surface texture, weather, target movement, edges, line of sight [27,48–50] | Feature repetition, surface texture and material, view angle, line of sight [40,41]

The affordable equipment and labor costs of photogrammetric surveys and their reported success with simple and planar geometry [43] make them appealing to building owners and facilities managers for documenting and modeling buildings. Recent research efforts have focused on testing the accuracy of photogrammetric techniques for dimensioning building elements including structural columns, facades, and facade openings [66–69]. While these developments and tests have provided valuable and promising results, remote capture of a complete existing and occupied building offers a complex set of obstacles that has yet to be fully investigated. With the presented test bed, this paper tests a methodology for verifying as-built documents by dimensioning existing operational buildings using a digital camera, photogrammetric image-processing software, and 3D modeling software. The photogrammetric image-processing software was not originally developed for the remote sensing of buildings, but an increasing demand for accurate as-built documentation is motivating research for this application. Workflows and tools for both the manual field survey verification procedure and the proposed image-based verification procedure are outlined in the following sections. Methods for assessing the accuracy and feasibility of the proposed image-based method for dimension verification and 3D model generation are also presented.

Fig. 2. Interior test bed floor plans of Rooms 206 and 207.

3. Test bed description

The School of Cinematic Animation and Digital Arts Building (SCB) was selected to test the current and proposed as-built documentation verification methods on existing conditions (Fig. 1). The test bed building is of relatively recent construction and has been occupied since June 2010. The facility is approximately 3580 m2 and houses typical classrooms and offices as well as computer laboratories and specialized production rooms, screening rooms, and animation studios. The building has four stories, including a basement, and is roughly rectangular in plan.

This building was selected to reflect typical exterior and interior conditions for an educational or commercial building so that both exterior and interior image-based verification methods could be sufficiently tested. The test bed building also provides opportunity for the verification of dimensions ranging from small to large and spanning in both horizontal and vertical directions. Finally, the test bed site allows the testing of the proposed method with surface materials, environmental conditions, and site occlusions typical for an operational building. These environmental factors include natural and artificial lighting, furniture, vegetation, and surrounding buildings.

A research library (Room 206) and a classroom (Room 207) on the second floor of SCB were selected for the interior field tests as they represent typical spaces found on the university campus (Fig. 2). Room 207, covering approximately 53 m2, is roughly twice the size of Room 206, which covers approximately 25.5 m2. Both rooms include one or more windows on their southern walls, which allow in natural light. At the time of the surveys, both rooms were heavily populated with equipment and furniture, obstructing corners of the floor, windows, and doors. The walls of Room 207 were also covered with posters and other visually distinct graphics, but the walls of Room 206 were mostly clear.

The exterior of SCB consists of mostly planar surfaces with repetitive ornamentation and archways at entrances and along an exterior corridor. The facade consists of stucco material of uniform color and texture. A few variations of steel framed windows and doors repeat along the sides. As the building is part of a dense complex on a compact university campus, wide angle views are only available for the northern facade and half of the southern facade. For the remaining two and a half facades, surrounding buildings obstruct complete views. Vegetation closely surrounding the building, including flower beds and palm trees, also obstructs the view of some building elements such as windows, doors, and facade corners, as can be seen in Figs. 1 and 3.

Finally, the test bed building was selected as it is one of the few campus facilities to have an existing as-built BIM model developed through the design and construction phases. The BIM model was motivated by both the project donor and owner and was managed by a designated BIM team consisting of representatives from architecture, engineering, construction, subcontractor, and FM groups. The BIM team maintained multiple BIM models, which they merged into a single Navisworks model and updated periodically throughout construction to reflect the as-built conditions. Architectural supplemental instructions issued during construction were communicated by direct changes issued to the BIM model. The final BIM model delivered to the owner is therefore expected to be a more accurate as-built document than typically received during building closeout. The BIM model is currently undergoing standard manual verification processes executed by USC FM, so this work is relevant to the actual test bed site and building management.

Fig. 1. An image and south-east perspective of the test bed building.

4. Methodology for as-built assessment

4.1. Manual field survey

To replicate the manual survey verification procedure currently carried out by the university FM personnel, measurements of the two interior rooms and the four exterior facades were gathered with an off-the-shelf laser surveying device, which measures linear distances within a range of 100 m to an accuracy of 1.6 mm. Measurements of each building element were initially taken once to an accuracy of 1 cm. Building elevations and floor plans were used to choose the dimensions and measuring sequence before the survey, as well as to facilitate documenting the measurements recorded onsite. The interior surveyed dimensions included the length, width, and height of each room; magnitudes of wall protrusions and recessions; and sizes of doors and windows and their relative distances to adjacent walls. The exterior surveyed dimensions included the width and height of each wall and the sizes of doors and windows at the ground level of the building. Keeping consistent with current university FM practice, dimensions collected through the manual survey were compared to dimensions extracted from the existing as-built BIM model to verify the accuracy of the model. Dimensions found to differ by more than 2% from the as-built model were re-measured an additional two times to verify that these discrepancies were not caused by random errors.

Fig. 3. Test bed building plan and surrounding complex site plan.

4.2. Image-based survey

To gather the same building geometry data acquired by the manual field survey, an image-based survey was executed for the building exterior and the two interior rooms. The presented methodology focuses on the three major steps involved in remote sensing through photogrammetry, which include image acquisition, image processing (image stitching and 3D reconstruction), and geometry or dimension extraction, as shown in Fig. 4.

4.2.1. Image acquisition

For the study, photographs were taken with a fixed focal length (4.6 mm) using an off-the-shelf 8.0 megapixel digital camera. Photographs were taken in a circular path around the exterior of the building with a maximum of 10° between images. In many cases, several photographs were shot from a single location to capture multiple parts of the building. Camera positions around the building exterior can be viewed in Fig. 6. A total of 145 images were acquired for the building's exterior in a single session. Interior photograph taking adhered to a less defined path but rather optimized views of all critical geometry and building elements such as wall openings and floor and ceiling corners. A total of 130 images were acquired for the two interior rooms in a single session.
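
The exterior camera spacing can be planned with simple geometry. The sketch below is a simplification that assumes an unobstructed circular path of a chosen radius and returns evenly spaced stations that respect a maximum angular increment such as the 10° used here; the 25 m radius is illustrative, not a value from the survey.

    import math

    def plan_stations(radius_m: float, max_angle_deg: float = 10.0):
        # Evenly spaced camera stations on a circle so that consecutive
        # stations are at most max_angle_deg apart (idealized, unobstructed path).
        n = math.ceil(360.0 / max_angle_deg)      # at least 36 stations for 10 degrees
        step = 2 * math.pi / n
        return [(radius_m * math.cos(i * step), radius_m * math.sin(i * step)) for i in range(n)]

    stations = plan_stations(radius_m=25.0)       # illustrative radius only
    print(len(stations))                          # -> 36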

Fig. 4. Image-based survey and verification methodology (image acquisition; image processing: feature point selection and stitching, structure and camera pose calculations, bundle adjustment with camera calibration and refined structure; dimension extraction; accuracy comparison against manual survey measurements and the as-built BIM model).

Augmentation of both exterior and interior scenes with unique visual markers was employed where feature points were predicted to be insufficient or where views were limited by surrounding buildings. As shown in Fig. 5, the visual markers used were random hand-drawn sketches on letter-sized white paper. Assessment of the existing feature points in the scene was carried out according to the judgment and experience of the surveyors. Typically, the uniform stucco and drywall materials of the test bed building were assumed to lack sufficient feature points unless they were accompanied by such elements as decorative posters, as was the case in Room 207. These visual markers were added to all four sides of the building as well as all four walls of Room 206 to help in the automated stitching processes.

4.2.2. Image processing

Image processing was accomplished using commercially available software [56] capable of automatic camera calibration and structure computing. Feature points were first automatically selected and stitched between images through the identification and matching of pixel groups with unique variation and contrast. Stitched images were used to calculate the initial structure of motion and camera positions. Additional images added to the scene by the image processor were used to refine the calculated structure and camera poses by recognizing outlying points and adding new 3D points to the reconstruction [55]. Once all images were added to the scene, bundle adjustment was used to find the parameters of the cameras and to optimize the cameras and the 3D structure. The output returned by the image processor included stitched images, their associated camera positions, and the generated sparse point cloud in a 3D model space, as shown in Fig. 6.

While these steps were carried out automatically, the image processor did not successfully stitch all images and compute the structures of the entire scenes in the first attempt. To complete the reconstruction, feature points had to be manually defined and matched between those images already stitched and those images that had not been successfully stitched. The first iteration of exterior image processing automatically stitched only the north facade, thereby requiring manual stitching assistance to complete the remaining building facades. A total of four iterations of manual stitching and resubmission to the image processor were executed. Room 206 was reconstructed with four iterations of manual stitching and image processing, and Room 207 was reconstructed with only two iterations.
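
The automated feature selection and stitching step can be illustrated with open-source computer vision tools. The sketch below uses OpenCV's ORB detector and a brute-force matcher as a stand-in for the commercial image processor used in the study; the image file names are hypothetical.

    import cv2

    # Detect and match ORB features between two overlapping photographs
    # (open-source stand-in for the commercial image processor; file names hypothetical).
    img1 = cv2.imread("facade_001.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("facade_002.jpg", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=5000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    # Corresponding pixel coordinates of the best matches would feed the
    # structure and camera pose calculations (e.g., essential matrix estimation).
    pts1 = [kp1[m.queryIdx].pt for m in matches[:500]]
    pts2 = [kp2[m.trainIdx].pt for m in matches[:500]]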

4.2.3. Dimension extraction

Manual methods were used to extract dimensions and 3D coordinates from the 3D reconstructions. Once image processing was completed for all exterior and interior scenes, points and polylines were used to model all major building geometry including facades, wall openings, and building and room footprints and elevations. For Rooms 206 and 207 respectively, 18 and 22 points were manually modeled for dimension extraction. For the exterior scene, 251 points were manually modeled. Where critical points such as floor corners could not be modeled directly due to environmental obstructions, geometric assumptions were made using planar and axial constraints. Generated points and polylines were exported to a 3D modeling software for measurement and 3D coordinate takeoffs, as shown in Fig. 7. All dimension and coordinate data gathered from the line models remained unscaled until the data was analyzed and compared to the manual field survey measurements.

Fig. 5. Left: the posters on the walls of Room 207 provided enough feature points. Right: the walls of Room 206 had to be augmented with visual markers.
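
Because the reconstruction is unscaled, the takeoff step fixes the scale with a manually measured reference dimension (one per interior room, and one or more per exterior section, as described in Sections 5 and 6). The sketch below is a minimal illustration of that step; the point coordinates and the 269 cm reference are hypothetical values, not data from the study.

    import numpy as np

    def model_distance(p, q):
        # Euclidean distance between two modeled points, in reconstruction units.
        return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

    # Hypothetical reference: a door height measured manually at 269 cm whose two
    # endpoints were modeled in the unscaled reconstruction.
    ref_model = model_distance((0.00, 0.00, 0.00), (0.00, 0.00, 1.12))
    scale = 269.0 / ref_model                      # cm per reconstruction unit

    # Any other modeled dimension can now be reported in centimeters.
    window_width_cm = scale * model_distance((2.35, 0.10, 0.95), (3.37, 0.10, 0.95))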

4.3. Accuracy comparisons

All exterior and interior building dimensions extracted through the image-based survey were first compared to manual survey dimensions to assess their accuracy and potential value for verifying as-built conditions. Manual dimensions were then compared to the existing as-built BIM model and were used as the "ground truth" dimensions to verify the accuracy of the model within 2%, as defined and used by FM groups. Then, the proposed image-based method was assessed for direct verification of the existing as-built BIM model (Fig. 4). Additionally, absolute Cartesian coordinates of points modeled around the exterior were calculated from a set origin and compared to the corresponding coordinates in the BIM model to assess the potential of the remotely gathered spatial data for 3D model generation.
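
The accuracy comparison itself reduces to simple statistics over paired dimensions. The sketch below computes the absolute error, percent error, and RMSE used in Sections 5 and 6, treating the manual survey as ground truth; the example arrays are illustrative, not the study data.

    import numpy as np

    def error_statistics(manual_cm, image_cm):
        # Absolute error, percent error, and RMSE of image-based dimensions,
        # with the manual survey taken as ground truth.
        manual = np.asarray(manual_cm, dtype=float)
        image = np.asarray(image_cm, dtype=float)
        abs_err = np.abs(image - manual)                     # cm
        pct_err = 100.0 * abs_err / manual                   # %
        rmse = float(np.sqrt(np.mean((image - manual) ** 2)))
        return abs_err, pct_err, rmse

    # Illustrative values only (not the published data sets).
    abs_err, pct_err, rmse = error_statistics([381, 269, 243], [380, 261, 240])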

5. Interior results

In the final 3D scene used to verify the as-built dimensions of Room 206, 60 of 67 photos were successfully stitched together, representing 89% of the attempted reconstruction. Resulting from the automated stitching process, 9055 3D points were computed. For Room 207, 60 of 63 photos were successfully stitched, representing 95% of the attempted reconstruction. This high percentage, as well as the smaller number of iterations required for reconstruction, can be attributed to the existence of posters and other feature points on the walls of Room 207 that were not present on the walls of Room 206. A total of 10,081 3D points were automatically computed for Room 207. The two 3D models were then scaled using one manually acquired dimension in each room.

Fig. 6. Interior and exterior point cloud reconstructions and calculated camera positions.

5.1. Comparison of manual survey and image-based survey dimensions

After reconstruction, 17 dimensions were extracted from the 3D scene of Room 206 and 23 dimensions were extracted from the 3D scene of Room 207. Dimensions ranged from 0.5 m to 8.2 m and included all room, door, and window widths and heights. More dimensions were extracted from Room 207 as it had more windows and a more complex perimeter as compared to Room 206. A total of 17 and 23 dimensions were also measured manually in Rooms 206 and 207, respectively, to match the image-based survey dimensions. Comparing the image-based dimensions to the manual survey dimensions yielded absolute and percent errors for each room as summarized in Tables 2 and 3. The RMSE (root mean square error) of the image-based dimensions in Room 206 was 4.88 cm and the RMSE of the image-based dimensions in Room 207 was 6.42 cm, revealing dimensioning precision within these limits.

The largest errors seen in image-based survey dimensions in both rooms resulted from dimensions partially or completely obstructed by furniture or by the room's perimeter allowing only limited perspectives of the dimensions. This illustrates the potential difficulties in using line-of-sight sensing tools to capture operational building conditions. In Room 207, two dimensions, the bottom left corner of one door and the bottom right corner of one window, were obstructed by furniture in all photos used for reconstruction. Similarly, in Room 206 the smallest dimension of the north wall was only visible in one photo due to the limitations of the room for viewing the recessed corner. Together, the occluded dimensions represented three of the top five greatest percent errors, averaging 4.62%. When these dimensions were removed from the data sets, the average percent errors for image-based survey dimensions in Room 206 and Room 207 reduced to 1.54% and 2.33%, respectively.

Fig. 7. Exterior points and polylines exported to 3D modeling software.

Table 3. Results of manual and image-based surveys and absolute and percent errors for image-based survey dimensions and as-built BIM model for Room 207.

# | Building element | Manual (cm) | Image-based (cm) | As-built BIM (cm) | Image-based abs. error (cm) | As-built BIM abs. error (cm) | Image-based % error | As-built BIM % error
18 | Ceiling | 350 | 354 | 351 | 4.25 | 0.52 | 1.21% | 0.15%
19 | Door | 269 | 261 | 264 | 7.87 | 4.84 | 2.92% | 1.80%
20 | Door | 91 | 91 | 81 | 0.09 | 9.72 | 0.10% | 10.68%
21 | Door | 269 | 261 | 264 | 7.87 | 4.84 | 2.92% | 1.80%
22 | Door | 91 | 87 | 81 | 3.96 | 9.72 | 4.35% | 10.68%
23 | Perimeter | 608 | 605 | 612 | 2.74 | 3.82 | 0.45% | 0.63%
24 | Perimeter | 819 | 819 | 820 | 0.17 | 0.78 | 0.02% | 0.10%
25 | Perimeter | 666 | 663 | 669 | 3.05 | 2.97 | 0.46% | 0.45%
26 | Perimeter | 648 | 636 | 647 | 12.38 | 0.62 | 1.91% | 0.10%
27 | Perimeter | 58 | 56 | 57 | 2.33 | 0.85 | 4.02% | 1.47%
28 | Perimeter | 171 | 165 | 172 | 6.01 | 1.41 | 3.52% | 0.82%
29 | Window | 243 | 253 | 249 | 10.04 | 5.92 | 4.13% | 2.44%
30 | Window | 147 | 151 | 152 | 3.81 | 5.4 | 2.59% | 3.67%
31 | Window | 243 | 255 | 249 | 12.06 | 5.92 | 4.96% | 2.44%
32 | Window | 147 | 149 | 152 | 1.79 | 5.4 | 1.21% | 3.67%
33 | Window | 244 | 255 | 249 | 11.06 | 4.92 | 4.53% | 2.02%
34 | Window | 148 | 152 | 152 | 3.82 | 4.4 | 2.58% | 2.97%
35 | Window | 244 | 253 | 249 | 9.04 | 4.92 | 3.70% | 2.02%
36 | Window | 148 | 154 | 152 | 5.85 | 4.4 | 3.95% | 2.97%
37 | Window | 243 | 245 | 249 | 1.94 | 5.92 | 0.80% | 2.44%
38 | Window | 148 | 144 | 152 | 4.28 | 4.4 | 2.89% | 2.97%
39 | Window | 243 | 243 | 249 | 0.09 | 5.92 | 0.04% | 2.44%
40 | Window | 148 | 142 | 152 | 6.3 | 4.4 | 4.26% | 2.97%
Average | | | | | 5.25 | 4.44 | 2.50% | 2.68%
Std. dev. | | | | | 3.77 | 2.48 | 1.65% | 2.75%
Maximum | | | | | 12.38 | 9.72 | 4.96% | 10.68%

5.2. Verification of interior as-built BIM

Manually measured dimensions, considered as the "ground truth", were then used to verify the corresponding dimensions extracted from the as-built BIM model. The absolute and percent errors of the as-built BIM model dimensions are summarized in Tables 2 and 3. The percent errors of 7 as-built BIM dimensions in Room 206 and 14 as-built BIM dimensions in Room 207 exceeded the 2% threshold, requiring the remeasuring of these dimensions and the updating of the as-built BIM model. These erroneous dimensions, however, were unrelated to the erroneous image-based survey dimensions previously found. In each room, the door widths saw the greatest discrepancies between as-built conditions represented in the existing BIM model and the true as-built conditions, with percent errors exceeding 10% and absolute errors exceeding 10 cm. The manual survey also found the as-built BIM dimensions of the windows in both rooms to differ by 2 to 4% or 4 to 6 cm.

Table 2. Results of manual and image-based surveys and absolute and percent errors for image-based survey dimensions and as-built BIM model for Room 206.

# | Building element | Manual (cm) | Image-based (cm) | As-built BIM (cm) | Image-based abs. error (cm) | As-built BIM abs. error (cm) | Image-based % error | As-built BIM % error
1 | Ceiling | 381 | 380 | 381 | 0.73 | 0 | 0.19% | 0.00%
2 | Ceiling | 365 | 367 | 366 | 2.26 | 0.76 | 0.62% | 0.21%
3 | Door | 269 | 261 | 264 | 7.82 | 4.84 | 2.91% | 1.80%
4 | Door | 92 | 89 | 81 | 2.94 | 10.72 | 3.19% | 11.65%
5 | Door | 269 | 263 | 264 | 5.81 | 4.84 | 2.16% | 1.80%
6 | Door | 92 | 90 | 81 | 1.94 | 10.72 | 2.10% | 11.65%
7 | Perimeter | 107 | 111 | 112 | 4.08 | 5.4 | 3.81% | 5.04%
8 | Perimeter | 421 | 425 | 420 | 4.3 | 0.95 | 1.02% | 0.23%
9 | Perimeter | 608 | 613 | 612 | 5.43 | 3.82 | 0.89% | 0.63%
10 | Perimeter | 224 | 228 | 224 | 4.13 | 0.44 | 1.84% | 0.20%
11 | Perimeter | 58 | 61 | 57 | 3.04 | 0.85 | 5.25% | 1.47%
12 | Perimeter | 197 | 195 | 196 | 1.86 | 1.42 | 0.95% | 0.72%
13 | Perimeter | 666 | 679 | 669 | 13.48 | 2.97 | 2.02% | 0.45%
14 | Window | 243 | 240 | 249 | 2.83 | 5.92 | 1.16% | 2.44%
15 | Window | 148 | 146 | 152 | 1.9 | 4.4 | 1.28% | 2.97%
16 | Window | 243 | 241 | 249 | 1.83 | 5.92 | 0.75% | 2.44%
17 | Window | 148 | 148 | 152 | 0.1 | 4.4 | 0.07% | 2.97%
Average | | | | | 3.79 | 4.02 | 1.78% | 2.75%
Std. dev. | | | | | 3.16 | 3.24 | 1.38% | 3.61%
Maximum | | | | | 13.48 | 10.72 | 5.25% | 11.65%

Finally, the image-based survey dimensions were used in a direct assessment of the existing as-built BIM model to parallel the as-built BIM assessment already performed with manual survey measurements. Differences between as-built BIM dimensions and image-based survey measurements were plotted against zero and directly compared in the same plot to differences between as-built BIM dimensions and manual survey measurements (Fig. 8). A noticeable level of agreement was found between the manual and image-based survey assessments. The coefficient of determination (R2) between the percent errors identified by the manual and image-based measurements was found to equal 0.58. The R2 value increased to 0.72 when only those dimensions with errors exceeding 2% were considered. As the manual survey assessment found the greatest discrepancies in dimensions for door widths in both rooms, the image-based survey similarly showed the as-built BIM model to under-represent the actual door widths in each room (dimensions 5, 6, 21, and 22 in Fig. 8). In this way, the image-based survey method correctly identified all errors in the existing as-built BIM model that exceeded 5%.
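
The level of agreement between the two assessments is summarized with a coefficient of determination. The sketch below shows one way to compute it, assuming R2 is taken as the squared Pearson correlation between the two sets of percent differences plotted in Fig. 8; the input values are illustrative, not the study data.

    import numpy as np

    def r_squared(manual_pct_diff, image_pct_diff):
        # Squared Pearson correlation between the percent differences found by
        # the manual and image-based assessments of the as-built BIM model.
        r = np.corrcoef(np.asarray(manual_pct_diff, dtype=float),
                        np.asarray(image_pct_diff, dtype=float))[0, 1]
        return float(r ** 2)

    # Illustrative values only.
    print(r_squared([-11.7, 1.8, -2.4, 0.2], [-10.9, 2.9, -1.2, 1.0]))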

Fig. 8. Difference between interior as-built BIM model dimensions and manual and image-based survey dimensions (percent difference per dimension number; series: As-Built − Manual and As-Built − Image-Based).

Table 4. Absolute and percent errors of image-based survey dimensions with various scale points.

Scale points | Maximum error (cm / %) | Average error (cm / %) | Standard deviation (cm / %)
1 | 45.83 / 3.18% | 6.40 / 1.31% | 8.05 / 0.90%
2 | 41.16 / 3.45% | 6.40 / 1.31% | 7.71 / 0.87%
4 | 17.59 / 3.22% | 4.89 / 1.16% | 5.09 / 0.82%
8 | 17.59 / 3.33% | 4.02 / 0.92% | 4.50 / 0.72%

6. Exterior results

The final 3D scene reconstructed for the building exterior included 41,235 automatically generated 3D points and 104 stitched photos, representing only 72% of the photos attempted during image processing. The remaining unstitched 28% were all photos taken of the eastern half of the southern facade, resulting in a gap in the generated point cloud in this region, as seen in Fig. 6. This stitching failure was most likely due to the fact that the neighboring sound stage building, to the south, prevented continuous wide angle views of this section of the facade due to its proximity to SCB. The sound stage, which can be seen on the right side of Fig. 1 and in plan view in Fig. 3, sits approximately 7.5 m from the southern facade of SCB. In this area, SCB reaches a height of approximately 13.5 m, forcing a viewing angle of 61°. In addition, this section of the facade consists of repetitive columns, arches, and windows with no visual differences to distinguish them in image processing. When the photos of this region were processed on their own, 33 of 38 photos were successfully stitched. However, whereas the actual building has 7 identical arches, the image processor interpreted and reconstructed only 4 identical arches, as seen in Fig. 9. This result demonstrates the difficulty of stitching repetitious facades with limited views, even with the addition of visual markers placed on the columns. It also provides insight as to why this section of the facade was not stitched with the rest of the building.

6.1. Comparison of manual survey and image-based survey dimensions

From the remainder of the building facade, 63 image-based survey dimensions were extracted, ranging from approximately 0.8 to 25 m and representing all facade and window widths and heights. The image-based survey dimensions were first scaled with one manually acquired measurement for the entire building. The exterior facade was then divided into 2, 4, and finally 8 approximately equal sections around the perimeter of the building, and each section was independently scaled using one manually acquired measurement from that section. This was done in order to investigate improvements to accuracy achieved by using multiple scale points. The testing of multiple scale points was not conducted on the interior scenes as these scenes included significantly fewer dimensions. Comparing the manual survey dimensions to the image-based survey dimensions using 1, 2, 4, and 8 scale points found the absolute and percent errors summarized in Table 4. These errors were concluded to be independent of which scaling point was selected in each section, as the dimensions varied on average by only 7 mm depending on which of the eight manually measured dimensions was used for scaling. The maximum absolute errors and maximum percent errors do not necessarily represent the same dimensions.
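
Scaling each facade section with its own manually acquired reference, rather than one global reference, is a small extension of the single-scale-point step sketched earlier. The sketch below illustrates the per-section scheme; the section names, reference lengths, and model lengths are hypothetical, and this is not the tooling used in the study.

    def section_scales(references):
        # Per-section scale factors (cm per reconstruction unit) from one manually
        # measured reference dimension in each facade section.
        return {name: manual_cm / model_len for name, (manual_cm, model_len) in references.items()}

    # Hypothetical references: {section: (manual measurement in cm, unscaled model length)}
    scales = section_scales({
        "north-west": (2495.0, 10.21),
        "north-east": (1210.0, 4.98),
        "south-west": (880.0, 3.64),
        "west": (1530.0, 6.41),
    })

    # A dimension extracted from a given section is scaled with that section's factor.
    door_width_cm = scales["west"] * 0.39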

Fig. 9. Missing arches in southern facade reconstruction.

As demonstrated, a considerable advantage was gained when numerous scale points were used around the perimeter of the building. Considerable improvements to the maximum and average absolute errors were especially seen when four or more scale points were used, representing approximately one scaling dimension per facade. When 8 scale points were used, a significantly greater number of dimensions had errors below 1%, as shown in Fig. 10. Also with 8 scale points, only 8 of the 63 dimensions had percent errors exceeding 2% and only one dimension had a percent error exceeding 3%. The RMSE for the image-based dimensions using 8 scale points was 6.00 cm, revealing dimensioning precision within this limit. Regardless of scaling method, the largest absolute and percent errors were found in image-based survey dimensions extracted from the western facade. This is most likely due to several factors including the limited views of this facade due to surrounding fences, concave corners, and a high number of obstructions including vegetation, planters, and gates.

6.2. Verification of exterior as-built BIM

For the verification of the as-built BIM model, 8 scale points were used to achieve a relatively high level of accuracy while still ensuring the efficiency of the image-based survey method. Manual dimensions were first compared to dimensions extracted from the as-built model, finding the BIM model to very accurately reflect actual as-built exterior conditions. The average absolute and percent errors of the as-built BIM model dimensions are summarized in Table 5. As shown, the accuracy of the exterior as-built BIM model was far greater than the accuracy of the interior as-built BIM models for the test bed. No as-built BIM dimensions exceeded the established 2% threshold, but one dimension error, a perimeter width on the northern facade, reached 10 cm, requiring re-measuring and an update to the as-built BIM model.

The image-based survey dimensions were then used in a direct assessment of the existing as-built BIM model. Differences between as-built BIM dimensions and image-based survey measurements were plotted against zero and directly compared in the same plot to differences between as-built BIM dimensions and manual survey measurements. Comparing Fig. 11 to the interior verification results presented in Fig. 8 reveals that both the manual and image-based surveys found significantly smaller errors in the exterior as-built model than in the interior as-built model. However, while no gross exterior errors were found through either survey, the image-based survey reported 9 of the 63 as-built model dimensions to have errors exceeding 2% when, in reality, the manual survey confirmed their accuracy to be within this limit.

Fig. 10. Frequencies of percent errors for image-based dimensions with various scale points (frequency per percent error band: 0–1%, 1–2%, 2–3%, 3–4%; series: 1, 2, 4, and 8 scale points).

Fig. 11. Difference between exterior as-built BIM model dimensions and manual and image-based dimensions (percent difference per dimension number; series: As-Built − Manual and As-Built − Image-Based).

6.3. Assessment of image-based spatial data for 3D modeling

With the exterior as-built BIM model confirmed to accurately represent true as-built conditions, the model was used to assess the accuracy of the 3D spatial data generated by the image-based survey. In addition to 2D measurements, 3D global coordinates were extracted from both the as-built BIM model and the 3D reconstructed scene. For the assessment, 37 manually modeled points representing the relative 3D locations of facade openings on the western, northern, and eastern facades were scaled and translated to align with the as-built BIM model coordinate system. For the purpose of this investigation, only one scale point for all three facades was used. The southern facade was not included due to the known image processing errors discussed in Section 6. Differences between the Cartesian coordinates of each point were calculated, and the average and maximum errors for each direction and for the total Euclidean distance are presented in Table 6. While the average difference between 3D points was approximately 30 cm, the total volume encompassed by the building is approximately 12,000 m3. All 37 image-based coordinates were accurate within 75 cm. The R2 values between the image-based and as-built BIM model coordinates were found to exceed 0.99 for each of the x, y, and z directions, demonstrating a relatively high level of agreement.
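
The 3D assessment reduces to per-axis and Euclidean offsets between corresponding points, as reported in Table 6. The following is a minimal sketch of that comparison, assuming the image-based points have already been scaled and translated into the BIM coordinate system; the coordinates shown are illustrative, not data from the study.

    import numpy as np

    def coordinate_errors(image_pts, bim_pts):
        # Average and maximum absolute errors per axis and for the Euclidean
        # distance between corresponding image-based and as-built BIM points (cm).
        diff = np.asarray(image_pts, dtype=float) - np.asarray(bim_pts, dtype=float)
        per_axis = np.abs(diff)                        # |dx|, |dy|, |dz| per point
        euclid = np.linalg.norm(diff, axis=1)          # total 3D offset per point
        return per_axis.mean(axis=0), per_axis.max(axis=0), euclid.mean(), euclid.max()

    # Illustrative coordinates only (cm), assumed already aligned to the BIM model.
    avg_xyz, max_xyz, avg_d, max_d = coordinate_errors(
        [[12.0, 305.5, 88.0], [640.2, 310.1, 92.4]],
        [[11.4, 305.9, 87.1], [639.0, 309.0, 91.0]],
    )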

7. Opportunities and limitations of image-based survey

Table 5. Average and absolute errors of the as-built BIM model dimensions.

Maximum error (cm / %) | Average error (cm / %) | Standard deviation (cm / %)
10 / 1.29% | 1 / 0.27% | 2 / 0.28%

A comprehensive assessment of both the manual and image-based surveys conducted for this study revealed several key advantages and disadvantages of the proposed remote sensing solution. The most obvious limitations of the image-based survey included reduced accuracy compared to the manual survey and difficulties in image stitching with repetitive building elements, closely surrounding buildings, and site clutter. Despite these findings, the authors noted the following opportunities for the image-based survey:

• Measuring building elements and facade openings on upper stories inaccessible from the ground level of the surveyed site. In this study, an additional 54 window and facade dimensions were accessible by the image-based survey that were not accessible through the manual survey.

• Reduced on-site work. While the manual and image-based surveys took approximately equal time for both the interior and exterior trials, two surveyors were required for the manual survey to assist in providing suitable reflective surfaces for the laser measuring device. Only one surveyor was required for the image-based survey.

• Measuring distances without line-of-sight. Entire facade lengths could easily be measured through the image-based survey. These measurements most often had to be divided into multiple line-of-sight distances for the laser measuring device, thereby increasing the opportunity for human error.

• Generating 3D spatial data in addition to 2D dimensions. The current manual survey method does not support opportunities for 3D modeling directly from the gathered geometric data.

• Automatic digital building documentation. While not yet a CAD model, the 3D scene reconstructed from the image-based survey still allows for digital storage for future dimension takeoffs and building visualization purposes.

8. Conclusion

While further research must still be done to improve image acquisition and image processing for complex environments such as the interiors and exteriors of operational buildings, these initial results motivate further development and investigation of building verification methods using photogrammetric technology. The image-based survey did not successfully identify all of the as-built model dimensions with errors exceeding 2%, as required by FM quality assurance standards; however, it did successfully identify all dimensions with errors exceeding 5%. Specifically, the greatest geometric errors found in the existing as-built BIM model through the manual survey, the door widths in both classrooms, were clearly detected through the image-based survey.

As demonstrated with the exterior survey, significant improvements to the accuracy of extracted dimensions can be realized by using different scale points for different facades around the perimeter of the building. Multiple scale points reduce the impact of distortion and skewing in the reconstructed scene caused by limited line-of-sight between different sides of the building. The image-based spatial data captured for the building exterior also demonstrated potential for the generation of 3D building models. The comparison of the existing as-built BIM model and the 3D reconstructed scene showed an average difference of 30 cm between 3D points. The required accuracy of a 3D as-built geometric model has yet to be defined and varies between applications, but this result shows promise for photogrammetric image-processing software to aid in simple 3D building modeling.

Table 6. Average and maximum errors of image-based 3D coordinates.

Error | X | Y | Z | Euclidean distance
Average error (cm) | 18.28 | 23.11 | 11.78 | 33.13
Maximum error (cm) | 45.31 | 64.56 | 29.33 | 71.51

The proposed image-based survey method offers potential advantages to the currently employed manual survey method including: less time and labor spent on-site, increased accessibility to building geometry and features beyond the limits of traditional measuring devices, and the simultaneous generation of both 2D dimensions and digital 3D spatial data. These opportunities could be leveraged with further research in remote sensing technologies, including automatic recognition of building elements and automatic extraction of building geometry. Additionally, future work will focus on developing a more comprehensive statistical study of the quality and validity of as-built documents and models. Error analysis of spatial data, including relative measurements and absolute coordinates, is a fundamental issue in other domains including GIS [70]. Finally, new image acquisition techniques should also be investigated to improve the capture of building areas occluded by site clutter and surrounding buildings.

Acknowledgment

The authors would like to thank the Autodesk IDEA Studio for their support of this project in providing financial and technical support in using Autodesk Photo Scene Editor. Any opinions, findings, conclusions, or recommendations presented in this paper are those of the authors and do not necessarily reflect the views of Autodesk.

References

[1] M.A. Brown, F. Southworth, T.K. Stovall, Towards a Climate-friendly BuiltEnvironment, Pew Center on Global Climate Change: Oak Ridge NationalLaboratory, 2005.

[2] M.J. Kelly, Energy efficiency, resilience to future climates and long-termsustainability: the role of the built environment, Philos. Trans. R. Soc. A. 368(2010) 1091–1108.

[3] A. Akcamete, B. Akinci, J.H. Garrett, Motivation for computational support forupdating building information models (BIMs), 2009 ASCE International Work-shop on Computing in Civil Engineering, June 24, 2009–June 27; 2009, AmericanSociety of Civil Engineers, Austin, TX, United States, 2009, pp. 523–532.

[4] C. Eastman, P. Teicholz, R. Sacks, K. Liston, BIM Handbook: a Guide to BuildingInformation Modeling for Owners, Managers, Designers, Engineers, and Contrac-tors, John Wiley and Sons, Hoboken, New Jersey, 2008.

[5] M.P. Gallaher, A.C. O'Connor, J.L.J. Dettbarn, L.T. Gilday, Cost Analysis of InadequateInteroperability in the U.S. Capital Facilities Industry, National Institute ofStandards & Technology, Gaithersburg, Maryland, 2004 Report No.: NIST GCR04–867.

[6] R. Vanlande, C. Nicolle, C. Cruz, IFC and building lifecycle management, Autom.Constr. 2;18 (1) (2008) 70–78.

[7] A. Motamedi, A. Hammad, Lifecycle management of facilities components usingradio frequency identification and building information model, Electron. J. Inf.Technol. Constr. 14 (2009) 238–262.

[8] R. Vanlande, C. Cruz, C. Nicolle, Active3D: Semantic and Multimedia Merging forFacility Management. 6th International Conference on Web Information Systemsand Technologies, WEBIST 2010, April 7, 2010–April 10; 2010, Inst. for Syst. andTechnol. of Inf. Control and Commun, Valencia, Spain, 2010, pp. 21–29.

[9] A. Krukowski, D. Arsenijevic, RFID-based Positioning for Building ManagementSystems. 2010 IEEE International Symposium on Circuits and Systems. ISCAS2010, 30 May–2 june 2010, Piscataway, NJ, USA, IEEE, 2010, pp. 3569–3572.

[10] S.A. Mallepudi, R.A. Calix, G.M. Knapp, Material classification and automatic content enrichment of images using supervised learning and knowledge bases, Multimedia on Mobile Devices 2011; and Multimedia Content Access: Algorithms and Systems V; 25–26 Jan. 2011, SPIE - The International Society for Optical Engineering, USA, 2011, p. 788113, 11 pp.

[11] V.K. Bansal, Use of GIS and topology in the identification and resolution of space conflicts, J. Comput. Civ. Eng. 25 (2) (2011) 159–171.

[12] S. Dawood, R. Lord, N. Dawood, Development of a visual whole life-cycle energy assessment framework for built environment, 2009 Winter Simulation Conference, WSC 2009, December 13–16, 2009, Institute of Electrical and Electronics Engineers Inc., Austin, TX, United States, 2009, pp. 2653–2663.

[13] CIC, BIM Project Execution Planning Guide, Pennsylvania State University: Computer Integrated Construction Research Program, 2010, Report No.: 2.0.

[14] LACCD, LACCD Building Information Modeling Standards, Los Angeles Community College District, 2010 http://standards.build-laccd.org/projects/dcs/pub/BIM%20Standards/released/content.html.

[15] I. Brilakis, M. Lourakis, R. Sacks, S. Savarese, S. Christodoulou, J. Teizer, et al., Toward Automated Generation of Parametric BIMs Based on Hybrid Video and Laser Scanning Data, Elsevier Ltd, Langford Lane, Kidlington, Oxford, OX5 1GB, United Kingdom, 2010, pp. 456–465.

[16] J. Dickinson, A. Pardasani, S. Ahamed, S. Kruithof, A survey of automation technology for realising as-built models of services, 1st International Conference on Improving Construction and Use Through Integrated Design Solutions, CIB IDS 2009, June 10–12, 2009, Technical Research Center of Finland, Espoo, Finland, 2009, pp. 365–381.

[17] D. Huber, B. Akinci, P. Tang, A. Adan, B. Okorn, X. Xiong, Using laser scanners for modeling and analysis in architecture, engineering, and construction, 2010 44th Annual Conference on Information Sciences and Systems (CISS 2010); 17–19 March 2010, IEEE, Piscataway, NJ, USA, 2010, p. 6.

[18] J.D. Markley, J.R. Stutzman, E.N. Harris, Hybridization of photogrammetry and laser scanning technology for as-built 3D CAD models, 2008 IEEE Aerospace Conference, AC, March 1–8, 2008, Inst. of Elec. and Elec. Eng. Computer Society, Big Sky, MT, United States, 2008.

[19] P. Tang, D. Huber, B. Akinci, R. Lipman, A. Lytle, Automatic reconstruction of as-built building information models from laser-scanned point clouds: a review of related techniques, Autom. Constr. 19 (7) (2010) 829–843.

[20] F. Dai, M. Lu, Photo-based 3D modeling of construction resources for visualization of operations simulation: case of modeling a precast facade, 2008 Winter Simulation Conference (WSC); 7–10 Dec. 2008, IEEE, Piscataway, NJ, USA, 2008, pp. 2439–2446.

[21] S. El-Omari, O. Moselhi, Integrating 3D laser scanning and photogrammetry for progress measurement of construction work, Autom. Constr. 18 (1) (2008) 1–9.

[22] C. Gordon, B. Akinci, F. Boukamp, D. Huber, Assessment of visualization software for support of construction site inspection tasks using data collected from reality capture technologies, 2005 ASCE International Conference on Computing in Civil Engineering, July 12–15, 2005, American Society of Civil Engineers, Cancun, Mexico, 2005, pp. 1097–1106.

[23] A. Makhmalbaf, M. Park, J. Yang, I. Brilakis, P.A. Vela, 2D vision tracking methods' performance comparison for 3D tracking of construction resources, Construction Research Congress 2010: Innovation for Reshaping Construction Practice, May 8–10, 2010, American Society of Civil Engineers, Banff, AB, Canada, 2010, pp. 459–469.

[24] P. Tang, D. Huber, B. Akinci, Characterization of laser scanners and algorithms for detecting flatness defects on concrete surfaces, J. Comput. Civ. Eng. 25 (1) (2011) 31–42.

[25] P. Tang, B. Akinci, D. Huber, Semi-automated as-built modeling of light rail system guide beams, Construction Research Congress 2010: Innovation for Reshaping Construction Practice, May 8–10, 2010, American Society of Civil Engineers, Banff, AB, Canada, 2010, pp. 122–131.

[26] P.A. Fuchs, G.A. Washer, S.B. Chase, M. Moore, Applications of laser-based instrumentation for highway bridges, J. Bridge Eng. 9 (6) (2004) 541–549.

[27] E.J. Jaselskis, Z. Gao, R.C. Walters, Improving transportation projects using laser scanning, J. Constr. Eng. Manage. 131 (3) (2005) 377–384.

[28] P. Tang, B. Akinci, Extracting surveying goals from point clouds to support construction and infrastructure inspection, 2009 Construction Research Congress — Building a Sustainable Future, April 5–7, 2009, American Society of Civil Engineers, Seattle, WA, United States, 2009, pp. 1164–1173.

[29] Z. Zhu, S. German, I. Brilakis, Detection of large-scale concrete columns for automated bridge inspection, Autom. Constr. 19 (8) (2010) 1047–1055.

[30] Z. Zhu, I. Brilakis, Machine vision-based concrete surface quality assessment, J. Constr. Eng. Manage. 136 (2) (2010) 210–218.

[31] R. Jiang, D.V. Jauregui, K.R. White, Close-range photogrammetry applications in bridge measurement: literature review, Meas. J. Int. Meas. Confed. 41 (8) (2008) 823–834.

[32] I.K. Brilakis, L. Soibelman, Shape-based retrieval of construction site photographs, J. Comput. Civ. Eng. 22 (1) (2008) 14–20.

[33] I. Brilakis, L. Soibelman, Y. Shinagawa, Material-based construction site image retrieval, J. Comput. Civ. Eng. 19 (4) (2005) 341–355.

[34] Z. Zhu, I. Brilakis, Parameter optimization for automated concrete detection in image data, Autom. Constr. 19 (7) (2010) 944–953.

[35] Z. Zhu, I. Brilakis, Concrete column recognition in images and videos, J. Comput. Civ. Eng. 24 (6) (2010) 478–487.

[36] P. Arias, J. Herraez, H. Lorenzo, C. Ordonez, Control of structural problems in cultural heritage monuments using close-range photogrammetry and computer methods, Comput. Struct. 83 (21–22) (2005) 1754–1766.

[37] P. Arias, C. Ordonez, H. Lorenzo, J. Herraez, J. Armesto, Low-cost documentation of traditional agro-industrial buildings by close-range photogrammetry, Build. Environ. 42 (4) (2007) 1817–1827.

[38] F. Remondino, S. El-Hakim, A. Gruen, L. Zhang, Turning images into 3D models, IEEE Signal Process. Mag. 25 (4) (2008) 55–64.

[39] H.M. Yilmaz, M. Yakar, F. Yildiz, Documentation of historical caravansaries by digital close range photogrammetry, Autom. Constr. 17 (4) (2008) 489–498.

[40] F. Remondino, S. El-Hakim, Image-based 3D modelling: a review, Photogramm. Rec. 21 (115) (2006) 269–291.

[41] S. El-Hakim, 3D modeling of complex environments, Videometrics and Optical Methods for 3D Shape Measurement; 22–23 Jan. 2001, SPIE-Int. Soc. Opt. Eng., USA, 2001, pp. 162–173.

[42] S. El-Hakim, J.-A. Beraldin, M. Picard, A. Vettore, Effective 3D modeling of heritage sites, 3DIM 2003; 6–10 Oct. 2003, IEEE Comput. Soc., Los Alamitos, CA, USA, 2003, pp. 302–309.

[43] F. Remondino, A. Guarnieri, A. Vettore, 3D Modeling of Close-range Objects: Photogrammetry or Laser Scanning? Videometrics VIII, SPIE - The International Society for Optical Engineering, USA, 2005, pp. 216–225.

[44] S. El-Hakim, J.-A. Beraldin, M. Picard, G. Godin, Detailed 3D reconstruction of large-scale heritage sites with integrated techniques, IEEE Comput. Graphics Appl. 24 (3) (2004) 21–29.

[45] Scene: Laser Scanner Software [Internet] Available from: http://www.faro.com/focus/uk 2011.

[46] Leica Cyclone [Internet] Available from: http://www.leica-geosystems.com/images/new/product_solution/Leica_Cyclone_REGISTER_6.0_data_sheet_enUS.pdf 2008.

[47] Z. Zhu, I. Brilakis, Comparison of optical sensor-based spatial data collection techniques for civil infrastructure modeling, J. Comput. Civ. Eng. 23 (3) (2009) 170–177.

[48] F. Jazizadeh, G. Kavulya, B. Becerik-Gerber, Effects of weather conditions and object colors on the quality of 3D point clouds: A case study, June 19–22, 2011, Miami, FL, 2011 (in review).

[49] P. Tang, B. Akinci, D. Huber, Quantification of edge loss of laser scanned data at spatial discontinuities, Autom. Constr. 18 (8) (2009) 1070–1083.

[50] B. Becerik-Gerber, F. Jazizadeh, G. Kavulya, G. Calis, Assessment of target types and layouts in 3D laser scanning for registration accuracy, Autom. Constr. (2011), available online February 2011.

[51] E.M. Mikhail, J.S. Bethel, J.C. McGlone, Introduction to Modern Photogrammetry, John Wiley and Sons, New York, 2001.

[52] D. Nister, Automatic passive recovery of 3D from images and video, 2nd International Symposium on 3D Data Processing, Visualization, and Transmission; 6–9 Sept. 2004, IEEE Comput. Soc., Los Alamitos, CA, USA, 2004, pp. 438–445.

[53] L. Barazzetti, M. Scaioni, F. Remondino, Orientation and 3D modelling from markerless terrestrial images: combining accuracy with automation, Photogramm. Rec. 25 (132) (2010) 356–381.

[54] M. Pollefeys, R. Koch, M. Vergauwen, L. Van Gool, Automated reconstruction of 3D scenes from sequences of images, ISPRS J. Photogramm. Remote Sens. 55 (4) (2000) 251–267.

[55] M. Pollefeys, L. Van Gool, M. Vergauwen, F. Verbiest, K. Cornelis, J. Tops, et al., Visual modeling with a hand-held camera, Int. J. Comput. Vis. 59 (3) (2004) 207–232.

[56] Photo Scene Editor [Internet] Available from: http://labs.autodesk.com/utilities/photo_scene_editor/ 2011.

[57] PhotoModeler [Internet] Available from: http://www.photomodeler.com/index.htm 2011.

[58] iWitness [Internet] Available from: http://www.iwitnessphoto.com/support/contact.html 2010.

[59] Topcon ImageMaster [Internet], 2011.

[60] S.B. Kang, Y. Li, X. Tong, H. Shum, Image-based Rendering, Found. Trends Comput. Graphics Vis. 2 (3) (2006) 173–258.

[61] B. Triggs, P.F. McLauchlan, R.I. Hartley, A.W. Fitzgibbon, Bundle Adjustment—a Modern Synthesis, International Workshop on Vision Algorithms; 21–22 Sept. 1999, Springer, Berlin, Germany, 2000, pp. 298–372.

[62] Digital Compact Cameras [Internet] Available from: http://www.usa.canon.com/cusa/consumer/products/cameras/digital_cameras 2011.

[63] J. Teizer, T. Kahlmann, Range imaging as emerging optical three-dimension measurement technology, Transp. Res. Rec. 2040 (2007) 19–29.

[64] C. Kim, C.T. Haas, K.A. Liapi, Rapid, on-site spatial information acquisition and its use for infrastructure operation and maintenance, Autom. Constr. 14 (5) (2005) 666–684.

[65] Shape Extraction [Internet] Available from: http://labs.autodesk.com/utilities/shape_extraction_autocad/ 2011.

[66] F. Dai, M. Lu, Assessing the accuracy of applying photogrammetry to take geometric measurements on building products, J. Constr. Eng. Manage. 136 (2) (2010) 242–250.

[67] C. Ordonez, P. Arias, J. Herraez, J. Rodriguez, M.T. Martin, Two photogrammetric methods for measuring flat elements in buildings under construction, Autom. Constr. 17 (5) (2008) 517–525.

[68] C. Ordonez, P. Arias, J. Herraez, J. Rodriguez, M.T. Martin, A combined single range and single image device for low-cost measurement of building facade features, Photogramm. Rec. 23 (122) (2008) 228–240.

[69] C. Ordonez, J. Martinez, P. Arias, J. Armesto, Measuring building facades with a low-cost close-range photogrammetry system, Autom. Constr. 19 (6) (2010) 742–749.

[70] Y. Leung, J.-H. Ma, M.F. Goodchild, A general framework for error analysis in measurement-based GIS part 1: the basic measurement-error model and related concepts, J. Geogr. Syst. 6 (2004) 325–354.