Computers and Graphics 3d Web Survey Final Preprint

3D Graphics on the Web: a Survey

Alun Evans*, Marco Romeo, Arash Bahrehmand, Javi Agenjo, Josep Blat
Interactive Technologies Group, Universitat Pompeu Fabra, Barcelona, Spain

Abstract

In recent years, 3D graphics has become an increasingly important part of the multimedia web experience. Following on from the advent of the X3D standard and the definition of a declarative approach to presenting 3D graphics on the web, the rise of WebGL has allowed lower level access to graphics hardware of ever increasing power. In parallel, remote rendering techniques permit streaming of high-quality 3D graphics onto a wide range of devices, and recent years have also seen much research on methods of content delivery for web-based 3D applications. All this development is reflected in the increasing number of application fields for the 3D web. In this paper, we reflect this activity by presenting the first survey of the state of the art in the field. We review every major approach to producing real-time 3D graphics rendering in the browser and briefly summarise the approaches for remote rendering of 3D graphics, before surveying complementary research on data compression methods, and notable application fields. We conclude by assessing the impact and popularity of the 3D web, reviewing the past and looking to the future.

Keywords: 3D, Graphics, Web, Survey, Internet, Browser, WebGL, X3D, X3DOM, Three.JS, Rendering, Meshes

1. Introduction

The Web has come a long way since its humble beginnings. Initially existing as a means of sharing remote pages of static text via the internet, bound together by a new markup language, HTML, the release of the Mosaic browser allowed users to 'browse' remote 'web-pages' which featured a mixture of both text and images [1]. Following the release of the Netscape Navigator and Microsoft's Internet Explorer in the mid nineteen-nineties, development in web-technology exploded.
Cascading Style Sheets (CSS), Secure Sockets Layer (SSL), cookies, Java and Javascript, Adobe Flash, XML, and AJAX (to name but a few of the enabling technologies): all were to be incorporated into the web model during the following years. By the turn of the millennium, the web was full of professionally designed pages, rich in visual content - recovering fully, and to an unexpected extent, both of the initial goals of the web: browsing and editing content.

Yet full interactive multimedia content was invariably heavily reliant on Adobe Flash. As a proprietary system, Adobe had free rein to define the functionalities of its browser plugin [2], without necessarily making any reference to the series of standards that were being gradually undertaken and adopted by the World Wide Web Consortium (W3C). Adobe's closed system allowed developers to embed interactive audio, video, and 2D animations into a Flash executable file, which was then executed by the browser-plugin installed on the user machine, almost completely bypassing the browser itself. Flash-based applications typically required (relatively) long download times, and this fact, coupled with the later lack of availability for Apple's iOS platform, meant that Flash lost some of its initial popularity [3].

Meanwhile, the potential for creating interactive, multimedia-rich web pages, using open standards methods, grew during the early part of the new millennium. Firstly, Scalable Vector Graphics (SVG) was introduced into browsers, allowing complex 2D drawing in a manner which fit in with the existing style of HTML. Then the canvas element was introduced, also allowing 2D drawing, but differing from SVG in being controlled via Javascript.

* Corresponding author
Email addresses: [email protected] (Alun Evans), [email protected] (Marco Romeo), [email protected] (Arash Bahrehmand), [email protected] (Javi Agenjo), [email protected] (Josep Blat)
URL: http://gti.upf.edu (Josep Blat)
First introduced by Apple as part of the WebKit framework, canvas later became incorporated into the draft HTML5 standard [4] (along with SVG). HTML5 is being specifically designed to adapt HTML in a way such that it can be used to make web applications, i.e. dynamic interactive pages which are rich in multimedia content.

This consistent expansion of the scope, power, and complexity of the web has meant that technology previously reserved for custom software (or even hardware) is now being used via the web. One example of such technology is 3D graphics. While efforts have been made to implement real-time 3D graphics over the internet since the mid-nineties (see Section 2 below), recent years have seen a rapid growth in its availability and distribution. This paper presents a survey of the current state of the art in real-time 3D graphics on the web, covering rendering techniques, scene description methods, 3D-specific data delivery, and relevant application fields.

Preprint submitted to Computers & Graphics, February 24, 2014




Why a survey into 3D web graphics? We see the first of its kind as timely due to a new maturity in the subject. With all modern versions of major browsers now supporting plugin-free access to dedicated graphics hardware, and the ever-increasing power of this hardware, web-based 3D graphics applications finally have the opportunity to become a ubiquitous part of the web environment, accessible to everyone. Our goal in this paper is both to provide historical context and to assess the current state of the art, which we attempt to do by reviewing the academic literature, assessing trends in the commercial sector, and drawing on our direct experience in using 3D web technology.
Our hope is that the results of our survey will allow future developers and researchers to have a more complete understanding of the field as they proceed with their endeavours.

For the purposes of this survey, we define 3D graphics to be the use of 3D geometric data (usually through Cartesian coordinates) to perform certain calculations (for example, changes of form, animation, collision detection etc.) and to create 2D images suitable for display on a standard computer screen or monitor. The term rendering describes the process of converting the 3D data into a 2D image on a screen. Rendering techniques vary greatly in terms of their complexity, speed, photorealism and application. Photorealism is usually judged according to the realism of the 3D shape being rendered, and also how that shape is shaded (a synonym, in graphics terms, for coloured) with respect to light sources in the scene. Complex shading techniques, such as ray-tracing and radiosity, produce 2D images which are more photorealistic, at the cost of increased calculation time. By reducing the complexity of the algorithms used to simulate the effect of light in the scene, rendering time can be shortened to enable applications where the 2D image is updated at framerates fast enough to be undetectable (or barely detectable) to the human eye. One factor common to most 3D graphics is the use of perspective projection, where the projected size of any 3D object onto a 2D image is inversely proportional to its distance from the eye.
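This inverse-distance relationship can be sketched in a few lines (our own minimal illustration, not code from the survey; the function name and focal-distance parameter are assumptions):

```javascript
// Minimal sketch of perspective projection: a 3D point (x, y, z) in
// camera space is projected onto a 2D image plane at focal distance f.
// The projected size is inversely proportional to the distance z from the eye.
function projectPoint(x, y, z, f) {
  return { px: (f * x) / z, py: (f * y) / z };
}
```

Doubling the distance z halves the projected coordinates, which is why distant objects appear smaller under perspective projection.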
The use of perspective projection, homogeneous coordinates, and heavy use of 3D vectors in Cartesian coordinates to represent 3D objects means that most 3D graphics applications make extensive use of matrix mathematics to both simplify and speed up computation.

The requirement to perform calculations on multiple data points (either for calculating the projection of 3D data points, or for calculating the colour of each pixel in a rendered 2D image) has led to the development of a specific class of hardware, the Graphics Processing Unit (GPU), which is designed to process several operations in parallel. The introduction of the modern GPU heralded an unprecedented reduction in the time taken to render 3D graphics in widespread systems, thus allowing more complex and photorealistic techniques to be applied in real-time. To harness the power of the GPU, access is usually facilitated via a low-level API such as OpenGL or Direct3D. OpenGL bindings exist for most major programming languages and most major platforms, whereas Direct3D is typically restricted to Microsoft platforms (Windows or Microsoft games consoles).

In this survey we repeatedly make reference to several basic terms. Although all of these will be familiar to the graphics community, for clarity we define them here. A real-time 3D Scene features one or more objects, which can be represented in an implicit manner (such as using NURBS) or, more commonly, as polygonal Meshes. Object appearance (colour, reflectivity, transparency etc.) is described by an associated Material. Materials frequently define colours according to textures: arrays of colour values usually loaded from 2D image files. These image files, along with any other file which is loaded into the scene (for example, to specify the structure or the behaviour of a Mesh), are frequently called Assets. A Scene also contains a viewpoint or camera, as well as one or more light sources, and may be organised into a structure called a Scene Graph, where the contents of the scene are organised logically (e.g.
imposing a hierarchy) and spatially. Scene Graphs are frequently used in 3D rendering engines to group scene objects together such that different rendering processes may be applied to different groups. The components of a fixed pipeline programme use algorithms predefined by the 3D programming API to calculate the position and colour of the assets in the Scene, depending on the point of view of the camera and the location of any light sources. A programmable pipeline approach requires the programmer to carry out such calculations manually in a shader, which is a separate programme compiled at run-time and executed on the GPU. It follows, therefore, that a programmable pipeline approach requires the programmer to have a deeper understanding of the underlying mathematics, while also allowing a much finer level of control of the appearance of the final scene.

The remainder of this survey is organised as follows. We first review the state of the art in web-based 3D rendering, including historical approaches (Section 2), before presenting an overview of the field of remote rendering (Section 3) and how it applies to the 3D web. We then review techniques of data compression and content delivery (Section 4), of vital importance to any client-server based scenario, but especially relevant to 3D applications where file sizes are typically large. Following this we present a selection of standout applications, grouped by application field, which have contributed to the state of the art (Section 5). Finally, Section 6 discusses the rise in the popularity of the 3D web and concludes the paper.

2. Browser-based Rendering

Jankowski et al [5] classify browser-based 3D rendering (i.e. where the client browser/machine executes the rendering process) into declarative and imperative techniques, creating a matrix which classifies 2D and 3D graphics according to the different paradigms. Figure 1 is an adaptation of their classification.
For example, it is possible to draw 2D graphics within the browser using both SVG and the HTML5 canvas element: SVG [6] is an XML-based file format for two-dimensional vector graphics - drawing and image filter effects are coded in a declarative manner. By contrast, similar drawing and image processing can be achieved by using the canvas element in an imperative way using Javascript. Jankowski et al extend this declarative/imperative distinction into the world of 3D browser-based graphics, referencing some of the approaches which we discuss below.

Table 1: Approaches to browser-based 3D rendering, classified according to level of declarative behaviour.

Approach          Requires Plugin   Part of HTML Document   DOM Integration   Inbuilt Scene Graph   Customisable Pipeline*   Standards-based
X3D               yes               no                      no                yes                   no                       yes
X3DOM             no                yes                     yes               yes                   no                       yes
XML3D             no                yes                     yes               yes                   no                       partial
CSS Transforms†   no                partial                 yes               no                    no                       yes
O3D               no longer         no                      no                yes                   no                       partial‡
Three.JS          no                no                      no                yes                   yes                      partial‡
Stage3D (Flash)   yes               no                      no                no                    yes                      no
Java§             yes               no                      no                yes                   yes                      no
Silverlight       yes               no                      no                no                    yes                      no
Native WebGL      no                no                      no                no                    yes                      yes
Unity3D¶          yes               no                      no                yes                   no                       no

* A customisable pipeline is one where the programmer has low-level control over the render targets, render order, scene graph, materials and associated shaders.
† While CSS transforms are not true rendering per se, they are included for completeness.
‡ Both O3D and Three.JS can render via WebGL, which is standards-based.
§ Only certain Java libraries and APIs have these functionalities.
¶ Unity3D is included as a (popular) example of proprietary games engines (see section 2.12).

Figure 1: Declarative vs Imperative approaches to web-based graphics (adapted from [5])

The declarative/imperative distinction for web-based 3D graphics is useful, particularly as it allows comparison to the equivalent 2D cases. Nevertheless, when considering the broad spectrum of browser-based 3D rendering techniques, it becomes difficult to impose such a strict classification. For example, many of the approaches surveyed in this section are based on programming languages which follow an imperative paradigm, yet some of them also make heavy use of a Scene Graph, which could be considered a declarative construct. Table 1 shows a comparative summary of the browser-based 3D rendering approaches surveyed in this paper, particularly from the point of view of their classification as declarative or imperative. An approach can be said to be more declarative than imperative if:

- It is part of the HTML document (i.e. it uses a text format to declare content without direct context)
- It is capable of integrating with the Document Object Model (DOM)
- It features a scene graph
- It does not allow overloading of the rendering pipeline
- It has a high level of platform interoperability

Other important characteristics, such as the requirement for plugins, and whether the approach is standards-based or not, are also listed in the table. The remainder of this section surveys each of these technologies in turn.

2.1. VRML, X3D and ISO Standards

In 1994, David Raggett, in parallel with Mark Pesce and Tony Parisi, called for the development of the Virtual Reality Modeling Language (VRML) file format to describe a 3D scene [7], and the format (in its second version) became an ISO standard (ISO/IEC 14772-1:1997) in 1997. VRML97, as it became known, uses text to specify both the contents and appearance of a scene (i.e.
associating a mesh with a particular material) but does not allow the specification of more advanced 3D concepts such as NURBS (implicit surfaces) or complex animation. Shortly after the definition of VRML97, and in order to protect it as an open standard, the Web3D consortium was formed as a cross-section of businesses, government agencies, academic institutes and individuals. Several applications and browser plugins were developed to enable the display of VRML scenes in the browser, such as Cosmo Player, WorldView, VRMLView, and Blaxxun Contact. In many respects, these applications formed the first efforts to bring 3D graphics to the internet.

Despite these efforts, support for the format was intermittent [7], and in 2004 VRML was replaced by X3D (ISO/IEC 19775/19776/19777). While designed to be backward compatible with VRML, it provides more advanced APIs, additional data encoding formats, stricter conformance, and a componentized architecture [8]. The most immediate difference to VRML is the syntax: while X3D still supports the traditional VRML C-like syntax, it also adds support for binary and XML formats. The latter is important as it is a standard format widely used in web-technologies, and thus brings X3D closer to the web. X3D supports a multi-parent scene-graph and a runtime, event and behaviour model, as well as a dataflow system which integrates sensor, interpolation and visualisation nodes. All this permits the description of animated scenes in a declarative manner and without using imperative scripting techniques. X3D is designed to be used in both web and non-web applications; in that sense, it can be said more precisely that X3D is concerned with internet-based 3D as opposed to purely web-based. As such, an X3D enabled application (whether standalone software, or browser plugin) is required to view and interact with X3D scenes. Web browser integration involves the browser holding the scene internally, allowing the plugin developer to control and update content via a Scene Access Interface.

2.2.
X3DOM

In an attempt to extend browser support for X3D, in 2009 Behr et al [9] introduced X3DOM. X3DOM provides native, plugin-free (where possible) and independent 3D capabilities in the browser, and is designed to integrate closely with standard web techniques such as AJAX. It maintains the declarative nature of X3D and is, in effect, an attempt to code X3D directly into a web page, without the need for a browser plugin. X3DOM is defined as a front-end and back-end system, where a connector component transforms a scene defined in the front-end (the DOM) to the backend (X3D). The connector then takes responsibility for synchronising any changes between the two ends of the model. X3DOM defines an XML namespace [10], to enable an <x3d> tag (and all of its children) to be used in a standard HTML or XHTML page. An example of the code required to draw a scene featuring a sample mesh is shown below, starting from the initial <html> tag of an XHTML file (vertex data of the object is redacted for brevity). The result of this code, when loaded in a web browser, is shown in Figure 2:
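The original listing did not survive this extraction; the following is an illustrative reconstruction of the kind of XHTML/X3DOM markup described above (element names follow the X3DOM vocabulary, but the attribute values, script paths, and overall structure are our assumptions; the mesh data remains redacted as in the original):

```html
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <title>X3DOM example</title>
  <!-- Paths to the X3DOM runtime are placeholders -->
  <script src="x3dom.js"></script>
  <link rel="stylesheet" href="x3dom.css"/>
</head>
<body>
  <x3d width="500px" height="400px">
    <scene>
      <background skyColor="0.8 0.8 0.9"></background>
      <viewpoint position="0 0 50" orientation="0 0 0 1"></viewpoint>
      <!-- All content inside this transform is translated 10 units in y -->
      <transform translation="0 10 0">
        <shape>
          <appearance>
            <material diffuseColor="0.6 0.6 0.6"></material>
          </appearance>
          <!-- Mesh defined as indexed faces; no normals supplied,
               so X3DOM computes them to enable lighting -->
          <indexedFaceSet coordIndex="[face indices redacted]">
            <coordinate point="[vertex data redacted]"></coordinate>
          </indexedFaceSet>
        </shape>
      </transform>
    </scene>
  </x3d>
</body>
</html>
```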

This code snippet uses hierarchical declaration to create a 3D <scene>. A <viewpoint> position and orientation is set, as is the background colour. A <transform> tag defines translation, rotation and scaling transforms which apply to the tag contents (in this case, all the content defined within it is translated by 10 units in the y-axis). The <shape> tag defines an object in the scene, within which the object's properties (such as the mesh and appearance) are defined in parallel. The mesh in this scene is defined as a set of indexed faces; no normal vectors are supplied, so X3DOM calculates them to enable lighting effects.

Figure 2: Screenshot taken from a Mozilla Firefox browser window executing the X3DOM code shown above.

For rendering the scene, X3DOM defines a fallback model [11], trying to initialise preferred renderers first, and using alternative technology if required. Currently the fallback model first checks whether the user application is capable of rendering X3D code natively, and if so, passes the contents of the <x3d> tag to the X3D rendering engine defined by the host application. If native X3D rendering is not detected, or an X3D browser plugin is not installed, the application tries to create a WebGL rendering context, and build the X3D scene graph on top of that. With the partial integration of WebGL into Microsoft Internet Explorer 11 [12], the majority of modern browsers now support browser-based 3D rendering via WebGL (see below). If WebGL is not supported, X3DOM tries to render with Stage 3D (see Section 2.10 below) before finally presenting a non-interactive image or video if no 3D rendering is possible.

X3DOM is designed to integrate with the DOM and thus can be modified in the same manner as any DOM object.
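As a minimal sketch of such runtime DOM manipulation (the element id and function name are hypothetical, not from the paper):

```javascript
// Minimal sketch: update an X3DOM <transform> element at runtime.
// In the browser, X3DOM observes this DOM mutation and re-renders the scene.
// The element id passed in (e.g. "move-me") is a hypothetical example.
function translateShape(doc, id, x, y, z) {
  const node = doc.getElementById(id);
  node.setAttribute('translation', x + ' ' + y + ' ' + z);
  return node.getAttribute('translation');
}
```

In a browser this would be called with `document` itself, for example in response to a user-input event handler.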
For example, if the programmer wished to modify the position of the shape at runtime (for example, in response to user input), the translation attribute of the <transform> tag can be modified at runtime using Javascript, and the scene will update accordingly.

3D models can be loaded into X3D/X3DOM by defining vertex geometry as in the example above, or by importing an X3D (or VRML) scene defined in a separate file. This scene file can be coded manually or, for complex objects and scenes, it can be created using custom export plugins for 3D digital content creation (DCC) packages (such as Blender, Autodesk Maya, and Autodesk 3D Studio Max). Basic shading (such as diffuse or specular shading) is supported in a declarative manner. Schwenk et al [13] [14] added a declarative shader to X3D, enabling advanced 3D effects and lighting (such as multi-texturing, reflections and other lighting effects); this is now partially supported in X3DOM. Custom shaders are supported via the ComposedShader node; this allows the programmer to write their own shader code as long as uniform data is restricted to a supported set and naming conventions. In 2011, Behr et al extended their framework with various functionalities [15]. Several camera navigation methods were introduced, defining a Viewpoint node (seen in the above example) and several different navigation patterns (definition of custom camera navigation is also supported via manipulation of the equivalent DOM node, for example, via Javascript). Also introduced was simple object animation, either via CSS transforms and animations (see below) or by the X3D Interpolators.

2.3. XML3D

XML3D [16] is similar to X3DOM, in that it is an extension to HTML designed to support interactive 3D graphics in the browser, requiring the use of XHTML. Its stated goal is to find the minimum set of additions that fully support interactive 3D content as an integral part of 2D/3D web documents.
It takes a similar high-level approach to 3D programming as X3DOM: both use declarative language to define a 3D scene, and both expose all elements of the 3D scene to the DOM (with the benefits of DOM manipulation that arise thereafter). The central difference between the two approaches is that X3DOM grew out of an attempt to natively embed an existing framework (X3D) into a browser context, whereas XML3D proposes to extend the existing properties of HTML wherever possible, embedding 3D content into a web page with the maximum reuse of existing features. The sample XML3D code below draws a similar scene to the X3D code above (for brevity, standard html code such as the body and head tags are excluded, along with lines to load XML3D libraries and the data for the mesh):

[XML3D listing: the markup was lost in this extraction; the surviving fragments contained the vertex coordinates, normals and indices for the model, a light's parameter values (1.0 0.03 0.00 and 1.0 1.0 1.0, plus a boolean "true"), and material values (0.0, 0.4 0.12 0.18, 0.5 0.5 0.5, 0.2).]
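Since the original listing lost its markup in extraction, the following is an illustrative reconstruction of the kind of XML3D code described (element names follow published XML3D examples; the grouping of the surviving numeric fragments into light and shader parameters, and all ids, are our assumptions):

```html
<xml3d xmlns="http://www.xml3d.org/2009/xml3d" activeView="#defaultView">
  <defs>
    <transform id="meshTransform" translation="0 10 0"></transform>
    <data id="meshData">
      <float3 name="position">[vertex coordinates for model]</float3>
      <float3 name="normal">[normals for model]</float3>
      <int name="index">[indices for model]</int>
    </data>
    <lightshader id="pointLight" script="urn:xml3d:lightshader:point">
      <bool name="castShadow">true</bool>
      <float3 name="attenuation">1.0 0.03 0.00</float3>
      <float3 name="intensity">1.0 1.0 1.0</float3>
    </lightshader>
    <shader id="meshShader" script="urn:xml3d:shader:phong">
      <float name="ambientIntensity">0.0</float>
      <float3 name="diffuseColor">0.4 0.12 0.18</float3>
      <float3 name="specularColor">0.5 0.5 0.5</float3>
      <float name="shininess">0.2</float>
    </shader>
  </defs>
  <view id="defaultView" position="0 0 100"></view>
  <group transform="#meshTransform" shader="#meshShader">
    <mesh src="#meshData" type="triangles"></mesh>
  </group>
  <light shader="#pointLight"></light>
</xml3d>
```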

In this example, a <defs> tag allows the definition of various transforms, the data for the mesh to be displayed, and shaders and their uniforms, along with a light (position and intensity), and the eye view position. Transforms are defined via a tag, as in X3DOM, although they could be specified using CSS transforms (see below) for those browsers that support them, fulfilling the authors' goal to reuse existing features as much as possible. Shaders can also be defined using a fixed number of CSS properties (again, with the goal of extending existing functionality). Once all the definitions are finished, the viewpoint is defined, and the mesh and light added to the scene. In this case, the 3D mesh is defined inline, but 3D objects can also be loaded from an external file with the mesh data - stored, for example, in a JSON, XML or binary file.

XML3D also introduces another level of abstraction, seen in the example above, by using Data Containers, defining a <data> tag which wraps the primitive data [17]. This definition of declarative data containers allows the piping of the data to processing modules (see Section 2.4 below), which in turn permits the complex geometry transformations required for dynamic meshes and certain graphical effects.

XML3D rendering is implemented both in WebGL and also as a native component for the Mozilla browser framework (supporting Firefox) and Webkit-based browsers (supporting Google Chrome and Apple Safari). The native component was apparently implemented in order to avoid any performance issue with the Javascript/WebGL implementation (specifically, slow Javascript parsing of the DOM, and limitations of the OpenGL ES 2.0 specification).

2.4. Xflow

The needs of interactive 3D graphics applications go beyond simply drawing and shading meshes. Techniques such as character animation, physical material simulation, and rendering complex effects such as smoke or fire typically require a considerable amount of runtime data processing.
X3DOM, as previously mentioned, passes such issues to the X3D backend and synchronises the results to the DOM front end, yet this option is hamstrung by Javascript's slow parsing of the DOM, and furthermore is not available to alternative approaches such as XML3D. Thus, in 2010 Klein et al. proposed a declarative solution to intensive data processing on the web, called Xflow [17]. Xflow is a system for exposing system hardware in a declarative manner and allowing dataflow programming. Examples of 3D graphics applications for this technique are modifying vertex data for animated or otherwise dynamic meshes, animation of shader parameters, and image processing and post-processing. In essence, Xflow can be thought of as a data processing extension to XML3D. Xflow is integrated into XML3D and provides a crucial extension by defining a compute attribute for defined data containers. This attribute is used to provide processing operators to the data defined within a container (for example, for skeletal animation).

2.5. CSS 3D Transforms

CSS transforms are a part of the W3C specification which allow elements styled with CSS code to be transformed in two or three dimensional space. Originally developed by Apple as part of the WebKit framework [18] (and thus functional in Safari and Chrome browsers), these transforms are now fully supported by Mozilla-based browsers and almost fully supported in Internet Explorer.

3D transforms work by initially setting a perspective for the scene (using transform: perspective(value); or, more simply, perspective: value;). Once set, standard 3D transforms such as translation, rotation and scale can all be applied in three axes. By using CSS Transitions [19], simple animation between different states can be achieved. Object shading must be specified manually by defining colour or texture information (as an image) as with standard CSS.

CSS 3D transforms provide a very fast, easy to understand method of coding simple 3D effects into a webpage.
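As an illustrative sketch of the properties just described (the class names are hypothetical), a perspective container with an animated 3D rotation might be written as:

```css
/* Hypothetical example: a perspective container and a child element
   that rotates about the y-axis when given the .flipped class. */
.scene {
  perspective: 600px;          /* sets the perspective for child elements */
}
.card {
  transform: rotateY(0deg) translateZ(20px);
  transition: transform 0.5s;  /* CSS Transition animates between states */
}
.card.flipped {
  transform: rotateY(180deg) translateZ(20px);
}
```

Toggling the `flipped` class (e.g. from Javascript) would animate the rotation; any shading must still be supplied manually as colours or images.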
One interesting aspect of their use is that existing 3D content, such as a DOM element (for example, a canvas) which features 3D graphics rendered by other techniques, can be further transformed using CSS 3D. Nevertheless, the lack of true lighting and shading capabilities means CSS 3D is very limited compared with what can be achieved with more powerful declarative solutions such as X3DOM or XML3D.

2.6. Collada

Collada [20] is a declarative file format used for describing a 3D scene, and while it does not feature any form of runtime or event model, its popularity as an interchange format for web-based 3D graphics means that it is included in this section for completeness. Initially developed by Sony and Intel, it is now managed by the Khronos Group (see WebGL section below) and is an open standard with ISO/PAS 17506. Apart from describing basic object parameters such as shape and appearance, Collada also stores information about animation and physics. Superficially, it is similar to X3D, in that both define XML schemas for describing 3D scenes, and were specifically designed for transfer of digital assets in a standardised manner. X3D scenes, however, are designed to be rendered with specific X3D-capable software (such as browser plugins or X3DOM), whereas Collada is designed purely to be a format for data interchange, and is agnostic as to the rendering engine used to draw the scene. In 2013, the Collada working group announced the project to define and create the glTF format [21], which is designed to better match WebGL processing requirements (for example, using typed arrays, and storing geometry and texture assets in binary format).

Figure 3: Screenshot from the Barcelona World Race browser-based MMO game [23], rendered using O3D (reproduced with permission)

2.7. Javascript access to graphics hardware

In contrast to declarative or functional programming, the paradigm of imperative programming describes computation as a series of statements which change a programme's state.
It could be argued that most computer programming must eventually reduce to imperative code, in the sense that most low-level hardware programming (for example, with Assembly language) is imperative. The traditional language for executing non-declarative code in the web browser is Javascript, which has elements of the imperative, object-orientated, and functional programming paradigms. According to Ken Russell, the chair of the WebGL working group, one of the biggest contributing factors to the rise of imperative 3D web programming is the huge performance increase in the Javascript Virtual Machine, allowing fast control and manipulation of thousands of vertices in 3D space in every drawn frame [22].

Although it is possible to create a web-based software rendering algorithm using SVG [24] or the HTML5 Canvas, towards the end of the first decade of the 21st century efforts were being made to allow imperative programming access to dedicated graphics hardware. Principal among these was Google's O3D library [25] [26]. Developed as a cross-platform plugin for all major browsers and operating systems, O3D originally provided a Javascript API to enable access to its plugin code (written in C/C++), which in turn allowed programming of the graphics hardware, either via Direct3D or OpenGL (the decision was hidden from the final user). In order to popularise the technology, and to act as tutorials, Google published an extensive suite of demo web applications made using O3D [25]. Perhaps one of the most visible applications of the technology, in terms of number of users, was the official MMO game for the Barcelona World Race sailing regatta, which used O3D to power its 3D component (see Figure 3). The game developers noted [23] that O3D's support for multi-pass rendering (important for post-processing effects such as shadows, reflections and weather) was critical in the decision to use the API ahead of other options such as the nascent Stage 3D from Adobe (see below). This conclusion was in agreement with Sanchez et al.
[27], who concluded that O3D demonstrated better performance than X3DOM, both in terms of rendering performance (where frustum culling was possible) and animation. With the advent of WebGL, the O3D plugin was discontinued, and the API was ported to use the WebGL renderer; non-rendering elements of O3D (for example, the Scene-Graph and Events API) are still accessible and usable.

Besides O3D, other efforts were made to allow access to the GPU via Javascript. Canvas3D was a Firefox plugin from Mozilla which allowed the creation of an OpenGL context within a HTML5 canvas element, and was essentially the precursor to WebGL (see below). Canvas3D JS [28] [29] was a mid-layer API designed to simplify its use, and was later adapted to make use of WebGL (see below). Opera Software (creators of the Opera web-browser) also released a similar plugin [30] which allowed access to OpenGL calls via the HTML5 canvas. All of the above efforts use some form of browser plugin, usually programmed in C or C++, which essentially acts as a Javascript wrapper for accessing the graphics hardware via OpenGL or Direct3D. By 2009, the need to standardise methods for accessing the GPU via the browser had become clear. Thus, the Khronos Group (the not-for-profit organisation responsible for, among other things, the OpenGL specification) started the WebGL working group, and with input from Mozilla, Apple, Google, Opera and others, released version 1.0 of WebGL in 2011 [31].

2.8. WebGL & Associated Libraries

Khronos describes WebGL thus: "WebGL is a cross-platform, royalty-free web standard for a low-level 3D graphics API based on OpenGL ES 2.0, exposed through the HTML5 Canvas element as Document Object Model interfaces" [31]. OpenGL ES (Embedded Systems) 2.0 is an adaptation of the standard OpenGL API, designed specifically for devices with more limited computing power, such as mobile phones or tablets.
WebGL is designed to use, and be used in conjunction with, standard web technology; thus while the 3D component of a web page is drawn with the WebGL API via Javascript, the page itself is built with standard HTML. WebGL is purposefully built to be a lean, reasonably low-level API - indeed, the two principal declarative methods mentioned above (X3DOM and XML3D) both use or have used WebGL to some extent in their implementations. WebGL is targeted at the experienced graphics programmer, who has a good knowledge of core graphics concepts such as matrix and vector mathematics, shading, and preferably the organisation of 3D scenes with Scene Graphs [22]. The process required to create a simple box is very similar to that for any OpenGL ES 2.0 renderer: create a WebGL context (in this case, on a HTML5 Canvas element); define and load a vertex and a fragment shader and bind any uniform data; bind data buffers and pass data (for vertices, and optionally normals, colours and texture coordinates); control camera and perspective using standard model-view-projection matrices; and finally draw.

The Khronos group provides several helper Javascript files which assist in the setup and debugging of WebGL applications, and which are contained within many of the demos publicly available on the Khronos site [32]; nevertheless, as mentioned above, WebGL is clearly aimed at the more experienced graphics programmer. Directly comparing it to the declarative techniques is not particularly worthwhile, as they target different sections of the developer/artist communities.

A detailed analysis of WebGL is beyond the scope of this paper, and the reader is directed to several recent books [22] [33] [34]. However, it is interesting to introduce several Javascript libraries, whose goal is to abstract WebGL's inner workings and produce higher level code, and which have proliferated as a result of WebGL's steep learning curve. One of the first of these libraries to appear was SpiderGL [35] [36].
Its original version consists of five libraries: GL, which abstracts core WebGL functionalities; MESH, defining and rendering 3D meshes; ASYNC, to load content asynchronously; UI, to draw the user interface within the GL context; and SPACE, a series of mathematics and geometry utilities. Despite abstracting many WebGL functions, programming an application in SpiderGL still requires knowledge of 3D concepts (more than those required to use XML3D, for example) and requires at least some basic knowledge of OpenGL. After its initial impact as the first comprehensive abstraction of WebGL, SpiderGL entered a period of extensive refactoring, from which it emerged with several lower level improvements [36], such as custom extensions and better integration with other libraries.

LightGL is a low-level wrapper which abstracts many of the more code-intensive WebGL functionalities, while still requiring shader programming and matrix manipulation. Agenjo et al. [37] modify LightGL to use the popular glMatrix library [38] to create a tiered API with several layers of abstraction. OSG.JS [39] is a WebGL framework which aims to mimic an OpenSceneGraph [40] approach to 3D development. Other libraries of note are SceneJS [41], PhiloGL [42], and GLGE [43]; the latter providing a declarative method of programming a 3D scene, much like X3DOM or XML3D.

2.9. Three.JS

Perhaps the most famous library/API for web-based 3D graphics is Three.JS [44] [45]. Although originally developed in ActionScript (see below), it is now an open-source Javascript library which enables high-level programming of browser-based 3D scenes, such as that shown in Figure 4. Its modular structure means that several different rendering engines (WebGL, Canvas and SVG) have been developed to render scene content, and the library can be (and is being) extended in a distributed manner by several dozen individual contributors.
Three.JS features a scene graph, several types of camera and navigation modes, several pre-programmed shaders and materials (and the ability to program custom shaders), Level-of-Detail mesh loading and rendering, and an animation component allowing skeletal and morph-target animation.

Figure 4: Car visualisation running in Mozilla Firefox, created using Three.JS by Plus 360 Degrees [46] (reproduced with permission)

The sample code below demonstrates the steps required to load a mesh with a simple material, similar to the previous two code examples (code is Javascript/JQuery, and HTML setup is left out for brevity):

var WIDTH = 400, HEIGHT = 400;
var $container = $('#container');
var scene = new THREE.Scene();
var renderer = new THREE.WebGLRenderer();
renderer.setSize(WIDTH, HEIGHT);
$container.append(renderer.domElement);
var VIEW_ANGLE = 45, ASPECT = WIDTH / HEIGHT;
var NEAR = 0.1, FAR = 10000;
var camera = new THREE.PerspectiveCamera(VIEW_ANGLE, ASPECT, NEAR, FAR);
camera.position.z = 300;
var sphereMaterial = new THREE.MeshLambertMaterial({ color: 0xCC0000 });
var pointLight = new THREE.PointLight( 0xFFFFFF );
pointLight.position.x = 10;
pointLight.position.y = 50;
pointLight.position.z = 130;
var loader = new THREE.OBJLoader();
loader.load( 'bunny.obj', function ( object ) {
    object.traverse( function ( child ) {
        if ( child instanceof THREE.Mesh ) {
            child.material = sphereMaterial;
        }
    } );
    scene.add( object );
} );
scene.add(camera);
scene.add(pointLight);
animate();

function animate() {
    requestAnimationFrame(animate);
    renderer.render(scene, camera);
}

Beyond the obvious changes in language syntax, the Three.JS approach is not dissimilar to those of X3D and XML3D described above, particularly XML3D. A Material, a Mesh, a Camera and a Light must all be defined and added to the scene. In this example the mesh is loaded from an external file (using the ubiquitous Wavefront Object format) via an asynchronous loading method.
Once the mesh file has been downloaded and the mesh object created, it is assigned a material and added to the scene using a callback function. This ability to call functions is a major difference in comparison with the declarative examples above, and is seen again in the final few lines, which are specific instructions to request a new animation (or drawing) frame from the browser, and render. Without this final imperative part, the code would download the mesh object correctly and add it to the scene, but nothing would appear in the viewport, as no command to render the scene would have been executed after the object was downloaded.

2.10. Stage 3D

As mentioned above, Adobe's Flash plugin is a proprietary system which allows multimedia content to run inside a web page which has the Flash plugin enabled. Although there were initial attempts to embed 3D graphics into Flash (such as the now disappeared Papervision [9]), these relied on software rendering techniques which did not allow access to the GPU. Stage 3D is Adobe's proprietary 3D engine [47], with the key difference being that it allows Flash and AIR applications to draw hardware-accelerated 3D graphics. Stage 3D applications are written in ActionScript, an object-oriented language developed to write Flash-based applications. Stage 3D is marketed as a medium/low-level language for 3D graphics programming, allowing platform-independent programming of applications that are fully compatible with existing Flash libraries. It is quite low level in that a sample application must deal directly with vertex and index buffers, shaders, and the Model/View/Projection matrices common to many 3D applications, and there is no built-in scene graph. The application code is compiled against relatively high level libraries, allowing drawing to contexts which allow seamless merging with 2D Flash and/or Flash Video contents; this means that many low-level hardware aspects are hidden from the programmer by proprietary libraries.
Shaders in Stage 3D are written in Adobe Graphics Assembly Language (AGAL), a very low level assembly language, which makes writing shaders for Stage 3D a more laborious task compared with writing shaders in a higher level language such as GLSL or HLSL, which are the shading languages for OpenGL and Direct3D, respectively. Adobe has made efforts to develop a higher level shader creation package called Pixel Bender, though as of 2013, development of this tool appears to have stalled.

2.10.1. Silverlight

Microsoft Silverlight is an API for developing web-based applications, not dissimilar to Adobe Flash. It facilitates the creation of interactive multimedia applications and their distribution via the web, with client-side execution depending on a browser plugin which the user must install. Silverlight applications are created using Microsoft's .NET framework. Version 3 of Silverlight introduced basic 3D transforms (similar to modern CSS 3D transforms), but the current version (version 5) now features a fully programmable graphics pipeline, allowing access to the GPU and shader programming.

2.11. Java

The Java platform, developed initially by Sun Microsystems before its acquisition by Oracle, is now an established part of modern computing. From a web perspective, the ability to launch a Java applet (a small application which is executed within the Java Virtual Machine) within the browser was one of the earliest ways to programme more computationally expensive visualisations [48]. In particular, it is possible to allow Java applets to access the GPU, which, prior to the advent of WebGL, was one of the earlier methods of accessing hardware acceleration from a web page without relying on a custom browser plugin.

In 1998, the Java3D API was released to facilitate 3D development with Java [49]. Java3D features a full scene graph and is able to render using Direct3D or OpenGL. Development on Java3D was abandoned in 2008 as Sun Microsystems moved to push its new JavaFX platform, though development on Java3D has since been restarted by the JogAmp community [50].
For lower level access (i.e. wrapping OpenGL functions directly), other libraries exist, such as JOGL [50] or LWJGL [51]. The latter is the basis of the popular multiplatform game Minecraft, whose well documented success demonstrates that a Java-based approach to web 3D graphics can be a viable option for many developers.

2.12. Proprietary Videogame Engines

The video game industry has long been a showcase for the latest graphics technology (and a driving force for the constant performance increases and lower prices of GPUs), and it is not surprising that there have been commercial efforts to make 3D video games available via the browser. However, the historical lack of full cross-browser support for WebGL (only recently overcome with the release of Microsoft Internet Explorer 11), and the natural desire of companies to target the widest possible user base, has meant that WebGL-based 3D gaming is still in its infancy (see also Section 5.3).

Unity is a cross-platform game engine featuring advanced 3D graphics capabilities [52]. Compared to the other technologies described thus far, Unity is much higher level, aiming to be an application for creating video games (and, thus, more than simply a 3D engine for the web). Nevertheless, Unity's ease of use in creating a 3D scene and exporting it to a webpage, and its popularity among the public web (225 million claimed installs of the web-player plugin, as of 2013 [53]), merit its inclusion in this paper. Unity's technology is split into two applications - an Integrated Development Environment (IDE) (see Figure 5) for the creation of a scene/game, and a Player plugin/standalone application which allows Unity scenes/applications developed with the IDE to be executed on a target platform.

Figure 5: The Unity IDE and game engine has become a popular way to quickly embed 3D graphics into a web-browser
At its most basic level, creating a 3D scene for the web in Unity involves dragging and dropping 3D assets into a viewport, optionally setting material, light and camera settings, and then exporting the scene to a custom file format. The Unity web-browser plugin (available for all major browsers) reads this file and displays the scene in the browser window. Beyond the drag-and-drop capabilities of the IDE, a Unity scene can be modified (or even created) entirely in code, via Javascript, C# or Boo. Unity also allows exporting applications to other platforms, including mobile platforms such as iOS and Android.

Other companies have made short-term efforts to present video games as a browser-based experience; for example, Epic Games' Unreal Engine has in the past featured a web player component and now has a HTML5/WebGL-based demo available [54], while the game Battlefield Heroes [55] from DICE is a purely browser-based experience which requires downloading a plugin.

3. Remote Rendering

The concept of remote rendering involves the server-based generation of data, which is then passed to a client for visualisation. Much of the research in this field does not apply directly to web-based 3D graphics, in that it focuses on client-server rendering systems where the client is usually a non-web application. However, the increasing use of the web-browser as a 3D graphics client (as demonstrated in this paper) means that we consider it appropriate to include a brief survey of remote rendering techniques.

Commercially, remote rendering of 3D graphics for video game purposes has been exploited by at least two startup companies, Onlive [56] and Gaikai [57], the latter being purchased by Sony in 2012 [58] for $380 million. From a research perspective, it is possible to roughly classify the different approaches to remote rendering into three areas: Graphics Commands, Pixels, and Primitives or Vectors.
A fourth server-based process, that of parsing, segmenting and/or simplifying 3D objects, also falls under the remote rendering definition, but is not included in this section as it is reviewed in detail in Section 4 below.

3.1. Graphics Commands

Low-level draw calls to the server GPU are intercepted and passed to the client [59], which then renders and displays the final image. This technique has been adapted by Glander et al. [60] for parallel WebGL visualisation of existing 3D desktop applications, using AJAX to synchronise the two rendering applications. Furthermore, there are efforts being made to directly convert C/C++ code (via LLVM bytecode) to Javascript [61].

3.2. Pixels

The server renders the image and passes it directly to the client for display [62] [63] [64]. This basic method of remote rendering can be viewed as a generic data transfer issue, essentially sampling the remote system graphics buffer and sending it to the client as a video stream [65]. Several optimisations can be made to this technique, such as synchronising a high-end server render with a low-end client render [66], selectively transmitting pixels [67], or optimising the video encoding to take advantage of GPU-rendered scenes [68].

3.3. Primitives or Vectors

Feature extraction techniques are used on the server to obtain vectors to be passed to the client to render, either in 2D [69] or 3D [70]. The advantage of this method is that client devices which do not have any native 3D capabilities can render 3D objects from the passed vector data, as the costly 3D transformations are executed by the server.

3.4. Combined Techniques

Finally, there have been efforts which combine several of these techniques, such as the parallel client-server image-based rendering by Yoon [71], or the Games@Large platform [72]. The latter captures the rendering instructions of an application at run-time, and sends changes in that scene to the remote client, which renders the scene locally.
If the client is not capable of rendering the scene locally, a video stream is sent instead.

As scientific datasets become ever larger, the challenge of viewing and interacting with them locally has increased, and there has been a move towards storing the data on central parallel clusters and interacting with it via remote clients. ParaView [73] is a multi-platform, open source data analysis and visualisation application, built on top of VTK and extended to support parallel cluster rendering. Recently it has been extended to enable 3D visualisation in a web-based context, called ParaViewWeb [74], which allows a user to access a ParaView rendering cluster from within a web page.

One concrete application of remote rendering is collaborative visualisation, as the same scene must be rendered in real-time to multiple users, via either a client-server model, a peer-to-peer model, or a hybrid of the two. The Resource-Aware Visualisation Environment (RAVE) [75] was created in order to demonstrate whether web services were capable of supporting collaborative visualisation (not dissimilar to the Games@Large approach). It uses a Java applet on the client to determine the client's rendering capabilities - less powerful clients receive a video feed from a remotely rendered scene, whereas more powerful clients receive the polygonal dataset to render locally. The system was extended to support the X3D format with ShareX3D [76], which was the first implementation of a collaborative 3D viewer based on HTTP communication. A more comprehensive survey of collaborative visualisation systems, including some applications to the 3D web, is presented in [77].

4. Data Compression & Delivery

For the majority of the techniques for browser-based 3D graphics described above, the 3D data are represented by polygon (usually triangle) meshes, composed of vertices and faces. Such data, when describing an object in great detail, can be relatively large in size if unoptimised.
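The scale of the problem is easy to estimate with a little back-of-the-envelope Javascript. The layout assumed here - 32-bit floats and indices, with position, normal and texture-coordinate attributes per vertex - is illustrative rather than prescriptive:

```javascript
// Rough size of an unoptimised indexed triangle mesh.
// Assumes 4-byte floats/indices; the attribute layout is an illustrative choice.
function meshBytes(numVertices, numTriangles) {
  var floatsPerVertex = 3 + 3 + 2;        // position + normal + texture coords
  var vertexBytes = numVertices * floatsPerVertex * 4;
  var indexBytes = numTriangles * 3 * 4;  // three indices per triangle
  return vertexBytes + indexBytes;
}

// A dense scan: 10 million vertices and roughly twice as many triangles.
var bytes = meshBytes(10e6, 20e6);
console.log((bytes / (1024 * 1024)).toFixed(0) + ' MB'); // several hundred MB
```

Even before texture images are counted, such a mesh occupies hundreds of megabytes, which motivates the compression and delivery techniques surveyed below.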
For example, a laser scan of a large 3D object will result in a file which is hundreds of megabytes in size. Large file size has serious performance implications (both in terms of parsing and rendering the data); to offset/reduce these implications, there has been considerable research effort made into mesh optimisation techniques [78] [79] and mesh segmentation [80]. However, many mesh compression methods are highly geared towards dealing with specific types of data, and are not designed to handle arbitrary meshes including normal and texture information, i.e. meshes for the web [81].

4.1. Compression & Optimization

Progressive Meshes (PM), proposed originally in 1996 by Hoppe [82], allow continuous, progressive refinement of a polygonal mesh during data transmission over a network. PM works by initially transferring a coarse mesh from server to client, then progressively refining the mesh using a stream of vertex-split operations, until the high resolution mesh has been recreated. Limper et al. [81] summarise reported decode times and compression performance for the original PM technique and several of its subsequent refinements. They found that Hoppe's 1998 implementation of Progressive Meshes [83] still provides the fastest decompression time, despite several attempts to improve upon it, and without taking into account the improvements in hardware capabilities since 1998. On the other hand, the compression factor (i.e. how many bits each vertex occupies) is an order of magnitude less for more modern techniques - for example, that of Maglo et al. [84]. The conclusion, therefore, is that mesh compression research over the last decade has focused more on improving pure compression (rate-distortion) performance, and less on its speed. With this in mind, and combining the conclusions of both Limper et al. [81] and Lavoué et al.
[85], it is possible to draw up a series of requirements for mesh compression for web-based 3D rendering, separate from the requirements of more general mesh compression:

Rate distortion versus decompression speed. Mesh compression techniques for web-based 3D graphics present a special case where both decompression speed and download speed contribute to the overall performance of a given technique. An algorithm which achieves very high compression rates (for example, that of Valette et al. [86]) may not be suitable for a web-based system, in that its decompression rates are comparatively slow [81]. This problem is particularly relevant currently, where despite increasing network bandwidth (allowing rapid data transfer), there has been a parallel increase in low-power mobile devices which may struggle to execute complex decompression algorithms.

Browser-based decompression. On the other hand, plugin-free browser-based 3D may necessitate decompression using Javascript. For all the improvement in Javascript execution time in recent years, it is still slower than native code [87]. Given that much of the literature on progressive meshes reports results with native code, and despite possibilities to speed up decompression using GPU techniques, the requirement to implement the code in Javascript is not to be underestimated.

Multiple scene objects. A typical 3D scene for web-based rendering may feature several objects of varying topological complexity and size. It may also feature modification or animation of these objects.
Classic Progressive Mesh algorithms work well with regularly-sampled, closed surfaces, and as a result may not function correctly or efficiently in many use-cases for the 3D web - or, at the very least, some form of pre-processing step is required to split and classify the meshes before compression.

Various data sources for a single object. Basic PM does not take into account vertex properties other than position. While vertex position is naturally the most important component of any mesh (as it describes the shape), a mesh object may store several other components which are important for display, such as normal vectors, texture coordinates and colour values.

In recent years, however, there have been several research efforts to address these four points and create a viable method for mesh compression more suitable to web-based contexts. Tian and AlRegib [88] present a method to compress texture resolution in parallel with mesh vertex resolution. The authors note that geometric inaccuracies in the compressed mesh may be either enhanced or dissimulated by errors in the compressed texture; however, refining first the mesh and then the texture is not efficient, as either a full resolution mesh with a coarse texture or a full resolution texture with a coarse mesh will not generally provide good visualisation of the textured model. Thus, they propose a progressive solution which attempts to refine both the mesh and the texture in the same bit-stream, by proposing a bit-allocation framework. More research on bit-allocation for mesh compression has been carried out by King and Rossignac [89], who use a shape-complexity measure to optimise distortion for a given bit rate, and by Payan and Antonini [90], who use a wavelet-based method; although neither of these techniques is adapted for progressive transmission [91]. While several methods for encoding colour information exist (for example, Ahn et al. [92], who encode indices of each vertex in a colour mapping table; or Yoon et al.
[93], who introduce a predictive method based on geometry information), only in recent years have efforts been made for progressive meshes. Cirio et al. [94] propose a technique that allows any property or set of properties (such as vertex colours or normals) to drive a compression algorithm based on kd-trees. Lee et al. [91] propose a progressive algorithm which is based on the valence-driven progressive connectivity encoding from Alliez and Desbrun [95], and this technique is further optimised for the web by Lavoué et al. [85]. A mesh is encoded by progressive steps of decimation and cleansing, and colour components are quantised adaptively according to the level of detail. Lavoué et al. [85] also present several implementation details specific to decompression using Javascript, such as the use of Array Buffers and the minimisation of garbage collection, as it was found that the latter process blocked performance in unacceptable ways. Gobbetti et al. [96] convert input meshes to equally sized quad patches, each one storing vertex, colour and normal information, which are then stored in an image format. This allows the atlas images to be used for multi-resolution, fast rendering using simple mip-map operations. Limper et al. [97] present a progressive encoding scheme for general triangle soups, using a hierarchy of quantisation to reorder the original primitive data into nested levels of detail.

Mesh compression faces further problems when the chosen rendering framework relies on declarative markup, such as X3D/X3DOM or XML3D, as one of the advantages of those frameworks (the ability to describe a scene with clear, human-readable code) causes difficulties when loading scenes with very large meshes - parsing text files of hundreds of megabytes is effectively impossible for web browsers (although applying HTTP's GZIP compression can greatly reduce file sizes).
Behr et al. [98] counter this by taking advantage of Javascript Typed Arrays to introduce a BinaryGeometry component to X3DOM, allowing the scene to be described in a declarative manner, but the content geometry to be passed to the client as raw binary data. Once the data has been downloaded, it can be passed immediately to GPU memory, which effectively eliminates the relatively slow Javascript parsing of the data. Transmission time of binary data can be reduced by compressing it, for example using the OpenCTM format [99], which is designed specifically to compress mesh data. The downside of using compression for web-based methods is, as always, the associated decompression cost in the Javascript layer.

In 2011, Google launched the Google Body project [100], an interactive browser-based visualisation of the human body. The development of this work also resulted in the creation of WebGL-Loader [101], a minimalistic Javascript library for compact 3D mesh transmission. It takes full advantage of in-built browser features, such as support for GZIP and the UTF-8 file format, to attempt to enable fast decompression of mesh data in Javascript, using predictive techniques. Limper et al. [81] conducted a case study to compare WebGL-Loader with OpenCTM [99], X3DOM's Binary Geometry [98] and standard X3D. Their results demonstrated the balance between file-transmission size and decompression rate: at low bandwidths, techniques which minimised file size (such as CTM compression) provided better results, while at high bandwidths, the speed of transfer of Binary Geometry to the GPU ensured the best performance. While this initial result is not surprising, in the mobile context the result was different: the decreased power of the hardware meant that any technique which relied on heavy decompression suffered in comparison, even at high bandwidths.

4.2. Selective Transmission of Geometry

In 1996, Schmalstieg and Gervautz [102] proposed an early approach to optimising the transmission of geometry for distributed environments, arguing that the network transmission rate is the bottleneck in such a system (perhaps as true today as it was in 1996). The authors proposed a system of demand-driven geometry transmission as a method for efficient transmission of geometric models, which was later expanded by Hesina [103]. A server stores data for all the objects (and their positions) in a given scene. Client viewer applications can make server requests only for the objects that are in the client's area of interest. Thus, if geometry can be delivered from the server to the client just in time, there is no need to transfer the entire scene geometry to the client.

While not directly related to web-based 3D graphics, there has been considerable research effort made into selective transmission of 3D data for online virtual worlds and Massive Multiplayer Online (MMO) games, such as World of Warcraft [104] and Eve Online [105]. Such systems feature dedicated client-server systems which also employ peer-to-peer technology for the transfer of 3D assets. For further information, we direct the interested reader to a recent comprehensive survey paper by Yahyavi [106].

5. Applications

Despite the clear growth of 3D graphics applications across multiple platforms in the last two decades, the initial approaches to bring this growth to the internet have either stalled or did not gain traction [9]. In an attempt to research the reasons behind this, Jankowski [107] surveyed the different tasks and actions carried out by users when viewing web pages, and also when interacting with 3D content. The result is a taxonomy of 3D web use, which demonstrates that there are very few actions which are shared between what might be considered Web tasks (e.g. Click on Hyperlink, Find on Page) and 3D tasks (Spatial Navigation, Object Selection).
This lackof shared tasks leads to the conclusion that switching betweentextual (hypertext) and spatial (3D graphics) spaces can be po-tentially disorientating to the user, and any interface should takethis separation into account. This research eventually resultedin [108], which presents a hybrid 2D/3D interface, designed toavoid confusing the user.Nevertheless, Mouton et al [77] argue that web applicationshavemajorbenetswithrespecttodesktopapplications, fortwo main reasons:Web browsers are available for all major platforms, in-cluding mobile devices. This means that cross-platformcompatibility is almost guaranteed, and there is no needto spend resources in developing for dierent platforms.Applicationdeploymentismuchmorestraightforward,as a web application typically does not require the user toinstall or update any software or libraries (other than theweb browser).These advantages, combined with a greater understandingof how users interact with web-based 3D content [107], havespurred application development in the eld. In this section wepresent an overview of the dierent application elds for 3Dweb graphics, which we have grouped into dierent areas: datavisualisation and medical applications, digital content creation,video games, e-learning, and engineering, architecture and cul-tural heritage.The overview does attempt to be exhaustive, but rather toprovidethereaderwithanunderstandingofthebreadthanddepth of applications that have been, and are currently being,developed using 3D web technology.5.1. Data Visualisation and Medical Applications3D graphics have long been used as a technique for data vi-sualisation [109], and the introduction of a 3D context to theweb is now facilitating access to the eld. A remote render-ing example for visualizing scientic information already men-tioned is ParaViewWeb [74]. 
Yet as Marion and Jomier point out [110], a downside of many of these frameworks (from a web point-of-view) is that many require a customised client setup, either with plugins or external applications. Thus, [110] propose a system which uses Three.JS and WebSockets to counter these issues. The WebSockets specification [111] introduces a full-duplex socket connection created in Javascript, designed to allow real-time synchronisation between multiple browsers. Marion et al. extend their work in [112] and compare a WebGL/WebSocket implementation of a molecular visualisation with an identical ParaViewWeb scene; their results suggest better performance of the WebGL version. Other approaches to visualisation of molecular structures are presented by Zollo et al. [113], who use X3DOM to enable users to interact in real-time with molecular models built in X3D, and molecular visualisation in WebGL developed by Callieri et al. [114] (see Figure 6).

Limberger et al. [115] use WebGL to visualise source code repositories using software maps (see Figure 7), which link 3D tree-maps, software system structure and performance indicators, used in software engineering processes. Software map visualisations typically feature large numbers of vertices, each of which requires linking to several attributes in order to precisely control the visualisation. The authors thus propose a customised structure for vertex arrays that allows rapid drawing of geometry and avoids issues arising from WebGL's current 16-bit limitation for its vertex index buffers.

Figure 6: Superposition of molecular structure and underlying atomic structure in WebGL (reproduced with permission from [114])

Figure 7: WebGL used to visualise software maps from Web-based source code repositories (reproduced with permission from [115])

Medical science and education have also benefitted by increasing accessibility to 3D software, through the 3D visualization of medical data.
According to [116], visualization techniques in medicine fall into two main categories: surface extraction and volume rendering. The resulting surface representations, in both techniques, can be stored in a 3D format such as X3D and then rendered on the web in real time by taking advantage of recent 3D web developments. Warrick and Funnell [117] are the pioneers of using network technologies in medical education, by rendering the surface of the anatomical representation stored in a VRML file. In [118] animation capabilities in the form of rotational and slicing movements were added to the learning modules. In addition, the authors tried to address issues under low bandwidth through using lattices as an extension for X3D, representing high quality surface shape with minimal data. [119] propose an approach for modelling of developmental anatomy and pathology that provides users with a UI for a narrative showing different stages of anatomical development. Moreover, the user is given some basic controls such as pause, rewind, and fast forward.

A new mobile learning tool [120] provides users with an interactive 3D visualization of medical imaging to teach anatomy and manual therapy. The authors implement a new volume rendering method specialized for mobile devices that defines the color of each pixel through a ray casting method. Jacinto et al. [121] bring the medical application field into the modern HTML5/WebGL paradigm with an application that allows real-time visualisation and segmentation of medical images, using a client-server approach. Data is processed using VTK [122] on the server side, with segmentation of medical images being converted into 3D surfaces and sent to the client to be rendered using Three.JS. Mani and Li [123] present a surgical training system built with X3D and WebGL, allowing real-time updates from trainees and experienced surgeons. Congote et al.
[124] use WebGL for volume rendering of scientific data, using a ray-casting method to visualise both medical data and meteorological data.

5.2. Digital Content Creation

A clear application of web-based 3D is the creation, editing, and revision of 3D assets which are destined for other applications or productions. There have been several academic efforts at proposing and creating solutions for digital content creation [125] [126], annotation [127] [128], and scene and programme creation [37] [129]; yet with the continual success of digital animation and video game production, it is perhaps no surprise that the commercial sector is at the forefront of using the 3D web in a production environment. The digital production industry has now become global, involving many companies working on different aspects of a production from any place around the globe. Modern digital production deals with 3D assets on a daily basis, and there are now 3D applications on the web which are being used to ease the production pipeline of these digital media.

One of the most important tasks for such production is creating tools that are able to properly access 3D assets, review them and properly annotate any change. Such tools may be applied to varying facets of the authoring process, such as modeling, shading or even animation and lighting/rendering. The success of such tools is encapsulated by Tweak Software's RV [130]. RV is a desktop offline tool that allows users to review and annotate still frames, image sequences and 3D contents using OpenGL technology. As an industry tool it includes a feature that allows users to share the RV workspace through the internet in order to collaborate on the same data and watch the results of any change in real time, even if in different places in the world.

With the increased maturity of web-based technology for creating advanced user interfaces, there are several web applications attempting to replicate RV's success in a web-based environment.
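At the core of such review tools is the need to tie each note to the asset (and ideally the surface point) it concerns, rather than to a page or a still frame alone. The following is a minimal sketch of the kind of record involved; every name here is hypothetical and is not taken from any of the tools surveyed:

```typescript
// Hypothetical sketch of a review-tool annotation: a note tied to a
// specific asset and a point on its surface, so collaborators see
// comments attached to the geometry itself.

interface Annotation {
  assetId: string;
  author: string;
  text: string;
  position: [number, number, number]; // point on the mesh surface
  resolved: boolean;
}

// Collect the open (unresolved) annotations for one asset, in insertion
// order, as a reviewer's to-do list.
function openNotes(notes: Annotation[], assetId: string): Annotation[] {
  return notes.filter(n => n.assetId === assetId && !n.resolved);
}
```

A real system would add versioning and synchronisation (e.g. over WebSockets, as in [110]), but the anchoring of a note to geometry is the part that distinguishes these tools from generic commenting widgets.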
Sketchfab [131] is a web tool and community, developed with WebGL, for 3D modeling and texturing, and is designed to allow 3D artists to share their creations seamlessly via the web. Commonly, 3D models are showcased through pre-rendered turnaround videos or still images, but with tools like Sketchfab it is possible to share the results in a 3D web context, so that the viewer can interact with the scene, rotating the camera and even interacting with the content. Additional features such as the ability to write comments and the possibility to view the model in three different combined ways (wireframe, shaded and textured) make it suitable for use as a review tool for media production companies. It could be enhanced with a sketching tool, allowing reviewers to rapidly add corrections or notes, and to relate a comment to a particular part of the geometry.

Figure 8: Screenshot of the Sketchfab 3D web application [131]

Clara.io [132] is an online tool (currently in Beta) for the creation and sharing of 3D content. It features a suite of modelling tools for mesh creation, and also allows key-frame animation and annotation of meshes for sharing and collaborative creation.

Figure 9: Screenshot of the Lagoa 3D web application [133]

Lagoa [133] is a web based platform for photoreal 3D visualization and rendering developed using WebGL. It offers the possibility to work on 3D scenes through the web browser in a collaborative way by creating workgroups and simultaneously interacting with the contents and commenting on them with other members. It enables users to upload assets directly from their computer or access them from the cloud asset manager that Lagoa offers. This asset manager is also based on workgroups and allows users to see only the scenes they are permitted to work on, partially mimicking the industry standard tool for cloud asset management, Shotgun [134].
All the accessible assets can be placed, moved around and previewed in 3D space. The tool can also manage lights in real time for rapid development of the scenes, and includes a state-of-the-art photorealistic rendering engine. The latter uses all the information provided by the users in the scene and performs all the computation on servers provided by Lagoa itself. In this way, the user is not bound by the computational constraints of his/her computer and can render rapidly even on lower end hardware. The rendering is performed using progressive refinement global illumination algorithms that show the user a gradually enhanced version of the entire scene (as opposed to standard renderers, which show image tiles only when completed). This permits creative users (e.g. a lighting artist, or the director of photography) to start commenting on the work even before rendering is finished.

3DTin [135] is a tool for rapid creation of simple 3D geometries, which was recently purchased by Lagoa. Unfortunately, the resulting mesh is very basic and too simple to be used in general production, but the tool demonstrates that there is an effort to work in this direction.

Autodesk, the leader in 3D authoring software development, has also shown interest in developing 3D applications for the web and is working on different solutions. 123Design [136] is a modelling tool similar to 3DTin but, more than just creating simple geometries out of primitives, it also allows users to import their own models and modify them. It requires a plugin to be installed and does not use WebGL. Despite Autodesk's backing, 123Design does not yet achieve the quality required for general industry use. However, Autodesk has announced a platform for hosting servers running their applications, allowing users to connect and interact with the applications through an internet connection. This solution would unleash all the power of current high-end Autodesk products, together with the possibilities of working on the cloud.
Although this solution would not involve direct rendering of 3D graphics on the web, it is evidence of the interest in building 3D applications for the industry, running through the web and on the cloud.

5.3. Games

It is beyond doubt that video games have contributed to the very cutting edge of computer graphics since their invention, with several annual conferences (such as the Game Developers Conference) bringing together companies, researchers and individuals in the field. Videogames are also indelibly associated with the web thanks to the ubiquity of Adobe Flash, which has provided a platform for many 2D-based games. However, the contribution of the games field to the 3D web is less obvious, as many popular online games, such as World of Warcraft [104] or Eve Online [105], use custom, platform-specific clients. The rise of the free-to-play model has seen some efforts to produce 3D games within the browser [55] [137]; there have been some open-source efforts to port older 3D games to WebGL [138]; and Epic Games have created a demo of their Unreal Engine working with WebGL [54]. Perhaps the greatest breakthrough in the enabling of 3D gaming via the web so far has been by Unity [52] [53], discussed in detail above. With the rise of web-enabled 3D digital content creation tools (mentioned above), it is perhaps only a matter of time before more commercial attention is paid to 3D games running natively in the browser.

5.4. E-learning

It is acknowledged that students only fully absorb the learning material once they have applied it in practice; yet learning through Virtual Reality (VR) provides a simulated learning experience [139] which attempts to enhance learning by joining theory and practice. The application of 3D virtual platforms on the web has become an increasingly popular research topic since Wickens analyzed the advantages of learning in a VR environment [140].
He stated that VR can be defined based on five main concepts: 3D perspective, real-time rendering, closed loop interaction, inside-out perspective, and enhanced sensory feedback. At the same time, [141] assessed the merits and faults of VR's conceptual and technical future. Due to the special software and hardware requirements of VR systems, the development of these types of systems has a high cost associated with it [142], while in the past few years, we have witnessed the creation and proliferation of 3D virtual frameworks through the web that can be used by lower end computers.

Initially, the growth of 3D multi-user virtual worlds (MUVE) faced the challenge of motivating learners to utilize this kind of 3D capability to learn courses with greater precision. In light of recent developments of web capabilities (as surveyed in this paper), this particular type of e-learning has received a great deal of attention as a compelling means of resolving traditional teaching issues. One of the first attempts [143] in this area was launched by Linden Lab with Second Life [144], originally a 3D virtual world presented as an educational framework for simulating social interactions. The programming language of Second Life was Linden Scripting Language. However, this application suffers from two main drawbacks: poor performance due to network latency and the prohibition of sharing the content outside of the virtual environment. Later, OpenSim [145] expanded on this concept, improving the quality of the learning environment and making it free and open rather than proprietary. In addition, it allowed user-created content to be exported in a portable format called OAR and shared within the community. The advantages and disadvantages of these two systems are specifically discussed in [146] [143].
Second Life has itself become a platform for research, with several authors using it as a base for e-learning research [147] [148]; and now it has a web-based client, bringing it in line with the current trends for browser-based 3D.

[149] analysed the application of Second Life as a 3D platform on the web that offers potential as a tourism educational tool by providing interactive experiences for students. In this study, Self-Determination Theory (SDT) [150] [151] is applied as a metric to recognise significant factors which influence student learning motivations in a 3D world on the web. The results revealed that a positive emotional state had a positive and significant impact on students' intrinsic motivation when learning in a 3D virtual world.

Di Cerbo et al. [152] make specific efforts to integrate a 3D web interface into an avatar-based e-learning platform. They conclude that the addition of high-performance, plugin-free 3D graphics into their platform allowed a complete and meaningful interaction with the e-learning services, benefitting the user experience.

Figure 10: Screenshot of the Ruthwell Cross and associated narrative (reproduced with permission from [153])

5.5. Geography, Architecture and Virtual Heritage

Geography and architecture are natural application fields of 3D technology, and this is reflected in related research in the web context. Over et al. [154] provide a summary of efforts made to use OpenStreetMap (OSM) data as a basis for creating 3D city models, which are then viewed in a web-based context. Many of the developments in this field [155][156][157] create and use 3D data stored in the CityGML format [158], an XML-based open format and information model for the representation of urban objects. 3DNSITE [159] streams large hybrid data (georeferenced point clouds and photographs) to client hardware, with the goal of assisting crisis managers and first responders to familiarise themselves with an environment either during an emergency or for training purposes.
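Streaming systems of this kind, like the demand-driven geometry transmission discussed earlier, send the client only the objects near its area of interest, nearest first. The following is a minimal sketch of that prioritisation step; the names and the bounding-sphere representation are our own illustrative assumptions, not taken from any surveyed system:

```typescript
// Illustrative sketch: decide which scene objects a client should fetch,
// closest-first, ignoring anything outside its area of interest.

interface SceneObject {
  id: string;
  centre: [number, number, number]; // bounding-sphere centre
  radius: number;                   // bounding-sphere radius
}

function distance(a: [number, number, number], b: [number, number, number]): number {
  const dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Return the ids of objects whose bounding spheres fall within the
// client's area of interest, nearest first, so the server can stream
// their geometry in that order.
function fetchOrder(
  objects: SceneObject[],
  viewpoint: [number, number, number],
  interestRadius: number
): string[] {
  return objects
    .map(o => ({ id: o.id, d: distance(o.centre, viewpoint) - o.radius }))
    .filter(o => o.d <= interestRadius)
    .sort((a, b) => a.d - b.d)
    .map(o => o.id);
}
```

Production systems refine this with visibility culling, level-of-detail selection and bandwidth budgeting, but distance-sorted area-of-interest filtering is the common core.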
Lamberti [160] uses web-based 3D to visualise real-world LED-based street lighting. Geographical Information Systems (GIS) (such as Google Earth) are increasingly being augmented with 3D data [161], and there have been several efforts to aid in the streaming and visualisation of such data [162] [163].

The rise of Virtual Heritage (VH) represents the equivalent rise in the ability of technology to digitise real 3D objects (in this case, of cultural value), and display them in a virtual environment. Thus VH has been an important application of internet-based 3D graphics since the days of VRML [164][165][166]. Efforts at using modern, browser-based declarative 3D techniques in VH are presented in [167]. Manferdini and Remondino [128] outline a method to semantically segment 3D models in order to facilitate annotation, and demonstrate the new possibilities with architectural and archaeological heritage structures. Callieri et al. [153] demonstrate how 3D visualisation for VH can be combined in a narrative way with standard HTML to create narrative experiences and provide greater understanding of the artwork or cultural object in question (see Figure 10). Other modern efforts for web-based virtual heritage have involved adaptation of systems specifically for mobile contexts [168] [169].

Finally, Autodesk has recently worked in conjunction with the Smithsonian Foundation to deliver the Smithsonian X 3D [170], a WebGL virtual heritage tool for the online visualization of 3D scans. It has many features common to 3D tools, such as the choice between wireframe or shaded view, textures, and advanced real time materials, but it also provides the possibility to alter the lighting of the scene in a very direct and easy way by defining light sources around the surface of a virtual dome.
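Placing light sources on a virtual dome in this way amounts to parameterising each light by an azimuth and an elevation on a hemisphere; converting that pair into a directional-light vector is a small piece of spherical trigonometry. A sketch follows; the function and conventions are ours, not taken from [170]:

```typescript
// Sketch: convert a position on a virtual lighting dome, given as azimuth
// (radians around the vertical y axis) and elevation (radians above the
// horizon), into a unit direction vector for a directional light shining
// inwards, towards the object at the dome's centre.

function domeLightDirection(azimuth: number, elevation: number): [number, number, number] {
  // Point on the unit hemisphere (y up)...
  const x = Math.cos(elevation) * Math.sin(azimuth);
  const y = Math.sin(elevation);
  const z = Math.cos(elevation) * Math.cos(azimuth);
  // ...and the light points from that position towards the centre.
  return [-x, -y, -z];
}
```

Dragging a light around the dome in the UI then reduces to updating two scalars, which is what makes this interaction "direct and easy" compared with free placement of lights in 3D space.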
6. Discussion

In 2010, Ortiz [26] stated that there were several major hurdles which needed to be overcome before the 3D web could truly flower:

- The requirement for web browsers to use plugins in order to view and interact with 3D content; not only because plugin-installation is a barrier to installation, but because plugins are prone to cause browser crashes and other problems.
- The reliance on plugins also damages cross-platform compatibility and contributes to the inability of 3D to work on all browsers and operating systems.
- The lack of standardisation may lead to the web becoming a tangled mess of incompatible formats and technologies, forcing developers to create multiple versions of the same technology.
- The long authoring times for 3D content on the web are a barrier to entry, both for web developers and end users.
- Proponents are not designing online 3D technology for the average user (which, it is claimed, led to the perceived failure of VRML).
- The rise in use of low-power devices means that running 3D content on the client side is increasingly difficult.

Referring to the work surveyed in this paper, we can now attempt to conclude whether any progress has been made on any of these points, and whether any new issues have arisen. Firstly, it is clear that the need to use plugins for web-based 3D content is now diminishing. The release of WebGL, now supported by all major browsers, means that developers can access graphics hardware acceleration via the browser, without requiring the user to install a third party plugin. This flexibility is reflected in the number of libraries which abstract and add functionality to the core WebGL specification, chief of which is Three.JS. The removal of plugins means that cross-platform compatibility is also greatly enhanced.
The only current remaining stumbling block to the general acceptance of WebGL is the continuing reluctance of Apple to enable official support on its Mobile Safari browser (the default browser on all the company's internet-capable mobile devices, such as the iPhone and iPad), despite clear evidence that the functionality is present (in both hardware and software) [171].

Efforts towards a standard for web-based 3D graphics have achieved mixed results. Table 2 summarises some very basic statistics on the relative popularity of X3D/X3DOM (proposed standard) and Three.JS (non-standard library built on WebGL). Three comparison measures are used: (a) the number of stars (recommendations by users) the projects have received on the popular GitHub online repository; and (b) and (c) the number of returned hits for published papers when searching on two major academic portals, the ACM Digital Library and the IEEE Xplore Digital Library respectively. While not rigorous in terms of assessment, they show a clear difference in popularity between the academic community on one hand and the wider development community on the other. The X3D standard, and its younger, browser-integrated cousin, X3DOM, are featured in dozens of academic articles, whereas Three.JS is barely mentioned in any.

Keyword        GitHub stars   ACM Digital Library   IEEE Xplore
X3DOM (X3D)    143 (n/a)      71 (547)              9 (127)
Three.JS       13616          14                    1

Table 2: Popularity of X3D and Three.JS on different online platforms. GitHub is an online source code repository used for collaboratively creating software projects; stars can be given by users to a particular project (maximum one star per project per registered user). The ACM and IEEE digital libraries are databases of academic papers published in journals and affiliated conferences for these two institutions.
The GitHub stars show that Three.JS is clearly much more popular among developers than X3DOM. This presents a quandary for the standards community, as while there has clearly been considerable research effort put into creating a standard for declarative 3D, most developer community attention has been paid to an open source library which has grown in a less formal way (not forgetting that both Three.JS and X3DOM depend heavily on WebGL, itself a standard of the Khronos group). The appearance of XML3D muddies the water yet further for 3D web standards - while it shares many of the overall goals of the declarative 3D paradigm favoured by X3D/X3DOM, its existence somewhat undermines the latter's goal to become a web standard. Nevertheless, the community is making efforts to tread carefully through this minefield, with Jankowski et al. [5] proposing a common PolyFill layer for all declarative 3D approaches, and the development of interoperability tools such as those by Berthelot et al. [172]. Through this open dialogue and collaboration, then, there is hope that Ortiz's [26] tangled mess of standards can yet be avoided.

Our survey on Digital Content Creation for the 3D web (see Section 5.2) has demonstrated that there is now real effort into the task of democratising the creation of 3D content. From both the academic field (see Figure 11) and the commercial field, there are now several tools and interfaces that are bringing the power of 3D content and scene creation within a web context, partially removing the need for installation of large and/or expensive desktop software (though we have yet to see the release of a fully-featured 3D modelling tool).

Figure 11: WebGLStudio is an example of how web 3D tools are being used to democratise the process of 3D scene creation (reproduced with permission from [37])

It is clear that the rise of low-power devices, such as mobile phones and tablets, is not presenting a large barrier to adoption of the 3D web.
Modern mobile devices can execute 3D content in the browser, and Section 3 references several efforts made to offset the downsides of a mobile context, particularly regarding processing power and bandwidth consumption.

Our general conclusion from this survey is that the world of web-based 3D graphics is vibrant and exciting, both in the academic and wider developer communities. Each passing year brings further developments which are shared online, in the general academic press and conferences, and in specific gatherings such as the International Conference on 3D Web Technology, which in 2013 was staged for the 18th time.

Acknowledgements

The authors would like to acknowledge the support of the IMPART Project, funded by the ICT 7th Framework Programme of the European Commission (http://impart.upf.edu).

References

[1] Schatz BR, Hardin JB. NCSA Mosaic and the World Wide Web: Global Hypermedia Protocols for the Internet. Science (New York, NY) 1994;265(5174):895-901.
[2] Curtis H. Flash Web Design: The Art of Motion Graphics. New Riders Publishing; 2000. ISBN 0735708967.
[3] Nielsen J. Designing Web Usability. New Riders; 1999. ISBN 156205810X.
[4] W3C. HTML5 Specification. 2009. URL: http://www.w3.org/TR/html5/.
[5] Jankowski J, Ressler S, Jung Y, Behr J, Slusallek P. Declarative Integration of Interactive 3D Graphics into the World-Wide Web: Principles, Current Approaches, and Research Agenda. In: Proceedings 18th International Conference on 3D Web Technology (Web3D'13). 2013, p. 39-45.
[6] W3C. Scalable Vector Graphics. 2001. URL: http://www.w3.org/Graphics/SVG/.
[7] Geroimenko V, Chen C. Visualizing Information Using SVG and X3D. Springer; 2004. ISBN 1852337907.
[8] Web3D. X3D. 2013. URL: http://www.web3d.org/x3d/specifications/x3dspecification.html.
[9] Behr J, Eschler P, Jung Y, Zöllner M. X3DOM: a DOM-based HTML5/X3D integration model. Proceedings of the 14th International Conference on 3D Web Technology 2009;:127-36.
[10] W3C. Namespaces in XML. 1998.
URL: http://www.w3.org/TR/REC-xml-names/.
[11] Behr J, Jung Y, Keil J, Drevensek T. A scalable architecture for the HTML5/X3D integration model X3DOM. In: Proceedings of the 15th International Conference on 3D Web Technology. ISBN 9781450302098; 2010, p. 185-94.
[12] Microsoft. WebGL. 2013. URL: http://msdn.microsoft.com/en-us/library/ie/bg182648%2528v=vs.85%2529.aspx.
[13] Schwenk K, Jung Y, Behr J, Fellner DW. A modern declarative surface shader for X3D. In: Proceedings of the 15th International Conference on Web 3D Technology - Web3D '10. 2010, p. 7.
[14] Schwenk K, Jung Y, Voß G, Sturm T, Behr J. CommonSurfaceShader revisited: improvements and experiences. In: Proceedings of the 17th International Conference on 3D Web Technology. ISBN 1450314325; 2012, p. 93-6.
[15] Behr J, Jung Y, Drevensek T, Aderhold A. Dynamic and interactive aspects of X3DOM. In: Proceedings of the 16th International Conference on 3D Web Technology - Web3D '11. 2011, p. 81.
[16] Sons K, Klein F, Rubinstein D, Byelozyorov S, Slusallek P. XML3D. In: Proceedings of the 15th International Conference on Web 3D Technology - Web3D '10. 2010, p. 175.
[17] Klein F, Sons K, Rubinstein D, Slusallek P. XML3D and Xflow: Combining Declarative 3D for the Web with Generic Data Flows. IEEE Computer Graphics and Applications 2013;33(5):38-47. doi:10.1109/MCG.2013.67.
[18] W3C. CSS 3D Transforms. 2012. URL: http://www.w3.org/TR/css3-transforms/.
[19] W3C. CSS Transitions. 2009. URL: http://www.w3.org/TR/css3-transitions/.
[20] Arnaud R, Barnes M. COLLADA: sailing the gulf of 3D digital content creation. CRC Press; 2006. ISBN 1568812876. URL: http://www.lavoisier.fr/livre/notice.asp?ouvrage=1828050.
[21] Khronos. glTF. 2013.
[22] Parisi T. WebGL: Up and Running. O'Reilly Media; 2012. ISBN 144932357X.
[23] Evans A, Agenjo J, Abadia J, Balaguer M, Romeo M, Pacheco D, et al. Combining educational MMO games with real sporting events. 2011.
[24] Tautenhahn L. SVG-VML-3D. 2002. URL: http://www.lutanho.net/svgvml3d/.
[25] Google. O3D. 2008.
URL: https://code.google.com/p/o3d/.
[26] Ortiz Jr. S. Is 3D Finally Ready for the Web? Computer 2010;43(1):14-6.
[27] Sánchez JR, Oyarzun D, Díaz R. Study of 3D web technologies for industrial applications. In: Proceedings of the 17th International Conference on 3D Web Technology - Web3D '12. 2012, p. 184.
[28] C3DL. Canvas 3D JS. 2008. URL: http://www.c3dl.org/.
[29] Leung C, Salga A, Smith A. Canvas 3D JS library. In: Proceedings of the 2008 Conference on Future Play: Research, Play, Share - Future Play '08. ISBN 9781605582184; 2008, p. 274.
[30] Johansson T. Taking the canvas to another dimension. 2008. URL: http://my.opera.com/timjoh/blog/2007/11/13/taking-the-canvas-to-another-dimension.
[31] Khronos. WebGL Specification. 2011. URL: http://www.khronos.org/registry/webgl/specs/latest/1.0/.
[32] Khronos. WebGL Demo Repository. 2013. URL: http://www.khronos.org/webgl/wiki/DemoRepository.
[33] Cantor D, Jones B. WebGL Beginner's Guide. Packt Publishing; 2012. ISBN 184969172X.
[34] Matsuda K, Lea R. WebGL Programming Guide: Interactive 3D Graphics Programming with WebGL (OpenGL). Addison-Wesley Professional; 2013. ISBN 0321902920.
[35] Di Benedetto M, Ponchio F, Ganovelli F, Scopigno R. SpiderGL. In: Proceedings of the 15th International Conference on Web 3D Technology - Web3D '10. ISBN 9781450302098; 2010, p. 165. doi:10.1145/1836049.1836075.
[36] Di Benedetto M, Ganovelli F, Banterle F. Features and Design Choices in SpiderGL. In: Cozzi P, Riccio C, editors. OpenGL Insights. CRC Press. ISBN 978-1439893760; 2012, p. 583-604.
[37] Agenjo J, Evans A, Blat J. WebGLStudio. In: Proceedings of the 18th International Conference on 3D Web Technology - Web3D '13. ISBN 9781450321334; 2013, p. 79.
[38] GlMatrix. glMatrix v2.2.0. 2013. URL: http://glmatrix.net/.
[39] Pinson C. OSG.JS. 2013. URL: http://osgjs.org/.
[40] Osfield R, Burns D. Open scene graph. 2004. URL: http://www.openscenegraph.org/.
[41] Kay L. Scene.js. 2010. URL: http://www.scenejs.org/.
[42] Belmonte N. PhiloGL. 2013.
URL: http://www.senchalabs.org/philogl/.
[43] Brunt P. GLGE.org. 2010. URL: http://www.glge.org/.
[44] Cabello R. Three.JS. 2010. URL: http://threejs.org/.
[45] Dirksen J. Learning Three.js: The JavaScript 3D Library for WebGL. Packt Publishing; 2013. ISBN 1782166289.
[46] Degrees P. Car Visualizer. 2013. URL: http://carvisualizer.plus360degrees.com/threejs/.
[47] Adobe. Stage 3D. 2011. URL: http://www.adobe.com/devnet/flashplayer/stage3d.html.
[48] Boese ES. An Introduction to Programming with Java Applets. Jones & Bartlett Learning; 2009. ISBN 0763754609.
[49] Oracle. Java3D. 2008. URL: https://java3d.java.net/.
[50] JogAmp. JogAmp. 2013. URL: http://jogam