3D Time of Flight Camera



    INDEX

1) Abstract
2) Introduction
3) Purpose
4) Scope
5) Overall Description
   5.1) Concept
   5.2) Principle
   5.3) Components Of TOF Camera
   5.4) Working
   5.5) Faults In TOF Camera
   5.6) Practical Use
   5.7) Technologies Developed
   5.8) Advantages
   5.9) Shortcomings
   5.10) Applications
6) Conclusion
7) Bibliography

ABSTRACT

This report introduces the 3D time-of-flight camera as a new device and examines its problems and the possibilities for its practical application.

    INTRODUCTION

    What is Time Of Flight?

    Time of flight (TOF) describes a variety of methods that measure the time

    that it takes for an object, particle or acoustic, electromagnetic or other wave to

    travel a distance through a medium. This measurement can be used for a time

    standard (such as an atomic fountain), as a way to measure velocity or path length

    through a given medium, or as a way to learn about the particle or medium (such as

    composition or flow rate). The traveling object may be detected directly (e.g., ion

    detector in mass spectrometry) or indirectly (e.g., light scattered from an object in

laser Doppler velocimetry).

    Time-of-flight cameras are relatively new devices, as the semiconductor

    processes have only recently become fast enough for such devices. The systems

    cover ranges of a few meters up to about 60 m. The distance resolution is about

    1 cm. The lateral resolution of time-of-flight cameras is generally low compared to

standard video cameras, at 320 × 240 pixels or less. Only one camera reports 484 × 648 pixels of resolution using a standard CCD sensor. The biggest advantage of the

    cameras may be that they provide up to 100 images per second.


Why use a 3-D TOF camera?

When using a 3D-camera, three specific reference values are of increased interest:

- the accuracy of the coordinates,
- the resolution of the recording chip (number of pixels),
- the maximum procurable range.

With the development of the latest generation of 3D-cameras, much progress has been made in regard to these three items:

- accuracy is now in the range of a few centimetres,
- resolution goes up to 40k pixels,
- the maximum distance was raised to about 20 m.

In light of these developments, the question whether the 3D-camera may be used as an instrument for surveying and recording is worth investigating. This may be

    done by extrapolating from the technical specifications of currently available 3D-

    cameras into the near future, which will most certainly bring higher resolution,

    increased accuracy and an expanded maximum distance. Further developments of

this kind will be realised soon, as prospective customers exist in the fields of robotics, vehicle construction and many other areas.

    The main questions are as follows:

    1. Are there any fundamental problems in the process of recording architecture?

    2. Which new possibilities for recording of geometry and for visualisation

    purposes arise for practical work?


3. Is the proposed 2D- / 3D-system capable of replacing current devices?

    PURPOSE

Today is a world of technology. None of the technologies available in any field at present is complete in itself, and scientists are continuously trying to develop new ones that are better than the present ones.

    The 2-D camera has been used for developing pictures, video recording, graphics

    modulation, and in many applications for many decades, but now a 3-D time of

    flight technology has been developed. A traditional camera delivers a two-

    dimensional image of the surrounding world. The 3D-time-of-flight camera (ToF) -

    which was developed in the last decade - captures not only two, but three

    dimensions in one step by measuring the distance from each pixel of the camera

    chip to the object. This way a large number of coordinates is determined for each

exposure, a so-called frame.

The 3-D time-of-flight technology is quite superior to 2-D technology in the images it produces. The target is illuminated using infrared light (generally); the reflected light is gathered by the image sensor, and a cloud of points is generated in which each point corresponds to an exposed part of the object.


The main advantage of using this technology is the speed it provides. As noted in the introduction, the systems cover ranges of a few meters up to about 60 m with a distance resolution of about 1 cm, and the cameras provide up to 100 images per second.

Another main feature of the TOF camera is that it is quite simple to use. The whole system is very compact: the illumination is placed just next to the lens, and no mechanical moving parts are needed.

The 3-D TOF camera also provides an efficient algorithm to calculate the distance between the object and the light source. This task therefore uses only a small amount of processing power, in contrast to stereo vision, where complex correlation algorithms have to be implemented. After the distance data has been extracted, object detection, for example, is also easy to carry out because the algorithms are not disturbed by patterns on the object.

So, 3-D time-of-flight technology has a great advantage over traditional 2-D technology in image generation, and it will be quite useful in the coming years.


    SCOPE

Due to the speed and simplicity of 3-D time-of-flight technology, it has become very popular in many fields. It is used in automotive assistance and safety functions, in human-machine interfaces and gaming, in measurement and machine-vision tasks, and in robotics; these application areas are described in detail in the Applications section below.


    OVERALL DESCRIPTION

1) Concept Of Time Of Flight

    Time-of-Flight (ToF) sensors provide a direct way for acquiring 3D surface

    information of objects. More recently, applications like gesture recognition or

automotive passenger classification are using ToF sensors, and ToF is on its way to becoming a component of consumer electronics.

    As ToF sensors provide data at rates higher than 15 Hz, they are suitable for

    real-time 3D imaging and can also be used as an additional imaging modality in

medicine. We are convinced that ToF technology can contribute to enhancing applications within medicine and suggest several applications within this field.



2) Time Of Flight Principle

    A Time of flight camera (TOF camera) is a camera system that creates

distance data with the help of the following principle:

The time-of-flight (TOF) principle describes a variety of methods that measure the

    time that it takes for an object, particle or acoustic, electromagnetic or other wave

    to travel a distance through a medium. This measurement can be used for a time

    standard, as a way to measure velocity or path length through a given medium, or

    as a way to learn about the particle or medium. The traveling object may be

    detected directly or indirectly.

The 3D time-of-flight (ToF) camera simultaneously delivers grey-level images and 3D information of the scene. The illumination source is a matrix of LEDs. On the basis of the reflected ray:

- the Distance Image D(i, j) is computed based on the phase shift between the emitted and reflected signals,
- the Amplitude Image A(i, j) is estimated based on the amplitude of the reflected signal at every pixel location.
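
As an illustration of how D(i, j) and A(i, j) can be obtained, the sketch below implements the common 'four-bucket' estimator for continuous-wave ToF pixels, which samples the correlation at four phase offsets. The sampling scheme, the 20 MHz modulation frequency and all names here are illustrative assumptions, not the specific algorithm of the camera described in this report.

```python
import numpy as np

C = 3e8        # speed of light in m/s
F_MOD = 20e6   # assumed modulation frequency (20 MHz, as used later in the text)

def demodulate(a0, a1, a2, a3):
    """Estimate distance and amplitude from four correlation samples per
    pixel, taken at 0, 90, 180 and 270 degrees of the modulation period
    (a common four-bucket scheme for continuous-wave ToF sensors)."""
    phase = np.arctan2(a3 - a1, a0 - a2) % (2 * np.pi)   # phase shift in [0, 2*pi)
    distance = C * phase / (4 * np.pi * F_MOD)           # D(i, j) from the phase shift
    amplitude = 0.5 * np.hypot(a3 - a1, a0 - a2)         # A(i, j) from the signal strength
    return distance, amplitude

# Ideal, noise-free samples for a pixel 3 m away:
phi = 4 * np.pi * F_MOD * 3.0 / C
samples = [np.cos(phi + k * np.pi / 2) for k in range(4)]
print(demodulate(*samples))   # -> approximately (3.0, 1.0)
```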


    3) Components Of Time Of Flight camera

    A time-of-flight camera consists of the following components:

- Illumination unit: It illuminates the scene. As the light has to be modulated with high speeds up to 100 MHz, only LEDs or laser diodes are feasible. The illumination normally uses infrared light to make it unobtrusive.

- Optics: A lens gathers the reflected light and images the environment onto the image sensor. An optical band-pass filter passes only light with the same wavelength as the illumination unit. This helps suppress background light.

- Image sensor: This is the heart of the TOF camera. Each pixel measures the time the light has taken to travel from the illumination unit to the object and back. Several different approaches are used for timing; see the technologies described below.

- Driver electronics: Both the illumination unit and the image sensor have to be controlled by high-speed signals. These signals have to be very accurate to obtain a high resolution. For example, if the signals between the illumination unit and the sensor shift by only 10 picoseconds, the distance changes by 1.5 mm. For comparison: current CPUs reach frequencies of up to 3 GHz, corresponding to clock cycles of about 300 ps; the corresponding 'resolution' is only 45 mm (see the sketch after this list).

- Computation/Interface: The distance is calculated directly in the camera. To obtain good performance, some calibration data is also used. The camera then provides a distance image over a USB or Ethernet interface.
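
The sensitivity of the distance to the driver timing can be checked with a one-line calculation; this is a back-of-the-envelope sketch, not camera firmware.

```python
C = 3e8  # speed of light in m/s

def distance_shift(dt):
    """Distance change caused by a timing shift dt (in seconds) between
    illumination and sensor; the factor 1/2 accounts for the round trip."""
    return C * dt / 2

print(distance_shift(10e-12))   # 10 ps  -> 0.0015 m = 1.5 mm
print(distance_shift(300e-12))  # 300 ps -> 0.045 m = 45 mm
```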


    4) Working of TOF camera

    The simplest version of a time-of-flight camera uses light pulses. The

    illumination is switched on for a very short time; the resulting light pulse

    illuminates the scene and is reflected by the objects. The camera lens gathers the

    reflected light and images it onto the sensor plane. Depending on the distance, the

    incoming light experiences a delay. As light has a speed of c = 300,000,000 meters

per second, this delay is very short: an object 2.5 m away will delay the light by t = 2 · 2.5 m / c ≈ 16.7 ns.

The pulse width of the illumination determines the maximum range the camera can handle. With a pulse width of e.g. 50 ns, the range is limited to d = c · 50 ns / 2 = 7.5 m.

    These short times show that the illumination unit is a critical part of the

    system. Only with some special LEDs or lasers is it possible to generate such short

    pulses.

Each pixel consists of a photosensitive element (e.g. a photo diode) that converts the incoming light into a current. In analog timing imagers, fast switches connected to the photo diode direct the current to one of two (or several) memory elements (e.g. capacitors) that act as summation elements. In digital timing imagers, a time counter, running at several gigahertz, is connected to each photo detector pixel and stops counting when light is sensed.
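
For the digital variant, the read-out reduces to converting a tick count into a distance. A minimal sketch, assuming an illustrative 2 GHz pixel counter:

```python
C = 3e8          # speed of light in m/s
CLOCK_HZ = 2e9   # assumed counter clock; real pixel counters run at several GHz

def counter_distance(ticks):
    """Distance from the number of clock ticks counted until the reflected
    light stopped the pixel's counter (round-trip time halved)."""
    return C * (ticks / CLOCK_HZ) / 2

print(counter_distance(33))  # 33 ticks = 16.5 ns -> about 2.5 m
```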


    In the diagram of an analog timer (given below), the pixel uses two switches

    (G1 and G2) and two memory elements (S1 and S2). The switches are controlled

    by a pulse with the same length as the light pulse, where the control signal of

    switch G2 is delayed by exactly the pulse width. Depending on the delay, only part

    of the light pulse is sampled through G1 in S1, the other part is stored in S2.

    Depending on the distance, the ratio between S1 and S2 changes as depicted in the

    drawing. Because only small amounts of light hit the sensor within 50 ns, not only

one but several thousand pulses are sent out (repetition rate tR) and gathered, thus increasing the signal-to-noise ratio.

    After the exposure, the pixel is read out and the following stages measure

    the signals S1 and S2. As the length of the light pulse is defined, the distance can

be calculated with Medina's formula:

d = (c · t0 / 2) · S2 / (S1 + S2)

In the example, the signals have the following values: S1 = 0.66 and S2 = 0.33. The distance is therefore:

d = 7.5 m · 0.33 / (0.66 + 0.33) = 2.5 m
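
The sketch below reproduces this calculation for an ideal, noise-free pixel; the function name and constants are illustrative.

```python
C = 3e8      # speed of light in m/s
T0 = 50e-9   # pulse width in seconds; maximum range is C * T0 / 2 = 7.5 m

def pulsed_tof_distance(s1, s2, t0=T0):
    """Medina's formula for a two-gate pulsed ToF pixel: the fraction of
    the pulse energy that lands in the delayed gate S2 encodes the delay."""
    return (C * t0 / 2) * s2 / (s1 + s2)

print(pulsed_tof_distance(0.66, 0.33))  # -> 2.5 m, as in the example above
```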

    In the presence of background light, the memory elements receive an

    additional part of the signal. This would disturb the distance measurement. To

    eliminate the background part of the signal, the whole measurement can be

    performed a second time with the illumination switched off. If the objects are

    further away than the distance range, the result is also wrong. Here, a second

    measurement with the control signals delayed by an additional pulse width helps to

    suppress such objects. Other systems work with a sinusoidally modulated light

    source instead of the pulse source.
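
The two-pass background elimination described above can be sketched as a plain subtraction of the gate values (assuming the background contribution is simply additive and equal in both exposures):

```python
def remove_background(s1_lit, s2_lit, s1_dark, s2_dark):
    """Subtract a second exposure taken with the illumination switched off,
    leaving only the signal contributed by the camera's own light pulse."""
    return s1_lit - s1_dark, s2_lit - s2_dark

# Example: background light adds 0.20 to each gate value
print(remove_background(0.86, 0.53, 0.20, 0.20))  # -> (0.66, 0.33), as before
```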

Different denotations exist as far as 3D-cameras are concerned:

- the TOF-camera,
- the PMD-camera,
- the range-imaging camera (RIM, or flash-radar camera).


These all refer to the same type of camera, which is capable of not only detecting intensity, like traditional analogue or modern digital cameras, but also the distance from each pixel to the mapped part of the object. A more precise description than the term range-imaging camera is the term ToF (TOF) camera, indicating that an optical signal emitted from the camera is reflected from the object, and that the reflected part of the emitted signal is then returned to the camera.

    The flight-time of the signal on its way to the object and back is measured.

    The ToF is proportional to the distance. However, this is only a very general

    explanation of electronic distance measurement (EDM). Often a modified method

    (phase modulation) is used that relies on the modulation of the signal strength (the

    amplitude of the emitted light).


The wavelength of one modulation period depends on the modulation frequency f. It is calculated with the velocity of light c according to λ = c/f. Most 3D-cameras emit IR light modulated with f = 20 MHz. In the example, the corresponding wavelength of 15 m (15 m = 300,000 km/s ÷ (20 × 10^6 /s)) is reflected at a distance of about 22 m. As the light passes to the object and back it travels double the distance, so the result needs to be divided by two.

The so-called scale of the distance meter is 15 m / 2 = 7.5 m. The whole distance is d = n · (λ/2) + Δd, where n is the integer number of half-wavelengths and Δd is the remainder, which has to be measured via the so-called phase difference between the emitted and the returned signal. Thus it follows that when a single modulation frequency is utilized, a direct unambiguous measurement is possible only up to 7.5 m. So a distance of 8.05 m would show as 0.55 m.
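
This wrap-around is a simple modulo operation; the sketch below assumes a single 20 MHz modulation frequency with its 7.5 m scale.

```python
SCALE = 7.5  # unambiguous range in metres: (c / f) / 2 at f = 20 MHz

def measured_distance(true_distance):
    """A single-frequency phase measurement only sees the remainder of the
    true distance modulo the 7.5 m scale."""
    return true_distance % SCALE

print(measured_distance(8.05))  # -> 0.55 m, as in the example above
```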

This is a general description of how distances can be measured by means of an amplitude-modulated signal. The phase measurement - the determination of the phase difference - may be done in different ways. In a 3D-camera it is done using the PMD effect (Photonic Mixer Device (Schwarte)). In this case the phase is represented by the difference in charge of two photo-sensitive areas of each respective pixel. Daylight may cause an overload of one of these areas.

    This may cause serious errors in the distance measured by a ToF-PMD

    camera, making daylight suppression a fundamental requirement for any 3D-

    camera to be used for surveying.


The figure above shows the difference between the images obtained from a 2-D and a 3-D camera. It can be seen clearly that the images obtained from the 3-D camera are of higher resolution than those from the 2-D camera; the 2-D camera produces redundant data, which makes the image blurred and of low resolution.

Some General Remarks

In order to answer the aforementioned main questions it has not been necessary to utilize the very latest generation of 3D-cameras. However, the device used for testing needed to meet some minimum requirements, namely sufficient short-term temperature stability as well as adequate daylight suppression. Due to its high performance in the latter area, a 3k camera of PMD-Technologies (Siegen, Germany) was selected. This model, however, is soon to be succeeded by the camera 2.0 (41k). The figure shows the 2.0 - currently the camera with the highest pixel resolution available, 205 x 205 pixels - as well as the model PMD 3k with a 2D colour camera placed on top.


    Figure a: 3D-camera 2.0, 41k (PMD-Technologies)

    Figure b: 3D-camera 3k (PMD-Technologies) mounted with 2D-camera

On both sides of the 3D-camera's cube, diode arrays are mounted, emitting amplitude-modulated IR light that illuminates the whole scene to be captured.

Apart from the distances (resp. coordinates) and the grey values (intensities), the signal amplitudes of the returned signals are registered for each pixel as well. Additionally, the 2D- / 3D-camera system of Bochum University delivers a colour image with higher resolution. The lateral positions of the pixels on the chip of the 3D-camera and of the colour camera, together with the distances, represent polar coordinates. They allow the calculation of orthogonal coordinates of the object.
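
A minimal sketch of this conversion under a pinhole-camera model; the focal length, pixel coordinates and principal point used here are illustrative assumptions, and a real system would use calibrated values.

```python
import numpy as np

def pixel_to_xyz(u, v, distance, f_px=200.0, cx=100.0, cy=100.0):
    """Convert a pixel position (u, v) and its measured distance into
    orthogonal coordinates, assuming a pinhole camera with focal length
    f_px (in pixels) and principal point (cx, cy)."""
    ray = np.array([(u - cx) / f_px, (v - cy) / f_px, 1.0])
    ray /= np.linalg.norm(ray)   # unit vector along the viewing ray
    return distance * ray        # the distance is measured along the ray

print(pixel_to_xyz(150, 100, 5.0))  # a point about 5 m away, right of centre
```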


    5) Typical Faults Of TOF Camera

The emitted light illuminates the whole 3D-scene at once. This is a very important difference to common electronic distance measurement (EDM), where the distance-measuring ray is precisely focused. The emitted signal is s(t) = A · cos(ωt). The remitted light signal comprises the distance information in terms of a phase delay of the emitted signal: d(t) = k + a · cos(ωt + φ). The phase delay φ contains the distance information. The remitted signal amplitude a is generally much smaller than the sent amplitude A. Additionally, the returning signal may be superimposed with a term k which contains various very different kinds of influences on the signal, for instance a contribution that might be caused by daylight or by temperature effects.

In order to understand the difficulties concerning the application of 3D-cameras for architectural recording it is necessary to take a closer look at the instrument's faults. Most faults are well known either from the field of photogrammetry (e.g. distortion of pictures) or from electronic distance measurement (EDM), like phase shifts caused by optical or electrical superimposition of the direct signal with error signals (crosstalk). Most of the faults of 3D-PMD cameras can be sufficiently modelled in a calibration process. Certain errors may also be suppressed, e.g. daylight via the SBI circuit (Suppression of Background Light Intensity). The short-time random inaccuracy of neighbouring pixels may nowadays be estimated to generally not exceed 1 cm in a distance range of some meters. However, two fundamental influences on the distances have a systematic nature. They can considerably compromise the general accuracy. Both result from unwelcome reflections of the distance-measuring signals.

The first influence may be characterized as internal superimposition, the second as external superimposition. Superimposition in general terms means that the periodic amplitude-modulated signal, which is characterized by frequency, amplitude and phase, is superimposed by a signal of the same frequency, but with another phase and generally much lower amplitude.

By this process the resulting signal may deviate significantly from the undisturbed one in regard to phase. The amount of the phase shift depends on the phase difference between both signals as well as on the ratio of the amplitudes of the signals, as in the following figure:

Signal a with phase p is superimposed by a small reflected signal b with phase p'. The result is a signal with a shifted phase, and the influence of b depends on its individual phase. A phase shift is equivalent to an error in distance. The grave influence of internal and external superimposition respectively on distances measured with the 3D-camera has to be regarded in more detail in order to assess the possibilities of using the tool for architectural recording. Internal superimposition effects distance shadows that encircle the foreground like a small ribbon.
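
The size of such a phase shift can be reproduced with a small phasor calculation; the amplitudes and phases below are schematic values, not measurements.

```python
import cmath

C, F_MOD = 3e8, 20e6  # assumed 20 MHz modulation (7.5 m scale, as above)

def superimposition_error(amp_a, phase_a, amp_b, phase_b):
    """Add a direct signal a and a weaker stray signal b as phasors and
    return the resulting distance error in metres."""
    total = amp_a * cmath.exp(1j * phase_a) + amp_b * cmath.exp(1j * phase_b)
    phase_shift = cmath.phase(total) - phase_a
    return C * phase_shift / (4 * cmath.pi * F_MOD)

# A stray reflection with 10% amplitude, 90 degrees out of phase:
print(superimposition_error(1.0, 0.5, 0.1, 0.5 + cmath.pi / 2))  # ~0.12 m error
```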

In the literature, internal superimposition is also characterised as scattering: small parts of incoming signals are reflected in the camera itself, overlaying the directly reflected incoming signals on their way to the pixel. Thus the stronger direct signal is changed in phase by the reflected parts belonging to neighbouring pixels. Scattering predominantly affects neighbouring pixels with very different phases. As the lens of the camera has to be focused to a fixed distance, e.g. 5 m, the blurring effect favours scattering at distances which are shorter or longer than the one the camera is focused on. However, efforts and advances made by the manufacturers in order to minimize the influence of scattering raise hope that scattering will be overcome in the future. Mathematical models also help to minimize it (Mure-Dubois, 2007).

    External superimposition seems to be the most dangerous error. It might

    very well ultimately prove to fundamentally limit the possibilities to use 3D-

    cameras for architectural recording. The reason for the occurrence of this effect lies

    in the fact that the array of diodes emitting the IR-Signal always illuminates the

    whole scene all at once. Small parts of diffusely reflected light from different parts

    of the object may superimpose the directly reflected signals on their way back to

    the camera. The possible influence of diffuse reflections on a wall, causing a phase

shift of a directly reflected signal, can be depicted only schematically, as in the following figures:


    Possible influence of diffuse reflection on direct measurement

External superimposition causes a phase delay, and correspondingly a distance shift

The impact of the effect is demonstrated in the following figure. Appropriate measurements were made to a vertical edge formed by two flexible planes. In the experiment the angle between the planes was modified while the position of the edge itself did not move. The smaller the angle between the planes, the more deformed the edge will appear. However, the distances measured more or less exactly into the centre of the corner, into the intersection of the planes, are those which are less deformed by the superimposition. The error may thus become considerable.


Interaction between the topography of the architecture and the measurement: a)-d) increasing deformation with smaller angle; e), f) same angle, but in f) one plane is less reflecting.

The topography of an area can also mean the surface shape and features themselves; in a broader sense, topography is concerned with local detail in general. Here the features and shapes of the reflecting surfaces are concerned, showing the changes that occur in the images as the position of the surfaces is changed. It can easily be observed that, as the angle between the planes decreases, the image gets more deformed while the position of the edges stays constant.


    6) Practical Use Of TOF Camera

The examples prove the wide range of possibilities inherent in the new methodology, which can be applied to a variety of tasks that currently require several different measuring devices, like the electronic distance meter, tacheometer, laser scanner or camera. The possibilities of the new system are demonstrated with regard to distance measurement, angle measurement, taking images, measuring point clouds, and use as a video camera at a high frame rate. The 2D- / 3D-system has all the fundamental capabilities of common contemporary measuring devices. Will it be possible to discard these instruments in favour of the 3D-camera in the future? Will it be possible to use it efficiently despite the handicap presented by external superimposition? The following experiments try to find some answers.

    User interface of the 2D- / 3D-system


The 2D- / 3D-system as a replacement for a tacheometer

With the tacheometer, polar coordinates are measured in reference to the vertical: the zenith distance, the horizontal angle and the distance. In architectural surveying it is often used to establish a network or to capture single points. Whether the 3D-camera can replace a tacheometer depends largely on the orientation of the camera and on its accuracy. To test this, the 2D- / 3D-camera system was mounted on a tacheometer tripod and levelled with a bubble tube. The 3D-camera was used for distance measurement, the 2D-camera for angle measurement because of its higher angular resolution. Different types of special targets were developed; e.g. rods were marked in various sectors with reflecting film or reflecting spheres.

    Targets for automatic identification of single points

Although the camera's angular resolution is rather poor, distances could still be detected sufficiently well, especially as far as measurements to spherical reflectors are concerned. In order to carry out angle measurement, well-defined points in the same line, e.g. the reflecting spheres, could be chosen automatically. Thus automated extraction and identification of reflecting points of high intensity was possible. Distances of more than 20 m were reached, however with rapidly decreasing accuracy, down to ±5 cm. Faults caused by excessive intensity of the signal amplitude at the reflecting points had to be considered. If necessary, these strongly reflecting points can be used for identification purposes only, while less reflecting adjacent (red) spheres may be utilized for angle measurement and distance measurement respectively. The latest generation of cameras, however, should yield better results. It will also be easier to measure distances longer than half a wavelength, because increased switching times of the measuring frequency make it possible to determine n in the equation d = n · (λ/2) + Δd using different frequencies. The present means of interaction using a measuring rod is a typical modus operandi of the 3D-camera; it cannot be realized with other instruments this way, as only here can different targets be reached all at once. The eventual replacement of the tacheometer by the 3D-camera seems a likely future course for certain areas, e.g. for documenting the advancing process of measurements at an archaeological excavation site, or for topographical purposes at short distances. In these applications of measurements to reflecting targets, faults caused by external superimposition may be sufficiently suppressed. It has also been proved that a network can easily be referenced by moving a GPS antenna across the field of sight with the camera positioned on a tripod. Single-frame exposure time and GPS coordinate measurement were synchronized.
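
How different frequencies determine n can be sketched with a brute-force search over candidate distances; the two frequencies and the search range below are illustrative assumptions.

```python
C = 3e8  # speed of light in m/s

def resolve_distance(measured, freqs, max_range=60.0, step=0.001):
    """Find the distance consistent with the wrapped (modulo-scale)
    measurements from several modulation frequencies by scanning
    candidate distances in 1 mm steps."""
    scales = [C / (2 * f) for f in freqs]   # unambiguous range per frequency
    best, best_err = 0.0, float("inf")
    for k in range(int(max_range / step) + 1):
        d = k * step
        err = sum((d % s - m) ** 2 for s, m in zip(scales, measured))
        if err < best_err:
            best, best_err = d, err
    return best

# A true distance of 8.05 m wraps differently at 20 MHz and 18 MHz:
freqs = [20e6, 18e6]
measured = [8.05 % (C / (2 * f)) for f in freqs]
print(resolve_distance(measured, freqs))  # -> approximately 8.05 m
```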

    Scanning with the 3D-camera

With the 3D-system being used as a scanner, special attention should be paid to two different working modes: in the first one, the camera is mounted on a tripod and turned around a fixed (e.g. vertical) axis, whereas in the second one the camera may be moved freely around an object or moved about in a room. In both cases every single frame delivers a cloud of thousands of points that are fixed to the camera system. In either of the two modes, good starting points for matching are obtained by the procedure thanks to the high frame rate and the frames overlapping heavily. However, external superimposition may cause considerable deformations and will deteriorate the quality of matched point clouds in any case. To minimize this influence, the structures scanned with the 3D-system should preferably neighbour only small reflecting areas. The figure gives an example of a suitable object. The point cloud was constructed from 50 frames, taken from only three positions with the camera on a tripod. This unfavourable way of recording can easily be identified by the characteristic structure of points forming rows.


    Visualization

Photorealistic texturing requires geometry data and pre-rectified images. The images used for photorealistic modelling were taken with the 2D-camera and matched with the coordinates taken by the 3D-camera. Mapping and rectifying of the images was done here using the technology and the algorithms of photo tacheometry (Scherer, 2006). The figure shows an image of a three-dimensional work of art consisting of coloured planes facing each other at different angles. In this example no effects of external superimposition on the coordinates were to be expected. Changes in colour result from the lower quality of the 2D video camera.

In the visualization of built structures, the level of detail (LOD) is characterized by four degrees. LOD3 describes models of buildings that have been texturized on the outside; in these cases the general geometry recording resolution goes down to a few decimeters. LOD4 is reserved for the most detailed models, covering the interior of buildings as well as detailed textures. The visualization of the niche in the example figure shows that, in spite of superimposition effects, planes can be extracted sufficiently well to build a LOD3 model.


    7) Technologies Developed for Time Of Flight Camera

    Several different technologies for time-of-flight cameras have been developed.

    Pulsed light source with digital time counters

    There are devices with a pulsed laser and a custom imaging integrated

    circuit with a fast counter behind every pixel. These devices produce depth values

    for each pixel on every frame. Typical image sizes are 128 x 128 pixels. Ranges up

    to 22,000 feet with an eye-safe narrow beam have been achieved. Detectors are

    typically InGaAs (indium-gallium-arsenide) devices.

    RF-modulated light sources with phase detectors

Photonic Mixer Devices (PMD) and the Swiss Ranger work by modulating the outgoing beam with an RF carrier, then measuring the phase shift of that carrier on the receive side. This is a compact, short-range approach. It has a modular-error challenge: measured ranges are modulo the maximum range, which is set by the RF carrier wavelength. With phase-unwrapping algorithms, the maximum uniqueness range can be increased. The Swiss Ranger has ranges of 5 or 10 meters, with 176 x 144 pixels. The PMD can provide ranges up to 60 m. Illumination is pulsed LEDs rather than a laser. The demodulation is usually achieved by gating the sensor in synchrony with the light-source modulation, so in essence these are range-gated imagers.


    Range gated imagers

This is the most promising technology, invented by Antonio Medina. The phase detector is the gate or shutter in the camera. The gate allows collection of portions S2 and S1 of the received light pulse S. The portions depend on the time of arrival, and the range is derived from them according to Medina's equation, z = R(S2 - S1)/2S + R/2, for an ideal camera. R is the camera range, determined by the round trip of the light pulse. The Z-cam and Canesta 3D cameras are range-gated systems.
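
Note that this is algebraically the same relation as the one used in the Working section: with S = S1 + S2, z = R(S2 - S1)/(2S) + R/2 = R[(S2 - S1) + (S1 + S2)]/(2S) = R · S2/(S1 + S2); with S1 = 0.66, S2 = 0.33 and R = 7.5 m it again yields z = 2.5 m.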

A similar principle is used in the ToF camera line developed by the Fraunhofer Institute of Microelectronic Circuits and Systems and TriDiCam. These cameras employ photo detectors with a fast electronic shutter used as a gated integrator, together with pulsed lasers.

    There are also "range gated imagers", which are not 3D cameras. The gate is

    open to collect the totality of the reflected pulse and nothing else. Anything outside

    a specified distance range can be suppressed. Those are useful for seeing through

    fog. A pulsed laser provides illumination, and an optical gate allows light to reach

    the imager only during the desired time period.


    8) Advantages of Time Of Flight Camera

    Simplicity

    In contrast to stereo vision or triangulation systems, the whole system is very

    compact: the illumination is placed just next to the lens, whereas the other systems

    need a certain minimum base line. In contrast to laser scanning systems, no

    mechanical moving parts are needed.

    Efficient distance algorithm

    It is very easy to extract the distance information out of the output signals of

    the TOF sensor, therefore this task uses only a small amount of processing power,

    again in contrast to stereo vision, where complex correlation algorithms have to be

    implemented. After the distance data has been extracted, object detection, for

    example, is also easy to carry out because the algorithms are not disturbed by

    patterns on the object.

    Speed

    Time-of-flight cameras are able to measure the distances within a complete

    scene with one shot. As the cameras reach up to 100 frames per second, they are

    ideally suited to be used in real-time applications.


    9) Shortcomings in Time Of Flight Camera

    Background light

    Although most of the background light coming from artificial lighting or the

    sun is suppressed, the pixel still has to provide a high dynamic range. The

    background light also generates electrons, which have to be stored. For example,

    the illumination units in today's TOF cameras can provide an illumination level of

about 1 watt. The sun has an illumination power of about 50 watts per square meter after the optical band-pass filter. Therefore, if the illuminated scene has a size

    of 1 square meter, the light from the sun is 50 times stronger than the modulated

    signal.

    Interference

If several time-of-flight cameras are running at the same time, the cameras may disturb each other's measurements. There exist several possibilities for dealing with this problem:

- Time multiplexing: A control system starts the measurement of the individual cameras consecutively, so that only one illumination unit is active at a time.

- Different modulation frequencies: If the cameras modulate their light with different modulation frequencies, their light is collected in the other systems only as background illumination and does not disturb the distance measurement.


    Multiple reflections

    In contrast to laser scanning systems, where only a single point is

    illuminated at once, the time-of-flight cameras illuminate a whole scene. Due to

    multiple reflections, the light may reach the objects along several paths and

    therefore, the measured distance may be greater than the true distance.


    10) Applications Of TOF Camera

    Automotive applications

    Time-of-flight cameras are also used in assistance and safety functions for

    advanced automotive applications such as active pedestrian safety, precrash

    detection and indoor applications like out-of-position (OOP) detection.

    Human-machine interfaces / gaming

    As time-of-flight cameras provide distance images in real time, it is easy to track

    movements of humans. This allows new interactions with consumer devices such

as televisions. Another topic is the use of this type of camera to interact with games on video game consoles.


    Measurement / machine vision

    Other applications are measurement tasks, e.g. for the fill height in silos. In

industrial machine vision, the time-of-flight camera helps to classify objects and helps robots find items, for instance on a conveyor. Door controls can

    distinguish easily between animals and humans reaching the door.

    Robotics

    Another use of these cameras is the field of robotics: Mobile robots can build up a

    map of their surroundings very quickly, enabling them to avoid obstacles or follow

    a leading person. As the distance calculation is simple, only little computational

    power is used.


    CONCLUSION

On the one side, fundamental difficulties of using 3D-cameras for architectural surveying are to be outlined; on the other side, possibilities for a 2D- / 3D-system replacing traditional measuring instruments are to be shown.

    The most important results from these first experiences are as follows:

    - When modelling structures the geometrical accuracy depends on the structures

    themselves and on the geometrical relation between the camera and the object. The

    strong interdependence caused by external superimposition has to be taken into

    account.

    - Geometrical correctness can often not be guaranteed.

    - A sufficient resolution of point-clouds can be reached only with a large number

    of frames; the most efficient way to arrange them will have to be examined.

    - The aforementioned method of recording with the 3D-camera is suited for

    recording and visualization with LOD3-quality.

    - The combination of a 3D-camera with a 2D-camera of higher resolution yields

    improved coordinate quality.


- The large variety of possibilities to combine the different functionalities may allow traditional measuring instruments and measuring methods to be replaced under certain conditions.

    Although many instrumental parameters will see additional improvements in

    the future, the external superimposition error is inherent in the technology itself. It

may cause uncontrollable alterations in the measured distances. Determining accurate boundaries for the widespread and common use of 3D-cameras in surveying might prove to be a long road.


    BIBLIOGRAPHY

- www.wikipedia.org/wiki
- www5.informatik.uni-erlangen.de/.../time-of-flight-tof-group
- www.mesa-imaging.ch/pdf/Application_SR3000_v1_1.pdf
- www.artts.eu/publications/oprisescu_et_al_ieee.pdf
- www.springerlink.com/index
- www.cs.unc.edu/~lguan/publications/guan_pollefeys_eccv08.pdf