Driver Assistance System



(19) United States
(12) Patent Application Publication: Goerick et al.
(10) Pub. No.: US 2007/0043491 A1
(43) Pub. Date: Feb. 22, 2007
(54) DRIVER ASSISTANCE SYSTEM
(76) Inventors: Christian Goerick, Seligenstadt (DE); Jens Gayko, Frankfurt am Main (DE); Martin Schneider, Reinheim (DE)
(51) Int. Cl.: B62D 6/00 (2006.01); G06F 17/00 (2006.01)
(52) U.S. Cl.: 701/41; 701/300
(57) ABSTRACT

A driver assistance system with an architecture of general use for applications related to driver assistance, passenger safety and comfort, and communication with external systems, e.g. for diagnosis and maintenance purposes, comprises (in one embodiment): a first interface for sensing external signals coming from the environment of a vehicle, a second interface for sensing internal signals of the vehicle, a first subsystem for fusing the output signals of the first interface and the second interface in order to generate a representation of the vehicle's environment, a second subsystem for modeling the current state of the vehicle being provided with the output signals of the first subsystem, a third subsystem for modeling the behavior of a driver, a fourth subsystem for interpreting the current scene represented by the first subsystem in order to determine characteristic states of the current external situation of the vehicle, and a fifth subsystem that generates stimuli for the actuators of the vehicle and/or the driver based on the output signals of the second, the third and the fourth subsystem.
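As an illustration of the architecture summarized in the abstract, the following Python sketch wires the two sensing interfaces and the five subsystems into one processing step. It is a minimal, hypothetical skeleton: the class and method names (DriverAssistanceAgent, fuse, generate_stimuli, and so on) and the toy braking rule are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Scene:
    """Central representation of the vehicle's environment (output of the first subsystem)."""
    objects: List[dict] = field(default_factory=list)


class DriverAssistanceAgent:
    """Minimal wiring of the five subsystems named in the abstract.
    Each stage is reduced to a placeholder method; real implementations are application specific."""

    def step(self, external_signals: Dict[str, float],
             internal_signals: Dict[str, float]) -> Dict[str, float]:
        scene = self.fuse(external_signals, internal_signals)        # first subsystem
        vehicle_state = self.model_vehicle(scene, internal_signals)  # second subsystem
        driver_state = self.model_driver(internal_signals)           # third subsystem
        scene_states = self.interpret(scene)                         # fourth subsystem
        # fifth subsystem: combine the three models into actuator/driver stimuli
        return self.generate_stimuli(vehicle_state, driver_state, scene_states)

    def fuse(self, ext: Dict[str, float], internal: Dict[str, float]) -> Scene:
        # placeholder fusion: one entry per external measurement
        return Scene(objects=[{"source": name, "value": value} for name, value in ext.items()])

    def model_vehicle(self, scene: Scene, internal: Dict[str, float]) -> Dict[str, float]:
        return {"speed": internal.get("speed", 0.0)}

    def model_driver(self, internal: Dict[str, float]) -> Dict[str, bool]:
        return {"attentive": True}

    def interpret(self, scene: Scene) -> Dict[str, int]:
        return {"object_count": len(scene.objects)}

    def generate_stimuli(self, vehicle, driver, scene_states) -> Dict[str, object]:
        # toy rule: brake gently if anything is detected while moving fast
        brake = 0.3 if scene_states["object_count"] > 0 and vehicle["speed"] > 30 else 0.0
        return {"brake": brake, "warn_driver": brake > 0 and not driver["attentive"]}


if __name__ == "__main__":
    agent = DriverAssistanceAgent()
    print(agent.step({"radar_object": 12.5}, {"speed": 50.0}))
```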

Correspondence Address: FENWICK & WEST LLP, SILICON VALLEY CENTER, 801 CALIFORNIA STREET, MOUNTAIN VIEW, CA 94041 (US)
(21) Appl. No.: 11/505,541
(22) Filed: Aug. 16, 2006
(30) Foreign Application Priority Data: Aug. 18, 2005 (EP) 05 017 976.1

Sensor fusion, representation and interpretation: system details


Patent Application Publication Feb. 22, 2007 Sheet 1 of 2 US 2007/0043491 A1

[FIG. 1: functional block diagram of an embodiment of the driver assistance system]


Patent Application Publication Feb. 22, 2007 Sheet 2 of 2 US 2007/0043491 A1

[FIG. 2: schematic details of the information fusion, scene representation and interpretation subsystems]


DRIVER ASSISTANCE SYSTEM

FIELD OF THE INVENTION

[0001] The invention relates to driver assistance systems and more particularly to automotive driver assistance systems and methods.

BACKGROUND OF THE INVENTION

[0002] Driver assistance systems are control systems for vehicles or intelligent vehicle technologies that aim at increasing the comfort and safety of traffic participants. Potential applications of such systems include lane departure warning, lane keeping, collision warning or avoidance, adaptive cruise control and low-speed automation in congested traffic.

[0003] Driver assistance systems in the context of the present invention can thereby "assist" an action triggered or performed by a driver, but can also autonomously start and carry out a control action for the vehicle.

[0004] Each of the above-mentioned applications typically uses a multitude of separate sensors and electronic control units (ECUs) for sensor control, analyzing the sensor data and generating suitable output signals. For example, RADAR (RAdio Detecting And Ranging) and LIDAR (Light Detecting And Ranging) sensors can accurately measure distances to sensed objects, but they cannot provide any semantic information related to the sensed objects (e.g. in which lane a vehicle is located, or what kind of object is being sensed). Such information additionally needs to be gathered by one or more video cameras.

[0005] Due to the general tendency to increase safety and to support driver and passengers with new comfort functions, the complexity of each of these applications, as well as the number of applications within a modern vehicle, is increasing and will increase further in the future. Basically, a vehicle with several sensor systems means a respective number of ECUs, where each ECU performs its task independently of the others.

[0006] Conventional vehicles have a bus system, for example the CAN ("Controller Area Network") bus, to which most or all ECUs are connected. The CAN bus allows data to be exchanged between the connected ECUs. However, it turns out that by just adding together several sensors and/or applications and interconnecting them with a bus, only simple driver assistance systems can be constructed satisfactorily with regard to aspects like reliability, stability and robustness.

[0007] Nevertheless, it is a goal to further enhance safety, to offer more comfort functions and at the same time to limit the costs for the development of such new and/or enhanced safety and comfort features. Thus, what is needed is a system and method that integrates the hitherto separated sensor systems into a single information system, which actively controls the vehicle and/or assists the vehicle operator.

[0008] However, no such general model of behavior generation is known. Existing driver assistance systems comprise one or several sensors and subsequent sensor data processing steps specifically designed for a particular application. To implement a further application, essentially a redesign of the system is required.


SUMMARY OF THE INVENTION

[0009] In view of the above, it is an object of the invention to provide a driver assistance system which is based on a flexible architecture, such that it can be easily adapted.

[0010] It is a further object of the invention to provide a driver assistance system allowing significant safety and comfort improvements compared to existing systems.

[0011] Another object of the invention is to provide a driver assistance system or a respective method which allows the sharing of limited resources, e.g. in terms of energy consumption, computing power or space.

[0012] It is a further object of the invention to provide a driver assistance system able to make robust predictions and enable complex behavior.

[0013] Another object of the invention is to provide a modular architectural concept which easily allows the incorporation of new elements, for example on the sensory side, new applications or units for enhanced internal processing.

[0014] It is a further object of the invention to provide a driver assistance system which allows real-time control and action selection.

[0015] The features and advantages described in the specification are not all inclusive and, in particular, many additional features and advantages will be apparent to one of ordinary skill in the art in view of the drawings, specification, and claims. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter.

BRIEF DESCRIPTION OF THE DRAWINGS

[0016] FIG. 1 shows, in the form of a functional block diagram, an embodiment of a driver assistance system according to the invention; and

[0017] FIG. 2 shows in schematic form details of the information fusion, scene representation and interpretation subsystems of the embodiment of FIG. 1.

DETAILED DESCRIPTION OF THE INVENTION

[0018] A preferred embodiment of the present invention is now described with reference to the figures, where like reference numbers indicate identical or functionally similar elements. Also in the figures, the left-most digit of each reference number corresponds to the figure in which the reference number is first used.

[0019] Reference in the specification to "one embodiment" or to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.

[0020] Some portions of the detailed description that follows are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations


are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps (instructions) leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic or optical signals capable of being stored, transferred, combined, compared and otherwise manipulated. It is convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. Furthermore, it is also convenient at times to refer to certain arrangements of steps requiring physical manipulations of physical quantities as modules or code devices, without loss of generality.

[0021] However, all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices.

[0022] Certain aspects of the present invention include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present invention could be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by a variety of operating systems.

[0023] The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.

[0024] The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present


invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any references below to specific languages are provided for disclosure of enablement and best mode of the present invention.

[0025] In addition, the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the claims.

[0026] In order to reliably accomplish the above-mentioned objects, one embodiment of the invention proposes a driver assistance system unifying several a priori separated sensor systems, and which implements a central model of the vehicle's environment. The model is obtained by analyzing the data delivered by the different kinds of sensors. The common representation forms the basis for the implementation of complex behavior of the system, for example generating warning messages for the driver and/or control commands for the various actuators of the vehicle with which the system is connected.

[0027] The invention proposes a general architecture for a driver assistance system, and therefore differs from the prior art, according to which specific separately existing sensor systems are interconnected. Within the system of the invention, a sensor is not specifically assigned to a particular application; rather, a number of internal and external sensors are integrated into the system to arrive at the above-mentioned central representation of the environment. Thus, new applications might be defined without any change in the sensor pool of a vehicle.

[0028] From the above it is clear that information fusion (data fusion) is an important aspect of the inventive driver assistance system. For example, the object information obtained by an image processing stage must be fused with data stemming from other sensors, for example RADAR data, to arrive at a unified description of the sensed environment.

[0029] With the help of symbolic information, the result of the fusion process, namely the scene representation, is evaluated and a behavior is created.

[0030] As the driver assistance system according to the invention operates independently of the driver to output information to the driver or act on the vehicle's actuators, the system will occasionally also be called an 'agent' throughout this document.

[0031] Some of the advantages of various embodiments of the invention are summarized in the following.

[0032] The architecture of the driver assistance system is based upon a central representation of the environment and the vehicle's own state. Therefore the invention can save limited resources such as, for example, energy, computing power and space. These resources are shared among the various assistance functions implemented within the common architecture according to the invention. Further,


[0050] The processing of information within the driver assistance system illustrated in FIG. 1 is described next. The sensors 24 for sensing the internal state of the vehicle may comprise sensors for the velocity, acceleration, yaw-rate, tilt and others. Sensor data are also delivered to the driver via an appropriate HMI 44.

[0051] The HMI 44 can also notify the driver 14 of warnings from the action selection/combination system 38.

[0052] More detailed information may be provided to the assistance system, for example the velocity of each of the wheels of the vehicle. The pre-processing module 20 serves as a temporal stabilization and smoothing unit for the incoming sensory information. Further, it adapts the sensor information for the downstream information fusion process. For example, the information might be adapted to the time step scheme of the processing stage of the agent.

[0053] The sensors 22 for sensing the vehicle's environment 10 might deliver, for example, video, RADAR and LIDAR data. Furthermore, data captured through radio information, vehicle-to-vehicle communication or digital maps might be delivered by appropriate devices. The pre-processing unit 18 includes a multitude of different procedures adapted to pre-process the sensor data. Both units 18, 20 forward the pre-processed data to the information fusion module 26, where the external and internal data are fused. For data fusion, a neural network can be used.

[0054] Data resulting from the fusion process are delivered to the scene representation module 28, where a scene representing the environment of the vehicle 12 is established. The scene might for example represent the traffic situation. Generally the scene representation contains detected objects, for example cars, motorcycles and/or pedestrians, and a background, for example comprising the street, vegetation and/or sky. A more detailed description of information fusion and scene representation will be given below with regard to FIG. 2.

[0055] Data related to the scene representation is also delivered as feedback to the information fusion module 26, supporting for example the prediction in a subsequent time step.

[0056] According to the invention, perception can be organized and supported by prediction. The driver subsystem 32 receives data via HMI 34 from the driver 14. Thus, module 32 can determine and anticipate the intention of the driver, e.g. planned lane-change maneuvers on a multiple-lane road (Autobahn, ...). Further driver- or passenger-related data is received from the interior sensing 45 of vehicle 12. Amongst others, the interior sensing 45 might comprise a camera for observing the driver's eyes. Within the driver model subsystem 32, visual data in combination with further data might lead to a conclusion concerning the driver's state, e.g. estimations of his or her awakeness/drowsiness. The driver model 32 determines and represents driver states for use by the agent, as described below.

[0057] Data of the scene representation 28 are further processed within the scene interpretation module 36.

[0058] The scene interpretation module 36 outputs signals to the driver model subsystem 32. Suitable data might comprise a representation of statements concerning, e.g., distances to other objects (other vehicles) falling below predetermined minimum values, or vehicle (relative) velocities above predetermined values.
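Paragraph [0052] describes the pre-processing module 20 as a temporal stabilization and smoothing unit that also adapts sensor data to the agent's time-step scheme. The patent does not specify an algorithm; the sketch below assumes simple exponential smoothing as one plausible realization, with all names and parameter values chosen for illustration only.

```python
class SensorPreprocessor:
    """Hypothetical pre-processing stage (module 20): temporal stabilization and
    smoothing of one incoming sensor signal. In a full system the smoothed values
    would additionally be resampled onto the agent's fixed time-step grid."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha      # smoothing factor in (0, 1]; smaller = stronger smoothing
        self._state = None      # last smoothed value

    def update(self, raw_value: float) -> float:
        """Exponentially smooth one raw sample and return the stabilized value."""
        if self._state is None:
            self._state = raw_value
        else:
            self._state = self.alpha * raw_value + (1.0 - self.alpha) * self._state
        return self._state


if __name__ == "__main__":
    pre = SensorPreprocessor()
    # a single outlier (25.0) in an otherwise steady wheel-speed signal is damped
    print([round(pre.update(v), 2) for v in [10.0, 10.4, 25.0, 10.2]])
```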


[0059] Further, the scene interpretation module 36 supplies signals to a (global) scene description module 46. Here, a global scenario or scene description is recorded. The most important scenarios relate to 'urban' or extra-urban ('rural') environments. Additionally or alternatively, scenarios like 'freeway' and 'highway' might be included. Global features extracted by the pre-processing module 18 are forwarded directly to the scene description module 46.

[0060] Data from the scene interpretation module 36 is fed back to the module 28, for example to guide an attention/relevance filtering process in the represented scenario in order to concentrate on the relevant objects in the current situation.

[0061] From the scene description module 46, data are handed over to a (traffic) rule-selection module 48. The data might also comprise data forwarded from the scene interpretation 36, for example related to a detected traffic sign or a combination of traffic signs. The module 48 comprises a list of traffic signs and other traffic rules as well as representations of global scene descriptions, where each entry in the list points towards one or more traffic rules, for example related to maximum velocity, but also to lane changes, parking prohibitions or similar rules. Any kind of assignment of rules to global scenes and/or objects within the scene can be implemented in the rule selection module 48.

[0062] The subsystem 38 of the agent collects data from several subsystems and modules. The vehicle model 30 delivers data related to the determined state of the vehicle 12. The driver model 32 delivers data related to the state of the driver 14. The scene interpretation 36 hands over data related to state variables of the represented scene. The scene description module 46 delivers data related to the global scenario and weather conditions. The rule selection 48 delivers data related to the traffic rules which are relevant for future actions of the vehicle 12 or the driver 14. The subsystem 38 interprets the data in order to define actions to be taken and possibly to rank required actions according to predetermined criteria related to, e.g., safety. Related data is also fed back to the scene interpretation module 36.

[0063] The subsystem 38 can generate warning messages which are directed towards the driver and are output via the HMI 44. Suitable interfaces comprise visual or auditory interfaces such as, e.g., LEDs, the output of numbers or text onto a display or the windshield, or spoken messages.

[0064] Data related to the one or more actions to be taken are then handed over to the output stage of the agent, the behavior generation module 40, where the driver-assistance modules like ACC, lane-keeping or collision-avoidance functions are located. Additional data might be delivered from the scene representation module 28 related to the physical description of objects. Further data related to the driver's intention is input from the driver 14 via HMI 34. In module 40, the actions computed by the upstream modules are merged together. For example, there can be implemented a hierarchy of action signals, such that an action signal received from the driver 14 (HMI 34) has highest priority.
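Paragraph [0064] states that the behavior generation module 40 merges the actions computed upstream, for example via a hierarchy of action signals in which the driver's input has highest priority. One way such a merge could look is sketched below; the priority table and source names are hypothetical, and only the rule that driver input ranks highest comes from the text.

```python
# Hypothetical priority levels; the patent only states that driver input ranks highest.
PRIORITY = {"driver_hmi": 3, "collision_avoidance": 2, "acc": 1, "lane_keeping": 0}


def merge_actions(action_signals: dict) -> dict:
    """Merge per-source action requests for the same actuator, keeping the request
    from the highest-priority source (behavior generation module 40, sketched)."""
    merged = {}
    for source, actions in action_signals.items():
        for actuator, value in actions.items():
            current = merged.get(actuator)
            if current is None or PRIORITY[source] > PRIORITY[current[0]]:
                merged[actuator] = (source, value)
    return {actuator: value for actuator, (_src, value) in merged.items()}


if __name__ == "__main__":
    requests = {
        "acc": {"throttle": 0.4},
        "collision_avoidance": {"throttle": 0.0, "brake": 0.8},
        "driver_hmi": {"throttle": 0.6},   # driver request overrides the other throttle requests
    }
    print(merge_actions(requests))          # {'throttle': 0.6, 'brake': 0.8}
```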


[0065] The module 40 generates output commands and transmits these to the appropriate actuators 42, for example throttle and brakes, but also steering, gear and further actuators, including actuators related to the driver and passengers, for example airbags and pre-tensioners, but possibly also interior lighting, climate control or window regulators.

[0066] One embodiment of the invention as described here provides for a variety of interactions between the driver and the driving assistance agent. The relevant driver's actions are represented by the driver model and further have a direct influence on the behavior generation. The agent supports the driver via warnings, or more generally by information messages generated by the scene interpretation module, and further by controlling the driver-related actuators. Thus, the inventive system is flexible enough to allow for complex interactions and to assist the driver without patronizing him.

[0067] FIG. 2 shows in schematic form functional details of the first subsystem, for fusing sensed information in order to generate a scene representation, and of the fourth subsystem, for interpreting the current scene in such a way that some meaningful information about the external scene is available for choosing the optimal action.

[0068] The external sensing module 22 comprises, for example, a visual external sensing unit 50 and an extended external sensing unit 52. While the visual external sensing unit 50 captures all sensory information provided by projective sensor elements like camera, RADAR and LIDAR, the extended external sensing unit 52 receives information from non-projective elements like digital-map data and information that extends the visual range, like radio traffic reports or vehicle-to-vehicle communication messages.

[0069] The visually captured data is forwarded to the feature extraction unit 54, which determines characteristic features of the sensor measurements, e.g. edges in a camera image and disparity (depth) of a stereo-camera system, or distance and velocity data from RADAR/LIDAR devices. Further, global features are extracted for supplying the global scene description unit 46. Suitable global features from camera images can be determined from frequency components (Fourier spectrum, Gabor wavelets) or the power-density spectrum.

[0070] The features delivered by unit 54 are clustered and/or matched against object models in order to generate object hypotheses from the separate feature modalities in unit 56. Object hypotheses are defined as candidates for real-world objects that have to be further confirmed by additional information integration and consistency validation.

[0071] Both the object hypotheses themselves and the corresponding features are transformed (when required) via unit 62 into suitable coordinate systems and are directly fed into the information fusion system 26. The fusion system 26 generates fused object hypotheses in unit 66 based on a feature binding process in unit 64. The different as well as redundant feature modalities are fused in order to support the corresponding object hypotheses and to discard inconsistencies in the sensory data. Object-hypothesis fusion and feature binding work hand in hand in this fusion architecture and can generally be performed by statistical fusion methods or neural networks. Algorithms useful for feature binding are known to the skilled person and therefore will not be discussed here. Robustness against measurement noise and uncertain information is achieved by temporal tracking (unit 68) of both features and object hypotheses.
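Paragraph [0071] notes that object-hypothesis fusion and feature binding can be performed by statistical fusion methods or neural networks, and that inconsistent sensory data are discarded. The sketch below shows one simple statistical option: inverse-variance weighting of per-sensor distance estimates with a crude consistency gate. The numbers and the gate threshold are illustrative assumptions, not values from the patent.

```python
def fuse_distance(estimates):
    """Inverse-variance weighted fusion of per-sensor distance estimates for one
    object hypothesis: a simple 'statistical fusion method' in the sense of [0071].

    estimates: list of (distance_m, variance_m2) pairs, one per sensor modality.
    """
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * d for (d, _), w in zip(estimates, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var


def consistent(estimates, gate_m: float = 3.0) -> bool:
    """Crude consistency check: all modalities must agree within a fixed gate,
    otherwise the hypothesis is not confirmed and the data is discarded."""
    distances = [d for d, _ in estimates]
    return max(distances) - min(distances) <= gate_m


if __name__ == "__main__":
    radar = (24.8, 0.2)    # RADAR: accurate range measurement
    camera = (23.5, 2.0)   # stereo camera: coarser depth from disparity
    if consistent([radar, camera]):
        print(fuse_distance([radar, camera]))   # approximately (24.68, 0.18)
```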


[0072] The fused object hypotheses are further validated by the visual object categorization unit 70, which delivers a semantic description of the objects in terms of the object type, e.g. 'car', 'pedestrian' or 'traffic sign'.

[0073] The physical description (position, velocity, acceleration) of the objects and the corresponding object type are stored in the scene representation module 28 as a 2½-dimensional sketch in unit 72, which is used for the close-range visual scene exploration. The term "2½D sketch" reflects the fact that it is not possible to fully capture the real 3-D characteristics of a scene by using only 2-D projective sensor elements.

[0074] Object categories delivered by unit 70 are linked to attributes via the knowledge base 58; e.g. some attributes of detected objects might describe different danger categories based on the solidity of the objects. Other attributes might be possible motion parameters or motion models for adapting the tracking unit 68. The attributes are also represented in unit 72.

[0075] Data from the extended external sensing module 52 are directly fed into the information extraction module 60, which performs map-data analysis and report analysis on the various messages in order to extract relevant information about the traffic situation around the vehicle that cannot be captured by the visual external sensing module 50. The acquired information is mapped into a 2D-space point-object representation in unit 74 via the mapping unit 62. The representation in unit 74 is a kind of bird's-eye-view long-range environment map and is useful for pre-activating specific regions in the 2½D-sketch representation 72, for guiding attention and supporting prediction in the sensor-fusion module 26.

[0076] Examples of data available in unit 74 are map data, like curves for guiding the lane-detection procedure, or rough positions of traffic jams and accidents, used to pre-activate the fusion system to concentrate on the detection of stationary targets.

[0077] Data accumulated in the 2½D-sketch representation is further evaluated in the scene interpretation module 36 to give the agent a kind of 'understanding' of what is going on in the current situation, based on the acquired object information. Examples of interpretations made in order to 'understand' the scene are the assignment of detected traffic signs to the different lanes or the assignment of detected obstacles to different lanes. The interpretation of what constitutes the road, or where the vehicle can drive, also belongs to the tasks of the scene interpretation module 36.

[0078] Module 36 provides the decision/action selection module 38 with states of the current scene, which comprise discrete, semantic information like the number of lanes and the obstacles in each lane, as well as continuous information like the physical properties of the scene (distances, velocities). Based on that information and on the driver's states acquired from occupancy monitoring and delivered by the driver model, the decision system 38 selects the optimal behavior by analysing the state space of the scene.

[0079] The scene interpretation module 36 also forwards information to the global scene description unit 46, namely the traffic signs and the lane configuration.
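Paragraphs [0073] and [0074] describe the 2½D sketch in unit 72 as storing each object's physical description and type together with attributes (such as danger categories derived from solidity) linked in via the knowledge base 58. A minimal data-structure sketch under those assumptions follows; the attribute values, category names and motion models are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

# Hypothetical knowledge base (unit 58): object type -> attributes such as a
# danger category based on the object's solidity. All values are illustrative.
KNOWLEDGE_BASE: Dict[str, Dict[str, str]] = {
    "car":          {"danger_category": "high", "motion_model": "constant_velocity"},
    "pedestrian":   {"danger_category": "high", "motion_model": "random_walk"},
    "traffic sign": {"danger_category": "low",  "motion_model": "static"},
}


@dataclass
class SketchEntry:
    """One object in the 2½D sketch (unit 72): physical description plus type and
    knowledge-base attributes. Depth comes from 2-D projective sensors only,
    hence '2½D' rather than full 3-D."""
    position: Tuple[float, float]       # (x, y) in vehicle coordinates, metres
    velocity: Tuple[float, float]       # (vx, vy) in m/s
    acceleration: Tuple[float, float]   # (ax, ay) in m/s^2
    object_type: str
    attributes: Dict[str, str] = field(default_factory=dict)

    def __post_init__(self):
        # attach the attributes linked to this object category via the knowledge base
        self.attributes = dict(KNOWLEDGE_BASE.get(self.object_type, {}))


if __name__ == "__main__":
    entry = SketchEntry((24.7, -1.2), (-3.0, 0.0), (0.0, 0.0), "car")
    print(entry.attributes)   # {'danger_category': 'high', 'motion_model': 'constant_velocity'}
```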


The global scene description unit 46 determines global aspects of the scene, as already described above, using global features delivered by the feature extraction unit 54.

[0080] It also tunes the visual object categorization unit 70 according to the different statistics occurring in different global scenes. The global aspects of the scene (urban, extra-urban, highway) are further fed into the knowledge base 48 for determining a proper set of traffic rules valid in the current situation. These rules represent constraints for the action selection module 38. Further, module 38 guides the attention/relevance filtering module 76 in filtering relevant objects depending on the task or action. The relevance filtering module 76 also receives interpreted information from module 36, which might deliver only those objects that are in the vehicle's lane or close to it, for instance.

[0081] The action selection information from unit 38 activates, deactivates or combines different driver-assistance modules 40 that might control the vehicle based on the physical object information delivered by the scene representation unit 28. Possible functions are longitudinal control (ACC, ACC stop-and-go), lateral control (lane keeping, lane-departure warning), collision avoidance/mitigation, warnings according to traffic rules, and so forth.

[0082] The internal sensing module 24 is connected to the smoothing/stabilization pre-processing unit 20, as already described above. Internal vehicle data might be used in the transformation/mapping unit 62 for correcting the mapping, e.g. using the tilt-angle information for the projective transformation, as well as for the feature binding process and for the vehicle model 30, which provides the decision/action selection module 38 and the assistance modules 40 with the internal states and the dynamics of the vehicle.
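Paragraph [0080] has module 38 guide the attention/relevance filtering module 76 so that only objects in or near the vehicle's own lane are passed on. A possible reading of that filter, assuming a hypothetical lane-index representation of the interpreted objects, is sketched below.

```python
def filter_relevant(objects, own_lane: int, lane_tolerance: int = 1):
    """Hypothetical attention/relevance filter (module 76): keep only objects in the
    vehicle's own lane or an adjacent lane, so that downstream modules can
    concentrate on the objects relevant to the current situation."""
    return [obj for obj in objects
            if abs(obj["lane"] - own_lane) <= lane_tolerance]


if __name__ == "__main__":
    detections = [
        {"id": 1, "lane": 2, "distance_m": 30.0},   # same lane: kept
        {"id": 2, "lane": 3, "distance_m": 12.0},   # adjacent lane: kept
        {"id": 3, "lane": 0, "distance_m": 8.0},    # far lane: filtered out
    ]
    print([obj["id"] for obj in filter_relevant(detections, own_lane=2)])  # [1, 2]
```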

What is claimed is:

1. A driver assistance system comprising:
a first interface for sensing external signals received from an environment of a vehicle,
a second interface for sensing internal signals of the vehicle,
a first subsystem for fusing output signals of the first interface and the second interface in order to generate a representation of the vehicle's environment,
a second subsystem for modeling a current state of the vehicle being provided with the output signals of the second interface,
a third subsystem for modeling a behavior of a driver,
a fourth subsystem for interpreting a current scene represented by the first subsystem in order to determine characteristic states of a current external situation of the vehicle, and
a fifth subsystem that generates stimuli for at least one of actuators of the vehicle or the driver based on the output signals of the second, the third and the fourth subsystems.

2. The system according to claim 1, wherein the fifth subsystem is designed to additionally use traffic rules depending on the current scene.


3. The system according to claim 1, wherein the fifth subsystem is adapted to feed back signals to the fourth subsystem representing information related to the generated stimuli, and the fourth subsystem is adapted to interpret the current scene in response to the feed-back signal of the fifth subsystem.

4. The system according to claim 1, further comprising a scene description subsystem for providing a description of a global scene, adapted to determine the global scene by an output signal of the fourth subsystem representing characteristic states of the current external situation of the vehicle and to output a signal representing the determined global scene, wherein the subsystem for generating stimuli for the actuators of the vehicle and/or the driver is adapted to generate these stimuli in response to the output signal of the scene description subsystem.

5. The system of claim 4, wherein the scene description subsystem provides an output signal for traffic rule selection.

6. The system according to claim 1, wherein the fourth subsystem is adapted to further generate a stimulus for the third subsystem, and the third subsystem is adapted to model the behavior of the driver in response to the input signal of the fourth subsystem and to an input signal from a human machine interface representing an action of the driver.

7. The system according to claim 1, wherein the subsystem for generating stimuli for the actuators and/or the driver is further adapted to generate these stimuli in response to input signals from passenger monitoring.

8. The system according to claim 1, wherein the first subsystem provides an output signal representing physical object descriptions for a behavior generation module.

9. The system according to claim 1, wherein the first subsystem comprises an information fusion module for fusing the information represented by the output signals of the first and second interface, a knowledge base for storing objects with associated attributes, and a scene representation module for representing a scene.

10. The system according to claim 1, for use in a vehicle.

11. A driver assistance system, the system comprising:
a first interface for sensing external signals coming from an environment of a vehicle,
a second interface for sensing internal signals of the vehicle,
a first subsystem for fusing output signals of the first interface and the second interface in order to generate a representation of the vehicle's environment,
a second subsystem for modeling a current state of the vehicle being provided with the output signals of the second interface,
a third subsystem for modeling a behavior of a driver, and
a fourth subsystem for interpreting a current scene represented by the first subsystem and for generating stimuli for at least one of the actuators of the vehicle or the driver based on the output signals of the second and the third subsystem.

12. The system according to claim 11, wherein the fourth subsystem is designed to use additional traffic rules depending on the current scene.


13. The system according to claim 11, further comprising a scene description subsystem for providing a description of a global scene, adapted to determine the global scene by an output signal of the fourth subsystem representing characteristic states of a current external situation of the vehicle and to output a signal representing the determined global scene, wherein the subsystem for generating stimuli for the actuators of the vehicle and/or the driver is adapted to generate these stimuli in response to the output signal of the scene description subsystem.

14. The system of claim 13, wherein the scene description subsystem provides an output signal for traffic rule selection.

15. The system according to claim 11, wherein the fourth subsystem is adapted to further generate a stimulus for the third subsystem, and the third subsystem is adapted to model the behavior of the driver in response to the input signal of the fourth subsystem and to an input signal from a human machine interface representing an action of the driver.

16. The system according to claim 11, wherein the subsystem for generating stimuli is further adapted to generate these stimuli in response to input signals from passenger monitoring.

17. The system according to claim 11, wherein the first subsystem provides an output signal representing physical object descriptions for a behavior generation module.

18. The system according to claim 11, wherein the first subsystem comprises an information fusion module for fusing the information represented by the output signals of the first and second interface, a knowledge base for storing objects with associated attributes, and a scene representation module for representing a scene.

19. The system according to claim 18, wherein the information fusion module provides object signals representing determined objects to the knowledge base, which in response to the object signals provides attribute signals representing attributes of the determined objects to the scene representation module.

20. A system according to claim 11, for use in a vehicle.

21. A method for assisting a driver of a vehicle, comprising the following steps:
sensing external signals of the vehicle,
sensing internal signals of the vehicle,
fusing the external signals and the internal signals in order to generate a representation of the vehicle's environment,
modeling a current state of the vehicle based on a vehicle model and the external and internal signals,
modeling a behavior of the driver,
interpreting a current scene in order to determine characteristic states of a current external situation of the vehicle, and
combining the characteristic states of the scene with the current states of the vehicle and the behavior of the driver in order to output a stimulus for at least one of an actuator of the vehicle or the driver.

22. The method of claim 21, for filtering a represented scene in order to focus on those objects of the representation which are relevant in a current context of a scenario.

23. A computer software program product implementing a method according to claim 21 when running on a computing device.

24. The computer program product according to claim 23, comprising firmware adapted for flashing the firmware into a programmable device of a vehicle.

25. A data storage device onto which the computer software program product of claim 23 is stored.

26. A method for assisting a driver of a vehicle, comprising the following steps:
sensing external signals of the vehicle,
sensing internal signals of the vehicle,
fusing the external signals and the internal signals in order to generate a representation of a vehicle's environment,
modeling a current state of the vehicle based on a vehicle model and the external and internal signals,
modeling a behavior of the driver, and
interpreting a current scene and generating a stimulus for at least one of an actuator of the vehicle or the driver based on a vehicle's states and a driver's intention.

27. The method of claim 26, for filtering a represented scene in order to focus on those objects of the representation which are relevant in a current context of a scenario.

28. A computer software program product implementing a method according to claim 26 when running on a computing device.

29. The computer program product according to claim 28, comprising firmware adapted for flashing the firmware into a programmable device of a vehicle.

30. A data storage device onto which the computer software program product of claim 28 is stored.

    * * * * *