A general approach to robot path planning for optical inspections

Thesis submitted in fulfilment of the requirements for the degree of Doctor in Applied Engineering (doctor in toegepaste ingenieurswetenschappen) at the University of Antwerp

Boris Bogaerts

Supervisors: Prof. Dr. Rudi Penne, Prof. Dr. Steve Vanlanduit

Faculty of Applied Engineering, Antwerp, 2019



Jury

Chairman: Prof. Dr. ing. Jan Steckel, University of Antwerp

Supervisors: Prof. Dr. Rudi Penne, University of Antwerp; Prof. Dr. Steve Vanlanduit, University of Antwerp

Members: Prof. Dr. Helder Araujo, University of Coimbra; Prof. Dr. Matthew Blaschko, KU Leuven; Prof. Dr. Kris Luyten, Hasselt University; Em. Prof. Dr. Ir. Luc Mertens (†), University of Antwerp; Dr. Ing. Bart Ribbens, University of Antwerp

Special thanks to: Prof. Dr. Ir. Walter Daems, University of Antwerp

Contact: Boris Bogaerts
University of Antwerp, Faculty of Applied Engineering
Op3Mech Research Group
Groenenborgerlaan 171, 2020 Antwerpen, België
W: Op3Mech.be
T: +32 326 51 929

© 2019 Boris Bogaerts. All rights reserved.

ISBN 9789057286421, Wettelijk depot D/2019/12.293/28


Contents

1 Introduction
  1.1 Research context
  1.2 Current manufacturing trends
  1.3 Problem statement
  1.4 Contributions
  1.5 Outline
  1.6 Publications

2 Background
  2.1 Introduction
  2.2 General inspection model
    2.2.1 Sensor visibility and coverage
    2.2.2 Inspection quality
    2.2.3 Formal definition of inspection quality
    2.2.4 Composite measurement devices
    2.2.5 Example: Uncertainty in dimensional metrology
    2.2.6 Example: Specular similarity in 3D reconstruction
    2.2.7 Example: Directional emissivity in thermography
  2.3 Robot systems and digital twins
    2.3.1 Robot kinematics
    2.3.2 Collision detection
    2.3.3 Path planning
    2.3.4 Topological challenges
    2.3.5 Abstracting robot systems

I Automated Inspection Planning Techniques

3 Gradient Based Inspection Path Optimization
  3.1 Introduction
  3.2 Related work
  3.3 Gradient-based optimization
    3.3.1 Notation
    3.3.2 Gradient-based path simplification
    3.3.3 Gradient-based coverage optimization
    3.3.4 Combination of gradients
    3.3.5 Constraints
  3.4 Algorithm overview
    3.4.1 Initialization
    3.4.2 Gradient update
    3.4.3 Avoiding local minima
  3.5 Simulation results
    3.5.1 Locally sub-optimal path
    3.5.2 Locally optimal path
  3.6 Conclusions

4 Near-optimal inspection path planning
  4.1 Introduction
  4.2 Related work
  4.3 Abstract problem structure
    4.3.1 Inspection quality
    4.3.2 Travelling costs
    4.3.3 The submodular orienteering problem and the Generalized Cost-Benefit Algorithm
    4.3.4 Improving the solution of the GCB algorithm
    4.3.5 Obtaining a tight reference measure
  4.4 Practical implementation
    4.4.1 Obtaining TSP costs
    4.4.2 Obtaining measurement quality
    4.4.3 Optimization
  4.5 Experiments
    4.5.1 Performance evaluation
    4.5.2 Robustness analysis
    4.5.3 Large scale highly complex inspection tasks
  4.6 Conclusion

II User-centered Inspection Planning

5 Human factors in camera network design
  5.1 Introduction
  5.2 The automated camera network design problem
    5.2.1 Problem structure
    5.2.2 Camera network performance functions
    5.2.3 Solving the Automated Camera Network Design problem
  5.3 User interaction
  5.4 Virtual reality interface
    5.4.1 Motivation and overview
    5.4.2 Simulator
    5.4.3 Interactive quality computation
    5.4.4 Virtual reality process
  5.5 Experiments
    5.5.1 Office scenario
    5.5.2 Harbour scenario
    5.5.3 Performance
  5.6 Discussion
  5.7 Conclusion

6 Human factors in inspection path planning
  6.1 Introduction
  6.2 VR interface for robotic inspection planning
    6.2.1 Robot programming interaction
    6.2.2 Usability
  6.3 Experiments design
    6.3.1 User selection
    6.3.2 User preparation
    6.3.3 Inspection planning scenarios
    6.3.4 Inspection problem definition
    6.3.5 Quality comparison
  6.4 Experimental results
    6.4.1 Small scale inspection planning problems
    6.4.2 Large scale inspection planning problem
    6.4.3 Discussion
  6.5 Conclusion

7 General Conclusions
  7.1 Conclusions
  7.2 Recommendations and future work

References


Acknowledgements

Throughout my Ph.D., I probably had over a million different ideas. Most of these ideas were utterly useless, and others resulted in this thesis. Because initially I am always thoroughly convinced of an idea, I always make sure to inform everybody about it. So, for their patience with me, and many other things, I would like to thank everybody that had to deal with one of my enthusiastic rants.


Abstract

Robots are increasingly used as a motion platform during optical inspections. The motions provided by robots are fast, accurate and reliable, which decreases the cost of performing these inspections. However, before a robot can perform an inspection task, it is necessary to program an efficient inspection path that the robot can execute. This programming task is challenging and requires specialized knowledge of both robotics and inspection techniques. The need for such specialized knowledge limits the rate of adoption of robots for inspections.

In this thesis, we investigate two different solutions to the robotic inspection planning problem that avoid relying on experts. The first approach eliminates the need for any user input by solving the inspection planning problem automatically. To this end, we present new automated inspection planning algorithms. These algorithms improve over existing algorithms by being able to solve larger-scale, more challenging and thus more realistic inspection planning problems. The algorithms are also more general, which ensures that they can solve a wide variety of different inspection tasks.

The second, entirely new approach involves users in the inspection planning problem. The abstract concepts that experts use when solving the inspection planning problem are presented to users through intuitive visualizations in virtual reality. With these intuitive visualizations, inexperienced users can manually define high-quality robotic inspection paths.


Samenvatting

Robots worden steeds vaker gebruikt als bewegingsplatform tijdens optische inspecties. De bewegingen die door robots gegenereerd worden zijn snel, nauwkeurig en betrouwbaar, waardoor de kosten voor het uitvoeren van deze inspecties dalen. Voordat een robot een inspectietaak kan uitvoeren, is het echter noodzakelijk om een efficiënt inspectiepad te programmeren. Deze programmeertaak is uitdagend en vereist zowel gespecialiseerde kennis van robotica als kennis van inspectietechnieken. De behoefte aan dergelijke specialistische kennis beperkt de snelheid waarmee robots voor inspecties worden ingezet.

In dit proefschrift zullen we twee verschillende oplossingen onderzoeken die het robotische inspectieplanningsprobleem oplossen, waarbij geen beroep wordt gedaan op experts. De eerste benadering elimineert de noodzaak voor gebruikersinput door het inspectieplanningsprobleem automatisch op te lossen. In dit proefschrift zullen we daarom nieuwe geautomatiseerde inspectieplanningsalgoritmes presenteren. Deze algoritmes zijn een verbetering ten opzichte van bestaande algoritmes, omdat ze in staat zijn om grotere, meer uitdagende en dus realistische inspectieplanningsproblemen op te lossen. De algoritmes zijn ook algemener, wat ervoor zorgt dat ze een grote verscheidenheid aan verschillende inspectietaken kunnen oplossen.

De tweede, geheel nieuwe aanpak betrekt gebruikers bij het inspectieplanningsprobleem. De abstracte concepten die experts gebruiken bij het oplossen van het inspectieplanningsprobleem worden voor gebruikers gevisualiseerd in intuïtieve visualisaties in de virtuele realiteit. Met deze intuïtieve visualisaties kunnen onervaren gebruikers handmatig hoogwaardige robotinspectiepaden definiëren.


List of abbreviations

• COP: Correlated orienteering problem

• C-space: Configuration space (robot)

• DLS: Damped least squares

• DOF: Degree-of-freedom

• FPS: Frames per second

• GCB: Generalized cost-benefit (algorithm)

• GUI: Graphical user interface

• HK bound: Held-Karp bound

• IK: Inverse kinematics

• MGDA: Multiple gradient descent algorithm

• NDT: Non-destructive testing

• SCP: Set covering problem

• TCP: Tool-center point

• TSP: Travelling salesman problem

• T-space: Task space (robot)

• UAV: Unmanned aerial vehicle

• VR: Virtual reality

• VTK: Visualization toolkit


List of commonly used notation

• M = {m_1, ..., m_k}: The set of all points that need to be inspected.

• V = {v_1, ..., v_n}: The set of all view poses (position + orientation).

• X = {x_1, ..., x_s}: The solution set of an algorithm (a subset of the view poses).

• P = {p_1, ..., p_l}: The set of all view positions (viewpoints without orientation).

• O = {o_1, ..., o_r}: The set of all view orientations (viewpoints without position).

• E = {e_1, ..., e_m}: The set of edges that connect view poses.

• B: The maximum budget of an inspection.

• Q(m, v): A function that describes how well a point m that needs to be inspected can be inspected from a view pose v.

• G(m, X): A function that fuses quality values from multiple measurements into a single quality value for each point m that needs to be inspected.

• f(X): The global quality of an inspection over multiple measurements in X.

• C(X): The cost associated with travelling through all the view poses in X.

• SO(3): The group of all rotations in three-dimensional Euclidean space.

• SE(3): The group of all rigid motions in three-dimensional Euclidean space.

• se(3): The Lie algebra of SE(3).
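To make this notation concrete, the interplay of Q, G and f can be sketched in a few lines of Python. This is only an illustrative toy model, not the implementation used in this thesis: the distance and incidence-angle fall-off in Q, the front-facing check, and the max-fusion choice for G are all placeholder assumptions.

```python
import math

def Q(m, v):
    """Quality with which point m is seen from view pose v.

    Placeholder model: quality falls off with distance and with the
    angle between the viewing ray and the surface normal at m.
    m = (point, normal), v = (position, viewing direction); 3-vectors.
    """
    (p_m, n_m), (p_v, d_v) = m, v
    ray = [a - b for a, b in zip(p_m, p_v)]          # sensor -> point
    dist = math.sqrt(sum(c * c for c in ray))
    if dist == 0.0:
        return 0.0
    ray = [c / dist for c in ray]
    if sum(a * b for a, b in zip(d_v, ray)) <= 0.0:   # point behind sensor
        return 0.0
    facing = max(0.0, -sum(a * b for a, b in zip(ray, n_m)))  # cos(incidence)
    return facing / (1.0 + dist)                      # in [0, 1), higher is better

def G(m, X):
    """Fuse per-view qualities into one value for point m (max fusion assumed)."""
    return max((Q(m, v) for v in X), default=0.0)

def f(X, M):
    """Global inspection quality: sum of fused qualities over all points in M."""
    return sum(G(m, X) for m in M)
```

With max fusion, f is monotone and submodular in X: adding a view pose to a small solution set improves the quality at least as much as adding it to any larger superset. This is precisely the structure that budgeted greedy planners such as the GCB algorithm of Chapter 4 exploit.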


Chapter 1

Introduction

1.1 Research context

Routine inspections of objects and structures are necessary to ensure their structural health and thus to avoid undesired breakdowns. During the production of mechanical components, inspections can be necessary to monitor the structural health of components during the production process. This is necessary because production processes are not perfect, which can lead to defects. In this case, inspections can be performed on each object, in the aerospace industry for example, or on a few objects of a larger batch in less critical industries. On the other hand, inspections of large structures such as bridges or wind turbines are performed regularly during the lifetime of these structures. In Figure 1.1, some real-world robotic inspections are shown. While destructive testing destroys the objects during the testing process, and thus can only be used on a few objects of a larger batch, non-destructive tests leave the object unharmed. In this work, we focus exclusively on optical non-destructive testing (NDT) methodologies.

NDT is a broad area that targets the characterization of an object or component. Different object characteristics are typically measured: dimensions, vibrations, temperatures, etc. In dimensional metrology, for example, the dimensional tolerances of objects are validated. In addition, techniques that spot subsurface defects, such as thermography, are also available. In this work, we restrict our focus to optical measurement techniques. These methods make use of light that is reflected or scattered by the object to measure characteristics of the object.

In this thesis, we develop a general representation of inspection tasks. This model can then represent specific inspection techniques, such as thermography. To make the model as concrete as possible, we also provide a set of examples. These examples show how specific inspection techniques fit within our general inspection model. However, this set of examples is by no means an exhaustive list of the inspection techniques that the model can represent.

Figure 1.1: Examples of real-world optical inspections. A fringe projection system attached to a robot arm for the dimensional inspection of aluminum parts (top left) [1]. An active thermography system attached to a robot arm for the thermal inspection of composite parts (bottom left) [2]. An optical camera attached to a drone for the inspection of wind turbines (top right) [3]. Optical cameras (and lighting) attached to a robot arm for the optical inspection of a car engine (bottom right) [4].

Performing routine inspections is expensive. This can be due to the high cost of measurement devices, machine downtime, or increased lead time in the production of parts. Because of these financial incentives, it is necessary to develop methodologies that increase the speed of measurements. The trend of using optical measurement techniques in dimensional metrology, for example, is aimed at drastically decreasing inspection times. A more general trend in optical NDT is to replace the operators that perform measurements with robots. Measurement devices are then attached to the robot, which provides the motion. These robots can perform measurements faster than human operators and can do so 24/7 [5]. Robots can also inspect more complex geometries, because the capacity of operators to inspect complex shapes is limited. An additional advantage of using robots is that they can perform repetitive motions. The fact that measurements are performed under the same conditions across different measurements makes the analysis of measurement data easier and more reliable. An undervalued advantage is that the use of robots makes it possible to create complete and closed software packages. This results in a more seamless workflow, which makes the adoption of NDT techniques in production processes easier. This last argument also fits into what Microsoft, in its Manufacturing Trends Report, calls the convergence of IT (information technologies) and OT (operational technologies) as part of Industry 4.0 [6]. Important in this trend is that the planning of inspections (IT) and the collection and interpretation of measurement data (OT) are treated in a seamless workflow.

Figure 1.2: This image visualizes the end goal of this thesis. The green path is an inspection path that can be executed by the robot. The expected measurement quality is depicted on the object of interest (a bicycle frame). The goal of this thesis is to find efficient inspection paths that maximize this quality.

However, before a robot can be used as a motion platform for optical NDT measurements, it is necessary to find efficient inspection paths. Such an inspection path is shown in Figure 1.2. This thesis is aimed at finding efficient methods to find robot inspection paths.

1.2 Current manufacturing trends

An essential aspect of the developed methodologies is that we aim for industrial relevance. With that goal in mind, in this section we sketch some relevant manufacturing trends and how they are connected to optical inspections.

As the backbone of this sketch, we walk through all the steps from the design of a product to the implementation of the robotic inspection system. We also point out the technological drivers behind these trends, and how they will impact the inspections performed in the future.

The first step in manufacturing a product is to design it. This design process is rapidly changing due to the breakthrough of additive manufacturing in industry. According to the Microsoft 2019 Manufacturing Trends Report, two-thirds of American companies are already using additive manufacturing to some extent, a number which will only rise in the future [6]. With additive manufacturing, it is possible to create highly specialized products at a much lower cost. Together with the trend to personalize products, this will lead to more varied products being produced in smaller quantities [6]. Furthermore, with additive manufacturing it is also possible to perform automated topology optimization of products to save weight or material [7]. Topology optimization leads to more complicated geometries. To summarize, future trends point towards more complicated and more diverse objects being produced.

Figure 1.3: This figure shows the different steps from designing a product to implementing inspections. For each step, there are (future) manufacturing trends that impose requirements on technological solutions. These trends are driven by important technological drivers. [Diagram: the steps Design, Requirements, Plan inspection, and Implementation; trends such as smaller batch sizes with more variety, more complex geometries, minimized cycle time, zero-defect manufacturing, six sigma, and simulations; technological drivers including additive manufacturing, topology optimization, Industry 4.0, digital twins, the convergence of operational and information technologies, and automation.]

After a part is designed, it is essential to determine the requirements of the object. The inspection technique should then be applied to test whether the products adhere to these requirements. Six sigma and zero-defect manufacturing are essential trends in manufacturing that aim to reduce production errors to very low numbers. Increasing the number, or the rigour, of inspections is an essential tool in bringing down the number of production errors. For additive manufacturing in particular, it is vital to perform thorough inspections [8]. So, it is clear that the importance of inspections will likely grow and that more inspections will be performed. To avoid increasing the lead times of products, it is therefore necessary to perform inspections faster.

After the product is designed, and its requirements are determined, an inspection plan must be prepared. It is important that this inspection plan makes it possible to check whether the product requirements are met. To not deviate too much, we assume that the robotic measurement setup is already in place and working. Obtaining a high-quality inspection plan is a challenging matter. A practitioner must be experienced in the inspection methodology that is used, to ensure that the product requirements can be checked using the captured data. In optical measurements, reflections, for example, can diminish the quality of measurement data and must therefore be avoided. On the other hand, the practitioner must also program a robot to perform this inspection plan, which requires the necessary expertise. To make matters even more complex, this measurement plan must minimize cycle time. Future trends indicate that the time practitioners get to create an inspection plan will decrease dramatically. The adoption of additive manufacturing, for example, reduces the time available to design the measurement process. In aerospace, for example, additive manufacturing reduces the time from product design to production from six months to one week [6].

After the measurement process is planned, it must be implemented in production. This implementation is very time-critical, since any delay halts the production. A future trend that helps with this process is the increased adoption of digital twins [6]. Digital twins aim to accurately model real-world systems so that they can be used in simulations. With accurate simulations of the real production line during the design process, a more relevant inspection plan can be developed. However, errors in the inspection plan cannot be excluded, since inspection planning is inherently a challenging problem.

1.3 Problem statement

In this thesis, we develop methodologies to generate inspection plans for robotic optical measurements. It is essential that practitioners can use these techniques without any specialized knowledge of either robotics or optical inspection techniques. The techniques should also be as general as possible, so that many different types of optical inspection devices can easily be paired with different robots.

Problem Statement

Minimize the need for user expertise in designing high-quality inspection plans for robotic inspections with optical measurement techniques.

The requirement for industrial relevance introduces some requirements for the developed methodologies:

1. The methodologies must be flexible to adapt to different product requirements.

2. The methodologies must minimize the required cycle time.

3. The methodologies must work for complex objects.

4. The methodologies must generate inspection plans sufficiently fast.

Figure 1.4: This scheme shows the different contributions in this thesis and how they are related. Two related problems are studied: the robotic inspection planning problem and the camera network design problem. In the robotic inspection planning problem, any path planning solution can serve as input for a path optimization post-processing step. [Diagram: robotic inspection planning comprises path planning, addressed by an automated algorithm (Chapter 4) and by human augmentation (Chapter 6), and path optimization, addressed by an automated algorithm (Chapter 3); camera network design is addressed by human augmentation (Chapter 5).]

In this thesis, we will develop two types of methodologies that try to meet these requirements. The first type of approach uses automated algorithms to generate high-quality inspection plans. A user can then set up a digital twin, provide some product requirements, and obtain an inspection plan by running the algorithm. This way, the required involvement of a practitioner is minimized. The second type of approach does the opposite: it depends on a high level of involvement from the practitioner. These approaches maximize the ability of the practitioner to influence the final solution. In these methodologies, users are responsible for defining inspection plans. To minimize the need for user experience, we use virtual reality to increase the intuitiveness of the planning task. To achieve this, interactive visualizations and interactions are used to replace experience. Since the user is ultimately responsible for planning the inspections, this type of solution is more flexible.

1.4 Contributions

This thesis presents a new automated inspection planning method (Chapter 4). Furthermore, it presents the first real inspection path optimization method (Chapter 3). Finally, it investigates a whole new domain in which users are responsible for the inspection path planning task but are assisted by virtual reality visualizations and interactions (Chapters 5 & 6). Since we cover multiple different techniques, we discuss the relevant literature in each chapter. In the remainder of this section, however, we give the general idea of the academic relevance of each method.


The new automated inspection planning method is the first near-optimal method that is focused on scalability to complex real-world robotic inspection problems. Near-optimality means that the inspection paths it generates are guaranteed to be at most a fixed percentage worse than the optimal solution. This algorithm bridges the gap between algorithms that can solve real-world inspection planning tasks [9, 10, 11, 12, 13, 14, 15] and algorithms with theoretical guarantees [16, 17, 18]. However, paths generated by path planning methods are typically not suited for direct usage. In a traditional path planning workflow, a path planning stage is followed by a path optimization stage [19, 20, 21]. In this stage, paths are locally improved to be shorter, smoother, or better according to some other criterion. For inspection paths, no real inspection optimization method exists yet. We therefore present a gradient-based optimization strategy that improves inspection paths locally while respecting the inspection task. This work resulted in a publication (2). This inspection path optimization method can be used after any inspection planning method. Any complete and usable software package should include an inspection path optimization stage. Our experiments also indicate that lead times can be significantly reduced.
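The cost-benefit idea behind such budgeted planners can be illustrated with a generic greedy loop: at every step, add the view pose with the best marginal quality gain per unit of marginal travel cost, until the budget B is exhausted. This is only a schematic sketch of that idea, not the algorithm of Chapter 4; the quality function f, cost function C, and tie-breaking used here are placeholder assumptions.

```python
def greedy_cost_benefit(V, f, C, B):
    """Greedily build a solution X (a subset of the view poses V) under budget B.

    f(X): global inspection quality (assumed monotone submodular).
    C(X): travel cost of visiting all poses in X.
    At each step, pick the pose with the highest marginal quality gain
    per unit of marginal cost, while staying within the budget B.
    """
    X = []
    candidates = set(range(len(V)))
    while candidates:
        best, best_ratio = None, 0.0
        for i in candidates:
            trial = X + [V[i]]
            extra_cost = C(trial) - C(X)
            if C(trial) > B or extra_cost <= 0:
                continue  # over budget, or no meaningful marginal cost
            ratio = (f(trial) - f(X)) / extra_cost
            if ratio > best_ratio:
                best, best_ratio = i, ratio
        if best is None:
            break  # no candidate improves quality within the budget
        X.append(V[best])
        candidates.remove(best)
    return X
```

For monotone submodular f, greedy schemes of this flavour are what give cost-benefit planners their approximation guarantees relative to the optimal budgeted solution.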

Besides automated algorithms, this thesis also investigates how users can be involved in inspection path planning. The direct involvement of users creates workflows that are easier and faster to set up. Human involvement in the inspection path planning problem has not been investigated before. This investigation is divided into two parts. The first part investigates human involvement in camera network design. This work resulted in a journal publication (1). Camera network design is a problem that is related to the robotic inspection planning problem. In this first step, we investigate how users can interactively optimize inspection quality without having to worry about robotic constraints. This is contrary to the traditional approach of using automated algorithms for the camera network design problem [22, 23, 24]. Finally, we investigate human involvement in robotic inspection planning. Here, we investigate how users can optimize inspection quality while at the same time dealing with robotic constraints.

1.5 Outline

Before we develop new methods to solve the robotic inspection planning problem, we first give some necessary background information about both optical inspections and robot systems (Chapter 2). In this chapter, we also introduce a general model for optical inspections. This model is later used to represent different inspection techniques.

In Chapter 3, we will develop a new inspection path optimization method. We do this before any path planning methods since it is independent of any path planning method, and it is widely applicable. In Chapter 4, we will design a new inspection path planning method. In Chapter 5, we will start by investigating human involvement in the camera network design problem. This builds up to Chapter 6, which investigates human involvement in the robotic inspection planning problem.

1.6 Publications

A1 Journal Papers

First author

1. Bogaerts B, Sels S, Vanlanduit S and Penne R. 2019. “Interactive Camera Network Design Using a Virtual Reality Interface”. Sensors 19 (5). doi: 10.3390/S19051003.

2. Bogaerts B, Sels S, Vanlanduit S and Penne R. 2018. “A Gradient-based Inspection Path Optimization Approach”. IEEE Robotics and Automation Letters 3 (3): 2646-53. doi: 10.1109/LRA.2018.2827161.

3. Bogaerts B, Penne R, Sels S, Ribbens B and Vanlanduit S. 2016. “A Simple Evaluation Procedure for Range Camera Measurement Quality”. Lecture Notes in Computer Science 10016: 286-96.

co-author

4. Peeters J, Verspeek S, Sels S, Bogaerts B and Steenackers G. 2019. “Optimized Dynamic Line Scanning Thermography for Aircraft Structures”. Quantitative InfraRed Thermography Journal: 1-16. doi: 10.1080/17686733.2019.1589824.

5. Sels S, Vanlanduit S, Bogaerts B and Penne R. 2019. “Three-dimensional full-field vibration measurements using a handheld single-point laser Doppler vibrometer”. Mechanical Systems and Signal Processing 126: 427-38. doi: 10.1016/J.YMSSP.2019.02.024.

6. Sels S, Verspeek S, Ribbens B, Bogaerts B, Vanlanduit S, Penne R and Steenackers G. 2019. “A CAD matching method for 3D thermography of complex objects”. Infrared Physics & Technology 99: 15-27. doi: 10.1016/J.INFRARED.2019.04.014.

7. Peeters J, Louarroudi E, Bogaerts B, Sels S, Dirckx J and Steenackers G. 2018. “Active Thermography Setup Updating for NDE: A Comparative Study of Regression Techniques and Optimisation Routines with High Contrast Parameter Influences for Thermal Problems”. Optimization and Engineering 19 (1): 163-85. doi: 10.1007/S11081-017-9368-Z.


8. Peeters J, Bogaerts B, Sels S, Ribbens B, Dirckx J and Steenackers G. 2018. “Optimized Robotic Setup for Automated Active Thermography Using Advanced Path Planning and Visibility Study”. Applied Optics 57 (18): D123-29. doi: 10.1364/AO.57.00D123.

9. Sels S, Bogaerts B, Vanlanduit S and Penne R. 2018. “Extrinsic calibration of a laser galvanometric setup and a range camera”. Sensors 18 (5).

10. Sels S, Ribbens B, Bogaerts B, Peeters J and Vanlanduit S. 2017. “3D model assisted fully automated scanning laser Doppler vibrometer measurements”. Optics and Lasers in Engineering 99: 23-30.

11. Van Geem C, Bellen M, Bogaerts B, Beusen B, Berlmont B, Denys T, De Meulenaere P, Mertens L and Hellinckx P. 2016. “Sensors on vehicles (SENSOVO): proof-of-concept for road surface distress detection with wheel accelerations and ToF camera data collected by a fleet of ordinary vehicles”. Transportation Research Procedia 14: 2966-2975.

Submitted

12. (Under review) “Fast prototyping tools for human-robot interaction systems in virtual reality”.

13. (Under review) Bogaerts B, Sels S, Vanlanduit S and Penne R. 2019. “Near-Optimal Path Planning for Complex Robotic Inspection Tasks”. arXiv:1905.05528.

14. (Under review) Bogaerts B, Sels S, Vanlanduit S and Penne R. 2019. “Enabling Humans to Plan Inspection Paths Using a Virtual Reality Interface”. arXiv:1909.06077.

P1 Conference Proceedings

co-author

16. Peeters J, Verspeek S, Sels S, Bogaerts B and Steenackers G. 2018. “Optimised Dynamic Line Scanning Thermography for Aircraft Structures”. In QIRT 2018: 14th Quantitative Infrared Thermography Conference, 687-95. Quebec: QIRT Council. doi: 10.21611/QIRT.2018.077.

17. Ribbens B, Peeters J, Bogaerts B, Steenackers G, Sels S, Penne R and Van Barel G. 2016. “4D Active and Passive Thermography Measurement System Using a KUKA KR16 Robot and Time-of-flight Imaging”. In 13th Quantitative Infrared Thermography Conference: QIRT 2016, 4-8 July 2016, Gdansk, Poland, 670-77. Quebec: QIRT Council.


P3 Conference Proceedings

First author

19. Bogaerts B, Penne R, Ribbens B, Sels S and Vanlanduit S. 2018. “New Error Measures for Evaluating Algorithms That Estimate the Motion of a Range Camera”. VISIGRAPP 2018: Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, January 27-29, 2018, Funchal, Portugal / Imai, Francisco [edit.]. Vol. 5. Setúbal: Science and Technology Publications. doi: 10.5220/0006510005080515.

co-author

21. Sels S, Bogaerts B, Vanlanduit S and Penne R. 2018. “Easy sensor localization using an RGB camera”. Proceedings of ISMA2018 including USD2018.

22. Peeters J, Ribbens B, Bogaerts B, Sels S, Dirckx J and Steenackers G. 2017. “Optimized Setup Modification for Automated Active Thermography”. International Workshop on Advanced Infrared Technology and Applications, September 27-29, 2017, Quebec City, Canada / Maldague, X.P.V.


Chapter 2

Background

2.1 Introduction

In this chapter, we will discuss some vital background concepts around optical inspections and robot systems that will be used throughout this work. First, we will start with some basics about optical inspections from a very general perspective. To do this, we will not focus on any specific measurement technique but on the common aspects. While doing this, we will also develop a general inspection model that covers many inspection techniques and that will be used throughout this work. We will end this part with a few concrete examples of inspection techniques and how they fit within this framework. Finally, we will discuss some basic concepts of robotic systems and their digital twin.

In this chapter and the remainder of this work, we will not assume any background knowledge in either robotics or optical inspections. A roboticist with a limited understanding of optical inspections should, with this background, understand the remainder of this work, and vice versa.

2.2 General inspection model

The goal of this section is to develop a novel general model to represent optical inspections and the requirements that measurements need to fulfil. Other general models for coverage in camera networks exist [25] but are less appealing for optical inspections. This general model consists of two different parts:

1. Sensor visibility and coverage

2. Inspection quality


Figure 2.1: Each camera has a limited field of view, which is shown in these images (measurement frustums). Two cameras can see different parts of the objects (left). Their data must be combined to obtain complete measurements. The red area in the right image is invisible to the camera due to occlusion by a ridge.

The sensor visibility and coverage is a geometrical component related to inspections. The second part is a general quantification of measurement quality, which is needed to encode the requirements of measurements. This model will be used throughout this document.

The goal of this thesis is to find procedures that maximize this general inspection quality for measurement devices attached to a robot. To make matters more concrete, we will also provide a few examples of measurement devices used in industry and academia, and show how this general quality model applies specifically.

2.2.1 Sensor visibility and coverage

Sensor coverage and visibility are very general aspects of performing measurements. Coverage models the fact that the entire object of interest cannot be measured by positioning the measurement device at any single location; only part of an object is visible. To obtain complete measurements, multiple measurements are required. The measurement data from all these measurements can then be fused to obtain data for the whole object. These multiple measurements are necessary because, on the one hand, the measurement device has a limited field of view, and on the other hand, occlusions can occur. Occlusion is a result of limited visibility. The abstract notion of coverage then collects the combined surface percentage that is visible after several measurements from different positions. Note that the coverage percentage can already act as a quality measure for an inspection.

The field of view of a measurement device is formally defined as the extent of the physical world that can be seen from a specific measurement device pose. To make this more concrete, we will presume a regular optical camera under the pinhole model. This type of camera forms an image by collecting the light that enters the camera and is projected onto a sensor. Some light rays miss the sensor because of its finite size, rendering them invisible in the measurement data. For a regular camera, this field of view has the shape of a four-sided frustum (see Figure 2.1).

Occlusions happen when a view ray from the measurement device is blocked by some obstacle (Figure 2.1, right). All the geometry beyond this hit point is invisible to the measurement device. Because occlusion depends on the measurement scene and on the object that needs to be inspected, its computation is important. The mathematics that describes the structure of occlusion for polyhedral objects is called the aspect graph [26]. A triangular mesh, which is often used to represent an object that needs to be measured, is an example of a polyhedral object. The study of aspect graphs reveals an interesting fact about occlusion: its computation is complex and often infeasible. More precisely, the size of an aspect graph is O(n^9)¹ (here n is the number of vertices in a mesh) [26]. This result mainly indicates that an exact computation of occlusion for triangular meshes is already infeasible for simple objects. To circumvent this computational issue, it is necessary to discretize the inspected object into a discrete set of primitives. We will mainly work with points because visibility computations with points are the most efficient.

Different types of occlusion queries have been studied most extensively in the computer graphics community, mostly in the context of lighting [28]. While occlusion and visibility are not a central topic of this work, their practical computation affects relevant algorithmic choices. Therefore, we will restrict the discussion to the aspects that affect this work.

Two main algorithms are used to compute occlusion: one which is fast, and one which is accurate [28]. The first, fast method is a Z-buffer approach [29]. The second, accurate method uses ray tracing [30]. Both approaches will be used in this work. The Z-buffer approach works by rendering geometry from the viewpoint of a camera, using the commonly used, hardware-accelerated rasterization approach. The depth value at each pixel (i.e. the Z-value) is then stored in a Z-buffer. The visibility of a point can subsequently be evaluated by projecting it into the Z-buffer and checking whether its Z-value is smaller than the one in the Z-buffer. If this is the case, the point is visible. The disadvantage of this method is that a Z-buffer has a finite resolution, resulting in non-negligible discretization errors. Another issue is that query points that lie on a surface have a Z-value comparable to that of the surface.
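As an illustration of this query, a minimal sketch (not a production renderer) can test points given in the camera frame against a depth buffer under a pinhole model. The function name, the intrinsics and the bias value `eps` are our own illustrative choices; the bias absorbs the surface-point issue mentioned above:

```python
import numpy as np

def zbuffer_visibility(points, zbuffer, fx, fy, cx, cy, eps=1e-3):
    """Check point visibility against a depth buffer.

    points  : (N, 3) array in the camera frame (z > 0 in front of the camera)
    zbuffer : (H, W) array of depth values rendered from the same view
    fx, fy, cx, cy : pinhole intrinsics
    eps     : bias that avoids self-occlusion of points lying on a surface
    """
    h, w = zbuffer.shape
    visible = np.zeros(len(points), dtype=bool)
    for i, (x, y, z) in enumerate(points):
        if z <= 0:
            continue  # behind the camera
        u = int(round(fx * x / z + cx))
        v = int(round(fy * y / z + cy))
        if 0 <= u < w and 0 <= v < h:
            # visible if not farther than the rendered surface (plus a bias)
            visible[i] = z <= zbuffer[v, u] + eps
    return visible
```

In practice the Z-buffer itself would come from a hardware rasterization pass; here it is simply an array.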

With ray tracing, rays are traced through a scene. An occlusion query can then be performed by tracing a ray from the camera position to the target point. If the ray hits some geometry before reaching the point, the point is invisible. A disadvantage of ray tracing is that it is computationally more expensive than the Z-buffer approach. However, due to interest from the rendering and gaming community, ray tracing is a well-studied problem, for which very efficient implementations are available.

¹We will make extensive use of the ‘Big O’ notation. We refer to McDonnell [27] for a quick introduction to this topic.

Figure 2.2: The area of a defect or the intensity with which it is measured are two examples of effects that can be used to construct a quality function that assigns a quality to a view pose. The smaller rectangles represent images captured by a camera under specific measurement conditions.
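The ray-traced occlusion query can be sketched with the classic Möller-Trumbore ray/triangle intersection test. This toy version loops over all triangles, whereas efficient implementations use acceleration structures; the function names are our own:

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore ray/triangle test; returns the hit distance t,
    or None when the ray misses the triangle."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:
        return None                      # ray is parallel to the triangle
    inv = 1.0 / det
    s = origin - v0
    u = s.dot(p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = direction.dot(q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = e2.dot(q) * inv
    return t if t > eps else None

def point_visible(camera, point, triangles, eps=1e-6):
    """A point is visible when no triangle blocks the segment camera->point."""
    d = point - camera
    dist = np.linalg.norm(d)
    d = d / dist
    for v0, v1, v2 in triangles:
        t = ray_triangle(camera, d, v0, v1, v2)
        if t is not None and t < dist - eps:
            return False                 # an occluder is hit before the point
    return True
```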

The occlusion query methods that we discussed implicitly assume that the object of interest is discretized into a finite set of points. It is straightforward to perform this discretization by randomly and uniformly sampling the input mesh. Another option is for an operator to manually select a set of points depending on the measurement specifications. Yet another option is to discretize the object of interest into an octree or voxel map [31]. However, occlusion queries with points are used more often, making the available implementations better and faster.
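The random, uniform sampling mentioned above can be sketched as follows. Triangles are picked proportionally to their area so that the resulting points are uniform over the surface; the function name is illustrative:

```python
import numpy as np

def sample_mesh(vertices, faces, n, rng=None):
    """Draw n points uniformly from the surface of a triangular mesh.

    vertices : (V, 3) array, faces : (F, 3) integer array of vertex indices.
    A triangle is picked proportionally to its area, then a point is drawn
    uniformly inside it via barycentric coordinates."""
    rng = np.random.default_rng() if rng is None else rng
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    idx = rng.choice(len(faces), size=n, p=areas / areas.sum())
    # uniform barycentric sampling (fold the unit square onto the triangle)
    r1, r2 = rng.random(n), rng.random(n)
    flip = r1 + r2 > 1
    r1[flip], r2[flip] = 1 - r1[flip], 1 - r2[flip]
    a, b, c = v0[idx], v1[idx], v2[idx]
    return a + r1[:, None] * (b - a) + r2[:, None] * (c - a)
```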

Finally, the measurement coverage is the percentage of points that were measured during a measurement, after considering the limited field of view of the measurement device and occlusions.
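Given boolean visibility results per view, from either occlusion query, coverage is then a simple OR-fusion over the views. A minimal sketch, with an illustrative function name:

```python
import numpy as np

def coverage(visibility_per_view):
    """Fraction of sample points seen in at least one measurement.

    visibility_per_view : (k, n) boolean array; entry (i, j) is True when
    point j passes the field-of-view and occlusion tests for view i."""
    seen = np.any(visibility_per_view, axis=0)   # OR-fusion over the views
    return float(seen.mean())
```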

2.2.2 Inspection quality

Communities that study different measurement techniques tend to agree that not all measurements are equal [9, 13, 32, 33]. These differences in measurement quality are related to different measurement conditions. Which exact conditions have an impact differs per measurement principle.

In Figure 2.2, for example, we show the effect of two examples of measurement conditions. These are:


1. Projected feature area

2. Perceived light intensity for Lambertian materials

The projected area measures the size of a defect in the final measurement data. A larger area means that it is easier to detect defects. Another example is the intensity, which captures how bright an object appears in the measurement data. This brightness affects the signal-to-noise ratio in the final measurement data. These are only two examples of elements that can affect the quality of measurement data. Some more examples from the literature are:

1. Effect of light travelling through diffuse media [34]

2. Calibrated uncertainty characteristics of a measurement device [35]

3. Thermal emissivity of materials [32]

A quality function then joins the effects with the most influence into one function. While it may seem challenging to compose quality functions, some general principles recur. The first set of principles is geometrical, of which an example is shown in Figure 2.2 (left). The second set of principles tries to model the signal-to-noise ratio, or the measurement bias, of measurements. These last effects, since optical measurements are considered in this work, depend on the properties of light. Electromagnetic waves, emitted by the measurement device itself or by environment lights, interact with the object that needs to be inspected and return to the measurement sensors for analysis. The quality of this analysis is then dependent on the amount of light that returns to the measurement device. With this observation in mind, it is no surprise that many quality functions are built up similarly.

In this work, we assume that each quality function is normalized. This means the value that represents measurement quality ranges from zero to one. A quality of one indicates a perfect measurement, while a measurement with a quality of zero is worthless. Note that the concept of coverage, introduced in the previous section, can act as an inspection quality function by itself; in that case, as long as a point is visible to the measurement device, it counts as a perfect measurement.

2.2.3 Formal definition of inspection quality

The above discussion on quality functions was qualitative. In this section, we will formally define how they can be constructed. The object that needs to be inspected is represented by a finite collection of points M = {m_1, ..., m_n}, as discussed before. We furthermore assume that the measurement device is positioned in a view pose v. This view pose consists of a sensor position and orientation.


The first element of inspection quality functions is modelled as a function Q:

q = Q(m, v). (2.1)

This function Q computes, for any combination of a surface point m and a view pose v, a quality value q as a non-negative number that is bounded above. As discussed in subsection 2.2.2, this function depends on the measurement characteristics of the inspection task. For ease of presentation, we assume that the visibility, as discussed in subsection 2.2.1, is fused into the function Q. This is easily accomplished by setting Q(m, v) = 0 if point m is not visible from v. This first element of inspection quality functions only models the quality of one measurement of one surface point. The next element fuses the quality of multiple measurements of a single point as follows:

g = G(m, X). (2.2)

Here, the function G fuses the quality of a set of view poses X = {x_1, ..., x_k} for an inspection point m. The function G is typically not dependent on the measurement principle, but on the processing of the data. If, for example, the data is fused by selecting the data of the best measurement, this corresponds to max_{x∈X} Q(m, x). Other options exist, such as the fusion of measurement uncertainty [36], but they are used less frequently.

The last element fuses the quality over multiple inspection points:

f(X) = Σ_{m∈M} G(m, X). (2.3)

This function f measures the inspection quality, which was the goal of this section. While it is possible to change these functions, we are not aware of any works that do so. Different choices of the functions f, G and Q can represent almost any notion of quality. Note that the concept of sensor coverage as discussed in subsection 2.2.1 is also an inspection quality function, in which measurement quality is either zero or one.
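The three equations translate directly into a few lines of code. A minimal sketch, in which the problem-specific Q is passed in as a callable and the function name is our own:

```python
def inspection_quality(points, views, Q):
    """f(X) = sum over points m of G(m, X), with the best-view fusion
    G(m, X) = max over x in X of Q(m, x), i.e. Equations (2.1)-(2.3).

    points : iterable of surface points m
    views  : iterable of view poses x
    Q      : callable Q(m, x) -> quality in [0, 1], 0 for occluded points"""
    total = 0.0
    for m in points:
        total += max((Q(m, x) for x in views), default=0.0)
    return total
```

Other fusions G are obtained by replacing the `max` with the corresponding operation.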

2.2.4 Composite measurement devices

Until now, we have implicitly assumed that a measurement system consists of a single device. However, this representation is too simplistic to represent many real-world measurement systems. A simple example of a composite measurement device is a fringe projection system, which combines a projector and a camera. The projector projects a pattern on the object of interest, while the camera records deformations in this pattern to determine the geometry of the object. For such a system, it is necessary that both the camera and the projector see the same part of the surface at the same time. A part of the surface that is only visible from one device cannot be measured. The effect on the sensor coverage is modelled by a logical AND operation. Thus, sensor coverage can easily be adapted by applying logical AND or OR relations between the multiple components that make up a measurement device. The effect on the inspection quality is less straightforward, since these functions are typically custom. However, even if a custom inspection quality function is defined (i.e. a different Q), it is clear that the measurement planning problem does not change much. Some other examples of composite measurement devices are:

1. Stereo camera system with two optical cameras

2. Active thermography where the object is actively heated with a lamp before it is measured with a thermal camera

3. Solar panel inspection with an optical and thermal camera
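For the coverage component, the AND/OR combination rules can be sketched generically; the helper names are illustrative:

```python
import numpy as np

def and_visibility(*component_masks):
    """Composite devices such as fringe projection: a point only counts
    when every component of the device sees it (logical AND per point)."""
    return np.logical_and.reduce(component_masks)

def or_visibility(*component_masks):
    """Fusion schemes in which any single component suffices (logical OR)."""
    return np.logical_or.reduce(component_masks)
```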

2.2.5 Example: Uncertainty in dimensional metrology

Figure 2.3: Example of a robotic laser line scanner that is used at NASA to inspect components [37].

The discussion in this section is based on Mahmud et al. [38]. This article shows how inspection paths can be constructed to minimize the uncertainty in dimensional inspections. In this section, we will show how the preference for measurements with low measurement uncertainty leads to an inspection quality function. The derivation focuses on measurements with a laser line scanner attached to a coordinate measurement machine (CMM). However, only the details of the derived quality function depend on this specific measurement setup and technique.

For dimensional inspection setups, it is important to conform to ISO standards. ISO standard 14253 deals with the conformity of geometric product specifications [39]. The following rule applies:

k · u_c / IT ≤ 1/8. (2.4)

Here, u_c is a combined uncertainty estimate of a measurement, obtained by applying the law of propagation of uncertainty to all individual sources of uncertainty, and k is the coverage factor (e.g. a coverage factor of 2 provides a confidence level of a conformity statement about the product of approximately 95% for a Gaussian distribution on u_c). Finally, IT is the tolerance interval of the geometrical specification that can be provided. This means that when a tolerance interval IT is chosen as a requirement for the inspection, Equation 2.4 acts as a threshold on the measurement uncertainty u_c.

The next step is to determine the measurement uncertainty u_c for the complete measurement process. This measurement uncertainty is an accumulation of many sources of error [40, 41]. The measurement device can introduce errors which are either systematic or random. However, intrinsic or extrinsic system calibrations or CMM positioning can also cause errors. These errors, which are assumed to be Gaussian and independent, can be accumulated as follows:

u_c = √(u_1² + ... + u_n²). (2.5)

It is known that the measurement errors of a laser line scanner depend on two parameters [35, 38, 41]. The first parameter, d, is the distance of the laser line scanner to the object that is being inspected. The second parameter is the measurement angle θ (see Figure 2.2). The exact dependence on d and θ is determined through calibration measurements. We will limit the discussion to the errors that are caused by the laser line scanner. For a discussion of other errors, we refer to the original work of Mahmud et al. [38].

This example fits in our general inspection model, by the following three elements:

1. Q(m, x) = 1 if and only if point m is visible and acceptable according to the ISO norm, and Q(m, x) = 0 otherwise.

2. G(m, X) = max_{x∈X} Q(m, x)

3. f(X) = Σ_{m∈M} G(m, X)
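A sketch of this thresholded quality function, assuming the dependence of the individual uncertainties on d and θ has already been resolved by calibration; the function name and the test values of k and IT are illustrative only:

```python
def laser_scanner_quality(visible, u_sources, k=2.0, tolerance_IT=0.1):
    """Q for the dimensional-metrology example: a measurement is perfect
    (Q = 1) when the point is visible and k * u_c / IT <= 1/8 holds
    (Equation 2.4), and worthless (Q = 0) otherwise. The individual
    uncertainties u_1..u_n are assumed independent and Gaussian, so they
    combine as u_c = sqrt(u_1^2 + ... + u_n^2) (Equation 2.5)."""
    u_c = sum(u * u for u in u_sources) ** 0.5
    return 1.0 if visible and k * u_c / tolerance_IT <= 1.0 / 8.0 else 0.0
```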

2.2.6 Example: Specular similarity in 3D reconstruction

A completely different problem for which a quality function can be constructed is multiview 3D reconstruction. In multiview reconstruction, the goal is to construct a 3D model of an environment or object from a set of images. 3D reconstruction has been used in drone-based inspections with optical cameras [14, 15].

An important factor in 3D reconstruction algorithms are the so-called photo consistency measures [42]. When a 3D point is measured in multiple images, its 3D location can be estimated as the intersection point of view rays. Before this calculation can be performed, it is necessary to find the point in multiple images. In practice, this is achieved by defining a similarity measure between image pixels.


Pixels that maximize this similarity are assumed to represent the same point. Multiple such measures are available; they are called photo consistency measures. As a general principle, these measures require a surface point to look the same in all images. This is the case if the colour of the point looks the same in every image. In reality, however, due to diffuse and specular lighting, the appearance of a point depends on the position from which it is viewed. A series of scene-space similarities model these effects using general reflection functions (i.e. BRDFs) [43].

In this example, we will neglect a typical concern in 3D reconstruction, namely 3D reconstruction accuracy. This is merely for the ease of presentation. A quality function that characterizes 3D reconstruction accuracy can be constructed [44]. We will instead model the requirement of 3D reconstruction accuracy as a distance constraint between view poses, D(v, v') ≥ d_0, in this example.

The most intuitive photo consistency measure is the variance of image intensities [42]. This variance is O(cos(θ)) for Lambertian materials (as in Figure 2.2). So, the variation in perceived colour is the difference in perceived intensity between two measurements. This example therefore fits in our general inspection model, by the following three elements:

1. Q(m, x) = max_{v∈V : D(x,v) ≥ d_0} |cos(θ_x) - cos(θ_v)|

2. G(m, X) = max_{x∈X} Q(m, x)

3. f(X) = Σ_{m∈M} G(m, X)

Here, Q depends on two viewpoints in the set of viewpoints. Note that the distance constraint is also included in Q as a constraint. In the second element, G is the max operation because the best correspondence is used in a 3D reconstruction. If multiple correspondences are used during the reconstruction, G can be changed to correspond with this operation.
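Taking the formula for Q literally, and assuming the baseline constraint D(x, v) ≥ d_0 has already been used to filter the admissible partner views, a sketch could be (the function name is our own):

```python
import math

def photo_consistency_quality(theta_x, partner_thetas):
    """Q(m, x) of the multiview example: the Lambertian intensity
    difference |cos(theta_x) - cos(theta_v)|, maximized over the partner
    views v that satisfy the baseline constraint (assumed pre-filtered
    into partner_thetas)."""
    return max((abs(math.cos(theta_x) - math.cos(t)) for t in partner_thetas),
               default=0.0)
```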

2.2.7 Example: Directional emissivity in thermography

The goal of infrared thermography is to detect (sub-)surface defects through the object's thermal behaviour. In active thermography, the object is thermally excited (heated or cooled), and an infrared camera measures its thermal response. Due to Kirchhoff's radiation law, however, the emitted thermal radiation is dependent on the angle of the outgoing radiation relative to the surface [45]. This effect results in undesirable temperature offsets in measurements that are dependent on the measurement angle and view position [32]. It is evident that an optimized measurement trajectory naturally tries to avoid such errors. Since this effect is entirely dependent on the geometry of the measurement object and the location of measurements, it is possible to estimate this effect in advance [32]. We will not elaborate on how this directional emissivity can be estimated, as this can be quite complex, but the following approximation can be used [32]:

ε(θ) = max( cos( √(P_1·θ + P_2) · π ), 0 ). (2.6)

In this equation, P_1 and P_2 are parameters that can be determined by numerical simulations or calibration measurements, and θ is the measurement angle. This equation ranges from zero to one, as is the norm in this work. During the processing of the data, the best measurement is stored for each model point m ∈ M. So this example fits in our general inspection model, by the following three elements:

1. Q(m, x) = ε(θ)

2. G(m, X) = max_{x∈X} Q(m, x)

3. f(X) = Σ_{m∈M} G(m, X)

Notice the generality of the last two elements. While the examples feature completely different inspection techniques, these steps are recurrent.
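A sketch of the approximation ε(θ) = max(cos(√(P_1·θ + P_2)·π), 0); the parameter values used below are illustrative only, since P_1 and P_2 must come from simulation or calibration:

```python
import math

def directional_emissivity(theta, p1, p2):
    """Approximate directional emissivity of Equation (2.6):
    eps(theta) = max(cos(sqrt(p1 * theta + p2) * pi), 0),
    which decays from one at normal incidence (for p2 = 0) to zero
    at grazing measurement angles."""
    return max(math.cos(math.sqrt(p1 * theta + p2) * math.pi), 0.0)
```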

2.3 Robot systems and digital twins

Robot systems are a crucial component in robotic inspection systems. From a practitioner's perspective, the robot should follow any desired path. Robots are, however, limited in the motions they can perform. So, during the construction of inspection paths, it is crucial to take the robotic limitations into account. In this section, we will discuss why robots are limited and which issues arise from these limitations. We will also give some details about computations that are not discussed explicitly in the following chapters.

We also distinguish between two fundamentally different types of robots: robot arms on the one hand, and field robots on the other. The fundamental difference between these two types is that robot arms are attached to the ground with their base, while field robots (e.g. drones) can move freely in the world. From this distinction, it is clear that field robots are less limited than robot arms, as they can move freely. So, while we will also consider field robots in our experiments, we will primarily focus on robot arms. In practice, robot arms are used more in production environments, while field robots are used more for the inspection of larger structures.


2.3.1 Robot kinematics

From our introduction to robots, it is clear that one of the most important aspects of a robot, for performing inspections, is its reachability. This reachability determines which measurement device poses can be reached, and maybe more importantly, which poses cannot be reached.

A very general description of what a robot can be is a kinematic graph [46, 47]. A kinematic chain is a chain of joints connected by rigid links. While general trees of joints are possible, we do not consider these. In general, three fundamental types of joints exist. The revolute joint allows rotation in a plane of the two attached rigid links relative to each other. The prismatic joint enables a linear translation in one direction of the two attached rigid links. Finally, the least common joint type, the spherical joint, allows rotation in three directions. Additionally, most practical joints also have movement limits (e.g. minimum or maximum values). With the freedom to choose any combination of these three joint types, in any chain, a large class of robots can be constructed. For the ease of presentation, we will restrict ourselves to the first two joint types, each of which has one degree of freedom.

The state of a robot can then be uniquely described by the position of each of its joints. This state can be described by a vector θ which consists of the individual joint values (i.e. [θ_1, ..., θ_n]). The space of all possible robot states is the configuration space (C-space) of the robot [48]. In this thesis, since the only goal of a robot in our context is to move a measurement device, we assume that the measurement device is placed on the end-effector. This means that the measurement device is attached to the last link of the kinematic chain. Each configuration of the robot places the measurement device in a pose (position + orientation) in the real world. The space of all end-effector poses is called the task space (T-space). The calculation of the pose of the end-effector from a point in the configuration space of a robot is called the forward kinematics calculation. This calculation is unique and relatively straightforward to perform [48]. The converse, however, calculating the position in C-space for which the end-effector reaches a desired pose in T-space, is far more challenging. This calculation is not guaranteed to be unique, or even possible, and is called the inverse kinematics calculation [49].
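For a planar chain of revolute joints, the forward kinematics calculation reduces to a short accumulation of joint angles and link lengths. A minimal sketch, only meant to make the C-space to T-space mapping concrete (real robots are three-dimensional):

```python
import math

def planar_fk(thetas, lengths):
    """Map a point in C-space (joint angles, in radians) to the end-effector
    pose in T-space (x, y, orientation) for a planar chain of revolute
    joints; joint i rotates a link of length lengths[i]."""
    x = y = phi = 0.0
    for theta, length in zip(thetas, lengths):
        phi += theta                # orientations accumulate along the chain
        x += length * math.cos(phi)
        y += length * math.sin(phi)
    return x, y, phi
```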

The specifics of inverse kinematics are somewhat involved. We will therefore restrict the discussion to the relevant aspects. To solve the inverse kinematics problem, two general approaches exist [49]. The first approach is an optimization-based approach in which joint values are optimized to minimize the distance towards the target pose. The distance used in the objective function can be specified flexibly. With this methodology, it is possible to include or omit rotations in the objective function. We will use this specific detail in our developed methods. The other approach is based on search heuristics. These are methods that use a set of proven tricks to find solutions to the inverse kinematics problem. We will mainly use two popular inverse kinematics methods, damped least squares (DLS) [50] and the pseudo-inverse method [51]. Both methods are examples of the optimization-based approach.

Figure 2.4: A digital twin typically contains visible meshes (left) for cosmetic purposes. Valid collision shapes are required for collision detection and can be a convex object (middle) or an octree (right).
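To make the DLS scheme mentioned above concrete, the following sketch applies it to a hypothetical planar 2-link arm. The link lengths, damping factor and iteration count are illustrative choices, not values from this thesis; the point is the iterative update dθ = Jᵀ(JJᵀ + λ²I)⁻¹e, which remains well behaved near singular configurations.

```python
import numpy as np

# Hypothetical planar 2-link arm with unit link lengths (illustrative only).
L1, L2 = 1.0, 1.0

def fk(theta):
    """Forward kinematics: joint vector -> end-effector position (unique, easy)."""
    t1, t2 = theta
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                     L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

def jacobian(theta):
    t1, t2 = theta
    return np.array([[-L1 * np.sin(t1) - L2 * np.sin(t1 + t2), -L2 * np.sin(t1 + t2)],
                     [ L1 * np.cos(t1) + L2 * np.cos(t1 + t2),  L2 * np.cos(t1 + t2)]])

def ik_dls(target, theta0, damping=0.1, iters=200):
    """Damped least squares IK: dtheta = J^T (J J^T + lambda^2 I)^{-1} e."""
    theta = np.array(theta0, dtype=float)
    for _ in range(iters):
        e = target - fk(theta)                       # task-space error
        J = jacobian(theta)
        theta += J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), e)
    return theta
```

Omitting the orientation from the error `e`, as this example does by only tracking position, is exactly the kind of flexibility in the objective function that the text refers to.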

2.3.2 Collision detection

The kinematic structure of a robot determines which positions in T-space can be reached by the robot. In real-world applications, obstacles in the environment exist, which also affect the reachability. Collisions of the robot with these obstacles make some parts of the robot's C-space inaccessible. Because any calculated inspection path must be executable by a real robot, we need to make sure that no collisions with the environment occur. To be able to guarantee this, we need to be able to compute whether collisions occur [52].

In this work, we will not go into the details of collision detection. However, for collision detection to be efficient, it is necessary to work with collision objects [53]. These are objects for which efficient collision computations exist. Collision detection for triangular meshes is highly inefficient since triangular meshes can be concave. A simple solution is to compute the convex hull of the triangular mesh to obtain a convex shape. The disadvantage of a convex shape is that the robot can never reach into cavities of an object, as this would register as a collision. Another option is to represent the mesh as an octree [31]. An octree is a search structure for 3D space in which partitions (cubes) can be either occupied or free. Each of these cubes is itself convex, and the search tree accelerates queries among the occupied cubes. While computations with octrees are typically slower than with convex objects, an octree can model cavities. Both a convex object and an octree are shown in Figure 2.4.
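The occupied/free subdivision of an octree can be sketched in a few lines. This is a minimal illustrative structure, not the thesis implementation: cubes containing surface points are recursively split, and a query only descends into occupied children, which is what makes lookups fast while still allowing cavities to be modelled.

```python
# Minimal octree occupancy sketch (illustrative, not the thesis implementation).
class Octree:
    def __init__(self, center, half, depth, points):
        self.center, self.half = center, half
        # keep only the points that fall inside this cube
        inside = [p for p in points
                  if all(abs(p[i] - center[i]) <= half for i in range(3))]
        self.occupied = len(inside) > 0
        self.children = []
        if self.occupied and depth > 0:
            h = half / 2.0
            for dx in (-h, h):
                for dy in (-h, h):
                    for dz in (-h, h):
                        c = (center[0] + dx, center[1] + dy, center[2] + dz)
                        child = Octree(c, h, depth - 1, inside)
                        if child.occupied:          # store occupied octants only
                            self.children.append(child)

    def query(self, p):
        """Return True if point p lies inside an occupied leaf cube."""
        if not self.occupied or any(abs(p[i] - self.center[i]) > self.half
                                    for i in range(3)):
            return False
        if not self.children:                        # occupied leaf
            return True
        return any(c.query(p) for c in self.children)
```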

As shown in Figure 2.4, even the robot itself and the measurement device must be represented by collision shapes. It is also clear that neighbouring robot links in the kinematic chain are permanently in contact. Collisions between these links are, in practical applications, ignored. Collisions between the measurement device and the robot end-effector are more delicate, however. A measurement device may restrict the motion of a robot by colliding with it. So during the modelling of the collision shape of the measurement device, care must be taken that these collision shapes only touch when the device and the robot actually collide in reality.

2.3.3 Path planning

Path planning for robots is a complex problem: it is computationally intractable, which has generated considerable academic interest [54]. Because of this interest, many different approaches exist. So, as with collision detection, we will limit the discussion in this section to the relevant aspects of the path planning problem.

The solutions to the path planning problem can be subdivided into two very distinct subcategories. The first category is motion planning, where the goal is to find a sequence of robot states (i.e. a path) that moves the robot from a starting state to a goal state. What makes motion planning interesting is the intent to find the shortest path. Due to the nature of this problem, even though the intent is to find the shortest path, the resulting paths are typically non-smooth [20, 21]. This roughness makes the paths locally inefficient, and the non-smoothness can cause abrupt motion, which causes unnecessary wear on the robot's joints. To solve this problem, a second major category of path planning considers the local optimization (or smoothing) of robot paths. A complete approach to path planning typically combines both methods [20, 21].

The most successful approach to motion planning in practice uses sampling-based algorithms [55, 56]. Sampling-based algorithms represent the robot's configuration space as a roadmap constructed from random samples. Neighbouring random samples are then connected by edges to obtain a discrete graph. Finally, a graph-based algorithm finds the shortest path in this graph from the starting state to the goal state (e.g. Dijkstra's algorithm [57]). Because the final path consists of a selection of random samples, it is clear why it is non-smooth. Especially for robot manipulators, there are however more issues that must be taken into account. The curse of dimensionality, for example, is a significant issue, especially for robot manipulators. Since the graph-based algorithms run on a discrete roadmap, their relevance to the real problem depends on the quality of the sampling. So, a roadmap without a sufficient number of samples will result in an inefficient path. This problem leads to an apparent contradiction for robotic systems. It is clear that adding degrees of freedom to a robot, which makes it more flexible, can only improve the efficiency of an optimal path. The extra freedom of the added DOFs provides extra motion capabilities, which can lead to more efficient motions. However, adding DOFs to the robot increases the size of its configuration space dramatically. So, a roadmap in this new, much larger C-space with the same number of samples will have a lower sampling quality. The resulting path will typically also be less efficient.
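The sample-connect-search pipeline described above can be sketched as a small PRM-style roadmap in a 2-D configuration space. The circular C-space obstacle, sample count and connection radius are illustrative assumptions, not values from the thesis; the structure (random collision-free samples, neighbour edges, Dijkstra on the resulting graph) is what matters.

```python
import math, heapq, random

# Illustrative circular obstacle in a unit-square 2-D C-space.
def collision_free(q, centre=(0.5, 0.5), radius=0.2):
    return math.hypot(q[0] - centre[0], q[1] - centre[1]) > radius

def edge_free(a, b, steps=10):
    # an edge represents infinitely many states; check a discretization of them
    return all(collision_free((a[0] + (b[0] - a[0]) * k / steps,
                               a[1] + (b[1] - a[1]) * k / steps))
               for k in range(steps + 1))

def build_roadmap(n_samples=150, connect_radius=0.3, seed=1):
    rng = random.Random(seed)
    nodes = [(0.05, 0.05), (0.95, 0.95)]            # start and goal states
    while len(nodes) < n_samples:
        q = (rng.random(), rng.random())
        if collision_free(q):
            nodes.append(q)
    edges = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):                      # connect neighbouring samples
        for j in range(i + 1, len(nodes)):
            d = math.dist(nodes[i], nodes[j])
            if d < connect_radius and edge_free(nodes[i], nodes[j]):
                edges[i].append((j, d))
                edges[j].append((i, d))
    return nodes, edges

def shortest_path(edges, start, goal):
    """Dijkstra's algorithm on the discrete roadmap graph."""
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in edges[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]
```

Note how the returned path zig-zags through random samples: this is precisely the non-smoothness the text attributes to sampling-based planners.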

Another problem that arises especially for robotic manipulators is collision detection.



Figure 2.5: This figure aims to clarify the topological challenges of a robot arm (see text); regions with one and with two inverse kinematics solutions are marked. Notice that a square grid in C-space results in a tangled curved grid in T-space.

Samples in the configuration space cannot be added to the roadmap if they collide with the environment. So, before adding states to the roadmap, it is necessary to perform collision checks. A problem, however, arises with the edges of the roadmap. Any edge represents an infinite number of states, each of which must be checked for collisions. To do this efficiently, continuous collision detection methods exist [58, 59]. These methods, however, rely on the assumption that the collision shapes perform a linear motion during their movement. For the links of a robot manipulator, this is far from the case [60]. As can be seen in Figure 4.6, these links move along curves for straight paths in C-space. A solution to this problem is to make the edges in C-space short enough for the linear assumption to be approximately correct. This again requires the samples of the roadmap to be dense.
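The short-edge workaround can be sketched as a resolution-based edge validator: the straight C-space segment is sampled at a fixed step so the linear-motion assumption holds approximately. The clearance function below (a circular C-space obstacle) is a hypothetical stand-in for a real distance-to-collision computation.

```python
import math

def clearance(q):
    """Hypothetical signed distance to collision at configuration q."""
    return math.hypot(q[0] - 1.0, q[1] - 1.0) - 0.3   # circular C-space obstacle

def edge_collision_free(qa, qb, step=0.05):
    """Validate a straight C-space segment at resolution `step`."""
    n = max(1, math.ceil(math.dist(qa, qb) / step))
    for k in range(n + 1):
        t = k / n
        q = (qa[0] + t * (qb[0] - qa[0]), qa[1] + t * (qb[1] - qa[1]))
        if clearance(q) <= 0.0:
            return False        # an intermediate state collides
    return True
```

Halving `step` doubles the number of clearance evaluations per edge, which is the cost the text refers to when it says dense roadmaps are required.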

Motion smoothing, finally, transforms the rough path resulting from the motion planning step into a smooth path that can be executed by the robot [19, 20, 21]. These methods typically move the random samples that define the path continuously, or replace the straight segments (edges) by curved paths. These modifications can either make the final path shorter or decrease the torque on the joints of the robot.

2.3.4 Topological challenges

In this section, we will discuss an issue that can arise in the planning of long robot trajectories that cover a large portion of the reachable volume of the robot. This issue is relatively unknown (or just rarely explicitly discussed), but has a significant impact on the problem of inspection planning. It is a consequence of the kinematic structure of the robot, and the discussion will be primarily based on Figure 2.5.

For a practitioner who is interested in performing measurements, the robot needs to follow a trajectory around an object. This trajectory is in T-space. The robot, however, must follow a trajectory in C-space such that its end-effector moves along the path in T-space. Since one point in T-space might correspond to multiple points in C-space, multiple trajectories are possible. On the other hand, path planning procedures must use tricks to work, since the path planning problem is intractable.

The issue arises from an incompatibility between the tricks that are required to make path planning procedures usable and the possibility of multiple path combinations. An important trick to make path planning of long paths tractable is to build a path incrementally (i.e. greedily). This means that partial paths must remain fixed when new pieces are added. As a result, pieces can be added to the path that cause problems later. Figure 2.5 illustrates this. Imagine that a partial path ends in the purple dot. For this dot in T-space, two configurations in C-space are possible. For the inspection task, it does not matter which one. Due to the greedy nature of the path planning solution, one of the two configurations is chosen (with no preference for either of them). For the inspection task, it would be best to add the purple arrow as a new piece to the path. This is possible in only one of the two configurations. The greedy choice that put the robot in one of the two configurations could furthermore have been made arbitrarily long in the past. So, the effect of certain path planning choices can become noticeable only after a long time. This effect is very noticeable in the inspection path planning problem, since it is a planning problem with a very long planning horizon [61].

2.3.5 Abstracting robot systems

The previous sections of this chapter showed that robot systems are complicated systems. In order to design algorithms that can solve the inspection planning problem for general robot systems, it is necessary to create a convenient abstraction of a robot system. For this abstraction, we will rely on a robot simulation environment. In all our experiments, we make use of the Virtual Robot Experimentation Platform, better known as V-REP [62]. Algorithms will then be able to send a predefined set of queries to this simulator. In this section, we will discuss which queries are possible and what they entail, after first discussing what information the robot simulator requires. The queries are:

1. Reachability query

2. Path planning query

3. Collision query

Figure 2.6: This figure shows a real robotic inspection system (left), together with its digital twin (middle) and the necessary collision objects (right). Note that the visible meshes (middle) are only cosmetic and in reality are not required. Further note that the collision shapes can deviate significantly from the true geometry, as long as the computed collisions remain relevant.

The robot simulator should contain a digital twin of the production line in which all relevant components are present. The most important part is the robot system. A digital twin of the robot system should contain the kinematic structure of the real robot and efficient collision objects. A second important part is the measurement device, which should be placed on the robot end-effector at precisely the same position as in the real system. A collision object of the measurement device is also required. Next, the measurement object should be provided together with its position, as well as a collision shape. And finally, a representation of the environment that is relevant in terms of collisions must be provided.

The methods that will be used to compute all preceding queries are also chosen in the robot simulator. The inverse kinematics solver must be specified, as well as the collision calculation module. Furthermore, it is necessary to provide collision masks that determine between which objects collisions are computed.

The first type of query that can be sent to the robot simulation environment is a reachability query. The input for this query is a pose in T-space. The simulator will then calculate whether there is a robot configuration for which this pose can be reached without colliding with the environment. For the algorithm that uses this query, it is not essential how the robot simulator performs this calculation.

The second type of query is a path planning query. The input of this query is a starting robot state (i.e. a point in C-space) and a target pose in T-space (which can eventually be replaced by a robot state in C-space). The simulator will then find a path that can be executed by a robot and that moves the robot to the desired position without colliding with the environment. Note that this type of query is considerably more expensive than a reachability query. This is important in the design of inspection planning algorithms, which need to be economical with this type of query.

Another query type is the collision query. This is a simple query type that answers, for a given robot configuration, whether it collides with the environment or not.
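The abstraction described above can be sketched as a small interface: planning algorithms see only the three query types, never how the simulator (e.g. V-REP) answers them. The class names and the trivial `DummySimulator` with its 1-D "reachable volume" are purely illustrative.

```python
from abc import ABC, abstractmethod

class RobotSimulator(ABC):
    """Abstraction of the robot simulation environment (sketch)."""

    @abstractmethod
    def reachable(self, pose):
        """Reachability query: is there a collision-free configuration whose
        end-effector attains `pose` (a pose in T-space)?"""

    @abstractmethod
    def plan_path(self, start_state, target_pose):
        """Path planning query (expensive): a collision-free path from a
        C-space state to a T-space pose, or None if no path exists."""

    @abstractmethod
    def in_collision(self, state):
        """Collision query: does the configuration `state` collide?"""

class DummySimulator(RobotSimulator):
    """Toy stand-in used to illustrate the interface."""
    def reachable(self, pose):
        return abs(pose[0]) <= 1.0              # toy 1-D reachable volume

    def plan_path(self, start_state, target_pose):
        if not self.reachable(target_pose):
            return None
        return [start_state, (target_pose[0],)]  # trivial two-waypoint path

    def in_collision(self, state):
        return False
```

An inspection planner written against `RobotSimulator` stays independent of the concrete simulator, which is the design goal this section argues for.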


Part I

Automated Inspection Planning Techniques


Chapter 3

Gradient Based Inspection Path Optimization

The research that is featured in this chapter has been published under the name “A gradient based inspection path optimization approach” [63].

3.1 Introduction

The goal of this section is to develop an inspection path post-processing technique. This technique improves the efficiency or quality of an inspection path. We achieve this by formulating the inspection path improvement goal as a local optimization problem, which can then be solved to improve the inspection path. The local nature of this optimization problem is the reason that this chapter precedes Chapter 4, which develops an inspection path planning approach (global optimization). Since local optimizations do not reason about the global structure of the problem, they are theoretically and conceptually more straightforward. In this chapter, we will assume without loss of generality that the inspection quality function is the coverage function. It is, however, straightforward to adapt the method to other inspection quality functions.

Traditionally there are two general classes of algorithms in inspection path planning:

1. Sampling-based algorithms

2. Decomposition-based algorithms

The sampling-based algorithms provide global optimality guarantees [64, 65]. Their drawback, however, is that the resulting paths are locally suboptimal [19, 20, 21]. This is because the optimality guarantees depend on infinitely many random samples. Because the sampling process is terminated after a finite time, the resulting path is locally suboptimal. This is a well-known drawback in the planning of normal robot paths. In inspection path planning, it is even worse, since the planning horizon is much longer, which results in a sparser sampling [61].

Decomposition-based algorithms decompose the inspection planning problem into two steps: a viewpoint selection phase is followed by a path planning phase that connects the viewpoints [10, 12, 66]. This two-step procedure is generally capable of solving more realistic inspection planning problems. The main drawback of this approach is that the resulting inspection paths are globally and locally suboptimal. So, regardless of the technique that is used to obtain an inspection path, it can still be improved by a local post-processing step.

In traditional path planning, the problem of local sub-optimality is solved by post-processing the paths [19, 20, 21]. In this step, the path is locally refined to optimize path length [21], path smoothness [19, 20] or another criterion. These techniques can potentially be used in inspection path planning by considering the measurement locations on the path as fixed waypoints. The path segments between these waypoints can then be optimized. However, this blind usage of these approaches results in suboptimal paths, as the task-freedom inherently present in observing a set of measurement locations is not leveraged. Points can be measured from multiple locations, which results in a task-freedom that is difficult to describe.

In this chapter, we propose a gradient-based path optimization approach. This method minimizes the length of an inspection path in the robot's configuration space, while it maintains the observability of the measurement points. In this method, we explicitly model the observability of points, which ensures that the optimization is not over-constrained. In this work, we focus on optimizing (i.e. minimizing) path length because, in a typical inspection application, measurement time is a more critical factor in operation cost than energy cost. We define a piecewise smooth differentiable function representing the sensor coverage of a path, and use the gradient of this function, together with the gradient of the path length, in a multiple gradient descent algorithm (MGDA). We show that the proposed procedure, which takes the task-freedom of observability into account, significantly outperforms a procedure that ignores this freedom.

First, we will discuss related work in Section 3.2. In Section 3.3 we mathematically formulate our optimization approach, and in Section 3.4 we discuss the proposed algorithm. We present an evaluation of the proposed method in Section 3.5, and conclude in Section 3.6.

3.2 Related work

In general, there are two main approaches to the post-processing of general paths. The first approach, path smoothing, uses fitting techniques on already efficient paths to optimize the smoothness of these paths [19]. These techniques are limited in the sense that they only take path smoothness into account, and generally cannot deal with other objectives.

The second approach, path optimization, optimizes the path towards some objective. A popular example of such a technique is CHOMP, which also optimizes path smoothness [20]. The drawback of this algorithm is that it requires pre-processing and has a significant algorithmic complexity. More recent work [21] proposes a gradient-based solution which optimizes path length directly. This work also provides a very efficient way of avoiding collisions by adding constraints to the optimization. However, Campana et al. [21] focus on paths of free robot movement, ignoring the task the robot is performing during the execution of the path.

Alatartsev et al. [67] model a robot task by relaxing the waypoints through which the path has to pass. Optimization consists of randomly perturbing the path and evaluating whether the path cost decreases. This random search strategy does not scale well to higher-dimensional problems. The work of Somani et al. [68] models grasping tasks as non-linear inequality constraints which can be used in an optimization procedure. The authors focus on optimizing manipulability rather than path complexity, and the constraints cannot be used for inspection tasks.

The key contributions of our approach in relation to the state of the art are:

1. We introduce a piecewise smooth differentiable function that models sensor coverage.

2. We formulate a multi-objective optimization strategy to optimize both coverage and path length, with preferential treatment of path length.

3. We show that the freedom within the under-constrained specification of aninspection task has the potential to decrease path length significantly.

3.3 Gradient-based optimization

In this section, we describe all the components of our gradient-based optimization procedure. For the length minimization component of the algorithm, we draw on the method introduced in [21]. To provide a complete overview of the approach, we will mention the most important steps, but for the details we refer to the original article. Next, we will discuss a piecewise smooth differentiable function relating a sensor trajectory to complete sensor coverage. We combine the gradient of the path length with the gradient of the path coverage in a Pareto-optimizing fashion, which will be discussed thereafter.


3.3.1 Notation

The measurement device is attached to a kinematic chain (i.e. robot). In the most general sense, this kinematic chain is described as a tree of joints. The state of the kinematic chain is described by a vector of joint parameters θ. This vector will be called a joint vector and corresponds to a point in the robot configuration space (C-space). In this work, we will focus on revolute joints, but an extension to any type of joint is straightforward [21]. Forward kinematics transforms this joint vector to an end-effector pose in task-space. This pose is described as a homogeneous transformation T_k ∈ SE(3), the space of rigid motions in 3-space. The sensor pose can be obtained by multiplying the end-effector pose with the hand-eye transformation between the robot end-effector and the measurement sensor [69]. In the rest of this work, we assume that the hand-eye transformation is included in the forward kinematics calculation. Linearized pose updates in task-space are implemented as Lie-algebra elements t_k ∈ se(3) [70, 71, 72]. This ensures that updated transformations remain on the manifold SE(3). Linearized pose updates are transformed from se(3) to SE(3) using the well-known exponential map [70, 71, 72]. This use of Lie algebra ensures that gradient-based optimization can be performed on manifolds (i.e. SE(3)). For more details on the use of Lie algebra in optimization we refer to [70, 71, 72].

A path is modelled as a collection of joint vectors θ_i. Between consecutive configurations, we assume linear motion in configuration space. We denote such a path as θ = [θ_1, ..., θ_N] for a path consisting of N poses. We denote single elements by normal letters (for example θ_i) and a collection of elements by bold letters (for example θ). We assume a discrete set of measurement locations on θ, denoted by θ_M = [θ_{j_1}, ..., θ_{j_M}] ⊆ θ, with M ≤ N.

The surface of the object geometry to be inspected is discretized into a finite set of S points. This set of points is uniformly sampled from the surface of the object [10, 33]. In this work, we have randomly and uniformly sampled a triangulated mesh representing the surface of the object.
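Uniform sampling of a triangulated mesh, as used above, is usually done in two stages: pick a triangle with probability proportional to its area, then pick a uniform barycentric point inside it (the square-root trick avoids clustering near a vertex). The sketch below is a generic illustration of this standard recipe, not the thesis code.

```python
import random, math

def triangle_area(a, b, c):
    """Area of a 3-D triangle via the cross-product magnitude."""
    ux, uy, uz = (b[i] - a[i] for i in range(3))
    vx, vy, vz = (c[i] - a[i] for i in range(3))
    cx, cy, cz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def sample_surface(triangles, S, seed=0):
    """Draw S points uniformly from a list of triangles (each 3 vertices)."""
    rng = random.Random(seed)
    areas = [triangle_area(*t) for t in triangles]
    points = []
    for _ in range(S):
        a, b, c = rng.choices(triangles, weights=areas)[0]  # area-weighted pick
        r1, r2 = rng.random(), rng.random()
        s = math.sqrt(r1)                                   # uniform barycentric
        w0, w1, w2 = 1.0 - s, s * (1.0 - r2), s * r2
        points.append(tuple(w0 * a[i] + w1 * b[i] + w2 * c[i] for i in range(3)))
    return points
```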

3.3.2 Gradient-based path simplification

This section will briefly summarize the work of [21]. The length of a discretized path θ can be computed as follows:

L(\boldsymbol{\theta}) = \sum_{k=1}^{N-1} \| \theta_k - \theta_{k+1} \|_W^2 . \qquad (3.1)

Note that this length is in joint-space, not in Euclidean space. The length metric W defines a Mahalanobis distance which weights the joints. We assume no correlations between joints, such that W is a diagonal matrix.

Figure 3.1: Visualisation of the coverage related functions. The black point represents the sensor location. The black lines mark the sensor frustum. These functions are defined over the entire 3-dimensional space, but we show a 2-dimensional slice of them. (a) level set function Φ and (b) function H(Φ).

This length can be differentiated with respect to the path parameters, resulting in the following gradient:

\frac{\partial L(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}} = \left( \left( (\theta_k - \theta_{k-1})^T - (\theta_{k+1} - \theta_k)^T \right) \cdot W^2 \right)_{k \in 2,\dots,N-1} . \qquad (3.2)

Both the first and the last path points are excluded, since we assume that these points are fixed. With this gradient we define a second-order gradient update decreasing the path length:

g_L = -\alpha_L \cdot H^{-1} \cdot \frac{\partial L(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}} . \qquad (3.3)

Where H is the constant Hessian of Equation 3.1, which is given in [21]. We determine the step size α_L later, in subsection 3.4.2.
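Equations 3.1-3.3 can be checked numerically on a toy 1-DOF path with fixed endpoints. Because the length objective is quadratic, the second-order update with step size α_L = 1 jumps directly to the evenly spaced interior waypoints; the path and weights below are illustrative assumptions, not thesis data.

```python
import numpy as np

def path_length_sq(theta, W):
    """Eq. 3.1: L = sum_k ||theta_k - theta_{k+1}||_W^2 with ||x||_W^2 = x^T W^2 x."""
    d = theta[:-1] - theta[1:]
    return float(np.sum((d @ (W @ W)) * d))

def length_gradient(theta, W):
    """Eq. 3.2: gradient w.r.t. the interior waypoints k = 2..N-1."""
    return 2.0 * (2.0 * theta[1:-1] - theta[:-2] - theta[2:]) @ (W @ W)

def length_hessian(n_interior, W):
    """Constant tridiagonal Hessian of Eq. 3.1 (single-DOF case)."""
    w2 = (W @ W)[0, 0]
    return w2 * (4.0 * np.eye(n_interior)
                 - 2.0 * np.eye(n_interior, k=1)
                 - 2.0 * np.eye(n_interior, k=-1))

# one update g_L = -alpha_L * H^{-1} * dL/dtheta (Eq. 3.3) with alpha_L = 1
theta = np.array([[0.0], [0.7], [-0.3], [1.5], [2.0]])   # rough path, ends fixed
W = np.eye(1)
g_L = -np.linalg.solve(length_hessian(3, W), length_gradient(theta, W))
theta[1:-1] += g_L
```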

3.3.3 Gradient-based coverage optimization

Coverage optimization intends to change the sensor trajectory such that more surface points become visible. A point is visible when it is inside the sensor measurement volume, also called the sensor frustum, in at least one measurement location of the sensor path. In a typical sensor, the frustum is modelled as a convex polytope described by six planes. This measurement frustum is defined by the sensor perspective angle, the aspect ratio, and the minimum/maximum scanning distance. The thick black lines in Figure 3.1 are an example of a typical sensor frustum. The plane coordinates of these planes, expressed in the sensor basis, are arranged such that their normals point towards the inside of the sensor frustum. With this arrangement of plane coordinates, a point is inside the measurement volume if and only if the signed point-plane distance is positive for all planes of the polytope. Using this information, we can associate a gradient with a path to increase the number of measured points.


We start by defining the following level-set function for a single sensor location:

\Phi(p, T) = \min_{i \in \{1,\dots,6\}} \left( r_i^T \cdot T \cdot p \right) . \qquad (3.4)

Parameters r_i are the plane coordinates of the faces of the convex polytope, and p is a surface point. Transformation T is the homogeneous transformation matrix transforming point p to the coordinate frame of the measurement sensor. A level-set function is a convenient way of representing a closed surface in a space. This function is zero on the surface itself, positive inside the surface, and negative outside the surface. Equation 3.4 is the signed distance to the closest plane of the convex polytope, as shown in Figure 3.1 (a), and is a valid level-set function. This function works for one sensor location, but can easily be modified to work with multiple sensor locations:

\Phi(p, \boldsymbol{T}) = \max_k \left( \Phi(p, T_k) \right) . \qquad (3.5)

Measurement locations \boldsymbol{T} = [T_1, ..., T_k, ..., T_M] are computed by performing a forward kinematics calculation on the measurement locations θ_M. The level-set function can be used to compose an error function which decreases for increasing sensor coverage. This function is given by the following expression:

f(\boldsymbol{T}) = - \sum_{j=1}^{S} H\left( \Phi(p_j, \boldsymbol{T}) \right) . \qquad (3.6)

With H(x) being a smoothed Heaviside step function, H(x) = \frac{1}{\pi} \left( \arctan(bx) + \frac{\pi}{2} \right), and not a Hessian as in Equation 3.3. Points p_j are sampled from the object geometry. This function warps the level-set function to one if a measured point is inside the sensor frustum, and to zero if it is outside, as illustrated in Figure 3.1 (b). Parameter b determines the steepness of this smoothed Heaviside step function and is determined experimentally. The function sums over all measurement locations, and the negation converts the maximum into a minimum. Note that this function is piecewise smooth, continuous and non-convex. The gradient required for gradient-based optimization is:

\frac{\partial f(\boldsymbol{T})}{\partial \boldsymbol{T}} = - \sum_{j=1}^{S} \frac{b \cdot (p_j^T \cdot r_i)}{\pi \cdot b^2 \cdot (r_i^T \cdot T_k \cdot p_j)^2 + \pi} . \qquad (3.7)

Indices i and k are determined by the minimum operation in Equation 3.4 and the maximum operation in Equation 3.5, respectively. These operations are the reason why the final function is only piecewise smooth. This gradient is only available in the M measurement locations along the robot path. The gradient at locations where no measurement is performed is set to zero. Using the chain rule of differentiation, the gradients can be converted from SE(3) to the configuration space of the measurement system. We also use a Lie-algebra implementation to keep the optimization on the manifold SE(3).

\frac{\partial f}{\partial \boldsymbol{\theta}} = \frac{\partial f}{\partial \boldsymbol{T}} \cdot \frac{\partial \boldsymbol{T}}{\partial t} \cdot \frac{\partial t}{\partial \boldsymbol{\theta}} \qquad (3.8)


Where t ∈ se(3) is the linearized pose update, and ∂t/∂θ is a variant of the commonly used robot Jacobian J. This variant differentiates to an element of se(3) instead of SE(3) [73]. For the computation of ∂t/∂θ we refer to [73], and for the computation of ∂T/∂t we refer to [72]. An extension to inspection quality functions more complicated than the coverage function is possible by a slight modification of this equation. Note that we assumed that Q includes the sensor limitations and visibility. A similar result can be accomplished by taking the product of the coverage function and an unaltered Q. The gradient of this more complex function can then be obtained using the chain rule of partial derivatives.

The final gradient update uses the second-order pseudo-inverse update rule commonly used in inverse kinematics:

g_f = -\alpha_f \cdot \frac{\partial f}{\partial \boldsymbol{T}} \cdot \frac{\partial \boldsymbol{T}}{\partial t} \cdot J^{\#} \qquad (3.9)

Where J^{\#} is the pseudo-inverse (J^T J)^{-1} J^T of the robot Jacobian J [74]. The step size α_f is determined in subsection 3.4.2.
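The level-set machinery of Equations 3.4-3.6 can be illustrated numerically. The six inward-pointing plane rows below describe a box-shaped stand-in for a sensor frustum (|x| ≤ 1, |y| ≤ 1, 0 ≤ z ≤ 2); a real frustum would be derived from the perspective angle and aspect ratio. All values are illustrative assumptions.

```python
import numpy as np

# Inward-pointing homogeneous plane coordinates r_i of a box "frustum".
R = np.array([[ 1.0, 0.0, 0.0, 1.0], [-1.0, 0.0, 0.0, 1.0],
              [ 0.0, 1.0, 0.0, 1.0], [ 0.0, -1.0, 0.0, 1.0],
              [ 0.0, 0.0, 1.0, 0.0], [ 0.0, 0.0, -1.0, 2.0]])

def phi(p, T):
    """Eq. 3.4: min_i r_i^T (T p), the signed distance to the closest plane."""
    return float(np.min(R @ (T @ p)))

def phi_path(p, Ts):
    """Eq. 3.5: the best level-set value over all measurement poses."""
    return max(phi(p, T) for T in Ts)

def heaviside(x, b=10.0):
    """Smoothed Heaviside step H(x) = (arctan(b x) + pi/2) / pi."""
    return (np.arctan(b * x) + np.pi / 2.0) / np.pi

def coverage_error(points, Ts, b=10.0):
    """Eq. 3.6: f = -sum_j H(Phi(p_j, T)); lower values mean more coverage."""
    return -sum(heaviside(phi_path(p, Ts), b) for p in points)
```

A point well inside the volume contributes roughly -1 to `coverage_error`, a point outside contributes roughly 0, and the smooth transition between the two is what makes the function differentiable.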

3.3.4 Combination of gradients

In inspection planning, there are two fundamentally different competing objectives. Firstly, the coverage of the inspection path should be as high as possible, and secondly, the path should be as short as possible. Therefore, we combine these two objectives using the multiple gradient descent algorithm (MGDA) [75]. MGDA is an algorithm used to find Pareto optima of multi-objective optimization problems. Using this technique, we combine the gradients as a convex combination of both sub-gradients:

g = \beta_1 \cdot g_L + \beta_2 \cdot g_C . \qquad (3.10)

Parameters β_1 and β_2 are subject to β_1 + β_2 = 1 and β_i ≥ 0. The final gradient g is the minimal-norm element of the convex hull of both gradient vectors. The factors β_i are calculated as follows [75]:

\beta_1 = \begin{cases} \dfrac{\|g_L\|^2 - g_C^T \cdot g_L}{\|g_L\|^2 + \|g_C\|^2 - 2\, g_C^T \cdot g_L} & \text{if } g_C^T \cdot g_L < \min(\|g_L\|^2, \|g_C\|^2) \\ 0 & \text{if } \min(\|g_L\|^2, \|g_C\|^2) = \|g_L\|^2 \\ 1 & \text{if } \min(\|g_L\|^2, \|g_C\|^2) = \|g_C\|^2 \end{cases} \qquad (3.11)

The idea behind the MGDA algorithm is that the steepest common descent direction of all sub-gradients is chosen as the search direction. MGDA works for non-convex continuous functions [75]. The sub-gradients in our application satisfy these criteria.
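The two-gradient MGDA combination can be sketched directly: pick the minimum-norm element of the segment between g_L and g_C, which is a common descent direction for both objectives. Note that in this sketch `beta` weights g_L; conventions for which gradient β_1 multiplies differ between write-ups, so this is an illustration of the principle rather than a transcription of Equation 3.11.

```python
import numpy as np

def mgda_combine(g_L, g_C):
    """Minimum-norm convex combination of two sub-gradients (MGDA sketch)."""
    diff = g_L - g_C
    denom = float(diff @ diff)
    if denom == 0.0:
        return 0.5, 0.5 * (g_L + g_C)            # identical gradients
    # unconstrained minimiser of ||beta*g_L + (1-beta)*g_C||^2, clipped to [0,1]
    beta = float(np.clip((g_C @ (g_C - g_L)) / denom, 0.0, 1.0))
    return beta, beta * g_L + (1.0 - beta) * g_C
```

When one gradient "contains" the other (large dot product), the clipping selects the shorter gradient outright, matching the boundary cases of the closed-form expression.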

MGDA is preferred over the method of Lagrange multipliers [76] because both sensor coverage and path length are optimization objectives. The optimization problem would be more restricted if one of the two objectives were considered as a constraint. This extra restriction, in combination with the non-convexity of the coverage objective, would make the method more susceptible to local minima.

3.3.5 Constraints

In the previous paragraphs, we assumed that any gradient update results in a valid path. In a cluttered environment, this assumption is invalid. After an update, a link of the robot may collide with either another robot link or the environment. Collision avoidance can be modelled as linear constraints on the optimization procedure [21, 68]. In [21] it is shown that any gradient update should satisfy a system of linear equations:

A \cdot g = 0 \qquad (3.12)

In order for this equality to hold, any gradient update g should be projected onto the null-space of A. We make use of the work of [21], which describes a way to build this matrix lazily. The basic steps are:

1. Start with a collision-free path and compute a path update.

2. Detect whether a collision occurred in the new path.

3. If a collision is detected, compute a collision constraint and add it to the matrix A.

4. Revert to previous collision-free path.

5. Compute path update with newly updated A.

An intuitive description of these collision constraints is that each constraint removes one degree of freedom (DOF) of the colliding waypoint, safeguarding free movement in all other waypoints and DOFs. The removed degree of freedom is the direction from the waypoint to the collision point.
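The projection step around Equation 3.12 can be sketched with a standard null-space projector: once constraint rows have been collected in A, every gradient update g is replaced by (I - A⁺A)g, so that A·g stays zero and the constrained directions are frozen while all other DOFs remain free.

```python
import numpy as np

def project_to_nullspace(g, A):
    """Project a gradient update onto the null space of the constraint matrix A."""
    if A.shape[0] == 0:
        return g                                       # no constraints collected yet
    P = np.eye(A.shape[1]) - np.linalg.pinv(A) @ A     # null-space projector I - A^+ A
    return P @ g
```

With a single constraint row blocking one direction of one waypoint, the projected update is simply the original update with that component zeroed out, which matches the intuitive description above.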

3.4 Algorithm overview

A high-level overview of the gradient-based inspection planning algorithm is given in Algorithm 3.1. The first step is to obtain an initial path. This path should be collision-free and reachable by the robotic system. The initialization of the path is important because the global optimality of the final path is highly dependent on a good initial path. Any path initialization can be used; we present our technique in subsection 3.4.1. The Hessian of the path length error is constant and only depends on the path length. The set of collision constraints is initialized empty.

The optimization will be terminated when a predefined termination criterion is reached. The termination criterion used in this work is purely based on sensor


Algorithm 3.1 Gradient-based inspection planning
1: counter ← 0
2: θ_T ← getInitialPath()
3: H ← computePathHessian(θ_T)
4: A ← ∅
5: while not termination criterion do
6:   p_vis ← checkVisibility(θ_T, p, geometry)
7:   g_L ← computeLengthUpdate(θ_T, H)
8:   g_C ← computeCoverageUpdate(θ_T, p_vis)
9:   θ_{T+1} ← updatePath(θ_T, g_L, g_C, A)
10:  CollidingPoints ← testCollision(θ_{T+1}, scene)
11:  if #CollidingPoints > 0 then
12:    A ← addCollisionConstraint(A, CollidingPoints)
13:  else
14:    θ_T ← θ_{T+1}
15:    counter ← counter + 1
16:  end if
17:  if remainder(counter, n) == 0 then
18:    θ_T ← reparametrizePath(θ_T)
19:  end if
20: end while
21: θ_T ← relinearizePath(θ_T)

coverage. If the coverage drops below a predefined percentage under the starting coverage, the algorithm stops. This choice of termination criterion allows some movement of the coverage function without immediately terminating the optimization. This percentage should be low enough to not be significant in the specific application. This termination criterion is chosen because our main objective is to simplify the path using task-specific freedom. Different termination criteria can easily be chosen in different applications. Traditional termination criteria based on vanishing gradients are difficult to use in this application. When the path is close to locally optimal, the two sub-gradients (length and coverage) are generally individually large. The final gradient update from the MGDA algorithm should be small in this case, since both gradients point in opposite directions. Because of the non-convexity of Equation 3.6, the length of this update is not as stable a termination criterion as desired. This instability is related to particular choices of α_L in Equation 3.3 and α_C in Equation 3.9, which is detailed in subsection 3.4.2. Therefore, we suggest defining a criterion based on the gradient and coverage after the gradient update.

In the coverage error expression in Equation 3.6, we assumed that all surface points are visible from every viewpoint. A surface point is visible from a sensor position if the line connecting both points does not intersect any geometry. This effect can be incorporated in the gradient calculation by a simple modification to Equation 3.5. Instead of considering every sensor for each surface point, we only consider sensors that are visible from the surface point. This visibility is checked every iteration using an efficient ray-tracer [77]. In Algorithm 3.1, the functions computeLengthUpdate and


[The figure shows measurement positions Pos 1 … Pos M, each with its inverse kinematic solutions IK_1 … IK_{N_i}; the IK solutions of consecutive positions are densely linked.]

Figure 3.2: We do not compute paths between every combination of measurement locations. We restrict the search to a subset of this graph. We compute paths between all IK combinations of consecutive measurement locations.

computeCoverageUpdate are straightforward implementations of Equation 3.3 and Equation 3.9.

Function updatePath is not simply Equation 3.10, because in Equation 3.3 and Equation 3.9 the factors α_L and α_C are not yet determined. To determine these factors, we use a grid search procedure, which is detailed in subsection 3.4.2.

Collision detection in function testCollision is performed using standard collision detection software [62]. If a collision is detected, the collision constraint matrix A is updated. Otherwise, the current path is updated.

A path re-parametrization, which is discussed in subsection 3.4.3, is performed every n-th iteration.

If the termination criterion is reached, we perform a path re-linearization, discussed in subsection 3.4.3. Opposite gradients of the coverage cost and length cost can cause the final path to be locally jagged. In this step, we solve this jaggedness by replacing every sub-path between measurement locations by a straight linear path if no collision is detected.

3.4.1 Initialization

We use a greedy approach to obtain an initial measurement path, inspired by [10, 66]. This is because simulations of measurements of complex objects will be used in combination with an articulated robot arm. Algorithms in the field of provably optimal planners have not demonstrated the ability to solve planning tasks of this dimensionality and complexity. Greedy path planning approaches are considered state of the art in inspection planning for complex objects with robot manipulators. Note that in Chapter 4, we will focus specifically on creating a better algorithm.

Firstly, a fixed number S of points are sampled from the surface of the input geometry. To get an efficient sampling of viewpoint space, these surface points are


Algorithm 3.2 getInitialPath()
1: s ← sampleSurfacePoints(geometry, S)
2: v ← dilatePoints(geometry, s, distance)
3: v ← filterViewpoints(v)
4: visibility ← computeVisibility(s, v)
5: p ← setCoveringProblem(visibility)
6: p ← travellingSalesman(p)
7: IK ← InverseKinematics(p)
8: pathGraph ← constructGraph(IK)
9: pathLengths ← computePaths(pathGraph)
10: path ← shortestPath(pathLengths)

dilated by an ideal scanning distance, as shown in [10, 66]. Because not every viewpoint is reachable by the robot, the ones that are not reachable are filtered out. Visibility is computed between every surface point and every candidate viewpoint. This results in a binary visibility matrix. This matrix is used as input for the set-covering problem (SCP). The set covering problem [78] aims to find a minimum set of viewpoints needed to get complete coverage over the surface points. The algorithm that is used is greedy and can be terminated before 100% visibility is reached. This is important because, for complex objects, achieving 100% visibility may not be possible.
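The greedy set-covering step with early termination can be sketched as follows. This is a hypothetical illustration; the function name and the toy visibility matrix are ours, not taken from [78]:

```python
def greedy_set_cover(visibility, target=1.0):
    """Greedy set covering on a binary visibility matrix.

    visibility[i][j] is True when surface point i is visible from
    candidate viewpoint j. Returns chosen viewpoint indices; stops once
    the requested fraction of surface points is covered, since 100%
    visibility may be unattainable for complex objects."""
    n_points = len(visibility)
    n_views = len(visibility[0])
    uncovered = set(range(n_points))
    chosen = []
    while len(uncovered) > (1.0 - target) * n_points:
        # Pick the viewpoint covering the most still-uncovered points.
        best = max(range(n_views),
                   key=lambda j: sum(visibility[i][j] for i in uncovered))
        gain = {i for i in uncovered if visibility[i][best]}
        if not gain:          # remaining points are visible from nowhere
            break
        chosen.append(best)
        uncovered -= gain
    return chosen

# Toy instance: 4 surface points, 3 candidate viewpoints.
vis = [[True,  False, False],
       [True,  True,  False],
       [False, True,  False],
       [False, False, True]]
views = greedy_set_cover(vis)
```

The early-exit on an empty gain is what allows termination below 100% coverage for objects with permanently occluded surface points.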

This minimum set of viewpoints consists of configurations in the robot task-space. The order of these points on the final path is solved as a travelling salesman problem. For every configuration in task-space, there are one or multiple inverse kinematic solutions. From these configurations, we construct a path graph, which is shown in Figure 3.2. In this graph, all the inverse kinematic solutions of consecutive measurement positions are densely linked. For highly redundant manipulators, the inverse kinematic solutions may form a continuous set. In these cases, we sample a finite set of point configurations from this continuous set. This dense linking of possible IK configurations is performed as a solution to the topological challenge described in subsection 2.3.4.

For every edge in the path graph, we compute a robot path using the RRT-connect algorithm [56]. The length of these paths is encoded as edge weights in the graph. As a final step, the shortest path in the graph is selected. This step is time-consuming, since many paths need to be calculated.
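Because the graph of Figure 3.2 is layered (only IK solutions of consecutive measurement positions are linked), the shortest path can be found by simple dynamic programming rather than a general shortest-path algorithm. A sketch, with hypothetical cost data standing in for the RRT-connect path lengths:

```python
def shortest_ik_path(edge_costs):
    """Shortest path through the layered IK graph of Figure 3.2.

    edge_costs[t][i][j] is the planned path length between IK solution
    i of measurement position t and IK solution j of position t+1.
    Returns (total cost, one IK index per measurement position)."""
    best = [0.0] * len(edge_costs[0])   # best cost ending at each IK of layer 0
    parents = []
    for costs in edge_costs:            # one cost matrix per consecutive pair
        choice, new_best = [], []
        for j in range(len(costs[0])):
            c, i = min((best[i] + costs[i][j], i) for i in range(len(costs)))
            new_best.append(c)
            choice.append(i)
        parents.append(choice)
        best = new_best
    # Backtrack the chosen IK solution per measurement position.
    j = min(range(len(best)), key=best.__getitem__)
    path = [j]
    for choice in reversed(parents):
        j = choice[j]
        path.append(j)
    return min(best), path[::-1]

# Toy instance: 3 measurement positions with 2 IK solutions each.
edge_costs = [[[1, 4], [2, 1]],   # Pos 1 -> Pos 2
              [[5, 1], [1, 5]]]   # Pos 2 -> Pos 3
total, ik_choice = shortest_ik_path(edge_costs)
```

The per-layer minimization is what keeps the search tractable even when every measurement position has many sampled IK solutions.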

3.4.2 Gradient update

The gradient update described in Equation 3.10 is not fully defined, because the factors α_L and α_C in the sub-gradient updates g_L and g_C are not yet determined. Because Equation 3.6 is non-convex, it is important to choose the step size carefully. We therefore perform a linear grid search over different values of α_L and α_C, which vary over a predetermined range. The set of step sizes that is selected is dependent


Algorithm 3.3 Update path
1: counter ← 0
2: for α_L ← α_L^min to α_L^max in n_1 steps do
3:   for α_C ← α_C^min to α_C^max in n_2 steps do
4:     g ← MGDA(α_L, α_C)
5:     g ← projectOnNullspace(A, g)
6:     θ_t(counter) ← θ + g
7:     coverage(counter) ← getPathCoverage(θ_t)
8:     length(counter) ← getPathLength(θ_t)
9:     counter ← counter + 1
10:  end for
11: end for
12: θ ← selectionPolicy(θ_t, coverage, counter)

on a selection policy. This selection policy chooses particular values for both step sizes, using evaluations of sensor coverage and path length at each potential path update. It is in this step that the notion of preferential treatment towards path length is introduced.

The main objective of this work is to decrease the length of a path while maintaining the same sensor coverage. Therefore, from the set of step sizes that decrease the length, we select the one which maximally improves the coverage. The maximal coverage improvement may be negative, but because we systematically take the step size that increases the coverage maximally, the coverage will be the maximum that is attainable for the given path length in the local area around the current path. Evidently, this selection policy can be different for other applications.
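This selection rule can be sketched as a small helper (hypothetical names; each candidate holds the change in length and coverage relative to the current path, as evaluated in the grid search):

```python
def selection_policy(candidates):
    """Select a step-size pair from the grid-search candidates.

    candidates: list of (delta_length, delta_coverage, payload) tuples,
    one per (alpha_L, alpha_C) pair. Among updates that shorten the
    path, keep the one with maximal coverage change; that change may
    be negative, which permits a controlled coverage drop."""
    shorter = [c for c in candidates if c[0] < 0.0]  # length must decrease
    if not shorter:
        return None                                  # no admissible update
    return max(shorter, key=lambda c: c[1])          # maximize coverage

# Hypothetical grid-search outcomes: (delta length, delta coverage, label).
grid = [(-0.8, -0.5, 'a'), (-0.3, 0.1, 'b'), (0.2, 0.4, 'c')]
best = selection_policy(grid)   # 'c' is rejected: it lengthens the path
```

Filtering on length first and only then maximizing coverage is exactly where path length receives its preferential treatment.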

3.4.3 Avoiding local minima

Because the coverage objective is non-convex, local minima are inevitable. The preferential treatment of path length introduced in the gradient update step (subsection 3.4.2) is an important measure to avoid these local minima. This heuristic allows for a controlled fluctuation of the sensor coverage. Fixing the sensor coverage and optimizing path length (or vice versa) would quickly lead to local minima. Next, we will introduce two extra routines to further avoid local minima.

A path θ is a collection of N joint configurations with linear paths in between them. Measurements are taken at a subset θ_M of these joint angles. During the gradient-based optimization, the relative position of these measurements on the path is fixed. This fixed parameterization of the measurement locations causes rigidity in the path. This rigidity can limit movement in the direction of the length-related gradient g_L, which can trap the gradient-based optimization in a local minimum.

To avoid this limited movement, we propose two additional procedures:


Figure 3.3: This image shows the object geometries that were used during the experiments. (a) Bicycle frame, which is relatively large for the workspace of the used robot (KUKA KR16) and is topologically complex. (b) Exhaust manifold with complex occlusions.

1. Re-parameterization of the measurement locations

2. Re-linearization of the measurement path

In the re-parameterization step, we redo the set covering problem after a fixed number of steps. The viewpoints in this set are obtained by performing a denser sampling along the current sensor trajectory. The set covering problem is solved using the algorithm of [78]. This is equivalent to re-parametrizing the measurement locations on the path. The percentage of sets that need to be covered by the new solution is at least the same as in the old solution. In this work, we keep the number of measurement points (M) fixed. Other strategies are also possible.

The re-linearization is performed because the final gradient can cause the optimization to get stuck in a position where the paths between two measurement locations are not as short as possible. If the straight path between these measurements is collision-free, the original path is replaced with a linear path. If this straightforward linearization is not possible, simple path simplification is performed, with fixed measurement locations. This is equivalent to using the gradient update given in Equation 3.3.
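A minimal sketch of this re-linearization step, assuming a caller-supplied collision check for straight joint-space segments (the names are ours):

```python
def relinearize(path, measurement_idx, is_collision_free):
    """Replace each sub-path between consecutive measurement locations
    by a straight segment when that segment is collision-free;
    otherwise keep the original waypoints of that sub-path.

    path: list of joint configurations; measurement_idx: sorted indices
    into path where measurements are taken; is_collision_free(a, b):
    check of the straight joint-space segment between a and b."""
    segments = []
    for s, e in zip(measurement_idx, measurement_idx[1:]):
        a, b = path[s], path[e]
        if is_collision_free(a, b):
            segments.append([a, b])          # straight segment replaces sub-path
        else:
            segments.append(path[s:e + 1])   # keep original waypoints
    return segments

# Toy 1-DOF path with measurements at waypoint indices 0, 2 and 4;
# the collision check here trivially accepts every segment.
segments = relinearize([0, 1, 5, 2, 4], [0, 2, 4], lambda a, b: True)
```

The measurement locations themselves never move in this step; only the intermediate waypoints are simplified.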

3.5 Simulation results

In this section, we will provide simulation results for two complex inspection tasks. The objects to be inspected are given in Figure 3.3. One object is a bike frame, which is topologically complex and relatively large for the robot workspace. It is challenging to find a path which provides good sensor coverage, is reachable, and is short for an articulated robot with a fixed base. The other object is an exhaust manifold. This object is complex because it features complex occlusions, which makes finding good viewpoints in a short trajectory challenging. Because the object is relatively small, the constraints of the robot are less critical, since fewer collisions


occur, which makes the initial path locally more optimal. This is because almost all paths between measurement locations are simple straight lines in joint space.

The robotic manipulator used in the simulations is a KUKA KR16 industrial robot with six joints. The robot Jacobian J introduced in Equation 3.8 is derived using algorithmic differentiation [79].

The stopping criterion used in both examples is that the coverage can drop by 2%. For the value of b in Equation 3.6, we empirically chose a value of 0.1. The weight matrix W in Equation 3.1 is chosen to be I_6 (the 6 × 6 identity matrix). The grid search from Algorithm 3.3 varies over a grid of 5 steps for both α_L and α_C; α_L varies from 0.01 to 1, and α_C varies from 0.0001 to 0.001. We re-parametrize the path, as discussed in subsection 3.4.3, every ten iterations. We perform the path re-linearization discussed in subsection 3.4.3 in the final iteration.

The predefined set of points on the surface of the geometry is, in both examples, a set of 2000 randomly sampled points on the surface. The sensor frustum used is defined by a viewing angle of 65 degrees, and near and far clipping planes at respectively 100 and 700 millimetres from the principal point, in the Z direction. The ideal scanning distance used in Algorithm 3.2 is 400 millimetres.
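As an illustration of these frustum parameters, a point-in-frustum test might look as follows. This is a sketch under our own assumptions: the thesis does not state whether 65 degrees is the full or the half cone angle, nor the exact frustum shape, so a symmetric cone with a 65-degree full angle is assumed here:

```python
import math

def in_frustum(point_cam, fov_deg=65.0, near=100.0, far=700.0):
    """Check whether a 3D point, expressed in the camera frame
    (Z forward, millimetres), lies inside an assumed symmetric viewing
    cone bounded by near and far clipping planes along Z."""
    x, y, z = point_cam
    if not (near <= z <= far):
        return False                      # outside the clipping planes
    radial = math.hypot(x, y)             # distance from the optical axis
    return math.atan2(radial, z) <= math.radians(fov_deg / 2.0)

on_axis = in_frustum((0.0, 0.0, 400.0))   # at the ideal scanning distance
too_close = in_frustum((0.0, 0.0, 50.0))  # in front of the near plane
too_wide = in_frustum((400.0, 0.0, 400.0))  # 45 degrees off-axis
```

Such a test is the geometric core of any binary visibility matrix; occlusion by other geometry would additionally require the ray-tracing check mentioned earlier.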

3.5.1 Locally sub-optimal path

The first example is the bike frame shown in Figure 3.3 (a). Both the original and optimized paths are shown in Figure 3.4. Both coverage and path length as a function of the optimization iteration are shown in Figure 3.5. In this example, the object to be inspected is large relative to the robot. The paths returned by the RRT-connect algorithm [56] during path initialization are complex, since simple linear paths are generally not possible. This extra complexity causes some of the paths to be locally sub-optimal, since no post-processing was applied to the paths. So even if the measurement locations remain fixed, locally sub-optimal path segments between these fixed points can be optimized. In this simulation, we on the one hand lock the locations of the path waypoints θ_M and optimize the paths between them, and on the other hand use the proposed approach.

The optimization with locked waypoints can decrease the path length by 11%. The proposed optimization, on the other hand, can decrease the path length by 32%. Because of the defined termination criterion, the coverage dropped by 2%. Other termination criteria could have stopped the optimization at other points of the optimization curve shown in Figure 3.5. The drop in path length at the final iteration is caused by the re-linearization of the measurement path discussed in subsection 3.4.3. The result shows that using the inherent task freedom in an inspection task results in significant improvements in optimized path length. The length using our approach is 33% shorter than with the naive approach with fixed waypoints (dotted line versus full line).

In Figure 3.5, at iterations 10 and 280, there is a sudden decrease in path length.



Figure 3.4: Result of path optimization for a measurement of a bike frame geometry. The purple path is the original starting point of the optimization. The green path is optimized. The coverage of the original path is 82%, while the coverage of the optimized path is 80%. The length of the optimized path is 69% of the initial path length.

This drop coincides with path re-parametrization actions. These drops show that searching for better path parametrizations can result in better convergence.

If we fixed the sensor coverage and optimized path length, or vice versa, we would quickly end up in local minima (as discussed in subsection 3.4.3), which is visible in Figure 3.5. The first point where the coverage drops is, for example, such a local minimum. The path length can significantly decrease (even with increased coverage) by continuing the optimization. This illustrates the effectiveness of the gradient update rule of subsection 3.4.2.

3.5.2 Locally optimal path

The second example is the exhaust manifold shown in Figure 3.3 (b). This object is small relative to the robotic manipulator, but visually complex. The initial path is composed of straight path segments in joint space. Because the path is in a local minimum, it is the locally optimal path, and it cannot be optimized if the waypoints remain fixed. In the previous simulation, we compared two types of optimization: with fixed measurement locations and with moving measurement locations. Because the original path is in a local minimum of the first approach, this method does not apply. Both the original and optimized paths are shown in Figure 3.6. The optimization results as a function of the iteration are shown in Figure 3.7.

The proposed optimization procedure was able to decrease the path length by 27%. The resulting path is also visually simpler and smoother than the starting trajectory. This result was obtained using the same optimization parameters (α_L, α_C, ...) as in the optimization of the bike frame geometry. The fact that the optimization results are similar for both examples shows that the optimization works for widely varying


Figure 3.5: Graphs showing both the path length (a) and sensor coverage (b) as a function of the optimization iteration for the bike frame geometry. The dotted line (L) is an optimization where measurement locations θ_M are fixed, and the paths between them are optimized. The full line (L+C) is the proposed optimization procedure, where measurement locations can move. The final drop in path length is caused by the path re-linearization discussed in subsection 3.4.3. The coverage rises at first and drops after a while.


Figure 3.6: Result of path optimization for a measurement on the exhaust manifold geometry. The purple path is the original path, and the green path is the optimized path. The length of the green path is 73% of the original path length. The sensor coverage of the green path is 2% less than the coverage of the purple path (82% versus 80%), as a result of the stopping criterion.


Figure 3.7: Graphs showing both the path length (a) and sensor coverage (b) as a function of the optimization iteration for the exhaust manifold geometry. The graph only shows the proposed approach (L+C), as the optimization with fixed measurement points is not possible, because the original path is in a local minimum of that function. A re-linearization operation of the path causes the large drop at the final iteration.

input geometries.

The final drop in the coverage function of Figure 3.7 (b) is not due to the re-linearization operation in the final iteration. Indeed, visually, it is clear that the measurement path in Figure 3.6 is as simple as possible in its local neighbourhood. Any further decrease in path length will cause a sharp drop in sensor coverage. This sharp drop is visible in Figure 3.7 (b), and is caused by the gradients g_L and g_C being opposite to each other. Instead of relying on a vanishing gradient g, we proposed a different termination criterion in Algorithm 3.1. This causes the grid search from Algorithm 3.3 to treat g_L preferentially, and causes g to be opposite to g_C. This effect indicates that the inspection path is as short as possible with similar coverage in a local neighbourhood around that path.

3.6 Conclusions

The goal of this chapter was to define a path optimization procedure for inspection paths. To optimize these paths, it is important to capture the task-specific freedom inherent in an inspection task and leverage this freedom in the optimization. To accomplish this, we defined a piecewise smooth differentiable function describing the sensor coverage for a robot path. This coverage can easily be extended to general inspection functions, by including the gradient of this function in the optimization procedure. We used this function in combination with a function describing the length in a multiple gradient descent algorithm (MGDA). We proposed an algorithm to perform this optimization, and proposed extra re-parametrizing and re-linearizing subroutines to avoid local minima during the optimization process. We showed in two inspection tasks of complex, realistic objects that the addition of this task-specific freedom to the optimization gives significantly better results. In one inspection task, the optimized path length was 69% of the original path length, and in the second inspection task, the path length was 73% of the initial path length. These results show that copying path simplification algorithms from traditional path planning results in suboptimal path simplification. Modelling the specific freedom of a measurement task can result in significantly improved path simplification performance. In the first simulation, the difference between both approaches was 33%, and in the second simulation, this difference was 27%. We performed the same optimization for two vastly different object geometries using the same optimization parameters, indicating robustness against a change of these parameters.

A drawback of the proposed method is that it relies on a good input path. In the next chapter, we will propose a method to find good initial paths. We also neglected the global structure of the inspection planning problem, which will also be studied in the next chapter.


Chapter 4

Near-optimal inspection path planning

4.1 Introduction

In the previous chapter, we studied the local structure of the inspection planning problem and developed a local optimization procedure to improve inspection paths. In this chapter, we will do something similar with the global structure of the inspection planning problem. In the previous chapter, we focused on the coverage function as the inspection quality function. In this chapter, we broaden our focus to general inspection quality functions to guarantee generality. The end goal is to construct an algorithm that can generate high-quality inspection paths. These paths can then be used in the post-processing procedure of Chapter 3. The key contributions of the developed algorithm compared to the state of the art are in accordance with the main requirements of this thesis:

1. High-quality paths are assured due to a near-optimality guarantee of thealgorithm.

2. Highly complicated inspection tasks can be solved, because of the favourable computational complexity of the method.

3. A wide variety of inspection planning problems can be solved, due to the extreme generality of the approach.

The wealth of practical applications for automated inspection algorithms has generated academic interest, making it a well-studied problem. The literature on inspection planning has produced two major algorithmic approaches. The first class of algorithms formally models the inspection planning problem as an optimization problem, and algorithms are constructed to generate solutions that score well on this optimization problem. A significant issue with these algorithms is that the abstract problem of inspection planning is a very challenging combination of NP-hard problems, as


we will discuss later. This results in algorithms that can typically only solve small-scale problems (i.e. toy problems). The other class of algorithms originated from a more pragmatic view and tries to solve more realistically complex problems. These algorithms typically focus on concrete, specific problems, but often fail to generalize to other related problems (e.g. a different robot, different measurement devices). Another disadvantage is that there is no guarantee on the quality of the solutions that are returned by such an algorithm.

In this chapter, we will construct an automatic inspection planning algorithm that aims to connect both classes of algorithms. This algorithm generates near-optimal solutions, while at the same time it can solve realistically large-scale, complex inspection problems. Another advantage of our algorithm is that it is general enough to solve a large variety of inspection planning problems. The proposed algorithm also generates a bound on the maximally achievable quality, which is valuable in comparisons with other algorithms. A 360VR video is available online that describes this chapter¹.

4.2 Related work

Early works that focus on practical aspects of the inspection planning problem are concerned with the next-best-view planning problem [80]. These articles typically focus on the construction of functions that model the quality of a measurement [33, 36, 80]. However, to find a solution to the inspection planning problem, the proposed methods use a two-step procedure that separates the viewpoint selection and path planning problems. As the name suggests, the planning horizon is also limited to one-step-ahead planning. There are no mathematical guarantees on the optimality of the solutions generated by these procedures.

Early work on coverage path planning defines the planning task as finding a robot trajectory that covers the entire space of interest [81]. The inspection requirement is simplified to the point that the robot must visit a collection of points, and thus the focus is only on the robot trajectory [82].

In contrast, Bircher et al. [13] propose a sampling-based procedure that considers both the inspectability of a path and its length. However, their procedure has no formal optimality guarantees and is specifically designed for UAVs. Roberts et al. [14] formally model the inspection problem as an optimization problem, and use a branch-and-bound solver on a relaxation of this problem. The branch-and-bound solution method, however, limits the size of the problem instances that can be solved. The relaxation that is performed also breaks all optimality guarantees.

Englot and Hover [11] propose a sampling-based procedure that is guaranteed to converge to the optimal solution. The main disadvantage of this approach is that the rate of convergence is unknown and most likely slow, especially considering

¹ https://youtu.be/Fg-ulGRyw2w


that the inspection planning problem is typically a large NP-hard problem. Another disadvantage is that the inspection quality is treated as a binary variable. In reality, however, the inspection quality depends on the measurement conditions. Papadopoulos et al. [17] propose an algorithm with similar properties that extends to robots with more challenging constraints, such as a robotic manipulator. However, this approach is only of theoretical interest, as it does not scale well to large problems.

Singh et al. [83] model the inspection task as maximizing the mutual information of a Gaussian Process, and present a near-optimal algorithm to find a walk in a graph with the aim of maximizing the inspection quality. However, the proofs in this work rely on a particular instance of submodularity specific to the mutual information of a Gaussian Process (i.e. (r, γ)-local submodularity). Furthermore, this algorithm has been shown to scale poorly to larger problems [14].

A closely related problem, proposed by Yu et al. [84], is the correlated orienteering problem (COP). This problem is used to model a persistent monitoring task using drones, and can be solved by mixed-integer quadratic programming. The disadvantage, however, is that the structure of the COP formulation is not rich enough to model the realistic complexities typically related to functions modelling inspection quality. Another disadvantage is that this method was only shown to work on small problem instances (i.e. a workspace grid of 7 × 7 nodes in 2D).

4.3 Abstract problem structure

As mentioned earlier, the goal of robotic inspection planning is to find an efficient robot trajectory that allows an attached measurement device to perform high-quality, complete measurements. It is a problem with two conflicting interests at its core. The primary interest is to maximize the inspection quality, while the other interest is to keep the robot trajectory efficient. Separately, both maximizing the inspection quality and finding a minimal trajectory are NP-hard problems. What counts as a cost differs per application, but can be, for example, travelling distance, inspection time, etc. We will cover both aspects and how they are connected in the following sections.

In this work, we will assume that information about the object that is being inspected, the robot system, and the characteristics of the measurement device are known beforehand. Information about the measurement object is in the form of a CAD model. We also assume that a digital twin of the robot system, in an accurate representation of the environment, is available to make sure that the final trajectory is kinematically reachable. We finally assume that the characteristics of the measurement device, which quantify the expected measurement quality as a function of the measurement conditions, are available.

An instance of the inspection planning problem is, in reality, continuous. To make the problem tractable, we will discretize it. Different


[The figure shows a camera mounted on a robot in front of the object to be inspected, together with a table listing each set and its cardinality: view poses, model points, view positions, view orientations, and robot motions.]

Figure 4.1: This figure connects the abstract structure of the inspection planning problem and its notation to their physical interpretation.

aspects of the discretization are shown in Figure 4.1. Each of these aspects will be discussed in separate subsections. The cardinalities provided in Figure 4.1 are used throughout this work, without repeating their meaning.

4.3.1 Inspection quality

In this section, we will show that functions that model inspection quality are naturally connected to submodular functions. The general inspection quality model developed in subsection 2.2.3 fits within this connection. This connection also allows us to characterize the inspection planning problem formally, and to study the effectiveness of algorithms. Specific instances of functions modelling inspection quality have already been shown to be submodular [14, 85], but in this chapter we will highlight the generality of this connection. Submodular set functions are set functions that are characterized by a diminishing returns property. More formally: let Ω be a finite set, and let f : 2^Ω → ℝ be a real-valued function of subsets of Ω; then f is called submodular if for each X ⊆ Y ⊆ Ω and for each x ∈ Ω \ Y it holds that

f(X ∪ {x}) − f(X) ≥ f(Y ∪ {x}) − f(Y). (4.1)

This diminishing returns property models the fact that, in the context of inspection planning, inspecting some part of the object becomes less interesting after it has already been inspected. To relate functions that quantify inspection performance to submodular functions, we start by discretizing the object that we wish to inspect (the model) into a finite collection of points M = {m_1, . . . , m_k}. Furthermore, we assume the availability of a finite sample V of relevant view poses for the inspecting camera, V = {v_1, . . . , v_n}. Each view pose v ∈ V is a pair (p, o) consisting of a 3D point p (position) and an orientation o (Figure 4.1). In subsection 4.4.1 we explain how to obtain V by a separate sampling of view positions P and
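A simple binary coverage function f(X) = |∪_{v∈X} R_v| is a classic example of a submodular function. The brute-force sketch below verifies inequality (4.1) on a toy instance (the visibility sets are hypothetical, chosen only for illustration):

```python
from itertools import chain, combinations

# Hypothetical sets R_v: model points visible from each view pose.
visible = {'v1': {0, 1}, 'v2': {1, 2}, 'v3': {3}}

def coverage(X):
    """f(X): number of model points covered by the view poses in X."""
    return len(set().union(*(visible[v] for v in X))) if X else 0

def subsets(s):
    """All subsets of s, as tuples."""
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# Brute-force check of the diminishing-returns inequality (4.1)
# for every X ⊆ Y ⊆ Ω and every x ∉ Y.
omega = list(visible)
submodular = all(
    coverage(set(Y) | {x}) - coverage(set(Y))
    <= coverage(set(X) | {x}) - coverage(set(X))
    for Y in subsets(omega)
    for X in subsets(Y)
    for x in set(omega) - set(Y)
)
```

Adding a view pose to a larger set of already-chosen poses can never reveal more new model points than adding it to a subset, which is exactly the diminishing returns property.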

Page 67: A general approach to robot path planning for optical ...

4.3. ABSTRACT PROBLEM STRUCTURE 57

orientations O. For each model point m PM we can consider the subset view poses,Rm “ tv P V : m is visible from vu. While Rm represents a binary inspectionquality of view poses in a given model point m PM , we will also introduce moregeneral quality rate functions qm : V ÝÑ R`, representing a more nuanced qualityof the view poses in a given inspected model point (subsection 4.4.2). In thissetting, we agree that high values qmpvq correspond to high inspection quality byv P V . Furthermore, we assume that qmpvq “ 0 if v R Rm.
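The binary coverage special case makes the diminishing returns property concrete. Below is a minimal sketch with a hypothetical toy instance (the point and pose names are illustrative, not from the thesis):

```python
# Hypothetical toy instance: 3 model points, 4 view poses.
# R[m] is the set of view poses from which model point m is visible,
# i.e. a binary quality rate q_m(v) in {0, 1}.
R = {
    "m1": {"v1", "v2"},
    "m2": {"v2", "v3"},
    "m3": {"v4"},
}

def f(X):
    """Coverage objective: number of model points seen from at least one pose in X."""
    return sum(1 for poses in R.values() if poses & X)

# Diminishing returns: the gain of adding x to the smaller set X is at
# least the gain of adding it to the larger set Y (with X a subset of Y).
X, Y, x = {"v1"}, {"v1", "v3"}, "v2"
gain_X = f(X | {x}) - f(X)   # v2 newly covers m2 (m1 is already covered)
gain_Y = f(Y | {x}) - f(Y)   # m2 is already covered by v3, so no gain
assert gain_X >= gain_Y
```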

In general, the total inspection quality for a given model point $m$ is the accumulation of the inspection rates $q_m(v)$ by the fusion of the sensor data captured at several view poses, visited during the inspection trajectory. The inspection rate formalizes the measurement fusion function $Q$ as defined in subsection 2.2.3. Locally at each model point $m \in M$, this sensor accumulation is formally ruled by a function $G_m : 2^{V} \times V \to \mathbb{R}^{+}$. More precisely, for each subset $X$ of view poses and for each individual view pose $v \in V$, the nonnegative value $G_m(X, v)$ represents the added inspection quality at $m$ by adding $v$, relative to previous inspections by $X$. Such a marginal quality function $G_m$ is associated with a given family of quality rate functions $F = \{q_m \mid m \in M\}$, and we call $G_m$ an unordered and decreasing marginal quality function if the following conditions are satisfied:

1. $G_m(\emptyset, v) = q_m(v)$ for each $v \in V$ (initialized by $F$)

2. $G_m(X, v) = 0$ if $v \in X$ (saturation, i.e. measuring a model point under exactly the same conditions twice is not beneficial)

3. If $X = \{v_1, \ldots, v_s\} \subseteq V$, then the sum

$G_m(\emptyset, v_1) + G_m(\{v_1\}, v_2) + \cdots + G_m(X \setminus \{v_s\}, v_s)$

is independent of the chosen order in $X$. (unordered)

4. Extending the set of view poses $X \subseteq Y \subseteq V$ decreases the relative benefits: $G_m(Y, v) \le G_m(X, v)$. (decreasing)

Now we can state the inspection objective as a submodular function on subsets $X \subseteq V$ of view poses. More precisely, for a fixed $m \in M$, we formalize the inspection quality $f_m(X)$ due to viewpoints $X \subseteq V$ recursively by

$f_m(\emptyset) = 0$  (4.2)

$f_m(X) = f_m(X \setminus \{v\}) + G_m(X \setminus \{v\}, v)$ if $X \neq \emptyset$ and $v \in X$  (4.3)

Notice that $f_m(\{v\}) = q_m(v)$.

Examples.

1. For a given object point $m$ and an arbitrary family of quality rate functions $F = \{q_m \mid m \in M\}$, we can define

$G_m(X, v) = 0$, if $q_m(v) \le \max\{q_m(w) \mid w \in X\}$,

$G_m(X, v) = q_m(v) - \max\{q_m(w) \mid w \in X\}$, otherwise,

which can be seen to be an unordered and decreasing marginal quality function, yielding the submodular inspection objective

$f_m(X) = \max\{q_m(w) \mid w \in X\}$.
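To make the recursion of Equation 4.3 concrete, here is a minimal sketch (toy quality rates, names illustrative) showing that the marginal quality function of Example 1 accumulates to the max objective regardless of the order in which poses are added:

```python
# Hypothetical quality rates q_m(v) at one fixed model point m.
q_m = {"v1": 1.0, "v2": 4.0, "v3": 2.0}

def G_m(X, v):
    """Max-type marginal quality function of Example 1."""
    best = max((q_m[w] for w in X), default=0.0)
    return max(q_m[v] - best, 0.0)

def f_m(X):
    """Unroll the recursion f_m(X) = f_m(X \\ {v}) + G_m(X \\ {v}, v)."""
    total, seen = 0.0, set()
    for v in X:                 # any order gives the same result (unordered)
        total += G_m(seen, v)
        seen.add(v)
    return total

assert f_m(["v1", "v2", "v3"]) == f_m(["v3", "v2", "v1"]) == max(q_m.values())
```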

2. If we use a binary quality rate, with $q_m(v) = 1 \iff v \in R_m$, and if we define a binary $G_m$ by

$G_m(X, v) = 1 \iff v \notin X$, $v \in R_m$ and $f_m(X \setminus \{v\}) = 0$,

then we just obtain $f_m(X) = 1$ if $\#(X \cap R_m) > 0$. Note that this exact objective is used frequently [11, 17, 33, 63, 86].

Proposition 1. Let $m$ be a fixed model point and let $q_m : V \to \mathbb{R}^{+}$ be a given quality rate in $m$, supporting a marginal quality function $G_m$. Then the associated quality function $f_m : 2^{V} \to \mathbb{R}^{+}$ is a well-defined, monotone increasing, submodular function.

Proof. First of all, $f_m$ is well-defined by the recursion of Equation 4.3, because $G_m$ is unordered. Next, $f_m$ is monotone increasing because $G_m$ delivers nonnegative values. Finally, $f_m$ is submodular because $G_m$ is supposed to be decreasing.

Of course, our ultimate goal is to maximize the inspection quality for the global object, represented by the sample $M$. This motivates us to define

$f(X) = \sum_{m \in M} f_m(X)$,

which is monotone increasing and submodular as a sum of monotone submodular functions. All the aforementioned examples feature noiseless measurements. But even if measurements have zero-mean Gaussian noise, and the noise on the MAP estimate is used as an inspection quality measure, the total inspection objective remains monotone submodular [87].

4.3.2 Travelling costs

In order to compute a travelling cost for subsets of view poses $X \subseteq V$, we start by representing the space of all robot motions as a finite connected graph $G(V, E)$ (see Figure 4.1). For now, we will assume the existence of edges in this graph; in subsection 4.4.1 we will provide more details. The edges of this graph $E = \{e_1, \ldots, e_m\}$ represent robot motions between view poses. Traversing an edge of this graph results in an associated cost $c(e)$. For each collection of view poses $X \subseteq V$, $c(X)$ collects the costs of all edges on the walk of minimum cost through $G$ that passes through all $v \in X$. We model the cost of a subset of view poses as:

$C(X) = c(X) + \alpha |X|$.  (4.4)

Here, $\alpha$ models the cost associated with performing a measurement in each view pose. This constant will be dependent on the measurement principle in practice. Note that in order to evaluate $c(X)$ we need to solve the well-known travelling salesman problem. In the remainder of this section, we will highlight some important aspects of the travelling salesman problem which will be used later in this work.

While the travelling salesman problem (TSP) is NP-hard, it is possible to quickly solve this problem for graphs of up to 1000 nodes using the Dantzig-Fulkerson formulation [88]. This formulation transforms the TSP into an integer linear programming problem that can be solved to optimality with branch-and-bound solvers. In our implementation, we make use of the Gurobi solver [89] to solve the integer linear programming problem.

Obtaining an exact solution to the TSP does, however, come at a computational cost. Our final algorithm requires solving many TSP instances (i.e. $O(n)$, where $n$ is the number of view poses), making exact solutions too expensive for realistic problems. Therefore, we make use of an algorithm that generates approximate solutions to the TSP. Approximate solutions are solutions that are at most a constant factor larger than the exact solution. In this work, we make use of the nearest-neighbour algorithm, which generates an $O(\log(n))$-approximate solution to the TSP [90]. We will refer to the cost obtained by the nearest-neighbour algorithm by $\check{C}$ (the accent on $C$ points downwards to indicate that the value of the exact solution is smaller).
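The nearest-neighbour heuristic itself is simple; a minimal sketch on a symmetric distance matrix (illustrative implementation, not the thesis code):

```python
def nearest_neighbour_tour(dist, start=0):
    """Greedy TSP heuristic: repeatedly visit the closest unvisited node.

    `dist` is a symmetric cost matrix; returns (tour, cost) for a closed tour.
    """
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    cost = 0.0
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[last][j])  # closest unvisited node
        cost += dist[last][nxt]
        tour.append(nxt)
        unvisited.remove(nxt)
    cost += dist[tour[-1]][start]  # return to the start to close the tour
    return tour, cost
```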

While the nearest-neighbour algorithm generates an approximate solution, which can serve as an upper bound for the exact value, we also make use of a lower bound. This lower bound is known as the Held-Karp bound (HK). The HK bound is the solution to the relaxation of the linear programming formulation of the travelling salesman problem [91]. Held and Karp [92] proposed an iterative approach to obtain this bound quickly. While the HK bound is only guaranteed to give a solution that is never less than 2/3 of the minimum cost, it performs much better in reality. The HK bound was shown to generate solutions to real problems with a gap of less than 1% to the exact solution [93]. We will refer to the HK bound by $\hat{C}$. Valenzuela and Jones [93] show that the HK bound can be reliably estimated with an algorithm with time complexity of $O(n \log(n))$. This algorithm works by generating a sequence of minimum 1-trees that converge to the linear programming relaxation of the TSP.
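The 1-tree building block behind the HK bound can be sketched as follows. This is a simplified sketch: the iterative node-penalty (subgradient) optimization of Held and Karp is omitted, so it yields a weaker, but still valid, lower bound on the optimal tour:

```python
import heapq

def one_tree_lower_bound(dist, special=0):
    """Lower bound on the optimal TSP tour via a minimum 1-tree.

    Build a minimum spanning tree over all nodes except `special` (Prim's
    algorithm), then add the two cheapest edges connecting `special` to it.
    Every tour is a 1-tree, so this value never exceeds the optimal tour cost.
    """
    nodes = [i for i in range(len(dist)) if i != special]
    # Prim's MST over `nodes`
    in_tree, mst_cost = {nodes[0]}, 0.0
    heap = [(dist[nodes[0]][j], j) for j in nodes[1:]]
    heapq.heapify(heap)
    while len(in_tree) < len(nodes):
        d, j = heapq.heappop(heap)
        if j in in_tree:
            continue
        in_tree.add(j)
        mst_cost += d
        for k in nodes:
            if k not in in_tree:
                heapq.heappush(heap, (dist[j][k], k))
    # connect the special node with its two cheapest incident edges
    two_cheapest = sorted(dist[special][j] for j in nodes)[:2]
    return mst_cost + sum(two_cheapest)
```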


4.3.3 The submodular orienteering problem and the Generalized Cost-Benefit Algorithm

The problem combining submodular function maximization with an upper bound on the travelling cost is known as the submodular orienteering problem [16]. This problem is formally given by:

$X^{*} = \operatorname{argmax}_{X \subseteq V} \{f(X) \mid C(X) \le B\}$.  (4.5)

Here, $B$ is the maximum allowed travelling budget. In the context of inspection planning, this problem aims to maximize the inspection quality with a constraint on the maximum inspection cost. Zhang and Vorobeychik [94] propose the generalized cost-benefit algorithm (GCB), which is a polynomial-time algorithm with provable approximation guarantees. The fact that the algorithm is polynomial is interesting in the context of inspection planning because typical problem instances tend to be very large. This algorithm guarantees a $\frac{1}{2}(1 - e^{-1})$ ($\approx 0.32$) solution to the submodular orienteering problem relative to a value $f(X)$, where $X$ is the optimal solution of $\max\{f(X) \mid \tilde{C}(X) \le kB\psi(n)\}$. Here, $k$ is a constant larger than, but close to, 1, and $\tilde{C}$ is a submodular function mimicking the behaviour of the true cost function [94]. More recently, Qian et al. [95] showed that the GCB algorithm is near-optimal even for the exact cost function $C$, by introducing the assumption that $C$ can be both submodular and supermodular with bounded curvature. $\psi(n)$ is the approximation guarantee of the travelling salesman algorithm that is used in the GCB algorithm. This bound on the performance of the algorithm is, however, overly pessimistic, as we will show in our experiments. We discuss a tighter problem-specific bound in subsection 4.3.5.

The GCB algorithm is provided in Algorithm 4.1. Notice that $O(n)$ instances of the TSP need to be solved in this algorithm, which is the reason for resorting to approximate solutions.

Algorithm 4.1 Generalized Cost-Benefit Algorithm (GCB)
1: $X \leftarrow \emptyset$
2: while $C(X) \le B$ do
3:   for all $x \in V$ do
4:     $\Delta^{x}_{f} = f(X \cup \{x\}) - f(X)$
5:     $\Delta^{x}_{\check{C}} = \check{C}(X \cup \{x\}) - \check{C}(X)$
6:   end for
7:   $x^{*} = \operatorname{argmax}_{x} (\Delta^{x}_{f} / \Delta^{x}_{\check{C}})$
8:   $X = X \cup \{x^{*}\}$
9: end while
10: return $X \setminus \{x^{*}\}$
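In code, the greedy cost-benefit loop of Algorithm 4.1 might look like the following schematic sketch, where `f` and `cost` are placeholder callables standing in for the submodular quality function and the (approximate) TSP cost such as the nearest-neighbour tour length:

```python
def gcb(V, f, cost, B):
    """Schematic GCB: greedily add the view pose with the best
    marginal-gain / marginal-cost ratio until the budget B is exceeded."""
    X = set()
    last = None
    while cost(X) <= B:
        candidates = V - X
        if not candidates:
            return X
        def ratio(x):
            gain = f(X | {x}) - f(X)
            extra = cost(X | {x}) - cost(X)
            return gain / max(extra, 1e-9)  # guard against zero added cost
        last = max(candidates, key=ratio)
        X = X | {last}
    return X - {last}  # drop the element that broke the budget
```

With a toy quality `f = len` and cost `cost = len` (one unit per measurement) and budget 2, the loop adds elements until the third one violates the budget and is removed again.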


4.3.4 Improving the solution of the GCB algorithm

In this section, we will propose an algorithm that is performed on the solution of the GCB algorithm. This step is a variation of a step that is often implicitly performed in the submodular function community but rarely discussed explicitly. The step tries to replace elements of the final set if new elements have greater marginal function values than old elements, and it often has a significant impact on the final quality of the solution. Our variation of this step is provided in Algorithm 4.2.

Algorithm 4.2 GCB+
1: $X \leftarrow$ GCB
2: $\delta_f \leftarrow \infty$
3: while $\delta_f > 0$ do
4:   $f_0 \leftarrow f(X)$
5:   for all $x \in X$ do
6:     $X^{-} \leftarrow X \setminus \{x\}$
7:     $X_t \leftarrow V \setminus X^{-}$
8:     for all $v \in X_t$ do
9:       $\Delta^{v}_{f} \leftarrow f(X^{-} \cup \{v\}) - f(X^{-})$
10:      $\hat{C}_v \leftarrow \hat{C}(X^{-} \cup \{v\})$
11:    end for
12:    $v^{*} \leftarrow \operatorname{lazyargmax}\{\Delta^{v}_{f} \mid C(X^{-} \cup \{v\}) \le B\}$
13:    $X \leftarrow X^{-} \cup \{v^{*}\}$
14:  end for
15:  $\delta_f \leftarrow f(X) - f_0$
16: end while
17: return $X$

Algorithm 4.3 lazyargmax
1: $C^{*} \leftarrow \infty$
2: while $C^{*} > B$ do
3:   $v^{*} \leftarrow \operatorname{argmax}_{v} \{\Delta^{v}_{f} \mid \hat{C}(X^{-} \cup \{v\}) \le B\}$
4:   $\Delta^{v^{*}}_{f} \leftarrow 0$
5:   $C^{*} \leftarrow C(X^{-} \cup \{v^{*}\})$
6: end while
7: return $v^{*}$

Our variation of this extra step is designed explicitly for the submodular orienteering problem. The usual step replaces elements with new elements if their cost-benefit ratio is higher. In our algorithm, the cost-benefit ratio is not considered; the focus is on just the benefit. We do, however, only consider elements that, when added, result in a solution that has a cost lower than the budget. The benefits of this heuristic are twofold. The first benefit is that the nearest-neighbour algorithm is only precise over large instances of the TSP (i.e. $O(\log(n))$), making the error in the estimation of the cost-benefit ratio relatively large. This error will be substantial because the gap over which decisions to include elements need to be made ($C(X^{-} \cup \{v\}) - C(X^{-})$) can be relatively small. In our heuristic, we do not need to estimate the cost-benefit ratio. The other reason is that elements of $V$ with the largest marginal return could have been ignored during the optimization phase because of a large associated cost relative to a partial solution. In our extension, these elements are reconsidered with respect to a set that is close to the final trajectory.

In this algorithm, we also compute $\hat{C}_v$ (the HK bound), seemingly for no particular reason. Its use is, however, hidden in the lazyargmax step, where the exact cost $C(X^{-} \cup \{v^{*}\})$ is computed (see Algorithm 4.3). Since the HK bound is a lower bound, we can safely ignore all values with an HK bound that is greater than $B$. The practical tightness of the HK bound ensures that only a minimal number of the more expensive exact cost evaluations need to be performed.
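The lazy evaluation idea can be sketched as follows (schematic; `gain`, `cheap_lb` and `exact_cost` are placeholder callables standing in for the marginal gain, the HK lower bound and the exact TSP cost):

```python
def lazy_argmax(candidates, gain, cheap_lb, exact_cost, budget):
    """Pick the highest-gain candidate whose exact cost fits the budget.

    A cheap lower bound on the cost discards infeasible candidates early;
    the expensive exact cost is only evaluated for the current best candidate.
    """
    feasible = [v for v in candidates if cheap_lb(v) <= budget]  # LB filter
    for v in sorted(feasible, key=gain, reverse=True):
        if exact_cost(v) <= budget:   # expensive check, performed lazily
            return v
    return None  # no candidate fits the budget
```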

4.3.5 Obtaining a tight reference measure

As mentioned earlier, the guarantee of the GCB algorithm is not tight. In this section, we will provide details on how to compute a much tighter problem-specific bound. We also suggest using this bound to compare the quality of different algorithms. Since this bound can be computed for large-scale realistic problems, it can also be used to quantify the performance of algorithms that do not have formal optimality guarantees.

The GCB algorithm produces a partial solution at each time step, $\{X_1, X_2, \ldots, X_t, X_{t+1}\}$, until the budget $B$ is violated at step $t+1$. The value $OPT$ is then given by:

$OPT = \dfrac{f(X_{t+1})}{1 - \prod_{k=2}^{t+1} \left(1 - \dfrac{c(X_k) - c(X_{k-1})}{B}\right)}$.  (4.6)
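Equation 4.6 is cheap to evaluate from quantities already recorded during the GCB run; a minimal sketch with toy numbers (illustrative, not experimental data):

```python
def opt_bound(f_final, costs, B):
    """Evaluate the reference bound of Equation 4.6.

    `f_final` is f(X_{t+1}) and `costs` the sequence of partial-tour costs
    c(X_1), ..., c(X_{t+1}) produced by the greedy steps of GCB.
    """
    prod = 1.0
    for prev, cur in zip(costs, costs[1:]):
        prod *= 1.0 - (cur - prev) / B
    return f_final / (1.0 - prod)

# e.g. three greedy steps with a budget of 10
print(opt_bound(f_final=6.0, costs=[1.0, 4.0, 8.0], B=10.0))
```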

Proposition 2. Let $X_t$ be the subset generated by the GCB algorithm, and $X_{t+1}$ be the first subset for which $C(X_{t+1}) > B$. Then $OPT$ is a tighter problem-specific bound than $\frac{f(X_t)}{\frac{1}{2}(1 - e^{-1})}$, i.e.:

$\dfrac{f(X_t)}{\frac{1}{2}(1 - e^{-1})} \ge \dfrac{f(X_{t+1})}{1 - e^{-1}} \ge OPT \ge f(X) \ge f(X_t)$,

where $X$ is the optimal solution of $\max\{f(X) \mid \tilde{C}(X) \le kB\psi(n)\}$, and $k$ is a constant close to one, as defined by [94].

Proof. This statement follows directly from the proof of Theorem 1 of [94].


The factor $(1 - e^{-1})$ is incidentally also the performance guarantee of the greedy algorithm for monotone submodular function maximization under cardinality constraints [96]. This bound is known to be tight. This suggests that $OPT$ is also tight, since [94] use, in their proofs, the same strategy that is used to prove the bound for the greedy algorithm. Tightness implies that it is not possible to find a tighter bound without making more assumptions on $f$. The most popular structure in submodular functions, namely curvature, does, for example, not apply to submodular functions modelling inspection performance.

It is important to stress that the guarantees from which $OPT$ is derived do not include the actual cost $C$. These guarantees depend on a placeholder submodular function $\tilde{C}$ that captures important characteristics of the true cost function $C$ [97].

4.4 Practical implementation

In Section 4.3, we discussed the abstract structure of the inspection planning problem and presented an algorithm to solve it. In this section, we will discuss how we can use this approach to solve a practical inspection planning problem. Important in our practical implementation is that we want to keep the theoretical guarantees from Section 4.3 intact. On the other hand, we also require that the theoretical assumptions make sense in the context of practical problems.

Another important aspect is that we strive for generality. This means that the implementation should be able to solve planning problems for drones as well as robotic manipulators. Furthermore, we cannot make particular assumptions on the quality function $f$ or cost function $C$, which need to be tailored to a wide variety of specific problems.

4.4.1 Obtaining TSP costs

An important aspect of robotic inspections is the robotic system that generates the motion of the measurement device. So it is no surprise that the practical implementation is built around dealing with this system efficiently. In this work, we assume that we can query the validity of robot states in a robot simulation environment. This validity entails that the robot can move the measurement device to a certain position and orientation without colliding with the environment. In our implementation, we use the robot simulation environment V-REP [62]. A first step in encoding the robot limitations efficiently is to construct a view pose space discretization.

To guarantee a general approach, we will perform the discretization of the view poses in task-space. The alternative, discretizing configuration-space, would result in a discretization of low quality in the case of high-dimensional state spaces. We will start by defining a specific discretization approach and proceed by pointing out the advantages of this discretization.

The discrete position set $P$ is generated by constructing a regularly spaced grid in the inflated convex hull of the object that needs to be inspected. All the points that cannot be reached by the robot in any orientation are removed from the set $P$. Note that we can guarantee that the points outside the inflated convex hull have a limited inspection quality. If the maximum scanning distance is chosen as the inflation distance, we can even guarantee that the greedy algorithm will never select these points. Orientations are generated randomly and uniformly [98], yielding the finite set $O$. The path planning is then executed on a graph $G = (V, E)$, where each vertex $v$ in $V$ corresponds to a fiber $\{p\} \times O$ that represents all the poses in one specific position $p \in P$, considering each discrete orientation. Two such vertices $\{p_i\} \times O$ and $\{p_j\} \times O$ are connected by an edge $e_{i,j} \in E$ if and only if the positions $p_i$ and $p_j$ are connected by a grid edge or a grid diagonal. Note that the reachability of positions at specific orientations was not evaluated during the construction of graph $G$. We will, however, check the reachability of positions at specific orientations in a lazy fashion at a later stage. This choice to only check the reachability of positions (rather than poses) results in a reduced number of cheaper reachability queries ($O(l)$ vs $O(n)$, see Figure 4.1) to the robot simulator. The cost between two view poses $(p_i, o_i)$ and $(p_j, o_j)$ is modelled as an edge cost given by:

$c(e_{i,j}) = (1 - \beta)\, d_t(p_i, p_j) + \beta\, d_o(o_i, o_j)$.  (4.7)

Here $d_t$ and $d_o$ are metric distance functions (i.e. the triangle inequality is satisfied) between positions and rotations, and $\beta$ is a weighting parameter. The main consequence of this modelling choice is that the costs of orientations and positions are separated. Practical cost functions also motivate this choice. For inspections with drones, this type of cost function can model the total travelling cost, which must be bounded due to battery life. For inspections using robotic manipulators, this cost is proportional to the distance travelled by the measurement device, of which the derivative (i.e. speed) is limited. An example where this cost function fails is when the energy consumed by a robot manipulator is considered as a cost. These types of costs are, however, improbable to play a role in robotic inspection problems, because the speed of the inspection is the main concern in the economics of robotic inspections.
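A sketch of the edge cost of Equation 4.7, with the metric choices used later in the experiments (Euclidean distance for $d_t$, the axis-angle rotation angle for $d_o$, $\beta = 0.01$). Representing orientations as unit quaternions is an assumption made here for convenience:

```python
import math

def translation_dist(p1, p2):
    """Euclidean distance d_t between two 3D positions."""
    return math.dist(p1, p2)

def rotation_dist(q1, q2):
    """d_o: angle (radians) of the relative rotation between two unit
    quaternions (w, x, y, z); equals the axis-angle rotation angle."""
    dot = abs(sum(a * b for a, b in zip(q1, q2)))
    return 2.0 * math.acos(min(1.0, dot))

def edge_cost(pose_i, pose_j, beta=0.01):
    """Weighted combination of translation and rotation cost (Eq. 4.7)."""
    (p_i, o_i), (p_j, o_j) = pose_i, pose_j
    return (1.0 - beta) * translation_dist(p_i, p_j) + beta * rotation_dist(o_i, o_j)
```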

Now that the edge costs $c(e)$ are defined, we can run all TSP-related algorithms on graph $G$. The discretization and distance matrices are pre-computed for both positions and orientations. The computation of the distance matrix in the case of the positions requires the computation of all shortest paths between every combination of positions. With Johnson's algorithm [99] this step has a time complexity of $O(l^2 \log(l) + l^2)$. Note that we made use of the fact that the number of edges is $O(l)$ in our proposed graph. Also note that the choice to separate positions from orientations results in a highly reduced computational cost. In our experiments, we will show that this computational cost is acceptable even for large problem instances. The pre-computed matrices can then be used by the TSP queries performed during the optimization phase, so that a cost $c(e)$ is only computed when needed.


For every subset of view poses in a TSP query, a sub-matrix can quickly be extractedas the sum of a subset of the two distance matrices.
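This sub-matrix extraction can be sketched as follows (illustrative; `Dt` and `Do` stand for the pre-computed position and orientation distance matrices, and each view pose is identified by a pair of indices into them):

```python
def tsp_submatrix(poses, Dt, Do, beta=0.01):
    """Assemble the cost matrix for a TSP query over a subset of view poses.

    `poses` is a list of (position index, orientation index) pairs; the cost
    of each entry follows Equation 4.7, combining the two pre-computed
    distance matrices without recomputing any shortest paths.
    """
    return [
        [(1.0 - beta) * Dt[pi][pj] + beta * Do[oi][oj] for (pj, oj) in poses]
        for (pi, oi) in poses
    ]
```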

4.4.2 Obtaining measurement quality

While the calculation of the TSP costs already required the discretization of the view pose space, calculating the measurement quality also requires the discretization of the object that needs to be inspected. Many methods exist that accomplish this, and the right choice depends on the application. In all our experiments, we will randomly and uniformly sample the input mesh. The surface normal associated with each point in the discretization is inherited from the triangle from which the point was sampled. This normal can later be used in determining the expected measurement quality from different viewpoints. Determining the measurement quality for each view pose-surface point combination is achieved in three steps.

In the first step, the visibility is calculated using a highly efficient ray-tracer [77]. Note that because the orientations and positions of the viewpoints are independent, we only need to perform $l \times k$ visibility computations. The second step evaluates, for each visible view pose-surface point combination, for which orientations of the measurement device the surface point is in the view frustum of the sensor located in the view pose. Finally, the expected measurement quality is calculated for the remaining view pose-surface point combinations. These quality values are pre-computed and stored in a sparse matrix. How this quality is computed depends on the physics of the measurement technique that is being used, and is thus highly dependent on the measurement technique. In our implementation, we make use of a GPU to quickly evaluate the marginal benefit of adding a viewpoint in parallel. We do this mainly because the time complexity of this evaluation is $O(nk)$. In our largest experiment, for example, $n = 351 \cdot 10^3$ and $k = 30 \cdot 10^3$ (ignoring the sparsity).

4.4.3 Optimization

The optimization phase uses the aforementioned pre-computed quality values and distance matrices. Given the pre-computed data, it is straightforward to implement Algorithm 4.1 and Algorithm 4.2. The final measurement path returned by the algorithms is, however, not guaranteed to be executable by the robot. Note that the following limitations of the robot were neglected during the construction of the problem:

1. Reachability of specific orientations at positions is not guaranteed.

2. It is not guaranteed that the robot can execute each path that is consideredby the optimizer.


The reachability of specific orientations will be checked lazily just before adding new points to the solution set. This reachability check is performed in a robot simulator. If the pose of this point is not reachable by the robot, it will be ignored, and the next best point will be considered.

The second point is more challenging and presents a fundamental challenge related to solving the inspection planning problem. Especially for robotic manipulators, it is expensive to perform path planning queries of long, complex paths in cluttered environments. This path planning requires checking the capability of a robot to execute consecutive linear motions. The number of such queries needed to keep the theoretical guarantees intact would result in prohibitive computation times. This is mainly because the number of possible trajectories under consideration is exponential in the size of the input graph. Our final algorithm will neglect this check. Thus the near-optimality only remains intact if the following assumption is valid.

Assumption 1. If every $p \in P$ is reachable in at least one orientation $o \in O$ by the robot, and every view pose $x \in X \subseteq V$ is reachable (position + orientation), then we assume that the robot can move the sensor along the trajectory that minimizes $C(X)$ without incurring additional costs.

This assumption is almost surely guaranteed in the case of drone inspections if the resolution of the discretization is sufficiently fine. The view pose discretization that we proposed is aimed towards maximizing the probability that this assumption is also valid in other cases (e.g. manipulators). The grid structure of the view pose space graph ensures that the maximal spatial distance between reachability checks is limited to the diagonal distance of the input grid. The capability of our approach to deal with large graphs ensures that grids with a fine spacing can easily be achieved. Also note that, from a theoretical perspective, this assumption only has to hold for the final solution. Partial solutions that violate Assumption 1 do not imply that the final solution will violate this assumption, or that $OPT$ will be violated.

An improved implementation is, however, possible. This implementation would check, in each iteration, all configurations in the complete path (i.e. the path that minimizes $C(X)$) with a specific orientation. However, when such a path is not entirely reachable, complicated data structures are required to reflect this knowledge. Such a data structure is required because robotic manipulators can have multiple inverse kinematics solutions, which makes the reachability dependent on previous states. This would result in a substantial increase in the complexity of the implementation, and is therefore not considered in this work. The choice to omit this step is also motivated by our experiments, in which Assumption 1 was never violated.


4.5 Experiments

We will perform three different experiments in this section. The first experiment aims to evaluate the quality of solutions generated by the proposed algorithm in a variety of complex inspection tasks. The variety of problems arises from combining different robotic systems, different quality functions and different inspection objects. The second experiment evaluates the robustness of the GCB algorithm and post-processing step to changes in problem-defining parameters. The third experiment aims to show that the proposed approach can solve highly complex, large-scale inspection tasks. In this experiment, the focus is less on flexibility and more on the size of the problems.

To keep the variety of problems manageable, we limit the number of quality functions to two specific functions. These functions are displayed in Figure 4.2. The first quality function models an inspection task where only coverage is important. Thus the measurement quality is not dependent on the measurement conditions. We do, however, define a maximum measurement angle, which is 30° in Figure 4.2 (left), after which the quality becomes zero. Function $G$ (see Equation 4.3) is in both quality functions the max operation. The second function is more complex and is given by:

$A = \dfrac{\cos(\gamma)}{r^2}$.  (4.8)

Here $\gamma$ is the angle between the surface normal and the vector pointing from point $m$ to view pose $v$, and $r$ is the distance between these points. This function is proportional to the projected area of a surface element located at $m$ measured from $v$. It is important that there is a minimum measurement distance that limits measurements that are too close. Note that any linear scaling of the distance leaves the problem unchanged. In experiment 1, we will consider both quality functions, and in experiment 2, we will only consider the second quality function.
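The projected-area quality rate of Equation 4.8 can be sketched as follows (illustrative; `r_min` is an assumed cutoff parameter, since the thesis only states that a minimum measurement distance is needed):

```python
import math

def quality_A(m, n, p, r_min=0.1):
    """Quality rate of Eq. 4.8 for a surface point m with unit normal n,
    seen from view position p: cos(gamma) / r^2, zero when too close or
    when the point is viewed from behind the surface."""
    d = [pi - mi for pi, mi in zip(p, m)]        # vector from m to the view pose
    r = math.sqrt(sum(di * di for di in d))
    if r < r_min:
        return 0.0                               # too close to measure
    cos_gamma = sum(di * ni for di, ni in zip(d, n)) / r
    if cos_gamma <= 0.0:
        return 0.0                               # viewed from behind the surface
    return cos_gamma / (r * r)
```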

Both functions will result in fundamentally different submodular functions $f$. Function $C$ will result in an $f$ that can feature many nonlinearities, since $q_i$ can only be zero or one. This results in a more submodular $f$ (i.e. higher submodular curvature). $A$ will result in a smoother $Q$, since $q_i$ can take on all positive values.

4.5.1 Performance evaluation

During this experiment, we test the performance of the proposed algorithm in a wide range of problems. We create different problems by combining two different measurement systems, measuring three different objects with two different quality functions (creating 12 distinct problems). As a reference technique, we consider the traditional method that starts by building a set of viewpoints using the greedy algorithm, ignoring travelling costs, and proceeds by connecting them via the shortest path. This algorithm is also applied to the discretization proposed in this article. We will refer to this technique as the greedy algorithm, which refers to the construction of the viewpoint set. We run the improvement algorithm of subsection 4.3.4 on both the result of the GCB algorithm and the greedy algorithm. We will distinguish between the original solution and the improved solution by adding a $+$-sign. We will compare the performance of algorithms by the percentages $Q(X)/OPT$ (subsection 4.3.5).

Figure 4.2: These plots show the quality functions that are used in the experiments (left panel: $C$, right panel: $A$). The black arrow in each figure represents the surface normal, and the colours display the quality value if a camera is located at that position pointing towards the surface element (located at the base of the black arrow). High measurement quality is displayed as a blue colour, and a low quality value is shown as yellow. The left image is a standard coverage quality function, thus one if the measurement conditions are acceptable and zero otherwise. The right image shows a quality function proportional to the projected surface area of a surface element, which is a smooth function.

Figure 4.3: This figure shows the robot systems (Kuka, UR10) and objects that are considered in the experiment. The first measurement system consists of a Kuka KR16 robot with 6 DoF and a complicated measurement device. The second system consists of a smaller camera attached to a UR10 robot with 6 DoF and an additional rotation table (7 DoF in total). The first system represents a very constrained system, while the second system is very flexible. The three measurement objects are a bicycle frame, a transmission housing and a plate with elevations. (Icons created by Pham Duy Phuong Hung from the Noun Project.)

The two measurement systems in this experiment represent a constrained system and an unconstrained system (see Figure 4.3). The Kuka system features a large measurement device that limits the movements of this robot significantly due to complex self-collisions. This results in a reduced reachability of the manipulator. The UR10 system with rotation table has 7 DoF in total, which makes it a less restricted system. Furthermore, the measurement device is smaller, resulting in fewer self-collisions. The three objects that need to be inspected are fundamentally different. The bicycle frame is a large object with a tube structure. The transmission casing is a nearly convex object that still features complex occlusions. The plate object can be efficiently measured by a simple path that moves back and forth over the object.

To limit the number of problems, we set all other problem-related parameters to be equal. However, we consider the effect of changing several important parameters in the next experiment. Parameter $\alpha$ in subsection 4.3.2 is chosen to be 0.05. This means that performing a measurement costs as much as travelling 50 mm with the end-effector. This choice reflects that we focus on continuous scanning, which requires that the robot at least slows down the speed of the end-effector around any area of measurement. Distance function $d_t$ introduced in Equation 4.7 is the Euclidean distance between points in metres. Function $d_o$ is the angle in radians of the axis-angle representation of the rotation between orientations. Parameter $\beta$ is chosen to be 0.01.

The spatial view position discretization is obtained by dilating the convex hull of the object under consideration by 400 mm. The resolution of the grid is 50 mm in each dimension. Any unreachable point is rejected in advance in accordance with the technique proposed in subsection 4.4.1. In this experiment, we will mainly focus on the quality of the solutions returned by all algorithms. We will not focus on the execution time of each algorithm, because our implementation is aimed towards flexibility rather than speed. Thus, all times that are provided are only suited to indicate execution times. The budget was chosen to be 10 for all experiments, but was increased until the total coverage of the trajectory was saturated. This ensures that the path is long enough to at least cover the entire object.

The perspective angle of the measurement device is 45° (an arbitrary choice) in all experiments, except for the experiments where the plate object is being measured. The perspective angle in these experiments was changed to 15°. This distinction was introduced to force the optimizer towards trajectories that pass multiple times over the plate object. While the maximum measurement angle in the coverage function is 30° in nearly all cases, it is 10° in the case of the plate object. The results of


70 CHAPTER 4. NEAR-OPTIMAL INSPECTION PATH PLANNING

Problem definition and algorithm performance (% of OPT):

Function  B    Greedy  Greedy+  GCB    GCB+
C         15   64.2    64.4     63.8   64.7
A         10   66.5    67.8     63.9   70.1
C         10   64.7    65.2     64.0   66.4
A         10   65.1    66.0     63.9   66.5
C         15   62.4    62.6     63.3   63.5
A         15   66.9    67.9     63.2   69.8
C         15   66.1    66.6     63.7   65.5
A         10   62.8    63.8     63.4   70.6
C         15   64.9    65.5     63.2   67.8
A         12   67.0    67.2     63.2   68.7
C         10   66.0    66.6     63.4   66.5
A         15   65.9    72.5     63.3   73.3

Problem sizes (k, n, l, r) for these inspection tasks, grouped under the Kuka and UR10 systems: 3k/350k/7k/50, 10k/300k/6k/50, 10k/125k/2.5k/50, 10k/225k/4.5k/50, 10k/226k/4.5k/50 and 10k/141k/2.8k/50.

Large-scale inspection tasks:

Function  B    k    n     l   r   Greedy  Greedy+  GCB    GCB+
A         70   30k  351k  7k  50  63.3    65.0     63.3   65.7
A         800  10k  297k  6k  50  64.0    64.2     63.3   64.2

Figure 4.4: Figure summarizing the results that were obtained for the first and final experiment. The icons are introduced in Figure 4.3, Figure 4.2 and Figure 4.6. The letters in problem size were introduced in Figure 4.1. All the numbers under algorithm performance are percentages of OPT. Greedy is the result of running the traditional approach that separates the viewpoint collection (using the greedy algorithm) and path planning steps. The + refers to feeding the solution of the original approach to the improvement step introduced in subsection 4.3.4.

this experiment are displayed in Figure 4.4.

The maximum time required for pre-computing the discretization in the smaller-scale experiments was 10 minutes. The maximum time required to execute any algorithm was 1 hour. All these times are acceptable in real-world applications, especially since extensive graphs (i.e. 350k nodes) were considered in this experiment.

A first thing that is noticeable from Figure 4.4 is that the traditional greedy algorithm performs surprisingly well. It almost always outperforms the basic GCB algorithm. This means that important viewpoints can almost always be connected with an efficient path. The range of different problems that were considered in these experiments indicates that this can safely be assumed in the design of new algorithms. The GCB+ algorithm almost always outperforms any other algorithm. However, the margin can be small. The degree to which GCB+ outperforms other methods is also dependent on the problem that is being considered. This observation also indicates that a proper evaluation of new algorithms requires the consideration of many different problems. Another observation is that the GCB+ algorithm performs better with the A quality function. The smoothness of A results in a smoother f, which is better suited to the local nature of the + step


[Table of Figure 4.5: performance (% of OPT) of Greedy, Greedy+, GCB and GCB+ on the Kuka problem (quality function C) and the UR10 problem (quality function A), under sweeps of the budget B (3, 6, 9, 12, 15), of α (baseline 0.05; sweep 0.15, 0.25, 0.35, 0.45, with the budget augmented to 15 + (α − 0.05)·X_greedy) and of β (baseline 0.01; sweep 0.2, 0.4, 0.6, 0.8).]

Figure 4.5: Figure showing the effect of changing the problem-defining parameters α (Equation 4.4), β (Equation 4.7) and the budget B (Equation 4.5). In this experiment only the best and the worst problems of the previous experiment were considered.

(subsection 4.3.4).

4.5.2 Robustness analysis

In the previous experiment, we fixed some parameters that defined the inspection problem. In this experiment, to evaluate the stability of our algorithms, we will change these parameters to see the effect on the quality of the solutions generated by automated algorithms. In the previous experiment, we iteratively defined a travel budget for each experiment. To get an idea of the effect of this choice, we will change B and keep all other parameters the same. In submodular orienteering, the notion of cost is central. Therefore we will change both parameter β (Equation 4.7) and α (Equation 4.4) to see the effect of different types of cost functions. Parameter α determines the cost of a measurement, and β determines whether orientation-related costs or position-related costs dominate. When α is changed, we also augment the budget. We augment the budget in such a way that the result of the greedy algorithm will remain the same. The main reason for this augmentation is that the GCB algorithm will always have a solution with more elements than the greedy algorithm. Changing α will indirectly penalize the number of elements in the final solutions. So changing α will test the ability of the GCB algorithm to deal with this changing context.

We will run the algorithms with the different parameters on two different problems. These are the problem that proved to be the most challenging and the problem that proved to be the least challenging. Both problems incidentally considered the inspection of the plate object (see Figure 4.3). This object was inspected by the Kuka KR16 robot, using the quality function C, in the most challenging problem. In the least challenging problem, this object was inspected by the UR10 robot with the quality function A. The results of these experiments are provided in Figure 4.5.

The performance of the GCB algorithm is robust to changing all investigated


Figure 4.6: This figure shows the two complex inspection tasks considered in the experiment. The first inspection task entails the inspection of a car frame with a complex measurement device. The movement of this device is provided by a Kuka KR16 robot placed on a linear translation stage. The second inspection task entails the inspection of a 60 m high wind turbine with a drone.

parameters. The only notable exception is when the budget is too small. Our augmentation step is, however, able to recover from this in the less challenging problem. The greedy algorithm is, however, far less robust to a change of parameters.

4.5.3 Large scale highly complex inspection tasks

In this section, we will study the scalability of our approach by considering two very large, complex yet distinct problems. The first problem considers the inspection of a car frame with a robotic manipulator on rails. This problem distinguishes itself by its very complex collision constraints, due to complex collisions and self-collisions. The second problem considers the inspection of a 60-meter-high wind turbine with a drone. This problem is different from the first problem since the trajectory of the drone is much longer. The quality of the solutions obtained in this experiment is included in Figure 4.4.

The perspective angle of the measurement device in the car frame inspection problem is 40°. The distances are computed precisely the same as in the first experiment. The convex hull of the car frame is dilated by 0.5 m, and a grid with a resolution of 0.1 m within this dilation is adopted to generate the viewpoint set. The car frame itself is a triangulation consisting of 373k triangles. To perform collision checks, it is represented by an octree with 3525 voxels. Visibility is computed with the original mesh, together with all other static scene meshes. Further details about the size of the problem are provided in Figure 4.3. The drone in the wind turbine inspection problem is equipped with a camera with a perspective angle of 45°. The wind turbine is provided as a triangulation of 103k triangles and is represented as a voxelization of 7k voxels to compute collisions. The convex hull of the wind turbine is dilated by 1 m to create a viewpoint set with a resolution of 1 m.


The Greedy+ and GCB+ algorithms were allowed to run for 24 hours on both problems. The performance of the Greedy and GCB algorithms was extracted from running the + algorithms. Assumption 1 was found to be valid for both problems. For the turbine inspection problem, it was straightforward to find a path with linear connections between way-points. For the car frame inspection problem, it was more challenging because of the complexity of the robot system, self-collisions resulting from the measurement system and collisions with the complex car frame. To find this path, we resorted to programming the path manually using a virtual reality programming approach2. The fact that we could manually find a path connecting all way-points using linear paths suffices to show the existence of such a path.

The size of these problems results from the size of the input graph and the length of the final path. The largest problem in terms of the input graph is the car frame inspection problem, with an input graph of 351k nodes. Also notable is the size of the object discretization, which is 30k points. The length of the path in this problem was 351 nodes of the input graph. The complexity of the car frame is also an essential factor, since interactive collision checks are more expensive. The path length of the wind turbine inspection problem is 520 nodes.

4.6 Conclusion

We started by linking the general robotic inspection planning problem to the submodular orienteering problem. While this connection was already noticed in specific examples by other authors, the generality of this connection was never stressed and fully explored. For the submodular orienteering problem, there exists an algorithm that solves this problem while providing formal mathematical guarantees. We extended this algorithm with a post-processing step to improve the solution even further.

We investigated the assumptions about the real-world problem that are required during the optimization phase and formalized them in an assumption. We proposed a discretization procedure for real-world problems such that this assumption is highly likely to be satisfied. This means that solutions to the abstract optimization problem (together with their guarantees) are valid in the practical context as well. The discretization procedure that we proposed is also able to deal with very large and complex problems, as demonstrated in our experiments.

We subjected the proposed algorithms to a wide range of problems, in which they almost always outperformed the traditional approach. We also showed that the algorithm was capable of solving highly complex inspection planning problems. The observation that our algorithm can provide solutions to complex inspection problems with mathematical guarantees makes it an ideal reference method to benchmark more pragmatic algorithms that do not offer these guarantees. Furthermore, we showed that the advantage of having mathematical guarantees in the

2A video demonstrating this process is available online: https://youtu.be/nakQGTs4Fs0


context of inspection planning resulted in a more robust algorithm.

The proposed method in this chapter and the post-processing step proposed in Chapter 3 can be combined to obtain a complete automated inspection path planning procedure. This procedure would furthermore fulfil all the requirements from the problem statement. The only drawback of this procedure is that it is quite complicated, and perhaps not always user-friendly. The user input that is required to generate practically useful inspection paths presupposes some knowledge about the procedures themselves. This knowledge cannot always be expected from practical users of automated inspection planning procedures. So, in the following chapters, we will focus on increasing the usability for inexperienced users.


Part II

User-centered Inspection planning



Chapter 5

Human factors in camera network design

The work from this chapter is published under the title “Interactive camera network design with a virtual reality interface” [86].

5.1 Introduction

In Chapter 3 and Chapter 4 we presented automated inspection planning algorithms. However, to increase the flexibility and usability for inexperienced users, we will investigate a more user-centric approach. In this approach, we transform the abstract inspection problem into a problem that inexperienced users can manually solve. We will do this through intuitive visualizations and interactions in virtual reality. This approach is investigated in two stages. In this chapter, we will focus on a slightly simpler problem, namely camera network design. This problem is the same as the robotic inspection planning problem, but cameras can be placed freely, without robot or path constraints. After this chapter, we will study the more complex problem of robotic inspection planning in Chapter 6.

In the camera placement problem, the goal is to find an optimal configuration of cameras to perform some observation task. Many practical problems can be formulated as an observation task, so it is no surprise that this problem has been studied extensively. For example, the design of surveillance systems for both large buildings and outdoor environments can be formulated as a camera network design problem [23, 24]. In this formulation, the objective is to maximize the camera coverage over a floorplan of the environment. The same problem arises in designing custom tracking systems in virtual reality applications. In the photogrammetry community, a similar problem is studied, where the goal is to select image acquisition locations from which a 3D reconstruction will result in minimal uncertainty over reconstructed points [9, 22, 44, 100]. In computer graphics, the same problem occurs in view selection, where the goal is to select a limited number of views (renders) of



an object/scene that together provide the most efficient summary of information [101, 102, 103]. From these examples, it is evident that camera network design is related to inspection planning. The only difference is that a discrete set of measurement positions replaces the measurement path.

In general, three distinct steps are important in automated camera network design: representation of the problem, formulation of the cost/quality function, and optimization of this cost/quality function. We will discuss these steps in Section 5.2 and highlight the fundamental problems in practical settings by studying usability and the slightly altered mathematical structure (compared to robotic inspection planning). We will present a virtual reality user interface where the user is in charge of placing all cameras, thereby avoiding all of the traditional steps without a loss in quality. The fact that these steps are avoided makes the solution easier to understand for inexperienced users.

Our main idea, letting users design a camera network, is opposite to most approaches, which try to automate this design. We will motivate that there are structural problems with automated design approaches that are not appropriately addressed from the perspective of usability. In this work, the strong spatial reasoning skills of humans, which are crucial in solving this design task, will be enhanced by the visualization possibilities of virtual reality.

In subsection 5.2.1, we will discuss the structural problems with automated camera design algorithms from both a mathematical and a user interaction perspective. In Section 5.4 we will elaborate on our proposed interface, which will be evaluated in Section 5.5.

5.2 The automated camera network design problem

5.2.1 Problem structure

As discussed earlier, the goal of camera network design is to find an optimal camera configuration that performs some observation task. In order to design a relevant camera network, knowledge about the environment is necessary, traditionally in the form of a CAD model, which is frequently available. Further knowledge about the camera geometry and area of interest is strictly necessary. Early work on the art gallery problem tries to position cameras such that the entire area of interest is visible, using geometric algorithms [23, 104]. Approximate solutions are available for specific instances of this type of problem, which typically rely on a discretization of the problem [105].

Two things need to be discretized. Firstly, the area of interest that needs to be observed should be represented by a finite set of elements, i.e. M = {m1, ..., mn}. Secondly, the space of all possible camera locations needs to be a finite set of configurations, i.e. V = {v1, ..., vm} (viewpoints). This discretization reduces the


problem of finding a camera configuration that covers the area of interest to the classical set covering problem (SCP) [106, 107]. This binary representation of camera visibility is not useful because it lacks the freedom to model realistic camera network performance models [108]. A more general problem formulation that encompasses the SCP is known as submodular function maximization [96]. This class of problems is well studied, and many complexity results for these problems are known.

5.2.2 Camera network performance functions

A difficulty in camera network design is that it is very challenging to formally define what exactly makes a camera system good. This notion of quality is also highly problem-specific. An exact formulation is, however, needed in automated design approaches, so many propositions for such functions are available. Nevertheless, camera network performance functions fit seamlessly within the formal definition of inspection quality we developed in subsection 2.2.3. In this section, we will review popular function modelling choices and discuss their impact on the structure of the problem.

A first modelling choice is to encode the notion that it is better for an environment point to be viewed by multiple cameras [22, 24, 44, 107]. This corresponds to extending the previously defined function G (Equation 2.2) from the max function to arbitrary functions. The simplest choice of G is to count the number of cameras (cameras can have weights) and clip the function above some defined threshold [24], which results in a monotone submodular f. Another choice models the propagation of uncertainty in measuring environment points [22, 44]. This formulation, however, does not guarantee a concave G, which makes f non-monotone submodular. This will result in a deteriorated performance of optimization algorithms.
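A minimal sketch of this clipped-count choice of G; the `visible` predicate and the threshold value are illustrative assumptions, not the thesis implementation:

```python
def coverage_quality(env_points, cameras, visible, threshold=3):
    # f(X) = sum_m G(m, X), with G(m, X) the number of cameras in X that
    # see environment point m, clipped at `threshold`. Clipping a (possibly
    # weighted) camera count keeps f monotone submodular: adding a camera
    # never decreases f, and marginal gains shrink as the network grows.
    total = 0
    for m in env_points:
        seen = sum(1 for c in cameras if visible(m, c))
        total += min(threshold, seen)
    return total
```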

Another modelling choice is to assign weights to viewpoints and/or environment points based on some notion of importance [24, 102, 107]. No weighting scheme can fundamentally change the problem structure (f remains monotone submodular) as long as the weights remain positive.

Finally, regularizers can be added to the problem. The complete camera network design problem, with regularizer, can then be formulated as:

X* = argmax_{X ⊂ V : |X| = k}  f(X) − α ∑_{x,y ∈ X} r(x, y).   (5.1)

Note that the path constraints from Chapter 4 are replaced by cardinality constraints. A cardinality constraint limits the number of elements in the solution set (to k elements). These cardinality constraints model that a limited number of cameras is available. The positive constant α and positive regularizer function r are used as a tool to discourage the optimizer from choosing certain undesired camera configurations [24].


The concept of a regularizer is only relevant for the camera network design problem and not for the inspection path planning problem. The main reason for this is that viewpoints on a path are always connected. This is not the case for a discrete set of camera positions. An example where this can be useful is a stereo reconstruction camera system. Stereo reconstruction requires overlap in what neighbouring cameras perceive. The regularizer can penalize a lack of overlap, guiding the optimizer to solutions that exhibit this overlap. From the perspective of the problem structure, this is, however, not a good idea. The regularizer destroys the monotonicity, which will result in a far more challenging optimization problem with a worse solution quality [109, 110]. It is important to note that regularizers can pop up unexpectedly. Any element of a quality function that only depends on cameras and not on environment points can be written as a regularizer1.
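Evaluating the regularized objective of Equation 5.1 can be sketched as follows; `f` and `r` are hypothetical callables (r could, for instance, penalize stereo pairs that lack overlap):

```python
def regularized_score(X, f, r, alpha):
    # f(X) minus an alpha-weighted sum of the pairwise penalties r(x, y)
    # over all camera pairs in X (Equation 5.1). Even a positive r breaks
    # monotonicity: adding a camera can now lower the objective.
    penalty = sum(r(x, y) for i, x in enumerate(X) for y in X[i + 1:])
    return f(X) - alpha * penalty
```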

5.2.3 Solving the Automated Camera Network Design problem

Submodular function maximization (since f is submodular) with a cardinality constraint is a combinatorial NP-hard problem that is well studied. An advantage is that there is an asymptotically optimal algorithm2 to maximize a monotone submodular f under a cardinality constraint [111]. This algorithm is known as the greedy algorithm. The greedy algorithm builds a solution by greedily adding the next best camera to the final solution set. The greedy algorithm yields a tight (1 − 1/e)-approximation (≈ 0.63-approximation)3, which means that the optimal function value is at most a constant factor (1/(1 − 1/e)) higher than the value returned by the greedy algorithm. The downside is that it is proven that no algorithm can improve on the greedy algorithm within a polynomial number of steps [111], which is precisely why it is called an optimal algorithm. The greedy algorithm has an unbounded performance guarantee for non-monotone submodular functions. So any camera network performance choices (e.g. regularizers) that break the monotonicity are undesired.
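The greedy algorithm can be sketched as follows; this is a generic sketch, not the thesis implementation, and `f` is any monotone submodular set function given as a Python callable:

```python
def greedy(V, f, k):
    # Build X by repeatedly adding the viewpoint with the largest marginal
    # gain f(X + [v]) - f(X), until k cameras are placed. For a monotone
    # submodular f this yields a tight (1 - 1/e)-approximation.
    X = []
    for _ in range(k):
        best_v, best_gain = None, 0.0
        for v in V:
            if v in X:
                continue
            gain = f(X + [v]) - f(X)
            if gain > best_gain:
                best_v, best_gain = v, gain
        if best_v is None:  # no remaining camera adds any value
            break
        X.append(best_v)
    return X
```

With a coverage-style f, the first camera picked is simply the one covering the most environment points on its own; each later pick only gets credit for points not yet covered.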

It is important to note that the results mentioned above do not exclude superior optimizers that leverage some additional problem structure, but these algorithms will not generalize well and may be challenging to find. Furthermore, any geometric structure is hard to leverage, as geometric algorithms have a prohibitive computational complexity related to the aspect graph [26]. These algorithms are typically not suited for general-purpose camera network design applications.

Many strategies to solve the automated camera network design problem are available in the literature. Next-best-view planning uses an optimizer that builds a solution by subsequently adding the best viewpoint to the solution set, which is precisely the greedy algorithm that is optimal for the problem [9, 33, 44, 102]. Another approach is to employ an evolutionary optimization strategy [22]. A modular relaxation of the actual submodular function is also used together with

1 Examples include distance constraints, overlap constraints, orthogonality constraints, etc.
2 We will use the more commonly used term ‘optimal algorithm’ in the remainder of this work.
3 Here e is the base of the natural logarithm, not a problem-specific constant.


a branch-and-bound solution method [24]. The latter algorithms do not provide a theoretically better upper bound than the original greedy algorithm, so their performance is highly problem-specific. Specialized branch-and-bound methods exist for submodular functions as well, but they suffer from the fact that submodular functions are much harder to bound than modular functions, which limits their practical applicability [96]. This is a result of the fact that the worst-case running time of branch-and-bound algorithms is exponential in the problem size [112].

In this section, we linked the camera network design problem to monotone submodular maximization with a cardinality constraint. We discussed manipulations of the problem as treated in the camera network design literature and discussed their impact on the structure of the problem. This structure proves that automated algorithms are fundamentally limited in the quality of the solutions they can generate. An illustration of why the asymptotic properties of algorithms are essential is that the number of possible camera systems in the experiment in subsection 5.5.1 is of the order of 10^115 (draw 25 cameras from 40000 candidates). Because the greedy algorithm is optimal, any superior algorithm would have to address a significant subset of the vast number4 of possible solutions, which is not realistic.

5.3 User interaction

As discussed earlier, the first step in most automated approaches is to discretize the inherently continuous camera network design problem. Discretizing the area of interest is, in most cases, straightforward. Strategies such as voxelization and random sampling of points exist and require minimal interaction from a user. Sampling possible viewpoints is, however, much more difficult. The algorithm can choose any viewpoint from this set, so care should be taken that only valid configurations are present. This introduces the following problems:

1. The space of all possible camera positions must be represented by a finite number of samples that should be dense enough to represent the problem accurately.

2. The number of samples should be low enough to avoid prohibitively long optimization times.

3. Cameras cannot be placed at every position, so a lot of domain-specific knowledge is necessary to select these possible positions.

From an operational perspective, this is not ideal, because the user that should provide this information to the application needs domain-specific knowledge and, at the same time, must know about sampling strategies, which is rare. From a user

4 As a reference, the estimated number of particles in the universe is a comparatively measly 3.28 × 10^80.


interaction perspective, there are also issues in finding a method that avoids having to select every point manually but retains a qualitative sampling density.

While the user often has a good intuitive notion of what constitutes a good camera network, the choice of a quality function is challenging and highly problem-specific. Furthermore, many methods have weights that need to be chosen and that have a significant impact on the final result. Choosing this function with associated weights requires intimate knowledge about the mathematics of the problem and the measurement specifications of the cameras. Moreover, even then, iterations are necessary to align optimizer results with user expectations. For users without specialized knowledge, it is very challenging to encode these specific camera network requirements in an automated algorithm. For these users, dealing with the abstract structure of the camera network design problem will seem very unintuitive.

Defining a regularizer r and associated weight α is often required to avoid pathological behaviours of automated optimization algorithms [24] and to encode complicated network requirements. However, to our knowledge, no literature exists on how to define these, while they have a crucial impact on the final measurement system.

A general problem with all of the issues associated with user interaction, and in our opinion the worst, is that all the information that needs to be provided is very abstract. This abstractness excludes practitioners from the adoption of automated algorithms for the design of camera systems.

User interaction for camera network design has not received much attention in the literature. A GUI has been proposed that gives the user tools to perform the required discretization and assign importance weights to environment parts [24]. In this tool, viewpoint sampling is limited to a uniform sampling over the region of interest, which is not realistic in many real-world cases, and the cost function is fixed. A closely related work develops a GUI that allows users to manually choose camera configurations to complete 3D scans of single objects [113]. A user study was performed, which showed that the quality of solutions provided by users and by an automated algorithm are comparable, even with inexperienced users.

5.4 Virtual reality interface

5.4.1 Motivation and overview

The basic principle of our virtual reality interface is simple. The user is placed in the scene of interest together with an initial camera setup X. The application will calculate G(m, X) for each environment point m and visualize these values as an interactive coloured volume (cloud). Our choice of g(e, U) is discussed in subsection 5.4.4, but it is important to note that this function is only needed to provide the user with a qualitative notion of quality. The user can manipulate all


[Figure 5.1 diagram: the traditional workflow iterates over viewpoint sampling, quality function design and solution calculation; the proposed workflow visualizes the problem and lets the user improve the camera configuration interactively.]
Figure 5.1: Orange arrows with a book pictograph indicate steps in a workflow that require specialist knowledge from the user. Orange arrows with a brain pictograph require visual reasoning of a user but no specialist knowledge, and the green arrows indicate work performed by a computer.

[Figure 5.2 diagram: a simulator holds the geometry, volume and cameras; a volume thread renders a depth map for each camera and compares and combines them into a quality volume; a VR thread runs the rendering engine and user interaction.]
Figure 5.2: Three main parts that together form the structure of our interface.

cameras and see the effect on the environment quality in real time. The user is in charge of generating a camera configuration but can apply geometrical reasoning to solve this problem. Real-time colour feedback of the values of g(e, U) allows the user to decide what is important for his application while performing the optimization.

A schematic overview of the difference between our proposed workflow and the traditional workflow is given in Figure 5.1. Our proposed workflow avoids the need to perform a challenging viewpoint discretization step and the step that designs a specific quality function. These steps both require intimate knowledge about both the practical problem and the theory of camera design planning. The advantage of the traditional workflow is that a computer can automatically solve the camera network design problem. However, in reality, these results often do not correspond to what is expected by the user. This is mainly because the quality function is not aligned with the intuitive expectation of the user. Another common problem is that many informal notions of quality cannot be directly encoded in the cost function.


The structure of our interface5 consists of three main parts, as shown in Figure 5.2. In this section, we will give a brief overview of their function, but each will get a more detailed treatment in a later section. The first part is a simulator that handles all geometries, maintains their positions and performs dynamics calculations. The second part is responsible for calculating the coverage and quality of a given camera system. The final part is responsible for rendering to the virtual reality device and managing user interaction.

5.4.2 Simulator

As a simulator, we use a commercially available robot simulator called V-REP [62], which provides many features to our interface. The use of a flexible simulator allows for the modelling of complicated real-world dynamic systems up to high fidelity, and the subsequent design of camera systems in these environments.

In our proposed workflow, the simulator is also responsible for the definition and modelling of the specific problem. Firstly, the geometry of the problem should be available in the shape of a triangular mesh. This data format is widely available through the use of CAD software packages in construction and design. When no CAD model is available, one can always resort to 3D reconstruction from image data. In the experiments section, we will show both cases to highlight the flexibility of our solution. Next, we require geometrical knowledge of each camera that can be positioned. This information consists of the perspective angle of the camera, the resolution and the maximum/minimum measurement distance.
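For illustration, this geometrical camera knowledge could be collected in a small record like the sketch below; the class and its helper methods are our own assumptions, not the thesis implementation:

```python
from dataclasses import dataclass
import math

@dataclass
class CameraModel:
    """Hypothetical container for the geometric camera knowledge the
    interface needs: perspective (field-of-view) angle, resolution,
    and the valid measurement-distance range."""
    fov_deg: float     # perspective angle in degrees
    width_px: int      # horizontal resolution
    height_px: int     # vertical resolution
    min_dist: float    # minimum measurement distance (m)
    max_dist: float    # maximum measurement distance (m)

    def in_range(self, distance: float) -> bool:
        # A point can only be measured inside the distance band.
        return self.min_dist <= distance <= self.max_dist

    def focal_px(self) -> float:
        # Focal length in pixels for a pinhole model with the given
        # horizontal field of view.
        return (self.width_px / 2) / math.tan(math.radians(self.fov_deg) / 2)

cam = CameraModel(fov_deg=90, width_px=640, height_px=400,
                  min_dist=0.5, max_dist=10.0)
```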

Finally, a discretization of the space of interest needs to be performed. We provide a box that can be positioned in the scene, of which the size and position can be changed. This box is shown in purple in the left image of Figure 5.3. The user can select a discretization resolution, which determines into how many voxels the box will be subdivided. From each voxel, we select the centre point, and together these points determine the set M. We choose to represent M as points because geometric calculations using points are much faster than using objects that have volume.
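As an illustration, the voxel discretization could be implemented along these lines; `voxel_centres` is a hypothetical helper, not the interface's actual code:

```python
import numpy as np

def voxel_centres(box_min, box_max, resolution):
    """Subdivide an axis-aligned box into resolution[0] x resolution[1] x
    resolution[2] voxels and return the centre point of each voxel.
    Together these centre points form the discretized set M."""
    box_min = np.asarray(box_min, dtype=float)
    box_max = np.asarray(box_max, dtype=float)
    size = (box_max - box_min) / np.asarray(resolution)
    # The centre of voxel (i, j, k) lies half a voxel away from the corner.
    axes = [box_min[d] + size[d] * (np.arange(resolution[d]) + 0.5)
            for d in range(3)]
    xx, yy, zz = np.meshgrid(*axes, indexing="ij")
    return np.stack([xx.ravel(), yy.ravel(), zz.ravel()], axis=1)

M = voxel_centres([0, 0, 0], [2, 2, 1], (4, 4, 2))  # 32 centre points
```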

5.4.3 Interactive quality computation

Interactively calculating quality values over the space of interest is a challenging task. The visibility between all environment points and all cameras in the scene needs to be calculated in an environment of arbitrary complexity. To create an interactive experience, we resort to the z-buffer visibility algorithm to compute visibility. In our results, we were able to achieve computation times of the order of 300 ms to compute the quality of 50k points of interest for ten cameras in an environment of 300k triangles. These results will be discussed more formally in the

5. Our implementation is publicly available at: https://github.com/BorisBogaerts/V-REP-VR-Toolbox


results section, but they demonstrate the interactivity and scalability of the z-buffer approach. In our implementation, we use the publicly available implementation of this algorithm in the Visualization Toolkit (VTK) [114].
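The z-buffer idea can be sketched as follows: obtain a depth map from the camera's point of view, project each environment point into the image with a pinhole model, and call a point visible when its depth does not exceed the buffer value at that pixel. This NumPy sketch illustrates the principle only; the thesis uses the VTK implementation:

```python
import numpy as np

def visible_points(points_cam, depth_map, focal_px, eps=1e-3):
    """Z-buffer visibility test: a point (in camera coordinates, z forward)
    is visible when it projects inside the image and its depth does not
    exceed the rendered depth at that pixel."""
    h, w = depth_map.shape
    cx, cy = w / 2.0, h / 2.0
    z = points_cam[:, 2]
    in_front = z > 0
    safe_z = np.where(in_front, z, 1.0)           # avoid division by zero
    # Pinhole projection to pixel coordinates.
    u = np.where(in_front, focal_px * points_cam[:, 0] / safe_z + cx, -1)
    v = np.where(in_front, focal_px * points_cam[:, 1] / safe_z + cy, -1)
    inside = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    ui = np.clip(u.astype(int), 0, w - 1)
    vi = np.clip(v.astype(int), 0, h - 1)
    # Visible when not occluded by the geometry in the depth buffer.
    return inside & (z <= depth_map[vi, ui] + eps)

# Toy example: a uniform depth buffer at 5 m; a point at 2 m is visible,
# a point at 8 m lies behind the buffer surface and is occluded.
depth = np.full((400, 640), 5.0)
pts = np.array([[0.0, 0.0, 2.0], [0.0, 0.0, 8.0]])
mask = visible_points(pts, depth, focal_px=320.0)
```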

The proposed method can also model cameras with a non-traditional field of view (e.g. omnidirectional cameras), as these fields of view can be recreated by composing multiple perspective renders.

5.4.4 Virtual reality process

The final part of our application deals with managing the virtual reality component. This encompasses both the rendering of all information and managing the user interaction. Important in this process is that the frame rate of the rendering procedure is high enough to provide a comfortable virtual reality experience, so the focus of this part is speed. The information we render is:

1. All problem geometry

2. An interactively rendered volume

3. Virtual camera feeds

As rendering engine, we again use the Visualization Toolkit (VTK) [114], which connects to the publicly available OpenVR API, which in turn connects to popular virtual reality hardware. The engine can handle real-time volume rendering by using a highly optimized implementation of GPU-based ray-casting [115]. As a final feature, the camera feeds obtained by rendering individual cameras, as discussed in subsection 5.4.3, are displayed as dynamic textures viewable by the user. These feeds contain useful information that the user can leverage during his design task. An example where this can be useful is the design of live stereo reconstruction setups. In these setups, adjacent cameras must have enough overlap in what they perceive. In this application, the user can see what each camera sees and ensure that the required overlap is present.

The user can manipulate camera positions by dragging each camera with a controller. The virtual reality thread will update the position inside the simulator. This, in turn, results in a change of the camera position in the volume thread, which eventually results in an updated quality function that changes the appearance of the volume. Furthermore, the user can change his position and scale relative to the scene. We believe that the latter is of fundamental importance to performing the design task. The user can, for example, do a rough initialization of the camera setup when the scene is small (zoomed out), and perform more detailed manipulations in a larger scene (zoomed in).

Finally, the user can choose a colour/opacity function. The colour/opacity transfer function selects a colour and an opacity for every point in the volume as a function


[Figure: the three experimental scenes.
Harbour: CAD source, 140k triangles, 750 m²
Office: CAD source, 50k triangles, 1600 m²
Garage: reconstructed from images, 530k triangles, 900 m²]

Figure 5.3: Overview of the various scenes used in our experiments and some associated metrics. The source of the scene is either from a CAD workflow or reconstructed from images.

of its value. For example, the function gives an opacity of zero if the environment point is visible, and red with an opacity in the range 0 < o < 1 if a point is invisible. In our implementation, we support the choice of one user-defined custom function that can be interactively changed. Our experience is that using two functions, one which shows the quality and one that shows what is invisible, provides the user with all the information that is needed.
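A minimal colour/opacity transfer function in this spirit might look like the sketch below; the exact mapping from quality to opacity is our own illustrative choice, not the function used in the interface:

```python
def transfer_function(quality, visible):
    """Sketch of a colour/opacity transfer function: fully transparent
    where the environment point is visible, red with an opacity in the
    open range (0, 1) where it is not. Lower quality gives a stronger
    (more opaque) red."""
    if visible:
        return (0.0, 0.0, 0.0, 0.0)  # (r, g, b, opacity): hidden from the user
    opacity = min(max(1.0 - quality, 0.05), 0.95)  # clamp into (0, 1)
    return (1.0, 0.0, 0.0, opacity)  # red, semi-transparent
```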

5.5 Experiments

To evaluate the performance of our proposed workflow, we performed three experiments. The first two experiments are user studies that aim to evaluate the quality of the user-generated camera networks relative to automatically generated networks in different settings. It is important to note that these user studies are more formative than evaluative. These studies aim to provide directions for future research. In the third experiment, we evaluate the computational performance of our interface to highlight its scalability. Images and basic information about each of the three experimental scenes are depicted in Figure 5.3. In all experiments, we focus on realistic, large-scale and complex camera network design problems. The focus on more challenging design problems is important because simpler problems might give an overly optimistic view of human performance.

The performance of user-generated and computer-generated solutions is compared with the same quality functions and the same constraints in every experiment. To quantitatively compare different camera networks, we use the same procedure to evaluate their quality f. We obtain this quality by computing a quality value g for each environment point m using the method discussed in subsection 5.4.3. These local quality values g are finally accumulated into a global quality value f. We do not focus on the time required to generate each solution because these times are relatively short compared to the time needed to design and implement a camera system.


As automated algorithm, we use the well-known greedy algorithm [9, 33, 44, 102] on a problem-specific cost function. The viewpoint discretization of the problem is different in each experiment but is generally dense. This dense discretization ensures that the computer-generated solution is close to the best achievable solutions. In each experiment, we actively searched for discretizations that resulted in the best performance for the automated algorithm.
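The greedy algorithm referenced here picks, at each step, the candidate viewpoint with the largest marginal gain; a set-cover-style sketch for a pure coverage objective, where the data layout (a mapping from candidates to the sets of environment points they see) is our simplification:

```python
def greedy_camera_selection(candidates, coverage, k):
    """Classic greedy algorithm for monotone submodular maximization under
    a cardinality constraint: repeatedly add the candidate viewpoint with
    the largest marginal gain in covered environment points."""
    chosen, covered = [], set()
    for _ in range(k):
        best, best_gain = None, 0
        for c in candidates:
            if c in chosen:
                continue
            gain = len(coverage[c] - covered)  # marginal gain of adding c
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:  # no remaining candidate adds coverage
            break
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

# Toy instance: three candidate viewpoints covering overlapping point sets.
cov = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}}
sel, covered = greedy_camera_selection(["a", "b", "c"], cov, k=2)
```

The greedy choice guarantee for monotone submodular objectives (a (1 − 1/e) approximation) is what makes this simple loop the standard automated baseline.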

5.5.1 Office scenario

The goal of this experiment is to place 25 cameras in the office environment depicted in Figure 5.3 (middle). A top view of the entire scene is also shown in Figure 5.4. The objective of the camera system is to maximally cover the free space of the scene6. Cameras can be placed at ceiling level over the entire area of the office. The goal of this experiment is to evaluate the quality of user-generated solutions relative to computer-generated solutions in a setting where the cost function and constraints are clear and intuitive.

Methodology
The participants of this user study were equipped with virtual reality glasses and controllers. Next, all the possible user interactions discussed in subsection 5.4.4 were demonstrated. Because all the interactions are relatively straightforward, the participants received no further training before the experiment.

The users received the instruction to position the cameras so as to maximally cover the free space of the office environment. All uncovered areas were interactively displayed as a red cloud, visible to the users. Users were also instructed to position all cameras at ceiling level, which was checked during the experiments. If a camera was not placed at ceiling level, they were instructed to re-position the camera. At the beginning of the experiment, all cameras were placed outside the scene to eliminate the bias that any other initial setup might introduce. The users did not get a time limit and were instructed to stop when they were pleased with the camera configuration7.

To obtain a set of viewpoints for the automated algorithm, we uniformly and randomly sample 40k points over the entire office at ceiling level. For each position, we generate a random orientation that does not point upward. This discretization was the result of an active search for the discretization achieving the highest coverage. All evaluations of coverage are relative to a dense sampling of the environment of 4.8 million samples (resolution of 0.1 meters).

Participants
For this study, we recruited five volunteers who had no prior experience in designing camera networks. All these participants were part of our department.

6. Quality function g is one if the environment point is covered by more than one camera, and zero otherwise.

7. No user spent more than 30 minutes designing the camera network (to give an indication of the timing).


[Figure: user-generated coverages of 74–81%; automated coverage by number of viewpoint samples: 5K: 74%, 10K: 74%, 20K: 77%, 40K: 77%.]

Figure 5.4: Top view of the office scene with uncovered areas shown in red. The left image displays a user-generated solution, while the right shows a computer-generated solution. All percentages shown are coverage percentages, so higher is better. The camera network consists of 25 cameras with aspect ratio 2:1 and a perspective angle of 90 degrees.

Apparatus
The experiments were performed using an HTC Vive virtual reality system. This system was also equipped with a wireless module to eliminate distractions that cables might cause. All experiments were performed on a standard desktop computer with an Nvidia GTX 1070 GPU and an Intel i7-8700 CPU.

Results
The computer-generated solution on the problem with the initial discretization of 10k samples achieved coverage of 74%. Surprisingly, all users in the experiment performed better than this initial result of the automated algorithm. This is surprising in part because of the challenging nature of the design problem and the inexperience of the users. We also expected the scene to be too large (1600 m² with 25 cameras) for users to keep an overview of everything, which is necessary to find good solutions. After increasing the viewpoint sample size, we were able to increase the performance of the automated algorithm to 77%, but users with our interface still scored comparably or better.

These results indicate that users with our virtual reality interface can generate camera networks that are at least highly competitive with automated algorithms and sometimes even better.

5.5.2 Harbour scenario

In this study, we consider a real-world camera network design task where the camera network is used to guarantee safety. When containers are unloaded from ships, they are positioned on the harbour quay. After this unloading, workers have to confirm some information on the containers and thus have to move among these


containers. To track these workers, a camera system is attached to the crane to ensure that it can be safely operated. The scene used in our examples is depicted in Figure 5.3 (left).

The goal of this experiment is to evaluate the quality of user-generated solutions relative to computer-generated solutions in a more challenging yet more conventional setting. In this case, the notion of quality is known by an expert and has to be encoded in a quality function. The quality function is also more complicated, making the design task more challenging for the user. This is also why we do not consider this problem solvable by non-experts. Instead, we will focus on the performance of experts.

Methodology
The experimental conditions in this study are the same as in the office scenario. How the notion of camera network quality is derived and how the automated algorithm is configured are, however, different.

In this study, the details about the notion of quality were provided by an expert. The orange lines in Figure 5.3 indicate the possible positions of cameras in this problem, which are limited to two beams on the crane. To construct a set of possible viewpoints, we linearly subdivide each beam into 20 positions and define 15 possible orientations, creating a set of 2 × 20 × 15 (600) configurations.

The goal is then to select ten positions that maximize coverage over the area of interest, defined as the purple box in Figure 5.3, and a denser sampling between the containers. This denser sampling encodes the notion that areas in between containers are more critical because they are more dangerous. The box is discretized into 30k points, and between the containers we uniformly sample another 16k points to force the optimizer to focus on the areas between the containers.

The design of a quality function is challenging because it has to encode qualitative user preferences. To reflect this point, we consider multiple quality functions. The expert preferred environment points to be covered by as many cameras as possible, but the added importance of a point being covered by more cameras is strictly decreasing. An example illustrating this notion of quality is that, with an environment consisting of two points, both points being covered by two cameras is preferred over one point being covered by one camera and the other by three cameras. We can generate quality functions encoding this notion of quality by generating a sequence of 6 strictly decreasing values {s_1, ..., s_6} with Σ_i s_i = 1. Each value s_i is multiplied by the number of environment points that is viewed by more than i−1 cameras, and the results are summed to create a quality function. Every function generated this way results in a monotone submodular f. In this experiment, we generate random quality functions g_i and calculate for each function a camera network X*_i. We also have a solution designed by an expert using our interface, X*_e.
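This construction can be sketched directly; `network_quality` is a hypothetical helper applying the decreasing weights to per-point camera counts, and the toy weights below reproduce the two-point example from the text:

```python
def network_quality(view_counts, s):
    """Quality function built from strictly decreasing weights
    s_1 > ... > s_6 summing to 1: each s_i is multiplied by the number of
    environment points seen by more than i-1 cameras (i.e. at least i)."""
    assert abs(sum(s) - 1.0) < 1e-9
    assert all(a > b for a, b in zip(s, s[1:]))  # strictly decreasing
    return sum(s_i * sum(1 for c in view_counts if c >= i)
               for i, s_i in enumerate(s, start=1))

# Two points each seen by two cameras beats one point seen once and one
# seen three times, matching the expert's stated preference.
s = [0.40, 0.25, 0.15, 0.10, 0.06, 0.04]  # illustrative weights
even = network_quality([2, 2], s)
uneven = network_quality([1, 3], s)
```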

The quality of the different solutions X*_i versus X*_e will be evaluated by studying the ratios f_i(X*_j)/f_i(X*_i) and f_i(X*_e)/f_i(X*_i). The former describes how well different quality functions agree on the quality of networks designed using a different quality


[Figure: cross-evaluation table; rows: different solutions, columns: different quality functions; ratios mostly between 0.90 and 1.00; bottom row: the user-generated (VR) solution; right column: coverage; far right: average over 60 functions.]

Figure 5.5: Table that cross-evaluates solutions generated for different valid quality functions on these quality functions. The bottom row evaluates the user-generated solution on each quality function, and the right column shows the total coverage for each solution. This table does not represent every quality function we tested, but only a random subset. The total set contains 60 quality functions, and their average is shown on the right.

function. This will indicate the loss in quality of solutions due to a misspecified quality function. The latter describes the quality of the user-generated solution with respect to different possible quality functions.

Participants
For this user study, we recruited a single expert in designing camera networks. This expert, who also provided the notion of quality that is used to construct the quality functions, is also responsible for generating the camera network.

Apparatus
This study was performed with the same equipment as the previous user study.

Results
A random subset of the obtained ratios is displayed in Figure 5.5. From these results, we can conclude that the solutions of different quality functions tend to agree on other quality functions. There is a much larger difference between the quality of the user-generated camera network with respect to different quality functions. This means that for some quality functions the user-generated solution scores high, but for others it scores lower. On average, automatically generated solutions are 11% better than the user-generated solution with respect to specific quality functions. If we compare the coverage (visible/total environment points), which is also important, the user-generated solutions score equal to or even better than the automatically generated solutions. To illustrate the difference between an automatically generated network and a user-generated network, we show the obtained quality volumes for both solutions in Figure 5.6.


[Colour scale: 0–4 cameras viewing each environment point]

Figure 5.6: Comparison of a user-generated network (left) with a randomly chosen automatically generated network (right). The colour values shown as a volume indicate for each point in the area of interest its redundancy (how many cameras see the point).

Figure 5.6 provides an insight into why automatically generated solutions tend to score higher on specific quality functions. The automatically generated solution (right) focuses many cameras on areas of a larger volume. The user-generated solution focuses many cameras on small areas between containers because these areas form a potential safety hazard, which results in a lower overall function value. This indicates that scoring higher on a quality function does not necessarily result in better networks.

The expert noted that the quality of the solution he generated using our interface is better than that of the automatically generated solutions, despite the difference in quality as measured by the quality functions. This was due to the following reasons:

1. The user-generated solution featured more regular patterns of overlap

2. Stereo constraints between cameras are better in the user-generated solution, which makes tracking algorithms more robust

3. In the solution provided by the expert, every camera is mounted in the same orientation. This makes installation easier.

All these remarks of the expert can be encoded in the quality function, but only as a regularizer (see subsection 5.2.2). Based on these results, we conclude that the user-generated solution is at least highly competitive with the automatically generated solution. This conclusion is based on the observation that if we can find the perfect regularizer, we can get closer to the expert's requirements. This will inevitably result in a deteriorated performance of automated algorithms. In that case, the quality gap between the user-generated and computer-generated solutions will typically only shrink.


5.5.3 Performance

It is important to note that we do not consider this a formal performance test; there are too many variables that affect the computational performance. The main goal is to convince the reader of the scalability of the interface. The experimental scenes introduced in Figure 5.3 have different areas and triangle counts and thus different performance. Other parameters that have an impact on the computation time are the number of cameras and the number of voxels. In this experiment, we focus on two metrics that indicate performance. The first, which we call latency, is defined as the time between a user moving a camera and the user seeing its effect on the volume. It is important to note that the latency does not affect the framerate of the virtual reality device because they operate in different computational threads. The second metric is the average framerate of the virtual reality device.

In this section, we evaluate both metrics on our implementation of the presented interface. Even though there is room for optimization of our code concerning performance, the results give a lower bound on the achievable performance. All experiments were performed on a computer with an Nvidia GTX 1070 GPU, an Intel i7-8700 CPU and sufficient RAM8. Both latency and average framerate were recorded during a walk through the scene. The latency is averaged over the entire run but is independent of the position of the user.

In this experiment, we add an additional scene, the car park depicted in the right image of Figure 5.3. This scene is reconstructed from images and features the highest triangle count with 530k triangles. The images are obtained from the ETH3D 3D reconstruction benchmark [116]9 and reconstructed using the commercially available Autodesk ReCap Photo software.

The main results are summarized in Figure 5.7. These results are obtained by positioning ten cameras with a resolution of 400 by 640 pixels in each scene. Average framerates for all scenes are over 40 FPS. We further noticed that the average framerate was independent of the number of voxels. The effect of the number of voxels on the latency is linear, as expected, and remains linear up to a high number of voxels (2 million). We also increased the number of cameras up to 30 and noticed the same linear trend.

The framerates of the virtual reality device are enough to provide a pleasant virtual reality experience, even for scenes of up to 530k triangles and a volume of 2M voxels, which qualifies as a large-scale problem. The latency measured in our experiments indicates that there is a clear linear trade-off between interactivity and accuracy (see Figure 5.7). The user's preferences can guide the choice between both.

8. The application uses roughly 400 MB of RAM.
9. The dataset consists of 44 images with a resolution of 6048 × 4032.


[Figure: latency (0–2 s) as a function of the number of voxels (0–2400 ×10³) for each scene, with average framerates of 53, 40 and 40 FPS; ten cameras at 400 × 640 pixels.]

Figure 5.7: Summary of performance testing results. For the different scenes, we tested the effect of the number of voxels on the latency. We define latency as the maximum time between changing a camera position and a visible volume change. Latency times are for a scene with ten cameras with a resolution of 400 by 640 pixels. The symbols used are introduced in Figure 5.3. The average framerate of the virtual reality thread is also included.

5.6 Discussion

The traditional recipe in the camera network design literature and related fields is to automate the network design process and to keep users away from this process. The motivation for this automation is that the task is too difficult for users to perform well. We, however, believe that this difficulty is not necessarily related to the problem structure. Automated algorithms also have structural difficulties with solving these problems, as we have shown, and users can rely on strong geometrical reasoning, which is not possible for computers. Using virtual reality, we were able to visualize the quality of the camera coverage, which is usually invisible, and thereby augment users' capabilities. The advantage of this approach is that users are much more involved with the task at hand, and the usual abstraction of automated algorithms is unnecessary. An additional advantage is that the user can translate what is important in a specific problem much more easily, which results in far fewer design iterations because he understands the problem.

It is, however, important to note that the results from the performed user studies are not statistically significant. But the studies do indicate that there is potential for using virtual-reality-augmented users in the camera network design process. More research and refined interaction techniques are, however, needed.

We believe that the importance of our results transcends the camera network design


literature. There are many problems in optimization that can only be solved by experts. Often, experts rely on an intuitive understanding of the problem to solve these issues. If this understanding is visual, a representation of this understanding in virtual reality can enable even non-experts to solve these problems. We have presented an example of such a visualization for a challenging problem and shown that users are indeed capable of being competitive with automated approaches.

5.7 Conclusion

We started by linking the camera network design problem to the monotone submodular maximization problem with a cardinality constraint. Using this link, we can conclude that the quality of solutions obtained by any automated algorithm is strictly bounded. Furthermore, for automated algorithms to be able to solve the network design problem, user interaction is required. This required user interaction is, however, very abstract, which is a big obstacle to the adoption of these algorithms.

In this chapter, we proposed a virtual-reality-based user interface in which the user can solve the camera network design problem manually by applying geometrical reasoning. The workflow associated with this approach is much more transparent and allows users without specialized knowledge to design camera networks. Moreover, users with specialized knowledge can solve more specialized problems without knowledge about automated camera network design algorithms.

From our experiments, we concluded that user-specified camera networks are highly competitive with automatically generated solutions. We demonstrated this in two structurally different real-world camera network design problems. We also demonstrated the scalability of our approach to problems in geometries resulting from high-fidelity 3D reconstructions from images.

The fact that users without experience can generate high-quality camera networks is promising for robotic inspection planning. In the next chapter, we will take the step from camera network design to the more complicated robotic inspection planning problem.


Chapter 6

Human factors in inspection path planning

6.1 Introduction

In this chapter, we will extend the method from Chapter 5 to also include robotic path constraints. The goal of this user-centric inspection planning method is to replace the automated inspection planning algorithm from Chapter 4. The advantage of this user-centric method over automated algorithms is that it is more accessible to non-experts. In this chapter, we also compare the quality of user-generated inspection paths with automatically generated paths without post-processing. The post-processing technique for inspection paths developed in Chapter 3 can naturally be applied to the user-generated inspection paths to improve their quality further.

Traditionally, inspection paths are generated either by experts or by automated algorithms. However, to successfully use automated algorithms, some pre-processing is required. This pre-processing implies that expertise in both robotics and optical inspection techniques is needed to use these algorithms. The expertise in optical inspection techniques is required for the construction of a proper inspection quality function. Any aspect that is not explicitly encoded in this function will be ignored entirely by the automated algorithm. Robotics experience is necessary to construct a digital twin that can be accessed by the automated algorithm. This digital twin should contain the robot system, the measurement device, the object that is being inspected and the environment in which the inspection is performed. In the construction of this digital twin, a delicate trade-off between system fidelity and computational speed is essential. On the one hand, optical robotic inspection systems are complex mechatronic systems that are nontrivial to construct, for which fidelity is important. On the other hand, finding automated inspection paths is a very challenging problem that requires many calls to the digital twin, as has been shown in Chapter 4. Furthermore, custom inverse kinematics procedures are necessary for different robots, which limits the flexibility

95


of specific solutions. Overall, the potential of inspection planning is not realized in the inspection community because of this strict requirement for specialized knowledge [117].

The primary goal of this chapter is to construct a means for inexperienced users to generate high-quality inspection paths. Our proposed system will extend the virtual reality interface developed in Chapter 5. This interface will provide the user with intuitive visualizations and interactions that aim to replace the requirement for specialized knowledge. However, it is natural that more experience in the subject will lead to better inspection paths, so a second goal is to investigate the importance of specialized knowledge in generating high-quality inspection paths. To do this, we perform a user study in which participants with different levels of experience in both robotics and inspections generate inspection paths. Our final goal is to compare the quality of user-generated inspection paths with automatically generated paths. This comparison will indicate in which cases user input can be valuable in robotic inspection planning. This is important, especially since user input is traditionally not considered for generating inspection paths.

6.2 VR interface for robotic inspection planning

The basis of the virtual reality interface for robotic inspection planning is the same as the virtual reality interface developed in Chapter 5. We change this interface in two important ways. Firstly, we adapt the interactive quality visualization to visualize inspection quality functions on meshes. We will not discuss this adaptation explicitly since it is conceptually trivial. Figure 6.1 shows how users see the interactive quality visualization on objects. Secondly, we add the possibility for users to control robots. Our implementation is publicly available in the V-REP VR toolbox1.

6.2.1 Robot programming interaction

Controlling the robot should be as intuitive as possible for users, as no user experience can be presumed. To achieve this, the user can directly control the Tool Center Point (TCP) of the robot with a virtual reality controller. Since the three-dimensional position and orientation of the virtual reality controller are tracked, it is possible to control the TCP of the robot completely. This approach also eliminates the need for advanced orientation control methods [118].

We avoid, however, that the robot follows the controller the entire time. The controller can move freely until the user presses the trigger button, after which the transformation between the TCP and the controller remains fixed, even if the user moves the controller. Releasing the trigger button releases the TCP. As a result, the robot follows the user’s controller only if the trigger button is pressed. The

1https://github.com/BorisBogaerts/V-REP-VR-Toolbox



Figure 6.1: This image shows how users see the interactive quality visualization. The object that is being inspected, an umbilic torus (for the enthusiasts), is dynamically coloured according to the inspection quality. The scene visible in the image is also used to familiarize participants of the experiments with the concept of inspection planning. An umbilic torus was chosen as object for its interesting surface curvature, which affects inspection quality. The user can move the camera, which changes the colours that are being displayed. (Visible in the image: the inspected object, a colour scale for the quality, the controller, and the virtual camera with its feed.)

position of the buttons on the controller is shown in Figure 6.2. This feature is essential since the user can choose the optimal position relative to the robot before controlling it, especially since occlusion of the inspected object by the robot, from the perspective of the user, must be avoided.
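This press-to-follow behaviour amounts to latching the relative transform between controller and TCP at the moment the trigger is pressed, and re-applying it while the trigger is held. The sketch below illustrates the idea with 4×4 homogeneous pose matrices; the class and method names are hypothetical and do not correspond to the V-REP implementation:

```python
import numpy as np

def pose_matrix(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

class TcpClutch:
    """On trigger press, latch the controller-to-TCP offset; while the
    trigger is held, move the TCP so this relative transform stays fixed."""
    def __init__(self):
        self.offset = None  # T_controller^-1 @ T_tcp, latched at press

    def press(self, T_controller, T_tcp):
        self.offset = np.linalg.inv(T_controller) @ T_tcp

    def follow(self, T_controller):
        """Return the new TCP pose for the current controller pose."""
        if self.offset is None:
            raise RuntimeError("trigger not pressed")
        return T_controller @ self.offset

    def release(self):
        self.offset = None
```

With this scheme, any motion of the controller while the trigger is held is reproduced one-to-one by the TCP, while releasing and re-pressing the trigger lets the user reposition the controller freely.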

The robot follows the TCP by continuously performing inverse kinematics calculations. To ensure that the motion of the robot is continuous, we use a damped least-squares solver. Joint angles found by the inverse kinematics solver are only applied to the robot if the inverse kinematics calculation is successful. So if users move the TCP target out of reach of the robot, the robot stops moving. The user can use this visual cue to reconsider moving the TCP to positions that are reachable.
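To illustrate how such a solver behaves, the sketch below applies the damped least-squares update Δq = Jᵀ(JJᵀ + λ²I)⁻¹e to a toy two-link planar arm. The arm, its link lengths and the damping constant are our own illustrative choices, not the robot model or solver settings used in the experiments. Note how an unreachable target simply leaves the last valid joint angles in place, mirroring the "robot stops moving" behaviour described above:

```python
import numpy as np

L1, L2 = 0.5, 0.4  # link lengths (m) of a toy 2-link planar arm (assumed)

def fk(q):
    """Forward kinematics: joint angles -> end-effector position (x, y)."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    """Analytic Jacobian of fk with respect to the joint angles."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def dls_step(q, target, damping=0.1):
    """One damped least-squares update: dq = J^T (J J^T + lambda^2 I)^-1 e."""
    J = jacobian(q)
    e = target - fk(q)
    JJt = J @ J.T + damping**2 * np.eye(2)
    return q + J.T @ np.linalg.solve(JJt, e)

def solve_ik(q0, target, tol=1e-4, max_iter=200):
    """Iterate until the position error is below tol; report failure
    (the robot keeps its last joint angles) if the target is unreachable."""
    q = np.asarray(q0, dtype=float)
    for _ in range(max_iter):
        q = dls_step(q, target)
        if np.linalg.norm(target - fk(q)) < tol:
            return q, True
    return q, False
```

The damping term λ²I keeps the update well-conditioned near singularities, which is what makes the resulting robot motion continuous even when the plain least-squares solution would jump.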

The user can record trajectories by pressing the trigger button of the second controller. This recorded path is also visualized to the user by a simple line in 3D. Internally, the joint angles of the path are stored. When the main controller releases the TCP, the user can scroll through the part of the path that has already been recorded by scrolling over the trackpad. The previously recorded joint angles are then re-applied to the robot in the reverse order in which they were recorded.
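The recording and scrubbing logic can be sketched as follows. The mapping of the trackpad position to an index in the recorded path is our own interpretation of the interaction described above; the class is hypothetical, not the toolbox implementation:

```python
class TrajectoryRecorder:
    """Record joint configurations while the trigger is held; afterwards,
    scrub backwards through the recording from the trackpad."""
    def __init__(self):
        self.joint_path = []  # recorded joint-angle vectors, oldest first

    def record(self, joints):
        """Append the current joint configuration while recording."""
        self.joint_path.append(list(joints))

    def scrub(self, fraction):
        """Map a trackpad position in [0, 1] to a recorded configuration:
        0.0 is the most recent sample, 1.0 the oldest, so scrolling
        re-applies the samples in reverse recording order."""
        if not self.joint_path:
            return None
        last = len(self.joint_path) - 1
        steps_back = round(fraction * last)
        return self.joint_path[last - steps_back]
```

Storing joint angles rather than TCP poses avoids re-running inverse kinematics during playback, so the robot revisits exactly the configurations it passed through while recording.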

Some experiments feature a robot with an extra axis. This can be a rotation table (see Figure 6.4 (middle) for a robot with rotation table), or a linear translation stage (see Figure 6.3 under ’the challenge’ for a robot with translation stage). These robot systems cannot be controlled by simply moving the TCP. In



Figure 6.2: The user can control the TCP of the robot by pressing a trigger button on the controller, after which the robot follows the controller. The positions of the trackpad and trigger buttons are also shown. These buttons are colour-coded during the experiments to communicate effectively with the user.

these cases, the user can control this extra axis by scrolling over the trackpad of the second controller.

Note that in any sensible robotic inspection system, the measurement device is attached to the robot end-effector, so any movement of the robot’s TCP results in a motion of the measurement device. Therefore, after moving the robot’s TCP, the interactive quality visualization re-calculates the inspection quality.

6.2.2 Usability

In this section, we reflect on some fundamental usability issues of traditional workflows with automated algorithms that can be solved by replacing the automated algorithms with users.

A fundamental issue in automated inspection planning is related to the dimensionality of the state space of the robot system. In reality, high dimensional state spaces result in more flexible robots, and thus more path options, some of which can be better than what is achievable with a less flexible robot. Automated algorithms do not deal well with high dimensional state spaces. This is because the inspection planning problem is fundamentally characterized by a long planning horizon [119]. Searching over longer time horizons in larger state spaces drastically reduces the search area that can be covered by automated algorithms (due to increased computational complexity [94]). Thus, with automated algorithms, the performance of robots with larger state spaces typically deteriorates. To, at least partially, circumvent these issues, complicated data structures are required, as we have shown in Chapter 4. These data structures are delicate to set up and typically require user expertise. Users with our interface, on the other hand, will run into fewer issues when searching for inspection paths with highly flexible robots. The effect will typically be that robots are easier to control by users.

While digital twins aim to be perfect representations of real-world systems, they can still fail in certain respects. For inspection planning, this can especially be the case since robotic inspection systems are complex mechatronic systems. Moreover, since the inspection planning problem is challenging, digital twin fidelity is often sacrificed for an increase in simulation speed [120]. An example that can cause such problems are cables from the measurement setup that get tangled. Cable movement and tangling can be simulated; however, their simulation will in many cases be too expensive and is therefore omitted. If such a thing happens, it requires an expert to fix the digital twin before automated algorithms can re-solve the problem. In our approach, the user, who does not need to be an expert, can take this aspect into account when programming a trajectory.

Another fundamental issue with automated algorithms is the fact that user preferences are often challenging to encode in inspection quality functions. In Chapter 5 we have shown that in some cases, inspection quality functions only partially reflect user preferences. While the visualization that the user gets to see does not show the unmodelled aspects, users who are familiar with these preferences still take them into account. Automated algorithms, on the other hand, rely on the success of experts in encoding these challenging aspects.

6.3 Experiments design

The experiments consist of a user study, in which participants with different experience levels in robotics and optical inspection techniques generate inspection paths for different inspection cases. It is important to note that this user study is formative rather than evaluative. This means that our solution is a first step towards future solutions, meant to investigate the potential of human-centered solutions.

Our user study consists of three main stages. In the first stage, participants are introduced to all visualizations and interactions that can be performed with the virtual reality interface. In the second stage, the participants generate inspection paths for six different inspection planning scenarios. In the last, optional stage, the users are asked to generate an inspection path for a more complicated and larger scale inspection task. This task is significantly more challenging and requires



more effort of the user, and is therefore optional. The user study is designed such that the first two stages can be completed in approximately one hour, and the last, optional stage adds up to half an hour to this. However, users are never pressured with time limits, and inspection paths are only stored if the user is pleased with them. Furthermore, users are reminded that they can stop the study at any point if they do not wish to continue.

A 360 degree MR video that accurately mimics the experience of a participant in the experiment is available online2.

6.3.1 User selection

An important contribution of this work is to investigate the importance of the experience level of users in either robotics, inspection techniques or both. The virtual reality interface aims to compensate for a lack of experience by providing a clear visualization of inspection quality that is sufficient to complete the inspection planning task. To investigate this, a population of participants with a range of experience levels in robotics, inspection techniques or both is required. In order to find such specific profiles, we targeted specific people. This specific targeting of users was deemed necessary to obtain a decent variety of participants with varying experience levels. To measure the experience level of the participants, we asked each participant to rate their experience in both domains by a number from one to five according to the following descriptions:

• 1 - I do not have any experience in the field

• 2 - I have only theoretical knowledge about the field, but no practical experience

• 3 - I have programmed a robot once or twice / I have performed an optical measurement once or twice.

• 4 - I program a robot arm regularly / I perform optical measurements regularly

• 5 - I have experience in designing robot systems / I have experience in planning optical inspections

We performed experiments with 16 participants. The participants included both males and females in an age range of 22-58. The distribution over all the combinations of experience levels is shown in Figure 6.6.

6.3.2 User preparation

In this first step of the user study, users are familiarized with the controls, the inspection planning problem, the inspection quality visualization and the robot controls.

2https://youtu.be/7O629j58bTI



Figure 6.3: 360 degree image showing the virtual reality scene that is used to prepare participants for the experiments. This scene consists of three main parts. ’The quality’ familiarizes users with the concept of inspection quality and how it is visualized. In ’The objects’, users get to see the objects that need to be inspected in the experiments. In ’The robots’, users learn how robots can be controlled.

To prepare the participant, a special scene was created, which consists of three main elements. In the first element, named ’the quality’ (see Figure 6.3), users are familiarized with the concept of inspection quality. The users learn how they can move a camera, and see the inspection quality interactively projected onto a test object. In this step, users can freely move the camera and see the effect on the inspection quality. Participants also learn how to record measurement trajectories.

Participants are transferred to the second step, which is called ’the objects’ (see Figure 6.3), once they feel familiar with the concept of inspection quality. In this step, users can study the objects for which they need to generate inspection paths in the following stage of the experiments. The user can already try to find inspection paths for these objects, without the constraints imposed by a robot system. To do this, they use the same controls from the first step, and by doing so, gain more intuition about the inspection quality.

Finally, the users are transferred to the last step, named ’the robots’, to learn how robot paths can be programmed with the VR interface. The participants learn this in two different scenarios. In the first scenario, users learn how to move the end effector of the robot, a Kuka kr16 which is also used in further experiments (see Figure 6.4), to a predefined set of configurations. This predefined set of poses is chosen such that the users experience collisions and the kinematic limitations of the robot. Each time such a collision or kinematic limitation happens, users are informed about this, and it is explained that this should be avoided during the experiments. In the second scenario, users learn how robot paths can be programmed. This time, the robot is a UR10 with rotation table (see Figure 6.4), as is used in further experiments. Two predefined trajectories are shown one by one, which need to be programmed by the user. The user thus needs to record a trajectory while following the shown trajectory with the robot end-effector. One trajectory is a helix, which can be programmed by rotating the rotation table and moving the Tool Center Point (TCP) of the robot linearly. This path was chosen to show users how a rotation table can be used effectively in combination with a



Figure 6.4: The three different objects and the two different robot systems (a Kuka kr16 and a UR10) that are featured in the experiments. All possible combinations between robots and measurement objects make up the six cases that the users must solve. (Icons created by Pham Duy Phuong Hung from the Noun Project.)

robot arm.

This user preparation step also aims to guard the experiment against users that have difficulties with navigating in, and interacting with, virtual environments in Virtual Reality. As experience with VR is not a factor of interest in this work, it could taint the results. To avoid this, users that cannot complete the user preparation cannot proceed to the following stages. Of the 16 participants that entered the study, only one participant was rejected as a result of this preparation step. Finally, the user is presented with ’the challenge’, which is the optional third stage of the experiment.

6.3.3 Inspection planning scenarios

The next stage in the experiments is the most important part. Participants are asked to program inspection paths that are as efficient as possible in six different cases. These cases are constructed by combining two different robot systems with three different objects. These are shown in Figure 6.4. During these experiments, each user is free to program as many trajectories as desired. The user can stop each case when pleased with the final measurement trajectory. The user can also ask to store intermediate trajectories, which can be considered as the final trajectory.

The three different objects that are featured in the experimental evaluation are a bicycle frame, a transmission housing and a panel. The panel is the simplest object, for which it is relatively straightforward to find a good inspection path. However, the slopes in this object make it slightly more interesting. The transmission housing is a nearly convex object, with a lot of smaller-scale occlusions. The third object is a bicycle frame, for which the most complicated inspection path is required. Furthermore, two robots are considered in the experiments. The first is a Kuka kr16 robot with a relatively large measurement device, which is the most straightforward



Figure 6.5: The inspection quality function used throughout the experiments. The white arrow represents a surface normal. The colour represents the inspection quality if a camera is located in a position and points towards the base of the white arrow. A higher inspection quality value indicates a better measurement.

robot system to control. The second robot is a more flexible UR10, together with a rotation stage. The latter is a more complicated robot system because both the robot’s TCP and the rotation stage must be controlled separately by the user.

In the final stage of the user study, users need to program a more challenging inspection path. This scenario is displayed in Figure 6.9. The robot in this scenario is a Kuka kr16 that is placed on rails. The user can control this extra degree of freedom with the left controller trackpad. This is the same control that is used to control the rotation stage in the previous experiment. The object that is being inspected is the housing of a wind turbine. This is a large and challenging object. Another challenging aspect of this object is its deep notches, into which the robot must navigate to obtain good measurements. This experiment is optional because it requires much concentration from the participant, which could be lacking after the previous stages of the study. Surprisingly, all participants agreed to perform this extra step.

6.3.4 Inspection problem definition

In this section, we give some details on how the functions that define the submodular orienteering problem are chosen in the experiments. We assume throughout the experiments that the inspection quality function Q is of the following form:

Q(d, \theta) = \cos(\theta) \cdot \exp\!\left(-\frac{(d - d_{\mathrm{opt}})^2}{\sigma^2}\right). \tag{6.1}

Here, θ is the angle between the surface normal and the view direction of the camera, and d is the distance between a point and the viewpoint of the camera. d_opt is the ideal measurement distance, and σ is used to determine how much



the measurements can deviate from this ideal distance.
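Under these definitions, Equation (6.1) translates directly into code. Clamping cos θ at zero for back-facing views (θ beyond 90 degrees) is an assumption of ours, since the equation itself does not specify that case:

```python
import numpy as np

def inspection_quality(d, theta, d_opt=200.0, sigma=100.0):
    """Eq. (6.1): viewing-angle falloff times a Gaussian distance window.

    d, d_opt and sigma are in mm; theta is the angle (radians) between the
    surface normal and the view direction. The defaults match the values
    used in the small scale experiments (d_opt = 200 mm, sigma = 100 mm)."""
    angle_term = max(np.cos(theta), 0.0)  # clamp back-facing views to zero
    dist_term = np.exp(-((d - d_opt) ** 2) / sigma**2)
    return angle_term * dist_term
```

The quality is maximal (1.0) for a frontal view at the ideal distance and decays both with oblique viewing angles and with deviations from d_opt, exactly the behaviour visualized in Figure 6.5.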

To define the distance function C, we assume that α is zero. This is because we focus on continuous measurements. We further assume that the function c is of the following form:

c(e_{i,j}) = (1 - \beta)\, c_t(p_i, p_j) + \beta\, c_o(o_i, o_j). \tag{6.2}

Here, c_t is the Euclidean distance between positions, and c_o is the angle in radians of the axis-angle rotation between orientations. To complete the definition of this function, we assume that β is 0.01.
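Equation (6.2) can be sketched as follows. Representing orientations as unit quaternions, and obtaining the axis-angle rotation angle from their dot product, is an implementation choice of ours; the text only requires c_o to be that angle in radians:

```python
import numpy as np

def edge_cost(p_i, p_j, o_i, o_j, beta=0.01):
    """Eq. (6.2): blend of translational and rotational motion cost.

    Positions p_i, p_j are 3-vectors; orientations o_i, o_j are unit
    quaternions (w, x, y, z) -- an assumed representation."""
    c_t = np.linalg.norm(np.asarray(p_j, dtype=float) - np.asarray(p_i, dtype=float))
    # Angle of the axis-angle rotation between the two orientations:
    # angle = 2 * acos(|<q_i, q_j>|), with |.| handling the double cover.
    dot = abs(float(np.dot(o_i, o_j)))
    c_o = 2.0 * np.arccos(np.clip(dot, 0.0, 1.0))
    return (1.0 - beta) * c_t + beta * c_o
```

With β = 0.01, the cost is dominated by translation; a full quarter-turn in place (c_o = π/2) costs about as much as a 16 mm translation, which keeps orientation changes cheap relative to travel.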

6.3.5 Quality comparison

In Chapter 4, we formally discussed the inspection planning problem and categorized it as a submodular orienteering problem. We will use this connection to obtain a method to compare the performance between users and algorithms fairly. A user uses heuristics to define an inspection path, without considering the formal inspection planning problem. However, we still want to compare user-generated paths with automatically generated paths. If a user generates a set of viewpoints (i.e. X_u), then its inspection quality can be evaluated (f(X_u)). However, the budget C(X_u) of different paths will be different. A quality comparison that does not account for this difference is inherently unfair. Even if two paths generated by two different users are compared, it would be unfair to simply compare the inspection quality without considering the travelling budget.

For this reason, we use the idea behind the OPT metric introduced in subsection 4.3.5. This metric uses the fact that the solution of the greedy cost-benefit (GCB) algorithm is near-optimal. Thus, from this solution, a relatively tight upper bound for the optimal solution can be derived. The OPT metric is then the ratio of a solution and the upper bound for the optimal solution. The quality comparison we propose calculates the budget of a solution (i.e. C(X_u)), after which the GCB algorithm is performed with this budget. From this solution, the OPT metric can be calculated. This normalization towards an upper bound of the optimum accounts for differences in travelling budget. However, since we also want to compare with the performance of automated algorithms, we can take the ratio of the OPT metric of a solution of the automated algorithm and the OPT metric of the user-generated solution. Since both these paths are solutions to the same abstract problem, their OPT ratio is the ratio of their inspection qualities (i.e. q_r = f(X_user)/f(X_GCB)). To fairly represent the power of automated algorithms, we use the GCB+ algorithm from subsection 4.3.4, which applies a post-processing step to improve the solution even further.
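The greedy cost-benefit principle behind GCB can be sketched on a strongly simplified problem, where inspection quality is the number of covered surface patches and each viewpoint has an independent cost. The real algorithm of Chapter 4 works on true path costs and adds the GCB+ refinement; this sketch only illustrates the marginal-gain-per-cost greedy rule:

```python
def greedy_cost_benefit(candidates, coverage, cost, budget):
    """Simplified greedy cost-benefit selection: repeatedly add the
    viewpoint with the best marginal-quality / marginal-cost ratio until
    the budget is exhausted.

    coverage: maps a viewpoint to the set of surface patches it sees.
    cost:     maps a viewpoint to its (independent) travel cost -- a
              simplification of the true path cost used by GCB."""
    chosen, covered, spent = [], set(), 0.0
    remaining = set(candidates)
    while remaining:
        best, best_ratio = None, 0.0
        for v in remaining:
            gain = len(coverage[v] - covered)  # marginal quality gain
            if gain == 0 or spent + cost[v] > budget:
                continue  # useless viewpoint, or would exceed the budget
            ratio = gain / cost[v]
            if ratio > best_ratio:
                best, best_ratio = v, ratio
        if best is None:
            break  # no feasible viewpoint adds quality within the budget
        chosen.append(best)
        covered |= coverage[best]
        spent += cost[best]
        remaining.discard(best)
    return chosen, covered, spent
```

Because the quality function is submodular (each patch counts only once), this greedy rule is exactly the kind of selection for which near-optimality guarantees of the GCB type hold, which in turn yields the upper bound used by the OPT metric.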

However, while we can compare the quality of inspection paths theoretically, one practical issue remains. The user is free to provide any path in the robot’s workspace, while the GCB algorithm is bound to a discretization of the problem. As a result, the user-generated solution X_u is not part of the workspace graph used by the automated algorithm. To account for this, we include all elements of the



Figure 6.6: This figure shows the ratio of the inspection quality of user-generated paths and automatically generated paths with the same budget. This ratio is given for each user and inspection scenario in the second phase of the experiment (one matrix per robot-object combination, Kuka and UR10; rows indicate robotics expertise and columns indicate optical measurements experience, each rated from 1 to 5). Sometimes, a square in the experience-level matrix is subdivided into two. This occurs when two users have the same experience level.

user’s solution in the workspace graph considered by the automated algorithm (i.e. V⁺ = V ∪ X_u). We also augment the edges E with all edges between elements from V and X_u if the distance between view poses is smaller than a predefined threshold.
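This augmentation step can be sketched as follows. For brevity, poses are reduced to 3D positions and the threshold value is arbitrary, whereas the actual comparison would use a full pose distance such as Equation (6.2):

```python
import numpy as np

def augment_graph(V, E, X_u, threshold):
    """Add the user's viewpoints X_u as nodes to the workspace graph
    (V, E), and connect each new node to every existing node whose
    distance is below `threshold` (V+ = V union X_u).

    Nodes are referenced by index; edges are undirected (i, j) pairs."""
    V_plus = list(V) + list(X_u)
    E_plus = set(E)
    for i, xu in enumerate(X_u, start=len(V)):
        for j, v in enumerate(V):
            if np.linalg.norm(np.asarray(xu) - np.asarray(v)) < threshold:
                E_plus.add((j, i))  # store each undirected edge once
    return V_plus, E_plus
```

After this augmentation, the user's path lives in the same graph as the GCB solution, so both can be evaluated with the same quality and budget functions.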

6.4 Experimental results

6.4.1 Small scale inspection planning problems

In all the inspection cases, we define d_opt as 200 mm and σ as 100 mm. In each case, the automated algorithm has a time limit of 30 minutes to compute an inspection path. The results of the six small scale experiments are provided in Figure 6.6. These results show that there is a large range in user performance. On the other hand, it is also notable that users with minimal experience can provide inspection paths that compare well with those of specialized automated algorithms. The lowest median user performance (lowest over the different inspection scenarios) is 66%. The time that users needed to find an inspection path ranged from 21 seconds to 8 minutes and 20 seconds. Within this maximum time, the user generated a few inspection paths; thus, each individual inspection path was generated in a shorter time. A visual analysis of the lower quality inspection paths shows that these paths typically contain some strategic mistakes. Two examples are shown in Figure 6.7. In Figure 6.7 (left), a user generated an inspection path with a large radius. As a result, the inspection budget increases rapidly. The automated algorithm, on the other hand, generated a path with a smaller radius. This path is at a distance that is optimal according to the inspection quality function. Thus, the choice to generate a larger radius inspection path is punished twice: the inspection quality is lower, and the budget is larger. In Figure 6.7 (right), the user generated an inspection path with a radius that was too large. However, the inspection path also contains a few detours. In these extra detours, budget is spent without a notable gain in inspection quality.

Figure 6.7: These images show two cases where users obtained a very low quality ratio. The path generated by the user is shown in red, and the path generated by the automated algorithm is shown in blue. On the left, a user (1-robotics, 4-inspection) achieved a quality ratio of 15% on the turbine-UR10 inspection scenario. On the right, a user (3-robotics, 5-inspection) achieved a quality ratio of 24% on the bike-Kuka kr16 inspection scenario.

6.4.2 Large scale inspection planning problem

In this inspection case, we define d_opt as 300 mm and σ as 300 mm. The automated algorithm has a time limit of 5 hours.



Figure 6.8: This figure shows the ratio of the inspection quality of user-generated paths and automatically generated paths with the same budget. This ratio is given for each user (rows: robotics expertise; columns: optical measurements experience). Each user that entered the experiments performed this optional step voluntarily. Notice that the low end of the colour scale is 50%, compared to the low end of 0% in Figure 6.6. The inspection paths of the triangles with red and green edges are shown in Figure 6.9.

The result of the large scale, optional inspection planning scenario is provided in Figure 6.8. It is notable that the lowest inspection quality is 49%, which is significantly better than in the previous experiments. Moreover, users with little experience in either robotics or inspections are again able to generate decent inspection paths. The median performance of all users is 69%. Notable is that one user managed to generate a path with a 24% higher inspection quality than the automated algorithm with the same budget. It is furthermore notable that there are fewer outliers with a bad quality than with the smaller scale inspection problems. This is mainly because of the longer length of the total inspection path: the effect of an inefficient detour is smaller for longer paths. The higher inspection quality of user-generated paths indicates that the negative effect of detours decreases with longer inspection paths. The time that users needed to generate an inspection path ranged from three minutes and 29 seconds to 9 minutes and 37 seconds. In Figure 6.9, three inspection paths are shown: the best (green) and worst (red) user-generated paths, as well as the computer-generated path (blue). The red inspection path has the strategic disadvantage that it is further from the object, which leads to a worse inspection path. It is furthermore more compact, which leads to lower total coverage. Compared to the green and blue paths, it also contains significantly more detours.



Figure 6.9: This figure shows an inspection path generated by the automated algorithm (blue), the best user-generated inspection path (green) and the worst user-generated inspection path (red).

6.4.3 Discussion

A first important side note is that during the user study, users received very little training in using our interface to define inspection paths. All users finished the study within an hour. Considering that users learned how to use the interface, generated six smaller scale inspection paths and one complex inspection path, this is a short time frame. Users reported that their performance would likely improve with more experience, and that with more attempts, their inspection paths would also likely improve. Thus, the results that were achieved in this user study provide lower bounds on the quality of inspection paths that well-trained users can provide.

The experiments also showed that if quality and efficiency are key, user-generated paths are not yet an option. In most cases, user-generated paths have a significantly lower quality than computer-generated ones. We identified two strategic areas in which user-generated paths can be improved:

1. Fixing positioning inaccuracy

2. Managing robot complexity

During the experiments, it was clear that the positioning inaccuracy of users has a significant impact on inspection paths. One result of this is that users tended to position the measurement device at a sub-optimal measurement distance. As a result, measurement paths become longer, and inspection quality decreases. Another side effect of this inaccuracy is that it resulted in detours. When a user missed some spot of an object (which could have been covered given a more delicate positioning), users tended to fix this by generating a detour. This detour resulted in two parts of the inspection path that were close to each other. This again increases the length of measurement paths without adding much to the inspection quality. Another difficulty for users was related to the complexity of controlling robot systems. Sometimes users got distracted by managing the robot, which generates detours. Methods that help to avoid singularities in human-robot interaction can be a first step in limiting the complexity of controlling robot systems [121].

In our experiments, the users were able to generate inspection paths faster than the automated algorithm that was developed in Chapter 4. The speed of the automated algorithm can, however, be improved by considering randomized algorithms for the submodular orienteering problem [122, 123].

An interesting direction for future research is to investigate whether these problems can be solved either in post-processing or with on-line aiding systems. In path post-processing, either local [63] or global [124, 125] solutions are possible. Human aiding systems can focus on decreasing the number of degrees of freedom that must be controlled by the user.

6.5 Conclusion

In this chapter, we investigated whether inexperienced users can generate high-quality inspection paths. The main idea of our solution is to replace the need for specialized experience by intuitive visualizations and interactions in virtual reality. To quantify the performance of user-generated inspection paths, we proposed an approach based on the abstract structure behind the inspection planning problem that was developed in Chapter 4. In this approach, user-generated inspection paths are compared with inspection paths resulting from a near-optimal inspection planning algorithm.

To investigate whether an intuitive interface can replace specialized experience, we performed a user study with 15 valid participants with different experience levels in robotics and optical inspections. In this user study, users solved six smaller scale and one large scale inspection problem. From this user study, it was clear that, while user performance was variable, users without experience could generate high-quality inspection paths. The median user performance was in the range of 66-81% of the quality of a state-of-the-art automated algorithm from Chapter 4. We also discussed the sources of this variability in user performance and how they could be addressed in the future.

From our experiments, it is clear that users were able to generate inspection paths much faster than automated algorithms. Especially in the complex inspection planning scenario, users needed at most 9 minutes and 37 seconds to generate an inspection path, while the automated algorithm needed 5 hours to do the same. A human-centric approach to inspection planning is also easier to set up, since digital twin fidelity is less important. Furthermore, the notion of inspection quality does not need to be defined as rigorously; it only serves to give users an indication of inspection quality.

Chapter 7

General Conclusions

7.1 Conclusions

The cost of inspections can potentially be reduced by including robots in inspection systems. Robots provide fast and repeatable motions of the optical measurement device. With these robotic inspection systems, it is however necessary to determine efficient robot paths that allow the measurement device to perform high-quality, complete measurements. Programming these inspection paths is a challenging task that requires experts, and the need for these experts is a major hindrance to the adoption of robots for inspections. Such an expert must not only have expertise in robotics, but also in the inspection technique that is used. As a result, robots cannot simply be used as motion platforms for inspections, since extra knowledge is required to use them efficiently.

In this thesis, we developed several methodologies aimed at reducing the need for expertise from users. We also aimed for methodologies that can solve industrially relevant robotic inspection problems. To achieve this, the following requirements must be met:

1. The methodologies must minimize the cycle time required to perform the inspections.

2. The methodologies must be able to plan trajectories for the inspection of complex objects.

3. The methodologies must be general, such that many optical inspection techniques can benefit from them.

4. The methodologies must be flexible enough to adapt to different product requirements.

5. The methodologies must be usable by non-experts.


The first contribution of this thesis is a general model for inspection quality. During the construction of this model, we showed that optical inspection techniques have important structural similarities. In any inspection, for example, it is necessary to move the measurement device to multiple locations to achieve complete coverage. It is also common that not every measurement is the same, which leads to the notion of inspection quality. Throughout this thesis, it became clear that these general similarities are more important than the specific shapes of the specific quality functions. All the methodologies developed during this thesis are compatible with this general model of inspection quality, so one requirement our methodologies meet is that they are sufficiently general. We also provided a wide variety of inspection examples and showed how these fit in the general inspection quality model. From these examples, it is clear that it is relatively straightforward to adapt this model to a wide variety of product requirements and inspection techniques.
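To make the structure of this general model concrete, the following sketch treats each candidate viewpoint as assigning a quality value to every surface point (zero when the point is not visible), with the quality of a set of viewpoints taking, per surface point, the best value any selected viewpoint achieves. This is an illustrative simplification with hypothetical names and numbers, not the thesis implementation:

```python
# Illustrative sketch of the general inspection-quality model:
# q(viewpoint, surface_point) in [0, 1], 0 meaning "not visible".
# The quality of a viewpoint set sums, per surface point, the best
# quality achieved by any selected viewpoint. All data is hypothetical.

quality = {
    ("v1", "p1"): 0.9, ("v1", "p2"): 0.4,
    ("v2", "p2"): 0.8, ("v2", "p3"): 0.7,
    ("v3", "p1"): 0.3, ("v3", "p3"): 0.9,
}
surface_points = ["p1", "p2", "p3"]

def inspection_quality(viewpoints):
    """Total quality of a viewpoint set: per point, keep the best view."""
    return sum(
        max((quality.get((v, p), 0.0) for v in viewpoints), default=0.0)
        for p in surface_points
    )

print(inspection_quality(["v1"]))        # v1 alone covers p1 and p2
print(inspection_quality(["v1", "v2"]))  # v2 improves p2 and adds p3
```

An objective of this shape is monotone and submodular: adding a viewpoint never decreases the quality, and its marginal gain shrinks as the set grows, which is the structural property the planning algorithms in this thesis exploit.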

The most challenging requirement is that the methodologies must be usable by non-experts. To investigate this, we developed two different robotic inspection planning methods. Both methods are based on a different philosophy and have different application areas. The first method, which we discussed in Chapter 4, automatically solves the inspection planning problem without any user input. However, before the method can automatically solve the problem, some pre-processing is needed that requires dedicated user experience. This pre-processing could be managed in mature software packages that combine digital twins with inspection software. However, such software packages are not always available and are most likely not flexible. The second method, which is discussed in Chapter 6, places the user at the center and empowers them to generate inspection paths manually. In this method, we replace the need for expertise with intuitive visualizations and interactions in virtual reality. This method provides the ultimate flexibility at the cost of an increase in cycle time. In a user study featuring 15 participants, we measured that the median inspection quality of user-generated paths is 70% of the quality of computer-generated paths.

The methodology that best achieves the requirement for usability by non-experts was developed in two steps. In the first step, a simpler yet related problem, namely camera network design, was discussed in Chapter 5. In this method, users can manually change the positions of cameras to improve the network's quality. To help users, we provide interactive visualizations of this quality: users can move cameras and visually see the effect on the quality. In that chapter, we performed a user study with an expert and concluded that it is challenging to encode all user requirements in inspection quality functions. An important conclusion is that users do not suffer from this problem: a user who is aware of the requirements takes them into account without the need to explicitly program them in a quality function. With this method, it is possible to design camera networks with a simpler quality function, which increases the usability for non-experts. Another surprising result was obtained from a user study that compared the performance of users with the performance of a mathematically optimal algorithm. In this study, users achieved comparable or better performance than the optimal algorithm. However, these user studies are formative rather than evaluative, so more research is needed.

In Chapter 6, we extended this method to solve the robotic inspection planning problem. To achieve this, we added an intuitive robot programming method to the visualizations. We performed a user study with 15 participants to investigate the importance of the level of expertise of a user. From these experiments, we can conclude that non-experienced users can define high-quality inspection paths with the developed methodology. It is however important to note that, in these experiments, users could not match the quality of automated algorithms. Another advantage of this user-centric approach is that users can generate inspection paths more quickly than automated algorithms. This is especially the case for large-scale inspection problems: in a complex inspection planning scenario, users needed at most ten minutes to generate an inspection path, while the automated method required five hours to do the same.

The automated algorithm to which we referred earlier was developed in Chapter 4. To best meet the requirement to minimize cycle time, we developed a near-optimal robotic inspection planning method that can solve real-world inspection problems. In the experiments, this method was subjected to a broad range of inspection problems, in which it nearly always outperformed the state of the art. From the experiments, it was clear that the near-optimality of the algorithm translated into increased robustness to changes in the inspection planning problem. This aspect is also linked to the requirement for the methods to be easily adaptable to changing inspection requirements, and the requirement to be general.
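The near-optimality guarantees for this class of planners rest on greedy maximization of a monotone submodular objective. As a minimal illustration (a plain greedy viewpoint selection under a cardinality budget, not the full routing-aware algorithm of Chapter 4; `greedy_select`, `covered`, and the coverage data are hypothetical), the classic greedy rule repeatedly adds the candidate with the largest marginal gain:

```python
# Illustrative greedy viewpoint selection for a monotone submodular
# coverage objective. This is a simplification of near-optimal
# inspection planning; all names and data below are hypothetical.

def greedy_select(candidates, objective, budget):
    """Repeatedly add the viewpoint with the largest marginal gain."""
    selected = []
    for _ in range(budget):
        best, best_gain = None, 0.0
        for v in candidates:
            if v in selected:
                continue
            gain = objective(selected + [v]) - objective(selected)
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:  # no remaining viewpoint improves the objective
            break
        selected.append(best)
    return selected

# Toy objective: number of surface patches covered by the chosen views.
coverage = {"v1": {"p1", "p2"}, "v2": {"p2", "p3"}, "v3": {"p4"}}

def covered(views):
    return len(set().union(*(coverage[v] for v in views)))

print(greedy_select(list(coverage), covered, budget=2))  # → ['v1', 'v2']
```

For monotone submodular objectives, this greedy rule is guaranteed to reach at least a (1 - 1/e) fraction of the optimal value under a cardinality budget [96]; handling a travelling budget instead, as in inspection planning, requires the routing-aware extensions discussed in Chapter 4.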

Regardless of whether an inspection path was generated by an automated algorithm or a user-centric approach, we noted that it was likely locally suboptimal. The same problem is also present in the planning of ordinary robot paths, where it is solved by applying a post-processing method. These so-called path optimization methods locally improve robot paths to make them more efficient. For robotic inspection paths, such post-processing techniques were lacking. In Chapter 3, we therefore presented a robotic inspection path optimization approach that takes the measurement task into account. We tested this approach in two inspection scenarios, where the optimization was able to remove 31% and 27%, respectively, of the required travelling budget. In a scenario with a constant-speed measurement, this travelling budget is proportional to the required cycle time.
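The idea of locally improving a path in post-processing can be sketched with a much simpler heuristic than the gradient-based method of Chapter 3: shortcutting, which removes intermediate waypoints whenever the direct connection between their neighbours remains feasible. In the sketch below, `feasible` is a hypothetical stand-in for the collision and measurement-quality checks a real system would perform:

```python
# Minimal path-shortcutting sketch: drop intermediate waypoints when the
# direct connection remains feasible. This is a generic post-processing
# heuristic, not the gradient-based optimizer of Chapter 3; `feasible`
# is a hypothetical stand-in for collision and coverage checks.
import math

def length(path):
    """Total Euclidean length of a piecewise-linear path."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def shortcut(path, feasible):
    """Greedily remove waypoints whose neighbours can be joined directly."""
    result = list(path)
    i = 0
    while i + 2 < len(result):
        if feasible(result[i], result[i + 2]):
            del result[i + 1]          # the detour through i+1 is unnecessary
        else:
            i += 1
    return result

detour = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]   # toy path with one detour
always_ok = lambda a, b: True                    # assume free space
print(length(detour), length(shortcut(detour, always_ok)))
```

The inspection-aware optimizer of Chapter 3 differs in that it may not simply delete measurement poses: it must preserve the inspection quality the path achieves while reducing the travelling budget.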

In this thesis, we developed several methodologies for robotic inspection planning. These methodologies achieve almost all the requirements needed to make them industrially relevant. However, from this thesis it seems that there is a trade-off between two requirements: the requirement to minimize cycle times maximally, and the requirement to be usable by non-experts. We provided two options: a user-centric approach that is simple to use for non-experts, and an automated algorithm that maximally minimizes cycle time.


7.2 Recommendations and future work

In this thesis, we developed general methods for robotic inspection planning. An important direction for future research is to bring the research on inspection planning algorithms closer to the researchers who can benefit from using them. This can be achieved by developing a software library that is accessible to NDT experts. This software library should also be able to connect to common robots, which can be achieved through a connection to ROS (Robot Operating System). In this thesis, we also developed a local inspection path optimization procedure. Our implementation of this method focused on flexibility at the cost of execution time. As a result, the time to optimize a trajectory was of the order of minutes. However, the gradient-based technique should be able to perform this optimization in a time of the order of seconds. So an important area for future work is to investigate how the implementation can be optimized to decrease the execution time. Other areas for future work are to increase the flexibility by including more quality functions and to model the effect of occlusions accurately. Occlusions can be modelled properly by minimizing the distance of occluded points relative to the visibility events that make them visible.

The main drawback of the user-centered approach, in which users plan inspection paths in virtual reality, is that these paths are not as efficient as computer-generated paths. However, we believe that the quality of user-generated paths can be improved significantly by assisting users in the planning process. We analyzed the behaviour of users and suggested some ways to assist them. The main inefficiencies of user-generated paths are that they contain measurements at suboptimal measurement distances and that they contain some detours. Both of these are a result of the fact that users make small errors, or are not aware of details. Given this observation, it would be interesting to combine user-generated inspection paths with the developed local path optimization procedure. In such an approach, users would define the global structure of an inspection path, and the path optimization algorithm would be responsible for the local details. This, together with options for users to edit paths, would increase the quality of user-generated paths.

Throughout this thesis, we assumed that the robot setup and object position are fixed and known beforehand. In reality, however, a robot setup has many variable parameters that can be changed. An example is the hand-eye transformation between the robot end-effector and the measurement system. Typically, an expert decides this hand-eye transformation to maximize the flexibility of the robot system. From our observations, it is clear that these free parameters have a significant impact on the quality of the resulting measurement trajectories. So an interesting opportunity for future research is to investigate optimization procedures that automatically and optimally select these free parameters.


Bibliography

[1] Vision Online Marketing Team. Machine vision techniques: Practical ways to improve efficiency in machine vision inspection, Dec 2017. URL https://www.visiononline.org/blog-article.cfm/Machine-Vision-Techniques-Practical-Ways-to-Improve-Efficiency-in-Machine-Vision-Inspection/79.

[2] Jeff Kerns. Robotic arms dominate today's automation trends, May 2018. URL https://www.machinedesign.com/motion-control/robotic-arms-dominate-today-s-automation-trends.

[3] Skyspecs proves value of automated drone inspections with at-scale operations; lands $8M investment, Jan 2018. URL https://www.suasnews.com/2018/01/skyspecs-proves-value-automated-drone-inspections-scale-operations-lands-8m-investment/.

[4] RoboInspector provides flexible machine vision inspection solution, Mar 2019. URL https://metrology.news/roboinspector-provides-flexible-machine-machine-inspection-solutions/.

[5] Carmelo Mineo, Douglas Herbert, M Morozov, SG Pierce, PI Nicholson, and Ian Cooper. Robotic non-destructive inspection. In 51st Annual Conference of the British Institute of Non-Destructive Testing, pages 345–352, 2012.

[6] Microsoft Dynamics 365. 2019 Manufacturing Trends Report. Technical report, Microsoft Dynamics, 2019. URL https://info.microsoft.com/ww-landing-DynOps-Manufacturing-Trends-eBook.html.

[7] Mary Kathryn Thompson, Giovanni Moroni, Tom Vaneker, Georges Fadel, R Ian Campbell, Ian Gibson, Alain Bernard, Joachim Schulz, Patricia Graf, Bhrigu Ahuja, et al. Design for additive manufacturing: Trends, opportunities, considerations, and constraints. CIRP Annals, 65(2):737–760, 2016.

[8] Syed AM Tofail, Elias P Koumoulos, Amit Bandyopadhyay, Susmita Bose, Lisa O'Donoghue, and Costas Charitidis. Additive manufacturing: scientific and technological challenges, market uptake and opportunities. Materials Today, 21(1):22–37, 2018.

[9] S Wenhardt, B Deutsch, J Hornegger, H Niemann, and J Denzler. An information theoretic approach for next best view planning in 3-d reconstruction. In Pattern Recognition, 2006. ICPR 2006. 18th International Conference on, volume 1, pages 103–106. IEEE, 2006.


[10] Christoph Munkelt, Andreas Breitbarth, Gunther Notni, and Joachim Denzler. Multi-View Planning for Simultaneous Coverage and Accuracy Optimisation. In Proceedings of the British Machine Vision Conference, pages 118.1–118.11. BMVA Press, 2010. ISBN 1-901725-40-5.

[11] Brendan J Englot and Franz S Hover. Sampling-based coverage path planning for inspection of complex structures. In Twenty-Second International Conference on Automated Planning and Scheduling, pages 29–37, 2012.

[12] J. Irving Vasquez-Gomez, L. Enrique Sucar, and Rafael Murrieta-Cid. Hierarchical ray tracing for fast volumetric next-best-view planning. In Proceedings - 2013 International Conference on Computer and Robot Vision, CRV 2013, pages 181–187, 2013. ISBN 9780769549835. doi: 10.1109/CRV.2013.42.

[13] Andreas Bircher, Mina Kamel, Kostas Alexis, Michael Burri, Philipp Oettershagen, Sammy Omari, Thomas Mantel, and Roland Siegwart. Three-dimensional coverage path planning via viewpoint resampling and tour optimization for aerial robots. Autonomous Robots, 40(6):1059–1078, 2016.

[14] Mike Roberts, Debadeepta Dey, Anh Truong, Sudipta Sinha, Shital Shah, Ashish Kapoor, Pat Hanrahan, and Neel Joshi. Submodular trajectory optimization for aerial 3d scanning. In Proceedings of the IEEE International Conference on Computer Vision, pages 5324–5333, 2017.

[15] Benjamin Hepp, Matthias Nießner, and Otmar Hilliges. Plan3d: Viewpoint and trajectory optimization for aerial multi-view stereo reconstruction. ACM Transactions on Graphics (TOG), 38(1):4, 2018.

[16] Chandra Chekuri and Martin Pal. A recursive greedy algorithm for walks in directed graphs. In 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS'05), pages 245–253. IEEE, 2005.

[17] Georgios Papadopoulos, Hanna Kurniawati, and Nicholas M Patrikalakis. Asymptotically optimal inspection planning using systems with differential constraints. In 2013 IEEE International Conference on Robotics and Automation, pages 4126–4133. IEEE, 2013.

[18] Andreas Bircher, Kostas Alexis, Ulrich Schwesinger, Sammy Omari, Michael Burri, and Roland Siegwart. An incremental sampling-based approach to inspection planning: the rapidly exploring random tree of trees. Robotica, 35(6):1327–1340, 2017.

[19] Jia Pan, Liangjun Zhang, and Dinesh Manocha. Collision-free and smooth trajectory computation in cluttered environments. Int. J. Rob. Res., 31(10):1155–1175, September 2012. ISSN 0278-3649. doi: 10.1177/0278364912453186. URL http://dx.doi.org/10.1177/0278364912453186.

[20] Matt Zucker, Nathan Ratliff, Anca D Dragan, Mihail Pivtoraiko, Matthew Klingensmith, Christopher M Dellin, J Andrew Bagnell, and Siddhartha S Srinivasa. CHOMP: Covariant hamiltonian optimization for motion planning. The International Journal of Robotics Research, 32(9-10):1164–1193, 2013.


[21] Mylène Campana, Florent Lamiraux, and Jean-Paul Laumond. A gradient-based path optimization method for motion planning. Advanced Robotics, 30(17-18):1126–1144, 2016. doi: 10.1080/01691864.2016.1168317. URL http://dx.doi.org/10.1080/01691864.2016.1168317.

[22] Gustavo Olague and Roger Mohr. Optimal 3d sensor placement to obtain accurate 3d point positions. In Primer Encuentro de Computacion ENC 97: Vision Robotica, pages 116–123, 1997.

[23] Ugur Murat Erdem and Stan Sclaroff. Automated camera layout to satisfy task-specific and floor plan-specific coverage requirements. Computer Vision and Image Understanding, 103(3):156–169, 2006.

[24] Bernard Ghanem, Yuanhao Cao, and Peter Wonka. Designing camera networks by convex quadratic programming. In Computer Graphics Forum, volume 34, pages 69–80. Wiley Online Library, 2015.

[25] Aaron Mavrinac and Xiang Chen. Modeling coverage in camera networks: A survey. International Journal of Computer Vision, 101(1):205–226, 2013.

[26] Robert D Schiffenbauer. A survey of aspect graphs. Polytechnic University, New York City, USA, Tech. Rep. TR-CIS-2001-01, 2001.

[27] Mark McDonnell. Big O for beginners, Jun 2016. URL https://www.integralist.co.uk/posts/big-o-for-beginners/.

[28] Jiří Bittner and Peter Wonka. Visibility in computer graphics. Environment and Planning B: Planning and Design, 30(5):729–755, 2003.

[29] Ned Greene, Michael Kass, and Gavin Miller. Hierarchical z-buffer visibility. In Proceedings of the 20th annual conference on Computer graphics and interactive techniques, pages 231–238. ACM, 1993.

[30] Andrew S Glassner. An introduction to ray tracing. Elsevier, 1989.

[31] Donald Meagher. Geometric modeling using octree encoding. Computer Graphics and Image Processing, 19(2):129–147, 1982.

[32] J Peeters, B Ribbens, JJJ Dirckx, and G Steenackers. Determining directional emissivity: Numerical estimation and experimental validation by using infrared thermography. Infrared Physics & Technology, 77:344–350, 2016.

[33] William R Scott. Model-based view planning. Machine Vision and Applications, 20(1):47–69, 2009.

[34] Mark Sheinin and Yoav Y Schechner. The next best underwater view. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3764–3773, 2016.

[35] Ahmed Isheil, J-P Gonnet, David Joannic, and J-F Fontaine. Systematic error correction of a 3d laser scanning measurement device. Optics and Lasers in Engineering, 49(1):16–24, 2011.


[36] Michael Trummer, Christoph Munkelt, and Joachim Denzler. Online next-best-view planning for accuracy optimization using an extended e-criterion. In 2010 20th International Conference on Pattern Recognition, pages 1642–1645. IEEE, 2010.

[37] Techbriefs Media Group. Robotic 3d scanner automatically scans spacecraft heat shields, Mar 2017. URL https://www.techbriefs.com/component/content/article/tb/features/application-briefs/8461.

[38] Mussa Mahmud, David Joannic, Michael Roy, Ahmed Isheil, and Jean-Francois Fontaine. 3d part inspection path planning of a laser scanner with control on the uncertainty. Computer-Aided Design, 43(4):345–355, 2011.

[39] Technical Committee ISO/TC 213, Dimensional and geometrical product specifications and verification. Geometrical product specifications (GPS) – Inspection by measurement of workpieces and measuring equipment – Part 1: Decision rules for verifying conformity or nonconformity with specifications. Standard, International Organization for Standardization, Geneva, CH, October 2017.

[40] Nick Van Gestel, Steven Cuypers, Philip Bleys, and Jean-Pierre Kruth. A performance evaluation test for laser line scanners on CMMs. Optics and Lasers in Engineering, 47(3-4):336–342, 2009.

[41] Glen A Turley, Ercihan Kiraci, Alan Olifent, Alex Attridge, Manoj K Tiwari, and Mark A Williams. Evaluation of a multi-sensor horizontal dual arm coordinate measuring machine for automotive dimensional inspection. The International Journal of Advanced Manufacturing Technology, 72(9-12):1665–1675, 2014.

[42] Steven M Seitz, Brian Curless, James Diebel, Daniel Scharstein, and Richard Szeliski. A comparison and evaluation of multi-view stereo reconstruction algorithms. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 1, pages 519–528. IEEE, 2006.

[43] Ruigang Yang et al. Dealing with textureless regions and specular highlights - a progressive space carving scheme using a novel photo-consistency measure. In Proceedings Ninth IEEE International Conference on Computer Vision, pages 576–584. IEEE, 2003.

[44] Sebastian Haner and Anders Heyden. Covariance propagation and next best view planning for 3d reconstruction. In Computer Vision–ECCV 2012, pages 545–556. Springer, 2012.

[45] Robert Siegel. Thermal radiation heat transfer, volume 1. CRC Press, 2001.

[46] Franz Reuleaux. The kinematics of machinery: outlines of a theory of machines. Courier Corporation, 2013.


[47] John Joseph Uicker, Gordon R Pennock, Joseph Edward Shigley, et al. Theory of machines and mechanisms, volume 1. Oxford University Press, New York, NY, 2011.

[48] Phillip John McKerrow. Introduction to robotics, volume 6. Addison-Wesley, Sydney, 1991.

[49] Andreas Aristidou, Joan Lasenby, Yiorgos Chrysanthou, and Ariel Shamir. Inverse kinematics techniques in computer graphics: A survey. In Computer Graphics Forum, volume 37, pages 35–58. Wiley Online Library, 2018.

[50] Arati S Deo and Ian D Walker. Overview of damped least-squares methods for inverse kinematics of robot manipulators. Journal of Intelligent and Robotic Systems, 14(1):43–68, 1995.

[51] Samuel R Buss. Introduction to inverse kinematics with jacobian transpose, pseudoinverse and damped least squares methods. IEEE Journal of Robotics and Automation, 17(1-19):16, 2004.

[52] Pablo Jimenez, Federico Thomas, and Carme Torras. 3d collision detection: a survey. Computers & Graphics, 25(2):269–285, 2001.

[53] Ming Lin and Stefan Gottschalk. Collision detection between geometric models: A survey. In Proc. of IMA Conference on Mathematics of Surfaces, volume 1, pages 602–608, 1998.

[54] John Reif and Micha Sharir. Motion planning in the presence of moving obstacles. Journal of the ACM (JACM), 41(4):764–790, 1994.

[55] Lydia E Kavraki, Petr Svestka, J-C Latombe, and Mark H Overmars. Probabilistic roadmaps for path planning in high-dimensional configuration spaces. IEEE Transactions on Robotics and Automation, 12(4):566–580, 1996.

[56] James J Kuffner and Steven M LaValle. RRT-Connect: An efficient approach to single-query path planning. In Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No. 00CH37065), volume 2, pages 995–1001. IEEE, 2000.

[57] Edsger W Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1(1):269–271, 1959.

[58] Stephane Redon, Abderrahmane Kheddar, and Sabine Coquillart. Fast continuous collision detection between rigid bodies. In Computer Graphics Forum, volume 21, pages 279–287. Wiley Online Library, 2002.

[59] Stephane Redon, Ming C Lin, Dinesh Manocha, and Young J Kim. Fast continuous collision detection for articulated models. Journal of Computing and Information Science in Engineering, 5(2):126–137, 2005.

[60] Michael Farber. Topological complexity of motion planning. Discrete and Computational Geometry, 29(2):211–221, 2003.


[61] Jonathan Binney and Gaurav S. Sukhatme. Branch and bound for informative path planning. In Proceedings - IEEE International Conference on Robotics and Automation, pages 2147–2154, 2012. ISBN 9781467314039. doi: 10.1109/ICRA.2012.6224902.

[62] Eric Rohmer, Surya PN Singh, and Marc Freese. V-REP: A versatile and scalable robot simulation framework. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1321–1326. IEEE, 2013.

[63] Boris Bogaerts, Seppe Sels, Steve Vanlanduit, and Rudi Penne. A gradient-based inspection path optimization approach. IEEE Robotics and Automation Letters, 3(3):2646–2653, 2018.

[64] Georgios Papadopoulos, Hanna Kurniawati, and Nicholas M. Patrikalakis. Asymptotically optimal inspection planning using systems with differential constraints. 2013 IEEE International Conference on Robotics and Automation, pages 4126–4133, 2013. ISSN 1050-4729. doi: 10.1109/ICRA.2013.6631159. URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6631159.

[65] Andreas Bircher, Kostas Alexis, Ulrich Schwesinger, Sammy Omari, Michael Burri, and Roland Siegwart. An incremental sampling-based approach to inspection planning: the rapidly exploring random tree of trees. Robotica, pages 1–14, 2016. ISSN 0263-5747. doi: 10.1017/S0263574716000084. URL http://www.scopus.com/inward/record.url?eid=2-s2.0-84960392861&partnerID=tZOtx3y1.

[66] William R. Scott. Model-based view planning. Machine Vision and Applications, 20(1):47–69, 2009. ISSN 09328092. doi: 10.1007/s00138-007-0110-2.

[67] Sergey Alatartsev, Anton Belov, Mykhaylo Nykolaychuk, and Frank Ortmeier. Robot trajectory optimization for the relaxed end-effector path. In 2014 11th International Conference on Informatics in Control, Automation and Robotics (ICINCO), 2014.

[68] Nikhil Somani, Markus Rickert, Andre Gaschler, Caixia Cai, Alexander Perzylo, and Alois Knoll. Task level robot programming using prioritized non-linear inequality constraints. In Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on, pages 430–437. IEEE, 2016.

[69] Jianfei Mao, Xianping Huang, and Li Jiang. A flexible solution to AX = XB for robot hand-eye calibration. In Proceedings of the 10th WSEAS International Conference on Robotics, Control and Manufacturing Technology. World Scientific and Engineering Academy and Society (WSEAS), pages 118–122, 2010.

[70] Camillo J Taylor and David J Kriegman. Minimization on the lie group SO(3) and related manifolds. Yale University, 16(155):6, 1994.

[71] Jean Gallier. Basics of classical lie groups: The exponential map, lie groups, and lie algebras. In Geometric Methods and Applications, pages 367–414. Springer, 2001.


[72] José-Luis Blanco. A tutorial on SE(3) transformation parameterizations and on-manifold optimization. Technical report, University of Malaga, September 2010.

[73] Hitoshi Tokunaga, Takaaki Okano, Norio Matsuki, Fumiki Tanaka, and Takeshi Kishinami. A method to solve inverse kinematics problems using lie algebra and its application to robot spray painting simulation. In ASME 2004 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pages 85–91. American Society of Mechanical Engineers, 2004.

[74] L. Sciavicco, Bruno Siciliano, and B. Sciavicco. Modelling and Control of Robot Manipulators. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2nd edition, 2000. ISBN 1852332212.

[75] Jean-Antoine Désidéri. Multiple-gradient descent algorithm (MGDA) for multiobjective optimization. Comptes Rendus Mathematique, 350(5):313–318, 2012. ISSN 1631-073X. doi: http://dx.doi.org/10.1016/j.crma.2012.03.014. URL http://www.sciencedirect.com/science/article/pii/S1631073X12000738.

[76] Dimitri P Bertsekas. Constrained optimization and Lagrange multiplier methods. Academic Press, 2014.

[77] Ingo Wald, Sven Woop, Carsten Benthin, Gregory S Johnson, and Manfred Ernst. Embree: a kernel framework for efficient CPU ray tracing. ACM Transactions on Graphics (TOG), 33(4):143, 2014.

[78] Vasek Chvatal. A greedy heuristic for the set-covering problem. Mathematics of Operations Research, 4(3):233–235, 1979.

[79] Matthew J. Weinstein and Anil V. Rao. A source transformation via operator overloading method for the automatic differentiation of mathematical functions in MATLAB. ACM Trans. Math. Softw., 42(2):11:1–11:44, May 2016. ISSN 0098-3500. doi: 10.1145/2699456. URL http://doi.acm.org/10.1145/2699456.

[80] Richard Pito. A solution to the next best view problem for automated surface acquisition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(10):1016–1030, 1999.

[81] Howie Choset and Philippe Pignon. Coverage path planning: The boustrophedon cellular decomposition. In Field and Service Robotics, pages 203–209. Springer, 1998.

[82] Enric Galceran and Marc Carreras. A survey on coverage path planning for robotics. Robotics and Autonomous Systems, 61(12):1258–1276, 2013.

[83] Amarjeet Singh, Andreas Krause, and William J Kaiser. Nonmyopic adaptive informative path planning for multiple robots. In IJCAI, volume 3, page 2, 2009.


[84] Jingjin Yu, Mac Schwager, and Daniela Rus. Correlated orienteering problemand its application to informative path planning for persistent monitoringtasks. In 2014 IEEE/RSJ International Conference on Intelligent Robots andSystems, pages 342–349. IEEE, 2014.

[85] Geoffrey A Hollinger, Brendan Englot, Franz S Hover, Urbashi Mitra, andGaurav S Sukhatme. Active planning for underwater inspection and thebenefit of adaptivity. The International Journal of Robotics Research, 32(1):3–18, 2013.

[86] Boris Bogaerts, Seppe Sels, Steve Vanlanduit, and Rudi Penne. Interactivecamera network design using a virtual reality interface. Sensors, 19(5):1003,2019.

[87] Manohar Shamaiah, Siddhartha Banerjee, and Haris Vikalo. Greedy sensorselection: Leveraging submodularity. In 49th IEEE conference on decision andcontrol (CDC), pages 2572–2577. IEEE, 2010.

[88] David Applegate, Robert Bixby, Vasek Chvatal, and William Cook. Implement-ing the dantzig-fulkerson-johnson algorithm for large traveling salesmanproblems. Mathematical programming, 97(1-2):91–153, 2003.

[89] LLC Gurobi Optimization. Gurobi optimizer reference manual, 2018. URLhttp://www.gurobi.com.

[90] Daniel J Rosenkrantz, Richard E Stearns, and Philip M Lewis, II. An analysisof several heuristics for the traveling salesman problem. SIAM journal oncomputing, 6(3):563–581, 1977.

[91] Gerhard Reinelt. The traveling salesman: computational solutions for TSPapplications. Springer-Verlag, 1994.

[92] Michael Held and Richard M Karp. The traveling-salesman problem andminimum spanning trees. Operations Research, 18(6):1138–1162, 1970.

[93] Christine L Valenzuela and Antonia J Jones. Estimating the held-karp lowerbound for the geometric tsp. European journal of operational research, 102(1):157–175, 1997.

[94] Haifeng Zhang and Yevgeniy Vorobeychik. Submodular optimization withrouting constraints. In AAAI, volume 16, pages 819–826, 2016.

[95] Chao Qian, Jing-Cheng Shi, Yang Yu, and Ke Tang. On subset selection with general cost constraints. In IJCAI, volume 17, pages 2613–2619, 2017.

[96] Andreas Krause and Daniel Golovin. Submodular function maximization. In Tractability: Practical Approaches to Hard Problems, pages 71–104. Cambridge University Press, 2014.

[97] Yale T Herer. Submodularity and the traveling salesman problem. European Journal of Operational Research, 114(3):489–508, 1999.


[98] Ken Shoemake. Uniform random rotations. In Graphics Gems III (IBM Version), pages 124–132. Elsevier, 1992.

[99] Donald B. Johnson. Efficient algorithms for shortest paths in sparse networks. J. ACM, 24(1):1–13, January 1977. ISSN 0004-5411. doi: 10.1145/321992.321993. URL http://doi.acm.org/10.1145/321992.321993.

[100] Ali Hosseininaveh, Ben Sargeant, Tohid Erfani, Stuart Robson, Mark Shortis, Mona Hess, and Jan Boehm. Towards fully automatic reliable 3d acquisition: From designing imaging network to a complete and accurate point cloud. Robotics and Autonomous Systems, 62(8):1197–1207, 2014.

[101] Shachar Fleishman, Daniel Cohen-Or, and Dani Lischinski. Automatic camera placement for image-based modeling. In Computer Graphics Forum, volume 19, pages 101–110. Wiley Online Library, 2000.

[102] Miquel Feixas, Mateu Sbert, and Francisco Gonzalez. A unified information-theoretic framework for viewpoint selection and mesh saliency. ACM Transactions on Applied Perception (TAP), 6(1):1, 2009.

[103] Wencheng Wang and Tianhao Gao. Constructing canonical regions for fast and effective view selection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4114–4122, 2016.

[104] Joseph O'Rourke. Art gallery theorems and algorithms, volume 57. Oxford University Press, Oxford, 1987.

[105] Marcelo C Couto, Pedro J de Rezende, and Cid C de Souza. An exact algorithm for minimizing vertex guards on art galleries. International Transactions in Operational Research, 18(4):425–448, 2011.

[106] Richard Church and Charles ReVelle. The maximal covering location problem. In Papers of the Regional Science Association, volume 32, pages 101–118. Springer, 1974.

[107] Glenn H Tarbox and Susan N Gottschlich. Planning for complete sensor coverage in inspection. Computer Vision and Image Understanding, 61(1):84–111, 1995.

[108] Cregg K. Cowan and Peter D Kovesi. Automatic sensor placement from vision task requirements. IEEE Transactions on Pattern Analysis and Machine Intelligence, 10(3):407–416, 1988.

[109] Niv Buchbinder, Moran Feldman, Joseph Seffi Naor, and Roy Schwartz. Submodular maximization with cardinality constraints. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1433–1452. Society for Industrial and Applied Mathematics, 2014.

[110] Jennifer Gillenwater. Maximization of non-monotone submodular functions. Citeseer, 2014.


[111] George L Nemhauser and Laurence A Wolsey. Best algorithms for approximating the maximum of a submodular set function. Mathematics of Operations Research, 3(3):177–188, 1978.

[112] Robert G Jeroslow. Trivial integer programs unsolvable by branch-and-bound. Mathematical Programming, 6(1):105–109, 1974.

[113] Grigori D Pintilie and Wolfgang Stuerzlinger. An evaluation of interactive and automated next best view methods in 3D scanning. Computer-Aided Design and Applications, 10(2):279–291, 2013.

[114] Will Schroeder, Ken Martin, and Bill Lorensen. The Visualization Toolkit: An Object-Oriented Approach to 3D Graphics. Kitware, Inc., fourth edition, 2006.

[115] Jens Kruger and Rudiger Westermann. Acceleration techniques for GPU-based volume rendering. In Proceedings of the 14th IEEE Visualization 2003 (VIS'03), page 38. IEEE Computer Society, 2003.

[116] Thomas Schops, Johannes L Schonberger, Silvano Galliani, Torsten Sattler, Konrad Schindler, Marc Pollefeys, and Andreas Geiger. A multi-view stereo benchmark with high-resolution images and multi-camera videos. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

[117] Carmelo Mineo, Stephen Gareth Pierce, Pascual Ian Nicholson, and Ian Cooper. Robotic path planning for non-destructive testing: a custom MATLAB toolbox approach. Robotics and Computer-Integrated Manufacturing, 37:1–12, 2016.

[118] Alexandre Campeau-Lecours, Ulysse Cote-Allard, Dinh-Son Vu, Francois Routhier, Benoit Gosselin, and Clement Gosselin. Intuitive adaptive orientation control for enhanced human–robot interaction. IEEE Transactions on Robotics, 35(2):509–520, 2018.

[119] Jonathan Binney and Gaurav S Sukhatme. Branch and bound for informative path planning. In 2012 IEEE International Conference on Robotics and Automation, pages 2147–2154. IEEE, 2012.

[120] Jeroen Peeters, Simon Verspeek, Seppe Sels, Boris Bogaerts, and Gunther Steenackers. Optimized dynamic line scanning thermography for aircraft structures. Quantitative InfraRed Thermography Journal, pages 1–16, 2019.

[121] Fotios Dimeas, Vassilis C Moulianitis, and Nikos Aspragathos. Manipulator performance constraints in human-robot cooperation. Robotics and Computer-Integrated Manufacturing, 50:222–233, 2018.

[122] Sankalp Arora and Sebastian Scherer. Randomized algorithm for informative path planning with budget constraints. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 4997–5004. IEEE, 2017.


[123] Mengyu Fu, Alan Kuntz, Oren Salzman, and Ron Alterovitz. Toward asymptotically-optimal inspection planning via efficient near-optimal graph search. arXiv preprint arXiv:1907.00506, 2019.

[124] Daqing Yi, Michael A Goodrich, and Kevin D Seppi. Informative path planning with a human path constraint. In 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages 1752–1758. IEEE, 2014.

[125] Christopher Reardon, Hao Zhang, and Jonathan Fink. Shaping of shared autonomous solutions with minimal interaction. Frontiers in Neurorobotics, 12:54, 2018.
