Knowledge-Driven Agile Sensor-Mission Assignment


Transcript of Knowledge-Driven Agile Sensor-Mission Assignment

Page 1: Knowledge-Driven Agile Sensor-Mission Assignment

ACITA 2009

Alun Preece, Diego Pizzocaro, Konrad Borowiecki, Geeth de Mel, Wamberto Vasconcelos, Amotz Bar-Noy, Matthew P. Johnson, Tom La Porta, Hosam Rowaihy

http://users.cs.cf.ac.uk/D.Pizzocaro [email protected]

Page 3: Knowledge-Driven Agile Sensor-Mission Assignment

Outline

• Sensor-Mission Assignment problem
• Previous work: Ontology-based matching
• Extensions:
  1. Task Representation
  2. Asset Allocation
• Implementation
• Conclusion & Future work

Page 4: Knowledge-Driven Agile Sensor-Mission Assignment

Main problem

Sensor-Mission Assignment is the problem of assigning sensing assets to missions, to cover the ISR information needs of the individual tasks in each mission.

Page 5: Knowledge-Driven Agile Sensor-Mission Assignment

Sensors (or Sensing assets)

• Simple sensors
• Platforms


Page 7: Knowledge-Driven Agile Sensor-Mission Assignment

Sensors (or Sensing assets)

• Simple sensors
• Platforms

Missions

• Composed of different TASKS, e.g. a Peace Support mission:
  - TASK 1: Detect vehicle
  - TASK 2: Identify people
  - TASK 3: Area Surveillance
  - TASK 4: Area Surveillance

Page 8: Knowledge-Driven Agile Sensor-Mission Assignment

Scenario

• A network of heterogeneous sensing assets:
  - Supports multiple tasks competing for bundles of sensors, e.g.:
    TASK 1: Identify people; TASK 2: Detect ground vehicle; TASK 3: Detect people; TASK 4: Detect aircraft; TASK 5: Detect helicopter; TASK 6: Identify tank; TASK 7: Localize jeep
  - Sensors are scarce and in high demand.
  - Highly dynamic (sensor failures, changes of plan).
• Where to send a particular bundle?



Page 15: Knowledge-Driven Agile Sensor-Mission Assignment

Problem formulation

• We need schemes to assign bundles of assets to the tasks they best serve.

... demonstrates how the core knowledge-based approach can be used to drive asset allocation, using the CASS combinatorial auction algorithm as an illustration. In Section VI we review the status of our illustration-of-concept application, SAM (Sensor Assignment to Missions), and discuss a variety of roles this kind of tool can play in supporting the process of sensor-mission assignment.

While other papers have presented earlier or incomplete parts of our knowledge-driven approach to sensor-mission assignment (notably [6], [8], [9]) and resource allocation (notably [10], [7]), this is the first paper to show how these elements can provide an integrated solution.

II. SENSOR-MISSION ASSIGNMENT PROBLEM FORMULATION

We formulate the sensor-mission assignment problem as a graph; an example is shown in Figure 1. We distinguish three kinds of node:

• Tasks denote the entities that are competing for available assets [1]. In our approach, the only important feature of a task is its information requirements — these are what drive the asset matching and allocation processes — so the task nodes represent information requirements.

• Bundles are collections of individual assets (sensors and platforms). An arc between a bundle and a task indicates that the bundle is able to meet (at least to some extent) that information requirement (task). Each bundle has a unique bundle type, described in more detail below. Each bundle is composed of several assets and, depending on its type, may be suitable for several tasks.

• Assets represent the individual sensor and platform resources. An arc between an asset and a bundle means that that asset can be assigned to that bundle. Assets may be suitable for assignment to several bundles, again depending on the bundle type. Although not shown in the figure, an asset may be assigned to bundles of different types.

In these terms, a solution to the sensor-mission assignment problem is an assignment of bundles to tasks, subject to the constraints: each task can have at most one bundle assigned to it, a bundle may be assigned to at most one task, and each asset may be assigned to at most one bundle. Thus, an assignment is a matching in the bipartite subgraph (B, T) and a semi-matching in the bipartite subgraph (A, B). Note that the thin arcs in Figure 1 show possible assignments, while the bold arcs show one actual assignment. The figure is drawn with tasks on the left and assets on the right to emphasise that, although the assignment arcs go right to left, solutions to the assignment problem are constructed left to right, that is, starting with tasks.

To illustrate this with a concrete example (originally from [5]): in a peace support operation [4], UK and US bases have been established to detect/deter insurgent activity on a border. The bases rely on a main supply route (MSR) which must be surveilled and protected. Surveilling the border will likely involve, among other things, detection of suspicious vehicle activity near it: vehicle detection can be formalized as an information requirement task T1. This may be accomplished by a variety of means, depending on the kinds of assets available. We assume these include flying a UAV over the border to gather IMINT, or using acoustic sensing to identify types of vehicles likely to be used by insurgents. Further, we assume that a single UAV can cover the area of interest (AoI), but that at least two acoustic arrays will be needed. Each of these options is represented as a type of bundle: BT1 = ⟨{UAV, IMINT-sensor}⟩, BT2 = ⟨{AcousticArray}, {AcousticArray}⟩. Assume there are multiple UAV assets available (A1 and A3) but only one functioning IMINT payload (A2) that can be mounted on either A1 or A3. Assuming there are three acoustic arrays deployed in the area (A4, A5, A6), then we can meet the requirements of T1 by assigning the UAV-IMINT bundles {A1, A2} or {A3, A2}, or one of the pairs of acoustic arrays (e.g. {A4, A5}, {A4, A6}, {A5, A6}).

However, it is likely that there will be other tasks, potentially in competition for these assets; for example, a requirement to detect vehicles posing a potential threat to the MSR (T2) may be accomplished by the same types of bundle as T1 but, because the areas of interest (MSR and border) do not intersect, it will not be possible to share assets between these tasks. So, for example, if we assign the UAV to T1 then we will have to satisfy T2 by means of acoustic intelligence (ACINT).

We argue that, for a specific problem instance, construction of the task-bundle-asset graph requires a significant amount of domain knowledge. Specifically, we need to know which types of bundle are suitable for which tasks, and which types of asset can be collected into bundles. In our original work [6] — summarised in the next section — this knowledge is expressed in the form of a set of domain ontologies, allowing us to apply a reasoning procedure to derive the graph for a given problem instance.

[1] In much of our work, we consider these to be synonymous with "missions" (e.g. [8], [7]).

Fig. 1. Example sensor-mission assignment problem as a graph (tasks T1, T2; bundles B1–B5 of types BT1, BT2; assets A1–A6)

• Tasks: the entities competing for assets. Each task is associated with Information Requirements.

• Bundles: collections of assets. Each bundle has a unique Bundle Type (BT).

• Assets: individual sensors and platforms.

• GOAL: an assignment of bundles to tasks, such that:
  1. One-to-one matching between tasks and bundles
  2. One-to-many matching between bundles and assets
  (A code sketch of these constraints follows.)
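
As a minimal sketch of the assignment constraints above (our own illustration in Python, not the SAM implementation; the bundle and asset labels are borrowed from the Figure 1 example, and the exact asset composition of B1/B2/B4 is assumed), one way to check the feasibility of a candidate assignment:

```python
# Minimal sketch: task-bundle-asset assignment feasibility check.
from dataclasses import dataclass

@dataclass(frozen=True)
class Bundle:
    name: str
    bundle_type: str       # e.g. "BT1" (UAV + IMINT) or "BT2" (two acoustic arrays)
    assets: frozenset      # identifiers of the assets composing the bundle

def is_feasible(assignment):
    """assignment maps each task to the Bundle chosen for it (or None).
    One bundle per task holds by construction of the dict; we check that a
    bundle serves at most one task and an asset appears in at most one bundle."""
    bundles = [b for b in assignment.values() if b is not None]
    if len(bundles) != len({b.name for b in bundles}):
        return False
    used_assets = [a for b in bundles for a in b.assets]
    return len(used_assets) == len(set(used_assets))

# Labels from the example: A1/A3 are UAVs, A2 the IMINT payload, A4-A6 acoustic arrays.
b1 = Bundle("B1", "BT1", frozenset({"A1", "A2"}))
b2 = Bundle("B2", "BT1", frozenset({"A3", "A2"}))
b4 = Bundle("B4", "BT2", frozenset({"A4", "A5"}))

print(is_feasible({"T1": b1, "T2": b4}))   # True: no shared assets
print(is_feasible({"T1": b1, "T2": b2}))   # False: the IMINT payload A2 is reused
```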

Page 16: Knowledge-Driven Agile Sensor-Mission Assignment

Example

• Mission based on an ITA scenario: Peace support operation
  - UK and US bases established to surveil the border
  - Bases rely on a Main Supply Route (MSR)

TASK: TASK 1: Detect vehicle
BUNDLE TYPES: BT1 = {(UAV, DaylightTV)}; BT2 = {(AcousticArray), (AcousticArray)}
SENSORS: Predator (UAV), Reaper (UAV), DaylightTV, Acoustic array 1, Acoustic array 2
Candidate bundles: B1, B2, B3

Knowledge-Driven Agile Sensor-Mission Assignment: A. Preece, D. Pizzocaro, K. Borowiecki (Cardiff University, UK); G. de Mel, W. Vasconcelos (University of Aberdeen, UK); A. Bar-Noy, M. P. Johnson (City University of New York, USA); T. La Porta, H. Rowaihy (Pennsylvania State University, USA)

Knowledge representation and reasoning techniques can support sensor-mission assignment, proceeding from a high-level specification of information requirements to the allocation of assets such as sensors and platforms. In our previous work, we showed how assets can be matched to mission tasks by formalising the military Missions and Means Framework in terms of an ontology, and using this ontology to drive a matchmaking procedure. The work reported here extends the earlier approach in two important ways: (1) by providing a richer and more realistic way for a user to specify their information requirements, and (2) by using the results of the semantic matchmaking process to define the search space for efficient asset allocation algorithms.

The Task-Bundle-Asset model defines the search space relating what bundles of assets are required for different types of task. Higher-level task representations allow the knowledge base to offer the widest range of ISTAR solution types. For example: Task: detect vehicles; Bundle 1: UAV + IMINT; Bundle 2: acoustic motes.

Hybrid ontology + rules-based reasoning: a rule-based representation of the (extended) NIIRS scale allows tasks of the form detect/identify/distinguish <set of detectables>, and proves the extensibility of our original ontology-based approach. How it works: NIIRS-based rules infer the basic capabilities (IMINT, ACINT, etc.) needed for high-level tasks. We then apply matchmaking using the Missions & Means Framework as before, to give us the bundle types.

Each bundle type is an intensional definition of a set of bundles of assets that can satisfy a task (to some extent). We generate candidate bundles, and find an allocation using the CASS combinatorial auction algorithm.

The assignment problem can be formulated as an auction in which the bidders are tasks T1, ..., Tm and the items are assets A1, ..., An. Each task is associated with a utility demand dj, indicating the amount of sensing resources needed, and a profit pj, indicating the importance of the task. Each task places a bid, equal to its own profit, for any set of assets which satisfies the task's demand and respects the task's budget constraint. We compute the value that bidder Tj obtains if it receives the bundle B of assets using a value function in which uj is the utility of B to Tj, wj is the cost of B to Tj, and T is a satisfaction threshold.

We are now able to demonstrate full integration of hybrid reasoning with optimization problem-solving in our prototype SAM tool!
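
The value formula itself appeared only as an image on the original slide, so the following is purely an assumed reading of the surrounding description (a task bids its profit for any bundle whose utility covers its threshold-scaled demand within a cost budget); the function name, the budget parameter, and the exact condition are hypothetical, not the paper's formula:

```python
# Assumed sketch of the bid value for bidder (task) Tj on a bundle B of assets.
def bid_value(p_j, d_j, u_j, w_j, budget_j, T=1.0):
    """p_j: profit (importance); d_j: utility demand; u_j: utility of B to Tj;
    w_j: cost of B to Tj; budget_j: cost budget (hypothetical); T: satisfaction threshold."""
    if u_j >= T * d_j and w_j <= budget_j:
        return p_j          # the task bids its own profit for a satisfying bundle
    return 0.0

print(bid_value(p_j=10.0, d_j=4.0, u_j=4.5, w_j=2.0, budget_j=3.0))  # 10.0
print(bid_value(p_j=10.0, d_j=4.0, u_j=3.0, w_j=2.0, budget_j=3.0))  # 0.0
```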


Page 20: Knowledge-Driven Agile Sensor-Mission Assignment

Previous Work

Ontology-based matching: matches types of assets to types of tasks, based on the sensing capabilities provided and required.

[Figure: a TASK requiring AerialImageryIntelligence matched to suitable SENSING ASSET TYPES]


Page 22: Knowledge-Driven Agile Sensor-Mission Assignment

Methodology: Ontology-based matching

• Semantic matchmaking to evaluate the fitness-for-purpose of collections of asset types against the information requirements of a task.
• We use ontologies to:
  - specify the information requirements of a task;
  - specify the sensing capabilities provided by assets;
  - compare the two.
• Ontologies used: ISR ontologies (tasks, sensors, etc.) and the MMF ontology.


Page 27: Knowledge-Driven Agile Sensor-Mission Assignment

MMF ontology

• Based on the pre-existing Missions and Means Framework (MMF), developed by US ARL:
  - MMF is an informal framework to help human planners determine the capabilities required to accomplish a military mission.
  - In the military domain there are very precise definitions of the capabilities required: ISR requirements (Intelligence, Surveillance, Reconnaissance).

Page 28: Knowledge-Driven Agile Sensor-Mission Assignment

MMF ontology (contd)

• Main concepts and relations in the MMF ontology (implemented in OWL DL):

[Figure: concepts Mission, Operation, Task, Capability, Asset, Platform, System, Sensor (Platform, System and Sensor are each is-a Asset); relations comprises, toAccomplish, toPerform, mounts, attachedTo, requires, provides, allocatedTo, interferesWith, entails]
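
As a minimal sketch (not the authors' OWL DL ontology; the namespace URI is hypothetical) of how the concepts and relations listed above could be laid down programmatically, here using the rdflib Python library:

```python
# Sketch only: declaring the MMF concepts and relation names with rdflib.
from rdflib import Graph, Namespace, RDF, RDFS, OWL

MMF = Namespace("http://example.org/mmf#")   # hypothetical namespace
g = Graph()
g.bind("mmf", MMF)

# Main concepts; Platform, System and Sensor are kinds of Asset (the is-a links).
for cls in ("Mission", "Operation", "Task", "Capability",
            "Asset", "Platform", "System", "Sensor"):
    g.add((MMF[cls], RDF.type, OWL.Class))
for sub in ("Platform", "System", "Sensor"):
    g.add((MMF[sub], RDFS.subClassOf, MMF.Asset))

# Relations named in the diagram, declared here simply as object properties.
for rel in ("comprises", "toAccomplish", "toPerform", "mounts", "attachedTo",
            "requires", "provides", "allocatedTo", "interferesWith", "entails"):
    g.add((MMF[rel], RDF.type, OWL.ObjectProperty))

print(g.serialize(format="turtle"))
```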

Page 29: Knowledge-Driven Agile Sensor-Mission Assignment

Ontology-based matching

• Given a task T with a set of ISR requirements RT = {R1, R2, ...}
• The product of matching is a Bundle Type (BT): a description of bundles of assets that can satisfy the requirements of the task.
• Steps to build a BT (a code sketch follows this list):
  1. Generate a Platform Configuration PC = (P, S), where P is a single platform type and S = {S1, ..., Sm} is a set of sensor types mountable on P (at the same time).
  2. Generate a Package Configuration PK = {PC1, PC2, ...}; the aggregate capabilities of PK have to semantically match RT.
  3. Create the BT by adding cardinality constraints to each PC in the PK, e.g. "2 UAVs with a DaylightTV each" → BT1 = {(UAV, DaylightTV), (UAV, DaylightTV)}
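
A compact sketch of the three steps above (our own simplification in Python, not the SAM matchmaker; the platform-configuration catalogue, capability labels and cardinalities are illustrative assumptions):

```python
from itertools import combinations

# Step 1 (pre-computed here): platform configurations PC = (platform type, sensor
# types mountable at the same time) and the capabilities each configuration provides.
PLATFORM_CONFIGS = {
    ("UAV", ("DaylightTV",)): {"IMINT"},
    ("AcousticArray", ()):    {"ACINT"},
}

def provides(pcs):
    """Aggregate capabilities of a collection of platform configurations."""
    caps = set()
    for pc in pcs:
        caps |= PLATFORM_CONFIGS[pc]
    return caps

def package_configs(rt, max_size=2):
    """Step 2: package configurations PK whose aggregate capabilities cover the
    task requirements RT, kept minimal (no member can be dropped)."""
    for size in range(1, max_size + 1):
        for pk in combinations(PLATFORM_CONFIGS, size):
            if rt <= provides(pk) and all(
                    not rt <= provides([p for p in pk if p != drop]) for drop in pk):
                yield pk

def bundle_type(pk, cardinality):
    """Step 3: attach a cardinality constraint to each PC in the package."""
    return [pc for pc in pk for _ in range(cardinality.get(pc, 1))]

RT = {"ACINT"}                                   # requirements of the task
for pk in package_configs(RT):
    # e.g. at least two acoustic arrays are needed to cover the area of interest
    print(bundle_type(pk, {("AcousticArray", ()): 2}))
# -> [('AcousticArray', ()), ('AcousticArray', ())]
```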


Page 33: Knowledge-Driven Agile Sensor-Mission Assignment

Limitation

• This approach is conceptually simple, extensible and well founded (MMF).

• Limitation: task requirements have to be specified at a low level
  → over-reliant on the ontology of INT types.
  E.g. RT = {ACINT, IMINT} trivially determines the need for acoustic and imagery sensing.

• Need for a higher-level representation of ISR tasks.

Fig. 3. Sample concept taxonomies relevant to the ISR domain: ISR tasks and INT types

... their properties (e.g. [14], [15], [16]). There are also several well-known structured descriptions of tasks in the military missions context, most notably the Universal Joint Task List (UJTL) [17]. Based largely on these pre-existing sensor and task ontologies, we have identified a collection of concept hierarchies relevant to the ISR domain. Portions of two of these (defining ISR tasks and various types of intelligence information — "INT" types) are shown in Figure 3.

In these terms, we define a bundle type to be an intensional definition of a set of bundles of assets that can satisfy a task. The essential part of a bundle type is the specification of the required sensor and platform types needed to provide the capabilities to satisfy the task. For this, we define a platform configuration PC = ⟨P, S⟩, where P is a type of platform, and S = {S1, ..., Sm} is a set of sensor types that can be mounted on P simultaneously. Given a task T with a set of required ISR capabilities CT = {C1, ..., Cn}, a single platform configuration is a valid match for T if the combined capabilities of P and S satisfy CT. Formally, V(T) = {PC1, ..., PCn} is the set of valid matches for task T iff ⟨P, S⟩ ∈ V(T) ⟺ (∀Ci ∈ CT)((P ⊑ ∃provides.Ci) ∨ (∃Si ∈ S : Si ⊑ ∃provides.Ci)).

More commonly, the requirements of a task will not be satisfied by a single platform configuration. We define a package configuration PK = {PC1, ..., PCn}, where each PCi is a platform configuration. PK is a valid match for T if collectively the platforms and sensors in {PC1, ..., PCn} satisfy the capabilities in CT, and PK is minimal with respect to CT (there does not exist any subset of PK that is a valid match for T).

Bundle types are created by post-processing the package configurations to add cardinality constraints. This is currently done using pre-defined configuration knowledge, for example:

• at least 1 UAV with at least 1 Camera
• at least 2 AcousticArrays with exactly 2 ACINTSensors

While this approach was conceptually simple, extensible (in terms of the modularity of the domain-specific ontologies), and well-founded (on MMF), it was rather limited in that it required task capabilities to be specified at a low level in order to drive the matching procedure. Specifically, the approach was over-reliant on the ontology of INT types shown in Figure 3 to determine the type of sensor required; for example, a choice of ACINT or IMINT in CT would determine — rather trivially — the need for acoustic or imagery sensing. We sought a higher-level representation of ISR tasks, as described in the next section.

IV. SPECIFYING TASKS

Our main requirements for a higher-level representation of tasks were:

• That the task should as much as possible specify only what the information requirement is, and avoid saying anything about how it should be obtained. The reasons for this requirement were to avoid the main flaw in our earlier approach, and to allow a greater degree of flexibility in allocating assets to tasks.

• That the representation of tasks should be familiar to potential users of the approach, not only to make any tool based on the approach easier to use, but also to maximise the acceptability of our approach (the same reason that we adopted MMF as the underlying framework for the task-asset matching).

Following guidance from subject-matter experts, we based our representation on the National Imagery Interpretability Rating Scale (NIIRS) for various kinds of imagery intelligence [18]. NIIRS is a well-established way to determine the kinds of data (for example, visible imaging, or radar) that are interpretable to answer particular information requirements (detecting, identifying and distinguishing various kinds of thing). The NIIRS scale defines ratings on a ten-point scale (0–9) for various kinds of imagery data; for example, detection of vehicles of particular types is achievable by Visible NIIRS 4 and Radar NIIRS 6. This is useful because sensor/platform combinations can then be rated in terms of their NIIRS values, allowing analysts to seek data that are suitable for their tasks. From our point of view, NIIRS supports the kind of task-asset matching we established, as described in the previous section.

We define a task as a quadruple ⟨NC, DS, A, T⟩, where:

• NC is one of three basic "capabilities" defined in NIIRS: detect (find or discover the presence or existence of an entity), distinguish-between (determine that two detected entities are of different types), and identify (name an object by type);

• DS is a set of "detectable" types of entity (for example, people, vehicles, or installations at various levels of specificity);

• A defines an area of interest;

• T defines a period of time.

It is worth noting that, in accordance with our earlier work, we continue to refer to this representation as a "task" representation. In NIIRS, and in ISR more generally, these kinds of description are referred to as "capabilities".

Page 34: Knowledge-Driven Agile Sensor-Mission Assignment

Extension 1: Task Representation

• Specify only WHAT the information requirement is, and avoid saying HOW it should be obtained.

  T = {ACINT, IMINT}  →  T = {"Detect Vehicle"}

Page 35: Knowledge-Driven Agile Sensor-Mission Assignment

NIIRS

• Approach based on the National Imagery Interpretability Rating Scale (NIIRS):
  - determines the kinds of data that are interpretable to answer an information requirement, e.g. Visible, Radar, etc.
  - defines ratings on a ten-point scale (0–9)
  E.g. T = {"Detect Vehicles"} ↔ Visible NIIRS 4
• Platform Configurations can be rated with NIIRS values:
  "UAV with DaylightTV camera" ↔ Visible NIIRS 4
• Therefore "UAV with DaylightTV camera" matches "Detect Vehicles"
  → NIIRS can support the task-asset matching (see the sketch below)
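
A small illustrative sketch of this rating-based match. The Visible-4 and Radar-6 figures for vehicle detection are quoted in the text; the SAR entry, its rating, and the >= comparison (a higher rating of the same imaging type covers a lower requirement) are our own assumptions:

```python
# Illustrative only: matching platform configurations to tasks via NIIRS ratings.
REQUIRED = {"Detect Vehicles": [("Visible", 4), ("Radar", 6)]}   # alternatives
PC_RATINGS = {"UAV + DaylightTV": ("Visible", 4),
              "UAV + SAR":        ("Radar", 5)}                  # hypothetical rating

def satisfies(pc, task):
    """True if the platform configuration offers one of the task's acceptable
    imaging types at an equal or higher NIIRS rating."""
    pc_type, pc_rating = PC_RATINGS[pc]
    return any(pc_type == t and pc_rating >= r for t, r in REQUIRED[task])

print(satisfies("UAV + DaylightTV", "Detect Vehicles"))  # True  (Visible 4 >= 4)
print(satisfies("UAV + SAR", "Detect Vehicles"))         # False (Radar 5 < 6)
```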


Page 37: Knowledge-Driven Agile Sensor-Mission Assignment

Hybrid reasoning

• To integrate NIIRS into sensor-task matching, we formalized it as a set of rules which allow for task specifications of the form:
  Detect/Identify/Distinguish <set of detectables>
• We adopt a HYBRID REASONING approach: ontology-based & rule-based reasoning.
• Reasoning steps:
  1) Specify a high-level task, e.g. T = {"Detect Vehicles"}
  2) The NIIRS rule-based system infers the basic capabilities needed.
  3) Ontology-based matchmaking returns the bundle types.



Page 40: Knowledge-Driven Agile Sensor-Mission Assignment


In a sense, as our approach makes clear, they can be regarded as high-level capabilities, from which lower-level capabilities can be inferred. However, we prefer to locate these representations in our ontology as specialisations of the Task class rather than the Capability class, to avoid confusion. Note also that, in the current implementation, we omit T as we assume that all tasks are required to happen in the same mission period.

Fig. 4. A portion of our ontology of "detectable" types of entity from NIIRS: (a) Main Concepts of Detectables; (b) Site Configurations; (c) Lines of Transportation

Full details of the representation of the NIIRS scale, and our rules for performing reasoning with it, are given in [9]. The various "detectable" types of entity are defined in an ontology, drawn from the entities appearing in the military version of the NIIRS documentation. A portion of this ontology is shown in Figure 4. Using this ontology, we formalised the various NIIRS interpretation tasks as a set of clauses of the following form: ⟨NC, DS, FS, C, NT, NR⟩, where:

• NC is a NIIRS "capability" as above (one of: detect, distinguish-between, identify);

• DS is a set of detectables, as above;

• FS is a set of more specific features of the detectable entities (for example, the roads or guard posts of a base, the runways of an airport, or the piers and warehouses of a port) — these features are also defined in the ontology of "detectables";

• C is a context, defining the preconditions that must hold for the clause to apply (for example, detection of a ship in the context of open water, or identification of vehicles in a known motor pool);

• NT is the type of imaging from the NIIRS scale (for example, Visible or Infrared); and

• NR is a NIIRS rating (for example, Visible-6 or Infrared-4).

Some examples:

• ⟨detect, {Port}, {}, {}, Visible, Visible-1⟩: a port can be detected using visible imagery of rating 1;

• ⟨detect, {Port}, {Pier, Warehouse}, {}, Radar, Radar-1⟩: a port can be detected based on the presence of piers and warehouses using radar imagery of rating 1;

• ⟨distinguish-between, {Taxiway, Runway}, {}, {Airfield}, Visible, Visible-1⟩: taxiways and runways can be distinguished at an airfield using visible imagery of rating 1;

• ⟨detect, {Vehicle}, {}, {Motor-Pool}, Radar, Radar-4⟩: vehicles can be detected in a known motor pool using radar imagery of rating 4.

The basic reasoning procedure for using these clauses is as follows (there are additional features described in [9]):

1) Given a task T = ⟨NC, DS, A, T⟩, find all clauses ⟨NC', DS', FS, C, NT, NR⟩ where NC = NC' and DS ⊆ DS' or DS ⊆ FS;

2) For each clause where C is not empty, check that all the conditions in C hold, normally by referencing mapping information for area A, and if they do not hold, discard the clause;

3) Gather all of the NT and NR elements from the remaining clauses to form the set of required capabilities, CT;

4) Because higher ratings of a given NIIRS type entail lower ones, reduce CT so that it contains only the highest rating of each NIIRS type.

We then use our previous approach to matching, described in Section III, with two important modifications:

• We have added the relevant NIIRS capabilities to the sensor and platform types. Because of the way NIIRS is defined, we associate the NIIRS imaging types NT with sensors (for example, Visible or Radar) and the NIIRS ratings NR with platforms (for example, Visible-1 or Radar-4). Thus, an inferred platform configuration must satisfy both the required imaging type and the rating.

• Because NIIRS gives us alternative ways to satisfy a task, each different NR and its corresponding NT is matched disjunctively; so, for example, if CT contains Visible-1 and Radar-4, we generate separate sets of valid matches for each, and then aggregate them into a complete set of valid matches.

An important limitation of this approach is that the published NIIRS scale is restricted to imagery intelligence. However, in principle, there is no reason why it cannot be extended to cover other types of intelligence as well. For our purposes, we have extended our knowledge base where we know of non-imagery approaches that have been shown, at least in controlled trials, to have particular capabilities. For example, following [19] we introduced clauses that indicate acoustic data can be used to detect vehicles (note that we leave the rating essentially unspecified by using zero): ⟨detect, {Vehicle}, {}, {}, Acoustic, Acoustic-0⟩.

In this section, we have shown how we extended our originalmatching approach to support higher-level and more realistictask representations. In the next section, we address the issueof asset allocation based on the output from the matchingprocess.

Task specification

Detect/Identify/Distinguish <set of detectables>

T = {“Detect Vehicles”}

RT = {ACINT-level0, IMINT-level4}

Page 41: Knowledge-Driven Agile Sensor-Mission Assignment

http://users.cs.cf.ac.uk/D.Pizzocaro [email protected]

Hybrid reasoning

• To integrate NIIRS in sensor-task matching:

- We formalised it as a set of rules which allow for tasks of the form detect/identify/distinguish <set of detectables>

We adopt a HYBRID REASONING approach: ontology & rule-based reasoning

- Reasoning steps:

1) Specify high-level task:

2) NIIRS rule-based system infers basic capabilities:

3) Ontology-based matchmaking returns the bundle types:

as our approach makes clear, they can be regarded as high-level capabilities, from which lower-level capabilities can be inferred. However, we prefer to locate these representations in our ontology as specialisations of the Task class rather than the Capability class, to avoid confusion. Note also that, in the current implementation, we omit T as we assume that all tasks are required to happen in the same mission period.
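As a small illustration of that modelling choice, the sketch below records NIIRS-style interpretation tasks as specialisations of Task rather than Capability. It is only a sketch using rdflib with a made-up namespace; the actual ontology is OWL, reasoned over with Pellet, and is not built this way.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDFS

ITA = Namespace("http://example.org/ita#")  # hypothetical namespace
g = Graph()

# NIIRS-style interpretation tasks are modelled as specialisations of Task,
# not of Capability, keeping the two notions separate in the ontology.
g.add((ITA.DetectVehicles, RDFS.subClassOf, ITA.Task))
g.add((ITA.IdentifyPeople, RDFS.subClassOf, ITA.Task))
```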

!"#$%&'($)*+,-./0'&*+1

!0#$20&+$)*+3(4'1$*5$6('(3'0"7(1 !3#$8&+(1$*5$9/0+14*/'0'&*+

Fig. 4. A portion of our ontology of “detectable” types of entity from NIIRS

Full details of the representation of the NIIRS scale, and our rules for performing reasoning with it, are given in [9]. The various “detectable” types of entity are defined in an ontology, drawn from the entities appearing in the military version of the NIIRS documentation. A portion of this ontology is shown in Figure 4. Using this ontology, we formalised the various NIIRS interpretation tasks as a set of clauses of the following form: <NC, DS, FS, C, NT, NR>, where:

• NC is a NIIRS “capability” as above (one of: detect, distinguish-between, identify);

• DS is a set of detectables, as above;

• FS is a set of more specific features of the detectable entities (for example, the roads or guard posts of a base, the runways of an airport, or the piers and warehouses of a port); these features are also defined in the ontology of “detectables”;

• C is a context, defining the preconditions that must hold for the clause to apply (for example, detection of a ship in the context of open water, or identification of vehicles in a known motor pool);

• NT is the type of imaging from the NIIRS scale (for example, Visible or Infrared); and

• NR is a NIIRS rating (for example, Visible-6 or Infrared-4).

Some examples:

• <detect, {Port}, {}, {}, Visible, Visible-1>: a port can be detected using visible imagery of rating 1;

• <detect, {Port}, {Pier, Warehouse}, {}, Radar, Radar-1>: a port can be detected based on the presence of piers and warehouses using radar imagery of rating 1;

• <distinguish-between, {Taxiway, Runway}, {}, {Airfield}, Visible, Visible-1>: taxiways and runways can be distinguished at an airfield using visible imagery of rating 1;

• <detect, {Vehicle}, {}, {Motor-Pool}, Radar, Radar-4>: vehicles can be detected in a known motor pool using radar imagery of rating 4.
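For concreteness, such clauses could be encoded as simple records, as in the minimal sketch below; the NIIRSClause class and its field names are illustrative assumptions, not the representation used in the actual rule-based system (which is implemented in Java).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NIIRSClause:
    """One interpretation clause <NC, DS, FS, C, NT, NR> (hypothetical encoding)."""
    nc: str                      # NIIRS capability: detect / distinguish-between / identify
    ds: frozenset                # detectables
    fs: frozenset = frozenset()  # more specific features of the detectables
    c: frozenset = frozenset()   # context preconditions
    nt: str = ""                 # NIIRS imaging type, e.g. "Visible"
    nr: str = ""                 # NIIRS rating, e.g. "Visible-1"

KB = [
    NIIRSClause("detect", frozenset({"Port"}), nt="Visible", nr="Visible-1"),
    NIIRSClause("detect", frozenset({"Port"}), frozenset({"Pier", "Warehouse"}),
                nt="Radar", nr="Radar-1"),
    NIIRSClause("distinguish-between", frozenset({"Taxiway", "Runway"}),
                c=frozenset({"Airfield"}), nt="Visible", nr="Visible-1"),
    NIIRSClause("detect", frozenset({"Vehicle"}), c=frozenset({"Motor-Pool"}),
                nt="Radar", nr="Radar-4"),
]
```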

The basic reasoning procedure for using these clauses is as follows (there are additional features described in [9]):

1) Given a task T = <NC, DS, A, T>, find all clauses <NC', DS', FS, C, NT, NR> where NC = NC' and DS ⊆ DS' or DS ⊆ FS;

2) For each clause where C is not empty, check that all the conditions in C hold, normally by referencing mapping information for area A, and if they do not hold, discard the clause;

3) Gather all of the NT and NR elements from the remaining clauses to form the set of required capabilities, CT;

4) Because higher ratings of a given NIIRS type entail lower ones, we reduce CT so that it contains only the highest rating of each NIIRS type.
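A direct transcription of the four steps might look like the following sketch, reusing the hypothetical NIIRSClause encoding above; context_holds is a stand-in for the map-based check of step 2.

```python
def required_capabilities(nc, ds, area, kb, context_holds):
    """Steps 1-4: from a task <NC, DS, A, T> to the reduced capability set CT."""
    ds = frozenset(ds)
    # Step 1: clauses whose capability matches and whose detectables or features cover DS.
    candidates = [cl for cl in kb
                  if cl.nc == nc and (ds <= cl.ds or ds <= cl.fs)]
    # Step 2: discard clauses whose context preconditions do not hold in the area.
    candidates = [cl for cl in candidates
                  if not cl.c or all(context_holds(cond, area) for cond in cl.c)]
    # Step 3: gather the required imaging types and ratings.
    ct = {(cl.nt, cl.nr) for cl in candidates}
    # Step 4: keep only the highest rating per imaging type
    # (a higher rating of a given type entails the lower ones).
    best = {}
    for nt, nr in ct:
        level = int(nr.rsplit("-", 1)[1])
        if nt not in best or level > best[nt][1]:
            best[nt] = (nr, level)
    return {(nt, nr) for nt, (nr, _) in best.items()}
```

Under the sample knowledge base above, required_capabilities("detect", {"Vehicle"}, "AO-1", KB, lambda cond, area: True) returns {("Radar", "Radar-4")}.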

We then use our previous approach to matching, described in Section III, with two important modifications:

• We have added the relevant NIIRS capabilities to the sensor and platform types. Because of the way NIIRS is defined, we associate the NIIRS imaging types NT with sensors (for example, Visible or Radar) and the NIIRS ratings NR with platforms (for example, Visible-1 or Radar-4). Thus, an inferred platform configuration must satisfy both the required imaging type and the rating.

• Because NIIRS gives us alternative ways to satisfy a task, each different NR and its corresponding NT is matched disjunctively; so, for example, if CT contains Visible-1 and Radar-4, we generate separate sets of valid matches for each, and then aggregate them into a complete set of valid matches.
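Since each (NT, NR) pair in CT is an independent way of satisfying the task, the aggregation is just a union of per-capability match sets; a brief sketch, with find_matches standing in for the ontology-based matcher of Section III:

```python
def all_valid_matches(ct, find_matches):
    """Match each required capability disjunctively and aggregate the results."""
    matches = set()
    for nt, nr in ct:
        # Each (imaging type, rating) pair is matched independently ...
        matches |= set(find_matches(nt, nr))
    # ... and the union forms the complete set of valid matches.
    return matches
```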

An important limitation of this approach is that the published NIIRS scale is restricted to imagery intelligence. However, in principle, there is no reason why it cannot be extended to cover other types of intelligence also. For our purposes, we have extended our knowledge base where we know of non-imagery approaches that have been shown, at least in controlled trials, to have particular capabilities. For example, following [19] we introduced clauses that indicate acoustic data can be used to detect vehicles (note that we leave the rating essentially unspecified by using zero): <detect, {Vehicle}, {}, {}, Acoustic, Acoustic-0>.

In this section, we have shown how we extended our original matching approach to support higher-level and more realistic task representations. In the next section, we address the issue of asset allocation based on the output from the matching process.

Task specification

Detect/Identify/Distinguish <set of detectables>

T = {“Detect Vehicles”}

RT = {ACINT-level0, IMINT-level4}

BT1 = {UAV, DaylightTV} BT2 = {AcousticArray, AcousticArray}

Page 42: Knowledge-Driven Agile Sensor-Mission Assignment

http://users.cs.cf.ac.uk/D.Pizzocaro [email protected]

Advantages & Limitations

• Higher-level task representations allow a wider range of assets to satisfy a task.

• Limitation: NIIRS scale is restricted to imagery intelligence (IMINT).

- How did we include acoustic intelligence?

• There is no reason why NIIRS cannot be extended to other types

• We introduced a number of rules for ACINT based on literature

• Still the need for NIIRS to be “officially” extended!

TASK 1: Detect vehicle

!"#$%&'(&)*+,-&"./(,%&.0&"1#+)2,11,#"./11,("3&"4./5.6+&&7&89.*5.6,::#7;+#89.!5.<#+#$,&7=,89.>5.'&.2&%[email protected];17#"7&%#1?.9..

/5! <;+)B#CD.9.25.65.E#F"1#"D.9.G5.H;.6#+4;I.9.J5.K#$;,FCI..

8L;+',M.N",-&+1,4C9.N!O.7#"4;74.&3;,%P.;5'5Q+&&7&R7157;+',M5;75S=O.?N",-&+1,4C.#T./U&+'&&"9.N!..

DL,4C.N",-&+1,4C.#T.B&$.V#+=9.N0/O.I6&""1C%-;",;.04;4&.N",-&+1,4C9.N0/.

Knowledge representation and reasoning techniques can support sensor-mission assignment, proceeding from a high-level specification of information requirements to the allocation of assets such as sensors and platforms. In our previous work, we showed how assets can be matched to mission tasks by formalising the military Missions and Means Framework in terms of an ontology, and using this ontology to drive a matchmaking procedure. The work reported here extends the earlier approach in two important ways: (1) by providing a richer and more realistic way for a user to specify their information requirements, and (2) by using the results of the semantic matchmaking process to define the search space for efficient asset allocation algorithms.

The Task-Bundle-Asset model defines the search space relating what bundles of assets are required for different types of task.

Higher-level task representations allow the knowledge base to offer the widest range of ISTAR solution types.

[Figure: Task: detect vehicles; Bundle 1: UAV + IMINT; Bundle 2: acoustic motes]

Tasks denote the entities that are competing for available assets. In our approach, the only important feature of a task is its information requirements.

Bundles are collections of individual assets (sensors and platforms). An arc between a bundle and a task indicates that the bundle is able to meet (at least to some extent) that information requirement (task). Each bundle has a unique bundle type.

Assets represent the individual sensor and platform resources. An arc between an asset and a bundle means that that asset can be assigned to that bundle.

A solution to the sensor-mission assignment problem is an assignment of bundles to tasks, subject to the constraints: each task can have at most one bundle assigned to it, a bundle may be assigned to at most one task, and each asset may be assigned to at most one bundle.

Note that the thin arcs in the example show possible assignments, while the bold arcs show one actual assignment.
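The three constraints on a solution are easy to state in code. A minimal sketch, assuming an assignment is represented as a mapping from task ids to the bundle (a frozenset of asset ids) that each task receives:

```python
def is_valid_assignment(assignment):
    """assignment: dict mapping task id -> frozenset of asset ids (its bundle)."""
    used_assets = set()
    seen_bundles = set()
    for task, bundle in assignment.items():
        # Each task has at most one bundle by construction of the dict.
        if bundle in seen_bundles:      # a bundle may be assigned to at most one task
            return False
        seen_bundles.add(bundle)
        if used_assets & bundle:        # each asset may be assigned to at most one bundle
            return False
        used_assets |= bundle
    return True
```

For example, is_valid_assignment({"T1": frozenset({"S1", "S2"}), "T2": frozenset({"S3"})}) is True, while reusing "S1" in both bundles would make it False.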

Hybrid ontology+rules-based reasoning: a rule-based representation of the (extended) NIIRS scale allows tasks of the form detect/identify/distinguish <set of detectables>. This proves the extensibility of our original ontology-based approach.

How it works: NIIRS-based rules infer the basic capabilities (IMINT, ACINT, etc.) needed for high-level tasks. We then apply matchmaking using the Missions & Means Framework as before, to give us bundle types.

Each bundle type is an intensional definition of a set of bundles of assets that can satisfy a task (to some extent). We generate candidate bundles, and find an allocation using the CASS combinatorial auction algorithm.

The assignment problem can be formulated as an auction in which the bidders are tasks T1,...,Tm and the items are assets A1,...,An. Each task is associated with a utility demand dj, indicating the amount of sensing resources needed, and a profit pj, indicating the importance of the task.

Each task places a bid, equal to its own profit, for any set of assets which satisfies the task's demand and respects the task's budget constraint. We compute the value that bidder Tj obtains if it receives the bundle B of assets using the value function shown on the poster, where uj is the utility of B to Tj, wj is the cost of B to Tj, and T is the satisfaction threshold.

We are now able to demonstrate full integration of hybrid reasoning with optimization problem-solving in our prototype SAM tool!
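The value function itself appears only as an image on the poster. The sketch below is one plausible reading of the surrounding text (bid the full profit when the bundle's utility reaches the satisfaction threshold of the demand and its cost fits the budget, otherwise bid nothing); the thresholding rule and parameter names are assumptions, not the authors' published formula.

```python
def bid_value(p_j, d_j, budget_j, u_j, w_j, threshold):
    """Hypothetical value of bundle B to bidder (task) T_j."""
    # p_j: profit, d_j: utility demand, budget_j: budget of the task,
    # u_j: utility of B to T_j, w_j: cost of B to T_j, threshold: satisfaction threshold.
    satisfied = u_j >= threshold * d_j   # assumed reading of the satisfaction threshold
    affordable = w_j <= budget_j         # respect the task's budget constraint
    return p_j if (satisfied and affordable) else 0.0
```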

[Figure labels: “IMINT” Bundle; “ACINT” Bundle]


Page 45: Knowledge-Driven Agile Sensor-Mission Assignment

Asset Allocation: assign bundle instances to competing tasks, based on recommended bundle types.

http://users.cs.cf.ac.uk/D.Pizzocaro [email protected]

Extension 2

[Figure: competing tasks (Task 1 to Task 4) to be assigned bundles of sensing assets]



Page 49: Knowledge-Driven Agile Sensor-Mission Assignment

http://users.cs.cf.ac.uk/D.Pizzocaro [email protected]

Allocation model

• A set of tasks competing for the exclusive usage of sensing assets.

- For each task: d = utility demand, w = budget, p = priority

• A set of sensors to be grouped into bundles based on bundle types.

- For each bundle-task pair: e = joint utility to the task, c = joint cost to the task

• Goal: an allocation of sensor bundles that maximizes the total profit.

• The profit is a function of the utility accumulated by the task.

[Figure: sensors S1 to S4 (sensing assets) are grouped into bundles B1 and B2; arcs connect bundles to tasks T1 (p1, d1, w1) and T2 (p2, d2, w2), each labelled with its joint utility and cost (e11, c11 and e12, c12).]
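A compact way to hold this model in code, using the slide's symbols; the numbers below are invented purely to mirror the two-task, two-bundle figure and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Task:
    p: float  # priority (profit)
    d: float  # utility demand
    w: float  # budget

@dataclass
class Offer:
    e: float  # joint utility of a bundle to a task
    c: float  # joint cost of a bundle to a task

# Two tasks competing for exclusive use of four sensors grouped into two bundles.
tasks = {"T1": Task(p=10, d=5, w=3), "T2": Task(p=4, d=2, w=1)}
bundles = {"B1": frozenset({"S1", "S2"}), "B2": frozenset({"S3", "S4"})}
offers = {("B1", "T1"): Offer(e=6, c=2),   # e11, c11
          ("B1", "T2"): Offer(e=2, c=1)}   # e12, c12

# Goal: assign bundles to tasks (each bundle and each sensor used at most once)
# so that the total profit earned from the tasks' accumulated utility is maximised.
```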


Page 51: Knowledge-Driven Agile Sensor-Mission Assignment

http://users.cs.cf.ac.uk/D.Pizzocaro [email protected]

Combinatorial auction

• This can be seen as a combinatorial auction

- bidders: tasks

- items: sensors

- tasks bid for bundles of sensors

• Many algorithms have been proposed:

we use CASS (Combinatorial Auction Structured Search)

• Bundle generation issue:

- In the worst case we might have 2^N bundles (with N = number of sensors)

- The REASONING step improves bundle generation: generate only bundles matching the recommended bundle types, which reduces the number of generated bundles and prunes the set of bids placed by tasks (see the sketch below)
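A sketch of the pruning idea: rather than enumerating all 2^N subsets of sensors, generate only bundles that instantiate a recommended bundle type. Here a bundle type is treated, purely as an illustrative assumption, as a multiset of required asset types (for example {UAV, DaylightTV} or {AcousticArray, AcousticArray}); the asset names are made up.

```python
from collections import Counter
from itertools import combinations, product

def bundles_of_type(bundle_type, assets_by_type):
    """Yield only the bundles (frozensets of asset ids) that match the bundle type.

    bundle_type: Counter mapping asset type -> required count.
    assets_by_type: dict mapping asset type -> list of available asset ids.
    """
    per_type = []
    for asset_type, count in bundle_type.items():
        per_type.append(list(combinations(assets_by_type.get(asset_type, []), count)))
    # The Cartesian product over types yields only type-conforming bundles,
    # avoiding the 2^N blow-up of unconstrained bundle generation.
    for combo in product(*per_type):
        yield frozenset(a for group in combo for a in group)

assets = {"UAV": ["uav1", "uav2"], "DaylightTV": ["tv1"],
          "AcousticArray": ["ac1", "ac2", "ac3"]}
imint_bundles = list(bundles_of_type(Counter({"UAV": 1, "DaylightTV": 1}), assets))
# -> two bundles: {uav1, tv1} and {uav2, tv1}
acint_bundles = list(bundles_of_type(Counter({"AcousticArray": 2}), assets))
# -> the three possible pairs of acoustic arrays
```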

Page 52: Knowledge-Driven Agile Sensor-Mission Assignment

Implementation: two prototypes of top-to-bottom applications to solve the Sensor-Mission Assignment problem.

http://users.cs.cf.ac.uk/D.Pizzocaro [email protected]

Page 53: Knowledge-Driven Agile Sensor-Mission Assignment

http://users.cs.cf.ac.uk/D.Pizzocaro [email protected]

SAM - base station

• Level of use:

- Base station

- Operational analyst

Section IV. A screenshot of the new user interface is shown in Figure 5. As before, a user logs in as a member of a coalition (in our example, we have a US/UK coalition). The user is able to select one or more areas-of-interest on a map (left panel) and, for each, to select multiple ISR tasks (top-right panel) using our NIIRS-based representation. (As we noted in Section IV, our approach is currently simplified to assume that all tasks are required at the same time; this could easily be extended.)

Fig. 5. Setting an area-of-interest and selecting tasks using the SAM application

Fig. 6. Selecting an asset bundle manually using the SAM application

SAM is implemented as a Web application in Java. The NIIRS-based reasoning to infer capabilities from the selected tasks is implemented as a rule-based system in Java (for details, see [9]). The matching procedure is implemented using the Pellet reasoner [26]. Before performing reasoning, the SAM application queries inventory catalogues to determine what types of platforms and sensors are actually available, so it can recommend package configurations including the most specific types that are potentially deployable. Once the package configurations have been found, SAM retrieves all asset instances of the appropriate types from the inventory catalogues, and generates all the asset bundles specified by the package configurations, used as bundle types, together with additional cardinality information as noted at the end of Section III. Because the number of assets is small in our demonstration, the set of bundles is currently presented to the user, to make the choice of preferred bundle. This step is shown in Figure 6.

In doing this, SAM takes into account access policies on these resources, including ownership (note that the figure shows UK/US ownership of assets in the bottom-right panel) and whether the user has sufficient privileges to task those assets (shown by the “lock” icon next to the asset types). While simple, this mechanism is intended as an expansion point to allow the incorporation of more sophisticated access policies in future, such as those described in [4]. Once an asset has been selected, SAM allows the user to “subscribe” to available data feeds from that asset, using the ITA Sensor Fabric [27]. Additionally, if an asset fails, SAM allows the user to “backtrack” and make a different choice of bundle, and then obtain data that satisfies their task in some other way. For example, the user might initially choose to satisfy the vehicle detection task (Figure 5) using imagery data obtained by a UAV (Figure 6); then, if the UAV fails for some reason, they could opt instead to solve the task using acoustic data from, for example, an acoustic array.
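The end-to-end flow just described can be summarised as a short sketch; every operation on the hypothetical sam object below is a stand-in for a SAM component (inventory query, NIIRS rule base, Pellet-based matching, bundle generation, coalition access policies, Sensor Fabric subscription), not an actual API.

```python
def recommend_and_subscribe(task, user, area, sam):
    """Sketch of the SAM workflow; `sam` provides the assumed operations named below."""
    available_types = sam.query_inventory(area)                   # inventory catalogues
    capabilities = sam.infer_capabilities(task)                   # NIIRS rule base
    configs = sam.match_packages(capabilities, available_types)   # Pellet-based matching
    assets = sam.retrieve_assets(configs, area)                   # asset instances by type
    bundles = sam.generate_bundles(configs, assets)               # configs act as bundle types
    bundles = [b for b in bundles if sam.permitted(user, b)]      # ownership / privilege policies
    chosen = sam.ask_user_to_choose(user, bundles)                # manual choice (small inventory)
    return sam.subscribe(user, chosen)                            # data feeds via the Sensor Fabric
```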

While the current version of SAM demonstrates the concept of knowledge-driven sensor-mission assignment, it does not yet incorporate automated asset assignment, as it assumes an inventory with a relatively small number of assets. The combinatorial auction mechanism is currently implemented separately. Integrating these two components requires a number of choices to be made, most importantly:

• Users need a way to specify mission/task “profit” in a way that is meaningful to them. Potentially this could be done using a simple indication of priority. In principle, it ought to be part of a more general plan representation formalism. An open question here is whether priority can in some cases be inferred from the context of the task (for example, the type of mission: hostage rescue would normally take priority over environmental monitoring).

• Further research is necessary to determine where the most appropriate “choice points” for user intervention are. It is clearly only feasible for users to select bundle instances when there are very few choices available. In more typical cases, it would be more sensible to allow the user to select (or rank) alternative bundle types (or package configurations), thus giving a (perhaps “soft”) preference as to how they would prefer to obtain their data.

• SAM is intended to be used in a distributed fashion, as explained in [5]. Multiple users in a coalition submit their tasks using instances of SAM, and we need to examine alternative ways to apply the allocation algorithms in this context. We have investigated distributed versus centralised algorithms in the context of homogeneous sensor


• Features:

" High-level task representation

" Subscribe bundles manually



Page 57: Knowledge-Driven Agile Sensor-Mission Assignment

http://users.cs.cf.ac.uk/D.Pizzocaro [email protected]

SAM - mobile

• Level of use:

- Mobile user (in the field)

- Not an expert

- Time constraints

• Features:

" High-level task representation

" Automatic asset allocation

[Screenshot legend: Satisfied / Unsatisfied tasks]

Page 58: Knowledge-Driven Agile Sensor-Mission Assignment

http://users.cs.cf.ac.uk/D.Pizzocaro [email protected]

Conclusion & Future

• Future:

- Assets or bundles of assets might be shared between tasks

- Most appropriate “choice points” for user intervention (human-in-the-loop)

- Information delivery to a mobile user (bandwidth constraints)

• We extended our original ontology-based approach to provide:

- a richer and more realistic specification of task requirements (using NIIRS)

- automatic asset allocation (using combinatorial auctions)

• We also showed these new features in a centralized and a mobile application.

Page 59: Knowledge-Driven Agile Sensor-Mission Assignment

http://users.cs.cf.ac.uk/D.Pizzocaro [email protected]

Thanks for listening!

Questions?