SoS and T&E
NDIA SoS SE Committee
June 9, 2009
Background
• NDIA SoS SE Committee identified SoS and T&E as a topic of interest
• White paper on SoS and T&E was circulated for comment prior to this meeting
• Paper provides a view which is generally understood, including:
  • SoS are more complicated than systems
  • SoS have constituent systems that have independent needs and can exercise free will in their development and operation, at times placing their own needs before those of the SoS
  • SoS are generally asynchronous, which introduces difficulties in end-to-end testing, so alternatives must be sought
  • Emergent properties will emerge and must be accommodated
• Set of 'needs':
  • the need for incremental development; the need to address synchronization; the need to consider block testing; the need to monitor fielded behavior; the need to express limitations in a politically correct manner; and the means to address and incorporate methods to cope with these
• Paper does not address the balance between SoS development and testing
• White paper, including comments, is presented here as the basis for committee discussion
DoD SoS SE Guide
• Focus on technical aspects of SE applicable to SoS
• Characterize SoS in DoD today
• Describe core elements of SoS SE
• Translate application of basic SE processes for SoS SE
System of Systems (SoS)
System of Systems: (ref: Defense Acquisition Guide)
A set or arrangement of systems that results when independent and useful systems are integrated into a larger system that delivers unique capabilities.
SoS Types and Examples
• Directed – DoD Information System Network (DISN), National System for Geospatial Analysis
• Acknowledged – Ballistic Missile Defense System, Air Operations Center
• Collaborative – Communities of interest
• Virtual – Internet
[Figure: DoD SoS SE core elements, situated in the external environment: translating capability objectives; understanding systems & relationships (includes plans); developing & evolving the SoS architecture; assessing performance to capability objectives; addressing requirements & solution options; orchestrating upgrades to SoS; monitoring & assessing changes]
SoS T&E Issues Raised in the White Paper
1) If SoS are not programs of record (and not subject to T&E regulations) why should we worry about this at all?
2) If ‘requirements’ are not clearly specified up front for a SoS, what is the basis for T&E of an SoS?
3) What is the relationship between SoS metrics and T&E objectives?
4) Are expected cumulative impacts of systems changes on SoS performance the same as SoS performance objectives?
5) How do you test the contribution of a system to the end to end SoS performance in the absence of other SoS elements critical to the SoS results?
What if systems are all implemented to their specifications, but the expected overall SoS changes cannot be verified?
SoS T&E Issue (1)
Relationship between SoS and acquisition
• Many SoS are not acquisition programs, but rather are 'umbrella' programs which incorporate multiple systems at different stages of development, each with its own T&E requirements
• The constituent systems are operationally and managerially independent from the SoS and are on asynchronous development schedules
• Scope and complexity of SoS
If SoS are not programs of record (and not subject to T&E regulations), why should we worry about this at all?
Issue Discussion (1)
• The underlying driver for T&E regulations is the objective of assuring the user that the capability they need is provided by the systems
• This driver exists whether or not a system (or SoS) is a program of record
• Furthermore, all changes made to the constituent systems should be verified to confirm they have been implemented correctly, and end-to-end T&E supports the need to show that SoS changes have not inadvertently diminished other necessary capability
• T&E provides a mechanism to understand the impact of changes on desired results, so that an informed fielding decision can be made
• The following recommendations on SoS T&E approaches are made based on this assumed importance of T&E
If SoS are not programs of record (and not subject to T&E regulations), why should we worry about this at all?
SoS T&E Issue (2)
Translating capability objectives into high-level SoS requirements
• In this element, the capability context for the SoS is established, which grounds assessment of the current SoS performance
• In many cases, SoS don't have 'requirements' per se, but capability objectives or goals that provide the starting point for specific requirements for increments of SoS evolution
If ‘requirements’ are not clearly specified up front for a SoS, what is the basis for T&E of an SoS?
Issue Discussion (2)
• SoS typically have broad capability objectives rather than the specific performance requirements defined for other systems; these provide a foundation for:
  • identifying systems supporting an SoS
  • development of an SoS architecture
  • recommended changes or additions to the systems in the SoS to address the capabilities
• This suggests that it is necessary to generate metrics defining the end-to-end SoS capabilities that provide an ongoing 'benchmark' for SoS development
• In some SoS circumstances, the capability objectives can be effectively modeled in simulation environments, which can be used to identify appropriate changes at the system level
  • The fidelity of the simulation provides for validation of the system changes needed to enable SoS-level capability
  • In those cases in which the system changes are driven by SoS-level simulation, the fidelity of the simulation can provide for the SoS evaluation
• In cases where simulation is not practical, other analytical approaches must be used for T&E
  • Test conditions that validate the analysis must be carefully chosen to balance test preparation and logistics constraints against the need to demonstrate the objective capability under realistic operational conditions
If ‘requirements’ are not clearly specified up front for a SoS, what is the basis for T&E of an SoS?
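As a toy illustration of the 'end-to-end benchmark metric' idea above, the sketch below estimates one SoS-level capability measure by simulation: the probability that a chain of constituent systems completes within an end-to-end time budget. The chain model, latency figures, and time budget are all hypothetical choices for illustration, not values from the white paper.

```python
import random

def end_to_end_success_rate(system_latencies, time_budget, trials=10_000, seed=0):
    """Monte Carlo estimate of an end-to-end SoS capability metric.

    system_latencies: (mean, spread) pairs, one per constituent system,
    modeling each system's delay as uniform on [mean - spread, mean + spread]
    (a deliberately simple model). Returns the fraction of trials in which
    the full chain completes within the end-to-end time budget.
    """
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        total = sum(rng.uniform(m - s, m + s) for m, s in system_latencies)
        if total <= time_budget:
            successes += 1
    return successes / trials

# Illustrative three-system chain (all values hypothetical):
rate = end_to_end_success_rate([(2.0, 0.5), (1.0, 0.3), (3.0, 1.0)], time_budget=7.0)
```

A metric of this form can be re-evaluated at each increment as systems change, giving the ongoing benchmark the slide describes even when no formal SoS requirement exists.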
SoS T&E Issue (3)
Assessing the extent to which SoS performance meets capability objectives over time
• This element provides the capability measures for the SoS which, as described in the guide, may be collected from a variety of settings, as input on performance under particular conditions and on new issues facing the SoS from an operational perspective
• Hence, assessing SoS performance is an ongoing activity, which goes beyond assessment of specific changes in elements of the SoS (e.g. changes in constituent systems to meet SoS needs, and system changes driven by factors unrelated to the SoS)
What is the relationship between SoS metrics and T&E objectives?
Issue Discussion (3)
• Typically, T&E objectives, particularly key performance parameters, are used as the basis for making a fielding decision
• SoS metrics, on the other hand (as discussed above), provide an ongoing 'benchmark' for SoS development which, when assessed over time, shows improvement in meeting user capability objectives
• Because SoS are typically comprised of a mix of fielded systems and new developments:
  • There may not be a discrete 'SoS' fielding decision
  • Instead, the various systems are deployed as they are made ready, at some point reaching the threshold that enables the new SoS capability
What is the relationship between SoS metrics and T&E objectives?
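The distinction above, a benchmark trend rather than a single pass/fail event, can be sketched in a few lines: the "fielding" moment for an SoS capability is a threshold crossing in an ongoing metric. The metric values and threshold below are hypothetical.

```python
def capability_threshold_increment(metric_by_increment, threshold):
    """Given an ongoing SoS benchmark metric measured at each development
    increment, return the first increment index at which the metric meets
    the capability threshold, or None if it never does. Illustrates that
    the new SoS capability becomes available when the trend crosses a
    threshold, not at a discrete test-and-field decision."""
    for i, value in enumerate(metric_by_increment):
        if value >= threshold:
            return i
    return None

# Hypothetical benchmark values over five increments:
first = capability_threshold_increment([0.55, 0.62, 0.71, 0.83, 0.90], threshold=0.80)
# first == 3: the capability threshold is first met at increment 3
```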
SoS T&E Issue (4)
Addressing requirements and solution options
• Increments of SoS improvement are planned by the SoS and systems managers and systems engineers
• For each increment there may be specific expectations for changes in systems and an anticipated overall effect on SoS performance
• While it may be possible to define specifications for the system changes, it is more difficult to do this for the SoS, whose performance is in effect the cumulative effect of the changes in the systems
Are expected cumulative impacts of systems changes on SoS performance the same as SoS performance objectives?
Issue Discussion (4)
• SoS increments are based on changes in constituent systems which accumulate into improvements in the SoS overall
• In most cases, changes in systems can be specified and tested
• However, in SoS which are implemented in a variety of environments and are dependent on networks for end-to-end performance:
  • The impact of the system changes on SoS end-to-end capabilities can be estimated only with less certainty
  • This uncertainty must be considered when assessing the SoS against its performance objectives
Are expected cumulative impacts of systems changes on SoS performance the same as SoS performance objectives?
SoS T&E Issue (5)
Orchestrating upgrades to SoS
• Systems may make changes as part of their development increment that will be ready to field once they have been successfully tested and evaluated
• However, given the asynchronous nature of system development paths, other systems in the SoS increment may not be ready to test with the early-delivering systems, thwarting the concept of end-to-end test
How do you test the contribution of a system to the end to end SoS performance in the absence of other SoS elements critical to the SoS results?
What if systems are all implemented to their specifications, but the expected overall SoS changes cannot be verified?
Issue Discussion (5)
• SoS increments are based on changes in constituent systems which accumulate into improvements in the SoS overall
• In most cases, changes in systems can be specified and tested
• However, in SoS which are implemented in a variety of different environments and are dependent on networks for end-to-end performance:
  • The impact of the system changes on SoS end-to-end capabilities can be estimated only with less certainty
  • This uncertainty must be considered when assessing the SoS against its performance objectives
How do you test the contribution of a system to the end to end SoS performance in the absence of other SoS elements critical to the SoS results?
What if systems are all implemented to their specifications, but the expected overall SoS changes cannot be verified?
Summary and Recommendations
Approach SoS T&E as an evidence-based approach to addressing risk
Encourage development of analytic methods to support planning and assessment
Address independent evaluation of networks which support multiple SoS
Employ a range of venues to assess SoS performance over time
Establish a robust process for feedback once fielded
Summary and Recommendations
Approach SoS T&E as providing evidence for risk assessment (1 of 2)
Motivation – SoS T&E limitations represent operational performance risk
• Full conventional T&E before fielding may be impractical for incremental changes in SoS based on systems with asynchronous development paths
• Explicit test conditions at the SoS level can be infeasible due to the difficulty of bringing all constituent systems together and setting up meaningful test conditions
Risk assessment in the SoS T&E context
• With each increment of SoS development, identify areas critical to success of the increment and places where changes in the increment could have adverse impacts on user missions, and focus pre-deployment T&E on these risk areas
• Assess the risk using evidence from a range of sources, including live test
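A minimal sketch of the risk-focused selection described above: score each proposed increment change and direct limited pre-deployment T&E effort at the highest-risk items. The likelihood-times-impact scoring, the 1-5 scales, and the example change names are hypothetical illustrations, not a method prescribed by the briefing.

```python
def prioritize_test_areas(changes, top_n=3):
    """Rank proposed increment changes by a simple risk score
    (likelihood x mission impact, each on a hypothetical 1-5 scale)
    so pre-deployment T&E can focus on the riskiest areas first."""
    scored = sorted(changes, key=lambda c: c["likelihood"] * c["impact"], reverse=True)
    return [c["name"] for c in scored[:top_n]]

# Hypothetical changes planned for one SoS increment:
changes = [
    {"name": "new track fusion algorithm", "likelihood": 4, "impact": 5},
    {"name": "UI refresh", "likelihood": 2, "impact": 1},
    {"name": "message schema change", "likelihood": 3, "impact": 4},
]
focus = prioritize_test_areas(changes)
# focus == ['new track fusion algorithm', 'message schema change', 'UI refresh']
```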
Summary and Recommendations
Approach SoS T&E as providing evidence for risk assessment (2 of 2)
Evidence can be based on
• Activity at the SoS level, as well as roll-ups of activity at the level of the constituent systems
• Activity can include explicit verification testing, results of models and simulations, use of linked integration facilities, and results of system-level operational test and evaluation
Factor these risks into the SoS and system development plans
• If the results of the T&E indicate that the changes will have a negative impact, then these can be discarded without jeopardizing the delivery of the system updates to their users
Results would be used to provide feedback to end users in the form of 'capabilities and limitations', as is done by the Navy Battle Group Assessment process, rather than as test criteria for SoS 'deployment'.
Analytical models of SoS behavior can serve as effective tools to
• Assess system-level performance values against SoS operational scenarios
• Validate the allocations to systems
• Provide the analytical framework for SoS-level verification
Such models can be used to develop reasonable expectations for SoS performance
• Relevant operational conditions should be developed with end-user input and guided by design-of-experiments discipline, so as to expose a broad range of conditions
Summary and Recommendations
Encourage development of analytic methods to support planning and assessment
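The design-of-experiments discipline mentioned above can be sketched very simply: enumerate test conditions from named factors and levels. The factor names and levels here are hypothetical examples; a real program would typically use a fractional or space-filling design to keep test cost manageable rather than the full factorial shown.

```python
from itertools import product

def condition_matrix(factors):
    """Enumerate a full-factorial set of operational test conditions from
    named factors and their levels (a basic design-of-experiments sketch)."""
    names = list(factors)
    return [dict(zip(names, levels)) for levels in product(*factors.values())]

# Hypothetical factors chosen with end-user input:
conditions = condition_matrix({
    "network_load": ["light", "heavy"],
    "weather": ["clear", "degraded"],
    "tempo": ["low", "surge"],
})
# 2 x 2 x 2 = 8 candidate test conditions
```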
Because of its central role and distributed responsibility, the network is a unique constituent of the SoS
• The implications demand a unique assessment of the network capability in test and evaluation of SoS
• Realistic assessment of SoS performance demands evaluation of network performance and its degradation under the vagaries of operational conditions
Because DoD seeks to develop a set of network capabilities which are applied in a wide range of applications
• Consider an approach to network assessment independent of particular SoS applications, as an input to SoS planning and T&E
Summary and Recommendations
Address independent evaluation of networks which support multiple SoS
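A deliberately crude sketch of the SoS-independent network assessment suggested above: characterize the network's degradation profile on its own, then hand the profile to any SoS planning or T&E activity that depends on it. The retransmission model and the bandwidth and loss figures are hypothetical; a real assessment would also measure latency, jitter, and partition behavior.

```python
def effective_throughput(bandwidth, loss_rate):
    """Effective delivered throughput after packet loss, assuming lost
    packets are simply dropped from the delivered total (a toy model)."""
    if not 0.0 <= loss_rate < 1.0:
        raise ValueError("loss_rate must be in [0, 1)")
    return bandwidth * (1.0 - loss_rate)

def degradation_profile(bandwidth, loss_rates):
    """Tabulate effective throughput across a set of operational loss
    conditions, as a network-only input to SoS planning and T&E."""
    return {lr: effective_throughput(bandwidth, lr) for lr in loss_rates}

# Hypothetical 100-unit link evaluated at three loss conditions:
profile = degradation_profile(100.0, [0.0, 0.1, 0.3])
```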
Evaluation criteria are conventionally established based on quantified performance requirements
• For SoS these may be end-user metrics used to assess the results of loosely defined capabilities
Recommend using a range of available opportunities to collect data on SoS performance
• Assessment opportunities will be both planned and spontaneous
• These may not be expressly timed to the development and fielding of system changes to address SoS capability objectives
These data can
• Support periodic assessments of evolving capability
• Provide valuable insight to developers and users, including the opportunity to identify unexpected behavior
Summary and Recommendations
Employ a range of venues to assess SoS performance over time
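Pooling data from a range of venues, as recommended above, can be sketched as a simple aggregation: observations from exercises, simulations, and fielded operations feed one periodic per-metric summary. The venue names, metric name, and values are hypothetical.

```python
from collections import defaultdict

def periodic_assessment(observations):
    """Aggregate SoS performance observations collected from a mix of
    venues (exercises, simulations, fielded operations) into a per-metric
    summary: sample count, mean, and contributing venues. A sketch of
    pooling planned and spontaneous data-collection opportunities rather
    than relying on a single test event."""
    by_metric = defaultdict(list)
    venues = defaultdict(set)
    for obs in observations:
        by_metric[obs["metric"]].append(obs["value"])
        venues[obs["metric"]].add(obs["venue"])
    return {
        m: {"n": len(v), "mean": sum(v) / len(v), "venues": sorted(venues[m])}
        for m, v in by_metric.items()
    }

# Hypothetical observations of one end-to-end metric from three venues:
obs = [
    {"venue": "exercise", "metric": "track_latency_s", "value": 4.0},
    {"venue": "simulation", "metric": "track_latency_s", "value": 5.0},
    {"venue": "fielded", "metric": "track_latency_s", "value": 6.0},
]
summary = periodic_assessment(obs)
```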
Once deployed, continuing "T&E" of the fielded SoS capability can be used to recognize operational problems and make improvements
Continual evaluation can be facilitated through system instrumentation and data collection to provide feedback on
• Constraints
• Incipient failure warnings
• Unique operational conditions
By establishing and exercising robust feedback mechanisms between field organizations and their operations and the SoS SE and management teams, SoS T&E provides a vital link to the ongoing operational needs of the SoS
Summary and Recommendations
Establish a robust process for feedback once fielded (1 of 2)
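One of the feedback items above, incipient-failure warnings from instrumented fielded systems, can be sketched as a run-length check over telemetry. The warning level, window size, and queue-depth samples are hypothetical tuning choices, not values from the briefing.

```python
def incipient_failure_warnings(samples, warn_level, window=3):
    """Flag incipient-failure warnings in instrumented field data: report
    the index at which any run of `window` consecutive samples first
    exceeds the warning level (a minimal monitoring sketch)."""
    warnings = []
    run = 0
    for i, value in enumerate(samples):
        run = run + 1 if value > warn_level else 0
        if run == window:  # warn once, when a run first reaches the window
            warnings.append(i)
    return warnings

# Hypothetical queue-depth telemetry from a fielded node:
alerts = incipient_failure_warnings([2, 9, 9, 9, 3, 9, 9, 9, 9], warn_level=5)
# alerts == [3, 7]
```

In practice such flags would flow back to the SoS SE and management teams through the feedback mechanisms the slide describes, rather than stand alone.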
This includes technical and organizational dimensions
• An example of the former is instrumenting systems for feedback post-fielding
• An example of the latter is posting a member of the SoS SE and management team with the SoS operational organization
Well-developed and continually exercised feedback mechanisms between the operational and acquisition/development communities are a major enabler of "building the system right", and of continuing to do so throughout the multiple increments of an SoS
Summary and Recommendations
Establish a robust process for feedback once fielded (2 of 2)
Comments (1 of 3)
Paper provides a view which is generally understood, including
• SoS are more complicated than systems
• SoS have constituent systems that have independent needs and can exercise free will in their development and operation, at times placing their own needs before those of the SoS
• SoS are generally asynchronous, which introduces difficulties in end-to-end testing, so alternatives must be sought
• Emergent properties will emerge and must be accommodated
• Set of 'needs':
  • Need for incremental development
  • Need to address synchronization; the need to consider block testing
  • Need to monitor fielded behavior
  • Need to express limitations in a politically correct manner, and the means to address and incorporate methods to cope with these
Comments (2 of 3)
Paper does not address balance between SoS development and testing
• There is also a very important topic that is not addressed: the balance between SoS development and testing. The paper is an advocate for excellent testing of an SoS, and it's hard to argue against that in concept; but in reality the cost of SoS development, including the SoS itself and the testing of it, must be balanced to assure good testing by a testing program whose cost is affordable. I know inadequate testing is a recipe for disaster. But a test program whose cost significantly exceeds that of the rest of the development program is a non-starter, and may, in some circumstances where adequate testing is unaffordable, correctly lead to cancellation of the conceived SoS. Practically, there needs to be a balance, which will require the significant inclusion of practitioners in the process.
There is one ironic contradiction… On the one hand the paper cautions against the testing of sub-elements of an SoS and drawing conclusions from such tests, and on the other hand it endorses separate testing of the network that, in some cases, is part of – in fact a sub-element of – an SoS. Aside from that, for MANET networks necessary for maneuver elements, the ability to predict performance on an analytical basis is so poor that, certainly in the end, testing must be done on an SoS basis.
Comments (3 of 3)
Paper highlights the correct points/concepts and is well written. Eventually, these points/concepts will have to be "fleshed" out.
We agree that defining a SoS T&E program in the classical sense is difficult. Having a V&V program focused on measures of effectiveness or operational effectiveness is a valid and useful approach; strict requirements verification would seem to be a non-viable approach. Given the asynchronous development of the individual systems, an ongoing assessment of built-plus-projected capability against SoS objectives would be very desirable. Maybe this is the same as the "evidence-based approach to addressing risk". More explanation in this section may be needed.
We wholeheartedly agree that feedback after fielding is valuable and the way to go; tying it to a T&E activity may be less effective. Once fielded and operational, change will occur, and re-assessment and re-adjustment of system capabilities will be necessary. Making this a T&E function starts to imply that, from a developer's standpoint, there is no point of completion or transition from the lab to actual use. It would seem more useful to have the feedback as input to the starting phase of the next iteration of the lifecycle, rather than having the perception of a "never-ending" last phase of the current iteration's lifecycle.