Agile/Lean in Systems Engineering Project
Richard Turner, Stevens Institute, PI [email protected]
SERC Sponsor Research Review
4 December 2014. Washington, DC
Agile/Lean in SE Overview
• Improved agility/efficiency critical to system development/evolution
• Many approaches from agile and lean development communities
• Two major tasks under the SEMT Research Area
• Agile/Lean Enablers for SE (RT-124) ― Search out and evaluate potential value to SE of adaptive (e.g., agile, lean) methods, processes and tools
• KEVAS (RT-126) ― Demonstration and research system to support kanban-based scheduling, SEaaS, and other agile/lean management techniques in systems of systems evolution ― Serves as a basis for simulating promising enablers
• Continues research from MPT and RT-35/35A tasks
Agile/Lean Enablers for SE [RT-124]
Agile Enablers Team
• Richard Turner, Stevens Institute, PI • Ye Yang, Stevens Institute • Forrest Shull, Carnegie Mellon (SEI)
• Student Researchers: • Keith Barlow, Stevens Institute • Joshua (Jabe) Bloom, Carnegie Mellon • Richard Ens, Stevens Institute • Dan Ingold, USC
Summary and Goals
• Seek out lean, agile, and other adaptive techniques potentially applicable to systems engineering ― Participate in conferences, symposia, and industry working groups ― Keep track of publications and blogs by thought leaders ― Create opportunities by leveraging thought leader networks
• Evaluate their potential and identify research strategies ― Develop an evaluation methodology for potential candidates
• Include technologies identified in earlier work
• Products are white papers and activities
Ongoing work
• Evaluation process ― Adapting process from early SERC MPT work [Turner and Shull]
• Further investigation of previous approaches ― Quantitative schedule acceleration model [Lane, Yang and Ingold] ― SE as a Service [Barlow]* ― Evidence-/value-based and collaborative decision processes [Turner and RT-126]* ― Complexity theory and sensemaking [Bloom]*
• New approaches ― Virtual and physical visible flow indicators [Ens (Dissertation)] ― Schedule acceleration through crowdsourcing (extends QSAM) [Yang]*
*More detail provided
SEaaS
• Primary researcher: Keith Barlow
• Developing a framework for defining SE services
• Based on INCOSE/IEEE SEBOKwiki.org taxonomy
• Other sources include ― CMMI (Services and Development) ― OASIS SOA Reference Model ― SERC MPT Phase 2 Bridge diagrams ― RT-35a Final Report
• White paper to be published by 31 December
Complexity theory and sensemaking
• Primary Researcher: Joshua (Jabe) Bloom
• Based on work by Dave Snowden and others ― Kalawsky; Flach; Juarrero; Sheard & Mostashari; Stevens, Brook, Jackson & Arnold; Luzeaux, Ruault & Wippler; Jones
• Early possibilities for application to SE
― Differentiating Complexity (Computational, Cognitive, Social, Relational) o Define a sensemaking process to understand how different types of complexity are impacting Systems Engineering projects
― Differentiating Design approaches from Engineering approaches o Align Design and Engineering approaches to Complexity and to each other o Apply Systemic Design Research Processes to Systems Engineering
― Co-Design Approaches for Systems Engineering
― Actor-Network-Theory application to Systems Engineering
• Initial white paper planned for December-January
Towards Agile Schedule Acceleration Through Crowdsourcing
This material is based upon work supported, in whole or in part, by the U.S. Department of Defense through the Systems Engineering Research Center (SERC) under Contracts H98230-08-D-0171 and HQ0034-13-D-0004. The SERC is a federally funded University Affiliated Research Center (UARC) managed by Stevens Institute of Technology consisting of a collaborative network of over 20 universities. See www.SERCuarc.org.
Ye Yang, Richard Turner
Stevens Institute of Technology
[email protected]
December 4, 2014
Definition and Motivation
• Software crowdsourcing (Stol and Fitzgerald, 2014) ― The accomplishment of specified software development tasks on behalf of an organization by a large and typically undefined group of external people with the requisite specialist knowledge through an open call.
• Characteristics ― Millions of online developers ― Software development mini-tasks ― Completion in about two weeks ― $750 task prize for winning participants ― Higher quality through broad participation
• Research question ― How can the defense domain benefit from this emerging paradigm?
Growth Trend in Crowdsourcing
• Growth of workers • Top-rated business trend ― From workforce to crowdsource
Source: http://sandfishdesign.co.uk, © 2012, Crowdsourcing, LLC

Platform    Crowd size (s/w)    Prize
TopCoder    710K                $81K open
eLance      600K                $1B total
TaskCN      3.5M                $6M total, $144K open

Sources: topcoder.com, elance.com, taskcn.com (as of Nov. 2014)
Source: “Accenture Technology Vision 2014”.
Software Crowdsourcing: State of the Art
• Examples ― 2009 DARPA Network Challenge: $40,000 prize ― 2013 DARPA FANG challenge: $1 million prize
• Crowdsourced activities ― Conceptualization ― Specification ― UI prototypes ― Architecture ― Component design ― Component development ― Code ― Test scenarios ― Bug hunts
• Applications ― Mobile applications ― Analytics and optimization ― Scientific algorithm development ― Online communities ― Open platforms ― Digital media ― Business systems
How TopCoder Works
Top platform for crowdsourced software development
Credit: www.topcoder.com, © 2007, TopCoder, Inc.
Software Development Crowdsourcing Task Profiles
[Figure: task profile dimensions ― Participation (# registrants, # submissions); Cost and Schedule (amount of prize, schedule); Outcomes (# delivered LOC, DevScore); Task Size (# component spec pages, # requirements spec pages)]
Evidence on Cost and Schedule Reduction
• Schedule reduction ― Compared with Parkinson's law: "Work expands so as to fill the time available for its completion."
• Cost reduction ― Compared with COCOMO: EFFORT = a × SIZE^b
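For a sense of scale, here is a hedged worked example using nominal, uncalibrated COCOMO II-style coefficients (a = 2.94 and b = 1.10 are illustrative defaults, not values from this study):

```latex
\mathrm{EFFORT} = a \cdot \mathrm{SIZE}^{\,b}
                = 2.94 \times 10^{1.10}
                \approx 37 \ \text{person-months for a 10-KSLOC component}
```

Against an estimate of that shape, fixed mini-task prizes on the order of $750 and roughly two-week turnarounds are what motivate the cost and schedule reduction claims.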
Software Crowdsourcing Tasks are ACCURATELY Predictable!
• Extract data from TopCoder ― 9/29/2003-9/2/2012 ― 2,859 design and 3,015 development tasks ― 980 successful ones used
• Define new drivers ― 16 drivers modeling task types, complexity, and participation
• Train models ― 9 learners
• Validate models ― 3 baselines
• Result: our models can predict the cost of crowdsourced tasks within 35% of actuals for 80% of the tasks.
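As a minimal sketch of how a claim like this is checked (not the study's actual pipeline), the standard PRED(35) metric can be computed from cross-validated predictions. The data loader and the single stand-in learner below are hypothetical; the study used 16 drivers and 9 learners:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

def pred_at(actual, predicted, threshold=0.35):
    """Fraction of tasks whose magnitude of relative error is <= threshold."""
    mre = np.abs(predicted - actual) / actual
    return np.mean(mre <= threshold)

# X: driver features (task type, complexity, participation); y: task cost.
X, y = load_topcoder_drivers()  # hypothetical loader for the extracted data
model = RandomForestRegressor(n_estimators=200, random_state=0)  # stand-in learner
y_hat = cross_val_predict(model, X, y, cv=10)
print(f"PRED(35) = {pred_at(y, y_hat):.2f}")  # the reported result: ~0.80
```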
Agile Acceleration through Crowdsourcing
• Where applicable in defense domains? ― Idea exploration for coping with emerging changes ― Rapid delivery of non-core features ― Specific functional or performance upgrades ― Where problems can be generalized and need more flexible and competitive broad participation than outsourcing provides
• Investment ― Formulate the crowd: within the existing DoD supplier scope, or leveraging a general-public crowdsourcing platform ― Setting up a software development environment/platform dedicated to the crowd ― Dedicated personnel responsible for crowdsourcing assessment, task decomposition, Q&A, and monitoring
From Workforce to Crowdsource: A Viable Option for Systems Engineering
Incremental view of the risk-driven spiral model for large-scale software projects:
0. Set up environment
1. Define tasks
2. "Pick up" teams
3. Communicate and monitor
4. Review crowd deliverables
Crowdsourcing Decision Framework: An Elaborated Agile Rebaselining Triage Model
[Figure: triage flow. Change Proposers propose changes. The Agile Future-Increment Rebaselining Team assesses and filters them, negotiates change disposition, and analyzes options in the context of other changes; it may recommend no action (with rationale), recommend handling in the current increment, or recommend deferrals to future increments. The Stabilized Increment-N Development Team handles accepted Increment-N changes in the current rebaseline and may defer some Increment-N capabilities. Future Increment Managers discuss and resolve deferrals: discuss, revise, defer, or drop; rebaseline future-increment Foundations packages; and prepare for rebaselined future-increment development. The elaboration adds Crowd Managers and a "Crowdsourcing?" decision at both the handling and filtering steps: if yes, plan tasks, manage tasks, and assemble/consolidate results; if no, follow the normal path.]
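One hedged reading of that routing logic as code; the predicates and outcome strings are our illustrative names, not the framework's actual decision rules:

```python
def triage(change, fits_current_increment, is_generalizable, is_core):
    """Route one proposed change; criteria names are illustrative only."""
    if not change.accepted:
        return "recommend no action, provide rationale"
    if is_generalizable(change) and not is_core(change):
        # The "Crowdsourcing? Yes" branch: plan, manage, consolidate.
        return "crowdsource: plan tasks, manage tasks, consolidate results"
    if fits_current_increment(change):
        return "handle in current rebaseline (Increment-N team)"
    return "defer: rebaseline future-increment Foundations packages"
```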
Summary
• Crowd labor: an emerging route to schedule acceleration
• The crowdsourcing decision framework helps to address key management concerns
• Many new challenges need to be addressed through further investigation ― Predictive models: cost, schedule, quality, crowd participation and behavior ― Architecting and optimization approaches: task decomposition and configuration, multi-objective optimization ― Collaboration techniques: large populations, interactions, monitoring and control
• We look forward to feedback, discussion, and especially collaborators!
KSS Evaluation and Analysis System (KEVAS) [RT-126]
KEVAS Team
• Richard Turner, Stevens Institute, PI • Jo Ann Lane, University of Southern California • Levent Yilmaz, Auburn University • Forrest Shull, Carnegie Mellon (SEI) • Alice Smith, Auburn University • Jeff Smith, Auburn University
• Student Researchers: • Donghuang Li, Auburn University • Alexey Tregubov, University of Southern California
Kanban-based Scheduling System Networks: Value-Driven Scheduling Across a System of Systems
• Impetus for KSSN research (what we heard from the trenches): ― IMSs are killing me; I can't make them work. Help? ― How do I make sure I am delivering high value across the SoS? ― Where do I stand in implementing my capabilities? ― How do I know how to balance my [organizational|contract] resources?
Targeted Results
• Better visibility and coordination in managing multiple concurrent development projects
• More effective integration and use of scarce SE resources
• Increased project and enterprise value delivered earlier
• More flexibility while retaining predictability
• Less blocking of product team tasks waiting for SE response
• Lower governance overhead
Fundamental Resource: Product Development Flow (Lean)
• Grew out of Toyota Production System success
• Later applied to knowledge work (imperfectly)
• Key source: Principles of Product Development Flow (Reinertsen)
• Summary (essence) of its 175 principles ― Take an economic view [economic framework]: o Minimize cost of delay o Prioritize the work with the most value ― Variation of flow is inevitable, so anticipate it: o Actively manage queues; pull rather than push o Reduce batch size o Accelerate feedback o Actively manage work-in-progress o Empower teams to decentralize control
Summary adapted from Murray Cantor
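To make "minimize cost of delay" concrete, here is a minimal sketch (values assumed, names ours) of Reinertsen-style prioritization by cost of delay divided by duration, often called WSJF or CD3:

```python
jobs = [
    {"name": "A", "cost_of_delay": 10_000, "duration_weeks": 4},
    {"name": "B", "cost_of_delay": 3_000,  "duration_weeks": 1},
    {"name": "C", "cost_of_delay": 8_000,  "duration_weeks": 2},
]
for job in jobs:
    # Delay cost avoided per week of capacity spent on the job.
    job["cd3"] = job["cost_of_delay"] / job["duration_weeks"]

# Serve the highest cost-of-delay per unit time first.
for job in sorted(jobs, key=lambda j: j["cd3"], reverse=True):
    print(f'{job["name"]}: CD3 = {job["cd3"]:,.0f}')
```

Here C runs first even though A has the highest raw cost of delay, because C buys back more delay cost per week of capacity; a FIFO or raw-value ordering would be economically worse.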
Overview of the KSS Concept
• Pull (kanban) scheduling • Value-based selection • Limited WIP • Classes of service
• SE as a Service • Value negotiation • Scarce resource-driven • Collaborative/negotiated
• Integrated work and data flow
• Information radiators at all levels
• Appropriate organizational structures
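A minimal sketch of the first two mechanics (value-based pull under a WIP limit) as we read them; this is illustrative, not the actual KSS/KEVAS implementation:

```python
import heapq

class KanbanQueue:
    def __init__(self, wip_limit):
        self.wip_limit = wip_limit
        self.backlog = []        # max-heap by value (negated for heapq)
        self.in_progress = set()

    def add(self, item_id, value):
        heapq.heappush(self.backlog, (-value, item_id))

    def pull(self):
        """Pull the highest-value item only if the WIP limit allows it."""
        if len(self.in_progress) >= self.wip_limit or not self.backlog:
            return None
        _, item_id = heapq.heappop(self.backlog)
        self.in_progress.add(item_id)
        return item_id

    def finish(self, item_id):
        self.in_progress.discard(item_id)
```

The point of the WIP limit is that work is pulled when capacity frees up rather than pushed onto the team, which keeps queues visible and short.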
[Figure: A Multi-level Network of Kanban-based Scheduling Systems. Executive/Stakeholder Management (Customer) handles SLA establishment and monitoring, strategic planning, and capability prioritization. Capability Engineering analyzes needs and alternatives, refines capabilities, develops and allocates requirements, forms cross-organizational teams, performs cross-product and specialty engineering, and validates and fully enables capabilities. Product/Domain Engineering (e.g., Network Domain Team, Pharmacy Domain Team) handles customer relations and initial triage, product SE, identifying SW features, allocating features to the SW Development Team, and integrating features into requirements. Users and User Support feed a shared Needs Backlog, to which all organizations can contribute. Each level runs its own KSS with dashboards, giving work flow visibility across the network.]
Where we are
• RT-35 ― Researched issues ― Developed initial components of the concept ― Studied agile, lean, kanban, and SEaaS fundamentals ― Attempted simulation development
• RT-35a ― Continued refinement of concept ― Developed a semi-populated healthcare system example ― Aimed at piloting as a means of learning and validating
• RT-126 ― Must validate the concept for any reasonable adoption/transition to take place ― Piloting deemed not feasible as primary transition strategy ― New transition strategy based on experimentation using stronger simulation ― Enable potential pilot organizations to gain understanding of concept benefits based on their own environments
KSSN Experimental Validation and Analysis System (KEVAS) Operational Concept
• Vision
• Outcomes/Benefits
• Results Chain and Risks
• Initial Concept
• Sample Scenario
Vision
• KEVAS is envisioned to be a flexible, web-based modeling and simulation capability to: ― demonstrate the principles and mechanisms of the KSS and KSS Networks (KSSN) ― advance the understanding of the KSSN value-based approach ― evaluate the effectiveness of the KSSN in a variety of organizational environments ― investigate existing and proposed mechanisms for KSSN implementation ― provide a learning and evaluation environment to support organizations that are interested in applying the concept to their environment
• KEVAS will be a core asset in the transition of KSSN concepts and benefits to industry and government
Outcomes/Benefits
• Validate the KSSN concept
• Support a broader understanding of the benefits of KSSN and thus enhance the pool of possible adopters
• Provide potential adopters a means to "try before you buy" via simulations offered on a transition portal
• Accelerate generating the transition materials required for interested organizations to conduct successful pilots
• Enable significant empirical experimentation into product development management approaches, particularly value and scheduling strategies, including the work of Reinertsen and others applying lean concepts to knowledge work
• The ultimate benefit is improved visibility and flow through SoS development and evolution organizations, and higher value delivered sooner to SoS customers and users.
Initial Concepts
• Experiments to simulate scheduling and management approaches as applied with a specific work flow and organization
• Experiment components include, but are not limited to: ― Simulation engine(s): initially a discrete event-based simulation and an agent-based simulation ― Organizational models ― Scheduling models; these may include traditional, orthodox, agile, lean, KSSN, SE as a Service, and/or other mechanisms for determining schedule and priority ― Simulation control, dashboard, and measurement options ― Pre-defined, statistically created, or random work flows ― Experiment components may be created or selected/adapted from libraries
• Tools ― Create or edit experiment components ― Produce graphics/visualizations of the simulation results ― Database system and associated data products
Sample Scenario
• Transition Candidate creates and runs an experiment ― [Pre-condition] Transition Candidates must have acquired appropriate credentials to access the system
― Select a simulation engine for their experiment
― Select or create an organizational model to represent their organization
― Select or create a scheduling model to evaluate
― Select, create, or import a work flow
― Select the data to be captured, displays, and controls
― Run the experiment
― Generate reports and analyses of the data produced
― Save the experiment and results in a private database
― [Optional] Sanitize the data and add it to the historical database
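Purely as an illustration of how those selections might compose (KEVAS is envisioned as web-based; every name and the runner below are hypothetical, not a real KEVAS API):

```python
# Hypothetical sketch of the experiment a transition candidate assembles.
experiment = {
    "engine": "discrete-event",            # or "agent-based"
    "organization_model": "my_org_model",  # selected or created
    "scheduling_model": "kssn",            # vs. "traditional", "agile", ...
    "workflow": "imported_workflow",       # selected, created, or imported
    "measures": ["value_over_time", "queue_size", "wip"],
}
results = run_experiment(experiment)       # hypothetical runner
save_to_private_db(results)                # hypothetical persistence
# Optional: sanitize_and_share(results) adds it to the historical database.
```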
Proposed KEVAS Architecture
• Components ― Domain-specific Language (DSL) ― Simulation Engine(s) ― Output Processor ― Model Database ― Web Browser Interface
Validation
• Two validation targets: the simulations and the KSSN concept
• How do you validate a simulation, and against what? ― Industry standard practice is difficult to determine ― Actual piloting is difficult contractually ― Data is always hard to come by
• Options being considered at this time ― Alternative paths for piloting (e.g., USC CSSE member, help from Rob Flow) ― Tool vendors o Mine their industry performance and sample work flows o We have entrée to LeanKit, VersionOne, Rally, and Rational ― Relativity o Validate the orthodox/traditional models within the simulation using industry contacts o Define a threshold delta for the KSSN approach to meet for successful concept validation
Measurement
• Reinertsen provides an economic model as well as a queuing and information theory approach to managing flow
• Comparative measurements and established baselines are key to validation
• Deciding what to measure in any development effort is difficult
• Improvement involves evaluating the value of visibility and flow, measured at various points across the SoS, and value delivered to the customer
• We are currently looking at capturing a few key measures that should support validation (two are sketched below) ― Value delivered over time ― True organizational capacity (as of the time work enters the organization) ― Percentage capacity utilization for queues ― Queue size ― Size of work items (batch size) ― Work in progress (at all levels) ― Estimating cost of adoption and delta cost of use
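For instance, two of these measures reduce to simple computations over simulation output; the record fields here are assumptions, not KEVAS's actual schema:

```python
def value_delivered(work_items, t):
    """Cumulative value of work items completed by time t (fields assumed)."""
    return sum(wi["value"] for wi in work_items
               if wi.get("completed_at") is not None and wi["completed_at"] <= t)

def queue_utilization(queue_length, capacity):
    """Percentage capacity utilization for a queue."""
    return 100.0 * queue_length / capacity if capacity else 0.0
```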
A Note About Value
• Orthodox methods do not use value the same way as KSSN
• Value must consider cost, but should not be equated with cost
• Because value has various meanings, we need a common measure to compare different mechanisms for value delivery over time while considering costs
• Reinertsen uses "life-cycle profits" as the common unit for his economic model for product development; we are researching a way to use this concept, modified to account for non-profits and government
KEVAS Results Chain
[Figure: results chain diagram. Key: initiatives/activities, expected outcomes, and assumptions (risks). Initiatives ― develop initial KSSN models and simulations; validate the KSSN concept; full KEVAS incremental development and validation; transition advocacy, exposure, and publications; ongoing KSSN research ― produce validated models, simulations, and mechanisms, validation data, and transition tools, success stories, and support documentation. Expected outcomes: KSSN concept validated; an organization can try out the concept; tools for estimating benefit in a specific organization; KSSN is seen as valuable in industry and government; KSSN adoption; improved organizational processes and infrastructure; better flow and visibility in developing and sustaining SoS. Assumptions (risks) annotate the links: modeling is possible and tractable; validity determination is possible; models and simulations are suitable; simulations are easy to use and convincing; organizations trust and take time to use the simulations; the concept is useful; the KSSN concept is seen as valuable to sponsors (funding and support); changes and costs are acceptable; cost of adoption is acceptable; adoption support is suitable; sufficient interest in evolution. Feedback loops carry validation results, mechanism shortcomings, new ideas, shortfalls, needs, desires, lessons learned, and success stories back into ongoing research, yielding an improved KSSN concept and mechanisms and new models, simulations, and mechanisms. If validation fails, the KSSN concept needs revision.]
Assumptions (Risks) from Results Chain
• Concept is valid ― Modeling is tractable and possible ― Validity determination is possible
• Models and simulations are suitable ― Simulations and models are valid
• Simulations are easy to use and convincing ― Organizations will take time to use the simulations ― Organizations will trust the simulations
• Organizations see KSSN as valuable
• Cost of adoption is acceptable [changes and costs are acceptable]
• Sufficient interest in evolution of the KSSN concept
• Adoption support is suitable for organizational use
Other Programmatic Risks
• No ongoing support for hosting the transition web site and the simulation infrastructure
• "Best Practice / silver bullet syndrome" [success in one context does not necessarily imply universal success]
KSS simulator demonstration
By Alexey Tregubov
6th Annual SERC Sponsor Research Review
December 4, 2014
Georgetown University School of Continuing Studies
640 Massachusetts Ave NW, Washington, DC
www.sercuarc.org
Simulation model
• Discrete event simulation (activity simulation)
• WI (work item) network (work graph) consists of: ― WIs (id, status, required effort, completeness) – describe tasks, requirements, capabilities ― Resources (id, specialties, assigned WI) ― Teams (id, resources, WI backlog) – groups of resources
• Scenario consists of triggers: ― A trigger is an if-then rule (see the sketch below): o Example: "If wi_x is 50% complete then add new WIs." o Example: "If time is 30 then change value of capability C1." ― Triggers describe all relationships between WIs and how they change over time.
• Configuration: resource allocation, prioritization algorithm, …
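An illustrative sketch of the trigger idea, an if-then rule evaluated each simulation step; the names and structure are ours, not the simulator's actual API:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class WorkItem:
    id: str
    required_effort: float
    completeness: float = 0.0   # 0.0 .. 1.0
    status: str = "pending"

@dataclass
class Trigger:
    condition: Callable[[Dict], bool]   # evaluated against simulation state
    action: Callable[[Dict], None]      # fired when the condition holds

# "If wi_x is 50% complete then add new WIs"
add_follow_on = Trigger(
    condition=lambda s: s["wis"]["wi_x"].completeness >= 0.5,
    action=lambda s: s["new_wis"].append(WorkItem("wi_y", required_effort=8.0)),
)

def step(state: Dict, triggers: List[Trigger]) -> None:
    """Fire every trigger whose condition currently holds."""
    for t in triggers:
        if t.condition(state):
            t.action(state)
```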
Simulation work flow
[Figure: data flow. User input provides a scenario configuration (number of teams, number of WIs, complexity of dependencies, etc.) to the Scenario Generator, which produces a scenario. User input also provides a simulation configuration (resource allocation, prioritization algorithm, etc.) to the KSS Simulator, which runs the scenario and outputs simulation results and performance indicators.]
Questions?
• Rich Turner
Backups
Demo: Scenario generator & KSS simulator
Demo: resource configuration
Demo: results
Results: complex scenario
[Chart: value delivered (0-60,000) vs. time (0-80) under three scheduling policies: KSS, value-neutral (random selection), and LIFO]
Results: complex scenario
[Chart: number of suspended tasks (0-35) vs. time (0-80) for KSS, value-neutral (random selection), and LIFO]
Results: complex scenario - total time spent
[Chart: total schedule in calendar days (axis 61-71) for KSS, value-neutral (random selection), and LIFO]
Results: complex scenario - total effort
[Chart: total effort in person-days (axis 440-600) for KSS, value-neutral (random selection), and LIFO, with a reference line for the effort required if there are no interruptions]
Results: capability completeness
[Chart: number of 100% complete capabilities (0-30) for KSS, value-neutral (random selection), LIFO, and FIFO]