Post on 01-Jan-2016
Embedded Software
Original Paper by: Edward A. Lee
eal@eecs.berkeley.edu
Presentation and Review by: Chris A. Mattmann
mattmann@usc.edu
What is Embedded Software?
Software
– A traditional view of software is that of a black-box function: input is given, and some type of output is produced
– Essentially, it is just a transformation of a given data set into a result data set
– From a high-level view, software is not concerned with anything other than the output of the black-box function (time and space complexity are a concern, but are usually ignored if the program “works”)
Embedded Software
– More than a specialization of software, embedded software adds real-world input (touch sensors, wireless network availability, etc.) to the software paradigm, as well as real-time constraints
– Embedded software is not usually a scalable product, due to the rigorous demands on the system
– Solutions are usually built with lower-level APIs instead of higher-level language constructs (i.e., hand-writing assembly instead of writing in a more robust language like C or C++ and then compiling the software down to assembly)
– Lower-level APIs are used because programmers are not able to take advantage of framework libraries, which are not machine independent; they are, in fact, very machine dependent
– Machine-independent software libraries and languages rely heavily on resources that are more common in desktop PCs than in the mobile environments where embedded software thrives. One does not expect a 40-gigabyte hard drive on a Palm Pilot!
– A lot of framework library code also relies on certain processor designs (i.e., processors that take advantage of caching schemes and next-instruction prediction), which simply do not make sense on a mobile computing device. General Electric is not going to put an L2 processor cache into a toaster microprocessor and raise the cost of toasters by $25; there is no need for that kind of power in something that simply has to output toast at a given time, and it simply isn’t good business
Most systems that claim to be real time... well... aren't
We need to redefine what we mean by “real time”
– We are not talking about “real time” in the operating-system sense. In an operating system, “real time” only refers to a process execution priority assigned by user specification (i.e., root’s processes on UNIX tend to get more CPU time than joebobuser’s)
– In most common PC software, “real time” usually refers to some type of user interaction through a GUI. Even this type of “real time” processing does not meet the requirements of embedded software; it is still passive, in that it cannot proceed unless it has some sort of user input
Embedded software is active
– User input can be given, but is not REQUIRED to maintain program flow
– Embedded software determines input parameters on the fly based upon need, rather than instruction
– This is essentially why most embedded software is so machine specific: machine-specific code is VERY fast, because it takes advantage of all of the hardware without abstracting interfaces into an object-oriented design
– Object-oriented embedded systems are still very new. Most designers are skeptical of using object-oriented design in embedded systems because it adds a layer of behind-the-scenes processing to the system
– On home PCs this slowdown is usually acceptable, and in some cases it is even corrected by the hardware advantages mentioned before (caching, early instruction determination, branching, etc.). This, however, is not practical on mobile systems, because that type of hardware technology is too expensive in both money and physical space (there is no room on a Palm Pilot for a Pentium 4 processor)
Limits of Synchronization
Embedded software needs to react concurrently with real-world systems
– Concurrency requires some type of synchronization and process scheduling
– Synchronization comes at a cost. Traditional synchronization primitives such as locks, semaphores, and monitors are not implemented in hardware; they are usually software abstractions. So, because embedded software systems are usually written with low-level hardware tools, synchronization objects usually aren’t available
– Even if you have synchronization, it still might not work. The real-world time requirements for user input, and the problem complexities, are too much even for traditional software synchronization
– Let’s try to get synchronization help from the programming language. Some programming languages have synchronization built in, but this is usually too costly because it requires the code to be incrementally compiled as needed, which is too resource-expensive on embedded systems
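The claim that locks and the like are software abstractions can be sketched in C: everything beyond a single hardware atomic instruction is plain software. This is a minimal illustration, not production embedded code, and all names (spin_lock, increment_shared, etc.) are my own:

```c
/* A spinlock built from one hardware atomic (test-and-set).
 * Illustrates that the "lock" itself is a software abstraction. */
#include <stdatomic.h>

typedef struct {
    atomic_flag flag;   /* the one bit of hardware-supported atomicity */
} spinlock_t;

static void spin_lock(spinlock_t *l) {
    /* Busy-wait until the flag was previously clear; the waiting
     * policy is pure software layered over the atomic exchange. */
    while (atomic_flag_test_and_set(&l->flag))
        ;  /* spin */
}

static void spin_unlock(spinlock_t *l) {
    atomic_flag_clear(&l->flag);
}

/* A shared resource protected by the lock. */
static spinlock_t counter_lock = { ATOMIC_FLAG_INIT };
static int shared_counter = 0;

int increment_shared(void) {
    spin_lock(&counter_lock);
    int v = ++shared_counter;   /* critical section */
    spin_unlock(&counter_lock);
    return v;
}
```

Even this tiny lock costs cycles on every acquisition, which is the "synchronization comes at a cost" point above.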
Interaction with the world
Embedded systems should not halt
– The ability to recover from network outages and partial answers, along with fault tolerance, are essentially viewed as key elements of the system
Embedded systems expose interfaces
– This concept is not entirely native to embedded systems; it is more an inherited property of software systems on the whole, and of software engineering principles
– Interfaces expose procedures that allow the outside world to interact with the software system. These procedures are finite computations: method X takes parameters a, b, c and produces result d
Lee argues in his paper, though, that embedded systems are more like processes than procedures
– Lee defines processes as “non-terminating computations that transform streams of data”
– In his cellular speech encoder example, he claims that it is “artificial” to think of the speech coder in terms of finite computations
– In actuality, procedures can be infinite computations too, even though their temporal space may be terminating. This is because processes can be thought of as interim procedures which are serviced to the outside world by a non-terminating procedure that WAITS on certain input parameters to be TRUE or FALSE. In essence, processes are just a special class of procedures
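A rough C sketch of this procedure-vs-process distinction, with hypothetical names (encode_sample as the finite procedure, encode_stream as the stream-processing wrapper that would, in a real encoder, never return):

```c
/* Contrast: a finite procedure vs. a process over a stream. */
#include <stddef.h>

/* The finite procedure: one sample in, one sample out. */
int encode_sample(int sample) {
    return sample / 2;          /* stand-in for real speech coding */
}

/* The "process": a loop that applies the procedure to a stream.
 * Here the stream is a finite array so the sketch terminates; in a
 * real speech coder this loop waits on input and runs forever. */
void encode_stream(const int *in, int *out, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = encode_sample(in[i]);
}
```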
Components and Composition of Processes
Processes can be thought of as components
– These components interact with their respective framework (the operating system, for example), and interact with each other using monitors, semaphores, message passing, RPC, rendezvous: essentially, whatever is supported by the framework
– A component exposes its interface through procedures
– Process1 + Process2 = ???
What is a composition of 2 processes?
– Now the interface is not as clearly defined
– Conclusion: Process1 + Process2 != Process Z
– Lee states that the reason 2 processes cannot be composed in the most general case is that we (as Process1 in our example) do not know the interface of each aggregate process in the composition (in our example, Process2’s interface)
– To encode this type of information, we would need some type of service description, or ontology, of the procedures in Process N, along with what parameters each procedure takes, what typed format they need to be in, and what (if any) value we expect back
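One way to picture such a service description is a table a component publishes about its own procedures, which a would-be composer can inspect. This is a sketch of the idea only; the names (service_desc, can_compose, the procedures listed) are invented for illustration:

```c
/* A component advertises its procedures and their types, so another
 * process can decide whether composition is even possible. */
#include <stddef.h>
#include <string.h>

typedef enum { T_NONE = 0, T_INT, T_FLOAT } param_type;  /* zero-fill = T_NONE */

typedef struct {
    const char *name;        /* procedure name */
    param_type  params[4];   /* parameter types, T_NONE-terminated */
    param_type  result;      /* return type */
} service_desc;

/* Process2's advertised interface (hypothetical services). */
static const service_desc process2_services[] = {
    { "set_gain",    { T_FLOAT, T_NONE }, T_NONE },
    { "read_sensor", { T_INT,   T_NONE }, T_INT  },
};

/* Process1 checks, before composing, whether a needed procedure
 * exists and returns the type it expects. */
int can_compose(const char *proc, param_type want_result) {
    size_t count = sizeof process2_services / sizeof *process2_services;
    for (size_t i = 0; i < count; i++)
        if (strcmp(process2_services[i].name, proc) == 0)
            return process2_services[i].result == want_result;
    return 0;  /* unknown interface: composition is undefined */
}
```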
Types and Temporal Execution Constraints
Let’s touch on object-oriented design again, and its problems with embedded systems
Type checking
– Object-oriented systems generally enforce strict type-checking constraints, but also give the systems programmer the ability to create abstract data types
– Embedded systems do not necessarily define procedural parameter types at compile time, which is usually one of the requirements of object-oriented systems (and even in the ones where it is not, the flexibility of skipping compile-time type checking comes at a big resource cost: run-time type checking essentially means SLOWDOWN)
Temporal execution
– The order of procedure invocation is not encoded within the general object-oriented paradigm (OOP)
– What Lee does not mention, however, is that it is possible to “add” that specification to OOP (for example, as Java does with its event-driven applet API: whenever an applet starts, the init() method is called BEFORE the run() method, and so on)
– I believe he left that out, though, because it is evident that really ANY specification can be added to really ANY framework; as always, at a COST of resources
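The "init before run" temporal constraint can be bolted onto a component in plain C as well, at exactly the kind of run-time cost described above (an extra check per call). A toy sketch with hypothetical names:

```c
/* A component that enforces a temporal-order specification:
 * comp_init() must be called before comp_run(). */
typedef struct {
    int initialized;   /* has init() happened yet? */
    int value;
} component;

void comp_init(component *c) {
    c->initialized = 1;
    c->value = 0;
}

/* Returns -1 if the temporal constraint is violated; otherwise
 * performs the component's normal step. The if-check is the
 * run-time "COST of resources" the slide mentions. */
int comp_run(component *c) {
    if (!c->initialized)
        return -1;          /* ordering violation detected at run time */
    return ++c->value;
}
```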
Event-Driven Systems and Object-Oriented Design
Embedded systems are event-driven
– Procedure invocation is determined by real-time data, not priority
– In fact, procedures in embedded systems do use priority in a sense: they can subsume each other (see Ref. Rodney Brooks, MIT), or essentially take control of certain resources. This is called pre-emptive control
– But since all procedures run concurrently in embedded systems, the only things procedures control through priority are system resources and, essentially, the dynamic invocation/de-invocation of other procedures
OO is a great software engineering paradigm; let’s use it in embedded systems by making OO event-driven
– It can be done: procedures can be composed (i.e., Pr1 + Pr2 DOES = Pr3), unlike processes
– Procedures are often seen as passive; we need to think of procedures as “active objects” (AGENTS, for example)
– This is all great, but we really need to make procedures compositional
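The "Pr1 + Pr2 = Pr3" claim is easy to demonstrate for procedures, because their interfaces (signatures) are fully known. A minimal sketch; pr1, pr2, and compose2 are invented names:

```c
/* Procedure composition: two procedures with known int -> int
 * signatures compose into a third well-defined procedure. */
typedef int (*proc)(int);

static int pr1(int x) { return x + 1; }   /* Pr1 */
static int pr2(int x) { return x * 2; }   /* Pr2 */

/* Apply g after f: the generic composition operator. */
static int compose2(proc f, proc g, int x) {
    return g(f(x));
}

/* Pr3 = Pr1 composed with Pr2; its interface is again int -> int. */
int pr3(int x) { return compose2(pr1, pr2, x); }
```

The same trick fails for processes precisely because, as argued earlier, a process does not expose a signature we can line up like this.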
Hardware Issues and Scheduling
Embedded systems: move synchronization to hardware
– Identify the problem and decide whether to implement system needs in hardware or in software
– If a requirement of your system is low latency, then try to implement many of the system needs in hardware, because it is fast
– If your system can be modeled as a sequential process, then it should be implemented mainly in software, because of the constructs that software provides
– It turns out that embedded systems combine the system needs of both software and hardware
Real-time scheduling and operating systems
– Processes’ resource needs are not known at design time, so rather than preset OS priority scheduling, we need real-time, resource-dependent scheduling and event-driven scheduling
Need for a Better Software Model
Embedded systems have forced a need for a better temporal software model
– Priority-based scheduling / OO design
– Flawed because it can’t handle composition of priorities
– The priority inversion problem: Preempt(Plow, Pmid) -> Preempt(Plow, Phigh); Phigh waits for Pmid when it should not have to
– With some help it can handle composition, but it does not satisfy all cases
– Real-time scheduling / OO design: CORBA, ROOM
– These still depend on programmer-tuned parameters that are known at compile time, and are not adaptive to “self-system tuning”
– Actor-oriented design: “actors” are objects that compute things concurrently
– Actors can incorporate a thread of control, but are not required to (according to Lee)
– Actors use message ports to communicate with other actors
– The actors’ interactions with each other are referred to as the “model of computation”
– This model is good for embedded systems because it allows real-world events to shape the outcome of the system without imposing a wealth of constraints
– Example: lost messages do not necessarily take down the system; they are expected
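The priority inversion scenario above can be made concrete with a deterministic toy simulation (this is my own sketch of the textbook scenario, not a real RTOS): Plow holds a lock Phigh needs, Pmid needs no lock and preempts Plow, so the highest-priority task finishes last.

```c
/* Toy scheduler simulation of priority inversion.
 * Writes the order in which tasks finish into `order`
 * (e.g. "MLH") and returns how many tasks finished. */
int simulate_inversion(char *order) {
    int lock_held_by_low = 1;            /* Plow acquired the lock first */
    int low_left = 2, mid_left = 2, high_left = 1;  /* work units */
    int n = 0;
    while (low_left || mid_left || high_left) {
        if (high_left && !lock_held_by_low) {
            /* Phigh can only run once the lock is free */
            if (--high_left == 0) order[n++] = 'H';
        } else if (mid_left) {
            /* Pmid needs no lock, so it preempts Plow while Phigh blocks */
            if (--mid_left == 0) order[n++] = 'M';
        } else {
            /* Plow finally runs and releases the lock when done */
            if (--low_left == 0) { order[n++] = 'L'; lock_held_by_low = 0; }
        }
    }
    order[n] = '\0';
    return n;
}
```

The finish order comes out M, L, H: the mid-priority task beats the high-priority one, which is exactly the inversion.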
Actor-Oriented Systems (cont.)
Actor-oriented systems can be defined through an abstract syntax or a concrete syntax
– Abstract syntax defines the general relations among components in the system (actors, more generally, are components)
– Concrete syntax is a formalization of several abstract syntaxes and how they fit together
Semantics of interaction within the system: components and connectors
– A component is a state
– A connector is a transition between states
– This can be thought of as a “model of computation”
Define general procedure/service descriptions
– This promotes polymorphism, because the semantics say nothing about the implementation; they speak only of the parameters and the method name
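A minimal sketch of an actor with a message port may help here: other actors post messages to the port asynchronously, and the actor consumes them on its own schedule. The ring-buffer port and all names are illustrative choices, not Ptolemy's actual API:

```c
/* An actor with one input message port (a small ring buffer). */
#include <stddef.h>

#define PORT_CAP 8

typedef struct {
    int buf[PORT_CAP];
    size_t head, tail;   /* ring buffer indices */
} message_port;

typedef struct {
    message_port in;     /* this actor's input port */
    int state;           /* result accumulated by its computation */
} actor;

/* Another actor (or the framework) delivers a message.
 * Returns 0 if the port is full: the message is simply dropped,
 * matching the "lost messages are expected" point above. */
int port_send(message_port *p, int msg) {
    if ((p->tail + 1) % PORT_CAP == p->head) return 0;
    p->buf[p->tail] = msg;
    p->tail = (p->tail + 1) % PORT_CAP;
    return 1;
}

/* One "firing": the actor reacts to a pending message, if any. */
int actor_step(actor *a) {
    if (a->in.head == a->in.tail) return 0;   /* no event: stay idle */
    a->state += a->in.buf[a->in.head];        /* the actor's computation */
    a->in.head = (a->in.head + 1) % PORT_CAP;
    return 1;
}
```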
What are models of computation?
Models of computation define interaction among components
– We need concurrency to help components interact
– We can’t do it all in hardware; we need some sort of software abstraction, because the components could be distributed (possibly over different computers, so there is no physical shared memory)
– Rendezvous
– Synchronous message passing
Hardware + software abstraction = model of computation
– New programming languages arise to better suit these computational models; this makes the models very machine dependent
– The models can also coordinate modular components written in more widely used languages, as long as a standard interface is defined and the connections between the modules are well known
Examples of Models of Computation
Dataflow
– Actors are atomic computations that are invoked when input data is present
– Connectors represent dataflow between “producer” actors and “consumer” actors
Time-triggered
– A model in which events are generated according to some “clock,” which is a temporal component of the internal subsystem
– Once an event is generated, a component responds to the event by performing some computation
Synchronous/reactive
– Connections among components represent values that correspond to some global clock tick
– At different ticks, connections can have either valid data or invalid data
Discrete events
– Connections among components are events, which consist of a time and some value, placed on a timeline
– Popular for hardware specifications and telecommunications simulators
Process networks
– Components are concurrent processes or threads that communicate by asynchronous, buffered message passing
– Excellent for signal processing
Examples of Models of Computation (cont.)
Rendezvous
– Components are concurrent processes or threads that communicate by synchronous message passing
– An operation performed between 2 processes is one uninterruptible event
Publish and subscribe
– Components rely on event streams in which they register an interest
– When new events are generated, the components must communicate with some global server that gives the value of the event
Continuous time
– An attempt to find sets of linear equations that all satisfy a given time constraint
– Connections between components are continuous-time signals
Finite state machines (FSMs)
– Components are states of the system
– Connections between components represent conditions which must be met in order to transition from state n to state n+1
– FSMs are ultimately sequential
– Can be used to model control logic in embedded systems
– Can combine FSMs with other concurrent models of computation to come up with a hybrid theory: *charts (“starcharts”)
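The FSM model is easy to make concrete. Picking up the toaster example from earlier, here is a toy controller (the states, events, and names are all invented for illustration):

```c
/* FSM model of computation: components are states, connections are
 * guarded transitions. A toy toaster controller. */
typedef enum { S_IDLE, S_HEATING, S_DONE } state;
typedef enum { EV_LEVER_DOWN, EV_TIMER_EXPIRED, EV_REMOVE_TOAST } event;

/* The transition function: returns the next state, or stays in the
 * current state if the condition for leaving it is not met. */
state fsm_step(state s, event e) {
    switch (s) {
    case S_IDLE:    return (e == EV_LEVER_DOWN)    ? S_HEATING : s;
    case S_HEATING: return (e == EV_TIMER_EXPIRED) ? S_DONE    : s;
    case S_DONE:    return (e == EV_REMOVE_TOAST)  ? S_IDLE    : s;
    }
    return s;
}
```

Note how sequential this is: exactly one state is current at any time, which is why FSMs are usually combined with a concurrent model of computation (the *charts idea) for full systems.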
Embedded Systems and Time
Ultimately, embedded systems need to deal with time and the ordering of events
– Asynchronous messages with timestamps? Shared counters?
– Shared counters run into the barrier problem
– Vector time? It only works if the number of processes is fixed
– There is no one right way to do it, and really each connection between components may use its own protocol (rendezvous, synchronous message passing with message timeouts, etc.)
– It would be nice to negotiate the protocol at run time
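To make the vector-time option (and its fixed-process-count limitation) concrete, here is a minimal vector clock sketch. The fixed NPROC is exactly the restriction noted above; the function names are my own:

```c
/* Vector time for ordering events among NPROC (fixed!) processes. */
#define NPROC 3

typedef struct { int c[NPROC]; } vclock;

/* A process ticks its own component on a local event. */
void vc_local_event(vclock *v, int pid) {
    v->c[pid]++;
}

/* On message receipt, take the componentwise max of the sender's
 * clock, then tick the local component. */
void vc_receive(vclock *v, const vclock *msg, int pid) {
    for (int i = 0; i < NPROC; i++)
        if (msg->c[i] > v->c[i]) v->c[i] = msg->c[i];
    v->c[pid]++;
}

/* Returns 1 if event a happened-before event b: every component of a
 * is <= the corresponding one in b, and at least one is strictly less. */
int vc_before(const vclock *a, const vclock *b) {
    int strictly = 0;
    for (int i = 0; i < NPROC; i++) {
        if (a->c[i] > b->c[i]) return 0;
        if (a->c[i] < b->c[i]) strictly = 1;
    }
    return strictly;
}
```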
Case Study: Ptolemy project
Ptolemy is an embedded-systems project at Berkeley that emphasizes the composition of heterogeneous computational models
– The composition is possible because each component computational module is “domain polymorphic”
– “Domain polymorphism” is a property of embedded-system components whereby a component has a well-defined behavior in one particular domain or system implementation, and possibly a very different behavior in another domain
– Components are useful in several domains, and their exposed interfaces work differently in each domain
– An “application” is a set of composed actors, which are connected and assigned a particular domain
– The system is an embedded system, and very robust
Improvements
– Lee argues that the system can be improved by adding type checking to each computational model
Last-Minute Notes on Embedded Systems
Component reflection
– Components store service descriptions of their exposed interfaces
– Metrics and analysis can be performed to verify system integrity on the fly
END OF PRESENTATION