


Software architecture is the body of instructions, written in a specific coding language, that controls the structure and interactions of software modules. It embodies the structure of a system and provides the framework within which the software modules perform the functions of the system. The design of the interfaces between modules and the constraints on the size and execution of the modules make it easier or harder to integrate them into a working software system. The architecture of the system enforces the constraints on the modules, and the properties of capacity, throughput, consistency, and module compatibility are realized at the architectural level.

Within the system are architectural modules, either within the core operating system, in the middleware, or custom designed, that govern how the processing modules work together to perform the system functions. Applications call these modules through special interfaces called Application Programming Interfaces (APIs). In the early days of computing these were simply the system calls made to operating system functions, like 'dispatch a program.' The communication architecture is the code that governs the interactions of the processing modules with the data and with other systems. The data architecture is the code that controls how the data files are structured, filled with data, and accessed. Once the architecture is established, functions may be assigned to processing modules and the system may be built. Processing modules can vary greatly in size and scope, depending on the function each performs, and the same module may differ across installations. In every case, however, the processing architecture, communication architecture, and data architecture constitute the software architecture that is the system's unique and unchanging "fingerprint."

Figure: Software architecture as the framework through which application calls reach performance, error recovery, OA&M, features, problem handling, and functions.

When systems share a common architecture, they are the same, regardless of superficial differences in name, site-specific configurations, or adjunct functions. When several sites use software systems with a common architecture, they are considered to be using the same software system even though they may do somewhat different things. No two installations of a software system are exactly the same, despite their shared architecture. When a system is installed at two or more sites, localization is always required. Tables are populated with data to configure the software to meet the needs of specific customer sites. The customer may have special needs that require more than minor table adjustments. Customization of some modules may be required. New modules may be added.

Alternatively, two systems with differing architectures can perform the same function in alternative ways. These would be different systems. Function and output do not define a system. Only its architecture can describe and identify a system.

Case Study: A Two-Pronged Approach

In the late 1980s, Bell Laboratories needed to develop a system to control a very critical congestion situation. The system was called NEMOS. It used a distributed database with several unique database schemas. Some of the data overlapped. The schemas were designed to speed processing time. Architecture reviews revealed that this architecture was extremely complex and broke new theoretical ground. Since there was no history of similar development to provide a guideline and the need was urgent, Bell Laboratories decided to develop a second system with a different architecture in parallel to mitigate the architectural risks identified. The insurance system was also called NEMOS but instead used an integrated database architecture, meaning a single database for everything. The result was two systems with the same name, performing the same function. The system with the distributed database architecture could not be scaled to handle network growth and was discarded. The system with the integrated database architecture was successfully deployed and demonstrated its robustness when it managed telephone calls during the famous 'World Series Earthquake' in California. (On October 17, 1989 at 5:04 P.M., a major earthquake struck the San Francisco Bay area. The earthquake was nicknamed the World Series Earthquake because it occurred just before a World Series baseball game was scheduled to begin in Candlestick Park. Millions of people witnessed the motion of the earthquake on television. Sixty-seven people lost their lives, and property damage was estimated at $6 billion, but the telephones worked.)

One of the most common failures of architecture design for software systems is not attending to features needed by system administrators. They are responsible for:

a. Training users
b. Configuring the computer to run the software
c. Defining network requirements for the network manager


d. Setting up the data files
e. Maintaining adequate response time
f. Troubleshooting

Too often problems arise during system startup. Few of these problems are identified as software errors because the system administrator is intimidated by the new system. For example: in the request-response model (below), a number of host computers are connected to a local area network (LAN) using Ethernet and share the transmission medium. Some of the host computers are servers and many more are clients. With one server and one client connected, the system worked fine. As clients were added, the system continued to operate satisfactorily until there was a sudden network hang and no messages could be transmitted between any server and client. This is an example of a conditionally stable system.

A system is conditionally stable as long as none of its critical resources is exhausted. Once one is consumed, the system crashes or hangs and no messages can be sent. Frequently there are early warning symptoms of slow response time or lost messages before the system halts; at other times there is no warning at all.

In this case, clients were added and the system performed to specifications. When the one hundred and first client was installed and sent a message, the system failed. No messages could be sent. There were so many message clashes on the transmission medium that the computers spent all their time resending lost messages. The system administrator had to turn off all the hosts, bring up the server, and limit the number of clients until a faster transmission medium could be installed.

Figure: Request-response model, with a server and multiple clients sharing a LAN.

With many computers sending messages, the likelihood of two messages trying to use the transmission medium simultaneously increases. Ethernet LANs detect clashes and abort transmission. When this happens, the applications can time out. Too many timeouts can lead to hangs or crashes. If n is the number of clients on a LAN and p is the probability that a given client is sending a message, the optimal number of clients to minimize clashes is n - s = 1/p, where s is the number of servers on the LAN. For example:

If a client sends 60 messages a minute, or one every second, and each message contains 1500 characters, then it takes 1500 characters x 8 bits/character / 1,000,000 bits/sec = 12 milliseconds to send one message. Allowing ample margin for capturing the line, it takes about 15 milliseconds per message. The probability of a message being on the line from one client is then 0.015, so the optimum number of clients is 1/0.015, roughly 66, or about 60 once the servers on the LAN are accounted for.
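The rule of thumb above can be turned into a few lines of code. The sketch below is a hypothetical helper, not from the text; it uses the example's numbers (a 1 Mbps LAN, 1500-character messages, one message per client per second, and one server).

# Sketch of the rule of thumb n - s = 1/p, where p is the probability that a
# given client is transmitting at any instant. Numbers follow the example above.
def optimal_clients(msg_chars, bits_per_char, line_bps, msgs_per_sec, servers=1, line_capture_margin=1.25):
    send_time = msg_chars * bits_per_char / line_bps        # seconds to send one message
    busy_time = send_time * line_capture_margin             # allow margin for capturing the line
    p = busy_time * msgs_per_sec                             # fraction of time a client is sending
    return int(1 / p) - servers

print(optimal_clients(1500, 8, 1_000_000, 1))                # about 65 clients on a 1 Mbps LAN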

Case Study: System Installation

A naïve system administrator could not start up a system that had been successfully tested and accepted by the customer's organization. She had two thick handbooks in her hands and could not find the command that would let her install the database. A video of this situation is on the web site. The developers believed their software was of high quality and was a great system. They documented it fully, and yet the system administrator was disgusted. There was such a difference of perception because naïve users were not used to bringing up the system. The commands to edit the configuration files were assumed to be well-known UNIX library commands. The system administrator was unfamiliar with these commands even though she had been trained in UNIX-based applications. The folklore about the basic steps needed to load the command files was not provided in the user manuals. The commands were listed in a reference book that had no index and was so thick that it was hard to handle and hard to find basic commands in. The first time I had to shut down a Windows system, I was stymied because 'everybody knows that you must click Start to shut down.'

Software Architecture Experience

Good software architecture is vital for successful software development. To have a good architecture there must be a software architect, as called for in Chapter 1. The architecture is the framework for all technical decisions and must be documented in a clear and concise way and communicated to everyone on the project. The process of developing the architecture begins once the prospectus is in hand, as shown in figure 2. A 'first cut' architecture is created during the requirements process to assure that the system is feasible. Once the requirements are complete, an architecture discovery review makes sure that all non-feature-based requirements are known. Then the architecture process begins. It is an iterative process that focuses on evaluating design alternatives and simplifying the system. Simplifications are possible when duplicate features are eliminated, when object classes match the problem domain, when existing libraries or components are used, or when algorithms are simplified. An architecture review at the end of the architecture process makes sure that the system can be built and that it will solve the problem. A specific goal of the review is to simplify the system. One way to measure the degree of simplification achieved in the architecture process is to measure the function points at the end of the requirements specification process and again at the end of the architecture process.

Magic Number:

The goal for the architecture process is to reduce the number of function points by 40% by:

a. eliminating redundancies


b. employing working components
c. simplifying designs
d. simplifying the interpretation of the requirements
e. dropping complex features

The architecture process continues throughout the development life of a software system to assure that the integrity of the architecture is preserved as new features and functions are added.

System Architecture Reviews

Figure: Architecture in a Project's Life Cycle. The architecture phase spans the prospectus, requirements, architecture, high-level design, and low-level design stages; it begins with a discovery review, iterates until consensus is reached at the architecture review, and carries through the life of the project.

It encompasses the requirements, architecture and high level design phases of the typical waterfall diagram. It also continues throughout the life of the project (someone continues to wear the architect's hat).

The Architecture Process

The ‘4+1 Architecture Approach’ provides a way to structure the architectural design for a software system. See http://www.rational.com/products/whitepapers/350.jsp for a paper describing the approach.

Figure 4+1 Architecture Approach


An Architectural Model

Software architecture can be thought of as a coherent set of abstract patterns guiding the design of each aspect of a larger software system. Software architecture underlies the practice of building computer software. In the same way as a building architect sets the principles and goals of a building project as the basis for the draftsman's plans, so too, a software architect sets out the software architecture as a basis for actual system design specifications, per the requirements of the client.

Software architecture is the result of assembling a certain number of elements in a structure that satisfies the functionality and performance requirements of the system, as well as some other, non-functional requirements such as reliability, scalability, portability, and availability.

Software architecture deals with abstraction, with decomposition and composition, with style and esthetics. It is best described with a model composed of multiple views or perspectives, see figure:

• The logical view, which describes the functions the system will perform or an object model of the design,
• The process view, which captures the concurrency and synchronization aspects of the design,
• The physical view, which describes the mapping(s) of the software onto the hardware and reflects its distributed aspect,


• The development view, which describes the tools, processes, and environments used to compile, assemble, load, and change the software artifacts.

The description of an architecture can be organized around these four views and then illustrated by a few selected use cases, or scenarios based on business rules. These become a fifth view. This "4+1" view model allows stakeholders to understand the architecture. Systems engineers approach it from the physical view, then the process view. End-users, customers, and data specialists approach it from the logical view. Project managers and software configuration staff see it from the development view. Other sets of views have been proposed, but this one is the most useful.

Software Components

A software component has more than one definition. One is the development and application of tools, machines, materials, and processes that help to solve software problems. Another is a set of formal constraints on a software module. Software component technology builds on prior theories of the software object. An object is a unique concrete instance of an abstract data type (that is, a conceptual structure including both data and the methods to access it) whose identity is separate from that of other objects, although it can "communicate" with them via messages. Parnas explained the need for modularity and the effectiveness of information hiding. He defined a software structure with minimum connections to other modules, called coupling, and a maximum of cohesion, meaning that the subtleties of the software design and data interactions are hidden from other modules. This is called encapsulation; it laid the foundation for object-oriented design and includes the concept of abstraction, which is the ability of a program to ignore some aspects of the information it manipulates. Each object in the system serves as a model of an abstract "actor" that can perform work, report on and change its state, and "communicate" with other objects in the system, without revealing its structure. It adheres to some interface description language, which is a computer language or simple syntax for describing the interface of a software component. It is essentially a common language for writing the "manual" on how to use a piece of software from another piece of software, in much the same fashion that a user manual describes how to use a piece of software to the user. The UNIX community calls the structure of this information 'man' pages. Component technology encapsulates software functionality in the form of an object, a unique conceptual structure including both data and the methods to access it, whose identity is separate from that of other objects although it can "communicate" with them using messages. On some occasions an object can be conceived of as a subprogram that can communicate with others by receiving or giving instructions based on its, or the other's, data or methods. Data can consist of numbers, literal strings, variables, and references. An object is sometimes thought of as a region of storage. The following set of constraints defines the component:

a. Interfaces are always through formal structures that normalize data definitions as defined in the Jackson Design Methodology. The key here is that the notion of subprogram is not permitted outside the boundary of the component. If two programs need to communicate directly through data structures they become part of the same module. The component boundary is violated.


b. The execution time and space of the component are bounded through the use of rejuvenation technology and boundary conditions are set limiting the domain of execution of the component. See Sha’s pioneering work on bounding software execution on the web site for this book.

c. The dynamics of the component are stochastic and periodic because the component's states are reinitialized on a regular basis (see the sketch after this list).

d. The module is limited in size to reduce defects, as explained by Hatton, to the range of 100 to 1000 instructions.

e. System and reliability testing are performed for 10 times the rejuvenation period to reduce the likelihood of executing defect states thereby causing hangs or crashes. Special tests are needed to assure that the component is stable within its constrained execution domain. These tests assure that a small input does not induce a large unbounded output.

f. A module can only be a component after its third release and 8-10 months of operation when the failure rate becomes constant with time (ref: Dependability paper)

g. A component is documented with a performance worksheet that specifies what the component does, its domain of execution, its inputs and outputs including the data value bounds and any other special constraints. These specifications are best placed in the preface of the source listing for the component.
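The sketch below illustrates constraints (b) and (c) in miniature. It is an assumption-laden illustration, not a published component framework: a hypothetical wrapper that reinitializes (rejuvenates) the component's state on a fixed period and flags any call that exceeds its execution-time budget.

import time

class BoundedComponent:
    # Illustrative only: a single-threaded component checked cooperatively.
    def __init__(self, init_state, rejuvenation_period_s=3600.0, time_budget_s=0.050):
        self._init_state = init_state
        self.rejuvenation_period_s = rejuvenation_period_s
        self.time_budget_s = time_budget_s
        self._rejuvenate()

    def _rejuvenate(self):
        # Constraint (c): component state is reinitialized on a regular basis.
        self.state = dict(self._init_state)
        self._last_rejuvenation = time.monotonic()

    def call(self, func, *args):
        if time.monotonic() - self._last_rejuvenation > self.rejuvenation_period_s:
            self._rejuvenate()
        start = time.monotonic()
        result = func(self.state, *args)
        # Constraint (b): bound the execution time of each call.
        if time.monotonic() - start > self.time_budget_s:
            raise RuntimeError("component exceeded its execution-time budget")
        return result

def record_event(state, name):
    state[name] = state.get(name, 0) + 1
    return state[name]

component = BoundedComponent({}, rejuvenation_period_s=60.0, time_budget_s=0.010)
print(component.call(record_event, "alarm"))   # -> 1
print(component.call(record_event, "alarm"))   # -> 2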

Software components provide a common and convenient means for inter-process communication, which is the exchange of data between one process and another, either within the same computer or over a network. It implies a protocol that guarantees a response to a request. Examples are UNIX sockets and Microsoft Windows' DDE. There are different forms of software components, such as CORBA and .COM.

The figure shows two schematics for software components. The top one is the UML diagram and the bottom is the schematic commonly used by Microsoft's .COM objects. The "lollipops" sticking out from the components are their interfaces.


Douglas McIlroy, professor of computer science, mathematician, engineer, and famous Bell Labs programmer, invented:

The pipes and filters architecture of UNIX. Pipes and filters are software design patterns. Design patterns are standard solutions to common problems in object-oriented software design. Algorithms are not thought of as design patterns, since they solve implementation problems rather than design problems. Typically, a design pattern is thought to encompass a tight interaction of a few classes and objects.

The concept of software components. His seminal paper is in the appendix to this chapter (or on the web site).

Several widely used UNIX tools such as spell, diff, sort, join, graph, speak, and others.

The early development of what is believed to be one of the most influential operating systems in history was unique. Its goal was to demonstrate the portability of software among different hardware architectures. It was successful and was the first demonstration of portable software, implementing the operating system as a layer of software separating the application from the unique designs of a supplier's computer. It contained the concept of the pipe, which signifies that the output of one program feeds directly as input to another program. In contrast, a file in a computer system is a stream (sequence) of bits stored as a single unit, typically in a file system on disk or magnetic tape. While a file is usually presented as a single stream, it most often is stored as multiple fragments of data at different places on a disk (or even multiple disks). One of the architectural services of operating systems is to organize files in a file system. The pipe is one such service and provides input or holds the output. In the pipe metaphor, a file is 'a container.'


A Unix shell, also called "the command line", provides the traditional user interface for the Unix operating system and uses the pipe character '|' to join programs together. A sequence of commands joined together by pipes is known as a pipeline, which represents the concept of splitting a job into sub-processes in which the output of one sub-process feeds into the next, much like water flows from one pipe segment to the next. To support this mechanism, all UNIX tools have access to three distinct special files:

stdin—the standard input file, stdout—the standard output file, and stderr—the standard error file.

By joining one tool's stdout to another tool's stdin, a pipeline is formed. Errors are accumulated in stderr.

A filter program is a UNIX program that forms part of a pipeline of two or more UNIX tools. Generally a filter program will read its standard input and write to its standard output and do little else. Conventionally a filter program distinguishes itself by being fairly simple and performing essentially one operation, usually some sort of simple transformation of its input data. An example of a pipeline: cat * | grep "alice" | grep -v "wonderland" | wc -l

will print out the number of lines in all files in the current directory which contain the text "alice", but not the text "wonderland".

The pipeline has four parts:
cat * concatenates the text of all files to its stdout;
grep "alice" reads its stdin as lines, and prints on its stdout only those lines which contain the word "alice";
grep -v "wonderland" reads its stdin and prints on its stdout only those remaining lines which do not contain the word "wonderland" (note that -v inverts the selection);
wc -l counts the lines on its stdin, and prints a line count on its stdout.
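The same joining of one tool's stdout to the next tool's stdin can also be done programmatically. The following is a minimal sketch in Python, assuming a POSIX system with cat, grep, and wc on the PATH; it is illustrative only, not part of any UNIX distribution.

import glob
import subprocess

# Reproduce:  cat * | grep "alice" | grep -v "wonderland" | wc -l
files = glob.glob("*")                      # stand-in for the shell's * expansion
cat = subprocess.Popen(["cat", *files], stdout=subprocess.PIPE)
grep1 = subprocess.Popen(["grep", "alice"], stdin=cat.stdout, stdout=subprocess.PIPE)
grep2 = subprocess.Popen(["grep", "-v", "wonderland"], stdin=grep1.stdout, stdout=subprocess.PIPE)
wc = subprocess.Popen(["wc", "-l"], stdin=grep2.stdout, stdout=subprocess.PIPE)

# Close our copies of the intermediate pipes so upstream tools see end-of-file
# (and SIGPIPE) correctly if a downstream stage exits early.
cat.stdout.close()
grep1.stdout.close()
grep2.stdout.close()

print(wc.communicate()[0].decode().strip())  # the line count, as the shell pipeline prints it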

Microsoft paved the way for actual deployment of component software with Object Linking and Embedding (OLE) technology. It was initially used primarily for copying and pasting data between different applications, especially using drag and drop, as well as for managing compound documents. It later evolved to become an architecture for software components known as the component object model (.COM), and later DCOM.

Component Object Model (.COM) is a Microsoft technology for software components, also known as ActiveX. It is used to enable cross-software communication. Although it has been implemented on several platforms, it is primarily used with Microsoft Windows. The basic idea in object-oriented programming (OOP) is that software should be written according to a mental model of the actual or imagined objects it represents. OOP and the related disciplines of object-oriented design and object-oriented analysis focus on modeling real-world interactions and attempting to create 'verbs' and 'nouns' which can be used in intuitive ways, ideally by end users as well as by programmers coding for those end users. Software component architectures, by contrast, make no such assumptions, and instead state that software should be developed by gluing prefabricated components together. They accept that the definitions of useful components, unlike objects, can be counter-intuitive. This notion has led to many academic debates about the pros and cons of the two approaches. We consider component technology to have evolved from object-oriented technology. It takes significant effort and awareness to write a software component that is effectively reusable. The component needs:

a. to be fully documented;
b. more thorough testing;
c. robust input validity checking;
d. to pass back useful error messages as appropriate;
e. to be built with awareness that it will be put to unforeseen uses

Magic Number:

The cost of making a module a component is 2.6 times the cost of making the original module. Therefore, a module needs to be used three times to pay off the investment in making it a component.

Component Interfaces

Fu-Tin Man wrote a Brief History of TL1. It shows the realization of critical system interface structures within an architecture that became the standard for the telecommunications industry:

A feature article published in Telephony1 states that, "Most telecom network elements in North America today can be managed using TL1, and no serious telecom management system developer can ignore it." In view of the renewed interest in TL1 in the telecommunications industry, this article presents the history of the origin of TL1.

Mission

Prior to the 1984 divestiture, Program Documentation Standards (PDS) was the predominant operations language used to centrally manage the network equipment in the "Bell System". It was used by the Switching Control Center System (SCCS) to manage the Lucent Technologies (formerly AT&T) family of No. 1A ESSs and early members of the Digital Access and Cross-connect System (DACS) family.

In early 1985, Bellcore commissioned a task force2 to quickly recommend a non-proprietary set of operations messages to be exchanged between an operations system (OS) and a network element (NE) to conduct operations, administration, maintenance and provisioning (OAM&P) functions. The members of the task force were subject matter experts in operations requirements, TIRKS, integrated digital loop carrier (IDLC) systems, digital cross-connect systems (DCS), automated digital terminal systems (ADTS), transmission testing, and X.25 protocols.

1 Conor Dowling and Gerry Egan, “The Story of TL1 provides many lessons about the future of telecom management,” Telephony, Vol. 233, No. 9, September 1, 1997, pp. 34-36.

2 The author was the chair.



To help narrow the number of operations language candidates under consideration, the task force identified a few attributes for a viable operations language3 candidate including:

Cannot be owned by any one company except Bellcore for standardization
Should be well-known
Should have good documentation
Can be made available to the public domain in a short time frame
Can be developed by equipment vendors with little or no guidance
Preferably has a good track record

Guided by the above list of attributes, the task force arrived at two operations language candidates. The first one was Bellcore's Flexible Computer Interface Form (FCIF), which had been used among Bellcore-developed Operations Support Systems, such as between the components of the Facility Assignment and Control System (FACS)4, and between FACS and TIRKS. It was originally named the FACS Component Interface Format when it was conceived as part of the FACS architecture. It proved so robust that it was extended and used throughout the telecommunications industry. The second one was CCITT5 Man-Machine Language (MML).

Comparative Analysis

Below is a highlight of the analysis that compared the pros and cons of the two operations language candidates:

MML was one of the international standard languages adopted for network management in the late 1970s. It was documented as a family of Z.300 recommendations in the 1980 CCITT Red Book. Even though FCIF had been a working operations language among Bellcore-developed OSs, it did not have the same level of documentation.6

Both FCIF and MML were character-based, and were judged to be inefficient for OS-to-NE communication. However, since the command language dialog was a requirement for a local craft interface7, the cost of developing two interfaces in an NE, one for human and one for machine, was considered greater than the penalty accrued from inefficient machine-to-machine communication.

FCIF was viewed as even less efficient because the parameters in FCIF were keyword-defined, whereas those in MML could be either keyword-defined or position-defined. Keyword-defined parameters provide flexibility for specifying parameter values, whereas position-defined fields use fewer characters to specify parameter values but must appear in their pre-assigned positions.

3 Specifically, an operations language is only a language syntax, whereas an operations message contains both language syntax and semantics.
4 The previous description of the FCIF acronym was FACS Component Interface Form.
5 Now renamed International Telecommunications Union-Telecommunication (ITU-T).
6 A Bellcore special report, "FCIF Language Definition," SR-STS-002603, Issue 2, October 1993, was later published.
7 It is also a current American National Standard as documented in "OAM&P - G Interface Specification for Use with the Telecommunications Management Network (TMN)," ANSI T1.232-1993.



MML was very human-readable as its parameter fields were distinctly separated. FCIF was comparatively less human-readable than MML because of its nesting syntax that was designed to capture customer name, address, telephone numbers, etc.

Both FCIF and MML offered no semantics that could be re-used to support a generic set of OAM&P functions, but MML had several semantic definitions that helped in specifying semantics. For example, in its input commands, MML reserved the first parameter field for the command code (i.e., the action to be taken). It also assigned the word "REPORT" (abbreviated as REPT) for NEs to autonomously report their self-detected events and conditions.

While FCIF had a good track record and a wealth of support tools, MML had none that the task force was aware of at the time.

The following table summarizes the results of a comparative analysis between FCIF and MML.

Feature                              MML                  FCIF
Recognition                          International        Little known
Documentation                        1980 CCITT Red Book  Little
Machine communication efficiency     Poor                 Poor
Human communication efficiency       Excellent            Good
Availability of reusable semantics   Little               None
Working track record                 None                 Excellent
Software support tools               None                 Abundant

Message Formatting

After selecting MML as the operations language, the task force focused on eliminating its human-oriented features without violating the MML recommendations, to the extent possible. Examples of these are white space, linefeed, and carriage return. The next task involved formulating the formats for input command, output response, and autonomous message.

The formatting task for an input command involved assigning mandatory parameter fields following the command code in an input command. These parameter fields are as follows:

Action-modifier-modifier:TID:AID:CTAG:data,data,....:data,data,....;

where
TID = Target identity of NE
AID = Access identity of NE component
CTAG = Correlation tag of input command
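As an illustration only (the helper below is hypothetical, not a Bellcore parser or builder), a few lines of Python can assemble a string with this general shape from the fields just defined:

def build_tl1_command(action, modifiers, tid, aid, ctag, *data_blocks):
    # Assemble action-modifier-modifier:TID:AID:CTAG:data,...:data,...;
    verb = "-".join([action, *modifiers])                   # e.g. ENT-CRS-T1
    fields = [verb, tid, aid, ctag]
    for block in data_blocks:
        fields.append(",".join(block))
    return ":".join(fields) + ";"

# Hypothetical example values, for illustration only.
print(build_tl1_command("ENT", ["CRS", "T1"], "NE01", "SLOT-1-PORT-3", "1001",
                        ["FROM=1", "TO=2"]))
# -> ENT-CRS-T1:NE01:SLOT-1-PORT-3:1001:FROM=1,TO=2;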


At a later time, the parameter field following CTAG was designated as the general block (GB), which is reserved for such data as the scheduled entry of sequences of recent changes to a switch NE.

The formatting task for an output response involved using CTAG and normal (COMPLD) or error (DENY) response in addition to a TID, calendar date (YYYY-MM-DD) and 24-hour-clock time (HH:MM:SS). It also includes using four-letter error codes to help explain the reason for the error response. The first letter in an error code is designated for an error category. For example, IXXX signifies an error in the input category, and EXXX an error in the equipage category. As an option, an NE can provide an explanation for an error by enclosing the latter within a pair of quotes (“ “). However, a TL1 parser is not required to parse the text contained inside a pair of quotes.

The formatting task for an autonomous message involved using an autonomous tag (ATAG) for all REPT messages generated by an NE, in addition to a TID, calendar date and 24-hour clock time, as well as the level of severity associated with the autonomous message.

TL1 Message Formulation

Once the language definition was more or less completed, the remaining task was to specify actual TL1 messages (including semantics) that are used to implement OAM&P functions and publish them in the public domain. The task force selected the provisioning, maintenance, and testing functional categories, which would be presented in Technical Advisories TA-TSY-00199, TA-TSY-00200, and TA-TSY-00201, respectively8. It also dubbed the selected operations language Transaction Language One, or TL1, with a view to providing TL2 and subsequent versions in the future.

The very first set of TL1 messages was published in TA-TSY-00199, Issue 1, August 1985, and was specified to conduct provisioning functions in transmission NEs such as DCS and ADTS. In an attempt to unify TL1 messages used across transmission and switching equipment, TA-TSY-00199, Issue 2 was published in February 1986 to also present a preliminary set of TL1 versions of recent change messages for switch NEs. The switch TL1 input commands generally have four key verbs, namely, ENTER, DELETE, EDIT, and RETRIEVE, each of which can be applied to about a dozen administrative views (i.e., modifiers).

Finally, a Bellcore Technology Requirements Industry Forum (TRIF) sponsored by Pacific Bell was held in San Francisco in June 1986 to present and discuss TL1 messages in the public domain.

To date, Bellcore has published hundreds of TL1 messages for different operations domains and network technologies. To provide a roadmap to all of them, GR-811-CORE9 presents, on an ongoing basis, a listing of, and pointers to, all the TL1 messages that have been published in various Technical Advisories (TAs), Technical References (TRs), and Generic Requirements (GR) documents.

8 The bulk of the TL1 messages in these three TAs have now been migrated to GR-199-CORE, GR-833-CORE, and GR-834-CORE, respectively.

9 "OTGR: Operations Application Messages - TL1 Messages Index," GR-811-CORE, Issue 3, June 1997.



Industry Support of TL1

Most of the TL1 messages in TA-TSY-00199 are currently used by provisioning OSs to remotely set cross-connections in DCSs and channel unit settings in digital loop carrier (DLC) systems as well as send recent change messages to switch NEs. Many surveillance OSs and testing OSs have also adopted TL1 as their communication language with NEs. They have implemented many of the TL1 messages documented in TA-TSY-00200 and TA-TSY-00201, respectively. In addition to the Bellcore OSs, a number of other vendor OSs, particularly testing OSs and SONET element management systems (EMSs), are using TL1 messages to manage their NEs.

Moreover, a large number of transport NE vendors have implemented TL1 messages to manage their families of DCSs, families of DLC systems, and SONET add-drop multiplexers (ADMs). Even a number of emerging transport NEs that support the family of digital subscriber line (xDSL) and wavelength division multiplex (WDM) technologies are planned or requested to provide TL1 interfaces. Several switch NE vendors have implemented TL1 messages to administer their plain old telephone services (POTS), Centrex, multi-line hunt, and ISDN services.

As mentioned in the earlier Telephony article, various companies have developed object servers that provide an adaptation between TL1-based and object-oriented technologies. Along the same line, Network Programs, LLC (NPL) has developed a virtual management information base (MIB) for TL1 in conjunction with the MIBs for simple network management protocol (SNMP) and common management information protocol (CMIP). The virtual TL1 MIB provides an abstraction of managed information compatible with those presented by the SNMP or CMIP MIB. Unification among managed information is crucial to managing a multi-protocol, multi-vendor network.

Documenting the Architecture:

Systems benefit from good architecture documentation. The following is borrowed from Software Architecture Documentation in Practice at the Carnegie Mellon Software Engineering Institute:

Uses of Architecture Documentation

This is a way to inform stakeholders of system design at every development stage.

This perspective on architecture is forward-looking, involving steps of creation and refinement. Stakeholders include those involved in managing the project, as well as "consumers" of the architecture who must write code to carry it out, or design systems that must be compatible with it. Specific uses in this category include the following:

For downstream designers and implementers, the architecture provides their "marching orders." The architecture establishes inviolable constraints (plus exploitable freedoms) on downstream development activities.

For testers and integrators, the architecture dictates the correct black-box behavior of the pieces that must fit together.

For technical managers, architecture provides the basis for forming development teams corresponding to the work assignments identified.

For project managers, architecture serves as the basis for a work breakdown structure, planning, allocation of project resources, and tracking of progress by the various teams.

For designers of other systems with which this one must interoperate, the architecture defines the set of operations provided and required, and the protocols for their operation, that allows the interoperation to take place.

A basis for performing up-front analysis to validate (or uncover deficiencies in) architectural design decisions and refine or alter those decisions where necessary.

This perspective on architecture is, in some sense, inward-looking. It involves making prospective architectural decisions and then projecting the effect of those decisions on the system or systems that the architecture is driving. Where the effect is unacceptable, the relevant decisions are re-thought, and the process repeats. This process occurs in tight cycles (most architects project the effect of each of their decisions) and in large cycles (in which large groups of decisions, perhaps even the entire architecture, are subjected to formal validation). In particular, architecture provides the following:

For the architect and requirements engineers who represent the customer(s), architecture is a forum for negotiating and making trade-offs among competing requirements.

For the architect and component designers, architecture is a vehicle for arbitrating resource contention and establishing performance and other kinds of run-time resource consumption budgets.

For those wanting to develop using vendor-provided products from the commercial marketplace, the architecture establishes the possibilities for commercial off-the-shelf (COTS) component integration by setting system and component boundaries and establishing requirements for the required behavior and quality properties of those components.

For those interested in the ability of the design to meet the system's quality objectives, the architecture serves as the fodder for architectural evaluation methods


For performance engineers, architecture provides the formal model that drives analytical tools such as rate monotonic schedulers, simulations and simulation generators, theorem provers and model checking verifiers.

For product line managers, the architecture determines whether a potential new member of a product family is in or out of scope, and if out, by how much.

The first artifact used to achieve system understanding.

This perspective on architecture is reverse-looking. It refers to cases in which the system has been built and deployed, and now the time has come to make a change to it or to extract resources from it for use elsewhere. Architecture mining and recovery fall into this category, as do routine maintenance activities. In particular, architecture serves the following roles:

For technical managers, architecture is the basis for conformance checking, for assurance that implementations have in fact been faithful to the architectural prescriptions.

For maintainers, architecture is a starting point for maintenance activities, revealing the areas a prospective change will affect.

For new project members, the architecture is usually the first artifact for familiarization with a system's design.

For those inheriting the job of architect after the previous architect's untimely departure, the architecture is the artifact that (if properly documented) preserves that architect's knowledge and rationale.

For re-engineers, architecture is often the first artifact recovered from a program understanding activity or (in the event that the architecture is known or has already been recovered) the artifact that drives program understanding activities at component granularities.

Architecture documentation is both prescriptive and descriptive. That is, it prescribes what should be true, and it describes what is true, about a system's design. The same documentation can serve both purposes. If the "as-designed" documentation differs from the "as-built" documentation, then clearly there was a breakdown in the development process.

With the architecture and its documentation in hand, the project is ready for a formal architecture review. Joe Maranzano provides a checklist for these reviews in the table:

Checklist for Architecture Reviews


by Joe Maranzano, email: [email protected]

1. Starting with the '4+1' overall architecture diagrams, use additional diagrams to describe all the components of the system.
2. List major components of the system and the functionality provided by each component.
3. Trace scenarios of how data/information flows through components.
4. Describe and list the special data error handling flows.
5. Describe the user interfaces.
6. List all interfaces with other systems and for each interface describe:
a. The IPC mechanism to be used for data and for control
b. Any expected issues with the IPC mechanism (e.g., performance degradation at some capacity level, problems with overload control, new technology)
c. Failure modes and error handling for each type of failure
d. Error recovery to prevent lost data, if needed
e. Effective bandwidth of the interface mechanism

7. Examine strategies used to keep the design simple and avoid unnecessary complexity.
8. Examine robustness of component choices, COTS or custom-made, including:
a. available support
b. defect records
c. licensing costs
d. performance under load
e. extensibility
f. flexibility

9. Review local databases or data stores used for temporary data storage or reference data storage.
10. Examine performance and capacity budgets by considering the expected traffic and background processing in terms of:
a. The average load and busy hour of the Transaction Profile
b. Number of simultaneous users
c. Expected system response time in terms of its average, variance and bounds
d. Peak arrival rates
e. Overnight processing and calculations
f. Database sizes
g. Network demands including congestion strategy

11. Examine the Operations, Administration and Maintenance approach including:

a. Operational environment
b. Interfaces to external sources and systems
c. System availability
d. Failure avoidance and handling
e. Error handling
f. Disaster recovery
g. Security
h. Data consistency and accuracy

Here are typical problems found in such reviews:
1. 50% of all problems arise from incomplete requirements, including:
a. Undefined usage scenarios
b. Unknown customers
c. No written requirements
d. No acceptance criteria
e. "Catchall" statements ('...the system must be reliable.')
2. 25% from performance issues:
a. No traffic profiles specified
b. No model of offered load available
c. Data volumes missing
d. Night or batch processing loads unspecified
e. No resource budget
f. Unwarranted expectation of linear scalability
3. 10% from unspecified Operations, Administration, Maintenance or Provisioning (OAM&P):
a. No analysis of system availability
b. Recovery system lags the on-line system so that there is no possibility of database catch-up upon a major outage
c. Database tools inadequate
d. Conversion tools missing
4. 5% from missing error recovery
5. 5% from using immature technology
6. 5% from a lack of analysis of subsystem and module dependencies

Architecture Middleware

Middleware provides a reusable architectural component that solves the distributed application problem. An early 1980s example of middleware was the Tuxedo product used with distributed UNIX applications. A video showing how it uses two-phase commit technology is on the web site accompanying this book. Middleware gets its name from being the software component that provides simple access to operating system functions for applications. In an architectural hierarchy it 'sits' between the low-level operating system and the application software. It helps programmers easily and quickly build distributed business applications by isolating them from complex design issues, such as working with multiple operating systems, data communication protocols, and transaction recovery across multiple applications and computers. The Open Software Foundation's Distributed Computing Environment (DCE), Object Management Group's Common Object Request Broker Architecture (CORBA), Microsoft's Distributed Component Object Model (DCOM), Enterprise Java Beans (EJB) and BEA's Tuxedo are widely used middleware products.

Upscale architecture middleware supports the construction of sophisticated systems by assembling a collection of modules with the help of visual tools or programmatic interfaces. These interfaces are called Application Program Interfaces (APIs) and are the subject of heated arguments between vendors and customers. Keeping them stable and standard makes it easy to develop some applications, but limits growth to new application areas and eliminates software providers' perceived competitive advantages. In traditional middleware, software component dispatch is tightly integrated within the platform, which makes it hard to extend for new, as yet undefined, applications. The newer concept of piped dispatch has special nodes, flow control, and a data channel between neighboring nodes. Traditional middleware components are interdependent because they exchange data directly. In piped workflow, a special metadata channel exchanges data among all the modules. Whenever a module needs more data to continue its operation, it asks the data channel for it. Data pools and pipes are two ways for data channels to exchange data. The data pool is a public temporary storage for all variables of all modules during one processing procedure, and it is destroyed when the client processing finishes. Each module has to define which data it needs from the data pool and which data it will produce to the data pool. The pipe is another way to exchange data between computational modules. With a data pool, all the necessary data have to be set before the module runs, and results can only be put into the data pool after the module finishes. This is inefficient. The pipe consists of many independent sub-pipes that interface pairwise-adjacent modules. Each sub-pipe contains a pair of interface objects that transfer data between the modules and the pipes. Pipes in one object class share the same temporary memory, so intermediate results can be transferred between non-adjacent modules in an orderly progression of steps. Another approach is to use dynamic dependencies to manage components in an already running system. To track the dynamic dependencies, a component configurator stores the runtime dependencies between specific components. Every component may have a set of hooks to which other components can attach. There might be other components (called clients) that depend on server components. Through the communication and event contracts between hooked components and their clients, reconfiguration of executing components is possible.
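A minimal sketch of the data-pool style of piped dispatch described above follows. All names are hypothetical and the example is deliberately simplified: each module declares what it reads from and writes to a shared pool that lives only for one client request.

class DataPool:
    """Temporary public storage for one client processing procedure."""
    def __init__(self):
        self._values = {}
    def get(self, name):
        return self._values[name]
    def put(self, name, value):
        self._values[name] = value

class Module:
    def __init__(self, needs, produces, func):
        self.needs, self.produces, self.func = needs, produces, func
    def run(self, pool):
        inputs = [pool.get(n) for n in self.needs]         # ask the data channel for inputs
        outputs = self.func(*inputs)
        for name, value in zip(self.produces, outputs):
            pool.put(name, value)                           # publish results to the pool

def dispatch(modules, request):
    pool = DataPool()                                       # created for one processing procedure
    pool.put("request", request)
    for module in modules:                                  # modules run in pipeline order
        module.run(pool)
    return pool.get("reply")                                # pool is discarded afterwards

pipeline = [
    Module(["request"], ["normalized"], lambda r: (r.strip().lower(),)),
    Module(["normalized"], ["reply"], lambda s: ("ack:" + s,)),
]
print(dispatch(pipeline, "  ORDER-42 "))                    # -> ack:order-42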

Middleware masks the problem of building distributed applications among heterogeneous environments. But the complexity of distributed networks and unanticipated requirements make the construction of middleware hard. The search for a high-performance, general-purpose middleware platform continues.

Case Study: Analysis of Middleware Dependencies

Background:

You are the architect for a software team developing a Customer Resource Management application. Your architecture review for operations shows that the middleware transaction recovery component is buggy.


A new release of the Middleware is scheduled for June. This release is promised to be robust, fix the bugs and be industrial strength. The new transaction recovery scheme requires ‘minor’ changes to all the application modules. Here is your committed schedule.

Timeline (March through August):

Middleware Version: 2.0 available now; 3.0 due in June
Application: SHIP DATE
Tests: Inventory retest; Simple test; Test all transactions

Question: Should the architecture specify use of Middleware Version 2.0 or 3.0?

Issues to Consider:
a. Version 3.0 delivery to the application team might be later than estimated.
b. Will Version 3.0 really fix the transaction recovery problem as advertised, or is it 'vaporware'?
c. Are the required code changes to the application modules needed to interface with the new recovery scheme as simple as advertised?
d. If people are assigned from OS development to test case design, will they find work that is more satisfying on other projects or in other companies?

Options explored at the Architecture Review:
1. Go live with Middleware Version 2.0, and upgrade to Version 3.0 later. This field upgrade requires special conversion software and is thought to be extremely complicated.
2. Go live with Version 3.0. The software project manager has conducted a detailed schedule analysis and foresees a 6-week delivery delay with a 1.5-week standard deviation with this option. Your top executives will be very angry, as payments will be delayed, causing the company to show a profit loss for the year.
3. Go live with Middleware Version 3.0 and insist that the middleware supplier provide onsite testing support to avoid project delays. Important future middleware features will be delayed, impacting the supplier's profitability. The supplier wants premium payments for the support. The project manager projects a 50% reduction in profitability, and the company will go from being profitable to breakeven.

This is not an easy decision. All options are risky. The architect evaluated the features of Version 2.0 and the risks. Working closely with the project manager, the architect decided to choose option 1: go live with Middleware Version 2.0 and slip out future releases while investing in the field conversion to Version 3.0. The architect was not attracted by technology hype such as, "Version 3.0 is solid as a rock and has great features!"


Risks were managed carefully and conservatively. The customer participated in the decision. This seemingly simple decision could have had huge repercussions. The architecture review gave the team the opportunity for thoughtful technical, schedule and risk analysis. The field conversion took six months longer than projected and delayed the next set of features. But the customer understood the risks, needed a reliable system, and trusted the architect to deliver systems in good order. Business increased as customers learned of the care and thoughtfulness of the architect in assuring trustworthy products.

NASA Administrator Dan Goldin took the blame for the botched Mars missions, saying he pushed too hard, cut too much, and made it impossible for spacecraft managers to succeed. But Goldin said he will not abandon the National Aeronautics and Space Administration's "faster, better, cheaper" approach. Mission managers will get enough money and people to do the job, but there won't be a return to the days of big, expensive spacecraft.

"We're going to make sure they have adequate resources, but we're not going to let the pendulum swing all the way back," he told employees of NASA's Jet Propulsion Laboratory, where Mars Polar Lander and Mars Climate Orbiter were managed. Goldin visited the lab Wednesday, a day after two reports were released on the recent Mars fiasco. They found mismanagement, unrealistic expectations and anemic funding were to blame as much as the mistakes that actually doomed the missions. Too many risks were taken by skipping critical tests or overlooking possible faults. And nobody noticed or mentioned the problems until it was too late.

The $165 million Mars Polar Lander was most likely doomed by a sensor that mistook a spurious signal for landing when the legs deployed, causing the software to stop the descent engines 130 feet above the planet's surface. The problem could have been easily resolved by beaming new software to the lander during its 11-month cruise -- if it had been noticed, said John Casani, a former JPL chief engineer who led one of the investigations. The lander was last heard from Dec. 3.

Mars Climate Orbiter was lost Sept. 23 when nobody realized that Lockheed Martin Astronautics had delivered navigation data in English units rather than metric. The $125 million craft burned up in the Martian atmosphere. Their combined cost was about the same as the last successful spacecraft to land on Mars -- Pathfinder in 1997. But even the first Pathfinder had a software problem. Unfortunately NASA did not invest in the good software process of architectural discovery after that first problem. It was an architectural shared-resource problem: a priority inversion.

Case study: the Mars Pathfinder

The Pathfinder lands successfully, gathers data on Mars and sends pictures back to Earth. Then it occasionally stops sending images and, as time goes on, these stoppages occur


more frequently and for longer periods of time. The mission software engineers note that when this happens the computer reboots the software.

Three software tasks were involved in the problem. One task rebooted the computer whenever it was idle for a period of time. A second sent images to Earth and a third gathered data (took pictures) on Mars. All three tasks used a common shared bus for communication and shared the computer's processor.

A software task dispatch conflict was the cause of the problem.

Priorities should be:
1. Reboot
2. Send images
3. Gather data

But due to a faulty use of preemptive multithreading, they were actually:
1. Reboot
2. Gather data
3. Send images

There was a mismatch between the priorities set in the hardware, in the software, and in the software that controls the bus.

A watchdog counter rebooted the system, by issuing an interrupt, after it had been inactive for some time. This is why the probe was rebooting after being silent for too long. This is a fail-safe mechanism.
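The failure mode can be reproduced in miniature. The sketch below is a hedged, illustrative simulation of this kind of priority inversion, not flight code: the task names, priorities and timings are invented, and the real system ran on an embedded real-time OS, not Python. A low-priority data-gathering task holds the shared bus, medium-priority housekeeping work keeps preempting it, and the high-priority communications task stays blocked until the watchdog limit is exceeded.

```python
# Hypothetical sketch of a Pathfinder-style priority inversion.
# Task names, priorities and timings are illustrative, not flight values.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int          # lower number = higher priority
    remaining: int         # time units of work left
    needs_bus: bool = False

def run(tasks, bus_owner=None, watchdog_limit=10):
    """Always run the highest-priority runnable task, one time unit at a time.
    A task that needs the bus is blocked while another task owns it.  If the
    highest-priority task stays blocked longer than watchdog_limit units,
    the watchdog 'reboots' the system -- the symptom seen on Mars."""
    blocked_time = 0
    for t in range(1, 1000):
        live = [task for task in tasks if task.remaining > 0]
        if not live:
            return
        runnable = [task for task in live
                    if not task.needs_bus or bus_owner in (None, task)]
        if not runnable:
            return
        top = min(live, key=lambda task: task.priority)
        current = min(runnable, key=lambda task: task.priority)
        if current is not top:
            blocked_time += 1
            if blocked_time > watchdog_limit:
                print(f"t={t}: watchdog fires, system resets "
                      f"({top.name} blocked while {current.name} runs)")
                return
        if current.needs_bus:
            bus_owner = current
        current.remaining -= 1
        if current.remaining == 0 and bus_owner is current:
            bus_owner = None   # release the bus when the task finishes

# Low-priority data gathering holds the bus; medium-priority housekeeping
# keeps preempting it, so the high-priority image-sending task never runs.
send_images = Task("send images", priority=1, remaining=5, needs_bus=True)
housekeeping = Task("housekeeping", priority=2, remaining=500)
gather_data = Task("gather data", priority=3, remaining=5, needs_bus=True)
run([send_images, housekeeping, gather_data], bus_owner=gather_data)
```

The documented fix on Pathfinder was to enable priority inheritance on the shared-resource lock, so the low-priority holder is temporarily boosted, finishes, and releases the bus before the watchdog fires; in this sketch the equivalent change would be to let gather_data run ahead of housekeeping whenever it owns the bus and send_images is waiting.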


Figure: the reboot, send-images and gather-data tasks contend for the CPU and the shared bus that connects the computer to the antenna and sensors; the timeline shows the scheduling conflict.


Conclusion

This problem is a classic priority inversion, the kind of blocking problem that can happen when access to a shared resource is not managed. It is similar to the deadlocks studied in database management systems or operating systems courses.

Case Study: Mars Explorer Resets - Software Engineering Lessons

1. Design Defensively: The JPL team did this by leaving in the debug code and the fail safe resets. Build ‘fail-safe’ systems.

2. Stress test beyond the limits established in the requirements. It is best to test to the breaking point. The difference between the breaking point and the maximum design load is the system margin. The stress tests for the Mars Explorer were actually thought to be the worst case. But because the data gathering went better than expected, there was more data to process than planned. This resulted in the shared bus not being released. Had testers pushed the system to its breaking point there is a good chance, though not a guarantee, that they would have found and fixed the fault before it became a failure.

3. Explain all anomalies that show up during system test. Sometimes you must release the software for its intended use even though you have not been able to discover the cause of an anomaly. In these cases treat the release as 'provisional' and continue to work on understanding the anomaly. The Mars Explorer anomaly was thought to be a hardware problem even though there was no data supporting this belief.

Once resets occur, even once, during system test, developers must understand and solve them. Sometimes managers want to classify a problem as a 'chance occurrence'; these are especially difficult times for the testers, who insist on having a plan to find and fix the fault before it becomes a failure, to avoid sleepless nights, panic and project bankruptcy. In the Mars mission, Mars is close enough to Earth for the Explorer to have enough fuel to complete the trip only once a year. So the testers faced the awesome responsibility of holding the mission for a year or explaining away an anomaly that occurred two or three times. The hardware people were already blaming the software people for schedule delays, and the software people rationalized that if there was a hardware problem it was not their job to find it. In any event NASA might choose to launch on schedule to meet the window of opportunity. They could call the launch 'provisional' because there was a known fault. While the Explorer flew to Mars, the JPL and NASA engineers might have worked to discover the reset problem in test labs well before it occurred on the planet. This philosophy moves teams from crisis management to problem avoidance. Of course, an alternate approach was to delay launch until the fault was found and fixed. A management team will evaluate the risks and decide to 'launch or hold' with data about all potential problems. By writing off the anomaly, the NASA team shielded their management team from the risks.

The most difficult step to take in any project is to change the agreed-to plan. But plans are worthless unless they change to meet newly discovered conditions.

Boring projects are the best. Beware of those who thrive on the thrill of crises.

Hold architecture reviews and pay special attention to performance issues. Process scheduling algorithms need detailed analysis. All shared resources must be understood and their interactions must be analyzed. Simulate scenarios.

Reference: Steve March, “Learning from Pathfinder’s Bumpy Start,” Software Testing and Quality Management, September/October 1999, Volume 1, Issue 5, page 10, www.stqemagazine.com


Financial Systems Architecture

A computer running several software components is called an application server. Using a combination of application servers and software components is usually called distributed computing. The usual real-world application of this is in financial applications or other business software. Distributed computing is an emerging software architecture technology fraught with typical early technology problems.

In a bank, the following types of business function can be identified: management functions such as Asset & Liability Management (ALM), commercial management, operational management and strategic management. In addition there are a number of innovating functions such as product and market development and support functions such as financial administration. The core of the bank consists of the operational functions, in which financial contracts (such as those for saving, mortgages, etc.) are taken out and managed, and where transactions are executed, according to those contracts. Finally there are the business functions, which are responsible for client contact and client management via a range of distribution channels and media. Naturally, communication must be possible between all of these business functions. Here is a view of the Business Processes for a typical bank:

Figure: the usual business functions in a bank (ALM, commercial management, operational management and strategic management; internal auditing, cash management, financial administration and other support; product development, market development, management development and other innovation; trading, service, distribution management, acquisition and client management).

Separate systems may be developed for every component of banking business operation. All these systems have to be 'fed' by what goes on elsewhere in the bank. A contract system for medium-term notes (MTNs), for example, must keep other systems up to date about what has been laid down or changed in the contract. In other words, countless links are required, which together form a huge web. This web immediately expands whenever a new product is added, or as soon as the organization undergoes change, for example if the bank decides to set up a 'business mortgage system' and a 'private mortgage system' as two separate systems, for either strategic or administrative reasons, despite the fact that their underlying product, namely the mortgage, is common to both.


Figure: common application architecture

A product-related layer in the architecture

A product layer will reduce interactions by adding a level of indirection. The cost may be in performance, but the benefit is in the clarity of the relations between components. In this product-related layer, all knowledge and information relating to the nature of a single product is combined. Each banking product can therefore have its own product layer. This refactoring of the functions simplifies the architecture and the subsequent implementation.

Figure: application architecture with a separate product layer
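To make the indirection concrete, here is a minimal, hypothetical sketch; the class and product names are invented for illustration and are not taken from any real banking system. Applications do not call each other or duplicate product rules; they call a product-layer object that holds all knowledge about one product in one place.

```python
# Illustrative only: a product layer as a level of indirection.
class MortgageProduct:
    """All product knowledge for one banking product lives in this layer."""
    def __init__(self, principal, annual_rate):
        self.principal = principal
        self.annual_rate = annual_rate

    def interest_for_days(self, days):
        # Simple, daily-method interest; the single place the rule is coded.
        return self.principal * self.annual_rate * days / 365.0

class ReportingSystem:
    def monthly_report(self, product):
        # The reporting application asks the product layer; it never
        # re-implements the calculation itself.
        return {"accrued_interest": product.interest_for_days(30)}

class BranchCounterApp:
    def quote(self, product, days):
        # The front office reuses exactly the same rule.
        return product.interest_for_days(days)

mortgage = MortgageProduct(principal=200_000, annual_rate=0.05)
print(ReportingSystem().monthly_report(mortgage))
print(BranchCounterApp().quote(mortgage, days=90))
```

The cost of the extra call through the product layer is the performance penalty mentioned above; the benefit is that a change to the interest rule is made once, in the product layer, rather than in every application that touches the product.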

Finding Simple Components

A good architecture has a clear product-related layer. The components that reside in this layer reduce coupling and increase cohesion. New components can be rapidly added, and processes made up of these components can implement new features. The trick is to maximize the general nature of the component while minimizing its interactions with other components. Object class refactoring is an important development process that makes this possible.

Components need to be linearized in the sense that they interface to other components only through well-defined structures. Object classes specially defined for interface support are the most flexible. In practice, three or four iterations on the object class definitions, with heavy constraints placed on the number of object classes, are vital. The components are easiest to test if they have one entry point and one exit point. The underlying idea is that speed in adding and altering products can be achieved by composing those products, in their specification, from such components.


Now let’s consider component size. Should they be small, elementary building blocks such as a simple interest calculation (principal amount times term times percentage), whereby a product will consist of a large number of small building blocks? Or should they be larger blocks such as a 'complete straight-line repayment lending construction, including variable interest calculations', entailing the risk that a large number of similar, but in detail different, constructions are required? The best size is a mixture of the two, in the range of 1 to 10 function points. Hatton points out that this lowers defects and that application engineers are not overwhelmed by a vast library of micro-feature components. A range of components is needed for principal amount and interest computation. For example, for principal amounts, the situation may arise where the principal amount remains the same throughout the term (as is the case in an interest-only mortgage) or the principal amount becomes less throughout the term (as is the case in a linearly amortizing mortgage). This knowledge resulted in the identification of at least two different components for principal amounts. Similarly there are two different building blocks for interest components, for example an interest construction for daily-variable interest, or an interest construction for a rate-bounded system.
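As a rough, hypothetical sketch of what building blocks in the 1 to 10 function point range might look like (the function names are invented for illustration), the two principal-amount variants and one interest component described above could be coded as follows. Note that the same interest component composes with either principal component.

```python
# Illustrative building-block components; names and sizes are assumptions.
def fixed_principal(principal, term_days):
    """Principal schedule for an interest-only product: unchanged all term."""
    return [principal] * term_days

def linear_amortizing_principal(principal, term_days):
    """Principal schedule that decreases linearly over the term."""
    step = principal / term_days
    return [principal - step * day for day in range(term_days)]

def simple_daily_interest(principal_schedule, annual_rate):
    """Simple interest, daily method: daily rate applied to each day's principal."""
    daily_rate = annual_rate / 365.0
    return sum(p * daily_rate for p in principal_schedule)

# The same interest component works with either principal-amount component.
print(simple_daily_interest(fixed_principal(100_000, 365), 0.04))
print(simple_daily_interest(linear_amortizing_principal(100_000, 365), 0.04))
```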


Case Study: Banking Components

Example problem description

Example inventory of components

Architectural Process Again

With a prospectus in hand, the requirements phase of the project can begin. During this phase prototypes are used to understand

and validate the requirements. A first-cut functional and physical architecture is synthesized to determine the feasibility of the project and understand its size. On the basis of the prospectus and a colloquial language description, a formal product requirements specification is prepared. This specification is readable by financial or other problem domain experts. They are therefore able to determine whether the product specification describes the product as precisely as they intend. This formal specification lays out the components of the system and their interaction. It identifies which components are to be purchased or drawn from a product line library of components and how they will work together. Some guidelines for this synthesis are:

Use operating system software and hardware familiar to the developers. If this is impossible invest in extensive training including extra development iteration.

Partition the software into separate modules. Modularize with well-defined interfaces to simplify testing and feature packaging.

Estimate performance, and then measure it in the prototype. Track module performance during the entire development cycle. Establish performance margins and manage to them.

Maximize the re-use of common modules within and across product lines.

In a Fixed Time Deposit the client places with the bank a fixed amount (the principal amount) for a fixed term, at a fixed, agreed interest percentage. At the end of the term, the principal amount and the accrued interest are paid to the client.

term: fixed start date, fixed end date
principal amount: fixed for the entire term
interest: fixed for the entire term
principal amount settlement: to be carried out
interest settlement: one-time, in arrears
interest calculation: simple, daily method
etc.


A Component:

product DEPOSIT
data:
    PA: amount        ** principal amount
    SD: date          ** start date
    MD: date          ** maturity date
    …
intermediate data:
    TERMDEP: interval ** term
    …
define TERMDEP as
    TERM using FIXED-TERM-BLOCK
    with SDTERM -> SD, MDTERM -> MD
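A hedged reading of the specification above in executable form might look like the following sketch. The Python names mirror the specification's FIXED-TERM-BLOCK and DEPOSIT but are assumptions for illustration, not the bank's actual component library or specification language.

```python
# Illustrative composition of the DEPOSIT product from components.
from datetime import date

class FixedTermBlock:
    """Term component: a fixed start date and a fixed maturity date."""
    def __init__(self, start, maturity):
        self.start = start        # SDTERM -> SD
        self.maturity = maturity  # MDTERM -> MD

    def days(self):
        return (self.maturity - self.start).days

class Deposit:
    def __init__(self, principal, annual_rate, start, maturity):
        self.pa = principal                             # PA: principal amount
        self.rate = annual_rate                         # fixed for the term
        self.termdep = FixedTermBlock(start, maturity)  # TERMDEP

    def settlement_at_maturity(self):
        # One-time interest settlement in arrears, simple daily method.
        interest = self.pa * self.rate * self.termdep.days() / 365.0
        return self.pa + interest

deposit = Deposit(50_000, 0.03, date(2024, 1, 1), date(2025, 1, 1))
print(deposit.settlement_at_maturity())
```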


Minimize cross-feature dependencies and create components from modules. Components are modules that are limited in size, use single entry and exit points, have an explicit error recovery strategy, are bounded in time and space, and have interfaces that normalize data structures between components. A mantra for this approach is that:

You can change some of the interfaces some of the time, or some of the components some of the time, but don’t even think about changing both the interfaces and the modules at the same time.

Isolate hardware and data structure dependencies from the logical functional modules. Understand and allow for levels of indirection that induce performance penalties to reduce development risk.

Simplify the product by refactoring, by reusing existing components or, most of all, by taking Mark Twain's advice on adjectives:

“As to the adjective (requirement or feature), when in doubt, strike it out.” Twain

Adapting it to system features:

As to the feature, when in doubt, strike it out.

A byproduct of this effort is the number of function points that comprise the project. Now the intense architecture and design phase begins. The four views of the ‘4+1’ model are synthesized and a set of independent use cases is created.

A good architecture readily accommodates changes based on new or modified business functions. Examples are new supervision requirements from banking regulatory bodies, a change in internal management, or a more detailed method of cost allocation based on market value rather than book value. The result is new information system requirements. Existing kernel objects or components may need to be expanded, and reporting system components will need to be added. New components need to be added to the system configuration through a registration process. Furthermore, changes in technological design can drive changes to the architecture. These usually relate to changes in the Information Technology operating environment, which can be divided into the following categories:

Hardware, including network technology;
System and network software;
Storage technology, in a database system or Storage Area Network;
System development tools, such as new database systems.

The relationship between changes in reality and their impact on the system is summarized:


A good architecture anticipates these changes so that the more frequently occurring changes have lower impact than those occurring less frequently. The frequent changes in hardware and software technology led to layered architectures so that the changes can be shielded from the applications and the users.

Case Study: Implementation and Advantages of Banking Architecture in Practice

In the 1990s banks observed that rapid developments in the financial world and within IT necessitated a review of the architecture of their software systems. One bank placed the emphasis on the need to correctly register the various financial products according to type of product and to prepare accurate risk and accounting reports.

Implementation was done in phases, by transferring products gradually from existing systems to the new system. While development was underway a bank merger occurred. The merged bank adopted the new, partially featured system, which was expanded intermittently with a group of new, specific applications. This showed that new applications could be quickly added and incorporated into the new architecture.

In subsequent years, treasury front-office systems and reporting systems were connected to the new system. This considerably boosted the quality of information provision to the financial administration and the risk management departments: it was more manageable, verifiable and reliable. It contained interest-related contracts, derivatives and bond portfolios. It was linked to front-office systems and other reporting systems. The concept of a system-of-systems that communicate with each other in the form of messages was implemented successfully through the new architecture.

When the EU-countries switched to one European currency (euro) in 1999, the banks quickly adapted contracts in pre-euro currencies to contracts in euros (whereby the customer could choose the moment of conversion for each contract separately). For each application a few new methods were added to the specification and the product kernel component was recompiled and loaded.


The user departments are pleased with the improvement in the quality and consistency of the information in the reporting systems. Product-related financial calculations were removed from the reporting systems, and the calculation rules specified in the kernels are used for the reports. Although front-office staffs have their own dedicated applications, the back-offices are 'relieved' of many reconciliation problems, thanks to the consistency of the information.

This architecture was implemented at the savings division of a large retail bank. The initial situation here was entirely different. Within an existing environment, which included communication with the network of local branches, the central component containing balance and interest calculations needed to be replaced by a new system. The system had to offer support for the flexible and rapid introduction of new savings products. The environment would need to be adapted further at a later stage. The emphasis was on the introduction of the application components within the existing architecture.

The assignment was to link up with many payment-oriented legacy systems for a wide variety of savings applications, with premiums, brackets and levels, but also for the savings components of mortgage constructions. In parallel, the balance and interest calculations of the current account system, such as interest accrual and overdraft identification, were centralized into a core kernel. Messages from the applications had to maintain their existing interfaces so that the incoming and outgoing interfaces could keep their batch character. The 'account concept' remained primary: a withdrawal or deposit is first registered in the account system and then reported to applications.

Figure: Savings applications of large retail bank


The direct benefit is that the kernels contain all the rules for the application products in one place. Updates to these calculations now take place uniformly. The flexibility introduced could be used in the future to provide on-line information to front-office staff on the phone with a customer, or to web-based applications. This architecture let the bank introduce new financial products and product variations to the market quickly. The challenge was to handle more than one million accounts and more than 60 million payment transactions a year without degrading performance while changing the applications. These demands were met.

The first phase lasted twenty months. During this time the new infrastructure was designed, ten existing applications were upgraded and two new retail savings applications were added. The architecture's flexibility quickly proved useful. Previously, at the counters of the local branches, calculations of balances and accrued interest were performed manually. Shortly after the implementation this time-consuming and error-prone procedure was replaced by an application program that directly used information methods from the kernels, thus guaranteeing that the same calculation rules are applied in that front-office as in the back-office. By doing this, a uniform handling of product rules could be enabled and enforced throughout the bank.

Lessons Learned

The benefits offered by an integrated software architecture go beyond the 'sum of the parts.' Specifications for new or modified financial products may be specified in financial terms, using financial kernel components as building blocks. Automatic software generation is then possible for quick creation of a prototype. The architecture ensures that the prototype applications created fit into their environment immediately. Applications can be specified, evaluated, developed and embedded quickly. A formal development phase is still needed, because generating code from a specification language often results in software so inefficient that it is not tolerated by users. Careful domain analysis ensures realistic scheduling. When products are specified within the same domain, it will quickly become clear which components overlap and which are reusable. Components may be developed or purchased for the money market and capital market business units and for retail front office offerings of credit, savings and payment services. When a new domain is entered, new components may be required and their development will slow the development process. Careful domain analysis and component purchases can help prevent unnecessary delays and ensure that the scheduling is realistic.

References
1. D. Garlan and M. Shaw, “An Introduction to Software Architecture,” Advances in Software Engineering and Knowledge Engineering, Vol. 1, World Scientific Publishing Co., 1993.
2. P. Kruchten, “4+1 Systems,” Proceedings of the TRI-Ada ’94 Conference, Baltimore, November 6-11, 1994, ACM.
3. B. I. Witt, F. T. Baker and E. W. Merritt, Software Architecture and Design: Principles, Models, and Methods, Van Nostrand Reinhold, New York, 1994, 324 p.
4. L. Bass, P. Clements and R. Kazman, Software Architecture in Practice, Addison-Wesley, MA, 1998, ISBN 0-201-19930-0.
5. F. Buschmann et al., A System of Patterns, John Wiley & Sons, Chichester, England, 1996, ISBN 0-471-95869-7.
6. C. Gacek, A. Abd-allah, B. Clark and B. Boehm, “On the Definition of Software System Architecture,” Center for Software Engineering, University of Southern California, Los Angeles, CA 90089-0781, ICSE 17 Software Architecture Workshop, April 1995.
7. J. Hall (ed.), Management of Telecommunication Systems and Services, Springer, New York, 1991, ISBN 3-540-61578-4.
8. IEEE Software, November 1995, Vol. 12, No. 6; entire issue devoted to architectural questions and new developments.
9. M. Lenzi, “Conduit and Content,” Object Magazine, October 1996, pp. 4-6.
10. C. R. Morris and C. H. Ferguson, “How Architecture Wins Technology Wars,” Harvard Business Review, March-April 1993, pp. 86-94.
11. M. Shaw and D. Garlan, Software Architecture: Perspectives on an Emerging Discipline, Simon & Schuster, NJ, 1996, ISBN 0-13-182957-2.
12. D. C. Schmidt and C. Cleeland, “Applying Patterns to Develop Extensible ORB Middleware,” IEEE Communications Magazine, IEEE CS Press, Los Alamitos, Calif., vol. 37, no. 4, 1999, pp. 54-63.
13. M. Astley, D. C. Sturman and G. A. Agha, “Customizable Middleware for Modular Distributed Software,” Communications of the ACM, vol. 44, no. 5, pp. 99-107, May 2001.
14. B. R. T. Arnold, A. van Deursen and M. Res, “An Algebraic Specification of a Language for Describing Financial Products,” in M. Wirsing (editor), Proceedings of the ICSE-17 Workshop on Formal Methods Applications, Software Engineering Practice, pages 6-13, Seattle, April 1995.

Mass Produced Software Components
by M. D. McIlroy
Bell Telephone Laboratories, Inc., New Jersey, USA

NATO Science Committee
Garmisch, Germany, 7 to 11 October 1968

 


Software components (routines), to be widely applicable to different machines and users, should be available in families arranged according to precision, robustness, generality and time-space performance. Existing sources of components - manufacturers, software houses, users’ groups and algorithm collections - lack the breadth of interest or coherence of purpose to assemble more than one or two members of such families, yet software production in the large would be enormously helped by the availability of spectra of high quality routines, quite as mechanical design is abetted by the existence of families of structural shapes, screws or resistors. The talk will examine the kinds of variability necessary in software components, ways of producing useful inventories, types of components that are ripe for such standardization, and methods of instituting pilot production.

The Software Industry is Not Industrialized

We undoubtedly produce software by backward techniques. We undoubtedly get the short end of the stick in confrontations with hardware people because they are the industrialists and we are the crofters. Software production today appears in the scale of industrialization somewhere below the more backward construction industries. I think its proper place is considerably higher, and would like to investigate the prospects for mass-production techniques in software.

In the phrase ‘mass production techniques,’ my emphasis is on ‘techniques’ and not on mass production plain. Of course mass production, in the sense of limitless replication of a prototype, is trivial for software. But certain ideas from industrial technique I claim are relevant. The idea of subassemblies carries over directly and is well exploited. The idea of interchangeable parts corresponds roughly to our term ‘modularity,’ and is fitfully respected. The idea of machine tools has an analogue in assembly programs and compilers. Yet this fragile analogy is belied when we seek for analogues of other tangible symbols of mass production. There do not exist manufacturers of standard parts, much less catalogues of standard parts. One may not order parts to individual specifications of size, ruggedness, speed, capacity, precision or character set.

The pinnacle of software is systems - systems to the exclusion of almost all other considerations. Components, dignified as a hardware field, is unknown as a legitimate branch of software. When we undertake to write a compiler, we begin by saying ‘What table mechanism shall we build?’ Not ‘What mechanism shall we use?’ but ‘What mechanism shall we build?’ I claim we have done enough of this to start taking such things off the shelf.

Software Components

My thesis is that the software industry is weakly founded, and that one aspect of this weakness is the absence of a software components subindustry. We have enough experience to perceive the outline of such a subindustry. I intend to elaborate this outline a little, but I suspect that the very name ‘software components’ has probably already conjured up for you an idea of how the industry could operate. I shall also


argue that a components industry could be immensely useful, and suggest why it hasn’t materialized. Finally I shall raise the question of starting up a ‘pilot plant’ for software components.

The most important characteristic of a software components industry is that it will offer families of routines for any given job. No user of a particular member of a family should pay a penalty, in unwanted generality, for the fact that he is employing a standard model routine. In other words, the purchaser of a component from a family will choose one tailored to his exact needs. He will consult a catalogue offering routines in varying degrees of precision, robustness, time-space performance, and generality. He will be confident that each routine in the family is of high quality - reliable and efficient. He will expect the routine to be intelligible, doubtless expressed in a higher level language appropriate to the purpose of the component, though not necessarily instantly compilable in any processor he has for his machine. He will expect families of routines to be constructed on rational principles so that families fit together as building blocks. In short, he should be able safely to regard components as black boxes.

Thus the builder of an assembler will be able to say I will use a String Associates A4 symbol table, in size 500 x 8, and therewith consider it done. As a bonus he may later experiment with alternatives to this choice, without incurring extreme costs.

A Familiar Example

Consider the lowly sine routine. How many should a standard catalogue offer? Off hand one thinks of several dimensions along which we wish to have variability:

Precision, for which perhaps ten different approximating functions might suffice

Floating-vs-fixed computation

Argument ranges 0 to π/2, 0 to 2π, also -π/2 to π/2, -π to π, -big to +big

Robustness - ranging from no argument validation through signalling of complete loss of significance, to signalling of specified range violations.

We have here 10 precisions, 2 scalings, 5 ranges and 3 robustnesses. The last range option and the last robustness option are actually arbitrary parameters specifiable by the user. This gives us a basic inventory of 300 sine routines. In addition one might expect a complete catalogue to include a measurement-standard sine routine, which would deliver (at a price) a result of any accuracy specified at run time. Another dimension of variability, which is perhaps difficult to implement, as it caters for very detailed needs is


Time-space trade-off by table lookup, adjustable in several ‘subdimensions'

(a) Table size
(b) Quantization of inputs (e.g., the inputs are known to be integral numbers of degrees)
Another possibility is
(c) Taking advantage of known properties of expected input sequences, for example profiting from the occurrence of successive calls for sine and cosine of the same argument.

A company setting out to write 300 sine routines one at a time and hoping to recoup on volume sales would certainly go broke. I can’t imagine some of their catalogue items ever being ordered. Fortunately the cost of offering such an ‘inventory’ need not be nearly 300 times the cost of keeping one routine. Automated techniques exist for generating approximations of different degrees of precision. Various editing and binding techniques are possible for inserting or deleting code pertinent to each degree of robustness. Perhaps only the floating-vs-fixed dichotomy would actually necessitate fundamentally different routines. Thus it seems that the basic inventory would not be hard to create.
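As a present-day, hedged illustration of that point (not part of McIlroy's paper; Taylor-series truncation merely stands in for his ten approximating functions, and all names are invented), a small generator can emit many catalogue members from one parameterized model, with precision and robustness bound before run time:

```python
# Illustrative generator for a parameterized family of sine routines.
import math

def make_sine(terms, validate_range=None):
    """Return a sine routine with 'terms' Taylor terms (a stand-in for
    precision) and an optional domain check (a stand-in for robustness)."""
    def sine(x):
        if validate_range is not None:
            lo, hi = validate_range
            if not (lo <= x <= hi):
                raise ValueError(f"argument {x} outside [{lo}, {hi}]")
        x = math.remainder(x, 2 * math.pi)   # range reduction to [-pi, pi]
        return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
                   for k in range(terms))
    return sine

# A tiny 'catalogue': 3 precisions x 2 robustness options = 6 routines,
# all generated from the same model rather than written one at a time.
catalogue = {
    (terms, checked): make_sine(terms, (-math.pi, math.pi) if checked else None)
    for terms in (3, 6, 9)
    for checked in (False, True)
}
print(catalogue[(3, False)](1.0), catalogue[(9, False)](1.0), math.sin(1.0))
```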

The example of the sine routine reemphasizes an interesting fact about this business. It is safe to assert that almost all sines are computed in floating point these days, yet that would not justify discarding the fixed point option, for that could well throw away a large part of the business in distinct tailor-made routines for myriads of small process-control and other real-time applications on all sorts of different hardware. ‘Mass production’ of software means multiplicity of what manufacturing industry would call ‘models,’ or ‘sizes,’ rather than multiplicity of replicates of each.

Parameterized Families of Components

One phrase contains much of the secret of making families of software components: ‘binding time.’ This is an ‘in’ phrase this year, but it is more popular in theory than in the field. Just about the only applications of multiple binding times I can think of are sort generators and the so-called ‘Sysgen’ types of application: filling in parameters at the time routines are compiled to control table sizes, and to some extent to control choice among several bodies of code. The best known of these, IBM’s OS/360 Sysgen, is indeed elaborate - software houses have set themselves up as experts on this job. Sysgen differs, though, in a couple of ways from what I have in mind as the way a software components industry might operate.

First, Sysgen creates systems not by construction, but rather by excision, from an intentionally fat model. The types of adjustment in Sysgen are fairly limited. For example it can allocate differing amounts of space to a compiler, but it can’t adjust the width of list link fields in proportion to the size of the list space. A components industry on the other hand, not producing components for application to one specific


system, would have to be flexible in more dimensions, and would have to provide routines whose niches in a system were less clearly delineated.

Second, Sysgen is not intended to reduce object code or running time. Typically Sysgen provides for the presetting of defaults, such as whether object code listings are or are not standard output from a compiler. The entire run-time apparatus for interrogating and executing options is still there, even though a customer might guarantee he’d never use it were it indeed profitable to refrain. Going back to the sine routine, this is somewhat like building a low precision routine by computing in high precision and then carefully throwing away the less significant bits.

Having shown that Sysgen isn’t the exact pattern for a components industry, I hasten to add that in spirit it is almost the only way a successful components industry could operate. To purvey a rational spectrum of high quality components a fabricator would have to systemize his production. One could not stock 300 sine routines unless they were all in some sense instances of just a few models, highly parameterized, in which all but a few parameters were intended to be permanently bound before run time. One might call these early-bound parameters ‘sale time’ parameters.

Many of the parameters of a basic software component will be qualitatively different from the parameters of routines we know to-day. There will be at least:

Choice of Precision. Taken in a generalized sense precision includes things like width of characters, and size of address or pointer fields.

Choice of Robustness. The exact tradeoff between reliability and compactness in space and time can strongly affect the performance of a system. This aspect of parameterization and the next will probably rank first in importance to customers.

Choice of Generality. The degree to which parameters are left adjustable at run time.

Choice of Time-space behavior.

Choice of Algorithm. In numerical routines, as exemplified by those in the CACM, this choice is quite well catered for already. For non-numerical routines, however, this choice must usually be decided on the basis of folklore. As some non-numerical algorithms are often spectacularly unsuitable for particular hardware, a wide choice is perhaps even more imperative for them.

Choice of Interfaces. Routines that use several inputs and yield several outputs should come in a variety of interface styles. For example, these different styles of communicating error outputs should be available:

a. Alternate returns


b. Error code return
c. Call an error handler
d. Signal (in the sense of PL/1)

Another example of interface variability is that the dimensions of matrix parameters should be receivable in ways characteristic of several major programming languages.

Choice of Accessing method. Different storage accessing disciplines should be supported, so that a customer could choose that best fitting his requirements in speed and space, the addressing capabilities of his hardware, or his taste in programming style.

Choice of Data structures. Already touched upon under the topic of interfaces, this delicate matter requires careful planning so that algorithms be as insensitive to changes of data structure as possible. When radically different structures are useful for similar problems (e.g., incidence matrix and list representations for graphs), several algorithms may be required.

Application Areas

We have to begin by thinking small. Despite advertisements to the effect that whole compilers are available on a ‘virtually off-the-shelf’ basis, I don’t think we are ready to make software sub-assemblies of that size on a production basis. More promising components to begin with are these:

Numerical approximation routines. These are very well understood, and the dimensions of variability for these routines are also quite clear. Certain other numerical processes aren’t such good candidates; root finders and differential equation routines, for instance are still matters for research, not mass production. Still other ‘numerical’ processes, such as matrix inversion routines, are simply logical patterns for sequencing that are almost devoid of variability. These might be sold by a components industry for completeness’ sake, but they can be just as well taken from the CACM.

Input-output conversion. The basic pieces here are radix conversion routines, some trivial scanning routines, and format crackers. From a well-designed collection of families it should be possible to fabricate anything from a simple on-line octal package for a small laboratory computer to a Fortran IV conversion package. The variability here, especially in the matter of accuracy and robustness is substantial. Considerable planning will evidently be needed to get sufficient flexibility without having too many basically different routines.

Two and three dimensional geometry. Applications of this sort are going on a very wide class of machines, and today are usually kept proprietary. One can easily list a few dozen fundamental routines for geometry. The sticky dimension of variability


here is in data structures. Depending on which aspect of geometrical figures is considered fundamental — points, surfaces, topology, etc. - quite different routines will be required. A complete line ought to cater for different abstract structures, and also be insensitive to concrete structures.

Text processing. Nobody uses anybody else’s general parsers or scanners today, partly because a routine general enough to fulfil any particular individual needs probably has so much generality as to be inefficient. The principle of variable binding times could be very fruitfully exploited here. Among the corpus of routines in this area would be dictionary builders and lookup routines, scanners, and output synthesizers, all capable of working on continuous streams, on unit records, and various linked list formats, and under access modes suitable to various hardware.

Storage management. Dynamic storage allocation is a popular topic for publication, about which not enough real knowledge yet exists. Before constructing a product line for this application, one ought to do considerable comparison of known schemes working in practical environments. Nevertheless storage management is so important, especially for text manipulation, that it should be an early candidate.

The Market

Coming from one of the larger sophisticated users of machines, I have ample opportunity to see the tragic waste of current software writing techniques. At Bell Telephone Laboratories we have about 100 general purpose machines from a dozen manufacturers. Even though many are dedicated to special applications, a tremendous amount of similar software must be written for each. All need input-output conversion, sometimes only single alphabetic characters and octal numbers, some full-blown Fortran style I/O. All need assemblers and could use macro-processors, though not necessarily compiling on the same hardware. Many need basic numerical routines or sequence generators. Most want speed at all costs, a few want considerable robustness.

Needless to say, much of this support programming is done sub-optimally, and at a severe scientific penalty of diverting the machine's owners from their central investigations. To construct these systems of high-class componentry we would have to surround each of some 50 machines with a permanent coterie of software specialists. Were it possible quickly and confidently to avail ourselves of the best there is in support algorithms, a team of software consultants would be able to guide scientists towards rapid and improved solutions to the more mundane support problems of their personal systems.

In describing the way Bell Laboratories might use software components, I have intended to describe the market in microcosm. Bell Laboratories is not typical of computer users. As a research and development establishment, it must perforce spend more of its time sharpening its tools, and less using them, than does a production


computing shop. But it is exactly such a systems-oriented market toward which a components industry would be directed.

The market would consist of specialists in system building, who would be able to use tried parts for all the more commonplace parts of their systems. The biggest customers of all would be the manufacturers. (Were they not, it would be a sure sign that the offered products weren’t good enough.) The ultimate consumer of systems based on components ought to see considerably improved reliability and performance, as it would become possible to expend proportionally more effort on critical parts of systems, and also to avoid the now prevalent failings of the more mundane parts of systems, which have been specified by experts, and have then been written by hacks.

Present Day Suppliers

You may ask, well don’t we have exactly what I’ve been calling for already in several places? What about the CACM collected algorithms? What about users groups? What about software houses? And what about manufacturers’ enormous software packages?

None of these sources caters exactly for the purpose I have in mind, nor do I think it likely that any of them will actually evolve to fill the need.

The CACM algorithms, in a limited field, perhaps come closer to being a generally available off-the-shelf product than do the commercial products, but they suffer some strong deficiencies. First they are an in-gathering of personal contributions, often stylistically varied. They fit into no plan, for the editor can only publish that which the authors volunteer. Second, by being effectively bound to a single compilable language, they achieve refereeability, but must perforce completely avoid algorithms for which Algol is unsuited or else use circumlocutions so abominable that the product can only be regarded as a toy. Third, as an adjunct of a learned society, the CACM algorithms section can not deal in large numbers of variants of the same algorithm; variability can only be provided by expensive run time parameters.

User’s groups I think can be dismissed summarily, and I will spare you a harangue on their deficiencies.

Software houses generally do not have the resources to develop their own product lines; their work must be financed, and large financing can usually only be obtained for large products. So we see the software houses purveying systems, or very big programs, such as Fortran compilers, linear programming packages or flowcharters. I do not expect to see any software house advertising a family of Bessel functions or symbol tabling routines in the predictable future.

The manufacturers produce unbelievable amounts of software. Generally, as this is the stuff that gets used most heavily it is all pretty reliable, a good conservative grey, that doesn’t include the best routine for anything, but that is better than the average


programmer is likely to make. As we heard yesterday manufacturers tend to be rather pragmatic in their choice of methods. They strike largely reasonable balances between generality and specificity and seldom use absolutely inappropriate approaches in any individual software component. But the profit motive wherefrom springs these virtues also begets their prime hangup - systems now. The system comes first; components are merely annoying incidentals. Out of these treadmills I don’t expect to see high class components of general utility appear.

A Components Factory

Having shown that it is unlikely to be born among the traditional suppliers of software, I turn now to the question of just how a components industry might get started.

There is some critical size to which the industry must attain before it becomes useful. Our purveyor of 300 sine routines would probably go broke waiting for customers if that’s all he offered, just as an electronics firm selling circuit modules for only one purpose would have trouble in the market.

It will take some time to develop a useful inventory, and during that time money and talent will be needed. The first source of support that comes to mind is governmental, perhaps channeled through semi-independent research corporations. It seems that the fact that government is the biggest user and owner of machines should provide sufficient incentive for such an undertaking that has promise for making an across-the-board improvement in systems development.

Even before founding a pilot plant, one would be wise to have demonstrated techniques for creating a parameterized family of routines for a couple of familiar purposes, say a sine routine and a Fortran I/O module. These routines should be shown to be useable as replacements in a number of radically different environments. This demonstration could be undertaken by a governmental agency, a research contractor, or by a big user, but certainly without expectation of immediate payoff.

The industrial orientation of a pilot plant must be constantly borne in mind. I think that the whole project is an improbable one for university research. Research-calibre talent will be needed to do the job with satisfactory economy and reliability, but the guiding spirit of the undertaking must be production oriented. The ability to produce members of a family is not enough. Distribution, cataloguing, and rational planning of the mix of product families will in the long run be more important to the success of the venture than will be the purely technical achievement.

The personnel of a pilot plant should look like the personnel on many big software projects, with the masses of coders removed. Very good planning, and strongly product-minded supervision will be needed. There will be perhaps more research flavor included than might be on an ordinary software project, because the level of programming here will be more abstract: Much of the work will be in creating


generators of routines rather than in making the routines themselves.

Testing will have to be done in several ways. Each member of a family will doubtless be tested against some very general model to assure that sale-time binding causes no degradation over run-time binding. Product test will involve transliterating the routines to fit in representative hardware. By monitoring the ease with which fairly junior people do product test, managers could estimate the clarity of the product, which is important in predicting customer acceptance.

Distribution will be a ticklish problem. Quick delivery may well be a components purveyor’s most valuable sales stimulant. One instantly thinks of distribution by communication link. Then even very small components might be profitably marketed. The catalogue will be equally important. A comprehensive and physically condensed document like the Sears-Roebuck catalogue is what I would like to have for my own were I purchasing components.

Once a corpus of product lines became established and profit potential demonstrated, I would expect software houses to take over the industry. Indeed, were outside support long needed, I would say the venture had failed (and try to forget I had ever proposed it).

Touching on Standards

I don’t think a components industry can be standardized into existence. As is usual with standards, it would be rash to standardise before we have the models. Language standards, provided they are loose enough not to prevent useful modes of computation, will of course be helpful. Quite soon one would expect a components industry to converge on a few standard types of interface. Experience will doubtless reveal other standards to be helpful, for example popular word sizes and character sets, but again unless the standards encompass the bulk of software systems (as distinguished from users), the components industry will die for lack of market.

Summary

I would like to see components become a dignified branch of software engineering. I would like to see standard catalogues of routines, classified by precision, robustness, time-space performance, size limits, and binding time of parameters. I would like to apply routines in the catalogue to any one of a large class of often quite different machines, without too much pain. I do not insist that I be able to compile a particular routine directly, but I do insist that transliteration be essentially direct. I do not want the routine to be inherently inefficient due to being expressed in machine independent terms. I want to have confidence in the quality of the routines. I want the different types of routine in the catalogue that are similar in purpose to be engineered uniformly, so that two similar routines should be available with similar options and two options of the same routine should be interchangeable in situations indifferent to


that option.

What I have just asked for is simply industrialism, with programming terms substituted for some of the more mechanically oriented terms appropriate to mass production. I think there are considerable areas of software ready, if not overdue, for this approach.

This white paper was scanned in from hard copy provided by Ian Hugo, who was one of the two Scientific Secretaries at the 1968 NATO conference. The conference report editors were Peter Naur and Brian Randell. Photo: Ian Hugo (Scientific Secretary), left, and Peter Naur (Co-editor) working on the 1968 NATO Software Engineering Conference report.
