The Rational Edge -- August 2001

Editor's Notes

Projects, Progress, Process

You might call this the "Process" issue. I can't honestly say we planned to have a number of articles this month on the various processes involved in managing code, people, tools, and other factors in software development. But I can say that "process" is a constant topic of discussion at Rational. Whether the subject is Unified Change Management (UCM), the Rational Unified Process (RUP), or more general notions of process -- we have a piece here on "process culture," for instance -- Rational's tools and best practices are designed to make efficient use of our customers' time.

Not that we have all the answers. But with the right processes in place, we do believe you can get close. Our "august" company of Rational thinkers has a few ideas:

Brian White discusses the differences UCM can make when your development organization understands the relationship between projects, component baselines, and activities. Ross Beattie considers why software development teams choose certain processes over others. John Smith offers a very detailed look at how our own services organization helps customers understand the right balance of people, process, and tools for their software development needs. And Philippe Kruchten, Rational's own guru of the RUP, takes us through the process of software maintenance, showing how corrective, perfective, and adaptive maintenance needs can all be handled via the RUP.

For you C++ / UNIX developers, Goran Begic returns with another tour-de-force in debugging techniques using Rational Purify with the GDB debugger. Plus, there's something of an international theme this month as well, with a book review on Java and internationalization (have you heard of "I18N"?), and a broad view of how to expand your business overseas by Rational's UK-based vice president of strategic services, John Stewart.

And Grady Booch is here, with a guest appearance in the "Franklin's Kite" column (a small sampling of content from the new Rational Developer Network pages).



Got any articles or content ideas you'd like to see in print on The Rational Edge? We're always delighted to receive your suggestions and the occasional manuscript via email.

Best wishes,

Mike Perrow
Editor-in-Chief


The Power of Unified Change Management

by Brian White
Director of Change Management Solutions
Rational Software

The term change management (CM) refers to the processes and tools an organization or project uses to plan, execute, and track changes to a software system. Unified Change Management (UCM) is a specific change management process developed by Rational in conjunction with our customers. UCM supports software project teams in managing the production and modification of files, directories, components, and systems. Academically speaking, change management processes consist of two disciplines:

● Software configuration management (SCM)

● Defect and change tracking (DCT)1

SCM deals with version control, workspace management, software integration, software builds, software deployment, and release processes. DCT deals with the processes and procedures by which defects, enhancement requests, and new features are submitted, evaluated, implemented, verified, and completed.

Rational has two tools that support these two disciplines, respectively. The first, Rational ClearCase®, automates SCM-related processes. The second, Rational ClearQuest®, automates DCT-related processes. By using these two tools together, you can automate UCM. Actually, you can automate almost any change management process using ClearCase and ClearQuest, but if you want out-of-the-box support for change management, then UCM is your best choice.


At Rational, we have already responded to the question, "What is the UCM process?" in a variety of ways (see References below). We offer product documentation, a book on ClearCase and UCM, and a multi-media CD that can be ordered for free. So, if you already know a little bit about UCM, you might ask, "What makes UCM better than other change management processes?" I'll try to address that question here.

Let me begin by saying that one process cannot possibly be the best fit for all software projects. Therefore, it is actually meaningless to describe UCM as better than other change management processes without doing so in the context of an actual software development project. So instead, I will describe what makes UCM different from traditional change management (CM) processes. Then you can determine for yourself how these differences would apply to your own software development projects.

A Higher Level of Abstraction with UCM

If you look at the development of software languages, it is obvious that the level of abstraction from machine code has risen considerably over the decades of computer science and engineering. At the lowest level, it's all ones and zeros, and I expect very early programmers worked at this level. Quickly came assembly language, which abstracted away the ones and zeros to provide rudimentary machine instructions such as load register X with value Y. Next came languages like Pascal and C, which provided higher order constructs such as "if-then-else" statements. And now, today, we are starting to realize the potential of "programming" visually. By modeling the behavior of software systems, we can have the code generated for us. With the introduction of these abstractions, it has become easier and faster for developers to program more complex software systems.
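To make the shift in abstraction concrete, here is a small illustration (my own, not from the original article): the comments sketch roughly what an assembly-era programmer would have spelled out by hand, while the C code expresses the same decision as a single readable construct.

    #include <stdio.h>

    int main(void)
    {
        int temperature = 72;

        /* In assembly, this decision would be a series of register-level
           steps, roughly:
               load a register with the value of temperature
               compare the register against 70
               branch to one label or the other
           In C, the same intent is one if-then-else statement:          */
        if (temperature > 70) {
            printf("Warm\n");
        } else {
            printf("Cool\n");
        }
        return 0;
    }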

A similar thing is happening with respect to the evolution of CM tools. Initially, CM tools consisted only of a repository for storing versions: the contents of a file or directory at a given point in time that is stored and identified, and can be retrieved as necessary. Then came tools that allowed users to manage workspaces: a collection of specific versions of files and directories chosen for a specific task or activity. And, as lower-level abstractions such as repository and workspace become common and widely accepted, higher-order functions can be layered on top to simplify the change management process. UCM does just that. Let's look at the three key abstractions that UCM encompasses: projects, component baselines, and activities.

Projects

Typically, software development teams are organized into projects. These projects, in turn, have sub-projects, etc., so that a project may be very large or very small. From a change management perspective, organization by project serves three purposes:

● First, it identifies the team members. This is useful for security purposes and collaboration purposes, both of which are critical to good change management.


● Second, a project limits the scope of files and directories the team needs to be aware of. That is, of all the files and directories, in all the repositories a given company is using, the project identifies the precise subset that a developer assigned to that project needs to think about.

● Third, a project identifies a common integration point for the work being performed by the members of the team.

This all may sound very mundane, but the key advantage of UCM as it is implemented in ClearCase and ClearQuest is that the project is actually a physical object in the CM system that maps to a real-world project. This allows a much higher degree of automation and security than would be possible if project were not one of the CM tool's concepts. When new developers join a UCM project, for example, they are automatically given a workspace pre-configured with the right versions of the files and directories they need.

Components and Component Baselines

The second key abstraction in UCM is the notion of components and component baselines. Most version control systems include the notion of a repository that stores a collection of files and versions of those files. Some tools, like ClearCase, also organize those files into directories and store the directories and directory versions. Almost all significant development efforts have a large number of files that contain the code needed to build the system under development. These files are typically organized into directories, and this organization is often aligned with the software architecture of the system. In traditional CM systems, these key directories are not treated any differently from any other directory. UCM, however, introduces the concept of a component for distinguishing and storing these directories. A UCM component is simply a directory tree made up of files and directories that has one component root directory.

Why does UCM do this?

The main advantage of UCM components, as with projects, is better automation. The best way to understand this is to look at the notion of a baseline. A baseline identifies a related set of file versions. The baseline, in other words, selects one version of each file in the component. Almost all version control tools claim to have support for baselining, but if you look closely, usually you will find that they really only support the concept of labeling. Labeling is a process by which you select a label name and then attach that label name to one or more file versions. By attaching the same name to a number of different file versions, you get a pseudo-baseline.

The problem with this approach to baselining is that there are no semantics implied by the label name -- except those implied by how you use the tool. You can't look at a label and know with certainty what files are associated with it. In fact, in the time it takes you to investigate which files have that label, the label can, in the meantime, be attached to new files, moved to new versions, or removed from selected files. Of course, you can implement controls and locking to enforce your own labeling semantics, but UCM baselines solve these problems for you. These baselines are semantically rich objects that identify a "version" of a UCM component. By using them, you can be certain that all files in that component are associated with the same version. You can also be certain that the baseline will not change out from under you. Once created, UCM baselines are immutable and can be used for defining higher-level configurations. An entire system, for example, can be assembled from a set of component baselines.

Activities

Probably the most distinctive thing about UCM is that it is an activity-based change management model. What does this mean? It means that changes to files are grouped according to the reason for the change. Suppose you are fixing a defect or implementing an enhancement, for example. Whenever you change a file, you identify the reason you are making that change by declaring an activity at checkout time. An activity could be a defect, enhancement request, or simply a one-line description of your change, depending on how rigorous your defect and change tracking process needs to be. UCM supports all these types of activities -- and any others you choose to define yourself.

The primary advantage of activity-based change management is that no file can be changed without an associated reason. A secondary advantage is that the changes are integrated (or promoted) as a single, consistent whole. Most of the time, when you are making a change, you need to modify multiple files. For example, if you are fixing a defect, you may need to modify a C file and a header file. Oftentimes you need to modify many files. With UCM, all you have to do is select the activity to record all the new versions created for all the files. Just as it does for projects and components, UCM introduces a physical activity object into the CM system that maps to a real-world object: the "unit-of-work." This has obvious, immediate benefits: When you are finished with a given task, for example, you can check in all your work at one time simply by checking in the activity.

In addition, however, there are more far-reaching automation and informational benefits. UCM moves changes through the system at the activity level. That is, when you are ready to have your changes integrated, you can "deliver" the activity. This is different from other CM approaches, which require merging a set of files or manually sending a bill of materials to someone who will then list the versions included in your change.

Actually, one of the greatest benefits of the activity-based approach is the way activities and baselines work in combination. After a component has been modified by a number of individuals, a new baseline is created. Through the use of activities and baselines, it is possible to automate the process of determining what is different between one baseline and another. This comparison between baselines produces not only a list of files that have changed from one baseline to the next, but also a list of activities! This has enormous advantages: You can automatically generate release notes, assist testers in determining the necessary set of regression tests to run after the nightly build, and so on.
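As a purely conceptual sketch -- not Rational's implementation, and with the type, field, file, and activity names invented for illustration -- the relationships described above can be pictured in C roughly like this: a baseline is an immutable selection of one version per file, each version carries the activity that motivated it, and comparing two baselines yields both the changed files and the activities behind the changes.

    #include <stdio.h>

    /* One element of a component baseline: a file, the version the
       baseline selects, and the activity that produced that version.
       (Names are illustrative only.)                                  */
    struct entry {
        const char *file;
        int         version;
        const char *activity;
    };

    /* Baselines are immutable once created, so they can be plain
       const arrays: nothing can move or re-attach them later.
       This sketch assumes both baselines list the same files in
       the same order.                                                 */
    static const struct entry baseline_1[] = {
        { "parser.c", 3, "DEF-101 fix crash on empty input" },
        { "parser.h", 2, "DEF-101 fix crash on empty input" },
        { "report.c", 5, "RFE-207 add summary section"      },
    };

    static const struct entry baseline_2[] = {
        { "parser.c", 4, "DEF-130 handle long lines"        },
        { "parser.h", 2, "DEF-101 fix crash on empty input" },
        { "report.c", 6, "RFE-215 print totals in bold"     },
    };

    /* Comparing two baselines yields not just the changed files but
       the activities behind the changes -- the raw material for
       release notes or regression-test selection.                    */
    int main(void)
    {
        size_t n = sizeof baseline_1 / sizeof baseline_1[0];
        for (size_t i = 0; i < n; i++) {
            if (baseline_1[i].version != baseline_2[i].version) {
                printf("%s: v%d -> v%d  (%s)\n",
                       baseline_2[i].file,
                       baseline_1[i].version,
                       baseline_2[i].version,
                       baseline_2[i].activity);
            }
        }
        return 0;
    }

The point of the sketch is only the shape of the data: because a baseline cannot change once created, a comparison like this is always reproducible.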


Based on Customer Systems

This article offers just a taste of UCM's many capabilities and advantages. Fundamentally, this process for managing change on a software project -- automated through the use of Rational ClearCase and Rational ClearQuest -- raises the level of abstraction and possibilities for automation by introducing real-world objects into the CM system. These objects are projects, component baselines, and activities. If you're a long-time Rational ClearCase user, you may recognize some pieces of the UCM process in your ClearCase customization. Many of these script-based change management processes, built on top of ClearCase, played a key role in defining what UCM is -- and will continue to do so in determining what it will become!

References

Brian A. White (Intro by Geoffrey M. Clemm), Software Configuration Management Strategies and Rational ClearCase: A Practical Introduction (The Addison-Wesley Object Technology Series). Addison-Wesley Professional, 2000.

Rational Change Management CD (Free Order/View Request Form)

Footnotes

1 Also known as change request management (CRM), not to be confused with customer relationship management.



Expanding Your Business Overseas

by John Stewart
Vice President, Strategic Services Organisation
Rational Software

So you've conquered the US market with your stunning, next-generation, ground-breaking product, and now it's time to conquer the world. Shouldn't be too difficult, especially as you are already getting lots of enquiries on your Web site from all sorts of countries (especially from "established" distributors wanting to resell). The product works 95 percent out of the box on French and Chinese NT, and you've heard that most potential customers speak English anyway. All you have to do is figure out how to a) have some promotional material translated into a dozen languages for the buyers who don't speak English, b) iron out the remaining 5 percent of product issues, and c) get payment in multiple currencies converted to US$.

Sound familiar?

Well, it's certainly true that the Internet has opened up some fascinating opportunities to promote, distribute, and support software products all over the world from a US base. It's also true (though not yet trivial, and rarely inexpensive) that there is now much more experience and know-how available on designing a software product from the bottom up so that it is "internationalised" and provides a baseline that can be "localised" into many languages. Secure payment methods and the willingness of corporations to pay by credit card have eased the task of collecting payment. (This ignores the customs and tax issues arising from supplying software through the Internet across national boundaries, but we'll put that to one side for now.)

So if your product fits this scenario -- and many do, including re-usable components, libraries, and non-textual tools and utilities -- and if you've thought about protecting your IP (intellectual property) and how to provide 7x24 follow-the-sun support coverage, you could be on your way with relatively little investment. The occasional visit to (large) prospective customers, coupled with an active program to solicit needs from overseas users, will probably see you through for the first couple of years, assuming the business growth meets your aspirations.

On the other hand, many products won't fit this model, especially if your mission is closely aligned with, and dependent on, the success of your customer. Even if you begin by using the "remote Internet" model, solutions that ultimately require a strategic investment will require that you make a local investment. Besides, deep coverage is not achievable via the Internet alone, so business growth is normally limited. It is rare that a product "reinvents" itself every year or so, allowing a company to generate upgrade or renewal revenues with a minimum sales effort.

Most companies will find it essential to establish a local presence sooner or later. In practice, establishing a local presence presents more challenges than the more mechanical and better-understood activities of preparing the product itself for sale in an overseas market. The rest of this article will discuss some of the pitfalls and success factors in establishing a presence that will grow and mature in line with business expectations.

Rational Software, for Example

Allow me to put the issue of "going global" in a familiar context. Today, Rational has around 1,000 people in our international operations. We have subsidiaries and branches in more than twenty-five countries. We have been marketing our software development solutions outside the US since the mid-80s. (Our first international customer was in Sweden.) Our international business accounts for around 40-45 percent of Rational's total; the trend is toward 50 percent. In recent years, our highest growth markets have been in Asia-Pacific. As a consequence of being early entrants into new territories (and "paying our dues" by investing in market expansion), we are now a very significant player in many of the markets in which we operate. Fuelled by the rapid growth conditions over the last few years (e.g., deregulation, multinational expansion, industry leader consolidation, and all things "e"), our headcount and business volume have tripled since 1998.

As you are perhaps aware, Rational's strategy blends technology leadership with customer intimacy to provide world-class solutions aimed at increasing our users' software development capability. This recognises that software and -- for many organisations -- software development capability are critical to achieving the overall business mission. (CFOs and CTOs facing the current tough economic climate are beginning to realise that capability improvements result in significant cost reductions, as well as productivity and quality gains.)

To execute effectively on our strategy, we have built field teams (sales, consultants, local marketing) supplemented by product support groups and financial and administrative operations that are deeply rooted in the following commitments:


● Building a great company within their territory;

● Taking a long-term approach toward customers;

● Understanding the customer's technical and business drivers;

● Establishing credibility by proposing and implementing solutions that add value;

● Maintaining a fanatical commitment to customer success.

Not surprisingly, over the past fifteen years of doing international business, we have learned a lot -- with quite a few "bumps" along the way. Given that the primary asset of any field organisation is its people, the "bumps" inevitably lead to reflections on hiring, management approaches, setting goals and measuring results, work environment, etc. As a company, we believe our employees need to be aligned with our mission of making customers successful. We also maintain a strong belief in our team model. In the hiring process, we intentionally rank personal characteristics and motivation ahead of experience with a particular market segment or technology, since often knowledge can be learned and skills developed over time.

Translating Our Experience

Looking back, a few key principles have emerged that serve us well when we go through the steps of setting up a new operation in an international territory. (Interestingly, they almost all apply equally well to integrating an acquired business, something that Rational has done quite a lot of in the past ten years.) Applying these principles results in a solid foundation upon which we can build a rapidly growing organisation.

1. Build for the Future

Your first few hires will, in ten years, be the leaders of your operation. A common failure mode is to select individuals who are great at getting things going but lose their motivation once the business is beyond the start-up phase. This means hiring above the level of your current needs, which is often a challenge when your company or product is not well known or your business model is new to a territory. Customers in many countries outside the US don't like the chopping and changing of roles that are more common in the US job market. They prefer continuity, especially in more senior positions.

2. Recognise the Need for a Cultural Bridge

Much has been written about cultural differences and gaps and the problems they create in running multinational businesses. The problems are certainly more acute in countries that stick with their own ways of working or have not yet been influenced by American business practices. Three things need to be said:

● Culture impacts many, many aspects of running a business -- e.g., how customers buy, how individuals work together, and how organisations work -- so culture cannot be disregarded. Many locals still look for signs that foreign business people lack appreciation for their local customs, so you should proactively avoid the "ugly foreigner" trap. At a business meeting in Korea, for example, you're expected to address your peers only -- not their bosses, not their reports -- no matter who may be physically present.

● Despite the above point, thanks to the emergence of strong global brands and solutions (e.g., Bill Gates and Company), many aspects of doing business are common throughout the world, and the superficial differences are fading fast; the 70/30 (common/unique) ratio in business factors is trending toward 80/20. I doubt the remaining 10-15 percent of unique business factors will ever go away in most parts of the world.

● Recognise that it is possible to have heterogeneous external cultures (facing customers) and a more homogeneous internal culture (within your own company). For example, recognising hierarchies in customer organisations does not mean that you can't operate shared leadership principles and flat organisations internally, if that's your company culture.

Non-alignment with the company's mission and values often results from local management not being able to effectively communicate downwards to the troops to set a strategy and direction. Investing in new hires by "powering up" in company values and operating practices pays early dividends. There's also a problem when the troops don't effectively communicate their needs and issues upward, leading to frustrations and claims of "they're not listening." So look for key management personnel who have demonstrated an ability to act as a bridge between an American company and overseas entities (businesses and their customers), and who can coach other managers to do the same.

3. Invest in Integration and Communication

Two-way information sharing is a must, to ensure both that remote teams feel like part of the company and that their views and opinions are being taken into account. Of course, this may sound easy to people whose mother tongue is English. But often, international newcomers are reluctant to try out their not-so-perfect English in even the most common forms of internal communication (e.g., e-mail, conference calls), which "expose" them to their peers within the company.

The same goes for more public meetings and conferences. It's easy to fall into the trap of missing the content (and experience) for lack of patience with the form in which it is delivered. There are also specific cultural issues that come into play here; often Asians and "quieter" Europeans become polite listeners rather than active participants. This is not just related to language; it's more a question of raw culture. There is also no real substitute for face-to-face gatherings, especially within cultures where building a relationship is important. Our most successful internal events are driven by teams of employees from different countries targeted toward a common goal. We find it especially helpful to designate an experienced facilitator to make sure everyone is given an opportunity to contribute.

4. Alignment with the Corporation -- Striking the Right Balance

Early on, it is important to decide whether to implement a "one company" model or allow local entities to "tune" their operations for local conditions. Autonomy has many potential dimensions: business planning, sales and marketing strategy, operations. Rational very intentionally follows a "one company" model. Other types of businesses -- especially those in which there is less sharing of IP, or those that have very different target customers and solutions from one part of the world to the next -- need not be so concerned. However, for software companies that sell predominantly to multinational companies operating in different geographies, it pays to establish early measures to avoid divergence from the "one company" model. In fact, showing a "common face" to multinationals is a tremendous business value and competitive advantage.

5. Be Aggressive on Growth

This may seem obvious, but it is worth emphasising. Settling into a trajectory of timid investment once the business case has been proven leads to frustration both internally and externally. It is important to get quickly to a point of critical mass that can sustain a fully functioning business, both for your local international team and their growing base of customers. (This "point" will vary according to the breadth of the product range or the minimum infrastructure needed to support an effective operation.) Many day-to-day activities become easier, since roles and responsibilities can be more effectively assigned and funded. Europeans and Asians also tend to be more risk-averse in their habits, so getting a new in-country field team beyond start-up size and showing a solid growth trajectory removes sales objections as well as the barriers to attracting and hiring good people.

6. Plan Regional Penetration Carefully

If you are about to embark on international expansion, it's best not to spread your resources too thinly by attacking many territories at one time. Instead, use a central country in a geographic region as a starting point, establish a base, and spread out from there. It may be tempting to pursue multiple opportunities in different regions at the same time, but the issues and challenges requiring management attention and commitment are such that the "span of control" is best kept reined-in.

Rational's international operations are on track for strong growth over the coming years. Besides the obvious benefits of having a well-balanced business that can more easily weather any short-term hiccoughs in the domestic market, our local companies have enabled us to better serve our customers and shareholders, as well as to provide a platform for personal development for all of our people.

Good luck and good selling -- wherever in the world that may be!




Software Maintenance Cycles with the RUP

by Philippe Kruchten
Rational Fellow
Rational Software Canada

The Rational Unified Process®(RUP®) has no concept of a "maintenance phase." Some people claim that this is a major deficiency, and are proposing to add a production phase to cover issues like maintenance, operations, and support.1 In my view, this would not be a useful addition. First, maintenance, operations, and support are three very distinct processes; although they may overlap in time, they involve different people and different activities, and have different objectives. Operations and support are clearly outside the scope of the RUP. Maintenance, however, is not; yet there is no need to add another phase to the RUP's sequence of four lifecycle phases: Inception, Elaboration, Construction, and Transition. The RUP already contains everything that is needed in terms of roles, activities, artifacts, and guidelines to cover the maintenance of a software application. And because of the RUP's essentially iterative nature, the ability to evolve, correct, or refine existing artifacts is inherent to most of its activities.

Software Maintenance

The IEEE defines software maintenance as the "process of modifying a software system or component after delivery to correct faults, improve performance or other attributes, or adapt to a changed environment."2 Software maintenance is the process that allows existing products to continue to fulfill their mission, to continue to be sold, deployed, and used, and to provide revenue for the development organization. Generally speaking, maintenance refers to all the activities that take place after an initial product is released. However, in common usage, people apply the term maintenance not so much to major evolutions of a product, but rather to efforts to:

● Fix bugs (corrective maintenance).

● Add small improvements, or keep up with the state-of-the-art (perfective maintenance).

● Keep the software up-to-date with its environment: the operating system, hardware, major components such as DBMS, GUI systems, and communication systems (adaptive maintenance).

The RUP applies well to all these circumstances, mostly because of its iterative nature. Indeed, the evolution or maintenance of a system is almost indistinguishable from the process of building that system in the first place -- so much so that the IEEE standard on maintenance3 looks like a recipe for development, covering problem and modification identification, analysis, design, implementation, regression and system testing, acceptance testing, and delivery!

For a look at the RUP's role in more significant evolutions for existing systems, see my article in the May issue of The Rational Edge: Using the RUP to Evolve a Legacy System.

Software Development Cycles

The RUP defines a software development cycle, which is always composed of a sequence of four phases:4

● Inception phase: specifying the end-product vision and its business case, defining the scope of the project.

● Elaboration phase: planning the necessary activities and required resources; specifying the features, and designing and baselining the architecture.

● Construction phase: building the product, evolving the vision, the architecture, and the plans until the product -- the completed vision -- is ready for a first delivery to its user community.

● Transition phase: finishing the transition of the product to its users, which includes manufacturing, delivering, training, supporting, and maintaining the product until the users are satisfied.

What is really important about the phases is not so much what you do in them, or how long they last, but what you have to achieve. A phase is judged by the milestone that concludes it, and each of these major milestones has some clear exit criteria attached to it, expressed in terms of artifacts that must be produced or evolved, and measurable objectives to be attained.

Then, within each phase, software development proceeds by iteration, repeating a similar set of activities and gradually refining the system to the point where the product can be delivered.


The phases are not optional and cannot be skipped; in some cases they may be reduced to almost no work, but never really to nothing at all. Don't be fooled by the RUP "Hump Chart" (Figure 1); it does not imply that you have to spend time and money doing busy work. One of the RUP's key principles always applies: Do nothing without a purpose.

Figure 1: RUP "Hump Chart"

For example, suppose that just after exiting the inception phase, you realize that you already have all the elements in place for exiting the elaboration phase:

● The requirements are understood.

● The architecture will not change.

● The plan for the first iteration of the construction phase is in place.

Then you have probably done all the work you need to do for the elaboration phase. It is still worthwhile, however, to take a very good look -- just to be sure that this is really the case before hastily jumping to the next phase. At a minimum, you will need an end-of-phase review.

It is very rarely the case that you can collapse a phase to almost nothing for a new development project (greenfield development), but you may be able to do so for an existing system. Figure 2 shows a typical resource profile for an initial development cycle.


Figure 2: Version 1.0: Initial Development Cycle

Evolution Cycles

So now let us assume that the system exists, and it has gone through a RUP initial development cycle. What happens next is an evolution cycle. It has the same overall sequence of phases: Inception, Elaboration, Construction, and Transition, and will result in a new product release. Yet, since the system already exists, the ratios of effort required for the phases, and the actual activities performed in each phase, will be different from an initial development cycle.

In an initial development cycle, there is a lot of discovery and invention, and artifacts are often created from scratch; this is done mostly in the iterations of the Inception and Elaboration phases. In contrast, in an evolution cycle we proceed mostly by refinements of a body of existing artifacts. Is this new? No. This is exactly what we were already doing in the trailing iterations of the Construction phase and during the whole Transition phase of the initial development cycle.

Version 2.0: A Simple Extension

Let's assume we have released an initial product: Version 1.0. The evolution cycle for going to Version 2.0 could look like the one shown in Figure 3.

Figure 3: Adding an Evolution Cycle for a Simple Extension

To start, we have a business case for going to Version 2.0. The scope of this project is to:


● Complete all the requirements that were "scoped out" (but already captured) during the initial cycle;

● Add one or two features that were discovered on the way and captured, but which were left out of scope to avoid disrupting the schedule of the initial cycle;

● Fix a handful of bugs in the defect database.

If none of these requirements affects the overall architecture, and if there is no major risk to mitigate, then the Elaboration phase is reduced to almost nothing -- maybe a day or two of work. Construction (up to a beta version) and transition proceed iteratively as in an initial development cycle, with the same type of staff and allocation of roles. All artifacts are updated to reflect the evolution. The development cycle would then look like the one shown in Figure 4.

Figure 4: Adding an Evolution Cycle with a Minimal Elaboration Phase

Version 3.0: A Major Addition

Not all evolution cycles may be that simple. Let us assume that, based on the success of Version 2.0 (a single-user, single-processor system), the requirements for Version 3.0 include going to a distributed system supporting multiple users. We then need a serious Elaboration phase to evolve and validate the architecture of the system, since the evolution is riskier. Now the cycle would look more like the original profile of an initial development cycle (see Figure 2), with a non-trivial effort in inception and elaboration.

For more discussion of perfective maintenance, and how to tackle system evolution when the original system was not developed with the RUP, see my May Rational Edge article, "Using the RUP to Evolve a Legacy System."

Maintenance Cycles

Now let us look at the more typical cases of corrective and adaptive maintenance.

The "Bug Fix" Release: Corrective Maintenance

Suppose we need a new release of the system that fixes some annoying problems discovered by users. The evolution cycle would include:


● Inception phase: As for any project, we need a scope (What must really be fixed?), a plan (What is the commitment of effort and time?), and a business case (Why should we be doing this?).

● Elaboration phase: This should be minimal; hopefully, most of our bug fixing will not require a change to the requirements or the architecture.

● Construction phase: One iteration is needed to do the fixes, test the fixes, do regression testing, and prepare a release.

● Transition phase: If we are very lucky, and the end of construction showed no regression, then this phase might not need much work.

The key point is this: All the activities we run through are already in the RUP.

Alternative: Extending Your Transition Phase with More Iterations

If the corrective maintenance entails only a minimal amount of change, then you may consider it simply as an additional iteration in your Transition phase (see Figure 5).

Figure 5: Adding Small Corrective Maintenance Iterations to the Transition Phase

If we take this to the extreme, we have a pattern of incremental delivery, as described by Tom Gilb:5 after some good, solid work up-front, the system can be delivered incrementally, bringing additional functionality at each step.

Example: Activities for Simple Corrective Maintenance

Here is a list of activities that you may go through for a simple corrective maintenance iteration:

Activity: develop iteration plan

The objective is defined by a selection of the change requests to be addressed.

Activity: plan test

Identify the specific tests to create to validate the corrections, and all regression tests to run on the release.


Activity: schedule and assign work

Activity: create development workspace

Activity: create integration workspace

Activity: fix defect

This is repeated for each defect.

Activity: execute unit test

Activity: integrate system

Activity: execute system test

This includes all tests to validate the new release.

Activity: create baseline

Activity: write release note

Activity: update change request

Activity: conduct iteration acceptance review

Activity: release product

The "Compatibility" Release: Adaptive Maintenance

Often, we need a maintenance release because part of the system has evolved to a new release number. There could be changes to a system component, such as the database, or to some elements of the system environment, say the operating system, platform, or communication interface. In order to remain compatible, we must rebuild (sometimes) and retest (always) the system against the new elements. But the system itself does not need to be extended.

This type of maintenance cycle will also have a streamlined shape. In the simple case, few artifacts will have to change, and most of the activities will be in regenerating the system and testing it. If an interface has changed, then there may be some design and code to change. All the activities to run through are already defined in the RUP.

However, it is wise to plan two iterations, because of the inherent risks:

● An iteration to do the port or the conversion, and do thorough regression testing: to actually confront the risk.

● An iteration to do whatever corrections are identified from the first iteration: to resolve any issues that arose.

Comparing the Initial Cycle and the Maintenance Cycle

What is different in a maintenance cycle, compared to the initial development cycle, then? Mostly, we have to adjust the level of formality to:

● The size of the organization.


● The risks at stake.

If we are employing a very small staff, with maybe just one person to do these jobs, then that person will have to play many roles defined in the RUP, performing all the activities involved in updating the various artifacts, changing the code, testing the system, and releasing it. The artifacts are neither new nor different; they are simply updated, just as we did in the trailing iterations of the initial development cycle. The better the job we have done in the earlier development cycles, the easier the task of the persons who carry out these maintenance cycles.

If there is a high level of risk, technical or otherwise, then the development must proceed more carefully, making sure that the risks are properly addressed and mitigated, early in the cycle.

Configuration and change management get a higher profile in a maintenance cycle, especially with regard to parallel maintenance of many product variants and older releases.

Overlapping Cycles

Can a maintenance cycle start before the previous cycle is complete? Yes; it is feasible to overlap the cycles slightly, as shown in Figure 6. However, the project manager should keep the following in mind.

● There are often frantic efforts to complete a cycle, especially an initial cycle, and the best people may be needed to meet a scheduled delivery date. Consequently, there may be few people readily available to do a good job in the Inception and Elaboration phases of the next cycle.

● As you increase the overlap, there comes a point at which modifications to project artifacts (source code or others) for the next cycle will result in the need for an onerous reconciliation (merge) with the output of the current cycle.

● The more the cycles overlap, the higher the risks (e.g., miscommunications, regression, rework, competition for resources).

Figure 6: Overlapping Cycles: Feasible but Tricky


Conclusion

The RUP has no notion of a "maintenance phase" following its Transition phase because it does not need one. It has evolution cycles and maintenance cycles, which follow the very same pattern of phases: Inception, Elaboration, Construction, and Transition. Each phase has to satisfy the same exit criteria as for an initial development cycle. Depending on the nature of the evolution or maintenance, the relative effort spent in each phase will vary greatly, compared to that of an initial development cycle. All the artifacts, activities, roles, and guidelines of the RUP still apply, but with an emphasis on correction and refinements of the existing body of artifacts rather than invention and creation of a new one. What happens in a maintenance cycle is not at all different from what is done in the later iterations of an initial development cycle. As the size of the team is often reduced, sometimes to a handful of staff or fewer, the persons executing a maintenance cycle must be more polyvalent or versatile in terms of skills and competencies, since they will play more of the roles defined in the RUP.

Acknowledgments

Thank you to my friends and colleagues for shaping this article: John Smith, Grady Booch, Craig Larman, and the many participants in our internal process forum.

Footnotes

1 Scott Ambler, "Enhancing the Unified Process." SD Magazine, October 1999.
2 IEEE Standard 610.12-1990, Glossary of Software Engineering Terminology.
3 IEEE Standard 1219-1998, Software Maintenance.
4 Rational Software Corporation, Rational Unified Process. Cupertino, California, 2001.
5 Tom Gilb, Principles of Software Engineering Management. Addison-Wesley, 1988.



Rational ContentStudio 2.0 Features New Troubleshooting Information

by Carem Bennett
Senior Technical Writer
Rational Software

In addition to all the great new product features in Rational ContentStudio 2.0 (see sidebar), there is now a complete chapter of helpful troubleshooting information in the documentation set to support all of the product's features. Look for the chapter entitled "Troubleshooting Rational Suite ContentStudio," in Rational Suite ContentStudio Release Notes Version 2001A.04.00, available on the product CD and online at: http://www.rational.com/support/documentation/release/v2001/index.jsp.

In it, you'll find questions, answers, guidelines, and error messages for each component of ContentStudio, as well as general troubleshooting tips at the end of each section -- for issues not addressed by the questions, answers, and guidelines.

For example, here's an excerpt from Troubleshooting the Four-Server Installation (Section 3.2). These are guidelines for installing database software.

■ SQL Server 7.0 Server Installation: The most critical step in installing SQL Server and configuring the CMS database is to ensure that SQL Server 7.0 has been installed using a custom installation and a sort order of Dictionary Order, Case Sensitive.

■ SQL Server 7.0 Client Installation: Any host that is going to access the SQL Server must have the SQL Server client connectivity components. If you are following the suggested four-host configuration, then the CMS_host and the CS_host will both need these components.

■ Oracle 8i Server Installation: The database should be installed using the Typical option. The Service Name is a fully qualified domain name (myoracle.domain.com). The values assigned to Service Name and SID will be used at later steps in the installation.


■ Oracle 8i Client Installation: For each host that will be used to administer the Oracle database, the Oracle 8.1.6.9 client tools will need to be installed. To install and configure the client tools, do the following:

1. Open the Oracle Net8 Configuration Assistant

2. Click Local Net Service Name Configuration

3. Click Add > Next

4. Click Oracle 8i Database or Service > Next

5. Type the value of the Oracle Global Database Name in the Service Name field

6. Click TCP > Next

7. Type the name of the Oracle database server in the host name field, accepting the default port.

8. Click Perform a Test > Next

9. Click Next > Finish

Then, if you have any problems with this installation, you can consult the additional troubleshooting tips in section 3.9:

The most important component of the ClearQuest/Vignette Integration is the CsVgnCQSvc Service. Refer to the following TechNotes for troubleshooting any errors not addressed earlier. All TechNotes are available on the Web at http://www.rational.com/support/index.jsp.

17682 - Unable to load CsVgnCQSvc
15723 - What is the best way to troubleshoot ClearQuest Integration HTTP failures with ContentStudio 1.0?
15833 - ClearQuest troubleshooting for ContentStudio

If you still have difficulty and need to call Rational Technical Support, gather the following information before calling (this will greatly speed your results):

1. The CsVgnCQSvc service returns errors to the Windows Event Viewer Application Log. Check this log for error messages. Technical Support will request a copy of this log. The log can be saved by clicking Action > Save Log File As.

2. Verify that the ClearQuest database has the correct ContentStudio package applied to the schema and that the database is associated with the correct schema version. In the ClearQuest Designer, click View > Schema Summary.

3. Verify the operational integrity of the ClearQuest database by logging on to the database and performing basic tasks (for example, creating a new defect).

4. Note the ClearQuest User ID and Password in the ContentStudio Administration Utility. If these values change, restart Tomcat on the CS_Host.

5. Verify that the name of the ClearQuest user database (ClearQuest Database Name on the ClearQuest tab) matches the name of the user database in step 2. If you need to change it, you need to stop and restart the CsVgnCQSvc service.


As you can see, this new, detailed information is designed to make it easy for you to find the source of problems and fix simple ones on your own. Even if you do end up calling Rational Technical Support, following these suggestions will enable you to analyze and describe the problem to the expert on the other end of the line. That way, you can both work more effectively to quickly resolve the problem.



Power C/C++ Debugging: Using Rational Purify with GDB

by Goran Begic
Technical Marketing Engineer, PDG Tech Marketing
Rational Software B.V., The Netherlands

Are there bugs in C/C++ applications developed on UNIX platforms? I guess we all know the answer to this question by now. Generally speaking, UNIX developers have most of the same problems as their Windows counterparts: syntax and logic errors, memory leaks and memory allocation errors, performance bottlenecks, and so on. The list is long, and every item on it can absorb a lot of a developer's time before an application is ready for delivery to the customer. The only way to test a C++ UNIX application for problems involving dynamic memory allocation is during run time. In this article, I will present a software debugging solution for this purpose that combines one of the popular debuggers --GNU GDB1 -- with our automated tool for detecting memory errors and memory leaks: Rational Purify.

Memory Errors and Rational Purify Error Detection

The C and C++ programming languages allow a developer to allocate and deallocate memory dynamically, using the memory allocation and deallocation APIs. The basic memory allocator in the C run-time library is malloc(). Allocating memory with malloc() is easy: it takes a single parameter, the size in bytes of the block to be allocated, which you normally compute from the type and number of elements you need. As the result of a malloc() call, the system will allocate the requested memory chunk (if enough memory is available) and return a pointer to that chunk.

The good thing about dynamic memory allocation in C/C++ is that the programming language enables you to directly access the memory, resize it, and move, copy, and release it as the application runs. It's important to remember, however, that these advanced memory manipulations should be handled with care.
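
To make these basic calls concrete, here is a minimal, hypothetical C sketch (it is not part of the example program discussed later) that allocates a block with malloc(), resizes it with realloc(), and returns it to the system with free():

#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buffer;
    char *bigger;

    /* Allocate room for 10 characters on the heap. */
    buffer = malloc(10);
    if (buffer == NULL)
        return 1;                       /* allocation can fail */

    strcpy(buffer, "The Edge");         /* 9 bytes, including the terminator */

    /* Grow the block; realloc() may move it, so keep the new pointer. */
    bigger = realloc(buffer, 32);
    if (bigger == NULL) {
        free(buffer);                   /* the original block still exists */
        return 1;
    }
    buffer = bigger;

    free(buffer);                       /* return the memory to the system */
    return 0;
}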

Failure to Free Memory Resources

The most basic error you can commit with respect to dynamic memory allocation is to forget to free the resources after they are no longer needed. The memory allocated with malloc() can be returned to the system by calling the API free(). If the memory allocated with malloc() is allocated in a loop, for example, and the application is a system service that is supposed to run without restarting for a longer period of time, then the leaking memory can easily cause the application to run out of memory -- or even bring the whole system down.

Unfortunately, this scenario is all too likely to occur in reality. That is why some server vendors, such as IBM, ship their servers equipped with "software rejuvenation" applications; they can detect when an application is running out of memory and restart it before it consumes all available resources for other applications running concurrently on the server and brings down the system.
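
To illustrate, here is a small, hypothetical sketch of such a leak; handle_request() stands in for the work a long-running service might do for each incoming request:

#include <stdlib.h>
#include <string.h>

/* Hypothetical request handler for a long-running service. */
static void handle_request(const char *payload)
{
    char *copy = malloc(strlen(payload) + 1);
    if (copy == NULL)
        return;

    strcpy(copy, payload);
    /* ... process the copy ... */

    /* BUG: no free(copy) -- every request leaks one block.  A service
       that handles requests for weeks will slowly run out of memory.
       The fix is a single line at the end of the function: free(copy); */
}

int main(void)
{
    int i;
    for (i = 0; i < 1000; i++)          /* each iteration leaks a little more */
        handle_request("some request data");
    return 0;
}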

Out-of-Bounds Write and Read Errors

Other very serious memory errors are memory out-of-bounds read and write errors. These errors occur when you try to read or write memory outside the allocated memory area. For example, if you have stringA that consists of ten characters and you try to copy it to the memory allocated for stringB of five characters, then the five extra characters will be written beyond the end of memory allocated by stringB. The same thing can also happen if your indexes for arrays are simply incorrect. Say you have an array of ten integers but try to access the fifteenth element of the array. C/C++ as a language permits you to do this, so you can get away with this error as long as the copied string does not corrupt any valid data outside the boundaries for the allocated memory. Keep in mind, however, that although you may get away with it one time -- maybe even a hundred times -- sooner or later this error will corrupt some important data, and the application will crash. Most likely, the crash will occur long after the application is shipped to the customer, but it can happen at any time.
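
A short, hypothetical sketch of both situations (the names stringA, stringB, and numbers are illustrative only) might look like this:

#include <string.h>

void out_of_bounds_demo(void)
{
    char stringA[] = "0123456789";   /* ten characters plus the terminator    */
    char stringB[5];
    int  numbers[10];
    int  value;

    strcpy(stringB, stringA);        /* writes past the end of stringB (ABW)  */

    numbers[14] = 42;                /* index 14 is the fifteenth element --
                                        outside the ten-element array (ABW)   */
    value = numbers[14];             /* reading past the end as well (ABR)    */
    (void)value;                     /* silence "unused variable" warnings    */
}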

Other Memory Errors

In addition to memory leaks and out-of-bounds errors, there are other important memory-related errors:

● Accessing Uninitialized Memory. This type of error occurs when we read from uninitialized memory -- memory that has been allocated by the program but not initialized. An example would be memory allocated using malloc(). We need to memset() the memory to initialize it; otherwise the contents of that memory are undetermined.

● Accessing Freed Memory. This error occurs when there are pointers to memory that has been freed or reallocated. The memory is returned to the operating system, but there is still a pointer to it, and the application may attempt to use it in error.

● Incorrect Frees, or Reallocs. These occur when the application attempts to free, or realloc, memory that has already been freed, or that was never allocated. Incorrect frees also include a mismatch between the allocator and the deallocator: e.g., allocating with new but freeing with free(). The correct procedure is to release memory allocated with new by using delete. A short sketch of the first two cases follows this list.
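
Here is a small, hypothetical C sketch of those first two kinds of incorrect frees (the allocator/deallocator mismatch involves C++ new and delete, so it is only noted in a comment):

#include <stdlib.h>

void incorrect_free_demo(void)
{
    int  on_the_stack = 0;
    int *on_the_heap  = malloc(sizeof(int));

    free(on_the_heap);
    free(on_the_heap);     /* error: this block has already been freed       */

    free(&on_the_stack);   /* error: this address never came from malloc()   */

    /* In C++, the analogous mismatch is allocating with new but releasing
       the memory with free() instead of delete.                             */
}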

Unfortunately, these memory errors are difficult to detect, even when you run an application through a debugger. To really check for all memory errors, you need to trace the execution of the program in the debugger -- and look at every memory access to determine if it is legal or not.

That's why you need Rational Purify. It does that for you automatically and reports the memory errors as you are running the application.

Purify tracks memory errors by flagging each byte of allocated memory with a special pattern of bits. All memory used by the application is classified by state and indicated by a color, as shown in Figure 1.

Figure 1: Rational Purify Classifies Memory State by Color

Depending on the status of the memory and the operation the programmer would like to perform, Purify will report errors and warnings; if the operation is legal, then it will just continue the execution.

As an example, let's look at an array allocated with malloc():

char *string2 = malloc(10);

If the program tries to read the content of this string without initializing it first, then Purify will immediately report a UMR (Uninitialized Memory Read). The state bits are set to "yellow" for memory that is allocated but not initialized. If you write to memory marked yellow, then the state bits change to "green," which denotes memory that is allocated and initialized. Both read and write are valid operations for such memory.

"Red" memory is neither allocated nor initialized by the application. Every attempt to access this memory will result in an error. If the program tries to read from this memory, then Purify will report an access error such as ABR (Array Bounds Read), ZPR (Zero Page Read), NPR (Null Pointer Read), IPR (Invalid Pointer Read), etc., depending on the memory being accessed. Similarly, Purify will complain if there are any writes to this memory.

Various combinations of the state (color) and the memory operation will result in different error notifications by Purify.

Preparing an Application for Debugging

GDB is a free GNU debugger. Its long list of features and its ready availability make it a very popular choice among UNIX developers. Together with a free GCC compiler, it provides a strong foundation for ensuring quality in C/C++ applications. GDB can be used together with Rational Purify to thoroughly examine the application for memory errors and leaks. Both the debugger and Purify use symbolic debugging information to control breakpoints and link machine code with the source files of the application under test. In order to generate the debugging information, you need to compile your source code with the following option (-g) for the GCC compiler:

gcc -g helloEdge.c

Instrumenting an Application with Rational Purify

Rational Purify needs to insert additional assembly instructions into your program in order to control execution of the application and monitor memory allocation when you start it. Here is how to do it:

purify gcc -g helloEdge.c

When you start such an instrumented executable, the PUT (Program Under Test) will automatically invoke the Purify GUI (Graphical User Interface) and start collecting information about the run.

Running the GDB Debugger

You can start the GDB debugger simply by executing the command gdb from the command line. Once GDB is started, you can enter commands for the debugger. The easiest way to start your application in the GDB debugger is to use the name of the application as a parameter for the run:

gdb ./a.out

Example Application

Our example is a little "Hello, Rational Edge" application that doesn't do much of anything except create a couple of errors that are difficult to detect without a specialized tool like Rational Purify.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    int i, length;
    char *string1 = "Hello, theEdge";
    char *string2 = malloc(10);
    length = strlen(string2);            // UMR

    for (i = 0; string1[i] != '\0'; i++) {
        string2[i] = string1[i];         // ABWs
    }

    length = strlen(string2);            // ABR
    printf("\nHello");
    printf("\n");

    return 0;
}

So what is wrong here? An experienced C developer's eye would probably catch some obvious errors, even without knowing the meaning of the abbreviations in the commented lines. If you compile and run this small example, however, the compiler and the system will not complain: The application will seem to run just fine.

But what if this were a program with thousands of lines of code? Could an experienced developer's eye help there? I doubt it. And even static analysis tools cannot detect errors similar to those in our example; you'd have to run the program in order to catch all the problems.

The first error is simply an uninitialized memory read. We allocated a string and were reading from the allocated memory without assigning any string to the allocated memory.

The second error is an Array Bounds Write (ABW), which is an out-of-bounds type of error. The program tries to copy a string of fifteen characters (including the terminating null character) -- "Hello, theEdge" -- into the memory area allocated with malloc(), which only has room for ten characters. That means the "extra" characters from the first string will overwrite the original terminating null character and continue writing outside the boundaries of the allocated array.

The third error is an Out-Of-Bounds Read from the same string whose boundaries we already violated.

In addition, we have a memory leak from the dynamically allocated array string2: we never called free() on the memory allocated with malloc() to return it to the system.

Debugging the Instrumented Application with Rational Purify

Running GDB against the instrumented application is the same as running it against the original, non-instrumented version of the tested application. Begin the execution with the command "r".

If you don't select any specific conditions for the execution, the application will execute without breaks, and Purify will report the three errors we just described above (see Figure 2).

Figure 2: The Instrumented Program Running in Rational Purify

As the application is executed, either in the debugger, or as a standalone process, Purify collects information and displays it, almost in real-time. Figure 3 shows the first report -- about the uninitialized memory read (UMR).

Figure 3: The Purify UMR Report

As this screen shows, Purify correctly detected an attempt to read uninitialized memory, and the report points to the place in the source file where the call has been made. It also shows the line where the array has been allocated.

If you continue the run, the next report will be similar: The error location and allocation location will be displayed, leaving no doubt that memory errors occurred during the run. This is really important, because the application would otherwise run "fine" most of the time; we might not detect the Out-of-Bounds Write (ABW) error; nor could we predict when it might actually corrupt valid data and cause a crash.

Figure 4 shows the report for an ABW error. In the for() loop, the application copied a fifteen-character string into memory allocated for only ten characters. This type of error is very dangerous and should never be left in an application, especially not in the version that you ship to a customer.

Figure 4: ABW Error Report in Purify

Examining the Tested Application in Detail with Rational Purify and the GDB Debugger

Now, if you want to peek into every corner of the application (and you should!), then you can use the GDB debugger and Purify together. The tools are similar, but together they can provide you with even more important information about an application than you would get from running each tool individually.

Setting Breakpoints in the Debugger

Breakpoints are essential tools to use when debugging. A breakpoint is a special instruction placed into the code by the user. When execution reaches this instruction, the processor stops (breaks) exactly there, and the user can view the contents of the stack, registers, and memory. In order for breakpoints to work, the program must be compiled with the option -g to create symbolic debugging information.

A breakpoint can be set for a certain line of code. For example:

(gdb) break 10

This instructs the debugger to stop execution at line ten of the source code.

To begin execution, the application needs to be started in the GDB debugger. The instruction for this is 'r' (run).

(gdb) r

The instrumented application will start, launch Purify, and stop at the breakpoint.

After breaking at a certain line of code or function, you can display the contents of particular variables. Here is an example:

(gdb) print string1

You can continue the run of the instrumented application by entering the command 'c' for 'continue.'

(gdb) c

Another way of setting breakpoints is to specify the function name at which the execution should stop. For example:

(gdb) break main

This will stop execution when the main() function is called.

Instead of setting a breakpoint at a function or line of code, you can also set one at an address. Plus, you can attach additional conditions to breakpoints. In such cases, the full expression looks like this:

(gdb) break <line number or function name> if <condition>
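
For example, assuming the copy inside the for() loop of our helloEdge.c listing sits at line 13 (the exact number depends on your copy of the file), a breakpoint that fires only late in the loop could be written as:

(gdb) break helloEdge.c:13 if i > 5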

Finally, you can use the following command to delete all breakpoints:

(gdb) delete

For complete instructions on breakpoints, consult the GDB manual listed under References below.

Attaching the GDB Debugger During a Run

You can modify the application so that it doesn't exit automatically, by adding the following line of code, for example:

sleep(30);

This allows you to delay the execution of the application at that point, giving you enough time to attach the GDB debugger during the run. In real life, you would most likely be dealing with a program that runs as a service for a longer period of time.

I have added this line to our little "Hello, Rational Edge" example and recompiled and instrumented the application. Since we need the PID (Process identifier) for the instrumented executable, we can start the program with the following command:

./a.out &

That will give us the PID. In this case it is:

[1] 15050

Then, if we start the GDB debugger and pass it the PID for the running program, we will attach the GDB debugger to the process with the specified ID, and we can continue testing (setting up breakpoints and watchpoints as well):

gdb ./a.out 15050

Advanced Debugging with the GDB Debugger and Rational Purify

While we were running our "Hello, Rational Edge" application in the GDB debugger, Rational Purify was collecting data about the run. This is not the only way to use Purify with the GDB debugger. Not by a long shot. Below, I will describe some lesser-known but powerful features of Purify that can be very helpful in testing your software for memory errors and leaks.

Using Rational Purify API Functions

The Purify API consists of functions that you can use to help debug and diagnose memory errors.

Calling Purify API Functions from the GDB Debugger. Some API functions are meant for use from within a debugger. For example:

purify_stop_here()

In the GDB debugger, we can call this function by specifying the following GDB command:

(gdb) break purify_stop_here

Purify calls purify_stop_here() each time it reports an error, so this breakpoint stops the program at every reported error. In our example, the first break would take place when Purify throws the UMR report.

If you stop the execution at this breakpoint and look at the call stack, it will confirm the Purify report by showing the functions called when the error occurred. The content of the call stack can be shown with the GDB command bt:

(gdb) bt
#0  0x535d4 in purify_stop_here ()
#1  0x413a4 in strlen ()
#2  0x5710c in main () at helloEdge.c:10

Let's look at another example of an API that can be called from the debugger.

purify_describe(addr)

This will show how Purify sees the memory: "global data," "on the stack," or "X bytes from the start of the malloc'ed block at Y."

In the GDB debugger, this function can be called in conjunction with the command print. For example:

(gdb) print purify_describe(addr)

Calling Purify API Functions from Your Application. In order to call the functions from your application, you will need to include the header file (purify.h) in your project. This header file comes from the product home directory, and you can obtain the path with the command:

purify -printhomedir

Similarly, the Purify API stub library (purify_stubs.a) also comes from that directory.

#include <purify.h>

You can also link your application with the Purify API stub library, which eliminates the need for conditional compilation.

Here are some examples of the APIs that can be called from within your code:

purify_is_running - Returns TRUE when the program is Purify'd.

purify_printf (_with_call_chain) - Prints a message to the log (with call-stack information).

purify_new_leaks / purify_new_inuse - Reports how much more memory is leaked/in use since the last call.
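
As a hedged sketch of how these calls might be combined in application code (the exact prototypes are declared in purify.h; the guard keeps the code harmless when the program is not instrumented):

#include <stdio.h>
#include "purify.h"    /* Purify API declarations; link with purify_stubs.a */

/* Hypothetical unit of work that allocates and frees memory. */
static void do_some_work(void)
{
    /* ... application code under test ... */
}

int main(void)
{
    do_some_work();

    if (purify_is_running()) {
        /* Leave a marker in the Purify log, then report any memory
           leaked since the previous purify_new_leaks() call.        */
        purify_printf("do_some_work() finished");
        purify_new_leaks();
    }
    return 0;
}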

Using Rational Purify Watchpoints

By setting special Purify watchpoints, you can monitor specific kinds of memory accesses. Using watchpoints can be very helpful for situations in which memory mysteriously changes between the time it is initialized and the time it is used. When a watchpoint is set, Purify automatically reports the exact cause and result of each memory access.

There are four reasons why Purify watchpoints are better than debugger watchpoints. Purify watchpoints:

● Are faster -- since they come "for free" while running Purify.

● Can detect mere read accesses; the memory need not be written to.

● Can be set from within your program by using watchpoint APIs, thereby making them independent of the debugger command line.

● Can provide more information. For example, we can see when a stack address goes in and out of scope.

Our "Hello, Rational Edge" example is so small and trivial that we don't really need watchpoints to understand what is happening in the application during a run. But let's consider the following piece of code:

Object *myObject = NULL;    // global pointer

int main() {
    myObject = create();
    result = compute(myObject);
    report(result);
    destroy(myObject);
    return 0;
}

For the developer who owns this part of the code, the following scenario could be a real challenge. Let's suppose we run our program in Rational Purify, and Purify reports a memory leak on the object -- even though we free myObject in the function destroy().

We would need to use the debugger in order to examine the exact events that led to the memory leak. By using the debugger, we can determine, for example, that myObject is already NULL when destroy() is called! So what happened to the global pointer "myObject" in this program?

#include "purify.h" // header file for Purify APIObject * myObject = NULL ; // global pointerint main() { if ( purify_is_running() ) { purify_watch_n(&myObject, 4, "w") ; } myObject = create_object(); result = compute(myObject); report(result); destroy(myObject); return 0;}

The easy and elegant way of getting the answer to this question would be to set up Purify watchpoints in the code that will stop the execution at the moment when the global reference to the object gets removed.

After setting the watchpoint that will stop the execution every time the myObject pointer changes, we could expect the watchpoint to stop the execution at myObject = create_object(), because this is the line where we create this object.

If we had an error in the function compute() -- for example, NULL-ing the reference to the object prematurely, without destroying the object first -- then Purify would correctly report a Memory Leak event, even though we have created a function to destroy the object. This error would be detected by the second, unexpected break of the watchpoint in the function compute(). The line of code that caused the error might be perfectly legal: the developer may have NULL-ed the reference to the object prior to destroying it, in the line myObject = NULL;.

The Purify watchpoint would lead us directly to the heart of the problem, as you can see in Figure 5.

Figure 5: Rational Purify Report for the Unexpected, Second Purify Watchpoint

Suppressing Error Collection and Display in Purify

Rational Purify allows you to prevent certain messages from being collected and displayed by the viewer. This can be a very useful feature if:

● You have errors that you cannot correct, such as errors in third party libraries for which you don't have the source code.

● You'd like to focus only on specific errors; for example, if you want to hide errors you know about, so the new ones are easier to see.

● You want to hide reports that you know are harmless, such as a small, one-time leak.

There are several ways to engage these suppressions:

● By activating suppressions in the Viewer.

● By specifying suppressions directly in a .purify file.

● By using the option "-suppression-filenames."

Suppose you have a program that sometimes uses a counter:

if (use_the_counter)
    object.counter = counter_initialize();
object.counter++;

if (use_the_counter)
    return object.counter;
else
    return 0;

use_the_counter is Boolean and can have only two states: "true" or "false."

In this snippet we initialize the counter only if the condition is "true"; if the condition is false, then we read from it. This will create a UMR error if use_the_counter is "false." But as it is not really an error, this would be a good candidate for suppression.

Some errors are suppressed by default. For example, Purify distinguishes general, uninitialized memory reads from the reading of uninitialized memory just to make a copy. Such Uninitialized Memory Copy (UMC) messages are suppressed by default. If you use the copy later, Purify will generate a UMR.

Here is an example of the UMR message:

/* Suppose arg_ptr points to uninitialized memory */
void SomeFun(int *arg_ptr) {
    int local = *arg_ptr;              /* UMC (suppressed) */
    printf("value is %d\n", local);    /* UMR here */
}

If you wanted to see this message in Purify, then you would need to engage the option "View/Suppressed Messages."

Combining Rational Purify with a Debugger Saves Time and Money

The debugging process for an application should not be limited to using a debugger reactively to detect the cause of known problems. Using an automatic run-time error and leak detection tool such as Rational Purify can help you detect and pinpoint hard-to-find memory errors. By combining such a tool with a debugger, you can make the software debugging task easier, and save your development team both time and money.

Note: Other popular debuggers, such as dbx or the debugger that comes with the Sun Forte or Sun Workshop development environment, work well with Rational Purify, too. For Hewlett Packard (HP), the most popular debugger is called WDB -- which is HP's open source implementation of the GDB debugger. More information on debuggers can be found on the sites listed under References below. A fully functional version of Rational PurifyPlus can be downloaded for evaluation purposes from the PurifyPlus Web page: http://www.rational.com/products/pqc/index.jsp.

References

GNU GDB manual: http://www.gnu.org/manual/gdb-4.17

PurifyPlus documentation: http://www.rational.com/products/purify_unix/index.jsp

HP WDB debugger: http://devresource.hp.com/devresource/Tools/wdb/index.html

Debugging with dbx: http://docs.sun.com/htmlcoll/.../SWKSHPDEBUG/DebuggingTOC.html

Footnotes

1 According to the GNU Web site, "The GNU Project was launched in 1984 to develop a complete Unix-like operating system which is free software: the GNU system. (GNU is a recursive acronym for 'GNU's Not Unix'; it is pronounced "guh-NEW".)"


Process Culture and Team Behavior

by Ross Beattie
Software Engineering Specialist
Rational Software
Sydney, Australia

Process culture is a framework that helps to explain why software development teams choose certain processes over others. In this article, I build on Walker Royce's early work on process culture, published in his book Software Project Management: A Unified Framework.1 The book discusses two major process discriminators -- management complexity and technical complexity -- and the way they affect software teams' decisions about how to tailor their development process. In the course of my work, I have found other process discriminators that significantly impact software team behavior. Here, I describe them in terms of two process cultures -- product culture and service culture.

Two Process Cultures

What do I mean by process culture? That every team has certain behaviors that an external observer can readily distinguish. Teams act, sound, and react differently to the situations they encounter during the software development lifecycle. The actual process a team follows will generally be influenced by this set of behaviors -- i.e., its particular process culture. I have identified two patterns of team behavior that I refer to as product culture and service culture. Table 1 briefly explains the basic differences between them.


Product Culture

In a product culture project, the development team produces a software product for speculative sale to more than one customer: e.g., a word processing tool that comes in a shrink-wrapped package and is sold through office chain stores.

For such teams, the primary financial objective is to generate income from repeated sales of the software product. If the first sales happen to cover the development costs, and the company generates a quick return on investment, that is a bonus.

Service Culture

In a service culture project, the development team produces software under contract to one customer or relatively few stakeholders: e.g., a billing system for a single customer such as a telecommunications company.

For such teams, the primary financial objective is to generate income from the one-off sale of the software development service. If the company is able to generate further income by selling the resulting software, or parts of it, that is a bonus.

Table 1: Product Culture vs. Service Culture

How Does Process Culture Affect Behavior?

Now that we understand the basic differences between these two process cultures, let's look at some common process discriminators for software projects:

● Requirements specifications

● Cost and schedule estimates

● Customer role

● Lifecycle approach

● Customer/developer relationships

● Software acceptance and quality

● Component reuse

● Documentation

● Ceremony

How a team deals with these discriminators can vary greatly, depending on whether it is working on a product culture project or a service culture project. I will describe the typical ways in which teams within each culture approach these process discriminators, based on anecdotal evidence I have gathered from a variety of different types of software development projects.

Requirements Specifications

Product culture projects are characterized by a tension between developers and stakeholders. The latter want the product to reach market quickly enough to beat the competition and generate a reasonable profit, and they have little tolerance for extensive requirements specification efforts. Often, managers are concerned that if too much time is spent on specifying the software, then the market will change, and the specification will become outdated. They prefer a "Fire, then re-aim" strategy rather than an "Aim, then fire" approach. Often, the latter degrades into an "Aim, re-aim, re-aim, ..." syndrome, also known as "analysis paralysis."

Service culture projects are characterized by a greater emphasis on producing software that conforms to the stakeholder's stated requirements. The requirements and the tests linked to them are concrete criteria that the development team uses to show they have met their contractual obligations and should therefore be paid. The strategy here is "Aim very carefully, then fire."

Cost and Schedule Estimates

In a product culture project, planning usually hinges on two questions: 1) "How much time do we have to come up with a new release to counteract the inroads our competitors are making in the market?" and 2) "How much will it cost to implement these improvements, and will income from additional product sales -- if we get the product out earlier -- outweigh the cost?" Because of the severe schedule pressures on these teams, they are often given strong financial incentives to get new products or features to market before the competition. Companies that have already established a market for a software product may feel competitive pressures to release new versions at regular intervals (e.g., every six or twelve months).

Service culture teams are more likely to take the time to do up-front cost and schedule estimates, since the company must generate a profit from the original work. There is normally limited opportunity to generate additional income from sales of multiple copies of the software. If they've negotiated a fixed-price contract, both sides may be extremely reluctant to make cost or schedule changes because the process of renegotiating the contract can be so painful. Service culture projects also suffer schedule pressure, although it frequently comes from renegotiation of delivery dates based on changing priorities within the customer's organization. Delivery dates are usually negotiated based on schedule, cost, and resource trade-offs: e.g., "You can have it in six months, but it will cost you four times more than it would if you gave us nine months to deliver."

Customer Role

Product culture project customers emerge only after the initial version of the product is delivered to market. The customer's requirements are initially unknown, and may in fact be unknowable if the market for the product is unexplored. Customer requirements are generated incrementally as the product is actually used in a variety of settings.

Service culture project customers are present before software development begins and typically play a major role in defining the project. The people representing a customer's needs may change during the development lifecycle, however, or the requirements expressed by those people may change from time to time.

Lifecycle Approach

I hesitate to make definitive pronouncements about this discriminator, but I suspect that people working within product culture projects are more attracted to an iterative lifecycle approach, while those within service culture projects are, at least initially, more likely to be attracted to a waterfall approach.

Within a product culture project, the drive is to get the product to market quickly and to test market acceptance of its features. An iterative lifecycle approach is ideally suited to meet this need since products generally evolve in an iterative fashion based on customer feedback. Each iteration represents a potential opportunity to provide the market with new or updated product capabilities and features, or to correct critical defects and make the product acceptable to new or prospective customers. Occasionally, software development managers for service culture projects fear that the iterative approach implies the development effort will never end; they think the word iterative is a synonym for an undisciplined approach to software development. My experience has shown that it is just the opposite: An iterative approach is highly disciplined.

In the context of service culture projects, customers often seek reassurance that they are getting their money's worth from the development team. Some project managers are initially attracted to a waterfall approach because they or their customers perceive it as an orderly methodology that makes their project amenable to measurements of progress. These supposed benefits, however, are almost always illusory. The waterfall approach usually leads to a false sense of security. It typically results in a number of unplanned iterations due to the late discovery of serious architecture and design faults, and an inability to capture the customer's true requirements. Overall, although the transition from a waterfall approach to an iterative approach may be challenging for some managers (see Philippe Kruchten's article in the December 2000 issue of The Rational Edge), I have found that it is always worth the effort and the temporary discomfort.

Customer/Developer Relationships

Product culture projects are characterized, from a legal point of view, by a distant relationship between the customer and the developer. Most packaged software products are sold with license agreements that tend to favor the developer and traditionally state some variant of "Use the software at your own risk. No warranty or guarantee provided."

In a service culture project, the relationship with customers is much more intimate. The organization typically has contractual relationships with a small number of customer stakeholders. Too often, the development team's objective is to build software that meets those stakeholders' stated requirements at minimal cost, not necessarily to develop software that meets the customer's real needs. Such teams can spend a great deal of effort crafting a compliance statement to prove to the customer that the requirements have been met, whereas what the customer most wants to see is whether their real needs have been met.

It is also true that, even with the best intentions on both sides, the customer-developer relationship is potentially quite adversarial: Some customers try to interpret every requirement to squeeze out maximum development effort and benefit, while the developers fight back, trying to closely scope the interpretation. If the bonds of trust between the customer and developer dissolve in the process, then this situation can degenerate into a legal debate over what the words in the requirements specification actually mean. Then, the only winning party is the mediator, usually a representative of the legal profession.

By using an iterative development lifecycle, much of this conflict can be avoided, because both sides can clearly see what they have to trade. At the end of each iteration is an executable release. Often customers do not know what they want, but they will know it when they see it. In such instances, an executable release provides an opportunity for customers to see a tangible manifestation of their specification, and for the development team to gain confidence in their ability to deliver. In the process of evaluating the release, customers can discover their true requirements, and both sides can discover opportunities to reprioritize and make trade-offs.

Software Acceptance and Quality

A development team in a product culture project must constantly review features and capabilities against the backdrop of evolving market demand. Features and capabilities that looked essential at the beginning of the project may be replaced by others as competitors introduce new capabilities or raise the performance bar in the marketplace.

In terms of quality, a team within a product culture project acknowledges that testing is a premium they must pay in order to insure themselves against possible financial and legal penalties should the software fail to meet functional and performance requirements stated in the product literature. They also acknowledge that the software product will probably always contain some defects -- both discovered and undiscovered. The challenge is to minimize the number of undiscovered defects and to make a judgment about whether the discovered defects are serious enough to delay release. Cost of failure is an important consideration, and the team recognizes that the effort they apply to quality assurance activities should be commensurate with that cost. If the software is part of a safety-critical system, the cost of failure is likely to be very high (or completely unacceptable), so the team must place a high premium on quality. In contrast, if a failure merely means that the customer will have to restart the application or system, then attention to quality can be a little lower. I refer to this as the "Quality Insurance" approach.

A team operating within a service culture project has a more stable set of acceptance criteria. Usually, they must simply demonstrate that requirements have been objectively verified. These teams typically do a lot to assure the quality of the software, however. They form independent testing teams to create test cases and scripts (manual or automated). And within the development organization, the quality assurance team works to represent the customer's interests, and to provide independent verification that requirements have been met. I refer to this as the "Quality Assurance" approach.

Component Reuse

Teams operating within a product culture project are more likely to reuse software components, primarily because they are more likely to be working in a stable application domain (e.g., banking and finance, telecommunications).

Service culture projects, in contrast, may employ their teams in a variety of application domains and technical environments. For example, one project may require the development of an Internet-based accounting system, and the next an embedded process control application. It is almost always the case that a particular component cannot be reused from one project to another. The component may be unsuitable for the new application domain, have requirements that preclude its reuse, or be unusable in the new technical environment due to language or operating system constraints, for example.

Documentation

Product culture teams are more likely to build the product first, and then document it afterwards. This is often because of market forces: It can be a waste of time and energy to do a lot of documentation up front if market conditions are likely to force late changes to the product's features. Organizations in this situation often provide their product documentation via an online Help feature that is built in and delivered with the software.

Service culture teams are much more likely to write documentation up front that carefully defines the system and, in general, to be document-driven, as part of an overall effort to contain the customer's expectations about what the software will deliver. As model-driven development is becoming more widespread, however, this practice is changing.

There has been a slow and steady evolution of understanding in the software industry. To a certain extent, the rise of the waterfall approach to software development was a reaction to the "bad old days" when developers hand crafted software with very little thought to requirements and design. Organizations suffered when these programmers resigned after completing the software (not necessarily at the end of the project!), leaving a trail of undocumented and poor quality code. The waterfall approach provided an opportunity for project managers to insist upon detailed, up-front documentation that invariably became a historical (in many cases, hysterical) snapshot of the original requirements, and design and implementation intentions. Now, with model-driven software development, teams can have the best of both worlds: they can use tools to automate development of high-quality software via the use of models, and they have the ability to generate documentation automatically from the models -- at any time and at low cost.

Ceremony

I define ceremony as the level of formality a software development team needs to get the job done. Teams with large numbers of people often require more formality to overcome the problems that can arise when you have multiple interpersonal connections among team members. On the surface, it would appear that this discriminator has nothing at all to do with whether a team is working within a product versus a service culture project and everything to do with the size of the software system. But appearances can be deceptive.

Product culture projects typically foster an entrepreneurial spirit among team members, who resist unnecessary levels of ceremony. The prevailing attitude tends to be "Just build it and get it to market as soon as possible." And it's true that market forces can favor products that have first advantage, even if their quality is less than perfect.

Service culture projects tend to place greater emphasis on ceremony in the team's interactions with the customer, for a variety of reasons. First, there may be a number of stakeholders, not all of whom fully understand the complexities of developing software to specification. Second, often more money is involved than in a product culture project, which is commonly perceived as a higher-risk investment environment. Third, I have seen a number of situations in which a high degree of ceremony was used in interfacing with the customer in order to mask chaos within the development team. In one case, the project manager used high ceremony in order to slow down the rate of changes in requirements, thereby enabling the team to recover a sense of order.

Know Your Process Culture

My experience has shown time and again that a software development team's process culture -- whether it be a product culture or service culture -- has a dramatic effect on the type of process improvement the team will consider necessary. As the largely anecdotal evidence I've presented suggests, understanding the process culture within your organization can help you with these important decisions. If you are a team leader, for example, it can help you predict what type of process might be appropriate for the team as well as how willing the team will be to accept it!

Footnotes

1 Walker Royce, Software Project Management: A Unified Framework. Boston: Addison-Wesley, 1998. (See Chapter 14: "Tailoring the Process.")

Acknowledgments

I would like to acknowledge Philippe Kruchten, who first suggested that I write this article for The Rational Edge. Thanks also to Walker Royce, Joe Marasco, Gary Pollice, John Smith, Catherine Southwood, and Marlene Ellin for reading my initial drafts and providing a number of valuable suggestions for improvement.


Achieving Quality By Design
Part I: Best Practices and Industry Challenges

by Ed Adams
Testing Evangelist
Rational Software

I used to shoot people for a living. That experience gave me valuable insight into developing quality software. Before you call America's Most Wanted to turn me in, let me explain. Years ago I was a mechanical design engineer working on non-lethal weapons systems. One weapon I helped design was a perimeter-weighted net that could be fired as a ballistic projectile at a subject from a shotgun mount. The purpose was to restrain but not harm the target. As part of our field testing, I would take some fellow engineers out into a field and "shoot" gun mounted canisters containing these nets at them, and then we would see if they could escape.

OK, so now you know I was not a hit man, but you still may wonder just what that experience has to do with software development. Well, before we went out to shoot nets at our coworkers, we had already done a tremendous amount of work in the design phase of the project. We performed extensive testing before we constructed a prototype to test on live people.

In the mechanical design world, there is an established process for assessing the quality of your design before you build it:

● You model the application -- in this case the net and canister propulsion system -- typically using a computer aided design (CAD) system.

● Once you have a model, you test it using computer aided engineering (CAE) tools. With these kinds of tools you can put a load on a beam, put some flow through gas pipes, or stress test a net.


● Model test results are analyzed and any needed design changes are made. Then the improved design is fed back into the stress test workflow, and you assess the new design. You repeat this process on the model until it passes the requirements placed on it. Wash, rinse, repeat.

● Only after you test your model and verify that your design is architecturally sound, do you start building the prototype -- or the Beta to follow the analogy into the software world. When we built our weapon, we already knew that it was going to work. At that point we were just doing fine tweaks; there were no costly design changes or architectural changes late in the game.

When I started working in the software industry, I was amazed to find that there was no analogous process, and I thought everyone in the software world was absolutely mad. Developers were building Beta software -- or in some cases release software -- directly. Few organizations were modeling their applications at all, and nobody was able to test designs to ensure they were architecturally sound. So when I moved from the mechanical design world to the software world, I stopped shooting people. Now it was I who was scared to death.

That is why I feel Quality by Design is such an important topic. Quality by Design is a software development solution that uses a very specific process and set of best practices and tools to build in and measure quality at every stage of the software development lifecycle. It is a proactive approach to software development and testing. More specifically, it is proactive from a quality standpoint, not just from a construction or development standpoint. A key factor, as we will see in Part II of this series, is the adoption and use of a design tool -- just like the ones I used to construct my non-lethal weapon systems.

The Business Problem

What was the world's first software project? Well, think about the biblical story of the Tower of Babel. This was not really a software project, but it had a lot in common with today's software projects. A lot of people worked on it, as it was the largest engineering project of its day. There was a goal -- build a tower -- but it was not very well defined. Everyone was using different terminology, different methods, and different tools. Each group was confident (cocky even) that its piece would be the best. Very little integration work was done. Lastly, nobody was serving as project coordinator for the entire project. What happened? The project imploded; the CEO - the big guy upstairs - got angry and canceled the whole thing. And as a result, now almost nobody knows how to speak Babylonian.

According to Standish Group's last report1 on the subject, nearly three out of four IT projects meet a fate similar to the Tower of Babel's. Why are so many of these projects canceled? Development teams are using different tools and different terminologies that make it difficult for them to communicate and focus on a single goal. It is also very difficult to measure quality and capture metrics along the way, or even agree on what metrics to capture.

Compounding these issues, software developers are being squeezed by opposing forces. On one side there is constant pressure for faster time to market, while at the same time the cost of failure has increased dramatically. Because most applications are used directly by customers (not just internally), we cannot afford to release low quality software. And, as if this situation were not bad enough, all these pressures are multiplied because today's applications are much more complex than they were even just a few years ago.

Quality and Cost Control: Everyone's Responsibility

To help speed development and shorten time to market, many organizations use component-based, modular designs. But many are finding that testing costs are the Achilles' heel of modular designs. A veritable explosion results when many scenarios on several modules must all be tested. Which leads me to one of my favorite quotes on this subject:

"Unless designers can break through the system-testing cost barrier, the option values … might as well not exist."2

The reason I like this quote so much is that it identifies system testing as a potentially huge cost barrier. It is almost always a huge cost barrier in terms of both time and dollars. And, more important, it states that it is the designers who must break through that system testing cost barrier, not the testers. Not to sound too clichéd, but quality and cost control are not the responsibility of just the test team; they are the responsibility of everybody on the team. The engineers, architects, and designers that are creating the fundamental structure or architecture are the people who can have the greatest impact on reducing system-testing costs. Without their help, there are no good options for overcoming this system-testing cost barrier; and this is particularly true for modular or component-based designs.

Numerous studies have shown that the cost of fixing a defect rises exponentially as the software development lifecycle progresses3. Forrester Research published an interesting report called "Why Most Web Sites Fail."4 It quantifies the time and money required to fix a site when it goes down, and sorts the results by cause of failure. The longer it takes to find a defect, the more expensive it is to fix. This follows intuitively from the fact that defects found late in the lifecycle will be more difficult to fix, and more likely to cause serious schedule delays, particularly if they reveal architectural problems that may require an extensive redesign to correct. Postponing testing until all components are assembled into a completed system is a risky proposition -- one that is likely to end in failure…or a really low quality release.

Typically, most testing is done during the Transition phase5 of a project, after the Construction phase has been completed6. As cost-conscious developers, our objective is to move testing earlier in the software development lifecycle, to start performing tests and finding defects in the Elaboration phase or during design, when they are easier to fix. By doing this, we can lower system testing costs later in the lifecycle.

Figure 1: Problems Found Late in the Development Lifecycle Are Much More Costly to Fix.

To put it differently, we should strive to make software testers into checkers or validators instead of defect finders. Right now, developers write code and pass it on to someone else to do the majority of the testing. We need to get away from that practice, and move toward a process in which designers and developers verify quality. Then, all testers will have to do is look at the final application and say "OK, that's right, that's right, and that's right" -- and the entire development process will be vastly more efficient.

An analogy can be made to the manufacturing industry, where parts are designed for manufacture (DFM) or assembly (DFA). The pieces are designed to assemble easily and they are checked before being passed on to the assembly line. This is a key concept. For example, there are specifications in Design for Manufacturing that address symmetry of hardware components. If someone is assembling a system, he does not have to worry about orienting the component correctly, because it will work even if it is rotated 90 degrees one way or the other before it is installed. By considering manufacturing issues in the design stage, the manufacturing stage is made easier.

Design for Testability is the same concept applied to quality. By considering testing issues in the design stage, the testing stage is made easier. Design for Testability is a well-established practice in a number of industries, including mechanical design and integrated circuit manufacturing to name just a couple. However, it is still not yet well established in software development.7 I'll revisit Design for Testability a little later on.

Quality by Design: Is It Possible?

After considering the disappointing failure of the Tower of Babel, you might begin to wonder if Quality by Design is even possible on large-scale projects. Let's consider another example: integrated circuits. The setup costs of manufacturing a new integrated circuit are quite high -- they include reserving time at the plant, creating masks for each layer of silicon and metal in the circuit, and so on. So engineers use CAD/CAM and CAE tools to test their designs fully before they are ever sent to the plant to be manufactured. They are able to do this because they capture sufficient information in the design phase to assess and validate the design before they build the first chips. They do not spend huge amounts of time and money to build a complex chip and then put probes on it, only to find that half the circuit was not wired to ground (sound familiar to anyone testing "ready to go" software applications?). Instead, they test the model to validate the design. When the first chips roll off the line, testers do not expect to find any bugs; they expect to test the chip and say "OK, that's right, that's right, and that's right." And that is exactly the approach we need to take in the software development world as well.

In the software industry we are finally beginning to realize that we too have a huge cost associated with the actual creation of the final application. As a result, it is becoming easier for us to see the significant benefits that can be achieved by testing the model as much as possible before we build it. Let's expand on this area by discussing some specific and common problems in today's applications.

Technical Challenges for the Software Industry

Now that we have the business problem in focus, let's review software's recent evolution as a prelude to discussing the technical challenges associated with quality by design.

In the good old days there were two-tier architectures. There was a fat client, typically a Windows application, built in C++, Visual Basic, PowerBuilder, or some other development environment. The fat client talked to a data server, and both the client and the data server contained some business logic. Releases occurred once every year or two. For the most part, applications were internal releases. If there was trouble, you could get on the PA system and tell everyone to get off the system so you could reboot the server. Life was easy.

Since then, things have gotten a great deal more complicated, especially now that we, as an industry, have moved to the Web. Now we have n-tier architectures. In these systems there is a thin client (commonly a browser or handheld device without much business logic), which talks to a middle layer (usually an EJB or application server, COM server, or Web server). The database is still there on the back end, but now there are two or three additional layers, all with their own business logic, and all communicating with each other via different protocols. With systems this complex, there are many more areas where problems can occur.


Figure 2: A Typical N-Tier Architecture

A .NET version would have the same fundamental structure but different names for its components.

(Source: http://www.java.sun.com)

In the J2EE application model, Web browsers, Java applications, and wireless devices interface with JSPs, which in turn talk to EJBs, which talk to the databases. In the .NET application model, the setup is very similar; it is just not Java. Instead of EJBs there are COM+ components, and instead of JSPs there are ASPs. The point is it does not matter if you're working in the .NET world, the EJB world, or any other world. You still have the same basic architecture, and you still have lots of potential problems to worry about. And it only gets worse.

A Multiplicative Effect

We all like component-based designs. They promote code reuse and parallel development, and they save both time and money. But there is a hidden danger in these designs. Even a system assembled from very high-quality components can have an unacceptably high likelihood of failure.

As an example, pick your favorite metric for measuring reliability. Assume that all the components in your system are fairly reliable -- 85, 90, or 95 percent -- as individual components. When they are combined into one system, you calculate the overall reliability by multiplying the individual reliability metrics for each component, not by averaging them. Consider a simple system with seven components, all rated at 95 percent reliable. The overall system reliability is 0.95^7, or less than 70 percent. For many organizations that is well below minimum standards. And keep in mind that this is a very simple example with only seven components. If you have a system with twenty components, you can be in real trouble.
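Here is a minimal sketch of that arithmetic, using the component count and reliability figure from the example above (the class and values are illustrative only):

    public class SystemReliability {
        // Overall reliability of a serial system is the product of the
        // individual component reliabilities, not their average.
        static double overall(double[] componentReliabilities) {
            double product = 1.0;
            for (int i = 0; i < componentReliabilities.length; i++) {
                product *= componentReliabilities[i];
            }
            return product;
        }

        public static void main(String[] args) {
            // Seven components, each 95 percent reliable.
            double[] components = {0.95, 0.95, 0.95, 0.95, 0.95, 0.95, 0.95};
            // Prints roughly 0.698 -- less than 70 percent for the assembled system.
            System.out.println("Overall reliability: " + overall(components));
        }
    }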


Testing Individual Components

As any tester knows, testing EJB or COM applications can be difficult because there is no GUI, and therefore no direct access to the components you need to test. Yet you need to test the logic and the use cases to validate them, which puts you (or more likely a developer) in the position of having to write a lot of test code yourself -- if there is time and talent enough to even do it. Consider the system in Figure 3. If you need to test Component B, then you need Components A, C, and D. If those components are not yet developed, then you will need to write a test driver to emulate Component A, and to build stubs for Components C and D so you can pass them data and test the expected results or exceptions. You have a major challenge on your hands. Creating all that code is expensive, and most of it cannot easily be leveraged on different projects -- so it just gets thrown away. This development effort takes resources away from the real development project and increases the overall cost of quality and development. But you cannot risk leaving Component B untested; you simply have to do it.

Figure 3: How Can You Test Component B Before Building Components A, C, and D?
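To make that throwaway scaffolding concrete, here is a minimal sketch in Java. ComponentB, the stub classes, and their interfaces are hypothetical names invented for this illustration; they are not part of any real framework.

    // Hypothetical interfaces for the components in Figure 3.
    interface ComponentC { int lookupRate(String region); }
    interface ComponentD { void recordResult(int value); }

    // The component under test depends on Components C and D.
    class ComponentB {
        private final ComponentC c;
        private final ComponentD d;

        ComponentB(ComponentC c, ComponentD d) { this.c = c; this.d = d; }

        int process(String region, int quantity) {
            int rate = c.lookupRate(region);   // work Component C would normally do
            int total = rate * quantity;
            d.recordResult(total);             // notification Component D would normally receive
            return total;
        }
    }

    // Throwaway scaffolding: stubs for C and D, plus a driver standing in for Component A.
    class StubComponentC implements ComponentC {
        public int lookupRate(String region) { return 7; }          // canned answer
    }

    class StubComponentD implements ComponentD {
        int lastValue;
        public void recordResult(int value) { lastValue = value; }  // just remember the call
    }

    public class ComponentBDriver {
        public static void main(String[] args) {
            StubComponentD stubD = new StubComponentD();
            ComponentB b = new ComponentB(new StubComponentC(), stubD);

            int result = b.process("EMEA", 3);

            // Check both the expected result and the expected side effect on Component D.
            if (result != 21 || stubD.lastValue != 21) {
                throw new AssertionError("ComponentB did not behave as expected: " + result);
            }
            System.out.println("ComponentB behaved as expected.");
        }
    }

None of this scaffolding ships with the product, yet all of it has to be written, debugged, and maintained just to exercise Component B in isolation.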

So now that we understand both the business problem and the technical challenges faced by the software industry, let's talk about solutions. We'll look first at Design for Testability, an approach used in other engineering environments, and how it can be applied in software development. Remember the discussion of Design for Manufacture above? Well, this is a similar approach. The concept is to construct test access points into the design -- making it easier to validate. In next month's Rational Edge we'll see how the Unified Modeling Language and test reuse can support Quality by Design.

Design for Testability

Design for Testability is one facet of a good Quality by Design approach that is employed regularly by integrated circuit manufacturers, mechanical designers, and engineers in many other industries. By considering and addressing testing issues in the design phase, Design for Testability lowers overall development costs because it greatly simplifies the testing phase.


There are five key aspects of Design for Testability I'd like to consider in this article:

■ Test case capture

■ Design validation

■ Test access

■ Trusted components

■ Built-in self-test

Let's consider how each of these can be applied in the software world to establish our own quality by design standards.

Test Case Capture

This is an easy one. If you are responsible for a design or application, then you need to provide some way to capture the use cases8 that you are trying to test.

"A use case is a snapshot of one aspect of your system. The sum of all use cases is the external picture of your system, which goes a long way toward explaining what the system will do. …A good collection of use cases is central to understanding what your users want."9

The design model must be instrumented so that it can be measured. For example, when drawing up the blueprint of a house, a use case requirement might be that the garage must be able to hold two average-size cars. As a tester, you have to verify that the design meets this requirement. When you translate this into a test case, you need to ask: How can I do that on my model? How do I instrument my model (the blueprint) to make sure the test case can be captured, so that someone can validate the requirement? On a blueprint it is easy, because there are physical measurements and aspect ratios. You just ensure the measurements are in the blueprint -- specify the width and height of the garage door in feet -- and that allows a tester to check whether this use case requirement is met. Using the design tool, you can easily translate that specific use case requirement into a test case. You can even use the design tool to generate the test and validate the use case. Great concept.

Test case capture means that test cases can be realized by elements in the model. In this example, the model element we are trying to test is the size of the garage door opening. We can easily test that because the model allows us to determine the width and height of the opening and we have accurately decorated the model to facilitate test case capture and assessment.
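A small sketch of the same idea in code: the GarageDesign class and all of the dimensions are invented purely for illustration, but they show how a measurable model element turns a use case requirement into an executable check.

    // A model element decorated with the measurements a tester needs.
    class GarageDesign {
        final double doorWidthFeet;
        final double doorHeightFeet;

        GarageDesign(double doorWidthFeet, double doorHeightFeet) {
            this.doorWidthFeet = doorWidthFeet;
            this.doorHeightFeet = doorHeightFeet;
        }
    }

    public class GarageUseCaseTest {
        public static void main(String[] args) {
            // Use case requirement: the garage must hold two average-size cars,
            // translated into measurable terms (illustrative figures only).
            double requiredWidth = 18.0;   // feet, two cars side by side
            double requiredHeight = 7.0;   // feet, standard door clearance

            GarageDesign blueprint = new GarageDesign(20.0, 8.0);

            boolean passes = blueprint.doorWidthFeet >= requiredWidth
                    && blueprint.doorHeightFeet >= requiredHeight;

            System.out.println(passes
                    ? "Design meets the two-car use case requirement."
                    : "Design fails the two-car use case requirement.");
        }
    }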

Design Validation

Design validation is my favorite Design for Testability topic. Ultimately, it means measuring quality in the model. You can validate your design by testing the model, and testability is a key aspect of quality by design. Again referring to the mechanical design world, consider the case of a turbine blade assembly (see Figure 4). This component has been modeled with a design tool. Now we are ready to test the design. Using the standards for turbine blade assemblies (to ensure we are in compliance with given standards…and by the way, that's another article series to watch for in upcoming issues of The Rational Edge), we are able to emulate a load being placed on this model. Design tools like this CAE system (and the Unified Modeling Language in the software world; more about that in Part II of this series) can be used to generate tests. That, in turn, gives you the power to validate your design and make changes to it when early tests fail.

Figure 4: Turbine Blade Assembly

Design tools can be used to generate tests that validate the design before it is built.

When you can test your model, get results, and find out whether it will pass your requirements, that is design validation. It ensures that you are building in quality from the beginning: instead of validating an as-built system, you are validating the as-designed system.

Test Access

Test Access means that you must provide test points or "hooks" in your model that will allow testers to do their jobs. These are interfaces that can be understood by people other than the designer. These interfaces must be designed so that they can accept data passed to them, but also so that they can vary boundary conditions, preconditions, and postconditions. The crucial point is that test access hooks allow others to determine whether a component works or not. Without these hooks, others have to figure out ways to get to the components, which can be costly, if not impossible. Taking the time to include the hooks up front saves a great deal of time later -- for developers and testers alike.
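One way to realize such hooks in code is to expose a small, clearly labeled test-access interface alongside the production interface. The names below (OrderProcessor, OrderProcessorTestAccess, and the pricing rules) are hypothetical, chosen only to illustrate the idea.

    // Production interface used by the rest of the system.
    interface OrderProcessor {
        double priceOrder(int quantity);
    }

    // Test-access interface: lets a tester vary a precondition and inspect a
    // postcondition without going through a GUI or the full system.
    interface OrderProcessorTestAccess {
        void setDiscountThreshold(int quantity);   // vary a boundary condition
        double lastComputedPrice();                // inspect internal state
    }

    class SimpleOrderProcessor implements OrderProcessor, OrderProcessorTestAccess {
        private int discountThreshold = 100;
        private double lastPrice;

        public double priceOrder(int quantity) {
            double unitPrice = (quantity >= discountThreshold) ? 9.0 : 10.0;
            lastPrice = unitPrice * quantity;
            return lastPrice;
        }

        public void setDiscountThreshold(int quantity) { discountThreshold = quantity; }

        public double lastComputedPrice() { return lastPrice; }
    }

    public class TestAccessDemo {
        public static void main(String[] args) {
            SimpleOrderProcessor p = new SimpleOrderProcessor();
            // Use the hook to drive the component right up to a boundary condition.
            p.setDiscountThreshold(5);
            double price = p.priceOrder(5);
            System.out.println("Boundary price: " + price
                    + ", last computed: " + p.lastComputedPrice());
        }
    }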

Trusted Components

The vast majority of today's development work is component based; typically, a project either includes third-party components or is built by a distributed team, with different groups working on different components. Either way, the components eventually have to be integrated into the extended system. The idea behind trusted components is that each component must meet a set of auditable quality standards to ensure it will work -- before it is integrated into the larger system. Some EJB vendors, for example, certify that their components will work on specific application servers, under specific conditions; you can check a specific set of standards to audit the component. The trusted components requirement means that even (perhaps especially) if you do not have control over a component's source code, you still have to ensure that it works. If you implement a process to do this, and adhere to it strictly, then you can rest assured that every component you use for your final system integration is up to standard. That is what trusted components is all about.

Built-In Self-Test

Years ago I bought my first laser printer. Whenever I turned it on, it performed a self-test, proactively exercising certain functions to make sure that it was working the way it was designed to. I did not prompt it to do the testing; it ran through these functions itself, every time. Why not have the same kind of self-test in software applications? I have seen some applications (and even an operating system) that do some rudimentary self-testing on startup. In each case, the self-test occasionally uncovered issues that helped me troubleshoot what would otherwise have become a much bigger problem. Again, the key concept here is early detection. Software bugs are like a disease: catch them early and they are almost innocuous. Prevent them with self-tests, and your application will live a longer and healthier life.

Components must be responsible for ensuring and reporting their own quality, and they must provide public interfaces to allow inspection. For example, if I have a component in my Web application that validates credit cards, then every twelve hours that component should go through a self-test to make sure that it is still working correctly. If it suddenly stops validating credit cards for some reason, then I may be selling a whole bunch of stuff that I can never collect money for, and I certainly would want to know about that as soon as possible. With a built-in self-test, the component is responsible for its own quality, and it would alert me if there were a problem.
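As a rough sketch of the idea, assuming a hypothetical CreditCardValidator component (the validation rule and test inputs below are placeholders, not real validation logic), a scheduled built-in self-test might look something like this:

    import java.util.Timer;
    import java.util.TimerTask;

    // Hypothetical component that validates credit card numbers.
    class CreditCardValidator {
        boolean isValid(String cardNumber) {
            // Real validation (Luhn check, issuer rules, and so on) would live here.
            return cardNumber != null && cardNumber.matches("\\d{16}");
        }

        // Built-in self-test: the component exercises itself with known inputs
        // and reports whether it still behaves the way it was designed to.
        boolean selfTest() {
            return isValid("4111111111111111") && !isValid("not-a-card-number");
        }
    }

    public class SelfTestScheduler {
        private static final long TWELVE_HOURS = 12L * 60 * 60 * 1000;

        public static void main(String[] args) {
            final CreditCardValidator validator = new CreditCardValidator();

            // Run the self-test immediately and then every twelve hours.
            new Timer().scheduleAtFixedRate(new TimerTask() {
                public void run() {
                    if (!validator.selfTest()) {
                        System.err.println("ALERT: credit card validator failed its self-test");
                    }
                }
            }, 0, TWELVE_HOURS);
        }
    }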

Stay Tuned

Design for Testability is only part of the story; as software developers we have other tools and other methodologies that we can use to ensure the quality of our applications during the Inception phase. In next month's issue, I'll talk about how the Unified Modeling Language and test reuse can be applied to this problem, and I'll offer some predictions for the future of Quality by Design.

References

Carliss Y. Baldwin and Kim B. Clark, Design Rules: The Power of Modularity. MIT Press, 2000.

Alfred Crouch, Design-for-Test for Digital IC's and Embedded Core Systems. Prentice-Hall, 1999.

Martin Fowler, UML Distilled. Addison Wesley, 2000.

Forrester Research, Inc., Cambridge, MA, "Why Most Web Sites Fail," September 1998.

Philippe Kruchten, The Rational Unified Process: An Introduction, 2e. Addison Wesley, 2000.

Dean Leffingwell and Don Widrig, Managing Software Requirements: A Unified Approach. Addison Wesley, 2000.

The Standish Group International, Inc., CHAOS Chronicles. 2001.

Footnotes

1 The Standish Group International, Inc., CHAOS Chronicles. 2001.

2 From Carliss Y. Baldwin and Kim B. Clark, Design Rules: The Power of Modularity. MIT Press, 2000.

3 Including Dean Leffingwell and Don Widrig, Managing Software Requirements: A Unified Approach. Addison Wesley, 2000.

4 Forrester Research, Inc., Cambridge, MA, "Why Most Web Sites Fail," September 1998.

5 The phases I refer to in this article are defined in The Rational Unified Process: Inception, Elaboration, Construction, and Transition.

6 Philippe Kruchten, The Rational Unified Process: An Introduction, 2e. Addison Wesley, 2000.

7 For more information on Design for Testability, I recommend the first two chapters of Alfred Crouch, Design-for-Test for Digital IC's and Embedded Core Systems. Prentice-Hall, 1999. The book focuses on embedded systems, but these chapters give a nice overview of DFT concepts.

8 A use case is a description of an action that a user would take on an application. It is a specific way of using the system from a user-experience point of view. Use cases can include both graphical representations and textual details.

9 From Martin Fowler, UML Distilled. Addison Wesley, 2000.




Investing Wisely in People, Process, and Tools

by John Smith, Principal Consultant, Strategic Services Organization, Rational Software, Sydney, Australia

We in the Rational Strategic Services Organization have great faith in the Rational Unified Process® (RUP®). That's because we have seen time and again how a consistent process and the iterative development approach embodied in the RUP really help our customers achieve the twin goals of producing higher quality software and shortening time to market. But we also recognize that there is more than one way to approach the use of the RUP. It's a rich framework that requires tailoring for particular projects, and project teams are sometimes stretched too thin to expend a lot of resources on instituting process.

Our job is to help each client develop software faster and more effectively in the most cost-effective way. So before we make recommendations about how a customer should invest their time and resources, we do a careful assessment of their development environment: process maturity, staffing, tool set, and other factors. Then, we calculate rough returns for developing these potential investment areas. In this article, I will share with you some of the methods we use. Our analysis model is COCOMO II, the parameters of which I will explain briefly below. Following that, I'll take you through some of the scenarios we use to determine whether the client should invest more heavily in tools and people than in process, or vice versa. This may help you begin weighing the benefits of potential investment opportunities for your next project.


Using COCOMO II for ROI Calculations

COCOMO II is the next generation of the Constructive Cost Model (COCOMO) originally developed by Dr. Barry Boehm and described in his classic work Software Engineering Economics, published in 1981.1 COCOMO II has been developed at USC (University of Southern California), and Rational, as a USC affiliate, has openly supported this work.2 COCOMO II provides a way to estimate the effort and schedule for software development, so it is a logical extension to use it as the basis for ROI calculations, as we do here. The effort estimated by the COCOMO II model is related to the size (S) of the development (in function points or lines of code), modified by two sets of factors: the effort multipliers (EMi) and the scale factors (SFj). The effort (PM) predicted by COCOMO is derived from the equation:
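In its standard published form (with A and B as calibration constants supplied with the model), the COCOMO II effort relationship is:

    \[ PM = A \times S^{E} \times \prod_{i} EM_{i}, \qquad E = B + 0.01 \sum_{j} SF_{j} \]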

As you can see from the equation, the effort multipliers (as their name implies) are factors that modify the effort by simply multiplying it. Therefore, a 10 percent change in an EM produces a 10 percent change in effort, all other things being equal, independently of S; that is, the percentage change is independent of the size of the development.

Scale factors, on the other hand, are used to form an exponent (E), which is applied to the size (S). In this case, the percentage effect of a change in a scale factor is dependent on the size of the development.

As an example: COCOMO II has a process maturity scale factor with a default range of 0 to 7.8. If we change the value of this scale factor by 10 percent of its range (0.78), we obtain the results shown in Table 1, for different sized developments.

Table 1: Percentage Change in Effort for Different Size Developments with the Same Change in Scale Factor

Development | Scale Factor Change | Size (sloc) | Size (Function Points) | Change in Effort
Small Development | .78 (10%) | 10,000 | ~200 | 1.8%
Large Development | .78 (10%) | 2,000,000 | ~40,000 | 6.1%
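To show how those percentages arise from the model, here is a minimal sketch. It assumes the published COCOMO II.2000 calibration constants (A = 2.94, B = 0.91), size measured in thousands of source lines, and an illustrative baseline sum of scale factors; the exact figures in Table 1 depend on the calibration used, so treat the output as approximate.

    public class CocomoScaleFactorDemo {
        static final double A = 2.94;   // COCOMO II.2000 calibration constant (assumed)
        static final double B = 0.91;   // COCOMO II.2000 calibration constant (assumed)

        // Effort in person-months for size in KSLOC, given the sum of the scale
        // factors and the product of the effort multipliers.
        static double effort(double ksloc, double sumScaleFactors, double productEffortMultipliers) {
            double exponent = B + 0.01 * sumScaleFactors;
            return A * Math.pow(ksloc, exponent) * productEffortMultipliers;
        }

        public static void main(String[] args) {
            double sfBaseline = 20.0;               // illustrative sum of scale factors
            double sfImproved = sfBaseline - 0.78;  // 10 percent of the process maturity range

            double[] sizes = {10, 2000};            // 10 KSLOC and 2,000 KSLOC
            for (int i = 0; i < sizes.length; i++) {
                double before = effort(sizes[i], sfBaseline, 1.0);
                double after = effort(sizes[i], sfImproved, 1.0);
                double changePct = 100.0 * (before - after) / before;
                System.out.println(sizes[i] + " KSLOC: effort change about "
                        + Math.round(changePct * 10) / 10.0 + "%");
            }
        }
    }

Because the scale factor sits in the exponent, the same 0.78 change produces a larger percentage saving on the larger development, which is exactly the behavior Table 1 illustrates.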

Sample Environment

What does this mean in practice? Let's look at a fairly typical, hypothetical software development environment and assume we have determined:

● Process maturity (a scale factor in COCOMO II). The process maturity level in our sample organization is low (as defined by COCOMO II, this equates to the upper half of CMM level 1).

● Architectural/risk resolution capability (another scale factor in COCOMO II). Again, let's assume that this is low.3

● Analyst capability and programmer capability (an effort multiplier). We'll assume this is nominal (in the 55th percentile).

● Use of software tools (another effort multiplier). Again, we'll assume this is rated low, which means the development team has available and has used simple front-end and back-end CASE tools with little integration.

We'll also assume that all other drivers and factors are nominal. Now, let's look at five possible scenarios for improving this environment.

Scenario 1:

● Increase process maturity to high (CMM level 3). The Rational Unified Process (RUP) can support this.

● Increase architectural/risk resolution capability to high (the RUP can help here, but there will be other organizational factors to resolve).

● Leave the analyst and programmer capability at nominal.

● Leave the software tool rating at low.

Scenario 2:

● Leave process maturity at low.

● Leave architectural/risk resolution capability at low.

● Raise analyst and programmer capability to high (75th percentile). Let's assume a selection program and training could do this.

● Raise the software tool rating to high (strong, mature lifecycle tools, moderately integrated) by having the project invest in good automated solutions.

Scenario 3:

● Same as Scenario 2, but without attempting to change the analyst and programmer capability from nominal.

Scenario 4 -- a combination:

● Raise the process maturity as in Scenario 1.

● Raise the tool rating and the analyst and programmer capability, as in Scenario 2.

Scenario 5 -- a combination:


● Raise the process maturity as in Scenario 1.

● Raise the tool rating.

● Leave the analyst and programmer capability at nominal.

How Do These Scenarios Work for Different Size Teams?

The relative effect of these strategies (as determined by COCOMO II) on the effort to complete the project will depend on the project size. Let's consider three example projects/groups:

● Group 1: very small; 5 staff developing 200 function points

● Group 2: moderate size; 45 staff developing 4,000 function points

● Group 3: very large; 250 staff developing 40,000 function points

The tables below (2, 3, 4) present, for each of these group sizes, the baseline case and the five scenarios described above. They also show the predicted effort in staff-months, the schedule in months, and the average staffing figure. Benefit in staff days (assuming eighteen staff days/staff month) is shown in parentheses.

Table 2: Effort Required for Different Scenarios (Group 1) Very Small Group (5 Staff/200 Function Points)

Scenario | Effort [staff months (and benefit in staff days)] | Schedule [months] | Staff
Baseline | 49.2 | 10.6 | 4.7
Scenario 1 | 49.2 (115) | 9.7 | 4.4
Scenario 2 | 30.4 (340) | 9.0 | 3.4
Scenario 3 | 40.6 (155) | 9.9 | 4.1
Scenario 4 | 26.5 (410) | 8.3 | 3.2
Scenario 5 | 35.4 (250) | 9.1 | 3.9

Table 3: Effort Required for Different Scenarios (Group 2) Moderate-Sized Group (45 Staff/4,000 Function Points)

Scenario | Effort [staff months (and benefit in staff days)] | Schedule [months] | Staff
Baseline | 1450.3 | 31.6 | 45.9
Scenario 1 | 1058.7 (7050) | 26.3 | 40.2
Scenario 2 | 895.7 (10,000) | 27 | 33.1
Scenario 3 | 1197.5 (4550) | 29.7 | 40.3
Scenario 4 | 653.9 (14,335) | 22.6 | 28.9
Scenario 5 | 874.2 (10,370) | 24.8 | 35.3

Table 4: Effort Required for Different Scenarios (Group 3) Very Large Group (250 Staff/40,000 Function Points)

Scenario | Effort [staff months (and benefit in staff days)] | Schedule [months] | Staff
Baseline | 19,537 | 73.3 | 266.4
Scenario 1 | 12,438 (127,782) | 56.7 | 219.2
Scenario 2 | 12,066 (134,478) | 62.7 | 192.3
Scenario 3 | 16,131 (61,308) | 68.9 | 234.0
Scenario 4 | 7682.4 (213,383) | 48.8 | 157.4
Scenario 5 | 10,270 (166,806) | 53.5 | 192.2

Comparing the outcome of the non-combination strategies (that is, excluding Scenarios 4 and 5 so that we can compare the effect of the individual cost drivers), we see that:

● For the small team, Scenario 2 is a clear winner; the best strategy is to raise the analyst and programmer capability and leverage them with strong tools, and do little immediate investing in improving process maturity.

● For the moderate-sized team, Scenario 2 would also be a clear winner, except for the fact that it is typically very difficult to raise the average analyst and programmer capability to high in a team of that size within a relatively short time frame. So that means Scenario 1 is optimum, with a narrow advantage over Scenario 3.

● For the very large team, the same restrictions apply as for moderate-sized teams: Obviously, it would be hard to raise the average analyst and programmer capability quickly. So here, Scenario 1 is the clear winner, this time by a considerable margin over Scenario 3.

The crossover point at which Scenario 1 starts to be more effective than Scenario 2 (or Scenario 3 for larger teams) occurs at about 30 team members. Actually, the crossover is quite fuzzy, so it would be more accurate to say that for teams with fewer than 20 people, Scenario 2 is the best choice, and for teams of more than 40 people, Scenario 1 is best.


What is the net effect of these assessments? They suggest that:

● For isolated projects/groups with fewer than 20 people, it's best to focus on equipping the team with a good automated toolset as well as on staff selection and training. Although raising process maturity should still be a long-range goal, an "essentials" version of the RUP (like those recommended by Gary Evans or Leslee Probasco in previous issues of The Rational Edge) is most valuable, because a team this small cannot afford to invest a lot of time in instituting process.

● For projects/groups of more than 40 people, full deployment of the RUP with the goal of raising process maturity makes great economic sense. Of course, automated tools and staff training are still key, but in a group this size, the average analyst or programmer capability is unlikely to increase significantly within a short timeframe.

Implementation Costs

What are the costs for implementing these scenarios? For Scenario 1 there is likely to be a significant fixed cost, plus a process training cost for each individual; for Scenario 2 a software license cost and a significant training cost; and for Scenario 3 a software license cost and a small training cost. Scenarios 4 and 5 are combinations and variations of Scenarios 1, 2 and 3. To estimate these costs, let's make a few simplifying assumptions:

1. The burdened cost of a software engineer is USD$800/day.

2. The cost of a typical software license for a complete automated tool set is approximately USD$16,000 (equivalent to 20 staff-days, using assumption 1) for each team member.

3. "Complete" training requires approximately 15 staff-days and costs USD$8,000 (equivalent to 10 staff-days) for each student, giving an effective total of 25 staff-days each.

4. "Compressed" training requires 8 staff-days and costs USD$4,000 (equivalent to 5 staff-days) for each student, giving an effective total of 13 staff-days each.

5. The fixed cost to go from CMM upper level 1 to CMM level 3 is 100 staff-months (actually, this is an inspired guess, based on dedicating a fixed number of people for eighteen months to two years), or 1,800 staff-days.

6. Training for the RUP requires 5 staff-days and costs USD$2,500 (equivalent to 3 staff days) for each student, giving an effective total of 8 staff-days each.

Normalizing all costs to staff-days, the total costs are shown below ("n" is the number of staff).

● Scenario 1: 1800 + n * 8 [from assumptions 5 and 6]


● Scenario 2: n * (20 + 25) = n * 45 [from assumptions 2 and 3]

● Scenario 3: n * (20 + 13) = n * 33 [from assumptions 2 and 4]

● Scenario 4: (Scenario 1 + Scenario 2) = 1800 + n * 53

● Scenario 5: (Scenario 1 + Scenario 3) = 1800 + n * 41

Tables 5, 6, and 7 show for each group size and scenario:

● Total costs, in staff-day equivalents, simplified by using the same number of staff for each scenario (that is, 5, 45, and 250 staff).

● Net benefits: calculated using staff-day figures given in Tables 2, 3, and 4.

● Percentage ROI: calculated as (Net Benefit/Cost)/Duration (in years).

Note: in Tables 6 and 7 below, rows marked with an asterisk (*) indicate strategies that are almost certainly not feasible because they assume significant improvement in the average capability of a large group.

Table 5: Cost, Net Benefits, and ROI for Different Scenarios (Group 1) Very Small Group (5 Staff/9 Months)

Scenario | Cost (staff days) | Net Benefit (staff days) | ROI (% per annum)
Scenario 1 | 1840 | 115 - 1840 = -1725 | -125%
Scenario 2 | 225 | 340 - 225 = 115 | 68%
Scenario 3 | 165 | 155 - 165 = -10 | -8%
Scenario 4 | 2065 | 410 - 2065 = -1655 | -106%
Scenario 5 | 2005 | 250 - 2005 = -1755 | -116%

Table 6: Cost, Net Benefits, and ROI for Different Scenarios (Group 2) Moderate-Sized Group (45 Staff/3 Years)

Scenario | Cost (staff days) | Net Benefit (staff days) | ROI (% per annum)
Scenario 1 | 2160 | 7050 - 2160 = 4890 | 75%
Scenario 2 * | 2025 | 10,000 - 2025 = 7975 |
Scenario 3 | 1485 | 4550 - 1485 = 3065 | 69%
Scenario 4 * | 4185 | 14,335 - 4185 = 10,150 |
Scenario 5 | 3645 | 10,370 - 3645 = 6725 | 61%

Table 7: Cost, Net Benefits, and ROI for Different Scenarios (Group 3) Very Large Group (250 Staff/5 Years)

Scenario | Cost (staff days) | Net Benefit (staff days) | ROI (% per annum)
Scenario 1 | 3800 | 127,782 - 3800 = 123,982 | 652%
Scenario 2 * | 11250 | 134,478 - 11250 = 123,228 |
Scenario 3 | 8250 | 61,308 - 8250 = 53,058 | 129%
Scenario 4 * | 15050 | 213,383 - 15050 = 198,333 |
Scenario 5 | 12050 | 166,806 - 12050 = 154,756 | 257%
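As a cross-check on how these figures are assembled, here is a minimal sketch (a rough illustration, not part of the original analysis) that reproduces the Scenario 1 and Scenario 2 rows of Table 5 from the cost formulas above, the benefit figures in Table 2, and the nine-month (0.75-year) duration:

    public class RoiDemo {
        // Cost in staff-days for each scenario, from the normalized formulas above.
        static double scenarioCost(int scenario, int staff) {
            switch (scenario) {
                case 1: return 1800 + staff * 8;
                case 2: return staff * 45;
                case 3: return staff * 33;
                case 4: return 1800 + staff * 53;
                case 5: return 1800 + staff * 41;
                default: throw new IllegalArgumentException("Unknown scenario " + scenario);
            }
        }

        // ROI per annum = (net benefit / cost) / duration in years.
        static double roiPerAnnum(double benefitStaffDays, double costStaffDays, double durationYears) {
            return (benefitStaffDays - costStaffDays) / costStaffDays / durationYears;
        }

        public static void main(String[] args) {
            int staff = 5;
            double durationYears = 0.75;      // nine-month project

            int[] scenarios = {1, 2};
            double[] benefits = {115, 340};   // staff-day benefits from Table 2

            for (int i = 0; i < scenarios.length; i++) {
                double cost = scenarioCost(scenarios[i], staff);
                double net = benefits[i] - cost;
                double roi = 100 * roiPerAnnum(benefits[i], cost, durationYears);
                System.out.println("Scenario " + scenarios[i] + ": cost " + cost
                        + ", net benefit " + net + ", ROI " + Math.round(roi) + "% per annum");
            }
        }
    }

Running it gives a cost of 1840 staff-days and an ROI of about -125 percent per annum for Scenario 1, and 225 staff-days and about 68 percent for Scenario 2, matching Table 5.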

What to Emphasize: People, Process, or Tools?

There's no question that, whatever the group size, good, well-trained people and teams, and a quality toolset are key ingredients to project success. Investing in competent people always pays off. Note that based on the tables above, however, a very small team that does not change its personnel capability cannot expect an immediate ROI from tools if the project is short (less than nine months in duration); the team will have to use the tools in more than one project (that is, over a longer time) to see financial rewards. The return on tools changes fairly dramatically with an increase in team size (and implied increase in project size and duration) though: providing tools to a team of around ten people will have a good return, even on the first project.

When does it start to make economic sense to focus on improvements to process maturity? The ROI becomes quite positive for this -- above 10 percent per annum -- with teams of around 20 to 30 people. For smaller teams (remembering that these are not definitive boundaries), process improvement efforts should focus on adopting best practices and using process as a guide to tool use. The small project roadmap in the RUP, the mappings to Extreme Programming (XP) found in papers in the RUP Resource Center, and the articles on using RUP for shorter projects published in previous issues of The Rational Edge should be useful. As a team grows, the returns on adopting and institutionalizing process grow along with it. It is extremely difficult to raise the average individual capability across a larger group of people, and simply providing a large group with industrial-strength tools does not optimize their use; for larger teams, process is critical for using both people and tools more effectively.

Figure 1 illustrates these conclusions.


Figure 1: People, Tools, Process: Their Importance as Related to Project/Group Size

Ultimately, the scenario you choose to implement for your team will depend on your judgment of how the team operates within your own organization. A team may be small but operate in a rich and supportive organizational context that will bear the cost of process improvement. Or, you may be in a large organization that has many small groups operating as virtually independent entities, receiving little support from the larger organization. Or you may be in a small startup company with no other organizational context at all.

The caveat is that any organization that continues, as it grows, to function as a collection of isolated groups or projects will ultimately become uncompetitive. Even if each project in the collection is individually optimized, if those projects refuse to adopt process because the payoff to them individually would be small, then you will not see a global optimization of capability. Organizations making a transition from small to large have to find (or even assist in the creation of) organization-wide structures and mechanisms for adopting and deploying process and process improvement.

Implementing Process Projects

At Rational, we often say that the best way to introduce process is through small, low-risk, pilot projects. The immediate returns for such projects, however, may not be very high. It is important to recognize that such projects are merely "seeds"; the organization will not see much fruit until the new process is widely deployed. Of course, in order to convince your managers that widespread deployment is a good idea, you need to ensure the greatest success for the pilot project by leveraging the best tools you already have in place as well as your top people. If all your A-players are already busy on business-critical projects, then you may want to choose one of these as the technology change vehicle and do all you can to avoid disruption costs. And if costs are inevitable, then make sure they will be far outweighed by the benefits.

There really are no simple answers, but if you do a sound risk analysis and make sure that your pilot project has the kind of staff that will command attention and respect within your organization before you launch your project, then you can greatly increase your chances for success.


Footnotes

1 Boehm, Barry, Software Engineering Economics. Prentice Hall, 1981.

2 For more information on this and other developments at USC, visit the COCOMO site at http://sunset.usc.edu/research/cocomosuite/index.html.

3 A number of characteristics are assessed to make this judgment: risk management planning, percentage of development schedule devoted to establishing architecture, percentage of required top software architects available to project, level of uncertainty in key architecture drivers, and other factors.

References

Boehm, Barry, Software Engineering Economics. Prentice Hall, 1981.




The Network Hobbyist

by Grady Booch, Chief Scientist, Rational Software Corporation

In the '70s and '80s, long before personal computers became commodity items, Byte magazine billed itself as "The Small Systems Journal." Catering mainly to the computer hobbyist, Byte offered how-to articles to people who wanted to build their own computers and peripherals from raw parts, all the way down at the chip level. Paging through an old issue of Byte is a real journey into the past: you'll find articles such as "Build a Versatile Keyboard Interface for the S100" and "Should the DO Loop Become an Assembly-language Construct?" Even the ads are amazing: Cromemco, Ohio Scientific, MicroPro, Zobex, Texas Instruments, a tiny company called Microsoft, and hundreds more pitched their wares. Most of the companies who advertised in Byte no longer exist. Virtually all of the technology each of them promoted is entirely obsolete. Byte itself is no longer published.

The generation who pursued their hobby during this period is the same one that brought personal computing to the masses. With the advent of the Apple II, the Radio Shack TRS80, and the IBM PC, individuals could finally buy an off-the-shelf solution. Dan Bricklin's VisiCalc was the killer app of the time, bringing Apple into the mainstream of business and bringing automation to small businesses. Alan Cooper's VisualBasic was equally revolutionary -- his app made it possible for a legion of developers to build new applications for DOS without having to learn the nasty details of 8080 assembly language and the even more nasty details of DOS. Alan Kay's Dynabook was a mere dream, although its vision was sufficient to launch the market for laptops and PDAs.

The era of the computer hobbyist is pretty much over, although I expect we'll see a renaissance of the hobby in a couple of decades when this current generation retires. As in the model train industry, there are still those who enjoy crafting interesting computing artifacts from raw materials.

Although the computer hobbyist is waning, the network hobbyist is just ramping up. Why do I say this? Well, let me ask you a simple question: How many of you are systems administrators for your own home? You are probably the proud owner of a DSL line, so I expect you had to place your modem and wire your home to permit multiple connections (and let's not forget about the filters for your voice lines). I imagine also that your daughter needed to print her homework on your home office printer; your wife probably needs to sync her Palm to her computer as well. Since you have an always-on connection, you also have to worry about security -- should you install a firewall? Finally, wouldn't it be neat to set up a home Web server, just for fun?

Congratulations. Whether you know it or not, you've just become a sys admin.

I'm the de facto sys admin for my home, which currently hosts two PCs, three Linux boxes, five Macintoshes, two PDAs, and three printers. Getting my system to a stable state took several weekends and even now, I find myself spending an hour or two every week just to keep my system healthy. A few weeks ago -- all on the same day -- my PC crashed, my router overheated, my ISP changed my connection settings without notifying me, and, to top it off, a backhoe cut a major fiber line in the area. It took me hours to debug the problem, just so that I could get back to my e-mail. The good news is that, with all that experience, I suppose that I can get a real job some day. The bad news is that this set of disasters illustrates that simple network computing is not quite ready for the masses.

I should point out that my family is pretty much unsympathetic to these details; they are my most demanding users, for they expect simplicity. To heck with all this cool technology, they say, they just want to get things done.

And so it is with our industry. Today, most of us are deep in the details of J2EE and .NET, XML and UML, C# and Java. These are all hot topics today, but -- as for the computer hobbyist -- these technologies will probably look quite dusty in a couple of decades. Our industry made personal computing a commodity and the same thing will likely happen with network computing. In the process of commoditization, new opportunities for and models of computation will open up, which is exactly what our end users demand.

As professionals in this industry, we are the ones who make change happen. As such, we must push technology to its limits, but at the same time, we must not forget why all this technology is there in the first place -- to serve our users, to offer real business value, to offer the illusion of simplicity.

NOTE: This article was originally published as part of the “Out of the Box” series on the Rational Developer Network, the learning and support channel for the Rational customer community, currently available only to Rational Suite customers. If you are a Rational Suite customer and have not already registered for your free membership, please go to www.rational.net.



Interview

A Conversation with Eric J. Naiburg and Robert A. Maksimchuk, Authors of UML for Database Design*

by Scott Cronenweth, Reporter for The Rational Edge, Rational Software

Many software systems, including nearly all e-business applications, involve database design. Yet bringing program code and database schemas together has traditionally been a challenge. Because they "speak different languages" with respect to data structures, and typically report through different departments, data modelers and software developers may not communicate effectively. This invites difficulties both in moving data among programs and databases, and in translating customer requirements into physical systems. The Unified Modeling Language (UML) bridges this long-standing gap.

One of the first things you'll notice if you pick up UML for Database Design is that it's clearly meant to help you negotiate the real world of system design, step-by-step. If you've been left stranded by other books in the past, this one may restore your faith. Thoughtfully organized and impressively comprehensive, this is the book you'll want to have open on your desk while you work.

Interestingly, it may be the authors' complementary viewpoints that enable this new effort to succeed. Eric Naiburg came to Rational Software from the world of data modeling, and has been a key player in the development of our UML database modeling profile. Bob Maksimchuk comes from the Object-Oriented side of the fence, where he specialized in modeling huge systems that happened to have huge databases. He's now Data Modeling Evangelist at Rational. Together, these authors were able to craft a technical reference and how-to guide of great utility for the data modeling and database design communities.

Q: What's new about this book? Why is it important?

A: One important thing our book does is help you establish common ground between application developers and business analysts -- both of whom are concerned with process -- and database developers -- who are concerned with data. Consider that data modelers work to meet critically important requirements that the process people define, but for any number of reasons the two camps rarely communicate effectively. By bringing UML into the database world, we've given everyone involved a fundamentally better way to connect. So you don't end up with an architecture that defines "Customer Number" seventeen different ways in seventeen different places.

Another important thing is that, as far as we know, ours is the first book to address database design using UML over the full lifecycle of enterprise architecture design and development. It's the only book that usefully illustrates how the major players involved in the creation of enterprise system architectures can stick to a common set of requirements throughout the project, from requirements definition through analysis, right through deployment of the physical database. And once you're managing requirements, suddenly you find you can manage change pretty effectively. Then presto! You just might deliver a system that does what it should -- maybe even on time and within budget.

Q: Why now? Could you have written this book, say, five years ago?

A: Not really. Although in some ways this book was long overdue, particularly with respect to bringing data modelers into the UML fold. There were two big steps that had to happen before everything else could come together. The first was the availability of the UML profile for data modeling that is now built into Rational Rose Enterprise Edition, and which Eric helped to create. Anybody could have used UML for data modeling prior to that, but they'd have had to create their own profile. That means defining all their own stereotypes for tables, columns, constraints, and so on. For most organizations that's simply too much work. The second key step was pounding out a comprehensive, end-to-end case study that really illustrated the process in depth. That was a very, very time-consuming process that took not only a lot of focus but also every bit of our combined, real-world experience, which probably explains why other authors have not yet done it successfully.

Q: Sounds like you essentially built the book around the case study. What's the advantage of that method of organization?

A: What's great about our case study framework is that it walks you through the entire project lifecycle from beginning to end, illustrating all the tips, tricks, and pitfalls along the way. These are the kinds of insights you only get from hands-on experience. And our case study mirrors a real-life experience pretty effectively. It's based on a real company, and it's predicated on the definition of actual requirements: system requirements, regulatory requirements, the works. From those requirements we perform a complete analysis and evolve a design -- all the way to deployment. This type of highly detailed example is probably unique. In fact, ours is probably the most comprehensive and robust case study you'll find in any UML book. We packed it with highly relevant information for the database designer because we felt it was essential for both data modelers and project managers to know what to expect from UML. The book is actually organized so that the material up front talks to the team leader or project manager, to assist with planning. And the case study talks to the implementer.

Q: To what extent does the case study, or the book generally, presume you're using Rational Rose and/or Rational's UML profile for data modeling?

A: We used Rational Rose ourselves in creating screen shot type illustrations, and so forth. And we made use of the UML data modeling profile throughout. But strictly speaking, the book is "tool agnostic." It's valuable regardless of where your UML profile comes from.

Q: How receptive has the data modeling community been to the ideas your book presents?

A: There's been a strong, positive response to the book. After all, data modelers and database designers are not new to modeling. They want to know more about UML. Once you show them what it can do, and how it brings data and process together, the light goes on and they totally get it.

Q: How easy is it for a typical database design professional to learn UML?

A: It's very easy. UML is justifiably famous for its flexibility. It's being used to model everything from political systems to biological systems to you-name-it. So it's no surprise that UML works beautifully to describe database designs. It's also worth remembering that UML evolved straight out of entity/relationship diagrams and other kinds of constructs that are widely used in data modeling. That's another reason why it works so well for database design, and why database designers understand it intuitively and can begin using it almost immediately. Sure, there are a few things they might not be familiar with; use cases, for instance. But it all seems to gel very quickly, and off they go!

Q: Besides the organizational advantages you already mentioned, what might motivate a database designer to embrace UML?


A: The biggest win is being able to communicate effectively with the software developers and business analysts and participate as first class citizens in the analysis/design phases. In this way, the database designer can protect existing database assets from uncontrolled change.

UML also helps throughout the design process with earlier artifacts that can be used to "jump-start" subsequent activities. For example, UML class diagrams can clearly depict business rules. So everyone can work together to decide a rule's correctness and where in the system it should be implemented. UML diagrams can be used to reveal database performance issues earlier in the design process. Likewise, UML use cases can help you define your database views very early in the process. UML sequence diagrams make it possible to define transactions in the database system very early on. These same diagrams can be used later, in the testing phase, to ensure the transactions in question can really be executed against the final database structure. We cover all these kinds of practices thoroughly in the book. At every step, we make it clear exactly what the database person needs to be focused on.

Q: How much does the book presume you know about UML? And how much UML background does it provide?

A: The book introduces you to UML from the viewpoint of a database designer, so it's a very valuable introduction for our readership. Then it leads you through the process of UML-based database modeling and design specifically. This helps those unfamiliar with UML to apply new concepts right away. We cover the relevant types of UML diagrams and explain their utility for database designers. In each phase of the case study, for example, we describe both the relevant UML constructs and the relevant database modeling constructs, and illustrate how they relate to the design task at hand. Appendixes show you the actual UML models derived from the case study, so you can see the general types of models you'd need to build for a system like the one in the case study, and their level of complexity. So, basically, we tell you what you need to know about UML rather than referring you elsewhere. And, again, data modelers pick this stuff right up.

Q: Theory and practice, design and process -- this book covers a lot of ground. What overall goal did you hope to accomplish with it?

A: We focused on making the book as practical, concrete, and realistic as we could -- almost like a "virtual mentor" experience. We wanted to make it possible for any motivated database designer to begin working with the UML, and to be able to get the maximum value from the technology in the shortest amount of time. In essence, we wanted to pass along as much of the benefit of our experience as we possibly could. So our readers could learn the tricks without taking the pitfalls.


*Addison-Wesley, 2002. ISBN: 0-201-72163-5. Cover Price: US$39.99 (320 pages).




Book Review

Java Internationalization, by Andrew Deitsch and David Czarnecki

O'Reilly & Associates Inc., 2001 ISBN: 0-596-00019-7Cover Price: US$39.95(444 Pages)

Internationalization, or I18N as many software developers call it, has been defined in many ways. The authors of Java Internationalization describe it as "a term used to define the architecture and design of software for which only one set of source code and binary is produced to support all the markets in which software will be sold." Whether you are looking to architect software as this definition describes, or looking to make your existing software accessible and relevant to everyone in your target markets, this book can help.

Internationalization is one of the many important issues facing software developers today, especially those involved in online endeavors. We are all constantly looking to expand our reach globally, and properly internationalizing our software can help ensure our survival in the global marketplace. As the authors note, "Given the choice between two software products that offer similar features, most people would choose the product that is available in their native language."

For this reason, the authors also recommend that developers consider the markets they intend to support during the initial software design stage, so they can avoid the rework of internationalizing existing applications later on. As part of this consideration, they recommend that developers answer the following questions (among others):

● What markets are you going to target?

● Is it acceptable to internationalize only part of the application?

● Is it acceptable to treat a specific language identically across all locales?

With a better understanding of your internationalization needs, they explain, you can better target your efforts.


The authors adeptly address many of the aspects and issues relating to internationalization with Java. They not only supply technical details of the Java specification and how to implement it, but also supplement these details in many cases with a historical, linguistic, and cultural context. This enables a developer to better understand the choices involved in creating internationalized software.

The authors stress that translating the text used by an application is only one part of the internationalization process. To make your software seem "native" in your target markets, you will need to provide the appropriate languages, properly format messages, and ensure that the User Interface does not violate cultural norms, while at the same time ensuring that you properly manage text (sorting, searching, validation, etc.). The Java Development Kit (JDK) provides a robust framework you can leverage in building your internationalized software, and this book will help you use it more effectively.
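For readers who have not yet used the JDK's internationalization classes, here is a minimal sketch of the kind of support the authors cover. The bundle name, message key, and locale are invented for this example, and the code assumes a Messages_fr_FR.properties file on the classpath.

    import java.text.MessageFormat;
    import java.text.NumberFormat;
    import java.util.Locale;
    import java.util.ResourceBundle;

    public class GreetingDemo {
        public static void main(String[] args) {
            // Target locale; the JDK picks the matching resource bundle
            // (for example, Messages_fr_FR.properties) at run time.
            Locale locale = Locale.FRANCE;

            // Externalized, translatable strings live in the bundle, not in the code.
            ResourceBundle messages = ResourceBundle.getBundle("Messages", locale);

            // MessageFormat performs locale-sensitive substitution into the pattern.
            String pattern = messages.getString("cart.total");   // hypothetical key
            String total = NumberFormat.getCurrencyInstance(locale).format(1234.5);

            System.out.println(MessageFormat.format(pattern, new Object[] { total }));
        }
    }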

I wanted to read and review Java Internationalization because it is directly relevant to the work my team is doing. Rational currently has a Web presence in twenty-two countries, which means we need to be designing and developing software that is relevant to each of these major markets. Our Web team, like most these days, is lean and mean, so it is important for us to ensure that adding new markets will not require rework or redesign. To accomplish this, the entire team needs a shared understanding of all the requirements for internationalization -- and I intend to leverage this book to help develop that understanding.

- Kevin P. Micalizzi, Web Engineering Manager, Rational Software

Read an interview with Rational's Eric J. Naiburg and Robert A. Maksimchuk, authors of the new book, UML for Database Design.
