© Akshay Mathur Page 1
Releasing Software without Testing Team
Akshay Mathur
Introduction We can’t imagine software development without testing. Programmers are human and bound to make mistakes, so we use testing as a tool to ensure that defects are caught before we ship. Good software engineering practice requires us to isolate development and testing efforts, commonly by assigning each responsibility to a different team. Along these lines, the idea of a magical ratio of testers to developers was conceived: to improve quality, we intuitively increase the ratio of testers to developers. Increasing the testing effort in this way directly affects budgets and time to market. There is also a hidden cost, arising from the interpersonal friction when two teams with different objectives try to work together. Development teams that are directly responsible for the quality of the code they ship show several desirable traits. By internalizing quality goals, developers are forced to adopt a ‘test first’ attitude, which lets them get better over time at meeting the standards required to ship software to customers. By eliminating the overhead of inter-team co-ordination, single teams gain agility and responsiveness. Turnaround time for bug resolution is also reduced to its minimum. These are great qualities to achieve.
Software Development Life Cycle Before we can review the role of the software development team, we need to revisit the traditional waterfall model of software development and the role the team plays in it. In this model, requirements analysis, system and technical design, development, testing and release are each discrete steps. In practice, management and development teams rarely have the freedom to implement the model this way. Let’s take a closer look at the underlying forces that disrupt its simple flow.
Requirements If we think of any software project as governed by technical and non-technical requirements, then the key underlying constraints are time and money. At the start of the cycle, we describe the functional requirements, or what the software must deliver. These are created on the basis of our current knowledge of what our customers want. As market demands change, functional requirements
must follow. In many cases, customers themselves are not aware of what they want. Further, written functional requirements are at best an abstraction of what is really desired; it is left to the development team to fill in the details and plug the gaps. Thus, even after creating the functional requirements, we are left with lingering questions. How do we know for sure that what the development team delivers will meet the requirements? Can all the requirements be met within the defined budget? Considering these factors, we allow ourselves to revisit and change the requirements.
System Design Since requirements are a prerequisite to the design stage, many of the challenges of the first stage affect this stage as well. You could require that designs eliminate the need to revisit development even if the requirements change. However, this approach risks over-engineering the software and increases development complexity. It also results in significant waste when major changes are made to the original requirements to keep pace with the market.
Development and Testing Even against the backdrop of the flawed waterfall model, the implicit expectation is that the software delivered has to be of excellent quality. In a nutshell, it must meet the original, ever-changing requirements, work in the common and extreme scenarios that can be imagined, or at least degrade gracefully in the remaining scenarios that could not feasibly be accounted for or imagined in the first place. Delivering this magical software is the joint responsibility of the development and testing teams. As we’re already aware, joint responsibilities lead to imperfect outcomes. Thus, optimizing the tradeoff between our expectations and the ability of this joint team to deliver is of great importance to delivering great software. Let’s dig a little deeper and see how development and testing teams work. For the sake of simplicity, we will assume that testing is given the same importance as development: requirements are discussed with both teams, and both are able to begin their work together.
The Problem
Communication Issues While the development team does low-level technical design and starts coding, the testing team starts writing the test plan and test cases. At this point, only black-box testing can be planned, because the actual code needed for any white-box testing approach does not yet exist.
As customer-facing teams learn more about the market and change the original requirements, these changes need to be communicated to both teams, a costly and imperfect exercise. Further, as the development team makes progress and learns more about implementation limitations, requirements and designs change to keep to delivery schedules and budget limits. In many cases, these changes remain implicit within the development team, and their importance is realized only later. Thus, the development and testing teams’ understanding of the software under development begins to diverge. As anyone with experience in testing will tell you, delays and imperfect communication result in unexpected pauses, rework and a great deal of frustration. Features that go ‘missing’ or are found to be inexplicably ‘broken’ in the build routinely halt testing efforts.
Coordination Issues The perspectives that the development and testing teams adopt are also at odds. Testers need to break the software, whereas developers adopt an implementation perspective. As the perspectives differ, the order in which each team wants to approach the software and its sub-components differs too. To write comprehensive test cases, the testing team requires implementation details of features that the development team will simply not be ready to provide. The testing team’s workload depends on what the development team can deliver, so development milestones must keep an optimal workload for both teams in mind. As a result, both developers and testers have to compromise on their natural order of work and come up with a mid-way build plan. Both teams are now constrained by this plan and must forgo any creative optimization of it to suit their individual objectives. Failing to meet the build plan affects both teams and raises further coordination issues.
Mindset Issues Duality in rewards and in holding the two teams accountable also plays a significant role in dividing them further. For instance, on uncovering embarrassing or severe defects, the testing team is pushed to tighten its processes, whereas a successful release invites praise for the developers. The testing team essentially isolates developers from how the software performs with end-users. If the two teams work in a staggered mode, developers simply throw code over the wall for the testers to test and move on to their next responsibility. As a result, developers miss the opportunity to process valuable feedback from the field in the context of the current release effort. Bug resolution cycles are also longer, as developers have to pre-empt their current responsibilities and switch context to fix bugs. Isolation from the end-user also has a subtle effect on individuals, allowing overall quality to slip. Developers begin to believe that they must now write code
to simply get past the test team. Since someone else is responsible for delivering quality, in the back of the developer’s mind it is now okay to write code that does not handle all cases or does not deliver the complete functionality. It also becomes fine to skip upfront impact analysis: if the new code breaks some other functionality, the testing team will report that too, and it will go through the bug-fix cycle. Testers, meanwhile, are rewarded for the non-obvious defects they find before release, which encourages attachment to bug reports. If the gross number of bug reports accepted for fixing is the testing team’s only benchmark, it can be disappointing for testers, and the team at large, to have bugs rejected by developers.
The Problem in a nutshell In a practical software development project, the company needs to build a great deal of coordination and communication infrastructure. In spite of that, a divide forms between development and testing team members. These issues not only add to the cost but also ruin the working environment.
The Solution The simplest solution to eliminate the divide is to merge the teams. However, superficial merging is observed not to work.
Superficial Merging In one experiment to reduce this divide, the two teams were merged into a single ‘Engineering Team’ with a single reporting lead. This cosmetic change had a negative impact: processes stayed the same, and the engineering manager, coming from the development stream, failed to understand the issues of testers. In a different experiment, developers were asked to alternate their primary development roles with testing responsibilities. This also failed, for the following reasons:
• Developers were pre-empted in the midst of their testing assignments whenever they needed to address urgent issues from previous releases.
• Some developers refused to be rotated into the testing team.
• The testing team felt that developers coming on rotation needed a lot of training to adopt formal testing and were not being helpful.
True Merging True merging requires merging the objectives and the functions of both teams. This implies that there are no specialist positions: each individual must do both development and testing. Merging teams is therefore difficult to implement as an afterthought. The complete process, right from planning, recruitment, execution
to the delivery, needs to be aligned with this idea, and everybody has to work with this approach. Company management needs to make way for new processes. For example, when recruiting, the job profile needs to tell candidates that they are expected to wear both testing and development hats. Context switches between development and testing must be instantaneous. Interestingly, no formal testing documentation is needed, which eliminates written test cases, test plans and test reports. The number and formal quality of bug reports also go down. This appears difficult to implement, as it requires a change in culture, but it works in favor of quality. In the rest of this document, we discuss it at length.
How it works As noted earlier, there is no dedicated testing team. Everybody involved in the project knows how the software works and what is expected from it. The waterfall model implementation remains the same; only development and testing are carried out differently from how two separate teams would approach them. Individuals are better aware of changing requirements, and team co-ordination is better synchronized. When work begins, they are free to work in the sequence they prefer and to optimize and adjust the schedule as needed. While estimating time for a feature, members add the additional time necessary to smoke-test the overall build after integration. Developers are skilled in basic testing techniques. To do a better job of testing, a developer does a thorough impact analysis of any change he makes. Access to, and understanding of, the code increases the accuracy of impact analysis and makes it easier to include testability hooks. Knowledge of the code also helps in running better test cases and choosing better test data. In this way, trivial bugs are resolved in the development environment itself. When a developer encounters a bug in code written by someone else, he discusses it with the original author and together they come up with a plan for resolution. The bug tracking system is used mostly for reminders, which is why the quality of bug reports is poor by formal standards: reports contain investigation and resolution notes rather than steps to reproduce, test data and so on. In this system, both praise and blame come to the same person. This makes the developer completely responsible for product quality and encourages better quality.
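To make the developer-owned, post-integration smoke test concrete, here is a minimal sketch of what such a check might look like. It is hypothetical: the check names and the build dictionary are invented for this illustration and are not taken from any product discussed in this paper.

```python
# Hypothetical post-integration smoke test a developer might run
# before declaring a build ready. All check names and the "build"
# dictionary are illustrative, not from any real product.

def check(name, condition):
    """Record and report the result of one smoke-test check."""
    status = "PASS" if condition else "FAIL"
    print(f"[{status}] {name}")
    return condition

def smoke_test(build):
    """Run a handful of cheap end-to-end checks against a build."""
    results = [
        check("build compiles", build.get("compiled", False)),
        check("application starts", build.get("starts", False)),
        check("login flow works", build.get("login_ok", False)),
    ]
    # The build is ready only if every check passed.
    return all(results)

build = {"compiled": True, "starts": True, "login_ok": True}
print("smoke test passed" if smoke_test(build) else "smoke test FAILED")
```

The point of such a script is not coverage but speed: a developer can run it immediately after integrating a change, so trivial breakage never reaches anyone else.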
Limitations While the approach described here produces good-quality software, formal documentation of test cases, test data, test reports and so on becomes unavailable. There are also cases where this methodology cannot directly apply or needs to be adapted. For instance, if a large software product is now simply being maintained, the development and testing workloads are no longer evenly matched; the testing workload is significantly higher and calls for specialization. In other cases, such as mission-critical software, the cost of managing multiple teams plus all the overheads may still be significantly less than the cost of a defect escaping to the field. Another case is software shipped embedded on a device, where no matter how you test otherwise, a test cycle on actual devices is mandatory. Yet another case is WIPS (Wireless Intrusion Prevention Systems), where so many environmental factors are at play that a radio-frequency-isolated environment is needed for testing.
Scaling The approach described in this paper was applied successfully with small team sizes, or where the batch size of features released is small. The idea behind the proposal is to make developers write good code in the first place by holding them responsible for quality and making them think about potential failure points ahead of time. At large scale, if the testing team can’t be completely eliminated, it should be introduced very late in the process. For instance, if the testing team enters the picture only at the customer acceptance test level, when for the purposes of the development team the software is already considered ‘shipped’, the development team members must by all means upgrade their skills to deliver high-quality software.
Case Studies
ShopSocially ShopSocially provides embeddable apps to online retailers. The retailers add a code snippet to the pages of the online store, and based on the rules configured in the console, different apps trigger on different pages. Some apps also run off the retailer’s website. In a nutshell, ShopSocially has a web application that is distributed across the websites of 500+ retailers and serves millions of requests per day.
ShopSocially has implemented this approach right from its incorporation. The processes work exactly as described in the ‘How it works’ section above. At ShopSocially, many requirements and enhancement requests arrive as bug reports. Setting those aside, consider the software bugs a testing team could have caught: in the last 3 years, only 17 software bugs were reported from the field, and in the last 6 months the number is only 2. None of these bugs was critical. Another category of reported bugs is browser compatibility issues, where some portion of the enterprise console does not render properly in Internet Explorer 7.0. In the beginning, a lot of cosmetic issues were also reported. As a result, developers on the one hand trained themselves to have a better eye for cosmetic details, and on the other hand adopted better user interface development tools and frameworks; the problem is now reduced to its minimum.
GS Lab GS Lab provides software development and testing services. Many of its clients are early-stage start-ups. By adopting these techniques, GS Lab delivers very good quality code. This saves clients a lot of money, as the cost of a testing team is eliminated.
AirTight Networks AirTight Networks is the leader in Wireless Intrusion Prevention Systems (WIPS). The product targets everyone from SOHO users to large enterprises; however, most customers are large enterprises with the system deployed across the globe. The product includes server appliances and a number of sensor boxes running proprietary code. AirTight Networks can’t completely eliminate the testing team, for the following reasons:
• The testing requires specialized skills and a specialized environment.
• The software needs to run on different devices, and testing is required on all of them to eliminate any device-specific issues.
• Many releases of the product are deployed in the field and customers upgrade at their convenience; the company has to support all of them.
• It is a security product, and a security breach may cost the customer dearly.
However, after many experiments, AirTight Networks has adopted several processes for bridging the gap between the development and testing teams. Here are a few of them:
• Developers and testers sit next to each other.
  o This eliminates the need for formal communication in some cases. Testers are automatically drawn into the discussions happening at the developers’ desks during the course of development.
  o With the testers always present, developers never forget to include them in formal communication.
• Complete access to the source code is given to the testing team. Testers are also trained to navigate the code and do better impact analysis.
• Cross-references are maintained between the source control system and the bug tracking system.
• The commit log of the source control system is made available via a web interface, eliminating push-based formal communication from the development team.
• While committing code, developers write a detailed commit log carrying the bug report or enhancement request number and details of the change being committed.
• Developers also provide details of the tests they ran, so that the testing team can rest assured that the trivial cases work.
• The scope of testing for intermediate builds is limited to new and impacted functionality, and developers need to communicate that scope after doing their impact analysis.
• As far as the development team is concerned, the software is considered shipped after the integration test cycle. Any bug in previously released features discovered during the regression test cycle is treated as a bug from the field, and developers are held responsible for it.
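The commit-log conventions above can be sketched as a small automated check on commit messages. This is a hypothetical illustration, not AirTight’s actual tooling: the “BUG-”/“ENH-” id format and the “Tests:” trailer are invented for this example.

```python
import re

# Hypothetical commit-message check: require a bug or enhancement
# reference plus a note on the tests that were run, mirroring the
# conventions described above. The id format (BUG-123 / ENH-45) and
# the "Tests:" trailer are illustrative assumptions.

def valid_commit_message(msg: str) -> bool:
    """Accept only messages that cite a tracker id and list tests run."""
    has_ref = re.search(r"\b(BUG|ENH)-\d+\b", msg) is not None
    has_tests = "Tests:" in msg
    return has_ref and has_tests

msg = """BUG-142: fix session expiry on console login

Tests: ran login smoke test and session-timeout unit tests
"""
print("ok" if valid_commit_message(msg) else "rejected")
```

A check like this could be wired into a source-control hook so that the cross-referencing between the commit log and the bug tracker never depends on developers remembering the convention.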
Summary Academically, the software development life cycle looks very simple, but it carries a lot of challenges in practical scenarios. Making developers responsible for quality is the key to producing good-quality software. In many cases, this may be achieved by eliminating the testing team completely. In other cases, introducing the testing team late in the process may help.