Specification by Example

By Gojko Adzic


Controlling the cost of maintenance for a living documentation system is one of the biggest challenges a team may face. In this article from chapter 9 of Specification by Example, author Gojko Adzic presents some good ideas that the teams he interviewed used to reduce the long-term maintenance cost of their automation layers. It also covers two specific areas that caused automation problems for many teams: user interfaces and data management.



Managing the Automation Layer


Controlling the cost of maintenance for a living documentation system is one of the biggest challenges the teams I interviewed faced in the long term. A huge factor in that cost is managing the automation layer effectively.

In this article, I present some good ideas that the teams used to reduce the long-term maintenance cost of their automation layers. The advice in this section applies regardless of the tool you choose for automation.

Don’t treat automation code as second-grade code

One of the most common mistakes teams made was treating specifications or the related automation code as less important than production code. Examples of this include handing automation tasks to less capable developers and testers, and not maintaining the automation layer with the same effort that's applied to production code.



In many cases, this came from the misperception that Specification by Example is just about functional test automation (hence the aliases agile acceptance testing and Acceptance Test-Driven Development), with developers thinking that test code isn’t that important.

Wes Williams said that this reminded him of his early experiences with unit-testing tools:

I guess it’s a similar learning curve to writing JUnit. We started doing the same thing with JUnit tests and then everyone started writing, “Hey guys, JUnit is code; it should be clean.” You ran into maintainability problems if you didn’t do that. The next thing we learned was that the test pages [executable specifications] themselves are “code.”


Phil Cowans listed this as one of the biggest mistakes his team made early on when implementing Specification by Example at Songkick. He added:

Your test suite is a first-class part of the code that needs to be maintained as much as the regular code of the application. I now think of [acceptance] tests as first class and the [production] code itself as less than first class. The tests are a canonical description of what the application does.

Ultimately the success is more about building the right thing than building it well. If the tests are your description of what the code does, they are not just a very important part of your development process but a very important part of building the product and understanding what you built and keeping the complexity under control. It probably took us a year to realize this.


Clare McLennan says that it’s crucial to get the most capable people on the task of designing and building the automation layer:

When I went back the other day, one of the other developers said that the design of the test integration framework is almost more important than the design of the actual product. In other words, the testing framework needs to have as good a design as the actual product because it needs to be maintainable. Part of the reason why the test system succeeded was that I knew about the structure and I could read the code.

What typically happens on projects is they put a junior programmer to write the tests and the test system. However, automated test systems are difficult to get right. Junior programmers tend to choose the wrong approximations and build something less reliable. Put your best architects on it. They have the power to say: If we change this in our design, it will make it much better and easier to get tested.


I wouldn’t go as far as saying that the automation code is more important than production code. At the end of the day, the software is built because that production code will help reach some business goal. The best automation framework in the world can’t make the project succeed without good production code.

Specifications with examples—those that end up in the living documentation—are much longer lived than the production code. A good living documentation system is crucial when completely rewriting production code in a better technology. It will outlive any code.

Describe validation processes in the automation layer

Most tools for automating executable specifications work with specifications in plain text or HTML formats. This allows us to change the specifications without recompiling or redeploying any programming language code. The automation layer, on the other hand, is programming language code that needs to be recompiled and redeployed if we change it.

Many teams have tried to make the automation layer generic in order to avoid having to change it frequently. They created only low-level reusable components in the automation layer, such as UI automation commands, and then scripted the validation processes, such as website workflows, with these commands. A telling sign of this problem is specifications that contain user interface concepts (such as clicking links or opening windows) or, even worse, low-level automation commands such as Selenium operations.

For example, the Global Talent Management team at Ultimate Software decided at some point to push all workflow out of the automation layer and into test specifications. They were using a custom-built, open source UI automation tool called SWAT, so they exposed SWAT commands directly as fixtures. They grouped SWAT commands together into meaningful domain workflows for specifications. This approach made writing specifications easier at first but caused many maintenance issues later, according to Scott Berger and Maykel Suarez:

There is a central team that maintains SWAT and writes macros. At some point it was impossible to maintain. We were using macros based on macros. This made it hard to refactor [tests] and it was a nightmare. A given [test context] would be a collapsible region, but if you expanded it, it would be huge. We moved to implementing the workflow in fixtures. For every page [specification], we have a fixture behind.

Instead of describing validation processes in specifications, we should capture them in the automation layer. The resulting specifications will be more focused and easier to understand.

Describing validation processes (how we test something as opposed to what’s being tested) in the automation layer makes that layer more complex and harder to maintain, but programming tools such as IDEs make that task easier. When Berger’s team described workflows as reusable components in plain-text specifications, they were essentially programming in plain text without the support of any development tools.

We can use programming tools to maintain the implementation of validation processes more efficiently than if they were described in plain text. We can also reuse the automated validation process for other related specifications more easily. See the sidebar “Three levels of user interface automation” further in this article for more information on this topic.
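
For example, here's a minimal sketch of that separation in Java with Selenium WebDriver. Everything in it is hypothetical (the fixture name, the URL, and the element ids); the point is that the specification only says "register a user," while the clicks live in one place inside the automation layer:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // Hypothetical fixture class: specifications call registerUser(...),
    // and the UI mechanics stay hidden in the automation layer.
    public class RegistrationFixture {
        private final WebDriver driver;

        public RegistrationFixture(WebDriver driver) {
            this.driver = driver;
        }

        // One business-level action; the page flow is an implementation detail.
        public void registerUser(String email, String password) {
            driver.get("http://localhost:8080/register"); // assumed test URL
            driver.findElement(By.id("email")).sendKeys(email);
            driver.findElement(By.id("password")).sendKeys(password);
            driver.findElement(By.id("register")).click();
        }

        public boolean registrationSucceeded() {
            return driver.getPageSource().contains("Welcome"); // assumed confirmation text
        }
    }

If the registration page changes, only this class changes; every specification that registers a user keeps reading exactly as before, and an IDE can refactor the fixture safely.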

Don’t replicate business logic in the test automation layer

Emulating parts of the application business flow or logic in the automation layer can make the tests easier to automate, but it will make the automation layer more complex and harder to maintain. Even worse, it makes the test results unreliable.

The real production flow might have a problem that wasn’t replicated in the automation layer. An example that depends on that flow would fail when executed against a real system, but the automated tests would pass, giving the team false assurance that everything is okay.

This is one of the most important early lessons for Tim Andersen at Iowa Student Loan:

Instead of creating a fake loan from test-helper code, we modified our test code to leverage our application to set up a loan in a valid state. We were able to delete nearly a third of our test code [automation layer] once we had our test abstraction layer using personas to leverage our application. The lesson here is don’t fake state; fantasy state is prone to bugs and has a higher maintenance cost. Use the real system to create your state. We had a bunch of tests break. We looked at them and discovered that with this new approach, our existing tests exposed bugs.
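
A minimal sketch of the difference in Java; Loan, LoanService, and BorrowerPersona are hypothetical stand-ins for the personas Andersen describes, not his actual code:

    // Hypothetical application service; in a real suite this is the
    // production LoanService, not a test double.
    interface LoanService {
        Loan apply(int amountInDollars);
        Loan approve(Loan loan);
    }

    class Loan { /* state owned and validated by the real application */ }

    // Persona-style helper: it creates state by driving the real system,
    // never by constructing "fantasy state" by hand in the test code.
    class BorrowerPersona {
        private final LoanService loans;

        BorrowerPersona(LoanService loans) {
            this.loans = loans;
        }

        Loan takeOutValidLoan(int amountInDollars) {
            return loans.approve(loans.apply(amountInDollars));
        }
    }

Because the persona drives the real services, any loan it produces is one the production code can actually create, and a bug in that flow fails the tests instead of hiding behind faked state.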


On legacy systems, using production code in automation can sometimes lead to very bad hacks. For example, one of my clients extended a third-party product that mixed business logic with user interface code, and we couldn’t do anything about that design: the client had read-only access to the source code for the third-party components. Someone originally copied and pasted parts of the third-party functionality into test fixtures, removing all user interface bindings. This caused issues whenever the third-party supplier updated their classes.

I rewrote those fixtures to initialize third-party window classes and access private variables using reflection, so that the tests ran through the real business workflow. I’d never do anything like that while developing production code, but this was the lesser of two evils. We deleted 90% of the fixture code and occasionally had to fix the automation when the third-party provider changed the way private variables were used, but this was a lot less work than copying and modifying huge chunks of code all the time. It also made the tests reliable.
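
For illustration, here's a minimal sketch of that kind of reflection hack in Java; the class and field names are hypothetical, and this is only defensible in test code against source you can't change:

    import java.lang.reflect.Field;

    // Test-only helper: reaches into the private state of an instantiated
    // vendor class so the real business workflow can run, instead of
    // copying and pasting the vendor code into fixtures.
    class VendorWindowAccess {
        static Object readPrivateField(Object target, String fieldName) throws Exception {
            Field field = target.getClass().getDeclaredField(fieldName);
            field.setAccessible(true); // deliberate hack, test code only
            return field.get(target);
        }
    }

Note that on recent JVMs, setAccessible can additionally require opening the vendor's module to the test code, which is one more reminder that this is a last resort for legacy integrations.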

Automate along system boundaries

When: Complex integrations

If you work on a complex heterogeneous system, it’s important to understand where the boundaries of your responsibility lie. Specify and automate tests along those boundaries.

With complex heterogeneous systems, it might be hard or even impossible to include the entire end-to-end flow in an automated test. When I interviewed Rob Park, his team was working on an integration with an external system that converts voice to data. Going through the entire flow for every automated case would be impractical, if not impossible. But they weren’t developing voice recognition, just integrating with such a system.



Their responsibility covers what happens to voice messages after they’ve been converted to data. Park says that they decided to isolate the system and provide an alternative input path to make it easier to automate:

Now we’re writing a feature for Interactive Voice Response. Policy numbers and identification get automatically transferred to the application from an IVR system, so the screens come up prepopulated. After the first Three Amigos conversation, it became obvious to have a test page that prepares the data sent by the IVR.


Instead of automating such examples end to end, including the external systems, Park’s team decoupled the external inputs from their system and automated the validation for the part of the system they’re responsible for. This enabled them to validate all the important business rules using executable specifications.
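
A minimal sketch of that kind of decoupling in Java; every name here (IvrInput, PolicyScreens, the fixture) is a hypothetical stand-in, not Park's actual code:

    // The data contract at the system boundary: what the IVR sends
    // after converting voice to data.
    class IvrInput {
        final String policyNumber;
        final String callerId;

        IvrInput(String policyNumber, String callerId) {
            this.policyNumber = policyNumber;
            this.callerId = callerId;
        }
    }

    class Screen { /* stand-in for the prepopulated screen model */ }

    // Hypothetical application entry point that the real IVR integration calls.
    interface PolicyScreens {
        Screen prepopulate(IvrInput input);
    }

    // Fixture: feeds IVR-equivalent data straight into that entry point, so
    // executable specifications cover everything downstream of the voice step.
    class IvrBoundaryFixture {
        private final PolicyScreens screens;

        IvrBoundaryFixture(PolicyScreens screens) {
            this.screens = screens;
        }

        Screen callArrivesFor(String policyNumber) {
            return screens.prepopulate(new IvrInput(policyNumber, "555-0100"));
        }
    }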

Business users will naturally think about acceptance end to end. Automated tests that don’t include the external systems won’t give them the confidence that the feature is working fully. That should be handled by separate technical integration tests. In this case, playing a simple prerecorded message and checking that it goes through fully would do the trick. That test would verify that all the components talk to each other correctly. Because all the business rules are specified and tested separately, we don’t need to run high-level integration tests for all the important use cases.

Don’t check business logic through the user interface

Traditional test automation tools mostly work by manipulating user interface objects. Most automation tools for executable specifications can go below the user interface and talk to application programming interfaces directly.

Unless running them end to end through the user interface is the only way to gain confidence from automated specifications for a feature, don’t do it.

User interface automation is typically much slower and much more expensive to maintain than automation at the service or API level. With the exception of using visible user interface automation to gain trust (as described earlier in this article), going below the user interface is usually a much better way to verify business logic.
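
As a minimal sketch, here's what a below-the-UI check can look like as a Fit-style column fixture in Java; the Pricing interface and the wiring are hypothetical:

    import fit.ColumnFixture;

    // Hypothetical application API; the production implementation is wired in
    // by the test harness, so the fixture contains no business logic of its own.
    interface Pricing {
        int priceFor(String product, int quantity);
    }

    // Each row of the specification table sets product and quantity, then
    // checks price() against the expected value; no browser is involved.
    public class DiscountedPrice extends ColumnFixture {
        public String product;
        public int quantity;

        static Pricing pricing; // assigned to the real service during test setup

        public int price() {
            return pricing.priceFor(product, quantity);
        }
    }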

Automate below the skin of the application

When: Checking session and workflow constraints

Workflow and session rules can often be checked only against the user interface layer. But that doesn’t mean that the only option to automate those checks is to launch a browser. Instead of automating the specifications through a browser, several teams developing web applications saved a lot of time and effort going right below the skin of the application—to the HTTP layer. Tim Andersen explains this approach:

We’d send a hash-map that looks a lot like the HTTP request. We have default values that would be rewritten with what’s important for the test, and we were testing by basically going right where our HTTP requests were going. That’s how our personas [fixtures] worked, by making HTTP requests with an object. That’s how they used real state and used real objects.



Not running a browser allows automated checks to execute in parallel and run much faster. Christian Hassa used a similar approach but went one level lower, to the web controllers inside the application. This avoided the HTTP calls as well and made the feedback even faster. He explains this approach:

We bound parts [of a specification] directly to the UI with Selenium but other parts directly to an MVC controller. It was a significant overhead to bind directly to the UI, and I don’t think that this is the primary value of this technique. If I could choose binding all specifications to the controller or a limited set of specifications to the UI, I would always choose executing all the specifications to the controller. Binding to the UI is optional to me; not binding all specifications that are relevant to the system is not an option. And binding to the UI costs significantly more.

Automating just below the skin of the application is a good way to reuse real business flows and avoid duplication in the automation layer. Executing the checks directly using HTTP calls—not through a browser—speeds up validation significantly and makes it possible to run checks in parallel.

Browser automation libraries are often slow and lock user profiles, so only one such check can run at any given time on a single machine. There are many tools and libraries for direct HTTP automation, such as WebRat, Twill, and the Selenium 2.0 HtmlUnit driver. Many modern MVC frameworks allow automation below the HTTP layer, making such checks even more efficient. These tools allow us to execute tests in parallel, faster, and more reliably because they have fewer moving parts than browser automation.
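
For example, in a Java stack built on Spring MVC, the spring-test MockMvc facility can exercise a controller without a server socket or a browser. A minimal sketch, where LoginController is a hypothetical production controller and the expected behavior is assumed:

    import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;
    import static org.springframework.test.web.servlet.setup.MockMvcBuilders.standaloneSetup;

    import org.junit.Test;
    import org.springframework.test.web.servlet.MockMvc;

    public class LoginWorkflowTest {
        // Binds directly to the controller: no HTTP server, no browser profile.
        private final MockMvc mvc = standaloneSetup(new LoginController()).build();

        @Test
        public void rejectsInvalidCredentials() throws Exception {
            mvc.perform(post("/login")
                    .param("username", "alice")
                    .param("password", "wrong"))
               .andExpect(status().isUnauthorized());
        }
    }

Checks like this run in milliseconds and parallelize trivially, because each test builds its own MockMvc instance instead of sharing a single browser profile.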
