What is Test-First Programming?
The Chapter of the same name (on page 467) illustrates the complete lifecycle.
Write a failing test case, write code to pass it, refactor the code to improve its design, and repeat for each tiny new ability. After the fewest possible edits (say ten at the most), run all the tests and predict their results. This technique keeps the odds of bugs, and the odds of excessive or difficult refactoring, very low.
Over time, TFP replaces long hours debugging with short minutes writing tests. Time spent debugging, without knowing the source of a bug, is time wasted. Time spent manually testing, not knowing if an innocent-looking change broke anything, is also time wasted. The trick is fixing both wastes at the same time.
When you debug, you feed a program inputs, run to a breakpoint, and examine an intermediate value. This is an experiment, with a hypothesis and a result. A test case is a permanent record of such an experiment. Each case runs a function and examines one of its values. When you run all the tests, you perform every experiment again. This is like debugging your entire program instantly, many times in many ways, to reconfirm that each intermediate value is still correct.
Without tests, if you change code very rapidly—adding features or improving its design—you will create many kinds of bugs, some easy to find and some hard. With tests, you can change code rapidly while constantly reducing the chances of bugs. The more test cases, the more ambitiously you can change code. So change turns from a scary thing into the most useful tool in software engineering.
What’s a Test Case?
Tests are code outside your application, written in the same language, and built by the same build scripts. Those scripts should compile everything, run all the tests, and report their results.
A test case is one unit of testing. It assembles target objects, activates their test methods, and asserts the results were correct. Here’s a test case, from an arbitrary project:
CHECK_EQUAL("Plot Utility", pFrame->GetTitle());
The test’s target is pFrame, a member of FrameSuite. That class constructed pFrame in a setUp() fixture (not shown), before running this case. The lines starting with CHECK_ are assertions. When they fail, the editor or test script reports the failing conditions.
That case requires the production code to have a Frame::GetTitle() method.
The TEST_() macro is a fixture that automatically registers its case with the global list of cases to run. That removes the need to call extra registration methods.
Chapter 7: NanoCppUnit, on page 166, illustrates this ultra-light test rig.
How Do Tests Make Development Faster?
When automated tests fail, you have many more options than when manual testing exposes a bug. Because tests document and cohere with their tested code, their failures often indicate the problem clearly. You quickly fix it and keep going.
Testing is (slightly) more important than designing or adding features.
When you test before designing, the design's most important requirement, testability, is always satisfied.
You can rapidly change projects with tests—refactoring or adding features—by running the tests after every few edits. Use TFP to produce tests that fail early, loudly, and expressively:
Tests fail early if you run them as often as possible, and you strive to express every aspect of your project as a test. They fail loudly when you bond them to your editor and environment to invoke a hardware breakpoint at the failing assertion. And they fail expressively when they report the failure conditions, and when each test case is very close to its tested code.
Can Tests Catch Every Bug?
Some folks ask why bother to write tests if they can't prove a program is bug-free. It's true; the tests for a complex program must run in geological time to exhaustively prove every combination of situations. That's no excuse not to test.
Use tests to prevent bugs. That frees up your schedule, and makes your code’s condition highly visible, so you can easily catch the remaining few. Developers focus on tests that are easy to write. But…
Our list does not require tests to fail “accurately”! Most tests are accurate, of course. This book discusses ways to make all test failures relevant, without working too hard to make the tests perfect. No test can prove the absence of bugs. The next best thing is cheap tests that fail too often, rather than too infrequently. This book explores such Hyperactive Tests. They can force code into its most testable configuration, leading to very high coverage.
The Agile software development techniques, such as Extreme Programming, leverage unit tests and other development procedures that fail early, loudly, expressively, reversibly, and often accurately. That’s why Agility is success-oriented; errors and mistakes get the earliest possible opportunities to attract attention leading to a fix.
What’s the Best Way to Fix a Failing Test?
One myth of Agile development is that it forbids using a debugger. In the ideal situation, you are allowed to use the debugger, but you are not motivated to. Test cases make an excellent platform for debugging, to learn about legacy code or review new routines. Convert such learning into new test cases as soon as possible.
When a test fails and you don’t want to fix the problem, for whatever reason, tests give you another, very powerful option. The TFP cycle is reversible, so your editor’s Undo system can always revert your code to the last state where all tests passed.
That state must be very recent, because you should not make more than 1 to 10 edits between passing all tests. After undoing, make even fewer edits between test runs.
Why 1 to 10 Edits?
In your primary codebase, between test runs, you should only perform so few edits that they all fit in your short-term memory. You could manually reverse your changes, back to the last passing state, if you wanted to. Frequent testing positively reinforces and rewards your mental model of the code’s situation.
If you can’t think of a small edit, and need to research all the possibilities (see the middle of Chapter 11: Fractal Life Engine for this situation), copy your code out to a scratch project, and then party. When you learn what to do, return to the primary codebase, and exploit your research to perform small edits.
Why Test FIRST?
Prevention is better than a cure. To replace debugging with a more robust implementation technique, write tests before the risk of bugs arises. When you write a test and predict failure, you test the tests at the most efficient time. Inspect your assertions’ outputs to make certain the test failed for the correct reason, and that the assertions' outputs are useful. Writing code to pass such a test stakes out the territory where it can't regress.
Test-first is the most rapid and aggressive way to develop code that fails early, loudly, expressively, and reversibly. The code to pass a test should be simple, and the subsequent refactor to improve design should promote elegance.
Simple code resists bugs. Tested code is easy to simplify. Simple code is decoupled, and easy to test. This cycle squeezes waste out of your process. TFP searches a path of simple code for a clean design that satisfies all committed requirements.
Test-first for GUIs is hard, because the temptation arises to “just look at” the GUI to see if new code works. If a test infrastructure can force predictable visual changes, it can lead advanced GUI modules to the same level of bug resistance and communication that other modules enjoy.
How Do Tests Help Requirements Gathering?
This chapter makes adding new features sound too easy. In a business environment, we must learn to use our powers for good and not evil. The goal of software engineering is to efficiently locate that 20% of features, or less, which will return 80% of the value, or more, for our users. Don't write too many features, just to log billable hours.
Tests help you frequently demo, review, and deploy your new features. A passing test batch should imply very high confidence that your project is ready for immediate delivery. Any question or impediment regarding this status should be answered with more tests.
Your customers should profit from your features as soon as possible, so they can learn what features to request next. This extends control over your project to those who need your features. Some software projects make this feedback loop as tight and comprehensive as possible using a simple trick. They build a cheap and flexible user interface for their test rig, so customer representatives can easily write new cases. This book examines such rigs in narratives and in sample code.
How Do Tests Sustain Growth?
Small projects are as easy to debug as to test-first. The goal is small projects that grow large with customer requests.
As a project grows without tests, it resists changes. To add a new and unforeseen feature, you must reconcile its design with the rest of your program. This change should ripple through your system until the design is as clean as if you had predicted that feature. If you change too little, you lower the design quality. If you change too much, without tests, you introduce bugs. Either way, change without testing gets harder and harder over time.
A healthy test rig is an investment in a program’s future. It allows refactors that merge the code of new features with old ones. It prevents bugs while these refactors force code to grow more flexible and more bug-resistant.
When a project is out of control, the effort required to add each new feature can look like this:
The solid line is the cost of each feature. Even when features are the same size (when each requires the same number of code changes), the effort line can vary widely. But varying effort, alone, won’t make a project hard to control.
Each dotted bar represents the optimistic and pessimistic estimates for those features. If you can’t estimate, you can’t schedule a project.
In an out-of-control project, you never know which feature will be the one that causes radical design changes or long bug hunts. So all our estimates include “float”: very wide spans between optimism and pessimism. Sometimes a feature misses its window, and turns out harder than expected. Some features turn out easy, and waste their float times. Either way, useless estimates disturb our schedule.
Over time, the average effort trends upward. So the longer an uncontrolled project goes, the more risk it causes. The goal is projects that remain in control, like this:
The time and effort to code each feature is short, regular, and generally descending. More important, the estimates are narrow and increasingly accurate. Testing each change makes all changes easy and predictable.
That diagram assumes your project has invested in learning to make all tests easy to write. The sub-chapter Time vs. Effort, on page 54, compares the cost of a long project with poor tests, to the up-front cost of researching how to get difficult things under test.