Best Practices for Automated Testing (Excerpts from the "Angry Tests" Book)
The following principles may help you write better automated tests. They are excerpts from the book Angry Tests; for more detailed explanations, read the book.

Chapter 2. Basics

[2.1] Keep every test shorter than a dozen lines.

[2.2] Assert only once per test.

[2.3] Assert in every test.

[2.4] Finish every test with an assertion.
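
Principles [2.1]-[2.4] can be illustrated with a minimal Python sketch (the book's own examples are in Java); `parse_price` is a hypothetical function under test, defined here only for illustration:

```python
# A short test whose last statement is its single assertion,
# per principles [2.1]-[2.4]. parse_price is hypothetical.

def parse_price(text: str) -> float:
    """Parse a price string such as '$19.99' into a float."""
    return float(text.lstrip("$"))

def test_parses_dollar_price():
    # Arrange and act in a few lines, then finish with one assertion.
    price = parse_price("$19.99")
    assert price == 19.99, f"expected 19.99, got {price}"

test_parses_dollar_price()
```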

[2.5] Don't use assertDoesNotThrow.

[2.6] Don't fail explicitly using fail.

[2.7] Use irregular input values, instead of "foo".
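
A sketch of [2.7], assuming a hypothetical `normalize_name` function: the input mixes unicode, tabs, and trailing newlines rather than a bland `"foo"`:

```python
# Irregular input values (unicode, blanks, punctuation) instead of
# "foo". normalize_name is a hypothetical function under test.

def normalize_name(raw: str) -> str:
    """Trim whitespace and collapse internal runs of spaces."""
    return " ".join(raw.split())

def test_normalizes_messy_unicode_name():
    name = normalize_name("  Jos\u00e9   \tO'Brien \n")
    assert name == "Jos\u00e9 O'Brien", f"unexpected: {name!r}"

test_normalizes_messy_unicode_name()
```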

[2.8] Use different inputs in every test, don't reuse them.

[2.9] Always write failure messages.

[2.10] Don't share any data between tests; never use setUp and tearDown.
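
For [2.10], each test can build its own fixture inline instead of relying on a shared setUp; `Cart` below is a hypothetical class under test:

```python
# No shared state, no setUp/tearDown: every test owns its fixture.

class Cart:
    def __init__(self):
        self.items = []
    def add(self, sku: str, qty: int):
        self.items.append((sku, qty))
    def total_qty(self) -> int:
        return sum(q for _, q in self.items)

def test_counts_single_item():
    cart = Cart()            # fresh fixture, owned by this test only
    cart.add("SKU-8817", 3)
    assert cart.total_qty() == 3, "single item quantity lost"

def test_counts_two_items():
    cart = Cart()            # no state leaks in from the previous test
    cart.add("SKU-0042", 1)
    cart.add("SKU-9090", 2)
    assert cart.total_qty() == 3, "quantities not summed"

test_counts_single_item()
test_counts_two_items()
```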

[2.11] Don't define constants (static literals) inside tests.

[2.12] Don't inline constants; assign them to variables first.
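
A sketch of [2.12]: literals get names before the assertion, so the test reads in domain terms; `tax_of` is hypothetical:

```python
# Assign constants to named variables first, then assert.

def tax_of(amount: float, rate: float) -> float:
    return round(amount * rate, 2)

def test_computes_sales_tax():
    price = 80.0
    rate = 0.075
    expected = 6.0
    assert tax_of(price, rate) == expected, "sales tax miscalculated"

test_computes_sales_tax()
```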

[2.13] Don't bypass object interfaces, especially not via Reflection.

[2.14] Name tests as full English sentences starting with a verb, e.g., buildsHtmlPage.

[2.15] Never write any comments inside or outside of test methods.

[2.16] Don't assert on behavior not promised by the object's contract.

[2.17] Assert on the most vulnerable pain points.

[2.18] Don't test getters, setters, and similar primitive functionality.

[2.19] Don't delete tests, disable them instead.

[2.20] Never test private or protected object methods.

[2.21] Close resources after use, such as files and sockets.

[2.22] Don't write feature code that is only used in tests.

[2.23] Assert at every step while getting to the end of a use case.

[2.24] Don't assert on side effects, like logs.

[2.25] Assert on all possible intermediate results.

[2.26] Don't test constructors — they are code-free anyway.

[2.27] Clean up before a test, not after it.
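
A sketch of [2.27], assuming an illustrative working directory name: the test wipes leftovers from previous runs before it starts, and deliberately leaves its own output behind for post-mortem inspection:

```python
# Clean up BEFORE the test, not after it; the output survives
# the run so a failure can be investigated.

import os
import shutil
import tempfile

WORKDIR = os.path.join(tempfile.gettempdir(), "cart-test-output")

def test_writes_report_file():
    shutil.rmtree(WORKDIR, ignore_errors=True)  # clean up before
    os.makedirs(WORKDIR)
    report = os.path.join(WORKDIR, "report.txt")
    with open(report, "w") as f:
        f.write("total: 3\n")
    with open(report) as f:
        assert "total: 3" in f.read(), "report content missing"
    # no tearDown: the file stays behind for debugging

test_writes_report_file()
```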

[2.28] Don't use mock frameworks, build fake objects instead.
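
A sketch of [2.28]: a hand-written fake that implements the same interface as the real dependency and records calls; `FakeMailer` and `signup` are hypothetical:

```python
# A fake object instead of a mock framework.

class FakeMailer:
    """Records messages instead of sending them."""
    def __init__(self):
        self.sent = []
    def send(self, to: str, subject: str):
        self.sent.append((to, subject))

def signup(email: str, mailer) -> None:
    mailer.send(email, "Welcome!")

def test_sends_welcome_mail():
    mailer = FakeMailer()
    signup("jos\u00e9@example.com", mailer)
    assert mailer.sent == [("jos\u00e9@example.com", "Welcome!")], \
        "welcome mail not recorded by fake"

test_sends_welcome_mail()
```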

[2.29] Don't forgive incorrect behavior, disable the test instead.

[2.30] Aim for one-statement tests.

[2.31] Use Hamcrest.

[2.32] Don't be discouraged from writing bad tests; they're better than nothing.

[2.33] Every time you change the code, add more tests.

Chapter 3. Advanced

[3.1] Classify tests as "fast" (50ms each) and "deep" (integrating everything).

[3.2] Make tests flaky and unstable, then expect bugs to be reported and fixed.

[3.3] Create custom matchers and reuse them in assert statements.
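
A sketch of [3.3] in the Hamcrest style mentioned in [2.31]: a small reusable matcher written by hand (the real Hamcrest/PyHamcrest libraries provide `assert_that` and many matchers out of the box; everything below is home-grown for illustration):

```python
# A custom, reusable matcher with a self-describing failure message.

class DivisibleBy:
    def __init__(self, divisor: int):
        self.divisor = divisor
    def matches(self, value: int) -> bool:
        return value % self.divisor == 0
    def describe(self) -> str:
        return f"a number divisible by {self.divisor}"

def assert_that(value, matcher):
    assert matcher.matches(value), \
        f"expected {matcher.describe()}, got {value}"

def test_page_size_is_divisible_by_eight():
    assert_that(4096, DivisibleBy(8))

test_page_size_is_divisible_by_eight()
```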

[3.4] Reproduce bugs with the minimum possible scaffolding.

[3.5] Tests must reproduce particular bugs, not successful usage scenarios.

[3.6] Don't assert on unimportant details, don't be pedantic without reason.

[3.7] Invent a DSL and write test stories using it.

[3.8] Use randomizers to generate test data.
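
A sketch of [3.8]: inputs come from a randomizer, and the seed is embedded in the failure message so a failing run can be replayed:

```python
# Randomized test data with a replayable seed.

import random

def test_reversing_twice_restores_list():
    seed = random.randrange(10**6)
    rnd = random.Random(seed)
    data = [rnd.randint(-1000, 1000) for _ in range(rnd.randint(1, 50))]
    restored = list(reversed(list(reversed(data))))
    assert restored == data, f"double reverse broke list (seed={seed})"

test_reversing_twice_restores_list()
```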

[3.9] Don't help your tests; create the most inconvenient environment for them.

[3.10] Don't keep temporary files next to the source code; use a temporary directory.

[3.11] Don't test abstract classes.

[3.12] Don't let tests log anything; keep the log empty.

[3.13] Parameterize tests.
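
A sketch of [3.13] using unittest's `subTest` as the parameterization mechanism (`pytest.mark.parametrize` is a common alternative); `leap_year` is a hypothetical function under test:

```python
# One parameterized test body, many cases, each reported separately.

import unittest

def leap_year(y: int) -> bool:
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_classifies_years(self):
        cases = [(2000, True), (1900, False), (2024, True), (2023, False)]
        for year, expected in cases:
            with self.subTest(year=year):
                self.assertEqual(leap_year(year), expected,
                                 f"wrong answer for {year}")

unittest.main(argv=["leap"], exit=False)
```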

[3.14] Stop tests on timeout; don't let them run forever.

[3.15] Don't sleep for an arbitrary number of seconds; instead, wait for an event.
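
A sketch of [3.15]: the test blocks on a `threading.Event` with a timeout instead of sleeping for an arbitrary number of seconds:

```python
# Wait for an event (with a deadline), never time.sleep(N).

import threading

def test_worker_signals_completion():
    done = threading.Event()
    result = {}

    def worker():
        result["value"] = 42   # hypothetical background computation
        done.set()

    threading.Thread(target=worker).start()
    # Returns as soon as the event is set; 5s is only a safety limit.
    assert done.wait(timeout=5), "worker never signalled completion"
    assert result["value"] == 42, "worker produced wrong result"

test_worker_signals_completion()
```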

[3.16] When it's necessary to simulate a hang, sleep for a billion seconds.

[3.17] Let different tests test the same part of the feature code.

[3.18] Test whether your objects are thread-safe: in Java, in Ruby.

[3.19] Let tests retry when the behavior they are testing is flaky.

[3.20] Use tags to classify tests.

[3.21] Let your testing framework repeat some tests to increase the chance of hitting the bug.

[3.22] In tests for thread-safety, utilize all available CPUs.
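
A sketch of [3.22] (and [3.18]): hammer a shared counter from as many threads as there are CPUs and assert that no increment was lost; `Counter` is a hypothetical thread-safe class:

```python
# Thread-safety test that scales with the available CPUs.

import os
import threading

class Counter:
    def __init__(self):
        self._lock = threading.Lock()
        self.value = 0
    def increment(self):
        with self._lock:
            self.value += 1

def test_counter_survives_parallel_increments():
    counter = Counter()
    threads = max(os.cpu_count() or 2, 2)
    per_thread = 10_000

    def hammer():
        for _ in range(per_thread):
            counter.increment()

    pool = [threading.Thread(target=hammer) for _ in range(threads)]
    for t in pool:
        t.start()
    for t in pool:
        t.join()
    assert counter.value == threads * per_thread, \
        f"lost increments: {counter.value}"

test_counter_survives_parallel_increments()
```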

[3.23] Run all tests with no Internet connection; they must pass.

[3.24] Don't assert on the details of errors.

[3.25] Keep the scope of try/catch as small as possible.

[3.26] Strictly one test per feature file: jtcop.

[3.27] Don't use verify() from a mock framework.

[3.28] Don't use PowerMock or similar frameworks.

[3.29] In tests, don't instantiate objects or call their methods with default arguments.

[3.30] Don't abbreviate; use curl --silent instead of curl -s.

[3.31] Don't let feature objects do the job of tests — verify inputs and state.

[3.32] Use decorating invariants to catch improper use of objects during testing.

[3.33] Run tests in parallel threads: threads.

[3.34] Every fast test must take less than 100 milliseconds.

[3.35] Don't mock the file system.

[3.36] Make the place with temporary files accessible after the end of the test suite: mktmp.

[3.37] Don't assert on the content of logs generated by feature code during tests.

[3.38] Use ephemeral TCP ports.
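
A sketch of [3.38]: bind to port 0 and let the OS assign a free ephemeral port, instead of hard-coding one such as 8080 (which may be busy on a CI machine); resources are closed per [2.21]:

```python
# Ephemeral TCP port: the OS picks it, the test reads it back.

import socket

def test_echo_payload_on_ephemeral_port():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))        # 0 = any free port
    port = server.getsockname()[1]
    server.listen(1)
    client = socket.create_connection(("127.0.0.1", port))
    conn, _ = server.accept()
    client.sendall(b"ping")
    assert conn.recv(4) == b"ping", "payload lost on loopback"
    for s in (conn, client, server):     # close resources [2.21]
        s.close()

test_echo_payload_on_ephemeral_port()
```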

[3.39] Don't use inheritance to reuse test tools.

[3.40] Don't be scared of long test classes — they are OK.

[3.41] Don't fix feature code in response to a flaky test failure.

[3.42] Use maybeslow or a similar library to diagnose long-running tests.

[3.43] Kill long-running tests on timeout.

[3.44] Inline fixtures instead of keeping them in fixture files.

[3.45] Don't keep large fixtures in static files; let tests generate them.

[3.46] Create fixture objects that generate large fixtures at runtime.
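
A sketch of [3.45]-[3.46]: a fixture object that generates a large CSV in memory at runtime, instead of a bulky static file committed to the repository; `LargeCsvFixture` is hypothetical:

```python
# A fixture object that builds its data on demand.

import csv
import io

class LargeCsvFixture:
    """Generates an N-row CSV on demand."""
    def __init__(self, rows: int):
        self.rows = rows
    def as_file(self) -> io.StringIO:
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(["id", "amount"])
        for i in range(self.rows):
            writer.writerow([i, i * 3])
        buf.seek(0)
        return buf

def test_sums_amount_column():
    fixture = LargeCsvFixture(rows=10_000)
    total = sum(int(r["amount"])
                for r in csv.DictReader(fixture.as_file()))
    assert total == 3 * sum(range(10_000)), "column sum mismatch"

test_sums_amount_column()
```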

[3.47] Keep reference fixtures as static files in the repository.

[3.48] Code duplication in tests is the last problem to fix.

[3.49] Extract test libraries.

Chapter 6. Fancy Tests

[6.1] Test for grammar mistakes and typos: languagetool, typos-action.

[6.2] Employ Property Based Testing tools: jqwik, quickcheck.

[6.3] Use Fuzzing tools: jqf, oss-fuzz, syzkaller.

[6.4] Use Mutation Testing: pitest.

[6.5] Check files layout during the build.

[6.6] Use Load Tests: jmeter, locust.

[6.7] Use Performance Tests: jmh.

[6.8] Benchmark new vs. previous builds: jmh-benchmark-action.

[6.9] Use Linters: eslint, rust-clippy, checkstyle, pylint, ruff.

[6.10] Use Static Analysis at build time: clang-tidy, spotbugs, infer.

[6.11] Collect source code metrics and fail when thresholds are exceeded.

[6.12] Test SQL queries: pgtap.

[6.13] Detect slowest SQL queries: dexter.

[6.14] Use Sanitizers.

[6.15] Use Dynamic Analysis tools.

[6.16] Employ Clone Detection tools: simian.

[6.17] Use In-Browser Testing: selenium, playwright.

[6.18] Use Multi-Browser Testing: saucelabs.

[6.19] Use Cross-Browser Testing.

[6.20] Use GUI Testing.

[6.21] Use Cloud Code Analyzers: SonarQube, Snyk, Codacy.

[6.22] Create Live Tests that verify production-ready configurations.

[6.23] Use Health Checks after deployment.

[6.24] Generate tests at build time: randoop.

[6.25] Employ Security Tests: zaproxy.

[6.26] Do License Compliance Testing: ort, reuse-action.

[6.27] Detect redundant tests.

[6.28] Validate architecture at build time: archunit.
