Wilco van Esch

    Improving your testing by improving your thinking

    Testing is an analytical activity. You form running hypotheses about how a product works, what kinds of problems could occur, what the cause of an observed problem could be, and what its impact on the user and the business would be. You then verify or falsify one hypothesis after another by gathering and evaluating information.

    Training a tester's analytical skill is typically done through exercises where the tester is encouraged to generate testing ideas and ask questions. This helps testers be better investigators and is fun to do.

    However, another aspect of becoming a better investigator is examining your own thinking. Our brains are vulnerable to assumptions, biases, and logical fallacies. The more willing we are to admit this and to learn to recognise them, the less time and effort we waste following up on inaccurate notions.

    Examples from the email marketing world to drive the point home

    1. A marketing email is sent out every month. Last month it included a discount code. The analytics show more people unsubscribed from the email than at any other time. A policy is introduced that discount codes should never be used again.

    Problems:

    • When we hint at a discount code in an email subject line, the open rate is likely to be higher. Accordingly, the absolute number of people unsubscribing may be higher while the unsubscribe ratio stays the same as usual or in fact drops (see the sketch after this list).
    • How do we know other differences in the email send were not responsible for the increased number of unsubscribers?
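
    Below is a minimal sketch of that first point, in Python and with made-up numbers purely for illustration (taking the rate against opens is an assumption about how the analytics are reported):

```python
# Hypothetical send statistics, purely for illustration.
previous_send = {"opens": 10_000, "unsubscribes": 50}   # no discount code
discount_send = {"opens": 25_000, "unsubscribes": 90}   # discount code hinted in subject

def unsubscribe_rate(send):
    """Unsubscribes as a fraction of opens, not as an absolute count."""
    return send["unsubscribes"] / send["opens"]

for label, send in [("Previous send", previous_send), ("Discount send", discount_send)]:
    print(f"{label}: {send['unsubscribes']} unsubscribes, "
          f"{unsubscribe_rate(send):.2%} of opens")

# The discount send has more unsubscribes in absolute terms (90 vs 50),
# yet its unsubscribe rate is lower (0.36% vs 0.50%), because far more
# people opened the email in the first place.
```
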
    2. A project team commits to meeting the WCAG 2.0 accessibility guidelines. Colour Contrast Analyser is used to determine the contrast ratio for meeting success criterion 1.4.3. The tool shows a subheading fails all 4 checks (AA small text, AA large text, AAA small text, AAA large text). The developer is asked to fix this. The developer increases the contrast of the font colour against the background colour. It now fails 3 out of 4 checks. It is reworked again and fails again. Eventually a decision is made that, now and in future campaigns, 1 out of 4 checks should pass.

    Problems:

    • There is no basis for deciding 1 out of 4 checks should pass. What is the font size of the text? Are you committing to AA or AAA level conformance?
    • The reason for the guideline is forgotten. It's to ensure text is legible for people with reduced visual acuity. Tools should be used intelligently to inform whether or not you've succeeded in doing this (the calculation behind those four checks is sketched after this list).
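
    As a rough sketch of the calculation behind those four checks (independent of any particular tool), WCAG 2.0 defines relative luminance and a contrast ratio, and success criteria 1.4.3 (AA) and 1.4.6 (AAA) set thresholds that differ by text size. The colour values below are hypothetical:

```python
def relative_luminance(rgb):
    """Relative luminance of an sRGB colour per the WCAG 2.0 definition."""
    def channel(value):
        c = value / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(foreground, background):
    """Contrast ratio between two colours: lighter luminance over darker."""
    lighter, darker = sorted(
        (relative_luminance(foreground), relative_luminance(background)),
        reverse=True,
    )
    return (lighter + 0.05) / (darker + 0.05)

# Thresholds from WCAG 2.0: 1.4.3 (AA) and 1.4.6 (AAA), split by text size.
CHECKS = {
    "AA small text": 4.5,
    "AA large text": 3.0,
    "AAA small text": 7.0,
    "AAA large text": 4.5,
}

# Hypothetical subheading: mid-grey text on a white background.
ratio = contrast_ratio((0x77, 0x77, 0x77), (0xFF, 0xFF, 0xFF))
print(f"Contrast ratio: {ratio:.2f}:1")
for check, minimum in CHECKS.items():
    verdict = "pass" if ratio >= minimum else "fail"
    print(f"  {check}: {verdict} (needs {minimum}:1)")
```

    Which of the four checks actually matters depends on the font size and on the conformance level the team committed to, which is exactly the information missing from the "1 out of 4 should pass" decision.
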
    3. Email open statistics show that 7% of users use a new email client with limited CSS support. Considering the CSS issues, the project team says "no-one uses it, we won't support it". When faced with the usage statistics, the project team says "we'll increase the support threshold to 10%". When faced with the consequence that this would also end support for desktop and web clients, the project team says "what is the usage share when we divide it by OS and OS version?". This goes on until the data supports the decision.

    Problems:

    • The decision to support an email client should be based on the data rather than the other way around.
    • If you know you can't support something, justify that with the cost of trying rather than with massaged data. You can then offer it as a choice to the business: this % of additional users catered for versus this much additional development and testing cost and effort (see the sketch after this list).
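
    A minimal sketch, with hypothetical usage shares and effort estimates, of presenting that choice to the business rather than adjusting thresholds until the data fits:

```python
# Hypothetical figures, purely for illustration.
clients = [
    # (client, share of opens, extra dev + test effort in person-days per campaign)
    ("Established desktop client", 0.45, 0),
    ("Established web client",     0.38, 0),
    ("New client, limited CSS",    0.07, 12),
]

already_supported = sum(share for _, share, effort in clients if effort == 0)
print(f"Coverage with current support: {already_supported:.0%} of opens")

for name, share, effort in clients:
    if effort > 0:
        print(f"Supporting {name!r} adds {share:.0%} coverage "
              f"at an estimated {effort} extra person-days per campaign")

# The stakeholder can now weigh extra readers against a stated cost,
# instead of the team moving the support threshold until the data
# appears to back a decision that was already made.
```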

    The respective thinking errors in these examples

    1. Illusory correlation. Our brains allow us to act quickly on a perceived correlation. We saw a connection, a plausible explanation, and acted. Yet, since it was not a life-or-death situation, we could have investigated the issue. Not sending discount codes might mean we weaken the conversion rate. It would have been worth looking into, even if it had ended up leading to the same conclusion (one way to check the correlation is sketched after this list).

    2. Special pleading. An exception is made without a reasonable justification for that exception. Here, the exception stems from an arbitrary goalpost, which in turn is caused by an ill-understood requirement. If as a tester you report issues identified by a tool, you should learn why they are issues.

    3. Motivated reasoning. You start from the conclusion and try to justify it instead of letting the evidence guide you. In the example, bad decisions are eagerly suggested and distract from the obvious solution: to present the forbiddingly high cost of supporting a difficult email client to the business stakeholder and let them either agree to exclude it or accept the additional budget and delays.
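
    Returning to the first example: a minimal sketch, with hypothetical counts, of one way the perceived correlation could have been investigated, using a simple two-proportion z-test on the unsubscribe rates:

```python
from math import erf, sqrt

def two_proportion_z_test(events_a, total_a, events_b, total_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = events_a / total_a, events_b / total_b
    pooled = (events_a + events_b) / (total_a + total_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / std_err
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts, purely for illustration.
z, p = two_proportion_z_test(events_a=60, total_a=50_000,   # discount send
                             events_b=50, total_b=48_000)   # usual send
print(f"z = {z:.2f}, p = {p:.2f}")

# A large p-value (as with these made-up counts) means the jump in
# unsubscribes is consistent with normal variation, and the policy of
# never sending discount codes again rests on an illusory correlation.
```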

    We should be aware of logical fallacies such as these to be able to monitor our own thinking and make better decisions. This applies to everyone, of course, not just to testers.

    In addition, all the examples could be said to have an element of knowledge insufficiency. A tester benefits from learning about the product domain, the technical underpinnings of the product, and the business strategy, and from developing linguistic knowledge (language comprehension and expression).

    Further reading

    Wikipedia has a good master list of logical fallacies.

    A publication from Beihang University provides a taxonomy of human error causes for software defects.

    James Reason covers rational processing errors in his book Human Error.