Manifesto for software testing

1. Testing is investigating in order to evaluate a product.

2. An evaluation is a judgement about quality – quality being value to persons who matter.

3. This makes testing a fundamentally human and contextual activity.

4. As such, testing is an exploratory and open-ended activity, requiring continuous evaluation of and experimentation with our practices.

Read more…

Testing maturity in an agile/CDT context

One day during a team meeting at Joep's previous job at a bank, the Team Manager of Testing listed a number of topics his testers could work on in the coming months. One of those topics was "testing maturity". This topic was on the list not because this manager was such a fan of maturity models, but because the other team managers (Business Analysis and Development) had produced one for their own teams and higher management wanted one for testing as well. And although Joep saw little value in a classic five-tiered maturity model either, he was intrigued by the question: so what can you do with respect to maturity models that is of value?

Read more…

Regression testing, it means less than you think

Over the past weeks I have made several attempts at a blog post about regression testing. About how we use the term to refer to different things: tests running on a CI server, people executing test scripts, etc. And about how often the term really doesn't mean much at all, yet nobody questions you when you use it: "What are you doing?" "Regression testing." "Oh good, carry on." The point of the post would be to argue that we should use the term 'regression testing' a lot less, because most of the time we can be more specific without having to be more verbose.

However, the more I thought about (what I would qualify as) proper regression testing, the more I felt that regression versus progression (or progressive) testing is a distinction without a difference. One interesting observation in this regard is that "regression testing" returns 30 times more results on Google than "progression testing" and "progressive testing" combined. So what's going on here if we have a dichotomy with one member producing so much more discussion than the other? And there's more: regression testing is commonly contrasted with test types like functional testing and usability testing. But how then should I categorize a regression test focusing on functionality1?

Read more…

Why the testing/checking debate is so messy - a fruit salad analogy

Five days ago James Thomas posted the following in the Software Testing & Quality Assurance group on LinkedIn:

Are Testing and Checking different or not?
This article by Paul Gerrard explains why we shouldn't be trying to draw a distinction between checking and testing, but should be paying more attention to the skills of the testers we employ to do the job.

I posted a reply there, but I think I can do better than those initial thoughts, so here we go.

Let's imagine the following scene: Alice and Bob are preparing a fruit salad together.
Alice: "Ok, let's make a nice fruit salad. We need some apples and some fruit."
Bob: "Euh, aren't apples fruit?"
Alice: "Yes. Of course. But when I say 'fruit', I mean 'non-apple fruit'."

Read more…

Two styles of leadership in spreading context-driven testing (TITANconf)

The last weekend of August I spent with some great people - Kristoffer Ankarberg (@KrisAnkarberg), Kristoffer Nordström (@kristoffer_nord), Anna Brunell (@Anna_Brunell), Fredrik Thuresson (@Thure98), Maria Kedemo (@mariakedemo), Henrik Andersson (@henkeandersson), Maria Månsson, Amy Philips (@ItJustBroke), Richard Bradshaw (@FriendlyTester), Duncan Nisbet (@DuncNisbet), Alexandru Rotaru (@altomalex), Oana Casapu, Simon Schrijver (@SimonSaysNoMore), Zeger Van Hese (@TestSideStory), Helena Jeret-Mäe (@HelenaJ_M), Aleksis Tulonen (@al3ksis), Anders Dinsen (@andersdinsen) - at the awesome TITAN peer conference in Karlskrona, Sweden.

During the conference we discussed leadership and testing and on Sunday morning I got the opportunity to tell my story1. (I do wish I had captured more of the discussion afterwards to include in this blog post.)

The first style

When thinking about my own leadership in testing, one of the first things that comes to mind is my attempts to influence my colleagues at work (testers, developers, project managers) to become more context-driven in their attitude towards testing.

Read more…

What's the word for the part of testing that's not checking?

The question I asked

Yesterday I asked on twitter:

Question: what's the proper word for the part of testing that's not checking? #cdt #testing #semantics
- Joep Schuurkes (@j19sch) August 16, 2015

The reason I asked is that I noticed I needed that word in discussions about testing and checking. If checking is part of testing - and in the RST namespace it most definitely is; see 'Testing and checking refined' - then what can I contrast checking with? Contrasting checking with testing (as in 'checking versus testing') isn't going to work: there's one thing that's checking, and then there's this other thing, testing, that contains that one thing and some other stuff1, yet is treated as a completely different thing. See the difference? Conceptually that just doesn't work - at least not in my mind.

The answers I got

So I figured I'd ask twitter in all its infinite testing wisdom and lo and behold, not only did people reply, a discussion ensued with the following people (listed in no particular order) participating in different configurations: @eddybruin, @mariakedemo, @SandroIbig, @TestPappy, @dwiersma, @ilarihenrik, @PhilipHoeben, @huibschoots and @deefex. Thank you all!

Read more…

Test automation - five questions leading to five heuristics

(I wrote a follow-up to this post in June 2019: how this tester writes code.)

In 1984 Abelson and Sussman said in the Preface to 'Structure and Interpretation of Computer Programs':

Our design of this introductory computer-science subject reflects two major concerns. First, we want to establish the idea that a computer language is not just a way of getting a computer to perform operations but rather that it is a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute. Second, we believe that the essential material to be addressed by a subject at this level is not the syntax of particular programming-language constructs, nor clever algorithms for computing particular functions efficiently, nor even the mathematical analysis of algorithms and the foundations of computing, but rather the techniques used to control the intellectual complexity of large software systems. [emphasis mine]

This oft-quoted sentence I emphasized is even more true if the purpose of our programs is test automation1. So let's say you run your test automation program and the result is a list of passes and fails. The purpose of testing is to produce information. You could say that this list of results qualifies as information, but I would disagree. I would say it is data, data in need of interpretation. When we attempt this interpretation, we should consider the following five questions.
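To make the data-versus-information point concrete, here is a minimal sketch (test names and outcomes are invented for illustration) of a raw pass/fail list and one small step of interpretation that the bare summary leaves out:

```python
# A raw automation run result: a list of (test name, outcome) pairs.
# This is data - it records what happened, not what it means.
results = [
    ("login_valid_credentials", "pass"),
    ("login_invalid_password", "pass"),
    ("checkout_empty_cart", "fail"),
    ("checkout_single_item", "fail"),
]

# A naive summary collapses the data even further.
passed = sum(1 for _, outcome in results if outcome == "pass")
failed = sum(1 for _, outcome in results if outcome == "fail")
print(f"{passed} passed, {failed} failed")

# Interpretation starts where the summary ends: here both failures
# share the 'checkout' area, which might point to one underlying
# problem rather than two independent ones.
failing_areas = {name.split("_")[0] for name, outcome in results if outcome == "fail"}
print(f"areas with failures: {sorted(failing_areas)}")
```

Even this tiny step - grouping failures by area - is already a human judgement about what matters, which is exactly why the raw list of passes and fails is not yet information.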

Read more…

Three arguments against the verification-validation dichotomy

Last week while talking with two colleagues, one of them mentioned the verification/validation thing. And I noticed it made me feel uneasy. Because I know well enough what is meant by the distinction, but on a practical level I simply can't relate to it. When I think about what I do as a software tester and how verification versus validation applies to it, nothing happens. Blank mind. Crickets. Tumbleweed. So after giving it some thought, I present you with three arguments against the verification-validation dichotomy.

First of course, we have the obligatory interlude of defining these two terms. A place to start is the Wikipedia page on Software verification and validation. Unfortunately it contains conflicting definitions, so if anyone cares enough, please do fix. Luckily there's also the general Verification and validation page of Wikipedia, which gives us (among others) the tl;dr version of the distinction:

  • Verification: Are we building the product right?
  • Validation: Are we building the right product?

Finally there's the ISTQB glossary v2.4 that borrows from ISO 9000:

  • Verification: Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
  • Validation: Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.

Now on to the three arguments.

Read more…

The test case - an epistemological deconstruction

(This article was first published in Dutch in TestNet Nieuws 18. The article below is a translation with minor changes. Many thanks to Joris Meerts and Ruud Cox for reviewing the original version.)

Testing as an information problem

Testing is an information problem. We are in search of certain information, of an answer to the question: does this application fulfill the relevant explicit and implicit expectations? The exact way in which we can answer this question, however, is not immediately clear. First we will need to decide which questions to ask, how to ask them and how to evaluate the responses. Hence, testing is an information problem.

For the traditional test methodologies (ISTQB and TMap being the most well-known) the test case is a large part of the solution. So let's take this solution apart epistemologically and see what it is we have in front of us. If the traditional test case is our solution, what information does a test case contain? What changes occur after executing it? And where, in all of this, is the understanding of what is happening?

In this article, I will first describe how a typical test case is created and how it is used. Then we shall take a look at which kinds of information a test case contains. Finally, we will analyze where the understanding of what happens during testing is present and where it is not.

Read more…

Joining the fray on ISO 29119

For those of you who weren't aware yet: something happened at CAST 2014 regarding ISO/IEC/IEEE 29119, the software testing standard. For instance, first this happened, which resulted in this and this and a whole bunch of other initiatives.

And then someone on twitter (forgot who, apologies) suggested watching Stuart Reid's Eurostar webinar "ISO 29119 – the new set of international standards on software testing", which can be found here. Since I wanted to learn more about the standard, but not enough to pay almost 500€ to NEN (the Dutch standards organisation) for parts 1 to 3, I began watching the webinar. And truth be told, I didn't watch all of it. Some parts were boring, some parts sounded quite reasonable and some parts I..euh..skipped. And that's ok, because all I want to discuss is this one particular quote that begins at 33:25:

Read more…