VIPT - how to teach software testing

In this final post on VIPT (Value-Information-Processes-Tools) it's time to take a look at teaching software testing. My previous posts on VIPT can be found here, here and here.

A typical software testing course

A typical traditional software testing course (at least in the way I have taught them) has three elements: theory, stories and exercises.

The first element is all about definitions (testing, test cases, defects, etc.), process descriptions and testing techniques (mostly test design). Basically, students get a brief introduction to testing in general and then we move on to the main part: teaching a specific testing method.

The second element of the course is the stories. These are mostly stories about how testing in the real world does not work as described in the theory. At best they are stories containing all four elements of VIPT. Most of the time, however, they are just real-world examples of a certain definition or technique.

Finally, there are exercises. As with the techniques, these are mostly about test design. Unfortunately they are also very linear. There is only one correct answer and often only one correct way to get to that answer. So the main gist seems to be: "I taught you a trick, now show me you can perform the trick." But shouldn't learning about testing be more than learning to jump through a hoop on command?
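To make the "one correct answer" point concrete, here is a sketch of what such a linear exercise typically boils down to. The age rule, the function name and the boundary values are all invented for illustration; the point is that the exercise has exactly one expected outcome, not that any real course uses this code.

```python
# Hypothetical exercise: apply boundary value analysis to the rule
# "valid ages are 18 through 65 inclusive". The "trick" the student
# must perform is producing the six canonical values around the two
# boundaries -- nothing more, nothing less.

def is_valid_age(age: int) -> bool:
    """The (invented) system under test: accepts ages 18-65 inclusive."""
    return 18 <= age <= 65

# The one "correct" answer the exercise expects:
boundary_cases = {
    17: False,  # just below the lower boundary
    18: True,   # on the lower boundary
    19: True,   # just above the lower boundary
    64: True,   # just below the upper boundary
    65: True,   # on the upper boundary
    66: False,  # just above the upper boundary
}

for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected, f"age {age} misclassified"

print("All boundary cases pass")
```

Grading such an exercise is trivial precisely because it is linear: either the student lists these values or they don't. What it cannot assess is whether the student can decide when boundary value analysis is worth doing at all.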

Read more…

VIPT - bottom-up or top-down

In this second post on VIPT I want to talk about bottom-up vs. top-down. The original plan for this post was to talk about the distance between tools and value, but in the past few days I figured out that bottom-up vs. top-down is a better approach. If you don't know what VIPT is, please read this previous post. Don't worry, I'll wait.

VIPT is a top-down model

For me as a context-driven tester, the VIPT model is very much a top-down thing. You analyze the context, find out what value you should or can deliver, and then you proceed to information, processes and tools. Of course, that's easier said than done. Going through this for everything you do requires a lot of time and effort. So most of the time you do a quick analysis of the context, decide that it sufficiently resembles a context you encountered earlier, and use the same tools you used then. Most of the time that's ok - as long as you stay alert to any signs that you misread the context.1

Read more…

VIPT Intermezzo - Models and the Unix philosophy

Thanks to Neil Thompson's comments on my previous post, I started thinking about what I want to do with the VIPT model. Do I want to expand and refine it to a grand unified theory of testing? And if not, then what?

The Unix philosophy

After some thinking I realized that with regard to models, I adhere to the Unix philosophy.

Do one thing and do it well

Particularly I am thinking about the following quote from Doug McIlroy:

"This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface."
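As an aside, McIlroy's three rules are easy to illustrate with a minimal filter. This sketch is my own invention, not something from the quote: a small Python program that does one thing (count lines), is built to work with other programs (it operates on any text stream), and speaks the universal interface of plain text.

```python
# A minimal filter in the spirit of the Unix philosophy:
# do one thing (count lines), accept any text stream, emit plain text.
import io


def count_lines(stream) -> int:
    """Do one thing and do it well: count the lines in a text stream."""
    return sum(1 for _ in stream)


# Any line-iterable text stream works: a file, sys.stdin, or a StringIO.
sample = io.StringIO("first line\nsecond line\nthird line\n")
print(count_lines(sample))  # -> 3
```

Hooked up to `sys.stdin`, the same function becomes a pipeline stage alongside tools like `grep` and `sort` - which is exactly the composability the quote is after.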

Read more…

Yet Another Testing Model: Value - Information - Processes - Tools

During Let's Test 2012 some ideas clicked in my mind, resulting in yet another testing model: Value - Information - Processes - Tools.

For me this model really is a culmination of being part of the context-driven testing community. If you have been reading about context-driven testing, I'm sure you'll be able to spot plenty of ideas I stole from others. ;-) So thank you so much to all of you!
Secondly, I have trouble believing I am the first one to come up with this simple model - although a Google search didn't turn up anything.1 So if any of you know of similar models, please leave a comment!

And now to the actual model. Since you can't have a model without a drawing, here's a not so spectacular one:

Read more…

Let's Test 2012 - time travel advice

If I had a time machine and could travel back in time to give myself some advice before attending Let's Test 2012, it would be this:

- Go!
- Don't worry too much about which session to attend. One of the most valuable sessions you will attend, you'll choose because it has the least appealing abstract.
- Bring your running gear.
- If you ask Rikard Edgren nicely, he will give you a dead-tree version of his 'Little Black Book on Test Design'.
- Enjoy the food. Don't worry, it's not much of a challenge.

Read more…

Some thoughts after attending the 'Getting a Grip on Exploratory Testing' workshop

About two weeks ago I attended James Lyndsay's 'Getting a Grip on Exploratory Testing' workshop in Amsterdam. So it's about time to write something about it…

Now, one of the things I dislike about workshop blog posts is that people will say "It was great! And person X is such a good trainer!" without saying much about the content of the workshop. However, I now find myself writing this post and thinking: I shouldn't post a full summary of the workshop. Not that it would spoil too much for any future attendee: most of the workshop consists of exercises and discussion. But posting a summary of a workshop that James put a lot of effort into creating just doesn't feel right. So let me just say this: the workshop was great and James is such a good trainer! :-D

Now that that's out of the way, there are a few things from the workshop I'd like to share. Of course, the usual disclaimer applies: these are my thoughts on what was presented during the workshop. Any misrepresentations are my responsibility.

Read more…

The irony of scripted testing

A bit over a week ago, James Bach posted on Twitter:

"This video shows a nice simple contrast between heavy scripted testing and exploratory testing http://youtu.be/PxTqjAwM2Pw"
- James Bach (@jamesmarcusbach) March 30, 2012

So I watched the video, hoping to see something that would make me go 'Cool!', but instead I went 'Hmmm.'

First let me say that this video does get a few things right:
- Exploratory testing can be structured by using charters.
- Exploratory testing allows you to easily change your test approach based on your test results.
- Exploratory testing is very adaptable when confronted with inaccurate or changing requirements.
Yet notice how the above only talks about exploratory testing. That's because for everything the video gets right about exploratory testing, it gets something wrong about scripted testing – or rather about how scripted testing works in practice.

Read more…

The Seven Basic Principles of the Context-Driven School - part not-three

This blog post should have been part three on the basic principles of context-driven testing. As you can see from the title, it is not. :-) The plan was to see if and to what degree the other schools of testing could accommodate the principles of the context-driven school - as a way of highlighting what makes the context-driven school different from the other schools. The problem is I'm not that interested in doing that at the moment, so the chance is small that an interesting post would result from it. Perhaps another time.

Read more…

The Seven Basic Principles of the Context-Driven School - part two

After the introductory post (to be found here) it's time to take a closer look at each of the basic principles. In the past weeks I found out that it's quite possible to take any one of these principles as a starting point for several different trains of thought. More importantly, I discovered a story1 running through the principles: the first five principles are ways in which software testing is intellectually challenging, as stated by principle six. And principle seven then wraps it all up. So below you can find some of the thoughts I had on the principles and the story I discovered.

1. The value of any practice depends on its context.

To get a better understanding of this principle I started thinking: what if the value of a practice did not depend on context? What else could it depend on?

Read more…

The Seven Basic Principles of the Context-Driven School - part one

In the next several posts, I'd like to take a look at the seven basic principles of the Context-Driven School of software testing. The main reason for this is that I like what people belonging to this school say about testing, but I don't 'get' the seven basic principles. I do understand the principles when I read them, but I don't feel like I 'own' them.

So in the first few posts I will explore the principles as such. After that I will take a look at what other people have said about these principles. So please leave a comment if you know some good secondary sources on these principles!

The seven basic principles can be found here: http://www.context-driven-testing.com. They are also included as an appendix in "Lessons Learned in Software Testing" by Cem Kaner, James Bach and Bret Pettichord, published in 2002. This means that these principles were written in a context I'm not familiar with. They are at least four years older than my career in software testing, which began in 2006. And they were written by people from the USA, while I live and work in Western Europe.

Read more…