Testing people have mentioned “session-based testing” as a radical new approach to testing. The main purpose of this page is to collect a set of links aimed at helping me understand what it is.
“We had this problem when doing exploratory testing for a client. We wanted to be accountable for our work. We wanted to give status reports that reflected what we actually did. We wanted to show that we could be creative, skilled explorers, yet produce a detailed map of our travels. We invented Session-Based Test Management as a way to make those intangibles more tangible. It can be thought of as structured exploratory testing, which may seem like a contradiction-in-terms, but “structure” does not mean the testing is pre-scripted. It means we have a set of expectations for what kind of work will be done and how it will be reported.” – Jonathan Bach (co-inventor)
Session-based testing is aimed at putting some level of structure into exploratory testing by establishing a session of testing, which has a charter / goal and an uninterrupted period of time in which testing is done. Since testers do other things besides testing, the idea is that testers do 2-3 sessions a day. Sessions are short (45 minutes) or long (2 hours). In other words, a time-box.
Sessions have a standardized session report, which allows people to understand what was done in the session and serves as the basis of metrics. The metrics are simple: for example, defects found, things that puzzled the tester (issues), time spent on-charter vs. off-charter (off-charter testing is encouraged, as the idea behind exploratory testing is to let the knowledge of the tester drive direction), and so on.
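To make this concrete, here is a minimal sketch of what rolling session reports up into simple team-level metrics might look like. The field names and numbers are my own illustration, not the official SBTM session-sheet format:

```python
from dataclasses import dataclass

@dataclass
class SessionReport:
    charter: str              # the goal set for the session
    duration_minutes: int     # length of the time-box actually used
    on_charter_pct: int       # % of time spent on the stated charter
    defects: int              # bugs found during the session
    issues: int               # things that puzzled the tester

def summarize(sessions):
    """Aggregate individual session reports into simple team-level metrics."""
    total_time = sum(s.duration_minutes for s in sessions)
    on_charter = sum(s.duration_minutes * s.on_charter_pct / 100 for s in sessions)
    return {
        "sessions": len(sessions),
        "total_minutes": total_time,
        "on_charter_minutes": round(on_charter),
        "defects": sum(s.defects for s in sessions),
        "issues": sum(s.issues for s in sessions),
    }

# Hypothetical example data: one long and one short session.
reports = [
    SessionReport("Explore login error handling", 90, 70, 2, 1),
    SessionReport("Probe import of large CSV files", 45, 100, 0, 3),
]
print(summarize(reports))
```

The point is not the code but the idea: because every session produces the same small set of fields, the reports can be aggregated mechanically while the testing itself stays unscripted.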
After a session there is a debrief to understand what was done and to determine future testing opportunities. In other words, a review / retrospective.
Note that people differentiate between ad-hoc testing and exploratory testing (see http://en.wikipedia.org/wiki/Exploratory_testing):
“Exploratory testing has always been performed by skilled testers. In the early 1990s, ad-hoc was too often synonymous with sloppy and careless work. As a result, a group of test methodologists (now calling themselves the Context-Driven School) began using the term “exploratory” seeking to emphasize the dominant thought process involved in unscripted testing, and to begin to develop the practice into a teachable discipline. This new terminology was first published by Cem Kaner in his book Testing Computer Software and expanded upon in Lessons Learned in Software Testing. Exploratory testing can be as disciplined as any other intellectual activity.”
I like the idea of establishing standards for reporting while leaving it up to individual people and teams to do the work, producing data we can all use. I also like the retrospective-like approach this seems to encourage.
The discussion seems to be mostly about testers doing ad-hoc testing, and so implies a serial workflow (dev → test). For situations where this is the setup, I could imagine that some level of structure applied to ad-hoc testing would be useful. Testing teams, project teams, etc. would seem to be a good fit.
However, I also wonder whether something like this might be a useful way to structure “hardening” and “release” sprints, especially since we have not replaced a lot of manual testing work with automated tests. I am not sure how this relates to the reporting we do for standard testing.
In general, I think we should look at trying out these ideas. The setup is pretty minimal: a bit of discussion with the test team and a place to put things.
So in summary, this looks pretty interesting to me, especially with respect to hardening and release work where we don’t have (automated or manual) scripts to cover the testing we are doing. It also looks relatively easy to set up and try out, though I expect actual execution might be “interesting”. And I would think it would be of interest to project and IT folks, and when we do testing with external customers.