CAST2011 Wrap Up
|2011/08/14||Posted by CCAdmin under AST, CAST, conference, Learning, Software Testing, Tips|
Wow….just wow. Yup…that’s the best I can come up with to summarize my experience in one sentence. It was complete sensory overload. Intellectual overload (or “intellectually hung over, in the best possible way” as Jon Bach tweeted after the conference was over), meeting people whose blogs and books I’ve been reading (who were all super cool and down to earth), meeting new testers who live locally, tweeting great quotes and such over the whole conference (along with everybody else), etc…
Attending CAST 2011 was a GREAT experience. It put me on a big high, stimulated me intellectually, inspired me and motivated me too. I have a lot of great new ideas to blog about as well as lots of new ideas to look further into. I also have some great opportunities for more growth because of the conference (more on that later in this post). What I’ve captured here are some notes on the keynotes and various sessions, but they don’t truly capture the conference. That would be like saying you read a book when you only read the table of contents. The whole is much greater than the sum of the parts.
Disclaimer: I couldn’t possibly capture and remember every important point made in any keynote or track session. I’ll do my best to filter some of the major points and good quotes that I captured and put them in notes form. Quotes are as I heard them…no guarantees they are word for word correct.
“Decisions about quality are often political or emotional.” – Gerald Weinberg
Mention was made of a debate in 2007 between James Bach and Stuart Reid (of the ISTQB). I would love to find a link or write up of that debate, but I can’t find one. Anybody know of something?
“Some ambiguity will never be resolved. Sometimes bugs won’t be found. Testers must learn to accept reality.” – Gerald Weinberg
Testers Goal: Not to discover if the product works as expected, but to learn how the product works.
Not considering context (complexities, uncertainty, variability, etc..) is lazy and incompetent testing.
A focus on repeatability leads to weak testing – it’s better to focus on adaptability.
Don’t ask how to reduce tests, but ask how to run more tests faster.
Social science provides partial answers that may be useful. → Isn’t that testing?
Languages & cultures evolve from people’s context.
Measurement – art & science of making a quantitative measure
“Safety Language” – Know that nothing is definitive, it is all heuristic. The idea is not to claim “You’re wrong”, but more to declare, “From my point of view, your conclusion is incorrect.”
Note: Talking with others at the conference, I’m not sure they agree with the concept of the safety language. Some described it as a form of “political correctness”.
Session 1: Agile Testing (even in a non-Agile world) – Paul Holland
At Paul’s employer, they have a Quality Plan (QP) document (sounded much like what we call an STP at my employer – Software Test Plan).
Paul questioned why this document was created in the first place. It was very big and verbose, and surely no one read it. He said he was told it was created in case the customer asked for it. He asked if a customer had ever asked for it. He was told it was asked for one time, but not since. Paul guessed that because it was so big and horrible (with little useful information), it was never asked for again.
Paul stopped creating the document and said he’d create it if (and only if) a customer asked for it. He won that battle. They no longer create the QP document.
What are other things we assume in our process?
Paul then showed his document which is his Test Plan/Test Strategy/Test Cases. For a whole project this document was about 5 pages. The document described testing (with the audience being someone who knows system/project).
Then Paul showed how he planned work by breaking a project down into small enough tasks (test sessions typically) and using a kan-ban board to track progress with his team. Managers and others never had to “ask” how the project was going because they could all walk over and see the kan-ban board to know the progress.
The Kan-ban board was also a good way to prioritize. If a manager wanted to get a release out by a certain time, they could prioritize which tasks/areas they wanted to hold off for next release on the kan-ban board.
This was a fun session. I took few notes as it was generally more of an exercise-driven session.
Context – Anything that can change the model we are considering (for testing).
- If it’s not changing our view/model, then anything new is just information.
We were all at tables so that each table (of 6-8 people) was a team. Each team was given a small color-coded piece of paper that described a product (there were physical models that were supposed to represent the product – sort of a toy walking spider).
We were to quickly come up with a test plan (we only had about 5 minutes) – just a basic set of areas we thought were important to test.
Then after some teams spoke about their plans, more instructions were given out. The new instructions gave a new event or information that we learned about this project. We were then to change our plan based on this new information. Then a few more teams would present again.
This cycle went on for a few changes. It was very interesting to see how our entire testing perspective could change so quickly based on some small but important change (or new piece of information) to the project.
After the exercise Henrik asked us all to take what we learned through the context changes and come up with items that could change our context for a test plan. The three categories were changes on “our” end, changes of the “product” and changes on the “customer” side. After our lists were all put up, a taxonomy of things to monitor for possible context change was made, which will be published on a website (Update this when link is available).
The four variables dictating economics of Software Testing:
- Release Timing
Like most things, these variables aren’t mutually exclusive. One can affect another. For example, let’s say we want to create some test automation with a new tool (that no current employees are familiar with). We could hire a contractor that is an expert in that tool which would increase cost, but reduce (or maintain) release timing. Another option is that we could take a tester working half time to learn/set-up the tool. This could maintain cost, but push out the release timing.
Test automation is often not a problem finder, but more of a “change detector”. Its main use is detecting change: test scripts that were working suddenly stop working.
Tools tend to only look for certain things. They don’t “notice” other things that a human would. Tools are only good for checking for that one particular type of error and would miss virtually any other error that may appear.
Minefield problem: The idea is that you can think of software testing like navigating a minefield. The software is the field and software bugs are mines. Each test case is like a traversal of the minefield where you look for a “mine”.
The analogy is to explain automated software testing weaknesses. Once you’ve traversed a particular path through the minefield (e.g. run a test case), and the bugs are found and fixed, then the chances are much lower that a mine will show up along that same path (test case) again. It’s not impossible, but the odds are much lower. This is why automated tests (e.g. taking the same path again) tend only to “detect change” rather than “detect errors”.
Think about it. When you develop that automated test, you run the test manually the first time to make sure you are doing it right. By the time you finish creating the test, the test has already been run and any bug it found has likely been fixed (or submitted for a fix).
See Harry Robinson’s strategy of randomness.
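To make the minefield point concrete, here is a minimal sketch (my own, not from the conference) of random-walk testing against a trivial oracle model, in the spirit of the randomness idea: each seeded run walks a different path through the “minefield” instead of replaying the one path a fixed automated script would. The `tiny_stack_session` function and everything inside it are invented for illustration.

```python
import random

# Sketch of the "vary the path" idea: instead of replaying one fixed
# sequence of inputs (the same safe path through the minefield), draw a
# fresh random sequence each run so new paths get covered over time.

def tiny_stack_session(seed, steps=50):
    """Drive a simple stack with a random action sequence; return any failure."""
    rng = random.Random(seed)       # seeded, so any failure is reproducible
    stack, model = [], []           # implementation under test vs. oracle model
    for step in range(steps):
        if model and rng.random() < 0.5:
            got, want = stack.pop(), model.pop()
            if got != want:
                return f"seed={seed} step={step}: popped {got!r}, expected {want!r}"
        else:
            value = rng.randint(0, 9)
            stack.append(value)     # same action applied to both...
            model.append(value)     # ...so any divergence is a "mine"
    return None  # no mine found on this path

# Each seed is a different path through the "minefield":
failures = [f for f in (tiny_stack_session(s) for s in range(100)) if f]
print(failures)  # [] here, since a Python list really does behave like a stack
```

The point is not the toy stack; it is that a hundred seeded runs cover a hundred different paths, where a recorded script covers the same one forever.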
ATDD = Acceptance Test Driven Development
Session 4: Vendor Meets User – Hexawise Test Design Tool – Justin Hunter
Well, I intended to go to this session, but as I arrived near the room a bit early, I saw Michael Larsen (@mkltesthead) outside the room at a table. I wanted to touch base with Michael, personally, as he taught the Black Box Software Testing (BBST): Foundations class I’d taken online in April. I figured I’d talk with Michael for a few minutes before the session started.
He remembered me and we started talking a little bit. While we were talking Matt Heusser (@mheusser) sat down with us. Matt runs what he calls a testing dojo (kind of a mentor/student type of school) called the Miagi-Do School of Software Testing. I had email contact with Matt some months earlier asking about the Miagi-Do school. So, when Matt sat down, I greeted him and reminded him who I was (from our previous emails).
Michael is already in the Miagi-Do school and studying for his black belt. Matt jokingly asked him if he was ready to test for his black belt. Michael said “sure”. Matt then turned to me and asked if I’d like to take the test as well. I agreed as well.
Matt gave both Michael and me a copper coin from a stack he’d previously bought (just for these testing exercises). The test was simple: “How do I test this?” Michael and I started asking questions to determine what the “stakeholder” valued, to see what angle we needed to come from. To make a long story short, I think both Michael and I misunderstood the stakeholder’s desire and went down the wrong path for a while.
In the meantime, while we were going through the analysis, Adam Yuret (@AdamYuret) sat down with us. Adam was another person I’d wanted to meet. He’s very active on Twitter and we’ve tweeted with each other many times. He lives locally, but we’ve never managed to be at the same event at the same time. It seems Adam had played this game of Matt’s the previous day and done well with it.
Anyway, Michael and I got to the answer Matt wanted, but not quickly enough. Michael did not get his “black belt”, but Matt did say that he’d be willing to accept me into the Miagi-Do school. So now I’m excited to be able to work with the Miagi-Do school soon.
Our conversation went on right through the vendor session, but as it was just a vendor plug, I figure I got a lot more out of this discussion with Mike, Matt and Adam than I would have from a vendor session. It was definitely a good decision to skip this session and “network”.
Note: Michael did get his black belt after his performance with the rest of the Miagi-Do team in the testing competition that night.
James mentioned a good book for software testing history, although probably not relevant (as a modern software testing reference) now – “Program Test Methods” by Bill Hetzel.
James talked about an example of a testing group for a company he was consulting for and working with. He asked the group to stop collecting metrics. They said, “we can’t…it’s required by our process and some (specific) manager wants them.” James asked them to stop anyway.
It just so happened he was having dinner with the big manager of the division (as well as many other managers in that division) that evening. James mentioned that he told the team to stop collecting metrics, but they were reluctant because the team was told they needed to collect the metrics. The “big” manager looked around the table at the other managers and asked, “Where is this coming from? Is it one of you requesting these metrics?” Big manager was totally OK with not collecting metrics.
Historically speaking, James mentioned the rise of “Ceremonial Testing” – working from templates without knowing why, etc…
There is talk of a new ISO standard for testing. The belief is that “other groups” can’t get traction with their testing ideas (likely the same groups that are behind the testing certifications), so they are working to make their idea “law” by making it an ISO standard. There is work on a Test Maturity Model Integration (TMMi), as well. This is along the same lines as the ISO standard.
- Quantitative assessment – Giving a numeric scale value of how well tested product is, e.g. 0=No Testing, 1=Sanity Check….3=Well Tested (as opposed to something like giving # of test cases completed)
- Personal credibility and trust
- Ethical code – refusing to mislead people
- Story telling – tell story about testing rather than giving numbers and statistics
- Dialectic culture (we need to learn how to disagree with each other)
- Heuristic culture – Admit that everything you use to validate with may fail.
- The new rigor – Rigor in skill development, not process/behavior
- Skill is the fulcrum, not symbols
Great Trends in Software Testing
- Dice Game – way to learn about test design
- Face to Face (Paired Exploratory Testing & Group Exploratory Testing)
- Black Box Software Testing (BBST) – series of courses taught by the Association for Software Testing (AST).
- Skype coaching
- Hidden Picture Game – Creep & Leep
- Testing Exercise Innovation (Testing Challenges website)
- Testing Dojos (Miagi-Do School of Software Testing, run by Matt Heusser [@mheusser])
- Weekend Testers (started by Parimala Shankariah, Sharath Byregowda, Manoj Nair and Ajay Balamurugadas, 5 WT chapters now, including U.S.)
- Test Framing – tell the story of the test
Teaching Testing Teachers
- Test Coaching Methodology – James Bach (@jamesmarcusbach) & Anne-Marie Charrett (@charrett)
- Corporate Level Testing Professionalism (Progressive Insurance putting priority on software testing, see Greg McNelly’s session: “Developing a Professional Testing Culture” below)
- Skill Studies
- Game Film Analysis and Testing Narration (see Grigori Melnik (@gmelnik) session: “Game Films – A Technique For Reflective Testers” below)
Rapid Testing Management
- Session-based test management (similar to Kan-ban with manageable tasks as in Paul Holland’s “Agile Testing (even in a non-Agile world)” session in day 1)
- Thread-based test management (similar to session-based, but work area/thread based rather than task size based)
- Mind maps can be very useful here
- Testing Playbooks (Support Exploratory Testing)
Griffin Jones (@Griff0Jones) has done a lot of good work with software testing in a regulated environment (medical domain in his case). Note to self: Look up Griffin’s work. This is similar to my work environment (aerospace – also a very regulated environment).
Tacit And Explicit Knowledge – Harry Collins (HIGHLY recommended)
The Shape Of Actions: What Humans And Machines Can Do – Harry Collins & Martin Kusch
Seeing Like A State – James C. Scott
How To Read Wittgenstein – Ray Monk, Simon Critchley
When James tests, he often does a lot of “extra and unnecessary clicking, movement, etc…”. This is called galumphing.
Galumphing – to move along heavily and clumsily. (dictionary.com)
QA Clock – If all is perfect, this project will be done in X days, let’s say 5. When asked for status, you can say, “we are on day 3 of the 5 day QA clock”. Then asked, “what do you mean day 3…you’ve been working for 6 days?” Explain that the clock is in perfect conditions with no interruptions, access to software, etc… We are on day 3 of a 5 day QA clock under perfect conditions. The QA clock days to real days are not a 1-1 relationship – much like the blue progress bar when software is installing (LOL).
Went from testers having “one way to test” to an arsenal of tools and skills for testing.
Software developers learn new techniques and use new tools that make them more productive. If developers get much more support, their productivity may increase much faster than tester productivity. Testers need support from management as much as developers do to keep productivity in sync.
How is Progressive doing this?
- Local work groups
- Internal Community (Lunch & Learns, weekly meetings to discuss new things)
- Pairing testers (technical expert with domain expert)
- Move testing to earlier in the cycle.
Community of Practice for Test Engineering
- Created “T3” – kind of an internal testing conference
3 Things to make it all come together
- Local Groups (learn what’s happening on separate teams)
- Community of Testers (Groups, Lunch & Learn, conference, etc…)
- Organization Level (Need support of management)
- Testing Center of Excellence
The idea is the same as game films used in football. It’s even helpful to record your own testing sessions. Analyze the sessions and learn from your mistakes.
It’s all about reflection.
From Video: NFL coaches review game films 12-14 hours per day. → It’s serious business. It works!
Game Film Exercise:
We watched a video of a test session with James Bach and tried to identify the test techniques he used. The problem: certain words typed into Notepad would show up as boxes (unprintable characters) when the file was saved and reloaded.
- Repro with different data
- Close and start again to see if it’s a state issue
- Check for external factors (virus, hacked version of Notepad, etc..)
- Add same data manually into new file & reload that new file
- New oracle against the data (read with another program, “list”)
- Look at/analyze binary data (ASCII/Unicode)
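The “new oracle” and data-analysis ideas in the list above can be sketched as a simple round-trip check: save text, reload it through a second reader, and flag anything that did not survive. This is my own illustrative example, not what was shown in the session; the file name, encodings, and sample string are all made up.

```python
# Sketch: a round-trip "oracle" check in the spirit of the exercise above.
# Write a string to a file, read it back, and report anything that did not
# survive the trip or is unprintable. (Illustrative only.)

def round_trip_check(text, path, write_enc="utf-8", read_enc="utf-8"):
    """Save `text` to `path`, reload it, and return a list of problems."""
    with open(path, "w", encoding=write_enc) as f:
        f.write(text)
    with open(path, "r", encoding=read_enc, errors="replace") as f:
        reloaded = f.read()
    problems = []
    if reloaded != text:
        problems.append("reloaded text differs from original")
    for i, ch in enumerate(reloaded):
        if not ch.isprintable() and ch not in "\n\t":
            problems.append(f"non-printable char {ch!r} at index {i}")
    return problems

# A mismatched read encoding reproduces the "boxes" symptom: reading
# UTF-16 bytes as UTF-8 mangles the text, so the check reports problems.
issues = round_trip_check("some sample text", "sample.txt",
                          write_enc="utf-16", read_enc="utf-8")
print(issues)  # non-empty: the text did not survive the round trip
```

Reading the raw bytes in a hex viewer (the “analyze binary data” bullet) would be the next step once a check like this flags a mismatch.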
Method: Record video on computer as you are exploratory testing and narrate out loud as you are testing.
“I have a set of patterns I can easily spot, but I also try to be sensitive to the voice of a new pattern trying to be heard.” – James Bach
Clap Navigation – While recording video; every time you get a new idea clap your hands so it’s on the audio. When editing video later, using the audio graph you can easily find the ideas because of the clapping showing up on the audio graph.
Henrik and others found that James Bach’s CRUSSPIC STMPL model of software quality did not quite meet their needs. They wanted to improve on it. Starting with CRUSSPIC STMPL, they worked to add more specific items that worked for them in terms of quality considerations.
Any quality model is based on context; you may need to change the quality model you use based on your own context.
They did so with: Software Quality Characteristics 1.0 (1.05 was handed out, not published yet)
- Time Zone Differences
- Audio Learning
- Agile Challenges
- Workable Office Space
Time Zone Differences
Get the time zone right: it shows you care. Getting it wrong makes it appear that the other team is not important to you.
Think about when your email will arrive, etc… Plan it so that email is there first thing in the morning, not arriving at the end of their day.
Rotate inconvenient times/meetings.
Be aware of holidays in other countries. We (U.S.) assume everybody knows about Christmas, but do WE know the big holidays in India?
If traveling – recalculate time zone differences again. It’s easy to mess up. Try not (if possible) to schedule a call/meeting on a day you travel.
Create a Workable Office Space
If working from home on a regular basis, many items are mandatory.
- Land Line
- Fax Machine
Try putting all the things in your home office on wheels. You can reconfigure your entire office in only a few minutes. This is helpful if people come over or you need to change the setup for some reason.
Don’t just “work from anywhere”. Coffee shops can be noisy, and it’s rude when you’re on a call. Working from a library means you can’t talk and take phone calls, etc..
Think about the other team – meeting may be the only time they are with you all week. This is their only “impression” of you. Are you professional? Are there noises in the background like dogs barking, kids yelling?
Agile Development on the Virtual Team
Sometimes multiple people are in the same office, so they prefer to get a conference room and put you (calling in) on speaker phone. This is often BAD.
- You can hear all background noise (paper shuffling, finger tapping, etc…)
- Can’t see many things, like white boards, somebody pointing at something, etc…
It’s better if you all call in on phone, so that things are forcibly done online where all can see.
For documents – send before the meeting. Then send again 5 minutes before meeting so anybody who “can’t find it” can just check their email and get to it quickly.
Is the team slower because you aren’t there? Be in person sometimes (all the time if necessary).
Getting To Know People Through Audio Only
You can sometimes learn people’s moods through audio clues.
Listen for things like:
- Finger tapping
- Chair squeaking
These audio clues are good clues to help determine moods.
Don’t be faceless on a virtual team! People need to know what you look like.
Let people know when you’ll be out of the office ahead of time.
Trust On The Virtual Team
Notice the actions of people you trust and emulate them.
Be trustworthy so that your team trusts you.
Build a rapport – have “virtual coffee”.
Be consistent in all forms of communication (Facebook, Twitter, IM, Email, Phone, etc…)
Politics and the Virtual Team
The email game – Who do you CC, BCC, etc…
Not much different from office politics in general. Office politics still exists.