Friday, November 1, 2013

Deadly Defects


Toyota settles acceleration lawsuit after $3-million verdict

Toyota Motor Corp.'s first loss in a sudden acceleration case, in an Oklahoma courtroom this week, could embolden attorneys nationwide who are looking to bring hundreds of similar cases.
Worse for the Japanese automaker, the verdict centered on the company's electronics, which have been a focus for plaintiffs seeking to prove safety defects in the company's cars.

Toyota on Friday confirmed that it had reached a confidential settlement in the lawsuit, which involved the fatal 2007 crash of a Camry. The settlement came hours after a jury assessed $3 million in compensatory damages but before the panel could levy a punitive award.

The verdict could provide a road map for attorneys seeking to hold the automaker liable for injuries and deaths.

Things can get serious in the world of Test Engineering.  Again, this points to the need for Test Engineers who can do impact analysis and risk assessment.

Monday, September 30, 2013

Testing Infinite Scenarios

I have been involved in integrating a mapping service into our solution.  I am thankful we were not responsible for testing the accuracy of the third-party GIS (Geographic Information System) service itself, only its integration with our solution.

However, in testing we did notice a few problems with some of the information we were receiving.  Some of our client properties were not displayed correctly.  They were *close*, but not exact.
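Those "close, but not exact" placements suggest one practical approach for testing the integration layer: assert on distance within a tolerance rather than on exact coordinate equality. Here is a minimal sketch in Python; the tolerance value and the sample coordinates are made up for illustration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def assert_close_enough(expected, actual, tolerance_m=25.0):
    """Fail only when the plotted point drifts beyond our tolerance."""
    distance = haversine_m(*expected, *actual)
    assert distance <= tolerance_m, (
        f"Property plotted {distance:.1f} m from expected location "
        f"(tolerance {tolerance_m} m)"
    )

# A property as geocoded by the service vs. its known surveyed location
assert_close_enough((40.7128, -74.0060), (40.7129, -74.0061))
```

The right tolerance depends on what the map is used for; a delivery route can forgive a few meters of drift where a property-boundary display cannot.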

When I read this article yesterday, I was reminded of an anxiety I had when I tried to place myself in the shoes of those implementing and testing the GIS itself.

Apple Map flaw results in drivers crossing airport runway



As test engineers know, most test configuration matrices are massive (growing combinatorially with each new configuration), but GIS testing poses a truly staggering challenge: nearly infinite scenarios.

In these times, I use my handy-dandy guide to prioritizing test cases:

  1. Do an equivalence analysis (what configurations/scenarios are the same for testing purposes)
  2. Do a risk/impact analysis (what happens if something goes wrong?  do people die?  is revenue impacted?)
  3. Do a change set analysis (what has recently changed?)
  4. Prioritize your configurations (what are the most common configurations?)
  5. Prioritize your functionality (what is the most commonly used functionality?  usage statistics are very handy here)
  6. Identify the complexity and time-to-test of the different configurations (prioritize complex tests lower, all other factors being equal)
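Applied mechanically, steps 2 through 6 amount to a weighted scoring of candidate test cases (step 1, the equivalence analysis, is assumed to have already collapsed redundant scenarios before the suite is built). A rough sketch, where the field names, weights, and example cases are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    risk: int           # 1-5: impact if this area fails (step 2)
    recent_change: int  # 1-5: how much related code changed (step 3)
    config_usage: int   # 1-5: how common the configuration is (step 4)
    feature_usage: int  # 1-5: how commonly used the functionality is (step 5)
    cost: int           # 1-5: complexity / time-to-test (step 6)

def priority(tc: TestCase) -> float:
    """Higher score = run sooner.  Weights are illustrative, not canonical."""
    value = 3 * tc.risk + 2 * tc.recent_change + tc.config_usage + tc.feature_usage
    return value / tc.cost  # step 6: costly tests sink, all other factors being equal

suite = [
    TestCase("routing on common map layer", risk=5, recent_change=4,
             config_usage=5, feature_usage=5, cost=2),
    TestCase("rare projection edge case", risk=2, recent_change=1,
             config_usage=1, feature_usage=1, cost=4),
]
for tc in sorted(suite, key=priority, reverse=True):
    print(f"{priority(tc):5.1f}  {tc.name}")
```

The point is not the particular numbers but that the ordering becomes explicit and arguable, instead of living in one tester's head.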
However, I feel like something is missing from my list above with regard to this particular GIS problem.

How would you approach GIS testing?

Friday, September 27, 2013

Aggregating data in useful ways

This really interesting image shows an aggregation of many famous starships in fiction, shown to scale for comparison.


Every Sci-Fi Starship Ever (not really, but close enough)

While fascinating, it also highlights a personal weakness of mine: taking the time to manually aggregate data.  If a report is not one click away, then I avoid it.  However, I really shouldn't, as the data needed to assess quality is usually hard to get.  Many of the tools we use are created with project management or development in mind, NOT QA.  (See Rally, JIRA)

I have to block time off in my calendar to go mining for data.  It takes discipline, but it is worth it.

Now, taking that data and presenting it in a meaningful way... that's a whole other bucket of beans.

Kudos to the person who put this Starship comparison chart together.  It took a lot of manual work, I'm sure, but it can be appreciated by many, and tells a story.

Thursday, September 19, 2013

Get your jargon straight!

An oldie but a goodie.  This is sort of the opposite of the Urban Dictionary.  This guide comes in handy when navigating the Sea of Wordplay and Meaning in the land of Corporate Culture.

Jargon Watch


Friday, September 13, 2013

A fun fusion of gaming and career

One theme that keeps popping up in my study of user experience is that of achievement reward systems.  You look at any task you would like a user to perform, and, no matter how mundane, you reward their achievement in some way.  This can even be something as simple as a meaningless list item in a long list of meaningless list items.

Companies have flirted with using this approach in real world activities, such as teeth brushing and bike riding.

In my mind, the three most important characteristics of a successful achievement reward system are:

  1. Show users what is possible to achieve (list the achievements possible, and a clear explanation of how to achieve them)
  2. Show users where they've been, what they've achieved, in a way that is satisfying.
  3. Make sure the achievements can be obtained at a reasonably consistent and fast pace (you don't want people getting bored while trying to achieve one thing... keep the rewards flowing)
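As a minimal sketch of how those three characteristics might translate into code, consider a toy tracker; the class and achievement names are invented for illustration:

```python
class AchievementTracker:
    """Toy sketch of the three characteristics above (all names illustrative)."""

    def __init__(self, achievements):
        # 1. Show users what is possible: name -> how to earn it
        self.catalog = dict(achievements)
        self.earned = []

    def possible(self):
        """Remaining achievements, with clear instructions for earning them."""
        return {n: how for n, how in self.catalog.items() if n not in self.earned}

    def award(self, name):
        """2. Record progress in a way the user can see and savor."""
        if name in self.catalog and name not in self.earned:
            self.earned.append(name)
            return f"Achievement unlocked: {name}!"
        return None

    def progress(self):
        # 3. Keep the rewards flowing: always show how close the next goal is
        return f"{len(self.earned)}/{len(self.catalog)} achievements earned"

tracker = AchievementTracker({
    "First Commit": "Push your first change",
    "Bug Slayer": "Fix ten reported defects",
})
print(tracker.award("First Commit"))
print(tracker.progress())
```

Characteristic 3 is really a content-design problem (spacing the goals), but even this skeleton makes the catalog, the record, and the pacing visible as separate concerns.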
With this in mind, I got a kick out of the following link.  It is a list of achievements toward becoming an expert web developer, presented like a role-playing game skill tree.  You develop yourself along the tree's branches, with the end goal of becoming the web developer you desire to be.  This page even lets you choose your character portrait and name.  Great stuff.  I hope you enjoy it.

Thursday, September 5, 2013

Warren Spector's Commandments of Game Design

A wonderful list that is well worth examining.  Warren Spector is highly respected in his field, and has been involved in development of many successful games such as the Wing Commander series, the Ultima series, and the Deus Ex series.

Warren Spector's Commandments of Game Design

With regard to quality, we can take some good lessons away from this.  When we analyze a design for any type of user experience, we look for things like rewarding the user, making the goal visible, giving multiple ways to achieve objectives, and avoiding unnecessary steps.

But, even more than these: when game design meets user experience, it can be an amazing thing.  You tap into people's motivation to help them achieve their goals in a pleasant way.  Every solution should strive for this result.  It reminds me of an article I read recently where developers are bringing video-game-like achievements into real life.

If you can tap into people's motivations, not just their behaviors, you have created a successful user experience. 

You've gained the 'Floss your teeth' achievement!

Friday, August 23, 2013

US Dialects: fascinating maps

Once again, I love visual representations of data.  This is an enthralling map of US dialects.

American Dialects Mapped

I admit, I still say 'coke' for a generic soda.

Thursday, August 22, 2013

A view from Home (alternative working arrangements)

I have managed remote team members before, both at home and in other locations.  I find more pros than cons with this arrangement.  Here is an interesting article written from the 'inside'.  Since I have never actually worked from home long-term, I find value in reading these thoughts as one who manages employees who do.

Experiences and realities of a homesourced IT worker

What have your experiences been?  (both managing and working with alternative working arrangements)

Agile: DONE-DONE (done... done... DONE)

An insightful article regarding some of the many temptations we face in Agile development.

Getting to "Done" in Agile Development

Wednesday, August 21, 2013

More on Perception

An interesting quote I ran across today:

"If men define situations as real, they are real in their consequences."

This is known as The Thomas Theorem, coined by two sociologists in 1928.  It highlights the disconnect between [perception + expected rational behavior] and [reality + actual behavior].

Again, we should expect this and prepare for this as test engineers.

See my earlier post on this subject:  On "managing" perception?

Monday, August 19, 2013

Aspects of Quality


A couple of definitions of quality:
  • "The degree of excellence of something."
  • "The standard of something as measured against other things of a similar kind."

Take a tool, any tool, walk out to a car, and measure its degree of excellence.  In what units of measurement will you report your findings?  Inches?  Cubits?  Shakes?  Nibbles?

Testers, in a very real way, are explorers.  We confront the unknown, and answer questions about it.  We then report our findings in order to bring a greater understanding of what we have experienced.  Over the years, I have put together a list of questions we explorers can ask in order to ascertain the level of quality in our products.

Important questions to ask in order to measure quality:
  1. Does it solve a problem?
  2. Does it provide a desired service?
  3. Does it invite repeated use?
  4. Does it work?
  5. Is it easy to use?
  6. Does it perform its functions well?
  7. Is it stable under stress?
  8. Can it recover from disaster?
  9. Does it fit together as a cohesive whole?
  10. Is it easy to start?
  11. Is it easy to disengage?
  12. Is it secure?
  13. Does it compare well against similar products?
  14. Is it easily maintained?
  15. Is it easily moved or ported?
  16. Can it scale to broader or more limited use?
  17. Does it invoke in the user a positive emotional response?
Of course, this list may not be applicable to all products, nor is it exhaustive, but I have found it quite useful in assessing quality.
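For teams who want to turn the checklist into something trackable over time, the questions can be scored and rolled up by theme. A rough sketch, where the grouping and the 0-5 scale are my own illustrative choices, not part of the original list:

```python
# A subset of the questions above, grouped loosely by theme (grouping is mine)
QUALITY_QUESTIONS = {
    "value": ["Does it solve a problem?", "Does it provide a desired service?",
              "Does it invite repeated use?"],
    "function": ["Does it work?", "Is it easy to use?",
                 "Does it perform its functions well?"],
    "robustness": ["Is it stable under stress?", "Can it recover from disaster?",
                   "Is it secure?"],
}

def quality_report(answers):
    """answers: question -> score 0-5.  Returns per-theme averages
    (None when no question in a theme was answered)."""
    report = {}
    for theme, questions in QUALITY_QUESTIONS.items():
        scores = [answers[q] for q in questions if q in answers]
        report[theme] = sum(scores) / len(scores) if scores else None
    return report

answers = {
    "Does it solve a problem?": 5,
    "Does it provide a desired service?": 4,
    "Does it work?": 3,
}
report = quality_report(answers)
print(report)
```

A roll-up like this is no substitute for the exploration itself, but it gives a release-over-release trend line that stakeholders can actually read.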

Are there any questions you would add?

Friday, August 16, 2013

History at your fingertips!

For those of us who love the study of history and graphical representations of data!



1931 Histomap

I wonder what this would extrapolate to now, and then again in 100 years?

Thursday, August 15, 2013

On "managing" perception

In the rare instances I am able to corner a customer, executive, or business partner, I like to ask them what the current buzz is.  Are people generally asking for more features, or better quality?  This admittedly un-scientific and ambiguous question has value for me, as a leader in test engineering.  The answer to this question matters for a few reasons:

First, to get just a bit of an idea of what our pipeline is looking like... a push for more features or a focus on improving quality.

Second, to show them we care about what is being thought and said.

Third, and most importantly, to get an idea of how the perception of our products is trending.

As a test engineer, perception is just as important as reality.  If your products are perceived as being of poor quality, then people will more readily notice and magnify the smaller defects that exist.  In the same way, when you, as a tester, are seen as being incapable, disinterested, or ineffective, people will focus on all of your flaws and mistakes.  Furthermore, they will distrust all work you do, effectively rendering you useless to the organization.

In "managing perception", I recommend the following:
  1. Communicate the context
  2. Be honest
  3. Be open
  4. Communicate frequently
All too often, people manage it this way:
  1. Report numbers without context
  2. Ignore it
  3. Obfuscate
  4. Delay
To make it simple: don't worry about managing perception; just focus on proactively reporting reality in proper context.

Leadership note:  Remember to manage the perceptions of team (capability), project (status), and product (health).

Does anyone have horror stories relating to perception in testing?

Tuesday, August 13, 2013

Becoming the HumanTester

Welcome to my humble space.  My name is Brian, and I am a test engineering manager.  I have worked in testing professionally and as a hobby for almost 20 years now.  After having worked at a few different companies with wildly different technologies and processes, I have started to see patterns emerge.  I thought I would share these patterns and ideas so that people may take note of them, criticize them, and refine them.

I like passing along knowledge (not just my own) and decided to do this for more than just my work teams.  I hope you feel free to participate, and I hope we all become better testers as a result of this blog.

Onward!