Strawman & Steelman
Feedback is a great way to determine whether something works according to spec, and the methods of collecting it are common, surveys for example. But the Strawman & Steelman techniques could help you tackle bias better.
The Bias is Real
Feedback is an important part of any organization today, and more so for those dealing with products of any type: from age-old physical products like FMCG and automotive, to the conventional desktop apps where it all started in IT, to the internet products of the 2000s dot-com boom, to today's ubiquitous eCommerce, cloud-based SaaS / PaaS, and mobile-first apps.
The only noticeable change over the years is in how organizations and teams have learnt, the hard way, what feedback is worth. They used to test an idea by collecting feedback only after the product's release, budgeting for the cost of pulling the plug on it. That experience evolved their thinking into accepting the pivotal role feedback can play when it is employed much earlier in the life cycle!
Today, design, development, and QA teams have evolved to a stage where working models, designs, high-fidelity wireframes, mocks, prototypes, MVPs, or small components of the end product are produced at the end of each silo and then subjected to an array of feedback cycles. Exposing these to various sections of the user groups in the target markets lets teams learn a ton of things and factor that understanding into improving and evolving the product offering itself.
But, if you notice, there’s a big problem in there!
B I A S!!
And, tons of it too!
What to measure & How?
Suppose you are testing what could be a partial solution to a problem you are addressing via your product or feature release. You take it to a section of your supposed target market, showcase it, and ask them to just play with it in an unmoderated user-testing session using a tool like Hotjar. You would then have heatmaps, detailed session recordings, and CTR (click-through rates) to rely on, which is perhaps a good source for building a basic understanding of what users clicked on and how easily they accomplished the task they wanted to.
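To make the behavioral side concrete, here is a minimal sketch of what a CTR metric reduces to (this is not Hotjar's actual data model; the event log shape and element names are assumptions for illustration):

```python
from collections import Counter

def click_through_rates(events):
    """Compute CTR per UI element from (element_id, event_type) pairs.

    event_type is either "impression" (the element was shown)
    or "click" (the user clicked it).
    """
    impressions = Counter(e for e, t in events if t == "impression")
    clicks = Counter(e for e, t in events if t == "click")
    # CTR = clicks / impressions, per element that was actually shown
    return {e: clicks[e] / impressions[e] for e in impressions}

events = [
    ("signup_button", "impression"), ("signup_button", "click"),
    ("signup_button", "impression"),
    ("pricing_link", "impression"), ("pricing_link", "impression"),
    ("pricing_link", "impression"), ("pricing_link", "click"),
]
rates = click_through_rates(events)
# signup_button: 0.5, pricing_link: ~0.33
```

Note what this number can and cannot tell you: it says *what* users clicked, but nothing about *why*, which is exactly the behavioral-vs-attitudinal gap discussed next.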
But, can you plot a foolproof behavioral understanding with it?
Perhaps Not!
And that's the very reason Hotjar has built other features like surveys and feedback widgets: so that you can gather the attitudinal information needed to complete the circle, so to speak. Otherwise, the conventional, age-old, time-tested method of handpicking a few users to interview would have been the only route, and everyone's go-to.
While the other methods seem fine in terms of relevance, yield, and end results serving the purpose, there can still be a big problem with surveys, one that has everything to do with the way the questions are framed.
And weighing the effort, time & money against the impact, it pains, it tears, it hurts!
“I’d rather hire the people I need, deploy them all physically to the geographical locations that matter, get them to talk to the users 1:1 for about 3 minutes to gather responses on my survey than float these and check back in day after day just to see 1-5 people responding when I need to hit a target of 5,000 responses”.
- Anon
Revisiting the Bias
If you have floated a survey and wanted responses in quickly, whether at a startup or a big-ticket MNC, you will relate to problems like a non-responsive sample space. Apart from such quantitative problems, there are deeper qualitative problems that can hurt more, contributing largely to BIAS.
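To put a number on the quantitative side, here is a minimal sketch of the standard sample-size formula, n = z²·p·(1−p) / e², combined with how a low response rate inflates the number of invitations you must send. The 2% response-rate figure is an assumption for illustration, not a benchmark:

```python
import math

def required_sample_size(z=1.96, p=0.5, margin_of_error=0.05):
    """Minimum completed responses for a given confidence level and margin of error.

    Uses n = z^2 * p * (1 - p) / e^2 for an effectively infinite population;
    p = 0.5 is the most conservative choice (maximizes the required n).
    """
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

def invitations_needed(sample_size, response_rate):
    """Invitations to send when only a fraction of recipients actually respond."""
    return math.ceil(sample_size / response_rate)

n = required_sample_size()        # 385 completed responses at 95% / ±5%
print(invitations_needed(n, 0.02))  # at a 2% response rate: 19250 invitations
```

The gap between 385 useful responses and ~19,000 invitations is exactly why a non-responsive sample space hurts so much, before you even get to the qualitative bias problems below.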
There are various types and forms of BIAS one has to deal with, such as acquiescence bias, social-desirability bias, leading-question bias, and non-response bias.
💡 PRO TIP: I have seen some teams blame the person framing the questions, quoting a lack of interviewing skills and claiming the questions aren't to the point enough to be deemed relevant. While the former may well be right, the latter doesn't fit. Framing a survey questionnaire has hardly anything to do with interviewing skills. If anything, it correlates with psychology and psychological awareness.
Actionable Insight
Even before one gets to the questionnaire on the survey, one ought to sort out what insight one is looking to gain from it and, more importantly, whether that insight is actionable given the circumstances.
To determine whether the insight obtained is actionable or not, one can simply connect it back to the interim goal / purpose and see whether it fits.
Case Study (a generic one):
Consider an org. building an agriculture / farming product targeted at maintaining the health of crops.
Suppose they are planning an advancement over the existing, conventional drip-irrigation system, one that provides much better control over feeding things like manure and minerals that matter to the health of the plants in a garden or field.
Before taking the idea ahead they want to measure market interest over a few parameters like:
- the problem itself &
- the fitment of the solution
Firstly, your sample space has to largely comprise farmers, farming orgs., individuals or orgs. into farm / garden maintenance, landowners, and individuals who take an interest in farming at any scale, even a basic backyard garden.
Secondly, the insight that you ought to look for is:
1. The Problem - is automatic crop maintenance needed & worthy of consideration as a viable case / problem?
2. Possible Solution - is introducing manure / other growth-enhancing minerals into the soil from time to time necessary to maintain consistent crop health?
You could choose to ask questions along these lines, in order (illustrative examples):
- How do you currently maintain the health of the crops in your farm / garden?
- How much time and effort does that maintenance take in a typical week?
- Would you consider using a system that feeds manure and minerals to your crops automatically?
- What would hold you back from adopting such a system?
Do you visualize how these questions & their supposed responses could link back to the problem (1) & the supposed solution (2) above?
That right there is how you could reach actionable insight.
But, could all that insight be obtained over just one question?
Hmmm! 🤔 Let’s see…
Strawman & Steelman
The criticality of posting surveys to the general public and collecting responses to them cannot be overstated, because the insight they provide is imperative for proceeding to the next stage of the product life cycle.
And oftentimes, when it comes to extracting such survey responses, it becomes mandatory to frame questions precisely, concisely & succinctly, and the Strawman & Steelman techniques have been known to help greatly in this regard and have been widely used as well.
Strawmanning is a form of argument in which the original argument is replaced with a distorted, easily refuted version (a fallacy), which is then knocked down while maintaining the impression of having refuted the original altogether.
On the contrary, steelmanning is about presenting the strongest possible version of an argument, even if that wasn't the version originally intended. This is usually done by removing flawed assumptions that could be easily refuted and developing the argument's strongest points, even when those points undermine one's own position.
And, both these are known to be great techniques to use over framing the survey questionnaire.
Let’s try and understand their use applying it to the “farm / crop health” case study from the previous section.
To requote:
we have to get the audience from our sample space to respond to whether or not a product that helps them maintain the health of the crops in their farms / gardens, regardless of their active involvement, is a good idea
Straw manning technique
This starts with framing and presenting a fallacy that could have no truth in it whatsoever. For example (an illustrative framing): "Crops maintain their own health just fine without regular feeding, so a system that injects manure and minerals on a schedule is pointless. Do you agree or disagree?" How strongly respondents push back tells you whether they genuinely see the problem.
Steel manning technique
This starts with putting forward the strongest point of an argument, one that may largely be believed to be true, subject to a few preconditions. For example (an illustrative framing): "Assuming reliable access to water and power, an automated system that releases manure and minerals on schedule would keep crops healthier than manual upkeep. Do you agree or disagree?"
Conclusion
The world has been using surveys all along, and they are only getting more commonplace today. But the one noticeable problem with those surveys is the way some of the questions can prompt, spoon-feed, or influence the responses, paving the way for tons of bias.
The Strawman & Steelman techniques, employed on surveys, can cut out most of the bias the questions induce, yielding unbiased and truly actionable insight.
Floating surveys, one can't help but encounter all those response biases.
And once one comes to terms with that over time, the lack of a befitting solution is a real worry.
How does one tackle that?
Get on to this thread here which is a follow-up to my article to find out the answer...
https://typefully.com/BgpInv/l5542NE
#productmanagement #survey #qualitative #response #bias