Part 3 of a three-part series on community surveys.
We just spent an inordinate amount of time across the previous two blogs touting the huge success of a 2-question survey, the Net Promoter Score (NPS).
For two weeks we’ve covered how and why the Net Promoter Score is one of the best systems for starting to measure your community audiences… and here we are now, telling you that the NPS survey, although useful, is fundamentally flawed.
What gives?
Here’s the truth:
Social scientists and businesses use surveys
very differently, and businesses usually do it wrong.
The reason most CEOs and data analysts just glance at the graphs, rip percentages out of context, and abandon the results in their SurveyMonkey account until next year is that there are some fundamental problems with how businesses view the almighty survey.
In this blog, we’re going to go over three pitfalls in survey production, delivery, and analysis that have caused the average marketer and CEO to distrust their community’s responses.
To prove we’re no negative Nancy, and to stand by our promotion of the Net Promoter Score in the past two blogs, we’re also going to provide grounded, simple solutions to avoid or fix these pitfalls, improve your survey implementations, and convince your higher-ups to trust your respondents’ feedback.
Here we go!
Problem 1: There’s too much detailed data
to sort through & not enough time
When we ran our own surveys, we would spend tons of time figuring out what we needed to ask our members, how best to ask it, and what kind of answer would really matter.
We then spent several weeks working tirelessly to market the thing. We’d put out preliminary feelers, publish the survey, and bam, just like that, we’d get 1,500 strongly worded responses we had no real idea what to do with.
One of the ways we made processing easier on ourselves was to limit the amount of “qualitative data” we’d collect, because each question equaled an opinion, and that meant 1,500 people times 10 comment boxes.
It was simply too much, so we structured the survey to make it easier for us to handle.
I’ve found that the same general approach happens in any company of any scale. Qualitative data is just a fire hose no one wants to turn on, and if you do, no one wants to go through it. Even if it holds information that could save you from a dumpster-fire PR disaster or generally improve your service, it’s still not likely that you’ll go through it.
The issue here is that most people pick questions intended to reduce the workload later on and, in doing so, limit the responses they receive. Alternatively, many go the other way and produce a survey with so much data that making sense of it becomes insurmountable.
Either way, you’re incentivized to ignore the qualitative data, which is the bulk of the survey’s value.
To solve this problem instead of working around it, learn how best to process the information you’re receiving. We recommend learning how to “abstract,” or tag, your data for quick and easy tallying later.
We’ll have a full blog on how to tag your data and “abstract” it into easy-to-digest themes in a few weeks, so be sure to check back here for that, but here’s the gist:
To abstract data easily, port the data into a Word doc or spreadsheet and place comments on the feedback you find interesting. In that comment, use a simple 1 or 2-word phrase that encapsulates the theme of the statement. Note down what you mean by that term and then use that term each time you see similar feedback. Eventually, you’ll see that theme pop out of the text frequently.
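If you’d rather keep that tally in code than in a spreadsheet, here’s a minimal sketch of the same idea in Python. The theme names, definitions, and comments are all hypothetical; the point is the pattern of keeping a small codebook and counting how often each theme appears.

```python
from collections import Counter

# Hypothetical codebook: theme -> what we mean by that term.
# Build it as you read; add a theme the first time you spot it.
codebook = {
    "onboarding": "confusion or friction during a member's first weeks",
    "recognition": "wants contributions acknowledged publicly",
}

# Tag each piece of feedback with zero or more themes as you read it.
tagged_feedback = [
    ("I had no idea where to post when I joined.", ["onboarding"]),
    ("Nobody ever thanked me for the guide I wrote.", ["recognition"]),
    ("The welcome docs were confusing and I felt invisible.",
     ["onboarding", "recognition"]),
]

# Tally the tags so recurring themes "pop out" of the text.
theme_counts = Counter(tag for _, tags in tagged_feedback for tag in tags)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} mention(s) -- {codebook[theme]}")
```

The comments-in-a-Word-doc version works exactly the same way; the codebook just lives in your margin notes instead of a dictionary.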
Problem 2: Only a few get to
(or even want to) speak
“The only people who will take your survey
are people who take surveys.”
One of the most common issues with surveys is that they require a person to take time out of their day to do something they weren’t planning to do, and put in effort they weren’t initially anticipating.
Three different kinds of “fallacies” rear their ugly heads here, and they build on each other to create a nasty issue with your resulting qualitative data set.
If you can’t sidestep these fallacies, your survey is bunk. They’re the main reasons data nerds cite when they pooh-pooh the idea of collecting opinions via survey.
I’ll explain each before we get into ways to look out for them.
Fallacy 1: Vocal Minorities or Polarized Involvement
In general, only about 2% of any community will be labeled “power-users.” These are the people who are ALWAYS talking and always giving opinions. Usually, they’re also the ones you interact with and trust the most.
On the flip side, detractors tend to be hyper-vocal about their opinions. As the saying goes, “Negative PR is about 10 times stronger than good PR.”
And then there’s the middle.
Fence-sitters tend to be less vocal and less invested, so getting their opinion is difficult. That means you’ll get plenty of biased answers from your polarized users and far fewer from the folks in the middle.
Fallacy 2: Survey Fatigue, or More Broadly, the Diminishing Value of Work
You’ve likely run into the term Survey Fatigue before, but you probably haven’t spent much time digging into the theory behind it.
The diminishing value of work refers to how much a respondent feels the survey is worth when they start it, and how that perceived value erodes as they move through it. A certain amount of commitment is required for a person to perform any action, and this holds every time a member participates in your community.
Each survey question adds work. As effort goes into the survey, the survey itself can start to feel “less worth it.” Eventually, the value of the survey no longer justifies the effort they’ve put in, and they click off.
Many people reduce this to a survey’s length and how long the questions are, but in reality, short or long doesn’t matter. It’s about imparting enough value before, during, and after they fill out the survey that they still feel their action is worthwhile by the last question.
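As a rough illustration of that framing (our numbers, not anything from the survey literature), you can model each respondent as starting with a budget of perceived value and spending some of it on every question:

```python
# Illustrative sketch of the "diminishing value of work" idea above.
# All numbers are made up; only the mechanic matters.

def completes_survey(perceived_value: float, cost_per_question: float,
                     num_questions: int) -> bool:
    """A respondent keeps answering until the accumulated effort
    outweighs the value they felt the survey was worth."""
    remaining_value = perceived_value
    for _ in range(num_questions):
        remaining_value -= cost_per_question  # each question costs effort
        if remaining_value <= 0:
            return False  # they click off before the last question
    return True

# A 2-question micro-survey survives a modest value budget...
print(completes_survey(perceived_value=3.0, cost_per_question=1.0,
                       num_questions=2))   # True
# ...but a 10-question survey with the same budget loses the respondent,
# unless you raise the perceived value before, during, and after.
print(completes_survey(perceived_value=3.0, cost_per_question=1.0,
                       num_questions=10))  # False
```

In this toy model, length only matters relative to the value you impart, which is exactly the point above.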
Fallacy 3: The Spiral of Silence
This last fallacy is less known, but you can think of it as the ultimate consequence of letting fallacies 1 and 2 get too far out of hand.
The vocal bias skews your data to favor the involved. Fence-sitters see less value in responding, but their views still matter. If you make decisions based only on the more vocal, then over time the fence-sitters lose whatever sense of influence they had and begin to think their opinion, had they provided it, wouldn’t have made a difference.
So they start to believe their opinion isn’t valued, or that you won’t listen to them, and they intentionally withhold it. As a result, their ideas aren’t heard, and their voices DO become less valuable.
If this sounds a lot like a certain country’s political situation – you’re right. It’s the exact same mechanism, and it happens at every level of a community, from small groups to national policy.
Now let’s talk about solutions.
There are a lot of tactics and fail-safes you can implement to get around these fallacies. Many organizations attach extrinsic rewards like raffles and badges to their surveys, hoping a reward with a more stable “value” will make the exchange feel fair.
It should be no surprise that, as community managers at SC.O, we don’t recommend that approach. Extrinsic rewards are a great way to devalue the intrinsic reward of influence through participation: the work feels less valuable when the payoff is something detached from your brand.
On top of this, respondents who simply want the reward at the end of the survey may not give answers that match the quality of those driven by intrinsic motivation.
Instead, we recommend making the work smaller and spreading it out over time by adding 1-2 question surveys, like the NPS, to your regular community management or social media campaigns. Then hold real public conversations that credit those thinkers, and use the results to take visible, transparent action.
The questions will encourage “passive engagement” rather than require active commitment, so the effort is lower and the conversation is seen as valuable. It will also pull some of your “lurkers” and “fence-sitters” out of their holes if you steer the conversation toward them. Consider priming your audience before the survey with a #LoveOurLurkers campaign!
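If you do scatter those 1-2 question NPS-style surveys through your campaigns, tallying them is trivial. Here’s a minimal sketch using the standard NPS formula (percent promoters minus percent detractors on the 0-10 scale); the response scores are hypothetical:

```python
def net_promoter_score(scores: list[int]) -> float:
    """Standard NPS: % promoters (9-10) minus % detractors (0-6).
    Passives (7-8) count in the total but in neither bucket."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical scores gathered from a week of micro-surveys.
scores = [10, 9, 8, 7, 6, 10, 9, 3, 8, 9]
print(f"NPS: {net_promoter_score(scores):+.0f}")  # NPS: +30
```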
You should also make this easier on yourself!
Collect, tag, and measure your community’s passive comments across all your social channels in one place by implementing our Social Currency Metrics System for free!
Problem 3: Most surveys are not
“science enough” to join the science club
“If your survey uses the scientific method over
the social-scientific process, you’re not collecting
your data correctly, at all.”
The scientific method asks: what is the cause-effect relationship between the thing you’re studying and your hypothesis? The idea is to control as many variables as possible and test the relationships between one to three unknown variables. This allows you to solidify correlations into findings, and then theories.
This works great in lab environments, on problems with clearly defined answers, when different approaches have clear upsides and downsides, or with scientific principles that are the same no matter where you go.
But that’s simply not reality when you start adding people, culture, and social structures throughout the big wide world to the mix.
People are too diverse and do things for too many different reasons. Often their actions can only be defined correlationally.
And that is why the great social scientists of the 1900s built on the scientific method with the lesser-known but ridiculously impactful “social-scientific process.”
The social-scientific process creates objective data out of subjective data by taking the cause-effect relationship of the scientific method further: it tests the environmental factors at the same time as the variables, using the rule of generalization.
For example, under the traditional scientific method, your survey goes around the loop once:
- You observed trends in your community
- You wrote a survey about it
- You wrote your questions specifically to suss out those variables
- You got your results, analyzed them, and made a report
- You disseminated them to the powers that be
This same process makes a really solid go at steps 1-3 of the social-scientific process, but it stops short of the rule of generalization.
It doesn’t investigate the limitations of those hypotheses, it doesn’t root out fallacious conclusions, it doesn’t generalize to wider audiences, and it doesn’t test the limits of what you’ve learned so you know where the correlation ends.
If you run the survey using the social-scientific process, it goes around the scientific method a full three times before you get “results.”
This social-scientific process is the reason we love the Net Promoter Score. If you implement the NPS the way we taught you in our prior blogs, it will cover a full go-around of the social-scientific process as it repeats over and over again.
To Conclude
We don’t want to discourage you from implementing these awesome community management tools. We are not against surveys.
What we are saying is that you need to implement these community analytics tools with these issues in mind. Each of these problems is a reason people have grown to mistrust qualitative data over the past several decades.
We aim to fix that by making qualitative data easier to collect and analyze, more objective, and harder to misread, by taking your use of qualitative data further with our Social Currency Metrics System.
Check out the system and how to build your own for free here, or read the previous two parts of this blog!