Last month Pureprofile partnered with Michael Wells, Graduate Consultant at TRA, and Dr Catherine Frethey-Bentham, Senior Lecturer at the University of Auckland Business School, to deliver a presentation at the Research Association of New Zealand conference titled ‘Money talks, but will I?’. The presentation was largely based on Michael’s Master’s dissertation, which identified the contributing factors that lead to better data quality in online panels.
Technological advances over the past few decades have driven a significant shift in how research data is collected: from door-to-door interviewing, to telephone, to the rise of online research in academic, social and commercial settings. While online research offers the opportunity for global reach and efficiency, it also presents a risk: it is harder to verify that online survey respondents are who they say they are.
For example, a study released by Sharpe Wessling et al. earlier this year found that up to 83% of respondents who completed a survey on MTurk (Amazon’s crowdsourcing internet marketplace) purposely misrepresented themselves in order to qualify for surveys. It found that:
- 17% of those in a study on smokers aged 50+ also participated in a study about active athletes aged under 35 years
- 900 participants claiming to have completed at least 10 reviews on yelp.com qualified for a survey; however, all but 33 dropped out when asked to provide proof of these reviews via a screenshot
- 50% of those who had the opportunity to do so misrepresented their gender in order to qualify for a certain survey
Without the appropriate and necessary quality checks in place for online samples, the quality of the data on which brands base million-dollar decisions is seriously compromised.
In the Q3-4 2016 GRIT report, the most important consideration in research design was the ability to trust the results, yet two other findings stood out:
- Less than 10% believe it is important for participants to be fairly compensated for their time (including only 2% of research buyers or clients who agreed with this)
- Only 40% of research buyers believe it is important for field suppliers to share best practices with them
In his research, Michael looked at the impact on data quality when the industry doesn’t place value on these two factors. The study included a sample of n=1,013 Pureprofile account holders, and explored the impact of incentives on data quality, as well as the behaviour of ‘poor quality’ respondents when best-practice sample management procedures are not in place. The study found that:
- Fair compensation for completing surveys is positively correlated with better data quality. Earning an incentive creates a positive sense of obligation, and the motivation of earning the incentive leads to better data quality. Further, the longer account holders have been part of the Pureprofile panel, the better their data quality, indicating the importance of maintaining an ongoing relationship with members.
- Sample management best practices undertaken by panel suppliers are essential in maintaining high data quality. Of the respondents that Pureprofile would have excluded from the survey under their normal data screening procedures*, 46% failed to select a certain answer when instructed to do so, and 62% provided inconsistent answers for similar questions in the survey.
In a world where fair compensation and the best practices of panel suppliers are understood, clients will come to accept that the costs of effective panel management go hand-in-hand with better data quality and greater trust in the results.
We believe that data quality should be the top priority in all our clients’ online studies, which is why Pureprofile is committed to creating an engaging respondent experience where account holders are valued for their time, and data quality is never compromised.
* For the purposes of the study, Pureprofile suspended its automated checks (for speeding, straight-lining, cookie detection, etc.) and its manual quality control procedures to assess the impact on the data.
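To make the screening procedures described above more concrete, here is a minimal illustrative sketch of three common panel quality checks: speeding, straight-lining across a rating grid, and a failed instructed-response (attention) check. This is not Pureprofile’s actual system; all field names and thresholds are hypothetical assumptions for illustration only.

```python
# Hypothetical thresholds and field names for illustration only.
SPEED_FLOOR_SECONDS = 120   # assumed minimum plausible completion time
ATTENTION_ANSWER = 3        # the answer respondents were instructed to select


def quality_flags(response):
    """Return a list of the quality checks this survey response fails."""
    flags = []
    # Speeding: completing the survey implausibly fast.
    if response["duration_seconds"] < SPEED_FLOOR_SECONDS:
        flags.append("speeding")
    # Straight-lining: identical answers across an entire grid of rating items.
    if len(set(response["grid_answers"])) == 1:
        flags.append("straight_lining")
    # Attention check: failing to select the instructed answer.
    if response["attention_check"] != ATTENTION_ANSWER:
        flags.append("failed_attention_check")
    return flags


# Example usage with two mock respondents
good = {"duration_seconds": 300, "grid_answers": [4, 2, 5, 3], "attention_check": 3}
bad = {"duration_seconds": 45, "grid_answers": [3, 3, 3, 3], "attention_check": 1}
print(quality_flags(good))  # []
print(quality_flags(bad))   # ['speeding', 'straight_lining', 'failed_attention_check']
```

In practice, responses flagged by checks like these would be reviewed or excluded before analysis, which is the kind of screening the study deliberately turned off.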