Why you should run a pilot study before diving into quantitative research

Gearing up to run a large quantitative research project? Although often overlooked, pilot studies are one of the quickest and most efficient ways to get the most out of unmoderated research before you commit serious time and resources. In this blog we cover some of the reasons why you should run a pilot study for any new quantitative project, and what you need to know to set it up successfully.

What is a pilot study?

Let’s start at the beginning. Running a pilot study means testing your survey or task with a small audience that reflects your desired group of participants before sending it to hundreds or thousands of people. For example, getting four or five external people to complete a survey before you gather feedback from a wider audience (note we said external: colleagues or people who are too close to the action are not the right audience for a pilot study).

Internal testing has its time and place, but a pilot study with unbiased participants ensures your instructions and language are clear and no internal jargon is polluting the task. Testing internally might save you a bit of money initially, but it could damage your whole project if you stick exclusively to that option.

For remote unmoderated quantitative testing with a large group of participants, running a pilot study with a smaller group will iron out issues with questionnaires and tasks and identify the ‘must fixes’: the user-friendliness of the research task, navigation issues and, most importantly, the biases and assumptions that are likely to undermine the end result.


Avoid small errors

Spelling mistakes, missing options, missing links… These are errors that anyone can make when designing a survey, a card sort or a tree test. Not only do they affect the participant’s experience and the quality of the results you collect, they also look sloppy.

Take this example: the difference between 'our children' and 'your children' is a really small error that is unlikely to get picked up by a tool like Grammarly, but it would definitely be noticed by your users and create some confusion. This is a real typo we recently detected in an online task we recruited for. The mistake was not picked up during the survey design and internal testing stages, and it took the pilot for someone to reach out and say “we don’t have children together”.


Addressing assumptions

Assumptions shouldn’t be shrugged off: they’re a big part of research, and you can stray into dangerous territory by accidentally leading participants down paths that simply confirm what you already believe. A really simple (and somewhat extreme) example is asking “how much did you like the prototype?”, which will surely give you different answers than “what did you think of the prototype?”.

Sounds obvious, but when you are too close to the action and the product, you tend to unknowingly guide the participants.

A different issue that pilot studies can help you avoid is using jargon or language that is unfamiliar to the participants, both in the questions and in the instructions. In our experience, this is a very easy mistake to make.

A pilot study will highlight this issue by showing you where participants took longer to complete specific sections, and where answers might have been skewed by the language used or by unclear instructions. It’s also useful to include free-text boxes so pilot participants can express their opinions and feelings.
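If your research tool exports per-section completion times, even a quick look at the numbers can point you at the sections that need attention. Here is a minimal sketch of that idea in Python; the section names and timings are hypothetical, and the 1.5× threshold is just an illustrative rule of thumb, not a standard.

```python
# A minimal sketch of one way to spot problem sections in pilot data.
# Assumes your research tool can export per-participant completion
# times (in seconds) for each section; the data below is hypothetical.

from statistics import mean

pilot_timings = {
    "screener":       [35, 42, 38, 40, 36],
    "card_sort":      [210, 195, 230, 205, 220],
    "jargon_section": [310, 455, 390, 520, 470],  # suspiciously slow
    "wrap_up":        [60, 55, 58, 62, 57],
}

# Flag any section whose average time is well above the overall average.
overall = mean(t for times in pilot_timings.values() for t in times)

for section, times in pilot_timings.items():
    avg = mean(times)
    if avg > 1.5 * overall:
        print(f"Review '{section}': avg {avg:.0f}s vs overall {overall:.0f}s")
```

A slow section isn’t proof of a problem on its own, but paired with the free-text feedback it usually tells you exactly where the wording needs another pass.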


Finalising conditional logic

If you have ever designed a complex survey or unmoderated task, you know conditional logic can become a nightmare when you have numerous paths available to the participants. This can cause major issues if not tested properly.

The solution is not to limit the conditional options or simplify the task, but to get at least a couple of people to test each conditional path and see where they end up. This is essential for finding broken logic paths or misdirected conditional questions, an issue that, in our experience, happens more often than you might expect. A pilot will show whether participants are hitting the right spots during the research task.
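To make this concrete, here is a minimal sketch of one way to enumerate every conditional path before recruiting the pilot, so you can check that each branch will be covered by at least one participant. The question IDs and branching rules below are hypothetical, and most survey tools won’t export their logic in exactly this form; treat it as an illustration of the idea rather than a ready-made tool.

```python
# Hypothetical branching map: question -> {answer: next question}.
# "END" marks the end of the survey.
logic = {
    "Q1": {"yes": "Q2", "no": "Q3"},
    "Q2": {"a": "Q4", "b": "END"},
    "Q3": {"x": "Q4", "y": "END"},
    "Q4": {"any": "END"},
}

def all_paths(question="Q1", path=()):
    """Depth-first walk that yields every conditional path through the survey."""
    if question == "END":
        yield path
        return
    for answer, nxt in logic[question].items():
        yield from all_paths(nxt, path + ((question, answer),))

for path in all_paths():
    print(" -> ".join(f"{q}[{a}]" for q, a in path))

# Four paths are printed here; a five-person pilot could cover each one
# at least once, or run more pilots if the survey branches more widely.
```

Writing the logic out like this also doubles as a quick audit: a question that never appears in any path, or a path that ends somewhere unexpected, is a broken branch waiting to be found by a real participant.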


What does the pilot tell you?

Hopefully, nothing too surprising. At this stage, though, it’s useful to get a colleague – ideally someone who is not involved in the project – to look at the results and analyse them with you, so you can compare your conclusions.

When you spot a small issue in the pilot results, you may still be tempted to think it’s extremely unlikely that a participant would notice it, or that a user would go down that ‘hidden’ conditional path. But as design researcher Doug Collins recently wrote: “if it can be done by a user, someone will”.

The lesson is: don’t discard any data from the pilot study, however small it seems.

How do you find people for a pilot study?

You may run the first pilot with colleagues, friends or family, but their views will inevitably bring bias into the mix; a proper pilot study needs to be done with people who represent your desired audience. For example, when setting up surveys, card sorts and tree tests for our clients, we always test the unmoderated task in-house and share our feedback, but we also offer the option to test with a panel of three to six participants. Even if you are not working with a third-party agency like People for Research, you can easily use one of the many online platforms that offer participant recruitment services.

Recruiting the right participants for quantitative research projects can be a massive challenge, so if you are planning to work with a supplier, align yourself with a partner who understands your needs and requirements and can find exactly the types of participants you need.
