Surveys can be incredibly useful in the clinical research operations arena to help measure process improvement among stakeholders. But before diving in, it is important to brush up on the basics and best practices of survey creation.
Define the Objective
To start the process of creating a survey, it’s necessary to define the objective. Think about the following questions before putting together your survey: What question are you trying to answer? What topic are you trying to address? Define your objective by thinking: “By the time I finish this survey, I want to know X about Y.”
Some examples of survey objectives pertaining to process improvement in clinical research might include:
- Staff satisfaction with particular processes
- Determining where gaps/roadblocks are in the study activation process
- Most effective ways to provide ongoing training in a particular area
- Feedback on the implementation of new processes (pre-implementation and/or post-implementation)
Define the Outcome
Define what outcome you are hoping to reach with the data you will collect from this survey. What are you really trying to measure? For example, with the survey you may hope to measure satisfaction level, agreement level, competency level, effective training types, etc.
Define the Audience
From whom are you interested in obtaining responses? To whom does this topic apply? Create a list of audience attributes to help define your intended audience and aid in question creation.
For example, a survey about what types of training should be implemented for your research staff would apply to all research staff regardless of years of experience, whether participant-facing or not, and to both managers and individual contributors. An example of when you would need to narrow the audience is training questions that only apply to newer employees because they address onboarding.
Develop the Topics and Questions
When coming up with topics for your survey, it is best practice to start general. What areas describe the objective? Keep in mind nuances and anticipated differences based on respondent demographics, respondent activity, categories of the objective, and situational differences.
Within each topic you have created, write down as many questions as you can to capture all aspects of that topic. Phrase questions in a consistent way to avoid confusion, and think about the answers that will accompany each question.
If possible, stick with a single scale so respondents don't have to think about the format of the responses and can instead focus on the content. Types of scales include sliding scales that incorporate frequency and percentages, visual analog scales, Likert scales, and free-text answers. If you choose to include free-text responses, consider how you will analyze and interpret them, as this can be a very manual process.
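Keeping a single scale also simplifies the analysis: every raw text answer can be mapped to one consistent set of numeric codes. A minimal sketch in Python, assuming a hypothetical 5-point Likert scale (the labels and codes here are illustrative, not from the article):

```python
# Hypothetical 5-point Likert scale; one consistent mapping for every
# question keeps respondents (and the analysis) on a single scale.
LIKERT_SCALE = {
    "strongly disagree": 1,
    "disagree": 2,
    "neither agree nor disagree": 3,
    "agree": 4,
    "strongly agree": 5,
}

def encode_responses(responses):
    """Map raw Likert text answers to numeric codes.

    Unrecognized or missing answers become None so they can be
    handled explicitly during data cleaning.
    """
    coded = []
    for raw in responses:
        key = raw.strip().lower() if isinstance(raw, str) else None
        coded.append(LIKERT_SCALE.get(key))
    return coded

print(encode_responses(["Agree", "Strongly agree", " disagree ", None]))
# [4, 5, 2, None]
```

Normalizing case and whitespace before the lookup means small inconsistencies in how answers were recorded don't silently drop data.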
Neutral responses are another factor to keep in mind while developing questions. Should you include a "Neutral," "No opinion," or "Neither agree nor disagree" option in your survey? Oftentimes neutral responses don't yield actionable data, so many survey designers omit this option to force respondents to choose a side.
When reviewing the content, make sure there are no double-barreled questions. If the respondent could answer a question in two different ways, it is double-barreled. Figure out what you really want to ask, and if there are two aspects to measure, ask two questions. Also make sure you are not asking leading questions. For example, "The outdated procedures do not cover training opportunities" could be rephrased as "The procedures do not cover training opportunities" to remove the leading portion of the statement.
Once you draft the questions, review them. Stick to the topic: if a question does not address your original objective, remove it. If questions overlap, keep the best wording and remove the duplicate; if overlapping questions are intended to address different items, reword them to reflect the intended difference. Finally, ask "What am I missing?" and add questions as needed.
Minimize the number of questions needed to measure the objective. Respondents are less likely to finish the questions at the end of a survey that is too long, so be mindful of question order, giving priority to the questions whose answers interest you most. Put questions in an order that makes sense; good flow helps you obtain complete and accurate survey results.
Think carefully about what demographics are needed in your survey. Are there any desired sub-analyses? As with the questions, only ask for demographics needed for the analysis. Also consider elements about your respondents that help explain the results or reveal associations with them.
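One way a demographic question pays off is in a simple sub-analysis that groups responses by an attribute you collected. A hedged sketch, assuming hypothetical roles and 1-5 Likert scores (none of these values come from the article):

```python
from statistics import median

# Hypothetical records: each pairs a demographic attribute (role)
# with a response on a 1-5 Likert scale.
records = [
    {"role": "coordinator", "score": 4},
    {"role": "coordinator", "score": 5},
    {"role": "manager", "score": 2},
    {"role": "manager", "score": 3},
    {"role": "coordinator", "score": 4},
]

# Group the scores by role to support the planned sub-analysis.
by_role = {}
for rec in records:
    by_role.setdefault(rec["role"], []).append(rec["score"])

# Median response per role; differences between groups may explain
# patterns that the overall numbers hide.
for role, scores in sorted(by_role.items()):
    print(role, median(scores))
# coordinator 4
# manager 2.5
```

If a demographic attribute would never feed a grouping like this, that is a sign the question can be dropped.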
Distribute the Survey
Determine the most effective method to distribute the survey and collect responses (paper, email, web-based, etc.). How long should you keep the survey open? Reminders to participate can help raise participation levels, but too many become a nuisance.
Clean and Analyze the Data
Data cleaning includes collecting all the data in a single location (saving the original data in case of mistakes) and translating it to a standard format that can be analyzed. The more time you spend ensuring your data is clean, the quicker and more accurate the analysis will be. Spend time understanding the data you receive, especially if you have free-text responses. Decide how you would like to handle missing data (leave it blank or fill it in with another value). Follow an analysis plan consisting of descriptive statistics (frequencies, median/mode response) and/or assessments of differences (categorical and numerical).
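The cleaning and descriptive-statistics steps above can be sketched in a few lines of Python. The responses, and the choice to drop rather than fill in missing values, are assumptions for the example:

```python
from collections import Counter
from statistics import median, mode

# Hypothetical cleaned responses to one 5-point Likert question
# (1 = strongly disagree ... 5 = strongly agree); None marks missing data.
responses = [4, 5, 3, 4, None, 2, 4, 5, None, 4]

# Decide up front how to handle missing data: here we drop blanks
# rather than filling them in with a substitute value.
complete = [r for r in responses if r is not None]

freqs = Counter(complete)  # frequency of each response level
print("Frequencies:", dict(sorted(freqs.items())))         # {2: 1, 3: 1, 4: 4, 5: 2}
print("Median response:", median(complete))                # 4.0
print("Mode response:", mode(complete))                    # 4
print("Completion rate:", len(complete) / len(responses))  # 0.8
```

Tracking the completion rate alongside the descriptive statistics makes it obvious when missing data is common enough to bias the results.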
Carefully planning your survey and following the guidelines above should result in accurate, meaningful data. Learn more about Forte's latest survey by viewing our on-demand webinar, How Technology Competency and Adoption Impact You and Your Organization. You can also view an analysis of the results in our Technology Competency Survey Report.