Keri Shurtliff

Measure learner reactions

“How do instructional designers measure learner reactions to training?”

“What are the key components of post-training surveys?”

“What should designers consider when writing questions for post-training surveys?”

“How do learning designers determine the right format for each question?”

“What are best practices for writing clear and unambiguous survey questions?”

“How do designers make sure surveys are user-friendly and engaging?”

“Why should learning designers test surveys before launching them?”

“How do learning designers use post-training survey data?”


Measuring learner reactions to training is pivotal for instructional designers as it offers feedback for refining training content and delivery. Positive reactions can bolster the credibility of the training program, signal potential learning outcomes, and validate the investment made in the training. This feedback informs logistical decisions, provides insights into instructor performance, and sets a baseline for future evaluations. Overall, gauging reactions is a foundational step in ensuring that training remains relevant, engaging, and effective for learners.


Through this lesson, you should be able to create a post-training survey to rate learner satisfaction.



How do instructional designers measure learner reactions to training?


Instructional designers often measure learner reactions to training as a part of the training evaluation process. Measuring reactions is considered the first level in Donald Kirkpatrick's Four-Level Training Evaluation Model. Here's how instructional designers typically measure learner reactions to training:

  • Feedback surveys/questionnaires: These are the most common tools used, and they are the primary focus of this lesson. They might ask learners about:

    • Overall satisfaction with the training

    • Relevance of the training content to their job roles

    • Quality and effectiveness of the training materials

    • Effectiveness of the trainer/facilitator (if applicable)

    • Training environment (for in-person training)

    • Technical quality (for online training)

  • Verbal feedback: If the training is instructor-led, learners can provide direct verbal feedback at the end of the session.

  • Focus groups and interviews: Designers can facilitate 1:1 interviews or focus group discussions, where learners can provide deeper insights into their reactions and suggestions for improvement.

  • Feedback boxes: These can be placed at the training location (for in-person training) or on the eLearning platform, where participants can drop their comments and suggestions anonymously.

  • Observations: In some settings, trainers or instructional designers might observe participants during the training. Non-verbal cues (like body language, engagement level, etc.) can give insights into their reactions.

  • Net Promoter Score (NPS): This is a simple metric used to gauge how likely a participant is to recommend the training to others. The question usually asked is, "On a scale of 0-10, how likely are you to recommend this training to a colleague?"

  • Usage and engagement metrics: For eLearning or digital training, analytics can be used to measure:

    • Time spent on the course or specific modules

    • Number of times the training was accessed

    • Drop-off rates (sections where learners stop or exit)

    • Participation in forums or discussion boards

    • Quiz or test attempts and scores
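Two of the metrics above, NPS and drop-off rate, reduce to simple arithmetic: NPS counts promoters (ratings of 9-10) and detractors (0-6) and reports the percentage-point difference, while drop-off rate is the share of learners who started but did not finish. The sketch below illustrates both; the sample ratings, counts, and function names are hypothetical, not part of any survey platform's API.

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 'likely to recommend' ratings.

    Promoters rate 9-10, detractors 0-6; NPS is the percentage of
    promoters minus the percentage of detractors (range -100 to 100).
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

def drop_off_rate(started, completed):
    """Percentage of learners who started the course but did not finish."""
    return round(100 * (started - completed) / started, 1)

# Hypothetical sample: ten learners' 0-10 recommendation ratings
ratings = [10, 9, 9, 8, 7, 7, 6, 9, 10, 5]
print(net_promoter_score(ratings))   # 5 promoters, 2 detractors -> 30
print(drop_off_rate(200, 150))       # 50 of 200 starters dropped -> 25.0
```

The same ratings can yield very different NPS values depending on how they cluster at the extremes, which is why NPS is usually reported alongside the raw distribution.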


What are the key components of post-training surveys?


Post-training surveys have several key components that help capture insights about different aspects of the training experience. Here are some key components to consider when creating post-training surveys:

  • Demographic information: Collect basic demographic data such as age, gender, role, level of experience, and department. This information helps segment the feedback and understand how different groups perceive the training.

  • Learning objectives: Include questions that assess participants' understanding of the learning objectives and whether they feel the training helped them achieve those objectives.

  • Content relevance: Ask participants to rate the relevance and usefulness of different training modules, topics, or activities. This helps identify which areas of content were most valuable and which might need improvement.

  • Engagement and interactivity: Measure participants' level of engagement with the training content and activities. Inquire about the interactivity, use of multimedia, and opportunities for active participation.

  • Challenges and suggestions: Encourage participants to share any challenges they faced during the training and provide suggestions for improvement. This can reveal areas that might need additional support or modifications.

  • Overall satisfaction: Ask participants to rate their overall satisfaction with the training on a scale or through open-ended questions.

  • Future learning needs: Inquire about participants' future learning needs or topics they would like to see covered in follow-up training sessions.

When designing post-training surveys, strike a balance between collecting enough detailed information and keeping the survey concise to encourage higher completion rates. Regularly reviewing and updating the survey based on the collected feedback can ensure its effectiveness in continuously improving the training experience.


What should designers consider when writing questions for post-training surveys?


When designing post-training survey questions, designers should consider several factors to ensure that the feedback collected is relevant, clear, and actionable. Here are some key considerations:

  • Purpose and objectives: Define the purpose of the survey. Are you looking to gauge general satisfaction, understand the effectiveness of a particular module, or get feedback on the instructor's performance? This will guide the type and focus of the questions.

  • Clarity: Ensure questions are straightforward and free from jargon. Ambiguous questions can lead to misinterpretation and skewed results.

  • Bias avoidance: Avoid leading or loaded questions that might sway the participant's answer. For instance, instead of asking "Did you enjoy the excellent training?", ask "How did you find the training?"

  • Question type: Determine whether you need open-ended questions for detailed feedback or closed-ended ones (like multiple choice or Likert scale) for quantitative analysis.

  • Brevity: Long surveys can be overwhelming and lead to drop-offs or rushed responses. Keep your survey concise, focusing on the most critical aspects.

  • Sequencing: Start with general questions and then move to more specific ones. It's also helpful to group related questions together.

  • Anonymity: If possible, allow participants to answer anonymously. This can lead to more honest feedback, as participants might be more candid without fear of repercussions.

  • Actionability: Frame questions in a way that the feedback received can be actionable. For example, asking "What can be improved?" is more actionable than a simple "Did you like the training?"

  • Inclusivity: Ensure the language is inclusive and the questions are applicable to all participants. Avoid making assumptions about participants' backgrounds or roles.

  • Scale consistency: When using rating scales, keep the scale consistent throughout the survey (e.g., always 1-5, where 1 is 'Strongly Disagree' and 5 is 'Strongly Agree').

  • Feedback on all elements: Remember to seek feedback not just on content but also on the delivery method, training environment, materials, duration, and facilitator effectiveness.

  • Pilot testing: Before rolling out the survey to all participants, test it with a small group to identify any confusing or ambiguous questions.

By keeping these considerations in mind, designers can craft effective post-training surveys that yield meaningful insights, facilitating continuous improvement in training programs.


Scroll through and review this two-page sample survey for an eLearning course:


Remember to strike a balance between quantitative and qualitative questions to gather both structured data and rich insights. Additionally, consider the order and flow of the questions to maintain participant engagement and encourage honest feedback.


How do learning designers determine the right format for each question?


Learning designers determine the right format for each question in a post-training survey based on several factors, including the type of information they want to collect, the level of detail required, the target audience, and the overall goals of the survey. Here's how learning designers can determine the appropriate format for each question:

  • Type of information needed:

    • Closed-ended (quantitative) questions: These questions offer predefined response options and are ideal for gathering quantitative data that can be easily summarized and analyzed. Use closed-ended questions for collecting ratings, rankings, and categorical data.

    • Open-ended (qualitative) questions: These questions encourage participants to provide detailed, open-ended responses. They are suitable for collecting rich, contextual feedback and uncovering insights that might not be captured by predefined response options.

  • Level of detail:

    • Multiple-choice questions: Use multiple-choice questions for situations where participants' responses can be categorized into specific options. This format works well when you want to narrow down responses to a few predetermined choices.

    • Likert scale questions: Likert scale questions ask participants to rate their agreement or disagreement on a scale. This format helps gauge participants' attitudes, opinions, or perceptions on a specific topic.

    • Ranking questions: For questions that require participants to prioritize or rank items, use ranking questions to understand the relative importance of different options.

  • Target audience: Consider the demographics and characteristics of the survey participants. Use question formats that are familiar and comfortable for your audience. For instance, if your participants are less tech-savvy, keep the interface simple and easy to navigate.

  • Survey goals: Consider the specific goals of the survey. If you're aiming to gather statistical data for analysis, closed-ended questions are valuable. If you want to understand the "why" behind participants' responses, incorporate open-ended questions to capture qualitative insights.

By carefully considering these factors, learning designers can select question formats that align with their goals and maximize the quality of feedback collected through post-training surveys. The goal is to design a survey that effectively captures the insights needed to improve the learning experience.
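To see why consistent closed-ended formats make quantitative analysis straightforward, here is a minimal sketch, using only Python's standard library, that summarizes hypothetical responses to a single Likert item on a 1-5 scale. The response values are invented for the example.

```python
from collections import Counter
from statistics import mean

# Hypothetical Likert responses on a consistent 1-5 scale,
# where 1 = Strongly Disagree and 5 = Strongly Agree
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

distribution = Counter(responses)   # how many participants chose each option
average = mean(responses)           # central tendency of the item
# "Top-2 box": share of participants who agreed (rated 4 or 5)
agreement = sum(1 for r in responses if r >= 4) / len(responses)

print(dict(sorted(distribution.items())))  # {2: 1, 3: 2, 4: 4, 5: 3}
print(average)                             # 3.9
print(agreement)                           # 0.7
```

Open-ended responses, by contrast, have no such ready-made summary; they must be read and coded by hand (or with text-analysis tooling), which is exactly the trade-off between the two formats described above.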


What are best practices for writing clear and unambiguous survey questions?


Writing clear and unambiguous survey questions is essential to ensure that participants understand the questions accurately and provide meaningful responses. Here are some best practices to follow when crafting survey questions:

  • Use plain language: Avoid jargon, acronyms, and technical terms that participants may not know.

  • Ask one thing at a time: Avoid double-barreled questions (e.g., "Was the training clear and engaging?") that combine two ideas in a single item.

  • Stay neutral: Avoid leading or loaded wording that suggests a "correct" answer.

  • Be specific: Vague questions produce vague answers, so anchor each question to a concrete aspect of the training.

  • Keep it short: Shorter questions are easier to read accurately and answer honestly.

  • Use consistent scales: Keep rating scales and their labels the same throughout the survey.

  • Pilot test: Try the questions with a small group first to catch confusing wording before launch.


Remember that the goal is to make the survey questions easily understandable by all participants, regardless of their background or familiarity with the topic. Following these best practices will result in survey questions that yield accurate and meaningful responses, which can lead to actionable insights for improving your training program or learning experience.


How do learning designers make sure surveys are user-friendly and engaging?


Creating user-friendly and engaging surveys is crucial for encouraging participation, obtaining valuable feedback, and ensuring a positive experience for participants. Here are some strategies that learning designers can use when creating post-training surveys:

  • Clear and simple design: Choose a clean and uncluttered design for the survey interface. Use a clear font, appropriate font size, and sufficient spacing between questions and response options.

  • Mobile-friendly design: Ensure that the survey is responsive and accessible on various devices, including smartphones and tablets. Many participants may prefer to complete surveys on mobile devices.

  • Intuitive navigation: Make sure the survey is easy to navigate. Use a logical flow from one question to the next, and consider using progress indicators to show participants how far they are into the survey.

  • Brief and concise: Keep the survey as concise as possible while still gathering necessary information. Avoid unnecessary questions that may lead to survey fatigue.

  • Engaging introduction: Begin the survey with a friendly and engaging introduction. Explain the purpose of the survey and how participants' feedback will be used to improve the learning experience.

  • Use visuals wisely: Incorporate visuals, such as images or icons, to make the survey visually appealing. However, ensure that visuals enhance understanding rather than confuse participants.

  • Mix question formats: Use a mix of question formats (multiple-choice, Likert scale, open-ended) to keep participants engaged and prevent monotony.

By focusing on these strategies, learning designers can create surveys that participants find easy to use, enjoyable to complete, and meaningful in terms of the feedback they provide. Ultimately, an engaging and user-friendly survey experience can lead to higher participation rates and more valuable insights for improving learning experiences.


Why should learning designers test surveys before launching them?


Testing post-training surveys before launching them is a critical step in the survey design process. Learning designers should test surveys for several reasons:

  • Identify issues: Testing helps uncover any technical glitches, formatting errors, or usability issues that participants might encounter when completing the survey. Early identification of these issues allows for timely resolution.

  • Improve clarity: Testing with a small group of participants helps identify questions that might be unclear or confusing. This provides an opportunity to revise and rephrase questions for better clarity and understanding.

  • Check flow and logic: Testing ensures that the survey flows logically from one question to the next and that skip logic or branching functions as intended. This prevents participants from encountering unexpected jumps or dead ends in the survey.

  • Evaluate timing: Testing provides insights into how long it actually takes participants to complete the survey. This helps manage participant expectations and ensures that the survey doesn't overburden them with a longer duration than initially communicated.

  • User experience: Testing allows learning designers to gauge the overall user experience of completing the survey. This includes assessing the ease of navigation, visual appeal, and engagement level.

In essence, testing a survey before launching it is a proactive measure that ensures the survey is well-designed, user-friendly, and capable of achieving its intended objectives. By addressing potential problems and making improvements based on testing feedback, learning designers can create a more effective, efficient, and engaging survey experience for participants.


How do learning designers use post-training survey data?


Learning designers use post-training survey data to gather insights, evaluate training effectiveness, make informed decisions, and continuously improve the learning experience. Here's how they typically use the survey data:

  • Evaluate learning outcomes: Learning designers analyze survey responses related to learning outcomes to determine whether participants achieved the intended goals of the training. This helps assess the effectiveness of the training content and methods.

  • Identify strengths and weaknesses: By reviewing survey feedback, learning designers identify strengths and weaknesses of the training program. This includes understanding which content was well-received and where improvements are needed.

  • Refine content and delivery: Survey data guides learning designers in refining course content, instructional methods, and learning resources. They can make adjustments based on participant preferences, challenges, and suggestions.

  • Tailor learning experiences: Learning designers use survey data to personalize future learning experiences. This might involve offering additional resources or activities to address specific learning needs identified by participants.

  • Improve facilitator performance: If the training involves an instructor or facilitator, survey feedback helps identify areas where the instructor's performance can be improved, such as communication style or engagement techniques.

  • Enhance engagement strategies: Survey data informs learning designers about the level of engagement participants experienced during the training. This feedback can be used to enhance engagement strategies for future training initiatives.
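As a simple illustration of the segmented analysis described above (and of why the demographic questions recommended earlier matter), the sketch below averages a hypothetical "overall satisfaction" rating by department. The records, departments, and scores are invented for the example.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical survey records: (department, overall satisfaction 1-5)
records = [
    ("Sales", 4), ("Sales", 5), ("Sales", 3),
    ("Support", 2), ("Support", 3),
    ("Engineering", 5), ("Engineering", 4),
]

# Group the satisfaction scores by department
by_group = defaultdict(list)
for department, score in records:
    by_group[department].append(score)

# Average satisfaction per group highlights where to focus revisions
for department, scores in sorted(by_group.items()):
    print(department, round(mean(scores), 2))
# Engineering 4.5
# Sales 4.0
# Support 2.5
```

A breakdown like this can show that a training rated well overall still underserves one audience (here, the invented "Support" group), which would be invisible in the aggregate average alone.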

In summary, learning designers leverage post-training survey data as a valuable resource for evidence-based decision-making and continuous improvement. By analyzing the data, they gain insights into training effectiveness, participant preferences, and areas that require attention, leading to more impactful and engaging learning experiences.


Summary and next steps


Learning designers craft post-training surveys to gather data-driven feedback and insights from participants about their learning experience. Effective post-training surveys typically consist of several key components designed to elicit comprehensive and actionable feedback. When creating post-training surveys, consider using a mix of rating scales, multiple-choice questions, and open-ended questions to gather both quantitative and qualitative feedback. Writing clear, concise questions also makes surveys more engaging and user-friendly. Ultimately, learning designers use post-training surveys to measure learner reactions, empowering them to continuously improve their craft and to strategize for future training programs.


Now that you are familiar with how to create a post-training survey, continue to the next lesson in LXD Factory’s Evaluate series: Measure learner mastery.

