The data we collect and the analysis it enables
- Swick Learning

- Feb 1
Updated: Feb 13
Our feedback model is designed to balance three sometimes competing needs:
The brevity required to avoid overburdening participants.
The depth of analysis needed to generate meaningful insights.
The ability to generate cross-program comparability.
To achieve that balance, we collect a combination of standardised core variables, optional topic-specific measures, and contextual demographic data.
Together, these create a dataset that supports both program-level reporting and broader insight across your organisation, as well as across topics, audiences, professions, and sectors.
This piece outlines what we collect and what that makes possible analytically.
Quick aside: evaluation vs evaluation data
As our Overview of the RED Track Feedback process explains, the process focuses on collecting and structuring one type of evaluation data: feedback. This is not the same as conducting a full-scale program evaluation. We can assist with broader evaluation, but the RED Track Feedback process on its own generates one (very important) type of evaluation data.
What data do we collect?
Our Eight-Point Model of Professional Learning Quality
At the centre of our approach is a standardised eight-factor model of what constitutes an effective training or professional learning experience.
Across all programs, we measure:
Engaging Experience – The extent to which the training was engaging and held participants' attention throughout
Digestibility of Content – The extent to which the content was clearly presented and easy to understand
Knowledge/Skill Gain – The extent to which the training improved participants' knowledge or skills
Confidence to Apply – The extent to which participants feel confident applying what they learned
Learning Expectations Met – The extent to which the training delivered the learning that was promised
Overall Satisfaction and Recommendation – The degree to which participants would recommend the training to peers or colleagues
Notable Strengths – What participants appreciated most about the training
Points for Improvement – Ways the training could be improved
Variables 1–5 are collected on a 0–10 rating scale. Variable 6 uses the Net Promoter Score (NPS) methodology. Variables 7–8 are open-text responses.
Maintaining a single intuitive rating scale strikes a practical balance between brevity and analytical depth. It captures sufficient variance for meaningful analysis without creating survey fatigue or unnecessary non-completion.
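For readers who like to see the mechanics, here is a minimal sketch of how the 0–10 items and the NPS-style question can be summarised. The data and column names are hypothetical, not our schema; the NPS calculation follows the standard methodology (the share of 9–10 promoters minus the share of 0–6 detractors).

```python
import pandas as pd

# Hypothetical responses; column names are illustrative, not our actual schema.
responses = pd.DataFrame({
    "engaging_experience": [8, 9, 7, 10, 6],
    "digestibility":       [9, 8, 8, 9, 7],
    "recommend":           [9, 10, 7, 8, 6],  # the NPS-style item, rated 0-10
})

# 0-10 items: report the mean rating per variable.
core_means = responses[["engaging_experience", "digestibility"]].mean()

# Standard NPS: % promoters (9-10) minus % detractors (0-6).
promoters = (responses["recommend"] >= 9).mean() * 100
detractors = (responses["recommend"] <= 6).mean() * 100
nps = promoters - detractors

print(core_means)
print(f"NPS: {nps:.0f}")
```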
Because these measures are applied consistently across all programs, we are able to:
Compare performance across different courses
Identify patterns across audiences and contexts
Detect systemic strengths and weaknesses
Benchmark delivery approaches internally
The standardisation that comes from this model creates insight that would not otherwise be possible.
Custom, Topic-Specific Variables
Where appropriate, we co-design additional measures aligned to specific program objectives.
These typically focus on outcomes like:
Competency development
Behavioural intent
Confidence shifts
Knowledge acquisition
Sentiment toward specific initiatives
Program-specific outcomes or KPIs
These measures are often used to track change over time. We can deploy pulse surveys before, during, immediately after, and well after a program to assess learning gain and learning decay.
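As a rough sketch of what that pre/post structure supports, the snippet below computes simple learning gain and decay from hypothetical pulse-survey waves. The wave labels and scores are illustrative assumptions, not our reporting format.

```python
import pandas as pd

# Hypothetical mean confidence scores (0-10) from four pulse-survey waves.
waves = pd.Series({"pre": 5.2, "mid": 6.1, "post": 7.8, "follow_up": 7.1})

learning_gain = waves["post"] - waves["pre"]          # immediate shift after the program
learning_decay = waves["post"] - waves["follow_up"]   # loss between delivery and follow-up
retained_gain = waves["follow_up"] - waves["pre"]     # what endured over time

print(f"Gain: {learning_gain:+.1f}, Decay: {learning_decay:.1f}, Retained: {retained_gain:+.1f}")
```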
Contextual & Demographic Variables
We guarantee anonymity. We do not retain identifying information such as email addresses, IP addresses, or personnel-linked identifiers.
Because responses cannot be linked to named individuals, we collect a limited set of demographic and contextual variables to enable meaningful segmentation.
These can include:
Experience level
Role type or classification
Functional area
Geographic region
Prior exposure to related training
Standard demographic variables (e.g., age band, gender), where appropriate
The specific variables vary by organisation. However, we encourage intentional discipline — enough context to support analysis, without compromising brevity.
This contextual data allows us to:
Segment results meaningfully
Identify differential impact across groups
Interpret feedback in context
Generate broader organisational insight
Without it, interpretation is materially weaker.
We have also consistently observed very high response rates to optional but well-designed demographic questions.
What analysis does that make possible?
In practical terms, if the data exists, we can analyse it.
Because the datasets are clean and consistently structured, we can perform:
Cross-tabulation and segmentation analysis
Comparative program analysis
Trend analysis over time
Correlation analysis
Distribution and variance assessment
Internal benchmarking across programs or cohorts
Generally, the only analytical boundary is self-imposed (we do not link feedback responses to identifiable individuals).
Within that constraint, the analytical flexibility is really only limited by what is collected — either through our universal feedback framework or through the custom measures we co-design with you.
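As one concrete illustration of the segmentation and cross-tabulation described above, the sketch below groups a core rating by a contextual variable. The field names and data are hypothetical stand-ins for whatever variables an organisation chooses to collect.

```python
import pandas as pd

# Hypothetical feedback records; fields mirror the kinds of variables described above.
df = pd.DataFrame({
    "program":             ["A", "A", "B", "B", "B", "A"],
    "experience_level":    ["junior", "senior", "junior", "senior", "junior", "senior"],
    "confidence_to_apply": [6, 8, 7, 9, 5, 8],
})

# Segmentation: mean confidence-to-apply by program and experience level.
segment_means = df.groupby(["program", "experience_level"])["confidence_to_apply"].mean()

# Cross-tabulation: how many responses fall in each program/experience cell.
counts = pd.crosstab(df["program"], df["experience_level"])

print(segment_means)
print(counts)
```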
A Note for Evaluation Teams
For professional evaluators, a key concern is often along these lines:
"If we move to an external approach, will we lose analytical flexibility?"
The answer is no, provided the right variables are captured at collection. Our approach preserves the optionality you need.
Our core model creates comparability and benchmarking data that would not otherwise be available to evaluators.
Our custom measures address your program-specific evaluation needs.
Context variables enable segmentation and interpretation.
Moreover, our promise of anonymity to participants and the use of a neutral third party for analysis tend to strengthen the validity of your data.
Our role is to ensure that what is collected is proportionate, structured, and analytically useful — without overwhelming participants or compromising their candour.