
Beyond “feasibility”
Made-to-measure experimentation with Catalyst
31 July 2025
IGL’s Unlocking Innovative Potential project was designed to strengthen inclusive innovation through a community of practice and collaborative partnerships between local organisations, academic researchers, and our team. A core objective has been assessing the feasibility of experimentation within organisations by setting up small-scale pilot experiments.
What makes an experiment “feasible”? When I first joined this project, I interpreted this concept simply as “Is experimentation possible?” We anticipated key elements would be necessary: a sufficient number of participants, leadership buy-in, adequate resources, and ethical approval for randomisation. By this definition, we’ve succeeded: we’ve set up and completed two pilot studies over the past year.
However, while developing a pilot experiment with Catalyst, a non-profit innovation hub in Northern Ireland, I gained a much deeper understanding of “feasibility”. I now realise that while practical considerations are necessary, they aren’t sufficient for a truly successful experiment. My initial interpretation overlooked a crucial point: experiments are only as powerful as their measurements. It’s possible to run a logistically effective experiment without gaining valuable insights if the outcomes measured aren’t relevant to the organisation, don’t capture what it wants to achieve, and don’t provide actionable and timely findings.
Here, I argue that feasibility isn’t just a tick-box exercise. It demands collaboratively building outcome measures tailored to each organisation’s unique needs.
Evaluating “Hello Possible”
Our pilot experiment with Catalyst evaluated their Hello Possible programme [LINK], which cultivates entrepreneurial ambition and activity among individuals from underserved communities in Northern Ireland. Hello Possible aims to empower participants by developing their interests, boosting their confidence, and teaching practical business ideation and strategy skills. The curriculum is based on the Disciplined Entrepreneurship (DE) framework developed by Bill Aulet at MIT. For our evaluation, we focused on the second stage of the programme, comparing individuals participating in an in-person intensive training event to those accessing an online MITx course with similar content. Catalyst has adapted the core DE concepts to the Northern Ireland context by incorporating local entrepreneurial role models, and has designed its curriculum to ensure inclusivity.
The Hello Possible team deserves huge credit for embracing new ideas, facilitating randomisation and building surveys, and navigating labyrinthine legal agreements. All the necessary conditions for this pilot were met. We were all committed to capturing the most important and relevant insights to inform future iterations of the programme. However, defining and measuring key outcomes while the programme itself is still in a formative phase required hard thought and creative solutions.
The Measurement Conundrum
Our early discussions quickly revealed that measurement would be challenging for two reasons:
- Diverse participants: Hello Possible participants come from a wide array of backgrounds. For instance, participants in the first stage joined through two channels – Further Education (FE) colleges or directly via Catalyst – across multiple locations and modes of instruction (in-person, online). This diversity makes generalisation and direct comparison with existing research difficult.
- Multiple outcomes: the success of the Hello Possible programme isn’t just about participants learning new skills or starting a business – participants’ subjective experiences are equally important, as is whether the programme builds cultural capital through fostering empowerment and a sense of agency. It was essential to capture how their mindsets shift and whether they feel more capable of achieving their goals.
Tailoring Measurement to Generate Insights
To address these challenges, we collaboratively developed a set of measurement tools and approaches:
- We combined objective and subjective measurements. Alongside assessing whether participants met their goals (both entrepreneurial and otherwise) and learned the course content, we also measured how the programme influenced their attitudes towards entrepreneurship and themselves. While our internal analysis guide [LINK] often cautions against subjective and attitudinal outcomes, these are precisely the intended benefits of Hello Possible, making them entirely appropriate in this context.
- We applied tried-and-tested scales. These included validated, scientifically rigorous measures of entrepreneurial self-efficacy, entrepreneurial intention and programme learning. Such scales are used in academic research, lending credibility to our results and allowing us to compare findings with other programmes.
- We complemented our quantitative data with qualitative data. Interviews helped us to capture the influence of local context and individual variation, exploring which aspects of the programme truly drive its success. We also used this method to assess the measurements themselves: Did participants understand the survey questions? Were we measuring what mattered most to them?
- We embraced the uniqueness of Hello Possible participants. A key component of our data collection was the measurement of demographic characteristics, which allows us to understand and quantify the diversity of participants’ backgrounds and experiences. Most studies of entrepreneurial intention focus on students or existing businesses; we know of none involving marginalised individuals in Northern Ireland. Applying existing methods to this new population has allowed us to uncover relevant insights.
Importantly, our broad and flexible approach in the pilot experiment has generated evidence that will allow Catalyst to pinpoint the most important and relevant outcome measures for future evaluations of Hello Possible.
From Challenges to Opportunities for Inclusive Experimentation
Anyone in the innovation sector who wishes to evaluate both the economic and social impacts of their programmes is likely to face measurement challenges – but within these challenges lies a significant opportunity to develop methods tailored to specific organisational needs and to advance our support of inclusive innovation:
- Varied participants are the norm in inclusive innovation programmes. Using flexible measurement scales can more accurately reflect this diversity, and qualitative approaches can illuminate individual variation.
- Local contexts are critical. Understanding how local context influences programme effectiveness is essential for optimising design and delivery, and for interfacing with the innovation ecosystem. Insights generated about local populations are valuable for our overall understanding of inclusive innovation.
- Multiple outcomes are often expected. Although some outcomes are easier to measure, we should focus on what truly matters, whether this be objective, subjective or a mixture of the two. After casting a wide net in initial investigations, a smaller number of key outcomes can then be identified.
- Unique insights can inform broader efforts to evaluate and improve inclusive innovation programmes. We encourage the sharing of not just findings but also measurement scales and approaches, so that other evaluation teams can benefit. Together, we can develop a robust body of knowledge and network of support.
Experimentation isn’t only about having the processes and resources to implement a trial – it’s also about designing measurements that truly illuminate the experiences of the people who matter most. Inclusive innovation demands inclusive experimentation that leverages all available tools and remains flexible enough to capture the insights that will improve our societies.