What the Horizon 2020 Innosup programme can teach us about RCTs: The lessons

By Stella Ishack on Monday, 14 September 2020.

In the first part of our series ‘What the Horizon 2020 Innosup programme can teach us about RCTs’, we investigated the challenges, both anticipated and otherwise, that innovation agencies have faced so far when designing and running trials. Back in 2018, IGL was selected by the Executive Agency for Small and Medium-sized Enterprises (EASME) to deliver support to the INNOSUP-06-2018 projects. Almost one year since these projects kicked off, we have compiled the lessons from our first phase of support in a short report and summarised them in a two-part blog series. In this second blog, we move away from the challenges and delve deeper into what can be learned by those evaluating business support programmes and those designing trials themselves. With an awareness of the potential pitfalls, what are the main lessons and insights to take away from the initial phase of the INNOSUP-06-2018 funding call?

The lessons when designing a trial for an innovation agency

We’ve summarised six of the most useful, and most commonly encountered, lessons, which you can read about in more detail in our report. We have found that when designing and implementing trials of this nature, it is key to take the following into account:

Understand the importance of best practice

With regard to the projects more widely, perhaps the main lesson we can draw from the process is the importance of helping innovation agencies understand the experimental methodology as they develop their proposals.

For most innovation agencies, calls like the Innosup programme offer a great opportunity to apply an experimental approach to policy development and, within this, to learn how best to use RCTs. There are many factors that agencies need to consider when deciding if, when and how to run an RCT. Considering the best approach to outcome measurement, as well as to programme delivery, at each phase of trial development is crucial if an RCT is to yield robust evidence. This may require a change in the way agencies typically undertake evaluation: an RCT needs to be planned in detail at the outset of the project, with a specific research question in mind, and integrated into delivery.
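To make the experimental methodology a little more concrete, here is a minimal sketch of random assignment, the core mechanic of an RCT, in Python. The firm names and seed are purely illustrative and not drawn from the Innosup projects; a real trial would randomise its actual applicant list, often with stratification.

```python
import random

# Illustrative list of applicant firms (hypothetical names).
firms = ["Firm A", "Firm B", "Firm C", "Firm D", "Firm E", "Firm F"]

# Fix the seed so the assignment is reproducible and auditable,
# which matters when the randomisation feeds a public evaluation.
random.seed(2020)
random.shuffle(firms)

# Split the shuffled list in half: the first half receives the
# support programme (treatment), the second half is the control.
half = len(firms) // 2
treatment, control = firms[:half], firms[half:]

print("Treatment:", treatment)
print("Control:  ", control)
```

Because assignment is random, a systematic difference in outcomes between the two groups can be attributed to the programme rather than to which firms chose, or were chosen, to take part.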

The sooner the better

Much like any project, success lies in the detail of the planning, and the sooner these details are refined, the greater the chance of a project’s success. Initial project assessment and early engagement with teams is likely to lead to positive, substantial changes in project outcomes. It is therefore worth bearing in mind that projects benefit from this additional engagement earlier in the process, and considering similar preparatory support in future calls. For instance, in addition to running webinars before final-stage proposals, it could be beneficial for potential participants to join a more intensive workshop (or series of webinars) before project selection starts. This could further improve the range and quality of projects coming forward. Being fully aware of the demands of running an RCT (e.g. sample size requirements) will also help ensure that projects build sufficient time and resources for trial development into their proposals.

Know when to call in the experts

We have learned that having an evaluation partner from the design stage of the project is highly important. Some agencies may consider designing their experiment with internal staff alone, without the necessary technical support on the design side. In our experience, however, this can lead to problems once projects start, and to unnecessary delays if the programme has to be redesigned because it was unsuitable for an RCT.

When it comes to supporting trial design and implementation, requiring project teams to include dedicated research and statistical expertise is highly beneficial. While project teams may be highly motivated and responsive, a lack of familiarity with robust evaluation can make running this type of project more difficult; this skills gap may inhibit a team’s ability to act on feedback and to make informed decisions about the trial. Whether an external evaluation team is hired to bridge this gap, or project teams train their own staff to develop the expertise in-house, this technical support is crucial.

Select feasible projects

It is crucial to make sure that project selection and objectives reflect the current status of the intervention and the technical feasibility of the trial. When an intervention is at an early stage of development, it is often hard to gauge how effectively it can be delivered and how a trial to evaluate its impact should be designed.

When the intention is to run an impact evaluation, it is important to select projects where this will be feasible. This does not necessarily mean only selecting perfectly designed trials; many challenges can be addressed during the development of the project. But the selection process could be developed to include specific questions that help assessors gauge the feasibility of each proposal (e.g. the inclusion and justification of sample size calculations).
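As an illustration of the kind of sample size justification assessors might look for, the sketch below uses the statsmodels library’s TTestIndPower to estimate how many firms each arm of a trial would need. The effect size, significance level and power shown are assumed values chosen for illustration, not figures from the programme.

```python
from statsmodels.stats.power import TTestIndPower

# Assumed design parameters (illustrative, not from the Innosup call):
effect_size = 0.3   # standardised effect (Cohen's d) the trial should detect
alpha = 0.05        # two-sided significance level
power = 0.8         # probability of detecting the effect if it is real

# Solve for the number of firms needed in each arm of the trial.
analysis = TTestIndPower()
n_per_arm = analysis.solve_power(effect_size=effect_size,
                                 alpha=alpha,
                                 power=power,
                                 ratio=1.0,
                                 alternative='two-sided')

print(f"Required sample size per arm: {n_per_arm:.0f} firms")
```

For these assumed parameters the answer comes out at roughly 175 firms per arm, which shows why detecting modest effects demands samples that many business support programmes struggle to reach.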

The perks of peer-learning

Even if it is more resource-intensive, giving projects within a funding call such as this one the opportunity to learn and share experiences together in the same room brings very positive results, and should be encouraged as far as the programme allows. Where project teams are not expected to be fully familiar with the methodology, a crash course on policy experimentation alongside their peers lets them raise questions and concerns that may not surface easily during webinars or online chats. Hearing what other projects are experiencing, and how they are overcoming challenges, gives teams an excellent opportunity to self-assess their own trials and see where improvements can be made.

Have a timeline

Once selected projects start designing and developing their experiment, it is incredibly beneficial to have a clear timetable in order to foresee the needs and resources of each stage. For instance, early design stages tend to be more intensive than implementation ones, as several implementation details need to be clarified before recruitment starts. A clear timeline, together with realistic expectations about the time required to review and refine evaluation plans, eases programme management and keeps teams from rushing at points where detailed planning is needed. We suggest that those managing a trial run pilots of the intervention and data collection before proceeding to the full trial, as this preparation allows for more time-efficient, well-prepared activities later.


Overall, the Horizon 2020 Innosup programme has so far provided a range of lessons for improving project selection and the development of proposals: consistent, adequate and informed preparation, with the right methodology employed at the right time, makes for successful trials and robust outcomes. Refining the details of your trial from the outset and maintaining a rigorous approach, while accounting for potential unanticipated factors, is the best way to give your project the greatest chance of yielding strong evidence. If you would like to read more about the challenges faced and lessons learned when supporting these projects, these further insights can be found in our full report.