On the search for evidence: what's left after running three RCTs?

By Kerstin Ganglmayer and Philipp Aiginger-Evangelisti on Thursday, 4 May 2023.

In 2019, we embarked on a journey to find evidence of our impact by running three randomised controlled trials (RCTs), which we reflected upon in our previous blog. But what else can we learn from the twists and turns of the last three to four years? What role does experimentation play in the agency now?

First of all, there is now knowledge and experience in FFG to decide when and where running an RCT really makes sense. The agency's context, with often very low sample sizes, matters a great deal here. Messaging trials seem to work best, while testing the impact of services and funding instruments tends to be very difficult (with the possible exception of voucher schemes). But there are many other methods that meet the overall aims of experimentation without the full rigour of an RCT, and these often make more sense in the agency's circumstances. This knowledge is now applied more often. For example, we ran a simple test to determine whether firms would be interested in a sustainability assessment: we built in a very small hurdle by sending an email stating that firms could get the assessment simply by replying. Only 39% of the firms took that action. It was an easy test without randomisation, but it helped us understand businesses' interests better.
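Even a simple take-up test like this carries statistical uncertainty at low sample sizes. As a minimal sketch (not FFG's actual analysis), here is how one might put a confidence interval around a reply rate like the 39% above; the sample size of 100 firms is a hypothetical placeholder, since the actual number is not reported here.

```python
# Minimal sketch: uncertainty around a take-up rate at small sample sizes.
# The sample size n = 100 is a hypothetical figure for illustration only.
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_interval(successes=39, n=100)
print(f"take-up: 39% (95% CI {lo:.0%}-{hi:.0%})")  # roughly 30%-49%
```

With 100 firms, the true rate could plausibly sit anywhere between roughly 30% and 49%; a quick check like this shows how precisely such a test can actually answer the question at hand.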

Secondly, FFG institutionalised the broader know-how about prototyping, experimenting and running RCTs. An “Innovation Network” was established, consisting of Innovation Coaches from each FFG department. These coaches support FFG teams that want to design new services, guiding them through a testing and experimentation process with tests early on in the service design. For example, coaches recently advised randomising the allocation of funds in a funding scheme with a high number of applicants, and suggested that a small RCT on top of that randomisation might yield additional insights. We dared to touch an existing funding scheme because we had experiential knowledge of how to estimate sample sizes and were confident that it would provide valuable learnings.
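Estimating sample sizes up front is exactly the kind of check that makes an agency confident enough to run an RCT on a live scheme. The following is a minimal back-of-the-envelope sketch, not FFG's actual calculation; the baseline and target success rates are hypothetical placeholders.

```python
# Minimal sketch: sample size per arm for a two-proportion comparison
# (two-sided alpha = 0.05, power = 0.80). Rates below are hypothetical.
from math import sqrt

def n_per_arm(p1: float, p2: float, z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Applicants needed per arm to detect a shift from rate p1 to rate p2."""
    p_bar = (p1 + p2) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return int(num / (p1 - p2) ** 2) + 1

# e.g. detecting a jump from a 20% to a 35% success rate:
print(n_per_arm(0.20, 0.35))  # ~138 applicants per arm
```

Running numbers like these before committing makes clear whether a scheme's applicant pool can support a trial at all, which is precisely where the agency's low sample sizes bite.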

Thirdly, and referring to the introduction, the service design (and not the experimental design) is now the focus of all prototyping and testing activities. The experimentation methods are “only” the tools to gain insights about the service. This seems obvious, but especially when designing something big and exceptional like a large field experiment (which might even lead to a Nobel prize), one tends to lose focus. So, within the service design process, tools for experimentation are built in, but the main goal remains to design a service which is useful and used. Finding the right balance between gaining insights about impact and making services work is a key learning area.

Lastly, and perhaps most important of all for the IGL community, FFG learned how valuable experimentation is in transformative innovation policy. Although the mission might be clear, for an innovation agency experienced in handling funding it is still not straightforward to become a more active agency. This is not mainly because of a lack of capability within the agency, but more a matter of established expectations and processes in the agency's work within the innovation system. Prototyping and experimentation are very valuable for transforming these perceptions.

We are now on a new and adventurous expedition to build on the knowledge we have gained so far. We will keep experimenting.