IGL2018: Vouchers and Experimentation

By James Phipps on Thursday, 12 July 2018.

You may have noticed that Lou and I have yet to tell our vouchers story. We haven’t forgotten, but we have been kept busy of late with IGL2018 and with several projects reaching critical points (look out for more on these soon, either on our blog or in our newsletter!). In the interim, I thought I’d share a couple of points from IGL2018 that connect to our vouchers story.

During the Policy and Practice Learning Lab at IGL2018, Triin, Teo and I got together again to run a workshop on using randomised controlled trials (RCTs) in innovation, entrepreneurship and business policy. For this year’s workshop we based the session around a scenario in which participants had to develop a new vouchers programme to help boost small business growth. Below are two of the reasons why we chose vouchers for this scenario.


Controlling for selection bias when evaluating a vouchers programme requires much more than just strong controls for who receives a voucher

One of our objectives for the workshop was to outline why RCTs are regarded as the ‘gold standard’ for demonstrating causal impact. Much of this is captured in other IGL outputs (such as this blog on trials). Below are some reflections on why RCTs can address the issues faced by other methodologies when evaluating voucher schemes, expanding on points discussed at the workshop.

Like many voucher programmes, those designed during the workshop had very broad eligibility criteria. In theory they would be open to applications from across the SME population. But in practice, whilst millions could be eligible, it is likely that only thousands would apply, and those that do would be far from typical of the eligible population. Factors such as business age, size and sector may explain a lot about who applies for a programme. For example, the proportion of the smallest micro-businesses willing and able to invest £5,000 in an innovation voucher is likely to be much lower than amongst larger SMEs.

As a result, methodologies such as propensity score matching could be used to match voucher recipients with non-recipients who appear similar and equally likely to have used a voucher. But whilst these characteristics may have strong significance and explanatory power with regard to selection, they typically have very little connection with the outcomes of interest in policy evaluations, such as business sales and employment growth.
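To make that limitation concrete, here is a minimal sketch (in Python, illustrative only, with invented firm characteristics such as firm_age, employees and sector) of what a propensity score matching exercise might look like. It can only balance what it can see: any factor that is unobserved but drives both selection and outcomes is left untouched.

```python
# Illustrative sketch only: nearest-neighbour matching on a propensity score
# estimated from observable firm characteristics. All data and variable
# names here are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 2000
firms = pd.DataFrame({
    "firm_age": rng.integers(1, 30, n),
    "employees": rng.integers(1, 250, n),
    "sector": rng.integers(0, 5, n),
})
# Pretend we observe which firms took up a voucher (non-random in practice).
firms["voucher"] = rng.binomial(1, 0.1, n)

# Step 1: model the probability of receiving a voucher from observables.
X = firms[["firm_age", "employees", "sector"]]
pscore = LogisticRegression(max_iter=1000).fit(X, firms["voucher"]).predict_proba(X)[:, 1]

# Step 2: match each recipient to the non-recipient with the closest score.
treated = firms.index[firms["voucher"] == 1]
controls = firms.index[firms["voucher"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(pscore[controls].reshape(-1, 1))
_, nearest = nn.kneighbors(pscore[treated].reshape(-1, 1))
matched_controls = controls[nearest.ravel()]

# The matched comparison balances firm_age, employees and sector, but it
# says nothing about unobserved factors (ambition, networks, management
# practices) that sit in the middle of the Venn diagram below.
```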

As shown in the Venn diagram below, when it comes to removing selection bias we need to be concerned with factors that explain both selection and outcomes.

Figure: Factors affecting selection bias


Source: Adapted from the Magenta Book

What could the factors in the middle be? A number of candidates come to mind, all of them difficult to measure and typically missing from the datasets used to track business outcomes.

For example, membership of business networks can affect business performance but also serves to raise awareness of available policy support. Growth ambition, management and leadership practices, and innovation have also been linked with business performance, and I would expect them to be associated with interest, and success, in applying for voucher schemes.

So when it comes to directly estimating impact, I would happily give up the variables that sit only on the left of the Venn diagram in exchange for any of those in the middle, regardless of how much weaker my model of selection becomes.

The power of RCTs comes from taking all the factors that might otherwise sit in the middle of this Venn diagram and moving them to the right, regardless of whether I can observe them in my data. Random assignment becomes the only determinant of who receives the voucher and who does not.
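The mechanics are almost trivially simple. A minimal sketch (Python, with a hypothetical applicant list) might look like this; because the draw depends on nothing but chance, observed and unobserved characteristics alike are balanced across the two groups in expectation.

```python
# Illustrative only: randomly assign eligible applicants to voucher / control.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
applicants = pd.DataFrame({"firm_id": range(1000)})  # hypothetical applicant list

# Half receive the voucher offer, half do not; nothing about the firm
# (ambition, networks, management practices) can influence the draw.
offers = np.repeat([1, 0], [len(applicants) // 2, len(applicants) - len(applicants) // 2])
applicants["offered_voucher"] = rng.permutation(offers)
```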

This is not to say that other evaluation approaches cannot be applied with success - there are many examples where they have been. But the difficulty of capturing factors in the middle of the Venn diagram was one of the reasons why I came to embrace experimentation.

Experimentation to help make choices

Another reason why we chose vouchers as the policy tool for the workshop scenario was that they can be applied across a range of innovation, entrepreneurship and business policies. Also, whilst the concept is simple - a voucher encourages businesses to take an action by subsidising its cost - there is huge variety in how a voucher scheme can be designed and delivered.

One of the aims for our workshop was to show how experimentation could help inform and validate such decisions.

Given the constraints of a 90-minute workshop, we asked each table to settle on five characteristics of their voucher programme:

  • What types of business can apply?

  • What can the voucher be used for?

  • How much will each voucher be worth?

  • What financial contribution will businesses have to make?

  • What is the name of your voucher programme?


The two voucher trials that Lou and I cited in our earlier blog involved evaluating the impact of providing applicants with a voucher or not. We wanted to show participants how flexible trials can be in addressing other design choices.

One of the most debated features was the contribution that businesses would have to make alongside the public subsidy. We outlined how trials could be used to explore this and help strike the right balance between additionality and cost to the taxpayer - for example, testing the impact of vouchers offering a 100% subsidy against those offering only 50%.
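A two-arm design along those lines is easy to sketch. The snippet below (with entirely invented outcome data) randomly splits applicants between the two subsidy levels and compares average outcomes a year later; the real policy question is then whether any extra impact from the full subsidy justifies its extra cost to the taxpayer.

```python
# Illustrative two-arm comparison: 100% subsidy versus 50% subsidy.
# Outcome data are invented purely for the sketch.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_per_arm = 500

# Hypothetical sales growth one year on for each randomly assigned arm.
growth_full_subsidy = rng.normal(loc=0.06, scale=0.15, size=n_per_arm)
growth_half_subsidy = rng.normal(loc=0.05, scale=0.15, size=n_per_arm)

diff = growth_full_subsidy.mean() - growth_half_subsidy.mean()
t_stat, p_value = stats.ttest_ind(growth_full_subsidy, growth_half_subsidy)
print(f"Extra growth from the full subsidy: {diff:.3f} (p = {p_value:.2f})")
```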

Decisions about the name could also be tested, with the potential to use rapid-fire messaging trials to see which name would generate the most interest and engagement amongst the target population.

One table was unsure whether startups should be eligible, due to concerns that a high failure rate might weaken value for money. The scope for experimentation to test this decision is less straightforward - it is not possible to randomise whether a business is young or established.

However, it would be possible to gather evidence, in effect, by running two trials: one to test the benefits of providing the vouchers to startups and another testing the same vouchers for established businesses. This would enable comparisons of the relative returns to public investment - for example, did offering the vouchers to established SMEs create greater economic value per pound of public investment than offering them to startups?
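With entirely made-up numbers, that back-of-the-envelope comparison might look something like this:

```python
# Made-up numbers only: compare estimated value created per pound of public
# subsidy across the two parallel trials.
voucher_cost = 5_000  # public contribution per voucher (GBP), hypothetical

# Hypothetical average treatment effects on value added per firm (GBP)
effects = {"startups": 3_500, "established SMEs": 6_500}

for group, effect in effects.items():
    print(f"{group}: £{effect / voucher_cost:.2f} of value per £1 of public investment")
```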

As you will hopefully have seen, this is one of a number of blogs capturing insights from IGL2018, with more to follow. To keep up to date with these and with our vouchers story, please subscribe to our monthly newsletter.

With thanks to Teo Firpo for comments.