

Wiring the strategic brain for STI policy

30 September 2025

George Richardson


The last decade has seen a rise in metascience – academic work dedicated to studying how research itself is done. While the first academic investigations into research ecosystems were published several decades ago, today there are entire conferences and institutions dedicated to the science of science. A range of fascinating insights has been produced, appearing in top journals and even mainstream outlets. Some of the findings have significant implications for the social processes that underpin research, bringing practices such as funding allocation and peer review under scrutiny.

All of this work has been enabled by large, open and commercial datasets that capture research and innovation activity, such as OpenAlex and Dimensions. On top of these, many companies and researchers have developed services that create intelligence for policymakers, along with tools offering various lenses on the R&D ecosystem: from bird's-eye views of knowledge flows, to maps of science, to enhanced search tools, to AI-powered talent identification.

Recently, the idea of a ‘strategic brain’ for R&D – a consolidated analytic capability for STI policy – has gained traction, triggering discussions and meetings. Stian Westlake, executive chair of the Economic and Social Research Council, has suggested this should be a joint venture between UKRI, the UK’s public science and innovation funder, and DSIT, the government department responsible for steering STI policy.

Such an arrangement makes sense. It would ensure that the unit is connected to all levels of policy, from national strategic priorities to funding allocation, and that it can access all of the necessary administrative data. The same arrangement has already contributed to the success of the fledgling Metascience Unit.

But questions about the specifics remain. How can it be set up for success? How can it fill the gap for high-quality intelligence on STI that remains, despite the academic research, data and commercial services on offer? Here, I offer six design choices for the neural architecture of the strategic brain that I believe would enable it to make an effective contribution to policymaking.

1. Maximise scope

STI systems involve academic research and publications, private and public funding and investment, business and entrepreneurial activities, training and talent, and a range of decision-making institutions. Supporting the frontier industries of tomorrow requires thinking about the career paths and research interests of students today. Yet most academic and policy analysis in STI is siloed, focusing on particular elements.

There are good reasons for this, and the work produced is still useful, but a new analytic function in government should look to break these barriers. This can be done in two ways. First, the unit should have a governance structure that gives it a clear remit to work across the full spectrum of the policy domain from the start. Second, it should build datasets covering all of the inputs, outputs and activities relevant to STI policy, and link entities within them [1]. This is already being done to some extent in the UK: UKRI recently developed the GtR+ dataset, linking data on publicly funded projects to external academic and company databases, with support from the Innovation Growth Lab.

Only in this way would the unit be able to generate insights that address long-standing policy challenges such as knowledge transfer and research commercialisation.
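To make the linking step concrete, here is a minimal sketch of the kind of entity resolution involved, matching organisation names between a hypothetical grants table and a hypothetical company register using simple string similarity. The records, field names and threshold are illustrative only; production linkage of the sort behind GtR+ would rely on persistent identifiers and far more careful disambiguation.

```python
# Illustrative only: naive name-based linking between two toy STI datasets.
from difflib import SequenceMatcher

grants = [
    {"grant_id": "G-001", "org_name": "University of Exampleshire"},
    {"grant_id": "G-002", "org_name": "Acme Robotics Ltd"},
]
companies = [
    {"company_id": "C-100", "name": "ACME Robotics Limited"},
    {"company_id": "C-200", "name": "Widget Works plc"},
]

def similarity(a: str, b: str) -> float:
    """Crude, case-insensitive similarity between two organisation names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def link_records(grants, companies, threshold=0.8):
    """Return (grant_id, company_id, score) pairs whose names exceed the threshold."""
    links = []
    for g in grants:
        for c in companies:
            score = similarity(g["org_name"], c["name"])
            if score >= threshold:
                links.append((g["grant_id"], c["company_id"], round(score, 2)))
    return links

print(link_records(grants, companies))  # e.g. [('G-002', 'C-100', 0.89)]
```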

2. End-to-end policy integration

A very sad strategic R&D brain would be set up to produce insights that get thrown over a wall, in the hope that they are picked up by some analyst or decision maker on the other side [2]. Not only would this be an inefficient use of time and resources, it would also be based on a false, linear view of the policymaking process. Instead, the unit should be set up to support the full policy lifecycle, starting from any point in the process.

This means helping to explore and understand policy challenges, mapping STI activities to characterise and diagnose them, and running causal studies to understand their drivers. It also involves supporting policy design, for example through modelling the impacts of proposed interventions. Analytics can also be deployed to continually inform decisions during policy delivery, or even be incorporated as part of the intervention itself. Policies can be evaluated by using data and analytics to develop and track novel outcome measures, or to run quasi-experimental studies to investigate impacts.
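As a hedged illustration of the quasi-experimental studies mentioned above, the sketch below runs a simple difference-in-differences regression on simulated data. The outcome name, group sizes and effect size are all invented for the example; it shows the shape of such an analysis, not a prescription for how the unit would evaluate any particular policy.

```python
# Illustrative difference-in-differences on simulated data (not a real evaluation).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200  # simulated institution-year observations per group and period

rows = []
for treated in (0, 1):
    for post in (0, 1):
        # Assumed effect: exposed institutions improve by 0.05 after the policy.
        effect = 0.05 if (treated and post) else 0.0
        outcome = rng.normal(loc=0.30 + 0.02 * post + 0.01 * treated + effect,
                             scale=0.05, size=n)
        rows.append(pd.DataFrame({"treated": treated, "post": post,
                                  "collab_rate": outcome}))
df = pd.concat(rows, ignore_index=True)

# The coefficient on the treated:post interaction estimates the policy effect.
model = smf.ols("collab_rate ~ treated * post", data=df).fit()
print(model.params)
```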

As well as typical data analytics methods, AI tools can contribute to each of these facets of policymaking, for example by synthesising information on the global policy landscape to better understand existing policy options, assisting literature reviews during the design phase, or providing new platforms that support researchers and innovators.

3. The product mindset

As well as supporting every aspect of policymaking, the unit must avoid falling into one of two traps: being highly reactive and rushing out a continuous stream of ad-hoc outputs, or designing and creating analytic outputs in an environment too far removed from the end users [3]. Taking a product approach to serving policymaking would help the unit navigate between these two extremes, keeping it focused on outcomes while remaining responsive.

This means an R&D brain that doesn’t just think for itself, but that is curious about the other brains with which it interacts. Who are they? What outcomes are they trying to achieve? Which of their current processes could be made faster or more effective? What are the entirely new outputs they would like to create but currently cannot? Can their needs be met solely through interacting with data? Where do users need descriptive, prescriptive or predictive outputs?

It is important to note that ‘product’ does not always mean a user-facing digital tool with a shiny interface. Those might be one type of output. Process automation, self-serve data, or research services designed around repeat demands might also be part of the toolkit. The key is to keep users, their needs and their ultimate outcomes in mind, rather than prefiguring a solution, and to make sure that products make contact with reality as early as possible [4].

4. An experimental approach

One of the advantages of having an advanced, strategic analytics capability built into government and public funders would be the ability to test what does and doesn’t work. Data and AI have the potential to speed up or augment policymaking, but in reality we don’t currently know what the impacts of implementing different solutions will be. Running experiments to generate evidence about the outcomes of products developed by the unit would help to prototype and build effective solutions, as well as support global efforts to improve policymaking through data analytics and AI.

To bring this to life, you could imagine a funding body deciding to take a portfolio approach to new grant allocations to increase the heterogeneity of projects. They might decide to support decision making using a data-driven tool. An experiment could be run to test whether using the tool leads to more diverse and novel research being carried out.
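To sketch how such an experiment might be analysed, the code below randomises hypothetical funding panels to use the tool or not, then compares the topic diversity (Shannon entropy) of each arm's funded portfolio. The panels, topic labels, simulated effect and diversity measure are all assumptions made for illustration; a real trial would need pre-registration, power calculations and richer measures of novelty.

```python
# Illustrative randomised comparison of portfolio diversity (all data simulated).
import numpy as np

rng = np.random.default_rng(7)

def topic_entropy(topic_labels):
    """Shannon entropy of a portfolio's topic mix (higher means more diverse)."""
    _, counts = np.unique(topic_labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

# 40 hypothetical panels, half randomised to use the decision-support tool.
topics = list("ABCDEFGH")
assignment = rng.permutation([1] * 20 + [0] * 20)

entropies = {"treatment": [], "control": []}
for treated in assignment:
    # Simulation assumption: the tool nudges panels towards a flatter topic mix.
    weights = np.ones(len(topics)) if treated else np.array([4, 3, 2, 1, 1, 1, 1, 1], float)
    portfolio = rng.choice(topics, size=25, p=weights / weights.sum())  # 25 funded projects
    entropies["treatment" if treated else "control"].append(topic_entropy(portfolio))

for arm, values in entropies.items():
    print(f"{arm}: mean topic entropy = {np.mean(values):.2f}")
```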

An experimental approach doesn’t just mean running randomised controlled trials. Collecting insights from the development and use of the products, through interviews, telemetry analysis and work with early tester groups, will also be important, and will help steer development towards the most useful products.

5. Multidisciplinarity

An R&D brain sitting across government and national funding bodies would be faced with a range of requests and priorities from policymakers, analysts and senior decision makers. Flexibly meeting ad-hoc requirements while simultaneously building infrastructure and products that target high-value problems is a challenge for any analytics department.

One way to tackle this is to move away from having distinct data science, design and engineering teams, and instead form teams according to function. Some teams would focus on outcome areas, with their leaders empowered to direct effort towards the most valuable work, while others would focus on providing rapid-response analysis, infrastructure development, and scaling.

These sub-teams should be multidisciplinary. Rather than being heavy on data scientists, the unit should create roles that bring in wider analytical skills from economics and statistics, as well as key product development capabilities such as design and engineering. Qualitative researchers should also be part of the mix, in order to deeply understand impacts.

Additionally, the unit could play a valuable role beyond the confines of government through field-building activities. Running fellowship schemes, convening discussions, and even hosting conferences would be valuable for bringing in fresh thinking while nurturing a wider community.

6. New models for scale

In the model considered here, central government and national public funders would be the main beneficiaries of the strategic brain. But the challenges addressed by the unit would be of wider significance, with many potential stakeholders, from regional governments to universities and private institutions. To maximise the benefit to STI outcomes, there should be scaling pathways for products developed by the unit.

This could be achieved through small, established steps, such as open sourcing methods or data, or through more ambitious technical initiatives such as setting new open standards for STI data. Investments in infrastructure and innovations in governance could lead to even wider advances, such as the spin-out of a new data and analysis platform, co-owned and governed by a multi-stakeholder steering group.

STI policy is ripe for these kinds of boundary-breaking scaling initiatives because so many organisations grapple with an overlapping set of questions, and can use a common set of data to address them.

What next? Wiring the brain to the body

Here, I have argued for a comprehensive approach to setting up a strategic brain for STI, calling for it to be plugged into the full spectrum of policymaking, equipped with the skills to ensure it provides high value for its stakeholders, and set up to generate positive spillovers.

So what should the strategy be to bring the unit into existence? One mistake would be to simply pull together a new team of data scientists and hand them the cross-cutting analysis tasks that have been sitting around. But it might also be a mistake to make a large investment in an untested unit and model.

One winning approach would be a staggered investment in setting up a small multi-functional unit, equipping it with the diverse range of skills needed to deliver across the programme set out here. This would start with establishing the leadership and team members needed for strategy development. Their task would be to rapidly embed within the policymaking process, build relationships with key stakeholders and identify valuable intervention points. Ideally these would be thorny policy challenges that cut across levels of policymaking and connect strategic ambitions to downstream decisions in STI funding or policy.

With a shortlist of challenges, the fledgling unit would then begin to research, design and prototype solutions. Over its first year, it would begin to deliver value, but also hone its cross-cutting, multidisciplinary approach, putting it in a position to be scaled.

Footnotes

[1] Eventually, the unit should make as much of this data as possible accessible to researchers through UKRI programmes, to encourage a diverse range of analyses and methods to be developed, which could then be re-ingested and operationalised by the unit.

[2] To be clear, no one has suggested this, but it is quite possible that a team loaded with researchers would not develop a good theory of change for policy impact.

[3] The latter in particular seems to have led to a generation of data-driven tools developed outside of government which, while technically impressive, don’t appear to serve an obvious policy purpose or use case. This includes maps of science, huge exploratory dashboards, and chatbots that let you ‘talk’ to papers.

[4] The Incubator for AI is demonstrating this approach effectively.