Joint Programming Initiatives (JPIs) continue the trend of under-performing EU instruments aimed at leveraging substantial national resources for research and innovation. But have the JPIs, the ERA-NETs, and the SET-Plan's Berlin-Model instruments failed, or are we just measuring them by the wrong yardstick?

Last month the evaluation report of the Joint Programming Initiatives (JPIs) was published, and the conclusion is fairly clear: the JPIs have not delivered as expected, and it is questionable whether they will ever receive the financial support from member states needed to do so.

Is that an unfair simplification of the 90-page evaluation report? A little, perhaps, but not much.

The report states that:

“The JPIs were intended to be a new approach to joint programming that would operate at a much higher political level than the ERA-NETs.” (p.53)

and concludes that:

“The analysis of investment in Joint Calls (Section 3.2) suggests that in most cases the level of investment so far has been no greater than for the best ERA-NETs but this is likely to increase in the future.” (p.45).

The last claim – that investment is likely to increase in the future – is questionable. On page 39 the evaluation report divides the JPI member countries into three groups and asks what their future contribution will be. The report's pie diagrams show that only Group C, the countries that currently participate only marginally in the JPIs, indicates a “moderate increase in participation”. That is hardly enough to be a financial game changer for the JPIs.

After years of meetings, strategies, planning, programming and implementing first calls, the results are – all things considered – not what was envisaged when the idea of JPIs was launched in the 2008 Communication from the European Commission:

“Joint Programming has the potential to become a mechanism that is at least as important as the Framework Programmes in the European research landscape, and to actually change the way in which Europeans think about research.” (p.2, COM(2008) 468 final)

Clearly, that has not happened.

Will it happen?

The outlook for financial contributions from member states does not bode well. And perhaps more discouraging, the main issues to be addressed listed in section 7.1 of the evaluation report read like a long checklist of what usually holds back initiatives to leverage national research funding: lack of commitment, cumbersome administrative setups, lack of awareness at member state level…

Should we just drop the idea that the European Commission can foster a strong collaboration and coordination of nationally funded research activities?

Not necessarily. In the JPIs, the ERA-NETs, the ERA-NET Plus actions, the SET-Plan initiatives, the Article 185 initiatives and so on, good results have been produced. Perhaps not at the scale expected, and sometimes with far too high an administrative overhead, but collaboration between nationally funded programmes matters.

Perhaps we just need to adjust our expectations, appreciate the difficulty in doing this and acknowledge the many important small steps that are taken to improve coordination across borders?

How do we do that?

At least since FP7 (and that is as far back as my Framework Programme experience goes), the European Commission has argued that the FP represents only 6-10% of public research funding in Europe. Consequently, member states need to pool significant additional resources from national programmes if Europe wishes to overcome research fragmentation and strengthen the continent's global competitiveness. The JPIs are one among many initiatives aimed at creating a real leverage effect from H2020 resources.

However, the figure of 6-10% of public funding includes the financing of social security, overhead and the basic funding that research organisations receive, which member states are unlikely ever to include in European coordination efforts.

The relevant figure to look at is therefore the share of competitive funding in Europe that H2020 represents.

According to the background paper prepared by the Swedish innovation agency VINNOVA for the LundRevisited conference in 2015, “the European Framework Programme provides more than 35% of competitive research funding available in Europe” (p.2).

If H2020 represents 35% of the competitive funding available, how much more coordination do we want before we call it a success? Should it be 40%, 50% or 60% of the total competitive funding from public sources that is coordinated at the European level?

And when evaluating the success of new initiatives, do we judge them by their contribution to the target for overall funding, or by how much they improve on the current state of affairs?

Let me give you an example.

A few years ago, the European Energy Research Alliance (EERA) tried to establish so-called Berlin-model projects: projects in which 3-5 research organisations come together to frame a joint project, but each partner is funded by its own national agency. Think of it as a kind of single-project ERA-NET, defined bottom-up.

When we asked around for examples of this among the 200 EERA partners, we found only two. There are probably more, but for the sake of argument let's say that there have been 10 such projects over the last 5-10 years. If that is the case, then adding just one project would increase that number by 10%.

Clearly, such a project would have no noticeable effect on overall alignment in Europe, but it would be a significant increase compared to the current level of alignment achieved with such instruments.

So by what yardstick do we measure success?

I’ll get back to this in forthcoming posts, but I am very interested to hear your views.

 

PS: My apologies to those who are not at home in the cacophony of European instruments for national alignment. Googling and reading a couple of websites should do the trick, though. Interested newcomers can start here.
