Digital Innovation Hubs in H2020: do you know what they are?

The H2020 Work Programme 2018-2020 is open; if you’ve had a look inside, you may have come across something called a ‘Digital Innovation Hub’ (DIH). Ever wondered what these are? I have.

Of course, there is information on the European Commission website, and even some of the active DIHs provide insight into what they are, but I still found it difficult to see what to do with them in some of the Work Programme Call topics. And as a consultant, I should – of course – know. So here is my summary in bullet points. Additions and corrections are welcome, but this post is primarily meant to get other people started thinking about a DIH proposal too.

Digital Innovation Hubs (DIH) in the H2020 Work Programme

Background to Digital Innovation Hubs

  • DIHs are one-stop shops where SMEs can go if they have an innovation they wish to test, validate or upscale but lack the knowledge or the technical capacity/infrastructure to do it themselves.
  • DIHs are supposed to act as the centre of a regional ecosystem around a particular theme/domain (examples: robotics, or additive manufacturing).
  • DIHs are primarily created by regional authorities and/or regional development agencies that have identified a particular theme/domain in their Smart Specialisation Strategy (S3). Some 450 DIHs are already active across Europe (see: http://s3platform.jrc.ec.europa.eu/digital-innovation-hubs-tool).
  • Most regional DIHs are funded at national/regional level through the Structural Funds. An operational DIH is a regional multi-partner collaboration of universities, RTOs, industry representatives, a regional development agency and a regional government authority. In addition to facilitating the testing/validation of innovations by SMEs, a DIH also has the explicit aim of creating a network and thus helping to improve the overall competitive position of the companies in its region within the theme/domain.
  • DIHs can also be set up via a direct application to the ESIF (the “European Structural and Investment Funds”, which combine the Structural Funds (ERDF, ESF), the Cohesion Fund and the agricultural and maritime funds (EAFRD, EMFF)).
  • Several so-called DIH initiatives are also already running. These initiatives are funded directly by the European Commission as Coordination and Support Action (CSA) projects in H2020. Their main aim is to help create further regional DIHs, and they provide training and support for the organisational set-up of a new DIH. One such example is i4MS.

DIHs in the H2020 Work Programme 2018-2020

The EC provides direct support for the development of DIHs (€100 million p/a) within the H2020 Work Programme 2018-2020 in the following two ways:

  1. As CSA consortium projects, whereby DIHs in the same theme/domain but in different regions work together to (1) publicise (disseminate) the innovation results of the companies that worked with them, and (2) create a cross-border network on a theme/domain which allows companies to expand their own networks and work together to maximise synergies.
  2. As IA consortium projects. In this case, the EC stipulates in the Work Programme in which theme/domain it expects the creation of a new DIH. The difference from the regional DIHs described above is that here we are not talking about a regional DIH, but about a cross-border (virtual) DIH of knowledge and capacity. In practice this means universities, RTOs and large companies in different countries jointly providing access to their knowledge and infrastructure for the validation/testing of innovations (as suppliers!). The level of ambition is high: the consortium must be highly specialised (knowledge, technical capacity) and/or have very expensive technical equipment available which smaller companies could never afford on their own. I find it likely that such a consortium would also comprise one or more regional DIHs in the same theme/domain, so that there is a natural link to regional industrial activities.

One thing very specific to IA-DIHs is that they must transfer at least 50% of their funding to the SMEs who want the consortium to perform a test/validation using its infrastructure. The model for doing this is the ‘Third Party’ support model, based on section K of the General Annexes to the Work Programme (luckily, I already have experience working with this model, which helps in understanding how it works). In other words: the DIH consortium will need to publish open calls to which companies can apply. The consortium then decides which of the consortium members is best qualified and available to perform the testing/validation.
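To make the budget logic of that model concrete, here is a minimal sketch in Python. Only the 50% floor comes from the Work Programme text above; every figure and name below is a hypothetical illustration of mine.

```python
# Minimal sketch of the IA-DIH 'Third Party' budget logic described above.
# The 50% floor comes from the Work Programme; every other figure is made up.

EU_GRANT = 8_000_000        # hypothetical total EU contribution to the consortium
MIN_SME_SHARE = 0.5         # at least half must be cascaded to third-party SMEs

sme_call_budget = EU_GRANT * MIN_SME_SHARE      # reserved for the open calls
consortium_budget = EU_GRANT - sme_call_budget  # left for the consortium's own work

# Each test/validation awarded through an open call draws on the reserved pot.
awards = [("SME Alpha", 60_000), ("SME Beta", 55_000), ("SME Gamma", 70_000)]
awarded = sum(amount for _, amount in awards)

print(f"Reserved for open calls: {sme_call_budget:>12,.0f} EUR")
print(f"Awarded so far:          {awarded:>12,.0f} EUR")
print(f"Remaining call budget:   {sme_call_budget - awarded:>12,.0f} EUR")
```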

A DIH-IA grant application does not have to be directly connected to an existing regional DIH, but I think this is advisable. The IA consortium should – in any case – work toward the creation of new regional DIHs in that same theme/domain, so that (1) more companies gain access to infrastructure and knowledge they otherwise could not afford, and (2) more companies gain access to a much bigger network of other regionally based SMEs that may provide new innovation opportunities.

The final question that remains: who should apply for funding in H2020 topics related to DIHs? In my view, universities are probably best placed to lead these projects. A consortium should also contain one or more RTOs and one or more regional DIHs (as evidence that the proposal is founded in existing S3 initiatives). Alternatively, it could comprise a regional authority/development agency that has the stated intention (in its regional development plan) to create a regional DIH. Lastly, the consortium should also comprise several larger companies that can act as suppliers of knowledge or infrastructure to carry out the testing/validation/upscaling for the applicant SMEs.

Proposed schedules for Interim Payments in FP9 are a recipe for more tensions within project consortia

Many scientists across Europe with high research and innovation ambitions will have started the new year with some close reading of the recently published – and last – H2020 Work Programme. In the meantime, the European Commission, as well as hundreds (if not thousands) of lobbyists, is already heavily engaged in discussions on the contours of the next European research programme, currently still called Framework Programme 9 (FP9).

While the most sensitive part of these discussions – the budget for the whole programme – is only about to start, with many ideas, suggestions, rumours and counter-rumours happily making their rounds through the lobby carrousel, some other, more concrete and even operational, approaches are already being actively communicated by the Commission to the various national contact points.

One of these approaches concerns the Commission’s intention to increase the use of lump-sum financing (*) of projects. The basic idea is that this mechanism – which includes maintaining the flat 25% rate for the calculation of indirect costs (i.e. overheads) – will continue the Commission’s drive toward simplification of financial and administrative project management, as well as reduce the risk of fraud from wrongly claimed personnel costs (in my experience mainly caused by the cumbersome nature of the current process).
According to the current state of play, lump sums are to be agreed during contract negotiations between the EC and the consortium, meaning that actually incurred costs are no longer relevant and rules for eligibility and financial audits will become a thing of the past. Mind you, this does not mean that there will be no rules at all: the Grant Agreement will likely still specify who will do the work, what the grant attribution to individual partners is, and who the linked Third Parties and subcontractors are. The Commission slides say that financial liability would be calculated based on each beneficiary’s share of the lump sum per work package.

Analysing the available Commission presentations, I can follow and agree with most of their arguments for increasing lump-sum project financing as a grant instrument, but there is one aspect which I find quite concerning. I am referring to the Commission’s notion that in future, interim payments to funded consortia may no longer be made at predefined points in time (except for the pre-financing, of course), but only after specific milestones have been achieved. Mind you: in the Commission’s view a milestone is not like any of the milestones that we are used to in H2020 projects. No, in FP9 interim payments would be directly linked to the completion of a particular work package in the project. In practice this would mean that an interim payment would only take place:

  1. if ALL partners have declared that their individual share of the lump sum allocated to a specific work package has been used up;
  2. if ALL partners have declared that ALL of their activities in the work package have been fully completed, meaning that the work package as a whole is complete;
  3. if the EC has approved the stated completion of the work package as a milestone.

Interim payment in FP9 linked to completion of work packages

So if a work package is not completed according to the EC requirements in the Grant Agreement, the current view of the EC is that this will reduce the total grant amount: the consortium loses the share allocated to that work package.
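To see how restrictive this trigger is, here is a minimal sketch of the payment logic as I read the Commission slides. The data structures, partners and amounts are entirely hypothetical; the three conditions are the ones listed above.

```python
# Sketch of the proposed FP9 interim-payment trigger (my reading of the
# Commission slides). Partners, amounts and flags are hypothetical.

from dataclasses import dataclass

@dataclass
class PartnerShare:
    partner: str
    lump_sum_share: float  # this partner's slice of the WP lump sum (EUR)
    share_used_up: bool    # condition 1: share declared fully used
    work_completed: bool   # condition 2: WP activities declared finished

def interim_payment(shares: list[PartnerShare], ec_approved: bool) -> float:
    """Pay the full WP lump sum only if all three conditions hold, else nothing."""
    all_used = all(s.share_used_up for s in shares)
    all_done = all(s.work_completed for s in shares)
    if all_used and all_done and ec_approved:  # condition 3: EC milestone approval
        return sum(s.lump_sum_share for s in shares)
    return 0.0  # one lagging partner blocks the payment for everyone

wp3 = [
    PartnerShare("University A", 250_000, True, True),
    PartnerShare("SME B",        120_000, True, False),  # still finishing its task
    PartnerShare("RTO C",        180_000, True, True),
]
print(interim_payment(wp3, ec_approved=True))  # -> 0.0: nobody gets paid yet
```

The point of the sketch is the last line: a single partner that has not finished blocks the payment for everyone, which is exactly where the tensions described below come from.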

Wow! Having been an H2020 project coordinator myself, I can just picture the additional consortium-internal tensions and fights an FP9 coordinator will be faced with. I give two examples below, but I am sure others will be able to think of more scenarios:

  • In any project there will be work packages which will – by design – take longer to complete. I wonder how SMEs will be able to pre-finance a much longer stretch of company costs before they are reimbursed through the next interim payment. How do we avoid them plainly refusing to participate in work packages that take longer to complete? Could we then be moving toward a situation where academic partners start to act as lenders to their SME consortium partners, just to keep them on board?
  • What will happen if a partner (inadvertently) causes a significant delay in the technical completion of a given work package, with a potential ripple effect on subsequent work packages? Would the other consortium members not put so much pressure on this partner that it may be tempted to rush its work, and thereby end up delivering a sub-par technical result?

All in all, the proposed structure for FP9 interim payments looks to me like a recipe for unnecessary tensions between consortium partners and can only harm the quality of the research and innovation FP9 is supposed to generate. The Commission may think that closely linking payment to work-package completion will make projects finish faster, but show me evidence that there is (still) a significant number of H2020 projects that overrun their planned duration. In previous programmes (pre-FP7), long project extensions were approved fairly regularly, but we all know that in H2020 your chances of getting Commission approval for an extension beyond six months are already almost zero. I therefore fail to see why the schedule for interim payments cannot stay the way it is in H2020. To me, the risk of increased tension between project partners does not weigh up against the benefit of keeping the balanced management of collaborative projects that we have now. “If it ain’t broke, don’t try to fix it!”

Roy Pennings

(*) Please note that ‘lump sums’ already feature in a limited fashion in H2020. This article is based on presentations by the European Commission on the newly proposed Lump Sum Pilot in H2020 (and a revised Model Grant Agreement), which will then likely become the blueprint for FP9 lump-sum financing.

“Horizon 2020 wordt een loterij” (“Horizon 2020 is becoming a lottery”)

Interview originally printed in the Dutch science newsletter “Onderzoek Nederland” (September 2015 issue, nr. 372) and based on an earlier article I wrote myself. In the interview I explain the real risk that getting H2020 funding under the Industrial Leadership and Societal Challenges pillars is turning into a lottery. Many researchers – particularly those at the mid-stage of their career, where the pressure to publish is highest – opt to participate in as many proposals as possible, hoping that at least one will succeed. The solution for these pillars is to focus the evaluation not on scientific excellence, but on the participation of industry (i.e. Impact). After all, these specific pillars are meant to solve real-world problems and create real-world jobs for Europe. Only if proposals score equally on Impact should the evaluators consider scientific innovation to decide who gets funded.

“Hoe de feedback verdween uit Brussel” (“How the feedback disappeared from Brussels”)

Interview originally printed in the Dutch science newsletter “Onderzoek Nederland” (June 2015). The interview (in Dutch) focussed on the lack and poor quality of feedback in Evaluation Summary Reports (ESRs) for stage-1 and stage-2 H2020 proposals. Right now, applicants do not get an ESR in stage-1. Why not, if the evaluators’ comments could make the proposal better in stage-2? And why is the information in stage-2 so limited? How is an applicant to judge whether he/she should resubmit the proposal if the suggestions and comments for improvement are so vague? ESRs should help applicants, not confuse them, is my point.

ScienceBusiness conference on future of H2020

Commissioner Moedas talking about Open Innovation during the ScienceBusiness H2020 conference (Feb. 2016)

Last week, Brussels-based ScienceBusiness held the 4th edition of its Horizon 2020 conference in Brussels. The central theme of the day was the concept of “Open Innovation” as the way forward in European research.
Commissioner Moedas shared the latest thoughts of the European Commission on the European Innovation Council (EIC). Other speakers (a non-exhaustive list!) were former CERN Director-General Rolf-Dieter Heuer, now Chair of the EU Scientific Advice Mechanism, and Jean-Pierre Bourguignon, President of the European Research Council. Nathan Paul Myhrvold, formerly Chief Technology Officer at Microsoft and co-founder of Intellectual Ventures, compared US and EU approaches to innovation and gave examples of how a more entrepreneurial attitude among the new generation of scientists is already radically reforming our economies and societies. Over 230 people from academia and industry participated in the conference, and several thousand watched the live-stream!

In a separate workshop I moderated the ScienceBusiness Network meeting, where we shared best practices on H2020 research proposal development and implementation. The Network members discussed do’s & don’ts and identified possible improvements for the next research Framework Programme (FP9). On behalf of the network I subsequently participated in the panel discussion with Robert-Jan Smits, the Director-General of DG Research at the Commission (see picture above). Smits outlined several changes to the further execution of the H2020 funding calls, as well as some further simplifications in the implementation of projects, for example on the use of timesheets. Smits admitted that despite the overall success of the programme and the endorsement of the application process by the European research community, there are still some significant improvements possible, in particular in relation to the evaluation process. Most Calls will in future follow a 2-stage process, whereby the Commission’s aim is a second-stage success rate of around 1:3. Following a question from the audience, Smits confirmed that he would re-assess the Commission’s earlier decision to scrap the so-called “Consensus Meeting”. The audience felt that the Consensus Meeting plays an important part in ensuring that the assessments of all evaluation panellists are taken into account before the final ranking. Smits also promised to check whether in future the Evaluation Summary Report can be a little more specific and detailed than is currently the case.

The video-cast can still be accessed by clicking on this link: http://www.sciencebusiness.net/events/2016/the-2016-science-business-horizon-2020-conference-4th-edition/

Remarkable conclusions from High Level Expert Group on Economic Impact of FP7

(Please note that the article below is a personal opinion and does not reflect the view or position of any organisation I am working with/for.)

Three weeks ago a High Level Expert Group published the ex-post evaluation of the 7th Framework Programme for Research (FP7), the predecessor of Horizon 2020. The pan-European programme ran for 7 years, spending about €55 billion across a multitude of interlinked research programmes. According to the report, 139.000 proposals were submitted, of which 25.000 actually received funding.

Whilst this report confirms what a great number of other reports have already said about the programme’s ability to advance science and create international collaborations, its assessment on the economic impact of the programme is – to say the least – remarkable.

Let’s start with a simple point. In chapter 6 of the report, “Estimation of macro-economic effects, growth and jobs”, the Expert Group relies on two prior reports (Fougeyrollas et al., 2012 and Zagamé et al., 2012) to estimate that “…the leverage effect of the programme [stands; RP] at 0,74, indicating that for each euro the EC contributed to FP7 funded research, the other organizations involved (such as universities, industries, SME, research organisations) contributed in average 0,74 euro”.
What a statement to make: after all, everybody knows that in order to get access to FP7 funding, each organisation needed to provide a substantial cash or in-kind contribution. The Expert Group then continued by arguing that “…the own contributions of organizations to the funded projects can be estimated at 37 billion euro. In addition, the total staff costs for developing and submitting more than 139.000 proposals at an estimate of 3 billion euro were taken into account. In total, the contribution of grantees can be estimated at 40 billion euro”. Again, considering that organisations were required to chip in at least 30% of their own money to access the 70% of FP7 funding, one can only conclude that this amount was – in effect – a donation by these organisations and should not be confused with the actual impact of the FP7 programme, as the Expert Group would have you believe.

But there is more. Using this flawed argument, the Expert Group subsequently draws the conclusion that “The total investment into RTD caused by FP7 can therefore be estimated at approximately 90 billion euro”. Huh?! So if I take the €50 billion of FP7 given by the Commission and add to that the contribution of the successful applicant organizations (as the EC doesn’t fund 100%), I end up declaring that FP7 has contributed €90 billion to the EU economy?! I don’t think so. Clearly, when doing their calculations, the Experts considered researcher salaries as the main metric. This is confirmed in the report: “When translating these economic impacts into job effects, it was necessary to estimate the average annual staff costs of researchers (for the direct effects) and of employees in the industries effected by RTD (for the indirect effects). Based on estimated annual staff costs for researchers of 70.000 euro, FP7 directly created 130.000 jobs in RTD over a period of ten years (i.e. 1,3 million persons‐years).” If that’s the case, then why did we bother with FP7 at all… you could have achieved the same economic result by introducing a simple European tax reduction programme for employed researchers.
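For those who want to check the arithmetic, the following few lines reproduce the Expert Group’s numbers as I reconstruct them from the quotes above. The figures are theirs; the chaining of the calculation is my own reading.

```python
# Back-of-the-envelope reconstruction of the Expert Group's job arithmetic.
# Figures are quoted from the report; the chaining is my own reconstruction.

ec_contribution = 50e9     # EC funding of FP7 (EUR)
own_contributions = 37e9   # grantees' own co-funding (EUR)
proposal_writing = 3e9     # staff cost of writing the 139.000 proposals (EUR)
total_investment = ec_contribution + own_contributions + proposal_writing
print(f"'Total investment caused by FP7': {total_investment / 1e9:.0f} billion EUR")  # 90

annual_staff_cost = 70_000  # assumed average annual researcher staff cost (EUR)
person_years = total_investment / annual_staff_cost
print(f"Person-years: {person_years / 1e6:.1f} million")   # ~1,3 million
print(f"'Jobs' over ten years: {person_years / 10:,.0f}")  # ~130.000
```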

In fact, rather than the Experts’ grand conclusion that “Considering both ‐ the leverage effect and the multiplier effect ‐ each euro contributed by the EC to FP7 caused approximately 11 euro of direct and indirect economic effects”, I would take the numbers to mean that, thanks to the leverage effect and the multiplier effect, the European Commission managed to convince organizations during FP7 to give away 11 euros for each euro which it put into FP7. Furthermore, contrary to the Expert Group’s conclusion, I would at least argue that their multiplier should also have taken into account all the hours spent by applicant consortia writing proposals that did not get funding, even though they were above the funding threshold. As I showed in a previous article, the value of those hours amounts to roughly 1/5 of the total H2020 budget. In simple economic terms: those hours in service of the European economy could have been spent better.

FP7 wás a success: like no other research programme, it encouraged researchers from academia and industry alike to consider and test new ideas and start working together. The problem with FP7 was that ‘knowledge deepening’ did not (semi-)automatically mean that research results were taken up by industry and translated into technologies and innovations. And that is what FP7, to a large extent, aimed to do. So in that sense, the only conclusion I can draw is that FP7 was a good effort. Its most important achievement, for me, was that it paved the way for a new kind of thinking inside the Commission which put concrete and measurable impacts for science and society, for jobs and competitiveness, at the centre. H2020 is the result of understanding that FP7 had some design faults which needed correcting. All the more interesting to read that Robert-Jan Smits’ (Director-General for Research at the Commission) solution for stemming the current tsunami of proposals being submitted to H2020 is to put even more emphasis in the evaluation process on expected – measurable – Impact results. Go Commission!

H2020: Time to evaluate the evaluators?!

Picture source: mymanagementguide.com

A few weeks ago the European Commission published a list with the names of all the evaluators it used during the first year of H2020[1]. The list contains the names of all evaluators and the specific H2020 work programme in which they judged proposals (but not the topics!). In addition, it shows the “Skills & Competences” of each evaluator as he/she entered them in the EC’s Expert Database.

It is not the first time the EC has published this type of list, of course, but the publication comes at a poignant moment, as success rates for submitted proposals are rapidly declining and many researchers are wondering whether it is still worth developing anything for H2020 at all. In previous articles I have already written about some of the main reasons behind this decline and possible solutions.

In this article I want to focus on the actual H2020 evaluation process, as for many researchers it is becoming very frustrating to write top-level proposals (as evidenced by scores well over the threshold, oftentimes higher than 13.5 points) and then see their efforts go under in short, sometimes seemingly standardised, comments and descriptions. Is it just the brutal trade-off between the increasing number of applications and a limited budget, or do the evaluation comments and points hide another side to the process? Why are some people starting to call the H2020 evaluation process a ‘lottery’? Is there some truth to their criticism? How can the European Commission counter this perception?

I am sure most researchers have nothing against the EC’s basic principle of using a “science beauty contest” to allocate H2020 funds. There is a lot of discussion, however – also on social media – about the selection process. Part of that discussion concerns the level of actual experience and specific expertise of those who do the selecting. Simply put: are the evaluators truly the most senior qualified experts in the H2020 domains they are judging proposals in, or is a possibly significant part of the panels comprised of a mix of younger researchers – using the evaluation process to learn about good proposal writing – and mid-level scientists who are good in their specific field, but also have to evaluate proposals that are (partly) outside their core competence?

Before I continue, please accept that this article is not intended in any way to disparage the expertise of the people who have registered in the FP7 and H2020 Expert Database and have made valuable time available to read and judge funding proposals. I myself am in no position to make statements about what constitutes sufficient research expertise to be acceptable to peers when judging their proposals. The only point I will be making below is that if the EC wants to keep the H2020 evaluation process credible (and not stigmatised as a ‘lottery’), it needs to demonstrate to the community that it is selecting evaluators not just on availability, but on their real understanding of what top-class research in a given field means.

So let me continue with my argument. H2020 specifically states that it is looking for the most innovative ideas from our brightest researchers, developed by the best possible consortia. If that is the case, then – knowing that you cannot be an expert evaluator if you are part of a consortium in that same funding Call – quite a few of the “best possible experts” will by default have disqualified themselves from participation in the evaluation process. It is also widely known that not all top-level experts want to involve themselves in evaluation, either because to them it is not sufficiently important or because of other (time) constraints. As a result, groups of typically 3-5 evaluators will consist of a combination of real experts in that specific research domain and other evaluators who come from adjacent or even (very) different disciplines. Part of a given evaluation committee may therefore consist of – for lack of a better word – what I call ‘best-effort amateurs’. Again: I have no doubt these are good researchers in their own field, but here they may be asked to judge projects outside of their core competence.

Now you might think I am making these observations because I want to build a quick case against the evaluation process as it is. That is not true. What I said above reflects views and comments that have been made by many researchers, on and off stage, both from academia and from industry. It is thát feeling of unease and not knowing that contributes to the more general and growing perception that H2020 is turning into something of a lottery. For that reason alone, it is very important that the Commission now shows that its choice of combinations of evaluators is based on specific merit and not on general availability.

One way to refute the perception of a lottery is – in my mind – for the EC to perform a rigorous analysis (and subsequent publication!) of the scientific quality of the people it used as evaluators. How to do that? A start would be to assess each evaluator against the number of publications ánd the number of citations he/she has in a given research field. Some form of weighting can be included to allow for the type and relative standing of the journals in which the publications appeared. The assessment could also include the number of relevant patents a researcher holds. This should give a fairly clear indication of the average level of research seniority among the evaluation panels. One issue here is that very experienced evaluators from industry may have fewer published articles, as publications are often considered less important (or even less desirable) by the companies they work for. So the criterion of publications and citations should be applied primarily to academic evaluators. The same essentially holds for patents, as ‘industry patents’ are often registered and owned by the company the researchers work for (or have worked for in the past). In other words: the analysis will probably not deliver a perfect picture, but as academics make up the majority of the evaluation panels, it should still give a fairly good indication.
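As an illustration only, a seniority analysis along these lines could start as simply as the sketch below. Every field name and weight is an assumption of mine, purely to show the shape of such a metric; a real exercise would obviously need a properly validated bibliometric model.

```python
# Illustrative-only sketch of an evaluator-seniority metric along the lines
# suggested above. All weights and fields are my own arbitrary assumptions.

from dataclasses import dataclass

@dataclass
class Evaluator:
    name: str
    is_academic: bool
    publications: int
    citations: int
    patents: int
    journal_weight: float = 1.0  # >1.0 for higher-standing journals

def seniority_score(e: Evaluator) -> float:
    if e.is_academic:
        # Academics: publications and citations, weighted by journal standing.
        return (2.0 * e.publications + 0.1 * e.citations) * e.journal_weight
    # Industry evaluators publish less (often by employer policy), so lean
    # on patents instead and keep publications as a secondary signal.
    return 5.0 * e.patents + 1.0 * e.publications

panel = [
    Evaluator("Academic, senior", True, 80, 2400, 2, journal_weight=1.3),
    Evaluator("Academic, junior", True, 12, 150, 0),
    Evaluator("Industry expert", False, 5, 60, 9),
]
for e in panel:
    print(f"{e.name:>18}: {seniority_score(e):7.1f}")
```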

The next step could be to check whether the evaluators were actually tasked to assess only projects in their core research domain, or whether they also judged projects as ‘best-effort amateurs’. The EC could do that by matching the publications/citations/patents overview against the list of “Skills & Competences” the researchers listed in the Expert Database and the specific H2020 topics for which they were asked to be an evaluation panel member. Once you know that, you can also check whether the use of ‘best-effort amateurs’ happened because there simply were not enough domain-specific top experts available, or whether the choice was based on prior experience with, or the availability of, particular evaluators. I would find it strange should that be the case; after all, the EC database of experts exceeds 25.000 names. The final step – in my view – would then be for the Commission to publish the findings – the statistical data should of course be anonymised – so that the wider research community can see for itself whether the selection of proposals is based on senior research quality linked to the H2020 proposal domain, or not.

I am sure my suggestions on how to analyse the H2020 evaluation process are not complete (or even fully scientifically correct). Things like the number of proposals to score and the average time spent reading individual proposals will probably also have an effect on scores. So what is the time pressure the evaluators are under when they do their assessments, and are there possibilities to reduce that pressure?

Please take this article as an effort to trigger further discussion, leading to appropriate action from the EC, so that our top researchers do not stop seeing H2020 as a worthwhile route to scientific excellence.

So, European Commission: are you up to the challenge? If not, I am sure there will be somebody out there to take up the gauntlet… I will be most interested in the results. Undoubtedly to be continued…

[1] http://ec.europa.eu/research/participants/portal/desktop/en/funding/reference_docs.html#h2020-expertslists-excellent-erc

‘Impact’ should be the leading evaluation criterion in funding proposals under H2020 SC and LEIT

Illustration taken from www.Buzzle.com.

In last week’s interview published in the Dutch science newsletter “Onderzoek Nederland” (September 2015 issue; click here to access the text of the published article) I argue that the only way to keep the Societal Challenges (SC) pillar and the Industrial Leadership (LEIT) pillar viable is to change the order and emphasis of the proposal evaluation. Instead of “scientific excellence” as the first evaluation criterion, emphasis should mainly be on which proposals have the most meaningful “industry participation” (i.e. Impact), including a detailed commercial roll-out exit strategy. After all, these specific pillars are meant to solve real-world problems and create real-world jobs for Europe. Only if proposals score equally on Impact should the evaluators – in my view – consider scientific innovation to decide who gets funded.

Now, many may argue that science comes first: otherwise you do not have anything to manufacture or build in the first place. I don’t necessarily disagree with that view, but if you take a close look at the expected results listed in the proposal Calls, it is clear that SC and LEIT are essentially about applied research and not so much about fundamental research, for which there is a separate pillar. Combining existing science and technology and applying them through prototyping and demonstrations is what companies do best. If they participate in projects in a serious way, with their own knowledge and capacity, I think the evaluators can safely assume there is a real-world demand and a route to that market. So if proposals are judged on the Impact criterion first (or with a heavier weighting compared to the other sections), then many proposals which start from scientific curiosity can quickly be filtered out of the evaluation process.

Yes, you might say, but that will only cause more pressure on the first H2020 pillar (Excellent Science). I agree that this will probably happen, but in that specific funding pillar it is only good and natural that only the absolute top X% get funded. That’s what scientific curiosity is all about! At the same time, I also fully support the view of many that national governments should restore (and even increase) national funding for scientific research to pre-2008 levels, so that the research phases between fundamental and applied are also covered. I think there was never a realistic chance that H2020 could cover and finance all aspects of European research. It cannot replace national funding, so the responsibility for maintaining Europe’s scientific excellence must lie mostly with the national member states.

The interview – which is in Dutch – also covers a few other possible ideas to improve H2020 success rates, among which the suggestion that applicants whose proposals do not even meet the EC’s threshold (on any of the three evaluation sections) could be told they cannot resubmit for two years. My argument is that even if the resubmission proves better than the first-time submission, these projects are unlikely to shoot from below-threshold to the top 10-20% that are so good that they have a realistic chance of being funded.

H2020 FET-Open: a 1,8% chance of getting funded

Picture: Janina Vilén

A former colleague of mine, Frederik Vandecasteele, sent me the submission and success statistics for the latest FET-Open call for proposals (FETOPEN-2014-2015-RIA). The results are outright painful and cannot for a minute be presented by the European Commission as further evidence of the success of the H2020 programme and its evaluation system. Have a look:

  • number of proposals submitted: 670
  • number of above-threshold proposals: 326
  • topic budget for FETOPEN-2014-2015-RIA (deadline 31/03): €38.500.000
  • total budget requested by above-threshold proposals: €1.078.991.003
  • average requested budget per proposal: €1.078.991.003 / 326 = €3.309.788
  • estimated number of fundable projects: €38.500.000 / €3.309.788 ≈ 12

The estimated chance of success to get funding under this Call was: 12/670 = 1,8% !!!

(data source: http://ec.europa.eu/research/participants/portal/doc/call/h2020/h2020-fetopen-2014-2015-ria/1665116-fet_open_flash_call_info_en.doc)
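For completeness, here is the arithmetic above in a few lines of Python. The figures are taken from the EC’s flash call info linked above; the script is just a convenience for checking them.

```python
# Reproducing the FET-Open success-rate arithmetic from the figures above.

topic_budget = 38_500_000
requested_above_threshold = 1_078_991_003
submitted, above_threshold = 670, 326

avg_request = requested_above_threshold / above_threshold  # ~3.31 million EUR
fundable = round(topic_budget / avg_request)               # ~11.6 -> about 12 projects
print(f"Success rate: {fundable}/{submitted} = {fundable / submitted:.1%}")  # -> 1.8%
```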

FET-Open was always a competitive programme, but this is getting ridiculous. I wonder whether it would not be better to shut the programme down for a while, so all those hard-working researchers can spend their time on useful things rather than on writing proposal after proposal that goes nowhere. How can funding fewer than 2 in every 100 submitted projects – hundreds of which were above threshold! – contribute to maintaining Europe’s leading role in global science??

The numbers do not say anything about who is getting funded. They only show that too many people are competing for too little money. Assuming that the member states are not willing or able to significantly increase the H2020 programme budget, the only solution I see is a fundamental rethink of the number of themes/domains being funded. H2020 is presently trying to satisfy everybody’s needs and achieving almost none of it… It is time to re-prioritise if Europe wants to have any chance of remaining a science leader in… something…

Making these kinds of hard choices will undoubtedly be difficult, but if nothing is done soon, the H2020 programme will ultimately make itself irrelevant to the advancement of science.

H2020 – how to avoid its success becoming its failure

So now we know: in the first year of the H2020 Research & Innovation programme, roughly 45.000 proposals were submitted for funding. According to the European Commission’s Director-General of DG RTD, Robert-Jan Smits, the funding rate has dropped from 19% at the end of the predecessor programme FP7 to a mere 14%. This is well below recently published average success rates in the United States (NSF: 22-24% and NIH: 18-21%[1]) or Australia (NHMRC: 21%[2]). Should we worry, or is this proof of the programme’s popularity and success?

Let’s put some overall numbers to this: if indeed only 1 in 7 proposals was selected (several H2020 sub-programmes, like the Marie Skłodowska-Curie actions and the SME Instrument, have even lower success rates of 5% and 11% respectively), this would mean that roughly 38.700 submitted proposals were rejected. From our own figures as an innovation and grants consultancy with 20 years of experience in EU funding, we know that a collaborative single-stage proposal costs a consortium on average between €70.000 and €100.000 in own time and effort to develop and write. Assuming that half of the Calls were divided into 2 stages – and let’s agree that, realistically speaking, 70% of the total project development time and effort goes into developing and writing the stage-1 proposal and 10% is then carried into the next stage – this implies that overall between €2.5 and €3 billion was spent in vain by applicants. And this will go on every year, adding up to a little more than 20% of the total H2020 budget. Ergo: yes, it is time to start worrying.
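For transparency, here is one way to reconstruct that estimate. The per-proposal cost range and the stage split are the assumptions stated above; the way I apply the 70% effort share to stage-1 rejections is my own reading of that assumption.

```python
# Rough reconstruction of the 'spent in vain' estimate. Figures come from
# the text; treating stage-1 rejections as costing ~70% of a full proposal
# is my own reading of the stated assumption.

submitted = 45_000
success_rate = 0.14
rejected = submitted * (1 - success_rate)  # ~38.700 proposals

# Half of the Calls are 2-stage; assume those rejections mostly fall in
# stage 1, where ~70% of the full development effort has already been spent.
single_stage_rejected = rejected / 2
two_stage_rejected = rejected / 2

for full_cost in (70_000, 100_000):  # EUR per fully developed proposal
    wasted = single_stage_rejected * full_cost + two_stage_rejected * 0.7 * full_cost
    print(f"At {full_cost:,} EUR/proposal: ~{wasted / 1e9:.1f} billion EUR per year")
```

Depending on how exactly the stage split is applied, this lands in the same ballpark as the €2.5-3 billion quoted above.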

Before continuing my line of argument and proposing some structural improvements, I should be clear that I fully support the H2020 programme as a high-quality innovation programme, and we should praise the fact that in Europe we have developed a structure that encourages and facilitates cross-border research and innovation in the way that it does. Each framework programme has built upon ‘lessons learned’, and from an administrative-bureaucratic point of view, H2020 is arguably the most sophisticated and easy-to-use of them all. However, as researchers perceive that national funding is becoming scarcer due to unpredictable economic changes, their view is now much more firmly set on Europe with its €79 billion ring-fenced R&D fund. This appears to have created added pressure to submit proposals to H2020, whether they are well suited to the Call or not, and irrespective of their objective level of research and innovation quality.

To give but one example: the recent stage-1 Health Call PHC-11 was very specific on the need for innovative in vivo imaging tools and technologies that make use of existing high-tech engineering or physics solutions, or of innovative ideas and concepts coming from those fields. A total of 348 proposals were submitted in stage-1. How likely is it that there really were 348 different, significantly new and improved imaging technologies out there? Even if every EU Member State had 10 different high-quality research combinations of academia and industry, and each had – coming from the engineering and physics community – a completely different approach to solving the challenge, we should still not have arrived at 348. We have many examples like that. Something appears not to be quite right.

In fact, looking at the many 2-stage proposals my company has been involved in recently, I dare argue that what used to be a normal 1-stage proposal in FP6 or FP7 is now a 2nd-stage proposal in H2020. To clarify: the results thus far from the evaluation process make one wonder whether the H2020 1st stage is not being used by the Commission as a quick fix to deal with the increasing number of proposals. Stage-1 is no longer used to select the best proposals for admission to a competitive stage-2, but simply to dismiss all the proposals that on face value do not match. Stage-1 has become an instrument of discouragement, not of finding and helping to improve the intrinsic quality of the proposals that go on to the next phase. Does that really matter much?

Of course it does! It means that anyone who has made it to the 2nd stage cannot, for a minute, think that his/her proposal is intrinsically good or has a real chance of funding. No, it just means that you are now on the same basic level playing field with other proposals, just as you would have been in an FP7 single-stage process. There is still a very good chance that the Evaluation Summary Report (ESR) of the 2nd-stage proposal will say that your proposal is not innovative, that the consortium is mediocre, or that the expected results are not very marketable. Instead of an average 1:3 or even 1:2 success chance in stage 2, the chances of successfully making it ‘all the way’ are – in many domains and on average across the different sub-themes – much worse. My plea would be to either drop the 2-stage approach altogether, or make clear that reaching the 2nd stage really means that you have either ‘gold’ or ‘silver’. What should happen to the ‘silver’ proposals I will discuss later in this article.

Back to my previous point: a few weeks ago, a survey among Dutch researchers showed that they spend 15% of their research time just on writing national and EU funding proposals. In their opinion this is far too much. In addition to the complexity of setting up collaborative projects, the decreasing success rate is making them more sceptical of the whole funding application process. In this context two issues always emerge: one concerns the objective quality of the evaluations, the other the success rate of resubmitted proposals.

To start with the first point: you would not be the first to feel that comments in the ESR sometimes appear to have little to do with the project you submitted, or that the evaluators seem not to have read the proposal in great detail or are not fully ‘au fait’ with the state of play in the field. Commission assurances about the quality and fairness of the evaluation process are continually frustrated by the intended secrecy of the evaluation process itself. One does not know, so one quickly feels that a bad result is undeserved.

Also consider this: it is no secret that academics tend to be heavily represented in the pool of evaluators. From having submitted 140 proposals for our clients in the first H2020 year, across most domains and sub-programmes, we at PNO have found that projects that were industry-led or had a large industry contingent in the consortium fared much better than proposals in which research organisations or universities were dominant. Not necessarily surprising if the Commission happens to have instructed its H2020 evaluators in the spirit of Mariana Mazzucato’s 2013 publication “The Entrepreneurial State”, where she states that “Successful states are obsessed by competition; they make scientists compete for research grants, and businesses compete for start-up funds—and leave the decisions to experts, rather than politicians or bureaucrats. They also foster networks of innovation that stretch from universities to profit-maximising companies, keeping their own role to a minimum”. There does not have to be a significant causal link, but what if a stronger focus on ‘profit-maximising Impact’ has taken those same evaluators a little out of their comfort zone, to the level that projects showing high industry participation and high ‘profit potential’ (with profit not just meaning financial profit, but referring to commercial replication, transferability and job creation) are scored higher, whilst more academically focussed projects are judged just a little more harshly? Again: all we can do is speculate, but as the Commission does not really provide clear insight, the general feeling among applicants is one of uncertainty. On the other hand, we should also admit and praise the fact that over the years the overall quality of funding proposals has gone up. That by itself is a testament to the professionalism and dedication of all those same researchers and evaluators.

That brings me to the second point. Years ago the Commission introduced an eligibility threshold below which a proposal is rejected. That eligibility threshold relates to the whole proposal, but can also apply to specific sections within a funding proposal. In itself a good idea. Then there is something called the cut-off threshold, which in effect follows from setting the ranked scores against the available budget for that Call topic. The cut-off threshold is thus different for each Call topic. Now imagine that the cut-off is at 92 points (out of 100 in FP7) or at 4.40 (out of 5 in H2020), and your project scored 91 points (FP7) or 4.25 points (H2020). Your project will not get funded. Is it a mediocre or possibly even bad project? No, your project is far above the eligibility threshold and is – by all scientific and market-relevance standards – an excellent piece of work. Still: no money…
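The mechanism is easy to see in a small simulation: rank the eligible proposals by score and fund down the list until the topic budget runs out; whatever the last funded proposal scored becomes the de-facto cut-off. All scores and budgets below are made up.

```python
# Minimal simulation of how a de-facto cut-off threshold emerges.
# All scores and budgets are made-up illustrations.

proposals = [  # (score out of 100, requested budget in EUR)
    (94, 4_000_000), (93, 5_000_000), (92, 3_500_000),
    (91, 4_500_000), (90, 4_000_000),
]
eligibility_threshold = 70
topic_budget = 12_500_000

funded, remaining = [], topic_budget
for score, cost in sorted(proposals, reverse=True):  # best score first
    if score >= eligibility_threshold and cost <= remaining:
        funded.append(score)
        remaining -= cost

print(f"De-facto cut-off: {min(funded)}")  # -> 92: the 91-point proposal gets nothing
```

Note that the 91-point proposal fails not on quality, but purely on where the budget line happens to fall.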

The obvious approach – assuming you agree with the ESR – is to use its comments to improve the proposal and resubmit it in one of the next rounds of Calls. Sadly, resubmissions do not do well in evaluation rounds, not even those with scores that were within inches of making the cut-off threshold the first time. Resubmissions are usually scored by a different evaluation panel, which may hold a very different view from the previous one, not seldom resulting in a new score on innovation, impact or implementation strategy that may even be lower than the old score. FP7 and H2020 programme rules allow the Commission’s responsible project officer to let the new evaluation panel know the proposal is a resubmission, and the panel could have access to the old ESR. The decision appears to be up to the individual project officer. Anecdotal evidence from Commission representatives and from evaluators is that this almost never happens, let alone that an evaluator could or would check the actual text of the old proposal to establish whether the ESR-requested improvements had been implemented. A missed opportunity, I believe. After all, ESRs often provide applicants with helpful suggestions to further improve a near-successful proposal. If the implementation of those comments is not seen by the new panel, then why, one might wonder, provide those helpful comments in the first place? There are no public figures available on the success rate of resubmissions, but the academic and industry organisations that have come to us with resubmission projects are already very sceptical about the evaluation process itself. But why care? After all, there are still enough high-quality projects that do get funded…

Ah, here we touch on a fairly sensitive point, which goes directly to the heart of the aims of Europe’s research & innovation programmes and which I would like to illustrate with a very recent example – one of several from the past 3 years. Imagine a promising young researcher who submitted a FET proposal which would allow Europe to gain a significant future competitive advantage in a commercially attractive market, whilst delivering a significant contribution to a specific Societal Challenge. The evaluators’ comments in the ESR are just fantastic: great science, even better impact and a top implementation approach. But still, the proposal missed the cut-off threshold by a whisker. Naturally the young researcher is very disappointed. He already has a research offer from a major US university and was only waiting for the results of this proposal to decide whether or not to stay. His decision is now clear: ‘bye bye Europe’. By not having a proper alternative structure in place for these very high-quality ‘near misses’ – of which there will be more in the future if the success rate keeps dropping – Europe may lose a much larger contingent of its most promising researchers. We then need to throw even more money at H2020 through the Marie Skłodowska-Curie programme to try to seduce them to come back. That is not very efficient.

In a recent interview, new Commissioner Moedas showed he clearly understands that the H2020 programme may be facing a credibility crisis if the cost-benefit of writing project proposals, and the way projects that just fail to reach the cut-off threshold are handled, are not better dealt with. So if the Commission recognises that a problem might be ‘on the horizon’ (pun intended), then let’s try to do something about it.

One of the new ideas is to facilitate that proposals that fall between the eligibility and cut-off thresholds are ‘transferred’ to the national level, where they could be funded out of the much larger European Regional Development Fund (ERDF), possibly including Interreg as well. After years of financing mostly physical infrastructure to improve regional job creation and economic growth, the ERDF is now also being positioned as a real innovation programme at national and regional level. As the failed ‘silver’ H2020 proposals already carry a European ‘seal of approval’ for their technical quality, Moedas suggested, why not let the ERDF take care of business?! He is right, of course. The ERDF, and to some extent also Interreg, is an excellent funding instrument, in particular for projects where the innovation is closely linked to the development of a particular regional development plan. In other words: it would almost certainly fit perfectly for projects that just missed the cut-off threshold in programmes like the SME Instrument. There you generally have a single applicant located in a specific location, and it would be doable for the regional government to set up a dedicated fund to support this type of high-class research in its area.

The biggest pitfall of this idea lies in using the ERDF for international collaborative proposals. There, the management of the funding process runs the risk of becoming hopelessly complex, not to mention creating an explosion of additional national bureaucratic procedures. The Eurostars programme already operates in a similar way to what is proposed, but it covers a fairly narrowly defined type of project, where the national governments of the participating consortium members, supported by the EUREKA secretariat, must agree on whether to fund their part of the project out of available national funding. Not seldom has a Eurostars project been delayed or even cancelled because one Member State decided that its funding priority lay elsewhere. The 2010 interim evaluation of the Eurostars programme also highlighted this problem, and the programme evaluators clearly stated that “…the national differences in procedural efficiency and in procedures themselves remain unacceptably high. Further improvements are necessary to impose common eligibility rules”. Now imagine this for H2020 projects, where the number of consortium partners, the level of funding requested and the technical (reporting) complexity are significantly higher. The ERDF may be a good instrument, but only if the role and responsibility of the supporting secretariat are beefed up substantially and the rules and procedures between countries are much more closely aligned. And then what do we have…

So now what?!

H2020 is – and will probably remain – the best game in town as far as research & innovation funding is concerned. The problems to address concern the cost-benefit of preparing proposals and the credibility of the evaluation. This article certainly does not pretend to deliver a full-on solution to these problems, but I hope some of the ideas may trigger others to engage in further discussion. So here goes:

On the application process itself:

  • In a 2-stage evaluation process, the 1st stage should be evaluated in a way that makes it clear to applicants why their project is promoted to stage 2. This means that evaluation reports with concrete suggestions by the evaluators should be given to the applicants that continue the process. The selection of stage-2 proposals can be made tougher as long as the evaluators have clearly described their arguments and the success rate of a stage-2 proposal is moved up to at least a 1:3 chance.
  • I also believe it is in the European Commission’s interest to bring more transparency to the evaluation in order to maintain credibility in its functioning. If the number of proposals continues to rise (or the success rate drops further), this is especially important with regard to proposal resubmissions. Here the Commission should – as a standard measure – provide the new evaluation panel with the old ESR and the old proposal. Right now it is largely left to the Commission’s project officer to decide whether to provide the new panel with the old ESR. Project officers should in future ensure that the new evaluation is consistent with the old one, making it unlikely that the new score falls below the old one. No new legislation is needed for this; it is just a question of willingness on the Commission’s part to come up with a better timetable for evaluators.

On alternative funding for high-quality failed H2020 proposals:

  • I fully support the idea of facilitating single-applicant SME-type projects that failed in the SME Instrument to access ERDF funds, because it is also in the direct interest of the competent authority to let the company create new jobs from a project idea that has already been technically vetted by the European Commission. In fact, why not go a step further and move the whole SME Instrument programme to the ERDF?
  • I believe that an ERDF-type approach for multinational collaborative proposals is not the way forward. Instead, I would plead for a revision of the evaluation structure. One way could be for the Commission to periodically define technology priorities for a limited number of domains and let the community in those domains define the topics and set up the evaluation structure. This approach would generate higher buy-in from researchers within that community and encourage participation in the evaluation process. In other words: ‘let them govern themselves’, to a certain extent. Aspects of the organisational structure of the various existing PPPs and Technology Platforms could be taken as a good starting point. Other experiments in which ‘domain communities’ are created – focusing on fully open and transparent stakeholder access to topic selection and a trusted peer-review evaluation process – are also being planned and look promising. It will not be the answer to the problems of today, but it could be part of the solution for the day after tomorrow.

On managing expectation:

  • If many more examples like the PHC-11 stage-1 Call appear over the coming year, then there must be an ‘expectation gap’ between a fairly large group of applicants and the European Commission. It is understandable that applicants want money to fund their research, and H2020 appears to them as the proverbial ‘pot of gold’. But let’s be real: projects scoring well below the cut-off threshold are just not good enough for H2020, and there is a good chance that (recurring) submissions of projects in the same research direction will continue to fail. That message should be brought to those applicants much more clearly than is the case now: “Don’t submit (again), because it is not what we are looking for”. And it should be presented to prospective applicants at a much earlier stage in the process, perhaps as a screening service by the Commission’s national contact agencies or other services. That does not happen now: ideas are screened, but prospective applicants are not told to “just drop it” if the project concept is not up to the standards expected by H2020. Instead, false hope is given that with sufficient fine-tuning there is still a chance. Proper project screening, combined with a well-founded opinion from the screening authority on whether to submit, could help ease the current deluge of proposals facing the Commission and the evaluators in new Call rounds.

Over the past years the Commission has shown that it is slowly letting go of its very top-down structure of R&D and innovation management. In H2020, topics are less specific, deterministic and detailed than – for example – in FP6. Maybe now is the time to take the next step: let the Commission take more technocratic decisions based on true ‘societal need’, rather than political decisions based on having to satisfy everybody’s aspirations. Every 2 or 3 years the Commission should define the societal challenges and technology areas with the very highest priority. Then the relevant communities – comprising the chain from fundamental researcher to technology provider and end-user – should come up with the appropriate Calls. If the peer evaluation can be properly managed by the community itself, the level and quality of expert participation in the evaluation may just rise and create more acceptance among the people aiming for the next breakthrough.

 

[1] Ted von Hippel and Courtney von Hippel, PLOS ONE, March 4, 2015.

[2] Danielle L. Herbert, Adrian G. Barnett, Philip Clarke and Nicholas Graves, BMJ Open, 2013. The data relate only to NHMRC Project Grant proposals in 2012.