EMBO has long advocated for moving beyond publication-based metrics in research assessment, promoting more qualitative, informed, and transparent evaluation practices. As a signatory of DORA and CoARA, EMBO and EMBO Press are committed to implementing responsible research assessment across the organization and its activities. This commitment is reflected in changes to funding schemes and publication policies.
At the annual EMBO Young Investigator Network meeting on 26 June, EMBO held a session inviting participants to reflect on current challenges in selection processes and to gather input on improving its policies.
The session was organized by Gerlind Wallon, Head of the EMBO Young Investigator Programme and Installation Grants, together with the EMBO Policy team.
Participants in the panel discussion included Guillermina López-Bendito, Chair of the Young Investigator Programme; Cayetano González, Chair of the Global Investigator Scheme; Andrew Carter, Chair of the Installation Grants Scheme; Bernd Pulverer, Head of EMBO Press; and Alexander Auleha, EMBL group leader.
To frame the problem, the committee chairs highlighted the difficulties in evaluating so-called “grey zone” candidates, who fall between the top-ranked applicants who are funded and those who clearly do not meet the threshold. These in-between candidates are equally strong scientifically, and reviewers are faced with the difficult task of selecting among them. Could this be addressed by introducing additional assessment criteria?
Several suggestions emerged from the Young Investigator Network community. Participants emphasized the need to better recognize the value of collaborative research, arguing that in some areas research questions have become so complex that diverse expertise is needed to address them. For the committee, on the other hand, it is very difficult to distinguish or weigh original ideas and intellectual contributions. Providing more detailed information on individual contributions to papers, and allowing more time for Q&A during interviews, could address this by enabling applicants to better explain their roles in research projects.
Open science practices were discussed as potential indicators of efforts to foster collaboration and reproducibility. Preprints were broadly supported as evidence of research output, though participants acknowledged limitations in some fields, such as clinical or patent-sensitive research. Some noted that open science practices are often shaped by institutional or funder policies, and that publishing open access can be expensive. However, sharing datasets and software was viewed as a strong signal of transparency, collaboration and reproducibility efforts.
Leadership and people development were flagged as important but under-assessed aspects. Some proposed asking applicants to describe their leadership or mentoring approaches or plans, giving them space to articulate their values and how they support others, even if such statements remain subjective.
There was also interest in understanding how candidates balance research with heavy teaching loads. These constraints can significantly affect productivity and should be considered, for example, as grounds for extending the eligibility window.
Moreover, contributions to the research community, such as organizing workshops or conferences, are not consistently considered in evaluations.
Public engagement might also be viewed as a fundamental role for researchers that is worth considering and incentivising.
To counteract potential biases in interviews, one suggestion was to include EDI officers as observers. At EMBO, for example, a Policy officer regularly serves as an observer, and committees receive a bias awareness briefing ahead of each round of interviews.
A key point raised was that any changes to the evaluation criteria should be transparently communicated and applied, and any new criterion should be explicitly integrated into the selection frameworks rather than left as an optional or informal consideration. Participants also argued that any new criteria should apply to all candidates, not only those in the grey zone.
A recurring issue is the difficulty of evaluating leadership, independence, and ambition across diverse institutional and cultural contexts. Ultimately, participants agreed that research assessment is a human process, inherently subjective and imperfect, but with potential for improvement and reflection.
EMBO will continue seeking input from its scientific community on reforming research assessment. A second, broader meeting involving all EMBO selection committees is planned for January 2026.