Resubmit or submit anew – the PI’s Quandary

PIs frequently face the decision of whether to resubmit an application or to write a new one. Imagine a game of dice with the following rules: roll a regular die; you win if you get a six. However, you never get to see the face of the die when it stops rolling; someone else does. If the die shows a four or a five, you are asked to roll again. If you rolled a four on the first toss, you lose whatever the second roll turns up. If you rolled a five on the first toss, someone tosses a coin privately: if it lands on heads, you win regardless of your second roll; if it lands on tails, only rolling a six wins. Would you roll the die again, or would you pass on the opportunity and start afresh?
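For the curious, the odds in this game can be worked out exactly. The sketch below is my own back-of-the-envelope calculation, not part of the original post; it assumes a fair die and a fair coin:

```python
from fractions import Fraction

# A sketch of the exact odds in the dice game above (my own calculation,
# not from the original post), assuming a fair die and a fair coin.

half, sixth = Fraction(1, 2), Fraction(1, 6)

# First roll was a five: heads (1/2) wins outright;
# tails (1/2) wins only if the second roll is a six (1/6).
p_win_given_five = half + half * sixth            # 7/12

# First roll was a four: you lose whatever the second roll shows.
p_win_given_four = Fraction(0)

# You are asked to reroll only after a four or a five (equally likely),
# so resubmitting without knowing which one it was wins with probability:
p_win_resubmit = half * p_win_given_four + half * p_win_given_five   # 7/24

# A brand-new game (a fresh application): win on an immediate six,
# or roll a five and then win through the coin / second-roll branch.
p_win_fresh = sixth + sixth * p_win_given_five    # 19/72

print(p_win_given_five)   # 7/12  (about 0.58)
print(p_win_resubmit)     # 7/24  (about 0.29)
print(p_win_fresh)        # 19/72 (about 0.26)
```

A known five (7/12) is well worth rerolling; a known four never is; and a blind reroll (7/24, about 0.29) is only marginally better than starting a fresh game (19/72, about 0.26), which is precisely the PI’s quandary.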


If you knew you had rolled a five, you would resubmit, because your chances of winning are much higher. This is often the case if you resubmit early, while the review panel has not changed. But sometimes panel members change, and the new members mostly view your resubmission as a new application (a new roll of the die). And if you knew you had rolled a four, no amount of rewriting would help, and you would submit a new application instead.

So, how does one determine whether a four or a five turned up in the first roll?

Reviewer feedback can guide you, but the most critical document for determining your chances is the summary in which the session chair captures your grant’s most important problems. So before you start addressing the comments of each reviewer in your resubmission, consider that summary first. Where is the real problem? You may have to read between the lines to find out.

Are you the problem? Investigator is one of the big five evaluation criteria (Innovation, Significance, Approach, INVESTIGATOR, Environment). Do you feel that most comments, across all five criteria, point back to you, directly or indirectly? Lack of confidence in your ability to manage a complex operation involving many participants and a large budget; lack of confidence in your research expertise when no senior PI is on board as collaborator or mentor; lack of confidence in your choice of collaborators (they are your friends, present more by courtesy than by necessity; or they are there by necessity, but you have never worked with them on earlier projects or written papers together, so they present a risk). In that case, it may be better to rewrite your application with a better team.

Is significance the problem? To gain significance, you add specific aims that make your project complex, given the wide variety of methods used or the variety and cost of the equipment involved. To gain significance, you rely on too many assumptions; for example: if this is available, and that comes to fruition, and if the other thing happens, the research would have large implications. To gain significance, you rely on the iffiest specific aim, the one with no preliminary data or work. Significance is intimately linked to innovation. With little innovation, significance is low because the next steps, i.e. the opportunities for further research your grant will open, are too limited. It may be better to write a new application with more realistic and better supported aims, or with a greater focus on what happens next.

Is vagueness the problem? The comment “lack of detail” often hides a larger problem which can be remedied, not by more unneeded details, but by details that are relevant to assessing your expertise and project-planning skill. “Lack of detail” may indeed reveal that your plan is not fully worked through. But it may also be that you assumed the reviewers knew much more than they actually did, which led you to leave out the justification for your approach or methods. A pity, for it would have shown you understand their strengths and limitations in a technique-rich landscape. You did not deem it necessary to add expert details or references showing that you know and care about the known pitfalls of methods, statistics, and data collection. The “I’m-aware-of-that” detail is exactly what the reviewer needs to assess your competency. Finally, vagueness in the research question will inevitably lead to “lack of detail” comments, so clarify and resubmit.

Is lukewarm support from the panel the problem? Do not be fooled by mildly supportive, ho-hum comments intended more to prevent your nervous breakdown than to compliment you. Your research question may simply not be exciting enough, or may be too insignificant. It needs to be looked at again. Write a new application when you have a better idea with greater significance.

The recommendations above are based on episode 63 of “The Effort Report” podcast by two eminent PIs with extensive panel and submission experience: Elizabeth Matsui and Roger Peng.

Of Sheep and Shepherd – not grant ready

Photo Jean-Luc Lebrun – Mont Saint Michel

 

Grant writers look at the financial health of the grantee, because poor financial health means trouble. Does your organization rely solely on grants for its funding, or does it have other means of financing research, such as licensing of patents, national program financing, or partnerships with industry? In other words, grant writers want to avoid situations where the grant applicant is desperate for cash to survive, because desperation often results in hurriedness, and hurriedness rarely bears fruit. In a world where all applications are graded either A+ or F, only the well-prepared get funded.

Grant writers also look for answers; they should not be the ones to fill in the blanks. Sounds familiar? The senior PI, without a solid plan and research objective, may ask junior researchers to conceive one. The design of the plan is then left to lower-paid, unguided research associates under the excuse that preparing a grant provides hands-on experience in grant applications. I have seen it before, and I have looked into the eyes of unprepared junior researchers unable to answer the simplest questions about the grant they are preparing without referring to the PI for answers. They are thrown in at the deep end, trying to accomplish the impossible when, in fact, a pilot study or exploratory research is needed before moving on to the larger grant they are asked to prepare. And as they reveal their problems to their PI, the PI alters the grant objectives based on his or her new understanding of the problem. The ground keeps shifting. Maybe this is a valid way to conduct research, but it is not a valid way to prepare a grant. Senior PIs should indeed involve junior researchers in grant preparation, but the research design, the hypotheses, and the research objectives are squarely in the PI’s camp. Sheep do not guide the shepherd.

Crowdfunding Independent Research and the Technology Readiness Levels

 

Crowdfunding is here to stay. I have already spent hundreds of dollars funding different products on Indiegogo, products that directly address personal or family needs the market does not currently meet. Crowdfunding is a wonderful way to encourage start-ups: 1) by showing that their product ideas meet real needs, 2) by funding the development of the first prototype or embodiment of a product idea, and 3) by enabling the collection of early feedback from pioneering users.

However, these regular crowdfunding sites do not cater for science projects and products below level 5 on the Technology Readiness Level (TRL) scale. That nine-level scale, created by NASA in the 1970s and later extended to domains outside space exploration, helps with funding decisions. Many funding organizations ask the applicant to peg their project to a readiness level, so it is useful to assess the readiness of your own project BEFORE you target specific science crowdfunding sites. The closer to market the product or project is, the more fundable it becomes, and the less risky it is overall.

The book promoted by this website mentions several crowdfunding sites: Experiment.com and walacea.com (now crowd.science).

Alfred Russel Wallace

The founder of Walacea (Natalie Jonk – UK) named her site after the pioneer of crowdfunding for science, Alfred Russel Wallace, who funded his independent research on natural selection by selling beetles and other specimens he collected in the East Indies. In a way, Wallace networked extensively, gave talks, and solicited money, but he had something to offer in exchange: new knowledge and exotic specimens from abroad.

Similarly, a project fully funded by crowd.science on pesticides and bees offers financial backers the type of scientific fare one would expect from academics: an invitation to a lecture on the topic, copies of a book written by the project leader, or a guided walk. But private individual funding is not enough. Roughly a third of the funding for the pesticides-and-bees project came from wealthy sponsors whose purpose is aligned with the project (Friends of the Earth, beekeeper associations). So, if you want to fund independent research, make sure you have something to sell, or, more diplomatically put, to “offer” your funders: tangible things, knowledge, or a shared vision. The higher the TRL, the more palatable your offerings will be.

 

 

The same & not the same: grant vs. paper

Roald Hoffmann, winner of the 1981 Nobel Prize in Chemistry, wrote an enlightening book: THE SAME and NOT THE SAME. Both sides of the photo show the Khmer temple. The same temple?

I thoroughly enjoyed reading it. Roald Hoffmann is a Renaissance man with an encyclopedic culture and many talents outside of chemistry. But this blog is about the differences between a grant application and a scientific paper. The link with chemistry will become apparent later.

In my earlier “Ins and Outs of Grant Writing” classes, I asked the participants to list the differences. Having no experience in grant writing, they listed the obvious ones, for example: a paper is about completed research, a grant is about future research; a paper is to get recognition and citations, a grant is to get money; the structure and contents of a paper (mostly the IMRAD structure) are quite different from those of a grant, which also includes a budget, a schedule, and a detailed list of past achievements and grants received.

By the time the class was finished, the list had become more comprehensive… but note the ellipsis!

Differences between writing a grant and writing an article
Grants: pre-significance
– A grant presents a plan
– It gives more justification of the approach / methodology you want to adopt
– It presents alternatives (plan B) to mitigate risk
– It is read and accepted by more lay people, or people with less expertise in your research topic
– It has to show return on investment
– It needs to convince the readers that you are the right person to do this research (why me) and that you will use the money properly within the timeframe of the grant.
     => need to convince that the project is feasible
– It has a large admin overhead
– Visuals that represent flow / process / ideas play a very important role
– Resubmission is limited
– Selection is collective (panel)
Articles: post-significance
– An article presents a report
– It gives greater detail
– It is read and accepted by experts in the field
– The why-me aspect is less important
– It needs to convince the readers that the conclusions are correct
     => need to convince on conclusions
– Its visuals are more of the type that feature results: tables / graphs
– Resubmission is easy
– Selection is individual (editor)

One aspect was missing from that list, one that participants expected to be the same for both grant application and manuscript: the writing style. “The same and not the same” applies here. Roald Hoffmann’s book is amazing. His writing style is clear. His sentences are precise, yet with a simpler structure and a reduced number of clauses. He bonds with his reader, not chemically, but intellectually: by being clear and accessible, and by making sure the jargon is properly explained and illustrated if need be. In his review of the book, Carl Djerassi, a published scientist and author better known as “the father of the pill”, writes: “[Hoffmann] is the ultimate literary alchemist […] he transforms base facts into golden understanding.” Your writing style should move from the scientific writing style towards the science writing style. Read popular science magazines like The Scientist, American Scientist, Scientific American, or New Scientist. Personally, I read articles in The Scientist and SciTech Daily, daily online magazines covering both life and engineering sciences.

“The same” precision and rigor of science, “not the same” writing style.

                                                                                                                                                                

Note: A list of these differences is proposed by Dr. Robert Porter in a 2006 article entitled “Why Academics Have a Hard Time Writing Good Grant Proposals”. I do not quite agree with all of his differences. For example, I disagree that a paper is an individual exercise whereas a grant is a collective one: nowadays, in 2017, more and more papers are a collective effort, especially in fields like physics. I had forgotten all about Porter’s 2006 paper when I came across episode 57 (October 2017) of the chatty podcast of Dr. Roger Peng and Dr. Elizabeth Matsui, The Effort Report, which mentioned it. They broach the topic of “not all scientific writing is the same” (move straight to minute 53 of the podcast!). When Elizabeth Matsui became interested in the topic in her capacity as a mentor of young PIs having to write their first grant, she realized that her new PIs came to a grant application expecting it to be similar to a scientific paper. After all, that is what they were familiar with. So highlighting the differences was essential.

George Heilmeier Proposal Metrics

In his IEEE Founders Award Lecture “Some Reflections on Innovation and Invention”, published in The Bridge (National Academies Press, 1992), luminary George Heilmeier distilled into eight simple questions what anyone proposing a project should be able to answer.

  1. “What are you trying to do? Articulate your objectives using absolutely no jargon.
  2. How is it done today, and what are the limits of current practice?
  3. What’s new in your approach and why do you think it will be successful?
  4. Who cares? If you’re successful, what difference will it make?
  5. What are the risks and the payoffs?
  6. How much will it cost?
  7. How long will it take?
  8. What are the midterm and final “exams” to check for success?”

https://en.wikipedia.org/wiki/George_H._Heilmeier

A proposal should be as jargon-free as possible, and its objective, certainly, should be clear to all, experts and non-experts alike. If you absolutely have to use jargon in the title, it must be defined in the abstract. Definitions cannot wait until the reviewer gets inside your grant. Why? The title and abstract create the first impression. If the proposal presents the objective in a way that makes the reviewer feel his or her nascent interests are aligned with it, a confirmation bias is likely to appear, which of course plays in your favor. It is difficult to feel good about something you do not understand!

The gap is what answers the second question. Remember that to innovate, you extend or modify what already exists (“done today”, “current practice”). Blue-sky research grants exploring the pitch-dark side of research are not for everyone; you need an outstanding track record for those. The grey side is more accessible.

Novelty is expected, often in methods and approach, but with novelty come risks, and risks are expressed in levels of confidence. That confidence is built over years of experience and a deep understanding of what your abilities are, but not only yours: the “you” is now plural. “You” are a team.

That you care about your project is a given, but do others care as much as you do? Who are these people? And if they do not know they should care, will you be able to show them why they should, in a compelling way? Before you ask other people to put themselves in your shoes, find out whose feet are blistered, feeling pinched, or in daily need of a shoehorn. For that, investigate not the limitations themselves, but the consequences of these limitations.

Risk is a word rich in meaning. It spells potential trouble, but it also promises great rewards: not just payback, but payoffs, returns on the grant investment. Although it is not becoming to talk hard dollars when one is concerned with genes and quarks, the research will nevertheless ultimately translate into hard currency for someone down the line. The economics of research matter to the grantor. If that payoff in dollars is further down the road than you would like, at least reveal the path that will get there. Build a dissemination plan to ensure long-term impact.

Finally, just as students have midterm and final exams, build a solid evaluation plan to make sure you know whether you are on track and meeting your specific aims, on time, quantitatively, and qualitatively. Good evaluation criteria are hard to come by. Learn from other PIs how they build their formative and summative evaluation plans. And remember that success is not what you say it is, but what others agree it is. So make sure you give them your success criteria before they create alternative ones you will disagree with. Deciding on these criteria is a team exercise; involve your grant team. In the process of trying to establish these success criteria, you may discover that some of your specific aims are vague, not measurable, and require rewriting.

 

Grant Risks, their context and the seven ways to manage them

 

Should you ever doubt that risk is an important factor in grant evaluation, read on. I just looked at the reviewer comments for a European grant which missed funding despite an overall score of 87/100. Let me quote the reviewer comments under evaluation criterion 3 – Quality and Efficiency of the Implementation:

“Potential risks in research activities are insufficiently substantiated. The mitigation strategies are only superficially addressed and insufficiently justified.”

To paraphrase: “You have identified potential risks, but have not described them in detail, thereby indicating that you lack awareness of how these virtual risks will take shape during your grant. Consequently, the strategies to lessen or handle these risks are superficial, and whether they are the right ones to mitigate these risks is unclear.”

Note that the applicants are not accused of ignoring the risks. They are criticized for not demonstrating they have a clear idea of what these risks are and how to deal with them should the improbable occur.

So let’s learn about risks and the strategies to address them.

The ISO 31000:2009 standard defines risk as the effect of uncertainty on [accomplishing your] objectives. In the case of a grant, uncertainty affects pretty much everything: your budget, your plan A, your choice of a suitable approach and collaborators, etc. Uncertainty’s effect is to make your project deviate from what you expect will be the normal course of events. Uncertainty introduces deviations around the normal (not a new concept, really): some can be compensated for, others have far-reaching and long-lasting consequences. Where do these deviations originate? Unexpected events top the list. Either you did not know such events could happen, or you presumed they would not happen, yet they did (and vice versa), or you minimized their likelihood, or you misjudged their consequences and outcomes.

Risk management image: CC BY-SA 3.0 Nick Youngson

You, the PI, are the risk owner and manager. Since you are accountable for taking risks, you need to justify, within the grant application, the risks you take. But first, you have to identify the sources of risk, evaluate the degree of risk, and establish your risk profile. With your risk profile in place, you are ready to draw up a risk-management strategy.

ISO 31000 identifies seven ways to manage risk: one to increase it, one to maintain it, and five to mitigate or cancel it.

  • Increase the risk exposure when greater risk would have an even greater overall payoff.
  • Monitor the risk closely without removing it, for it has both beneficial and negative aspects.
  • Decrease the odds of the negative risk by reducing uncertainty.
  • Dilute the unwanted risk by sharing it.
  • Change the outcome of events that affect the objectives because they increase task cost, introduce delays in the task schedule, lower safety, or harm the environment.
  • Remove the risk trigger (circumstance, situation, process, environment, practice).
  • Cancel the activity associated with the unwanted risk, or replace that activity with one of lower or no risk.

Read again the book chapter on risks and determine which of these seven ways to treat the many risks summarized on page 176 would be the most effective in your context. Context does matter, because it favors the occurrence of events. Context is both external and internal, and each PI will be influenced differently by it. The external context includes the scientific trends and how they affect the PI’s employer; the political, financial, and economic context at local and global levels; as well as the quality of the relationship between the grantor and the PI’s organization. The internal context includes the PI’s organization, its objectives and policies, the support environment (equipment, facilities, IT, grant officers, availability of postdocs, collaborators…), as well as how well networked the PI is within and outside the host organization.

Risk matters. Clarity about what risk is will make you a more effective troubleshooter when the probable becomes certain. And it will make funding you… less risky!

 


	

Give me the Specifics… on the Specific Aims

When people want to know more than generalities, they say “Give me the specifics”, in other words: “Give me the precise details so that I can get a better idea of what’s going on here – don’t waffle!” A tragic mistake young PIs make is to leave the specifics for the inside of the grant and keep the abstract and the one-page specific aims at a high level. So the question is: how encompassing should my aims be? The larger the aim, the fewer aims I need, right? The number of specific aims is limited to one or two for small grants, or maybe three or four for larger grants, though they can include sub-aims. Another question is: “How specific should I be in my specific aims?”

Good question! To answer it, you could look at the specific aims section of a senior PI’s funded grants and learn by example. But learning by example is just like being thrown in at the deep end of a pool and asked to learn how to swim by watching others slice the water effortlessly. Chances are you will swallow a gulp or two, or maybe even drown, before you can figure it out. This technique is more likely to succeed if you have a minimum of grant knowledge, for example from reading our book (free serving of unabashed plug), or if a mentor is holding the long bar in the water a few feet in front of you. Otherwise, you could look for discussions or podcasts on the topic. One such podcast I discovered as a result of taking a number of MOOCs (Massive Open Online Courses) on Data Science organized by Johns Hopkins. In The Effort Report podcast, Roger Peng, professor of Biostatistics, my MOOC instructor and podcaster, and Elizabeth Matsui, professor of Pediatrics (both senior PIs at Johns Hopkins University), talk about specific aims. Skip the first 6 minutes to get to the topic. They discuss the one-page specific aims commonly used for NIH grants.

They say that the specific aims page is a capsule (think concise one page) that is read by all review panel members, hence its importance (most of the time, the full grant will only be read by two or three main reviewers in a review panel). Roger and Elizabeth both concur with Professor Leo Chalupa, who has written extensively on grants, that half the time spent writing the grant should go to the specific aims page. So, besides its wide readership, why is that page so important? It is the page that will “win over” the reviewer. It acts as a sort of ratchet: you do not move to the next cog unless you pass this one.

The aims are the tasks, not the research questions answered, nor the hypotheses. The aims are related to the main objective, but they could be independent of one another. Elizabeth uses the effective metaphor of “triangulation”, where three different experiments (approaches) are tried to answer the same research question; these three tasks are specific aims. Most of the time, however, the aims are somewhat interdependent, and always logically connected. How they all fit together and serve the main objective should be clear. Logically connected does not mean deeply serially dependent, however. If aim 2 depends on the full accomplishment of aim 1 (think of a domino effect: the first one must fall before the next one can), your grant application may be rejected, unless of course aim 1 is risk-free, more like a project igniter. Just make sure the wick is not wet!


“scrolls” by psyberartist, CC BY 2.0

The podcast helps disambiguate three common adjectives: detailed, explicit, and narrow. Narrow often bores and fails to raise sufficient interest. Explicit conveys the idea that the aim is unfolded, unrolled like a scroll. Unfortunately, the podcast only indirectly mentions what is on the scroll, so I will add to the dialogue. The task (aim) description is accompanied by one or several of the following: the purpose of the task; how it will be accomplished (approach or methodology); and, unless implicit, how the task relates to the clearly defined objective or to the other aims. Being specific by stating what, what for, why, and how shows the reviewer that you “know what you are doing” and that you have given much thought to what you will do. Nothing looks as superficial or directionless as a vaguely defined fishing expedition. “Laser-sharp” focus will ensure your grant is productive.

Elizabeth then presents her “secret aims-page weapon”: a crisp conceptual roadmap drawn in diagram form that shows the logical links and relationships (with directional arrows) between the objective, the aims, and the various elements under each aim. It saves the reviewer much time and acts as a booster, helping the reviewer understand and value the grant.

Jargon is specific, and a method may be very specific to a problem. But this type of specificity creates problems for reviewers who are not full domain experts. As Roger Peng correctly points out, some grants have multiple audiences whose knowledge only partially overlaps: biostatisticians, health epidemiologists, environmental exposure experts… So he writes the aims for multiple audiences and justifies the importance of the choices made in each aim (methods, significance). Writing for multiple audiences and assuming a lower level of background knowledge makes it easier to “circulate” your specific aims page to people from different fields for feedback before submission.