The purpose of this section is to help students and young researchers understand some of the general rules I apply when conducting my research. This guide is by no means exhaustive. However, all of the world-class researchers I know practice or possess these attributes.*

I thank my many mentors for teaching me the things included in this guide.

An incomplete list of these people is: Ali-Reza Adl-Tabatabai, Dorothy Bell, Ras Bodik, Hans Boehm, Irina Calciu, Alvin Cheung, Dan Connors, David Dice, Pradeep Dubey, Henry Gabb, Paul Gottschlich, Moh Haghighat, Jim Held, Maurice Herlihy, Insup Lee, Tim Mattson, Abdullah Muzahid, Dave Patterson, Lori Peek, Paul Petersen, Alex Ratner, Koushik Sen, Nir Shavit, Jeremy Siek, Armando Solar-Lezama, Tatiana Shpeisman, Nesime Tatbul, Rich Uhlig, Manish Vachharajani, Youfeng Wu, Shane Zumpf, and many others to whom I apologize in advance for missing your name. 🙂

*As some of my colleagues and students have noted, this guide might be helpful in broader terms of becoming world-class in any particular domain.


In my opinion, the most important skill needed to conduct world-class research is introspection. Introspection, as defined by Google, is: “the examination or observation of one’s own mental and emotional processes.” In short, introspection is our ability to evaluate ourselves in what we hope is an unbiased fashion. It’s been my experience that most people — researchers or not — struggle with introspection. This makes sense to me. Analyzing ourselves, with all of our splendorous shortcomings, is not easy. However, I believe it is an essential skill for performing world-class research. This is because learning how to do world-class research will generally require a person to grow in many dimensions. The necessary dimensionality of that growth is generally only ever fully understood by the individual. Without introspection, a person may simply not understand which skills he or she lacks or how to improve them.

Consider, for a moment, someone who isn’t introspective. Let’s call him Jack. Jack receives feedback on rejected research papers all the time. No matter; Jack isn’t bothered. He doesn’t even read the reviews. So what if he’s never published any tier-1 research? The peer-review process is broken, or so he tells himself. This mindset puts Jack in a dangerous cycle without growth. Paper submitted; paper rejected. Paper submitted; paper rejected. Jack may never get his PhD, and he certainly will never learn to do world-class research.

Now consider someone who is introspective. Let’s call her Jill. When Jill receives a paper rejection she may still ignore some things (e.g., biased criticisms, factual inaccuracies, etc.). However, Jill has made a promise to herself: she will never ignore meaningful feedback. This is because Jill sees meaningful feedback as gifts — gifts of growth. It’s Jill’s openness and acceptance of feedback that will likely lead her to become a world-class researcher. Each time she receives feedback, she grows. Before long, Jill’s doing world-class research all on her own.

Moreover, it’s been my experience that introspective individuals who are also self-confident can learn even from destructive feedback. That is, they can find ways to learn something from feedback that was never meant to be constructive. Once you are capable of doing this, I’d argue that you may have mastered the art of introspection.


A common weakness I see in researchers and engineers is their inability to communicate effectively, both in written and spoken form. As the late MIT Professor, Patrick Winston, eloquently said, “students shouldn’t be allowed to go out into life without the ability to communicate.” I couldn’t agree more. I recommend his talk on How To Speak if you haven’t seen it. I cannot overstate the importance of communication. I’ve personally seen individuals who seem to possess the qualities necessary to be wildly successful in many domains fall short in all of them because they have lacked one skill: the ability to communicate.

Like Professor Winston, I don’t subscribe to the belief that one’s ability to communicate is predetermined by natural talent. I believe communication is, largely, a learned skill. Like most skills, practice and knowledge tend to make us better. I will concede that some people seem to be naturally gifted orators and writers, such as my colleague Tim Mattson. Yet, in addition to Tim’s natural gifts, he is well-versed in the art of communication. As Tim tells me, his oratory and literary skills are largely a byproduct of his study and practice. Tim has given over a hundred public talks; he’s published over a hundred research papers; he’s written five technical books. It’s through training and practice, like Tim’s, that I believe a person can learn to become an outstanding communicator. Few people may reach Tim’s level, but they may still end up being good enough to give a keynote address or write a book.

Lastly, if you haven’t yet read it, I recommend The Elements of Style by Strunk and White.


In my years conducting research and engineering software systems, I’ve found that many professionals have a tendency to communicate using strong, absolute claims rather than weak ones. An example of a strong or absolute claim might be:

“The only way to build system X is through Y.”

While it’s entirely possible the above claim may be true for a given context, there is a possibility it isn’t. However, by making such a claim, one tends to invite criticisms, critiques, and skepticism about not only the claim, but also the person making the claim. This can quickly lead to the dismissal of the idea, or even worse, a loss of the person’s technical credibility. However, I’ve found that much of this can be sidestepped simply by communicating using weak claims, like:

“While there may be many ways to build system X, I suspect using Y may be the most promising.”

One advantage of this type of communication is it tends to reduce the likelihood of disagreement. Moreover, it can create a more open, inclusive, and engaging environment. It’s also been my experience that highly intelligent people tend to be more receptive to people and their ideas when they use weak claims as it tends to be representative of a person who has the capacity to think deeply and is open to ideas other than his or her own.

In my career, both as a researcher and an engineer, I’ve found that when I communicate with weak claims, the support for my ideas tends to increase. Perhaps more importantly, I believe using weak claims has the byproduct of improving your long-term intellectual credibility and critical thinking skills, as it tends to demonstrate to others (and yourself) that you are a fair-minded and inclusive thinker.


I believe precision in language is critical to effective communication. Broadly speaking, when we communicate with other human beings, we tend to share two types of information: (i) facts and (ii) beliefs. It’s been my experience that many people struggle to properly establish when they are communicating one or the other, even though most people seem to understand the difference. It’s my speculation that this is because properly articulating the difference can be non-trivial. I struggle with it and I’m the one writing this guide. 🙂

It’s been my experience that precise communicators, those who usually have years of experience training themselves in this area, not only know the difference between facts and beliefs, they tend to clarify when they are stating one versus the other. I argue that this can be especially important for the reasons explained below.

A precise communicator might say:

C1a: “I believe that C++ is the best programming language.”

C2a: “Neural networks can sometimes reduce error through backpropagation.”

Whereas, an imprecise communicator might say:

C1b: “C++ is the best programming language.”

C2b: “Neural networks always reduce error through backpropagation.”

The change in the first claim (from C1a to C1b), removes the words “I believe.” This changes C1 from being presented as a belief to being presented as a fact. However, C1 is not a fact; it’s a belief. This change increases the imprecision in the language and can confuse both an audience and the speaker/writer making the claim, because a belief is being misrepresented as a fact. We should avoid making such mistakes to eliminate the confusion they can cause for others and ourselves.

The change in the second claim (from C2a to C2b) changes the words “can sometimes” to “always.” This introduces a problem in an arguably less obvious way than the prior example. Let’s dissect it.

It is generally true that, when a reduction in error is possible, neural networks most commonly use backpropagation to achieve it. However, there are at least two important caveats. First, there are other ways to reduce error in a neural network without using backpropagation, such as dropout. Second, a reduction in network error is not always possible, because the error may have already reached a global minimum for the dataset the network is trained on. For these reasons, the claim is no longer a fact; the introduction of the word “always” creates both technical ambiguity and technical imprecision.
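The second caveat can be made concrete with a small sketch (my own illustration, not from the guide): gradient descent on a single linear weight, the one-neuron special case of backpropagation with mean-squared error. The error shrinks step by step, then flattens as it approaches its minimum for the data, after which further updates accomplish essentially nothing.

```python
# A minimal, self-contained sketch (my own illustration): gradient descent
# on a single linear neuron y = w * x, the one-weight special case of
# backpropagation with mean-squared error.

def train(xs, ys, lr=0.1, steps=50):
    """Run `steps` gradient updates on w; return the error after each step."""
    w = 0.0
    errors = []
    n = len(xs)
    for _ in range(steps):
        # Gradient of mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
        errors.append(sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / n)
    return errors

errors = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
# The error falls quickly at first, then flattens as w approaches its
# minimizing value (here, w = 2): once near the minimum, further
# "reduction in error" is essentially no longer possible.
```

Under these toy assumptions, the error after the first step exceeds the error after the tenth, while the final steps change it by a vanishing amount, mirroring the minimum-reached caveat discussed above.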

These differences may seem minor and pedantic. However, I personally believe such differences can be surprisingly important, not only for communicating precisely with others, but also for helping us think more precisely ourselves. My hypothesis is that the more precisely we communicate, the more precisely we think. If this hypothesis is true, I believe precise communication can have the byproduct of improving the quality of our research, helping us be more critical and unbiased about research (including our own), and generally making us better scientific reasoners.


As a scientist, my beliefs may change as I acquire knowledge (i.e., as I learn new facts). These data may strengthen or weaken my previously held beliefs. I personally find that I tend to have more beliefs in my head at any given moment than facts to substantiate them. From a researcher’s perspective, I think this is probably okay, and, perhaps, even natural. I suspect such ideas or intuitions may be the formation of new hypotheses — so I encourage myself to do this. However, until I have demonstrable evidence to support my beliefs, I remind myself (and others around me) that these beliefs have not yet been substantiated. Moreover, even when some empirical observations have been made, it can be challenging to know when a conclusion from them is technically precise.

Consider this example:

  • S1. “According to a survey conducted by the Python Software Foundation and JetBrains in 2018, Python is the main programming language used by 84% of the developers they surveyed.” (assume this is a fact; by the way, it is)
  • S2. “In conclusion, S1’s survey demonstrates that Python is the most popular programming language amongst the developers who were surveyed.” (is this a logical conclusion?)

Implicitly, it may seem like S2 can be derived from S1. In fact, it cannot. This is because the word “popular” is ambiguous. We have not precisely defined what popular means. Google says it means “liked or admired” — so let’s go with that. The survey, however, did not ask developers if they liked or admired Python. The survey asked if the developers used Python as their main programming language. Because of this, S2 is not empirically substantiated by S1. To say so is factually imprecise. If the word “popular” is replaced by “used” in S2, then the statement transitions from being an opinion to a fact.

  • S2. “In conclusion, S1’s survey demonstrates that Python is the most popular programming language amongst the developers who were surveyed.” (incorrect)
  • S3. “In conclusion, S1’s survey demonstrates that Python is the most used programming language amongst the developers who were surveyed.” (correct)

I’ve found one of the fastest ways to lose scientific credibility is to exaggerate your findings. It’s my advice that you strive to avoid exaggeration in all forms of communication and replace it with unbiased, neutral language. According to my Google search, an exaggeration is “a statement that represents something as better or worse than it really is.”

I’ve found that imprecision through exaggeration is common for junior researchers. However, I’ve seen even senior researchers fall victim to it. Here are some concrete examples of scientific exaggeration I’ve seen in published tier-1 research papers.

S1: “Our system is notably more efficient than the state-of-the-art.” (strangely, this is actually a belief, not a fact)

  • What’s the difference between “more efficient” versus “notably more efficient”? The word “notably”, which is ambiguous, is biased language and changes this potential fact to a belief.
  • A more precise, unbiased, and neutral way to say this might be: “For our experiments, we found that our system is more efficient than other state-of-the-art systems by upwards of 74%.” This is now a fact. Let the reader decide if that’s notable or not.

S2: “Our system is highly novel.”

  • First, something is novel or it isn’t.
  • Second, novelty doesn’t have a height.

S3: “This is the first system of its kind.”

  • This might be the case, but better to be careful and say, “To our knowledge, this is the first system of its kind.” This is because there’s a lot that’s happened in the world. It’s unlikely that we know everything that’s ever been done.

To improve the scientific precision of our work, I believe we should try to replace exaggeration with precision, usually in the form of unbiased and neutral quantified substantiation (e.g., from “notably more efficient” to “74% more efficient”). This allows an audience to assign their own value.

If you commonly use the following words in your communication, you may want to work on reducing your exaggeration:

  • very, notably, significantly, much, highly, importantly

I strive for communication that contains no exaggeration. It’s been my personal experience that such writing can often (i) present our work in a less biased fashion, (ii) help us more clearly understand the technical strengths and weaknesses of our research, and (iii) help establish credibility amongst other world-class researchers, who likely respect unbiased presentation of research.


I believe that in scientific communication the word “very” should never appear. It adds nothing quantitative because it is ambiguous; it takes up unnecessary space or time in our papers and talks, respectively; and its use is usually a sign of weak communication. Before you submit your next paper for review, I recommend you scan for all cases of the word “very” and simply delete them. More likely than not, your paper just got better. Worst case, if even one “very” was eliminated, your paper is now technically more precise.
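This pre-submission scan is easy to automate. Below is a rough sketch of my own (not from the guide); the word list simply reuses the exaggeration-prone words listed earlier, and you should adjust it for your own writing.

```python
# Hypothetical helper (my own sketch): count exaggeration-prone words in a
# draft before submission. The word list mirrors the guide's examples.
import re
from collections import Counter

HEDGE_WORDS = {"very", "notably", "significantly", "much", "highly", "importantly"}

def flag_exaggeration(text):
    """Return a count of each exaggeration-prone word found in `text`."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w in HEDGE_WORDS)

draft = "Our system is very efficient and notably faster than prior work."
print(flag_exaggeration(draft))  # Counter({'very': 1, 'notably': 1})
```

A zero count doesn’t guarantee precise writing, of course, but a nonzero count is a cheap prompt to replace each flagged word with quantified substantiation or delete it.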


All of the outstanding technologists, engineers, and researchers that I know have at least one thing in common: they all work hard. By “hard”, I mean they are not constrained into thinking that their work is controlled by some arbitrary schedule (e.g., 9am-5pm with an hour for lunch at noon). There is, literally, not a single successful researcher I know who got to where he or she is without working hard.

It’s my opinion that there’s a misconception that to do brilliant research, one must be brilliant. In fact, I know many researchers who I believe have done brilliant research that openly say they are of only average intelligence. These people often tell me that they make up for what they lack in raw intellect by working hard, something they can directly control.

There seems to be a causal link between work ethic and success. That is, it seems that the harder someone works, the more successful he or she becomes. There are, of course, limitations to this. It has been shown that working too much can hurt performance and may deteriorate one’s mental and physical health. However, that discussion is outside the scope of this essay. 🙂

I think the causal relationship between work and success is more pronounced in research. I believe this is the case because research often requires deep, methodical contemplation of complex ideas. I’ve found that such ideas cannot, generally, be appreciated or fully understood with only a casual or passing understanding. Therefore, work ethic can quickly become a principal factor in one’s ability to perform, and understand, world-class research.

While it’s acceptable and even understandable for junior researchers to lack knowledge, it is unacceptable (in my opinion) for researchers, at any level, to lack the motivation to fill those holes in knowledge. If you fall into this category, you probably won’t want to work with me. More likely than not, you will end up not liking me much. 🙂


Prior to research, most traditional students have been exposed to one general kind of learning: classroom-based learning. In these environments, someone is teaching you directly. Moreover, they aren’t teaching you these things because you’ve asked them to. More often than not, they are teaching you the things they believe are important. In this context, you have no control over the knowledge you are acquiring. Yet, when one begins the journey into research, such settings often vanish.

Students generally no longer have the luxury of a professor lecturing them for hours upon hours, spoon-feeding them information. Instead, when doing research students must often find new ways to acquire knowledge. I’ve been told by prior students that this can often be challenging and stressful. As I’ve been told, this is because there are different ways to learn and different things to learn. How can someone, who is new to research, know what he or she should be learning? Moreover, what is the most effective way to learn such things?

A key piece of advice I have for students in this regard is simple: ask questions.

Although simple, I’ve found that many of the problems students encounter when first doing research can be solved if they simply ask more questions. I can’t know for sure why this has been problematic for some students. If I were to speculate, I think it may be that their prior classroom settings didn’t require them to ask questions to gain information, so this is new. I also think students may fear they are “asking a dumb question.” My argument to that is the tired and overused, but true, claim: “there are no stupid questions.” Any person who makes fun of you for asking a question because they think it’s obvious or something you should already know is a person you probably wouldn’t want teaching you things anyway. 🙂

I do my best to always treat every question I get from students with the utmost care.


I believe discussions (or dialogues) are the cornerstone of advancement. However, I don’t believe all types of discussions are equal. In particular, there are ways to interact that I define as “graceful dialogue” versus “ungraceful dialogue.” In general, I believe we should be in environments where dialogues (i.e., discussions) are encouraged, but I believe those dialogues should always be done gracefully. Here are some examples:

  • Ungraceful: “I disagree with you. You should be doing XXX.”
  • Graceful: “I can see value in this approach, but I think I see some potential weaknesses. Have you considered XXX?”
  • Ungraceful: “I need you to tell me when this is due.”
  • Graceful: “Can you tell me when you’d like this done?”

It’s been my experience that graceful dialogue is something that is respectful and will generally gain you favor amongst people in general (even those not directly participating in the discussion). Moreover, I believe that it tends to establish a more collaborative, communal research environment. Such environments, I believe, are often catalysts for innovation.


Successfully doing world-class research, which is likely required to graduate from any tier-1 PhD program, is uncommon. The data tells us that 1 in every 500 people has a PhD. That means only 0.2% of the global population has a PhD. Moreover, the attrition rates for some PhD programs are upwards of 50%. Just because you’ve been admitted to a PhD program doesn’t mean you will successfully complete your PhD. But why? What makes doing a PhD or world-class research, in general, so challenging?

In my opinion, a core challenge that separates world-class research from other types of work is that world-class research requires both technical precision and technical creativity. A PhD is the only degree I’m aware of that upon completion has (usually) demonstrable evidence that the individual possesses the ability to make novel scientific contributions. I believe this is an important distinction from other doctorates, where memorization of information tends to be all that is required. I mean no disrespect to professional doctors (e.g., MDs, JDs, PharmDs), but memorization is, in my opinion, a simpler task than the formation of novel ideas that, through empirical analysis, advance science.

For many people, the PhD process is likely to be the first time in their life where they are simultaneously expected to

  •  learn in an independent fashion and
  •  invent new scientific contributions.

Taking an important step back, the PhD process itself is a swift departure from all other prior forms of learning, where a simple regurgitation of information is generally sufficient.

The data tells us that most people will never even gain entry into a PhD program, much less succeed once they’ve been accepted. If you are on a quest to do your PhD, it’s my hope that this guide helps you on your journey in some capacity.


Part of my job as a researcher is to serve on program committees. I always find it humorous when I see some of the world’s foremost experts in a given technical domain (e.g., one of my mentors, Maurice Herlihy, who invented transactional memory), rate themselves as “intermediate” when reviewing papers in that domain. I find it equally amusing when I see graduate students or recent PhDs who have perhaps 1-2 peer-reviewed publications claim themselves as an “expert” in nearly every paper they review.

The point of this section is two-fold. First, in general, don’t call yourself an expert — let others do that. Second, it’s okay if you aren’t an expert, and the big secret toward becoming one is using these three powerful words: “I don’t know.” Yet I’ve seen many students who seem unwilling to say “I don’t know”, which in turn can cause a number of short-term problems and more general long-term problems, like inhibiting their ability to grow as a scholar and a critical thinker.

I won’t speculate on why I’ve seen so many students do this, but I will point out that of all the brilliant people I know, none of them will judge you (especially a student) for admitting you don’t know something. Moreover, a common pattern I see amongst most of the world-class researchers I know is that they are often the first to admit when they don’t know or understand something. They are confident enough to know that their transparency about lacking knowledge doesn’t make them less wise; it makes them wiser. At least one reason for this is that now they have the chance to learn something new. Wise.