As a relatively new program director years ago, I spent an inordinate amount of time coaching trainees with performance difficulties to success. Most of them did not have difficulties with medical knowledge, but rather with multiple behavioral competencies, namely professionalism and interpersonal communication skills. Our previous traditional interview method gave me no insight into which trainees might develop problems and resulted in a number of trainees with poor fit. Remediation of behavior can be very costly in time and resources, and can be less successful than remediation of medical knowledge. It was this pain point that led me to redesign our interview process to focus on best fit from the outset.


High Stakes Nature of the Interview

“Traditional” interviews have not been shown to correlate with trainee performance and amount to little more than the candidate recapitulating a list of accomplishments that are already evident in the application. Traditional questions tend to be subjective and rarely align with the ACGME core competencies. As a result, the interview is reduced to a transactional meeting in which candidates “market themselves” to the program and the program “markets itself” to the candidates, all in the hopes of a good fit. With such non-specific traditional questions, however, finding the right fit becomes a game of chance. Selecting the wrong candidate can have a lasting impact on the program.


Structured Interviews

Structured interviews are designed to improve reliability by standardizing question content, evaluation, or both. Our redesigned process at Henry Ford Hospital does both. We ask every interview candidate a standard set of questions focused on predetermined fit factors. Our questions are behavior-based and mapped to the ACGME core competencies. We have found behavior-based questions, which focus on how a candidate actually behaved in a specific situation, to be highly beneficial. They allow us to see the candidate in action, and we glean much information about them that we otherwise would be unable to obtain. Some candidates have given me feedback that they felt our program really allowed them to display who they truly are through these questions. Interestingly, other candidates were not receptive at all, because they could not prepare for the questions and rehearse answers.

Behavioral interviewing is based on the premise that past behavior predicts future behavior; it is directly dependent on and informed by the candidate’s past experiences. In contrast, situational questions, which can also be used as part of a structured interview, ask what a candidate would do or how they would behave, relying on context and/or intentions as the basis for predicted behavior. I am not entirely convinced that theoretical actions are helpful, as we have all encountered situations where we say we would do one thing and, when the situation actually occurs, do something entirely different. For this reason I steer away from situational questions in our interviews.


Question Standardization

How does someone develop standard questions? I recommend that you start with your program’s mission: think about whom you serve, what you value, and what makes your program unique. This will help you decide which factors are important for success in the institution, the program, and the clinical learning environment. Think about what characterizes a successful trainee within your program – the knowledge, skills, and attitudes – and the behaviors that demonstrate them. When we started our process, this was the most difficult task. We actually had to work backwards and think about the characteristics that we did not want. For instance, we all agreed that we did not want fellows who behaved as “mavericks” in the ICU. We did not want fellows who were unable to collaborate, acted without regard for the ICU team, or did not adhere to ICU policies and rules. When we had discussions from that perspective, it became clearer what defined “fit” for us.

Have the selection committee work together to decide which fit factors are most important. An exhaustive list is not necessary; focusing on a few key behaviors is enough when starting the process. Once the fit factors are defined, you can write questions around those behaviors, and the selection committee faculty can then review and refine them. I would also highly suggest testing the questions on a few current fellows to see whether the answers they provide give you the information you are looking for and confirm what you already know about them. If not, revise the question further. Your current trainees can be a valuable source of formative feedback, so get them involved!

After you’ve written and refined the questions, map them to the core competencies. This will enable you to ensure you have enough content in each of the domains and alignment with your most important values.


Evaluation Standardization

Just as we use rating scales with behavioral anchors to evaluate trainee performance, I use them to evaluate candidate interviews. I instruct our interviewers to take notes during the interview and to use them to score the interview immediately after it concludes, before the next one begins. The scoring rubric does not need to be complicated; it can be as simple as a Likert scale (1 = “poor fit” to 5 = “excellent fit”) for the entire interview or as detailed as a behaviorally anchored scale for each question. Whatever your scoring methodology, it is important to ensure that each of your interviewers knows, understands, and agrees with what defines the anchors. Training your interviewers to use the scoring rubric with behavioral anchors is essential: it will improve the reliability of your scores and make it easier to compare candidates directly.



At the end of each interview day, we conduct an interview team debrief. Each interviewer reports his or her score for each candidate, and I use them to compute each candidate’s overall interview score. We then discuss any candidate with a wide spread of scores (low inter-rater reliability) to uncover the source of the discrepancy and any potential bias. We also discuss those with scores consistent across raters (high inter-rater reliability) to determine what behaviors were demonstrated across the spectrum of behavior domains. This can be particularly illuminating for those found to have poor fit. In the debrief I reveal to the interview team the academic score each candidate was assigned at screening. We use the model proposed and validated by Gabe Bosslet, MD in the EAST-IST Study to develop our academic scoring and apply it consistently every year to screen applicants. Using the combined interview and academic score, we rank the candidates for the day, decide by consensus whether any adjustments to their rank are needed, and choose the top three candidates by rank from each interview day to populate our preliminary Match rank list. This is supremely helpful given an interview season that spans more than 10 weeks; it would otherwise be impossible to remember, much less distinguish among and rank, candidates from the beginning of the season to the end.
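For programs that track scores in a spreadsheet or script, the arithmetic of this debrief can be sketched in a few lines. The field names, the spread threshold, and the equal weighting of interview and academic scores below are illustrative assumptions, not the author's actual formula:

```python
# Minimal sketch of the debrief arithmetic: average each candidate's
# interviewer scores, flag wide spreads (low inter-rater agreement)
# for discussion, and rank by a combined score.
# Weighting and threshold are illustrative assumptions.
from statistics import mean


def summarize(candidates, spread_threshold=2):
    """candidates: name -> {"interview": [scores], "academic": score}."""
    rows = []
    for name, d in candidates.items():
        scores = d["interview"]
        rows.append({
            "name": name,
            "interview_mean": mean(scores),
            # wide spread across raters -> discuss in the debrief
            "low_agreement": max(scores) - min(scores) >= spread_threshold,
            # combined score; equal weighting is an assumption
            "combined": mean(scores) + d["academic"],
        })
    rows.sort(key=lambda r: r["combined"], reverse=True)  # best fit first
    return rows


day = {
    "Candidate A": {"interview": [4, 5, 4], "academic": 4},
    "Candidate B": {"interview": [2, 5, 3], "academic": 4},  # wide spread
}
for r in summarize(day):
    print(r["name"], round(r["interview_mean"], 2), r["low_agreement"])
```

The ranked output makes it straightforward to pull the top three candidates from each interview day into a preliminary rank list.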

I would encourage you to consider a structured interview process. We’ve found that:

  1. It is feasible.
  2. The process provides common language for interpreting results among interviewers.
  3. It focuses on behaviors, not data.
  4. Developing and refining questions is iterative.
  5. It allows for direct comparison of candidates.
  6. Training interviewers on the scoring rubric is essential.
  7. The post-interview debrief is revealing.
  8. Debriefs can facilitate rank list generation.

I hope you find that using a structured interview process results in the recruitment of candidates who are best aligned with your program’s mission, values, and learning environment, and who become great fellows within your respective programs.


Sample Interview Questions


Question: “Please describe the steps you take to determine the patient or family needs and expectations when performing a procedure.”
Core Competencies:
  • Patient Care
  • Interpersonal and Communication Skills
  • Systems-Based Practice
Follow-Up Question(s):
  • “What would let you know you met their needs?”
Helps Assess:
  • Listening Skills
  • Teach-back

Question: “Reflecting on your most recent rotation in the ICU, give me an example of when you had to have a family discussion about goals of care. Tell me what you told the family and how you handled their response.”
Core Competencies:
  • Interpersonal and Communication Skills
  • Professionalism
Follow-Up Question(s):
  • “What specifically did you say to them to help them understand?”
  • “How did you know they understood?”
  • “How do you know you met their needs?”
Helps Assess:
  • Listening Skills
  • Empathy

Question: “Describe a time when you were not following standard procedures and received feedback. How did you respond?”
Core Competencies:
  • Practice-Based Learning and Improvement
Follow-Up Question(s):
  • “How do you know when you don’t have enough information and need to ask for help?”
Helps Assess:
  • Ability to seek input
  • Ability to respond to feedback


Geneva Tatem, MD is Clinical Associate Professor of Medicine at Wayne State University School of Medicine and is the Fellowship Program Director for Pulmonary and Critical Care Medicine at Henry Ford Hospital. She is also the Associate Division Head of Pulmonary and Critical Care Medicine for Henry Ford Health System.