by Nathan R. Kuncel, Ph.D.

Making admission decisions at any level, from preschool to the Ph.D., is frightfully complex. We are faced with numerous sources of information, including prior grades, interviews, recommendations, student essays, test scores, and extracurricular activities. To make life more difficult, we are not admitting students with a single goal in mind. Instead, we attempt to address and fulfill multiple, often competing, admission goals, ranging from supporting academic and extracurricular excellence, to reaching yield, to matching personality fit with the school’s culture. Add in the goal of evaluating and anticipating the specific educational needs of each student, and the task becomes overwhelming.

In fact, the complexity of mentally juggling over a dozen pieces of information, while aiming to fulfill a dozen goals, exceeds the information-processing capacity of our minds. If you have ever felt stuck simply choosing a single side dish for your meal (“Healthy salad or French fries? Aarrggg!”), you know what we are up against. As decision makers, we use heuristics, or simplifying processes, to deal with it all.

In collaboration with The Enrollment Management Association, I conducted observational research at a number of independent schools and found that they manage this complexity with two general methods. First, they distill the data overload into a smaller number of ratings or rankings that capture the most critical characteristics valued by the school. Often the focus is on academic skills, social skills, kindness, integrity, and drive. Second, they identify a specific recruitment goal and pull out for consideration a short list of students who address that goal. It might be time to find a French horn player or to consider applicants from a specific region.
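
That first step, distilling many data points into a few ratings, can be made more consistent by combining the component ratings mechanically, with the same formula applied to every file. Here is a minimal sketch in Python; the traits, weights, and 1–5 scale are hypothetical placeholders, not a formula from the research:

```python
# A minimal sketch of distilling many data points into a few ratings
# and one composite score. The traits, weights, and 1-5 scale are
# hypothetical placeholders, not a recommended formula.

WEIGHTS = {
    "academics": 0.35,
    "social_skills": 0.20,
    "kindness": 0.15,
    "integrity": 0.15,
    "drive": 0.15,
}

def composite(ratings, weights=WEIGHTS):
    """Weighted average of committee ratings on a 1-5 scale.

    Applying the same formula to every file puts applicants on a
    common scale instead of relying on an ad hoc mental average.
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[trait] * ratings[trait] for trait in weights)

print(composite({"academics": 4, "social_skills": 3, "kindness": 5,
                 "integrity": 4, "drive": 3}))  # ~3.8
```

The point of making the combination explicit is not to replace the committee’s judgment, but to keep the same file from scoring differently depending on who reads it or when.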

Decision Bias

Although both of these methods are reasonable approaches, they have a downside. Consider the applicants in Figure 1 below, which has been adapted from a classic study of decision making. We feel we weigh and judge the information for each student separately, but here’s the kicker: the student we choose depends largely on whether Student 3 or Student 4 is being considered alongside Students 1 and 2. When evaluating Students 1, 2, and 3, people chose Student 1 by a ratio of about 3 to 1. However, if we happen to have Students 1, 2, and 4 in our applicant pool, preferences flip, and Student 2 is strongly preferred. This is called the decoy effect: Student 3 is similar to Student 1 but clearly worse, which makes Student 1 look like the safe, dominant choice, and Student 4 plays the same role for Student 2. Whom we choose actually depends on the characteristics of the other applicants. This is just one of many decision biases that plague hiring and admission decision making.
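
To see the mechanics, here is a minimal sketch in Python that flags a decoy by checking dominance between two-attribute applicant profiles. The scores below are illustrative inventions, not the values from Figure 1:

```python
# A minimal illustration of asymmetric dominance (the decoy effect).
# The applicant profiles below are hypothetical, not data from Figure 1.

applicants = {
    "Student 1": {"academics": 90, "extracurriculars": 60},
    "Student 2": {"academics": 60, "extracurriculars": 90},
    "Student 3": {"academics": 85, "extracurriculars": 55},  # worse than 1 on both
    "Student 4": {"academics": 55, "extracurriculars": 85},  # worse than 2 on both
}

def dominates(a, b):
    """True if profile a is at least as good as b on every attribute
    and strictly better on at least one."""
    return (all(a[k] >= b[k] for k in a)
            and any(a[k] > b[k] for k in a))

def decoys(pool):
    """Find each applicant dominated by exactly one other applicant:
    a classic decoy that makes its dominator look like the safe pick."""
    result = {}
    for name, profile in pool.items():
        dominators = [other for other, p in pool.items()
                      if other != name and dominates(p, profile)]
        if len(dominators) == 1:
            result[name] = dominators[0]
    return result

pool = {k: applicants[k] for k in ("Student 1", "Student 2", "Student 3")}
print(decoys(pool))  # {'Student 3': 'Student 1'} -> Student 1 tends to win
```

A check like this cannot tell a committee whom to admit, but it can flag when a dominated applicant is quietly steering the comparison.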

The available evidence on the effectiveness, fairness, and bias of many tools, such as student essays, interviews, and reference letters, is thin to nonexistent for middle and high school admission. The results of our research in college and graduate school settings are not encouraging. Student essays and personal statements are miserably poor predictors of subsequent success, and both letters of recommendation and interviews, in their typical form, are not up to carrying the load we ask of them. In fact, they may even facilitate biased decisions.

Fortunately, there are many ways we can improve the independent school admission process, whether a school is highly selective or primarily focused on assessing students to help meet their needs. One place to start is to upgrade the quality of the individual tools we use.

Improving The Interview

A century of research on interviewing tells us that traditional interviews are less than half as predictive of future success as what are called structured interviews (see Figure 2). In a structured interview, we identify the behavioral domains we see as critical (e.g., being kind to other students, demonstrating determination in the face of failure) and then ask questions that probe either specific past events or specific hypothetical future events. We are relying on the fact that past behavior is the best predictor of future behavior in the same general context. This results in questions like “Tell me about the most recent time you were disappointed with your performance in a class and what you did about it” or “Imagine that you got back a paper with a much lower grade than you expected. What would you do about it?”

Both types of questions lead to specific, school-relevant, and more easily quantifiable responses that tend to improve agreement among interviewers while also reducing bias. Developing structured interviews and their supporting materials takes multiple steps and can require both real work and a culture shift. However, it is one of the biggest-impact, lowest-cost changes an organization can make.
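
To make the idea concrete, here is a minimal sketch of what a structured interview guide can look like in practice: every candidate gets the same questions, and every answer is scored against behavioral anchors rather than a gut impression. The domains, questions, anchors, and scores below are hypothetical examples, not EMA materials:

```python
# A minimal sketch of a structured interview guide. The domains,
# questions, and anchors are hypothetical examples, not EMA materials.

GUIDE = [
    {
        "domain": "resilience",
        "question": ("Tell me about the most recent time you were "
                     "disappointed with your performance in a class "
                     "and what you did about it."),
        "anchors": {
            1: "Blamed others; took no action.",
            3: "Acknowledged the setback; made a partial effort to improve.",
            5: "Owned the result; described a concrete plan and followed through.",
        },
    },
    {
        "domain": "kindness",
        "question": ("Describe a time you noticed a classmate being "
                     "left out. What did you do?"),
        "anchors": {
            1: "Did not notice or did not act.",
            3: "Noticed and sympathized, but acted only when prompted.",
            5: "Acted on their own initiative to include the classmate.",
        },
    },
]

def interview_score(ratings):
    """Average the 1-5 anchor ratings one interviewer assigns per domain."""
    return sum(ratings.values()) / len(ratings)

# Two interviewers rating the same candidate against the same anchors:
print(interview_score({"resilience": 4, "kindness": 5}))  # 4.5
print(interview_score({"resilience": 4, "kindness": 4}))  # 4.0
```

Because both interviewers score the same answers against the same anchors, disagreements become visible and discussable instead of being buried in an overall impression.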

Upgrading individual assessment tools is critical. Although there is a foundation for evidence-based practice, there is a lot we don’t know about key elements of the admission process. One way to address this gap is to create a research consortium focused on admission decision making, where questions can be tested and best practices can be shared across schools.

Nathan R. Kuncel, Ph.D., is a Distinguished Professor of industrial-organizational psychology and McKnight Presidential Fellow at the University of Minnesota, and a member of The Enrollment Management Association board of trustees.

EMA
February 10, 2017