From Memberanda, Fall 2012

Media reports have shined a bright light on admission-test preparation for three-year-olds applying to elite independent schools. Yet little is reported about just what the admission process entails at the PK-2 level and how it determines whether a student is qualified to attend. To begin to explore these questions, The Enrollment Management Association commissioned a study to focus on the early childhood admission process and to determine how it is influenced by school structures, admission pressures in the region, what schools are looking for, and the instruments used in the process.

The study involved a series of nine case studies of independent schools in the United States and Canada, representing single-gender and coeducational schools, schools of different size, governance, grade levels, histories, and admission/growth models. The study captured and interpreted data collected through interviews, observations, and analysis of admission materials.

While the study considered the entire process, the two major assessments administered for admission – individual and group – were the focus. Individual assessments are administered one-on-one with the students and are designed to measure individual cognitive abilities or developmental age. Group play assessments were all administered at the schools by school personnel and employed a mix of school-developed and commercial tools.

Individual Cognitive Assessment

The study found that instruments designed to determine the cognitive abilities or developmental ages of children were administered either by an outside testing agency or the school, depending on the form of assessment. Four delivery models were identified: tests administered by a licensed psychologist at an off-campus test center, administered on campus by the school psychologist, administered on campus by a consultant, or administered on campus by trained school staff. In addition to the school-designed assessments used, 12 commercial instruments were identified. They all focused on one or more of the following – developmental age, language acquisition, curriculum achievement, cognitive abilities, and IQ.

Of the nine schools, four give some form of an IQ test as part of their admission process. Most of these schools reported issues with inflated scores due to test preparation. In fact, some schools reported that they really do not look at the scores but focus more on the examiner’s comments, which they find to be the most useful outcome of the individual report for a student. It is important to note that IQ tests were never designed to be used for admission purposes and, when given to children as young as three, are a limited measure of future success.

Some schools are exploring the possibility of moving away from these types of tests entirely but are reluctant to do so because they might be perceived as less competitive. When it comes to selecting which assessment to use, several market factors are at play related to geographic isolation, other independent schools, parent pressure, and the perceived status of the school in that market. Schools with less parent and market pressure tend to use developmental and school-developed assessments rather than IQ, achievement, and ability assessments. Schools in markets that belonged to associations and consortia reported that group conversations about how to improve individual assessment are underway.

Group Assessment

The other practice examined was the group play assessment. Each school designed its own process, but the processes had common features. In general, the models consisted of a condensed version of a day in the class and included free-play time, circle time, teacher-directed activities, and a short individual assessment, which examined student development in four areas: physical, emotional, social, and cognitive. These assessments ranged from 45 minutes to 2.5 hours, depending on the grade being applied to and the tools used. Sessions at schools using the grade-appropriate version of the Stanford Achievement Test (SAT) were usually longer, while sessions built on school-designed or language assessments were shorter. Group sizes ranged from 6 to 16 per session, with the number of observers ranging from 2 to 5.

Lessons in Assessment

The study’s most interesting revelation was that no common construct was being used by all schools. With each school establishing its own process for individual assessments, there was little agreement as to what should be measured. Additionally, some schools have found it necessary to change their assessments as a result of the effects of tutoring, test preparation, and market forces, and have moved from an assessment examining one construct to one that measures a completely different construct – e.g., moving from the WPPSI to the Gesell or DAS II. While these assessments all look at abilities, the tests have different scales and intended purposes. A school might administer both as a way to validate the findings of one, but it is not recommended to compare a student tested under one assessment with a student tested under a different one.

There was also a lack of consistency in the construction and scoring of assessments. While some schools had well-defined processes and assessments for group play, others created a new assessment each year. There was concern at some schools that they might not be consistent from one year to the next or even from one grade to the next. Schools using a version of the SAT found that the test was either too easy or too difficult, as it is normed for the end of the year and is not in sync with the admission cycle. Rolling enrollments are also problematic, since students who test later in the year have an advantage on these types of achievement tests. Additionally, rolling enrollments might limit possibilities to have the child attend a play-group session. In some cases, schools reported visiting the child’s current class as a substitute for the play-group session.

There was also little agreement as to what a child entering prekindergarten or kindergarten should know – getting at the question of school readiness and the purpose of the admission process. This issue is magnified for schools that enroll most of their graduating classes in these years and do not have other planned entries to the school. One practice that resonated at several schools, with both school-developed and commercial instruments such as the K-SEALS and DIBELS, was benchmarking the assessment with their current students in order to make useful comparisons.

While the process is far from perfect, schools are generally satisfied that the process they have works for them and that the use of multiple data points focuses on the whole child. This, however, is not to be confused with schools believing that the process is good. Schools were generally aware of issues surrounding the forms of assessment being used, issues created by parents preparing their children, and the fact that the process is time-intensive and expensive for parents, particularly when outside testing agencies are used. Concern was also expressed about who the process might screen out.

Next Steps

It is important to view the findings of this study in the context of its exploratory nature. Clearly, a larger conversation around early childhood admission is needed. Some questions yet to be answered are:

1. What constructs should we really be testing during this admission process – if we feel we should be testing at all?

2. What role do creativity, self-regulation, and mindset play in a student’s future success, and can that be assessed at this age?

3. Where/what should our baselines be when testing for early childhood admission?

4. What biases are we introducing and ignoring in the assessments we use and the methods we use to administer them?

5. What role might technology play in addressing the inefficiencies of individual testing?

There is a need to generalize the findings through validation with a large national sample and to capture additional information about the early childhood admission process. We need to explore measures of delayed gratification, self-efficacy, and self-regulation. There is clearly an opportunity here for a new type of individual assessment as well as an opportunity for guidance in developing the group play assessments and standardization of observational practices. It is hoped that this study will open the doors to further exploration into new measures and processes for helping to find the right fit for a child in one of our schools.

Chris Bigenho, this study’s author, is Director of Educational Technology at Greenhill School in Addison, Texas.


 

EMA Members can view the full report in our Member Community.


EMA
November 15, 2011
