10 (More) Things Every Admission Committee Member Needs to Know About the SSAT

EMA
December 16, 2015

  1. Standardized is a good thing. While “standardized testing” has garnered more than its fair share of negative media attention in the last few years, it is important to remember what “standardized” actually means. In the case of the SSAT and other standardized tests, it simply means that the test is given under “standard conditions,” i.e., with uniform procedures for administration and scoring, regardless of testing date or center. The goal of standardization is to minimize variability, so that the test is uniformly fair for all students across all administrations.
  2. All tests are not created equal. First and foremost, it is imperative that any assessment tool in any educational context be used and understood for the purpose for which it was designed and developed. This sounds like a no-brainer, but often, in-house placement tests and/or achievement tests—designed to assess mastery of specific curricular content—are used to make admission decisions. As an admission test, the SSAT was specifically designed (with a deep research base behind it) for the sole purpose of providing a common measure across disparate educational contexts and predicting first-year academic success in an independent school.
  3. You cannot use SSAT and ISEE scores interchangeably. The Independent School Entrance Examination (ISEE) is used by many fine independent schools. Like the SSAT, the ISEE is an admission test. However, while they are used for the same purpose, it is critical to remember that the two tests are designed and scored differently. A 90th percentile on the ISEE is not the same as a 90th percentile on the SSAT. The two tests should be understood and interpreted independently as part of your school’s admission decision-making process.
  4. The bell curve gets a bad rap. We all rail against the idea of forced distributions—particularly when it comes to performance-based pay systems! But remember that admission tests, in order to fulfill the purpose for which they are designed, must create necessary differentiation within the applicant pool. Admission test questions are hard—they are designed to be answered correctly only about half of the time. The reality is that if all students performed equally well on the SSAT, it would fail to be useful when making selection decisions among applicants.
  5. The SSAT is a norm-referenced test. A norm-referenced test differs from a criterion-referenced test in that it makes scores meaningful by indicating the test taker’s position relative to a norm group rather than a fixed standard. The SSAT norm group consists of all test takers of the same grade and gender who have taken the test for the first time at one of the Standard Saturday or Sunday SSAT administrations in the U.S. and Canada over the last three years. It’s important to note that students taking the test are applying to college preparatory independent schools, so the SSAT has a highly competitive norm group. (A simple illustration of how a percentile rank relates a score to a norm group follows this list.)
  6. The scaled score is the most precise way to compare one student’s abilities to another’s. Unlike percentile scores, which vary depending on the norm group—the pool of students taking the SSAT in any given three-year period—the scaled score indicates actual performance on the test, derived from the raw score of rights, wrongs, and omitted questions through the score equating process. SSAT scores are reported for each subsection on a scale of 440-710 (midpoint 575) for the Middle Level test and 500-800 (midpoint 650) for the Upper Level test. (A sketch of how a raw score is built from rights, wrongs, and omits follows this list.)
  7. Equating adjusts for differences across test forms. Different SSAT forms are built and administered to students each year. Although test developers follow very stringent specifications when they assemble new forms, so that different forms are as parallel in difficulty as possible, in reality some variation in form difficulty is inevitable. A statistical procedure referred to as score equating is used to adjust for minor differences in form difficulty, so that scores reported to students on the November test, for example, are comparable to those on the December test. (A simplified equating example follows this list.)
  8. There is a right way and a wrong way to look at “rights and wrongs.” The SSAT score report indicates the questions students get right and wrong. It’s tempting to compare these “raw scores” across students; however, test forms vary in difficulty. Therefore, the number of correct answers needed to achieve a specific scaled score differs from form to form (see #7). Another common misconception is that the questions in each test section gradually increase in difficulty. This is not correct, and no meaning should be derived from which questions in the section a student got right or wrong.
  9. The writing sample is the first test section a student completes. As you know, the SSAT provides an unscored sample of a student’s writing. In many cases, the SSAT writing sample is one of the only unedited views of a student’s writing that a school receives in the application process. It is important to remember that the writing section is the first—not the last—section of the SSAT administered to students, so a test taker’s fatigue should not be a consideration when assessing the writing.
  10. Questions on the SSAT are written by independent school teachers. Teachers in our schools who have been trained by The Enrollment Management Association in the science of “item writing” are the source of test questions. To develop the actual test forms used, The Enrollment Management Association convenes review committees composed of both content experts and independent school teachers. The committees reach consensus regarding the appropriateness of the questions. Questions accepted by the committee are then pretested (i.e., administered to SSAT test takers in the unscored experimental section of the test) and analyzed. Questions that are statistically sound are selected and assembled into the test forms administered each year. 
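To make #5 concrete, here is a minimal sketch, in Python, of how a percentile rank relates one student’s score to a norm group. The scores and the group below are invented for illustration; this is not EMA’s actual norming procedure, which draws on three years of first-time test takers of the same grade and gender.

```python
# Illustrative only: percentile rank of one scaled score within a norm group.
# The scores below are hypothetical; the real SSAT norm group spans three years
# of first-time test takers of the same grade and gender.

def percentile_rank(score, norm_group):
    """Percentage of norm-group scores that fall below the given score."""
    below = sum(1 for s in norm_group if s < score)
    return round(100 * below / len(norm_group))

norm_group = [620, 650, 665, 680, 695, 701, 710, 725, 740, 755]  # hypothetical Upper Level scores
print(percentile_rank(710, norm_group))  # 60 -> "60th percentile" within this tiny group
```

The same scaled score would earn a different percentile against a more (or less) competitive norm group, which is exactly why percentiles shift over time while scaled scores do not.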
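The raw score mentioned in #6 can be sketched the same way. The quarter-point deduction for a wrong answer shown below is an assumption for illustration; the authoritative penalty, and the conversion from raw to scaled scores, are set by the form-specific equating process.

```python
# Illustrative formula scoring: +1 per right answer, a fractional deduction per
# wrong answer, no effect for omitted questions. The 0.25 penalty and the counts
# below are assumptions for illustration only.

def raw_score(rights, wrongs, omits, penalty=0.25):
    # Omitted questions neither add to nor subtract from the raw score.
    return rights - penalty * wrongs

print(raw_score(rights=40, wrongs=12, omits=8))  # 37.0 on a hypothetical 60-question section

# That raw score is then placed on the reported scale (e.g., 500-800 per section
# on the Upper Level test) through equating, not through a single fixed formula.
```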
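Finally, the equating described in #7 can take many statistical forms. The sketch below uses mean-sigma linear equating, a simple textbook method, only to show the idea of mapping scores from a slightly harder form onto a reference form’s scale; it is not the SSAT’s actual equating design, and the data are invented.

```python
# Mean-sigma linear equating (textbook illustration, not EMA's procedure):
# place raw scores from a new form onto the scale of a reference form by
# matching the two forms' means and standard deviations.
from statistics import mean, stdev

def linear_equate(score, new_form_scores, ref_form_scores):
    slope = stdev(ref_form_scores) / stdev(new_form_scores)
    return mean(ref_form_scores) + slope * (score - mean(new_form_scores))

# Hypothetical raw scores from comparable groups taking two forms:
november_form = [30, 33, 35, 37, 40]   # slightly harder form, so raw scores run lower
december_form = [33, 36, 38, 40, 43]   # reference form

# A raw 35 on the harder November form maps to 38 on the December form's scale,
# which is why raw scores alone should not be compared across forms (see #8).
print(round(linear_equate(35, november_form, december_form)))  # 38
```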
EMA Members can view the full report in our Member Community.
