Challenging The New Digital SAT

Last weekend I had the opportunity to present at the National Test Prep Association conference, alongside such esteemed colleagues as Mike Bergin of Chariot Learning in Rochester, NY; Amy Seeley of Seeley Test Pros in Cleveland, OH; and Janet Godwin, CEO of ACT, Inc. At the conference, Angela DelBrocco, Executive Director of SAT Program Management and Strategy, also presented on the New SAT. In this article, I will not repeat the information about the New SAT covered in my March article; instead, I will challenge some of what was reported about the test.

I found some reports about the test to be questionable. For example, the College Board reported that students prefer the embedded (on-screen) calculator to their own. (Students are permitted to use either.) I always advise my students to use the calculator they are accustomed to on test day, and I find it hard to believe that a student would prefer a calculator they are less familiar with. It is possible this is an attempt by the College Board to promote the new test as accessible to everyone who takes it, and thus more equitable.

The New SAT is also touted as more secure. Admittedly, the first section of each test will be different, which will make cheating on test day more difficult, and since there are no more paper copies, test booklets can no longer be intercepted by students looking to see the questions before the exam is given. But if a 19-year-old Mark Zuckerberg could hack into Harvard’s system, I think it is only a matter of time until a resourceful teen hacks into the College Board’s system and leaks test questions.

The test was also praised because students can start it at different times. However, studies show that a 2.8-second break in concentration doubles the chance that a student will make an error on a difficult question, and a classmate starting or finishing nearby mid-test is exactly that kind of interruption. Although staggered start times may be convenient for students, they will inevitably impact scores by breaking concentration. This is especially concerning for students with learning differences such as ADD and ADHD.

Lastly, in 2020, 1.7 million students had SAT scores that “confirmed or exceeded their GPA.” How could this possibly be measured? Wouldn’t the scale be created by matching each GPA with a corresponding SAT score? If so, some scores would match by definition, and roughly half of the remaining scores would fall higher and half lower, as the rough simulation below illustrates. This metric seems arbitrary to me.
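To see why this figure seems almost tautological, consider a back-of-the-envelope simulation. The Python sketch below is entirely hypothetical: the GPAs, SAT scores, and percentile-matching rule are my own assumptions for illustration, not the College Board’s actual methodology. It simply shows that if an “expected” SAT score is assigned to each GPA by matching percentiles, then roughly half of any student population will “confirm or exceed” their GPA by construction.

```python
# Hypothetical illustration only -- not the College Board's methodology.
# If an "expected" SAT score is assigned to each GPA by matching
# percentiles, then by construction roughly half of students will
# meet or exceed that expectation.

import random

random.seed(0)

N = 10_000  # hypothetical student population

# Simulated GPAs (0.0-4.0) and SAT scores (400-1600), loosely correlated.
gpas = [min(4.0, max(0.0, random.gauss(3.0, 0.5))) for _ in range(N)]
sats = [
    min(1600, max(400, int(400 + 300 * g + random.gauss(0, 150))))
    for g in gpas
]

def percentile_ranks(values):
    """Return each value's percentile rank (0.0-1.0) within the population."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for pos, i in enumerate(order):
        ranks[i] = pos / (len(values) - 1)
    return ranks

gpa_pct = percentile_ranks(gpas)
sat_pct = percentile_ranks(sats)

# A student "confirms or exceeds" their GPA if their SAT percentile
# is at least as high as their GPA percentile.
confirmed = sum(s >= g for s, g in zip(sat_pct, gpa_pct))
print(f"{confirmed / N:.0%} of students 'confirm or exceed' their GPA")
# Prints roughly 50% -- which is what percentile matching guarantees.
```

In other words, under a percentile-matching scale, a number like “1.7 million students confirmed or exceeded their GPA” may say more about how the scale was built than about how students actually performed.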

Regardless of the perceived flaws of the test, it is still good that we will have a standardized national assessment that will cut through the fluff of inflated GPAs and padded college applications.

Have a suggestion for a topic? You can reach me at Paul.Pscolka@ivymasters.com or 732-485-6480.
