The National Assessment of Educational Progress (NAEP) boasts a long and proud history. First administered in 1969 to national samples, NAEP typically tests students in 4th, 8th, and an upper high school grade in a variety of subjects. Mathematics and reading are tested most often, but other periodically tested topics include science, history, civics, geography, technology literacy, arts, writing, and economics.
The NAEP is “low stakes,” given that neither students, nor teachers, nor schools face any consequences for test performance. Indeed, given the NAEP’s “matrix” sampling design, where only some individual classrooms within schools are selected to participate and, even then, only to administer one test section—just part of a larger whole—test scores cannot be calculated for individual students, classrooms, or schools.
NAEP had long reported test scores for some of the larger states—where sample sizes were naturally large enough to produce statistically representative results—and for smaller states willing to subsidize sample size increases. Around 1990, however, the federal government instituted a biennial “State NAEP” in mathematics and reading (and occasionally science or writing), covering the costs for any state wishing to participate. After the passage of the No Child Left Behind Act in 2002, all states participated.
2017 Report Reveals Stagnation
On April 10, the US Education Department released the much-anticipated national, state, and “Trial Urban District” assessment results from the 2017 administration. The results were “much anticipated” largely because the 2017 administration is only the second since the controversial Common Core Standards and their associated tests were fully implemented in most US states. The results from the previous, 2015 NAEP had shown a nationwide decline in math and reading proficiency, which some critics attributed to the widespread implementation of the Common Core Standards.
Two other issues loomed over the April 10 release:
- A possible effect of the switch from paper-and-pencil to a primarily “digitally-based administration” on computer tablets; and
- A possible influence of the “innovative” Common Core tests onto NAEP test formats and content (the 2017 test items are not made public).
Raising anticipation for the data release, Louisiana Education Superintendent John White wrote a letter to the NAEP Commissioner a week beforehand, questioning the validity of the 2017 test results on account of the digital transition. Like other state superintendents, he had received the results in advance and, as all can now know, scores in high-profile, reform-heavy Louisiana generally declined.
Supervising the “Nation’s Report Card” is the 26-member National Assessment Governing Board (NAGB), which includes “Two governors, or former governors, of different political parties” and “Two state legislators of different political parties.” No independents.
To assure that “the Governing Board is widely representative,” the NAEP legislation requires the NAGB to also recruit members from a variety of other professional roles. Thus, the board includes teachers, principals, superintendents, state and local school board members, and businesspersons, as well as curriculum and testing experts. Yet, on the issue of the nationally unpopular Common Core Standards, the NAGB includes only supporters.
As language arts scholar Sandra Stotsky writes:
“Given the growing dissatisfaction with the statewide tests aligned to Common Core’s standards, which are built into all four-year education plans submitted by state departments of education without state legislative or local school board approval to the U.S. Department of Education in 2016/2017, it is not surprising that many parents are concerned about the independence and integrity of these ‘Report Cards.’”
Thus far, the grand promises made for the allegedly “higher quality,” “more rigorous” (and certainly hugely expensive) Common Core standards and tests have yet to be observed, at least in NAEP results. That did not stop the state superintendents invited to speak at the April 10 data release from attributing score increases in their states to their Common Core adoption. Apparently, the few superintendents invited to speak came only from states with rising scores.
Though the overall trends over the past several years are flat to slightly lower, after a previous decade of rises, the most revealing recent trend concerns the widening “achievement gap”—the difference in score trends between high- and low-performing students. For every test—in 4th and 8th grade reading and math—national average scores for high-performing students are trending up and those for low-performing students are trending down.
Common Core critics nationwide might well say, “we told you so.”
Supporters claim that the Common Core standards and tests are filled with innovative, more rigorous instructional content and test formats.
An abundance of research has shown, however, that complex (or convoluted) instruction and formats tend to advantage advanced and wealthier students and disadvantage struggling and poorer students. That is because the former students tend to have more background knowledge. The more complex the lesson or test format, the more likely it is to include unfamiliar information (e.g., words, symbols) not necessarily essential to understanding the central issue or solving the problem.
According to NAEP Commissioner Peggy Carr, the 2017 test administration contained “very innovative tasks,” but she also asserts that the widening achievement gap is “absolutely not” caused by the digital transition, because the trend data were calculated from students who took the tests with paper and pencil.
She added that the international Progress in International Reading Literacy Study (PIRLS) and Trends in International Mathematics and Science Study (TIMSS) have also shown widening achievement gaps among US students, and those tests are still administered solely with paper and pencil.