1) we want all students to be able to read and love doing so
2) we want all students to have a fundamental grasp of numeracy and mathematical thinking
3) we want all students to be able to write persuasively on a variety of subjects
You might also wish to nurture and develop certain qualities in students, e.g., curiosity, compassion, creativity, confidence.
So now the question becomes: how do we determine that students know these things, can do these things, or have acquired these qualities? The trick is to use assessment to help students know them, do them, and acquire them. In other words, good assessment is indistinguishable from good instruction. Good assessment drives instruction because it provides rich, meaningful information that both students and teachers can use -- teachers to improve their instruction and students to improve their learning.
So here's the litmus test for the current battery of assessments that have hijacked our curricula: do they provide rich, meaningful information that both students and teachers can use?
I would say the answer is "no," because most of the assessments used in today's "data-driven assessment" practices produce information that is shallow and disjointed. Even worse, some of these assessments produce information that is simply wrong.
To make this case, let's consider the DIBELS test. The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) is a set of one-minute measures: recognizing initial sounds, naming the letters of the alphabet, segmenting the phonemes in a word, reading nonsense words, oral reading of a passage, retelling, and word use. The measures are used to assess phonological awareness, the alphabetic principle, accuracy and fluency in reading connected text, vocabulary, and comprehension.
The DIBELS is used to assess more than 1,800,000 students from kindergarten through grade 6. Students who do not meet the expected benchmark are given the DIBELS over and over, i.e., the test becomes the exclusive means by which progress in reading is measured.
Jay Samuels, a professor of educational psychology and of curriculum and instruction at the University of Minnesota, served as a member of the National Reading Panel and coauthored the fluency section of the panel's report. The NRP's report has become the gospel on how reading is to be taught in this country, so Samuels' opinion carries some weight. Here is what he recently wrote about the DIBELS:
--begin excerpt from Reading Research Quarterly (2007-10-01)--
The DIBELS's battery of tests . . . aim to identify students who may be at risk of reading failure, to monitor their progress, and to guide instruction. With the widespread use of DIBELS tests, a number of scholars in the field of reading have evaluated them, and not all of their evaluations have been flattering. For example, Pearson (2006, p. v) stated,
“I have built a reputation for taking positions characterized as situated in 'the radical middle'. Not so on DIBELS. I have decided to join that group convinced that DIBELS is the worst thing to happen to the teaching of reading since the development of flash cards.”
Goodman (2006), who was one of the key developers of whole language, is concerned that despite warnings to the contrary, the tests have become a de facto curriculum in which the emphasis on speed convinces students that the goal in reading is to be able to read fast and that understanding is of secondary importance. Pressley, Hilden, and Shankland (2005, p. 2) studied the Oral Reading Fluency and Retelling Fluency measures that are part of DIBELS. They concluded that “DIBELS mispredicts reading performance much of the time, and at best is a measure of who reads quickly without regard to whether the reader comprehends what is read.”
If Riedel's conclusion that administration of subtests other than Oral Reading Fluency is not necessary for prediction of end-of-first- and second-grade comprehension, in combination with the critical evaluations of DIBELS by some of our leading scholars in reading is not enough to raise the red flag of caution about the widespread use of DIBELS instruments, I have an additional concern about the misuse of the term fluency that is attached to each of the tests. Because each of the tests is labeled as a fluency test, it is only fair game to see if that term is justified. I contend that with the exception of the Retell Fluency test, none of the DIBELS instruments are tests of fluency, only speed, and that the Retell Fluency test is so hampered by the unreliability of accurately counting the stream of words the student utters as to make that test worthless. Let us not forget that, in the absence of reliability, no test is valid.