In November 2010, the U.S. Department of Education released the National Education Technology Plan. As we discussed in Part 1 and Part 2 of this series, the Plan is a broad vision for schools and districts to implement technology for learning. In this post, we will look more closely at its big ideas for assessment.
The Plan’s goal states that we should use technology to help us “measure what matters” and to “use assessment data for continuous improvement.”
The question of what to assess
So, what matters? From President Obama to the Common Core State Standards, there is agreement that we should be teaching 21st-century skills such as problem-solving, critical thinking, entrepreneurship, and creativity. The big question is how to measure these complex competencies. The Plan identifies some directions for how technology can help:
- Use web-based assessment programs that consider a learner’s thinking processes, such as the one designed by Jim Minstrell. These are aligned with both the middle and high school National Science Education Standards and Benchmarks for Science Literacy.
- Directly assess problem-solving skills
- Make visible sequences of actions taken by learners in simulated environments
- Model complex reasoning tasks within the context of relevant societal issues
- Assess how well students communicate, such as in a multiuser virtual environment like the River City program at Harvard University. These are performance-based assessments, where students have to model, design, measure, graph, and describe results.
Assessments that help students learn
The Plan notes the two main types of assessments: summative assessments that “sum up” a student’s learning after teaching, and formative assessments that allow teachers to “form” or diagnose and modify instruction based on student responses.
As readers of this blog may know, the Explicit Direct Instruction model of teaching developed by DataWORKS founders John Hollingsworth and Dr. Silvia Ybarra emphasizes Checking For Understanding (CFU) as part of effective lesson delivery. This strategy, which relies on randomly selected non-volunteers, Pair-Shares, and Whiteboards, is very effective for formative assessment during a lesson.
The Plan suggests that technology can help with this by:
- Using simple response devices working with multiple-choice or true-false questions
- Using electronic whiteboards with peer discussion, such as Group Scribbles
- Using networked graphing calculators to see how students interpreted graphs
For online learning, technology can accumulate data as students work. The system could track problem-solving sequences and strategy use based on what each student selects or inputs. It could track the number of attempts a student makes, the number of hints or type of feedback given, and the time each part of the problem takes. One example referenced is the ASSISTment system used in the Worcester Public Schools in Massachusetts. The system gives detailed reports of how students are doing on 100 math skills, as well as their accuracy, speed, attempts, etc. Students then receive tutoring tailored to their needs; the system keeps “learning” about the students so teachers can guide them appropriately.
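As a concrete sketch of the kind of data collection described above, the small tracker below accumulates attempts, hints, and time per problem step. All class and method names here are invented for illustration; this does not reflect the actual ASSISTment system.

```python
from dataclasses import dataclass


@dataclass
class StepRecord:
    """Data gathered for one part of a problem (hypothetical structure)."""
    attempts: int = 0
    hints: int = 0
    seconds: float = 0.0


class WorkTracker:
    """Accumulates assessment data as a student works, instead of
    relying on a separate test afterward."""

    def __init__(self):
        self.steps = {}

    def record(self, step, correct, hint_used=False, seconds=0.0):
        """Log one attempt at one step of a problem."""
        rec = self.steps.setdefault(step, StepRecord())
        rec.attempts += 1
        rec.hints += int(hint_used)
        rec.seconds += seconds
        return correct

    def report(self):
        """Summarize attempts, hints, and total time per step."""
        return {step: (r.attempts, r.hints, round(r.seconds, 1))
                for step, r in self.steps.items()}
```

A teacher-facing report would then be built from `report()`, e.g. flagging steps where attempts and hints are high, which is the kind of tailored guidance the Plan envisions.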
Assessments that offer continuous improvement
The new Common Core State Standards are to be assessed primarily with computer adaptive testing, in which the system assigns each question based on responses to previous items on the test. The Plan also suggests an approach called Adaptive Assessment, which facilitates differentiated learning. It is designed to combine survey results on how students like to learn with their actual learning gains after using different methods (such as tutoring, small-group instruction, learning online, and learning through games). The idea is to generate “playlists” of customized learning activities for each student.
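The adaptive idea can be illustrated with a deliberately simplified “staircase” rule: a correct answer raises the difficulty of the next item, a miss lowers it. Real adaptive engines use more sophisticated statistical models (item response theory), so this is only a sketch of the selection logic, with all names invented:

```python
def next_difficulty(current, correct, step=1, lo=1, hi=10):
    """Staircase rule: harder after a correct answer, easier after a miss.
    A toy stand-in for a real item-selection model."""
    return min(hi, current + step) if correct else max(lo, current - step)


def run_adaptive_test(responses, start=5):
    """Replay a sequence of correct/incorrect responses and return the
    difficulty level assigned for each item, starting at mid-range."""
    levels, level = [], start
    for correct in responses:
        levels.append(level)
        level = next_difficulty(level, correct)
    return levels
```

For example, `run_adaptive_test([True, True, False])` assigns difficulties 5, 6, and 7: the test climbs while the student answers correctly and backs off after a miss, homing in on the student’s level with fewer items than a fixed-form test.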
Other ways technology could make assessment more efficient include:
- Speedier development of new test questions since they can be field-tested via the web
- Using online communities to evaluate student work, such as posting poems to a social networking site, videotaping public service announcements, writing mobile apps that get downloaded, scientists judging science fair entries, writers judging literary entries, and animators judging film competitions.
- Making individual school and district test scores available to the public, rather than only averages or proficiency levels.
- Reducing the amount of time devoted to tests by collecting data as students work. One example is West Virginia’s techSteps program, which uses sequenced activities to introduce technology skills. Teachers assess progress against a rubric, leading to a student’s Technology Literacy Assessment Profile. The state thus gets data on technology proficiency at every grade level without using a separate technology test.
- Creating a system of interconnected feedback for students, educators, parents, school leaders, and district administrators that will lead to better decisions based on real data. This can lead to better teacher collaboration and better evaluation of programs and interventions.
- Developing electronic learning records for every student. This is an extension of online grade books and electronic portfolios. They could include learning experiences, competencies, and samples of student work, evaluated against rubrics that delineate the quality expected for each project or assignment.
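The last idea above, an electronic learning record that ties work samples to competencies and rubric scores, can be sketched as a simple data structure. Everything here (field names, rubric scale) is hypothetical, purely to show the shape such a record might take:

```python
from dataclasses import dataclass, field


@dataclass
class WorkSample:
    """One piece of student work, scored against a rubric
    (criterion name -> score on that rubric's scale)."""
    title: str
    rubric_scores: dict

    def total(self):
        return sum(self.rubric_scores.values())


@dataclass
class LearningRecord:
    """Hypothetical electronic learning record extending an e-portfolio:
    it links samples of work to the competencies they demonstrate."""
    student: str
    competencies: set = field(default_factory=set)
    samples: list = field(default_factory=list)

    def add_sample(self, sample, competency):
        self.samples.append(sample)
        self.competencies.add(competency)
```

Because each sample carries its rubric scores, the same record could feed reports to students, parents, and teachers, which is the interconnected feedback the Plan calls for.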
There are obvious problems our whole system faces, such as multiple data systems, lack of common standards for data formats, and different system platforms. But the potential for actually using real current data to support educational decisions throughout the whole system is intriguing. That is the promise that technology brings to the field of educational assessment.
The promise of technology starts with a vision like this – then we have to work out the details.
Future posts will review the big ideas in the remaining goals of this broad Plan for Technology.