The Impact of Technology on Student Achievement

Lorraine Sherry and Daniel Jesse

RMC Research Corporation, Denver

October 2000



Note: Instead of the Figure in this draft document, please refer to Sherry, Jesse, and Billig (2002) for the complete tables and graphics that support this article.

In the Spring 1999 edition of Texas Study, Landon Shultz wrote:


Because of the quality of the human resources we are working with, our schools have succeeded in doing remarkably well in spite of the limitations of the factory model of education. Just think how successful we can be if we move to a more powerful model as we enter the 21st Century. (Shultz, 1999, p. 3)


As we move into the 21st Century, teachers are becoming increasingly aware of just how technology-savvy their students are. They are dynamic learners, eager to learn about the sophisticated and technologically based world that they live in, and about the types of jobs that will be available to flexible, creative, lifelong learners. 


With the E-rate in place, even the poorest Texas schools are now becoming wired and networked. About a year ago, Education Week reported that Texas had received nearly $150 million in E-rate funding, and that there was an average of 13 students per Internet-connected computer. Already, districts are receiving funding to create models of excellent technology use. “The districts will be charged with showing how their pilot products improve student performance in at least two subject areas – math, science, language arts, or social studies.” (Education Week, September 23, 1999, p. 104.)


However, demonstrating that technology is having an impact on student achievement is a critical issue that these pilot districts – and similar schools and districts throughout the United States – are going to have to face. How can we demonstrate that technology increases student achievement?  At one level, we (and the general public) know that using technology to enrich learning is a good thing, but demonstrating its impact on test scores is tricky.  We know that students will be using technology extensively in their future careers, but acquiring and maintaining educational technology, and training teachers to infuse it into the curriculum, is both resource intensive and costly. 


In this era of increasing accountability, it is important to be able to demonstrate that large expenditures are having some impact on student achievement.  The situation is exacerbated by the standards movement and the increased importance of mastering the TEKS so that students will do well on the TAAS.


Does technology impact student achievement in a standards-based system?  Many people believe that it does, but the nature of the impact is complex.  RMC Research Corporation, as a partner of the Texas STAR Center (Region VIII Comprehensive Center), has documented that student achievement can be impacted, at least indirectly, by technology.  This paper presents an overview of a new model – the type of powerful assessment model that Shultz envisioned – that has been piloted in another state (Vermont) and can be used to describe the connection between student motivation, metacognition, learning processes, and learning outcomes in a technology-rich environment. We believe that this model can be replicated in other situations in which standards-based instruction is the norm.


Measuring Student Achievement


In a technology-rich classroom, instruction often involves the use of problem-based learning, Internet research, computer-mediated communication, online dialogue about curriculum-related textbooks and resources, and multimedia projects in a variety of disciplines. Moreover, there is an increasing emphasis on standards-based instruction, especially in Texas. Both process and product measures are helpful for assessing student performance in this context. 


A process measure is an indicator of higher order thinking skills in action. It specifies student behaviors that can be observed by teachers. Process measures can be developed by teachers or other experts, or adapted from research-based publications; examples appear in Sherry, Jesse, and Billig (2002).


A product measure is more closely aligned with achievement. Teachers can develop product measures, especially if they work collaboratively with one another or with other experts. For example, one might start with a social studies standard from the TEKS; a sample standard appears in Sherry, Jesse, and Billig (2002).


Social studies teachers can then gather examples of student work and develop a rating scale (for example, 0 = “no evidence”, 1 = “approaches standards”, 2 = “meets standards”, and 3 = “exceeds standards”) to score student written, oral, and visual presentations of social studies information. Student products can then be assessed using these benchmarks.
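Once products have been scored on such a scale, the results can be tallied quickly. The following Python sketch (the class scores are hypothetical, used only to illustrate the 0–3 scale above) summarizes scored products by rubric level:

```python
# Hypothetical sketch: applying the 0-3 rubric scale to a set of
# scored student products and summarizing the results.
RUBRIC = {0: "no evidence", 1: "approaches standards",
          2: "meets standards", 3: "exceeds standards"}

def summarize(scores):
    """Return the share of products at each rubric level."""
    counts = {level: 0 for level in RUBRIC}
    for s in scores:
        counts[s] += 1
    total = len(scores)
    return {RUBRIC[level]: counts[level] / total for level in RUBRIC}

# Example: ten scored presentations from one (hypothetical) class.
class_scores = [2, 3, 1, 2, 2, 0, 3, 2, 1, 2]
print(summarize(class_scores))
```

A summary like this makes it easy to see, at a glance, what proportion of a class meets or exceeds the standard.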


Measuring the Impact of Technology


How do process and product measures help us understand the impact of technology?  At RMC Research Corporation (RMC Research Corporation, 2000), we developed a model based upon the work of Robert Sternberg, a prominent cognitive psychologist at Yale. Put simply, his model suggests the following: “Motivation drives metacognitive skills, which in turn activate learning and thinking skills, which then provide feedback to the metacognitive skills, enabling one’s level of expertise to increase.” (Sternberg, 1998, p. 17.)


RMC Research’s extension of the Sternberg model breaks down the student learning process into chunks that indirectly impact achievement: motivation, metacognition, inquiry learning, application of skills, and the process and product outcomes just described. Some of the measures that were used for this study are based on the work of Perry (1992). The logic here is that impacting these various components of students’ higher order thinking and learning processes may indirectly have a long-term effect on student achievement. The following diagram illustrates how these components relate to one another.



Figure 1. RMC Research’s Model for Assessing Student Achievement in a Technology-Rich Learning Environment.



The use of technology in the classroom, as we have described here, will have a different impact upon outcomes, depending on whether the desired outcome is a process or a product. 


When technology is used as a tool in the classroom, students are learning how to learn; they are learning new skills that will help them both in school and in the workplace; they are learning how to dialogue with professionals and use feedback; and they are motivated to stay in school.  Our research indicates that this particular approach is valuable because it serves the traditionally underserved populations in the schools we studied. The products that these students developed were impressive; the skills they developed were significant; and, as an indirect result, student achievement, as measured by tests like the TAAS, will most likely improve.


In a recent study, researchers from Westat statistically analyzed the levels of poverty, access to educational technology, professional development, extent of technology use, and scores on the 1998-99 Illinois standardized tests. Their analysis showed that, in cases where teachers’ use of technology to facilitate or enhance classroom instruction was high, standardized test scores also were high. (Branigan, 2000, p. 2.)



Documenting the Impact of Technology in the Classroom


Technology engages students in the learning process.  It helps them to develop skills and remain motivated. The question remains: How can you document its impact in the classroom?


One good place to start is with the model we have developed, measuring student motivation, metacognition, inquiry learning, and application of skills.  You can use RMC Research’s measures or develop your own.  Then you can tie these measures to process and product measures that are assessed by rubrics, again using ours or developing your own, depending on the academic content area that you are addressing.  It is then relatively straightforward to establish baselines and follow-ups to develop and refine classroom efforts, as well as to conduct summative evaluations of impact.
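The baseline-and-follow-up idea can be illustrated with a short Python sketch. All scores below are hypothetical; they simply show how a fall-to-spring comparison on a 0–3 rubric scale might be computed:

```python
# Hypothetical sketch: baseline vs. follow-up comparison for one
# process measure, scored on a 0-3 rubric scale.
def mean(scores):
    """Average of a list of rubric scores."""
    return sum(scores) / len(scores)

baseline = [1, 1, 2, 0, 2, 1, 1, 2]   # fall scoring (hypothetical)
followup = [2, 2, 3, 1, 2, 2, 1, 3]   # spring scoring (hypothetical)
gain = mean(followup) - mean(baseline)
print(f"mean gain: {gain:.2f} rubric points")
```

Tracked over several terms, mean gains like this one provide the kind of formative feedback needed to refine classroom efforts, and the raw scores remain available for summative evaluation.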


What steps should be taken to document the impact of technology in the classroom using this approach? Here is a process that traces the path from motivation, to metacognition, to thinking and learning processes, to student achievement.


Starting point: Teachers identify the standards for their course or unit. These should come from the TEKS and be fairly general, rather than concentrating on lower order thinking skills like reciting facts or demonstrating fairly narrow skill sets such as using a spell checker.


  1. Teachers create a motivation scale using whatever instruments the school counselor or administrators prefer.  Alternatively, they can base the scale on other tests that they are familiar with, asking students to rate how they feel about a set of statements such as “I am satisfied with who I am”, “I work hard”, “I believe I am intelligent”, “I have initiative”, “When I take on new responsibilities, I follow through and complete them”, or “I try to do my best”.


  2. Teachers clearly identify the thinking and learning processes that they expect their students to exhibit while using technology to support learning within their academic content area. They ask questions that involve observable behaviors with technology.  For example, “In my work in this class I use technology to …”, followed by a checklist of expected behaviors such as “do research”, “get ideas”, “show ideas”, “design graphics”, “solve problems”, “communicate with others”, “take part in simulations”, “make models”, or “build Web sites”. Some of these items will involve inquiry learning processes, while others will identify specific skills that students are applying.


  3. Teachers clearly identify the higher order thinking skills (metacognitive or strategic thinking skills) that they expect their students to develop in their class, which can be enhanced by technology. A typical question might look like this: “In my work with [art, music, math, language arts, etc.], I use technology to …”, followed by a checklist of metacognitive skills such as “get information from places I can count on”, “try different ways to solve a problem”, “get reasons for my answers”, “make sure my answers are right”, “find patterns”, “make connections”, “make a sketch or picture to show a problem or idea”, or “change or improve my idea or product”.


  4. Teachers adapt, adopt, or create a product rubric that can be used to score student final products that were created using technology. These products may be written reports with graphics, oral presentations supported by PowerPoint slides, multimedia presentations incorporated into Web pages, captured messages on electronic conferencing systems from students who are discussing language arts-related textbooks, videotapes of interviews with persons who are discussing social or historical topics, or any other curriculum-related product. The rubric must be benchmarked with examples of student work. Creating and benchmarking rubrics is time-consuming and requires collaboration among teachers, but it is the most authentic manner of scoring student products. The rubric will then be used to determine the students’ final grades, and will indicate whether they have nearly met, met, or exceeded the specified standards.


  5. Teachers adapt, adopt, or create a process rubric that addresses the thinking and learning processes that they are observing. This is best done by consensus of teachers within the same academic content area, as this will increase the reliability of the process measure. Perhaps the rubric involves revision of student products; perhaps it addresses developing oral communication skills; but the behavior needs to be observable and measurable. Moreover, it must address learning processes that teachers actually value and teach, that are related to their classroom activities, and that can be scored.


A survey that incorporates questions about motivation, thinking and learning processes, and higher order thinking skills (components 1, 2, and 3) can be given to students at the end of the term. This self-reported data can then be correlated with teacher scores for components 4 and 5.  Changes over time can also be assessed for all of these measures.
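The correlation step can be sketched in a few lines of Python. The student data below are hypothetical, and the Pearson coefficient is written out from its standard definition rather than drawn from any particular statistics package:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: self-reported motivation (mean of Likert items)
# and teacher rubric scores (0-3) for the same eight students.
motivation = [3.2, 4.1, 2.5, 4.8, 3.9, 2.2, 4.4, 3.0]
rubric = [2, 3, 1, 3, 2, 1, 3, 2]
print(round(pearson(motivation, rubric), 2))
```

A coefficient near +1 would indicate that students who report higher motivation also tend to earn higher rubric scores; running the same computation on fall and spring data supports the change-over-time analysis described above.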




Besides being a new and useful model for assessing student achievement, this is also a good instructional design model. The process is as follows: first select the standard; then develop the assessment(s) based on identified knowledge, skills, and learning processes; next develop the corresponding instructional activities; and finally use teacher-created rubrics to assess student learning processes and finished products.


A general model such as this can be applied to any academic subject area. The specific activity (for example, a study of some historical trend, using data to make predictions for science experiments, or presenting a point of view on some work of literature that is justified by evidence in the text) will be determined afterwards, based on the individual school’s curriculum.




Branigan, C. (2000, October 5). New study: Technology boosts student performance. eSchool News online. Available:


Education Week. (1999, September 23). State of the States. Author.


Kendall, J.S., & Marzano, R.S. (1995). Content knowledge: A compendium of standards and benchmarks for K-12 educators. Aurora, CO: McRel.


Marzano, R.J., Pickering, D., & McTighe, J. (1993).  Assessing student outcomes: Performance assessment using the Dimensions of Learning Model.  Aurora CO: McREL Institute.


Perry, S.M. (1992). Building Better Thinking Skills. Available: Aurora Public Schools, 1085 Peoria Street, Aurora CO 80011.


RMC Research Corporation. (2000). The WEB Project, 2000: Evaluation. Author.


Shultz, L.T. (1999, Spring). Education for the 21st Century: What changes should we create? Texas Study, VIII (2), pp. 1-3.


Sternberg, R.J. (1998, April).  Abilities are forms of developing expertise.  Educational Researcher, 27 (3), 11-20.