The Impact of Technology on Student Achievement
Lorraine Sherry and Daniel Jesse
RMC Research Corporation, Denver
October 2000
Note: Instead of the Figure in this draft document, please refer to: Sherry, Jesse, & Billig (2002) for complete tables and graphics that support this article.
In the Spring 1999 edition of Texas Study, Landon Shultz wrote:

Because of the quality of the human resources we are working with, our schools have succeeded in doing remarkably well in spite of the limitations of the factory model of education. Just think how successful we can be if we move to a more powerful model as we enter the 21st Century. (Texas Study, Spring 1999, p. 3)
As we move into the 21st Century, teachers are becoming increasingly aware of just how technology-savvy their students are. They are dynamic learners, eager to learn about the sophisticated and technologically based world that they live in, and about the types of jobs that will be available to flexible, creative, lifelong learners.
With the E-rate in place, even the poorest Texas schools are now becoming wired and networked. About a year ago, Education Week reported that Texas had received nearly $150 million in E-rate funding, and that there was an average of 13 students per Internet-connected computer. Already, districts are receiving funding to create models of excellent technology use. "The districts will be charged with showing how their pilot products improve student performance in at least two subject areas – math, science, language arts, or social studies" (Education Week, September 23, 1999, p. 104).
However, demonstrating that technology is having an impact on student achievement is a critical issue that these pilot districts – and similar schools and districts throughout the United States – are going to have to face. How can we demonstrate that technology increases student achievement?
At one level, we (and the general public) know that using technology to enrich learning is a good thing, but demonstrating its impact on test scores is tricky. We know that students will be using technology extensively in their future careers, but acquiring and maintaining that technology, and training teachers to infuse it into the curriculum, is both resource intensive and costly.
In this era of increasing accountability, it is important to be able to demonstrate that student achievement is being impacted in some way by large expenditures. The situation is exacerbated by the standards movement and the increased importance of mastering the Texas Essential Knowledge and Skills (TEKS) so that students will do well on the Texas Assessment of Academic Skills (TAAS).
Does technology impact student achievement in a standards-based system? Many people believe that it does, but the nature of the impact is complex. RMC Research Corporation, as a partner of the Texas STAR Center (Region VIII Comprehensive Center), has documented that student achievement can be impacted, at least indirectly, by technology.
This paper presents an overview of a new model – the type of powerful assessment model that Shultz envisioned – that has been piloted in another state (Vermont) and can be used to describe the connection between student motivation, metacognition, learning processes, and learning outcomes in a technology-rich environment. We believe that this model can be replicated in other situations in which standards-based instruction is the norm.
In a technology-rich classroom, instruction often involves the use of problem-based learning, Internet research, computer-mediated communication, online dialogue about curriculum-related textbooks and resources, and multimedia projects in a variety of disciplines. Moreover, there is an increasing emphasis on standards-based instruction, especially in Texas. Both process and product measures are helpful for assessing student performance in this context.
A process measure is an indicator of higher order thinking skills in action. It specifies student behaviors that can be observed by teachers. Process measures can be developed by teachers or other experts, or adapted from research-based publications. (See Sherry, Jesse, & Billig, 2002, for examples.)
A product measure is more closely aligned with achievement. Teachers can develop product measures, especially if they work collaboratively with one another or with other experts. For example, one might start with a standard from the TEKS.
Social studies teachers can then gather examples of student work and develop a rating scale (for example, 0 = "no evidence", 1 = "approaches standards", 2 = "meets standards", and 3 = "exceeds standards") to score students' written, oral, and visual presentations of social studies information. Student products can then be assessed using these benchmarks.
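The bookkeeping behind such a rating scale can be sketched in a few lines. This is only an illustration: the 0-3 scale matches the one described above, but the product categories and scores are invented for the example.

```python
# Hypothetical sketch: summarizing teacher-assigned rubric scores for a set
# of student products. Scale labels follow the 0-3 rubric described above;
# the score data themselves are invented.
from collections import Counter

RUBRIC = {0: "no evidence", 1: "approaches standards",
          2: "meets standards", 3: "exceeds standards"}

# Scores for written, oral, and visual products (hypothetical values).
scores = {"written": [2, 3, 1, 2], "oral": [1, 2, 2, 3], "visual": [3, 2, 2, 2]}

for mode, vals in scores.items():
    counts = Counter(RUBRIC[v] for v in vals)       # label frequencies
    mean = sum(vals) / len(vals)                    # average rubric score
    print(f"{mode}: mean {mean:.2f}, {dict(counts)}")
```

A summary like this lets a teacher see at a glance how a class is distributed across the benchmark levels for each mode of presentation.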
How do process and product measures help us understand the impact of technology? At RMC Research Corporation (The WEB Project, 2000, Evaluation), we developed a model based upon the work of Robert Sternberg, a prominent cognitive psychologist at Yale. Simply put, his model suggests the following: "Motivation drives metacognitive skills, which in turn activate learning and thinking skills, which then provide feedback to the metacognitive skills, enabling one's level of expertise to increase" (Sternberg, 1998, p. 17).
RMC Research's extension of the Sternberg model breaks down the student learning process into chunks that indirectly impact achievement: motivation, metacognition, inquiry learning, application of skills, and the process and product outcomes just described. Some of the measures that were used for this study are based on the work of Perry (1992). The logic here is that impacting these various components of students' higher order thinking and learning processes may indirectly have a long-term effect on student achievement. The following diagram illustrates how these components relate to one another.
Figure 1. RMC Research's Model for Assessing Student Achievement in a Technology-Rich Learning Environment. [Figure omitted from this draft: the model links Motivation, Metacognition, Inquiry Learning, and Application of Skills to Student Achievement process and product measures. See Sherry, Jesse, & Billig (2002) for the full diagram.]
The use of technology in the classroom, as we have described here, will have a different impact upon outcomes, depending on whether the desired outcome is a process or a product.
When technology is used as a tool in the classroom, students are learning how to learn; they are learning new skills that will help them both in school and in the workplace; they are learning how to dialogue with professionals and use feedback; and they are motivated to stay in school. Our research indicates that this particular approach is valuable because it serves the traditionally underserved populations in the schools we studied. The products that these students developed were impressive; the skills they developed were significant; and student achievement, as measured by tests like the TAAS, is likely to improve as an indirect result.
In a recent study, researchers from Westat statistically analyzed the levels of poverty, access to educational technology, professional development, extent of technology use, and scores on the 1998-99 Illinois standardized tests. Their analysis showed that, in cases where teachers' use of technology to facilitate or enhance classroom instruction was high, standardized test scores also were high (Branigan, October 5, 2000, p. 2).
Technology engages students in the learning process. It helps them to develop skills and remain motivated. The question remains: How can you document its impact in the classroom?
One good place to start is with the model we have developed: measure student motivation, metacognition, inquiry learning, and application of skills. You can use RMC Research's measures or develop your own. Then you can tie these measures to process and product measures that are assessed by rubrics, again using ours or developing your own, depending on the academic content area that you are addressing. It is then relatively straightforward to establish baselines and follow-ups to develop and refine classroom efforts, as well as to conduct summative evaluations of impact.
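The baseline-and-follow-up comparison described above amounts to simple gain scores. A minimal sketch, assuming each student has one composite survey score at the start of the term and one at the end (student identifiers and all values here are hypothetical):

```python
# Hedged sketch of a baseline/follow-up comparison. Each student has a
# composite score at the start and end of the term; data are invented.
baseline  = {"s1": 2.1, "s2": 3.0, "s3": 2.5}
follow_up = {"s1": 2.8, "s2": 3.2, "s3": 2.4}

# Per-student gain and the class-wide mean gain.
gains = {s: round(follow_up[s] - baseline[s], 2) for s in baseline}
mean_gain = sum(gains.values()) / len(gains)
print(gains, f"mean gain: {mean_gain:.2f}")
```

A positive mean gain with most individual gains positive is the kind of baseline-to-follow-up evidence a formative or summative evaluation would report.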
What steps should be taken to document the impact of technology in the classroom using this approach? Here is the process that measures the path from motivation, to metacognition, to thinking and learning processes, to student achievement.
Starting point: Teachers identify the standards for their course or unit. These should come from the TEKS and be fairly general, rather than concentrating on lower order thinking skills like reciting facts or demonstrating fairly narrow skill sets such as using a spell checker.
A survey that incorporates questions about motivation, thinking and learning processes, and higher order thinking skills (components 1, 2, and 3) can be given to students at the end of the term. This self-reported data can then be correlated with teacher scores for components 4 and 5. Changes over time can also be assessed for all of these measures.
Besides being a new and useful model for assessing student achievement, this also represents a good instructional design model. The process is as follows: first select the standard; develop the assessment(s) based on identified knowledge, skills, and learning processes; develop the corresponding instructional activities; and use teacher-created rubrics to assess student learning processes and finished products.
A general model such as this can be applied to any academic subject area. The specific activity (for example, a study of some historical trend, using data to make predictions for science experiments, or presenting a point of view on some work of literature that is justified by evidence in the text) will be determined afterwards, based on the individual school's curriculum.
References

Branigan, C. (2000, October 5). New study: Technology boosts student performance. eSchool News online. Available: http://www.eschoolnews.com/showstory.cfm?ArticleID=1652
Education Week. (1999, September 23). State of the States. Author.
Kendall, J.S., & Marzano, R.J. (1995). Content knowledge: A compendium of standards and benchmarks for K-12 educators. Aurora, CO: McREL.
Marzano, R.J., Pickering, D., & McTighe, J. (1993). Assessing student outcomes: Performance assessment using the Dimensions of Learning Model. Aurora, CO: McREL Institute.
Perry, S.M. (1992). Building Better Thinking Skills. Available: Aurora Public Schools, 1085 Peoria Street, Aurora, CO 80011.
RMC Research Corporation. (2000). The WEB Project, 2000: Evaluation. Author.
Shultz, L.T. (1999, Spring). Education for the 21st Century: What changes should we create? Texas Study, VIII(2), 1-3.
Sternberg, R.J. (1998, April). Abilities are forms of developing expertise. Educational Researcher, 27(3), 11-20.