Cover letter from Rob Pangborn


  Final Report from University-wide Math Commission, June 2001


  IMPLEMENTATION PROCESS from Rodney A. Erickson


  Cover letter from Rob Pangborn


College of Engineering	               Robert N.  Pangborn, Dean 814.863.3750
Undergraduate Studies Dean's Office    Betty A. Mantz,
101 Hammond Building	                 Administrative Assistant
University Park, PA 16802	       Facsimile Number	         814.363.4749


DATE:	July 25, 2001
TO:	Rodney A. Erickson
FROM:	Rob Pangborn
RE:	Math Commission Report

Attached are the final report and recommendations of the University-wide
Math Commission that was appointed in January. The Commission met every two
weeks throughout the semester and did substantial research and analysis of
placement data, current processes, and newer, more diagnostic approaches.
There is unanimous agreement among commission members that web-based
diagnostic testing will be of significant benefit to Penn State and its
students. However, the membership was more divided on the prospect of using
the test for binding placement of students in an appropriate level in the
math course sequences. The representatives of the math faculty on the
Commission were most apprehensive about reports from other schools that
taking the test in an unsecured, unproctored environment can result in
students' placement above their abilities. We don't want to inadvertently
jeopardize the strong record of success we have achieved at Penn State, both
in terms of students' performance in the math courses and in overall student
retention. However, all members agreed that, if pilot testing proves to be
reliable, there could be little argument with, or resistance to, utilizing
the web-based format for placement purposes.

Therefore, as indicated in the recommendations, we are encouraging that some
caution be exercised (#1) while a web-based diagnostic test is initially
developed (#2) and implemented, and that this new tool be carefully assessed
to establish consistency with the traditional placement process, which has
been remarkably successful across the Penn State system (#3). We do,
however, believe that the web-test could be used as soon as it is available
for the significant cadre of students who do not anticipate taking a formal
sequence of mathematics study, and that computer-based testing could be
accommodated in a number of colleges that have appropriate facilities. As
stated at the conclusion of the report, these experiments with computer and
web-based testing could potentially involve one-third to one-half of all
entering students. Along with corresponding assessment, this pilot
experience should provide confidence in the technology and refinements in
the placement process that will allow extension of the web-based approach to
the broader population of entering students.

I hope you will find this report informative and responsive to the charge 
you gave the Commission. 


cc: J. Cahir	
J. Cohen


Final Report from University-wide Math Commission, June 2001

               University-wide Math Commission
               Final Report and Recommendations
                         June 2001

                      Executive Summary

The University-wide Math Commission was charged by the provost to recommend
new procedures for mathematics placement and diagnostic testing.  During the
Spring semester, 2001, the Commission reviewed benchmarking information from
within the CIC and from other institutions.  It analyzed data to assess the
effectiveness of math placement and student performance at Penn State.
Members of the Commission researched the relevant literature on placement
methods, retention, math testing, and the like.  The Commission also
reviewed opportunities and issues associated with web-based testing, in
particular examining a pilot web-testing study completed at Penn State
during this semester.  Based on its evaluation, the Commission makes the
following recommendations.

(1) For the present, maintain the existing pencil-and-paper test procedure,
or some comparably secured option, for testing and placing incoming
students.

(2) Develop and offer a web-based diagnostic test - to include both enhanced
diagnostic capabilities and an embedded (traditional) placement test - to
all incoming students.

(3) Complete a systematic, comprehensive, data-based evaluation of the
effectiveness of a web-based placement testing prototype at Penn State.

(4) Eliminate testing redundancy or unneeded security, and explore interim
collaborations for computer-based testing options, and pilot a web-based
diagnostic testing/skills assessment instrument for students in academic
tracks that do not require mathematics course sequences or calculus.


The University-wide Math Commission was charged by executive vice
president and provost Rodney A. Erickson on January 31, 2001. The
provost asked the Commission "to recommend new procedures for
mathematics placement which take a diagnostic approach, preferably
using the World-Wide Web. The procedures should be aimed at clearly
discriminating student preparation for the various levels of beginning
mathematics, including preparation of students who do not contemplate
taking calculus." The Commission was further asked to provide
suggestions on how to deal with related implementation issues and to
take a University-wide approach, so that the new system will apply at
all campuses.

In carrying out its charge, the Commission was encouraged to consider
diagnostic approaches, updating of cut-off scores, updating of methods
used for calculus- and non-calculus-bound student populations, and
strategies for reassignment of students who may have been incorrectly
placed.

The members of the Commission were:

Maria-Carme Calderer, Co-Chair      (Department of Mathematics)        
Robert N. Pangborn, Co-Chair        (College of Engineering)           
George E. Andrews            	    (Department of Mathematics)        
Douglas K. Brown		    (Penn State Altoona College)       
Michael J. Dooris		    (Center for Quality and Planning)  
Paul J. Hutta			    (Penn State Abington College)      
Janet Fay Jester		    (Department of Mathematics)        
Shyam S. Sethuraman		    (College of Communications)        
Eric R. White			    (Division of Undergraduate Studies)
Dawn M. Zimmaro			    (University Testing Services)      


The Commission met first as a group on February 27, 2001. It organized
itself almost immediately into four informal working groups to
investigate and report out in various key areas:

* Benchmarking - with CIC and other institutions
* Data analysis - assessment of placement and student success at Penn State
* Research - the literature on placement methods, retention, math testing,
 and related matters
* Web-based approaches - evaluating opportunities and issues associated with
 web-based testing.


                            Benchmarking

The Commission gathered information from CIC and other peer institutions,
as well as from schools dissimilar to Penn State, such as smaller
universities and private or community colleges.  The Commission especially
sought information on computer-based testing.

Benchmarking results were mixed, probably because testing is only one of
many factors - such as the range of incoming student abilities, the nature
of advising, and the sequencing of math courses - that make the total
package of math placement and education unique at any particular college or
university.

In the Big Ten, Ohio State, Michigan, Michigan State, and Minnesota are
doing placement on the web.  Although the programs are new enough that
little information on their success is available, the early reports from
Minnesota and Michigan State are generally positive, while Ohio State has
experienced scheduling difficulty because more students are placing into
calculus with web testing than with paper-and-pencil testing.

Other Big Ten schools are experimenting with web-based testing for
placement or early diagnostics, including Wisconsin, Northwestern, and
Purdue.  Nebraska is preparing to go to web-based testing, while Illinois,
Texas at Austin, MIT and others including Pitt and Temple still use
paper-and-pencil tests.  The purposes of the on-line and/or computer-based
tests vary from school to school, as does whether the tests are freely
accessible on the web or taken only at proctored sites.  There are other
differences as well: time limits, opportunity for retakes, the use of
calculators, and so on.

Web-based testing is available at some schools for diagnostic purposes but
not for placement. At others, proctored computer-based testing is available
as an optional alternative to pencil-and-paper.  Some institutions compare
the results of placement testing to math SAT scores or other measures to
judge validity and reliability.  At some schools, placement is binding; at
others, the results are used only to provide recommendations on course
selection.

Most placement tests have been created in-house by the individual
university and tailored to its courses, although some schools are using
tests purchased from ETS or ACT.  The commercial products tend to be used
by community colleges, in particular; they offer adaptive capabilities and
often cover a number of subjects.

Virginia Tech reported a pre-calculus (probably equivalent to Penn State's
Math 22) success rate -- defined as the proportion of students earning a C
or better -- of 54 percent in 1996.  Virginia Tech subsequently moved to an
"emporium format" and improved the rate to 73 percent by 1998.  (Very
briefly, the Virginia Tech math emporium is a technology-intensive approach
that emphasizes 24/7 access to learning resources, active learning, team
teaching, modular design of courses and materials, and continuous
improvement.) Penn State's data will be explored in the following section,
but the comparable success rate for Math 22 students at University Park
(the best Penn State comparator to Blacksburg) is 57 percent.

                           Data Analysis

Members of the Commission reviewed considerable data in some detail, so
generalizations are tricky.  Nonetheless, it is fair to say that many
aspects of the placement procedures and math sequences basically work well.
The Math Department's small-section calculus initiative has been very
successful, improving student success rates from about 45 percent to about
66 percent.

The drop-in-sequence option, through which a student struggling in, say,
Math 22 is allowed to drop to Math 21 early in the semester, also is
valuable; about 50-60 students take advantage of this option in a given
semester for all math courses.  On the other hand, a situation in which two
to three percent of Math 22 and Math 140 students need to drop in sequence
also highlights an opportunity for improvement.

Course-Taking Patterns

The Commission examined course-taking patterns for three different
categories of courses: calculus (primarily Math 110, 111, 140, 141),
pre-calculus (primarily Math 21, 22, 26) and general math (primarily Math
17, 35).  Table 1 shows the course-taking patterns for 5,763 Spring 2000
baccalaureate degree graduates (based on Registrar's data for these
students, going back ten years).


                               Table 1.
      Type of Math Courses Completed by Spring 2000 Graduates

         University-wide        University Park        Campus colleges
Calc      3157  (54.8%)          1731  (61.4%)          1426  (48.4%)
Precalc   2356  (40.9%)           759  (26.9%)          1597  (54.2%)
General    690  (12.0%)           260  ( 9.2%)           430  (14.6%)
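The University-wide percentages in Table 1 are each category count divided by the full 5,763-graduate cohort. A minimal sketch of that arithmetic (counts taken from the table; the overlap observation is an inference from the totals):

```python
# Percentages in Table 1's University-wide column: category counts
# divided by the 5,763 Spring 2000 baccalaureate graduates.
graduates = 5763
counts = {"Calc": 3157, "Precalc": 2356, "General": 690}

for category, n in counts.items():
    print(f"{category}: {n / graduates:.1%}")

# The counts sum to 6,203, more than the 5,763 graduates, so the
# categories overlap: a student may appear in more than one row.
# That is why the percentages total more than 100%.
```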

Because of data such as those in Table 1, the Math Commission's
analyses have dealt with the full range of math courses. In other
words, the Commission members' concerns include but are not limited to
the more advanced calculus sequences.

Not all Penn State students have an interest in math and not all are
in math-intensive majors. Table 2 shows the proportion of these 5,763
students taking calculus or pre-calculus, and those who satisfied the
general education quantification requirement in some other way (for
example, with logic or computer science or statistics courses).

                               Table 2.
         Spring 2000 Graduates: GQ Courses Completed

Only non-calculus/non-pre-calc     17%
Some other mix                     74%
Only calculus/pre-calc              6%
None (e.g., transfer students)      2%

The 17 percent figure illustrates that math placement is, in effect, a 
non-issue for a sizable minority of Penn State undergraduates.

Placement Testing and Success Rates

The Commission analyzed in particular detail five years of data
(1995-96 through 1999-2000) on students' math grades in relation to
FTCAP test scores. The Commission defined "success" as a C or better,
and included students who withdrew from math courses in the analysis.

The data show that scoring and placement work reasonably well on the
whole.  For example, the overall success rate was 68 percent for Math
140 and 61 percent for Math 22. Although peer data are not extensive
nor exactly comparable, these overall Penn State rates seem more or
less consistent with the experience of other schools.
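The success-rate metric can be made concrete with a short sketch. The grade list and the exact roster of passing grades below are invented for illustration; the report specifies only "C or better," with withdrawals included in the denominator:

```python
# Hypothetical illustration of the success-rate metric used in this
# report: "success" = grade of C or better; withdrawals ("W") stay in
# the denominator rather than being excluded.
PASSING = {"A", "A-", "B+", "B", "B-", "C+", "C"}  # assumed grade scale

def success_rate(outcomes):
    """Fraction of enrollments ending in a C or better.

    `outcomes` is a list of final grades, including "W" for
    withdrawals, which count against the rate.
    """
    successes = sum(1 for g in outcomes if g in PASSING)
    return successes / len(outcomes)

# Invented sample data, for illustration only:
sample = ["A", "B", "C", "C+", "D", "F", "W", "B-", "C-", "W"]
print(f"{success_rate(sample):.0%}")  # 5 of 10 enrollments -> 50%
```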

                              Table 3.
    Test Subscores and Related Success Rates for 1999-00

	Math 140        Earned C or     Math 22         Earned C or
	Relevant        Better in       Relevant        Better in
	Subscore        Math 140        Subscore        Math 22
	0-11            49%
	12-13           55%             0-6             41%
	14-25           68%             7-12            56%
	26-34           92%             13-19           69%
	Total           68%             Total           61%

Table 3 illustrates, at a highly summarized level, several threads
that the Commission also observed in more detailed data. Placement
testing works especially well for students who are strong in math; it
does not work as well for weaker students. Placement testing works
better for upper-level calculus courses than lower- level math
courses. While Table 3 does not reveal as much about remediation as
about testing, it suggests that remediation may work less well than
placement. Other data support that idea; for example, the success rate
for students who place directly into Math 140 is 78 percent, but the
success rate for Math 140 is only 46 percent for students who first
completed Math 22. (This is also consistent with findings from the
research literature, reviewed later in this report.)

The Commission also examined information for all of the main pre-calculus
and calculus courses, and for all of Penn State's campus colleges, and there
are significantly different patterns in the data for University Park and
other locations.  For example, in direct placement situations, University
Park students are more successful than Commonwealth College students in Math
140 (73 percent versus 61 percent) and Math 110 (71 percent versus 63
percent). However, direct placement students are less successful at
University Park in Math 22 (57 percent versus 61 percent). Remediated
students are generally better off at a campus college than at University
Park in terms of eventual success rates in targeted, higher-level courses.

Remediation works less and less well as students need more and more
remediation.  At University Park, the Math 140 success rate for students who
started with Math 22 is 41 percent; the Math 140 success rate for students
who started with Math 21 is 22 percent.  The relationship is similar for
other course sequences and other locations.  In short, placement and test
scoring seem to work reasonably well; remediation is not a completely
successful substitute for strong high school math preparation.



                              Research

The current test administered by Penn State is based on the model developed
by the Mathematical Association of America (MAA) in the early 1990s. The MAA
examinations have been administered by many academic institutions
nationally.  Test design focused on measuring the probability of success -
defined as the probability of earning a C or better in calculus.  The MAA
examinations are in paper-and-pencil format, and are administered in
secured, proctored settings.  Observers agree that these tests predict
calculus success fairly well, and that they have not been as effective in
predicting success in college algebra and trigonometry.  Adequate studies
are lacking, however; in any case, in the year 2000 the MAA discontinued
production of the tests.  The decision was in part motivated by the
acknowledgement that placement testing should be as sensitive as possible to
the academic situation at each institution.  It is difficult for a single
test format to serve unique, diverse institutional needs.


Nationally, college and university pre-calculus curricula have been
implemented - reluctantly in some cases - to remediate what colleges
perceived as an instructional deficiency for which high schools were
primarily responsible.  Because of this perception, the nature and structure
of these efforts vary widely across colleges and universities. For example,
in some universities of the SUNY system, the remedial curriculum has been
discontinued, effectively shifting this instructional responsibility to high
schools.

Pragmatically, effectiveness in remedial mathematics is highly linked with
courses having a very specific and small set of goals.  This means that
courses attempting to address a myriad of deficiencies, or poorly defined
deficiencies, are more likely to fail (as measured by student success rates)
than courses that target clear and focused goals ("Remedial Education in
Colleges and Universities: What's Really Going On?" Jamie P. Merisotis and
Ronald A. Phipps, The Review of Higher Education, Fall 2000, Volume 24, No.
1, pp. 67-85).

                          Web-based Approaches

University Testing Services, the Division of Undergraduate Studies, and the
Department of Mathematics collaborated to develop and offer a practice
version of the math placement exam via the web to incoming first-year
students.  Letters invited 1,850 students who were Fall 2001 paid accepts as
of December 2000 to participate in the web exam in early Spring 2001.  Of
those incoming students, 350 actually attempted the web-based test.

The test required students to log in with their social security number.  The
test had 72 items parallel (that is, similar in content and difficulty) to
the items on the paper-and-pencil placement test.  The test was timed, to
allow 70 minutes in total.  Students received subscores and other
feedback (such as a list of suggested topics where more study was indicated,
based on the FTCAP advising cutoffs and the FTCAP advising guide).  These
students were still required to take the paper-and-pencil version of the
FTCAP test.

In the pilot study results, all four math subscores (basic math, algebra I,
algebra II, and trigonometry) and the total math test score were positively
and significantly correlated with math SAT scores.  High school grade point
average did not correlate as well with the math test scores, but this should
be expected; a strong high school g.p.a. does not necessarily mean that a
student ever took trigonometry or calculus.  The test did not discriminate
well between University Park and campus college admits; again, this is as
expected; high school g.p.a. is the primary variable used in the prediction
model which refers students to campuses and, as already noted, g.p.a. is not
necessarily a valid measure of high school math preparation.  Math
Commission members reached the following conclusions from the pilot
web-testing program.

1. Based on comparing the web results with a control group and with student
   scores on the FTCAP test, it does not appear that students cheated on the
   web test.

2. There were strong correlations between the web test and the FTCAP test
   scores, suggesting that the web test is a valid measure of performance on
   the FTCAP test.

3. Students who took the web test generally did better on the FTCAP test
   than students who did not take the web test.  This might indicate that
   the web test may have helped students to study and review material before
   taking the FTCAP test.

4. Students who had low to average SAT math scores were most likely to show
   an improvement from the web test to the FTCAP test; for these students,
   there was a statistically significant (although small) gain.

5. Students who had a high SAT score did well on both the web and FTCAP
   tests.

6. The web test scores were generally lower, and consequently placement
   action based on the web test scores would lead to placement in lower math
   courses.

7. The greatest variation between the web and FTCAP test scores in terms of
   placement was in the mid-range math courses such as Math 21, 22, and 26.
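Conclusions 1 and 2 above rest on score correlations between paired tests. A minimal sketch of that computation, with invented score pairs (the pilot's actual data are not reproduced here):

```python
# Sketch of the pilot study's validity check: correlating one web-test
# subscore with math SAT scores.  All score data below are invented.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

algebra_subscore = [4, 7, 9, 11, 12, 15, 17, 18]             # invented
math_sat         = [450, 510, 540, 580, 600, 650, 690, 710]  # invented

print(f"r = {pearson(algebra_subscore, math_sat):.2f}")
```

A correlation near 1 would indicate, as in the pilot, that the web instrument tracks the established measure closely.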

There are obvious limitations to interpreting the results of this pilot test
- for example, the participating students self-selected into web-based
testing, and the students choosing to participate tended to have relatively
high math SAT scores - so the Math Commission hesitates to reach premature
conclusions.  Nonetheless, at this point the pilot provides an informative,
positive basis for continued consideration and evaluation of web-based math
testing at Penn State.

                        Guiding Principles

The members of the Math Commission agree in principle on many aspects of
desirable future directions for math placement at Penn State.  The processes
in place certainly work, and while there may be opportunities for
improvement, any changes should build upon, and not damage, the current
system.

The lack of security (whether through proctoring, authentication, or some
other means) inherent in on-line placement testing is a main reason why this
method of testing has not been embraced by the mathematics community.
Security is an important issue for placement testing, especially in
circumstances in which placement is binding.

It would be desirable for placement testing to be more diagnostic - that is,
to provide feedback to incoming students on their mastery of various math
subjects (algebra, geometry, trigonometry, and calculus), so that they can
better address any weaknesses, possibly even before enrolling at Penn
State.

Non-secure web-based testing can be valuable and appropriate as a diagnostic
tool that provides students or prospective students with fast, useful
feedback.  Security is not an issue for diagnostic testing.

It would be desirable for testing to be more adaptive - that is, to be more
responsive to the ability of individual students taking the test, and adjust
questions presented to each student based on prior correct or incorrect
responses.

Placement testing should be considered as one element that works in
combination with a strong advising system and an array of math courses and
sequences.  The goal for the system is to help inform and support students
in their choice of courses, majors, and career options (including those that
do not require calculus), and to help them to do well.

Any changes should preserve the beneficial aspects of the drop-in-sequence
and other measures that help students to succeed in their first Penn State
math course.

Testing should produce reliable, accurate, and useful measures, while taking
advantage of opportunities to improve existing processes, possibly reducing
cost and time commitments.

                  Specific Recommendations

(1) For the present, maintain the existing pencil-and-paper test procedure,
    or some comparably secured option, for testing and placing incoming
    students.

Most of the impacts, both positive and negative, of performing placement
testing via an on-line (unsecured) test, are speculative.  It is only
prudent to maintain a system that works.  In the meantime, alternative
processes can be developed, prototyped, and thoroughly evaluated.

(2) Develop and offer a web-based diagnostic test - to include both enhanced
    diagnostic capabilities and an embedded (traditional) placement test -
    to all incoming students.

The members of the Math Commission all agree that the ability to provide
early, formative feedback to prospective students is a worthwhile and
compelling goal.  This recommendation allows Penn State to move forward with
web-based testing without abandoning the validity of the existing
instrument, while providing an opportunity to further evaluate online
testing.

By embedding the conventional 50-question placement test in the new,
diagnostic instrument, Penn State will be able to expand pilot testing,
compare the results and cutoffs for the web-based test with those from
pencil-and-paper testing, and identify comparable scores/cutoffs for
placement using the new, expanded question bank.

The cost of developing the test for diagnostic purposes is independent of
the test's eventual or potential use for placement.  Providing appropriate
technical service to users and collecting and compiling the test results
would probably be done for informational purposes in any case.

Issues of when the test would be taken and by whom (by applicants, by
offers, by paid-accepts) would be determined as part of implementation, as
would details about how often the test could be taken, at what intervals,
with what imposed time constraints, and so on.

(3) Complete a systematic, comprehensive, data-based evaluation of the
    effectiveness of a web-based placement testing prototype at Penn
    State.

It can be argued that existing helpful interventions such as the
drop-in-sequence are founded on controlled enrollment based on binding
placement, and hence on secure testing.  This line of reasoning assumes that
cheating will likely occur in an unsecured setting, causing high numbers of
misplaced students and overwhelming the Math Department's ability to
reassign them.  These are legitimate concerns.  On the other hand, the need
for a drop-in-sequence option is itself evidence that current placement
practices are not perfect.

This recommendation is offered because the Math Commission wants to preserve
the strengths of the existing system, while remaining open to potential
improvements.  Therefore, it would be prudent to have a better idea of what
is likely to happen by having traditional results available during some
initial trial period.  This will ensure that comparative data are available
to feed a comprehensive assessment process and answer a wide variety of
questions:

	Can problems with technology (incompatibility of home computers,
	etc.) be effectively overcome?

	Is the distribution of scores for the unsecured web test
	significantly different from the proctored, pencil-and-paper
	version? What is the incidence of significantly different
	results for individual students?

	To what extent does student performance improve as a result of
	early feedback from the web-based tests?

	Can the ability to add an adaptive element to testing make placement
	decisions even more accurate?

(4) Eliminate testing redundancy or unneeded security, and explore interim
    collaborations for computer-based testing options.

In a typical year, about 13,000 incoming students take the placement test.
Of those, 15 to 20 percent satisfy the quantification category of general
education with courses such as logic, probability, and introductory computer
programming, or general math having no prerequisites.  Thus, for some 3,000
incoming students taking the placement test each year, whether it is binding
or not is of little consequence.  For students very committed to entering
certain career paths, further testing beyond the web-based version is
unnecessary, or could be accomplished later if their aspirations changed.

Another 10 to 20 percent of students might be accommodated in University
Park or Campus College computer labs to take the exam under secure
conditions, while agreements might be struck with some larger high schools
to offer the exam to students there.  This approach would also provide data
by which to compare the results for individual students taking the same test
under both proctored and un-monitored conditions.

Conservatively, between one-third and one-half of the current
paper-and-pencil testing could be eliminated without affecting the process
of placing students in the math sequences at all. 
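The arithmetic behind that estimate can be sketched as follows, using the report's own figures (about 13,000 students tested per year, some 3,000 outside the math sequences, and 10 to 20 percent accommodated in computer labs):

```python
# Back-of-envelope check of the "one-third to one-half" estimate,
# using figures stated elsewhere in this report.
total_tested = 13_000   # incoming students tested in a typical year
non_sequence = 3_000    # students not headed into math course sequences
lab_share_lo, lab_share_hi = 0.10, 0.20  # share testable in computer labs

low  = (non_sequence + lab_share_lo * total_tested) / total_tested
high = (non_sequence + lab_share_hi * total_tested) / total_tested
print(f"{low:.0%} to {high:.0%}")  # 33% to 43%, roughly the stated range
```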

Finally, the Commission encourages that all the academic units continue to
evaluate the needs of their majors for study in mathematics, statistics,
computer science and other competencies falling under the broad term of
quantification. Input from the various disciplines will be essential to the
career success of graduates and the design of curricular offerings
appropriate for students on different career tracks. It will also be crucial
to the development of formative feedback to students emanating from the
proposed diagnostic testing and to the effectiveness of the advising system.



IMPLEMENTATION PROCESS from Rodney A. Erickson


Date:      November 15, 2001
From:      Rodney A. Erickson
To: 	   University Academic Measurement Committee
Subject:   Math Placement

I recently appointed a nine-member, University-wide commission to recommend
new procedures for math placement that take a diagnostic approach,
preferably using the web. While the Commission endorsed the ideas associated
with that approach, they asked that it be done in parallel with the existing
system. I also asked them to provide suggestions on how to deal with the
related implementation issues. The commission's analyses and four
accompanying recommendations contribute to a general understanding of math
placement for first-year Penn State students, but are silent on the
implementation path I believe that we should take. Accordingly, I would like
us to move forward by taking the following steps.

Oversight: I am asking the UAMC to provide active oversight to the
implementation process, which the Office of Undergraduate Education
(University Testing and Division of Undergraduate Studies) and the Office of
Administrative Systems will carry out.

Pilot Test on the Web: Following the Math Commission's recommendation, an
immediate pilot should be implemented in which all out-of-state students,
inclusive of international students, take their fall 2002 placement exams in
math, English, and chemistry on the Web. This exam should eliminate the
need for the tested students to take parallel paper and pencil exams or to
feel pressured by not having them completed when other students do.
Moreover, it will eliminate long trips for the few who now come to
Pennsylvania for testing.

Useful Advising: The report indicates that we possess the potential to
better incorporate placement testing into academic advising. With early
feedback, students may be able to address remediation needs prior to first
year enrollment, or early in their college experience. Appropriate feedback
also enables greater accuracy in student academic planning. Accordingly, I
am asking DUS to begin work on diagnosis and advising messages and to give
particular attention to the assessment of the Web-based placement pilot and
its implications for enhanced advising and student success.

In sum, the report indicates that appropriate placement, productive academic
advising, self-remediation, and useful diagnostics benefit significantly
from timely feedback. Web-based testing will be an increasingly valuable
tool to achieve these benefits, and your active involvement in seeing that
this happens this year will be respected and appreciated for many years to
come.

The recommendations of the Math Commission are much appreciated, and I look
forward to reviewing the implementation of the protocols discussed above. I
want to thank members of UAMC for their participation in this important
effort.

cc:	Math Commission Members
	J. Gary Augustson
	Daniel J. Larson
	Susan Welch
	John J. Cahir