Interest in end-of-year accountability exams has increased
dramatically since the passage of the No Child Left Behind (NCLB)
Act in 2001. With this
increased interest comes a desire to use student data collected
throughout the year to estimate student proficiency and predict how
well students will perform on end-of-year exams. In this paper we use
student performance on the Assistment System, an on-line mathematics
tutor, to show that replacing percent correct with an Item Response
Theory (IRT) estimate of student proficiency leads to better fitting
prediction models. In addition, other tutor performance metrics are
used to further increase prediction accuracy. Finally, we calculate
bounds on prediction error to obtain an absolute benchmark against
which our models can be compared.
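
As a rough illustration of the central idea, the sketch below (not the authors' code) estimates each student's proficiency under a Rasch (one-parameter IRT) model and compares it with percent correct as a predictor of an end-of-year score. The synthetic data, the assumption of known item difficulties, and the least-squares prediction model are all illustrative choices, not details taken from the paper.

```python
# Minimal sketch: replace percent correct with a Rasch (1PL IRT) ability
# estimate as the predictor of an end-of-year exam score. All data here
# are synthetic and the model choices are illustrative assumptions.
import numpy as np

def rasch_ability(responses, difficulties, n_iter=25):
    """Maximum-likelihood Rasch ability estimate for one student.

    responses    -- 0/1 array of item outcomes
    difficulties -- array of (assumed known) item difficulties
    """
    theta = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(theta - difficulties)))  # P(correct)
        grad = np.sum(responses - p)           # d log-likelihood / d theta
        hess = -np.sum(p * (1.0 - p))          # second derivative
        theta -= grad / hess                   # Newton-Raphson step
        # Perfect (all-correct / all-wrong) records have no finite MLE,
        # so clamp theta to keep the iteration stable.
        theta = np.clip(theta, -4.0, 4.0)
    return theta

# Synthetic tutor log: 100 students x 30 items (illustrative only).
rng = np.random.default_rng(0)
true_theta = rng.normal(size=100)
difficulties = rng.normal(size=30)
probs = 1.0 / (1.0 + np.exp(-(true_theta[:, None] - difficulties[None, :])))
responses = (rng.random(probs.shape) < probs).astype(int)

# Two candidate predictors of the end-of-year score.
percent_correct = responses.mean(axis=1)
theta_hat = np.array([rasch_ability(r, difficulties) for r in responses])

# Hypothetical end-of-year scores driven by true proficiency plus noise.
exam_score = 50 + 10 * true_theta + rng.normal(scale=3.0, size=100)

# Ordinary least-squares fit for each predictor; compare residual error.
for name, x in [("percent correct", percent_correct),
                ("IRT theta", theta_hat)]:
    slope, intercept = np.polyfit(x, exam_score, 1)
    rmse = np.sqrt(np.mean((exam_score - (slope * x + intercept)) ** 2))
    print(f"{name:15s} RMSE = {rmse:.2f}")
```

In this toy setup every student answers the same items, where the raw score is a sufficient statistic for Rasch ability; the practical advantage of the IRT estimate arises when, as in a tutoring system, students attempt different item sets and IRT places them on a common proficiency scale that percent correct cannot provide.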