Adrienne Decker
Department of Computer Science & Engineering
University at Buffalo
adrienne@cse.buffalo.edu
How Students Measure Up: Creation of an Assessment Tool for CS1

Introduction

Computing Curricula 2001 (CC2001) (Joint Task Force on Computing Curricula, 2001), like the curricula that preceded it, does not provide faculty with instructions for implementing the suggestions and guidelines it contains.  Faculty are left to take their own approaches to the material and to invent assignments, lab exercises, and other teaching aids for the specific courses the curriculum outlines.  Whenever a new curricular device is conceived, the natural next step is to investigate whether the innovation actually helps students' understanding of the material.  The effects of some of these innovations have previously been measured by lab grade, overall course grade, resignation rate, or exam grades (Cooper, Dann, & Pausch, 2003; Decker, 2003; Ventura, 2003).

The problem with using these kinds of metrics in a study is that they are often not proven reliable or valid.  Reliability, the degree of consistency among test scores, and validity, the relevance of a metric to the particular skill it is intended to assess, are both essential whenever the results of these metrics are to be analyzed (Kaplan & Saccuzzo, 2001; Marshall & Hales, 1972; Ravid, 1994).
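
To make the two concepts concrete, the sketch below (using hypothetical data and names, not drawn from any study cited here) estimates reliability as the correlation between two administrations of the same test, and criterion validity as the correlation between test scores and an independent measure of the skill being assessed:

    import numpy as np

    # Hypothetical scores for ten students: two administrations of the
    # same test plus an external criterion (e.g., a graded programming task).
    test_a = np.array([78, 85, 62, 90, 71, 88, 55, 94, 67, 80])
    test_b = np.array([75, 88, 60, 93, 70, 85, 58, 91, 65, 82])
    criterion = np.array([72, 90, 58, 95, 68, 84, 50, 96, 60, 79])

    # Reliability: consistency of the test with itself across administrations
    # (estimated here as a simple test-retest correlation).
    reliability = np.corrcoef(test_a, test_b)[0, 1]

    # Criterion validity: agreement between the test and an independent
    # measure of the skill the test claims to assess.
    validity = np.corrcoef(test_a, criterion)[0, 1]

    print(f"test-retest reliability: {reliability:.2f}")
    print(f"criterion validity:      {validity:.2f}")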

Given all the claims of innovation in the CS1 curriculum, we need a way of assessing students' comprehension of the core CS1 material.  The goal of this work is to create a reliable and validated assessment tool for CS1.  The tool will assess the knowledge of a student who has taken a CS1 course using one of the programming-first approaches described in CC2001.  The assessment should be independent of the approach used for CS1 and should not rely on testing a student's syntactic ability with a particular language.

Theoretical Background & Previous Research in the Area 

Many have argued over the best way to teach introductory programming, particularly with regard to language and paradigm.  Back in the days of heavy Pascal use, Pattis (1993) debated the appropriate point in the curriculum at which to teach subprograms.  Moving forward a few years, Culwin (1999) argued for how to appropriately teach object-oriented programming, followed by a strong course outline for an objects-first CS1 advocated by Alphonce and Ventura (2002; Ventura, 2003).  For these approaches, as for others, there may be strong anecdotal evidence of effectiveness, but little empirical evidence has been presented about their real effect on learning the material appropriate to CS1.

The need for accurate assessment tools reveals itself again in the literature on predictors of success in CS1 (Evans & Simkin, 1989; Hagan & Markham, 2000; Kurtz, 1980; Leeper & Silver, 1982; Mazlack, 1980; Wilson & Shrock, 2001).  Each of these studies identified different factors as possible predictors of success in a programming-first CS1 course, yet in no case were the measures of success validated.

There has been one documented attempt at creating an assessment for CS1.  A working group at the 2001 Conference on Innovation and Technology in Computer Science Education (ITiCSE) created a programming test that was administered to students at multiple institutions in multiple countries (McCracken et al., 2001).  The group's results indicated that students coming out of CS1 did not have the skills the test assessed.  Yet even this study was flawed, as its own participants recognized: they pointed out flaws in the presentation of the problems and in the instructions for administering the exercises.  So despite the study's many strengths, there is still room to build an assessment tool truer to the current flavors of CS1 as described in CC2001.

Goals of the Research

The ultimate goal of the research is to create a validated and reliable metric for assessing students' level of knowledge at the completion of a programming-first CS1.  The test should be language and paradigm independent.  The test will then be available not only to assess student progress, but also to gauge particular pedagogical advances and their true value in the classroom.

The current hypotheses are:

Current Status

At the time of this writing, a proposal has been prepared and is undergoing revision by my committee.  An analysis of CC2001 is in progress to determine the appropriate topical coverage for the tool.

Interim Conclusions

Determining the target audience for the tool has been a challenge.  Looking at the various sanctioned methodologies for CS1 given in CC2001, much care was taken to determine where they overlapped.  The programming-first approaches have much in common; no such overlap exists between the non-programming-first approaches and the programming-first approaches, or even among the non-programming-first approaches themselves.  Therefore, the decision was made to target the assessment tool at the programming-first approaches only.

The test will be validated using an expert-review methodology.  After the tool is prepared, a pool of experts in the area will be asked to assess the test's appropriateness for students and the clarity and difficulty of its questions.
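
One common way to summarize such expert ratings, offered here only as an illustrative sketch rather than the method this work has committed to, is an item-level content validity index: the proportion of experts who endorse each question.  Items falling below a chosen threshold are flagged for revision (the ratings and threshold below are hypothetical):

    # Hypothetical expert-review tally: each of five experts rates each
    # exam item as relevant and clear (1) or not (0).
    ratings = {
        "item_01": [1, 1, 1, 1, 0],
        "item_02": [1, 1, 1, 1, 1],
        "item_03": [1, 0, 1, 0, 1],
    }

    # Item-level content validity index (I-CVI): the proportion of experts
    # endorsing each item.  Items under the (hypothetical) threshold are
    # flagged for revision before the field test.
    THRESHOLD = 0.8
    for item, votes in ratings.items():
        i_cvi = sum(votes) / len(votes)
        verdict = "revise" if i_cvi < THRESHOLD else "keep"
        print(f"{item}: I-CVI = {i_cvi:.2f} -> {verdict}")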

The exam will be field-tested as a final exam for a CS1 course.  After the exam has been administered, reliability will be computed using one of the standard statistical methods, such as odd-even or split-halves (Kaplan & Saccuzzo, 2001; Marshall & Hales, 1972; Ravid, 1994).
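
As a sketch of what that computation looks like (the item scores below are hypothetical; the real analysis would use data from the administered exam), the two halves of the test are scored separately and correlated, and the Spearman-Brown correction is then applied because each half is only half the length of the full test:

    import numpy as np

    # Hypothetical item-level results: rows are students, columns are exam
    # items (1 = correct, 0 = incorrect).
    scores = np.array([
        [1, 1, 0, 1, 1, 0, 1, 1],
        [0, 1, 0, 0, 1, 0, 1, 0],
        [1, 1, 1, 1, 1, 1, 1, 1],
        [0, 0, 0, 1, 0, 0, 1, 0],
        [1, 0, 1, 1, 1, 0, 1, 1],
        [1, 1, 1, 0, 1, 1, 0, 1],
    ])

    # Odd-even split: total each student's score on the odd-numbered
    # items and on the even-numbered items.
    odd_half = scores[:, 0::2].sum(axis=1)
    even_half = scores[:, 1::2].sum(axis=1)

    # Correlate the two half-test scores...
    r_half = np.corrcoef(odd_half, even_half)[0, 1]

    # ...and apply the Spearman-Brown correction, since each half is only
    # half as long as the full test.
    r_full = 2 * r_half / (1 + r_half)

    print(f"half-test correlation: {r_half:.2f}")
    print(f"estimated full-test reliability: {r_full:.2f}")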

Open Issues

Open issues for this research include

Current Stage in Program of Study

The proposal defense is scheduled for January, with data collection to begin in the spring semester.

What I Hope to Gain From Participation in Doctoral Consortium

I hope to gain input and feedback on my research ideas.  I am also hoping for informed guidance on the approach I am taking and suggestions on how to proceed.

Bibliographic References

  1. Alphonce, C. G., & Ventura, P. R. (2002). Object orientation in CS1-CS2 by design. Paper presented at the 7th annual conference on Innovation and Technology in Computer Science Education, Aarhus, Denmark.  

  2. Cooper, S., Dann, W., & Pausch, R. (2003). Teaching objects-first in introductory computer science. Paper presented at the 34th SIGCSE technical symposium on Computer Science Education, Reno, Nevada.

  3. Joint Task Force on Computing Curricula. (2001). Computing Curricula 2001: Computer Science. IEEE Computer Society & Association for Computing Machinery. Retrieved October 30, 2003, from http://www.computer.org/education/cc2001/final/index.htm

  4. Decker, A. (2003). A tale of two paradigms. Journal of Computing Sciences in Colleges, 19(2), 238-246.

  5. Evans, G. E., & Simkin, M. G. (1989). What best predicts computer proficiency? Communications of the ACM, 32(11), 1322 - 1327.

  6. Hagan, D., & Markham, S. (2000). Does it help to have some programming experience before beginning a computing degree program? Paper presented at the 5th annual SIGCSE/SIGCUE conference on Innovation and technology in computer science education.

  7. Kaplan, R. M., & Saccuzzo, D. P. (2001). Psychological Testing: Principles, Applications and Issues (Fifth ed.). Belmont, California: Wadsworth/Thomson Learning.

  8. Kurtz, B. L. (1980). Investigating the relationship between the development of abstract reasoning and performance in an introductory programming class. Paper presented at the 11th SIGCSE technical symposium on Computer Science Education, Kansas City, Missouri.

  9. Leeper, R. R., & Silver, J. L. (1982). Predicting success in a first programming course. Paper presented at the 13th SIGCSE technical symposium on computer science education, Indianapolis, Indiana.

  10. Marshall, J. C., & Hales, L. W. (1972). Essentials of Testing. Reading, Massachusetts: Addison-Wesley Publishing Co.

  11. Mazlack, L. J. (1980). Identifying potential to acquire programming skill. Communications of the ACM, 23(1), 14 - 17.

  12. McCracken, M., Almstrum, V., Diaz, D., Guzdial, M., Hagan, D., Kolikant, Y. B.-D., Laxer, C., Thomas, L., Utting, I., & Wilusz, T. (2001). A multi-national, multi-institutional study of assessment of programming skills of first-year CS students. SIGCSE Bulletin, 33(4), 1 - 16.

  13. Pattis, R. (1993). The "Procedures Early" approach in CS1: A heresy. Paper presented at the 24th SIGCSE technical symposium on Computer science education, Indianapolis, Indiana.

  14. Ravid, R. (1994). Practical Statistics for Educators. Lanham: University Press of America.

  15. Ventura, P. R. (2003). On the origins of programmers: Identifying predictors of success for an objects-first CS1. Unpublished doctoral dissertation, University at Buffalo, SUNY, Buffalo.

  16. Wilson, B. C., & Shrock, S. (2001). Contributing to success in an introductory computer science course: A study of twelve factors. Paper presented at the 32nd SIGCSE technical symposium on Computer Science Education, Charlotte, North Carolina.