
Tasks, Tests, and Teacher Candidates:
Lessons Learned While Designing and Implementing a Teacher Education Unit Assessment System

 

Larry G. Daniel, Terence Cavanaugh, and Cathy A. O'Farrell

University of North Florida

_______________

Paper presented at the annual meeting of the Association of Teacher Educators, Dallas, TX, February 15-18, 2004.

Abstract

The National Council for Accreditation of Teacher Education (NCATE) has developed professional standards for accreditation of academic units offering professional education programs. NCATE requires that each unit have an assessment system for collecting and analyzing data on teacher education candidates and unit operations and programs. The present paper summarizes efforts to date in creating and utilizing a unit assessment system in the College of Education and Human Services at the University of North Florida.

 

Tasks, Tests, and Teacher Candidates: Lessons Learned While Designing and Implementing a Teacher Education Unit Assessment System

The purposes of the present paper are to describe procedures used in developing and implementing the teacher education unit assessment system at one institution and to share observations regarding creation of assessment tools, design of the system, and utilization of system data. We illustrate how a teacher education unit assessment system can be effectively designed and utilized to (a) inform teacher educators about the quality of teacher education candidates, (b) develop plans for remediation of candidates and improvement of programs, and (c) make decisions about the operation of a teacher education unit.

Review of the Literature

Teacher education programs and curricula have become increasingly aligned with state and professional standards and benchmarks for teacher and student performance (Ambach, 1996; Weisenbach, 2000). Focused heavily on program and candidate outcomes (as opposed to inputs or processes), the new standards require teacher education programs to develop assessment systems based on teacher candidate products (Denner, Salzman, & Harris, 2002). Teacher education programs must develop meticulous record-keeping systems to document the progress of candidates toward mastery of professional standards, with emphasis placed on evaluation of teacher candidate work samples (Fredman, 2002; Tomei, 2002).

Professional accrediting bodies have raised standards and implemented assessment procedures for assuring teacher candidate proficiency vis-à-vis these standards. With the release of its NCATE 2000 standards, the National Council for Accreditation of Teacher Education imposed the expectation (Standard 2) that institutions seeking initial accreditation or wishing to maintain continuing accreditation develop a unit assessment system that "collects and analyzes data on applicant qualifications, candidate and graduate performance, and unit operations to evaluate and improve the unit and its programs" (NCATE, 2002, p. 21).

As outlined by NCATE (2002), a unit assessment system should reflect the unit's conceptual framework, incorporate candidate proficiencies per professional and state standards, and utilize appropriate information technology in housing, storing, and accessing unit data. One such unit assessment system, developed and implemented at the presenters' institution of higher learning, is described and demonstrated. This system includes timelines for data collection and analysis related to candidate performance and unit operation.

To aid institutions in the development of their unit assessment systems, NCATE created a five-year transition plan for phasing in the system. By the time of the 2004 ATE annual meeting, institutions should be in year four of implementing their assessment systems. For institutions undergoing NCATE review in 2003-2004, assessments and criteria/rubrics for scoring each assessment should be fully developed, the assessment system should be fully operational, data from the assessments should be collected, and data analysis should be in progress. Developing the assessment system presented here involved creating assessment tools, designing a system for compiling data, and making decisions about ways to utilize the system data.

The University of North Florida Unit Assessment System

The College of Education and Human Services (COEHS) at the University of North Florida (UNF) has developed a versatile assessment system linking the performance of its candidates to the unit's conceptual framework, national and state standards, professional organizational standards and directives, and K-12 student learning. The system includes a comprehensive and integrated set of evaluative measures useful in monitoring candidate performance and managing the unit's operations and programs. Our system is by no means unique; indeed, it bears resemblance to various other systems developed by teacher education programs at other institutions (e.g., Harris, Salzman, Frantz, Newsome, & Martin, 2000). Nevertheless, we describe our system as one means of operationalizing standards-based assessment, in hopes that our experiences may be useful to others in the field.

System Description

The system developed and currently being implemented at UNF allows for (a) tracking of the progress of individual candidates throughout their program of study in terms of their ability to meet professional, state, and program standards related to effective teaching and learning; (b) storage and recall of data for each candidate on a host of measures and artifacts, including pre-admission assessments, critical performance task assessments, candidate portfolios, and end-of-program summative measures; (c) development of summary reports on aggregated strengths and weaknesses of candidates in each of the unit's teacher education programs; and (d) unit-wide evaluation to determine the progress of the unit in meeting its intended purposes and to provide program faculty and administrators information needed in making changes to improve the unit's performance.

Candidate data are gathered prior to admission, during each course and clinical experience included in the program of study, at specific transition points during the program, and at the time of program completion. During courses and clinical experiences, candidates are assessed on critical performance tasks identified by faculty within candidates' programs of study and designed to make decisions about candidates' level of proficiency in the knowledge, skills, and dispositions necessary to help students learn. These critical tasks are used to assess the most significant outcomes of each course, and they are linked to several sets of professional standards, including the Florida Educator Accomplished Practices (a set of 12 standards developed by the Florida Department of Education for assuring teacher candidates, upon graduation, will be prepared to enter a classroom with the minimum skills essential to succeed as a teacher) and the Florida ESOL standards (a set of standards developed by the Florida Department of Education to assure that teachers in the state's schools are adequately prepared to work with students whose first language is not English).

UNF utilizes a standard database protocol for entering results of the critical task assessments and other candidate data into the system. As candidates complete critical task assignments, the faculty member responsible for assessing the assignment reports a score to the database clerk based on a rubric designed for assessing the assignment. Once sufficient data are entered on multiple tasks across many candidates, the system allows data to be compiled, sorted, and printed out by program, by candidate, or by the critical task being assessed.
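Purely as an illustration of the protocol described above, the following sketch shows how rubric scores might be entered and then compiled by program, candidate, or task. The table layout, field names, and sample scores are invented for this example and do not represent the actual UNF database schema.

```python
import sqlite3

# Hypothetical sketch of the score-entry and reporting protocol.
# Schema and data are illustrative assumptions, not the actual UNF system.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE critical_task_scores (
    candidate_id TEXT,
    program      TEXT,
    task         TEXT,    -- critical task identifier
    standard     TEXT,    -- e.g., a Florida Educator Accomplished Practice
    score        INTEGER  -- rubric-based score reported by the faculty member
)""")

def enter_score(candidate_id, program, task, standard, score):
    """Record one rubric score, as reported to the database clerk."""
    conn.execute("INSERT INTO critical_task_scores VALUES (?, ?, ?, ?, ?)",
                 (candidate_id, program, task, standard, score))

enter_score("C001", "Elementary Ed", "Lesson Plan", "FEAP-10", 3)
enter_score("C002", "Elementary Ed", "Lesson Plan", "FEAP-10", 4)
enter_score("C001", "Elementary Ed", "Case Study", "FEAP-4", 2)

def report(group_by):
    """Compile mean scores grouped by 'program', 'candidate_id', or 'task'."""
    return conn.execute(
        f"SELECT {group_by}, AVG(score) FROM critical_task_scores "
        f"GROUP BY {group_by} ORDER BY {group_by}").fetchall()

print(report("task"))  # → [('Case Study', 2.0), ('Lesson Plan', 3.5)]
```

The same `report` call with `"program"` or `"candidate_id"` yields the other two compilations mentioned above.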

Developing the System

Development of the system is the result of a comprehensive, multi-year effort by a number of COEHS committees and initiatives. Two special task forces worked on refinement of the undergraduate core curriculum common to all of the teacher education programs. The resulting undergraduate core included five broad areas of content: instructional planning, classroom management, human development and learning, assessment, and learners with special needs. The task force efforts resulted in a set of competency statements for each of these five core curriculum areas that expressed what candidates in each program were expected to master. Core courses were redesigned to address these competency areas, and unit goals were consulted in designing a set of common assessment tasks to address the core competencies.

Faculty groups then designed critical task assessments for assuring the competence of their candidates per the Florida Educator Accomplished Practices, the Florida ESOL standards, and other relevant sets of professional standards, with educational practitioners serving in an advisory capacity, as appropriate, and with the unit's Continuing Accreditation Team (CAT) providing oversight of these efforts. The COEHS Technology Committee worked diligently during the 2001-2002 and 2002-2003 academic years to create and provide procedures for implementing the electronic database for tracking candidate outcomes. Finally, the COEHS Teacher Education Advisory Council, a unit advisory panel composed of professionals from both within and outside of the University, provided feedback on the development and scope of the unit assessment system at its regular meetings.

Figure 1 depicts the unit assessment system. Assessment occurs at three integrated levels: the individual candidate, the specific professional education program, and the overall unit. Faculty in each teacher education program assess candidate performance and evaluate all educator preparation programs on a continuous basis. In addition to assessing whether candidates have developed competencies needed to meet unit standards, the process provides an empirical basis for evaluating and continuously improving the unit's educator preparation programs. The assessment system also includes a unit evaluation process, which provides a basis for incrementally improving unit operations.

Development and refinement of the system has occurred over a period of years. As of the last NCATE continuing accreditation visit (spring 1999), the unit was utilizing candidate and program assessment measures as required by the Florida Department of Education. At the program level, assessment data were being compiled as a part of the annual Institutional Program Evaluation Plan (IPEP), which addresses performance of each initial preparation program against the state's five standards for continuing approval of teacher education programs. At the candidate level, data were being utilized at the point of program admission and graduation to determine candidate success, and progress in individual courses served as a means for documenting candidate mastery of important knowledge and skills essential to program success.

Beginning with the 2000-2001 academic year, the unit began implementing its transition plan for complying with NCATE 2000 standards for its unit assessment system. The transition plan built upon the measures and procedures in place at the time of the previous continuing accreditation visit, with elements of the system added and refined each year in accordance with the NCATE 2000 "Transition Plan for the Implementation of NCATE 2000 Standards" (http://www.ncate.org/standard/transitionplan.htm). During 2000-2001, the unit began a multi-year process of examining its undergraduate teacher education core curriculum with the aim of working toward course refinement and implementation of a set of course-based assessments consistent across all instructors teaching these courses. The unit also identified major components of a unit assessment plan that would need to be implemented over the next several years.

A multi-year plan for phasing in the assessment plan components was developed during 2001-2002. This effort was accompanied by the implementation of changes in program content mandated by the Florida Department of Education, with several programs including practicing educators in their curriculum and assessment system development processes. During 2002-2003, efforts were devoted to implementing the program-specific aspects of the assessment system, with attention given to identifying program transition point assessments and/or course-based critical tasks within each program of study to be consistently used to assess the performance of all program candidates. This process included attention to (a) scoring procedures and rubrics for documenting the performance of candidates on each task and (b) planning for the development of a computer-based system for tracking these assessments at the unit level. The present academic year (2003-2004) has seen refinement of the program-based transition point and critical task assessments. Data from these assessments are currently being used to make decisions about candidate progression through programs and to reflect on the appropriateness and fairness of the assessment measures being employed. Further, the unit's computer-based data tracking system for monitoring data from these assessments is expected to be fully operational by the end of the academic year.

Candidate Assessment

At the individual candidate level, the system features decisions about candidate performance based on multiple assessments made at admission into programs, at appropriate transition points (gateways), and at program completion. A graphic presentation of the candidate assessment procedures used by the unit is provided in Figure 2. Program faculty assess candidates' knowledge, skills, and dispositions through course-based assessments and at various decision-point program gateways. Data from these assessments are used to make decisions about candidate performance at the pre-admission, developmental, and program completion stages. As candidates progress through the educator preparation programs, they are expected to demonstrate increasingly higher levels of knowledge, skills, and dispositions as identified in the unit's conceptual framework and program knowledge bases. As feedback is given to candidates following assessments, growth is expected in the candidate's planning and delivery of instruction. The feedback given to candidates includes a review of strengths observed, concerns, and specific suggestions for developing knowledge, skills, and dispositions relative to professional and unit standards.

Course-Based Assessments

Once admitted to a program of study, the first level of candidate assessment occurs at the individual course level. Faculty in each program identify course objectives and assess the extent to which candidates accomplish these objectives. A wide variety of assessment types are used within courses to evaluate candidate knowledge, skills, and dispositions. Examples of these assessments are traditional tests and examinations, portfolios, group and individual presentations, reflective essays, lesson and unit planning activities, practicum observations, case studies, and videotape-based skill evaluations. Rubrics, checklists, and other scoring tools are used to assess candidate performance on these activities and to provide feedback to candidates. Course grades serve as one means for assuring that candidates have demonstrated competence in important course-based outcomes. Students in undergraduate programs must obtain grades of C or higher in all courses, and graduate students are typically expected to earn grades of B or higher.

At the undergraduate level, a primary feature of the unit's course-based assessment procedures is the utilization of "critical task" assessments that are required of all candidates completing a given course regardless of the instructor teaching the course or the program of study in which the candidate is matriculating. These critical task assessments are linked directly to the Florida Educator Accomplished Practices, and attention is given to utilization of multiple critical tasks for each Accomplished Practice throughout the candidate's program of study with the goal of thoroughly documenting candidate performance consistent with the depth, breadth, and intent of each practice. Success on the critical tasks is essential to candidate performance in each program course: performance on the critical tasks is weighted heavily in the course grading system and, in many cases, successful completion of all critical tasks included in a course is required for the student to receive a successful grade in that course.

Decision Point (Gateway) Assessments

In addition to course-level assessments, the candidate assessment process for each program includes decision point assessments that occur at the pre-admission (program entry), developmental/intermediate, and program completion stages. These decision point assessments are used to determine whether the candidate meets the standards required to enter the program, continue the program, and complete the program. At the pre-admission stage, information on candidate potential is examined. With the exception of an allowance for 10% of admissions by exception, candidates cannot be admitted to the unit's initial or advanced professional education programs if they do not meet admission requirements.

At the developmental/intermediate level, candidates' progress in developing the necessary knowledge, skills, and dispositions is assessed in order to make decisions about their developmental needs and, therefore, their continuation in the program. Program completion assessments are used to evaluate candidates' growth in the knowledge, skills, and dispositions identified by the program and, therefore, their potential for assuming professional responsibilities. Developmental or remediation opportunities are provided for candidates who exhibit deficiencies, but candidates who cannot satisfactorily meet the specified standard(s) following a reasonable degree of remediation are not permitted to move to the next level of the program. At the program completion stage, faculty use the collected data to assess candidates' readiness for completing the program and assuming professional responsibilities.

Assessments used at the intermediate and program completion stages include portfolios, which are used in all programs; written essays and journals, which provide evidence of candidate reflection; videotapes, which are used to assess candidates' instructional skills prior to student teaching and to assess counseling skills of counselor education candidates; observations by faculty, which are used in assessing teaching performance; ratings by clinical instructors, which are used to assess teaching performance demonstrated during field experiences; course evaluations; scores on traditional tests, including both course-based examinations and the Florida Teacher Certification Examination (state teacher licensure examination); and course and program projects.

Program Assessment

To thoroughly review each program on an annual basis, program faculty and department chairs examine findings developed through curriculum alignment audits, as well as aggregated internal data on candidate competencies and information from external sources, such as follow-up studies, candidate performance on licensure examinations, employer reports, and state program reviews. Aggregated candidate data collected at the pre-admission stage (number and qualifications of applicants by admission status) and at the intermediate and completion stages (including number of program graduates and graduation rates) are examined. Results of this program evaluation process are used for revising the program curriculum (see curriculum alignment audit below), for improving instruction, for revising field experiences, and for redesigning other components of the program to promote high levels of performance by all candidates.

Curriculum Alignment Audit

The College utilizes database technology to facilitate the program evaluation process. During the fall of 2002, the COEHS Technology Committee, in cooperation with department chairs and the Office of the Dean, used the audit criteria specified by the program groups and the Florida Educator Accomplished Practices to develop a database to track and house candidate data. The Technology Committee provided feedback on results of this curriculum audit to program coordinators, who worked with program faculty to provide clarification on program curricula. The electronic database is used to compile data from all critical tasks and program transition assessments as gathered by unit faculty. Each critical task is keyed to the Educator Accomplished Practice(s) and/or Florida ESOL standard(s) to which the task most directly relates. The electronic database also includes fields showing the type of learning addressed by each objective (knowledge, skill, or disposition), the specialized professional association standard associated with the objective (if relevant), whether the objective entails candidate reflection (an underlying theme throughout the unit's conceptual framework), and the general content and form of the assessment, including reference to the scoring tool.

The electronic database generates reports to assist faculty members within each program in examining the alignment of their curriculum with the Florida Educator Accomplished Practices, the Florida ESOL standards, and other relevant sets of professional standards. Faculty are able to analyze the curriculum holistically by examining the program's focus on appropriate knowledge, skills, and dispositions; by reviewing the various types of critical tasks and other assessments used in the curriculum; and by examining the extent to which candidates as a whole are experiencing success or difficulty in completing any relevant standard.
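The alignment check at the heart of these reports can be sketched in miniature: given each critical task's linked standards, flag any Accomplished Practices a program never assesses. The task-to-standard mapping below is invented for illustration and is not the actual UNF curriculum data.

```python
# Hypothetical sketch of a curriculum-alignment report. The mapping of
# critical tasks to standards is an invented example, not real program data.
ACCOMPLISHED_PRACTICES = {f"FEAP-{i}" for i in range(1, 13)}  # 12 state practices

program_tasks = {
    "Lesson Plan":  {"FEAP-1", "FEAP-10"},
    "Case Study":   {"FEAP-4"},
    "Video Review": {"FEAP-2", "FEAP-4"},
}

def alignment_report(tasks):
    """Return (covered, gaps): practices assessed at least once vs. never."""
    covered = set().union(*tasks.values())
    gaps = ACCOMPLISHED_PRACTICES - covered
    return covered, gaps

covered, gaps = alignment_report(program_tasks)
print(f"{len(covered)} practices covered; gaps: {sorted(gaps)}")
```

A report like this supports the holistic review described above: practices appearing in `gaps` signal curriculum areas needing additional critical tasks.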

Our programs began their initial curriculum audits during the 2002-2003 academic year through a program folio development process, with the work of some of the programs extending into the present academic year. The folio development process was premised on the assumption that curriculum auditing and refinement is an ongoing process designed to ensure that the curriculum has the capacity to produce the desired outcomes. Program faculty reviewed the alignment of the program curriculum with the Florida Educator Accomplished Practices, Florida ESOL standards, other professional standards as appropriate, and the unit's conceptual framework. Criteria used in this initial audit included the following:

  • extent to which the program addresses the outcomes specified in the unit's conceptual framework;
  • extent to which program objectives and assessments focus on the candidate's effectiveness in promoting learning among K-12 students;
  • specification of critical tasks or transition points for assuring success of candidates in obtaining mastery of the Florida Educator Accomplished Practices, the Florida ESOL standards, and other relevant sets of professional standards;
  • precision in classifying objectives by type of learning (knowledge, skill, or disposition);
  • extent to which each program includes provisions for valid and fair assessment of candidates' effectiveness in promoting learning among P-12 students;
  • extent to which the curriculum addresses candidate dispositions;
  • specification of the methods or tools to be used in measuring the quality of the product or activity presented by the candidate to demonstrate his or her accomplishment of the objective (i.e., a test, rubric, checklist, assessment instrument, or other tool used in judging the quality of a candidate's work or in assessing candidates' competence).

Unit Evaluation

Just as candidate data are aggregated for use in evaluating programs, comprehensive analyses of program strengths and weaknesses are aggregated for use in evaluating the unit's effectiveness. The unit's Continuing Accreditation Team (CAT), consisting of the Dean, Associate Dean, two professional education faculty, a part-time accreditation associate, and a graduate assistant responsible for maintaining the electronic tracking database, is responsible for overseeing the program compliance portion of these unit evaluation efforts. The CAT's unit evaluation activities include aggregation and analysis of assessment/evaluation data from all unit programs. The unit evaluation process also includes analysis of program recruitment, enrollment, retention, and completion data, as well as unit-wide data in the form of faculty evaluations; information on student, staff, and faculty diversity; and unit leadership assessments to identify changes needed to improve unit performance.

The electronic tracking database was pilot tested in summer 2003 in preparation for full implementation of the data collection and data entry process in fall 2003, with faculty in a select number of programs participating in the pilot. During the present academic year, data are being gathered for candidates in all unit programs. All critical tasks and/or program transition points associated with all programs of study should be formatted for entry into the database by the end of the fall 2003 semester. Program faculty will use these data in making decisions on candidate continuation and completion for candidates entering programs in fall 2004 and thereafter. Data on candidate performance will be aggregated for review at the end of spring 2004. Programs will use these aggregated candidate decision point data in preparing their internal review evaluations during the 2004-2005 academic year.

Unit Assessment Data

The unit's administrators analyze summaries of program strengths and weaknesses, as well as other relevant program data, annually when preparing various internal and external reports requiring group data summaries for candidates from all unit programs. Using multiple assessments from internal and external sources, the unit collects data from applicants, candidates, recent graduates, faculty, and other members of the professional community, including directing teachers and principals.

The unit regularly and systematically uses a variety of data to evaluate the efficacy of its courses, programs, and clinical experiences. Data sources include:

  • university instructional satisfaction questionnaires;
  • instructor surveys of courses;
  • candidate performance on the Florida Educator Accomplished Practice critical tasks and core competency tasks in courses, field experiences, and internships;
  • candidate performance on state tests;
  • surveys of interns, supervising teachers and school principals;
  • program area reports;
  • faculty annual reports;
  • graduate surveys;
  • graduate rehire rates.

In addition to providing information on individual candidate performance, the design of the electronic tracking database also supports the creation of standard and custom reports for use in evaluation at the program and unit levels. Creation of a candidate assessment database will permit aggregation of these data for use in identifying program strengths and weaknesses. This aggregation of candidate performance data may be combined with other unit internal data (e.g., summaries of candidate complaints and their resolution) and external data (e.g., first year principal evaluations) for purposes of making decisions about program outcomes and improvement.
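The aggregation step described above, rolling individual candidate scores up into a program-level picture of strengths and weaknesses, can be sketched as follows. The records and the cutoff of 3.0 on a 4-point rubric are illustrative assumptions, not actual unit data or policy.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical sketch of aggregating candidate performance data into a
# program-level strengths/weaknesses summary. Data and cutoff are invented.
records = [
    # (program, critical task, rubric score on a 4-point scale)
    ("Elementary Ed", "Lesson Plan", 3), ("Elementary Ed", "Lesson Plan", 4),
    ("Elementary Ed", "Case Study", 2),  ("Elementary Ed", "Case Study", 2),
]

def program_summary(rows, cutoff=3.0):
    """Average scores per (program, task); split into strengths/weaknesses."""
    by_task = defaultdict(list)
    for program, task, score in rows:
        by_task[(program, task)].append(score)
    summary = {key: mean(scores) for key, scores in by_task.items()}
    strengths = [k for k, m in summary.items() if m >= cutoff]
    weaknesses = [k for k, m in summary.items() if m < cutoff]
    return summary, strengths, weaknesses

summary, strengths, weaknesses = program_summary(records)
```

In a full system, rows flagged as weaknesses would be reviewed alongside external data (e.g., principal evaluations) before program-improvement decisions are made.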

 

References

Ambach, G. (1996). Standards for teachers: Potential for improving practice. Phi Delta Kappan, 78(3), 207-210.

Denner, P. R., Salzman, S. A., & Harris, L. B. (2002, February). Teacher work sample assessment: An accountability method that moves beyond teacher testing to the impact of teacher performance on student learning. Paper presented at the annual meeting of the American Association of Colleges for Teacher Education, New York. (ERIC Document Reproduction Service No. ED463285)

Fredman, T. (2002, February). The TWSM: An essential component in the assessment of teacher performance and student learning. Paper presented at the annual meeting of the American Association of Colleges for Teacher Education, New York. (ERIC Document Reproduction Service No. ED464046)

Harris, L. B., Salzman, S., Frantz, A., Newsome, J., & Martin, M. (2000, February). Using accountability measures in the preparation of preservice teachers to make a difference in the learning of all students. Paper presented at the annual meeting of the American Association of Colleges for Teacher Education, Chicago, IL. (ERIC Document Reproduction Service No. ED440926)

National Council for Accreditation of Teacher Education. (2002). Professional standards for the accreditation of schools, colleges, and departments of education (2002 ed.). Washington, DC: Author.

Tomei, L. J. (2002, February). Negotiating the standards maze: A model for teacher education programs. White paper. Paper presented at the annual meeting of the American Association of Colleges for Teacher Education, New York. (ERIC Document Reproduction Service No. ED463263)

Weisenbach, E. L. (2002). Myth 2: There is no connection between standards and the assessment of beginning teachers. In G. Morine-Dershimer & G. Huffman-Joley (Eds.), Dispelling myths about teacher education (pp. 25-32). Washington, DC: American Association of Colleges for Teacher Education.

Figure 1: The COEHS Unit Assessment System

Figure 2: Candidate Assessment in the University of North Florida Teacher Education Unit