Standard 5: Provider Quality, Continuous Improvement and Capacity

Standard 5

The provider maintains a quality assurance system comprised of valid data from multiple measures, including evidence of candidates’ and completers’ positive impact on P-12 student learning and development. The provider supports continuous improvement that is sustained and evidence-based, and that evaluates the effectiveness of its completers. The provider uses the results of inquiry and data collection to establish priorities, enhance program elements and capacity, and test innovations to improve completers’ impact on P-12 student learning and development.

Quality Assurance System
The Quality Assurance System utilized by the Educator Preparation Program (EPP) at Purdue University Northwest (PNW) continues to evolve as national and state requirements change, as information gathered from community partners, clinical educators, completers, and candidates is analyzed, and as the EPP and institution grow. An essential aspect of this process is the consideration of the functions of and purposes for gathering data, reviewing it, and making program decisions. Characteristic of these decisions are efforts to improve operational effectiveness, candidate progress, and the impact of completers on student learning and development.

The PNW EPP created the current iteration of the PNW Quality Assurance System (QAS) in the fall of 2018 as a consistent, recurring, and comprehensive model for assessment. Broadly speaking, assessment is conducted within the EPP for three reasons:

  1. assessment for accreditation: to meet the needs of accrediting bodies (i.e., CAEP, HLC, state);
  2. assessment of candidate learning outcomes: the core skills, knowledge, and dispositions which candidates develop through their engagement in a program of study; and
  3. program assessment: the process of asking and answering questions about how well candidates are achieving learning outcomes over the course of their plan of study.

Representing a model of continuous improvement, the QAS is based upon the Deming cycle of plan-do-check-act. As such, PNW’s QAS is a cyclical process for the collection, analysis, monitoring, and reporting of multiple measures of candidate progress, completer achievements, and operational effectiveness (Continuous Improvement Cycle). It describes the EPP Signature Assessments (i.e., those assessments used to assess all candidates within the EPP relative to CAEP preparation standards) and differentiates them from Program Area Key Assessments (i.e., those assessments used within specific program areas to measure candidate performance relative to Specialized Professional Association standards); communicates the roles and responsibilities of offices and individuals as they relate to the continuous improvement of programs of study and the EPP as a whole (Continuous Improvement Cycle Groups); explains the communication with and involvement of community partners, clinical educators, completers, candidates, and other stakeholders; and illustrates how data are managed, stored, and reviewed. The EPP Signature Assessments are the Teacher Performance Assessment (edTPA), state licensure exams, the Skills of Teaching Observation Tool (STOT), Niagara University’s Disposition Instrument (Niagara), the EPP Candidate Interview, and the NeXT Exit Survey (SIGNATURE ASSESSMENTS).

Data Dialogue Days (DDD) and Use of Data
Data Dialogue Days (DDD) ensure the systematic review of multiple measures and the monitoring of candidate progress, and provide opportunities to make data-based changes to programs and the EPP. Eight DDD occur throughout the year: four during the fall semester and four during the spring semester. Each semester, one DDD occurs at the Program level, one with the Program Advisory Committees (PACs), one at the EPP level, and one with the EPP Advisory Committee (EPAC). Twice each year (fall and spring semesters) at the Program, PAC, EPP, and EPAC levels, key stakeholders (both internal and external to the institution) review and analyze data; make recommendations for program improvement; and determine and/or improve processes and practices related to assessment, candidate progression within programs of study, and the recruitment and retention of diverse candidates.

EPP and Program DDD
During Program-level and EPP-level DDD, candidate performance on EPP Signature and Program Area Key Assessments is reviewed, analyzed, and used to make modifications to courses, coursework required within programs of study, field experiences, and interview processes, as well as to efforts to recruit and retain diverse candidates. The outcomes of this analysis and the decisions and/or next steps arising from it are recorded in the meeting minutes; progress toward implementation of the identified changes is monitored at the next DDD, regardless of level.

PAC and EPAC DDD
Similarly, once each semester, the PACs and the EPAC convene to review and analyze data from Signature and Program Area Key Assessments and recommend modifications to coursework, field experiences, interview processes, and recruitment and retention efforts. Data used for Program and PAC DDD are typically disaggregated to include only candidates from that program area, with both past and present performance on the Signature or Program Area Key Assessments. Data used for EPP and EPAC DDD are typically aggregated across candidates from the entire EPP, again including past and present performance on the Signature or Program Area Key Assessments, and may be further disaggregated by specific demographics. (EPP Master Calendar)
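To make the aggregation and disaggregation described above concrete, the following minimal Python sketch (using pandas and hypothetical column names such as program_area, assessment, demographic, and score) shows how Signature Assessment records might be summarized for a Program/PAC DDD versus an EPP/EPAC DDD. It is an illustration only, not the EPP’s actual data pipeline.

    import pandas as pd

    # Hypothetical Signature Assessment records; all column names and values
    # below are illustrative placeholders, not EPP data.
    records = pd.DataFrame({
        "program_area": ["Elementary", "Elementary", "Secondary", "Special Ed"],
        "assessment":   ["STOT", "edTPA", "STOT", "STOT"],
        "demographic":  ["Group A", "Group B", "Group A", "Group B"],
        "score":        [3.2, 41.0, 2.8, 3.5],
    })

    # Program/PAC view: disaggregate one assessment by program area.
    program_view = (records[records["assessment"] == "STOT"]
                    .groupby("program_area")["score"]
                    .agg(["count", "mean"]))

    # EPP/EPAC view: aggregate across the whole EPP, then optionally
    # disaggregate by a demographic dimension.
    epp_view = records.groupby("assessment")["score"].agg(["count", "mean"])
    demographic_view = records.groupby(["assessment", "demographic"])["score"].mean()

    print(program_view, epp_view, demographic_view, sep="\n\n")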

Quality Assurance Committee
The Quality Assurance Committee (QAC), comprised of five to six faculty and staff members and led by the Director of the School of Education and Counseling, is charged with overseeing and managing the QAS. Extended meetings are convened twice each year (i.e., once in the fall and again in the spring) for this body to review processes and policies related to the QAS, the implementation of these processes and policies, the Signature Assessments, data reporting and/or review forms, and the implementation of modifications identified at the Program and EPP levels. Further, a subset of the QAC coordinates the administration of assessments and the maintenance of the data management system; provides oversight for the data collection system and documents processes and procedures to ensure that they are being followed by units; and prepares data and agendas for Data Dialogue Days.

The QAS, in concert with the EPP Master Calendar, provides a roadmap for data collection, analysis, and decision making. The Master Calendar, shared annually on August 1st and again on January 1st, identifies the key dates by which information is collected for review and analysis, when meetings are to occur, and when Data Dialogue Days (DDD) occur during the academic year. As stated previously, these reviews of data inform program improvement and support the continuous growth of the EPP. In addition to reviewing data and assessment processes and practices, these meetings provide opportunities for the actions and decisions made based upon data during the year to be shared and evaluated for the value they have added.

Using Data to Inform Decisions
The CAEP Evidence Form (Decision-Based Report Form) was initially developed during Fall 2018 by the Office of Assessment and Accreditation as a way of reporting, tracking, and monitoring the data-driven decisions made by the EPP and its programs. Meeting reporters were to complete the CAEP Evidence Form within two days following a meeting, identifying the type of meeting and the CAEP standard(s) that applied (including the cross-cutting themes of diversity and technology), attaching items and evidence from the meeting, and submitting the form to the Assessment Coordinator. All documents were to be scanned and made available to all faculty on a shared drive. It was believed that this would document meetings and procedures, allowing the EPP to develop systems of accountability and assessment. A review of this process conducted in Fall 2019 found that it had not been effective: documents and decisions made based upon the review of data were not consistently being recorded and reported to the Office of Assessment and Accreditation, and many faculty and staff members found the completion of an additional form to be redundant and onerous. The review indicated that decisions made during these meetings could instead be recorded and tracked through the meeting minutes, where this information was already included. Therefore, effective Spring 2020, data-driven decisions are included as part of the standardized DDD meeting agenda and are reported within the minutes. The Assessment Coordinator includes all data-driven decisions within the SoEC Annual Report, which is available to all SoEC faculty and staff via MyPNW (the institution’s portal).

Inherent in the QAS is continuous review to ensure that continuous improvement and growth are being realized by the EPP. Annual retreats provide the opportunity for members of the QAC to analyze the effectiveness of the system, its capability, and its dependability. An outcome of one such retreat was the development of the SoEC Annual Report. This report incorporates findings and decisions based on data and the implementation and impact of those decisions, in addition to reporting annual assessment information. It is distributed at the beginning of the academic year as a review of the previous academic year. Further, it documents the efforts that the EPP has taken over the course of the academic year to improve educational opportunities for candidates; identifies changes in Signature Assessments, procedures, and processes; and is used to establish goals for the upcoming year.

Taken together, these efforts, along with the creation of a dedicated “Assessment” section in the MyPNW Online Portal (made available online along with assessment tools for faculty members), help to create a cohesive system for collecting and reviewing data and making decisions for program improvement. Finally, it is important to note that the QAS utilizes technology to ensure the timely collection and management of data, the modification and collection of information from Signature and Program Area Key Assessments, and the storage of data gathered by program areas within the EPP.

Signature Assessments
Tools provided by the university’s infrastructure and by outside proprietary services serve as the foundation for the comprehensive system. Administration and management of data are secured through platforms such as TaskStream, Pearson, and Banner.

PNW’s EPP employs six measures to evaluate its effectiveness in preparing candidates: the Teacher Performance Assessment (edTPA), state licensure exams (CORE), the Skills of Teaching Observation Tool (STOT), Niagara University’s Disposition Instrument (Niagara), the EPP Candidate Interview, and the NeXT Exit Survey. These six assessments are referred to as EPP Signature Assessments and are used across all program areas within the institution.

Measures are designated as EPP Signature Assessments by the QAC and proposed to faculty members during EPP Data Dialogue Days as part of the EPP’s decision-making process. In selecting measures as Signature Assessments, the QAC is guided by the following:

  • The broad learning objectives for the EPP. The InTASC and CAEP standards serve as these objectives.
  • The specific knowledge, skills, and dispositions to measure. State licensure exams and clinical educators inform this aspect of Signature Assessment selection.
  • The availability of assessments with established measures of reliability and validity. Assessments selected align with the broad learning objectives for the EPP and provide evidence of what is claimed.
  • The ability to disaggregate assessment results. Signature Assessment results can be disaggregated in ways that allow for accurate representation of each program area within the EPP as well as along other dimensions (e.g., demographics, location, etc.).
  • Administration costs, if any. Proprietary assessments such as the edTPA and CORE licensure exams require a fee. These fees are assessed as part of candidates’ course fees, and vouchers are assigned to candidates for completion of these assessments.
  • The role of the assessment in Data Dialogue Days (DDD), where multiple measures are systematically reviewed, candidate progress is monitored, and data-based decisions about programs and the EPP are made (see Data Dialogue Days and Use of Data, above).

Efforts to improve the EPP’s ability to produce empirical evidence that interpretations from data are valid and reliable have resulted in the adoption of five proprietary assessments: the edTPA, CORE, STOT, Niagara, and the NeXT exit survey. The edTPA is a performance-based, subject-specific assessment designed to measure candidates’ preparedness for work in classrooms as educators (i.e., their ability to positively impact learning). The three tasks of the edTPA align with the InTASC and CAEP standards, highlighting the cross-cutting themes of technology and diversity (edTPA 2014 Crosswalk). The Indiana CORE assessments for educator licensure are criterion-referenced, standards-based assessments designed to ensure that candidates have the pedagogical and content knowledge required to teach effectively in Indiana public schools. Developed by the North Dakota Association of Colleges for Teacher Education, the STOT measures candidate performance on four factors: the learner and learning, content knowledge, instructional practices, and professional responsibilities. These areas are aligned with the InTASC standards. The Niagara University Disposition Assessment is a valid and reliable measure of candidates’ professional dispositions, the values, actions, attitudes, and beliefs enacted through interaction with learners, their families, community members, and colleagues. The NeXT exit survey is designed to gather candidates’ perspectives on their educator preparation programs as they leave the institution and enter the teaching profession.

Realizing the importance of methodically and successfully implementing these new assessments, the QAC has developed a process for implementing new EPP-wide assessments (Implementing New EPP Assessment Process).

The EPP Candidate Interview rubric is currently the only EPP-designed instrument in use. Surveys of faculty members, candidates, and clinical educators have been collected to obtain feedback about the interview process and its usefulness, to determine inter-rater reliability, and to provide a better understanding of the quality of candidates entering the program (Quality Assurance Report for EPP Student Interviews into the Program). Inter-rater reliability was 56% exact agreement between the scores of two faculty members interviewing the same individual, increasing to 97% for scores within one rubric point. Faculty members and candidates agreed that the environment, organization, and timing were appropriate for the interview process. Further, faculty members indicated that the process provided opportunities to become better acquainted with potential candidates.
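As a minimal sketch of how agreement figures of this kind could be computed from paired interview ratings, the following Python snippet calculates exact agreement and agreement within one rubric point; the score lists are illustrative placeholders rather than actual EPP data.

    # Illustrative interview rubric scores from two faculty raters scoring the
    # same candidates; the lists are placeholders, not EPP data.
    rater_a = [3, 4, 2, 3, 4, 3, 2, 4]
    rater_b = [3, 3, 2, 4, 4, 3, 3, 4]

    pairs = list(zip(rater_a, rater_b))
    exact = sum(a == b for a, b in pairs) / len(pairs)                # identical scores
    within_one = sum(abs(a - b) <= 1 for a, b in pairs) / len(pairs)  # within one rubric point

    print(f"Exact agreement: {exact:.0%}")
    print(f"Agreement within one rubric point: {within_one:.0%}")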

How Signature Assessments Were Determined
The Professional Disposition Summary (Professional Dispositions Summary) illustrates the efforts undertaken by the EPP to create a tool to measure professional dispositions. This summary chronicles the process undertaken by the EPP: it begins with the development of the Professional Behavior Evaluation for Candidate Admission to Program tool; describes efforts to establish measures of its reliability and validity; recounts its introduction and first use with candidates; and ends with the final determination that a proprietary instrument was necessary because content validity was difficult to establish. Training on the newly adopted tool, the Niagara University Disposition Assessment, occurred in October 2019, and it remains in use as one of the EPP Signature Assessments.

Attempts to determine and improve the validity of EPP-developed measures have been challenging. One example took place in Fall 2019 with the EPP-developed exit survey of candidates, which was completed by university supervisors (Program Exit Survey Summary). During the EPP DDD meeting, stakeholders (faculty members, university supervisors, clinical educators) were asked to help conduct a content and construct validity study of the existing instrument. Participants were asked to score each item from least essential (a score of 1) to most essential (a score of 10) as an element related to effective teaching. While the assessment was found to have construct validity (i.e., it measured the perceptions of university supervisors on candidates’ readiness for the classroom), it lacked content validity (i.e., it did not cover all relevant parts of the knowledge, skills, and dispositions that candidates needed to demonstrate to indicate their readiness for the classroom). Based upon analysis of these efforts, the Office of Assessment and Accreditation determined that revising the instrument would require substantial changes and instead sought available assessments with established measures of reliability and validity. The NeXT exit survey, developed by a consortium of 14 colleges and universities in Minnesota, North Dakota, and South Dakota, was selected and presented to the School of Education and Counseling Leadership Committee. With the approval of the SoEC Leadership team, the NeXT was shared at the next EPP meeting and adopted for use as an EPP Signature Assessment beginning Fall 2019.
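To illustrate the rating exercise described above, the sketch below shows one way stakeholder essentiality ratings (1 = least essential, 10 = most essential) could be summarized to flag survey items with weak content-validity support; the item names, ratings, and threshold are hypothetical, not drawn from the actual study.

    # Illustrative stakeholder ratings (1 = least essential, 10 = most essential)
    # for items on an exit survey; items, ratings, and threshold are hypothetical.
    ratings = {
        "Uses assessment data to plan instruction": [9, 8, 10, 9, 7],
        "Maintains a tidy classroom bulletin board": [3, 2, 4, 5, 3],
        "Communicates expectations to learners": [8, 9, 9, 10, 8],
    }

    threshold = 7.0  # illustrative cut-off for "essential to effective teaching"
    for item, scores in ratings.items():
        mean_rating = sum(scores) / len(scores)
        decision = "retain" if mean_rating >= threshold else "review for content validity"
        print(f"{item}: mean essentiality {mean_rating:.1f} -> {decision}")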

Training on Signature Assessments
As is evident, the EPP relies heavily on proprietary measures and their established validity and reliability. Further, to ensure that the data gathered from these measures are reliable, the EPP provides training for clinical educators each semester. An example occurred in October 2019, when extensive training on the STOT was completed with clinical educators. This training introduced the assessment, discussed the four factors assessed (i.e., the learner and learning, content knowledge, instructional practices, and professional responsibilities), unpacked the performance levels used in rating a candidate’s performance, defined ways to mitigate biases, and provided overall guidelines for the tool’s use. Clinical educators used videos to practice scoring an observation with the tool. After this initial scoring using the STOT rubrics, calibration was done by comparing these scores to those of an expert panel. The results indicated 54% absolute and 46% adjacent agreement on video 1; following calibration, inter-rater reliability on video 2 was 62% absolute and 38% adjacent agreement (Inter-rater Reliability STOT Report).
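A minimal sketch, under the assumption that each clinical educator’s STOT ratings are compared item by item with the expert panel’s consensus scores, of how absolute (identical score) and adjacent (one performance level apart) agreement could be calculated; the scores shown are placeholders, not the training data.

    # Illustrative STOT ratings: an expert panel's consensus score and one
    # clinical educator's score for each rubric item on a practice video.
    expert_panel = [3, 2, 3, 4, 3, 2, 3, 4, 3, 3]
    clinical_educator = [3, 3, 3, 4, 2, 2, 4, 4, 3, 2]

    differences = [abs(e - c) for e, c in zip(expert_panel, clinical_educator)]
    absolute = sum(d == 0 for d in differences) / len(differences)  # identical scores
    adjacent = sum(d == 1 for d in differences) / len(differences)  # one level apart

    print(f"Absolute agreement: {absolute:.0%}")
    print(f"Adjacent agreement: {adjacent:.0%}")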

As a central component of its work with clinical educators, the Office of Partnerships and Outreach provides training and workshops multiple times each semester. Analysis of candidate performance on Signature Assessments serves as the basis for the content presented. These workshops have included information about the edTPA, an introduction to the Niagara, and the training to reliability conducted with the STOT.

Commitment to Continuous Improvement
The EPP at PNW is committed to continuous improvement and has established a culture in which all stakeholders are invested in the ongoing review of performance. To that end, the EPP has designed structures, systems, and processes to regularly and systematically assess performance against its goals and relevant standards. Through these activities, the EPP is able to track results over time and test innovations. Further, the EPP examines the effects of selection criteria on subsequent progress and completion, and intentionally uses results to improve program elements as well as established processes. The EPP at PNW engaged in a co-constructive, iterative process that resulted in Revolutionizing the Educator Preparation Program at PNW (REP3), which outlines the goals, strategies, innovations, and outcomes for PNW’s EPP.

Revolutionizing the Educator Preparation Program at PNW (REP3) is the manifestation of the EPP Conceptual Framework of the Educational Leader and provides the roadmap for the functional and operational aspects of the EPP. The REP3 outlines goals, tools and measures, and strategies and innovations, and assigns ownership to the responsible functional area. We believe The Educational Leader is one who relies on research to construct knowledge through continuous and integrated inquiry, develops practice through continuous engagement with diverse learning environments and communities, and cultivates relationships with learners, partners, and stakeholders. To that end, goals were formulated based on the data collected through the processes of the Quality Assurance System. The EPP has adopted eight goals within the framework of the Council for the Accreditation of Educator Preparation (CAEP) Standards.

Goal One: Increase the number of candidates who are successful (i.e., pass) on the first attempt of the licensure exam (CAEP 1).

The Program Areas, led by the Program Area Coordinators (i.e., Elementary Education, Special Education, Secondary Education, Transition to Teach), have taken responsibility for the goal of increasing the number of candidates who pass the licensure exam on the first attempt. The program areas purposefully identify strategies to support candidates and to communicate these available supports to them. The SoEC Leadership Team, comprised of the Program Area Coordinators, convenes to share strategies, review data, and provide another layer of ongoing monitoring of this goal.

Goal Two: Provide training to clinical educators (e.g., training to reliability, Signature Assessments, best practice in clinical education, etc.) (CAEP 2).

The SoEC Office of Partnerships and Outreach (OPO) is responsible for oversight of clinical educators, including recruitment, selection and training of qualified individuals to serve the EPP. As part of the Onboarding and Orientation for clinical educators, the OPO assesses self-identified needs to prepare and plan for support and training. Additionally, an analysis of the data related to inter-rater reliability for the Signature Assessments (e.g., Skills of Teaching Observation Tool (STOT) and Niagara Dispositions) as well as the quality of supervision (e.g., end-of-semester surveys) informs the content for training. The OPO meets twice a month to evaluate progress toward this goal by collecting the documentation (e.g., OPO Glitter Meeting agendas, SoEC Master calendar), reviewing informal and formal feedback from clinical educators, and developing plans based on the EPP data.

Goal Three: Create and implement a clinical placement tracking and monitoring system to ensure candidates have diverse experiences (CAEP 2).

Another area of oversight for the OPO is the documentation of candidate experiences in clinical placements. The EPP strives to ensure that all candidates have experiences that are purposefully planned to incorporate diverse demographics, socioeconomic statuses (SES), rural/urban/suburban schools, a variety of grade levels, etc. Because candidates have multiple field experiences over the course of their plan of study, it is essential that a system is created and faithfully used to track and monitor all characteristics of clinical placements. Through the collaborative efforts of the OPO, the Site Tracker for the EPP (STEPP) was designed to address this goal. Again, the OPO meets twice a month to review progress on all goals, including the creation and implementation of systematic tracking and monitoring of clinical placements.

Goal Four: Develop and expand relationships with community partners (CAEP 3).

Goal Five: Increase diversity of candidates entering and completing EPP degree/licensure programs to align with the demographics of the region (CAEP 3).

Goal Six: Create unique and diverse opportunities for candidates to engage in their profession (CAEP 3).

As outlined in Revolutionizing the Educational Workforce (REW), the EPP has identified three interrelated goals specific to the recruitment, selection, and retention of education candidates. REW defines the goals, strategies, and outcomes to be tracked, and the SoEC Office of Recruitment and Retention is charged with tracking and monitoring these goals, using data outlined in the Continuous Improvement Cycle to assess progress toward these objectives. The Office of Recruitment and Retention is comprised of the academic advisors. The SoEC has established a Recruitment Task Force and a Retention Task Force, each co-chaired by an academic advisor and populated with SoEC faculty and staff, to ensure that the goals and strategies for recruitment, selection, and retention of candidates are jointly owned and monitored. Further, the Office of Recruitment and Retention initiates cross-functional activities to reinforce these goals, such as engaging with the SoEC Office of Partnerships and Outreach to develop and expand relationships with community partners and working with the Office of Concurrent Enrollment Programs to design appropriate, goal-related activities.

Goal Seven: Create infrastructure within the EPP for ongoing study of the impact of candidates (CAEP 4).

For the purpose of conducting this self-study, the EPP elected to design and implement a research study to examine the impact of candidates on student learning, resulting in The Application of Educators’ Knowledge, Skills and Dispositions to Impact Student Learning: A Case Study of an Educator Preparation Program. Through this process, overseen by the SoEC Office of Assessment and Accreditation, EPP faculty visited schools where program completers are teaching to observe them in their classrooms and to survey students. The case study, using a mixed-methodology approach, proved to be a meaningful and generative exercise allowing the EPP to reflect on the impact of program completers on student learning. After a review of the case study, several benefits were identified, including the systematic collection of impact data and continued outreach and collaboration with school partners. It was determined that the EPP would adopt this goal to create an infrastructure for ongoing study of the impact of program completers on student learning. With the initial case study completed, the EPP has a benchmark on which to build processes and structures to ensure that an abbreviated case study is completed on a regular basis for the purpose of understanding how EPP completers are using their knowledge, skills, and dispositions in learning environments. The collective body of case studies provides data which, in turn, informs the EPP. This goal is tracked and monitored through the stewardship of the Office of Assessment and Accreditation.

Goal Eight: Establish and monitor progress toward meeting goals and establish new ones for the EPP (CAEP 5).

The responsibility for establishing an overarching system to document, track, monitor, and assess the EPP’s progress toward meeting the goals and standards is housed primarily within the EPP Quality Assurance Committee (QAC). The QAC, in concert with all stakeholder groups, has initiated the Quality Assurance System (QAS), a guiding document for enacting the EPP’s goals as outlined in Revolutionizing the Educator Preparation Program (REP3). The EPP goals, set forth in the REP3, were co-constructed with EPP faculty, staff, clinical educators, and other community partners in alignment with the CAEP Standards. The QAC meets to review EPP data from all sources, document data-driven modifications, set priorities, and determine systems and processes for the operations of the EPP as related to goals and standards. The Continuous Improvement Cycle, also created by the QAC, sets forth the timeline for specific point-in-time reviews (e.g., Data Dialogue Days, data collection points, EPP Forums, etc.). These reviews may include, but are not limited to, baseline data, interventions, tracking over time, rationale and justifications for recommendations, comparative analyses, and recommendations for next steps. The QAC provides regular evidence-based recommendations to the SoEC Leadership Team, which serves as an additional check and balance for oversight and guidance of this process. The QAC is also responsible for the communication system used to disseminate this information to all EPP stakeholders, internal and external. To that end, the EPP goals and the progress toward meeting them are detailed in the SoEC Annual Report. In sum, the EPP is fundamentally dedicated to continuous improvement, as demonstrated in its systematic, intentional processes for establishing, tracking, monitoring, and assessing progress toward goals and standards. A reliance on quality data to inform decisions is central to the EPP’s mission and conceptual framework, The Educational Leader.

Innovations and Improvements
As illustrated in the REP3 and integral to the Continuous Improvement Cycle of PNW’s EPP are the gathering of data to identify potential innovations and improvements; the analysis of the implementation of these innovations and improvements; and, finally, the evaluation of their impact on the EPP’s ability to prepare graduates who are competent and caring educators. A key innovation that has served the EPP well in this process has been the Quality Assurance Committee (QAC). Within the EPP, this body plays a crucial role in ensuring adherence to the quality of performance outlined by the CAEP Standards. It is responsible for planning, directing, and coordinating the quality assurance measures of the EPP, formulating quality control policies, and helping the EPP to operate in a more efficacious and efficient manner. Through its efforts, key structural changes have occurred (e.g., processes and procedures for the evaluation of EPP-created measures, the EPP Commitment to Diversity statement, systematic analysis of data to improve and inform programs and processes (DDD), the defining/clarifying of roles and responsibilities of various offices within the EPP, the creation of the Technology Task Force, etc.). This group has also been instrumental in creating and clarifying the policies and procedures of the EPP (e.g., SPR, Smoky Room, REW, REP3, use of vouchers for Signature Assessments) as well as in the selection and use of Signature Assessments (e.g., implementation of the edTPA, STOT, Niagara). Finally, it has been influential in increasing transparency and the inclusion of all stakeholders (e.g., Data Dashboard, Case Study, CAEP Self Study, EPP Annual Report, EPP Forums, PAC creation).

Data Dialogue Days
Data Dialogue Days, held each semester at various levels (i.e., program level, EPP level, PAC, EPAC), have helped to reinforce the use of data to identify areas of improvement and innovation and serve as a means for determining the impact of changes made. An example of an improvement arising from these meetings is the creation and implementation of field guides for each field experience. Prior to their implementation, data showed that clinical educators, faculty members, and candidates did not appear to share a common understanding of the expectations for field experiences, resulting in an uneven experience for candidates across and within programs of study. Further, stakeholders indicated that field experiences, while important, were not sufficient in preparing candidates for work in classrooms. As a result, programs and their PACs developed and implemented incremental field experiences that increase the amount of time candidates spend in the field over their plan of study. Finally, the Site Tracker for the EPP (STEPP) system, created by the SoEC Office of Partnerships and Outreach (OPO), was implemented to track and monitor candidate experiences. These innovations, a result of collaborative efforts between clinical educators, candidates, faculty, and the OPO and guided by data, have increased the number and diversity of field placement sites, increased the number of Memoranda of Agreement (MoAs) between the EPP and schools/agencies, increased the consistency of candidate experiences in the field, and generated a developmental approach to field placement.

Vouchers
Another improvement made by the EPP has been the implementation of vouchers for Signature Assessments. Data indicated that a large number of candidates were not completing their licensure exams; candidates often waited until their final semester of study or post-graduation to complete these tests, attributing this delay to a lack of resources and/or feeling unprepared. The EPP implemented a policy change requiring candidates to attempt the CORE exam(s) prior to the student teaching semester. While this increased the number of candidates attempting the assessment, many were not successful in obtaining the required score. To address candidates’ feelings of being unprepared and/or their lack of success on the exam(s), program areas identified coursework aligned with CORE content and recommended that candidates complete the exam following the conclusion of the course. Further, degree plans and degree paths for programs were created and distributed to better communicate this information. While these efforts encouraged candidates to complete the CORE exams in a more timely manner, many continued to delay the exams until the end of their programs of study, which was too late for support to be provided. The EPP then concluded that cost was the factor inhibiting candidate completion of this licensure requirement. To address this, the EPP implemented the assessment of course fees to purchase vouchers for each of the Signature Assessments that require payment (i.e., CORE, edTPA). These vouchers are distributed to candidates in courses identified as appropriate for the completion of the CORE exam(s), and faculty help candidates to identify the appropriate time within the semester to register for and complete the exam. The implementation of these changes has increased candidate completion of CORE exams in close proximity to coursework, allowing them to matriculate to student teaching and/or the professional year without delay.

Supporting Candidate Success
The processes for the selection and retention of candidates adhered to the requirements identified by the state (Criteria for Selection and Retention). However, many faculty and staff observed that some candidates were not successful and were experiencing difficulties in coursework and/or field experiences, and that there was no process or procedure for identifying candidates who might need additional support or for determining how the EPP might provide that support. To begin to address these concerns, the QAC proposed the use of the EPP Interviews for Prospective Candidates. Typically occurring at the end of a candidate’s third semester at PNW (slightly different for secondary, transition to teach, and graduate special education candidates due to program organization), prospective candidates are identified by the SoEC Office of Recruitment and Retention and invited to participate in the interview process. To receive an invitation, candidates need to have completed, or be in the process of completing, the pre-professional educational coursework. As part of the interview process, individuals complete a timed writing sample, make a presentation, and respond to questions from a panel of faculty and clinical educators. The purpose of the interview is to meet the prospective candidates and identify whether they would benefit from additional support (e.g., time management, presentation skills, resume writing, etc.).

While this process affords the EPP opportunities to identify early those candidates who may need additional support, it did not go far enough. The EPP implemented two additional measures to identify and support candidates who might be experiencing difficulty with courses or field experiences: the Smoky Room and the PNW Early Warning System for Grades. The Smoky Room is a meeting that occurs each semester with faculty and clinical educators in which candidates are reviewed for academic performance, possible dispositional issues, and/or issues related to the field. The PNW Early Warning System for Grades is a recent addition to the EPP’s processes and allows academic issues to be identified and communicated to candidates midway through the semester. Again, the EPP uses these tools to identify candidate needs so that they can be met.

Finally, the Student Performance Review (SPR) processes and procedures have been implemented to address concerns related to candidate performance and dispositions that could not be addressed through other efforts. The SPR is a formal review of the candidate based upon a complaint issued by a faculty member, staff member, or clinical educator. This formal process often requires a hearing in which all sides of the issue are presented and a resolution is crafted. Outcomes of the SPR are: no further action is warranted; the candidate is allowed to continue in the program on a probationary status (i.e., with the development of a plan of improvement); or the candidate is dismissed from the program. The implementation of these efforts has resulted in workshops and additional support being offered to candidates so that they are able to achieve their goals.

edTPA
In response to criticisms of bias, scoring problems, and a lack of standardization in the use of portfolios as part of the capstone experience (i.e., student teaching), the EPP sought an available assessment that would demonstrate candidates’ impact on student learning. Analysis of available assessments indicated that the Teacher Performance Assessment (edTPA) would fulfill this need and provide a potential basis for assessing completers’ impact on student learning following graduation. The EPP piloted the edTPA during the Spring 2018 semester with candidates in the Early Childhood Education and Elementary Education programs. In subsequent semesters, programs across the EPP piloted the edTPA. It was subsequently adopted by the EPP and implemented for all programs in Spring 2020 (edTPA Summary).

The edTPA provides a structure for candidates to carefully consider the importance of assessing student performance during their planning and instruction. As part of the assessment tasks, candidates analyze student data and respond to reflective prompts in commentaries asking them to define, describe, and support the impact they made on student learning. Analysis of data from the pilot and the first semester of implementation indicated that training was needed for all clinical educators (i.e., faculty, university supervisors, cooperating teachers) on the edTPA, what it requires of candidates, and its role within the EPP. Further, it was determined that activities and assignments to support candidates (e.g., integration of commentaries within field experiences, recording and analysis of their instruction, intentional feedback to learners, etc.) would be embedded within coursework across candidates’ plans of study. In this way, candidates would be supported in integrating what they have learned throughout their programs prior to student teaching. This was accomplished through orientations, additional training at the EPP Data Dialogue Days, and faculty determining in their program meetings how to better integrate elements of the edTPA into their courses.

Completer Impact Committee
The Quality Assurance Committee (QAC) constituted the Completer Impact Committee (CIC) in Fall 2018. The task of this committee is to study the impact completers have on P-12 learning and development. The CIC conducted a case study guided by the following questions:

  • In what ways, if any, do completers of an educator preparation program impact the learning of the students in their classrooms?
  • In what ways, if any, do completers of an educator preparation program apply the professional knowledge, skills, and dispositions they learned in their preparation program?

Participants were selected from completers licensed through the university’s educator programs within the last five years, using stratified random sampling. Educator preparation programs formed the strata (e.g., early childhood, elementary, special education, and secondary education) to ensure that all program areas were represented. Participants were asked to take part in the following activities as part of the study: a focus group discussion, a classroom observation, the Praxis Performance Assessment for Teachers-Student Surveys (PPAT), and, if they felt comfortable doing so, sharing their previous year’s district teaching evaluation and their students’ performance on the state assessment. Finally, statewide measures of program impact (principal survey, teacher survey, and effectiveness ratings) were used to inform the EPP’s understanding of completer impact in classrooms.

The report of this investigation, entitled The Application of Educators’ Knowledge, Skills and Dispositions to Impact Student Learning: A Case Study of an Educator Preparation Program, found that, overall, the data support that completers perceive, and are perceived, to be making instructional decisions that positively impact the learning in their classrooms. They indicate, and are perceived, to use analysis of student data in this decision-making process. Their understanding of content and pedagogical content knowledge is seen in their classroom practices, the expectations of performance they communicate to learners, and learner performance on assessments. The report also indicates that the EPP can and should do more to support candidates prior to graduation in developing their sense of self-efficacy related to instructional decision making and analysis of student learning. Toward this end, the EPP is piloting a year-long residency experience in Spring 2020 called the Professional Year (PY).

Professional Year
During the PY, candidates are placed in classrooms for extended periods of time (three consecutive days in the first semester and five consecutive days in the final semester). This extended period of time seeks to address completers’ desire for more time in classrooms and provides the opportunity for candidates to learn more about statewide assessments and about tools for mitigating the pressure students feel during this time, both identified by completers as areas of need.

STEPP
Further, the EPP has employed a new field experience plan and monitoring system to ensure that candidates are placed in a variety of grade levels and within diverse school settings (e.g., rural, urban, suburban; ethnic, socioeconomic, linguistic, etc.). The new system has formalized placement processes, created a streamlined candidate placement process that involves interviewing, and seeks to provide a multitude of learning opportunities from which candidates are able to learn and grow.

State Data and Its Use
House Enrolled Act No. 1388 (HEA 1388) was enacted during the 2014 session of the Indiana General Assembly. This act requires the Indiana Department of Education (IDOE) to collect and report information from educator preparation programs, principals, and teachers annually. Purdue University Northwest, along with other Indiana institutions, collaborated with the IDOE to determine the information and data that would be collected and made publicly available as a means of interpreting or comparing program quality. The data are reported in a non-ranking matrix posted on the state website and include:

  • The “attrition, retention, and completion rates of teacher candidates for the previous three (3) calendar years.”
  • Averaged scaled or standard scores of program completers in basic skills, content, and pedagogical testing.
  • Average number of times program completers took the basic skills, content, and pedagogy tests before passing.
  • Percentage passing the basic skills, content, and pedagogy tests on the first attempt.
  • Admission practices of each program as they compare to the Council for the Accreditation of Educator Preparation (CAEP) minimum admission standards.
  • Principal survey results of the quality of their teachers completing an Indiana program within the previous two (2) years.
  • Teacher feedback form results for those receiving initial license within the previous three (3) years.
  • Staff performance evaluation results reported in the aggregate.
  • The number of teacher candidates in each content area who complete the teacher preparation program during the year, disaggregated by ranges of cumulative grade point averages.
  • The number of teacher candidates in each content area who, during the year:
    • (A) do not pass a content area licensure examination; and
    • (B) do not retake the content area licensure examination (from: https://www.doe.in.gov/epps/data-comparative-performance)

EPPs receive a report from the IDOE in late fall with this information. The information is shared at the EPP Data Dialogue Day (DDD) during the spring semester, via the annual assessment report (available to all faculty and staff through myPNW), as well as on the EPP data dashboard website. Decisions about future directions based on the data are recorded in the minutes from DDD. The CAEP Annual Report provides an additional opportunity for the EPP to analyze, share, and act upon data in decision-making for programs, resource allocation, and future direction. Included within this report are four impact measures (impact on P-12 learning and development; indicators of teaching effectiveness; satisfaction of employers and employment milestones; satisfaction of completers) and four outcome measures (graduation rates; ability of completers to meet licensing and state requirements; ability of completers to be hired in education positions; and student loan default rates and other consumer information).

EPP Data Dashboard
PNW’s EPP considers data an essential component of continuous program improvement and provides access to the four impact and four outcome measures on the EPP Data Dashboard. The EPP Data Dashboard underwent a design overhaul with the sharing of the 2018-2019 data. This was the first time the EPP had received information from the IDOE under HEA 1388, and the new format offers ease of access to information, allows for the communication of trends in the data, and permits comparisons between years. The site also includes the Higher Education Act Title II reports for previous years. Additionally, these data are shared and analyzed during the spring EPP DDD, providing opportunities for all stakeholders to provide input and be part of the decision-making process. The decisions made during this semester are typically acted upon in the subsequent fall semester. This cyclical process helps to ensure that actions are further monitored and resources are appropriately allocated.

Stakeholders in Decision Making
Stakeholders, those individuals and organizations that have an interest in or are affected by PNW EPP evaluations and/or results, are an integral part of the Quality Assurance System. The Continuous Improvement Cycle and the Continuous Improvement Cycle Groups identify the purpose of each group, the members of these committees, their roles and responsibilities, and the data to be discussed at specific points in the improvement cycle.

Quality Assurance Committee
The QAC provides oversight of the QAS, reviews instruments and systems, and takes actions to increase accountability throughout the EPP. It was initially formed as a group of CAEP Standard team leaders (i.e., each standard had one to two tenured or tenure-track faculty members serving as the team lead for the investigation of that specific standard) composed of tenured or tenure-track faculty and staff members. Each of the CAEP Standard Team Leads worked with a group of faculty and staff members (i.e., their team) on a focused examination of their standard and the compilation of evidence that supported and/or represented the EPP’s efforts to meet that standard.

Teams shared information, findings, and thoughts at each EPP meeting, asking for input and/or feedback from other stakeholders. Beginning with the fall 2019 semester, this committee evolved from a committee of nine to a committee of six (i.e., the Director, Academic Advisors, Field Coordinator, Data Manager, and two to three elected tenure-track/tenured faculty members). This composition allows the QAC to focus on appropriate issues related to the QAS, provides for increased involvement, and supports effective engagement with data.

Program Advisory Councils
Program Advisory Councils (PACs) are another stakeholder group integral to the QAS. Comprised of eight to twelve clinical educators (e.g., classroom teachers, alumni, principals, university supervisors, and faculty members) and candidates, PACs meet each semester to share information and discuss models of best practice. As a primary means for identifying community needs (e.g., staffing, recruitment, career readiness, etc.), PACs provide input and feedback on processes and data. The members provide essential assistance in the EPP’s data-based decision-making processes and help to monitor the implementation and efficacy of the actions taken as a result of those decisions (PAC Meetings).

EPP Advisory Committee
A final method for integrating stakeholders in the EPP program evaluation and improvement process is the EPP Advisory Committee (EPAC). The EPAC is comprised of superintendents, directors of special education, state-level education professionals, Head Start leadership, community college representatives, educational champions (those outside of the field), and EPP faculty. Members are invited to participate in Educational Forums convened by the Director of the SoEC each semester. The primary purpose of this group’s work is to build relationships and extend partnerships, to identify additional community needs through a different lens, and to provide input and feedback on processes undertaken as part of the EPP’s continuous improvement cycle (EPP County Forum Meetings).