Are you looking for an evaluation model to apply to an educational program? A great evaluation approach is Daniel Stufflebeam’s CIPP evaluation model (Fitzpatrick, Sanders, & Worthen, 2011; Mertens & Wilson, 2012; Stufflebeam, 2003; Zhang et al., 2011). In this decision-oriented approach, program evaluation is defined as the “systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future programming” (Patton, 1997, p. 23). The CIPP evaluation model (see Figure 1) is a framework for guiding evaluations of programs, projects, personnel, products, institutions, and evaluation systems (Stufflebeam, 2003).
Figure 1: Components of Stufflebeam’s (2003) CIPP Model.
Designed to assist administrators in making informed decisions, CIPP is a popular evaluation approach in educational settings (Fitzpatrick et al., 2011; Zhang et al., 2011). Developed in the late 1960s, the approach seeks to improve educational programming and achieve accountability through a “learning-by-doing” approach (Zhang et al., 2011). Its core concepts are context, input, process, and product evaluation, with the intention not to prove, but rather to improve, the program itself (Stufflebeam, 2003). An evaluation following the CIPP model may include a context, input, process, or product evaluation, or a combination of these elements (Stufflebeam, 2003).
The context evaluation stage of the CIPP model creates the big picture of where both the program and the evaluation fit (Mertens & Wilson, 2012). This stage assists in decision-making related to planning, and enables the evaluator to identify a community’s needs, assets, and resources in order to provide programming that will be beneficial (Fitzpatrick et al., 2011; Mertens & Wilson, 2012). Context evaluation also identifies the political climate that could influence the success of the program (Mertens & Wilson, 2012). To achieve this, the evaluator compiles and assesses background information, interviews program leaders and stakeholders, and identifies the key stakeholders in the evaluation. In addition, program goals are assessed, and data describing the program environment are collected. Data collection can take multiple forms, including both formative and summative measures such as environmental analysis of existing documents, program profiling, case study interviews, and stakeholder interviews (Mertens & Wilson, 2012). Throughout this process, continual dialogue with the client to provide updates is integral.
To complement context evaluation, an input evaluation can be completed. In this stage, information is collected about the program’s mission, goals, and plan. The purpose is to assess the program’s strategy, merit, and work plan against the research literature, the program’s responsiveness to client needs, and alternative strategies offered by similar programs (Mertens & Wilson, 2012). The intent of this stage is to choose an appropriate strategy for resolving the problem the program is meant to address (Fitzpatrick et al., 2011).
In addition to context evaluation and input evaluation, reviewing program quality is a key element of CIPP. Process evaluation investigates the quality of the program’s implementation. In this stage, program activities are monitored, documented, and assessed by the evaluator (Fitzpatrick et al., 2011; Mertens & Wilson, 2012). The primary objectives of this stage are to provide feedback on the extent to which planned activities are carried out, to guide staff on how to modify and improve the program plan, and to assess the degree to which participants can carry out their roles (Stufflebeam, 2003).
The final component of CIPP, product evaluation, assesses the positive and negative effects the program had on its target audience (Mertens & Wilson, 2012), covering both intended and unintended outcomes (Stufflebeam, 2003). Both short-term and long-term outcomes are judged. During this stage, the judgments of stakeholders and relevant experts are analyzed, examining outcomes that affect the group, subgroups, and individuals. Applying a combination of methodological techniques assures that all outcomes are noted and assists in verifying the evaluation findings (Mertens & Wilson, 2012; Stufflebeam, 2003).
This summary is the work of Christine Miller and me.
References:
Fitzpatrick, J., Sanders, J., & Worthen, B. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). New York, NY: Allyn & Bacon.
Mertens, D., & Wilson, A. (2012). Program evaluation theory and practice: A comprehensive guide. New York, NY: Guilford Press.
Patton, M. Q. (1997). Utilization-focused evaluation: The new century text (3rd ed.). London: Sage Publications.
Stufflebeam, D. (2003). The CIPP model for evaluation. In T. Kellaghan, D. Stufflebeam, & L. Wingate (Eds.), Springer international handbooks of education: International handbook of educational evaluation. Retrieved from http://www.credoreference.com.ezproxy.lib.ucalgary.ca/entry/spredev/the_cipp_model_for_evaluation
Zhang, G., Zeller, N., Griffith, R., Metcalf, D., Williams, J., Shea, C., & Misulis, K. (2011). Using the context, input, process, and product evaluation model (CIPP) as a comprehensive framework to guide the planning, implementation, and assessment of service-learning programs. Journal of Higher Education Outreach and Engagement, 15(4), 57–83.