STEM Central

A Community of Practice for NSF STEM Projects

Breakout Session II-08 Evaluation 101: How to Construct and Strategize for Your Evaluation Plans

Presenter: Mack Shelley, University Professor of Statistics, Political Science, and Educational Leadership and Policy Studies, Iowa State University

Moderator: Myles Boylan, NSF 

Summary:

This workshop will introduce the basics of setting up an evaluation plan and show how to make strategic decisions about how best to conduct your evaluation so that it yields the most informative results.

We will explore how to construct goals and objectives for successful program and project evaluation, with a focus on linking empirical quantitative measurements and qualitative data to determine whether the goals and objectives have been attained. Logic models will be discussed as a useful mechanism for conceptualizing goals and objectives and for establishing the framework of a successful evaluation strategy. Human subjects issues will also be discussed. As a result of participating in this workshop, you will be better able to design and implement an evaluation, communicate its findings, and critically assess the quality of evaluation reports and of the advice you may receive about how to conduct your own evaluation.

Recommended Resources:

Emison, G.A. (2007). Practical program evaluations: Getting from ideas to outcomes. Washington, DC: CQ Press.

The Research Methods Knowledge Base, an online text published by Atomic Dog.

An online training course in the application of logic models, developed by the University of Wisconsin Extension, is available at:

A schematic of the logic model is available at: http://www.csrees.usda.gov/nea/food/fsne/pdfs/full_logic_model_2006.pdf

Relevant information about the treatment of human subjects is provided in the Code of Federal Regulations (45 CFR 46) governing human subjects research, and in the related Belmont Report and Nuremberg Code, which are available on the Web at: