What to know
By using the information in this chapter, you will be able to plan, conduct, and integrate activities to evaluate your program’s EHDI-IS. The findings from the evaluation can be used to ensure that your EHDI-IS serves as an effective tool in identifying and providing services to deaf and hard of hearing (D/HH) infants and children.

Chapter objectives
This chapter will help you to
- Understand the importance of an EHDI-IS evaluation;
- Identify the key steps for developing and implementing evaluation plans; and
- Understand how to report findings from an EHDI-IS evaluation.
The practical use of program evaluation among EHDI programs
The goal of this chapter is to help strengthen understanding of how staff can use evaluation to monitor and assess their program's EHDI Information System (EHDI-IS) and integrate this into routine program practice. This chapter will illustrate how to plan evaluation activities, regardless of the type and stage of EHDI-IS development. It is meant to serve as a resource to assist those who are responsible for conducting the evaluation and any partners who have an interest in the evaluation results. This chapter is based on CDC's Framework for Program Evaluation and the Updated Guidelines for Evaluating Public Health Surveillance Systems.
Why evaluate your EHDI-IS?
Evaluation is essential for improving public health programs and providing accountability to policy makers and other partners. Comprehensive guidelines for evaluating public health surveillance systems were published in CDC's Morbidity and Mortality Weekly Report (MMWR) on July 27, 2001. These guidelines were developed to promote the best use of public health resources through the development of efficient and effective public health surveillance systems.
Integrating evaluation into the routine activities for the EHDI-IS can help ensure that
- The data collected and documented are of high quality and describe the true characteristics of the newborn's hearing screening, diagnostic, and early intervention status. This results in data that are accurate, complete, consistent, timely, unique, and valid.
- Each jurisdictional EHDI-IS has the acceptability, flexibility, simplicity, and stability needed to collect data, and the system can be operated by both users and reporters.
- Each jurisdiction has a useful EHDI-IS in place to support: (a) tracking of infants and young children throughout the EHDI process; and (b) connecting D/HH infants and young children with services they need.
- Resources to support and maintain EHDI-IS are being used effectively.
Evaluation
Program evaluation is "the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future program development."
Evaluation can take many different directions, but it is always responsive to the program's needs, partners, and resources.
Purpose of the evaluation
Through evaluation, team members can identify what is working well or poorly, determine whether objectives are being achieved, and provide evidence for recommendations about what can be changed to help the system better meet its intended goals. Using the evaluation results and sharing lessons learned will help EHDI programs build on their successes.
It is a common misconception that evaluation is something that you plan and conduct at the end of a project. Evaluations are designed to do one of two things or a combination of both: 1) improve aspects of your system and/or 2) prove or show that the system is reaching its intended outcomes. Establishing a clear purpose from the beginning will reduce misunderstandings and facilitate consideration of how evaluation findings will be used.
Often an evaluation is conducted because a funding agency requires it or because program staff and jurisdictional partners are interested in the results. In either case, the evaluation purpose statement should address the following questions:
- What does this evaluation strive to achieve?
- What is the purpose of this evaluation?
- How will findings from the evaluation be used?
Monitoring versus evaluation
Program monitoring is part of the evaluation process and provides a core data source on which to build a program evaluation. While monitoring tracks implementation progress, evaluation examines the factors that contribute to success or failure in order to understand why a program may or may not be working. Evaluation activities build on the data routinely collected in ongoing monitoring activities.
We can define the monitoring process as the observation and recording of multiple activities within the system to help staff identify problems with program operation. EHDI program staff can implement monitoring processes using their EHDI-IS as an "early warning system" to identify and address data issues at several steps (i.e., hearing screening, diagnostic assessment, and enrollment in early intervention) in meeting the EHDI 1-3-6 benchmarks.
The number and frequency of monitoring activities are decided by the jurisdiction and depend on the area being assessed. For example, monitoring processes that assess the completeness of the data submitted by hospitals may be performed on a daily or weekly basis, while processes that identify duplicate entries, data errors, or missing data may be performed on a monthly basis. Each EHDI-IS can be queried to provide ad hoc reports, which are useful when developing monitoring processes, and vice versa.
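To make this concrete, here is a minimal sketch of what a routine monitoring query might look like, assuming the EHDI-IS can export screening records as a tabular file. The column names (child_id, screen_result, hospital_id) are hypothetical placeholders, not fields from any particular EHDI-IS.

```python
# A minimal monitoring sketch: flag duplicate entries and missing screening
# results in a periodic export. Column names are hypothetical placeholders.
import pandas as pd

def monitor_screening_records(records: pd.DataFrame) -> dict:
    """Summarize duplicate IDs and missing screening results in one export."""
    duplicates = records[records.duplicated(subset=["child_id"], keep=False)]
    missing_result = records[records["screen_result"].isna()]
    return {
        "total_records": len(records),
        "duplicate_child_ids": duplicates["child_id"].unique().tolist(),
        "records_missing_screen_result": len(missing_result),
    }

if __name__ == "__main__":
    sample = pd.DataFrame({
        "child_id": ["A1", "A2", "A2", "A3"],
        "screen_result": ["pass", "refer", "refer", None],
        "hospital_id": ["H01", "H01", "H01", "H02"],
    })
    print(monitor_screening_records(sample))
```

A check like this could run on whatever schedule the jurisdiction chooses, with its output reviewed as part of the "early warning system" described above.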
Planning for your evaluation: Where to start?
An example of how to plan and conduct an evaluation of your EHDI-IS can be found in the document Planning the Evaluation of your EHDI-IS [PDF – 518KB]. In this chapter, evaluation of an EHDI-IS is divided into six steps:
- Assess the context
- Describe the program
- Focus the evaluation question and design
- Gather credible evidence
- Generate and support conclusions
- Act on findings

1. Assess the context
Before starting your EHDI-IS evaluation, consider conducting an evaluability assessment. An evaluability assessment is a tool for examining evaluation readiness. Various factors can be examined through an evaluability assessment, including the extent to which the goals of the EHDI-IS are attainable through the proposed activities, the availability of appropriate resources to support the evaluation (e.g., funding, staff capacity, and data), and the level of interest in the evaluation. Addressing the factors examined in an evaluability assessment increases your readiness to conduct an evaluation that produces relevant, useful, and rigorous findings.
Interest holders
Interest holders, also known as partners, are those who have an interest in the evaluation and may use its results in some way (e.g., those involved in the evaluation, those affected by the evaluation, and primary intended users of the evaluation). Jurisdictional EHDI program staff and evaluators will work with interest holders or partners to produce evaluations that best fit their EHDI-IS and answer the evaluation questions with credible evidence. Examples of interest holders or partners include hospitals, audiologists, medical homes, early intervention programs, and state health departments. Interest holders or partners are extremely valuable throughout the evaluation process and should be engaged early and often, especially in the planning stages of the evaluation.
Place
It is important to understand the social and historical context when conducting your EHDI-IS evaluation. Aspects of the context that are important to understand include but are not limited to:
- Program features: These features include how your EHDI-IS operates, who is involved in managing your EHDI-IS, who has the authority for decision-making, and what might influence the decision-making process.
- Program environment: It is important to understand the current and historical features of the environment in which your EHDI-IS operates. For example, it is important to understand the people who might be engaged in the evaluation as well as the power dynamics among people who interact with or influence your EHDI-IS.
Evaluation Capacity
It is helpful to understand your program's existing capacity to conduct an EHDI-IS evaluation because it can help identify the strengths people bring to support planning and implementation. Evaluation capacity can be examined at the organizational level by assessing what resources (e.g., funds, staff members, volunteers, time, technology, and data) are available to support planning and implementation. Organizational evaluation capacity can also be examined by understanding the interest holders or partners (e.g., hospitals, audiologists, early intervention programs) that are available, willing, and able to support the evaluation. Interest holders or partners who help plan and implement the evaluation, as well as use the findings, might have varying assumptions, beliefs, and experiences with evaluation. Discussing how interest holders or partners understand evaluation can lead to clarity about your EHDI-IS evaluation process.
2. Describe your EHDI-IS
The second step in CDC's evaluation framework is to describe your EHDI-IS, which is the foundation for all subsequent steps in the evaluation. This important step ensures that program staff, the evaluator, and other partners share a clear understanding of what the EHDI-IS entails and how the system is supposed to work.
You have the opportunity to describe
- The need for the EHDI-IS;
- The stage of development;
- The outcomes your EHDI-IS intends to achieve;
- The key activities that are expected to lead to those outcomes;
- The resources used to operate it; and
- The social and political context in which the system is implemented.
Once all the components of the system have been identified, a graphic representation (e.g., logic model) may help summarize the relationships among those components. The aim is to produce a concise EHDI-IS description to better understand your program’s system.
Describing your information system using logic models
A logic model is a simplified graphic representation of your system. Logic models are common tools that EHDI staff can use for planning, implementation, evaluation, and communication. Developing a logic model collaboratively improves clarity and agreement among program staff and partners on the main activities and intended EHDI-IS outcomes. A logic model includes four components: inputs, activities, outputs, and outcomes.
Inputs
Inputs are the resources that go into the system; they help determine the number and type of activities that can reasonably be implemented during the project. Inputs include the people (usually from both inside and outside the program), budget, infrastructure, and information needed. During the planning stage, the list of resources is essential for deciding the type of activities the program will be able to implement.
Activities
Activities are the actual events or actions that take place as a part of the EHDI-IS, such as:
- Maintaining data systems in accordance with the EHDI-IS Functional Standards;
- Enhancing systems and advancing efforts on data linkage and integration;
- Providing technical assistance to data reporting sources;
- Collecting data from data sources;
- Analyzing data;
- Promoting and supporting coordination around tracking and surveillance activities;
- Supporting targeted dissemination of surveillance data; and
- Developing and conducting evaluation activities.
Outputs
Outputs are the direct products of the program components' activities, such as:
- Number of data dissemination products (e.g., data reports, dashboards, websites);
- Protocols in place for data reporting;
- Number of trainings implemented for data reporting sources;
- Number of meetings with partners; and
- Number of data sharing agreements in place.
Outcomes
Outcomes are the intended effects of the program's activities. They are the changes you want to occur or things you want to maintain in your surveillance system. These changes can be expressed as short-, mid-, and long-term outcomes. All outcomes must indicate the direction of the desired change (i.e., increase, decrease, or maintain).
Short-term outcomes
Short-term outcomes are the immediate effects of your program component, such as:
- Increased skills among data reporters after the implementation of training;
- Reduced numbers of errors after examining data queries; and
- Increased reporting of timely data by partners at each phase of EHDI 1-3-6.
Mid-term outcomes
These are the intended effects of your program components that take longer to occur. Examples include:
- Decreased data collection and reporting barriers;
- Improved timeliness in the collection of screening data after the implementation of a new web data reporting system;
- Improved electronic exchange of data with Vital Records after data sharing agreements are in place;
- Increased collaborations and reporting by providers after meetings and trainings;
- Improved program planning after evaluation results are disseminated; and
- Increased knowledge among decision makers after dissemination of key information.
Long-term outcomes
Long-term outcomes are the intended effects of your program components that may take several years to achieve. Examples include:
- Improved surveillance of infants and young children throughout the EHDI process;
- Increased utilization and dissemination of EHDI data among EHDI programs and partners for tracking and to inform decision making;
- Data driven programmatic changes to help D/HH children reach their age-appropriate milestones and improve EHDI 1-3-6 benchmarks;
- Improved documentation of early intervention services outcomes;
- Development and implementation of revised or new policies; and
- Implementation of a useful EHDI-IS that conforms to CDC EHDI Functional Standards and serves as a tool to help programs ensure all D/HH infants are identified early and can receive intervention services.
After you have decided on the various components of your logic model, arrange them in a way that reflects how your program operates. Examine the model carefully. Does each step logically relate to the next? Are there missing steps that disrupt the logic of the model? It is important to remember that logic models are living documents, which can change over time with improvements to the system, changes in resources, or modifications made to the program.
Overarching EHDI-IS Logic Model
3. Focus your evaluation question and design
After completing steps one and two, you and your partners should have a clear understanding of your EHDI-IS. The evaluation team then decides the evaluation purpose, list of intended users and use of evaluation findings, type of evaluation (e.g., process evaluation or outcome evaluation), list of evaluation questions, and evaluation design.
Evaluation purpose
Clearly describing an evaluation purpose can help you decide how the EHDI-IS evaluation will be conducted and maintain the intended scope of the evaluation efforts. There may be many potential purposes for conducting an evaluation. Therefore, it is important to gain clarity with your partners about the highest priority purpose(s).
Intended users and uses of evaluation
The evaluation team may consider asking the following questions before beginning the evaluation:
- Who will use the evaluation findings?
- How will the findings be used?
- How feasible is this evaluation?
- Can it be done with available resources and within the available timeframe?
Type of evaluation
There are two basic types of evaluation: 1) process evaluation and 2) outcome evaluation. Process evaluation focuses on examining the implementation of the system, determining whether activities are being implemented as planned, and whether the inputs and resources are being used effectively. Outcome evaluation focuses on showing whether or not a program component is achieving the desired changes.
Examples of process evaluation questions include:
- To what extent are all data reporters willing to participate and report data in a timely manner to EHDI-IS?
- In what ways may the existing information technology (IT) infrastructure be improved for better data collection and management?
- To what extent are hospitals' screening staff complying with the established protocols?
- What additional trainings are needed?
- Did the training meet the attendees' needs?
- What are the most important causes of loss to follow-up in the state?
Examples of outcome evaluation questions include:
- Is the number of active users increasing over time as more facilities are trained?
- Did the disseminated surveillance data lead to strategic actions that benefitted the program partners?
- Did the disseminated surveillance data lead to strategic actions that benefitted the data users connecting D/HH babies with services they need?
When we examine our programs, the perception that each component and process is necessary for the larger whole may make it difficult to decide what to evaluate. However, resources are rarely available to address all aspects of a program. Therefore, it is necessary to establish priorities to make final decisions about what specific evaluation questions the evaluation team will answer and how.
Evaluation questions for assessing system attributes
What attributes should be considered when evaluating the EHDI-IS?
The evaluation of public health surveillance systems involves an assessment of system attributes. As noted in the logic model above, the most important attributes for the EHDI-IS are data quality, timeliness, acceptability, simplicity, flexibility, stability, and usefulness.
The list below presents the data quality attributes (dimensions) that state EHDI programs can choose to adopt when assessing the quality of the data in the EHDI-IS. It is not a rigid list; how the dimensions are used will vary depending on the requirements of individual EHDI jurisdictions.
- Accuracy: the extent to which the data are correct, reliable, and certified free of error.
- Completeness: the proportion of stored data against the potential of "100% complete".
- Consistency: the absence of difference when comparing two or more representations of a thing against a definition.
- Uniqueness: nothing is recorded more than once, based upon how that thing is identified.
- Validity: data are valid if they conform to the syntax (format, type, range) of their definition.
A complete description of the dimensions of EHDI data quality assessment can be found in the document titled The Six Dimensions of EHDI Data Quality Assessment.
When evaluating data quality, consider including the following evaluation questions:
- Is the EHDI-IS able to document complete, accurate, unique, consistent, and valid screening and diagnostic data on all occurrent births in the state?
- Which factors are affecting the quality of the data?
- Is the system able to identify and monitor system errors?
- What changes in the system need to be implemented?
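To illustrate how some of these dimensions might be quantified, here is a minimal sketch under the assumption that EHDI-IS records can be exported as a table. The field names and the date format are hypothetical, and the validity check is syntax-only (format, type, range), matching the definition above.

```python
# A minimal sketch quantifying completeness, uniqueness, and validity for
# an exported batch of records. Field names and the assumed YYYY-MM-DD
# date syntax are hypothetical placeholders.
import pandas as pd

def data_quality_summary(records: pd.DataFrame) -> dict:
    """Return crude percentage summaries for three data quality dimensions."""
    completeness = records["screen_result"].notna().mean()      # share non-missing
    uniqueness = records["child_id"].nunique() / len(records)   # share of unique IDs
    validity = (
        records["birth_date"].fillna("").str.match(r"^\d{4}-\d{2}-\d{2}$").mean()
    )  # share matching the assumed date syntax
    return {
        "completeness": round(float(completeness), 3),
        "uniqueness": round(float(uniqueness), 3),
        "validity": round(float(validity), 3),
    }

sample = pd.DataFrame({
    "child_id": ["A1", "A2", "A2"],
    "screen_result": ["pass", None, "refer"],
    "birth_date": ["2024-01-05", "01/05/2024", None],
})
print(data_quality_summary(sample))
```

Summaries like these can feed directly into the evaluation questions above, for example by tracking whether completeness improves after a reporting intervention.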
The following provides a list of attributes and example evaluation questions that jurisdictional EHDI programs can choose to adopt when assessing their EHDI-IS.
Acceptability
The willingness of persons and organizations to participate in the EHDI information system. Example evaluation questions:
- To what extent are hospital screening staff complying with the established protocols?
  - What additional trainings are needed?
  - Did the training meet the attendees' needs?
- To what extent do audiologists in the state know about the EHDI program and use the EHDI-IS?
- How could the EHDI program better educate audiologists about using the EHDI-IS?
- What barriers prevent audiologists from reporting diagnostic data?
- How could the EHDI program increase awareness among audiologists about the importance of documenting and communicating results?
- What additional partners should the EHDI program engage to improve documentation of diagnostic and early intervention data?
Flexibility
The EHDI-IS can adapt to changing information needs or operating conditions with little additional time, personnel, or funding. Example evaluation questions:
- How flexible is the EHDI-IS in electronically exchanging data with other data systems?
- Does the EHDI-IS have the ability to shift from hearing loss tracking to surveillance (explained below)?
- Can your system meet changing detection needs?
  - Can it add unique data?
  - Can it capture other relevant data?
  - Can it add providers or users to increase capacity?
Simplicity
Refers to both the system's structure and its ease of operation. An EHDI-IS should be as simple as possible while still meeting its objectives. Example evaluation questions:
- How easy is it to use the EHDI-IS for managing data and generating reports?
- Can you generate new reports with minimal effort (e.g., without having to submit a request to a vendor or your IT department)?
- What downtime is needed for servicing/updating?
Stability
Reliability (i.e., the ability to collect, manage, and provide data properly without failure) and availability (i.e., being operational when needed). Example evaluation questions:
- Is the EHDI-IS consistently operating?
- What is the frequency of outages?
Usefulness
The EHDI-IS contributes to the early identification of hearing loss and to connecting deaf and hard of hearing babies with desired services; provides understanding of the implications of hearing loss; and contributes performance measures for accountability. Example evaluation questions:
- Did the collected EHDI data lead to strategic actions that benefitted the program or partners?
- Did the EHDI-IS contribute to the early identification of deaf and hard of hearing infants and connecting them with intervention services?
- Does the EHDI-IS support policy development, program planning, and delivery of services?
There are many methods, strategies, and tools that can help with prioritizing evaluation questions. Regardless of the method selected, utility and feasibility are two important evaluation standards to keep in mind.
Evaluation design
Now that you have selected the evaluation questions to answer and you have a clear evaluation purpose, it is time to think about the evaluation design. The evaluation design depends on the purpose of the evaluation. A design (indicators, data collection methods, data collection sources) is used to structure the evaluation and show how all of the major parts of the evaluation project work together to address the evaluation questions.
There are three overarching types of evaluation designs: 1) experimental, 2) quasi-experimental, and 3) non-experimental. For many surveillance evaluations, you will find that a simple, non-experimental design is appropriate. However, other evaluation designs may be used depending on the question you intend to answer. In this chapter, we emphasize non-experimental designs and some quasi-experimental designs.
Non-experimental designs, also known as observational or descriptive designs
Non-experimental designs include case study and post-test only designs. In these designs, there is no randomization of participants to conditions, no comparison group, and no repeated measurement of the same factors over time. For example:
- Evaluation teams may use surveys to assess the level of partners' satisfaction with the data and data dissemination.
- Evaluation teams may interview audiologists to assess whether the electronic reporting process or the paper reporting form is user friendly.
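As a concrete illustration of the first example, here is a minimal sketch of a post-test only analysis: tallying responses to a single partner-satisfaction survey item. The responses are fabricated solely to illustrate the computation.

```python
# A minimal post-test only (non-experimental) sketch: tally responses to one
# satisfaction item from a partner survey. Responses are fabricated examples.
from collections import Counter

responses = ["satisfied", "very satisfied", "satisfied",
             "neutral", "dissatisfied", "satisfied"]

counts = Counter(responses)
for level, n in counts.most_common():
    print(f"{level}: {n} ({100 * n / len(responses):.0f}%)")
```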
Quasi-experimental design
Quasi-experimental designs are characterized by the use of one or both of the following: 1) the collection of the same data at multiple points in time, or 2) the use of a comparison group. Many designs fall under this heading, including but not limited to:
- Pre-post tests without a comparison group;
- A nonequivalent comparison group design with a pre-post test or post-test only;
- Interrupted time series; and
- Regression discontinuity.
For example:
- Evaluation teams might use data reports to determine if the number of audiologists reporting EHDI data is increasing over time after the implementation of training (interrupted time series before and after training).
- Evaluation teams may use pre- and post-tests to assess how effective the training was in increasing knowledge of the data reporting process.
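For the interrupted time-series example above, a minimal sketch of the underlying comparison might look like the following. The monthly counts are fabricated for illustration; a real evaluation would draw them from EHDI-IS data reports and would typically examine trend and level changes, not just means.

```python
# A minimal before/after sketch for the interrupted time-series example:
# compare mean monthly counts of reporting audiologists before and after a
# training. The counts below are fabricated for illustration only.
from statistics import mean

months_before = [12, 14, 13, 15, 14, 16]  # reporters per month, pre-training
months_after = [18, 19, 21, 20, 22, 23]   # reporters per month, post-training

change = mean(months_after) - mean(months_before)
print(f"Mean monthly reporters before: {mean(months_before):.1f}")
print(f"Mean monthly reporters after:  {mean(months_after):.1f}")
print(f"Average change after training: {change:+.1f}")
```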
4. Gather credible evidence
After choosing your evaluation design, you will need to determine the evidence needed to answer your evaluation questions. The information you gather in your evaluation must be reliable and credible for those who will use the evaluation findings. In this step, you will work with your partners to identify the data collection methods and sources you will use to answer your evaluation questions.
Work with your partners to specify what constitutes credible evidence, including:
- Quantity: the amount of information that is sufficient;
- Quality: whether the information is trustworthy (i.e., reliable, valid, and informative for its intended uses); and
- Context: what information the partners consider valid and reliable.
Indicators
Indicators are specific, observable, and measurable characteristics or changes that show the program's progress toward achieving an objective or specified outcome. Indicators are tied to the objectives identified in the program's description, the logic model, and/or the evaluation questions. The indicator must be clear and specific about what it will measure.

The evaluation staff must decide which data collection, management, and analysis strategies are most appropriate for each indicator and whether needed technical assistance for implementation is available and affordable. Different methods can be used to analyze quantitative or qualitative data. For example, data for a short survey can be manually analyzed or you can use various free online tools. When you are collecting a large amount of quantitative or qualitative data, you will likely need to use specialized software tools to support the analysis and management of this information.
When conducting outcome evaluations, there are some common standard indicators for EHDI programs (e.g., loss to follow-up/loss to documentation for diagnosis). Consider including some of them to show progress toward achieving specific outcomes.
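As an illustration, here is a minimal sketch of one way the loss to follow-up/loss to documentation indicator mentioned above might be computed. Definitions vary by jurisdiction; this version simply treats infants who did not pass screening and have no documented diagnostic outcome as lost, and the field names are hypothetical.

```python
# A minimal sketch of a loss to follow-up / loss to documentation indicator.
# Definitions vary by jurisdiction; field names are hypothetical placeholders.

def loss_to_follow_up_rate(infants: list[dict]) -> float:
    """Percent of infants referred from screening with no documented diagnosis."""
    referred = [i for i in infants if i["screen_result"] == "refer"]
    if not referred:
        return 0.0
    lost = [i for i in referred if i.get("diagnostic_outcome") is None]
    return 100 * len(lost) / len(referred)

infants = [
    {"screen_result": "refer", "diagnostic_outcome": "normal hearing"},
    {"screen_result": "refer", "diagnostic_outcome": None},
    {"screen_result": "pass", "diagnostic_outcome": None},
]
print(f"Loss to follow-up: {loss_to_follow_up_rate(infants):.1f}%")  # 50.0%
```

Tracking an indicator like this over time provides a direct measure of progress toward the follow-up outcomes identified in the logic model.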
5. Generate and support conclusions
After analyzing your data, the next step is to examine your results and determine what the evaluation findings "say" about your EHDI-IS. This involves linking all the findings to the evaluation questions and telling your program's story. Keep your audience in mind when preparing the report. What do they need and want to know? Also, keep in mind that the findings are the basis for developing recommendations for program improvement.
Recommended elements for the evaluation report include:
- Comparison of actual with intended outcomes;
- Comparison of program outcomes with those of previous years by using existing standards as a starting point for comparisons; and
- Limitations of the evaluation, such as:
  - Possible biases;
  - Validity of results;
  - Reliability of results; and
  - Generalizability of results.
Although a final evaluation report is important, it is not the only way to distribute findings. Depending on your audience and budget, you can consider different ways of delivering evaluation findings. Additional information on reporting evaluation findings can be found in the document titled Evaluation Reporting: A Guide to Help Ensure Use of Evaluation Findings [PDF – 550KB].
6. Act on findings
Always ask the question "so what?" at the end of the evaluation. This is important because the evaluation results can help improve aspects of your EHDI-IS, strengthen current activities, and/or change elements that may not be working.
It is important to make sure the findings from your EHDI-IS evaluation are used and disseminated appropriately. Strategies for communicating the evaluation findings, recommendations, and lessons learned should be tailored based on your partners. For example, a report may be more appropriate for hospitals while a presentation may be more appropriate for early intervention programs.