A Quick Guide to Monitoring and Evaluation Techniques in 2021
How to get up to speed with M&E methods fast
Introduction
I recently took an excellent online course aimed at professionals in the monitoring and evaluation space. As a data analyst who has worked with a lot of data, I found it very enlightening. I will do my best to outline the key points of the course, though I urge you to take it yourself. Why? It is free, and it incorporates real-life case studies that also happen to be interactive.
First, let us go through the definitions.
What is monitoring? Monitoring involves the continuous collection of data in a project, together with the analysis and reporting of the information gleaned from that data.
Evaluation is a periodic or one-off assessment carried out to ascertain project progress.
Accountability is the plan implemented to ensure stakeholder needs are balanced and their participation in the project lifecycle is guaranteed.
Learning is simply the set of steps taken to improve on past results and to draw lessons from the particular project being undertaken.
Monitoring, evaluation, accountability, and learning (MEAL) is broken down into five phases: design, planning, data collection, data analysis, and data/information use. As these steps are self-explanatory, I’ll focus on the tools used to plan and implement these processes.
Logic Model Design
Logic model design is the framework used to determine the clear-cut items that will form the basis for allocating M&E resources in a project. Many organizations have, over the years, refined their own logic model designs and, if they are funding the project, insist that these logic models be used for M&E.
There are three frameworks in the logic model, each building on the previous one.
The Theory of Change is usually the first framework to be created. It incorporates general statements with a flow of logic. The general format starts with the goal to be achieved, then works back to the requirements and smaller objectives needed to achieve it. An example is shown below.
In analytics, this structure is known as a directed acyclic graph (DAG). The TOC is often the precursor to the results framework model.
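To make the DAG idea concrete, here is a minimal sketch in Python. The intervention names are hypothetical, purely for illustration: each result maps to the results it enables, and a depth-first check confirms the logic flows forward with no loops.

```python
# A hypothetical Theory of Change expressed as a directed acyclic graph:
# each result points to the results it enables, ending at the goal.
toc = {
    "train_farmers": ["improved_practices"],
    "supply_seeds": ["improved_practices"],
    "improved_practices": ["higher_yields"],
    "higher_yields": ["increased_income"],
    "increased_income": [],  # the end goal enables nothing further
}

def is_acyclic(graph):
    """Depth-first check that the logic flows forward with no loops."""
    WHITE, GREY, BLACK = 0, 1, 2  # unvisited, in progress, done
    state = {node: WHITE for node in graph}

    def visit(node):
        if state[node] == GREY:   # a back edge means a cycle
            return False
        if state[node] == BLACK:  # already verified
            return True
        state[node] = GREY
        ok = all(visit(successor) for successor in graph[node])
        state[node] = BLACK
        return ok

    return all(visit(node) for node in graph)

print(is_acyclic(toc))  # a valid TOC should print True
```

A cycle in the diagram (a result that indirectly requires itself) would signal a flaw in the underlying logic, which is exactly why the acyclic property matters.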
The results framework model incorporates more specific workflows as compared to the TOC. It breaks down its steps into a goal, a strategic objective, intermediate results, and outputs. This example from USAID is simple and clearly illustrated.
The RF gives way to the logical framework or logframe. This is usually the final document used for planning purposes. It succinctly maps the progression from the initial activities undertaken, to the end goal. It is in a table format. Different organizations have different methods of implementing logframe particulars.
The snapshot below from tools4dev gives an excellent example of how logframes are implemented. Notice the left-to-right, bottom-up approach to preparing the logframe.
A competent MEAL manager must have a knack for compiling effective indicators for the project. Luckily, there are plenty of examples and case studies out there detailing how project managers came up with their specific indicators.
The logical framework then leads to the creation of what is, in my opinion, the most important document in the M&E arsenal: the performance management plan (PMP), also known as the monitoring and evaluation plan. It answers the following questions:
· How are the indicators defined?
· Who is responsible for MEAL activities?
· When will MEAL activities take place?
· How will data be analyzed?
· How will data be used?
· Will assumptions be monitored, and if so, how?
It comes in several formats depending on the implementing organization, but this simple format from the Kaya MEAL Dpro course really captures the spirit of the document, in my opinion.
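One row of a PMP can be sketched as a simple record answering the questions above. This is a hedged illustration, not any organization's template: the field names and the indicator itself are hypothetical.

```python
# A hypothetical row of a performance management plan, one per indicator.
indicator_plan = {
    "indicator": "% of trained farmers adopting improved practices",
    "definition": "adopters / farmers trained, measured at endline",
    "responsible": "MEAL officer",
    "frequency": "quarterly",
    "data_source": "household survey",
    "analysis": "descriptive statistics, disaggregated by district",
    "use": "quarterly review meetings with the project team",
}

for field, value in indicator_plan.items():
    print(f"{field:12} {value}")
```

In practice, such rows live in a spreadsheet or M&E system rather than code, but the structure is the same: every indicator carries its definition, owner, schedule, source, analysis method, and intended use.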
The other important documents are:
1. The summary evaluation table details the evaluation terms of reference and contains details about the evaluation process: type, budget, questions to be asked, logistics, and use of data.
2. The data flow map shows how data flows from primary and secondary sources to reports, the PMP, TOR, and other documents.
Once the above measures are taken and the documentation prepared, data collection and analysis becomes easier.
Data Collection
Data collection planning should be thorough so that the process yields high-quality information. The questionnaire should be simple and easy for interviewees to understand, include all plausible options in the answer section, use skip logic for questions that do not apply, and capture the necessary metadata, for instance geographical location and demographics. This assumes the field officers undertaking the process have been well trained and understand the system at hand.
After the survey design is completed, the questionnaires should be tested repeatedly to ensure the final product is as bias-free as possible.
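Skip logic can be sketched in a few lines of Python. The questions and condition format here are hypothetical, just to show the mechanism: a follow-up question is only asked when a prior answer meets its condition.

```python
# A hypothetical questionnaire: each question may carry an "ask_if"
# condition referencing a prior question's answer.
questions = [
    {"id": "owns_phone", "text": "Do you own a mobile phone?", "ask_if": None},
    {"id": "phone_type", "text": "Is it a smartphone?",
     "ask_if": ("owns_phone", "yes")},
    {"id": "district", "text": "Which district do you live in?", "ask_if": None},
]

def administer(questions, answers):
    """Return the ids of questions to ask, given the answers so far."""
    asked = []
    for q in questions:
        condition = q["ask_if"]
        if condition is None or answers.get(condition[0]) == condition[1]:
            asked.append(q["id"])
    return asked

# The smartphone follow-up is skipped for respondents without a phone.
print(administer(questions, {"owns_phone": "no"}))
```

Survey platforms implement this same idea declaratively, which is why testing the skip paths repeatedly, as described above, matters: a broken condition silently drops data.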
Quantitative data is collected when the M&E team needs to quantify behavior or attitudes, generate data that is transformed into usable statistics, and, via proper survey design, make inferences about a certain population. It requires the questions asked to be closed-ended.
Qualitative data is collected when researchers want to uncover underlying thought processes in a population. Its questionnaires have open-ended questions and are divided into content mapping and content mining questions.
This piece about the prerequisites to statistical experimental design is an in-depth explanation of the considerations to be taken when planning the data collection.
After data is collected, it should be uploaded to the relevant databases for storage ahead of analysis. This process has been more or less automated as organizations adopt cloud-based storage and analysis solutions. To protect their stakeholders’ privacy, organizations should be keen to dispose of data that is no longer needed.
Data Analysis
Data analysis is my forte, so I approach it with a lot of passion and experience. There are two ways to perform the analysis: quantitative and qualitative.
Quantitative analysis can be descriptive, seeking only summary statistics about the sample taken (mean, median, and mode), or inferential, where the purpose is to generalize statistics from the sample to the population (hypothesis testing). Inferential analysis is the most common and the most informative, but it has to be done rigorously to avoid errors.
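The descriptive side is the simpler of the two and can be shown with Python's standard library alone. The scores below are hypothetical endline survey results, used only to illustrate the three summary statistics mentioned above.

```python
import statistics

# Hypothetical endline scores from a sampled group of participants.
scores = [62, 71, 71, 68, 75, 80, 71, 66]

# Descriptive analysis summarises the sample itself.
print(statistics.mean(scores))    # 70.5
print(statistics.median(scores))  # 71.0
print(statistics.mode(scores))    # 71
```

Inferential analysis goes further, asking whether a difference observed in such a sample (say, between treatment and control groups) is likely to hold in the wider population, which is where hypothesis tests come in.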
I have explained the considerations behind hypothesis testing here, with detailed points regarding choosing the sample size, confidence intervals and margin of error, power analysis, and the kind of test to be performed.
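One concrete piece of that planning, choosing a sample size for estimating a proportion, follows a standard formula (Cochran's): n = z²·p(1−p)/e², where z is the z-score for the desired confidence level, p the expected proportion (0.5 is the conservative choice), and e the margin of error. A short sketch:

```python
import math

def sample_size(z=1.96, p=0.5, margin=0.05):
    """Cochran's sample-size formula for estimating a proportion:
    n = z^2 * p * (1 - p) / e^2, rounded up to a whole respondent."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# 95% confidence (z = 1.96), conservative p = 0.5, +/-5% margin of error.
print(sample_size())  # 385
```

For small populations a finite-population correction shrinks this figure, and power analysis (for detecting a given effect size between groups) is a separate calculation; both are covered in the considerations referenced above.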
Qualitative analysis uses mostly notes, audio, and video to extract attitudes from the data. It is also referred to as content analysis. Using thematic analysis, codes are acquired from the data and used to confirm pre-written themes (deductive analysis) or obtain new themes altogether (inductive analysis).
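Deductive coding can be sketched as matching text against a pre-written codebook. Everything here is hypothetical (the themes, keywords, and interview snippets); real thematic analysis involves human judgment, and this only illustrates the counting step.

```python
from collections import Counter

# A hypothetical codebook: pre-written themes and keywords that signal them.
codebook = {
    "access": ["far", "distance", "transport"],
    "cost": ["expensive", "afford", "price"],
}

# Hypothetical interview snippets.
snippets = [
    "The clinic is too far and transport is hard to find",
    "Medicines are expensive, we cannot afford them",
    "It is a long distance to walk",
]

# Deductive coding: count each theme at most once per snippet.
counts = Counter()
for text in snippets:
    lowered = text.lower()
    for theme, keywords in codebook.items():
        if any(keyword in lowered for keyword in keywords):
            counts[theme] += 1

print(dict(counts))  # {'access': 2, 'cost': 1}
```

Inductive analysis reverses the direction: codes emerge from repeated reading of the material and are then grouped into new themes, rather than being checked against a list fixed in advance.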
Since causation is very difficult to confirm outside a controlled experiment, mixed methods, combining quantitative and qualitative analysis, are usually used to substantiate a hypothesis, or rather, to assert a stronger case for a certain initiative given the evidence. Using different analysis methods to confirm whether the research results hold is referred to as triangulation.
The information gleaned from the data should be presented, tailored to the stakeholder involved, in a manner that encourages action to be taken. Charts and visuals are very effective when used properly and with integrity, which avoids confusion and disappointment down the line.
Learning sessions should also be incorporated throughout the project lifecycle as different analyses come to the forefront. This is called adaptive management and is crucial for quicker success in subsequent projects.
Data Use
To ensure that participants’ privacy is protected (a requirement under regulations such as the GDPR), any data that compromises it should either be disposed of after the project is completed, or anonymized by deleting the aspects of the data that reveal identities.
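A minimal anonymization sketch, assuming records are stored as dicts: direct identifiers (the field names here are hypothetical) are dropped, and the record id is replaced with a salted hash so records can still be linked across datasets without revealing who they belong to.

```python
import hashlib

# Hypothetical field names treated as direct identifiers.
DIRECT_IDENTIFIERS = {"name", "phone", "gps"}
SALT = "project-secret-salt"  # hypothetical; keep out of source control

def anonymize(record):
    """Drop direct identifiers and pseudonymise the record id."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    clean["id"] = hashlib.sha256((SALT + record["id"]).encode()).hexdigest()[:12]
    return clean

row = {"id": "R001", "name": "Jane Doe", "phone": "0712 000 000",
       "district": "West"}
print(anonymize(row))  # identifiers removed, id pseudonymised
```

Note that pseudonymisation of this kind is weaker than full anonymization: combinations of remaining fields (district, age, occupation) can still re-identify people in small populations, so which fields count as identifying is a judgment call for each project.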
Conclusion
Monitoring and evaluation entail much more than is covered in this article; the reader should use this only as a guide or general checklist. Referring to company material or previous documentation is also important to get a sense of which methods worked in a previous project.
Since there are a lot of grey areas on the survey side of project research, I will be writing a more in-depth article on survey/questionnaire scripting and its intricate details.