****************************************************************
Monitoring and Evaluation of ICT in Education Projects
December 06, 2006
****************************************************************

The reading material is among the few we have seen in this course that offered a statistically sophisticated analysis of outcomes. Getting convincing numbers requires substantial resources: securing enough independent sites, ensuring random assignment of participants, and generally making sure that few (if any) confounding factors creep in.

The question is how much each factor contributes toward the goal of improved student learning. Those factors include improved teacher training, the presence of computing technology in the classroom, and the availability of software that is relevant to the content students are learning and in a language they understand. The cost of influencing each factor differs, sometimes significantly. This is another input to policy on where resources could be directed most effectively -- achieving maximum value at minimum cost (since cost constraints can be severe in the developing world).

A study involving ~3000 students at several dozen sites in rural India showed a ~10% gain for students in the treatment group (who were given access to computers and played math games for two hours a week in groups of two) over students in the control group. (A sketch of this kind of treatment-vs-control comparison appears below.)

Many studies on the use of computers in the developing world -- including this one -- have found a lack of local expertise for setting up the computers. Often, computers would be locked away in rooms to protect them (since they are so valuable), reducing student access and undercutting whatever benefit the technology might have had.

Gathering rigorous statistics is hard. The logistics of running the experiment (and gathering the data properly) are non-trivial. Typically, the experiment is part of a larger, externally funded program, and the evaluation is done over a short period of time (during which people from the evaluation team are present locally). Language and cultural translation are necessary. The costs can be significant.

Another challenge is developing tests of appropriate difficulty. Even a well-planned experiment may fail to yield usable results if the questions turn out to be too easy or too hard, which may force the experiment to be re-run. Tests may also need to be developed independently, since in some places standardized exams are inadequate: the exam setting may be inappropriate (or the expectations not what is desired), there may be corruption in the evaluation or its setup, and so on. (This contrasts with the methodology of using standardized physics tests in Western classrooms to evaluate teaching interventions.)

In comparison to the other studies we read about, this one seemed to involve more effort -- e.g., chasing down students who had missed a test and asking them to retake it, in order to keep the study's attrition rate low.

The recommendation is that 10% of a project's budget go toward monitoring and evaluation. Much of that cost would, of course, depend on the currency and on whether the experts involved come from the Western world or are local.
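As a concrete illustration of the treatment-vs-control comparison referenced above, here is a minimal sketch in Python. Everything in it is a hypothetical placeholder -- the scores are simulated, and the group sizes and distributions are invented for illustration, not taken from the study.

    # Minimal sketch of a treatment-vs-control comparison (Welch's t-test).
    # All data here are simulated placeholders, not the study's actual scores.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical post-test scores (0-100) for two groups of 1500 students each.
    control   = rng.normal(loc=50, scale=15, size=1500)
    treatment = rng.normal(loc=55, scale=15, size=1500)  # ~10% higher mean

    gain = treatment.mean() - control.mean()
    t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

    # Education studies usually report the effect in standard-deviation units
    # (Cohen's d, here with a pooled standard deviation).
    pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
    print(f"gain: {gain:.1f} points (d = {gain / pooled_sd:.2f}), p = {p_value:.2g}")

With thousands of students per arm, even a modest mean difference yields a tight estimate; with only a handful of sites or students, the same gain could easily be statistical noise -- which is why the scale of such studies matters so much.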
In the developed world, there have been a number of studies -- some with radically different outcomes (in terms of success or failure) -- about the effects of technology. Teasing apart what makes some succeed and others fail is tricky (since people have vested interests), but if we knew which patterns tend to lead to successful outcomes and which tend to have the opposite effect, those conclusions and recommendations might be transferable to the developing world. Often, computers are used to train students in particular applications (Word, Excel, etc.), in the hope that this will improve their job prospects in a knowledge economy.

In the end, it may be that teacher training is far more cost-effective than introducing technology. It also has the added benefit of employing tutors, so the money stays in the local economy.
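To make "cost-effective" concrete, here is a back-of-the-envelope sketch comparing interventions by cost per standard deviation of test-score gain. Every figure is a hypothetical placeholder chosen for illustration, not data from the studies discussed.

    # Back-of-the-envelope cost-effectiveness: dollars spent per standard
    # deviation of test-score gain per student. All numbers below are
    # hypothetical placeholders, not figures from any study discussed above.
    interventions = {
        # name: (annual cost per student in USD, gain in standard deviations)
        "computer-assisted learning": (15.0, 0.35),
        "locally hired tutors":       (5.0,  0.25),
    }

    for name, (cost_usd, gain_sd) in interventions.items():
        print(f"{name}: ${cost_usd / gain_sd:.0f} per std. dev. of gain")

Under these made-up numbers, tutoring comes out ahead despite a smaller raw gain -- the comparison that matters for policy is cost per unit of learning, not the size of the gain alone.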