Since the release of the 2016 Kenya Certificate of Primary Education (KCPE) and Kenya Certificate of Secondary Education (KCSE) examination results, there have been a number of reactions from the public, mostly questioning the integrity of the examination process.
Politicians have suggested that candidates may have performed poorly because of the presence of police during the examination period. To be fair, there have always been policemen at examination centres. KCPE employs an objective-type (multiple-choice) format, and marking is done by machine; the speed of the machine has no bearing on the final results. Examiners without access to such a machine often perforate A4-size paper to create a template of the correct answers, after which anyone can mark the scripts. Other commentators have taken issue with the validity of the examinations and with the fact that candidates' scores did not fall on a "normal curve".
Validity of an examination refers to whether the examination questions test what they are supposed to. The Kenya public school system follows a centralised curriculum. A student in Lodwar follows the same syllabus as a student in Nairobi.
There are many ways one can see this selection of content as being blatantly unfair to the unexposed rural child. Nevertheless, teachers in rural schools strive to teach the entire syllabus notwithstanding their limitations. To the extent that national examinations are based on a national centralised curriculum, they are valid.
But critics have stretched the term "validity" to cover the actual marking process. They have opined that KNEC could not have had time to mark fairly and release results within one month.
A long time ago KNEC adopted the “conveyor belt” system of marking. In a four-question answer booklet, four different examiners will mark the four questions, one each. That way, no single examiner can individually influence the final outcome for a candidate.
After marking, and within the conveyor system, a senior examiner cross-checks the script for standards of marking, a process referred to as "deviating" in marking centres. Deviations of ±3 marks on a single question are considered serious enough to warrant a caution, because if such deviations were allowed to persist, a script of four questions could carry a total deviation of ±12 marks!
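The arithmetic behind that caution can be illustrated with a short sketch. The marks below are hypothetical, chosen only to show how per-question deviations of ±3 accumulate; this is not KNEC's actual system.

```python
# Illustrative sketch (hypothetical marks, not KNEC's system): how
# per-question deviations between an examiner and the senior examiner
# accumulate across a four-question script.

# One candidate's four answers, marked twice.
examiner_marks = [14, 11, 16, 9]
senior_marks = [17, 8, 13, 12]  # senior examiner's cross-check

# Per-question deviation: examiner's mark minus the senior's mark.
deviations = [e - s for e, s in zip(examiner_marks, senior_marks)]
print(deviations)  # [-3, 3, 3, -3] -- each one warrants a caution

# In the worst case every deviation points the same way, so the
# whole script can be off by up to 12 marks.
worst_case = sum(abs(d) for d in deviations)
print(worst_case)  # 12
```

The point of the conveyor system is precisely that four different examiners would all have to deviate in the same direction for a candidate to be badly wronged.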
In 2016, the “Magoha Rules” made it even more stringent.
A candidate's marks were punched into the computer system as soon as the examiners had cleared that candidate's script. To change a mark, one had to appear before the CEO and give reasons. The marking process was therefore "valid", when validity is used loosely. Moderation is done at the time of examination setting: an examination for 2020 may be set in 2017 and moderated by all, including potential candidates, in a kind of pilot study.
This process has been greatly abused in the past. Moderators learned the prospective examination and circulated it. When it was discovered that teachers were teaching the forthcoming examinations, the known examinations were cancelled.
The "normal curve" of distribution is a graphical representation of a series of scientific measurements. It shows that the same forces, applied to the same situation, tend to produce the same result. Applied to a school system, it assumes that teachers' efforts have been evenly distributed throughout the instruction groups or schools. From the point of view of the learner, it shows the relative position of the learner with reference to his group.
The normal curve of distribution is used in estimating students' marks. It works only when there is no common unit of measurement, such as a common examination. The normal curve theory has been criticised for assuming that when one distributes marks, one is actually distributing talent.
Teachers' efforts in different schools will not have been evenly distributed among learners, which is a basic requirement for the application of the normal curve theory. The award of grades at KNEC has in the past been abused: it was argued that results must approximate the "normal curve", as if students had been exposed to the same experiences in their different schools.
Under that approach, true examination results are tampered with to fit the five categories E, D, C, B and A in the stated percentages. Membership in a group, say C, should be the effect of one's efforts; the cause should be the score upon which that position is based. Thus, the proper basis of rating pupils is achievement measured in terms of an objective score.
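The contrast between the two philosophies can be sketched in a few lines of code. The grade quotas and cut-off scores below are hypothetical, invented purely for illustration; they are not KNEC's actual figures.

```python
# Illustrative sketch (quotas and cut-offs are hypothetical, not
# KNEC's): grading "on the curve" forces a fixed share of candidates
# into each grade regardless of their actual scores, whereas absolute
# grading maps each score to a grade on its own merits.

def curve_grades(scores, quotas=(("A", 0.10), ("B", 0.20), ("C", 0.38),
                                 ("D", 0.20), ("E", 0.12))):
    """Assign grades by rank so each grade receives a fixed share."""
    n = len(scores)
    ranked = sorted(range(n), key=lambda i: scores[i], reverse=True)
    grades = [None] * n
    pos = 0
    for grade, share in quotas:
        count = round(share * n)
        for i in ranked[pos:pos + count]:
            grades[i] = grade
        pos += count
    for i in ranked[pos:]:  # any rounding leftovers fall to the bottom grade
        grades[i] = quotas[-1][0]
    return grades

def absolute_grades(scores):
    """Assign grades from the score alone (hypothetical cut-offs)."""
    cutoffs = [(80, "A"), (65, "B"), (50, "C"), (35, "D")]
    return [next((g for c, g in cutoffs if s >= c), "E") for s in scores]

# A class that mostly did well: absolute grading reflects that, while
# the curve still forces some candidates down into D and E.
scores = [92, 88, 85, 81, 78, 74, 70, 67, 63, 60]
print(absolute_grades(scores))  # ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'C', 'C']
print(curve_grades(scores))     # ['A', 'B', 'B', 'C', 'C', 'C', 'C', 'D', 'D', 'E']
```

Note that under the curve a candidate scoring 60 per cent, who has plainly passed on any absolute standard, is pushed into grade E simply because others scored higher.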
In conclusion, it may be said that in an education system where pupils' ratings are determined by their position relative to a group, society reaches a point where retardation is "cured" by the simple process of manipulating graphs. Bad results are made to fit a predetermined graph! Equally, teachers in schools will often show low percentages of failure regardless of the quality of the work done.
In the past, KNEC showed low percentages of failure even when work of poor quality had been done: only the 12 per cent in group "E" failed. It is good that the "Magoha Rules" put aside the normal curve theory, so that we get a proper cure for retardation.