The effective management of human resources in the education sector depends not only on clear standards and regulations, appropriate incentives, and effective accountability mechanisms, but also on the availability of, and access to, accurate and up-to-date information about staff and school needs. This information is critical for monitoring teacher assignments and performance, although deciding what information should be used to measure personnel and school needs, and then collecting that information, is a challenge.
A study of teacher management information systems in Botswana, Malawi, South Africa, and Uganda examined the personnel data collected, processed, and available at the national level (Göttelman-Duret and Hogan 1998). In Uganda, a computerized management information system made it possible for the government to identify 25,000 ghost teachers. In Botswana, the information system allowed the government to accurately monitor the number of teachers employed and the salary each received. South Africa was able to improve its personnel and financial management by using an electronic Personnel and Salary system.
Despite these successes, the information systems suffered from problems, especially in the capacity to maintain and further develop them. Creating databases and computerized systems is the first step; ensuring that the information stored is accurate and up-to-date is another, more challenging step. In these countries, information was collected annually, and data were only processed and disseminated one year after they were entered into the system, severely reducing their usefulness for policy and monitoring and making it difficult to hold officials, schools, and teachers accountable for performance.
Other weaknesses that are common in many countries include: incomplete data; difficulty updating individual records due to lack of monitoring mechanisms; no collection of data on teacher turnover and transfers; lack of uniformity in measures used; and absence of adequate databases and records at the provincial or district levels (Göttelman-Duret and Hogan 1998). In addition, when capitation grants are tied to the number of students enrolled, local officials and principals have an incentive to report exaggerated enrollment numbers in order to receive more funding (World Bank 2007c).
Box 2. Information and incentives for performance in Punjab, Pakistan
The Punjab Education Sector Reform Program (PESRP) is an instructive example of a program that promotes good governance in education in order to improve school performance, through information gathering and oversight, incentives linking school rewards to performance, and accountability through competition.
PESRP has helped decentralize the highly centralized, province-level Department of School Education, which is responsible for service delivery in over 63,000 schools with more than 500,000 employees. Teacher recruitment, performance management, and even minor disciplinary issues were all handled at the provincial level. The absence of incentives and accountability in this highly centralized system undermined the quality of service delivery (World Bank 2004d). One component of PESRP aims to improve education governance by strengthening teacher management, school councils, and monitoring and evaluation.
To improve teacher management the program is “(i) changing the terms of contracts with new teachers from standard civil service recruitment to site-specific, fixed-term contracts, with almost 20% of the teaching staff already recruited through such contracts; and (ii) introducing district monitoring mechanisms to curb absenteeism and monitor local schools. Transparency in teacher recruitment, postings and transfers is being enhanced through (i) new merit-based recruitment criteria, based on a point system, which favors recruiting women and local candidates; and (ii) temporarily freezing transfers of education management staff, during the first year, to provide continuity” (World Bank 2004d: p. 17).
The provincial and district governments have undertaken several measures to empower school councils including “(i) an agreement on the role of School Councils included in the [terms of partnership] (TOPs); (ii) the issuance of guidelines by the provincial government clarifying the role of [school councils], including their authority to undertake small procurement of works; and (iii) contracting NGOs for provision of capacity support to school [councils] in six districts during the first year” (World Bank 2004d: p. 18). Building upon this pilot, the Government is expanding the capacity building program to strengthen school council capability across all 35 districts.
Finally, to build monitoring capacity the Government of Punjab has “(a) developed district education profiles and established baseline indicators of education performance for the reform program; (b) agreed on monitoring targets with the districts and included them in the TOPs; (c) established a Program Monitoring and Implementation Unit (PMIU) in the Provincial Education Department; and [d] approved an education awareness campaign for disseminating information about the reform program” (World Bank 2004d: p. 18).
As part of the effort to strengthen monitoring, each of the 63,000 schools receives an unannounced monthly visit by an inspector from the independent monitoring unit, and the data collected on school performance are matched against school information systems and cross-checked with implementation systems. The information is then reviewed at the district level and aggregated at the provincial level to enable monthly performance assessments.
The performance evaluation index currently consists of twelve components: teacher absenteeism, transfer of funds to school councils, free textbook provision, enrollment-attendance gap, non-teaching staff absenteeism, school inspections by District Education Department staff, meetings of the District Review Committee, illegal fees, school cleanliness, missing facilities schemes progress, school utilities functionality, and teacher training (Shakil 2008). The index components are revised periodically as the monitoring requirements of different aspects of the sector evolve.
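As a rough illustration, aggregating twelve component scores into a single district index might look like the sketch below. The component names follow the text, but the equal weighting and the 0-100 score scale are assumptions for illustration only; the actual weighting and scaling of the Punjab index are not documented here.

```python
# Hypothetical sketch of a composite district performance index.
# Equal weights and a 0-100 scale per component are assumed, not
# documented features of the actual Punjab index.

COMPONENTS = [
    "teacher_absenteeism",
    "funds_transferred_to_school_councils",
    "free_textbook_provision",
    "enrollment_attendance_gap",
    "non_teaching_staff_absenteeism",
    "school_inspections",
    "district_review_committee_meetings",
    "illegal_fees",
    "school_cleanliness",
    "missing_facilities_progress",
    "school_utilities_functionality",
    "teacher_training",
]

def district_index(scores: dict) -> float:
    """Average the twelve component scores (each assumed to be 0-100)."""
    missing = [c for c in COMPONENTS if c not in scores]
    if missing:
        raise ValueError("missing components: %s" % missing)
    return sum(scores[c] for c in COMPONENTS) / len(COMPONENTS)

# Example: a district scoring 80 on every component gets an index of 80.
example_scores = {c: 80.0 for c in COMPONENTS}
print(district_index(example_scores))  # 80.0
```

Publishing a single number per district, as in Figure 6, makes month-to-month comparison across districts straightforward, which is what enables the competitive element described below.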
The frequent monitoring of school performance has enabled the introduction of incentives in the form of awards to the best performing district managers for school investments, but there are no sanctions for poor performance, so the accountability element is only partial. The information on performance in each district is disseminated to all other districts, introducing an element of competition among education managers. However, the information is not disseminated to the public; doing so in the future could help lay the basis for external accountability, in which community and parental groups press for better performance. Figure 6 shows the index for the five best and worst performing Punjab districts in November and December of 2007.
Figure 6. Variation in school performance in Punjab, Nov-Dec 2007
A successful example of using student performance on achievement tests to oversee school performance comes from Mexico. States that (i) implement their own student assessments in addition to the international PISA assessments, (ii) disseminate results to the schools and the public, (iii) engage with schools on their test performance to find ways to raise scores, and (iv) design strategies and policies for improving education in the state based on that information have students who perform significantly better on both PISA mathematics and reading tests (Álvarez, Garcia Moreno, and Patrinos 2007). The competition, external scrutiny, and engagement on policy and program issues of specific schools offer a form of school accountability to state education officials. They also improve transparency and oversight, which together help to promote better performance. Effectively, the accountability arises through the increased transparency and involvement of the community (despite the lack of sanctions).
To make information systems work, it is not sufficient to put new systems in place; incentives must also be introduced at each level, whether local government or school, to collect, accurately report, and actually use the available information. Once accurate data are available on a regular basis, sustained monitoring and periodic third-party validation are typically required; the resulting information can then be used to introduce incentives, hold providers accountable, and, ultimately, improve teacher and school performance.
Box 3. Does school-based management improve incentives, oversight, and accountability, and ultimately, performance?
In recent years there has been a surge in decentralization in many countries, and thereby a delegation of decision-making power from central government to the community and school levels, with the objective of making resource allocations more effective and responsive and of improving performance. Because decision-makers at the central level may be too far removed from schools to ensure that spending is appropriately targeted and managed, decentralization is seen as a way of bringing decision-making to the local level, where officials, in theory at least, have better information on school resource needs and face greater pressure from communities to deliver education services effectively (assuming there is local voting) (Wößmann 2003; World Bank 2007a).
To evaluate the impact of school-based management on education performance and outcomes it is important to be clear about which decisions (human resources, supplies, finances) are decentralized, and to whom (school officials, parents, teachers), since different combinations of decisions transferred, and to different parties, can produce different outcomes (De Grauwe 2005). Moreover, de jure decentralization does not always translate into de facto decentralization. Finally, the circumstances under which school-based management is introduced matter.
In Madagascar there was a concerted effort to decentralize to the community level and to allow parents to make decisions; however, teachers’ promotions and service locations were determined centrally (Brinkerhoff and Keener 2003), so accountability remained centralized and parents’ authority was dissipated.
Some studies report a positive effect of school-based management on student attendance and performance. In post-conflict El Salvador, student attendance was higher in the EDUCO schools managed by local parent committees than in regular schools, and standardized test scores were similar to those in regular schools despite students coming, on average, from more disadvantaged backgrounds (Jimenez and Sawada 2000).
Students in schools with greater autonomy in Nicaragua had higher test scores than students in non-autonomous schools (King and Özler 2001). This effect seemingly worked through schools’ authority over teacher staffing, monitoring, and evaluation, suggesting that school autonomy contributed to improved student performance through its impact on teacher quality.
A study on Brazil used state-level panel data for the period 1981–1993 to estimate the impact of school autonomy in the form of transfers of funds to schools, election of school principals, and the establishment of school councils; it found that school autonomy reduced drop-out and repetition rates (Paes de Barros and Mendonça 1998).
Another study assessed the impact of Mexico’s Quality Schools Program (PEC), a voluntary, urban-based program open to all public schools, and found that school-based management reduced repetition, drop-out, and failure rates (Skoufias and Shapiro 2006).
Gertler, Patrinos, and Rubio-Codina (2006) assessed the impact of the Support to School Management (AGE) program in Mexico, which provided financial support and training to parents’ associations at rural primary schools in deeply disadvantaged regions. The schools received AGE over the period 1998–2001 with the objective of increasing the influence of parents, teachers, and principals over how the schools were run. The main impacts of AGE were significant reductions in grade failure and repetition rates. However, there was no statistically significant impact on student drop-out rates.
Another study, on the effect of the school-based management program PROHECO in Honduras, found a direct positive impact on teacher behavior, which in turn improved student learning outcomes as measured by student test scores (Di Gropello and Marshall 2005).
An important issue to consider when assessing the impact of school-based management is the potential difference between de jure and de facto school autonomy. Even if power is transferred to “all” schools, some schools will find it easier to use their new powers than others, and this will be reflected in any evaluation results. That is, the results will not cleanly capture the impact of school-based management itself but will also reflect the fact that some schools are better able to use their new powers (Gunnarsson et al. 2006).
Gunnarsson et al. (2006) found that parental participation and school supplies have statistically significant and positive effects on 4th grade test performance, whereas school autonomy had no impact. They argued that SBM works when communities have the necessary capacity and will to manage schools, but fails when communities lack the required skills, authority, and information.
Available evidence suggests that school-based management at the primary level in some cases increases attendance and reduces drop-out, repetition, and failure rates. However, the evidence on the impact on student test scores is less encouraging. Overall, the research on primary level school-based management suggests that it is effective if parents have authority over funds and/or teachers, which allows them to hold providers accountable (Lewis 2004).