Performance Monitoring - an overview (2023)

Crash Data Sets and Analysis

Young-Jun Kweon, in Handbook of Traffic Psychology, 2011

2.2.1.3 Highway Performance Monitoring System

The HPMS is a national highway system database containing data on the extent, condition, performance, use, and operating characteristics of highways in the United States, and it supports data-driven decision making on national highway issues. HPMS data are used for assessing the performance and investment needs of highway systems and for apportioning federal highway funds. Although the HPMS is not designed specifically for traffic safety analysis, it contains information useful for that purpose, covering various aspects of highway characteristics such as roadway inventory (e.g., facility type, turn lanes, and speed limit), traffic operations and controls (e.g., AADT by vehicle type, signals, and stop signs), geometric features (e.g., lane width, median type, and grade), and pavement (e.g., surface type, rutting, and base type).
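As a rough illustration of how such attributes might feed a safety screen, the sketch below represents HPMS-style segment records and filters them on a few characteristics. The field names and thresholds are illustrative assumptions only and do not reproduce the actual HPMS data dictionary.

from dataclasses import dataclass

@dataclass
class RoadSegment:
    facility_type: str      # roadway inventory
    speed_limit_mph: int
    aadt: int               # traffic operations: annual average daily traffic
    lane_width_ft: float    # geometric features
    median_type: str
    surface_type: str       # pavement

segments = [
    RoadSegment("freeway", 65, 85000, 12.0, "barrier", "asphalt"),
    RoadSegment("arterial", 45, 22000, 10.0, "none", "concrete"),
    RoadSegment("collector", 35, 4500, 9.0, "none", "asphalt"),
]

# Example screen: high-exposure segments with narrow lanes and no median
candidates = [s for s in segments
              if s.aadt > 10000 and s.lane_width_ft < 11 and s.median_type == "none"]
print(candidates)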


URL:

https://www.sciencedirect.com/science/article/pii/B9780123819840100086

Economic Characteristics of Air Traffic Management

Margaret Arblaster, in Air Traffic Management, 2018

Performance Monitoring and Benchmarking

Performance monitoring and performance benchmarking are measures used to stimulate good performance outcomes. They are considered important in industries where competitive market processes are absent. Performance monitoring involves measuring performance over time against indicators of performance, or key performance indicators (KPIs).

Performance benchmarking is a complex activity requiring comparable, consistent, and validated data to be meaningful. Differences between ANSPs arising from a variety of local, regional, and global factors need to be taken into account. Techniques that can help reduce the impact of these factors include cost indicators in which data are normalized by output levels (e.g., costs per instrument flight rules [IFR] flight hour) and the presentation of trend analysis (Neiva, 2015). A limitation of annual benchmarking measurements of capacity, delays, and costs is that they reflect the existing industry structure and market conduct, i.e., performance is not measured relative to a competitive ideal.
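To make the normalization idea concrete, the short sketch below expresses each provider's costs per unit of output (here, per IFR flight hour) so that providers of very different sizes can be compared. All figures are invented for illustration.

providers = {
    # name: (annual ATM/CNS provision costs in EUR, IFR flight hours controlled)
    "ANSP A": (450000000, 1100000),
    "ANSP B": (95000000, 210000),
    "ANSP C": (1300000000, 3000000),
}

for name, (costs, flight_hours) in providers.items():
    unit_cost = costs / flight_hours        # EUR per IFR flight hour
    print(f"{name}: {unit_cost:,.0f} EUR per IFR flight hour")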

Benchmarking measures a firm’s performance relative to the performance of other similar firms with the aim of identifying best practices. However, although benchmarking can suggest best practices, benchmarking studies are often too general in scope and provide insufficient detail to assist firms (and regulators) (Neiva, 2015).

Air Navigation Service Provider Benchmarking Studies

Three well-known industry studies that regularly monitor ANSP performance are described in Table 6.5.

Table 6.5. International studies of air navigation service provider performance

Study: ATM Cost-effectiveness Benchmarking Report
Organization/s: Eurocontrol Performance Review Unit
States involved: 37 air navigation service providers (ANSPs) in Europe
Year commenced and frequency: Annual since 2003
Coverage: A diverse range of indicators of cost efficiency and productivity of individual national providers of air navigation services (ANS)

Study: Comparison of ATM-Related Performance: U.S.–Europe
Organization/s: Eurocontrol and the Federal Aviation Administration (FAA)
States involved: Europe and the United States (Air Traffic Organization, FAA)
Year commenced and frequency: Biennial since 2008
Coverage: Operational key performance indicators derived from comparable databases for both Eurocontrol and the FAA

Study: Global ANS Performance Report
Organization/s: Civil Air Navigation Services Organisation (CANSO)
States involved: Global
Year commenced and frequency: Annual since 2010
Coverage: Compares the cost efficiency, productivity, pricing, and revenues of ANSPs

Sources:
Eurocontrol, 2016a. ATM Cost-effectiveness (ACE) 2014 Benchmarking Report with 2015–2019 Outlook. Prepared by the Performance Review Unit (PRU) with the ACE Working Group. https://www.eurocontrol.int/sites/default/files/content/documents/single-sky/pru/publications/ace/ACE-2014-Benchmarking-Report.pdf
Eurocontrol and FAA, August 2016. 2015 Comparison of ATM-Related Performance: U.S.–Europe. Produced by EUROCONTROL on behalf of the European Union and the Federal Aviation Administration Air Traffic Organization System Operations Services. https://www.faa.gov/air_traffic/publications/media/us_eu_comparison_2015.pdf
CANSO, 2015b. Global Air Navigation Services Performance Report 2015. 2010-2010 Performance Results. The ANSP View. https://www.canso.org/sites/default/files/GlobalANSPerformanceReport2015%20the%20Industry%20View.pdf

The Performance Review Unit (PRU) of Eurocontrol is responsible for the evaluation of European ATM performance and regularly performs and commissions studies aimed at monitoring the performance of national service providers (Bilotkach et al., 2015). Almost all ANSPs in Europe are integrated into a system that reports their performance in a standardized manner. The PRU monitors and reviews the performance of the European air navigation system and compares the efficiency of air navigation service providers.

The FAA (ATO) and Eurocontrol have developed similar sets of key operational performance areas and indicators, using common procedures on comparable data, which are used to compare the performance of ATM in the United States and Europe. Performance monitoring undertaken by the Civil Air Navigation Services Organisation (CANSO) is the only global monitoring of air navigation performance (CANSO, 2016a). Participation in the CANSO study is voluntary. In 2016, 27 ANSPs, fewer than one-third of CANSO full members, participated and also agreed to have their performance measurements made publicly available. The group of ANSPs in the CANSO Global Air Navigation Services Performance Reports includes many European ANSPs and the FAA, which already participate in international publicly available performance reporting. Outside Europe and North America, relatively few ANSPs appear to participate in international performance benchmarking.

Academic Literature

Two key academic studies of ANSP cost-efficiency performance published since 2013 are Bilotkach et al. (2015) and Button and Neiva (2014). Both studies relate to Europe and use data envelopment analysis (DEA), a linear programming technique that estimates the relative efficiency of multiple firms. The study by Bilotkach et al. (2015) covers a longer time frame, from 2002 to 2011, and a more diverse set of efficiency indicators than that of Button and Neiva (2014), which covers 36 European systems from 2002 to 2009.
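The chapter does not reproduce the models used in those studies, but the sketch below shows the basic mechanics of an input-oriented DEA model (the constant-returns CCR formulation) solved as a linear program. The input and output figures are invented purely for illustration and are not data from either study.

import numpy as np
from scipy.optimize import linprog

# Rows = decision-making units (e.g., ANSPs); columns = inputs / outputs.
X = np.array([[5.0, 3.0], [8.0, 1.0], [7.0, 4.0], [4.0, 2.0]])   # inputs (e.g., costs, staff)
Y = np.array([[2.0], [3.0], [3.5], [1.0]])                        # outputs (e.g., IFR flight hours)

def dea_efficiency(o):
    """Input-oriented CCR efficiency score of unit o relative to the sample."""
    n, m = X.shape          # units, inputs
    s = Y.shape[1]          # outputs
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.zeros(n + 1)
    c[0] = 1.0
    # Inputs:  sum_j lambda_j * x_ij <= theta * x_io
    A_in = np.hstack([-X[[o]].T, X.T])
    b_in = np.zeros(m)
    # Outputs: sum_j lambda_j * y_rj >= y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return res.x[0]

for o in range(len(X)):
    print(f"unit {o}: relative efficiency = {dea_efficiency(o):.3f}")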

The study by Bilotkach et al. (2015) finds that Avinor (the Norwegian ANSP) and MUAC (a service operated by Eurocontrol for high-altitude navigation over Benelux and northern Germany) exhibit the highest level of economic efficiency. Overall, the Western European providers appear to be more efficient than their counterparts from Eastern Europe. Over time, the ANSPs in the study have generally become more cost-efficient.

Button and Neiva's (2014) study indicates a wide dispersion in the relative efficiency of ANSPs within Europe and shows that the pattern of relative efficiency has tended to change over time. Their analysis demonstrates that, overall, the productivity of the national ANSPs increased over the period covered by the data. In particular, three out of four providers increased their productivity, and about two out of three became more cost-efficient.


Neiva (2015) tested for spatial autocorrelation among European ANSPs in the Single European Sky, i.e., whether the efficiency of one ANSP is affected by the inefficiencies of those around it. The results suggest at least some degree of spatial dependence from an economic efficiency perspective, with each ANSP's level of efficiency being affected by other systems. An implication is that no matter how efficient an ATC system is, its interdependency with other systems may lead to a drop in its performance if third parties are less efficient (Neiva, 2015).
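The excerpt does not state which spatial statistic Neiva (2015) used; a common choice for this kind of test is Moran's I, sketched below on invented efficiency scores and an invented adjacency matrix, purely to illustrate the idea.

import numpy as np

def morans_i(scores, W):
    """Moran's I for the values in `scores` under spatial weight matrix `W`."""
    z = scores - scores.mean()
    n = len(scores)
    return n * np.sum(W * np.outer(z, z)) / (W.sum() * np.sum(z ** 2))

# Hypothetical efficiency scores for five neighbouring ANSPs
scores = np.array([0.90, 0.85, 0.60, 0.55, 0.80])
# W[i, j] = 1 if airspace i borders airspace j (symmetric, zero diagonal)
W = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 0],
              [1, 1, 0, 0, 0]], dtype=float)

print(f"Moran's I = {morans_i(scores, W):.3f}")
# Values well above the expectation of -1/(n-1) suggest spatial clustering of efficiency.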


URL:

https://www.sciencedirect.com/science/article/pii/B9780128111185000060

Spain

A. Tiana, in International Encyclopedia of Education (Third Edition), 2010

Performance Monitoring, Evaluation, and Research

Performance monitoring is a shared responsibility between the Ministry of Education and the regions. Each of the 17 regions is responsible for the monitoring and evaluation of the education system and schools in its own territory. Some of them have created regional centers or institutes for evaluation and have developed school-evaluation programs. A national statistical plan provides for the regular collection of national data on education and training and for contributions to international statistics.

The Evaluation Institute, belonging to the Ministry of Education but governed in cooperation with the regions, sets the frame for the general evaluation of the education system and the collection of national indicators. National evaluation programs set in place in the 1990s are now developing toward a new model based on the assessment of key competencies.

The Evaluation Institute also coordinates Spanish participation in international evaluations. Most of the Spanish regions are participating in PISA with comparable samples, allowing comparison with other countries and regions.

Universities and their teaching staff are evaluated by the National Agency for Quality Evaluation and Accreditation (ANECA) or the corresponding agencies created by some regions.

Educational research is mainly undertaken by universities, with the participation of some public or private centers or foundations. The Center for Educational Research and Documentation, belonging to the Ministry of Education, plays a significant role in fostering research and disseminating its outcomes.


URL:

https://www.sciencedirect.com/science/article/pii/B9780080448947014329

South Africa

P. Kgobe, J. Pampallis, in International Encyclopedia of Education (Third Edition), 2010

Performance Monitoring and Evaluation

A number of performance monitoring and evaluation mechanisms have been put in place. The South African Qualifications Authority (SAQA) Act of 1995 provides for the establishment of education and training quality assurance bodies (ETQAs) across the various sectors of the education system. The main purpose of the ETQAs is to monitor and audit achievements in terms of national standards.

Two independent quality-assurance bodies have been established by statute. The first is Umalusi – the council for quality assurance in general and further education, which aims to enhance and assure education quality in public and private schools, further education institutions, and adult education providers. The second is the higher education quality committee (HEQC), a permanent committee of the council on higher education, which has a similar function in the higher education sector. In late 2007, the government took the decision to establish another similar body, the quality council for trades and occupations (QCTO), with responsibility for occupation-specific training programs in workplaces and institutions outside of registered universities and colleges.

Twenty-three sector education and training authorities (SETAs) have been established with responsibility for promotion and coordination of skills development within their respective economic sectors. This includes a quality-assurance role. They are accredited by SAQA to quality assure qualifications in their areas of primary focus. So, for example, the quality assurance of programs aligned to bricklaying or carpentry qualifications is delegated to the Construction SETA, which must ensure that all accredited training meets approved standards.

Within the system of school administration, the department of education has created the integrated quality management system (IQMS). This system serves four purposes – to identify the specific needs of educators, schools, and district offices with a view to supporting and developing them; to promote accountability; to monitor the overall effectiveness of institutions; and to evaluate educator performance. The IQMS uses three strategies, namely developmental appraisal, performance measurement, and whole school evaluation, all of which are aimed at enhancing and monitoring the performance of the education system as a whole.

There are also systemic evaluation studies which focus primarily on assessing the achievement of learners at the various transitional stages of the system – namely, grades 3, 6, and 9. The purpose of the systemic evaluations is to assess the effectiveness of the entire system and the extent to which the vision and goals of the education system are being achieved. Two systemic evaluations have been conducted to date (among grade-3 learners in 2003 and among grade-6 learners in 2004/2005). Both have given rise to major concerns regarding the teaching of literacy and numeracy in primary schools, as pupil performance has been unacceptably low. As a result, the department of education has developed special programs to tackle the problems identified.


URL:

https://www.sciencedirect.com/science/article/pii/B9780080448947014317

The Canine Model of Human Brain Aging: Cognition, Behavior, and Neuropathology

P. Dwight Tapp, Christina T. Siwak, in Handbook of Models for Human Aging, 2006

Inhibitory control in aging dogs

Inhibitory control and performance monitoring are executive functions that show decreased efficiency during normal aging (McDowd et al., 1995). According to the inhibitory deficit hypothesis of aging, the inability to maintain attention to relevant task features and to inhibit interfering information or previously activated cognitive processes is the single greatest factor affecting age-related cognitive decline (Hasher and Zacks, 1988). Deficits on tests of sensory processing, memory, and reading comprehension are often attributed to a lack of inhibitory control with age (McDowd et al., 1995).

Although not exclusively designed to measure executive function, the nature of errors on discrimination reversal tasks provides evidence of inhibitory control deficits. Discrimination reversal tasks require subjects to inhibit prepotent responses to previously correct stimuli and shift responses to a new stimulus–reward contingency within the same perceptual dimension. In humans, reversal deficits correlate with dementia severity. For example, perseverative responding, a measure of impaired inhibitory control, is more severe in Alzheimer's patients than in demented Parkinson's patients or normal controls (Freedman and Oscar-Berman, 1989). Similar deficits are observed in aged nonhuman primates (Voytko, 1999) and rodents (Means and Holsten, 1992).

Reversal learning in the dog was previously examined using an object discrimination task (Milgram et al., 1994). Aged dogs were impaired relative to young dogs on the object discrimination reversal task but not on the initial object discrimination. Although this study found that aged dogs required more sessions for the initial phase of the reversal task, it did not determine whether this reflected an inhibitory control deficit or a stimulus–reward association deficit. More recently, the types of errors made during a size discrimination reversal learning task were examined to dissociate inhibitory control deficits from stimulus–reward learning deficits (Tapp et al., 2003a). After learning an initial size discrimination (Figure 35.2), in which dogs were rewarded for responding to one of two otherwise identical stimuli that differed only in size (i.e., height), the reward contingencies were reversed. Reversal learning errors were categorized by stages based on previously published criteria (Duel et al., 1971). Stage I errors were defined as seven or more errors per 10-trial test session and represented perseverative responding. Stage II and Stage III errors were defined as four to six errors and fewer than three errors per 10-trial test session, respectively, and represented a stimulus–reward deficit. Overall, old dogs made more errors than young dogs on both the size and size reversal tasks. Qualitative analysis of errors on the reversal task, however, suggested that separate cognitive processes were responsible for the learning deficits in two subgroups of old dogs (Figure 35.4).
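As a small illustration of the staging rule quoted above, the sketch below assigns a session to a stage from its error count; because the quoted thresholds leave sessions with exactly three errors unassigned, this sketch simply groups them with Stage III.

def error_stage(errors_per_10_trial_session):
    """Classify a 10-trial reversal session by the staging criteria quoted above."""
    if errors_per_10_trial_session >= 7:
        return "Stage I (perseverative responding)"
    if errors_per_10_trial_session >= 4:
        return "Stage II (stimulus-reward deficit)"
    return "Stage III (stimulus-reward deficit)"

for errors in (9, 5, 2):
    print(errors, "->", error_stage(errors))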


Figure 35.4. Median number of sessions spent at Stage I, II, and III during reversal learning by young, middle-aged, old, and senior Beagle dogs. Compared to young and old dogs, senior dogs exhibited greater perseveration and stimulus-reward learning deficits.

Reprinted with permission from Cold Spring Harbor Laboratory Press. Copyright © 2003, Learning and Memory 10(1), p. 68.

Most errors made by old dogs (aged 8–11 years) were Stage II, reflecting a deficit in learning a new stimulus–reward contingency. By contrast, senior dogs (aged 11.5–14 years) made significantly more Stage I, or perseverative errors. The old dogs were separated into old and senior dogs based on neuropathological and neuropsychological information. Senior dogs are over 11.5 years of age, equivalent to approximately 74 human years. Executive function deficits in humans commonly occur in people over the age of 70, and we wanted to isolate this age population in our dogs (Tapp et al., 2003a, 2004b). Dogs over the age of 11.5 also show particular vulnerability to reduced frontal lobe volume and beta-amyloid deposition (Tapp et al., 2004a; Head et al., 1998), which warrants a distinction in the aged group. These data suggest that inhibitory control deficits are a characteristic of aging in very old dogs and, like aging humans and patients with dementia, may underlie patterns of cognitive or executive dysfunction in aging dogs.

Installing Snort 2.6

In Snort Intrusion Detection and Prevention Toolkit, 2007

Testing Snort

Testing and tuning rules and sensors is one of the most important aspects of an IDS, if not the most important. Most testing should occur in a test lab or test environment of some kind. One part of Snort (new to the 2.1 version) is a preprocessor called perfmonitor. This preprocessor is a great tool for determining sensor load, dropped packets, the number of connections, and the usual load on a network segment. Of even greater benefit is using perfmonitor combined with a graphing tool called perfmonitor-graph, located at http://people.su.se/~andreaso/perfmon-graph.

It does take some tweaking of the perfmon preprocessor to generate the snortstat data. Moreover, an ongoing issue with the perfmon preprocessor seems to be that it counts dropped packets as part of the starting and stopping of a Snort process. This issue hasn't been resolved as of this writing. One suggestion, however, is to document every time the Snort process is stopped or started, so that those times can be matched against the times shown in the graph.
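For reference, the preprocessor is enabled with a single line in snort.conf. The line below is only a sketch of a typical configuration; the available options and sensible values vary between Snort versions, so check the documentation for your release.

# write performance statistics to perfmon.stats at roughly 300-second intervals
preprocessor perfmonitor: time 300 file /var/log/snort/perfmon.stats pktcnt 10000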

Tools & Traps …

Performance Monitoring

Perfmonitor-graph generates its graphics based on the Perl modules used by RRDtool (http://people.ee.ethz.ch/~oetiker/webtools/rrdtool). RRDtool is a great tool commonly used by network operations staff. It takes log data from Cisco and other vendors' devices and produces graphs of things such as load, performance, and users. If you don't want to install the full RRDtool, you can just install the Perl libraries:

[Figure: installing the RRDtool Perl libraries]

With this installed, the perfmonitor-graph functions will work and generate the graphics.

Perfmonitor-graph combs through the data logged by the Snort preprocessor and displays it in a generated HTML page. With some tweaking, this is a great way to produce hourly, daily, and weekly charts of trends across several metrics. This can prove invaluable in larger or government organizations where metrics control the budget.


When it comes to Snort rules, Turbo Snort Rules (www.turbosnortrules.org) is a great place to visit when looking to optimize your sensor's ruleset. Turbo Snort Rules provides speed/efficiency testing of your Snort rules as well as tips for making Snort run faster via optimized rulesets. Virtual machines are a hot topic these days. VMware (www.vmware.com) and Xen (http://www.xensource.com) are great virtualization packages and prove invaluable to the budget-constrained security analyst. They provide the ability to run multiple, disparate operating systems on the same machine at the same time. This is quite useful for gaining experience with operating systems similar to the ones in your production environment, and it provides worry-free testing and development environments for those of us who like to tinker with and tweak our systems.

Testing within Organizations

Whether your security team is composed of one person or several 24/7 teams throughout the world, testing new rules and Snort builds should be the second most important role your team handles. The first is to document just about everything your team does, including testing and rule creation, removal, and maintenance. The scope of a security team's testing also may depend on the size of the organization, monetary backing, and time and materials. Testing approaches range from a test lab with live taps from the production network, to a single laptop or desktop plugged into a network, to Snort rule-generation tools such as Snot and Sneeze. Snot and Sneeze are just two of the tools that take the contents of a rules file and generate traffic to trigger the rules. A new and controversial toolset, Metasploit, is also available to help organizations protect their networks (www.metasploit.com/projects/Framework).

Notes from the Underground …

Metasploit

The authors of this book are in no way encouraging readers to download or run this tool. Metasploit is a flexible set of the most current exploits that an IDS team could run in their test network to gather accurate signatures of attacks. One of the “features” of the Metasploit framework is its capability to modify almost any exploit in the database. This can be useful for detecting modified exploits on a production network, or for writing signatures that look deep within packets for telltale backdoor code. The possibilities this brings to an IDS team in terms of accurate, understandable attack data are immense. Although all of these methods are great for testing, most organizations are going to have to choose some combination thereof.

Small Organizations

We consider “small” organizations to be those without a dedicated IDS team or those with an IDS team of up to five people and not much monetary backing. As such, most of these teams use either open source tools or tools that are fairly inexpensive; for example, using a second-hand desktop/laptop or doubling up a workstation as a testing box.

Using a Single Box or Nonproduction Test Lab

One method that a person or small team could use to test new rules and versions of Snort before placing them in a production environment is to use a test lab with at least one attack machine, one victim machine, and a copy of an existing IDS sensor build. Understandably, this might be a lot for a small team to acquire, so a suggestion would be to find a single box. If one can't be found in the organization, usually a local electronics store will sell used or cheap machines. This box should be built with the same operating system as a team's production OS and have the same build of Snort. That way, when the team is testing rules or versions, if an exploit or bug occurs for the OS or, in the rare case, for Snort, the team can know it before it hits a production system. This method can be made easier if the team uses disk-imaging software, such as dd from the open source community or a commercial product such as Norton Ghost. This way, as the team's production systems change, they can just load the production image onto the test box to test against the most current production system.

If the team or person doesn't have the time or resources to run a dedicated test machine, one option is to use a virtual test lab. You can create a virtual test lab by adding a tool such as VMware or Virtual PC to a workstation on the network. This would provide a means to install a guest OS such as Linux or *BSD, which is most likely the OS of choice for a Snort sensor in a small security team. This small team could then test and run new rules or Snort builds against any traffic hitting the workstation, without having to use the production sensors. If this software is loaded on a standard Intel PC, with a little tuning, the image, in the case of VMware, could be placed on a laptop and taken to other sites for use as a temporary sensor when testing at new or remote sites.

Finally, another option for a smaller organization is for the security team to perform testing with its own workstations. As most organizations use Microsoft Windows for their workstations, we will use Windows as the OS of choice in this discussion. There are Snort builds for the Windows environment, known as Win32 builds, which allow people to run Snort from a Windows machine. One piece of software, called EagleX and available from Eagle Software (www.eagle-software.com), does a nice job of installing Snort, the winpcap library needed to sniff traffic, the database server, and the Web server. All of this is done with only local access to the resources: it sets up a Snort sensor on the Windows workstation that logs all information to a local MySQL database and runs the Analysis Console for Intrusion Databases (ACID), a Web-based front end for Snort. This is great both for new Snort users and for a small staff to test rules and determine whether a Snort build or a rule is going to flood Snort and its front end.

Large Organizations

We consider “large” organizations to be those with an IDS team of more than five people. These are teams that are usually given their own budget and that cover a 24/7 operation or are geographically dispersed. In such an environment, a team should have a dedicated test lab for running exploit code and malware to determine signatures for detecting attacks and for testing new Snort builds and rules. Ideally, this test lab would also have a live-feed tap from the production network so that rules and builds can be tested against accurate data and load. Creating an image of the production sensor build makes the most sense for large security teams: it greatly shortens the deployment time and processes for new sensors and provides a means to quickly test rules against the current sensor build.

Another option for a large organization is to consider port density at each point on the network where sensors are located. If, for example, each tap/span of live data is plugged into a small switch or hub, the production systems can be plugged into that switch/hub. Then a spare box, preferably with the same OS build as the production system, can be placed at the points on the tap infrastructure most important to the organization. By placing an extra box at the span point, a new rule or Snort build can be exposed to a real-time, accurate load, giving the best picture of how a sensor will behave. We have found this useful at points such as the external tap, which can be used for testing and for running intelligence rules against strange traffic that normally wouldn't get through the firewall. Alternatively, you could place an extra box at the RAS/virtual private network remote access points; as nearly every IDS analyst who has monitored a RAS link into an organization knows, these are the points where you can see some of the earliest victims of viruses and worms, machines with out-of-date security patches, and strange traffic in general. If you placed an extra tap at each of these locations, you would get a highly accurate view of how new rules or Snort builds would perform, without compromising the integrity of the production sensors.

Finally, another extremely useful method for large organizations to test Snort rules and builds is a full test lab. Such a lab is sometimes shared with other IT teams, such as Operations (for new infrastructure equipment) or a help desk team (for testing new user software). A lab like this also helps in demonstrating the effectiveness of an attack or virus. For example, if the lab is a network disconnected from the live network, then when malware or exploits are found they can be run in this environment to help the Computer Incident Response Team understand what containment measures and countermeasures to use, and the IDS team can use the data to create and test signatures that determine infection, detect initial attacks, and possibly catch other side effects of hostile traffic.


URL:

https://www.sciencedirect.com/science/article/pii/B9781597490993500082

Computerized maintenance management systems

Theodore Cohen, ... William M. Gentles, in Clinical Engineering Handbook (Second Edition), 2020

Benchmarking and data sharing

As described above, performance improvement depends on performance monitoring—knowing your current level of performance. Performance improvement also depends on knowing what’s possible—what level of performance is realistically achievable. That’s where benchmarking comes in.

Every CE program should be doing internal benchmarking, monitoring its own performance over time. Moving to the next level requires external benchmarking, monitoring the program’s performance relative to other programs with similar characteristics. For internal benchmarking, all you need are performance metrics that are internally consistent over time; in other words, use the same metrics from year to year. For external benchmarking, you need to use performance metrics that are standardized across CE programs (Cohen et al., 2015). To accomplish that, you need to configure your CMMS to generate standard metrics.

For example, if you find that your performance on one metric is already near the level of your peer CE programs, then your room for improvement on that metric is limited; in this case, consider focusing on a different aspect of performance. On the other hand, if you find that your performance on another metric is significantly below your peers, that tells you there is a genuine, achievable opportunity for improvement; consider focusing your efforts there.
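As a rough sketch of the two kinds of benchmarking, the example below tracks one standardized metric over several years (internal benchmarking) and compares it with a peer value (external benchmarking). The metric chosen, annual CE program cost divided by the acquisition cost of the equipment supported, and all of the numbers are illustrative assumptions rather than values from the chapter.

program_cost_by_year = {2017: 1900000, 2018: 1950000, 2019: 2050000}
acquisition_cost_supported = 40000000     # same equipment base each year
peer_median_ratio = 0.051                 # hypothetical external benchmark

for year, cost in sorted(program_cost_by_year.items()):
    ratio = cost / acquisition_cost_supported
    position = "above" if ratio > peer_median_ratio else "at or below"
    print(f"{year}: cost/acquisition = {ratio:.3f} ({position} the peer median)")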


URL:

https://www.sciencedirect.com/science/article/pii/B9780128134672000341

Public-Sector Contract Management

E.R. Yescombe, Edward Farquharson, in Public-Private Partnerships for Infrastructure (Second Edition), 2018

§19.3.6 Preparing for the Operation Phase

Prior to moving on to the operation phase, performance monitoring and the payment mechanism (cf. Chapter 15) should be tested in trial runs to ensure that they work and that both parties know what to expect. Joint training of the contracting authority's contract-management team and the project company's team may take place. To the extent possible, users of the facility should also be told what level of service to expect and what to do if there is a fault in the service. The risk register (cf. §11.6), as well as the communications and stakeholder-management plans, is also updated (cf. §6.5). In some countries, the transition to the operation phase is a point for a gateway review (cf. §5.4).



URL:

https://www.sciencedirect.com/science/article/pii/B978008100766200019X

Impact assessment in practice: case studies from Save the Children programs in Lao PDR and Afghanistan

Veronica Bell, Yasamin Alttahir, in Assessing the Impact of Foreign Aid, 2016

The importance of time

Ultimately, aligned with our organizational ambition to achieve sustainable impact at scale, the PMEP is intended to generate relevant and necessary evidence that the program is improving the health and education outcomes and opportunities for the children of Uruzgan in order to influence its broader adoption by key national stakeholders, including both government and nongovernment actors. A 4-year timeframe provides us with a very different evidence base than what we have been able to gather over more than 20 years in Laos, but in terms of national policy influence, the CoU program has achieved impressive results in a short period of time.

A formal preschool teacher training package developed through the program is expected to be accredited and a national curriculum adopted and implemented nationally by the Ministry of Education. The teacher training package for CBE teachers developed by the program has been approved and adopted by the Ministry of Education’s CBE Unit in an effort to assist all children to become literate and to increase children’s access to education, especially in remote areas. The Ministry of Education is encouraging development partners to participate in the roll out of this strategy and will gradually incorporate CBE and outreach classes in the official education system. The Ministries of Education and Public Health are collaborating, with technical support from Save the Children, to incorporate the CoU school health and nutrition (SHN) model into their respective strategies and to roll it out nationwide. This would see community health workers (CHWs) located at all schools as part of the Basic Package of Health Services under the Ministry of Public Health and the promotion of these CHWs and their services through a revised curriculum and training for school teachers undertaken by the Ministry of Education.

Regarding increased opportunities for women and girls, the Ministry of Education is undertaking a series of efforts and initiatives to increase the number of female students and teachers in the system. The Girls Learning to Teach Afghanistan (GLiTTA) program developed by CoU aims to motivate girls studying Grades 11 and 12 to become community-based teachers in their villages, enroll in Teacher Training Colleges, or apply to be contract teachers with the Ministry of Education in their district. Save the Children is piloting GLiTTA beyond Uruzgan and is currently tracking participating girls and teachers in three provinces. The findings from this pilot study will enable Save the Children and the Ministry of Education to assess the potential for broader application of the program nationally. The Community Midwife Education School in Tirin Kot underwent its final and binding accreditation assessment by the Ministry of Public Health in July 2014. The government has formally accredited all 24 students who have graduated from the school. At the end of 2014, 12 were already working as midwives in Uruzgan and nine were waiting for positions in their home districts.

The signs are optimistic that CoU has delivered results that have the potential to influence broader policy change. As already mentioned, robust evidence is vital to support behavior and practice change and policy influence. CoU has done a tremendous job in gathering a significant body of evidence and using that information to underpin its ambitious change agenda. The fact this has been achieved in such a challenging context is testament to the efforts of the staff and partners on the ground. CoU concludes in September 2015. As part of its close-out strategy, Save the Children will undertake a comprehensive final evaluation exercise to document achievements over the life of the program and assess the potential for ongoing impact as a result of the program investments. In line with Save the Children’s organizational commitment to leveraging our knowledge to enable sustainable impact at scale, the final evaluation will analyze changes that have occurred in the target communities since the inception of CoU and assess if and/or how the program has contributed to these. Women and girls are a key constituency for CoU and the evaluation will place particular emphasis on the extent to which the program has considered and addressed gender issues, and the resulting effects, throughout its lifecycle. In addition, the evaluation will assess the extent to which CoU has delivered benefits for, and influenced any change in the lives of, other marginalized groups including ethnic and linguistic minority groups, people with a disability, and the most remote communities in the locations where the program has been delivered. As mentioned repeatedly in this case study, Uruzgan is a complex and insecure operating context and the final evaluation will critically examine the approaches Save the Children has adopted to deliver – and to assess the results of – the program and how efficient and effective these have been in order to share lessons internally and with other implementers and donors working in similar contexts.


URL:

https://www.sciencedirect.com/science/article/pii/B9780128036600000143

Ghana

R. Palmer, in International Encyclopedia of Education (Third Edition), 2010

Performance Monitoring, Evaluation, and Research

The Planning, Budgeting, Monitoring and Evaluation (PBME) department of the MoESS is responsible for performance monitoring and evaluation. The National Education Sector Annual Review (NESAR), instituted with the implementation of the ESP 2003–15, provides the opportunity for all sector stakeholders to participate in an annual review of sector performance. From its inception, the process has taken place at the national level, with representatives from regional and district offices participating. The establishment of the NESAR has led to tremendous improvement in education delivery in the country. It is a collaborative approach intended to ensure the pooling of resources and the harmonization of programs and activities toward the goals and objectives of the education sector. Through the involvement of all stakeholders in the review, the NESAR enhances accountability and transparency within the sector, which is key to the successful implementation of the ESP, under which all stakeholders in the sector work together under the overall lead of the MoESS. At the 2006 NESAR, a recommendation was made to conduct Regional Education Sector Annual Reviews (RESARs) as a prelude to the NESAR. The regional reviews in all ten regions took place in the first week of May 2007 (GoG, 2007).

The majority of externally funded initiatives in Ghana's education and training system have built-in monitoring and evaluation components. However, public formal TVET in Ghana has very weak monitoring and evaluation dimensions; in many cases, evaluation is completely absent.

Much educational research is financed externally. For example, the UK Department for International Development is currently financing three research programs examining education access, quality, and outcomes in Ghana.


URL:

https://www.sciencedirect.com/science/article/pii/B978008044894701407X
