73325614 Performance Management Project


Human resources is a term used by many organizations to describe the combination of traditionally administrative personnel functions with performance management, employee relations, and resource planning. The field draws upon concepts developed in industrial/organizational psychology. "Human resources" has at least two related interpretations depending on context. The original usage derives from political economy and economics, where it was traditionally called labor, one of four factors of production. The more common usage within corporations and businesses refers to the individuals within the firm, and to the portion of the firm's organization that deals with hiring, firing, training, and other personnel issues. This article addresses both definitions.

The objective of Human Resources is to maximize the return on investment from the organization's human capital and minimize financial risk. It is the responsibility of human resource managers to conduct these activities in an effective, legal, fair, and consistent manner. Human resource management serves these key functions:

1. Recruitment strategy planning
2. Hiring processes (recruitment)
3. Selection
4. Training and development
5. Performance evaluation and management
6. Promotions
7. Redundancy
8. Industrial and employee relations
9. Record keeping of all personal data
10. Compensation, pensions, bonuses, etc., in liaison with payroll
11. Confidential advice to internal 'customers' in relation to problems at work
12. Career development

Human resources

Modern analysis emphasizes that human beings are not "commodities" or "resources", but are creative and social beings who make contributions beyond 'labor' to a society and to civilization. The broad term human capital has evolved to contain some of this complexity, and in micro-economics the term "firm-specific human capital" has come to represent a meaning of the term "human resources."

Advocating the central role of "human resources" or human capital in enterprises and societies has been a traditional role of socialist parties, who claim that value is primarily created by workers' activity, and accordingly justify a larger claim of profits or relief from these enterprises or societies. Critics say this is just a bargaining tactic which grew out of various practices of medieval European guilds into the modern trade union and collective bargaining unit.

A contrary view, common to capitalist parties, is that it is the infrastructural capital and (what they call) intellectual capital owned and fused by "management" that provides most value in financial capital terms. This likewise justifies a bargaining position and a general view that "human resources" are interchangeable.

A sign of consensus on this latter point was the ISO 9000 series of standards which in its 1994 revision could be understood to require procedures or a "job description" of every participant in a productive enterprise. The 2000 revision of ISO 9001 in contrast requires identifying the processes, their sequence and interaction, and to define and communicate responsibilities and authorities. In general, heavily unionized nations such as France and Germany have adopted and encouraged such job descriptions especially within trade unions. One view of this trend is that a strong social consensus on political economy and a good social welfare system facilitates labor mobility and tends to make the entire economy more productive, as labor can move from one enterprise to another with little controversy or difficulty in adapting.

An important controversy regarding labor mobility illustrates the broader philosophical issue with usage of the phrase "human resources": governments of developing nations often regard developed nations that encourage immigration or "guest workers" as appropriating human capital that is rightfully part of the developing nation and required to further its growth as a civilization. They argue that this appropriation is similar to colonial commodity fiat, wherein a colonizing European power would define an arbitrary price for natural resources, the extraction of which diminished national natural capital.

The debate regarding "human resources" versus human capital thus in many ways echoes the debate regarding natural resources versus natural capital. Over time the United Nations has come to more generally support the developing nations' point of view, and has requested significant offsetting "foreign aid" contributions so that a developing nation losing human capital does not lose the capacity to continue to train new people in trades, professions, and the arts.

An extreme version of this view is that historical inequities such as African slavery must be compensated by current developed nations, which benefited from stolen "human resources" as they were developing. This is an extremely controversial view, but it echoes the general theme of converting human capital to "human resources" and thus greatly diminishing its value to the host society, i.e. "Africa", as it is put to narrow imitative use as "labor" in the using society.

In a series of reports of the UN Secretary-General to the General Assembly over the last decade [e.g. A/56/162 (2001)], a broad intersectoral approach to developing human resourcefulness has been outlined as a priority for socio-economic development and particularly anti-poverty strategies. This calls for strategic and integrated public policies, for example in education, health, and employment sectors, that promote occupational skills, knowledge, and performance enhancement.

In the very narrow context of corporate "human resources", there is a contrasting pull to reflect and require workplace diversity that echoes the diversity of a global customer base. Foreign language and culture skills, ingenuity, humor, and careful listening are examples of traits that such programs typically require. It would appear that these evidence a general shift to the human capital point of view, and an acknowledgment that human beings do contribute much more to a productive enterprise than "work": they bring their character, their ethics, their creativity, their social connections, and in some cases even their pets and children, and alter the character of a workplace. The term corporate culture is used to characterize such processes.

The traditional but extremely narrow context of hiring, firing, and job description is considered a 20th century anachronism. Most corporate organizations that compete in the modern global economy have adopted a view of human capital that mirrors the modern consensus as above. Some of these, in turn, deprecate the term "human resources" as useless.

As the term refers to predictable exploitations of human capital in one context or another, it can still be said to apply to manual labor, mass agriculture, low skill "McJobs" in service industries, military and other work that has clear job descriptions, and which generally do not encourage creative or social contributions.

In general the abstractions of macro-economics treat it this way, as they include no mechanisms to represent choice or ingenuity. So one interpretation is that "firm-specific human capital" as defined in macro-economics is the modern and correct definition of "human resources", and that this is inadequate to represent the contributions of "human resources" in any modern theory of political economy.

Human resource development

In terms of recruitment and selection, it is important to carry out a thorough job analysis to determine the level of skills/technical abilities, competencies, flexibility required of the employee, etc. At this point it is important to consider both the internal and external factors that can have an effect on the recruitment of employees. The external factors are those outside the control of the organization, and include issues such as current and future trends of the labor market, e.g. skills, education level, government investment into industries, etc. On the other hand, internal influences are easier to control, predict, and monitor, for example management styles or even the organizational culture.

In order to know the business environment in which any organization operates, three major trends should be considered:

• Demographics – the characteristics of a population/workforce, for example, age, gender or social class. This type of trend may have an effect in relation to pension offerings, insurance packages etc.

• Diversity – the variation within the population/workplace. Changes in society now mean that a larger proportion of organizations are made up of "baby-boomers" or older employees in comparison to thirty years ago. Also, over recent years organizations have had to become more diverse in their employment practices to cope with the lower work ethic of newer generations. The service industry, for example, has embraced those "baby-boomers" desiring to reenter the workforce. Traditional advocates of "workplace diversity" simply advocate an employee base that mirrors the make-up of society in terms of race, gender, sexual orientation, etc. These advocates focus on social engineering theory without understanding the more important points: diversity of ideas to prevent stagnation of products and business development; expanding the customer base through "outreach"; and profit. Alarmists and advocates of social engineering theory cite a "rise in discrimination, unfair dismissal and sexual/racial harassment cases" as an indicator of the need for more diversity legislation. While such measures have a significant effect on the organization, they effect little or no real change in advancing diversity of ideas in the workplace. Anti-discrimination laws and regulations do require businesses to undertake a cost-benefit analysis; the result of this analysis is often to adopt an approach that recognizes gender, racial, and sexual orientation diversity as a cheaper alternative to fighting endless litigation. In summary, diversity based on social engineering "is about creating a working culture that seeks, respects and values difference" without regard to how diversity increases productivity and unity of effort.

• Skills and qualifications – as industries move from manual to more managerial professions, the need for highly skilled graduates grows. If the market is "tight" (i.e. not enough staff for the jobs), employers will have to compete for employees by offering financial rewards, community investment, etc.

In regard to how individuals respond to the changes in a labor market the following should be understood:

• Geographical spread – how far is the job from the individual? The distance to travel to work should be in line with the pay offered by the organization, and the transportation and infrastructure of the area will also be an influencing factor in deciding who will apply for a post.

• Occupational structure – the norms and values of the different careers within an organization. Mahoney (1989) developed three different types of occupational structure, namely craft (loyalty to the profession), organization career (promotion through the firm), and unstructured (lower/unskilled workers who work when needed).

• Generational difference – different age categories of employees have certain characteristics, for example their behavior and their expectations of the organization.

While recruitment methods are wide and varied, it is important that the job is described correctly and that any personal specifications are stated. Job recruitment methods include job centers, employment agencies/consultants, headhunting, and local/national newspapers. It is important that the correct medium is chosen to ensure an appropriate response to the advertised post.

Modern concept of human resources

Though human resources have been part of business and organizations since the first days of agriculture, the modern concept of human resources began in reaction to the efficiency focus of Taylorism in the early 1900s. By 1920, psychologists and employment experts in the United States started the human relations movement, which viewed workers in terms of their psychology and fit with companies, rather than as interchangeable parts. This movement grew throughout the middle of the 20th century, placing emphasis on how leadership, cohesion, and loyalty played important roles in organizational success. Although this view was increasingly challenged by more quantitatively rigorous and less "soft" management techniques in the 1960s and beyond, human resources had gained a permanent role within an organization.

Organization development is the process through which an organization develops the internal capacity to carry out its mission most efficiently and effectively and to sustain itself over the long term. This definition highlights the explicit connection between organizational development work and the achievement of organizational mission. This connection is the rationale for doing OD work. Organization development, according to Richard Beckhard, is defined as: a planned effort, organization-wide, managed from the top, to increase organization effectiveness and health, through planned interventions in the organization's 'processes', using behavioral science knowledge.

According to Warren Bennis, organization development (OD) is a complex strategy intended to change the beliefs, attitudes, values, and structure of organizations so that they can better adapt to new technologies, markets, and challenges.

Warner Burke emphasizes that OD is not just "anything done to better an organization"; it is a particular kind of change process designed to bring about a particular kind of end result. OD involves organizational reflection, system improvement, planning, and self-analysis.

The term "Organization Development" is often used interchangeably with Organizational effectiveness, especially when used as the name of a department or a part of the Human Resources function within an organization.

Organization Development is a growing field that is responsive to many new approaches, including Positive Adult Development.

Definition

At the core of OD is the concept of an organization, defined as two or more people working together toward one or more shared goals. Development in this context is the notion that an organization may become more effective over time at achieving its goals.

OD is a long-range effort to improve an organization's problem-solving and renewal processes, particularly through more effective and collaborative management of organizational culture, often with the assistance of a change agent or catalyst and the use of the theory and technology of applied behavioral science.

History

Early development

Kurt Lewin played a key role in the evolution of organization development as it is known today. As early as World War II, Lewin experimented with a collaborative change process (involving himself as consultant and a client group) based on a three-step process of planning, taking action, and measuring results. This was the forerunner of action research, an important element of OD, which will be discussed later. Lewin then participated in the beginnings of laboratory training, or T-groups, and, after his death in 1947, his close associates helped to develop survey-research methods at the University of Michigan. These procedures became important parts of OD as developments in this field continued at the National Training Laboratories and in growing numbers of universities and private consulting firms across the country.

The failure of off-site laboratory training to live up to its early promise was one of the important forces stimulating the development of OD. Laboratory training is learning from a person's "here and now" experience as a member of an ongoing training group. Such groups usually meet without a specific agenda. Their purpose is for the members to learn about themselves from their spontaneous "here and now" responses to an ambiguous hypothetical situation. Problems of leadership, structure, status, communication, and self-serving behavior typically arise in such a group. The members have an opportunity to learn something about themselves and to practice such skills as listening, observing others, and functioning as effective group members.

As formerly practiced (and occasionally still practiced for special purposes), laboratory training was conducted in "stranger groups," or groups composed of individuals from different organizations, situations, and backgrounds. A major difficulty developed, however, in transferring knowledge gained from these "stranger labs" to the actual situation "back home". This required a transfer between two different cultures, the relatively safe and protected environment of the T-group (or training group) and the give-and-take of the organizational environment with its traditional values. This led the early pioneers in this type of learning to begin to apply it to "family groups" — that is, groups located within an organization. From this shift in the locale of the training site and the realization that culture was an important factor in influencing group members (along with some other developments in the behavioral sciences) emerged the concept of organization development.

Case history

The Cambridge Clinic found itself having difficulty with its internal working relationships. The medical director, concerned with the effect these problems could have on patient care, contacted an organizational consultant at a local university and asked him for help. A preliminary discussion among the director, the clinic administrator, and the consultant seemed to point to problems in leadership, conflict resolution, and decision processes. The consultant suggested that data be gathered so that a working diagnosis could be made. The clinic officials agreed, and tentative working arrangements were concluded.

The consultant held a series of interviews involving all members of the clinic staff, the medical director, and the administrator. Then the consultant "thematized", or summarized, the interview data to identify specific problem areas. At the beginning of a workshop about a week later, the consultant fed back to the clinic staff the data he had collected.

The staff arranged the problems in the following priorities:

1. Role conflicts between certain members of the medical staff were creating tensions that interfered with the necessity for cooperation in handling patients.

2. The leadership style of the medical director resulted in his putting off decisions on important operating matters. This led to confusion and sometimes to inaction on the part of the medical and administrative staffs.

3. Communication between the administrative, medical, and outreach (social worker) staffs on mutual problems tended to be avoided. Open conflicts over policies and procedures were thus held in check, but suppressed feelings clearly had a negative influence on interpersonal and intergroup behavior.

Through the use of role analysis and other techniques suggested by the consultant, the clinic staff and the medical director were able to explore the role conflict and leadership problems and to devise effective ways of coping with them. Exercises designed to improve communication skills and a workshop session on dealing with conflict led to progress in developing more openness and trust throughout the clinic. An important result of this first workshop was the creation of an action plan that set forth specific steps to be applied to clinic problems by clinic personnel during the ensuing period. The consultant agreed to monitor these efforts and to assist in any way he could. Additional discussions and team development sessions were held with the director and the medical and administrative staffs.

A second workshop attended by the entire clinic staff took place about two months after the first. At the second workshop, the clinic staff continued to work together on the problems of dealing with conflict and interpersonal communication. During the last half-day of the meeting, the staff developed a revised action plan covering improvement activities to be undertaken in the following weeks and months to improve the working relationships of the clinic.

A notable additional benefit of this OD program was that the clinic staff learned new ways of monitoring the clinic's performance as an organization and of coping with some of its other problems. Six months later, when the consultant did a follow-up check on the organization, the staff confirmed that interpersonal problems were now under better control and that some of the techniques learned at the two workshops associated with the OD programs were still being used.

Modern development

In recent years, serious questioning has emerged about the relevance of OD to managing change in modern organizations. The need for "reinventing" the field has become a topic that even some of its "founding fathers" are discussing critically.

Definition:

Performance management is the process of creating a work environment or setting in which people are enabled to perform to the best of their abilities. Performance management is a whole work system that begins when a job is defined as needed. It ends when an employee leaves your organization.

Many writers and consultants are using the term "performance management" as a substitute for the traditional appraisal system. I encourage you to think of the term in this broader work-system context. A performance management system includes the following actions.

• Develop clear job descriptions.

• Select appropriate people with an appropriate selection process.

• Negotiate requirements and accomplishment-based performance standards, outcomes, and measures.

• Provide effective orientation, education, and training.

• Provide on-going coaching and feedback.

• Conduct quarterly performance development discussions.

Your organization's performance is our compass; your employees' performance is our focus. We do so by training, coaching & consulting.

- VERGOUWEN OVERDUIN

Why measure performance?

• What you cannot measure you cannot improve.

• If you cannot improve you cannot grow.

• Measurement helps in objectively differentiating between performers and non-performers.

• Pay for performance is possible only through metrics.

About the system

• The appraiser and the appraisee jointly set the Key Result Areas (KRAs) and assign mutually agreed weightage expressed as a percentage. The achievement of each KRA is also expressed as a percentage.

• A simple mathematical relationship between set weightage and accomplishment gives a final numerical score on KRAs, as illustrated in the sketch below.

• To evaluate all management personnel on company values and leadership attributes, a new section entitled "Values in Action" has been added.
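The scoring arithmetic described above reduces to a weighted sum, as this minimal Python sketch shows. The KRA names, weightages, and achievement percentages are invented for illustration, not taken from any actual appraisal.

```python
# Minimal sketch of the KRA scoring arithmetic: each KRA carries a
# mutually agreed weightage (%), and achievement is also recorded as a %.
# All names and numbers here are hypothetical.

kras = [
    # (KRA, weightage %, achievement %)
    ("Reduce average ticket resolution time", 40, 90),
    ("Complete certification training",       25, 100),
    ("Improve customer satisfaction score",   35, 80),
]

assert sum(w for _, w, _ in kras) == 100, "weightages should total 100%"

# Final score: weightage-weighted average of achievement across KRAs.
score = sum(w * a / 100 for _, w, a in kras)
print(f"Final KRA score: {score:.1f}%")  # 89.0% for these sample numbers
```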

What is a KRA

• A KRA refers to a target that needs to be achieved by the appraisee in a given time.

• KRAs are the set of performance expectations from the appraisee.

• The focus is on tangible outputs. However, this does not mean that tasks with a qualitative output cannot form a KRA.

• KRAs are not job descriptions or routine activities.

KRA setting process

• Key Result Areas for an employee emerge from the organizational objectives, departmental goals, and work-unit goals. This facilitates congruency between individual and departmental goals, as the sketch below illustrates.

• The process of setting KRAs is a TOP-DOWN approach.
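One way to picture the top-down cascade is as a small tree from organizational objective to departmental goal to individual KRA. The goals below are hypothetical, chosen only to show the congruency idea.

```python
# Hypothetical top-down KRA cascade: organization -> department -> individual.
objective_cascade = {
    "Organization: grow services revenue by 15%": {
        "Department: raise billable utilization to 85%": [
            "KRA: keep individual utilization above 85%",
            "KRA: complete two cross-skilling certifications",
        ],
        "Department: keep delivery quality at CSI > 4/5": [
            "KRA: close customer-reported defects within SLA",
        ],
    },
}

def print_cascade(node, indent=0):
    """Walk the cascade so every KRA can be traced back to an objective."""
    if isinstance(node, dict):
        for goal, children in node.items():
            print("  " * indent + goal)
            print_cascade(children, indent + 1)
    else:
        for kra in node:
            print("  " * indent + kra)

print_cascade(objective_cascade)
```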

Setting KRA in case of functional reporting relationship

• Functional reporting cases will require input from the functional superior in setting KRAs for the appraisee.

• The appraiser, the appraisee, and the functional superior will have to mutually agree upon the KRAs for the appraisee.

• In case of a disagreement, it will be the functional superior's responsibility to convince the administrative superior, reach an agreement on the KRAs, and communicate the same to the appraisee.

Challenges

Motivational workforce practices do not work effectively in an organisation that lacks an objective performance measurement system. In CSC India's performance management system, gaps were observed in the effective discussion of employees' Individual Development Plans (IDPs) and in regular feedback on Key Result Areas (KRAs). Also, individuals felt a lack of clarity about their roles and the eligibility criteria for career growth. To address these employee concerns, CSC in India enhanced its performance management system to provide employees with greater opportunities to develop their career potential and align their performance with the organisation's objectives.

Methodology

Employee performance management includes planning work and setting expectations, developing the capacity to perform, continuously monitoring performance, and evaluating it. The following cycle guides the performance management activities in CSC in India:

Planning (Setting Objectives)

Planning employees' performance involves establishing the standards or measures, such as KRAs, value system, ethics, and performance factors, which guide an employee's appraisal. For an employee, performance objectives (in the form of KRAs and an IDP) shall be developed in line with the respective department's, project's, or group's objectives.

Doing (Competency Development)

Doing involves evaluating employees' developmental needs, which helps them strengthen their job-related skills and competencies, and prioritising and developing a plan of action to achieve the set targets. CSC in India offers an online IDP tool to help employees plan their personal developmental goals; through this tool, the IDP of an employee is developed and finalised.

Checking (Continuous Monitoring)

Checking includes conducting ongoing reviews in which employees' performance is quantitatively measured against the set standards to identify how well employees are meeting the set goals. Thereafter, the quantitative data is used to derive the performance rating for the appraisal period. For low performance, an immediate plan of action is taken rather than waiting until the end of the appraisal period, when summary rating levels are assigned. CSC in India facilitates this process of regular feedback through an online feedback system.

Acting (Performance Evaluation)

Acting includes evaluating job performance against the standards in the employee's performance plan and assigning a rating to the employee based on work performed during the entire appraisal period.

Innovations Introduced

CSC India's performance management system is driven by Specific, Measurable, Achievable, Result-oriented and Time-bound (SMART) principles. This system helps employees develop greater self-awareness and role clarity, and provides them with the opportunity to plan developmental needs using organisational resources. As a part of performance management activities, CSC in India performs the following activities:

• Captures and tracks employees’ IDPs through a database system to enhance visibility about their development plans

• Provides ongoing performance feedback to employees on their progress toward reaching the set goals

• Defines shared KRAs to align employees' work efforts with the organisation's objectives

• Links the competency framework with performance management to enhance visibility and enable an objective assessment of employees' readiness for the next role

• Sustainable awards and recognitions, like Employee of the Year (EOY), Employee of the Quarter (EOQ), and cash prizes to high-contributing individuals, ensure motivation and contribute to the strategic business direction

• A Plan for Requisite Performance (PRP) process is in place to address unsatisfactory work performance issues and allow the organisation to look beyond employees as mere resources to tap into the human element and facilitate their recovery into more productive individuals

• Global Performance Planning and Review tool to facilitate objective evaluation of CSC’s worldwide employees on common organization-wide ethics and performance parameters

Impact

With the implementation of the well-laid-out performance management system highlighted above, CSC in India realized the following benefits:

• With continuous performance feedback to employees, formal employee dissensions on performance evaluation have been reduced from 15-20 to fewer than 5 on an employee base of 500-1,000 (over 70% reduction)

• With the formalised PRP, 60% of employees with unsatisfactory work performance have improved from a non-compliant level to a meeting-expectations level

• Reduction in rehiring cost through enhanced retention by using PRP

• Common understanding between managers and subordinates on expectations and evaluation criteria through increased transparency in the appraisal system

• Greater employee satisfaction with a uniform reward/recognition policy and development opportunities

• Consistently achieving a Customer Satisfaction Index (CSI) of greater than 4 (on a scale of 5) for almost 80% of long-term projects as a result of linking individual KRAs with organisational goals

Performance management

Performance measurement is the process of assessing progress toward achieving predetermined goals. Performance management builds on that process, adding the relevant communication and action on the progress achieved against these predetermined goals.

• In network performance management: (a) a set of functions that evaluate and report the behavior of telecommunications equipment and the effectiveness of the network or network element, and (b) a set of various subfunctions, such as gathering statistical information, maintaining and examining historical logs, determining system performance under natural and artificial conditions, and altering system modes of operation.

• In organizational development (OD), performance can be thought of as Actual Results vs Desired Results. Any discrepancy, where Actual is less than Desired, could constitute the performance improvement zone. Performance management and improvement can be thought of as a cycle:

1. Performance planning, where goals and objectives are established.

2. Performance coaching, where a manager intervenes to give feedback and adjust performance.

3. Performance appraisal, where individual performance is formally documented and feedback delivered.

A performance problem is any gap between Desired Results and Actual Results. Performance improvement is any effort targeted at closing the gap between Actual Results and Desired Results.

• Application performance management (APM) refers to the discipline within systems management that focuses on monitoring and managing the performance and availability of software applications. APM can be defined as workflow and related IT tools deployed to detect, diagnose, remedy and report on application performance issues to ensure that application performance meets or exceeds end-users' and businesses' expectations.

• Business performance management (BPM) is a set of processes that help businesses discover efficient use of their business units, financial, human and material resources.

• Operational performance management (OPM) focuses on creating methodical and predictable ways to improve business results, or performance, across organizations.

Simply put, performance management helps organizations achieve their strategic goals. Rather than discarding the data accessibility previous systems fostered, performance management harnesses it to help ensure that an organization's data works in service to organizational goals, providing information that is actually useful in achieving them, with a focus on the operational processes that underpin that performance. The main purpose of performance management is to link individual objectives with organizational objectives and to ensure that individuals uphold values important to the enterprise. Additionally, performance management tries to develop people's skills so that they can fulfil their ambitions and also increase the firm's profit.

Application Performance Management, or APM, refers to the discipline within systems management that focuses on monitoring and managing the performance and service availability of software applications.

APM can be defined as the process and use of related IT tools to detect, diagnose, remedy and report on application performance to ensure that it meets or exceeds end-users' and businesses' expectations.

Application performance relates to how fast transactions are completed on behalf of, or information is delivered to, the end user by the application via a particular network, application and/or Web services infrastructure. The standard method to collect application performance data from the application for analysis is through byte-code instrumentation. Currently available commercial APM solutions that use this technology to collect performance data include dynaTrace Diagnostics, JenniferSoft's Jennifer, Wily, and i3.

The use of application performance management is common for web services built on the J2EE platform. J2EE is preferred for building web services because of its flexibility in interfacing and communicating with foreign systems. Also, web service applications are frequently used for serving online customers and end-users in the financial, retail, government, logistics and other sectors, usually in the form of websites and the services provided within them. The application performance data directly correlates with the satisfaction level of customers' and end-users' experience using the web service.

Methodology in Application Performance Management

Monitoring Service Response Time

In a Web Application Server, service response time can serve as a measurement of customer satisfaction. Even if there are some bugs in a system, if a bug does not cause any problem in service response time or the site's functionality, it cannot be seen as a problem. Conversely, even if there is no bug found in the system, if the service response time is not fast enough to fulfill customer satisfaction, the system itself has a problem and cannot be considered normal. Service response time is an important information source in measuring a system's stability and diagnosing system problems. The following sections describe how to use service response time to resolve system performance issues, and why monitoring system resources alone is not the correct approach to Application Performance Management.

Resource Usage Cannot Exceed 100%

System resource usage cannot exceed 100%. This means that system resource usage cannot be used to diagnose system capacity. Let's take a look at a situation where vmstat is being used to monitor CPU usage. CPU usage is constantly very high, 95~100%. Is this a problem? Most system administrators cannot determine if this is a problem. All they can say is that the CPU is being used heavily. The administrators cannot determine whether the number of incoming requests exceeds system capacity just by monitoring the system resources alone. For example, let's say that it takes 20 concurrent requests to max out the CPU usage of a server. What if there are 30 concurrent requests? Whether there are 20 or 30 requests, the CPU usage will be 100% in both cases. Of course, administrators usually cannot tell how many concurrent incoming requests will max out a system resource.

Monitoring all system resources is inefficient

Another limitation of resource monitoring is that there are too many things to monitor. In any given system, there exist many hardware- and software-related resources such as CPU, memory, network interfaces, heap, connection pool, etc.; it is either inefficient or impossible to monitor all these system resources individually.

Incoming requests exceeding system capacity results in delayed response time

In order to overcome the limitations of system resource monitoring, service response time must be monitored. As incoming requests exceed system capacity, service response time increases without bound, letting administrators know that a resource shortage exists within the system. Since response time increases if any system resource is lacking, response time can be used to monitor system resources.
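A toy single-server queue makes the point concrete: once arrivals exceed service capacity, response time grows without bound while utilization merely pins at 100%. This is a minimal sketch; the arrival rates and service time are invented.

```python
# Toy single-server queue: each request needs `service_time` seconds of CPU.
# Past the saturation point, response time keeps growing while CPU usage
# simply reads "100%". All numbers are illustrative.

def simulate(arrival_interval, service_time, n_requests=1000):
    server_free_at = 0.0
    total_response = 0.0
    for i in range(n_requests):
        arrival = i * arrival_interval
        start = max(arrival, server_free_at)   # queue if the server is busy
        finish = start + service_time
        server_free_at = finish
        total_response += finish - arrival     # queueing delay + service
    utilization = min(1.0, n_requests * service_time / server_free_at)
    return total_response / n_requests, utilization

# Capacity is 1 / 0.05s = 20 requests per second.
for interval in (0.06, 0.05, 0.04):
    avg_rt, util = simulate(interval, service_time=0.05)
    print(f"{1/interval:.0f} req/s -> avg response {avg_rt:.2f}s, CPU {util:.0%}")
```

At 17 req/s the average response stays near the 0.05s service time; at 25 req/s it climbs above 5 seconds even though measured utilization reads 100% in both of the saturated cases, which is exactly why utilization alone cannot diagnose capacity.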

Response time must be measured per transaction

Then how should service response time be measured? Before discussing this point, let's look at the relationship between services and resources. In a web system, a service may interact with many different components such as classes, DB, LDAP, files, etc., and when the different system resources that are tied to each component are combined, that number can be very large. Also, some resources are used only by specific requests while others may be used by many different services. To conclude, the relationship between resources and services is an N:M relation and it cannot be clearly defined. Such an N:M relationship cannot be expressed by average response time grouped by service name or functional category in a line graph; instead, individual transactions must be plotted separately.

There are a few reasons why service response time must be measured individually rather than in groups:

First, when identical services are executed multiple times, the response time may be delayed for specific transactions only. No matter how the grouping is done, an individual service's response time will be diluted if it is averaged out with other services in the group. Secondly, there is a mapping issue between response time and profiling. If the grouping is done by service name, the mapping would somewhat make sense, but if the mapping is done by business object, the mapping will be too complicated to be used effectively. Thirdly, services cannot be classified easily by name. Since a service's name is determined by the initial request that called it, the name does not capture the internal changes that occur during processing. Grouping services that change dynamically during processing simply because they share the same service name is not a very effective way to group them.
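The first reason, dilution by averaging, is easy to see numerically. A minimal sketch with invented response times:

```python
# Averaging dilution: nine fast transactions hide one badly delayed one.
# Response times in seconds; all values are invented for illustration.
samples = [0.2] * 9 + [8.0]

average = sum(samples) / len(samples)
print(f"average = {average:.2f}s, worst = {max(samples):.1f}s")
# average = 0.98s looks acceptable, yet one user waited 8 seconds,
# exactly the failure a per-transaction view is meant to expose.
```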

Effective Response Time Monitoring

Response Time Monitoring

Service response time is one of the most important factors in determining a website visitor's satisfaction level. While it is important to prevent system downtime, preventing system crashes alone isn't enough for today's web system management and administration. Website visitors must be able to surf through the site with ease, and visitor waiting time and site loading time must be minimized. A website with slow response times tends to frustrate visitors, and slow response is one of the leading causes of deterioration in customer satisfaction and, ultimately, loss of customers. Thus, service response time must be considered part of customer satisfaction, and website administrators must continually work to improve service response time, prioritizing the services that are most frequently accessed. The traditional graph used for monitoring response time has been the line graph of average response time over time.

Traditional Average Response Time Line Graph

However, the traditional line graph cannot convey any more information than the response time itself. A line graph cannot show whether there is a specific performance problem, whether there is an overall increase in response time (e.g. due to a network issue), or whether only certain services are receiving overwhelming requests. Furthermore, a line graph is ineffective in discovering which service is experiencing a problem, or the root cause of the crawling response time.

Monitoring Response Time with Scatter Graph...

Thus, response time must be monitored in a single graph with individual service transactions plotted separately. In order to express individual service transactions in a single graph, a line graph is not appropriate; a scatter graph must be used.

Response Time Scatter Graph

We call this type of graph a Response Time Scatter Graph, aka X-View. The entire history of service transactions can be monitored in a single graph, and if any one transaction or group of transactions is selected, detailed information concerning the selection (method call path, SQL, socket, file, etc.) is displayed in a separate window.
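A minimal sketch of such a scatter view, assuming matplotlib and randomly generated transaction data (a real tool such as X-View plots live transactions and adds drill-down on selection):

```python
# Response-time scatter sketch: finish time on x, duration on y.
# Data is randomly generated for illustration only.
import random
import matplotlib.pyplot as plt

random.seed(42)
n = 2000
finish_times = [random.uniform(0, 600) for _ in range(n)]    # 10-minute window
durations = [random.expovariate(1 / 0.3) for _ in range(n)]  # mostly fast

# Inject a "vertical streaming" incident: many slow transactions all
# finishing around t = 400s (e.g. a shared database lock being released).
finish_times += [random.gauss(400, 3) for _ in range(150)]
durations += [random.uniform(2, 10) for _ in range(150)]

plt.scatter(finish_times, durations, s=4, alpha=0.4)
plt.xlabel("transaction finish time (s)")
plt.ylabel("response time (s)")
plt.title("Response time scatter: note the vertical streak near t = 400s")
plt.show()
```

The injected incident appears as the vertical streak discussed under "Vertical Streaming Pattern" below; on an average-response-time line graph the same data would appear only as a modest bump.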

JenniferSoft's X-View

While the concept of using a scatter graph to display service transactions, with a drag-and-drop feature to see individual transaction detail, is revolutionary, this is not the only advantage that the X-View offers to users. While monitoring, if there is any performance problem in the WAS or related systems, the problem typically is expressed as a specific pattern in the X-View. The pattern found in the scatter graph provides important information about the monitored WAS system to the user.

Please see below for the examples of scatter graph pattern typically encountered and their descriptions.

Vertical Streaming Pattern

As if slowly pouring water from a kettle, the service transactions line up vertically.

Vertical Pattern

This phenomenon describes a situation where transactions called by different service requests experience delayed response time due to a shortage of the same system resource. For example, when many different transactions are affected by a single database lock, a vertical streaming pattern is formed. Finding the common factor within this pattern is relatively easy. If the plotted dots in this pattern are from multiple different WAS instances, it is usually due to a problem with a system resource outside of the WAS; if the plotted dots are from a single WAS instance, then it is due to a system resource within the WAS. Also, if the plotted dots are from the same service request, a shared resource within the application (usually the DB) is the problem.

During web system tuning, administrators should give higher priority to transactions in a vertical streaming pattern than to any single plotted dot with an extremely high response time. A single dot with a delayed response time may have its own specific issues, but a group of lined-up transactions may indicate a significant problem within the entire system; a simple heuristic for surfacing such groups is sketched below.
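As a rough illustration, candidate vertical streaks can be surfaced by bucketing transactions by finish time and counting slow ones per bucket. The thresholds here are invented; in practice the operator judges the pattern visually.

```python
# Hypothetical detector for "vertical streaming" candidates: bucket
# transactions by finish time, then flag buckets with many slow ones.
from collections import Counter

def vertical_streaks(transactions, bucket_s=5.0, slow_s=2.0, min_count=20):
    """transactions: iterable of (finish_time_s, duration_s) pairs."""
    slow_per_bucket = Counter(
        int(finish // bucket_s)
        for finish, duration in transactions
        if duration >= slow_s
    )
    return [
        (b * bucket_s, count)
        for b, count in sorted(slow_per_bucket.items())
        if count >= min_count
    ]

# With the generated data from the scatter sketch above, this flags the
# buckets around t = 400s where the injected incident clusters:
# vertical_streaks(zip(finish_times, durations))
```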

Layered Cake Pattern

As throughput increases, horizontal lines may form in the X-View. If there is no relationship between an increase in throughput and an increase in the number of horizontal lines, then this pattern is not applicable. Multiple applications with frequent service requests may also form a horizontal line, but in this case, even if the throughput increases, the number of horizontal lines will not increase along with it.

Horizontal or Layer Pattern

The Layered Cake pattern is a phenomenon that can be seen when many transactions are forced to wait the same amount of time until a shared resource becomes available. As throughput increases, the probability of waiting becomes higher and the response time becomes longer; the response time lengthens uniformly across all transactions that use the shared resource, and thus the horizontal lines form in the X-View. In this case, just investigating whether the transactions in the line come from the same WAS instance can provide an important clue to the root cause of the problem. In addition, looking for resources with a parameter that indicates wait time (the time difference between one layer and the next) is also necessary.

Matrix Pattern

Named after the opening scene of the movie The Matrix, this phenomenon describes a pattern where many short vertical (and sometimes horizontal) lines are spread all over the screen.

It can be seen when a resource bottleneck causes problems for only some of the applications, and when it happens evenly spread out over time.

"Matrix" Pattern

An example is a database where Dirty Read is not allowed, so there exists locking at a level of at least READ_COMMITTED. The Matrix pattern does not happen due to just a single resource issue and is not associated with a typical H/W resource problem; it has been observed in environments with Sybase or MS-SQL databases. Once the Matrix pattern is observed, the problem cannot be resolved without adjusting the LOCK LEVEL of the entire database. A fundamental adjustment of the LOCK LEVEL and a consequential adjustment of the application are inevitable.

Waterfall Pattern

The plotted dots resemble a series of waterfalls. The Waterfall pattern can be seen when a specific resource suddenly experiences a shortage or lock, then becomes normal again, and the pattern repeats multiple times.

Water Fall Pattern

There is usually no indication that throughput or response time suddenly increases at the time the Waterfall pattern appears, but the requests that call on the problem resource must have increased. In this case, usage of the applicable resource is not proportionate to the overall response time until the resource is depleted; the Waterfall pattern occurs at the point when the resource is depleted. Another characteristic is that, amidst the pattern, short transactions of less than 3 seconds can still be seen. This means that the problem resource responsible for the Waterfall pattern is not one used by the entire application.

Waterfall patterns, as seen in the second screenshot, share a fixed height as one of their characteristics. The Waterfall pattern occurs when a resource that is used for a short time is suddenly depleted (full, overflow), and services are suspended temporarily until the necessary resource is refreshed and the suspension is released.

Comparison

Lastly, the screenshots below are two types of graph captured from the same production system at the same time.

Line Graph

The red box indicates a performance problem, although this is not at all visible on the line graph.

Response Time Scatter Graph

Business performance management (BPM) (or corporate performance management, enterprise performance management, operational performance management) is a set of processes that help organizations optimize their business performance. It is a framework for organizing, automating and analyzing business methodologies, metrics, processes and systems that drive business performance.

BPM is seen as the next generation of business intelligence (BI). BPM helps businesses make efficient use of their financial, human, material and other resources.

For years, owners have sought to drive strategy down and across their organizations; they have struggled to transform strategies into actionable metrics and have grappled with meaningful analysis to expose the cause-and-effect relationships that, if understood, could give profitable insight to their operational decision makers.

Now corporate performance management (CPM) software and methods allow a systematic, integrated approach that links enterprise strategy to core processes and activities. "Running by the numbers" now means something, as planning, budgeting, analysis and reporting can give the measurements that empower management decisions.

History

Reference to non-business performance management occurs in Sun Tzu's The Art of War. Sun Tzu claims that to succeed in war, one should have full knowledge of one's own strengths and weaknesses and full knowledge of one's enemy's strengths and weaknesses. Lack of either one might result in defeat. A certain school of thought draws parallels between the challenges in business and those of war, specifically:

• collecting data - both internal and external
• discerning patterns and meaning in the data (analyzing)
• responding to the resultant information

Prior to the start of the Information Age in the late 20th century, businesses sometimes took the trouble to laboriously collect data from non-automated sources. As they lacked computing resources to properly analyze the data they often made commercial decisions primarily on the basis of intuition.

As businesses started automating more and more systems, more and more data became available. However, collection remained a challenge due to a lack of infrastructure for data exchange or due to incompatibilities between systems. Reports on the data gathered sometimes took months to generate. Such reports allowed informed long-term strategic decision-making. However, short-term tactical decision-making continued to rely on intuition.

In modern businesses, increasing standards, automation, and technologies have led to vast amounts of data becoming available. Data warehouse technologies have set up repositories to store this data. Improved ETL and, more recently, Enterprise Application Integration tools have increased the speed of collecting data. OLAP reporting technologies have allowed faster generation of new reports which analyze the data. Business intelligence has now become the art of sifting through large amounts of data, extracting useful information and turning that information into actionable knowledge.

In 1989 Howard Dresner, a research analyst at Gartner, popularized "Business Intelligence" as an umbrella term to describe a set of concepts and methods to improve business decision-making by using fact-based support systems. Performance management is built on a foundation of BI, but marries it to the planning and control cycle of the enterprise, with enterprise planning, consolidation and modeling capabilities.

The term "BPM" is now becoming confused with "Business Process Management", and many are converting to the term "Corporate Performance Management" or "Enterprise Performance Management".

What is BPM?

BPM involves consolidation of data from various sources, querying, and analysis of the data, and putting the results into practice.

BPM enhances processes by creating better feedback loops. Continuous and real-time reviews help to identify and eliminate problems before they grow. BPM's forecasting abilities help the company take corrective action in time to meet earnings projections. Forecasting is characterized by a high degree of predictability, which is put to good use in answering what-if scenarios. BPM is useful in risk analysis, in predicting outcomes of merger and acquisition scenarios, and in coming up with a plan to overcome potential problems.

BPM provides key performance indicators (KPIs) that help companies monitor efficiency of projects and employees against operational targets.

Methodologies

There are various methodologies for implementing BPM, which gives companies a top-down framework by which to align planning and execution, strategy and tactics, and business-unit and enterprise objectives. These methodologies include Six Sigma, the balanced scorecard, activity-based costing, total quality management, economic value-add, and integrated strategic measurement. The balanced scorecard is the most widely adopted performance management methodology. Methodologies on their own cannot deliver a full solution to an enterprise's CPM needs. Many pure-methodology implementations fail to deliver the anticipated benefits because they are not integrated with the fundamental CPM processes.

Metrics / Key Performance Indicators

For business data analysis to become a useful tool, however, it is essential that an enterprise understand its goals and objectives; essentially, it must know the direction in which it wants the enterprise to progress. To help with this analysis, key performance indicators (KPIs) are laid down to assess the present state of the business and to prescribe a course of action.

Metrics and key performance indicators (KPIs) are critical in prioritizing what has to be measured. The methodology used helps in determining the metrics to be used by the organization. It is frequently said that one cannot manage what one cannot measure. Identifying the key metrics and determining how they are to be measured helps organizations monitor performance across the board without getting deluged by a surfeit of data, a scenario plaguing most companies today.

More and more organizations have started to speed up the availability of data. In the past, data only became available after a month or two, which did not help managers react swiftly enough. Recently, banks have tried to make data available at shorter intervals and have reduced delays. For example, for businesses which have a higher operational/credit risk loading (such as credit cards and "wealth management"), a large multinational bank makes KPI-related data available weekly, sometimes offering a daily analysis of numbers; real-time dashboards are also provided. This means data usually becomes available within 24 hours, necessitating automation and the use of IT systems.

Most of the time, BPM simply means the use of several financial and non-financial metrics or key performance indicators to assess the present state of the business and to prescribe a course of action.

Some of the areas in which top management could gain knowledge by using BPM:

1. Customer-related numbers:
   1. New customers acquired
   2. Status of existing customers
   3. Attrition of customers (including breakup by reason for attrition)
2. Turnover generated by segments of the customers - these could be demographic filters.
3. Outstanding balances held by segments of customers and terms of payment - these could be demographic filters.
4. Collection of bad debts within customer relationships.
5. Demographic analysis of individuals (potential customers) applying to become customers, and the levels of approval, rejections and pending numbers.
6. Delinquency analysis of customers behind on payments.
7. Profitability of customers by demographic segments and segmentation of customers by profitability.
8. Campaign management
9. Real-time dashboard on key operational metrics
   1. Overall Equipment Effectiveness
10. Click stream analysis on a website
11. Key product portfolio trackers
12. Marketing channel analysis
13. Sales data analysis by product segments
14. Call center metrics

This is more an inclusive list than an exclusive one. The above more or less describes what a bank would do, but could also refer to a telephone company or similar service sector company.
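As a small illustration of how two of the customer-related numbers above might be computed, here is a hedged sketch; the customer records and field names are entirely invented.

```python
# Hypothetical computation of two KPIs from the list above:
# customer attrition rate and delinquency rate. All data is invented.
customers = [
    # (customer_id, is_active, months_overdue)
    (1, True, 0), (2, True, 2), (3, False, 0),
    (4, True, 0), (5, False, 1), (6, True, 4),
]

total = len(customers)
churned = sum(1 for _, active, _ in customers if not active)
active = total - churned
delinquent = sum(1 for _, act, overdue in customers if act and overdue > 0)

print(f"attrition rate:   {churned / total:.0%}")      # churned / total base
print(f"delinquency rate: {delinquent / active:.0%}")  # overdue among active
```

In a real BPM deployment these figures would come from the data warehouse and be refreshed on the cadence discussed earlier (daily or weekly), not from an in-memory list.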

What is important is:

1. KPI-related data which is consistent and correct, and which provides insight into operational aspects of a company.
2. Timely availability of KPI-related data.
3. KPIs designed to directly reflect the efficiency and effectiveness of a business.
4. Information presented in a format which aids decision making for top management and decision makers.
5. Ability to discern patterns or trends from organized information.

BPM integrates the company's processes with CRM or ERP. Companies become able to gauge customer satisfaction, control customer trends and influence shareholder value.

Application software types

People working in business intelligence have developed tools that ease the work, especially when the intelligence task involves gathering and analyzing large amounts of unstructured data.

Tool categories commonly used for business performance management include:

• OLAP — Online Analytical Processing, sometimes simply called "Analytics" (based on dimensional analysis and the so-called "hypercube" or "cube")

• Scorecarding, dashboarding and data visualization
• Data warehouses
• Document warehouses
• Text mining
• DM — Data mining
• BPM — Business performance management
• EIS — Executive information systems
• DSS — Decision support systems
• MIS — Management information systems
• SEMS — Strategic Enterprise Management Software
• Business dashboards

Designing and implementing a business performance management program

When implementing a BPM program one might like to pose a number of questions and take a number of resultant decisions, such as:

• Goal Alignment queries: The first step is determining what the short and medium term purpose of the program will be. What strategic goal(s) of the organization will be addressed by the program? What organizational mission/vision does it relate to? A hypothesis needs to be crafted that details how this initiative will eventually improve results / performance (i.e. a strategy map).

• Baseline queries: Current information gathering competency needs to be assessed. Do we have the capability to monitor important sources of information? What data is being collected and how is it being stored? What are the statistical parameters of this data, e.g., how much random variation does it contain? Is this being measured?

• Cost and risk queries: The financial consequences of a new BI initiative should be estimated. It is necessary to assess the cost of the present operations and the increase in costs associated with the BPM initiative. What is the risk that the initiative will fail? This risk assessment should be converted into a financial metric and included in the planning.

• Customer and stakeholder queries: Determine who will benefit from the initiative and who will pay. Who has a stake in the current procedure? What kinds of customers / stakeholders will benefit directly from this initiative? Who will benefit indirectly? What are the quantitative / qualitative benefits? Is the specified initiative the best way to increase satisfaction for all kinds of customers, or is there a better way? How will customer benefits be monitored? What about employees, shareholders, and distribution channel members?

• Metrics-related queries: These information requirements must be operationalized into clearly defined metrics. One must decide what metrics to use for each piece of information being gathered. Are these the best metrics? How do we know that? How many metrics need to be tracked? If this is a large number (it usually is), what kind of system can be used to track them? Are the metrics standardized, so they can be benchmarked against performance in other organizations? What are the industry standard metrics available?

• Measurement Methodology-related queries: One should establish a methodology or a procedure to determine the best (or acceptable) way of measuring the required metrics. What methods will be used, and how frequently will data be collected? Are there any industry standards for this? Is this the best way to do the measurements? How do we know that?

• Results-related queries: The BPM program should be monitored to ensure that objectives are being met. Adjustments in the program may be necessary. The program should be tested for accuracy, reliability, and validity. How can it be demonstrated that the BI initiative, and not something else, contributed to a change in results? How much of the change was probably random?
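
Returning to the baseline question above about the statistical parameters of KPI data, one simple check is to compute summary statistics for a KPI series. The sketch below uses only Python's standard library; the daily values are invented.

# Quantifying random variation in a KPI series (hypothetical daily values).
import statistics

kpi = [102, 98, 105, 97, 101, 110, 95, 103]

mean = statistics.mean(kpi)
stdev = statistics.stdev(kpi)
cv = stdev / mean   # coefficient of variation: relative random variation

print(f"mean={mean:.1f}, stdev={stdev:.1f}, cv={cv:.1%}")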

Change Management (people)

Organizational Change Management is a structured approach to transitioning individuals, teams, and organizations from a current state to a desired future state. Organizational Change Management is characterized by a shift in behaviors and attitudes in people to adopt and embrace the future state.

Organizational Change Management must be differentiated from Change Management as the term is used in project management, where the term "Change Management" is used to mean the practice of managing changes to technical or project specifications in a rigorous way in order to prevent scope creep.

Theories of change

The evolution of the change management field stems from psychology, business and engineering. Hence, some models are derived from an organization development perspective whereas others are based on the individual behavioral model. For this reason, this section is divided into two sub-categories: Individual Change Management and Organizational Change Management.

Individual change management

A number of models are available for understanding the transitioning of individuals through the phases of change.

Unfreeze-Change-Refreeze

An early model of change developed by Kurt Lewin described change as a three-stage process. The first stage he called "unfreezing". It involved overcoming inertia and dismantling the existing "mindset". Defense mechanisms have to be bypassed. In the second stage the change occurs. This is typically a period of confusion and transition. We are aware that the old ways are being challenged but we do not have a clear picture to replace them with yet. The third and final stage he called "refreezing". The new mindset is crystallizing and one's comfort level is returning to previous levels. Rosch (2002) argues that this often quoted three-stage version of Lewin’s approach is an oversimplification and that his theory was actually more complex and owed more to physics than behavioral science. Later theorists have however remained resolute in their interpretation of the force field model. This three-stage approach to change is also adapted by Hughes (1991) who makes reference to: "exit" (departing from an existing state), "transit" (crossing unknown territory), and "entry" (attaining a new equilibrium). Tannenbaum & Hanna (1985) suggest a change process where movement is from "homeostasis and holding on", through "dying and letting go" to "rebirth and moving on". Although elaborating the process to five stages, Judson (1991) still proposes a linear, staged model of implementing a change:

(a) Analyzing and planning the change;

(b) Communicating the change;

(c) Gaining acceptance of new behaviors;

(d) Changing from the status quo to a desired state, and

(e) Consolidating and institutionalizing the new state.

Kübler-Ross

Some change theories are based on derivatives of the Kübler-Ross model from Elisabeth Kübler-Ross's book, "On Death and Dying." The stages of Kübler-Ross's model describe the personal and emotional states that a person typically encounters when dealing with the loss of a loved one. Derivatives of her model applied in other settings, such as the workplace, show that similar emotional states are encountered as individuals are confronted with change.

Formula for Change

A Formula for Change was developed by Richard Beckhard and David Gleicher and is sometimes referred to as Gleicher's Formula. The Formula illustrates that the combination of organizational dissatisfaction, vision for the future and the possibility of immediate, tactical action must be stronger than the resistance within the organization in order for meaningful changes to occur.
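
The formula is commonly written as

D × V × F > R

where D is dissatisfaction with the present situation, V is the vision of what is possible, F is the first concrete steps that can be taken toward the vision, and R is the resistance to change. Because the left-hand side is a product, if any one factor is absent or near zero, the product cannot overcome the resistance.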

ADKAR

The ADKAR model for individual change management was developed by Prosci with input from more than 1000 organizations from 59 countries. This model describes five required building blocks for change to be realized successfully on an individual level. The building blocks of the ADKAR Model include:

1. Awareness – of why the change is needed
2. Desire – to support and participate in the change
3. Knowledge – of how to change
4. Ability – to implement new skills and behaviors
5. Reinforcement – to sustain the change

Organizational change management

Organizational change management includes processes and tools for managing the people side of the change at an organizational level. These tools include a structured approach that can be used to effectively transition groups or organizations through change. When combined with an understanding of individual change management, these tools provide a framework for managing the people side of change. People who are confronted by change will experience a form of culture shock as established patterns of corporate life are altered, or are viewed by people as being threatened. Employees will typically experience a form of "grief" or loss (Stuart, 1995).

Dynamic conservatism

This model by Donald Schön explores the inherent tendency of organizations to be conservative and to protect themselves from constant change. Schön recognises that, given the increasing pace of change, this process must become far more flexible, a process of 'learning'. Very early on, Schön recognised the need for what is now termed the 'learning organization'. These ideas are further expanded within his framework of 'reflection-in-action'[3], the mapping of a process by which this constant change can be coped with.

The role of the management

Management's responsibility (and that of administration in the case of political changes) is to detect trends in the macroenvironment as well as in the microenvironment, so as to be able to identify changes and initiate programs. It is also important to estimate what impact a change will likely have on employee behavior patterns, work processes, technological requirements, and motivation. Management must assess what employee reactions will be and craft a change program that will provide support as workers go through the process of accepting change. The program must then be implemented, disseminated throughout the organization, monitored for effectiveness, and adjusted where necessary. Organizations exist within a dynamic environment that is subject to change due to the impact of various change "triggers", such as evolving technologies. To continue to operate effectively within this environmental turbulence, organizations must be able to change themselves in response to internally and externally initiated change. However, change will also impact the individuals within the organization. Effective change management requires an understanding of the possible effects of change upon people, and of how to manage potential sources of resistance to that change. Change can be said to occur where there is an imbalance between the current state and the environment.

Other Approaches to Managing Change

Working with assumptions

• Process Oriented Psychology (Arnold Mindell) addresses the field in which each human relationship exists. Its application field, Worldwork, aims to transform systems by shifting the roles that people unconsciously hold in a system.

• Dialogue (by David Bohm) is a new form of communication in large groups that is based on the suspension of assumptions, thus letting the common knowledge of a group emerge.

• Appreciative Inquiry, one of the most frequently applied approaches to organizational change, is partly based on the assumption that change in a system is instantaneous ('Change at the Speed of Imagination')

• Scenario Planning: Scenario planning provides a platform for anticipating change by asking management and employees to consider different future market possibilities in which their organizations might find themselves.

• Theory U of Otto Scharmer, who describes a process in which change strategies are based on the emerging future rather than on lessons from the past.[4]

The constructionist principle

The map is not the territory: The map/territory relation, which finds support in neuroscience, is used to signify that individual people do not have access to absolute knowledge of reality, but in fact only have access to a set of beliefs about reality that they have built up over time. It has been developed into a model by Chris Argyris called the Ladder of Inference[5]. As a consequence, communication in change processes needs to make sure that information about change and its consequences is presented in such a way that people with different belief systems can access this information. Methods that are based on the map/territory relation help people to:

• become more aware of their own thinking and reasoning (reflection),
• make their thinking and reasoning more visible to others (advocacy), and
• inquire into others' thinking and reasoning (inquiry).

Some methodological frameworks that are based on this principle are:

• Neuro-linguistic programming (NLP), an eclectic school of psychotherapy developed by Richard Bandler, John Grinder, Robert Dilts, and others;

• Circular Questioning and other techniques basically developed in Systemic Family Therapy;

• Gestalt Psychology, a theory of mind and brain that proposes that the operational principle of the brain is holistic, parallel, and analog, with self-organizing tendencies;

• The concept of the Fifth Discipline by Peter Senge and other management thinkers;
• Scenario Thinking, a method that helps people to create stories about the future.

Personality psychology studies personality based on theories of individual differences. One emphasis in this area is to construct a coherent picture of a person and his or her major psychological processes (Bradberry, 2007). Another emphasis views personality as the study of individual differences, in other words, how people differ from each other. A third area of emphasis examines human nature and how all people are similar to one other. These three viewpoints merge together in the study of personality.

Personality can be defined as a dynamic and organized set of characteristics possessed by a person that uniquely influences his or her cognitions, motivations, and behaviors in various situations (Ryckman, 2004). The word "personality" originates from the Latin persona, which means mask. Significantly, in the theatre of the ancient Latin-speaking world, the mask was not used as a plot device to disguise the identity of a character, but rather was a convention employed to represent or typify that character.

The pioneering American psychologist, Gordon Allport (1937) described two major ways to study personality, the nomothetic and the idiographic. Nomothetic psychology seeks general laws that can be applied to many different people, such as the principle of self-actualization, or the trait of extraversion. Idiographic psychology is an attempt to understand the unique aspects of a particular individual. The study of personality has a rich and varied history in psychology, with an abundance of theoretical traditions. Some psychologists have taken a highly scientific approach, whereas others have focused their attention on theory development. There is also a substantial emphasis on the applied field of personality testing with people.

Philosophical assumptions

Many of the ideas developed by the historical and modern personality theorists stem from the basic philosophical assumptions they hold. A good textbook for understanding the basic assumptions behind personality theories is Hjelle and Ziegler (1992); this book is now out of print, but similar views are articulated by Ryckman (2000). Psychology is not a purely empirical discipline, as it brings in elements of art, science, and philosophy to draw general conclusions. The following five categories are some of the most fundamental philosophical assumptions on which theorists disagree:

Freedom versus Determinism

The debate over whether we have control over our own behavior and understand the motives behind it (Freedom), or if our behavior is basically determined by some other force over which we might not have control (Determinism).

Heredity versus Environment

Personality is thought to be determined largely by genetics and heredity, by environment and experiences, or by some combination of the two. There is evidence for all of these possibilities. Ruth Benedict was one of the leading anthropologists to study the impact of one's culture on the personality and behavioural traits of the individual.

Uniqueness versus Universality

The argument over whether we are all unique individuals (Uniqueness) or if humans are basically similar in their nature (Universality).

Proactive versus Reactive

Do we primarily act through our own initiative (Proactive), or do we react to outside stimuli (Reactive)?

Optimistic versus Pessimistic

Finally, whether or not we can alter our personalities (Optimistic) or if they remain the same throughout our whole lives (Pessimistic).

Optimistic = looking at the present and future with hope.

Pessimistic = looking at the present and future without hope.

Sigmund Freud: In Freud's psychoanalytic theory of personality, human psychological make-up comprises three structural components: the id, the ego, and the superego.

a) The id represents the instinctual core of the person and is irrational, impulsive, and obedient to the pleasure principle. It consists of everything psychological that is inherited and present at the time of birth. The id is a storehouse of all instincts, containing in its dark depths all the wishes and desires that unconsciously direct and determine our behavior. The id is largely childish, irrational, never satisfied, demanding, and destructive of others, but it is the foundation upon which all other parts of personality are erected. Reflex actions and primary process thinking are used by the id to obtain gratification of instinctual urges.

i) Primary process thinking: attempts to discharge tension by forming a mental image of a desirable means of releasing it. This kind of tension release is temporary and mental, and would not satisfy the real need.
ii) Reflex actions: tension release is reflected in the behavior of the individual, such as blinking of the eyes, raising the eyebrows, rubbing the cheeks, etc.

The id is instinctive, often unconscious and unrecognized, and is unaffected by socially or culturally determined restrictions. The id basically represents an individual's natural urges and feelings.

b) Ego: The ego represents the rational component of personality and is governed by the reality principle. Its task, through secondary process thinking, is to provide the individual with a suitable plan of action in order to satisfy the demands of the id within the restrictions of the social world and the individual's conscience. The ego constantly works to keep a healthy psychological balance between the id's impulsive demands and the superego's restrictive guidance. The ego is the rational master: it is said to be the executive part of the personality because it controls the gateway to action, selects the features of the environment to which it will respond, and decides which instincts will be satisfied. The ego performs its tasks by –

i) Observing accurately what exists in the outside world (perceiving).
ii) Recording these experiences carefully (remembering).
iii) Modifying the external world in such a way as to satisfy the instinctual wishes (acting).

c) Superego: The superego, the final structure to develop, represents the moral branch of personality. As the child grows and absorbs parental and cultural attitudes and values, he develops a superego. It is also labeled the "ego-ideal" that tells an individual what is acceptable. The superego is the moral segment of the human personality. Its primary concern is to determine whether the action proposed by the ego is right or wrong, so that the individual acts in accordance with the values and standards of society.

The superego, in some respects, is the antithesis of the id in Freud's theory of personality. The instinctual drives of the id and the restraints of the superego are constantly battling each other and seeking to break out of the bonds of reason – the ego. As a person becomes torn between these conflicts, friction develops and results in anxiety, an ominous feeling that all is not well. Anxiety creates tension, and as such a person resorts to defense mechanisms in order to reduce it. These defense mechanisms may include aggression, repression, rationalization, reaction formation, projection, and introjection.

Critics of personality theory claim that personality is "plastic" across time, places, moods, and situations. Changes in personality may indeed result from diet (or lack thereof), medical effects, significant events, or learning. However, most personality theories emphasize stability over fluctuation.

Trait theory

In psychology, Trait theory is a major approach to the study of human personality. Trait theorists are primarily interested in the measurement of traits, which can be defined as habitual patterns of behavior, thought, and emotion. According to this perspective, traits are relatively stable over time, differ among individuals (e.g. some people are outgoing whereas others are shy), and influence behavior.

Gordon Allport was an early pioneer in the study of traits, which he sometimes referred to as dispositions. In his approach, central traits are basic to an individual's personality, whereas secondary traits are more peripheral. Common traits are those recognized within a culture and may vary between cultures. Cardinal traits are those by which an individual may be strongly recognized. Since Allport's time, trait theorists have focused more on group statistics than on single individuals. Allport called these two emphases "nomothetic" and "idiographic," respectively.

There is a nearly unlimited number of potential traits that could be used to describe personality. The statistical technique of factor analysis, however, has demonstrated that particular clusters of traits reliably correlate together. Hans Eysenck has suggested that personality is reducible to three major traits. Other researchers argue that more factors are needed to adequately describe human personality. Many psychologists currently believe that five factors are sufficient.
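
To illustrate the factor-analysis step, here is a minimal sketch using scikit-learn (version 0.24 or later for the rotation option); the "questionnaire" data is randomly generated stand-in data rather than a real personality inventory, so the factors it yields are not meaningful.

# Extracting five factors from simulated questionnaire responses.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))   # 500 respondents, 20 items (simulated)

# Fit a five-factor model with a varimax rotation, commonly used in
# trait research to make the factor loadings easier to interpret.
fa = FactorAnalysis(n_components=5, rotation="varimax")
fa.fit(X)

# Each row is a factor; each column is an item's loading on that factor.
print(fa.components_.shape)      # (5, 20)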

Virtually all trait models, and even ancient Greek philosophy, include extraversion vs. introversion as a central dimension of human personality. Another prominent trait that is found in nearly all models is Neuroticism, or emotional instability.

Two taxonomies

Eysenck's three factor model contains the traits of extraversion, neuroticism, and psychoticism. The five factor model contains openness, extraversion, neuroticism, agreeableness, and conscientiousness. These traits are the highest-level factors of a hierarchical taxonomy based on the statistical technique of factor analysis. This method produces factors that are continuous, bipolar, can be distinguished from temporary states, and can describe individual differences. Both approaches extensively use self-report questionnaires. The factors are intended to be orthogonal (uncorrelated), though there are often small positive correlations between factors. The five factor model in particular has been criticized for losing the orthogonal structure between factors. Eysenck has argued that fewer factors are superior to a larger number of partly related ones. Although these two approaches are comparable because of the use of factor analysis to construct hierarchical taxonomies, they differ in the organization and number of factors.

Whatever the causes, psychoticism sets the two approaches apart, as the five factor model contains no such trait. Moreover, beyond simply being a different high-level factor, psychoticism, unlike any of the other factors in either approach, does not fit a normal distribution curve: scores are rarely high, which skews the distribution. When scores are high, however, there is considerable overlap with psychiatric conditions such as antisocial and schizoid personality disorders. Similarly, high scorers on neuroticism are more susceptible to sleep and psychosomatic disorders. Five factor approaches can also predict future mental disorders.

Lower order factors

Similarities between lower order factors for psychoticism and the factors of openness, agreeableness, and conscientiousness (from Matthews, Deary & Whiteman, 2003)

There are two higher order factors that both taxonomies clearly share: extraversion and neuroticism. Both approaches broadly accept that extraversion is associated with sociability and positive affect, whereas neuroticism is associated with emotional instability and negative affect. Many lower order factors are similar between the two taxonomies. For instance, both approaches contain factors for sociability/gregariousness, for activity levels, and for assertiveness within the higher order factor, extraversion. However, there are differences too. First, the three-factor approach contains nine lower order factors and the five-factor approach has six. Eysenck's psychoticism factor incorporates some of the polar opposites of the lower order factors of openness, agreeableness and conscientiousness: a high scorer on tough-mindedness in psychoticism would score low on tender-mindedness in agreeableness. Most of the differences between the taxonomies stem from the three factor model's emphasis on fewer high-order factors.

Causality

Although both major trait models are descriptive, only the three factor model offers a detailed causal explanation. Eysenck suggests that different personality traits are caused by the properties of the brain, which themselves are the result of genetic factors.[18] In particular, the three factor model identifies the reticular system and the limbic system in the brain as key components, with the specific functions of mediating cortical arousal and emotional responses respectively. Eysenck advocates that extraverts have low levels of cortical arousal and introverts have high levels, leading extraverts to seek out more stimulation from socialising and being venturesome. Moreover, Eysenck surmised that there would be an optimal level of arousal after which inhibition would occur and that this would be different for each person. In a similar vein, the three factor approach theorizes that neuroticism is mediated by levels of arousal in the limbic system with individual differences arising because of variable activation thresholds between people. Therefore, highly neurotic people when presented with minor stressors, will exceed this threshold, whereas people low in neuroticism will not exceed normal activation levels, even when presented with large stressors. By contrast, proponents of the five factor approach assume a role of genetics and environment but offer no explicit causal explanation.

Given this emphasis on biology in the three factor approach it would be expected that the third trait, psychoticism, would have a similar explanation. However, the causal properties of this state are not well defined. Eysenck has suggested that psychoticism is related to testosterone levels and is an inverse function of the serotonergic system, but he later revised this, linking it instead to the dopaminergic system.

Type Theories

Personality type refers to the psychological classification of different types of people. Personality types are distinguished from personality traits, which come in different levels or degrees. According to type theories, for example, there are two types of people, introverts and extraverts. According to trait theories, introversion and extraversion are part of a continuous dimension, with many people in the middle. The idea of psychological types originated in the theoretical work of Carl Jung and William Marston whose work is reviewed in The Personality Code, a book by Dr. Travis Bradberry. Jung's seminal 1921 book on the subject is available in English as Psychological Types.

Building on the writings and observations of Carl Jung, during WWII Isabel Briggs Myers and her mother Katharine C. Briggs delineated personality types by constructing the Myers-Briggs Type Indicator. This model was later used by David Keirsey with a different understanding from Jung, Briggs and Myers.

The model is an older and more theoretical approach to personality, accepting extraversion and introversion as basic psychological orientations in connection with two pairs of psychological functions:

Perceiving functions: intuition and sensing (trust in conceptual/abstract models of reality or concrete sensory-oriented facts)

Judging functions: thinking and feeling (thinking as the prime-mover in decision-making or feelings as the prime-mover in decision-making).

Briggs and Myers also added another personality dimension to their type indicator in order to indicate whether a person has a more dominant judging or perceiving function. Therefore they included questions designed to indicate whether someone desires to either perceive events or have things done so that judgements can be made.

This personality typology has some aspects of a trait theory: it explains people's behaviour in terms of opposite fixed characteristics. In these more traditional models, the intuition factor is considered the most basic, dividing people into "N" or "S" personality types. An "N" is further assumed to be guided either by the thinking or objectification habit, or by feelings, and is divided into the "NT" (scientist, engineer) or "NF" (author, human-oriented leader) personality. An "S", by contrast, is assumed to be guided more by the perception axis, and is thus divided into the "SP" (performer, craftsman, artisan) and "SJ" (guardian, accountant, bureaucrat) personality. These four are considered basic, with the other two factors in each case (including always extraversion) less important. Critics of this traditional view have observed that the types are quite strongly stereotyped by professions, and thus may arise more from the need to categorize people for purposes of guiding their career choice. This, among other objections, led to the emergence of the five factor view, which is less concerned with behavior under work stress and more concerned with behavior in personal and emotional circumstances. Some critics have argued for more or fewer dimensions, while others have proposed entirely different theories (often assuming different definitions of "personality").

Socionics, a social psychology discipline founded by Aushra Augusta, equates Jung's function concept with information elements and aspects, which are elements of Antoni Kępiński's information metabolism theory.[1] The information elements are observed to have conflicts between themselves, which lead to persistent patterns of relation between two individuals.[2]

Type A personality: During the 1950s, Meyer Friedman and his co-workers defined what they called Type A and Type B behavior patterns. They theorized that intense, hard-driving Type A personalities had a higher risk of coronary disease because they are "stress junkies." Type B people, on the other hand, tended to be relaxed, less competitive, and lower in risk. There was also a Type AB mixed profile. Dr. Redford Williams, cardiologist at Duke University, refuted Friedman’s theory that Type A personalities have a higher risk of coronary heart disease; however, current research indicates that only the hostility component of Type A may have health implications. Type A/B theory has been extensively criticized by psychologists because it tends to oversimplify the many dimensions of an individual's personality.

Psychoanalytic Theories

Psychoanalytic theories explain human behaviour in terms of the interaction of various components of personality. Sigmund Freud was the founder of this school. Freud drew on the physics of his day (thermodynamics) to coin the term psychodynamics. Based on the idea of converting heat into mechanical energy, he proposed that psychic energy could be converted into behavior. Freud's theory places central importance on dynamic, unconscious psychological conflicts.

Freud divides human personality into three significant components: the ego, superego, and id. The id acts according to the pleasure principle, demanding immediate gratification of its needs regardless of external environment; the ego then must emerge in order to realistically meet the wishes and demands of the id in accordance with the outside world, adhering to the reality principle. Finally, the superego inculcates moral judgment and societal rules upon the ego, thus forcing the demands of the id to be met not only realistically but morally. The superego is the last function of the personality to develop, and is the embodiment of parental/social ideals established during childhood. According to Freud, personality is based on the dynamic interactions of these three components.

The channeling and release of sexual (libidinal) and aggressive energies, which ensue from the "Eros" (sex; instinctual self-preservation) and "Thanatos" (death; instinctual self-annihilation) drives respectively, are major components of his theory. It is important to note that Freud's broad understanding of sexuality included all kinds of pleasurable feelings experienced by the human body. Freud proposed five psychosexual stages of personality development:

1. Infantile Stage - birth until four to five years
a) Oral Stage - birth to approximately eighteen months
b) Anal Stage - eighteen months to three years
c) Phallic Stage - between three and five years

2. Latency Period - roughly from six years to puberty

3. Genital Stage - adolescence and adulthood

Heinz Kohut built on Freud's idea of transference. He used narcissism as a model of how we develop our sense of self. Narcissism is an exaggerated sense of self that is believed to exist in order to protect one's low self-esteem and sense of worthlessness. Kohut had a significant impact on the field by extending Freud's theory of narcissism and introducing what he called the 'self-object transferences' of mirroring and idealization. In other words, children need to idealize and emotionally "sink into" and identify with the idealized competence of admired figures such as parents or older siblings. They also need to have their self-worth mirrored by these people. These experiences allow them to learn the self-soothing and other skills that are necessary for the development of a healthy sense of self.

Another important figure in the world of personality theory is Karen Horney. She is credited with the development of the concepts of the "real self" and the "ideal self". She believed that all people have these two views of their own self. The "real self" is how you really are with regard to personality, values, and morals; the "ideal self" is a construct you apply to yourself to conform to social and personal norms and goals. The ideal self would be "I can be successful, I am CEO material"; the real self would be "I just work in the mail room, with not much chance of high promotion".

Behaviorist Theories

Behaviorists explain personality in terms of the effects external stimuli have on behavior; this was a radical shift away from Freudian philosophy. This school of thought was developed by B. F. Skinner, who put forth a model emphasizing the mutual interaction of the person, or "the organism", with its environment. Skinner believed that children do bad things because the behavior obtains attention that serves as a reinforcer. For example, a child cries because the child's crying in the past has led to attention: the response is the child crying, and the attention the child gets is the reinforcing consequence. According to this theory, people's behavior is formed by processes such as operant conditioning. Skinner put forward a 'three term contingency model' which helped promote analysis of behavior based on the 'Stimulus - Response - Consequence Model', in which the critical question is: "Under which circumstances or antecedent "stimuli" does the organism engage in a particular behavior or "response," which in turn produces a particular "consequence"?"

Richard Herrnstein extended this theory by accounting for attitudes and traits. An attitude develops as the response strength (the tendency to respond) in the presence of a group of stimuli becomes stable. Rather than describing conditional traits in non-behavioral language, response strength in a given situation accounts for the environmental portion. Herrnstein also saw traits as having a large genetic or biological component, as do most modern behaviorists.

Ivan Pavlov is another notable influence. He is well known for his classical conditioning experiments involving dogs, physiological studies that led him to discover the foundation of behaviorism as well as classical conditioning. Pavlov would begin his experiment by ringing a bell, which caused no response from the dog. He would then place food in front of the dog's face, causing the dog to salivate. Later, he would ring the bell as the food was presented. After this pairing was repeated several times, the dog would salivate at just the ring of the bell. These conditioning experiments can be adapted to many different types of experiments.

John B. Watson, the father of American behaviorism, made four major assumptions about radical behaviorism:

1. Evolutionary Continuity: The laws of behavior are applied equally to all living organisms, so we can study animals as simple models of complex human responses.

2. Reductionism: All behaviors are linked to physiology.

3. Determinism: Animals do not respond freely, they respond in a programmed way to external stimuli. Biological organisms respond to outside influences.

4. Empiricism: Only our actions are observable evidence of our personality. Psychology should involve the study of observable (overt) behavior.

All behaviorists focus on observable behavior. Thus there is no emphasis on unconscious motives, internal traits, introspection, or self analysis. Behavior modification is a form of therapy that applies the principles of learning to achieve changes in behavior.

Cognitive Theories

In cognitive theories, behavior is explained as guided by cognitions (e.g. expectations) about the world, especially those about other people. Cognitive theories are theories of personality that emphasize cognitive processes such as thinking and judging.

Albert Bandura, a social learning theorist, suggested that the forces of memory and emotions work in conjunction with environmental influences. Bandura is known mostly for his "Bobo doll experiment". In these experiments, Bandura videotaped a college student kicking and verbally abusing a Bobo doll. He then showed this video to a class of kindergartners who were getting ready to go out to play. When they entered the play room, they saw Bobo dolls and some hammers, and the people observing these children at play saw a group of children beating the doll. Bandura called this phenomenon observational learning, or modeling.

Early examples of approaches to cognitive style are listed by Baron (1982). These include Witkin's (1965) work on field dependency, Gardner's (1953) discovery that people have a consistent preference for the number of categories they use to categorise heterogeneous objects, and Block and Petersen's (1955) work on confidence in line discrimination judgments. Baron relates the early development of cognitive approaches to personality psychology. More central to this field have been:

• Self-efficacy work, dealing with confidence people have in abilities to do tasks (Bandura, 1997);

• Locus of control theory (Lefcourt, 1966; Rotter, 1966) dealing with different beliefs people have about whether their worlds are controlled by themselves or external factors;

• Attributional style theory (Abramson, Seligman and Teasdale, 1978) dealing with different ways in which people explain events in their lives. This approach builds upon locus of control, but extends it by stating that we also need to consider whether people attribute to stable causes or variable causes, and to global causes or specific causes.

Various scales have been developed to assess both attributional style and locus of control. Locus of control scales include those used by Rotter and later by Duttweiler, the Nowicki and Strickland (1973) Locus of Control Scale for Children, and various locus of control scales specifically in the health domain, most famously that of Kenneth Wallston and his colleagues, the Multidimensional Health Locus of Control Scale (Wallston et al., 1978). Attribution style has been assessed by the Attribution Style Questionnaire (Peterson et al., 1982), the Expanded Attribution Style Questionnaire (Peterson & Villanova, 1988), the Attributions Questionnaire (Gong-guy & Hammen, 1990), the Real Events Attribution Style Questionnaire (Norman & Antaki, 1988) and the Attribution Style Assessment Test (Anderson, 1988).

Walter Mischel (1999) has also defended a cognitive approach to personality. His work refers to "Cognitive Affective Units", and considers factors such as encoding of stimuli, affect, goal-setting and self-regulatory beliefs. The term "Cognitive Affective Units" shows how his approach considers affect as well as cognition.

Albert Ellis, an American cognitive-behavioral therapist, is considered by many to be the grandfather of cognitive-behavioral therapy. In 1955 Ellis developed Rational Therapy (RT), which later came to be known as Rational Emotive Behavior Therapy (REBT). REBT requires that the therapist help the client understand — and act on the understanding — that his personal philosophy contains common irrational beliefs that lead to his own emotional pain. Because thinking and emotion have a cause and effect relationship, Ellis believed that the thoughts we have become our emotions and the emotions we have become our thoughts. The basic theory of REBT is that the majority of people create their own emotional consequences, because to sustain an emotion it must have had some form of thought behind it. Ellis also created the A-B-C theory of personality: (A) is the activating event, which is followed by (B), the belief system that the person holds, and then (C), the emotional consequence. The theory states that (A) does not cause (C); rather, (B) causes (C). The emotional consequences are caused by what the person believes. For example, if a person is walking outside and a stranger in a car pulls up next to them asking for directions (A), and the person's belief system is that any stranger in a car who wants directions wants to hurt you (B), then the person fears that the stranger in the car is going to hurt them (C).

Aaron Beck, who is widely noted as the father of cognitive-behavioral therapy (CBT), suggested that nearly all psychological dilemmas can be redirected in a positive (helpful) manner by changing the suffering individual's thought processes. He has worked extensively on depression and suicide, and has redirected his theories towards borderline personality disorder and the various anxiety disorders (OCD, neurosis, phobias, PTSD, etc.). Extensive evidence supports the effectiveness of combining CBT with pharmacotherapy in treating the most severe psychiatric disorders, such as bipolar disorder and schizophrenia. Aaron Beck's continuing research in the field has seen growing success over time.

Humanistic theories

In humanistic psychology it is emphasized that people have free will and that they play an active role in determining how they behave. Accordingly, humanistic psychology focuses on the subjective experiences of persons, as opposed to forced, definitive factors that determine behavior. Abraham Maslow and Carl Rogers were proponents of this view, which is based on the "phenomenal field" theory of Combs and Snygg (1949).

Maslow spent much of his time studying what he called "self-actualizing persons", those who are "fulfilling themselves and doing the best that they are capable of doing". Maslow believed that all who are interested in growth move towards self-actualizing (growth, happiness, satisfaction) views. Many of these people demonstrate a trend in dimensions of their personalities. According to Maslow, the characteristics of self-actualizers include four key dimensions: 1) Awareness - maintaining constant enjoyment and awe of life. These individuals often experience a "peak experience", which he defined as an "intensification of any experience to the degree that there is a loss or transcendence of self". A peak experience is one in which an individual perceives an expansion of himself or herself, and detects a unity and meaningfulness in life. Intense concentration on an activity one is involved in, such as running a marathon, may invoke a peak experience. 2) Reality and problem centered - they have a tendency to be concerned with "problems" in their surroundings. 3) Acceptance/Spontaneity - they accept their surroundings and what cannot be changed. 4) Unhostile sense of humor/democratic - they do not like joking about others, which can be viewed as offensive. They have friends of all backgrounds and religions and hold very close friendships.

Maslow and Rogers emphasized a view of the person as an active, creative, experiencing human being who lives in the present and subjectively responds to current perceptions, relationships, and encounters. They disagreed with the dark, pessimistic outlook of those in the Freudian psychoanalysis ranks, viewing humanistic theories instead as positive and optimistic proposals which stress the tendency of the human personality toward growth and self-actualization. This progressing self will remain the center of its constantly changing world; a world that will help mold the self but not necessarily confine it. Rather, the self has the opportunity for maturation based on its encounters with this world. This understanding attempts to reduce the acceptance of hopeless redundancy. Humanistic therapy typically relies on the client for information about the past and its effect on the present; therefore the client dictates the type of guidance the therapist may initiate. This allows for an individualized approach to therapy. Rogers found that patients differ in how they respond to other people. Rogers tried to model a particular approach to therapy: he stressed the reflective or empathetic response. This response type takes the client's viewpoint and reflects back his or her feeling and the context for it. An example of a reflective response would be, "It seems you are feeling anxious about your upcoming marriage". This response type seeks to clarify the therapist's understanding while also encouraging the client to think more deeply and seek to fully understand the feelings they have expressed.

Biopsychological Theories

Around the 1990s, neuroscience entered the domain of personality psychology. Whereas previous efforts to identify personality differences relied upon simple, direct, human observation, neuroscience introduced powerful brain analysis tools like electroencephalography (EEG), positron emission tomography (PET), and functional magnetic resonance imaging (fMRI) to this study. One of the founders of this area of brain research is Richard Davidson of the University of Wisconsin-Madison. Davidson's research lab has focused on the role of the prefrontal cortex (PFC) and amygdala in manifesting human personality. In particular, this research has looked at hemispheric asymmetry of activity in these regions. Neuropsychological studies have illustrated how hemispheric asymmetry can affect an individual's personality (particularly in social settings), for instance in individuals with NLD (non-verbal learning disorder), which is marked by impaired processing of the nonverbal information controlled by the right hemisphere of the brain. Deficits arise in the areas of gross motor skills, the ability to organize visual-spatial relations, and adaptation to novel social situations. Frequently, a person with NLD is unable to interpret non-verbal cues, and therefore experiences difficulty interacting with peers in socially normative ways. An integrative, biopsychosocial approach to personality and psychopathology, linking brain and environmental factors to specific types of activity, is the hypostatic model of personality, created by Codrin Stefan Tapu (Tapu, 2001).

Personality tests

There are two major types of personality tests. Projective tests assume that personality is primarily unconscious and assess an individual by how he or she responds to an ambiguous stimulus, like an ink blot. The idea is that unconscious needs will come out in the person's response, e.g. a very hostile person may see images of destruction. Objective tests assume that personality is consciously accessible and measure it by self-report questionnaires. Research on psychological assessment has generally found that objective tests are more valid and reliable than projective tests.
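
As an illustration of how an objective, self-report measure is typically scored, here is a minimal sketch in Python; the scale, items, and reverse-keying are invented for the example and do not correspond to any of the instruments listed below.

# Scoring a hypothetical 4-item self-report extraversion scale.
# Responses use a 1-5 Likert scale; item 2 is reverse-keyed.
responses = {1: 4, 2: 2, 3: 5, 4: 3}
reverse_keyed = {2}

score = sum(6 - v if item in reverse_keyed else v
            for item, v in responses.items())
print(f"extraversion score: {score} (possible range 4-20)")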

Examples of personality tests include:

• Holland Codes
• Rorschach test
• Minnesota Multiphasic Personality Inventory
• Morrisby Profile
• Myers-Briggs Type Indicator
• Enneagram Type Indicator
• NEO PI-R
• Thematic Apperception Test
• Kelly's Repertory Grid

Critics have pointed to the Forer effect to suggest that some of these appear to be more accurate and discriminating than they really are.

Network performance management

Network performance management is the discipline of optimizing how networks function, trying to deliver the lowest latency, highest capacity, and maximum reliability despite intermittent failures and limited bandwidth. While availability management is critical, infrastructure reliability has improved to the point at which 99.9% availability is not uncommon. Given these improvements in device availability, companies are focusing more attention on performance management. By measuring how networked applications perform under normal circumstances, understanding how performance is impacted by infrastructure and application changes, and isolating the sources of above-normal latency, IT organizations can ensure problems are resolved quickly, mitigate risk, and take measured steps to optimize application performance.
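
To put the "three nines" figure in perspective, a quick back-of-the-envelope calculation shows how much downtime 99.9% availability still allows per year.

# Downtime implied by "three nines" (99.9%) availability.
hours_per_year = 365 * 24                            # 8760 hours
downtime = (1 - 0.999) * hours_per_year
print(f"{downtime:.2f} hours of downtime per year")  # about 8.76 hours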

Reliable and unreliable networks

Networks connect users or machines to one another using sets of well-defined protocols to govern how data is transmitted. Depending on the type of network and the goals of the application, the protocols may be optimized for specific characteristics:

• A best-effort network protocol tries to send data, but may lose some along the way to avoid congestion. IP and UDP are popular examples of this. Often, this kind of protocol is used for isochronous traffic such as voice over IP (VOIP).

• A reliable network guarantees delivery of traffic, favoring correctness and completeness over speed. TCP, which is the basis for most Internet protocols, including the HTTP protocol over which web applications are delivered, is the most common example. (The two delivery styles are contrasted in the sketch following this list.)
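
The difference between the two styles is visible directly in the socket API. The sketch below contrasts them in Python; the addresses and ports are placeholders, and the TCP connection will fail unless something is actually listening there.

# Best-effort (UDP) versus reliable (TCP) transmission in Python.
import socket

# UDP: fire-and-forget; no connection, no delivery guarantee.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", ("127.0.0.1", 9999))   # may be silently lost
udp.close()

# TCP: connection-oriented; the stack retransmits lost segments.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("127.0.0.1", 8080))            # raises an error if unreachable
tcp.sendall(b"hello")                       # delivered in order, or an error
tcp.close()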

Factors affecting network performance

Unfortunately, not all networks are the same. As data is broken into component parts (often known as frames, packets, or segments) for transmission, several factors can affect their delivery.

• Latency: It can take a long time for a packet to be delivered across intervening networks. In reliable protocols where a receiver acknowledges delivery of each chunk of data, it is possible to measure this as round-trip time (see the sketch after this list).

• Packet loss: In some cases, intermediate devices in a network will lose packets. This may be due to errors, to overloading of the intermediate network, or to intentional discarding of traffic in order to enforce a particular service level.

• Retransmission: When packets are lost in a reliable network, they are retransmitted. This incurs two delays: First, the delay from re-sending the data; and second, the delay resulting from waiting until the data is received in the correct order before forwarding it up the protocol stack.

• Throughput: The amount of traffic a network can carry is measured as throughput, usually in terms such as kilobits per second. Throughput is analogous to the number of lanes on a highway, whereas latency is analogous to its speed limit.
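
As a rough way to observe the latency factor above, one can time a TCP connection handshake, which requires one round trip; the host and port below are placeholders.

# Approximating round-trip time by timing a TCP handshake.
import socket
import time

host, port = "example.com", 80   # placeholder target

start = time.perf_counter()
conn = socket.create_connection((host, port), timeout=5)
rtt = time.perf_counter() - start
conn.close()

print(f"approximate RTT: {rtt * 1000:.1f} ms")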

These factors, and others (such as the performance of the network signaling on the end nodes, compression, encryption, concurrency, and so on), all affect the effective performance of a network. In some cases, the network may not work at all; in others, it may be slow or unusable. And because applications run over these networks, application performance suffers. Various intelligent solutions are available to ensure that traffic over the network is effectively managed to optimize performance for all users. See Traffic Shaping.

The performance management discipline

Network performance management consists of measuring, modeling, planning, and optimizing networks to ensure that they carry traffic with the speed, reliability, and capacity that is appropriate for the nature of the application and the cost constraints of the organization. Different applications warrant different blends of capacity, latency, and reliability. For example:

• Streaming video or voice can be unreliable (brief moments of static) but needs very low latency so that lags don't occur

• Bulk file transfer or e-mail must be reliable and have high capacity, but doesn't need to be instantaneous

• Instant messaging doesn't consume much bandwidth, but should be fast and reliable

Network performance management tasks and classes of tools

Network managers perform many tasks; these include performance measurement, forensic analysis, capacity planning and load-testing or load generation. They also work closely with application developers and IT departments who rely on them to deliver underlying network services.

• For performance measurement, operators typically measure the performance of their networks at different levels. They either use per-port metrics (how much traffic on port 80 flowed between a client and a server, and how long did it take) or they rely on end-user metrics (how fast did the login page load for Bob?).

o Per-port metrics are collected using flow-based monitoring and protocols such as Netflow (now standardized as IPFIX) or RMON.

o End-user metrics are collected through web logs, synthetic monitoring, or real user monitoring. An example is ART (application response time), which provides end-to-end statistics that measure quality of experience. (A minimal synthetic-monitoring sketch follows this list.)

• For forensic analysis, operators often rely on sniffers that break down the transactions by their protocols and can locate problems such as retransmissions or protocol negotiations.

• For capacity planning, modeling tools such as OPNET that project the impact of new applications or increased usage are invaluable.

• For load generation that helps to understand the breaking point, operators may use software or appliances that generate scripted traffic. Some hosted service providers also offer pay-as-you-go traffic generation for sites that face the public Internet. (A toy load generator appears in the second sketch below.)
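
Returning to the end-user metrics above, a trivial form of synthetic monitoring is scripting a request against a known page and recording how long it takes. The sketch below uses only Python's standard library; the URL is a placeholder.

# Synthetic monitoring: timing a scripted page load (an end-user metric).
import time
import urllib.request

url = "http://example.com/login"   # placeholder page to probe

start = time.perf_counter()
with urllib.request.urlopen(url, timeout=10) as resp:
    body = resp.read()
elapsed = time.perf_counter() - start

print(f"fetched {len(body)} bytes in {elapsed:.3f} s")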
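
For the load-generation task, a toy generator can be built with a thread pool issuing concurrent scripted requests; the target URL, request count, and concurrency below are assumptions, and a real tool would ramp load gradually and record latency percentiles.

# Toy load generator: N concurrent requests against a target URL.
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.com/"   # placeholder target
N = 50                        # number of requests to generate

def hit(_):
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            return resp.status
    except OSError:
        return None           # failures accumulate as the system nears breaking

with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(hit, range(N)))

ok = sum(1 for status in results if status == 200)
print(f"{ok}/{N} requests succeeded")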

Performance improvement

Performance improvement is the concept of measuring the output of a particular process or procedure, then modifying the process or procedure in order to increase the output, increase efficiency, or increase the effectiveness of the process or procedure. The concept of performance improvement can be applied to either individual performance, such as an athlete, or organizational performance, such as a racing team or a commercial enterprise.

In organizational development, performance improvement is the concept of organizational change in which the managers and governing body of an organization put into place and manage a program which measures the current level of performance of the organization and then generates ideas for modifying organizational behavior and infrastructure, which are put into place in order to achieve a better level of output. The primary goals of organizational improvement are to improve organizational effectiveness and organizational efficiency in order to improve the ability of the organization to deliver its goods and/or services and prosper in the marketplaces in which it competes. A third area sometimes targeted for improvement is organizational efficacy, which involves the process of setting organizational goals and objectives.

Performance improvement at the operational or individual employee level usually involves processes such as statistical quality control. At the organizational level, performance improvement usually involves softer forms of measurement such as customer satisfaction surveys which are used to obtain qualitative information about performance from the viewpoint of customers.

Performance defined

Performance is a measure of the results achieved. Performance efficiency is the ratio between effort expended and results achieved. The difference between current performance and the theoretical performance limit is the performance improvement zone.
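
Read one way, these definitions translate into simple arithmetic. The sketch below uses invented numbers and treats efficiency as results per unit of effort, which is one common reading of the ratio.

# Performance, efficiency, and the performance improvement zone
# (hypothetical units and values).
results_achieved = 80.0     # e.g., units produced
effort_expended = 100.0     # e.g., labor hours
theoretical_limit = 120.0   # best result the current platform allows

efficiency = results_achieved / effort_expended        # units per hour
improvement_zone = theoretical_limit - results_achieved

print(f"efficiency: {efficiency:.2f} units/hour")
print(f"improvement zone: {improvement_zone:.1f} units")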

Another way to think of performance improvement is to see it as improvement in four potential areas: first, the resource INPUT requirements (e.g., reduced working capital, material, replacement/reorder time, and set-up requirements); second, the THROUGHPUT requirements, often viewed as process efficiency, which is measured in terms of time, waste, and resource utilization; third, the OUTPUT requirements, often viewed from a cost/price, quality, and functionality perspective; and fourth, the OUTCOME requirements: did it end up making a difference?

Performance is an abstract concept and must be represented by concrete, measurable phenomena or events in order to be measured. Baseball athlete performance is abstract, covering many different types of activities. Batting average is a concrete measure of a particular performance attribute for a particular game role, batting, in the game of baseball.

Performance assumes an actor of some kind but the actor could be an individual person or a group of people acting in concert. The performance platform is the infrastructure or devices used in the performance act.

There are two main ways to improve performance: improving the measured attribute by using the performance platform more effectively, or modifying the performance platform itself, which in turn allows a given level of use to be more effective in producing the desired output.

For instance, in several sports such as tennis and golf, there have been technological improvements in the apparatuses used in these sports. The improved apparatus in turn allows players to achieve better performance with no improvement in skill by purchasing new equipment. The apparatus, the golf club and golf ball or the tennis racket, provide the player with a higher theoretical performance limit.

Levels

Performance improvement can occur at different levels:

• an individual performer
• a team
• an organizational unit
• the organization itself

Cycle

Business performance management and improvement can be thought of as a cycle:

1. Performance planning, where goals and objectives are established
2. Performance coaching, where a manager intervenes to give feedback and adjust performance
3. Performance appraisal, where individual performance is formally documented and feedback delivered

Performance appraisal

Performance appraisals are a regular review of employee performance within organizations.

Generally, the aims of a scheme are:

• Give feedback on performance to employees.
• Identify employee training needs.
• Document criteria used to allocate organizational rewards.
• Form a basis for personnel decisions: salary increases, promotions, disciplinary actions, etc.
• Provide the opportunity for organizational diagnosis and development.
• Facilitate communication between employee and administrator.
• Validate selection techniques and human resource policies to meet federal Equal Employment Opportunity requirements.

A common approach to assessing performance is to use a numerical or scalar rating system whereby managers are asked to score an individual against a number of objectives/attributes. In some companies, employees receive assessments from their manager, peers, subordinates, and customers, while also performing a self-assessment. This is known as a 360° appraisal.
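As an illustration of how such scalar multi-rater scores might be rolled up, here is a minimal Python sketch; the rater groups, attributes, and scores are hypothetical, not a prescribed scheme.

from statistics import mean

# Scores on a 1-5 scale, keyed by rater group, for one employee.
ratings = {
    "self":         {"communication": 4, "teamwork": 5},
    "manager":      {"communication": 3, "teamwork": 4},
    "peers":        {"communication": 4, "teamwork": 4},
    "subordinates": {"communication": 5, "teamwork": 3},
}

attributes = sorted({attr for scores in ratings.values() for attr in scores})
for attr in attributes:
    scores = [scores_by_attr[attr] for scores_by_attr in ratings.values()]
    print(f"{attr}: mean {mean(scores):.2f} across {len(scores)} rater groups")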

The most popular methods used in the performance appraisal process are:

• Management by objectives (MBO)
• 360 degree appraisal
• Behavioral Observation Scale (BOS)
• Behaviorally Anchored Rating Scale (BARS)

Trait based systems, which rely on factors such as integrity and conscientiousness, are also commonly used by businesses. The scientific literature on the subject provides evidence that assessing employees on factors such as these should be avoided. The reasons for this are two-fold:

1) Because trait-based systems are by definition based on personality traits, they make it difficult for a manager to provide feedback that can cause positive change in employee performance. This is because personality dimensions are for the most part static: while an employee can change a specific behavior, they cannot change their personality. For example, a person who lacks integrity may stop lying to a manager because they have been caught, but they still have low integrity and are likely to lie again once the threat of being caught is gone.

2) Trait-based systems, because they are vague, are more easily influenced by office politics, making them less reliable as a source of information on an employee's true performance. The vagueness of these instruments allows managers to fill them out based on who they want to, or feel should, get a raise, rather than basing scores on the specific behaviors employees should or should not be engaging in. These systems are also more likely to leave a company open to discrimination claims, because a manager can make biased decisions without having to back them up with specific behavioral information.

Management by Objectives (MBO) is a process of agreeing upon objectives within an organization so that management and employees agree to the objectives and understand what they are.

The term "management by objectives" was first popularized by Peter Drucker in his 1954 book 'The Practice of Management'.

Domains and levels

Objectives can be set in all domains of activities (production, services, sales, R&D, human resources, finance, information systems etc.).

Some objectives are collective, for a whole department or the whole company, others can be individualized.

Practice

MBO is often achieved using set targets. MBO introduced the SMART criteria: objectives for MBO must be SMART (Specific, Measurable, Achievable, Relevant, and Time-Specific). In some sectors (healthcare, finance, etc.) many add "ER" to make SMARTER, where E = Extendable and R = Recorded.

Objectives need quantifying and monitoring. Reliable management information systems are needed to establish relevant objectives and monitor their "reach ratio" in an objective way. Pay incentives (bonuses) are often linked to results in reaching the objectives.

Limitations

However, this style of management has been criticized in recent years on the grounds that it can trigger unethical behavior: employees may distort the system or the financial figures to achieve their targets, driven by short-term, narrow, bottom-line, and self-centered thinking.

A more fundamental and authoritative criticism comes from Walter A. Shewhart and W. Edwards Deming, the fathers of modern quality management, for whom MBO is the opposite of their founding philosophy of statistical process control.

The use of MBO needs to be carefully aligned with the culture of the organization. While MBO is not as fashionable as it was before the "empowerment" fad, it still has its place in management today. The key difference is that rather than objectives being "set" through a cascade process, they are discussed and agreed, based upon a more strategic picture being available to employees. Engagement of employees in the objective-setting process is seen as a strategic advantage by many.

A saying around MBO and CSFs -- "What gets measured gets done" -- is perhaps the most famous aphorism of performance measurement; therefore, to avoid potential problems, SMART and SMARTER objectives need to be agreed upon in the true sense rather than set.

In human resources, 360-degree feedback, also known as 'multi-rater feedback', 'multi source feedback', or 'multi source assessment', is employee development feedback that comes from all around the employee. "360" refers to the 360 degrees in a circle. The feedback would come from subordinates, peers, and managers in the organizational hierarchy, as well as self-assessment, and in some cases external sources such as customers and suppliers or other interested stakeholders. It may be contrasted with upward feedback, where managers are given feedback by their direct reports, or a traditional performance appraisal, where the employees are most often reviewed only by their manager.

The results from 360-degree feedback are often used by the person receiving the feedback to plan their training and development. The results are also used by some organizations for making promotional or pay decisions, a practice sometimes called a "360-degree review."

Rater accuracy

A study on the patterns of rater accuracy shows that the length of time the rater has known the person being rated has the greatest effect on the accuracy of a 360-degree review. The study shows that subjects in the group "known for one to three years" are the most accurate, followed by "known for less than one year," followed by "known for three to five years," with the least accurate being "known for more than five years." The study concludes that the most accurate ratings come from knowing the person long enough to get past first impressions, but not so long as to begin to generalize favorably.

Effects of 360-degree feedback

A study on 360-degree feedback to leaders conducted by Arizona State University has supported the hypothesis that improvement in a leader’s consideration and employee development behaviors will lead to positive changes in employees' job satisfaction and engagement, and reduce their intent to leave (Brett 582-583).

Strategic Data

While the value of 360-degree feedback is often seen in terms of individual development, aggregate reporting of all recipients' results can provide valuable data for the organization as a whole. It enables leaders to

• Take advantage of under-utilized personnel strengths to increase productivity
• Avoid the trap of counting on skills that may be weak in the organization
• Apply human assets data to the valuation of the organization
• Make succession planning more accurate
• Design more efficient coaching and training initiatives
• Support the organization in marketing the skills of its members

History

The US armed forces first used 360-degree feedback to support development of staff in the 1940s. The system gained momentum slowly, but by the 1990s most HR and OD professionals understood the concept. The problem was that collecting and collating the feedback demanded a paper-based effort including either complex manual calculations or lengthy delays while a commercial provider assembled reports. The first led to despair on the part of practitioners; the second to a gradual erosion of commitment by recipients.

When the first online 360-degree feedback tools appeared in 1998, it became possible to request feedback from raters anywhere in the world by email, to customize automated systems, and to generate reports for recipients in minutes. In recent years, Internet-based services have become the norm, with a growing menu of useful features: e.g., multiple languages, comparative reporting, and aggregate reporting.

Benefits

• Individuals get a broader perspective of how they are perceived by others than previously possible.
• Increased awareness of, and the relevance of, competencies.
• Increased awareness by senior management that they too have development needs.
• More reliable feedback to senior managers about their performance.
• Gaining acceptance of the principle of multiple stakeholders as a measure of performance.
• Encouraging more open feedback and new insights.
• Reinforcing the desired competencies of the business.
• Providing a clearer picture to senior management of an individual's real worth (although there tended to be some "halo effect" syndromes).
• Clarifying to employees the critical aspects of their performance.
• Opening up feedback, giving people a more rounded view of performance than they had previously.
• Identifying key development areas for the individual, a department, and the organization as a whole.
• Identifying strengths that can be used to the best advantage of the business.
• A rounded view of the individual's/team's/organization's performance and what the strengths and weaknesses are.
• Raising the self-awareness of people managers of how they personally impact upon others, positively and negatively.
• Supporting a climate of continuous improvement.
• Starting to improve the climate/morale, as measured through the survey.
• A focused agenda for development, forcing line managers to discuss development issues.
• Perception of feedback as more valid and objective, leading to acceptance of results and actions required.

Introducing 360 feedback in an organization

Before introducing 360 feedback in an organization, the planning process must include a step addressing the benefits and perceived risks for all participants. Recipients of feedback and reviewers may have concerns about issues such as the confidentiality of reviews, how the completed reviews will be used in the organization, and what sort of follow-up they can expect. Communication and support provided throughout the project must take this into account if the program is to provide maximum value for the individuals and the organization using 360 feedback.

Why organizations may not adopt the 360 degree approach

1. Return on investment, for the time and energy required, is perceived to be minimal.
2. Transparent feedback can be adversely affected by emotions and ongoing peer conflicts.
3. Appraisees are not ready for honest and open feedback.
4. Some cultures rigidly avoid passing negative feedback, or information, to superiors or elders.

BARS

Behaviorally Anchored Rating Scales (BARS) is a method that combines elements of the traditional rating scales and critical incidents methods. To construct a BARS, the following steps are followed:

1. Examples of effective and ineffective behavior related to the job are collected from people with knowledge of the job.

2. These behaviors are converted into performance dimensions.

3. A group of participants is asked to reclassify the incidents. At this stage, incidents for which there is less than 75% agreement are discarded as being too subjective.

4. The remaining incidents are then rated on a scale from one to nine.

5. Finally, about six to seven incidents for each performance dimension, all meeting the retranslation and standard deviation criteria, are used as the BARS (a small filtering sketch follows this list).
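The filtering logic of steps 3-5 can be sketched in a few lines of Python. The incident texts, agreement figures, ratings, and the standard-deviation cutoff below are all hypothetical, chosen only to illustrate the mechanics.

from statistics import mean, stdev

incidents = [
    # (description, fraction of participants agreeing on its dimension,
    #  ratings on the 1-9 effectiveness scale) -- all made up
    ("Greets customers promptly",       0.90, [8, 8, 7, 8]),
    ("Leaves calls on hold too long",   0.60, [2, 3, 2, 2]),   # discarded: step 3
    ("Escalates complaints correctly",  0.85, [7, 6, 7, 7]),
    ("Gives inconsistent price quotes", 0.80, [3, 2, 8, 1]),   # too variable
]

MAX_SD = 1.5  # assumed standard-deviation cutoff for step 5
anchors = []
for text, agreement, ratings in incidents:
    if agreement < 0.75:
        continue                      # step 3: too subjective
    if stdev(ratings) > MAX_SD:
        continue                      # step 5: fails the variability criterion
    anchors.append((round(mean(ratings), 1), text))

for scale_value, text in sorted(anchors, reverse=True):
    print(f"{scale_value}: {text}")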

BARS is widely regarded as one of the most effective performance appraisal methods.

Applied behavior analysis (ABA) is the science of applying experimentally derived principles of behavior to improve socially significant behavior. ABA takes what we know about behavior and uses it to bring about positive change (Applied). Behaviors are defined in observable and measurable terms in order to assess change over time (Behavior). The behavior is analyzed within the environment to determine what factors are influencing the behavior (Analysis). Applied behavior analysis is the third of the four domains of behavior analysis, the other three being behaviorism, the experimental analysis of behavior, and the professional practice of behavior analysis. Applied behavior analysis contributes to a full range of areas including: AIDS prevention, conservation of natural resources, education, gerontology, health and exercise, industrial safety, language acquisition, littering, medical procedures, parenting, seatbelt use, sports, and zoo management and care of animals. ABA-based interventions have gained popularity over the last 20 years, particularly for teaching students with autism spectrum disorders.

Definition

ABA is defined as the science in which tactics derived from the principles of behavior are applied systematically to improve socially significant behavior and experimentation is used to identify the variables responsible for change.

Baer, Wolf, and Risley's 1968 article is still used as the standard description of ABA, and it describes the seven dimensions of ABA: application, a focus on behavior, the use of analysis, a technological approach, being conceptually systematic, effectiveness, and generality.

Characteristics

Baer, Wolf, and Risley's seven dimensions:

• Applied: ABA focuses on areas that are of social significance. In doing this, behavior scientists must take into consideration more than just the short-term behavior change, but also look at how behavior changes can affect the consumer, those who are close to the consumer, and how any change will affect the interactions between the two.

• Behavioral: ABA must be behavioral; i.e., behavior itself must change, not just what the consumer SAYS about the behavior. It is not the goal of behavior scientists to get their consumers to stop complaining about behavior problems, but rather to change the problem behavior itself. In addition, behavior must be objectively measured. A behavior scientist cannot resort to the measurement of non-behavioral substitutes.

• Analytic: The behavior scientist can demonstrate believable control over the behavior that is being changed. In the lab, this has been easy as the researcher can start and stop the behavior at will. However, in the applied situation, this is not always as easy, nor ethical, to do. According to Baer, Wolf, and Risley, this difficulty should not stop a science from upholding the strength of its principles. As such, they referred to two designs that are best used in applied settings to demonstrate control and maintain ethical standards. These are the reversal and multiple baseline designs. The reversal design is one in which the behavior of choice is measured prior to any intervention. Once the pattern appears stable, an intervention is introduced, and behavior is measured. If there is a change in behavior, measurement continues until the new pattern of behavior appears stable. Then, the intervention is removed, or reduced, and the behavior is measured to see if it changes again. If the behavior scientist truly has demonstrated control of the behavior with the intervention, the behavior of interest should change with intervention changes.

• Technological: This means that if any other researcher were to read the study's description, that researcher would be able to "replicate the application with the same results". This means that the description must be very detailed and clear. Ambiguous descriptions do not qualify. Cooper et al. describe a good check for the technological characteristic: "have a person trained in applied behavior analysis carefully read the description and then act out the procedure in detail. If the person makes any mistakes, adds any operations, omits any steps, or has to ask any questions to clarify the written description then the description is not sufficiently technological and requires improvement."

• Conceptually Systematic: A defining characteristic concerns the interventions utilized; research must be conceptually systematic, only utilizing procedures and interpreting the results of those procedures in terms of the principles from which they were derived.

• Effective: An application of these techniques improves behavior under investigation. Specifically, it is not a theoretical importance of the variable, but rather the practical importance (social importance) that is essential.

• Generality: It should last over time, in different environments, and spread to other behaviors not directly treated by the intervention. In addition, continued change in specified behavior after intervention for that behavior has been withdrawn is also an example of generality.

In 2005, Heward et al. added the following four characteristics:

• Accountable: Direct and frequent measurement enables analysts to detect their successes and failures and to make changes in an effort to increase successes while decreasing failures. ABA is a scientific approach in which analysts may guess but then critically test ideas, rather than "guess and guess again". This constant revision of techniques, commitment to effectiveness, and analysis of results leads to an accountable science.

• Public: Applied behavior analysis is completely visible and public. This means that there are no explanations that cannot be observed. There are no mystical, metaphysical explanations, hidden treatments, or magic.

• Empowering: ABA provides tools to practitioners that allow them to effectively change behavior. By constantly providing visual feedback to the practitioner on the results of the intervention, this feature of ABA allows clinicians to assess their skill level and builds confidence in their technology.

• Optimistic: According to several leading authors, practitioners skilled in behavior analysis have genuine cause to be optimistic for the following reasons:

• The environmental view is essentially optimistic as it suggests that all individuals possess roughly equal potential

• Direct and continuous measurements enable practitioners to detect small improvements in performance that might have otherwise been missed

• The more a practitioner uses behavioral techniques with positive outcomes, the more optimistic they become about future prospects of success.

• The literature provides many examples of success in teaching individuals previously considered unreachable.

Concept

Behavior

Behavior is the activity of living organisms. Human behavior is the entire gamut of what people do including thinking and feeling. Behavior can be determined by applying the Dead Man's test:

"If a dead man can do it, it ain't behavior. And if a dead man can't do it, then it is behavior"

Often, the term behavior is used to reference a larger class of responses that share physical dimensions or function. In this instance, the term response indicates a single instance of that behavior. If a group of responses have the same function, this group can be classified as a response class. Finally, when discussing a person's collection of behavior, repertoire is used. It can either pertain specifically to a set of response classes that are relevant to a particular situation, or it can refer to every behavior that a person can do.

Operant conditioning

Operant behavior is that which is selected by its consequences. The conditioning of operant behavior is the result of reinforcement and punishment. Operant behavior is produced primarily by striated muscles and sometimes by smooth muscles and glands.

Respondent conditioning

All organisms respond in predictable ways to certain stimuli. These stimulus-response relations are called reflexes. The response component of the reflex is called respondent behavior. It is defined as behavior which is elicited by antecedent stimuli. Respondent conditioning (also called classical conditioning) is learning in which new stimuli acquire the ability to elicit respondents. This is done through stimulus-stimulus pairing: for example, the stimulus (the smell of food) can elicit a person's salivation. By pairing that stimulus (the smell) with another stimulus (the word "food"), the second stimulus can acquire the same function.

Environment

The environment is the entire constellation of circumstances in which an organism exists. This includes events both inside and outside of an organism, but only real physical events are included. The environment is composed of stimuli. A stimulus is an "energy change that affects an organism through its receptor cells."

A stimulus can be described:

• Formally, by its physical features.
• Temporally, by when it occurs with respect to the behavior.
• Functionally, by its effect on behavior.

Reinforcement

Reinforcement is the most important principle of behavior and a key element of most behavior change programs. It is the process by which behavior is strengthened: if a behavior is followed closely in time by a stimulus and this results in an increase in the future frequency of that behavior, reinforcement has occurred. The addition of a stimulus that serves as a reinforcer following a behavior is termed positive reinforcement. If the removal of a stimulus serves as a reinforcer, this is termed negative reinforcement. There are multiple schedules of reinforcement that affect the future frequency of behavior. Extinction is a schedule in which no reinforcer follows the behavior, resulting in a decline in its future frequency.

Punishment

Punishment is a process by which a consequence immediately follows a behavior and decreases the future frequency of that behavior. Like reinforcement, a stimulus can be added (positive punishment) or removed (negative punishment). Broadly, there are three types of punishment: presentation of aversive stimuli, response cost, and time out. Punishment in practice can often result in unwanted side effects, and as such has typically been used only after reinforcement-only procedures have failed to work. Unwanted side effects can include an increase in other unwanted behavior as well as a decrease in desired behaviors; other potential unwanted effects include escape and avoidance, emotional behavior, and behavioral contrast.

Discriminated operant and three-term contingency

In addition to a relation being made between behavior and its consequences, operant conditioning also establishes relations between antecedent conditions and behaviors. This differs from S-R formulations (if A, then B), replacing them with an AB-because-of-C formulation. In other words, the relation between a behavior (B) and its context (A) exists because of consequences (C); more specifically, the relationship between A and B is established by prior consequences that have occurred in similar contexts. This antecedent-behavior-consequence contingency is termed the three-term contingency. A behavior which occurs more frequently in the presence of an antecedent condition than in its absence is called a discriminated operant. The antecedent stimulus is called a discriminative stimulus (SD). The fact that the discriminated operant occurs only in the presence of the discriminative stimulus is an illustration of stimulus control.

Measuring behavior

When measuring behavior, there are both dimensions of behavior and quantifiable measures of behavior. In applied behavior analysis, the quantifiable measures are a derivative of the dimensions. These dimensions are repeatability, temporal extent, and temporal locus.

Repeatability

Response classes occur repeatedly throughout time -- i.e., how many times the behavior occurs.

• Count is the number of occurrences of a behavior.
• Rate/frequency is the number of instances of behavior per unit of time.
• Celeration is the measure of how the rate changes over time.

Temporal extent

This dimension indicates that each instance of behavior occupies some amount of time -- i.e., how long the behavior occurs.

• Duration is the amount of time in which the behavior occurs.

Temporal locus

Each instance of behavior occurs at a specific point in time -- i.e., when the behavior occurs.

• Response latency is the measure of elapsed time between the onset of a stimulus and the initiation of the response.
• Interresponse time is the amount of time that occurs between two consecutive instances of a response class.

Derivative measures

Derivative measures are not tied to specific dimensions:

• Percentage is a ratio formed by combining the same dimensional quantities.
• Trials-to-criterion is a measure of the number of response opportunities needed to achieve a predetermined level of performance.
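A minimal Python sketch of the measures described in this section, computed from a hypothetical list of timestamped responses in one observation session:

# Each response: (start_seconds, end_seconds) within a 600-second session.
# All values are made up for illustration.
responses = [(12.0, 14.5), (80.0, 81.0), (200.0, 204.0), (590.0, 591.5)]
session_length = 600.0            # seconds
stimulus_onset = 10.0             # onset of the relevant stimulus, in seconds

count = len(responses)                                 # repeatability: count
rate_per_min = count / (session_length / 60.0)         # rate/frequency
durations = [end - start for start, end in responses]  # temporal extent
latency = responses[0][0] - stimulus_onset             # temporal locus
interresponse_times = [
    responses[i + 1][0] - responses[i][1]              # gap between responses
    for i in range(len(responses) - 1)
]

print(f"count: {count}, rate: {rate_per_min:.2f} per minute")
print(f"total duration: {sum(durations):.1f}s, latency: {latency:.1f}s")
print(f"inter-response times: {[round(t, 1) for t in interresponse_times]}")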

Analyzing behavior change

Experimental control

In applied behavior analysis, all experiments should include the following:

• At least one participant
• At least one behavior (dependent variable)
• At least one setting
• A system for measuring the behavior and ongoing visual analysis of data
• At least one treatment or intervention condition
• Manipulations of the independent variable so that its effects on the dependent variable can be observed

Functional Behavior Assessment (FBA)

Functional assessment of behavior provides hypotheses about the relationships between specific environmental events and behaviors. Decades of research have established that both desirable and undesirable behaviors are learned through interactions with the social and physical environment. FBA is used to identify the type and source of reinforcement for challenging behaviors as the basis for intervention efforts designed to decrease the occurrence of these behaviors.

Functions of behavior

The function of a behavior can be thought of as the purpose the behavior serves for a person. Function is identified in an FBA by identifying the type and source of reinforcement for the behavior of interest. Those reinforcers might be positive or negative social reinforcers provided by someone who interacts with the person, or automatic reinforcers produced directly by the behavior itself.

• Positive Reinforcement - social positive reinforcement (attention), tangible reinforcement, and automatic positive reinforcement.

• Negative Reinforcement - social negative reinforcement (escape), automatic negative reinforcement.

Function versus topography

Behaviors may look different but can serve the same function; likewise, behavior that looks the same may serve multiple functions. What the behavior looks like often reveals little useful information about the conditions that account for it. However, identifying the conditions that account for a behavior suggests what conditions need to be altered to change it. Therefore, assessment of the function of a behavior can yield useful information about intervention strategies that are likely to be effective.

FBA methods

FBA methods can be classified into three types:

• Functional (experimental) analysis
• Descriptive assessment
• Indirect assessment

Functional (experimental) analysis

A functional analysis is one in which antecedents and consequences are manipulated to indicate their separate effects on the behavior of interest. This type of arrangement is often called analog because it is not conducted in a naturally occurring context. However, research indicates that a functional analysis done in a natural environment can yield similar or better results.

A functional analysis normally has four conditions (three test conditions and one control):

• Contingent attention
• Contingent escape
• Alone
• Control condition

Advantages - it has the ability to yield a clear demonstration of the variable(s) that relate to the occurrence of a problem behavior. It serves as the standard of scientific evidence by which other assessment alternatives are evaluated, and represents the method most often used in research on the assessment and treatment of problem behavior.

Limitations - the assessment process may temporarily strengthen or increase the undesirable behavior to unacceptable levels, or result in the behavior acquiring new functions. Some behaviors may not be amenable to a functional analysis (e.g., those that, albeit serious, occur infrequently). A functional analysis conducted in a contrived setting may not detect the variable that accounts for the occurrence of the behavior in the natural environment.

Indirect FBA

This method uses structured interviews, checklists, rating scales, or questionnaires to obtain information from persons who are familiar with the person exhibiting the behavior to identify possible conditions or events in the natural environment that correlate with the problem behavior. They are called "indirect" because they do not involve direct observation of the behavior, but rather solicit information based on others' recollections of the behavior.

Advantages - some can provide a useful source of information in guiding subsequent, more objective assessments, and contribute to the development of hypotheses about variables that might occasion or maintain the behaviors of concern.

Limitations - informants may not have accurate and unbiased recall of behavior and the conditions under which it occurred.

Descriptive FBA

As with Functional Analysis, descriptive functional behavior assessment utilizes direct observation of behavior; unlike functional analysis, however, observations are made under naturally occurring conditions. Therefore, descriptive assessments involve observation of the problem behavior in relation to events that are not arranged in a systematic manner.

There are three variations of descriptive assessment:

• ABC (antecedent-behavior-consequence) continuous recording - the observer records occurrences of the targeted behavior and selected environmental events in the natural routine.

• ABC narrative recording - data are collected only when behaviors of interest are observed, and the recording encompasses any events that immediately precede and follow the target behavior.

• Scatter plots - a procedure for recording the extent to which a target behavior occurs more often at particular times than at others.

Conducting an FBA

Given the strengths and limitations of the different FBA procedures, FBA can best be viewed as a four-step process:

1. The gathering of information via indirect and descriptive assessment.
2. Interpretation of information from indirect and descriptive assessment and formulation of a hypothesis about the purpose of the problem behavior.
3. Testing of the hypothesis using a functional analysis.
4. Developing intervention options based on the function of the problem behavior.

Task analysis

Task analysis is a process in which a task is analyzed into its component parts so that those parts can be taught through the use of chaining: forward chaining, backward chaining and total task presentation. Task analysis has been used in organizational behavior management, a behavior analytic approach to changing organizations. Behavioral scripts often emerge from a task analysis. Bergan conducted a task analysis of the behavioral consultation relationship and Thomas Kratochwill developed a training program based on teaching Bergan's skills. A similar approach was used for the development of micro skills training for counselors. Ivey would later call this "behaviorist" phase a very productive one and the skills-based approach came to dominate counselor training during 1970–90. Task analysis was also used in determining the skills needed to access a career. In education, Englemann (1968) used task analysis as part of the methods to design the Direct Instruction curriculum.

Chaining

The skill to be learned is broken down into small units for easy learning. For example, a person learning to brush teeth independently may start with learning to unscrew the toothpaste cap. Once he or she has learned this, the next step may be squeezing the tube, etc.

For problem behavior, chains can also be analyzed and the chain disrupted to prevent the problem behavior. Some behavior therapies, such as Dialectical Behavior Therapy, make extensive use of behavior chain analysis.

Prompting

A prompt is a cue or assistance to encourage the desired response from an individual. Prompts are often categorized into a prompt hierarchy from most intrusive to least intrusive. There is some controversy about what is considered most intrusive: physically intrusive prompts versus those hardest to fade (i.e., verbal prompts). In an errorless learning approach, prompts are given in a most-to-least sequence and faded systematically to ensure the individual experiences a high level of success. There may be instances in which a least-to-most prompt method is preferred. Prompts are faded systematically and as quickly as possible to avoid prompt dependency. The goal of teaching using prompts is to fade prompts towards independence, so that no prompts are needed for the individual to perform the desired behavior.

Types of prompts:

• Verbal prompts: Utilizing a vocalization to indicate the desired response.

• Visual Prompts: a visual cue or picture.

• Gestural prompts: Utilizing a physical gesture to indicate the desired response.

• Positional prompt: The target item is placed closer to the individual.

• Modeling: Modeling the desired response for the student. This type of prompt is best suited for individuals who learn through imitation and can attend to a model.

• Physical prompts: Physically manipulating the individual to produce the desired response. There are many degrees of physical prompts, the most intrusive being hand-over-hand and the least intrusive being a slight tap to initiate movement.

This is not an exhaustive list of all possible prompts. When using prompts to systematically teach a skill, not all prompts need to be used in the hierarchy; prompts are chosen based on which ones are most effective for a particular individual.

Fading

The overall goal is for an individual to eventually not need prompts. As an individual gains mastery of a skill at a particular prompt level, the prompt is faded to a less intrusive prompt. This ensures that the individual does not become overly dependent on a particular prompt when learning a new behavior or skill.

Thinning

Thinning is often confused with fading. Fading refers to a prompt being removed, whereas thinning refers to the spacing of a reinforcement schedule becoming larger. Some evidence suggests that a 30% decrease in reinforcement can be an efficient way to thin. Schedule thinning is often an important and neglected issue in contingency management and token economy systems, especially when developed by unqualified practitioners (see professional practice of behavior analysis).

Generalization

Generalization is the expansion of a student's performance ability beyond the initial conditions set for acquisition of a skill. Generalization can occur across people, places, and materials used for teaching. For example, once a skill is learned in one setting, with a particular instructor, and with specific materials, the skill is taught in more general settings with more variation from the initial acquisition phase. For example, if a student has successfully mastered learning colors at the table, the teacher may take the student around the house or his school and then generalize the skill in these more natural environments with other materials. Behavior analysts have spent a considerable amount of time studying the factors that lead to generalization.

Shaping

Shaping involves gradually modifying the existing behavior into the desired behavior. If the student engages with a dog by hitting it, then he or she could have their behavior shaped by reinforcing interactions in which he or she touches the dog more gently. Over many interactions, successful shaping would replace the hitting behavior with patting or other gentler behavior. Shaping is based on a behavior analyst's thorough knowledge of operant conditioning principles and extinction. Recent efforts to teach shaping have used simulated computer tasks.

Video modeling

One teaching technique found to be effective with some students, particularly children, is the use of video modeling (the use of taped sequences as exemplars of behavior). It can be used by therapists to assist in the acquisition of both verbal and motor responses, in some cases for long chains of behavior.

Interventions based on an FBA

Critical to behavior analytic interventions is the concept of a systematic behavioral case formulation with a functional behavioral assessment or analysis at the core. This approach should apply a behavior analytic theory of change. The formulation should include a thorough functional assessment, a skills assessment, a sequential analysis (behavior chain analysis), an ecological assessment, a look at existing evidence-based behavioral models for the problem behavior (such as Fordyce's model of chronic pain), and then a treatment plan based on how environmental factors influence behavior. Some argue that behavior analytic case formulation can be improved with an assessment of rules and rule-governed behavior. Some of the interventions that result from this type of conceptualization involve training specific communication skills to replace the problem behavior, as well as specific setting, antecedent, behavior, and consequence strategies.

The Balanced Scorecard (BSC) began as a concept for measuring whether the smaller-scale operational activities of a company are aligned with its larger-scale objectives in terms of vision and strategy. It was developed and first used at Analog Devices in 1987. By focusing not only on financial outcomes but also on the human issues, the Balanced Scorecard helps provide a more comprehensive view of a business, which in turn helps organizations act in their best long-term interests. The strategic management system helps managers focus on performance metrics while balancing financial objectives with customer, process, and employee perspectives. Measures are often indicators of future performance.

Use

Implementing Balanced Scorecards typically includes four processes:

1. Translating the vision into operational goals;
2. Communicating the vision and linking it to individual performance;
3. Business planning;
4. Feedback and learning, and adjusting the strategy accordingly.

The Balanced Scorecard is a framework, or what can be best characterized as a “strategic management system” that claims to incorporate all quantitative and abstract measures of true importance to the enterprise. According to Kaplan and Norton, “The Balanced Scorecard provides managers with the instrumentation they need to navigate to future competitive success”.

Many books and articles referring to Balanced Scorecards confuse the design process elements and the Balanced Scorecard itself. In particular, it is common for people to refer to a “strategic linkage model” or “strategy map” as being a Balanced Scorecard.

Balanced Scorecard is a performance management tool. Although it helps focus managers' attention on strategic issues and the management of the implementation of strategy, it is important to remember that the Balanced Scorecard itself has no role in the formation of strategy. In fact, Balanced Scorecards can comfortably co-exist with strategic planning systems and other tools.

Original methodology

The earliest Balanced Scorecards comprised simple tables broken into four sections - typically these "perspectives" were labeled "Financial", "Customer", "Internal Business Processes", and "Learning & Growth". Designing the Balanced Scorecard required selecting five or six good measures for each perspective.

Many authors have since suggested alternative headings for these perspectives, and also suggested using either additional or fewer perspectives. These suggestions were notably triggered by recognition that different but equivalent headings would yield alternative sets of measures. The major design challenge faced with this type of Balanced Scorecard is justifying the choice of measures made. "Of all the measures you could have chosen, why did you choose these?" This common question is hard to answer with this type of design process. If users are not confident that the measures within the Balanced Scorecard are well chosen, they will have less confidence in the information it provides. Although less common, these early-style Balanced Scorecards are still designed and used today.

In short, early-style Balanced Scorecards are hard to design in a way that builds confidence that they are well designed. Because of this, many are abandoned soon after completion.

Improved methodology

In the mid 1990s, an improved design method emerged. In the new method, measures are selected based on a set of "strategic objectives" plotted on a "strategic linkage model" or "strategy map". With this modified approach, the strategic objectives are typically distributed across a similar set of "perspectives", as is found in the earlier designs, but the design question becomes slightly less abstract.

Managers have to identify five or six goals within each of the perspectives, and then demonstrate some inter-linking between these goals by plotting causal links on the diagram. Having reached some consensus about the objectives and how they inter-relate, the Balanced Scorecard is devised by choosing suitable measures for each objective. This type of approach provides greater contextual justification for the measures chosen, and is generally easier for managers to work through. This style of Balanced Scorecard has been commonly used since 1996 or so.
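As an illustration of the structure this method produces, the following minimal Python sketch represents strategic objectives grouped by perspective, the causal links plotted on the strategy map, and the measures chosen per objective. All objective names, links, and measures are hypothetical.

# Objectives keyed by name, each with a perspective and chosen measures.
objectives = {
    "train staff":       {"perspective": "Learning & Growth",
                          "measures": ["training hours per employee"]},
    "improve processes": {"perspective": "Internal Business Processes",
                          "measures": ["defect rate", "cycle time"]},
    "satisfy customers": {"perspective": "Customer",
                          "measures": ["customer satisfaction rate"]},
    "grow revenue":      {"perspective": "Financial",
                          "measures": ["revenue growth"]},
}

# Causal links plotted on the strategy map: cause -> effect.
links = [
    ("train staff", "improve processes"),
    ("improve processes", "satisfy customers"),
    ("satisfy customers", "grow revenue"),
]

for cause, effect in links:
    print(f"{objectives[cause]['perspective']}: '{cause}' -> "
          f"{objectives[effect]['perspective']}: '{effect}'")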

Several design issues still remain with this enhanced approach to Balanced Scorecard design, but it has been much more successful than the design approach it supersedes.

Popularity

Kaplan and Norton found that companies are using Balanced Scorecards to:

• Drive strategy execution;
• Clarify strategy and make strategy operational;
• Identify and align strategic initiatives;
• Link budget with strategy;
• Align the organization with strategy;
• Conduct periodic strategic performance reviews to learn about and improve strategy.

In 1997, Kurtzman found that 64 percent of the companies questioned were measuring performance from a number of perspectives in a similar way to the Balanced Scorecard.

Balanced Scorecards have been implemented by government agencies, military units, business units and corporations as a whole, non-profit organizations, and schools.

Many examples of Balanced Scorecards can be found via Web searches. However, adapting one organization's Balanced Scorecard to another is generally not advised by theorists, who believe that much of the benefit of the Balanced Scorecard comes from the implementation method.

Variants, Alternatives and Criticisms

Since the late 1990s, various alternatives to the Balanced Scorecard have emerged, such as The Performance Prism, Results Based Management and Third Generation Balanced Scorecard. These tools seek to solve some of the remaining design issues, in particular issues relating to the design of sets of Balanced Scorecards to use across an organization, and issues in setting targets for the measures selected.

Applied Information Economics (AIE) has been researched as an alternative to Balanced Scorecards. In 2000, the Federal CIO Council commissioned a study to compare the two methods by funding studies in side-by-side projects in two different agencies. The Dept. of Veterans Affairs used AIE and the US Dept. of Agriculture applied Balanced Scorecards. The resulting report found that while AIE was much more sophisticated, AIE actually took slightly less time to utilize. AIE was also more likely to generate findings that were newsworthy to the organization, while the users of Balanced Scorecards felt it simply documented their inputs and offered no other particular insight. However, Balanced Scorecards are still much more widely used than AIE.

A criticism of Balanced Scorecards is that the scores are not based on any proven economic or financial theory, and therefore have no basis in the decision sciences. The process is entirely subjective and makes no provision to assess quantities (e.g., risk and economic value) in a way that is actuarially or economically well-founded.

Another criticism is that the Balanced Scorecard does not provide a bottom line score or a unified view with clear recommendations: it is simply a list of metrics.

Some people also claim that positive feedback from users of Balanced Scorecards may be due to a placebo effect, as there are no empirical studies linking the use of Balanced Scorecards to better decision making or improved financial performance of companies.

The Four Perspectives

The grouping of performance measures in general categories (perspectives) is seen to aid in the gathering and selection of the appropriate performance measures for the enterprise. Four general perspectives have been proposed by the Balanced Scorecard:

• Financial perspective;
• Customer perspective;
• Internal process perspective;
• Learning and growth perspective.

The financial perspective examines whether the company's implementation and execution of its strategy are contributing to the bottom-line improvement of the company. It represents the long-term strategic objectives of the organization and thus incorporates the tangible outcomes of the strategy in traditional financial terms. The three possible stages as described by Kaplan and Norton (1996) are rapid growth, sustain, and harvest. Financial objectives and measures for the growth stage will stem from the development and growth of the organization, which will lead to increased sales volumes, acquisition of new customers, growth in revenues, etc. The sustain stage, on the other hand, will be characterized by measures that evaluate the effectiveness of the organization in managing its operations and costs, by calculating the return on investment, the return on capital employed, etc. Finally, the harvest stage will be based on cash flow analysis, with measures such as payback periods and revenue volume. Some of the most common financial measures that are incorporated in the financial perspective are EVA, revenue growth, costs, profit margins, cash flow, and net operating income.
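As a small worked example of two of these financial measures (the figures below are made up for illustration):

revenue = 1_200_000.0
operating_costs = 1_000_000.0
capital_employed = 800_000.0

net_operating_income = revenue - operating_costs
profit_margin = net_operating_income / revenue           # fraction of revenue
return_on_capital = net_operating_income / capital_employed

print(f"net operating income: {net_operating_income:,.0f}")
print(f"profit margin: {profit_margin:.1%}")
print(f"return on capital employed: {return_on_capital:.1%}")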

The customer perspective defines the value proposition that the organization will apply in order to satisfy customers and thus generate more sales to the most desired (i.e., the most profitable) customer groups. The measures selected for the customer perspective should measure both the value that is delivered to the customer (the value proposition), which may involve time, quality, performance, service, and cost, and the outcomes that result from this value proposition (e.g., customer satisfaction, market share). The value proposition can be centered on one of three approaches: operational excellence, customer intimacy, or product leadership, while maintaining threshold levels on the other two.

The internal process perspective is concerned with the processes that create and deliver the customer value proposition. It focuses on all the activities and key processes required in order for the company to excel at providing the value expected by the customers both productively and efficiently. These can include both short-term and long-term objectives, as well as incorporating innovative process development in order to stimulate improvement. In order to identify the measures that correspond to the internal process perspective, Kaplan and Norton propose using certain clusters that group similar value-creating processes in an organization. The clusters for the internal process perspective are operations management (by improving asset utilization, supply chain management, etc.), customer management (by expanding and deepening relations), innovation (by new products and services), and regulatory & social (by establishing good relations with the external stakeholders).

The learning and growth perspective is the foundation of any strategy and focuses on the intangible assets of an organization, mainly on the internal skills and capabilities that are required to support the value-creating internal processes. The learning and growth perspective is concerned with the jobs (human capital), the systems (information capital), and the climate (organization capital) of the enterprise. These three factors relate to what Kaplan and Norton claim is the infrastructure that is needed in order to enable ambitious objectives in the other three perspectives to be achieved. This of course will be in the long term, since an improvement in the learning and growth perspective will require certain expenditures that may decrease short-term financial results, whilst contributing to long-term success.

Key Performance Indicators

For each perspective of the Balanced Scorecard, a number of KPIs can be used, such as:

Financial

• Cash flow
• ROI
• Financial result
• Return on capital employed
• Return on equity

Customer

• Delivery performance to customer - by date
• Delivery performance to customer - by quality
• Customer satisfaction rate
• Customer retention

Internal Business Processes

• Number of activities
• Opportunity success rate
• Accident ratios
• Overall equipment effectiveness

Learning & Growth

• Investment rate
• Illness rate
• Internal promotions %
• Employee turnover
• Gender/racial ratios

Key Performance Indicators (KPIs) are financial and non-financial metrics used to help an organization define and measure progress toward organizational goals. KPIs are used in business intelligence to assess the present state of the business and to prescribe a course of action. The act of monitoring KPIs in real time is known as business activity monitoring. KPIs are frequently used to "value" difficult-to-measure activities such as the benefits of leadership development, engagement, service, and satisfaction. KPIs are typically tied to an organization's strategy (as exemplified through techniques such as the Balanced Scorecard).

The KPIs differ depending on the nature of the organization and the organization's strategy. They help an organization to measure progress towards its organizational goals, especially for difficult-to-quantify knowledge-based processes.

A KPI is a key part of a measurable objective, which is made up of a direction, KPI, benchmark, target and time frame. For example: "Increase Average Revenue per Customer from £10 to £15 by EOY 2008". In this case, 'Average Revenue Per Customer' is the KPI.
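A minimal Python sketch of that objective structure, using the revenue example above; the field names and the helper function are illustrative, not a standard API.

objective = {
    "direction": "increase",
    "kpi": "Average Revenue per Customer",
    "benchmark": 10.0,     # GBP, starting point
    "target": 15.0,        # GBP
    "time_frame": "EOY 2008",
}

def target_reached(current_value, obj):
    """Crude progress check: has the KPI moved past the target?"""
    if obj["direction"] == "increase":
        return current_value >= obj["target"]
    return current_value <= obj["target"]

print(target_reached(12.5, objective))   # False: between benchmark and target
print(target_reached(15.0, objective))   # True: target reached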

KPIs should not be confused with Critical Success Factors (CSFs). For the example above, a critical success factor would be something that needs to be in place to achieve that objective: for example, a product launch.

Identifying indicators

Performance indicators differ from business drivers & aims (or goals). A school might consider the failure rate of its students as a Key Performance Indicator which might help the school understand its position in the educational community, whereas a business might consider the percentage of income from return customers as a potential KPI.

But it is necessary for an organization to at least identify its KPIs. The key environments for identifying KPIs are:

• Having a pre-defined business process.
• Having clear goals/performance requirements for the business processes.
• Having a quantitative/qualitative measurement of the results and comparison with set goals.
• Investigating variances and tweaking processes or resources to achieve short-term goals.

When identifying KPIs, the acronym SMART is often applied. KPIs need to be:

• Specific
• Measurable
• Achievable
• Result-oriented or Relevant
• Time-bound

Marketing KPIs

Among the marketing KPIs top management analyzes are:

1. Customer-related numbers:
   1. New customers acquired
   2. Status of existing customers
   3. Customer attrition
2. Turnover generated by segments of the customers - these could be demographic filters.
3. Outstanding balances held by segments of customers, and terms of payment - these could be demographic filters.
4. Collection of bad debts within customer relationships.
5. Demographic analysis of individuals (potential customers) applying to become customers, and the levels of approval, rejections, and pending numbers.
6. Delinquency analysis of customers behind on payments.
7. Profitability of customers by demographic segments, and segmentation of customers by profitability.

Many of these aforementioned customer KPIs are developed and improved with customer relationship management.

This is more an inclusive list than an exclusive one. The above more or less describes what a bank would do, but it could also apply to a telephone company or a similar service-sector company.

What is important is:

1. KPI-related data that is consistent and correct.
2. Timely availability of KPI-related data.

Faster availability of data is becoming a concern for more and more organizations. Delays of a month or two were once commonplace. Of late, several banks have tried to move to availability of data at shorter intervals and with fewer delays. For example, in businesses which carry a higher operational/credit risk loading (such as credit cards and wealth management), Citibank has moved to weekly availability of KPI-related data, or sometimes daily analysis of numbers. This means that data is usually available within 24 hours, as a result of automation and the use of IT.

KPIs for Manufacturing

Overall Equipment Effectiveness / Overall Equipment Efficiency, or OEE, is a set of broadly accepted non-financial metrics which reflect manufacturing success.

Categorization of indicators

Key Performance Indicators define a set of values to measure against. These raw sets of values, which are fed to systems that summarize the information, are called indicators. Indicators identifiable as possible candidates for KPIs can be summarized into the following sub-categories:

• Quantitative indicators, which can be presented as a number.
• Practical indicators, which interface with existing company processes.
• Directional indicators, specifying whether an organization is getting better or not.
• Actionable indicators, which are sufficiently in an organization's control to effect change.

In practical terms and for strategy development, Key Performance Indicators are the objectives to be targeted that will add the most value to the business; they are the key indicators of success.

Problems

In practice, organizations and businesses looking for Key Performance Indicators discover that it is too expensive, difficult, or impossible (e.g., staff morale may be impossible to quantify with a number) to measure exactly the performance indicators required for a particular business or process objective. Often a business metric with a similar history is used as a proxy for that KPI. In practice this tends to work, but the analyst must be aware of the limitations of what is being measured, which is often a rough guide rather than an exact measurement.

Another serious issue in practice is that once a KPI is created, it becomes difficult to change, as year-on-year comparisons with previous periods can be lost.

Furthermore, if a KPI is too organization-specific ("in-house"), it may be extremely difficult for an organization to use its KPIs for comparisons with other, similar organizations.

Feedback is a process whereby some proportion of the output signal of a system is passed (fed back) to the input. This is often used to control the dynamic behavior of the system. Examples of feedback can be found in most complex systems, in fields such as engineering, architecture, economics, thermodynamics, and biology.

Negative feedback was applied to electrical amplifiers by Harold Stephen Black in 1927, but he could not get his idea patented until 1937. Arturo Rosenblueth, a Mexican researcher and physician, co-authored the seminal 1943 paper Behavior, Purpose and Teleology[2] that, according to Norbert Wiener (another co-author of the paper), set the basis for the new science of cybernetics. Rosenblueth proposed that behavior controlled by negative feedback, whether in animal, human or machine, was a determinative, directive principle in nature and human creations. This kind of feedback is studied in cybernetics and control theory.

In organizations, feedback is a process of sharing observations, concerns and suggestions between persons or divisions of the organization with an intention of improving both personal and organizational performance. Negative and positive feedback have different meanings in this usage, where they imply criticism and praise, respectively.

Overview

Feedback is a mechanism, process, or signal that is looped back to control a system within itself. This loop is called the feedback loop. A control system usually has input to and output from the system; when the output of the system is fed back into the system as part of its input, it is called "feedback."
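A minimal sketch of such a loop, assuming a simple proportional correction (the function name, the gain, and the setpoint are illustrative choices, not from the text):

```python
def closed_loop(setpoint: float, output: float, gain: float, steps: int) -> float:
    """Feed a fraction of the output back into the input each step.

    The error (setpoint minus output) is the feedback signal; a
    proportional correction drives the output toward the setpoint.
    """
    for _ in range(steps):
        error = setpoint - output   # feedback: compare output to desired input
        output += gain * error      # correct the system using the feedback
    return output

# Starting far from the setpoint, negative feedback converges on it.
print(closed_loop(setpoint=20.0, output=5.0, gain=0.3, steps=30))  # ~20.0
```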

Feedback and regulation are closely related. Negative feedback helps to maintain stability in a system in spite of external changes; it is related to homeostasis. Positive feedback amplifies possibilities of divergence (evolution, change of goals); it is the condition for change, evolution, and growth, and it gives the system the ability to reach new points of equilibrium.

For example, in an organism, most positive feedback loops provide for fast autoexcitation of elements of the endocrine and nervous systems (in particular, in stress response conditions) and play a key role in the regulation of morphogenesis, growth, and the development of organs, all processes which are in essence a rapid escape from the initial state. Homeostasis is especially visible in the nervous and endocrine systems when considered at the level of the whole organism.

Types of feedback

Types of feedback are:

• negative feedback, which tends to reduce output (but, in amplifiers, stabilizes and linearizes operation);
• positive feedback, which tends to increase output; or
• bipolar feedback, which can either increase or decrease output.

Systems which include feedback are prone to hunting, which is an oscillation of output resulting from an improperly tuned loop that alternates between positive and negative feedback. Audio feedback typifies this form of oscillation.
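The sketch below illustrates hunting under the assumption that the feedback path has a fixed delay; the gain, delay, and step counts are invented for the example:

```python
from collections import deque

def simulate(delay_steps: int, gain: float = 0.9, steps: int = 12):
    """Negative feedback acting on a stale (delayed) measurement.

    With no delay the output settles; with delay the correction arrives
    too late and the system overshoots back and forth ("hunting").
    """
    setpoint, output = 10.0, 0.0
    history = deque([output] * (delay_steps + 1))  # delayed measurements
    trace = []
    for _ in range(steps):
        measured = history.popleft()      # the controller sees an old output
        output += gain * (setpoint - measured)
        history.append(output)
        trace.append(round(output, 1))
    return trace

print(simulate(delay_steps=0))  # settles near 10
print(simulate(delay_steps=2))  # overshoots and oscillates with growing swings
```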

Bipolar feedback is present in many natural and human systems. Feedback is usually bipolar (that is, both positive and negative) in natural environments, which, in their diversity, furnish synergistic and antagonistic responses to the output of any system.

Applications

In biology

In biological systems such as organisms, ecosystems, or the biosphere, most parameters must stay under control within a narrow range around a certain optimal level under given environmental conditions. Deviation from the optimal value of the controlled parameter can result from changes in the internal and external environments. A change in some environmental conditions may also require that range to shift for the system to keep functioning. The value of the parameter to be maintained is recorded by a reception system and conveyed to a regulation module via an information channel.

Biological systems contain many types of regulatory circuits, both positive and negative. As in other contexts, positive and negative do not imply that the feedback has a good or bad final effect. A negative feedback loop is one that tends to slow a process down, while a positive feedback loop tends to accelerate it. Mirror neurons are part of a social feedback system: an observed action is 'mirrored' by the brain much like a self-performed action.

Feedback is also central to the operations of genes and gene regulatory networks. Repressor (see Lac repressor) and activator proteins are used to create genetic operons, which were identified by François Jacob and Jacques Monod in 1961 as feedback loops. These feedback loops may be positive (as in the case of the coupling between a sugar molecule and the proteins that import sugar into a bacterial cell) or negative (as is often the case in metabolic consumption).
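As an illustration of a genetic negative feedback loop, the toy model below simulates a protein that represses its own production, a standard negative-autoregulation form; the parameter values are invented for the sketch:

```python
def negative_autoregulation(beta=10.0, k=1.0, n=2, alpha=1.0,
                            x0=0.0, dt=0.01, steps=2000):
    """Euler integration of dx/dt = beta / (1 + (x/k)**n) - alpha * x.

    Production is repressed by the protein's own concentration x
    (a negative feedback loop); degradation is linear in x.
    """
    x = x0
    for _ in range(steps):
        production = beta / (1.0 + (x / k) ** n)  # repressed by x itself
        x += dt * (production - alpha * x)
    return x

# The concentration settles at a steady state instead of growing unboundedly.
print(round(negative_autoregulation(), 2))  # ~2.0
```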

Any self-regulating natural process involves feedback and is prone to hunting. A well-known example in ecology is the oscillation of the population of snowshoe hares due to predation from lynxes.
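The classic Lotka-Volterra predator-prey equations reproduce this kind of oscillation. The sketch below uses illustrative parameters, not values fitted to the hare-lynx record:

```python
def lotka_volterra(hares=30.0, lynxes=4.0, dt=0.001, years=20):
    """Euler integration of the classic predator-prey equations.

    dH/dt = a*H - b*H*L    (hares grow, are eaten by lynxes)
    dL/dt = c*b*H*L - d*L  (lynxes grow by eating hares, then die off)
    """
    a, b, c, d = 1.0, 0.1, 0.5, 1.5   # illustrative rate constants
    trace = []
    for step in range(int(years / dt)):
        dh = a * hares - b * hares * lynxes
        dl = c * b * hares * lynxes - d * lynxes
        hares += dt * dh
        lynxes += dt * dl
        if step % int(1 / dt) == 0:   # sample once per "year"
            trace.append((round(hares, 1), round(lynxes, 1)))
    return trace

# Populations rise and fall out of phase: the hunting oscillation.
for h, l in lotka_volterra()[:8]:
    print(f"hares={h:6.1f}  lynxes={l:5.1f}")
```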

In enzymology, feedback serves to regulate the activity of an enzyme via its direct product(s) or downstream metabolite(s) in the metabolic pathway (see allosteric regulation).

The hypothalamic-pituitary-adrenal axis and the hypothalamic-pituitary-gonadal (ovarian or testicular) axis are largely controlled by positive and negative feedback, much of which is still not fully understood.

In climate science

The climate system is characterized by strong feedback loops between processes that affect the state of the atmosphere, ocean, and land. A simple example is the ice-albedo positive feedback loop, whereby melting snow exposes more dark ground (of lower albedo), which in turn absorbs heat and causes more snow to melt. This feedback loop is part of the reason for concern about the danger of global warming.
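A deliberately simplified sketch of that loop, with made-up coefficients rather than physical constants, shows the self-reinforcing character of the feedback:

```python
def ice_albedo(temp=0.0, ice=1.0, forcing=0.2, steps=10):
    """A toy ice-albedo loop: warming melts ice, which lowers albedo,
    which absorbs more heat and causes further warming.

    temp is a temperature anomaly, ice the remaining ice fraction;
    every coefficient here is illustrative, not a physical constant.
    """
    for _ in range(steps):
        ice = max(0.0, ice - 0.05 * temp)    # warmer -> less ice
        albedo = 0.3 + 0.3 * ice             # less ice -> darker surface
        absorbed = forcing + (0.6 - albedo)  # darker -> more heat absorbed
        temp += absorbed
        print(f"temp={temp:5.2f}  ice={ice:4.2f}")

ice_albedo()  # warming accelerates as the ice fraction declines
```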

In control theory

Feedback is extensively used in control theory, using a variety of methods including state-space control, pole placement and so forth.

The most common general-purpose controller using a control-loop feedback mechanism is the proportional-integral-derivative (PID) controller. Each term of the PID controller deals with a different aspect of time: the proportional term handles the present state of the system, the integral term accounts for its past, and the derivative (slope) term tries to predict and handle the future.
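A minimal textbook PID implementation illustrating the three terms; the gains and the toy first-order process driven below are illustrative choices:

```python
class PID:
    """Textbook PID controller: P acts on the present error, I on its
    accumulated past, D on its rate of change (a prediction of the future)."""

    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measured: float, dt: float) -> float:
        error = setpoint - measured
        self.integral += error * dt                  # past
        derivative = (error - self.prev_error) / dt  # future (slope)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a simple first-order process toward a setpoint of 1.0.
pid, value, dt = PID(kp=2.0, ki=1.0, kd=0.1), 0.0, 0.05
for _ in range(100):
    value += dt * pid.update(setpoint=1.0, measured=value, dt=dt)
print(round(value, 3))  # close to 1.0
```

The integral term is what removes the steady-state error here; with the proportional term alone, the output would settle short of the setpoint.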

In economics and finance

A system prone to hunting (oscillating) is the stock market, which has both positive and negative feedback mechanisms, due to cognitive and emotional factors belonging to the field of behavioral finance. For example (a toy simulation follows the list):

• When stocks are rising (a bull market), the belief that further rises are probable gives investors an incentive to buy (positive feedback, see also stock market bubble); but the increased price of the shares, and the knowledge that there must be a peak after which the market will fall, ends up deterring buyers (negative feedback).

• Once the market begins to fall regularly (a bear market), some investors may expect further losing days and refrain from buying (positive feedback), but others may buy because stocks become more and more of a bargain (negative feedback).
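The toy simulation below combines a momentum channel (positive feedback) with a value channel (negative feedback); all parameters and names are invented for the illustration:

```python
import random

def toy_market(price=100.0, fair_value=100.0, days=250,
               momentum=0.4, reversion=0.1, noise=0.5, seed=1):
    """Toy price path with two feedback channels:

    - momentum buyers chase yesterday's move (positive feedback),
    - value traders pull the price back toward fair value (negative feedback).
    """
    rng = random.Random(seed)
    prev_change = 0.0
    path = [price]
    for _ in range(days):
        change = (momentum * prev_change              # positive feedback
                  + reversion * (fair_value - price)  # negative feedback
                  + rng.gauss(0.0, noise))            # news shocks
        price += change
        prev_change = change
        path.append(price)
    return path

path = toy_market()
print(f"min={min(path):.1f}  max={max(path):.1f}  last={path[-1]:.1f}")
```

With these gains the negative feedback dominates and the price oscillates around fair value; raising the momentum coefficient makes the positive feedback dominate and the swings grow.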

George Soros used the word "reflexivity" to describe feedback in the financial markets and developed an investment theory based on this principle.

The conventional economic equilibrium model of supply and demand supports only ideal linear negative feedback. It was heavily criticized by Paul Ormerod in his book "The Death of Economics", which was in turn criticized by traditional economists. The book was part of a change of perspective, as economists started to recognise that chaos theory applies to nonlinear feedback systems, including financial markets.

In education

Young students will often look up to instructors as experts in the field and take to heart most of the things instructors say. Thus, it is believed that spending a fair amount of time and effort thinking about how to respond to students may be a worthwhile investment. Here are some general types of feedback that can be used in many types of student assessment:

Confirmation: Your answer was incorrect.

Corrective: Your answer was incorrect. The correct answer was Jefferson.

Explanatory: Your answer was incorrect because Carter was from Georgia; only Jefferson called Virginia home.

Diagnostic: Your answer was incorrect. Your choice of Carter suggests some extra instruction on the home states of past presidents might be helpful.

Elaborative: Your answer, Jefferson, was correct. The University of Virginia, a campus rich with Jeffersonian architecture and writings, is sometimes referred to as Thomas Jefferson's school.

(Adapted from Fleming and Levie.)
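A hypothetical sketch of how an assessment system might generate the feedback types above; the templates, dictionary keys, and function name are assumptions for illustration, not drawn from Fleming and Levie:

```python
# Hypothetical templates for four of the feedback types listed above.
FEEDBACK = {
    "confirmation": "Your answer was incorrect.",
    "corrective":   "Your answer was incorrect. The correct answer was {correct}.",
    "explanatory":  ("Your answer was incorrect because {chosen} was from "
                     "{chosen_state}; only {correct} called {correct_state} home."),
    "diagnostic":   ("Your answer was incorrect. Your choice of {chosen} suggests "
                     "some extra instruction on {topic} might be helpful."),
}

def give_feedback(kind: str, **facts: str) -> str:
    """Pick a feedback type and fill in the item-specific facts."""
    return FEEDBACK[kind].format(**facts)

print(give_feedback("corrective", correct="Jefferson"))
print(give_feedback("explanatory", chosen="Carter", chosen_state="Georgia",
                    correct="Jefferson", correct_state="Virginia"))
```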

A different application of feedback in education is the system for "continuous improvement" of engineering curricula monitored by the Accreditation Board for Engineering and Technology (ABET).

In electronic engineering

The processing and control of feedback is engineered into many electronic devices and may also be embedded in other technologies.

If the signal is inverted on its way round the control loop, the system is said to have negative feedback; otherwise, the feedback is said to be positive. Negative feedback is often deliberately introduced to increase the stability and accuracy of a system. This scheme can fail if the input changes faster than the system can respond to it. When this happens, the lag in the arrival of the feedback signal results in positive feedback, causing the output to oscillate or hunt.[5] In this situation, positive feedback is usually an unwanted consequence of system behaviour.

Harry Nyquist contributed the Nyquist plot for assessing the stability of feedback systems. An easier, but less general, assessment is based upon gain margin and phase margin using Bode plots (contributed by Hendrik Bode). Design to ensure stability often involves frequency compensation, one method of compensation being pole splitting.
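As a sketch, the gain-crossover frequency and phase margin can be read off Bode data computed with scipy.signal; the open-loop transfer function below is an illustrative example, not one taken from the text:

```python
import numpy as np
from scipy import signal

# Illustrative open-loop transfer function G(s) = 2 / (s * (s + 1) * (s + 2)).
G = signal.TransferFunction([2.0], [1.0, 3.0, 2.0, 0.0])

# Bode data: magnitude in dB and phase in degrees over a frequency grid.
w = np.logspace(-2, 2, 2000)
w, mag_db, phase_deg = signal.bode(G, w)

# Phase margin: 180 degrees plus the phase at the 0 dB gain crossover.
crossover = np.argmin(np.abs(mag_db))   # index of the point nearest 0 dB
phase_margin = 180.0 + phase_deg[crossover]
print(f"gain crossover ~ {w[crossover]:.2f} rad/s, "
      f"phase margin ~ {phase_margin:.1f} deg")
```

A positive phase margin, as here, indicates the closed loop is stable; increasing the loop gain shifts the crossover to higher frequency and erodes that margin.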

In government

Examples of feedback in government are:

• Elections • Mass media • Revolution • Curfews

In mechanical engineering

In ancient times, the float valve was used to regulate the flow of water in Greek and Roman water clocks; similar float valves are used to regulate fuel in a carburetor and to regulate the tank water level in the flush toilet.

The windmill was enhanced in 1745 by blacksmith Edmund Lee who added a fantail to keep the face of the windmill pointing into the wind. In 1787 Thomas Mead regulated the speed of rotation of a windmill by using a centrifugal pendulum to adjust the distance between the bed stone and the runner stone (i.e. to adjust the load).

The use of the centrifugal governor by James Watt in 1788 to regulate the speed of his steam engine was one factor leading to the Industrial Revolution. Steam engines also use float valves and pressure release valves as mechanical regulation devices. A mathematical analysis of Watt's governor was done by James Clerk Maxwell in 1868.

The Great Eastern was one of the largest steamships of its time and employed a steam-powered rudder with a feedback mechanism designed in 1866 by J. McFarlane Gray. Joseph Farcot coined the word servo in 1873 to describe steam-powered steering systems. Hydraulic servos were later used to position guns. Elmer Ambrose Sperry of the Sperry Corporation designed the first autopilot in 1912. Nicolas Minorsky published a theoretical analysis of automatic ship steering in 1922 and described the PID controller.

Internal combustion engines of the late 20th century employed mechanical feedback mechanisms such as vacuum advance (see: Ignition timing) but mechanical feedback was replaced by electronic engine management systems once small, robust and powerful single-chip microcontrollers became affordable.