1. Research Perspectives on Management
8/3/2019 1. Research Perspectives on Management
http://slidepdf.com/reader/full/1-research-perspectives-on-management 1/31
RESEARCH PERSPECTIVES ON MANAGEMENT
BASIC ISSUES IN MANAGEMENT
Management comprises planning, organising, resourcing, leading or directing, and controlling an organisation (a group of one or more people or entities) or effort for the purpose of accomplishing a goal. Resourcing encompasses the deployment and manipulation of human resources, financial resources, technological resources and natural resources. Management also refers to the person or people who perform the acts of management. The verb manage comes from the Italian maneggiare (to handle, especially a horse), which in turn derives from the Latin manus (hand).
THE EVOLUTION OF MANAGEMENT THEORY
Management and organizations are products of their historical and social times and places.
Thus, we can understand the evolution of management theory in terms of how people have
wrestled with matters of relationships at particular times in history. One of the central lessons
of this chapter, and of this book as a whole, is that we can learn from the trials and tribulations
of those who have preceded us in steering the fortunes of formal organizations. As you study
management theory you will learn that although the particular concerns of Henry Ford and
Alfred Sloan are very different from those facing managers in the mid-1990s, we can still see
ourselves continuing the traditions that these individuals began long before our time. By
keeping in mind a framework of relationships and time, we can put ourselves in their shoes as
students of management.
Imagine that you are a manager at an American steel mill, textile factory, or one of Ford's
plants in the early twentieth century. Your factory employs thousands of workers. This is a
scale of enterprise unprecedented in Western history. Many of your employees were raised in
agricultural communities. Industrial routines are new to them. Many of your employees, as
well, are immigrants from other lands. They do not speak English well, if at all. As a manager under these circumstances, you will probably be very curious about how you can develop
working relationships with these people. Your managerial effectiveness depends on how well
you understand what it is that is important to these people. Current-day challenges parallel
some of those faced in the early twentieth century. In the 1980s 8.7 million foreign nationals
entered the U.S. and joined the labor market. They often have distinct needs for skills and
language proficiency, much as those before them at the advent of the industrial age.
Early management theory consisted of numerous attempts at getting to know these
newcomers to industrial life at the end of the nineteenth century and beginning of the
twentieth century in Europe and the United States. In this section, we will survey a number of
the better-known approaches to early management theory. These include scientific
management, classical organization theory, the behavioral school, and management science.
As you study these approaches, keep one important fact in mind: the managers and theorists
who developed these assumptions about human relationships were doing so with little
precedent. Large-scale industrial enterprise was very new. Some of the assumptions that they
made might therefore seem simple or unimportant to you, but they were crucial to Ford and
his contemporaries.
THE SCIENTIFIC MANAGEMENT SCHOOL
Scientific Management theory arose in part from the need to increase productivity. In the
United States especially, skilled labor was in short supply at the beginning of the twentieth
century. The only way to expand productivity was to raise the efficiency of workers.
Therefore, Frederick W. Taylor, Henry L. Gantt, and Frank and Lillian Gilbreth devised the
body of principles known as scientific management theory.
FREDERICK W. TAYLOR
Frederick W. Taylor (1856-1915) rested his philosophy on four basic principles:
1. The development of a true science of management, so that the best method for
performing each task could be determined.
2. The scientific selection of workers, so that each worker would be given responsibility
for the task for which he or she was best suited.
3. The scientific education and development of the worker.
4. Intimate, friendly cooperation between management and labor.
Taylor contended that the success of these principles required "a complete mental revolution"
on the part of management and labor. Rather than quarrel over profits, both sides should try
to increase production; by so doing, he believed, profits would rise to such an extent that
labor and management would no longer have to fight over them. In short, Taylor believed
that management and labor had a common interest in increasing productivity.
Taylor based his management system on production-line time studies. Instead of relying on
traditional work methods, he analyzed and timed steel workers' movements on a series of
jobs. Using time study as his base, he broke each job down into its components and designed
the quickest and best methods of performing each component. In this way he established how
much workers should be able to do with the equipment and materials at hand. He also
encouraged employers to pay more productive workers at a higher rate than others, using a
"scientifically correct" rate that would benefit both company and worker. Thus, workers were
urged to surpass their previous performance standards to earn more pay Taylor called his plan
the differential rate system.
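The differential rate system described above can be sketched as a tiny pay function. The standard of 50 pieces per day and the two piece rates are hypothetical figures chosen purely for illustration; Taylor's actual standards and rates varied by job.

```python
# A minimal sketch of Taylor's "differential rate system": workers who meet
# or beat the time-study standard earn a higher per-piece rate.
# The standard (50 pieces/day) and both rates are invented for illustration.

def daily_pay(pieces_produced: int,
              standard: int = 50,
              high_rate: float = 0.50,
              low_rate: float = 0.35) -> float:
    """Pay the high piece rate only if the worker reaches the standard."""
    rate = high_rate if pieces_produced >= standard else low_rate
    return pieces_produced * rate

print(daily_pay(55))  # meets the standard: 55 * 0.50 = 27.50
print(daily_pay(40))  # below the standard: 40 * 0.35 = 14.00
```

The jump in rate at the standard is the point of the scheme: the worker who surpasses the standard earns more per piece on every piece, not just on the pieces above the standard.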
MARY PARKER FOLLETT
Mary Parker Follett (1868-1933) was among those who built on the framework of the
classical school. However, she introduced many new elements, especially in the area of
human relations and organizational structure. In this, she initiated trends that would be further
developed by the emerging behavioral and management science schools.
Follett was convinced that no one could become a whole person except as a member of a
group; human beings grew through their relationships with others in organizations. In fact,
she called management "the art of getting things done through people." She took for granted
Taylor's assertion that labor and management shared a common purpose as members of the
same organization, but she believed that the artificial distinction between managers (order
givers) and subordinates (order takers) obscured this natural partnership. She was a great
believer in the power of the group, where individuals could combine their diverse talents into
something bigger. Moreover, Follett's "holistic" model of control took into account not just
individuals and groups, but the effects of such environmental factors as politics, economics,
and biology.
Follett's model was an important forerunner of the idea that management meant more than
just what was happening inside a particular organization. By explicitly adding the
organizational environment to her theory, Follett paved the way for management theory to
include a broader set of relationships, some inside the organization and some across the
organization's borders. A diverse set of modern management theories pays homage to Follett
on this point.
BASIC FUNCTIONS OF MANAGEMENT
Management operates through various functions, often classified as planning, organising, leading/motivating and controlling.
Planning: deciding what needs to happen in the future and generating plans for action.
Organising: making optimum use of the resources required to enable the successful carrying out of plans.
Leading/Motivating: exhibiting skills in these areas so as to get others to play an effective part in achieving plans.
Controlling/Monitoring: checking progress against plans, which may need modification based on feedback.
FORMATION OF BUSINESS POLICY
• The mission of the business is its most obvious purpose, which may be, for example, to make soap.
• The vision of the business reflects its aspirations and specifies its intended direction or future destination.
• The objectives of the business refer to the ends or activity at which a certain task is aimed.
• The business policy is a guide that stipulates rules, regulations and objectives and may be used in the manager's decision making. It must be flexible and easily interpreted and understood by all employees.
• The business strategy refers to the coordinated plan of action that the business is going to take, as well as the resources that it will use, to realise its vision and long-term objectives. It is a guideline to managers stipulating how they ought to allocate and utilise the factors of production to the business's advantage. Initially it could help the managers decide on what type of business they want to form.
HOW TO IMPLEMENT POLICIES AND STRATEGIES
• All policies and strategies must be discussed with all managerial personnel and staff.
• Managers must understand where and how they can implement their policies and strategies.
• A plan of action must be devised for each department.
• Policies and strategies must be reviewed regularly.
• Contingency plans must be devised in case the environment changes.
• Assessments of progress ought to be carried out regularly by top-level managers.
• A good environment is required within the business.
The Development of Policies and Strategies
• The missions, objectives, strengths and weaknesses of each department must be analysed to determine their roles in achieving the business mission.
• The forecasting method develops a reliable picture of the business's future environment.
• A planning unit must be created to ensure that all plans are consistent and that policies and strategies are aimed at achieving the same mission and objectives.
• Contingency plans must be developed just in case.
All policies must be discussed with all managerial personnel and with the staff required to carry out any departmental policy.
Where Policies and Strategies Fit into the Planning Process
• They give mid- and lower-level managers a good idea of the future plans for each department.
What makes a good leader or manager? For many it is someone who can inspire and get the
most from their staff.
There are many qualities that are needed to be a good leader or manager.
• Be able to think creatively to provide a vision for the company and solve problems
• Be calm under pressure and make clear decisions
• Possess excellent two-way communication skills
• Have the desire to achieve great things
• Be well informed and knowledgeable about matters relating to the business
• Possess an air of authority
Do you have to be born with the correct qualities, or can you be taught to be a good leader? It is most likely that well-known leaders and managers (Winston Churchill, Richard Branson or Alex Ferguson?) are successful due to a combination of personal characteristics and good training.
Managers deal with their employees in different ways. Some are strict with their staff and like to be in complete control, whilst others are more relaxed and allow workers the freedom to run their own working lives (just like the different approaches you may see in teachers!). Whatever approach is predominantly used, it will be vital to the success of the business. "An organisation is only as good as the person running it."
There are three main categories of leadership styles: autocratic, paternalistic and democratic.
Autocratic (or authoritarian) managers like to make all the important decisions and closely
supervise and control workers. Managers do not trust workers and simply give orders (one-
way communication) that they expect to be obeyed. This approach derives from the views of
Taylor as to how to motivate workers and relates to McGregor's Theory X view of workers.
This approach has limitations (as highlighted by other motivational theorists such as Mayo
and Herzberg), but it can be effective in certain situations. For example:
• When quick decisions are needed in a company (e.g. in a time of crisis)
• When controlling large numbers of low-skilled workers
Paternalistic managers give more attention to the social needs and views of their workers.
Managers are interested in how happy workers feel and in many ways they act as a father
figure (pater means father in Latin). They consult employees over issues and listen to their
feedback or opinions. The manager will, however, make the actual decisions (in the best
interests of the workers), as they believe the staff still need direction, and in this way it is still
somewhat of an autocratic approach. The style is closely linked with Mayo's human relations
view of motivation and also with the social needs identified by Maslow.
A democratic style of management will put trust in employees and encourage them to make
decisions. They will delegate to them the authority to do this (empowerment) and listen to
their advice. This requires good two-way communication and often involves democratic
discussion groups, which can offer useful suggestions and ideas. Managers must be willing to
encourage leadership skills in subordinates.
The ultimate democratic system occurs when decisions are made based on the majority view
of all workers. However, this is not feasible for the majority of decisions taken by a business;
indeed, one of the criticisms of this style is that it can take longer to reach a decision. This
style has close links with Herzberg's motivators and Maslow's higher-order needs, and also
applies to McGregor's Theory Y view of workers.
COMPETENCE
Competence is a standardised requirement for an individual to properly perform a specific
job. It encompasses a combination of knowledge, skills and behaviour utilised to improve
performance. More generally, competence is the state or quality of being adequately or well
qualified, having the ability to perform a specific role. For instance, management competency
includes the traits of systems thinking and emotional intelligence, and skills in influence and
negotiation. A person possesses a competence as long as the skills, abilities and knowledge
that constitute that competence are a part of them, enabling the person to perform effective
action within a certain workplace environment. Therefore, one might not lose knowledge, a
skill or ability but still lose a competence if what is needed to do a job well changes.
Competence is also used in more general descriptions of the requirements of human beings in
organisations and communities. Examples are educational institutions and other organisations
that want a general language to state what a graduate must be able to do in order to graduate,
or what a member of an organisation is required to be able to do in order to be considered
competent. All competences have to be action competences, which means you show in action
that you are competent.
Competence is shown in action in a situation in a context that might be different the next time
you have to act. In emergency contexts, competent people will react to the situation following
behaviours they have previously found to succeed, hopefully to good effect. To be competent
you need to be able to interpret the situation in the context and to have a repertoire of possible
actions to take and have trained in the possible actions in the repertoire, if this is relevant.
Regardless of training, competence grows through experience and the capacity of an
individual to learn and adapt. The concept of competence has different meanings and continues to
remain one of the most diffuse terms in the management development sector and the
organisational and occupational literature.
General Competence Development
Whatever competences a company registers, in HR it is much more important to have a policy for developing competences, especially the general competences. Dreyfus and Dreyfus introduced a language for the levels of competence in competence development. The levels are:
1. Novice: rule-based behaviour, strongly limited and inflexible.
2. Experienced Beginner: incorporates aspects of the situation.
3. Practitioner: acts consciously from long-term goals and plans.
4. Knowledgeable Practitioner: sees the situation as a whole and acts from personal conviction.
5. Expert: has an intuitive understanding of the situation and zooms in on the central aspects.
6. Virtuoso: has a higher degree of competence, advances the standards and has an easy and creative way of doing things.
7. Maestro: changes the history of a field by inventing and introducing radical innovations.
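The ordered ladder above can be encoded as a small lookup, the kind a hypothetical HR tool might use to compare two people's levels of competence. The level names follow the text; the numeric rank is the only structure added.

```python
# The Dreyfus & Dreyfus levels from the text, ordered from lowest to highest.
DREYFUS_LEVELS = [
    "Novice",
    "Experienced Beginner",
    "Practitioner",
    "Knowledgeable Practitioner",
    "Expert",
    "Virtuoso",
    "Maestro",
]

def level_of(name: str) -> int:
    """1-based rank of a competence level; raises ValueError if unknown."""
    return DREYFUS_LEVELS.index(name) + 1

def more_advanced(a: str, b: str) -> str:
    """Return whichever of two level names sits higher on the ladder."""
    return a if level_of(a) >= level_of(b) else b

print(more_advanced("Expert", "Practitioner"))  # Expert
```

Encoding the ladder as an ordered list keeps the comparison logic trivial: rank is just position, so no per-level numbers need to be maintained by hand.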
The process of competence development is a lifelong series of doing and reflecting. And it
requires a special environment, where rules are necessary in order to introduce novices, but
where people at a more advanced level of competence will systematically break the rules, as
reflected in terms such as learning organisation, knowledge creation, self-organising and
empowerment.
RESEARCH METHODOLOGY
Research is an endeavour to discover answers to intellectual and practical problems through
the application of scientific method. "Research is a systematized effort to gain new
knowledge."
Research is the systematic process of collecting and analyzing information (data) in order to
increase our understanding of the phenomenon about which we are concerned or interested.
Descriptive Research
This research is the most commonly used, and the basic reason for carrying out descriptive
research is to identify the cause of something that is happening. For instance, this research
could be used in order to find out what age group is buying a particular brand of cola,
whether a company's market share differs between geographical regions, or to discover how
many competitors a company has in their marketplace. However, if the research is to return
useful results, whoever is conducting the research must comply with strict research
requirements in order to obtain the most accurate figures/results possible.
This type of research is also a grouping that includes many particular research methodologies
and procedures, such as observations, surveys, self-reports and tests. The four parameters of
research will help us understand how descriptive research in general is similar to, and
different from, other types of research. Unlike qualitative research, descriptive research may
be more analytic. It often focuses on a particular variable or factor. Descriptive research may
also operate on the basis of hypotheses (often generated through previous, qualitative
research). That moves it toward the deductive side of the deductive/heuristic continuum.
Finally, like qualitative research, descriptive research aims to gather data without any
manipulation of the research context. In other words, descriptive research is also low on the
"control or manipulation of research context" scale. It is non-intrusive and deals with
naturally occurring phenomena. In addition, the data collection procedures used in descriptive
research may be very explicit. Some observation instruments, for example, employ highly
refined categories of behaviour and yield quantitative (numerical) data. These differences
also lead to another significant characteristic of descriptive research: the types of subjects it
studies. Descriptive research may focus on individual subjects and go into great depth and
detail in describing them. Individual variation is not only allowed for but studied. This
approach is called a case study. On the other hand, because of the data collection and
analysis procedures it may employ, descriptive research can also investigate large groups of
subjects. Often these are pre-existing classes. In these cases, the analytical procedures tend
to produce results that show "average" behaviour for the group.
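The cola example above can be made concrete with a toy descriptive analysis: tallying which age group buys a particular brand. The survey records are invented for illustration; the point is that the data are only described, never manipulated.

```python
# A toy illustration of descriptive research: counting buyers of a cola
# brand per age group. All survey records below are invented.
from collections import Counter

survey = [
    {"age_group": "18-24", "buys_brand": True},
    {"age_group": "18-24", "buys_brand": True},
    {"age_group": "25-34", "buys_brand": False},
    {"age_group": "25-34", "buys_brand": True},
    {"age_group": "35-44", "buys_brand": False},
]

# Tally buyers per age group -- purely describing the data, with no
# manipulation of the research context.
buyers = Counter(r["age_group"] for r in survey if r["buys_brand"])
print(buyers.most_common(1))  # [('18-24', 2)]
```

The result is a quantitative summary of "average" group behaviour of exactly the kind the paragraph describes, produced without intervening in the situation being studied.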
Kinds of Research
Generally speaking, in second language research it is useful to distinguish between basic,
applied and practical research. Basic research is concerned with knowledge for the sake of
theory. Its design is not controlled by the practical usefulness of the findings. Applied
research is concerned with showing how the findings can be applied or summarised into some
type of teaching methodology. Practical research goes one step further and applies the
findings of research to a specific "practical" teaching situation. A useful way to look at the
relationships among these three research types is that each contributes to the others by
helping to revise and frame the research in the other categories.
TYPES OF DESIGNS
What are the different major types of research designs? We can classify designs into a simple
threefold classification by asking some key questions. First, does the design use random
assignment to groups? [Don't forget that random assignment is not the same thing as random
selection of a sample from a population!] If random assignment is used, we call the design a
randomized experiment or true experiment. If random assignment is not used, then we have
to ask a second question: does the design use either multiple groups or multiple waves of
measurement? If the answer is yes, we would label it a quasi-experimental design. If no, we
would call it a non-experimental design. This threefold classification is especially useful for
describing the design with respect to internal validity. A randomized experiment generally is
the strongest of the three designs when your interest is in establishing a cause-effect
relationship. A non-experiment is generally the weakest in this respect. I have to hasten to
add here that I don't mean that a non-experiment is the weakest of the three designs
overall, but only with respect to internal validity or causal assessment. In fact, the simplest form
of non-experiment is a one-shot survey design that consists of nothing but a single
observation O. This is probably one of the most common forms of research and, for some
research questions -- especially descriptive ones -- is clearly a strong design. When I say that
the non-experiment is the weakest with respect to internal validity, all I mean is that it isn't a
particularly good method for assessing the cause-effect relationship that you think might exist
between a program and its outcomes.
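The two questions above form a short decision procedure, which can be written out as a small function: random assignment wins outright; otherwise multiple groups or multiple waves of measurement make it a quasi-experiment; otherwise it is a non-experiment.

```python
# The threefold classification of research designs as a decision function.
def classify_design(random_assignment: bool,
                    multiple_groups: bool,
                    multiple_waves: bool) -> str:
    if random_assignment:
        return "randomized experiment"
    if multiple_groups or multiple_waves:
        return "quasi-experimental design"
    return "non-experimental design"

print(classify_design(True, True, False))    # randomized experiment
print(classify_design(False, True, True))    # quasi-experimental design
print(classify_design(False, False, False))  # non-experimental design
```

Note that the second question is only asked when the answer to the first is no, exactly as in the text.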
To illustrate the different types of designs, consider one of each in design notation. The first
design is a posttest-only randomized experiment. You can tell it's a randomized experiment
because it has an R at the beginning of each line, indicating random assignment. The second
design is a pre-post non-equivalent groups quasi-experiment. We know it's not a randomized
experiment because random assignment wasn't used. And we know it's not a non-experiment
because there are both multiple groups and multiple waves of measurement. That means it
must be a quasi-experiment. We add the label "non-equivalent" because in this design we do
not explicitly control the assignment, and the groups may be non-equivalent or not similar to
each other (see non-equivalent group designs). Finally, we show a posttest-only
non-experimental design. You might use this design if you want to study the effects of a natural
disaster like a flood or tornado and you want to do so by interviewing survivors. Notice that
in this design, you don't have a comparison group (e.g., interviewing in a town down the road
that didn't have the tornado to see what differences the tornado caused) and you don't
have multiple waves of measurement (e.g., a pre-tornado level of how people in the ravaged
town were doing before the disaster). Does it make sense to do the non-experimental study?
Of course! You could gain lots of valuable information by well-conducted post-disaster
interviews. But you may have a hard time establishing which of the things you observed are
due to the disaster rather than to other factors like the peculiarities of the town or pre-disaster
characteristics.
Introducing Parameters
Because the scope of language research is so broad and there are so many variables involved,
it is sometimes difficult to find any hard and fast rules to follow when doing research. On the
next few pages you will see a useful set of interrelated and independent parameters to guide
the planning of language-related research. They are independent in that they can be considered
separately. But they are interrelated because in actual practice researchers' choices within one
parameter will influence choices in others.
Qualitative Research
This type of research goes by many names: ethnography, cognitive anthropology, etc. A good
way to understand qualitative research is to examine it in terms of the research parameters
we've already discussed. First, qualitative research tends to be synthetic rather than analytic.
It attempts to capture the "big picture" and see how a multitude of variables work together in
the real world. Another characteristic of qualitative research is that it is generally heuristic, or
hypothesis generating. Unlike deductive research, it does not start with preconceived notions
or hypotheses; it attempts to discover, understand, and interpret what is happening in the
research context. In addition, the degree of control over the research context is low.
Qualitative research examines naturally occurring behaviour, so the investigative methods are
as non-intrusive as possible. Therefore, the researcher's effect on the subjects and data is
minimal.
MAIN KINDS OF EXPERIMENTAL RESEARCH
Within the realm of experimental research, there are three major types of design:
• TRUE-EXPERIMENTAL
• QUASI-EXPERIMENTAL
• PRE-EXPERIMENTAL
If you choose to conduct experimental research, one of your most important tasks will be to
choose the design that gives your research the best combination of internal and external
validity. At the same time, it must be practical enough so that you can actually do the
research in your own circumstances.
Remember, no particular type is right for all situations. Real-world constraints will often
dictate what is practical or possible. In any case you need to be careful to recognize the
weaknesses of the design you choose. Do not attempt to prove things or make claims in your
findings that are beyond the capabilities of your design.
TRUE-EXPERIMENTAL DESIGNS must employ the following:
• Random selection of subjects
• Use of control groups
• Random assignment of subjects to control and experimental groups
• Random assignment of groups to control and experimental conditions
In order for an experiment to follow a true-experimental design, it must meet the preceding
criteria. There is some variation in true-experimental designs, but that variation comes in the
time(s) that the treatment is given to the experimental group, or in the observation or
measurement (pre-test, post-test, mid-test) area.
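The random-assignment step the criteria above require can be sketched in a few lines: shuffle the (already randomly selected) subjects and split them into control and experimental groups. The subject labels and the fixed seed are illustrative only; a real study would not fix the seed.

```python
# A sketch of random assignment to control and experimental groups.
import random

def assign_groups(subjects, seed=None):
    """Randomly split subjects into (control, experimental) halves."""
    pool = list(subjects)
    random.Random(seed).shuffle(pool)  # chance alone decides group membership
    half = len(pool) // 2
    return pool[:half], pool[half:]

control, experimental = assign_groups(["s1", "s2", "s3", "s4", "s5", "s6"])
# Every subject lands in exactly one group; which group is down to chance.
```

Keeping assignment purely random is what distinguishes a true experiment: neither the researcher nor any pre-existing grouping influences who receives the treatment.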
Advantages of the true-experimental design include:
• Greater internal validity
• Causal claims can be investigated
Disadvantages:
• Less external validity (not like real-world conditions)
• Not very practical
QUASI-EXPERIMENTAL DESIGNS are usually constructions that already exist in the
real world. Those designs that fall into the quasi-experimental category fall short in some
way of the criteria for the true-experimental group. A quasi-experimental design will have
some sort of control and experimental group, but these groups probably weren't randomly
selected. Random selection is usually where true-experimental and quasi-experimental
designs differ.
Some advantages of the quasi-experimental design include:
• Greater external validity (more like real-world conditions)
• Much more feasible given time and logistical constraints
Disadvantages:
• Not as many variables controlled (fewer causal claims)
PRE-EXPERIMENTAL DESIGNS are lacking in several areas of the true-experimental
criteria. Not only do they lack random selection in most cases, but they usually employ just a
single group. This group receives the "treatment"; there is no control group. Pilot studies,
one-shot case studies, and most research using only one group fall into this category.
The advantages are:
• Very practical
• Sets the stage for further research
Disadvantages:
• Lower validity
RESEARCH QUESTIONS
Finding a RESEARCH QUESTION is probably the most important task in the research
process, because the question becomes the driving force behind the research, from beginning
to end.
A research question is always stated in question form. It may start out being rather general
and become focused and refined later on (after you become more familiar with the topic,
learn what others have discovered, define your terms more carefully, etc.).
The research question you start out with forms the basis for your review of related research
literature. This general question also evolves into your hypothesis (or focused research
question). When you draw conclusions, they should address this question. In the end, the
success of your research depends on how well you answer this question.
It is important to choose a question that satisfies certain criteria:
• It must not be too broad or general (although you will focus it even more later on in the process).
• It shouldn't have already been answered by previous research (although replication with variation is certainly acceptable).
• It ought to be a question that needs to be answered (i.e., the answer will be useful to people).
• It must be a question that can be answered through empirical means.
You can go to many sources to find topics or issues that can lead to research questions.
Here are a few:
• Personal experience
• Professional books
• Articles in professional periodicals
• Professional indexes (LLBA, MLA, ERIC, etc.)
• Other teachers and administrators
• Bibliographies of various types
• Unpublished research by others
It is wise to focus your research so that it is "do-able." Be careful! Don't try to do too
much in one study. It is, however, very possible (and quite common) to address
several related research questions in one study. This approach is "economical" in that
it produces more results with about the same amount of effort.
Here are a couple of examples:
Will students learn a foreign language better when they are in a relaxed state of mind?
What is the relationship between learners' ages and their accents?
Literature Review
A LITERATURE REVIEW is a formal survey of professional literature that is pertinent to
your particular question. In this way you will find out exactly what others have learned in
relation to your question. This process will also help frame and focus your question and move
you closer to the hypothesis or focused question.
Once you have decided on a general research question, you need to read widely in that area.
Use the same sources of information that you consulted when you came up with your general
question, but now narrow your focus. Look for information that relates to your research
question.
Hypothesis and Focused Question
In deductive research, a HYPOTHESIS is necessary. It is a focused statement that predicts
an answer to your research question. It is based on the findings of previous research (gained
from your review of the literature) and perhaps on your previous experience with the subject.
The ultimate objective of deductive research is to decide whether to accept or reject the
hypothesis as stated. When formulating research methods (subjects, data collection
instruments, etc.), wise researchers are guided by their hypothesis. In this way, the hypothesis
gives direction and focus to the research.
Here is a sample HYPOTHESIS:
The "Bowen technique" will significantly improve intermediate-level, college-age ESL
students' accuracy when pronouncing voiced and voiceless consonants and tense and lax
vowels.
Sometimes researchers choose to state their hypothesis in "null" form. This may seem to run
counter to what the researchers really expect, but it is a cautious way to operate. When (and
only when) this null hypothesis is disproved or falsified, the researcher may then accept a
logically "alternate" hypothesis. This is similar to the procedure used in courts of law. If a
person accused of a crime is not shown to be guilty, then it is concluded that he/she is
innocent.
Here is a sample NULL HYPOTHESIS:
The Bowen technique will have no significant effect on learners' pronunciation.
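The accept-or-reject logic described above can be sketched as a small simulation. The pronunciation scores below are invented for illustration (they are not data from any real study), and the permutation test used here is just one of several significance tests a researcher might choose; it asks how often a difference as large as the observed one would arise if the group labels were meaningless, i.e., if the null hypothesis were true:

```python
import random

# Hypothetical pronunciation-accuracy scores (0-100) for two groups of
# learners: one taught with the "Bowen technique", one without.
bowen   = [78, 85, 81, 90, 74, 88, 83, 79]
control = [72, 70, 81, 68, 75, 77, 66, 73]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(bowen) - mean(control)

# Permutation test: under the null hypothesis the group labels are
# arbitrary, so we shuffle them repeatedly and count how often a
# difference at least as large as the observed one occurs by chance.
random.seed(0)
pooled = bowen + control
count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:len(bowen)]) - mean(pooled[len(bowen):])
    if diff >= observed:
        count += 1

p_value = count / trials
print(f"observed difference: {observed:.2f}, p = {p_value:.4f}")
# A small p-value would lead us to reject the null hypothesis and accept
# the logically "alternate" hypothesis.
```

With these invented scores the observed difference is large relative to the within-group spread, so the null hypothesis would be rejected; with noisier data the same procedure could just as well fail to reject it.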
In heuristic research, a hypothesis is not necessary. This type of research employs a
"discovery approach." In spite of the fact that this type of research does not use a formal
hypothesis, focus and structure are still critical. If the research question is too general, the
search to find an answer to it may be futile or fruitless. Therefore, after reviewing the relevant
literature, the researcher may arrive at a FOCUSED RESEARCH QUESTION.
Here is a sample FOCUSED RESEARCH QUESTION:
Is a contrastive presentation (showing both native and target cultures) more effective than a
non-contrastive presentation (showing only the target culture) in helping students understand
the target culture?
RESEARCH PHILOSOPHY
You probably think of research as something very abstract and complicated. It can be, but
you'll see (I hope) that if you understand the different parts or phases of a research project
and how these fit together, it's not nearly as complicated as it may seem at first glance. A
research project has a well-known structure -- a beginning, middle and end. We introduce the
basic phases of a research project in The Structure of Research. In that section, we also
introduce some important distinctions in research: the different types of questions you can
ask in a research project, and the major components or parts of a research project.
Before the modern idea of research emerged, we had a term for what philosophers used to
call research -- logical reasoning. So, it should come as no surprise that some of the basic
distinctions in logic have carried over into contemporary research. In Systems of Logic we
discuss how two major logical systems, the inductive and deductive methods of reasoning,
are related to modern research.

All research is based on assumptions about how the world is perceived and how we can best
come to understand it. Of course, nobody really knows how we can best understand the
world, and philosophers have been arguing about that very question for at least two millennia
now, so all we're going to do is look at how most contemporary social scientists approach the
question of how we know about the world around us. We consider two major philosophical
schools of thought -- Positivism and Post-Positivism -- that are especially important perspectives
for contemporary social research (OK, we are only considering positivism and post-
positivism here because these are the major schools of thought. Forgive us for not
considering the hotly debated alternatives like relativism, subjectivism, hermeneutics,
deconstructivism, constructivism, feminism, etc. If you really want to cover that stuff, start
your own Web site and send me your URL to stick in here).
Quality is one of the most important issues in research. We introduce the idea of validity to
refer to the quality of various conclusions you might reach based on a research project. Here's
where I've got to give you the pitch about validity. When I mention validity, most students
roll their eyes, curl up into a fetal position or go to sleep. They think validity is just
something abstract and philosophical (and I guess it is at some level). But we think if you can
understand validity -- the principles that we use to judge the quality of research -- you'll be
able to do much more than just complete a research project. You'll be able to be a virtuoso at
research, because you'll have an understanding of why we need to do certain things in order
to assure quality. You won't just be plugging in standard procedures you learned in school --
sampling method X, measurement tool Y -- you'll be able to help create the next generation
of research technology. Enough for now -- more on this later.
RESEARCH STRUCTURE
Most research projects share the same general structure. You might think of this structure as
following the shape of an hourglass. The research process usually starts with a broad area of
interest, the initial problem that the researcher wishes to study. For instance, the researcher
could be interested in how to use computers to improve the performance of students in
mathematics. But this initial interest is far too broad to study in any single research project (it
might not even be addressable in a lifetime of research). The researcher has to narrow the
question down to one that can reasonably be studied in a research project. This might involve
formulating a hypothesis or a focus question. For instance, the researcher might hypothesize
that a particular method of computer instruction in math will improve the ability of elementary school students in a specific district. At the narrowest point of the research
hourglass, the researcher is engaged in direct measurement or observation of the question of
interest.
Once the basic data is collected, the researcher begins to try to understand it, usually by
analyzing it in a variety of ways. Even for a single hypothesis there are a number of analyses
a researcher might typically conduct. At this point, the researcher begins to formulate some
initial conclusions about what happened as a result of the computerized math program.
Finally, the researcher often will attempt to address the original broad question of interest by
generalizing from the results of this specific study to other related situations. For instance, on
the basis of strong results indicating that the math program had a positive effect on student
performance, the researcher might conclude that other school districts similar to the one in
the study might expect similar results.
Social research is always conducted in a social context. We ask people questions, or observe
families interacting, or measure the opinions of people in a city. An important component of
a research project is the units that participate in the project. Units are directly related to the
question of sampling. In most projects we cannot involve all of the people we might like to
involve. For instance, in studying a program of support services for the newly employed we
can't possibly include in our study everyone in the world, or even in the country, who is
newly employed. Instead, we have to try to obtain a representative sample of such people.
When sampling, we make a distinction between the theoretical population of interest to our
study and the final sample that we actually measure in our study. Usually the term "units"
refers to the people that we sample and from whom we gather information. But for some
projects the units are organizations, groups, or geographical entities like cities or towns.
Sometimes our sampling strategy is multi-level: we sample a number of cities and within
them sample families. In causal studies, we are interested in the effects of some cause on one
or more outcomes. The outcomes are directly related to the research problem -- we are
usually most interested in outcomes that are most reflective of the problem. In our
hypothetical supported employment study, we would probably be most interested in measures
of employment -- is the person currently employed, or, what is their rate of absenteeism.
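The distinction between the theoretical population and the final sample, including the multi-level strategy of sampling cities and then families within them, can be sketched in a few lines. The city names and family counts below are invented for illustration:

```python
import random

# Hypothetical sampling frame: cities, each with a list of family IDs.
frame = {
    "Springfield": [f"S{i}" for i in range(200)],
    "Riverton":    [f"R{i}" for i in range(150)],
    "Lakeside":    [f"L{i}" for i in range(300)],
    "Hillview":    [f"H{i}" for i in range(120)],
}

random.seed(42)

# Stage 1: sample cities from the theoretical population of cities.
cities = random.sample(list(frame), k=2)

# Stage 2: within each sampled city, sample families. These families are
# the units we would actually measure.
sample = {city: random.sample(frame[city], k=10) for city in cities}

for city, families in sample.items():
    print(city, families[:3], "...")
```

Real multi-stage designs usually weight the stages (for example, selecting cities with probability proportional to size), which this sketch omits.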
Finally, in a causal study we usually are comparing the effects of our cause of interest (e.g.,
the program) relative to other conditions (e.g., another program or no program at all). Thus, a
key component in a causal study concerns how we decide what units (e.g., people) receive
our program and which are placed in an alternative condition. This issue is directly related to
the research design that we use in the study. One of the central questions in research design is
determining how people wind up in or are placed in various programs or treatments that we
are comparing.
Deduction & Induction
In logic, we often refer to the two broad methods of reasoning as the deductive and inductive
approaches. Deductive reasoning works from the more general to the more specific.
Sometimes this is informally called a "top-down" approach. We might begin with thinking up
a theory about our topic of interest. We then narrow that down into more specific hypotheses
that we can test. We narrow down even further when we collect observations to address the
hypotheses. This ultimately leads us to be able to test the hypotheses with specific data -- a
confirmation (or not) of our original theories.
Inductive reasoning works the other way, moving from specific observations to broader
generalizations and theories. Informally, we sometimes call this a "bottom up" approach
(please note that it's "bottom up" and not "bottoms up" which is the kind of thing the
bartender says to customers when he's trying to close for the night!). In inductive reasoning,
we begin with specific observations and measures, begin to detect patterns and regularities,
formulate some tentative hypotheses that we can explore, and finally end up developing some
general conclusions or theories. These two methods of reasoning have a very different "feel"
to them when you're conducting research. Inductive reasoning, by its very nature, is more
open-ended and exploratory, especially at the beginning. Deductive reasoning is more narrow
in nature and is concerned with testing or confirming hypotheses. Even though a particular
study may look like it's purely deductive (e.g., an experiment designed to test the
hypothesized effects of some treatment on some outcome), most social research involves both
inductive and deductive reasoning processes at some time in the project. In fact, it doesn't
take a rocket scientist to see that we could assemble the two graphs above into a single
circular one that continually cycles from theories down to observations and back up again to
theories. Even in the most constrained experiment, the researchers may observe patterns in
the data that lead them to develop new theories.
Positivism & Post-Positivism
Let's start our very brief discussion of philosophy of science with a simple distinction
between epistemology and methodology. The term epistemology comes from the Greek word
epistêmê, their term for knowledge. In simple terms, epistemology is the philosophy of
knowledge or of how we come to know. Methodology is also concerned with how we come
to know, but is much more practical in nature. Methodology is focused on the specific ways -
- the methods -- that we can use to try to understand our world better. Epistemology and
methodology are intimately related: the former involves the philosophy of how we come to
know the world and the latter involves the practice.
When most people in our society think about science, they think about some guy in a white
lab coat working at a lab bench mixing up chemicals. They think of science as boring, cut-
and-dried, and they think of the scientist as narrow-minded and esoteric (the ultimate nerd --
think of the humorous but nonetheless mad scientist in the Back to the Future movies, for
instance). A lot of our stereotypes about science come from a period where science was
dominated by a particular philosophy -- positivism -- that tended to support some of these
views. Here, I want to suggest (no matter what the movie industry may think) that science has
moved on in its thinking into an era of post-positivism where many of those stereotypes of the
scientist no longer hold up.
Let's begin by considering what positivism is. In its broadest sense, positivism is a rejection
of metaphysics (I leave it to you to look up that term if you're not familiar with it). It is a
position that holds that the goal of knowledge is simply to describe the phenomena that we
experience. The purpose of science is simply to stick to what we can observe and measure.
Knowledge of anything beyond that, a positivist would hold, is impossible. When I think of
positivism (and the related philosophy of logical positivism) I think of the behaviorists in
mid-20th Century psychology. These were the mythical 'rat runners' who believed that
psychology could only study what could be directly observed and measured. Since we can't
directly observe emotions, thoughts, etc. (although we may be able to measure some of the
physical and physiological accompaniments), these were not legitimate topics for a scientific
psychology. B.F. Skinner argued that psychology needed to concentrate only on the positive
and negative reinforcers of behavior in order to predict how people will behave -- everything
else in between (like what the person is thinking) is irrelevant because it can't be measured.
In a positivist view of the world, science was seen as the way to get at truth, to understand the
world well enough so that we might predict and control it. The world and the universe were
deterministic -- they operated by laws of cause and effect that we could discern if we applied
the unique approach of the scientific method. Science was largely a mechanistic or
mechanical affair. We use deductive reasoning to postulate theories that we can test. Based
on the results of our studies, we may learn that our theory doesn't fit the facts well and so we
need to revise our theory to better predict reality. The positivist believed in empiricism -- the
idea that observation and measurement were the core of the scientific endeavor. The key
approach of the scientific method is the experiment, the attempt to discern natural laws
through direct manipulation and observation.
OK, I am exaggerating the positivist position (although you may be amazed at how close to
this some of them actually came) in order to make a point. Things have changed in our views
of science since the middle part of the 20th century. Probably the most important has been
our shift away from positivism into what we term post-positivism. By post-positivism, I don't
mean a slight adjustment to or revision of the positivist position -- post-positivism is a
wholesale rejection of the central tenets of positivism. A post-positivist might begin by
recognizing that the way scientists think and work and the way we think in our everyday life
are not distinctly different. Scientific reasoning and common sense reasoning are essentially
the same process. There is no difference in kind between the two, only a difference in degree.
Scientists, for example, follow specific procedures to assure that observations are verifiable,
accurate and consistent. In everyday reasoning, we don't always proceed so carefully
(although, if you think about it, when the stakes are high, even in everyday life we become
much more cautious about measurement. Think of the way most responsible parents keep
continuous watch over their infants, noticing details that non-parents would never detect).
One of the most common forms of post-positivism is a philosophy called critical realism . A
critical realist believes that there is a reality independent of our thinking about it that science
can study. (This is in contrast with a subjectivist who would hold that there is no external
reality -- we're each making this all up!). Positivists were also realists. The difference is that
the post-positivist critical realist recognizes that all observation is fallible and has error and
that all theory is revisable. In other words, the critical realist is critical of our ability to know
reality with certainty. Where the positivist believed that the goal of science was to uncover
the truth, the post-positivist critical realist believes that the goal of science is to hold
steadfastly to the goal of getting it right about reality, even though we can never achieve that
goal! Because all measurement is fallible, the post-positivist emphasizes the importance of
multiple measures and observations, each of which may possess different types of error, and
the need to use triangulation across these multiple errorful sources to try to get a better bead
on what's happening in reality. The post-positivist also believes that all observations are
theory-laden and that scientists (and everyone else, for that matter) are inherently biased by
their cultural experiences, world views, and so on. This is not cause to give up in despair,
however. Just because I have my world view based on my experiences and you have yours
doesn't mean that we can't hope to translate from each other's experiences or understand each
other. That is, post-positivism rejects the relativist idea of the incommensurability of different
perspectives, the idea that we can never understand each other because we come from
different experiences and cultures. Most post-positivists are constructivists who believe that
we each construct our view of the world based on our perceptions of it. Because perception
and observation are fallible, our constructions must be imperfect. So what is meant by
objectivity in a post-positivist world? Positivists believed that objectivity was a characteristic
that resided in the individual scientist. Scientists are responsible for putting aside their biases
and beliefs and seeing the world as it 'really' is. Post-positivists reject the idea that any
individual can see the world perfectly as it really is. We are all biased and all of our
observations are affected (theory-laden). Our best hope for achieving objectivity is to
triangulate across multiple fallible perspectives! Thus, objectivity is not the characteristic of
an individual, it is inherently a social phenomenon. It is what multiple individuals are trying
to achieve when they criticize each other's work. We never achieve objectivity perfectly, but
we can approach it. The best way for us to improve the objectivity of what we do is to do it
within the context of a broader contentious community of truth-seekers (including other
scientists) who criticize each other's work. The theories that survive such intense scrutiny are
a bit like the species that survive in the evolutionary struggle. (This is sometimes called the
natural selection theory of knowledge and holds that ideas have 'survival value' and that
knowledge evolves through a process of variation, selection and retention). They have
adaptive value and are probably as close as our species can come to being objective and
understanding reality.
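The claim that triangulating across multiple errorful sources gets a better bead on reality than relying on any single fallible measure can be illustrated numerically. The "instruments," their biases, and their noise levels below are all invented; the point is only that independent errors tend to cancel when observations are combined:

```python
import random
import statistics

random.seed(1)
TRUTH = 50.0  # the underlying reality we are trying to measure

def measure(bias, noise_sd):
    """One fallible observation: systematic bias plus random error."""
    return TRUTH + bias + random.gauss(0, noise_sd)

# Three imperfect data sources with different biases and noise levels.
# Their biases happen to offset here; in practice triangulation helps
# most when the sources' errors are independent of one another.
instruments = [(-2.0, 4.0), (1.5, 3.0), (0.5, 5.0)]

single = [measure(*instruments[0]) for _ in range(1000)]
combined = [statistics.mean(measure(b, s) for b, s in instruments)
            for _ in range(1000)]

err_single = statistics.mean(abs(x - TRUTH) for x in single)
err_combined = statistics.mean(abs(x - TRUTH) for x in combined)
print(f"mean error, one source:    {err_single:.2f}")
print(f"mean error, triangulated:  {err_combined:.2f}")
```

The triangulated estimates land consistently closer to the truth than any single errorful source, which is the post-positivist argument for multiple measures in miniature.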
Clearly, all of this stuff is not for the faint-of-heart. I've seen many a graduate student get lost
in the maze of philosophical assumptions that contemporary philosophers of science argue
about. And don't think that I believe this is not important stuff. But, in the end, I tend to turn
pragmatist on these matters. Philosophers have been debating these issues for thousands of
years and there is every reason to believe that they will continue to debate them for thousands
of years more. Those of us who are practicing scientists should check in on this debate from
time to time (perhaps every hundred years or so would be about right). We should think about
the assumptions we make about the world when we conduct research. But in the meantime,
we can't wait for the philosophers to settle the matter. After all, we do have our own work to
do!
EVALUATION RESEARCH AND STRATEGIES
One specific form of social research -- evaluation research -- is of particular interest here. The
Introduction to Evaluation Research presents an overview of what evaluation is and how it
differs from social research generally. We also introduce several evaluation models to give
you some perspective on the evaluation endeavor. Evaluation should not be considered in a
vacuum. Here, we consider evaluation as embedded within a larger Planning-Evaluation Cycle.

Evaluation can be a threatening activity. Many groups and organizations struggle with how to
build a good evaluation capability into their everyday activities and procedures. This is
essentially an organizational culture issue. Here we consider some of the issues a group or
organization needs to address in order to develop an evaluation culture that works in their
context.
Evaluation is a methodological area that is closely related to, but distinguishable from, more
traditional social research. Evaluation utilizes many of the same methodologies used in
traditional social research, but because evaluation takes place within a political and
organizational context, it requires group skills, management ability, political dexterity,
sensitivity to multiple stakeholders and other skills that social research in general does not
rely on as much. Here we introduce the idea of evaluation and some of the major terms and
issues in the field.
'Evaluation strategies' means broad, overarching perspectives on evaluation. They encompass
the most general groups or "camps" of evaluators; although, at its best, evaluation work
borrows eclectically from the perspectives of all these camps. Four major groups of
evaluation strategies are discussed here.
Scientific-experimental models are probably the most historically dominant evaluation
strategies. Taking their values and methods from the sciences -- especially the social sciences
-- they emphasize the desirability of impartiality, accuracy, objectivity, and the validity of
the information generated. Included under scientific-experimental models would be: the
tradition of experimental and quasi-experimental designs; objectives-based research that
comes from education; econometrically-oriented perspectives including cost-effectiveness
and cost-benefit analysis; and the recent articulation of theory-driven evaluation.
The second class of strategies are management-oriented systems models. Two of the most
common of these are PERT, the Program Evaluation and Review Technique, and CPM, the
Critical Path Method. Both have been widely used in business and government in this
country. It would also be legitimate to include the Logical Framework or "Logframe" model
developed at the U.S. Agency for International Development and general systems theory and
operations research approaches in this category. Two management-oriented systems models
were originated by evaluators: the UTOS model, where U stands for Units, T for Treatments,
O for Observing Observations, and S for Settings; and the CIPP model, where the C stands for
Context, the I for Input, the first P for Process, and the second P for Product. These
management-oriented systems models emphasize comprehensiveness in evaluation, placing
evaluation within a larger framework of organizational activities.
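The Critical Path Method mentioned above boils down to finding the longest chain of dependent tasks, which sets the minimum project length. A minimal sketch, using an invented project plan (the task names, durations, and dependencies are hypothetical):

```python
from functools import lru_cache

# Invented project plan: task -> (duration in days, prerequisite tasks).
tasks = {
    "design":   (3, []),
    "build":    (5, ["design"]),
    "test":     (2, ["build"]),
    "document": (4, ["design"]),
    "release":  (1, ["test", "document"]),
}

@lru_cache(maxsize=None)
def earliest_finish(name):
    """Earliest finish = own duration + latest earliest-finish of prerequisites."""
    duration, prereqs = tasks[name]
    return duration + max((earliest_finish(p) for p in prereqs), default=0)

project_length = max(earliest_finish(t) for t in tasks)

def critical_path():
    """Walk back from the end, following the chain of tasks with zero slack."""
    path, time = [], project_length
    current = next(t for t in tasks if earliest_finish(t) == time)
    while current is not None:
        path.append(current)
        duration, prereqs = tasks[current]
        time -= duration
        current = next((p for p in prereqs if earliest_finish(p) == time), None)
    return list(reversed(path))

print("minimum project length:", project_length, "days")
print("critical path:", " -> ".join(critical_path()))
```

Here "document" has slack (it can slip a day without delaying "release"), while every task on the design-build-test-release chain is critical; delaying any of them delays the whole project.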
The third class of strategies are the qualitative/anthropological models. They emphasize the
importance of observation, the need to retain the phenomenological quality of the evaluation
context, and the value of subjective human interpretation in the evaluation process. Included
in this category are the approaches known in evaluation as naturalistic or 'Fourth Generation'
evaluation; the various qualitative schools; critical theory and art criticism approaches; and,
the 'grounded theory' approach of Glaser and Strauss among others.
Finally, a fourth class of strategies is termed participant-oriented models. As the term
suggests, they emphasize the central importance of the evaluation participants, especially
clients and users of the program or technology. Client-centered and stakeholder approaches
are examples of participant-oriented models, as are consumer-oriented evaluation systems.
With all of these strategies to choose from, how to decide? Debates that rage within the
evaluation profession -- and they do rage -- are generally battles between these different
strategists, with each claiming the superiority of their position. In reality, most good
evaluators are familiar with all four categories and borrow from each as the need arises.
There is no inherent incompatibility between these broad strategies -- each of them brings
something valuable to the evaluation table. In fact, in recent years attention has increasingly
turned to how one might integrate results from evaluations that use different strategies,
carried out from different perspectives, and using different methods. Clearly, there are no
simple answers here. The problems are complex and the methodologies needed will and
should be varied.
Types of Evaluation
There are many different types of evaluations depending on the object being evaluated and
the purpose of the evaluation. Perhaps the most important basic distinction in evaluation
types is that between formative and summative evaluation. Formative evaluations strengthen
or improve the object being evaluated -- they help form it by examining the delivery of the
program or technology, the quality of its implementation, and the assessment of the
organizational context, personnel, procedures, inputs, and so on. Summative evaluations, in
contrast, examine the effects or outcomes of some object -- they summarize it by describing
what happens subsequent to delivery of the program or technology; assessing whether the
object can be said to have caused the outcome; determining the overall impact of the causal
factor beyond only the immediate target outcomes; and, estimating the relative costs
associated with the object.
Formative evaluation includes several evaluation types:
- needs assessment determines who needs the program, how great the need is, and what
might work to meet the need
- evaluability assessment determines whether an evaluation is feasible and how
stakeholders can help shape its usefulness
- structured conceptualization helps stakeholders define the program or technology, the
target population, and the possible outcomes
- implementation evaluation monitors the fidelity of the program or technology delivery
- process evaluation investigates the process of delivering the program or technology,
including alternative delivery procedures
Summative evaluation can also be subdivided:
- outcome evaluations investigate whether the program or technology caused
demonstrable effects on specifically defined target outcomes
- impact evaluation is broader and assesses the overall or net effects -- intended or
unintended -- of the program or technology as a whole
- cost-effectiveness and cost-benefit analysis address questions of efficiency by
standardizing outcomes in terms of their dollar costs and values
- secondary analysis re-examines existing data to address new questions or use methods
not previously employed
- meta-analysis integrates the outcome estimates from multiple studies to arrive at an
overall or summary judgement on an evaluation question
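The meta-analysis item above has a simple fixed-effect form: weight each study's effect estimate by the inverse of its variance, so more precise studies count for more in the summary judgement. The effect sizes and standard errors below are invented for illustration:

```python
# Hypothetical effect estimates from three studies of the same program:
# (effect size, standard error). Values are invented.
studies = [(0.40, 0.10), (0.25, 0.05), (0.60, 0.20)]

# Fixed-effect meta-analysis: inverse-variance weighting.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```

Note how the pooled estimate sits closest to the most precise study (the one with SE 0.05), and how the pooled standard error is smaller than any single study's; real meta-analyses would also test for heterogeneity before accepting a fixed-effect summary.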
Evaluation Questions and Methods
Evaluators ask many different kinds of questions and use a variety of methods to address
them. These are considered within the framework of formative and summative evaluation as
presented above.
In formative research the major questions and methodologies are:
What is the definition and scope of the problem or issue, or what's the question?
Formulating and conceptualizing methods might be used, including brainstorming, focus
groups, nominal group techniques, Delphi methods, brainwriting, stakeholder analysis,
synectics, lateral thinking, input-output analysis, and concept mapping.
Where is the problem and how big or serious is it?
The most common method used here is "needs assessment," which can include: analysis of
existing data sources, and the use of sample surveys, interviews of constituent populations,
qualitative research, expert testimony, and focus groups.
How should the program or technology be delivered to address the problem?
Some of the methods already listed apply here, as do detailing methodologies like simulation
techniques, or multivariate methods like multiattribute utility theory or exploratory causal
modeling; decision-making methods; and project planning and implementation methods like
flow charting, PERT/CPM, and project scheduling.
How well is the program or technology delivered?
Qualitative and quantitative monitoring techniques, the use of management information
systems, and implementation assessment would be appropriate methodologies here.
The questions and methods addressed under summative evaluation include:
What type of evaluation is feasible?
Evaluability assessment can be used here, as well as standard approaches for selecting an
appropriate evaluation design.
What was the effectiveness of the program or technology?
One would choose from observational and correlational methods for demonstrating whether
desired effects occurred, and quasi-experimental and experimental designs for determining
whether observed effects can reasonably be attributed to the intervention and not to other
sources.
What is the net impact of the program?
Econometric methods for assessing cost-effectiveness and cost/benefit would apply here,
along with qualitative methods that enable us to summarize the full range of intended and
unintended impacts.
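As a toy illustration of the cost-effectiveness idea: the figures below are invented, and real econometric analysis involves far more (discounting, sensitivity analysis, distributional effects), but the basic comparison is cost per unit of outcome achieved.

```python
# Toy cost-effectiveness comparison between two hypothetical programs.
def cost_effectiveness_ratio(total_cost, units_of_outcome):
    """Cost per unit of outcome achieved (lower is better)."""
    return total_cost / units_of_outcome

program_a = cost_effectiveness_ratio(50_000, 200)  # $250 per outcome unit
program_b = cost_effectiveness_ratio(80_000, 400)  # $200 per outcome unit

# Program B costs more in total but buys each outcome more cheaply.
better = "A" if program_a < program_b else "B"
print(better)  # B
```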
Clearly, this introduction is not meant to be exhaustive. Each of these methods, and the many
not mentioned, is supported by an extensive methodological research literature. This is a
formidable set of tools. But the need to improve, update, and adapt these methods to changing
circumstances means that methodological research and development needs to have a major
place in evaluation work.
The Planning-Evaluation Cycle
Often, evaluation is construed as part of a larger managerial or administrative process.
Sometimes this is referred to as the planning-evaluation cycle. The distinctions between
planning and evaluation are not always clear; this cycle is described in many different ways,
with various phases claimed by both planners and evaluators. Usually, the first stage of such a
cycle -- the planning phase -- is designed to elaborate a set of potential actions, programs, or
technologies, and select the best for implementation. Depending on the organization and the
problem being addressed, a planning process could involve any or all of these stages: the
formulation of the problem, issue, or concern; the broad conceptualization of the major
alternatives that might be considered; the detailing of these alternatives and their potential
implications; the evaluation of the alternatives and the selection of the best one; and the
implementation of the selected alternative. Although these stages are traditionally considered
planning, there is a lot of evaluation work involved. Evaluators are trained in needs
assessment; they use methodologies -- like the concept mapping one presented later -- that
help in conceptualization and detailing; and they have the skills to help assess the
alternatives and select the best one.
The evaluation phase also involves a sequence of stages that typically includes: the
formulation of the major objectives, goals, and hypotheses of the program or technology; the
conceptualization and operationalization of the major components of the evaluation -- the
program, participants, setting, and measures; the design of the evaluation, detailing how these
components will be coordinated; the analysis of the information, both qualitative and
quantitative; and the utilization of the evaluation results.
Sampling
Sampling is the process of selecting units (e.g., people, organizations) from a population of
interest so that by studying the sample we may fairly generalize our results back to the
population from which they were chosen. Let's begin by covering some of the key terms in
sampling, like "population" and "sampling frame." Then, because some types of sampling rely
upon quantitative models, we'll talk about some of the statistical terms used in sampling.
Finally, we'll discuss the major distinction between probability and nonprobability sampling
methods and work through the major types of each.
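As a minimal sketch of probability sampling, here is simple random sampling from a sampling frame in Python (population and sample sizes are invented; note the frame rarely covers the whole population of interest, which is one source of coverage error):

```python
# Simple random sampling: every unit in the sampling frame has an equal
# chance of selection, and units are drawn without replacement.
import random

population = [f"org_{i}" for i in range(1000)]  # the population of interest
sampling_frame = population[:950]               # the list we can actually draw from
random.seed(42)                                 # for reproducibility

sample = random.sample(sampling_frame, k=50)

print(len(sample))       # 50
print(len(set(sample)))  # 50 -- no unit appears twice
```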
Measurement
Measurement is the process of observing and recording the observations that are collected as
part of a research effort. There are two major issues that will be considered here.
First, you have to understand the fundamental ideas involved in measuring. Here we consider
two major measurement concepts. In Levels of Measurement, I explain the meaning of the
four major levels of measurement: nominal, ordinal, interval, and ratio. Then we move on to
the reliability of measurement, including consideration of true score theory and a variety of
reliability estimators.
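One practical consequence of the four levels is which summary statistics are meaningful at each. A rough sketch of that conventional hierarchy (the mapping below is my own summary, not drawn from the text):

```python
# Each level of measurement permits the statistics of the levels below it,
# plus some of its own. This is the conventional hierarchy; some statistics
# texts debate the edge cases.
PERMISSIBLE_STATS = {
    "nominal":  {"mode", "frequency counts"},
    "ordinal":  {"mode", "frequency counts", "median", "percentiles"},
    "interval": {"mode", "frequency counts", "median", "percentiles",
                 "mean", "standard deviation"},
    "ratio":    {"mode", "frequency counts", "median", "percentiles",
                 "mean", "standard deviation", "ratios"},
}

def can_use(level, statistic):
    """True if the statistic is conventionally meaningful at this level."""
    return statistic in PERMISSIBLE_STATS[level]

print(can_use("ordinal", "median"))  # True
print(can_use("nominal", "mean"))    # False
```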
Second, you have to understand the different types of measures that you might use in social
research. We consider four broad categories of measurement. Survey research includes the
design and implementation of interviews and questionnaires. Scaling involves consideration of
the major methods of developing and implementing a scale. Qualitative research provides an
overview of the broad range of non-numerical measurement approaches. And unobtrusive
measures presents a variety of measurement methods that don't intrude on or interfere with
the context of the research.
Design
Research design provides the glue that holds the research project together. A design is used to
structure the research, to show how all of the major parts of the research project -- the
samples or groups, measures, treatments or programs, and methods of assignment -- work
together to try to address the central research questions. Here, after a brief introduction to
research design, I'll show you how we classify the major types of designs. You'll see that a
major distinction is between the experimental designs that use random assignment to groups
or programs and the quasi-experimental designs that don't use random assignment. [People
often confuse what is meant by random selection with the idea of random assignment. You
should make sure that you understand the distinction between random selection and random
assignment.] Understanding the relationships among designs is important in making design
choices and thinking about the strengths and weaknesses of different designs. Then, I'll talk
about the heart of the art form -- designing designs for research -- and give you some ideas
about how you can think about the design task. Finally, I'll consider some of the more recent
advances in quasi-experimental thinking -- an area of special importance in applied social
research and program evaluation.
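The bracketed caution above -- random selection versus random assignment -- can be made concrete in a small sketch (population and group sizes are invented): selection determines who is in the study at all; assignment determines which group each selected unit joins.

```python
# Random selection vs. random assignment.
import random

random.seed(7)
population = list(range(1000))

# Random selection: draw the study sample from the population.
# This supports generalizing results back to the population.
selected = random.sample(population, k=40)

# Random assignment: shuffle the selected units and split them into groups.
# This supports attributing group differences to the treatment.
random.shuffle(selected)
treatment, control = selected[:20], selected[20:]

print(len(treatment), len(control))        # 20 20
print(set(treatment).isdisjoint(control))  # True -- no unit is in both groups
```

A study can have either, both, or neither: a true experiment requires random assignment, even if its participants were not randomly selected.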
Concept Mapping
Social scientists have developed a number of methods and processes that might be useful in
helping you to formulate a research project. I would include among these at least the
following: brainstorming, brainwriting, nominal group techniques, focus groups, affinity
mapping, Delphi techniques, facet theory, and qualitative text analysis. Here, I'll show you a
method that I have developed, called concept mapping, which is especially useful for
research problem formulation.
Concept mapping is a general method that can be used to help any individual or group to
describe their ideas about some topic in pictorial form. There are several different types of
methods that all currently go by names like "concept mapping," "mental mapping," or
"concept webbing." All of them are similar in that they result in a picture of someone's ideas.
But the kind of concept mapping I want to describe here is different in a number of important
ways. First, it is primarily a group process, and so it is especially well suited for situations
where teams or groups of stakeholders have to work together; the other methods work
primarily with individuals. Second, it uses a very structured, facilitated approach: there are
specific steps that a trained facilitator follows in helping a group to articulate its ideas
and understand them more clearly. Third, the core of concept mapping consists of several
state-of-the-art multivariate statistical methods that analyze the input from all of the
individuals and yield an aggregate group product. And fourth, the method requires the use of
specialized computer programs that can handle the data from this type of process and
accomplish the correct analysis and mapping procedures.
Although concept mapping is a general method, it is particularly useful for helping social
researchers and research teams develop and detail ideas for research. And it is especially
valuable when researchers want to involve relevant stakeholder groups in the act of creating
the research project. Although concept mapping is used for many purposes -- strategic
planning, product development, market analysis, decision making, measurement development
-- we concentrate here on its potential for helping researchers formulate their projects.
So what is concept mapping? Essentially, concept mapping is a structured process, focused
on a topic or construct of interest, involving input from one or more participants, that
produces an interpretable pictorial view (concept map) of their ideas and concepts and how
these are interrelated. Concept mapping helps people to think more effectively as a group
without losing their individuality. It helps groups to manage the complexity of their ideas
without trivializing them or losing detail.
A concept mapping process involves six steps that can take place in a single day or can be
spread out over weeks or months depending on the situation. The first step is the Preparation
Step. Three things are done here. First, the facilitator of the mapping process works with the
initiator(s) (i.e., whoever requests the process initially) to identify who the participants will
be. A mapping process can have hundreds or even thousands of stakeholders participating,
although we usually have a relatively small group of between 10 and 20 stakeholders
involved. Second, the initiator works with the stakeholders to develop the focus for the
project. For instance, the group might decide to focus on defining a program or treatment. Or,
they might choose to map all of the outcomes they might expect to see as a result. Finally, the
group decides on an appropriate schedule for the mapping. In the Generation Step the
stakeholders develop a large set of statements that address the focus. For instance, they might
generate statements that describe all of the specific activities that will constitute a specific
social program. Or, they might generate statements describing specific outcomes that might
occur as a result of participating in a program. A wide variety of methods can be used to
accomplish this, including traditional brainstorming, brainwriting, nominal group techniques,
focus groups, qualitative text analysis, and so on. The group can generate up to 200
statements in a concept mapping project. In the Structuring Step the participants do two
things. First, each participant sorts the statements into piles of similar ones. Most times they
do this by sorting a deck of cards that has one statement on each card. B ut they can also do
this directly on a computer by dragging the statements into piles that they create. They can
have as few or as many piles as they want. Each participant names each pile with a short
descriptive label. Second, each participant rates each of the statements on some scale. Usually
the statements are rated on a 1-to-5 scale for their relative importance, where a 1 means the
statement is relatively unimportant compared to all the rest, a 3 means that it is moderately
important, and a 5 means that it is extremely important. The Representation Step is where the
analysis is done -- this is the process of taking the sort and rating input and "representing" it
in map form. There are two major statistical analyses that are used. The first --
multidimensional scaling -- takes the sort data across all participants and develops the basic
map where each statement is a point on the map and statements that were piled together by
more people are closer to each other on the map. The second analysis -- cluster analysis --
takes the output of the multidimensional scaling (the point map) and partitions the map into
groups of statements or ideas, into clusters. If the statements describe activities of a program,
the clusters show how these can be grouped into logical groups of activities. If the statements
are specific outcomes, the clusters might be viewed as outcome constructs or concepts. In the
fifth step -- the Interpretation Step -- the facilitator works with the stakeholder group to help
them develop their own labels and interpretations for the various maps. Finally, the
Utilization Step involves using the maps to help address the original focus. On the program
side, the maps can be used as a visual framework for operationalizing the program. On the
outcome side, they can be used as the basis for developing measures and displaying results.
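The sort data feeding the Representation Step can be summarized as a statement-by-statement co-occurrence matrix: how many participants placed each pair of statements in the same pile. This is the raw similarity input that multidimensional scaling and cluster analysis then operate on. A minimal sketch of that first step (the actual concept mapping software does far more):

```python
# Build the co-occurrence similarity matrix from participants' card sorts.
from itertools import combinations

def cooccurrence_matrix(sorts, n_statements):
    """sorts: one sorting per participant; each sorting is a list of piles,
    and each pile is a list of statement indices. Returns a symmetric matrix
    where entry [i][j] counts participants who piled i and j together."""
    m = [[0] * n_statements for _ in range(n_statements)]
    for piles in sorts:
        for pile in piles:
            for i, j in combinations(pile, 2):
                m[i][j] += 1
                m[j][i] += 1
    return m

# Two participants sort four statements into piles.
sorts = [
    [[0, 1], [2, 3]],   # participant A
    [[0, 1, 2], [3]],   # participant B
]
matrix = cooccurrence_matrix(sorts, 4)
print(matrix[0][1])  # statements 0 and 1 piled together by both participants -> 2
```

Multidimensional scaling would place high-co-occurrence statements close together on the point map, and cluster analysis would then partition those points into the clusters described above.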
This is only a very basic introduction to concept mapping and its uses. If you want to find out
more about this method, you might look at some of the articles I've written about concept
mapping, including An Introduction to Concept Mapping; Concept Mapping: Soft Science or
Hard Art?; or the article entitled Using Concept Mapping to Develop a Conceptual
Framework of Staff's Views of a Supported Employment Program for Persons with Severe
Mental Illness.