
Prepare your classical conditioning projects to turn in.

Write down your weekly reading assignment: Read and take notes on pages 334–343.

AP Psychology
Ms. Desgrosellier

3.2.2010

Objective: SWBAT identify the two major characteristics that distinguish classical conditioning from operant conditioning.

acquisition

extinction

spontaneous recovery

generalization

discrimination

Classical Conditioning:

- forms associations between stimuli (a CS and the US it signals)

- involves respondent behavior

Operant Conditioning:

- involves operant behavior

associative learning: learning that certain events occur together (two stimuli in classical conditioning, or a response and its consequences in operant conditioning).

operant conditioning: a type of learning in which behavior is strengthened if followed by a reinforcer or diminished if followed by a punisher.

respondent behavior: behavior that occurs as an automatic response to some stimulus; Skinner's term for behavior learned through classical conditioning.

operant behavior: behavior that operates on the environment, producing rewarding or punishing consequences.

We can tell the difference between classical and operant conditioning by asking: Is the organism learning associations between events that it doesn’t control (classical conditioning)?

Or is it learning associations between its behavior and resulting events (operant conditioning)?

See table 22.1 for more information.

Objective: SWBAT state Thorndike’s law of effect, and explain its connection to Skinner’s research on operant conditioning.

B.F. Skinner (1904 – 1990) is one of behaviorism's most influential and controversial figures. His work elaborated on Edward L. Thorndike's (1874 – 1949) observation of a simple fact of life:

law of effect: Thorndike’s principle that behaviors followed by favorable consequences become more likely, and that behaviors followed by unfavorable consequences become less likely.

Skinner designed his experiments using the Skinner box, or an operant chamber: a chamber containing a bar or key that an animal can manipulate to obtain a food or water reinforcer, with attached devices to record the animal's rate of bar pressing or key pecking.

His experiments have explored the exact conditions that foster efficient and enduring learning.

He would put animals in the box, first rats and later pigeons.

They would move around the box until they accidentally pushed the button that released the food.

Eventually, with constant food rewards, they would learn that pushing the button caused food to appear.

They would then push the button intentionally to receive their reward.
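The following sketch is not from the notes; it is a minimal, hypothetical Python model of that Skinner-box loop, in which a simulated animal picks actions at random and its tendency to press grows each time the press is followed by food (the action names, weights, and reward size are invented for illustration).

```python
import random

# Toy model of the Skinner box described above (illustration only;
# the action names, weights, and reward size are invented).
actions = ["sniff corner", "groom", "press button"]
press_weight = 1.0   # starting tendency to press, equal to the other actions
other_weight = 1.0

for _ in range(200):
    weights = [other_weight, other_weight, press_weight]
    action = random.choices(actions, weights=weights)[0]
    if action == "press button":
        # Food follows the press, strengthening the behavior it follows.
        press_weight += 0.5

print(f"Tendency to press after 200 trials: {press_weight:.1f}")
```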

Objective: SWBAT describe the shaping procedure, and explain how it can increase our understanding of what nonverbal animals and babies can discriminate.

Shaping: an operant conditioning procedure in which reinforcers guide behavior toward closer and closer versions of the desired behavior. For example, if you wanted to shape a rat to press a bar, you would first observe its natural behaviors and then build on them.

You might give the rat a food reward every time it moves toward the bar.

Then, when the rat was doing this regularly, you would reward it only when it got closer to the bar.

Finally, you would require it to touch the bar to get the food.

This method of rewarding responses that are ever closer to the final desired behavior, while ignoring all other responses, is called successive approximations.
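As an illustration only (assuming a made-up measure of the rat's distance from the bar), here is a Python sketch of successive approximations: the criterion for earning food tightens stage by stage, and responses that miss the current criterion are simply ignored.

```python
import random

# Hypothetical shaping procedure: reward successively closer approximations
# to bar pressing. The distances and criteria are invented for illustration.
stages = [
    ("moves toward the bar", lambda d: d < 50),
    ("gets close to the bar", lambda d: d < 10),
    ("touches the bar", lambda d: d == 0),
]

for label, meets_criterion in stages:
    rewards = 0
    while rewards < 5:                      # require 5 rewarded trials per stage
        distance = random.randint(0, 60)    # simulated distance from the bar
        if meets_criterion(distance):
            rewards += 1                    # food only for this stage's criterion
        # every other response is ignored (no reward, no punishment)
    print(f"Stage complete: the rat reliably {label}")
```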

By shaping non-verbal organisms to discriminate between stimuli, psychologists can also determine what they perceive.

If we shape them to respond to one stimulus and not another, then obviously they can perceive the difference.

discriminative stimulus: a stimulus that signals when a response will be reinforced.

Objective: SWBAT compare positive and negative reinforcement, and give one example each of a primary reinforcer, a conditioned reinforcer, an immediate reinforcer, and a delayed reinforcer.

Reinforcer: in operant conditioning, any event that strengthens the behavior it follows. This is not just rewards! If yelling at someone increases a behavior (as in the military), that yelling is still a reinforcer.

positive reinforcement: increasing behaviors by presenting positive stimuli, such as food. A positive reinforcer is any stimulus that, when presented after a response, strengthens the response.

negative reinforcement: increasing behaviors by stopping or reducing negative stimuli, such as shock. A negative reinforcer is any stimulus that, when removed after a response, strengthens the response. (Note: negative reinforcement is not punishment!)

Giving out candy when someone participates in class? Positive reinforcement.

The seatbelt buzzer in your car stopping when you buckle up? Negative reinforcement.

DO NOW: What is the difference between positive reinforcement and negative reinforcement?

Give one example of each.

Primary reinforcer: an innately reinforcing stimulus, such as one that satisfies a biological need, e.g. getting food when you're hungry or stopping an electric shock.

Conditioned reinforcer: a stimulus that gains its reinforcing power through its association with a primary reinforcer; also known as a secondary reinforcer. E.g., a rat in a Skinner box learns that a light signals that food is coming, so the rat works to turn on the light; the light has become a conditioned reinforcer.

Immediate reinforcers are given directly after a desired action.

Delayed reinforcers are given only some time after the desired action.

For rats in a Skinner box, delayed reinforcers do not help the animal learn to press the bar.

Humans can respond to delayed reinforcers, like a paycheck at the end of the work week or a good grade at the end of the semester.

People need to learn to delay gratification to receive greater long-term rewards.

Objective: SWBAT discuss the strengths and weaknesses of continuous and partial (intermittent) reinforcement schedules, and identify four schedules of partial reinforcement.

continuous reinforcement: reinforcing the desired response every time it occurs. Learning occurs rapidly, but so does extinction: when the reinforcement stops, the desired behavior stops.

Continuous reinforcement is also unlike real life, where responses are rarely rewarded every time.

partial (intermittent) reinforcement: reinforcing a response only part of the time. Results in slower acquisition of a response but much greater resistance to extinction than continuous reinforcement.

fixed-ratio schedule: in operant conditioning, a reinforcement schedule that reinforces a response only after a specified number of responses. With this schedule, an animal will pause only briefly after a reinforcer and will then return to a high rate of responding.

variable-ratio schedule: in operant conditioning, a reinforcement schedule that reinforces a response after an unpredictable number of responses. Produces high rates of responding because reinforcers increase as the number of responses increases.

fixed-interval schedule: in operant conditioning, a reinforcement schedule that reinforces a response only after a specified time has elapsed. Produces a choppy stop-start pattern rather than a steady response rate.

variable-interval schedule: in operant conditioning, a reinforcement schedule that reinforces a response at unpredictable time intervals. Produces slow, steady responding.

A reward every 30 times a rat presses the button? Fixed-ratio schedule.

Checking e-mail repeatedly to get the reward of a new message? Variable-interval schedule.

People who play slot machines in hopes of winning the jackpot? Variable-ratio schedule.

People checking for the mail as delivery time gets closer? Fixed-interval schedule.
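The sketch below is not from the notes; it expresses the four partial schedules as simple decision rules for when a response earns a reinforcer, with arbitrary ratio (30 responses) and interval (60 seconds) values chosen only for illustration.

```python
import random

# Illustrative-only decision rules for the four partial schedules;
# the ratio (30) and interval (60 s) values are arbitrary.

def fixed_ratio(response_count, ratio=30):
    # Reinforce only after a fixed number of responses (e.g., every 30th press).
    return response_count > 0 and response_count % ratio == 0

def variable_ratio(mean_ratio=30):
    # Reinforce after an unpredictable number of responses (slot-machine style).
    return random.random() < 1 / mean_ratio

def fixed_interval(seconds_since_last_reward, interval=60):
    # Reinforce the first response after a fixed time has elapsed.
    return seconds_since_last_reward >= interval

def variable_interval(seconds_since_last_reward, mean_interval=60):
    # Reinforce the first response after an unpredictable time has elapsed.
    return seconds_since_last_reward >= random.expovariate(1 / mean_interval)
```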

Objective: SWBAT discuss the ways negative punishment, positive punishment, and negative reinforcement differ, and list some drawbacks of punishment as a behavior-control technique.

Punishment

punishment: an event that decreases the behavior that it follows. A punisher decreases the frequency of a preceding behavior, usually by administering an undesirable consequence or withdrawing a desirable one.

Positive punishment: giving an undesirable consequence after a behavior to decrease the likelihood of repeating the behavior. Examples: a speeding ticket, a spanking.

In YOUR OWN WORDS, define punishment.

Negative punishment: taking away something desirable after a behavior to decrease the likelihood of repeating the behavior. Examples: a time-out from play, losing driving privileges.
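To keep the four consequence types straight, here is a small, hypothetical lookup sketch (not from the textbook): classify a consequence by whether a stimulus is added or removed, and whether the behavior it follows increases or decreases.

```python
# Hypothetical helper for keeping the four consequence types straight.
# stimulus_change: "added" or "removed"; behavior_change: "increases" or "decreases".
CONSEQUENCES = {
    ("added",   "increases"): "positive reinforcement (e.g., candy for participating)",
    ("removed", "increases"): "negative reinforcement (e.g., seatbelt buzzer stops)",
    ("added",   "decreases"): "positive punishment (undesirable consequence given)",
    ("removed", "decreases"): "negative punishment (something desirable taken away)",
}

def classify(stimulus_change, behavior_change):
    return CONSEQUENCES[(stimulus_change, behavior_change)]

print(classify("removed", "increases"))   # -> negative reinforcement (...)
```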

If punishment is avoidable, the punished behavior may reappear in safe settings; i.e., the child may learn discrimination: it's okay to swear when your parents aren't around.

Physical punishment may increase aggressiveness.

Punishment may create a sense of fear, and if it's unpredictable and inescapable, animals and people may develop feelings of helplessness and depression.

Even though punishment suppresses unwanted behavior, it often does not guide one toward more desirable behavior. Punishment tells you what not to do, while reinforcement tells you what to do.

Punishment combined with reinforcement is better than punishment alone.

Punishment, Skinner argued, often simply teaches how to avoid it.

Most psychologists favor reinforcement – notice someone doing something right and affirm them for it.

Objective: SWBAT explain how latent learning and the effect of external rewards demonstrate that cognitive processing is an important part of learning.

Cognition and Operant Conditioning

Skinner (and behaviorism) resisted the belief that cognitive processes – thoughts, perceptions, expectations – have a necessary place in psychology and conditioning.

Research has shown hints of cognitive processes at work in operant learning. E.g., animals on fixed-interval reinforcement schedules respond more and more frequently as the time approaches when a response will produce a reinforcer.

cognitive map: a mental representation of the layout of one's environment. For example, after exploring a maze, rats act as if they have learned a cognitive map of it.

This is seen in rats in a maze with no obvious rewards (i.e. no reinforcement).

Latent Learning

latent learning: learning that occurs but is not apparent until there is an incentive to demonstrate it.

There is more to learning than associating a response with a consequence.

Unnecessary rewards sometimes carry hidden costs.

Sometimes rewards given for tasks people already find interesting can lower their natural interest in the activity.

Intrinsic Motivation

Intrinsic motivation: a desire to perform a behavior for its own sake. Excessive rewards can undermine intrinsic motivation.

Intrinsically motivated people work and play in search of enjoyment, interest, self-expression, or challenge.

Extrinsic motivation: a desire to perform a behavior due to promised rewards or threats of punishment.

Objective: SWBAT explain how biological predispositions place limits on what can be achieved with operant conditioning.

Biological Predispositions

An animal's predispositions constrain its capacity for operant conditioning. E.g., it's easy to teach a hamster to associate food with standing and digging, because these are part of its natural food-seeking behaviors.

It's much more difficult to get a hamster to associate food with face washing, because face washing is not normally associated with food or hunger.

Bottom line: biological constraints predispose organisms to learn associations that are naturally adaptive.

instinctive drift: the tendency of learned behavior to gradually revert to biologically predisposed patterns; such "misbehaviors" occurred as trained animals reverted to these natural patterns.

Objective: SWBAT describe the controversy over Skinner’s views of human behavior.

SKINNER’S LEGACY

Skinner believed that only external influences shaped behavior and urged the use of operant principles.

Critics said this view is dehumanizing, neglecting people's personal freedom and seeking to control their actions.

Skinner said we could use external consequences to better humans and society.

Objective: SWBAT identify the major similarities and differences between classical and operant conditioning.

Contrasting Classical and Operant Conditioning

Classical conditioning involves an organism associating different stimuli that it does not control and responds automatically to (respondent behaviors).

Operant conditioning involves an organism associating its operant behaviors (those that act on its environment to produce rewarding or punishing stimuli) with their consequences.

Cognitive processes and biological predispositions influence both kinds of conditioning.

Both involve acquisition, extinction, spontaneous recovery, generalization, and discrimination.

See table 22.4 for more information.
