
    RESCORLA-WAGNER MODEL

    What are some characteristics of a good model?

1. Variables well described and manipulable.

2. Accounts for known results and predicts non-trivial results of new experiments.

3. Dependent variable(s) predicted in at least relative magnitude and direction.

    4. Parsimonious (i.e., minimum assumptions for maximum effectiveness).

    The principal theoretical variable in the model is called associative strength, usually

denoted by the capital letter V. This variable is not directly observable or measurable but

    is presumably related to a measured variable, for example, the magnitude of a conditional

    response after some number of conditioning trials. This is a common feature of models,

especially mathematical models in psychology, where the variables and other parameters entering into the model represent some hypothetical or intervening constructs putatively

    related to specific observed and measured events. Moreover, almost all models in

    psychology are only relatively predictive, that is, they might be able to predict that an

    outcome of one experiment will be bigger or smaller than that of another experiment, but

not how big or small or what the actual obtained values would be. Models that can make

precise quantitative predictions could be called parameter predictive. Some models in

    physics, for example, can predict experimental values to better than 15 decimal places!

    The Rescorla-Wagner model of Pavlovian conditioning is, at best, only relatively

    predictive. Yet, it has been remarkably successful in its domain in spite of (or because

of) the fact that it is probably the simplest plausible model of conditioning.

    Assumptions of the Rescorla-Wagner model:

1. When a CS is presented, its associative strength, V_CS, may increase (CS+), decrease

    (CS-), or remain unchanged.

2. The asymptotic strength (λ) of association depends on the magnitude (I) of the UCS: λ = f(I_UCS).

3. A given UCS can support only a certain level of associative strength, λ.

4. In a stimulus compound, the total associative strength is the algebraic sum of the associative strengths of the components. [ex. T: tone, L: light; V_{T+L} = V_T + V_L]

5. The change in associative strength, ΔV, on any trial is proportional to the difference between the present associative strength, V_CS, and the asymptotic associative strength, λ.


    The last assumption yields the basic model:

ΔV_n = K(λ - V_{n-1}),   where V_{n-1} = Σ ΔV_i summed over i = 1, …, n - 1, and 0 < K ≤ 1.

Assume that K = 0.2, λ = 100, and on trial 1 V_sum = 0. How will conditioning proceed?

Trial 1: ΔV_1 = 0.2(100 - 0) = 20


Trial 2: ΔV_2 = 0.2(100 - 20) = 16
Trial 3: ΔV_3 = 0.2(100 - 36) = 12.8
Trial 4: ΔV_4 = 0.2(100 - 48.8) = 10.24, etc.

Just after Trial 4, V_sum = 59.04, a bit more than half-way up to the asymptote of 100.

What would V_10 (i.e., V_sum on the 10th trial) be? Is there a short-cut way to find out?
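One brute-force answer is simply to iterate the update rule. Here is a minimal Python sketch (the variable names are mine; K = 0.2 and λ = 100 as assumed above):

```python
# Iterate the basic model dV = K * (lam - V) for ten acquisition trials.
K, lam = 0.2, 100.0
V = 0.0  # V_sum at the start of trial 1
for trial in range(1, 11):
    dV = K * (lam - V)  # Assumption 5: change proportional to (lambda - V)
    V += dV
    print(f"trial {trial:2d}: dV = {dV:6.2f}, V = {V:6.2f}")
```

Trial 10 prints V = 89.26, agreeing with the short-cut derived next.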

Show that, in general,

ΔV_{n+1} = (1 - K) ΔV_n

and that

V_n = Σ ΔV_i (i = 1, …, n) = λ[1 - (1 - K)^n].

Using the last expression we find that V_10 = 89.26.
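A sketch of the derivation being asked for, using nothing beyond the basic model:

```latex
% From the basic model and V_n = V_{n-1} + \Delta V_n:
\Delta V_{n+1} = K(\lambda - V_n)
             = K(\lambda - V_{n-1} - \Delta V_n)
             = \Delta V_n - K\,\Delta V_n
             = (1 - K)\,\Delta V_n .
% Iterating from \Delta V_1 = K\lambda gives \Delta V_i = K\lambda(1-K)^{i-1},
% a geometric series, so
V_n = \sum_{i=1}^{n} \Delta V_i
    = K\lambda\,\frac{1 - (1-K)^n}{1 - (1-K)}
    = \lambda\bigl[1 - (1-K)^n\bigr] .
```

With K = 0.2 and λ = 100 this gives V_10 = 100(1 - 0.8^10) = 89.26, as above.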

    Example 2: Extinction

Work through this on your own, starting with the results for V_4 and V_10 obtained above. What will λ be now?
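A minimal sketch of one way to run the extinction numbers, assuming (as the procedure implies) that λ drops to 0 while K stays at 0.2, and starting from V_10 = 89.26:

```python
# Extinction: the UCS is withheld, so the asymptote lam is now 0 and the
# same delta rule drives V back down trial by trial.
K, lam = 0.2, 0.0
V = 89.26  # V_10 from the acquisition example above
for trial in range(1, 11):
    dV = K * (lam - V)  # negative, since V > lam
    V += dV
    print(f"extinction trial {trial:2d}: dV = {dV:6.2f}, V = {V:6.2f}")
```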

    Example 3: Overshadowing

With stimulus compounds you have to be careful to work with each stimulus separately on each trial and then add the associative strengths together (Assumption 4) to get the V_sum on that trial.

Suppose you started with two potential CSs, a tone, T, and a light, L, where L > T in salience, that is, the light is significantly more salient than the tone. This difference would be captured by differences in the Ks. Let K_L = 0.5 and K_T = 0.2. Let λ = 100.

Trial 1: ΔV_L = 0.5(100 - 0) = 50   ΔV_T = 0.2(100 - 0) = 20

V_{T+L} = 70

Trial 2: ΔV_L = 0.5(100 - 70) = 15   ΔV_T = 0.2(100 - 70) = 6

V_{T+L} = 91

Trial 3: ΔV_L = 0.5(100 - 91) = 4.5   ΔV_T = 0.2(100 - 91) = 1.8

V_{T+L} = 97.3

  • 8/11/2019 Ressie

    4/5

    4

Total V_L = 69.5   Total V_T = 27.8

    Thus, the light is overshadowing the tone.
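The same bookkeeping in a short Python sketch (the dictionary is just one convenient way to hold the per-stimulus values):

```python
# Overshadowing: T and L conditioned in compound with unequal saliences (Ks).
# Each stimulus is updated against the summed strength (Assumptions 4 and 5).
K = {"L": 0.5, "T": 0.2}
lam = 100.0
V = {"L": 0.0, "T": 0.0}
for trial in range(1, 4):
    V_sum = V["L"] + V["T"]           # compound total going into the trial
    for s in V:
        V[s] += K[s] * (lam - V_sum)  # both updates use the same V_sum
    print(f"trial {trial}: V_L = {V['L']:5.2f}, V_T = {V['T']:5.2f}, "
          f"V_T+L = {V['L'] + V['T']:5.2f}")
```

After trial 3 this prints V_L = 69.50 and V_T = 27.80, matching the hand calculation: the more salient light claims most of the available associative strength.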

    Example 4: Blocking

Assume two stimuli L and T are equally salient, that is, K_L = K_T. We pair L alone with the UCS for three trials. On the fourth trial we combine L with T (L + T) and pair the compound with the UCS. Let K = 0.5 and λ = 100. What will V_L3 be? What will V_L4 and V_T4 be?

    Why is this called blocking?
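A sketch of the computation, with K = 0.5 and λ = 100 as stated (try it by hand first):

```python
# Blocking: condition L alone for three trials, then the compound L + T.
K, lam = 0.5, 100.0
V_L, V_T = 0.0, 0.0
for trial in range(3):               # trials 1-3: L alone
    V_L += K * (lam - V_L)
print(f"V_L3 = {V_L}")               # 87.5
V_sum = V_L + V_T                    # trial 4: L + T in compound
V_L += K * (lam - V_sum)
V_T += K * (lam - V_sum)
print(f"V_L4 = {V_L}, V_T4 = {V_T}") # 93.75 and 6.25
# T gains almost nothing because L alone has already approached lambda:
# the prior conditioning of L "blocks" conditioning of T.
```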

    Example 5: Conditioned inhibition

    The term inhibition is controversial, but in the context of the Rescorla-Wagner model, a

    stimulus that through conditioning comes to have negative associative strength is called a

    conditioned inhibitor. How can this come about?

    Suppose we have paired a light, L, with shock to develop a conditioned response, CR.

    After the CR is clearly manifest, we shift to extinction but combine the light with an

    equally salient tone, T. After a number of extinction trials what happens to VL? What

about V_T? What is λ? Here's an example:

Let K_L = K_T = 0.2 and assume that V_L starts at 90 (why do we assume that V_L >> 0?). Initially V_T = 0 (why?). Also λ = 0 (why?).

Trial 1: L: ΔV_L = 0.2(0 - 90) = -18

T: ΔV_T = 0.2(0 - 90) = -18

V_L = 72 (i.e., 90 - 18); V_T = -18 (i.e., 0 - 18); thus V_{L+T} = 54.

Trial 2: ΔV_L = 0.2(0 - 54) = -10.8

ΔV_T = 0.2(0 - 54) = -10.8

V_L = 61.2; V_T = -28.8; thus V_{L+T} = 32.4.

    etc.

What will be the limiting values of V_L and V_T? How could you test to see if the tone

    were a conditioned inhibitor, that is, what procedures might allow you to functionally

    define an inhibitor?
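A sketch that runs the procedure out far enough to see where it settles (same parameters as above):

```python
# Conditioned inhibition: extinguish the L + T compound with L pre-trained.
K, lam = 0.2, 0.0
V_L, V_T = 90.0, 0.0
for trial in range(30):
    d = K * (lam - (V_L + V_T))  # equal Ks, so both change by the same amount
    V_L += d
    V_T += d
print(f"V_L -> {V_L:.2f}, V_T -> {V_T:.2f}, V_L+T -> {V_L + V_T:.2f}")
```

Because both stimuli change by the same amount on every trial, V_L - V_T stays fixed at 90 while V_{L+T} decays toward λ = 0; the limiting values are therefore V_L = 45 and V_T = -45, with the tone ending as a conditioned inhibitor.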


    There are a number of other examples possible showing the applications and limitations

    of the Rescorla-Wagner model. Try to make up some cases yourself. Think about

conditions where the Ks or λs might be manipulated or where CSs are separately conditioned and then combined, etc. Finally, how might you use the assumptions of the model to show that if P(UCS | CS) = P(UCS | ~CS), then V_CS = 0?
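One way to explore that last question is with a small stochastic simulation. The sketch below is an illustration, not part of the notes: it assumes an always-present context stimulus X (so the associative strength has somewhere to go), a CS presented on half the trials, and a UCS that occurs with the same probability whether or not the CS is present; all parameter values are arbitrary.

```python
import random

# Truly-random control: P(UCS | CS) = P(UCS | ~CS) = 0.5.
# X is an assumed, always-present context stimulus.
random.seed(1)
K, lam, p_ucs = 0.1, 100.0, 0.5
V = {"CS": 0.0, "X": 0.0}
history = []

for trial in range(20_000):
    present = ["X"] + (["CS"] if random.random() < 0.5 else [])
    target = lam if random.random() < p_ucs else 0.0  # UCS independent of CS
    V_sum = sum(V[s] for s in present)                # Assumption 4
    for s in present:
        V[s] += K * (target - V_sum)                  # Assumption 5
    history.append((V["CS"], V["X"]))

n = len(history) // 2  # average over the second half, once learning settles
print("mean V_CS:", round(sum(h[0] for h in history[n:]) / n, 2))
print("mean V_X :", round(sum(h[1] for h in history[n:]) / n, 2))
# mean V_CS comes out near 0 and mean V_X near p_ucs * lam = 50: with zero
# contingency the context absorbs the conditioning, so V_CS -> 0 on average.
```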

The Rescorla-Wagner model exemplifies in perhaps the simplest possible way a set of

    characteristics common to many other models of learning, some embodying much more

    sophisticated mathematical structures, for example, neural networks, genetic algorithms,

and various dynamic programming models. Such models, at a minimum, capture three

    essential aspects of behavioral change: (1) some form of selective association; (2) some

    form of memory; and (3) some variation of a delta rule or feedback process that drives the

system toward stable or meta-stable equilibria (in the terminology of dynamical systems, toward fields of attractors). Can you also see the relations between these

    characteristics of learning models and processes involved in natural selection driving

    biological evolution?

    M. Jackson Marr, Psych. 3031/6016