This article throws light upon the top five theories of learning. The theories are: 1. Watson’s Behaviourism in Learning Theory 2. Pavlov’s Contribution to Learning Theory 3. Guthrie’s Contiguous Conditioning Theory 4. Skinner’s Operant Conditioning Theory 5. Hull’s Deductive Theory.
Theory of Learning # 1. Watson’s Behaviourism in Learning Theory:
Early in the twentieth century objective behaviourism became the main feature of American psychology and came increasingly into conflict with the German tradition.
Pressure increased to break the traditional mould and to develop a psychology that was directly oriented towards objective behaviour and practical usefulness.
John Broadus Watson (1878-1958):
It was through his vigorous attacks on traditional psychology and his attempt to build a radically different system that a new stance appeared on the American psychological scene. Watson’s opposition to admitting anything subjective into psychology led him to reject much more than the study of consciousness. His psychology came to be known as “Behaviourism”, with a categorically objective stance.
The reason for the name ‘behaviourism’ was clear enough. Watson was interested only in behaviour, not in conscious experience. Human behaviour was to be studied as objectively as the behaviour of machines. The thrust of Watson’s argument was that consciousness, not being objective, was not scientifically valid and could not be meaningfully studied.
He meant nothing more abstruse than the movement of muscles. What is speech (he said)? —Movement of the muscles of the throat. What is thought? —Sub-vocal speech, talking silently to oneself. What are feeling and emotion? —Movement of the muscles of the gut. Thus, Watson disposed of mentalism in favour of a purely objective science of behaviour.
Another of his targets was the analysis of motivation in terms of instincts. He opposed the concept of instincts, which were considered innate and therefore carried a mentalistic character. He asserted that our behaviour is, on the contrary, a matter of ‘conditioned reflexes’, that is, responses learned by what is now called ‘classical conditioning’. He said that we do not show aggression or sociability because we are born with an instinct to do so, but because we have ‘learned’ to do so through conditioning.
He believed all we inherit is our bodies and a few reflexes; differences in ability or personality are differences in learned behaviour. Thus, Watson was in several respects a strong exponent of ‘environment’ as against heredity in the familiar nature-nurture controversy. What we are depends entirely (except for anatomical differences) on what we have “learned”. And since what can be learned can be unlearned, this contention meant that human nature, either in general or in a particular person, was greatly subject to change—“There is no limit to what a person, properly conditioned, might become”.
Watson’s Interpretation of Learning:
He regarded all learning as classical conditioning. We are born with certain stimulus-response connections called ‘reflexes’. These reflexes, according to Watson, are the entire behavioural repertoire that we inherit. However, we can build a multiplicity of new stimulus-response connections by the process of conditioning, first described by Pavlov—“that if a new stimulus occurs along with the stimulus for the reflex response, after such pairing the new stimulus alone will produce the response”.
This conditioning process, according to Watson, is how we learn to respond to new situations (by chaining of responses). Such conditioning, however, is only part of the learning process. We must not only learn to respond to new situations, we must also learn new responses. Thus new and complex behaviour is acquired through a “serial combination of simple reflexes”—a complex sequence of stimulus-response connections. Questions then arise: How does complex learning take place through serial conditioning? How is a particular sequence of stimulus-response connections formed? Watson had two answers. One was to say that the stimulus-response connections that make up the skilled act are conditioned reflexes.
Each response produces sensations that become conditioned stimuli for the next response, and thus the whole sequence of conditioned stimulus-response connections is formed. This formulation gave Watson the satisfaction of having reduced complex habits to their simple building blocks—conditioned reflexes. However, he did not try to explain it further and left that work to the physiologists. The reduction was, in fact, more apparent than real.
Watson’s other explanation of this form of learning is in terms of ‘frequency’ and ‘recency’. The principle of frequency states that the more frequently we make a given response to a given stimulus, the more likely we are to make that response to that stimulus again. Similarly, the principle of recency states that the more recently we have made a given response to a given stimulus, the more likely we are to make it again. To explain how a particular stimulus-response unit in a complex sequence is selected, Watson says that, during learning, many different responses occur to the stimulus, but many of the learned responses drop out.
The responses that change the situation gain in frequency and recency, and the particular stimulus-response unit in the sequence is then complete. All of these statements about the learning of new responses are left rather underdeveloped in Watson’s treatment.
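Watson never formalized these two principles, but they can be sketched as simple selection rules over a history of responses made to one stimulus. This is only an illustrative sketch; the response names below are invented for the example:

```python
from collections import Counter

# Hypothetical history of responses an animal has made to one stimulus,
# in the order they occurred.
history = ["paw", "bite", "paw", "pull", "paw", "pull"]

def predict_by_frequency(history):
    """Watson's frequency principle: the response made most often
    to the stimulus is the most likely to recur."""
    return Counter(history).most_common(1)[0][0]

def predict_by_recency(history):
    """Watson's recency principle: the response made most recently
    to the stimulus is the most likely to recur."""
    return history[-1]

print(predict_by_frequency(history))  # "paw"  (3 occurrences)
print(predict_by_recency(history))    # "pull" (the last response made)
```

Note that the two principles can disagree, as here; Guthrie later kept only recency, as discussed below in this article.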
Even the theory of the learning of emotional responses put forward by Watson is not tenable, though he made a small concession to heredity in this case. He recognized three innate patterns of emotional reaction—fear, rage and love (primary emotions)—which, according to Watson, are not feelings but are the same as reflexes: patterns of movement of internal organs. But, in reality, emotions are more complicated than what is usually meant by reflexes. Emotional learning, he emphasized, also involves conditioning of these three patterns of emotional response to new stimuli.
Long-Lasting influence of Watson:
“Watson’s great contribution to the development of psychology was his rejection of the distinction between body and mind and his emphasis on objective behaviour. This battle was so effectively won that most of the later learning theories fall, in the broader sense of the word, within a behaviourist framework”. But Watson was not as thorough as he might have been in dealing with the detailed problems of learning.
His treatment of complex learning is mostly incomplete and inconsistent. But he opened the door for later psychologists to enter the realm of behaviourism in order to explain learning. It therefore remained for others to try to build, within the behaviouristic framework, a more complete theory of learning.
Pavlov’s Classical Conditioning:
Watson’s reliance on classical conditioning had been a direct effect of the experimental conclusions reached by Pavlov in his physiological laboratory in Russia. “One cannot mention conditioned reflexes without thinking of the distinguished Russian physiologist, Ivan Petrovich Pavlov, who gave them their name.” His investigation on conditioned reflex has influenced Watson and later learning psychologists like Guthrie, Skinner and Hull.
The classical experiment was carried out with meat powder placed in the experimental dog’s mouth, producing salivation, which is considered an ‘unconditioned reflex’—a physiological phenomenon. The meat powder or food is the ‘unconditioned stimulus’. Then another stimulus—the sound of a metronome (or any other arbitrary stimulus)—is combined with the presentation of the food. After repetition, eventually, if the time relationship is correct, the sound of the metronome will evoke salivation independently of the food. The sound then becomes the ‘conditioned stimulus’ and the response to it becomes the ‘conditioned reflex’.
The mechanism by which animals learn—i.e. receive information, store it and relate subsequent behaviour to this information—is one of the basic areas of experimental psychology. Although the precise mechanism of learning is still not understood, a major advance in theory came as the result of the experiments conducted by Pavlov. In a conditioning experiment an animal is placed in a rather passive situation, with a simplified environment and a limited number of things for it to do. In his classic experiment Pavlov used ‘hungry’ dogs, which naturally salivate at the sight of food.
The automatic reaction—food/salivation—he called a ‘reflex’ and assumed to be instinctive. In the next stage of the experiment he rang a bell at the moment the food was presented to the dog and, after a number of presentations of bell/food/salivation, he found that he could omit the food and get bell/salivation. This new link he called a ‘conditioned reflex’, and proposed that it must be the fundamental unit of learning.
In a brilliant series of studies he was able to show the enormously predictable nature of conditioning experiments and to plot, in graphical form, the effect of different variables on the conditioning. For example, the bell and the food had to be presented reasonably close together for the conditioned reflex to be set up, and the probability of its being set up was a function of how closely the two stimuli were paired—if the bell preceded the food by too great a margin, conditioning would be slow or would not take place at all. He was also able to plot the parameters of ‘reconditioning’.
He showed the process of ‘extinction’ by not presenting food after the bell in repeated trials until the animal ceased to salivate to the conditioned stimulus, the bell. These experiments and results may seem simple and obvious to us today, but their real and historical significance was the fact that Pavlov had shown that a process such as learning—which had previously tended to be looked upon in a rather mystical way—could not only be studied in the laboratory, but could also be shown to follow relatively simple and describable laws.
The theory and concept underlying the process of conditioning were the laws of association: “when the laws were stated in quantitative form, there came a strong tendency to emphasize contiguity of elements as having priority over the principles of similarity and contrast. Physiological explanations tended to rest primarily on contiguity”.
The law is: “When two elementary brain processes have been active together, or in immediate succession, one of them, on reoccurring, tends to propagate its excitement into the other”. The same principle had been working in Pavlov’s conditioned reflex mechanism.
In developing his theory, Pavlov discovered the notable accompanying events: ‘Reinforcement’, ‘Extinction’, ‘Spontaneous Recovery’, ‘Generalization of Excitation and Inhibition’ and ‘Differentiation’. Some of these we have discussed in the section on principles of learning theory in the earlier chapter. The rest will be discussed in the light of conditioned reactions.
‘Reinforcement’—The simple conditioned reflex begins with its acquisition through repeated ‘reinforcement’ (presentation of food and satisfaction of the animal) i.e. the repeated following of the conditioned stimulus by the unconditioned stimulus and response at appropriate time intervals.
‘Extinction’—When reinforcement is discontinued and the conditioned stimulus (CS) is presented alone, unaccompanied by the unconditioned stimulus (US), the conditioned response (CR) gradually diminishes and ultimately disappears. Pavlov called this process ‘experimental extinction’.
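Pavlov described acquisition and extinction only qualitatively. A much later formalization—a delta-rule update in the style of Rescorla and Wagner, not anything Pavlov himself used—reproduces the two curve shapes and may make the dynamics concrete. The learning rate and trial counts below are arbitrary illustrative choices:

```python
def run_trials(strength, n_trials, reinforced, rate=0.3):
    """Update associative strength toward the trial outcome (1.0 when the
    US follows the CS, 0.0 when the US is omitted) by a fixed fraction
    of the prediction error. Returns final strength and the trial-by-trial
    history."""
    history = []
    target = 1.0 if reinforced else 0.0
    for _ in range(n_trials):
        strength += rate * (target - strength)
        history.append(round(strength, 3))
    return strength, history

v = 0.0
v, acquisition = run_trials(v, 10, reinforced=True)   # CS + US pairings
v, extinction = run_trials(v, 10, reinforced=False)   # CS presented alone

print(acquisition)  # rises toward 1.0, with diminishing gains per trial
print(extinction)   # decays back toward 0.0 once reinforcement stops
```

Note what the sketch deliberately leaves out: spontaneous recovery (discussed next) shows that extinction is not simply learning run in reverse, which is one reason later theorists treated extinction as an inhibitory process rather than erasure.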
‘Spontaneous recovery’—A striking event, however, takes place after some time has elapsed following extinction: without further repetition of any kind, the conditioned salivation returns. This is called ‘spontaneous recovery’ of the extinguished reflex, and it has immense importance in the theory of learning and forgetting.
The phenomena of spontaneous recovery and forgetting are interrelated. Pavlov was the first physiologist to report about spontaneous recovery while experimenting with conditioning of reflexes (salivation in dogs in particular). Following experimental extinction of a conditioned response (CR), the CR showed some recovery if the dog was removed from the apparatus and allowed to rest in his home cage for a while before being returned to the experimental situation and tested.
The CR has “spontaneously recovered” without any special reconditioning by the experimenter. Later work showed that the amount of recovery increased with the length of the rest interval between sessions. This suggested that psychological factors (security, nervousness, etc.) are involved in the physiological, instinctive reaction.
Pavlov and others also performed experiments in which the CR was repeatedly extinguished over consecutive daily sessions. They reported that the amount of recovery of the CR became progressively less as the extinction sessions proceeded and, finally, the CR ceased totally (a learned reaction).
This event seems to be similar to the ‘spontaneous regression’ pointed out by Ebbinghaus (1885) in his learning and memory experiments, mentioned earlier. “The amount forgotten increases with the time that has elapsed since the end of practice, and the amount of session-to-session forgetting becomes progressively less as daily practice on a task continues”.
‘Generalization of excitation and inhibition’—In the process of conditioning, the response can be evoked by a broad band of more or less similar stimuli, that is, stimuli resembling the conditioned stimulus. The CR will occur on a test with a neighbouring stimulus to an extent dependent upon the similarity of the test stimulus to the training stimulus.
This is called ‘stimulus generalization’. (This has been discussed in detail in the previous chapter on the principles of the theory of learning.) Stimulus generalization occurs through training, and not only generalization of excitation but generalization of inhibition follows training in extinction.
The inhibitory phenomenon within conditioning, first described in connection with extinction, became of great interest to Pavlov.
A classification of the various types of empirical manifestations of inhibition has been summarized by Hilgard and Marquis as follows:
A. External Inhibition:
Temporary decrement of a conditioned response due to an extraneous stimulus; for example, a sudden loud sound accompanying the conditioned stimulus (a light) reduces conditioned salivation to that light.
B. Internal inhibition:
Internal inhibition develops slowly and progressively when a conditioned stimulus is repeatedly presented under one of the following conditions:
1. Experimental extinction:
The weakening of response to a conditioned stimulus which is repeated a number of times without reinforcement.
2. Differential inhibition:
A conditioned response given originally to either of two stimuli is restricted to one of them through the reinforcement of one and the non- reinforcement of the other. The non-reinforced negative stimulus becomes inhibitory.
3. Conditioned inhibition:
A combination of stimuli is rendered ineffective through non-reinforcement, although the combination includes a stimulus which alone continues to evoke the conditioned response. The other stimuli in the combination are conditioned inhibitors.
4. Inhibition of delay:
If a regular interval of sufficient duration elapses between the commencement of a conditioned stimulus and its reinforcement, during the early portion of its isolated action the conditioned stimulus becomes not only ineffective, but actively inhibitory of other inter-current activities.
C. Disinhibition:
Temporary reappearance of an inhibited conditioned response due to an extraneous stimulus. This may be considered as an external inhibition of an internal inhibition.
Theory of Learning # 2. Pavlov’s Contribution to Learning Theory:
Pavlov’s Contribution to Learning Theory Involves the following Learning Problems:
1. Capacity:
The capacity to form conditioned reflexes depends on the nervous system; hence there are some congenital differences in learning ability.
2. Practice:
In general, conditioned reflexes respond and strengthen with repeated reinforcement, but, in arranging learning, care should be taken to avoid the accumulation of inhibition even under repeated reinforcement.
3. Motivation:
Motivation is the most important factor in inducing conditioned reflexes; for example, in the experiment with the dog, the first condition is that the animal must be ‘hungry’, just as ‘drive’ is an important condition in instrumental conditioning.
4. Understanding:
Though a physiologist, Pavlov uses subjective terms like understanding or insight and says, “when a connection, or an association, is formed, this undoubtedly represents the knowledge of the matter”. He did not ignore mentalistic concepts in learning. He affirms a relationship existing in the external world.
5. Transfer:
Transfer is the result of generalization in learning whereby one stimulus serves to evoke the conditioned reflex learned to another.
6. Forgetting:
Forgetting is a consistent phenomenon in learning and is treated by Pavlov not in terms of retention and forgetting but in physiological terms like extinction. He did not deal with forgetting as such in the laboratory. Repeated conditioned-reflex experiments on the same dog were greatly overlearned; hence forgetting was not considered a laboratory problem, but extinction was.
Therefore, the decline of the CR through experimental extinction was dealt with, and Pavlov concluded that conditioned reflexes (CRs) are temporary, which is evidence of forgetting. But he then recognized the distinction between forgetting and extinction (which is a weakened CR). Extinction is frequently followed by spontaneous recovery of the reflex, and therefore forgetting is not total.
The important contribution made by Pavlov was to make his study purely objective and to lead psychology toward the behaviourist tradition, though he himself remained strictly within physiology. From within his own camp he exerted enormous influence on contemporary psychology, and on learning theory in particular, and opened new vistas for learning psychologists.
Theory of Learning # 3. Guthrie’s Contiguous Conditioning Theory:
Guthrie is one of those learning psychologists who interpreted learning in the same behaviouristic fashion and maintained the tradition. Edwin R. Guthrie (1886-1959) remained closest to Watson’s original position of Behaviourism, and therefore his interpretation of learning sounds much like Watson’s idea of behaviourism.
Yet in some respects his system seems to follow Thorndike and Pavlov, in the sense that on the one hand his learning theory rests on the objective stimulus-response association psychology and on the other it uses the conditioned response terms coming from Pavlov.
It was somewhat later that the conditioned reflex of Pavlov came to serve as a useful paradigm for learning, and a number of books were written emphasizing the standpoint of behaviourism in learning and psychology. This exposure influenced a number of psychologists, among whom Guthrie was an important one; the result of his work and faith in behaviourism was published in the book “General Psychology in Terms of Behaviour”, written jointly by him and S. Smith.
Among the theories of learning, Guthrie’s is one of the easiest to read in his own words, but nonetheless difficult for someone else to discuss. It is easy to read because he wrote in an informal style, making his points with homely anecdotes rather than with technical terms and mathematical equations. But the casualness of his presentation was only apparent; its deeper meaning contains the germ of a highly deductive theory of learning.
Guthrie’s basic principle of learning is similar to the conditioning principle that was basic for Watson, but stated in a more general form. Guthrie describes his theory and its basic principles in his book “Psychology of Learning” (1952). In describing the contiguity of cue and response he formulates one law of learning, from which all else about learning is to be comprehensible.
The law is stated by him in the following language:
“A combination of stimuli which was accompanied by a movement will on its recurrence tend to be followed by that movement”. Winfred F. Hill, in his book “Learning”, paraphrased it as follows: “If you do something in a given situation, the next time you are in that situation you will tend to do the same thing again”.
This principle is more general than classical conditioning in that it says nothing about the unconditioned stimulus—it says that the response accompanying a stimulus is likely to follow the stimulus again. In classical conditioning the response occurs with the (conditioned) stimulus during training because the unconditioned stimulus elicits it.
This sequence, of course, fulfils Guthrie’s conditions of learning. As long as the conditioned stimulus and the response occur together, learning will occur.
Guthrie’s statement is apparently very simple: it avoids mention of drives, of successive repetitions, of reward and punishment, and mentions only the combination of stimuli and movements. But this one principle serves as the basis of a very intriguing theory of learning—interpreted loosely, it is the source of all interpretations of learning and its management; interpreted rigorously, it becomes the chief postulate of a deductive theory.
Its proper interpretation, in fact, needs a second statement to complete the basic postulates about learning and he writes: “A stimulus pattern gains its full associative strength on the occasion of its first pairing with a response”.
The statement is rather paradoxical, and the first difficulty with this principle is that one often does many different things in the same situation and improves upon them with practice. Which one will occur next time remains indefinite. This challenge is no problem for Guthrie, who simply replies, “the last one” (learning the most recent one). Here the emphasis is upon some kind of “recency principle”, as in Watson. For if learning occurs completely in one trial, that which was last done in the presence of a stimulus combination will be that which is done when the stimulus combination next occurs; the principle refers to the last response of a succession rather than to recency in time.
He accepted the recency principle and ignored Watson’s principle of frequency in order to put forward a simple theory of contiguity. Like other sophisticated theorists, he did not proceed by denying familiar forms of learning. He wanted to show that complex learning can be derived from these basic postulates—that each form of learning (insightful or purposive or problem-solving) requires no new principle to explain it but can be explained basically by the primary law of association by contiguity.
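Read rigorously, Guthrie’s two postulates amount to an all-or-none lookup: a stimulus pattern is bound, at full strength and in one trial, to whichever movement last occurred in its presence. A minimal sketch of this reading (the stimulus and movement names are invented for illustration):

```python
# Guthrie's one-trial, last-response rule rendered as a lookup table.
# There is no gradual strengthening and no reinforcement term: each
# stimulus pattern is simply bound to the movement that last occurred
# in its presence.

associations = {}

def experience(stimulus_pattern, movement):
    """One trial: the pattern gains full associative strength at once,
    overwriting any earlier binding (the 'last one' wins)."""
    associations[stimulus_pattern] = movement

def respond(stimulus_pattern):
    """The movement currently bound to the pattern, if any."""
    return associations.get(stimulus_pattern)

experience("puzzle box", "claw at bars")
experience("puzzle box", "pull loop")   # a later trial replaces the first

print(respond("puzzle box"))  # "pull loop" — the last movement made
```

On this reading, apparent gradual improvement comes not from strengthening a single bond but from many such bindings accumulating across slightly different stimulus combinations, which is exactly the point Guthrie makes about practice below.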
Guthrie’s belief in contiguity is irrespective of the time interval between the conditioned stimulus and the response—the time interval which is considered most important in conditioning experiments. The cue, in the conditioning process, is more important than the original stimulus, according to Guthrie. He strongly defended the strict simultaneity of cue and response required for conditioning to occur. Guthrie brought in the movement factor to fill in the time gap between the presentation of the stimulus and the response to it made by the animal.
He argues that an external stimulus will give rise to movements of the organism; these movements, in turn, produce kinesthetic stimuli, and a bond or association is made between stimuli and response separated by a time interval, the gap in between having been filled by movement. Therefore, the association is formed between simultaneous events, and the time interval remains insignificant in the process of conditioning for Guthrie.
He cites the example of the “delayed response” made by the animal in support of his contention—the animal learns to deliberately delay its response to a signal in order to get a reward. The signal cues off a chain of adjunctive behaviour or movements (as he terms them) that usher the animal directly to the point of reinforcement. There is a strong preference for movement-produced stimuli as the true conditioners in Guthrie’s system. Guthrie makes no use of the concept of reinforcement.
The aspect of Guthrie’s theory that has been most attacked is his lack of concern with success and failure, with learning to do the “right” thing. He does not say that we learn to make those responses that work, or that obtain reward. He says that something we do in a situation becomes learned as a response to that situation and acts as a cue to a different situation, so that it becomes the last thing done in the old situation and the cue becomes a stimulus for the response to be made in another simultaneous event.
The two events are associated with movements produced. The movement- produced stimuli which act as conditioners permit the integration of habits within a wide range of environmental change in stimulation, because these stimuli are carried around by the organism.
As regards practice in learning, Guthrie thinks that practice brings improvement through repetition of (kinesthetic) movements and not through goal achievement. To other learning theorists, according to Guthrie, success, improvements and achievements all refer to acts or outcomes of learning and not to movements as he suggested.
The practice of movements in acquiring a skill is important to him because acquisition of a skill does not depend on a single movement but on a number of movements made under a number of different circumstances—a complicated skill calls for practice in all the different situations. Guthrie’s theory emphasizes the importance of movements in practice because, he maintains, practice produces its consequences not according to a law of frequency, as suggested by Watson, but according to the simple principle of attachment of cues to movements. Therefore, a skilled task needs practice because it is composed of a large number of habits involving different kinds of movements.
Guthrie’s interpretation of forgetting is similar. Habits do not weaken with disuse. They are replaced by other habits. But forgetting—like acquisition—is gradual because of the many specific stimulus-response connections that make up a complex habit. Hence it is possible to make one definite prediction from Guthrie’s interpretation of forgetting. We can predict that a habit will be better retained if it has been practiced in different situations (that is, in the presence of a number of different stimulus combination).
Forgetting in the theory of learning is the counterpart of retention, and therefore original learning and interfering learning follow the same rule. Conditioning experiments explain forgetting as a fact of extinction. Guthrie did not consider extinction a decay in habit strength due to mere non-reinforcement, but due to associative inhibition, i.e. through learning of an interfering factor producing an incompatible response. As stated above, he explained forgetting in the same way: if there is no interference with old learning, there is no forgetting.
Guthrie also suggested methods of breaking habits by accelerating their replacement: he suggested arranging for counter-movements to occur in the presence of the cues to the habit. Guthrie’s methods of breaking habits have been adopted by psychotherapists in the form of behaviour modification techniques. Modern behaviour therapists have picked up and applied the techniques in helping their patients overcome certain debilitating behaviour problems. One such method is called “systematic desensitization”.
The impression one gets from most of Guthrie’s writing is that human behaviour is a mechanistic matter. Behaviour is rigidly controlled by stimuli and changes in the stimulus response connections follow simple mechanical laws. However, Guthrie has been more receptive than Watson to such concepts as desire and purpose. He recognizes that much behaviour has a goal directed character, but he interpreted it in rigorously physical terms.
Guthrie reminds us to look at the particular response that is being made and the particular stimuli that are eliciting learning, and not to rely too much on rewards and punishments. Because both Watson and Guthrie totally avoided the concept of reinforcement (which is the key point in the learning process for modern learning psychologists) and insisted only on a mechanistic way of interpreting human behaviour and learning, they are called “contiguity theorists”.
Thorndike’s Connectionism:
Edward L. Thorndike’s (1874-1949) contribution to learning psychology is of major importance because his theory is the starting point of explaining the process of child learning which he thought is similar to animal learning. He, therefore, deduced his theory of connectionism from the observation of animal behaviour in the laboratory.
He observed that human infants are as impulsive and as dependent on the pleasure and pain consequences of their actions as the animals. He also discovered that animals learn in the same mechanistic way as infants do. This theory was first announced by Thorndike in his book “Animal Intelligence” in 1898. His theory is usually studied in two parts.
Thorndike’s earliest writings on learning were mere associationism—learning at that period was considered to be based on associations between sense impressions and impulses to action (responses). Such an association was termed by Thorndike a “bond” or a “connection”, which is responsible for the making or breaking of a habit (a learned response to a specific stimulus). Hence Thorndike’s theory came to be called “Connectionism”, and is recognized as the original S-R (bond between stimulus and response) psychology of learning.
The elementary form of learning of both lower animals and human being was considered by Thorndike as ‘trial-and-error’ learning (selecting and connecting response to the stimulus). The principle of trial and error has already been described in the last chapter.
This concept was introduced by Thorndike to explain the strategy apparently underlying animal behaviour in learning. In his trial-and-error experiments, typically when confined to a “puzzle box” (problem situation), the animal makes a series of movements, more or less random, one or more of which will ultimately help it to escape.
Thorndike argued that all learning proceeds on this basis—the successful responses being rewarded and thus learned or established, the unsuccessful ones being unrewarded and failing to be learned. Trial-and-error learning is supposed to be more or less automatic, with the animal being unconscious of what it is doing or of how it is actually effecting the escape or achieving its goal. A true mechanist of his day, Thorndike sought to provide a ‘mechanistic’ account of animal learning—a simple behaviour.
Thorndike was a pioneer in experimental animal psychology who first introduced the idea that pleasure and pain consequences of our acts are important determiners of behaviour. In order to establish his contentions, he took animals into the laboratory, presented them with standardized problems and made careful observations of how they solved those problems.
His most widely quoted study was with cats in the puzzle box. After the experiments had been carried out in the laboratory, he concluded that the cat’s learning to pull the string involved not an “intelligent” understanding of the relation between string pulling and door opening but a gradual “stamping in” of the stimulus-response connection between seeing the string and pulling it. Similarly, a rejection process operates simultaneously, and the unnecessary and unrelated behaviours are “stamped out” by the animal.
This gradual learning was typically graphed as a "learning curve", with the time elapsed before the successful response plotted on the vertical axis and successive practice trials on the horizontal axis. A learning curve can be drawn from the repetition of a performance by a well-motivated subject and shows progressive improvement. If a measure of success on each trial is plotted, the success measure being the ordinate and the amount of practice up to and including the given trial being the abscissa, we have a learning curve.
The learning curve rises or falls according to the measure adopted. The learning curve rises if the measure is amount accomplished per trial or per unit of time; and falls if the measure is errors per trial. Thorndike gives a large collection of such curves and discusses many questions of proper measurement.
To obtain a complete learning curve we should start with no previous training and continue till the subject reaches the limit of his ability. Most actual curves are cut short at both ends, since we neither start from the absolute zero of practice nor carry the subject to his absolute limit.
The slope of the learning curve indicates the rate of improvement. If this rate were constant throughout the learning, the curve would be a straight line. Almost any learning curve shows, on the contrary, a negative acceleration: it flattens out as practice advances and the rate of improvement decreases. Also interesting is the general shape of the curve, which exhibits the course of improvement throughout the learning process. Horizontal parts of the curve, showing no improvement, have led to the assumption of three levels in the course: the initial level, the intermediate level and the final level.
It is possible that a 'plateau' occurs before the final level is reached. A plateau in the learning curve may arise from either physiological or emotional factors in the subject. Such curves typically show high values initially, declining to low, relatively stable values near the end of 10 to 20 practice trials. The characteristic curve is revealed when performance in a learning task is plotted against time, up to complete mastery (as in figure 6.1 below). The most common form is S-shaped (an ogive), denoting slow initial improvement, rapid gains in the middle trials, and a final levelling off.
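How such a curve is read can be sketched with hypothetical data (the escape times below are invented purely for illustration): with time per trial as the ordinate measure, the curve falls and flattens as practice advances.

```python
# Hypothetical escape times (seconds) on successive trials in a
# puzzle-box style task; the values are invented for illustration.
times = [160, 95, 60, 42, 30, 24, 20, 18, 17, 16]

# Print a simple text-mode learning curve: trial number (abscissa)
# against time per trial (ordinate). Because the measure is time
# per trial, the curve falls; a measure of amount accomplished per
# trial would rise instead.
for trial, t in enumerate(times, start=1):
    print(f"trial {trial:2d} | {'#' * (t // 5):32s} {t}s")
```

Note the negative acceleration: the drop from trial 1 to trial 2 is far larger than the drop from trial 9 to trial 10.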
When Thorndike published these studies, they were radical in two respects: their careful observation of animal behaviour under controlled conditions, and their concern with the gradual strengthening of stimulus-response bonds. They were Thorndike's answer to the argument about whether animals solve problems by reasoning or by instinct. By neither, said Thorndike, but rather by gradual learning: 'stamping in' the correct responses and 'stamping out' the unsuccessful ones. The whole process is automatic and mechanistic, and it helped Thorndike to formulate his first law of learning, the "law of effect". Thorndike says nothing about the animal's feelings, only about what the animal does.
Thus he adheres to behaviourism's concern with what individuals do. His language may sound subjective, but his meaning is as objective as Watson's. He was a pioneer of objective psychology, but he incorporated within his objective psychology of learning the "law of effect", and thus became, in fact, the first reinforcement theorist, opening a role for motivation in learning that future learning theorists would apply.
Thorndike formulated his mechanistic law of effect in the following terms: responses to a situation which are followed by a rewarding state of affairs will be strengthened, or stamped in, as habitual responses to that situation; responses which are unsuccessful will be weakened, or stamped out, as responses to that situation. Rewards, or successes and failures, were thus introduced as providing a mechanism for the selection of the more adaptive response.
Thorndike's law of effect cast a profound influence upon his thinking about human learning, which rests on the laws of effect, readiness and exercise. He says, "Both theory and practice need emphatic and frequent reminders that man's learning is fundamentally the action of the law of readiness, exercise and effect".
The law of exercise is also known as the law of use and disuse, in terms of strengthening the connection between stimulus and response. Strengthening increases the probability that the same response will be made when the situation recurs, and it depends upon regular exercise (practice). The law of disuse gradually introduces forgetting through a decrease in the strength, or a lack, of the connection between stimulus and response.
The law of readiness, in brief, is an accessory principle accompanying the law of effect, which characterizes the circumstances under which a learner tends to be satisfied or annoyed. Thorndike explained these conditions in terms of neuronal units "conducting impulses (conduction units) rather than in terms of mental phenomena".
He thinks that it is a bodily impulse, or an attractive stimulus, which causes readiness in the responses of the action sequence making up the chain of performances. The law also covers fatigue and satiation effects; e.g., if a person is already stuffed with food, being forced to take another bite is positively aversive. "Thorndike's readiness is a law of preparatory adjustment, not a law about growth".
In addition to the above three major laws of learning, Thorndike put forward five subsidiary laws:
(1) Multiple response,
(2) Set or attitude,
(3) Pre-potency of elements,
(4) Response by analogy,
(5) Associative shifting.
These subordinate laws, in fact, involve principles adopted in learning. "The multiple response principle means that in order to produce the desired response, a number of varied responses must occur (varied reaction)."
“The second principle is that learning is guided by a total attitude or “set” of the organism.”
“The third principle states that the learner is able to react selectively to prepotent or salient elements in the problem.”
“The fourth principle states that the response to a new situation occurs with reference to a previously learned or known situation and familiarity plays an important role to master a new piece of learning.”
“The fifth principle is the direct application of Pavlovian conditioned response theory.” The law states that shifting of stimuli can take place by substituting a conditioned stimulus for an unconditioned one. Thorndike believes that “we may get any response of which a learner is capable associated with any situation to which he is sensitive”.
The principle of associative shifting of stimuli can produce a striking experience of learning: the same response to a completely new stimulus, provided the organism is presented with stimulating situations throughout the shifting process. The technology of teaching-machine programs has successfully utilized this principle of associative shifting in teaching-learning situations.
Theory # 4. Skinner’s Operant Conditioning Theory:
As against Pavlov's classical conditioning, which emphasizes physiological reflexes and the role of mechanical conditioning in learning, Skinner's operant conditioning emphasizes the role of satisfaction, in the light of Thorndike's law of effect.
Skinner was a staunch behaviourist who tried to avoid such mentalistic concepts as 'satisfaction', replacing the term with 'reinforcement', an objective phenomenon. Skinner's theory of learning, therefore, is also known as the theory of 'instrumental conditioning', reinforcement being the instrument. His analysis of learning behaviour is a "functional analysis", which operates by identifying and isolating environmental variables; learning is a product of such functions.
Skinner's theory consists of a collection of concepts, principles and research strategies to explain and analyse complex behaviour. He believed that any complex behaviour, like thinking or problem-solving, when properly analysed, will be interpretable in terms of the complex interplay of the elementary concepts and principles involved.
Skinner conducted this analysis by following a strictly behaviouristic methodology, completely ignoring mentalistic and cognitive explanations of behaviour, or any claim that behaviour is caused by inner psychic forces. His only significant departure from the traditional stimulus-response psychology of his predecessors (Watson, Pavlov and Thorndike) was to advance a division between respondent and operant behaviour in terms of responses, stimuli being less important.
Skinner proposes that two types of responses can be distinguished: a class of elicited responses and a class of emitted responses. Responses elicited by natural stimuli are called "respondents"; emitted responses are designated "operants". Hilgard writes: "The operant-respondent distinction drawn above was to dominate learning theory for some 30 years. It was called 'two-factor' theory and it laid claim to a number of correspondences.
Responses of glands and internal organs (respondents) were distinguished (by hypothesis) by being (a) elicited by innate, unconditioned stimuli, (b) controlled by the autonomic nervous system, (c) involuntary, (d) characterized by minimal response produced feedback, and most importantly, (e) able to be classically conditioned (Type S) but not operantly conditioned (Type R).
In stark contrast, responses of striated, peripheral muscles (operants) were distinguished by being (a) sometimes emitted, without identifiable stimuli, (b) controlled by the central nervous system, (c) under voluntary control, (d) characterized by distinctive proprioceptive feedback, and (e) able to be operantly conditioned but not classically conditioned”.
The components of the responses elicited by the reinforcer become conditioned and occur in anticipation of it. Therefore, in instrumental conditioning, the placing of the reinforcer is very important, i.e. how the reinforcements are scheduled. In order to emphasize the role of reinforcement, Skinner introduced in experiment the conditioning of "superstitions", a phenomenon described in terms of operant reinforcement.
Skinner noticed in the behaviour of pigeons that when reinforcement is first delivered, the pigeon's pecking follows stereotyped ways for some time, so that this "operant" becomes strengthened. Skinner also noted that human adults learn a number of superstitious sequences when exposed to random, noncontingent reinforcement. The significance of superstition for learning is that whenever a reinforcement is delivered to an organism, it strengthens whatever operant behaviour is in progress at that time.
Reinforcement can be described as a stimulant: when a person or an animal is rewarded, that is, reinforced, for what it does, the same behaviour will tend to be repeated with increasing frequency. Skinner showed the effect of reinforcement through a specially prepared box (the Skinner Box) and conducted experiments with pigeons following the principles of Thorndike's law of effect. When a pigeon is placed in the Skinner Box for the first time, it behaves erratically and exhibits "emotional behaviours". After a period of practice, the bird is allowed to become "adapted" and undergoes "magazine training" to behave as the experiment demands.
At this point, if food is made contingent upon pecking the key, the bird will at first show a number of irrelevant behaviours for a considerable period; then, abruptly, it will peck the key a single time, bringing the tray containing food within reach. Once the pigeon has taken food from the tray by pressing the key, getting the reward entirely by its own effort, the process becomes remarkably orderly and predictable.
It is almost certain that a bird on a restricted diet will soon peck the key again and again and that, in a relatively short period of time, the pecks will occur at a moderate and stable rate, allowing a graph to be plotted. By introducing the term "reinforcement", Skinner restated the law of effect: the word "reward", or Thorndike's own phrase "satisfying state of affairs", was meant to include all of the things which can be classified as rewards. Skinner's definition of 'reinforcement' is as follows: "a reinforcer is anything or any event that increases the likelihood of the responses that immediately precede it".
The process of operant conditioning is integrally related to reinforcement, because it is through reinforcement that operant behaviour is strengthened. Conditioned reinforcers are also called "secondary reinforcers". It is through conditioned reinforcers that the organism learns the behaviour by which it operates on its environment. A reinforcer can also be artificially applied, but it produces a 'natural' or physical consequence of behaviour in the form of a response. When a response is no longer reinforced over repeated performances, the process of 'extinction' appears. All the accessory processes due to reinforcement, non-reinforcement or delayed reinforcement have been discussed in the previous section.
A reinforcer which serves an obvious biological function is called a ‘primary reinforcer’. A stimulus which accompanies or slightly precedes such a primary reinforcer may, therefore, take on the power to reinforce. Such stimuli are called ‘conditioned reinforcers’. These are the secondary reinforcers which are the effect of learning and are acquired.
Schedules of Reinforcement:
A caution is needed in the application of reinforcement to produce a desired response. The reinforcement of operant behaviour in ordinary life is not regular and uniform. Therefore, in order to acquire desired responses, "intermittent reinforcement" is necessary, not only under laboratory conditions but also in real-life situations.
Skinner's use of intermittent reinforcement under controlled laboratory conditions led to the discovery of two main classes of intermittent schedule, viz. 'interval schedules' and 'ratio schedules'. In an interval schedule governed strictly by time, known as a "fixed interval schedule", food becomes available to the pigeon only after the lapse of a designated or fixed interval of time, measured from the preceding reinforcement or from the onset of a "trial stimulus".
This arrangement controls the number of reinforcements delivered to the animal per hour. Fixed interval (FI) schedules produce lawful and orderly results. Sometimes FI schedules are replaced by 'variable interval' (VI) schedules, in which a range of intervals, from very short to very long, is used in random, variable order. VI schedules show that average performance is remarkably stable and uniform, and they are found to be unusually resistant to extinction.
Anderson and Faust (1975, ibid) summarize the use of intermittent schedules and state: "If an intermittent schedule of reinforcement is gradually introduced, a pigeon can be made to peck a key at a higher rate than it will maintain under continuous reinforcement. Perhaps more important because of its practical implications, intermittent schedules sustain performance indefinitely with little reinforcement. If a schedule has predictable features, then performance will be periodic, its exact features determined by the particular contingencies of reinforcement that are in effect. If, on the other hand, the pigeon is reinforced on a variable and, therefore, unpredictable schedule, remarkably stable rates of performance can be sustained. The general conclusion to be drawn from the research on schedules of reinforcement is that the rate and consistency of an organism's performance are precisely controlled by the frequency and contingencies of reinforcement".
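The difference between fixed and variable interval schedules can be illustrated with a minimal simulation (a sketch with invented parameters, not a description of Skinner's apparatus; the function `run_schedule` and its arguments are our own):

```python
import random

def run_schedule(intervals, response_times):
    """Count reinforcements delivered under an interval schedule.

    `intervals` is the sequence of waiting times (seconds) that must
    elapse before a response is next reinforced; `response_times`
    are the moments at which the subject responds. A response is
    reinforced only if it occurs after the current interval has
    elapsed since the last reinforcement.
    """
    reinforced = 0
    last_reinforcement = 0.0
    idx = 0
    for t in sorted(response_times):
        if idx < len(intervals) and t - last_reinforcement >= intervals[idx]:
            reinforced += 1
            last_reinforcement = t
            idx += 1
    return reinforced

# Fixed interval (FI 60): every interval is the same length.
fi = [60.0] * 10

# Variable interval (VI 60): intervals range from short to long,
# in random order, averaging about 60 seconds.
random.seed(0)
vi = [random.uniform(5.0, 115.0) for _ in range(10)]

# A subject responding once per second for 10 minutes earns at most
# one reinforcement per elapsed interval under either schedule.
responses = [float(t) for t in range(1, 601)]
print(run_schedule(fi, responses))  # 10 reinforcements in 600 s
print(run_schedule(vi, responses))
```

Under either schedule the subject earns at most one reinforcement per interval, which is why interval schedules control the number of reinforcements delivered per hour regardless of how fast the animal responds.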
The reinforcements used in the laboratory led Skinner to deduce operant strength in the animal learning process. The various schedules aided the operation of instrumental conditioning and thus contributed to the methods of primary learning. The success of the laboratory learning process guided Skinner to a procedure he called functional analysis, which we have discussed before.
Nevertheless, a functional analysis is concerned with the lawfulness of relationships and the manner in which these relationships fluctuate under specified conditions; reinforcement and the various schedules are thus a very important class of events in the functional analysis of behaviour, expressing how reinforcement and its scheduling affect operant strength. Two other such classes of events are 'drives' and 'emotions'.
Drive, basically, is something which pushes an animal or a human into action. Drive, therefore, is functional and operational. Skinner used the term drive to mean a set of operations (such as withholding food for a certain number of hours, or reducing the organism's weight) which have an effect upon the rate of responding to a stimulus. A drive, according to Skinner, is not a stimulus, a physiological state or a mental state, but refers to certain classes of operation and may be called a behavioural state which is affected by reinforcement.
Drive, according to Skinner, is dependent on deprivation; for example, hours of food deprivation are important in determining the rate of responding in a Skinner box, and are thus related to reinforcement (in terms of the length of the time interval) in eliciting operant conditioning. Experimental studies made by Skinner with the Skinner box illustrate the kinds of relationship between 'drive' and 'operant conditioning'. Deprivation of food is the medium through which these relationships are established.
Deprivation involves withholding something for a period of time. It is the condition in which a reinforcer is denied or unavailable, so that activity or behaviour is restricted for the time being. Consequently, deprivation increases the efficacy of the appropriate reinforcer, inducing drive and thereby producing an increase in alertness and restlessness.
The effects of deprivation appear when the reinforcer has been withheld, releasing more activity; drive then spurs further activity. Deprivation is opposed to satisfaction, which, on the other hand, leads to a cessation of activity. Drive itself, therefore, is not a stimulus but a state that governs the operant effectiveness of a reinforcer.
In the same manner, Skinner advances his view of the state of emotion, not as a class of responses but as a set of operations. Emotion functions by causing changes in response. He emphasized that emotional responses can be conditioned (like Watson) and called them 'conditioned emotional responses' (CER), a phenomenon he demonstrated with electric shock suppressing appetitive behaviour. The same phenomenon (CER) had also been demonstrated by Pavlov in advancing his theory of classical conditioning.
Most of the later psychologists, however, thought that CER is due to motivational competition between conditioned anxiety and appetitive motives.
Theory # 5. Hull’s Deductive Theory:
Hull's theory of learning is the most ambitious of the connectionist theories we have discussed so far. Clark L. Hull, a professor at Yale University, was the most influential learning theorist of his time. Originally an engineer who later became a psychologist, he made effective use of his engineering training and engineering outlook in his later formulation of psychological theories.
His engineering make-up was evident in his desire to construct an 'elaborate, formal, precise structure of psychological theory'. In him we see the full flowering of the connectionist reinforcement tradition in the development of his hypothetico-deductive system. The system he followed was strict behaviourism and, therefore, falls into the family of Guthrie and Skinner; the three combined to apply force and power to Watsonian behaviourism. Hull's theory is fundamentally mechanistic, totally devoid of consciousness, but with some exceptions. Hull's central concept in learning theory is 'habit'.
He derived most of his information about habit from his experiments with conditioned responses. His theory of learning has two highly significant characteristics, 'elegance' and 'drive-reduction', each with distinctive features. Hull's theory conforms to the requirements of science and logic, which was the demand of his time, and so gained fame as 'elegant'.
Secondly, Hull's theory of learning is a drive-reduction theory, based on the assumption that the life process is tension-producing and that the organism's endless effort to reduce this tension is learning to compromise with the processes of living. In the process, "the organism finds itself in disequilibrium with its environment, that is, it finds itself deprived of something it needs". As an illustration, we can cite the physiological need for food (hunger is the drive here). Drive then becomes the "tension state" associated with the need.
In order to fulfil the need, the drive instigates the organism to become active; the drive is 'energized'. The movement and the energized activity produce their own stimuli and their own responding (instrumental conditioning), satisfying the need and thereby reducing the drive. The response here becomes reinforced, and the reinforcement of a response, since it is tension-reducing, causes it to be learned.
Basic elements in Hull’s theory:
A. Hull's theory is essentially a stimulus-response theory. But since the stimulus-response bond occurs in an observable situation, causing a change in the organism, Hull postulates, firstly, certain intervening variables, which he called symbolic constructs. These intervening variables are not new constructs but may be considered an elaboration of Woodworth's S-O-R concept and its different interpretations. S-O-R signifies that the stimulus (S) affects the organism (O), and that what happens as a consequence, the response (R), depends upon O as well as S.
The stimulus-response theory under experimental conditions considers only the input-output relationship, input being the environmental influences on the organism and output being the organism's response. But in between there remain all sorts of contingencies of reinforcement, such as deprivation schedules, prior practice and others. These variables are capable of producing different responses, and an understanding of their effects would help us to deduce new phenomena.
B. Reinforcement, the primary condition for habit formation: Hull emphasized the role of reinforcement as the primary condition of habit formation. Hull used the contribution of reinforcement in terms of Thorndike's law of effect rather than in the Pavlovian way. This means that he considered reinforcement as it operates in instrumental conditioning. The reinforcement Hull considered was a primary reinforcing state of affairs leading to drive reduction.
Gradually Hull shifted his original position from primary reinforcement to secondary reinforcement as the latter in instrumental conditioning came into greater and greater prominence in reinforcement theories. In considering Hull’s theory of reinforcement it should be noted that Hull had identified “drive” and “drive-stimulus” to convey the same idea and “drive-stimulus reduction” as similar to “need-reduction”.
C. Anticipatory goal responses: Most of the primary behavioural laws in Hull's system were derived from either classical or instrumental conditioning (learning under the control of reinforcement), but they are not confined to simple conditioning; they deal more with discriminatory learning, maze learning, all forms of verbal and rote memorization, tool using and so on. Hull proposed a number of intermediate mechanisms derivable from the basic laws of his system. One such intermediary mechanism is the notion of anticipatory goal responses.
He assumed that the stimuli present at the goal stage were present at the initial stage too. These stimuli include the internal drive stimuli, environmental stimuli present both earlier and during reinforcement and persisting up to the goal state, and also the stimuli aroused by the organism's own movements.
Hull assumed that all of these stimuli become conditioned to the goal response. These anticipatory goal responses occur before the goal is reached and are considered fractional goal responses. These fractional antedating goal responses (designated by Hull as rg's) are very important integrators in Hull's system. Guthrie used the same concepts in formulating his theory, but not as explicitly as Hull did.
The fractional anticipatory responses give rise to stimuli (sg) which can be conditioned (because they are produced by the responses rg) to differential responses, and so aid in eliciting them. These rg-sg mechanisms are used by Hull in describing the further mechanisms of 'secondary reinforcement', 'the gradient of reinforcement' and the 'habit-family hierarchy'; sg, according to Hull, can be a mechanism of wider generality in addition to rg, and is capable of becoming connected to earlier stimuli in an instrumental sequence.
Hull established his theory through deductive reasoning: a formal system built on adequately defined terms and certain basic postulates.
The basic postulates of Hullian Theory:
The postulates:
The original postulate set of Hullian theory is known as the mathematico-deductive theory of rote learning.
Hull, however, organized and classified the postulates in 1943 in a more systematic way, which he states as follows:
A. The external cues which guide behaviour, and their neural representation:
Postulate 1:
Afferent neural impulses and the perseverative stimulus trace.
Stimuli impinging upon a receptor give rise to afferent neural impulses which rise quickly to a maximum intensity and then diminish gradually. After the termination of the stimulus, the activity of the afferent neural impulse continues in the central nervous system for some time.
Postulate 2:
Afferent neural interaction.
Afferent neural impulses interact with other concurrent afferent neural impulses in such a manner as to change into something partially different. The manner of change varies with every impulse or combination of impulses.
B. Responses to need; reinforcement and habit strength:
Postulate 3:
Innate responses to need.
Organisms at birth possess a hierarchy of need-terminating responses which are aroused under conditions of stimulation and drive. The responses activated by a given need are not a random selection of the organism's responses but are those more likely to terminate the need.
Postulate 4:
Reinforcement and habit-strength. Habit-strength increases when receptor and effector activities occur in close temporal contiguity, provided their approximately contiguous occurrence is associated with primary or secondary reinforcement.
C. Stimulus equivalence:
Postulate 5:
Generalization.
The effective habit-strength aroused by a stimulus other than the one originally entering into conditioning depends upon the remoteness of the second stimulus from the first on a continuum, in units of discrimination thresholds (just noticeable differences).
D. Drives as activation of responses:
Postulate 6:
Drive stimulus.
Associated with every drive is a characteristic drive stimulus whose intensity increases with the strength of the drive.
Postulate 7:
Reaction potential aroused by drive.
Habit-strength is sensitized into reaction potential by the primary drives active at a given time.
E. Factors opposing responses:
Postulate 8:
Reactive inhibition.
The evocation of any reaction generates reactive inhibition, a disinclination to repeat the response. Reactive inhibition is spontaneously dissipated in time.
Postulate 9:
Conditioned inhibition.
Stimuli associated with the cessation of a response become conditioned inhibitors.
Postulate 10:
Oscillation of inhibition. The inhibitory potential associated with every reaction potential oscillates in amount from instant to instant.
F. Response evocation:
Postulate 11:
Reaction threshold. The momentary effective reaction potential must exceed the reaction threshold before a stimulus will evoke a reaction.
Postulate 12:
Probability of reaction above the threshold. The probability of the response is a normal function of the extent to which the effective reaction potential exceeds the reaction threshold.
Postulate 13:
Latency. The more the effective reaction potential exceeds the reaction threshold, the shorter the latency of response.
Postulate 14:
Resistance to extinction. The greater the effective reaction potential, the more unreinforced responses of striate muscle occur before extinction.
Postulate 15:
Amplitude of response. The amplitude of responses mediated by the autonomic nervous system increases directly with the strength of the effective reaction potential.
Postulate 16:
Incompatible responses. When reaction potentials to two or more incompatible responses occur in an organism at the same time, only the reaction whose effective potential is greatest will be evoked.
These 16 postulates form the basis of Hull’s theory of learning. His systematic behaviour theory involves the system as a chain of symbolic constructs.
Six major symbolic constructs, inferred as intervening variables, may be identified as follows:
1. Reinforcement:
Habit strength (sHr) is the result of reinforcement of a stimulus-response connection in accordance with its proximity to need reduction (Postulates 3 and 4).
2. Generalization:
Generalized habit strength (sHr) depends both upon direct reinforcement of an S-R connection and upon generalization from other similar S'-R' habits (Postulate 5).
3. Motivation:
Reaction potential (sEr) depends upon the interaction of habit strength and drive (Postulates 6 and 7).
4. Inhibition:
Effective reaction potential (sEr) is reaction potential as reduced by reactive inhibition and conditioned inhibition (Postulates 8 and 9).
5. Oscillation:
Momentary effective reaction potential (sEr) is effective reaction potential as modified from instant to instant by the oscillating inhibitory factor associated with it (Postulate 10).
6. Response evocation:
Responses are evoked if the momentary effective reaction potential is above the reaction threshold. Such responses may be measured according to the probability of reaction, latency of reaction, resistance to extinction, or amplitude (Postulates 11 to 16).
From the above statements it can be summarized that learning, according to Hull, depends upon the contiguity of stimulus and response, closely associated with reinforcement defined as need reduction (or, in secondary reinforcement, associated with a stimulus that has itself been associated with need reduction).
The growth of learning is based upon the increment of habit strength with each reinforcement, each increment being a constant fraction of the amount remaining to be learned.
This can be referred to the fractional anticipatory responses giving rise to stimuli (sg) when the goal is partially reached, releasing a number of environmental stimuli persisting from both earlier and during reinforcement. Further, the upper limit M of the association between S and R depends on the amount of reward (need reduction) and the delay of reward.
The delay concerns the temporal relationship between CS onset and the UR in classical conditioning. Hence, basically, Hull's theory is an associationistic theory, association by contiguity, similar to Guthrie's.
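Hull's constant-fraction growth rule can be sketched numerically (the parameter values here are arbitrary, chosen only for illustration): each reinforcement adds a fixed fraction of the distance still remaining below the upper limit M, which produces exactly the negatively accelerated learning curve discussed earlier.

```python
def habit_growth(n_trials, M=100.0, f=0.2):
    """Habit strength after each of n reinforced trials, where every
    reinforcement adds a constant fraction f of the amount still
    remaining below the upper limit M (Hull's growth assumption)."""
    h = 0.0
    history = []
    for _ in range(n_trials):
        h += f * (M - h)          # constant fraction of what remains
        history.append(h)
    return history

curve = habit_growth(10)
# Successive increments shrink, so the curve is negatively
# accelerated: it approaches M but never exceeds it.
print([round(h, 1) for h in curve])
```

In closed form this rule gives h(n) = M(1 - (1 - f)^n), a curve that rises steeply at first and flattens as practice advances.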
Drive:
The next important role in Hull’s theory is the role of Drive, because:
(1) Primary reinforcement is not possible without some drive, internal or external, and consequently secondary reinforcement cannot be evoked either. Primary reinforcement requires a strong drive (D) to enable the subject to learn; otherwise secondary reinforcement will not occur, because secondary reinforcement can only be evoked when the association between the stimulus and primary reinforcement has been established.
(2) Without drive there could be no response, for drive (D) activates habit strength into reaction potential. Hull assumed that drive multiplies habit strength, so that in a zero-drive state no sEr could exceed the reaction threshold.
(3) Again, without the distinctiveness of the drive stimulus (SD), there could be no regulation of habits by the need state of the organism; the organism would not learn to select the response appropriate to its requisite need state.
Hull provides an equation expressing the relationship among reaction potential, habit strength, and drive, based on the assumption that drive interacts with habit strength in a multiplicative fashion to produce reaction potential.
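A minimal sketch of that multiplicative relation (the standard reading of Hull's equation as sEr = sHr × D; the function and argument names are my own):

```python
def reaction_potential(habit_strength, drive):
    """Hull's multiplicative combination: sEr = sHr x D.
    With zero drive, even the strongest habit yields zero reaction
    potential, so no response can exceed the reaction threshold."""
    return habit_strength * drive
```

The multiplicative form, rather than an additive one, is what lets Hull deduce that a satiated (zero-drive) animal does not perform even a response it has thoroughly learned.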
Inhibition:
Hull's inhibitory postulates were not very original; he deduced them from the physiology of fatigue and from Pavlov's ideas about "internal" or conditioned inhibition generated by conditions of non-reinforcement, specifically during extinction trials. For Hull, as for Pavlov, conditioned inhibition (sIr) was a learned response: an active, learned opposition to a particular response.
Hull, of course, attempted a drive-reduction interpretation of sIr (the inhibition postulate). He thought that an effortful reaction (R) generates reactive inhibition (IR), an aversive drive akin to fatigue; stopping or quitting the response would then be reinforced by the immediate reduction of that fatigue.
The process suggests that sIr is really based on "an associative habit for the stopping of R" (reaction). Hull drew this conclusion rather hastily and could not establish it equation-wise, as he had successfully done in the case of habit and habit strength. Moreover, subsequent conceptual and empirical studies by later psychologists strongly attacked Hull's notion of inhibition.
Hull, however, ignored such criticisms and maintained his contention till the end, when in 1952 he put forward his theory and interpretation as a final behaviour system.
Derived Intermediate Mechanisms:
Hull assumed that the behaviour system is non-physiological, complex due to environmental influences, and analyzable into simpler mechanisms. Hence Hull's system has been classified as a "reductive" system, in that more complex phenomena are deduced from simpler and more basic phenomena and relationships, and are thus "reduced".
By being reduced they become so different and simple that sometimes they can hardly be recognized as following from the postulates.
In order to bridge the gap, Hull assumed in the postulates and corollaries the presence of some derived intermediate mechanisms which would also explain the connection between “laboratory experiments and the more familiar behaviour of the organisms adapting to a complex environment”.
These mechanisms, he assumed, would facilitate the emergence of many more varieties of behaviour. Two such intermediate mechanisms referred to by Hull to explain complex learning behaviour were the "gradient of reinforcement" (the goal gradient) and the "habit-family hierarchy".
The goal gradient is the term Hull used to describe the observation that an animal speeds up its activity as it nears a goal. It also refers to the observation that when an animal is learning a goal-directed sequence, the aspects closer to the goal are learned first. A goal response can be achieved by combining a "short primary gradient" with "secondary reinforcing stimuli" scattered along the route towards the goal.
From the observation that "a long chain of behaviours will be reinforced and strengthened to a lesser degree than will behaviour components closer to the goal", Hull and Spence derived the goal-gradient.
The principle behind it was that responses nearer to the goal would be more strongly conditioned than those further removed. Hull's "goal-gradient" is the spatial counterpart of the time gradients involved in classical conditioning experiments.
Habit-Family Hierarchy:
Another mechanism, the habit-family hierarchy, is an important means of learning deduced by Hull, not originating in the postulates but serving as an intermediate mechanism. It is derived from a more basic principle of learning and carries great weight in the deduction of further behaviour phenomena. It is a phrase coined by Hull to refer to the mechanism by which the organism in a complex environment selects one response in preference to another.
Hull found that the natural environment of learning is often not simple: it consists of a number of alternative means, and the correct path has to be chosen for the organism to reach the goal. How to handle these complex problems is often left to the organism to find out. A given complex learning environment may provide multiple routes between a starting point and a goal.
The organism learns alternative ways of moving from a common starting point to a common goal position where it finds need-satisfaction. These alternatives constitute a family of equivalent responses—called a habit-family—because of an inferred integrating mechanism.
The most suitable response in this situation has to be selected from this family hierarchy. The likely response will be the one that heads the habit-family hierarchy. If this is blocked, number two in the hierarchy is elicited. The most suitable response acquires the greatest habit strength and is integrated into the habit-family.
Hull describes the process as an inferred integrating mechanism. Integration into a family hierarchy takes place by way of the fractional antedating goal reaction present in each alternative response as it is performed. The fractional antedating goal reactions give rise to stimuli (sG) to which all overt responses are conditioned, possessing a derived gradient of reinforcement.
They also constitute alternative means and compose a family with a hierarchical arrangement. The means now become responses, and those responses more strongly conditioned (by secondary reinforcement) to sG than the others are the ones chosen.
Hull, in 1937, writes that "if one member of a habit-family hierarchy is reinforced in a new situation, all other members of the family share at once in the tendency to be evoked as reactions in that situation. This makes possible the explanation of response equivalences and other appropriate reactions in novel and problematic situations, such as those found in insight and reasoning experiments". The habit-family hierarchy thus forms a chain of operative responses: if one is blocked, the next in the hierarchic order takes its place.
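The selection rule of the habit-family hierarchy (take the most strongly conditioned alternative and, when it is blocked, fall back to the next in order) can be sketched as follows; the route names are invented for illustration:

```python
def select_response(habit_family, blocked):
    """habit_family lists alternative responses ordered by habit
    strength, strongest first. The response at the head of the
    hierarchy is elicited; if it is blocked, number two is elicited,
    and so on down the hierarchic order."""
    for response in habit_family:
        if response not in blocked:
            return response
    return None  # every alternative blocked: no response evoked

# Hypothetical family of equivalent routes to a common goal.
routes_to_goal = ["direct path", "short detour", "long detour"]
```

The fallback behaviour is the point of the mechanism: blocking the preferred route does not abolish goal-directed behaviour but merely shifts it to the next member of the family.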
Hull's comprehensive, systematic deductive theory of learning opened up far-reaching vistas for explaining problem learning, evoking insightful thinking, and learning by reasoning to solve complex situations. It cast immense influence on the development of social learning theories and their applications in school situations.
Outlines of Social Learning Theories (Non-Associative):
The dictionary meaning of social learning is the process of acquiring knowledge by observation or imitation of the various patterns relevant to an individual's interaction with his group or society.
Socialization means the process by which an individual learns the rules of the society. More specifically, it refers to the learning of patterns of behaviour expected of one by society as a whole and by the segments of society (sex, race, religion, culture, social background, etc.) of which one is a part. The deliberate efforts and processes which help such learning constitute the content of social learning theories.
Social learning theory is a form of cognitive learning different from Skinnerian and other behaviouristic contentions, and it is therefore not dependent on the principle of conditioning alone. Social learning theory was advanced systematically by Bandura and Mischel, and by N. E. Miller, Dollard, and Rotter. They advanced their theoretical positions through a balanced synthesis of cognitive psychology with the principles of behaviour modification.
They emphasized the development of personality and of social and personal competences, which evolve out of the social conditions within which learning occurs. The main issue of this theory, as has already been pointed out, is that associationism alone is not tenable in human learning, and that classical and operant conditioning are not sufficient to explain the initial learning of all behaviour.
They emphasized that a learner is an observer who imitates a model manifesting behaviour that is new to the learner. Their primary contribution is thus the explanation of learning new behaviours or responses through observation and imitation, which they refer to as vicarious learning.
Ausubel, on the other hand, refused to accept that human learning can be explained by the principle of conditioning. He believed that humans assimilate, relate, organize, and store information for further use. His theory is known as "Subsumption Theory", emphasizing the function of the central nervous system in learning.
Bandura’s observational learning theory indicates that learning would necessarily show the following effects to the observer:
(a) A modeling effect.
(b) A disinhibitory effect.
(c) An eliciting effect.
Cumulative Learning:
Gagne (1970) developed a model of "cumulative learning" describing eight types of learning, ranging from the simplest associations between two things or events through learning to solve problems.
Gagne accepts associative learning for elementary learning, but he thinks that complex concept learning, the learning of principles or rules, and problem-solving cannot be explained through conditioning principles alone.
A Model of Cumulative Learning:
Gagne has synthesized knowledge concerning the various types of learning and has formulated a model of cumulative learning. According to Gagne, the effects of learning are cumulative: the higher-level skills that man possesses are gradually developed.
Individuals first learn capabilities that build successively on one another and gradually develop, helping the individual acquire knowledge. While working out cumulative learning, Gagne recognized that, to foster these capabilities, the identification of eight related types of learning is necessary.
The eight types of learning range from the simplest, signal learning, to the most complicated, problem-solving.
Gagne has arranged these eight types in a hierarchy of difficulty, with signal learning at the lowest level and problem-solving at the highest.
Gagne not only classified different types of learning but also considered the conditions, internal and external, of the learner and of the learning situations.
Brief description of each type of learning and the related conditions are presented below:
Type 1: Signal Learning:
Signal learning corresponds directly to classical conditioning. Learning to respond to signals is a common form of reflexive behaviour, in both animals and human infants. Signal learning uses the same mechanism as classical conditioning: two stimuli occurring together.
The primary condition involved in signal learning is the temporal proximity of the conditioned and unconditioned stimuli. Gagne explains signal learning in terms of contiguity and repetition.
Type 2: Stimulus-Response (S-R) Learning:
Precise movements of the muscles in response to specific stimuli that have become discriminated from other stimuli are the main product of S-R learning. This is also associative learning, but the associative connection functions as an operant and is, hence, a case of instrumental conditioning.
The condition required for S-R learning is temporal contiguity between the stimulus and the response. In addition, the desired response is reinforced to ensure that only the desired response is made to a particular stimulus; consequently, other responses are extinguished.
Types 3 and 4: Motor Chains and Verbal Chains:
In motor chaining, two or more separate motor responses may be combined, or chained, to develop a more complex skill. In verbal chaining, two or more verbal responses, such as words, are combined to form an association, such as pepper-salt or father-mother.
The processes of contiguity and repetition are essential situational conditions of chaining. The internal conditions for forming chains require that already-learned S-R responses be available in the proper sequence and be capable of forming chains. Verbal chaining will also be strengthened by reinforcement, which must immediately follow the chained response.
Type 5: Discrimination Learning:
Complex discrimination learning also follows the same principle as S-R learning, and proper acquisition of S-R chain responses will lead to mastery of complex discrimination learning. The essential situational conditions are the same as for S-R learning, i.e. contiguity and repetition.
Type 6: Concept Learning:
Similarly, concept learning has been explained by Gagne following the same S-R principle. But the important feature of concept learning is that, having attained a concept, one is able to identify other examples of the concept without further learning. This is typical of concept learning alone; none of the other types we have seen possesses this characteristic.
In the earlier types, each new association, chain or discrimination must be learned as and when it is encountered. To attain a concept the individual must have some prerequisites, such as capability of discrimination and capability of making common response. For example, to learn the concept “dog” the child must discriminate between a dog and other objects and must perceive the commonality of two dogs.
The learned discriminations between dogs and other organisms and things must be available to the learner in close temporal contiguity, so that they can be reinstated to form the concept of "dog". Some repetition of the learning sequence, as well as confirmation of the correct responses, is also essential for learning concepts. Concept learning is very important for gaining organized knowledge.
Type 7: Rule Learning:
Gagne defines a rule, in S-R terminology, as a chain of two or more concepts. Rule learning is demonstrated by responding in accordance with a stated rule; it is differentiated from rote learning, which requires making exactly the same responses as given in the statement presented for learning. It is presumed that rule learning helps problem-solving in future. Essential to rule learning are possession of the concepts embodied in the rule and the capability of making the responses specified by the rule. This means that rule learning provides the capability for forming verbal chains, a simple associative process.
Type 8: Problem-Solving:
This is the culmination of all the forms of learning, in the sense that when an individual learns to solve a problem his learning reaches the highest stage of gathering knowledge. It enables an individual to acquire new ideas independently and is itself an achievement on the part of the learner. But learning to solve problems is a gradual process involving defining the problem, suggesting hypotheses, and verifying the final hypothesis to achieve a solution.
Klausmeier et al. describe the problem-solving type of learning as starting with the needed rules and requiring the ability of the individual to recall them and apply them to the problem. These two are the essential prerequisites of problem-solving; they are the internal conditions.
The external conditions include making the component principles available to the learner in close temporal contiguity, aiding the learner to recall the principles, and providing cues to guide thinking. These then amount almost to verbal instructions to the learner.
Much has already been stated about concept learning in the development chapter of this book; the reader can refer to the related materials in that chapter, if necessary. The interested reader may also refer to Gagne's experiments with several curriculum projects applicable to school learning, involving all eight types of learning mentioned above.
The basic learning theories have been discussed briefly, as we consider learning from the behavioural point of view. In recent times there has been a shift of emphasis from the mere formulation of laws to the techniques involved in school learning. School learning is not only a learning process; it is a teaching-learning process. Therefore, more emphasis is laid on meaningful learning than on mere theorizing. Textbooks, reference works, computers and other mechanical devices, sound, films, and other instructional materials are used extensively in schools to present information to students.
As a result, new theories have been presented by modern educational psychologists, namely the "Information-Processing Theories" of behaviour. They deal with meaningful learning in classroom situations and are interested in working with mechanical devices.
The modern trend in the teaching-learning situation is the extensive and intensive use of new techniques in psychological theorizing. Consequently, in formulating theories, psychologists started using the format of a "program", arranging learning materials and methods as programmed learning that runs on high-speed computers. The emphasis of this theorizing has shifted from describing the nature of behaviour to the nature of processes or "actions".
The aim of this enterprise is to get such a programmed computer to go through a series of actions which, in some essential ways, resemble or simulate the cognitive and behavioural actions of a real subject performing the task. The students study the materials and try to relate the new information to what they already know.
Two dimensions of learning processes are fundamental in this theory (cognitive and meaningful learning theory): "reception learning" versus "discovery learning" on the one hand, and meaningful versus rote learning on the other, both furnishing the cognitive structure with organized sets of facts, concepts, and generalizations. In reception learning, all that is to be learned is presented in final form, while in discovery learning the learner obtains some of the information independently. This information is then integrated into the existing cognitive structure, reorganized, and transformed to produce a new or modified cognitive structure.
The information-processing theory uses reception and discovery process through programmed model. The first stage is characterized by reception and discovery learning, in which information to be learned becomes available to the learner. In the second stage, the learner acts on the information in an attempt to remember it so that it will be available thereafter. If the learner attempts to retain the new information by relating it to what is already known, meaningful learning occurs.
The same approach has been adopted in programming the computer, so that it goes through the same steps (after the model) that one would follow in making a verbal deduction from the theory. The computer does the job mechanically, in a short time and with no errors. The computer programs are generally problem-solving programs and help simulate learning. They demonstrate how past learning can be utilized in solving new problems, as described in earlier paragraphs. The information-processing theory categorically has a cognitive bias.
The historical antecedents of this theory show that both the technique and the mode of thinking have been imported into psychology from engineering, introducing a new line of research and work—a practical one—known as educational technology. Educational technology uses a number of charts, models, visual aids and computers to aid information processing.
As the working of information processing is computer-dependent, learning also uses the same descriptive language as the computer: the computer-simulation approach. The stimuli, data, and instructions are all called "information" and are input, or read into, the computer; the computer reads out, or exposes, outputs either in print or in any other mode the machine permits.
This is the information-processing mechanism. For the operation of this mechanism it is essential, first of all, to prepare the learning programmes. Verbal learning, concept learning, and meaningful learning are thus programmed in such a way that the theorists, while devising these working processes, remain cautious and concerned more with describing and modelling experimental results than with actually producing an intelligent machine.
Simulation of how an actual subject might learn and improve his performance is not the goal; how effective learning can take place through information processing is the ultimate goal, the former being only the first step towards it.