

If you've come this far down the ABOUT tab, then you're probably looking for some information on the training methods, techniques, and "style" advocated on this site.  You may want to find out more about my training philosophy...


You may want to know-- Do I suggest or solely utilize "positive reinforcement" or "positive-only" methods for teaching behaviors?  Do I use a clicker?  Do I promote the use of aversives or utilize leash "corrections" in behavior modification work?  Do I think that Obedience Training is the best way to address existing behavioral issues or to prevent them from occurring in the first place?  Would I ever use a remote training (electronic) collar on a dog??

If you are a trainer, or have been indoctrinated by one, then you've likely come to this page to determine if your own training "philosophy" aligns with what you'll find here.  You might skim for key words that hint at what methods, tools, or techniques are discussed.  If your beliefs appear to align with mine, or if you do not have established beliefs just yet, then great, you can simply follow what is put forth.  


This way you can make a quick decision without having to actually think at all.  If, on the other hand, your beliefs do not appear to align with mine, then you can summarily dismiss anything of value offered herein and be on your way.  


Well I can just about guarantee that if you come here with the expectation of philosophy-checking, you will not find a match here...


You probably want to know my 'philosophy' with regard to training-- which methodology, approach, and/or the specific techniques I use when working with dogs-- and see if that aligns with your own belief system before you proceed further.  Well if that's why you are here, then you will likely be disappointed no matter where on the training methods spectrum you might reside.

If on the other hand you come here seeking a better theoretical understanding of learning and conditioning (at least in non-conceptual learners like dogs), then you have come to the right place.  Or if you are searching for practical and effective training methods, ones that actually work in real-world situations, especially when dealing with difficult canine behavior problems, then you have come to the right place.  Or if you would just like to hear it like it is, without any bias due to blind adherence to some philosophical dogma based on faulty academic learning theories, then you have definitely come to the right place.


Perhaps you are a dog owner or trainer who has come to realize that the training approach(es) that were sold to you don't in fact solve or prevent all behavior issues.  Perhaps you've discovered that in some cases these approaches have made existing behavior problems worse.  If so, then you have also come to the right place.







Lesson Learned


H.E.E.L.S. Guidelines

Well-respected before well-liked.  Dog must respect you, trust you, and then like you, in that order of importance.

Always consider the dog's internal or emotional drive state, and learn to recognize or read it, before acting to train or modify behavior...

Learn how dogs learn; do not follow a simplistic, outmoded and outdated model like "Operant Conditioning" to understand how learning occurs.

If you do not understand why or how a prong collar might be useful for rehabilitating a fearful dog, or why you might need to toss food to a dog that growls or snaps (and why that approach may not reinforce the growling/snapping behavior), or why you might mark and reward a dog the instant it looks away from you in order to reinforce its behavior of looking at you, or if you are not fully aware of the dramatic difference in behavioral outcomes between using food and giving affiliative interactions (affection)...







A Very Brief History and Description of Classical Conditioning


Classical Conditioning was proposed by Russian physiologist Ivan Pavlov during the very late 1800's as a paradigm for understanding simple learning processes (Classical Conditioning is in fact also referred to as Pavlovian Conditioning).  Most people are vaguely familiar with at least some of Pavlov's work.  Pavlov famously conditioned dogs to salivate to the sound of a bell.  He would first condition the dogs by ringing a bell (or metronome) just before food was delivered (meat powder).  Then after several trials, Pavlov demonstrated that the dogs would now salivate to the sound of the bell alone, without the presence (sight, smell, taste) of any food.  The salivation now produced by the sound of the bell was termed a "signal reflex" by Pavlov.  A simple demonstration that dogs get excited when they know they're about to be fed-- genius.

Joking aside, Pavlov performed many types of conditioning experiments, exploring different stimulus pairing protocols and using a variety of different stimuli.  

But the 'salivate-to-the-bell experiments' are most widely remembered and referenced as the paragon of Classical Conditioning.  [For more information, see Pavlov's twenty-two "Lectures" on the Conditioned Reflexes.  They are all well worth the read.  Pavlov made many keen observations on different aspects of conditioning which have been subsequently 'lost' in contemporary teachings of this simple but profound form of learning.]

There are, however, a number of problems that arise when using the Classical Conditioning theory (at least in the way that it is currently taught or generally understood) as a way to explain or describe simple associative learning.  At least three major problems exist with this theory that are relevant to the training of animals.  

Issue #1:  CR-UR Equivalency


The first issue revolves around the supposed equivalency of the CR and the UR, and by extension equivalency of the CS and US.  The CR or Conditioned Response in the Pavlov example above would be the "behavior" of salivation by the dog in response to hearing the sound of the bell.  The UR or Unconditional Response would be the salivation by the dog in response to the close presence of food itself.  The NS or Neutral Stimulus would be the sound of the bell before any conditioning or pairing with the presence of food had occurred.  The CS or Conditioned Stimulus would be the sound of the bell after the initial conditioning trials had occurred (bell - food delivery pairings). And the US or Unconditional Stimulus would be the close presence of food.
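The terminology above can be restated compactly in code.  This is a minimal sketch of the labeling scheme only; the class and names (`Stimulus`, `bell`, `food`, `label`) are illustrative assumptions, not part of any established library or model.

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    name: str
    unconditional: bool  # True for a US (food), False otherwise (bell)

def label(stimulus: Stimulus, paired: bool) -> str:
    """Return the classical-conditioning label for a stimulus,
    before (paired=False) or after (paired=True) bell-food pairings."""
    if stimulus.unconditional:
        return "US"            # elicits the UR regardless of conditioning
    return "CS" if paired else "NS"

bell = Stimulus("bell", unconditional=False)
food = Stimulus("food", unconditional=True)

print(label(bell, paired=False))  # NS: no meaning before conditioning
print(label(food, paired=False))  # US: elicits salivation unconditionally
print(label(bell, paired=True))   # CS: now predicts food
```

Note that only the bell's label changes with conditioning; the food is a US before, during, and after the pairings.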

Salivation to the US (food) occurs unconditionally, regardless of prior conditioning* to the sound of a bell.  Pavlov believed that when the bell became a CS (Conditioned or Conditional Stimulus) following initial bell-food pairings, the behavioral response it would produce, the CR, was equivalent to the UR.  In other words, the dog's response to the bell alone was now the same as the response to the food.  The response to the bell was now what Pavlov called a 'signal reflex' and the response to food (alone) was considered an inborn reflex.  Pavlov stated:


"It is certainly a sufficient argument for making a definite distinction between the two types of reflex and for considering the signal reflex in a group distinct from the inborn reflex.  But this does not invalidate in any way our right logically to term both "reflex," since the point of distinction does not concern the character of the response on the part of the organism, but only the mode of formation of the reflex mechanism."

Pavlov believed that this type of conditioning was the basic building-block of all behaviors, and that complex behaviors were merely a summation of many simple 'connections' like the bell-food associations from his experiment.  Pavlov was correct to believe that this new response to a formerly neutral stimulus was reflex-like, but he mistakenly believed that this response was equivalent to the one observed during food (US) delivery.  For this to be true, the eliciting property of the food (the US) would need to be somehow "absorbed by" or transferred to the CS (the bell) during initial conditioning trials.  This isn't possible in Serial Conditioning procedures, as was the experimental case with his bell-food pairings.  The only way to achieve such transference is through Parallel Conditioning, which will be discussed shortly.  Nonetheless, Pavlov believed that the CR response produced by the bell (CS) alone was the same as the UR response elicited by just the food (US).  And by logical extension, the bell (CS) had come to acquire the same eliciting power as the food (US)-- that the CS had become functionally equivalent to the US.


So there are really two questions here: the first is whether or not the CS and US achieve the same eliciting power after initial conditioning trials.  And the second question and bigger issue at hand is whether or not the CR and UR responses actually become equivalent after conditioning.  Is the "behavior" or response of salivation (which is only a small part of the overall response) that is elicited by the sound of the bell equivalent to the salivation response elicited by the presence of food itself?  In Pavlov's bell-food experiments, it might at first seem that this assumption makes sense.  In both cases, presentation of either the CS or the US results in salivation.  However, the injection of a little common sense, or even just basic observation, quickly calls this suggestion of equivalence into question, and renders such a conclusion invalid.  


Dogs actually respond differently to any CS than they do to a US predicted by that CS.  In Pavlov's famous experiment, salivation may be common to either the presentation of the CS or the US, but that response is only a small part of the overall behavioral response to either the CS or the US.  For instance, dogs exhibit behavioral responses to the US that are different from the responses observed when just the CS is presented.  Dogs conditioned to the sound of a bell for example do not put their heads down and begin to chew and swallow when they hear the bell (CS).  Chewing and subsequent swallowing are certainly observable behaviors that occur when a dog is presented with a bowl of food (US), but no similar or comparable behaviors are observed when just the CS is presented.  Conversely, dogs exhibit behavioral responses to the CS that are not observed when just the US is presented.  Upon hearing the bell (CS), dogs may jump up and down, pace back and forth, whine, bark, etc.  None of these behaviors are generally observed when the US is presented-- the dog is too busy chewing and swallowing the food!
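The observation above can be restated as a toy comparison of response sets.  The response inventories here are taken from the examples in the text (chewing and swallowing to the food; jumping, pacing, whining, barking to the bell) and are illustrative, not exhaustive.

```python
# Response set elicited by the US (food) vs. the CS (bell)
ur_to_food = {"salivate", "chew", "swallow"}
cr_to_bell = {"salivate", "jump", "pace", "whine", "bark"}

print(ur_to_food & cr_to_bell)   # only salivation is shared
print(ur_to_food == cr_to_bell)  # False: the CR is not the UR
```

The intersection contains a single shared component, which is exactly why looking only at salivation made the CR and UR appear equivalent.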

Furthermore, even when considered in isolation, the corresponding parts of the CR and UR responses that appear identical are in fact different.  Salivation responses in the above experiments for example would appear to be the same, whether they occur following the presentation of the CS alone or following the presentation of the US.  But the saliva, it turns out, is different in chemical composition.  This is because salivation is both an anticipatory and a preparatory response to food.  If one were to look at just this response in isolation from all other behaviors, it might be easy to assume that the behavioral response to food and the corresponding response to a (conditioned) bell sound were equivalent.  And from there, one might suggest that the bell had come to acquire the same eliciting power as the food itself.  This just isn't the case at all.  Shortly after Pavlov's experiments were published, Bekhterev did an excellent job of demonstrating that anticipatory behaviors preceding a pending US-- those elicited by a CS-- were in fact not equivalent to those behaviors elicited by that same US alone.  Other researchers after Bekhterev followed suit, and found the same thing to be true-- that the CR and the UR are not equivalent.  However, the notion that the UR and the CR are equivalent was and still is a very enticing idea indeed, and this notion remained "stuck" in the minds of many for decades.  So much so that many college-level textbooks stated this erroneous conclusion as recently as the very end of the 20th century, despite clear and conclusive evidence that this assertion was simply false.  

The CR-UR (and CS-US) equivalency idea still exists today, just in different forms.  For example, the "click" or marker sound used in clicker training is often described as a "secondary reinforcer" of behavior.  The term secondary reinforcer is used to describe stimuli that reinforce behavior only after conditioning or previous pairing with some US.  However the "click" is not any sort of reinforcer of behavior.  It is simply a predictive event, an omen, a discriminative stimulus, etc. useful for the animal because it foretells the coming of some other relevant event (presumably a US as it would be in the context of animal training).  Such a cue produces anxiety or anticipation, or even preparatory or avoidance behaviors.  But these are not the same responses as those elicited by the US itself.  The click or marker sound, as a conditioned stimulus, may produce a response (CR) that is, on the surface or in part, similar to the response (UR) elicited by the US, but they are in fact two very different sets of responses.

As a side note- some researchers have attempted to limit or constrain the definition of a CR to encompass only those conditioned responses that are "similar" to the UR.  That is, the CR response must originate from the same biological effector system as the UR in order to qualify as a CR.  For example, in order for a change in heart rate to be considered a true CR, then heart rate change must be a common component of both the UR and the resulting conditioned response.  Otherwise it might be a part of the conditioned behavioral response, but would not be considered a CR.  This proposal seems to ignore the fact that, in most instances of observed classical conditioning, the CR is not identical or even similar to the UR.  One cannot cherry-pick which components of the resulting conditioned behavior are to be acknowledged and which are to be ignored, and then define the CR in terms of the desired components only.  The CR and the UR are not the same.  This fact, while easily demonstrated and observed, cannot simply be side-stepped by dancing the terminology two-step.  



[Such is a simple example of procedural myopia in science-- focusing one's attention only on data that is congruent with a cherished theory or assumption to the exclusion of all other (contradictory) data.  Researchers should not resort to data sifting, whereby they toss out, ignore, or reject incompatible data and preserve that which remains consistent with their ideals.  This is not investigative science but rather selective validation.]


Bottom line-  The behavioral response elicited by an Unconditional Stimulus is not the same as the response to another (neutral) stimulus that may be observed following Serial Conditioning procedures.  In Serial Conditioning, when a preceding (neutral) stimulus reliably predicts the arrival of an Unconditional Stimulus, a new behavioral response or CR will be created.  This conditioned response or CR is best characterized as an anticipatory and/or preparatory response to the coming US (or perhaps as an avoidance response in the case of a forthcoming aversive US).  The CR should not be considered a response that is equivalent to the UR.  By extension, the Conditioned Stimulus CS does not achieve eliciting equivalency with the Unconditional Stimulus US.  The CS however does acquire eliciting power as a predictive stimulus or discriminative cue, motivating the animal to voluntarily act or involuntarily respond.

Issue #2:  NS then US-- Serial Conditioning is the only viable temporal arrangement of stimuli

According to many "experts" and authors, Serial Conditioning is the only mechanism by which animals can form associations.  That is, conditioning (including instrumental learning!) is possible solely because of the occurrence of some Serial Conditioning process or processes.  For conditioning, that means that in order for a conditioning effect to manifest, the NS must precede the US.  If the NS occurs only during the delivery of the US (resulting in NS - US overlap), then supposedly conditioning will not occur.  Parallel Conditioning, where stimuli temporally overlap, is generally discounted, ignored, and discredited, and regarded by many as an invalid method for association formation.  In fact many of these same authors and experts go so far as to state that conditioning and learning cannot occur through Parallel Conditioning processes-- that such stimulus pairing arrangements are INEFFECTIVE in manifesting a conditioning effect.  I use the all-caps term INEFFECTIVE because that reflects the intensity with which some "experts" will reject the use of conditioning procedures that involve simultaneous stimulus presentations.  (Actually that's exactly how one well-known former dolphin trainer presents parallel conditioning procedures in lectures to her students).  Today, nearly every explanation of learning and conditioning revolves solely around Serial Conditioning models.  

Parallel Conditioning procedures are regarded as irrelevant stimulus pairings that produce no conditioning effect, despite the fact that temporal overlap of stimuli can lead to one of the most powerful forms of associative learning.

So what exactly are Serial and Parallel Conditioning Processes?  The terms Serial and Parallel refer to the spatial and temporal arrangements or occurrences of various stimuli.  For example, if you consider a bell-food experiment similar to Pavlov's, there would be two relevant stimuli-- the bell sound and the food.  You would first ring a bell, and then present food to your dog.  In this case, the two stimuli were presented in series, or serially.  First the bell, then the food-- one stimulus and then the other.  The bell sound would not overlap or be presented at any time while the dog was eating.*  If one stimulus starts and stops before another stimulus begins, we have a serial conditioning procedure.

However if you were to ring the bell at any time while your dog was eating, the two relevant stimuli (the food and the ringing sound) would temporally coincide.  Such a presentation, where two stimuli overlap in time and space, would represent a parallel conditioning process, as the two stimuli occur simultaneously or in parallel with each other.  There are of course other 'hybrid' possibilities here, for instance when one stimulus overlaps with another but for only part of the duration of the other.  When there is temporal overlap, it becomes a bit more complicated-- the potential conditioning outcomes depend on numerous factors, including the start-stop points of one stimulus relative to another; the contingency between the NS and the presence of the US; and the qualitative nature of the US itself (aversive vs. appetitive).
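The serial / parallel / hybrid distinction just drawn can be expressed as a small classifier over start and stop times.  This is a sketch only; the function name and the three category labels are illustrative assumptions.

```python
def arrangement(a_start, a_stop, b_start, b_stop):
    """Classify the temporal relationship between two stimulus intervals."""
    if a_stop <= b_start or b_stop <= a_start:
        return "serial"                  # one ends before the other begins
    if (a_start, a_stop) == (b_start, b_stop):
        return "parallel (compound)"     # exact coincidence in time
    return "hybrid (partial overlap)"    # overlap for only part of a stimulus

# Bell at t = 0-2 s, food at t = 3-10 s: the usual serial case
print(arrangement(0, 2, 3, 10))    # serial
# Bell rung only while the dog eats: strict overlap, a compound stimulus
print(arrangement(3, 10, 3, 10))   # parallel (compound)
# Bell starts before the food but continues into the meal
print(arrangement(0, 5, 3, 10))    # hybrid (partial overlap)
```

As the text notes, the hybrid case is where outcomes get complicated: the classification alone says nothing about which association, if any, will form.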

*  Actually, in Pavlov's original bell-food (meat powder) experiments, he did include stimulus overlap.  In fact in his original lectures, he made mention of the importance of the temporal overlap of stimuli in the conditioning of certain associations.  Pavlov stated that there should be an interval where the bell sound overlapped with the food presentation for maximal conditioning effect: 

"The fundamental requisite is that any external stimulus which is to become the signal in a conditioned reflex must overlap in point of time with the action of an unconditioned stimulus."  


However, the relevance and even necessity of stimulus overlap in conditioning procedures later came into question, and soon a serial-only explanation for all associative conditioning was adopted.

Parallel Conditioning is actually incorporated frequently into training protocols, especially in behavior modification work, and has even been studied to an extent in laboratory settings.  In scientific research, stimuli that strictly overlap temporally and spatially are called compound stimuli.  However, many people, including most dog trainers and so-called behavior experts, simply do not acknowledge that Parallel Conditioning is even possible, or that it is responsible for any observed changes in behavior.  But why?  If it is such a powerful form of conditioning, is a known phenomenon to researchers, and occurs frequently either incidentally or by design, why is Parallel Conditioning so neglected and maligned?  Well the reason for this really stems from Issue #1 described earlier.  Back in the 1900's when it was still believed by many that the CS could achieve equivalency with the US and produce the same response (UR/CR), scientists designed experiments to see which conditioning protocols would be effective in transforming a NS into a CS.  In other words, which presentation order of stimuli would create a CS from a NS that would then elicit the "same" behavioral response as the US?  With the flawed notion of CR-UR equivalency in mind, the assumption at the time was that any presentation of a NS with a US that didn't produce a CR/UR would indicate that conditioning had not occurred.  For example, scientists would vary the timing of the bell sound during the feeding procedure for dogs-- sometimes the bell would ring before food presentation; sometimes the bell would ring only after the delivery of food but during the interval while dogs ate; sometimes it rang only after the dogs had finished eating, etc.  Well it should be obvious by now that a bell sound presented after the beginning of the delivery of food (US), either while the dogs ate or after they had finished eating, would not produce an anticipatory response, or a salivation CR (not a UR!).  You will not see an anticipatory response from an animal if the NS doesn't foretell the coming of the US (or change in its status), so looking for one is (likely) futile.  It was revealed by such experiments that a bell could become a CS if it preceded the delivery of food, and this would occur regardless of whether the bell also overlapped the onset of the US (food) presentation.  But the bell would not become a CS if it solely overlapped the US (food) interval.  From this and other similar evidence, it was concluded that stimulus overlap was not critical for conditioning to occur, and in fact did not lead to any conditioning effect.


But in no way does this mean that conditioning cannot or does not occur when two or more stimuli overlap (Parallel Conditioning).  The conclusions drawn from the results of these experiments were over-generalized to encompass all cases of stimulus overlap.  Unfortunately, this practice of 'conclusion over-generalization' is, in and of itself, fundamentally a mistake.  Also, the design of these experiments was based on a faulty premise (that the CR and UR are equivalent responses) so the results and subsequent conclusions should immediately be called into question.  When looking for a possible conditioned response to measure, it would be wise not to choose an anticipatory one for investigation.  And if anticipation is the desired conditioned response variable to be investigated, great care should be taken in the procedural construction of the experiments, specifically in regards to the change in status of the US after the NS is presented.  Finally, all US stimuli are not created equal, and this certainly is pertinent in regards to the general aversive/appetitive nature of the US.  NS that overlap an aversive US stimulus are more likely to produce conditioning effects than those that overlap an appetitive US.  Direct Association Formation is far likelier when an NS overlaps an aversive US rather than an appetitive one, especially if that NS is present during the entire US interval.  


So what does this all mean?  Animals can and do learn from Parallel Conditioning procedures when stimuli overlap, and this can easily be observed and tested.  The most important potential outcome of any Parallel Conditioning process is the possibility of Transference between stimuli.  That is, if two stimuli overlap in time, the eliciting property of one stimulus can be "given" or transferred to another stimulus that initially lacked any significant eliciting property.  Or, if two stimuli are themselves US stimuli and individually elicit similar UR responses, then conditioning by temporal overlap can enhance the CR response of either stimulus when only one of the stimuli is subsequently presented.

This phenomenon- the direct transference of the eliciting properties of one stimulus to another- is what Pavlov had originally proposed as the mechanism of conditioning when he observed his dogs salivate to the sound of a bell only (CS).
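The Transference idea above can be sketched as a toy numerical model: repeated temporal overlap lets a stimulus with little eliciting value drift toward the value of the stimulus it overlaps.  The update rule, the rate, and the trial count are assumptions made purely for illustration, not an established quantitative model of conditioning.

```python
def overlap_trials(ns_value, us_value, rate=0.25, trials=6):
    """Return the NS's eliciting value after repeated overlap with a US."""
    for _ in range(trials):
        ns_value += rate * (us_value - ns_value)  # value transfers gradually
    return ns_value

# A tone (initially no eliciting value) repeatedly overlapping a US
tone = overlap_trials(ns_value=0.0, us_value=1.0)
print(round(tone, 3))  # the tone has acquired most of the US's eliciting value
```

The sketch only captures the direction of the effect; as the text goes on to note, overlap does not guarantee transference, and aversive US stimuli appear to transfer their properties more readily than appetitive ones.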

A few examples from real world scenarios:

NS + aversive US  (the word "No" paired with an aversive; also, e-stim delivered near an object, person, or area)

aversive US + aversive US  (a collar correction paired with an emotional trigger, such as another dog)

appetitive US + aversive US  (food paired with a nail clip; in fact, serial conditioning protocols may not work at all, at first)

appetitive US + NS  (an emotional/physical sensation paired with its causal agents; or the smell of a ball paired with the sight of the ball itself, until the scent alone elicits excitement-- the premise of "imprinting" in scent detection training)

Finally, it should be noted that Parallel Conditioning procedures do not necessarily result in any form of conditioning-- simple stimulus overlap does not guarantee that conditioning will occur.  Aversive stimuli seem to possess the capacity to more readily transfer their eliciting properties to other temporally/spatially-related stimuli.  However this phenomenon can certainly occur when a pleasant US is paired in parallel with another stimulus.


Further, even if a conditioning effect is established via Parallel Conditioning procedures (stimulus overlap), it may in fact not be due to the formation of Direct Associations via Transference.  Parallel arrangements of stimuli are also capable of producing Predictive Associations, similar to those formed during the non-temporal-overlap procedures of Serial Conditioning.  The possibility of Predictive Association formation from stimulus overlap again largely depends on the start-stop points of one stimulus relative to the other, and on the qualitative (aversive vs. appetitive) nature of the US.



Bottom Line-  Conditioning of two or more stimuli is possible with Parallel Conditioning procedures, where stimulus overlap occurs either only in part, or where stimulus overlap is the only temporal relationship that exists between the relevant stimuli (for example a compound stimulus).  Transference of stimulus properties may or may not occur between the coincidental stimuli; however, the NS (or even a second US) can serve as a predictive cue for the primary US, indicating changes in the state or presence of that US, even when NS - US temporal overlap is the only temporal relationship present.



Issue #3:  NS + US => CS = Classical Conditioning

The third major issue with regard to current Classical Conditioning theory, at least in the way that it is generally presented and taught, involves the NS and the sequence or order in which it occurs relative to the US.  We have just previously discussed the NS and the possibility of its temporal overlap with an ongoing US (Parallel Conditioning).  Here we will examine two aspects of the traditional Classical Conditioning paradigm-- the nature of the "other" stimulus, generally a NS; and the temporal relationship (order of presentation) between this stimulus and another stimulus, a US.  

Classical Conditioning, as it is generally taught, encompasses a form of conditioning best described as Serial Conditioning, whereby Predictive Associations are potentially formed.  Dogs learn, in the temporal sense, that "one thing leads to another" in life.  One event comes to predict the (near) future occurrence of another event.  Dogs learn or become conditioned to the fact that a meaningful association exists between these two now-connected events.  The meaningful association that is formed between these events is qualitatively predictive.

First, let's take a closer look at the nature of the 'first' stimulus in a Classical Conditioning procedure, typically the NS.  Nearly every textbook, behavior expert, or similar authority that discusses Classical Conditioning will use the same formula to describe the process:  They explain that Classical Conditioning begins with a NS, or Neutral Stimulus, and define this as an environmental stimulus or event that has no intrinsic meaning or value to an animal (we'll stick with dogs here).  This NS must precede the US (stimulus overlap is optional but not considered relevant), and a number of these pairings or repetitions is required to obtain a conditioning effect.  Then after multiple such pairings, the NS becomes a Conditioned Stimulus or CS, and the presentation of the CS alone is now capable of eliciting a response that is... (the description of that resultant response varies somewhat-- some sources describe the response as similar to the one produced by the US; a few still erroneously posit that the response is equivalent to the one produced by the US; and others more accurately state that the resulting change is in fact the creation of an anticipatory response.)  
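The textbook procedure just described can be sketched as a simple trial loop.  The linear growth rule and the 0.5 "counts as a CS" threshold are arbitrary illustrative assumptions, not claims about how real conditioning accrues in dogs.

```python
def serial_pairings(trials, rate=0.2):
    """Associative strength of the NS after repeated NS -> US pairings."""
    strength = 0.0
    for _ in range(trials):                   # each trial: bell, then food
        strength += rate * (1.0 - strength)   # grows toward an asymptote
    return strength

for n in (1, 5, 15):
    s = serial_pairings(n)
    status = "CS" if s > 0.5 else "still effectively an NS"
    print(f"{n:>2} pairings: strength {s:.2f} -> {status}")
```

The loop reproduces only the formula's surface claim (NS precedes US, repetitions required, NS "becomes" a CS); as the surrounding text argues, it says nothing about what kind of response the resulting CS actually elicits.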


In these descriptions of Classical Conditioning, very few sources will even mention the possibility of pairing the US with anything other than an initially neutral stimulus (NS).*  But what might happen if you serially pair, for instance, 2 unconditional stimuli (US) together?  This arrangement of stimuli is more relevant (and far more likely) in real life scenarios, and certainly more important in behavior modification and rehabilitation efforts.  So what does happen when two US stimuli are paired together?  Can an aversive US stimulus be conditioned to a pleasant or appetitive US stimulus??  Can two aversive US stimuli be paired together to create a conditioned response or effect (or two pleasant US stimuli for that matter)?  Is conditioning even possible between 2 US stimuli?  Does the order of presentation of the two US stimuli matter?  Does the dog's response to either of the 2 initial US stimuli change after conditioning between them has taken place?  These are far more intriguing and tantalizing questions.  And the answers to all of these questions are actually very important whenever two or more US are serially paired together.  Yet the possibility of pairing a US with anything other than a NS is rarely discussed or even considered*.  As a result, most psychology students, dog and animal trainers, behaviorists, etc. come away thinking that Classical Conditioning is nothing more than the pairing of an antecedent NS with a subsequent US.  This however represents a mere fraction of the many possibilities that exist for the serial conditioning of two (or more) stimuli.

Bottom Line-  Serial Conditioning procedures can result in Predictive Association formation, and this can occur between two stimuli other than just an initial NS and a US.  Two NS stimuli can be serially conditioned; two US stimuli can be serially conditioned; and a single NS stimulus can be conditioned to a null set (no other stimulus forthcoming)-- all of these combinations can potentially lead to conditioning and the formation of Predictive Associations. 

*Again, Pavlov did conduct some interesting (if ethically questionable) research on this very subject.  Of particular interest were his explorations of conditioning procedures involving electric shock and food presentation as the two US variables.  Many other researchers also examined the possibility of conditioning one US to another US.  There are examples of US + US conditioning studies available in the scientific literature.

Second, we need to consider the order of presentation of the stimuli in a Classical Conditioning procedure.  Again in most textbooks, Classical Conditioning is taught as a serial conditioning process whereby the NS precedes the delivery or presentation of some relevant or eliciting stimulus, the US.  From here, most texts will state that there are 2 possible stimulus pairings that will lead to Predictive Association Formation (Classical Conditioning):

1. An NS precedes (and thus comes to predict) the delivery of some pleasant or appetitive stimulus; or

2. An NS precedes (and thus comes to predict) the delivery of some unpleasant or aversive stimulus.

So this leaves us with NS -> US (aversive stimulus) and NS -> US (pleasant stimulus) as the only two (classical) conditioning procedures available.  But conditioning can occur with other NS - US temporal arrangements as well.  If we assume for this portion of the argument that the 'other' stimulus paired with the US is indeed a NS, then there are at least ten arrangements of two stimuli whereby conditioning can occur (no US-US pairings here, for simplicity).  This includes a few scenarios where the two stimuli coincide or overlap in time, though this does not necessarily produce a Transference effect, where the stimulus properties of the NS and US are exchanged (see discussion above).  Instead, in all of these arrangements it is possible for the NS to simply act as a discriminative stimulus or predictive cue for the occurrence of a future event(s), depending on what happens with regard to the occurrence (introduction, maintenance, termination, or non-imminence) of the US stimulus, and on the qualitative nature (aversive vs. appetitive) of the US stimulus.

Again, given the manner in which Classical Conditioning theory is typically presented or explained, the student or avid learner is left with only a partial picture of the theoretical scope of Serial Conditioning and of the Predictive Associations that may be formed as a result, and thus they typically lack a comprehensive understanding of this very powerful form of associative learning.  Students and practitioners should be aware of the more extensive selection of viable stimulus-pairing arrangements available to them, so that they may incorporate a broader set of conditioning strategies into their training and behavioral rehab programs.


Bottom Line- Predictive Associative Conditioning can occur with temporal NS and US arrangements other than 'NS onset precedes US onset'-- the order of stimulus presentation said to be required for Classical Conditioning procedures.


In summary, Classical or Pavlovian Conditioning is, at best, a very basic and elementary paradigm for understanding the formation of certain types of associations between stimuli.  Many expert authorities and academic textbooks present Classical Conditioning in a simplified, short-hand format-- that NS precedes US presentations, resulting in the formation of Predictive Associations by the subject animal.   This definition ignores or omits many other important and viable Serial stimulus-pairing combinations.  As such, prospective students are often left with an incomplete conceptual framework and skewed view of associative conditioning processes.

The most glaring fault of current Classical Conditioning theory, however, lies in its lack of inclusion of Parallel Conditioning processes.  Temporal stimulus overlap is an exceptionally important source of association formation, and may in fact represent the only mechanism that can account for Transference, or the exchange of eliciting properties between stimuli-- a phenomenon possible only through Parallel Conditioning operations.

Serial Conditioning processes, as Classical Conditioning is defined, cannot account for conditioning phenomena where the properties of one specific stimulus are transferred to another stimulus.  This process is likely only possible through Parallel Conditioning procedures, of which there is no accounting in the Classical Conditioning schema.  Classical Conditioning, as generally accepted, is a serial-only stimulus-pairing procedure.

So-called one-trial learning is often due to a parallel conditioning process-- consider, for example, eating a novel food and feeling ill shortly thereafter.

Punishment (P) as an interrupter of learned behavior:

P blocks acquisition-- preventing the establishment of a new behavioral pattern-- or blocks maintenance-- preventing the realization or initiation of a currently utilized behavioral pattern.

Therefore, P must come at the beginning of a behavior or behavioral sequence-- just as, or just before, the behavior sequence begins-- and not after, as is the typical temporal placement of the aversive when describing or defining the punishment process.

There is also direct association-- temporal stimulus overlap with a voluntary action: association with the behavior (physical action) itself, with sensory input, or with a nearby (spatial) or overlapping (temporal) stimulus/object.

Associative vs. consequential learning.

P can be used (after an operant behavior has been completed) with dogs that have some training or experience with choice-making, although this may kill the drive to participate.

General behavioral suppression.

Blocking escalation of an emotional state response.









An Equally Brief Overview and Description of Behaviorism and Operant Conditioning



The Theory of Operant Conditioning was born from Behaviorism, a quasi-scientific, extremist movement within Psychology that began in the United States during the early 1900's.  Behaviorists of the time (as many still do today) believed that all behaviors are learned through or because of direct interactions with the environment, and that these behaviors result from stimulus-response conditioning.  J.B. Watson, the credited founder of Behaviorism, believed that animals, including humans, did not behave because of any inherited mechanism or internal drive or regulatory system.  Rather, all animal behavior was a product of summed experiences only.  The mind was a tabula rasa, or "blank slate," when an animal was born.  Animals had to learn everything by way of contact with environmental stimuli, as newborns came into the world equipped only with the capacity to make associations between environmental events.  Behavior was not due to genetically pre-determined characteristics, capacities, or drives.  Differences in behavior observed between individuals, Watson asserted, were due strictly to exposure to different life experiences.  Speech, personality, emotionality, cognitive ability-- these were all explainable as learned patterns of behavior.  Watson held that all learning, and thus all subsequent behavior, originated from 'Classical Conditioning'-derived exposures that occurred during the course of an animal's life.


Behaviorists of the time even rejected the concept of the "mind," as well as that of consciousness and free will.  They believed that we, along with all other animals, are slaves to the consequences we received for our past choices.  Prior consequence is the explanation for both how and why we behave presently.  Thus behavior could easily be changed, even manipulated, to produce new or different behaviors.  Watson once said (now famously) that given twelve healthy human babies, he could raise any one of them to be anything he chose-- artist, doctor, beggar, thief, etc.-- regardless of that individual's talents, inherent abilities, genetic make-up, or desires, based solely on the consequences (which he would theoretically control and deliver) for each choice that the person made in his or her life.  It is one thing to claim that all learning is a result of one's choices and experiences.  It's quite another to assert that all behavior arises solely from choice and experience.  He was clearly a fanatic-- his ideas reside at the very far end of the sane, rational-thinking spectrum-- but nonetheless Behaviorism found a footing within the scientific community of that time.

B. F. Skinner

Inspired by Watson's work, B. F. Skinner formally put forth the notion of Operant Conditioning in the early 1930's.  Skinner's work led to an offshoot of Behaviorism known as Radical Behaviorism (which ironically was not quite as radical as Watson's version).  Skinner still clung to many of the theoretical tenets of Watson's (Methodological) Behaviorism described above.  On the surface at least, Skinner acknowledged that innate behaviors existed, and therefore that biology and physiology, rather than simply environmental stimuli, played a role in the expression of behavior.  However, the environment was still the source of an animal's behavior-- "Where else would it come from?" he once replied when asked about the origins of behavior.  Skinner believed that some behaviors could be created or derived from multi-layered, complex associations, and that these behaviors could be explained via Classical Conditioning mechanisms.  However, he thought that the majority of observable behaviors were the result of, or at least shaped by, the consequences of choices made by an animal.


Skinner's views on learning and behavior, in practice at least, were no less flawed and unreal than Watson's.  For example, Skinner once claimed that the stalking behaviors of predators, like those seen in cats, could be taught to non-predator animals like horses or cows.  He contended that such predator-like behavior could be shaped or conditioned gradually, and that to teach a cow to 'stalk' would only require an "animated bundle of corn" or some similar reinforcer.  He claimed that such a stalking cow would appear to behave just as a cat would when approaching a mouse or bird, with differences in behavioral outcome due only to the cow's size and speed.  His proposals on learning and behavior were some of the more bizarre, ridiculous, and logically flawed of any "scientist" of the modern era.  Yet by creating the illusion of rigorous scientific study and promoting his conclusions with a generous dose of grandiose hyper-generalization, he was able to "wow" his peers as an adept magician might enthrall a rapt audience.


Followers of Skinner's early works included Keller and Marian Breland, behavior researchers who had studied under Skinner before leaving academia to apply his methods to commercial animal training.

In 1951, they wrote a paper published in American Psychologist.  This scientific paper had a tone more indicative of a propaganda or marketing piece than of a legitimate scientific paper.  The Brelands proclaimed that with Skinner's theories on learning, "a whole new science of behavior" was at hand...

Unfortunately, barely 10 years later, they wrote another scientific paper, also for American Psychologist.  In this paper, titled "The Misbehavior of Organisms" (1961), they described, in a much more somber tone, their experiences employing Operant Conditioning principles in real-life situations.  This newer account was written with the benefit of experience rather than from a hopeful, wishful-thinking perspective.  When they attempted to apply the "principles" of learning in more realistic scenarios, the results were far from predictable or reliable.  It seemed that the new 'science of behavior' had fallen far short of their expectations, as it had for many others.  That should not have been at all surprising.  When "Behaviorist" researchers starve their experimental subjects (rats and pigeons) for prolonged periods beforehand, relegate them to small, cramped containers without any other salient features, isolate them from all other stimuli and distractions, and prevent them from having any social contact or interaction with conspecifics or other species, one should not expect to see results that reflect real-world learning or "normal" behavior.

But these highly artificial experimental conditions were only the beginning of Skinner's manipulation.  Skinner very clearly wanted to establish a set of predictable and repeatable experiments that would produce the precise data he desired to support his theory of consequential learning.  And so he set out to design experiments that would support his theoretical notions-- and when I say design experiments, I mean manipulate their design.  He rigged his experimental measurement apparatus to produce only data consistent with the behavioral responses that he expected or intended.  Thus, whatever data were collected would conform to his proposed theories.  Not surprisingly, Skinner also chose not to incorporate control groups into most (all?) of the classic experiments from which 'Operant Conditioning' was ultimately derived.  Without control groups, it was easy for Skinner to purport that his manipulated data reflected the nature of learning in general, as there were no other experimental data for comparison.

Most unsettling, however, was the fact that Skinner investigated only a very narrow band of behaviors (using only two species as subject animals-- white rats and pigeons!).  Skinner hand-selected the "operants" or behaviors that were to be studied beforehand.  And he didn't just look at any random behavior, or select and examine a number of different behaviors for comparison.  Instead he chose just two behaviors to investigate, one for rats and another for pigeons.  On top of that, he selected behaviors that the subject animals would likely or naturally do anyway in the pursuit of the reinforcer offered (food).  Rats use their paws to search for and acquire food; pigeons use their beaks to attain food.  So of course Skinner chose to look at lever-pressing and beak-pecking behaviors, respectively.  From the results he obtained from such highly contrived and heavily manipulated experiments, Skinner proposed a theory of learning that was to encompass nearly all behavior, across all species of animals.  There is hardly an equal when it comes to hyper-generalization of a conclusion based on a very limited or specific data set.

Then in 1948, Skinner had the audacity to write Walden Two, a book that proposed the consequential structure that could be used to control mankind's behavior in a utopian society.

Skinner claimed credit for such 'taught' behaviors-- rather like claiming that you taught children how to draw...

B.F. Skinner was one of the greatest scientific frauds of the 20th century-- not so much for the work that he did, but for what he did with his work: hyper-generalization, and inductive reasoning gone terribly wrong.

He should have known better.  This was no accident; it was a purposeful misuse of data to advance a personal agenda.



Operant Conditioning

Operant Conditioning is a learning paradigm which supposedly describes a process of conditioning that occurs when a behavioral choice is followed by an environmental consequence.  Most everyone is at least familiar with the "Quadrant Diagram" that shows the four (or five) behavioral contingencies-- contingencies which describe the changes in behavioral 'strength' that will occur based on the type of consequence that follows a given behavior.  Below is an example of the 4-Quadrant Contingency Diagram, as it has been called.


[Image: the 4-Quadrant Contingency Diagram]

While Classical Conditioning suffers from an incompleteness in scope and a glaring omission of important forms of conditioning not contained within its standard definition, Operant Conditioning is, on the other hand, a complete mess.  Although a thorough examination, critique, and deconstruction of this paradigm of learning could fill an entire volume (it has-- see Meta-Modern Learning Theory: A Primer on Learning and Conditioning for the animal trainer, behaviorist, and behavior modification/rehabilitation specialist), here we will only address some of the more relevant issues with Operant Conditioning theory and related concepts borne from 'behaviorist' thinking.

We'll start with the irritating and confusing, progress through some theoretical and practical hiccups, and end with the fundamental problem with the construct-- the Achilles heel-- of Operant Conditioning as a legitimate learning paradigm.

The first issue surrounds the selection of the specific verbiage or scientific terminology that is used to describe various concepts and operations within the study of learning and behavior.  For the newcomer to behavioral studies, this can be a frustrating obstacle, as the choice of terms seems aimed at excluding anyone with only a casual interest in the subject.  For the seasoned behavior aficionado, it's still worth a review, since many sources (mostly online and in current print books) still get the terminology confused or misdefined.  We'll start with the terms Positive and Negative (as in Positive Reinforcement, etc.).

Skinner was opposed to nearly anything subjective when it came to the 'study' of behavior.  You could not infer that your test subjects liked or disliked something, that they might be afraid of one thing or really fond of another.

**Semantics and philosophy-

Relativity in adding or subtracting

What's so negative about a positive?

What's Punishment? (and Reinforcement?)

**The curious case of Negative Reinforcement

Antecedent (General/Contextual cues, Discriminative stimulus) -> 'Operant' Behavior -> Stimulus change (Positive / Negative) -> Change in behavioral outcome (this sequence of events changes the future probability that the operant behavior will occur following the introduction of the discriminative stimulus, with regard to Frequency / Intensity / Duration characteristics of the operant)
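As a rough sketch of the sequence above (in Python; the class and function names are mine, purely for illustration), one can encode a single trial and map it onto the textbook quadrant labels-- which also makes visible the cases the quadrant diagram simply doesn't cover:

```python
# Minimal sketch of the operant sequence described above.
# All names here are illustrative assumptions, not an established API.
from dataclasses import dataclass

@dataclass
class OperantTrial:
    antecedent: str          # discriminative/contextual cue
    behavior: str            # the 'operant'
    stimulus_change: str     # "added" or "removed"
    future_probability: str  # "increase", "decrease", or "no change"

def quadrant(trial: OperantTrial) -> str:
    """Map a trial to the textbook 4-quadrant label; flag sequences the
    quadrant diagram does not cover (e.g. no change in future probability)."""
    if trial.future_probability == "increase":
        effect = "Reinforcement"
    elif trial.future_probability == "decrease":
        effect = "Punishment"
    else:
        return "outside the quadrant diagram"
    sign = "Positive" if trial.stimulus_change == "added" else "Negative"
    return f"{sign} {effect}"

t = OperantTrial("leash tightens", "look away", "added", "no change")
print(quadrant(t))  # -> outside the quadrant diagram
```

Note that the last case-- a complete antecedent-behavior-consequence sequence that produces no change in future probability-- has no home in the four quadrants at all.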

**So what happens if the sequence doesn't change the future behavioral probability, or does so in a way not included in the OC quadrants?  (e.g., look away + mark = less looking away...)


There's a Scheduling Conflict


Operant Conditioning is based on the following premise: the environmentally-derived consequence of a behavioral choice is the factor responsible for the change in the future probability of the occurrence of a given behavior.  Consequence drives behavioral change or learning (and, according to Skinner, behavior itself).  Skinner decided to design experiments to test an interesting idea: what would happen if the delivery of a consequence were manipulated in certain ways?  Specifically, how would variations in the delivery schedule of reinforcers affect the performance (future probability) of a given operant behavior?  Skinner chose two variables-- the elapsed time between the delivery of one reinforcer and the next, and the number of correct behavioral responses required for the delivery of a reinforcer to occur.  For the time gap between reinforcer deliveries, Skinner chose the term Interval; and for the response-volume criterion for reinforcer delivery, he chose the term Ratio.  Within each of these categories, the variable could be set at a Fixed amount or number for the duration of the experiment, or it could change throughout the course of an experimental trial and be Variable.  That gave four possible reinforcement arrangements or Schedules: Fixed Interval, Fixed Ratio, Variable Interval, and Variable Ratio.  There's also a special case within the Fixed Ratio category-- Continuous Reinforcement-- whereby the subject animal receives reinforcement for every single correct response.
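The four schedules (plus Continuous Reinforcement as the FR-1 special case) can be sketched as a single decision function.  This is a toy model under simplifying assumptions of my own (for instance, the variable-ratio schedule is approximated probabilistically), not an implementation of Skinner's apparatus:

```python
# Toy sketch of the reinforcement schedules described above.
import random

def reinforce(schedule, responses, elapsed, last_reward_time, rng=random):
    """Decide whether the current correct response earns a reinforcer.

    schedule: ("FR", k) fixed ratio, ("VR", k) variable ratio (mean ~k),
              ("FI", t) fixed interval, ("VI", t) variable interval (mean ~t).
    responses: count of correct responses so far; elapsed: time now.
    """
    kind, value = schedule
    if kind == "FR":   # reinforce every value-th response
        return responses % value == 0
    if kind == "VR":   # reinforce roughly 1 in value responses, at random
        return rng.random() < 1.0 / value
    if kind == "FI":   # reinforce the first response after a fixed delay
        return elapsed - last_reward_time >= value
    if kind == "VI":   # reinforce the first response after a variable delay
        return elapsed - last_reward_time >= rng.uniform(0, 2 * value)
    raise ValueError(f"unknown schedule kind: {kind}")

# Continuous Reinforcement is just the FR-1 special case:
print(all(reinforce(("FR", 1), i, 0.0, 0.0) for i in range(1, 6)))  # True
```

Framing the schedules this way makes the comparison in the next paragraph easy to state: FR-1 delivers a consequence for every correct response, which the theory predicts should strengthen behavior the most.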


So from this, one should be able to logically predict the best way to change the 'strength' (or probability of future occurrence) of a behavior.  One would assume that applying a consequence to every single relevant behavioral choice would be the most effective method, since according to Operant Conditioning dogma, consequence is the key ingredient of behavioral change.  If we just look at positive reinforcement, for example, we should see the most robust "strengthening" of a behavior if we introduce a reinforcer (the consequence) after every correct or desirable behavioral choice-- the Continuous Reinforcement option.  This is, after all, one of the central tenets of Operant Conditioning: reinforcement following a behavior causes the increase in the future probability of that behavior.  However, if we were to follow this reinforcement recipe, we'd find this procedure-- the one most consistent with the stated definition of positive reinforcement-- to be very disappointing.  It's not the best procedure for 'strengthening' a behavior.  Nor is it second best, or even third best, out of the possible reinforcement schedules... it's the least effective procedure for strengthening behavior.  Dead last.  And to make matters worse (and more interesting), this procedure also produces the fastest rate of response extinction.  That is, when reinforcers are subsequently withheld following correct responses, the future probability of the occurrence of that behavior takes a nosedive, decreasing in strength at the fastest rate of all possible reinforcement arrangements.  So what gives?  Why would the predicted best schedule of reinforcement turn out to be the worst and least effective?  And why does this fact concern so few people, if they even bother to stop and acknowledge it?  One could argue that Skinner's Schedules of Reinforcement are not relevant to learning per se, but are really more applicable to the maintenance of a known behavior or performance.
Still, the results fly in direct opposition to what one should expect, whether discussing skill acquisition (where Continuous Reinforcement fares better) or behavior maintenance/performance.  If consequences truly drive learning and 'strengthen' future behavior, as Skinner claimed, then one should expect the reinforcement schedules in which consequences are given most regularly to produce the most effective results.  By extension, the same should be observed when assessing performance or behavioral maintenance.  But that is not the case either.

It's ironic, in a sense, that Skinner-- who was known for 'discovering' and documenting these Schedules of Reinforcement, and who purported that his Operant Conditioning paradigm described learning contingencies-- did not care to elaborate on why these results seem to directly contradict the outcomes predicted by his own Operant Conditioning theory.  If you are a believer in the myth of "Positive Reinforcement" as a method or training technique, then you should certainly know why the simple reinforcement of a desired behavior does not produce the best outcomes in terms of future behavioral strength (the presumed primary goal of training), and does not produce lasting behaviors resilient to fluctuations in or cessations of reinforcement delivery (the definition of training-efficacy failure).  It's not a difficult question, but the answer is nonetheless fundamentally important for understanding the process of learning and appreciating the internal drives that maintain behavioral performance over time.

To discover that answer though, you'll have to take your head out of the Skinner Box in order to see it...



**Presentation Problem

"...reinforcement is a consequence that will strengthen an organism's future behavior..."

"Skinner defined reinforcers according to the change in response strength (response rate) rather than to more subjective criteria, such as what is pleasurable or valuable..." 



*Conundrum-- either we classify, albeit subjectively, those stimuli that are delivered as a consequence of behavior as pleasant or unpleasant (as perceived by the receiver), or we ignore the subjective perceptions of the receiver and concern ourselves only with whether those stimuli are "added" or "removed" as a consequence of behavior.  If we choose the former...

If we choose the latter...

*Similar Dilemma with the definitions for Reinforcement and Punishment-

if reinforcement is to equal an increase in the future probability of a behavior, and punishment is to equal a decrease in the probability of future behavior...

then we must accept that the spanking of a child for lying, and the beating of a puppy for nervous or submissive urination, are examples of "positive reinforcement" if in fact these measures lead to an increase in the likelihood of the respective behaviors occurring in the future (and in both examples, they often do).  The same goes for "leash corrections" for a reactive dog, scolding a dog for stealing food while unsupervised... the list goes on and on.  Most people-- trainers, behavior experts, and researchers-- would not immediately agree that any of these procedure-outcome pairings should be considered 'positive reinforcement,' even though they fit perfectly into the definition of the term.
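The bookkeeping those outcome-based definitions force on us can be made explicit in a few lines (a Python sketch of my own, using only the "added/removed" and "increase/decrease" criteria from the definitions above):

```python
# Sketch: label a procedure strictly by its outcome, as the definitions require.
def outcome_label(stimulus_change: str, behavior_change: str) -> str:
    sign = {"added": "Positive", "removed": "Negative"}[stimulus_change]
    effect = {"increase": "Reinforcement", "decrease": "Punishment"}[behavior_change]
    return f"{sign} {effect}"

# An aversive stimulus ADDED (a spanking) that makes the behavior MORE likely
# is, by strict definition, labeled:
print(outcome_label("added", "increase"))  # Positive Reinforcement
```

Nothing in this labeling consults what the receiver actually experienced-- which is precisely the objection raised above.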

**You can't avoid Escape-Avoidance Conditioning

**Fundamental Flaw- Procedure or Outcome?





























