Protocols created by pet professionals have been around for as long as I can remember. By protocol, I mean a system or recipe that provides a step-by-step instructional guide, presented as something that works better than, or differently from, the generally accepted practices described in standardized scientific terms.
I have used protocols in the past, and I think the desire to find that shiny, new, better protocol comes from a passionate desire to help more dogs. No matter how good we are as trainers, there is always that one client who struggles. We want to help.
Thus is born the argument that we need more tools in the training toolbox. Don’t get me wrong: I am completely in favour of better, and I am in favour of learning about new things. But new does not mean better. New does not mean it will stand the test of time. New does not mean that it belongs in my training toolbox. Perhaps it belongs in my “interesting new factoid” box. Evidence supporting some cognitive ability in dogs does not mean it will successfully translate into something useful for dog training. Not even all of Skinner’s work held up upon review. That is okay. That is how science works.
This is why the idea that “there is a study” is not sufficient grounds for saying that new is better. It is like those people who replaced their butter consumption with margarine. Margarine was newer than butter, but its trans fats were not better.
Will a specific study or protocol hold up over time? I do not know. I only know that before me is a human being, a client with a dog. It is my job to:
- Choose a strategy that has evidence of being effective.
- Ascertain that the strategy is suitable for that particular problem.
- Understand, mitigate, avoid and warn about risks and costs.
- Factor in the client’s capabilities and safety measures.
That is a tall order to fill. Human psychology has wrestled with this problem, and the result was a series of task force reports, reviewed by Chambless and Ollendick, on evidence-based practice. That work recognizes that new treatments may become available, and that we need to balance the potential of new, effective treatments against the scientific evidence at hand. As a result, many psychological associations publish ratings of treatment options.
For example, the Society of Clinical Psychology lists the treatments for panic disorders as:
- Cognitive Behavioral Therapy (strong research support)
- Applied Relaxation (modest research support)
- Psychoanalytic Treatment (modest research support/controversial)
The terms “strong research” and “modest research” reflect specific criteria that explain the amount and type of research supporting that treatment. An untested treatment plan may or may not work. Its omission from the list is an honest way of communicating that we just do not know.
Should controversy exist, this is also noted, creating a transparent system. Reviewing, revisiting and questioning evidence does not constitute a personal attack. As the task force explains:
“Experts are not infallible. All humans are prone to errors and biases. Some of these stem from cognitive strategies and heuristics that are generally adaptive and efficient. Others stem from emotional reactions, which generally guide adaptive behavior as well but can also lead to biased or motivated reasoning.”
Criteria leading to a “strong research support” (well-established) designation are stringent. According to Chambless and Ollendick’s criteria:
I – At least two good between-group design experiments demonstrating efficacy in one or more of the following ways:
A Superior (statistically significantly so) to pill or psychological placebo or to another treatment.
B Equivalent to an already established treatment in experiments with adequate sample sizes.
OR
II – A large series of single case design experiments (N>9) demonstrating efficacy. These experiments must have:
A Used good experimental designs and
B Compared the intervention to another treatment as in IA.
Further Criteria for both I and II
III Experiments must be conducted with treatment manuals.
IV Characteristics of the client samples must be clearly specified.
V Effects must have been demonstrated by at least two different investigators or investigating teams.
(Emphasis mine, to highlight the many requirements.)
You do not need to be a researcher to see that this goes well beyond, “There is a new study – here is my new protocol.” Well-established treatments have multiple, reputable studies, with multiple researchers and teams reviewing and debating the merits of that evidence. Even treatments listed as “modest research support” go well beyond one study and an idea.
How do we choose what is the best therapy for a particular client? The task force suggests,
“…evidence should be considered in formulating a treatment plan, and a cogent rationale should be articulated for any course of treatment recommended.”
In dog training circles, protocols are marketed differently than the above. Clients and trainers alike are told that we “need more tools, or dogs will die.” This insinuates that nothing but more protocols can save lives, which is not the issue at hand. The choice is not between new protocols and death. Our choice lies between therapies with a strong body of evidence and others with little to none.
More choices and more protocols create an ethical dilemma. We do not know if shiny, new things are better than placebo, nor do we know if they carry risks. We are working without the safety net that testing provides.
We also incur an opportunity cost: we abandon well-established treatments in favour of the unknown. There is a finite amount of time, money and resources in a client’s life. Attention to the new takes time and attention away from a strategy that has a strong track record of working.
Even if we could mash methods together and offer multiple strategies, it is unlikely that anyone has tested or reviewed whether those methods are complementary. Do the effects of our shiny, new protocol trigger blocking effects in the tried and true? Without testing, this presents yet another concerning unknown. It is entirely possible that we are setting the client up to fail.
Out of the plethora of shiny new protocols, perhaps some will stand the test of time. We remain in the dark until rigorous testing happens.
We, as dog trainers, have no right to override or skip testing and review. Our experiences and anecdotes are not superior to the tenets of the scientific process. Nothing gives us the right to let our egos grow to the point where we believe we can create a protocol, skip testing, and sell it to clients at will, without disclosure, all while taking payment for that service.
Until shiny, new studies and protocols become tested and reliable, we have choices to make for the individual client before us. If we choose to go the route of shiny and new, then at the very least clients deserve to know that they are signing up for something experimental. They also have a right to know that a supported treatment is available to them elsewhere.
To be quite blunt, while we dabble in the new and untested, we are asking our clients to be our guinea pigs.
New does not mean better. Better is better. We will know we have better when we have proof that it is better. In the meantime, perhaps our focus is better spent on mastering that which already meets “well established” treatment guidelines.
Very nicely put, and I’ll try to add something to the end here: no matter how much research and testing is done on a new protocol, we know it will not be effective with all dogs and cannot be adequately implemented by all people. With many problem dogs, we may briefly try a number of different protocols before deciding on the best choice. For dog owners, we try to give reasonable and specific expectations for initial responses, so that within a few weeks, or sometimes only days, we have some indication of whether a change in approach is needed. We describe those expectations as specific, incremental behavior changes, rather than general impressions or whether the entire problem has been resolved.
This also applies to possibly complementary protocols, where even prior testing will only give you a probability that their joint use does not conflict.
Having said that, many new protocols may simply be slight variations on older ones that still adhere to the same principles, in which case it may be neither practical nor necessary to require specific testing and research before using them. My issue with some of the “shiny” new protocols I’ve read is that they ignore or deviate from well-established principles without sufficient justification or research to support them, and then everything you said here applies.
This is wonderful information about how decisions can and should be made regarding evidence-based practice. Most of us tend to be attracted to shiny new toys, I guess. I hope instead we can work toward a training culture that is evidence-based in its adoption of new training methods, and appropriately hesitant to adopt them, especially marketed protocols backed by one study or none.
I agree that the “one-study” problem is huge. I have succumbed to it myself. There’s a study; it must be true! We need to remind ourselves how science, experimental design, and statistics work.