In 2022, Karie John submitted her dissertation Training Board Certified Behavior Analysts via Telehealth to Conduct the Trial-Based Functional Analysis to the University of South Florida’s Ph.D. program in Applied Behavior Analysis.
The project emerged at a moment when the field faced an existential threat: COVID-19 had interrupted in-person supervision, graduate practicums, and service delivery. Telehealth appeared as both a lifeline and a test of ABA’s claim that its methodology is internally consistent.
The dissertation positions behavior analysis as a system capable of functioning at any distance. Its premise is pragmatic, not theoretical: if analysts can be trained to conduct experimental assessments remotely, then the discipline itself can survive without physical contact for training. Within that frame, the study becomes a proof-of-continuity experiment. It is not about children. It is about system survival.
Overview of the Study
Six Board Certified Behavior Analysts (BCBAs) were recruited as participants. Each already held professional certification but lacked experience conducting trial-based functional analysis (TBFA), a shorter, classroom-adapted version of the classic Iwata functional analysis used to determine the “function” of problem behavior. (Read “What’s the Function?” for an article about Iwata’s assessment.)
Training occurred entirely through telehealth. Each behavior analyst logged into Microsoft Teams, observed video demonstrations, and rehearsed the assessment protocol with the researcher portraying a child. After reaching perfect procedural accuracy in these rehearsals, they were instructed to perform the same procedure with children in their own caseloads. The distinction between training and experimentation disappears at this point. The adults practicing the procedure are also the investigators, and the children they serve become the material of that practice. The training study and the ongoing treatment are the same event, recorded under different names.
The dependent variable was fidelity—the percentage of protocol steps performed correctly by the participants. All six participants reached 100 percent during the training phase. All then generalized their skills to real children and achieved 100 percent.
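The dissertation reports only the resulting percentages, but the arithmetic behind a fidelity score of this kind is worth spelling out, since it is the entire definition of success here. A conventional treatment-integrity calculation (the step counts are not given in this summary and stand in only as placeholders) looks like:

\[ \text{Fidelity} = \frac{\text{protocol steps performed correctly}}{\text{total protocol steps in the trial}} \times 100\% \]

A trainee who executes, say, 14 of 14 scripted steps scores 100 percent, regardless of how the child on the other side of the interaction fared.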
From the field’s perspective, this outcome demonstrates that remote training and supervision can maintain procedural integrity. From a human perspective, it shows that procedural precision itself has become the desired outcome.
“BCBAs were able to implement all conditions of the trial-based functional analysis with 100% accuracy following telehealth training.”
The dissertation defines this as success.

Detailed Description of the Study
Phase One: Training. Each of the six participants was a Board Certified Behavior Analyst, already credentialed in the field but identified as lacking specific experience with the assessment tool the researcher wanted them to learn. The experiment set out to evaluate whether they could learn to use this tool successfully through remote training.
The training took place over telehealth video calls. Each analyst logged into Microsoft Teams from their own workspace. The researcher appeared on screen, headset and camera on, and explained the goal: to learn a “trial-based functional analysis,” or TBFA. In plain terms, this meant learning how to test what causes a child’s behavior by deliberately creating short situations that might trigger the behavior, then observing the result.
The training began with a set of short video demonstrations. In these videos, the researcher acted out the role of an adult working with a child. Each clip showed a different situation: the adult giving and taking away attention, offering and removing a toy, or asking the “child” to do something. The analysts were told to watch closely, take notes, and be ready to copy what they saw.
After the videos, each analyst practiced the same steps live with the researcher pretending to be a child. The “child” sat on camera holding a small toy. The analyst was told to say, “Let’s play!” or to give simple directions like, “Can you hand me the block?” Then, at certain moments, the analyst had to take away the toy or stop talking altogether, depending on which test condition they were practicing.

Note that in Phase One, the child was merely a simulation: the researcher acting out the problem behaviors that a client already enrolled in ABA therapy would likely exhibit. If the “child” began to protest (by saying “No,” frowning, or looking away), the analyst had to record that reaction, then quickly restore the toy or end the task. The goal was not to calm the child, but to follow the procedure exactly. Each interaction was timed with a stopwatch, usually lasting only a few minutes.
The researcher observed the analyst’s timing, phrasing, and sequence of actions. After each attempt, the analyst received feedback through the webcam: “You forgot to remove attention,” or “You praised too early.” The analyst repeated the exercise until they performed each sequence flawlessly, down to the second.
Phase Two: Generalization to practice. Once the analyst could perform the method perfectly with the researcher “child,” they were cleared to do the same thing with actual children in their own clinics or home-based programs. These were real children already receiving ABA services. The paper does not describe the children in detail, except to note that they were clients who sometimes engaged in “problem behaviors.” Their parents were not active participants in the sessions.
In the live applications, the analysts recreated the same short tests from the training videos. One segment focused on attention: the adult talked with the child, then suddenly looked away, acted busy, and waited. If the child shouted, cried, or tried to get attention, the adult turned back and spoke briefly, then looked away again.
Another segment tested escape from demands: the adult gave the child a task, such as picking up a toy or pointing to a picture. If the child protested, the adult stopped the task and gave a short break. This was a small reward for the protest, meant to show that “escape” might be the reason the behavior happens. Behaviorists do not evaluate the protest as an indication of distress; the protest, deemed the “problem behavior,” simply has its function established by the assessment.
A third segment tested access to tangibles: the adult let the child play with something preferred (a toy, a tablet, or food), then took it away for a short moment. If the child reached, screamed, or tried to grab it back, the item was immediately returned. Here, the screaming was classified as “problem behavior” whose function was to gain access to a preferred item.

Each trial was short, two to four minutes, and carefully timed with a stopwatch. Every movement had to match the script the analysts had been taught over video. If they hesitated or comforted the child too soon, it was scored as an error. The entire goal was consistency: to produce the same sequence of triggers and responses each time.
The sessions were recorded and coded afterward. A second observer reviewed the footage to confirm that every instruction, pause, and reaction matched the template. When the two observers agreed on the timing and order, the trial was counted as having high procedural accuracy.
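The dissertation’s exact agreement statistic is not described here, but the standard interobserver agreement (IOA) measure used with trial-by-trial data in this field is another simple ratio, which again rewards matching records rather than anything about the child:

\[ \text{IOA} = \frac{\text{items on which both observers agreed}}{\text{agreements} + \text{disagreements}} \times 100\% \]

Two coders who log the same sequence of steps and timings count as confirmation that the procedure was delivered as scripted.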
During the second phase of the trials, the rooms on screen looked ordinary, with toys on a table, an actual autistic child sitting across from an adult, sometimes another staff member in the background. But what defined the interaction was not play or learning; it was the controlled alternation of comfort and withdrawal. Each time the child reacted, it was written down as data. The child’s facial expressions, tone of voice, or emotional recovery were not analyzed. The only question was whether the analyst executed the right steps and whether the behavior appeared under the planned condition.
By the end, all six analysts had reproduced every part of the script without error. The researcher marked this as mastery: telehealth training successful; procedures transferable to live application.
The Misbehaviorist Critique of the Study
The study promulgates a mechanistic view of “autism.” It maintains the strict behaviorist perspective that regards autistic children as automatons, displaying a narrow range of responses to a narrow set of stimuli.
In the training phase, a role-playing experimenter stands in for real autistic children. The assumption is that practicing on a compliant adult who knows the script will prepare trainees to manage an unpredictable child who does not. Use of role-playing is presented as “practice with low risk to clients,” making telehealth training feasible across distance.
Nothing more is said about what might be lost in translation.
Nowhere does the paper discuss whether an adult can realistically represent a child’s distress, or how the absence of genuine emotion might change how trainees perceive what they’re doing. There is no mention of tone of voice, hesitation, or the ordinary uncertainty of working with a real person. The adult “child” follows instructions perfectly. The real ones never do.
The method’s logic is clear: accuracy over authenticity. The role-played child exists to produce identical conditions, not relatable experience. The reader never learns what those first remote sessions felt like to the trainee: the stillness of an adult pretending to be small, the awkward pause before pretending to cry, the analyst waiting for the cue that never comes naturally. On paper, it reads as smooth replication. In practice, it is an exercise in controlled imitation.
Later, when the trainees move on to actual child clients, the dissertation reports success as proof that the role-playing simulation worked. It does not ask whether mastery under scripted conditions might dull awareness of real distress. It does not report the reactions of the children themselves. The experiment was considered a success because every trainee reproduced the procedure with 100 percent accuracy, “proving” that the system could maintain control and consistency through telehealth training.
By removing the child, the study makes training measurable.
By removing emotion, it makes compliance teachable.
And by calling the simulation “low risk,” it makes ethics disappear from the frame (see below).

It’s a set-up.
The study is biased from the outset. It is set up for success, not for open-ended, open-minded inquiry.
The analysts were trained on the method until they were 100% perfect. Rarely in real life do training sessions afford this luxury.
The experimental subjects (the children) were hand-picked as those most likely to fit neatly into the method’s procedures. The only criterion was that the child displayed “problem behavior suitable for functional analysis.” In practice, this meant a child who would predictably react to the conditions being tested: taking away a toy, removing attention, or presenting an unwanted task. The more predictable the reaction, the cleaner the data, and the more plug-and-play the experiment could be, even in the midst of a global pandemic.
Unfortunately, pristine research conditions yield results that are only tenuously generalizable to the real world.
The experimental ethics are problematic.
The second phase of the study should have been framed as researchers (analysts) performing experiments with human subjects (children).
Instead, it was framed as analyst-trainees operating in their already-established “applied settings,” with their existing clients, after learning online how to refine their treatment techniques. This re-framing permits the study to slide past the usual boundary of human-subject research ethics. The standard ethics questions about the effects on, and possible harms to, human subjects are simply not asked.
The dissertation identifies the adult participants by age, certification status, and pseudonym, yet it does not acknowledge the children with autism as human subjects. An Institutional Review Board evaluating such a project should determine whether training activities conducted with existing clients constitute human-subject research, and should require explicit identification, informed consent, and risk assessment for all children directly involved in experimental procedures.
Within this design, autism is treated as interference that could contaminate the experiment. The condition itself is not under study. The human experience of translation is erased so that the continuity of behaviorism can remain intact. To name the autistic participants would require recognizing them as human.
The study is entirely contained within the ABA ecosystem.
The study takes for granted the prevailing behaviorist model of autism treatment and its recommended methods, such as the trial-based functional analysis. Autism is never mentioned in the entire document, as though the association between behavior analysis and autism is so established that it no longer requires acknowledgment. The study doesn’t question the effectiveness or the consequences of these methods; it merely seeks to replicate them.
It also strips away any remaining connection between behavior analysis and the moral or emotional reality of being human. Nowhere in the report is there a description of how the children felt, what they understood, or how they responded once the cameras were turned off. The study measured what it was designed to measure: could the adults, trained remotely, make the same sequence of actions happen as if they had been trained in person? The answer was “yes,” and so the study was deemed successful.
The author herself is completely embedded in this ecosystem. Every layer of guidance, from her undergraduate coursework to her first research assistantship and doctoral defense, occurred within the silo of BCBA practitioner-academicians. The mentorship by a behavior analyst guarantees fidelity to method, not expansion of perspective.
Thus, we can see that the professional network is self-sustaining: analysts train analysts, cite analysts, and test analysts. Psychology, ethics, and child development sit outside the loop.

At the end of the dissertation, the author thanks her mentor. In doing so, she is thanking a person who taught her to see problem behaviors, but not a person who might have taught her to see the child.

Read this dissertation: John, K. S. (2022). Training Board Certified Behavior Analysts via Telehealth to Conduct the Trial-Based Functional Analysis (Doctoral dissertation, University of South Florida).