Sindy Sanchez’s 2018 dissertation, What’s the Function? Assessing Correspondence between Functional Analysis Procedures, was submitted to the University of South Florida’s Ph.D. program in Applied Behavior Analysis under the supervision of Raymond Miltenberger. The project positions behaviorism as the default analytic framework for classroom settings where children are labeled as having developmental disabilities.
The dissertation’s institutional frame is unmistakably educational and clinical: it explicitly cites the Individuals with Disabilities Education Act (IDEA, 1997; 2004) as the legislative context that authorizes and compels schools to conduct Functional Behavior Assessments (FBAs) when students engage in “problem behaviors.” Congress did not use the phrase “problem behavior” in IDEA ’97. That phrasing originates from behavior-analytic and special-education interpreters (like Ervin et al. and Yell & Katsiyannis) who reframed the statutory “behavior that impedes learning” into their own disciplinary terms, and Sanchez’s dissertation illustrates how this interpretive shift took shape in practice.
Federal special education law does not require schools to use Functional Behavior Assessments (FBAs) as the default or primary way to address behavior. FBAs are permitted as one option, typically in serious situations where behavior has escalated to a crisis level and immediate intervention is needed. Yet an autism diagnosis itself has come to be treated as a justification for immediate behavioral action by schools. This shift did not come from the law; it came from behavior-analytic interpretations that stretch the law’s general requirement to investigate concerns into a mandate for behavior-analytic methods.
Congress intended IDEA to be open and flexible, allowing schools to use any evidence-based strategies. Behaviorist interpretations quietly narrowed that openness into a single preferred tool, with the Functional Behavior Assessment treated as the evidence-based standard.
How We Got Here:
Sanchez’s review of the behaviorism literature of the past 50 years shows a quiet but important shift in how Functional Behavior Assessments (FBAs) have been used. What began as a humane clinical practice gradually turned into a bureaucratic tool used in schools. In the early days, researchers whom Sanchez cites, such as Carr, Newsom, and Binkoff (1980), suggested that so-called “problem behavior” was actually a form of communication, an attempt to convey meaning through actions that had previously been treated as signs of illness or disorder. Decades ago, the bright idea of ABA was to help a small number of seriously involved individuals who were already isolated in tightly controlled institutional settings. At that time, FBAs were used clinically, often on autistic people living far from public view.
Close to half a century later, ABA has been grossly misappropriated as a standard course of treatment, defining how autistic people are regarded in society today. Sanchez cites Iwata and colleagues, who, beginning in the 1980s, refined FBAs into a careful experimental method used in tightly controlled clinical settings. This marks the evolution of Misbehaviorism: behaviorists were no longer studying the “but why” of communicated behavior, but had moved into the “what shall we do about it” phase. Over time, their approach became standardized for use in schools. By the 2010s, FBAs had largely lost their original precision and empathy. Sanchez tests Iwata’s assessment in her experiment.
For this dissertation, IDEA thus serves as both a moral and procedural mandate to proceed with behavior-analytic tools for evaluating “problem behavior.” Through this frame, the document locates the behaviorist as an implementer of federal expectations—a professional intermediary translating law into measurable contingencies. The dissertation emerges from a milieu in which “functional analysis” (FA) is understood as the gold standard of behavioral assessment. Sanchez’s work attempts to reconcile the analytic purity of session-based FA (Iwata et al., 1982/1994) with the pragmatic demands of classroom environments through a comparative study between trial-based and session-based procedures. In this way, the dissertation is not a treatment study, but a calibration study: it measures the degree to which institutional and environmental constraints distort or preserve analytic fidelity.
The Experiment and Procedures
The study took place in a private school in Brandon, Florida, described as serving children with “varying disabilities and problem behaviors.” By naming the site this way, the dissertation assumes the reader already accepts what “problem behavior” means and does not question the label. Consistent with a medical model, the student’s record contains a literal “problems list” appended to their educational record, right alongside ear infections. From that point forward, the research tests how well assessment tools agree with each other—not why the children behave as they do.
Five children, aged 5 to 10, were selected. Classrooms were arranged in “Level 1–3” tiers of academic and behavioral functioning. “The classroom was not completely empty, as this was not feasible in this setting.” Materials such as toys, worksheets, or preferred items (Play-Doh, crayons, puzzles, small electronics) were on the table. The researcher had a stopwatch and a clipboard for recording data.

Sanchez gives close, almost ethnographic portraits:
- “Participant 1 was a 5-year-old, Hispanic boy… engaged in severe problem behaviors throughout the entire school day, including screaming, physical aggression, property destruction, self-injury, rigidity with routines and arrangements, and elopement.”
- “Participant 2 was a 6-year-old, Caucasian boy… diagnosed with ASD, Oppositional Defiant Disorder, and ADHD… at risk of being removed to a Level 1 classroom given the frequency and intensity of his problem behaviors.”
- Later participants (3–5) are described with comparable detail, including a note that participant 5’s self-injury “was frequent and severe enough to leave a permanent callus.” (p. 28).
This descriptive framing transforms clinical biography into functional data: each child is introduced through topography of “problems,” immediately establishing an analytic distance between person and behavior. The classroom was never truly empty. Chairs scraped in the hallway, another class worked at the far side of the room, and there were always a few children waiting their turn at the back table. From a parent’s perspective, this matters: what looks like a “controlled” experimental environment on paper felt, in reality, like a busy, half‑quieted classroom where their child was being watched, measured, and then altered.
Study 1 tested whether two experimental formats for identifying the function of “problem behavior” would yield the same result. Sanchez called this “assessing correspondence between functional analysis procedures,” meaning a comparison of whether two methods point to the same environmental triggers for observable behavior. Yet in this framework, “environment” no longer refers to the child’s lived or emotional world, but to the specific, engineered conditions the evaluators created within the interaction itself. Once behavior is treated as something separate from the child, the adults around them stop asking whether the intervention is good for the child and focus only on whether the behavior changes.
The two procedures were:
- Trial-Based Functional Analysis (TBFA) – short, two-minute “test” and “control” segments inserted into the children’s normal classroom routines.
- Session-Based Functional Analysis (SBFA) – the standard ten-minute Iwata-style sessions conducted in a separate room.
The purpose of these sessions was to find out what made certain behaviors happen: whether a child screamed to get attention, to escape a task, or to regain something that was taken away. The method, called a functional analysis, consisted of short, timed situations in which adults deliberately changed the environment and recorded the child’s reactions with a stopwatch and a clipboard. The stated goal was methodological, not therapeutic—to test whether a quicker, classroom-based version could replace the older laboratory design from institutional research. In doing so, Sanchez effectively relocated that institutional experiment into a classroom, transforming a place meant for learning into a controlled site of behavioral testing.

In the classroom: For the classroom-based version, called a trial-based functional analysis, a researcher sat near the child during their normal school routine. Each “trial” lasted about four minutes. The adult would quietly observe for two minutes while the child had access to toys or activities they liked. Then, for the next two minutes, the adult would change one thing:
- remove the toy,
- ask the child to do a small task,
- stop talking to the child, or
- move away.
If the child yelled, hit, or tried to run away, the researcher marked it down, stopped the timer, and gave back whatever was taken away or ended the task. Then they reset and did another trial. This routine happened many times—ten or more per day—over several days, often between other school activities. For some children, that meant having preferred items repeatedly taken away or having adults briefly withdraw attention throughout the day. Each of these actions was part of the procedure designed to see which situation caused the target behavior to appear most often.
In the separate room: The same children also did a session-based analysis in a quieter room. Here, each session lasted ten minutes and focused on one type of situation at a time. In one session, an adult ignored the child until they produced the target behavior, then gave short attention and went back to ignoring. In another, the adult presented schoolwork, stopped it if the child protested, then started again. In another, a favorite item was removed, returned for a moment if the child reacted, and then removed again. The point was to record exactly when and why the behavior appeared most often. Each session ended with numbers: how many times the behavior occurred per minute.
Later sessions: learning replacement behaviors
Next, the experimenter applied the results of the analysis as a behavior-modification intervention. After the adults determined which situations produced the behavior, they ran another set of short sessions to teach a new action that would serve the same purpose. A child who had screamed when toys were taken away was taught to say “Give it back.” A child who hit to escape a task was taught to say “Go away.” A child who mouthed Play-Doh was reinforced for keeping it out of his mouth for short periods. When the child used the new phrase or action, the adult immediately responded by ending the demand or returning the item. If the old behavior appeared, the adult quietly waited until the new one was used. These sessions were three to five minutes long and repeated until the new pattern was stable.

What isn’t described? The dissertation does not mention how the children responded emotionally to these procedures or how the repeated removal of items or attention affected them beyond the data. There is no record of whether their personal routines, sensory rituals, or calming habits were interrupted, though by design the analysis temporarily changed or withheld things they valued. The study focuses on measurable changes—rates of screaming, hitting, or self-injury—not on how the experience felt for the child. Every reaction is translated into data points on a graph, expressed as “responses per minute.”
Beyond collecting data on, and modifying, labeled “problem behaviors,” the experiment did not consider the lasting impact of any procedures applied. In fact, Sanchez frames the intervention’s benefit largely in terms of its impact on educators rather than the children themselves. By depicting “problem behavior” as a source of teacher burnout, low self-efficacy, and attrition, she positions functional assessment as a solution that restores teacher control and professional stability, rather than centering the student’s subjective or developmental experience.
From the outside, these sessions would have looked like a series of short, controlled interactions: an adult timing, removing or restoring materials, and noting each response. The children were not punished or restrained, but their access to toys, tasks, or attention was deliberately controlled to test what triggered behavior. Each child experienced many brief reversals—having something taken away and then returned—as part of that pattern. For a parent, teacher, or ethics reviewer, this description clarifies what the phrase functional analysis means in practice. Understanding what those minutes actually looked like—what was removed, what was restored, how quickly adults responded—helps frame ethical questions about consent, predictability, and comfort in behavioral research with children.
What the parent sees is this: every step of the process—from baseline to intervention—is designed to measure how well their child can approximate non‑autistic behavior under pressure. The classroom may not be completely empty, but the metrics are: empty of the child’s voice, empty of family priorities, and empty of any recognition that autistic regulation is not a disease symptom to extinguish, but a vital way their child stays connected to themselves in a noisy, demanding world.
Sanchez Analyzed the Data, and This Is What She “Found”
Sanchez reports that “both assessments corresponded in 87% of all functions identified.” She presents this numerical concordance as evidence that the briefer, classroom-embedded assessment (TBFA) can serve as a reliable substitute for the 1980s assessment (SBFA). Each child’s data are narrated in tightly coupled summary sentences:
- Participant 1: “Screaming occurred in the highest percentage of trials in the access (50%) and escape (80%) conditions.”
- Participant 2: “Physical aggression occurred the most frequently in the escape (50%) and access (90%) trials.”
- Participant 3: “Mouthing occurred in almost every trial… suggesting automatic reinforcement.”

Such phrasing foregrounds statistical clarity over narrative complexity; human behavior is rendered as differential rates responding to experimentally arranged contingencies. In her Discussion, Sanchez reasserts the central institutional function of the dissertation: to demonstrate that ABA’s “gold standard” methods can be adapted without loss of authority. She writes, “The results… indicate that comparison between assessment methodologies is not necessary, and the effectiveness of these assessment strategies lies with their ability to inform effective treatment.” Read on.
In this sentence, Sanchez closes the loop of her study: once both methods produce similar results because they led to a “successful intervention,” the comparison itself becomes unnecessary. Effectiveness is defined not by understanding the child, but by the method’s capacity to generate treatment. She is implying that functional analysis no longer needs cross-validation within science; it is validated by its success in producing control. That move privileges intervention efficacy as the field’s ultimate epistemic authority, a classic instance of applied behavior analysis treating its own outcomes as proof of its correctness.

In Sanchez’s conclusion, “success” is measured by the tool’s reliability, not by the learner’s experience. What validates the work is methodological agreement: proof that the system still explains behavior in its own terms. When she writes, “Perhaps the process itself, rather than a specific set of procedures, should be considered the ‘gold standard,’” the word process functions as a safeguard for the discipline’s continuity. It transforms change into self-preservation, allowing behavior analysis to update its techniques while leaving its underlying logic untouched.

Sanchez’s dissertation stands as a record of how behavior analysis secures its own continuity: not by what it discovers about children, but by how it redefines discovery itself. In its very methodological design, Applied Behavior Analysis (ABA) is inhumane and incompatible with the existential truths of human experience.
