# PLEASE

## Core Claim

subtract.ing and PLEASE are not competing solutions. They are sequential phases of the same intervention: removing intermediary layers between human intent and system execution.

- **PLEASE** is the extinction burst protocol for the transition period, in which automated systems must interact with human-gated platforms (OAuth, 2FA, CAPTCHA, WebAuthn, confirmation modals).
- **subtract.ing** is the terminal state, in which those gates don't exist because the browser/platform architecture that erected them has been subtracted.

## ABA Framing

The browser/platform stack is a maladaptive antecedent:

- **Antecedent:** The user types an intent. It passes through layers of interpretation (orchestration, API, UI framework, rendering, analytics, billing).
- **Behavior:** The intent that reaches the system is distorted by every layer it passed through.
- **Consequence:** Unreliable, slow, energy-expensive. The user adapts to the platform's interpretation rather than expressing intent directly.

That is a maladaptive behavioral chain.

subtract.ing is **antecedent modification.** The prompt goes directly to the system; no layer of interpretation distorts it. The consequence is immediate, inspectable, deterministic. The human learns the direct command, the translation layer fades, and the behavioral chain shortens until there is no mediation at all.

PLEASE is **extinction burst management.** During the transition, human-gated walls still exist, and PLEASE carries the attestation needed to pass through them on the human's behalf. As the walls are subtracted (because the platforms behind them are replaced by direct system interaction), PLEASE's role diminishes.

## Thermodynamic Argument

Published in the subtract.ing README (commit c39fe41):

```
E_subtract(t) -> E_syscall ~ 10^-5 J as learned commands grow
E_local = 10^2 J/request (flat, per inference)
E_api >= 10^4 J/request (flat or increasing with orchestration)
```

The first equation decreases over time.
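The decay can be made concrete with a toy projection. Assuming, purely for illustration, that the fraction of intents already graduated to the T0 syscall floor grows as `1 - 0.5^t` (an invented schedule, not measured data), the blended per-request energy of the subtract path falls toward `E_syscall` while the flat regimes stay put:

```shell
# Toy model of the three regimes. The graduation curve p = 1 - 0.5^t is an
# assumption for illustration; the floors are the published per-request values.
awk 'BEGIN {
  e_syscall = 1e-5   # J, T0 syscall floor
  e_local   = 1e2    # J, T1 local inference (flat)
  e_api     = 1e4    # J, T2 API inference (flat)
  for (t = 1; t <= 5; t++) {
    p = 1 - 0.5^t                          # assumed graduated fraction at step t
    e_sub = p * e_syscall + (1 - p) * e_api
    printf "t=%d  E_subtract=%.1f  E_local=%.1f  E_api=%.1f\n", t, e_sub, e_local, e_api
  }
}'
```

Any monotone graduation curve produces the same qualitative picture; only the first term has a time dependence.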
The other two don't. Any architecture that requires permanent intermediaries between human intent and system execution is thermodynamically and institutionally unsustainable. Energy and error both decrease monotonically as intermediaries are removed. The only stable equilibrium is direct human-to-system communication.

## Academic Positioning

- **Intersection:** HCI + Applied Behavior Analysis + Sustainability Informatics
- **That intersection is currently empty.** No one is applying ABA methodology to human-computer interaction with empirical energy data showing the behavioral chain shortening over time.
- **The BCBA is not adjacent to this work. It IS this work.**
- **General claim (falsifiable):** Any architecture that requires permanent intermediaries between human intent and system execution is thermodynamically and institutionally unsustainable. The proof is that energy and error both decrease monotonically as intermediaries are removed.

## Sequence

1. subtract.ing proves the pattern at the compute layer (published, live)
2. PLEASE applies the pattern at the institutional layer (spec exists, not yet built)
3. The bridge paper connects them: same ABC analysis, same decay function, different system layers

## Measurement

*Answering: how do you instrument and measure the transition from AI-translated intent to direct system calls across the initial user base?*

The instrument is already on disk. No new telemetry stack.

1. **Baseline event: `command_not_found_handle` fires.** Every typed intent that misses bash's resolution logs the intent string, the tier that served it (T2 inference, T1 local model, T0.5 man-page match, T0 lookup.tsv hit), latency, and whether the user accepted the suggestion. One append-only line per event, signed in batches via `ssh-keygen -Y sign`. No SaaS, no schema migration, no consent popup. The hook is the instrument.
2. **Decay signal: tier migration per intent-class.** For a given intent ("show my files", "who's listening on 8080"), first occurrences resolve at T2. Subsequent occurrences resolve at progressively lower tiers as `HISTFILE` accumulates the actual command and readline/history completes it directly. The decay function is `tier(intent, t)` flattening toward "resolved in bash without ever reaching subtract." When the handler stops firing for an intent-class, that intent has graduated. Graduation count over time is the dependent variable.
3. **Energy is computed, not measured.** Each tier has a known energy floor (syscall ~ 10^-5 J, local inference ~ 10^2 J, API >= 10^4 J, published in software-as-a-besides). Map each event's tier to its floor and sum per user per week. No wattmeter needed; the floors are thermodynamic, not empirical.
4. **Error is the same log, different column.** Accepted-vs-rejected suggestion, plus any subsequent correction within N seconds, is the error signal. Error decay is measured the same way as energy decay: same events, different projection.
5. **N is small, and that is the point.** This is a case series, not a powered trial. The thesis isn't population statistics; it's that the decay function exists and is monotonic *per user*. Single-subject design is the ABA canon the question implicitly invokes.
6. **Falsifiability.** If `tier(intent, t)` does not flatten -- if users keep hitting T2 on repeated intents -- the thesis is wrong. The log shows it. No dashboard needed; `awk` over signed frames is the analysis layer.

One-line version: *the router is the instrument. Every intent that enters `command_not_found_handle` is logged with its resolving tier; tier decay per intent-class over time is the decay function; energy and error are projections of the same log.*
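As a minimal sketch of the baseline instrument: a `command_not_found_handle` that appends one TSV line per missed intent. Everything beyond the bash hook name is illustrative -- the `SUBTRACT_LOG`/`SUBTRACT_LOOKUP` paths, the column layout, and the T0-only resolution stub are assumptions, and the full tier chain and batch signing are elided:

```shell
# Hypothetical sketch, not a published interface. Logs epoch, tier,
# latency_ns, intent as one append-only TSV line per missed command.
command_not_found_handle() {
  local intent="$*" log="${SUBTRACT_LOG:-$HOME/.subtract/events.tsv}"
  local t0 t1 tier cmd
  t0=$(date +%s%N)
  # T0 path only, as a stub: exact-match the intent against lookup.tsv.
  # The real chain would fall through T0.5 (man-page match), T1 (local
  # model), and T2 (inference) here; omitted to keep the sketch small.
  cmd=$(awk -F'\t' -v i="$intent" '$1 == i { print $2; exit }' \
        "${SUBTRACT_LOOKUP:-$HOME/.subtract/lookup.tsv}" 2>/dev/null)
  if [ -n "$cmd" ]; then tier=T0; else tier=T2; fi
  t1=$(date +%s%N)
  mkdir -p "${log%/*}"
  printf '%s\t%s\t%s\t%s\n' "$(date +%s)" "$tier" "$(( t1 - t0 ))" "$intent" >> "$log"
  # Batches of these lines would later be signed: ssh-keygen -Y sign ...
  if [ -n "$cmd" ]; then eval "$cmd"; else return 127; fi
}
```

Bash invokes the function with the missed command and its arguments, so the hook itself is the only install step; the log directory and lookup table are the only state.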
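The energy and error projections reduce to one `awk` pass over the same log. A sketch, assuming a TSV layout of epoch, tier, latency_ns, intent -- the `subtract_report` name, the T0.5 floor, and the column order are assumptions of this sketch:

```shell
# Hypothetical projection layer: tier column -> joules (published floors),
# plus a raw T2-hit count per week. Repeated T2 hits on old intents are
# the falsifier; falling weekly energy is the decay signal.
subtract_report() {
  awk -F'\t' '
    BEGIN {
      joules["T0"]   = 1e-5   # syscall floor
      joules["T0.5"] = 1e-5   # man-page match: local lookup, assumed syscall-scale
      joules["T1"]   = 1e2    # local inference
      joules["T2"]   = 1e4    # API inference
    }
    {
      week = int($1 / 604800)        # epoch seconds -> week index
      energy[week] += joules[$2]     # energy is a projection of the tier column
      if ($2 == "T2") t2[week]++     # error/decay is the same rows, another column
    }
    END {
      for (w in energy)
        printf "week=%d  E=%.5f J  T2_hits=%d\n", w, energy[w], t2[w] + 0
    }
  ' "${1:-$HOME/.subtract/events.tsv}"
}
```

No dashboard and no wattmeter: the report is a pure function of the signed frames, so anyone holding the log can recompute both curves.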