An independent research lab building trauma-informed, anti-Procrustean software. This website practices what it describes: Synaptic Reading surfaces conceptual connections across the text you're reading, and a live Hebbian plasticity demo lets you interact with the learning principle underpinning the app's design. AI-assisted development, grounded in published science. Early-stage, not a finished product.
az8T Lab builds software rooted in three commitments: trauma-informed design (interactions shaped by nervous system evidence: calm rhythms, no alarm), anti-Procrustean design (the tool adapts to the person, never the reverse), and Eco-Coding (computational efficiency as environmental responsibility). Every application is AI-assisted. This website is itself a working example; its animations, disclosure patterns, and interactive features all follow these same constraints.
The methodology draws on perceptual learning, active inference, and nervous system regulation research. Software is structured to support user capacity, treating people as active participants building lasting skill rather than passive consumers. These principles are informed by established science, but their specific application to consumer software is az8T Lab's original exploration, not yet independently validated.
The working blueprint: trauma-informed, anti-Procrustean, and Eco-Coding principles applied as engineering constraints. Interaction rhythms, animation curves, color choices, and disclosure patterns are derived from clinical evidence, then codified into repeatable design rules built into the codebase at the architectural level.
Structured practice strengthens recognition patterns. Like learning to read music or drive — repeated, active engagement builds awareness that deepens with practice.
Awareness costs mental energy. Structured practice lowers that cost — making perception less effortful over time. The less effort it takes, the more naturally it happens.
Before the AI shows you anything, your brain is already guessing. That gap between prediction and reality is exactly what sharpens perception — not passive consumption.
Calm rhythms, gentle animations, no alarms. Every interaction respects the nervous system — especially for people whose systems have learned to be on high alert.
Path uses your phone's camera and AI to scan indoor spaces for risks most people overlook — not because the dangers are invisible, but because nobody trained their eyes to catch them. Each scan gives your visual system structured practice with real hazard patterns.
AI identifies potential combustible material proximity, synthetic material concentrations, and egress accessibility. Findings are suggestive, not deterministic.
Every perceptible indoor surface at all body heights: head, knee, shin, foot, elbow. Not limited to head height. Material inference drives injury severity prediction.
Fall chains, furniture climbing incentives, progressive instability, temporal degradation patterns. One risk amplifies another — the system predicts the sequence.
Designed as a cognitive orthotic rather than a prosthetic — meaning it supports your perception while encouraging your own recognition, retrieval, and reasoning skills. The goal is building independence, not dependence on the tool.
Consent-based features: Night Light (blue light protection with empirical justification), 20-20-20 Vision Protection, ECO indoor plant density assessment.
Not a feature: a systematized engineering methodology. 60 BPM grounding haptics, fade-only animations (200–300ms), progressive disclosure, consent-first gating, supportive peer tone. Every constraint is evidence-based and architecturally enforced. No red. No alarm. No startle.
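As an illustration of what "architecturally enforced" can mean in practice, here is a minimal TypeScript sketch of constraint-as-code. The token names and clamp helper are hypothetical, not az8T Lab's actual implementation:

```ts
// Hypothetical sketch of constraint-as-code; token names and the clamp
// helper are illustrative, not az8T Lab's actual implementation.
export const CALM = {
  hapticBpm: 60,                  // grounding haptic rhythm
  fadeMs: { min: 200, max: 300 }, // fade-only animation window
  easing: "linear",               // no bounce, no overshoot
} as const;

// Requested durations are clamped into the permitted window, so the
// constraint is inherited by every component rather than re-decided.
export function fadeDuration(requestedMs: number): number {
  return Math.min(Math.max(requestedMs, CALM.fadeMs.min), CALM.fadeMs.max);
}
```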
This system has no validated false positive or false negative rates. No user studies, before/after testing, or independent evaluation have been conducted. AI analysis is exploratory — it may miss real hazards or flag non-hazards. Path is not a substitute for professional safety assessment and should be used alongside your own judgment, not in place of it.
Independent review identified concrete use cases where this tool addresses genuine pain points:
Parents of toddlers — real-time visual feedback on hazards manual checklists miss.
Elderly care — falls are the leading cause of injury death in adults 65+. Accessible DIY assessment.
Renters evaluating apartments — identify hazards before signing a lease.
Trauma survivors, neurodivergent individuals — trauma-informed interface that never alarms or startles.
Activate the nodes below in any order — the guidance highlights suggest a path, but you choose. Watch connections strengthen as you build the circuit. This is the Hebbian principle Path's design draws on: neurons that fire together, wire together. Each reload reshuffles the suggested sequence.
Follow the guidance glow — or choose your own path...
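For readers who want the rule itself, here is a minimal TypeScript sketch of the textbook Hebbian update the demo visualizes. This is the classic formulation, not the demo's source code:

```ts
// Textbook Hebbian update, the rule the demo above visualizes: the weight
// between two co-active nodes grows with the product of their activity.
type Weights = number[][];

function hebbianStep(w: Weights, activity: number[], lr = 0.1): Weights {
  return w.map((row, i) =>
    row.map((wij, j) =>
      i === j ? 0 : wij + lr * activity[i] * activity[j] // fire together, wire together
    )
  );
}

// Activate nodes 0 and 1 together: their mutual connection strengthens.
let w: Weights = [
  [0, 0],
  [0, 0],
];
w = hebbianStep(w, [1, 1]); // w[0][1] === w[1][0] === 0.1 after one step
```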
This project draws from established research in several fields. None of this is invented here — the original contribution, if any, is the specific combination applied to consumer safety software. Whether that combination is genuinely novel is an open question.
Established neuroscience: repeated exposure strengthens recognition patterns. Applied here as structured practice with real hazard images — not a new discovery, but a specific application to safety awareness.
Clinical principles codified as hard engineering constraints — not guidelines. 60 BPM haptics, 200–300ms fade-only animations, no red, no alarm. Draws from existing trauma-informed care frameworks, applied as architecturally inheritable rules that any project can adopt.
Karl Friston's theoretical framework suggests the brain minimizes prediction error. This project applies that concept to hazard detection — reducing the cognitive cost of identifying risks through structured exposure. The underlying science is established; this application is exploratory.
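A toy sketch can make the prediction-error idea concrete. The snippet below uses a simple delta rule, a much simpler relative of active inference rather than the free-energy formalism itself; all names and values are illustrative:

```ts
// Toy delta rule standing in for prediction-error minimization. This is a
// far simpler relative of active inference, not the free-energy formalism.
function updatePrediction(prediction: number, observation: number, rate = 0.2): number {
  const error = observation - prediction; // the prediction error
  return prediction + rate * error;       // nudge the prediction toward reality
}

// Repeated structured exposure shrinks the error, i.e. the effort of
// recognizing the pattern, a little more on every pass.
let belief = 0;
for (const observed of [1, 1, 1, 1]) belief = updatePrediction(belief, observed);
// belief ≈ 0.59; four exposures later, the surprise is already much smaller
```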
All az8T Lab software is built using AI-assisted development: natural language describing intent, AI generating implementation. This is a widely adopted practice in 2025–2026, not something we invented. We use it because it works.
Our term for treating computational efficiency as environmental responsibility — pausing off-screen animations, consolidating redundant operations, throttling idle processes. The individual techniques are common; framing them as a design constraint rather than performance optimization is our contribution.
Teaching complex concepts through interactive demonstrations, timed consolidation, and progressive disclosure — rather than static explanation. On this page: the guided Hebbian Demo is a live proof-of-concept; Synaptic Reading reveals conceptual connections across the text in real time.
If the approach proves useful in indoor risk detection, similar principles could potentially apply to other domains. This is speculative — we haven't validated the core approach yet, let alone extensions of it.
AI-assisted risk indicator identification, impact surface mapping, spatial risk pattern recognition, and awareness training through live camera analysis.
If the design principles work as intended, they might transfer to other domains. This remains to be seen.
Any future work would depend entirely on whether the current approach proves genuinely useful — which hasn't been validated yet.
Path performs live AI analysis of camera frames captured from your device. Each frame is transmitted to cloud-based neural networks for real-time surface risk assessment. No images are stored, cached, or retained — frames are analyzed and immediately discarded. No image data persists on any server after analysis completes.
During active scanning, your camera feed is transmitted over an encrypted internet connection to external AI infrastructure. This means: (1) your indoor environment is briefly visible to cloud processing systems during each analysis request, (2) network intermediaries could theoretically intercept encrypted traffic under extraordinary circumstances, and (3) the AI provider's data handling practices are governed by their own privacy policy. Path does not collect, store, or sell any visual data.
No user accounts. No image databases. No behavioral tracking. No analytics pipelines. No advertising identifiers. The only data that leaves your device is the camera frame being analyzed — and it is discarded immediately after processing. The privacy posture is not a feature; it is a structural constraint of the cognitive orthotic model. A tool designed to build your independence has no architectural reason to accumulate your data.
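A minimal sketch of the analyze-and-discard flow described above, assuming a hypothetical HTTPS endpoint; the URL and response shape are placeholders, not Path's actual API:

```ts
// Sketch of the analyze-and-discard flow. The endpoint and response shape
// are placeholders, not Path's actual API.
async function analyzeFrame(frame: Blob): Promise<unknown> {
  const response = await fetch("https://analysis.example/assess", {
    method: "POST",
    body: frame, // transmitted over HTTPS for this one request
  });
  return response.json();
  // Nothing writes `frame` to storage; when this function returns, the
  // Blob is unreferenced on the client and, per the stated policy,
  // already discarded on the server.
}
```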
This page practices what it describes. The adaptive neural canvas, the Synaptic Reading system, and the Hebbian Demo all participate in the same Eco-Coding constraints documented below — structurally integrated from the start, not bolted on as afterthoughts.
Eco-Coding is our term for a simple idea: every computation has an energy cost, and software should be designed to minimize unnecessary computation. The techniques themselves — pausing off-screen animations, caching DOM references, throttling idle processes — are well-established. Our contribution is framing them as a design principle rather than treating them as performance optimization afterthoughts.
CSS animations on this page are automatically paused when their elements leave the viewport. Off-screen elements consume zero animation frames — a structural constraint, not a feature toggle.
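One common way to implement this pattern, sketched in TypeScript; the `.animated` selector is illustrative, and this is not the page's actual source:

```ts
// One common implementation of the pause protocol: toggle
// animation-play-state from an IntersectionObserver callback, so
// off-screen elements stop consuming animation frames entirely.
const pauser = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    const el = entry.target as HTMLElement;
    el.style.animationPlayState = entry.isIntersecting ? "running" : "paused";
  }
});

// The ".animated" selector is illustrative, not this page's actual markup.
document.querySelectorAll<HTMLElement>(".animated").forEach((el) => pauser.observe(el));
```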
Duplicate @keyframes definitions were identified and merged. Three identical opacity animations became one shared definition — eliminating redundant browser parsing and reducing stylesheet weight.
Scroll-driven updates reference pre-cached DOM elements instead of querying the document on every frame. This eliminates thousands of unnecessary DOM lookups per browsing session.
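A sketch of the cache-once, read-per-frame pattern; the selector and class names are illustrative:

```ts
// Query once at startup, then reuse the cached references on every
// scroll frame instead of hitting document.querySelectorAll again.
const sections = Array.from(document.querySelectorAll<HTMLElement>("section"));

let ticking = false;
window.addEventListener("scroll", () => {
  if (ticking) return; // coalesce scroll events into one rAF callback
  ticking = true;
  requestAnimationFrame(() => {
    for (const s of sections) {
      // Per-frame work reads cached nodes; no DOM lookups happen here.
      s.classList.toggle("in-view", s.getBoundingClientRect().top < window.innerHeight);
    }
    ticking = false;
  });
});
```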
The neural network canvas throttles to 15fps when no user interaction is detected, returning to the full 60fps only when interaction resumes. Idle pages consume ~75% fewer GPU cycles.
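A minimal sketch of frame-budget throttling, under the assumption that "interaction" means recent pointer movement; the 2-second idle window is illustrative:

```ts
// Frame-budget throttling: render at ~15fps while idle, ~60fps when the
// user has interacted recently. The 2s idle window is an assumption.
let lastInteraction = 0;
let lastFrame = 0;

window.addEventListener("pointermove", () => {
  lastInteraction = performance.now();
});

function loop(now: number): void {
  const active = now - lastInteraction < 2000;
  const budget = active ? 1000 / 60 : 1000 / 15; // ms per frame
  if (now - lastFrame >= budget) {
    lastFrame = now;
    // drawFrame() would render the canvas here (omitted in this sketch).
  }
  requestAnimationFrame(loop);
}
requestAnimationFrame(loop);
```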
Path builds on decades of published research across multiple fields. The scientific foundations are not ours — the researchers listed below did the difficult, original work. Our contribution is limited to combining their findings into a specific software application, which remains unvalidated.
Hebbian learning theory (Donald Hebb, 1949) — the synaptic plasticity principle underlying our perceptual training model. "Neurons that fire together wire together."
Free energy principle and active inference framework (Karl Friston) — the prediction-error minimization model that structures how Path presents environmental information.
Polyvagal theory (Stephen Porges) — nervous system state regulation research informing our trauma-informed interaction design. Calm rhythms, no alarm signals.
Perceptual learning research (Eleanor J. Gibson and successors) — the empirical foundation for how structured exposure builds lasting environmental awareness without explicit instruction.
Somatic marker hypothesis (Antonio Damasio) — embodied cognition research connecting emotional processing to decision-making, informing our haptic and sensory design choices.
Extended mind thesis (Andy Clark and David Chalmers) and predictive processing — the philosophical framework supporting Path's role as cognitive orthotic rather than passive tool.
This list is not exhaustive. Dozens of researchers across neuroscience, cognitive psychology, human-computer interaction, and fire safety science have contributed foundational knowledge that this project draws upon. We acknowledge the gap between citing research and validating its application.
The scientific principles above are established and published. The intellectual labor claimed by this project is specifically the operationalization — translating those principles into working, interactive software. This website is itself evidence of that work, not a description of it.
Claim: A real-time conceptual connection engine built from scratch.
Verifiable evidence: Activate the Synaptic toggle on this page. The system walks the entire DOM tree, identifies ~50 keywords across 5 conceptual clusters (perception, risk, learning, design, neural), wraps matching text nodes in annotated spans, and cross-links concepts that appear across multiple visible sections simultaneously. Phase-offset breathing animations, scroll-aware IntersectionObserver activation, and gravity-point integration with the neural canvas are custom-engineered; no library, no plugin, and no prior implementation was adapted. A simplified sketch of the keyword-wrapping walk appears below.
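The sketch below shows one way such a keyword-wrapping DOM walk can work; the cluster data and structure are illustrative stand-ins, and the real system described above is more elaborate:

```ts
// Simplified keyword-wrapping walk. The cluster data below is a stand-in;
// the real system reportedly tracks ~50 keywords across 5 clusters.
const clusters: Record<string, string[]> = {
  neural: ["hebbian", "synaptic"],
  perception: ["awareness", "perception"],
};

function annotate(root: Node): void {
  // Collect text nodes first so DOM mutation doesn't disturb the walk.
  const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
  const textNodes: Text[] = [];
  while (walker.nextNode()) textNodes.push(walker.currentNode as Text);

  for (const node of textNodes) {
    clusterLoop: for (const [cluster, words] of Object.entries(clusters)) {
      for (const word of words) {
        const i = node.data.toLowerCase().indexOf(word);
        if (i < 0) continue;
        const match = node.splitText(i); // split off text before the match
        match.splitText(word.length);    // isolate the keyword itself
        const span = document.createElement("span");
        span.dataset.cluster = cluster;  // annotation used for cross-linking
        match.replaceWith(span);
        span.appendChild(match);
        break clusterLoop; // one wrap per text node keeps this sketch simple
      }
    }
  }
}
```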
Claim: An interactive operationalization of Hebb's 1949 synaptic plasticity principle.
Verifiable evidence: Interact with the Hebbian Demo above. Each node activation draws an SVG connection line between the activated pair and emits a pulse ring; the guided sequence reshuffles on each reload, and a progressive state tracker records each step. The circuit visualization directly demonstrates the principle it references: connections literally strengthen as you build the sequence. This is original engineering; no existing Hebbian visualization library was used. A minimal sketch of the connection drawing appears below.
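A minimal sketch of the connection drawing, with stroke width standing in for synaptic strength; names and scaling are illustrative, not the demo's source:

```ts
// Minimal connection drawing: an SVG line between two activated nodes,
// with stroke width standing in for synaptic strength (scaling illustrative).
const SVG_NS = "http://www.w3.org/2000/svg";

function drawConnection(
  svg: SVGSVGElement,
  a: { x: number; y: number },
  b: { x: number; y: number },
  strength: number, // 0..1, grows as the pair co-activates
): void {
  const line = document.createElementNS(SVG_NS, "line");
  line.setAttribute("x1", String(a.x));
  line.setAttribute("y1", String(a.y));
  line.setAttribute("x2", String(b.x));
  line.setAttribute("y2", String(b.y));
  line.setAttribute("stroke", "currentColor");
  line.setAttribute("stroke-width", String(1 + 4 * strength)); // stronger = thicker
  svg.appendChild(line);
}
```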
Claim: The website is itself a working implementation of the methodology it describes.
Verifiable evidence: This page's consent-first progressive disclosure (you chose to expand this section), its fade-only linear animations (200–300ms ease, no bounce), its adaptive neural canvas (throttling to 15fps when idle), its wellness reading timer, and its Eco-Coding constraints (animation pause protocol, cached DOM references, consolidated keyframes) are not described separately and then bolted on; they are the same architectural rules applied to the same codebase. The design methodology and the design artifact are one object. This self-referential coherence is the intellectual contribution.
None of the above constitutes peer-reviewed evidence that this approach works for end users. The claim is narrower: that the specific operationalization — turning published principles into interactive, self-demonstrating software — is original engineering work by this project's author. Whether it achieves its intended outcomes remains an open empirical question.