Fremd Engineering Panel • Palatine, IL (District 211)
March 12, 2026
Guest Lecture
High School Engineering Students
A career journey from computer science and behavioral neuroscience through data science, edtech, and scaling systems — connecting how the brain detects signals in noise to how we build and evaluate non-deterministic AI systems.
This talk traces a career path from computer science through behavioral neuroscience, data science, edtech platform building, and into the age of accessible LLMs. Each phase built on the last — and the thread connecting them all is the question of how we separate signal from noise.
Where it started. The foundation for everything that followed — logic, systems thinking, and the discipline of building things that work.
Studying how the brain makes decisions under uncertainty. How do you detect a faint signal when the world is full of noise? This is the core question of signal detection theory, and it turns out it applies far beyond the lab.
Statistics became the bridge between neuroscience and engineering. Understanding distributions, thresholds, and confidence intervals isn't just academic — it's how you build systems that make reliable decisions at scale.
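The idea of distributions and decision thresholds can be made concrete with a small simulation. This is a minimal sketch, not anything from the talk itself: the distributions, threshold value, and trial counts are all illustrative assumptions. "Noise" trials contain only Gaussian noise; "signal" trials add a faint bump; a fixed criterion decides when to say "signal," and the hit rate and false-alarm rate fall out of that choice.

```python
import random

random.seed(0)

# Illustrative parameters (assumptions, not from the talk):
NOISE_MEAN, SIGNAL_BUMP, SD = 0.0, 1.0, 1.0
THRESHOLD = 0.5  # decision criterion: respond "signal" above this value

# Simulate many trials of each type.
noise_trials = [random.gauss(NOISE_MEAN, SD) for _ in range(10_000)]
signal_trials = [random.gauss(NOISE_MEAN + SIGNAL_BUMP, SD) for _ in range(10_000)]

# Classic signal-detection outcomes for this threshold.
hit_rate = sum(x > THRESHOLD for x in signal_trials) / len(signal_trials)
false_alarm_rate = sum(x > THRESHOLD for x in noise_trials) / len(noise_trials)

print(f"hits: {hit_rate:.2f}, false alarms: {false_alarm_rate:.2f}")
```

Sliding the threshold up or down trades hits against false alarms, which is exactly the tradeoff a reliable decision-making system has to choose deliberately.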
Taking those foundations into the real world: building platforms, working with messy data, and learning that the gap between a model and a product is where most of the hard work lives.
Multiple rounds of building, scaling, and optimizing systems. Each iteration brought new lessons about what breaks under load, what matters to users, and how to make tradeoffs that hold up over time.
The landscape shifted. LLMs made AI accessible in ways that weren't possible before — but they also introduced a fundamental challenge: non-determinism. The same prompt can produce different outputs. So how do you evaluate a system that never answers the same way twice?
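One common answer, sketched under stated assumptions: instead of judging a single output, sample the same prompt many times and report a pass rate with an error margin. The `model` function below is a stand-in for a real non-deterministic LLM call, and the prompt, outputs, and pass criterion are all hypothetical.

```python
import math
import random

random.seed(1)

def model(prompt: str) -> str:
    # Stand-in for a non-deterministic LLM: same prompt, varying outputs.
    return random.choice(["4", "4", "4", "four", "5"])

def passes(output: str) -> bool:
    # Hypothetical pass criterion for this toy task.
    return output.strip() == "4"

N = 500
wins = sum(passes(model("What is 2 + 2?")) for _ in range(N))
rate = wins / N

# 95% margin of error via the normal approximation to the binomial.
margin = 1.96 * math.sqrt(rate * (1 - rate) / N)
print(f"pass rate: {rate:.2f} +/- {margin:.2f}")
```

The evaluation target shifts from "did it answer correctly?" to "how often does it answer correctly, and how sure are we of that number?"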
Traditional engineering assumes stability. But humans aren't stable — they break rules, generate noisy signals, and behave probabilistically. AI systems that work in the real world must be built with uncertainty in mind. The question isn't "is this the answer?" — it's "how confident are we, given the signal?"
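The "how confident are we, given the signal?" framing is, at bottom, a Bayesian update. A minimal sketch, with made-up numbers: a noisy detector fires, and we ask how much that observation should move our belief.

```python
def posterior(prior: float, p_signal_given_true: float,
              p_signal_given_false: float) -> float:
    """P(hypothesis | signal) from Bayes' rule."""
    evidence = (prior * p_signal_given_true
                + (1 - prior) * p_signal_given_false)
    return prior * p_signal_given_true / evidence

# Example: we start at 10% belief; the detector fires 90% of the time
# when the hypothesis is true, but also 20% of the time when it's false.
print(posterior(prior=0.10, p_signal_given_true=0.9,
                p_signal_given_false=0.2))  # -> 0.333...
```

A single noisy signal moves confidence from 10% to about 33%, not to certainty. Systems built with uncertainty in mind accumulate signals this way instead of committing to one answer.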
You can't inspect the mechanism directly — you can only probe the output and use the response as signal. This is true of the visual system (raw photons compressed through layers of abstraction into perception) and it's true of language models (text at scale compressed through attention layers into responses). The engineering insight: use the same feedback loop the brain runs, now at scale.
Every hard problem in AI is a signal-in-noise problem. Uncertainty isn't what's left when the engineering is done — it's the material you work with. The brain mastered this. Now we're building systems that do the same. The next chapter of this field is yours to write.