The Scientific Mindset

I earned my PhD at the end of 2013 with a thesis on low-level vision science. Obtaining a PhD is a long and sometimes arduous task. You cannot be afraid of digging in and doing the work yourself—it is that attitude, along with a large dose of tenacity and caffeine, that gets you to the finish line.

But here's what most people outside academia don't realize: the PhD itself—the letters after your name, the dissertation on the shelf—is almost beside the point. The main takeaway from the pursuit of a PhD is the training around how to be a scientist. It's a way of thinking, a methodology that becomes so deeply internalized that it shapes how you approach every problem afterward.

It is surprising how many people I meet in my day-to-day work who do not think like scientists—so much so that I need to change the way I speak with them. Not because they lack intelligence, but because we're operating from fundamentally different frameworks for approaching uncertainty.

From Neuroscience to Software: An Unexpected Path

Since my PhD, I have found myself within the domain of software engineering. I originally had aspirations of doing something within the field of computational neuroscience, particularly with respect to the visual system. I wasn't sure exactly what, but I allowed the cards to fall where they would. Eventually I found myself in a small consulting company that was interested in using AI and looking to expand into the tech industry.

The transition felt jarring at first. Academia trains you for depth—to become the world expert on an incredibly narrow slice of knowledge. Industry demands breadth—to solve novel problems quickly with imperfect information. But what I discovered was that the scientific method transcends domains. The same systematic approach that helped me understand how the visual system processes crowded stimuli turned out to be exactly what software engineering needed, even if most practitioners didn't realize it.

The Scientific Method, Translated to Debugging

A scientist approaches a problem by following the scientific method. It maps remarkably well onto debugging complex systems:

First, gather all known information and constraints. What do we already KNOW about the problem? Not what we assume, not what seems likely—what can we verify as fact? In software terms: What are the error messages? What changed recently? What's the expected behavior versus actual behavior? What does the data show?

This step alone eliminates half the wild goose chases I've witnessed. Teams will spend hours debugging based on assumptions that turn out to be completely wrong because no one stopped to verify the basics.
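
To make this concrete, here is a minimal sketch of what the fact-gathering step can look like when written down before any debugging starts. Everything in it, from the field names to the sample values, is illustrative and not taken from any particular incident or tool.

    # A hypothetical record of verified facts, kept separate from guesses.
    from dataclasses import dataclass, field

    @dataclass
    class KnownFacts:
        error_messages: list[str] = field(default_factory=list)      # verbatim, never paraphrased
        recent_changes: list[str] = field(default_factory=list)      # deploys, config edits, migrations
        expected_behavior: str = ""
        actual_behavior: str = ""
        unverified_assumptions: list[str] = field(default_factory=list)  # parked until checked

    # Illustrative values only; the point is forcing the distinction
    # between what is verified and what is merely assumed.
    facts = KnownFacts(
        error_messages=["TimeoutError: upstream did not respond within 5s"],
        recent_changes=["connection pool size reduced in last week's deploy"],
        expected_behavior="checkout completes in under 2 seconds",
        actual_behavior="roughly 10% of checkout requests time out",
        unverified_assumptions=["the upstream service itself is healthy"],
    )

The exercise matters more than the data structure: anything that cannot be verified goes into the assumptions bucket, not the facts.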

Next, form hypotheses—educated guesses as to what the problem could be. Not just one hypothesis, multiple. This is critical. The moment you latch onto a single explanation, you've stopped being a scientist and started being an advocate for your pet theory.

Good hypotheses are specific and falsifiable. "The cache might be wrong" is not a hypothesis. "If the cache is serving stale data, then clearing it should resolve the issue temporarily" is a hypothesis. It predicts an outcome. It can be tested.
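
As a sketch of what that looks like in practice, the cache hypothesis above can be written as a small test. The cache and database clients here are hypothetical stand-ins, assumed to expose get and invalidate methods; the point is that the hypothesis predicts observable outcomes.

    def test_cache_staleness_hypothesis(cache, database, key):
        """Hypothesis: the cache is serving stale data for `key`.

        Prediction: the cached value differs from the source of truth,
        and the mismatch disappears once the entry is invalidated.
        """
        cached_value = cache.get(key)
        fresh_value = database.get(key)

        if cached_value == fresh_value:
            return "falsified: cache already matches the source of truth"

        cache.invalidate(key)
        if cache.get(key) == fresh_value:
            return "supported: stale entry cleared by invalidation"

        return "inconclusive: mismatch persists after invalidation; look elsewhere"

Whatever the outcome, you learn something: the hypothesis survives, dies, or points you somewhere else.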

Then, deconstruct each hypothesis objectively. What are the implications? What would we expect to see if this hypothesis were true? What evidence would disprove it? This is where emotional detachment becomes essential.

I've watched talented engineers become so invested in their initial theory that they start explaining away contradictory evidence rather than updating their mental model. "Well, the logs don't show what I expected, but maybe the logging is broken." Maybe. Or maybe your hypothesis is wrong.

Finally, test with rigor. Control your variables. In debugging terms, this becomes a series of very pointed if/else questions. Change one thing. Observe the result. Did the behavior change in the predicted way? If yes, you've narrowed the search space. If no, you've eliminated a possibility.

Every test should produce learning, regardless of outcome. "We tried X and it didn't work" is only a failure if you learned nothing from it.
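
The "change one thing, observe the result" loop is, at its core, a binary search over possible causes. Here is a minimal sketch of that idea applied to an ordered history of changes, assuming a controlled check that varies only the version under test; it mirrors what git bisect does for commits.

    def find_first_bad_change(changes, behaves_correctly):
        """Return the earliest change after which the controlled test fails.

        Assumes `changes` is ordered oldest to newest and that the failure,
        once introduced, persists in every later change.
        """
        low, high = 0, len(changes) - 1
        while low < high:
            mid = (low + high) // 2
            if behaves_correctly(changes[mid]):
                low = mid + 1   # still healthy here: the bug arrived later
            else:
                high = mid      # already broken here: the bug arrived at or before mid
        return changes[low]

Each iteration halves the search space, and every result is informative regardless of which way it goes.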

Why Background Knowledge Matters (But Isn't Everything)

Background knowledge is essential to being a good debugger. One must fully understand the context of the problem—the architecture, the data flows, the business logic. Deep domain expertise lets you generate better hypotheses faster and recognize patterns that others might miss.

But even when the entire context is unknown, you can find a hook and follow the problem backwards until you reach the point of failure. This is what separates scientific thinking from pure domain expertise. I've debugged systems in languages I barely knew, in domains I'd never touched, because the methodology transcends the specifics.

You start with what you can observe. You form a hypothesis about what might be one step upstream. You test it. You follow the thread. Eventually, you reach the point where your assumptions no longer hold—and there's your bug.
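
Here is a sketch of that backward walk, under the assumption that the system can be viewed as an ordered series of stages you can inspect; the stage names and inspection functions are hypothetical.

    def locate_point_of_failure(stages, record_id, expected_at):
        """Walk from the observed symptom back toward the input.

        `stages` is a list of (stage_name, inspect_fn) pairs ordered from the
        output backwards; `expected_at[stage_name]` is the assumption held
        about the record at that stage.
        """
        previous = "the observed symptom"
        for stage_name, inspect in stages:
            if inspect(record_id) == expected_at[stage_name]:
                # Assumptions still hold here, so the fault lies downstream of this stage.
                return f"data is correct at '{stage_name}'; the failure sits between '{stage_name}' and {previous}"
            previous = f"'{stage_name}'"
        return "wrong at every stage: question the input itself, or the expectations"

The first stage where the data still matches expectations brackets the bug: it lives between that stage and the one just downstream of it.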

The Legacy Code Gauntlet

Let me give you an example of the kind of problem I am often thrown at work. As with many companies, we have quite a bit of legacy code that we are forced to maintain, simply because many vital business transactions and processes rely on it. On occasion, an issue will pop up that no one knows anything about. The original developer is long gone, the documentation is sparse or nonexistent, and the system has been modified so many times that nobody fully understands how it works anymore.

This is where the difference between scientists and non-scientists becomes starkly visible.

In my tenure as a software engineer, I have worked with my fair share of non-scientist developers, and the difference is quite notable. These non-scientists are typically unable to formulate a hypothesis and instead opt to simply code around the issue. Many will offer to refactor huge swaths of code, regardless of the technical debt involved.

I understand the impulse. When you don't understand why something is breaking, rewriting it feels like regaining control. If you can't explain the bug, at least you can explain the new code. But this approach treats the symptom (our confusion) rather than the disease (the actual bug). Worse, it often introduces new bugs while leaving the underlying issue unresolved.

Of course, this is challenged by people like me. When I press a non-scientist developer on why they chose a particular solution, the answer usually falls flat: a weak "I don't know," or sometimes a sudden, sweeping criticism of past architectural decisions made by the company.

The defensiveness is telling. It emerges from insecurity—not knowing why something breaks but feeling pressured to have an answer. So they deflect: "This whole system is badly designed anyway." Maybe it is. But that doesn't absolve you from understanding it well enough to fix it properly.

Such non-scientists tend to be more defensive about the code they write because they lack the ability to defend its architecture and the motivation behind certain decisions. Without the scientific method as a foundation, every question feels like an attack, and every bug feels like a personal failure.

The Vulnerability of Not Knowing

Personally, I want the truth. Sometimes the truth hurts. I'm not the best developer nor the best manager, but I recognize this and strive always towards what is better. I live off of constructive feedback because that is the only surefire way to grow.

This might sound like false modesty, but it's actually central to scientific thinking: epistemic humility. I owe my scientific attitude to the notion of "the more I know, the more I realize I don't know."

I am never convinced that there is a RIGHT answer—I constantly speculate. Could there be a better solution? Maybe that person knows more than I do; let's have a listen. It is important to stay humble and to realize that you don't have all the answers. Only then is there room for growth.

This creates a paradox that confuses people who haven't internalized scientific thinking: I can be simultaneously confident in my diagnostic process and uncertain about any specific hypothesis. I trust the method while holding my conclusions lightly.

When I say "I think the issue is database replication lag," I'm not declaring truth—I'm proposing the most likely hypothesis given current evidence. If new evidence contradicts it, I'll update my model without ego involvement. The hypothesis failing doesn't mean I failed; it means I learned something.

Why Debugging Is Actually Satisfying

Taking a scientific approach to debugging unleashes the power to debug ANYTHING. You can simply follow the trail of facts until your hypotheses and assumptions no longer hold. This is liberating. You're not limited to problems in your domain of expertise. You're not dependent on having seen this exact issue before.

Surprisingly, I enjoy debugging. I like digging into the unknown and figuring out the puzzle. There's something deeply satisfying about taking a system that seems chaotic and finding the underlying order. About following a chain of logic to its breaking point and discovering exactly where reality deviates from expectations.

I like to tap into existing patterns and simply inject a solution into current frameworks. This is pragmatic science—you don't always get to redesign the whole system. Sometimes you're working within constraints that are less than ideal. But understanding how things currently work lets you make targeted interventions that minimize disruption while solving the actual problem.

I pride myself on finding a bug and fixing it with minimal technical debt. Bug squashing can be a tedious job. It's not always fun like creating something from scratch with all the fancy bells and whistles. But it is inherently satisfying to crack open the code and track down an issue that has eluded others for so long.

I can do this because I approach the problem like a scientist. It is this same attitude that I am always on the lookout for when evaluating people to join the team.

Here's what I'm actually screening for in interviews and code reviews:

  • Do they form multiple hypotheses before committing to a solution? Or do they jump to the first explanation that seems plausible?
  • How do they respond when evidence contradicts their initial theory? Do they update their model or defend their original position?
  • Can they articulate their reasoning? Not just what they did, but why—what were they testing, and what did they expect to learn?
  • Are they comfortable saying "I don't know"? Or do they feel compelled to have an answer to every question?
  • Do they treat bugs as learning opportunities? Or as frustrating obstacles to shipping?

These aren't personality traits. They're habits of mind that can be developed. But they're also surprisingly rare, especially in an industry that often rewards speed over rigor and confidence over accuracy.

The Broader Lesson

The scientific method isn't just useful for debugging code. It's a framework for approaching any complex problem where the solution isn't immediately obvious: architectural decisions, performance optimization, team dynamics, business strategy.

The pattern is always the same: gather facts, form hypotheses, test systematically, update your model based on evidence. Stay humble enough to question your assumptions and rigorous enough to follow the data wherever it leads.

This is what I bring to engineering leadership now—technical expertise and a methodology for navigating uncertainty. And it's what I wish more people in technology would develop, because the problems we're solving are only getting more complex.

The best engineers I've worked with aren't necessarily the ones who know the most. They're the ones who think like scientists.