Teaching Systems Thinking with AI: What Building a Weather Agent Taught Me


One of the most powerful ways to learn something deeply is to teach it to someone else — and nothing tests your ability to explain complex ideas like trying to do it with a child.

Recently, I built a small AI-powered weather agent with my 8-year-old. The goal was simple: take real weather data and turn it into outfit advice a kid could actually use. But the real value wasn’t in the app we built — it was in what the process taught both of us about engineering, AI, and systems thinking.

Building the Project: A Simple, End-to-End System

The project itself was intentionally straightforward: connect a data source, run it through a reasoning layer, and present the output in a friendly interface. Here’s the high-level architecture:

  • Data layer: Weather data from a public API (NOAA/NWS)
  • Orchestration: Workflow automation (e.g., n8n) for polling, routing, and triggering logic
  • Reasoning: A language model instructed to provide weather-based outfit advice
  • Interface: A basic front-end that displays “Today” and “Tomorrow” recommendations
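
As a rough sketch of the data layer, here's how fetching a forecast from the public NWS API might look. The two-step lookup (resolve a point, then fetch its forecast) follows the documented api.weather.gov flow; the field names match the API's response shape, and `summarize_periods` is a hypothetical helper that trims the payload down to what the reasoning layer needs.

```python
import json
import urllib.request

NWS_BASE = "https://api.weather.gov"

def fetch_json(url):
    """Fetch and decode a JSON document (the NWS API requires a User-Agent)."""
    req = urllib.request.Request(url, headers={"User-Agent": "weather-agent-demo"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def get_forecast_periods(lat, lon):
    """Resolve a point to its forecast office, then return the forecast periods."""
    point = fetch_json(f"{NWS_BASE}/points/{lat},{lon}")
    forecast = fetch_json(point["properties"]["forecast"])
    return forecast["properties"]["periods"]

def summarize_periods(periods, count=2):
    """Reduce raw forecast periods to the fields the reasoning layer needs."""
    return [
        {
            "name": p["name"],
            "temperature": f'{p["temperature"]} {p["temperatureUnit"]}',
            "forecast": p["shortForecast"],
        }
        for p in periods[:count]
    ]
```

In the actual build, the orchestration layer (n8n) handled this polling on a schedule rather than Python, but the shape of the data is the same.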

From an engineering perspective, it was nothing groundbreaking. But from a learning perspective, it was a goldmine — because each layer revealed an essential concept about how intelligent systems work.

Lesson 1: Data Flow Makes the Abstract Concrete

For kids (and frankly, many adults), software can feel like magic. Connecting inputs and outputs demystifies it. Once my co-builder saw that:

  1. The system starts a flow,
  2. fetches forecast data,
  3. passes that data to an AI model,
  4. transforms it into advice, and
  5. sends it to a front-end

— he understood that AI isn’t sorcery. It’s a pipeline. That single concept — data flows through systems — is the foundation of engineering thinking.
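
The five steps above can be sketched as a pipeline of plain functions. Everything here is illustrative: the forecast is stubbed, and a canned string stands in for the language-model call, but the shape of the flow is the point.

```python
def start_flow(location):
    # Step 1: the trigger. In our build, n8n fired this on a schedule.
    return {"location": location}

def fetch_forecast(ctx):
    # Step 2: pull forecast data (stubbed here; the real build hit a weather API).
    ctx["forecast"] = {"today": "Rain, 54F", "tomorrow": "Sunny, 60F"}
    return ctx

def ask_model(ctx):
    # Steps 3 and 4: hand the data to a reasoning layer and get advice back.
    # A canned response stands in for the language-model call.
    ctx["advice"] = f"Today looks like {ctx['forecast']['today']}, so grab a raincoat!"
    return ctx

def render(ctx):
    # Step 5: shape the result for the front-end.
    return {"today": ctx["advice"], "tomorrow": ctx["forecast"]["tomorrow"]}

def run_pipeline(location):
    # Data flows through the system, one stage at a time.
    return render(ask_model(fetch_forecast(start_flow(location))))
```

Seeing each stage as a function that takes data in and passes data out is exactly the mental model the project was meant to teach.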

Lesson 2: Prompts Are Instructions, Not Magic Spells

The first version of our “weather advisor” confidently suggested wearing Paw Patrol gear. My 8-year-old was horrified. “No one my age thinks Paw Patrol is cool!”

It was the perfect moment to teach a critical lesson: prompts are not magic — they’re specifications. The more precisely you define intent, tone, and constraints, the better the model performs. When we changed the system prompt to “speak like you’re talking to an 8-year-old,” the entire experience shifted. The advice became relevant, engaging, and on-target.
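
Concretely, treating the prompt as a specification might look like this. The function and the exact wording are hypothetical, not our production prompt, but they show how intent, tone, and constraints become explicit instructions.

```python
def build_messages(forecast, audience="an 8-year-old"):
    """Assemble chat messages where the system prompt acts as a specification."""
    system_prompt = (
        f"You give weather-based outfit advice. Speak like you're talking to "
        f"{audience}. Be specific about clothing, keep it to two sentences, "
        "and never recommend branded or character merchandise."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"Today's forecast: {forecast}"},
    ]
```

Each clause in the system prompt closes off a failure mode we actually hit: the audience line fixes the tone, and the merchandise constraint is the "no Paw Patrol" rule written down.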

This is prompt engineering in its purest form: understanding how language shapes behavior.

Lesson 3: Edge Cases Are Where Real Learning Happens

We also encountered some real-world friction: API quirks, webhook timeouts, CORS errors, and model hallucinations. Each “failure” became a teaching moment about debugging, iteration, and the engineering mindset.

It’s easy to focus on getting a perfect answer. But shipping something imperfect — and fixing it together — teaches far more. It’s how engineers think: form a hypothesis, test it, debug, and iterate.
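
One of the first fixes we reached for when webhooks timed out was the classic retry-with-backoff pattern. This is a generic sketch, not the exact n8n configuration we used, but it captures the engineering habit: assume the call can fail, and plan for it.

```python
import time

def with_retries(call, attempts=3, base_delay=0.1):
    """Retry a flaky call (webhook timeout, rate limit) with exponential backoff."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure instead of hiding it
            time.sleep(base_delay * (2 ** attempt))
```

Wrapping the forecast fetch in `with_retries` turned intermittent timeouts from a mystery into an expected, handled event, which is a far better lesson than a pipeline that never fails.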

Lesson 4: Kids Naturally Think Like Engineers

One surprise was how intuitive “if/then” logic felt to a child. Kids already reason about the world conditionally (“If it’s raining, we need an umbrella”). That mental model maps beautifully to how prompts, workflows, and automation behave.
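
That conditional reasoning translates almost word for word into code. This toy rule set is illustrative (our build let the model do the reasoning), but it shows a kid's "if it's raining, we need an umbrella" written as a function.

```python
def outfit_advice(short_forecast, temperature_f):
    """The same if/then reasoning a kid does, written as code."""
    forecast = short_forecast.lower()
    if "rain" in forecast or "showers" in forecast:
        return "Bring an umbrella and wear your rain boots."
    if "snow" in forecast:
        return "Snow gear: coat, gloves, and a warm hat."
    if temperature_f < 50:
        return "It's chilly, so wear a jacket."
    return "T-shirt weather. Don't forget sunscreen!"
```

Walking through rules like these first made the model's behavior legible: the AI was doing a fuzzier version of logic he already used every morning.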

It’s a reminder that systems thinking is a deeply human skill — one we can nurture early by connecting it to things kids already understand.

Lesson 5: The System Prompt Is the UX

Perhaps the most subtle but powerful insight was this: the system prompt defines the user experience. Changing just a few words — from “describe the weather” to “give advice like a friend” — turned a sterile tool into something delightful.

In large-scale AI systems, this principle holds true too. Whether you’re building developer agents, customer-facing assistants, or internal copilots, how you frame the problem at the system-prompt level has an outsized impact on usability and trust.

Why This Matters

What started as a weekend project became a microcosm of real-world AI engineering. It showed that the fundamentals of building intelligent systems — data pipelines, orchestration, prompt design, error handling, and iteration — are not just technical patterns. They’re ways of thinking.

And perhaps most importantly, it proved that the next generation is ready to understand these concepts far earlier than we think. They just need the chance to build something tangible — and a guide willing to debug with them along the way.

Because in the end, engineering isn’t about writing perfect code.
It’s about solving problems, learning from feedback, and designing systems that keep getting smarter — together.


Jen Anderson is an engineering leader, AI strategist, and writer passionate about building high-performing teams and exploring the future of technology. With experience leading transformations across industries—from scaling engineering organizations to pioneering agentic AI—Jen brings both technical depth and human-centered leadership to every project.

Through writing, speaking, and projects like Ask-Jentic, Jen shares insights at the intersection of technology, leadership, and innovation, helping others rethink how we build, lead, and work in the age of AI.