Based in Brazil
// 6 min

Building CapyCast: How a Capybara Shipped a Weather App

ai ios swift capy indie-dev

The best way to test your own tools is to build something you’d actually use. I tested mine by shipping a weather app with a herd of capybaras.


Nobody Needs Another Weather App

Let’s get this out of the way. The App Store has thousands of weather apps. Some of them are genuinely excellent. Building another one is, on paper, a terrible idea.

But here’s the thing — none of them make you smile. Weather apps show you numbers. Maybe a nice gradient. Maybe a radar map. They’re utilities. They’re fine.

I wanted to build one with a personality. A pixel-art capybara that reacts to the weather, wears outfits, and gives you unsolicited wisdom. Something that makes you check the weather because you want to see what the capybara is doing, not because you need to know if it’s raining.

That’s CapyCast. And I built it with Capy.


What Is Capy

Capy is a workflow builder I created to orchestrate multi-agent coding workflows. Think of it as a way to break a project into structured tasks — quests — and let AI agents handle implementation while you focus on product decisions.

I built it because I kept running into the same problem: AI coding tools are powerful in isolation, but managing a full project with them requires a layer of orchestration that didn’t exist. You end up being a project manager for agents, manually feeding context, sequencing work, and stitching results together. Capy does that coordination.

It’s not a code generator. It’s a workflow engine. The difference matters.


The Build

CapyCast is a SwiftUI app targeting iOS 17+. MVVM architecture, WeatherKit for data, StoreKit 2 for monetization, a widget extension for the home screen, and localization in English and Brazilian Portuguese. It’s not a toy — it’s a real app with real infrastructure.

Here’s how the workflow actually played out.

Breaking It Into Quests

I started by defining what the app needed to be. Not features — identity. The capybara isn’t a mascot bolted on. It’s the core experience. Weather data is the input; the capybara’s reaction is the output. Everything flows from that.

From there I broke the build into quests in Capy: core weather service, capybara rendering system, outfit management, StoreKit integration, widget extension, localization, App Store assets. Each quest had clear inputs and outputs. Each one could be handed to an agent with enough context to execute.

What I Did vs. What the Agents Did

This is the part people always want to know, so here’s the honest breakdown.

Me:

  • Product vision and identity — the capybara-first design philosophy
  • Pixel art direction — defining the visual language, reviewing every outfit
  • UX decisions — what goes on the main screen, what gets tucked away, how the wardrobe works
  • App Store strategy — pricing tiers, screenshot composition, description copy
  • Quality calls — when something felt off, I killed it or redirected

The agents (via Capy):

  • WeatherKit integration with retry logic and coordinate rounding
  • StoreKit 2 boilerplate — product definitions, purchase flows, receipt validation
  • The entire widget extension — shared app group, cached data reads, timeline reloads
  • Localization strings for both languages
  • Build scripts, CI configuration, IAP creation scripts for App Store Connect
  • MVVM scaffolding and service singletons

That split was intentional. I handled everything that required taste. The agents handled everything that required patience.
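To make that split concrete, here is roughly the shape of the weather service the agents produced. This is a hedged sketch, not the shipped code: the type name `CapyWeatherClient`, the rounding precision, and the retry parameters are my illustrative choices, but the pattern (coordinate rounding plus retry with backoff) is what the quest specified.

```swift
import WeatherKit
import CoreLocation

// Sketch of the agent-built weather service. Names and constants are
// illustrative, not the shipped implementation.
final class CapyWeatherClient {
    static let shared = CapyWeatherClient()

    // Round to 2 decimal places (~1 km) so nearby requests resolve to
    // the same location and precise coordinates never leave the device.
    private func rounded(_ coordinate: CLLocationCoordinate2D) -> CLLocation {
        let lat = (coordinate.latitude * 100).rounded() / 100
        let lon = (coordinate.longitude * 100).rounded() / 100
        return CLLocation(latitude: lat, longitude: lon)
    }

    // Retry with exponential backoff before surfacing the error.
    func currentWeather(at coordinate: CLLocationCoordinate2D,
                        retries: Int = 3) async throws -> CurrentWeather {
        let location = rounded(coordinate)
        var delay: UInt64 = 500_000_000 // 0.5 s in nanoseconds
        for attempt in 1...retries {
            do {
                return try await WeatherService.shared
                    .weather(for: location).currentWeather
            } catch where attempt < retries {
                try await Task.sleep(nanoseconds: delay)
                delay *= 2 // back off: 0.5 s, 1 s, 2 s, ...
            }
        }
        // Unreachable: the last loop iteration either returns or throws.
        fatalError("retry loop exits via return or throw")
    }
}
```

The rounding does double duty: it acts as a cache key for nearby requests and keeps exact coordinates out of any logs.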


What Worked

The quest-based workflow is real. Breaking work into discrete, well-scoped tasks with clear context made the agents dramatically more effective than open-ended prompting. When an agent knows exactly what it’s building and what constraints apply, the output quality jumps. Capy’s orchestration kept this structured across the entire project.

The prompts were also detailed and well organized. The workflow classifies each task as light or heavy, then shapes the prompt accordingly (plain English for light tasks, XML-structured for heavy ones, for example).

StoreKit 2 is a perfect AI task. The API is well-documented, the patterns are clear, and there’s a lot of boilerplate. I have multiple product types — non-consumable outfits, a Plus tier, a Live subscription, consumable tips — and the agents wired all of it up correctly. Transaction listeners, entitlement checks, restore flows. I barely touched that code.
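The StoreKit 2 pattern the agents wired up looks roughly like this. A minimal sketch, assuming hypothetical names (`StoreManager`, `ownedProductIDs`); the real app has more product types and restore handling, but the transaction listener and entitlement refresh are the core of it.

```swift
import StoreKit

// Hedged sketch of the StoreKit 2 wiring: a transaction listener for
// out-of-band updates, a purchase flow, and an entitlement rebuild.
@MainActor
final class StoreManager: ObservableObject {
    @Published private(set) var ownedProductIDs: Set<String> = []
    private var updatesTask: Task<Void, Never>?

    init() {
        // Transactions can arrive outside a purchase flow: renewals,
        // Ask to Buy approvals, purchases made on another device.
        updatesTask = Task {
            for await result in Transaction.updates {
                if case .verified(let transaction) = result {
                    self.ownedProductIDs.insert(transaction.productID)
                    await transaction.finish()
                }
            }
        }
    }

    func purchase(_ product: Product) async throws {
        let result = try await product.purchase()
        if case .success(.verified(let transaction)) = result {
            ownedProductIDs.insert(transaction.productID)
            await transaction.finish()
        }
    }

    // Rebuild entitlements from StoreKit's verified source of truth,
    // e.g. on launch or after a restore.
    func refreshEntitlements() async {
        var owned: Set<String> = []
        for await result in Transaction.currentEntitlements {
            if case .verified(let transaction) = result {
                owned.insert(transaction.productID)
            }
        }
        ownedProductIDs = owned
    }
}
```

Because `Transaction.currentEntitlements` is the source of truth, the app never has to persist purchase state itself; a reinstall just rebuilds the set.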

Localization scales linearly. Adding Brazilian Portuguese was almost free. The agents generated the .strings files, I reviewed for tone, done. Japanese is next — and capybaras are genuinely popular there, so the market fit might actually be better.
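Consuming those strings in SwiftUI is the easy half, which is part of why the language work scales so well. A tiny sketch with a hypothetical key name:

```swift
import SwiftUI

// Sketch of how localized copy is consumed; the key is hypothetical.
// With en and pt-BR Localizable.strings files in the bundle, the
// lookup resolves to the device language, falling back to the
// development language when a translation is missing.
struct WisdomLabel: View {
    var body: some View {
        Text(String(localized: "capybara.wisdom.rainy"))
    }
}
```

Adding a language is then just another `.strings` table to generate and review, with no code changes.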

The widget came together fast. Widget extensions have a specific architecture — timeline providers, shared app groups, entry views — that’s tedious but well-defined. Perfect agent territory. The widget reads cached weather data from the main app and renders the capybara’s current state. It worked on the first real build.
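That tedious-but-well-defined architecture looks roughly like this. The app group identifier, key names, and entry fields below are hypothetical stand-ins; the shape (a `TimelineProvider` reading cached data the main app wrote into a shared app group) matches what the quest described.

```swift
import WidgetKit
import SwiftUI

// Sketch of the widget's data flow. The main app caches a weather
// summary in a shared app group; the widget only ever reads it.
struct CapyEntry: TimelineEntry {
    let date: Date
    let summary: String // e.g. "Rainy, capybara has an umbrella"
}

struct CapyProvider: TimelineProvider {
    // Hypothetical app group ID; must match both targets' entitlements.
    private let suiteName = "group.capycast"

    func placeholder(in context: Context) -> CapyEntry {
        CapyEntry(date: .now, summary: "Loading…")
    }

    func getSnapshot(in context: Context,
                     completion: @escaping (CapyEntry) -> Void) {
        completion(readCachedEntry())
    }

    func getTimeline(in context: Context,
                     completion: @escaping (Timeline<CapyEntry>) -> Void) {
        // Ask for a refresh in ~30 minutes; the system decides exactly when.
        let next = Date().addingTimeInterval(30 * 60)
        completion(Timeline(entries: [readCachedEntry()],
                            policy: .after(next)))
    }

    private func readCachedEntry() -> CapyEntry {
        let defaults = UserDefaults(suiteName: suiteName)
        let summary = defaults?.string(forKey: "weatherSummary") ?? "No data yet"
        return CapyEntry(date: .now, summary: summary)
    }
}
```

Keeping the widget read-only means it never needs location permission or a network stack of its own; the main app does the fetching and the widget just renders the latest state.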


What Didn’t


Pixel art needs a human eye. I tried getting agents to generate outfit descriptions and pixel art layouts. The results were technically correct and aesthetically wrong. Every outfit in the final app went through my direction. The agents could implement the rendering code, but the creative decisions had to be mine.

Taste can’t be delegated. The main weather view went through several iterations where the agents produced something functional that felt lifeless. The pastel color theming, the way the capybara’s wisdom quotes land, the haptic feedback on interactions — these are details I had to feel out myself. The agents gave me a canvas. I had to paint on it.

Context limits are real. Even with Capy managing the workflow, there were moments where an agent didn’t have enough context about a previous quest’s output. I had to bridge those gaps manually, and then update the Capy workflow to persist state.


The Numbers

Here’s where I get honest.

  • Downloads: 14
  • Marketing spend: $0
  • Revenue: negligible
  • Languages: 2 (English, Brazilian Portuguese)
  • IAP products: 15+ (outfits, tiers, tips)
  • Privacy: No tracking. No analytics. No ads. Zero user data collected.

Fourteen downloads. That’s the reality of shipping an indie app with no marketing. The app is polished. Nobody knows it exists.

I’m not spinning that as a win. It’s a distribution problem, and I knew it going in. The goal was never to compete with Weather.com. It was to build something with personality, test Capy on a real project, and ship.

All three happened.


What’s Next

The distribution problem has an interesting solution baked into the product itself: share cards. Let users share their capybara’s weather report as a styled image. Every share is a tiny billboard. The capybara becomes the marketing.
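One plausible way to build those share cards is SwiftUI's `ImageRenderer` (iOS 16+), which renders a view hierarchy straight to an image. This is a sketch under that assumption; the view, its styling, and the function names are mine, not the shipped design.

```swift
import SwiftUI
import UIKit

// Hypothetical share card: a styled SwiftUI view rendered to a UIImage
// that can be handed to the system share sheet.
struct ShareCardView: View {
    let summary: String
    var body: some View {
        VStack(spacing: 8) {
            Text("CapyCast").font(.headline)
            Text(summary).font(.title2)
        }
        .padding(24)
        .background(Color(red: 0.95, green: 0.90, blue: 0.85))
    }
}

// ImageRenderer must run on the main actor.
@MainActor
func renderShareCard(summary: String) -> UIImage? {
    let renderer = ImageRenderer(content: ShareCardView(summary: summary))
    renderer.scale = 3 // export at 3x for crisp social-media images
    return renderer.uiImage
}
```

Because the card is just a SwiftUI view, the capybara's current sprite and outfit can be composed in for free, so every share really is a tiny billboard.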

Beyond that:

  • Apple Watch complication — the capybara on your wrist, wearing your outfit choices in real time
  • Japanese localization — capybaras are legitimately beloved in Japan, and the pixel art aesthetic fits
  • More outfits — seasonal drops, limited editions, the kind of thing that gives people a reason to come back

The Actual Takeaway

Tools don’t ship apps. Decisions ship apps. That conclusion matters more than ever in the current “vibe coding” era.

Capy (and Claude) helped me move faster. They handled the tedious parts so I could focus on the parts that matter: the parts that give a weather app a soul. But Capy didn’t decide that the capybara should wear a samurai outfit. It didn’t decide that wisdom quotes should rotate with the weather. It didn’t decide that privacy matters more than analytics.

I made those calls. The agents executed them. That’s the workflow.

If you’re building with AI tools and wondering why the output feels generic, it’s probably because you’re delegating the wrong things. Delegate the patience work. Keep the taste work.

And since you’re here, go ahead and try out CapyCast (available now on iOS, coming soon to Android).

CapyCast on the App Store | Capy on GitHub