
Claude as Health Coach: What I Learned Building HealthPulse

I built a health dashboard that uses Claude to turn scattered Apple Health data into a daily briefing. What I learned about AI as advisor, not search.

I didn't want another health app. I have plenty. Apple Watch. Oura ring. Blood test results in PDFs. Sleep scores. Step counts. Resting heart rate trends.

What I wanted was synthesis. One place that looks at everything and tells me: here's what matters today.

So I built one.

The problem isn't data

I track a lot. Weight every morning. Steps from the Watch. Sleep from the Oura ring. Workouts logged manually. Heart rate variability every few days. Nutrition when I'm being disciplined.

The data exists. The pattern recognition doesn't.

Every app shows you its own slice. Apple Health is a warehouse with no analyst. It stores everything beautifully and tells you almost nothing useful. The sleep score says 78. The HRV is trending down. The step count is up. What does that mean together? What should I actually do differently today?

That's what HealthPulse does. It ingests a single Apple Health XML export into a local SQLite database, then hands the whole picture to Claude with a simple question: what do you see?
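A minimal sketch of that import step. It assumes Apple Health's export.xml record shape (`<Record type="..." sourceName="..." startDate="..." endDate="..." value="..."/>`); the table name and schema here are illustrative, not HealthPulse's actual ones:

```python
import sqlite3
import xml.etree.ElementTree as ET

def import_export(xml_path: str, db_path: str) -> int:
    """Stream an Apple Health export.xml into a SQLite table.
    Returns the number of records written."""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS records (
        type TEXT, source TEXT, start TEXT, end TEXT, value TEXT)""")
    count = 0
    # iterparse streams the file; a multi-year export is too big to load at once
    for _, elem in ET.iterparse(xml_path, events=("end",)):
        if elem.tag == "Record":
            conn.execute(
                "INSERT INTO records VALUES (?, ?, ?, ?, ?)",
                (elem.get("type"), elem.get("sourceName"),
                 elem.get("startDate"), elem.get("endDate"), elem.get("value")),
            )
            count += 1
        elem.clear()  # free parsed content as we go
    conn.commit()
    conn.close()
    return count
```

The streaming parse is the design choice that matters here: a full export can run to hundreds of megabytes, so building the whole tree in memory is not an option.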

This is the framing that changed how I think about these tools.

Most AI interfaces are Q&A. You ask, it answers. "What was my average sleep last week?" is a search query. Useful. Efficient. Limited. The burden of knowing what to ask is entirely on you.

HealthPulse inverts this. Claude looks at everything and surfaces what matters. I don't tell it what to analyze. It reads the last 90 days of data, identifies where things are trending, flags anything anomalous, and writes a briefing. The question is always the same: here's your data, what should this person know today?

That shift, from reactive search to proactive advice, is the interesting one. It changes the relationship with the tool. Instead of using AI as a faster database query, you're using it as a coach who reviewed the tape before you walked in.

I notice this matters more than the quality of any individual answer. A mediocre insight you didn't think to ask for is often more useful than a brilliant answer to a question you formulated yourself.

The local-first decision

Health data is personal in a way that most data isn't. I didn't want to sync to a cloud. Not because cloud is inherently wrong, but because for this category of data, the default should be local.

HealthPulse runs on my Mac. The SQLite database is on my machine. Apple Health export files go to a local folder. Nothing leaves my network unless I'm sending a prompt to Claude's API, which I considered carefully and accepted as a deliberate trade-off.

The consequence: no cross-device sync. I can't check it from my phone. If my Mac breaks without a backup, the data is gone. These are real costs. I chose them intentionally.

The Apple Health import itself took more engineering than I expected. The XML export contains years of records across dozens of data types, each with its own schema. Steps and flights climbed arrive in one format, sleep stages in another, workouts with optional GPS routes in a third. Deduplication between iPhone and Watch data was its own problem: both devices record some of the same metrics, so the import has to identify and collapse duplicates before writing to the database.
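The dedup step can be sketched like this. The record shape and the "prefer the Watch" rule are my assumptions, not HealthPulse's actual code, and a real version would have to match overlapping time windows rather than exact ones:

```python
from typing import NamedTuple

class Record(NamedTuple):
    type: str     # e.g. "StepCount"
    start: str    # ISO timestamp
    end: str
    source: str   # "iPhone" or "Watch"
    value: float

def dedupe(records: list[Record]) -> list[Record]:
    """Collapse iPhone/Watch duplicates: the same metric over the same
    interval counts once, keeping the Watch sample when both exist."""
    preference = {"Watch": 0, "iPhone": 1}  # lower rank wins
    best: dict[tuple, Record] = {}
    for rec in records:
        key = (rec.type, rec.start, rec.end)
        kept = best.get(key)
        if kept is None or preference.get(rec.source, 99) < preference.get(kept.source, 99):
            best[key] = rec
    return list(best.values())
```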

Getting that import reliable took longer than everything else combined.

The local-first decision also shaped what the app is not. There is no notification system. No scheduled job that runs every morning and pushes a briefing to my phone. The analysis only happens when I open the dashboard and ask for it. That friction is intentional. A health app that pesters you with daily summaries becomes noise. One you go to when you're curious stays useful longer.

Context window management is the hard part

Once the data pipeline worked, the interesting challenge became: how much history do you give Claude?

Too little and the advice is shallow. "Your sleep was a bit short last night" is not useful. Too much and you're burning tokens on data from 18 months ago that has no bearing on today.

What works: recent data in detail, older data summarized, anomalies flagged explicitly. I give Claude the last 30 days in full, a 90-day trend summary for key metrics, and any data points that fall outside two standard deviations over the past year. That structure lets it reason about what's normal for me specifically, not for people in general.
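The anomaly part of that structure, at least, is simple to sketch. A minimal version of the two-standard-deviation rule; the function name and the (date, value) shape are illustrative, not HealthPulse's actual code:

```python
from statistics import mean, stdev

def flag_anomalies(history: list[tuple[str, float]],
                   threshold: float = 2.0) -> list[tuple[str, float]]:
    """Return the (date, value) points more than `threshold` standard
    deviations from this metric's own mean over the given history."""
    values = [v for _, v in history]
    if len(values) < 2:
        return []  # not enough data to estimate spread
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # perfectly flat series: nothing is anomalous
    return [(d, v) for d, v in history if abs(v - mu) > threshold * sigma]
```

Because the baseline comes from the user's own history, "anomalous" means anomalous for me, which is the whole point of the structure.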

The framing prompt matters too. Clinical vs. supportive produces very different outputs. "Analyze this health data" returns a medical-sounding assessment that I find hard to receive. "You're a thoughtful coach reviewing this data before a weekly check-in. What would you say?" returns something I actually read.

I didn't expect tone to matter that much. It does.
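For concreteness, here's roughly what the two framings look like side by side. The prompts are paraphrased from above; the payload shape follows the Anthropic Messages API, but the model id is a placeholder and no request is actually made here:

```python
CLINICAL = "Analyze this health data."
COACH = (
    "You're a thoughtful coach reviewing this data before a weekly "
    "check-in. What would you say?"
)

def build_request(system_prompt: str, briefing_data: str) -> dict:
    """Assemble a Messages API payload. Only the system prompt changes
    between the clinical and coach framings; the data is identical."""
    return {
        "model": "claude-model-placeholder",  # substitute a real model id
        "max_tokens": 1024,
        "system": system_prompt,
        "messages": [{"role": "user", "content": briefing_data}],
    }
```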

Trust calibration takes time

The third thing I didn't expect: you have to calibrate your trust, and that takes months.

The first few weeks, I acted on almost everything. Sleep score low — cut back evening screen time. HRV trend down — reduce intensity this week. Steps dropping — add a lunchtime walk.

Then I noticed something. The AI's recommendations were internally consistent but not always causally accurate. It would correctly identify a pattern (sleep quality down for 10 days) and offer a plausible explanation (stress, late eating, alcohol) without any way to know which one was actually true. All three explanations would be reasonable. Only one would be right for me that week.

Now I use the briefing differently. As a prompt for my own reflection, not a prescription. "Your HRV has been lower than your 90-day baseline for two weeks" is a fact I act on. "Consider reducing training load and prioritizing recovery" is a hypothesis I test, not a directive I follow.

That distinction matters. The coach sees the data. You see your life.

What I don't trust it for

This is important to be explicit about.

HealthPulse is not a medical tool. It catches patterns. It does not diagnose. If the AI notices that my resting heart rate has been elevated for two weeks, that's a signal worth paying attention to. It is not a reason to skip a doctor.

I've built in a deliberate guardrail in the prompt: the coach is always instructed to note when something warrants professional attention rather than lifestyle adjustment. When it does, I treat that seriously and verify through proper channels.
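In practice that guardrail is just a clause appended to the system prompt. The wording below is my paraphrase of the description above, not the actual HealthPulse prompt:

```python
GUARDRAIL = (
    "If any pattern could warrant professional medical attention rather "
    "than a lifestyle adjustment, say so explicitly and recommend seeing "
    "a doctor. Never diagnose."
)

def with_guardrail(system_prompt: str) -> str:
    """Append the escalation instruction to whatever framing is in use."""
    return system_prompt.rstrip() + "\n\n" + GUARDRAIL
```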

Anything that looks like a real health concern still goes to a doctor. The AI is the analyst, not the physician.

Why this matters beyond personal health

HealthPulse is a hobby project. It will stay that way. But building it taught me something that I think about in my professional context every day.

The pattern (AI that proactively synthesizes context and surfaces what matters, rather than waiting for questions) is exactly where enterprise tools are heading. Agentforce is trying to do this for sales teams. Instead of you asking "what should I focus on?", the system tells you "here's what matters today," based on everything it can see.

I wanted to understand that design pattern with my own data before I encounter it at scale in professional tools. What makes it work? Where does it break down? When do you trust it and when do you verify?

Now I know. That's worth the SQLite schema and the finicky XML parser.

HealthPulse is open source on GitHub.