
How It Works

A plain-language walkthrough of what this measures, how it decides whether a message needs a response, how time is calculated, and where AI fits in.

The problem

Parents message staff through the NHA App about attendance, behavior, scheduling, and academics. Today, no one has structured visibility into how quickly — or whether — parents are being heard. Teachers see their own inbox. Leadership sees nothing aggregated.

We built this to give leadership the ability to see, per school and per staff member, whether parent messages are being answered and how quickly — as a conversation starter, not a punishment tool.

Response Blocks — our unit of measurement

A single message is the wrong unit. Parents often send a burst:

"Hi Ms. Colbert,"
"Quick question —"
"What time does the field trip leave tomorrow?"

That's one inquiry, not three. Counting messages would inflate backlogs and misrepresent the actual "time waiting." Instead, we group messages from the same sender within 5 minutes of each other into a Response Block, and measure from the block's first message.

Every block gets a status: pending (waiting for a reply), responded, or conversation ender (closing messages like "thanks!" that don't need a reply).
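The grouping rule above can be sketched in Python. This is a minimal illustration, not the production code; the names (`group_into_blocks`, `ResponseBlock`) and the data shapes are assumptions:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

BLOCK_GAP = timedelta(minutes=5)  # messages within 5 minutes of the previous one join the block

@dataclass
class ResponseBlock:
    sender: str
    messages: list = field(default_factory=list)  # (timestamp, text) pairs, in order

    @property
    def started_at(self):
        # Response time is measured from the block's FIRST message.
        return self.messages[0][0]

def group_into_blocks(messages):
    """Group (sender, timestamp, text) tuples into Response Blocks.

    A new block starts when the sender changes or more than
    5 minutes pass since the previous message.
    """
    blocks = []
    for sender, ts, text in sorted(messages, key=lambda m: m[1]):
        last = blocks[-1] if blocks else None
        if last and last.sender == sender and ts - last.messages[-1][0] <= BLOCK_GAP:
            last.messages.append((ts, text))
        else:
            blocks.append(ResponseBlock(sender, [(ts, text)]))
    return blocks
```

Run on the three-message burst above, this yields a single block whose clock starts at the first "Hi Ms. Colbert," message.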

How we judge "needs a response"

Today's classifier is rules-based. No AI yet.

Messages that need a reply

  • Ends with a question mark: "Can you confirm the IEP meeting time?"
  • Starts with a question word: "What time does pickup start?"
  • Contains a request phrase: "Please let me know if you need anything."
  • Signals urgency: "Urgent — my child forgot their EpiPen."

Messages that don't

  • Short acknowledgment: "Got it, thanks!"
  • Emoji-only: "👍"
  • Greeting without a question: "Good morning!"
  • Brief affirmative: "Sounds good."

Ambiguous cases (like "just wanted to let you know I got your email") are flagged uncertain and left to a Phase 2 AI classifier.
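A rules classifier in this spirit can be sketched as follows. The word lists, the four-word threshold, and the check order are illustrative assumptions, not the production rules:

```python
import re

QUESTION_WORDS = ("what", "when", "where", "who", "why", "how",
                  "can", "could", "will", "is", "are", "do", "does")
REQUEST_PHRASES = ("please", "let me know", "could you", "can you")
URGENCY_WORDS = ("urgent", "asap", "emergency")
ENDERS = ("thanks", "thank you", "got it", "sounds good",
          "ok", "okay", "good morning", "good afternoon")

def classify(text):
    """Return 'needs_reply', 'no_reply', or 'uncertain' for one message."""
    t = text.strip().lower()
    # Emoji-only or otherwise letter-free messages never need a reply.
    if not re.search(r"[a-z]", t):
        return "no_reply"
    # Needs-reply signals, checked first.
    if t.endswith("?"):
        return "needs_reply"
    first_word = re.split(r"\W+", t, maxsplit=1)[0]
    if first_word in QUESTION_WORDS:
        return "needs_reply"
    if any(p in t for p in REQUEST_PHRASES) or any(u in t for u in URGENCY_WORDS):
        return "needs_reply"
    # Short conversation enders don't need a reply.
    if len(t.split()) <= 4 and any(e in t for e in ENDERS):
        return "no_reply"
    # Everything else is left for the Phase 2 AI classifier.
    return "uncertain"
```

Note that "just wanted to let you know I got your email" falls through every rule and comes back uncertain, which is exactly the behavior described above.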

How we measure time

There are two answers, and they look very different.

Wall-clock time

The actual calendar time. Friday 5pm → Monday 9am is 64 hours. We track this but don't report it — it would penalize nights and weekends when nobody expected a response.

Business-hours time

Only time during 7am to 6pm, Monday to Friday, excluding holidays, counts. The same Friday-5pm-to-Monday-9am gap becomes 3 business hours: one hour Friday (5pm to 6pm) plus two hours Monday (7am to 9am).

This matches how parents actually experience messaging. It's what we report.
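The business-hours calculation can be sketched day by day. This is a simplified illustration; the empty holiday set and the edge-case handling are assumptions:

```python
from datetime import datetime, time, timedelta

OPEN, CLOSE = time(7, 0), time(18, 0)  # 7am–6pm window
HOLIDAYS = set()  # per-school holiday dates would be loaded here (assumption)

def business_hours_between(start, end):
    """Hours of overlap between [start, end] and the 7am–6pm Mon–Fri window."""
    total = timedelta()
    day = start.date()
    while day <= end.date():
        if day.weekday() < 5 and day not in HOLIDAYS:  # Monday=0 … Friday=4
            window_start = max(start, datetime.combine(day, OPEN))
            window_end = min(end, datetime.combine(day, CLOSE))
            if window_end > window_start:
                total += window_end - window_start
        day += timedelta(days=1)
    return total.total_seconds() / 3600
```

For the Friday-5pm-to-Monday-9am example this returns 3.0: one hour Friday evening plus two hours Monday morning, with the weekend contributing nothing.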

Where AI is right now — and isn't

Phase 1 — Quick (today)

The responsiveness number you see today is entirely AI-free. The rules cover the obvious cases well; we add AI only where it adds value, not everywhere.

Phase 2 — Clear & Kind (coming)

Claude Haiku (winner of our five-model comparison) will be added in two places:

  1. Ambiguous message classification — cases the rules can't decide.
  2. Response quality scoring on two dimensions:
    • Clear — did the staff reply actually answer the question?
    • Kind — what was the tone? Warm / neutral / cold.

The three dimensions (Quick, Clear, Kind) are never averaged. A fast, unhelpful reply and a slow, thoughtful one both have lessons — keeping them separate makes both visible.

Current status — what we can say

Across the four pilot schools (Pathway OH, Wake Forest NC, Linden MI, Canton MI), as of April 21, 2026:

Known limitations

The classifier has false positives

Pathway staff sometimes send gamification messages like "If you read this, your child gets a dress-down pass?" — the question mark makes them look like parent inquiries that need replies. These inflate the pending count by 10–15%. Phase 2 AI will filter them out.

Pending ≠ ignored

A "pending" block could be a real miss, a scheduling gap, or a conversation resolved in person / via email. We can only see the in-app trail.

All pilot schools are Eastern time

Business-hours calculation is hard-coded to ET. When we onboard a non-ET school, the dev team will need to add a per-school time zone before the numbers are accurate there.
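A per-school fix could be as small as a lookup table of IANA time zone names consulted before applying the 7am–6pm window. A hypothetical sketch (the mapping and the non-ET school are invented for illustration):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Hypothetical per-school config; today every school is effectively ET.
SCHOOL_TZ = {
    "Pathway OH": "America/New_York",
    "Example CO": "America/Denver",  # assumption: a future non-ET school
}

def to_local(utc_ts, school):
    """Convert a UTC timestamp to the school's local time
    before applying the business-hours window."""
    return utc_ts.astimezone(ZoneInfo(SCHOOL_TZ[school]))
```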

This is responsiveness, not satisfaction

Response time is a proxy for responsiveness, which is a proxy for parent experience, which is a proxy for retention. We're measuring the first link. The last link is a belief, not a proof.

What stays human

AI isn't doing any of these:

The system produces numbers. Humans interpret them. That line is deliberate.