# 🐄 TherMOOnalysis-Copy

A set of ideas for your consideration. Nothing here has been built, changed, or written to your base — and we won't; every recommendation below is something you would act on, on your own timeline, if and only if it sounds useful. We're here to support your workflow, not disrupt it.
Hiroko — thank you for sharing the TherMOOnalysis base, the Feedback_2026-04-15.csv export, the recs.zinc SkySpark dump, and the PointMemos workbook with us. You shared all of that on trust, and that matters more to us than any single feature could.
This document is not a plan of things we will do to your base. It is a set of observations and ideas that your team could consider, in your own time, at whatever scope feels right. If anything below is off the mark, or overlaps something you've already tried and set aside, we'd much rather hear that than press forward. The institutional relationship is the deliverable; everything else is downstream of it.
In particular, you flagged in your 2026-04-15 email that the Mechanical Notes field is the high-value one and that LLM-assisted interpretation would be welcome — several recommendations below build on those hints directly.
From the 2026-04-15 snapshot you shared, the base contains 5 core tables. A quick read of the field schema and population rates suggests how the team uses them day-to-day:
| Table | Rows (in snapshot) | What it tells us |
|---|---|---|
| Feedback | 29,258 | ~40 user-facing + triage fields (Comment, Comfort, Building and Room, Mechanical Notes, Triage Done, Follow Up, Send Email, Email Draft, Assigned To, UCD Health Send Email, Ongoing Projects, Screen Shot, Success Story, Response). Every report has a place for what the user said, what the tech observed, who owns it, and what happens next. |
| Room Inventory | 52 | Physical-asset truth table: thermostats, vents, CFM, setpoints, Siemens visibility, photos, occupancy, construction date. The canonical "what's in the room." |
| In-Depth Investigation | 4 | Long-form case files (Description, Diagnosis, Investigation Actions Taken, Resolution, HVAC Data & Trends (screenshots)). The "when we got to the bottom of it" archive. |
| Response Templates | 13 | Canned email content keyed by name and status. Consistency + time-saver for the "what do I say back?" problem. |
| Comfort Band | 18 | Per-building (or per-room) comfort-band overrides, with previous setpoints preserved for reversibility. This is institutional memory about what the room should feel like. |
A few numbers from the feedback table that informed the recommendations below:
| Field | Populated | What that suggests |
|---|---|---|
| Comment (user's own text) | 29.6% | Roughly 1 in 3 reports carries a free-text signal — already high for a voluntary survey. |
| Mechanical Notes | 23.1% | Your team is writing diagnostic notes on roughly 1 in 4 reports — a meaningful corpus (~6,700 entries) of institutional knowledge. |
| Triage Done | 10.9% | The checked-off triage fraction — consistent with the "staffing-not-detection constraint" Daniel mentioned at the Apr 14 meeting. |
| Follow Up, Send Email, Email Draft | 0.6%–3.5% | Active workflow signals used on selected cases — suggests these are deliberately reserved for high-need reports, not a dropped habit. |
| Assigned To | 2.1% | Per-report ownership is used sparingly but exists — a useful lever for the chronic-case view below. |
| Success Story | 0.6% | Small but important: the record of "we did something and the room got better." |
Percentages are from the snapshot you sent on 2026-04-15. None of the numbers above are actionable findings by themselves — they're the baseline we used to rank recommendations.
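As a sanity check, the population rates above can be recomputed from any future CSV export. A minimal Python sketch, standard library only — the synthetic rows below are illustrative stand-ins, not real data:

```python
import csv
import io

def population_rates(csv_text, fields):
    """Return {field: fraction of rows with a non-empty value} for a CSV export."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    if not rows:
        return {f: 0.0 for f in fields}
    return {
        f: sum(1 for r in rows if (r.get(f) or "").strip()) / len(rows)
        for f in fields
    }

# Tiny synthetic stand-in for the real Feedback export (illustrative only).
sample = (
    "Comment,Mechanical Notes,Triage Done\n"
    "too cold,,checked\n"
    ",damper stuck closed,\n"
    "warm by window,,\n"
    ",,\n"
)
rates = population_rates(sample, ["Comment", "Mechanical Notes", "Triage Done"])
# On these four synthetic rows: Comment 50%, Mechanical Notes 25%, Triage Done 25%.
```

Running the same function over a fresh export each quarter would show whether the baseline percentages above are drifting.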
## 3. Recommendations

Ranked roughly by effort × value, with the lightest-weight, highest-leverage ideas first. Every recommendation below is something your team would choose to do (or not), in your base, on your schedule.
### 3.1 Mechanical Notes field with a lightweight addendum — LOW effort

You noted that Mechanical Notes is where the real diagnostic content lives — it's currently free text, which is exactly right for capturing what techs actually see. The ~6,700 populated notes in the 2026-04-15 snapshot are a genuinely valuable corpus.
One option to consider: keep Mechanical Notes exactly as it is (free-text, unconstrained), and add a small number of optional, structured companion fields that your team can fill in when the info is handy. For example:
- Root Cause Category (single-select: damper / coil / VAV / sensor / setpoint / user-physiology / other)
- Parts or Labor Needed (short text — "replace actuator", "clean coil")
- Expected Re-verify Date (date) — pairs naturally with the closed-loop idea in §3.5

The free-text note keeps all its context and nuance; the structured companions make it easy to produce views like "show me all damper-related reports in Hart Hall this quarter" without losing the story.
Why this first: it's additive, doesn't change what anyone already does, and the value compounds over time because the structured fields index the free text rather than replacing it.
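To make the payoff concrete, here is a toy sketch of the filter the structured companion enables. The records are invented for illustration, and `Root Cause Category` is the proposed field, not one that exists today:

```python
# Illustrative records: free-text Mechanical Notes plus the optional
# structured companion proposed above (field name is an assumption).
reports = [
    {"Building and Room": "Hart Hall 201", "Root Cause Category": "damper",
     "Mechanical Notes": "damper actuator sticking at 30% open"},
    {"Building and Room": "Hart Hall 115", "Root Cause Category": "sensor",
     "Mechanical Notes": "stat reads 4F high vs handheld"},
    {"Building and Room": "Mrak Hall 10", "Root Cause Category": "damper",
     "Mechanical Notes": "OA damper linkage loose"},
]

def damper_reports_in(building, reports):
    """The structured field indexes the free text without replacing it."""
    return [
        r for r in reports
        if r["Root Cause Category"] == "damper"
        and r["Building and Room"].startswith(building)
    ]

matches = damper_reports_in("Hart Hall", reports)
# Only the Hart Hall 201 record matches; the full free-text note rides along.
```

In Airtable this is just a filtered view on the single-select field — no code involved; the sketch only shows the logic.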
### 3.2 An external annotation index linking sparks, tickets, and reports

A recurring thread at the Apr 14 meeting (Hiroko + Daniel) was the "dead work-order pattern" — a spark gets raised, a ticket gets filed, and the connection back to the original comfort report is easy to lose across three different systems. In your emails you called out that not every PointMemo corresponds to something in SkySpark either; the systems drift apart over time.
One option to consider: rather than putting that linkage inside Airtable (which would add fields to your base), TowerWatch could maintain an external annotation index that joins on Building and Room + timestamp, and exposes the result as a read-only web view your team could optionally check.
Each feedback record, when opened in the external view, would show "this report appears to correspond to spark RMI-AHU3-2026-01-14, ticket WO-412039, and zone-temperature trajectory from PI." If the linkage is wrong, the view says so; nothing is ever pushed back.
Why this is worth considering: it addresses the dead-work-order pattern without asking your team to maintain cross-references manually. If you later decide the linkages are solid enough to want them in your base, we'd bring them for your review — we would not add them unilaterally.
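Conceptually, the external join could be as simple as matching on room plus a time window. A hedged Python sketch — the record shapes are invented, and the 48-hour window is an assumption to be tuned, not a spec:

```python
from datetime import datetime, timedelta

# Hypothetical linkage rule: a feedback report and a SkySpark spark are
# candidate matches when they share a room and fall within a time window.
WINDOW = timedelta(hours=48)

def candidate_links(feedback, sparks, window=WINDOW):
    """Read-only join: never writes back; just proposes (report, spark) pairs."""
    links = []
    for f in feedback:
        for s in sparks:
            if f["room"] == s["room"] and abs(f["when"] - s["when"]) <= window:
                links.append((f["id"], s["id"]))
    return links

feedback = [{"id": "FB-1", "room": "RMI AHU3 zone", "when": datetime(2026, 1, 15, 9)}]
sparks = [
    {"id": "RMI-AHU3-2026-01-14", "room": "RMI AHU3 zone", "when": datetime(2026, 1, 14, 6)},
    {"id": "RMI-AHU1-2026-01-02", "room": "RMI AHU1 zone", "when": datetime(2026, 1, 2, 6)},
]
links = candidate_links(feedback, sparks)
# The Jan 14 spark is within 48h of the report; the Jan 2 spark is not.
```

Room-number drift (see the data-reality artifacts later in this document) means the room equality check would in practice need a resolution table, which is exactly the kind of thing the external index can own.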
### 3.3 LLM-assisted triage summaries, read-only and external

You wrote: "LLM AIs are pretty good at interpreting these cryptic strings" — speaking about PointMemos, but the same is true of the Comment + Mechanical Notes pair on each feedback record.
One option to consider: for each new feedback row, TowerWatch could run an LLM pass (free tier, gpt-5-mini — no usage cost to UC Davis) over Comment + Mechanical Notes and produce a 1–2 sentence plain-English triage summary, stored externally. When your team is triaging, they would see the summary alongside the original record in a read-only view — the official Triage Done, Follow Up, Response actions still happen in your base, by your team, exactly as today.
Explicit constraints on this idea: the summary is never written to your base, never drafts an email on your behalf, never auto-sets Triage Done. It's a reading aid, not a decision. If a summary is wrong it stays wrong in our view only; your records are untouched.
Why it could help: it converts the 6,700+ populated Mechanical Notes into a skim-able index without asking anyone to re-type anything or change how notes are currently written.
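The shape of the pass is small. In this sketch the model call is stubbed out — `call_llm` is a placeholder, not a real API; a production version would call whatever endpoint TowerWatch runs, and nothing is ever written back to the base:

```python
def call_llm(prompt):
    # Placeholder: a real implementation would call the model endpoint here.
    return "Occupant reports cold near the window; tech notes suggest a stuck damper."

def triage_summary(record):
    """Produce a 1-2 sentence plain-English reading aid, stored externally."""
    comment = record.get("Comment") or "(no occupant comment)"
    notes = record.get("Mechanical Notes") or "(no mechanical notes)"
    prompt = (
        "Summarize this comfort report in 1-2 plain sentences.\n"
        f"Occupant said: {comment}\nTech noted: {notes}"
    )
    return call_llm(prompt)

summary = triage_summary({
    "Comment": "freezing near the window all week",
    "Mechanical Notes": "VAV damper stuck at min position",
})
```

The key design point survives the stub: the function only reads the two text fields and returns a string for an external view — there is no code path that touches Triage Done, Email Draft, or any other field.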
### 3.4 Four quick view recipes — LOW effort

These are view definitions you (or anyone with a power-user seat on your base) could build in a few minutes each. They are listed here as four recipes rather than as "please install these." If any of them would duplicate something you already maintain, skip it — they're only useful if they fill an actual gap.
| View idea | Filter logic (conceptual) | What it helps with |
|---|---|---|
| Chronic rooms | Group by Building and Room, count reports in the trailing 30 days, surface rooms with 5+ reports | Identifies rooms that deserve a sit-down (sensor placement, recommissioning) rather than another triage pass |
| Contrarian reports | Comfort = "hot" during winter months OR Comfort = "cold" during 90°F+ days (joins to Outside Air Temp.) | Flags rare-but-diagnostic reports that are often real faults (over-cooling, stuck heating, solar ricochet) |
| Aging untriaged | Triage Done is empty AND Date is more than 7 days old | Surfaces the backlog at a glance; pairs with the "staffing-not-detection" reality — this view helps you budget attention, not add work |
| Possible dead work orders | Mechanical Notes populated AND Success Story empty AND Follow Up empty AND more than 30 days since Last Modified | Reports that got diagnostic attention but no recorded closure — the specific pattern you and Daniel flagged on Apr 14 |
Why this is low effort: these are pure Airtable view definitions — no new fields, no automations, no external integrations. Zero footprint if you delete them.
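For reference, the "Chronic rooms" recipe reduces to a small grouping-and-threshold computation. A Python sketch on invented records — in Airtable this would be a grouped, filtered view, not code:

```python
from collections import Counter
from datetime import datetime, timedelta

def chronic_rooms(reports, as_of, window_days=30, threshold=5):
    """Rooms with `threshold`+ reports in the trailing `window_days` days."""
    cutoff = as_of - timedelta(days=window_days)
    counts = Counter(
        r["Building and Room"] for r in reports if r["Date"] >= cutoff
    )
    return sorted(room for room, n in counts.items() if n >= threshold)

now = datetime(2026, 4, 15)
# Synthetic data: six recent reports for one room, one stale report elsewhere.
reports = (
    [{"Building and Room": "Hart Hall 201", "Date": now - timedelta(days=d)}
     for d in range(6)]
    + [{"Building and Room": "Mrak Hall 10", "Date": now - timedelta(days=40)}]
)
flagged = chronic_rooms(reports, as_of=now)
# Only Hart Hall 201 crosses the 5-report threshold inside the 30-day window.
```

The 30-day window and 5-report threshold are starting guesses from the table above; the right values depend on your reporting volume per building.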
### 3.5 A closed loop on outcomes

The view in §3.4 surfaces probable dead work orders. This recommendation is about making sure fewer of them happen in the first place.
One option to consider: add (or repurpose) two lightweight fields on the Feedback table:
- Outcome Verified? — single-select: pending / verified-fixed / verified-persists / deferred / not-applicable
- Outcome Verified Date — when the re-verification happened (date)

Combined with the Expected Re-verify Date suggested in §3.1, you get a clean loop: a report is triaged, a mechanical action is taken, a follow-up date is set, and the outcome is explicitly recorded even if the answer is "it's still broken, we're tracking it" or "the occupant report was a one-off." The alternative — silent closure by timeout — is exactly the dead-work-order pattern.
Importantly: no automation is being proposed here. No email-nag, no auto-ping. Your team updates the field when you update the record, same as Triage Done today.
Why this is higher-leverage than the view alone: the view in §3.4 can only find dead work orders after they exist. The Outcome Verified? column helps you not create them in the first place, and becomes a meaningful supervised label for anomaly-model training (if you ever want to share that data with us).
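The "still pending past its re-verify date" check the two proposed fields enable is a one-line filter. A Python sketch — the field names are the proposed companions, not fields that exist in your base today:

```python
from datetime import date

# `Outcome Verified?` and `Expected Re-verify Date` are the optional
# companion fields proposed above -- assumptions, not existing schema.
def overdue_reverifications(reports, today):
    """Reports whose re-verify date has passed while the outcome is still pending."""
    return [
        r for r in reports
        if r.get("Outcome Verified?") == "pending"
        and r.get("Expected Re-verify Date") is not None
        and r["Expected Re-verify Date"] < today
    ]

today = date(2026, 4, 15)
reports = [
    {"id": "FB-7", "Outcome Verified?": "pending",
     "Expected Re-verify Date": date(2026, 4, 1)},
    {"id": "FB-8", "Outcome Verified?": "verified-fixed",
     "Expected Re-verify Date": date(2026, 4, 1)},
    {"id": "FB-9", "Outcome Verified?": "pending",
     "Expected Re-verify Date": date(2026, 5, 1)},
]
overdue = overdue_reverifications(reports, today)
# Only FB-7 is both pending and past its re-verify date.
```

As an Airtable view this is a two-condition filter plus a date comparison; no automation is implied, consistent with the no-nag constraint above.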
The parts of the base we think are particularly well-designed — and that we specifically don't want to disturb:
- Response Templates keyed by name and Status — that's institutional memory about how to talk to people, made portable and consistent. This is rarer than it sounds.
- The UCD Health Send Email field acknowledges that health-services communication is its own channel with its own timing. That subtle distinction is usually missing from comfort-tracking systems.
- Assigned To: sparingly used (2.1%), but where it is used, it's clearly on the high-attention cases. That matches the "staffing-not-detection" reality better than forced assignment would.
- Previous Cooling Set Point + Previous Heating Set Point mean every override is reversible. That's exactly the discipline recommissioning work requires.
- The Description → Diagnosis → Investigation Actions Taken → Resolution shape is textbook case-file structure.

During our reverse-engineering pass on the TherMOOstat bundle, and in conversations referenced from the Apr 14 meeting, three data-reality drifts came up that your team has almost certainly hit:
| Artifact | What happens | Field that could help flag it |
|---|---|---|
| Room-number drift | Occupant reports a room number that was renumbered years ago; BMS still uses the old one (or vice versa). The report and the sensor don't join cleanly. | An optional BMS Room ID (resolved) companion field on Feedback. Blank is fine; populated when the tech resolved the mismatch. |
| Multi-zone master-sensor (per Brian Lima, Apr 14 meeting) | One physical sensor governs multiple nominal rooms, so a report in Room N is legitimately about a sensor physically in Room M. | An optional Governing Sensor Room text field — lets the tech record once that "Room N is driven by Sensor in M" and surface that every time Room N complains. |
| Solar ricochet (per Brian Lima, Apr 14 meeting) | Solar arrays reflect heat onto adjacent buildings' envelope sensors, producing thermal anomalies that look like HVAC faults but aren't. | A checkbox like Solar Ricochet Candidate? on Room Inventory — set once per room, visible at every triage on that room. |
None of these need to be added; you may already have ad-hoc ways of capturing them in Mechanical Notes. We flag them only because they're common enough that explicit fields tend to pay for themselves within a year.
Things we can't infer from the data — but that would sharpen any of the above if you're willing to share:
- The Triage Done rate in the snapshot is a single moment in time and doesn't tell us about pacing.