TherMOOstat — Comfort feedback. Better spaces. Together.

TherMOOstat Airtable — Workflow Recommendations

A set of ideas for your consideration. Nothing here has been built, changed, or written to your base.

Audience Hiroko Masuda + UC Davis Facilities Management  ·  From Dave MacDonald, TowerWatch  ·  Date 2026-04-16  ·  Status Suggestions only — no changes made
Read-only stance. Everything in this document is a suggestion for your team to consider. We have not added, modified, or removed a single field, view, record, comment, automation, or Extension in 🐄TherMOOnalysis-Copy, and we won't — every recommendation below is something you would action on your own timeline, if and only if it sounds useful. We're here to support your workflow, not disrupt it.
Contents
  1. Thank-you preamble
  2. Observations about the current workflow
  3. Five recommendations to consider
  4. What's already working well
  5. Measurement artifacts your team has probably encountered
  6. Questions for your team

1. Thank-you preamble

Hiroko — thank you for sharing the TherMOOnalysis base, the Feedback_2026-04-15.csv export, the recs.zinc SkySpark dump, and the PointMemos workbook with us. You shared all of that on trust, and that matters more to us than any single feature could.

This document is not a plan of things we will do to your base. It is a set of observations and ideas that your team could consider, in your own time, at whatever scope feels right. If anything below is off the mark, or overlaps something you've already tried and set aside, we'd much rather hear that than press forward. The institutional relationship is the deliverable; everything else is downstream of it.

In particular, you flagged in your 2026-04-15 email that the Mechanical Notes field is the high-value one and that LLM-assisted interpretation would be welcome — several recommendations below build on those hints directly.

2. Observations about the current workflow

From the 2026-04-15 snapshot you shared, the base contains 5 core tables. A quick read of the field schema and population rates suggests how the team uses them day-to-day:

Table · Rows (in snapshot) · What it tells us

Feedback · 29,258 · ~40 user-facing + triage fields (Comment, Comfort, Building and Room, Mechanical Notes, Triage Done, Follow Up, Send Email, Email Draft, Assigned To, UCD Health Send Email, Ongoing Projects, Screen Shot, Success Story, Response). Every report has a place for what the user said, what the tech observed, who owns it, and what happens next.
Room Inventory · 52 · Physical-asset truth table: thermostats, vents, CFM, setpoints, Siemens visibility, photos, occupancy, construction date. The canonical "what's in the room."
In-Depth Investigation · 4 · Long-form case files (Description, Diagnosis, Investigation Actions Taken, Resolution, HVAC Data & Trends (screenshots)). The "when we got to the bottom of it" archive.
Response Templates · 13 · Canned email content keyed by name and status. Consistency + time-saver for the "what do I say back?" problem.
Comfort Band · 18 · Per-building (or per-room) comfort-band overrides, with previous setpoints preserved for reversibility. This is institutional memory about what the room should feel like.

Field-population pattern

A few numbers from the feedback table that informed the recommendations below:

Field · Populated · What that suggests

Comment (user's own text) · 29.6% · Roughly 1 in 3 reports carry a free-text signal — already high for a voluntary survey.
Mechanical Notes · 23.1% · Your team is writing diagnostic notes on roughly 1 in 4 reports — a meaningful corpus (~6,700 entries) of institutional knowledge.
Triage Done · 10.9% · The checked-off triage fraction — consistent with the "staffing-not-detection constraint" Daniel mentioned at the Apr 14 meeting.
Follow Up, Send Email, Email Draft · 0.6% – 3.5% · Active workflow signals used on selected cases — suggests these are deliberately reserved for high-need reports, not a dropped habit.
Assigned To · 2.1% · Per-report ownership is used sparingly but exists — useful lever for the chronic-case view below.
Success Story · 0.6% · Small but important: the record of "we did something and the room got better."

Percentages are from the snapshot you sent on 2026-04-15. None of the numbers above are actionable findings by themselves — they're the baseline we used to rank recommendations.
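If it's useful to re-derive these percentages from your own exports, the computation is small enough to sketch; a minimal version in Python, assuming only that the CSV column names match the field names cited above:

```python
import csv

def population_rates(rows, fields):
    """Fraction of rows with a non-empty value in each field.

    `rows` is any iterable of dicts, e.g. csv.DictReader over a
    Feedback export. Column names are assumed to match the base.
    """
    counts = {f: 0 for f in fields}
    total = 0
    for row in rows:
        total += 1
        for f in fields:
            if (row.get(f) or "").strip():
                counts[f] += 1
    return {f: counts[f] / total for f in fields} if total else {}

# Typical use against a snapshot export:
# with open("Feedback_export.csv", newline="") as fh:
#     print(population_rates(csv.DictReader(fh),
#                            ["Comment", "Mechanical Notes", "Triage Done"]))
```

Running it periodically against fresh exports would also show whether the baseline above drifts across the academic calendar.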

3. Five recommendations to consider

Ranked roughly by effort × value, with the lightest-weight / highest-leverage ones first. Every recommendation below is a thing your team would choose to do (or not), in your base, on your schedule.

3.1 Structure the Mechanical Notes field with a lightweight addendum LOW effort

You noted that Mechanical Notes is where the real diagnostic content lives — it's currently free-text, which is exactly right for capturing what techs actually see. The ~6,700 populated notes in the 2026-04-15 snapshot are a genuinely valuable corpus.

One option to consider: keep Mechanical Notes exactly as it is (free-text, unconstrained), and add a small number of optional, structured companion fields that your team can fill in when the info is handy. For example:

  - Fault Category (single select: damper, valve, sensor, filter, setpoint, other)
  - Equipment Tag (short text, e.g. "AHU-3")
  - Expected Re-verify Date (date: when the fix should be checked on)

The free-text note keeps all its context and nuance; the structured companions make it easy to produce views like "show me all damper-related reports in Hart Hall this quarter" without losing the story.

Why this first: it's additive, doesn't change what anyone already does, and the value compounds over time because the structured fields index the free text rather than replacing it.
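To make the indexing idea concrete, here is a sketch of the kind of query the companions would unlock over any exported row set (Fault Category is a hypothetical example field, not something in your base):

```python
def damper_reports(rows, building):
    """Reports tagged with a hypothetical 'Fault Category' companion field.

    The free-text Mechanical Notes ride along untouched; the structured
    field only indexes them.
    """
    return [
        r for r in rows
        if r.get("Fault Category") == "damper"
        and r.get("Building and Room", "").startswith(building)
    ]

rows = [
    {"Building and Room": "Hart Hall 1130", "Fault Category": "damper",
     "Mechanical Notes": "Diffuser damper stuck at 20% open."},
    {"Building and Room": "Wellman 212", "Fault Category": "valve",
     "Mechanical Notes": "CHW valve hunting."},
]
# damper_reports(rows, "Hart Hall") keeps only the Hart Hall damper record
```

The same filter is expressible as an Airtable view condition; the point is that the note text never has to be parsed to answer the question.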

3.2 A sidecar "annotation layer" that links feedback to SkySpark sparks and TRIRIGA tickets MED effort

A recurring thread at the Apr 14 meeting (Hiroko + Daniel) was the "dead work-order pattern" — a spark gets raised, a ticket gets filed, and the connection back to the original comfort report is easy to lose across three different systems. In your emails you called out that not every PointMemo corresponds to something in SkySpark either; the systems drift apart over time.

One option to consider: rather than putting that linkage inside Airtable (which would add fields to your base), TowerWatch could maintain an external annotation index that joins on Building and Room + timestamp, and exposes the result as a read-only web view your team could check optionally.

[Diagram] Sources: TherMOOstat Airtable (Feedback + Mechanical Notes, your base), SkySpark sparks (rule hits), TRIRIGA work orders, and PI Historian (zone / AHU measurements) all feed the TowerWatch annotation layer (external, read-only to your base), which joins on Building and Room + timestamp, preserves provenance, and never writes back. The result surfaces as a "Related items" web view your team opens if and when useful; nothing is pushed to your base.

Each feedback record, when opened in the external view, would show "this report appears to correspond to spark RMI-AHU3-2026-01-14, ticket WO-412039, and zone-temperature trajectory from PI." If the linkage is wrong, the view says so; nothing is ever pushed back.

Why this is worth considering: it addresses the dead-work-order pattern without asking your team to maintain cross-references manually. If you later decide the linkages are solid enough to want them in your base, we'd bring them for your review — we would not add them unilaterally.
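A minimal sketch of the join rule at the heart of the annotation layer, under assumed record shapes (the room, ts, and id keys are illustrative, not your schema):

```python
from datetime import timedelta

def link_candidates(feedback, sparks, window_hours=24):
    """Pair comfort reports with SkySpark sparks on room + time proximity.

    Purely read-only over two exported row sets; nothing is written back.
    A report and a spark are candidates when they share a room and fall
    within `window_hours` of each other.
    """
    window = timedelta(hours=window_hours)
    links = []
    for fb in feedback:
        for sp in sparks:
            if fb["room"] == sp["room"] and abs(fb["ts"] - sp["ts"]) <= window:
                links.append({"feedback_id": fb["id"],
                              "spark_id": sp["id"],
                              "lag": sp["ts"] - fb["ts"]})
    return links
```

In practice the same rule would run against TRIRIGA and PI rows as well, and every candidate link would carry its provenance so a wrong linkage is visibly wrong rather than silently merged.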

3.3 LLM-assisted triage summary — strictly advisory, never overrides a human MED effort

You wrote: "LLM AIs are pretty good at interpreting these cryptic strings" — speaking about PointMemos, but the same is true of the Comment + Mechanical Notes pair on each feedback record.

One option to consider: for each new feedback row, TowerWatch could run an LLM pass (free tier, gpt-5-mini — no usage cost to UC Davis) over Comment + Mechanical Notes and produce a 1–2 sentence plain-English triage summary, stored externally. When your team is triaging, they would see the summary alongside the original record in a read-only view — the official Triage Done, Follow Up, Response actions still happen in your base, by your team, exactly as today.

A worked example of the advisory pass:

  Comment: "Freezing in Wellman 212, vent right over my desk"
  Mechanical Notes: "Checked AHU-3 supply 50F, CHW valve 100%"
      ↓ (external LLM pass, not your base)
  Proposed summary: "Diffuser damper over-cooling complaint — AHU-3 supply appears low (50 F) with CHW valve fully open. Archetype: ventilation-driven over-cooling. Candidate action: inspect diffuser damper." Confidence: medium. Human triage still owns the call.

Explicit constraints on this idea: the summary is never written to your base, never drafts an email on your behalf, never auto-sets Triage Done. It's a reading aid, not a decision. If a summary is wrong it stays wrong in our view only; your records are untouched.

Why it could help: it converts the 6,700+ populated Mechanical Notes into a skim-able index without asking anyone to re-type anything or change how notes are currently written.
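If the advisory pass is worth exploring, the framing matters as much as the model. A minimal sketch of how the input might be assembled (the prompt wording and the triage_prompt helper are our assumptions; no API call is shown because the output would live outside your base):

```python
def triage_prompt(comment, mech_notes):
    """Build the input for an external, advisory-only LLM pass.

    The instruction block makes the advisory framing explicit: the
    model summarizes and hedges, and a human owns every action.
    """
    return (
        "You are summarizing a building-comfort report for a triage tech.\n"
        "In 1-2 plain-English sentences: describe the likely issue, name\n"
        "a fault archetype, and give a confidence (low/medium/high).\n"
        "Do not close, assign, or draft email; a human owns every action.\n\n"
        f"Occupant comment: {comment or '(none)'}\n"
        f"Mechanical notes: {mech_notes or '(none)'}\n"
    )
```

Keeping the constraint inside the prompt itself, rather than only in policy, means every stored summary carries evidence of the advisory framing it was generated under.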

3.4 Saved views / dashboards for the four questions your team probably asks most LOW effort

These are view definitions you (or anyone with a power-user seat on your base) could build in a few minutes each. Listed here as four recipes rather than "please install these." If any of them would duplicate something you already maintain, skip — they're only useful if they fill an actual gap.

View idea · Filter logic (conceptual) · What it helps with

Chronic rooms · Group by Building and Room, count reports in trailing 30 days, surface rooms with 5+ reports · Identifies rooms that deserve a sit-down (sensor placement, recommissioning) rather than another triage
Contrarian reports · Comfort = "hot" during winter months OR Comfort = "cold" during 90°F+ days (joins to Outside Air Temp.) · Flags rare-but-diagnostic reports that are often real faults (over-cooling, stuck heating, solar ricochet)
Aging untriaged · Triage Done is empty AND Date is more than 7 days old · Surfaces backlog at a glance; pairs with the "staffing-not-detection" reality — this view helps you budget attention, not add work
Possible dead work orders · Mechanical Notes populated AND Success Story empty AND Follow Up empty AND more than 30 days since Last Modified · Reports that got diagnostic attention but no recorded closure — the specific pattern Daniel and you flagged on Apr 14

Why this is low effort: these are pure Airtable view definitions — no new fields, no automations, no external integrations. Zero footprint if you delete them.
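The "Possible dead work orders" recipe can also be prototyped offline before anyone touches a view. A sketch over an exported row set, with last_modified standing in for Airtable's Last Modified field (other names per the snapshot):

```python
from datetime import timedelta

def possible_dead_work_orders(rows, now, stale_days=30):
    """Mechanical attention recorded, no closure, untouched for 30+ days.

    Mirrors the 'Possible dead work orders' view recipe: notes exist,
    but neither Success Story nor Follow Up was ever filled in.
    """
    cutoff = now - timedelta(days=stale_days)
    return [
        r for r in rows
        if (r.get("Mechanical Notes") or "").strip()
        and not (r.get("Success Story") or "").strip()
        and not (r.get("Follow Up") or "").strip()
        and r["last_modified"] < cutoff
    ]
```

Running this against a few months of export would give a quick size estimate of the backlog before deciding whether the view earns a permanent slot.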

3.5 Closed-loop outcome tracking — a small addition that addresses the dead-work-order pattern head-on MED effort

The view in §3.4 surfaces probable dead work orders. This recommendation is about making sure fewer of them happen in the first place.

One option to consider: add (or repurpose) two lightweight fields on the Feedback table: an Outcome Verified? single select (with options along the lines of "resolved and verified", "still broken, tracking", "occupant report was a one-off") and a Verified On date stamped when the outcome is confirmed.

Combined with the Expected Re-verify Date suggested in §3.1, you get a clean loop: a report is triaged, a mechanical action is taken, a follow-up date is set, and the outcome is explicitly recorded even if the answer is "it's still broken, we're tracking it" or "the occupant report was a one-off." The alternative — silent closure by timeout — is exactly the dead-work-order pattern.

Importantly: no automation is being proposed here. No email-nag, no auto-ping. Your team updates the field when you update the record, same as Triage Done today.

Why this is higher-leverage than the view alone: the view in §3.4 can only find dead work orders after they exist. The Outcome Verified? column helps you not create them in the first place, and becomes a meaningful supervised label for anomaly-model training (if you ever want to share that data with us).

4. What's already working well

The parts of the base we think are particularly well-designed — and that we specifically don't want to disturb:

  - Mechanical Notes as unconstrained free text: it captures what techs actually see, and any structure (per §3.1) should sit beside it, not replace it.
  - Comfort Band's preserved previous setpoints: reversibility built in, and institutional memory about what each room should feel like.
  - Response Templates keyed by name and status: a consistency and time saver for the "what do I say back?" problem.
  - The In-Depth Investigation case files: the long-form "when we got to the bottom of it" archive.

5. Measurement artifacts your team has probably already encountered

During our reverse-engineering pass on the TherMOOstat bundle, and in conversations from the Apr 14 meeting, three data-reality drifts came up that your team has almost certainly hit:

Artifact · What happens · Field that could help flag it

Room-number drift · Occupant reports a room number that was renumbered years ago; BMS still uses the old one (or vice versa). The report and the sensor don't join cleanly. · An optional BMS Room ID (resolved) companion field on Feedback. Blank is fine; populated when the tech resolved the mismatch.
Multi-zone master-sensor (per Brian Lima, Apr 14 meeting) · One physical sensor governs multiple nominal rooms, so a report in Room N is legitimately about a sensor physically in Room M. · An optional Governing Sensor Room text field — lets the tech record once that "Room N is driven by the sensor in M" and surface that every time Room N complains.
Solar ricochet (per Brian Lima, Apr 14 meeting) · Solar arrays reflect heat onto adjacent buildings' envelope sensors, producing thermal anomalies that look like HVAC faults but aren't. · A checkbox like Solar Ricochet Candidate? on Room Inventory — set once per room, visible at every triage on that room.

None of these need to be added; you may already have ad-hoc ways of capturing them in Mechanical Notes. We flag them only because they're common enough that explicit fields tend to pay for themselves within a year.
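For the multi-zone case specifically, the payoff of a Governing Sensor Room field is a trivially cheap lookup at triage time; a sketch with a hypothetical room mapping:

```python
def sensor_room(reported_room, governing):
    """Resolve a complaint's room to the room whose sensor governs it.

    `governing` maps reported rooms to master-sensor rooms; any room
    absent from the map is assumed to be governed by its own sensor.
    The example pair below is hypothetical, not from your inventory.
    """
    return governing.get(reported_room, reported_room)

governing = {"Bainer 2145": "Bainer 2141"}  # hypothetical multi-zone pair
# sensor_room("Bainer 2145", governing) resolves to "Bainer 2141";
# any unmapped room resolves to itself
```

Recording the mapping once per room means every future complaint in the governed room automatically points the tech at the right sensor.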

6. Questions for your team

Things we can't infer from the data — but that would sharpen any of the above if you're willing to share:

  1. What's the current triage volume per week, and how does it flex across the academic calendar (move-in, finals, summer)? The 10.9% Triage Done rate in the snapshot is a single moment in time and doesn't tell us about pacing.
  2. Which fields do you consider the most indispensable today? Which ones have you grown out of using, and why? (This would let us rank the §3.1 / §3.4 ideas against what your team actually reaches for.)
  3. What's a reasonable target closed-loop time between a comfort report and a verified outcome — a week? a quarter? something else? Answering this calibrates §3.5.
  4. Are there communication channels (student-facing or internal) where a passive "your report was aggregated with 3 others; this is what happened" loop would be welcome, and are there channels you'd prefer to keep quiet?
  5. Are there existing analyses, reports, or pivots on this data that your team already produces that we should not try to replicate? We'd rather plug into your existing cadence than parallel it.
  6. Your 2026-04-15 email mentioned the PointMemos file is "messy, cryptic" — would an LLM-interpreted export of PointMemos (external to your base, in our artifact store) be useful to your team as a standalone reference, independent of any TherMOOstat integration?
A reminder about scope. Every recommendation in this document assumes your team is in the driver's seat. We have not written anything to your base, we have no automations pointing at it, no Extensions installed, no email templates queued, and no side panels waiting to be flipped on. The token we hold remains scoped to reads only. If any of the above is worth picking up, we'd love to help — at whatever pace and scope you choose.