PPC 1:1 Metrics Analysis Tool
Perfection PC Inc. · SDM Prep Tool
How to Use — Weekly Review Mode
🔒
Data Privacy Notice
Screenshots are processed by Google Gemini. Crop or blur any tech names, client names, or company names visible in screenshots before uploading. Techs are identified by initials only. Phase 2/3 will migrate to a privacy-compliant API.
  1. Select the tech from the dropdown using their initials. Their tier will load automatically.
  2. In MSPBots Core v3, filter the dashboard to the tech's name and set the date range to Last 7 Days.
  3. Screenshot the gauge row — include MTTR, MTFR, SLA CR, and TT only. Do not include the BUR/RUR gauges — mid-day snapshots are inaccurate for those metrics.
  4. Screenshot the BUR & RUR trend line (scroll down on Core v3) — this is the accurate weekly picture for those two metrics.
  5. Click Analyze. The tool will read both screenshots and generate talking points, a root cause analysis, and a suggested One Thing.
  6. Download the Word doc and paste or import it into the tech's OneNote 1:1 notebook — or screenshot the output panels directly.
💡 Tip: Run this tool 5–10 minutes before each meeting so the analysis is ready when you sit down with the tech.
Tech Selection
Screenshot Uploads
📊
Gauge Row Screenshot
Core v3 → top row of gauges
MTTR, MTFR, SLA CR, TT only
Do NOT include BUR/RUR gauges
Filtered to tech · Last 7 Days
Required
⚠ BLUR NAMES BEFORE UPLOAD
📉
BUR & RUR 7-Day Trend
Core v3 → Trend Line section
BUR and RUR charts
Filtered to tech · Last 7 Days
Required
⚠ BLUR NAMES BEFORE UPLOAD
Reading screenshots...
Analysis Ready
💬 What's Working / What's Not Working
✅ What's Working
⚠️ What's Not Working
🔎 Root Cause Summary
⭐ This Week's Suggested One Thing
Recommended Focus Action
📋 Export
🗺 Tool Roadmap — Future Development
Perfection PC · 1:1 Analysis Tool · Phase Planning
      Phase 1
      Current State โ€” Manual Screenshots
Live now · No additional infrastructure required
      What this phase delivers: SDM uploads screenshots from MSPBots manually before each 1:1. AI analyzes gauge and trend images, generates ITIL-grounded metric analysis, root cause reasoning, and a specific One Thing action. Output exports as a formatted Word doc matching the OneNote 1:1 template or as screenshots.
✅
Gauge row + BUR/RUR trend screenshots → AI analysis via Anthropic API
✅
Word doc export matching Perfection PC 1:1 template (sections 1–7, signature line)
✅
Clipboard copy fallback for SharePoint-hosted environments
⚠️
      Known limitation: Analysis is based on visual gauge/trend data only. No access to which specific tickets drove SLA misses, what ticket types are taking longest, or whether BUR issues are coding errors vs. genuine underwork. Coaching advice is directional, not definitively diagnostic.
      Phase 2
      Auto-Staged PSA Report Data
Power Automate + SharePoint + Microsoft Graph API · No new paid tools required
What this phase unlocks: PSA report data (CWM/BMS) is automatically emailed on a schedule and saved to SharePoint. The tool reads pre-staged files per tech instead of requiring manual uploads. Analysis becomes genuinely diagnostic — the AI can see which tickets breached SLA, what type they were, and where time was spent.
🔧
CWM/BMS Scheduled Reports Configure CWM or BMS to email scheduled reports on a weekly cadence. Reports needed per tech: (1) Closed ticket SLA report — last 7 days filtered to tech, shows which tickets missed SLA and their type/priority. (2) Time entry detail report — billable vs. non-billable breakdown to diagnose BUR/RUR gaps. (3) Daily ticket close distribution — new vs. closed count by day to spot throughput inconsistencies. (4) Reopen detail report — tickets reopened with original resolution time, to separate genuine MTTR from reopen-inflated numbers.
⚡
Power Automate Flow Build a flow in Power Automate (available in your M365 Business Premium license): Trigger = when email arrives in a designated mailbox from the PSA report sender. Action = extract attachment → save to SharePoint document library at path /1on1-data/{TechName}/{ReportType}/latest.csv. This flow runs automatically every time a scheduled report lands — no manual steps.
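The flow's output path convention can be sketched as a small helper. This is illustrative only — the real path is assembled inside Power Automate expressions, and the name-sanitization rule shown here is an assumption, not a documented SharePoint requirement:

```python
def staged_report_path(tech_name: str, report_type: str) -> str:
    """Build the SharePoint library path the Power Automate flow writes to.

    Path convention from the Phase 2 plan:
    /1on1-data/{TechName}/{ReportType}/latest.csv
    """
    # Drop characters that are awkward in folder names (illustrative rule).
    safe = "".join(c for c in tech_name if c.isalnum() or c in " -_").strip()
    return f"/1on1-data/{safe}/{report_type}/latest.csv"

print(staged_report_path("J.D.", "sla-closed"))  # /1on1-data/JD/sla-closed/latest.csv
```

Keeping the convention in one place matters because Phases 3 and 4 both read from these same paths.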
🔌
Microsoft Graph API Integration Update this tool (hosted in SharePoint) to authenticate via Graph API using the logged-in user's M365 session (no separate credentials). When a tech is selected, the tool calls GET /sites/{site}/drives/{drive}/items/{path} to retrieve the latest staged CSV files for that tech. These are passed to the AI alongside or instead of screenshots. Graph API is free within M365 — no additional cost.
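A minimal sketch of the endpoint shape, using Graph's path-based drive-item addressing (`/root:/{path}:/content`). The `site_id` and `drive_id` values are placeholders the tool would resolve once at setup; in the browser the actual request would go through the signed-in session (e.g., via MSAL) rather than Python:

```python
from urllib.parse import quote

GRAPH = "https://graph.microsoft.com/v1.0"

def drive_item_content_url(site_id: str, drive_id: str, path: str) -> str:
    """URL for downloading a file at a known path inside a SharePoint drive."""
    # quote() leaves '/' intact, so the staged path keeps its folder structure.
    return f"{GRAPH}/sites/{site_id}/drives/{drive_id}/root:/{quote(path)}:/content"
```

Path addressing avoids a directory-listing round trip, since the staged file locations are fixed by the Phase 2 convention.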
🎯
Upgraded AI Prompt With structured CSV data, the AI can reason from actual ticket records: "3 of 4 SLA breaches this week were P2 tickets that sat in 'In Progress' for more than 4 hours before resolution — likely escalation decisions being delayed." This replaces directional inference with evidence-based coaching.
      Phase 3
      Auto-Staged MSPBots Metric Data
MSPBots email export + Power Automate · Eliminates manual screenshot uploads entirely
What this phase unlocks: MSPBots emails metric snapshots (BUR, RUR, MTTR, MTFR, SLA CR, TT) on a schedule — per tech, per week. The same Power Automate flow from Phase 2 captures and stages this data to SharePoint. The tool pre-populates the metrics table automatically when a tech is selected. No screenshots needed at all.
🤖
MSPBots Scheduled Email Export Configure MSPBots to send a scheduled metric summary email for each tech — daily or weekly. This can be a digest of the Core v3 dashboard values per tech, or individual widget email exports if MSPBots supports that format. The email lands in the same monitored mailbox the Phase 2 flow already watches.
⚡
Extended Power Automate Flow Update the existing flow to also handle MSPBots emails: parse the metric values from the email body or attachment and write them to a structured JSON file at /1on1-data/{TechName}/metrics/latest.json. The tool reads this JSON on tech selection and pre-fills the gauge scorecard.
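The body-to-JSON step can be sketched as below. The sample email layout is hypothetical — MSPBots' actual export format isn't known yet, so the regex would need adjusting; only the staging idea (email body → flat JSON per tech) comes from the plan:

```python
import json
import re

# Hypothetical email-body layout — adjust to whatever MSPBots actually emits.
SAMPLE_BODY = """Tech: JD
BUR: 62%  RUR: 88%
MTTR: 3.4h  MTFR: 0.6h
SLA CR: 94%  TT: 21"""

def parse_metrics(body: str) -> dict:
    """Pull 'NAME: <number>' pairs out of the email body into a flat dict."""
    pairs = re.findall(r"([A-Z][A-Za-z ]*?):\s*([\d.]+)", body)
    return {name.strip(): float(val) for name, val in pairs}

# What the flow would write to /1on1-data/JD/metrics/latest.json:
print(json.dumps(parse_metrics(SAMPLE_BODY)))
```

If MSPBots attaches a CSV instead, the same flow branch would parse the attachment and emit the identical JSON shape, so the tool's reader stays unchanged.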
🔌
Tool Update — Auto-Load Mode Add a "Load from SharePoint" toggle to the tool. When enabled and a tech is selected, the tool fetches their latest staged metrics and report data via Graph API and runs analysis automatically — no uploads required. Manual screenshot mode stays available as a fallback.
      Phase 4
      Fully Automated 1:1 Prep via Rewst
Rewst workflow orchestration · 1:1 prep packet auto-generated before the meeting
What this phase unlocks: Rewst triggers the full analysis pipeline on a schedule — before each scheduled 1:1. It pulls staged data from SharePoint, calls the Anthropic API, generates the Word doc, and saves it to the tech's OneNote notebook or emails it to the SDM. The SDM arrives at the meeting with the analysis already done.
🔄
Rewst Workflow Trigger Build a Rewst workflow that fires on a schedule tied to each tech's 1:1 cadence (e.g., Tuesday 8am for tech A, Wednesday 8am for tech B). Rewst reads the staged SharePoint data via Graph API, assembles the analysis payload, and calls the Anthropic API directly.
📄
Automated Word Doc Generation Rewst receives the AI analysis JSON, uses a document generation step (or calls a lightweight Azure Function) to build the Word doc from the same template logic this tool uses, and saves the output to /1on1-docs/{TechName}/{Date}_1on1.docx in SharePoint.
📬
SDM Notification Rewst sends a Teams message or email to the SDM: "1:1 prep for [Tech] is ready — [link to doc]." SDM reviews, adjusts if needed, and walks into the meeting already prepared.
🧩
Dependencies Phases 2 and 3 must be complete before Phase 4 can run. Rewst needs Graph API access configured and an Anthropic API key stored as a Rewst secret. No additional paid tools required beyond existing Rewst license.
📊 Phase 2 Report Data — Why Each Report Matters
      SLA CR
Closed Ticket SLA Report (last 7 days, filtered to tech) Shows exactly which tickets missed their SLA window — ticket ID, type, priority, open time, resolution time, SLA deadline. Currently, if SLA CR is 91%, we know 9% of tickets breached but not which ones or why. With this report, the AI can say: "2 of 3 SLA misses were P2 tickets that sat unworked for 3+ hours mid-afternoon — likely a workload prioritization issue, not a process gap."
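Once the closed-ticket SLA CSV is staged, the breach pattern the AI cites is a few lines of aggregation. The column names and sample rows below are assumptions about the report layout, not the actual CWM/BMS schema:

```python
import csv
import io
from collections import Counter

# Illustrative rows — real column names depend on the CWM/BMS report layout.
CSV_DATA = """ticket_id,type,priority,sla_met
1001,Network,P2,no
1002,Desktop,P3,yes
1003,Network,P2,no
1004,Email,P1,yes
"""

def breach_summary(csv_text: str) -> Counter:
    """Count SLA breaches by priority so the AI prompt can cite specifics."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return Counter(r["priority"] for r in rows if r["sla_met"] == "no")

print(breach_summary(CSV_DATA))  # Counter({'P2': 2})
```

The same pass can carry ticket IDs and types into the prompt, which is what turns "9% breached" into a coachable specific.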
      MTTR
Closed Ticket Resolution Time Detail (last 7 days, sorted by duration descending) A single 18-hour outlier ticket can drag a week's average MTTR from 3 hrs to 6 hrs. Without this report, the AI can only say "MTTR is elevated." With it: "One ticket (ID #12345, P1 network issue) took 18 hours — nearly a quarter of your total resolution time this week. The other 21 tickets resolved in an average of 2.8 hrs. This is an outlier problem, not a systemic one."
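The outlier effect is plain arithmetic. A sketch using the "3 hrs to 6 hrs" figures above (four typical tickets plus one 18-hour ticket — the ticket durations are invented for illustration):

```python
def mttr_with_without_outlier(hours: list[float]) -> tuple[float, float]:
    """Weekly MTTR, then MTTR with the single longest ticket excluded."""
    trimmed = sorted(hours)[:-1]          # drop the longest ticket
    return sum(hours) / len(hours), sum(trimmed) / len(trimmed)

week = [2.0, 3.0, 3.0, 4.0, 18.0]        # one 18-hour outlier
print(mttr_with_without_outlier(week))   # (6.0, 3.0)
```

Showing both numbers side by side is what lets the coaching conversation target the one ticket instead of the whole week.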
      MTFR
First Response Timestamp Report (tickets with first response time, last 7 days) If MTFR is in breach, the key question is: when did the delays happen? This report shows first response times by ticket and by hour-of-day. If delays cluster between 12–1pm, that's a lunch coverage gap. If they're random, it's a habit issue — the tech is not checking the queue consistently.
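The cluster-vs-random distinction is easy to compute once the report is staged. The (hour opened, response hours) record shape and the 30-minute slowness threshold below are illustrative assumptions:

```python
from collections import Counter

def slow_response_hours(first_responses, threshold=0.5):
    """Count slow first responses (> threshold hours) per hour of day."""
    return Counter(hour for hour, rt in first_responses if rt > threshold)

# Two slow responses, both on tickets opened at noon: a lunch-coverage
# gap, not randomness.
sample = [(9, 0.2), (12, 1.5), (12, 2.0), (15, 0.3)]
print(slow_response_hours(sample))  # Counter({12: 2})
```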
      BUR / RUR
      Time Entry Detail Report (last 7 days, billable vs. non-billable breakdown by tech) BUR and RUR divergence tells a specific story. High RUR + low BUR = tech is working but logging to non-billable codes (internal, admin, or wrong agreement). Low RUR + low BUR = tech is genuinely underloaded or not logging time at all. This report gives the AI the actual numbers to distinguish between these two completely different problems.
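The two patterns described above can be told apart mechanically. The simplified BUR/RUR definitions and the 0.75/0.5 thresholds below are illustrative assumptions, not Perfection PC's actual targets:

```python
def utilization_diagnosis(billable: float, logged: float, available: float) -> str:
    """Classify the BUR/RUR divergence pattern from weekly hour totals.

    Simplified: BUR = billable / available, RUR = logged / available.
    """
    bur, rur = billable / available, logged / available
    if rur >= 0.75 and bur < 0.5:
        return "working but logging to non-billable codes"
    if rur < 0.5 and bur < 0.5:
        return "underloaded or not logging time"
    return "utilization healthy"

# 35 of 40 hours logged but only 15 billable: a coding problem, not a workload one.
print(utilization_diagnosis(billable=15, logged=35, available=40))
```

The point of encoding it is that the two branches lead to completely different 1:1 conversations, so the AI should never have to guess between them.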
      TT
Daily Ticket Close Count (last 7 days, by day) A tech who closes 15 tickets on Monday and 1–2 per day the rest of the week has a very different problem than one who closes 3–4 per day consistently. The first pattern suggests batch-working or cherry-picking easy tickets early in the week. The second may suggest workload distribution or ticket complexity issues. Neither is visible from a weekly total alone.
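Spikiness can be flagged from the daily counts with a simple spread measure; the 0.8 cutoff on the coefficient of variation is an illustrative assumption, not a tuned threshold:

```python
from statistics import mean, pstdev

def throughput_pattern(daily_closes: list[int]) -> str:
    """Flag batch-working: same weekly total, very different daily shapes."""
    if mean(daily_closes) == 0:
        return "no closes"
    cv = pstdev(daily_closes) / mean(daily_closes)  # coefficient of variation
    return "spiky (possible batch-working)" if cv > 0.8 else "consistent"

print(throughput_pattern([15, 2, 1, 2, 1]))  # spiky (possible batch-working)
print(throughput_pattern([4, 4, 5, 4, 4]))   # consistent
```

Both sample weeks total about 21 closes, which is exactly why the weekly TT gauge alone cannot separate them.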
      Reopen Rate
      Reopen Detail Report (tickets reopened in period, with original close date and reopen reason) Reopens inflate MTTR and tank SLA CR in ways that look like the tech's fault but may actually be a documentation or resolution-quality problem. If a tech has 4 reopens in a week, the coaching conversation is completely different from a tech who has 0 reopens but is just slow to resolve. This is currently invisible in the tool.
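Once the reopen report is staged, separating genuine resolution time from reopen inflation is a small computation. The two-field ticket shape below is an assumption about what the report provides:

```python
from statistics import mean

def reported_vs_original_mttr(tickets):
    """MTTR as the dashboard sees it vs. MTTR from original resolution times.

    Each ticket: {"total_hours": ..., "original_hours": ...}; the two values
    differ only when the ticket was reopened.
    """
    reported = mean(t["total_hours"] for t in tickets)
    original = mean(t["original_hours"] for t in tickets)
    return reported, original

# One reopen-inflated ticket triples the reported average here.
week = [
    {"total_hours": 10.0, "original_hours": 2.0},  # reopened
    {"total_hours": 2.0, "original_hours": 2.0},
]
print(reported_vs_original_mttr(week))  # (6.0, 2.0)
```

A large gap between the two numbers redirects the conversation from speed to resolution quality and documentation.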

      +1 (509) 489-3344