Matching Agents and To-Do's
Teaching AI to understand what you actually mean
The Context
Marcy: "I have like 47 things on my to-do list at any given time. Some are vague ("figure out the deployment thing"), some are specific ("update the API endpoint to return user preferences"). And now I have this whole Agency of Agents with specialized agents that could help - a PRD analyzer, a meeting transcriber, a status report generator.
But here's the annoying part: I have to remember which agent does what and manually assign tasks. Or I just use ChatGPT for everything and lose the benefit of having these specialized agents that know our workflow."
So we're building a tool to make sure we use the right tools at the right time and don't forget any to-do's. It's like Motion's AI calendar/task system, but instead of just reorganizing our calendars, it suggests which agent should handle each task.
The Idea
A plugin that overlays your to-do list and matches tasks to the right agent from your Agency of Agents. You write down "need to turn this week's meeting notes into tickets" and it suggests the Meeting → Tickets agent. You write "figure out why the deploy failed" and it knows that's different from "write a PRD for the new feature."
The flow:
- We dump our to-do list in Google Tasks (or wherever - in the future we can support any task manager you already love)
- The system matches each task to agents weâve already built
- It groups similar tasks together - do all your PRDs back to back, batch all your status reports
- Agents run on tasks they can handle autonomously, or create to-do's for things they recognize are missing
- We get a morning review session for anything that needs our input
Instead of context-switching between 12 different types of work, we do focused blocks and let the agents handle the translation work.
How It Works
1. Task Analysis
Parse your to-do list and figure out:
- What are you actually trying to accomplish?
- What context does this need? (related tasks, previous work, domain knowledge)
- Can this run autonomously or does it need human input?
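Those three questions can be roughed out with cheap heuristics before any model gets involved. The rules below (short or "figure out"-style tasks need human input) are placeholders for illustration, not our actual logic:

```python
def analyze(task: str) -> dict:
    """Rough first-pass analysis of a to-do item (placeholder heuristics)."""
    vague = len(task.split()) < 4 or "figure out" in task.lower()
    return {
        "goal": task,
        "context_needed": ["related tasks", "previous work"] if vague else [],
        "autonomous": not vague,   # vague tasks get flagged for human input
    }
```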
2. Agent Matching
Compare each task against your Agency of Agents:
- What domain does this live in?
- What's the output format? (GitHub issue, status report, PRD)
- Which agent has the right capabilities?
3. Context Assembly
- Identify what data the agent needs (last 5 client meeting transcripts, current sprint issues, etc.)
- Pull that context from your structured data store
- Show you what itâs about to send for approval
- Only then generate the prompt with that specific context
So if you're running the Meeting → Tickets agent, it doesn't get access to your entire Google Drive. It gets the specific meeting doc you pointed at, plus maybe the current sprint board for context. Or it asks you for the right data.
4. Execution Coordination
For tasks that can run autonomously:
- Agent runs with the approved context
- Results get added to your review queue
- You approve/reject/edit in bulk
For tasks that need your input:
- System flags them and explains why
- You provide the missing context
- Agent runs with your additions
For tasks you may have missed:
- System checks your monthly goals and compares them to your to-do's
- Suggests to-do's you may have missed
- You approve/reject/edit
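The missed-task check is essentially a diff between monthly goals and the current list. Shared-keyword coverage below is a crude stand-in for real semantic matching:

```python
def missing_tasks(goals: list[str], todos: list[str]) -> list[str]:
    """Suggest goals that no current to-do appears to cover."""
    def covers(goal: str, todo: str) -> bool:
        # crude: any shared word counts as coverage
        return bool(set(goal.lower().split()) & set(todo.lower().split()))
    return [g for g in goals if not any(covers(g, t) for t in todos)]
```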
Current Status
Weekend project territory. We built a vibe-coded version with Figma Make and Claude on our tn-bootstrapper.
- Connects to Google Tasks API
- Allows for upload of prompts / agents
- Suggests agents with confidence scores
- Learns from our corrections when we override its suggestions
The question isn't whether it's technically possible - it clearly is. The questions are:
- Will we actually use it, or is this solving a problem we don't really have?
- Does the matching overhead save time, or does it just add another step?
- How good does the matching need to be before we trust it enough to let things run autonomously?
Where We Could Go
The interface we want is proactive notifications that understand our workflows:
Friday at 4pm: "It's Friday. Bet you didn't send client roundups. I prepped them and they're ready for your review here."
Tuesday morning: "It's Tuesday and you have all your client check-ins. Here are the Friday summaries, and here's what to monitor from your task list this week."
Monday afternoon: "You don't have tasks queued in GitHub for Cue for next week. Do you have a PRD? Want to build one?"
It's not just "which agent should run on this task" - it's "based on your calendar, your recurring responsibilities, and your current task list, here's what needs to happen and I already started it for you."
We're sure this exists somewhere, but if it could plug into Agency of Agents instead of generic AI, it would actually understand how we work, not just what we're working on.
What We're Learning
The Interface Is More Important Than The Matching
Getting a perfect agent match doesn't matter if we have to manually trigger it every time. The magic is in "it's Friday, you always do this on Friday, I already did it."
Trust Comes From Explanations
"I picked the Status Report agent because it's Friday and you have 3 client check-ins next week" is way more trustworthy than silent automation.
Context Permissions Are The Hard Part
We don't want to give agents unfettered access to everything. But we also don't want to manually approve every document reference. The right model is probably: "This agent wants access to the last 5 meeting notes from Project X. Approve once and remember for next time?"
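That "approve once and remember" model is, at its core, a small cache keyed by agent and scope. A sketch, where `ask` stands in for the approval prompt:

```python
class ContextPermissions:
    """Remembered approvals, keyed by (agent, scope)."""

    def __init__(self, ask):
        self.ask = ask                         # prompts the human, returns bool
        self.grants: dict[tuple[str, str], bool] = {}

    def allowed(self, agent: str, scope: str) -> bool:
        key = (agent, scope)
        if key not in self.grants:             # only prompt the first time
            self.grants[key] = self.ask(agent, scope)
        return self.grants[key]

    def revoke(self, agent: str, scope: str) -> None:
        self.grants.pop((agent, scope), None)  # re-prompt on next access
```

Revocation just forgets the grant, which answers the "how do we revoke access later" question cheaply: the next access re-prompts.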
Batching Similar Work Actually Saves Time
If we could do all our PRDs in one focused 2-hour block instead of scattering them across the week, that alone would be worth building this.
Open Questions
- What's the minimum viable version?
- What actually saves us the most time?
- What do we remember to use?
- How does it learn individual patterns?
- If we override a suggestion, does it update the model?
- If we always run Status Reports on Friday at 4pm, does it start suggesting that automatically?
- How does it avoid learning bad habits?
- What's the context approval UX?
- "This agent wants access to [these 5 docs]. Approve?"
- "Remember this approval for similar tasks?"
- How do we revoke access later, or ensure we don't have data leaks?
- Is this actually better than just using ChatGPT?
- The whole point is that specialized agents know our workflow
- But if the overhead of matching/approving/batching is too high…
- Maybe the real win is just "run these recurring tasks automatically" and forget the smart matching?
Why This Matters
Right now we have a few specialized agents in the Agency of Agents. We can remember what each one does and usually remember to use them. But we often forget, because using an agent takes as much time as doing the task ourselves. The cognitive load of "should we use an agent for this?" kills the momentum. If we have to stop, think about which agent, go find it, set it up, and run it - we'll just do the task manually.
But if the system just says "Hey, I can do that for you" in a tool we're already using, we just might start to automate ourselves.
Related Research
- Agency of Agents - The specialized agents this system coordinates
- Tour of Duty in the AI Era - How role-based agents fit into career development
- Distribution vs Depth - Using agents to extract patterns from task assignments