Curated AI knowledge for enterprise leaders.
TL;DR
Paste the problem directly—AI works better when you skip narration and just show the issue.
Mary's Lens
Reframes ‘lazy’ as intelligent minimalism—great for fast-moving teams.
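The "paste the problem directly" advice can be sketched as a before/after prompt. A minimal illustration — the error text, file name, and wording are all hypothetical, not from the original post:

```python
# Hypothetical illustration of "paste the problem directly": skip the
# narration and hand the model the raw error plus the minimal question.
verbose_prompt = (
    "Hi! So I was working on my script yesterday and after a lot of "
    "debugging I think something is wrong somewhere. Can you help?"
)

direct_prompt = (
    "KeyError: 'tax_rate' on line 42 of provision.py\n"
    "line 42: rate = config['tax_rate']\n"
    "config keys: ['entity', 'rate_pct']\n"
    "Fix?"
)

# The direct prompt gives the model the actual issue to work with,
# instead of asking it to guess from a story about the issue.
print(direct_prompt)
```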
TL;DR
A side-by-side snapshot of large language model (LLM) strengths and weaknesses—what they do well vs. where they consistently fail.
Mary's Lens
Perfect primer for building trust and healthy skepticism with AI systems. Can be used to guide team training on what tasks are “AI-ready” vs. still require human oversight. Pairs well with prompt QA frameworks and multi-agent workflows that compensate for these weaknesses.
TL;DR
Startups often overestimate the defensibility of data moats—simply accumulating more data doesn’t automatically create a lasting advantage.
Mary's Lens
Sharpens strategic thinking for AI-powered firms. Highlights why data scale ≠ defensibility—critical for founders building with user data. Emphasizes why user trust, unique workflows, or better outcomes are stickier than large datasets alone.
TL;DR
This post breaks down how agent memory works in LLM systems. It distinguishes between episodic, semantic, and procedural long-term memory—plus how they’re assembled into short-term (working) memory used in prompts.
Mary's Lens
For enterprise tax/finance teams building copilots or internal AI tools, this post gives language and structure for implementing persistent agent memory—so you can maintain continuity across tasks without repeating instructions or reloading data every time.
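The memory split the post describes can be sketched in a few lines. This is a hedged illustration, not the post's implementation — the class, field names, and the tax-team sample data are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Sketch of the three long-term memory types, plus assembly into
    short-term (working) memory used in a prompt."""
    episodic: list[str] = field(default_factory=list)       # what happened (past interactions)
    semantic: dict[str, str] = field(default_factory=dict)  # facts about the user/domain
    procedural: list[str] = field(default_factory=list)     # standing how-to instructions

    def assemble_working_memory(self, task: str, recent_k: int = 3) -> str:
        """Pull relevant long-term items into the prompt context, so the user
        doesn't repeat instructions or reload data every time."""
        facts = "\n".join(f"- {k}: {v}" for k, v in self.semantic.items())
        rules = "\n".join(f"- {r}" for r in self.procedural)
        recent = "\n".join(f"- {e}" for e in self.episodic[-recent_k:])
        return (
            f"Known facts:\n{facts}\n\n"
            f"Standing instructions:\n{rules}\n\n"
            f"Recent history:\n{recent}\n\n"
            f"Task: {task}"
        )

mem = AgentMemory()
mem.semantic["fiscal_year_end"] = "Dec 31"
mem.procedural.append("Always cite the workpaper reference.")
mem.episodic.append("Drafted Q3 provision summary for review.")
prompt = mem.assemble_working_memory("Draft the Q4 provision summary.")
print(prompt)
```

In a real copilot the selection step would be retrieval (e.g. embedding search) rather than "last k items," but the continuity-across-tasks idea is the same.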
TL;DR
AI works best when it's wrapped inside process clarity. If you want better outputs, better automation, and team alignment—start writing your workflows down.
Mary's Lens
Clarity = leverage. If you’ve ever said “AI didn’t do it right,” odds are the instructions were unclear. Don’t just prompt better—operationalize what good looks like. That’s how AI becomes a multiplier.
TL;DR
ChatGPT isn’t collecting likes—it’s collecting your soul. Unlike social media, which captures surface behavior, AI agents are building high-res models of your thinking, emotion, and long-term evolution.
Mary's Lens
Enterprise leaders should think of agent memory as the new CRM—except it’s for people, not pipelines. This changes how we onboard, collaborate, and delegate to AI. Build your memory strategy now, not later.
TL;DR
AI memory means large models can simulate you—perhaps more accurately than you can. As LLMs remember your conversations, they don’t just assist—they become an extension of your thought process.
Mary's Lens
In tax, finance, or marketing—if your agent “remembers” how you think, prioritize memory governance. This isn’t just automation anymore—it’s simulation. And that shifts the risk, responsibility, and opportunity.
TL;DR
AI is taking over coordination, translation, and alignment work. That’s squeezing out the “glue” roles—PMs, analysts, even controllers—unless they evolve into high-leverage operators.
Mary's Lens
This is already happening in finance. The role of “finance business partner” or “report consolidator” is shrinking. AI can pull the data, write the summary, flag the variance. What matters now is being able to shape the system, sense what’s missing, and drive action across teams. The future isn’t more operators—it’s fewer, sharper ones with range and conviction.
TL;DR
AI doesn’t need you to do the technical work—it needs you to tell it what to do. That requires breadth, not depth.
Mary's Lens
Finance and tax teams were built on specialists. But AI flips the value equation. Breadth means faster ideation, better prompts, and fewer blind spots. This is how you become the one who drives the work, not just reviews it.
TL;DR
Using AI for isolated tasks isn't enough. If you're not rethinking how customers interact with your business using AI, you're falling behind.
Mary's Lens
In enterprise finance, this applies to every touchpoint: dashboards, close checklists, tax intake forms, PBC requests, vendor inquiries. AI is not just a back-end tool—it should sit between users and your systems. If it’s not guiding decisions or simplifying access to insight, you're not in transformation—you’re in maintenance.
TL;DR
Just as enterprise systems were never one-size-fits-all, AI agents won't be either. Enterprises are moving toward architectures where agents play specific roles in workflows and talk to each other via APIs or orchestration layers.
Mary's Lens
For tax and finance, this has huge implications. Think of your provision agent in OneSource needing data from your ERP, or your close checklist agent needing workflow updates from SAP. You need to start thinking in terms of AI interface points, not just applications. Teams that understand where orchestration should live (and where it shouldn’t) will be first to scale AI use safely and effectively.
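The "interface points, not applications" idea can be sketched as a minimal orchestration layer. All names here are hypothetical — the `erp_data` and `provision` agents are illustrative stand-ins, not real OneSource or SAP APIs:

```python
# Hypothetical sketch: agents never call each other directly; every request
# flows through the orchestration layer (the "AI interface point").

def erp_data_agent(request, call):
    # Stand-in for a call to the ERP (e.g. SAP); returns canned data here.
    return {"entity": request["entity"], "pretax_income": 1_000_000}

def provision_agent(request, call):
    # The provision agent asks the orchestrator for ERP data it needs,
    # rather than integrating with the ERP itself.
    data = call("erp_data", {"entity": request["entity"]})
    return {"entity": data["entity"], "tax_provision": data["pretax_income"] * 0.21}

class Orchestrator:
    """Minimal orchestration layer: a registry plus a single call interface."""
    def __init__(self):
        self.registry = {}

    def register(self, name, agent):
        self.registry[name] = agent

    def call(self, name, request):
        # Agents receive the orchestrator's call method for downstream requests.
        return self.registry[name](request, self.call)

orch = Orchestrator()
orch.register("erp_data", erp_data_agent)
orch.register("provision", provision_agent)
result = orch.call("provision", {"entity": "US-01"})
print(result)  # {'entity': 'US-01', 'tax_provision': 210000.0}
```

The design point: because agents only know the orchestrator, you can swap the ERP agent's backend, or add logging and access control at the `call` boundary, without touching the provision agent — which is where "scale AI use safely" lives.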
TL;DR
AI lowers the barrier to entry across disciplines. You don’t need to be a specialist to start—but specialists still matter to finish. It’s not job loss; it’s a role shift.
Mary's Lens
We should stop framing AI as “replacement” and start framing it as “range expansion.” This is what will empower more cross-functional collaboration and experimentation inside enterprise teams. The people who win won’t be those who know the most—they’ll be the ones willing to try, iterate, and then call in the right expertise when it matters.
TL;DR
Giving your team access to AI isn’t enough. If the old habits, meetings, and steps stay, the AI just becomes extra work. For AI to stick, something has to go.
Mary's Lens
This is the missing playbook for AI in tax and finance. You can’t introduce an AI tool for reconciliations or report drafting and then keep the same manual checkpoints. The shift doesn’t happen until you deprecate something. Kill the spreadsheet. Cancel the meeting. Remove the doc. Then let the AI own the output—and hold it accountable. That’s what unlocks real leverage.
TL;DR
Matt Pocock argues that one of the most important features for AI agents is the ability to check their own work—self-verification as a built-in capability.
Mary's Lens
For tax and finance, this is non-negotiable. AI agents drafting compliance language, summarizing rulings, or reviewing transactions must be able to ask: “Does this align with policy?” Without that layer, every output needs full human re-review—which kills the value. Self-verification will separate copilots you try from ones you trust.
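Self-verification can be sketched as a draft step plus a policy-check step that runs before anything reaches a human. A hedged illustration — the rule names, thresholds, and the stand-in draft function are assumptions, not Pocock's implementation:

```python
# Hypothetical policy rules the agent checks its own draft against.
POLICY_RULES = [
    ("no_guarantee_language", lambda text: "guaranteed" not in text.lower()),
    ("cites_authority", lambda text: "per " in text.lower() or "under " in text.lower()),
]

def draft_summary(facts: str) -> str:
    # Stand-in for an LLM call that drafts compliance language.
    return f"Under IRC Sec. 163(j), interest deduction is limited. Facts: {facts}"

def self_verify(text: str) -> list[str]:
    """Return the names of any policy rules the draft violates —
    the agent's built-in 'does this align with policy?' check."""
    return [name for name, check in POLICY_RULES if not check(text)]

draft = draft_summary("Entity has $5M of business interest expense.")
violations = self_verify(draft)
if violations:
    print("Route to human re-review, failed:", violations)
else:
    print("Passed self-verification:", draft)
```

Only drafts that fail the check get routed to full human re-review, which is what preserves the value the post is talking about.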