Context is All Your Company Needs

March 12, 2026

By Seb Boyer

AI models are now good enough for the vast majority of knowledge work. When someone tells you otherwise, it almost always means they didn't give the model the right context. The bottleneck to AI productivity is not intelligence, latency or cost. The bottleneck is context, and finding it has become the defining challenge for every company trying to put AI to work.

Most companies are sitting on vast amounts of knowledge they can't access, and these same companies haven't realized how critical their knowledge actually is. At Zeffy, the 40-person fintech startup I run, we have started to take context seriously, and I can already see it paying off, so I am sharing my perspective here.

When you think about it, you can now feed a 3,000-page book (~1M tokens) into every single one of your queries. That is both a lot and a little. It's a lot compared to your 2024 ChatGPT prompt that was a few lines long. It's nothing compared to a company's knowledge base, even for a small 40-person company. Every company has customer data, deal data, product roadmaps, decks, financial plans, PRs, ads, employee records and much more. For each of these documents, they have versions and edits and comments and notes. Then you have Slack and emails. 3,000 pages is nothing.

The problem that logically follows is “which context should this particular query get?” That is the engineering and design problem most companies are facing today. Knowledge lives in hundreds of different places and employees are clustered in dozens of different permission groups. How do you make knowledge easily accessible for agents? Knowledge work productivity feels largely constrained right now by how good your company context is. By good, I mean 1) how accessible it is and 2) how extensive it is.

Our SaaS stack was our liability

Most enterprise software was built to be the system of record, the place where data lives and stays. That made sense when humans were the only ones consuming it. Today, the data is only valuable if it is easy and efficient for AI agents to access.

We experienced this firsthand when we tried to build AI workflows on top of our existing stack. Our hiring pipeline lived in Lever. Deal context lived in our CRM. Project context lived in Notion. Each tool had its own data model, its own permissions, its own API limitations — or no API at all.

We started pulling data out of these tools and into files. Plain files: spreadsheets, markdown, structured documents. We mostly use Notion for this (for easy readability by humans). We replaced Lever entirely with a spreadsheet-plus-AI-agent system. This unlocked much more fluid communication between our hiring process and the rest of our tools: interviews, products, job descriptions and company news. We can run agents to scrape potential employee profiles and populate the spreadsheet, which was nearly impossible to do in Lever.

Files feel almost too simple to be the answer. But I think that simplicity is the point. They are understandable by both humans and AI agents: easy to share, version, edit and protect. They are the lowest-friction interface between your company's knowledge, the humans and the AI agents that need to consume it.

Files can start living in OneDrive, Google Drive or Notion. Over time, I expect most companies will shed the overhead of heavyweight SaaS tools and move to lighter-weight systems like Obsidian. I believe graph structures are theoretically ideal, but will take slightly longer to be adopted because they are further away from the current knowledge structures. In the meantime, we maintain what we call "maps" for our AI agents: regularly updated snapshots of where everything is, what each folder contains and how documents relate to each other. This allows agents to quickly find the relevant documents and load them in their context windows at the right time.
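A map like this can be as simple as a regenerated markdown index of the knowledge base. Here is a minimal sketch, assuming the knowledge base is a local folder of plain files (the folder layout in the usage note is hypothetical, not our actual structure):

```python
from pathlib import Path

def build_map(root: Path, max_depth: int = 2) -> str:
    """Walk a knowledge base and emit a markdown 'map' for agents.

    Lists folders and files down to max_depth, so an agent can skim the
    map first and decide which documents to load into its context window.
    """
    lines = [f"# Map of {root.name}", ""]

    def walk(folder: Path, depth: int) -> None:
        if depth > max_depth:
            return
        for entry in sorted(folder.iterdir()):
            indent = "  " * (depth - 1)
            if entry.is_dir():
                lines.append(f"{indent}- **{entry.name}/**")
                walk(entry, depth + 1)
            else:
                lines.append(f"{indent}- {entry.name}")

    walk(root, 1)
    return "\n".join(lines)
```

A real map would also carry the one-line summaries and cross-references mentioned above; the key design choice is regenerating it on a schedule so it never drifts too far from reality.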

We try to record everything

Making existing context accessible is only half the expedition. The other half is learning to generate much more of it.

Meeting transcripts are the obvious starting point, and by now they are pretty ubiquitous. Most companies already record every meeting: internal syncs, user interviews, hiring interviews and so on. What makes this so powerful is that it doesn't have to be well organized when it is generated; organizing it in the right place can come later, when AI agents curate your knowledge base from these raw files.
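The "capture now, organize later" pattern can be sketched as an inbox folder that raw transcripts get dumped into with minimal metadata (the field names and file layout here are illustrative assumptions, not our actual pipeline):

```python
import json
from datetime import date
from pathlib import Path

def file_raw_transcript(inbox: Path, meeting_type: str, text: str) -> Path:
    """Drop a raw transcript into an 'inbox' folder, unorganized on purpose.

    A curation agent can sweep the inbox later and file each transcript
    in the right place in the knowledge base.
    """
    inbox.mkdir(parents=True, exist_ok=True)
    stamp = date.today().isoformat()
    # Avoid clobbering a second meeting of the same type on the same day.
    n = 0
    while True:
        suffix = "" if n == 0 else f"-{n}"
        path = inbox / f"{stamp}-{meeting_type}{suffix}.json"
        if not path.exists():
            break
        n += 1
    record = {"type": meeting_type, "date": stamp, "raw_transcript": text}
    path.write_text(json.dumps(record, indent=2))
    return path
```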

Recording what people say is one thing, recording what people think is another. Think about how most companies set quarterly goals. Someone writes a doc with goals, the doc gets circulated, discussed and revised. The final version is clean and legible, but the reasoning behind each goal (why this metric and not that one, what was debated and discarded, what assumptions were made) disappears into the conversation that produced it. Six months later, when priorities shift, no one remembers why the original goals were set that way.

Our current approach is to prototype an agent that captures these micro-decisions. When someone creates a high-stakes document like quarterly objectives, a pricing change or a product pivot, the agent reviews it and asks clarifying questions: Why was this target chosen and not a higher one? What's the downside case? What would change your mind? The answers get stored alongside the document. We use scheduled jobs on Claude Cowork to send a Slack message to whoever created the “important” Notion page.
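The message itself can be as simple as a templated prompt built from the page title and a fixed question list. A sketch of what such a nudge could look like (the wording and the function name are illustrative, not our exact prompt):

```python
def clarifying_questions_message(page_title: str, author: str) -> str:
    """Build the Slack message a decision-capture agent could send to the
    author of a newly created high-stakes document.

    The answers are meant to be stored alongside the document itself.
    """
    questions = [
        "Why was this target chosen and not a higher one?",
        "What's the downside case?",
        "What would change your mind?",
    ]
    bullet_list = "\n".join(f"- {q}" for q in questions)
    return (
        f'Hi {author}, you just created "{page_title}".\n'
        f"Before the reasoning evaporates, could you answer:\n{bullet_list}"
    )
```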

This is harder than it sounds. It requires people to slow down at the exact moment they want to move fast. But I'm hopeful the compounding returns will be significant: every documented decision becomes context for the next one.

We aggressively capture external context too

Internal context is necessary but not sufficient. I think companies need to systematically capture external context like market data, competitor movements and audience signals, and make them available alongside their internal knowledge.

We started doing this through a combination of API integrations and structured web scraping. We record external data over time to build historical trends rather than relying on point-in-time snapshots. When our marketing team works on a new campaign, they can access fresh information about what matters to our audience and who to target. The context gathered for each particular target or user can then be reused seamlessly by any team member.
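Building trends instead of snapshots can be as simple as appending timestamped observations to a log file per the pattern above. A minimal sketch, assuming the observations arrive as dicts from some API or scraper (the "competitor_pricing" source name is made up for illustration):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def append_snapshot(log: Path, source: str, data: dict) -> None:
    """Append one timestamped observation to a JSONL log.

    Each external source accumulates a history instead of a single
    point-in-time snapshot, so agents can reason about trends.
    """
    record = {
        "source": source,
        "observed_at": datetime.now(timezone.utc).isoformat(),
        "data": data,
    }
    with log.open("a") as f:
        f.write(json.dumps(record) + "\n")

def history(log: Path, source: str) -> list[dict]:
    """Load every recorded observation for one source, in order."""
    if not log.exists():
        return []
    records = [json.loads(line) for line in log.read_text().splitlines()]
    return [r for r in records if r["source"] == source]
```

Plain JSONL keeps the log readable by humans and agents alike, in the same spirit as the files discussed earlier.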

We try to better understand our users, our competitors and the broader ecosystem we operate in. I am convinced that the quantity and quality of this external context make us much better at reaching, understanding and serving our customers.

Context is becoming our new moat

With raw intelligence becoming a commodity, I think the know-how built into a company's proprietary records and its employees' minds becomes one of the core differentiators (alongside durable moats like brand, scale, regulation-heavy processes and network effects).

I think we are in the earliest days of this transition. Most companies haven't even started connecting their systems of record to AI agents. My experience experimenting with agents over the past year is that investing in context is both very easy (it costs nothing and doesn't require technical genius) and very hard (it requires a deep change of habits and internal processes). I am convinced that the companies that start now, searching through their own knowledge, recording what they find and charting the territory for their agents to navigate, will compound their advantage quietly, then suddenly be impossible to catch.

It seems that in the era of abundant intelligence, good context is all most companies need, so you probably want to make sure that yours is well taken care of.