You've been asked to fund AI. You don't have a CTO. And every research report you've opened so far is written for people who already know the vocabulary.
This is the plain-language version. Ten questions a CEO, COO, or founder asks before they sign off on an AI investment, with direct answers you can take into your next vendor pitch, board meeting, or planning session.
Plan for $75,000 to $250,000 for a scoped first build covering a single, well-defined use case. The lower end buys discovery plus a working prototype that proves the use case is viable. The upper end buys discovery plus a production deployment with workflow integration, a governance baseline, and adoption tracking.
Where the money goes depends on whether you're proving the technology works or putting it into the hands of users. Gartner and IDC both place enterprise AI infrastructure spending growth in the double digits, but most of that lift is concentrated at companies that have already moved past their first build. For your first build, the most expensive mistake is skipping the $25,000 of discovery and spending $200,000 on a use case that drifts in month two.
If a vendor proposes a number outside this range without a clear explanation of scope, push back. A $40,000 quote usually means no production deployment. A $500,000 quote usually means you're funding their roadmap, not yours.
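If you like to sanity-check vendor numbers in a few lines of code, the guardrails above reduce to a simple rule. A minimal sketch in Python; the thresholds come straight from the range discussed above, while the function name and flag wording are illustrative:

```python
def budget_flag(quote: float) -> str:
    """Flag first-build quotes outside the $75k-$250k range discussed above.

    Thresholds are the rule-of-thumb range from the text, not a standard.
    """
    if quote < 75_000:
        return "below range: expect a prototype, not a production deployment"
    if quote > 250_000:
        return "above range: ask whose roadmap this funds"
    return "in range for a scoped first build"

print(budget_flag(40_000))   # below range
print(budget_flag(150_000))  # in range
print(budget_flag(500_000))  # above range
```

The point isn't the code, it's that the rule is mechanical: any quote outside the band needs a scope explanation before it needs a signature.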
Roughly 90 days for a focused build. Anything longer and scope has probably drifted from a single use case into a platform, a re-architecture, or a vague "AI strategy." All three are different projects with different price tags.
The 90-day clock starts when you have a named internal owner, an agreed-upon use case, and access to the relevant data. If any of those three pieces are missing, your timeline starts when you sort them out, not at kickoff. Most delays in first AI builds come from waiting on decisions: which dataset, which workflow, which metric to commit to.
Start with an internal pain point. Pick a workflow your team already runs that is slow, has clear inputs and outputs, and costs you measurable time or money each week. Customer-facing AI carries reputational and compliance risk that a first build isn't equipped to absorb.
Good first candidates: research and synthesis work, repetitive document processing, internal Q&A and knowledge retrieval, sales or service ticket triage. Bad first candidates: anything that touches the product UX, anything regulated, anything where a wrong answer goes directly to a customer. The pattern across the good ones: high cost of slow answers, low cost of an occasional wrong answer, and a workflow your team already understands. For agent-specific use cases, our FAQ on AI agents for business use covers the operational questions in more depth.
The honest baseline: most enterprise data isn't ready for AI on day one, and that's expected. According to Gartner, 57% of organizations report that their data isn't AI-ready. The fix is to identify the minimum data foundation your specific use case needs and get that piece right. Anything more becomes a multi-month data project that delays the build before it starts.
The four checks: do you have a single source of truth for the data the use case touches, is it accessible without seven approvals, is it labeled or structured well enough for a model to use, and do you know who owns it. If three of four are yes, you can start. If only one or two are yes, the first phase of your project is data work, and your timeline lengthens accordingly. Our guide to preparing your data for AI implementation goes deeper on each check.
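The four checks and the three-of-four rule can be written down as a one-page scorecard. A minimal sketch in Python, assuming you supply the yes/no answers yourself; the check names are shorthand for the four questions above, not a formal standard:

```python
# Illustrative data-readiness scorecard for one AI use case.
# Each answer is a plain yes/no on the checks described above.
checks = {
    "single_source_of_truth": True,    # one authoritative copy of the data
    "accessible": True,                # reachable without a long approval chain
    "structured_for_models": False,    # labeled/structured well enough to use
    "clear_owner": True,               # a named person owns the dataset
}

def readiness_verdict(checks: dict) -> str:
    """Apply the three-of-four rule from the text."""
    passed = sum(1 for answer in checks.values() if answer)
    total = len(checks)
    if passed >= 3:
        return f"{passed}/{total}: start the build"
    if passed >= 1:
        return f"{passed}/{total}: phase one is data work; extend the timeline"
    return f"0/{total}: not ready; fix the data foundation first"

print(readiness_verdict(checks))  # 3/4: start the build
```

Running the same scorecard per candidate use case also gives you a cheap way to rank which build to fund first.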
Three terms get used interchangeably, and the slippage costs companies money. A clear definition before signing anything saves a lot of pain later.
A POC answers the question "Can the technology do this?" It runs in a sandbox, not on live data, and the success criterion is technical feasibility. A pilot answers the question "Does this work in our environment, with our users, on our data?" It's live, with a controlled rollout to a defined population. Production is always-on, supported, monitored, governed, and tied to a business metric. If your contract says "pilot" but your CFO is expecting production, you have a problem before the work starts.
No. But you need two things: an internal owner with authority over data access and workflow changes, and access to technical judgment when the key choices come up.
The owner is operational. They answer questions like which data we're using, who approves a change to how a team works, and what success looks like by month six. A capable VP of Operations, Director of Product, or Chief of Staff can play this role. A CEO who plans to sign every decision personally cannot, because the work moves faster than your calendar.
Technical judgment can come from your vendor's senior architect, a fractional CTO, or a trusted engineering lead. Someone who's built systems before and is in the room for the choices you can't undo cheaply: model selection, integration architecture, security boundaries. The owner runs the work. The advisor keeps it from drifting.
A handful of pointed questions separate a serious team from a pitch deck, and the most revealing one is the one most CEOs forget to ask: walk me through the last AI engagement you stopped or paused, and why. A strong partner can name one. They'll describe what surfaced, what they recommended, and what the client decided. A weak partner will say it hasn't happened, pivot to a success story, or claim their methodology prevents it. The first answer means they recognize trouble early. The second means you'll be paying to learn that lesson with them.
AI governance is the answer to three questions: who is allowed to use this AI system, with what data, and what happens when it gets something wrong? It's a set of rules plus a person whose job it is to enforce them. The most common misconception is treating governance as a piece of software you buy or a one-time review you pass before launch.
AI governance is a continuous operating practice you fund every quarter. The data shifts, the rules shift, and failure modes you didn't anticipate show up after the system is live. A working governance baseline names a human owner, defines escalation paths for incidents, and includes a review cadence. Quarterly is the floor for a first build.
Three concrete markers. One production use case running with live users. One business metric that has measurably moved. A second use case scoped, prioritized, and ready to start.
If you have one of the three at month six, you're behind but salvageable. If you have none, the engagement needs a re-scope, a different vendor, or a different internal owner before more money goes in. Stanford HAI's 2026 AI Index Report found that 88 percent of organizations now use AI in at least one business function, but fewer than 10 percent have fully scaled it in any single function. The ones still talking about strategy at month nine usually never ship.
Funding the tool before funding the adoption.
Tools don't change behavior; people do. A working AI system with 8 percent adoption is a $200,000 line item on your P&L that doesn't move the business. Plan to spend 30 to 40 percent of your build cost on adoption: a named internal owner with calendar time blocked off, training and documentation, and a weekly adoption metric tracked and reported. At minimum, one champion per fifty users. Companies that skip this almost always pay twice: once to build, then again to relaunch when the first launch quietly fails.
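The adoption math above is simple enough to run on the back of an envelope. A minimal sketch in Python using the two rules of thumb from the text: 30 to 40 percent of build cost for adoption, and at least one champion per fifty users. The function name and output shape are illustrative:

```python
import math

def adoption_plan(build_cost: float, user_count: int) -> dict:
    """Back-of-envelope adoption budget from the rules of thumb above:
    30-40% of build cost for adoption, one champion per 50 users (minimum one).
    """
    return {
        "adoption_budget_low": round(build_cost * 0.30),
        "adoption_budget_high": round(build_cost * 0.40),
        "champions_needed": max(1, math.ceil(user_count / 50)),
    }

plan = adoption_plan(build_cost=200_000, user_count=120)
print(plan)
# {'adoption_budget_low': 60000, 'adoption_budget_high': 80000, 'champions_needed': 3}
```

On a $200,000 build with 120 users, that's $60,000 to $80,000 for adoption and three champions. If those numbers aren't in the budget, the adoption line is the first thing to fix.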
The clause to insist on in your first AI contract: a specific production date, a specific business metric that has to move by that date, and a defined consequence if it doesn't. Most first AI contracts measure effort (hours, sprints, deliverables) when they should be measuring outcomes (whether the system shipped, whether the metric moved, what happens if it didn't). Without an outcome clause, the project runs until the budget does, and you never have a clean moment to decide whether to keep going.