AI is already changing how people work.

You can see it first in pretty practical ways. Advisors are using AI to get ready for meetings faster, summarize notes, draft follow-ups, and cut down on some of the manual work that slows down the day.1 But from where I sit, the bigger question isn’t whether AI has arrived. It clearly has. The real question is whether a firm has the right foundation to use it well.

That foundation isn’t just the AI tool itself. It’s the data behind it, the way that data is organized, the workflows it needs to support, and the controls around what the system can see and do. If a firm’s data is fragmented, its workflows are disconnected, or teams are still manually stitching information together across systems, AI isn’t going to solve that problem on its own. More often, it’s going to make the gaps easier to see.

That’s why AI readiness is really a data and workflow question before it becomes a tool question.


Start With the Questions You Want to Answer

One of the easiest mistakes firms can make is starting with the technology instead of the use case.

The first conversation shouldn’t be, “What can this model do?” It should be, “What are we actually trying to answer?”

If you know the kinds of questions your advisors or firm leaders need answered, you have a much better starting point for thinking about AI. You can work backward from there and ask what data would be required to answer those questions if a person were doing it manually.

For an advisor, that question might be pretty simple: I’ve got a client meeting in 10 minutes. What’s going on with this household? Normally, answering that might mean checking the CRM, pulling portfolio data, reviewing notes, and looking across a few other systems to get ready. AI can help by doing that work in parallel and pulling it together into something useful in seconds.
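The "pull it together in parallel" idea can be sketched in a few lines. This is a hypothetical illustration, not a real product integration: the `fetch_*` functions are stand-ins for CRM, portfolio, and notes lookups, and `prepare_meeting_brief` simply runs them concurrently and merges the results.

```python
# Hypothetical sketch: gather household context from several systems in
# parallel instead of checking each one by hand. The fetch_* functions
# are stand-ins for real CRM/portfolio/notes integrations.
from concurrent.futures import ThreadPoolExecutor

def fetch_crm_profile(household_id):
    return {"household": household_id, "last_meeting": "2024-05-02"}

def fetch_portfolio_summary(household_id):
    return {"household": household_id, "ytd_return_pct": 6.4}

def fetch_recent_notes(household_id):
    return ["Discussed college savings", "Wants quarterly check-ins"]

def prepare_meeting_brief(household_id):
    """Run all lookups concurrently and merge them into one brief."""
    with ThreadPoolExecutor() as pool:
        crm = pool.submit(fetch_crm_profile, household_id)
        portfolio = pool.submit(fetch_portfolio_summary, household_id)
        notes = pool.submit(fetch_recent_notes, household_id)
        return {
            "profile": crm.result(),
            "portfolio": portfolio.result(),
            "notes": notes.result(),
        }

brief = prepare_meeting_brief("HH-1042")
```

The point of the sketch is the shape of the workflow, not the stand-in data: three slow, separate lookups become one request that returns a single, usable brief.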

At the firm level, the questions can shift. Which advisors are meeting most frequently with clients? How does that line up with client satisfaction or retention? Where are there patterns in service activity, engagement, or performance that leadership should understand better?

Those are different questions, but the same principle applies. Start with the question. Then determine what data is needed to answer it well.


AI Is Only as Useful as the Data Behind It

If the data feeding an AI system is incomplete, poorly understood, inconsistently defined, or disconnected across systems, the answers you get back will be weaker. Maybe not every time, but still often enough that trust will start to break down.

That means the real work of making AI effective starts well before anyone writes a prompt.

You need to know what your data is, where it comes from, what it means, and what the source of truth is for a given answer. You also need the business context around that data. It's not enough to know that a field exists. You need to understand how that field is used, what normal looks like, when a value is out of bounds, and how it should be interpreted inside the workflow.
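One way to make that concrete is a minimal data-catalog entry. This is a sketch under assumptions (the field name, source system, and bounds are invented for illustration): each entry records not just that a field exists, but what it means, which system owns it, and what "normal" looks like, so an out-of-bounds value can be flagged before it misleads anyone.

```python
# Hypothetical sketch of a minimal data-catalog entry: captures the
# business context around a field, not just its name.
from dataclasses import dataclass

@dataclass
class FieldDefinition:
    name: str
    source_of_truth: str  # which system owns this value
    meaning: str          # business context, not just a label
    min_value: float
    max_value: float

    def is_in_bounds(self, value):
        """Flag values that fall outside what 'normal' looks like."""
        return self.min_value <= value <= self.max_value

fee_rate = FieldDefinition(
    name="advisory_fee_rate",
    source_of_truth="billing_system",
    meaning="Annual advisory fee as a fraction of AUM",
    min_value=0.0,
    max_value=0.03,  # assumption: fees above 3% are treated as suspect
)

ok = fee_rate.is_in_bounds(0.0095)  # a typical ~1% fee
bad = fee_rate.is_in_bounds(0.95)   # a mis-keyed 95% should be flagged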

That’s why data cataloging and data understanding matter so much. If a firm can’t clearly describe its own data in a structured way, it becomes much harder to make AI reliably useful on top of it. Data governance and security are must-haves for firms that want to get the most impact out of AI.2

This doesn’t mean firms need to have everything perfectly cleaned up before they can get started. In many cases, you can get meaningful short-term value from the data you already have, as long as you understand it well enough and apply it carefully. But there’s also a longer tail of more advanced and more valuable use cases that may require firms to rethink how data is organized, represented, and made available over time.

So the goal isn’t “feed everything into AI and hope for the best.” The goal is to enable the system with the right data for the right job.


Connected Workflows Matter More Than Flashy Features

A lot of AI conversations focus on what a tool can generate. That’s understandable, because the output is what people see first. But in practice, one of the biggest differences between a helpful AI implementation and a frustrating one is what happens around the output. If an advisor gets a summary from one tool, then has to copy it into another system, update a third by hand, and manually piece together the next step, the firm may have AI in the workflow without getting much leverage from it.

That’s why I think connected workflows matter more than flashy features.

The real value comes when AI can sit on top of a firm's integrated data and support the next action, not just generate a response. That might be preparing an advisor for a conversation faster, helping leadership spot patterns across the business, or even producing a visualization on demand to help someone understand the answer more clearly. Whatever the use case, the point is the same: the strength of AI increases when the workflow around it is stronger.
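The difference between "generate a response" and "support the next action" can be sketched in a few lines. Everything here is a hypothetical stand-in: `summarize_meeting` represents a model call, and `crm_tasks` represents a downstream system, so the output flows into the next step instead of being copied by hand.

```python
# Hypothetical sketch: the AI output drives the next action directly
# (logging a CRM task) instead of requiring manual re-entry.
crm_tasks = []

def summarize_meeting(notes):
    # Stand-in for a model call; returns a summary plus a suggested action.
    return {
        "summary": f"{len(notes)} topics discussed",
        "next_step": "Send rebalancing proposal",
    }

def run_workflow(notes):
    result = summarize_meeting(notes)
    # The workflow, not the advisor, carries the output forward.
    crm_tasks.append({"task": result["next_step"], "context": result["summary"]})
    return result

run_workflow(["529 plan", "rebalancing", "RMD timing"])
```

The generation step is the least interesting line here; the leverage comes from the append, where the output lands in the system of record without anyone re-keying it.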


Don't Let Governance Be an Afterthought

Another thing firms need to think seriously about is governance. This isn’t just about whether a system is “closed” or whether a firm is using a public model — it’s a lot broader than that. If you’re going to put AI on top of firm data, you need to understand what that data is, how access is controlled, and what the system is allowed to see and query.

Access control has to be built in at a very granular level. In practice, that can mean field-level security, row-level security, advisor-level restrictions, and clear rules around which data points and datasets are even available to the system in the first place.
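A minimal sketch of what that layering looks like, with invented records and an invented policy: rows are filtered to the advisor's own book (row-level security), then restricted fields are dropped (field-level security), before anything reaches the AI system.

```python
# Hypothetical sketch of granular access control applied before data
# ever reaches the model. Records and policy are illustrative.
RECORDS = [
    {"advisor": "ali", "household": "HH-1", "aum": 1_200_000, "ssn": "xxx"},
    {"advisor": "bea", "household": "HH-2", "aum": 800_000, "ssn": "yyy"},
]
ALLOWED_FIELDS = {"advisor", "household", "aum"}  # ssn is never exposed

def visible_to(advisor_id):
    """Apply row-level filtering, then field-level filtering."""
    rows = [r for r in RECORDS if r["advisor"] == advisor_id]
    return [{k: v for k, v in r.items() if k in ALLOWED_FIELDS} for r in rows]

ali_view = visible_to("ali")
```

The ordering matters: the system never holds the restricted rows or fields at all, rather than holding them and promising not to mention them.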

It also means auditability matters. Firms should be able to understand what the system queried, when it happened, and what came back. If you can’t reconstruct that, it becomes much harder to govern and much harder to trust.
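At its simplest, that kind of auditability is just a record written around every query. This sketch is illustrative (an in-memory log and a stand-in query runner, not a real system): it captures who asked, what was queried, when, and what came back, which is enough to reconstruct the interaction later.

```python
# Hypothetical sketch of an audit trail: every query is wrapped so that
# who/what/when/result are recorded before the answer is returned.
from datetime import datetime, timezone

AUDIT_LOG = []

def audited_query(user, query, run):
    """Run a query and append an audit record (illustrative, in-memory)."""
    result = run(query)
    AUDIT_LOG.append({
        "user": user,
        "query": query,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "result_count": len(result),
    })
    return result

rows = audited_query(
    "advisor_17",
    "households_due_for_review",
    run=lambda q: ["HH-1", "HH-9"],  # stand-in for the real query engine
)
```

In production this would be durable storage rather than a list, but the principle is the same: if the record isn't written as part of the query path, it won't exist when you need it.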

That’s not just a best practice issue. FINRA says the use of generative AI can implicate rules around supervision, communications, recordkeeping, and fair dealing, and points firms toward formal review and approval processes, governance frameworks, ongoing monitoring, and storing prompt and output logs where appropriate.3 The SEC’s 2026 exam priorities also say examiners will review firms’ policies and procedures, internal controls, oversight of third-party vendors, and governance practices in connection with AI-related risks.4

That’s why AI governance has to be part of the foundation, not an afterthought at the end of the conversation.


AI Should Be a Force Multiplier

In software engineering, AI is already changing the way many of us work by making us faster and helping us move through repeatable tasks more efficiently. I think the same will be true in wealth management.

Now, I don't think that means every firm needs to adopt every new capability immediately. But I do think firms need to understand the implications of not learning how to use these tools effectively. For some, the impact may be modest. For others, it may mean slower growth, slower service, or a harder time keeping up with firms that are using AI to create more capacity.

The key point is that AI doesn’t replace expertise; it multiplies it.

That’s especially true for common, repeatable questions that take a lot of time to answer today. AI is very good at understanding what a person is trying to achieve, translating that into the right query or set of queries, and then synthesizing the result into something a human can actually use.

Used the right way, it can deliver faster understanding, not just faster output.


What Firms Should Test Right Now

If a firm wants to think more clearly about AI readiness, here are the questions I’d start with:

  1. What questions are we trying to answer?
  2. What workflows are slowing our people down today?
  3. What data would we need if we were answering those questions manually?
  4. Do we understand that data well enough to trust the output?
  5. Are access controls and audit mechanisms in place?
  6. Can the AI operate inside the workflow, or are people still doing the work of connecting everything by hand?

Those questions are the ones that really matter, even if they aren't flashy.

Ultimately, whether or not a firm is AI-ready has very little to do with simply having access to a new tool. It’s much more about whether the firm has the appropriate data foundation, workflow design, and governance discipline to make that tool useful. If not, AI implementation may wind up like a lot of other technology tools — heavy on the promises, light on the delivery. But when those things are in place, firms will find AI can deliver actual, impactful business value.

Is Your Tech Stack Ready for AI?

Before you add another AI tool, make sure your firm is prepared to support it. See what advisors and firm leaders should evaluate across data, integration, workflows, and oversight.