At this point, it isn’t news: AI is here, and it will change how we work.
The headlines are breathless, recounting the promise of AI and the productivity gains it can bring to your everyday work. But AI has a downside: it is still a largely untested technology, and no one (not even the experts) can say how it will behave in every situation.
There have been several high-profile examples of AI causing trouble for professionals who used the technology without fully understanding it.
There’s the attorney who turned to the generative AI tool ChatGPT for help drafting a legal brief and was shocked to learn that ChatGPT had completely fabricated six previous cases it suggested for the document — complete with bogus citations. When the lawyer tried to double-check his work, asking ChatGPT if the cases were real, it incorrectly assured him they were.
Then there are the Samsung employees who fed confidential information into ChatGPT, not realizing that the platform uses the information people share in prompts to train future versions of the tool.
This is why all leaders — particularly those in highly regulated industries like financial services — should pause and consider all angles before implementing AI technology.
Here we’ll offer tips for staying on top of compliance best practices regarding AI.
Educate Your Team About AI
Many of these corporate AI horror stories ultimately boil down to employees getting swept up in the AI buzz and acting without full knowledge of the technology’s limitations and risks.
It’s up to you to ensure every team member understands the basics. Take time to teach your team about the various types of AI on the market — from generative text AI like ChatGPT to AI solutions that help handle, process, and categorize large datasets — and how they work.
Make sure your team understands that AI is not infallible. For example, large language models like ChatGPT are prone to “hallucinations,” or fabricating facts, statistics, or data (as in the case of the attorney and the fake citations). Communicate to your team that you expect them to double-check and verify any work done by AI — the technology should not be allowed to operate without human oversight.
Financial professionals must be especially focused on the risks of granting AI access to confidential information. Teach your team how popular AI tools store, use, and interact with data — and remind them never to input sensitive information into tools like ChatGPT.
Gather Leadership To Define Your Firm’s Stance on AI
Even after providing your team with general information about how AI works, it’s vital that you set company policy for using the technology.
Not all AI is created equal; ChatGPT is a new and sophisticated AI offering, but some tools we use daily have relied on AI for years. For example, some email spam filters rely, in part, on AI to filter out junk.
So the types of AI solutions you're comfortable giving your team access to will vary, and it's essential to gather the leadership team to discuss your firm's approach to AI and its appetite for risk.
Give legal, compliance, and your in-house technical team (your CIO or head of IT) a seat at the table. Consider consulting outside experts if your team is lean or lacks the AI-specific expertise to evaluate the issue from all angles.
Sometimes, it's best to start with strict limitations and loosen them slowly. A blanket ban on the technology gives your leaders time to educate themselves and to watch for changes in the technology, the regulatory landscape, and industry precedent. You can always move toward more liberal policies later, but you can't undo an error or compliance violation that arises from adopting a technology too early.
Communicate Your AI Policy To Your Team
Once your leadership team knows where it stands, draft clear guidelines for the rest of your organization. Outline the dos and don’ts, and include concrete examples.
Don't stop at writing guidelines; make it easy for your team to follow them. Distribute the AI policy widely, and let your team know where to find it.
Consider hosting Q&As or lunch-and-learn sessions to socialize the new guidelines and give everyone time to talk through questions or concerns.
You can also build steps into everyday workflows to ensure the new policies and procedures are followed. For example, suppose you set a rule that AI may not be used to draft client outreach emails. You might add a step to your internal compliance approval workflow asking compliance officers to run any text they review through a tool that helps identify AI-generated writing.
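To make the idea concrete, here is a minimal sketch of what such a workflow step might look like in code. It assumes a hypothetical detector function standing in for whatever AI-detection tool your firm adopts; the names, threshold, and scoring are illustrative, not a real product's API.

```python
# Hypothetical sketch of an AI-usage checkpoint in a compliance review
# workflow. `looks_ai_generated` is a placeholder for whatever detection
# tool your firm chooses; a real implementation would call that tool's API.

def looks_ai_generated(text: str) -> float:
    """Placeholder scorer. Returns a probability-like score in [0, 1].

    Here it always returns 0.0 so the sketch runs on its own; a real
    version would delegate to the firm's chosen detection tool.
    """
    return 0.0

def review_outreach_email(text: str, threshold: float = 0.5) -> str:
    """Route a client outreach email based on the detector's score.

    Emails scoring at or above the threshold are held for manual review;
    the rest proceed through the normal approval workflow.
    """
    score = looks_ai_generated(text)
    if score >= threshold:
        return "hold-for-manual-review"
    return "approved"
```

The point of the sketch is the routing step itself: the detector's verdict never auto-rejects anything, it only escalates questionable text to a human reviewer, consistent with keeping people in the oversight loop.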
Keep Your AI Siloed
If you decide to allow your team to use AI, ensure its usage is controlled and siloed. State in your policy that AI should not be "let loose" within your systems, then create guardrails and internal controls to ensure those expectations are met.
One of the most significant risks of AI tools is that they are not entirely predictable. Even the experts who build and train AI solutions are sometimes surprised by what they do. Giving such a tool unfettered access to your entire tech stack (and all your sensitive data) presents a significant compliance risk.
Instead, define specific, narrow tasks you’ll allow AI to undertake and keep those items (and the AI solution itself) siloed from the rest of your tech.
You'll also benefit from building AI-specific controls into your compliance workflow. Generally speaking, compliance controls help you monitor for risks or missteps; defining AI-focused controls and assigning the related monitoring tasks to specific colleagues will help you spot unexpected or concerning behavior from your AI tools.
Stay Alert to AI-Related Regulation
AI is a rapidly changing field — things that were true six months ago are no longer so, and it will behoove your team to watch the space closely.
In addition to scanning the horizon for updates, risks, challenges, and opportunities in AI technology itself, you’ll also want to keep tabs on any AI-related regulation.
Lawmakers, regulators, and government officials are all monitoring AI news, and there is a rising chorus of individuals and organizations calling for regulatory action. When it comes, you’ll want to be ready.
This starts with educating yourself. Read up on AI technology and government intervention, and consider setting relevant Google News alerts so you receive notifications when a new headline appears.
Then, be prepared to act. If you’ve already implemented AI within your firm (in any capacity), and new regulation restricts it, you're responsible for adjusting your approach to comply with new laws or guidelines.
Keep in mind that regulation could come from any number of sources. The federal or state government might pass a law that impacts AI usage in business, generally, while the SEC might offer specific guidance on AI’s use at financial advisory firms.
The Buck Stops with You
Ultimately, you and your (human) team are responsible for maintaining compliance at your firm.
If AI introduces compliance risks or errors into your system, it’s your job to find and address them. Your team has a fiduciary duty to your clients, and it’s not one you can share with or pass off to an AI bot.
There's much to be excited about when it comes to AI. But, as with any foray into the unknown, the watchword should be caution. Instead of racing ahead, take time to consult experts, get the legal and compliance guidance you need, and (if appropriate) start with a limited scope. AI isn't going anywhere, so there's no need to rush, and there's significant compliance upside to taking a measured approach with any new technology.