Orion Advisor Technology aims to start rolling out its ChatGPT-embedded software to advisors on its Redtail Speak communications texting platform within 60 days, Brian McLaughlin, president of Orion Advisor Technology, tells ThinkAdvisor in an interview.
That would make Orion the first financial services firm to integrate this groundbreaking, disruptive technology for advisor use.
“I think we’ve finally reached a tipping point with assistive technology for advisors,” McLaughlin says.
In the coming months, Redtail Speak’s thousands of customers will have access to the ChatGPT integration, he notes.
Will ChatGPT capability become an industry trend? Absolutely, according to McLaughlin, who sold his firm, Redtail Technology, to Orion last year.
Other areas of investment management will be “popping up” with ChatGPT as well, he says.
“The world is starting to see the benefits of [artificial intelligence]. It’s becoming more mainstream — not just in financial services but everywhere.”
The Redtail Speak-ChatGPT integration generates automatic prompts and responses to client questions.
But the advisor controls the process and can vet the content before transmission, an important safeguard since AI is prone to making mistakes.
In the interview, McLaughlin discusses the benefits that ChatGPT will bring to financial advisors, such as time savings and greater efficiency.
The embedded technology is cost-free to advisors, and using it requires no training.
ChatGPT, available at no cost to everyone, was introduced last November by OpenAI, a firm founded by Sam Altman, Elon Musk and other investors.
Orion showed a prototype of its integrated applications for Redtail Speak at the firm’s National Ascent Conference in February.
No advisors have used it as yet, McLaughlin says.
“Now we need to see it in practical application by thousands of people to know the [actual] value of it,” he says in the interview.
Future ChatGPT features that Orion intends to integrate into Redtail Speak include expanding content from a few rough paragraphs the advisor composes, and changing the tone of a message so it conveys more positivity; that is, making it “more human-friendly,” McLaughlin explains.
Meanwhile, he foresees compliance challenges concerning ChatGPT, and AI in general, centered largely on one question: Who wrote this, a robot or a human?
ThinkAdvisor interviewed McLaughlin on March 8. He was speaking by phone from Omaha, Nebraska, where Orion is based.
He says the firm is already using ChatGPT internally, mainly for programming, and that it is “super useful.”
Here are excerpts from our interview:
THINKADVISOR: Why did you integrate ChatGPT into your Redtail Speak communications platform?
BRIAN McLAUGHLIN: We think it’s a huge value for advisors to be able to utilize this type of technology to make their lives easier and more efficient. It will let them have more time.
Now we need to see it in real use, in practical application by thousands of people to know the [actual] value of it.
It’s an aid, like an assistive technology. We let the advisor control the message.
When will your advisor customers have it?
In the next 60 days. We haven’t given it to advisors to try yet, but a lot of people — executives, design teams, product teams, engineer teams — who have played with or used it think it’s life-changing for them.
So you’ve employed it at your firm?
Internally, we already use it pretty extensively for programming. It’s super useful.
What are some specific ways it can help advisors?
One, it can give them automatic prompts. We can load in the last couple of messages from a client, and the AI will automatically prompt the advisor with a potential reply. So it uses that data to generate a response.
Say a client is asking questions about their financial plan. AI can automatically suggest a response. But the advisor is able to [first] look at it and decide whether they want to fire that response back to the client.
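The draft-and-vet flow McLaughlin describes can be sketched roughly as follows. This is a hypothetical illustration, not Orion's implementation: it assumes an OpenAI-style chat-completions API, and the function and variable names are invented for the example.

```python
# Hypothetical sketch of the draft-reply flow: package the last few client
# messages into a chat request whose suggested reply the advisor reviews
# before anything is sent. Names here are illustrative, not Orion's code.

def build_draft_request(client_messages, model="gpt-3.5-turbo"):
    """Build a chat-completion request from recent client messages.

    The model's reply is a *draft* for the advisor to approve or edit,
    never an auto-sent message.
    """
    system = (
        "You are drafting a reply for a financial advisor. "
        "The advisor will review and edit the draft before it is sent."
    )
    messages = [{"role": "system", "content": system}]
    # Include only the last few client messages as context.
    messages += [{"role": "user", "content": m} for m in client_messages[-3:]]
    return {"model": model, "messages": messages}

# The request could then be sent with the OpenAI Python client, e.g.:
#   from openai import OpenAI
#   draft = OpenAI().chat.completions.create(**build_draft_request(history))
# and the resulting draft shown to the advisor for approval.
```

The key design point, per the interview, is that the model only proposes; the advisor remains the gatekeeper who decides whether the response goes out.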
Please elaborate on the main benefits for the advisors.
Saving time and removing the complexity of a conversation. Often, it’s about rewriting certain words or phrases or reorganizing the message to be more effective.
What capabilities will you be adding?
We’re going to start digging into things like understanding the sentiment of a conversation. The AI filter can detect the tone of a message.
Let’s say you’ve written a message to a client; the AI can analyze it to see whether or not it has more of a negative tone or a positive tone.
If you’re going for a positive tone, you can ask the AI to convert your message into a more positive sentiment.
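The tone-conversion step described above could look something like this sketch, again assuming an OpenAI-style chat API; the function name and prompt wording are assumptions for illustration, not the product's actual code.

```python
# Illustrative sketch of tone conversion: ask the model to rewrite an
# advisor's draft in a target tone while preserving the facts. Names and
# prompt text are hypothetical.

def build_tone_request(draft, target_tone="positive", model="gpt-3.5-turbo"):
    """Build a chat-completion request that rewrites `draft` in `target_tone`,
    keeping every factual statement unchanged."""
    prompt = (
        f"Rewrite the following message in a more {target_tone} tone. "
        "Keep every factual statement unchanged.\n\n" + draft
    )
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

# As with draft replies, the rewritten message would be returned to the
# advisor for review before transmission, e.g.:
#   from openai import OpenAI
#   rewrite = OpenAI().chat.completions.create(**build_tone_request(text))
```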
And we’ll be adding features, such as AI’s ability to expand ideas that an advisor may have, to help generate content. They’ll be able to, for example, create a couple of paragraphs and [get] an [entire] letter out of that.
AI can take a complex idea, say, an investment strategy that the client doesn’t fully understand and that an advisor is trying to explain, and simplify it to make it a little more human-friendly, in print, text or email.
What’s a concrete example?
I had it help me write [content] for shareholder meetings. I asked it to expand on my ideas. You tell it a couple of things you want to talk about, and it expands into an essay.
Do you think the use of ChatGPT will become an industry trend?
One hundred percent! We already know that other places in the general investment-management space will be popping up with it.
We think the world is starting to see the benefits of AI, and we’re already seeing it embedded into other applications. So it’s becoming more mainstream, not just in financial services but everywhere.
But why should financial advisors take to it?
I think we’ve finally reached a tipping point with assistive technology for advisors.
About five years ago, I gave a keynote speech about it, but [the idea] was uncomfortable [for many] people.
Now that it’s mainstream and people are using it for programming, college homework, so many different things, it’s becoming more and more commonplace and acceptable to use.
And the quality is really high. Really good content is being generated.
Who else has the ChatGPT technology or something similar?
There are at least 10 [firms] that I’m aware of. Microsoft has it embedded now. [Companies] have had the technology for five years. It just wasn’t mainstream enough [to introduce].
People were skeptical of AI. But somebody eventually broke the mold and made it consumer-friendly to the average person, who isn’t very tech-savvy but now realizes there’s value in it.
Five or 10 years ago, having a speaker in your house that you talked to, like Alexa, was radical. You would have never wanted [a device like that] back then. Now you can’t live without it.
I see the same thing happening as we embed these technologies [like ChatGPT] into our existing suites of tools. They’ll be used to make life easier.
But the raw material ChatGPT uses to create or to change content is gleaned mostly from the internet. Is that where you’ll get information for ChatGPT to expand on ideas that advisors sketch out?
AI is only as good as the data you give it. It’s a massive data set, but its internet data only goes up to about 2021. So there are limitations.
A new learning model is coming out, though, called GPT-4 that, I think, is 20 times the [current] data-set size. It’s in beta now. When it’s available, we’ll upgrade to it.
What are challenges that might arise in using ChatGPT with advisors?
Some will be on the compliance front: “Is AI writing a recommendation to a client, or is the advisor doing it?” “Was it reviewed?” There’ll be those types of questions.
That’s what scares people sometimes. With AI, are you talking to a human or a robot? Is this a computer talking to me or a human?
We’re going to have to deal with those kinds of regulatory issues as we go along down the road. We’ll discover a lot [of them] over time as we keep rolling out features.
But we’ve taken the stance that we’re having the user [advisor] vet the content first because the reality is that AI can make mistakes.
“Large language models [AI] are not the solution. They may well be the catalyst for calamity,” argues Gary N. Smith, professor of economics at Pomona College. His new book is “Distrust: Big Data, Data Torturing, and the Assault on Science.”
AI is potentially dangerous, particularly when working with people’s assets and investments. Your thoughts?
It could be dangerous if used for trading, say, and without supervision. These are things we’re not currently pursuing. They can be pretty detrimental.
We’re starting with conversations first.
Was integrating ChatGPT into Redtail Speak a project you had in your lab for a while?
It was in Redtail’s lab. It came from a developer. On the executive level, we followed what [our engineers] were working on with ChatGPT.
I just fell in love with it and asked them to keep going.
We [thought it was] a really cool feature.
(Pictured: Brian McLaughlin)