Bank of America’s Hari Gopalkrishnan on AI, Innovation, and the Customer Experience
- SIFMA Editors

A Conversation at SIFMA’s 2025 Annual Meeting
At the 2025 SIFMA Annual Meeting, Hari Gopalkrishnan, Chief Technology and Information Officer of Bank of America, sat down with Laura Peters Chepucavage, Chair of SIFMA’s 2025 Board of Directors, for a candid conversation on the future of AI, the evolution of customer and employee experiences, and the strategic decisions driving Bank of America’s $13 billion annual technology investment.
Gopalkrishnan outlined a clear framework for AI’s evolution – from automation, to predictive analytics, to large language models, and now to reasoning and intelligent agents – while highlighting what’s working today and where firms will benefit from more discipline and governance.
Key Takeaways
- AI is moving from prediction to reasoning and agents: Bank of America views AI across four layers: automation, predictive models, generative AI, and now goal-oriented “agents” that can plan steps and orchestrate workflows. As an industry, we’re still only in the early innings of generative and agent-based applications.
- Real impact comes from reimagining end-to-end workflows: The biggest gains come when firms redesign processes, not just bolt AI onto legacy steps. Examples include trade surveillance, call centers, and client meeting prep, where tasks that once took hours can be compressed into minutes by combining automation, predictive models, and generative AI.
- Guardrails, governance, and human review are non-negotiable: Bank of America uses techniques such as “LLM-as-judge,” strict data boundaries, and human-in-the-loop sampling to manage hallucinations and accuracy. For high-stakes use cases, AI augments decision-making rather than replaces it.
- Clients want seamless experiences, not “AI” labels: Retail customers don’t ask for AI; they want fast, intuitive, and proactive service. Tools like Bank of America’s Erica deliver value when they surface relevant insights (e.g., unusual charges, payment reminders) and allow natural-language interactions that simplify everyday financial tasks.
- Prioritization and ROI discipline are critical as costs scale: With hundreds of potential use cases, Bank of America evaluates AI investments based on business impact and efficiency: focusing on areas with large volumes and manual effort, and weighing trade-offs such as latency, accuracy, and compute cost. The goal is to reinvest productivity gains into higher-value work, not chase every shiny new tool.
Speakers
- Hari Gopalkrishnan, Chief Technology and Information Officer, Bank of America
- Laura Peters Chepucavage, Head of Global Financing and Futures, Global Rates and Counterparty Portfolio Management, Bank of America; Chair, SIFMA 2025 Board of Directors
Transcript
Laura Chepucavage: Awesome. Thank you for having us. Hari, thank you for coming-
Hari Gopalkrishnan: Great to be here.
Laura Chepucavage: … and talking to us about AI and the various aspects of how it’s getting integrated into our industry. Hari and I both boast very easy last names, so I’m very happy that Ken introduced Hari’s last name since I’d mess it up. But I thought we’d start with the baseline with Hari in terms of where are we in the AI evolution. It’s obviously such a hot topic. We heard Ken on stage previously discussing it. It’s all top of mind for most of us as we work through our businesses. So maybe if you could just set the stage, Hari, for us in terms of where we are.
Hari Gopalkrishnan: Sure. We have a framework by which we think about the overall platform, and it starts as a journey through basic automation, which all of us have been doing for dozens of years. That’s straightforward automation: rules, connecting systems together. But let’s not forget about it, because it’s going to make its way back again when we talk about agents; automation is foundational, the building block. Then, probably 10-plus years ago, we started doing predictive AI, which is: take a bunch of structured data, bring it together, and can you start to create some intelligent analytics, regressions, and then predictions based upon that? We call that predictive AI. A classic example is fraud: you see millions of transactions, you bucket them into, “These are fraud, these are not fraud.” You tell the machine to learn, and over time the machine learns. Now you can start pumping in real transactions and it starts to make fraud decisions.
So that’s traditional predictive AI, and we’ve been doing it for probably 10-plus years. Then you get into content and language, and the last stretch of predictive AI does have some content and language. Think of spam filters, right? Everybody gets emails, and things get bucketed as spam or not spam. When a customer calls in, can you classify the call as being about a bill, a trade, or a dispute? Those are all simple language models that have been here for probably the last seven or eight years. And then three years ago, you start to see the increase in the use of large language models, which is where you start getting novel foundation models that can be built once and used repeatedly to solve problems that historically were much more difficult to solve from a human perspective. And so that’s where the large language models have been the last three years.
And then most recently, you start getting into reasoning, where you start asking questions, could you just give the machine a goal and say, “Go figure it out”? And that’s where agents start to come in where you start to get intelligent agents that perhaps, theoretically, and I say that for a real reason, have the ability to understand what you’re asking for, have a goal, plan for the goal, come up with steps, use predictive and generative AIs to go solve those goals using automation capabilities that we’ve built over the years. So everything then starts getting stitched together to create end-to-end solutions for clients and associates. So that’s kind of been the framework for us as we think about it.
Laura Chepucavage: And where are we in terms of stitching that all together as an industry and where do you think people are along this continuum across the industry, whether that’s financial services or the brokerage industry, and we can talk specifically about Bank of America as well, because you’re doing great work at the firm.
Hari Gopalkrishnan: Yeah, look, the automation journey is old news. We’re all doing it in many ways, shapes and forms. Then there’s predictive, in terms of looking at things like fraud, or portfolio optimization, where you look at various parts of a customer’s or client’s portfolio, understand their risk factors, run Monte Carlo simulations, and try to figure out where they want to go. I think that’s something that, by and large, we are doing as an industry; it’s fairly common. If I shift to the language models, some form of customer service and chatbots that actually help serve customers, your mileage varies a bit there. For example, we made an investment about seven years ago in a platform we call Erica. We’ve deployed that now to 20 million clients and 3 billion transactions. So that’s been an example of widespread usage, but I think everyone has probably been working through those.
What’s new in the last couple of years is the use of generative AI, and we’re seeing generative AI in multiple places. Things like helping an advisor prepare for a client meeting, being able to say, “I’m about to meet Hari. Can you give me, based on his portfolio, his holdings, his risk profile, and what he’s interested in in the market, a first draft of a summary that I can use to talk to him?” Using it in trade surveillance, for example, bringing structured and unstructured data together and being able to say, “Look, this looks like behavior that’s abnormal. Let’s go chase it down in a much more automated fashion.” Those are things where I would say, as an industry, we’re still in the second or third inning, with a lot more to come going forward. And on agents, the hype is super real there, and we’ll talk hopefully a little bit about what to watch for and what the tremendous possibilities are as well.
Laura Chepucavage: Okay, great. Okay, great. So you talked a little bit about what we’ve implemented that’s been successful, so maybe also talk a little bit about things that we’ve tried that haven’t worked.
Hari Gopalkrishnan: Yeah, look, our approach generally, a couple of years back when the generative AI landscape kicked in, was that we didn’t want to have thousands of points of light where everybody randomly starts doing stuff, because, A, it’s expensive, and B, it’s counterproductive. We’ll talk hopefully a little bit about ROI later. But these things get expensive pretty fast and you have to govern them. So we actually came up with about 45 different proofs of concept, and fortunately, almost all of them have seen success in some way, shape or form. Those range from chatbots that are used by customer servicing teammates to help answer questions.
We have in call centers, for example, the ability to listen to a call in real time, have it transcribed using speech-to-text in real time, have it classified into one of many intents so we know what the customer’s calling about, route them to the right agent, have the agent solve their problem, and then have the entire conversation summarized and filed with no human intervention involved. So that’s been very powerful from a servicing perspective. From a sales and relationship management perspective, we’ve had the client meeting prep I mentioned, where you just go in and say, “I’m about to meet client X, produce a document for me.”
We’ve seen very good success with that. We’ve seen success with front-, middle-, and back-office teams talking through a chatbot: I’m about to set up a bond and I’m having a problem with systems six and seven; who can help me with this thing? It intelligently routes you to the right person, the person picks up the problem, sets it up, and all that stuff happens in a fluid fashion. So lots of good examples of success stories. On the other side, to your point about lessons learned, there are a couple of things you learn as you do these proofs of concept. For example, go back to the early Erica days. We started to build a platform that could be a chatbot for both voice and text. Customers could either type, “Hey, I want to pay a bill, I want to learn about this trade,” or talk into it. And we were spending the same amount of effort on both.
And at that time we realized 95% of our customers weren’t talking into the phone; they were actually typing. And so we reconsidered the amount of work we were doing on voice at that point, and we pulled back on the investment. We kept some investment there, but really focused on how to make it much more of a text-centric thing, because people are still nervous about talking about their financials into the phone in a public forum, so it’s easier for them to text. More recently, everybody thinks the holy grail of meeting summarization is you go to a meeting or a Zoom call and magically insight pops out, everything gets summarized, it goes into your CRM platform, and life is good. That is a great aspirational view, but you learn a couple of things along the way. Technically, it all works. The two things to watch for are disclosures and document retention.
Suddenly, what used to be a phone call is written communication, because you’re technically transcribing what the person is saying. It is now written; you’ve got to keep it for the required period of time. There are disclosure requirements. There’s a lot of stuff that comes along with that. So you just have to be aware of the laws and regs-
Laura Chepucavage: A lot of written communication-
Hari Gopalkrishnan: … surrounding that. It’s written communication at that point. Second is this word “slop” that you may have heard recently, AI slop, which is: if you leave the machine to itself, it’s going to create a bunch of crap. If I have a one-hour conversation with you, chances are we have four or five really insightful things we talk about, but along the way we’re talking about family and how things are going and so on. And if you just transcribe that whole thing and dump it into a CRM system, the signal-to-noise isn’t that great. So you have to think about those two things. We’re now working on a second iteration of that work. But in the first iteration, we thought, “Simple. Listen to the call, transcribe it, save it. Life is good.”
Laura Chepucavage: I mean, can’t we tell it, “Only transcribe things related to business”?
Hari Gopalkrishnan: Well, then you get into the naturalness of the conversation, number one. And secondly, even “business” gets fuzzy: “When are your kids going to go to college?” Is that a business thing? Maybe we’re talking about a 529 plan for you, as opposed to, “I just want to know where your kids are going to go to college.” So it’s all doable. At the end of the day, it’s work. Some things just come naturally, and some things, where there’s more legal risk and there are compliance considerations, take a little bit more time to iron out. And AI slop is really real: the amount of noise that can be generated if you don’t manage the nature of the content you’re creating is something you have to think about as you do this at scale.
Laura Chepucavage: Well, and I’ll go maybe a little off from our questions, but also can we talk a little bit about accuracy? So just in my personal life when I’m putting things into… I’m sure we’re all the same, into Gemini or ChatGPT, it’s wrong sometimes, right? So how are we thinking about that? Particularly for our employees that are going to use these various tools. Are we telling, “Okay, you need to use this,” but then also double check, how does that get better over time?
Hari Gopalkrishnan: Yeah, it’s a great question, and there are probably two or three very important points to it. So let’s talk about the non-technical part, which is training and awareness of what these tools can or can’t do. At the end of the day, these are predictive models that, in the generative sense, are predicting what the next word in a sentence should be. And as such, they can be wrong. That’s just the nature of what these things are. So when you educate your teammates as to what you should use it for and what work you should double check, that’s part of the program: you have to have education, and you have to have a human in the loop. But when you deploy these tools, you also have to implement a set of guardrails that come along with them. How do you check for hallucinations? How do you check the fact-
Laura Chepucavage: How do you check for hallucinations?
Hari Gopalkrishnan: There are a number of ways you can do that, some of them technological. The technology piece is that there are guardrail capabilities that let you use a large language model as a judge. So you have the first large language model answer your question, like, “How do you think the markets are going to perform if this happens?”
Laura Chepucavage: Right.
Hari Gopalkrishnan: The large language model comes up with an answer as the output. You can then have multiple other large language models check the work, asking across multiple models, “How coherent, how consistent, how relevant is it?” And based upon a composite score, it comes back with, “I’m 98% certain this is good” (you’re never going to get 100%), or, “I’m seeing a big disconnect between what the first model output and what the judge models concluded.” That’s just an example. There’s a term called LLM-as-judge; that’s a pattern that’s used in the industry. But there are many similar guardrails. There are metrics that are generated that look at, for example, how similar the output is to the input you provided. Are you only using the enterprise data you’re supposed to use, or are you reaching outside the boundaries of what you’re telling it to do?
Because when I ask that question, you may only want it answered based on what the bank has about you, but left unguarded, it could reach out and say, “Well, actually, I learned from some Wikipedia article that this too could happen.” How do you guard against that? So there are well-defined industry metrics (not a perfect answer) that are available, and as part of your training process and your governance process, you have to think about how you’re going to govern this. It’s things like LLM-as-judge and some of these metrics that you have to make part of the development process. All that said, it’s not perfect.
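The LLM-as-judge pattern described above can be sketched in a few lines. This is an illustrative Python sketch, not Bank of America’s implementation: the judge functions here are stubs returning canned dimension scores, and the function names, score dimensions, and 0.95 threshold are all hypothetical stand-ins; a real system would prompt separate judge models.

```python
# Sketch of the LLM-as-judge guardrail: several judge models score a candidate
# answer on coherence, consistency, and relevance, and a composite score decides
# whether the answer passes or is flagged for review. The judges are stubs
# returning fixed scores; a production system would call actual LLMs.

from statistics import mean

def judge_stub(scores):
    """Build a stand-in 'judge model' that returns fixed dimension scores."""
    def judge(question, answer):
        return scores  # e.g. {"coherence": 0.97, "consistency": 0.98, ...}
    return judge

def composite_score(question, answer, judges):
    """Average each dimension across judges, then average the dimensions."""
    per_dim = {}
    for judge in judges:
        for dim, score in judge(question, answer).items():
            per_dim.setdefault(dim, []).append(score)
    return mean(mean(scores) for scores in per_dim.values())

def passes_guardrail(question, answer, judges, threshold=0.95):
    """Flag the answer for human review when the composite falls short."""
    return composite_score(question, answer, judges) >= threshold

judges = [
    judge_stub({"coherence": 0.98, "consistency": 0.97, "relevance": 0.99}),
    judge_stub({"coherence": 0.96, "consistency": 0.98, "relevance": 0.97}),
]
print(passes_guardrail("How will markets perform?", "Draft answer…", judges))
```

As in the transcript, the composite never reaches a guaranteed 100%; a large disagreement between the answering model and the judges is what triggers the flag.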
And that’s why one of the other things we do, as part of our model governance, is a human review process: people who look at it and say, “I’m going to run a sample of a thousand queries and look at what the machine said. I’m a human; I know what the right answer should be. Is it contextual? Is it relevant? Is it accurate? I’m going to tag all these things.” And if the thousand come through and you get 999 right, you’re in a good place. If you’re doing that and it turns out that 50 out of the thousand are just making stuff up, it’s time to go back to the well and figure out what’s wrong with how you’re priming your data and how you’re implementing your models, so you’re actually planning for safety along the way.
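The sampling-based human review he describes reduces to a simple accuracy check over tagged responses. The 0.99 threshold below is illustrative, chosen only to separate the two outcomes he contrasts (999 of 1,000 correct versus 50 of 1,000 made up); actual governance thresholds are not stated in the conversation.

```python
# Sketch of sampling-based human review: a reviewer tags each sampled model
# response as correct or not, and the model goes back for rework when sample
# accuracy drops below a threshold. The 0.99 threshold is illustrative.

def review_sample(tags, threshold=0.99):
    """tags: list of booleans, True where the human judged the answer correct.

    Returns (accuracy, verdict) where verdict is "ok" or "rework".
    """
    accuracy = sum(tags) / len(tags)
    verdict = "ok" if accuracy >= threshold else "rework"
    return accuracy, verdict

good_sample = [True] * 999 + [False]        # 999 of 1,000 correct
bad_sample = [True] * 950 + [False] * 50    # 50 of 1,000 made up

print(review_sample(good_sample))  # high accuracy, passes
print(review_sample(bad_sample))   # below threshold, flagged for rework
```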
Laura Chepucavage: Yeah, we’re seeing that a bit with our Maestro application within markets, our internal research portal where we can aggregate information very quickly. So you walk in in the morning: “Give me an idea around the US Treasury debt footprint and projections that came out based on the recent meetings,” whatever that may be. And it’s scraping all of our internal research, and it seems like we can toggle between “Should I bring in outside research or only inside research?” Explain to me, though, and the audience: a lot of times as we try out these new tools, you do get some errors, and so they’re only as good as how often you use them and train them.
Hari Gopalkrishnan: Yeah. They are, but I would also say you’re bringing up an important point, which is user experience matters a lot.
Laura Chepucavage: Yeah.
Hari Gopalkrishnan: It’s not just slap a model on, throw it at your data, and voila, magic happens. It doesn’t work that way. For the best scenarios we’ve seen, the best use cases, pull the thread on that call center example I gave you. We’ve actually incorporated the model into the call center desktop. So when the phone rings, the first thing that happens is it tries to predict, based on the phone number, who’s calling, then starts to figure out some insights as to the reasons Hari could be calling. What do I know from the data set? It turns out he has never had a fee levied before.
What are the next five things he could be calling about? All that prep work happens in the context of a contact center dashboard. The teammate who’s picking up the call, even before they pick up, is seeing the client start talking to the IVR, saying, “Hey, I’m calling because I was traveling to Istanbul and I dropped my wallet somewhere.” Now the call is being transcribed in real time, and the agent, before they even pick up, is seeing that transcription and starting to see prompts that say, “Probably a lost card.”
And if that’s what it sounds like, let me take you to a screen that, in two clicks, will let you ship a new card and be done with that client in a seamless fashion. That’s a user experience play. The AI is the model, but you could have done that in a very crappy fashion, where you have them go to four different screens and copy and paste stuff, and have the model spit out something which is not aligned with what the associate is seeing, and the client gets frustrated. Or you build UX within that. And by the way, at the end of the day, the associate can do a thumbs up or thumbs down to indicate whether or not what they got out of that model was good, which is then used to train the model: “Okay, actually, we got that one right. So going forward, when we see this type of language, it probably is a lost card. When I see this kind of language, it turns out we’re just having false alerts, so we should try to refine and tune it.”
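The classify-and-route step in the contact-center flow described above can be sketched in miniature. This is a toy Python sketch, not the bank’s system: a keyword lookup stands in for the trained intent model, and the intent labels, phrases, and queue names are all invented for illustration.

```python
# Minimal sketch of intent classification and routing for a live call
# transcript. A production system would run a trained language model over a
# real-time speech-to-text stream; here a keyword lookup stands in for it.
# All intent labels and queue names are hypothetical.

INTENT_KEYWORDS = {
    "lost_card": ["lost my card", "dropped my wallet", "stolen card"],
    "dispute": ["dispute", "unauthorized", "didn't make this charge"],
    "billing": ["bill", "payment due", "late fee"],
}

ROUTES = {
    "lost_card": "card-services-queue",
    "dispute": "fraud-claims-queue",
    "billing": "billing-queue",
    "unknown": "general-queue",
}

def classify_intent(transcript: str) -> str:
    """Return the first intent whose keywords appear in the transcript."""
    text = transcript.lower()
    for intent, phrases in INTENT_KEYWORDS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return "unknown"

def route_call(transcript: str) -> dict:
    """Classify the transcript so far and pick the agent queue."""
    intent = classify_intent(transcript)
    return {"intent": intent, "queue": ROUTES[intent]}

print(route_call("I was traveling and I dropped my wallet somewhere"))
```

The thumbs-up/thumbs-down loop he mentions would feed corrections back into whatever model replaces the keyword table here.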
So this role of user experience, test and learn, deploying in small pilots, growing, is a very important concept, because the worst thing you can do is say, “I got this cool tchotchke, let’s throw it at 10,000 people,” because the negative-
Laura Chepucavage: Disaster-
Hari Gopalkrishnan: … convexity effect you get is a disaster. Instead, you go to 50 people. They’ll love 70% of it, and they’ll say this 30% is like, “Eh, this is not very good.” Then you go figure out the next iteration. Now it’s 85% there. Now you go, “Okay, I’m going from a hundred people to a thousand people.” You get more feedback, and then more feedback. Same way we do client rollouts, by the way. We start with internal teammates of ours who are clients. They use the platform first, then we pick a small state, roll it out there, and get client feedback. So especially with these tools, rinse, repeat, iterate becomes a really important part of how you deploy these capabilities.
Laura Chepucavage: What are you hearing? Let’s think about the consumer, customer of the bank. So not the institutional investors that I’m dealing with on a daily basis, but the consumer, customer, I mean, are they… Demanding is the wrong word, but what do they want to see from us from an AI perspective or do they not care if it’s AI or not, they just want efficiency in their questions answered quickly? How are we thinking about that from the customer’s-
Hari Gopalkrishnan: Yeah. Look, no retail customer ever came to us and said, “I want digital, I want AI, I want these things.” Nobody wakes up in the morning and says, “I want to go banking today. It’s going to be fun.”
Laura Chepucavage: Come on-
Hari Gopalkrishnan: Sorry. No one does. They just want to live their financial lives, and they want to live them in a way that is fully integrated with the rest of their lives. So what does that mean? Things that are predictive in nature, insights, are valuable. Being able to wake up in the morning and see, “Look, I think you just got overcharged for this thing. You may want to look at this.” That’s a very common insight that we surface.
Laura Chepucavage: The fraud protection too is very useful.
Hari Gopalkrishnan: The fraud protection is an example. But in some cases, it’s just a merchant accident. This happened with my son’s gym membership: one month they accidentally triple-charged him. Now, I may have seen it at the end of the month in a statement, but instead I get Erica saying, “Hey, it looks like this month your gym membership is three times what it usually is. It doesn’t make any sense. You should look at it.” And that’s insight, right? That’s valuable. The customer didn’t care that it was AI generating it. They just cared about the fact that you gave them information that otherwise they would’ve missed and, in doing so, would have lost money along the way.
Second is ease of access. One thing we found with our customers, and we have 65 million customers, is this is a small five-inch screen. How much are you going to jam in there? And they’re like, “Well, I can’t find stuff anymore.” And so the idea of natural language became very important to us, and that’s why we built Erica seven, eight years ago. Usability matters. When I’m here to do a financial transaction, I want in and out. I’m not here to hunt and peck through 15 menus and 16 screens and enter 17 different things. I just want to say, “Hey, I owe Laura 15 bucks. Can you send her the money?” I want to say that. And it says-
Laura Chepucavage: Into Erica.
Hari Gopalkrishnan: Into Erica.
Laura Chepucavage: Yeah.
Hari Gopalkrishnan: I just say, “I want to send Laura 15 bucks,” and it pulls from my contact list, Laura. If there are two Lauras, it says, “Which Laura did you mean to send it to? This one? Which account do you want to send from? Are you sure it’s 15?”
Laura Chepucavage: And that’s taking you to Zelle.
Hari Gopalkrishnan: That’s taking you to Zelle. Exactly. That’s taking you to Zelle. You’re not sitting there saying, “Where’s Zelle in my mobile app?” Right. Like you’re just saying-
Laura Chepucavage: I’m still doing that. I need to get this assistant. I need to work with Erica more.
Hari Gopalkrishnan: Yeah, we need to get you to training.
Laura Chepucavage: Yeah. Definitely need it.
Hari Gopalkrishnan: Yeah. Absolutely.
Laura Chepucavage: So how do you see Erica continuing to evolve?
Hari Gopalkrishnan: Well, I think-
Laura Chepucavage: Or has Erica reached her kind of max utilization?
Hari Gopalkrishnan: There’s always more. I mean, if you think about it, a lot of what we did there is essentially the first generation of servicing, the easy stuff. And this is where we can tease into where the AI industry evolves. A lot of our workflows across the companies we work at are still fairly close to what they used to be years ago. They’re people; people aggregate data when a problem comes in, whatever the problem may be.
Laura Chepucavage: They analyze it.
Hari Gopalkrishnan: They look at the data, they analyze it, they make a prediction. Based on the prediction, they take it to the next step, make a decision, take some actions, and life goes on. That’s a standard rinse-and-repeat in any problem statement you have. The question for all of us is: off that workflow, how much can actually be automated through traditional automation, predictive AI, and generative AI? So let’s use trade surveillance as an example.
Laura Chepucavage: Sure.
Hari Gopalkrishnan: Because this group would care a lot about this. You have an incoming alert that says you’ve got to go chase this thing down. Today, you have someone saying, “Okay, I’ve got to chase this thing down. I’ve got to go to seven different places.” It could be front-running a market, it could be this, it could be that. You look at all these systems, you come back: what do I think? You disposition it in a case management system, and then it goes somewhere. It’s a lot of human touch. Tomorrow, before even a human-
Laura Chepucavage: And a lot of false positives.
Hari Gopalkrishnan: And a lot of false positives. In comes an alert, and before it even hits a human, could you imagine a world where it says, “Hold on for a second, let me look at this thing. I know that I have these seven or eight systems I should go look into: this one is used for client limits, this one is used for trade surveillance, and so on. So let me go hit those things. Let me come back with a joint view of it. Let me apply some level of reasoning, maybe. What does it feel like to me, the machine?” It says, “Okay, this actually feels like it could be a real issue, a violation.” Now when it gets to the human, it says, “By the way, here’s an alert. I think it’s a violation based on these seven things I’ve just done for you. Go ahead and take a look at it.”
That’s where the human in the loop comes in, right? The human in the loop looks at it and goes, “All right, let me look at it. That’s good research. I get it. I agree with the judgment, and now I can disposition that thing,” as opposed to doing all that research themselves. Similarly, if you see something and say, “Hang on for a second, I don’t like the reason you gave; I want to go double check,” you could still double check, but you just took a two-hour process and probably made it a 15-minute process. And the reason I say that is because where we go next, in some ways, is understanding every place we have processes, and we all have tons of processes, looking end-to-end across these journeys, not at one myopic part like going to system seven for an answer, but at the end-to-end journey, mapping it out in a way that we actually understand where humans touch it.
Where do humans actually add value by touching it? And across all these tools, automation, predictive AI, generative AI, and some reasoning, how do you transform it? How do you simplify the process? Don’t just automate the existing process, because that would suck. Take the existing process, reimagine it using these new tools, then apply those tools to it, and magic happens, right? That’s where we start to see things that used to take hours now take seconds and minutes, and that’s what we have to go do next: one by one by one, understand where our processes are.
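The alert-triage flow sketched above (the agent fans out to the relevant systems, aggregates the findings, attaches a preliminary judgment, and hands a pre-assembled case to a human) can be outlined like this. The system names, fields, scores, and the two-flag threshold are all invented for illustration; the point is the shape of the orchestration, with the human disposition kept as the final step.

```python
# Sketch of agent-style alert triage: before a human sees the alert, the agent
# queries each relevant system, aggregates the results, and attaches a
# preliminary judgment. System checks and thresholds are hypothetical; the
# human still makes the final disposition.

def check_client_limits(alert):
    return {"system": "client-limits", "flag": alert["notional"] > 1_000_000}

def check_trade_history(alert):
    return {"system": "trade-history", "flag": alert["trades_today"] > 50}

def check_comms_surveillance(alert):
    return {"system": "comms-surveillance", "flag": False}

SYSTEMS = [check_client_limits, check_trade_history, check_comms_surveillance]

def triage(alert):
    """Fan out to each system, then summarize for the human reviewer."""
    findings = [check(alert) for check in SYSTEMS]
    flags = sum(finding["flag"] for finding in findings)
    judgment = "possible violation" if flags >= 2 else "likely benign"
    return {"alert_id": alert["id"], "findings": findings,
            "judgment": judgment, "needs_human_review": True}

case = triage({"id": "A-17", "notional": 2_500_000, "trades_today": 80})
print(case["judgment"])  # two flags raised -> "possible violation"
```

Note that `needs_human_review` is always true: the agent compresses the research from hours to minutes, but the disposition stays with the reviewer.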
Laura Chepucavage: It’s adding a layer of productivity and efficiency to it.
Hari Gopalkrishnan: Every step of the way. Right. And client satisfaction, because essentially, avoiding a false alarm, or being able to disposition something, even if it’s a misunderstanding, in seconds and minutes versus hours and days makes a huge difference.
Laura Chepucavage: And then it gets quite sensitive, too, as it relates to the efficiency gains around labor, and what does that mean? Do you roll that labor into more productive, higher-value uses? Or does that mean… How are we thinking through that?
Hari Gopalkrishnan: Yeah, look, our view is we have a tremendous backlog of work we’re not getting to. I’ll use an example in the software development-
Laura Chepucavage: There’s a lot to do. I agree.
Hari Gopalkrishnan: There’s lot to do.
Laura Chepucavage: I agree.
Hari Gopalkrishnan: In software development, we’ve deployed coding agents to 18,000 developers in the company. That has already yielded, for a good portion of our development cycle, about 20% efficiency. What we’ve done now is say, “We want to reinvest that,” because we have plenty of software development to do. It’s not like, “Oh, we’re done with software development for now.” We are reinvesting the saved money, and it’s going into next year’s budget as incremental investment without actually spending more dollars. So to me, unless any of us is in a place where there’s no backlog left to prosecute, there’s so much backlog to prosecute.
And the other thing I’d offer up is leverage. If you think about how many clients a given banker can cover, can you imagine a world where this type of automation and AI lets you say, “Hey, I could never get to clients 17, 18, 19, and 20; I had to choose among the first 16 because I ran out of time. Now I’m doing it much more efficiently, and I can actually cover those clients as well.” And now the incremental revenue production out of those clients drops to the bottom line. So there is just so much opportunity out there before we get into the other-
Laura Chepucavage: That last piece resonates with me within the global markets framework.
Hari Gopalkrishnan: For sure.
Laura Chepucavage: If I can analyze the data, the incoming research, do the analytics on screen faster.
Hari Gopalkrishnan: Yeah.
Laura Chepucavage: Right?
Hari Gopalkrishnan: Exactly.
Laura Chepucavage: Then I can go with content to the clients in a much more meaningful way and you get to more clients and you’re more productive. We’re bringing more business in.
Hari Gopalkrishnan: Absolutely.
Laura Chepucavage: So that does resonate in my area of the bank. Hari, can you talk a little bit about how we prioritize? I get overwhelmed as we go through the various stages of business planning and prioritizing our tech spend. In the markets business, that might be e-trading, or helping your downstream platforms run more efficiently, and now we’re talking about digital and AI. If you just take the AI portion of it, and I know you’re looking at tech across the bank, it’s moving so quickly. So how do you take a step back and say, “What’s the framework, and how do we prioritize, such that we don’t chase the shiniest, newest thing that we know may have limited probability of success?”
Hari Gopalkrishnan: One of the things we've done there is partner with the business teams, teammates like yourself, and say, "Look, we have to understand the net efficiency you're going to gain out of this investment." It's very easy to get enamored by a cool demo someone shows you.
Laura Chepucavage: It happens to me all the time.
Hari Gopalkrishnan: Yeah, I have to back her down from this stuff.
Laura Chepucavage: It’s bad, I’m like, “Ooh, this looks great.”
Hari Gopalkrishnan: It looks great. And then you say, "What will it actually do for you?" And it's going to save me six minutes of time once every two weeks, and gee, it's going to cost me a million bucks. And you go, "Yeah, I don't know about that, Laura." On the other hand, look at a process where a million documents come in… Let's say term sheets come in, and you could actually pull the data out of those term sheets, where today you have thousands of operators looking at them every week, every month, manually re-keying the data or manually classifying the documents. If you gave me those two extremes (and life is not lived at the extremes, but let's use those examples), you would say, "I'd rather invest a million bucks here any day as opposed to there."
In a way, that's what we're doing. There are hundreds of ideas coming through. We're challenging our business teammates to rank them in order of impact from an ROI perspective, by looking at tasks and activities and understanding how much of them they can actually automate, which then frees people up to manage the backlog, as I mentioned. The more an idea does that, the higher the probability it pops up in the funding cycle. The more you go, "Yeah, it's cool, it's awesome, it's a great demo, but I can't commit to more than a five-minute save for three people," the less likely you are to get through the system. So I think we've created a nice framework whereby businesses come to us with a sense of priorities, and then we use that framework to decide what we're going to fund.
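The prioritization logic Gopalkrishnan describes (score each idea by time saved across the affected population versus its cost, then rank) can be sketched as a toy ROI calculation. This is a minimal illustration; the class name, fields, and all numbers are assumptions for the two examples in the conversation, not Bank of America's actual model:

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    minutes_saved_per_task: float
    tasks_per_week: float
    people_affected: int
    annual_cost_usd: float

    def annual_hours_saved(self) -> float:
        # 52 weeks/year; convert minutes to hours
        return (self.minutes_saved_per_task * self.tasks_per_week
                * self.people_affected * 52) / 60

    def roi_score(self) -> float:
        # Hours freed per dollar spent; higher ranks earlier in the funding cycle
        return self.annual_hours_saved() / self.annual_cost_usd

ideas = [
    Idea("Cool demo", minutes_saved_per_task=6, tasks_per_week=0.5,
         people_affected=3, annual_cost_usd=1_000_000),
    Idea("Term-sheet extraction", minutes_saved_per_task=10, tasks_per_week=200,
         people_affected=1_000, annual_cost_usd=1_000_000),
]
for idea in sorted(ideas, key=Idea.roi_score, reverse=True):
    print(f"{idea.name}: {idea.annual_hours_saved():,.0f} hours/year, "
          f"{idea.roi_score():.6f} hours per dollar")
```

With equal cost, the document-heavy workflow dwarfs the demo: six minutes once every two weeks for three people is under eight hours a year, while the term-sheet process recovers hours at industrial scale.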
Laura Chepucavage: Yeah, it requires a lot of discipline.
Hari Gopalkrishnan: It does, because you can spend money extremely fast on this stuff. You can easily see tons of money flying out the door. A couple of lessons learned for us. One is around the hallucinations and guardrails we talked about, right? You have to think about human in the loop. There's no way around it, unless you can afford to be wrong. In a call center, routing a call… Who cares if you route someone to the wrong person? You can always reroute them. It's not a problem. For anything meaningful, you have to worry about that. The second is that costs can get very expensive here. Either you're going to run these models on-prem, which means you're running expensive GPUs that are not commodity, or with a cloud provider, where the cash register is ringing every time you make a call.
Here's a simple example. People will tell me, "I want one-second latency in my response time." The difference between one-second latency and five-second latency could be a million dollars a year. You have to ask them the question, "Do you really want one-second latency for the million dollars spent?" And typically, if you're a business user, you go, "Whoa, whoa, five's good. Can you give me nine seconds? That's plenty." But if you don't tell them the trade-off, they're always going to want the first. Similarly, someone will say, "I want a 99% accurate large language model." Well, if you have a human in the loop reviewing it afterward, is 96% good enough? If 96% is, again, $300,000 a year and 99% is $5 million a year, do you need the 99%? And typically people go, "Actually, I don't. The next thing I'm doing is reviewing this client meeting prep anyway. Why do I need the 99%?" So as a savvy business user, the ability to know what those trade-offs are becomes really important as you go forward.
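The trade-offs in this exchange reduce to marginal-cost arithmetic: what does each extra unit of performance cost per year? A minimal sketch using the illustrative figures from the conversation (the dollar amounts are Gopalkrishnan's examples, not actual vendor pricing):

```python
def marginal_cost(cheap_cost: float, cheap_level: float,
                  premium_cost: float, premium_level: float) -> float:
    """Extra annual dollars per unit of improvement between two service tiers."""
    return (premium_cost - cheap_cost) / (premium_level - cheap_level)

# Latency: a 5-second tier as baseline versus a 1-second tier ~$1M/year more,
# i.e. 4 seconds of latency removed.
per_second_saved = marginal_cost(0, 0, 1_000_000, 4)
print(f"${per_second_saved:,.0f}/year per second of latency removed")   # $250,000

# Accuracy: 96% at $300k/year versus 99% at $5M/year, i.e. 3 extra points.
per_accuracy_point = marginal_cost(300_000, 96, 5_000_000, 99)
print(f"${per_accuracy_point:,.0f}/year per accuracy point above 96%")  # $1,566,667
```

Framed this way, the question practically answers itself: with a human reviewer already catching residual errors downstream, it is hard to justify roughly $1.6 million per accuracy point.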
Laura Chepucavage: Well, I think it then goes back to also what you were mentioning around the training-
Hari Gopalkrishnan: Absolutely.
Laura Chepucavage: For the users.
Hari Gopalkrishnan: Absolutely. Yep.
Laura Chepucavage: Okay. Maybe we’ll end, I think we are almost out of time. We have 17 seconds.
Hari Gopalkrishnan: Four more questions.
Laura Chepucavage: Yeah, exactly. I was going to ask you a little bit about some of the risks and how we think about those, but maybe not because we don’t have the time. But I guess where do you see us in two or three years?
Hari Gopalkrishnan: I think my hope would be that in two or three years we've started to have much more of a common understanding of the risks and opportunities that come up. Right now there's mad hype around agents and everybody's going crazy. You ask 15 people, "What do you mean by agent?" and you get 17 different answers.
Laura Chepucavage: I feel that way.
Hari Gopalkrishnan: Right. I mean, so everybody comes… So I think this idea of the fact that step back from hype, step back from all the… There’s so much hype and drama everywhere. Step back and say, “What is the business problem you’re trying to solve?” Have a common vernacular around like… You got automation, you’re predictive, you have generative, you have reasoning. They all do these things well, they don’t do these things well. We can apply them to these problems in this way. You can apply them to these problems this way. And you kind of get a little past the sort of everybody pitching the hype. Now, it’ll be the next wave of hype after that, I’m sure. But I think even if technology innovation stopped at this moment, there’s plenty out there for us to reap the benefit off for the next five to seven years. It’s more for us to organize our thinking and priorities around it.
Laura Chepucavage: Great. I appreciate your time. Thank you for-
Hari Gopalkrishnan: Thank you.
Laura Chepucavage: Thank you for spending this time with us.