JPMorgan’s AI rollout: Jamie Dimon’s a ‘tremendous’ user and it’s caused some ‘healthy competition’ among teams

Before a business review with JPMorgan CEO Jamie Dimon, Teresa Heitsenrether runs her presentation through one of the bank’s generative AI tools to help her pinpoint the message she wants to convey to the top boss.

“I say, what is the message coming out of this? Make it more concise. Make it clear. And it certainly has helped with that,” Heitsenrether, who is responsible for executing the bank’s generative AI strategy, told a conference in New York on Thursday.

Dimon himself is a “tremendous user,” she said, and is waiting for the ability to use the bank’s tools on his phone.

“He’s been desperate to get it on his phone and so that’s a big deliverable before the end of the year,” Heitsenrether added.

JPMorgan, America’s largest bank, has now rolled out the LLM Suite, a generative AI assistant, to 200,000 of its employees.

The tools are the first step in adopting AI technology across the firm. Speaking at the Evident AI Symposium, Heitsenrether, JPMorgan’s chief data and analytics officer, said the next generation would go beyond helping employees write an email or summarize a document, linking the tools into their everyday workflows to help people do their jobs.

“Basically go from the five minutes of efficiency to the five hours of efficiency,” she added, saying it will take time to reach that goal.

‘The flywheel effect’

The response to the LLM Suite rollout has been “enthusiastic” and has created “healthy competition” between teams, she said. The wealth and asset management arm was the first division to use generative AI, piloting a generative AI “copilot” for its private bank this summer.

“When the investment bank found out they said ‘Well, wait a minute, we want to be on there too,'” she said. “It does create a flywheel effect.”

JPMorgan offers courses and in-person training on using the firm’s generative AI tools, such as how to prompt a chatbot properly, but the bank is also leaning on superusers, the 10% to 20% of employees who are “really keen,” to help drive adoption.

“We embed people within different groups to be the local source of expertise to be able to help people that they work with understand how to adopt it,” Heitsenrether explained.

The most common superusers seem to be those who clearly see the benefits of generative AI, such as a lawyer who saves hours by getting a synopsis of contracts or regulations instead of reading them all.

Despite Wall Street’s interest in generative AI, getting workers to actually adopt the technology has been a key hurdle for finance firms, Accenture consultant Keri Smith previously told B-17. As a result, training and reskilling efforts have come under the spotlight, she said.

Heitsenrether said the bank is trying to engage with the “pockets of resistance” now because it will be harder to convert them once the technology becomes intertwined with workflows.

She also said that the sooner people engage with AI, the less skeptical they become and the more they see how it can augment, rather than replace, their jobs.

“Having it in your hands I think demystifies it quite a lot,” Heitsenrether said. She used the example of a developer using it to write a test case more quickly. If they see the benefits, she said, they realize “this is not something that’s going to be done without me, but it’s just a way to make my work that much more effective.”

What’s next: AI assistants

Heitsenrether told the audience that by this time next year, she hopes to be talking about “enabling employees with their own assistant” that is specific to them and their jobs.

Some of the legwork needed to develop those more autonomous forms of AI is currently being done in pilots, Sumitra Ganesh, a member of JPMorgan’s AI research team, said during another panel.

Even so, the early use cases for AI workers will likely be constrained, because these systems still need a human in the loop to provide the reliability required in such a regulated industry.

“We don’t have a lot of trust right now in these systems,” she said. Having an expert in the loop who can verify AI outputs is “kind of babysitting these agents at this point, but hopefully, it’s like training wheels — at some point we will be confident enough to let them go,” Ganesh said.
