“It's Really About Embracing AI and Considering It to Be on Your Team, But Not Letting It Loose”
Silo Busting 58: Responsible AI in Financial Services with Kathryn Hughes and Alex Jimenez
When it comes to artificial intelligence, how responsible is the financial services industry? How responsible must it be? And what steps should it take to get there? These are the sorts of questions Alex Jimenez, EPAM’s Managing Principal of Financial Services Consulting, is asking of Kathryn “Kate” Hughes, our Director of LegalTech and ERM Business Consulting, in this #TakeItToTheBank conversation.
Hughes says that many financial firms have long invested in AI and they’re “starting to see a level of incremental benefit from AI.” She mentions a tool called ChatGPT—you might have heard about it—and says that people have been looking for a “sweet spot of a scenario or a use case” and that ChatGPT “hit it out of the park.”
Jimenez notes that while it’s good to invest in AI, it’s important to view it realistically. “ChatGPT is not built to be replacing a call center,” he says. “It's not built to replace the advice from your licensed advisor or from your banker or from your accountant.”
The pair have both tested ChatGPT, and quickly found some limits. Jimenez recently asked it about himself. “It invented a whole biography,” he says. “It talked about my life and how I lived in San Francisco and how I did all these things, which are not real. And then when I asked for the sources, the sources were made up as well.” Hughes recently asked the AI about her pension. “It gave me a really good overview of the United States pension system but did not in any way actually answer my question.”
Hughes wisely argues for a measured approach to AI. “I think we have to be adults when engaging this technology,” she says. “One of the finest ways to engage AI is to think about it as a co-collaborator… rather than think of it as some, you know, alien other, and to sort of bring it into the mix.”
Risk is, obviously, a big issue here. “Banks need to start thinking about how they manage the risks around AI,” says Jimenez, and he warns against the danger of “digital redlining,” which is when “the data that we're using is biased and now the decisions that the AI is doing are biased as well.”
Hughes speaks about the guidelines proposed by the Wolfsberg Group and the EU’s proposed legislation harmonizing AI regulation, which call for things like a legitimate purpose, accountability and oversight, and openness and transparency built in.
Guidelines are great, but the big question remains: How do you move financial services toward AI responsibility?
Hughes says there are two paths. The first involves ensuring “that the organization has its own governance and policy, brand reputation and ethic.”
The second path involves education. Hughes recommends reading about the EU’s proposal for harmonized rules on AI, the Algorithmic Accountability Act, and the AI Bill of Rights, and notes that organizations should start familiarizing themselves “with the actual content of these proposed legislative items.”
Right now, it’s time to familiarize yourself with the full Silo Busting conversation. Get clicking!
Host: Alison Kotin
Engineer: Kyp Pilalas
Producer: Ken Gordon