Blog
12.2023

Sathish Muthukrishnan, Chief Information, Data, and Digital Officer, Ally

Today we welcome Sathish Muthukrishnan, Chief Information, Data, and Digital Officer at Ally, to “The CXO Journey to the AI Future” podcast. He joined Ally Financial in January 2020 with a focus on modernizing processes, teams, and technology, after years of driving transformative business outcomes at multiple Fortune 500 companies.

Question 1: First Job: Could you talk a little bit about your background and how you got to the position where you are now?

Today I serve as Chief Information, Data, and Digital Officer, and it’s a huge privilege to hold that title at a digitally native bank, because fundamentally, everything we do centers around technology. It’s a dream job for any IT leader who wants the accountability and responsibility to make everyone else around them look extremely successful.

I originally grew up in the airline world, starting with Singapore Airlines and then United. I later moved into other verticals, with American Express and then Honeywell, before joining Ally Financial.

Question 2: Generative AI: Everybody’s talking about it. Undoubtedly you have been doing AI analytics and other things in and around automation for years. Could you put a little context around Generative AI? Is this the highest priority for you right now?

I may have a slightly different philosophy and take, but I’ll get to your question. I’ve believed for a long time that we should never lead with technology. If you lead with technology, you take your eye off what’s important, which are the capabilities you’re building for your customers. The technology must be relevant to the customer, and must drive business impact.

So we’ve always led by asking whether this is the right capability for our customers. Is it creating the right business impact? And then you figure out what technology you have to use to build it.

However, your question is very important right now, because sometimes a technology will come around that can disrupt every industry and every company out there globally. Gen AI may accomplish exactly that. So, there’s an extreme focus on how we can securely (and scalably) utilize gen AI, while still keeping in mind our two core principles:

  • What business impact are you creating?
  • What customer value are you producing?

Question 3: New Metrics: There are lots of use cases where Generative AI could be used. How will you go about deciding where to put attention? How do you go about defining the right metrics for an AI deployment? Is it productivity? Is it business impact? What are those metrics that matter?

Everybody is thrilled and excited to start using gen AI, as you can imagine, and we have taken a very simple approach: anything we do with gen AI should either drive efficiency or drive effectiveness. Efficiency means an additive technology that makes your everyday job better, easier, or more productive. Effectiveness means driving real business impact and results.

So how do we go about that? We know that this isn’t a technology that can be executed within IT – the entire company will have to participate, and it will have to become a revolution. So we created what we call an Ally AI Playbook. The AI playbook provides a platform for anybody who has an idea to take that idea to production. This could mean finding a path, figuring out the governance, learning who the right people to engage with are, getting access to the tooling, and moving everything forward.

In order to bring this playbook to life, we’ve created a step-by-step process:

  1. It starts with what we call AI days. These are half-days where we talk about AI to the entire company. We’ve had thousands of people participate from within the business, across all organizations. We bring in outside speakers. But more importantly, we bring in use cases that we’re working on to show everyone what success looks like.
  2. Afterwards, people will come up with ideas, and we’ll guide them to the AI playbook by using a persona.
  3. We’ve created a central AI factory team that gives them access for 30 days. So, they can take their idea and start experimenting with it. At the end of the 30 days, they prepare a whitepaper to demonstrate whether it’s hitting the efficiency or effectiveness mark. If it is, we take that through the production runway.
  4. Once on that runway, we figure out the elements around risk and security, and whether or not the idea is useful. Then, it can be taken to production.

That’s how we’re approaching our AI use cases.

Bonus: Everyone has been moving towards SaaS and no one is building their own custom solutions anymore. Do you believe that to be true? What’s the ‘buy versus build’ thesis with regards to AI that we should be thinking about right now?

Oh, what a great question to ask! And the right one to ask right now.

My take is that it’s buy and build, not buy versus build. Now I’ll tell you why. There are several reasons:

The first one is that if you go buy generative AI and use it, you’re going to produce the same outcome as every other company. So how are you going to differentiate yourself?

And that leads to number two: If you go and buy off-the-shelf software or any SaaS offering for that matter, they’re using AI. And if you don’t play with it yourself and understand how it works, how are you going to leverage the AI capability that someone else is providing?

Finally, cybercriminals have the same access to this technology that we have. They’re trying to use it proactively, and they’re trying to use it to compromise any company they can. So how do you then protect against that?

For all of these reasons, if your organization does not fully understand the impact that AI can provide, and if your technology team does not have a hands-on understanding of the technology, you will be left far behind.

And that’s why we say: buy and build. The way it has come to life at Ally is that we quickly understood this technology is moving way too fast, and new competitors are coming every day, so we created what we call the Ally AI Platform. You can think of it as a bridge between the internal Ally and all its applications, and the external AI capabilities.

It gives me access to go interact with a model, let’s say OpenAI, which is what we’re connected with today. If I don’t like it, or if I want an additional model to compare it to, I can go and connect with a LLaMA model or Bard, and we’re doing that in our lower environments. But in our production environments, we are only connected to OpenAI. So the Ally AI Platform gives me the ability to point to multiple models and either choose the best option or get all the outputs, compare (or even combine) them, and make the result more relevant to Ally.
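To make that bridge concrete, here is a minimal sketch of how such a model-agnostic gateway might be structured. It is purely illustrative and not Ally’s actual platform; the `ModelAdapter` interface, the `EchoAdapter` stub, and the `AIGateway` router are hypothetical stand-ins for real vendor integrations (OpenAI, LLaMA, Bard, and so on).

```python
from abc import ABC, abstractmethod


class ModelAdapter(ABC):
    """Common interface that every external model is wrapped behind."""

    name: str

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Send a prompt to the underlying model and return its text output."""


class EchoAdapter(ModelAdapter):
    """Stand-in adapter; a real one would call a vendor SDK or REST API."""

    def __init__(self, name: str) -> None:
        self.name = name

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] response to: {prompt}"


class AIGateway:
    """Routes internal applications to whichever models are registered,
    so swapping or adding a model does not change application code."""

    def __init__(self) -> None:
        self._adapters: dict[str, ModelAdapter] = {}

    def register(self, adapter: ModelAdapter) -> None:
        self._adapters[adapter.name] = adapter

    def complete(self, prompt: str, model: str) -> str:
        # Single-model path, e.g. the one model allowed in production.
        return self._adapters[model].complete(prompt)

    def compare(self, prompt: str) -> dict[str, str]:
        # Fan the same prompt out to every registered model for side-by-side review.
        return {name: a.complete(prompt) for name, a in self._adapters.items()}


if __name__ == "__main__":
    gateway = AIGateway()
    gateway.register(EchoAdapter("openai"))  # production-connected model
    gateway.register(EchoAdapter("llama"))   # lower-environment experiment
    print(gateway.complete("Summarize this account note.", model="openai"))
    print(gateway.compare("Summarize this account note."))
```

The point of the design is that the routing decision lives in one place: a model can be evaluated, swapped, or have its output compared and combined with others without touching the applications that consume it.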

Question 4: What are some of the issues, or maybe even gaps in the technology, that you believe need to be addressed? I know this isn’t a technology-only issue.

There are different buckets, and each of them has its own weight in terms of how strong these headwinds are.

The one that I particularly worry about is third-party risk. We have an existing footprint of third-party technologies. Do we feel comfortable? Have they secured their tech environments the way we would secure ours, given the advancement in security threats?

And like I said, gen AI has democratized cyber attacks. The same cyber attack that’s going after the financial industry can go to the healthcare industry. So, how do we protect ourselves?

The second is the sophistication of these cyber threats. It’s become emotional. And it’s turning into parallel attacks, where you have to be on your toes and keep educating your entire organization all the time. That’s the second headwind that I think about.

If I look at it internally, what do I have to do? First is education. You can’t leave the organization behind. So you have to be patient in educating everyone and showcasing the success that gen AI brings. Show them that this is not rocket science. Everybody can use it. English is the most popular language in gen AI, so everyone can play a part.

I come from a regulated organization. We engage our control partners in risk, compliance, legal, and audit, and help them understand what we’re implementing so they can proactively identify the risks that come along with it. That way, they can put their minds to figuring out what other risks there are.

We need to push the organization forward, but if we’re upfront about all the risks that we’ve identified, they can help us mitigate those risks. So that’s another headwind in terms of educating and bringing the organization along.

Then finally, there’s talent. How can you keep your talent challenged and give them better opportunities? You want them to stay with your company and not leave to join someone more sophisticated.

Question 5: Responsible AI: How do you think about it? What does it mean to you?

As a regulated institution, I have to think, breathe, speak, and eat Responsible AI.

One, because we need to have consistency of service. Our goal is to be a fair and responsible bank that mitigates model drift and bias, and that all starts with data cleanliness and accessibility. Further, we focus on testing our models ourselves, as well as the outcomes across various customer segments, to ensure that bias is not introduced.

Then, our independent risk organization tests the models out. For the first use case that we launched, they looked at 60,000 outcomes to make sure that there wasn’t model drift or bias introduced.

That’s from a business perspective.

From a social perspective, I worry even more. In this decade and this era, you have a lower chance of succeeding if you are not digitally connected.

You and I probably don’t think twice before we pick up our smartphones to connect and do what we do every day. But millions of people don’t have access to the internet or a smartphone, or don’t have the education or opportunity to learn how to use them efficiently.

Gen AI, if not appropriately executed, implemented, and integrated within society, is going to widen the digital divide. And chances are it will become a digital abyss.

So, tech leaders are responsible for modernizing their organizations with gen AI because they understand it best. They can connect the dots. Public and private organizations have to make sure they pay it forward to create a society that doesn’t become more divided on account of these digital advancements.

Sathish is currently the Chief Information, Data, and Digital Officer at Ally, a leading digital financial services company. Reporting to the CEO and a member of the company’s Executive Committee, he is responsible for leading product, user experience, data, digital, technology, security, network, and operations – an end-to-end role in the midst of everything that happens in the digitally native organization.

Most recently, he was Honeywell Aerospace’s first CDIO, with a charter to digitize the Aerospace businesses. As an example, he successfully transformed the used serviceable airline parts business into a digital business powered by blockchain. He was also responsible for leading and transforming Honeywell Aerospace’s 10,000-plus-person global engineering organization.

Sathish was one of the pioneers of American Express’ digital transformation. He led the launch of several unique, industry-first, groundbreaking digital products, enabling strategic partnerships with Foursquare, Facebook, Twitter, Microsoft, Apple, Samsung, and TripAdvisor, steering the American Express journey from payments to commerce. He delivered the ability for Card Members to Pay with Points, creating a new currency for payments.

Sathish’s experience in several technology leadership positions with Honeywell, American Express, United, and Singapore Airlines has led to more than 30 patents in the manufacturing, payments, and digital technology space.
