Nick Selby and Sarah Wells - The Rush to Adopt AI: Business Risks & How to Get it Right
In this episode of GOTO Unscripted, Sarah Wells, EPSD’s Lead Consultant for Engineering Effectiveness, and Nick Selby, EPSD’s founder and Managing Partner, explore why the current rush to adopt AI tools introduces significant business risks. They discuss how AI vendors deliberately blur security terminology to confuse buyers, how AI tools’ insatiable appetite for data creates enormous blast radii when breaches occur, and what organizations can do to adopt AI responsibly: threat modeling, cross-disciplinary governance, minimum-permission principles, and incident readiness.
The Problem Nobody Is Talking About
Sarah Wells: We’re going to talk about something that came up when I was writing my talk for GOTO Copenhagen this year — a talk about governance and how to reduce risk without slowing people down. When I was thinking through the examples, so many of them were related to AI implementations. We’ve both been working across security and engineering, and it’s something we’ve really noticed in the last year. I wanted to talk about the current rush to adopt AI and why that introduces significant business risk.
My background: I’m an independent consultant. I generally work to help organisations improve their engineering effectiveness — making sure you have platforms and processes in place for delivering business value.
Nick Selby: I’m a managing partner at a company called EPSD — it doesn’t stand for anything. It’s a group of independent consultants who got together to address the strategic issues around information technology adoption. One of the things we’ve consistently noticed in our consulting lives is that executives will be frustrated or confused about performance issues. They’ll say, “I thought we bought the best stuff — didn’t we take care of that by going to X platform?” Meanwhile, the engineering teams being talked about are thinking, “Well, if you would stop pivoting, maybe we could get things done.” There is a huge chasm between how these groups communicate, and that happens to be what we work on.
Sarah Wells: Whenever you talk to people about the problem — that engineering teams aren’t delivering as much value as quickly as expected — you go and talk to the engineering team, and first of all, they absolutely know where the problems are. But secondly, it’s very rarely actually an engineering problem. It’s “we don’t actually know what the direction is — after two years, we still don’t know where we’re going.” Or “we’re being asked to spend no time on technical debt or keeping things up to date, so we just get slower and slower.” That’s been an interesting pattern for a few years. What’s making it even more interesting now is the rush to AI.
Nick Selby: Absolutely. And when executives say, “it’s our engineering,” they expect a technical response — but the answer is usually a strategic or programmatic one at heart. Because we’ve already framed it as an engineering or technical problem, it further obfuscates the issue.