Mike Coogan: How to Govern Emerging Technology Risks Without Slowing Growth

Cybersecurity leaders often get a bad rap as anti-innovation gatekeepers, but that perception is usually the result of poor planning. Summoned too late, their recommendations risk slowing down digital and product initiatives that are already in motion. Mike Coogan, VP, IT Services and CISO at Brinks Home, has spent decades working to dismantle that dynamic. “If you can figure out a way to work with those folks, they will work with you,” he says. “If you don’t, they will work around you, and that’s far worse.”

Governing emerging technology risk means understanding it well enough to guide it. Emerging technology risk is not new, despite how conversations around AI, cloud platforms, and hyperconnected devices treat each wave as unprecedented. Experts like Coogan, whose background spans more than 20 years in IT transformation, have seen this cycle before, from the rise of the internet to social media and large-scale cloud adoption. “There’s nothing really new under the sun,” he says. “We’ve dealt with change and new technology over and over. The pace is faster, absolutely, but we know how to deal with it.”

What tends to derail organizations is not the technology itself, but the posture taken toward it. Fear-driven governance creates friction, especially when security is introduced at the last moment. It’s a dynamic that reinforces the stereotype of security as the department of bad news, a role Coogan views as self-inflicted. “Security will always be scared. That’s something we naturally do,” he says. “But if the organization dies in the marketplace, it doesn’t matter how secure it was.” The issue is that boards and executive teams usually see emerging technology as a growth lever, so when security teams are not embedded early enough to translate that ambition into safe execution, organizations experience late-stage resistance and friction.

Security As An Early Partner In Innovation

One of the most effective governance practices Coogan points to is to talk early and talk often. Bringing security into product and technology conversations at the beginning changes the tone of decision-making. It replaces last-minute objections with shared problem-solving. “It’s always terrible to bring the security person in at the last second and then have them say, ‘Wait, let’s think this through,’” Coogan says. “Those interactions hurt. But together you grow and figure out how to make it work.”

It’s an approach that helps reframe security as a capability provider. Rather than blocking progress, teams are offered options: human review steps where automation introduces risk, additional scanning where data exposure increases, or proven patterns borrowed from prior implementations. “Instead, we get to say, ‘The way you’re doing it has issues, but we know how to solve them,’” Coogan says. That enthusiasm for new technology makes it easier for product teams to engage openly on accountability and risk.

Governing The Third-Party Ecosystem

As organizations rely more heavily on external platforms and service providers, third-party risk management becomes a growth issue as much as a security one. Product teams increasingly depend on specialized vendors for cloud services, data analytics, and AI capabilities, which means delays in vendor approval can directly slow innovation and revenue timelines. Legacy manual review processes, however, were designed for a slower, more centralized IT environment and struggle to keep pace with modern development cycles. Here AI itself becomes an enabler, accelerating vendor reviews and surfacing risk patterns earlier. Industry expectations are also converging: baseline assurances like SOC 2 Type II are becoming table stakes, allowing organizations to assess vendor risk quickly and consistently and keep third-party exposure visible and manageable at speed.

“You cannot do this the way we did it before,” he says. “These processes have to take in work and deliver work as fast as the business is moving.”

Using AI To Strengthen Judgment

Coogan’s perspective on AI is shaped by a career that stretches back to its earliest academic roots. Today’s generative models, he explains, are powerful pattern-recognition engines fueled by unprecedented volumes of data. They are useful precisely because of that, but not infallible. At Brinks Home, a provider of residential smart security and alarm services, AI is already embedded as a co-pilot in operational workflows, from incident response to root cause analysis. “It dramatically reduces time to resolution,” he says, while still preserving human oversight. As AI systems move toward more autonomous, agentic behavior, governance must ensure predictability.

“In the middle of an incident, you don’t want people getting creative,” Coogan says. The same principle applies to machines. AI should operate within defined boundaries, escalate uncertainty to humans, and be documented in ways leaders can explain and defend.

Accountability Still Belongs To Humans

The deeper implication of Coogan’s approach is cultural. Technology will continue to advance faster than governance frameworks can be written. Successful organizations will be those that accept that reality and focus on accountability. “You don’t get to blame the AI,” Coogan says. “You’re still accountable.” For Coogan, governing emerging technology ultimately comes down to leadership posture. Security leaders who want a seat at the table must act like business leaders first, enabling progress while setting clear guardrails. “Smart security enables whole teams to move forward,” Coogan says. “That’s how you keep growth moving without losing sight of risk.”

Follow Mike Coogan on LinkedIn for more insights. 
