Jimmy Malhan: Why Your Organization’s AI Strategy Is Broken and What To Do About It

The most dangerous AI risk in most organizations is not the one the chief information security officer (CISO) is tracking. It is the engineer pasting proprietary code into ChatGPT, the sales leader feeding competitive strategy into a prompt, the product manager uploading an unreleased roadmap to get a summary. Every one of those interactions is a data leak dressed as productivity, and it is happening across every function, every day, without a single security alert firing.

Jimmy Malhan, founder of Pretense, built his company because nobody else was solving this. The Samsung incident, where employees leaked intellectual property, codebase details and database schemas directly to ChatGPT, is not an anomaly. It is what happens at scale when AI adoption outpaces AI security. “You’re not spending millions to get thousands back,” Malhan says. “You’re spending millions while leaking billions of what you’ve already built. That’s the loop nobody is talking about.”

The Security Perimeter Has Moved. Most Organizations Have Not

Traditional data loss prevention tools were architected for a threat model that no longer exists: network firewalls, virtual private network (VPN) monitoring and email scanning. They cannot intercept what gets typed into an AI coding assistant. They were not designed to. The exposure happening inside engineering teams, sales functions and executive workflows today is invisible to the security stack most enterprises have spent years building. The organizations that laid off engineers to accelerate AI adoption are now rehiring them to fix AI-generated errors and plug data leaks, effectively paying twice for this lesson. "The security is not what it used to be," Malhan says. "It is AI security now."

Recognizing that distinction is not a technical conversation but a strategic one, and it belongs at the board level before the exposure becomes a liability that surfaces in a competitor's product roadmap or an acquisition conversation where the intellectual property (IP) is not worth what was assumed. The attack surface has changed. The security investment has not kept pace. Until it does, every AI productivity gain the organization claims is partially offset by the proprietary value quietly leaving through the same tools delivering it.

Copying Is a Strategy for Finishing Second

Most organizations will not discover their AI strategy is broken until the damage is already done and the reason is structural. Rather than running internal experiments and placing their own bets, they hired consultants with ten clients from the same industry and implemented what was already working elsewhere. The result is a strategy built on someone else’s risk tolerance, someone else’s timing and someone else’s context, perpetually one cycle behind the organizations that acted first.

The return on investment (ROI) gap is real. Companies spending millions are getting thousands back. But that gap is not evidence that AI fails to deliver. It is evidence that the market is still in transition, comparable to the period between the pre-internet and post-internet eras, when value was real, but returns lagged investment by years. “The market is not yet mature,” Malhan says. “But the organizations building understanding and running experiments now are the ones that will capture the value when it arrives.”

Waiting for certainty before acting does not reduce risk. It transfers the upside to whoever moved earlier and assigns the cost of delay to the organization that hesitated. The fix is not a larger AI budget. It is a dedicated experimentation budget, real resources allocated to internal bets, independent of what competitors are doing. Without it, the strategy is reactive by design.

The Decision That Separates the Winners From the Rest

In three years, there will be organizations that AI has strengthened and organizations it quietly hollowed out. The deciding factor is not budget allocation or access to technology. It is the willingness to act before the path is fully clear and the discipline to secure what makes the organization worth competing with in the process. Malhan frames this through a sharp contrast: one approach is to periodically discard and rebuild core systems from scratch; another is to iterate incrementally on what already exists. Both approaches have merit. They have different ceilings.

“If I have to nuke it, I’m going to nuke it,” Malhan says. The founder mindset accepts that bold action sometimes means starting over entirely. The incremental mindset optimizes what exists. In a market moving at this pace, the ceiling on incremental optimization is lower than most organizations want to admit. Moving fast without protecting proprietary data is exposure. Moving cautiously without experimenting is obsolescence. The organizations that hold both disciplines simultaneously are the ones that will be standing when the transition completes. Every other organization is betting that the damage will stay invisible long enough to matter less than it does.

Follow Jimmy Malhan on LinkedIn and visit Pretense for more insights on AI security, engineering leadership and building organizations that stay ahead of AI disruption.
