
When AI Writes the Code, Security Must Manage the Risks


Security must scale alongside AI development rather than lagging behind. This requires rethinking AppSec and risk management as a continuous practice driven by intelligence.

Feb 18, 2026

AI is now actively writing software every day. Across industries, AI tools are generating functions, workflows, and entire applications at a pace no human team can match. For financial services organizations, where precision, transparency, and accountability are essential, this shift presents a new class of application security challenges that traditional security practices were not designed to handle.

Recently, while experimenting with an AI coding tool on a side project, I asked it to generate a simple database workflow. The result looked clean and efficient at first glance. However, on closer inspection, it contained a textbook SQL injection vulnerability. This was introduced not through negligence or lack of expertise, but through speed and automation. The AI did exactly what it was asked to do, and in doing so, replicated a security mistake developers have been trained for decades to avoid.
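
To make the failure mode concrete, here is a representative sketch rather than the tool's actual output (the table, column, and function names are invented for illustration). The generated query spliced user input directly into the SQL statement; the parameterized version below it closes the injection path.

```python
import sqlite3

def get_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern: untrusted input is concatenated into the SQL text,
    # so a value like "x' OR '1'='1" changes what the query means.
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def get_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver passes the value separately from the SQL,
    # so it is never interpreted as query syntax.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()
```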

This is not an isolated incident. As AI-generated code becomes ubiquitous, organizations must acknowledge its risks. The same tools that accelerate innovation can quietly expand the attack surface, allowing vulnerabilities to reach production faster than security teams can realistically keep up. The implications also extend beyond creating individual bugs to cascading governance and supply chain issues at scale.

See also: Can Vibe Coding Survive the New Era of Security

AI-Generated Code and the Expanding Attack Surface

AI-assisted development fundamentally changes the volume and velocity of code creation. Where teams once reviewed pull requests measured in hundreds of lines of code, they now contend with thousands generated in moments. Each new line of code represents potential risk, and when that code is produced without an intrinsic security context, vulnerabilities scale along with productivity.

In financial services, this expansion of output is especially important to manage. Applications are deeply interconnected, reliant on APIs, third-party libraries, and shared services that create cascading dependencies. When AI generates code that interacts with these systems, it often lacks awareness of the nuanced security constraints and regulatory requirements embedded in legacy environments.

The result is more frequent vulnerabilities. Small weaknesses compound when introduced repeatedly across applications and environments. Security teams are then tasked with identifying which issues matter most, which are exploitable, and which can safely be deferred. Without clear prioritization, remediation efforts risk becoming reactive rather than strategic.

AI does not inherently make software less secure. However, it dramatically compresses the window between creation and deployment, leaving less time for traditional review and testing. This compression exposes a structural mismatch between modern development velocity and legacy AppSec processes that were built for slower, more deliberate release cycles.

See also: Vibe Coding: The New Literacy for the AI-Native Software Generation


Why Traditional AppSec Struggles to Keep Pace

Most application security programs were designed around predictable workflows. Code is written by developers, reviewed by peers, scanned by tools, and remediated according to severity. AI disrupts these assumptions.

First, attribution becomes murky. When a vulnerability appears in AI-generated code, responsibility is diffused across prompts, models, training data, and human oversight. This complicates accountability and slows decision-making. Second, signal-to-noise ratios worsen. Automated scanners flag the same issues they always have, but are now working against a vastly larger codebase, overwhelming teams with findings that lack context or prioritization.

Third, remediation takes longer. Developers working with AI tools often regenerate code rather than refactor it, which can reintroduce vulnerabilities in different forms. Security teams may fix one instance of an issue only to see it reappear elsewhere moments later. The cycle becomes repetitive and inefficient.
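
To illustrate, continuing the hypothetical example from earlier: a regenerated version of the same lookup might swap concatenation for an f-string. The code looks different, may pass a quick visual review, and carries the identical injection flaw.

```python
def get_user_regenerated(conn, username: str):
    # Regenerated variant: new surface syntax, same vulnerability. The untrusted
    # value still lands inside the SQL text instead of being bound as a parameter.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()
```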

Security teams need to know which vulnerabilities actually expose the organization to risk, how those risks map to real-world threats, and where remediation efforts will have the greatest impact. Static lists of findings are insufficient in this regard.

See also: Vibe Coding: How AI-First Workflows Are Redefining Customer Data Engineering


Conversational Security and Context-Driven Remediation

To adapt, AppSec must evolve from being detection-centric to context-centric. This means shifting focus from simply identifying vulnerabilities to understanding their exploitability and business impact.

One emerging approach is the use of conversational workflows within security operations. Instead of manually correlating data across scanners and repositories, security leaders can interrogate their risk posture in plain language. Questions like “Which AI-generated services expose customer data?” or “Which vulnerabilities intersect with known exploited CVEs?” can be answered in seconds rather than days.
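
Under the hood, a question like the second one is a join between scanner output and an exploited-vulnerability feed. Here is a minimal sketch, assuming findings exported as a JSON array of objects with a cve field, plus a locally downloaded copy of CISA's Known Exploited Vulnerabilities (KEV) catalog, whose published JSON lists entries under vulnerabilities with a cveID field.

```python
import json

def exploited_findings(findings_path: str, kev_path: str) -> list[dict]:
    """Return only the findings whose CVE appears in the KEV catalog."""
    with open(kev_path) as f:
        kev_cves = {entry["cveID"] for entry in json.load(f)["vulnerabilities"]}
    with open(findings_path) as f:
        findings = json.load(f)
    return [finding for finding in findings if finding.get("cve") in kev_cves]
```

Wrapping that kind of lookup behind a conversational interface is what turns days of manual correlation into an answer on demand.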

This enables security teams to operate at the same speed as AI-assisted development. In this way, security becomes an integrated decision-support function, guiding developers toward safer patterns and highlighting where automation introduces unacceptable risk.

Platforms such as ArmorCode exemplify this approach by aggregating AppSec signals and applying AI to prioritize remediation based on exposure and context rather than raw severity scores. Tools like these illustrate how security workflows can be reimagined to match the realities of AI-driven code production.
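
As a rough illustration of the general idea, not a description of any vendor's algorithm, the sketch below ranks findings by exploitability and exposure rather than severity alone; the weights and fields are arbitrary assumptions.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float                  # base severity score, 0-10
    known_exploited: bool        # e.g., listed in an exploited-vulnerability catalog
    internet_facing: bool        # asset exposure
    handles_customer_data: bool  # business impact

def priority(f: Finding) -> float:
    # Illustrative weighting only: exploitability and exposure dominate raw severity.
    score = f.cvss
    if f.known_exploited:
        score += 10
    if f.internet_facing:
        score += 5
    if f.handles_customer_data:
        score += 5
    return score

def triage(findings: list[Finding]) -> list[Finding]:
    # Queue ordered by contextual risk, not by CVSS alone.
    return sorted(findings, key=priority, reverse=True)
```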

Importantly, this approach also improves collaboration. When developers understand why a vulnerability matters (not just that it exists), they are more likely to address it effectively.


AI, Software Supply Chains, and the Next Wave of Reporting

Beyond immediate vulnerabilities, AI-generated code raises long-term questions about software supply chain transparency. Financial services organizations already contend with complex reporting requirements around third-party risk, open-source usage, and software bills of materials (SBOMs).

When code is generated dynamically, what constitutes a component? How do organizations document the provenance of AI-generated logic or the models that influenced it? As regulators and auditors seek greater visibility into digital supply chains, these types of questions become essential to answer.

The concept of AI-specific SBOMs is gaining traction as a way to capture the AI systems involved in code creation. This would allow organizations to assess downstream risk if a model is found to produce insecure patterns or a training dataset is compromised. Preparing for that future, however, requires investment now: security leaders must work closely with development and compliance teams to define what transparency looks like in this environment.
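
What such a record would contain is still an open question. The sketch below is purely illustrative, with hypothetical field and tool names, loosely inspired by the direction of emerging ML-BOM work rather than any finalized standard.

```python
# Hypothetical AI-aware SBOM entry: every field name and value is an assumption
# about what provenance for AI-generated code might need to capture.
ai_sbom_entry = {
    "component": "payments-reconciliation-service",
    "generated_by": {
        "assistant": "example-code-assistant",  # hypothetical coding tool
        "model": "example-model-v2",            # hypothetical model identifier
        "model_provider": "example-vendor",
        "prompt_reference": "TICKET-1234",      # link back to the request that produced the code
    },
    "human_review": {
        "reviewed_by": "appsec-team",
        "review_date": "2026-02-18",
    },
    "known_limitations": [
        "model has produced unparameterized SQL in prior reviews",
    ],
}
```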


Building Security That Scales With AI

AI is not going away, nor should it. Its benefits to productivity, innovation, and accessibility are undeniable. The challenge is ensuring that security scales alongside AI development rather than lagging behind.

This requires rethinking AppSec and risk management as a continuous practice driven by intelligence. Automated scanning must be complemented by contextual analysis. Manual review must be augmented by AI that understands risk relationships. And governance frameworks must evolve to account for code that is machine-generated, not just human-written.

Organizations that embed intelligence into remediation workflows, prioritize risks based on real exposure, and plan for future supply chain transparency can harness AI’s speed without sacrificing trust.

Paolo Del Mundo

Paolo del Mundo is a senior security leader and Director of Application Security at The Motley Fool, where he focuses on application security, risk management, and secure software delivery in highly regulated environments. He works closely with engineering teams to modernize AppSec practices and has hands-on experience evaluating how AI-assisted development impacts vulnerability management and remediation.
