
Study Notes Vulnerabilities in Code-Generating AI Systems


The study suggests that projects using AI code-generating tools still need some level of human oversight and expertise for critical security tasks.

Jan 19, 2023

A recent study published by researchers affiliated with Stanford found that developers who use code-generating AI assistants may be more likely to introduce security vulnerabilities into their projects. These initial findings sit uneasily alongside the recent surge in marketing for such systems, and they leave researchers asking how these tools can be used without introducing vulnerabilities.

The study focused on Codex, a code-generation system developed by San Francisco-based OpenAI. The researchers asked 47 developers with a range of industry programming experience to use the system to complete security-related problems spanning several common programming languages.

As developers write, Codex suggests additional lines of code and whole functions based on the surrounding context, drawing on a model trained on billions of lines of publicly available code. In the study, developers who relied on these suggestions were more likely to write insecure and incorrect code than the control group. They were also more likely to believe their code was secure.
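As a rough illustration of the kind of vulnerability at issue (this sketch is ours, not an example taken from the paper), an assistant might complete a database lookup with string formatting, which is open to SQL injection, whereas a parameterized query is not:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_insecure(name):
    # The kind of completion an assistant might plausibly suggest:
    # string formatting lets attacker-controlled input rewrite the query.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_secure(name):
    # Parameterized query: the driver treats the value strictly as data,
    # so it cannot change the structure of the SQL statement.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_insecure(payload))  # leaks every row in the table
print(find_user_secure(payload))    # [] -- no user has that literal name

Both versions look plausible at a glance, which is the kind of gap the study suggests developers tend to miss while trusting the assistant’s output.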


The findings suggest human expertise is still needed

While this isn’t a condemnation of the tool, it does suggest that projects using it still need human oversight and expertise for critical security tasks. Notably, none of the participating developers had the security expertise generally associated with these tasks.

The researchers believe that programs like Codex have a place in less sensitive tasks and can help speed up development where appropriate. However, it’s important to understand the weaknesses of AI-driven programs like this.

So can AI get better at security? The researchers note that human oversight and refinement could help these systems improve in specialized domains such as security. In addition, developing secure default settings for developers to work within could help.
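As one illustration of what secure defaults can look like in practice (our example, not one proposed by the researchers), Python’s third-party cryptography package ships a Fernet recipe whose simplest usage already handles key generation, per-message random IVs, and ciphertext authentication, leaving fewer decisions for either a developer or an AI assistant to get wrong:

from cryptography.fernet import Fernet

# Fernet is deliberately "secure by default": it chooses the cipher,
# generates a fresh random IV for every message, and authenticates
# ciphertexts, so the most obvious usage is also a safe one.
key = Fernet.generate_key()
f = Fernet(key)

token = f.encrypt(b"sensitive payload")
print(f.decrypt(token))  # b'sensitive payload'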

The researchers believe more work needs to be done to establish best practices and to develop methods for addressing challenges like these.

Elizabeth Wallace

Elizabeth Wallace is a Nashville-based freelance writer with a soft spot for data science and AI and a background in linguistics. She spent 13 years teaching language in higher ed and now helps startups and other organizations explain - clearly - what it is they do.
