Two-Thirds of Security Leaders Consider Banning AI-Generated Code
- by Anoop Singh
One of the most-touted benefits of the proliferation of artificial intelligence is how it can assist developers with menial tasks. However, new research shows that security leaders are not entirely on board, with 63% contemplating banning the use of AI in coding due to the risks it poses.
An even larger proportion of the decision-makers surveyed, 92%, are concerned about the use of AI-generated code in their organisation. Their main concerns all relate to a reduction in the quality of the output.
AI models may have been trained on outdated open-source libraries, and developers could quickly become over-reliant on the tools that make their lives easier, meaning poor code proliferates in the company's products.
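For illustration, consider the kind of dated pattern an assistant trained on older repositories might reproduce. The sketch below is hypothetical rather than drawn from the research: unsalted MD5 for password storage, a long-deprecated practice, next to the salted key-derivation approach current guidance favours.

```python
import hashlib
import os

def hash_password_outdated(password: str) -> str:
    # The kind of snippet an assistant trained on older code might suggest:
    # fast, unsalted MD5 has been considered unsafe for passwords for years.
    return hashlib.md5(password.encode()).hexdigest()

def hash_password_current(password: str) -> bytes:
    # Current guidance: a slow, salted key-derivation function such as
    # scrypt (in Python's standard library since 3.6), with the salt
    # stored alongside the derived key.
    salt = os.urandom(16)
    return salt + hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
```

Both functions run, and both appear to "work" in a demo, which is precisely why outdated patterns can slip into production unnoticed.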
SEE: Top Security Tools for Developers
Furthermore, security leaders believe it is unlikely that AI-generated code will be quality-checked with as much rigour as hand-written code. Developers may not feel as responsible for the output of an AI model and, consequently, won't feel as much pressure to ensure it is perfect.
TechRepublic spoke with Tariq Shaukat, the CEO of code security firm Sonar, last week about how he is “hearing more and more” about companies that have used AI to write their code experiencing outages and security issues.
“In general, this is due to insufficient reviews, either because the company has not implemented robust code quality and code-review practices, or because developers are scrutinising AI-written code less than they would scrutinise their own code,” he said.
“When asked about buggy AI, a common refrain is ‘it is not my code,’ meaning they feel less accountable because they didn’t write it.”
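To make that accountability gap concrete, here is a hypothetical example (ours, not Sonar's) of the kind of flaw a careful reviewer would reject but a rubber-stamp review of AI output might wave through: a SQL query built by string formatting, which is injectable, beside the parameterised form a reviewer should insist on.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str) -> list:
    # Plausible assistant output: interpolating user input straight into
    # the query string makes it vulnerable to SQL injection.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str) -> list:
    # What review should require: a parameterised query, letting the
    # database driver handle escaping.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```

Passing `name = "x' OR '1'='1"` to the first function returns every row in the table; the second treats the same input as a literal string.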
The new report, “Organizations Struggle to Secure AI-Generated and Open Source Code” from machine identity management provider Venafi, is based on a survey of 800 security decision-makers across the U.S., U.K., Germany, and France. It found that 83% of organisations currently use AI to develop code, and that the practice is standard at over half of them, despite the concerns of security professionals.
“New threats — such as AI poisoning and model escape — have started to emerge while massive waves of generative AI code are being used by developers and novices in ways still to be understood,” Kevin Bocek, chief innovation officer at Venafi, said in the report.
While many have considered banning AI-assisted coding, 72% felt they had no choice but to allow the practice if the company is to remain competitive. According to Gartner, 90% of enterprise software engineers will use AI code assistants by 2028, reaping productivity gains in the process.
SEE: 31% of Organizations Using Generative AI Ask It to Write Code (2023)
Security professionals losing sleep over this issue
Two-thirds of respondents to the Venafi report say they find it impossible to keep up with uber-productive developers when ensuring the security of their products, and 66% say they cannot govern the safe use of AI within the organisation because they lack visibility into where it is being used.
As a result, security leaders are concerned about the consequences of letting potential vulnerabilities slip through the cracks, with 59% losing sleep over the matter. Nearly 80% believe that the proliferation of AI-developed code will lead to a security reckoning, with a significant incident prompting reform in how such code is handled.
Bocek added in a press release: “Security teams are stuck between a rock and a hard place in a new world where AI writes code. Developers are already supercharged by AI and won’t give up their superpowers. And attackers are infiltrating our ranks — recent examples of long-term meddling in open source projects and North Korean infiltration of IT are just the tip of the iceberg.”