First programs show that AI can write code like humans, but it also produces unexpected errors and new problems. AI has made great strides in recent years. AIs can not only control our household appliances; they can also compose music, drive our cars (partially) and even translate brain waves into speech.
It is therefore not surprising that many companies are experimenting with new AI functions. One of them: AI should be able to write code on its own.
AI: Writing Code Helpful, But Also Buggy
The first software developers are experimenting with programs of this kind. One of the best-known examples is GitHub’s “Copilot”. GitHub specializes in software development and has been part of Microsoft since 2018.
With Copilot, programmers enter the first lines of code. The AI then guesses what kind of program it is meant to be and writes the rest of the code. Copilot is based on an AI program from the company OpenAI and is intended to take work off the shoulders of coders on GitHub.
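This workflow can be illustrated with a hypothetical example. The comment and function signature below stand in for what a programmer might type; the body is the kind of completion a tool like Copilot could suggest (this is an illustrative sketch, not an actual Copilot suggestion):

```python
# The programmer types a descriptive comment and a signature:
# "Return the average of a list of numbers, or 0.0 for an empty list."
def average(numbers):
    # The AI might then propose a completion like this:
    if not numbers:
        return 0.0
    return sum(numbers) / len(numbers)

print(average([2, 4, 6]))  # 4.0
```

The programmer then accepts, edits, or rejects the suggestion, which is exactly where the errors described below can slip through.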
First experiences show that Copilot is helpful and saves time, but at the same time creates new problems. Copilot is by no means flawless. Quite the contrary.
AI And Humans Make Different Mistakes
Alex Naka is a data scientist who has worked with Copilot. He discovered not only that the AI, like humans, builds bugs into code; he also found these bugs harder to track down, as he reports to Wired magazine.
“There have been a few times where I’ve missed some underlying flaws after accepting one of the suggestions,” he says. “And it can be very difficult to identify these, maybe because the AI makes different mistakes than I would have made.”
Security: Incorrect Code 40 Per Cent Of The Time
Naka’s experience is not an isolated case. A study by New York University examined Copilot in detail and found that the AI makes mistakes in security-relevant features in around 40 per cent of cases.
According to the study, this is often because a line of code is misinterpreted or because the AI lacks precise context that was not provided beforehand.
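The study does not hinge on any single example, but a classic instance of the kind of security flaw such analyses count is an SQL query assembled from raw user input. A minimal Python sketch (names and table are invented for illustration) shows the vulnerable pattern next to the safe one:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Vulnerable pattern: the input is pasted directly into the SQL string,
    # so a name like "x' OR '1'='1" matches every row (SQL injection).
    query = f"SELECT id, name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, name):
    # Safe pattern: a parameterized query lets the driver escape the input.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()

# Demo setup with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 -- every row leaks
print(len(find_user_safe(conn, payload)))    # 0 -- no user has that name
```

Both functions look plausible at a glance, which is why a reviewer skimming an accepted suggestion can miss the difference.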
Brendan Dolan-Gavitt, one of the authors of the Copilot study, also found that the AI had accidentally incorporated several forbidden, offensive terms into the code: precisely the terms the program is supposed to avoid.
Oege de Moor, on the other hand, who co-developed Copilot for GitHub, emphasizes that this error rate relates only to a sub-category of the entire code, namely a category in which errors are more likely anyway.
De Moor also points to “CodeQL”, a kind of spell checker for code that he likewise co-developed. CodeQL can reveal programming errors in Copilot’s output, so de Moor recommends using the two programs together.
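Real CodeQL checks are written in their own query language and are far more sophisticated, but the basic idea of scanning code for risky patterns can be sketched with a toy static check in Python (the flagged pattern, a call to `eval`, is chosen purely for illustration):

```python
import ast

# Source code to analyze (as a string, the way an analyzer would see it).
SOURCE = """
password = "hunter2"
result = eval(user_input)
"""

# Toy static check: walk the syntax tree and flag any call to eval(),
# a pattern real analyzers also report via much richer queries.
tree = ast.parse(SOURCE)
findings = [
    node.lineno
    for node in ast.walk(tree)
    if isinstance(node, ast.Call)
    and isinstance(node.func, ast.Name)
    and node.func.id == "eval"
]
print(findings)  # [3] -- line 3 of SOURCE contains an eval() call
```

The code is never executed; the check inspects its structure, which is why such a tool can review AI-generated suggestions before they are run.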
Huge Market Potential
Of course, GitHub isn’t the only company using AI to write code, because the technology has great potential. In 2020, the market value of artificial intelligence was $35.92 billion. Analyses also suggest that the AI market could grow to $360.36 billion by 2028.
Nevertheless, some developers remain very skeptical. They fear that the way AIs write code amounts to copying other programs, a practice the tech community wants to eradicate.
Others worry about the security of programs written by AIs: if the code is based on existing code, it becomes much easier for hackers to infiltrate the programs and plant malware.
AI developer de Moor, on the other hand, believes in the future of such programs. He admits there is still much to improve, but he also believes that AIs are only at the beginning of code writing and will soon make very few mistakes.