
ChatGPT Code: Is the AI Really Good At Writing Code?



This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Programmers have spent decades writing code for AI models, and now, in a full-circle moment, AI is being used to write code. But how does an AI code generator compare to a human programmer?

A study published in the June issue of IEEE Transactions on Software Engineering evaluated the code produced by OpenAI’s ChatGPT in terms of functionality, complexity, and security. The results show that ChatGPT has an extremely broad range of success when it comes to producing functional code, with a success rate ranging from as poor as 0.66 percent to as good as 89 percent, depending on the difficulty of the task, the programming language, and a number of other factors.

While in some cases the AI generator can produce better code than humans, the analysis also reveals some security concerns with AI-generated code.

Yutian Tang is a lecturer at the University of Glasgow who was involved in the study. He notes that AI-based code generation could provide some advantages in terms of improving productivity and automating software development tasks, but it’s important to understand the strengths and limitations of these models.

“By conducting a comprehensive analysis, we can uncover potential issues and limitations that arise in ChatGPT-based code generation… [and] improve generation techniques,” Tang explains.

To explore these limitations in more detail, his team sought to test GPT-3.5’s ability to address 728 coding problems from the LeetCode testing platform in five programming languages: C, C++, Java, JavaScript, and Python.

“A reasonable hypothesis for why ChatGPT can do better with algorithm problems before 2021 is that these problems are frequently seen in the training dataset.” —Yutian Tang, University of Glasgow

Overall, ChatGPT was fairly good at solving problems in the different coding languages, but especially when attempting to solve coding problems that existed on LeetCode before 2021. For instance, it was able to produce functional code for easy, medium, and hard problems with success rates of about 89, 71, and 40 percent, respectively.

“However, when it comes to the algorithm problems after 2021, ChatGPT’s ability to generate functionally correct code is affected. It sometimes fails to understand the meaning of questions, even for easy-level problems,” Tang notes.

For example, ChatGPT’s ability to produce functional code for “easy” coding problems dropped from 89 percent to 52 percent after 2021. And its ability to generate functional code for “hard” problems dropped from 40 percent to 0.66 percent after this time as well.

“A reasonable hypothesis for why ChatGPT can do better with algorithm problems before 2021 is that these problems are frequently seen in the training dataset,” Tang says.

Essentially, as coding evolves, ChatGPT has not yet been exposed to new problems and solutions. It lacks the critical thinking skills of a human and can only address problems it has previously encountered. This could explain why it is so much better at addressing older coding problems than newer ones.

“ChatGPT may generate incorrect code because it does not understand the meaning of algorithm problems.” —Yutian Tang, University of Glasgow

Interestingly, ChatGPT is able to generate code with smaller runtime and memory overheads than at least 50 percent of human solutions to the same LeetCode problems.

The researchers also explored ChatGPT’s ability to fix its own coding errors after receiving feedback from LeetCode. They randomly selected 50 coding scenarios where ChatGPT initially generated incorrect code, either because it didn’t understand the content or the problem at hand.

While ChatGPT was good at fixing compiling errors, it generally was not good at correcting its own mistakes.

“ChatGPT may generate incorrect code because it does not understand the meaning of algorithm problems, thus, this simple error feedback information is not enough,” Tang explains.

The researchers also found that ChatGPT-generated code did have a fair amount of vulnerabilities, such as a missing null test, but many of these were easily fixable. Their results also show that generated code in C was the most complex, followed by C++ and Python, which had a complexity similar to human-written code.
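To illustrate the kind of flaw the study describes, here is a hypothetical, LeetCode-style Python snippet (not taken from the paper) showing a missing None check, the Python analogue of a missing null test, and the one-line fix:

    # Hypothetical example (not from the study): return the value of the
    # middle node of a singly linked list, in the style of a LeetCode problem.

    class ListNode:
        def __init__(self, val=0, next=None):
            self.val = val
            self.next = next

    # Vulnerable version: raises an AttributeError when head is None,
    # because the code never checks for an empty list before dereferencing.
    def middle_value_unsafe(head):
        slow = fast = head
        while fast and fast.next:
            slow = slow.next
            fast = fast.next.next
        return slow.val  # fails when the list is empty

    # Easily fixed, as the researchers note, by adding the missing check.
    def middle_value_safe(head):
        if head is None:
            return None
        slow = fast = head
        while fast and fast.next:
            slow = slow.next
            fast = fast.next.next
        return slow.val

In a language like C, the same omission would be a missing null-pointer check, which can turn into a crash or an exploitable bug rather than a clean exception.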

Tang says that, based on these results, it’s important for developers using ChatGPT to provide additional information to help ChatGPT better understand problems or avoid vulnerabilities.

“For example, when encountering more complex programming problems, developers can provide relevant knowledge as much as possible, and tell ChatGPT in the prompt which potential vulnerabilities to be aware of,” Tang says.
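A minimal sketch of what that advice might look like in practice, assuming a developer assembles the prompt by hand (the wording and problem are illustrative, not taken from the study):

    # Illustrative only: building a prompt that supplies extra context and
    # names the vulnerabilities to watch for, along the lines Tang suggests.
    problem_statement = (
        "Given the head of a singly linked list, return its middle node."
    )
    prompt = (
        "Solve the following problem in C.\n"
        f"Problem: {problem_statement}\n"
        "Relevant knowledge: the list may be empty.\n"
        "Avoid these vulnerabilities: missing null checks before "
        "dereferencing pointers, and reading past the end of the list.\n"
    )
    print(prompt)  # the text the developer would paste into ChatGPT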
