One of ChatGPT’s biggest claims to fame is the chatbot’s ability to code. Shortly after its release, people noticed that ChatGPT could handle advanced coding tasks such as debugging code.
As a result, developers, whose work relies heavily on coding, have embraced the technology.
Also: Who owns the code? If ChatGPT’s AI Helps You Write Your App, Is It Still Yours?
Stack Overflow’s 2023 Developer Survey polled over 90,000 developers to gather industry insights, including developer sentiment toward AI.
Of the more than 90,000 respondents, 70% use AI tools in their development process or plan to use them this year. Only 29.4% said that they do not use AI tools and have no plans to.
A previous survey question also revealed that developers who are learning to code are more likely to use AI tools than professional developers (82% versus 70%). This suggests that the value of these tools extends beyond learning to code, since professional developers are using them too.
Also: How to use ChatGPT to write code
In terms of personal feelings toward AI, 77% of all respondents said they have a favorable or very favorable stance on using AI tools as part of their development workflow.
Developers said the top use cases for AI tools in their workflow were writing code (83%), debugging and getting help (49%), documenting code (35%), learning about a codebase (30%), and testing code (24%).
Despite the positive sentiment and widespread implementation, many developers show hesitation about the accuracy of these AI tools.
Only 42% of poll respondents trust the accuracy of the tools’ output, while 31% are on the fence and 27% are either somewhat or highly distrustful.
Also: What is your liability risk if you use AI-generated code?
This distrust is likely rooted in the hallucinations that AI models are prone to: false outputs or misinformation that the models can generate at times.
The consequences of these hallucinations can be as minor as getting a wrong answer, or serious enough to prompt lawsuits against OpenAI. To get the most out of AI assistance while maintaining accuracy, it is best to pair AI output with human review.
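One lightweight way to keep a human in the loop is to spot-check any AI-suggested code against known answers before trusting it. A minimal sketch in Python (the `median` helper and its test values are hypothetical, not from the survey or any specific AI tool):

```python
# Hypothetical AI-suggested helper: intended to return the median of a list.
def median(values):
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        # Odd-length list: the middle element is the median.
        return ordered[mid]
    # Even-length list: average the two middle elements.
    # An off-by-one slip here is exactly the kind of subtle bug
    # a hallucinating model can introduce and a reviewer can miss.
    return (ordered[mid - 1] + ordered[mid]) / 2

# Human verification: check the output against answers we already know.
assert median([3, 1, 2]) == 2
assert median([1, 2, 3, 4]) == 2.5
```

A few quick assertions like these will not catch every hallucinated bug, but they turn "the model said so" into a claim a human has actually checked.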