Software developers are widely adopting artificial intelligence tools while reporting a fundamental lack of trust in the generated output [1].
This disconnect suggests that while AI can accelerate coding, it has not yet reached a level of reliability that lets professionals use it without rigorous oversight, highlighting a tension between efficiency and accuracy in the tech industry.
According to a 2025 Google Cloud study, coders have integrated AI into their workflows but remain skeptical of the results [1]. Some developers likened trusting AI-generated code to believing a politician's promises [1]. This perceived unreliability stems from the tendency of these tools to produce errors or hallucinations that require manual correction.
Despite these concerns, openness to AI varies by sector: 80% of car shoppers, for example, said they are willing to use AI in their purchasing process [2]. The contrast suggests that professional users with high-stakes requirements, such as software stability, hold AI to a higher standard of accuracy than general consumers do.
Developers continue to use the tools because they offer speed and a starting point for complex tasks. However, the necessity of verifying every line of code means the technology serves as an assistant rather than a replacement for human expertise [1].
The gap between usage and trust indicates that AI in software development is currently a tool for productivity rather than a reliable source of truth. While consumers in other industries may be more accepting of AI, the high cost of failure in coding ensures that human verification remains the primary safeguard in the development lifecycle.