Software engineering is on the cusp of a transformative shift with the release of OpenAI's GPT-4 and other newly advanced large language models (LLMs). Artificial intelligence (AI) has recently surged in accessibility, capability, and availability, accelerating conversations about the technology's potential harms and pitfalls along the way. As the software engineering world transforms, there will be positives and negatives to weigh before we hand the keys over to AI.
While ChatGPT (built on GPT-3.5) has captured attention and interest across the internet, GPT-4 is OpenAI's latest and most advanced model, boasting superior capabilities such as improved reliability and creativity, as well as a better grasp of nuanced language. It has already demonstrated some of these capabilities, such as creating entire websites or fully functioning applications from simple instructions. It won't replace engineers outright, but it will enable them to work more efficiently, raising both productivity and expectations. However, as AI takes over more of the work of writing basic code, demand for entry-level engineers will likely decrease.
GPT-4 and other language models still pose challenges that must be addressed, including ethical considerations. Although GPT-4 is designed to reduce bias, models trained on biased datasets risk perpetuating those biases in the resulting code or products. Competition will also likely be unbalanced: larger companies, such as Microsoft, get access to new tools sooner than small businesses, and OpenAI closely guards its technology's inner workings.