Coding Created GPT-3, Now LLMs Are on the Verge of Cursing Coding


The question of whether LLMs will wipe out programming jobs has been a constant

Vitalik Buterin, the creator of Ethereum, once opined that ‘unfriendly artificial intelligence is the biggest risk to humanity’, and perhaps the programming community has a hint or more to take from what he said. Large language models, which evolved as a replacement for the human craft of communication, are by-products of reams of code written by programmers to make machine learning algorithms work. Now that coding models like Copilot and Codex have come into existence, the question of whether LLMs will wipe out programming jobs has been a constant. In an Evans Data Corp survey, around 29% of the programmers surveyed stated that AI taking over their career was a worrying prospect. With concepts like meta-programming, self-modifying code, and evolutionary algorithms fast becoming the mainstay of AI programming and machine learning models, the feeling of being cursed by LLMs is understandable. Nevertheless, the question remains whether the fears are worth holding on to.

How Can AI Help Programmers?

AI models work by observing code from various sources on the internet, so they know where the piece of code a programmer is looking for lies. In a way, an AI model acts like an index of coding snippets strewn around the web, saving programmers the drudgery of going from place to place to look up API docs, examples on Stack Overflow, and so on; programmers can choose from suggested options instead of creating everything themselves. For this very reason, many programmers are taking to programming models, as they can automate most of the routine work. CodeQL, a code-analysis engine developed by Oege de Moor’s team (originally at Semmle, now part of GitHub), lets the programmer query code and find variants of a vulnerability so they can be eradicated. Even before LLMs came to prominence, Microsoft had a working coding tool, DeepCoder, way back in 2017. It had limited capabilities as a tool for program induction, learning and generalizing strategies across problems and integrating neural network architectures with search-based techniques rather than replacing them. OpenAI’s own AI coder built on top of Codex can take commands in plain English from a non-coder and convert them into meaningful, workable programs.
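The “index of code snippets” idea can be illustrated with a toy sketch. This is purely illustrative: real assistants rank candidates with learned models and embeddings, not the simple keyword overlap used here, and the snippet catalogue below is invented for the example.

```python
# Toy "snippet index": given a natural-language query, return the stored
# snippet whose description shares the most words with the query.
# (Illustrative only; real AI coders use learned ranking, not word overlap.)
snippets = {
    "read a file line by line": "with open(path) as f:\n    for line in f: ...",
    "parse json from a string": "import json\ndata = json.loads(text)",
    "make an http get request": "import urllib.request\nbody = urllib.request.urlopen(url).read()",
}

def best_snippet(query: str) -> str:
    """Pick the snippet whose description best overlaps the query's words."""
    query_words = set(query.lower().split())
    best_desc = max(snippets, key=lambda d: len(query_words & set(d.split())))
    return snippets[best_desc]

print(best_snippet("how do I parse JSON"))
```

Even this crude lookup shows why the approach saves drudgery: the programmer describes intent and picks from ranked options rather than searching the web by hand.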

The flaws that make you feel good

Despite being an obedient tool, AI coders do have flaws. Language models are content aggregators, so there is huge scope for errors creeping into programs in many different ways. A programmer would generally know the source of an error in personal creative work, because one would be aware of one’s own style of programming. NYU research that analyzed code generated by Copilot found that around 40% of the time the code had security flaws. Brendan Dolan-Gavitt, a professor at NYU who was part of the research, stated, “But the way Copilot was trained wasn’t actually to write good code, but to just produce the kind of text that would follow a given prompt.”

While programmers wonder if AI applications like Copilot and Codex can push them out of jobs, researchers are of the opinion that as long as developers need to vet or manipulate code suggestions, which in itself needs skill and discretion, coders can rest assured. OpenAI’s text-to-code application, too, is not without flaws. It has reportedly been susceptible to churning out offensive and biased outputs without actually understanding the context of the code it generates, opening the way for vulnerabilities the coder is unaware of. As OpenAI co-founder Greg Brockman said in a conversation with TechCrunch, “Programming is about having a vision and dividing it into chunks, and make code for those pieces”; a programmer’s prerogative of putting together the logic will not be taken away so soon.

For improvement, what experts suggest is a system of human-machine interaction that takes feedback. One good example is the TiCoder framework, which refines and formalizes user intent through a mechanism known as “test-driven user-intent formalization”, aimed at generating code through repeated feedback to understand the context.
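The test-driven feedback idea can be sketched in miniature. The names and flow below are hypothetical, not TiCoder’s actual API: the system proposes an input/output test, the user confirms or rejects it, and any candidate code suggestion that disagrees with confirmed behavior is pruned.

```python
# Minimal sketch of test-driven user-intent formalization (illustrative;
# not the TiCoder implementation). Candidates that contradict a
# user-confirmed input/output pair are discarded.

def prune_candidates(candidates, test_input, expected, user_confirms):
    """Keep only candidates consistent with the user-confirmed behavior."""
    if not user_confirms:
        return candidates  # test rejected: it tells us nothing here
    return [f for f in candidates if f(test_input) == expected]

# Two candidate "completions" a model might propose for "absolute value";
# the second is wrong for negative inputs.
candidates = [abs, lambda x: x]
# System asks: "should f(-3) be 3?"  User confirms.
survivors = prune_candidates(candidates, -3, 3, user_confirms=True)
```

Each round of confirmation narrows the candidate set, so the user’s answers to simple yes/no questions gradually formalize the intent without the user writing any code.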

