Vitalik Buterin, the creator of Ethereum, once opined that 'unfriendly artificial intelligence' is the biggest risk to humanity; perhaps the programming community has a hint or more to take from what he said. Large language models, which have evolved into a substitute for the human craft of communication, are themselves by-products of reams of code written by programmers to make machine learning algorithms work. Now that coding models like Copilot and Codex have also come into existence, the question of whether LLMs will wipe out programming jobs keeps resurfacing. In an Evans Data Corp survey, around 29% of the programmers polled said the prospect of AI taking over their careers worried them. With concepts like metaprogramming, self-modifying code, and evolutionary algorithms fast becoming mainstays of AI programming and machine learning models, the feeling of being cursed by LLMs is understandable. Nevertheless, the question remains whether the fears are worth holding on to.
AI coding models learn by observing code from various sources across the internet, so they often know where the piece of code a programmer is looking for can be found. In a way, such a model acts like an index of the coding snippets strewn around the web, sparing programmers the drudgery of hopping between API docs, Stack Overflow examples, and the like; instead of writing everything from scratch, they can choose from suggested options. For this very reason, many programmers are taking to these models, which can automate much of the routine work. CodeQL, a code analysis engine developed by a team under Oege de Moor (and now part of GitHub), lets programmers query their code to find variants of a known vulnerability and eliminate whole classes of bugs. Even before LLMs rose to prominence, Microsoft had a working coding tool, DeepCoder, back in 2017. Its capabilities were limited: it was a research tool for program induction that learned to generalize strategies across problems, integrating neural network architectures with search-based techniques rather than replacing them. OpenAI's own AI coder, built on top of Codex, goes further, taking verbal commands in plain English so that even a non-coder can turn them into meaningful, workable programs.
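The "index of snippets" behaviour described above can be pictured with a toy retrieval sketch. This is purely a hypothetical illustration: the names `SNIPPET_CORPUS` and `suggest` are invented here, and real models like Codex generate code token by token rather than looking snippets up, but the programmer-facing effect is similar: describe a task, get candidates, pick one.

```python
# Toy sketch of the "snippet index" mental model (illustrative only).
# A hypothetical corpus maps task descriptions to code snippets; a query
# is ranked by keyword overlap with each description.

SNIPPET_CORPUS = {
    "reverse a string": "def reverse(s):\n    return s[::-1]",
    "read a json file": (
        "import json\n"
        "def load(path):\n"
        "    with open(path) as f:\n"
        "        return json.load(f)"
    ),
    "sort a list descending": "def sort_desc(xs):\n    return sorted(xs, reverse=True)",
}

def suggest(query, top_k=2):
    """Rank snippets by keyword overlap with the query; drop non-matches."""
    words = set(query.lower().split())
    scored = sorted(
        SNIPPET_CORPUS.items(),
        key=lambda item: len(words & set(item[0].split())),
        reverse=True,
    )
    return [code for desc, code in scored[:top_k]
            if words & set(desc.split())]
```

Asking `suggest("how do I reverse a string")` surfaces the string-reversal snippet first, which is roughly the experience Copilot-style tools deliver, minus the generative step.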
Despite being obedient tools, AI coders do have flaws. Language models are, in effect, content aggregators, so there is huge scope for errors creeping into programs in many different ways. A programmer would generally know the source of an error in personal creative work, being aware of their own style of programming. NYU research that analyzed code generated by Copilot found that roughly 40% of the time the code had security flaws. Brendan Dolan-Gavitt, an NYU professor who was part of the research, stated, "But the way Copilot was trained wasn't actually to write good code, but to just produce the kind of text that would follow a given prompt." While programmers wonder whether applications like Copilot and Codex can push them out of jobs, researchers argue that coders can rest assured as long as developers still need to vet or modify code suggestions, a task that itself requires skill and discretion. OpenAI's text-to-code converter is not without flaws either. It has reportedly been susceptible to churning out offensive and biased output without actually understanding the context of the code it generates, opening the way for vulnerabilities of which the coder is unaware. As OpenAI co-founder Greg Brockman said in a conversation with TechCrunch, "Programming is about having a vision and dividing it into chunks, and make code for those pieces"; a programmer's prerogative to put the logic together will not be taken away so soon. To improve matters, experts suggest a system of interaction in which human and machine exchange feedback. One good example is the TiCoder framework. It refines and formalizes user intent through a mechanism known as "test-driven user-intent formalization", generating code through iterative feedback to pin down the context.
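The feedback loop behind test-driven user-intent formalization can be sketched roughly as follows. This is a minimal illustration under simplified assumptions of our own (a handful of candidate functions and a yes/no oracle standing in for the user), not TiCoder's actual implementation: the assistant proposes concrete input/output pairs as questions, and candidate programs that contradict the user's answers are pruned until intent is pinned down.

```python
# Minimal sketch of a test-driven intent-formalization loop (illustrative
# only; the real TiCoder framework differs in detail). Three candidate
# implementations are consistent with a vague request like "fix the sign
# of x"; user answers to proposed tests disambiguate them.

def candidate_abs(x): return abs(x)   # candidate 1: absolute value
def candidate_neg(x): return -x       # candidate 2: negation
def candidate_id(x):  return x        # candidate 3: identity

def refine(candidates, probes, user_oracle):
    """Prune candidates using the user's yes/no answers to proposed tests."""
    for probe in probes:
        if len(candidates) <= 1:
            break  # intent is pinned down
        outputs = [f(probe) for f in candidates]
        if len(set(outputs)) == 1:
            continue  # uninformative test: all candidates agree
        # Ask the user whether each candidate's output is the intended one.
        candidates = [f for f in candidates if user_oracle(probe, f(probe))]
    return candidates

# Simulated user whose true (unstated) intent is absolute value:
oracle = lambda x, y: y == abs(x)
survivors = refine([candidate_abs, candidate_neg, candidate_id], [3, -3], oracle)
```

After two probes only `candidate_abs` survives: the probe `3` eliminates negation, and `-3` eliminates identity. The point of the design is that each question costs the user one yes/no judgment rather than a full code review.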