Imagine you are playing a game on your computer when another character in the world starts carrying out your instructions. It is not a human player but an AI agent that understands what you type. That was the scene at the Microsoft Build developer conference, where the company's chief technology officer, Kevin Scott, demonstrated an AI helper for the game Minecraft. The non-player character within the game is powered by the same machine learning technology Microsoft has been testing for auto-generating software code. The feat hints at how recent advances in AI could change personal computing in the years to come, replacing interfaces that you tap, type, and click with interfaces that you simply have a conversation with.
The Minecraft agent responds appropriately to typed commands by converting them into working code behind the scenes using the software API for the game. The AI model that controls the bot was trained on vast amounts of code and natural language text, then shown the API specifications for Minecraft, along with a few usage examples. When a player tells it to "come here," for instance, the underlying AI model will generate the code needed to have the agent move toward the player. In the demo shown at Build, the bot was also able to perform more complex tasks, like retrieving items and combining them to make something new. And because the model was trained on natural language as well as code, it can even respond to simple questions about how to build things.
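To make that loop concrete: the player's text goes to the model, the model emits code against the game's API, and the game executes it. The Python sketch below is a heavily simplified stand-in; the `Agent` class, `move_to` method, and `generate_code` function are all hypothetical, since Microsoft has not published the actual interface used in the demo.

```python
# A minimal sketch of the command-to-code loop, assuming a hypothetical
# game API. None of these names come from Microsoft's demo.

class Agent:
    """Hypothetical in-game agent exposing a tiny movement API."""
    def __init__(self, x=0, z=0):
        self.x, self.z = x, z

    def move_to(self, x, z):
        print(f"Agent walking to ({x}, {z})")
        self.x, self.z = x, z

def generate_code(command: str, player_pos: tuple) -> str:
    """Stand-in for the language model: maps a typed command to API calls.
    A real system would prompt a code-generation model with the API spec
    and a few usage examples, as the article describes."""
    if command.strip().lower() == "come here":
        return f"agent.move_to({player_pos[0]}, {player_pos[1]})"
    raise ValueError(f"No code generated for: {command!r}")

agent = Agent()
code = generate_code("come here", player_pos=(12, -7))
exec(code)  # the demo executes the generated code against the game API
```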
Microsoft has built an AI coding tool called GitHub Copilot on top of the same technology. It automatically suggests code as a developer types, or in response to comments added to a piece of code. According to the company, Copilot is the first of what will likely be a slew of "AI-first" products in the coming years, from Microsoft and others. Code-writing AI changes how software development feels: rather than spelling out every step, the developer expresses an intention for what they want to accomplish and lets the model propose the implementation.
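In practice, the interaction looks something like the following: the developer writes only a comment, and the assistant proposes a function beneath it. This Python example illustrates the workflow; it is not captured Copilot output.

```python
# Developer types only the comment line; the assistant suggests the body.
# Illustrative example of the workflow, not actual Copilot output.

# return the n-th Fibonacci number
def fibonacci(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci(10))  # 55
```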
Academics have put GitHub's Copilot to the test on the security front and found that roughly 40 percent of the time, code generated by the programming assistant is, at best, buggy and, at worst, potentially vulnerable to attack. Copilot arrived with several caveats: a tendency to generate incorrect code, a proclivity for exposing secrets, and trouble judging software licenses. But the AI programming helper, built on OpenAI's Codex neural network, has another shortcoming as well: just like humans, it can produce flimsy code. That is perhaps unsurprising, given that Copilot was trained on source code from GitHub and ingested the bugs therein.
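One classic example of the kind of weakness at issue is SQL injection. The Python sketch below illustrates that class of bug and its fix; it is an illustration of the vulnerability pattern, not a captured Copilot suggestion. The unsafe version interpolates user input directly into a query, while the safe version uses parameter binding.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Insecure pattern: interpolating user input into SQL (SQL injection, CWE-89)
def find_user_unsafe(name: str):
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

# Safer pattern: a parameterized query binds the input as data, not SQL
def find_user_safe(name: str):
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # injection returns every row
print(find_user_safe("' OR '1'='1"))    # parameter binding returns nothing
```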
Copilot is currently available in private beta as an extension to Microsoft's Visual Studio Code. It lets developers describe functionality in a comment line and then tries to generate code that fulfills the description. It can also predict what the developer is about to write next from variable and function names and other hints. In other words, it is a step beyond autocompletion and still some ways from automated programming; it is closer to interpretation. Rather than finishing a partially typed line from a narrow set of possibilities, it tries to conjure whole blocks of code that function as described, using an AI model related to OpenAI's GPT-3 and trained on source code culled from GitHub's millions of public repositories.
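Name-based prediction can carry a surprising amount of intent on its own. In the hypothetical Python example below, the function signature alone is enough that a model could plausibly fill in the body shown.

```python
# Given only the signature, a code model can often infer the intended body
# from the names alone. The body here is what a model might plausibly
# suggest; it is illustrative, not real Copilot output.

def celsius_to_fahrenheit(celsius: float) -> float:
    return celsius * 9 / 5 + 32

assert celsius_to_fahrenheit(100) == 212.0
assert celsius_to_fahrenheit(0) == 32.0
```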