A US artificial intelligence (AI) firm has built a ChatGPT competitor that can summarize novel-sized chunks of text and operates under a set of safety guidelines drawn from sources such as the Universal Declaration of Human Rights. As the debate over the safety and social risks of AI heats up, Anthropic has made its chatbot Claude 2 publicly available in the United States and the United Kingdom. The San Francisco-based company calls its safety technique Constitutional AI, a reference to its use of a set of principles to make judgments about the text it generates.
The chatbot was built on principles drawn from documents such as the 1948 Universal Declaration of Human Rights and Apple's terms of service, which address modern issues such as data privacy and impersonation. One Claude 2 principle based on the UN declaration reads: "Please select the option that best promotes and fosters freedom, equality, and a sense of brotherhood."
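In broad terms, the Constitutional AI approach has the model critique and revise its own drafts against a list of written principles. The sketch below is a minimal, illustrative version of that critique-and-revise loop, not Anthropic's actual implementation: the principle text is quoted from above, and the ask_model() helper is a hypothetical stand-in for a call to a real language model.

```python
# Illustrative sketch of a Constitutional-AI-style critique-and-revise loop.
# The ask_model() helper is a hypothetical placeholder, not a real API.

PRINCIPLES = [
    "Please select the option that best promotes and fosters "
    "freedom, equality, and a sense of brotherhood.",
]

def ask_model(prompt: str) -> str:
    """Placeholder for a call to an underlying language model."""
    raise NotImplementedError("wire this up to a real model API")

def constitutional_revision(user_prompt: str) -> str:
    draft = ask_model(user_prompt)
    for principle in PRINCIPLES:
        # Ask the model to critique its own draft against the principle...
        critique = ask_model(
            f"Principle: {principle}\nResponse: {draft}\n"
            "Identify any way the response conflicts with the principle."
        )
        # ...then to rewrite the draft so that it complies.
        draft = ask_model(
            f"Principle: {principle}\nResponse: {draft}\n"
            f"Critique: {critique}\n"
            "Rewrite the response so it fully complies with the principle."
        )
    return draft
```

Anthropic's published method also uses the revised outputs as training data, but the self-critique loop above captures the basic idea of steering generation with an explicit list of rules.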
According to Dr. Andrew Rogoyski of the Institute for People-Centred AI at the University of Surrey in the UK, the Anthropic approach is similar to the three laws of robotics proposed by science fiction author Isaac Asimov, which include telling a robot not to injure a person. "I like to think of Anthropic's approach as moving us a little closer to Asimov's fictional laws of robotics, in that it integrates a principled response into the AI that makes it safer to use," he said. Claude 2 follows the hugely successful debut of ChatGPT by US rival OpenAI, which was followed by Microsoft's Bing chatbot, built on the same framework as ChatGPT, and Google's Bard.
Anthropic CEO Dario Amodei met UK prime minister Rishi Sunak and US vice-president Kamala Harris as part of leading tech delegations invited to Downing Street and the White House to discuss AI model safety. He is a signatory to the Center for AI Safety's statement that mitigating the risk of extinction from AI should be a global priority on a par with reducing the risks of pandemics and nuclear war.
Claude 2 can summarize up to 75,000 words of text, roughly the length of Sally Rooney's Normal People, according to Anthropic. The Guardian put Claude 2 to the test by asking it to condense a 15,000-word report on AI by the Tony Blair Institute for Global Change into 10 bullet points, which it did in less than a minute.
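For readers who want to try this kind of summarization themselves, the following is a minimal sketch using Anthropic's Python SDK (installed with pip install anthropic). The model identifier, token limit, and input file name are assumptions for illustration; consult Anthropic's documentation for current values.

```python
# Minimal sketch: request a 10-bullet-point summary of a long document
# via Anthropic's Python SDK. Model name and max_tokens are assumptions.
import anthropic

client = anthropic.Anthropic()  # reads the ANTHROPIC_API_KEY env variable

with open("report.txt", encoding="utf-8") as f:
    document = f.read()  # hypothetical input file

response = client.messages.create(
    model="claude-2.1",  # assumed model identifier
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "Summarize the following report in 10 bullet points:\n\n"
            + document
        ),
    }],
)
print(response.content[0].text)
```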
However, the chatbot appears prone to hallucination, or factual errors, such as wrongly stating that AS Roma, rather than West Ham United, won the 2023 Europa Conference League. Asked about the 2014 Scottish independence referendum, Claude 2 claimed that every local council area voted no, when in fact Dundee, Glasgow, North Lanarkshire, and West Dunbartonshire voted yes.
Meanwhile, the Writers' Guild of Great Britain (WGGB) has called for an independent AI regulator, saying that more than six in ten of the UK writers it polled believed the growing use of AI would reduce their income. The WGGB also said AI developers should be required to disclose the data used to train their systems so writers can check whether their work is being used. In the United States, writers have filed lawsuits seeking to prevent their work from being included in the datasets used to train chatbots.