AI Chatbots and Pandemics: Alleged Threat Examined


AI is transforming the world and promising quantum leaps for humanity, but those leaps carry risks.

Google makes learning how to carry out a terrorist attack reasonably difficult. The first few pages of results for a search on how to build a bomb, commit murder, or unleash a biological or chemical weapon won't teach you how to do it. That is not because the information is impossible to find online: individuals have built functional explosives from publicly available sources, and for similar reasons scientists have cautioned one another against publishing the blueprints for dangerous viruses. But while the material undoubtedly exists on the internet, learning how to kill a large number of people is not easy, owing to a coordinated effort by Google and other search engines.

How many lives does this save? That is a difficult question to answer. We can hardly run a controlled experiment in which instructions for committing mass atrocities are easy to look up at some times and hard at others. But rapid advances in large language models (LLMs) suggest we may be running that experiment uncontrolled.

Obscurity provides security: When first released, AI systems like ChatGPT were often willing to provide complete, precise instructions for carrying out biological weapons attacks or building a bomb. OpenAI has largely remedied this over time. But a class exercise at MIT, documented in a preprint paper earlier this month and covered last week in Science, found it straightforward for groups of students with no relevant biology background to obtain specific proposals for biological warfare from generative AI systems.

Managing information in an AI world: We need improved controls at all of the chokepoints, the Nuclear Threat Initiative's Jaime Yassif told Science. It should be harder to coax explicit directions for building bioweapons out of AI systems. Yet many of the security gaps the AI systems inadvertently highlighted remain open, such as the fact that users can turn to DNA synthesis companies that do not screen orders and are therefore more likely to fulfill a request to synthesize a deadly virus.
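
To make that chokepoint concrete, here is a minimal sketch of what an order-screening step could look like in principle: compare each incoming synthesis order against a list of sequences of concern. Everything here is illustrative; the sequences, the window size, and the flag_order helper are hypothetical, and real screening pipelines match orders against curated regulated-pathogen databases using alignment tools, not exact substring checks.

```python
# Minimal sketch of DNA synthesis order screening (illustrative only).
# Assumes a hypothetical local list of "sequences of concern"; real
# screening compares orders against curated pathogen databases with
# alignment tools rather than exact k-mer matches.

# Hypothetical stand-ins; a real database holds regulated-pathogen genomes.
SEQUENCES_OF_CONCERN = [
    "ATGGCGTTTACCGGAGTTTGTCAG",
    "TTGACCCGGAAATCGGCATACGTA",
]

K = 12  # matching window size, chosen arbitrarily for this sketch


def kmers(seq: str, k: int) -> set[str]:
    """Return the set of all length-k substrings of a DNA sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}


def flag_order(order_seq: str) -> bool:
    """Flag an order if it shares any k-mer with a sequence of concern."""
    order_kmers = kmers(order_seq, K)
    return any(order_kmers & kmers(bad, K) for bad in SEQUENCES_OF_CONCERN)


if __name__ == "__main__":
    order = "CCCATGGCGTTTACCGGAGTTTGTCAGAAA"  # embeds a flagged fragment
    print("flag for human review" if flag_order(order) else "clear")
```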

The good news is that major biotech players are beginning to take this problem seriously. Ginkgo Bioworks, a leading synthetic biology firm, has collaborated with US intelligence agencies to create tools that can identify manufactured DNA at scale, allowing investigators to fingerprint an artificially engineered germ. That collaboration exemplifies how cutting-edge technology can safeguard the world from the harmful impacts of… cutting-edge technology.
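
Ginkgo's actual methods are not public in detail, but a toy version of the fingerprinting idea is easy to state: scan a genome for sequence motifs that tend to appear in engineered constructs. The motifs below are hypothetical placeholders, and real detection systems learn from many genomic features rather than fixed substrings.

```python
# Toy illustration of DNA "fingerprinting" (not Ginkgo's actual method).
# Scans a genome for hypothetical engineering-signature motifs; real
# engineered-organism detection uses machine-learned models over many
# genomic features, not a fixed motif list.

SIGNATURE_MOTIFS = {
    "lab_vector_scar": "GGATCCGAATTC",            # hypothetical restriction-site scar
    "his_tag_coding": "CATCATCATCATCATCAT",       # codes for a 6xHis protein tag
}


def fingerprint(genome: str) -> list[str]:
    """Return the names of all signature motifs found in a genome."""
    genome = genome.upper()
    return [name for name, motif in SIGNATURE_MOTIFS.items() if motif in genome]


# Example: a sequence containing a His-tag coding region gets flagged.
print(fingerprint("AAACATCATCATCATCATCATGGG"))  # -> ['his_tag_coding']
```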

We could require all DNA synthesis companies to screen every order, in every circumstance. We should also exclude publications about dangerous pathogens from the training data of powerful AI systems, as MIT biosecurity researcher Kevin Esvelt suggests. And we could be more cautious in the future about publishing studies that provide precise instructions for creating lethal viruses.
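
Esvelt's training-data suggestion can also be sketched in code. The filter below drops documents matching a hypothetical blocklist of phrases; BLOCKLIST, is_safe, and filter_corpus are all invented for illustration, and a production pipeline would rely on trained classifiers and expert-curated criteria rather than keyword matching.

```python
# Illustrative sketch of filtering a training corpus (not a real pipeline).
# Assumes a hypothetical phrase blocklist; a production filter would use
# trained classifiers and expert-curated criteria instead of keywords.

BLOCKLIST = {
    "reverse genetics protocol",    # hypothetical flagged phrase
    "virulence factor synthesis",   # hypothetical flagged phrase
}


def is_safe(document: str) -> bool:
    """Return False if the document mentions any blocklisted phrase."""
    text = document.lower()
    return not any(term in text for term in BLOCKLIST)


def filter_corpus(documents: list[str]) -> list[str]:
    """Keep only documents that pass the safety filter."""
    return [doc for doc in documents if is_safe(doc)]


if __name__ == "__main__":
    corpus = ["A review of crop genomics.", "A reverse genetics protocol for..."]
    print(filter_corpus(corpus))  # -> ['A review of crop genomics.']
```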
