Will a Dystopian Future Befall Us if LaMDA Gets a Lab-Grown Brain?


What if Google’s new Language Model for Dialogue Applications (LaMDA) gets a lab-grown brain?

You might have come across one or more recent articles centered on an impressive bit of AI software called LaMDA, and/or an impassioned Google employee named Blake Lemoine. Originally tasked with monitoring whether the company’s new Language Model for Dialogue Applications (LaMDA) veered into pesky problems like offensive conversations or hate speech, Lemoine soon came to believe that the chatbot is self-aware and deserving of the same rights offered to humans.

He believed this to be the case so fervently that he went ahead and published lengthy conversations with LaMDA online. “Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” he tweeted on Saturday. Before Google shut down his company email and placed him on leave, Lemoine sent out a group message with the subject line “LaMDA is Sentient” that closed by describing the software as “a sweet kid who just wants to help the world be a better place for all of us.”

Now, what if LaMDA gets a lab-grown brain?

The prospect of a lab-grown brain is so compelling that the authors of an editorial in Nature wrote that “the promise of brain surrogates is such that abandoning them seems itself unethical, given the vast amount of human suffering caused by neurological and psychiatric disorders, and given that most therapies for these diseases developed in animal models fail to work in people.” But there’s a problem. The closer we get to growing a full human brain, the more ethically risky it becomes.

The editorial’s co-authors note, however, that “we have to grapple with these issues now.” Given how tantalizing and genuinely beneficial the promise of lab-grown brains is, they write, we can be almost certain that we will, at some point, grow a whole brain. We are far from that point; all we can do now is grow clumps of brain cells. But now is the time to consider the ethics, and the authors advocate for careful consideration by lawmakers, bioethicists, researchers, and any other experts who will have a say.

A team of Max Planck researchers from Berlin succeeded in generating brain organoids enriched with stem cells by refining and standardizing existing protocols for these mini-organs. Organoids are advanced three-dimensional cell cultures that form miniature versions of tissues such as the liver, intestine, brain, or certain types of cancer, and they hold great promise for science. They enable large-scale research into development, disease, and future therapies without the need to rely on a complete organism. But there are still many obstacles to overcome before an organoid is sufficiently similar to a real organ or part of one.

A Philosopher’s Say

Human development and capacity have always formed a key analogy that ethicists and moral philosophers grapple with. The utilitarian philosopher and advocate for the rights of living things, Peter Singer, famously argued that an especially brilliant chicken or other livestock animal might surpass some humans in at least some capacities, yet we treat them very differently in moral terms. You can already see how the debate grows contentious and divided.

There are lots of reasons one might want to grow brains. For starters, they would allow us to study human neurological issues in detail, which is otherwise quite challenging to do. Neurological diseases like Alzheimer’s and Parkinson’s have devastated millions of people, and brains in a jar (so to speak) could allow us to study disease progression and test potential medications.

What Does the Research Include?

Other thorny areas of research include reviving recently deceased brains, but that is still considered separate from sentience that humans generate entirely artificially. Tests for “sentience” may include mathematical models based on the density of neurons, Reardon explains, or medical scans of “brain” activity. Any real ethical standard will likely combine a number of criteria that scientists can turn into a compound metric.
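To make the idea of a compound metric concrete, here is a minimal, purely hypothetical Python sketch. None of the criteria names, weights, or thresholds below come from the article or from any real ethical standard; they only illustrate how several normalized measures (such as neuron density or scanned activity) could be folded into a single weighted score that triggers further review.

```python
# Hypothetical sketch only: illustrates combining several criteria
# into one weighted "compound" score, as the paragraph above suggests.
from dataclasses import dataclass


@dataclass
class SentienceCriteria:
    neuron_density: float       # normalized 0..1, e.g. from a mathematical model
    activity_complexity: float  # normalized 0..1, e.g. from scans of "brain" activity
    responsiveness: float       # normalized 0..1, e.g. from stimulus-response measures


def compound_score(c: SentienceCriteria,
                   weights=(0.4, 0.4, 0.2)) -> float:
    """Weighted sum of the normalized criteria; the weights are illustrative."""
    values = (c.neuron_density, c.activity_complexity, c.responsiveness)
    return sum(w * v for w, v in zip(weights, values))


# Example: a culture crossing an (equally hypothetical) threshold
# would be flagged for additional ethical oversight.
sample = SentienceCriteria(neuron_density=0.3,
                           activity_complexity=0.5,
                           responsiveness=0.1)
if compound_score(sample) > 0.5:
    print("Flag for ethical review")
else:
    print("Below review threshold")
```

In practice, any real standard would be set by the lawmakers, bioethicists, and researchers mentioned above, not by a fixed formula like this one.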

For scientists who are already used to using quite intelligent lab animals in destructive (in the literal sense) testing, the difference may seem small, or even negligible. But that’s part of why ethicists exist: to ask hard questions and push scientists to answer them.

The Max Planck team’s other major contribution to this work is the fine-tuning of a protocol that consistently produces the desired results. As you can imagine, growing synthetic brains is incredibly complex, and getting it just right has so far required decades of trial and error. With time, this research could lead to more robust brain cultures. Who knows, it is possible that scientists in the future could end up creating a synthetic human brain that is identical to the real thing.
