AI Chatbots Are Invading Internet Search!

As OpenAI's ChatGPT both takes the lead and falters, it is time to ask whether AI chatbots for internet search are a feasible idea

ChatGPT can answer almost any question, a capability tech companies are embracing with such enthusiasm that Google's Bard landed in controversy in its very first demo. What if search engines were run by such AI chatbots? It is a scary proposition, but it is happening. ChatGPT, Bard, and Ernie are only the most frequently discussed names; in reality, many other lesser-known chatbots are being developed exclusively for search engine functions. Going by the reputation OpenAI's ChatGPT has gained, these bots have the potential to be versatile and fluent, generating context-specific responses, in contrast to the random but related results a conventional search engine expresses through an endless list of blue links. But the question remains whether they can be trusted, given that they only reproduce statistical patterns in text rather than checking facts. Given the hype they have gathered, wouldn't we be overrating their abilities and thereby trusting them too much?

What is wrong with LLM-based Search engines?

Trust and transparency are two major issues the new generation of AI search engines has to face. Conversations with LLM-based search engines feel intensely personal, which makes them sound attractive and reliable to users, who are inherently subjective, unlike the detached replies of conventional search engines, which leave a window open for doubt and further scrutiny. A study at the University of Florida in Gainesville found that when participants interacted with chatbots employed by Amazon and Best Buy, they considered the conversation more human and tended to trust the organization more. In one way, this is a positive sign for AI: the trust users place in the system can make search smoother. But there is a trade-off. The enhanced sense of trust undermines the objectivity expected of chatbots. Bard has shown that chatbots tend to make up stories when they do not know the answer, a red flag for search engine applications. One mistake by Bard cost Google around $100 million in market value. It is clear that early perception matters enormously and that a rigorous testing process is essential.

Can transparency fix the fault?

The problem of inaccuracy apparently arises from a lack of transparency. When a traditional search engine is asked a question, it provides citations and leaves it to the user to decide. Ironically, AI chatbots like ChatGPT do provide citations when asked for them, but those citations are often fabricated. This one behavior is enough to understand how risky it is to use AI chatbots for internet search. How AI search engines arrive at their answers is completely opaque, which makes them the least dependable. As chatbot-based search engines can blur the line between machines and humans, it is imperative that tech companies pause to think before unleashing them onto the market, particularly when users lack the tools or the awareness to understand how such systems can cause unintended damage.


Analytics Insight
www.analyticsinsight.net