Chatbots Can’t Yet Replace Analysts
ChatGPT and other chatbots have raised concerns that they could replace humans. We assess their suitability for analyst work and explain where they fall short.
Chatbots have created a stir over whether AI can replace white-collar workers such as doctors, lawyers, and industry analysts. They’ve captivated users with their ability to carry on a conversation, write code, compose essays, and perform other humanlike functions.
Large language models (LLMs), built on the Transformer architecture, have burst into the spotlight because of their apparent ability to understand language. OpenAI’s ChatGPT, an LLM-based chatbot, reached one million users in just five days, setting an industry record. Some observers regard LLMs as approaching sentience; others dismiss them as mere mathematical representations of their training data. To investigate the likelihood of LLMs replacing analysts, we asked chatbots a series of pertinent questions.
The technology’s fundamentals are sound, but the models learn primarily from the Internet, limiting the scope and quality of their “knowledge.” They often generate incorrect responses or hallucinate (i.e., produce responses that are grammatically correct but factually inaccurate or misleading), making them unlikely to replace analysts and similar professionals. But chatbots are promising for some tasks, such as text summarization, that could benefit these industries.
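To make the summarization task concrete, the sketch below shows a classical frequency-based extractive summarizer. It is an illustrative stand-in, not how an LLM summarizes: the function name, scoring scheme, and sentence-splitting heuristic are our own assumptions for this example.

```python
# Minimal sketch of extractive summarization, the kind of task the article
# says chatbots show promise for. Uses only the Python standard library;
# an LLM would instead generate an abstractive summary.
import re
from collections import Counter

def summarize(text: str, n_sentences: int = 1) -> str:
    """Return the n highest-scoring sentences, in their original order."""
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Word frequencies over the whole document.
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    # Score each sentence by the summed document frequency of its words.
    scored = [
        (sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())), i, s)
        for i, s in enumerate(sentences)
    ]
    top = sorted(scored, reverse=True)[:n_sentences]
    # Re-sort the selected sentences back into document order.
    return " ".join(s for _, _, s in sorted(top, key=lambda t: t[1]))
```

Unlike an LLM, this approach can only copy sentences verbatim, so it cannot hallucinate; the trade-off is that it also cannot paraphrase or condense across sentences.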
Subscribers can view the full article in the TechInsights Platform.