Humans have long feared that machines will advance to the point that they outsmart us and take over the world. It’s been a common theme in sci-fi movies (e.g., The Matrix franchise) for years. However, concerns about AI getting the best of us have migrated from fiction to news headlines as publications pose questions such as, “Can AI Replace Humans?”
On the surface, it might seem like generative AI solutions like OpenAI’s ChatGPT could easily do many jobs currently held by humans. Many business leaders are understandably excited about AI’s potential to increase productivity and automate mundane tasks. However, when implementing AI, you must remember that human intelligence (HI) is still critical.
Despite its impressive ability to spit out content, artificial intelligence still has flaws and shortcomings. The following is not an exhaustive list, but it covers some of the top reasons AI solutions require human oversight.
1. AI tends to “hallucinate.”
It’s widely known at this point that generative AI has no qualms about fabricating information when it doesn’t know the factual answer to a query. In the blog post introducing ChatGPT, OpenAI warns about this issue, which AI experts call “hallucinating.”
Daniela Amodei – co-founder and president of the AI safety and research company Anthropic – recently told the Associated Press, “I don’t think that there’s any model today that doesn’t suffer from some hallucination.”
That means relying on AI-generated content without running it by human editors and fact-checkers could land you in hot water. For example, a couple of lawyers found themselves in trouble earlier this year after they submitted a court filing created by ChatGPT that cited fictitious past court cases, as reported by the Associated Press.
Even if you save time by asking generative AI to churn out content, make sure a human checks it before publication or submission. Otherwise, your company could end up looking foolish at best and suffering legal consequences at worst.
2. AI still needs ideas from humans.
Large language models and other generative AI solutions may seem to write text and produce images independently. However, these programs gain their capabilities by training on large datasets of human-created content. For example, the OpenAI article “How ChatGPT and Our Language Models Are Developed” states that the company develops its large language models using publicly available information online, information licensed from third-party sources, and information provided by human trainers. Ultimately, generative AI requires well-thought-out prompts, feedback, and guidance from humans to create high-quality content.
3. Machines aren’t necessarily impartial.
Another reason human stakeholders must review AI-generated work is the potential for bias. Programs trained on flawed datasets could end up unfairly disqualifying loan or rental applications, for example, according to the National Institute of Standards and Technology (NIST). NIST researchers have recommended a “socio-technical” strategy to combat AI bias, as a solely technical approach will likely miss the broader societal context.
Ultimately, AI is evolving at an incredibly fast pace, and these limitations might disappear in the future. For now, though, machines aren’t capable of fully replacing us. While artificial intelligence can transform your business, you still need human intelligence to succeed.
If you want expert guidance as you craft your AI (and HI) strategy, our technology consultants would be happy to help. Drawing on over 20 years of IT industry experience, we can rapidly pinpoint the best course of action for your organization and leverage our network of best-in-class suppliers and detailed comparison matrices to find solutions that align with your unique needs.
Advance confidently today by calling 877-599-3999 or emailing firstname.lastname@example.org to schedule a consultation.