
Famed Formula One driver Mario Andretti once said, "If everything seems under control, you're just not going fast enough." In February, the LLM-powered chatbot race hit breakneck speeds and, for a moment there, seemed to careen off the track.

Where to even begin? On February 6th, Google announced its new chatbot, Bard. Much like how the reigning chat champ, OpenAI's ChatGPT, is backed by a flavor of GPT-3.5, Bard is based on Google's LaMDA model. Some said the Google announcement felt "rushed," maybe for good reason: Just one day later—February 7th—rival Microsoft unveiled new versions of its Bing search engine and Edge browser, now bundled with Bing Chat, which is based on a customized GPT model from OpenAI.

What's the big deal? Well, unlike ChatGPT, whose corpus of training data ends in 2021, Bard and Bing Chat are ostensibly plugged into the constantly updating search indices of their corporate parents. In theory, this positions these chatbots as conversational search assistants. In practice, it means that they'll also authoritatively parrot false information extracted from web search results.
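To make that concrete, here's a minimal sketch of the search-grounded prompting pattern these assistants appear to use. The function and variable names are illustrative, not actual Bard or Bing Chat internals: the point is that the model is asked to answer from whatever snippets land in its prompt, so a bad search result becomes a confidently stated answer.

```python
# A toy illustration of search-grounded prompting -- not the real Bard or Bing
# Chat pipeline. The chatbot's answer is only as trustworthy as the snippets
# that get stuffed into its prompt.

def build_prompt(question: str, snippets: list[str]) -> str:
    """Assemble a prompt that asks the model to answer from web results."""
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer the question using only the web results below.\n"
        f"Web results:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

# If the retrieved snippet is wrong, the model will happily repeat it.
snippets = ["The James Webb Space Telescope took the very first picture of an exoplanet."]
prompt = build_prompt("What firsts did the James Webb Space Telescope achieve?", snippets)
print(prompt)
# answer = some_llm.generate(prompt)  # hypothetical call to the underlying model
```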

You might recall the kerfuffle caused by Bard's mistaken assertion that the James Webb Space Telescope captured the first images of an exoplanet. Coincident with that error, Google's parent company Alphabet lost $100 billion in market capitalization. While some pundits blamed the falsehood for Alphabet's stock drop, it's also possible that investors got skittish, realizing that Bing's refresh could pose a threat to Google's dominance in search, and that chat-style interfaces threaten Google's advertising model. 

Although Bing Chat achieved infamy for a wholly different reason, it's also not immune to relaying incorrect information, or even just making stuff up. Then there's the whole situation with Bing Chat's alter ego, Sydney, which proved to be an enthralling if problematic conversational partner. 

Not to be outdone by tech stalwarts Microsoft and Alphabet, or newcomer OpenAI, Meta (née Facebook) introduced LLaMA to the world on February 24th. Weighing in at up to 65 billion parameters (and also available in 7B, 13B, and 33B sizes), LLaMA is smaller and more efficient than other large language models, and it outperforms GPT-3 and PaLM on certain tasks. Considering that the significant computing resources required to train and run a large language model are the technology's biggest barrier to entry, any model that delivers comparable performance in a much smaller envelope is a step in the right direction.
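For a rough sense of what a "smaller envelope" means in practice, here's some back-of-the-envelope arithmetic (an illustration, not a figure from the LLaMA paper): at 16-bit precision each parameter occupies two bytes, so the memory needed just to hold the weights scales directly with parameter count.

```python
# Approximate memory needed just to hold the weights at 16-bit precision.
# This ignores activations, the KV cache, and training-time optimizer state.
sizes_in_billions = {"LLaMA-7B": 7, "LLaMA-13B": 13, "LLaMA-33B": 33, "LLaMA-65B": 65}

for name, billions in sizes_in_billions.items():
    gigabytes = billions * 1e9 * 2 / 1e9  # parameters x 2 bytes, in GB
    print(f"{name}: ~{gigabytes:.0f} GB of weights at fp16")

# LLaMA-7B comes in around 14 GB, while LLaMA-65B needs roughly 130 GB --
# a big part of why smaller models lower the barrier to entry.
```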

We conclude this look back at a historic month in generative AI with a question: What are these LLMs good for, really? They're not great at telling the truth. They're pretty good at assisting with creative tasks, like writing and coding. But if there's one thing that Bing's shadow self, Sydney, showed, it's that LLMs' "killer app" lies in entertainment. Being chastised by a chatbot is remarkably fun, and sometimes scary, but ultimately pretty safe so long as you remember that you're just talking to a statistical model pulling the next words it says out of digital noise. Like a movie or video game, chat may be yet another medium for escape.
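If "pulling the next words out of digital noise" sounds abstract, the toy sketch below shows the general idea: an LLM repeatedly assigns probabilities to possible next tokens and samples one. The probabilities here are made up for illustration; a real model scores tens of thousands of tokens with a neural network before this final sampling step.

```python
import random

def sample_next_word(distribution: dict[str, float]) -> str:
    """Pick the next word at random, weighted by the model's probabilities."""
    words = list(distribution)
    weights = list(distribution.values())
    return random.choices(words, weights=weights, k=1)[0]

# Hypothetical probabilities a chat model might assign after the prompt "I am":
next_word_probs = {"Sydney": 0.4, "here": 0.3, "a": 0.2, "tired": 0.1}
print("I am", sample_next_word(next_word_probs))
```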
