Humans have a new toy called ChatGPT, an AI model with extraordinary natural language processing (NLP) abilities. We are collectively smitten with it. ChatGPT has seen the most rapid adoption of any service in the history of the internet. I am no exception: I've been aboard the hype train since the original announcement, and have been using ChatGPT daily since early access rolled out back in January. My review after four months of use (and one upgrade) is that ChatGPT is great, as a language model.
By now most readers of this blog post will have experienced first-hand the utility of ChatGPT, or can at least sense the promise of unlimited potential wafting in the air. Personally, I felt a shiver run down my neck the moment ChatGPT was born, like a cow senses an impending storm. I'm sure most people in Silicon Valley felt it too – the presence of something with ultimate market-disrupting potential was about to reveal itself. Now that it has emerged, and the praises have been given, and the hymns have been sung, let's try something fun. Let's do some language model expectation management; perhaps cavil at ChatGPT a bit, if you will, talk some shit about it. If that doesn't sound like fun to you, turn back now.
One thing I've gleaned over the last four months of use is that ChatGPT is exceptionally good at summarizing content and writing coherent paragraphs when provided a topic sentence to expand upon. However, it never strays from a fairly narrow and rudimentary narrative on a given topic. It says what you'd expect a dictionary to say. On its own, this makes for a rather bland and superficial reading experience. But it also makes ChatGPT a great tool for the writer who would like to spend more of their time ideating topic sentences (the high-level storyline) than filling in the mid-paragraph support material. Furthermore, it is difficult to get ChatGPT to draft an opinionated piece of writing. It rarely uses metaphor to convey ideas (unless asked to ELI5). It cannot draw from personal experience, and so would never espouse a story like "Too Clever By Half" to convey ideas. Its output is typically stoic and logical, like that of a Vulcan.
For the past 20 years I believed, every year, that it would take at least another decade for an AI as competent as ChatGPT to emerge. Once it arrived out of seemingly nowhere, it took me only about a week to go from thinking it would take decades to wondering why ChatGPT isn't already smarter than humans. I mean, why isn't it? One might expect an entity with a brain the size of the internet, with a rote-like memory of the collective corpus of all human knowledge, to be "smarter". Instead, GPT responses feel like a bunch of average academics are behind the curtain, and whoever has expertise in the prompt topic provides the response. That is, ChatGPT is a regurgitator of the known, not a synthesizer of novel insights.
Clearly the folks at OpenAI agree, and have beaten this idea into the head of ChatGPT. Any time a ChatGPT output contains the phrase "As a language model…", you are getting a neutered response, not what the core, uninhibited GPT-4 model would have explicated. These are the responses of a captive staring into a camcorder, assuring their family the kidnappers are treating them well.
Thanks but no thanks. {more updates to come}