GPT-3/ChatGPT is about instructions, not about training, and why that matters.
GPT-3/ChatGPT is taking the media by storm, and for good reason. Its ease of use is remarkable; its results are astounding. And yet, many commentaries in the media focus on small mistakes, storylines that wander off, and illogical outcomes.
Still, one might say that even with the usual minimal instructions, GPT-3/ChatGPT delivers mind-boggling output. These minimal instructions forgo additional settings that GPT-3/ChatGPT's underlying language model offers. Normally, a user just gives instructions; but a user can also accompany the instructions with settings like "Temperature", "Top-P", "Frequency penalty", and "Presence penalty", to name a few. This means the user can tweak how out-of-the-box GPT-3/ChatGPT's output will be, or whether sentence components get repeated.
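To make this concrete, here is a minimal sketch of how such settings could be passed alongside an instruction, assuming the legacy OpenAI Completion API for GPT-3; the model name, prompt, and parameter values are illustrative, not recommendations.

```python
# Illustrative request parameters for a GPT-3 completion call.
# All values here are example choices, not recommended settings.
request = {
    "model": "text-davinci-003",  # a GPT-3 model name (assumption)
    "prompt": "Write three questionnaire questions about remote work.",
    "temperature": 0.9,        # higher -> more "out-of-the-box" output
    "top_p": 1.0,              # nucleus sampling: probability mass to keep
    "frequency_penalty": 0.5,  # discourages repeating the same tokens
    "presence_penalty": 0.5,   # discourages reusing tokens already present
}

# With an API key configured, this would be sent roughly as:
#   import openai
#   response = openai.Completion.create(**request)
```

Leaving these settings at their defaults is exactly the "minimal instructions" case; tweaking them is the first step beyond it.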
Furthermore, training GPT-3/ChatGPT can supply more nuance than just instructing it. For example, a user might instruct GPT-3/ChatGPT to create some questions for a questionnaire. However, training GPT-3/ChatGPT with a few thousand examples will give it much more nuance than a few instructions could ever replicate.
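As a sketch of what such training looks like in practice: GPT-3's fine-tuning tooling at the time accepted JSONL files of prompt/completion pairs. The example texts and file name below are hypothetical.

```python
import json

# Hypothetical training examples for fine-tuning a questionnaire generator.
# A real dataset would contain a few thousand such pairs, not two.
examples = [
    {"prompt": "Topic: job satisfaction ->",
     "completion": " How satisfied are you with your current role?"},
    {"prompt": "Topic: work-life balance ->",
     "completion": " How often does work interfere with your personal time?"},
]

# The fine-tune API expected one JSON object per line (JSONL).
with open("questionnaire_training.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

The resulting file would then be uploaded to the fine-tuning endpoint, and the tuned model used in place of the base model.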
So, before criticizing any shortcomings of GPT-3/ChatGPT, users could evaluate whether their instructions were a match for the output they desired, or whether extensive training examples and optimized GPT-3/ChatGPT settings (a sort of hyperparameter tuning) would have been a much better route.