BBC Radio 4 – Word of Mouth, Chatbots¹
Like lots of other folk, I’ve been reading plenty about Large Language Models, AI & chatbots, and playing with some of the toys.
I really liked Professor Bender’s approach and method. I also found this a very easy listen; my mind has tended to wander off when reading blog posts about AI. She is very clear on the “not intelligent” point and on the risks associated with chatbots trained on large piles of language.
And specifically, the thing that they’re predicting is what would be a plausible next word given all the preceding words, here and then again and then again and again.
…
And so it’s linguistically interesting that once you get to billions of words of text, there’s enough information in there, just in the distribution of words, to stick with things that are both grammatical and seemingly coherent.
…
So that’s a cool observation, and it’s dangerous, because we tend to react to grammatical, fluent, coherent-seeming text as authoritative and reliable and valuable.
…
So instead of talking about automatic speech recognition, I prefer to talk about automatic transcription because that describes what we’re using it for and doesn’t attribute any cognition to the system that is doing the task for us.²
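That “plausible next word, again and again” loop is easy to make concrete. Below is a minimal sketch using the Hugging Face transformers library with GPT-2 and greedy decoding; the model, prompt and step count are my choices for illustration, not anything from the episode.

```python
# pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Hypothetical prompt; any text will do.
input_ids = tokenizer("The BBC is a", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):  # "and then again and then again and again"
        logits = model(input_ids).logits  # a score for every word piece in the vocabulary
        next_id = logits[0, -1].argmax()  # the single most plausible next piece
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Real chatbots sample from the distribution rather than always taking the top token, and are tuned on top of this, but the mechanism is the same: a distribution over words, with no understanding anywhere.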
1. I subscribe to the RSS feed of this BBC radio program as a podcast; a pity you can’t find the feed on the webpage.
2. Ironically, I used Aiko to get the text of the podcast for the quotes: “transcription is powered by OpenAI’s Whisper model running locally on your device” (see the sketch below).
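For the curious, here is a minimal sketch of what that footnote describes, using the open-source whisper package directly rather than Aiko (which I haven’t inspected; the file name is hypothetical):

```python
# pip install openai-whisper   (also needs ffmpeg on your PATH)
import whisper

model = whisper.load_model("base")  # a small model that runs locally, no API calls
result = model.transcribe("word_of_mouth_chatbots.mp3")  # hypothetical episode file
print(result["text"])
```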