For the people opining on what ChatGPT/AI will mean, please be aware you are almost 100% guaranteed to be making a fool of yourself right now. December 5th, 2000:
@bert_hubert The problem, of course, not being that it goes away, but that we don't really know where it's going.
@bert_hubert It's certainly impressive but still: Maybe, maybe not.
I keep thinking about things like the self-driving car craze of about 5 years ago. We were supposed to have full self-driving cars by now. But they seem to be unable to fix the last 10%. And a car that doesn't drive into a ditch 90% of the time isn't a full self-driving car.
Or the craze about IBM Watson. It was supposed to revolutionize healthcare.
@JasperSiepkes @bert_hubert I think the comparison is flawed. Self-driving cars have never existed and are largely the result of investors overestimating the technology, while ChatGPT was already working amazingly well even before its official release and is already being used by a large community. In addition, it can be used even without the last 10%, without the risk of it driving me into a wall :)
@JasperSiepkes @bert_hubert With human intelligence we don't claim that it is perfect; why should we make that claim for AI?
@winfried @bert_hubert When the self-driving cars hype started there were early iterations just like with ChatGPT. Watson had the Jeopardy thing.
ChatGPT can be really wrong about things and presents them as facts buried between other facts. Take this example where I talk with it about a movie. Every reply contains at least one falsehood buried in it.
It needs to be perfect because ChatGPT presents its answers as facts and people will put (too) much faith in them.
@JasperSiepkes @bert_hubert Yes, that happens all the time; it does some things better than others. I wonder if it can judge for itself the degree of correctness of its own answers.
@winfried @bert_hubert That's an interesting question.
I would think that if it could, the makers would have used it in the texts it generates. Something like: "I'm not entirely sure, but I think XYZ".
I wonder if they might add something like that later. Or maybe it's just not possible with the model they are using? Or maybe it's too resource-intensive?
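A minimal sketch of what such a self-assessment could look like, assuming the legacy openai Python client (pre-1.0) and the GPT-3-era text-davinci-003 completion model: the API can return per-token log-probabilities, which could serve as a crude confidence signal for hedging an answer. This is only an illustration of the idea, not something OpenAI actually does.

```python
import openai  # legacy openai-python (<1.0) interface assumed

openai.api_key = "sk-..."  # your API key

resp = openai.Completion.create(
    engine="text-davinci-003",   # GPT-3-era completion model
    prompt="Who directed the movie Alien?",
    max_tokens=20,
    temperature=0,               # keep the decoding as deterministic as possible
    logprobs=1,                  # also return the log-probability of each sampled token
)

answer = resp["choices"][0]["text"].strip()
token_logprobs = resp["choices"][0]["logprobs"]["token_logprobs"]

# Crude "confidence": the average per-token log-probability of the answer.
lps = [lp for lp in token_logprobs if lp is not None]
avg_logprob = sum(lps) / len(lps)

# Arbitrary threshold, purely for illustration: hedge when the model was "less sure".
if avg_logprob < -1.0:
    print(f"I'm not entirely sure, but I think: {answer}")
else:
    print(answer)
```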
@JasperSiepkes @winfried @bert_hubert
The 'AI' fail rate in the press is 100% a human greed and stupidity fail, not an AI one. Try plain GPT-3 yourself. Responses are plain, simple, deterministic and conservative, as long as you configure the model the right way!
Microsoft must have cranked the model's base temperature parameter up to an unreasonable level, with the intention of getting into the press with some catchy lines.
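For what it's worth, a minimal sketch of what "configuring the model the right way" might mean, again assuming the legacy openai Python client and text-davinci-003: the temperature parameter controls how much the token sampling is "heated". At 0 the model picks the most likely token every time, while high values flatten the distribution and produce the more colourful, less reliable answers.

```python
import openai  # legacy openai-python (<1.0) interface assumed

openai.api_key = "sk-..."  # your API key

prompt = "The inventor of the World Wide Web is"

# Conservative settings: temperature 0 picks the most likely token every time,
# giving plain, repeatable completions.
conservative = openai.Completion.create(
    engine="text-davinci-003",
    prompt=prompt,
    max_tokens=15,
    temperature=0.0,
)

# "Heated" settings: a high temperature flattens the token distribution,
# which makes the output more varied and more likely to go off the rails.
heated = openai.Completion.create(
    engine="text-davinci-003",
    prompt=prompt,
    max_tokens=15,
    temperature=1.8,
)

print("temperature 0.0:", conservative["choices"][0]["text"].strip())
print("temperature 1.8:", heated["choices"][0]["text"].strip())
```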
@bert_hubert my favourite sly dunk on media coverage of "the Internet" was this IBM ad from '97 https://www.youtube.com/watch?v=IvDCk3pY4qo
@bert_hubert Don't want to spoil your thread but same for digital currencies and related fields.
@aerique all my bitcoin predictions have been right though.. for the past 10 years ;-)
@bert_hubert @aerique So you are a millionaire now? :)
@bert_hubert but, nobody uses "Internet" anymore, only "web" or "apps", thus "fad" was in fact correct! ½s
@bert_hubert Two months later, on February 5th, 2001....
https://hergebruik.blogspot.com/2023/02/het-gaat-goed-met-internet-dank-u.html
@bert_hubert What a funny mix of nails hit right on the head and planks missed completely, if I do say so myself.
@hmblank @bert_hubert "surfing", that's what it was called back then, yes!
@bert_hubert I'm keeping this for future presentations, for every time I try to say "hey, this data product might be interesting" and my colleagues respond with the knee-jerk "it's probably just a hype and we don't need to look at it".
@bert_hubert I present the following exhibits:
- https://www.jstor.org/stable/4240303
- https://www.theguardian.com/technology/2000/dec/05/internetnews.g2
- https://www.bbc.com/news/technology-36279855
- image below.
@pinchito @bert_hubert I do not believe in this.