ChatGPT is starting to worry me

I'm more worried about the fact that these LLMs will be used irresponsibly in a whole heap of fields. I hope people keep in mind that you always have to double-check what they generate and never trust anything they produce without verification, but I'm afraid that at a certain point that caution will be lost. These tools are too good at seeming human, especially when people don't scrutinize them properly.
Yes, scary. That's what I meant by unreliable. Those tools seem unable to say 'I don't know', which I understand, since it's only a process returning the closest matching pattern. To be fair, the author of the video showed a case where the tool couldn't find data for 2 of the 5 cases it had to summarize and simply said so, leaving blanks instead of making something up. But in my experience, I've had references that were completely invented, several times: books with plausible-looking ISBNs, or with titles that simply didn't exist. When I asked the AI about it, it just said 'sorry, you're right, it doesn't exist'...

I wish the AI could at least give a confidence score with its answers. It wouldn't be entirely accurate, since it ultimately depends on the credibility of the original facts it absorbed, but it would be a good indication of how reliable its own reasoning is.

Thankfully, the Internet has already made us more critical. I remember people being much more gullible before (and yet journalists back then were no more careful about checking what they wrote than they are today).
 
From what I understand, they don't even have a concept (here I go humanizing it, when there's no consciousness to speak of) of knowing or of being correct. It can't know what it's producing, and it can't do a global analysis of what it has produced in order to correct itself.
It just produces text; effectively, its only concern is which next word best fits, given the previous words and the context of the question.
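
For the curious, here's a toy sketch of that loop in Python. The "model" is just a made-up bigram table (every word and probability below is invented for illustration), but a real LLM works on the same principle, with a neural network scoring continuations over a huge vocabulary instead of a tiny lookup table:

    import random

    # Hypothetical bigram table standing in for a trained model:
    # P(next word | current word). All values here are invented.
    BIGRAMS = {
        "the":  {"cat": 0.5, "dog": 0.3, "moon": 0.2},
        "cat":  {"sat": 0.6, "ran": 0.4},
        "dog":  {"ran": 0.7, "sat": 0.3},
        "moon": {"landing": 1.0},
        "sat":  {"quietly": 1.0},
        "ran":  {"away": 1.0},
    }

    def generate(start, max_words=5):
        words = [start]
        for _ in range(max_words):
            nxt_probs = BIGRAMS.get(words[-1])
            if nxt_probs is None:      # no known continuation: stop
                break
            # Pick the next word from the probabilities alone;
            # nothing here ever asks whether the result is *true*.
            nxt, = random.choices(list(nxt_probs), weights=list(nxt_probs.values()))
            words.append(nxt)
        return " ".join(words)

    print(generate("the"))   # e.g. "the cat sat quietly"

Notice that nothing in the loop checks whether the output is true; it only asks what's statistically likely to come next.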
 
It should at least have access to the probabilities associated with each token it outputs, from both the base and the assistant models. I'm not familiar enough with them to say whether those values can be used to estimate the quality of an answer, though, and I think other factors could make them fairly approximate (for example, how accurate the scanned documents were, or how deep the training went).
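
In case it helps, the open models do expose exactly those per-token probabilities. Here's a rough sketch using the Hugging Face transformers library with GPT-2 (an open stand-in; ChatGPT's own models aren't accessible this way), printing the probability the model assigned to each token it generated:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # GPT-2 as an open stand-in for illustration purposes.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The capital of France is", return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=5, do_sample=False,
                         output_scores=True, return_dict_in_generate=True)

    # out.scores holds one logit vector per generated token; softmax
    # turns each into a probability distribution over the vocabulary.
    probs = []
    for tok_id, logits in zip(out.sequences[0][ids.shape[1]:], out.scores):
        p = torch.softmax(logits[0], dim=-1)[tok_id].item()
        probs.append(p)
        print(f"{tok.decode(tok_id)!r}: {p:.3f}")

    print("average token probability:", sum(probs) / len(probs))

Whether the average of those numbers is a good proxy for the quality of the answer is exactly the open question, though: a model can assign high probability to a fluent sentence that's factually wrong.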
 