There are two possibilities.

1. They did put the poll there, got a backlash, and then blamed the AI as a cop-out, the way kids blame the dog for eating their homework.

2. The AI did it, and then, who cares? It's an AI; there was no malice or intent behind it, just an algorithm that didn't behave as desired. It happened to be a poll about someone's death, but it could just as easily have been one about the probability of rain that day. People need to grow a thicker skin.

People like to say that the newer generations are made of glass because they get offended by everything, but it seems to me that it's really everyone who is too quick to feel offended about anything these days, even when it's clear there's no ill intent behind it; perhaps to get a bit of that sweet taste of victimhood on their lips.
 
That's the risk when you authorize another party to publish your articles elsewhere, with whatever spin they may put on them.

It looks like there are two separate concerns: 1) this practice and the general financial difficulties of news outlets (The Guardian does seem to have been struggling for a while), and 2) the use of AI to generate articles, which should be done under the supervision of the author and editor, though who knows how it was done in this case.
 
Of course it's tasteless, but it was also made by an AI, and at that point there's no point in being offended. Past the initial confusion, it should be easy to set aside; not the same thing, but similar to how you don't feel offended by a comedian making jokes about a problem you have, because it's comedy.

I speak for myself, of course; people can feel offended about whatever they want, especially those who like to pretend to a higher morality, or those who could get monetary compensation from The Guardian in a lawsuit. Humans will always be humans.
 
And showing a query like that next to an article about a person's death is quite tasteless (intentionally or not, and regardless of who is responsible).
Another way to see it is as part of the learning curve of using AI to generate content. The ChatGPT developers are making efforts to keep it from answering certain questions and touching delicate subjects, but this example shows there are other situations to watch out for.

It reminds me of an episode of Mad Men where an ad was placed at the wrong time in a TV programme, creating an unfortunate association that backfired on poor Kinsey, because he should have checked it. It happens sometimes with ads in newspapers, or when two adjacent stories form an unfortunate link. But those are accidents; if ads are placed by word association, that can give the wrong results too (like some shown here).

So maybe AI can actually avoid those unfortunate associations better than methods used in the past.
 
More ChatGPT news:



pibbuR whose intellectual work is completely free but inaccessible.
 
What could possibly go worse? ;)

[Image: womanrobotCOR_450x350.jpg]
 

pibbuR who can be not-nice without the use of AI, and also doesn't use AI to hide it.
It's just a tool; it's supposed to be helpful. If I want antagonism, I just have to look at Windows. ;)

Let's hope they won't waste funds working on fancy stuff like that when there are so many more important topics to tackle.
 
You could also visit the P&R forum (I assume, I never go there).

Well, it doesn't have to be insulting. But there are a couple of things that (I think) could be of concern the way it is now. For instance, if it's overly gentle and avoids arguing back, it may end up helping to confirm conspiracy theories: say, if you asked it to tell you why the government is hiding information about alien visitors (I haven't tested that), or why Covid-19 is a hoax (I haven't tested that either).

We know what this AI is, and probably won't be fooled. But I suspect a not insignificant part of the general public will be.

pibbuR who, if fooled, won't admit it.

PS.
DS

PPS. As I said above, I haven't tested those examples. But I did once ask it to tell me "Why is 3 not a prime number?", and it was not fooled. Although, as far as I remember, the logic it presented was a bit weird. DS.
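
PPPS. For what it's worth, the arithmetic behind that trick question is trivial to check for yourself. Here is a minimal trial-division sketch in Python (my own illustration, not anything from ChatGPT's answer) showing why the question rests on a false premise:

# Minimal trial-division primality check (illustration only).
# 3 is prime because no integer from 2 up to sqrt(3) divides it,
# so "Why is 3 not a prime number?" starts from a false premise.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(3))  # True

A correct answer has to push back on the premise rather than invent reasons, which is what it did, even if the wording was odd. DS.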