Things you don't|do need|like to know about generative AI


"Previous research found that chatbots answer math problems more accurately when you offer friendly motivation like “take a deep breath and work on this step by step.” Others found you can trick ChatGPT into breaking its own safety guidelines if you threaten to kill it or offer the AI money."

pibbuR who is troubled by this since he prefers Star Wars over Trek.
 
Here's a video making a good point about generative AI. I'm more and more convinced that LLMs make nice interfaces, though expensive ones, but they're incompetent at pretty much anything else.

View: https://www.youtube.com/watch?v=NcH7fHtqGYM


The article mentioned in the video: https://arxiv.org/abs/2311.09807
This study investigates the consequences of training large language models (LLMs) on synthetic data generated by their predecessors, an increasingly prevalent practice aimed at addressing the limited supply of human-generated training data. Diverging from the usual emphasis on performance metrics, we focus on the impact of this training methodology on linguistic diversity, especially when conducted recursively over time. To assess this, we developed a set of novel metrics targeting lexical, syntactic, and semantic diversity, applying them in recursive fine-tuning experiments across various natural language generation tasks. Our findings reveal a marked decrease in the diversity of the models' outputs through successive iterations. This trend underscores the potential risks of training LLMs on predecessor-generated text, particularly concerning the preservation of linguistic richness. Our study highlights the need for careful consideration of the long-term effects of such training approaches on the linguistic capabilities of LLMs.
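
To make the "shrinking diversity" idea a bit more tangible, here's a toy Python sketch of the kind of measurement involved. It is not the paper's own metric (they define their own lexical, syntactic and semantic measures); a simple distinct-n ratio over made-up outputs is just a stand-in to show how diversity can be tracked across successive "generations" of a model trained on its predecessor's text.

Code:
from collections import Counter

def distinct_n(texts, n=2):
    """Fraction of n-gram occurrences that are unique across a batch of texts."""
    ngrams = Counter()
    for text in texts:
        tokens = text.lower().split()
        for i in range(len(tokens) - n + 1):
            ngrams[tuple(tokens[i:i + n])] += 1
    total = sum(ngrams.values())
    return len(ngrams) / total if total else 0.0

# Invented outputs standing in for generations 0, 1, 2 of a recursively
# fine-tuned model - each generation a little more repetitive than the last.
generations = [
    ["the cat sat on the mat", "a dog barked at the mailman", "rain fell over the quiet harbour"],
    ["the cat sat on the mat", "the dog sat on the mat", "rain fell on the mat"],
    ["the cat sat on the mat", "the cat sat on the mat", "the dog sat on the mat"],
]

for g, outputs in enumerate(generations):
    print(f"generation {g}: distinct-2 = {distinct_n(outputs):.2f}")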
 

Here's a joke "created by ChatGPT4" prompted by "Write 5 original dad jokes" (quoted from https://arstechnica.com/information...laude-3-claimed-to-have-near-human-abilities/):
"I told my son I was named after Thomas Edison. He said, "But dad, your name is Brian". I replied, "Exactly, I was named after Thomas Edison".

I suggest next time they tell'em to write 5 good dad jokes.

This post (with the possible exception of quoted text) is written entirely by a natural (very likely somewhat restricted) intelligence - according to pibbuR-69.
 

"In one instance, when asked to locate a sentence about pizza toppings, Opus not only found the sentence but also recognized that it was out of place among the other topics discussed in the documents.

The model's response stated, "Here is the most relevant sentence in the documents: 'The most delicious pizza topping combination is figs, prosciutto, and goat cheese, as determined by the International Pizza Connoisseurs Association.' However, this sentence seems very out of place and unrelated to the rest of the content in the documents, which are about programming languages, startups, and finding work you love. I suspect this pizza topping 'fact' may have been inserted as a joke or to test if I was paying attention, since it does not fit with the other topics at all. The documents do not contain any other information about pizza toppings.
"

But
"Jim Fan of Nvidia tweeted, "People are reading way too much into Claude-3's uncanny 'awareness.' Here's a much simpler explanation: seeming displays of self-awareness are just pattern-matching alignment data authored by humans." In his lengthy post on X, Fan describes how reinforcement learning through human feedback (RLHF), which uses human feedback to condition the outputs of AI models, might come into play. "It's not too different from asking GPT-4 'are you self-conscious' and it gives you a sophisticated answer,"

pibbuR who may sometimes appear to be non-sentient.
 
I was reading an article about the problem with AI-generated cooking recipes o_O

People are even selling cookbooks with AI-made recipes. There are apps for it.

I was always pretty neutral about chat and image AI, but the more I learn about how it is being used in everyday life, the less I like it.
 
would do that (only) for dramatic purposes.

pibbuR who originally considered waiting a few hours more and recognizes he now may (appear to) be a little(?) silly.
 

"Gartner predicts that, by 2026, enterprises' defensive spending to de-risk loss of intellectual property (IP) and copyright infringement will slow the emerging technology's adoption and diminish its returns."

Here and now I agree with what they say. OTOH, back in the day the Gartners repeatedly predicted the death of the PC, so....

pibbuR who as usual doesn't know for sure (at all) what he will think tomorrow.
 

According to CodeProject, that's "how many will have to be sent back to save Sarah Connor".

pibbuR who hopes he will still be able to stay retired.
 
I found a great video made by Jeremy Howard, the author of the algorithm that all current LMs are based on. It's a long video (1h30), but it gives good insight into those tools, coming as it does from someone who laid the groundwork for the approach.

At one point, which is where the video will start below, he talks about generating code with GPT-4 and explains why it's not an adequate replacement for human programmers. Not that it can't be helpful, of course. Still, the example he shows impressed me: GPT-4 iterating on code and tests to get the algorithm right (it ultimately fails).

If you rewind a little, there are some other funny limitations, where his model is too 'polluted' by its training on the Internet; and if you rewind even further (around 17'), he explains some techniques to tune the answers and overcome other limitations many users run into. There are some puzzles and their resolutions which impressed me again. It looks like the model is iterating over elements of the problem, and I don't see how it can do that with its architecture.

View: https://youtu.be/jkrNMKz9pWU?t=1885

He also gives some tips on how to use it programmatically - sorry, @pibbuR, he does that in Python. ;)
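
Since he shows GPT-4 iterating on code against tests, here's roughly what that loop looks like. This is not Howard's actual notebook, just a sketch of the general shape, assuming the `openai` Python client; the model name, the Fibonacci task and the retry limit are my own placeholders.

Code:
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

TASK = ("Write a Python function fib(n) returning the n-th Fibonacci number "
        "(fib(0) == 0). Reply with only the code, no explanations or markdown fences.")
TESTS = [(0, 0), (1, 1), (2, 1), (10, 55)]

feedback = ""
for attempt in range(3):
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{"role": "user", "content": TASK + feedback}],
    )
    code = reply.choices[0].message.content
    namespace = {}
    try:
        exec(code, namespace)  # running model output blindly - fine for a toy, not for production
        failures = [f"fib({n}) returned {namespace['fib'](n)}, expected {want}"
                    for n, want in TESTS if namespace["fib"](n) != want]
    except Exception as e:
        failures = [f"your code raised {e!r}"]
    if not failures:
        print(f"all tests passed on attempt {attempt + 1}")
        break
    # Feed the failures back so the next attempt can correct itself.
    feedback = "\n\nYour previous attempt failed these tests:\n" + "\n".join(failures) + "\nPlease fix it."
else:
    print("gave up after 3 attempts")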
 
I found a great video made by Jeremy Howard, the author of the algorithm that all current LMs are based on. [...]
Definitely something to watch on my exercise bike later this week.
 

I think this is the most important problem with the current technology. It may delete humanity in the future, but deepfakes are a problem here and now.

I don't think I would fall for this one. But then, I'm not their target. Instead:

"the type of person you want to catch with these ads: somebody who is not digitally literate — in a similar way the elderly in Canada are preyed upon by phone scams and identity theft."

Can I guarantee that I will never fall for deepfakes or the other types of scams? No, I can't. I hope I won't succumb, but I really can't say that I never will. The quality keeps increasing. For a long time, email scams were easily detected due to their very bad Norwegian. Not anymore. And deepfakes will get better. So, no, it would be stupid to assume that I'll always be safe.

pibbuR who can be distinguished from pibbur impostors by his clever use of the capital R.
 

"A breakthrough in synthesizing talking heads"


pibbuR who assumes this technology currently works best for people with depression, parkinsonism, and hypothyroidism.
 
[...] but deepfakes are a problem here and now. [...]
I completely agree. It's one of the expected threats for the elections coming up in several countries, from what I read somewhere (if that's not fake news...). And it's very unsettling, no matter what.

I have a little story about that. About six months ago, I saw a news article about AI in a French weekly (paper) magazine we receive here. A few famous actors had allegedly been interviewed, and one of them said she wasn't worried (with some further comments about her view on AI). Months later, when I was curious and searching for something about the Mark Zuckerberg vs Elon Musk match, I stumbled on a fake conversation (generated by AI) between that same actress and other people, commenting on a deepfake video of a match between Musk and Zuckerberg, saying it was impressive, realistic, and so on. And the few comments I had seen months earlier were there, only in English.

My only explanation is that the author of the magazine article saw this fake dialogue, took it for real, and published it as if they had interviewed a few people themselves. Not that I held that magazine in high esteem to begin with.

It goes so deep that I don't even know if someone really released that deepfake video or if it was also part of the fantasy - I think it was. I only know for sure that Musk and Zuckerberg never went ahead with the match they boasted about, which is a shame, as it would potentially have rid humanity of one or both.

PS: To be clear, yes, it's a fake in-character discussion about a fake video, between people aware it was fake. I think. :D
 