@pibbuR had already shared that a few posts up.

From experience, it takes more time to develop in, but it can pay off in stability and when expanding the code base.

Sometimes you hit very annoying roadblocks, though. I recently ran into one when I needed an efficient tree structure to store data, then update it bottom-up.
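
To illustrate the kind of workaround this usually ends in (the post doesn't say which structure was involved), here's a minimal Rust sketch of an index-based arena tree, a common way to sidestep borrow-checker fights over parent pointers when updating values bottom-up. All names are made up for illustration:
```rust
// Hypothetical sketch: an index-based arena tree. Nodes refer to their
// parent by index instead of by reference, which keeps the borrow checker
// happy during bottom-up updates.
struct Node {
    value: i64,
    parent: Option<usize>, // index into the arena, not a reference
}

struct Tree {
    nodes: Vec<Node>,
}

impl Tree {
    fn new() -> Self {
        Tree { nodes: Vec::new() }
    }

    // Add a node and return its index.
    fn add(&mut self, value: i64, parent: Option<usize>) -> usize {
        self.nodes.push(Node { value, parent });
        self.nodes.len() - 1
    }

    // Bottom-up update: add `delta` to a node and every ancestor up to the root.
    fn bubble_up(&mut self, mut idx: usize, delta: i64) {
        loop {
            self.nodes[idx].value += delta;
            match self.nodes[idx].parent {
                Some(p) => idx = p,
                None => break,
            }
        }
    }
}

fn main() {
    let mut tree = Tree::new();
    let root = tree.add(0, None);
    let child = tree.add(0, Some(root));
    let leaf = tree.add(0, Some(child));
    tree.bubble_up(leaf, 5);
    assert_eq!(tree.nodes[root].value, 5);
    println!("root value: {}", tree.nodes[root].value);
}
```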

Levelling up is more arduous with Rust than with other languages - maybe it's the Dark Souls of programming languages. ;)

Bergstrom said Google has a similar migration underway moving developers from Java to Kotlin and that the time it takes to retrain developers in both cases – Java to Kotlin and C++ to Rust – has been similar. That is, in two months about a third of devs feel they're as productive in their new language as their old one. And in about four months, half of developers say as much, based on anonymous internal surveys.
I find that hard to believe, though. Kotlin is much easier and more flexible than the old Java, and going from Java to Kotlin is a breeze, but Rust requires you to think differently than in C++. For example, you have to mind the borrow checker, and you don't get the full OOP power, only a part of it (and a different one).
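
A minimal, made-up illustration of that mindset shift: code a C++ developer would write without a second thought, but which the borrow checker rejects.
```rust
fn main() {
    let mut scores = vec![10, 20, 30];
    let first = &scores[0]; // immutable borrow of the vector
    // scores.push(40);     // ERROR: cannot borrow `scores` as mutable
                            // because `first` is still in use below
    println!("{first}");

    // The Rust way: copy the value (or end the borrow) before mutating.
    let first_copy = scores[0];
    scores.push(40);
    println!("{first_copy}, len = {}", scores.len());
}
```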

There is no way you can be productive in all aspects after 2 months. People who say that have only scratched the surface, or they stuck to tasks similar to what they already knew.
 
The Velato programming language:


Velato is a programming language, created by Daniel Temkin in 2009, which uses MIDI files as source code: the pattern of notes determines commands.

Here's Hello World:
HelloWorld.gif

pibbuR who sings Daisy, Daisy.....
 

pibbuR who is happy that he doesn't need to apply for jobs anymore. And of course that he doesn't use Python.
Well, that's a good test, almost legitimate. ;)

It's still quite suspicious for a company to make you install stuff and depend on a local machine. I had an interview with Python tests, but they were using HackerRank, which seemed the right and usual way to do it.
 
View: https://www.youtube.com/watch?v=FARf9emEPjI


It looks pretty impressive, even if it does require verification and adjustment from a knowledgeable developer for almost everything.
What will be interesting is seeing the impact of these tools on developers who grow up alongside them, since you need experience and knowledge to be able to correct and adjust the output.
So what do you do when developers are constantly in a tug-of-war between just trusting it and digging in on the side to find out what it's trying to do and whether it's correct?

It'll be a sort of combination of treating it like a black box but also needing to dig inside the black box when it goes wrong.
But how do you know where it went wrong? And how do you avoid just fixing the issue with a hack or a workaround that'll lead to more issues down the line?

I really cannot see this working without the developer using it also being sufficiently knowledgeable and experienced, so it'll be interesting to see how future generations evolve.
 
I tried it and I'm far from convinced. Firstly, it's buggy when you use it in VSCode, which is its natural habitat, and it can't always see the whole code base unless you jump through hoops. More importantly, it generates unreliable code.

I think the way to do it, if you really have to, is to use the AI assistant for small menial tasks like auto-completing statements and generating very basic stuff, then review the code as if it came from a green developer who'd been binge drinking the night before. And never, ever use it to generate the unit and integration tests (you may ask it for ideas, though, as if brainstorming).

Something I found useful, to some extent, is the possibility of asking about libraries or language elements: 'is there a method to do such and such on my variable x?' It's a good database. You can also ask it to comment the code or explain how it works, but I had mixed results.

There's a real risk with hallucinations. Typically, many people use the same pattern for a specific task (sorting items, building a graph, etc.). If you have a slightly different pattern because of a special requirement, but it looks close enough, the AI will blindly apply what it's learned from other people's code and introduce a potentially nasty bug by ignoring that slight difference.
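
A made-up Rust example of that "close enough but slightly different" trap: the widely copied pattern for sorting floats panics on NaN, so blindly applying it ignores a hypothetical requirement like "NaN readings must go last".
```rust
use std::cmp::Ordering;

// Hypothetical requirement (the "slight difference"): the data may contain
// NaN, and NaN values must end up at the back instead of crashing the sort.
fn sort_readings(readings: &mut [f64]) {
    // The pattern seen everywhere for sorting f64 is:
    //     readings.sort_by(|a, b| a.partial_cmp(b).unwrap());
    // It looks close enough, but panics the moment a NaN shows up,
    // which is exactly the slight difference being ignored.

    // Variant that respects the requirement: NaN compares as greater.
    readings.sort_by(|a, b| match (a.is_nan(), b.is_nan()) {
        (true, true) => Ordering::Equal,
        (true, false) => Ordering::Greater,
        (false, true) => Ordering::Less,
        (false, false) => a.partial_cmp(b).unwrap(),
    });
}

fn main() {
    let mut data = vec![3.0, f64::NAN, 1.0, 2.0];
    sort_readings(&mut data);
    println!("{:?}", data); // [1.0, 2.0, 3.0, NaN]
}
```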

I've seen a couple of studies showing that code produced by that AI required more changes later, discouraged code reuse, and looked as if it were written by someone unaware of the existing code and the rules observed by other developers on the same project. I didn't read them thoroughly, though.
 
Yeah, the fact that anything these LLMs produce needs to be checked, verified and adjusted is an absolute given.

There's no getting around that. Unfortunately it's being used, and will be used, in a lot of fields where its output won't be verified. That's the real tragedy. Business pushes such concerns aside, until maybe something very horrible happens.
But seeing the state of Boeing's planes, and how little has changed because of that, doesn't really give me any hope that something will change for things a lot less risky than, say, planes crashing.

I was only speaking from that point of view. Given that you need to check everything it outputs, I still think it will lead to more productivity.
But as I said, I really wonder how the next generations of programmers will be impacted, when it's a fresh programmer in the field using this, without the deep knowledge to notice the issue in the generated code, who will simply let it slide.
And then they'll try to fix the issue much later on, which will lead to a bunch of improvised and hacky fixes.

Plus, it might even lead to a lot of wasted time just asking it to fix itself, instead of fixing it themselves. They won't have the confidence to fix it themselves, since they were spared that experience of needing to go deep into the muck and fix it. So they'll just spend time asking in various ways for a fix, without knowing what the fix is. I can foresee those prompts given to the LLM growing bigger and bigger while trying to give it more context. It could become a real shitshow.
 
I'm not sure about productivity, but some people are allegedly happy about it (I'd be curious to see the demographics on that). I can't imagine how, except if they use it for small tasks. Otherwise, you have to spend the time to tell it in detail what to generate, understand what it did, and fix the errors or even rewrite a good part of it. As you said, asking it to fix an issue usually doesn't work, because the issue comes from blindly applying a known pattern, and it will just repeat the error or stray away.

It would be better if they added an iterative reflection, as explained in a video I posted a few weeks ago. Currently, the output of an LLM is as if someone asked you '2+2=?': you don't have to think, it's a straight answer from your school training with addition tables. If the question is about solving a Tower of Hanoi problem, or finding a checkmate in 4 moves, then you have to start thinking harder, imagining the situation as it progresses towards the goal, which LLMs don't do. It's not hard to see that good programming belongs to the latter rather than the former, except for basic hello world stuff.
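
Just to make the Tower of Hanoi point concrete (purely illustrative, not from the post): even the classic recursive solution is all about tracking how the state evolves towards the goal, move by move.
```rust
// Classic recursive Tower of Hanoi solver. Each move depends on the evolving
// state of the pegs, not on a memorized fact.
fn hanoi(n: u32, from: char, to: char, via: char, moves: &mut Vec<(char, char)>) {
    if n == 0 {
        return;
    }
    hanoi(n - 1, from, via, to, moves); // clear the way
    moves.push((from, to));             // move the largest remaining disc
    hanoi(n - 1, via, to, from, moves); // put the rest back on top
}

fn main() {
    let mut moves = Vec::new();
    hanoi(3, 'A', 'C', 'B', &mut moves);
    for (i, (from, to)) in moves.iter().enumerate() {
        println!("move {}: {} -> {}", i + 1, from, to);
    }
    // 2^3 - 1 = 7 moves for 3 discs
    assert_eq!(moves.len(), 7);
}
```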

When you work in a team, there's yet another level. You try to blend your code into the current style and existing code, making it look as if it were all written by a single person, reusing and sharing as much as possible of what's already there. AIs don't do that very well (which is an understatement).
 
Yeah, it's more and more obvious that the use of these tools is a top-down push rather than bottom-up.
No wonder, given the huge amounts of money they're investing in it. They really need to monetize it and get it used. They can't just wait and hope it happens naturally.

I'm really curious to hear the feedback from the companies that do end up using it.
Hopefully the truth will be discernible among the hailstorm of pr bullshit.

The plan they jokingly mentioned is absolutely hilarious.
View: https://www.youtube.com/watch?v=pLnyjxgFxew
 
I'm really curious to hear the feedback from the companies that do end up using it.
Hopefully the truth will be discernible among the hailstorm of pr bullshit.
I'm very curious too.

JetBrains did a survey recently, but to be honest, their AI assistant is much crappier than Copilot. Maybe you'll find better insight in a Copilot survey, but I just happen to have that link. 50% of people were 'satisfied', 91% estimated they saved some time (the peak being 1-3 hours/week), and the #1 feature they found useful was the AI chat (so the same as ChatGPT). #2 was refactoring suggestions, #3 finding problems in the code, #4 explaining code, and #5 generating documentation. Nothing about generating code.
 

I know. I've posted this already in the AI thread.

But: "Most recently, Karpathy has been working on a project called "llm.c" that implements the training process for OpenAI's 2019 GPT-2 LLM in pure C, dramatically speeding up the process and demonstrating that working with LLMs doesn't necessarily require complex development environments"

And
"This is in contrast to typical deep learning libraries for training these models, which are written in large, complex code bases. So it is an advantage of llm.c that it is very small and simple, and hence much easier to certify as Space-safe."

pibbuR who thinks Cing is believing.

PS: Despite double posts, you don't have to read the link twice. DS
 

Safety. I predict someone will read this and start an llm.rs project right away (rs is the Rust extension). Maybe even ask ChatGPT to write the code for them. ;)

Not that an LLM needs to have the same safety standard in space as critical navigation software. But wait... isn't the training done on Earth anyway? :D
 
Measuring performance is a tricky business. 😅

I was comparing the timing of two benchmarks: a function I created vs a similar function in a library that I didn't want to use. The other function was 10 times quicker, and after checking its code, I couldn't understand why. I spent hours optimizing and moving bits around in mine, but the results remained slower...

Now I've just realized the compiler did an unexpected and quite amazing simplification in the other function because of how I tested it. Both functions return a vector at each iteration, and I checked only one element of the result to make sure the compiler didn't optimize the call away. Even though that element implies the other ones have been calculated, the compiler still managed to remove enough to make it much quicker (the creation of the vector itself).

Fortunately, I finally thought of that possibility and used a trick to tell the compiler that the whole result was to be created normally.
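
The post doesn't name the exact trick, but assuming Rust, one common way to do this is std::hint::black_box (Criterion provides an equivalent), which forces the compiler to treat a value as actually used. A rough sketch of the pitfall and the fix, with made-up function names:
```rust
use std::hint::black_box;
use std::time::Instant;

// Made-up stand-in for the benchmarked function: builds a vector each call.
fn build_values(n: u64) -> Vec<u64> {
    (0..n).map(|i| i * i).collect()
}

fn main() {
    const ITERS: u32 = 100_000;

    // Pitfall: only checking one element can let the optimizer drop most of
    // the work, including the creation of the vector itself.
    let start = Instant::now();
    for _ in 0..ITERS {
        let v = build_values(1_000);
        assert_eq!(v[0], 0);
    }
    println!("naive check:    {:?}", start.elapsed());

    // Fix: route the input and the whole result through black_box so the
    // compiler has to produce the full vector on every iteration.
    let start = Instant::now();
    for _ in 0..ITERS {
        let v = build_values(black_box(1_000));
        black_box(v);
    }
    println!("with black_box: {:?}", start.elapsed());
}
```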

At least my code is well optimized now...

PS: Seriously, it's tricky. There are books on the subject, covering the measurements to perform, how to properly measure time, how to analyze the results, how to remove the noise, the different types of load to apply, and so on.
 

pibbuR who claims that he only used I to post this.

PS. "Attempting to get through a work day without reading or hearing about AI is like trying to send a text message with a carrier pigeon — highly unlikely". They tried the pigeon protocol at the Bergen Linux user group some (more than 20) years ago. Their conclusion: high latency and packet loss (due to unauthorized traffic from nearby clusters of non-participating pigeons).