Experienced software developer, here. “AI” is useful to me in some contexts. Specifically when I want to scaffold out a completely new application (so I’m not worried about clobbering existing code) and I don’t want to do it by hand, it saves me time.
And… that’s about it. It sucks at code review, and will break shit in your repo if you let it.
On that last note, an important thing they left out here (this being general news reporting on tech stuff) is that the study was specifically about bug-fixing tasks. AI can typically only provide the broadest of advice on those, and it’s largely incapable of tackling a problem holistically, when fixing a bug often requires thinking about the big picture.
Interesting that the AI devs thought they were being quicker though.
Same. I also like it for basic research and helping with syntax for obscure SQL queries, but coding hasn’t worked very well. One of my less technical coworkers tried to vibe code something and it didn’t work well. Maybe it would do okay on something routine, but generally speaking it would probably be better to use a library for that anyway.
I actively hate the term “vibe coding.” The fact is, while using an LLM for certain tasks is helpful, trying to build out an entire, production-ready application just by prompts is a huge waste of time and is guaranteed to produce garbage code.
At some point, people like your coworker are going to have to look at the code and work on it, and if they don’t know what they’re doing, they’ll fail.
I commend them for giving it a shot, but I also commend them for recognizing it wasn’t working.
Not a developer per se (mostly virtualization, architecture, and hardware), but AI can get me to 80-90% of a script in no time. The last 10-20% takes a while, but that part was going to take a while regardless, so the time savings on that first chunk is awesome. It does send me down a really bad path at times, though. Being experienced enough to recognize that is very helpful - I just start over.
In my opinion AI shouldn’t replace coders, but it can definitely enhance them if used properly. It’s a tool, like anything else. I can put a screw in with a hammer, but I probably shouldn’t.
Like I said, I do find it useful at times. But not only shouldn’t it replace coders, it fundamentally can’t - at least, not without a fundamental re-architecting of how these models work.
The reason it goes down a “really bad path” is that it’s basically glorified autocomplete. It doesn’t know anything.
On top of that, spoken and written language are very imprecise, and there’s no way for an LLM to derive what you really wanted from context clues such as your tone of voice.
Take the phrase “fruit flies like a banana.” Am I saying that a piece of fruit might fly in a manner akin to how another piece of fruit, a banana, flies if thrown? Or am I saying that the insect called the fruit fly might like to consume a banana?
It’s a humorous line, but my point is serious: We unintentionally speak in ambiguous ways like that all the time. And while we’ve got brains that can interpret unspoken signals to parse intended meaning from a word or phrase, LLMs don’t.
I have limited AI experience, but so far that’s what it means to me as well: helpful in very limited circumstances.
Mostly, I find it useful for “speaking new languages” - if I try to use AI to “help” with the stuff I have been doing daily for the past 20 years? Yeah, it’s just slowing me down.
I like the saying that LLMs are good at stuff you don’t know. That’s about it.
FreedomAdvocate is right. IMO the best use case for AI is things you have an understanding of but need some assistance with. You need to understand enough to catch at least the impactful errors from the LLM.
Like search engines, and libraries…
Everyone on Lemmy is a software developer.
Sometimes I get an LLM to review a patch series as a quick once-over before I send it. I would estimate about 50% of the suggestions are useful and about 10% are based on a “misunderstanding”. Last week it was suggesting a spelling fix I’d already made, because it didn’t understand that the - in the diff meant I’d already changed that line.
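For anyone who hasn’t hit that failure mode: in a unified diff the - line is the old text being removed and the + line under it is the replacement, so a reviewer has to read the + side as the current state of the file. A minimal made-up hunk to illustrate (the file name, function names, and string here are placeholders, not from the actual patch series):

```diff
--- a/example.c
+++ b/example.c
@@ -42,3 +42,3 @@
 	/* look up the user record in the cache */
-	log_msg("Retreive user record");
+	log_msg("Retrieve user record");
 	return cache_get(uid);
```

A reviewer that treats the - line as current content will happily point out the “Retreive” typo that the patch itself already corrects.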
Exactly what you would expect from a junior engineer.
Let them run unsupervised and you have a mess to clean up. Guide them with context and you’ve got a second set of capable hands.
Something something craftsmen don’t blame their tools
AI tools are way less useful than a junior engineer, and they aren’t an investment that turns into a senior engineer either.
AI tools are actually improving at a rate faster than most junior engineers I have worked with, and about 30% of junior engineers I have worked with never really “graduated” to a level that I would trust them to do anything independently, even after 5 years in the job. Those engineers “find their niche” doing something other than engineering with their engineering job titles, and that’s great, but don’t ever trust them to build you a bridge or whatever it is they seem to have been hired to do.
Now, as for AI, it’s currently as good or “better” than about 40% of brand-new fresh from the BS program software engineers I have worked with. A year ago that number probably would have been 20%. So far it’s improving relatively quickly. The question is: will it plateau, or will it improve exponentially?
Many things in tech seem to have an exponential improvement phase, followed by a plateau. CPU clock speed is a good example of that. Storage density/cost is one that doesn’t seem to have hit a plateau yet. Software quality/power is much harder to gauge, but it definitely is still growing more powerful / capable even as it struggles with bloat and vulnerabilities.
The question I have is: will AI continue to write “human compatible” software, or is it going to start writing code that only AI understands, but people rely on anyway? After all, the code that humans write is incomprehensible to 90%+ of the humans that use it.
I’m seeing exactly the opposite. It used to be that junior engineers understood they had a lot to learn. However, with AI they confidently try entirely wrong changes. They don’t understand how to tell when the ai goes down the wrong path, don’t know how to fix it, and it ends up taking me longer to fix.
So far ai overall creates more mess faster.
Don’t get me wrong, it can be a useful tool - you just have to think of it like autocomplete or internet search. Just like those tools, it provides results, but the human needs judgement and has to figure out how to apply the appropriate results.
My company wants metrics on how much time we’re saving with ai, but I have to spend more time helping the junior guys out of the holes dug by ai, making it a net negative.
I’ve always had problems with junior engineers (self included) going down bad paths, since before there was Google search - let alone AI.
Maybe it is moving faster, maybe they do bother the senior engineers less often than they used to, but for throw-away proof of concept and similar stuff, the juniors+AI are getting better than the juniors without senior support used to be… Is that a good direction? No. When the seniors are over-tasked with “Priority 1” deadlines (nothing new), does this mean the juniors can get a little further on their own and some of them learn from their own mistakes? I think so.
Where I started, it was actually the case that the PhD senior engineers needed help from me fresh out of school - maybe that was a rare circumstance, but the shop was trying to use cutting-edge stuff that I knew more about than the seniors did. Basically, everything in 1991 was cutting edge, and it made the difference between getting something that worked and having nothing if you didn’t use it. My mentor was an expert in another field, so we were complementary that way.
My company (now) wants metrics on a lot of things, but they also understand how meaningless those metrics can be.
https://clip.cafe/monsters-inc-2001/all-right-mr-bile-it/
Shame. There was a time that people dug out of their own messes; I think you learn more, faster, that way. Still, I agree - since 2005 I have spent a lot of time taking piles of Matlab, Fortran, and Python that have been developed over years to reach critical mass - add anything else to them and they’ll go BOOM - and translating those into commercially salable / maintainable / extensible Qt/C++ apps. I don’t think I ever had one “mentee” through that process who was learning how to follow in my footsteps; the organizations were always just interested in having one thing they could sell, not really a team that could build more like it in the future.
it’s just another tool. There’s not really a defined task or set time. If you had to answer how much time autocomplete saved you, could you provide any sort of meaningful answer?
Yep.
Speaking of meaningless metrics, how many people ask you for Lines Of Code counts, even today?
Yes, that’s how we became senior guys. But when you have deadlines that you’re both on the hook for and they’re just floundering, you can only give them so much opportunity. I’ve had too many arguments with management about letting them merge and I’m not letting that ruin my code base
We have a new VP collecting metrics on everyone, including lines of code, number of merge requests, times per day using ai, days per week in the office vs at home
I guess I’m lucky, before here I always had 100% control of the code I was responsible for. Here (last 12 years) we have a big team, but nobody merges to master/main without a review and screwups in the section of the repository I am primarily responsible for have been rare.
I have been getting actively recruited - six figures+ - for multiple openings right here in town (not a huge market here, either…). This may be the time…
Interesting idea… we actually have a plan to go public in a couple years and I’m holding a few options, but the economy is hitting us like everyone else. I’m no longer optimistic we can reach the numbers for those options to activate
LOL sure
I’m not talking about the ones that get hired in your 'leet shop, I’m talking about the whole damn crop that’s just graduated.
Is “way less useful” something you can cite with a source, or is that just feelings?
It is based on my experience, which I trust immeasurably more than rigged “studies” done by the big LLM companies with clear conflict of interest.
Okay, but like-
You could just be lying.
You could even be a chatbot, programmed to hype AI in comments sections.
So I’m going to trust studies, not some anonymous commenter on the internet who says “trust me bro!”
Huh? I’m definitely not hyping AI. If anything, it would be the opposite. We’re also literally in the comment section for a study about AI productivity, which is the first remotely reputable study I’ve ever seen. The rest have been rigged marketing stunts. As for weighing my opinion about the productivity of AI versus junior developers against studies, why don’t you bring me one that isn’t “we made an artificial test then directly trained our LLM on the questions so it will look good for investors”? I’ll wait.
Understood, thanks for being honest
Yeah but a Claude/Cursor/whatever subscription costs $20/month and a junior engineer costs real money. Are the tools 400 times less useful than a junior engineer? I’m not so sure…
The point is that comparing AI tools to junior engineers is ridiculous in the first place. It is simply marketing.
This line of thought is shortsighted. Your senior engineers will eventually retire or leave the company. If everyone replaces junior engineers with ai, then there will be nobody with the experience to fill those empty seats. Then you end up with no junior engineers and no senior engineers, so who is wrangling the ai?
This isn’t black and white. There will always be some junior hires. No one is saying replace ALL of them. But hiring 1 junior engineer instead of 3? Maybe…and that’s already happening to some degree.
And when the current senior programmers retire the field of juniors that are coming to replace them will be much smaller.
Not that I agree, but if you believe that the LLMs will continuously improve, then in 5-10 years you may only need 1/3rd the seniors, to oversee and prompt. Again, that’s what these CEOs are relying on.
Even at $100/month you’re comparing to a >$10k/month junior. That’s 1% of the cost for certainly more than 1% of the functionality of a junior.
You can see why companies are tripping over themselves to push this new modality.
I was just ballparking the salary. Say it’s only 100x. Does the argument change? It’s a lot more money to pay for a real person.
Wasn’t it clear that our comments are in agreement?
It wasn’t, but now it is.
❤️
The difference being junior engineers eventually grow up into senior engineers.
Does every junior eventually achieve becoming a senior?
No, but that’s the only way you get senior engineers!
I agree, but the goal of CEOs is “line go up,” not make our eng team stronger (usually)
Capitalism, shortsighted? Say it ain’t so!
Except junior engineers become seniors. If you don’t understand this … are you HR?
They might become seniors for 99% more investment. Or they crash out as “not a great fit” which happens too. Juniors aren’t just “senior seeds” to be planted