Around half of people are worried they'll lose their job to AI. And they're right to be concerned: AI can now complete real-world coding tasks on GitHub, generate photorealistic video, drive a taxi more safely than humans, and make accurate medical diagnoses. And it's set to keep improving rapidly. But what's less appreciated is that while AI drives down the value of the skills it can do, it drives up the value of the skills it can't, because those become the bottlenecks to further automation (for a while, at least).
That is the point where I stopped reading.
Yes, the author of this article should worry about AI, because AI is indeed quite effective at writing nonsense articles like this one. But AI is nowhere near replacing real specialists. And it isn't a question of quantity; it's a question of principle, of how modern "AIs" work. As long as those principles don't change, AIs won't be able to do any job that requires logic and stable, repeatable results.
Ironically, replacing shitty clickbait journalists is something AI can and likely will do in the near future.
It can complete coding tasks. But that’s not the same as replacing a developer. In the same way that cutting wood doesn’t make me a carpenter and soldering a wire doesn’t make me an electrician. I wish the AI crowd understood that.
Yep. I write code almost entirely with A.I. now, for my OWN projects.
The amount of iteration and editing it requires almost calls for a new dev specialty: "A.I. developer support."
It's honestly kinda awful. I've been trying to use it a bit to help speed up some of my projects at work, and it's a crapshoot how well it helps. Some days I can give it the function I'm writing, with an explanation of its purpose and the error output, and it helps me fix it in 5 minutes. Other days I spend an hour endlessly iterating through asinine replies that get me nowhere. Like when I tried to use it to figure out a not-very-well-documented API: it "corrected" me and insisted on a different method/endpoint, until it eventually gave up and went back to my way, which didn't even work! I ended up just hacking together a workaround that got it done in the most annoying way possible, but it accomplished the task, so WTFE.
It can complete coding tasks, but not both well AND unsupervised. To get it to do something well, I need to tell it what it did wrong over 4 or 5 iterations.
This is close to my experience for a lot of tasks, but unless I'm working in a tech stack I'm unfamiliar with, I find doing it myself leads not just to better results, but faster ones, too. Problem is, that makes you work harder to learn new areas, and management thinks it's faster for everything.
I think it's still faster for a lot of things. If you have several different ideas for how to approach a problem, the robot can POC them very quickly to help you decide which to use. And while doing that, it'll probably mention something that'll give you ideas for a couple more approaches. So you can come up with an optimal solution in about the same time it'd take to clack out a single POC by hand.
Yeah, I was thinking about production code when I wrote that. Usually I can get something working faster that way, and for tests it can speed things up, too. But the code is so terrible in general.
Edit: "production" isn't exactly what I meant. Just, like, up to some standard above merely working.
80,000 Hours are the same cultists from LessWrong/EA who believe the singularity is coming any time now, and they're also the core of the people trying to build their imagined machine god at OpenAI and Anthropic.
It's all very much expected. Verbose nonsense is their specialty, and they were churning it out long before chatbots were a thing.