Around half of people are worried they'll lose their job to AI. And they're right to be concerned: AI can now complete real-world coding tasks on GitHub, generate photorealistic video, drive a taxi more safely than humans, and produce accurate medical diagnoses. And it's set to keep improving rapidly. But what's less appreciated is that while AI drives down the value of the skills it can do, it drives up the value of the skills it can't, because those become the bottlenecks to further automation (for a while, at least).
I feel that this article is based on beliefs grounded in optimism rather than empiricism or rational extrapolation, and on trains of thought extended deep into highly simplified territory.
Basically like the LessWrong, self-proclaimed "longtermist", and Zizian crowds.
Illustrative example: categorizing nannies under "human touch strongly preferred, perhaps as a luxury". This assumes not only that automation is possible to a degree far beyond anything we see signs of, but also that the service itself isn't inherently human.