Ask HN: What do you expect will be the real impact of AI on society in 10 years
I've started to get more curious about this, especially beyond the obligatory "genAI will do everything." What are your thoughts on the societal impact? As I see it currently: knowledge workers are (surprisingly?) the first group seeing massive job losses and replacement. In the past, "knowledge" was a scarce resource; now the AI delivering knowledge will become the scarce resource. So the people owning / hosting it can sell it and make lots of money -> in the current setup this means the few who are rich in AI will get richer. In addition, letting your money work for you (investment) will also stay, so again rich people will become richer.
The interesting question is about physical labor. The economics of pushing atoms in the physical world is nowhere near the economics of pushing electrons (bytes), so if you are not part of group 1 (entrepreneurs) or group 2 (investors), doing physical work is something that will still earn you some money (I also expect care work to stay, since people will probably prefer for a long time to have humans care for them). But this still means groups 1 and 2 will be the big winners, paying some money to group 3.
Where do you disagree? Where do you see a different outcome? I'm curious to hear your thoughts.
I think kids will have a hard time learning and getting smart with AI chewing everything up for them.
I've read a few stories about parents questioning their child's overuse of AI, and on top of that I've seen my fair share of adults who cannot do anything without asking ChatGPT first.
Sam will promise in 2035 that AGI is very close and will probably happen at the end of the year, same as every year (Elon will also still promise FSD is close and will probably come out EOY, except that FSD might actually be realistic by then).
People will use AI a little here and there, but not much will have changed because of it. Mostly more work for the people who need to correct AI mistakes.
Consolidation of power.
What would that look like? And what would be the societal implications?
I have a different view, but for that I'd need to PM/DM you, whatever way is suitable/comfortable for you. I'm not a scammer and I don't have malicious intentions.
It's not one of the same old boring responses that just bring more uncertainty.
It's obvious, but not so obvious that you'd be likely to get it from current AI reasoning models.
Now I'm intrigued. Why not post it publicly?
everything
Well the most pressing question is whether it will kill us all. There are good reasons to suspect that; Nick Bostrom's Superintelligence: Paths, Dangers, Strategies (2014) remains my favorite introduction to this thorny problem, especially the chapter called "Is the Default Outcome Doom?" Whether LLMs are sufficient for artificial superintelligence (ASI) is of course also an open question; I'm actually inclined to say no, but there probably isn't much left to get to yes.
A lot of smart people, including myself, find the argument convincing, and have tried all manner of approaches to avoid this outcome. My own small contribution to this literature is an essay I wrote in 2022, which uses privately paid bounties to induce a chilling effect around this technology. I sometimes describe this kind of market-first policy as "capitalism's judo throw". Unfortunately it hasn't gotten much attention even though we've seen this class of mechanisms work in fields as different as dog littering and catching international terrorists. I keep it up mostly as a curiosity these days. [1]
The other future is boring: our current models basically stagnate at their current ability, we learn to use them as best we can, and life goes on. If we assume the answer to "Does non-aligned ASI kill us all?" is "No", and the answer to "Do we keep developing AI, S or non-S?" is "Yes", then I guess you could assume it would all work out in the end for the better one way or another and stop worrying about it. But we'd do well to remember Keynes: in the long run, we're all dead. What about the short term?
Knowledge workers will likely specialize much harder, until each crosses a threshold beyond which they are the only person in the world who can even properly vet whether a given LLM is spewing bullshit or not. But I'm not convinced that means knowledge work will actually go away, or even recede. There's an awful lot of profitable knowledge in the world, especially if we take the local knowledge problem seriously. You might well make a career out of being the best-informed person on some niche topic that only affects your own neighborhood.
How about physical labor? Probably a long, slow decline as robotics supplants most trades, but even then you'll probably see a human in the loop for a long time. Expertise in old knob-and-tube wiring, for example, is hard enough to find in the first place, let alone to distill into a model, and the kinds of people who currently excel at that work probably won't be handing over the keys too quickly. Heck, half of them don't run their businesses on computers at all (it's much easier to get paid under the table that way).
Businesses which are already big have enormous economic advantages to scaling up AI, and we should probably expect them to continue to grow market share. So my current answer, which is a little boring, is simply: work hard now, pile money into index funds, and wait for the day when the S&P 500 starts to double every week or so. Even if it never gets to that point, this has been pretty solid advice for the last 50 years or so (rough numbers below). You could call this the a16z approach: assume there is no crisis, that things will just keep getting more profitable faster, and ride the wave. And the good news is, if you have any disposable capital at all, it's easy to get a first personal toehold by buying e.g. Vanguard ETFs. Your retirement accounts likely already hold a lot of this anyway. Congrats! You're already a very small part of the investor class.
[1]: https://andrew-quinn.me/ai-bounties/
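To put rough numbers on the index-fund point, here's a minimal compounding sketch. The ~7% real annual return and the $10,000 starting capital are illustrative assumptions on my part (a commonly cited long-run average, not a figure from this thread), contrasted with the hypothetical "doubles every week" scenario:

    # Rough compound-growth comparison; the rates and starting capital
    # below are illustrative assumptions, not investment advice.
    def grow(principal, rate_per_period, periods):
        """Compound `principal` at `rate_per_period` for `periods` periods."""
        return principal * (1 + rate_per_period) ** periods

    start = 10_000  # hypothetical starting capital
    print(round(grow(start, 0.07, 30)))  # 30 years at ~7%/yr real -> ~76,000
    print(round(grow(start, 1.00, 10)))  # 10 weeks of doubling    -> ~10,240,000

Boring compounding still multiplies your stake several times over a working life; the "doubling every week" world blows past that in a couple of months, which is why the advice is the same either way.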