Of course, that has always been true. What concerns me now is the proportion of useful to useless people. Most societies are - while cybernetically complex - rather resilient. Network effects and self-organization can route around and compensate for a lot of damage, but there comes a point where having a few brilliant minds in the midst of a bunch of atavistic, confused, panicking knuckle-draggers just isn’t going to be enough to avoid cascading failure. I’m seeing a lot of positive feedback loops emerging, and I don’t like it.
As they say about collapsing systems: First slowly, then suddenly very, very quickly.
The same argument was already being made around 2500 BCE in Mesopotamian writings: the corruption of society will lead to deterioration and collapse, the process will accelerate toward the inevitable end, and the remaining minds will write the history books and record the end of humanity.
…and as you can see, we’re 4500 years into this stuff, still kicking.
One mistake people of every generation make is assuming the previous ones were smarter and better. No, they weren’t; they were just as naive, if not more so, and had the same illusions of grandeur and the same outside influences. That has never gone anywhere and never will. We can shift things for better or worse, but societal collapse due to people suddenly getting dumb is not something to reasonably worry about.
Almost certainly not, no. Evolution may work faster than once thought, but not that fast. The problem is that societal - and in particular technological - development is now vastly outstripping our ability to adapt. It’s not that people are getting dumber per se - it’s that they’re having to deal with vastly more stuff. All. The. Time. For example, consider the world as it was a scant century ago - virtually nothing in evolutionary terms. A person did not have to cope with what was going on on the other side of the planet, and probably wouldn’t even know about it for months, if ever. Now? If an earthquake hits Paraguay, you’ll be aware of it in minutes.
And you’ll be expected to care.
Edit: Apologies. I wrote this comment as you were editing yours. It’s quite different now, but you know what you wrote previously, so I trust you’ll be able to interpret my response correctly.
Yes, my apologies - I edited it so drastically to better get my point across.
Sure, we get more information. But we also learn to filter it, to adapt to it, and eventually to disregard the things we have little control over while finding what we can do to make them better.
I believe that, eventually, we can fix all of this as well.
1925: a global financial collapse is just about to happen, and many people are enjoying the ride as the wave starts to break, just after the war to end all wars that did reach across the Atlantic Ocean…
Yes, it is accelerating. Alvin Toffler wrote Future Shock more than 50 years ago, already describing people overwhelmed by accelerating change, and it has only continued to accelerate since then. But these are not entirely new problems, either.
I mean, the Mesopotamian writings likely didn’t foresee having a bunch of dumb fucks around who can be easily manipulated by the gas and oil lobby - and that shit will actually end humanity.
People were always manipulated. I mean, they were indoctrinated with the divine power of rulers; how much worse can it get? It’s just that now the manipulation tries to be a bit more stealthy.
And previously there were plenty of existential threats: famine, plague, all the stuff that actually threatened to wipe us out.
We’re still here, and we have what it takes to push back. We need more organizing, that’s all.
Well, it doesn’t have to get worse; AFAIK we are still headed toward human extinction due to climate change.
Honestly, the “human extinction” level of climate change is very far away. What we’re currently preventing is the “sunken coastal cities, economic crisis, and famine in poor regions” kind of change; it’s just that “we’re all gonna die” sounds flashier.
We have time to change course; it’s just that the sooner we do, the less damage will be done. That’s why it’s important to act now.
Really well said.
Thank you. I appreciate you saying so.
The thing about LLMs in particular is that - when used like this - they constitute one such grave positive feedback loop. I have no problem with machine learning in principle. It can be a great tool for illuminating otherwise completely opaque relationships in large scientific datasets, for example, but a polynomial binary space partitioning of a hyper-dimensional phase space is just a statistical knowledge model. It does not have opinions. All it can do is codify what appears to be the consensus of the input it’s given. Even assuming - which may well be far too generous - that the input is truly unbiased, at best all it’ll tell you is what a bunch of morons think is the truth. At worst, it’ll just tell you what you expect to hear. It’s what everybody else is already saying, after all.
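To make that “codified consensus” point concrete, here’s a toy sketch in Python - purely illustrative, nothing like a real transformer, and the corpus is made up - of a bigram model that “predicts” whichever continuation its training text contains most often. Whatever the majority said, right or wrong, is all it can ever give back:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a corpus,
# then predict the most frequent continuation. Purely illustrative -
# a real LLM is vastly more sophisticated, but the dependence on the
# statistics of its input is the same in miniature.
corpus = (
    "the earth is flat . "   # majority opinion (2 votes)
    "the earth is flat . "
    "the earth is round . "  # minority opinion (1 vote)
)

# Count how often each word follows each other word.
following = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    following[prev][nxt] += 1

def predict(prev: str) -> str:
    # Return the most common continuation seen in training - the
    # "consensus" - or a placeholder if the word was never seen.
    best = following[prev].most_common(1)
    return best[0][0] if best else "<unknown>"

print(predict("is"))  # -> "flat": the majority view, not the truth
```

Garbage consensus in, garbage consensus out - and if the output gets fed back in as new training data, the loop closes.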
And when what people think is the truth and what they want to hear are both nuts, this kind of LLM echo chamber suddenly becomes unfathomably dangerous.
For some, yes, unfortunately. But we all choose our path.