Unrepentant Techno-Hermit, forever trying to make less do more.

  • 0 Posts
  • 15 Comments
Joined 2 months ago
Cake day: March 8th, 2025


  • Almost certainly not, no. Evolution may work faster than once thought, but not that fast. The problem is that societal and, in particular, technological development is now vastly outstripping our ability to adapt. It’s not that people are getting dumber per se - it’s that they’re having to deal with vastly more stuff. All. The. Time. For example, consider the world as it was a scant century ago - virtually nothing in evolutionary terms. A person did not have to cope with what was going on on the other side of the planet, and probably wouldn’t even know for months, if ever. Now? If an earthquake hits Paraguay, you’ll be aware in minutes.

    And you’ll be expected to care.

    Edit: Apologies. I wrote this comment as you were editing yours. It’s quite different now, but you know what you wrote previously, so I trust you’ll be able to interpret my response correctly.


  • Thank you. I appreciate you saying so.

    The thing about LLMs in particular is that - when used like this - they constitute one such grave positive feedback loop. I have no problem in principle with machine learning. It can be a great tool to illuminate otherwise completely opaque relationships in large scientific datasets, for example, but a polynomial binary space partitioning of a hyper-dimensional phase space is just a statistical knowledge model. It does not have opinions. All it can do is codify what appears to be the consensus of the input it’s given. Even assuming - which may well be far too generous - that the input is truly unbiased, at best all it’ll tell you is what a bunch of morons think is the truth. At worst, it’ll just tell you what you expect to hear. It’s what everybody else is already saying, after all.

    And when what people think is the truth and what they want to hear are both nuts, this kind of LLM echo chamber suddenly becomes unfathomably dangerous - a toy illustration of how quickly it compounds is sketched below.
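
    To make that feedback loop concrete, here’s a minimal sketch in Python with entirely made-up numbers: a “model” that does nothing but echo back the majority view of its training data, whose output then sways part of the population before the next round of training. A modest 55/45 opinion split hardens into near-unanimity within a handful of rounds.

    ```python
    # Toy simulation of the feedback loop described above: a "model" that
    # merely codifies the consensus of its input, whose outputs are then
    # fed back into the next round. All parameters are hypothetical.
    import random

    random.seed(42)

    POPULATION = 10_000   # hypothetical number of opinion-holders
    ROUNDS = 10           # training/consumption cycles
    INFLUENCE = 0.2       # fraction who adopt the model's output each round

    # Start with a mild 55/45 split between two opinions, "A" and "B".
    opinions = ["A"] * 5_500 + ["B"] * 4_500

    for round_no in range(1, ROUNDS + 1):
        # "Training": the model has no opinion of its own; it simply
        # learns which answer is most common in its input.
        consensus = max(set(opinions), key=opinions.count)

        # "Deployment": a fraction of the population hears the model
        # echo the consensus back at them and adopts it.
        for i in random.sample(range(POPULATION), int(POPULATION * INFLUENCE)):
            opinions[i] = consensus

        share = opinions.count(consensus) / POPULATION
        print(f"round {round_no}: model says {consensus!r}, "
              f"now held by {share:.0%} of the population")
    ```

    Running it, the majority share climbs from 55% to well over 90% in a few rounds - no malice required, just a statistical model faithfully reflecting its own amplified input.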