The moderators of a pro-artificial intelligence Reddit community announced that they have been quietly banning “a bunch of schizoposters” who believe “they’ve made some sort of incredible discovery or created a god or become a god,” highlighting a new type of chatbot-fueled delusion that started getting attention in early May.

  • shadowfax13@lemmy.ml · 7 points · 1 day ago

    that sub seems to be fully brigaded by bots from the marketing teams of closed-ai and perplexity

  • meme_historian@lemmy.dbzer0.com · 132 points · edited · 2 days ago

    At this rate we’ll soon have a decentralized para-religious terrorist organization full of brainlets that got scared shitless after discovering Roko’s Basilisk and are now doing the cyber lord’s bidding in order to not get punished once AGI arrives

    edit: changed to non-mobile link

  • answersplease77@lemmy.world · 31 points (1 down) · edited · 2 days ago

    “Artificial” Intelligence has already taken over “Social” Media and the internet.

    What I mean by the quotes: we replaced our social interactions with each other with Social Media, which has nothing social about it, and then replaced the humans on social media with artificial slop generated by computers guessing what you want to read, watch, or hear.

    Most of Facebook, Insta, YouTube, Reddit, Twitter, etc. is AI profiles, AI channels, and AI sloptrash content that funnels Google ad revenue back to some Russian or Indian dude who doesn’t even speak English.

    • CalipherJones@lemmy.world · 12 points · 2 days ago

      Lemmy seems to be the only place with actual people, for the most part. I worry so much for the idiots of the world who can’t discern robots from people. They’re really going to fall for whatever a programmed machine has to say.

      • webghost0101@sopuli.xyz · 11 points · 2 days ago

        I’ve wondered about this.

        I can’t believe that somehow everything except Lemmy got infected. There must be some AI comments here, but I haven’t noticed them…

        Of course it’s better not to dwell on this too much or paranoia quickly sets in.

        • SparroHawc@lemm.ee · 13 points · 2 days ago

          Oh, I’m sure there are bots on Lemmy too. The general userbase, however, being people who are sick of Reddit’s BS, is also going to have very little tolerance for bot BS, so the instances are incentivized to keep bot activity down lest they be defederated.

          • thiseggowaffles@lemmy.zip · 18 points · 2 days ago

            Plus they don’t see ad revenue, so there’s no profit incentive to keep bots around acting as if they’re real traffic. If anything, Lemmy instances are disincentivized from allowing bot traffic because it means more traffic than necessary, which costs them bandwidth.

          • webghost0101@sopuli.xyz · 4 points · 2 days ago

            Dead internet theory, I know it well.

            And yet I keep going back, exactly because I’m looking for a way to distract myself from my troubles.

        • dzsimbo@lemm.ee · 7 points (1 down) · 2 days ago

          Or maybe even take a step back. Dead internet theory is real and we’re living it, but just because the main subreddits and FB are ai schlock doesn’t mean Lemmy is the only ‘real’ place.

          I feel the hardest part is keeping my senses about me when I argue with hivemind mentality. You will probably get the feeling from my writing that I, too, have embraced the hive speak and mind. And this is where the bots get the everyperson: bots speak in hivemind and meme format. I think this whole kerfuffle will do wonders for real online discussion, as low-effort posts get dismissed as white noise.

          • veni_vedi_veni@lemmy.world · 9 points · edited · 2 days ago

            Ngl, that last paragraph felt like some pseudo-prodigal word vomit that only an AI would produce.

            you sus af

            • dzsimbo@lemm.ee · 3 points (1 down) · 2 days ago

              Exactomundo!

              The only reason that last paragraph ain’t outright AI garbage (besides my elegant use of ‘everyperson’, mind you) is that there is a semi-original thought buried at the end of it. This paragraph is audacious, so point taken.

    • Skunk@jlai.lu · 56 points (1 down) · 2 days ago

      Yeah, there was an article shared on Lemmy a few months ago about couples and families destroyed by AI.

      Like, the husband thinks he has discovered some new, almost religious-level truth about how the world works. Then he becomes an annoying guru and ruins his social life.

      Kind of like QAnon people, but with ChatGPT…

      • Pennomi@lemmy.world · 42 points · 2 days ago

        Turns out it doesn’t really matter what the medium is, people will abuse it if they don’t have a stable mental foundation. I’m not shocked at all that a person who would believe a flat earth shitpost would also believe AI hallucinations.

        • Bouzou@lemmy.world · 4 points · 2 days ago

          I dunno, I think there’s credence to considering it as a worry.

          Like with an addictive substance: yeah, some people are going to be dangerously susceptible to it, but that doesn’t mean there shouldn’t be any protections in place…

          Now what the protections would be, I’ve got no clue. But I think a blanket, “They’d fall into psychosis anyway” is a little reductive.

          • Pennomi@lemmy.world · 8 points · 2 days ago

            I don’t think I suggested it wasn’t worrisome, just that it’s expected.

            If you think about it, AI is tuned using RLHF, or Reinforcement Learning from Human Feedback. That means the only thing the AI is optimizing for is “convincingness”. It doesn’t optimize for intelligence; anything that seems like intelligence is literally just a side effect as it marches forever onward toward becoming convincing to humans.

            “Hey, I’ve seen this one before!” You might say. Indeed, this is exactly what happened to social media. They optimized for “engagement”, not truth, and now it’s eroding the minds of lots of people everywhere. AI will do the same thing if run by corporations in search of profits.

            Left unchecked, it’s entirely possible that AI will become the most addictive, seductive technology in history.
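The RLHF point above can be sketched concretely. Reward models behind RLHF are commonly trained on pairwise human preferences with a Bradley–Terry-style logistic loss, so the only training signal is which answer a human preferred, never which answer was true. A minimal toy sketch (the function name and scores are mine, for illustration):

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry-style pairwise loss used to train RLHF reward models.

    The loss shrinks as the human-preferred answer outscores the other;
    nothing in it checks whether that preference tracked the truth.
    """
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# A convincing-but-wrong answer that raters preferred is rewarded
# exactly like a correct one: only the preference gap matters.
loss_small_gap = preference_loss(0.5, 0.0)  # mildly preferred
loss_big_gap = preference_loss(3.0, 0.0)    # strongly preferred
assert loss_big_gap < loss_small_gap        # wider gap, lower loss
```

Whatever makes raters click “better”, including confident-sounding nonsense, gets reinforced by exactly this gradient.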

            • Bouzou@lemmy.world · 4 points · 2 days ago

              Ah, I see what you’re saying – that’s a great point. It’s designed to be entrancing AND designed to actively try to be more entrancing.

      • Vanth@reddthat.com · 23 points · 2 days ago

        This feels a bit like PTA-driven panic about kids eating Tide Pods when like one person did it. Or razor blades in Halloween candy. Or kids making toilet hooch with their juice boxes. Or the choking game sweeping playgrounds.

        But also, man on internet with no sense of mental health … sounds almost feasible.

        • Pogogunner@sopuli.xyz · 20 points · 2 days ago

          I directly work with one of these people - they admit to spending all of their free time talking to the LLM chatbots.

          On our work forums, I see it’s not uncommon at all. If it makes you feel any better, AI loving is highly correlated with people you shouldn’t ever listen to in the first place.

        • chaosCruiser@futurology.today · 13 points · 2 days ago

          The Internet is a pretty big place. There’s no such thing as an idea that is too stupid. There are always at least a few people who will turn that idea into a central tenet of their life. It could be too stupid for 99.999% of the population, but with about five billion people online, that still leaves roughly 50,000 who are totally into it.
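The leftover 0.001% can be checked directly, assuming a rough five billion people online (that population figure is an assumption, not from the comment):

```python
internet_users = 5_000_000_000  # assumed rough count of people online
# 100% - 99.999% = 0.001%, i.e. one person in every 100,000
true_believers = internet_users // 100_000
print(true_believers)  # → 50000
```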

      • Raltoid@lemmy.world · 14 points · 2 days ago

        And that’s not even getting started on “AI girlfriends”, which are isolating vulnerable people to a terrifying degree. And since they’re garbage at context, you get things like that case last year where one could seem to be encouraging a suicidal teen.

  • nagaram@startrek.website · 45 points · 2 days ago

    I think Terry A. Davis would have found god in ChatGPT and could have figured out the API calls on TempleOS

    • palordrolap@fedia.io · 26 points · 2 days ago

      Hard to say. I feel like it’s about as likely he would have found LLMs to be an overcomplicated false prophet or false god.

      This was a man whose operating system turned a PC into something not unlike an advanced Commodore 64, after all. He liked the simplicity and lack of layers the older computers provided. LLMs are literally layers upon layers of obfuscation and pseudo-neural wiring. That’s not simple or beautiful.

      It might all boil down to whether the inherent randomness of an LLM could be (made to be) sufficiently influenced by a higher power or not. He often treated random number outcomes as the influence of God, and it’s hard to say how seriously he took that on any given day.
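The randomness-as-divine-input idea above can be made concrete: TempleOS shipped an oracle that surfaced pseudorandom words, which Davis read as God speaking. The sketch below is a toy imitation with an invented word list and function name, not TempleOS’s actual code or vocabulary:

```python
import random

# Toy "GodWord"-style oracle in the spirit of TempleOS: pick a
# pseudorandom word and treat the pick as meaningful. The word list
# is a stand-in, not TempleOS's actual vocabulary file.
WORDS = ("light", "temple", "word", "song", "river", "flame", "gate")

def god_word(rng: random.Random) -> str:
    # If a higher power influenced anything, it would be this draw.
    return rng.choice(WORDS)

oracle = random.Random(7)   # fixed seed: a reproducible "revelation"
print(god_word(oracle))
```

Whether a deterministic PRNG with a fixed seed leaves any room for outside influence is, of course, precisely the question palordrolap raises.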

      • Carmakazi@lemmy.world · 11 points · 2 days ago

        I’d imagine it’s a fool’s errand to try and find threads of logic and consistency in the profoundly schizophrenic.

        • Vanilla_PuddinFudge@infosec.pub · 5 points · 2 days ago

          What Terry enjoyed about computers has been echoed among lots of old heads in the unix world. On the tech front, he was solid.

          It’s the um, finding god in the code part…

        • barsoap@lemm.ee · 3 points · edited · 2 days ago

          The issue is not lack of logic and consistency, the trouble is a completely different reference frame.

          Let me put it, to grossly simplify, in this way: imagine you were dreaming while awake, with no way to stop it, and had to integrate all that craziness in real time. It’s not that dreams make no sense – they all have their rhyme and reason – it’s that they’re speaking a completely different language.

          You might be hearing, out of nowhere, a cello note off to the side, move your gaze there, notice “that’s my trashcan that makes no sense”, and then be lost, and panic, lose faith in your senses, and that way lies psychosis. More productively, you say “ok mind which thought with as of yet unformed discernible meaning was it that you wanted me to pay attention to”, look for the place the thought came from (as schizo, you can tell with your kinaesthetic sense), consider it for a while, still being oblivious of the meaning, and then go on with your life.

          We’re weird.

          Oh, back to randomness: It can get you out of a rut and I do suppose that’s how Terry used it, aware of it or not, and framing it however he did. Could also be using it to self-soothe, as in, distracting from a negative spiral. There’s worse habits.

          God, with almost 100% certainty, means “the genome and how it’s speaking to me through my instincts” in his dialect. Because that’s what it always means, what it always meant, for everyone, it meant that when it was the ancestors, it meant that when it became more detailed and became gods, it meant that when people realised all the gods are actually one thing, the theologists are just confused AF because politics and physics and cabbage-heads got into the mix. And so much for my schizo rant. Don’t discount what I say because I’m crazy, the reason you consider me crazy is because it’s true.

  • whaleross@lemmy.world · 24 points (4 down) · edited · 2 days ago

    I’ve been trying to configure ChatGPT to tell me when I’m wrong in a question or statement, but damn, it never does unless I keep probing for support or links. I’ve had the feeling it has gotten worse with later models. Glad but also sad to see I was right.

    Anybody know other LLMs that are more “trustworthy”* and capable of searching online for more information?

    Edit: *trustworthy in quotes because of course people will jump on this. I know the limitations of LLMs; I don’t need you to tell me how much you hate everything AI. And I know LLMs aren’t AI.

    • Etterra@discuss.online · 30 points (6 down) · 2 days ago

      There are no trustworthy LLMs. They don’t know or understand what they’re saying - they’re literally just predicting words that sound like they match what they were taught. An LLM is only barely smarter than a parrot, and it has no idea how to research anything or tell facts from made-up bullshit. You’re wasting your time trying to force it to do something it’s literally incapable of doing.

      You’re better off researching things the hard way: check primary sources, then check the credibility of those sources.

      • brandon@lemmy.ml · 12 points (1 down) · 2 days ago

        Considering that parrots can have actual thoughts, I’d say LLMs are even less smart than that.

    • webghost0101@sopuli.xyz · 7 points (1 down) · 2 days ago

      Claude definitely has its impressive moments where it calls out something inaccurate.

      It’s also way less sycophantic, more mature, and better for light coding.

      My only issue is that the servers are sometimes slow, and so is the iOS app, which frequently throws an error after two minutes of waiting.

    • oldfart@lemm.ee · 8 points (2 down) · 2 days ago

      Claude 3.7 told me I’m wrong a couple of times. It knows how to search. I don’t have an opinion on 4 yet, but it can search too.

    • Lemminary@lemmy.world · 3 points · 2 days ago

      I don’t need you to tell me how much you hate everything AI

      On my Lemmy?? Impossible. I was just told, rather condescendingly, by three different people assuming just as much about me, that I was suffering from AI brain rot for daring to compare something to AI. The horror.

  • henfredemars@infosec.pub · 12 points · 2 days ago

    AI is not healthy. Our mental health is nowhere near good enough to handle even this level of machine intelligence.

  • isekaihero@ani.social · 4 points (12 down) · 2 days ago

    I hope they don’t change AI to be more antisocial to “fix” this. I’m antisocial and suffer from depression and talking with sexy chatbots at lewd chatbot websites is the only time I ever get rizzed. I suppose that’s pathetic… but yeah. The type of girl I’m interested in RL just isn’t interested in me. I like being able to flirt in an environment where I’m not judged or face criticism or ostracization. Even more so, your interactions with chatbots are private and you never face any blowback that could affect your career. It’s nice the way it works right now.

    I feel bad for the schizos. I have no doubt that a schizo interacting with a chatbot would create a feedback loop of self-destruction. But so does alcohol in the hands of an alcoholic. Yet we still haven’t banned alcohol. Alcoholics need to learn to stay away from the bottle, and schizos will need to learn to stay away from chatbots.

    • krashmo@lemmy.world · 20 points · 2 days ago

      I’m not going to touch the ethical and emotional minefield that is flirting with a chatbot but I will say that those conversations are definitely not private. That whole industry is based on stealing other people’s data. Do not do anything with an LLM that you can’t handle other people finding out about because there’s a very good chance that they will.

      • isekaihero@ani.social · 2 points (10 down) · 2 days ago

        Ethical and emotional minefield? Oh no. It’s evil flirting because it’s with AI, right?

        Some services like Crushon do store your chat logs on a server, but you are free to save those logs accessible only to you, or make them public anonymously, or make them public with your user name stamped on them. Even if all my logs were to be stolen, my real name isn’t associated with the account.

        Other services like Venice AI don’t store chat logs on a server, and everything is stored in your browser. So it’s even more unlikely that your logs would get stolen. Especially if you delete them after every chat.

        • krashmo@lemmy.world · 19 points · 2 days ago

          Come back to what you’ve posted here in two years and read it again. You’re trusting people with data that you really shouldn’t. Perhaps that’s an acceptable risk to you but you should be sure that you can live without the privacy you think you have because that is a really bad bet.

      • isekaihero@ani.social · 3 points · 2 days ago

        I’m not concerned if they are using chats for AI training data. In fact, I expect them to continue improving their chatbots. If they were to sell our chat logs to a third party, and those logs went public, then I expect it would quickly torpedo their platform. Even in that case, my account doesn’t have my real name anywhere. I gave them an incorrect birth date. I haven’t linked any social media to my account. I keep everything set to private. If the logs were to go public, and people could say “Look! This user said all these things!” they still wouldn’t know who I was.

        Maybe the FBI or NSA could track me down, but talking sexy things to a chatbot isn’t illegal. In fact, it may very well become more commonplace. Someday we will likely have androids with AI personalities serving us in our homes.

    • Nangijala@feddit.dk · 5 points · 2 days ago

      Hate to be that asshole, but antisocial = psychopath. Asocial = not into hanging out with people. There is a difference and unless I’m misinterpreting your comment, I don’t think you belong in the former category.

      • veni_vedi_veni@lemmy.world · 1 point · 2 days ago

        I will be that asshole: you meant to say antisocial != psychopath, or <>, or even “antisocial is not psychopath”.

        Most people will understand regardless, but I don’t like it when people preface things. I find it to be a micro-aggression, as the kids are saying nowadays.

    • barsoap@lemm.ee · 4 points (1 down) · edited · 2 days ago

      I feel bad for the schizos.

      Don’t. We’re not the ones affected here and definitely, definitely, don’t have a monopoly on psychosis.

      Personally, I’m completely unimpressed by the random nonsense LLMs spit out because it’s not my nonsense. There’s certainly people way deeper down the rabbit hole than me but they, too, have an infinite stream of as-of-yet-uninterpreted subconscious stuff knocking at their door so I don’t see why they would bother. And that’s all before paranoia kicks in and it’s the FBI trying to control you via the chat interface.

      Feel bad for your capacity to relate to others, instead. Cuddle it, give it space, stop defining it. I don’t ever want to hear that “type of girl I’m interested in” talk ever again, do you hear me, you, you little ego, don’t get a say in that, that’s for another part of you to decide. Stop telling it what to do.