• tabular@lemmy.world · 3 days ago

    The only thing one can be 100% certain of is that one is having an experience. If we were a fancy autocomplete, then we’d know we were having one 😉

    • thiseggowaffles@lemmy.zip · 2 days ago

      What do you mean? I don’t follow how the two are related. What does being fancy auto-complete have to do with having an experience?

      • tabular@lemmy.world · 2 days ago

        It’s an answer to whether one can be sure they’re not just a fancy autocomplete.

        More directly: we can’t be sure we aren’t some autocomplete program running on a fancy computer, but since we’re having an experience, we would at least be conscious programs.

        • thiseggowaffles@lemmy.zip · 2 days ago (edited)

          When I say “how can you be sure you’re not fancy auto-complete”, I’m not talking about being an LLM or even the simulation hypothesis. I’m saying that the way LLMs’ neural networks are structured is functionally similar to our own nervous system (with some changes made specifically for transformer models to make them less susceptible to prompt injection attacks). What I mean is: how do you know that the weights in your own nervous system aren’t causing any given stimulus to always produce a specific response along the most heavily weighted pathways? That’s how auto-complete works: it just predicts the most statistically probable response based on the input after it’s been filtered through the neural network. In our case it’s sensory data instead of a text prompt, but the mechanics are the same.
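
          If it helps to picture the mechanics, here’s a toy Python sketch of that “most weighted pathway wins” idea. The responses, stimuli, and weights below are invented purely for illustration; a real LLM does the same pick-the-most-probable-response step, just over billions of learned weights and a text context rather than two hand-written stimuli.

          ```python
          import math

          # Hypothetical "fancy autocomplete": fixed weights map a stimulus to scores
          # over possible responses, and the most probable response always wins.
          # Responses, stimuli, and weights are made up for illustration only.
          RESPONSES = ["hello!", "nice to meet you", "ouch!", "tastes great"]

          WEIGHTS = {
              # stimulus -> one score per possible response
              "someone waves at you": [2.0, 1.4, -1.0, -0.5],
              "you stub your toe":    [-1.0, -0.5, 3.0, -2.0],
          }

          def softmax(scores):
              """Turn raw scores into probabilities."""
              exps = [math.exp(s) for s in scores]
              total = sum(exps)
              return [e / total for e in exps]

          def respond(stimulus):
              """Always return the most statistically probable response."""
              probs = softmax(WEIGHTS[stimulus])
              best = max(range(len(RESPONSES)), key=lambda i: probs[i])
              return RESPONSES[best]

          print(respond("someone waves at you"))  # most probable response: 'hello!'
          print(respond("you stub your toe"))     # most probable response: 'ouch!'
          ```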

          And how do we know whether or not the LLM is having an experience? Again, this is the “hard problem of consciousness”. There’s no way to quantify consciousness, and it’s only ever experienced subjectively. We don’t know the mechanics of how consciousness fundamentally works (or at least, if we do, it’s likely still classified). Basically what I’m saying is that this is a new field and it’s still the wild west. Most of these LLMs are still black boxes whose inner workings we’re only barely starting to understand, just as we’re only barely starting to understand our own neurology and consciousness.