An interesting development, but I doubt it’ll be a good thing, especially at first. This looks like the kind of thing that will be an entirely new threat vector and a huge liability, even when used in the most secure way possible, but especially when used in the haphazard way we’ll certainly see from some of the early adopters.

Just because you can do a thing does not mean that you should.

I almost feel like this should have an NSFW tag because this will almost certainly not be safe for work.

Edit: looks like the article preview is failing to load… I’ll try to fix it. … Nope. Couldn’t fix.

  • Dark Arc@social.packetloss.gg · 1 day ago

    Yeah, it looks like basic reasoning, but it isn’t. These things are based on pattern recognition. “Assume all x are y, all z are y; are all z x?” is a well-known formulation (e.g., “all cats are mammals and all dogs are mammals, so are all dogs cats?” No; sharing a property doesn’t link the two groups) … I’ve seen it a fair number of times in my life.

    Recent development has added this whole “make it prompt itself about the question” phase to try to make things more accurate … but that also only works sometimes.
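    For anyone who hasn’t seen it, that “prompt itself” phase is usually nothing more exotic than a couple of extra round trips through the same model: draft, critique, revise. Here’s a minimal sketch of that loop; `generate()` is a hypothetical placeholder for whatever LLM completion call you actually use, and the prompts are made up for illustration.

    ```python
    def generate(prompt: str) -> str:
        """Hypothetical stand-in for a real LLM completion/chat API call."""
        raise NotImplementedError

    def answer_with_self_check(question: str) -> str:
        # Pass 1: ask the model to reason step by step before answering.
        draft = generate(
            f"Question: {question}\n"
            "Think through this step by step, then give your answer."
        )
        # Pass 2: feed the draft back and ask the model to critique it.
        critique = generate(
            f"Question: {question}\nDraft answer:\n{draft}\n"
            "Check the draft for logical mistakes and list any you find."
        )
        # Pass 3: produce a revised answer using the critique.
        return generate(
            f"Question: {question}\nDraft answer:\n{draft}\nCritique:\n{critique}\n"
            "Write the corrected final answer."
        )
    ```

    The catch, as noted above, is that the critique pass is generated by the same pattern-matcher as the draft, so it only catches mistakes it happens to recognize.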

    AI in LLM form is just a sick joke. It’s like watching a magic trick where half the audience expects the magician to ACTUALLY levitate next year because … “they’re almost there!!”

    Maybe I’ll be proven wrong, but I don’t see it…