A software developer and Linux nerd living in Germany. I’m usually a chill dude, but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt; I usually try to be nice and give good advice, though.

I’m into Free Software, self-hosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things, too.

  • 0 Posts
  • 41 Comments
Joined 10 months ago
Cake day: June 25th, 2024


  • Privacy would be the main concern. Every single one of your words, documents and pictures will probably end up in some large database over at OpenAI. I don’t like that at all. And for a company, for example, it might even be against the law to share certain client information with third parties.

    Then there’s the fact that you don’t get any of the freedoms we have with Free Software. It’s a service you rely on, with very few opportunities to customize it, look inside, or tinker. There is little control for the user whatsoever. Additionally, we’ve already seen companies cease service. So it might become unavailable tomorrow, which is a bad thing if you’re attached to it, invested in it, or have built things around it.

    And since “the internet is for porn”… we also have a noteworthy community doing those kinds of things. And well… go ahead and ask the big services to generate a lewd story. Most of them even refuse to write a murder mystery story for me; instead they’ll lecture me on how it is not ethical to murder someone. Those are use cases where local AI outperforms any of the market leaders.

    Personally, I’m a bit opposed to the entire concept of letting other people’s algorithms dictate my life. I don’t want to rely on them. I also don’t want them to pick the bias for my perspective on the world. The algorithms in social media are dwarfed by how dangerous it’s going to be once people rely on AI more and more, and it gets to choose which information to show and which to drop, what kind of bias to introduce in summaries, and so on; effectively teaching people how to think. And I already don’t like the way all the big AI chatbots talk to me, with lots of emojis and in an “explain like I’m 5” way.

    So to go back to the original question… I think the more “useful” AI is, the more reasons there are to retain some control yourself. What do you think?


  • I don’t think it is about that. The information collection is an added bonus they happily accept and make use of. I think it’s mainly about power and money, though. They get rid of everyone who isn’t completely in line and subservient; that’s from the playbook on how to become an autocratic regime. And they’re obviously interested in the money as well: cut off everyone and everything they don’t like, like weak people, poor people, your grandma and children. That money can then be funneled towards other people. Guess whom. I think the power and control aspect is the original idea, though. And money has power as well. So does information and data, so it’s more a combination of things.

    But judging by the way they act, I’d say they had a look at other oligarchies and corrupt regimes and wanted in, too. They saw that you need to replace all the people in any position of power with your own henchmen. Then, they also hate a lot of people and have always wanted to take their money. The AI and data thing looks more to me like something they discovered while at it. And I don’t believe the traditional MAGA people are smart enough to have anticipated that. But naturally, information is power. And AI can be used as a mindless slave. I’d say it’s worth it to them to foster it instead of relying on human clerks and officials. It’ll be a new form of administration, one that does away with a lot of middlemen, like the corrupt government workers other regimes have to pay.

    And Musk looks like he has his own motivation, which might or might not be aligned with the “grand plan”, if there really is one. He is (was) free to combine the useful with what’s enjoyable to him. Currently the tactic is mostly to break a lot of stuff; it doesn’t really matter how or what. So that’s what they’re doing right now. I think the struggle and in-fighting over who gets to replace what, and with exactly what, hasn’t really started in earnest yet. It’s already there, but it’s not the main concern as of now. So we can’t tell the exact dynamics we’re bound to see in the near future. I’d say mass surveillance plus yet more AI is likely a formula for success, though.


  • By the way, you can still run the Yunohost installer on top of your Debian install… if you want to… It’s Debian-based anyway, so it doesn’t really matter whether you use its own install media or run the script on an existing Debian install. Though I feel like adding: if you’re looking for Docker… Yunohost might not be your best choice. It’s made to take control itself, and it doesn’t use containers. Of course you can circumvent that and add Docker containers nonetheless… But that isn’t really the point, and you’d end up dealing with the underlying Debian anyway and just making things more complicated.

    It is a very good solution if you don’t want to deal with the CLI. But it stops being useful once you want too much customization, or unpackaged apps. At least that’s my experience. But that’s kind of always the case: simpler, with more things automatic and pre-configured, means less customizability (or more effort to actually customize things).


  • Thanks for your perspective. Sure, AI is here to stay and flood the internet with slop and arbitrary (mis)information, phrased like a factual Wikipedia article, journalism, a genuine user review, or whatever its master chose. And the negative sides of the internet were there long before we had AI to the current extent. I think it is extremely unlikely that the internet is going to move away from being powered by advertisements, though. That’s the main business model as of today, and I think it is going to continue that way. Maybe dressed in some new clothes, but social media platforms, Google, etc. still need their income. I wonder how it’ll turn out for the AI companies, though. To my knowledge, they’re currently all powered by hype and investor money, and they’re going to have to find some way to make a profit at some point. Whether that’s going to be ads or having their users pay properly, and not like today, where the majority of people I know use the free tier.


  • I think it needs to work across instances, since we’re concerned with the Fediverse, and federation is one of its defining mechanics. Also, when I have a look at my subscriptions, they come from a variety of instances. So I don’t think a single-instance feature would be of any use to me.

    Sure. And with the cosine similarity, you’d obviously need to suppress already-watched videos. I’ve watched them and the algorithm knows that, but I’d like it to recommend new videos to me.
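
    A minimal sketch of that idea, assuming videos and the user’s taste are represented as embedding vectors; all names and the `top_n` parameter here are hypothetical, not from any actual proposal:

    ```python
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity between two embedding vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def recommend(user_profile: np.ndarray,
                  videos: dict[str, np.ndarray],
                  watched: set[str],
                  top_n: int = 10) -> list[str]:
        """Rank unwatched videos by similarity to the user profile."""
        scored = [
            (video_id, cosine_similarity(user_profile, embedding))
            for video_id, embedding in videos.items()
            if video_id not in watched  # suppress already-watched videos
        ]
        scored.sort(key=lambda pair: pair[1], reverse=True)
        return [video_id for video_id, _ in scored[:top_n]]
    ```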


  • Wasn’t “error-free” one of the undecidable problems in maths / computer science? But I like how they also pay attention to semantics and didn’t choose a clickbaity title. Maybe I should read the paper and see how they did it, and whether it’s more than an AI agent at the same intelligence level guessing whether the code is correct. I mean, surprisingly enough, the current AI models usually do a good job of generating syntactically correct code one-shot. My issues with AI coding usually start once things get a bit more complex. Then it often feels like poking at things and copy-pasting various stuff from StackOverflow without really knowing why the code doesn’t handle the real-world data or fails entirely.
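
    (“Error-free” in the general sense does run into the halting problem / Rice’s theorem: no algorithm can decide a non-trivial semantic property for all programs. A rough sketch of why, where the `halts` oracle is hypothetical; the whole point is that it cannot exist:)

    ```python
    def halts(program, input_data) -> bool:
        """Hypothetical oracle: does program(input_data) halt?
        The construction below shows no such function can exist."""
        raise NotImplementedError  # not implementable in general

    def paradox(program):
        # Do the opposite of whatever the oracle predicts about us.
        if halts(program, program):
            while True:  # oracle said "halts", so loop forever
                pass
        # oracle said "loops forever", so halt immediately

    # paradox(paradox) halts if and only if it does not halt: a contradiction.
    # Hence no general `halts`, and no general "error-free" checker either.
    ```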


  • I’ve also had that. And I’m not even sure whether I want to hold it against them. For some reason it’s an industry-wide effort to muddy the waters and slap “open source” on their products. From the largest company, which chose to have “Open” in its name but opposes transparency with every fibre of its body, to Meta, the current pioneer(?) of “open sourcing” LLMs, to the smaller underdogs who pride themselves on publishing their models that way… They’ve all homed in on the term.

    And lots of the journalists and bloggers pick up on it as well. I personally think terms should be well-defined, and open source had a well-defined meaning. I get that it’s complicated with the transformative nature of AI, copyright… But I don’t think reproducibility is in question here at all. Of course we need it; that’s core to something being open. And I don’t even understand why the OSI claims it doesn’t exist… Didn’t we have the datasets available up until LLaMA 1, along with an extensive scientific paper that enabled people to reproduce the model? And LLMs aside, we sometimes have that with other kinds of machine learning…

    (And by the way, this is an old article, from the end of October last year.)