I’m a #SoftwareDeveloper from #Switzerland. My languages are #Java, #CSharp, #Javascript, German, English, and #SwissGerman. I’m in the process of #LearningJapanese.

I like to make custom #UserScripts and #UserStyles to personalize my experience on the web. In terms of #Gaming, currently I’m mainly interested in #VintageStory and #HonkaiStarRail. I’m a big fan of #Modding.
I also watch #Anime and read #Manga.

#fedi22 (for fediverse.info)

  • 2 Posts
  • 158 Comments
Joined 1 year ago
Cake day: March 11th, 2024

  • Update 7/31/25 4:10pm PT: Hours after this article was published, OpenAI said it removed the feature from ChatGPT that allowed users to make their public conversations discoverable by search engines. The company says this was a short-lived experiment that ultimately “introduced too many opportunities for folks to accidentally share things they didn’t intend to.”

    Interesting, because the checkbox is still there for me. I don’t see that anything has changed at all; maybe they made the fine print a bit whiter? But nothing else.

    In general, this reminds me of the incognito drama. Iirc people were unhappy that incognito mode didn’t prevent Google websites from fingerprinting you. Which… the mode never claimed to do; it explicitly told you it didn’t do that.

    For chats to be discoverable through search engines, you not only have to explicitly and manually share them, you also have to then opt in, via a checkbox, to having them appear in search engines.

    The main criticism I’ve seen is that the checkbox’s main label only says it makes the chat “discoverable”, while the search-engine clarification is in the fine print. But I don’t really understand how that is unclear. Like, even if chats were made discoverable through ChatGPT’s website only (so no third-party data sharing), Google would still get its hands on them via its crawler. This is just OpenAI skipping the middleman; the end result is the same. We’d still hear news about chats appearing on Google.

    This just seems to me like people clicking a checkbox based on vibes rather than thinking critically about what consequences it could have and whether they want them. I don’t see what can really be done about people like that.

    I don’t think OpenAI can be blamed for the data sharing, as it’s opt-in, nor for the chats ending up on Google at all. If the latter were a valid complaint, it would also be valid to complain to the Lemmy devs about Lemmy posts appearing on Google. And again, I don’t think the label complaint has much weight either, because if a chat is discoverable, it gets to Google one way or another.

  • Why is Fediverse moderation even more draconian than Reddit?

    No central oversight. Reddit can theoretically remove the worst of the worst, but the same doesn’t apply on the fediverse. Not across instances at least. Theoretically that lack of control is why we have defederation, but no one is going to defederate over some mods being extra draconian.

    As for why it’s even at a similar level to Reddit in the first place: despite the fediverse’s superiority complex, moderation on Reddit is organic, and so it is here. It’s not like Reddit tells moderators to be the way they are; they choose to be that way. And there’s no reason why they would choose to be different on the fediverse.

    I think it’s worth remembering that people who seek the power of authority aren’t usually the best people. I’m not saying this applies to all moderators, but those who become moderators for the power it gives them aren’t going to be friendly, no matter which platform they’re on. It’s not like the platform makes them bad; it just enables them by giving them that power.

    “Why is it so hard to find a non left leaning place on the Internet?”

    There ARE right wing Lemmy instances. They’re just usually defederated by the ones leaning left. There’s also /r/conservative on Reddit.

    “You know I kind of feel Israel has a right to defend itself ya know?”

    This one is definitely a big problem imo. Like, I’m not in the pro-Israel camp, but I think it’s clear this side of the fediverse is currently an echo chamber that isn’t welcoming to opposing voices, especially on that topic, but also on others like AI.

    Reddit is a lot better in that regard. I think there is a point to fighting disinformation and bad-faith actors, but that reasoning doesn’t hold if you then allow one side’s disinformation (like the whole “AI is completely useless” narrative, which is just factually false; it’s being abused for tasks it’s wildly unsuited for, but that doesn’t make it useless for what it’s designed to do) or tolerate complete faith in your own side’s propaganda.

    Imo this is a big barrier to the fediverse currently. I can’t in good faith recommend the fediverse to people whom I know to be right-leaning, because I know they’re going to have a bad time here.

    “Even posting a Fox News article in the News areas will get your post removed…with a ban of course.”

    I do think a ban is excessive unless you’re a repeat offender, but… it makes sense for a News community to ban articles from a self-proclaimed entertainment network that only idiots would take as news (Fox News’s official position as argued in court, not my opinion).

  • Here’s a question regarding the informed consent part.

    The article gives the example of asking whether the recipient wants the AI’s answer shared.

    “I had a helpful chat with ChatGPT about this topic some time ago and can share a log with you if you want.”

    Do you (I mean people reading this thread in general, not OP specifically) think Lemmy’s spoiler formatting would count as informed consent if it’s properly labeled as containing AI text? I mean, the recipient has to put in the effort to open the spoiler manually.
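
    For reference, here’s roughly what such a labeled spoiler could look like with Lemmy’s ::: spoiler syntax (if I remember the syntax right; the label wording is just my own example):

        ::: spoiler AI-generated text (ChatGPT log)
        The shared conversation or excerpt would go here.
        :::

    The reader then has to click the label to expand the content, which is the manual step I mean.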