Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Last Stubsack for 2025 - may 2026 bring better tidings. Credit and/or blame to David Gerard for starting this.)

  • rook@awful.systems · 3 hours ago

    This is a fun read: https://nesbitt.io/2025/12/27/how-to-ruin-all-of-package-management.html

    Starts out strong:

    Prediction markets are supposed to be hard to manipulate because manipulation is expensive and the market corrects. This assumes you can’t cheaply manufacture the underlying reality. In package management, you can. The entire npm registry runs on trust and free API calls.

    And ends well, too.

    The difference is that humans might notice something feels off. A developer might pause at a package with 10,000 stars but three commits and no issues. An AI agent running npm install won’t hesitate. It’s pattern-matching, not evaluating.
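    The “10,000 stars but three commits and no issues” check is roughly the kind of gut heuristic the post says a human applies and an agent skips. A minimal sketch of that check, purely for illustration — the `PackageStats` shape, the threshold, and the package name are invented here, not taken from the post or from any registry API:

    ```typescript
    // Sketch of the "something feels off" heuristic: popularity signals (stars)
    // wildly out of proportion to development signals (commits, open issues).
    // All shapes, thresholds, and the example package below are illustrative assumptions.

    interface PackageStats {
      name: string;
      stars: number;
      commits: number;
      openIssues: number;
    }

    function looksManufactured(pkg: PackageStats): boolean {
      const starsPerCommit = pkg.stars / Math.max(pkg.commits, 1);
      // 10,000 stars on three commits with no issues is the post's example of a red flag.
      return starsPerCommit > 500 && pkg.openIssues === 0;
    }

    const suspect: PackageStats = { name: "left-pad-turbo", stars: 10_000, commits: 3, openIssues: 0 };
    console.log(looksManufactured(suspect)); // true: a human might pause here; `npm install` never does
    ```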

      • mlen@awful.systems · 10 hours ago

        It’s a treasure trove of hilariously bad takes.

        There’s nothing intrinsically valuable about art requiring a lot of work to be produced. It’s better that we can do it with a prompt now in 5 seconds

        Now I need some eye bleach. I can’t tell anymore if they are trolling or their brains are fully rotten.

        • lagrangeinterpolator@awful.systems · 5 hours ago

          Don’t forget the other comment saying that if you hate AI, you’re just “vice-signalling” and “telegraphing your incuruosity (sic) far and wide”. AI is just like computer graphics in the 1960s, apparently. We’re still in early days guys, we’ve only invested trillions of dollars into this and stolen the collective works of everyone on the internet, and we don’t have any better ideas than throwing more money and compute at the problem! The scaling is still working guys, look at these benchmarks that we totally didn’t pay for. Look at these models doing mathematical reasoning. Actually don’t look at those, you can’t see them because they’re proprietary and live in Canada.

          In other news, I drew a chart the other day, and I can confidently predict that my newborn baby is on track to weigh 10 trillion pounds by age 10.

          EDIT: Rich Hickey has now disabled comments. Fair enough, arguing with promptfondlers is a waste of time and sanity.

        • swlabr@awful.systems · 9 hours ago

          these fucking people: “art is when picture matches words in little card next to picture”

  • CinnasVerses@awful.systems · 18 hours ago

    A few weeks ago, David Gerard found this blog post quoting a LessWrong post from 2024 where a staffer frets that:

    Open Phil generally seems to be avoiding funding anything that might have unacceptable reputational costs for Dustin Moskovitz. Importantly, Open Phil cannot make grants through Good Ventures to projects involved in almost any amount of “rationality community building”

    So keep whistleblowing and sneering, it’s working.

    Sailor Sega Saturn found a deleted post on https://forum.effectivealtruism.org/users/dustin-moskovitz-1 where Moskovitz says that he has moral concerns with the Effective Altruism / Rationalist movement, not reputational concerns (he is a billionaire executive, so don’t get your hopes up).

    • Sailor Sega Saturn@awful.systems · 12 hours ago

      All of the bits I quoted in my other comment were captured by archive.org FWIW: a, b, c. They can also all still be found as EA forum comments via websearch, but under [anonymous] instead of a username.

      This newer archive also captures two comments written since then. Notably there’s a DOGE mention:

      But I can’t e.g. get SBF to not do podcasts nor stop the EA (or two?) that seem to have joined DOGE and started laying waste to USAID. (On Bsky, they blame EAs for the whole endeavor)