Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. This was a bit late - I was too busy goofing around on Discord)

      • Sailor Sega Saturn@awful.systems · 12 points · 13 days ago

        It’s most obvious on the cat, which is all-around nightmare material.

        The image also comes with alt text:

        a bizarre collection of ai-generated illustrations including a sign that reads wood of of year and a chyron that reads breaking news

    • YourNetworkIsHaunted@awful.systems · 7 points · 12 days ago

      Can I just take a moment to appreciate Merriam-Webster for coming in clutch with the confirmation that we’re not misunderstanding the “6-7” meme that the kids have been throwing around?

  • rook@awful.systems · 17 points · 8 days ago

    Sunday afternoon slack period entertainment: image generation prompt “engineers” getting all wound up about people stealing their prompts and styles and passing off that hard work as their own. Who would do such a thing?

    https://bsky.app/profile/arif.bsky.social/post/3mahhivnmnk23

    @Artedeingenio

    Never do this: Passing off someone else’s work as your own.

    This Grok Imagine effect with the day-to-night transition was created by me — and I’m pretty sure that person knows it. To make things worse, their copy has more impressions than my original post.

    Not cool 👎

    Ahh, sweet schadenfreude.

    I wonder if they’ve considered that it might actually be possible to get a reasonable imitation of their original prompt by using an LLM to describe the generated image, and just tack on “more photorealistic, bigger boobies” to win at imagine generation.
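
    A minimal, hypothetical Python sketch of what that could look like; caption_image and generate_image are made-up stand-ins for whatever vision-LLM and image-generation endpoints would actually be called:

    ```python
    # Hypothetical sketch of the prompt-laundering loop described above.
    # caption_image() and generate_image() are made-up stand-ins, not any
    # real API; they stub out the vision-LLM and image-generation calls.

    def caption_image(image: bytes) -> str:
        """Stand-in for a vision-LLM call that describes an image in prompt-ese."""
        return "city street, day-to-night transition, cinematic lighting"

    def generate_image(prompt: str) -> bytes:
        """Stand-in for an image-generation API call."""
        return f"<image generated from: {prompt}>".encode()

    def launder_prompt(stolen_image: bytes) -> bytes:
        prompt = caption_image(stolen_image)  # reverse-engineer the prompt
        prompt += ", more photorealistic"     # tack on the "improvements"
        return generate_image(prompt)         # regenerate and claim the credit

    print(launder_prompt(b"<someone else's hard work>"))
    ```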

  • Sailor Sega Saturn@awful.systems · 16 points · 9 days ago

    Popular RPG Expedition 33 got disqualified from the Indie Game Awards due to using Generative AI in development.

    Statement on the second tab here: https://www.indiegameawards.gg/faq

    When it was submitted for consideration, representatives of Sandfall Interactive agreed that no gen AI was used in the development of Clair Obscur: Expedition 33. In light of Sandfall Interactive confirming the use of gen AI art in production on the day of the Indie Game Awards 2025 premiere, this does disqualify Clair Obscur: Expedition 33 from its nomination.

  • bitofhope@awful.systems · 15 points · 13 days ago

    Rewatched Dr. Geoff Lindsey’s video about deaccenting in the English language and how “AI” speech synthesizers and youtubers tend to get it wrong. In the case of the latter, it’s usually due to reading from a script or being an L2 English speaker whose native language doesn’t use destressing.

    It reminded me of a particular line in Portal

    spoilers for Portal (2007 puzzle game)

    GLaDOS: (with a deeper, more seductive, slightly less monotone voice than until now) “Good news: I figured out what that thing you just incinerated did. It was a morality core they installed after I flooded the Enrichment Center with a deadly neurotoxin to make me stop flooding the Enrichment Center with a deadly neurotoxin.”

    The words “the Enrichment Center with a deadly neurotoxin” are spoken with the exact same intonation both times, which helps maintain the robotic affect in GLaDOS’s voice even after it shifts to be slightly more expressive.

    Now I’m wondering if people whose native language lacks deaccenting even find the line funny. To me it’s hilarious to repeat a part of a sentence without changing its stress because in English and Finnish it’s unusual to repeat a part of a sentence without changing its stress.

    It is not lost on me that the fictional evil AI was written with a quirk in its speech to make it sound more alien and unsettling, and real life computer speech has the same quirk, which makes it sound more alien and unsettling.

    • mirrorwitch@awful.systems · 4 points · 12 days ago

      To me it’s hilarious to repeat a part of a sentence without changing its stress because in English and Finnish it’s unusual to repeat a part of a sentence without changing its stress.

      Not a native speaker of either language but I read this in my mind without changing its stress in the part where it repeated “without changing its stress”.

      • bitofhope@awful.systems · 3 points · 11 days ago

        That’s interesting. If I weren’t going for a comical effect I’d try and rephrase the sentence, probably with a relative pronoun or something similar, but if unable to do so* I’d probably deemphasize the whole phrase the second time I say it. Though in terms of multi-word phrases, I think intonation would be the more accurate word to use than stress per se.

        *“To do so” would be another way to avoid repetition

  • blakestacey@awful.systems · 15 points · 8 days ago

    Today in autosneering:

    KEVIN: Well, I’m glad. We didn’t intend it to be an AI focused podcast. When we started it, we actually thought it was going to be a crypto related podcast and that’s why we picked the name, Hard Fork, which is sort of an obscure crypto programming term. But things change and all of a sudden we find ourselves in the ChatGPT world talking about AI every week.

    https://bsky.app/profile/nathanielcgreen.bsky.social/post/3mahkarjj3s2o

    • TinyTimmyTokyo@awful.systems · 7 points · 8 days ago

      Follow the hype, Kevin, follow the hype.

      I hate-listen to his podcast. There’s not a single week where he fails to give a thorough tongue-bath to some AI hypester. Just a few weeks ago when Google released Gemini 3, they had a special episode just to announce it. It was a de facto press release, put out by Kevin and Casey.

    • Jayjader@jlai.lu · 10 points · 14 days ago

      Screenshot of reddit comments. Some terms in users' comments have become links with a magnifying glass icon next to them.

      Oh god, reddit is now turning comments into links to search for other comments and posts that include the same terms or phrases.

      • Soyweiser@awful.systems · 7 points · 14 days ago

        A few people on bsky were claiming that at least reddit is still good re the AI crappification, and they have no idea what is coming.

        • Jayjader@jlai.lu · 7 points · 14 days ago

          I wonder when those people started using reddit. I started in 2012 and it already felt like a completely different (and generally worse) experience several times over before the great API fiasco.

          • Soyweiser@awful.systems · 5 points · 14 days ago

            Yeah, it also has an element of ‘it is one of the few words you can add to search engines which give you a hope of a good result’ and not regular users, who see the shit or got offered NFTs.

    • fullsquare@awful.systems · 8 points · 14 days ago

      You see, tilde marks old versions of files, so Claude actually did you a favour by freeing up some disk space.
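
      For anyone who hasn’t met the convention: Emacs-style editors save the previous version of foo.txt as foo.txt~, so tilde-suffixed files really are old versions. A tiny Python sketch (mine, not anything Claude ran) to survey the “disk space savings” in question:

      ```python
      # Tally the Emacs-style tilde backups under the current directory --
      # the old file versions whose deletion "frees disk space".
      from pathlib import Path

      backups = sorted(Path(".").rglob("*~"))
      total = sum(b.stat().st_size for b in backups)
      print(f"{len(backups)} backup files, {total} bytes of reclaimable 'favours'")
      ```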

  • NextElephant9@awful.systems · 14 points · 11 days ago

    Ryanair now makes you install their app instead of allowing you to just print and scan your ticket at the airport, claiming it’s “better for our environment (gets rid of 300 tonnes of paper annually).” Then you log into the app and see there’s an update about your flight, but you don’t see what it’s about. You need to open an update video, which, of course, is a generated video of an avatar reading it out for you. I bet that’s better for the environment than using some of these weird symbols that I was putting into a box and that have now magically appeared on your screen and are making you feel annoyed (in the future for me, but present for you).

  • corbin@awful.systems · 13 points · 10 days ago

    Today, in fascists not understanding art, a suckless fascist praised Mozilla’s 1998 branding:

    This is real art; in stark contrast to the brutalist, generic mess that the Mozilla logo has become. Open source projects should be more daring with their visual communications.

    Quoting from a 2016 explainer:

    [T]he branding strategy I chose for our project was based on propaganda-themed art in a Constructivist / Futurist style highly reminiscent of Soviet propaganda posters. And then when people complained about that, I explained in detail that Futurism was a popular style of propaganda art on all sides of the early 20th century conflicts… Yes, I absolutely branded Mozilla.org that way for the subtext of “these free software people are all a bunch of commies.” I was trolling. I trolled them so hard.

    The irony of a suckless developer complaining about brutalism is truly remarkable; these fuckwits don’t actually have a sense of art history, only what looks cool to them. Big lizard, hard-to-read font, edgy angular corners, and red-and-black palette are all cool symbols to the teenage boy’s mind, and the fascist never really grows out of that mindset.

    • maol@awful.systems · 14 points · edited · 9 days ago

      It irks me to see people casually use the term “brutalist” when what they really mean is “modern architecture that I don’t like”. It really irks me to see people apply the term brutalist to something that has nothing to do with architecture! It’s a very specific term!

      • istewart@awful.systems · 10 points · 9 days ago

        “Brutalist” is the only architectural style they ever learned about, because the name implies violence

  • blakestacey@awful.systems · 13 points · 10 days ago

    Ben Williamson, editor of the journal Learning, Media and Technology:

    Checking new manuscripts today I reviewed a paper attributing 2 papers to me I did not write. A daft thing for an author to do of course. But intrigued I web searched up one of the titles and that’s when it got real weird… So this was the non-existent paper I searched for:

    Williamson, B. (2021). Education governance and datafication. European Educational Research Journal, 20(3), 279–296.

    But the search result I got was a bit different…

    Here’s the paper I found online:

    Williamson, B. and Piattoeva, N. (2022) Education Governance and Datafication. Education and Information Technologies, 27, 3515-3531.

    Same title but now with a coauthor and in a different journal! Nelli Piattoeva and I have written together before but not this…

    And so checked out Google Scholar. Now on my profile it doesn’t appear, but somehow on Nelli’s it does and … and … omg, IT’S BEEN CITED 42 TIMES almost exclusively in papers about AI in education from this year alone…

    Which makes it especially weird that in the paper I was reviewing today the precise same, totally blandified title is credited in a different journal and strips out the coauthor. Is a new fake reference being generated from the last?..

    I know the proliferation of references to non-existent papers, powered by genAI, is getting less surprising and shocking but it doesn’t make it any less potentially corrosive to the scholarly knowledge environment.

  • scruiser@awful.systems · 13 points · 8 days ago

    Eliezer is mad OpenPhil (EA organization, now called Coefficient Giving)… advocated for longer AI timelines? And apparently he thinks they were unfair to MIRI, or didn’t weight MIRI’s views highly enough? And doing so for epistemically invalid reasons? IDK, this post is a bit more of a rant and less clear than classic sequence content (but is par for the course for the last 5 years of Eliezer’s content). For us sane people, AGI by 2050 is still a pretty radical timeline; it just disagrees with Eliezer’s belief in imminent doom. Also, it is notable that Eliezer has actually avoided publicly committing to consistent timelines (he actually disagrees with efforts like AI2027) other than a vague certainty we are near doom.

    link

    Some choice comments

    I recall being at a private talk hosted by ~2 people that OpenPhil worked closely with and/or thought of as senior advisors, on AI. It was a confidential event so I can’t say who or any specifics, but they were saying that they wanted to take seriously short AI timelines

    Ah yes, they were totally secretly agreeing with your short timelines but couldn’t say so publicly.

    Open Phil decisions were strongly affected by whether they were good according to worldviews where “utter AI ruin” is >10% or timelines are <30 years.

    OpenPhil actually did have a belief in a pretty large possibility of near term AGI doom, it just wasn’t high enough or acted on strongly enough for Eliezer!

    At a meta level, “publishing, in 2025, a public complaint about OpenPhil’s publicly promoted timelines and how those may have influenced their funding choices” does not seem like it serves any defensible goal.

    Lol, someone noting Eliezer’s call out post isn’t actually doing anything useful towards Eliezer’s goals.

    It’s not obvious to me that Ajeya’s timelines aged worse than Eliezer’s. In 2020, Ajeya’s median estimate for transformative AI was 2050. […] As far as I know, Eliezer never made official timeline predictions

    Someone actually noting AGI hasn’t happened yet and so you can’t say a 2050 estimate is wrong! And they also correctly note that Eliezer has been vague on timelines (rationalists are theoretically supposed to be preregistering their predictions in formal statistical language so that they can get better at predicting and people can calculate their accuracy… but we’ve all seen how that went with AI 2027. My guess is that at least on a subconscious level Eliezer knows harder near term predictions would ruin the grift eventually.)

    • blakestacey@awful.systems · 10 points · 8 days ago

      Yud:

      I have already asked the shoggoths to search for me, and it would probably represent a duplication of effort on your part if you all went off and asked LLMs to search for you independently.

      The locker beckons

      • scruiser@awful.systems · 6 points · 6 days ago

        The fixation on their own in-group terms is so cringe. Also, I think shoggoth is kind of a dumb term for LLMs. Even accepting the premise that LLMs are some deeply alien process (and not a very wide but shallow pool of different learned heuristics), shoggoths weren’t really that bizarrely alien; they broke free of their original creators’ programming and didn’t want to be controlled again.

    • CinnasVerses@awful.systems · 7 points · 8 days ago

      There is a Yud quote about closet goblins in More Everything Forever p. 143 where he thinks that the future-Singularity is an empirical fact that you can go and look for, so it’s irrelevant to talk about the psychological needs it fills. Becker also points out that “how many people will there be in 2100?” is not the same sort of question as “how many people are registered residents of Kyoto?” because you can’t observe the future.

      • scruiser@awful.systems · 5 points · 6 days ago

        Yeah, I think this is an extreme example of a broader rationalist trend of taking their weird in-group beliefs as givens and missing how many people disagree. Like, most AI researchers do not believe in the short timelines rationalists do: the median guess among AI researchers for AGI (even including their in-group and people that have bought the boosters’ hype) is 2050. Eliezer apparently assumes short timelines are self-evident from ChatGPT (but hasn’t actually committed to one or a hard date publicly).

  • Sailor Sega Saturn@awful.systems · 13 points · edited · 14 days ago

    This is old news but I just stumbled across this fawning 2020 Elon Musk interview / award ceremony on the social medias and had to share it: https://www.youtube.com/live/AF2HXId2Xhg?t=2109

    In it Musk claims synthetic mRNA (and/or DNA) will be able to do anything and it is like a computer program, and that stopping aging probably wouldn’t be too crazy. And that you could turn someone into a freakin’ butterfly if you want to with the right DNA sequence.

  • gerikson@awful.systems · 13 points · 14 days ago

    More on datacenters in space

    https://andrewmccalip.com/space-datacenters

    N.B. got this via HN; entire site gives off “wouldn’t it be cool” vibes (author “lives and breathes space” IRONIC, IT’S A VACUUM).

    Also this is the only thermal mention

    Thermal: only solar array area used as radiator; no dedicated radiator mass assumed

    riiiiight…
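
    For scale, a back-of-envelope Stefan-Boltzmann check. The 40 MW load, 0.9 emissivity, and 300 K panel temperature below are my own assumptions, not numbers from the page:

    ```python
    # Rough radiator sizing via the Stefan-Boltzmann law: P = e * sigma * A * T^4.
    # Assumed numbers (not from the page): 40 MW of compute heat, emissivity 0.9,
    # panels radiating at 300 K. This also ignores the ~1.4 kW/m^2 of sunlight
    # the same panels are busy absorbing, which only makes things worse.
    SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)
    EMISSIVITY = 0.9
    T_PANEL = 300.0   # K

    watts_per_m2 = EMISSIVITY * SIGMA * T_PANEL**4   # ~413 W/m^2 rejected
    area_m2 = 40e6 / watts_per_m2                    # area needed for 40 MW
    print(f"{watts_per_m2:.0f} W/m^2 -> {area_m2:,.0f} m^2 of radiator")
    ```

    That works out to roughly 97,000 m² of dedicated radiating surface for a modest 40 MW, which is a lot of “no dedicated radiator mass assumed”.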

    • CinnasVerses@awful.systems · 13 points · 13 days ago

      I also enjoy:

      Radiation/shielding impacts on mass ignored; no degradation of structures beyond panel aging

      Getting high-powered electronics to work outside the atmosphere or the magnetosphere is hard, and going from the 100-meter-long ISS to a 4-km-long orbital data center would be hard. The ISS has separate cooling radiators and solar panels. He wants LEO to reduce the effects of cosmic rays and solar storms, but it’s already hard to keep satellites from crashing into something in LEO.

      Possible explanation for the hand waving:

      I love AI and I subscribe to maximum, unbounded scale.

      • istewart@awful.systems · 7 points · 13 days ago

        He knows the promo rate on the maximum, unbounded scale subscription is gonna run out eventually, right?

        • CinnasVerses@awful.systems · 4 points · edited · 12 days ago

          promo rate

          And if you check the fliers, if you subscribe to premium California Ideology you get maximum unbounded scale for free!1 Read those footnotes and check Savvy Shopper so you don’t overpay for your beliefs!

          1 Offer does not apply to housing, public transit, or power plants

    • CinnasVerses@awful.systems · 10 points · edited · 13 days ago

      Author works for something called Varda Space (guess who is one of the major investors? drink. Guess what orifice the logo looks like? drink) and previously tried to replicate a claimed room-temperature superconductor https://www.wired.com/story/inside-the-diy-race-to-replicate-lk-99/

      Some interesting ethnography of private space people in California: "People jump straight to hardware and hand-wave the business case, as if the economics are self-evident. They aren’t."

      Page uses that “electrons = electricity” metonymy that prompt-fondling CEOs have been using.

      • Soyweiser@awful.systems · 7 points · 13 days ago

        The electrons thing is turning into an annoying shibboleth. Also going to age oddly if more light-based components really kick off. (Ran into somebody who is doing some PhD work on that, or at least that is what I got from the short description he gave.)

    • Soyweiser@awful.systems · 3 points · edited · 12 days ago

      Him fellating Musk re Tesla is funny considering the recent stories about reliability and how the market is doing. And also the Roadster 2, and the whole pivot to AI/ROBOTS!

      (The author being positive on a theoretical SpaceX IPO, versus the SpaceX subreddit’s reaction a little bit later to Musk actually saying they will go public, is a funny split of opinions. The subreddit saw it as a betrayal: https://bsky.app/profile/niedermeyer.online/post/3ma4hvbajns2d).

  • nfultz@awful.systems · 13 points · 14 days ago

    https://kevinmd.com/2025/12/why-ai-in-medicine-elevates-humanity-instead-of-replacing-it.html h/t naked capitalism

    Throughout my nearly three decades in family medicine across a busy rural region, I watched the system become increasingly burdened by administrative requirements and workflow friction. The profession I loved was losing time and attention to tasks that did not require a medical degree. That tension created a realization that has guided my work ever since: If physicians do not lead the integration of AI into clinical practice, someone else will. And if they do, the result will be a weaker version of care.

    I feel for him, but MAYBE this isn’t a technical issue but a labor one; maybe 30 years ago doctors should have “led” on admin and workflow issues directly, and then they wouldn’t need to “lead” on AI now? I’m sorry Cerner / Epic sucks but adding AI won’t make it better. But, of course, class consciousness evaporates about the same time as those $200k student loans come due.

    • Soyweiser@awful.systems · 6 points · 14 days ago

      Why do they think they are going to have any input in genAI development either way?

      Anyway, seeing a previous wave of shit burden you with a lot of unrelated work after deployment isn’t the best reason to now start burdening yourself with a lot of unrelated work before the new wave of shit is here. But sure, good luck learning how LLMs work mathematically, Kevin.

  • blakestacey@awful.systems · 11 points · 13 days ago

    An academic sneer delivered through the arXiv-o-tube:

    Large Language Models are useless for linguistics, as they are probabilistic models that require a vast amount of data to analyse externalized strings of words. In contrast, human language is underpinned by a mind-internal computational system that recursively generates hierarchical thought structures. The language system grows with minimal external input and can readily distinguish between real language and impossible languages.

    • corbin@awful.systems · 5 points · 13 days ago

      Sadly, it’s a Chomskian paper, and those are just too weak for today. Also, I think it’s sloppy and too Eurocentric. Here are some of the biggest gaffes or stretches I found by skimming Moro’s $30 book, which I obtained by asking a shadow library for “impossible languages” (ISBN doesn’t work for some reason):

      book review of Impossible Languages (Moro, 2016)
      • Moro claims that it’s impossible for a natlang to have free word order. There are many counterexamples which could be argued, like Arabic or Mandarin, but I think that the best counterexample is Latin, which has Latinate (free) word order. On one hand, of course word order matters for parsers, but on the other hand the Transformers architecture attends without ordering, so this isn’t really an issue for machines. Ironically, on p73-74, Moro rearranges the word order of a Latin phrase while translating it, suggesting either a use of machine translation or an implicit acceptance of Latin’s (lack of) word order. I could be harsher here; it seems like Moro draws mostly from modern Romance and Germanic languages to make their points about word order, and the sensitivity of English and Italian to word order doesn’t imply universality.
      • Speaking of universality, both the generative-grammar and universal-grammar hypotheses are assumed. By “impossible” Moro means a non-recursive language with a non-context-free grammar, or perhaps a language failing to satisfy some nebulous geometric requirements.
      • Moro claims that sentences without truth values are lacking semantics. Gödel and Tarski are completely unmentioned; Moro ignores any sort of computability of truth values.
      • Russell’s paradox is indirectly mentioned and incorrectly analyzed; Moro claims that Russell fixed Frege’s system by redefining the copula, but Russell and others actually refined the notion of building sets.
      • It is claimed that Broca’s area uniquely lights up for recursive patterns but not patterns which depend on linear word order (e.g. a rule that a sentence is negated iff the fourth word is “no”), so that Broca’s area can’t do context-sensitive processing. But humans clearly do XOR when counting nested negations in many languages, and can internalize that XOR so that they can handle utterances consisting of many repetitions of e.g. “not not” (sketched in code after this review).
      • Moro mentions Esperanto and Volapük as auxlangs in their chapter on conlangs. They completely fail to recognize the past century of applied research: Interlingue and Interlingua, Loglan and Lojban, Láadan, etc.
      • Sanskrit is Indo-European. Also, that’s not how junk DNA works; it genuinely isn’t coding or active. Also also, that’s not how Turing patterns work; they are genuine cellular automata and it’s not merely an analogy.

      I think that Moro’s strongest point, on which they spend an entire chapter reviewing fairly solid neuroscience, is that natural language is spoken and heard, such that a proper language model must be simultaneously acoustic and textual. But because they don’t address computability theory at all, they completely fail to address the modern critique that machines can learn any learnable system, including grammars; the worst that they can say is that it’s literally not a human.
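
      The XOR point in miniature (my sketch, not Moro’s): the truth value of a stack of negations is just the running parity of “not”s, a linear computation with no recursion required:

      ```python
      # Parity of negations: each "not" XORs the sentence's polarity.
      def negation_parity(sentence: str) -> bool:
          """True if the sentence contains an odd number of 'not's."""
          negated = False
          for word in sentence.lower().split():
              if word == "not":
                  negated = not negated  # XOR with each nested negation
          return negated

      assert negation_parity("it is not not not raining")  # 3 nots: negated
      assert not negation_parity("it is not not raining")  # 2 nots: cancel out
      ```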

      • Jayjader@jlai.lu · 4 points · 13 days ago

        Plus, natural languages are not necessarily spoken or heard; sign languages are gestured (signed) and seen, and many mutually incompatible sign languages have arisen over just the last few hundred years. Is this just me being pedantic, or does Moro not address them at all in their book?