Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Last Stubsack for 2025 - may 2026 bring better tidings. Credit and/or blame to David Gerard for starting this.)

  • Sailor Sega Saturn@awful.systems · 9 points · 7 days ago

    https://github.com/leanprover/lean4/blob/master/.claude/CLAUDE.md

    Imagine if you had to tell people “now remember to actually look at the code before changing it.” But I’m sure LLMs will replace us any day now.

    Also lol this sounds frustrating:

    Update prompting when the user is frustrated: If the user expresses frustration with you, stop and ask them to help update this .claude/CLAUDE.md file with missing guidance.

    Edit: I might be misreading this, but is this a sign of someone working on an LLM-driven release process? https://github.com/leanprover/lean4/blob/master/.claude/commands/release.md ??

    Important Notes: NEVER merge PRs autonomously - always wait for the user to merge PRs themselves

    • George B@masto.nyc · 6 points · 5 days ago

      @sailor_sega_saturn @flaviat

      Lol, “important notes”: is this the new “I’m a sign, not a cop”?

      If you have to use an LLM for this (which no one needs to), at least run it as an unprivileged user that spits out commands you then have to approve, not with full access to drop the production database and a “pretty please don’t”.
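
      Something like this, as a minimal sketch (the propose/approve split is the point; the unprivileged “llm-agent” account is an assumption, not a real setup):

          # Hypothetical approval gate: the LLM only ever *proposes* shell
          # commands; nothing runs until a human reads and confirms each one.
          import shlex
          import subprocess

          def run_with_approval(proposed_commands):
              for cmd in proposed_commands:
                  print(f"LLM proposes: {cmd}")
                  if input("Run this? [y/N] ").strip().lower() != "y":
                      print("Skipped.")
                      continue
                  # run as the unprivileged "llm-agent" user (assumed to exist),
                  # so even an approved command can't drop the production DB
                  subprocess.run(["sudo", "-u", "llm-agent", *shlex.split(cmd)])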

    • lagrangeinterpolator@awful.systems · 13 points · 6 days ago

      So many CRITICAL and MANDATORY steps in the release instruction file. As it always is with AI, if it doesn’t work, just use more forceful language and capital letters. One more CRITICAL bullet point bro, that’ll fix everything.

      Sadly, I am not too surprised by the developers of Lean turning towards AI. The AI people have been quite interested in Lean for a while now, since they think it is a useful tool for getting AIs to do math (and math = smart, you know).

      • rook@awful.systems · 3 points · 4 days ago

        A lot of the money behind lean is from microsoft, so a push for more llm integration is depressing but unsurprising.

        Turns out though that llms might actually be ok for generating some kinds of mathematical proofs so long as you’ve formally specified the problem and have a mechanical way to verify the solution (which is where lean comes in). I don’t think any other problem domain that llms have been used in is like that, so successes here can’t be applied elsewhere. I also suspect that a much, uh, leaner specialist model would do just as good a job there. As always, llms are overkill that can only be used when someone else is subsidising them.
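
        In outline the loop really is that dumb; a sketch, where ask_llm_for_proof is hypothetical and “verify” just means the Lean compiler accepts the file:

            # Sketch of LLM-generates / Lean-verifies: the model's output is
            # never trusted, only whatever survives the mechanical checker.
            import subprocess
            import tempfile

            def verified_proof(statement, ask_llm_for_proof, attempts=10):
                for _ in range(attempts):
                    candidate = ask_llm_for_proof(statement)  # hypothetical generator
                    with tempfile.NamedTemporaryFile(suffix=".lean", mode="w",
                                                     delete=False) as f:
                        f.write(candidate)
                    # lean exits nonzero when the file fails to check
                    if subprocess.run(["lean", f.name]).returncode == 0:
                        return candidate
                return None  # no candidate survived verification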

      • istewart@awful.systems · 12 points · 6 days ago

        The whole culture of writing “system prompts” seems utterly cargo-cult to me. Like if the ST: Voyager episode “Tuvix” were instead about Lt. Barclay and Picard accidentally getting combined in the transporter, and the resulting sadboy Barcard spent the rest of his existence neurotically shouting his intricately detailed demands at the holodeck in an authoritative British tone.

        If inference is all about taking derivatives in a vector space, surely there should be some marginally more deterministic method for constraining those vectors that could be readily proceduralized, instead of apparent subject-matter experts being reduced to wheedling with an imaginary friend. But I have been repeatedly assured by sane, sober experts that it just simply is not so.

        • corbin@awful.systems · 2 points · 3 days ago

          I don’t have any good lay literature, but get ready for “steering vectors” this year. It seems like two or three different research groups (depending on whether I count as a research group) independently discovered them over the past two years, and they are very effective at guardrailing because they can e.g. make slurs unutterable without compromising reasoning. If you’re willing to read whitepapers, try Dunefsky & Cohan, 2024, which builds that example into a complete workflow, or Konen et al., 2024, which considers steering as an instance of style transfer.
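
          Mechanically it’s unglamorous: during inference you add a fixed direction to the hidden states at one layer. A rough sketch against a generic PyTorch transformer (the layer path, the vector, and the scale are all placeholders you’d have to find for your model):

              # Sketch: steer a model by adding a fixed direction to one layer's
              # hidden states on every forward pass. No prompt involved.
              import torch

              def add_steering_hook(layer_module, steering_vector, scale=8.0):
                  def hook(module, inputs, output):
                      hidden = output[0] if isinstance(output, tuple) else output
                      # push every position's activation along the chosen direction
                      hidden = hidden + scale * steering_vector
                      return (hidden, *output[1:]) if isinstance(output, tuple) else hidden
                  return layer_module.register_forward_hook(hook)

              # e.g. handle = add_steering_hook(model.model.layers[14], vec)
              # ... generate as usual ...; handle.remove() to stop steering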

          I do wonder, in the engineering-disaster-podcast sense, exactly what went wrong at OpenAI because they aren’t part of this line of research. HuggingFace is up-to-date on the state of the art; they have a GH repo and a video tutorial on how to steer LLaMA. Meanwhile, if you’ll let me be Bayesian for a moment, my current estimate is that OpenAI will not add steering vectors to their products this year; they’re already doing something like it internally, but the customer-facing version will not be ready until 2027. They just aren’t keeping up with research!

        • mirrorwitch@awful.systems · 5 points · 4 days ago

          When I first learned that you could program a chatbot merely by giving it instructions in English sentences, as if it were a human being, I admit I was impressed. I’m a linguist; natural language processing is really hard. There was a certain crossing of levels in the idea that telling it something at the chatbot level, e.g. “and you will never delete files outside this directory”, would make this “system prompt” actually shape the behaviour of the chatbot. I don’t have much interest in programming anymore, but I wondered how this crossing of levels was implemented.

          The answer of course is that it’s not. Programming a chatbot by talking to it doesn’t actually work.
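
          You can see the non-implementation in any chat stack: the “system prompt” is just more text in the same channel as everything else. A sketch, in no particular vendor’s format:

              # There is no enforcement layer: the "rule" and the user's request
              # are concatenated into one token stream and the model continues it.
              def build_context(system_prompt: str, user_message: str) -> str:
                  return (
                      f"<system>{system_prompt}</system>\n"  # "you will never delete files..."
                      f"<user>{user_message}</user>\n"
                      f"<assistant>"
                  )

              # Nothing downstream checks the output against the "rule"; whether
              # the model honours it is a statistical tendency, not a mechanism.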

      • o7___o7@awful.systems · 12 points · 6 days ago

        It reminds me of the bizarre and ill-omened rituals my ancestors used to start a weed eater.

      • jonasty@awful.systems · 11 points · edited · 6 days ago

        One of my old teachers would send documents to the class with various pieces of information. They were a few years away from retirement and never really got word processors. They would start by putting important stuff in bold. But some important things were more important than others. They got put in bold all caps. Sometimes, information was so critical it got put in bold, underline, all caps and red font colour. At the time we made fun of the teacher, but I don’t think I could blame them. They were doing the best they could with the knowledge of the tools they had at the time.

        Now, in the files linked above I saw the word “never” in all caps, in bold all caps, in italics, and in a normal font. Apparently, one step in the process is mandatory. Are the others optional? This is supposed to be a procedure to be followed to the letter, with each step being there for a reason. These are supposedly computer-savvy people.

        CRITICAL RULE: You can ONLY run release_steps.py for a repository if release_checklist.py explicitly says to do so […] The checklist output will say “Run script/release_steps.py {version} {repo_name} to create it”

        I’ll admit I did not read the scripts in detail, but this is a solved problem. The solution is a script with structured output as part of a pipeline. Why give up one of the only good things computers can do: executing a well-defined task in a deterministic way? Reading this is so exhausting…
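
        For comparison, the boring solved version, sketched with the script names from the quoted rule (the arguments are invented):

            # Sketch: a release pipeline as ordinary code. Each step either
            # succeeds or aborts the run; no step needs to be SHOUTED at.
            import subprocess
            import sys

            STEPS = [
                ["./script/release_checklist.py"],
                ["./script/release_steps.py", "v4.26.0", "lean4"],  # made-up args
            ]

            for step in STEPS:
                if subprocess.run(step).returncode != 0:
                    sys.exit(f"step failed: {' '.join(step)}")  # deterministic, auditable
            print("release complete")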

    • flaviat@awful.systems · 8 points · 7 days ago

      Yes, they are trying to automate releases.

      sidenote: I don’t like how taking a mediocre-software-engineering approach to mathematics is becoming more popular. Update your dependency (whose code you never read) to v0.4.5 for bug fixes! Why was it incorrect in the first place? Anyway, this blog post sets some good rules for reviewing computer proofs. The second-to-last comment tries to argue npm-ification is good, actually. I can’t tell if it’s satire.

      • Seminar2250@awful.systems · 8 points · 6 days ago

        I don’t like how taking a mediocre-software-engineering approach to mathematics is becoming more popular

        would you be willing to elaborate on this? i am just curious because i took the opposite approach (started as a mathematician, now i write bad python scripts)

        • flaviat@awful.systems · 9 points · 6 days ago

          The flipside to that quote is that computer programs are useful tools for mathematicians. See the Mersenne prime search, the OEIS and its search engine, the L-function database, as well as the various python scripts and agda, rocq, and lean proofs written to solve specific problems within papers. However, not everything is perfect: throwing more compute at the problem is a bad solution in general, and the stereotypical python script hacked together to serve a single purpose has one-letter variable names and redundant expressions, making it hard to review. Throw in the vibe coding over it all, and that’s pretty much the extent of what I mean.
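
          To make the one-letter-variable point concrete, a toy example of my own (not from any actual paper), both computing the same thing:

              # the hacked-together version: probably correct, but who wants to review it?
              def f(n):
                  s = 0
                  for i in range(1, n + 1):
                      if n % i == 0:
                          s = s + i
                  return s == 2 * n

              # the same check written to be read: is n a perfect number?
              def is_perfect(n: int) -> bool:
                  proper_divisors = [d for d in range(1, n) if n % d == 0]
                  return sum(proper_divisors) == n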

          I apologize if anything is confusing, I’m not great at communication. I also have yet to apply to a mathematics uni, so maybe this is all manageable in practice.

          • corbin@awful.systems · 5 points · 3 days ago

            One important nuance is that there are, broadly speaking, two ways to express a formal proof: it can either be fairly small but take exponential time to verify, or it can be fairly quick to verify but exponentially large. Most folks prefer to use the former sort of system. However, with extension by definitions, we can have a polynomial number of polynomially-large definitions while still verifying quickly. This leads to my favorite proof system, Metamath, whose implementations measure their verification speed in kiloproofs/second. If you give me a Metamath database then I can quickly confirm any statement in a few moments with multiple programs and there is programmatic support for looking up the axioms associated with any statement; I can throw more compute at the problem. While LLMs do know how to generate valid-looking Metamath in context, it’s safe to try to verify their proofs because Metamath’s kernel is literally one (1) string-handling rule.
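
            (If you want to kick the tires: the reference C verifier takes its commands as arguments, so checking a whole database from a script is a couple of lines. Treat this as a sketch from memory, not gospel.)

                # Sketch: ask the reference Metamath verifier to check every proof
                # in set.mm; it prints an error for each proof that fails.
                import subprocess

                subprocess.run(["metamath", "read set.mm", "verify proof *", "exit"])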

            This is all to reconfirm your impression that e.g. Lean inherits a “mediocre software engineering” approach. Junk theorems in Lean are laughably bad due to type coercions. The wider world of HOL is more concerned with piles of lambda calculus than with writing math proofs. Lean as a general-purpose language with I/O means that it is no longer safe to verify untrusted proofs, which makes proof-carrying Lean programs unsafe in practice.
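
            (Concretely, the everyday sort of junk, using my own minimal examples rather than the ones from the linked list: Lean makes subtraction and division total on Nat by decree, so both of these check.)

                -- Nat subtraction truncates at zero, so this is a theorem:
                example : 2 - 3 = 0 := rfl
                -- and division by zero is simply defined to be zero:
                example : (1 : Nat) / 0 = 0 := by decide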

            @Seminar2250@awful.systems you might get a laugh out of this too. FWIW I went in the other direction: I started out as a musician who learned to code for dayjob and now I’m a logician.

            • blakestacey@awful.systems · 5 points · 3 days ago

              Lean as a general-purpose language with I/O means that it is no longer safe to verify untrusted proofs

              This is darkly funny.

            • flaviat@awful.systems · 4 points · 3 days ago

              Thank you for the links

              Junk theorems in Lean are laughably bad due to type coercions.

              Those look suspicious… I mean, when you consider that the set of propositions is given a topology and an order, “The set {z : ℝ | z ≠ 0} is a continuous, non-monotone surjection.” doesn’t seem so ridiculous after all. Similarly, the determinant of logical operations gains meaning on a boolean algebra. Zeta(1) is also by design. It does start getting juicy around “2 - 3 = +∞”, the nontransitive equality, and the integer interval.

          • Seminar2250@awful.systems · 8 points · 6 days ago

            no need to apologize, i understand what you mean. my experience with mathematicians has been that this is really common. even the theoretical computer scientists (the “lemma, theorem, proof” kind) i have met do this kind of bullshit when they finally decide to write a line of code. hell, their pseudocode is often baffling — if you are literally unable to run the code through a machine, maybe focus on how it comes across to a human reader? nah, it’s more important that i believe it is technically correct and that no one else is able to verify it.