Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
got sent this image
wonder how many more of these things we’ll see before people start having a real bileful response to this (over and above the fact that a number of people have been warning about exactly this outcome for a while now)
(transcript below)

transcript
title: I gave my mom’s company an AI automation and now she and her coworkers are unemployed
body: So this is eating me alive and I don’t really know where else to put it. I run this little agency that builds these AI agents for staffing firms. Basically the agent pre-screens candidates, pulls the info into a neat report, and sends it back so recruiters don’t waste hours on screening calls. It’s supposed to be a tool, not a replacement.
My mom works at this mid sized recruiting company. She’s always complained about how long it takes to qualify candidates, so I set them up with one of my agents just to test it. It crushed it. Way faster, way cheaper, and honestly more consistent than most of their team.
Fast forward two months and they’ve quietly laid off almost her whole department. Including my mom. I feel sick. Like I built something that was supposed to help people, and instead it wiped out my mom’s job and her team. I keep replaying it in my head like I basically automated my own family out of work.
Pressing F for doubt, looks like a marketing scam to me.
It’s pretty screwed up that humble bragging about putting their own mother out of a job is a useful opening to selling a scam-service. At least the people that buy into it will get what they have coming?
that or some kind of bait

I didn’t dig into the post/username at all so I can’t guesstimate likelihood of this! get where you’re coming from
(…I really need to finish my blog relaunch (this thought brought to you by the explication I was about to embark on in this context))
(((it’s soon.gif tho!)))
Gonna have to agree with zogwarg here. I checked out the Reddit profile and they’re a self-proclaimed entrepreneur whose one-man “agency” has zero clients and has yet to even have an idea, attempting to crowdsource the latter on r/entrepreneur.
dude has a post named “from 0 to 1 clients in 48h” where someone calls him out for already claiming to have 17 customers, so it’s reasonable to assume that this guy is full of shit either way
then again, there’s plenty of clueless, could be real, because welcome to current year, where everything is fake, satire is dead and reuters puts the onion out of business
‘set them up with’
Anybody want to bet if they did it for free?
could go either way tbh
In case you needed more evidence that the Atlantic is a shitty rag.

The phrase “adorned with academic ornamentation” sounds like damning with faint praise, but apparently they just mean it as actual praise, because the rot has reached their brains.
also, they misspelled “Eliezer”, lol
I’ve created a new godlike AI model. It’s the Eliziest yet.
My copy of “the singularity is near” also does that btw.
(E: Still looking to confirm that this isn’t just my copy, or if it is common, but when I’m in a library I never think to look for the book, and I don’t think I have ever seen the book anywhere anyway. It is the ‘our sole responsibility…’ quote, no idea which page, but it was early on in the book. ‘Yudnowsky’.)
Image and transcript

Transcript: Our sole responsibility is to produce something smarter than we are; any problems beyond that are not ours to solve…[T]here are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards [in level of intelligence], and some problems will suddenly move from “impossible” to “obvious.” Move a substantial degree upwards and all of them will become obvious.
—ELIEZER S. YUDNOWSKY, STARING INTO THE SINGULARITY, 1996
Transcript end.
How little has changed; he has always believed intelligence is magic. Also lol on the ‘smallest bit’. Not totally fair to sneer at this as he wrote it when he was 17, but oof, being quoted in a book like this will not have been good for Yudkowsky’s ego.
The Atlantic puts the “shit” in “shitlib”
The implication that Soares / MIRI were doing serious research before is frankly journalist malpractice. Matteo Wong can go pound sand.
It immediately made me wonder about his background. He’s quite young and looks to be just out of college. If I had to guess, I’d say he was probably a member of the EA club at Harvard.
His group chats with Kevin Roose must be epic.
Just earlier this month, he was brushing off all the problems with GPT-5 and saying that “OpenAI is learning from its greatest success.” He wrapped up a whole story with the following:
At this stage of the AI boom, when every major chatbot is legitimately helpful in numerous ways, benchmarks, science, and rigor feel almost insignificant. What matters is how the chatbot feels—and, in the case of the Google integrations, that it can span your entire digital life. Before OpenAI builds artificial general intelligence—a model that can do basically any knowledge work as well as a human, and the first step, in the company’s narrative, toward overhauling the economy and curing all disease—it is aiming to build an artificial general assistant. This is a model that aims to do everything, fit for a company that wants to be everywhere.
Weaselly little promptfucker.
deleted by creator
The usual suspects are mad about college hill’s expose of the yud/kelsey piper eugenics sex rp. Or something, I’m in bed and can’t be bothered to link at the moment.
I’m sorry, we finally, officially need to cancel fantasy TTRPGs. If it’s not the implicit racialization of everything, it’s the use of the stat systems as a framework for literally masturbatory eugenics fetishization.
You all can keep a stripped-down version of Starfinder as a treat. But if I see any more of this, we’re going all the way back to Star Wars d6 and that’s final.
To be fair to DnD, it is actually more sophisticated than the IQ fetishists, it has 3 stats for mental traits instead of 1!
also: The int-maxxing and overinflated ego of it all reminds me of the red mage from 8-bit theater, a webcomic based on final fantasy about the LW (light warriors) that ran from 2001-2010
E: thinking back on it, reading this webcomic and seeing this character probably in some part inoculated me against people like yud without me knowing
I never read 8bit. I read A Modest Destiny. Wonder how that guy is doing, he always was a bit weird and combative, but when he deleted his blog it was getting very early signs of right wing culture warrior bits (which was ironic considering he burned a us flag).
Never read AMD (and shan’t). The author’s site appears to be live.
8BF’s site has been taken over by bots, and I can’t be bothered to find an alternate source. Dead internet go brrrrr. Otherwise, the creator, Brian Clevinger, appears to have had a long career in comics, and has written many things for Marvel.
8BF’s site has been taken over by bots, and I can’t be bothered to find an alternate source.
You can find it directly on Brian Clevinger’s blog, Nuklear Power. Here’s a direct link to the archive.
Ah thanks! On mobile the main page gets redirected to spam, but the site is navigable from the archive.
Yeah, but he used to have forums, and then a blog, and then no blog and then a blog again, and then a hidden blog etc. Think Howard has only a few minor credits on some games, he always came off as a bit of a weirdly combative nerd who thought he was right and the smartest in the room and didn’t get that people didn’t agree with his definitions/assumptions. He is a big idea guy for example. One of his comics was also called ‘the atheist, the agnostic and the asshole’ so yeah. The 00’s online comic world was something.
has only a few minor credits[…], he always came off as a bit of a weirdly combative nerd who thought he was right and the smartest in the room and didn’t get that people didn’t agree with his definitions/assumptions. He is a big idea guy for example.
gosh i’m sure glad that these kinds of people disappeared from the internet /s
Anyone found with a non-cube platonic solid will be lockerized indefinitely
I would simply learn how to keep “games” and “reality” separate. I actually already know. It helps a lot.
Racists are gonna racist no matter what. They didn’t need TTRPGs around to give them the idea of breaking out the calipers.
Yes but basic dnd does have a lot of racism built in, esp with Gygax not being great on that end (“nits make lice”, he said, about how it was lawful for paladins to kill orc babies). They did drop the sexism pretty quickly, but no big surprise his daughters were not into it. It certainly helps with the whole hierarchical mindset. My int/level is higher than yours so im better than you stuff. And sadly a lot of people do have trouble keeping both separate (and even that isn’t always ideal, esp in larps).
But yes this, considering the context is def a bit of a case of some of their ideologies, or ideological fantasies, bleeding through. (Esp considering Yud has been corrected on his faulty understanding of genetics before.)
Is the scoop that besides being an EA mouthpiece KP is also into the weird stuff?
Weird rp wouldn’t be sneer worthy on its own (although it would still be at least a little cringe), it’s contributing factors like…
- the constant IQ fetishism (Int is superior to Charisma but tied with Wis and obviously a true IQ score would be both Int and Wis)
- the fact that Eliezer cites it like serious academic writing (he’s literally mentioned it to Yann LeCun in twitter arguments)
- the fact that in-character lectures are the only place Eliezer has written up many of the decision theory takes he developed after the sequences (afaik, maybe he has some obscure content that never made it to lesswrong)
- the fact that Eliezer thinks it’s another HPMOR-level masterpiece (despite how wordy it is, HPMOR is much more readable; even authors and fans of glowfic usually acknowledge the format can be awkward to read and most glowfics require huge amounts of context to follow)
- the fact that the story doubles down on the HPMOR flaw of confusion over which characters are supposed to be author mouthpieces (putting your polemics into the mouths of characters working for literal Hell… is certainly an authorial choice)
- and the continued worldbuilding development of dath ilan, the rationalist utopia built on eugenics and censorship of all history (even the Hell state was impressed!)
…At least lintamande has the commonsense understanding of why you avoid actively linking your bdsm dnd roleplay to your irl name and work.
And it shouldn’t be news to people that KP supports eugenics given her defense of Scott Alexander or comments about super babies, but possibly it is and headliner of weird roleplay will draw attention to it.
obligatory reminder that “dath ilan” is misspelled “thailand” and I still don’t know why. Working theory is Yud wants to recolonise thailand
That’s about what I was thinking, I’m completely ok with the weird rpg aspect.
Regarding the second and third point though I’ll admit I thought the whole thing was just yud indulging, I missed that it’s also explicitly meant as rationalist esoterica.
also explicitly meant as rationalist esoterica.
Always a bad sign when people can’t just let a thing be a thing just for enjoyment, but see everything as the ‘hustle’ (for lack of a better word). I’m reminded of that dating profile we looked at which showed that 99% of what he did was related to AI and AI doomerism, even the parties.
I actually think “Project Lawful” started as Eliezer having fun with glowfic (he has a few other attempts at glowfics that aren’t nearly as wordy… one of them actually almost kind of pokes fun at himself and lesswrong), and then as it took off and the plot took the direction of “his author insert gives lectures to an audience of adoring slaves” he realized he could use it as an opportunity to squeeze out all the Sequence content he hadn’t bothered writing up in the past decade^. And that’s why his next attempt at a HPMOR-level masterpiece is an awkward to read rp featuring tons of adult content in a DnD spinoff, and not more fanfiction suitable for optimal reception to the masses.
^(I think Eliezer’s writing output dropped a lot in the 2010s compared to when he was writing the sequences and the stuff he has written over the past decade is a lot worse. Like the sequences are all in bite-size chunks, and readable in chunks in sequence, and often rephrase legitimate science in a popular way, and have a transhumanist optimism to them. Whereas his recent writings are tiny little hot takes on twitter and long, winding, rants about why we are all doomed on lesswrong.)
I missed that it’s also explicitly meant as rationalist esoterica.
It turns in that direction about 20ish pages in… and spends hundreds of pages on it, greatly inflating the length from what could be a much more readable length. It then gets back to actual plot events after that.
We’ve definitely sneered at this before, i do not recall if it was known that KP was the cowriter in this weird forum RP fic
E: googling “lintamande kelsey piper” and looking at a reddit post digs up the inactive since 2018 AO3. A total just shy of 130k words, a little marvel stuff, most of it LOTR based, and some of it tagged “Vladmir Putin/Sauron”. How fun!
No judgement from me, tbh. Fanfic be fanficking. I aint gonna read that shit tho.
Previous thread
E: we didn’t fucking know
Not sure if anybody noticed the last time, but so they get isekai’d into a DND world, which famously runs on some weird form of fantasy feudalism, and they expect a random high-int person to rule the country somehow? What in the primogenitor is this stuff, you can’t just think yourself into being a king, that is one of the issues with monarchies.
E: ah no they are in a totalitarian state ruled by the literal forces of hell, places that totally praise merit based upwards mobility.
ah no they are in a totalitarian state ruled by the literal forces of hell, places that totally praise merit based upwards mobility.
Hey, write what you know
An encounter of this sort is what drove Lord Vetinari to make a scorpion pit for mimes, probably.
For all of the 2.2 seconds I have spent wondering who Yud’s coauthor on that was, I vaguely thought that it was Aella. I don’t know where I might have gotten that impression from. A student paper about fanfiction identified “lintamande” as Kelsey Piper in 2013.
I tried reading the forum roleplay thing when it came up here, and I caromed off within a page. I made it through this:
The soap-bubble forcefield thing looks deliberate.
And I got to about here:
Mad Investor Chaos heads off, at a brisk heat-generating stride, in the direction of the smoke. It preserves optionality between targeting the possible building and targeting the force-bubble nearby.
… before the “what the fuck is this fucking shit?” intensified beyond my ability to care.
Yeah I couldn’t find the strength to even get to the naughty stuff, I gave up after one or two chapters. And I’ve read through all of HPMOR. 😐
I’m hard-pressed to think of anything else I have tried to read that was comparably impenetrable. At least when we played “exquisite corpse” parlor games on the high-school literary magazine staff, we didn’t pretend that anything we improvised had lasting value.
New piece from the Financial Times: Tech utterly dominates markets. Should we worry?
Pulling out a specific point, the article’s noted how market concentration is higher now than it was in the dot-com bubble back in 2000:

You want my overall take, I’m with Zitron - this is quite a narrative shift.
Gary asks the doomers, are you “feeling the agi” now kids?

To which Daniel K, our favorite guru, lets us know that he has officially ~~moved his goal posts~~ updated his timeline, so now the robogod doesn’t wipe us out until the year of our lorde 2029.
It takes a big brain superforecaster to have to admit your four month old rapture prophecy was already off by at least 2 years omegalul
Also, love: updating towards my teammate (lmaou) who cowrote the manifesto but is now saying he never believed it. “The forecasts that don’t come true were just pranks bro, check my manifold score bro, im def capable of future sight, trust”
look at me, the thinking man, i update myself just like a computer beep boop beep boop
So, as I have been on a cult comparison kick lately: how did it work out for those doomsday cults when the world didn’t end and they picked a new date, did they become more radicalized or less? (I’m not sure myself; I’d assume the people who are disappointed leave, and the rest get worse.)
… prophecies, per se, almost never fail. They are instead component parts of a complex and interwoven belief system which tends to be very resilient to challenge from outsiders. While the rest of us might focus on the accuracy of an isolated claim as a test of a group’s legitimacy, those who are part of that group—and already accept its whole theology—may not be troubled by what seems to them like a minor mismatch. A few people might abandon the group, typically the newest or least-committed adherents, but the vast majority experience little cognitive dissonance and so make only minor adjustments to their beliefs. They carry on, often feeling more spiritually enriched as a result.
When Prophecy Fails is worth the read just for the narrative, he literally had his grad students join a UFO / Dianetics cult and take notes in the bathroom and kept it going for months. Really impressive amount of shoe leather compared to most modern psych research.
Clown world.
How many times will he need to revise his silly timeline before media figures like Kevin Roose stop treating him like some kind of respectable authority? Actually, I know the answer to that question. They’ll keep swallowing his garbage until the bubble finally bursts.
And once it does they’ll quietly stop talking about it for a while to “focus on the human stories of those affected” or whatever until the nostalgic retrospectives can start along with the next thing.
“Kevin Roose”? More like Kevin Rube, am I right? Holy shit, I actually am right.
deleted by creator
Meanwhile on /r/programmingcirclejerk sneering hn:

transcription
OP: We keep talking about “AI replacing coders,” but the real shift might be that coding itself stops looking like coding. If prompts become the de facto way to create applications/developing systems in the future, maybe programming languages will just be baggage we’ll need to unlearn.
Comment: The future of coding is jerking off while waiting for AI managers to do your project for you, then retrying the prompt when they get it wrong. If gooning becomes the de facto way to program, maybe expecting to cum will be baggage we’ll need to unlearn.
Promptfondlers are tragically close to the point. Like I was saying yesterday about translators, the future of programming in AI hell is going to be senior developers using their knowledge and experience to fix the bullshit that the LLM outputs. What’s going to happen when they retire and there’s nobody with that knowledge and experience to take their place? I’ll have sold off my shares by then, I’m sure.
gemini isn’t even trying now

Oh, looks like gemini is a fan of the hacky anti-comedy bits from some of my favorite podcasts
New Atlantic article regarding AI, titled “AI Is a Mass-Delusion Event”. It’s primarily about the author’s feelings of confusion and anxiety about the general clusterfuck that is the bubble.
better, or equivalent to, a mass defecation event?
Our Very Good Friends are often likened to Scientology, but have we considered Happy Science and Aum Shinrikyo?
https://en.wikipedia.org/wiki/Happy_Science https://en.wikipedia.org/wiki/Aum_Shinrikyo
Aum is very apt imo given how it recruited stem types.
And how it fused Buddhism with more Christian religions. Fitting, considering how often you heard of old hackers being interested in the former.
aum recruited a lot of people, and also failed at some things that would be presumably easier to do safely than what they did
Meanwhile, Aum had also attempted to manufacture 1,000 assault rifles, but only completed one.[37]
otoh they were also straight up delusional about what they could achieve, including toying with the idea of manufacturing nukes, military gas lasers, and getting and launching Proton rocket. (not exactly grounded for a group of people who couldn’t make AK-74s)
they were also more media savvy in that they didn’t pollute info space with their ideas only using blog posts, they ~~had entire radio station~~ rented time from a major radio station within russia, broadcasting both within freshly former soviet union and into japan from vladivostok (which was much bigger deal in 90s than today)
It’s pretty telling about Our Good Friends’ media savviness that it took an all-consuming AI bubble and plenty of help from friends in high places to break into the mainstream.
radio transmissions in russia were money shot for aum, and idk if it was a fluke or deliberate strategy. people had for a long time expectation that radio and tv are authoritative, reliable sources (due to censorship that doubled as fact-checker, and about all of it was state-owned) and in 90s every bit of that broke down because of privatization, and now you could get on the air and say anything, with many taking that at face value, as long as you pay up. at the same time there was major economic crisis and cults prey on the desperate. result?
Following the sarin gas attack on the Tokyo subway, two Russian Duma committees began investigations of the Aum – the Committee on Religious Matters and the Committee on Security Matters. A report from the Security Committee states that the Aum’s followers numbered 35,000, with up to 55,000 laymen visiting the sect’s seminars sporadically. This contrasts sharply with the numbers in Japan which are 18,000 and 35,000 respectively. The Security Committee report also states that the Russian sect had 5,500 full-time monks who lived in Aum accommodations, usually housing donated by Aum followers. Russian Aum officials, themselves, claim that over 300 people a day attended services in Moscow. The official Russian Duma investigation into the Aum described the cult as a closed, centralized organization.
With all that money sloshing around, It’s only a matter of time before they start cribbing from their neighbors and we get an anime adaptation of HPMoR.
aum:
Advertising and recruitment activities, dubbed the “Aum Salvation plan”, included claims of […] realizing life goals by improving intelligence and positive thinking, and concentrating on what was important at the expense of leisure.
this is in common with both our very good friends and scientology, but i think happy science is much stupider and more in line with srinivasan’s network states, in that it has/is an explicitly far-right political organization built in from day one
Yeah, good point.
Network State def has that store-brand Team Rocket vibe.
Not sure why this “member of technical staff at METR” felt the need to post about the lowered productivity of Black people in the southern US states after slavery was abolished. I’m sure it’s nothing.
https://www.lesswrong.com/posts/Zr37dY5YPRT6s56jY/thomas-kwa-s-shortform?commentId=iwGgqsmpY6Tcex5je
Free people have less productivity, time to wirehead everyone! A Brave New World!
the eternal mba graduate worldview
He seems to state that after the abolition of slavery, less of the profits from a unit of labor accrued to the owners of the land in question. The reasons for this are, of course, a mystery.
It’s always the people you most expect.
Ed Zitron’s planning to hold AI boosters to account:

Well if the bubble pops he will have to pivot to people who pivot. (That is what is going to suck when the bubble pops: so many people are going to lose their jobs, and I fear a lot of the people holding the bags are not the ones who really should be punished the most (really hope not a lot of pension funds bought in). The stock market was a mistake.)
I imagine it’ll be a pretty lucrative pivot - the public’s ravenous to see AI bros and hypesters get humiliated, and Zitron can provide that in spades.
Plus, he’ll have a major headstart on whatever bubble the hucksters attempt to inflate next.
the hucksters attempt to inflate next.
Quantum, it has already started: https://www.schneier.com/blog/archives/2025/07/cheating-on-quantum-computing-benchmarks.html
Y’know, I was predicting at least a few years without a tech bubble, but I guess I was dead wrong on that. Part of me suspects the hucksters are gonna fail to inflate a quantum bubble this time around, though.
Quantum computing is still too far out from having even a niche industrial application, let alone something you can sell to middle managers the world over. Anybody who day-traded could get into Bitcoin; millions of people can type questions at a chatbot. Hucksters can and will reinvent themselves as quantum-computing consultants on LinkedIn, but is the raw material for the grift really there? I’m doubtful.
Hucksters can and will reinvent themselves as quantum-computing consultants on LinkedIn, but is the raw material for the grift really there? I’m doubtful.
By my guess, no. AI earned its investor/VC dollars by providing bosses and CEOs alike a cudgel to use against labour, either by deskilling workers, degrading their work conditions, or killing their jobs outright.
Quantum doesn’t really have that - the only Big Claim™ I know it has going for it is its supposed ability to break pre-existing encryption clean in half, but that’s near-certainly gonna be useless for hypebuilding.
I think they will just start to make up capabilities; also, with the added capabilities of quantum as a computing paradigm, AGI is back on the menu. Now, thanks to quantum, without all the expensive datacenters and problems. We are gonna put quantum in glasses! VR/Augmented reality quantum AI glasses!
Quantum computing isn’t just hard, it’s hadamard
Ran across a viral post on Bluesky:

Unsurprisingly, the replies and quotes are universally outraged at the news.
Every task you outsource to a machine is a task that you don’t learn how to do.
And school is THE PLACE WHERE YOU ARE SUPPOSED TO LEARN THINGS, JESUS H. FUCK
Okay so I know GPT-5 had a bad launch and has been getting raked over the coals, but AGI is totally still on, guys!
Why? Because trust me it’s definitely getting better behind the scenes in ways that we can’t see. Also China is still scary and we need to make sure we make the AI God that will kill us all before China does because reasons.
Also, despite talking about how much of the lack of progress is due to the consumer model and how this is a cost-saving measure, there’s no reference to the work of folks like Ed Zitron on how unprofitable these models are, much less the recent discussions on whether GPT-5 as a whole is actually cheaper to operate than earlier models given the changes it necessitates in caching.
Everyone can also agree that the direct jump from GPT-4o and o3 to GPT-5 was not of similar size to the jump from GPT-3 to GPT-4
Sure babe, you keep telling yourself that.
Everyone agrees that the release of GPT-5 was botched. Everyone can also agree that the direct jump from GPT-4o and o3 to GPT-5 was not of similar size to the jump from GPT-3 to GPT-4, that it was not the direct quantum leap we were hoping for, and that the release was overhyped quite a bit.
a quantum leap might actually be accurate
New edition of AI Killed My Job, focusing on how translators got fucked over by the AI bubble.
The thing that kills me about this is that, speaking as a tragically monolingual person, the MTPE work doesn’t sound like it’s actually less skilled than directly translating from scratch. Like, the skill was never in being able to type fast enough or read faster or whatever, it was in the difficult process of considering the meaning of what was being said and adapting it to another language and culture. If you’re editing chatbot output you’re still doing all of that skilled work, but being asked to accept half as much money for it because a robot made a first attempt.
In terms of that old joke about auto mechanics, AI is automating the part where you smack the engine in the right place, but you still need to know where to hit it in order to evaluate whether it did a good job.
It’s also a lot less pleasant of a task; it’s like wearing a straitjacket, and compared to CAT (eg: automatically using glossaries for technical terms) it actually slows you down, if the translation is quite far from how you would naturally phrase things.
Source: Parents are professional translators. (They’ve certainly seen work dry up; they don’t do MTPE, as it’s still not really worth their time, and they still get $$$ for critically important stuff and live interpreting. [Live interpreting is definitely a skill that takes time to learn compared to translation.])
Could I ask you a question I’ve always wondered about the translation business? Why do people send in machine translation asking for cleanup and even expect it’ll cost them less?
Maybe I’m ignorant, but the way I see it, great machine translation tools are widely and freely accessible to anyone. If I needed professional translation done, I wouldn’t think copy-pasting a document into Google Translate – something that takes literal minutes – would get me any type of discount. It just doesn’t make sense to me.
Some of it is driven by translation agencies, which will refer work to freelance translators.
I would say the biggest gap is that many customers aren’t even bothering to use translators at all, and the ones that do realize it needs fixing up don’t really understand the work involved; many people misunderstand translation as being a 1-1 process and think that machine translation got you most of the way there.
It’s also the “are we willing to pay that much more, when the shitty translation is ‘good enough’?” question.
One big issue is that translation has a low barrier of entry, and many people will accept stupid work at stupid rates, so to keep rates high you have to prove the added value.
(Proving the added value has also gotten harder, as some clients even more often than before will “correct” your work before publishing it, as highlighted in the article.)
gwern: It’s not “AI slop” if I wasted hours dicking around with MidJourney to make it.
rsaarelm: People don’t appreciate the beauty of Substack’s built-in slop generator.
gwern: “I refuse to submit to the tyranny of the lowest common denominator and dumb down my writings or illustrations.” Have you appreciated the depth of my artist’s statement?
I put several hours of thought and effort into the concept and creating it,
Several hours of thought.
He is talking about an ice skating image where the figures are skating on flowing water, without skates: https://gwern.net/doc/fiction/gene-wolfe/suzanne-delage/2023-10-28-gwern-midjourneyv5-germanexpressionistlinocutofsinisternewenglandtowninwinter.jpg this image. It is supposed to evoke the idea of a declining town under Dracula’s influence.
(Imagine if he had just spent those hours on something else and paid an artist for the same hours to make something. Or if he had grabbed a pencil or charcoal himself.)
God, I looked into the article this is meant to illustrate, and I have feelings. The idea that this mysterious, evocative short story is something to be solved, and that he’s somehow cracked the code. And it must be precisely about Dracula. Don’t ask yourself why the name comes from Proust, or why the style and themes are so heavily Proustian. Proust is not genre literature; it isn’t in communication with the literature of ideas, which means it is of no value. Gene Wolfe is genre literature, so the story must be about vampires, and nothing else. Any literary depth is mere distraction, a ruse meant to mislead you and make you fail the test. Can’t wait for the rationalist Pale Fire remake!!!
Thanks for pointing me to this. I hadn’t read the Wolfe story and I appreciated it. I skipped most of the gwern fluff, precisely because while his preferred interpretation is one possibility among many, what I like about Wolfe is that the story can be about multiple things besides that.
And the illustration sucks.
Goddammit now I actually have to credit Gwern for something unambiguously positive in directing me to this story.
I found myself appreciating it a lot even just on a relatively surface level. I must confess to having no experience with Proust or some of the other references it makes, but it sent my mind back to my own time in school and struck me with a very particular kind of social vertigo, thinking about all the people I vaguely knew but haven’t spoken to or about since we were classmates. Like, people talk about the feeling that everyone around you is a full person with their own inner life and all that, and it feels similar to think how many people, especially in childhood, live their lives almost parallel to ours, intersecting only in passing.
Also, given how many rationalists seem utterly convinced that many if not most people are just NPCs who don’t meaningfully exist when “off screen”, I’m not surprised that they’re excited to have this mess of an interpretation that sidesteps that whole concept.
Ed: Also, the illustration sucks.
From Gwern’s “solution”:
Ophelia goes mad and forgets being in love with Hamlet
Dafuq?
One of the most striking aspects of the Dracula interpretation of SD is that SD turns out to be alluding to it indirectly, by making parallel allusions—the opening chapters of Dracula allude to the same parts of Hamlet that SD does! This clinches the case for SD-as-Dracula, as this is too extraordinary a coincidence to be accidental.
Yes, two different stories both alluding to the most quoted work in English goddamn literature can’t be a coincidence. It’s not like the line “there are more things in Heaven and Earth…” has been repeated so often that even Wolfe’s narrator calls it “hackneyed”… Hold on, I’m getting a message, just let me press my finger to my imaginary earpiece…
I would say myself that Wolfe’s alluding to a line rather than quoting it exactly serves to call up the whole feel of Hamlet, rather than a single moment. It evokes the Gothic wrongness, the inner turmoil paired with outer tumult, the appearances that sometimes belie reality and sometimes lead it. You could take this as suggesting that Susie D. is the Devil in a pleasing shape. Or, with all the Proustian business, and the lengthy excursus about historical artifacts hanging on as though the past lies thick in the present and refuses to lift… Perhaps the secondary Hamlet allusion behind the obvious one is “the time is out of joint”. Maybe Suzanne is a notional being, an idea tenuously made manifest, a collective imaginary friend or dream-creature leaking out into our reality. She looks the same from one generation to the next, because the dream of the girl next door stays the same. Perhaps the horror is that our reality is fragile, that these creations are always slipping in, and we only have a stable daylight world because we refuse to see them.
Also, the illustration sucks.