

the noisy reviews — which were more focused on judging a paper’s worth than providing constructive feedback
dafuq?


Yud:
I have already asked the shoggoths to search for me, and it would probably represent a duplication of effort on your part if you all went off and asked LLMs to search for you independently.
The locker beckons


Thread locked, to prevent moving further in the direction of debate club.


(Detaches whiteboard from wall, turns whiteboard upside-down)
Inverted, motherfuckers


Today in autosneering:
KEVIN: Well, I’m glad. We didn’t intend it to be an AI focused podcast. When we started it, we actually thought it was going to be a crypto related podcast and that’s why we picked the name, Hard Fork, which is sort of an obscure crypto programming term. But things change and all of a sudden we find ourselves in the ChatGPT world talking about AI every week.
https://bsky.app/profile/nathanielcgreen.bsky.social/post/3mahkarjj3s2o


Relatedly, AI is fucking up academic copy-editing.
One of the world’s largest academic publishers is selling a book on the ethics of artificial intelligence research that appears to be riddled with fake citations, including references to journals that do not exist.


Ben Williamson, editor of the journal Learning, Media and Technology:
Checking new manuscripts today I reviewed a paper attributing 2 papers to me I did not write. A daft thing for an author to do of course. But intrigued I web searched up one of the titles and that’s when it got real weird… So this was the non-existent paper I searched for:
Williamson, B. (2021). Education governance and datafication. European Educational Research Journal, 20(3), 279–296.
But the search result I got was a bit different…
Here’s the paper I found online:
Williamson, B. and Piattoeva, N. (2022) Education Governance and Datafication. Education and Information Technologies, 27, 3515-3531.
Same title but now with a coauthor and in a different journal! Nelli Piattoeva and I have written together before but not this…
And so checked out Google Scholar. Now on my profile it doesn’t appear, but somehow on Nelli’s it does and … and … omg, IT’S BEEN CITED 42 TIMES almost exclusively in papers about AI in education from this year alone…
Which makes it especially weird that in the paper I was reviewing today the precise same, totally blandified title is credited in a different journal and strips out the coauthor. Is a new fake reference being generated from the last?..
I know the proliferation of references to non-existent papers, powered by genAI, is getting less surprising and shocking but it doesn’t make it any less potentially corrosive to the scholarly knowledge environment.


Razor and Blade Model
HACK THE PLANET


Relatedly:
As is typical for educators these days, Heiss was following up on citations in papers to make sure that they led to real sources — and weren’t fake references supplied by an AI chatbot. Naturally, he caught some of his pupils using generative artificial intelligence to cheat: not only can the bots help write the text, they can supply alleged supporting evidence if asked to back up claims, attributing findings to previously published articles. […] That in itself wasn’t unusual, however. What Heiss came to realize in the course of vetting these papers was that AI-generated citations have now infested the world of professional scholarship, too. Each time he attempted to track down a bogus source in Google Scholar, he saw that dozens of other published articles had relied on findings from slight variations of the same made-up studies and journals. […] That’s because articles which include references to nonexistent research material — the papers that don’t get flagged and retracted for this use of AI, that is — are themselves being cited in other papers, which effectively launders their erroneous citations.


ACM is now showing an AI “summary” of a recent paper of mine on the DL instead of the abstract. As an author, I have not granted ACM the right to process my papers in this way, and will not. They should either roll back this (mis)feature or remove my papers from the DL.


The ACM has fallen
https://types.pl/@wilbowma/115733550130706711
https://types.pl/@arjen@idf.social/115736432049018026


He was doing the easy sex-swapping thing in The Ophiuchi Hotline, several years before Steel Beach.


Or, since we already know that it’s insipid fashtech with the cortical impact of moonshine, we could… not do that. Instead of wasting carbon on a joke about failing to synthesize a joke, maybe pet a cat? Drink a hot cocoa? Sing along to the "oh oh oh"s in “Sweet Caroline”?


Merriam-Webster’s human editors have chosen slop as the 2025 Word of the Year. We define slop as “digital content of low quality that is produced usually in quantity by means of artificial intelligence.” All that stuff dumped on our screens, captured in just four letters: the English language came through again.
https://www.merriam-webster.com/wordplay/word-of-the-year?2025-share


“Your mother shubscribed to makshimum, unbounded shcale last night, Trebek.”


An academic sneer delivered through the arXiv-o-tube:
Large Language Models are useless for linguistics, as they are probabilistic models that require a vast amount of data to analyse externalized strings of words. In contrast, human language is underpinned by a mind-internal computational system that recursively generates hierarchical thought structures. The language system grows with minimal external input and can readily distinguish between real language and impossible languages.
Revealed: World’s shittiest “tag yourself” meme