• 6 Posts
  • 354 Comments
Joined 2 years ago
Cake day: August 29th, 2023

  • even assuming sufficient computation power, storage space, and knowledge of physics and neurology

    but sufficiently detailed simulation is something we have no reason to think is impossible.

    So, I actually agree broadly with you on the abstract principle, but I’ve increasingly come around to it being computationally intractable for various reasons. But even if functionalism is correct…

    • We don’t have the neurology knowledge to do a neural-level simulation, and actually simulating all the neural features properly in full detail would be extremely computationally expensive, well beyond the biggest supercomputers we have now, and “Moore’s law” (scare quotes deliberate) has been slowing down enough that I don’t think we’ll get there.

    • A simulation from the physics level up is even more out of reach in terms of computational power required.

    As you say:

    I think there would be other, more efficient means well before we get to that point

    We really, really don’t have the neuroscience/cognitive science to find a more efficient way. And it is possible that all of the neural features really are that important to overall cognition, so you won’t be able to do it that much more “efficiently” in the first place…

    Lesswrong actually had someone argue that the brain is within an order of magnitude or two of the thermodynamic limit on computational efficiency: https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-efficiency-much-more-than-you-wanted-to-know
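
    To get a feel for what “thermodynamic limit” means in that argument, here’s a quick back-of-envelope sketch using Landauer’s principle. The ~20 W power budget and body temperature are rough textbook numbers I’m plugging in myself, not figures from the linked post:

    ```python
    # Back-of-envelope Landauer bound (rough illustrative numbers, not from the linked post).
    # Landauer's principle: erasing one bit costs at least k_B * T * ln(2) joules.
    import math

    K_B = 1.380649e-23    # Boltzmann constant, J/K
    T_BODY = 310.0        # body temperature in kelvin (~37 C)
    BRAIN_POWER_W = 20.0  # commonly cited brain power budget, ~20 watts

    joules_per_bit = K_B * T_BODY * math.log(2)
    max_bit_erasures_per_sec = BRAIN_POWER_W / joules_per_bit

    print(f"Landauer cost per bit at body temperature: {joules_per_bit:.2e} J")
    print(f"Upper bound on irreversible bit operations/sec at 20 W: {max_bit_erasures_per_sec:.2e}")
    # Roughly 7e21 bit erasures per second. The "thermodynamic limit" argument is that any
    # irreversible computer dissipating ~20 W at body temperature can't beat this bound,
    # no matter how clever its architecture is.
    ```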



  • Continuation of the lesswrong drama I posted about recently:

    https://www.lesswrong.com/posts/HbkNAyAoa4gCnuzwa/wei-dai-s-shortform?commentId=nMaWdu727wh8ukGms

    Did you know that post authors can moderate their own comments sections? Someone disagreeing with you too much but getting upvoted? You can ban them from responding to your posts (but not block them entirely???)! And, the cherry on top of this questionable moderation “feature”, guess why it was implemented? Eliezer Yudkowsky was mad about highly upvoted comments responding to his post that he felt didn’t get him or didn’t deserve the upvotes, so instead of asking moderators to handle it on a case-by-case basis (or, acausal God forbid, considering whether the communication problem was on his end), he asked for a modification to the lesswrong forums to let authors ban people from their posts (and delete the offending replies!!!). It’s such a bizarre forum moderation choice, but I guess habryka knew who the real leader is and had it implemented.

    Eliezer himself is called to weigh in:

    It’s indeed the case that I haven’t been attracted back to LW by the moderation options that I hoped might accomplish that. Even dealing with Twitter feels better than dealing with LW comments, where people are putting more effort into more complicated misinterpretations and getting more visibly upvoted in a way that feels worse. The last time I wanted to post something that felt like it belonged on LW, I would have only done that if it’d had Twitter’s options for turning off commenting entirely.

    So yes, I suppose that people could go ahead and make this decision without me. I haven’t been using my moderation powers to delete the elaborate-misinterpretation comments because it does not feel like the system is set up to make that seem like a sympathetic decision to the audience, and does waste the effort of the people who perhaps imagine themselves to be dutiful commentators.

    Uh, considering his recent twitter post… this sure is something. Also: “it does not feel like the system is set up to make that seem like a sympathetic decision to the audience”. No shit, Sherlock, deleting a highly upvoted reply because it feels like too much effort to respond to is in fact going to make people unsympathetic (at the least).


  • So one point I have to disagree with.

    More to the point, we know that thought is possible with far less processing power than a Microsoft Azure datacenter by dint of the fact that people can do it. Exact estimates on the storage capacity of a human brain vary, and aren’t the most useful measurement anyway, but they’re certainly not on the level of sheer computational firepower that venture capitalist money can throw at trying to nuke a problem from space. The problem simply doesn’t appear to be one of raw power, but rather one of basic capability.

    There are a lot of ways to try to quantify the human brain’s computational power: storage (which this article focuses on, but I think it’s the wrong measure), operations per second, number of neural weights, etc. Obviously the brain isn’t literally a computer and neuroscience still has a long way to go, so the estimates you can get are spread over something like 5 orders of magnitude (I’ve seen arguments from 10^13 flops up to 10^18 or even higher, and flops is of course the wrong way to look at the brain anyway). Datacenter computational power has caught up to the lower estimates, yes, but not the higher ones. The biggest supercomputing clusters, like El Capitan for example, are in the 10^18 range. My own guess would be at the higher end, like 10^18, with the caveat/clarification that evolution has optimized the brain for what it does really, really well, so that compute is being used extremely efficiently. Like one talk I went to in grad school that stuck with me: the eyeball’s microsaccades are basically acting as a frequency filter on visual input. So literally before the visual signal has even gotten to the brain, the information has already been processed in a clever and efficient way that isn’t captured in any naive flop estimate!

    AI boosters picked estimates of human brain power that would put it in range of just one more scaling, as part of their marketing. Likewise for number of neurons/synapses. The human brain has 80 billion neurons with an estimated 100 trillion synapses. GPT-4.5, which is believed to have peaked on number of weights (i.e. they gave up on straight scaling up because it is too pricey), is estimated (because of course they keep it secret) at something like 10 trillion parameters. Parameters are vaguely analogous to synapses, but synapses are so much more complicated and nuanced. But even accepting that premise, the biggest model was still about 1/10th the size it would need to be to match a human brain (and they may have lacked the data to even train it right). I’ve put the rough numbers in a little sketch below.

    So yeah, it’s a minor factual issue and the overall points are good; I just thought I would point it out, because this is exactly the kind of number the AI boosters distort to make it look like they are getting close to human-level.
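
    Since this is all orders-of-magnitude territory, here’s a toy sketch that just divides the rough figures cited in this comment against each other. Every number in it is a contested estimate or public rumor, not a measurement:

    ```python
    # Toy orders-of-magnitude comparison using the rough figures cited above.
    # Every number here is a contested estimate or public rumor, not a measurement.

    brain_flops_estimates = {
        "low-end estimate": 1e13,   # the kind of figure AI boosters like to quote
        "high-end estimate": 1e18,  # the commenter's own guess
    }
    el_capitan_flops = 1.7e18       # El Capitan peaks on the order of 10^18 FLOPS

    human_synapses = 100e12         # ~100 trillion synapses (80 billion neurons)
    gpt45_params_rumored = 10e12    # rumored ~10 trillion parameters (unconfirmed)

    for label, flops in brain_flops_estimates.items():
        print(f"El Capitan vs. brain ({label}): {el_capitan_flops / flops:,.0f}x")

    print(f"Human synapses vs. rumored GPT-4.5 parameters: {human_synapses / gpt45_params_rumored:.0f}x")
    # The gap is ~170,000x against the low-end brain estimate but only ~2x against the
    # high-end one, which is why the choice of estimate does so much rhetorical work.
    ```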




  • Very ‘ideological turing test’ failure levels.

    Yeah, his rationale is something something “threats” something something “decision theory”, which has the obvious but insane implication that you should actually ignore all protests (even peaceful protests that meet his lib centrist ideals of what protests ought to be), because responding would be giving in to the protestors’ “threats” (i.e. minor inconveniences, at least in the case of lib-brained protests) and thus incentivizing them to threaten you in the first place.

    he tosses the animal rights people (partially) under the bus for no reason. EA animal rights will love that.

    He’s been like this for a while, basically assuming that obviously animals don’t have qualia and obviously you are stupid and don’t understand neurology/philosophy if you think otherwise. No, he did not even explain any details behind his certainty about this.


  • I haven’t looked into the Zizians in a ton of detail even now, among other reasons because I do not think attention should be a reward for crime.

    And it doesn’t occur to him to look into the Zizians in order to understand how cults keep springing up from the group he is a major thought leader in? Like, if it was just one cult, I would sort of understand the desire to just shut one’s eyes (but it certainly wouldn’t be a truth-seeking desire), but they are like the third cult (or 5th or 6th if we are counting broadly cult-adjacent groups), and this is not counting the entire rationalist project as a cult. (For full-on religious cults we have Leverage Research and the rationalist-Buddhist cult; for high-demand groups we have the Vassarites, Dragon Army’s group home, and a few other sketchy group living situations (Nonlinear comes to mind).)

    Also, have an xcancel link, because screw Elon and some of the comments are calling Eliezer out on stuff: https://xcancel.com/allTheYud/status/1989825897483194583#m

    Funny sneer in the replies:

    I read the Sequences and all I got was this lousy thread about the glomarization of Eliezer Yudkowsky’s BDSM practices

    Serious sneer in the replies:

    this seems like a good time to point folks towards my articles titled “That Time Eliezer Yudkowsky recommended a really creepy sci-fi book to his audience and called it SFW” and “That Time Eliezer Yudkowsky Wrote A Really Creepy Rationalist Sci-fi Story and called it PG-13”


  • Even taking their story at face value:

    • It seems like they are hyping up LLM agents operating a bunch of scripts?

    • It indicates that their safety measures don’t work

    • Anthropic will read your logs, so you don’t have any privacy, confidentiality, or security using their LLM, but they will only find any problems months after the fact (this happened in June according to Anthropic, but they didn’t catch it until September).

    If it’s a Chinese state actor … why are they using Claude Code? Why not Chinese chatbots like DeepSeek or Qwen? Those chatbots code just about as well as Claude. Anthropic do not address this really obvious question.

    • Exactly. There are also a bunch of open source models hackers could use for a marginal (if any) tradeoff in performance, with the benefit that they could run locally, so that their entire effort isn’t dependent on hardware outside of their control, in the hands of someone who will shut them down if they check the logs.

    You are not going to get a chatbot to reliably automate a long attack chain.

    • I don’t actually find it that implausible that someone managed to direct a bunch of scripts with an LLM? It won’t be reliable, but if you can do a much greater volume of attacks, maybe that makes up for the unreliability? (Rough numbers on that tradeoff in the sketch below.)

    But yeah, the whole thing might be BS, or at least bad exaggeration from Anthropic; they don’t really precisely list what their sources and evidence are vs. what is inference (guesses) from that evidence. For instance, if a hacker tried to set up hacking LLM bots, and they mostly failed and wasted API calls and hallucinated a bunch of shit, and Anthropic just read the logs from their end and didn’t do the legwork of contacting people who had allegedly been hacked, they might “mistakenly” (a mistake that just so happens to hype up their product) think the logs represent successful hacks.
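
    On the volume-versus-reliability point above, the arithmetic is simple enough to sketch. The success rate and attempt count here are numbers I made up purely for illustration:

    ```python
    # Made-up numbers, purely illustrative: an unreliable automated attack pipeline can
    # still look productive in aggregate if it is cheap enough to run at volume.
    success_rate = 0.02   # assume each automated attack chain rarely works end to end
    attempts = 5000       # but an agent driving scripts can hammer thousands of targets

    expected_successes = success_rate * attempts
    p_at_least_one = 1 - (1 - success_rate) ** attempts

    print(f"Expected successes: {expected_successes:.0f}")
    print(f"Probability of at least one success: {p_at_least_one:.6f}")
    # With these assumptions that's ~100 expected successes, which is the sense in which
    # sheer volume can paper over unreliability -- and also why logs full of failed,
    # hallucinated attempts would be easy to misread as evidence of successful attacks.
    ```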


  • This is somewhat reassuring, as it suggests that he doesn’t fully understand how cultural critiques of LW affect the perception of LW more broadly;

    This. On Reddit (which isn’t exactly mainstream common knowledge per se, but I still find it encouraging and indicative that the common-sense perspective is winning out), whenever I see the topic of lesswrong or AI Doom come up on unrelated subreddits, I’ll see a bunch of top upvoted comments mentioning the cult spin-offs, or that the main thinker’s biggest achievement is Harry Potter fanfic, or Roko’s Basilisk, or any of the other easily comprehensible indicators that these are not serious thinkers with legitimate thoughts.


  • Another ironic point… Lesswrongers actually do care about ML interpretability (to the extent they care about real ML at all, and then only as a way of making their God AI serve their whims, not for anything practical). A lack of interpretability is a major problem (like an irl problem, not just a scifi skynet problem) in ML: you can have models with racism or other bias buried in them and not be able to tell, except by manually experimenting with your model with data from outside the training set. But Sam Altman has turned it from a problem into a humblebrag intended to imply their LLM is so powerful and mysterious it’s bordering on AGI.
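
    For a concrete picture of what “manually experimenting with your model” can look like, here’s a minimal counterfactual-probing sketch: flip a sensitive feature while holding everything else fixed and see whether the prediction moves. The toy data and model are entirely made up for illustration, and this is just one simple way to surface buried bias, not the only one:

    ```python
    # Minimal counterfactual-probing sketch: flip a sensitive attribute and see whether
    # the model's prediction changes. Toy data and model, made up purely for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy training data: feature 0 is a "sensitive attribute" (0/1), feature 1 is a
    # legitimate signal. Bias is baked in by letting the label depend on feature 0.
    n = 2000
    sensitive = rng.integers(0, 2, n)
    signal = rng.normal(0, 1, n)
    labels = ((signal + 0.8 * sensitive + rng.normal(0, 0.5, n)) > 0.5).astype(int)

    X = np.column_stack([sensitive, signal])
    model = LogisticRegression().fit(X, labels)

    # Probe with pairs of inputs that are identical except for the sensitive attribute.
    # A consistent gap in predicted probabilities is the kind of buried bias you only
    # find by testing, not by staring at the weights of a big opaque model.
    for s in np.linspace(-2, 2, 5):
        p0 = model.predict_proba([[0, s]])[0, 1]
        p1 = model.predict_proba([[1, s]])[0, 1]
        print(f"signal={s:+.1f}  P(y=1 | group 0)={p0:.2f}  P(y=1 | group 1)={p1:.2f}")
    ```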








  • Yud: “Woe is me, a child who was lied to!”

    He really can’t let that one go; it keeps coming up. It was at least vaguely relevant to a Harry Potter self-insert, but his frustrated-gifted-child vibes keep leaking into other weird places. (Like Project Lawful, among its many digressions, had an aside about how dath ilan raises its children to avoid this. It almost made me sympathetic towards the child-abusing devil worshipers who had to put up with these asides to get to the main character’s chemistry and math lectures.)

    Of course this a meandering plug to his book!

    Yup, now that he has a book out he’s going to keep referring back to it, and it’s being added to the canon that must be read before anyone is allowed to dare disagree with him. (At least the sequences were free and all online.)

    Is that… an incel shape-rotator reference?

    I think “shape-rotator” has generally permeated rationalist lingo as a term for a certain kind of math aptitude; I wasn’t aware the term had ties to the incel community. (But it wouldn’t surprise me that much.)