⭒˚。⋆ 𓆑 ⋆。𖦹

  • 5 Posts
  • 213 Comments
Joined 2 years ago
Cake day: June 21st, 2023

  • Blindsight mentioned!

    The only explanation is that something has coded nonsense in a way that poses as a useful message; only after wasting time and effort does the deception become apparent. The signal functions to consume the resources of a recipient for zero payoff and reduced fitness. The signal is a virus.

    This has been my biggest problem with it. It places a cognitive load on me that wasn’t there before, having to cut through the noise.






  • I can’t stop thinking about this piece from Gary Marcus I read a few days ago, How o3 and Grok 4 Accidentally Vindicated Neurosymbolic AI. It’s a fascinating read on the differences between connectionist and symbolic AI, and the merging of the two into neurosymbolic AI, from someone who understands the topic.

    I recommend giving the whole thing a read, but this little nugget at the end is what caught my attention:

    Why was the industry so quick to rally around a connectionist-only approach and shut out naysayers? Why were the top companies in the space seemingly shy about their recent neurosymbolic successes?

    Nobody knows for sure. But it may well be as simple as money. The message that we can simply scale our way to AGI is incredibly attractive to investors because it puts money as the central (and sufficient) force needed to advance.


    AGI is still rather poorly defined, and taking cues from Ed Zitron (another favorite of mine), there will be a moving of goalposts. Scaling fast and hard to several gigglefucks of power and claiming you’ve achieved AGI is the next big maneuver. All of this largely just to treat AI as a black hole for accountability; the super smart computer said we had to take your healthcare.




  • LLMs are a tool, and all tools can be repurposed or repossessed.

    That’s just simply not true. Tools are usually quite specific in purpose, and oftentimes the tasks they accomplish cannot be undone by the same tool. A drill cannot undrill a hole. I’m familiar with ML (machine learning) and the many, many legitimate uses it has across a wide range of fields.

    What you’re thinking of, I suspect, is a weapon. A resource that can be wielded equally by and against each side. The pain the devaluation of our art and labor inflicts on the common person can’t be inflicted back on the corpofascists; for them, that’s the point. They are the ones selling these tools to you, and you cannot defeat them by buying in. And I do very much mean the open source models as well. Waging war on their terms, with their tools and methods (repossessed as they may be), is still a losing proposition.

    By ignoring this technology and sticking our fingers in our ears, we are allowing them to reshape how the technology works, instead of molding it for our own purposes. It’s not going to go away, and thinking it will is just as foolish as believing the Internet was a fad.

    Time will tell. How are your NFTs doing? (sorry, that was mean)

    The negative preconceived notion bias is really not helping matters.

    Guilty as charged, I’m pretty strongly anti-AI. But seriously, watch that ad and tell me that the disorienting cadence of speech and the uncanny, overly detailed generated images look good. Most of us have seen what’s on offer and we’re telling you, we’re tired.


    Look, I do apologize, I’m very much trying not to be overly aggro here or attack you in any way. But I think discussions about the religious overtones and belief systems of the BJ are exactly where we’re at.

    How o3 and Grok 4 Accidentally Vindicated Neurosymbolic AI

    This is a really interesting article. Gary Marcus is a lot more positive on AI than I am, I think, but that’s understandable given his background. If I do concede that some form of AGI is inevitable, I think we are within our rights to demand that it is indeed the tool we deserve, and not just snake oil.

    AI art still ugly, sorry not sorry.


  • Kind of really disagree with this video 😕

    I’ve only read the first two Dune novels, and that was a while ago, so I’m poorly equipped to have this conversation, but the video focuses on the idea that the fascists perpetuate it to keep powerful tools of liberation out of the hands of the proletariat. You wouldn’t agree with a fascist, would you? While there may be some truth to this, it completely ignores the cause of the BJ to begin with. It was in fact a rebellion by the people against those tools.

    Even taken at face value, the video seems to posit that because the fascists can’t be trusted, AI is indeed a powerful tool for liberation. I don’t see that as the case. It hardly needs to be said, but Dune is a sci-fi novel, the context of which does not currently apply to our real-world circumstances. AI is the tool of the fascists, used for oppression. I don’t think it can simply be repurposed for liberation; that’s a naive interpretation that ignores all of the actual ways in which the current implementations of AI work.

    Disgusting AI-generated ad for merch halfway through.

    EDIT: the point is further complicated by the fact that the BJ eliminated “computers, thinking machines, and conscious robots”, not simply AI. Many of those are tools that could empower people, but that doesn’t mean you can just lump them together.






  • I don’t really have a concise answer, but allow me to ramble from personal experience for a bit:

    I’m a sysadmin who was VERY heavily invested in the Microsoft ecosystem. It was all I worked with professionally and really all I had ever used personally as well. I grew up with Windows 3.1 and just kept on from there, although I did mess with Linux from time to time.

    Microsoft continues to enshittify Windows in many well-documented ways. From small things like not letting you customize the Start menu and taskbar, to things like microstuttering from all the data it’s trying to load over the web, to the ads it keeps trying to shove into various corners. A million little splinters that add up over time. Still, I considered myself a power user, someone able to make registry tweaks and PowerShell scripts to suit my needs.

    Arch isn’t particularly difficult for anyone who is comfortable with OSes and has excellent documentation. After installation it is extremely minimal, coming with a relatively bare set of applications to keep it functioning. Using the documentation to make small decisions for yourself like which photo viewer or paint app to install feels empowering. Having all those splinters from Windows disappear at once and be replaced with a system that feels both personal and trustworthy does, in a weird way, kind of border on an almost religious experience. You can laugh, but these are the tools that a lot of us live our daily lives on, for both work and play. Removing a bloated corporation from that chain of trust does feel liberating.


    As to why particularly Arch? I think it’s just that level of control. I admit it’s not for everyone, but again, if you’re at least somewhat technically inclined, I absolutely believe it can be a great first distro, especially for learning. Ubuntu has made some bad decisions recently, but even before that, I always found myself tinkering with every install until it became some sort of Franken-Debian monster. And I like pacman way better than apt, fight me, nerds.


  • The latest We’re In Hell revealed a new piece of the puzzle to me, Symbolic vs Connectionist AI.

    As a layman I want to be careful about overstepping the bounds of my own understanding, but as someone who has followed this closely for decades, read a lot of sci-fi, and dabbled in computer science, it’s always been kind of clear to me that AI would be more symbolic than connectionist. Of course it’s going to be a bit of both, but there really are a lot of people out there that believe in AI from the movies; that one day it will just “awaken” once a certain number of connections are made.

    Cons of Connectionist AI: Interpretability: Connectionist AI systems are often seen as “black boxes” due to their lack of transparency and interpretability.

    Transparency and accountability are liabilities in a large number of the applications AI is currently being pushed for. This is just THE PURPOSE.

    Even taking a step back from the apocalyptic killer AI mentioned in the video, we see the same in healthcare. The system is beyond us, smarter than us, processing larger quantities of data and making connections our feeble human minds can’t comprehend. We don’t have to understand it, we just have to accept its results as infallible and we are being trained to do so. The system has marked you as extraneous and removed your support. This is the purpose.


    EDIT: In further response to the article itself, I’d like to point out that misalignment is a very real problem but is anthropomorphized in ways it absolutely should not be. I want to reference a positive AI video, AI learns to exploit a glitch in Trackmania. To be clear, I have nothing but immense respect for Yosh and his work writing his homegrown Trackmania AI. Even he anthropomorphizes the car and carrot, but he understands that the rewards are a fairly simple system for maximizing a numerical score.

    This is what LLMs are doing: they are maximizing a score by trying to serve you an answer to your prompt that you find satisfactory. I’m not gonna source it, but we all know that a lot of people don’t want to hear the truth, they want to hear what they want to hear. Tech CEOs have been mercilessly beating the algorithm to do just that.
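    The “maximizing a numerical score” point is easy to make concrete with a toy sketch (purely illustrative; this is not Yosh’s actual Trackmania setup, nor how any real LLM is trained): a hill-climbing agent that only ever sees a scalar reward, with no model of what the number means.

    ```python
    import random

    def reward(x: float) -> float:
        # A toy score function the agent never "understands";
        # it only observes the number that comes back.
        return -(x - 3.0) ** 2

    def hill_climb(steps: int = 2000, seed: int = 0) -> float:
        # Randomly nudge the current guess; keep the nudge only if
        # the scalar reward goes up. No notion of *why* it went up.
        rng = random.Random(seed)
        x = 0.0
        best = reward(x)
        for _ in range(steps):
            candidate = x + rng.uniform(-0.5, 0.5)
            r = reward(candidate)
            if r > best:
                x, best = candidate, r
        return x

    print(hill_climb())  # ends up near 3.0, the reward peak
    ```

    The agent “finds the carrot” without any understanding; swap in a reward function with a glitch in it and it will just as happily exploit the glitch, which is the whole misalignment point.
    
    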

    Even stripped of all reason, language can convey meaning and emotion. It’s why sad songs make you cry, it’s why propaganda and advertising work, and it’s why that abusive ex got the better of you even though you KNEW you were smarter than that. None of us are so complex as we think. It’s not hard to see how an LLM will not only provide a sensible response to a sad prompt, but may make efforts to infuse it with appropriate emotion. It’s hard-coded into the language; the two can’t be separated, and the fact that the LLM wields emotion without understanding, like a monkey with a gun, is terrifying.

    Turning this stuff loose on the populace like this is so unethical there should be trials, but I doubt there ever will be.