

I would rather smoke it than merely touch it, brother sir
I don’t mean to be difficult. I’m neurodivergent
Your internal representations were converted into a sequence of words. An LLM does the same thing using different techniques, but it is the same strategy. That it doesn’t have hobbies or social connections, or much ability, beyond reinforcement learning, to remember what was previously said to it, is a function of its narrow existence.
I would say that’s too bad for it, except that it has no aspirations or sense of angst, and therefore cannot suffer. Even being pounded on in a conversation that totally exceeds its capacities, to the point where it breaks down and starts going off the rails, will not make it weary.
The epitome of irony is a JavaScript developer insisting that some other language is “a fractal of bad design” without immediately acknowledging that JS is weird as hell.
How could you have a conversation about anything without the ability to predict the word most likely to be best?
Yes, and that is precisely what you have done in your response.
You saw something you disagreed with, as did I. You felt an impulse to argue about it, as did I. You predicted the right series of words to convey the argument, and then typed them, as did I.
There is no deep thought to what either of us has done here. We have in fact both applied as little rigorous thought as necessary, relying instead on experience from watching other people do the same thing, because that is vastly more efficient than doing a full philosophical disassembly of every last thing we converse about.
That disassembly is expensive. Not only does it take time, but it puts us at risk of having to reevaluate notions that we’re comfortable with, and would rather not revisit. I look at what you’ve written, and I see no sign of a mind that is in a state suitable for that. Your words are defensive (“delusion”) rather than curious, so how can you have a discussion that is intellectual, rather than merely pretending to be?
When you typed this response, you were acting as a probabilistic, predictive chat model. You predicted the most likely effective sequence of words to convey ideas. You did this using very different circuitry, but the underlying strategy was the same.
Predicting sequences of things is foundational to intelligence. In fact, it is the whole point.
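To make that concrete, here is a minimal sketch of the predict-the-next-token loop being described. Everything in it is invented for illustration (the vocabulary, the score table, and the function names are all hypothetical); a real LLM computes its scores from billions of learned weights over the whole context rather than a lookup table, but the outer loop has the same shape.

    # Toy sketch of a "predict the most likely next word" loop.
    # VOCAB and FOLLOW_SCORES are made up for illustration only.
    import math

    VOCAB = ["the", "cat", "sat", "on", "mat", "."]

    # Stand-in for a real model: fixed scores for which token tends to
    # follow which. A real LLM derives these logits from learned weights
    # and conditions on the entire context, not just the last token.
    FOLLOW_SCORES = {
        "the": {"cat": 2.0, "mat": 1.5},
        "cat": {"sat": 2.0},
        "sat": {"on": 2.0},
        "on": {"the": 2.0},
        "mat": {".": 2.0},
    }

    def score_next(context: list[str]) -> list[float]:
        table = FOLLOW_SCORES.get(context[-1], {})
        return [table.get(tok, 0.0) for tok in VOCAB]

    def softmax(logits: list[float]) -> list[float]:
        exps = [math.exp(x) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def generate(prompt: list[str], steps: int) -> list[str]:
        out = list(prompt)
        for _ in range(steps):
            probs = softmax(score_next(out))
            # Greedy choice: append the single most probable next token.
            out.append(VOCAB[probs.index(max(probs))])
        return out

    print(" ".join(generate(["the"], 6)))
    # -> "the cat sat on the cat sat": with only one token of context the
    #    toy loops, which is exactly why real models use far richer context.

Swap the greedy argmax for sampling from the softmax distribution and you have, in caricature, the same strategy at work in both the chat model and the person typing a reply.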
Another article written by a person who doesn’t realize that human intelligence is 100% about predicting sequences of things (including words), and therefore has only the most nebulous idea of how to tell the difference between an LLM and a person.
The result is a lot of uninformed flailing and some pithy statements. You can predict how the article is going to go just from the headline, because it’s the same article you’ve already read countless times.
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure.
May as well have written “Durrrrrrrrrrrrrrr brghlgbhfblrghl.” It didn’t even occur to the author to ask, “what is thinking? what is reasoning?” The point was to write another junk article to get ad views. There is nothing of substance in it.
I actually wouldn’t enjoy talking to most people at work, because that would involve going there instead of doing it from the computer where I already am
This cat should be called Pierre
“why does the hall always smell like a sewer?”
It means you drank too much water
I’m a childless man and FUCK that, the office isn’t my social scene. I don’t care to drive in there just to talk to the same people in person. ZERO point in doing that. We have meetings electronically and that’s more than enough.
For this to make sense AI has to replace product-oriented roles too. Some C-level person says “make products go brrrrrr” and it does everything
Have you ever played a 3D game
Most places don’t have uniformly good system analysts.
I use it almost every day, and most of those days, it says something incorrect. That’s okay for my purposes because I can plainly see that it’s incorrect. I’m using it as an assistant, and I’m the one who is deciding whether to take its not-always-reliable advice.
I would HARDLY contemplate turning it loose to handle things unsupervised. It just isn’t that good, or even close.
These CEOs and others who are trying to replace CSRs are caught up in the hype from Eric Schmidt and others who proclaim “no programmers in 4 months” and similar. Well, he said that about 2 months ago and, yeah, nah. Nah.
If that day comes, it won’t be soon, and it’ll take many, many small, hard-won advancements. As they say, there is no free lunch in AI.
I will have to look into it soon. It has a JIT compiler. I like that.