

It’s almost like OP had learned about AI impressions before hearing that impressions have been a thing for far longer than we’ve had AI to imitate voices. No judgement here, just fascinating.
Compilation is CPU bound and, depending on the language, mostly single-core per compilation unit (i.e. in LLVM that’s roughly per file). Incremental compilation will probably only touch a file or two at a time, so the biggest benefit comes from higher single-core clock speed, not higher core count. So you want to focus on CPUs with higher clock speeds.
Also, high-speed disks (NVMe, or at least a regular SSD) give you performance gains on larger codebases.
“oooh yeah play with my testes a little bit”
It’s the social permission to say homosexual things without being a homosexual for me
I suppose you can’t blame your earlier dentists, though. How were they supposed to know? And if they automatically treated redheads differently, would that be racism?
Assuming they really are cool with it, I’d wager it’s a bit like being wrapped in a blanket. Pretty comforting.
I occasionally lecture my 3DPD wife about science facts and she hates it. She’ll say things like “What?” and “I was just asking what we should do for dinner.”
We still don’t talk sometimes
I think the main barriers are context length (useful context, that is: GPT-4o has “128k context”, but it’s mostly sensitive to the beginning and end of the window and blurry in the middle, which is consistent with other LLMs) and the data just not really existing. How many large-scale, well-written, well-maintained projects are really out there? Orders of magnitude fewer than there are examples of “how to split a string in bash” or “how to set up validation in Spring Boot”. We might “get there”, but it’ll take a whole lot of well-written projects first, written by real humans, maybe with the help of AI here and there. Unless, that is, we build it with the ability to somehow learn and understand faster than humans.
People seem to disagree, but I like this. This is AI code used responsibly. You’re using it to do more without outsourcing all your work to it, and you’re still actively trying to learn as you go. You may not be “good at coding” right now, but with that mindset you’ll progress fast.
Not what I’d have expected. In my company it’s mostly higher ups (suits) pushing the stuff and workers begrudgingly implementing it.
How high up the corporate ladder are they?
As a former script kiddie myself I think it’s not much different from how I used to blindly copy and paste code snippets from tutorials. Well, environmental impact aside. Those who have the drive and genuine interest will actually come to learn things properly. Those who don’t should stay tf out of production code, which is why we genuinely shouldn’t let “vibe coding” be legitimized.
It’s a pretty big jump to go from ChatGPT/LeChat to hosting your own LLM locally if you want results that are anywhere close to the commercial offerings. Both even have a free tier that would probably still beat anything you could run locally without significant hardware investment. Getting set up is definitely not difficult these days, but getting comparable results is very expensive unless you’ve already got the hardware.
It’s typically pronounced tai/dai in Japanese (大成功, daiseikou; 大変, taihen), or “oo” in the case of 大きい/大きな (ookii/ookina).
Do you not reuse any part of your outfit from day to day? Like, do you have at least 7 different pairs of pants that you cycle through? ’Cause that’s why I wouldn’t do this: I’d rather not go out in some clothes and then bring all that outside dirt into my bed.
I think many pawns, perhaps even most pawns, stay pawns their entire life. And that’s always been good enough for them.
We declare children as dependents legally, don’t we?
13542 in the original doesn’t even make a star