i would like to filter out all “massive multiplayer online arena shooters”. There are way too many of them.
Any processor can run LLMs. The only issues are how fast it goes and how much RAM it has access to. And you can trade the latter for disk space if you’re willing to sacrifice even more speed.
If it can add, it can run any model
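The RAM-for-disk trade mentioned above can be sketched with memory-mapped weights: instead of loading everything into RAM, you map the file and pay a disk read on access. This is a minimal illustration, not any particular runtime’s implementation; the file path and array shape are made up.

```python
import numpy as np

# A minimal sketch of trading RAM for disk space: save some "weights"
# to disk, then memory-map them instead of loading them fully into RAM.
# Only the pages you actually touch get read from disk, which is slower
# than RAM -- that is the speed sacrifice.
weights = np.arange(12, dtype=np.float32)
np.save("/tmp/weights.npy", weights)

# mmap_mode="r" maps the file read-only without loading it wholesale.
mapped = np.load("/tmp/weights.npy", mmap_mode="r")

# Compute with a slice; this pulls just the needed bytes off disk.
total = float(mapped[:6].sum())
print(total)  # 0+1+2+3+4+5 = 15.0
```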
raytracing is insanely expensive. If you saw what current cards can render in real time without AI denoising and a lot of temporal shit (which only looks good in screenshots), you would see a very, very noisy, incomplete image that looks like shit. It is very, very far from being able to render an actual frame with decent performance.
yes, but what you need to be doing is tons of multiply-accumulates, using a fuckton of memory bandwidth… which a GPU is designed for. You won’t design anything much better with an FPGA.
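For anyone unfamiliar with the term, multiply-accumulate is the one operation these workloads repeat endlessly: a dot product is nothing but MACs, and a matrix multiply is many dot products. A toy sketch, purely for illustration:

```python
# A minimal sketch of the multiply-accumulate (MAC) workload.
# Each loop iteration is one MAC: multiply two numbers, add the
# result into an accumulator. GPUs run enormous numbers of these
# in parallel, fed by very wide memory buses.
def dot(a, b):
    acc = 0.0
    for x, y in zip(a, b):
        acc += x * y  # one multiply-accumulate step
    return acc

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 4 + 10 + 18 = 32.0
```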
yes it is malware
9% played 2024 games in 2023?
all the advertised AI detection tools are just that. Happy slapping!
This is not an error correction issue, though. Error correction means taking known data and adding redundancy to it so that damaged pieces can be repaired. This makes the message longer.
An LLM’s output does not contain error correction. It’s just the output. And it doesn’t contain any errors, mathematically speaking. The hallucination is the correct output: it is what the statistics gathered from the training set determined is most likely. A “correct” LLM output is indistinguishable from a “hallucination”, mathematically, and always will be. A hallucination is simply “some output that some human, somewhere, doesn’t like”, and that’s uncomputable. Outputs that people subjectively consider “hallucinations” cannot be eliminated, because an LLM is, fundamentally, a probabilistic algorithm. If you added error correction to an LLM’s output, all you’d be able to recover is the LLM’s original output, “hallucinations” and all.
TL;DR: “hallucinations” are a subjective thing. A “hallucination” is not an error that can be corrected after the fact, because it is not an error in the first place.
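The “same rule produces both” point can be sketched with weighted sampling. The vocabulary and probabilities below are invented purely for illustration:

```python
import random

# A minimal sketch of why a "hallucination" is mathematically
# indistinguishable from a correct output: both are samples drawn
# from the same next-token distribution by the same procedure.
vocab = ["Paris", "Lyon"]
probs = [0.9, 0.1]  # the model considers "Paris" most likely

random.seed(0)
samples = random.choices(vocab, weights=probs, k=1000)

# Nothing in the sampling step marks any output as an "error":
# the occasional "Lyon" comes out of exactly the same rule that
# produces "Paris". Labeling one of them a hallucination happens
# outside the math, in a human's head.
print(samples.count("Paris") > samples.count("Lyon"))  # True
```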
they would need to develop balanced mechanics. Level scaling completely ruins any sense of progression.
they didn’t. This is about a properly written BBC headline being butchered when summarized by Apple Intelligence, which the BBC has no control over.
the problems with (the current forms of generative) AI will not be solved, because they cannot be solved. They are intrinsic to the whole framework.
5G hadn’t been invented yet. They had nothing to worry about back then.
/s
“could” increase consumer prices? Really? Are we still not sure whether they will actually increase if tariffs are implemented?
llm and search should not be in the same sentence
telegram is one of the worst, privacy-wise.
american, right? It’s very popular in other places.
deus ex… thief…
multiplayer.
It doesn’t add up
well there’s a simple solution to that: make cars that people want to buy.
I was in the market for a car recently. I wanted a small city car. All the big players offer now are crossovers and SUVs.
So that’s why I got a Chinese-made vehicle. It was what I wanted, and it exists. The reason I didn’t buy something European is that none of them make anything I’d want to buy anymore. Maybe the Fiat 500, but that’s a bit too pricey, imo.
and the bar is getting lower. Fast iteration, releasing broken, poorly understood, barely maintainable pieces of shit as quickly as one can.
Fucking agile
it’s ok! telegram is not encrypted!