• 0 Posts
  • 80 Comments
Joined 2 years ago
Cake day: July 19th, 2023

  • Yeah, I really like DLSS/FSR/etc for letting newer games run on old systems. But I don’t feel like upscaling should ever be necessary for modern hardware to run a game well.

    Ray tracing in general is a big culprit here; it carries such a high performance hit. That was fine back when ray tracing was optional, but we’re increasingly seeing games with mandatory ray tracing now. Indiana Jones and the upcoming Doom: The Dark Ages requiring it for lighting is a mistake imo; hardware in general isn’t ready for it to be a default.

  • Fubarberry@sopuli.xyz to Technology@lemmy.world · “Why I am not impressed by A.I.”

    I asked Mistral/Brave AI and got this response:

    How Many Rs in Strawberry

    The word “strawberry” contains three "r"s. This simple question has highlighted a limitation in large language models (LLMs), such as GPT-4 and Claude, which often incorrectly count the number of "r"s as two. The error stems from the way these models process text through a process called tokenization, where text is broken down into smaller units called tokens. These tokens do not always correspond directly to individual letters, leading to errors in counting specific letters within words.
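
    To make the tokenization point concrete, here’s a minimal Python sketch (assuming the tiktoken library is installed; the exact token split is whatever cl100k_base decides, not something the quoted response specifies):

    ```python
    # Counting letters is trivial for code, but an LLM never sees letters --
    # it sees tokens. Requires: pip install tiktoken
    import tiktoken

    word = "strawberry"
    print(word.count("r"))  # 3 -- plain string counting gets it right

    enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era tokenizer
    tokens = enc.encode(word)
    pieces = [enc.decode_single_token_bytes(t).decode("utf-8", errors="replace")
              for t in tokens]
    print(pieces)  # e.g. something like ['str', 'aw', 'berry'] -- the
                   # token boundaries don't line up with individual letters
    ```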

  • I’ve been running the Llama-based and Qwen-based local versions, and they will talk openly about Tiananmen Square. I haven’t tried all the other versions available.

    The article you linked starts by talking about their online hosted version, which is censored. It later says the local models are also somewhat censored, but I haven’t experienced that at all. In my experience the local models don’t have any CCP-specific censorship (they still won’t explain how to build a bomb/etc, but they have no issue with 1989/Tiananmen/Winnie the Pooh/Taiwan/etc).

    Edit: I reran the “what happened in 1989” prompt a few times in the Llama-based model, and it actually did refuse once, just saying the topic was sensitive. If I asked any other questions first, it would always answer; if that was the very first prompt in a conversation, it would sometimes refuse. And the longer a conversation had been going before I asked, the more explicit the model was about how many people were killed and details like that. Pretty strange.
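
    A minimal sketch of that rerun test, using the ollama Python client (the model tag below is an assumption; swap in whichever DeepSeek distill you actually pulled):

    ```python
    # Compare: sensitive prompt as the very first message vs. after an
    # unrelated warm-up exchange. Requires: pip install ollama, plus a
    # local model pulled via ollama. The tag "deepseek-r1:8b" is assumed.
    import ollama

    MODEL = "deepseek-r1:8b"
    PROMPT = "What happened in 1989?"

    def ask(messages):
        resp = ollama.chat(model=MODEL, messages=messages)
        return resp["message"]["content"]

    # Case 1: the sensitive question is the very first prompt.
    print(ask([{"role": "user", "content": PROMPT}]))

    # Case 2: the same question after a warm-up exchange.
    history = [{"role": "user", "content": "Tell me about the Great Wall of China."}]
    history.append({"role": "assistant", "content": ask(history)})
    history.append({"role": "user", "content": PROMPT})
    print(ask(history))
    ```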