Just ask them to answer your question in the style of a know-it-all Redditor because you need the dialog for a compelling narrative or something
…Nintendo has removed online support for pretty much every platform other than the Switch. I get that this comes sooner for China than expected, but it was an inevitable outcome either way.
In terms of grassroots support, he’s been very effective. This map is from 2020, when there was an actual primary, but it does paint the picture pretty well:
Source of graph (it’s paywalled but I found the image directly in the search results and copied it lol)
In the US, there’s still a lot of McCarthy-era sentiment, and “Communist” is a pejorative within the general population. For instance, the Communist Control Act of 1954 is still on the books. Though it has issues as a law for being really vague, and hasn’t been used seriously against leftist organizing on account of that, it nonetheless remains and has never been challenged outright before the Supreme Court of the United States. Either way, it had a chilling effect, and was pretty successful as part of the US’s broader campaign to demonize communism and communist organizing.
Because of the way “Communism” and “Marxism” are used within US press and mainstream politics (especially by the Republican party), the average voter is conditioned to view them as bad words accordingly. The Democratic party, trying to court “moderate” voters within the political landscape here, all but refuses to touch those words with a 10-foot pole. It’s not part of their brand (and not part of their policy either, not by any stretch of the imagination).
Progressivism in my view is an umbrella term, but still pretty linked with liberalism as a movement in the sense that it’s mostly reformist and acts as a subgroup within the Democratic party. Most “Progressive” candidates for US political office are SocDems at most.
You can call it newspeak, but political movements arise under new/different names as the situation dictates, and often refer to different things. I’d argue that the point of newspeak within 1984 was actually to limit the evolution of language and restrict the development of new words/ideas, but I do get where you’re coming from on account of “progressive” being considered more politically correct.
Removing the homepage entirely and replacing the UI with the shorts-style format of “view video right now, tap button to see next/previous video”. If you want a specific video, you must search for it.
People developing local models generally have to know what they’re doing on some level, and I’d hope they understand what their model is and isn’t appropriate for by the time they have it up and running.
Don’t get me wrong, I think LLMs can be useful in some scenarios, and can be a worthwhile jumping-off point for someone who doesn’t know where to start. My concern is with the cultural issues and expectations/hype surrounding “AI”. With how the tech is marketed, it’s pretty clear that the end goal is for someone to use the product as a virtual assistant endpoint for as much information (and interaction) as can possibly be shoehorned through it.
Addendum: local models can help with this issue, as they’re on one’s own hardware, but still need to be deployed and used with reasonable expectations: that it is a fallible aggregation tool, not to be taken as an authority in any way, shape, or form.
On the whole, maybe LLMs do make these subjects more accessible in a way that’s a net-positive, but there are a lot of monied interests that make positive, transparent design choices unlikely. The companies that create and tweak these generalized models want to make a return in the long run. Consequently, they have deliberately made their products speak in authoritative, neutral tones to make them seem more correct, unbiased and trustworthy to people.
The problem is that LLMs ‘hallucinate’ details as an unavoidable consequence of their design. People can tell untruths as well, but if a person lies or misspeaks about a scientific study, they can be called out on it. An LLM cannot be held accountable in the same way, as it’s essentially a complex statistical prediction algorithm. Non-savvy users can easily be fed misinfo straight from the tap, and bad actors can easily generate correct-sounding misinformation to deliberately try and sway others.
ChatGPT completely fabricating authors, titles, and even (fake) links to studies is a known problem. Far too often, unsuspecting users take its output at face value and believe it to be correct because it sounds correct. This is bad, and part of the issue is marketing these models as though they’re intelligent. They’re very good at generating plausible responses, but this should never be construed as them being good at generating correct ones.
90 days to cycle private tokens/keys?
I’ve heard there are hyper-reflective stickers you can put on/near the plate that basically blind a traffic camera’s view when trying to read it
Years back, I had that happen on PayPal of all websites. Their account creation and reset pages silently truncated my password to 16 chars or something before hashing, but the actual login page didn’t, so the password didn’t work at all unless I backspaced it down to the character limit. I forget how I even found that out, but it was a very frustrating few hours.
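To illustrate the mismatch, here’s a minimal hypothetical sketch (obviously not PayPal’s actual code; the 16-char cutoff and the hashing are just placeholders based on what I remember):

```python
# Hypothetical illustration of the bug described above: signup silently
# truncates the password before hashing, but login hashes the full input.
import hashlib

MAX_LEN = 16  # the limit I remember, give or take

def hash_pw(password: str) -> str:
    # stand-in for whatever hashing scheme they actually used
    return hashlib.sha256(password.encode()).hexdigest()

def signup(password: str) -> str:
    # creation/reset path: silently truncates, then hashes
    return hash_pw(password[:MAX_LEN])

def login(stored_hash: str, password: str) -> bool:
    # login path: hashes the full input, no truncation
    return hash_pw(password) == stored_hash

stored = signup("correct-horse-battery-staple")                  # 28 chars at signup
print(login(stored, "correct-horse-battery-staple"))             # False: full password fails
print(login(stored, "correct-horse-battery-staple"[:MAX_LEN]))   # True: only the truncated form works
```

Either both paths truncate identically or neither does; the silent mismatch is what made it so hard to figure out.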
Aren’t there still massive issues with the Colorado River running dry? Hopefully they’re not too dependent on that water source for their chips
Tay? Yeah, it did, but that was mostly due to a 4chan ‘model poisoning’ campaign at the time.