

Check out my open source game engine! https://strayphotons.net/ https://github.com/frustra/strayphotons
I have been developing this engine on and off for over 10 years, and I still have big plans for it.


Certainly if the falsification is a federal crime, the penalty shouldn’t be any less than for wire fraud, which carries up to 20 years in prison.


I guess we’re calling geothermal energy “reverse solar” now. This is silly marketing.


Yeah, I don’t want to discourage anyone from trying, but tech jobs are a long way from having unions be the norm.
I’d love to have one at my job, since the only kind of job security you get in software is becoming the specialist in some niche area where you’re the only one who knows how anything works, and that isn’t exactly a low-stress position either.


It’s a rough world out there for people trying to unionize…
https://kotaku.com/ubisoft-halifax-shutdown-unionized-rainbow-six-mobile-2000657752


fork()


This machine uses 75 kWh per day to make 1 gallon of gasoline. Using the cheapest electricity in the country, that’s $9.29 per gallon (and the machine itself costs $20k).
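For anyone checking the math, here’s a quick sketch. The ~$0.124/kWh rate isn’t stated anywhere; it’s just the rate the two numbers above imply:

    # Back-of-the-envelope electricity cost per gallon of synthesized gasoline.
    # 75 kWh/gallon is from the comment above; the rate is inferred from the
    # quoted $9.29 figure, not from any official tariff.
    energy_per_gallon_kwh = 75.0
    rate_usd_per_kwh = 0.124  # assumed cheap retail rate (~12.4 cents/kWh)

    cost_per_gallon = energy_per_gallon_kwh * rate_usd_per_kwh
    print(f"Electricity cost per gallon: ${cost_per_gallon:.2f}")  # ~= $9.30

And that ignores the capital cost: amortizing the $20k machine over, say, 5 years of one gallon per day adds roughly another $11 per gallon.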


I’m sure auto-generated captions will work great for channels like Primitive Technology where there’s no actual talking and the subtitles are describing what he’s doing…


“Office” has been completely removed from https://www.office.com/; the only place “Office” can still be found is in the URLs. It’s called “Microsoft 365” now.
Edit: My mistake, “Office Home 2024” is apparently still a thing you can buy, but it’s not the full package and isn’t being updated. I’m pretty sure LibreOffice is a full replacement for “Office Home”.


It seems like SketchUp uses OpenGL, which should be supported just fine by a Linux GPU driver. I haven’t tried it myself, but you could maybe try running it through Proton (I don’t know if there’s a way outside of Steam?).


I didn’t even realize these were being manufactured by SparkFun now. I’ve bought all my Teensy boards straight from the original designer: https://www.pjrc.com/store/ (It’s been several years since I’ve bought any parts.)


I’d also argue a human monitoring your conversation would likely make similar mistakes in judgement about what’s happening, and this kind of invasion of privacy just isn’t okay in any form. There could be whole extra conversations happening that they can’t see (like speaking IRL before sending a consensual picture).


I’ve seen some horrendous systems where you can tell a bunch of totally separate visions were Frankenstein’d together.
My experience has been that using AI only accelerates this process, because the AI has no concept of what good architecture is or how to reduce entropy. Unless you can one-shot the entire architecture, it’s going to immediately go off the rails. And if the architecture was that simple to begin with, there really wasn’t much value in the AI in the first place.


I don’t think File Explorer on Windows uses fork() to copy files? If it does, that’s insane. I don’t think git calls fork() per-file or anything either, does it?
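For context: fork() duplicates the calling process, not a file, which is why calling it per file copied would be so absurd. A minimal sketch using Python’s os.fork (POSIX-only):

    import os

    # fork() clones the *process*: the child gets a copy of the parent's memory
    # image and open file descriptors. No file data is copied by fork() itself.
    pid = os.fork()
    if pid == 0:
        print(f"child process, pid {os.getpid()}")
        os._exit(0)  # exit the child without running parent cleanup
    else:
        os.waitpid(pid, 0)  # parent waits for the child to finish
        print(f"parent process, pid {os.getpid()}, child was {pid}")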


This sounds like it takes away a huge amount of creative freedom from the writers if the AI is specifying the framework. It’d be like letting the AI write the plot, but then having real writers fill in details along the way, which sounds like a good way to have the story go nowhere interesting.
I’m not a writer, but if I were to apply this strategy to programming, which I am familiar with, it’d be like letting the AI decide what all the features are, and then I’d have to go and build them. Considering more than half my job is stuff other than actually writing code, this seems overly reductive and underestimates how much human experience matters in deciding on a framework and direction.


I fully blame this on NTFS being terrible with metadata and small files. I’m sure everyone’s tried copying/moving/deleting a big folder with thousands of small files before and watched the transfer rate drop to nearly 0…
On the bright side, you’re getting paid to wait around (/s because I know the feeling, and it’s just slow enough that you can’t step away and do something else).


What improvements have there been in the past 6 months? From what I’ve seen the AI is still spewing the same 3/10 slop it has since 2021, with maybe one or two improvements bringing it up from 2/10. I’ve heard several people say some newer/bigger models actually got worse at certain tasks, and the supply of clean training data for training even more models has pretty much dried up.
I just don’t see any world where scaling up the compute and power usage is going to suddenly improve the quality by orders of magnitude. By design, LLMs output the most statistically likely response, which is almost by definition going to be the most average, bland response possible.


This is based on the assumption that the AI output is any good, but the actual game devs and writers are saying otherwise.
If the game is too big for writers to finish on their own, they’re not going to have time to read and fix everything wrong with the AI output either. This is how you get an empty, soulless game, not Baldur’s Gate 3.


Legitimately, it is a winning strategy: https://www.history.com/articles/us-invasion-of-panama-noriega


How large a number are we talking? This might be impossible for a computer as well, considering that the hardness of this problem is effectively the basis for most encryption.
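Assuming the hard problem being referenced is integer factorization (the basis of RSA), here’s a rough sketch of why the size matters: naive trial division is instant for small numbers, but the work grows with the square root of the number, so the 2048-bit moduli used in encryption are completely out of reach.

    # Hypothetical illustration: factoring by trial division.
    # (Assumes the "hard problem" in question is integer factorization, as in RSA.)
    def trial_division(n: int) -> list[int]:
        factors = []
        d = 2
        while d * d <= n:          # work scales roughly with sqrt(n)
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)      # whatever remains is prime
        return factors

    print(trial_division(1_000_003 * 1_000_033))  # ~1e6 divisions, well under a second
    # A 2048-bit RSA modulus would need on the order of 2**1024 divisions,
    # which no computer will ever finish.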