

Their prices lately have been very unimpressive.
Maybe HDR on Linux? I’m fairly clueless about how it all works under the hood, but I’m currently on Debian 12 and hoping that by the time 13 comes around it will just work without any manual system tweaks. As I understand it, HDR is semi- or fully-working in KDE Plasma 6, but I’m stuck on Plasma 5 until Debian 13 comes out.
Rutracker is pretty solid for a public tracker. They’ve notably got a ton of music rips mirrored from what.cd and RED, and they seem to have a good handle on gaming releases as well. Their uploaders seem to put at least some effort into well-annotated, high-quality custom releases rather than mindlessly dumping scene content. Use an adblocker and a page translator and you should be good to go.
I don’t think ‘cattle not pets’ is all that corporate, especially w/r/t death of the author. For me, it’s more about making sure that failure modes have (rehearsed) plans of action, and being cognizant of any manual, unreplicable “hand-feeding” you’re doing. Random, unexpected hardware death should be part of your system’s lifecycle, not something to spend time worrying about. This is also basically how ZFS was designed at its core: it distrusts hardware so thoroughly that you can connect whatever junky parts you want and let ZFS catch the drives that are lying or dying. In the original example uptime seems to be an emphasized tenet, but I don’t think it’s the most important part.
Re: replacing drives on a schedule, that might be true for RAIDZ1, but IMO a big selling point of RAIDZ2 is that you’re not in a huge rush to get resilvering done. I keep a cold drive around anyway.
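For what it’s worth, the actual swap is only a couple of commands. A rough sketch (the pool and device names here are made up):

```
# see which disk faulted
zpool status tank
# swap in the cold drive; resilvering kicks off automatically
zpool replace tank /dev/sdc /dev/sdf
# watch the resilver progress
zpool status tank
```

And since RAIDZ2 still has a parity disk to spare with one drive dead, the resilver window isn’t the fire drill it is on RAIDZ1.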
“Cattle not pets” in this instance means you have a specific plan for the random death of an HDD (which RAIDZ2 basically already handles), and because of that you can work your HDDs until they’re completely dead. If your NAS is a “pet”, your strategy is more like taking extra-good care of the system (e.g. rotating HDDs out when you think they’re getting too old, not putting too much stress on them) and praying nothing unexpected happens. I’d argue pets aren’t really “okay” just because you’re in a homelab: it doesn’t take much effort to make your setup cynical instead of optimistic, and it can even save you money since you no longer need to keep things fresh and new.
“In the old way of doing things, we treat our servers like pets, for example Bob the mail server. If Bob goes down, it’s all hands on deck. The CEO can’t get his email and it’s the end of the world. In the new way, servers are numbered, like cattle in a herd. For example, www001 to www100. When one server goes down, it’s taken out back, shot, and replaced on the line.”
~from https://cloudscaling.com/blog/cloud-computing/the-history-of-pets-vs-cattle/
Can’t believe it’s someone’s job to make this list and this is what they came up with. A 14-year-old with internet access could name way more major players.
Maybe tangential but this reminded me of how much I hate setting up systemd timers/services. I refuse to accept that creating two files in two different directories and searching online for the default timer and service templates is an okay workflow over simply throwing a cron expression next to the command you want to run and being done with it. Is there really no way we can have a crontab-equivalent that virtually converts into a systemd backend when you don’t need the extra power? I feel like an old person that can’t accept change but it’s been a decade and I’m still angry.
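The closest thing I know of is systemd-run, which spins up a transient timer+service pair from a single command line, no unit files needed (the script path below is just a placeholder, and note that transient units don’t survive a reboot, so it’s not a full cron replacement):

```
# cron: one line in `crontab -e` and you're done
# 0 3 * * *  /usr/local/bin/backup.sh

# systemd's closest equivalent: a transient timer, no files to create
systemd-run --user --on-calendar='*-*-* 03:00:00' \
  --unit=backup --description='nightly backup' /usr/local/bin/backup.sh
```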
You might be able to read Arches faster.
I’m not sure what a good written guide for manually running Linux games is off the top of my head, but the general flow is: install Lutris; install the latest Proton-GE build through e.g. ProtonUp-Qt; create a game entry in Lutris with a “Prefix” location dedicated to your Wine prefix; pick Proton-GE as the runner; copy the game into the generated prefix; target the normal EXE; and launch it. If a game isn’t launching, sometimes you’ll need to use winetricks to install the vcrun2022 and dotnet48 dependencies into the Wine prefix, since each prefix is sort of like its own copy of Windows, and Windows has a handful of dependencies that games sometimes rely on. I’ve heard you can also just add the game as a “non-Steam game” to Steam, but I haven’t bothered since Lutris gives more control. Again, I can’t vouch for any specific guides, but the keywords in this post should give you a general direction to move in.
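For reference, the winetricks step is a one-liner once you know where the prefix lives (the path here is just an example):

```
# install common Windows runtime dependencies into a specific wine prefix
WINEPREFIX="$HOME/Games/prefixes/mygame" winetricks vcrun2022 dotnet48
```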
In my experience, there’s nearly no difference between Windows and Linux when it comes to piracy. There are a few games Linux can’t run (anticheat), but that generally shouldn’t be an issue for games you’d typically pirate. Linux does have the usual learning curve, though, and you’ll need to get familiar with Lutris or some other Wine prefix manager to manage your games. If you’re dedicated to moving to Linux, game piracy should not be a deciding factor.
As I understand it, the assertion is that the 1080p FPS equals the 2K/4K FPS if you assume an infinitely powerful GPU. So the 1080p FPS is your max potential FPS at any resolution with that CPU, and you then look at a GPU 2K/4K chart to see how much of that target the GPU can actually deliver. HWUnboxed also reasons that gamers aren’t blindly using ultra settings, so in real scenarios people lower their settings to chase a specific FPS target anyway. They also mention that lowering in-game settings doesn’t usually affect the CPU FPS benchmark.
So in summary, the 1080p CPU benchmark is roughly the highest target you can achieve, and it’s then up to your GPU and in-game settings how much of that target you actually reach. It’s a little harder to grasp and calculate mentally, but it keeps the 2K/4K benchmark data from becoming misleading “point in time” numbers that won’t be useful with a different GPU or different settings. This is most clearly demonstrated by the future-proofing section’s re-reviews of older CPUs: putting massive GPUs on old CPUs pushes the FPS benchmarks at all resolutions to roughly the same value, i.e. resolution is mainly a GPU concern, not a CPU one.
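Put as back-of-envelope math (the numbers here are made up for illustration):

```
cpu_cap=160   # the CPU's 1080p result from a CPU review (your ceiling)
gpu_4k=95     # your GPU's 4K result from a GPU review
# effective fps ~= min(CPU ceiling, GPU fps at your resolution/settings)
echo $(( cpu_cap < gpu_4k ? cpu_cap : gpu_4k ))   # prints 95: GPU-bound at 4K
```

If you drop settings until the GPU number climbs past the CPU ceiling, you’re CPU-bound and the 1080p chart is roughly what you’ll get.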
One of the main points of this video is that 1080p testing is the only thing you should be looking at for CPU benchmarks (to the point that I think HWUnboxed is dropping 2K/4K CPU testing going forward?), and although I was skeptical at first, the future-proofing section did finally convince me. The new problem with this line of thinking is that you really need to cross-reference a GPU benchmark to figure out what a real-world 2K/4K scenario will look like for the CPU you’re interested in.
Wow, you weren’t kidding lol. I watched the 2.0 demo, and at this timestamp there’s a CSAM-related room title that Matthew was invited to (at the top of the right window). Granted, it’s probably someone stream-sniping, but it goes to show there are apparently active bad actors trying to interfere.
Their rough new user experience is concerning though. From what they described I suspect many of their “problems” are not actually “real”, but it doesn’t really matter because they still ended up in a scenario where they thought there were problems. How did they end up thinking that everything must be done with terminal while using Ubuntu? I know in the last ~10 years there’s been a big focus on the new user experience, so what more can be done to prevent this? My gut says there are too many online resources that are confusing new users when they try to onboard themselves - especially resources that are old, written for other distros, or written for people who just want to find the command they can copy-paste to do something.
Solid video, and it comes from a pretty grounded viewpoint. It’s not very techy or pros/cons-focused; it’s more about the “spirituality” of what we’re even doing with the technology in our lives. They’re obviously not a tech expert, but their mindset and “breaking point” are a lot more relatable to most casual people. This is the sort of realization people are going to keep having as big tech encroaches further and further on their lives. E.g. their point that “it’s not one big problem, it’s many small problems that add up” captures both why Windows is so frustrating to use and why people keep using it anyway.
It will take a “breaking point” and self-motivated change to critically evaluate the power you’re giving to corporations and decide you’ll accept some discomfort in order to fix it. There will never be a perfect time to effortlessly switch your entire workflow across operating systems. I daresay that if switching to Linux ever became effortless, big tech would flash something new and shiny to make it no longer the case. They prey on people’s preference for the path of least resistance, and understanding that strategy is the first step to doing something about it.
Wish people would have realized this a couple decades ago, but it really does feel like Linux is re-entering public discourse as people are getting more and more jaded about their relationship with big tech companies.