• 0 Posts
  • 28 Comments
Joined 2 years ago
Cake day: June 7th, 2023

  • I’m sure there are several out there. But, when I was starting out, I didn’t see one and just rolled my own. The process was general enough that I’ve been able to mostly just replace the Steam App ID of the game in the Dockerfile and have it work well for other games. It doesn’t do anything fancy like automatic updating; but, it works and doesn’t need anything special.
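The core of such an image is usually just a SteamCMD invocation with the game’s App ID swapped in. A minimal sketch of that idea in Python — the helper name and install path are invented for this example, and this is not the actual setup described above:

```python
import shlex

# Illustrative sketch of the "swap one App ID" approach: the only thing
# that changes per game is the Steam App ID and the install directory.
def steamcmd_update_cmd(app_id: int, install_dir: str) -> list[str]:
    """Build the SteamCMD command to install/update one dedicated server."""
    return [
        "steamcmd",
        "+force_install_dir", install_dir,
        "+login", "anonymous",      # most dedicated servers allow anonymous login
        "+app_update", str(app_id), "validate",
        "+quit",
    ]

# e.g. a Valheim dedicated server (check Steam for your game's App ID)
print(shlex.join(steamcmd_update_cmd(896660, "/opt/valheim")))
```

Swapping games then means changing one number and rebuilding the image.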


  • I see containers as having a couple of advantages:

    1. Separation of dependencies - while not as big of an issue as it used to be, just knowing that you won’t end up with the requirements for one application conflicting with another is one less thing to worry about. Additionally, you can do anything you want to one container without affecting another container. You don’t get stuck wanting to reboot or revert the system, but not wanting to break a different running service.
    2. Portability - Eventually, you are going to replace the OS of that VM (at least, you should). Moving a container to a new OS is dead simple. Re-installing an application on a new OS, moving data and configs can be anywhere from easy to a pain in the arse, depending on the software.
    3. Easier fall back - Have you ever upgraded an application and had everything go to shit? In my years working as a sysadmin, I lost way too many evenings to this sort of bullshit. And while VM snapshots should make reverting easy, sometimes it just didn’t work out that way. Containers force enough separation of applications that you can do just about anything to one container and not affect others.
    4. Less dependency on a single install - Have you ever had a system just get FUBAR, and after a few hours of digging the answer seems to be, just format the drive and start over? Maybe you tried some weird application out and the uninstall wasn’t really clean. By having all that crap happen in containers, you can isolate the damage. Nuke the container, nuke the image, and the base OS is still clean.
    5. Easier version testing - Want to try out upgrading to version 2 of an application, but worried that it may not be fully baked yet or the new configs may take a while to get right? Do it off in a separate container on a copy of the data. You can do this with VMs and snapshots; but, I find containers to be less overhead.

    That all said, if an application does not have an official container image, the added complexity of creating and maintaining your own image can be a significant downside. One of my use cases for containers is running game servers (e.g. Valheim). There isn’t an official image; so, I had to roll my own. The effort to set this up isn’t zero and, when trying to sort out an image for a new game, it does take me a while before I can start playing. And those images need to be updated when a new version of the game releases. Technically, you can update a running container in a lot of cases; but, I usually end up rebuilding it at some point anyway.

    I’d also note that careful use of VMs and snapshots can replicate or mitigate most of the advantages I listed. I’ve done both (a decade and a half as a sysadmin). But, part of that “careful use” usually meant spinning up a new VM for each application. Putting multiple applications on the same OS install was usually asking for trouble. Eventually, one of the applications would get borked and having the flexibility to just nuke the whole install saved a lot of time and effort. Going with containers removed the need to nuke the OS along with the application to get a similar effect.

    At the end of the day, though, it’s your box; you do what you are most comfortable with and want to support. If that’s a monolithic install, then go for it. While I, or others, might find containers a better answer for us, maybe they aren’t for you.




  • My list of items I look for:

    • A docker image is available. Not some sort of make or build script which makes gods-know-what changes to my system, even if the end result is a docker image. Just have a docker image out on Dockerhub or a Dockerfile as part of the project. A docker-compose.yaml file is a nice bonus.
    • Two factor auth. I understand this is hard, but if you are actually building something you want people to seriously use, it needs to be seriously secured. Bonus points for working with my YubiKey.
    • Good authentication logging. I may be an outlier on this one, but I actually look at the audit logs for my services. Having a log of authentication activity (successes and failures) is important to me. I use fail2ban to block off IPs which get up to any fuckery, and I manually blackhole entire ASNs when it seems they are sourcing a lot of attacks. Give me timestamps (in ISO 8601 format, all other formats are wrong), IP address, username, success or failure (as an independent field, not buried in a message or other string) and any client information you can (e.g. User-Agent strings).
    • Good error logging. Look, I kinda suck, I’m gonna break stuff. When I do, it’s nice to have solid logging giving me an idea of what I broke and to provide a standardized error code to search on. It also means that, when I give up and post it as an issue to your github page, I can provide you with some useful context.
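As a concrete sketch of the log shape described above — one JSON object per auth event, with the outcome as its own field rather than buried in a message string. The field names are illustrative, not any project’s actual schema:

```python
import json
from datetime import datetime, timezone

# Hypothetical structured auth-log entry: ISO 8601 timestamp, source IP,
# username, success as an independent boolean, and client info.
def auth_log_entry(username: str, ip: str, success: bool,
                   user_agent: str = "") -> str:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # ISO 8601, UTC
        "event": "authentication",
        "username": username,
        "src_ip": ip,
        "success": success,          # independent field, easy to filter on
        "user_agent": user_agent,
    }
    return json.dumps(entry)

print(auth_log_entry("alice", "203.0.113.7", False, "curl/8.5.0"))
```

A line like this is trivially parseable by fail2ban filters or a log shipper, which is the whole point of keeping the outcome out of a free-text message.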

    As for that hackernews response, I’d categorically disagree with most of it.

    An app, self-contained, (essentially) a single file with minimal dependencies.

    Ya…no. Complex stuff is complex. And a lot of good stuff is complex. My main, self-hosted app is NextCloud. Trying to run that as some monolithic app would be brain-dead stupid. Just for the sake of maintainability, it is going to need to be a fairly sprawling list of files and folders. And it’s going to be dependent on some sort of web server software. And that is a very good place to NOT roll your own. Good web server software is hard, secure web server software is damn near impossible. Let the large projects (Apache/Nginx) handle that bit for you.

    Not something so complex that it requires docker.

    “Requires docker” may be a bit much. But, there is a reason people like to containerize stuff, it avoids a lot of problems. And supporting whatever random setup people have just sucks. I can understand just putting a project out as a container and telling people to fuck off with their magical snowflake setup. There is a reason flatpak is gaining popularity.
    Honestly, I see docker as a way to reduce complexity in my setup. I don’t have to worry about dependencies or having the right version of some library on my OS. I don’t worry about different apps needing different versions of the same library. I don’t need to maintain different virtual python environments for different apps. The containers “just work”. Hell, I regularly dockerize dedicated game servers just for my wife and me to play on.

    Not something that requires you to install a separate database.

    Oh goodie, let’s all create our own database formats and re-learn the lessons of the '90s about how hard databases actually are! No really, fuck off with that noise. If your app needs a small database backend, maybe try SQLite. But, some things just need a real database. And as with web servers, rolling your own is usually a bad plan.
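For scale, this is what “a real SQL database without a separate server” looks like with Python’s built-in sqlite3 module — the table and data are just for illustration:

```python
import sqlite3

# SQLite gives you a real SQL database in a single file (here, in memory),
# with no separate database server for the user to install and maintain.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))  # parameterized
conn.commit()
rows = conn.execute("SELECT name FROM users").fetchall()
print(rows)  # [('alice',)]
conn.close()
```

You get transactions, indexes, and decades of battle-testing for free, which is exactly the stuff a homegrown file format gets wrong.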

    Not something that depends on redis and other external services.

    Again, sometimes you just need to have certain functionality and there is no point re-inventing the wheel every time. Breaking those discrete things out into other microservices can make sense. Sure, this means you are now beholden to everything that other service does; but, your app will never be an island. You are always going to be using libraries that other people wrote. Just try to avoid too much sprawl. Every dependency you spin up means your users are now maintaining an extra application. And you should probably build a bit of checking into your app to ensure that those dependencies are in sync. It really sucks to upgrade a service and have it fail, only to discover that one of its dependencies needed to be upgraded manually first, and now the whole thing is corrupt and needs to be restored from backup. Yes, users should read the release notes; they never do.
    The corollary here is to be careful about setting your users up for a supply chain attack. Every dependency or external library you add is one more place for your application to be attacked. And just because the actual vulnerability is in SomeCoolLib.js, it’s still your app getting hacked. You chose that library, you’re now beholden to everything it gets wrong.
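One cheap way to build in the kind of dependency-sync checking mentioned above is to refuse to start when the recorded schema version doesn’t match what the app expects — failing loudly before anything gets corrupted. A sketch, with the version numbers and names invented for the example:

```python
# Hypothetical startup check: compare the schema version the app expects
# with the version recorded by the backing store, and bail out early with
# a clear message instead of corrupting data mid-upgrade.
EXPECTED_SCHEMA = 4

def check_schema(found: int, expected: int = EXPECTED_SCHEMA) -> None:
    if found < expected:
        raise RuntimeError(
            f"database schema v{found} is older than expected v{expected}; "
            "run migrations before starting the app"
        )
    if found > expected:
        raise RuntimeError(
            f"database schema v{found} is newer than this app understands "
            f"(v{expected}); upgrade the app instead"
        )

check_schema(4)  # matching version: startup proceeds normally
```

A check like this turns “mystery corruption after an upgrade” into a one-line error message telling the user which step they skipped.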

    At the end of it all, I’d say the best app to write is the one you are interested in writing. The internet is littered with lots of good intentions and interesting starts. There is a lot less software which is actually feature complete and useful. If you lose interest, because you are so busy trying to please a whole bunch of idiots on the other side of the internet, you will never actually release anything. You do you, and fuck all the haters. If what you put out is interesting and useful, us users will show up and figure out how to use it. We’ll also bitch and moan, no matter how great your app is. It’s what users do. Do listen, feedback is useful. But, also remember that opinions are like assholes: everyone has one, and most of them stink.


  • A brick and mortar store has a lot of overhead. And, even with merch sales, GameStop doesn’t have enough to offer to differentiate itself from online stores for that same merch. Why would I take the time to walk/drive over to the GameStop to buy some cheap crap from China, when I can buy that same cheap crap from China online for less? Especially when I can often get it direct from China (via AliExpress or the like) for even less. Without the sales of physical media and the used game market, there just isn’t a viable business case for GameStop anymore. Sure, I found the whole GameStop stock meme funny too. And it sucked that some big fund tried to short them into the dirt. But, looked at from a dispassionate perspective, the current business model is doomed.


  • It’s not nice as something to target, but it makes sense. Employment is about more than just straight money. When evaluating an employer, I consider everything from the top line salary, to benefits, work culture, work life balance and work environment. The non-tangible factors can mean that I would be willing to take a lower salary. That is why companies will offer things like decked-out rec rooms or the like. And ya, I might consider a lower salary to be part of something I love or believe in. E.g. if NASA were looking for remote cybersecurity workers, I might consider a lower salary than I would get elsewhere, just to get to be part of NASA.

    Employment is a negotiation between you and your employer. And while I do think technical folks could really use a trade union (something like the IBEW for electricians), for now you have to represent yourself and make sure you get what you are worth. And this might mean not working on the thing you are really passionate about. Especially if the people in charge of it are a bag of dicks.



  • There’s a gulf of difference between jumping to an obvious conclusion and actually doing the investigative work to really answer the question. The police aren’t dumb and are probably just as sure as the rest of us as to the motive that will be found. However, they still need to make that determination based on real evidence, especially if it’s going to go to court. So, “it’s unclear” until they have something which provides strong evidence of a motive.

    Ya, I’d be putting all my chips on this being someone who was on the receiving end of a denied claim. But, you never know when it’s going to end up being the guy failing to pay up to the Russian Mafia or some other situation which resulted in a targeted attack. I’m not going to defend all the actions of the police, but they do occasionally stop shooting kids long enough to investigate crimes properly.



  • It’s a dick move, but I can kinda understand why SpaceX would make it. There has been a push to “de-risk” supply chains, after the disruptions caused by Covid, Russia’s invasion of Ukraine, and other world events. This type of de-risking was partly responsible for the CHIPS and Science Act. The US Government has a strategic incentive to have a stable and resilient supply chain for semiconductors.

    For SpaceX, having critical components be only available from fabs in Taiwan is a business risk. China has been more and more vocal about its desire to annex Taiwan. With Trump taking office, one can imagine that the US commitment to protect Taiwan may not be quite as ironclad as it has been in the past. It’s not hard to imagine a future where China launches an invasion of Taiwan and the US does little more than shrug. At that point, any business which is solely reliant on Taiwan for semiconductors is going to see major disruptions.

    So ya, it’s a complete dick move. But, I suspect SpaceX will be far from the last company looking to build a supply chain outside Taiwan.