Little bit of everything!

Avid Swiftie (come join us at !taylorswift@poptalk.scrubbles.tech)

Gaming (Mass Effect, Witcher, and too much Satisfactory)

Sci-fi

I live for 90s TV sitcoms

  • 4 Posts
  • 210 Comments
Joined 2 years ago
Cake day: June 2nd, 2023

  • Kind of, but probably not. I started writing this thinking “it could totally be stateless”. Docker itself runs stateless, and when a container starts it is still stateless (or at least could be mounted on a ramdrive). But then I started thinking: what about the images? They have to be downloaded and run from somewhere, and that is going to eat RAM quickly. So I’ll amend that to: you don’t need it to be stateful, and you could have an image like you talked about that is loaded every time (that’s essentially what Kubernetes does), but you will still need space somewhere as a scratch drive, a place where Docker can put images and temporary filesystems while it’s running.
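
    As a rough sketch of that idea (untested, and the image name is just a placeholder), a Compose service can run with a read-only root filesystem plus a tmpfs mount as its scratch space:

      services:
        app:
          image: nginx:alpine   # placeholder image for illustration
          read_only: true       # keep the container's root filesystem immutable
          tmpfs:
            - /tmp              # RAM-backed scratch space
            - /run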

    For state, check out Docker’s volume backings here: https://docs.docker.com/engine/storage/volumes/. You could use NFS to another server for your volumes, for example. Your volumes would never need to be on your “app server”; instead they could be loaded via NFS from your storage server, as in the sketch below.
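
    Something like this (a sketch; the server address and export path are made-up placeholders):

      volumes:
        appdata:
          driver: local
          driver_opts:
            type: nfs
            o: "addr=192.168.1.10,rw,nfsvers=4"   # placeholder storage server
            device: ":/srv/exports/appdata"       # placeholder NFS export path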

    This is all edging into Kubernetes territory, though. If you’re thinking about netboot and automatically starting containers, handling stateless workloads, and storing volumes so they stay synced with a storage server… it might be time for Kubernetes.
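
    For comparison, the same NFS share expressed as a Kubernetes PersistentVolume would look roughly like this (again a sketch, with the same placeholder server and path):

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: appdata-pv
      spec:
        capacity:
          storage: 10Gi
        accessModes:
          - ReadWriteMany     # NFS lets multiple pods mount read/write
        nfs:
          server: 192.168.1.10        # placeholder storage server
          path: /srv/exports/appdata  # placeholder NFS export path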

  • I’m not a Brit (though I worry similar rules are coming here to America), but it sounds like you have to enforce those rules. Theoretically, from my high-level understanding, would it be enough to have those rules in place and, when content is reported, actively remove it as a mod?

    I can’t tell exactly, but it sounds like the biggest requirement is that there be a way to remove the content quickly, which we have. Facebook is an obvious offender: they have a “process”, but it takes days, and as we all know, 99% of the time they don’t actually do anything.

    At a higher, more automated level, I’m guessing the automod tooling most of us admins are already using would probably be enough, or, failing that, some basic AI models.

    It doesn’t sound like they expect it to be perfect. Here in the States they don’t expect me to be perfect either, but they damn well expect me to follow the correct process if I do become aware of something. It’s essentially: A) take reasonable preventative measures, like actively moderating, automating what I can, banning bad users, and removing content when needed; and B) act immediately if I do become aware of anything, keeping evidence for the feds.