• 1 Post
  • 18 Comments
Joined 2 years ago
Cake day: February 14th, 2024

  • Regardless, in certain circumstances my keys do the exact same thing, and I’m quite sure I followed some guide to create one primary and one secondary key, but it’s possible that guide is outdated by now.

    Yeah, maybe this guide wasn’t around when you bought yours, or it is outdated. The problem is, you have to set up the 2FA from scratch for these accounts if you don’t have the QR code anymore. Might still be worth a try to really get two identical keys.

    you are in charge of making backups and such. I can totally respect the folks who opted to self-host. I’m horrible when it comes to backing up data, and self-hosting wasn’t really my thing back in 2020, so it never really was on my radar.

    Aegis is still an app on your phone; it just isn’t connected to an online service, so you control the database file yourself. It of course always depends on your setup, e.g. if you have a single device that acts as your 2FA “key” and keep offline backups of the database, you don’t have to host anything. If you want to authenticate with multiple devices and add new accounts often, some form of automatic sync might be helpful. Even though I like the app, I don’t want to convince you of Aegis; I just didn’t want to paint the wrong picture.


  • I just realized the formatting of my last reply got lost somehow, sorry for that. Nevertheless, thank you very much for your response. I really appreciate the insights of a long-time user.

    I switched from Authy to Aegis like 2 years ago, because I didn’t want to rely on an online service either. Similar to something like KeePass, the database is local and you are in charge of making backups and such. But that is also the great thing about it: if your phone dies, you just copy the backup to the new device and you’re golden. I already thought about the switch to a Yubikey back then, but didn’t go through with it.

    With regards to the backup key, Yubico recommends saving (screenshotting) the QR code that is generated during 2FA setup, so you can set up the backup key later on. Maybe that is also a workaround for services that only allow a single 2FA device. https://support.yubico.com/hc/en-us/articles/360021919459-How-to-register-your-spare-key

    Yes, always plugged in works of course; I just meant that you are somewhat compromising the security you have gained by using dedicated hardware. But as you said, if touch is enabled and the key is password protected, you are probably fine. In the end this always comes down to a trade-off between security and convenience that everyone has to decide for themselves.


  • Can you explain a little more how you handle them in your daily life? I always liked the idea of Yubikeys, but I am a bit worried that I would just switch back to my phone (Aegis) for convenience. Things like:

    Are there accounts that you didn’t get to work?
    Do you have separate keys for personal and work accounts?
    Do you just have it on your keychain and plug it in whenever you need it? Always-plugged-in keys in your phone or laptop don’t really make sense to me.
    As far as I know you can’t just clone a key. How easy is it to set up a backup key? Does this work for all accounts?
    I try not to use my phone for critical stuff, but there are times I just have to check an account. Do you use your phone with Yubikeys? How is your experience? USB or NFC?


  • Yeah, you are right, a custom bridge network can do DNS resolution with container names. I just saw in a video from Lawrence Systems that he exposed the socket, and somewhere else I saw that container names were used for the proxy hosts in NPM. Since the default bridge doesn’t do DNS resolution, I assumed that is why some people expose the socket (a rough sketch of the custom-network approach is at the end of this comment).

    I just checked again and apparently he created the compose file with ChatGPT, which added the socket. https://forums.lawrencesystems.com/t/nginx-proxy-manager-docker/24147/6 I always considered him to be one of the more trustworthy and also security-conscious people out there, but this makes me question his authority. At least he corrected the mistake, so everyone who actually uses his compose file now doesn’t expose the socket.
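    To illustrate what I mean (just a sketch, the service names are examples and not from his file): with a user-defined bridge network, NPM can reach other containers by their service name, so no socket mount is needed for that.

    ```yaml
    # Minimal sketch: a user-defined bridge network provides built-in DNS,
    # so NPM can use the service name "whoami" as the forward hostname.
    services:
      npm:
        image: jc21/nginx-proxy-manager:latest
        ports:
          - "80:80"
          - "443:443"
          - "81:81"      # admin UI
        networks:
          - proxy

      whoami:            # example backend, stands in for any web app
        image: traefik/whoami
        networks:
          - proxy

    networks:
      proxy:
        driver: bridge   # the default bridge would NOT give you name resolution
    ```

    In the NPM admin UI you would then point the proxy host at “whoami” on port 80 instead of an IP.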


  • Thanks for the write-up and sorry for the late reply. I guess I wouldn’t get very far without exposing the Docker socket. Nextcloud was actually one of the services on my list I wanted to try out, but I haven’t looked at the compose file yet. It makes sense why it is needed by the AIO image. Interestingly, it uses a Docker socket proxy, presumably to also mitigate some of the security risks that come from exposing the socket, just like another comment in this thread already mentioned (I put a rough sketch of that pattern at the end of this comment).

    However, since I don’t know much about Kubernetes, I can’t really tell if it improves something or if the privileges are just shifted, e.g. from the container having socket access to the Kubernetes orchestration layer having socket access. But it does look interesting, and maybe it is not a bad idea to look into it even early on in my self-hosting and container adventure.

    Even though I said otherwise in another comment, I think I have also seen socket access in Nginx Proxy Manager in some example now. I don’t really know the advantages other than that you are able to use the container names for your proxy hosts instead of IP and port. I have also seen it in a monitoring setup, where I think Prometheus has access to the socket to track different Docker/Container statistics.
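    For reference, the socket-proxy pattern I mean looks roughly like this (just a sketch; the consumer service and its image are placeholders): only the proxy container mounts the socket, other containers talk to it over TCP, and anything not explicitly whitelisted is blocked.

    ```yaml
    # Sketch of a docker-socket-proxy setup. Only read access to the
    # container list is allowed; write requests (POST) are blocked.
    services:
      socket-proxy:
        image: tecnativa/docker-socket-proxy
        environment:
          CONTAINERS: 1                  # allow listing containers
          POST: 0                        # deny state-changing requests
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
        networks:
          - socket

      some-tool:                         # placeholder for whatever needs container info
        image: example/monitoring-tool   # placeholder image
        environment:
          DOCKER_HOST: tcp://socket-proxy:2375
        networks:
          - socket

    networks:
      socket:
        internal: true                   # no external connectivity for this network
    ```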



  • I am a strong believer in separate Docker Compose files to keep things more organized and hopefully have more control over everything. But in the end most of it comes down to personal preference.

    I actually have some kind of network issue with one of my containers at the moment (AdGuard in this case), where your ideas already came in handy. Unfortunately, I haven’t been able to solve it yet, but that is probably something for a new topic.


  • I have heard the name Kubernetes and know that it is also some kind of container thing, but never really went deeper than that. It was more a general question of how people handle the whole business of exposing the Docker socket to a container. Since I came across it in Watchtower and considered installing that, I used it as an example. I always thought that Kubernetes, Docker Swarm and things like that are something for the future when I have more experience with Docker and containers in general, but thank you for the idea.


  • I have set all this up on my Asustor NAS, therefore things like apt install are not applicable in my use case. Nevertheless, thank you very much for your time and expertise with regards to users and volumes. What is your strategy for networks in general? Do you set up a separate network for each and every container unless the services have to communicate with each other? I am not sure I understand your network setup in the Jellyfin container.

    In the ports: part that 10.0.1.69 would be the IP of your server (or in this case, what I declare the jellyfin container’s IP to be) - it makes it so the container can only bind to the IP you provide, otherwise it can bind to anything the server has access to (as far as I understand). With the macvlan driver, the virtual network interface of your container behaves like its own physical network interface which you can assign a separate IP to, right? What advantage does this have exactly, or what potential problems does this solve?
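    To make sure I am picturing this right, here is a sketch of the two variants as I understand them (the IP addresses, the subnet and the eth0 interface name are just examples):

    ```yaml
    # Variant 1: publish the port only on one host IP instead of 0.0.0.0.
    services:
      jellyfin:
        image: jellyfin/jellyfin
        ports:
          - "10.0.1.69:8096:8096"   # only reachable via this host address
        volumes:
          - ./config:/config
          - /mnt/media:/media
        # Variant 2 instead: attach to the macvlan network below and the
        # container gets its own address on the LAN:
        # networks:
        #   lan:
        #     ipv4_address: 10.0.1.70

    networks:
      lan:
        driver: macvlan
        driver_opts:
          parent: eth0              # physical NIC of the host (example name)
        ipam:
          config:
            - subnet: 10.0.1.0/24
              gateway: 10.0.1.1
    ```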


  • I think I get where you’re coming from. In this specific case of Watchtower it is not a security flaw; it just uses the socket to do what it is supposed to do. You either trust them and live with the risks that come with it, or you don’t and find another solution. I used Watchtower as the example because it was the first one I came across that needs this access. There might be a lot of other containers out there that use this, so I wanted to hear people’s opinions on this topic and their approach.


  • Thank you for your comment and the resources you provided. I will definitely look into these. I like your approach of minimizing the attack surface. As I said, I am still new to all of this, and I only came across the user option in Docker Compose recently when I installed Jellyfin. However, I thought the actual container image has to be configured in a way that makes this possible at all, otherwise you can run into permission errors and such. Do you just specify a non-root user and see if it still works?

    And while we’re at it, how would you set up something like Jellyfin with regards to read/write permissions? I currently haven’t restricted it to read-only, and in my current setup I most certainly need write permissions as well, because I store the artwork in the respective directories inside my media folder. Would you just save these files to the non-persisted storage inside the container, since you can re-download them anyway, and keep the media volume read-only?
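    For what it’s worth, the direction I was thinking of looks roughly like this (a sketch only; the paths and the UID/GID are examples, and whether :ro works for the media mount depends on where Jellyfin is told to store artwork):

    ```yaml
    # Sketch: non-root user plus a read-only media library. Jellyfin still
    # needs write access to its config and cache volumes.
    services:
      jellyfin:
        image: jellyfin/jellyfin
        user: "1000:1000"            # UID:GID that owns ./config and ./cache
        ports:
          - "8096:8096"
        volumes:
          - ./config:/config
          - ./cache:/cache
          - /mnt/media:/media:ro     # read-only; artwork would then have to be
                                     # stored under /config instead of inside
                                     # the media folders
    ```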



  • I don’t know anything about Podman, but I think Docker also has a rootless mode; however, I don’t really know any details about that either. Maybe I should read more about that.

    Yeah, I think I also saw some fancy dashboard setup with Grafana and Prometheus where some part also required access to the socket (can’t remember which), so I thought it might be more common to do that than I originally thought.
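    If I remember the pattern correctly, the socket-hungry piece in those stacks is usually the metrics collector, something like cAdvisor, which exports per-container stats for Prometheus to scrape. Roughly (a sketch, mount paths as in the usual examples, all read-only):

    ```yaml
    # Sketch: cAdvisor exposes per-container metrics on :8080/metrics,
    # which Prometheus then scrapes. All host mounts are read-only.
    services:
      cadvisor:
        image: gcr.io/cadvisor/cadvisor
        ports:
          - "8080:8080"
        volumes:
          - /:/rootfs:ro
          - /var/run/docker.sock:/var/run/docker.sock:ro
          - /sys:/sys:ro
          - /var/lib/docker/:/var/lib/docker:ro
    ```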




  • No, none of my containers are exposed to the internet, and I don’t intend to do so. I leave that to people with more experience. I have, however, set up the WireGuard VPN feature of my router to access my home network from outside, which I need occasionally. But as far as I have read, that is considered one of the safest options IF you have to make it available at all. No outside access is of course always preferred.


  • That is the exact reason why I wouldn’t use the auto-update feature. I just thought about setting it up to check for updates and give me some sort of notification. I just feel like a reminder every now and then helps me keep everything up to date and avoid some sort of “never change a running system” mentality.

    Your idea about setting it up and only letting it run occasionally is definitely one to consider. It would at least avoid manually checking the releases of each container, similar to the RSS suggestion of /u/InnerScientist.
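    From what I have read so far, something along these lines should cover the check-only idea (a sketch; the notification channel itself still has to be configured and is left out here):

    ```yaml
    # Sketch: Watchtower in check-only mode. --monitor-only only notifies
    # instead of updating, --run-once does a single pass and exits, so it
    # can be started on demand or from a scheduler instead of running 24/7.
    services:
      watchtower:
        image: containrrr/watchtower
        command: --monitor-only --run-once
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        # notification settings (e-mail, Gotify, ...) would go under environment:
    ```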