If what you want is an alternative for the SMS on Desktop feature, I’m surprised no one has mentioned https://messages.google.com/web
then the easier method is to install Caddy as docker and use the containername:containerport method?.. did I understand correctly?
Yes. If the only port exposed to the host (or to the outside) is 443 from the Caddy container, then the only way to access any of those services is HTTPS through Caddy.
I’ve installed Caddy directly on my Ubuntu server, but I admin my Jellyfin (and eventually Nextcloud) with Docker via the CasaOS interface… is this a problem? Do I need to run Caddy in Docker too?
The difference between having Caddy (or any other reverse proxy) in Docker alongside the other apps/services is that instead of exposing a port to the host for every container and then pointing Caddy at each service as `localhost:<host-port>`, you can put them all on the same Docker network and use `<container-name>:<container-port>`, exposing only 80 and 443 to the host. That way the only path to the apps/services is through Caddy, and if you disable port 80 after configuring the SSL certificates, they can only be reached over HTTPS.
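A minimal sketch of that layout with the plain Docker CLI (the `proxy` network name, container names, and the Jellyfin port are assumptions for illustration; a compose file would express the same idea):

```bash
# Shared network so Caddy can reach the other containers by name
docker network create proxy

# Jellyfin joins the network but publishes no ports to the host
docker run -d --name jellyfin --network proxy jellyfin/jellyfin

# Caddy is the only container that publishes ports (80/443) to the host
docker run -d --name caddy --network proxy \
  -p 80:80 -p 443:443 \
  -v "$PWD/Caddyfile:/etc/caddy/Caddyfile" \
  caddy:latest
```

In the Caddyfile you would then proxy to `jellyfin:8096` (Jellyfin’s default web port) instead of a host port.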
A friendly reminder that it is best to wait a bit before updating, in case there are still bugs; it happened a few days ago with Forgejo, where a major bug was detected right after the initial release of v13.0.
You could use aliases in your `.bashrc` for git (and a bare repo); that would let you manage your `$HOME` and `/etc` directly with git without using symlinks. The only downside is having them separated into two aliases and two repos.

```bash
# user config repo
alias dotfiles='git --git-dir=$HOME/.dotfiles --work-tree=$HOME'
# system config repo
alias etcfiles='sudo git --git-dir=$HOME/.etcfiles --work-tree=/etc'
```

It is also recommended that you run `<alias> config --local status.showUntrackedFiles no` in the terminal for both the `dotfiles` and `etcfiles` aliases (you can pick whatever alias and git-dir names you want).

The aliases give you a custom-named folder, in a custom path, instead of `.git`, and you can manage everything without symlinks because you use git directly on each file’s original location. That would solve your issue with the other solutions that depend on symlinks.

Note: you could technically use the root directory (`--work-tree=/`) as the work tree so a single command covers both, but it is not recommended to give git the ability to rewrite any file on the entire file system.
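For completeness, a rough sketch of the one-time setup and day-to-day use of one of the repos (the remote URL and branch name are placeholders):

```bash
# One-time setup for the user config repo (the /etc repo works the same way)
git init --bare "$HOME/.dotfiles"
alias dotfiles='git --git-dir=$HOME/.dotfiles --work-tree=$HOME'
dotfiles config --local status.showUntrackedFiles no

# Then use it like plain git, directly on the files in $HOME
dotfiles add ~/.bashrc
dotfiles commit -m "Track .bashrc"
dotfiles remote add origin git@example.com:me/dotfiles.git
dotfiles push -u origin main
```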
darkan15@lemmy.world to Selfhosted@lemmy.world • This is another implementation of what's possible inside of termux for all you self hosters. (English)
28 · 1 month ago

The TL;DR version of sharing with no license is that, technically speaking, you are not explicitly permitting others to use your code in any way, only allowing them to look at it. A license is the formal way to give others permission to copy, modify, or use your code.
You don’t need an extra file for the license; you can embed it in a section at the top of your file, as you did with the description. Just add a `# License` section at the very top. If you want the most permissive option you can simply use MIT: you only need to fill in the year of publication of the code, and you can use a pseudonym/username like ‘hereforawhile@lemmy.ml’ if you don’t want to use something that could identify you, such as an email, a username from another site, or your real name, if that’s a concern.
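For illustration only, a sketch of what that header section might look like at the top of a shell script (the year, the name, and the SPDX identifier line are assumptions, not something from the original post; the full MIT text would normally follow or be referenced):

```bash
#!/usr/bin/env bash
# License
#
# SPDX-License-Identifier: MIT
# Copyright (c) 2025 hereforawhile@lemmy.ml
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software... (remainder of the standard MIT license text goes here)

# Description
# ...rest of the script...
```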
darkan15@lemmy.world to Selfhosted@lemmy.world • This is another implementation of what's possible inside of termux for all you self hosters. (English)
32 · 1 month ago

Just wondering, since this is the second post I’ve seen you make like this: why not use git and a forge (Codeberg, GitLab, GitHub) to publish these projects, with proper file separation, a nice README with descriptions and instructions, and a proper OSS license?
darkan15@lemmy.world to Selfhosted@lemmy.world • Spare mini PCs? What would you do with them? (English)
6 · 2 months ago

You don’t need to back up all 24 TB of your data; you can keep a copy of a subset of your important data on another device. If possible, the best would be a 3-2-1 approach.

“RAID is not a backup” is something that gets mentioned a lot, because you can still lose data on a RAID setup.
darkan15@lemmy.world to Selfhosted@lemmy.world • Spare mini PCs? What would you do with them? (English)
6 · 2 months ago

Secondary/failover DNS, or any other service that would be nice to have running when the main server is down for any reason.
darkan15@lemmy.world to Selfhosted@lemmy.world • issues setting up nginx as an https proxy (English)
2 · 2 months ago

On your first part, clarifying your intent: I think you are overcomplicating things by expecting traffic to reach the server via domain name (passing through the proxy) from the Router A network, and by `IP:Port` from the Router B network. You can access everything, from anywhere, through domains and subdomains and avoid using numbers.

If you can’t set up a DNS directly on Router A, you can set it per device that should reach the server through Router B’s port forwarding, meaning the laptop uses itself as primary DNS with an external one as secondary, and any other device you want in that LAN does the same (laptop as primary). It is a bit tedious to do it per device, but still possible.

Wouldn’t this link to the 192.168.0.y address of router B pass through router A, and loop back to router B, routing through the slower cable? Or is the router smart enough to realize he’s just talking to itself and just cut router A out of the traffic?
No, the request would stop at Router B and all traffic would stay on the `10.0.0.*` network; it would not change subnets or anything.

In other words, any device on `10.0.0.*` will make a DNS request: it asks the router where the DNS server is, the DNS query itself is sent directly to the server on port 53, and once the DNS response comes back it queries the server again by domain, but on port `80|443`, and receives the HTTP/HTTPS response.

Remember that all my advice so far is so you don’t have to use any IP or port anywhere, and your experience is seamless on any device using domains and subdomains. The only place where you need to put IPs or ports is on the reverse proxy itself, to tell anything reaching it where each specific app/service is, since those need to run on different ports but be reached through the reverse proxy on the default 80 or 443, so you don’t have to put numbers anywhere else.
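As a concrete illustration of that two-step flow from any device on the `10.0.0.*` LAN (using the server IP and example subdomain that come up later in this thread):

```bash
# Step 1: the DNS lookup goes straight to the local resolver on port 53
dig @10.0.0.114 app1.home.internal +short

# Step 2: the actual request then goes to the reverse proxy on 443 (or 80)
curl -I https://app1.home.internal/
```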
darkan15@lemmy.world to Selfhosted@lemmy.world • issues setting up nginx as an https proxy (English)
2 · 2 months ago

If you decide to run the secondary local DNS on the server on the Router B network, there is no need to loop back: that DNS will handle the domain lookups, and the requests on `10.0.0.x` stay entirely inside the Router B network.

On Router B you would then set the server IP as primary DNS, and an external one like Cloudflare or Google as secondary.

You can still decide to put rules on the reverse proxy based on whether the origin IP is from `192.168.0.*` or `10.0.0.*`, if you see a need to differentiate traffic, but I don’t think that is necessary.
darkan15@lemmy.world to Selfhosted@lemmy.world • issues setting up nginx as an https proxy (English)
2 · 2 months ago

Do yourself a favor and use the default ports for `HTTP` (80), `HTTPS` (443) and `DNS` (53); you are not port forwarding to the internet, so there should be no issues.

That way, you can use URLs like `https://app1.home.internal/` and `https://app2.home.internal/` without having to add ports to anything outside the reverse proxy.

From what you have described, your hardware is connected something like this:
Internet -> Router A (`192.168.0.1`) -> Laptop (`192.168.0.x`), Router B (`192.168.0.y`)

You could run a single DNS on the laptop (or another device) connected to Router A and point the domain to Router B: redirect, for example, the domain `home.internal` (I recommend `<something>.internal`, as it is the one intended for this by convention) to the `192.168.0.y` IP, and it will send all devices to the server via port forwarding.

If Router B has port forwarding of ports 80 and 443 to the server `10.0.0.114`, all the requests are going to reach it, no matter which LAN they come from. The devices connected to Router A will reach the server thanks to the port forwarding, and the devices on Router B can reach anything connected to the Router A network `192.168.0.*`; they make an extra hop but still get there.

Both routers would have to point their primary DNS to the laptop IP `192.168.0.x` (it should be a static IP), and the secondary to either Cloudflare `1.1.1.1` or Google `8.8.8.8`.

That setup depends on having the laptop (or another device) always turned on and connected to the Router A network for that DNS to work.

You could run a second DNS on the server for the `10.0.0.*` LAN only, but it would not be reachable from Router A, the laptop, or any device on that outer LAN, only from devices directly connected to Router B; the only change would be setting the primary DNS on Router B to the server IP `10.0.0.114` so that secondary local DNS is used as primary.

Lots of information; be sure to read slowly and split the steps to handle them one by one, but this should be the final setup, considering the information you have given.

You should be able to set up the certificates and the reverse proxy using subdomains without much trouble, only using `IP:PORT` on the reverse proxy itself.
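If it helps, here is a rough sketch of what that single DNS on the laptop could look like, using dnsmasq (one of the resolvers mentioned further down in this thread); the domain and IPs are the placeholders used above, so substitute the real addresses:

```bash
# Answer home.internal and every subdomain of it with Router B's address
# (192.168.0.y in the thread's notation), and forward everything else upstream.
sudo dnsmasq --no-daemon \
  --address=/home.internal/192.168.0.y \
  --server=1.1.1.1
```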
darkan15@lemmy.world to Selfhosted@lemmy.world • issues setting up nginx as an https proxy (English)
2 · 2 months ago

Most routers, or devices, let you set at least a primary and a secondary DNS resolver (some let you add more), so you could have your local one as primary and an external one like Google or Cloudflare as secondary. That way, if your local DNS resolver is down, queries go straight to the external one and still resolve.
Still. Thanks for the tips. I’ll update the post with the solution once I figure it out.
You are welcome.
darkan15@lemmy.world to Selfhosted@lemmy.world • issues setting up nginx as an https proxy (English)
3 · 2 months ago

It should not be an issue to have everything internal. You can set up a local DNS resolver and configure the device that handles your DHCP (router or other) to hand it out as the default/primary DNS for every device on your network.

To give you some options to investigate, there are: dnsmasq, Technitium, Pi-hole, AdGuard Home. They can resolve external DNS queries and also do domain rewrites/redirection to handle your internal-only domain and point it to the device running your reverse proxy.

That way, you can have a local domain like `domain.lan` or `domain.internal` that only works on, and is managed from, your internal network. And you can use subdomains as well.

I’m sorry if I’m not making sense. It’s the first time I’m working with webservers. And I genuinely have no idea of what I’m doing. Hell. The whole project has basically been a baptism by fire, since it’s my first proper server.
Don’t worry, we all started out much the same and gradually learned more and more. If you have any questions, a place like this is exactly for that; just ask.
darkan15@lemmy.world to Selfhosted@lemmy.world • issues setting up nginx as an https proxy (English)
4 · 2 months ago

Not all services/apps work well with subdirectories behind a reverse proxy.
Some services/apps have a config option to add a prefix to all of their paths to help with it; others have no such option and always expect the paths after the domain to be unchanged.

But if you need to do some kind of path rewrite only on the reverse proxy side, to add or change a segment of the path, there can be issues whenever a path change doesn’t go through the proxy.

In your case, Transmission internally doesn’t know about the subdirectory, so even if you can reach the index/login on the first page load, as soon as the app itself changes paths it redirects you to a path without the subdirectory.

Another example of this is PWAs: when you click a link that should change the path, they don’t reload the page (the action that would force a load through the reverse proxy and trigger the rewrite); instead they use JavaScript to rewrite the path text locally and manipulate the DOM without triggering a page load.

To be honest, the best way out of this headache is to use subdomains instead of subdirectories. That is the standard these days, precisely to avoid path-rewrite magic that breaks in a bunch of situations.
Yes, it can be annoying to handle SSL certificates if you don’t want to, or can’t, issue wildcard certificates, but if you can get a cert with both `maindomain.tld` and `*.maindomain.tld`, then you don’t need to touch it anymore and can use the same certificate for any service/app you might want to host behind the reverse proxy.
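For reference, a hedged sketch of issuing such a certificate with certbot’s DNS-01 challenge (wildcards require a DNS challenge; `maindomain.tld` is the placeholder above, and in practice you would use a DNS-provider plugin instead of `--manual` so renewals are automatic):

```bash
# One certificate covering the apex domain and all subdomains
sudo certbot certonly --manual --preferred-challenges dns \
  -d 'maindomain.tld' -d '*.maindomain.tld'
```

The resulting certificate can then be referenced from every subdomain’s server block in the reverse proxy.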
darkan15@lemmy.world to Selfhosted@lemmy.world • If I use Caddy for reverse-proxying into another local machine... is my local connection not HTTPS? (English)
9 · 2 months ago

If your concern is IoT devices, TVs, and the like sniffing your local traffic, there are alternatives. Some of them are:
- HTTPS from the reverse proxy to the service.
- VLANs or different LANs for IoT and your trusted devices (I do this one).
- An internal VPN connection between devices (like WireGuard), so the communication between selected devices is encrypted; see the sketch below.
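A minimal sketch of that last option between two trusted LAN devices (all addresses, keys, and the interface name are placeholders, not anything from the thread):

```bash
# Generate a key pair on each device
umask 077
wg genkey | tee private.key | wg pubkey > public.key

# /etc/wireguard/wg0.conf on device A, mirrored on device B with the roles swapped:
# [Interface]
# Address    = 10.10.10.1/24
# PrivateKey = <device-A-private-key>
# ListenPort = 51820
#
# [Peer]
# PublicKey  = <device-B-public-key>
# AllowedIPs = 10.10.10.2/32
# Endpoint   = 192.168.1.20:51820

# Bring the tunnel up; traffic between the 10.10.10.* addresses is now encrypted
sudo wg-quick up wg0
```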
darkan15@lemmy.world to Selfhosted@lemmy.world • What is the easiest way to have a self hosted git server? (English)
12 · 2 months ago

The simplest (really the simplest) would be to do a `git init --bare` in a directory on one machine. You can then clone, push, or pull from it, using the directory path as the URL on the same machine and SSH from any other (you could put this bare repo inside a container, but that would really just complicate it). You would have to init a new bare repo, in a new directory, per project.

If by self-hosted server you mean something with a web UI to handle multiple repositories, with pull requests, issues, etc., like your own local GitHub/GitLab, the answer is Forgejo (its documentation has instructions for deploying with Docker). If you want to see what that looks like, there is an online public instance called Codeberg, where the Forgejo code itself is hosted alongside other projects.
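A quick sketch of the bare-repo approach (paths, user, and host names are placeholders):

```bash
# On the machine acting as the "server": one bare repo per project
git init --bare ~/repos/myproject.git

# On that same machine: use the path as the URL
git clone ~/repos/myproject.git

# From any other machine: use SSH
git clone user@server:/home/user/repos/myproject.git
```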
darkan15@lemmy.world to Games@lemmy.world • If you miss old network multiplayer games, or would like to try them with your friends for the first time, may I suggest setting them up via SoftEtherVPN? (English)
4 · 2 months ago

I don’t know if SoftEther has an option to avoid tunneling everything and just use the virtual LAN IPs for games, file transfers, etc.
And I don’t know your actual technical level, or that of the people you play with, but for people who can go as far as opening ports, installing a server on their own machine, and getting others to connect to it, I would suggest Headscale (the free, self-hosted version of Tailscale) as a next step, or, if you’re inclined to learn something a bit more hands-on, WireGuard.

With those you can configure it so only the desired traffic goes through the tunnel (like games or file sharing using the virtual LAN IPs) and the rest goes out normally, or configure exit nodes so that, if and when desired, all traffic is tunneled like what you have now.
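To give an idea of how that selective setup looks with the Tailscale client pointed at a Headscale server (the server URL and node name are placeholders):

```bash
# Join the self-hosted control server; by default only traffic for the tailnet's
# virtual IPs (100.64.0.0/10) goes through the tunnel, everything else is untouched
tailscale up --login-server=https://headscale.example.com

# Optionally, one node also advertises itself as an exit node
# (tailscale up expects the full set of flags each time it is re-run)
tailscale up --login-server=https://headscale.example.com --advertise-exit-node

# ...and another node opts in, tunneling all of its traffic (like SoftEther does now)
tailscale up --login-server=https://headscale.example.com --exit-node=<node-name-or-ip>
```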
If you have any questions about Headscale, you can ask in !selfhosted@lemmy.world
darkan15@lemmy.world to Selfhosted@lemmy.world • Docker dashboards: choice overload (English)
2 · 2 months ago

This would be my choice as well. I went with Dockge precisely because it works with your existing docker-compose files, and there are no issues whether you manage them through Dockge or from the terminal.
If you add Ntfy or Gotify then you should be set.
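Ntfy in particular is easy to hook into scripts; a tiny sketch of publishing a notification to a self-hosted instance with plain curl (the server URL and topic are placeholders):

```bash
# Send a message to the "homelab" topic on a self-hosted ntfy server
curl -d "Stack redeployed on $(hostname)" https://ntfy.example.com/homelab
```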


Maybe this one -> https://github.com/stan-smith/FossFLOW