

I don’t think you need to involve Linux at all if you boot the official Windows installer. I would just install the SSD internally as the only drive, install Windows to it, then put it back in its enclosure.


It looks like it’s about helping to auto-deploy docker-compose.yml updates. So you can just push an updated docker-compose.yml to a repo and have all your machines update, instead of needing to go into each machine or set up something custom to do the same thing.
I already have container updates handled, but something like this would be great so that the single source of truth for my docker-compose.yml can be in a single repo.


The interesting thing is that the myth of American exceptionalism works both ways - both that we can succeed using strategies that have failed elsewhere, and that we’re so unique and different that we’ll fail if we try strategies that are proven successful elsewhere. Either way, it convinces people that we should do the opposite of what’s been shown to solve problems elsewhere.


I use gluetun to connect specific docker containers to a VPN without interfering with other networking, since it’s all self-contained. It also has lots of providers built in, which is convenient: you can just set the provider, your password, and your preferred region instead of needing to manually enter connection details or manage lists of servers (it automatically updates its own cached server list from your provider, through the VPN connection itself).
Another nice feature is that it supports scripts for port forwarding, which works out of the box for some providers. So it can automatically get the forwarded port and then execute a custom script to set that port in your torrent client, soulseek, or whatever.
I could just use a plain WireGuard or OpenVPN container, but this also makes it easy to hop between VPN providers just by swapping the connection details, regardless of whether a provider only supports WireGuard or OpenVPN. It just makes it a little more universal.
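As a rough sketch, the compose service might look like this (the env var names are from gluetun’s wiki as of recent versions, so double-check them for your release; the provider, credentials, and update-port.sh are placeholders, with the script being something you’d write yourself):

```yaml
services:
  gluetun:
    image: qdm12/gluetun
    cap_add:
      - NET_ADMIN                      # gluetun needs to manage the tunnel interface
    environment:
      - VPN_SERVICE_PROVIDER=protonvpn # placeholder: whichever built-in provider you use
      - OPENVPN_USER=your_username
      - OPENVPN_PASSWORD=your_password
      - SERVER_COUNTRIES=Netherlands   # preferred region
      - VPN_PORT_FORWARDING=on         # only works with providers that support it
      # hypothetical hook: gluetun substitutes the forwarded port for {{PORTS}}
      - VPN_PORT_FORWARDING_UP_COMMAND=/scripts/update-port.sh {{PORTS}}
    volumes:
      - ./update-port.sh:/scripts/update-port.sh:ro
```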


Supposedly CoMaps has CarPlay support as of like 2 months ago, according to a page on their website.


I use slskd connected to a VPN and it works great. I just run a gluetun container and then attach the slskd container to it with network_mode: service:, the same way you would connect Transmission to gluetun.
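Roughly like this, as a sketch (image tag and port from memory; slskd’s web UI defaults to 5030, and the port has to be published on the gluetun container since slskd shares its network namespace):

```yaml
services:
  gluetun:
    image: qdm12/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad   # placeholder: your provider + credentials
    ports:
      - 5030:5030                      # slskd web UI, exposed through gluetun

  slskd:
    image: slskd/slskd
    network_mode: service:gluetun      # all slskd traffic rides the VPN tunnel
    depends_on:
      - gluetun
```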


Sounds like a job for a pair of second-hand NanoBeams or something similar.
I second the other commenter who suggested using WISP gear. If you have clear Fresnel zones it should work a treat.


I second this. Gluetun makes it so easy; wiring up docker’s internal networking by hand is such a pain.


HDCP bypass splitters seem easy to find, have you tried one? There are also HDCP converters, so if you have a splitter that only bypasses 1.4, you can try hooking it up to a 2.x-to-1.4 converter.
I also can’t remember exactly how they work, but you might need an HDCP-compliant display connected to the main output of the splitter so the CC can establish an HDCP handshake. Idk if that’s only a thing with some of them. Good luck!


I love to see this. It is kinda weird to have to use rolls, but I guess the mechanical complexity of separating sheets and feeding them reliably isn’t a good fit for an MVP. I wonder how neatly the cut sheets would stack compared with individual sheets from another brand.


Luckily they’re on 2.0.1 now, so there have been two stable versions by now.


Is the external libraries feature maybe what you’re looking for?
There’s already an issue open for it: https://github.com/immich-app/immich/issues/1713
Be sure to give it a thumbs up!


The main novel thing is non-soldered RAM that’s fast. The huge improvement is upgradability in new laptops.


It’ll be hit or miss. If it’s ripped from the same source then it should be fine, but different editions, TV edits, scenes that are cut in some versions, or additional title cards at the beginning will mess it up, so you’d need to QC it.


If you search for pfSense alias script, you’ll find some examples of updating aliases from a script, so you’d only need to write the part that gets the hostnames. Since it sounds like the hostnames are unpredictable, that part might be hard: the only way to get them on the fly is to listen for what hostnames clients on the LAN are actually resolving, probably by hooking into Unbound or whatever. If you can share what the service is, it would be easier to tell whether there’s a shortcut, like the example I gave where the subdomains always land in the same CIDR and one of the hostnames is predictable (or, if the subdomains are always in the same CIDR as the main domain, the script can just look up the main domain’s CIDR). Another, possibly easier, alternative would be an API that searches the certificate transparency logs for the main domain (crt.sh, for example), which would reveal all subdomains that have SSL certificates. You could then just load all those subdomains into the alias and let pfSense look up the IPs.
I would investigate whether the IPs of each subdomain follow a pattern, like a particular CIDR or a dedicated ASN, because reacting to DNS lookups in real time will probably mean some lag between the first request and the routing being updated, compared to a solution that can proactively route all relevant CIDRs or all CIDRs assigned to an ASN.


I think the way people do it is by making a script that gets the hostnames and updates the alias, then just scheduling it in pfSense. I’ve also seen ASN-based routing done with a script, but that’ll only work for large services that run their own AS. If the service is large enough, it might predictably use IPs from the same CIDR, so if you spend some time collecting the relevant IPs, you may find that even when the hostnames are new and random, they always resolve to the same pool of addresses. That’s the lazy way I did selective routing to GitHub, since it was always the same subnet.


That’s what I do: 1.6TB currently on rsync.net, only my personal artifacts, excluding all media that can be reacquired, and it’s a reasonable $10/mo. Synced daily at 4am.
If I wanted my backups to include my media collection or anything exceeding several TB, I would build a second NAS and drop it at my parents’.


Isn’t that description just the article author describing the pictures, not the captions that the parents used? There are no quotes around that part.


If it still boots from the internal disk, then you may just need to set the boot priority to prefer your external drive. That’ll be mobo-specific unfortunately, so I can’t give any tips. I’ve had systems set up to boot from external media whenever it’s plugged in, so it should work.
Back in the day there was also an issue with running full Windows installs from USB drives, where you needed to stop Windows from reinitializing USB devices during bootup since it would knock out its own boot drive, but I’m not seeing anything recent about that, so hopefully it’s no longer an issue.