Alt account of @Badabinski

Just a sweaty nerd interested in software, home automation, emotional issues, and polite discourse about all of the above.

  • 0 Posts
  • 56 Comments
Joined 9 months ago
Cake day: June 9th, 2024

  • I imagine it’s more complicated than that. For example, Pu-238 only emits alpha radiation. I doubt that reactor waste emits only alpha radiation, meaning you’d have to harden the electronics against a nearby and potentially intense source of beta/gamma radiation. I also don’t know whether random high-grade reactor waste gets hot enough to provide meaningful amounts of energy via thermoelectric means. Alternatively, it may get too hot.

    I doubt they could have simply slapped something together. The cost of developing a new RTG capable of using reactor waste would likely be a significant fraction of the budget to develop the probe itself. It might have been worth it, but I feel that it’s not clear-cut.
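    To make the thermoelectric point concrete, here’s a hedged back-of-envelope sketch. The ~0.57 W/g decay-heat figure for Pu-238 and the ~6% thermocouple efficiency are rough public numbers; the waste figure is purely an assumption for illustration, not a real measurement:

```python
# Rough RTG output estimate: electrical power = fuel mass
# * specific thermal power * thermoelectric conversion efficiency.
# All numbers are approximate and for illustration only.

def rtg_electrical_watts(fuel_mass_g, specific_thermal_w_per_g, efficiency):
    """Back-of-envelope electrical output of an RTG."""
    return fuel_mass_g * specific_thermal_w_per_g * efficiency

# Pu-238 gives off roughly 0.57 W/g of decay heat; typical
# thermocouples convert on the order of 6% of that to electricity.
pu238 = rtg_electrical_watts(4800, 0.57, 0.06)  # ~4.8 kg of fuel

# Mixed, cooled reactor waste has a far lower specific heat output
# (order 0.01 W/g or less -- an assumed figure), so the same mass
# buys you almost nothing.
waste = rtg_electrical_watts(4800, 0.01, 0.06)

print(f"Pu-238 RTG: ~{pu238:.0f} W electric")
print(f"Same mass of cooled waste: ~{waste:.1f} W electric")
```

    The point being that even before radiation hardening, the specific power of the fuel dominates whether an RTG design is worth anything.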


  • I’ve been very pleased with my factory-seconds Framework 13 (11th gen i7, 64 gigs of RAM and 2TB storage acquired through other channels). Linux support has been basically perfect for me, although there were some kinks earlier on. The Framework 16 might work for you if you need something with a discrete GPU.

    If you want something more mainstream, ThinkPads are often great for running Linux. Not every model is perfect, so I’d recommend doing some research there. The Arch Linux wiki often has laptop-specific pages that show how well supported a given laptop is. For example, here’s the page for the Framework 13.


  • Cities probably have a higher density of towers, or the towers in cities have more capable antennas. Point-to-point microwave links can be pretty damn fast and reliable. They have their limitations, but even low-end systems like some of Ubiquiti’s 60 GHz stuff can form full-duplex 5 Gbps links at 10+ kilometers. Fiber is still king, but I’m guessing the backhaul isn’t the issue.

    I’m guessing that the issue is congestion on the client radios. 5G is supposed to be much better at dealing with this thanks to time-sharing improvements, but it seems likely that there just aren’t enough towers. One scenario that seems reasonable is that your telco (incorrectly) assumed it wouldn’t need as many towers when upgrading, so it only upgraded a subset of its towers and removed old ones once 4G was deprecated.

    EDIT: you might be able to get better information about wtf is going on by using a community-sourced site like https://cellmapper.net/

    I believe you can use that site to get info about how many towers there are and what the client-side congestion is like.

    EDIT: ew, cellmapper is closed source. OpenCellid or beaconDB seem to be open source equivalents.
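    For a rough sense of why frequency matters so much for these links, the standard free-space path loss formula fits in a few lines. This is idealized line-of-sight loss only; real links also deal with rain fade, obstructions, and antenna gain:

```python
import math

def fspl_db(distance_km, freq_ghz):
    """Free-space path loss in dB (standard formula: distance in km, frequency in GHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

# A 60 GHz point-to-point link vs. a 600 MHz macro cell, both at 10 km:
print(f"60 GHz @ 10 km:  {fspl_db(10, 60):.1f} dB")   # ~148 dB
print(f"0.6 GHz @ 10 km: {fspl_db(10, 0.6):.1f} dB")  # ~108 dB
```

    That 40 dB gap (a factor of 10,000 in power) is why 60 GHz links need high-gain dish antennas pointed straight at each other, while sub-GHz signals reach phones through walls.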


  • Abso-fucking-lutely, amen and hallelujah. I want 6G to focus on improving range and performance in marginal conditions. When shit is good, 5G is fast enough for now. I don’t know how you improve range and penetration without going to lower frequencies, so maybe we should try to do that? Lower frequencies mean less bandwidth, but RF is black magic fuckery and there’s all kinds of crazy shit that can be done with time division, so maybe we can improve throughput in the sub-GHz regime. I dunno about that, I’m just an idiot software developer who is thankful that shit works without me having to sacrifice a goat.

    Maybe there’s a way to broadcast at higher power levels, and maybe there are ways for base stations to be more sensitive or to filter for a better SNR. I have no idea, but I think that’s what the telcos should focus on. Better service over a wider area with the same number of towers would be huge.
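    The bandwidth/throughput trade-off above can be sketched with the Shannon capacity formula. The spectrum widths and SNR figures below are illustrative assumptions, not real carrier allocations:

```python
import math

def shannon_capacity_mbps(bandwidth_mhz, snr_db):
    """Shannon channel capacity C = B * log2(1 + SNR), in Mbps for B in MHz."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_mhz * math.log2(1 + snr_linear)

# A sub-GHz carrier might have ~10 MHz of spectrum; mmWave can have ~400 MHz.
print(f"10 MHz @ 20 dB SNR:  {shannon_capacity_mbps(10, 20):.0f} Mbps")
print(f"400 MHz @ 20 dB SNR: {shannon_capacity_mbps(400, 20):.0f} Mbps")
# Better SNR at the base station claws some of that back, but only logarithmically:
print(f"10 MHz @ 30 dB SNR:  {shannon_capacity_mbps(10, 30):.0f} Mbps")
```

    Capacity scales linearly with bandwidth but only logarithmically with SNR, which is why narrow sub-GHz channels can’t match mmWave throughput no matter how sensitive the receivers get — but they can still get a lot better in marginal conditions.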


  • Python is my primary language. For the way I write code and solve problems, it’s the language where I need the least help from an LLM. Python lets you write code that is incredibly concise while still being easy to read. There’s more of a case to be made for something like Go, since it seems like every single god damned function call ends up being variable, err := someFuckingShit() followed by an if err != nil check and manual handling, instead of nice exception handling. Even there, my IDE does that for me without requiring a computationally expensive LLM to do the work.

    Like, some people have a more conversational development style and I guess LLMs work well for them. I end up constantly context switching between code review mode and writing code mode which is incredibly disruptive.
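    A minimal sketch of the contrast, using a made-up parse_port helper (both versions are hypothetical, just to show the shape of the two styles):

```python
# Go-style in Python: every call returns (value, error) and the caller
# has to check manually at every level.
def parse_port_go_style(raw):
    try:
        port = int(raw)
    except ValueError as exc:
        return None, exc
    if not 0 < port < 65536:
        return None, ValueError(f"port out of range: {port}")
    return port, None

# Python-style: just raise; any caller up the stack can catch once.
def parse_port(raw):
    port = int(raw)  # raises ValueError on garbage input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

port, err = parse_port_go_style("8080")
assert port == 8080 and err is None
```

    The exception version pushes the error-handling decision to whoever actually cares, which is a big part of why idiomatic Python stays concise.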


  • As a senior dev, I have no use for it in my workflow. The only purpose it would serve for me is to reduce the amount of typing I do. I spend about 5-10% of my time actually writing code. The rest of my dev time is spent architecting, debugging, testing, or documenting. LLMs aren’t really good at most of those things once you move past the most superficial levels of complexity.

    Besides, I don’t actually want something to reduce the amount I’m typing. If I’m typing too much and getting annoyed, it’s a sure sign that I’ve done something bad. If I’m writing boilerplate, then it’s time to write an abstraction to eliminate it. If I’m writing repetitive tests, then it’s a sign I need to move to a property-based testing framework like Hypothesis. If the LLM spits all of this out for me, I will end up with code that is harder to understand and maintain.

    LLMs are fine for learning, and for junior positions where more experienced folks will be reviewing the code, but they just aren’t that helpful past a certain point.

    Also, this is probably a small thing, but I have yet to find an LLM that writes anything other than shitty, terrible shell scripts. Please for the love of God don’t use an LLM to write shell scripts. If you must, then please pass the results through shellcheck and fix all of the issues there.
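    The property-based testing idea mentioned above can be sketched without pulling in Hypothesis itself. The hand-rolled loop below just generates random inputs and checks invariants, which is roughly what Hypothesis’s @given decorator automates (with much smarter input generation and shrinking):

```python
import random

def dedupe(items):
    """Remove duplicates while preserving first-seen order."""
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]

# Properties: output has no duplicates, and has exactly the same members
# as the input. Hypothesis would generate the inputs via
# @given(st.lists(st.integers())); this loop just illustrates the idea.
random.seed(0)
for _ in range(200):
    items = [random.randint(-5, 5) for _ in range(random.randint(0, 20))]
    result = dedupe(items)
    assert len(result) == len(set(result))  # no duplicates
    assert set(result) == set(items)        # same members
```

    One property check like this replaces a pile of hand-written example-based tests, which is the boilerplate reduction the comment is talking about.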


  • The license of a GPLv3 project can change going forward, provided all copyright holders agree to the change. The license cannot be changed for code that was already released. If the Paisa devs could get every contributor on board, then it’s fine. Alternatively, if they forced contributors to sign a CLA (Contributor License Agreement) that signs copyright over to Paisa (most CLAs include a copyright transfer), then that’s basically free rein to rug-pull shit whenever they feel like it.

    Fuck CLAs, by the way. Try to avoid contributing in your free time to projects that force you to sign one. If you’re contributing on behalf of a company, it’s likely that your legal team will take umbrage at you signing a CLA, but it’s not like you’d own the copyright to your work anyways, so it’s less of an issue there.

    Support projects that have you sign a DCO (Developer Certificate of Origin). The DCO protects the company or individual running the project without forcing developers to give up their rights.


  • Oof, I didn’t know that about firejail. I’d heard of it, but I’d never used it. Like, c’mon folks! If you need privilege escalation, either require launching as root (if appropriate), or delegate the responsibility to a small, well-audited tool designed explicitly for that purpose and spawn a new privileged process. Don’t use SUID. You will fuck it up. If you reach the point where setuid is your only option, then you’ve hopefully learned enough to rearchitect so you don’t need it, or to give up, or to use it anyway if you’re, say, someone who maintains a libc.

    EDIT: this is overly dramatic, but also it’s not. I personally feel like using SUID is kinda like rolling your own crypto in terms of required competence.
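    For what it’s worth, finding the setuid binaries on a box (the audit surface this comment is worried about) takes only a few lines of stdlib Python. Sketch only; the /usr/bin path is just an example:

```python
import os
import stat

def find_suid(root):
    """Yield paths under `root` whose setuid bit is set."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # broken symlink, permission denied, etc.
            if mode & stat.S_ISUID:
                yield path

# Example: list setuid binaries in /usr/bin. On a system with firejail
# installed the old way, its binary would show up in a listing like this.
for path in find_suid("/usr/bin"):
    print(path)
```

    Every path that prints is a program that runs with its owner’s privileges (usually root) no matter who executes it, which is exactly why each one needs to be small and heavily audited.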