• 9 Posts
  • 161 Comments
Joined 2 years ago
Cake day: October 4th, 2023

  • For some workloads, yes. I don’t think that the personal computer is going to go away.

    But it also makes a lot of economic and technical sense for some of those workloads.

    Historically — like, think up to about the late 1970s — useful computing hardware was very expensive. And most people didn’t have a requirement to keep computing hardware constantly loaded. In that kind of environment, we built datacenters and it was typical to time-share them. You’d use something like a teletype or some other kind of thin client to access a “real” computer to do your work.

    What happened at the end of the 1970s was that prices came down enough and there was enough capability to do useful work to start putting personal computers in front of everyone. You had enough useful capability to do real computing work locally. They were still quite expensive compared to the great majority of today’s personal computers:

    https://en.wikipedia.org/wiki/Apple_II

    The original retail price of the computer was US$1,298 (equivalent to $6,700 in 2024)[18][19] with 4 KB of RAM and US$2,638 (equivalent to $13,700 in 2024) with the maximum 48 KB of RAM.

    But they were getting down to the point where they weren’t an unreasonable expense for people who had a use for them.

    At the time, telecommunications infrastructure was much more limited than it is today, so using a “real” computer remotely from many locations was a pain, which also made the PC make sense.

    From about the late 1970s to today, the workloads that have dominated most software packages have been more-or-less serial computation. While “big iron” computers could do faster serial compute than personal computers, it wasn’t radically faster. Video games with dedicated 3D hardware were a notable exception, but those were latency sensitive and bandwidth intensive, especially relative to the available telecommunication infrastructure, so time-sharing remote “big iron” hardware just didn’t make a lot of sense.

    And while we could — and to some extent, did — ramp up serial computational capacity by using more power, there were limits on the returns we could get.

    However, AI workloads have notably different characteristics: they’re heavily parallel, they run on expensive hardware, and throwing a lot of power at them yields meaningful, useful increases in compute capability.

    • Just like in the 1970s, the hardware to do competitive AI stuff for many things that we want to do is expensive. Some of that is just short term, like the fact that we don’t have the memory manufacturing capacity in 2026 to meet demand, so prices will rise until enough buyers are priced out that the available chips go to the highest bidders. That’ll resolve itself one way or another, like via a buildout of memory manufacturing capacity. But some of it is also that the quantities of memory involved are still pretty expensive. Even at pre-AI-boom prices, if you want the kind of memory that it’s useful to have available — hundreds of gigabytes — you’re going to be significantly increasing the price of a PC, and that’s before whatever the cost of the computation hardware is.

    • Power. Currently, we can usefully scale out parallel compute by using a lot more power. Under current regulations, a laptop that can go on an airline in the US can have a 100 Wh battery and a 100 Wh spare, separate battery. If you pull 100 W on a sustained basis, you blow through a battery like that in an hour. A desktop can go further, but it is limited by heat and cooling, it starts running into the limit of a US household circuit at something like 1800 W, and at that point it is dumping a very considerable amount of heat into the house. Current Nvidia hardware pulls over 1 kW. A phone can’t do anything like any of the above. The power and cooling demands range from totally unreasonable to at least somewhat problematic. So even if we work out the cost issues, I think that it’s very likely that the power and cooling issues will be a fundamental bound.
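
    To put rough numbers on that (these are just the figures above, plugged in; the 1800 W figure assumes a standard 15 A, 120 V US circuit):

    ```python
    # Back-of-the-envelope numbers for the power and cooling point above.
    battery_wh = 100            # largest battery allowed in the cabin on US airlines
    sustained_draw_w = 100      # sustained draw while doing heavy compute

    runtime_hours = battery_wh / sustained_draw_w   # -> 1.0 hour per battery

    household_circuit_w = 120 * 15                  # 120 V x 15 A ~= 1800 W wall limit
    datacenter_gpu_w = 1000                         # "over 1 kW" per current Nvidia accelerator

    print(f"laptop battery lasts ~{runtime_hours:.1f} h at {sustained_draw_w} W")
    print(f"one US household circuit tops out around {household_circuit_w} W; "
          f"a single datacenter GPU already wants ~{datacenter_gpu_w} W")
    ```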

    In those conditions, it makes sense for many users to stick the hardware in a datacenter with strong cooling capability and time-share it.

    Now, I personally really favor having local compute capability. I have a dedicated computer, a Framework Desktop, to do AI compute, and also have a 24GB GPU that I bought in significant part to do that. I’m not at all opposed to doing local compute. But at current prices, unless that kind of hardware can provide a lot more benefit to most people than it currently does, most people are probably not going to buy local hardware.

    If your workload keeps hardware active 1% of the time — and use as a chatbot might well look like that — then it is something like a hundred times cheaper in terms of hardware cost to time-share the hardware. If the hardware is expensive — and current Nvidia hardware runs tens of thousands of dollars, too rich for most people’s taste unless they’re getting Real Work done with the stuff — it looks a lot more appealing to time-share it.
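
    As a toy version of that arithmetic (the hardware price here is an illustrative assumption, not a quote):

    ```python
    # Rough cost comparison behind the "hundred times cheaper" claim above.
    hardware_cost = 30_000.0    # assumed price of a datacenter-class accelerator, USD
    utilization = 0.01          # your workload keeps it busy 1% of the time

    # Buy it yourself: you pay for all of it, however little you use it.
    dedicated_cost = hardware_cost

    # Time-share it with ~100 similarly light users: each pays for their slice.
    shared_cost_per_user = hardware_cost * utilization

    print(f"dedicated: ${dedicated_cost:,.0f}  vs  time-shared: ${shared_cost_per_user:,.0f} per user")
    # -> dedicated: $30,000  vs  time-shared: $300 per user, i.e. ~100x cheaper on hardware
    ```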

    There are some workloads for which there might be constant load, like maybe constantly analyzing speech, doing speech recognition. For those, then yeah, local hardware might make sense. But…if weaker hardware can sufficiently solve that problem, then we’re still back to the “expensive hardware in the datacenter” thing.

    Now, a lot of Nvidia’s costs are going to be fixed, not variable. And assuming that AMD and so forth catch up, prices in a competitive market will come down — with scale, one can spread fixed costs out, and only the variable costs place a floor on hardware prices. So I can maybe buy that, if we hit limits that make buying a ton of memory less interesting, prices will come down. But I am not at all sure that the “more electrical power provides more capability” aspect will change. And as long as that holds, it’s likely going to make a lot of sense to use “big iron” hardware remotely.

    What you might see is a computer on the order of, say, a 2022 computer on everyone’s desk…but with a lot of parallel compute workloads farmed out to datacenters, which have hardware much more capable of parallel compute.

    Cloud gaming is a thing. I’m not at all sure that the cloud will dominate there, even though it can leverage parallel compute. There, latency and bandwidth are real issues. You’d have to put enough datacenters close enough to people, and run enough fiber, to make that viable. And I’m not sure that we’ll ever reach the point where it makes sense to do remote compute for cloud gaming for everyone. Maybe.

    But for AI-type parallel compute workloads, where the bandwidth and latency requirements are a lot less severe, and the useful returns from throwing a lot of electricity at the problem are significant…then it might make a lot more sense.

    I’d also point out that my guess is that AI probably will not be the only major parallel-compute application moving forward. Unless we can find some new properties in physics or something like that, we just aren’t advancing serial compute very rapidly any more; things have slowed down for over 20 years now. If you want more performance, as a software developer, there will be ever-greater relative returns from parallelizing problems and running them on parallel hardware.

    I don’t think that, a few years down the road, building a computer comparable to the one you might have bought in 2024 is going to cost more than it did in 2024. I think that people will have PCs.

    But those PCs might be running software that does an increasing amount of parallel compute in the cloud as the years go by.


  • Why buy Russian Steel?

    Without looking at the numbers, I’d guess that Russia is probably the cheapest option for those companies importing it from Russia.

    It also sounds like it’s not just steel in general, but some specific stuff:

    Sanctions on Russian exports have blocked most steel products from flowing into the EU, especially the most basic ones. Yet semi-finished slabs are still permitted into the bloc because Belgium, Czechia and Italy requested they remain available for factories that they say have no alternative sources of supply.

    I’m a little skeptical that nobody else out there produces those, though.

    searches

    Apparently they look like this:

    https://kavehmetal.com/steel-slab-7-essential-tips2025-guide/

    Steel slab plays a vital role in the production of steel sheets, plates, and other related products. Its use is particularly prominent in the manufacture of:

    Hot-rolled sheets or black sheets: The slab is heated to a specific temperature, then passed through rollers to reduce thickness and achieve the desired dimensions.

    Structural components: It is also used in the production of I-beams, rebars, and steel pipes, which are essential for construction and infrastructure projects.


    Cluster weapons are not banned. Some countries have entered into a treaty not to use them, but the US is not one of them.

    I’d also add that my guess is that, as with land mines, the fairly-successful showing in the Russo-Ukrainian War means that weapons with submunitions probably are going to wax more than wane.

    On 18 July 2024, the Parliament of Lithuania decided to withdraw from the convention.[37] The Lithuanian government argued that Russia has used cluster munitions extensively during the Russian invasion of Ukraine and would not hesitate to use them in conflict with NATO.[38] The government also pointed out that of the NATO states bordering Russia, only Lithuania and Norway were parties to the convention.[37] Lithuania deposited its instrument of withdrawal from the convention on 6 September 2024,[39] and the withdrawal took effect on 6 March 2025.[40]

    For example, ATACMS missiles using cluster munitions made sticking aircraft and weapons in revetments — which Russia did at one helicopter airfield — a lot less effective for protection.


  • So, I’m pretty exasperated with the Trump fans too.

    However, I also did do what I could to understand where people are coming from. Bought several books written by political science people talking about the elections and motivations.

    Let me put it this way. A lot of people here on the left, on the Threadiverse, would probably criticize Margaret Thatcher because she largely shut down coal mining in the UK.

    The coal workers in the article are basically in the same boat as those UK coal workers were. Hillary Clinton once — truthfully, if perhaps not showing a lot of sensitivity — said that “we’re going to put a lot of coal miners out of work”. Those people are right wing, but in significant part, they’re voting for Trump for the same reasons that the coal miners in the UK voted for Labour — because they’re scared of their coal mining jobs going away and want more demand for their labor. For them, Hillary Clinton is their Margaret Thatcher, and Trump at least represents the possibility of salvation.

    We have to stop burning coal. We can’t continue emitting carbon dioxide.

    I agree. I’m not arguing that we should mine more coal. I think that shutting down coal mining here is the right move (and, for that matter, that shutting it down in the UK was too). But I do think that it’s important to at least understand why people are doing some of the things they’re doing, even if one doesn’t agree with them.

    There aren’t that many people employed by the coal mining industry directly, not anymore, but there are companies that support the coal mining industry. Maybe they provide rail or maintenance services. Maybe they’re a restaurant that serves people who get money from the coal industry. Stuff like that. Coal goes away, so do the supporting businesses.

    Those people probably aren’t making very good political moves, not by my estimation. But they’re doing it because if the industry goes away, so do their jobs. So do a lot of their villages. I’d say that Trump is lying, yeah, but Trump is promising them hope – a coal renaissance.

    Wyoming and West Virginia are our two top coal-mining states. They also had the highest percentage vote share of all states for Trump in the 2024 general presidential election.

    And then you have the oil and natural gas industries. Those are a lot bigger in some other states. If you transition to other power sources, those guys are going to be out of a job too.

    I’m willing to say “Well, sorry, guys, but that industry just doesn’t make sense any more. You’re going to have to find new jobs, and you are probably going to have to move.” But for those people, that’s going to mean losing towns and stuff that they put time into. Their social network goes away, has to be rebuilt somewhere else. The largest investment that most Americans make is in their home. If people leave en masse, the value of that property falls too. Their net worth falls. I’m just saying that what Trump is dangling in front of them is the prospect of not having to do that. Yeah, it’s not true — he’s giving them a pleasant lie. But I think that it is, unfortunately, a very human trait to readily believe things that we want to be true, especially when someone is working very hard to make us believe them.



  • So an internet

    The highest data rate that LoRa appears to support in North America is 21,900 bits per second, so you’re talking about 21 kbps, or roughly 2.7 kB/s in a best-case scenario. That’s about half of what an analog telephone modem could achieve.
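
    The conversion, for reference (the modem figures are the standard 33.6/56 kbps dial-up rates, included only for comparison):

    ```python
    # Unit conversion behind the LoRa bandwidth figures above.
    lora_bps = 21_900                        # max LoRa data rate in North America
    lora_kbytes_per_s = lora_bps / 8 / 1000  # -> ~2.7 kB/s best case

    v34_modem_bps = 33_600                   # late analog modem
    v90_modem_bps = 56_000                   # last-generation analog modem, downstream

    print(f"LoRa best case: ~{lora_kbytes_per_s:.1f} kB/s")
    print(f"that's {lora_bps / v34_modem_bps:.0%} of a 33.6k modem, "
          f"{lora_bps / v90_modem_bps:.0%} of a 56k modem")
    ```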

    It’s going to be pretty bandwidth-constrained, which limits how much traffic you can route around.

    I think that the idea of a “public access, zero-admin mesh Internet over the air” isn’t totally crazy, but that it’d probably need to use something like laser links and hardware that can identify and auto-align to other links.



  • GitHub explicitly asked Homebrew to stop using shallow clones. Updating them was “an extremely expensive operation” due to the tree layout and traffic of homebrew-core and homebrew-cask.

    I’m not going through the PR to understand what’s breaking, since it’s not immediately apparent from a quick skim. But here are three possible problems, based on what people are mentioning there.

    The problem is the cost of the shallow clone

    Assuming that the workload here is always --depth=1, that they aren’t doing commits at a high rate relative to clones, and that generating the shallow clone is what’s expensive for git, I feel like a better solution for GitHub would be some patch to git that lets it cache a depth=1 shallow clone for a given hashref.

    The problem is the cost of unshallowing the shallow clone

    If the actual problem isn’t the shallow clone itself — that is, a regular clone would be fine, but unshallowing is a problem — then a patch to git that allows more-efficient unshallowing would be a better solution. I’d think that unshallowing should only need a time-ordered index of commits and the blobs they reference up to a given point. That shouldn’t be that expensive for git to maintain, if it doesn’t already have it.

    The problem is that Homebrew has users repeatedly unshallowing a clone off GitHub and then blowing it away and repeating

    If the problem is that people keep repeatedly doing a full clone off GitHub — that is, a regular, non-shallow clone would also be problematic — I’d think that a better solution would be to have Homebrew keep a local bare clone as a cache, fetch into that cache, and then use it as a reference when creating the new clone. If Homebrew uses the fresh clone as read-only and the cache can be relied upon to remain, then they could use --reference alone. If not, then add --dissociate. I’d think that that’d lead to better performance anyway.
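
    A minimal sketch of that last approach — paths and names are mine, and Python is just standing in for whatever Homebrew would actually use; the relevant git options are --mirror (a bare clone whose fetch refspec updates all refs), --reference, and --dissociate:

    ```python
    # Keep a local mirror of the repo as an object cache, then create cheap fresh
    # clones that borrow objects from it instead of re-downloading from GitHub.
    import subprocess
    from pathlib import Path

    REPO_URL = "https://github.com/Homebrew/homebrew-core.git"   # example repo
    CACHE = Path.home() / ".cache" / "repo-cache" / "homebrew-core.git"

    def update_cache() -> None:
        """Create or refresh the local mirror clone used as a cache."""
        if CACHE.exists():
            subprocess.run(["git", "-C", str(CACHE), "fetch", "--prune"], check=True)
        else:
            CACHE.parent.mkdir(parents=True, exist_ok=True)
            subprocess.run(["git", "clone", "--mirror", REPO_URL, str(CACHE)], check=True)

    def fresh_clone(dest: str, dissociate: bool = True) -> None:
        """Clone from GitHub, borrowing objects from the cache via --reference.

        --dissociate copies the borrowed objects in, so the new clone keeps
        working even if the cache directory is later deleted.
        """
        cmd = ["git", "clone", "--reference", str(CACHE)]
        if dissociate:
            cmd.append("--dissociate")
        cmd += [REPO_URL, dest]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        update_cache()
        fresh_clone("homebrew-core")
    ```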





  • Britannica’s print edition bit the dust in 2010:

    https://en.wikipedia.org/wiki/Encyclopædia_Britannica

    The Encyclopædia Britannica (Latin for ‘British Encyclopaedia’) is a general-knowledge English-language encyclopaedia. It has been published since 1768, and after several ownership changes is currently owned by Encyclopædia Britannica, Inc. The 2010 version of the 15th edition, which spans 32 volumes and 32,640 pages, was the last printed edition.[1] Since 2016, it has been published exclusively as an online encyclopaedia at the website Britannica.com.

    Printed for 245 years, the Britannica was the longest-running in-print encyclopaedia in the English language.

    …but the World Book Encyclopedia is still doing printed editions:

    https://en.wikipedia.org/wiki/World_Book_Encyclopedia

    The World Book Encyclopedia is an American encyclopedia.[1] World Book was first published in 1917. Since 1925, a new edition of the encyclopedia has been published annually.[1] Although published online in digital form for a number of years, World Book is currently the only American encyclopedia which also still provides a print edition.[2] The encyclopedia is designed to cover major areas of knowledge uniformly, but it shows particular strength in scientific, technical, historical and medical subjects.[3]

    World Book, Inc. is based in Chicago, Illinois.[1] According to the company, the latest edition, World Book Encyclopedia 2024, contains more than 14,000 pages distributed along 22 volumes and also contains over 25,000 photographs.[4]

    I have to admit that I’ve never bought a print copy of the World Book myself, though I did grow up with one.



  • they don’t do any first-hand investigation of basic info that is clearly shared or copied from other USG agencies.

    Specifically the World Factbook people probably don’t, but I’m sure that at least some of the estimates will come from the CIA, because they’re going to be the ones responsible for producing them.

    But what I’m saying is that they aren’t going to be closing the analysis guys down, just the public publication of that information. And the analysis part is going to be the bulk of the budget.