• 1 Post
  • 55 Comments
Joined 5 months ago
Cake day: June 24th, 2025




  • Software development has the advantage of happening in an environment where the cost of failure is relatively low. A bad change might be caught by the test suite before it ever sees production. If it does get to production, there might be consequences, but not the kind that typically kill people.

    In fact, I think software engineering in general should lean into this idea for most things. Don’t try to ignore failure; catch it as fast as possible. Things like medical devices and aeronautic software should continue using formal verification methods, but the rest of us should iterate fast.

    Iterating fast doesn’t work for other things. I’m a programmer by trade, but I have enough electrical knowledge that I once took up a contract to design a PCB. I tried the “one thing at a time” approach, and it just doesn’t work very well there. Non-trivial PCBs will have errors in them the first few times you try them, and you’ll need to go back for a redesign. In software, a revision potentially takes minutes or even seconds. With PCBs, you have to send the design out to a manufacturer and wait at least a week for them to turn it around. Even if you have the equipment to do it yourself, it still takes hours. You just have to batch up your changes. (Electrical simulators can potentially help with this, though.)

    In government, the effects of a policy can take years or even decades to play out. A single-change-at-a-time approach would be a stranglehold on the ability to fix problems.


  • It does, in an indirect way, because there are more parties.

    Let’s say you have $1M to try to buy off some politicians with campaign funds. If there are only two viable candidates, you give $500k to each. Now you’ve bought both parties and can win no matter the outcome.

    If there are three or four or five parties, though, you have to guess who is going to win. You can’t split it up that many times and still have much influence on the politicians in question. Your funds can be easily swamped out by grassroots groups. Your guess based on polling can also end up being dead wrong when some party makes a sudden surge in the final weeks.

    That said, it’d still be better if we ditched Citizens United and funded elections publicly.



  • . . . the fundamental ideas about rates of change seem like they’re something that every human deserves to be exposed to.

    People understand the idea of instantaneous speed intuitively. The trouble is giving it a rigorous mathematical foundation, and that’s what calculus does. Take away the rigor, and you can teach the basic ideas to anyone with some exposure to algebra. 6th grade, maybe earlier. The full rigorous treatment isn’t particularly remarkable or even that useful for most people.
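
    To make that concrete, here’s a rough sketch of the non-rigorous version (my own toy example, nothing standardized): approximate instantaneous speed with an average speed over a shrinking window, which only needs algebra.

        # Position of a falling object (meters) after t seconds, ignoring air resistance.
        def position(t):
            return 4.9 * t**2

        # Average speed over a shrinking window starting at t = 2 seconds. As the
        # window shrinks, the value settles toward the instantaneous speed
        # (about 19.6 m/s), which is all a derivative is before the limit is formalized.
        for h in (1.0, 0.1, 0.01, 0.001):
            avg_speed = (position(2 + h) - position(2)) / h
            print(f"h = {h:<6} average speed = {avg_speed:.4f} m/s")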

    When you go into a college major that requires calculus, they tend to make you take it all over again whether or not you took it in high school.

    Probability and statistics are far more important. We run into them constantly in daily life, and most people do not have a firm grounding in them.
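
    One everyday example (with numbers invented purely for illustration): a test that is “99% accurate” for a rare condition still produces mostly false positives, which a quick simulation makes obvious.

        import random

        random.seed(0)

        # Invented numbers: 1 in 1,000 people have the condition, and the test is
        # right 99% of the time in both directions.
        PREVALENCE = 0.001
        ACCURACY = 0.99

        positives = 0
        true_positives = 0
        for _ in range(1_000_000):
            sick = random.random() < PREVALENCE
            tests_positive = random.random() < (ACCURACY if sick else 1 - ACCURACY)
            if tests_positive:
                positives += 1
                true_positives += sick

        # Despite the "99% accurate" test, only about 9% of positive results are real.
        print(f"positive results: {positives}, actually sick: {true_positives}")
        print(f"chance a positive result is real: {true_positives / positives:.1%}")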



  • Sorta modern.

    There have been two big jumps in baseline RAM usage during my time using Linux. The first was the move from libc to glibc, which as I recall forced at least 8MB. The second was adding Unicode support. That blew things up into the ~~gigabyte~~ hundreds-of-megabytes range.

    Edit: I’m basing a lot of this on memory. The gigabyte range would have been a stretch for an OG Raspberry Pi; I think it was closer to 128MB. That seems more reasonable, given the difficulty of supporting every written language.

    We can’t exactly throw out Unicode support, at least not outside of specific use cases. Hypothetically, you might be able to make architectural changes to Unicode that would take less RAM, but it would likely require renegotiating all the cross-cultural deals that went into Unicode the first time. Nobody wants to go through that again.
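
    For a sense of why Unicode support carries a data cost at all, here’s a small Python sketch (using Python’s bundled character tables as a stand-in; the numbers for glibc or a whole distro will differ): correct handling needs per-character property data, case-mapping rules, and normalization logic, not just wider strings.

        import unicodedata

        # A few characters that byte-at-a-time ASCII handling gets wrong. Each one
        # needs table data: a name, a UTF-8 length, and a case mapping.
        for ch in "ßéÆıŉ":
            print(
                f"U+{ord(ch):04X} {ch} "
                f"name={unicodedata.name(ch)} "
                f"utf8_bytes={len(ch.encode('utf-8'))} "
                f"upper={ch.upper()}"
            )

        # Case mapping isn't one-to-one ('ß' uppercases to 'SS'), and normalization
        # changes lengths: 'é' can be one code point or two.
        nfd = unicodedata.normalize("NFD", "café")
        print(len("café"), len(nfd))  # 4 vs. 5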