

  • A problem in some teams is the difference between the respective audiences for commit activity vs. ticket activity.

    The people who engage with commit activity tend to share more common ground and sensibilities. You likely have to document your work, go through code reviews as the code lands in the codebase, and other such activity.

    However, on the ticket side you are likely to get people involved who are really obnoxious to contend with. Things like:

    • Getting caught up in arguments over sizing, where the argument takes more of your time than doing the request would
    • Having to explain to someone who shouldn’t care why the ticket was opened in the first place, even though all the real stakeholders immediately understood that it makes sense.
    • Work getting prioritized or descoped due to political infighting rather than actual business need
    • Putting in extra work to unwind completed work due to a miscommunication in planning, or a project manager wanting to punish a marketing person for failing to get their request through the process properly
    • Walking an issue through the process to completion, which means iterating through 7 states with about 16 mandatory fields that are editable or read-only depending on the state, and sometimes getting stuck for lack of permission because of bureaucratic nonsense that runs counter to everyone’s real-world understanding.

    In a company with armies of project managers, the ticket side is the side of dread, even if the technical code side is relatively sane.


  • Yep, and then you have to opt out all over again the next week when an update decides you need to verify that you really meant to opt out…

    And if you managed to avoid an MS account when you installed, it will interrupt your login to say “you cannot proceed like you have been doing for the past year without adding an MS account now”, and then you get to look up how to escape that dialog without adding an MS account…




  • The main hiccup for hardware support is GPU support, and as a side effect of the big business now being LLMs, a use case that prefers Linux, GPUs are getting more Linux attention.

    For example, NVIDIA drivers went years and years with a status quo of “screw open source, compile our driver and deal with the limitations”. Only after they got big in the datacenter did they finally start working towards being fully open in the kernel space (though the firmware and user space remain closed source, which is a bit more manageable).







  • Actually, the lower level is likely to be less efficient, because it is oblivious to the nature of the data.

    For example, a traditional RAID1 mirror immediately starts a rebuild on creation, across the entire potential data capacity of the storage, before a single byte of actual data has been written. So you spend an entire drive’s worth of I/O making “don’t care” bytes redundant.

    Similarly, for snapshotting, it can only track dirty blocks. So when you replace uninitialized data that means nothing with actual data, the snapshot layer is compelled to back up that uninitialized data, because it has no idea whether the replaced blocks were uninitialized junk or real content.

    There are some mechanisms, in theory and in practice, to convey a bit of context to the block layer (discard/TRIM, for example), but broadly speaking, by virtue of being a mostly oblivious block layer, you have to resort to the most naive and often inefficient approaches.

    That said, block capacity is cheap, and the block level can do things in a “dumb” way that may be easier for an implementation to get right, versus a more clever approach with a bigger surface for mistakes.
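    The dirty-block problem above can be sketched in a few lines. This is a hypothetical toy model, not any real snapshot implementation: a copy-on-write layer that saves the prior contents of every overwritten block, with no way to tell real data from never-initialized junk.

    ```python
    # Hypothetical sketch of a naive block-level copy-on-write snapshot.
    # The layer tracks dirty blocks only; it cannot distinguish real data
    # from uninitialized "don't care" bytes, so it backs up both alike.

    BLOCK_SIZE = 4  # tiny blocks to keep the example readable

    class BlockSnapshot:
        def __init__(self, device):
            self.device = device  # list of bytearray blocks
            self.saved = {}       # block index -> original contents

        def write(self, index, data):
            assert len(data) == BLOCK_SIZE
            if index not in self.saved:
                # No context from above: whatever was here gets preserved,
                # even if it was never meaningful.
                self.saved[index] = bytes(self.device[index])
            self.device[index] = bytearray(data)

    # A fresh "disk" full of uninitialized junk.
    disk = [bytearray(b"\xde\xad\xbe\xef") for _ in range(8)]
    snap = BlockSnapshot(disk)

    # Writing real data over junk still forces the junk into the snapshot.
    snap.write(0, b"real")
    print(snap.saved)  # {0: b'\xde\xad\xbe\xef'} -- junk preserved for nothing
    ```

    A filesystem-level snapshot (btrfs, ZFS) avoids this, since it knows which blocks actually hold data.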




  • I’ve seen both. Our area has, for a few years, required mixed-use zoning. So a bunch of projects that clearly only wanted to do housing are doing what you describe: housing plus a bare-minimum commercial/office effort. Notably, they trumpet “walkable”, which is really code for “we don’t waste land on unprofitable parking lots”. So you have somewhat dense living and retail where the retail has zero parking, meaning the only customers a business could hope for are the people in the units immediately nearby, which is not a lot, since each project seems to go for low-rise housing. You get three stories of apartments, which is denser than suburbia, but not nearly dense enough to sustain a dedicated retail presence.

    But there is this mall they’ve effectively renovated into a “downtown”, adding high-rise apartments, and lots of them, to the massive retail presence, along with big office buildings. Critically, they also expanded the amount of available parking to an insane degree. It was a pretty failing mall (like most) and now it’s doing really well, with high occupancy for the apartments and people going there for the rather nice retail. I think this was the first project, and it inspired the county to decide everything will be a success if it’s just mixed use; they haven’t really come around to realizing that the current weak regulations haven’t succeeded in forcing developers to build viable mixed use.



  • Yep, and I see evidence of that overcomplication in some ‘getting started’ questions, where people ask about really convoluted design points and others reinforce it by doubling down or mentioning yet more exotic stuff, when the asker might be served by a checkbox in a ‘dumbed down’ self-hosting distribution on a single server, or by installing a package and just having it run, or at most by running a podman or docker command. If they are struggling with complicated networking and scaling across a set of systems, they are going way beyond what makes sense for a self-hosting scenario.


  • Based on what I’ve seen, I’d also say a homelab is often needlessly complex compared to what I’d consider a sane approach to self hosting. People throw in all sorts of complexity to imitate what they are asked to do professionally: things that are either actually bad but have hype/marketing behind them, or that bring value only at scales beyond a household’s hosting needs. Far simpler setups will suffice, and they are nearly zero-touch day to day.


  • For 90% of static site requirements, it scales fine. That entry-point reverse proxy is faster at serving content via filesystem calls than at making an HTTP call to another HTTP service. For self-hosting types of applications, I’d guess that percentage goes up to 99.9%.

    If you are in a situation where serving the files directly through your reverse proxy does not scale, throwing more containers behind that proxy won’t help in the static content scenario. You’ll need something like a CDN, and those like to consume straight directory trees, not containers.
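    A minimal sketch of the point, assuming nginx as the entry-point proxy (the host, paths, and backend port are hypothetical): static content is served straight from disk, and only genuinely dynamic routes pay for the extra HTTP hop to a backend.

    ```nginx
    server {
        listen 80;
        server_name example.internal;   # hypothetical host

        # Static content: plain filesystem reads, no proxied HTTP call.
        root /srv/www/static;           # hypothetical directory
        location / {
            try_files $uri $uri/ =404;
        }

        # Only genuinely dynamic routes get proxied to a backend service.
        location /api/ {
            proxy_pass http://127.0.0.1:8080;
        }
    }
    ```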

    For a dynamic backend, maybe. Mainly because you might screw up, and your backend code needs to be isolated to mitigate security oopsies. Containers are often also useful for managing dependencies, but that facet is less useful for golang, where the resulting binary is pretty well self-contained except for maybe a little light usage of libc.