dejected_warp_core@lemmy.world to Programmer Humor@programming.dev • only thing that makes this dumpster fire usable · 2 · 8 days ago
I was thinking the same thing. I feel kind of bad now.
Also: this is what it would look like if Linus wrote a CP/M kernel instead.
dejected_warp_core@lemmy.world to Programmer Humor@programming.dev • the myth of the good tech giant · 6 · 25 days ago
There’s also diffusing responsibility across the organization. It’s easy to achieve unethical ends when each individual’s part of the job hardly seems “bad” at all.
dejected_warp_core@lemmy.world to Programmer Humor@programming.dev • Been there, done that, would not recommend · 8 · 1 month ago
Writing the tests first, or at least in tandem with your code, is the only way to fly. It’s like publishing a proof along with your code.
It sounds trite: make the tests fit the code. Yes, it’s a little more work to accomplish. The key here is that refactors of any scale become trivial to implement when you have unit-test coverage greater than 80%. That, in turn, lets you extend your code with ease, since extension usually requires some refactoring at some level.
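As a minimal sketch of what “tests in tandem with the code” can look like in Rust (the function and cases here are hypothetical, just to show the shape):

```rust
// A tiny, hypothetical function plus its tests, kept in the same file.
// `cargo test` runs the #[cfg(test)] module; the tests are the "proof"
// that survives any later refactor of `clamp_percent`.
pub fn clamp_percent(value: i64) -> i64 {
    value.max(0).min(100)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn passes_through_values_in_range() {
        assert_eq!(clamp_percent(42), 42);
    }

    #[test]
    fn clamps_out_of_range_values() {
        assert_eq!(clamp_percent(-5), 0);
        assert_eq!(clamp_percent(250), 100);
    }
}
```

Re-running those tests after every refactor is what makes that coverage number actually pay off.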
dejected_warp_core@lemmy.world to Programmer Humor@programming.dev • You typical Node project · 2 · 1 month ago
I agree; each one of those is rather substantial on its own. Plus, the churn of moving from framework to framework makes it less useful to compress and bundle all this stuff into fixed versions on a slower schedule (e.g. the way Ubuntu packages do). I think all of that contributes to bloat.
Kind of. They do center on code generation, at the end of the day. That’s where the similarities end. You can’t insert macros into your code arbitrarily, nor can you generate arbitrary text as output. Rust macros take parsed tokens as input and must generate (valid) code as output. They must also be used as annotations or similarly to function calls, depending on how they’re written. The limitations can be frustrating at times, but you also never have to deal with brain-breaking shenanigans either.
That said, I’ve seen some brilliant stuff. A useful pattern is to have a macro span a swath of code, where the macro adds capabilities on top of vanilla Rust. For example, here’s a parsing expression grammar (PEG) implemented that way: https://github.com/kevinmehall/rust-peg
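That crate is the full-blown version of the pattern. As a much smaller, hypothetical sketch of the same idea (constrained code generation from parsed tokens; not taken from rust-peg):

```rust
// Hypothetical declarative macro: it takes parsed tokens (a function name and
// some "key => value" pairs) and expands into ordinary, compiler-checked Rust.
macro_rules! lookup_table {
    ($name:ident, $( $key:expr => $val:expr ),* $(,)?) => {
        fn $name(key: &str) -> Option<i64> {
            $( if key == $key { return Some($val); } )*
            None
        }
    };
}

// The expansion is a normal function; if the macro emitted invalid code,
// the compiler would reject it right here at the call site.
lookup_table!(http_status, "ok" => 200, "not_found" => 404);

fn main() {
    assert_eq!(http_status("ok"), Some(200));
    assert_eq!(http_status("nope"), None);
}
```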
dejected_warp_core@lemmy.world to Programmer Humor@programming.dev • You typical Node project · 4 · 1 month ago
You say that, but I’ve watched the JS community move from one framework and tool suite to the next quite rapidly. By my recollection, I’ve seen a wholesale change in popular tooling at least four times in the last decade. Granted, that’s not every developer’s trajectory through all this, but (IMO) that’s still a lot.
dejected_warp_core@lemmy.world to Programmer Humor@programming.dev • You typical Node project · 14 · 1 month ago
I used to struggle with this, until I realized what’s really going on. To do conventional web development, you have to download a zillion node modules so you can:
- Build one or more “transpilers” (e.g. Typescript, Sass support, JSX)
- Build linters and other SAST/DAST tooling
- Build packaging tools, to bundle, tree-shake, and minify your code
- Use shims/glue to hold all that together
- Use libraries that support the end product (e.g. React)
- Furnish multiple versions of dependencies so that each tool has its own (stable) dependency graph
All this dwarfs any code you’re going to write by multiple orders of magnitude. I once had a node_modules tree that clocked in at over 1.5GB of source code. What I was writing would have fit on a floppy disk.
That said, it’s kind of insane. The problem is that there are no binary releases, nor fully vendored/bundled packages. The entire toolchain, except nodejs and npm, is downloaded as source, in its entirety, for every such project you run.
In contrast, if you made C++ or Rust developers rebuild their entire toolchain from source on every project, they’d riot. Or, they would re-invent binary releases that weekend.
Rust […] could use a higher level scripting language, or integrate an existing one, I guess.
One approach is to use more macros. These are still rooted in the core Rust language, so they give up none of the compile-time checks required for stability. The tradeoff is more complex debugging, since it’s tough to implement a macro without side effects and with the kind of compile-time feedback you’d expect from a DSL.
Another is to, as you suggest, embed something. For example, Rust has Lua bindings. One could also turn things inside out and refactor the Rust program (or large portions of it) into a Python module.
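As a rough sketch of the Lua route, assuming the `mlua` crate (the crate choice, script, and variable names here are illustrative, not something from the original thread):

```rust
// Minimal sketch of embedding Lua in a Rust program via the `mlua` crate.
use mlua::Lua;

fn main() -> mlua::Result<()> {
    let lua = Lua::new();

    // Expose a value from the Rust side to the embedded scripting layer.
    lua.globals().set("threshold", 42)?;

    // Run a snippet of "higher-level" logic as a script instead of Rust code.
    lua.load(r#"print("threshold is " .. threshold)"#).exec()?;

    // Pull a computed value back out of the script environment.
    let doubled: i64 = lua.load("threshold * 2").eval()?;
    println!("doubled on the Rust side: {doubled}");

    Ok(())
}
```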
dejected_warp_core@lemmy.world to Programmer Humor@programming.dev • Slapping on a `.expect` is also error handling! · 2 · 1 month ago
Exactly.
Personally, I call it “python mode”, since you’re staying on the “happy path” and letting the program just crash out if those expectations aren’t met.
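A quick sketch of the contrast in Rust (the file name and helper names are made up for illustration):

```rust
use std::fs;

// "Python mode": stay on the happy path and let the process crash out
// with a message if the expectation isn't met.
fn load_config_happy() -> String {
    fs::read_to_string("config.toml").expect("config.toml should exist and be readable")
}

// The alternative: surface the error and let the caller decide what to do.
fn load_config(path: &str) -> Result<String, std::io::Error> {
    fs::read_to_string(path)
}

fn main() {
    // Panics with the .expect() message if config.toml is missing.
    let config = load_config_happy();
    println!("{} bytes of config", config.len());

    // Explicit handling instead of crashing out.
    match load_config("config.toml") {
        Ok(c) => println!("loaded {} bytes", c.len()),
        Err(e) => eprintln!("could not load config: {e}"),
    }
}
```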
What are the maverick git workflows?
Okay, but be advised: this is how we start fights. Depending on where you’re coming from, everyone else is doing it wrong. Keep that in mind. That said, I want to have a discussion with you and others, if possible.
If we assume that a GitHub PR, or GitLab MR, workflow is “typical”, then the oddballs I know of are:
- Gerrit - It endorses a unit of review/work that is always exactly one commit. I have some strong opinions about why this is a thing, why it’s not great, why you shouldn’t do it if you’re not Google, and how Google got here, but that’s a whole other discussion. It’s a super-polarizing approach to Git in general.
- Gitflow - takes the usual branching strategy of MR/PR work and dials it up to 11. This too is polarizing, as the added complexity can be a bit much for some folks.
IMO, a lot of the trouble we run into with Git comes down to training problems. One also has to architect the Git space to fit the company, culture, and engineering needs at hand. This means planning out what repositories you need, how you’re going to solve CI/CD, what bar for code review is needed, how to achieve release stability, and how to keep the rate of change steady and predictable. To do any of that well, everyone needs to learn a bevy of git commands, and not enough companies bother to teach them.
dejected_warp_core@lemmy.worldto Programmer Humor@programming.dev•vibe coders discover "coding"3·1 month agoActually… yes. This was back when “source control” meant keeping the code on an extra floppy in a dusty desk drawer, and “security” meant throwing an extra lock on the office door, and “looking stuff up” meant trying to find useful books at the local bookstore. It was awful.
dejected_warp_core@lemmy.worldto Programmer Humor@programming.dev•vibe coders discover "coding"6·2 months agoWait until they figure out how senior engineers learned all this stuff.
Yuuup. Makes me wonder if there’s a viable “diaper pattern” for this kind of thing. I’m sure someone has solved that, just not with the usual old-school packaging tools (e.g. automake).
Ideally? Zero. I’m sure some teams require “warnings as errors” as a compiler setting for all work to pass muster.
In reality, there are going to be odd corner cases where some non-type-safe stuff is needed, which will make your compiler unhappy. I’ve seen this a bunch in 3rd-party library headers, sadly. So it ultimately doesn’t matter how good my code is.
There’s also a shedload of legacy stuff going on a lot of the time, like having to let all warnings through because of the handful of places that will never be warning-free. IMO it’s a far better practice to turn a warning off for a specific line. The sad thing is, that mechanism is newer than C++ itself and is implementation-dependent, so it probably doesn’t get used as much.
dejected_warp_core@lemmy.world to Programmer Humor@programming.dev • Object oriented programming in Python be like: · 1 · 2 months ago
that’s because anyone who develops oop in Python is mentally ill.
Hard disagree there. I would argue that most “multi-paradigm” languages converge on the same features, given enough time to iterate. It’s not necessarily about hot sauce. I honestly think it’s about utility and meeting your userbase where their heads are.
TIL there’s more than one kind of “vibe” coding.
dejected_warp_core@lemmy.world to Programmer Humor@programming.dev • Ah yes... NULL has been shipped · 32 · 2 months ago
SCP Agent: What’s in the box?
SCP Scientist: Nothing
SCP Agent: So, what’s the big deal? Just open it. Or toss it out. I almost tripped on it and spilled my coffee…
SCP Scientist: You don’t understand. Our measurements show that there’s nothing in the box. It contains, or rather doesn’t contain, a complete absence of space, time, and matter. It’s a hole in the universe that is inexplicably cordoned off from the air in this room, and everything else in it, by a flimsy cardboard shell.
SCP Agent: …
SCP Agent: Is that bad?
SCP Scientist: Very.
dejected_warp_core@lemmy.worldto Programmer Humor@programming.dev•there's no escape! brew another cup!4·2 months agoRe: the not-XML-instead-of-code thing. Eventually, this sort of thing turns into a programming language. It’s just like carcinisation. Or you wind up writing ever-more code to support the original design. The environment inevitably creates evolutionary pressure that only if/else and iteration logic can solve, forcing the design ever closer to being Turing-complete.
It’s not that they don’t understand it. It’s that they literally can’t afford to adopt it.
Corporate ownership, combined with being publicly traded or privately investor funded, means that you have to increase shareholder value. Stock dividends aren’t enough. So, they use the only play that they know: scale the company up.
Problem is: you can scale an art business, but scaling a single piece of software is very hard. Book publishers and record labels figured this out ages ago: keep adding more artists and more products. Meanwhile, AAA game studios keep stacking bodies onto existing IPs, making fewer but bigger software products instead, and they keep getting bodied by small upstarts like Team Cherry, who have a better effort:payout ratio. If everyone ran their game companies like Penguin Random House instead of Microsoft, they’d be in better shape.