How do you monitor all code duplications in the code base, including ones that have been modified slightly (such as optimizations, name changes, or additional statements in between)?
> Coupling superficially similar code is definitely not a good thing to do.

I've taken to calling that activity (removing syntactic redundancy that is only coincidental) "Huffman coding".
> misunderstanding of what it means

And in response, people will complain that they're being dismissed with "you're doing it wrong!", because that happens with everything in programmer-land.
I was reading the 1978 Elements of Programming Style a while ago. It's mostly Fortran and PL/I. Some of it is outdated, but a lot applies today as well. See e.g. https://en.wikipedia.org/wiki/The_Elements_of_Programming_St...

They actually have a Fortran example of "optimized" code that's quite difficult to follow, but allegedly faster according to the comments. They rewrote it to be more readable and ... it turns out that's actually faster! So this already applied even on 1970s hardware.

It also reminds me of this quote about the early development of Unix and C:

> Dennis Ritchie encouraged modularity by telling all and sundry that function calls were really, really cheap in C. Everybody started writing small functions and modularizing. Years later we found out that function calls were still expensive on the PDP-11, and VAX code was often spending 50% of its time in the CALLS instruction. Dennis had lied to us! But it was too late; we were all hooked...

And Knuth's "premature optimisation is the root of all evil" quote is also decades old by now. It's kind of interesting that we've been fighting this battle for over 50 years now :-/

(It should go without saying that there are exceptions, and cases where you do need to optimize the shit out of things, after having proven that performance may be an issue. Also, at scale "5% faster" can mean "need 5% fewer servers", which can translate to millions of dollars saved per year; "programmers are more expensive than computers" is another maxim that doesn't always hold true.)
I was just mulling this over today. DRY = easier-to-decode is probably true if you're working on grokking the system at large. If you just want to peek at something specific quickly, DRY code can be painful.

I wanted to see what compile flags Guix used when compiling Emacs. `guix edit emacs-next` brings up a file with nested definitions on top of the base package. I had to trust my working memory to un-nest the definitions and track which compile flags were being added or removed: https://git.savannah.gnu.org/cgit/guix.git/tree/gnu/packages... It would be more error prone to have each package repeat the base information, but I would have decoded what I was after a lot faster.

Separately, there was a bug in some software aggregating CIFTI file values into tab-separated values. But because any cifti->tsv conversion was generalized, it was too opaque for me to identify and patch myself as a drive-by contributor: https://github.com/PennLINC/xcp_d/issues/1170 to https://github.com/PennLINC/xcp_d/pull/1175/files#diff-76920...
Am I crazy for almost exclusively using plain types and sum types, with no generics or interfaces, and somehow being able to express everything I need to express?

Kind of wondering what I'm missing now.
Hmm, you can do pretty nice things with generics to make some things impossible (or at least fail at compile time), but I agree it's hardly readable. In some cases you need that, though.
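A hedged illustration of that "fail at compile time" idea in Python (the `Validated` wrapper and function names are invented for this sketch): a generic wrapper type whose only sanctioned constructor is the validating function, so a type checker such as mypy rejects unvalidated input before the code runs.

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")

@dataclass(frozen=True)
class Validated(Generic[T]):
    """Marks a value as having passed validation."""
    value: T

def validate_email(raw: str) -> Validated[str]:
    if "@" not in raw:
        raise ValueError(f"not an email address: {raw!r}")
    return Validated(raw)

def send_welcome_mail(to: Validated[str]) -> str:
    # A bare str here is flagged by the type checker, so
    # "forgot to validate" states become unrepresentable.
    return f"mail sent to {to.value}"
```

Calling `send_welcome_mail("oops")` still runs under plain CPython, but a type checker reports it; the guarantee lives in the checker, which is also why such code can read as noisy.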
The question then becomes: when do you break out code, and when not? Unfortunately (or fortunately, if the art and craft of programming fascinates you), the answer is not easy.

It seems to have to do with avoiding over-fitting or under-fitting the domain and purpose: getting the best fit in a Bayesian Occam's Razor sense. Minimize unnecessary code, but also do "Dependency Length Minimization" on the parse tree of your program, so that it's maximally understandable and the abstractions increase the potential of your program to correctly interpolate into unknown future use cases.

I reflect on some of these points here: https://benoitessiambre.com/abstract.html. It's about entropy minimization and calibration of uncertainty. It's about evolving your code so that it tends toward an optimal causal graph of your domain, so that your abstractions can more easily answer "what if" questions correctly. These things are all related.
https://grugbrain.dev/#grug-on-dry

> grug begin feel repeat/copy paste code with small variation is better than many callback/closures passed arguments or elaborate object model: too hard complex for too little benefit at times
>
> hard balance here, repeat code always still make grug stare and say "mmm" often, but experience show repeat code sometimes often better than complex DRY solution

Something I have learned the hard way is that DRYing out too fast paints you into architectural corners you don't even know are there yet.
Which tool do you use to manage the copies of code segments, including possible modifications?

I imagine it's difficult to keep all of them in your head.
I've never heard this one before, but I love it. Unfortunately we've also got "Don't Abstract Methods Prematurely" and "Descriptive And Meaningful Phrases".
I think DRY should be more "Don't Repeat Assumptions".

Or rather: don't assume the same thing in two different places, especially not implicitly. Avoiding code duplication mostly follows from that.
Two identical pieces of code can have different specifications:

```python
# x is the age of a person; this code checks whether the person
# is past the retirement age in the US
def is_of_retirement_age(x):
    return x >= 65
```

```python
# x is an ASCII character which is already known to be alphanumeric;
# this code checks whether it's a letter
def is_ascii_letter(x):
    return x >= 65
```
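One way to stop repeating the assumption implicitly (a sketch; the constant names are invented here) is to name the two different 65s, so each function states its own specification and the coincidence is visibly just that:

```python
US_RETIREMENT_AGE = 65    # domain assumption: current US retirement age
ASCII_UPPERCASE_A = 65    # encoding fact: ord("A") == 65

def is_of_retirement_age(age: int) -> bool:
    return age >= US_RETIREMENT_AGE

def is_ascii_letter(code: int) -> bool:
    # `code` is already known to be alphanumeric, so anything at or
    # above ord("A") must be a letter rather than a digit.
    return code >= ASCII_UPPERCASE_A
```

If the US retirement age ever changes, only one constant moves, and `is_ascii_letter` is untouched; merging the two functions would have coupled those two unrelated facts.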
[1] https://www.pathsensitive.com/2018/01/the-design-of-software...

[2] https://www.mirdin.com