#object-oriented programming makes sense when you're dealing with OBJECTS
This stupid stupid programming language is trying my patience lol... everything about it seems designed to be as incomprehensible and inconvenient as possible!!!
#it's. object-oriented I THINK but NOT in a helpful way oh my god#object-oriented programming makes sense when you're dealing with OBJECTS#okay wait no it's not object-oriented it just treats certain types of functions as objects#but ALSO you have to reference those function blocks each cycle or they'll stop operating#what the fuck
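I don't know which language this post is venting about, but the behavior described in the tags (a function block that "stops operating" unless it's referenced every cycle) sounds like IEC 61131-3-style PLC function blocks, which only update on the scans that actually call them. Here is a minimal Python sketch of that execution model, assuming that's what's going on; all the names below are mine and purely illustrative.

```python
# Rough sketch (not any vendor's runtime): a "function block" keeps its state
# between scans, but its outputs only advance when it is actually called.

class OnDelayTimer:
    """Toy stand-in for a TON-style timer block (illustrative only)."""
    def __init__(self, preset_scans: int):
        self.preset_scans = preset_scans
        self.elapsed_scans = 0
        self.done = False

    def __call__(self, enable: bool) -> None:
        # If this is never invoked during a scan, elapsed_scans and done freeze:
        # the block effectively "stops operating" even though it still exists.
        if enable:
            self.elapsed_scans += 1
            self.done = self.elapsed_scans >= self.preset_scans
        else:
            self.elapsed_scans = 0
            self.done = False

timer = OnDelayTimer(preset_scans=40)
for scan in range(100):       # the PLC-style scan loop
    timer(enable=True)        # omit this call and the timer never finishes
    if timer.done:
        print(f"timer done on scan {scan}")
        break
```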
Software As Narrative 11/n
## What's wrong with The Rule Of Three?

### Or: structuralism and software are a bad fit.

This article is part 11 of an ongoing series on **Tentacular Devops** in the *Post-Information Age (or Cthulhucene Epoch).* If you would like, you may get caught up by [reading from the beginning.](http://infiniteundo.com/post/158652390003/software-as-narrative-collected-1)
tl;dr: Just tell me why the Rule Of Three is bad already!

At work we recently (it's 2017 as I write this) held a book club where we read The Pragmatic Programmer. Wow, what a mistake.
It is long past time to retire that out-of-date manual. Software has changed in the last 18 years and so has our understanding of how software works as a medium.
### The Pragmatic Programmer is a relic of structuralism
In the late '90s and 2000s it was popular to use Architecture (the design and construction of houses, skyscrapers, airports, etc.) as a functional allegory for how to build software.
In the intervening time, post-modernism has found its place in the armory of legitimate paradigms that may be applied to an enterprise project, and with it has come the opportunity to take a post-structural approach to software.
When The Pragmatic Programmer was written, the most important software in the Enterprise still ran on the desktop. It might make a couple of socket connections, maybe consume some data through a Web form, but that was it. Software was a thing that lived on someone's desktop or laptop machine. If it worked, it worked for one user.
Likewise, in those days, if it broke, it broke for one user. Dispatch one tech support operator to the one user, fix the one problematic instance of the software, and business could tick along normally again.
This time in computing is gone, never to return.
### Devops is fundamentally post-structuralist
If you want to do continuous delivery and devops, The Pragmatic Programmer won't help you at all. There's nothing in there about collaboration, scaling Web sites, or how to roll out an experimental feature. Instead it's a book that assumes a software system is like a building: something to be designed on paper and then built one convenient "brick" at a time --- although through the "magic" of object-oriented programming you actually get to design your own bricks.
It would be better, given the option, to design the city that the buildings are part of and not worry about the bricks. But that's for a later rant on functional programming :)
The observed reality 18 years after The Pragmatic Programmer is that, as software engineers and systems operations people, we have taken on the problem of building and managing something that is thousands of orders of magnitude more complex than an airport or a mega-city.
Dijkstra was already pointing this out in the 1980s, but not many people listened, probably because it isn't fun to think about how your species has created a problem that it is not capable of solving without some serious brain evolution going on (spoiler: our brains have not appreciably evolved since we came down from the trees; what we got is what we got in that department).
### Software is not like buildings, not like cars, not like college campuses
The assumption at the time The Pragmatic Programmer was written (Dijkstra notwithstanding) was that software is intractably complex only by accident: if we are careful and apply attention to qualities like craftsmanship in our work, then we can use Object-Oriented principles to control and constrain the complexity of our systems, resulting in software that is relatively cheap to maintain and new systems that are relatively cheap to build.
THIS TURNED OUT TO BE UTTER HORSESHIT.
It's not anyone's fault. Hindsight is 20/20 and all that. But today the idea that there is some kind of functional upper bound on system complexity is laughable (and if someone doesn't laugh when you point this out, you can be sure they're trying to sell you something).
It turns out that what we know historically about "being careful" and "craftsmanship" doesn't mean fuck all when you are dealing with a medium that can potentially be "made of" more "discrete parts" in 30 minutes of interaction than there have been seconds in the age of the universe.
If the statement that a period of time can be made out of discrete parts seems like a non sequitur, then you are beginning to see just how much software is not like any medium that has gone before, including the building trade.
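To put a rough number behind that claim (the arithmetic and the interpretation of "discrete parts" as distinct reachable states are mine, not the original post's): even a trivially small amount of mutable state already admits more configurations than there have been seconds since the Big Bang.

```python
# Back-of-the-envelope: distinct states of a tiny program vs. the age of the universe.
AGE_OF_UNIVERSE_S = 13.8e9 * 365.25 * 24 * 3600   # ~4.35e17 seconds

# Suppose a "simple" interactive session touches just 60 bits of mutable state
# (a handful of integers and flags). Each bit doubles the state space.
states = 2 ** 60                                   # ~1.15e18 distinct configurations

print(f"seconds since the Big Bang: ~{AGE_OF_UNIVERSE_S:.2e}")
print(f"states of 60 bits:          ~{states:.2e}")
print(f"ratio:                      ~{states / AGE_OF_UNIVERSE_S:.1f}x")
```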
The language barely contains words to describe the level of weirdness that exists inside even a "simple" program (although as I have just said: there is no such thing as a simple computer program and the phrase is an oxymoron). This is part of why the internet has resulted in the invention of so many words. There are things on the internet --- a lot of things --- that simply have no precedent in human history prior to the late 1980s when the network began to come online.
### Applying structural strategies to post-structural entities makes zero sense
Again, it's not anyone's fault that books like The Pragmatic Programmer exist. But it is problematic to keep using such perspectives today. All of the "canonical" books from this period are suspect, including Design Patterns and even Refactoring.
Software isn't what we thought it was 20 years ago. Software is something way weirder and way less tractable than we thought. There is no upper bound on complexity in software. None.
### That Rule Of Three
So let's get back to this so-called "rule of three." A structural canard in a post-structural milieu.
Here is the current Wikipedia definition of the Rule of Three:
> Rule of three is a code refactoring rule of thumb to decide when a replicated piece of code should be replaced by a new procedure. It states that the code can be copied once, but that when the same code is used three times, it should be extracted into a new procedure.
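For anyone who hasn't watched the rule applied in practice, here is roughly what it prescribes, using a toy example of my own (all names are hypothetical):

```python
# Before: the same expression has now been copied a third time, so the Rule of
# Three says it is time to extract it into a new procedure.
def invoice_total(lines):
    return sum(qty * price for qty, price in lines)   # occurrence 1

def quote_total(lines):
    return sum(qty * price for qty, price in lines)   # occurrence 2

def credit_note_total(lines):
    return sum(qty * price for qty, price in lines)   # occurrence 3: extract

# After: the copied code becomes a shared procedure that all three callers use.
def line_total(lines):
    return sum(qty * price for qty, price in lines)

def invoice_total_refactored(lines):
    return line_total(lines)
```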
There are two primary objections I wish to raise:
1. Three repetitions is a completely arbitrary threshold.
2. The meaning of the word "repetition" is unclear, and there is no way to make it clearer in this context, precisely because of the post-structural nature of code.
### Three repetitions is an arbitrary numerical target
Why is three the breakpoint for when code should be "replaced by a new procedure"? Why not two repetitions? Why not four? Why not ten, for that matter? There is no science at all to the choice of three as the threshold here. "Three repetitions bad" is pure folk knowledge, passed from one person to another over the years and accepted as common sense without a shred of supporting evidence.
There are enough counterintuitive and emergent "rules" in software development already. We do not need to introduce entirely made-up ones like "refactor after three repetitions."
"Copied piece of code" is a phrase so vague as to be meaningless in practice.
Of course, we can't consistently apply a rule that hinges on "repeated code," because that is a concept with no clear definition.
By "repeated code" do we mean two subroutines that are identical in every respect? Do we mean two lines that are identical? The implications of one reading or the other are completely divergent, and it isn't even clear what "repeated" means: is it a synonym for "identical in every respect," or is there some fuzzier threshold that was meant to be applied?
What is the intent of the rule in such cases? The truth is that, judging from all extant writing about The Rule Of Three, there is no authoritative way to decide what "repeated code" means, and we are left to guess. Consider the two readings sketched below.
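Here are the two readings side by side (toy functions of my own invention). Under the strict reading, only the first pair counts as "repeated code"; under a looser reading, the shared line in the second pair counts too, and the rule fires in vastly more places.

```python
# Reading 1: two subroutines identical in every respect -- unambiguously "repeated".
def normalize_username(name):
    return name.strip().lower()

def normalize_login(name):
    return name.strip().lower()

# Reading 2: two otherwise-different functions that merely share one identical line.
# Is this "repeated code"? The rule doesn't say.
def greet(name):
    cleaned = name.strip().lower()      # same line as below
    return f"hello, {cleaned}"

def audit(name):
    cleaned = name.strip().lower()      # same line as above
    return {"user": cleaned, "event": "login"}
```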
For instance, if I have an addOne() method and an addTen() method, isn't that a sort of duplication? Should I then, instead of creating addTwenty() in my codebase, "reduce duplication" by replacing the two existing call sites and the still-hypothetical third call site with a "generalized" add() method?
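Sketched out in code (hypothetical, mine), the toy scenario looks like this. Whether the first two methods are "duplicates" at all, and whether the "generalized" version is an improvement, is exactly the kind of judgment call the rule pretends to settle.

```python
# The two existing methods: structurally similar, differing only in a constant.
def addOne(x):
    return x + 1

def addTen(x):
    return x + 10

# The "generalized" replacement the rule seems to push toward, once a hypothetical
# addTwenty() would make a third repetition:
def add(x, amount):
    return x + amount

# Every existing call site now has to change, e.g. addOne(total) -> add(total, 1),
# and the question of whether this was ever "duplication" remains unanswered.
```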
The number of semantic uncertainties in even the toy scenario above is staggering. In my 20 years as a consultant I have seen real-world attempts to perform "software maintenance" this way slide into bikeshedding. In the worst cases I have observed errant or misaligned "refactoring projects" that were so ill-defined (thanks to the insufficient definition of "repeated code" discussed above) that the software product began to exhibit production service degradations as a result.
There is enough guesswork in software. We do not need to introduce more.
#devops#pragmatic programmer#dave thomas#programming#hacking#design#code#information architecture#information design#refactoring#duplicated code#code duplication#static analysis