A Unified Theory of Software Design, Architecture and Everything

It’s probably an obvious symptom of my being on a downward spiral from software activist to theoretician, but I’ll do it anyway. I’m going to present you with a naive unified theory of software design, which I will call DRE: Dependencies Rule Everything. I consider it a real pity that until today the most common use of this abbreviation has been Digital Rectal Examination. This has to change.

I have been wondering for a while now why software design principles, heuristics and trade-offs are being described in so many different terms: coupling, cohesion, SOLID, LSP, DRY, LoD, you name it. It has always seemed obvious to me that there is only one concept that covers all the others: dependencies.

Here is my definition:

Party A depends on party B means: If party B changes one or more of its observable aspects, there is some probability (greater than zero) that party A will have to adapt.

Let’s examine the building blocks of this definition and its implications more closely:

  • I use the word “party” instead of some more code-centric expression like “module” or “component”. Rationale: there are dependencies between pieces of my software (functions, classes, modules, subsystems, components etc.) but there are also dependencies on the outside world – the domain. For instance, within DRE I would consider the method that implements the logic for withdrawing money from an account to be dependent on the domain rule for how money should be withdrawn. Thus, DRE dependencies are a superset of static code dependencies.
  • Change, the probability of the change and the probability that the change affects the dependent together define the strength of a dependency. If the dependee can change (e.g. disappear) without forcing the dependent to adapt, there is no dependency worth mentioning. In that sense, loose coupling aims to lower the probability of a change affecting the dependent, whereas static typing usually raises that probability, e.g. by enforcing the number of arguments to a function call.
  • I am only interested in the observable aspects of a party: its interface, timing behaviour, error conditions, availability, cost. To put it differently: I don’t care about unobservable implementation details or anything that can change without the dependent noticing.
  • A dependency between individual parts results in a dependency between the respective aggregates. The more and the stronger the dependencies between the parts, the stronger the dependency between the aggregates.
  • Dependencies are directed, and that’s a good thing. On an atomic level there must not be bidirectional dependencies, since otherwise a change would result in an unstable state. On an aggregate level, bidirectional dependencies arise much too easily if you are not careful.
  • In most cases the dependency relation is not transitive. In some programming languages it seems as if it were, but that’s usually because public visibility of inner parts is the default and the Law of Demeter gets violated. One notable exception to the rule is facilitated by C/C++ compilers, which enforce recompilation of statically dependent components and thus propagate change all the way up.
  • There exist at least three different major species of dependencies: compile-time dependencies, runtime dependencies and domain-rule dependencies. Statically typed and dynamically typed languages make different trade-offs between compile-time and runtime dependencies; type information is one kind of explicit dependency, which is obviously preferable over implicit – i.e. potentially unknown – dependencies. Speaking of explicitness: unit tests and acceptance tests are a different way of making dependencies explicit. Many of the techniques that promise you a loosely coupled design do nothing more than go from an explicit dependency (a statically enforced method call) to an implicit one by putting some obfuscating mechanism (e.g. XML serialization) in between, as sketched below. This does not help you at all; it just makes the system slower and more complex by introducing additional dependencies on libraries and structured documents. Unless, of course, you really, really, really need it for cross-platform, cross-version or cross-process communication.
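
To make that last point concrete, here is a minimal sketch in Java (all names are hypothetical, and a string map stands in for the XML document). The “decoupled” variant still depends on exactly the same observable behaviour of the tax service; the dependency has merely been hidden from the compiler:

import java.math.BigDecimal;
import java.util.HashMap;
import java.util.Map;

// Hypothetical tax service used by both variants.
class TaxService {
  BigDecimal salesTaxFor(BigDecimal amount) {
    return amount.multiply(new BigDecimal("0.19"));
  }

  // "Loosely coupled" entry point: everything is stringly typed.
  Map<String, String> handle(Map<String, String> request) {
    BigDecimal amount = new BigDecimal(request.get("amount"));
    Map<String, String> response = new HashMap<>();
    response.put("salesTax", salesTaxFor(amount).toPlainString());
    return response;
  }
}

class Caller {
  private final TaxService taxService = new TaxService();

  // Explicit dependency: a renamed method or a changed parameter breaks this at compile time.
  BigDecimal explicitSalesTax(BigDecimal amount) {
    return taxService.salesTaxFor(amount);
  }

  // Implicit dependency: the very same knowledge ("amount", "salesTax", the rate's meaning)
  // is still required, but a mismatch now only shows up at runtime.
  BigDecimal implicitSalesTax(BigDecimal amount) {
    Map<String, String> request = new HashMap<>();
    request.put("amount", amount.toPlainString());
    return new BigDecimal(taxService.handle(request).get("salesTax"));
  }
}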

What is Good Design then?

A good design – or an architecture, for that matter – in DRE is one in which the overall number and strength of dependencies is minimized. Since strength is defined as the probability that a change will hit you, there is quite some fortune telling involved in finding “the best” of all designs. In other words: if you cannot agree on a probable future, you cannot agree on a good design. That’s why Agile designers use a simple heuristic to foretell the future: everything will change but we don’t know how. In DRE terms that means: optimizing dependencies within the system and assuming that the world outside (the domain and its rules) has zero probability of changing. In practice this leads to lots of design changes in the early stages of a project, until the typical domain changes have been incorporated into internal design elements and will only affect the outermost “parties” (e.g. the adapter class, the configuration file, the business rules database).

What follows from that is: If you really know the future, the agile approach of evolutionary design is not the best. But be honest, who the hell does?
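
To make “minimize the overall number and strength of dependencies” a bit more tangible, here is a deliberately naive scoring sketch in Java (the names are made up, and – as the update below points out – simply summing probabilities is not meant as a mathematically sound model):

import java.util.List;

// A dependency reduced to its strength: the probability that a change in the
// dependee forces the dependent to adapt.
record Dependency(String dependent, String dependee, double strength) {}

class DreScore {
  // A design's "badness" is naively taken as the plain sum of all strengths;
  // a better design is simply one with a lower total.
  static double total(List<Dependency> design) {
    return design.stream().mapToDouble(Dependency::strength).sum();
  }
}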

Reformulating OO Design Principles

For most heuristics of good OO design it’s quite easy to see why they work at least as well in the DRE universe. I leave it to the astute reader to translate the SOLID principles into DRE speak. One central rule is not so easy to reconcile with DRE, namely Don’t Repeat Yourself. I’ll try it anyway:

Consider a simple case of duplication:

def calculationOne() {
  ...
  def salesTax = amount * 0.19
  ...
}
def calculationTwo() {
  ...
  def salesTax = amount * 0.19
  ...
}

Given a probability of p that the sales tax rate will change during the system’s life cycle, you have two dependencies with a strength of p, so your total dependency number is 2p. Let’s now apply DRY in a straightforward manner:

def calculationOne() {
  ...
  def salesTax = calculateSalesTax(amount)
  ...
}
def calculationTwo() {
  ...
  def salesTax = calculateSalesTax(amount)
  ...
}
def calculateSalesTax(amount) {
  return amount * 0.19
}

Given a probability of r that we will have to change the interface of calculateSalesTax later on, the overall dependency number is now 2r + p; if we have chosen our abstraction wisely, this figure will be lower than 2p. Thus, the code with less duplication is better design in DRE theory. Quod erat demonstrandum.
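
With purely illustrative numbers: for p = 0.5 and r = 0.1, the duplicated version scores 2p = 1.0 while the DRY version scores 2r + p = 0.7. In general the refactored version wins whenever r < p/2, i.e. whenever the interface of the extracted abstraction is considerably more stable than the rule it encapsulates.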

So what?

There are two striking reasons why the unified theory appears attractive to me:

  1. It gives me the vocabulary to talk about the relation between different design approaches and thereby tackle questions like “What has loose coupling to do with the SOLID principles?” and “Does this decoupling technique really decouple anything?”.
  2. It might provide me with a quantifiable means to compare different designs. Any tool vendors, contact me for licensing the approach!

Now it’s up to you all to tell me why this is utter nonsense or perfectly useless or both. And I do appreciate congratulations on my future Nobel Prize in computer science. Shoot!

Update:

To make that clear: it was not my intention to suggest that simply adding up probabilities would make for a mathematically sound model for a “design quality number”; it was just the simplest thing to do and it felt intuitive enough.


8 Responses to “A Unified Theory of Software Design, Architecture and Everything”

  1. marko Says:

    You are saying that “explicit dependency is obviously preferable over implicit” and so techniques that hide dependencies may be problematic. I agree with you on this one.

    But there is a problem: many “dependency hiding techniques” I have seen also lower the probability that party A has to adapt – albeit only to a minor degree. So numerically, the hiding may seem like a good idea. These hidings are so popular especially because they show up so disproportionately favourably in metrics.

    So I actually often prefer a few strong explicit dependencies over a lot of weaker implicit dependencies. But while you mention explicit vs. implicit, it is not considered in your definition of good design.

    But besides that: A very good article. Thought provoking.

  2. softestpawn Says:

    Good post. I think Johannes is talking about the trade-off between runtime flexibility and compile-time explicit trust. As you say, I think the runtime flexibility is often overrated.

  3. Christian Schuhegger Says:

    Actually I have a similar problem with the many ways in which good design principles are described, and I thought for a long time that it should be possible to bring it down to only one sentence. I for myself came to the conclusion that what is good enough for relational database design is also good enough for OO design:
    “Avoid update anomalies”
    Do not only avoid update anomalies when you have to change your code, but design your code with several axes of potential change in mind and get it into a form where change along those axes will require changing the code in only one place. I know that this may sound a bit like BDUF (big design up-front), but it never hurts to put a bit of thought into a piece of code up-front. Alternatively you do it in the TDD cycle: once you come across an axis of change that leads to update anomalies, you refactor the code.

    Another piece of insight that I came across for myself is that many problems that you have in OO languages with avoiding update anomalies do not exist in functional languages. I explain that to myself by the observation that the unit of work in OO, the class/object, is much bigger than the unit of work in functional languages, the function. This simple difference makes it possible to avoid whole classes of problems that you have to deal with in OO languages.

    – even before I knew the term TDD I used that style of programming automatically in Lisp. It was just natural to do that. Functions are context-free and easy to test. It was never natural in OO languages and I needed quite a bit of practice to be able to do it in OO languages.
    – why do functional languages not require a dependency injection mechanism?
    – “refactoring” in functional programs is much simpler than in OO languages, because you never need to break apart objects. Functions are much smaller units, and if they follow the single responsibility principle the need to refactor a function hardly ever arises.
    – a whole class of GoF design patterns is unnecessary in functional languages. There is a recent video on InfoQ about that topic (although not too much in detail):
    http://www.infoq.com/presentations/Functional-Design-Patterns

    The point I want to make here is not that functional languages are better than OO languages. The point I want to make is that the principle of avoiding update anomalies should be applied at a higher level than “design” within a language. If the language can help to avoid update anomalies, then it is not enough to think only about “good design principles” within the language; you need to think about what languages should look like to avoid update anomalies, too. To that extent I came to the conclusion that ANSI Common Lisp (the programmable programming language) is the best language to follow the “avoid update anomalies” principle. Lisp follows the strategy that if a mechanism is good enough for the language designers then it’s also good enough for the users of that language.

    There is one common point of conflict in that line of thought that I cannot resolve for myself up to now. Following the “avoid update anomalies” principle often leads to very high abstractions (little redundancy, little noise) that for many people are “unreadable”. I agree very much that readability is one of the highest values in code and am not sure how to overcome that felt “weakness” (what’s unreadable for one person is not necessarily unreadable for another; the kind of unreadability I am talking about here is similar to the kind of unreadability a well written math or physics book has for most people who are not trained physicists). One option that “feels good” is the literate programming style in Haskell, for example. Even if the code is extremely dense and highly abstract, the literate programming style intentionally adds a bit more verbosity. But then it is an active act of adding the verbosity!

    One citation I find very good in that context is from the wizard book:
    “Thus, programs must be written for people to read, and only incidentally for machines to execute”
    http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-7.html#%_chap_Temp_4

    jm2c,
    Christian

  4. Christian Schuhegger Says:

    “Update anomalies” mainly addresses the issue of having to touch several pieces of code when you have to fix a bug or implement a new feature. This often manifests itself in projects as written procedures for how to “add a new page” to a web application. You have to touch different source files in order to get the job done.

    Simple examples of “update anomalies” are:
    – copy and paste
    Whenever you find a bug in one place you have to fix it in all other places. Can be resolved by refactoring.

    – dispatch on type
    In the “old days” in the world of procedural languages you often came across switch/case blocks that implemented dispatch on type, e.g.:
    switch (shapeType) {
      case circle:
        calculateSurfaceAreaOfCircle();
        break;
      case rectangle:
        calculateSurfaceAreaOfRectangle();
        break;
      ...
    }
    Whenever a new shape came along (e.g. triangle) you had to update this central switch/case statement plus modify the file that contained the code for that shape.
    That problem was resolved by object orientation and type polymorphism. Nowadays you would create an interface Shape that has a calculateSurfaceArea() method and classes like Circle or Rectangle would implement that interface. Whenever a new shape comes along only one new file needs to be created without interfering with any of the other preexisting code.
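
    A minimal sketch of that polymorphic version (the geometry details are made up):

    interface Shape {
      double calculateSurfaceArea();
    }

    class Circle implements Shape {
      private final double radius;
      Circle(double radius) { this.radius = radius; }
      public double calculateSurfaceArea() { return Math.PI * radius * radius; }
    }

    class Rectangle implements Shape {
      private final double width, height;
      Rectangle(double width, double height) { this.width = width; this.height = height; }
      public double calculateSurfaceArea() { return width * height; }
    }

    // Adding a Triangle later means adding one new class; no central switch has to change.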

    – aspect oriented programming
    Imagine one day you want to add logging for every method in your program that tells you the time that a method needs to execute. Before aspect oriented programming you would have to touch *ALL* your files and all your methods to implement that, even if conceptually it is so simple. With the “invention” of aspect oriented programming that type of update anomaly is gone.
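
    A minimal sketch of such a timing aspect, assuming AspectJ (the pointcut’s package name is made up):

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    @Aspect
    public class TimingAspect {

      // Wraps every method underneath the (hypothetical) com.example package
      // and logs how long it took – without touching any of those methods.
      @Around("execution(* com.example..*.*(..))")
      public Object time(ProceedingJoinPoint joinPoint) throws Throwable {
        long start = System.nanoTime();
        try {
          return joinPoint.proceed();
        } finally {
          long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
          System.out.println(joinPoint.getSignature() + " took " + elapsedMillis + " ms");
        }
      }
    }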

    – visitor pattern
    My most hated design pattern is the visitor pattern! It is so extremely ugly!! Whenever you add a new node type you have to touch *ALL* implementations of the visitors to add a new visitNewNodeType() method! That’s one of the worst update anomalies ever. Have a look at how cleanly Haskell Functor type classes and fmap handle such situations!
    In fact the problem with the visitor lies at a deeper level: you need a “double dispatch on type”, and current standard OO languages like Java only implement single dispatch, on the “this” argument. This brings me to the next type of update anomaly:

    – double dispatch
    Imagine you have to implement a collision detection mechanism in a 2D game. Imagine again that you have Shape as the base interface, with implementations Circle and Rectangle. Even if you had “double dispatch” or “multimethods” available in Java, where would you put the method that checks a collision between a Circle and a Rectangle? In the Circle class or in the Rectangle class or in both? In Java any solution to that problem becomes very ugly. In Lisp, for example, with the Common Lisp Object System (CLOS), you can implement that cleanly, and whenever new shapes are added to the system there is no update anomaly. You only add code for any aspect of the program once, without touching other pieces of your implementation.
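
    For illustration, this is roughly what you end up with in plain Java (Shape, Circle and Rectangle are re-declared here as empty stand-ins, the real geometry is omitted): single dispatch gets you into the detector, the second dispatch has to be faked with instanceof checks, and every new Shape forces you back into this one method.

    interface Shape {}
    class Circle implements Shape {}
    class Rectangle implements Shape {}

    class CollisionDetector {
      static boolean collide(Shape a, Shape b) {
        if (a instanceof Circle && b instanceof Circle) return circleCircle((Circle) a, (Circle) b);
        if (a instanceof Circle && b instanceof Rectangle) return circleRectangle((Circle) a, (Rectangle) b);
        if (a instanceof Rectangle && b instanceof Circle) return circleRectangle((Circle) b, (Rectangle) a);
        if (a instanceof Rectangle && b instanceof Rectangle) return rectangleRectangle((Rectangle) a, (Rectangle) b);
        throw new IllegalArgumentException("unknown shape combination");
      }

      // Real collision geometry omitted.
      static boolean circleCircle(Circle a, Circle b) { return false; }
      static boolean circleRectangle(Circle c, Rectangle r) { return false; }
      static boolean rectangleRectangle(Rectangle a, Rectangle b) { return false; }
    }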

    – validation jsr 303
    Another type of update anomaly in three tier systems is the validation logic. I’ve expressed some time ago some of my thoughts about that topic here:
    https://forum.hibernate.org/viewtopic.php?t=991847
    The problem is that you need to implement validation logic in the user interface, in the back-end services and in the database. But that’s not everything! The user interface needs to know things like enumerations of valid values before any validation happens, in order to offer only allowed values in, for example, “select boxes”. It would not make sense to present a free-form field, let the user type his input, and only tell him via a validation error that only the values “circle” and “rectangle” were allowed.
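
    A minimal JSR 303 sketch (the field names are made up); the point is that the constraints are declared in exactly one place on the model, and a javax.validation Validator – and, in principle, the UI and the persistence layer – can read the same metadata:

    import javax.validation.constraints.Max;
    import javax.validation.constraints.Min;
    import javax.validation.constraints.NotNull;
    import javax.validation.constraints.Pattern;

    public class ShapeRequest {

      @NotNull
      @Pattern(regexp = "circle|rectangle") // a UI could render this as a select box instead of a free-form field
      private String shapeType;

      @Min(1)
      @Max(10000)
      private int sizeInPixels;
    }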

    A very related topic to update anomalies is the topic of grouping. There are typically two styles of grouping, the “Unix file system style” and the “Mac application folder style”. In a Unix file system all libraries are under “lib”, all man pages are under “man”, all config files are under “etc” … On a Mac you have the application folders, where the libraries, the config files and the help files are all in one place. This maps onto the aspect grouping style (similar to Unix file system grouping) and the object/class grouping style (similar to Mac application folder grouping). My argument is that both styles of grouping are “wrong”. In fact the best thing would be to “tag” methods along different axes, and whenever you need a certain “view” on the sources, like the object view, your IDE would show it to you. IBM experimented in that direction in the 90s with their VisualAge for C++ product, which did not use a file system to store source code but a database that could give you views on your sources as you defined them.

    A certain class of update anomalies only arises because of the limited possibilities of our programming languages to group source code in certain ways together.

    Update anomalies are to a large degree related to the DRY (don’t repeat yourself) principle, but update anomalies arise for other reasons, too.

  5. Christian Schuhegger Says:

    Of course all of those concepts mean somewhat similar things. In the end all of them are meant to describe what “good design” means :)

    The reason why I prefer “update anomaly” is that it is a very old term coming from the relational database world, spawning the development of a mathematical model (relational calculus) and leading to objective rules like the 1st to 5th normal forms of database design.

    I would hope that in the world of computations we would also come up with such a mathematical model (perhaps lambda calculus) that would lead to objective rules to follow to achieve a good design. Up to now, “good design” is too much gut feeling and heuristics for my taste.

    jm2c
