In this post I will try to articulate something that has bothered me for most of my career. Before I explain more, I want to make it clear that this is only an opinion, and at that also an opinion that is still evolving. I am not sure if I am walking out on thin ice or leaving it for solid ground. But here goes…
The issue deals with systems in which some form of design decision needs to be made. The context is road pavement systems and road networks. Thus, we are not dealing with a highly controlled design system such as concrete slab design or water flow in channels.
In road pavement engineering, we are always dealing with a system that is fraught with uncertainty. Hopgood [1] suggested that, in the context of knowledge-based systems, uncertainty can stem from three sources:
Uncertain evidence (one test pit every 300 metres - what lies in-between?)
Vagueness (when exactly has a pavement “failed”?)
Uncertain link between evidence and conclusion.
In this post I will focus on the third source of uncertainty listed above, but in a sense all three are at play. Let us say in this domain fraught with uncertainty we are building a system to help make decisions. It may be a design system or a system to allocate resources. Assume the relationship is of this format:
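[Known Evidence] + [Estimated Factors] + [Expert Knowledge] → Desired Answer

(This is just shorthand; each of the three ingredients is unpacked in the example that follows.)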
We can write this more mathematically as follows:
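Answer = f(A, B, C)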
Where the notation f() indicates that our desired answer “is a function of” inputs A, B and C. Wanting to be rigorous, we can acknowledge where uncertainty lies in our relationship:
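Answer = f(A [measured, but sparse, evidence], B [estimated, with random variation], C [subjective expert judgement])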
As an example of the above, let us take the case where you need to decide whether to recommend a Pavement Rehabilitation or only a Re-Surfacing in a network-level budgeting system.
The known evidence is the visible and measured pavement situation. We can see and measure the types of distress, rut depth, drainage situation, etc. Then there are factors that are unknown and/or subject to random variation, such as traffic growth and layer thicknesses. These need to be estimated.
Inputs A and B define for us the “pavement situation”. We tie these together into a decision by applying expert knowledge (input C). This expert knowledge may be explicit (e.g. a Pavement Design Catalogue), or it may be subjective evaluations in the mind of the engineer.
These evaluations can take the form of structured reasoning such as: if there is systemic structural pavement distress, then my knowledge and experience dictate that this pavement situation calls for a rehabilitation. A re-surfacing would be a waste of taxpayer funds because of reasons x, y and z.
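To make that flavour concrete, here is a minimal sketch of what such a rule could look like if it were written down explicitly - the attribute names and threshold values are invented for illustration, not taken from any real design catalogue:

```python
# Illustrative only: a hand-coded expert rule of the kind described above.
# Attribute names and thresholds are invented for this sketch.

def recommend_treatment(structural_distress_extent_pct: float, rut_depth_mm: float) -> str:
    """Recommend 'Rehabilitation' or 'Re-Surfacing' from simple expert rules."""
    # "Systemic" structural distress: widespread cracking or deep rutting
    systemic_structural_distress = (
        structural_distress_extent_pct > 20.0 or rut_depth_mm > 20.0
    )
    if systemic_structural_distress:
        # Expert reasoning: a re-surfacing would not address the structural
        # cause, so it would be a waste of funds.
        return "Rehabilitation"
    return "Re-Surfacing"


print(recommend_treatment(structural_distress_extent_pct=35.0, rut_depth_mm=18.0))
```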
Feeling uncomfortable with the notion of an experienced engineer making decisions, we strive to make our decision system more “fundamental”. Perhaps there is a drive to make decisions on an economic basis, in which case our decision making system becomes:
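Decision = f(A, B, Economic Model(x1, x2, x3, …, xn))

(Again shorthand: an economic sub-model, with its own inputs x1 to xn, now stands in where the explicit expert judgement C used to sit.)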
Where does this economic model come from? How certain is the link between the evidence and conclusion in this model? Are variables x1, x2, x3,…, xn measured or estimated? Do we understand how much uncertainty we are introducing? Or are we simply following the Red Queen’s reasoning:
“I could have done it in a much more complicated way,” said the Red Queen, immensely proud.
Lewis Carroll: "Through the Looking-Glass"
Quite often, a sub-model is adopted on the basis of earlier studies - are the study findings still relevant? If the model requires crystal ball gazing into the future, what are the prediction errors and prediction intervals?
Here is where human nature enters and makes this tricky. The problem is not that a more sophisticated sub-model is being used, but rather that, once it enters the lexicon of the wider system, it becomes “best practice” - not because it improved prediction, but because it seems more sophisticated, and therefore more rigorous, than the subjective element of our first version. In my experience, beyond the initial research, questions such as those posed above with respect to prediction errors are seldom asked, or their answers used, in operational contexts.
Almost certainly, when we first run the model with the newly inserted economic sub-model, we will find the results are off-target. How do we recognise the results are off-target? - Expert Knowledge. Now starts a process of tuning and calibration. Based on what? Expert knowledge.
So, our model has evolved as follows:
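Decision = f(A, B, C)   →   Decision = f(A, B, Economic Model(x1, …, xn)), tuned and calibrated using expert knowledge C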
It seems to me that what we have done is simply make it more difficult to impose our knowledge on the decision-making system. Without a doubt, we can do much more with our second system (the right-hand side above) than we previously could with the first. For example, we can do sensitivity studies, etc. But is that ever done? And if it is, is it done with explicit evaluation and acknowledgement of prediction errors?
I have a suspicion that, if the variations in estimated inputs and the prediction errors of our more sophisticated system are properly analysed (e.g. using Monte Carlo simulation) and explicitly acknowledged, it will become apparent that the more complex system offers no better margin of error than the earlier, simpler one.
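To be clear about the kind of analysis I mean, here is a minimal sketch - every distribution, scoring model and parameter in it is made up, purely to show the mechanics of such a Monte Carlo comparison:

```python
# Illustrative sketch only: invented distributions and models, used to compare
# the spread of a simple expert-style score against a more elaborate
# "economic" score when the uncertain inputs are sampled many times.
import numpy as np

rng = np.random.default_rng(42)
N = 10_000

# Uncertain / estimated inputs (assumed distributions, not real data)
traffic_growth = rng.normal(0.03, 0.01, N)     # growth rate per year
layer_thickness = rng.normal(150.0, 20.0, N)   # mm
rut_depth = rng.normal(12.0, 3.0, N)           # mm

# "Simple" system: an expert-style index built directly from the evidence
simple_score = 0.6 * (rut_depth / 20.0) + 0.4 * (150.0 / layer_thickness)

# "Sophisticated" system: an economic-style sub-model whose extra parameters
# each bring their own estimation uncertainty
discount_rate = rng.normal(0.06, 0.02, N)
unit_cost = rng.normal(1.0, 0.15, N)
complex_score = (
    unit_cost
    * (1 + traffic_growth) ** 20 / (1 + discount_rate) ** 20
    * (rut_depth / 20.0) * (150.0 / layer_thickness)
)

# Compare the spread (margin of error) of the two systems
for name, score in [("simple", simple_score), ("complex", complex_score)]:
    lo, hi = np.percentile(score, [5, 95])
    print(f"{name:8s} median = {np.median(score):.2f}, 90% interval = ({lo:.2f}, {hi:.2f})")
```

The point of such an exercise would not be the numbers themselves, but whether the prediction intervals of the two systems actually differ by enough to justify the extra machinery.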
This post is already longer than most readers will endure, so for now I will draw it to a close. I think one of the above two models will be (a) more difficult to control and calibrate; (b) more difficult to understand and communicate, so that its weaknesses are less well understood; and (c) more likely to be abused.
The last point brings me to the title of my post: the Count-to-Three Principle. This principle/rule was expressed by Gerald Weinberg [2]:
If you cannot think of three ways of abusing a tool, you do not understand how to use it.
In his book “An Introduction to General Systems Thinking”, Weinberg elaborates on the Count-to-Three Principle (emphasis is mine):
Faithful adherence to this principle would protect us from the enthusiasm of the optimizers, maximizers, and other species of perfectionists - but mostly from ourselves.
[1] Hopgood, A. 2000. Intelligent Systems for Engineers and Scientists. Second Edition. CRC Press.
[2] Weinberg, G.M. 1975. An Introduction to General Systems Thinking. Wiley.