27 May 2017

Bruce's Modelling Gripes, No. 10: That I also do many of the things that annoy me

I think this will be my last "gripe" for a while, though it has been fun letting them out. I will now let them build up inside for a while before I splurge again.

Yes, of course, I also do many of the things I have been complaining about in these "Gripes" (though not all of them). It is a fundamental social dilemma -- what is fun or advantageous for you as an individual can be a pain for others, and what is good for the general modelling community might be a "pain in the arse" for each modeller to do.

All we can do is try to set standards for ourselves and others and then, collectively, try to keep each other to them - including those of us who suggest the standards in the first place. At certain crucial points they can be enforced (for acceptance of a publication, as a condition of a grant), but even then they are much more effective as part of a social norm -- part of what good/accomplished/reputable modellers do.

So I need this as much as anyone else. Personally, I find the honesty ones easy - I have a childish delight in being brutally honest about my own and general academic matters - but find it harder to do the "tidying up" bits once I have sorted out a model. Others will find the honesty part harder because they lack the arrogant confidence I have. Let's keep us all straight in this increasingly "post-truth" world!

26 May 2017

Bruce's Modelling Gripes, No. 9: Publishing early or publishing late

Alright, so I have cheated and rolled two gripes into one here, but the blog editor seems OK with this.
  • When modellers rush to publish a full journal article on the fun model they are developing, often over-claiming for it and generally not doing enough work, checking it enough, or getting enough results. A discussion paper or workshop paper is good, but presenting work as mature when it is only just developing can waste everybody's time.
And the opposite extreme... 
  • When modellers keep a model to themselves for too long, waiting until it is absolutely perfect before they publish, and then pretending that there was no messy process getting there. Perfection is fine but, please, please also put out a discussion paper on the idea early on so we know what you are working on. Also, in the journal article, be honest about the process you took to get there, including things you tried that did not work - as in a 'TRACE' document.
We can have the best of both worlds: open discussion papers showing raw ideas, plus journal papers when the work is mature, please!

25 May 2017

Bruce's Modelling Gripes, No. 8: Unnecessary Mathematics

Before computational simulation developed, the only kind of formal model was mathematical [note 1]. Because it is important to write models formally for the scientific process [note 2], maths became associated with science. However, solving complicated mathematical models is very hard, so pushing the envelope of these mathematical models tended to involve cutting-edge maths.

These days, when we have a choice of kinds of formal model, we can choose the most appropriate kind, e.g. analytic or computational [note 3]. Most complex models are not analytically solvable, so it is usually the computational route that is relevant.
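To make that distinction concrete, here is a minimal Python sketch (the model, parameter values and names are my own illustrative choices, not from the post). The same logistic growth model is treated both ways: solved analytically via its closed form, and "solved" computationally by calculating examples, i.e. by simulating.

```python
import math

# Illustrative values (my own assumptions): growth rate and initial state.
r, x0 = 0.5, 0.1

def analytic(t):
    # Solving the model analytically: the closed-form solution of
    # the logistic equation dx/dt = r*x*(1 - x).
    return 1.0 / (1.0 + (1.0 / x0 - 1.0) * math.exp(-r * t))

def simulate(t_end, dt=0.01):
    # Calculating examples (simulating): stepping the same model forward
    # in small Euler steps instead of solving it.
    x, t = x0, 0.0
    while t < t_end:
        x += dt * r * x * (1.0 - x)
        t += dt
    return x

print(analytic(10.0))   # ~0.94
print(simulate(10.0))   # close to the analytic value
```

For this toy model both routes agree; the point of [note 3] is that for most complex models only the second route is available.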

Some researchers [note 4] feel it is necessary to dress up their models in mathematical formulas, or to make the specification of the model more mathematical than it needs to be. This is annoying: not only does it make the specification harder to read, but it also reduces one of the advantages of computational modelling -- that the rules can have a natural interpretation in terms of observable processes. [Note 5]

If the model genuinely involves maths, then use it, but do not add maths just to look 'scientific' -- that is as silly as wearing a white lab coat to program a simulation!



Note 1: This is almost, but not quite, true: there were models in other formal systems, such as formal logic, but these were vanishingly rare and difficult to use.

Note 2: Edmonds, B. (2000) The Purpose and Place of Formal Systems in the Development of Science, CPM Report 00-75, MMU, UK. (http://cfpm.org/cpmrep75.html)

Note 3: It does not really matter whether one uses maths or code to express a model; the only important difference is between solving analytically and calculating examples (which is simulating).

Note 4: All fields have their own 'machismo': how you prove you are a *real* member of the community. In some fields (e.g. economics) this has included showing one's skill at mathematics. Thus the problem is more common in some fields than others, but it is pretty widespread.

Note 5: My first degree was in Mathematics, so I am not afraid of maths; I can just step back from the implicit status game of knowing and 'displaying' it.

23 May 2017

Bruce's Modelling Gripes, No. 7: Assuming simpler is more general

If one adds some extra detail to a general model it can become more specific -- that is, it then only applies to those cases where that particular detail holds. However, the reverse is not true: simplifying a model will not make it more general - it just lets you imagine that it would be more general.

To see why this is, consider an accurate linear equation, then eliminate the variable, leaving just a constant. The equation is now simpler, but it will be exactly true at only one point (and only approximately right in a small region around that point) - it is much less general than the original, because it is true for far fewer cases.
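As a minimal, runnable version of that argument (the particular equation and numbers are hypothetical choices of mine, purely for illustration):

```python
# The "accurate" model: y = 2x + 1, true for every x.
def full_model(x):
    return 2 * x + 1

# The "simplified" model: the variable is eliminated, keeping only the
# constant value the full model takes at x = 3.
def simplified_model(x):
    return 7

for x in [2.9, 3.0, 3.1, 5.0, 10.0]:
    err = abs(full_model(x) - simplified_model(x))
    print(f"x = {x:5.1f}: error = {err:.1f}")
# The error is 0 only at x = 3 and grows as we move away from it:
# the simpler model is true in far fewer cases, i.e. it is LESS general.
```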

Only under some special conditions does simplification result in greater generality:
  1. When what is simplified away is essentially irrelevant to the outcomes of interest, e.g. when there is some averaging process over a lot of random deviations (see the sketch after this list)
  2. When what is simplified away happens to be constant for all the situations considered (e.g. gravity is always 9.8 m/s^2 downwards at the Earth's surface)
  3. When you hugely loosen your criteria for being approximately right as you simplify (e.g. move from a requirement that results match some concrete data to using the model as a vague analogy for what is happening)
In other cases, where you compare like with like (i.e. you do not move the goalposts as in (3) above), simplification only works if you happen to know what can safely be simplified away.
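For what it is worth, here is a small sketch of case (1), under assumptions entirely of my own choosing: each of many agents grows at a rate g plus independent zero-mean noise, and the outcome of interest is the population average rather than any individual trajectory. In that case the noise really can be simplified away:

```python
import random

random.seed(1)
g, n_agents, steps = 0.1, 10_000, 50  # hypothetical parameter values

# Detailed model: every agent gets its own random deviation at each step.
agents = [1.0] * n_agents
for _ in range(steps):
    agents = [x + g + random.gauss(0.0, 0.5) for x in agents]
mean_detailed = sum(agents) / n_agents

# Simplified model: the zero-mean noise averages out across agents,
# leaving plain deterministic growth.
mean_simple = 1.0 + g * steps

print(mean_detailed, mean_simple)  # both close to 6.0
```

Drop the averaging (e.g. ask about the spread of agents, or any single agent's path) and the same simplification is no longer safe.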

Why people think that simplification might lead to generality is somewhat of a mystery. Maybe they assume that the universe ultimately has to obey simple laws, so that simplification is the right direction (but of course, even if this were true, we would not know which way to safely simplify). Maybe they are really thinking about the other direction: slowly becoming more accurate by making the model mirror the target more closely. Maybe it is just a justification for laziness, an excuse for avoiding messy, complicated models. Maybe they just associate simple models with physics. Maybe they just hope their simple model is more general.

5 May 2017

Bruce's Modelling Gripes, No. 6: Over-hyping significance of a simulation to Funders, Policy Makers and the Public

When talking to other simulation modellers, a certain latitude is permissible in describing the potential impact of our models. For example, if we say "This simulation could be used to evaluate policy options concerning ...", the audience probably knows that, although this is theoretically possible, there are many difficulties in actually doing it. They make allowance for the (understandable) enthusiasm of the model's creator, for they know such pronouncements are to be taken with 'a pinch of salt'.

However, it is a very different situation when the importance, impact or possible use of models is exaggerated to an audience of non-modellers, who are likely to take such pronouncements at face value. This includes promises in grant applications, journal publications, public lectures and discussions with policy actors/advisers. Such audiences will not be in a position to properly evaluate the claims made and have to take the results on trust (or ignore them, along with the advice of other 'experts' and 'boffins').

The danger is that the reputation of the field will suffer when people rely on models for purposes for which they are not established. The refrain could become "Lies, damned lies, statistics and simulations". This matters especially in an era when scientists are being questioned and sometimes ignored.

Some of the reasons for such hype lie in the issues discussed in previous posts; others seem to lie elsewhere.
  • Confusions about purpose: thinking that establishing a simulation for one purpose is enough to justify using it for a different purpose
  • Insufficient validation for the use or importance claimed
  • Being deceived by the "theoretical spectacles" effect [note 1] -- when one has worked with a model for a while, one tends to see the world through the "lens" of that model. Thus we confuse a way of understanding the world with the truth about it.
  • Sheer fraud: we want a grant, or to get published, or to justify a grant, so we bend the truth about our models somewhat -- for example, promising far more in a grant proposal than we know we will be able to deliver.
Among other modellers, we can easily be found out and understood. With others we can get away with it for a time, but it will catch up with us in an eventual loss of reputation. We really do not want to be like the economists!

Note 1: "theoretical spectacles" was a phrase introduced by Thomas Kuhn to describe the effect of only noticing evidence that is consistent with the theory one believes.