11 Jan 2021

Bad Science keeps getting cited

There is a problem with how effectively science corrects itself. While erroneous but important science does get corrected in the very long term, in the short and medium term papers continue to be cited even when the science they describe has been debunked, falsified or even retracted. 

The problem seems to be (a) that severe criticism or retraction notices are not easily visible on the page where the paper is read (a problem compounded when different copies of the paper exist on pre-print services etc.), and (b) that a paper gets more readers (and hence more citations) the more it is cited, so once a paper starts being cited a lot, its citation rate is self-sustaining.
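
This self-sustaining dynamic is essentially preferential attachment, which can be sketched in a few lines of code (the parameter values below are arbitrary illustrations, not estimates from real citation data):

```python
import random

def citation_sim(n_papers=200, n_citations=5000, boost=1.0, seed=0):
    """Toy 'rich-get-richer' citation process: each new citation goes to
    paper i with probability proportional to (boost + citations so far).
    All parameter values are arbitrary illustrations."""
    rng = random.Random(seed)
    counts = [0] * n_papers
    for _ in range(n_citations):
        total = sum(counts) + boost * n_papers
        r = rng.uniform(0, total)
        acc = 0.0
        for i, c in enumerate(counts):
            acc += c + boost
            if r <= acc:
                counts[i] += 1
                break
    return sorted(counts, reverse=True)
```

Even though every paper starts identical, early random advantages compound, so a few papers tend to accumulate most of the citations - which is why a debunked paper that is already popular keeps being cited.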

Some examples:

  •  (Schneider et al. 2020) report on a paper that was retracted due to falsified clinical trial data but has continued to be cited in the 11 years since - mostly (in 96% of citing articles) with citations that give no indication of its retraction or weakness. This is not so surprising given that the main page of the paper gives no indication that it is retracted (https://journal.chestnet.org/article/S0012-3692(15)49623-0/).
  • The case I know about personally is (Riolo et al. 2001), which was heavily criticised (e.g. Roberts & Sherratt 2002, Edmonds & Hales 2003). The original model is very brittle and it is clear the authors did not understand their own model or results: changing a "<=" to a "<" in the formulation makes the effect they report disappear, because the former forces agents with zero tolerance to cooperate with clones of themselves and hence proliferate. Despite this, the original paper has been cited over 800 times according to Google Scholar.
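
The "<=" versus "<" point can be made concrete. Below is a minimal, illustrative sketch of a tag-based donation model in the spirit of (Riolo et al. 2001) - the parameter values, costs and reproduction scheme are my own simplifications, not the paper's exact setup - isolating the rule under which zero-tolerance agents still donate to exact clones:

```python
import random

def donates(tag_a, tol_a, tag_b, strict=False):
    """A donates to B when their tags are within A's tolerance.
    The published formulation uses <= ; with strict=True (<) a
    zero-tolerance agent no longer donates even to an exact clone."""
    diff = abs(tag_a - tag_b)
    return diff < tol_a if strict else diff <= tol_a

def run(strict, n=50, pairings=3, generations=20, seed=1):
    """Very rough sketch of the surrounding dynamics (my own
    simplifications throughout). Returns the mean donation rate."""
    rng = random.Random(seed)
    pop = [(rng.random(), rng.random() * 0.1) for _ in range(n)]  # (tag, tolerance)
    rates = []
    for _ in range(generations):
        score = [0.0] * n
        acts, total = 0, 0
        for i in range(n):
            for _ in range(pairings):
                j = rng.randrange(n)
                while j == i:
                    j = rng.randrange(n)
                total += 1
                if donates(pop[i][0], pop[i][1], pop[j][0], strict):
                    acts += 1
                    score[i] -= 0.1  # cost to the donor
                    score[j] += 1.0  # benefit to the recipient
        rates.append(acts / total)
        # imitate a random agent when it is doing better, with mutation
        new_pop = []
        for i in range(n):
            j = rng.randrange(n)
            tag, tol = pop[j] if score[j] > score[i] else pop[i]
            if rng.random() < 0.1:
                tag = rng.random()
            if rng.random() < 0.1:
                tol = max(0.0, tol + rng.gauss(0, 0.01))
            new_pop.append((tag, tol))
        pop = new_pop
    return sum(rates) / len(rates)
```

With the "<=" rule, agents with tolerance 0 donate to identical tags and so clusters of clones can sustain cooperation; under the strict "<" rule that channel vanishes.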

(Schneider et al. 2020) review some of the literature reporting other cases. Retraction Watch documents retractions by journals and indeed has a "leader board" of the 10 most-cited retracted papers (https://retractionwatch.com/the-retraction-watch-leaderboard/top-10-most-highly-cited-retracted-papers/), with one paper cited 1,146 times after it was retracted.

That good science gets cited and then attracts more readers is clearly a good thing, but when the reverse happens, the communication of a paper's (lack of) quality works badly. Firstly, many researchers lazily cite papers without reading them (basing their citations on those of others). Secondly, when a paper has merely been severely criticised, this is hard to discover without reading many of the papers that cite it - a very time-consuming process. The only way the correction of bad science works is if the paper criticising such research is cited many more times than the original.

Some mechanism is needed whereby the quality of papers is communicated post-review. The review process can only stop some bad science from being published, because completely checking research is infeasible (unless it is a core part of one's own research).


Edmonds, B. & Hales, D. (2003) Replication, Replication and Replication: Some Hard Lessons from Model Alignment. Journal of Artificial Societies and Social Simulation 6(4). http://jasss.soc.surrey.ac.uk/6/4/11.html

Riolo, R.L., Cohen, M.D. & Axelrod, R. (2001) Evolution of cooperation without reciprocity. Nature 414(6862), 441-443. https://www.nature.com/articles/35106555

Roberts, G. & Sherratt, T. (2002) Does similarity breed cooperation? Nature 418, 499-500. https://doi.org/10.1038/418499b

Schneider, J., Ye, D., Hill, A.M. et al. (2020) Continued post-retraction citation of a fraudulent clinical trial report, 11 years after it was retracted for falsifying data. Scientometrics 125, 2877-2913. https://doi.org/10.1007/s11192-020-03631-1

5 Nov 2020

My summary of the ABC news analysis of US election exit polls

This is my summary of the analysis of US exit polls done by ABC News. Full details at: https://abcnews.go.com/Elections/exit-polls-2020-us-presidential-election-results-analysis

On the leaders, the parties and the issues that have become partisan, division was sharp and roughly symmetric, including views on: the economy, abortion, covid, counting votes "properly", health care, climate change, BLM/racism, and the competency of the Federal government.

Looking at all the other factors that indicated a 60% or greater tendency to vote Republican or Democrat, it found:

  • Those voting Democrat tended to be: Black, Hispanic or Asian, holders of an advanced degree, in a worse or unchanged financial situation, people who valued the personal qualities of the candidate (has good judgement, can unite the country), positive about their state's voting arrangements, possibly past voters for independents, from a city of over 50,000 residents, or in younger/newer voter groups (18-29, non-married women, first-time voters).
  • Those voting Republican tended to be: White and 45-59, Christian, deciders in the last week, doing better financially than in 2016, or people who valued a strong leader.

Extracts from the full details for these factors are as follows. (The original interactive page showed the Democrat and Republican vote shares for each group as bar charts; those figures did not survive extraction, so only each question, its number of respondents, and the share of all respondents giving each answer are reproduced here.)

  • Are you: (15,318 respondents): Black 12%, Hispanic/Latino 13%, Asian 3%
  • In which age group are you? (15,452 respondents): 18-29 17%
  • Age by race (15,205 respondents): White 45-59 18%
  • Which best describes your education? You have: (15,344 respondents): an advanced degree after a bachelor's degree (such as JD, MA, MBA, MD, PhD) 16%
  • Gender by marital status (3,748 respondents): non-married women 24%
  • Are you: (2,470 respondents): Protestant 14%, Catholic 27%, other Christian 31%
  • Are you gay, lesbian, bisexual or transgender? (3,615 respondents): yes 7%
  • Is this the first year you have ever voted? (3,953 respondents): yes 13%
  • When did you finally decide for whom to vote in the presidential election? (3,731 respondents): in the last week 2%
  • Compared to four years ago, is your family's financial situation: (3,731 respondents): better today 41%, worse today 20%, about the same 38%
  • Which ONE of these four candidate qualities mattered most in deciding how you voted for president? (3,845 respondents): can unite the country 19%, is a strong leader 32%, has good judgment 23%
  • Which was more important in your vote for president today? (3,845 respondents): my candidate's personal qualities 23%
  • Do you think your state makes it easy or difficult for you to vote? (3,741 respondents): somewhat easy 25%
  • Was your vote for president mainly: (3,870 respondents): against his opponent 24%
  • In the 2016 election for president, did you vote for: (3,870 respondents): Hillary Clinton (Dem) 40%, Donald Trump (Rep) 42%, other 5%, did not vote 11%
  • Population of area, three categories (15,343 respondents): city over 50,000 30%


8 Oct 2020

Some warning signs of wishful thinking

Sorting out whether we believe something for a good reason, or because we want it to be true (or, oddly, because we fear it to be true), is hard. Even those I consider very rational are not always good at this (myself included), and some of the most partisan exploit this trait ruthlessly. However, there are some (fallible) signals that can alert one to such wishful thinking, such as the following.

  1. It sanctions me doing something I want to do (drive in the car when I could have cycled, take an international holiday, eat a huge slice of over-rich chocolate cake etc.)
  2. It helps me criticise/attack/despise something I already dislike/think is bad (a government, a politician, a law, a restriction, the green movement, Capitalism, foreign aid, Brexit supporters etc.)
  3. The formulation of the belief shifts over time (from "smoking is not harmful" to "there is no evidence it is" to "it is harmful but only one of a complex of factors", from denying the earth's temperature is rising to denying it is due to humans to denying it is worth stopping etc.)
  4. The belief is constructed so that it is hard to disprove (conspiracy theories are often like this, but so too are many political claims, e.g. some of the claims of the Brexit or Remain camps, the benefit of homoeopathic cures, crystals)
  5. It sanctions me not doing something I do not want to do (find a job if I am lazy, get exercise if I am unhealthy, change my mind if I do not want to, admit to being wrong if this is embarrassing, avoid doing my expenses, wear a mask, wear a cycle helmet etc.)
  6. The use of obviously weak arguments to support the belief (it does not cost much anyway, I was going that way anyway, it is my right to do it, it won't harm anyone, Dominic Cummings did it so why not me, anything X says is rubbish, etc.)
  7. The invention of new supporting arguments only formulated when old ones are revealed to be weak or wrong (in economics - ok we know people are not rational but collectively they behave as if they are, the climate is warming due to the sunspot cycle etc.)
  8. The belief signals membership of a group I wish to belong to (holocaust denial, global warming will result in the extinction of all humans, greed is good etc.)
  9. Support for the belief is based on the persecution of its holders or the weakness of the arguments of those opposing it (Big Pharma would want this, they say there is a magic money tree, various Nationalist claims, the Government does not want you to know this etc.)
  10. It is involved in a highly political or personalised argument (Brexit, HCQ, Republican/Democrat, Immigration, lockdown, taxes etc.)
  11. There is no positive evidence for the belief - rather a (perceived) lack of negative evidence (herbal remedies, superstitions, free will/denial of free will, chemicals in the drinking water are affecting me, self-supporting arguments such as everyone is lying etc.)
  12. Hype, ridicule or insults are used to defend the belief (if you follow that line you would not be able to do anything, "socialist" in the US, only a stupid person would believe that, that is what they want you to believe, Trump is the best/worst president ever etc.)
  13. All my friends/group/kind believe it - though this is often not an explicit/conscious reason (shaving, our group is superior to others/outsiders, our technique is better, British humour is unique and similar national myths, any diet that involves harm to animals is wrong etc.)
  14. It is too interesting - too surprising, funny, odd etc. False beliefs are not constrained by boring facts and thus can be far more engaging (internet memes, the earth is hollow, you too can be thin with this simple trick etc.)
  15. It is comforting or otherwise gives me status (Earth is the centre of the universe, Humans are the pinnacle of creation/evolution, simple theories are more likely to be true, your country is special/unique etc.)
  16. New phrases/words are used that are invented by believers - because this indicates this is more a group membership thing than a matter of truth (sheeple, Remoaners, follow the crumbs etc.)
  17. It can be expressed in a very few words and has lots of CAPITALS and exclamation marks!! (political slogans, ads, tweets, etc.)
  18. Support is mostly via a list of personal endorsements (these are easy to collect at a trivial level and very hard to check)
  19. When critiqued, the response is not to engage with the argument but to reply with something else. (The warning sign here is a lack of interest in the basis for the belief - the belief comes first.)
  20. It is contrary to general opinion since this makes the believers special and different (and hence gives status) - the narrative of the prophet in the wilderness or one man against the system (and, yes, in common narratives - e.g. films - this person is almost invariably male).

I am NOT saying that these are infallible signs of a wrong belief (some of them hold for some truths), and I am NOT saying one shouldn't do these things (e.g. critiquing something you feel is wrong might well be a good thing to do). It is just that each of these should give one pause to question the corresponding belief a bit more than one might otherwise do. If reflection on the grounds for the belief indicates they are not so solid, then think of independent ways/evidence to check that belief, or just shift to a less certain position, noting the doubt.

Clearly, I should populate this list with references to evidence (but who would read that anyway ;-), which I may get around to. However, I will update this list when I find more suggestions to add.

Also one should distinguish these warning signs from aspects that are merely not entirely reliable indications of truth, including the following (nothing outside formal systems can be 100% proved).

  • That a clever person says it (like me 8-)
  • That there is a debate or many views about this (that there are climate deniers or anti-vaxxers in TV debates does not mean they have strong arguments or reliable evidence)
  • That it is repeated by many users or on many websites (that Biden used an earpiece in the debate, that the Oklahoma bomber was an Islamic refugee etc.)
  • There are non-peer-reviewed/non-rigorous papers that claim this (look at the pre-print literature on COVID-19)
  • I find it a useful/insightful way of thinking about things (if something can help one get to true insights it might equally help one to misleading ones)
  • It is plausible given what I know (plausibility makes it a hypothesis not a fact)
  • It fits with all my other knowledge/beliefs (but those might also be wrong)
  • It was taught to me at school (I was taught continental drift was false)
  • I read it in a book/newspaper (some authors, editors, journalists and publishers take care to restrict publication to statements that are well supported, but many do not)
  • An example of this is documented (examples are good starting points, but not enough to support generalisation)
  • It is complicated or technical (being impressive is not enough to make it true, though it does indicate more effort has been put into it).
  • That it is in the interest of some institution or block for you to disbelieve it (one can't ignore power relations when assessing truth, but the effect of power is complex)

21 Mar 2019

Slides of talk: "The Evolution of Empirical ABMS"

A talk at the workshop "Agent-Based Models in Philosophy: Prospects and Limitations", Ruhr University, Bochum, Germany, March 2019.


ABMs (like other kinds of model) can be used in a purely abstract way, as a kind of thought experiment - a way of thinking about some aspect of the world that is too complicated to hold in our mind (in all its detail). In this way it both informs and complements discursive thought. However, there is another set of uses for ABMs - empirical uses - where the mapping between the model and sets of observation-derived data is crucial. For these uses, one has to (a) use the mapping to get from some data to the model, (b) use the model for some inference, and (c) use the mapping again to get back to data. This includes both predictive and explanatory uses of ABMs. These are easily distinguishable from abstract uses because there is a fixed and well-defined relationship between the model and the data; it is not flexible on a case-by-case basis. In these cases the reliability comes from the composite (a)-(b)-(c) mapping, so simplifying step (b) can be counterproductive if it means weakening steps (a) and (c), because it is the strength of the overall chain that matters. Taking the use of models in quantum mechanics as an example, one can see that sometimes the evolution of the formal models, driven by empirical adequacy, can be more important than the attendant abstract models used to get a feel for what is happening. Although using ABMs for empirical purposes is more challenging than for purely abstract purposes, they are increasingly being used for empirical explanation rather than thought experiments, and there is no reason to suppose that robust empirical adequacy is unachievable.
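
The (a)-(b)-(c) chain can be illustrated with a deliberately trivial sketch, where the "model" is just exponential growth standing in for any ABM (all names and the toy model are illustrative, not any particular ABM):

```python
def calibrate(observations):
    """(a) data -> model: derive the model's parameter from data
    (here, a per-step growth rate for a toy growth model)."""
    steps = len(observations) - 1
    return (observations[-1] / observations[0]) ** (1 / steps)

def simulate(start, rate, steps):
    """(b) inference within the model."""
    values = [start]
    for _ in range(steps):
        values.append(values[-1] * rate)
    return values

def predict(observations, horizon):
    """(c) model -> data: the reliability of the output rests on the
    whole (a)-(b)-(c) chain, not on step (b) alone."""
    rate = calibrate(observations)
    return simulate(observations[-1], rate, horizon)
```

Simplifying step (b) here (say, rounding the growth rate) would buy nothing if it forced cruder mappings in (a) and (c); it is the composite chain that faces the data.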

Slides at: https://www.slideshare.net/BruceEdmonds/the-evolution-of-empirical-abms-137463946

21 Aug 2018

Slides from my plenary "How social simulation could help social science deal with context"

An invited talk at Social Simulation 2018

This points out how context-sensitivity is fundamental to much human social behaviour, but is largely bypassed or ignored in social science. In more formal social science, it is usual to assume or fit universal models, even if these cover a lot of different contexts. In qualitative social science, context is almost deified, and any generalisation across contexts is left to those who learn from it. Agent-based modelling allows context-sensitive models to be developed, and hence the role of context to be explored and better understood. The talk discussed a framework for analysing narrative text, the Context-Scope-Narrative-Elements (CSNE) framework. It also illustrated a cognitive model that allows context-dependent knowledge to be implemented within an agent in a simulation. The talk ended with a plea to avoid unnecessary or premature summarisation (using averages etc.).

Slides at: https://www.slideshare.net/BruceEdmonds/how-social-simulation-could-help-social-science-deal-with-context

15 Jul 2018

Slides from talk at MABS2018: "Mixing ABM and Policy ... what could possibly go wrong?"

Invited talk at 19th International Workshop on Multi-Agent Based Simulation at Stockholm on 14th July 2018.

Mixing ABM and Policy ... what could possibly go wrong?

This talk looks at a number of ways in which using ABM in the context of influencing policy can go wrong: during model construction, with model application and other.

It is related to the book chapter:
Aodha, L. & Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity - a handbook, 2nd edition. Springer, 801-822.
See the slides at: https://www.slideshare.net/BruceEdmonds/mixing-abm-and-policywhat-could-possibly-go-wrong

25 Jun 2018

Paper published in "Journal of Conflict Resolution": "Intragenerational Cultural Evolution and Ethnocentrism" by David Hales and myself

This #cfpm_org paper suggests a (horizontal) intragenerational cultural process of in-group favouritism, contrasting with Axelrod and Hammond's (2006) model of the (vertical) evolution of a fixed in-group preference.
Ethnocentrism denotes a positive orientation toward those sharing the same ethnicity and a negative one toward others. Previous models demonstrated how ethnocentrism might evolve intergenerationally (vertically) when ethnicity and behavior are inherited. We model short-term intragenerational (horizontal) cultural adaptation where agents have a fixed ethnicity but have the ability to form and join fluid cultural groups and to change how they define their in-group based on both ethnic and cultural markers. We find that fluid cultural markers become the dominant way that agents identify their in-group supporting positive interaction between ethnicities. However, in some circumstances, discrimination evolves in terms of a combination of cultural and ethnic markers producing bouts of ethnocentrism. This suggests the hypothesis that in human societies, even in the absence of direct selection on ethnic marker–based discrimination, selection on the use of fluid cultural markers can lead to marked changes in ethnocentrism within a generation.
Keywords: tag-based cooperation, altruism, cultural evolution, in-group bias, ethnocentrism
This is open access and available at: http://journals.sagepub.com/doi/10.1177/0022002718780481

27 Jun 2017

Slides for talk and draft paper on: Modelling Purposes

A talk at the 2017 ESSA SiLiCo Summer school in Wageningen.

Slides at: https://www.slideshare.net/BruceEdmonds/model-purpose-and-complexity

This discusses some different purposes for a simulation model and the consequences of this in terms of its development, checking and justification. It also looks at how complex one's model should be.

Connected to this is a draft of a paper:

How one builds, checks, validates and interprets a model depends on its ‘purpose’. This is true even if the same model is used for different purposes, which means that a model built for one purpose but now used for another may need to be re-checked, re-validated and maybe even rebuilt in a different way. Here we review some of the different purposes for building a simulation model of complex social phenomena, focussing on five in particular: theoretical exposition, prediction, explanation, description and illustration. The chapter looks at some of the implications in terms of the ways in which the intended purpose might fail. In particular, it looks at the ways that a confusion of modelling purposes can fatally weaken modelling projects, whilst giving a false sense of their quality. This analysis motivates some of the ways in which these ‘dangers’ might be avoided or mitigated.

The citation is:
Edmonds, B. & Meyer, R. (2013) Simulating Social Complexity – a handbook. Springer.

The text of this draft is at: http://cfpm.org/file_download/178/Five+Different+Modelling+Purposes.pdf

There is an updated version with 7 modelling purposes and different modelling *strategies* at: http://cfpm.org/file_download/186/Different+Modelling+Purposes-JASSS-v1.6.pdf

9 Jun 2017

Wide variation in number of votes needed to get elected

As usual, there is a very wide variation in the number of votes needed to get each seat in the UK parliament.  Provincial parties have it relatively easy (in NI, Wales and Scotland), minority parties spread over the UK have it hard (Green, LibDem, UKIP).

2 Jun 2017

Slides for talk on: Modelling Innovation – some options from probabilistic to radical

Given at the European Academy of Technology and Innovation Assessment, see notice about the talk at: https://www.ea-aw.de/service/news/2017/05/22/ea-kolloquium-prof-bruce-edmonds-vom-centre-forpolicy-modelling-cfpm-quotmodellier.html


In general, most modelling of innovation bypasses the creativity involved. Here I look at some of the different options, loosely following (but expanding on) Margaret Boden's analysis of creativity. Four ways are presented and their implementation discussed: (a) probabilistic, where the 'innovation' simply corresponds to an unlikely event within a known distribution; (b) combinatorial, where innovation is a process of finding the right combination of existing components or aspects; (c) complex path-dependency, where the path to any particular product is a complex set of decisions or steps and is not deducible before it is discovered; and (d) radical, where the innovation causes us to think of things in a new way or introduces a novel dimension along which to evaluate. A model of making things that introduces complex path-dependency will be exhibited. Some ways of moving towards (d), the most radical option, are discussed and a possible future research agenda outlined.
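
Options (a) and (b) are straightforward enough to sketch in code (the function names and the 'usefulness' test are my illustrative assumptions, not from the talk):

```python
import itertools
import random

def probabilistic_innovation(rng, threshold=0.999):
    """Option (a): an 'innovation' is merely an unlikely event drawn
    from a known distribution."""
    return rng.random() > threshold

def combinatorial_innovation(components, is_useful):
    """Option (b): innovation as search over combinations of existing
    components, returning the first combination the test accepts."""
    for r in range(1, len(components) + 1):
        for combo in itertools.combinations(components, r):
            if is_useful(combo):
                return combo
    return None
```

Options (c) and (d) resist such compact treatment: in (c) the worth of a step depends on the whole path taken so far, and in (d) the evaluation criterion itself changes as the innovation is made.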

Slides are at: https://www.slideshare.net/BruceEdmonds/modelling-innovation-some-options-from-probabilistic-to-radical

27 May 2017

Bruce's Modelling Gripes, No. 10: That I also do many of the things that annoy me

I think this will be my last "gripe" for a while, though it has been fun letting them out. I will now let them build up inside for a while before I splurge again.

Yes, of course, I also do many of the things I have been complaining about in these "Gripes" (though not all of them). It is a fundamental social dilemma -- what is fun or advantageous for you as an individual can be a pain for others -- what is good for the general modelling community might be a "pain in the arse" to do.

All we can do is try to set ourselves and others standards and then, collectively, try to hold each other to them - including those who suggested the standards in the first place. At certain crucial points they can be enforced (as a condition of acceptance for publication, or of a grant), but even then they are much more effective as part of a social norm -- part of what good/accomplished/reputable modellers do.

So I need this as much as anyone else. Personally, I find the honesty ones easy - I take a childish delight in being brutally honest about my own work and about general academic matters - but find it harder to do the "tidying up" bits once I have sorted out a model; others will find the honesty harder because they lack the arrogant confidence I have. Let's keep each other straight in this increasingly "post-truth" world!

26 May 2017

Bruce's Modelling Gripes, No. 9: Publishing early or publishing late

Alright, so I have cheated and rolled two gripes into one here, but the blog editor seems OK with this.
  • When modellers rush to publish a full journal article on the fun model they are developing, they often over-claim for it and generally do not do enough work, checking or result-gathering. A discussion paper or workshop paper is good, but presenting work as mature when it is only just developing can waste everybody's time.
And the opposite extreme... 
  • When modellers keep a model to themselves for too long, waiting until it is absolutely perfect before they publish, and then pretend there was no messy process getting there. Perfection is fine but, please, please also put out a discussion paper on the idea early on so we know what you are working on. Also, in the journal article, be honest about the process you took to get there, including things you tried that did not work - as in a 'TRACE' document.
We can have the best of both worlds: open discussion papers showing raw ideas, plus journal papers when the work is mature, please!

25 May 2017

Bruce's Modelling Gripes, No. 8: Unnecessary Mathematics

Before computational simulation developed, the only kind of formal model was mathematical [note 1]. Because it is important for the scientific process that models are written formally [note 2], maths became associated with science. However, solving complicated mathematical models is very hard, so pushing the envelope of these mathematical models tended to involve cutting-edge maths.

These days, when we have a choice of kinds of formal model, we can choose the most appropriate kind, e.g. analytic or computational [note 3]. Most complex models are not analytically solvable, so it is usually the computational route that is relevant.

Some researchers [note 4] feel it necessary to dress up their models in mathematical formulas, or to make the specification of a model more mathematical than it needs to be. This is annoying: not only does it make the specification harder to read, it also reduces one of the advantages of computational modelling -- that the rules can have a natural interpretation in terms of observable processes. [Note 5]

If the model genuinely requires maths, then use it, but do not add maths just to look 'scientific' -- that is as silly as wearing a white lab coat to program a simulation!

Note 1: This is almost, but not quite, true: there were models in other formal systems, such as formal logic, but these were vanishingly rare and difficult to use.

Note 2: Edmonds, B. (2000) The Purpose and Place of Formal Systems in the Development of Science, CPM Report 00-75, MMU, UK. (http://cfpm.org/cpmrep75.html)

Note 3: It does not really matter whether one uses maths or code; the only important difference is between solving analytically and calculating examples (which is simulating).

Note 4: All fields have their own 'machismo': ways to prove you are a *real* member of the community. In some fields (e.g. economics) this has included showing one's skill at mathematics, so the problem is more common in some fields than others, but it is pretty widespread.

Note 5: My first degree was in Mathematics, so I am not afraid of maths; I can just step back from the implicit status game of knowing and 'displaying' it.

23 May 2017

Bruce's Modelling Gripes, No. 7: Assuming simpler is more general

If one adds some extra detail to a general model it can become more specific - that is, it then only applies to those cases where that particular detail holds. However, the reverse is not true: simplifying a model will not make it more general - it just lets you imagine it is more general.

To see why, consider an accurate linear equation, then eliminate the variable, leaving just a constant. The equation is now simpler, but it will be exactly right at only one point (and approximately right only in a small region around that point) - it is much less general than the original because it is true for far fewer cases.

Only under some special conditions does simplification result in greater generality:
  1. When what is simplified away is essentially irrelevant to the outcomes of interest (e.g. when there is some averaging process over a lot of random deviations)
  2. When what is simplified away happens to be constant for all the situations considered (e.g. the acceleration due to gravity at the Earth's surface is always about 9.8 m/s^2 downwards)
  3. When you hugely loosen your criteria for being approximately right as you simplify (e.g. move from requiring that results match some concrete data to using the model as a vague analogy for what is happening)
In other cases, where you compare like with like (i.e. you don't move the goalposts such as in (3) above) then it only works if you happen to know what can be safely simplified away.
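
The linear-equation example above can be made concrete (the particular equation is an arbitrary illustration):

```python
def full_model(x):
    """An 'accurate' linear relationship (the numbers are arbitrary)."""
    return 2 * x + 1

def simplified_model(x):
    """The same model with the variable term simplified away."""
    return 1

# The simpler model is exactly right only where the dropped term vanishes:
errors = {x: abs(full_model(x) - simplified_model(x)) for x in range(-3, 4)}
```

The error grows with distance from x = 0: the simplified model is right at one point and progressively wrong elsewhere, i.e. it holds for fewer cases than the original, not more.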

Why people think that simplification might lead to generality is somewhat of a mystery. Maybe they assume that the universe must ultimately obey simple laws, so that simplification is the right direction (though, even if this were true, we would not know which way it is safe to simplify). Maybe they are really thinking about the other direction: slowly becoming more accurate by making the model mirror its target more. Maybe it is just a justification for laziness, an excuse to avoid messy, complicated models. Maybe they associate simple models with physics. Maybe they just hope their simple model is more general.

5 May 2017

Bruce's Modelling Gripes, No. 6: Over-hyping significance of a simulation to Funders, Policy Makers and the Public

When talking to other simulation modellers, a certain latitude is permissible in describing the potential impact of our models. For example, if we say "This simulation could be used to evaluate policy options concerning ...", the audience probably knows that, although this is theoretically possible, there are many difficulties in doing so. They make allowance for the (understandable) enthusiasm of the model's creator, knowing such pronouncements will be taken with 'a pinch of salt'.

However, it is a very different situation when the importance, impact or possible use of models is exaggerated to an audience of non-modellers, who are likely to take such pronouncements at face value. This includes promises in grant applications, journal publications, public lectures and discussions with policy actors/advisers. They will not be in a position to properly evaluate the claims made and have to take the results on trust (or ignore them along with the advice of other 'experts' and 'boffins').

The danger is that the reputation of the field will suffer when people rely on models for purposes that they are not established for. The refrain could become "Lies, damned lies, statistics and simulations". This is especially important in this era where scientists are being questioned and sometimes ignored.

Some of the reasons for such hype lie in the issues discussed in previous posts; some seem to lie elsewhere.
  • Confusions about purpose, thinking that establishing a simulation for one purpose is enough to suggest a different purpose
  • Insufficient validation for the use or importance claimed
  • Being deceived by the "theoretical spectacles" effect [note 1] -- when one has worked with a model for a while, one tends to see the world through the "lens" of that model. Thus we confuse a way of understanding the world with the truth about it.
  • Sheer fraud: we want a grant, or to get published, or to justify a grant, so we bend the truth about our models somewhat. For example, promising far more in a grant proposal than we know we will be able to deliver.
Among other modellers, we can easily be found out and understood. With other audiences we can get away with it for a time, but it will catch up with us as an eventual loss of reputation. We really do not want to be like the economists!

Note 1: "theoretical spectacles" was a phrase introduced by Thomas Kuhn to describe the effect of only noticing evidence that is consistent with the theory one believes.

27 Apr 2017

Bruce's Modelling Gripes, No. 5: None or not many results

The point of a model lies mostly in its results. Indeed, I often only bother to read how a model has been constructed if the results look interesting. One has to remember that how a model is constructed - its basis, the choices you have made, the programming challenges - is far, FAR more interesting to you, the programmer, than to anyone else. Besides, if you have gone to all the trouble of making a model, the least you can do is extract some results and analyse them.

Ideally one should include the following:
  1. Some indicative results to give readers an idea of the typical or important behaviour of the model. This really helps understand a model's nature and also hints at its "purpose". This can be at a variety of levels - whatever helps to make the results meaningful and vivid. It could include visualisations of example runs as well as the normal graphs -- even following the events happening to a single agent, if that helps.
  2. A sensitivity analysis - checking how varying parameters affects the key results. This involves some judgement as it is usually impossible to do a comprehensive survey. What kind of sensitivity analysis, and over which dimensions, depends on the nature of the model, but not including ANY sensitivity analysis generally means that you are not (yet?) serious about the model (and if you are not taking it seriously, others probably will not either).
  3. In the better papers, some hypotheses about the key mechanisms that seem to determine the significant results are explicitly stated and then tested with some focussed simulation experiments -- trying to falsify the explanations offered. These results, perhaps with some statistics, should be exhibited [note 1].
  4. If the simulation is being validated or otherwise compared against data, this should be (a) shown then (b) measured. [note 2] 
  5. If the simulation is claimed to be predictive, its success at repeatedly predicting data (unknown to the modeller at the time) should be tabulated. It is especially useful in this context to give an idea of when the model predicts and when it does not.
What you show does depend on your model purpose. If the model is merely to illustrate an idea, then some indicative results may be sufficient for your goal, but more may still be helpful to the reader. If you are aiming to support an explanation of some data then a lot more is required. A theoretical exploration of some abstract mechanisms probably requires a very comprehensive display of results.
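A minimal parameter sweep (point 2 above) can be sketched in a few lines. In the sketch below, `run_model` is a purely hypothetical stand-in for your actual simulation, and the parameter name, values and number of seeds are illustrative assumptions, not recommendations:

```python
import random
import statistics

def run_model(cooperation_threshold, seed, steps=200):
    """A stand-in for a real simulation: returns one outcome measure.
    (Purely illustrative -- substitute your own model here.)"""
    rng = random.Random(seed)
    cooperators = 0
    for _ in range(steps):
        if rng.random() < cooperation_threshold:
            cooperators += 1
    return cooperators / steps

# Sweep the parameter and repeat each setting over several seeds,
# reporting the spread across runs as well as the mean.
for threshold in [0.1, 0.3, 0.5, 0.7, 0.9]:
    outcomes = [run_model(threshold, seed) for seed in range(20)]
    print(f"threshold={threshold:.1f}  "
          f"mean={statistics.mean(outcomes):.3f}  "
          f"sd={statistics.stdev(outcomes):.3f}")
```

Reporting the spread over seeds, not just the mean, is the point: it shows how robust the key result is to both the parameter and the stochasticity of the runs.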

If you have no, or very few, results, you should ask yourself whether there is any point in publishing. On most occasions it is better to wait until you have some.

Note 1: p-values are probably not relevant here, since by doing enough runs one can get pretty much any p-value one desires. However, checking you have the right statistical power is probably important.  See
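To see why note 1 matters, the sketch below (with invented numbers) runs a two-sided z-test on two hypothetical model variants whose true difference in outcome is trivially small: by increasing the number of runs, the p-value can be driven arbitrarily low, so "significance" alone tells you little about whether an effect is worth reporting:

```python
import random
import statistics
from statistics import NormalDist

def p_value_for(n_runs, true_diff=0.01, seed=0):
    """Two-sided z-test comparing two hypothetical model variants whose
    true difference in mean outcome is tiny (illustrative numbers only)."""
    rng = random.Random(seed)
    a = [rng.gauss(0.0, 1.0) for _ in range(n_runs)]
    b = [rng.gauss(true_diff, 1.0) for _ in range(n_runs)]
    se = (statistics.variance(a) / n_runs
          + statistics.variance(b) / n_runs) ** 0.5
    z = (statistics.mean(b) - statistics.mean(a)) / se
    return 2.0 * (1.0 - NormalDist().cdf(abs(z)))

# The same negligible effect, tested with ever more runs:
for n in [100, 10_000, 1_000_000]:
    print(f"n={n:>9}  p={p_value_for(n):.4f}")
```

With enough runs the negligible difference becomes "highly significant", which is exactly why effect sizes and power matter more than p-values in simulation experiments.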

Note 2: only some aspects of the results will be considered significant and other aspects considered model artefacts; it is good practice to be explicit about this. See

20 Apr 2017

Bruce's Modelling Gripes, No. 4: Not being open about code etc.

There is no excuse, these days (at least for academic purposes), not to be totally transparent about the details of the simulation. This is simply good scientific practice, so others can check, probe, play around with and inspect what you have done. The collective good that comes of this far outweighs any personal embarrassment at errors or shortcomings discovered.

This should include:
  • The simulation code itself, publicly archived somewhere, e.g. openabm.org
  • Instructions as to how to get the code running (any libraries, special instructions, data needed etc.)
  • A full description of the simulation, its structures and algorithms (e.g. using the ODD structure or similar)
  • Links or references to any papers or related models
Other things that are useful are:
  • A set of indicative/example results
  • A sensitivity analysis
  • An account of how you developed the simulation, what you tried (even if it did not work)
For most academics, somehow or other, public money has been spent funding you to do the work, so the public have a right to see the results and to expect that you uphold the highest academic standards of openness and transparency.  You should do this even if the code is not perfectly documented and rather dodgy!  There is no excuse.

For more on this see:
Edmonds, B. & Polhill, G. (2015) Open Modelling for Simulators. In Terán, O. & Aguilar, J. (Eds.) Societal Benefits of Freely Accessible Technologies and Knowledge Resources. IGI Global, 237-254. DOI: 10.4018/978-1-4666-8336-5. (Previous version at http://cfpm.org/discussionpapers/172)

12 Apr 2017

Bruce's Modelling Gripes, No. 3: 'thin' or non-existent validation

Agent-based models (ABMs) are the ultimate in being able to flexibly represent systems we observe. They are also very suggestive in their output - usually more suggestive than their validity would support. For these reasons it is very easy to be taken in by an ABM, either one's own or another's. Furthermore, good practice in the form of visualisations and co-developing the model with stakeholders increases the danger that all those who might critique the model are cognitively committed to it.

It is to avoid such (self) deception that validation is undertaken - to make sure our reliance on a model is well-founded and not illusory. Unfortunately, effective validation is resource- and data-intensive and is far from easy. The flexibility in making a model needs to be matched by the strength of the validation checks. To be blunt: it is easy to (unintentionally or otherwise) 'fit' a complex ABM to a single line of data so that it looks convincing. This kind of validation, where one aspect of the simulation outcomes is compared against one line of data, is called 'thin' validation.

In physics and other sciences they do use this kind of validation, but they have well-validated micro-foundations, so the effective flexibility of their modelling is far less than for a social simulation, where assumptions about the behaviour of the units (typically humans) are not well established. That is why such thin validation is inadequate for social phenomena.

Of course the kind and relevance of validation depends upon your purpose for modelling. If you are not really talking about the observed world but only exploring an entirely abstract world of mechanisms it might not be relevant. If you are only illustrating an idea or possible series of events and interactions then 'thin' validation might be sufficient.

However, if you are attempting to predict unknown data or establish an explanation for what is observed, then you probably need multi-dimensional or multi-aspect validation - checking the simulation in many different ways at once. Thus one might check that the right kind of social network emerges, that statistics about micro-level behaviours are correct, that emergent aggregate time-series are correct and that snapshots of the simulation show the right kind of distribution of key attributes. This is 'fat' validation, which starts to be adequate to pin down our complex constructions.
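A minimal sketch of what such multi-aspect checking might look like in code is given below. All the aspect names, values and tolerances are invented for illustration; the point is only that several different dimensions of the output are checked against data at once:

```python
def validate(sim, observed, tolerances):
    """'Fat' validation sketch: compare several different aspects of
    simulation output against observed data simultaneously.
    Returns, per aspect, the error and whether it is within tolerance."""
    report = {}
    for aspect, tol in tolerances.items():
        error = abs(sim[aspect] - observed[aspect])
        report[aspect] = (error, error <= tol)
    return report

# Hypothetical summary statistics drawn from different "dimensions" of
# the model: network structure, micro behaviour, aggregate dynamics.
sim      = {"mean_degree": 4.2, "pct_cooperating": 0.61, "peak_epidemic_week": 14}
observed = {"mean_degree": 3.9, "pct_cooperating": 0.55, "peak_epidemic_week": 15}
tolerances = {"mean_degree": 0.5, "pct_cooperating": 0.10, "peak_epidemic_week": 2}

for aspect, (error, ok) in validate(sim, observed, tolerances).items():
    print(f"{aspect}: error={error:.2f} {'PASS' if ok else 'FAIL'}")
```

A model that passes many such heterogeneous checks at once is much harder to have 'fitted' by accident than one matched to a single time-series.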

Papers that have thin or non-existent validation (e.g. they rely on plausibility alone as a check) might be interesting but should not be interpreted as saying anything reliable about the world we observe.

10 Apr 2017

Bruce's Modelling Gripes, No. 2: Making specious excuses for pragmatic limitations on modelling

We are limited beings, with limited time as well as computational and mental capacity. Any modelling of very complex phenomena (social, ecological, biological etc.) will thus be limited in terms of: (a) how much time we can spend on it (b) how much detail we can cope with (checking, validating, understanding etc.) (c) what assumptions we can check

This is fine, but instead of simply being honest about these limits there is a tendency to excuse them, to pretend that these limitations are more fundamentally justified. Three examples are as follows:
  • "For the sake of simplicity" (e.g. these articles in JASSS): this phrase implies that simpler models are somehow better in ways beyond straightforward pragmatic convenience (e.g. being easier to build, check, understand, communicate etc.)
  • Claiming that more complicated models are less complex, e.g. Sun et al. (2016), which shows a graph where "complexity may decrease after a certain threshold of model complicatedness". This is sheer wishful thinking; what is more likely true is that it is harder to notice complexity in more complicated models, but that is due to our cognitive limitations in pattern recognition, not anything to do with emergent complexity.
  • Changing English to make our achievements sound more impressive than they are, e.g. calling any calculation using a model a "prediction", when everybody else uses this word to mean genuine prediction (i.e. anticipating unknown data/observations sufficiently accurately using a model).
These weasel words would not matter so much if they were (a) purely internal to the field, with everyone understanding their meaning, and (b) never used in public/policy consultations or grant applications where they might be taken seriously. Newcomers to the field often take these excuses too literally and so change what they attempt to do, as if the excuses were genuine! This can be an excuse for taking the easy option when they should be pushing the boundaries of what is possible. When policy makers/grant funders misunderstand these claims, inevitable disappointment/disillusionment may follow, damaging the reputation of the field.


Sun, Z., Lorscheid, I., Millington, J. D., Lauf, S., Magliocca, N. R., Groeneveld, J., ... & Buchmann, C. M. (2016). Simple or complicated agent-based models? A complicated issue. Environmental Modelling & Software, 86, 56-67. 

27 Mar 2017

Bruce's Modelling Gripes, No.1: Unclear or confused modelling purpose

OK, maybe I am just becoming a grumpy ol' man, but I thought I would start a series about tendencies in my own field that I think are bad :-), so here goes...

Modelling Gripe No.1: Unclear or confused modelling purpose

A model is a tool. Tools are useful for a particular purpose. To justify a new model one has to show it is good for its intended purpose.  Some modelling purposes include: prediction, explanation, analogy, theory exploration, illustration... others are listed by Epstein [1]. Even if a model is good for more than one purpose, it needs to be justified separately for each purpose claimed.

So here are 3 common confusions of purpose:

1.    Understanding Theory or Analogy -> Explanation. Once one has immersed oneself in a model, there is a danger that the world looks like this model to its author. Here the temptation is to immediately jump to an explanation of something in the world. A model can provide a way of looking at some phenomena, but just because one can view some phenomena in a particular way does not make it a good explanation.

2.    Explanation -> Prediction. A model that establishes an explanation traces a (complex) set of causal steps from the model set-up to outcomes that compare well with observed data. It is thus tempting to suggest that one can use this model to predict this observed data. However, establishing that a model is good for prediction requires its testing against unknown data many times – this goes way beyond what is needed to establish a candidate explanation for some phenomena.

3.    Illustration -> Understanding Theory. A neat illustration of an idea suggests a mechanism. Thus the temptation is to treat a model designed as an illustration or playful exploration as sufficient for the purpose of Understanding Theory. Understanding Theory involves the extensive testing of code to check its behaviour and any assumptions. An illustration, however suggestive, is not that rigorous. For example, it may be that an illustrated process only appears under very particular circumstances, or that the outcomes were due to aspects of the model that were thought unimportant. The work to rule out these kinds of possibility is what differentiates using a model as an illustration from modelling for Understanding Theory.

Unfortunately, many authors are not clear in their papers about exactly which purpose they are justifying their model for.  Maybe they have not thought about it, maybe they are confused, or maybe they are just being sloppy (e.g. assuming that because it is good for one purpose it is good for another).