6 Feb 2024

Sorry, server temporarily out of action...

...due to university IT admin deciding its OS was too old and so a security risk. Sigh.

19 Sept 2022

Two aspects of AI - complexity and autonomy - and what they mean for dealing with them

The conflation of complexity and autonomy

Thinking in this area is often muddied by conflating two different aspects of what is called "Artificial Intelligence" (AI): (1) the complexity of the algorithms involved, which means we (as humans) can not fully understand or predict what they will come up with and (2) the autonomy of AI entities, which means that they have their own goals and priorities which might be different from ours.

Some examples illustrate the difference.

  • A trained machine learning (ML) algorithm, such as a neural network, can be very complex - it is hard to understand how it distinguishes patterns and comes up with its outputs. This makes such algorithms hard to use, because one does not know all their limitations and biases. An algorithm that works well when trained on one set of data may suddenly work much less well on similar data from another context (as people are finding in the ML replication crisis). A consequence of this complexity is that one can not simply use it, like a car - one has to be trained to use it over a period of time (like learning to sail). However, ML algorithms have no autonomy.

  • It is hard to find a simple example of something with autonomy but no complexity, because it takes a process of some complexity for autonomy to be realised. Simple machines or entities usually have little autonomy. However, the complexity does not have to be 'inside' the entity but can be in the process that makes them. If a set of entities is evolving to thrive in a complex environment, then some of those solutions might be very simple but have goals that are very different from what we would like. The complexity is in the process of evolution which only sometimes results in complex entities. Examples include viruses (which are *relatively* simple compared to us) and active solutions that are evolved in silico. The result can be an ecology of entities that makes the achievement of our goals harder.

  • Of course, some entities are both complex and autonomous, e.g. a horse. However, training can help manage both. The training and human-socialisation of a horse makes it less autonomous - more willing to accept our goals over its own. The co-training of horse and riders makes the interaction between the two more predictable - simpler - but this is never as simple as a set of well-defined signals (though these help). An ill-treated or frustrated horse might well go against the intentions of its owners but in highly predictable and understandable ways. A human-socialised, but poorly trained, horse may wish to please its rider but misunderstand and do something unpredictable.

    Some of the confusion arises because both autonomy and complexity make AI entities hard to 'use'. However, they need to be dealt with in different ways (from the point of view of us humans).

    Dealing with Complex AI

    To use a complex tool well requires a lot of training (or other meta-analysis). You can not hope to just apply it "off the shelf". This is not surprising given the extent to which people confuse themselves or make mistakes with something as simple as regression analysis. As with learning to ride a horse, it is going to take a while to get the feel of a complex tool: learning when it can be successfully used, how to use it well, how to check that it is giving you good results and how to interpret the results that come from it. Complex analytic tools are still useful, extending our mental capacities in a similar way that machines extend our physical capacities. They stand between us and what is being analysed, giving us more leverage over it, even if our understanding is now indirect. Such complex tools may require other tools to manage, check and understand what they are doing, so we might develop a hierarchy of tools analysing tools, with humans at one end and the problems we are grappling with at the other. Such a system of very indirect understanding is inevitable if we are to push the envelope further, but even more tricky to manage.

    Dealing with Autonomous AI

    Dealing with autonomous entities is another matter entirely, though a familiar one. We face this problem when we deal with other species or groups of people, particularly when we do not have much previous experience with them. Yes, they may be hard to understand due to their unfamiliar complexity, but that is just the start of the difficulties, which remain even when we have a lot of experience of their kind. You can not simply 'use' such entities, even after a period of extensive training. Part of the confusion with algorithmic autonomous entities is that it is often assumed that one can, just because they are built of algorithms. Here we should look to ecology and sociology for clues as to what to do.

    Most fundamentally, the goals and motivations of the other entities matter. Tame rats make great pets - they are sociable, adaptable, intelligent and affectionate, but _only_ given that their needs for food, shelter, social contact etc. are completely met by their human owners. Wild rats have goals that are incompatible with those of humans when they get into our houses and store rooms. The options are (a) war: killing and capturing them, (b) making sure they inhabit different spaces by stopping them getting inside, (c) sufficiently interfering with them (for example by feeding them outside but including contraceptive chemicals) - negotiation with wild rats is not possible - or (d) fleeing to avoid them (e.g. going to live on an island, as some birds do). Many other species have goals that are completely compatible with humans (many wild birds), and so we can quite happily live side by side and peaceably. Thus, whilst we have any control over the matter, it is important not to create autonomous AI entities whose goals or needs will compete with ours. We should not be seeking to make AI entities that are similar to ourselves, but ones that might be complementary. Species that inhabit the same ecological space will be in competition with each other (e.g. red and grey squirrels), and one might eventually win out as a result; species that are very different (e.g. elephants and egrets) can be compatible. Longer-term evolution can result in competition and co-adaptation so that an ecosystem works as a whole, but this can take the form of predator-prey cycles or of more cooperative co-adaptation (such as between dogs and humans).

    If the goals and the motivations of the group of entities are sufficiently compatible with ours (either being complementary or just very different), then some accommodation between us and them is possible. This might involve learning not to encroach on each other's domains, with sanctions for breaches, so we can live happily side-by-side or it might involve more active communication and cooperation. Here the sociology of cooperation comes into play, the different ways in which some kind of cooperation or trust can emerge and be maintained. Here agent-based simulation can help us de-bug and inform efforts to cooperate, especially in helping identify 'early-warning' indicators of when cooperation is breaking down. There is now a substantial body of work on the mechanisms that can support cooperation, even in social dilemma situations where it might benefit individuals in the short-term to do otherwise.
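    To make the idea of an 'early-warning' indicator concrete, here is a minimal, purely hypothetical sketch (my own toy code, not any published model): agents play a repeated donation game, strategies spread by imitating better-paid agents, and a moving average of the cooperation rate is used as a crude warning signal. All parameter values are arbitrary choices for illustration only.

        import random

        # Toy sketch: donation game with imitation of better-paid agents.
        # A falling moving average of the cooperation rate acts as a crude
        # 'early-warning' indicator that cooperation is breaking down.
        random.seed(0)
        N, rounds, window = 100, 200, 20
        cooperates = [True] * N
        history = []

        for t in range(rounds):
            payoff = [0.0] * N
            for i in range(N):
                j = random.randrange(N)
                if cooperates[i]:
                    payoff[i] -= 0.1          # cost of donating
                    payoff[j] += 0.3          # benefit to the recipient
            new = cooperates[:]
            for i in range(N):                # imitate a better-paid random agent
                j = random.randrange(N)
                if payoff[j] > payoff[i]:
                    new[i] = cooperates[j]
                if random.random() < 0.01:    # a little behavioural noise
                    new[i] = not new[i]
            cooperates = new
            history.append(sum(cooperates) / N)
            recent = sum(history[-window:]) / len(history[-window:])
            if t > window and recent < 0.5:
                print(f"round {t}: cooperation rate {recent:.2f} - early warning triggered")
                break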

    For example, one could imagine that a whole ecology of autonomous entities might be encouraged to inhabit the information sphere, as long as they are motivated to simplify and analyse that information rather than pollute it. In such a case the correct strategy might be a matter of system farming rather than direct control or management - feeding/motivating the ecology in suitable ways and dealing with crises and other difficulties (such as disease or computer viruses), rather than aiming for complete understanding or planning. Such a system could provide us with a highly complex but effective set of compatible AI entities, which we did not fully understand (nor they us) but with a mutual basis for cooperation based on our complementarity. However, the mutual rewards and basic needs of both humans and information-entities would have to be very carefully understood and managed.

    In terms of interaction between individuals, high complexity (and hence no complete mutual understanding) need not stop effective coordination. Humans are highly complex and do not completely understand how they or others make decisions - which are often highly influenced by feelings and unconscious processing (e.g. pattern recognition) - but can still cooperate effectively with each other, as long as: (a) they can justify their relevant actions in terms that are understood and acceptable to the other (even if this is not completely true), (b) they agree coordination in terms of these justifications, and (c) they influence their own decision-making processes so as to be in accord with that coordination.


    11 Jan 2021

    Bad Science keeps getting cited

    There is a problem with the efficacy of science at correcting itself. Whilst in the very long term erroneous science - even important science - does get corrected, in the short and medium term papers continue to get cited even when the science they describe has been debunked, falsified or even retracted.

    The problem seems to be (a) that severe criticism or retraction notices are not easily detectable on the page where the paper is read (a problem compounded when there are different copies of the paper around on pre-print services etc.) and (b) that papers get more readers (and hence citations) the more they are cited, so once a paper starts getting cited a lot, its citation rate becomes self-sustaining.
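    The self-sustaining dynamic in (b) is just a 'rich get richer' feedback loop. A minimal, purely illustrative toy model (made-up numbers, not real citation data) shows how a small early lead in citations tends to grow:

        import random

        # Toy 'rich get richer' illustration: each new paper cites existing papers
        # with probability proportional to their current citation count plus one,
        # then joins the citable pool itself.
        random.seed(1)
        citations = [0] * 10                      # an initial pool of 10 papers
        for _ in range(1000):                     # 1000 new papers arrive in turn
            weights = [c + 1 for c in citations]
            cited = set(random.choices(range(len(citations)), weights=weights, k=3))
            for i in cited:
                citations[i] += 1
            citations.append(0)                   # the new paper becomes citable too

        print(sorted(citations, reverse=True)[:10])  # a few papers dominate the counts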

    Some examples:

    • (Schneider et al. 2020) report on a paper that was retracted due to falsified clinical trial data, but which has continued to be cited in the 11 years since - mostly with citations that give no indication of its retraction or weakness (96% of citing articles). This is not so surprising given that the main page of the paper gives no indication that it is retracted (https://journal.chestnet.org/article/S0012-3692(15)49623-0/).
    • The case I know about personally is (Riolo et al. 2001), which was heavily criticised (e.g. Roberts & Sherratt 2002; Edmonds & Hales 2003). The original model is very brittle and it is clear the authors did not understand their own model or results: changing a "<=" to a "<" in the formulation makes the effect they report disappear, because the former forces agents with zero tolerance to cooperate with clones of themselves and hence to proliferate (see the sketch below). Despite this, the original paper is cited over 800 times according to Google citations.
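    To make the "<=" versus "<" point concrete, here is a minimal illustration (my own toy code, not the original model): in a Riolo-style tag-tolerance rule, an agent donates when the tag difference is within its tolerance, so with "<=" a zero-tolerance agent still donates to exact clones of itself.

        # Toy illustration of why the "<=" vs "<" choice matters in a
        # tag-tolerance donation rule: donate if tags differ by no more than
        # (or, strictly, by less than) the agent's own tolerance.
        def donates(tag_a, tolerance_a, tag_b, strict=False):
            diff = abs(tag_a - tag_b)
            return diff < tolerance_a if strict else diff <= tolerance_a

        # A zero-tolerance agent meeting an exact clone of itself:
        print(donates(0.3, 0.0, 0.3, strict=False))  # True  - clones still help each other
        print(donates(0.3, 0.0, 0.3, strict=True))   # False - the reported effect disappears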

    (Schneider et al. 2020) report on some of the literature reporting other cases. Retraction Watch documents retractions by journals and indeed has a "leader board" for the 10 most cited retracted papers (https://retractionwatch.com/the-retraction-watch-leaderboard/top-10-most-highly-cited-retracted-papers/), with one paper being cited 1146 times after it was retracted.

    That good science gets cited and then attracts more readers is clearly a good thing, but when the reverse happens the communication of its (lack of) quality does not work well. Firstly, there is the laziness of many researchers who cite papers without reading them (basing their citations on the citations of others). Secondly, when a paper is severely criticised rather than retracted, this is difficult to discover without reading many of the papers that cite it - a very time-consuming process. The only way the correction of bad science works is if the paper that criticises such research is cited many more times than the original.

    Some mechanism whereby the quality of papers is communicated post-review is needed. The review process can only stop some bad science from being published because completely checking research is infeasible (unless this is a core part of one's own research).

    References

    Edmonds, B. and Hales, D. (2003) Replication, Replication and Replication - Some Hard Lessons from Model Alignment. Journal of Artificial Societies and Social Simulation 6(4). http://jasss.soc.surrey.ac.uk/6/4/11.html

    Riolo, R.L., Cohen, M.D. and Axelrod, R. (2001) Evolution of cooperation without reciprocity. Nature 414, 441–443. https://www.nature.com/articles/35106555

    Roberts, G. and Sherratt, T. (2002) Does similarity breed cooperation? Nature 418, 499–500. https://doi.org/10.1038/418499b

    Schneider, J., Ye, D., Hill, A.M. et al. (2020) Continued post-retraction citation of a fraudulent clinical trial report, 11 years after it was retracted for falsifying data. Scientometrics 125, 2877–2913. https://doi.org/10.1007/s11192-020-03631-1

    5 Nov 2020

    My summary of the ABC news analysis of US election exit polls

    This is my summary of the Analysis of US exit polls done by ABC news. Full details at: https://abcnews.go.com/Elections/exit-polls-2020-us-presidential-election-results-analysis

    On the leaders, parties and issues that have become partisan, division was highly partisan and symmetric, including views on: the economy, abortion, covid, counting votes "properly", health care, climate change, BLM/racism, and the competency of the Federal government.

    Looking at all the other factors that indicated a 60% or greater tendency to vote Democrat or Republican, the analysis found:

    • Those voting Democrat tended to be: Black, Hispanic, Asian, holders of an advanced degree, in a worse or the same financial situation as four years ago, those who valued the personal qualities of the candidate (has good judgement, can unite the country), those who thought their state made voting easy, those who had voted for independents before, those from a city of over 50,000 residents, or young (18-29), non-married women, and first-time voters.
    • Those voting Republican tended to be: White and aged 45-59, Christian, those who decided their vote in the last week, those doing better financially than in 2016, or those who most valued a strong leader.

    Extracts from the full details for these factors are as follows:

    Are you: (15,318 respondents)
      Black (12% responded):              87% (D)   12% (R)
      Hispanic/Latino (13% responded):    66% (D)   32% (R)
      Asian (3% responded):               63% (D)   31% (R)

    In which age group are you? (15,452 respondents)
      18-29 (17% responded):              62% (D)   35% (R)

    Age by race (15,205 respondents)
      White 45-59 (18% responded):        38% (D)   60% (R)

    Which best describes your education? You have: (15,344 respondents)
      An advanced degree after a bachelor's degree, such as JD, MA, MBA, MD, PhD (16% responded):  62% (D)   36% (R)

    Gender by marital status (3,748 respondents)
      Non-married women (24% responded):  62% (D)   37% (R)

    Are you: [religion] (2,470 respondents)
      Protestant (14% responded):         28% (D)   71% (R)
      Catholic (27% responded):           37% (D)   62% (R)
      Other Christian (31% responded):    32% (D)   67% (R)

    Are you gay, lesbian, bisexual or transgender? (3,615 respondents)
      Yes (7% responded):                 61% (D)   28% (R)

    Is this the first year you have ever voted? (3,953 respondents)
      Yes (13% responded):                66% (D)   32% (R)

    When did you finally decide for whom to vote in the presidential election? (3,731 respondents)
      In the last week (2% responded):    30% (D)   63% (R)

    Compared to four years ago, is your family's financial situation: (3,731 respondents)
      Better today (41% responded):       25% (D)   72% (R)
      Worse today (20% responded):        74% (D)   23% (R)
      About the same (38% responded):     64% (D)   33% (R)

    Which ONE of these four candidate qualities mattered most in deciding how you voted for president? (3,845 respondents)
      Can unite the country (19% responded):  76% (D)   23% (R)
      Is a strong leader (32% responded):     28% (D)   71% (R)
      Has good judgment (23% responded):      68% (D)   27% (R)

    Which was more important in your vote for president today? (3,845 respondents)
      My candidate's personal qualities (23% responded):  66% (D)   30% (R)

    Do you think your state makes it easy or difficult for you to vote? (3,741 respondents)
      Somewhat easy (25% responded):      63% (D)   36% (R)

    Was your vote for president mainly: (3,870 respondents)
      Against his opponent (24% responded):  68% (D)   30% (R)

    In the 2016 election for president, did you vote for: (3,870 respondents)
      Hillary Clinton (Dem) (40% responded):  95% (D)    4% (R)
      Donald Trump (Rep) (42% responded):      8% (D)   92% (R)
      Other (5% responded):                   62% (D)   24% (R)
      Did not vote (11% responded):           61% (D)   37% (R)

    Population of area, three categories (15,343 respondents)
      City over 50,000 (30% responded):   60% (D)   37% (R)

    8 Oct 2020

    Some warning signs of wishful thinking

    Sorting out whether we believe something for a good reason or because we want it to be true (or, oddly, because we fear it to be true) is hard. Even those I consider very rational are not always good at this (including myself), and some of the most partisan exploit this tendency ruthlessly. However, there are some (fallible) signals that can help alert oneself to such wishful thinking, such as the following.

    1. It sanctions me doing something I want to do (drive in the car when I could have cycled, take an international holiday, eat a huge slice of over-rich chocolate cake etc.)
    2. It helps me criticise/attack/despise something I already dislike/think is bad (a government, a politician, a law, a restriction, the green movement, Capitalism, foreign aid, Brexit supporters etc.)
    3. The formulation of the belief shifts over time (from "smoking is not harmful" to "there is no evidence it is" to "it is harmful but only one of a complex of factors", from denying the earth's temperature is rising to denying it is due to humans to denying it is worth stopping etc.)
    4. The belief is constructed so that it is hard to disprove (conspiracy theories are often like this, but so too are many political claims, e.g. some of the claims of the Brexit or Remain camps, the benefit of homoeopathic cures, crystals)
    5. It sanctions me not doing something I do not want to do (find a job if I am lazy, get exercise if I am unhealthy, change my mind if I do not want to, admit to being wrong if this is embarrassing, avoid doing my expenses, wear a mask, wear a cycle helmet etc.)
    6. The use of obviously weak arguments to support the belief (it does not cost much anyway, I was going that way anyway, it is my right to do it, it won't harm anyone, Dominic Cummings did it so why not me, anything X says is rubbish, etc.)
    7. The invention of new supporting arguments only formulated when old ones are revealed to be weak or wrong (in economics - ok we know people are not rational but collectively they behave as if they are, the climate is warming due to the sunspot cycle etc.)
    8. The belief signals membership of a group I wish to belong to (holocaust denial, global warming will result in the extinction of all humans, greed is good etc.)
    9. Support of the belief is based on the persecution or the weakness of the arguments by those opposing it (Big Pharma would want this, they say there is a magic money tree, various Nationalist claims, the Government does not want you to know this etc.)
    10. It is involved in a highly political or personalised argument (Brexit, HCQ, Republican/Democrat, Immigration, lockdown, taxes etc.)
    11. There is no positive evidence for the belief - rather a (perceived) lack of negative evidence (herbal remedies, superstitions, free will/denial of free will, chemicals in the drinking water are affecting me, self-supporting arguments such as everyone is lying etc.)
    12. Hype, ridicule or insults are used to defend the belief (if you follow that line you would not be able to do anything, "socialist" in the US, only a stupid person would believe that, that is what they want you to believe, Trump is the best/worst president ever etc.)
    13. All my friends/group/kind believe it - though this is often not an explicit/conscious reason (shaving, our group is superior to others/outsiders, our technique is better, British humour is unique and similar national myths, any diet that involves harm to animals is wrong etc.)
    14. It is too interesting - too surprising, funny, odd etc. False beliefs are not constrained by boring facts and thus can be far more engaging (internet memes, the earth is hollow, you too can be thin with this simple trick etc.)
    15. It is comforting or otherwise gives me status (Earth is the centre of the universe, Humans are the pinnacle of creation/evolution, simple theories are more likely to be true, your country is special/unique etc.)
    16. New phrases/words are used that are invented by believers - because this indicates this is more a group membership thing than a matter of truth (sheeple, Remoaners, follow the crumbs etc.)
    17. It can be expressed in a very few words and has lots of CAPITALS and exclamation marks!! (political slogans, ads, tweets, etc.)
    18. Support is mostly via a list of personal endorsements (these are easy to collect at a trivial level and very hard to check)
    19. When critiqued, the response is not to engage with the argument but to reply with something else (the warning sign here is a lack of interest in the basis for the belief - the belief comes first)
    20. It is contrary to general opinion since this makes the believers special and different (and hence gives status) - the narrative of the prophet in the wilderness or one man against the system (and, yes, in common narratives - e.g. films - this person is almost invariably male).


    I am NOT saying that these are infallible signs of a wrong belief (some of them hold for some truths), and I am NOT saying one shouldn't do these things (e.g. critiquing something you feel is wrong might well be a good thing to do). It is just that each of these should give one pause to question the corresponding belief a bit more than one might otherwise do. If one's reflection on the grounds for the belief indicates they are not so solid, then think of independent ways/evidence to check that belief, or just shift position to a less certain belief, noting the doubt.

    Clearly, I should populate this list with references to evidence (but who would read that anyway ;-), which I may get around to. However, I will update this list when I find more suggestions to add.

    Also, one should distinguish these warning signs from things that are merely not entirely reliable indications of truth, including the following (nothing outside formal systems can be 100% proved).

    • That a clever person says it (like me 8-)
    • That there is a debate or many views about this (that there are climate deniers or anti-vaxxers in TV debates does not mean they have strong arguments or reliable evidence)
    • That it is repeated by many users or on many websites (that Biden used an earpiece in the debate, that the Oklahoma bomber was an Islamic refugee etc.)
    • There are non-peer-reviewed/non-rigorous papers that claim this (look at the pre-print literature on COVID-19)
    • I find it a useful/insightful way of thinking about things (if something can help one get to true insights it might equally help one to misleading ones)
    • It is plausible given what I know (plausibility makes it a hypothesis not a fact)
    • It fits with all my other knowledge/beliefs (but those might also be wrong)
    • It was taught to me at school (I was taught continental drift was false)
    • I read it in a book/newspaper (some authors, editors, journalists and publishers take care to restrict publication to statements that are well supported, but many do not)
    • An example of this is documented (examples are good starting points, but not enough to support generalisation)
    • It is complicated or technical (just because it is impressive is not enough to make it true, but it does indicate that more effort has been put into it).
    • That it is in the interest of some institution or block for you to disbelieve it (one can't ignore power relations when assessing truth, but the effect of power is complex)


    21 Mar 2019

    Slides of talk: "The Evolution of Empirical ABMs"

    A talk at the workshop "Agent-Based Models in Philosophy: Prospects and Limitations", Ruhr University Bochum, Germany, March 2019.

    Abstract:

    ABMs (like other kinds of model) can be used in a purely abstract way, as a kind of thought experiment - a way of thinking about some aspect of the world that is too complicated to hold in our mind (in all its detail). In this way it both informs and complements discursive thought. However, there is another set of uses for ABMs - empirical uses - where the mapping between the model and sets of observation-derived data is crucial. For these uses, one has to (a) use the mapping to get from some data to the model, (b) use the model for some inference and (c) use the mapping again to get back to data. This includes both predictive and explanatory uses of ABMs. These are easily distinguishable from abstract uses because there is a fixed and well-defined relationship between the model and the data; it is not flexible on a case-by-case basis. In these cases the reliability comes from the composite (a)-(b)-(c) mapping, so that simplifying step (b) can be counterproductive if it means weakening steps (a) and (c), because it is the strength of the overall chain that is important. Taking the use of models in quantum mechanics as an example, one can see that sometimes the evolution of the formal models driven by empirical adequacy can be more important than the attendant abstract models used to get a feel for what is happening. Although using ABMs for empirical purposes is more challenging than for purely abstract purposes, they are being increasingly used for empirical explanation rather than thought experiments, and there is no reason to suppose that robust empirical adequacy is unachievable.
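    As a minimal, purely hypothetical sketch of the (a)-(b)-(c) chain (placeholder functions, a trivial stand-in model and made-up data, not the models discussed in the talk):

        def data_to_model(observed):
            # (a) map observed data to model inputs (a stand-in 'calibration')
            return {"initial": observed[0], "growth": 0.1}

        def run_model(params, steps):
            # (b) use the model for inference - here a trivial growth process
            x, out = params["initial"], []
            for _ in range(steps):
                x = x * (1 + params["growth"])
                out.append(x)
            return out

        def fit_to_data(trajectory, observed):
            # (c) map the model output back to the data and measure the fit
            errors = [abs(m - d) for m, d in zip(trajectory, observed[1:])]
            return sum(errors) / len(errors)

        observed = [100, 112, 125, 138, 152]          # made-up observations
        params = data_to_model(observed)
        trajectory = run_model(params, len(observed) - 1)
        print("mean absolute error:", round(fit_to_data(trajectory, observed), 1))

    The reliability of any conclusion rests on the whole chain: weakening either mapping to make the middle step simpler can make the composite result less, not more, trustworthy.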

    Slides at: https://www.slideshare.net/BruceEdmonds/the-evolution-of-empirical-abms-137463946


    21 Aug 2018

    Slides from my plenary "How social simulation could help social science deal with context"

    An invited talk at Social Simulation 2018

    This points out how context-sensitivity is fundamental to much human social behaviour, but largely bypassed or ignored in social science. In more formal social science, it is usual to assume or fit universal models, even if these cover a lot of different contexts. In qualitative social science context is almost deified, and any generalisation across contexts is passed on to those that learn from it. Agent-based modelling allows context-sensitive models to be developed, and hence the role of context to be explored and better understood. The talk discusses a framework for analysing narrative text, the Context-Scope-Narrative-Elements (CSNE) framework. It also illustrates a cognitive model that allows context-dependent knowledge to be implemented within an agent in a simulation. The talk ends with a plea to avoid unnecessary or premature summarisation (using averages etc.).

    Slides at: https://www.slideshare.net/BruceEdmonds/how-social-simulation-could-help-social-science-deal-with-context

    15 Jul 2018

    Slides from talk at MABS2018: "Mixing ABM and Policy ... what could possibly go wrong?"

    Invited talk at 19th International Workshop on Multi-Agent Based Simulation at Stockholm on 14th July 2018.

    Mixing ABM and Policy ... what could possibly go wrong?

    This talk looks at a number of ways in which using ABM in the context of influencing policy can go wrong: during model construction, during model application, and elsewhere.

    It is related to the book chapter:
    Aodha, L. & Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity - a handbook, 2nd edition. Springer, 801-822.
    See the slides at: https://www.slideshare.net/BruceEdmonds/mixing-abm-and-policywhat-could-possibly-go-wrong

    25 Jun 2018

    Paper published in "Journal of Conflict Resolution": "Intragenerational Cultural Evolution and Ethnocentrism" by David Hales and myself

    This #cfpm_org paper suggests a cultural (horizontal) intragenerational process of in-group favouritism, in contrast with Axelrod and Hammond's (2006) model of the (vertical) evolution of a fixed in-group preference.
    Ethnocentrism denotes a positive orientation toward those sharing the same ethnicity and a negative one toward others. Previous models demonstrated how ethnocentrism might evolve intergenerationally (vertically) when ethnicity and behavior are inherited. We model short-term intragenerational (horizontal) cultural adaptation where agents have a fixed ethnicity but have the ability to form and join fluid cultural groups and to change how they define their in-group based on both ethnic and cultural markers. We find that fluid cultural markers become the dominant way that agents identify their in-group supporting positive interaction between ethnicities. However, in some circumstances, discrimination evolves in terms of a combination of cultural and ethnic markers producing bouts of ethnocentrism. This suggests the hypothesis that in human societies, even in the absence of direct selection on ethnic marker–based discrimination, selection on the use of fluid cultural markers can lead to marked changes in ethnocentrism within a generation.
    Keywords: tag-based cooperation, altruism, cultural evolution, in-group bias, ethnocentrism
     
    This is open access and available at: http://journals.sagepub.com/doi/10.1177/0022002718780481

    27 Jun 2017

    Slides for talk and draft paper on: Modelling Purposes

    A talk at the 2017 ESSA SiLiCo Summer school in Wageningen.

    Slides at: https://www.slideshare.net/BruceEdmonds/model-purpose-and-complexity

    This discusses some different purposes for a simulation model and the consequences of this in terms of its development, checking and justification. It also looks at how complex one's model should be.

    Connected to this is a draft of a paper:

    Abstract
    How one builds, checks, validates and interprets a model depends on its ‘purpose’. This is true even if the same model is used for different purposes, which means that a model built for one purpose but now used for another may need to be re-checked, re-validated and maybe even rebuilt in a different way. Here we review some of the different purposes for building a simulation model of complex social phenomena, focussing on five in particular: theoretical exposition, prediction, explanation, description and illustration. The chapter looks at some of the implications in terms of the ways in which the intended purpose might fail. In particular, it looks at the ways that a confusion of modelling purposes can fatally weaken modelling projects, whilst giving a false sense of their quality. This analysis motivates some of the ways in which these ‘dangers’ might be avoided or mitigated.

    The citation is:
    Edmonds, B. & Meyer, R. (2013) Simulating Social Complexity – a handbook. Springer. (Publisher's Page)

    The text of this draft is at: http://cfpm.org/file_download/178/Five+Different+Modelling+Purposes.pdf

    There is an updated version with 7 modelling purposes and different modelling *strategies* at: http://cfpm.org/file_download/186/Different+Modelling+Purposes-JASSS-v1.6.pdf
     

    9 Jun 2017

    Wide variation in number of votes needed to get elected

    As usual, there is a very wide variation in the number of votes needed to win each seat in the UK parliament. Provincial parties (in NI, Wales and Scotland) have it relatively easy; minority parties spread across the UK have it hard (Green, LibDem, UKIP).

    2 Jun 2017

    Slides for talk on: Modelling Innovation – some options from probabilistic to radical

    Given at the European Academy of Technology and Innovation Assessment, see notice about the talk at: https://www.ea-aw.de/service/news/2017/05/22/ea-kolloquium-prof-bruce-edmonds-vom-centre-forpolicy-modelling-cfpm-quotmodellier.html

    Abstract:

    In general, most modelling of innovation bypasses the creativity involved. Here I look at some of the different options. This loosely follows (but expands on) Margaret Boden's analysis of creativity. Four ways are presented and their implementation discussed: (a) probabilistic, where the 'innovation' simply corresponds to an unlikely event within a known distribution; (b) combinatorial, where innovation is a process of finding the right combination of existing components or aspects; (c) complex path-dependency, where the path to any particular product is a complex set of decisions or steps and is not deducible before it is discovered; and (d) radical, where the innovation causes us to think of things in a new way or introduces a novel dimension in which to evaluate. A model of making things that involves complex path-dependency will be exhibited. Some ways of moving towards (d), the most radical option, are discussed and a possible future research agenda outlined.

    Slides are at: https://www.slideshare.net/BruceEdmonds/modelling-innovation-some-options-from-probabilistic-to-radical

    27 May 2017

    Bruce's Modelling Gripes, No. 10: That I also do many of the things that annoy me

    I think this will be my last "gripe" for a while, though it has been fun letting them out. I will now let them build up inside for a while before I splurge again.

    Yes, of course, I also do many of the things I have been complaining about in these "Gripes" (though not all of them). It is a fundamental social dilemma -- what is fun or advantageous for you as an individual can be a pain for others -- what is good for the general modelling community might be a "pain in the arse" to do.

    All we can do is to try and set standards for ourselves and others and then, collectively, try to keep each other to them - including those who suggested the standards in the first place. At certain crucial points they can be enforced (for acceptance of a publication, as a condition for a grant), but even then they are much more effective as part of a social norm -- part of what good/accomplished/reputable modellers do.

    So I need this as much as anyone else. Personally, I find the honesty ones easy - I take a childish delight in being brutally honest about my own and general academic matters - but find it harder to do the "tidying up" bits once I have sorted out a model. Others will find the honesty thing harder because they lack the arrogant confidence I have. Let's keep each other straight in this increasingly "post-truth" world!

    26 May 2017

    Bruce's Modelling Gripes, No. 9: Publishing early or publishing late

    Alright, so I have cheated and rolled two gripes into one here, but the blog editor seems OK with this.
    • When modellers rush to publish a full journal article on the fun model they are developing, often over-claiming for it and generally not doing enough work, checking it or getting enough results. A discussion paper or workshop paper is good, but presenting some work as mature when it is only just developing can waste everybody's time.
    And the opposite extreme... 
    • When modellers keep a model to themselves for too long, waiting until they have it absolutely perfect before they publish and pretend that there was no messy process getting there. Perfection is fine but, please, please also put out a discussion paper on the idea early on so we know what you are working on. Also in the journal article be honest about the process you took to get there, including things you tried that did not work - as in a 'TRACE' document.
    We can have the best of both worlds: open discussion papers showing raw ideas, plus journal papers when the work is mature, please!

    25 May 2017

    Bruce's Modelling Gripes, No. 8: Unnecessary Mathematics

    Before computational simulation developed, the only kind of formal model was mathematical [note 1]. Because it is important to write models formally for the scientific process [note 2], maths became associated with science. However, solving complicated mathematical models is very hard, so pushing the envelope with such models tended to involve cutting-edge maths.

    These days when we have a choice of kinds of formal model, we can choose the most appropriate kind of model e.g.: analytic or computational [note 3]. Most complex models are not analytically solvable, so it is usually the computational route that is relevant.

    Some researchers [note 4] feel it is necessary to dress up their models in mathematical formulas, or to make the specification of the model more mathematical than it needs to be. This is annoying: not only does it make the specification harder to read, but it reduces one of the advantages of computational modelling -- that the rules can have a natural interpretation in terms of observable processes. [Note 5]

    If the most appropriate kind of model does involve maths, then use it, but do not use maths just to look 'scientific' -- that is as silly as wearing a white lab coat to program a simulation!



    Note 1: This is almost but not quite true; there were models in other formal systems, such as formal logic, but these were vanishingly rare and difficult to use.

    Note 2: Edmonds, B. (2000) The Purpose and Place of Formal Systems in the Development of Science, CPM Report 00-75, MMU, UK. (http://cfpm.org/cpmrep75.html)

    Note 3: It does not really matter if one uses maths or code to program, the only important difference is between solving analytically and calculating examples (which is simulating).

    Note 4: All fields have their own 'machismo' - how you prove you are a *real* member of the community; in some fields (e.g. economics) this has included showing one's skill at mathematics. Thus this problem is more common in some fields than others, but it is pretty widespread across many fields.

    Note 5: My first degree was in Mathematics, so I am not afraid of maths - I just can step back from the implicit status game of knowing and 'displaying' maths.

    23 May 2017

    Bruce's Modelling Gripes, No. 7: Assuming simpler is more general

    If one adds some extra detail to a general model it can become more specific -- that is, it then only applies to those cases where that particular detail holds. However, the reverse is not true: simplifying a model will not make it more general - it is just that you can imagine it being more general.

    To see why this is, consider an accurate linear equation, then eliminate the variable, leaving just a constant. The equation is now simpler, but it will now be true at only one point (and only approximately right in a small region around that point) - it is much less general than the original, because it is true for far fewer cases.
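    A tiny numeric illustration of this (my own made-up example): start with the accurate linear model y = 2x + 1 and 'simplify' it to the constant y = 5, which happens to be right at x = 2.

        # Made-up numeric illustration: simplifying an accurate linear model
        # down to a constant makes it less general, not more.
        xs = [0, 1, 2, 3, 4, 5]
        true_y = [2 * x + 1 for x in xs]       # the accurate linear equation y = 2x + 1
        constant_y = [5 for _ in xs]           # 'simplified' to y = 5 (right only at x = 2)

        for x, t, c in zip(xs, true_y, constant_y):
            print(f"x={x}: true y={t}, constant model error={abs(t - c)}")
        # The error is 0 only at x = 2 and grows as we move away from that point,
        # so the 'simpler' model holds for far fewer cases than the linear one.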

    Only under some special conditions does simplification result in greater generality:
    1. When what is simplified away is essentially irrelevant to the outcomes of interest (e.g. when there is some averaging process over a lot of random deviations)
    2. When what is simplified away happens to be constant for all the situations considered (e.g. the acceleration due to gravity is always 9.8 m/s^2 downwards)
    3. When you hugely loosen your criteria for being approximately right as you simplify (e.g. moving from a requirement that results match some concrete data to using the model as a vague analogy for what is happening)
    In other cases, where you compare like with like (i.e. you don't move the goalposts, as in (3) above), simplification only works if you happen to know what can be safely simplified away.

    Why people think that simplification might lead to generality is somewhat of a mystery. Maybe they assume that the universe must ultimately obey simple laws, so that simplification is the right direction (but of course, even if this were true, we would not know which way to safely simplify). Maybe they are really thinking about the other direction: slowly becoming more accurate by making the model mirror the target more. Maybe it is just a justification for laziness, an excuse for avoiding messy complicated models. Maybe they just associate simple models with physics. Maybe they just hope their simple model is more general.

    5 May 2017

    Bruce's Modelling Gripes, No. 6: Over-hyping significance of a simulation to Funders, Policy Makers and the Public

    When talking to other simulation modellers, a certain latitude is permissible in terms of describing the potential impact of our models. For example, if we say "This simulation could be used to evaluate policy options concerning ...", the audience probably knows that, although this is theoretically possible, there are many difficulties in doing so. They make allowance for the (understandable) enthusiasm of the model's creator, and they know such pronouncements will be taken with 'a pinch of salt'.

    However, it is a very different situation when the importance, impact or possible use of models is exaggerated to an audience of non-modellers, who are likely to take such pronouncements at face value. This includes promises in grant applications, journal publications, public lectures and discussions with policy actors/advisers. They will not be in a position to properly evaluate the claims made and have to take the results on trust (or ignore them, along with the advice of other 'experts' and 'boffins').

    The danger is that the reputation of the field will suffer when people rely on models for purposes that they are not established for. The refrain could become "Lies, damned lies, statistics and simulations". This is especially important in this era where scientists are being questioned and sometimes ignored.

    Some of the reasons for such hype lie in the issues discussed in previous posts and some seem to lie elsewhere.
    • Confusions about purpose, thinking that establishing a simulation for one purpose is enough to suggest a different purpose
    • Insufficient validation for the use or importance claimed
    • Being deceived by the "theoretical spectacles" effect [note 1] -- when one has worked with a model for a while, one tends to see the world through the "lens" of that model. Thus we confuse a way of understanding the world with the truth about it.
    • Sheer fraud: we want a grant, or to get published, or to justify a grant, so we bend the truth about our models somewhat. For example, promising far more in a grant proposal than we know we will be able to deliver.
    Among other modellers, we can easily be found out and understood. With others we can get away with it for a time, but it will catch up with us in terms of an eventual loss of reputation. We really do not want to be like the economists!

    Note 1: "theoretical spectacles" was a phrase introduced by Thomas Kuhn to describe the effect of only noticing evidence that is consistent with the theory one believes.

    27 Apr 2017

    Bruce's Modelling Gripes, No. 5: None or not many results

    The point of a model lies mostly in its results. Indeed, I often only bother to read how a model has been constructed if the results look interesting. One has to remember that how a model is constructed - its basis, the choices you have made, the programming challenges - is far, FAR more interesting to you, the programmer, than to anyone else. Besides, if you have gone to all the trouble of making a model, the least you can do is extract some results and analyse them.

    Ideally one should include the following:
    1. Some indicative results to give readers an idea of the typical or important behaviour of the model. This really helps understand a model's nature and also hints at its "purpose". This can be at a variety of levels - whatever helps to make the results meaningful and vivid. It could include visualisations of example runs as well as the normal graphs -- even following the events happening to a single agent, if that helps.
    2. A sensitivity analysis - checking how varying parameters affects the key results (a minimal example is sketched after this list). This involves some judgement, as it is usually impossible to do a comprehensive survey. What kind of sensitivity analysis, and over what dimensions, depends on the nature of the model, but not including ANY sensitivity analysis generally means that you are not (yet?) serious about the model (and if you are not taking it seriously, others probably will not either).
    3. In the better papers, some hypotheses about the key mechanisms that seem to determine the significant results are explicitly stated and then tested with some focussed simulation experiments -- trying to falsify the explanations offered. These results with maybe some statistics should be exhibited [note 1].
    4. If the simulation is being validated or otherwise compared against data, this should be (a) shown then (b) measured. [note 2] 
    5. If the simulation is claimed to be predictive, its success at repeatedly predicting data (unknown to the modeller at the time) should be tabulated. It is especially useful in this context to give an idea of when the model predicts and when it does not.
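    For point 2, a minimal, purely illustrative sketch of a one-at-a-time parameter sweep (the 'model' here is a made-up placeholder, not a real ABM; the parameter range and number of replicates are arbitrary):

        import random

        # Illustrative one-at-a-time sensitivity sweep: vary one parameter,
        # run several replicates per setting, and report the spread of a key output.
        def toy_model(p, seed):
            random.seed(seed)
            x = 0.5
            for _ in range(100):
                x += p * (random.random() - 0.5)   # placeholder stochastic dynamics
            return x

        for p in [0.01, 0.05, 0.1, 0.2, 0.4]:                  # sweep one parameter
            runs = [toy_model(p, seed) for seed in range(30)]  # 30 replicates each
            mean = sum(runs) / len(runs)
            spread = max(runs) - min(runs)
            print(f"p={p}: mean={mean:.2f}, range={spread:.2f}")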
    What you show does depend on your model purpose. If the model is merely to illustrate an idea, then some indicative results may be sufficient for your goal, but more may still be helpful to the reader. If you are aiming to support an explanation of some data then a lot more is required. A theoretical exploration of some abstract mechanisms probably requires a very comprehensive display of results.

    If you have no, or very few, results, you should ask yourself whether there is any point in publishing. On most occasions it would be better to wait until you have some.



    Note 1: p-values are probably not relevant here, since by doing enough runs one can pretty much get any p-value one desires. However checking you have the right power is probably important.  See

    Note 2: that only some aspects of the results will be considered significant and other aspects considered model artefacts - it is good practice to be explicit about this. See