27 Jun 2017

Slides for talk and draft paper on: Modelling Purposes

A talk at the 2017 ESSA SiLiCo Summer school in Wageningen.

Slides at: https://www.slideshare.net/BruceEdmonds/model-purpose-and-complexity

This discusses some different purposes for a simulation model and the consequences of this in terms of its development, checking and justification. It also looks at how complex one's model should be.

Connected to this is a draft of a paper:

Abstract
How one builds, checks, validates and interprets a model depends on its ‘purpose’. This is true even if the same model is used for different purposes, which means that a model built for one purpose but now used for another may need to be re-checked, re-validated and maybe even rebuilt in a different way. Here we review some of the different purposes for building a simulation model of complex social phenomena, focussing on five in particular: theoretical exposition, prediction, explanation, description and illustration. The chapter looks at some of the implications in terms of the ways in which the intended purpose might fail. In particular, it looks at the ways that a confusion of modelling purposes can fatally weaken modelling projects, whilst giving a false sense of their quality. This analysis motivates some of the ways in which these ‘dangers’ might be avoided or mitigated.

The citation is:
Edmonds, B. & Meyer, R. (2013) Simulating Social Complexity – a handbook. Springer. (Publisher's Page)

The text of this draft is at: http://cfpm.org/file_download/178/Five+Different+Modelling+Purposes.pdf

There is an updated version with 7 modelling purposes and different modelling *strategies* at: http://cfpm.org/file_download/186/Different+Modelling+Purposes-JASSS-v1.6.pdf
 

9 Jun 2017

Wide variation in number of votes needed to get elected

As usual, there is a very wide variation in the number of votes needed to get each seat in the UK parliament. Provincial parties have it relatively easy (in NI, Wales and Scotland), while minority parties spread across the UK have it hard (Green, LibDem, UKIP).
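To make the arithmetic concrete, "votes per seat" here is simply a party's total vote divided by the number of seats it won. A minimal sketch with made-up, purely illustrative figures (not the actual 2017 results):

```python
# Hypothetical, illustrative figures only -- not the real 2017 election results.
results = {
    "Large, geographically concentrated party": (10_000_000, 300),
    "Provincial party": (250_000, 10),
    "Minority party spread across the UK": (2_000_000, 8),
}

for party, (votes, seats) in results.items():
    print(f"{party}: {votes / seats:,.0f} votes per seat")
```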

2 Jun 2017

Slides for talk on: Modelling Innovation – some options from probabilistic to radical

Given at the European Academy of Technology and Innovation Assessment, see notice about the talk at: https://www.ea-aw.de/service/news/2017/05/22/ea-kolloquium-prof-bruce-edmonds-vom-centre-forpolicy-modelling-cfpm-quotmodellier.html

Abstract:

In general, most modelling of innovation bypasses the creativity involved. Here I look at some of the different options. This loosely follows (but expands) Margaret Boden's analysis of creativity. Four ways are presented and their implementation discussed: (a) probabilistic, where the 'innovation' simply corresponds to an unlikely event within a known distribution; (b) combinatorial, where innovation is a process of finding the right combination of existing components or aspects; (c) complex path-dependency, where the path to any particular product is a complex set of decisions or steps and not deducible before it is discovered; and (d) radical, where the innovation causes us to think of things in a new way or introduces a novel dimension in which to evaluate. A model of making things that introduces complex path-dependency will be exhibited. Some ways of moving towards (d), the most radical option, are discussed and a possible future research agenda outlined.
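As a rough sketch (my own illustration, not code from the talk) of how options (a) and (b) might look when implemented, assuming an arbitrary evaluation function for scoring combinations:

```python
import itertools
import random

# Sketch of option (a): 'innovation' as an unlikely event within a known
# distribution -- each attempt succeeds with a small, fixed probability.
def probabilistic_innovation(p_breakthrough=0.001):
    return random.random() < p_breakthrough

# Sketch of option (b): innovation as finding the right combination of
# existing components, scored by some evaluation function.
def combinatorial_innovation(components, evaluate, size=3):
    return max(itertools.combinations(components, size), key=evaluate)

# Illustrative use: which trio of 'components' scores best under a toy measure?
best = combinatorial_innovation(["gear", "spring", "lever", "wheel", "cam"],
                                evaluate=lambda combo: len("".join(combo)),
                                size=3)
print(best)
```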

Slides are at: https://www.slideshare.net/BruceEdmonds/modelling-innovation-some-options-from-probabilistic-to-radical

27 May 2017

Bruce's Modelling Gripes, No. 10: That I also do many of the things that annoy me

I think this will be my last "gripe" for a while, though it has been fun letting them out. I will now let them build up inside for a while before I splurge again.

Yes, of course, I also do many of the things I have been complaining about in these "Gripes" (though not all of them). It is a fundamental social dilemma -- what is fun or advantageous for you as an individual can be a pain for others -- what is good for the general modelling community might be a "pain in the arse" to do.

All we can do is to try and set standards for ourselves and others and then, collectively, try to keep each other to them - including those of us who suggest them. At certain crucial points they can be enforced (for acceptance for a publication, as a condition for a grant), but even then they are much more effective as part of a social norm -- part of what good/accomplished/reputable modellers do.

So I need this as much as anyone else. Personally I find the honesty ones easy - I have a childish delight in being brutally honest about my own and general academic matters - but find it harder to do the "tidying up" bits once I have sorted out a model. Others will find the honesty thing harder because they lack the arrogant confidence I have. Let's keep each other straight in this increasingly "post-truth" world!

26 May 2017

Bruce's Modelling Gripes, No. 9: Publishing early or publishing late

Alright, so I have cheated and rolled two gripes into one here, but the blog editor seems OK with this.
  • When modellers rush to publish a full journal article on the fun model they are developing, often over-claiming for it and generally not doing enough work, checking it or getting enough results. A discussion paper or workshop paper is good, but presenting some work as mature when it is only just developing can waste everybody's time.
And the opposite extreme... 
  • When modellers keep a model to themselves for too long, waiting until they have it absolutely perfect before they publish, and pretend that there was no messy process in getting there. Perfection is fine but, please, please also put out a discussion paper on the idea early on so we know what you are working on. Also, in the journal article be honest about the process you took to get there, including things you tried that did not work - as in a 'TRACE' document.
We can have the best of both worlds: open discussion papers showing raw ideas, plus journal papers when the work is mature, please!

25 May 2017

Bruce's Modelling Gripes, No. 8: Unnecessary Mathematics

Before computational simulation developed, the only kind of formal model was mathematical [note 1]. Because it is important to write models formally for the scientific process [note 2], maths became associated with science. However, solving complicated mathematical models is very hard, so pushing the envelope of these mathematical models tended to involve cutting-edge maths.

These days when we have a choice of kinds of formal model, we can choose the most appropriate kind of model e.g.: analytic or computational [note 3]. Most complex models are not analytically solvable, so it is usually the computational route that is relevant.

Some researchers [note 4] feel it is necessary to dress up their models using mathematical formulas, or to make the specification of the model more mathematical than it needs to be. This is annoying: not only does it make the specification harder to read, but it reduces one of the advantages of computational modelling -- that the rules can have a natural interpretation in terms of observable processes. [Note 5]

If the model does involve maths, then use it, but do not use maths just to look 'scientific' -- that is as silly as wearing a white lab coat to program a simulation!



Note 1: This is almost, but not quite, true: there were models in other formal systems, such as formal logic, but these were vanishingly rare and difficult to use.

Note 2: Edmonds, B. (2000) The Purpose and Place of Formal Systems in the Development of Science, CPM Report 00-75, MMU, UK. (http://cfpm.org/cpmrep75.html)

Note 3: It does not really matter whether one uses maths or code to express a model; the only important difference is between solving analytically and calculating examples (which is simulating).

Note 4: All fields have their own 'machismo' -- ways in which you prove you are a *real* member of the community -- and in some fields (e.g. economics) this has included showing one's skill at mathematics. Thus this problem is more common in some fields than others, but it is pretty widespread.

Note 5: My first degree was in Mathematics, so I am not afraid of maths; I just can step back from the implicit status game of knowing and 'displaying' maths.

23 May 2017

Bruce's Modelling Gripes, No. 7: Assuming simpler is more general

If one adds some extra detail to a general model it can become more specific -- that is, it then only applies to those cases where that particular detail holds. However, the reverse is not true: simplifying a model will not make it more general - it is just that you can imagine it being more general.

To see why this is, consider an accurate linear equation, then eliminate the variable, leaving just a constant. The equation is now simpler, but it will only be exactly true at one point (and only approximately right in a small region around that point) - it is much less general than the original, because it is true for far fewer cases.
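A toy numerical version of this argument (my own illustration): if the accurate relationship is y = 2x + 3, then the 'simplified' constant model y = 3 is only right at x = 0, and its error grows the further you move from that point.

```python
# Toy illustration: simplifying y = 2x + 3 down to the constant y = 3
# makes the model *less* general -- it is only right at x = 0.
def true_relation(x):
    return 2 * x + 3

def constant_model(x):
    return 3

for x in [0, 0.1, 1, 10, 100]:
    error = abs(constant_model(x) - true_relation(x))
    print(f"x = {x}: error of simplified model = {error}")
```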

Only under some special conditions does simplification result in greater generality:
  1. When what is simplified away is essentially irrelevant to the outcomes of interest (e.g. when there is some averaging process over a lot of random deviations)
  2. When what is simplified away happens to be constant for all the situations considered (e.g. gravity is always 9.8m/s^2 downwards)
  3. When you loosen your criteria for being approximately right hugely as you simplify (e.g. move from a requirement that results match some concrete data to using the model as a vague analogy for what is happening)
In other cases, where you compare like with like (i.e. you do not move the goalposts, as in (3) above), simplification only works if you happen to know what can be safely simplified away.

Why people think that simplification might lead to generality is somewhat of a mystery. Maybe they assume that the universe has to ultimately obey simple laws, so that simplification is the right direction (but of course, even if this were true, we would not know which way to safely simplify). Maybe they are really thinking about the other direction, slowly becoming more accurate by making the model mirror the target more. Maybe it is just a justification for laziness, an excuse for avoiding messy complicated models. Maybe they just associate simple models with physics. Maybe they just hope their simple model is more general.

5 May 2017

Bruce's Modelling Gripes, No. 6: Over-hyping significance of a simulation to Funders, Policy Makers and the Public

When talking to other simulation modellers, a certain latitude is permissible in terms of describing the potential impact of our models. For example, if we say "This simulation could be used to evaluate policy options concerning ...", the audience probably knows that, although this is theoretically possible, there are many difficulties in doing so. They make allowance for the (understandable) enthusiasm of the model's creator, and they know such pronouncements will be taken with 'a pinch of salt'.

However, it is a very different situation when the importance, impact or possible use of models is exaggerated to an audience of non-modellers, who are likely to take such pronouncements at face value. This includes promises in grant applications, journal publications, public lectures and discussions with policy actors/advisers. Such audiences will not be in a position to properly evaluate the claims made and have to take the results on trust (or ignore them, along with the advice of other 'experts' and 'boffins').

The danger is that the reputation of the field will suffer when people rely on models for purposes that they are not established for. The refrain could become "Lies, damned lies, statistics and simulations". This is especially important in this era where scientists are being questioned and sometimes ignored.

Some of the reasons for such hype lie in the issues discussed in previous posts and some seem to lie elsewhere.
  • Confusions about purpose, thinking that establishing a simulation for one purpose is enough to suggest a different purpose
  • Insufficient validation for the use or importance claimed
  • Being deceived by the "theoretical spectacles" effect [note 1] -- when one has worked with a model for a while, one tends to see the world through the "lens" of that model. Thus we confuse a way of understanding the world with the truth about it.
  • Sheer fraud: we want a grant, or to get published, or to justify a grant, so we bend the truth about our models somewhat. For example, promising far more in a grant proposal than we know we will be able to deliver.
Among other modellers, we can be easily found out and understood. With others we can get away with it for a time, but it will catch up with us in terms of an eventual loss of reputation. We really do not want to be like the economists!

Note 1: "theoretical spectacles" was a phrase introduced by Thomas Kuhn to describe the effect of only noticing evidence that is consistent with the theory one believes.

27 Apr 2017

Bruce's Modelling Gripes, No. 5: None or not many results

The point of a model lies mostly in its results. Indeed, I often only bother to read how a model has been constructed if the results look interesting. One has to remember that how a model is constructed - its basis, the choices you have made, the programming challenges - is far, FAR more interesting to you, the programmer, than to anyone else. Besides, if you have gone to all the trouble of making a model, the least you can do is extract some results and analyse them.

Ideally one should include the following:
  1. Some indicative results to give readers an idea of the typical or important behaviour of the model. This really helps understand a model's nature and also hints at its "purpose". This can be at a variety of levels - whatever helps to make the results meaningful and vivid. It could include visualisations of example runs as well as the normal graphs -- even following the events happening to a single agent, if that helps.
  2. A sensitivity analysis - checking how varying parameters affects the key results (a minimal sketch of such a sweep appears just after this list). This involves some judgement, as it is usually impossible to do a comprehensive survey. What kind of sensitivity analysis, and over what dimensions, depends on the nature of the model, but not including ANY sensitivity analysis generally means that you are not (yet?) serious about the model (and if you are not taking it seriously others probably will not either).
  3. In the better papers, some hypotheses about the key mechanisms that seem to determine the significant results are explicitly stated and then tested with some focussed simulation experiments -- trying to falsify the explanations offered. These results with maybe some statistics should be exhibited [note 1].
  4. If the simulation is being validated or otherwise compared against data, this should be (a) shown then (b) measured. [note 2] 
  5. If the simulation is claimed to be predictive, its success at repeatedly predicting data (unknown to the modeller at the time) should be tabulated. It is especially useful in this context to give an idea of when the model predicts and when it does not.
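Regarding point 2 above, here is a minimal sketch of a one-factor-at-a-time parameter sweep. It assumes a hypothetical run_model function (standing in for your own simulation) that returns the key outcome of a single run; it is only a sketch, not a substitute for a properly designed sensitivity analysis.

```python
import statistics

# Minimal one-factor-at-a-time sensitivity sweep (a sketch only).
# 'run_model' is a hypothetical stand-in for your own simulation; it should
# accept the parameters as keyword arguments and return one key outcome.
def sensitivity_sweep(run_model, baseline, sweeps, repetitions=30):
    results = {}
    for param, values in sweeps.items():
        for value in values:
            params = dict(baseline, **{param: value})
            outcomes = [run_model(**params) for _ in range(repetitions)]
            results[(param, value)] = (statistics.mean(outcomes),
                                       statistics.stdev(outcomes))
    return results

# Usage sketch:
# table = sensitivity_sweep(run_model,
#                           baseline={"num_agents": 100, "p_link": 0.05},
#                           sweeps={"p_link": [0.01, 0.05, 0.1, 0.2]})
```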
What you show does depend on your model purpose. If the model is merely to illustrate an idea, then some indicative results may be sufficient for your goal, but more may still be helpful to the reader. If you are aiming to support an explanation of some data then a lot more is required. A theoretical exploration of some abstract mechanisms probably requires a very comprehensive display of results.

If you have no, or very few, results, you should ask yourself if there is any point in publishing. On most occasions it might be better to wait until you have some.



Note 1: p-values are probably not relevant here, since by doing enough runs one can pretty much get any p-value one desires. However, checking you have the right statistical power is probably important. See
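For example, a minimal power check might look like the sketch below (the effect size, alpha and power levels are illustrative assumptions of mine; it assumes the statsmodels package is available):

```python
# Sketch: roughly how many runs per experimental condition are needed to
# detect a 'medium' effect at 80% power? (Illustrative values only.)
from statsmodels.stats.power import TTestIndPower

n_runs = TTestIndPower().solve_power(effect_size=0.5,  # assumed effect size
                                     alpha=0.05,
                                     power=0.8)
print(f"About {n_runs:.0f} runs per condition")
```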

Note 2: Only some aspects of the results will be considered significant, with other aspects considered model artefacts - it is good practice to be explicit about this. See

20 Apr 2017

Bruce's Modelling Gripes, No. 4: Not being open about code etc.

There is no excuse, these days (at least for academic purposes), not to be totally transparent about the details of the simulation. This is simply good scientific practice, so others can check, probe, play around with and inspect what you have done. The collective good that comes of this far outweighs any personal embarrassment at errors or shortcomings discovered.

This should include:
  • The simulation code itself, publicly archived somewhere, e.g. openabm.org
  • Instructions as to how to get the code running (any libraries, special instructions, data needed etc.)
  • A full description of the simulation, its structures and algorithms (e.g. using the ODD structure or similar)
  • Links or references to any papers or related models
Other things that are useful are:
  • A set of indicative/example results
  • A sensitivity analysis
  • An account of how you developed the simulation, what you tried (even if it did not work)
For most academics, somehow or other, public money has been spent funding you to do the work, so the public have a right to see the results and to expect that you uphold the highest academic standards of openness and transparency. You should do this even if the code is not perfectly documented and is rather dodgy! There is no excuse.

For more on this see:
Edmonds, B. & Polhill, G. (2015) Open Modelling for Simulators. In Terán, O. & Aguilar, J. (Eds.) Societal Benefits of Freely Accessible Technologies and Knowledge Resources. IGI Global, 237-254. DOI: 10.4018/978-1-4666-8336-5. (Previous version at http://cfpm.org/discussionpapers/172)

12 Apr 2017

Bruce's Modelling Gripes, No. 3: 'thin' or non-existent validation

Agent-based models (ABMs) are the ultimate in being able to flexibly represent systems we observe. They are also very suggestive in their output - usually more suggestive than their validity would support. For these reasons it is very easy to be taken in by an ABM, either one's own or another's. Furthermore, good practice in the form of visualisations and co-developing the model with stakeholders increases the danger that all those who might critique the model are cognitively committed to it.

It is to avoid such (self) deception that validation is undertaken - to make sure our reliance on a model is well-founded and not illusory. Unfortunately, effective validation is resource- and data-intensive and is far from easy. The flexibility in making a model needs to be matched by the strength of the validation checks. To be blunt - it is easy to (unintentionally or otherwise) 'fit' a complex ABM to a single line of data, so it looks convincing. This kind of validation, where one aspect of the simulation outcomes is compared against one line of data, is called 'thin validation'.

In physics and other sciences they do use this kind of validation, but they have well-validated micro-foundations, so the effective flexibility of their modelling is far less than for a social simulation, where assumptions about the behaviour of the units (typically humans) are not well known. That is why such thin validation is inadequate for social phenomena.

Of course the kind and relevance of validation depends upon your purpose for modelling. If you are not really talking about the observed world but only exploring an entirely abstract world of mechanisms it might not be relevant. If you are only illustrating an idea or possible series of events and interactions then 'thin' validation might be sufficient.

However, if you are attempting to predict unknown data or establish an explanation for what is observed, then you probably need multi-dimensional or multi-aspect validation - checking the simulation in many different ways at once. Thus one might check whether the right kind of social network emerges, whether statistics about micro-level behaviours are correct, whether emergent aggregate time-series are correct, and whether snapshots of the simulation show the right kind of distribution of key attributes. This is 'fat' validation, which starts to be adequate to pin down our complex constructions.
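A schematic sketch of what 'fat' validation might look like in code (the aspect names are illustrative; each check would wrap a real statistical comparison of simulation output against data):

```python
# Sketch of 'fat' (multi-aspect) validation: the simulation only counts as
# validated if it is acceptable on several different kinds of comparison
# at once, not just against one line of data.
def fat_validation(checks):
    """checks: {aspect name: zero-argument function returning True/False}."""
    results = {name: check() for name, check in checks.items()}
    for name, passed in results.items():
        print(f"{name}: {'passed' if passed else 'FAILED'}")
    return all(results.values())

# Usage sketch -- each lambda stands in for a real comparison against data:
ok = fat_validation({
    "emergent social network structure": lambda: True,
    "micro-level behaviour statistics": lambda: True,
    "aggregate time-series fit": lambda: False,
    "snapshot attribute distributions": lambda: True,
})
```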

Papers that have thin or non-existent validation (e.g. they rely on plausibility alone as a check) might be interesting but should not be interpreted as saying anything reliable about the world we observe.

10 Apr 2017

Bruce's Modelling Gripes, No. 2: Making specious excuses for pragmatic limitations on modelling

We are limited beings, with limited time as well as computational and mental capacity. Any modelling of very complex phenomena (social, ecological, biological etc.) will thus be limited in terms of: (a) how much time we can spend on it; (b) how much detail we can cope with (checking, validating, understanding etc.); and (c) what assumptions we can check.

This is fine, but instead of simply being honest about these limits there is a tendency to excuse them, to pretend that these limitations are more fundamentally justified. Three examples are as follows:
  • "for the sake of simplicity" (e.g. these articles in JASSS), this implies that simpler models are somehow better in ways beyond that of straightforward pragmatic convenience (e.g. easier to build, check, understand, communicate etc.)
  • Claiming that more complicated models are less complex, e.g. Sun et al. (2016), which shows a graph where "complexity may decrease after a certain threshold of model complicatedness". This is sheer wishful thinking; what is more likely to be true is that it is harder to notice complexity in more complicated models, but that is due to our cognitive limitations in terms of pattern recognition, not anything to do with emergent complexity.
  • Changing English to make our achievements sound more impressive than they are, e.g. to call any calculation using a model a "prediction", when everybody else uses this word to really mean prediction (i.e. anticipating unknown data/observations sufficiently accurately using a model).
These weasel words would not matter so much if they were (a) purely internal to the field, with everyone understanding their meaning, and (b) not used in public/policy consultations or grant applications where they might be taken seriously. Newcomers to the field often take these excuses too literally and so change what they attempt to do, as if these excuses were genuine! This can be an excuse for following the easy option when they should be pushing the boundaries of what is possible. When policy makers/grant funders misunderstand these claims, an inevitable disappointment/disillusionment may follow, damaging the reputation of the field.

References

Sun, Z., Lorscheid, I., Millington, J. D., Lauf, S., Magliocca, N. R., Groeneveld, J., ... & Buchmann, C. M. (2016). Simple or complicated agent-based models? A complicated issue. Environmental Modelling & Software, 86, 56-67. 

27 Mar 2017

Bruce's Modelling Gripes, No.1: Unclear or confused modelling purpose

OK, maybe I am just becoming a grumpy ol' man, but I thought I would start a series about tendencies in my own field that I think are bad :-), so here goes...

Modelling Gripe No.1: Unclear or confused modelling purpose

A model is a tool. Tools are useful for a particular purpose. To justify a new model one has to show it is good for its intended purpose.  Some modelling purposes include: prediction, explanation, analogy, theory exploration, illustration... others are listed by Epstein [1]. Even if a model is good for more than one purpose, it needs to be justified separately for each purpose claimed.

So here are 3 common confusions of purpose:


1.    Understanding Theory or Analogy -> Explanation. Once one has immersed oneself in a model, there is a danger that the world looks like this model to its author. Here the temptation is to immediately jump to an explanation of something in the world. A model can provide a way of looking at some phenomena, but just because one can view some phenomena in a particular way does not make it a good explanation.

2.    Explanation -> Prediction. A model that establishes an explanation traces a (complex) set of causal steps from the model set-up to outcomes that compare well with observed data. It is thus tempting to suggest that one can use this model to predict this observed data. However, establishing that a model is good for prediction requires its testing against unknown data many times – this goes way beyond what is needed to establish a candidate explanation for some phenomena.

3.    Illustration -> Understanding Theory. A neat illustration of an idea suggests a mechanism. Thus the temptation is to use a model designed as an illustration or playful exploration as being sufficient for the purpose of Understanding Theory. Understanding Theory involves the extensive testing of code to check its behaviour and any assumptions. An illustration, however suggestive, is not that rigorous. For example, it may be that an illustrated process only appears under very particular circumstances, or it may be that the outcomes were due to aspects of the model that were thought to be unimportant. The work to rule out these kinds of possibility is what differentiates using a model as an illustration from modelling for Understanding Theory.

Unfortunately, many authors are not clear in their papers about specifying exactly for which purpose they are justifying their model. Maybe they have not thought about it, maybe they are just confused, or maybe they are just being sloppy (e.g. assuming that because it is good for one purpose it is good for another).


6 Mar 2017

The Post-Truth Drift -- why it is partly the fault of Science (a short essay)

In a time which talks about being in a "post-truth" era of public discourse, and where the reputation of "experts" as a group is questioned, it is easy to blame others for the predicament that scientists find themselves in (e.g. politicians, journalists, big business interests etc.). However, I argue that a substantial part of the blame must fall on ourselves, the scientists -- that we have (collectively), more than anyone, knocked away the pedestal on which we stood.

Firstly, scientists have increasingly allowed their work to be prematurely publicised - "announcing" breakthroughs with the first indicative results (or even before). It is, of course, understandable that scientists should believe in their own research, but it should be part of the discipline that we do not claim more than we have proved. Partly this is due to funding and institutional pressure, to quickly claim impact and progress (I remember an EU funding call that asked for "fundamental theoretical breakthroughs" and "policy impact" in the same project), but again it is part of the job to resist these pressures. More fundamentally, the basis for academic reputation has changed from cautious work to being first with new theories -- from collective to individual achievement.

The result of this over-hyping of results is that science loses its reputation for caution and reliability. This has been particularly stark in some of the "softer" sciences like nutrition or economics. All the clever mathematics in the world did not stop economists missing the last economic collapse - their lack of empirical foundations coming back to bite them. In the case of nutrition, a series of discoveries have been announced before their full complexity is understood.

However this goes a lot further than the softer sciences. The recent crisis in reproducibility in many fields indicates that publication has overtaken caution even for results that are not publicised outside their own field. This indicates that there is an imbalance in these fields with not enough people replicating and checking work and too many racing to discover things first. This is evident in some fields where there are a lot of researchers proposing or talking about abstract theories and not many doing the more concrete work, such as empirical measurement. Reputation should follow when an idea or model empirically checks out and not before.

Measuring academic reputation on citation-based indices reinforces this deleterious trend - one can get many citations for proposing an attractive or controversial idea, but it is independent of whether one was right or not. If we reward academics by their academic popularity with their peers rather than whether they were right, then that will affect the kind of academics we attract into the profession. Many fields are dominated by cliques who cite each other and (consciously or unconsciously) determine the methodological norms.

All fields need some methodological norms; however, these norms can come about in ways that are independent of their success or reliability. Papers that grab a lot of attention can be more influential in these terms than those that turned out to be right. All fields seem to adjust their standards of success to ensure that the field, as a whole, can demonstrate progress and hence justify itself. When faced with highly complex phenomena, this can lead to a dilution of the criteria of success so that this is achievable. In my field, abstract simulations without strong empirical foundations that provide a way of thinking about issues gain more attention than they should and, more worryingly, are then advertised as able to perform "what if" analyses on policy interventions (implying their results will somehow correspond to reality). In economics, prediction rather than structural realism was declared as the aim of its modelling, but this weakened to predicting known out-of-sample data.

If all this weakening of criteria were internal, and scientists were ultra-careful about not deceiving others into thinking their results were reliable, this would not be so bad. However, whether deliberately or otherwise, far too often the funders/policy makers/public are left with an impression that declared findings are more solidly based than they are. This is exacerbated by the grant funding process, where people who promise great results and impact are funded and more realistic proposals rejected. If one gets a grant based upon such promises, there is then pressure to justify outcomes that fall short of these, and to use language to obscure this.

Finally, when scientific advice and the policy world meet there is often fundamental misunderstanding, and this is partly the fault of the academics. In the wish for relevance and "impact", academics can be pressured into not being completely honest, and into providing the policy makers with what they want regardless of whether this is justified by the science. One trouble with this interface is that there is not a clear line of responsibility -- if the advice from the scientists conflicts with the views of the policy makers, what are they to do? If they trust some complex process that they do not understand, they are effectively delegating some of their responsibility; if they only trust it when it agrees with their intuitions, then this selection bias ensures that support rather than critique of decisions gets diffused.

The blurring of political and scientific debate that results from a non-cautious entry into policy debate has meant that debate over method and the reliability of results gets conflated with a confrontation between alternative results. Classically, science has not debated results, but rather critiqued each other's methods. If there are conflicting results, this will not be resolved by debate but by further research. Alternative ideas should be tolerated until there is enough evidence to adjudicate between them. The competition should be in terms of sounder method, not in terms of which theory is better on any other grounds. Thus scientific debate has become conflated with political debate where, rightly, different ideas are contrasted and argued about.

There are some positive sides for science in the "post-truth" tendencies. The disconnected "ivory tower" school of research is rightly criticised. Whilst what academics do should not be constrained, what they use public money for has to be. The automatic deference that academics used to receive from the public has also largely disappeared, meaning that results from scientists will be more readily questioned for their unconscious biases and the meaning of their claims. To some extent the profession has become more porous, with a wider range of people participating in the process of science - it is not just professors or boffins anymore.

Ultimately maintaining academic and research standards is the job of the academics themselves. The institutions they work in, the funders of research and the current governmental priorities mean that other involved actors will have other priorities. Universities compete in terms of the frameworks that government sets (REF, TEF etc.) or league tables constructed on simplistic indices. Funders of research are under pressure to claim research that has immediate and significant impact and publicity. It is only the academics themselves that can resist these pressures and so maintain their own, long-term reputations for independence and reliability. If we do not have these things, why should the public carry on financing us? What will people think of our science in 50 or 100 years time? Let the longer view prevail.

17 Feb 2017

Discussion paper: "Co-developing beliefs and social influence networks – towards understanding Brexit"

Centre for Policy Modelling Discussion paper: CPM-17-235

Abstract.

A relatively simple model is presented where the beliefs of agents and their social network co-develop. Agents can either hold or not hold each of a fixed menu of candidate beliefs. Depending on their type, agents have different coherency functions between beliefs, so that they are more likely to adopt a belief from a neighbour, or drop a belief, where this increases the total coherency of their belief set. With given probabilities links are randomly dropped or added but, if possible, links are made to a “friend of a friend”. The outcomes when both belief and link change processes occur are qualitatively different from either alone, showing the necessity of representing both cognitive and social processes together. Some example results are shown which move a little towards modelling the processes behind divisive collective decisions, such as the Brexit vote.
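A highly simplified sketch of the kind of update steps the abstract describes (my own reconstruction for illustration, not the actual CPM-17-235 code; the coherency function, probabilities and data structures are placeholder assumptions):

```python
import random

# Toy reconstruction of the co-development described in the abstract -- NOT
# the actual model code. Agents have a 'type', a set of 'beliefs', and the
# graph maps each agent to the set of its neighbours.
def belief_step(agent, neighbours, coherency, candidate_beliefs):
    """Consider adopting a belief from a random neighbour, or dropping one,
    keeping the change only if it increases the coherency of the belief set."""
    if not neighbours:
        return
    neighbour = random.choice(neighbours)
    belief = random.choice(list(candidate_beliefs))
    trial = set(agent.beliefs)
    if belief in neighbour.beliefs:
        trial.add(belief)        # candidate adoption from the neighbour
    else:
        trial.discard(belief)    # candidate drop
    if coherency(agent.type, trial) > coherency(agent.type, agent.beliefs):
        agent.beliefs = trial

def link_step(graph, agent, p_drop=0.05, p_add=0.05):
    """With given probabilities drop a random link, and add a new link to a
    'friend of a friend' if one is available."""
    if graph[agent] and random.random() < p_drop:
        graph[agent].discard(random.choice(list(graph[agent])))
    if random.random() < p_add:
        fof = {f2 for f in graph[agent] for f2 in graph[f]
               if f2 is not agent and f2 not in graph[agent]}
        if fof:
            graph[agent].add(random.choice(list(fof)))
```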


15 Feb 2017

Slides from talk on "Simulating Superdiversity"

#cfpm_org #abm #ethnosim
An invited talk to the Institute for Research into Superdiversity (IRIS), University of Birmingham, 31st Jan 2017.

Abstract: A simulation is presented to illustrate how the complex patterns of cultural and genetic signals might combine to define what we mean by "groups" of people. In this model both (a) how each individual might define their "in group" and (b) how each individual behaves towards others in 'in' or 'out' groups can evolve over time. Thus groups are not something precisely defined but something that emerges in the simulation. The point is to illustrate the power of simulation techniques to explore such processes in a non-prescriptive way that takes the micro-macro distinction seriously and represents them within complex simulations. In the particular simulation presented, groups defined by culture strongly emerge as dominant and ethnically defined groups only occur when they are also culturally defined.

Slides available at: http://www.slideshare.net/BruceEdmonds/simulating-superdiversity

Linked to the upcoming workshop on simulating ethnocentrism and diversity, Manchester 7/8 July 2017: 

3 Jan 2017

Sad to announce the death of my mother, Anne Gillian Edmonds ("Gill") 1935-2016

My mother died on the 24th of December, 2016. 

She was a loving and effective mother to us four (Bruce, Juliet, Nicola and Malcolm), encouraging us all to be creative, independent and care about social concerns. This little 'wave' of people continues with her grandchildren: Thomas, Ruth, Duncan, Orlanda, Patsy, Sophie, Ben, Iona, Kaila, and Joshua and great-grandchild, William.

As well as raising us mob, she pursued an active career as a social worker for Oxfordshire district council (and in the last couple of years of her work, the RAF). She was an active campaigner on Green and development issues, bombarding David Cameron (her local MP) with letters on these issues before he became PM.  She contributed actively to local life in Burford, volunteering her effort in many ways.

She was a relentless autodidact, achieving an Open University degree whilst bringing us up, and continuing on a variety of courses until the very last years of her life. She spent several years effectively looking after my father as his dementia progressed.

A funeral service will be held at St John the Baptist Church, Burford, on 14th January at 11.00 a.m. All welcome. (Family Flowers only)