21 Jul 2010

On 31st August I will be an invited speaker at MALLOW 2010

MALLOW is: The Multi-Agent Logics, Languages, and Organisations Federated Workshops (MALLOW 2010)

Details at: http://mallow2010.emse.fr/

Towards a Science of Socially Intelligent ICT - a workshop - 3rd August, London

See http://phoenixweb.open.ac.uk/~jeff_johnson/ASSYST_Perada_WS.html

In the programme I think I am:

Title to be announced
Speaker to be announced


but in reality I will be talking on "The Roots of Social Intelligence and their Implications for Socio-Technical Systems"

Ends with a "Champagne Reception"! 

20 Jul 2010

Worrying Rumours from EPSRC

At a meeting of the newly funded "Complexity in the Real World" programme, someone reported back from a meeting the EPSRC had called for PIs in charge of major EPSRC grants.  It was reported there that, even under a mild 10% cut to the research council's funding, the council faces a choice between rescinding some existing grants and not funding any new ones (due to its existing commitments).

Apparently the council:
  1. along with similar councils, is refusing to submit a plan for 40% cuts as requested by the government
  2. wants to continue funding new projects and hence may rescind existing grants, via the mechanism of asking recipient institutions to decide where to make 10% cuts to their EPSRC-funded projects (with "ringfencing" for certain areas: doctoral centres, complexity projects, etc.)

15 Jul 2010

Samos 2010 Summit on ICT for eGovernance and Policy Modelling

I went to this as a member of the "CROSSROAD Scientific Committee".  It was basically a networking event with workshops to brainstorm ideas for future EU funding in this area.  Rather pretentiously, they issued a communiqué at the end!

Details at:
     http://samos-summit.blogspot.com/2010/07/samos-2010-summit-declaration.html

5 Jun 2010

2 new papers on Context and Science/Simulation

CPM-10-209
             Complexity and Context-Dependency - Bruce Edmonds
CPM-10-210
             Context and Social Simulation - Bruce Edmonds

26 May 2010

Last Research Assessment Results Analysed - Russell Group vs. Non-Russell

Using figures from the last RAE, one can estimate the number of academics judged to be of each standard ("world-class" (4*), "international" (3*), etc.) in each RAE unit of assessment at each university.  Using this we can see that 54% of the highest category are at Russell Group universities, as are 46% of those of "international" standard.  The rounded figures are as follows.
So roughly half of the top academics are not at Russell Group universities.  It also shows that the Russell Group entered a lot of academics who were judged to be in the lower categories.  Below is a histogram of the number of 4* academics across all universities.
There is a "long tail" of non-Russell universities with some "world-class" academics.  Thus, although many of the best academics are concentrated in Russell Group universities, many are spread around a variety of other universities.

25 May 2010

Votes in UK General Elections 1945-2010

Out of interest I plotted the raw number of votes cast for Conservative, Liberal/Lib-Dem, and Labour parties in each general election from 1945-2010, using Wikipedia as the source of data.

It's interesting that for most of the period the Labour and Liberal(-Dem) votes are anti-correlated - a gain for one is a loss for the other.  The Liberals have generally failed to attract Conservative votes.  The Conservative vote has been fairly constant, except immediately after WWII and after the scandals of the early 1990s, after which many Conservative voters simply did not vote (or voted for a fringe party).

Raw figures are:
        Year        Labour       Conservative  Liberal
        1945        11,967,746    8,716,211    2,177,938
        1950        13,226,176   11,507,061    2,621,487
        1951        13,948,385   13,724,418      730,546
        1955        12,405,254   13,310,891      722,402
        1959        12,216,172   13,750,875    1,640,760
        1964        12,205,808   12,002,642    3,099,283
        1966        13,096,629   11,418,455    2,327,457
        1970        12,208,758   13,145,123    2,117,035
        1974 (Feb)  11,645,616   11,872,180    6,059,519
        1974 (Oct)  11,457,079   10,462,565    5,346,704
        1979        11,532,218   13,697,923    4,313,804
        1983         8,456,934   13,012,316    7,780,949
        1987        10,029,270   13,760,935    7,341,651
        1992        11,560,484   14,093,007    5,999,384
        1997        13,518,167    9,600,943    5,242,947
        2001        10,724,953    8,357,615    4,814,321
        2005         9,562,122    8,772,598    5,981,874
        2010         8,604,358   10,683,787    6,827,938
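The anti-correlation claim can be checked directly from the table's figures.  A quick sketch in plain Python (my own illustrative calculation, Conservative series omitted; 1974 appears twice for the February and October elections):

```python
# Labour and Liberal(-Dem) votes for the 18 general elections 1945-2010,
# taken from the table above.
labour = [11967746, 13226176, 13948385, 12405254, 12216172, 12205808,
          13096629, 12208758, 11645616, 11457079, 11532218, 8456934,
          10029270, 11560484, 13518167, 10724953, 9562122, 8604358]
liberal = [2177938, 2621487, 730546, 722402, 1640760, 3099283,
           2327457, 2117035, 6059519, 5346704, 4313804, 7780949,
           7341651, 5999384, 5242947, 4814321, 5981874, 6827938]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(labour, liberal)
print(round(r, 2))  # negative: a gain for one tends to be a loss for the other
```

The coefficient comes out clearly negative, consistent with the reading of the plot.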

12 May 2010

2 Science papers on social dilemmas and punishment

Lab Experiments for the Study of Social-Ecological Systems
Marco A. Janssen, Robert Holahan, Allen Lee and Elinor Ostrom
Science, 30 April 2010
Vol. 328, no. 5978, pp. 613-617
DOI: 10.1126/science.1183532
Governance of social-ecological systems is a major policy problem of the contemporary era. Field studies of fisheries, forests, and pastoral and water resources have identified many variables that influence the outcomes of governance efforts. We introduce an experimental environment that involves spatial and temporal resource dynamics in order to capture these two critical variables identified in field research. Previous behavioral experiments of commons dilemmas have found that people are willing to engage in costly punishment, frequently generating increases in gross benefits, contrary to game-theoretical predictions based on a static pay-off function. Results in our experimental environment find that costly punishment is again used but lacks a gross positive effect on resource harvesting unless combined with communication. These findings illustrate the importance of careful generalization from the laboratory to the world of policy.
http://www.sciencemag.org/cgi/content/abstract/328/5978/613

Coordinated Punishment of Defectors Sustains Cooperation and Can Proliferate When Rare
Robert Boyd, Herbert Gintis and Samuel Bowles
Science, 30 April 2010
Vol. 328, no. 5978, pp. 617-620
DOI: 10.1126/science.1183665
http://www.sciencemag.org/cgi/content/abstract/328/5978/617

15 Apr 2010

Russ Bernard's Plenary talk at the UK social networks conference...

...was a great polemic against those who seek to create a quantitative/qualitative divide in the social sciences, arguing for the naturalness of mixed methods.  It can be seen as arguing for the relative primacy of evidence over theoretical frameworks, so it fits well with the methodology of the CPM, which could be summarised as: "You have to have a very VERY good reason to ignore evidence".  In descriptive simulation one often uses narrative accounts to inform the programming of the micro level (agents) and checks this against quantitative evidence at the macro level.

At the UK Social Networks Conference in Manchester

Gave a talk centred around a proof concerning computability on a class of systems, showing that there is no possible network measure that will reflect underlying node importance.  I suggested that simulation could be used to stage the abstraction to a social network.

9 Apr 2010

The Impossibility of a General Intelligence

I just caught Drew McDermott's talk as part of the AISB 2010 symposium: "Towards a Comprehensive Intelligence Test (TCIT): Reconsidering the Turing Test for the 21st Century Symposium".  He contended that Turing never intended this as a general test of intelligence, just a test that would establish that a machine was intelligent in some sense.

More than this, I have argued that there is no such thing as a general intelligence -- intelligence is very different from computation.  See:
  • Edmonds, B. (2000). The Constructability of Artificial Intelligence (as defined by the Turing Test). Journal of Logic Language and Information, 9:419-424. (http://cfpm.org/cpmrep53.html)
  • Edmonds, B. (2008) The Social Embedding of Intelligence: How to Build a Machine that Could Pass the Turing Test. In Epstein, R., Roberts, G. and Beber, G. (Eds.) Parsing the Turing Test.  Springer, 211-235. (http://cfpm.org/cpmrep95.html)

8 Apr 2010

My Invited Talk at SNAMAS

Revealing the weakness of SNA and possibly fixing it, using MAS
A social network model consists of the representation of the target domain in terms of some system of nodes and arcs, plus how inferences about this are going to be made, which can then be interpreted back in terms of the target.  This is not an analytic result but a contingent theory that can only be validated against independent empirical evidence.  The approach consists of several stages: (1) the collection of data about the structure and processes in the target; (2) the representation of this in a social network structure; (3) the inference of properties of the network using measures and other results; (4) the interpretation of these inferences back in terms of the target.  Properly considered, the theory requires all stages and not simply stage (2) (I will call such a combination a SNAT - a social network analysis theory).

To validate such a SNAT would require studies to see if there is independent evidence that the outcomes in the target system actually do correspond to the inferences from such a process (as interpreted to the target and given the data collection processes) for the range of targets that correspond to the declared scope of the theory.  Unfortunately this is rare; more frequently a SNAT is only weakly validated against the intuitions of the same researcher who constructed it.  Partly this is due to the expense of SNA and independent validation studies, but it also seems to be a result of the way SNA is divided between theoreticians and users.  The theoreticians look at measures and other techniques that can be applied to a given network system, usually without any reference to observed case studies - stage (3).  The users study observed examples and apply the techniques of the theoreticians (frequently wrapped in software to make them more accessible) to derive conclusions about their target systems - stages (1), (2) and (4).  Nobody checks that the combined SNAT, all four stages put together, actually works, i.e. subjects it to an independent validation.

To demonstrate that SNATs are an inherently difficult and empirical approach, two cases of "Artificial Social Network Analysis" are exhibited - that is, where a MAS is studied using SNA methods.
  • In an apparently simple MAS, where almost all information about the nodes, their behaviour, the social network etc. is known beforehand (everything except the initialisation of the environment), it is proved that there is NO measure that will reliably correspond to the asymptotic importance of the nodes.  Given that one cannot devise a reliable measure in this ideal and very simple case, uses of SNA measures that assume a priori that a given measure is a useful indication of a property of the target system are deeply flawed.
  • In a plausible simulation of a P2P file-sharing system, given information that is analogous to what a researcher of social networks "in the wild" would infer, it is evident that the wrong conclusions might well be drawn.  Since this is a simulation, it is possible to check whether the SNAT holds and, despite appearances, it is found to be lacking.  If this is the case for a plausible simulation, how can we take unvalidated SNA analyses of observed systems seriously?
Thus MAS in the form of simulations can be used to probe weaknesses in SNA approaches, showing doubtful assumptions as well as making clear the empirical and contingent nature of SNAT.
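The first point can be illustrated (far more weakly than the proof itself) with a toy example: a directed graph in which the highest-degree node matters not at all for connectivity, while a low-degree bottleneck is critical.  The graph, node names and the "importance" definition below are my own illustrative choices, not those from the talk:

```python
# Toy illustration: a standard measure (degree) need not track the underlying
# importance of a node.  Here "importance" is defined as the number of
# (source, target) pairs disconnected when the node is removed.
from collections import deque
from itertools import product

def reachable(adj, src, excluded=frozenset()):
    """Set of nodes reachable from src by BFS, ignoring 'excluded' nodes."""
    if src in excluded:
        return set()
    seen, queue = {src}, deque([src])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in excluded and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def degree(adj, v):
    """Total (in + out) degree of node v."""
    out_deg = len(adj.get(v, ()))
    in_deg = sum(v in succs for succs in adj.values())
    return in_deg + out_deg

def importance(adj, v, sources, targets):
    """Number of (source, target) pairs disconnected by removing v."""
    return sum(1 for s, t in product(sources, targets)
               if t in reachable(adj, s) and t not in reachable(adj, s, {v}))

sources = ['S1', 'S2', 'S3', 'S4']
targets = ['T1', 'T2', 'T3', 'T4']
adj = {s: ['X', 'H'] for s in sources}  # every source feeds bottleneck X and hub H
adj['H'] = list(sources)                # H: highest degree, but off all s->t paths
adj['X'] = ['Y']                        # X -> Y is the only route to the targets
adj['Y'] = list(targets)

# H has the highest degree yet disconnects nothing; X breaks all 16 pairs.
print(degree(adj, 'H'), importance(adj, 'H', sources, targets))  # 8 0
print(degree(adj, 'X'), importance(adj, 'X', sources, targets))  # 5 16
```

Degree here could be swapped for any fixed structural measure; the general point is that no such measure is guaranteed a priori to track what actually matters in the underlying process.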

It is suggested that the root problem is the drastic nature of the abstraction step in SNA, from a complex social system to a relatively "thin" mathematical structure - a network.  However such abstraction can be staged using MAS simulations.  This has the advantage that the chain of reference from model to model is maintained and testable, but at the cost of far more work.
Given at SNAMAS, part of AISB 2010, Leicester, 29th March 2010.

2 Apr 2010

Reading: Krotoski (2009) Social influence in Second Life

Krotoski, Aleksandra K. (2009). Social influence in Second Life: Social Network and Social Psychological Processes in the Diffusion of Belief and Behaviour on the Web. PhD Dissertation. University of Surrey, Department of Psychology, School of Human Sciences.

This thesis examines which social psychological and social network analytic features predict attitude and behaviour change using information gathered about 47,643 related avatars in the virtual community Second Life. Using data collected over three studies from online surveys and data accessed from the application’s computer servers, it describes why the structure of a social system, an individual’s position in a social group, and the structural content of an online relationship have been effective at predicting when influence occurs.
Available at:

31 Mar 2010

Position paper: Unpacking Public Discussion (for CROSSROAD)

I submitted this:
Unpacking Public Discussion – developing an open forest of political argument (available as CPM-10-207)
to the "Crossroad" consultation exercise and have been selected to be a member of their expert scientific panel.

Organisational Safety Catches

As Harold Thimbleby pointed out in his AISB 2010 talk, sensible design includes adding the equivalent of safety catches, so that crucial mistakes are not made.  (Interactive systems need safety locks, Harold Thimbleby, In press. 32nd International Conference on Information Technology Interfaces 2010. http://iti.srce.hr)

Surely we should try to do the same with organisations, including low-cost adaptations that can stop them making catastrophic mistakes.  These could include:
  • Making sure mistakes are fed back and not discouraged by punishment
  • Ensuring that people working together have talked socially
However we will only get a full understanding of these when we can model the organisation as a whole.

The Anti-Anthropomorphic Principle

This is the principle that if there is an assumption that the world is organised around us or for us in some way (e.g. that the Earth is at the centre of the Universe, that we live in a unique period of history, that we are the only sentient beings, etc.), then it is likely to be wrong.

This can be seen as a consequence of our apparent cognitive bias to see ourselves as special and to take our assumptions as universal.