19 Sept 2022

Two aspects of AI - complexity and autonomy - and what they mean for how we deal with it

The conflation of complexity and autonomy

Thinking in this area is often muddied by conflating two different aspects of what is called "Artificial Intelligence" (AI): (1) the complexity of the algorithms involved, which means we (as humans) cannot fully understand or predict what they will come up with, and (2) the autonomy of AI entities, which means that they have their own goals and priorities, which might be different from ours.

Some examples illustrate the difference.

  • A trained machine learning (ML) algorithm, such as a neural network, can be very complex - it is hard to understand how it distinguishes patterns and arrives at its outputs. This makes such algorithms hard to use, because one does not know all their limitations and biases. An algorithm that works well when trained on one set of data may suddenly work much less well on similar data from another context (as people are finding in the ML replication crisis; the sketch after this list illustrates the effect). A consequence of this complexity is that one cannot simply use it, like a car - one has to be trained to use it over a period of time (like learning to sail). However, ML algorithms have no autonomy.

  • It is hard to find a simple example of something with autonomy but no complexity, because it takes a process of some complexity for autonomy to be realised. Simple machines or entities usually have little autonomy. However, the complexity does not have to be 'inside' the entity; it can lie in the process that makes it. If a set of entities is evolving to thrive in a complex environment, then some of the resulting entities might be very simple yet have goals that are very different from what we would like. The complexity is in the process of evolution, which only sometimes results in complex entities. Examples include viruses (which are *relatively* simple compared to us) and active solutions evolved in silico. The result can be an ecology of entities that makes the achievement of our goals harder.

  • Of course, some entities are both complex and autonomous, e.g. a horse. However, training can help manage both. The training and human-socialisation of a horse makes it less autonomous - more willing to accept our goals over its own. The co-training of horse and rider makes the interaction between the two more predictable - simpler - but this is never as simple as a set of well-defined signals (though these help). An ill-treated or frustrated horse might well go against the intentions of its owners, but in highly predictable and understandable ways. A human-socialised, but poorly trained, horse may wish to please its rider but misunderstand and do something unpredictable.
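
To make the first point above concrete, here is a minimal sketch in Python using scikit-learn on invented synthetic data - the particular features and the way the context 'shifts' are my own illustrative assumptions, not anyone's real study - of how a model that scores well on data from its own context can quietly do much worse on similar data from another context:

```python
# Minimal sketch of "works in one context, degrades in another": train a simple
# classifier on data from one context, then score it on similar-looking data
# whose underlying relationship has shifted. (Synthetic data, for illustration only.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Two-class data; `shift` stands in for moving to a new context."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    # The true rule changes slightly with the context, so a model fitted in one
    # context is systematically off in the other.
    y = (X[:, 0] + 0.5 * X[:, 1] - shift > 0).astype(int)
    return X, y

X_train, y_train = make_data(2000, shift=0.0)   # context the model was trained in
X_same, y_same = make_data(1000, shift=0.0)     # new data, same context
X_other, y_other = make_data(1000, shift=1.5)   # similar data, different context

model = LogisticRegression().fit(X_train, y_train)
print("accuracy, same context:   ", model.score(X_same, y_same))
print("accuracy, shifted context:", model.score(X_other, y_other))
```

Nothing about the trained model itself announces that the second score will be worse; you only find out by checking, which is part of what 'learning to use' such a tool means.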

Some of the confusion arises because both autonomy and complexity make AI entities hard to 'use'. However, they need to be dealt with in different ways (from the point of view of us humans).

Dealing with Complex AI

To use a complex tool well requires a lot of training (or other meta-analysis). You cannot hope to just apply it "off the shelf". This is not surprising, given the extent to which people confuse themselves or make mistakes with something as simple as regression analysis. As with learning to ride a horse, it takes a while to get the feel of a complex tool: learning when it can be successfully used, how to use it well, how to check that it is giving you good results, and how to interpret the results that come from it. Complex analytic tools are still useful, extending our mental capacities much as machines extend our physical capacities. They stand between us and what they are analysing, allowing us more leverage, even if our understanding is now indirect. Such complex tools may require other tools to manage, check and understand what they are doing, so that we might develop a hierarchy of tools analysing tools, with humans at one end and the problems we are grappling with at the other. Such a system of very indirect understanding is inevitable if we are to push the envelope further, but it is even trickier to manage.
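
As a rough, hypothetical illustration of 'tools analysing tools' (the model choices, the check function and the tolerance below are my own assumptions, not a recipe): a simple, interpretable surrogate sits between the human and a more opaque model, and flags for human review any output the surrogate cannot roughly reproduce.

```python
# Sketch of a two-level hierarchy: a hard-to-inspect model does the work, a simple
# surrogate watches it, and a human only looks at what the surrogate cannot explain.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 4))
y = 2 * X[:, 0] + X[:, 1] + 0.5 * np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=2000)

complex_tool = GradientBoostingRegressor().fit(X, y)            # the opaque tool
surrogate = LinearRegression().fit(X, complex_tool.predict(X))  # simple tool watching it

def check(x, tolerance=1.0):
    """Flag outputs of the complex tool that the simple surrogate cannot account for."""
    x = np.asarray(x).reshape(1, -1)
    gap = abs(complex_tool.predict(x)[0] - surrogate.predict(x)[0])
    return ("ok" if gap < tolerance else "flag for human review", round(gap, 2))

print(check([0.1, 0.2, 0.0, 0.0]))   # a familiar sort of input: likely "ok"
print(check([4.0, 5.0, 0.0, 0.0]))   # far outside the training data: likely flagged
```

The point is not this particular check but the layered structure: a simpler tool that we do understand watching a more complex one that we do not, with the human brought in only where the layers disagree.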

Dealing with Autonomous AI

Dealing with autonomous entities is another matter entirely, though a familiar one. When we deal with other species or groups of people, particularly when we do not have much previous experience with them, we face this problem. Yes, they may be hard to understand due to their unfamiliar complexity, but that is just the start of the difficulties, which remain even when we have a lot of experience of their kind. You cannot simply 'use' such entities, even after a period of extensive training. Part of the confusion with algorithmic autonomous entities is that it is often assumed that one can, just because they are built out of algorithms. Here we should look to ecology and sociology for clues as to what to do.

Most fundamentally, the goals and motivations of the other entities matter. Tame rats make great pets - they are sociable, adaptable, intelligent and affectionate, but _only_ given that their needs for food, shelter, social contact etc. are completely met by their human owners. Wild rats have goals that are incompatible with those of humans when they get into our houses and store rooms. The options are: (a) war - killing and capturing them; (b) separation - making sure they inhabit different spaces by stopping them getting inside; (c) sufficiently interfering with them (for example by feeding them outside but including contraceptive chemicals); or (d) fleeing to avoid them (e.g. going to live on an island, as some birds do) - negotiation with wild rats is not possible. Many other species have goals that are completely compatible with humans (many wild birds), so we can quite happily and peaceably live side by side. Thus, whilst we still have any control over the matter, it is important not to create autonomous AI entities whose goals or needs will compete with ours. We should not be seeking to make AI entities that are similar to ourselves, but ones that might be complementary. Species that inhabit the same ecological space will be in competition with each other (e.g. red and grey squirrels), and one might eventually win out as a result; species that are very different (e.g. elephants and egrets) can be compatible. Longer-term evolution can result in competition and co-adaptation so that an ecosystem works as a whole, but this can take the form of predator-prey cycles or of more cooperative co-adaptation (such as between dogs and humans).

If the goals and the motivations of the group of entities are sufficiently compatible with ours (either being complementary or just very different), then some accommodation between us and them is possible. This might involve learning not to encroach on each other's domains, with sanctions for breaches, so we can live happily side by side, or it might involve more active communication and cooperation. Here the sociology of cooperation comes into play: the different ways in which some kind of cooperation or trust can emerge and be maintained. Agent-based simulation can help us debug and inform efforts to cooperate, especially by helping to identify 'early-warning' indicators of when cooperation is breaking down (a toy sketch follows below). There is now a substantial body of work on the mechanisms that can support cooperation, even in social dilemma situations where it might benefit individuals in the short term to do otherwise.
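
As a toy illustration of the kind of agent-based simulation meant here (the payoff values, imitation rule and warning threshold are all invented for this sketch, not taken from any particular model in that literature): agents repeatedly play a noisy prisoner's dilemma, imitate more successful others, and a monitor raises an early warning as soon as the cooperation rate falls below a chosen level.

```python
# Toy agent-based sketch: a population plays a noisy prisoner's dilemma, agents
# imitate better-scoring agents, and a simple monitor flags the run when the
# cooperation rate first drops below an early-warning threshold.
import random

random.seed(42)

N_AGENTS, ROUNDS = 100, 200
TEMPTATION, REWARD, PUNISHMENT, SUCKER = 5, 3, 1, 0   # standard PD payoffs
NOISE = 0.02            # chance a move is flipped by mistake
WARNING_LEVEL = 0.5     # early-warning threshold on the cooperation rate

# Each agent is just a propensity to cooperate, updated by imitation.
coop_prob = [random.uniform(0.4, 0.9) for _ in range(N_AGENTS)]

def payoff(me_coop, other_coop):
    if me_coop and other_coop:
        return REWARD
    if me_coop and not other_coop:
        return SUCKER
    if not me_coop and other_coop:
        return TEMPTATION
    return PUNISHMENT

for t in range(ROUNDS):
    order = list(range(N_AGENTS))
    random.shuffle(order)
    scores = [0.0] * N_AGENTS
    coop_moves = 0
    # pair agents off at random and play one noisy round each
    for a, b in zip(order[::2], order[1::2]):
        move_a = random.random() < coop_prob[a]
        move_b = random.random() < coop_prob[b]
        if random.random() < NOISE:
            move_a = not move_a
        if random.random() < NOISE:
            move_b = not move_b
        scores[a] += payoff(move_a, move_b)
        scores[b] += payoff(move_b, move_a)
        coop_moves += int(move_a) + int(move_b)
    # imitation: copy another agent's propensity if it scored better this round
    for i in range(N_AGENTS):
        j = random.randrange(N_AGENTS)
        if scores[j] > scores[i]:
            coop_prob[i] = coop_prob[j]
    coop_rate = coop_moves / N_AGENTS
    if t % 25 == 0:
        print(f"round {t:3d}: cooperation rate {coop_rate:.2f}")
    if coop_rate < WARNING_LEVEL:
        print(f"round {t:3d}: cooperation rate {coop_rate:.2f} - early warning, cooperation breaking down")
        break
```

In a serious application the interesting work lies in choosing which indicators to monitor and at what level a warning should fire, which is exactly what running many such simulations can inform.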

For example, one could imagine that a whole ecology of autonomous entities might be encouraged to inhabit the information sphere, as long as they are motivated to simplify and analyse that information rather than pollute it. In such a case the correct strategy might be system farming rather than direct control or management - feeding/motivating the ecology in suitable ways and dealing with crises and other difficulties (such as disease or computer viruses), rather than relying on complete understanding or planning. Such a system could provide us with a highly complex but effective set of compatible AI entities, which we would not fully understand (nor they us), but with a mutual basis for cooperation grounded in our complementarity. However, the mutual rewards and basic needs of both humans and information-entities would have to be very carefully understood and managed.
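
To give a flavour of what 'system farming' might look like as a control strategy (the population dynamics and all the numbers below are a toy model invented purely for illustration): the farmer never models or plans for individual entities, but only nudges the overall reward supply to keep the aggregate within a healthy band, absorbing occasional shocks as they arise.

```python
# Crude sketch of "system farming" as a feedback loop: adjust the reward supply in
# response to the aggregate state of the ecology, rather than controlling entities.
import random

random.seed(7)

population = 50.0        # active information-processing entities (toy aggregate)
reward_supply = 10.0     # how much 'food'/motivation the farmer provides per step
TARGET_LOW, TARGET_HIGH = 40.0, 80.0

for step in range(30):
    # toy ecology: grows when well fed per head, shrinks when starved, and
    # occasionally suffers a random shock (disease, computer viruses, ...)
    per_head = reward_supply / population
    growth = 0.3 * (per_head - 0.2) * population
    shock = -random.uniform(0.0, 10.0) if random.random() < 0.1 else 0.0
    population = max(5.0, population + growth + shock)

    # the farmer only looks at the aggregate and nudges the reward supply
    if population < TARGET_LOW:
        reward_supply *= 1.2     # feed the ecology more
    elif population > TARGET_HIGH:
        reward_supply *= 0.85    # ease off before it overruns its niche
    print(f"step {step:2d}: population {population:6.1f}, reward supply {reward_supply:5.1f}")
```

The farmer's understanding here is entirely at the level of indicators and levers, not of what any individual entity is doing - which is the sense in which farming differs from planning.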

In terms of interaction between individuals, high complexity (and hence no complete mutual understanding) need not stop effective coordination. Humans are highly complex and do not completely understand how they or others make decisions - decisions that are often highly influenced by feelings and unconscious processing (e.g. pattern recognition) - yet they can still cooperate effectively with each other, as long as: (a) they can justify their relevant actions in terms that are understood and acceptable to the other (even if this is not completely true); (b) they agree coordination in terms of these justifications; and (c) they influence their own decision-making processes so as to be in accord with that coordination.