Models and their discontents (part 2)

published 28 Jun 2013
Photo: eviltomthai (flickr)

In the first part of this story, I described the basic concepts underlying scientific modelling and how models, together with a mapping to reality, express theories.  In this, the second part, I introduce a couple of concrete example models and discuss the problem of mapping them to reality, before finally getting to the point – the discontents.


Model 1

We have a single entity called Hat.

The state of this entity has a dimension that has a scale. We will refer to this dimension as pink.  We define a transition rule which dictates that Hat's pink grows over time, according to some function. We can describe this model mathematically as follows:

Hat_pink(t+1) > Hat_pink(t)
Hat(t+1) = f(Hat(t))
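
The rule leaves f unspecified – any function with f(x) > x will do. As a minimal sketch, here is the model in Python, with a linear f chosen purely for illustration:

    # Model 1: a single entity, Hat, with one state dimension, pink.
    # The model only requires that f(x) > x; linear growth is an
    # assumed, illustrative choice, not part of the model.
    def f(pink):
        return pink + 1

    hat_pink = 0                      # initial value of Hat's pink
    for t in range(10):
        next_pink = f(hat_pink)
        assert next_pink > hat_pink   # the transition rule holds
        hat_pink = next_pink
    print(hat_pink)                   # -> 10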


Model 2

We have three entities, labelled Joe, Sally and Bob.  Each entity has a state dimension labelled fish. Joe also has a state dimension labelled chips. We define the following constraints:

  • There must be no more than 100 fish in the world at any one time. 
  • Bob can have either 100 fish or 0 fish.  Sally and Joe must each have fewer than 100.

The transition rules are slightly more complicated because we introduce a condition:

  • If the product of the number of fish and the number of chips that Joe has is greater than a number c, then Bob gets 100 fish and everybody else gets 0.  
  • Otherwise, Joe gets more fish and chips, unless he has 0 fish, in which case he stays the same (these rules are sketched in code below). 
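
As a minimal sketch, these rules can be written down in Python. The threshold c, the starting values and the rate at which Joe's fish and chips grow are all assumed, illustrative numbers – the model itself does not pin them down:

    # Model 2: three entities with a fish dimension; Joe also has chips.
    c = 50   # the threshold (assumed value)

    def step(s):
        if s["joe_fish"] * s["joe_chips"] > c:
            # Threshold crossed: Bob gets 100 fish, everybody else gets 0.
            return {**s, "joe_fish": 0, "sally_fish": 0, "bob_fish": 100}
        if s["joe_fish"] == 0:
            return dict(s)   # Joe has 0 fish: he stays the same
        # Otherwise Joe gets more fish and chips (growth rate assumed).
        return {**s, "joe_fish": s["joe_fish"] + 1,
                     "joe_chips": s["joe_chips"] + 1}

    state = {"joe_fish": 1, "joe_chips": 1, "sally_fish": 10, "bob_fish": 0}
    while state["bob_fish"] != 100:
        state = step(state)
        # The constraint: no more than 100 fish in the world at any time.
        assert state["joe_fish"] + state["sally_fish"] + state["bob_fish"] <= 100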

Both of these models are internally consistent, in that none of the rules contradicts any of the others.  They describe extremely simple imaginary worlds.  On the face of it, they are not interesting at all.  All the first model does is describe an entity with a state dimension that increases over time, according to some function of the entity’s current state. 

Model 2, reduced to a simple state-transition flow chart (image: chekov)

The second model is a little more complicated. We can do some basic reasoning over it, to figure out how it will behave depending on the values of the entities’ states at any given time t.  We discover that we will arrive at a stable state in which Bob has all the fish; the only question is how quickly. This depends on the value of c and on the rate of increase of Joe’s fish and chips dimensions.  The model can be reduced to the simple state-transition flow chart shown here.
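
Running the sketch above with different thresholds illustrates the point. Reusing the step() function and the (assumed) starting values from the previous sketch:

    # Larger c (or slower growth) means more steps before Bob gets the fish.
    for c in (10, 50, 500):
        state = {"joe_fish": 1, "joe_chips": 1, "sally_fish": 10, "bob_fish": 0}
        t = 0
        while state["bob_fish"] != 100:
            state = step(state)
            t += 1
        print(c, t)   # steps until the stable state, for each threshold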

Still, we have no reason to believe that there is anything useful or representative about the model - it's just a bunch of entities, with silly labels, and a bunch of rules with no apparent significance to anything.  If we want to make these models interesting, we need to create a mapping from something in the real world to the entities in the models and to demonstrate that the behaviours of the entities in our models mimic the behaviours of the real world entities that they are mapped to. 

But, before talking about mappings, we will change the labels on the models as follows:

  • Hat => Universe
  • Pink => Space
  • Joe => Proletariat
  • Sally => Capitalists
  • Bob => Classless Society
  • Fish => Proportion of the population belonging to
  • Chips => Antagonism.

The new labels imply a mapping between the model and reality – and the models begin to embody theories. The first model can now be seen to express the theory that the universe will keep on expanding.  The second model expresses Marx’s dialectical theory of revolution: increasing antagonism between capitalists and a growing proletariat will lead to a revolution that will usher in a classless society.  We can now redraw the flowchart to reflect the new labels.

Model with changed labels – Marx's dialectical revolutionary theory (image: chekov)

Our models have not changed.  They are still imaginary worlds with arbitrary rules that we have defined.  The labelled model is itself no more than the outline of a hypothesis, because it is still tied only vaguely to reality. Words have loose semantics and limited capacity to bridge the gap between the abstract space of the axiomatic model and fuzzy reality. The more precise, less subjective and more readily quantifiable the mapping between the model and reality, the more the model becomes a fully fleshed-out scientific hypothesis, because with precision in mapping comes testability – the ability to accumulate evidence to support the proposition that the model’s dynamics mirror those of reality.  We can map from reality to the model, run the model for a period, then map back to reality to see how well the model predicted the real-world dynamics.
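
That test loop can itself be sketched in code. Here map_to_model, map_to_reality and step are hypothetical stand-ins for a concrete measurement procedure, its inverse, and a model's transition rule:

    # Map an observed starting point into the model, run the model
    # forward one period at a time, map each state back to reality, and
    # compare with what reality actually did.
    def prediction_error(observed, map_to_model, map_to_reality, step):
        state = map_to_model(observed[0])      # reality -> model
        errors = []
        for actual in observed[1:]:
            state = step(state)                # run the model forward
            predicted = map_to_reality(state)  # model -> reality
            errors.append(abs(predicted - actual))
        return sum(errors) / len(errors)       # mean absolute error

The lower this error is, relative to the alternatives, the better the model mirrors reality.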

Revolution in the model

It is worthwhile to briefly analyse the second model above with these new labels to identify how well it maps to reality.  It should be noted that the model is, obviously, a huge simplification of Marx - he wrote hundreds of thousands of words on these subjects. However, that is the point of a model, and when it comes to his theory of revolution, the model is more or less an accurate and complete representation of the dynamics he postulated.

What he claimed can be boiled down to the proposition that the dynamics of capitalist economic development were such that the proletariat would continue to grow in relative size and that the class antagonism between them and the capitalists would continue to sharpen until, at some stage, a threshold of revolutionary potential would arise, whereupon a classless society would emerge as the dialectical synthesis of the existing classes. 

The problem with the model is not its fidelity to Marx, nor its internal rules; it is the lack of precision in the mapping between the model and reality.  We want to know, first of all, whether the proletariat has indeed continued to grow as a proportion of the population under capitalist economies. On the face of it, this should be easy enough: there is plenty of historical census data around, as well as historical studies that have aggregated demographic data. However, what exactly are we looking for? What constitutes the proletariat?  How do we measure what proportion of the population belongs to it? Similarly, how might we measure class antagonism? Without a reliable way of answering these questions, we have no reliable way of deciding whether our model's dynamics mirror those of reality. Unfortunately, it is very difficult to extract a definition from Marx's writings that can happily be mapped to modern society - are service workers proletarians? How about managers and administrators?  Are we to include the lumpen-proletariat?  These are the sorts of questions that have animated the last 150 years of Marxist theoretical debate without much clarity emerging.

However, even if we take the most restrictive possible definition, it is probable that the prediction holds in reality - simply because there has been a consistent and protracted decrease in the size of the peasantry over the last couple of centuries, and this, by itself, is probably sufficient to prove Marx right on the relative growth of the proletariat, whichever way we look at it.  When it comes to the other aspect of the model, the results are less favourable.  Firstly, although there are many plausible ways in which we might measure class antagonism - the frequency of labour disputes, strikes, votes for socialist parties, and so on - it is difficult to know whether they are suitable proxies for what he intended to express. Worse still, whatever proxies we choose, it is difficult to come to the conclusion that class antagonism has indeed sharpened over the last 100 years or so. More problematic still is the prediction of a revolutionary class-synthesis - this was based on nothing more than Hegelian dialectics, and there is nothing in the way of evidence to suggest that it is a pattern that reality obeys.

Dynamic parallelism

The creation of models, the definition of mappings between models and reality, and the investigation of the extent to which models mirror reality – this is science. The internal rules, entities and states of the model can be arbitrary and invented; they do not matter in terms of its scientific value. What matters is how well the model can dynamically map to and from reality. Extremely strange models, which internally violate some of our most basic assumptions about reality, are useful if they mirror some aspect of reality better than any alternative.  The standard model of quantum mechanics, for example, is so counter-intuitive that it is virtually impossible to conceptualise the model’s behaviour. Yet its rules mirror the observed and measured interactions of particles at the lowest level very accurately.

Dynamic parallelism: a model is useful as a tool for analysing reality to the extent that its dynamics parallel those of reality. We should be able to reliably map between the model and reality, in either direction. (image: chekov)

Mapping between logical models and fuzzy reality is never straightforward – reality can never really be measured directly.  However, this is less important than one might think.  As soon as we have a means of consistently mapping between our model and some aspect of reality, we have a feedback loop which allows us to refine our model, improve our mappings and finesse our measurements, improving the extent to which our model mirrors the dynamics of reality.  The goal is not to access truth; it is to produce models that mirror reality more reliably than any alternative. 

All analysis of reality is mediated by models.  The complex and fuzzy nature of reality must be mapped into some form of simplifying model before it can be analysed in any way.  Human brains have evolved to overlay a mental model upon the world, a model that is mapped to reality via the senses.  It is, thus, not surprising that the way the universe works at the very small and very large scales, beyond the perception of our senses, is so counter-intuitive and breaks so many of the fundamental assumptions of our mental models.

When we consider a group of people as a collection of persistent, discrete individuals, or a brick wall as a solid obstacle, we are as much imposing a model onto a fuzzy reality as we would be if we were mapping them to a mathematical model. A solid obstacle may be a good approximation of a wall if we are thinking about a human walking through it; it is not a good approximation if we are firing a stream of neutrinos at it – a vacuum would be a better approximation in that case.  Similarly, human beings are made up of cells which are constantly recycled, and they have fuzzy boundaries – is the gut-flora part of the individual?  The skin-bacteria? How about the symbiotic viruses in our DNA?  A persistent, discrete individual person is a simplifying approximation for certain patterns of particles. Our brains incorporate such a model because it has proved to be a useful approximation which simplifies our analysis and understanding of the world around us.


The Discontents

One can find numerous critiques of the general approach of using simplified models to analyse human society:

  • individuals cannot be treated like simple variables
  • human society is too complex to be adequately captured by mathematical models
  • things that are generally of social interest are too difficult to measure and too subjective to be reliably mapped to entities in a model
  • simplifying models look at aspects of reality in isolation, while the real universe is so interconnected that the models do not apply.

These objections might be significant if truth were the goal of our models.  But truth is not the goal: the feedback loop allows us to identify when one model mirrors reality better than another, so we can aim merely for better mirroring.

If a model parallels the dynamics of reality more faithfully than any alternative, it does not matter how it chooses to represent the entities - whether it maps individuals to simple variables or complex agents, or merges them into aggregations - it is still useful in analysing how the real world will behave. Although reality may be extremely complicated and interconnected, it exhibits definite patterns that allow us to define useful generalisations.  Furthermore, modern statistical analysis provides us with sophisticated ways of evaluating predictions and isolating factors even in the presence of complex interference.  Applying models to complex, interconnected worlds is difficult, but not impossible, and we are getting better at it all the time.

However, criticisms of the use of models in analysing complex, interconnected systems like human society cannot simply be explained away as the consequence of a basic misunderstanding of the relationship between theories, reality, models and truths. There are good reasons why many people are sceptical of the predictions produced by models of human societies.  It is very easy to draw a spurious but plausible conclusion about some aspect of reality by incorrectly extrapolating the predictions of a model, and there are many people who are demonstrably willing to do so, in pursuit of agendas that are not connected to a desire to understand how the world works.  Scepticism is always the most reasonable, evidence-based position in response to any claim that a model tells us anything about reality.

Micro-economic theory and its approach to modelling have done the greatest damage to the idea of studying human society through axiomatic models. Much of the problem can be traced back to the start of the discipline in the 19th century.  The theoretical goal of the field was to demonstrate that an optimal equilibrium would emerge from the uncoordinated interactions of individuals through a market mechanism, rather than to study the dynamics of actual economies.  In order to demonstrate such outcomes, it is necessary to introduce a variety of rules into the model which break any implied mappings between the behaviour of the agents in the model and people in all possible configurations of the real world.  Without a solid and reliable mapping to the real world, the model is merely an abstract intellectual toy and should not be taken to say anything about the real world. It is easy to become lost in such a model, to focus on elucidating its behaviour to such an extent that the mapping between the model and reality is forgotten. This would be fine in itself - there’s nothing wrong with exploring the contours of abstract models - were it not for the propensity of economists to propose real-world policies on the basis of such models. One simply cannot say that “this rule produced the following outcome in a world of agents with perfect information and complete, pre-determined indifference curves, therefore changing that law in the real world will be likely to have a particular outcome”.  The unfortunate result is a generalised suspicion of the use of formalised models to represent societies, even when they are tightly bound to real-world phenomena and produce solid predictions. 


The inescapable nature of models

The fundamental problem with all critiques of the use of models to analyse human societies is that there is no alternative.  All analysis and understanding is mediated through models.  Our brains have evolved to map the chaotic jumble of sensory input they receive into simplified models of the world, to allow us to better navigate our world, survive and reproduce.  Our brains create models populated with integral, individual humans, not the cells and molecules that constitute them. We perceive brick walls as solid obstacles, not swirling masses of indeterminate probability distributions.  Above and beyond the models that evolution has wired into our brains, we apply a multitude of models to bring further order to our complex social worlds.  We divide people into friends, enemies and rivals; natives, foreigners and immigrants; men and women; workers and employers; and so on.  Not only do we use such models to understand the world, we also interpret the world through them.  If we see two individuals fighting, we may see it differently depending on what models we employ and whether the individuals are, for example, a man and a woman, a foreigner and a native, a worker and a policeman.  We see a man beating a woman, a cop oppressing an immigrant, or similar - and the perception itself is imbued with attributes of our model that are not derived from the scene itself.

People typically employ the mental models that they find most useful for interpreting and simplifying the world as they experience it.  They may, for example, deploy a model of gendered behaviour which defines how men and women should behave in any given situation.  Frequently, such models are built upon at least a vague correspondence between the model and reality.  However, the implicit models that people apply to the world are at least partially subconscious, rarely directly tested or evaluated, and often capable of resisting large quantities of negative evidence.  Furthermore, large-scale societies with complex, integrated social systems are a relatively recent phenomenon in human evolution – stretching back a mere 12,000 years – and until much more recently still, most of the population would have had relatively few interactions beyond their immediate, local community.  Therefore, the evolved mental models that people instinctively apply to the world around them often do not map well to the modern, large-scale society they are trying to understand. 

For example, it is common to hear commentators analysing relations between nations in terms of inter-personal relationships (France are our friends), analysing broad populations through the prism of personality or psychology (as a nation we can’t make up our mind), or reasoning through the model of some idealised family.  Models from familiar domains are happily applied to worlds where there is neither a useful mapping between model and reality nor any evidence of the model’s ability to mirror that reality.

Thus, it is impossible to escape the use of models in analysis.  By being explicit about our models and their mappings to reality and requiring that their validity be proven by demonstrating that they can mirror reality better than any alternative models, we move our analytic framework into the foreground, where it can be examined, tested and proven.  Without explicit models, we are limited to the instinctive models that live in our heads and they are frequently wrong. 

However, models are imaginary worlds with arbitrarily defined rules.  For the model to become a hypothesis, it needs to be grounded through a reliable mapping to reality. The more precise and less subjective this mapping is, the more useful the model.  Words and labels are too semantically loose to serve this purpose on their own - if different people can interpret the mapping in different ways and yield different results, then the model is not reliable.  The closer that we can get to an objective measuring process, the more tightly bound to reality our model will become. 

Finally, once a model has a reliable binding to reality via an unambiguous mapping, we must demonstrate that its dynamics parallel those of reality - at least better than any alternatives.  Once we can do this, we have a proper theory.  Nothing else really matters: it doesn't matter what the model's internals look like, and it doesn't matter what entities are mapped into and out of the model; as long as it exhibits dynamic parallelism and can be reliably mapped to reality, we can use it to understand the world. 


With this description of the fundamentals of modelling out of the way, I'm going to go on to use models extensively in my subsequent theoretical writing.  The next installment will be about social classification and will appear in mid-July.  For the immediate term, anyway, it's back to the personal narrative - the next installment of which will appear early next week.

Comments (8)

chekov

This is a bit of an early draft - lots of the sections need work and the whole thing needs better structure.

Anonymous

reading this on my kindle ..taking a little time..

Anonymous

food for thought..

J.Ocular

I liked the initial obscuring of the two models' "real world" state variables and then their revelation. During your re-write it would be nice to preserve that.

If I were being critical I'd suggest trimming the length a bit (easy to say, harder to do).

I liked this: "the implicit models that people apply to the world are at least partially subconscious, rarely directly tested or evaluated and are often capable of resisting large quantities of negative evidence." Michael Shermer's "Why People Believe Weird Things" is interesting on that subject.

I will admit that it took effort for me to read this section.. it needs more of a hook at the start rather than an announcement of dry theory: most of us tend to skip that. Perhaps posing a question, for instance talking about the human example of apocalyptic cults being confronted with the predicted end-times passing by with no effect (Stephen Jay Gould had a bit about that too in his last book).

chekov

Great feedback J.Ocular - I will put those thoughts into the mix and let them settle a while before doing up a more complete draft.

Anonymous

Models/part 2: Just to be clear that I understand what you are saying... are you saying that as long as a model mimics/mirrors reality, it doesn’t matter how it is constructed or how its innards work or how its innards are interconnected, etc.? If this is what you are saying, how do you know that your model will continue to reliably mirror/mimic reality if that reality changes (say, changes in reality that stretch beyond the ‘operating conditions’ for the model)?
Do you assume that your feedback loop will always be adequate to allow you to correct your model if the reality it mirrors changes substantially? e.g. scales up.

In terms of using models to analyse and understand human society, if beliefs and values help drive human actions – how do these components get reflected in the model – or do they?

chekov

Hi there,

Sorry about the delay in replying - I'm terribly busy at the moment.

Anyway, to answer your questions, briefly:

I don't think that the nature of a model's internals makes much of a difference - gravity, for example, is one of those things that has no 'micro-foundations' - it is an artefact of some models we have (Newton's and Einstein's). The theories are held in high esteem because of their ability to map back and forth between models and reality with high reliability. Sure, it's nice to understand why stuff happens the way it does, and theories that work without us knowing about why are always pointers towards exciting new research vistas. But if a theory is reliably better at mirroring reality than the competition, that's what is really important.

Secondly, all one can say about a good feedback loop for a model is that it should tell you how you are doing - not how to fix the model.

Finally, you can model beliefs and values any way you want - how best to do it depends on what questions you are asking. Whatever you end up with will always be a big simplification and generalisation, but the right generalisation is always more illuminating than lots of detail!

Anonymous

no prob - thanks for taking the time to reply.