[Insightful Reads] A Crude Look At The Whole

Author: John H. Miller


This is a book about complex systems: how things across various disciplines interact with one another. Since people tend to view the world through the lens of their own individual experience, the book opens up a range of perspectives on how interacting parts combine to produce unintended effects (the key word here is ‘emergence’).

It begins by noting that for some time we have been streamlining our subject fields, that is, reducing each field to its simplest components so it can be easily understood. This process is called reductionism. It is not an entirely wrong approach, but because each field is studied in isolation, it fails to predict the emergent events that result from the clash of multiple systems.


Insights

  • Simple rules producing intricate, unintended patterns, e.g. cellular automata (see the cellular-automaton sketch after this list).
  • Supply and Demand producing a price equilibrium
  • Feedback loops from high-frequency traders (HFTs) producing market crashes, and how circuit breakers allow markets to recover.
  • Individual agents within a system producing change for the entire crowd. Example case: honey bees dancing to get the hive to move to another location. The quality of each bee’s dance is based on the location it scouted; other bees then investigate and come back to do their own dances. This repeats until enough are convinced (i.e. a ‘quorum’) and the hive makes the move together.
  • Rabble-rousing: each agent has a sensitivity level (a threshold) that propels them to ‘take action’. If an agent observes X people already participating in the movement, they will join in. There is thus a threshold number of participants needed to get a movement started (see the threshold-model sketch after this list).
  • “If you want to initiate social movements from a small spark, encourage a diversity of views and a sense that everyone is participating”. On the flip side, “Alter information individuals receive about reasonable threshold levels or the number of activists”.
  • Hill-climbing: a solution sits at a local maximum of the problem space, and to find the next, higher peak you have to introduce errors (random moves). Example: drug discovery. By contrast, a Six Sigma process relentlessly reduces errors, which can leave a search stuck on its current peak (see the hill-climbing sketch after this list).
  • Humans and slime moulds change decisions based on trade-offs: indifference between 2 choices turns into 1 clear preference when we introduce an inferior 3rd choice (see the decoy effect used by marketers).
  • Majority rule (i.e. peer/neighbour pressure and invisible voting) leading to segregation (see the segregation sketch after this list).
  • Scaling: mathematical relationships between two variables, e.g. the power law and Zipf’s law (the largest city has roughly twice the population of the second-largest, three times the third, and so on).
  • Cooperation vs. competition, and the N-person prisoner’s dilemma. A simulation is built in which agents can either defect, making gains for themselves, or cooperate, which benefits all involved parties. The agents are represented as finite state automata, with randomly mutated preferences determining whether they defect or cooperate (see the tournament sketch after this list).
    • In the initial iteration A, machines that defect more than normal do better than average, since there is no structure to, or memory of, their opponents: cooperation always makes you worse off. Cooperative strategies that arise become an easy ‘mark’ for defectors and die out quickly.
    • A theory goes that if cooperators can stick together and play the game amongst themselves while avoiding defectors, they will receive high payoffs relative to the rest of the world. So in iteration B, agents interact only with a few other agents and keep a memory of whether each opponent is a defector or a cooperator. A blind strategy of always cooperating is still doomed to fail: machines also have to react to whether their opponents reciprocate, so they can defect to stop being exploited further. If the opponent does reciprocate, the two can establish mutual cooperation.
    • Actions in iteration B thus become communication signals: accepting a short-term cost in the hope of achieving long-term benefits.
    • Iteration C adds replication and mutation in each generation. The current generation of always-defecting machines may not have the mechanisms to react to a cooperative action, but they acquire mutations that their offspring may inherit and put to use.
    • Evolution is always in search of weaknesses, so newly cooperative strategies must remain vigilant and be able to react to an opponent’s defection. If not, a mimic can arise that sends all the right handshake signals to establish cooperation but then defects. Likewise, cooperators that grow lazy and are never tested by defection will be destroyed by defectors.
  • Self-organized criticality: repeatedly adding random sand grains to an interlinked pile forces it into a critical state, where a single additional grain can set off a chain reaction, like an avalanche. ‘Stuff boils over and just snaps’ (see the sandpile sketch after this list).
  • Monte Carlo Method: A method for simulating the behaviour of systems
    • Start with a random distribution of particles in the system, calculate the ‘energy’ of that system
    • Apply a change to particles, moving them by a small amount using a random process driven by a proposal distribution.
    • Calculate a measure of interest for the new system and the previous system.
    • Use an acceptance function to decide which of the two becomes the preferred system for the next iteration
    • Repeat with the selected system. The system eventually converges on the exact distribution underlying the measure of interest (particles interested in A will flock to A).

    As an example, we can forecast the weather by counting the number of days it rains over the year. Alternatively, we can calculate the chance of rain on a particular day given that it rained the previous day. The latter approach gives a more accurate prediction, i.e. the current state, plus transition probabilities learned from past data, predicts the next iteration.

  • Markov Chain Monte Carlo (MCMC) methods: for each pair of states (“start”/“goal”) of a system, transitions out of the start state governed by a chosen proposal distribution make it possible to reach (or ‘converge’ on) the goal state after n or more steps. This proposal distribution is symmetric: the probability of proposing x given y must be identical to the probability of proposing y given x.
  • You can use MCMC to discover hidden probabilities/distributions in complex adaptive systems. E.g. a frog jumps to another lily pad if the number of insects there is greater than on its current pad; otherwise it jumps with probability equal to the ratio of insects at the new pad to insects at the current pad. This is an MCMC method, and the frog’s desires act as the acceptance function. After enough iterations, the share of time it spends on any lily pad is given by the number of insects at that pad divided by the total number of insects across all pads (a ‘convergence’; see the lily-pad sketch after this list).
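
To make the first bullet concrete, here is a minimal sketch of an elementary cellular automaton in Python. The specific rule number (110) and grid size are my own illustrative choices, not details from the book; the point is just that a three-cell lookup rule produces surprisingly intricate patterns.

```python
# Elementary cellular automaton: each cell's next state depends only on
# itself and its two neighbours, yet complex patterns emerge.
RULE = 110  # Wolfram rule number, chosen for illustration

def step(cells, rule=RULE):
    """Apply one update of the elementary CA with wrap-around edges."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (centre << 1) | right   # neighbourhood encoded as 0..7
        nxt.append((rule >> pattern) & 1)               # look up the corresponding rule bit
    return nxt

if __name__ == "__main__":
    width, steps = 64, 32
    cells = [0] * width
    cells[width // 2] = 1            # a single 'on' cell to start
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)
```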
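
A sketch of the rabble-rousing threshold idea: each agent joins once the number of participants reaches its personal threshold, so whether a movement ignites depends on the whole distribution of thresholds. The threshold values below reproduce the classic Granovetter-style illustration; they are not taken from the book.

```python
def run_movement(thresholds):
    """Iterate until no new agent joins; return the final number of participants."""
    joined = [t == 0 for t in thresholds]        # zero-threshold agents provide the spark
    while True:
        count = sum(joined)
        newly = [j or (count >= t)               # join once enough others already have
                 for t, j in zip(thresholds, joined)]
        if newly == joined:
            return count
        joined = newly

if __name__ == "__main__":
    # Thresholds 0,1,2,...,99: each joiner tips the next person, so everyone joins.
    cascade = list(range(100))
    # Remove the '1' threshold and the cascade stalls after a single participant.
    stalled = [0, 2] + list(range(2, 100))
    print("thresholds 0..99         ->", run_movement(cascade), "participants")
    print("same crowd, no '1' agent ->", run_movement(stalled), "participants")
```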
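
A sketch of hill-climbing getting stuck on a local peak, and how injecting occasional random ‘errors’ lets the search find a higher one. The fitness landscape, step size and error rate are invented for illustration, and the error mechanism here (a random jump) is just one of several ways to add noise.

```python
import math
import random

def landscape(x):
    """A made-up fitness landscape with a small local peak near x=2 and a taller one near x=8."""
    return math.exp(-(x - 2) ** 2) + 2 * math.exp(-(x - 8) ** 2 / 2)

def climb(x, steps=2000, step_size=0.1, error_rate=0.0, seed=0):
    """Greedy hill-climbing; with probability error_rate propose a random jump instead."""
    rng = random.Random(seed)
    for _ in range(steps):
        if rng.random() < error_rate:
            candidate = rng.uniform(0, 10)           # an 'error': jump anywhere in the space
        else:
            candidate = x + rng.choice([-step_size, step_size])
        if landscape(candidate) >= landscape(x):     # only ever accept non-worsening moves
            x = candidate
    return x

if __name__ == "__main__":
    start = 1.0                                      # begin near the small peak
    print("no errors   -> x =", round(climb(start, error_rate=0.0), 2))
    print("with errors -> x =", round(climb(start, error_rate=0.05), 2))
```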
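
A sketch of majority-rule neighbour pressure producing segregation, in the spirit of the Schelling-style model the bullet alludes to. The grid size, vacancy rate and 50% tolerance are my own illustrative parameters; even that mild preference typically drives neighbourhood similarity far above 50%.

```python
import random

SIZE, EMPTY_FRAC, TOLERANCE = 20, 0.1, 0.5   # unhappy if under half of neighbours match

def neighbours(grid, r, c):
    """Occupied cells in the 8-cell neighbourhood (grid wraps around)."""
    cells = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
             for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    return [x for x in cells if x is not None]

def unhappy(grid, r, c):
    ns = neighbours(grid, r, c)
    return bool(ns) and sum(n == grid[r][c] for n in ns) / len(ns) < TOLERANCE

def similarity(grid):
    """Average share of like-typed neighbours across occupied cells."""
    scores = [sum(n == grid[r][c] for n in neighbours(grid, r, c)) / len(neighbours(grid, r, c))
              for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and neighbours(grid, r, c)]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    random.seed(0)
    cells = ["A", "B"] * int(SIZE * SIZE * (1 - EMPTY_FRAC) / 2)
    cells += [None] * (SIZE * SIZE - len(cells))
    random.shuffle(cells)
    grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]
    print("initial similarity:", round(similarity(grid), 2))
    for _ in range(20000):                     # unhappy agents move to a random empty cell
        r, c = random.randrange(SIZE), random.randrange(SIZE)
        if grid[r][c] is not None and unhappy(grid, r, c):
            empties = [(i, j) for i in range(SIZE) for j in range(SIZE) if grid[i][j] is None]
            i, j = random.choice(empties)
            grid[i][j], grid[r][c] = grid[r][c], None
    print("final similarity:  ", round(similarity(grid), 2))
```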
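
A sketch of the cooperation-vs-defection tournament. For brevity I use memory-one strategies (an opening move plus a response to the opponent’s last move) instead of the book’s full finite state automata, with the standard prisoner’s dilemma payoffs and a minimal select-and-mutate scheme of my own; the printout lets you watch whether tit-for-tat-like reciprocators spread.

```python
import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(a, b, rounds=20):
    """Iterated PD between two memory-one strategies; returns each side's total payoff."""
    last_a = last_b = None
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = a["first"] if last_b is None else a[last_b]   # react to opponent's last move
        move_b = b["first"] if last_a is None else b[last_a]
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        last_a, last_b = move_a, move_b
    return score_a, score_b

def random_strategy(rng):
    """A strategy is an opening move plus a response to each possible opponent move."""
    return {key: rng.choice("CD") for key in ("first", "C", "D")}

def evolve(pop_size=30, generations=60, mutation_rate=0.1, seed=2):
    rng = random.Random(seed)
    pop = [random_strategy(rng) for _ in range(pop_size)]
    for gen in range(generations):
        scores = [0] * pop_size
        for i in range(pop_size):                    # round-robin tournament
            for j in range(i + 1, pop_size):
                si, sj = play(pop[i], pop[j])
                scores[i] += si
                scores[j] += sj
        order = sorted(range(pop_size), key=lambda k: -scores[k])
        survivors = [pop[k] for k in order[: pop_size // 2]]   # top half reproduces
        children = []
        for parent in survivors:
            child = dict(parent)
            if rng.random() < mutation_rate:         # occasional random mutation
                child[rng.choice(["first", "C", "D"])] = rng.choice("CD")
            children.append(child)
        pop = survivors + children
        reciprocators = sum(s["first"] == "C" and s["C"] == "C" and s["D"] == "D" for s in pop)
        if gen % 10 == 0:
            print(f"generation {gen:3d}: tit-for-tat-like strategies = {reciprocators}/{pop_size}")

if __name__ == "__main__":
    evolve()
```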
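
A sketch of the sandpile picture of self-organized criticality (the Bak–Tang–Wiesenfeld model, which is the usual version of this story): grains are dropped at random, overloaded sites topple onto their neighbours, and avalanche sizes are recorded. The grid size and number of drops are arbitrary.

```python
import random
from collections import Counter

SIZE, THRESHOLD = 20, 4   # a site topples once it holds 4 or more grains

def drop_grain(grid, rng):
    """Drop one grain at a random site and topple until stable; return the avalanche size."""
    r, c = rng.randrange(SIZE), rng.randrange(SIZE)
    grid[r][c] += 1
    avalanche = 0
    unstable = [(r, c)] if grid[r][c] >= THRESHOLD else []
    while unstable:
        r, c = unstable.pop()
        if grid[r][c] < THRESHOLD:
            continue
        grid[r][c] -= THRESHOLD                     # topple: pass one grain to each neighbour
        avalanche += 1
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < SIZE and 0 <= nc < SIZE:   # grains falling off the edge are lost
                grid[nr][nc] += 1
                if grid[nr][nc] >= THRESHOLD:
                    unstable.append((nr, nc))
    return avalanche

if __name__ == "__main__":
    rng = random.Random(0)
    grid = [[0] * SIZE for _ in range(SIZE)]
    sizes = [drop_grain(grid, rng) for _ in range(50000)]
    tally = Counter(sizes[25000:])                  # ignore the warm-up period
    print("avalanche size -> frequency (small avalanches common, huge ones rare):")
    for size in sorted(tally)[:10]:
        print(f"{size:4d} -> {tally[size]}")
```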
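
Finally, a sketch of the frog-and-lily-pad example, following the Monte Carlo steps listed above: propose a random pad, jump if it has more insects, otherwise jump with probability equal to the insect ratio. The particular pad counts are made up; after many hops the share of time spent on each pad approaches that pad’s share of the total insects.

```python
import random

def frog_walk(insects, hops=200000, seed=0):
    """Metropolis-style random walk: return the fraction of time spent on each pad."""
    rng = random.Random(seed)
    pads = len(insects)
    current = rng.randrange(pads)
    time_on = [0] * pads
    for _ in range(hops):
        proposal = rng.randrange(pads)              # symmetric proposal: any pad, chosen uniformly
        ratio = insects[proposal] / insects[current]
        if ratio >= 1 or rng.random() < ratio:      # the frog's 'desires' act as the acceptance rule
            current = proposal
        time_on[current] += 1
    return [t / hops for t in time_on]

if __name__ == "__main__":
    insects = [10, 30, 5, 55]                       # hypothetical insect counts per pad
    total = sum(insects)
    observed = frog_walk(insects)
    for pad, (obs, n) in enumerate(zip(observed, insects)):
        print(f"pad {pad}: time share {obs:.3f}  vs  insect share {n / total:.3f}")
```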

This insightful read is one of many book reviews to come; read more about it in this post.

Tags: [Insights][Bookreads]



If you liked the insight, a coffee would be appreciated!