Thursday, December 13, 2012

Von Neumann on Empirics vs Hayek on Abstraction

Recently Carson Chow posted a short quote of von Neumann's musings on straying into pure mathematics when an idea originated in an empirical context.  [Hat tip Daniel Lemarie for retweeting it around.]

“[M]athematical ideas originate in empirics, although the genealogy is sometimes long and obscure. But, once they are so conceived, the subject begins to live a peculiar life of its own and is better compared to a creative one, governed by almost entirely aesthetic considerations, than to anything else, and, in particular, to an empirical science. There is, however, a further point which, I believe, needs stressing. As a mathematical discipline travels far from its empirical source, or still more, if it is a second and third generation only indirectly inspired by ideas coming from ‘reality’, it is beset with very grave dangers. It becomes more and more purely aestheticising, more and more purely l’art pour l’art. This need not be bad, if the field is surrounded by correlated subjects, which still have closer empirical connections, or if the discipline is under the influence of men with an exceptionally well-developed taste. But there is a grave danger that the subject will develop along the line of least resistance, that the stream, so far from its source, will separate into a multitude of insignificant branches, and that the discipline will become a disorganised mass of details and complexities. In other words, at a great distance from its empirical source, or after much ‘abstract’ inbreeding, a mathematical subject is in danger of degeneration.”

The quote was unsourced in the post.  It originally appeared in von Neumann's 1947 essay The Mathematician, which is republished here:
Part 1 and Part 2

Recently I was reading Emanuel Derman's book Models.Behaving.Badly.: Why Confusing Illusion with Reality Can Lead to Disaster, on Wall Street and in Life, which includes this passage:


Friedrich Hayek, the Austrian economist who received the 1974 Nobel Memorial Prize in Economics, pointed out that in the physical sciences we know the macroscopic through concrete experience and proceed from there to the microscopic by abstraction. [...] In economics, Hayek argued, the order of abstraction should be reversed: we know the individual agents and players from concrete personal experience, and the macroscopic "economy" is the abstraction. If the correct way to proceed is from the concrete to the abstract, he argued, in economics we should begin with agents and proceed to economies and markets rather than vice versa.

These are highly contrasting views!  

Hayek argued that the market economy was not designed but rather emerged from the fairly simple actions of the actors within the economy (negotiating on price, etc.).  Hayek is essentially saying that we should be able to reproduce the complexity of a given economy if only we can capture an accurate description of all of the 'rules' the actors are obeying.

Looking back on my own intellectual development, I was first drawn to these emergent ideas within the 'bottom-up' branch of Artificial Intelligence.  It was fascinating to set up a simple rule in a cellular automaton and watch it generate amazing complexity.  Similarly, the basic Darwinian model of evolution can be duplicated in a very short bit of code that, given an objective function, solves problems indirectly via a massive number of randomized trials guided by that function.
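To make that concrete, here is a minimal sketch (Python, with purely illustrative names and parameters) of that 'short bit of code': a toy evolutionary loop that improves a bit string against an objective function using nothing but random mutation and selection.

    import random

    def objective(bits):
        # Toy objective ("OneMax"): more 1-bits is better; any scoring function works.
        return sum(bits)

    def evolve(n_bits=40, generations=500, mutation_rate=0.05):
        parent = [random.randint(0, 1) for _ in range(n_bits)]
        for _ in range(generations):
            # Randomized trial: flip each bit with a small probability.
            child = [1 - b if random.random() < mutation_rate else b for b in parent]
            # Selection: keep whichever candidate the objective function prefers.
            if objective(child) >= objective(parent):
                parent = child
        return parent

    best = evolve()
    print(objective(best), ''.join(map(str, best)))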

Lately I'm a 'data scientist', which means that I'm attempting to extract models from data.  Common methods attempt to wash out enough complexity in the data to derive a model that is inspectable and understandable by a person.  This is fundamentally grounded in empirics, as the extracted model is tested for accurate prediction against other data.  The inspected models are used mostly for 'story telling', helping users of the system understand what is being learned from the data... yet we leave specific predictions exclusively to the algorithms.

Of course one can't resolve these abstract differences in a blog post; that's a tall order.

I would, however, assert that every learning algorithm should satisfy two fundamentals:
  1. It accurately predicts outcomes given input data.
  2. It emits a model that is inspectable and can inform the building of better 'toy models' of the system under study.
No one disputes the first, yet many seem all too happy to ignore the second and are OK with building increasingly powerful black boxes with 'big data' machinery.
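As a concrete (if toy) illustration, here is a sketch that assumes scikit-learn is available; the dataset and model choice are purely illustrative. It satisfies both fundamentals: it is scored for prediction on held-out data, and it emits a human-inspectable model.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0)

    model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

    # Fundamental 1: accurate prediction against data the model has not seen.
    print('held-out accuracy:', model.score(X_test, y_test))

    # Fundamental 2: an inspectable model that can inform a better 'toy model'.
    print(export_text(model, feature_names=list(data.feature_names)))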

Posted via email from aicoder - nealrichter's blog

Saturday, January 07, 2012

Software pattern for proportional control of QPS in a webservice

Problem Statement

Imagine you are writing a webservice that must call a back-end service, such as a data store. Let's assume (without loss of generality) that the data store (and the hardware supporting it) has some limit in QPS that it can handle. We'd like the client system (your web service) to impose a limit on the QPS to the back-end service. Also assume that this is a distributed webservice: lots of worker threads on lots of different machines.

Requirements

  1. Given a goal in QPS, manage the maximum outgoing requests per second to that goal.
  2. Be fast. Maintain a fast controller settling time when the goal or queries change.
  3. Be adaptive. Respond to swings in incoming requests that need to be queried against the service.
  4. Be distributed. Act locally against globally observed numbers without knowing the number of workers.
  5. Be robust. Handle additions and removals of workers/clients to the system without coordination. Minimize overshoot.

Design

Assume that querying this back-end service, while valuable and mostly needed, is optional under duress. It's far more important for your front-end service to be responsive and return some error or 'No Content' than to hang on a busted back-end. As a result we'll use a sampling rate 'r' to denote the percentage of time that the web service should query the back-end. Under normal conditions this rate is 1 (100%). Also assume that the goal in QPS to the back-end is set in some configuration area in your system. Under duress the rate r will be adaptively tuned to obey the QPS goal. Also assume you have smartly implemented some monitoring system like Ganglia/Nagios/Cacti and are emitting events to it when you call the back-end service.
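For concreteness, the per-request decision might look like the following sketch (Python; the names and the query_backend callable are illustrative stand-ins, not part of any specific system):

    import random

    def maybe_query_backend(request, r, query_backend):
        # Call the back-end with probability r; otherwise degrade gracefully.
        # Under normal conditions r == 1.0 and every request goes through.
        if random.random() < r:
            return query_backend(request)
        return None  # caller falls back to an error or 'No Content' response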

Inputs

  • G = Goal QPS
  • M = Current measured QPS (from Ganglia).
  • r = Current sampling rate [0,1]

Outputs

  • r_new = A sampling rate [0,1]

Adaptive Mechanism

  • r_new = r * (G/M)
  • r_new = MAX(0,MIN(1,r_new)) //clamp r_new between [0,1]
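In code the update is just a couple of lines. This sketch adds one guard for M == 0 (my assumption, not part of the mechanism above), since a momentary zero measurement would make the ratio undefined:

    def update_sampling_rate(r, G, M):
        if M <= 0:
            return 1.0  # no measured traffic; assume full sampling until M recovers
        r_new = r * (G / M)               # proportional step toward the goal
        return max(0.0, min(1.0, r_new))  # clamp r_new to [0,1]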

Benefits

  • Needs only the global G and M as inputs.
  • No coordination needed between workers/servers other than the globally observed M.
  • Adaptively moves the per-worker sampling rate independently of all other workers' rates.
  • Workers can have different incoming QPS rates from a load balancer; the controller will adapt.

Failure Modes

  • If the sensor for M fails to be updated, then the controller is blind.
  • If the Goal is set or reset to zero, then the controller will stop traffic.
  • Both of these can be addressed easily (see the sketch below).
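One possible set of guards, sketched here as an assumption rather than as part of the original design: fall back to a configured safe rate when the sensor for M looks stale, and treat a missing or zero goal as 'do not throttle' unless stopping traffic is truly the intent.

    import time

    FALLBACK_RATE = 1.0        # illustrative default
    MAX_SENSOR_AGE_SEC = 60    # illustrative staleness threshold

    def guarded_update(r, G, M, m_timestamp):
        if time.time() - m_timestamp > MAX_SENSOR_AGE_SEC:
            return FALLBACK_RATE  # sensor for M is stale: the controller is blind
        if G is None or G <= 0:
            return FALLBACK_RATE  # misconfigured goal: don't silently stop traffic
        return max(0.0, min(1.0, r * (G / max(M, 1e-9))))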

Desired Response

  • The overshoot/undershoot is called 'ringing'.
  • The time to approach the goal is called the 'settling time'.

Implementation Notes

  • Emit a ganglia/graphite/XXX counter for both requests sent and skipped.
  • Use a smoothed average of the measured QPS to reduce controller jitter (see the sketch after this list).
  • (Optional) Each worker should do a bit of randomization of when it performs its sampling-rate update to smooth out any startup/restart ringing.
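A sketch of the smoothing, using an exponential moving average (the alpha value is an illustrative choice; any smoothing window would do):

    class SmoothedQPS:
        # Exponential moving average of the measured QPS, fed to the controller.
        def __init__(self, alpha=0.2):
            self.alpha = alpha
            self.value = None

        def update(self, measured_qps):
            if self.value is None:
                self.value = measured_qps
            else:
                self.value = self.alpha * measured_qps + (1 - self.alpha) * self.value
            return self.value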

Invertibility

This design could be inverted for a server implementation. If your webservice has a set of APIs that are heavy to execute, then this controller could be used to throttle the incoming requests that are delegated to the heavy work.

  1. Receive a request.
  2. Subject it to the sampling rate r.
  3. If yes, delegate the request to the executor.
  4. If no, respond to the client with 'HTTP 204 No Content' or an equivalent empty response.
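A minimal server-side sketch, assuming Flask and a hypothetical heavy_work() handler; the rate r would be kept up to date by the same controller described above.

    import random
    from flask import Flask

    app = Flask(__name__)
    r = 1.0  # updated elsewhere by the controller

    def heavy_work():
        return 'result of the heavy API call'  # stand-in for the expensive executor

    @app.route('/heavy')
    def heavy_endpoint():
        if random.random() < r:
            return heavy_work()  # delegate to the executor
        return '', 204           # shed load with an empty 'No Content' response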