Thursday, 29 December 2011

Life is not like a video game

No, not because "it has no reset button". That's boring, and if, like me, you first played video games in arcades where your mum would only give you one coin, not obviously true. There are a couple of other fallacies I want to tackle, under the general heading "life is not fair". Firstly, life has no obligation to let you win. Secondly, life does not give everyone the same 100 skill points to start with.
There's a fact most people don't realise when they play games: the game has been specifically designed so that you can win. The game world may seem natural and real, but it obeys rules. One rule, too obvious to think of normally, is that there is a (large) set of sequences of button-presses that results in the "YOU WIN" screen. This rule is so obvious that we don't realise it's not a law of logic. It's perfectly possible for a one-player game to have no way to win. If you play tic-tac-toe against a perfect player you cannot win. If you play the MIU game you will never reach MU.
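The MU claim is checkable mechanically. The MIU system (Hofstadter's, from Gödel, Escher, Bach) starts from "MI" with four rewrite rules, and an invariant settles it: the number of I's starts at 1, and the rules can only double it or subtract 3, so it is never divisible by 3 - while "MU" contains zero I's. A small Python sketch (the search bounds are arbitrary, just enough to illustrate):

```python
def successors(s):
    """Apply each MIU rule everywhere it fits, returning all results."""
    out = set()
    if s.endswith("I"):                     # rule 1: xI  -> xIU
        out.add(s + "U")
    if s.startswith("M"):                   # rule 2: Mx  -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):             # rule 3: xIIIy -> xUy
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):             # rule 4: xUUy -> xy
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def reachable(start="MI", max_len=10, steps=12):
    """Length-bounded breadth-first search over derivable strings."""
    seen, frontier = {start}, {start}
    for _ in range(steps):
        frontier = {t for s in frontier for t in successors(s)
                    if len(t) <= max_len} - seen
        seen |= frontier
    return seen

strings = reachable()
assert "MU" not in strings                          # never derived
assert all(s.count("I") % 3 != 0 for s in strings)  # the invariant holds
```

The bounded search only checks short derivations, of course; it's the mod-3 invariant that proves MU is unreachable outright.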

And this isn't just an observation about abstract games. It's an observation about life. There are some things that, no matter how smart you are, no matter how hard you try, no matter how much you want them, you can't have. You can't have a machine that generates free energy. It doesn't matter if you wish upon a star, you won't win that game. It doesn't matter how hard you try: it's not clear that *everyone* has a set of actions they can take to become world basketball champion. It's not clear that *everyone* can become the head of a Fortune 500 company.

And this is even true of games that humanity as a whole is playing. Just because the whole of humanity tries to do something doesn't mean it will succeed. It doesn't matter how much money you throw at it: it's not obvious that it's even theoretically possible for humans to cure viral disease in the next 20 years. It's not obvious that the renewable energy project can succeed, on straightforward thermodynamic grounds. We shouldn't assume humanity is omnipotent.

There's a point to this other than depressing you and stopping you from going on the fucking X-Factor. Before you start your project, be careful that you have considered every option. If you pick the project "become the greatest basketball player ever" and you are 4ft tall, then it's safe to say you are going to waste your life. You're allowed to amount to nothing. 99.99% of all the humans that have ever lived did not matter at all: they were useless, because they picked a project they couldn't succeed at. And when you pick such a project you ignore the projects you could have done. You forget that you could have sat down, done something else, and won.

The second fallacy is one much loved by a certain kind of hippie/middle-class/mother figure, the kind of person who thinks everyone is special. This is what I like to call the Dungeons and Dragons fallacy. It runs something like this: when creating your character, nature takes 100 points and distributes them amongst various categories like "drawing skill", "height", "maths ability", "attractiveness" etc., so people have a different balance but the same sum. This is the implicit idea whenever the fact that Alice is better than Bob at the xylophone is countered by the assertion that Bob is probably better than Alice at yachting.

You've all heard people defend someone's low IQ by saying it doesn't measure some property, often creativity, with the implication that the low-IQ person will be far more creative. Just plain false, I'm afraid: there's no a priori reason why little Timmy can't be both bad at verbal reasoning *and* uncreative. There are many people who are bad at a wide range of things. Likewise there are many people who are good at a wide range of things. Unless you can point to a causal link between two abilities that tends to make them balance, being bad at one thing is no reason to expect being good at another.

This is quite a broad failure of reasoning. There is an implicit assumption in a lot of people's thinking that nature should be "fair". We're taught in stories that the bad guy always gets his comeuppance. We are told that the ugly little duckling always turns out to be beautiful on the inside. And I'm sorry, there's no reason to think so. There is no god who "ought to be fair"; there is only the blind bumping of wavefunctions, and they don't give a damn if you're thick *and* ugly.

We often console ourselves, looking at someone we envy, with the thought that they are bound to have a flaw. We may lose to them at XYZ, but there must be some area where we beat them. Sorry, just not true: in a world of 7 billion humans and rising, it's a statistical falsehood. There are people out there who, on any axis you care to name, are strictly worse than you. There is someone who is less romantically successful, has a lower IQ, is less creative, less good at drawing, less expressive, less attractive. And there are people who are better than you at everything. I know this to be true of myself because I've met the motherfucker. There are people out there who are better at writing, better at thinking, more creative, more loving, more attractive, more composed, less likely to burn out from stress.
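That last claim sounds like bravado, but it's just statistics. A toy simulation (the 5 independent uniformly-distributed "skills" and the population size are arbitrary illustration choices, not a model of real traits): in a sample of 1000 people, most turn out to be strictly dominated - worse than some particular other person on every axis at once. The few who aren't are exactly the Pareto frontier of the sample.

```python
import random

random.seed(0)  # fixed seed so the run is repeatable
NUM_SKILLS, POPULATION = 5, 1000
people = [[random.random() for _ in range(NUM_SKILLS)]
          for _ in range(POPULATION)]

def dominates(a, b):
    """True if a is strictly better than b on every axis."""
    return all(x > y for x, y in zip(a, b))

dominated = sum(
    any(dominates(other, p) for other in people if other is not p)
    for p in people
)
print(f"{dominated} of {POPULATION} people are strictly worse than "
      f"somebody else on all {NUM_SKILLS} axes at once")
```

As the population grows, the undominated fraction shrinks, so at 7 billion the claim only gets safer.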

I'm sorry if this is a depressing post. There isn't a way to say it other than "life isn't fair". You cannot assume that things will just work out towards some kind of balance. The laws of physics do not require any such thing. The important thing is to accept this universe and not go mad in it. The important thing is to accept that life isn't fair. To accept that there's a guy who's strictly better than you, who will beat you at anything he sets his mind to. Then work around it. Don't fight him on that project; do something else, do some project where you're not the best person who could possibly do it, but the best person who *is* doing it. There are few pure mathematicians who could honestly say that John Conway could not have replicated their work had he set his mind to it. Mathematics did not stop and wait for him to die. It carried on, most mathematicians knowing that John Conway and a dozen others could stomp on their pet project and solve it at any moment, but also knowing that if they didn't, it still needed to be solved.

There are some projects that cannot be solved, some that can be partially solved, and some that can be totally solved. You have a finite amount of effort to put into all of your projects in the next year. The lesson is to pick the right projects. You can't always know which is which. Sometimes life can still kill you while you're not looking. But whenever you can know, whenever you can predict, pick a project that you can solve and that you should solve. Because sometimes everyone else is waiting for you to do it.


  1. I don't think I could agree with you more. I've been pondering this exact thing myself all day, by coincidence. I have nothing particularly pertinent to say, other than an idle wondering as to why exactly it is that the 'bad guys' never see retribution. I can accept with grace that there are a good many things in life I will never succeed at. I can deal with that. But why, oh why, do people with the wrong ideals and a mind to cause hurt and damage wherever they go always come out on top? If one thing has ever killed my belief in the 'divine', it's that. People always say 'what goes around comes around', and are steadfast in that belief - however, I'm yet to see one single instance of it, personally.

    Sorry to rant. I guess this has just struck a chord with me today. Odd that you'd have posted exactly what I was thinking about.

    Eloquently put, regardless.

  2. It's not necessarily true that bad guys never get their just deserts. The entire criminal justice system is proof that some bad people do. The problem is that in practice it's the stupid bad people who get caught. Clever bad people don't do stupid things like getting caught.

  3. Still, I'm not sure that the punishment dished out by the criminal justice system is proportionate to the crime committed, in the vast majority of cases. And the people who don't do anything punishable by law, but still go around causing, say, emotional trauma - they'll get away scot-free. No karma, no justice. I don't want to wish bad things on people, but I can't help but feel it should be that these people stumble at least once in their lives.

  4. Indeed, and I more than agree about the justice system. But of course, saying "it should be" in a universe with no gods in it doesn't make it happen. That's our job, us humans. If we think these people ought to stumble, or better yet that they ought to not be able to do wrong in the first place, it's our job to make it happen.

  5. Why do people without regard for other people come out on top? Their advantage is non-negative because they have a broader space of options; it's net positive because everyone else has coordination problems, and so fails to sufficiently shun people who cause irritation.

    Coordinating to enforce norms is costly, and so the Nash equilibrium of this non-cooperative game looks like low levels of "anti-social" behaviour, with the majority selectively ignoring some behaviours so as to avoid the enforcement costs, and being really good at rationalising their own behaviour.

    Cheaper ways to enforce norms involve moulding the impressionable. There is a reason that most children's literature has the bad guys losing - it's good moulding, and it's good signalling by the authors. Environments where there is less attention to signalling adherence to the standard norms are routinely damned in public. Think books/films/comics/games (roughly chronologically).

  6. The really scary sharpening of this post is to note that in aggregate humans do have god-like powers. Consider that circa 70kya, there were under 10,000 humans on the planet. Now, we're a driver of planetary dynamics.

    Using power output as a rough proxy for effect, we've gone from 1MW total 70kya, to around 20GW in 500AD, to 15TW today. Roughly, the first three orders of magnitude took almost 70,000 years, and the next factor of 1000 took about 1500 years. We've increased energy use by 50% in the last 25 years.
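    The implied acceleration is easy to make concrete. A quick sanity check of the figures above (taking the 1MW / 20GW / 15TW numbers at face value - they are rough estimates, not measurements):

    ```python
    import math

    def doubling_time(p0, p1, years):
        """Average doubling time implied by growing from p0 to p1 over `years`."""
        return years * math.log(2) / math.log(p1 / p0)

    # ~70kya (1 MW) to 500 AD (20 GW), then 500 AD to today (15 TW),
    # then the last 25 years (a 50% increase).
    early  = doubling_time(1e6, 20e9, 70_000 - 1_500)
    late   = doubling_time(20e9, 15e12, 1_500)
    recent = doubling_time(1.0, 1.5, 25)

    print(f"pre-500AD era:  one doubling every ~{early:,.0f} years")
    print(f"post-500AD era: one doubling every ~{late:,.0f} years")
    print(f"last 25 years:  one doubling every ~{recent:,.0f} years")
    ```

    That works out at roughly a thirty-fold speed-up between the two eras, and another few-fold since.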

    We've got more efficient at using the power available. 150 years ago, we had humans computing tables of functions, at 1 op/second, and 100W power consumption. Today, my graphics card can be programmed for 10^12 ops/second, with the same power consumption (and lower capital expenditure).

    Projecting out a little, within 30-40 years the computational power for brute-force whole-brain simulation is likely to be available for a few tens of thousands of pounds. A decade later, the same money will simulate 1000 people.

    Question: How fast does research go when you can throw that level of brain power at a problem? Note that this is a lower bound on awesome power - it's not clear that the optimal use of a computer is to simulate primate brains.

    In summary, within a few decades essentially toy money will buy you a large research community. Current research money would get you literally millions of dedicated researchers. Put another way, the pace of research goes up by a factor of a thousand. What happens to the doubling time of performance/price of computational hardware in those circumstances? What happens to efficiency improvements in brain simulations, or general AI design? How do a few billion humans control hardware that is a million times smarter than they are?

    What I'm getting at is that humans are going to instantiate gods. Comparatively soon. Without a lot of work, those gods are unlikely to implement ethics that we like. Even human simulations aren't safe - most people don't trust most other people, especially on major issues.

    For example, I would not trust anyone espousing karma-esque models of ethics. Without trying to be too harsh to someone I don't know on the internet, it is inconsistent to say "I don't want to wish bad things on people, ... it should be that these people stumble at least once in their lives." If I feed the second clause into a being smarter and more capable than I am, then I would confidently expect a prompt surveillance state with highly optimised controllable floors. This is a lower bound on the weirdness of the resultant system. I would not consider this good.

    More troublesome is the fact that if "bad people should stumble" is a moral axiom, then people who oppose it (say, by posting in comment threads on blogs) are anti-ethical and should, at least, be made to stumble. In fact, those people who think it is not an ideal axiom, and thus might later try to act against it, are anti-moral and should be made to stumble. I hope it is fairly obvious that I consider that state of the world to be dystopic.

    If I feed both clauses in, then consistency requires that I don't consider stumbling to be a bad thing. So if I were given the choice between 3^^^^^^3 (look up Knuth's up-arrow notation) people stumbling or, say, one person getting a dust speck in the eye (or some other marginal but bad thing), then I would have to choose the former. This seems weird to me. I would not trust a morality that asserts this.
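    For the curious, Knuth's up-arrow notation generalises exponentiation: one arrow is ordinary powering, and each extra arrow iterates the operation below it. A direct (and deliberately naive) Python rendering - only usable for tiny arguments, since even 3^^^3 dwarfs anything physically computable:

    ```python
    def up(a, n, b):
        """Knuth's a (n arrows) b, defined recursively."""
        if n == 1:
            return a ** b        # one arrow is plain exponentiation
        if b == 0:
            return 1             # base case of the iteration
        return up(a, n - 1, up(a, n, b - 1))

    assert up(3, 1, 3) == 27                  # 3^3
    assert up(3, 2, 3) == 7_625_597_484_987   # 3^^3 = 3^(3^3) = 3^27
    assert up(2, 3, 3) == 65536               # 2^^^3 = 2^^4 = 2^16
    ```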

    In summary, we are shortly going to enter a world where small groups of humans instantiate gods. This won't happen often - you would expect gods to be better at instantiating gods than primates are. As a corollary, you could reasonably argue that any research that isn't needed in the next, conservatively, 100 years is fairly pointless. Thereafter, we'll either have instantiated entities that make continued research non-optimal, or we'll be watching Jupiter being turned into little smiley faces (or other wirehead behaviours).

  7. As an addendum, the list of things I would class as important right now:

    1) Malthusian catastrophes
    1a) Climate change
    1b) Food/Water management
    1c) Disease management
    1d) Old age / death

    2) Existential risks
    2a) Transhuman AI
    2b) Asteroid deflection
    2c) Nanotech (bio and non-bio)

    3) Support structures
    3a) Education
    3b) Managing politicians
    3c) Managing the populace

    I claim 1a is an engineering problem. Google LFTR, sulfur-iodine, Fischer-Tropsch and amine scrubbers.
    I claim 1b is engineering given cheap energy. (reduce to 1a)
    Some parts of 1c have money (Gates for malaria). Others don't (look up the Giving What We Can charities).
    1d is an active research topic (SENS / cryonics)

    2a is being researched, barely.
    - These entities are at least more likely than not (I mean, humans aren't optimally smart)
    2b has had the Hollywood treatment, and we're fairly confident that Apophis won't hit in 2036.
    - Having more than paper studies would be nice. Using the Apophis 2029 pass to actually do a diversion would be net positive, in my view.
    - We've done fairly major sky searches, and we don't think we've missed any extinction causing rocks.
    - There are probably still a bunch of merely continent-devastating rocks missing.
    2c has little money for safety per se.
    - Grey goo scenarios are hard; most of this requires high vacuum or has power dissipation issues.
    - Danger is roughly reduced to 2a; we can make cheap low power computers with nanotech (read Drexler's PhD thesis).

    3a sucks, presently. Mostly because the content of the post is not internalised by politicians, as it's a really good way to lose votes. Think about the following political characterisation: "Hey, people, yes, people who should vote for us! Statistically, you won't amount to as much as other people, and you should work on the cast-off projects that highly competent people don't want to work on." You then proceed to claim that your opponent endorses everything in Atlas Shrugged, and leave them to be dissected by Paxman. I mean, on this side of the pond it's political anathema to even consider that the independent sector may be better at preparing kids for Russell Group universities. Optimising education for maximal progress on hard problems probably involves more money on getting more Conways and PhD researchers. Aside from screening for the standard obstacles to learning (ASDs and dyslexia seem particularly common), ceteris paribus this means less for the bottom end of performers. Again, this gets politicians painted as Thatcherite at best, and more likely Randian.

    3b kind of sucks. It's only been a few years that the DECC has had someone competent at the top (read anything by MacKay, esp. withoutthehotair). Stateside, politicians still have issues with basic research, and petty facts about the efficacy of medical interventions. We have issues stating the risks of drug use, or the cost-effectiveness of Herceptin, or whether homeopathy should be available on the NHS. 4-5 year election cycles are too short for some of the required research, and some of the research is predicated on first understanding that reality can just kill you.

    3c sucks. Aside from the obvious comments about 40% of Americans, there are vaccines, animal testing, embryonic research, nuclear physics, counter-terrorism, what does or does not cause cancer, etc. To first order, most people are uninformed, and there are enough groups acting to keep things that way that changing this will be difficult. Unfortunate corollary: people are not driven by risk estimates, but by fear. I don't know whether the extent of the problems with the support structures is so severe as to endanger our ability to do the research on X-risks. It is fairly obvious that political and popular constraints are an issue in climate change. If someone could solve this class of problems quickly, that would be a good thing.


Feedback always welcome.