Thursday, 4 August 2011

The point.

Why blog? Why think about ethics? Why practice science? What ultimately is the point of all of it? What are we worrying about? And importantly, is it clear this goal is attainable? Is it clear we can work towards it successfully? It's not clear I've ever answered this question. So, here we go.

What's the goal? One word answer: happiness. Happiness is the thing I want for myself, the thing that is desirable in and of itself. A world where I'm happy is to be sought; a world where I'm not happy is to be rejected and prevented if at all possible. Because other people seem to have minds like my own, it makes sense that they also get this feeling. Because their feeling seems likely to be like mine, it makes sense that their happiness is also a thing to be desired, and that their unhappiness is a thing to be prevented.

So the job is to increase happiness for as many people with minds like mine as you can find.

Observation: my happiness is determined by two factors: first, my state of mind and attitude; second, the direct experiences I have of the world around me. Historically the Buddhists stress the first, the Enlightenment materialists the second. Both are necessary: a perfectly contented Buddhist monk may be as happy as humanly possible, but they'll stay that way for less than 40 years without scientific medicine. A scientist with all the technology, joy-inducing drugs and medicine they can get their grips on may well be perfectly miserable if they believe the brave new world they've created is a dystopia and try to fight it.

So we need both. The second part is much more obvious and accessible to me. The first must not be underestimated, and there are clear measurable successes of both camps.

On the first front there is often (read: almost always) more heat than light. The Dalai Lama has many interesting things to say, and various schools of Buddhism have produced testable results. However, these are swamped by a huge number of "new age" scam artists. There needs to be a far more active and rigorous study of the psychology of happiness. This is not easy; it needs people, time, resources and good ideas. But there's no reason a good science can't be made, or why it can't explain what some Buddhist schools are doing right and leave no room for new age bullshitters. I have nothing interesting or important to add here, so I leave it to others. That doesn't mean it's not important, simply that it's not my field.

The second front is much more my field. Examine the connections between sense data and happiness, find the good sense data, find the physical world that generates them, find the actions that cause this physical world to turn into that one, and do them: Bob's your uncle, Fanny's your aunt. Now, there are several objections that pessimists, idiots or those with weird agendas raise to this.

First: material things can't make you happy. This is the "money can't buy you happiness" line. This is obviously wrong; anyone seriously suggesting it's true is either trying to be wise and failing, or colossally stupid. Of course material things change how happy we are. That's why we want them. The holidays we go on, the sports we watch, the hobbies we have, the prostitutes we hire. Of course they make us happy; if they didn't, we would stop buying them. They may not make you happy: sure, going camping for a fortnight and then abseiling off a mountain doesn't seem fun to you, but it does to others; it really, genuinely makes them happy. It's part of the physical world: it takes energy to make it happen, it takes human effort to make it happen, it takes money to make it happen.

Second objection: we shouldn't worry about material things; it all depends on state of mind. Bollocks. At the very least, material things keep that state of mind going longer by feeding the body that makes it work. (Anyone who honestly, really, truly, deeply believes that their mind is not just a part of their body and doesn't need it to survive is more than entitled to test this claim via suicide.) State of mind is important. You can be happy in extremis. But I bet it's a hell of a lot easier if you're warm and comfortable. And actually, either way I win. If it's hard to be happy on the rack then let's get people off the rack; if it's easier to be happy on the rack because of some bullshit about teaching you the value of existing or whatever, then let's put people on racks. If it makes no difference at all ... I'm going to go ahead and carry on taking people off racks, if for no other reason than the screaming keeps me up at night. Either the material world affects how easily we can be happy, in which case the material world is important; or it makes no difference at all, in which case you can ignore all the things I'm doing in the material world. But I have this thing that my own personal happiness does seem very, very much to depend on the material world, so unless you can give me some very strong evidence, I'm going to carry on with the world while you're sitting in your trance ignoring me.

Third objection: science can't help us, and in fact makes our lives worse; thinking about the problem scientifically won't work. This isn't a priori nonsense: thinking about the fact that you're stammering makes the stammer worse, so the mere act of scientifically analysing something could conceivably make things worse. It's not true, of course, but you do need to spend a couple of seconds justifying that. No, there was no golden age. The people in the past did not lead happy idyllic lives. They got murdered a hell of a lot more than us; yes, even more than the people who live in inner-city ghettos in America. You're a lot safer than someone living in the same place 500 years ago. The myth of the noble savage is exactly that, and historical, anthropological and archaeological research all bears this out. The scientific pursuit of material well-being has produced ever longer lifespans, ever lower murder rates, ever better levels of public health, ever lower infant mortality, ever less fatal wars (no, seriously), and ever higher levels of participation in public life and thus of public policy designed to benefit the population. And participating in this is vital.

Fourth objection: yeah, science has done great things, but we've about reached the limit; there are limits to growth all over the place, so soon we'll stop. True in one sense, but there's a way out of it. Yes, limits to growth exist, but the history of civilisation is the history of circumventing them. A problem that faced our early ancestors was that grazing enough animals to feed a gang of hunters on the savannah takes a lot of land; I think The Ascent of Man gives the population of Chicago as the limit to the human population of the earth given that food supply. This is a clear limit to growth, and you get round it by doing something else instead: farming feeds more people with a smaller land area, then you farm ever more intensively with ever better crops. The limit is still there; you've not broken the laws of physics, you've simply done something else. Almost all limits can be dealt with the same way. There's a limit to how fast a team of horses can go, so you use a train; there's a limit to how efficient a steam engine can be, so you use diesel and electric. There's a limit to how fast a train can go, so you fly; there's a limit to how fast a plane can go, so you hop to orbit and hop back again. Limits can almost always be put off in this way. And if you can always move the most pressing limit at least a decade further away every decade, then in a very real sense there is no such limit.
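
The "a decade further away every decade" claim can be put as a toy calculation. A minimal sketch; the function and all its numbers (3% growth, a 10x ceiling, a 10x substitution jump) are invented for illustration, not empirical:

```python
# Toy model of "move the limit further away before it binds".
# All parameters are illustrative, not data.

def first_binding_year(growth=1.03, shift=10.0, horizon=500):
    """Return the first year resource use hits the ceiling, or None.

    Whenever use reaches 90% of the current ceiling, a substitute
    technology (hunting -> farming, horses -> rail -> flight)
    multiplies the ceiling by `shift`.
    """
    use, ceiling = 1.0, 10.0
    for year in range(horizon):
        use *= growth
        if shift > 1.0 and use >= 0.9 * ceiling:
            ceiling *= shift  # do something else instead
        if use >= ceiling:
            return year
    return None

# Without substitution, 3% growth hits a 10x ceiling within a century;
# with substitution, the ceiling stays ahead for the whole horizon.
print(first_binding_year(shift=1.0))   # a finite year
print(first_binding_year(shift=10.0))  # None
```

As long as the substitution step keeps outpacing the growth rate, the binding year never arrives within any fixed horizon, which is exactly the sense in which "there is no such limit".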

The fifth objection is a sub-objection to this one. At the end of the day there's a limit that really matters: the second law of thermodynamics. In any closed region of the universe, energy sources run out, useful resources are exhausted and everything stops. In the really-quite-long-indeed term there's a simple solution: stop living in a closed region of the universe. Expand your empire fast enough and you can match any pace of energy and resource growth you like. And the speed of light isn't a problem so long as the "rate of growth" you care about is relative to time on the fringe, not time at the long-dead core.

Ok, so. What's the point? Make people happy. How? A combination of a real, hard scientific study of the psychology of happiness, and a real, concerted push towards the continued circumvention of limits on expanding the resources we need to make the changes in the physical world that make us happy. Everything else I've ever done, attempted, thought about, written about or talked about is either just a facet of this or a total waste of time.


  1. Your mind is not typical. It /is/ typical that you think it is. There are minds massively far from you by any metric of mind design, and dropping their preferences from consideration seems slightly chauvinist.

    Also, there is good evidence that both mental state and direct experience of the world can be circumvented by appropriate physical systems (canonically, rats electrically self-stimulating the nucleus accumbens in preference to eating, to the point of starvation). Neither is "necessary" for choosing to take actions; modifications to the "happiness" predicate need to be made to avoid this kind of feature.

    Unfortunately, c is finite, and people seem to have utilities with cross-terms for other people's actions. This means that there will be a need to satisfice between various people's utilities. Playing devil's advocate: even if the existence of LGBT people plunges Fred Phelps into a pit of despair 3^^^3 times larger than anyone else's, you probably don't want to yield to his utilities.

    Lastly, I think there needs to be some thought given to existential risks. It is neither cognitive research nor limit-to-growth evasion per se, but personally I attach little utility to 2°C of warming/asteroid strikes/unfettered (nano-scale) self-replicators/an AI turning Sol into paperclips. In the short term, avoiding this kind of halting state may be more of a problem than bumping a limit to growth up a bit. Hitting one loses us everything.
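
(Notation note: the 3^^^3-style numbers in this thread are Knuth's up-arrow notation, where each extra arrow iterates the previous operation. A minimal sketch, computable only for tiny arguments since the values explode almost immediately:)

```python
def up(a, n, k):
    """Knuth's up-arrow a ^^...^ n with k arrows.

    k=1 is ordinary exponentiation; k arrows means iterating the
    (k-1)-arrow operation n-1 times starting from a.
    """
    if k == 1:
        return a ** n
    result = a
    for _ in range(n - 1):
        result = up(a, result, k - 1)
    return result

assert up(3, 3, 1) == 27              # 3^3
assert up(2, 3, 3) == 65536           # 2^^^3 = 2^^4 = 2^2^2^2
assert up(3, 3, 2) == 7625597484987   # 3^^3, already 13 digits
# 3^^^3 = 3^^(3^^3) is a power tower of 7,625,597,484,987 threes:
# utterly uncomputable, which is rather the point of the example.
```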

  2. Not sure if it's the most efficient response, but I'll go point by point to stop my brain hurting.

    My mind is not typical, no. Just to unpack that for clarity: I observe that other human bodies react in ways that I attribute to my mind when those reactions happen in my body. I also observe they react in ways that reflect a mind different to mine. So I infer a mind similar to, but different from, my own. The problem is that when you go too far and get minds very, very far from mine, "this has a mind" is no longer the best explanation of the body's actions. Yes, this object could have a mind very, very different to mine, or it could just be a weird-ass rock. There's perhaps a difference between chauvinism and "it's not reasonable for me to conclude you exist".

    I'm trying to decide if I'm just being awkward in this response or if I have a valid point. See, I tend to lump the rat electrodes, along with a good holodeck and soma, into the category of "changing the physical world to bring about an experience that is genuinely more happiness". It is not clear to me that it's wrong for the rat to 'believe' she's having sex and is enjoying herself. It's not clear to me we should be so worried as to define utility in a way that excludes soma-happiness. The problem with soma is not that people aren't in a happy, desirable state. It's that there's nobody to staff the soma factories, and that if the drug isn't perfect and there are higher levels of happiness to be found by better drugs etc., nobody will find them. If that totally sidesteps your point, or if I've not understood it, sorry; try again.

    There are cross-terms, of course. This is just an outline/foundational piece, so obviously not the place to rigorously develop the utilitarian calculus. But yes, this is clearly a non-trivial point. Though I do disagree with your example. If it's reasonable for us to believe Fred Phelps really does experience negative 3^^^^3 utils then I'm sorry, but you have two choices: either Fred Phelps needs to be killed/brainwashed to stop him feeling this bad, or we need to exterminate the gays, and do so as fast as possible. The whole point of the promotion of minority rights is that nobody does in fact experience this. The whole point is that the pain Fred feels is around 10 utils against 3 x 600 million, so we tell him to go fuck himself.

    True, it's not a limit-to-expansion problem. But more generally it's a "make physical reality work" problem. I'll have to think about how existential threats should be treated differently from threats of the simply very, very bad.

  3. I note that you're biting the bullets. This is novel and I applaud it :) (with respect to both wireheading and scaling of utilities)

    How strong are your prior beliefs about the universality of certain behaviours (e.g. rationality)? I'll pick up on this later.

    On utility blackmail: you shouldn't trust other people's claimed utilities. From my point of view (i.e. gaming your utility function), the correct thing to do now is to construct human-like entities with utility functions that are *exactly* like mine, but scaled by 3^^^3, so that they dominate everyone else's. Obviously death to me or the other mind (or a change in its utilities) will have a claimed utility of -3^^(sufficient)^^3.

    What you want to do (I think) is to have some kind of equilibrium bargaining system between utilities. Obviously this removes the use of just scaling your utilities by 3^^^3; no one else cares, per se. Also, you don't really want to act in accordance with present utilities, because utilities are not fixed things. At some level you want to allow satisficing of utilities between you and you_2020, who is clearly a different person. Boredom and hyperbolic discounting of the future make for a really bad combination in this sense: procrastination with opiate replacements.

    As I see it, the whole point of minority rights is that I don't get to scale my utilities (or roughly equivalently political power) and thus squash other people's utilities underfoot.

    Now for a consistency check. Minds sufficiently different from yours cease to exist in your internal ethics... and you have minority rights. Consider the running example: Fred Phelps probably does not interpret those "supporting the gay agenda" as being properly mindful (being mere pawns of Satan). Hence he (running your ethics) is now justified in disregarding the claimed utilities of everyone who disagrees with him on this point. I think that reductios your position somewhat.

    The difference is the scale. Existential is *very* *very* bad. Assuming some vague optimism, you should be expecting to take a substantial fraction of the galaxy, and to be reducing planetary masses to computational gear or other habitat for people. The potential there is maybe 10^20 people if you were looking at mere flesh and blood; add another factor of 10^5 or so in software. For the rest of the lifespan of the universe. That is the scale we're looking at. It is almost unimaginably bad to lose it.
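
For what it's worth, one standard "equilibrium bargaining system" with exactly the anti-scaling property discussed above is the Nash bargaining solution: choose the outcome maximising the product of each party's gain over the disagreement point. It is invariant under positive rescaling of anyone's utilities, so multiplying yours by 3^^^3 buys you nothing. A toy sketch; the outcomes and utility numbers are invented:

```python
def nash_bargain(outcomes, u1, u2, d1=0.0, d2=0.0):
    """Outcome maximising the Nash product (u1 - d1) * (u2 - d2),
    among outcomes both parties prefer to disagreement."""
    feasible = [o for o in outcomes if u1[o] > d1 and u2[o] > d2]
    return max(feasible, key=lambda o: (u1[o] - d1) * (u2[o] - d2))

outcomes = ["A", "B", "C"]
alice = {"A": 1.0, "B": 4.0, "C": 6.0}
bob = {"A": 6.0, "B": 5.0, "C": 1.0}

pick = nash_bargain(outcomes, alice, bob)  # "B": product 20 beats 6
# Blackmail rescaling: Bob multiplies his utilities by a vast constant.
bob_scaled = {o: 3**27 * u for o, u in bob.items()}
assert nash_bargain(outcomes, alice, bob_scaled) == pick  # changes nothing
```

This only sketches the scale-invariance point; it says nothing about the harder problems of interpersonal comparison or of entities misreporting their disagreement point.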

  4. I tend to do that. I'm not sure if it's me being difficult, but sometimes it's just more honest to accept something unpleasant than to make your system bend in unreasonable ways.

    Can I be annoying and say this question is in fact two questions? On the one hand, the ontological: of all those things that can reasonably be called minds, how many have something like our idea of rationality, our behaviours, etc.? My prior is "almost none", or at least far from all. On the other hand is the epistemological question of detecting minds. If we are to detect minds it can only be through behaviours, and so it's inevitable that all those things we will be able to confidently believe to be minds will be incredibly close to our own and have rationality etc., at least for a good amount of time. But that's only a strong intuition; I'm far from dogmatic if you've got a better argument.

    I've been mulling an idea that may or may not solve the utility blackmail/St Petersburg mugging problem. I've not worked the implications out properly, nor given it half the thought it needs to make it work. In outline, the plan is to ban any utility function unless sup(|U(universe)|) over all universes where U is defined is finite and equal to 1. Clearly this helps with the re-scaling problem; I don't know, however, if it's too strong a condition, nor how it helps the problem of creating/simulating 3^^^^3 minds. On the second, I think you're always in the same mess. It seems unavoidable that one either ends up practically ignoring such minds after a point, or we end up with unbounded utilities => nonsense.
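
The boundedness condition can be sketched as a squashing map: compose any raw utility with x -> x/(1+|x|), which forces sup|U| below 1, so rescaling by 3^^^3 buys almost nothing. The particular squashing function here is an arbitrary choice for illustration, not part of the proposal:

```python
def bounded(u):
    """Wrap utility function u so that sup |U| <= 1,
    via the squashing map x -> x / (1 + |x|)."""
    def U(world):
        x = u(world)
        return x / (1.0 + abs(x))
    return U

def raw(w):          # toy unbounded utility
    return w

def blackmail(w):    # same preferences, insanely rescaled
    return 3**27 * w

U, V = bounded(raw), bounded(blackmail)
# Both now live in (-1, 1); the ordering of worlds is preserved,
# but the rescaled version gains only a sliver of headroom.
assert -1 < U(10.0) < V(10.0) < 1
assert V(10.0) - U(10.0) < 0.1
```

Note the sketch gives sup|U| at most 1 rather than exactly 1; normalising so the supremum is attained and equal to 1 would need an extra step.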

  5. I'm not sure I agree on the point of minority rights. That everyone's utility counts the same (in whatever sense) is the idea behind not having an aristocracy or other kind of privileged elite. It's only once we have that that you get the claim that what the majority wants should be done. The problem with this claim is that there's a higher-utility option: giving the minority rights that cost the majority almost nothing compared to the gain by the minority. That, to me, is the point of such rights, even after we deal with the scaling issue.

    Hmm. My instinctive response to your Fred reductio is that he's just wrong as a question of fact. He has the evidence needed to conclude the others have minds that can become happy; he's behaving irrationally in the domain of fact to assert otherwise. ... Not sure if that's just a way of getting out of it.

    Ok, let's find out by strengthening the example. The Vogons communicate by N-rays, and all other advanced civilisations in their area do likewise; they have learnt by entirely correct Bayesian reasoning that "exactly those star systems that emit N-rays contain minds" is incredibly likely. The Vogons don't care about EM rays. The solar system neither receives nor transmits N-rays. Is it ethical for the Vogons to demolish the solar system to make way for a bypass?

    I'd suggest that the kind of caution that would lead them not to demolish what they could reasonably conclude to be an empty system can be a bad thing. A civilisation that cautious isn't going to conquer the galaxy spreading win. Note that this isn't discounting human utility: if the Vogons were told, and built an EM receiver, then demolition would be totally unethical; but in the absence of that information I don't see that they've got an excuse to be so over-cautious. (Of course, I'm assuming we're ignoring the potential for minds evolving, in which case you'd want to detect life, not minds; the Vogons' only ethical concern is protecting currently existing minds.)

    Alternative example: we all believe AI does not yet exist to the extent of having a mind with happiness and hence utility. So no matter how much a computer begs and screams and tells you not to kill it, you'll quite happily switch it off, totally discounting the claimed utility because we don't believe the mind, and hence the utility, exists.

    Ok, so extinction matters because all other losses of utility are temporary, and so still have the huge bonus of the great pan-galactic civilisation of win attached. Clear enough.


Feedback always welcome.