Monday, 21 September 2020

CarbonHedge

How can we remove carbon dioxide from the air by doing nothing?  Read on...


This, as you can see, is a hedge.  Hedges are made of trees that are forced to be mere bushes by repeated pruning.  This one, near my house, is in late summer, just before the pruning is done.


And here it is a little further along, just after the tractor-mounted flail has passed over it.

But opposite it is another hedge that hasn't been, and now therefore can't be, pruned:


It has grown into a row of trees, though it is still a reasonably effective hedge.  It does, admittedly, have a few holes that the pruned hedge does not have.  I'll return to these below.

People have suggested coppicing hedges as biofuel to replace fossil fuels.  But burning biofuel is neither very clean nor very effective as a means of carbon reduction, because it doesn't actually reduce atmospheric carbon; it is merely carbon neutral.  And coppicing would be quite a laborious activity, even with machinery.

So the obvious thing to do is to allow the vertical shoots in the first picture to become the trees in the third by not trimming the tops of hedges as in the second.  The bottom couple of metres of the sides could still be trimmed to stop branches growing across roads or into crops at low levels.  And the time saved by not trimming the tops could be spent wandering along with a dibber, collecting blackberries.  Don't eat them!  But, when you get to a gap forming in the hedge, plant a few brambles with the dibber to block it.  (You can eat the rest of the blackberries for lunch...)

What effect would this have on the UK's CO2-reduction strategy?

The UK has about 700,000 kilometres of hedges.  If they are about two metres thick on average, that's 140,000 hectares of potential trees.  The UK plans to plant 30,000 hectares of forest per year over the next 30 years to absorb CO2, so simply leaving the nation's hedges to grow vertically would achieve just under five years' worth of the total (that is, about 15%) by doing nothing except a day's pleasant blackberrying once a year...
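The arithmetic above is easy to check.  Here is the back-of-envelope calculation, using the figures quoted in the text (the two-metre average thickness is, as stated, an assumption):

```python
# Back-of-envelope check of the hedge numbers quoted above.
hedge_length_m = 700_000 * 1000   # 700,000 km of UK hedges, in metres
hedge_width_m = 2                 # assumed average thickness

hedge_area_ha = hedge_length_m * hedge_width_m / 10_000  # 1 ha = 10,000 m^2
planned_area_ha = 30_000 * 30     # 30,000 ha/year of planting for 30 years

print(hedge_area_ha)                    # 140000.0 hectares
print(hedge_area_ha / planned_area_ha)  # ~0.156, i.e. about 15% of the plan
print(hedge_area_ha / 30_000)           # ~4.7 years' worth of planting
```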



Saturday, 29 August 2020

GapOfTheGods



I am aware of the God-of-the-gaps nature of definitions of intelligence, whereby something that a computer becomes able to do successfully, like chess, is removed from the canon of intelligent ability.  By this process intelligence becomes a melting iceberg, drifting towards the Equator, with a smaller and smaller area upon which we humans may stand.

Despite that, I would like to propose a new definition of intelligence:


Intelligence is the ability to moderate impulses by deliberation.


By impulses I mean, in human terms, emotions.  But also, at a lower level, I mean such phenomena as a single-celled organism swimming up a chemical gradient towards food.

Let me start by considering systems that are entirely emotional and that do not deliberate: computers.  Consider what happens when you run a Google search.  The Google machine is completely unable to resist its impulse to respond.  If you were to ask it, "What is the best way to subvert the Google search engine?" it would return you a list of websites that would be its very best effort to answer your query correctly.  All computer systems, including all current AI systems, are entirely driven by their irresistible emotional need to respond to input.

If you type something at Generative Pre-trained Transformer 3 it will respond with coherent and rational text that may well be indistinguishable from human composition.  In that regard it is on its way to passing the Turing Test for intelligence.  But it cannot resist its emotional need to respond; the one thing you can guarantee is that, whatever you type at it, you will never get silence back.

But now suppose someone asked you, "What would be the best way for me to murder you?"  You would hesitate before answering and - if free to do so - not answer at all.  And under compulsion you would frame a considered lie.

Everything that responds to input or circumstances, from a thermostat, through a computer, a single-celled organism, to a rat, then a person, has an impulse to respond in a certain way.  But the more intelligent the responder, the more the response is mediated by prior thought and mental modelling of outcomes.  The degree of modification of the response depends both on the intensity of the immediate emotion with which the response starts, and the intelligent ability of the responder to model the situation internally and to consider alternatives to what the emotion is prompting them to do.  If you picked up a hot poker, the emotional impulse to drop it would be well-nigh impossible to resist.  But if someone held a gun to your head you would be able to grit your teeth and to retain your grip.  However, the single-celled organism swimming towards food would not be able to resist, no matter what danger lay ahead.
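The hot-poker example can be put as a toy decision rule.  This is purely illustrative (the function, names, and numbers are all invented here, not a real model of cognition): a pure stimulus-response system would act on the impulse alone, while deliberation weighs the impulse against a modelled cost of yielding to it.

```python
# A toy illustration of "moderating impulses by deliberation".
# A pure stimulus-response system would ignore cost_of_yielding entirely;
# deliberation weighs the impulse against the modelled consequences.
def respond(impulse_strength: float, cost_of_yielding: float) -> str:
    if impulse_strength > cost_of_yielding:
        return "yield to impulse"
    return "override impulse"

print(respond(impulse_strength=0.9, cost_of_yielding=0.1))   # hot poker, no gun: drop it
print(respond(impulse_strength=0.9, cost_of_yielding=10.0))  # gun to your head: grit your teeth
```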

Today's AI systems are far cleverer than people in almost every specialised area in which they operate, in just the same way that a mechanical digger is better than a person with a shovel.  Computers are better than people at translating languages, playing Go or poker, or - of course - looking up information and references.  But once we see how they work, we know with complete certainty that such systems are not intelligent in the way that we are; even a near-Turing-Test-passing program like GPT-3 is not thinking in the same way that we do, because it cannot resist its impulse to do what it does.

We are not used to regarding the physics that drives computers to do exactly what they are programmed or taught to do as an emotion, but that is what it is.  If you see someone whom you find sexually attractive, you know it immediately, emotionally, and certainly; that is your computer-like response.  But what actions you take (if any) when prompted by that emotion are neither certain nor immutable. 

Note that I am not saying that computers are deterministic and we are not.  Nor am I saying that we have "free will" and they do not, because "free will" is a meaningless concept.  There is no reason to suppose that an AI system such as the current ones that work by machine learning could not be taught to moderate impulses in the same way that we do.

But so far that has not been done at all.

Finally, let me say that this idea makes evolutionary sense.  If our emotions were perfect guides to behaviour in all circumstances we would not need intelligence, nor even consciousness, with the considerable energy consumption that both of those require.  But both (using my definition of intelligence) are needed if an immediate emotional response to a situation is not always optimal and can be improved upon by thinking about it.  

Sunday, 9 August 2020

LightWorm


Nerve fibres conduct impulses at a speed of around 100 m/s, which - in this age of gigabit light fibres - is a bit sluggish.
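To see just how sluggish, compare propagation delays over a 0.1 m path (roughly the scale of a human brain).  The figures below are order-of-magnitude assumptions, not measurements:

```python
# Rough propagation-delay comparison: fast nerve fibre vs. optical fibre.
distance_m = 0.1          # ~the scale of a human brain
nerve_speed = 100         # m/s, a fast myelinated fibre
light_speed_fibre = 2e8   # m/s, roughly c divided by the fibre's refractive index (~1.5)

nerve_delay = distance_m / nerve_speed        # about 1 millisecond
fibre_delay = distance_m / light_speed_fibre  # about half a nanosecond

print(nerve_delay / fibre_delay)  # the optical path is ~2,000,000 times faster
```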

But we can now genetically engineer neurons to emit light when they fire, and to fire when light strikes them.  In addition, light fibres are simple structures, consisting of two transparent concentric cylinders with different refractive indices.  That is a lot simpler than a nerve's dendrite or axon (the nerve fibres that conduct impulses between nerve cells).  We know that living organisms can make transparent materials of differing refractive indices (think about your eyes), and they excel at making tubular and cylindrical structures.  Indeed, plants and animals consist of little else.

So I propose genetically engineering neurons (nerve cells) that communicate optically rather than chemically. The synapses where transmitted signals from axons are received by dendrites as inputs to other neurons are small enough to transmit light instead of the neurotransmitter molecules that perform this function in natural neurons.  And light is easy to modulate chemically, so inhibitory neurotransmitters would just need to be more opaque, and excitatory ones would need to enhance transparency.  And, of course, it would be straightforward to create both inputs to, and outputs from, such a system using conventional light fibres, which would allow easy interface to electronics.

Doing this in a human brain might present a few challenges initially, so it would be best to start with a slightly simpler organism. Caenorhabditis elegans (in the picture above) is a small worm that has been extensively studied. So extensively, in fact, that we know how all 302 of its neurons are connected (that's for the hermaphrodite C. elegans; the male has 383 neurons, and we know how they're connected too).  We also know a great deal about the genetics of how the animal's nerve structure constructs itself.

Let's build a C. elegans with a brain that works at the speed of light...

Wednesday, 5 August 2020

TuringComplete



This is an edited version of a piece by me that appeared in the Communications of the Association for Computing Machinery, Vol 37, No 9 in 1994.

I recall asking my six-year-old, "How do you know that you are?"  She considered the matter in silence for several minutes, occasionally drawing breath to say something and then thinking better of it, whilst I conducted an internal battle against the Demon of False Pedagogy that was prompting me to make helpful suggestions.  Eventually she smiled and said, "Because I can ask myself the question."

Even with the usual caveats about parental pride, I consider that this Cartesian answer was genuine evidence of intelligent thought. But she doesn't do that every day, or even every week. And no more do the rest of us. Intelligent thought is rare. That is why we value it. 

The most important aspect of Turing's proposed test was his suggestion that it should go on for a long time. Speaking, reading, and writing are very low-bandwidth means of communication, and it may take hours or even days for a bright and original idea to emerge from them. We should also remember that there are many people with whom one could talk for the whole of their lives without hearing very much that was interesting or profound. 

The distress caused to researchers from Joseph Weizenbaum himself onwards by the ease with which really dumb programs such as ELIZA can hold sensible (if short) conversations has always been rather amusing. The point is surely not that such programs are poor models of intelligence, but that most of us act like such programs most of the time — a relaxed conversation often consists of little more than a speaker's words firing off a couple of random associations in a listener's mind; the listener then transposes a few pronouns and other ideas about and speaks the result in turn. In speech we often don't bother to get our grammar right, either. ELIZA and her children mimic these processes rather well. 
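The pronoun-transposing trick described above is easy to demonstrate.  This is a minimal sketch of the idea, not Weizenbaum's actual ELIZA (the word list and response template are invented here):

```python
# A minimal sketch of ELIZA-style pronoun transposition: swap first- and
# second-person words, then wrap the result in a canned response template.
SWAPS = {"i": "you", "me": "you", "my": "your",
         "you": "i", "your": "my", "am": "are"}

def reflect(utterance: str) -> str:
    words = utterance.lower().rstrip(".!?").split()
    return " ".join(SWAPS.get(w, w) for w in words)

def respond(utterance: str) -> str:
    return "Why do you say that " + reflect(utterance) + "?"

print(respond("I am angry with my sister"))
# → Why do you say that you are angry with your sister?
```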

The researchers' distress arises because — in the main — they take a masculine view of conversation, namely that it is for communicating facts and ideas. But the most successful conversation-mimicking programs take a feminine view of conversation, namely that it is for engendering friendship and sympathy between the conversationalists (see, for example, You Just Don't Understand—Women and Men in Conversation by Deborah Tannen). Of these two equal aspects of conversation, the latter happens to turn out to be the easier to code. Of course the resulting programs don't really "feel" friendship and sympathy. But then, perhaps neither do counselors or analysts. 

I suspect that a real Turing Test passing program will end up coloring moods by switching between lots of ELIZA and PARRY and RACTER processes in the foreground to keep the conversation afloat, while the deep-thought processes (which we haven't got a clue how to program yet) generate red-hot ideas at the rate of two per year in the background. What's more, I suspect that's more or less how most of us work too, and that if the deep bit is missing altogether in some people, the fact hardly registers in quotidian chatter. 

Tuesday, 18 February 2020

CostDisBenefit



Today, a poorly-researched and highly dodgy cost-benefit analysis of democracy.

First, let me say that, as a way of making decisions that affect large numbers of people's lives, cost-benefit analyses are at best morally questionable and at worst actively bad.  That is as opposed to, for example, one person making a cost-benefit analysis of something that will only affect them (rather like Darwin's list on whether or not to marry; Emma Wedgwood, of course, was free to turn him down).  That seems to me an entirely legitimate way to make a decision, if a little cold.

But what is cost-benefit analysis?

Suppose some project is proposed that will affect many people, some positively and some negatively.  The project might be building a new hospital that will require an ancient woodland to be felled.  A cost-benefit analyst will go out and ask the people one or more of four questions:

  1. "What would you be prepared to pay for your share of the new hospital?"
  2. "What sum would you be prepared to accept to forgo the new hospital?"
  3. "What would you be prepared to pay to preserve the woodland?"
  4. "What sum would you be prepared to accept to see the woodland cut down?"
Questions 1 and 3 are known as willingness-to-pay questions, and Questions 2 and 4 are known as willingness-to-accept questions.  The analyst would add up people's answers and use the result to decide what it was that the people really wanted.  Note that the people aren't actually going to have to stump up the money they've offered, nor to get the money they've requested; cost-benefit analysis is just a way of getting a measure of how people think about something.  Of course both the wording and the order of the questions will almost always affect the results.

Why is all this morally questionable or outright bad?

Let's start with willingness-to-pay.  Suppose you are a rich environmentalist who can afford private health care.  In answer to Question 3 you may say, "I'll pay $1 million to keep the wood."  Your answer will shift the average a great deal, and will swamp the answers of the many poor people who answered $10 to Question 1 because they want the hospital built.  Willingness-to-pay is the exact equivalent of letting people buy votes, something that only the most swivel-eyed libertarian might propose.  It leads directly to gross inequality in influence, and indirectly to inequality in accumulated resources.
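The swamping effect is plain once you add the numbers up.  The figures below are the illustrative ones from the text (999 poor respondents is an assumed count):

```python
# One rich respondent's answer swamps everyone else's: 999 people each
# offer $10 for the hospital; one person offers $1,000,000 for the wood.
hospital_bids = [10] * 999
wood_bids = [1_000_000]

print(sum(hospital_bids))  # 9990 -- the hospital "wins" 999 to 1 on heads...
print(sum(wood_bids))      # 1000000 -- ...but loses by a factor of 100 on money
```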

Is willingness-to-accept better?  Suppose you are a poor environmentalist who is passionate about the wood.  In answer to Question 4 you may say, "Pay me $1 trillion to cut the wood down."  Once again your answer gives you disproportionate influence.  Willingness-to-accept effectively gives everyone a veto over anything they don't like, as they all bid up the cash they demand to numbers requiring exponential notation just to write down.  It would mean that nothing with any opposition at all (no matter how irrational or ill-informed) would ever get done.

So, having established that it's rubbish, let's use cost-benefit analysis to decide if we should pay for democracy.

I set two surveys on Twitter (above). One asked what people would be willing to accept to forgo their vote, and the other asked what they'd be willing to pay to get one.

As you can see, the results are superficially completely irrational.  Most respondents wouldn't even be prepared to accept $1,000 to give up their vote.  Given that, you'd logically expect the same people, no matter how poor they were, to be prepared to pay at least $10 to get a vote.

But what's actually happening is more profound.  Offered both choices, the majority are saying, "I'm not even going to play this game."  They presumably regard a vote as a right (as do I), and so they think its provision is outside the sordid realm of monetary transaction.  In short, they regard the very questions as category errors.

However, I am quite intrigued by the one-in-five people who would be prepared to sell their right to vote for $1,000 and the one-in-five (possibly different) people who would pay up to $100 to get a vote.  So I suppose that I had better finish with how I would answer: I too wouldn't play the game; I'd campaign for monetary reward or cost for votes to be removed from the system.

But, to actually answer the questions, my willingness-to-pay for a vote is entirely dependent on geography.  I live in a safe parliamentary seat, so I know in advance that the probability of my vote either way altering the result is vanishingly small.  Thus the opportunity cost of my paying for a vote is such that I know that I could achieve far more for others (and maybe myself) by giving the vote's cost to charity instead, which is what I would do.  If I lived in a marginal seat that could go either way with just 100 votes that argument would change completely, and I'd probably pay around $100 for a vote.  If I were asked for much more than that, the charity argument would still win.
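The safe-seat argument is an expected-value calculation.  The probabilities and the dollar value below are invented purely for illustration, not real electoral statistics:

```python
# Expected value of casting a vote = P(my vote is decisive) x value of the outcome.
value_of_outcome = 1_000_000     # assumed "worth" of swinging the result, in dollars

p_decisive_safe = 1e-9           # assumed chance of being decisive in a safe seat
p_decisive_marginal = 1e-4       # assumed chance in a very tight marginal seat

print(p_decisive_safe * value_of_outcome)      # ~$0.001 -- give the money to charity instead
print(p_decisive_marginal * value_of_outcome)  # ~$100 -- roughly what I'd pay for a vote
```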

My willingness to accept payment to give up my vote would work in the same way.  If I could do more for others and myself with the money I was paid than the expected good that would come from casting my vote, I'd take the money and do the good.

I am a lapsed utilitarian.