
Saturday, 16 September 2017

ProbablePrejudice




Think about this game:  suppose you have an urn filled with equal numbers of red and green marbles.  You reach in and take out a marble in your clenched hand so you can't see it.  What colour do you guess the marble is if you want to be right as often as possible?  The answer is it doesn't matter.  If you guess red or green at the toss of a coin you will score 50%.  If you always guess green you will also score 50%.  The same goes for any proportion of guesses in between.

But this cannot be true if there are more red marbles than green.  In the extreme, if there are only red marbles in the urn, you would clearly be crazy ever to guess green.  So what is the general rule if the proportion of red marbles is p and you know the value of p?

Suppose the proportion of red guesses you make is r.  Should r = p?  That seems to be true from the argument above if p = 1, and maybe if p = 0.5.  But it may not be true if p = 0.8, say.  Let's look at the sums:

A red marble comes out of the urn a fraction p of the time.  If you guess red r of the time you will be right pr of the total time for those red marbles.

A green marble comes out of the urn (1 - p) of the time.  If you guess green (1 - r) of the time you will be right (1 - p)(1 - r) of the total time for those green marbles.

So the total proportion of correct guesses you make, c, is

               c = pr + (1 - p)(1 - r)

                 = r(2p - 1) - p + 1

If we plot a graph of correct guesses, c, for different values of r when p = 0.5, we get a flat line at c = 0.5.


This tells us what we said when we started: for equal numbers of reds and greens it doesn't matter what proportion of red guesses, r, you make; you will always score c = 50%.

But now suppose p = 0.8 (that is, 80% of the marbles in the urn are red).  Then the graph becomes a straight line rising from c = 0.2 at r = 0 to c = 0.8 at r = 1.


Now what is the best guessing strategy, r, to give the biggest value of correct guesses, c?  It is not r = p, as we conjectured it might be.  Because c is linear in r, the maximum always sits at one end of the range: the best strategy is to guess red all the time, which gives the highest possible score of 80%.
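If you want to check the arithmetic, here is the formula in a few lines of Python (the function name is mine):

    # Fraction of correct guesses: c = p*r + (1 - p)*(1 - r)
    def correct_fraction(p, r):
        return p * r + (1 - p) * (1 - r)

    for r in (0.0, 0.5, 0.8, 1.0):
        print(r, correct_fraction(0.8, r))
    # r = 0.8 (matching p) scores only 0.68; always guessing red (r = 1.0) scores 0.80.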

This happens even for the tiniest majority of red or green marbles.  If you know red is in the majority, no matter how small that majority is, you always guess red.  If you know green is in the majority you always guess green.

This is a really easy rule for evolution to encode: if there are two types of things, A and B, and you know that A are in the majority, then - when encountering a thing with no other knowledge - assume the thing is A.  You will be right as often as it is possible to be.

A and B might be bears and tigers growling out of sight.  If you know there are more bears than tigers, then your best bet is to assume you have to deal with a bear.

This argument affects the best way for you to allocate resources.  Suppose that it costs you the same to prepare to encounter an A in the future as it costs to prepare to encounter a B.  Further suppose that the reward (or loss) you get if you meet an A is the same as the reward (or loss) you get if you meet a B.  Then, if there are even just a few more As than Bs, it is optimal to spend ALL your resources on preparing to meet As and to spend NONE on preparing to meet Bs.
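The reason is the same linearity as before: the expected payoff is a straight line in the fraction of resources you allocate, so the maximum is always at one extreme.  A quick check in Python (the numbers and names are illustrative):

    # Expected payoff when a fraction f of resources prepares for A and the
    # rest for B, with equal costs and equal rewards.
    def expected_payoff(p, f):    # p = probability the next encounter is an A
        return p * f + (1 - p) * (1 - f)

    for f in (0.0, 0.5, 1.0):
        print(f, expected_payoff(0.51, f))
    # Even at p = 0.51 the maximum is at f = 1.0: spend everything on A.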

Of course, A and B might not be bears and tigers; they might be people of unknown sexuality, nationality, or (if - like the bears and tigers - they are also out of sight) gender or ethnicity...





Monday, 4 September 2017

FishFerris


I was in Seattle a few days ago, where they have a Ferris wheel on the waterfront (above).

It would not be too difficult to make Ferris pods watertight, whereupon part of the ride in any ocean or river city could go underwater.  At the top you'd get a view across the city, and at the bottom you'd get a view of the fishes.  Plus there would be a small frisson for the wheel riders as each pod submerged.


It would work particularly well next to a reef dropoff, where the pods on the wheel could go down next to the coral wall.

Go build one, World!

Saturday, 12 August 2017

AluminiumFuel


This is what happens when you put aluminium in hydrochloric acid.  It makes aluminium chloride and hydrogen gas, which you can see bubbling off.  If you put aluminium in water it does something similar, giving aluminium oxide and hydrogen.

But the problem with aluminium in water is that aluminium oxide (unlike aluminium chloride) is not soluble.  The aluminium oxide forms a protective film over the aluminium and the reaction stops after a fraction of a second.  So you get hardly any hydrogen.

But now the US Army Aberdeen Proving Ground Research Laboratory has made a serendipitous discovery: they have found an aluminium alloy to which the film of oxide does not adhere, and so you can drop it into water and it generates hydrogen gas and aluminium oxide continually.

This is potentially an extremely important discovery.  The big problem with electric power is not generating electricity - we have hundreds of ways to do that, including many renewable techniques. The big problem is storage.  Even the very best and latest batteries are very expensive, very complicated, and store little energy for their weight.  

But aluminium is made from aluminium oxide by electricity, and chunks of aluminium are cheap, and are easy and safe to store and to transport.  And we have fuel-cells that will make electricity from hydrogen.

So. How about an aluminium-fuelled car?  Let's compare it with the most efficient (but very polluting) conventional cars currently on the road - diesels - and also with zero-emission Li-ion battery cars, like the Tesla.  Here's the maths:

First, how much hydrogen do we get from one kilogram of aluminium?

2Al + 3H2O → Al2O3 + 3H2

The atomic mass of aluminium is 27 and the molecular mass of hydrogen gas is 2.  We get three hydrogen molecules for every two aluminium atoms.  With the mass ratio, that means that for every one kilogram of aluminium we get a ninth of a kilogram of hydrogen.
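As a quick sanity check of that ratio in Python:

    # Mass of hydrogen per kilogram of aluminium, from 2Al + 3H2O -> Al2O3 + 3H2
    AL = 27.0   # atomic mass of aluminium
    H2 = 2.0    # molecular mass of hydrogen gas
    print((3 * H2) / (2 * AL))   # 0.111..., i.e. one ninth of a kilogram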

Combining a ninth of a kilogram of hydrogen with oxygen from the air (which is what a fuel cell does) gives 15.5 megajoules (MJ) of energy.  But fuel cells are about 50% efficient and electric motors are about 80%, so that becomes about 6 MJ of energy going to the car's wheels.

Thus, for our aluminium-powered car, one kilogram of aluminium lays down 6 MJ on the road.  For comparison, one kilogram of diesel gives about 15 MJ, or two and a half times as much, and one kilogram of the best Li-ion batteries gives around 0.3 MJ - a tiny fraction.

Diesel oil and aluminium pellets are both just simple, tough, cheap stuff; whereas Li-ion batteries are complicated, fragile, and expensive manufactured items.  It's easy to pour diesel into a car, or to drop aluminium powder or pellets into a hopper.  Batteries are time-consuming to charge.  And a car carrying aluminium and water is inherently very safe in a crash compared to one with a tank of diesel in the back, and probably safer than one with batteries (because of the very high mass of the batteries; lithium fires do happen, but there's actually not a lot of lithium in a lithium battery, and it's not in its elemental form).

It looks as if our aluminium-powered car might be a GO.  That assumes, of course, that the US Army's research will scale and work reliably.

The car would be zero-emissions.  Its waste product would be aluminium oxide powder as a sort of ash.  This could be dumped at the refuelling station for re-smelting into aluminium.

And here is a potential problem.  Making one kilogram of aluminium from its bauxite ore (also aluminium oxide) takes about 48 MJ of electricity.  That would probably reduce a bit for recycling the car's ash, because that would be very pure.  Let's say 45 MJ, and assume we're going to be sensible and use renewable electricity like solar.  So the overall thermodynamic efficiency including the power generation (the so-called well-to-wheel efficiency) of the aluminium-powered car is 6 MJ ÷ 45 MJ, which is 13%.  The equivalent figure for a diesel car is around 14%, but for a battery electric car it's around 30%.
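Putting those round numbers together in one place, as a quick Python check:

    # Well-to-wheel bookkeeping for 1 kg of aluminium, using the figures above
    h2_energy_mj = 15.5                        # energy in the hydrogen from 1 kg of Al
    to_wheels_mj = h2_energy_mj * 0.5 * 0.8    # fuel cell ~50%, motor ~80%
    smelt_mj = 45.0                            # electricity to re-smelt the ash
    print(to_wheels_mj)                        # ~6.2 MJ at the wheels
    print(to_wheels_mj / smelt_mj)             # ~0.14, roughly the 13% figure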

Note that none of this analysis takes account of the extra energy required to accelerate the considerable mass of the batteries in a battery car, some of which is recovered by regenerative braking.

In conclusion, the aluminium-powered car (if it works) would be a zero-emission vehicle that is as energy efficient as a diesel, but only half as efficient as a battery car.   It would be cheap to make (probably even cheaper in bulk than the diesel, and certainly cheaper than cars with lots of batteries). It would be the safest car on the road, and it would be quick to re-fuel (and ash-dump).

Aluminium as an electricity storage system probably makes sense for vehicles, at least until batteries get a lot better.  But its lower efficiency means it will not make sense for static storage, where mass does not matter.

But the really interesting possibility would be an aluminium-powered aeroplane.  An electric turbine would be more efficient (maybe 80%) than the turbofans that aeroplanes currently use (manufacturers are coy, but I guess around 40%), which would help to compensate for the 2.5:1 ratio of energy per kilogram we saw above between hydrocarbons and aluminium.  So we could have a zero-emission fleet of passenger aircraft, with no fuel fires in crashes.  



Finally, a separate but similar technology - the magnesium hydride paste developed by the Fraunhofer Institute.  This both stores hydrogen itself and reacts with water to give off further hydrogen.  It has an energy density of 6 MJ/kg and seems to me to be very promising.  Thanks to Jon C for bringing it to my attention.

Tuesday, 25 April 2017

TTestComplete



A while ago the British Government was silly enough to allow me onto committees to decide how to spend millions of taxpayers' money on scientific and engineering research.  They even had me chair the meetings occasionally.

We'd get a stack of proposals for experiments, together with peer-review reports from other people like me on whether the experiments were worth doing or not.  The committees' modus operandi was to put the proposals that the reviewers said were best at the top of the pile, then work down discussing them and giving their proposers the money they wanted until the money ran out.

I liked to cause trouble by starting each meeting with my explanation of why this approach is All Wrong.

"The ones we should put at the top of the pile," I said, "are the ones where half the reviewers say 'Brilliant!' and the other half say 'Rubbish!'.  Those are the proposals that nobody knows the answer to, clearly.  So those are the experiments that are most important."

The other academics there would smile at me indulgently because of my political naivety.  The civil servants would smile at me nervously in case any of my fellow academics actually decided to do what I proposed.  And then everyone would carry on exactly as they had always done.

After a while I started saying no when I was asked to attend.

---o---

There has been an understandable fuss recently prompted by some good research by my erstwhile colleague Joanna Bryson and others about algorithmic racism - that is to say things like Google's autocomplete function giving the sort of results you can see in the picture above.

Google's (and others') argument in defence of this is a strong one.  The essence of it is that their systems are driven by their users' preferences and actions; they gather the statistics and show people what most other people want to see when those other people do the same as you do.  The results are sometimes modified from "most other people" to "most other people like you", where "like you" is again the result of a statistical process.  If most other people are racist, historically ignorant cretins, then you will see results suitable for racist, historically ignorant cretins.  They (Google and the rest) are not like newspaper editors deciding what to put in front of people; they are just reflecting humanity back at you, you human you.

But you can see from the picture that the results of this are sometimes very bad, by almost any sensible moral definition.

Clearly what is needed is not the intervention of an editor - that would result in Google, Facebook and the rest turning into the New York Times or the Daily Mail, which would be a retrograde step, not an improvement.  What is needed is an unbiased statistical process that weights searches, hyperlinks and the rest from clever people more heavily than those from stupid people.

Note that I'm not saying that clever people aren't racists, and that stupid people are. I suspect that there is not that good a correlation, though this is interesting.  I'm just saying that in general all the web's automated linking and ranking systems ought to work better if they weighted the actions of people by their intelligence.

But how to grade the intellectual ability of web users?  The answer lies in the big data that all the web companies already use.  Facebook, for example, has a record of billions of people's educational achievements.  More interestingly, it should be simple to train a neural network to examine tweets, blog posts and so on and to correlate their content with that educational data.  That network would then be able to grade new people, and those who hadn't revealed any qualifications, just by reading what they say online, and to apply weights accordingly.
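Here, very roughly, is the sort of thing I mean - a toy sketch in Python, where all the data and labels are made up, and a real system would of course train a much bigger model on billions of records:

    # Toy sketch: predict educational attainment from text, then use the
    # prediction as a ranking weight.  All the data here is made up.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    posts = ["first user's collected posts...", "second user's collected posts..."]
    education = [1, 0]   # e.g. 1 = degree-level, 0 = not, from declared qualifications

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(posts, education)

    # Grade someone who has revealed no qualifications:
    weight = model.predict_proba(["a new user's posts..."])[0][1]
    # 'weight' could then scale that user's clicks and links in the ranking.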

I have no idea if this is a good idea or not.  It is my idea, but I'm not intelligent enough...


Thursday, 12 January 2017

HardClever


By now there must be a lot of people who actually believe that little glowing lights move along their axons and dendrites when they think, flashing at the synapses.

Anyway.

There has been a lot of fuss about AI lately, what with Google Translate switching over to a neural network, rich people funding AI ethics research, and the EU trying to get ahead of the legislative curve.  There has also (this is humans in conversation, after all...) been a lot of stuff on the grave dangers to humanity of super-intelligent AIs from the likes of Stephen Hawking and Nick Bostrom.

Before we get too carried away, it seems to me that there is one very important question that we should be investigating.  It is: What is the computational complexity of general intelligence?  Before I say how we might find an answer, let me explain why this is important by looking at the extremes that that answer might take.  

At one end is linear complexity.  In this case, if we have a smart computer, we can make it ten times smarter by using a computer that is ten times bigger or faster.

At the other end is exponential complexity.  In this case, if we have a smart computer, we can make it ten times smarter only by having a computer that is twenty-two-thousand times bigger or faster.  (That is e^10 times bigger or faster; there may be a factor in there too, but that's the essence of it.)
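To see the two extremes side by side, here is a toy comparison in Python (the function names are mine):

    import math

    # Resources needed to make a machine k times smarter, at the two extremes
    def linear_cost(k):        # smartness proportional to resources
        return k

    def exponential_cost(k):   # resources grow as e^smartness
        return math.exp(k)

    print(linear_cost(10))               # 10 times the resources
    print(round(exponential_cost(10)))   # 22026 - the 'twenty-two-thousand'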

If smart computers really do present a danger, then the linear case is bad news, because the machines can easily outstrip us once they start designing and building themselves, and it is quicker to make a computer than to make a person.  In the exponential case the danger becomes negligible, because the machines would have great difficulty obtaining the resources to make smarter versions of themselves.  The same problem would inhibit us trying to make smarter machines too (or smarter people by genetic engineering, come to that).

Note, in passing, that given genetic engineering the computers have no advantage over us when they, or we, make smarter versions of themselves or ourselves.  The computational complexity of the problem must be the same for both.

The big fuss about AI at the moment is almost all about machine learning using neural networks.  These have been around for decades doing interesting little tricks like recognising printed letters of the alphabet in images.  Indeed, thirty years ago I used to set my students a C programming exercise to make a neural network that did precisely that.

Some of the computational complexity of neural-net machine learning falls neatly into two separate parts.  The first is the complexity of teaching the network, and the second is the complexity of it thinking out an answer to a given problem once it has been taught.  The computer memory required for the underlying network is the same in both cases, but the times taken for the teaching process and the give-an-answer process are different and separable.

Typically learning takes a lot longer than finding an answer to a problem once the learning is finished.  This is not a surprise - you are a neural network, and it took you a lot longer to learn to read than it now takes you actually to read - say - a blog post.

The reason for the current fuss about machine learning is that the likes of Google have realised that their big-data stores (which are certainly exponentially bigger than the newsprint that I used to give my students to get a computer to read) are an amazingly rich teaching resource for a neural network.

And here lies a possible hint at an answer to my question.  The teaching data has increased exponentially, and as a result the machines have got a little bit smarter.

On the other hand, once you have taught a neural network, it comes up with answers (that are often right...) to problems blindingly fast.  The time taken is roughly proportional to the logarithm of the size of the network.  This is to say that, if a network takes one millisecond to answer a question, a network twenty-two-thousand times bigger will take just ten milliseconds.
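That is just the flip side of the e^10 figure above: multiplying the size by e^10 adds only 10 to the natural logarithm.

    import math
    print(math.log(22026))   # ~10: e^10 times bigger adds only ~10 to the log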

But the real experiments to find the computational complexity of general intelligence are staring us in the face.  They lie in biology, not in computing.  Psychologists have spent decades figuring out how smart squirrels, crows, ants, and all the rest are.  And they have also investigated related matters like how fast they learn, and how much they can remember.  Brain sections and staining should allow us to plot a graph of numbers of neurons and their degree of interconnectivity against an ordering of smartness of species.  We'd then be able to get an idea if ten times as smart requires ten times as much brain, or twenty-two-thousand times as much, or somewhere in between. This interesting paper seems like a start.

Finally, Isaac Asimov had a nice proof that telepathy doesn't exist.  If it did, he said, evolution would have exploited and refined it so fast and so far that it would be obvious everywhere.

We, as the smartest organisms on the planet, like to think we have taken the planet over.  We have certainly had an effect, and now find ourselves living in the Anthropocene.  But that effect on the planet is negligible compared to - say - the effect of phytoplankton, which are not smart at all.  And our unique intelligence took three billion years to achieve.  This is a strong indication that it is quite hard to engineer, even for evolution.

My personal guess is that general intelligence, by which I mean what a crow does when it bends a wire to hook a nut from a bottle, or what a human does when they explain quantum chromodynamics, will turn out to be exponentially hard.  We may well get there by throwing exponential resources at the problem.  But to get further either the intelligent computer, or we, will require exponentially more resources.