
Thursday, 14 June 2018

VerticallyChallenged


This is a couple of AgustaWestland AW609s.  They are vertical take-off and landing (VTOL) aircraft that rotate their engines and propellers when up in the air to fly horizontally.  There are quite a few other VTOL aircraft that use this principle.

If you look, you can see that the propeller blades twist like a helix (all propeller blades do this; it compensates for the fact that the tip is moving faster than the middle).  The blades can also be twisted as a whole, which is called variable pitch.  

In a helicopter with just one rotor, variable pitch is essential for forward flight: the blade advancing in the direction of flight is moving quickly relative to the air, and so generates more lift, whereas the blade on the other side of the rotor, retreating relative to the air, generates less.  Without the blades using their variable pitch to twist every half-revolution, giving the retreating blade more lift, the helicopter would simply tip over and fall out of the sky.

But this effect is neutralised with two rotors like the AW609's, one on the left and one on the right of the forward direction, as long as one rotates clockwise and the other anticlockwise.  Then the forces balance, and the blades don't need to flap with each half-revolution.

The problem with planes like the AW609 is that the propellers need to be big to act like helicopters, but that makes them very inefficient in horizontal flight, limiting both the plane's speed and range.  What would be ideal for VTOL planes like this would be a propeller that could also shrink to a small radius in horizontal flight, and expand to a big radius when helicopter-style vertical flight was needed.

Given the lack of need for variable pitch, this could be made to work with four-bladed propellers (rather than the three you see in the picture), or, indeed, propellers with any even number of blades.  The blades would be hollow, with one of each opposite pair very slightly smaller than the other.  To reduce the propeller diameter the blades would be drawn through the hub, and the smaller one would slide inside the slightly larger one opposite.  They would also have to twist as they did this, to accommodate the helical blade shape.
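To get a feel for the twist that retraction implies, here is a minimal sketch in Python of the screw motion a blade would be forced into as it slides through the hub, assuming the constant-pitch twist discussed in the list below; the function and its numbers are illustrative, not a design:

    def retraction_twist_deg(slide_m: float, twist_pitch_m: float) -> float:
        # A blade whose twist is a constant-pitch helix along its length
        # completes one full turn of twist every twist_pitch_m of length,
        # so sliding slide_m through the hub forces a rotation of
        # (slide_m / twist_pitch_m) turns about the blade's long axis.
        return 360.0 * slide_m / twist_pitch_m

    # Example: retracting a blade by 2 m, with one full turn of twist per
    # 6 m of blade length, rotates it 120 degrees as it slides.
    print(retraction_twist_deg(2.0, 6.0))  # 120.0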

There are a few problems with this idea, but I don't think they are insurmountable:

  1. Current blades are not constant-pitch helices, which they would need to be to fit inside each other.
  2. Careful thought would need to be applied to balancing the propellers, given the slight difference in the sizes of the pairs of opposite blades.  The masses would need to match, obviously, but so too would the moments of inertia, lift, and probably drag.
  3. The blades could not be variable pitch, except when fully extended.
  4. The blades would have to have a constant cross-section.
  5. Your [it-won't-work-because] goes here...
I don't know if the aerodynamic compromises needed to accommodate the above list (plus the things I haven't thought of) would nullify the increased speed and range that would come from having a more-or-less conventionally sized propeller for horizontal flight.

But it would be interesting to do some experiments and calculations...

Wednesday, 11 April 2018

OutOfControl





This is an edited version of a letter that was published in the London Review of Books Vol. 39, No. 11, 1 June 2017.


Driving speed is easily controlled by self-funding radar cameras and fines; in contrast, MP3 music sharing is unstoppable.

Every technology sits somewhere on a continuum of controllability that can be adumbrated by another two of its extremes: nuclear energy and genetic engineering. If I want to build a nuclear power station then I will need a big field to put it in, copious supplies of cooling water and a few billion quid. Such requirements mean that others can exert control over my project. Nuclear energy is highly controllable. If, by contrast, I want to genetically engineer night-scented stock to make it glow in the dark so it attracts more pollinators, I could do so in my kitchen with equipment that I could build myself. Genetic engineering is uncontrollable.

We may debate controllable technologies before they are introduced, with some hope that the debate will lead to more-or-less sensible regulation (if any is needed).

But it is pointless - or worse, damaging - to debate an uncontrollable technology before its introduction.  Every technology starts as an idea in one person’s mind, and the responsibility for uncontrollable technologies lies entirely with their inventors.  They alone decide whether or not to release a given technology because - if they put the idea up for debate - its uncontrollability means that people can implement it anyway, regardless of the debate's conclusions.  (Note in passing that - all other things being equal - an uncontrollable technology will have greater Darwinian fitness than a controllable one when it comes to its being reproduced.)

In my own case I classify technologies I invent as broadly beneficial or damaging. The former I release online, open-source. The latter I don’t even write down (these include a couple of weapons systems at the uncontrollable end of the continuum); they will die with me.

I may be mistaken in my classification, with consequences we may regret. Other inventors may act differently: we may regret that too. But we shouldn’t make the mistake of indulging in (necessarily) endless discussion of what to do about a technology if it is uncontrollable. The amount of debate that we devote to a technology should, inter alia, be proportional to how controllable it is.

Technological changes have unforeseen and occasionally negative social and political consequences.  This is inevitable when something powerful impinges on things that are relatively weak like regulation; the same applies to the benefits. Fortunately the vast majority of people are well intentioned, and technology amplifies the majority along with its complementary minority. Much happens faster and more spectacularly, but the ratio of more good to less bad stays about the same.

Monday, 12 March 2018

FisherFolk



Castaway, the first British reality TV show, dropped a group of about thirty people on the remote Scottish island of Taransay nearly two decades ago and filmed them over the following weeks as they argued with each other and fell out, brutally and in a psychologically damaging way.

I watched the opening episode, which showed the whole group in a room in London before they set out, discussing what they would do and how they thought things would work, and I predicted to anyone who would listen (i.e. my family and the cat) that the whole thing would be a social and emotional disaster for most of them.  And so it was.

The problem was that - in that London room - they were all talking with each other excitedly and at length in a friendly, convivial, and engaging way.

---o---

Think of two island fishermen in their fifties who have known each other since childhood.  On a Monday their total day's conversation as they pass each other on the quayside might be:

  "Morning,"

  "Morning."

And similarly every day of the week, with - perhaps - on the Friday:

  "Morning,"

  "Morning.  Storm's coming."

  "Aye."

They, and the rest of their island community, have evolved a peaceful system of friendship and cooperation an essential component of which is not annoying each other with their personal views, history, random thoughts, and chatter.

Our two friends sit together all evening in the pub in silence, their pints of beer in front of them on the table, taking a sip every minute or two and thinking their own thoughts.  If something needs to be communicated (like a storm) they mention it, then shut up.  Occasionally the whole community gets very drunk and sings and plays the pub piano and talks nonsense for hours; then, the following morning, their hangovers enforce a return to their normal reservation.

A lot of folk anthropology consists of just-so stories about how we are adapted to life in a hunter-gatherer village and how we carry that inheritance over to modern global civilised life.  Sometimes, it is claimed, conflict results; one obvious example is xenophobia.  But one thing we have certainly not carried over is the circumspect reservation that we can observe today in isolated small communities.  Every communications technology we have created - printing, the telephone, radio, television, the internet, social media - works against that reservation, and we embrace them all with delight.

And we wonder why we don't get on as well as the two fishermen.



Saturday, 16 September 2017

ProbablePrejudice




Think about this game:  suppose you have an urn filled with equal numbers of red and green marbles.  You reach in and take out a marble in your clenched hand so you can't see it.  What colour do you guess the marble is if you want to be right as often as possible?  The answer is it doesn't matter.  If you guess red or green at the toss of a coin you will score 50%.  If you always guess green you will also score 50%.  The same goes for any proportion of guesses in between.

But this cannot be true if there are more red marbles than green.  In the extreme, if there are only red marbles in the urn, you would clearly be crazy ever to guess green.  So what is the general rule if the proportion of red marbles is p and you know the value of p?

Suppose the proportion of red guesses you make is r.  Should r = p?  That seems to be true from the argument above if p = 1, and maybe if p = 0.5.  But it may not be true if p = 0.8, say.  Let's look at the sums:

A red marble comes out of the urn a fraction p of the time.  If you guess red r of the time you will be right pr of the total time for those red marbles.

A green marble comes out of the urn (1 - p) of the time.  If you guess green (1 - r) of the time you will be right (1 - p)(1 - r) of the total time for those green marbles.

So the total proportion of correct guesses you make, c, is

               c = pr + (1 - p)(1 - r)

                 = r(2p - 1) - p + 1

If we plot a graph of correct guesses, c, for different values of r when p = 0.5 we get:


This tells us what we said when we started: for equal numbers of reds and greens it doesn't matter what proportion of red guesses, r, you make; you will always score c = 50%.

But now suppose p = 0.8 (that is, 80% of the marbles in the urn are red).  Then the graph does this:


Now what is the best guessing strategy, r, to give the biggest value of correct guesses, c?  It is not r = p as we conjectured.  Since c is linear in r with slope (2p - 1), which is positive whenever p > 0.5, the best strategy is r = 1: guess red all the time.  This gives a highest possible score of 80%.
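You can check this without any algebra.  Here is a quick Python sketch, using the score formula derived above, evaluating c for p = 0.8 at a few values of r:

    def score(p: float, r: float) -> float:
        # c = pr + (1 - p)(1 - r): the proportion of correct guesses when
        # a fraction p of the marbles are red and you guess red a fraction
        # r of the time.
        return p * r + (1 - p) * (1 - r)

    for r in (0.0, 0.5, 0.8, 1.0):
        print(f"r = {r:.1f} -> c = {score(0.8, r):.2f}")
    # r = 0.0 -> c = 0.20
    # r = 0.5 -> c = 0.50
    # r = 0.8 -> c = 0.68   (the conjectured r = p)
    # r = 1.0 -> c = 0.80   (always guess red: the best)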

This happens even for the tiniest majority of red or green marbles.  If you know red is in the majority, no matter how small that majority is, you always guess red.  If you know green is in the majority you always guess green.

This is a really easy rule for evolution to encode: if there are two types of things, A and B, and you know that A are in the majority, then - when encountering a thing with no other knowledge - assume the thing is A.  You will be right as often as it is possible to be.

A and B might be bears and tigers growling out of sight.  If you know there are more bears than tigers, then your best bet is to assume you have to deal with a bear.

This argument affects the best way for you to allocate resources.  Suppose that it costs you the same to prepare to encounter an A in the future as it costs to prepare to encounter a B.  Further suppose that the reward (or loss) you get if you meet an A is the same as the reward (or loss) you get if you meet a B.  Then, if there are even just a few more As than Bs, it is optimal to spend ALL your resources on preparing to meet As and to spend NONE on preparing to meet Bs.

Of course, A and B might not be bears and tigers; they might be people of unknown sexuality, nationality, or (if - like the bears and tigers - they are also out of sight) gender or ethnicity...





Monday, 4 September 2017

FishFerris


I was in Seattle a few days ago, where they have a Ferris wheel on the waterfront (above).

It would not be too difficult to make Ferris pods watertight, whereupon part of the ride in any ocean or river city could go underwater.  At the top you'd get a view across the city, and at the bottom you'd get a view of the fishes.  Plus there would be a small frisson for the wheel riders as each pod submerged.


It would work particularly well next to a reef dropoff, where the pods on the wheel could go down next to the coral wall.

Go build one, World!

Saturday, 12 August 2017

AluminiumFuel


This is what happens when you put aluminium in hydrochloric acid.  It makes aluminium chloride and hydrogen gas, which you can see bubbling off.  If you put aluminium in water it does something similar, giving aluminium oxide and hydrogen.

But the problem with aluminium in water is that aluminium oxide (unlike aluminium chloride) is not soluble.  The aluminium oxide forms a protective film over the aluminium and the reaction stops after a fraction of a second.  So you get hardly any hydrogen.

But now the US Army Aberdeen Proving Ground Research Laboratory has made a serendipitous discovery: they have found an aluminium alloy to which the film of oxide does not adhere, and so you can drop it into water and it generates hydrogen gas and aluminium oxide continually.

This is potentially an extremely important discovery.  The big problem with electric power is not generating electricity - we have hundreds of ways to do that, including many renewable techniques. The big problem is storage.  Even the very best and latest batteries are very expensive, very complicated, and store little energy for their weight.  

But aluminium is made from aluminium oxide by electricity, and chunks of aluminium are cheap, and are easy and safe to store and to transport.  And we have fuel-cells that will make electricity from hydrogen.

So. How about an aluminium-fuelled car?  Let's compare it with the most efficient (but very polluting) conventional cars currently on the road - diesels - and also with zero-emission Li-ion battery cars, like the Tesla.  Here's the maths:

First, how much hydrogen do we get from one kilogram of aluminium?

2Al + 3H2O → Al2O3 + 3H2

The atomic mass of aluminium is 27 and the molecular mass of hydrogen gas is 2.  We get three hydrogen molecules (total mass 6) for every two aluminium atoms (total mass 54), so for every one kilogram of aluminium we get 6/54 = one ninth of a kilogram of hydrogen.

Combining a ninth of a kilogram of hydrogen with oxygen from the air (which is what a fuel cell does) gives about 15.5 megajoules (MJ) of energy.  But fuel cells are about 50% efficient and electric motors are about 80%, so that becomes about 6 MJ of energy going to the car's wheels.

Thus, for our aluminium-powered car, one kilogram of aluminium lays down 6 MJ on the road.  For comparison, one kilogram of diesel gives about 15 MJ, or two and a half times as much, and one kilogram of the best Li-ion batteries gives around 0.3 MJ - a tiny fraction.
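Here is the whole chain as a back-of-envelope Python sketch; the hydrogen energy density and the 50%/80% efficiencies are the round numbers used above, not measured values:

    AL_MOLAR = 27.0   # atomic mass of aluminium
    H2_MOLAR = 2.0    # molecular mass of hydrogen gas
    H2_PER_KG_AL = (3 * H2_MOLAR) / (2 * AL_MOLAR)   # = 1/9 kg H2 per kg Al

    H2_MJ_PER_KG = 142.0   # higher heating value of hydrogen; 142/9 ~ 15.8 MJ,
                           # close to the 15.5 MJ quoted above
    FUEL_CELL_EFF = 0.5
    MOTOR_EFF = 0.8

    mj_at_wheels = H2_PER_KG_AL * H2_MJ_PER_KG * FUEL_CELL_EFF * MOTOR_EFF
    print(f"{mj_at_wheels:.1f} MJ at the wheels per kg of aluminium")   # ~6.3

    # For comparison: diesel ~15 MJ/kg, best Li-ion batteries ~0.3 MJ/kg.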

Diesel oil and aluminium pellets are both just simple, tough, cheap stuff; whereas Li-ion batteries are complicated, fragile, and expensive manufactured items.  It's easy to pour diesel into a car, or to drop aluminium powder or pellets into a hopper.  Batteries are time consuming to charge.  And a car carrying aluminium and water is inherently very safe in a crash compared to one with a tank of diesel in the back, and probably better than one with batteries (because of the very high mass of the batteries; lithium fires do happen, but there's actually not a lot of lithium in a lithium battery and it's not in its elemental form).

It looks as if our aluminium-powered car might be a GO.  That assumes, of course, that the US Army's research will scale and work reliably.

The car would be zero-emissions.  Its waste product would be aluminium oxide powder as a sort of ash.  This could be dumped at the refuelling station for re-smelting into aluminium.

And here is a potential problem.  Making one kilogram of aluminium from its bauxite ore (also aluminium oxide) takes about 48 MJ of electricity.  That would probably reduce a bit for recycling the car's ash, because that would be very pure.  Let's say 45 MJ, and assume we're going to be sensible and use renewable electricity like solar.  So the overall thermodynamic efficiency including the power generation (the so-called well-to-wheel efficiency) of the aluminium-powered car is 6 MJ ÷ 45 MJ, which is about 13%.  The equivalent figure for a diesel car is around 14%, but for a battery electric car it's around 30%.
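The same sum in one line (assuming, as above, 45 MJ of renewable electricity to re-smelt each kilogram of ash and 6 MJ delivered to the wheels):

    print(f"aluminium car, well-to-wheel: {6.0 / 45.0:.0%}")   # ~13%
    # versus roughly 14% for a diesel and 30% for a battery electric car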

Note that this analysis takes no account of the extra energy required to accelerate the considerable mass of the batteries in a battery car, some of which is recovered by regenerative braking.

In conclusion, the aluminium-powered car (if it works) would be a zero-emission vehicle that is as energy efficient as a diesel, but only half as efficient as a battery car.   It would be cheap to make (probably even cheaper in bulk than the diesel, and certainly cheaper than cars with lots of batteries). It would be the safest car on the road, and it would be quick to re-fuel (and ash-dump).

Aluminium as an electricity storage system probably makes sense for vehicles, at least until batteries get a lot better.  But its lower efficiency means it will not make sense for static storage, where mass does not matter.

But the really interesting possibility would be an aluminium-powered aeroplane.  An electric turbine would be more efficient (maybe 80%) than the turbofans that aeroplanes currently use (manufacturers are coy, but I guess around 40%), which would help to compensate for the 2.5:1 ratio of energy per kilogram we saw above between hydrocarbons and aluminium.  So we could have a zero-emission fleet of passenger aircraft, with no fuel fires in crashes.  The only problem is that the plane gets heavier and heavier as it uses up its fuel - aluminium oxide weighs more than aluminium...

Tuesday, 25 April 2017

TTestComplete



A while ago the British Government was silly enough to allow me onto committees to decide how to spend millions of taxpayers' money on scientific and engineering research.  They even had me chair the meetings occasionally.

We'd get a stack of proposals for experiments, together with peer-review reports from other people like me on whether the experiments were worth doing or not.  The committees' modus operandi was to put the proposals that the reviewers said were best at the top of the pile, then work down discussing them and giving their proposers the money they wanted until the money ran out.

I liked to cause trouble by starting each meeting with my explanation of why this approach is All Wrong.

"The ones we should put at the top of the pile," I said, "are the ones where half the reviewers say 'Brilliant!' and the other half say 'Rubbish!'.  Those are the proposals that nobody knows the answer to, clearly.  So those are the experiments that are most important."

The other academics there would smile at me indulgently because of my political naivety.  The civil servants would smile at me nervously in case any of my fellow academics actually decided to do what I proposed.  And then everyone would carry on exactly as they had always done.

After a while I started saying no when I was asked to attend.

---o---

There has been an understandable fuss recently prompted by some good research by my erstwhile colleague Joanna Bryson and others about algorithmic racism - that is to say things like Google's autocomplete function giving the sort of results you can see in the picture above.

Google's (and others') argument in defence of this is a strong one.  The essence of it is that their systems are driven by their users' preferences and actions; they gather the statistics and show people what most other people want to see when those other people do the same as you do.  The results are sometimes modified from "most other people" to "most other people like you", where "like you" is again the result of a statistical process.  If most other people are racist, historically ignorant cretins, then you will see results suitable for racist, historically ignorant cretins.  They (Google and the rest) are not like newspaper editors deciding what to put in front of people; they are just reflecting humanity back at you, you human you.

But you can see from the picture that the results of this are sometimes very bad, by almost any sensible moral definition.

Clearly what is needed is not the intervention of an editor - that would result in Google, Facebook and the rest turning into the New York Times or the Daily Mail, which would be a retrograde step, not an improvement.  What is needed is an unbiased statistical process that weights searches, hyperlinks and the rest from clever people more heavily than those from stupid people.

Note that I'm not saying that clever people aren't racists, and that stupid people are. I suspect that there is not that good a correlation, though this is interesting.  I'm just saying that in general all the web's automated linking and ranking systems ought to work better if they weighted the actions of people by their intelligence.

But how to grade the intellectual ability of web users?  The answer lies in the big data that all the web companies already use.  Facebook, for example, has a record of billions of people's educational achievements.  More interestingly, it should be simple to train a neural network to examine tweets, blog posts and so on, and to correlate their content with that educational data.  That network would then be able to grade new people, and those who hadn't revealed any qualifications, just by reading what they say online; their actions could then be weighted accordingly.
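As a sketch of what that grading might look like - everything here is hypothetical: the training data, the labels, and a simple bag-of-words classifier standing in for the neural network:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical corpus: posts paired with a recorded qualification level.
    posts = ["first example post...", "second example post..."]
    has_degree = [1, 0]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(posts, has_degree)

    # Weight a new user's searches and links by the model's estimate.
    weight = model.predict_proba(["something a new user wrote"])[0][1]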

I have no idea if this is a good idea or not.  It is my idea, but I'm not intelligent enough...


Thursday, 12 January 2017

HardClever


By now there must be a lot of people who actually believe that little glowing lights move along their axons and dendrites when they think, flashing at the synapses.

Anyway.

There has been a lot of fuss about AI lately, what with Google Translate switching over to a neural network, rich people funding AI ethics research, and the EU trying to get ahead of the legislative curve.  There has also (this is humans in conversation, after all...) been a lot of stuff on the grave dangers to humanity of super-intelligent AIs from the likes of Stephen Hawking and Nick Bostrom.

Before we get too carried away, it seems to me that there is one very important question that we should be investigating.  It is: What is the computational complexity of general intelligence?  Before I say how we might find an answer, let me explain why this is important by looking at the extremes that that answer might take.  

At one end is linear complexity.  In this case, if we have a smart computer, we can make it ten times smarter by using a computer that is ten times bigger or faster.

At the other end is exponential complexity.  In this case, if we have a smart computer, we can make it ten times smarter only by having a computer that is twenty-two-thousand times bigger or faster.  (That is e^10 times bigger or faster; there may be a factor in there too, but that's the essence of it.)
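In numbers (a trivial sketch; the constants are illustrative):

    import math

    for k in (2, 10):
        print(f"{k}x smarter: linear needs {k}x the computer, "
              f"exponential needs ~{math.exp(k):,.0f}x")
    # 2x smarter: linear needs 2x the computer, exponential needs ~7x
    # 10x smarter: linear needs 10x the computer, exponential needs ~22,026x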

If smart computers do really present a danger, then the linear case is bad news because the machines can easily outstrip us once they start designing and building themselves and it is quicker to make a computer than to make a person.  In the exponential case the danger becomes negligible because the machines would have great difficulty obtaining the resources to make smarter versions of themselves.  The same problem would inhibit us trying to make smarter machines too (or smarter people by genetic engineering, come to that).

Note, in passing, that given genetic engineering the computers have no advantage over us when they, or we, make smarter versions of themselves or ourselves.  The computational complexity of the problem must be the same for both.

The big fuss about AI at the moment is almost all about machine learning using neural networks.  These have been around for decades doing interesting little tricks like recognising printed letters of the alphabet in images.  Indeed, thirty years ago I used to set my students a C programming exercise to make a neural network that did precisely that.

Some of the computational complexity of neural-net machine learning falls neatly into two separate parts.  The first is the complexity of teaching the network, and the second is the complexity of it thinking out an answer to a given problem once it has been taught.  The computer-memory required for the underlying network is the same in both cases, but the time taken for the teaching process and the give-an-answer process are different and separable.

Typically learning takes a lot longer than finding an answer to a problem once the learning is finished.  This is not a surprise - you are a neural network, and it took you a lot longer to learn to read than it now takes you actually to read - say - a blog post.

The reason for the current fuss about machine learning is that the likes of Google have realised that their big-data stores (which are certainly exponentially bigger than the newsprint that I used to give my students to get a computer to read) are an amazingly rich teaching resource for a neural network.

And here lies a possible hint at an answer to my question.  The teaching data has increased exponentially, and as a result the machines have got a little bit smarter.

On the other hand, once you have taught a neural network, it comes up with answers (that are often right...) to problems blindingly fast.  The time taken is roughly proportional to the logarithm of the size of the network.  This is to say that if a network takes one millisecond to answer a question, a network twenty-two-thousand times bigger might take just ten milliseconds.
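A sketch of what that logarithmic scaling means; the constants are illustrative, and the point is that multiplying the size by e^10 adds a fixed increment to the time rather than multiplying it:

    import math

    k = 1.0  # hypothetical milliseconds per unit of log(network size)
    for size in (1e6, 1e6 * math.exp(10)):
        print(f"network size {size:.3g}: {k * math.log(size):.1f} ms")
    # network size 1e+06: 13.8 ms
    # network size 2.2e+10: 23.8 ms  (22,000x bigger, only 10 ms more)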

But the real experiments to find the computational complexity of general intelligence are staring us in the face.  They lie in biology, not in computing.  Psychologists have spent decades figuring out how smart squirrels, crows, ants, and all the rest are.  And they have also investigated related matters like how fast they learn, and how much they can remember.  Brain sections and staining should allow us to plot a graph of numbers of neurons and their degree of interconnectivity against an ordering of smartness of species.  We'd then be able to get an idea if ten times as smart requires ten times as much brain, or twenty-two-thousand times as much, or somewhere in between.

Finally, Isaac Asimov had a nice proof that telepathy doesn't exist.  If it did, he said, evolution would have exploited and refined it so fast and so far that it would be obvious everywhere.

The same argument applies to intelligence.  We, as the smartest organisms on the planet, like to think we have taken it over.  We have certainly had an effect, and now find ourselves living in the Anthropocene.  But that effect on the planet is negligible compared to - say - the effect of phytoplankton, which are not smart at all.  And our unique intelligence took three billion years to achieve.  This is a strong indication that it is quite hard to engineer, even for evolution.

My personal guess is that general intelligence, by which I mean what a crow does when it bends a wire to hook a nut from a bottle, or what a human does when they explain quantum chromodynamics, will turn out to be exponentially hard.  We may well get there by throwing exponential resources at the problem.  But to get further either the intelligent computer, or we, will require exponentially more resources.