
Tuesday, 18 October 2016


Google Street View lets you go anywhere on Earth that Google's cameras have previously visited (which is pretty much everywhere) and explore that place interactively as a 3D virtual world.  Sometimes the pictures are a bit out of date, but the system is still both interesting and useful.

In one way, however, the pictures are not out of date enough.

There are now many complete 3D computer models of cities as they were in different historical eras.  The picture above, for example, is a still from a video fly-through of a model of seventeenth century London created by De Montfort University.  But a directed video fly-through is not the same as a virtual world that you can explore interactively.

So why not integrate these models with Street View?  You could have an extra slider on the screen that would allow you to wind back to any point in history and walk round your location at that date.  There would be gaps, of course, which could be filled in as more models became available.  Some of the buildings and other features would also be conjecture (the De Montfort model is accurate as far as the known information goes, but it is set before the Great Fire, so there are interpolations).  As long as these were flagged as such there would be no danger of confusion.  Street View does allow you to go back through Google's scanned archive, but in the seventeenth century they were quite a small company without the resources needed to do the scanning.

On your 'phone, the historical data could be superimposed on the modern world in augmented reality as you walked in it, Pokémon Go style, giving you details of superseded historical architecture in your current location.

And when there were enough data we could train a neural network to predict the likely buildings at a given location on a given date from the buildings preceding them in history.  Running that on the contemporary Street View would give us an idea of what our cities might look like in the future...

Wednesday, 31 August 2016


Amazon now have their Dash button that allows you to buy a restricted range of goods from, surprise - Amazon - when something runs out.  So you put the button on your washing machine, press it when the powder gets low, the button automatically does a buy-with-one-click using your home wifi, and a new pack arrives a day later.

But you can't set the buttons up to buy anything you like from Amazon, let alone from other suppliers.  The button locks you in to products that may well not be the best deal, nor exactly what you want.

Clearly what's needed is a user-programmable button that you can set up to take any online action that you preset into it.  Thus pressing the button might indeed do an Amazon one-click, or it might add an item to your Tesco online order, or it might boost your web-controlled central heating in the room where you are sitting, or it might just tweet that you are having your breakfast (if you feel that the world needs to know that on a daily basis).

Electronically, such a device would be straightforward.  And - as a marketing opportunity - it is potentially huge.  It would allow people total control over what they buy and from whom, subsuming Amazon Dash within a much wider range of possibilities.  In addition it could be used to carry out a vast range of non-buying online actions - anything amenable to your pressing a button when you feel like it.
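As a sketch of the logic, here is roughly what the button's firmware (or a little server it talks to) would do.  Everything in it - the URLs, the payloads, the configuration format - is invented for illustration:

    import requests  # third-party HTTP library: pip install requests

    # Hypothetical user-editable configuration: one entry per physical button.
    ACTIONS = {
        "button-1": {"url": "https://example.com/shop/one-click",
                     "payload": {"item": "washing powder"}},
        "button-2": {"url": "https://example.com/heating/boost",
                     "payload": {"room": "study", "minutes": 30}},
    }

    def on_press(button_id):
        """Fire whatever online action the user has preset for this button."""
        action = ACTIONS[button_id]
        response = requests.post(action["url"], json=action["payload"], timeout=10)
        response.raise_for_status()  # complain loudly if the action failed

    on_press("button-1")  # simulate a press of the first button

The interesting part is not the code but the configuration step: a little web page served by the device itself, on which you paste the URL and payload of your choice.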

If I can find a spare afternoon, I might just design it and open-source the results...

Tuesday, 16 February 2016


Steven Pinker's famous book The Better Angels of our Nature posits four reasons for the decline in violence that has happened and is happening in the world: empathy, self-control, our moral sense, and reason.  He explicitly (though only partially) rejects the idea that we are evolving to become more peaceful.

I am not sure (particularly given meme as well as gene copying) that evolution can be discounted as an explanation for the decline in violence.

Recall John Maynard Smith's hawks-and-doves example of an evolutionarily stable strategy. Suppose the payoff or utility matrix is

             hawk           dove
hawk     -1, -1         +0.5, -0.5
dove     -0.5, +0.5     +0.25, +0.25

What this says in English is that when two hawks meet they fight and each loses 1 unit of utility (the -1s top left) because of energy wastage, injury or death.   When a hawk meets a dove the hawk gains +0.5 units of utility because the hawk can easily steal from the dove (the +0.5 top right) and the dove loses 0.5 (the -0.5).  When a dove meets a hawk the reverse happens (bottom left).  And when two doves meet they each gain 0.25 units because they don't fight and can cooperate (bottom right).

The resulting utility graph looks like this:

The horizontal axis is the proportion of doves (the proportion of hawks is one minus the proportion of doves) and the vertical axis is utility.  The blue line is what hawks get for any given proportion of doves, and the orange line is what doves get.  To the left of the crossing point the orange line is higher, so there it makes more sense to be a dove than a hawk.  To the right the blue line is higher, so there it makes more sense to be a hawk than a dove.  This means that the crossing point is where the population is evolutionarily stable - at that point it makes no sense for either doves or hawks to change their behaviour.  The crossing point is where the population has 33% hawks and 67% doves.

(I have chosen numbers that make the Nash equilibrium occur at zero utility for simplicity; this is not necessary for the argument that follows.)
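To check the arithmetic: if p is the proportion of doves, a hawk's expected utility is (1 - p)(-1) + p(0.5) and a dove's is (1 - p)(-0.5) + p(0.25); setting the two equal gives p = 2/3.  Here is a short Python sketch that finds the crossing point for any 2x2 payoff matrix (the function names and layout are mine, not from any game-theory library):

    def expected_utilities(payoffs, p_dove):
        """payoffs[row][col] is the row player's utility when meeting the
        column player; rows and columns are 0 = hawk, 1 = dove."""
        p_hawk = 1.0 - p_dove
        u_hawk = p_hawk * payoffs[0][0] + p_dove * payoffs[0][1]
        u_dove = p_hawk * payoffs[1][0] + p_dove * payoffs[1][1]
        return u_hawk, u_dove

    def stable_dove_proportion(payoffs):
        """Both utilities are linear in p, so solve u_hawk(p) = u_dove(p)."""
        h0, d0 = expected_utilities(payoffs, 0.0)  # utilities at p = 0
        h1, d1 = expected_utilities(payoffs, 1.0)  # utilities at p = 1
        return (d0 - h0) / ((h1 - h0) - (d1 - d0))

    payoffs = [[-1.0, 0.5],    # hawk meets hawk, hawk meets dove
               [-0.5, 0.25]]   # dove meets hawk, dove meets dove

    p = stable_dove_proportion(payoffs)
    print(f"doves: {p:.0%}, hawks: {1 - p:.0%}")  # doves: 67%, hawks: 33%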

Now suppose that one thing changes: technological advance makes weapons more deadly.

Note very carefully that better weapons is not the same thing as more weapons.  The number of weapons always goes as the proportion of hawks (33% above) and is an output from, not an input to, the model.

With better weapons, when a dove meets a dove nothing is different because they didn't fight before and they don't now.  When a hawk meets a dove the hawk gets the same profit as before because the dove surrendered all that it had before.  So the numbers in the right hand column stay the same except for...

When a dove meets a hawk the dove may lose more (maybe it dies instead of merely being injured: the -0.75s). And when a hawk meets a hawk both lose disastrously because their better weapons mean greater injury and more death (the -1.5s).  So the numbers in the left hand column get more negative:

             hawk           dove
hawk     -1.5, -1.5     +0.5, -0.75
dove     -0.75, +0.5    +0.25, +0.25

and the utility graph changes:

Now the population is stable when there are fewer hawks (25%) - and thus also fewer weapons - and more doves (75%).
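Feeding the new payoffs into the same sketch as before confirms the shift:

    better_weapons = [[-1.5, 0.5],    # hawk meets hawk, hawk meets dove
                      [-0.75, 0.25]]  # dove meets hawk, dove meets dove

    p = stable_dove_proportion(better_weapons)
    print(f"doves: {p:.0%}, hawks: {1 - p:.0%}")  # doves: 75%, hawks: 25%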

Making weapons better at killing gives a society with fewer of them; a society that is more peaceful.

Monday, 1 February 2016


It is quite entertaining to listen in when my daughter (who's not the woman in the photo above) gets a scam telephone call.  She sets herself two targets:
  1. To keep the scammer on the line as long as possible to waste their time and money, and
  2. To try to get the scammer's credit card or bank details.
So far she has failed on Target 2, but she does manage to keep some of them doggedly attempting to return to their scripts after she has led them up garden paths after wild geese and red herrings for a long time.

But the problem is that all this wastes her time too.

Chatterbots have been around since I was programming computers by punching little holes in rectangles of cardboard.  The first was Weizenbaum's ELIZA, which mimicked a non-directive psychotherapist.  It was completely brainless, but so strong is the human impulse to ascribe active agency to anything that talks to us that it was both interesting and fairly convincing to have a typed conversation with.

And these days chatterbots are much more sophisticated.  With near real-time speech recognition, voice synthesis that sounds like a proper human, and recursive sentence writers that never make a grammatical mistake, they can just about hold a real 'phone conversation.   Just listen to the second recording here - the appropriate laughter from the robot is stunning.

So how about a 'phone app that you tap when you get a scam call?  This app takes over the conversation, wasting the scammer's time for as long as possible and allowing you to get on with your life.

But it needn't end there.  The app could transcribe the entire scam conversation and upload it.  This would automatically compile a reference collection of scammers' scripts that anyone could google while they had someone suspicious on the line.  The app could also evolve: conversational gambits that led to longer calls could be strengthened, and the new weights incorporated in upgrades, so the app would get better and better at keeping the hapless scammer hanging on the line.  Finally, the app could take the record of the things that the scammers themselves say and add variations on that to its repertoire of responses.
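The evolutionary part is just reinforcement of weighted random choices.  A toy sketch, in which the gambits and the learning rule are entirely invented:

    import random

    # Conversational gambits and their weights; longer calls strengthen
    # the gambits that were used in them.  A toy model, not a real app.
    gambits = {
        "Hold on, there's someone at the door.": 1.0,
        "Could you spell that for me?  I'm writing it all down.": 1.0,
        "Which computer?  We have several.": 1.0,
    }

    def pick_gambit():
        texts = list(gambits)
        return random.choices(texts, weights=[gambits[t] for t in texts])[0]

    def reinforce(gambit_used, call_minutes, learning_rate=0.1):
        """Strengthen a gambit in proportion to how long the call lasted."""
        gambits[gambit_used] += learning_rate * call_minutes

    used = pick_gambit()
    reinforce(used, call_minutes=12.5)  # a satisfyingly long scam call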

Already there are online lists of source numbers for scammers (though most disguise their origins, of course).  When the app found that your 'phone account was coming to the end of the month and that you had unused free minutes, it could dial up those scammer numbers at three in the morning and see how many crooks' credit card and bank details it could gather and post online...

Tuesday, 29 December 2015


The very wonderful Ben Goldacre rightly has it in for (among others) journalists who misrepresent scientific research to generate a completely artificial scare story.  Think no further than the MMR scandal, for example, in which newspapers killed many children.  (And in which one journalist, Brian Deer, exposed the original fraud.)

Often the problem is not the original paper describing the research, but the press release put out by the authors' institution (whose PR departments are usually staffed by more journalists).  Of course the authors are at fault here - the PR department will always give them a draft of any release to check, and they should be savage in removing anything that they think may cause distortions if reported.  But authors are not disinterested in fame and publicity.

It seems to me that there is a simple solution.  The normal sequence of events is this:

  1. Do experiments,
  2. Write a paper on the results,
  3. Submit it to a journal,
  4. Correct the paper according to criticisms from the journal's reviewers,
  5. See the paper published,
  6. Have the PR people write a press release based on the paper,
  7. Check it,
  8. Send it out, and
  9. See the research - or a distortion of it - appear in the press.

But simply by moving Item 6 to between Items 2 and 3 - that is, by having the press release sent out with the paper to the journal's reviewers - a lot of trouble could be avoided.  The reviewers have no interest in getting fame and publicity (unlike the authors and their institution), but they are concerned with accuracy and truth.  They could correct the press release along with the paper itself, and in particular they could be compelled to add a list at the start of the press release saying, in plain terms, what the paper does and does not show.

The list would look something like:

  1. This paper shows that rats eating sprogwort reduced their serum LDL (cholesterol) levels by a statistically significant amount.
  2. This paper does not show that sprogwort reduces cardiovascular disease in rats.
  3. This paper does not show that sprogwort reduces cardiovascular disease in humans.
  4. Sprogwort is known to be neurotoxic in large doses; the research in this paper did not study that at all.

Then everyone would quickly discover that the following morning's headline in the Daily Beast that screams


was nonsense.  In particular other journalists would know, and - of course - there's nothing one journalist loves more than being able to show that a second journalist is lying...

Thursday, 3 December 2015


BBC Radio 3 has a competition in the mornings where they play an excerpt from a piece of music backwards and ask listeners to identify it.  Occasionally I get it and often I don't, but more interesting is the fact that music doesn't work backwards.  It just doesn't sound right, or even good.

This is rather surprising.  The rules of melody and harmony are like the laws of quantum mechanics - they are indifferent to the direction of time.

So if we could watch an atom absorbing a photon and one of its electrons consequently changing its energy, the reverse process, whereby the atom emits a photon, would look just like a film being played backwards.  The same thing happens at the human scale - if you watch a film of two moving snooker balls bouncing off each other you can't tell if the film is being played backwards or forwards.  But if you watch a film of the first break in a snooker game, where the white strikes a triangle of reds, it is immediately obvious if the film is backwards or forwards.  For a load of red balls to converge and, after multiple collisions, to come to rest in a neat triangle whilst ejecting a white ball from their midst would be so monstrously improbable that we would know that that film was backwards more certainly than we know that we will not be eaten by a griffin.

A most effective prog-rock cliché is to start a piece with white noise (sometimes bandpass filtered to sound like the wind), or with random drum beats and cymbal clashes, and for all that gradually to resolve into a structured tune.  Think Pink Floyd.  And at the end of most musical pieces of all genres there is a resolution from dissonance to consonance, and similar resolutions are placed throughout because we find them pleasing.  So much so, indeed, that if a piece ends with a fade-out rather than a resolution we feel cheated and think that the players and composer (or merely the DJ) have been lazy.

A musical resolution is like all the snooker balls coming together improbably to form a triangle.  A high-entropy state has surprisingly turned into a low-entropy state - the very reverse of what we expect from our intuitive feeling for the Second Law.  The reason that backwards music sounds bad is because backwards music is like all of nature - its entropy increases.  But we like forwards music, and it sounds appealing to us, because it reverses the normal flow - it flows away from decay and dissipation towards freshness and refinement.

Music is backwards time.

Saturday, 24 October 2015


DNA was famously discovered in 1869 by Friedrich Miescher, just ten years after Darwin published the Origin of Species.  But it wasn't until 1952 that it was demonstrated to be the key to heredity (in the T2 phage), and then its structure and modus operandi were deduced a year later.

In retrospect, its helical structure might have been anticipated (easy for me to say with hindsight, of course...).  If you take any random shape and stack repeated copies of it, the result is a helix.  The reason is that the top face of the shape will be a fixed distance from the bottom, and at a fixed angle to it.  Repeatedly copying something through a fixed distance and turning it through a fixed angle always gives a helix.
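You can see this in a few lines of Python: repeatedly move a point through a fixed rise along an axis and turn it through a fixed angle about that axis, and the copies trace out a helix.  The rise and twist below are roughly those of one base pair of B-DNA (0.34 nm and about 36 degrees), but any values would do:

    import math

    rise, angle = 0.34, math.radians(36)   # fixed rise and turn per copy
    x, y, z = 1.0, 0.0, 0.0                # any starting point off the axis

    for i in range(10):
        # rotate about the z axis, then translate along it
        x, y = (x * math.cos(angle) - y * math.sin(angle),
                x * math.sin(angle) + y * math.cos(angle))
        z += rise
        print(f"copy {i + 1}: ({x:.3f}, {y:.3f}, {z:.2f})")

    # The radius sqrt(x*x + y*y) never changes and the turn per step is
    # fixed: the defining properties of a helix.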

However, that helix must be very inconvenient for cells, as anyone who has had to deal with a tangle of string knows.  As DNA is copied it unzips down the middle, then zips itself back up again a bit downstream.  Meanwhile the copy twists away, forming a new chain.  My late colleague Bill Whish once worked out that the molecule has to spin at 8,000 RPM while it is doing this to get the job done in time.  Think of your car engine spinning at that speed and spewing out a twisted strip as it did so.  Now present yourself with the problem of stopping that strip from getting hopelessly knotted up, and also not tangled with its parent DNA.  It's fortunate that all this is happening in an extremely viscous medium - water.  At the molecular scale water is like very thick treacle indeed.  Our intuition from the engine-and-strip analogy is led astray by our instinct for flexibility, mass and inertia at human scales.  Down at the scale of DNA, inertia is negligible, and everything stays where it is put.

But what if DNA were not a helix, but were as straight as a ladder?  Many of these problems would go away.  It should not be too difficult to devise a molecular ladder in which the angle between one rung and the next is zero.  If in no other way, this could be achieved by taking a twisted unit and bonding it to its opposite-handed stereo-isomer, cancelling the twist, to form a single rung.

The ribosome would have to be re-engineered to accommodate straight, rather than twisted, transfer RNA.  But people have already made artificial ribosomes that do things like working with four-base codons (all natural ribosomes work with three-base codons) so that should not be too difficult.  The actual codon to amino-acid to protein coding could remain unaltered, at the start at least.

We would then have synthetic cells that might be able to reproduce much more quickly than natural ones, or work at much higher temperatures.

And the ladder DNA would be much easier to organise in two-dimensional sheets or three-dimensional blocks.  We could then try to do parallel copying of the DNA rather than the serial copying that happens at the moment - a ladder-DNA sheet could make a copy by stamping it out, rather as a printing press stamps out a page.  That should speed up reproduction even further, and a similar parallelisation could be attempted with the ribosome, making proteins all in one go, as opposed to chaining them together one amino acid at a time.

Life would get a lot faster...

Monday, 12 October 2015


Science is mostly right because it assumes that it is wrong. All other fields of human thought are mostly wrong because they assume that they are right.

Despite this, scientific publishing is not organised scientifically at all; instead it relies on the antic combination of the political and the quasi-religious that is peer review.

Suppose some hapless junior academic in a fly-blown university physics department in Wyoming or Berkshire or somewhere has an explanation for why electron, muon or tau neutrinos change between these three types as they zip through you and everything else.  The physicist writes a paper about it that gets sent to a bunch of experts - the peer reviewers - who decide if it is sensible or not.  If it is sensible it gets published and the physicist stays in a job; if not, both it and the physicist are out on their ear.

That process is itself not sensible.

Since the start of the internet, I have not understood why we don't apply the scientific method to the publishing of science instead of relying upon the logical fallacy of argumentum ab auctoritate, which is what peer-review is.

Here's how it would work: authors post their paper online; at this stage there are no authors' names published, and the paper does not count as a publication to the physicist-sackers. It is assigned a prior probability of being right of, say, 10%. This probability rises automatically, but slowly, with time.

Then anyone except the authors can attempt to falsify the paper. At each failed attempt at falsification the probability of the paper's being right is increased in the appropriate Bayesian way. When that probability rises above, say, 70% the authors' names are published and they can use the paper to gain tenure, promotion, Nobel prizes and so on.

If an attempt at falsification succeeds then the paper is moved into a proved-false category and the authors may choose to have their names published, or not.

If no attempts are made to falsify the paper, and thus its time-dependent probability of being right rises to a high level all on its own, the paper is moved sideways into a maybe-interesting-but-not-falsifiable-so-not-science category. Any subsequent attempt at falsification (for example when technology improves) moves it back into the mainstream.

And what do the falsifiers get out of it? Their successful falsifications are automatically added to their list of publications with a high rank – high, because falsehood is the only certainty in science.

Almost all of this could be automated by any competent web programmer.
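Here, for instance, is a toy sketch of the core bookkeeping.  The prior, the likelihoods in the Bayesian update and the naming threshold are all invented for illustration:

    # Toy model of a paper's life in the proposed system.

    PRIOR = 0.10           # initial probability that the paper is right
    NAME_THRESHOLD = 0.70  # authors' names are published above this

    def survive_falsification(p, p_survive_if_true=0.95, p_survive_if_false=0.5):
        """Bayes' rule after a failed attempt at falsification: a true paper
        survives a competent attack more often than a false one does."""
        evidence = p * p_survive_if_true + (1 - p) * p_survive_if_false
        return p * p_survive_if_true / evidence

    p = PRIOR
    for attempt in range(1, 100):
        p = survive_falsification(p)
        if p > NAME_THRESHOLD:
            print(f"names published after {attempt} failed falsifications"
                  f" (p = {p:.2f})")
            break

With these made-up numbers the authors get their names on the paper after five failed attempts to knock it down.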

Nullius in verba, as the Royal Society says.