
Saturday, 24 October 2015


DNA was famously discovered in 1869 by Friedrich Miescher, just ten years after Darwin published the Origin of Species.  But it wasn't until 1952 that it was demonstrated to be the key to heredity (in the T2 phage), and then its structure and modus operandi were deduced a year later.

In retrospect, its helical structure might have been anticipated (easy for me to say with hindsight, of course...).  If you take any random shape and stack repeated copies of it, the result is a helix.  The reason is that the top face of the shape will be a fixed distance from the bottom, and at a fixed angle to it.  Repeatedly copying something through a fixed distance and turning it through a fixed angle always gives a helix.
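The geometry is easy to check numerically.  In this minimal sketch a "shape" is stood in for by a single point, and copying it means rotating through a fixed angle and translating through a fixed rise; the 34.3° and 0.34 nm below happen to be B-DNA's figures, but any values would give the same result:

```python
import math

# A rigid motion: rotate by a fixed angle about the z axis, then
# translate by a fixed rise along z - the "copy the shape" operation.
ANGLE = math.radians(34.3)   # per-copy twist (B-DNA's figure, but illustrative)
RISE = 0.34                  # per-copy rise in nanometres (likewise)

def next_copy(p):
    """Apply the fixed rotation-plus-translation to one point."""
    x, y, z = p
    xr = x * math.cos(ANGLE) - y * math.sin(ANGLE)
    yr = x * math.sin(ANGLE) + y * math.cos(ANGLE)
    return (xr, yr, z + RISE)

# Start from an arbitrary point on the shape and stack copies.
points = [(1.0, 0.0, 0.0)]
for _ in range(20):
    points.append(next_copy(points[-1]))

# Every copy sits at the same radius, stepped by a constant angle and a
# constant height - that is, the points lie on a helix.
radii = [math.hypot(x, y) for x, y, _ in points]
assert all(abs(r - radii[0]) < 1e-9 for r in radii)
```

Whatever angle and rise you choose, the stacked copies trace out a helix; a zero angle is the one special case, and it gives the straight ladder discussed below.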

However, that helix must be very inconvenient for cells, as anyone who has had to deal with a tangle of string knows.  As DNA is copied it unzips down the middle then zips itself back up again a bit downstream.  Meanwhile the copy twists away forming a new chain.  My late colleague Bill Whish once worked out that the molecule has to spin at 8,000 RPM while it is doing this to get the job done in time.  Think of your car engine spinning at that speed and spewing out a twisted strip as it did so.  Now present yourself with the problem of stopping that strip from getting hopelessly knotted up, and also not tangled with its parent DNA.  It's fortunate that all this is happening in an extremely viscous medium - water.  At the molecular scale water is like very thick treacle indeed.  Our intuition from the engine and strip analogy is led astray by our instinct for flexibility, mass and inertia at human scales.  Down at the scale of the DNA, inertia is negligible, and everything stays where it is put.

But what if DNA were not a helix, but were as straight as a ladder?  Many of these problems would go away.  It should not be too difficult to devise a molecular ladder in which the twist between one rung and the next is zero.  If in no other way, it could be achieved by bonding a twisted unit to its stereoisomer with the opposite twist to form a single, untwisted rung.

The ribosome would have to be re-engineered to accommodate straight, rather than twisted, transfer RNA.  But people have already made artificial ribosomes that do things like working with four-base codons (all natural ribosomes work with three-base codons), so that should not be too difficult.  The actual coding from codons to amino acids to proteins could remain unaltered, at the start at least.

We would then have synthetic cells that might be able to reproduce much more quickly than natural ones, or work at much higher temperatures.

And the ladder DNA would be much easier to organise in two-dimensional sheets or three-dimensional blocks.  We could then try to do parallel copying of the DNA rather than the serial copying that happens at the moment - a ladder-DNA sheet could make a copy by stamping it out, rather as a printing press stamps out a page.  That should speed up reproduction even further, and a similar parallelisation could be attempted with the ribosome, making proteins all in one go, as opposed to chaining them together one amino acid at a time.

Life would get a lot faster...

Monday, 12 October 2015


Science is mostly right because it assumes that it is wrong. All other fields of human thought are mostly wrong because they assume that they are right.

Despite this, scientific publishing is not organised scientifically at all; instead it relies on the antic combination of the political and the quasi-religious that is peer review.  Suppose some hapless junior academic in a fly-blown university physics department in Wyoming or Berkshire or somewhere has an explanation for why electron, muon and tau neutrinos change between these three types as they zip through you and everything else.  The physicist writes a paper about it that gets sent to a bunch of experts - the peer reviewers - who decide if it is sensible or not.  If it is sensible it gets published and the physicist stays in a job; if not, both it and the physicist are out on their ear.

That process is itself not sensible.

Since the start of the internet, I have not understood why we don't apply the scientific method to the publishing of science instead of relying upon the logical fallacy of argumentum ab auctoritate, which is what peer review is.

Here's how it would work: authors post their paper online; at this stage there are no authors' names published, and the paper does not count as a publication to the physicist-sackers. It is assigned a prior probability of being right of, say, 10%. This probability rises automatically, but slowly, with time.

Then anyone except the authors can attempt to falsify the paper. At each failed attempt at falsification the probability of the paper's being right is increased in the appropriate Bayesian way. When that probability rises above, say, 70% the authors' names are published and they can use the paper to gain tenure, promotion, Nobel prizes and so on.
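The book-keeping is just Bayes' theorem applied repeatedly.  Here is a minimal sketch, assuming - purely for illustration - that a wrong paper survives any given falsification attempt half the time, while a right paper always survives:

```python
# Illustrative assumptions, not part of the proposal itself:
P_FAIL_IF_RIGHT = 1.0   # a correct paper always survives an attempt
P_FAIL_IF_WRONG = 0.5   # a wrong paper survives an attempt half the time

def update(p_right):
    """Posterior P(right) after one failed falsification attempt."""
    num = p_right * P_FAIL_IF_RIGHT
    den = num + (1.0 - p_right) * P_FAIL_IF_WRONG
    return num / den

p = 0.10                 # the prior assigned when the paper is posted
attempts = 0
while p <= 0.70:         # the names-get-published threshold
    p = update(p)
    attempts += 1

print(attempts, round(p, 3))   # 5 failed attempts take the paper past 70%
```

With those made-up survival probabilities, five failed attempts at falsification take a paper from its 10% prior to about 78%, and the authors' names appear.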

If an attempt at falsification succeeds then the paper is moved into a proved-false category and the authors may choose to have their names published, or not.

If no attempts are made to falsify the paper, and thus its time-dependent probability of being right rises to a high level all on its own, the paper is moved sideways into a maybe-interesting-but-not-falsifiable-so-not-science category. Any subsequent attempt at falsification (for example when technology improves) moves it back into the mainstream.

And what do the falsifiers get out of it? Their successful falsifications are automatically added to their list of publications with a high rank – high, because falsehood is the only certainty in science.

Almost all of this could be automated by any competent web programmer.

Nullius in verba, as the Royal Society says.

Sunday, 12 April 2015


This is a cellphone photograph of the Horsehead Nebula, at the top of Orion's sword, just below his belt.  It was made by the BBC programme Stargazing Live on 20 March 2015.  No telescopes were involved.

The full image (click this link) is 40MiB, which is rather high-resolution for a cellphone.  So how was it done?

Back in the age of Carl Sagan and Patrick Moore, astronomers would point a telescope at a patch of sky and open a camera shutter.  The telescope would be on an equatorial mount driven by a clock, so the star images would stay in the same place on the photographic film as the Earth rotated under the sky.  Stars and, even worse, nebulae such as the one depicted above don't give off a lot of light at this distance, so astronomers waited for hours before closing the shutter.  Sometimes they would even come back the next night and the next for weeks, opening the shutter each time to catch a few more photons.  The result was an average - enough light had hit the film to form an image, but it had all been smeared out by atmospheric interference and any slight shake of the telescope.

In contrast the picture above was only exposed for, perhaps, one fiftieth of a second.   But it was not taken by just one cellphone - hundreds of viewers of the program had gone out and taken a picture of the same patch of sky with their phones and cameras, and had then uploaded them to the programme's website.  Each individual picture was just a black rectangle - not enough starlight had gone through the lens to make an image that could be seen.  But some had gone through, and registered in the camera's pixels as a slightly less-dark patch of black.

All the images were then stacked.  A computer first matched them up by making sure that the centres of the prominent stars were all in the same place, and then added up the slightly-less-black bits to make the picture.  Of course the pixels in all the cameras were not in the same place relative to the stars; those sub-pixel offsets mean that each camera pixel could be split into thousands of final-image pixels, which gives the fabulous resolution, a tiny bit of which you see above.

Long-exposure averaging loses information as the price of getting enough light.  Stacking preserves and integrates the information and gets enough light by - effectively - having a camera aperture the size of all the camera lenses added together.  And all those lenses see different atmospheric interference, have different lens errors, and have different noise patterns in their phones' image sensors, so those can all be compensated for as well.
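The gain from stacking is easy to demonstrate numerically.  In this sketch a single pixel watches a star far too faint to see in any one frame (the signal and noise levels are invented numbers), yet the mean of ten thousand frames recovers it, because averaging N frames shrinks the noise by a factor of the square root of N while leaving the signal alone:

```python
import random

random.seed(0)         # make the sketch repeatable

SIGNAL = 0.02          # faint star: far below the per-frame noise
NOISE = 1.0            # per-frame sensor noise (standard deviation)
FRAMES = 10000         # number of uploaded snapshots

# Each frame is one pixel's reading: the signal buried in noise.
frames = [SIGNAL + random.gauss(0.0, NOISE) for _ in range(FRAMES)]

# Stacking: the mean keeps the signal but averages the noise away.
stacked = sum(frames) / FRAMES
print(round(stacked, 3))   # close to SIGNAL, despite each frame being junk
```

Any single frame here is dominated by noise fifty times bigger than the star; the stack of ten thousand is not.  Alignment, which the real system also has to do, is assumed away by using a single pixel.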

The human race is a species on which the stars never set.  So let's make the Human Telescope.  Set up a website to which anyone anywhere in the world can upload any sky images that they have taken with any digital camera, phone or telescope.  The images will have a timestamp and a GPS location, and will be continually stacked by a computer in the background to give an exquisitely detailed evolving picture of the whole vault of the heavens.

The world would become a great spherical insect eye looking at every star, galaxy, planet and nebula all the time.  We would be automatically finding comets, supernovae and near-Earth asteroids.  We would never miss an astronomical trick.

Thursday, 19 March 2015


We now have good encrypted means of communication that allow people to talk with each other in complete confidentiality.  But can we be confident that the messages we are receiving are true?  Making a message secure and ensuring that it is not a lie are independent problems.  We have only solved the first.

I just saw Buzz Aldrin on the TV (run with me here...) and thought, "He seems pretty lively for an oldster."  So I got out my phone and asked it how old he is.

"Buzz Aldrin is 85 years old," it said to me.  I am reasonably sure that it wasn't lying, and - if I were to go online - I could find multiple independent sources that would allow me to confirm his age.  Their independence assures me that 85 is almost certainly the correct answer.  It would require a conspiracy of Kennedy-was-assassinated-by-NASA-during-the-moon-landing proportions to think that all those sources of information had been got at and that I had been lied to.

And the answer 85 was given to me by a machine that understood my question.  No human being (apart from me) was involved.

So how about an online document checker that scans texts and tells you how likely they are to be true?  It would use the multiple-independent-sources technique to work up a score.  It could give you an overall statistic ("This is probably correct at a level of 93%."), and it could rate individual statements for both accuracy and uncertainty.

Thus if my document said:

Buzz Aldrin is 85.  My father was 31 years old when I was born.  The moon is 6000 years old.

I'd get a correctness level of around 50% and an uncertainty level of around 30%.  (The first two statements are true and the third is false.  But the system would obviously have much more trouble verifying the middle statement than the other two.)

Your document would come back coloured:

Buzz Aldrin is 85.  My father was 31 years old when I was born.  The moon is 6000 years old.

so you could see at a glance what to believe, what not to believe, and what needed further investigation.
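A toy version of the scorer might look like this; the agree/disagree tallies are invented stand-ins for what a real system would gather from independent online sources:

```python
# Toy checker.  The source counts below are invented for illustration;
# a real system would gather them by searching independent sources.
statements = {
    "Buzz Aldrin is 85.":                {"agree": 40, "disagree": 1},
    "My father was 31 when I was born.": {"agree": 0,  "disagree": 0},
    "The moon is 6000 years old.":       {"agree": 1,  "disagree": 50},
}

def score(counts):
    """Return (probability true, uncertainty) from source tallies."""
    n = counts["agree"] + counts["disagree"]
    if n == 0:
        return 0.5, 1.0            # nothing found: maximally uncertain
    p = counts["agree"] / n
    return p, 1.0 / (1.0 + n)      # more sources -> less uncertainty

for text, counts in statements.items():
    p, u = score(counts)
    print(f"{text}  p={p:.2f}  uncertainty={u:.2f}")
```

The first statement comes out near-certainly true, the third near-certainly false, and the unverifiable middle one lands at 50% with maximum uncertainty - which is what the colouring would encode.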

Whenever you got a communication from a lawyer (or - if you are a suspicious sort - a lover) you could fire it at the system and see what colour it came back.

The system could crawl the web, paying particular attention to advertising copy, political manifestos, and estate agents' property descriptions.  Then - if you selected the TRUTH option in your browser - whenever you visited a website it would be pre-coloured for you so you could see if it was all lies...

Saturday, 28 February 2015


As I have indicated elsewhere around here, I give E. M. Forster's number of cheers for democracy.  But the forthcoming UK General Election reminds us all of the lamentably intermittent and indirect democracy that we actually have.  On one day in five years we each get a 0.00166% choice of a person to represent all of our views.  If we each have - say - one view per week, that means that we have only 0.000006385% of a democracy.  Spitting in a puddle would give us a greater say.
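For what it's worth, the arithmetic of the second figure checks out, taking the 0.00166% per-election figure as given:

```python
# Checking the arithmetic.  The 0.00166% per-election figure is taken
# from the text as given; the per-view figure follows from it.
per_election = 0.00166        # percent say in one General Election
views = 52 * 5                # one view a week over a five-year term
per_view = per_election / views
print(f"{per_view:.9f}%")     # 0.000006385%
```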

Also, that tiny amount of choice cannot even be expended on candidates of any great merit.  If you were ever a student, think of the strange fellow students you used to know who were members of student branches of political parties, or who stood for election to the students' union.  They are the ones standing for Parliament now.  Did any of them seem normal to you then?  No.  I thought not. (With the possible exception of the Rag Strippagram Coordinator.)

However, it's no use moaning about it, just as it's no use voting about it.  We have to fix it.  And - if history teaches us anything - it teaches us that the only way to fix something is to create and to spread an appropriate technology to do the job.

So how about a very simple Democracy App for your phone?  It would work as follows: every time the Division Bell rang in the House of Commons or Lords your phone would beep and the motion would be displayed.  You would vote "Yes" or "No" - or go back to sleep.  The results would be added up and then displayed on a website, along with how the parliamentarians voted.

The big trick is not to try to make the electronic vote binding.  Instead, separate the making of a decision from the mass expression of an opinion on it, while tying the two together very tightly in time.  That way the self-interested and self-perpetuating decision makers get no say in whether the App is implemented or not.

Things get interesting if a lot of people use the App and disagree with Parliament.  Firstly, journos would pick up on it and start to quote the stats: "MPs vote three-to-two in favour, but the public are three-to-one against."  That sort of thing.

Secondly, at the moment if 100,000 people sign a parliamentary petition, then the issue gets debated in Parliament.  Well, if the number of Democracy App users who disagreed with Parliament exceeded 100,000, the system could automatically generate a petition to reconsider the vote and invite all those app-users who disagreed to sign it.

Thirdly, the whole idea of direct participation and electronic referendums on every important matter would gain currency, and the technology could be bedded in and made robust before any actual use in direct decision making was considered.
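The automatic-petition rule is simple enough to sketch; the vote tallies below are invented:

```python
# Sketch of the automatic-petition rule.  Vote tallies are invented.
PETITION_THRESHOLD = 100_000   # signatures that force a Commons debate

def petition_triggered(app_votes, parliament_passed):
    """app_votes maps user id -> True (voted yes) / False (voted no)."""
    # Users whose vote disagrees with the way Parliament decided.
    dissenters = [u for u, v in app_votes.items() if v != parliament_passed]
    return len(dissenters) > PETITION_THRESHOLD, dissenters

# 250,000 app users: 40% voted yes, 60% no; Parliament passed the motion.
votes = {f"user{i}": (i % 5 < 2) for i in range(250_000)}
triggered, dissenters = petition_triggered(votes, parliament_passed=True)
print(triggered, len(dissenters))   # prints: True 150000
```

The 150,000 no-voters disagree with Parliament, the threshold is crossed, and the petition goes out automatically.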

There are a few technical requirements that the Democracy App would have to meet.  Both the server and the client sides would have to be open-source, so that people could check that there was no skulduggery going on in the counting.  The App would have to be a free download, and would have to carry no advertising to avoid any accusations of bias towards the better-off or bias towards piper-payers.  And finally, votes would have to be encrypted in memory when the App was running and end-to-end encrypted and routed through Tor in transmission so that they were anonymous.

My whole Democracy App idea is necessarily expressed in the terms of the Parliament with which I am most familiar.  But it would clearly work in any legislature where votes are taken, and many of them are democracies in name only.  Citizens of such places are the ones who most require anonymity.

The system would also work at all levels of government from lowly parishes, through county councils and parliaments, right up to supranational bodies such as the EU and the UN.  It could also work in the corporate sphere, connecting votes by boards of directors directly to workers and shareholders. 

The needed security technology is pretty widely understood, as is the writing of apps themselves.

Go to it, software engineers!

Monday, 23 February 2015

Return after a short absence

This blog has lain fallow for a little while as I had to attend to other matters, most notably helping to run RepRapPro Ltd and moving house.  But soon I shall return...

Wednesday, 10 October 2012


A while ago my brother put on some music after dinner.  "Who?" he asked me.

"Ashkenazy," I said.

"Correct.  Page turner?" he asked...

Of course it was terrible musical snobbery for us both to know implicitly that he was asking for the performer and not the composer (Chopin, as it happened).  And his subsequent question made me laugh for a long time.  Perhaps we had drunk too much wine.

But every musician needs a page turner, or there is that annoying half-second pause as one hand flies from the keys to flip.

There are, of course, page turner apps.  But as far as I can see they all rely on foot pedals.

However, tablets have microphones.  It can't be hard to write a program that analyses the sound stream, matches it to a score, and then turns the page automatically...
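The score-following part, at least, is straightforward.  This sketch assumes pitch detection from the microphone has already been done (it is not shown here) and only demonstrates the matching pointer that triggers the turn; the score itself is made up:

```python
# Sketch of the score-following logic only; real pitch detection from
# the microphone (e.g. by Fourier analysis) is assumed, not shown.
score = ["C4", "E4", "G4", "C5",      # page 1 (a made-up score)
         "B4", "G4", "E4", "C4"]      # page 2
PAGE_BREAK = 4                        # index at which page 2 begins

def follow(detected_notes):
    """Advance through the score as notes are heard; report page turns."""
    pos, turns = 0, []
    for note in detected_notes:
        if pos < len(score) and note == score[pos]:
            pos += 1
            if pos == PAGE_BREAK:
                turns.append("turn to page 2")
    return turns

print(follow(["C4", "E4", "G4", "C5", "B4"]))   # -> ['turn to page 2']
```

A real implementation would turn slightly early, of course, just as a good human page turner does; that is one constant's difference.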

Tuesday, 8 May 2012


James Lovelock and Richard Dawkins famously disagree over Lovelock's Gaia Hypothesis.

Dawkins' extremely strong counterargument against Gaia is that natural selection works at the level of the gene, and that it doesn't have any foresight to anticipate long-term planetary disaster in the face of an immediate pressure to select for a locally advantageous mutation.  As a counterexample to Gaia, there are the cyanobacteria.  They all but destroyed the Earth's entire ecosystem just after it first got started by dumping billions of tonnes of poisonous waste (oxygen) into the atmosphere.  And, of course, one counterexample is enough to disprove a hypothesis.

However, most of the evidence we have seems to support Lovelock.  For example, the sun has been getting steadily hotter, and yet conditions on Earth have been maintained more or less homoeostatically by the collective undirected actions of all its living organisms.  Lovelock simulated a world of light and dark daisies (called, naturally, Daisyworld) that maintained a stable temperature even though the participating organisms were only acting in their immediate self-interest.  But his simulation did not include a mutation with an active interest in disrupting the homeostasis - something that is always a possibility in reality.

When he was working for NASA, Lovelock pointed out that a good indicator of life on another world would be (as it is on Earth) a surprisingly low-entropy state on its surface.  As an example, consider the oxygen again - oxygen is a highly reactive gas (hence its toxicity to primordial life); one would not expect unaided geological processes to create such a low-entropy atmosphere.  If we ever find an exoplanet with an ocean of molten sodium on its surface, we can be tolerably confident that that planet supports life.

One thing that all life needs is low entropy.  You eat bread, which has a low entropy, and your subsequent living actions dump the energy in your bread as waste heat, which has a high entropy.  In that way you, and all other living organisms, are behaving just like the steam engines of the nineteenth century that led thermodynamicists to the idea of entropy in the first place.  And when you plant a field of wheat for bread, you are contriving symbiotically with the wheat to use the Sun's energy locally to reduce the entropy of a part of the system.

Indeed, it is in the interest of all living organisms to have low-entropy surroundings.  That makes their lives easiest to conduct (it is more or less equivalent to saying that they need a plentiful local source of food, clean water, and breathable air), and genes that cause them to create such a state in their local surroundings will - all other things being equal - be favoured.  Ants grow fungus farms for food.  Plants encourage symbiotic micro-organisms around their roots.  Spiders make larders in their webs.

One of Dawkins' great contributions to biology is the idea of the extended phenotype.  That is, the idea that genes can be selected for that drive organisms to alter their surroundings for their benefit.  Birds have genes that allow them instinctively to build nests - a nest is part of a bird's extended phenotype.  People have genes that allow us to converse, and hence to collaborate.  The results are our extended phenotypes of iPhones, power stations, and TV talent shows.

But - as we established above - there is a selection pressure on every living organism to reduce the entropy of its surroundings.  Genes will have been selected in all living things to cause them to try to create local extended phenotypes of such surroundings.  None of this is global; no organisms (not even we) have evolved to stabilise the planet.  But the collective effect of all the local attempts to reduce entropy is exactly the Gaia that Lovelock proposed.  It is not a globally directed effort by each individual (let alone each individual gene), any more than a water molecule in a steam engine is trying to make a flywheel go round.  But the collective unconscious action of all the water molecules is, nonetheless, to achieve precisely that.  And the collective unconscious action of all organisms is to create a global low-entropy environment.

Dawkins would correctly argue (I think) that given global low entropy created by Gaia then there is a selection pressure on organisms to exploit that as a whole, thus destroying it in a tragedy of the commons.  And that is precisely what the cyanobacteria did.

So the question that remains is: why are cyanobacteria events so rare?  The answer is Dawkins' own Mount Improbable.  An organism that mutates to exploit low entropy in general must do so locally at first by a small mutation.  And almost all those organisms suffer the usual fate of over-successful predators: they eat too much of their immediate surroundings and then starve.  For random change to generate a mutation that allows excessive exploitation on a global scale in a single shot is an astronomically improbable event.  But it happened once with the cyanobacteria.  However, they didn't quite destroy everything, because they had a direct line to the Sun itself, and so they could survive the disaster they created.  Any organism without such a get-out-of-jail-free card would succumb to the predator starvation effect, and so leave pockets of other life (along perhaps with themselves) to start evolving afresh in balanced competition.

So Gaia is not a globally-directed system with a homeostatic purpose.  It emerges naturally as the result of all living things trying to reduce the entropy of their immediate surroundings using the power of the sun for their own benefit.  And Gaia is not proof against global cataclysm, nor against very very rare mutations that allow a single organism to exploit the system as a whole and thus destroy it.

For the first time in two billion years such a new species has emerged.  It also has a direct line to the Sun that is far more efficient (20%) than photosynthesis (3%).  It is the species whose extended phenotype includes making these out of the bare earth...