Tuesday, 29 December 2015

SprogWort



The very wonderful Ben Goldacre rightly has it in for (among others) journalists who misrepresent scientific research to generate a completely artificial scare story.  Think no further than the MMR scandal in journalism, for example, in which newspapers killed many children.  (And in which one journalist, Brian Deer, exposed the original fraud.)

Often the problem is not the original paper describing the research, but the press release put out by the authors' institution (whose PR department is usually staffed by more journalists).  Of course the authors are also at fault here - the PR department will always give them a draft of any release to check, and they should be savage in removing anything that they think may cause distortions if reported.  But authors are not disinterested in fame and publicity.

It seems to me that there is a simple solution.  The normal sequence of events is this:

  1. Do experiments,
  2. Write a paper on the results,
  3. Submit it to a journal,
  4. Correct the paper according to criticisms from the journal's reviewers,
  5. See the paper published,
  6. Have the PR people write a press release based on the paper,
  7. Check it,
  8. Send it out, and
  9. See the research - or a distortion of it - appear in the press.

But simply by moving Item 6 to between Items 2 and 3 - that is, by sending the press release to the journal's reviewers along with the paper - a lot of trouble could be avoided.  The reviewers have no interest in gaining fame and publicity (unlike the authors and their institution), but they are concerned with accuracy and truth.  If they were to correct the press release along with the paper itself, and in particular were compelled to add a plain-terms list at the start of the press release of what the paper does and does not say, most of the distortion would never get started.

The list would look something like:

  1. This paper shows that rats eating sprogwort reduced their serum LDL (cholesterol) levels by a statistically significant amount.
  2. This paper does not show that sprogwort reduces cardiovascular disease in rats.
  3. This paper does not show that sprogwort reduces cardiovascular disease in humans.
  4. Sprogwort is known to be neurotoxic in large doses; the research in this paper did not study that at all.


Then everyone would quickly discover that the following morning's headline in the Daily Beast screaming

SPROGWORT CURE FOR HEART ATTACKS

was nonsense.  In particular, other journalists would know, and - of course - there's nothing one journalist loves more than being able to show that a second journalist is lying...



Thursday, 3 December 2015

EmitCisum


BBC Radio 3 has a competition in the mornings where they play an excerpt from a piece of music backwards and ask listeners to identify it.  Occasionally I get it and often I don't, but more interesting is the fact that music doesn't work backwards.  It just doesn't sound right, or even good.

This is rather surprising.  The rules of melody and harmony are like the laws of quantum mechanics - they are indifferent to the direction of time.

So if we watch (if we could) an atom absorbing a photon and one of its electrons consequently changing its energy, the reverse process, in which the atom emits a photon, would look just like that film being played backwards.  The same thing happens at the human scale - if you watch a film of two moving snooker balls bouncing off each other, you can't tell if the film is being played backwards or forwards.  But if you watch a film of the first break in a snooker game, where the white strikes a triangle of reds, it is immediately obvious whether the film is backwards or forwards.  For a load of red balls to converge and, after multiple collisions, to come to rest in a neat triangle whilst ejecting a white ball from their midst would be so monstrously improbable that we would know that that film was backwards more certainly than we know that we will not be eaten by a griffin.

A prog-rock cliché that is most effective is to start a piece with white noise (sometimes bandpass filtered to sound like the wind), or with random drum beats and cymbal clashes, and for all that gradually to resolve into a structured tune.  Think Pink Floyd.  And at the end of most musical pieces of all genres there is a resolution from dissonance to consonance, and similar resolutions are placed throughout because we find them pleasing.  So much so, indeed, that if a piece ends with a fade-out rather than a resolution we feel cheated and think that the players and composer (or merely the DJ) have been lazy.

A musical resolution is like all the snooker balls coming together improbably to form a triangle.  A high-entropy state has surprisingly turned into a low-entropy state - the very reverse of what we expect from our intuitive feeling for the Second Law.  The reason that backwards music sounds bad is that it is like all of nature - its entropy increases.  But we like forwards music, and it sounds appealing to us, because it reverses the normal flow - it flows away from decay and dissipation towards freshness and refinement.

Music is backwards time.
 



Saturday, 24 October 2015

LadderDNA


DNA was famously discovered in 1869 by Friedrich Miescher, just ten years after Darwin published the Origin of Species.  But it wasn't until 1952 that it was demonstrated to be the key to heredity (in the T2 phage), and then its structure and modus operandi were deduced a year later.

In retrospect, its helical structure might have been anticipated (easy for me to say with hindsight, of course...).  If you take any random shape and stack repeated copies of it, the result is a helix.  The reason is that the top face of the shape will be a fixed distance from the bottom, and at a fixed angle to it.  Repeatedly copying something through a fixed distance and turning it through a fixed angle always gives a helix.
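
Here is a small sketch of that geometric claim (mine, nothing to do with real DNA chemistry): repeatedly apply one fixed rigid transformation - a rotation by a fixed angle about an axis plus a fixed translation along it - to a single point on the repeated shape.  The copies always land on a helix of constant radius, with evenly spaced angle and height.  The twist and rise used below are roughly the figures usually quoted for B-DNA, but any fixed values would do.

  import numpy as np

  theta = np.radians(34.3)   # twist per copy
  rise = 0.34                # rise per copy (nm)

  rotation = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
  translation = np.array([0.0, 0.0, rise])

  point = np.array([1.0, 0.0, 0.0])   # an arbitrary point on the shape being stacked
  for i in range(11):
      radius = np.hypot(point[0], point[1])
      angle = np.degrees(np.arctan2(point[1], point[0]))
      print(f"copy {i:2d}: radius = {radius:.3f}, angle = {angle:7.1f} deg, height = {point[2]:.2f}")
      point = rotation @ point + translation   # the same fixed copy-and-turn step every time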

However, that helix must be very inconvenient for cells, as anyone who has had to deal with a tangle of string knows.  As DNA is copied it unzips down the middle, then zips itself back up again a bit downstream.  Meanwhile the copy twists away, forming a new chain.  My late colleague Bill Whish once worked out that the molecule has to spin at 8,000 RPM while it is doing this to get the job done in time.  Think of your car engine spinning at that speed and spewing out a twisted strip as it did so.  Now present yourself with the problem of stopping that strip from getting hopelessly knotted up, and also not tangled with its parent DNA.  It's fortunate that all this is happening in an extremely viscous medium - water.  At the molecular scale water is like very thick treacle indeed.  Our intuition from the engine-and-strip analogy is led astray by our instinct for flexibility, mass and inertia at human scales.  Down at the scale of the DNA, inertia is negligible, and everything stays where it is put.

But what if DNA were not a helix, but were as straight as a ladder?  Many of these problems would go away.  It should not be too difficult to devise a molecular ladder in which the angle between one rung and the next has zero twist.  If in no other way, it could be achieved by bonding a twisted unit to its mirror-image stereoisomer, whose twist is equal and opposite, to form a single straight rung.

The ribosome would have to be re-engineered to accommodate straight, rather than twisted, transfer RNA.  But people have already made artificial ribosomes that do things like working with four-base codons (all natural ribosomes work with three-base codons), so that should not be too difficult.  The actual codon-to-amino-acid-to-protein coding could remain unaltered, at the start at least.

We would then have synthetic cells that might be able to reproduce much more quickly than natural ones, or work at much higher temperatures.

And the ladder DNA would be much easier to organise in two-dimensional sheets or three-dimensional blocks.  We could then try to do parallel copying of the DNA rather than the serial copying that happens at the moment - a ladder-DNA sheet could make a copy by stamping it out, rather as a printing press stamps out a page.  That should speed up reproduction even further, and a similar parallelisation could be attempted with the ribosome, making proteins all in one go, as opposed to chaining them together one amino acid at a time.

Life would get a lot faster...


Monday, 12 October 2015

FalseScience


Science is mostly right because it assumes that it is wrong. All other fields of human thought are mostly wrong because they assume that they are right.

Despite this, scientific publishing is not organised scientifically at all; instead it relies on the antic combination of the political and the quasi-religious that is peer review.

Suppose some hapless junior academic in a fly-blown university physics department in Wyoming or Berkshire or somewhere has an explanation for why electron, muon or tau neutrinos change between these three types as they zip through you and everything else.  The physicist writes a paper about it that gets sent to a bunch of experts - the peer reviewers - who decide if it is sensible or not.  If they think it is sensible it gets published and the physicist stays in a job; if not, both it and the physicist are out on their ear.

That process is itself not sensible.

Since the start of the internet, I have not understood why we don't apply the scientific method to the publishing of science instead of relying upon the logical fallacy of argumentum ab auctoritate, which is what peer review is.

Here's how it would work: authors post their paper online; at this stage the authors' names are not published, and the paper does not count as a publication to the physicist-sackers. It is assigned a prior probability of being right of, say, 10%. This probability rises automatically, but slowly, with time.

Then anyone except the authors can attempt to falsify the paper. At each failed attempt at falsification the probability of the paper's being right is increased in the appropriate Bayesian way. When that probability rises above, say, 70% the authors' names are published and they can use the paper to gain tenure, promotion, Nobel prizes and so on.
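
As a rough sketch of how that arithmetic might go - the 10% prior and the 70% threshold are the figures suggested above, while the assumption that a given test had a 50% chance of exposing a wrong paper is my own made-up number, and the slow automatic rise with time is left out:

  def update_after_failed_falsification(p_right, detection_power=0.5):
      # Bayesian update when an attempt to falsify the paper fails.
      # detection_power is the assumed probability that the attempt would
      # have exposed the paper had it actually been wrong.
      p_fail_if_right = 1.0                    # a correct paper always survives the test
      p_fail_if_wrong = 1.0 - detection_power  # a wrong paper may survive by luck
      survived = p_fail_if_right * p_right
      return survived / (survived + p_fail_if_wrong * (1.0 - p_right))

  p = 0.10                                     # the paper's starting probability of being right
  for attempt in range(1, 10):
      p = update_after_failed_falsification(p)
      print(f"after failed falsification attempt {attempt}: P(right) = {p:.2f}")
      if p > 0.70:                             # publication threshold
          print("authors' names published")
          break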

If an attempt at falsification succeeds then the paper is moved into a proved-false category and the authors may choose to have their names published, or not.

If no attempts are made to falsify the paper, and thus its time-dependent probability of being right rises to a high level all on its own, the paper is moved sideways into a maybe-interesting-but-not-falsifiable-so-not-science category. Any subsequent attempt at falsification (for example when technology improves) moves it back into the mainstream.

And what do the falsifiers get out of it? Their successful and unsuccessful falsifications are automatically added to their list of publications with a high rank - high, because falsehood is the only certainty in science.

Almost all of this could be automated by any competent web programmer.

Nullius in verba, as the Royal Society says.

Sunday, 12 April 2015

StarStack


This is a cellphone photograph of the Horsehead Nebula, which lies just below Alnitak, the eastern-most star of Orion's Belt.  It was made by the BBC programme Stargazing Live on 20 March 2015.  No telescopes were involved.

The full image (click this link) is 40MiB, which is rather high-resolution for a cellphone.  So how was it done?

Back in the age of Carl Sagan and Patrick Moore, astronomers would point a telescope at a patch of sky and open a camera shutter.  The telescope would be on an equatorial mount driven by a clock, so the star images would stay in the same place on the photographic film as the Earth rotated under the sky.  Stars - and, even worse, nebulae such as the one depicted above - don't give off a lot of light at this distance, so the astronomers waited for hours before closing the shutter.  Sometimes they would even come back the next night and the next for weeks, opening the shutter each time to catch a few more photons.  The result was an average - enough light had hit the film to form an image, but it had all been smeared out by atmospheric interference and any slight shake of the telescope.

Then Richard Gregory made the exquisitely beautiful invention of photographing the heavens through a previous negative of the same starfield.   A moment's thought shows that this will tend to cancel out the random noise, while emphasising the true data.  The trick can be repeated as often as desired.  (My daughter and I once went to a public lecture by Gregory at Bristol University.  He expounded more wonderful ideas in an hour than most people have in their entire lives.  And just think of the skill required to hold both an eight-year-old child and a forty-one-year-old engineer and mathematician equally entranced by those ideas.)

However, in contrast to the original smeary photographs and also Gregory's, the picture above was only exposed for, perhaps, one fiftieth of a second.   But it was not taken by just one cellphone - hundreds of viewers of the programme had gone out and taken a picture of the same patch of sky with their phones and cameras, and had then uploaded them to the programme's website.  Each individual picture was just a black rectangle - not enough starlight had gone through the lens to make an image that could be seen.  But some had gone through, and registered in the camera's pixels as a slightly less-dark patch of black.

All the images were then stacked.  A computer first matched them up by making sure that the centres of the prominent stars were all in the same place, and then added up the slightly-less-black bits to make the picture.  Of course the pixels in all the cameras were not in the same place relative to the stars, which means that each camera pixel could be split into thousands of final-image pixels; that is what gives the fabulous resolution, a tiny bit of which you see above.
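
Here is a minimal sketch of that stacking step, under my own simplifying assumptions (integer shifts only, FFT cross-correlation standing in for matching up the star centres, and synthetic frames rather than real uploads); a real stacker also handles rotation, scale and sub-pixel resampling:

  import numpy as np

  def integer_shift(reference, frame):
      # Estimate the (row, col) shift that brings frame into register with reference.
      corr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(frame))).real
      peak = np.unravel_index(np.argmax(corr), corr.shape)
      return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

  def stack(frames):
      reference, total = frames[0], np.zeros(frames[0].shape)
      for frame in frames:
          total += np.roll(frame, integer_shift(reference, frame), axis=(0, 1))
      return total

  # Fake data: every frame has one bright star (used for registration), one star
  # far too faint to see in any single frame, sensor noise, and a random pointing offset.
  rng = np.random.default_rng(1)
  frames = []
  for _ in range(300):
      frame = rng.normal(0.0, 1.0, (64, 64))
      dr, dc = rng.integers(-5, 6, size=2)
      frame[20 + dr, 20 + dc] += 50.0   # bright registration star
      frame[45 + dr, 45 + dc] += 0.5    # faint star, invisible in one frame
      frames.append(frame)

  stacked = stack(frames)
  stacked[15:26, 15:26] = 0.0           # mask the bright star before looking for the faint one
  print("faint star emerges at:", np.unravel_index(np.argmax(stacked), stacked.shape))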

Long-exposure averaging loses information as the cost of getting enough light.  Stacking preserves and integrates the information, and gets enough light by - effectively - having a camera aperture the size of all the camera lenses added together.  And all those lenses see different atmospheric interference, have different lens errors, and have different noise patterns in their phones' image sensors, so those can all be compensated for as well.

The human race is a species on which the stars never set.  So let's make the Human Telescope.  Set up a website to which anyone anywhere in the world can upload any sky images that they have taken with any digital camera, phone or telescope.  The images will have a timestamp and a GPS location, and will be continually stacked by a computer in the background to give an exquisitely detailed evolving picture of the whole vault of the heavens.

The world would become a great spherical insect eye looking at every star, galaxy, planet and nebula all the time.  We would be automatically finding comets, supernovae and near-Earth asteroids.  We would never miss an astronomical trick.








Thursday, 19 March 2015

HonestJohn


We now have good encrypted means of communication that allow people to talk with each other in complete confidentiality.  But can we be confident that the messages we are receiving are true?  Making a message secure and ensuring that it is not a lie are independent problems.  We have only solved the first.

I just saw Buzz Aldrin on the TV (run with me here...) and thought, "He seems pretty lively for an oldster."  So I got out my phone and asked it how old he is.

"Buzz Aldrin is 85 years old," it said to me.  I am reasonably sure that it wasn't lying, and - if I were to go online - I could find multiple independent sources that would allow me to confirm his age.  Their independence assures me that 85 is almost certainly the correct answer.  It would require a conspiracy of Kennedy-was-assasinated-by-NASA-during-the-moon-landing proportions to think that all those sources of information had been got at and that I had been lied to.

And the answer 85 was given to me by a machine that understood my question.  No human being (apart from me) was involved.

So how about an online document checker that scans texts and tells you how likely they are to be true?  It would use the multiple-independent-sources technique to work up a score.  It could give you an overall statistic ("This is probably correct at a level of 93%."), and it could rate individual statements for both accuracy and uncertainty.

Thus if my document said:

Buzz Aldrin is 85.  My father was 31 years old when I was born.  The Moon is 6000 years old.

I'd get a correctness level of around 50% and an uncertainty level of around 30%.  (The first two statements are true and the third is false.  But the system would obviously have much more trouble verifying the middle statement than the other two.)
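
Here is a toy sketch of how such scores might be combined; the scoring rule and all the numbers below are my own assumptions, picked only so that they come out at roughly the levels just mentioned:

  from dataclasses import dataclass

  @dataclass
  class Statement:
      text: str
      support: float     # agreement across independent sources, 0..1 (0.5 = can't tell)
      confidence: float  # how much corroborating evidence was found at all, 0..1

  def document_score(statements):
      # Correctness: average agreement across sources; uncertainty: average lack of evidence.
      correctness = sum(s.support for s in statements) / len(statements)
      uncertainty = sum(1.0 - s.confidence for s in statements) / len(statements)
      return correctness, uncertainty

  checked = [
      Statement("Buzz Aldrin is 85.", support=1.0, confidence=0.95),
      Statement("My father was 31 years old when I was born.", support=0.5, confidence=0.1),
      Statement("The Moon is 6000 years old.", support=0.0, confidence=0.95),
  ]

  correctness, uncertainty = document_score(checked)
  print(f"This is probably correct at a level of {correctness:.0%}, "
        f"with uncertainty around {uncertainty:.0%}.")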

Your document would come back coloured:

Buzz Aldrin is 85. [coloured: believe]  My father was 31 years old when I was born. [coloured: needs further investigation]  The Moon is 6000 years old. [coloured: don't believe]

so you could see at a glance what to believe, what not to believe, and what needed further investigation.

Whenever you got a communication from a lawyer (or - if you are a suspicious sort - a lover) you could fire it at the system and see what colour it came back.

The system could crawl the web, paying particular attention to advertising copy, political manifestos, and estate agents' property descriptions.  Then - if you selected the TRUTH option in your browser - whenever you visited a website it would be pre-coloured for you so you could see if it was all lies...

Saturday, 28 February 2015

OnePhoneOneVote


As I have indicated elsewhere around here, I give E. M. Forster's number of cheers for democracy.  But the forthcoming UK General Election reminds us all of the lamentably intermittent and indirect democracy that we actually have.  On one day in five years we each get a 0.00166% choice of a person to represent all of our views.   If we each have - say - one view per week, that means that we have only 0.000006385% of a democracy.  Spitting in a puddle would give us a greater say.
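
For what it's worth, here is one plausible reconstruction of those two numbers; the figure of roughly 60,000 voters per constituency is my assumption, not something stated above:

  voters_per_constituency = 60_000     # assumed rough size of a UK constituency electorate
  weeks_per_parliament = 5 * 52        # one view a week for a five-year parliament

  say_per_election = 100.0 / voters_per_constituency      # about 0.00166 %
  say_per_view = say_per_election / weeks_per_parliament  # about 0.0000064 %

  print(f"share of the choice of one MP:      {say_per_election:.5f} %")
  print(f"share of democracy per weekly view: {say_per_view:.7f} %")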

Also, that tiny amount of choice cannot even be expended on candidates of any great merit.  If you were ever a student, think of the strange fellow students you used to know who were members of student branches of political parties, or who stood for election to the students' union.  They are the ones standing for Parliament now.  Did any of them seem normal to you then?  No.  I thought not. (With the possible exception of the Rag Strippagram Coordinator.)

However, it's no use moaning about it, just as it's no use voting about it.  We have to fix it.  And - if history teaches us anything - it teaches us that the only way to fix something is to create and to spread an appropriate technology to do the job.

So how about a very simple Democracy App for your phone?  It would work as follows: every time the Division Bell rang in the House of Commons or Lords your phone would beep and the motion would be displayed.  You would vote "Yes" or "No" - or go back to sleep.  The results would be added up and then displayed on a website, along with how the parliamentarians voted.
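
As a minimal sketch of that flow - every name, motion and tally below is made up, and none of the security plumbing discussed later is included:

  from collections import Counter
  from dataclasses import dataclass, field

  @dataclass
  class Division:
      motion: str                      # the text pushed to phones when the bell rings
      public: Counter = field(default_factory=Counter)
      parliament: Counter = field(default_factory=Counter)

      def public_vote(self, choice):   # "Yes" or "No"; abstainers just ignore the beep
          self.public[choice] += 1

      def summary(self):
          return (f"MPs: {self.parliament['Yes']} Yes / {self.parliament['No']} No;  "
                  f"public: {self.public['Yes']} Yes / {self.public['No']} No")

  # Tallies invented to match the three-to-two versus three-to-one example below.
  division = Division("That this House approves the Sprogwort (Labelling) Bill.")
  division.parliament.update({"Yes": 312, "No": 208})
  for choice in ["Yes"] * 40_000 + ["No"] * 120_000:
      division.public_vote(choice)
  print(division.summary())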

The big trick is not to try to make the electronic vote binding.  Instead separate the making of a decision from the mass expression of an opinion on it, while tying the two together very tightly in time.  That way the self-interested and self-perpetuating decision makers get no say in whether the App is implemented  or not.

Things get interesting if a lot of people use the App and disagree with Parliament.  Firstly, journos would pick up on it and start to quote the stats: "MPs vote three-to-two in favour, but the public are three-to-one against."  That sort of thing.

Secondly, at the moment if 100,000 people sign a parliamentary petition, then the issue gets debated in Parliament.  Well, if the number of Democracy App users who disagreed with Parliament exceeded 100,000, the system could automatically generate a petition to reconsider the vote and invite all those app-users who disagreed to sign it.

Thirdly, the whole idea of direct participation and electronic referendums on every important matter would gain currency, and the technology could be bedded in and made robust before any actual use in direct decision making was considered.

There are a few technical requirements that the Democracy App would have to meet.  Both the server and the client sides would have to be open-source, so that people could check that there was no skulduggery going on in the counting.   The App would have to be a free download, and would have to carry no advertising to avoid any accusations of bias towards the better-off or bias towards piper-payers.  And finally votes would have to be encrypted in memory when the App was running and end-to-end encrypted and routed through Tor in transmission so that they were anonymous.

My whole Democracy App idea is necessarily expressed in the terms of the Parliament with which I am most familiar.  But it would clearly work in any legislature where votes are taken, and many of them are democracies in name only.  Citizens of such places are the ones who most require anonymity.

The system would also work at all levels of government from lowly parishes, through county councils and parliaments, right up to supranational bodies such as the EU and the UN.  It could also work in the corporate sphere, connecting votes by boards of directors directly to workers and shareholders. 

The needed security technology is pretty widely understood, as is the writing of apps themselves.

Go to it, software engineers!

Monday, 23 February 2015

Return after a short absence



This blog has lain fallow for a little while as I had to attend to other matters, most notably helping to run RepRapPro Ltd and moving house.  But soon I shall return...