My home page
Wednesday, 31 August 2016
Amazon now have their Dash button that allows you to buy a restricted range of goods from - surprise! - Amazon when something runs out. So you put the button on your washing machine, press it when the powder gets low, the button automatically does a buy-with-one-click over your home wifi, and a new pack arrives a day later.
But you can't set the buttons up to buy anything you like from Amazon, let alone from other suppliers. The button locks you in to products that may well not be the best deal, nor exactly what you want.
Clearly what's needed is a user-programmable button that you can set up to take any online action that you preset into it. Thus pressing the button might indeed do an Amazon one-click, or it might add an item to your Tesco online order, or it might boost your web-controlled central heating in the room where you are sitting, or it might just tweet that you are having your breakfast (if you feel that the world needs to know that on a daily basis).
Electronically, such a device would be straightforward. And - as a marketing opportunity - it is potentially huge. It would give people total control over what they buy and from whom, subsuming Amazon Dash as just one possibility among a much wider range. And in addition it could be used to carry out a vast number of non-buying online actions that are amenable to your pressing a button when you feel like it.
If I can find a spare afternoon, I might just design it and open-source the results...
Tuesday, 16 February 2016
Steven Pinker's famous book The Better Angels of our Nature posits four reasons for the decline in violence that has happened and is happening in the world: empathy, self-control, our moral sense, and reason. He explicitly (though only partially) rejects the idea that we are evolving to become more peaceful.
I am not sure (particularly given meme as well as gene copying) that evolution can be discounted as an explanation for the decline in violence.
Recall John Maynard Smith's hawks-and-doves example of an evolutionarily stable strategy. Suppose the payout or utility matrix is
| | hawk | dove |
| --- | --- | --- |
| hawk | -1, -1 | +0.5, -0.5 |
| dove | -0.5, +0.5 | +0.25, +0.25 |
What this says in English is that when two hawks meet they fight and each loses 1 unit of utility (the -1s top left) because of energy wastage, injury or death. When a hawk meets a dove the hawk gains +0.5 units of utility because the hawk can easily steal from the dove (the +0.5 top right) and the dove loses 0.5 (the -0.5). When a dove meets a hawk the reverse happens (bottom left). And when two doves meet they each gain 0.25 units because they don't fight and can cooperate (bottom right).
The resulting utility graph looks like this:
(I have chosen numbers that make the Nash equilibrium occur at zero utility for simplicity; this is not necessary for the argument that follows.)
Now suppose that one thing changes: technological advance makes weapons more deadly.
Note very carefully that better weapons is not the same thing as more weapons. The number of weapons always goes as the proportion of hawks (33% above) and is an output from, not an input to, the model.
With better weapons, when a dove meets a dove nothing is different because they didn't fight before and they don't now. When a hawk meets a dove the hawk gets the same profit as before because the dove surrendered all that it had before. So the numbers in the right hand column stay the same except for...
When a dove meets a hawk the dove may lose more (maybe it dies instead of merely being injured: the -0.75s). And when a hawk meets a hawk both lose disastrously because their better weapons mean greater injury and more death (the -1.5s). So the numbers in the left hand column get more negative:
| | hawk | dove |
| --- | --- | --- |
| hawk | -1.5, -1.5 | +0.5, -0.75 |
| dove | -0.75, +0.5 | +0.25, +0.25 |
and the utility graph changes:
Making weapons better at killing gives a society with fewer hawks - and so, fewer weapons; a society that is more peaceful.
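The equilibrium fractions are easy to check. Here is a minimal Python sketch (my own illustration, not part of the original argument; the function name and matrix layout are invented) that solves for the hawk fraction at which the expected payoffs of playing hawk and playing dove are equal:

```python
from fractions import Fraction as F

def hawk_share(matrix):
    """Mixed-strategy equilibrium share of hawks, p.

    matrix = ((hh, hd), (dh, dd)) holds the row player's payoffs:
    hawk vs hawk, hawk vs dove, dove vs hawk, dove vs dove.
    Solving p*hh + (1-p)*hd == p*dh + (1-p)*dd for p gives:
    """
    (hh, hd), (dh, dd) = matrix
    return F(dd - hd, hh - hd - dh + dd)

before = ((F(-1), F(1, 2)), (F(-1, 2), F(1, 4)))        # original payoffs
after  = ((F(-3, 2), F(1, 2)), (F(-3, 4), F(1, 4)))     # deadlier weapons

print(hawk_share(before))  # -> 1/3
print(hawk_share(after))   # -> 1/4
```

With the original payoffs the stable hawk fraction is 1/3 - the 33% mentioned above - and with the deadlier weapons it falls to 1/4: better weapons, fewer hawks.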
Monday, 1 February 2016
It is quite entertaining to listen in when my daughter (who's not the woman in the photo above) gets a scam telephone call. She sets herself two targets:
- To keep the scammer on the line as long as possible to waste their time and money, and
- To try to get the scammer's credit card or bank details.
But the problem is that all this wastes her time too.
Chatterbots have been around since I was programming computers by punching little holes in rectangles of cardboard. The first was Weizenbaum's ELIZA psychiatrist, which mimicked a non-directive therapist. It was completely brainless, but so strong is the human impulse to ascribe active agency to anything that talks to us that it was both interesting and fairly convincing to hold a typed conversation with.
And these days chatterbots are much more sophisticated. With near real-time speech recognition, voice synthesis that sounds like a proper human, and recursive sentence writers that never make a grammatical mistake, they can just about hold a real phone conversation. Just listen to the second recording here - the appropriate laughter from the robot is stunning.
So how about a phone app that you tap when you get a scam call? This app takes over the conversation, wasting the scammer's time for as long as possible and allowing you to get on with your life.
But it needn't end there. The app could transcribe the entire scam conversation and upload it. This would automatically compile a reference collection of scammers' scripts that anyone could google while they had someone suspicious on the line. Also the app could evolve: conversational gambits that led to longer calls could be strengthened, and the new weights incorporated in upgrades, so the app would get better and better at keeping the hapless scammer hanging on the line. Finally, the app could take the record of the things that the scammers themselves say and add variations on that to its repertoire of responses.
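One way that evolution might work - everything here, gambits and numbers alike, is invented purely for illustration - is a simple weighted-choice scheme: gambits are picked in proportion to their weights, and a gambit's weight grows with the call time it buys.

```python
import random

# Hypothetical opening weights; each gambit starts off equally likely.
gambits = {
    "Sorry, could you repeat that?": 1.0,
    "Let me just go and find my card...": 1.0,
    "Which company did you say you were from?": 1.0,
}

def pick(weights):
    """Choose a gambit at random, in proportion to its current weight."""
    return random.choices(list(weights), weights=list(weights.values()))[0]

def reinforce(weights, gambit, seconds_kept_on_line, rate=0.1):
    """Strengthen a gambit in proportion to the call time it bought."""
    weights[gambit] += rate * seconds_kept_on_line

# Sketch of one call: try a gambit, then reward it with the
# (pretend) two minutes of scammer time it wasted.
line = pick(gambits)
reinforce(gambits, line, 120)
```

Over many calls the time-wasting gambits accumulate weight and get chosen more often - exactly the strengthening described above.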
Already there are online lists of source numbers for scammers (though most disguise their origins, of course). When the app found that your phone account was coming to the end of the month and that you had unused free minutes, it could dial up those scammer numbers at three in the morning and see how many crooks' credit card and bank details it could gather and post online...
Tuesday, 29 December 2015
The very wonderful Ben Goldacre rightly has it in for (among others) journalists who misrepresent scientific research to generate a completely artificial scare story. Think no further than the MMR scandal in journalism for example, in which newspapers killed many children. (And in which one journalist, Brian Deer, exposed the original fraud.)
Often the problem is not the original paper describing the research, but the press release put out by the authors' institution (whose PR departments are usually staffed by more journalists). Of course the authors are at fault here - the PR department will always give them a draft of any release to check, and they should be savage in removing anything that they think may cause distortions if reported. But authors are not disinterested in fame and publicity.
It seems to me that there is a simple solution. The normal sequence of events is this:
- Do experiments,
- Write a paper on the results,
- Submit it to a journal,
- Correct the paper according to criticisms from the journal's reviewers,
- See the paper published,
- Have the PR people write a press release based on the paper,
- Check it,
- Send it out, and
- See the research - or a distortion of it - appear in the press.
But simply by moving Item 6 to between Items 2 and 3 - that is, by having the press release sent out with the paper to the journal's reviewers - a lot of trouble could be avoided. The reviewers have no interest in fame and publicity (unlike the authors and their institution), but they are concerned with accuracy and truth. They would correct the press release along with the paper itself, and in particular they would be compelled to add a list at the start of the press release stating in plain terms what the paper does and does not say.
The list would look something like:
- This paper shows that rats eating sprogwort reduced their serum LDL (cholesterol) levels by a statistically significant amount.
- This paper does not show that sprogwort reduces cardiovascular disease in rats.
- This paper does not show that sprogwort reduces cardiovascular disease in humans.
- Sprogwort is known to be neurotoxic in large doses; the research in this paper did not study that at all.
Then everyone would quickly discover that the following morning's headline in the Daily Beast that screams
SPROGWORT CURE FOR HEART ATTACKS
was nonsense. In particular other journalists would know, and - of course - there's nothing one journalist loves more than being able to show that a second journalist is lying...
Thursday, 3 December 2015
BBC Radio 3 has a competition in the mornings where they play an excerpt from a piece of music backwards and ask listeners to identify it. Occasionally I get it and often I don't, but more interesting is the fact that music doesn't work backwards. It just doesn't sound right, or even good.
This is rather surprising. The rules of melody and harmony are like the laws of quantum mechanics - they are indifferent to the direction of time.
So if we watch (if we could) an atom absorbing a photon and one of its electrons consequently changing its energy, the reverse process whereby the atom emits a photon looks (if it could) just like a film being played backwards. The same thing happens at the human scale - if you watch a film of two moving snooker balls bouncing off each other you can't tell if the film is being played backwards or forwards. But if you watch a film of the first break in a snooker game, where the white strikes a triangle of reds, it is immediately obvious if the film is backwards or forwards. For a load of red balls to converge and, after multiple collisions, to come to rest in a neat triangle whilst ejecting a white ball from their midst would be so monstrously improbable that we would know that that film was backwards more certainly than we know that we will not be eaten by a griffin.
A prog-rock cliché that is most effective is to start a piece with white noise (sometimes bandpass filtered to sound like the wind), or with random drum beats and cymbal clashes, and for all that gradually to resolve into a structured tune. Think Pink Floyd. And at the end of most musical pieces of all genres there is a resolution from dissonance to consonance, and similar resolutions are placed throughout because we find them pleasing. So much so, indeed, that if a piece ends with a fade-out rather than a resolution we feel cheated and think that the players and composer (or merely the DJ) have been lazy.
A musical resolution is like all the snooker balls coming together improbably to form a triangle. A high-entropy state has surprisingly turned into a low-entropy state - the very reverse of what we expect from our intuitive feeling for the Second Law. The reason that backwards music sounds bad is because backwards music is like all of nature - its entropy increases. But we like forwards music, and it sounds appealing to us, because it reverses the normal flow - it flows away from decay and dissipation towards freshness and refinement.
Music is backwards time.
Saturday, 24 October 2015
DNA was famously discovered in 1869 by Friedrich Miescher, just ten years after Darwin published the Origin of Species. But it wasn't until 1952 that it was demonstrated to be the key to heredity (in the T2 phage), and then its structure and modus operandi were deduced a year later.
In retrospect, its helical structure might have been anticipated (easy for me to say with hindsight, of course...). If you take any random shape and stack repeated copies of it, the result is a helix. The reason is that the top face of the shape will be a fixed distance from the bottom, and at a fixed angle to it. Repeatedly copying something through a fixed distance and turning it through a fixed angle always gives a helix.
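That claim is easy to demonstrate numerically. Here is a small Python sketch of my own (the DNA-ish step values - roughly 36 degrees of twist and a 0.34 nm rise per base pair - are assumed for illustration) that applies one fixed rotate-and-translate step over and over:

```python
import math

def screw(point, theta, rise):
    """One step of a screw motion: rotate by theta about the z-axis,
    then translate by rise along it. Stacking identical copies of any
    rigid shape amounts to repeating such a step."""
    x, y, z = point
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y, z + rise)

p = (1.0, 0.0, 0.0)
for _ in range(10):                          # ten copies, 36 degrees each
    p = screw(p, math.radians(36), 0.34)

# After one full turn the point is back over its start, 3.4 units
# higher: it has stayed a fixed distance from the axis throughout.
print(p)
```

Every copy sits at the same distance from the screw axis while advancing by a fixed amount along it - which is precisely a helix.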
However, that helix must be very inconvenient for cells, as anyone who has had to deal with a tangle of string knows. As DNA is copied it unzips down the middle then zips itself back up again a bit downstream. Meanwhile the copy twists away forming a new chain. My late colleague Bill Whish once worked out that the molecule has to spin at 8,000 RPM while it is doing this to get the job done in time. Think of your car engine spinning at that speed and spewing out a twisted strip as it did so. Now present yourself with the problem of stopping that strip from getting hopelessly knotted up, and also not tangled with its parent DNA. It's fortunate that all this is happening in an extremely viscous medium - water. At the molecular scale water is like very thick treacle indeed. Our intuition from the engine-and-strip analogy is led astray by our instinct for flexibility, mass and inertia at human scales. Down at the scale of the DNA, inertia is negligible, and everything stays where it is put.
But what if DNA were not a helix, but were as straight as a ladder? Many of these problems would go away. It should not be too difficult to devise a molecular ladder in which the angle between one rung and the next is zero. If in no other way, it could be achieved by bonding each twisted unit to its stereo-isomer with the opposite twist, forming a single untwisted rung.
The ribosome would have to be re-engineered to accommodate straight, rather than twisted, transfer RNA. But people have already made artificial ribosomes that do things like working with four-base codons (all natural ribosomes work with three-base codons) so that should not be too difficult. The actual codon to amino-acid to protein coding could remain unaltered, at the start at least.
We would then have synthetic cells that might be able to reproduce much more quickly than natural ones, or work at much higher temperatures.
And the ladder DNA would be much easier to organise in two-dimensional sheets or three-dimensional blocks. We could then try to do parallel copying of the DNA rather than the serial copying that happens at the moment - a ladder-DNA sheet could make a copy by stamping it out, rather as a printing press stamps out a page. That should speed up reproduction even further, and a similar parallelisation could be attempted with the ribosome, making proteins all in one go, as opposed to chaining them together one amino acid at a time.
Life would get a lot faster...
Monday, 12 October 2015
Science is mostly right because it assumes that it is wrong. All other fields of human thought are mostly wrong because they assume that they are right.
Despite this, scientific publishing is not organised scientifically at all; instead it relies on the antic combination of the political and the quasi-religious that is peer review. Suppose some hapless junior academic in a fly-blown university physics department in Wyoming or Berkshire or somewhere has an explanation for why electron, muon and tau neutrinos change between these three types as they zip through you and everything else. The physicist writes a paper about it that gets sent to a bunch of experts - the peer reviewers - who decide if it is sensible or not. If it is sensible it gets published and the physicist stays in a job; if not, both it and the physicist are out on their ear.
That process is itself not sensible.
Since the start of the internet, I have not understood why we don't apply the scientific method to the publishing of science instead of relying upon the logical fallacy of argumentum ab auctoritate, which is what peer-review is.
Here's how it would work: authors post their paper online; at this stage there are no authors' names published, and the paper does not count as a publication to the physicist-sackers. It is assigned a prior probability of being right of, say, 10%. This probability rises automatically, but slowly, with time.
Then anyone except the authors can attempt to falsify the paper. At each failed attempt at falsification the probability of the paper's being right is increased in the appropriate Bayesian way. When that probability rises above, say, 70% the authors' names are published and they can use the paper to gain tenure, promotion, Nobel prizes and so on.
If an attempt at falsification succeeds then the paper is moved into a proved-false category and the authors may choose to have their names published, or not.
If no attempts are made to falsify the paper, and thus its time-dependent probability of being right rises to a high level all on its own, the paper is moved sideways into a maybe-interesting-but-not-falsifiable-so-not-science category. Any subsequent attempt at falsification (for example when technology improves) moves it back into the mainstream.
And what do the falsifiers get out of it? Their successful falsifications are automatically added to their list of publications with a high rank – high, because falsehood is the only certainty in science.
Almost all of this could be automated by any competent web programmer.
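For illustration only, here is one form the "appropriate Bayesian way" might take. The survival likelihoods are pure assumptions of mine, not part of the proposal: a correct paper is taken always to survive a falsification attempt, a wrong one to survive with some probability less than one.

```python
def update(prior, p_survive_if_wrong=0.5):
    """Bayes' rule after one failed falsification attempt.

    posterior = P(survive | right) * prior / P(survive),
    with P(survive | right) assumed to be 1 and P(survive | wrong)
    an assumed figure."""
    return prior / (prior + (1 - prior) * p_survive_if_wrong)

p = 0.10        # the paper's initial prior, as in the text
attempts = 0
while p < 0.70: # the threshold for publishing the authors' names
    p = update(p)
    attempts += 1

print(attempts, p)
```

Under these made-up numbers, five failed falsification attempts lift a paper from its 10% prior past the 70% naming threshold.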
Nullius in verba, as the Royal Society says.
Sunday, 12 April 2015
BBC Programme Stargazing Live on 20 March 2015. No telescopes were involved.
The full image (click this link) is 40MiB, which is rather high-resolution for a cellphone. So how was it done?
Back in the age of Carl Sagan and Patrick Moore astronomers would point a telescope at a patch of sky and open a camera shutter. The telescope would be on an equatorial mount driven by a clock, so the star images would stay in the same place on the photographic film as the Earth rotated under the sky. Stars, and even worse nebulae such as those depicted above, don't give off a lot of light at this distance, so they waited for hours before closing the shutter. Sometimes they would even come back the next night and the next for weeks, opening the shutter each time to catch a few more photons. The result was an average - enough light had hit the film to form an image, but it had all been smeared out by atmospheric interference and any slight shake of the telescope.
Then Richard Gregory made the exquisitely beautiful invention of photographing the heavens through a previous negative of the same starfield. A moment's thought shows that this will tend to cancel out the random noise, while emphasising the true data. The trick can be repeated as often as desired. (My daughter and I once went to a public lecture by Gregory at Bristol University. He expounded more wonderful ideas in an hour than most people have in their entire lives. And just think of the skill required to hold both an eight-year-old child and a forty-one-year-old engineer and mathematician equally entranced by those ideas.)
However, in contrast to the original smeary photographs and also Gregory's, the picture above was only exposed for, perhaps, one fiftieth of a second. But it was not taken by just one cellphone - hundreds of viewers of the programme had gone out and taken a picture of the same patch of sky with their phones and cameras, and had then uploaded them to the programme's website. Each individual picture was just a black rectangle - not enough starlight had gone through the lens to make an image that could be seen. But some had gone through, and registered in the camera's pixels as a slightly less-dark patch of black.
All the images were then stacked. A computer first matched them up by making sure that the centres of the prominent stars were all in the same place, and then added up the slightly-less-black bits to make the picture. Of course the pixels in all the cameras were not in the same place relative to the stars, which means that each camera pixel could be split into thousands of final-image pixels, which gives the fabulous resolution, a tiny bit of which you see above.
Long-exposure averaging loses information as the cost of getting enough light. Stacking preserves and integrates the information, and gets enough light by - effectively - having a camera aperture the size of all the camera lenses added together. And all those lenses see different atmospheric interference, have different lens errors, and have different noise patterns in their phones' image sensors, so those can all be compensated for as well.
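The effect is easy to simulate. Here is a toy Python sketch of my own - a one-dimensional "sky" with a single faint star, nothing to do with the programme's actual pipeline - showing that averaging many frames leaves the signal alone while shrinking the noise by the square root of the number of frames:

```python
import random

random.seed(0)
SIZE, FRAMES, STAR = 100, 400, 5.0

def frame():
    """One exposure: a faint star in pixel 50, buried under
    independent sensor noise of unit standard deviation."""
    return [STAR * (i == 50) + random.gauss(0, 1) for i in range(SIZE)]

frames = [frame() for _ in range(FRAMES)]

# Stacking: average the frames pixel by pixel. The star's signal is
# unchanged; the noise shrinks like 1/sqrt(FRAMES) - here by 20x.
stacked = [sum(col) / FRAMES for col in zip(*frames)]

print(max(stacked), stacked.index(max(stacked)))
```

In any single frame the star is indistinguishable from the noise; in the stack of 400 it stands out unmistakably at pixel 50.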
The human race is a species on which the stars never set. So let's make the Human Telescope. Set up a website to which anyone anywhere in the world can upload any sky images that they have taken with any digital camera, phone or telescope. The images will have a timestamp and a GPS location, and will be continually stacked by a computer in the background to give an exquisitely detailed evolving picture of the whole vault of the heavens.
The world would become a great spherical insect eye looking at every star, galaxy, planet and nebula all the time. We would be automatically finding comets, supernovae and near-Earth asteroids. We would never miss an astronomical trick.