Wednesday, April 6, 2011

Speeding up R computations

The past few days I've been going through some R code that I wrote last year, when I was preparing a massive simulation-based power study for some tests for multivariate normality that I've been working on. My goal was to reduce the time needed to run the simulation. I wasn't expecting great improvement, since I've always believed that the most common R functions are properly vectorized and optimized for speed. Turns out I was wrong. Very wrong.


The first thing that I did was to replace all parentheses ( ) with curly brackets { }. I was inspired to do so by this post (and this one, via Xi'an's Og) over at Radford Neal's blog. As he pointed out, code that uses parentheses is actually slower than the same code with curly brackets:

> system.time( for(i in 1:1000000) { 1*(1+1) } )
   user  system elapsed 
  1.337   0.005   1.349 
> system.time( for(i in 1:1000000) { 1*{1+1} } )
   user  system elapsed 
  1.072   0.003   1.076 

Similarly, you can compare a*a and a^2:

> system.time( for(i in 1:10000000) 3^2 )
   user  system elapsed 
  5.048   0.028   5.088 
> system.time( for(i in 1:10000000) 3*3 )
   user  system elapsed 
  4.721   0.024   4.748 

So, a^2 is slower than a*a. This made me wonder, are there other built-in R functions that are slower than they ought to be?

One thing that I found very surprising, and frankly rather disturbing, is that mean(x) takes roughly ten times as long to calculate the mean value of the 50 real numbers in the vector x as the "manual" alternative sum(x)/50:


> x<-rnorm(50)
> system.time(for(i in 1:100000){mean(x)})
   user  system elapsed 
  1.522   0.000   1.523 
> system.time(for(i in 1:100000){sum(x)/length(x)})
   user  system elapsed 
  0.200   0.000   0.200 
> system.time(for(i in 1:100000){sum(x)/50})
   user  system elapsed 
  0.167   0.000   0.167 
> system.time(for(i in 1:100000){ overn<-rep(1/50,50); x%*%overn })
   user  system elapsed 
  0.678   0.000   0.677 
> overn<-rep(1/50,50); system.time(for(i in 1:100000){ x%*%overn })
   user  system elapsed 
  0.164   0.000   0.164 

I guess that the R Development Core Team have been focusing on making R an easy-to-use high-level programming language rather than on optimizing every function, but the poor performance of mean is just embarrassing.

Similarly, the var function can be greatly improved upon. Here are some of the many possibilities:


> x <- rnorm(50)
> system.time( for(i in 1:100000) { var(x) } )
   user  system elapsed 
  4.921   0.000   4.925 
> system.time( for(i in 1:100000) { sum((x-mean(x))^2)/{length(x)-1} } )
   user  system elapsed 
  2.322   0.000   2.325 
> system.time( for(i in 1:100000) { {sum(x*x)-sum(x)*sum(x)/length(x)}/{length(x)-1} } )
   user  system elapsed 
  0.736   0.000   0.737 
> system.time( for(i in 1:100000) { {sum(x*x)-sum(x)*sum(x)/50}/49 } )
   user  system elapsed 
  0.618   0.000   0.618 

> system.time( for(i in 1:100000) { sx<-sum(x); {sum(x*x)-sx*sx/50}/49 } )
   user  system elapsed 
  0.567   0.000   0.568

I changed all the uses of mean in my code to "sum/n" instead (and avoided using var entirely) and found that this sped things up quite a bit.
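
If you don't want to clutter your code with sum(x)/length(x) expressions everywhere, the same tricks can be wrapped in small helper functions. A minimal sketch (these helpers are just illustrations, not code from my simulation study):

fastmean <- function(x) sum(x)/length(x)

fastvar <- function(x) {
  # One-pass shortcut formula; somewhat less numerically stable than var(x),
  # so use it only for well-scaled data.
  n <- length(x)
  sx <- sum(x)
  (sum(x*x) - sx*sx/n)/(n - 1)
}

x <- rnorm(50)
c(mean(x), fastmean(x))   # same result (up to rounding error)
c(var(x), fastvar(x))     # same result (up to rounding error)

Note that the function calls themselves carry some overhead, so writing sum(x)/length(x) inline is still slightly faster than calling fastmean(x).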

Another trick for speeding up your computations is to preallocate the vectors that you fill within a loop, that is, to create them with the right number of elements from the start. While

a<-NA
for(j in 1:100) a[j]<-j 

works just fine, it is actually quite a bit slower than

a<-rep(NA,100)
for(j in 1:100) a[j]<-j

You could create a in other ways as well of course, for instance by a<-vector(length=100). Here are the numbers:


> system.time( for(i in 1:100000) { a<-NA; for(j in 1:100) a[j]<-j })
   user  system elapsed 
 37.383   0.092  37.482 
> system.time( for(i in 1:100000) { a<-rep(NA,100); for(j in 1:100) a[j]<-j })
   user  system elapsed 
 25.866   0.065  25.936 
> system.time( for(i in 1:100000) { a<-vector(length=100); for(j in 1:100) a[j]<-j })
   user  system elapsed 
 25.517   0.022  25.548

In my case, I'd been a bit sloppy with creating the vectors in my loops in the proper way, so I changed this in my code as well.

In my simulation study, I simulate multivariate random variables, compute some test statistics and use these to estimate the powers of the normality tests against various alternatives. After making the changes mentioned above, I compared the performance of my old code to that of the new code, for 1000 iterations of the procedure:

> system.time( source("oldCode.R") )
   user  system elapsed 
548.045   0.273 548.622 
> system.time( source("newCode.R") )
   user  system elapsed 
 93.138   0.002  93.194

The improved code is almost 6 times faster than the old code. When you do ten million or so iterations, that matters. A lot.

In conclusion, it's definitely possible to speed up your code significantly if you know of the pitfalls of R. I suspect that I'll be obsessed with finding more pitfalls in the next few weeks, so I'd be thankful for any hints about other weaknesses that R has.

It should probably be mentioned that R is really fast when things are properly vectorized. Last year, a coworker that uses Matlab challenged me to perform a number of matrix computations faster in R than in Matlab. To his great surprise, R won.

As a final remark, I'm now facing a bit of a dilemma. Should I write readable code, a^6, or fast code, a*a*a*a*a*a?

Update: looking to speed up your R computations even more? See my posts on compiling your code and parallelization.

Tuesday, February 15, 2011

Narcolepsy and the swine flu vaccine

Earlier this year the Finnish National Institute for Health and Welfare published a report about the increased risk of narcolepsy observed among children and adolescents vaccinated with Pandemrix®. In short, the conclusion was that the swine flu vaccine seemed to have had an unexpected side effect: the risk of narcolepsy, a sleep disorder, was larger for vaccinated children in the 4-19 year age group than for unvaccinated children in the same age group.


Steven Novella at Science-Based Medicine wrote a great piece about this last week, discussing how this should be interpreted. I'm not going to go into a discussion about the findings themselves, but I would like to discuss the following part of the press release:

In Finland during years 2009–10, 60 children and adolescents aged 4-19 years fell ill with narcolepsy. These figures base on data from hospitals and primary care, and the review of individual patient records by a panel of neurologists and sleep researchers. Of those fallen ill, 52 (almost 90 percent) had received Pandemrix® vaccine, while the vaccine coverage in the entire age group was 70 percent. Based on the preliminary analyses, the risk of falling ill with narcolepsy among those vaccinated in the 4-19 years age group was 9-fold in comparison to those unvaccinated in the same age group.

Sceptical commenters on blogs and forums have questioned whether a 9-fold increase in risk really was observed. Here's the reasoning:

The estimated risk within a group is (the number of observed cases of the disease)/(size of the group). That is,

Risk for vaccinated child: 52/(n*0.7) = 1/n * 52/0.7
Risk for unvaccinated child: 8/(n*0.3) = 1/n * 8/0.3

where n is the number of children in the 4-19 age group.

The relative risk, i.e. the risk increase for the vaccinated children, is therefore

(52/0.7)/(8/0.3) = 2.79.

Hang on a minute. 2.79? If a 9-fold increase in risk was observed the relative risk should be 9! It seems that the Finnish epidemiologists made a mistake.

...or did they?

Not necessarily. When analyzing this data, we need to take time into account. The report itself is only available in Finnish, but using Google Translate I gathered that the unvaccinated group was studied from January 2009 to August 2010, whereas the vaccinated individuals were studied for eight months from the date of vaccination. In other words, the unvaccinated group had a longer time span in which they could fall ill.

That means that in order to calculate the relative risk, we need to divide the number of cases by the number of months that the groups were studied, to get the risk per month. That eliminates the time factor. After doing this, the relative risk becomes

((52/8)/0.7)/((8/20)/0.3)=6.96.

That's higher, but still not 9. Well, to complicate things a bit it seems that an individual was considered to be a part of the unvaccinated group until the date of vaccination, making the calculations a bit more difficult. When that is taken into account, along with other difficulties that no doubt occur when you have the actual data at hand, the relative risk probably becomes 9.
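
For those who want to check the arithmetic, here are the two calculations above in R:

# Relative risk ignoring follow-up time:
(52/0.7)/(8/0.3)
# [1] 2.785714

# Relative risk per month at risk: 8 months for the vaccinated group,
# 20 months (January 2009 to August 2010) for the unvaccinated group:
((52/8)/0.7)/((8/20)/0.3)
# [1] 6.964286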

The full report is not yet available, so I can't say how close the above approach is to the one that was actually used in the analysis. Nevertheless, I hope that this post can help shed some light on the statistics behind the statement about a 9-fold increase.

A problem with this approach is that the number of months during which the unvaccinated group was studied might affect the results, just as in the shark attack example that I wrote about last week. Changing the time span for the unvaccinated group to, say, January 2008 to August 2010 does not, however, change the conclusion in this case. The analysis seems to be pretty robust to the length of time during which the control group was studied.

WHO issued some comments regarding the Finnish study that are well worth reading.

Thursday, February 10, 2011

Sharks! Sharks!

The International Shark Attack File, or ISAF for short, recently published their 2010 worldwide shark attack summary and the findings have been reported by international media the last couple of days. The main message has been that the number of unprovoked shark attacks last year, 79, was larger than usual.

The question that is left unanswered in the articles that I've read is "just how unusual is 79 shark attacks in a year?" Let's find out!


Assume that a trial is performed n independent times (where n is large) and that at each trial the probability p of some given event is small. If X is the number of trials in which the event occurs, then X will be approximately Poisson distributed. This means that the probability that X equals some value k will be

P(X=k) = m^k/k! * e^(-m)

where k!=k*(k-1)*(k-2)*...*3*2*1 and m=n*p is the average number of times that the event will occur. This fact has not only been seen empirically, but can also be proved using "basic" tools of probability theory.
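
In R this probability is given by the dpois function; as a quick illustration, with the arbitrary values m=5 and k=3:

m <- 5; k <- 3
m^k/factorial(k)*exp(-m)   # the formula above
dpois(k, lambda = m)       # R's built-in Poisson probability function
# both give 0.1403739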

Now, a large number of people, n, spend time in the sea each year, and for each such person there is a small probability, p, that the person is attacked by a shark. The people are more or less independent, and therefore we can argue that the number of shark attacks in a year should be (at least approximately) Poisson distributed.

For the mathematically minded, I should probably point out that I'm aware that the shark attack probabilities p_i differ between different areas. This does not really contradict the assumption that shark attacks follow a Poisson process, as we can view the global shark attacks as the union of (essentially) independent Poisson processes with different intensities.

If we want to calculate the probability of a certain number of shark attacks, we now need to estimate the average number of shark attacks in a year, m. Well, from the ISAF 2000-2010 statistics we see that there's been an average of 71.5 shark attacks annually, or an average of 63.6 if we only look at the 2000-2009 period. Let's assume that the intensity of shark attacks has been constant throughout the last eleven years and that deviations from the mean are random and not due to some trend. In that case we can use the above averages as our estimates of m.

Using the estimate m=71.5 we get that the probability of at least 79 shark attacks in a year is 0.17, which means that we should expect more than 79 shark attacks roughly once in every six years. In the last eleven years there have been two such years, 2000 (80 attacks) and 2010 (79 attacks), which is more or less exactly what we would expect.

If we instead use the lower estimate m=63.6 we get that the probability is 0.026, which would mean that we can expect at least 79 shark attacks once in 38 years.

Were we to use the 2001-2009 data only, our estimate would be m=55.6 and the estimated probability of at least 79 shark attacks would be as small as 0.001. In this scenario, 2010 would have been a one-in-a-thousand extreme when it comes to shark attacks!
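
Tail probabilities like these are easily computed in R with the ppois function. A quick sketch using the three estimates above (the exact values depend slightly on whether "at least 79" is taken to mean 79 or more attacks, or more than 79):

m <- c(71.5, 63.6, 55.6)    # estimated yearly means
1 - ppois(78, lambda = m)   # P(X >= 79) for each estimate
1 - ppois(79, lambda = m)   # P(X >= 80), i.e. more than 79 attacks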

There's actually a valuable lesson here. By choosing different years to include when calculating our estimate of m we arrived at completely different conclusions. It was easy for us to do in this example, and it's just as easy for anyone else to do when they want to present statistics. Lies, damn lies, ...

People tend to be afraid of sharks, and it is therefore interesting to note that out of the 79 shark attacks last year, only 6 were fatal. According to the CIA World Factbook, 56.6 million people died in 2010. That means that sharks accounted for roughly 0.0000001 (about one in ten million) of all deaths! That's a pretty abstract number, but maybe the "What's most likely to kill you?" infographic can help you visualize it. It illustrates the risks of various causes of death, but unfortunately does not include shark attacks... The ultimate shark infographic is probably this one, from last year.

Actually, this global shark catch graph tells us that the sharks are the ones who need to fear the humans.

On a side note, I teach a course called Statistics for engineers this semester, and when I introduced the Poisson distribution two weeks ago I used shark attacks as an example of an application of the distribution (along with some engineering applications, of course). I was inspired by this paper, from which I also borrowed a data set on the number of points Wayne Gretzky scored in each game during his time with the Edmonton Oilers. Funnily enough, I gave the lecture on January 26, which was Gretzky's 50th birthday. When I introduced the binomial distribution I used the predictions of Paul the Octopus, the 2010 FIFA World Cup oracle, as an illustrative example; he too happened to have been born on January 26. This allowed me to move seamlessly on to the birthday problem ("what is the probability that there is at least one pair of people in this room who share a birthday?"), which we could solve using the binomial and Poisson distributions. Sometimes Fortuna is on your side...

As I have more than 120 students in the course, the probability of at least one pair of people sharing a birthday was ridiculously close to 1.
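
If you want to check this yourself, R has a built-in pbirthday function, and the Poisson reasoning above gives essentially the same answer:

n <- 120
pbirthday(n)                   # probability of at least one shared birthday
lambda <- choose(n, 2)/365     # expected number of birthday-sharing pairs
1 - dpois(0, lambda)           # Poisson approximation of the same probability
# both are equal to 1 for all practical purposes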

Thursday, February 3, 2011

Art as statistics / statistics as art

Have you ever felt that you would love to have a classic painting hanging on your wall, but that it just isn't scientific enough? After all, you wouldn't want people to think that you're anything less than a completely rational scientifically minded person.

Well, fret no more, Arthur Buxton's van Gogh visualization might be the solution to your problem!

Buxton's pie charts show the percentages of different colours used in different van Gogh paintings. It's art as statistics!

Mario Klingemann uses pie charts in a similar way to give famous paintings new life with his "pie packed" pictures. Here are Michelangelo's Creation of Adam and Vermeer's The Girl with a Pearl Earring:



The principle is the same here as with Buxton's pie charts - each pie shows the percentage of different colours in the area that it covers. It's statistics as art!

And as I'm writing about pie charts that describe a picture, I can't help but bring up this old XKCD picture:



It's funny because it's true. Incidentally, XKCD also did pie charts relating to some of the old masters.

The LoveStat blog recently wrote about both Buxton and Klingemann. My main reason for writing this post is to remind myself that I still haven't framed the van Gogh posters I bought in Amsterdam four years ago.

Monday, December 20, 2010

The mathematics of a beat machine

2010 has been Swedish pop star Robyn's year. She's released a three-part album called Body Talk, been nominated for countless awards and broken new ground with her 3D music video with Twitter integration.

One of my favourite tracks on the second installment in the Body Talk series is We dance to the beat, the lyrics of which describe how we dance to the beat of all sorts of wonderful things, such as the beat of the continents shifting under our feet.

I'm a mathematician, so needless to say the line we dance to the beat of false math and unrecognized genius caught my attention. That line was one of the reasons that I listened to Body Talk pt. 2 over and over again (yes, I'm a bit of a geek). Luckily, the album is nothing short of amazing, just like parts 1 and 3, so the listening pleasure was far greater than what I would get simply from hearing a pop star mentioning mathematics.

In what must surely be a future textbook example of synchronicity, the words false math had been stuck in my head for a couple of days when the guys from the digital production company DinahMoe contacted me. As it turned out, We dance to the beat was in need of some real mathematics.


DinahMoe were developing a first-of-its-kind interactive music experience in collaboration with Blip Butique. Released a week ago, it allows users to remix We dance to the beat and create a music video of sorts by combining sounds and visuals on the interactive beat machine site. The users search through a grid of video/sound clips and choose a combination of four videos and four sound clips. They can then interact with the clips to manipulate both audio and video, in perfect sync with the beat.

Users can share their remixes on Twitter or Facebook and look at the ever-growing seamless stream of remixes created by others. Every now and then the stream of remixes is interrupted by pre-programmed mixes with footage of Robyn, making the stream experience even cooler.


So, where does the mathematics enter into all of this? Well, there are 14 basic sound clips in the interactive beat machine and the creative team behind it wanted to construct a grid where each cell in the grid contained one clip. The users should be able to find all possible combinations of 4 out of these 14 clips by looking at a subgrid with four cells (as in the picture above). The question, then, is how large the grid needs to be in order to fit all combinations and how the clips should be arranged so that no clip is ever paired with itself.

In order to answer this, we need to find out in how many ways we can choose 4 out of 14 clips. We could do this by listing all possible combinations (clips 1, 2, 3, 4, clips 1, 2, 3, 5, clips 1, 2, 3, 6, ... clips 11, 12, 13, 14), but that would (a) take too long and (b) bore us to death.

Not having a death wish, I decided to make use of some tools from the mathematical branch of combinatorics. It just so happens that a classical combinatorial problem is the question "in how many different ways can we choose k out of n objects?" (here k and n are whole numbers and k is less than or equal to n).

Luckily, others have solved this problem before us, and we can use their solution right away. The answer to the question can be written as what mathematicians arrogantly refer to as a "simple" formula, using what is called the binomial coefficient, written (n choose k). The formula is

(n choose k) = n!/(k!*(n-k)!) = [n*(n-1)*...*(n-k+1)] / [k*(k-1)*...*2*1],

which in this case, where n=14 and k=4, becomes (14*13*12*11)/(4*3*2*1)=1001. That is, there are 1001 possible ways to choose 4 out of 14 clips.
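
In R, this number is given directly by the choose function:

choose(14, 4)   # the number of ways to pick 4 clips out of 14
# [1] 1001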

So, we need to create a grid where all 1001 combinations of four clips can be found by looking at a subgrid of size 2*2, that is, a subgrid with two rows and two columns. The clips must be placed in such a way that, for instance, clip number 1 is never found next to itself in the big grid - when the user looks at four cells there must be four different clips placed in them.

Consider the following small grids. In each of them there are two 2*2 subgrids that the user can look at.

In the grid to the left, the combinations of clips 1, 2, 4, 5 (the four cells to the left in the grid) and clips 2, 3, 5, 6 (the four cells to the right in the grid) are found. In the middle grid, the combinations are 1, 2, 4, 5 and 1, 2, 5, 6. In the grid to the right however, the combination 1, 1, 2, 4 is found, which isn't permitted. That's the kind of subgrids that we want to avoid.

Before we try to create the grid, we need to know how large it should be. How many subgrids of size 2*2 are there in a grid with a rows and b columns? Let's look at some examples:


From examples like this we can deduce that there is a general formula for the number of 2*2 subgrids: in a grid with a rows and b columns there are (a-1)*(b-1) subgrids of size 2*2. We need a grid with at least 1001 such subgrids in order to be able to fit all 1001 combinations in the grid.

It seemed reasonable to have a grid where a=b, that is, a grid with the same number of rows and columns. Then (a-1)*(b-1)=(a-1)*(a-1)=(a-1)^2=a^2-2*a+1. When a=32 we get that a^2-2*a+1=961 and when a=33 we get that a^2-2*a+1=1024, which is larger than 1001. Thus 33 is the smallest value of a that results in a grid where all combinations of 4 out of the 14 clips can be placed.
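
You can of course let R do this arithmetic for you:

choose(14, 4)   # 1001 combinations that need to fit into the grid
(32 - 1)^2      # a 32*32 grid has only 961 subgrids of size 2*2
(33 - 1)^2      # a 33*33 grid has 1024, enough room for all 1001 combinations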

In summary, we've found that there are 1001 combinations of 4 out of 14 clips and that we need a grid with at least 33 rows and 33 columns if we want all of these combinations to be found in 2*2 subgrids. Furthermore, we know what kind of subgrids we wish to avoid.

At this point, placing the clips in the grid is like solving a massive 33*33 sudoku with a strange set of rules. I did a bit of programming, using my favourite scripting language R, to let my computer do this for us. The resulting grid is one of the hidden cogs behind the magic of the beat machine.
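
To give a flavour of what that entails, here is a small sketch (not the code that was actually used for the beat machine, and ignoring the wrap-around described below) of a function that checks whether a candidate grid follows the rules:

# Check a candidate grid (a matrix with entries 1-14): every 2*2 subgrid must
# contain four different clips, and all 1001 combinations must occur somewhere.
check_grid <- function(g) {
  n_sub <- (nrow(g) - 1) * (ncol(g) - 1)
  combos <- character(n_sub)                     # preallocated for speed
  k <- 0
  for (i in 1:(nrow(g) - 1)) {
    for (j in 1:(ncol(g) - 1)) {
      cell <- c(g[i, j], g[i, j + 1], g[i + 1, j], g[i + 1, j + 1])
      if (any(duplicated(cell))) return(FALSE)   # a clip paired with itself
      k <- k + 1
      combos[k] <- paste(sort(cell), collapse = "-")
    }
  }
  length(unique(combos)) == choose(14, 4)        # were all 1001 combinations found?
}

A randomly filled 33*33 matrix will of course fail this check almost immediately; the real work lies in searching for an arrangement that passes it.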


(To complicate things a bit, the grid we used is "infinite" in the sense that the left border of the grid fits with the right border, so that if you go 34 steps to the right or left, or 34 steps up or down you will come back to the same 4 clips that you started with.)

And then there is the matter of how to place the video clips in the grid... But that's another story.

Robyn recently performed at the Nobel peace prize concert. DinahMoe make awesomeness happen with interactive sounds and adaptive music. Check out their website for more examples!

Friday, December 17, 2010

Facebook friendships and urbanization

Last Tuesday Paul Butler, an intern on Facebook's data infrastructure engineering team, posted a map visualizing the "locality of friendship" on Facebook. Butler used data on friendships between the 500 million Facebook users to create a stunning visualization of the world's largest social network. The blue lines represent friendship connections between cities, and the white dots are the cities themselves (literally glowing from the density of the friendships within them).


As interesting as the brightly shining cities and interconnecting lines are (think for a minute about what the lines represent!), many of us found the dark areas to be even more interesting. As is often the case, the most interesting questions came not from what we could see in the data set, but from what we couldn't see.

To that end, Thorsten Gätz produced a beautiful map where the Facebook friends map was put on top of a world map where areas with a high population density were marked in red. He described it as "[a] map which puts the connections of facebook users into context and shows where other social networks have the upper hand."


I am however not sure that population density is the right variable when trying to put the Facebook map into context. Parts of the third world are densely populated but not particularly urbanized. Urbanization should be highly correlated with social network usage, so perhaps that would be a more natural variable to look for in a comparison.

Ten years ago, NASA used data from meteorological satellites to produce a picture of Earth's city lights. Brighter areas are more urbanized, so this map should be useful when we try to put the Facebook map into context. Luckily, NASA's map uses the same projection as the Facebook map, so we can easily superimpose them on each other.


My graphical software skills are sadly lacking when compared to those of Gätz, but after trying my best I came up with this picture:


The red (and glowing) dots and lines are from the Facebook map whereas the blue parts come from NASA's map.

The conclusions that we can draw from this map differ somewhat from those drawn from Gätz's map. The most urbanized regions of Africa seem to coincide with the African regions visible on the Facebook map. Northeast Brazil, Russia and parts of Eastern Europe, the Middle East and China are either underrepresented or missing completely from the Facebook map. South Korea and Japan are clearly visible on the Facebook map, but even more so on the city lights map.

The picture that emerges when looking at the Facebook and urbanization maps together coincides with that of a world map of social networks that was recently published over at the Vincos blog, giving further support to the idea that urbanization is the right context for this map.


Dark areas of maps have always driven mankind to push further and explore the unknown. Today we can explore those dark areas from the comfort of our homes, with a little help from powerful computers and networks of satellites...

Revolutions and FlowingData drew my attention to Butler's Facebook map.

On a side note, it's also interesting to compare these pictures to the world connection density and city-to-city router connection maps.