Chapter 5 Performance

The people who say that “R is just always slow” are usually not great R programmers. It is true that writing inefficient R code is easy, yet writing efficient R code is also possible when you know what you’re doing. In this chapter, you will learn how to write R(cpp) code that is fast.

5.1 R’s memory management

See this chapter of Advanced R.

5.2 Early advice

5.2.1 NEVER GROW A VECTOR

Example computing the cumulative sums of a vector:

x <- rnorm(1e4)  # Try also with n = 1e5
system.time({
  current_sum <- 0
  res <- c()
  for (x_i in x) {
    current_sum <- current_sum + x_i
    res <- c(res, current_sum)
  }
})
#>    user  system elapsed 
#>    0.10    0.02    0.11

Here, at each iteration, you are reallocating a vector (of increasing size). Allocating memory takes time, just as computations do. This makes your code quadratic in the size of x (if you multiply the size of x by 2, you can expect the execution time to be multiplied by 4, for large sample sizes), whereas it should be only linear.

A good solution is to always pre-allocate your results (if you know the size):

system.time({
  current_sum <- 0
  res2 <- double(length(x))
  for (i in seq_along(x)) {
    current_sum <- current_sum + x[i]
    res2[i] <- current_sum
  }
})
#>    user  system elapsed 
#>    0.02    0.00    0.02
all.equal(res2, res)
#> [1] TRUE

If you don’t know the size of the results, you can store them in a list and merge them afterwards:

system.time({
  current_sum <- 0
  res3 <- list()
  for (i in seq_along(x)) {
    current_sum <- current_sum + x[i]
    res3[[i]] <- current_sum
  }
})
#>    user  system elapsed 
#>       0       0       0
all.equal(unlist(res3), res)
#> [1] TRUE

With recent versions of R (>= 3.4), you can also grow a vector efficiently:

system.time({
  current_sum <- 0
  res4 <- c()
  for (i in seq_along(x)) {
    current_sum <- current_sum + x[i]
    res4[i] <- current_sum
  }
})
#>    user  system elapsed 
#>       0       0       0
all.equal(res4, res)
#> [1] TRUE

Assigning to an element of a vector beyond the current length now over-allocates by a small fraction. The new vector is marked internally as growable, and the true length of the new vector is stored in the truelength field. This makes building up a vector result by assigning to the next element beyond the current length more efficient, though pre-allocating is still preferred. The implementation is subject to change and not intended to be used in packages at this time. (NEWS)

An even better solution would be to avoid the loop by using a vectorized function:

system.time(res5 <- cumsum(x))
#>    user  system elapsed 
#>       0       0       0
all.equal(res5, res)
#> [1] TRUE
x <- rnorm(1e7)
system.time(cumsum(x))
#>    user  system elapsed 
#>    0.06    0.00    0.06

As a second example, let us generate a matrix of uniform values (with a different maximum for every column):

n <- 1e3
max <- 1:1000
system.time({
  mat <- NULL
  for (m in max) {
    mat <- cbind(mat, runif(n, max = m))
  }
})
#>    user  system elapsed 
#>    0.61    0.20    0.81
apply(mat, 2, max)[1:10]
#>  [1] 0.999054 1.999917 2.999667 3.993280 4.994418 5.999435 6.994488 7.996564 8.988868
#> [10] 9.996602

So, we can either pre-allocate a list or a matrix:

system.time({
  l <- vector("list", length(max))
  for (i in seq_along(max)) {
    l[[i]] <- runif(n, max = max[i])
  }
  mat2 <- do.call("cbind", l)
})
#>    user  system elapsed 
#>    0.03    0.00    0.03
apply(mat2, 2, max)[1:10]
#>  [1] 0.9990843 1.9990435 2.9938535 3.9976729 4.9936396 5.9893723 6.9539626 7.9996006
#>  [9] 8.9799541 9.9845350
system.time({
  mat3 <- matrix(0, n, length(max))
  for (i in seq_along(max)) {
    mat3[, i] <- runif(n, max = max[i])
  }
})
#>    user  system elapsed 
#>    0.05    0.00    0.05
apply(mat3, 2, max)[1:10]
#>  [1] 0.9987964 1.9990991 2.9925360 3.9946387 4.9947198 5.9984303 6.9789370 7.9882264
#>  [9] 8.9947816 9.9973325

What is nice about using a list is that you don’t need to pre-allocate. Indeed, as opposed to atomic vectors, the elements of a list are stored in different places in memory, so you don’t have to reallocate all the data when you add an element to the list.

system.time({
  l <- list()
  for (i in seq_along(max)) {
    l[[i]] <- runif(n, max = max[i])
  }
  mat4 <- do.call("cbind", l)
})
#>    user  system elapsed 
#>    0.03    0.00    0.03
apply(mat4, 2, max)[1:10]
#>  [1] 0.9998578 1.9985802 2.9948578 3.9931722 4.9958939 5.9994480 6.9987769 7.9971307
#>  [9] 8.9809690 9.9693196

Instead of pre-allocating yourself, you can use sapply() (or lapply() followed by do.call(), as done previously):

system.time(
  mat4 <- sapply(max, function(m) runif(n, max = m))
)
#>    user  system elapsed 
#>    0.03    0.00    0.03
apply(mat4, 2, max)[1:10]
#>  [1] 0.9985957 1.9978454 2.9969550 3.9986053 4.9952173 5.9986125 6.9946495 7.9889983
#>  [9] 8.9999699 9.9976858

Don’t listen to people telling you that sapply() is a vectorized operation that is so much faster than loops; under the hood, it is just another loop.

5.2.2 Use the right function

Often, in order to optimize your code, you can simply find the right function to do what you need to do.

For example, using rowMeans(x) instead of apply(x, 1, mean) can save you a lot of time. If you want more efficient functions that apply to rows and columns of matrices, you can check out the package {matrixStats}.
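As a quick illustration, here is a benchmark sketch (assuming {matrixStats} is installed; rowMeans2() is its optimized equivalent of rowMeans()):

m0 <- matrix(rnorm(1e6), 1e3, 1e3)
microbenchmark::microbenchmark(
  apply(m0, 1, mean),          # generic apply(): many intermediate copies
  rowMeans(m0),                # dedicated base R function
  matrixStats::rowMeans2(m0),  # optimized C implementation
  times = 20
)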

Another example is when reading large text files; in such cases, prefer using data.table::fread() rather than read.table().

Generally, packages that use C/Rcpp are efficient.

5.2.3 Do not try to optimize everything

“Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered.”

— Donald Knuth.

If you try to optimize each and every part of your code, you will end up losing a lot of time writing it, and it will probably be less readable.

R is great at prototyping quickly because you can write code in a concise and easy way. Begin by doing just that. If performance matters, then profile your code to see which part of your code is taking too much time and optimize only this part!

Learn more about how to profile your code in RStudio in this article.
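For instance, a minimal profiling sketch with {profvis} (assuming that package is installed; the profiled code is just an arbitrary computation for illustration):

profvis::profvis({
  xm <- matrix(rnorm(1e7), nrow = 1e4)
  means <- apply(xm, 1, mean)  # the flame graph should point to
  sds   <- apply(xm, 1, sd)    # these two apply() calls
})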

5.3 Vectorization

See this great blog post by Noam Ross to understand vectorization.

5.3.1 Exercise

Monte-Carlo integration (example from the book Efficient R programming)

Suppose we wish to estimate the integral \(\int_0^1 x^2 dx\) using a Monte-Carlo method. Essentially, we throw darts at the curve and count the number of darts that fall below the curve (as in the following figure).

Monte Carlo Integration pseudo-code

  1. Initialize: hits = 0
  2. for i in 1:N
  3.     Generate two random numbers, \(U_1\) and \(U_2\), between 0 and 1
  4.     If \(U_2 < U_1^2\), then hits = hits + 1
  5. end for
  6. Area estimate = hits / N

Naively implementing this Monte-Carlo algorithm in R would typically lead to something like:

monte_carlo <- function(N) {
  
  hits <- 0
  for (i in seq_len(N)) {
    u1 <- runif(1)
    u2 <- runif(1)
    if (u1 ^ 2 > u2) {
      hits <- hits + 1
    }
  }
  
  hits / N
}

This takes a few seconds for N = 1e6:

N <- 1e6
system.time(monte_carlo(N))
#>    user  system elapsed 
#>    2.50    0.03    2.55

Your task: Find a vectorized solution for this problem:

system.time(monte_carlo_vec(N))
#>    user  system elapsed 
#>    0.06    0.00    0.06
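One possible vectorized solution (a sketch): draw all the random numbers at once and use the fact that the mean of a logical vector is the proportion of TRUEs.

monte_carlo_vec <- function(N) {
  # u1^2 > u2 is a logical vector with one element per dart
  mean(runif(N)^2 > runif(N))
}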

5.4 Rcpp

See this presentation.

You have this data and this working code (a loop) that is slow:

mydf <- readRDS(system.file("extdata/one-million.rds", package = "advr38pkg"))

QRA_3Dmatrix <- array(0, dim = c(max(mydf$ID), max(mydf$Volume), 2))

for (i in seq_len(nrow(mydf))) {
  # Row corresponds to the ID class
  row    <- mydf$ID[i]
  # Column corresponds to the volume class
  column <- mydf$Volume[i]
  # Number of events, initially zero, then +1
  QRA_3Dmatrix[row, column, 1] <- QRA_3Dmatrix[row, column, 1] + 1  
  # Sum energy 
  QRA_3Dmatrix[row, column, 2] <- QRA_3Dmatrix[row, column, 2] + 
    1 - 1.358 / (1 + exp( (1000 * mydf$Energy[i] - 129000) / 120300 ))
}

Rewrite this for-loop with Rcpp.

You can also try to use {dplyr} for this problem.
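To give an idea of the shape of an Rcpp solution, here is a hedged sketch (the function name fill_QRA and its interface are made up for illustration). It exploits the fact that a 3D array is stored as one contiguous vector in column-major order:

Rcpp::cppFunction('
NumericVector fill_QRA(NumericVector qra,     // 3D array of zeros
                       IntegerVector id,      // assumed positive integer codes
                       IntegerVector volume,  // assumed positive integer codes
                       NumericVector energy) {
  IntegerVector dim = qra.attr("dim");
  int d1 = dim[0], d2 = dim[1];
  for (int i = 0; i < id.size(); i++) {
    // 0-based offset of element [id, volume, 1] in the flat vector
    int offset = (id[i] - 1) + (volume[i] - 1) * d1;
    qra[offset] += 1;                         // count events
    qra[offset + d1 * d2] +=                  // sum transformed energy
      1 - 1.358 / (1 + exp((1000 * energy[i] - 129000) / 120300));
  }
  return qra;
}')

QRA_3Dmatrix2 <- fill_QRA(
  array(0, dim = c(max(mydf$ID), max(mydf$Volume), 2)),
  mydf$ID, mydf$Volume, mydf$Energy)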

5.5 Linear algebra

In R, prefer using crossprod(X) and tcrossprod(X) instead of t(X) %*% X and X %*% t(X). Also, A %*% (B %*% y) and solve(A, y) will be faster than A %*% B %*% y and solve(A) %*% y: the parentheses avoid a costly matrix-matrix product, and solve(A, y) avoids explicitly inverting A.
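For instance, a quick benchmark sketch (timings will depend on your BLAS):

X <- matrix(rnorm(2000 * 500), 2000, 500)
microbenchmark::microbenchmark(
  t(X) %*% X,    # explicitly transposes X first
  crossprod(X),  # same result, without the explicit transpose
  times = 10
)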

Don’t re-implement linear algebra operations (such as matrix products) yourself. There exist some highly optimized libraries for this. If you want to use linear algebra in Rcpp, try RcppArmadillo or RcppEigen.

If you want to use some optimized multi-threaded linear algebra library, you can try Microsoft R Open.

5.5.1 Exercises

Compute the Euclidean distances between each row of X and each row of Y:

set.seed(1)
X <- matrix(rnorm(1000), ncol = 5)
Y <- matrix(rnorm(5000), ncol = 5)

A naive implementation would be:

system.time({
  dist <- matrix(NA_real_, nrow(X), nrow(Y))
  for (i in seq_len(nrow(X))) {
    for (j in seq_len(nrow(Y))) {
      dist[i, j] <- sqrt(sum((X[i, ] - Y[j, ])^2))
    }
  }
})
#>    user  system elapsed 
#>    0.22    0.00    0.22

A possible faster solution takes:

#>    user  system elapsed 
#>       0       0       0
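One possible solution (a sketch): expand the squared distance as \(\|x - y\|^2 = \|x\|^2 + \|y\|^2 - 2 x^T y\) so that the cross term becomes a single matrix product; pmax() guards against tiny negative values caused by floating-point error:

system.time({
  sq_norms <- outer(rowSums(X^2), rowSums(Y^2), `+`)
  dist2 <- sqrt(pmax(sq_norms - 2 * tcrossprod(X, Y), 0))
})
all.equal(dist2, dist)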

Rewrite this problem to use linear algebra instead of a loop (hint: reshape the 3-dimensional arrays as 2D matrices):

N <- 1e5
x <- rnorm(N*3*3);   dim(x) <- c(N,3,3)
y <- rnorm(N*3*3);   dim(y) <- c(N,3,3)

system.time({
  gg <- 0
  for (n in 1:dim(x)[1]){
    gg <- gg + t(x[n,,]) %*% y[n,,]
  }
})
#>    user  system elapsed 
#>    0.54    0.03    0.58

A possible faster solution takes:

#>    user  system elapsed 
#>    0.02    0.00    0.02
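One possible solution (a sketch): each entry of the result is \(\sum_{n,k} x[n, k, i] \cdot y[n, k, j]\), so merging the first two dimensions of x and y (a change of dim, which does not reorder the data) turns the whole loop into a single crossprod():

system.time({
  dim(x) <- dim(y) <- c(N * 3, 3)  # merge the first two dimensions
  gg2 <- crossprod(x, y)           # t(x) %*% y, summing over n and k at once
})
all.equal(gg2, gg)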

5.6 Algorithms & data structures

Sometimes, getting the right data structure (e.g. using a matrix instead of a data frame or integers instead of characters) can save you some computation time.
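A tiny illustration (a sketch): the same row sums are much cheaper on a matrix than on the equivalent data frame, which first has to be coerced to a matrix:

m <- matrix(rnorm(1e6), 1e3, 1e3)
df <- as.data.frame(m)
microbenchmark::microbenchmark(
  rowSums(m),   # works directly on contiguous memory
  rowSums(df),  # has to convert the data frame first
  times = 20
)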

Is your algorithm doing redundant computations that make it e.g. quadratic instead of linear with respect to the size of your data?

See exercises (section 5.7) for some insights.

You can also find a detailed example in this blog post.

5.7 Exercises

Generate \(10^8\) (begin with \(10^4\)) steps of the process described by the formula
\[X(0) = 0\]
\[X(t+1) = X(t) + Y(t)\]
where \(Y(t)\) are independent random variables with the distribution \(N(0,1)\). Then, calculate for what percentage of indices \(t\) the value of \(X(t)\) was negative. You don’t need to store the values of \(X\) if you don’t want to. What would be the benefit of writing an Rcpp function over a simple vectorized R function?

set.seed(1)
system.time(p <- advr38pkg::random_walk_neg_prop(1e7))
#>    user  system elapsed 
#>    0.39    0.02    0.40
p
#> [1] 0.3400444
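A simple vectorized version might look like this sketch; note that it has to materialize all N values of \(X\) at once, which is exactly where an Rcpp loop that keeps only the running value would save memory:

random_walk_neg_prop2 <- function(N) {
  mean(cumsum(rnorm(N)) < 0)  # stores all N values of X in memory
}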

mat <- as.matrix(mtcars)
ind <- seq_len(nrow(mat))
mat_big <- mat[rep(ind, 1000), ]  ## 1000 times bigger dataset
last_row <- mat_big[nrow(mat_big), ]

Speed up these loops:

system.time({
  for (j in 1:ncol(mat_big)) {
    for (i in 1:nrow(mat_big)) {
      mat_big[i, j] <- 10 * mat_big[i, j] * last_row[j]
    }
  }
})
#>    user  system elapsed 
#>    0.39    0.00    0.39
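One possible vectorized version (a sketch): each column j is simply scaled by the constant 10 * last_row[j], which sweep() applies in one pass over the matrix:

system.time(
  mat_big2 <- sweep(mat_big, 2, 10 * last_row, `*`)
)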

Why is colSums() on a whole matrix faster than on only half of it?

m0 <- matrix(rnorm(1e6), 1e3, 1e3)
microbenchmark::microbenchmark(
  colSums(m0[, 1:500]), 
  colSums(m0)
)
#> Unit: microseconds
#>                  expr    min      lq     mean median      uq    max neval
#>  colSums(m0[, 1:500]) 1464.1 1835.50 2059.488 1930.2 2083.40 7310.5   100
#>           colSums(m0)  765.8  828.15  882.119  855.9  916.65 1368.4   100

Try to speed up this code by vectorizing it first. Then, recode it in Rcpp and benchmark all the solutions you came up with.

M <- 50
step1 <- runif(M)
A <- rnorm(M)
N <- 1e4

tau <- matrix(0, N + 1, M)
tau[1, ] <- A
for (j in 1:M) {
  for (i in 2:nrow(tau)) {
    tau[i, j] <- tau[i - 1, j] + step1[j] * 1.0025^(i - 2)
  }
} 
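One possible vectorized version (a sketch): row i of tau equals A plus step1 times the geometric series \(1.0025^0 + \dots + 1.0025^{i-2}\), which has the closed form \((1.0025^{i-1} - 1) / 0.0025\):

coef <- (1.0025^(0:N) - 1) / 0.0025  # cumulative geometric sums
tau2 <- outer(coef, step1) + rep(A, each = N + 1)
all.equal(tau2, tau)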

Make a fast function that counts the number of elements falling between successive breaks. Can you do it in base R? Try also implementing it in Rcpp. How can you implement a solution whose computation time doesn’t depend on the number of breaks? [Which special cases should you consider?]

x <- sample(10, size = 1e4, replace = TRUE)
breaks <- c(1, 3, 9, 9.5, 10)
table(cut(x, breaks))
#> 
#>    (1,3]    (3,9]  (9,9.5] (9.5,10] 
#>     2006     5997        0      944
hist(x, breaks, plot = FALSE)$counts  # includes first break
#> [1] 3059 5997    0  944
advr38pkg::count_by_breaks(x, breaks)
#> [1] 2006 5997    0  944
advr38pkg::count_by_breaks_fast(x, breaks)
#> [1] 2006 5997    0  944
microbenchmark::microbenchmark(
  table(cut(x, breaks)), 
  hist(x, breaks, plot = FALSE)$counts, 
  advr38pkg::count_by_breaks(x, breaks),
  advr38pkg::count_by_breaks_fast(x, breaks)
)
#> Unit: microseconds
#>                                        expr   min     lq     mean median      uq    max
#>                       table(cut(x, breaks)) 801.7 909.55 1004.368 942.05 1043.60 2637.8
#>        hist(x, breaks, plot = FALSE)$counts 284.9 340.25  412.441 356.40  436.45 1263.6
#>       advr38pkg::count_by_breaks(x, breaks) 266.5 365.10  452.425 379.15  426.10 3361.3
#>  advr38pkg::count_by_breaks_fast(x, breaks) 152.5 184.60  214.293 191.50  213.10 1148.5
#>  neval
#>    100
#>    100
#>    100
#>    100
x2 <- sample(10, size = 1e5, replace = TRUE)
breaks2 <- seq(0, 10, length.out = 100)
microbenchmark::microbenchmark(
  advr38pkg::count_by_breaks(x2, breaks),
  advr38pkg::count_by_breaks_fast(x2, breaks),
  advr38pkg::count_by_breaks(x2, breaks2),
  advr38pkg::count_by_breaks_fast(x2, breaks2)
)
#> Unit: milliseconds
#>                                          expr      min        lq       mean    median
#>        advr38pkg::count_by_breaks(x2, breaks)   2.4776   2.68735   3.079292   2.92735
#>   advr38pkg::count_by_breaks_fast(x2, breaks)   1.1295   1.21535   1.385777   1.28100
#>       advr38pkg::count_by_breaks(x2, breaks2) 100.9357 106.56120 109.560769 108.45680
#>  advr38pkg::count_by_breaks_fast(x2, breaks2)   1.1556   1.22695   1.351562   1.30810
#>         uq      max neval
#>    3.29225   6.2791   100
#>    1.38720   4.7121   100
#>  110.40480 140.4010   100
#>    1.42820   1.9862   100
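One possible base R approach (a sketch) whose cost is almost independent of the number of breaks: findInterval() places each value by binary search and tabulate() counts the bins. With left.open = TRUE, it reproduces the right-closed intervals of cut(); values outside the range fall in bin 0 or length(breaks) and are dropped by tabulate():

count_by_breaks2 <- function(x, breaks) {
  bins <- findInterval(x, breaks, left.open = TRUE)
  tabulate(bins, nbins = length(breaks) - 1)
}
count_by_breaks2(x, breaks)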

An R user wants to implement some sampling on a sparse matrix and provides this working code:

library(Matrix)
N <- 100
m <- Matrix(0, nrow = N, ncol = N)

for (j in 1:N) {
  cols <- sample((1:N)[-j], 2)  # 2 columns != j 
  m[j, cols] <- 1
}

This code is slow; can you find two major reasons why?

How can you more efficiently assign 1s?

Can you use sampling with replacement (which can be easily vectorized) in this example?

Implement faster solutions in R and Rcpp.
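As a hint towards the R solution, here is a sketch: draw all the column indices first, then build the matrix in one shot from (i, j) triplets with Matrix::sparseMatrix(), instead of modifying a sparse matrix N times:

i <- rep(1:N, each = 2)
j <- unlist(lapply(1:N, function(k) sample((1:N)[-k], 2)))
m2 <- sparseMatrix(i, j, x = 1, dims = c(N, N))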


Make a fast function that returns all prime numbers up to a number N.

N <- 1e6
system.time(
  primes <- advr38pkg::AllPrimesUpTo(N)
)
#>    user  system elapsed 
#>    0.05    0.00    0.05
plot(primes, pch = 20, cex = 0.5)
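For reference, a basic sieve of Eratosthenes in pure R (a sketch; an Rcpp version of the same idea, avoiding the repeated index-vector allocations, is typically even faster):

all_primes_upto <- function(N) {
  is_prime <- rep(TRUE, N)
  is_prime[1] <- FALSE
  for (p in 2:floor(sqrt(N))) {
    if (is_prime[p])
      is_prime[seq(p * p, N, by = p)] <- FALSE  # cross out multiples of p
  }
  which(is_prime)
}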

5.8 Parallel

I basically always use foreach and recommend doing so. See my guide to parallelism in R with foreach.
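A minimal foreach sketch (assuming the foreach and doParallel packages are installed):

library(foreach)
cl <- parallel::makeCluster(2)
doParallel::registerDoParallel(cl)
res <- foreach(i = 1:4, .combine = c) %dopar% {
  sqrt(i)  # replace with real (already optimized) work
}
parallel::stopCluster(cl)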

Just remember to optimize your code before trying to parallelize it.

Try to parallelize some of your best solutions for the previous exercises.