
Last week an ARS scientist asked me about ways to speed up his work in R, so I thought I would post my answer here.

  • Understand your bottleneck. Is it memory or computation? Is there a particular step that is slow? Packages like pryr and lineprof profile your code so you can see where the time goes (see the profiling sketch after this list). Optimizing for the sake of optimization is not optimal: find the slowest part of the code, optimize it, and repeat until the code is fast enough to get the job done.

  • Is it a package that is slow, or your code? To take advantage of packages for speeding up R you often need to rewrite functions. This can be done for external packages too, but it requires an understanding of the package you want to modify.

  • There is a nice chapter on R performance by Hadley Wickham, Chief Scientist at RStudio: http://adv-r.had.co.nz/Performance.html.

  • Stop using for loops and start using vectorized functions, which push the looping down into compiled code (vectorization sketch below).

  • Use parallel computing packages like snow, snowfall, or parallel. Note that these only speed up packages designed to use them, or your own code written to take advantage of them. For instance, where serial code uses lapply to apply a function over a list, with parallel you use parLapply instead (parallel sketch below).

  • Compile your code to byte code with packages like jit or compiler (compiler sketch below).

  • If you have one step that is slow, rewrite that function in C++ using the Rcpp package (Rcpp sketch below).

  • If you are doing something slow like Gibbs sampling or Markov chain Monte Carlo, be sure to use a package designed for it like Stan or rjags. They have sampling routines coded in C++ that are much faster (Stan sketch below).

  • Is your problem embarrassingly parallel, so that the input data could be split into separate files and submitted as array jobs to different nodes on the cluster? That alone could get you the speedup you need (array-job sketch below).

  • Under the hood R uses an open-source linear algebra library called BLAS. There is a faster proprietary alternative, the Intel Math Kernel Library (MKL), and Microsoft R Open ships with it. This can speed up your code if the bottleneck is in matrix operations (BLAS timing sketch below).

  • For huge problems, packages like sparklyr and SparkR can use Spark to perform map-reduce operations, but there is a good bit of overhead required to set up a Spark cluster, and it is not currently available on Ceres. It could be set up on AWS more easily (sparklyr sketch below).

  • There are packages for using GPUs with R (gpuR), but this should be the last thing you try: there are much easier ways to obtain speedups, and Ceres does not have any GPU nodes (gpuR sketch below).
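
Below are short sketches for several of these tips. First, profiling: this one uses base R's Rprof (pryr and lineprof give more detail, but Rprof needs no extra packages), and `slow_step` and `fast_step` are made-up stand-ins for your own functions.

```r
# Hypothetical stand-ins for the steps in your own analysis
slow_step <- function() for (i in 1:2000) outer(1:200, 1:200)
fast_step <- function() sum(rnorm(1e5))

Rprof("profile.out")          # start sampling the call stack
slow_step()
fast_step()
Rprof(NULL)                   # stop profiling
summaryRprof("profile.out")   # time per function, worst offenders first
```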
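
A minimal illustration of the loop-versus-vectorized point: both compute the same result, but the vectorized form does the looping in compiled code.

```r
x <- rnorm(1e7)

# Loop version: explicit R-level iteration
system.time({
  out <- numeric(length(x))
  for (i in seq_along(x)) out[i] <- x[i]^2 + 1
})

# Vectorized version: one call, the loop happens in C
system.time(out2 <- x^2 + 1)

identical(out, out2)   # same answer, large speed difference
```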
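
The lapply-to-parLapply translation with the base parallel package; `sim_once` is a placeholder for a real per-task computation.

```r
library(parallel)

sim_once <- function(i) mean(rnorm(1e6))   # placeholder workload

# Serial version
res_serial <- lapply(1:8, sim_once)

# Parallel version: same call shape, plus a cluster object
cl <- makeCluster(detectCores() - 1)       # leave one core free
res_par <- parLapply(cl, 1:8, sim_once)
stopCluster(cl)
```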
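
Byte compilation with the base compiler package. Note that recent versions of R byte-compile functions automatically, so on a current install the gain may be small.

```r
library(compiler)

# A deliberately loop-heavy function
slow_sum <- function(x) {
  total <- 0
  for (xi in x) total <- total + xi
  total
}

fast_sum <- cmpfun(slow_sum)   # byte-compiled copy of the same function

x <- runif(1e7)
system.time(slow_sum(x))
system.time(fast_sum(x))
```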
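
Rewriting a hot function in C++ with Rcpp: cppFunction compiles the C++ source and exposes it as an ordinary R function.

```r
library(Rcpp)

cppFunction('
double sum_cpp(NumericVector x) {
  double total = 0;
  for (int i = 0; i < x.size(); ++i) {
    total += x[i];
  }
  return total;
}
')

x <- runif(1e7)
sum_cpp(x)   # same answer as sum(x), but the loop runs in C++
```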
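
A minimal rstan sketch fitting a normal mean and standard deviation; the model and data here are toy examples, not a recommendation for any particular analysis.

```r
library(rstan)

model_code <- "
data {
  int<lower=1> N;
  vector[N] y;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  y ~ normal(mu, sigma);   // sampling runs in Stan's compiled C++ code
}
"

y <- rnorm(200, mean = 2, sd = 1)          # simulated toy data
fit <- stan(model_code = model_code,
            data = list(N = length(y), y = y),
            chains = 2, iter = 1000)
print(fit)
```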
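
The array-job pattern, assuming a SLURM scheduler as on Ceres: each task reads its own chunk of the input, keyed by the array index. The file names and the sbatch line are hypothetical.

```r
# Submitted as, e.g.:  sbatch --array=1-20 run_chunk.sh
# where run_chunk.sh calls:  Rscript analyze_chunk.R

task_id <- as.integer(Sys.getenv("SLURM_ARRAY_TASK_ID"))

infile  <- sprintf("input_chunk_%02d.csv",  task_id)  # hypothetical per-task input
outfile <- sprintf("result_chunk_%02d.rds", task_id)

dat <- read.csv(infile)
res <- colMeans(dat)     # placeholder for the real analysis
saveRDS(res, outfile)    # results get combined in a separate step
```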
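
A quick check of whether matrix operations are your bottleneck, and which BLAS your R is linked against (recent versions of R report this in sessionInfo()).

```r
n <- 2000
A <- matrix(rnorm(n * n), n, n)
B <- matrix(rnorm(n * n), n, n)

system.time(A %*% B)   # dominated by the BLAS; MKL or OpenBLAS is much faster here

sessionInfo()          # the "BLAS:" / "LAPACK:" lines show the linked library
```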
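
A local-mode sparklyr sketch; a real cluster would use its master URL instead of "local", and the toy data frame is just for illustration.

```r
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")   # swap in the cluster's master URL in production

d <- data.frame(g = rep(letters[1:4], each = 25), x = rnorm(100))
d_tbl <- copy_to(sc, d, "d")            # ship the data to Spark

# dplyr verbs are translated to Spark SQL and executed by Spark
d_tbl %>%
  group_by(g) %>%
  summarise(mean_x = mean(x, na.rm = TRUE)) %>%
  collect()                             # pull the (small) result back into R

spark_disconnect(sc)
```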
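
Finally, a minimal gpuR sketch, assuming a machine with a working OpenCL setup (which, again, Ceres lacks):

```r
library(gpuR)   # requires an OpenCL driver; fails without one

n <- 2000
A <- matrix(rnorm(n * n), n, n)

gpuA <- gpuMatrix(A, type = "float")   # copy the matrix to the GPU
res  <- gpuA %*% gpuA                  # the multiply runs on the GPU
```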
