February 1st, 2017
At Nobilis, we've recently had to deal with importing files that contain millions of data points, translating into millions of database rows. As you'd expect, performance was a major concern in an operation of this scale.
We learnt a lot along the way, and brought the import time for a benchmark file down from an initial 1000 seconds to under 3 seconds using various tips and tricks in Ruby, and some in Postgres.
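The specific tricks are covered in the slides, but to give a flavour: one widely used technique for imports of this scale is batching many rows into a single multi-row `INSERT`, cutting millions of database round trips down to a few thousand. The sketch below is illustrative only (the table, column names, and the naive quoting helper are invented for this example, not taken from the Nobilis codebase); in real code you would let your database driver handle escaping.

```ruby
# Illustrative sketch: batch rows into multi-row INSERT statements so
# millions of rows become a few round trips instead of millions.
# Table and column names here are hypothetical.

def quote(value)
  # Minimal quoting for illustration only; in production, use the
  # driver's own escaping (e.g. PG::Connection#escape_string).
  "'" + value.to_s.gsub("'", "''") + "'"
end

def batched_inserts(rows, batch_size: 1000)
  # Slice the rows into batches and emit one INSERT per batch.
  rows.each_slice(batch_size).map do |batch|
    values = batch.map { |row| "(#{row.map { |v| quote(v) }.join(', ')})" }
    "INSERT INTO data_points (name, value) VALUES #{values.join(', ')};"
  end
end

rows = [["a", 1], ["b", 2], ["c", 3]]
statements = batched_inserts(rows, batch_size: 2)
# Two statements: the first inserts rows "a" and "b", the second row "c".
```

For even larger files, Postgres's `COPY` command is typically faster still, since it streams rows in bulk rather than parsing individual statements.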
I gave a talk about this at the Leeds Ruby Thing Meetup a while back, and the slides I used for the talk are below. Hopefully you can pick up a thing or two!