Ruby for Data Science: Handling Large Datasets Efficiently

I’m interested in using Ruby for data science tasks, particularly for working with large datasets. While Ruby may not be as commonly associated with data science as languages like Python or R, I believe it has potential for certain data processing tasks. However, I’m facing challenges when dealing with large datasets, and I’d like some guidance on optimizing my code.

Here’s a simplified example of my Ruby code:

require 'csv'

# Reading a large CSV file into memory
data = []
CSV.foreach('large_dataset.csv', headers: true) do |row|
  data << row.to_h
end

# Performing data analysis on the loaded dataset
total_sales = data.map { |row| row['Sales'].to_f }.sum
average_price = data.map { |row| row['Price'].to_f }.reduce(:+) / data.length

puts "Total Sales: #{total_sales}"
puts "Average Price: #{average_price}"

The problem is that when dealing with large CSV files, this code becomes slow and memory-intensive. I’ve heard about streaming and batching techniques in Python for handling such cases, but I’m not sure how to implement similar strategies in Ruby.

Could someone with experience in data science with Ruby provide guidance on how to efficiently handle and process large datasets, while avoiding memory issues and slow execution times? Are there specific Ruby gems or techniques that are well-suited for this purpose? Any insights or code optimizations would be greatly appreciated. Thank you!

Hey Vishal!

Consider using the smarter_csv gem to handle large datasets in Ruby. It reads the CSV file in batches instead of loading the whole thing into memory at once.

Your code might look like this:

require 'smarter_csv'

total_sales = 0.0
price_sum   = 0.0
row_count   = 0

# SmarterCSV yields rows in chunks (arrays of hashes). By default it
# downcases and symbolizes the headers, so the 'Sales' column becomes :sales.
SmarterCSV.process('large_dataset.csv', chunk_size: 500) do |chunk|
  chunk.each do |row|
    total_sales += row[:sales].to_f
    price_sum   += row[:price].to_f
    row_count   += 1
  end
end

average_price = price_sum / row_count

puts "Total Sales: #{total_sales}"
puts "Average Price: #{average_price}"

In this code, smarter_csv processes the CSV file in chunks of 500 rows at a time and keeps only running totals, so memory use stays roughly constant no matter how large the file gets.
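
If you'd rather avoid an extra gem: CSV.foreach from the standard library already streams the file one row at a time, and the memory problem in your original snippet comes from pushing every row into the data array. Here's a minimal sketch of the same aggregation with running totals, assuming the same 'Sales' and 'Price' headers and file path as your example:

require 'csv'

# Stream the file row by row and keep only running totals,
# instead of building an array of every row.
total_sales = 0.0
price_sum   = 0.0
row_count   = 0

CSV.foreach('large_dataset.csv', headers: true) do |row|
  total_sales += row['Sales'].to_f
  price_sum   += row['Price'].to_f
  row_count   += 1
end

average_price = row_count.zero? ? 0.0 : price_sum / row_count

puts "Total Sales: #{total_sales}"
puts "Average Price: #{average_price}"

The trade-off: smarter_csv gives you chunking, header normalization, and type conversion for free, while the plain CSV approach avoids a dependency. Either way, the key is to aggregate as you read rather than loading everything into memory first.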