I’m reading from a table in the database to fetch some paths to files.
I loop over each path and, for each .csv file, I update another table.
Everything works, but it takes a lot of time for the 90 files.
After adding transactions, the speed has been great: about 400% faster.
Now, since I have to call the transaction method on a specific model,
which one should I use: the first table (where I fetch the paths to the
files) or the other one?
I update the file-path row with a timestamp and the number of rows
created/updated, so if a problem occurs with some file, I don’t want its
status updated as done, nor any new data left in the other table.
For your info:
table master => has the file paths
table dades => has the data
database backend => SQLite (it has only one transaction per database,
not per table)
@txt_to_return = ""
@txt_rows_imported = 0

# delete all the rows in Master for nc8 ...
@result = Arxiu.find_by_sql('DELETE FROM masters')
@txt_to_return += "<p><b>Deleted all rows from NC8 table</b></p>"
@columns = Arxiu.content_columns # id, nc8, nacer
@file_paths = Arxiu.find(:all, :conditions => 'kind="nc8"')

require 'csv'
for arxiu in @file_paths
  @rows_per_file = 0
  begin
    Master.transaction do
      CSV.open(arxiu.file_path, "r") do |row|
        @data_new = Master.new
        @data_new.nc8 = row[0]   # field indices assumed; lost in the post's formatting
        @data_new.nacer = row[1]
        @data_new.save
        @txt_rows_imported += 1
        @rows_per_file += 1
      end # CSV.open ... do
    end # transaction
  end # begin
  arxiu.update_last = Time.now
  arxiu.update_rows = @rows_per_file
  arxiu.save
end # for

@txt_to_return += "<p><b>Successfully created " + @txt_rows_imported.to_s +
                  " rows from " + @file_paths.size.to_s + " files</b></p>"
render(:text => @txt_to_return)
end # import_file_nc8_ajax
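For reference, with Ruby's csv library each parsed row is an array, so the fields are read by position (`row[0]`, `row[1]`) rather than by assigning the whole row to one attribute. A self-contained sketch with an invented two-column file:

```ruby
require 'csv'
require 'tempfile'

# Invented sample standing in for one of the nc8 .csv files.
file = Tempfile.new(['nc8', '.csv'])
file.write("01012100,Breeding horses\n01012900,Other horses\n")
file.close

records = []
CSV.foreach(file.path) do |row|
  # row is an array of the fields on that line
  records << { nc8: row[0], nacer: row[1] }
end

puts records.length       # 2
puts records.first[:nc8]  # 01012100
```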
Ah, the question …
The question is where I have to put the transaction,
and whether I have to use the Master table or the Dada table.
If there are errors, 99.99% of them will be in the Dada table, related to
the data imported from the .csv files …