I want to do whatever is most efficient when dealing with multiple,
potentially large files.
I need to take row(n) and row(n+1) from a file and use the data to do
things in other parts of my program. Then the program will iterate by
incrementing n. I may have up to 30 files, each having 50,000 rows.
My question is: should I read row(n) and row(n+1) by accessing the file
again and again on each iteration of the main program? Or should I just
read the whole file into memory (say, an array) and then grab items
from the array by index in the main program?
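To make the trade-off concrete, here is a minimal sketch of both approaches (in Python, assuming plain text files with one record per line; the function names `rows_in_memory` and `rows_streamed` are hypothetical). Note that 50,000 short rows is typically only a few megabytes, so even 30 such files fit comfortably in memory, and either version reads each file from disk only once:

```python
def rows_in_memory(path):
    # Approach 1: read the whole file once into a list, then iterate
    # over consecutive pairs by index: (row 0, row 1), (row 1, row 2), ...
    with open(path) as f:
        rows = f.read().splitlines()
    for current, following in zip(rows, rows[1:]):
        yield current, following

def rows_streamed(path):
    # Approach 2: stream the file line by line, keeping only the
    # previous row in memory. Same single pass over the disk, but
    # constant memory use regardless of file size.
    with open(path) as f:
        prev = next(f, None)
        for line in f:
            yield prev.rstrip("\n"), line.rstrip("\n")
            prev = line
```

What you want to avoid is reopening or reseeking the file on every iteration of the main loop; either of the above touches the disk once per file, and the choice between them is mostly about memory footprint.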