Any suggestions on applications that involve a lot of calculations on a
fairly large data set?
My app uses a set of raw data ~5k, applies some default/override rules
on the raw data and does some calculations on the data in combination
with a list of assumptions. A ranked list along with detailed metrics
is generated. The end user can manipulate some of the rules and
assumptions to generate different metrics for comparison.
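To make the setup concrete, here is a minimal sketch of the pipeline described above, assuming the rules merge defaults and overrides onto each record and the assumptions act as weights in the metric (the function names and the weighting scheme are hypothetical, just to illustrate the shape of the computation):

```python
# Hypothetical sketch of the described pipeline: merge default/override
# rules into each raw record, score it against the assumptions, and rank.
def apply_rules(record: dict, defaults: dict, overrides: dict) -> dict:
    # defaults fill in missing fields; overrides win over the raw value
    return {**defaults, **record, **overrides}

def score(record: dict, assumptions: dict) -> float:
    # placeholder metric: weight each numeric field by an assumption
    return sum(assumptions.get(k, 0) * v
               for k, v in record.items() if isinstance(v, (int, float)))

def rank(raw, defaults, overrides, assumptions):
    processed = [apply_rules(r, defaults, overrides) for r in raw]
    return sorted(processed, key=lambda r: score(r, assumptions), reverse=True)
```

With ~5k records this whole pass re-runs every time a rule or assumption changes, which is where the on-the-fly cost comes from.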
My current approach maintains the original data set, defaults/overrides,
and assumptions as separate models and does all the ranking calculations
on the fly. The performance is a bit sluggish when I'm looking at
more than 400 objects, and this is with only one user.
Any suggestions on caching options to explore? Should I consider using
a separate model for storing the results? This would make the ranking
and comparisons quicker, but I'm concerned about the overhead of having
to write large amounts of temporary data to the database.
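One middle ground between recomputing everything and persisting temporary rows is an in-memory cache keyed by a digest of the current rules and assumptions, so a repeated "what-if" combination reuses prior results. This is only a sketch under that assumption (the cache structure and key scheme are mine, not from the original post):

```python
import hashlib
import json

# Hypothetical in-memory result cache: the key is a stable digest of the
# rules and assumptions, so identical inputs hit the cache instead of
# recomputing or writing temporary rows to the database.
_cache: dict = {}

def cache_key(rules: dict, assumptions: dict) -> str:
    payload = json.dumps({"rules": rules, "assumptions": assumptions},
                         sort_keys=True)  # sort_keys makes the digest stable
    return hashlib.sha256(payload.encode()).hexdigest()

def ranked_results(raw, rules, assumptions, compute):
    key = cache_key(rules, assumptions)
    if key not in _cache:
        _cache[key] = compute(raw, rules, assumptions)
    return _cache[key]
```

The trade-off versus a results model in the database is durability: this cache is per-process and disappears on restart, but it avoids the write overhead you mention entirely.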