Commit fe9ea2dc authored by Médéric Boquien


Completely change the SED cache. Now we rely on the fact that, for a given set of modules, only one model needs to be kept in the cache. A cache miss means the stored model can be discarded. In turn, the cache key simply becomes the number of modules used, making it easier and faster to access. To check whether the stored model corresponds to the requested one, we store both the model parameters and the SED in a tuple.
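The design described above can be sketched as follows. This is a minimal illustration, not the actual CIGALE implementation: the class and attribute names are assumptions, and the key is taken to be the tuple of per-module parameter tuples, so its length gives the number of modules.

```python
# Hypothetical sketch of a single-slot-per-depth SED cache. The cache is
# keyed by the number of modules; each slot holds one (parameters, SED)
# tuple, so a miss simply overwrites the previous model at that depth.
class SedCache:
    def __init__(self):
        self.cache = {}

    def __getitem__(self, key):
        # key is the tuple of module parameter tuples; len(key) is the
        # number of modules, which selects the single cached slot.
        entry = self.cache.get(len(key))
        if entry is not None and entry[0] == key:
            return entry[1]
        return None  # cache miss: the stored model can be discarded

    def __setitem__(self, key, sed):
        # Keep only one model per module count.
        self.cache[len(key)] = (key, sed)
```

With this layout, the expensive per-lookup key computations of the previous design are replaced by a length check and a tuple comparison.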
parent f8d70ed7
@@ -17,7 +17,7 @@
### Optimised
- The estimation of the physical properties is made a bit faster when all the models are valid. (Médéric Boquien)
- The access to the SED and module caches has been made faster and simpler. This results in a speedup of ~6% in the computation of the models. (Médéric Boquien)
- The access to the module cache has been made faster and the model cache has been made much simpler, avoiding plenty of complex computations. This results in a speedup of at least ~6% in the computation of the models. The speedup can be higher when using few photometric bands. At the same time it considerably reduces the number of page faults seen in some rare circumstances. (Médéric Boquien)
- The models counter was a bottleneck when using many cores, as updating it could stall other parallel processes. Now the internal counter is updated much less frequently. The speedup ranges from negligible (few cores) to a factor of a few (many cores). The downside is that the updates on the screen may be a bit irregular. (Médéric Boquien)
- It turns out that raising an array to some power is an especially slow operation in Python. The `dustatt_calzleit` module has been optimised, leading to a massive speed improvement. This speedup is especially large for models that do not include dust emission. (Médéric Boquien)
- Making copies of partially computed SEDs when storing them in the cache can be slow. Now we avoid making copies of the redshifted SED. The speedup should be especially noticeable when computing a set of models with numerous redshifts. (Médéric Boquien)
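The batched-counter optimisation in the list above can be sketched as follows. This is a minimal illustration under assumed names, not the actual CIGALE code: each worker accumulates increments locally and only takes the shared lock once per batch, so many cores rarely contend on the counter.

```python
import multiprocessing as mp

# Hypothetical sketch of a models counter whose shared value is updated
# much less frequently than once per model.
class BatchedCounter:
    def __init__(self, batch=256):
        self.shared = mp.Value("i", 0)  # lock-protected shared counter
        self.local = 0                  # per-process pending increments
        self.batch = batch

    def inc(self):
        self.local += 1
        if self.local >= self.batch:
            # Flush the local count under the lock, once per batch.
            with self.shared.get_lock():
                self.shared.value += self.local
            self.local = 0
```

The trade-off is exactly the one noted above: the shared value (and hence the on-screen progress) lags behind by up to one batch per worker, so updates may look a bit irregular.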
@@ -102,11 +102,6 @@ def sed(idx, midx):
Global index of the model.
"""
global gbl_previous_idx
if gbl_previous_idx > -1:
gbl_warehouse.partial_clear_cache(
gbl_models.params.index_module_changed(gbl_previous_idx, midx))
gbl_previous_idx = midx
sed = gbl_warehouse.get_sed(gbl_models.params.modules,
gbl_models.params.from_index(midx))
@@ -228,11 +223,6 @@ def bestfit(oidx, obs):
np.seterr(invalid='ignore')
best_index = int(gbl_results.best.index[oidx])
global gbl_previous_idx
if gbl_previous_idx > -1:
gbl_warehouse.partial_clear_cache(
gbl_params.index_module_changed(gbl_previous_idx, best_index))
gbl_previous_idx = best_index
# We compute the model at the exact redshift not to have to correct for the
# difference between the object and the grid redshifts.
@@ -51,12 +51,6 @@ def fluxes(idx, midx):
Index of the model within the current block of models.
"""
global gbl_previous_idx
if gbl_previous_idx > -1:
gbl_warehouse.partial_clear_cache(
gbl_models.params.index_module_changed(gbl_previous_idx, midx))
gbl_previous_idx = midx
sed = gbl_warehouse.get_sed(gbl_models.params.modules,
gbl_models.params.from_index(midx))
@@ -5,6 +5,7 @@
from ..sed import SED
from .. import sed_modules
from .sedcache import SedCache
class SedWarehouse(object):
@@ -33,7 +34,7 @@ class SedWarehouse(object):
else:
raise TypeError("The nocache argument must be a list or an str.")
self.sed_cache = {}
self.sed_cache = SedCache()
self.module_cache = {}
def get_module_cached(self, name, **kwargs):
@@ -118,7 +119,7 @@ class SedWarehouse(object):
# dictionaries.
key = tuple(tuple(par.values()) for par in parameter_list)
sed = self.sed_cache.get(key)
sed = self.sed_cache[key]
if sed is None:
mod = self.get_module_cached(module_list.pop(),
**parameter_list.pop())