Commit 20909a00 authored by Médéric Boquien

Optimise the interpolation of the luminosity components by 1. computing the interpolation of all the components in one step rather than interpolating separately for each component, and 2. caching some intermediate computations that only depend on the grid sampling and therefore apply to all physical components.
parent c3cbbbf3
@@ -20,6 +20,7 @@
### Optimised
- By default the MKL library created many threads for each of the parallel processes. Not only was this unnecessary, as a high-level parallelisation already exists, but it generated a strong oversubscription of the CPU and of the RAM. The slowdown was over a factor of ~2 in some cases. Now we mandate MKL to use only 1 thread for each process. (Médéric Boquien & Yannick Roehlly)
- The generic numpy interpolation function was used. As we are in a well-controlled environment, this generated unnecessary verifications of the type and shape of the arrays. The compiled numpy interpolation function is now used, bypassing those checks (a sketch of this and the previous item follows this excerpt). This generates a gain of 5-10% in computing speed for the generation of the models. (Médéric Boquien)
- The interpolation of the spectra of the different physical components onto a new wavelength grid was not optimal as the interpolation was done separately for each component. Now a specific function has been implemented that caches repetitive computations and vectorises the interpolation so that all the components are computed in a single step. This generates a gain of 10% in computing speed for the generation of the models. (Médéric Boquien)
## 0.10.0 (2016-09-15)
### Added
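As a rough illustration of the first two optimised items in the CHANGELOG excerpt above (limiting MKL to one thread per process and calling numpy's compiled interpolation routine directly), here is a minimal sketch; it is not the actual CIGALE code and the exact mechanism used by CIGALE may differ:

import os

# Cap MKL at a single thread per worker process. This has to be set before
# numpy/MKL is initialised in that process; the mkl-service package's
# mkl.set_num_threads(1) is an alternative.
os.environ["MKL_NUM_THREADS"] = "1"

# Use the compiled interpolation routine backing np.interp directly,
# skipping the type and shape checks performed by the Python wrapper.
from numpy.core.multiarray import interp

# interp(wl_new, wl, lumin_1d) then returns the same values as
# np.interp(wl_new, wl, lumin_1d), just without the extra validation.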
@@ -10,6 +10,7 @@ from scipy.constants import c, pi
# Cache dictionaries
dx_cache = {}
best_grid_cache = {}
x_cache = {}
def lambda_to_nu(wavelength):
@@ -357,9 +358,9 @@ def interpolate_lumin(wl, lumin, wl_new, lumin_new):
# We interpolate only on the wavelengths where the components are
# already defined.
w = np.where((wl_unique > wl[0]) & (wl_unique < wl[-1]))
for i in range(lumin.shape[0]):
lumin_out[i, lumin.shape[1]+w[0]] = interp(wl_unique[w], wl,
lumin[i, :])
lumin_out[:-1, lumin.shape[1]+w[0]] = quick_interp_lum(wl_unique[w],
wl, lumin)
wl_best = np.concatenate((wl, wl_unique))
s = argsort_wl(wl_best)
wl_best = wl_best[s]
@@ -409,3 +410,45 @@ def flux_trapz(y, x, key):
dx = np.diff(x)
dx_cache[key] = dx
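# Trapezoidal rule: the integral is the sum over the intervals of
# (x[i+1]-x[i]) * (y[i]+y[i+1]) / 2, written as a single dot product
# with the (possibly cached) dx.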
return np.dot(dx, y[1:]+y[:-1]) * .5
def quick_interp_lum(x_new, x, y):
"""
Light weight interpolation function to interpolate luminosities on a new
wavelength grid. It assumes that all vallues are within the range of
definition. Also a number of quantities are cached in a dictionary in
order to avoid recomputing them at each call. The key is a tuple of the
size of the interpolation point, the size of the original x array and the
shape of the original y array. It should not be called for any other
purpose at it relies on strong assumptions that may be wrong interwise.
Parameters
----------
x_new : 1D array
Wavelengths where the luminosities array must be interpolated
x: 1D array
Wavelengths of the luminosities array
y: 2D array
Luminosities array
Returns
-------
y_new: 2D array
Interpolated luminosities array
"""
key = (x_new.size, x.size, y.shape)
if key in x_cache:
lo, hi, frac_x = x_cache[key]
else:
hi = np.searchsorted(x, x_new)
# Clip them so that they are at least 1.
# Removes mis-interpolation of x_new[n] = x[0]
hi = hi.clip(1, len(x)-1).astype(int)
lo = hi - 1
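# Fractional position of each new wavelength within its bracketing
# interval [x[lo], x[hi]]; it is the same for every component.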
frac_x = (x_new - x[lo]) / (x[hi] - x[lo])
x_cache[key] = (lo, hi, frac_x)
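# Gather the bracketing luminosities of all the components at once and
# interpolate linearly along the wavelength axis.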
y_lo = np.take(y, lo, axis=-1)
y_hi = np.take(y, hi, axis=-1)
return y_lo + (y_hi - y_lo) * frac_x
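A minimal usage sketch of quick_interp_lum, assuming it and its module-level x_cache are importable; the arrays below are made up for illustration:

import numpy as np

# Original wavelength grid and a stack of three component luminosities.
wl = np.linspace(100., 1000., 50)
lumin = np.vstack([np.sin(wl / 100.), np.cos(wl / 100.), wl / 1000.])

# New wavelengths, all strictly inside the original range, as the
# function assumes.
wl_new = np.linspace(150., 950., 200)

# All three components are interpolated in one vectorised call; the
# bracketing indices and weights are cached for subsequent calls with
# the same grid sizes.
lumin_new = quick_interp_lum(wl_new, wl, lumin)  # shape (3, 200)

# Equivalent to interpolating each component separately with np.interp.
assert np.allclose(lumin_new,
                   [np.interp(wl_new, wl, lumin[i]) for i in range(3)])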