- 12 Feb, 2016 1 commit
-
Médéric Boquien authored
In the output, the sfh.age parameter corresponded to the input value minus 1. Now both values are consistent with one another.
-
- 05 Feb, 2016 1 commit
-
Médéric Boquien authored
The optionally saved spectra in the pdf_analysis and savefluxes modules were stored in the VO-table format. Its most important downside is that it is very slow to write, which proved to be a major bottleneck in the computing speed. To overcome this issue, we now save the spectra in the FITS format. Instead of a single file containing both the spectra (including the various components) and the SFH, we now have one file for the spectra and one file for the SFH.
-
- 04 Dec, 2015 1 commit
-
Médéric Boquien authored
-
- 11 Oct, 2015 1 commit
-
Médéric Boquien authored
Magic values are a bit of a pain to handle and are not safe. This patch removes the use of magic values and replaces them with NaN wherever appropriate. The logic of the input file stays the same, but the magic values are converted internally after reading so that the rest of the code does not have to deal with them.
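The idea can be sketched as follows; the sentinel value and function name here are illustrative, not the actual cigale code:

```python
import numpy as np

# Hypothetical sketch: convert a magic sentinel (e.g. -9999 marking a
# missing flux) to NaN right after reading the input table, so the rest
# of the code only ever has to deal with NaN.
MAGIC = -9999.0

def sanitize(fluxes):
    """Return a float copy of `fluxes` with magic values replaced by NaN."""
    fluxes = np.asarray(fluxes, dtype=float)
    return np.where(fluxes == MAGIC, np.nan, fluxes)
```

Downstream code can then rely on `np.isnan()` rather than comparing against an arbitrary sentinel.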
-
- 20 Sep, 2015 1 commit
-
Médéric Boquien authored
-
- 10 Sep, 2015 1 commit
-
Médéric Boquien authored
Ensure that the .fnu @property returns a result even if the redshifting module has not been applied; in that case we assume a default distance of 10 parsecs. Note that as the redshifting module is mandatory (it is needed to apply the IGM absorption, which has a small effect even at z=0 with this formula), this should not happen in practice. Issue found by Yannick Roehlly.
-
- 30 Aug, 2015 1 commit
-
Médéric Boquien authored
-
- 29 Aug, 2015 2 commits
-
Médéric Boquien authored
-
Médéric Boquien authored
If the redshift information is present, then the luminosity distance necessarily is too, so there is no need to test separately for the presence of each.
-
- 28 Aug, 2015 1 commit
-
Médéric Boquien authored
InfoDict was made to overcome the slowness of copying an OrderedDict. However, since we have now removed OrderedDict, we can simply switch to a regular dictionary, which happens to be fast to copy.
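A minimal sketch of the substitution (the key names are made up for illustration); note that since Python 3.7 a plain dict is guaranteed to preserve insertion order, so nothing is lost relative to an OrderedDict here:

```python
# A plain dict preserves insertion order (guaranteed since Python 3.7)
# and its copy() is a fast C-level shallow copy.
info = {"sfh.age": 100, "stellar.m_star": 1e10}

info_copy = info.copy()     # cheap, unlike copying an OrderedDict
info_copy["sfh.age"] = 200  # does not affect the original
```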
-
- 27 Aug, 2015 2 commits
-
Médéric Boquien authored
As Python evaluates expressions from left to right, put the vector in the rightmost position so that we do not multiply it by a scalar several times.
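The effect, sketched with arbitrary values:

```python
import numpy as np

# Python evaluates "v * a * b" left to right as (v * a) * b, which
# performs two full-size vector multiplications. With the vector last,
# "a * b * v" multiplies the scalars first and touches the vector once.
a, b = 2.0, 3.0
v = np.arange(1000, dtype=float)

slow = v * a * b   # two vector-sized products, one temporary array
fast = a * b * v   # one cheap scalar product, one vector-sized product
```

The results are identical; only the number of vector-sized operations changes.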
-
Médéric Boquien authored
When computing the flux in filters, np.trapz() becomes the bottleneck of the code. A large part of the time is spent on safeguards and on operations for nD arrays, whereas in our specific case we only have 1D arrays. Also, we can cache some computed variables, for instance dx, which only depends on the wavelength sampling. We do that using the same key that is used for caching the best sampling of each filter in compute_fnu(), so the caches stay consistent. The key is based on the size of the wavelength grid, the name of the filter, and the redshift.
-
- 26 Aug, 2015 4 commits
-
Médéric Boquien authored
Replace the SED.info OrderedDict() with a very lightweight and specifically tailored reimplementation of an OrderedDict(). The reason is that an OrderedDict() takes an inordinate amount of time to copy, which in the end could amount to a very significant fraction of the total runtime. Do not use this new implementation anywhere else, or it will break your code: it makes very strong assumptions about how it is to be used.
-
Médéric Boquien authored
-
Médéric Boquien authored
To check whether a variable is among the keys of a dictionary, there is actually no need to call the keys() function. A dictionary has a __contains__() member, which is called when the "in" keyword is used.
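In short:

```python
d = {"sfh.age": 100}

fast = "sfh.age" in d          # invokes d.__contains__ directly
slow = "sfh.age" in d.keys()   # same result, via a needless keys view
```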
-
Médéric Boquien authored
There is no need to build a list of keys to check whether a variable is among the keys of a dictionary.
-
- 23 Jul, 2015 1 commit
-
Yannick Roehlly authored
Addition of a new module that applies a quench to the star formation history. The SED object had to be changed slightly to allow the SED to be modified.
-
- 16 Jul, 2015 2 commits
-
Médéric Boquien authored
For the cache to be efficient, it needs to be shared between all instances of the class. As the computation is done in separate processes, we do not need to worry about race conditions: the cache will only ever be accessed by a single instance at any given time.
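A minimal sketch of a class-level cache; the class and function names are illustrative, not the actual cigale API. Each worker process gets its own copy of the class attribute, so no locking is required:

```python
def expensive_resampling(wavelength):
    """Stand-in for the real, costly resampling computation."""
    return [w * 2.0 for w in wavelength]

class Filter:
    _cache = {}  # stored on the class: shared by every instance

    def resample(self, key, wavelength):
        if key not in Filter._cache:
            Filter._cache[key] = expensive_resampling(wavelength)
        return Filter._cache[key]
```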
-
Médéric Boquien authored
The wavelength sampling varies with wavelength, which means that we need to cache different grids depending on the redshift.
-
- 15 Jul, 2015 3 commits
-
Only copy the SFH if it exists. For instance when only modelling the dust we do not have any stellar population.
-
Yannick Roehlly authored
Using a list to keep the information keys that are proportional to the mass leads to problems when forcing the update of a proportional piece of information: the key name then appears twice (or more) in the list, and the information is multiplied by the mass twice (or more).
-
Yannick Roehlly authored
Using a list to keep the information keys that are proportional to the mass leads to problems when forcing the update of a proportional piece of information: the key name then appears twice (or more) in the list, and the information is multiplied by the mass twice (or more).
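A sketch of the failure mode and the fix, with illustrative names rather than the actual cigale API: storing the keys in a set makes registration idempotent, so a forced update cannot cause the value to be scaled by the mass twice.

```python
info = {}
mass_proportional = set()  # a list here would accumulate duplicates

def add_info(key, value, mass_proportional_info=False):
    info[key] = value
    if mass_proportional_info:
        mass_proportional.add(key)  # idempotent, unlike list.append

def scale_by_mass(mass):
    # Each key is scaled exactly once, however often it was registered.
    for key in mass_proportional:
        info[key] *= mass
```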
-
- 25 May, 2015 1 commit
-
Médéric Boquien authored
Rather than reinterpolating all the components when adding a new one, let's be smart about it and compute the interpolation only for the new wavelengths. At the same time, make use of memoisation to avoid repeating the same computation every time different wavelength grids are merged.
-
- 24 May, 2015 3 commits
-
Médéric Boquien authored
Optimise the copy of a SED object. First, use direct references to the time grid and the SFH rather than copies, as they will never change. Then, directly assign self._sfh, bypassing the slow setter.
-
Médéric Boquien authored
-
Médéric Boquien authored
-
- 21 May, 2015 1 commit
-
Médéric Boquien authored
Implement a cache for the filters used to compute the fluxes. Rather than passing the transmission table as an argument to compute_fnu(), which makes caching difficult and/or the hash slow to compute, we pass the filter name instead. The SED object then fetches the filter from the database and resamples it on the optimal wavelength grid. The result is stored in the cache to avoid carrying out this operation repeatedly.
-
- 20 May, 2015 2 commits
-
Médéric Boquien authored
As there should never be an sfh.age key in sed.info when we add it, remove the option to force an update. It is better to stop, as its presence shows that something is going wrong.
-
Médéric Boquien authored
-
- 24 Apr, 2015 1 commit
-
BURGARELLA Denis authored
-
- 10 Nov, 2014 4 commits
-
Médéric Boquien authored
Let's avoid calling the costly np.max() function on ordered arrays: the maximum will always be the last element. Also use the fact that the arrays are sorted to select the last 100 elements when computing the average SFR.
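In code, with an arbitrary declining SFH for illustration:

```python
import numpy as np

# The time grid is sorted, so the maximum is simply the last element
# and the most recent entries sit at the end of the array.
t = np.arange(1000)           # ages, sorted ascending
sfr = np.exp(-t / 500.0)      # an arbitrary declining SFH

age = t[-1]                   # instead of np.max(t)
sfr_mean = sfr[-100:].mean()  # average SFR over the last 100 steps
```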
-
Médéric Boquien authored
Setters and getters are nice, but they come with a significant overhead. Because we initialise class members to None anyway, they should not be needed at all. I leave the setter/getter for the sfh, as it actually does something more, though it now accounts for a non-negligible fraction of the runtime.
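For illustration, the difference between the two styles (class names made up):

```python
# A @property routes every access through a Python-level getter call;
# a plain attribute is a single instance-dictionary lookup.
class WithProperty:
    def __init__(self):
        self._luminosity = 0.0

    @property
    def luminosity(self):
        return self._luminosity

class Plain:
    def __init__(self):
        self.luminosity = 0.0  # direct attribute, no descriptor overhead
```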
-
Médéric Boquien authored
Rather than relying on deepcopy, let's build the new SED objects ourselves. As we know the structure perfectly, we can do that much more efficiently. To do so, we add a copy() member to the SED class. This function creates a new object and initialises its members with proper copies.
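A hypothetical sketch of such a hand-written copy() (the member names are simplified, not the real SED class): members that never change are shared by reference, mutable ones are shallow-copied, and slow setters are bypassed.

```python
class SED:
    def __init__(self, time_grid, sfh, info):
        self.time_grid = time_grid
        self._sfh = sfh
        self.info = info

    def copy(self):
        sed = SED.__new__(SED)          # skip __init__ entirely
        sed.time_grid = self.time_grid  # never modified: share the reference
        sed._sfh = self._sfh            # assign directly, bypass the setter
        sed.info = self.info.copy()     # shallow-copy the mutable dict
        return sed
```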
-
Médéric Boquien authored
Call the copy() class member rather than the base numpy function. Quick testing shows it is faster: 1.1 μs vs 2.7 μs for an array of 2000 elements.
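The two calls being compared:

```python
import numpy as np

# The bound method a.copy() skips the dispatch overhead of the
# module-level np.copy(a); the resulting arrays are identical.
a = np.arange(2000, dtype=float)
b = a.copy()    # method call (measured faster in the commit message)
c = np.copy(a)  # module function (measured slower)
```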
-
- 14 Oct, 2014 1 commit
-
Médéric Boquien authored
Computing the luminosity each time we integrate in a filter (for instance) is not efficient. As the luminosity is in effect modified in only one place (add_contribution), let's just update it each time we add a component. This yields a ~10% improvement.
-
- 22 Aug, 2014 2 commits
-
Médéric Boquien authored
Use the base numpy interp() function rather than scipy's interp1d(). The reason is that when interpolating only one array, the former is much faster (probably because it is compiled and there is no overhead from returning a function). We can only do that for the new component; for the old ones, there are too many for this to be worthwhile. Still, it nets a nice improvement.
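The direct call looks like this (arbitrary sample data):

```python
import numpy as np

# np.interp performs piecewise-linear interpolation directly in C,
# whereas scipy's interp1d first constructs an interpolator object.
x = np.linspace(0.0, 10.0, 11)
y = x ** 2
x_new = np.array([2.5, 7.5])

y_new = np.interp(x_new, x, y)  # values returned directly, no callable
```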
-
Médéric Boquien authored
The wavelength arrays are sorted, so there is no need to call min() and max() to find the extrema: we can directly take the first and last elements.
-
- 21 Aug, 2014 2 commits
-
Médéric Boquien authored
-
Médéric Boquien authored
Small optimisation in the interpolation. There is no need to sort the arrays because they are already sorted.
-
- 29 Mar, 2014 1 commit
-
Médéric Boquien authored
Optimise the computation of the fluxes: 1) consider only the section of the SED corresponding to the filter when computing the best grid; 2) as filters are already normalised, there is no need to divide by the filter's integral to get the flux.
-