# Change Log
## Unreleased
### Added
- The (1+z1)/(1+z2) factor between observed and grid flux densities caused by differential redshifting is now taken into account (see the sketch after this list). With the default grid redshift rounding of two decimals, this yields a difference of at most 0.5% in the estimated physical properties at z=0, and even less at higher z. (Médéric Boquien)
- The list of bands for which to carry out a Bayesian flux estimate is now configurable. By default it corresponds to the fitted bands, but bands that are not included in the fit are also supported. This can be set in the `fluxes` parameter of the `pdf_analysis` module. (Médéric Boquien)
- Implementation of the auto-detection of lines in the input flux file so they are automatically added to the list of bands in `pcigale.ini`. (Médéric Boquien)
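As a quick illustration of the differential-redshifting item above (the function below is a hypothetical sketch, not part of cigale):

```python
# Hypothetical helper, not part of cigale: ratio of the (1+z) factors
# between the model grid and the observation.
def corr_1plusz(z_grid, z_obs):
    return (1. + z_grid) / (1. + z_obs)

# With the default two-decimal rounding, z_obs differs from z_grid by at most
# 0.005, so the factor stays within ~0.5% at z=0 and shrinks at higher z.
print(corr_1plusz(0.00, 0.005))  # ~0.995, i.e. a 0.5% effect at z=0
print(corr_1plusz(1.00, 1.005))  # ~0.9975, only a 0.25% effect at z=1
```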
### Changed
- Python 3.6 is now the minimum required version. (Médéric Boquien)
- The logo has been moved to the lower-right corner of the figure so that it does not overlap with any information, and it has been updated to a less pixelated version. (Médéric Boquien & Rodrigo González Castillo)
- The wavelength range in SED plots is now dynamically adapted to cover the observed wavelengths. (Médéric Boquien)
- The uncertainties on the SED plots now correspond only to 1σ rather than 3σ so they do not appear exceedingly large. (Médéric Boquien)
- The lines linking the different bands in the residual SED plot have been eliminated to improve the readability. (Médéric Boquien)
- Some lines have been made slightly thicker in SED plots so the different components are more visible. (Médéric Boquien)
- The colours in the SED plots have been tweaked for aesthetic reasons. (Médéric Boquien)
- The markers for the observed fluxes in the SED plots have been tweaked to improve readability. (Médéric Boquien)
- The computation of all the cosmology-dependent quantities has been consolidated in pcigale/utils/cosmology.py and optimised (see the sketch after this list). This leads to a slightly faster startup, in particular when there are many objects to fit, and it makes it easier to change the cosmology. (Médéric Boquien)
- The time spent computing is now displayed in hours, minutes, and seconds rather than just seconds to improve legibility. (Médéric Boquien)
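For illustration, a minimal sketch of what such a consolidated, cached cosmology helper can look like (the actual contents of pcigale/utils/cosmology.py may differ); it follows the conventions visible in the diffs below: distances in metres, with 10 pc by convention at z=0.

```python
# Sketch only: the real pcigale/utils/cosmology.py may differ in details.
from functools import lru_cache

from astropy.cosmology import WMAP7 as cosmo
from scipy.constants import parsec


@lru_cache(maxsize=None)
def luminosity_distance(z):
    """Luminosity distance in metres; 10 pc by convention at z=0."""
    if z == 0.:
        return 10. * parsec
    return cosmo.luminosity_distance(z).value * 1e6 * parsec
```

Caching pays off because the rounded grid redshifts are shared by many objects, so each distance is only computed once.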
### Fixed
- Make sure we can plot the PDF of equivalent widths. (Médéric Boquien)
- Fix a crash when generating a mock catalogue containing intensive properties. (Médéric Boquien)
- In the `sfhdelayed` and `sfhdelayedbq` modules, provide the correct description for the sfr_A parameter. (Médéric Boquien & Laure Ciesla)
- Internally the luminosity distance was erroneously stored in Mpc rather than in m for non-zero redshifts. This has now been standardised to m. (Médéric Boquien)
- As the best-fit properties are computed at the exact observed redshift, the scaling factor, which is computed at the grid redshift, is now corrected accordingly. This fixes slight offsets in the best-fit properties when the input redshift has more decimals than the grid redshift. (Médéric Boquien)
- Fix the pip install by making pcigale.managers discoverable. (Yannick Roehlly)
- When using a parameters file, Boolean values were not correctly interpreted. (Médéric Boquien, reported by Eric Martínez, INAOE)
- Make sure that the best-fit models are stored with the correct scaling factor when the distance is given explicitly. (Médéric Boquien)
- Some labels and the title of the SED plots have been improved to avoid overlaps and overflows. (Médéric Boquien)
- Ensure that the best models are properly computed when the models are computed by blocks and no fit could be made in one or more blocks. This can be the case if all the models in a block are older than the age of the universe. (Médéric Boquien)
- Make sure that the parameters are saved with the proper scale (linear or logarithmic) in the χ² files. (Médéric Boquien)
- Some math libraries such as MKL or OpenBLAS sometimes try to be (too) smart, starting computation threads on their own. As cigale is already parallel, this just oversubscribes the CPU and can lead to significant slowdowns. An environment variable could be set by the user to disable this, but that is cumbersome. Rather, we set these variables directly in the code at the startup of cigale. (Yannick Roehlly & Médéric Boquien)
### Optimised
- Slight speedup of the computation of the likelihood from the χ² using a multiplication rather than a division. (Médéric Boquien)
- Speedup of the computation of the χ² by ~10% by taking the opposite of a scalar rather than of an array. (Médéric Boquien)
- Thanks to a change in the layout of the models storage in RAM, the computation of the χ² is now massively faster when the run contains multiple redshifts. (Médéric Boquien)
- The computation of the weighted means and standard deviations has been made ~50% faster by normalising the likelihood (see the sketch after this list). (Médéric Boquien)
- The fritz2006 module should now run faster thanks to an optimisation of the computation of the luminosity of the various AGN components. (Médéric Boquien & Guang Yang)
- Various optimisations have been made regarding shared arrays to make their access faster. The overall effect is a speedup of 3-4% for the computation of the models. (Médéric Boquien)
- All the cores should now be used over the entire duration of the computation of the Bayesian and best-fit estimates. Previously, the number of active cores could drop towards the end of the computation. (Médéric Boquien)
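The likelihood-related optimisations above (multiplication instead of division, and pre-normalised weights) combine as in this sketch, which mirrors the `weighted_param` and likelihood changes visible in the diffs below:

```python
# Synthetic data for demonstration only.
import numpy as np

rng = np.random.default_rng(0)
chi2 = rng.uniform(0., 20., 1000)

likelihood = np.exp(-.5 * chi2)      # multiplication rather than division
likelihood *= 1. / likelihood.sum()  # normalise the weights once, up front


def weighted_param(param, weights):
    # The weights are assumed to sum to 1, so np.average's internal
    # renormalisation can be skipped entirely.
    mean = np.sum(param * weights)
    std = np.sqrt(np.sum((param - mean) ** 2 * weights))
    return (mean, std)


mean, std = weighted_param(rng.normal(10., 1., 1000), likelihood)
```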
## 2018.0 (2018-11-06)
### Added
- It is now possible to optionally indicate the distance in Mpc in the input file. If present, it will be used in lieu of the distance computed from the redshift. This is especially useful in the nearby universe, where the redshift is a very poor indicator of the actual distance. (Médéric Boquien)
......@@ -144,7 +182,7 @@
- The output files providing estimates of the physical properties are now generated both in the form of text and FITS files. (Médéric Boquien)
- When using the `dustatt_calzleit` module, choosing ẟ≠0 leads to an effective E(B-V) different from the one set by the user. Now the E(B-V) will always correspond to the one specified by the user. This means that at fixed E(B-V), A(V) depends on ẟ. (Médéric Boquien)
- The pcigale-mock tool has been merged into pcigale-plots; the mock plots can be obtained with the "mock" command. (Médéric Boquien)
- The `sfhdelayed` module is now initialised with _init_code() to be consistent with the way things are done in other modules. This should give a slight speedup under some circumstances too. (Médéric Boquien)
- The `sfhdelayed` module is now initialised with \_init_code() to be consistent with the way things are done in other modules. This should give a slight speedup under some circumstances too. (Médéric Boquien)
- In `sfhfromfile`, the specification of the time grid was vague and therefore could lead to incorrect results if it was not properly formatted by the end user. The description has been clarified and we now check that the time starts from 0 and that the time step is always 1 Myr. If it is not the case we raise an exception. (Médéric Boquien)
- When the redshift is not indicated in pcigale.ini, the analysis module fills the list of redshifts from the redshifts indicated in the input flux file. This is inefficient as analysis modules should not have to modify the configuration. Now this is done when interpreting pcigale.ini, before calling the relevant analysis module. As a side effect, "pcigale check" now returns the total number of models that cigale will compute rather than the number of models per redshift bin. (Médéric Boquien)
- The optionally saved spectra in the `pdf_analysis` and `savefluxes` modules were saved in the VO-table format. The main downside is that it is very slow to write, which proved to be a major bottleneck in the computing speed. To overcome this issue, we now save the spectra using the FITS format. Instead of one file containing both the spectra (including the various components) and the SFH, we now have one file for the spectra and one file for the SFH.
......@@ -178,7 +216,7 @@
## 0.8.0 (2015-12-01)
### Added
- The evaluation of the parameters is always done linearly. This can be a problem when estimating the SFR or the stellar mass, for instance, as it is usual to estimate their log instead. Because the log is non-linear, the likelihood-weighted mean of the log is not the log of the likelihood-weighted mean. Therefore the estimation of the log of these parameters has to be done during the analysis step. This is now possible. The variables to be analysed in log just need to be indicated with the suffix "_log", for instance "stellar.m\_star\_log". (Médéric Boquien, idea suggested by Samir Salim)
- The evaluation of the parameters is always done linearly. This can be a problem when estimating the SFR or the stellar mass, for instance, as it is usual to estimate their log instead. Because the log is non-linear, the likelihood-weighted mean of the log is not the log of the likelihood-weighted mean. Therefore the estimation of the log of these parameters has to be done during the analysis step. This is now possible. The variables to be analysed in log just need to be indicated with the suffix "\_log", for instance "stellar.m\_star\_log". (Médéric Boquien, idea suggested by Samir Salim)
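A toy numerical illustration of why the log must be averaged during the analysis rather than taken afterwards (numbers are made up):

```python
import numpy as np

mstar = np.array([1e9, 1e11])   # two models with equal likelihood
weights = np.array([0.5, 0.5])

print(np.log10(np.sum(weights * mstar)))  # log of the mean: ~10.70
print(np.sum(weights * np.log10(mstar)))  # mean of the log:  10.00
```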
### Fixed
- Running the scripts in parallel triggers a deadlock on OS X with Python 3.5. A workaround has been implemented. (Médéric Boquien)
......
......@@ -3,6 +3,15 @@
# Licensed under the CeCILL-v2 licence - see Licence_CeCILL_V2-en.txt
# Author: Yannick Roehlly
import os
# Set environment variables to disable multithreading as users will probably
# want to set the number of cores to the max of their computer.
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"
os.environ["VECLIB_MAXIMUM_THREADS"] = "1"
os.environ["NUMEXPR_NUM_THREADS"] = "1"
import argparse
import multiprocessing as mp
import sys
......@@ -37,8 +46,8 @@ def check(config):
configuration = config.configuration
if configuration:
print("With this configuration cigale will compute {} "
"models.".format(ParametersManager(configuration).size))
print(f"With this configuration cigale will compute "
f"{ParametersManager(configuration).size} models.")
def run(config):
......@@ -52,12 +61,9 @@ def run(config):
def main():
if sys.version_info[:2] < (3, 5):
raise Exception("Python {}.{} is unsupported. Please upgrade to "
"Python 3.5 or later.".format(*sys.version_info[:2]))
if sys.version_info[:2] < (3, 6):
print("Python {}.{} detected. For better performance we recommend "
"Python 3.6 or later.".format(*sys.version_info[:2]))
raise Exception(f"Python {sys.version_info[0]}.{sys.version_info[1]} is"
f" unsupported. Please upgrade to Python 3.6 or later.")
# We set the sub processes start method to spawn because it solves
# deadlocks when a library cannot handle being used on two sides of a
......
......@@ -54,7 +54,7 @@ class AnalysisModule(object):
if os.path.exists('out/'):
name = datetime.now().strftime("%Y-%m-%d_%H:%M:%S") + '_out/'
os.rename('out/', name)
print("The out/ directory was renamed to {}".format(name))
print(f"The out/ directory was renamed to {name}")
os.mkdir('out/')
shutil.copy('pcigale.ini', 'out/')
......
......@@ -31,7 +31,7 @@ import multiprocessing as mp
import numpy as np
from .. import AnalysisModule
from ..utils import Counter
from utils.counter import Counter
from .workers import sed as worker_sed
from .workers import init_sed as init_worker_sed
from .workers import init_analysis as init_worker_analysis
......@@ -55,6 +55,13 @@ class PdfAnalysis(AnalysisModule):
"are many models).",
["sfh.sfr", "sfh.sfr10Myrs", "sfh.sfr100Myrs"]
)),
("bands", (
"cigale_string_list()",
"List of bands for which to estimate the fluxes. Note that this is "
"independent from the fluxes actually fitted to estimate the "
"physical properties.",
None
)),
("save_best_sed", (
"boolean()",
"If true, save the best SED for each observation to a file.",
......@@ -114,16 +121,17 @@ class PdfAnalysis(AnalysisModule):
initargs = (models, results, Counter(len(obs)))
self._parallel_job(worker_analysis, obs, initargs,
init_worker_analysis, conf['cores'])
init_worker_analysis, conf['cores'], 1)
return results
def _compute_best(self, conf, obs, params, results):
initargs = (conf, params, obs, results, Counter(len(obs)))
self._parallel_job(worker_bestfit, obs, initargs,
init_worker_bestfit, conf['cores'])
init_worker_bestfit, conf['cores'], 1)
def _parallel_job(self, worker, items, initargs, initializer, ncores):
def _parallel_job(self, worker, items, initargs, initializer, ncores,
chunksize=None):
if ncores == 1: # Do not create a new process
initializer(*initargs)
for idx, item in enumerate(items):
......@@ -131,13 +139,13 @@ class PdfAnalysis(AnalysisModule):
else: # run in parallel
with mp.Pool(processes=ncores, initializer=initializer,
initargs=initargs) as pool:
pool.starmap(worker, enumerate(items))
pool.starmap(worker, enumerate(items), chunksize)
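# Passing chunksize=1 from the callers above hands the objects out one at a
# time, which keeps every core busy until the very end of the computation
# instead of letting some idle once their larger chunk is exhausted.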
def _compute(self, conf, obs, params):
results = []
nblocks = len(params.blocks)
for iblock in range(nblocks):
print('\nProcessing block {}/{}...'.format(iblock + 1, nblocks))
print(f"\nProcessing block {iblock + 1}/{nblocks}...")
# We keep the models if there is only one block. This allows us to
# avoid recomputing the models when we do a mock analysis.
if not hasattr(self, '_models'):
......@@ -195,6 +203,7 @@ class PdfAnalysis(AnalysisModule):
obs.save('observations')
results = self._compute(conf, obs, params)
print("\nSanity check of the analysis results...")
results.best.analyse_chi2()
print("\nSaving the analysis results...")
......@@ -205,7 +214,7 @@ class PdfAnalysis(AnalysisModule):
# For the mock analysis we do not save the ancillary files.
for k in ['best_sed', 'chi2']:
conf['analysis_params']["save_{}".format(k)] = False
conf['analysis_params'][f"save_{k}"] = False
# We replace the observations with a mock catalogue..
obs.generate_mock(results)
......
......@@ -8,7 +8,7 @@
from functools import lru_cache
from astropy import log
from astropy.cosmology import WMAP7 as cosmo
from ...utils.cosmology import luminosity_distance
import numpy as np
from scipy import optimize
from scipy.constants import parsec
......@@ -21,8 +21,8 @@ def save_chi2(obs, variable, models, chi2, values):
"""Save the chi² and the associated physocal properties
"""
fname = 'out/{}_{}_chi2-block-{}.npy'.format(obs.id, variable.replace('/',
'_'), models.iblock)
fname = f"out/{obs.id}_{variable.replace('/', '_')}_chi2-block-" \
f"{models.iblock}.npy"
data = np.memmap(fname, dtype=np.float64, mode='w+',
shape=(2, chi2.size))
data[0, :] = chi2
......@@ -30,26 +30,26 @@ def save_chi2(obs, variable, models, chi2, values):
@lru_cache(maxsize=None)
def compute_corr_dz(model_z, obs_dist):
def compute_corr_dz(model_z, obs):
"""The mass-dependent physical properties are computed assuming the
redshift of the model. However because we round the observed redshifts to
two decimals, there can be a difference of up to 0.005 in redshift between the
models and the actual observation. At low redshift, this can cause a
models and the actual observation. This causes two issues. First there is a
difference in the luminosity distance. At low redshift, this can cause a
discrepancy in the mass-dependent physical properties: ~0.35 dex at z=0.010
vs 0.015 for instance. Therefore we correct these physical quantities by
multiplying them by corr_dz.
vs 0.015 for instance. In addition, the 1+z dimming will be different.
We compute here the correction factor for these two effects.
Parameters
----------
model_z: float
Redshift of the model.
obs_dist: float
Luminosity distance of the observed object.
obs: instance of the Observation class
Object containing the distance and redshift of an object
"""
if model_z == 0.:
return (obs_dist / (10. * parsec)) ** 2.
return (obs_dist / cosmo.luminosity_distance(model_z).value) ** 2.
return (obs.distance / luminosity_distance(model_z)) ** 2. * \
(1. + model_z) / (1. + obs.redshift)
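# Order-of-magnitude check of the figure quoted in the docstring: at low z
# the luminosity distance is nearly proportional to z, so for z=0.010 vs
# 0.015 the distance term alone gives (0.015 / 0.010)**2 = 2.25, i.e.
# ~0.35 dex.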
def dchi2_over_ds2(s, obsdata, obsdata_err, obslim, obslim_err, moddata,
......@@ -252,12 +252,12 @@ def compute_chi2(models, obs, corr_dz, wz, lim_flag):
# inverse error
inv_flux_err = 1. / obs.flux_err[band]
model = models.flux[band][wz]
chi2 += ((flux - model * scaling) * inv_flux_err) ** 2.
chi2 += ((model * scaling - flux) * inv_flux_err) ** 2.
# Computation of the χ² from intensive properties
for name, prop in obs.intprop.items():
model = models.intprop[name][wz]
chi2 += ((prop - model) * (1. / obs.intprop_err[name])) ** 2.
chi2 += ((model - prop) * (1. / obs.intprop_err[name])) ** 2.
# Computation of the χ² from extensive properties
for name, prop in obs.extprop.items():
......@@ -265,7 +265,7 @@ def compute_chi2(models, obs, corr_dz, wz, lim_flag):
# inverse error
inv_prop_err = 1. / obs.extprop_err[name]
model = models.extprop[name][wz]
chi2 += ((prop - (scaling * model) * corr_dz) * inv_prop_err) ** 2.
chi2 += (((scaling * model) * corr_dz - prop) * inv_prop_err) ** 2.
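# The operands of the subtractions above were swapped; the squares are
# identical, but as noted in the change log, taking the opposite of a scalar
# rather than of an array speeds the χ² computation up by ~10%.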
# Finally take the presence of upper limits into account
if limits is True:
......@@ -285,13 +285,15 @@ def compute_chi2(models, obs, corr_dz, wz, lim_flag):
def weighted_param(param, weights):
"""Compute the weighted mean and standard deviation of an array of data.
Note that here we assume that the sum of the weights is normalised to 1.
This simplifies and accelerates the computation.
Parameters
----------
param: array
Values of the parameters for the entire grid of models
weights: array
Weights by which to weight the parameter values
Weights by which to weigh the parameter values
Returns
-------
......@@ -302,7 +304,7 @@ def weighted_param(param, weights):
"""
mean = np.average(param, weights=weights)
std = np.sqrt(np.average((param-mean)**2, weights=weights))
mean = np.sum(param * weights)
std = np.sqrt(np.sum((param - mean)**2 * weights))
return (mean, std)
......@@ -10,7 +10,6 @@ from copy import deepcopy
import numpy as np
from ..utils import nothread
from .utils import save_chi2, compute_corr_dz, compute_chi2, weighted_param
from ...warehouse import SedWarehouse
......@@ -28,10 +27,6 @@ def init_sed(models, counter):
"""
global gbl_warehouse, gbl_models, gbl_counter
# Limit the number of threads to 1 if we use MKL in order to limit the
# oversubscription of the CPU/RAM.
nothread()
gbl_warehouse = SedWarehouse()
gbl_models = models
......@@ -103,6 +98,9 @@ def sed(idx, midx):
sed = gbl_warehouse.get_sed(gbl_models.params.modules,
gbl_models.params.from_index(midx))
# The redshift is the fastest varying variable but we want to store it
# as the slowest one so that models at a given redshift are contiguous
idx = (idx % gbl_models.nz) * gbl_models.nm + idx // gbl_models.nz
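# For example, with nz=3 redshifts and nm=4 models per redshift, grid index
# 5 (model 1 at redshift 2) is remapped to (5 % 3) * 4 + 5 // 3 = 9, so the
# models sharing a redshift become contiguous in memory.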
if 'sfh.age' in sed.info and sed.info['sfh.age'] > sed.info['universe.age']:
for band in gbl_models.flux:
gbl_models.flux[band][idx] = np.nan
......@@ -117,6 +115,7 @@ def sed(idx, midx):
gbl_models.extprop[prop][idx] = sed.info[prop]
for prop in gbl_models.intprop:
gbl_models.intprop[prop][idx] = sed.info[prop]
gbl_models.index[idx] = midx
gbl_counter.inc()
......@@ -141,8 +140,10 @@ def analysis(idx, obs):
# work on views of the arrays and not on copies to save on RAM.
z = np.array(
gbl_models.conf['sed_modules_params']['redshifting']['redshift'])
wz = slice(np.abs(obs.redshift-z).argmin(), None, z.size)
corr_dz = compute_corr_dz(z[wz.start], obs.distance)
length = gbl_models.nm
zidx = np.abs(obs.redshift-z).argmin()
wz = slice(zidx * length, (zidx + 1) * length, 1)
corr_dz = compute_corr_dz(z[zidx], obs)
else: # We do not know the redshift so we use the full grid
wz = slice(0, None, 1)
corr_dz = 1.
......@@ -153,7 +154,7 @@ def analysis(idx, obs):
if np.any(chi2 < -np.log(np.finfo(np.float64).tiny) * 2.):
# We use the exponential probability associated with the χ² as
# likelihood function.
likelihood = np.exp(-chi2 / 2.)
likelihood = np.exp(-.5 * chi2)
wlikely = np.where(np.isfinite(likelihood))
# If all the models are valid, it is much more efficient to use a slice
if likelihood.size == wlikely[0].size:
......@@ -162,6 +163,7 @@ def analysis(idx, obs):
scaling_l = scaling[wlikely]
gbl_results.bayes.weight[idx] = np.nansum(likelihood)
likelihood *= 1. / gbl_results.bayes.weight[idx]
# We compute the weighted average and standard deviation using the
# likelihood as weight.
......@@ -176,7 +178,7 @@ def analysis(idx, obs):
gbl_results.bayes.intmean[prop][idx] = mean
gbl_results.bayes.interror[prop][idx] = std
if gbl_models.conf['analysis_params']['save_chi2'] is True:
save_chi2(obs, prop, gbl_models, chi2, values)
save_chi2(obs, prop, gbl_models, chi2, _(values))
for prop in gbl_results.bayes.extmean:
if prop.endswith('_log'):
......@@ -191,7 +193,7 @@ def analysis(idx, obs):
gbl_results.bayes.exterror[prop][idx] = std
if gbl_models.conf['analysis_params']['save_chi2'] is True:
save_chi2(obs, prop, gbl_models, chi2,
values * scaling * corr_dz)
_(values * scaling * corr_dz))
for band in gbl_results.bayes.fluxmean:
values = gbl_models.flux[band][wz]
......@@ -205,13 +207,7 @@ def analysis(idx, obs):
best_idx_z = np.nanargmin(chi2)
gbl_results.best.chi2[idx] = chi2[best_idx_z]
gbl_results.best.scaling[idx] = scaling[best_idx_z]
gbl_results.best.index[idx] = (wz.start + best_idx_z*wz.step +
gbl_models.block.start)
else:
# It sometimes happens because models are older than the Universe's age
print("No suitable model found for the object {}. It may be that "
"models are older than the Universe or that your chi² are very "
"large.".format(obs.id))
gbl_results.best.index[idx] = gbl_models.index[wz][best_idx_z]
gbl_counter.inc()
......@@ -229,30 +225,47 @@ def bestfit(oidx, obs):
"""
np.seterr(invalid='ignore')
best_index = int(gbl_results.best.index[oidx])
# We compute the model at the exact redshift not to have to correct for the
# difference between the object and the grid redshifts.
params = deepcopy(gbl_params.from_index(best_index))
if obs.redshift >= 0.:
params[gbl_params.modules.index('redshifting')]['redshift'] = obs.redshift
corr_dz = compute_corr_dz(obs.redshift, obs.distance)
else: # The model redshift is always exact in redshifting mode
corr_dz = 1.
sed = gbl_warehouse.get_sed(gbl_params.modules, params)
scaling = gbl_results.best.scaling[oidx]
for band in gbl_results.best.flux:
gbl_results.best.flux[band][oidx] = sed.compute_fnu(band) * scaling
for prop in gbl_results.best.intprop:
gbl_results.best.intprop[prop][oidx] = sed.info[prop]
for prop in gbl_results.best.extprop:
gbl_results.best.extprop[prop][oidx] = sed.info[prop] * scaling \
* corr_dz
if gbl_conf['analysis_params']["save_best_sed"]:
sed.to_fits('out/{}'.format(obs.id), scaling)
if np.isfinite(gbl_results.best.index[oidx]):
best_index = int(gbl_results.best.index[oidx])
# We compute the model at the exact redshift not to have to correct for
# the difference between the object and the grid redshifts.
params = deepcopy(gbl_params.from_index(best_index))
if obs.redshift >= 0.:
model_z = params[gbl_params.modules.index('redshifting')]['redshift']
params[gbl_params.modules.index('redshifting')]['redshift'] = obs.redshift
# Correct fluxes for the fact that the scaling factor was computed
# on the grid redshift. Because of the difference in redshift the
# distance is different and must be reflected in the scaling
corr_scaling = compute_corr_dz(model_z, obs) / \
compute_corr_dz(obs.redshift, obs)
else: # The model redshift is always exact in redshifting mode
corr_scaling = 1.
sed = deepcopy(gbl_warehouse.get_sed(gbl_params.modules, params))
# Handle the case where the distance does not correspond to the redshift.
if obs.redshift >= 0.:
corr_dz = (obs.distance / sed.info['universe.luminosity_distance']) ** 2
else:
corr_dz = 1.
scaling = gbl_results.best.scaling[oidx] * corr_scaling
for band in gbl_results.best.flux:
gbl_results.best.flux[band][oidx] = sed.compute_fnu(band) * scaling
# If the distance is user defined, the redshift-based luminosity distance
# of the model is probably incorrect so we replace it
sed.add_info('universe.luminosity_distance', obs.distance, force=True)
for prop in gbl_results.best.intprop:
gbl_results.best.intprop[prop][oidx] = sed.info[prop]
for prop in gbl_results.best.extprop:
gbl_results.best.extprop[prop][oidx] = sed.info[prop] * scaling \
* corr_dz
if gbl_conf['analysis_params']["save_best_sed"]:
sed.to_fits(f"out/{obs.id}", scaling * corr_dz)
gbl_counter.inc()
......@@ -19,7 +19,7 @@ from collections import OrderedDict
import multiprocessing as mp
from .. import AnalysisModule
from ..utils import Counter
from utils.counter import Counter
from .workers import init_fluxes as init_worker_fluxes
from .workers import fluxes as worker_fluxes
from ...managers.models import ModelsManager
......@@ -71,8 +71,7 @@ class SaveFluxes(AnalysisModule):
def _compute_models(self, conf, obs, params):
nblocks = len(params.blocks)
for iblock in range(nblocks):
print('Computing models for block {}/{}...'.format(iblock + 1,
nblocks))
print(f"Computing models for block {iblock + 1}/{nblocks}...")
models = ModelsManager(conf, obs, params, iblock)
counter = Counter(len(params.blocks[iblock]), 50, 250)
......@@ -86,7 +85,7 @@ class SaveFluxes(AnalysisModule):
counter.pprint(len(params.blocks[iblock]))
print("Saving the models ....")
models.save('models-block-{}'.format(iblock))
models.save(f"models-block-{iblock}")
def process(self, conf):
......
......@@ -8,7 +8,6 @@
import numpy as np
from ...warehouse import SedWarehouse
from ..utils import nothread
def init_fluxes(models, counter):
......@@ -26,10 +25,6 @@ def init_fluxes(models, counter):
"""
global gbl_warehouse, gbl_models, gbl_obs, gbl_save, gbl_counter
# Limit the number of threads to 1 if we use MKL in order to limit the
# oversubscription of the CPU/RAM.
nothread()
gbl_warehouse = SedWarehouse()
gbl_models = models
......@@ -67,6 +62,6 @@ def fluxes(idx, midx):
gbl_models.intprop[prop][idx] = sed.info[prop]
if gbl_save is True:
sed.to_fits("out/{}".format(midx))
sed.to_fits(f"out/{midx}")
gbl_counter.inc()
......@@ -367,8 +367,8 @@ class Database(object):
result.spec_table)
else:
raise DatabaseLookupError(
"The M2005 SSP for imf <{0}> and metallicity <{1}> is not in "
"the database.".format(imf, metallicity))
f"The M2005 SSP for imf <{imf}> and metallicity <{metallicity}>"
f" is not in the database.")
def get_m2005_parameters(self):
"""Get parameters for the Maraston 2005 stellar models.
......@@ -431,8 +431,8 @@ class Database(object):
result.spec_table)
else:
raise DatabaseLookupError(
"The BC03 SSP for imf <{0}> and metallicity <{1}> is not in "
"the database.".format(imf, metallicity))
f"The BC03 SSP for imf <{imf}> and metallicity <{metallicity}> "
f"is not in the database.")
def get_bc03_parameters(self):
"""Get parameters for the Bruzual & Charlot 2003 stellar models.
......@@ -499,8 +499,8 @@ class Database(object):
result.lumin)
else:
raise DatabaseLookupError(
"The DL2007 model for qpah <{0}>, umin <{1}>, and umax <{2}> "
"is not in the database.".format(qpah, umin, umax))
f"The DL2007 model for qpah <{qpah}>, umin <{umin}>, and umax "
f"<{umax}> is not in the database.")
def get_dl2007_parameters(self):
"""Get parameters for the DL2007 models.
......@@ -570,9 +570,8 @@ class Database(object):
result.wave, result.lumin)
else:
raise DatabaseLookupError(
"The DL2014 model for qpah <{0}>, umin <{1}>, umax <{2}>, and "
"alpha <{3}> is not in the database."
.format(qpah, umin, umax, alpha))
f"The DL2014 model for qpah <{qpah}>, umin <{umin}>, umax "
f"<{umax}>, and alpha <{alpha}> is not in the database.")
def get_dl2014_parameters(self):
"""Get parameters for the DL2014 models.
......@@ -638,8 +637,8 @@ class Database(object):
result.lumin)
else:
raise DatabaseLookupError(
"The Dale2014 template for frac_agn <{0}> and alpha <{1}> "
"is not in the database.".format(frac_agn, alpha))
f"The Dale2014 template for frac_agn <{frac_agn}> and alpha "
f"<{alpha}> is not in the database.")
def get_dale2014_parameters(self):
"""Get parameters for the Dale 2014 models.
......@@ -884,8 +883,8 @@ class Database(object):
result.lumin)
else:
raise DatabaseLookupError(
"The Schreiber2016 template for type <{0}> and tdust <{1}> "
"is not in the database.".format(type, tdust))
f"The Schreiber2016 template for type <{type}> and tdust "
f"<{tdust}> is not in the database.")
def get_schreiber2016_parameters(self):
"""Get parameters for the Scnreiber 2016 models.
......@@ -956,9 +955,8 @@ class Database(object):
result.wave, result.lumin)
else:
raise DatabaseLookupError(
"The THEMIS model for qhac <{0}>, umin <{1}>, umax <{2}>, and "
"alpha <{3}> is not in the database."
.format(qhac, umin, umax, alpha))
f"The THEMIS model for qhac <{qhac}>, umin <{umin}>, umax "
f"<{umax}>, and alpha <{alpha}> is not in the database.")
def _get_parameters(self, schema):
"""Generic function to get parameters from an arbitrary schema.
......@@ -1038,7 +1036,7 @@ class Database(object):
raise Exception('The database is not writable.')
else:
raise DatabaseLookupError(
"The filter <{0}> is not in the database".format(name))
f"The filter <{name}> is not in the database")
def get_filter(self, name):
"""
......@@ -1067,7 +1065,7 @@ class Database(object):
result.pivot_wavelength)
else:
raise DatabaseLookupError(
"The filter <{0}> is not in the database".format(name))
f"The filter <{name}> is not in the database")
def get_filter_names(self):
"""Get the list of the name of the filters in the database.
......
......@@ -9,6 +9,8 @@ compute them, such as the configuration, the observations, and the parameters
of the models.
"""
import ctypes
from astropy.table import Table, Column
from .utils import SharedArray, get_info
......@@ -37,11 +39,20 @@ class ModelsManager(object):
self.allintpropnames & props_nolog)
self.extpropnames = (self.allextpropnames & set(obs.extprops) |
self.allextpropnames & props_nolog)
if 'bands' in conf['analysis_params']:
bandnames = set(obs.bands+conf['analysis_params']['bands'])
else:
bandnames = obs.bands
size = len(params.blocks[iblock])
if conf['parameters_file'] == "":
self.nz = len(conf['sed_modules_params']['redshifting']['redshift'])
self.nm = size // self.nz
self.flux = {band: SharedArray(size) for band in obs.bands}
self.flux = {band: SharedArray(size) for band in bandnames}
self.intprop = {prop: SharedArray(size) for prop in self.intpropnames}
self.extprop = {prop: SharedArray(size) for prop in self.extpropnames}
self.index = SharedArray(size, ctypes.c_uint32)
def save(self, filename):
"""Save the fluxes and properties of all the models into a table.
......@@ -66,6 +77,6 @@ class ModelsManager(object):
for prop in sorted(self.intprop.keys()):
table.add_column(Column(self.intprop[prop], name=prop))
table.write("out/{}.fits".format(filename))
table.write("out/{}.txt".format(filename), format='ascii.fixed_width',
table.write(f"out/{filename}.fits")
table.write(f"out/{filename}.txt", format='ascii.fixed_width',
delimiter=None)
......@@ -3,12 +3,12 @@
# Licensed under the CeCILL-v2 licence - see Licence_CeCILL_V2-en.txt
# Author: Médéric Boquien
from astropy.cosmology import WMAP7 as cosmo
from astropy.table import Column
import numpy as np
from scipy.constants import parsec
from ..utils import read_table
from ..utils.cosmology import luminosity_distance
from utils.io import read_table
from .utils import get_info
......@@ -103,15 +103,15 @@ class ObservationsManagerPassbands(object):
"""
for item in self.tofit + self.tofit_err:
if item not in self.table.colnames:
raise Exception("{} to be taken in the fit but not present "
"in the observation table.".format(item))
raise Exception(f"{item} to be taken in the fit but not present"
f" in the observation table.")
for item in self.table.colnames:
if (item != 'id' and item != 'redshift' and item != 'distance' and
item not in self.tofit + self.tofit_err):
self.table.remove_column(item)
print("Warning: {} in the input file but not to be taken into"
" account in the fit.".format(item))
print(f"Warning: {item} in the input file but not to be taken "
f"into account in the fit.")
def _check_errors(self, defaulterror=0.1):
"""Check whether the error columns are present. If not, add them.
......@@ -148,9 +148,8 @@ class ObservationsManagerPassbands(object):
name=error)
self.table.add_column(colerr,
index=self.table.colnames.index(item)+1)
print("Warning: {}% of {} taken as errors.".format(defaulterror *
100.,
item))
print(f"Warning: {defaulterror * 100}% of {item} taken as "
f"errors.")
def _check_invalid(self, upperlimits=False, threshold=-9990.):
"""Check whether invalid data are correctly marked as such.
......@@ -194,8 +193,7 @@ class ObservationsManagerPassbands(object):
self.extprops.remove(item)
self.extprops_err.remove(item + '_err')
self.table.remove_columns([item, item + '_err'])
print("Warning: {} removed as no valid data was found.".format(
allinvalid))
print(f"Warning: {allinvalid} removed as no valid data was found.")
def _add_model_error(self, modelerror=0.1):
"""Add in quadrature the error of the model to the input error.
......@@ -235,7 +233,7 @@ class ObservationsManagerPassbands(object):
np.fabs(self.table[err]))
for prop in self.intprops:
err = prop + '_err'
self.table[name] = np.random.normal(fits.best.intprop[prop],
self.table[prop] = np.random.normal(fits.best.intprop[prop],
np.fabs(self.table[err]))
for prop in self.extprops:
err = prop + '_err'
......@@ -256,9 +254,9 @@ class ObservationsManagerPassbands(object):
Root of the filename where to save the observations.
"""
self.table.write('out/{}.fits'.format(filename))
self.table.write('out/{}.txt'.format(filename),
format='ascii.fixed_width', delimiter=None)
self.table.write(f'out/{filename}.fits')
self.table.write(f'out/{filename}.txt', format='ascii.fixed_width',
delimiter=None)
class ObservationsManagerVirtual(object):
......@@ -301,10 +299,8 @@ class Observation(object):
if 'distance' in row.colnames and np.isfinite(row['distance']):
self.distance = row['distance'] * parsec * 1e6
else:
if self.redshift == 0.:
self.distance = 10. * parsec
elif self.redshift > 0.:
self.distance = cosmo.luminosity_distance(self.redshift).value
if self.redshift >= 0.:
self.distance = luminosity_distance(self.redshift)
else:
self.distance = np.nan
self.flux = {k: row[k] for k in cls.bands
......
......@@ -8,7 +8,7 @@ import collections
import itertools
import numpy as np
from ..utils import read_table
from utils.io import read_table
class ParametersManager(object):
......
......@@ -10,6 +10,7 @@ along with the names of the parameters, which are proportional to the mass,
etc. Each of these classes contain a merge() method that allows to combine
results of the analysis with different blocks of models.
"""
import ctypes
from astropy.table import Table, Column
import numpy as np
......@@ -36,6 +37,7 @@ class BayesResultsManager(object):
intpropnames = [prop for prop in models.obs.conf['analysis_params']['variables']
if (prop in models.allintpropnames or
prop[:-4] in models.allintpropnames)]
fluxnames = [name for name in models.conf['analysis_params']['bands']]
self.nproperties = len(intpropnames) + len(extpropnames)
# Arrays where we store the data related to the models. For memory
......@@ -47,8 +49,8 @@ class BayesResultsManager(object):
self.interror = {prop: SharedArray(nobs) for prop in intpropnames}
self.extmean = {prop: SharedArray(nobs) for prop in extpropnames}
self.exterror = {prop: SharedArray(nobs) for prop in extpropnames}
self.fluxmean = {band: SharedArray(nobs) for band in models.flux}
self.fluxerror = {band: SharedArray(nobs) for band in models.flux}
self.fluxmean = {band: SharedArray(nobs) for band in fluxnames}
self.fluxerror = {band: SharedArray(nobs) for band in fluxnames}
self.weight = SharedArray(nobs)
@property
......@@ -112,10 +114,10 @@ class BayesResultsManager(object):
for band in merged.fluxerror}
weight = np.array([result.weight for result in results])
totweight = np.sum(weight, axis=0)
totweight = np.nansum(weight, axis=0)
for prop in merged.intmean:
merged.intmean[prop][:] = np.sum(
merged.intmean[prop][:] = np.nansum(
intmean[prop] * weight, axis=0) / totweight
# We compute the merged standard deviation by combining the
......@@ -123,11 +125,11 @@ class BayesResultsManager(object):
# http://stats.stackexchange.com/a/10445 where the number of
# datapoints has been substituted with the weights. In short we
# exploit the fact that Var(X) = E(Var(X)) + Var(E(X)).
merged.interror[prop][:] = np.sqrt(np.sum(
merged.interror[prop][:] = np.sqrt(np.nansum(
weight * (interror[prop]**2. + (intmean[prop]-merged.intmean[prop])**2), axis=0) / totweight)
for prop in merged.extmean:
merged.extmean[prop][:] = np.sum(
merged.extmean[prop][:] = np.nansum(
extmean[prop] * weight, axis=0) / totweight
# We compute the merged standard deviation by combining the
......@@ -135,7 +137,7 @@ class BayesResultsManager(object):
# http://stats.stackexchange.com/a/10445 where the number of
# datapoints has been substituted with the weights. In short we
# exploit the fact that Var(X) = E(Var(X)) + Var(E(X)).
merged.exterror[prop][:] = np.sqrt(np.sum(
merged.exterror[prop][:] = np.sqrt(np.nansum(
weight * (exterror[prop]**2. + (extmean[prop]-merged.extmean[prop])**2), axis=0) / totweight)
for prop in merged.extmean:
......@@ -148,7 +150,7 @@ class BayesResultsManager(object):
merged.exterror[prop])
for band in merged.fluxmean:
merged.fluxmean[band][:] = np.sum(
merged.fluxmean[band][:] = np.nansum(
fluxmean[band] * weight, axis=0) / totweight
# We compute the merged standard deviation by combining the
......@@ -156,7 +158,7 @@ class BayesResultsManager(object):
# http://stats.stackexchange.com/a/10445 where the number of
# datapoints has been substituted with the weights. In short we
# exploit the fact that Var(X) = E(Var(X)) + Var(E(X)).
merged.fluxerror[band][:] = np.sqrt(np.sum(
merged.fluxerror[band][:] = np.sqrt(np.nansum(
weight * (fluxerror[band]**2. + (fluxmean[band]-merged.fluxmean[band])**2), axis=0) / totweight)
......@@ -189,9 +191,7 @@ class BestResultsManager(object):
for prop in allextpropnames}
self.chi2 = SharedArray(nobs)
self.scaling = SharedArray(nobs)
# We store the index as a float to work around python issue #10746
self.index = SharedArray(nobs)
self.index = SharedArray(nobs, ctypes.c_uint32)
@property
def flux(self):
......@@ -284,22 +284,22 @@ class BestResultsManager(object):
if len(results) == 1:
return results[0]
best = np.argmin([result.chi2 for result in results], axis=0)
chi2 = np.array([result.chi2 for result in results])
merged = results[0]
for iobs, bestidx in enumerate(best):
for band in merged.flux:
merged.flux[band][iobs] = \
results[bestidx].flux[band][iobs]
for prop in merged.intprop:
merged.intprop[prop][iobs] = \
results[bestidx].intprop[prop][iobs]
for prop in merged.extprop:
merged.extprop[prop][iobs] = \
results[bestidx].extprop[prop][iobs]
merged.chi2[iobs] = results[bestidx].chi2[iobs]
merged.scaling[iobs] = results[bestidx].scaling[iobs]
merged.index[iobs] = results[bestidx].index[iobs]
for iobs, bestidx in enumerate(np.argsort(chi2, axis=0)[0, :]):
if np.isfinite(bestidx):
for band in merged.flux:
merged.flux[band][iobs] = \
results[bestidx].flux[band][iobs]
for prop in merged.intprop:
merged.intprop[prop][iobs] = \
results[bestidx].intprop[prop][iobs]
for prop in merged.extprop:
merged.extprop[prop][iobs] = \
results[bestidx].extprop[prop][iobs]
merged.chi2[iobs] = results[bestidx].chi2[iobs]
merged.scaling[iobs] = results[bestidx].scaling[iobs]
merged.index[iobs] = results[bestidx].index[iobs]
return merged
......@@ -308,14 +308,23 @@ class BestResultsManager(object):
objects seems to be overconstrained.
"""
# If no best model has been found, it means none could be properly
# fitted. We warn the user in that case
bad = [str(id_) for id_ in self.obs.table['id'][np.isnan(self.chi2)]]
if len(bad) > 0:
print(f"No suitable model found for {', '.join(bad)}. It may be "
f"that models are older than the universe or that your χ² are"
f" very large.")
obs = [self.obs.table[obs].data for obs in self.obs.tofit]
nobs = np.count_nonzero(np.isfinite(obs), axis=0)
chi2_red = self.chi2 / (nobs - 1)
# If low values of reduced chi^2, it means that the data are overfitted
# Errors might be under-estimated or not enough valid data.
print("\n{}% of the objects have chi^2_red~0 and {}% chi^2_red<0.5"
.format(np.round((chi2_red < 1e-12).sum() / chi2_red.size, 1),
np.round((chi2_red < 0.5).sum() / chi2_red.size, 1)))
print(f"{np.round((chi2_red < 1e-12).sum() / chi2_red.size, 1)}% of "
f"the objects have χ²_red~0 and "
f"{np.round((chi2_red < 0.5).sum() / chi2_red.size, 1)}% "
f"χ²_red<0.5")
class ResultsManager(object):
......@@ -405,6 +414,6 @@ class ResultsManager(object):
name="best."+band, unit=unit))
table.write("out/{}.txt".format(filename), format='ascii.fixed_width',
table.write(f"out/{filename}.txt", format='ascii.fixed_width',
delimiter=None)
table.write("out/{}.fits".format(filename), format='fits')
table.write(f"out/{filename}.fits", format='fits')
......@@ -37,7 +37,7 @@ class SharedArray(object):
implementation and if new operations are done on these arrays, it may be
necessary to define them here.
"""
def __init__(self, size):
def __init__(self, size, dtype=ctypes.c_double):
"""The RawArray is stored in raw, which is protected by a setter and
a getter. The array property returns raw as a regular Numpy array. It
is important to access both the RawArray and the Numpy array forms. The
......@@ -47,29 +47,30 @@ class SharedArray(object):
issue we selectively work with array or raw depending on whether the
operation is done with a slice or not.
"""
self.raw = RawArray(ctypes.c_double, size)
self.raw = RawArray(dtype, size)
self.size = size
# By default RawArray initialises all the elements to 0. Setting them to
# np.nan is preferable in case some elements are never
# assigned a value during a run.
self.array[:] = np.nan
def __setitem__(self, idx, data):
if isinstance(idx, slice):
self.array[idx] = data
else:
self._raw[idx] = data
self._raw[idx] = data
def __getitem__(self, idx):
if isinstance(idx, slice):
return self.array[idx]
return self._raw[idx]
if isinstance(idx, int):
return self._raw[idx]
return self.array[idx]
def __len__(self):
return self.size
def __rmul__(self, other):
return other * self.array
return other * self._array
@property
def array(self):
return np.ctypeslib.as_array(self._raw)
return self._array
@property
def raw(self):
......@@ -79,5 +80,6 @@ class SharedArray(object):
def raw(self, raw):
if isinstance(raw, ctypes.Array):
self._raw = raw
self._array = np.ctypeslib.as_array(self._raw)
else:
raise TypeError("Type must be RawArray.")
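# Illustrative usage: a SharedArray exposes a multiprocessing RawArray
# through a cached numpy view, so worker processes can fill it without
# copying, e.g.:
#     arr = SharedArray(10)
#     arr[3] = 42.           # scalar writes go through the RawArray
#     total = arr[:].sum()   # slice reads return the numpy view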
......@@ -220,9 +220,6 @@ class SED(object):
def get_lumin_contribution(self, name):
"""Get the luminosity vector of a given contribution
If the name of the contribution is not unique in the SED, the flux of
the last one is returned.
Parameters
----------
name: string
......@@ -235,10 +232,7 @@ class SED(object):
wavelength grid.
"""
# Find the index of the _last_ name element
idx = (len(self.contribution_names) - 1 -
self.contribution_names[::-1].index(name))
return self.luminosities[idx]
return self.luminosities[self.contribution_names.index(name)]
def compute_fnu(self, filter_name):
"""
......
......@@ -37,10 +37,10 @@ def save_sed_to_fits(sed, prefix, norm=1.):
for name in sed.contribution_names:
table[name] = Column(norm * sed.get_lumin_contribution(name),
unit="W/nm")
table.write("{}_best_model.fits".format(prefix))
table.write(f"{prefix}_best_model.fits")
if sed.sfh is not None:
table = Table(meta=info)
table["time"] = Column(np.arange(sed.sfh.size), unit="Myr")
table["SFR"] = Column(norm * sed.sfh, unit="Msun/yr")
table.write("{}_SFH.fits".format(prefix))
table.write(f"{prefix}_SFH.fits")
......@@ -63,7 +63,7 @@ class BC03(SedModule):
elif self.imf == 1:
self.ssp = database.get_bc03('chab', self.metallicity)
else:
raise Exception("IMF #{} unknown".format(self.imf))
raise Exception(f"IMF #{self.imf} unknown")
def process(self, sed):
"""Add the convolution of a Bruzual and Charlot SSP to the SED
......
......@@ -56,7 +56,7 @@ class Fluxes(SedModule):
for filter_ in filter_list:
sed.add_info(
"param.{}".format(filter_),
f"param.{filter_}",
sed.compute_fnu(filter_),
True
)
......
......@@ -100,6 +100,10 @@ class Fritz2006(SedModule):
self.fritz2006 = base.get_fritz2006(self.r_ratio, self.tau,
self.beta, self.gamma,
self.opening_angle, self.psy)
self.l_agn_scatt = np.trapz(self.fritz2006.lumin_scatt,
x=self.fritz2006.wave)
self.l_agn_agn = np.trapz(self.fritz2006.lumin_agn,
x=self.fritz2006.wave)
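# These two integrals do not depend on the AGN power, so evaluating them
# once here avoids recomputing np.trapz for every SED in process() below.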
def process(self, sed):
"""Add the IR re-emission contributions
......@@ -128,10 +132,8 @@ class Fritz2006(SedModule):
if self.fracAGN < 1.:
agn_power = luminosity * (1./(1.-self.fracAGN) - 1.)
l_agn_therm = agn_power
l_agn_scatt = np.trapz(agn_power * self.fritz2006.lumin_scatt,
x=self.fritz2006.wave)
l_agn_agn = np.trapz(agn_power * self.fritz2006.lumin_agn,
x=self.fritz2006.wave)
l_agn_scatt = agn_power * self.l_agn_scatt
l_agn_agn = agn_power * self.l_agn_agn
l_agn_total = l_agn_therm + l_agn_scatt + l_agn_agn
else:
......
......@@ -13,7 +13,7 @@ This module reads a SED spectrum from a file.
from collections import OrderedDict
from ..utils import read_table
from utils.io import read_table
from . import SedModule
......
......@@ -61,7 +61,7 @@ class M2005(SedModule):
with Database() as database:
self.ssp = database.get_m2005('krou', self.metallicity)
else:
raise Exception("IMF #{} unknown".format(self.imf))
raise Exception(f"IMF #{self.imf} unknown")
def process(self, sed):
"""Add the convolution of a Maraston 2005 SSP to the SED
......
......@@ -64,7 +64,10 @@ class MBB(SedModule):
self.epsilon = float(self.parameters["epsilon_mbb"])
self.T = float(self.parameters["t_mbb"])
self.beta = float(self.parameters["beta_mbb"])
self.energy_balance = bool(self.parameters["energy_balance"])
if type(self.parameters["energy_balance"]) is str:
self.energy_balance = self.parameters["energy_balance"].lower() == 'true'
else:
self.energy_balance = bool(self.parameters["energy_balance"])
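# A parameters file stores booleans as strings, and bool("False") would be
# True; hence the explicit string comparison above.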
if self.epsilon < 0.:
raise Exception("Error, epsilon_mbb must be ≥ 0.")
......
......@@ -95,7 +95,10 @@ class NebularEmission(SedModule):
self.fesc = float(self.parameters['f_esc'])
self.fdust = float(self.parameters['f_dust'])
self.lines_width = float(self.parameters['lines_width'])
self.emission = bool(self.parameters["emission"])
if type(self.parameters["emission"]) is str:
self.emission = self.parameters["emission"].lower() == 'true'
else:
self.emission = bool(self.parameters["emission"])
if self.fesc < 0. or self.fesc > 1:
raise Exception("Escape fraction must be between 0 and 1")
......
......@@ -23,7 +23,7 @@ from collections import OrderedDict
import numpy as np
from scipy.constants import parsec
from scipy.special import factorial
from astropy.cosmology import WMAP7 as cosmology
from ..utils.cosmology import age, luminosity_distance
from . import SedModule
......@@ -160,16 +160,11 @@ class Redshifting(SedModule):
# Raise an error when applying a negative redshift. This module is
# not for blue-shifting.
if self.redshift < 0.:
raise Exception("The redshift provided is negative <{}>."
.format(self.redshift))
raise Exception(f"The redshift provided is negative "
f"({self.redshift}).")
self.universe_age = cosmology.age(self.redshift).value * 1000.
if self.redshift == 0.:
self.luminosity_distance = 10. * parsec
else:
self.luminosity_distance = (
cosmology.luminosity_distance(self.redshift).value * 1e6 *
parsec)
self.universe_age = age(self.redshift)
self.luminosity_distance = luminosity_distance(self.redshift)
# We do not define the values of the IGM attenuation component yet.
# This is because we need the wavelength grid for that first. This
# will be assigned on the first call.
......@@ -188,8 +183,8 @@ class Redshifting(SedModule):
# If the SED is already redshifted, raise an error.
if ('universe.redshift' in sed.info and
sed.info['universe.redshift'] > 0.):
raise Exception("The SED is already redshifted <z={}>."
.format(sed.info['universe.redshift']))
raise Exception(f"The SED is already redshifted (z="
f"{sed.info['universe.redshift']}).")
if redshift > 0.: