@@ -12,6 +12,7 @@ with their current email address and affiliation.
Laboratoire d'Astrophysique de Marseille, France
* David Corre <david.corre@lam.fr>
Laboratoire d'Astrophysique de Marseille, France
* Yannick Roehlly <yannick@iaora.eu>
* Yannick Roehlly <yannick.roehlly@lam.fr>
Laboratoire d'Astrophysique de Marseille, France
* Héctor Salas Olave <hector.salas.o@gmail.com>
Universidad de Antofagasta, Chile
# Change Log
## Unreleased
### Added
- The (1+z1)/(1+z2) factor between observed and grid flux densities caused by the differential redshifting is now taken into account. With a default grid redshift rounding of two decimals this yields a difference of at most 0.5% in the estimated physical properties at z=0 and even less at higher z. (Médéric Boquien)
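The size of the quoted 0.5% bound can be checked with a quick back-of-the-envelope computation (illustrative only, not code from cigale):

```python
# Worst case with a two-decimal grid: the observed redshift differs from the
# grid redshift by at most 0.005.
z_obs, z_grid = 0.005, 0.0
factor = (1. + z_obs) / (1. + z_grid)
# factor == 1.005, i.e. a 0.5% effect at z=0; the relative difference shrinks
# at higher z because both the numerator and the denominator grow.
```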
### Changed
### Fixed
- Make sure we can plot the PDF of equivalent widths. (Médéric Boquien)
- Fix a crash when generating a mock catalogue containing intensive properties. (Médéric Boquien)
- In the `sfhdelayed` and `sfhdelayedbq` modules, provide the correct description for the sfr_A parameter. (Médéric Boquien & Laure Ciesla)
- Internally the luminosity distance was erroneously stored in Mpc rather than in m for non-zero redshifts. This has now been standardised to m. (Médéric Boquien)
- As the best-fit properties are computed at the exact observed redshift, correct the scaling factor as it is computed at the grid redshift. This corrects for slight offsets on the best-fit properties when the input redshift has more decimals than the grid redshift. (Médéric Boquien)
- Fix the pip install by making pcigale.managers discoverable. (Yannick Roehlly)
- When using a parameters file, Boolean values were not correctly interpreted. (Médéric Boquien, reported by Eric Martínez, INAOE)
- Make sure that the best-fit models are stored with the correct scaling factor when the distance is given explicitly. (Médéric Boquien)
### Optimised
- Slight speedup of the computation of the likelihood from the χ² (Médéric Boquien)
## 2018.0 (2018-11-06)
### Added
- It is now possible to optionally indicate the distance in Mpc in the input file. If present it will be used in lieu of the distance computed from the redshift. This is especially useful in the nearby universe where the redshift is a very poor indicator of the actual distance. (Médéric Boquien)
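As an illustration of why the redshift is a poor distance indicator nearby (assumed H0 = 70 km/s/Mpc; numbers purely illustrative):

```python
# At very low redshift, the Hubble-flow distance c*z/H0 is a poor estimate
# because peculiar velocities (a few hundred km/s) are comparable to c*z.
c = 299792.458   # km/s
H0 = 70.         # km/s/Mpc, assumed value for this illustration
z = 0.001
d_hubble = c * z / H0   # ~4.3 Mpc; a 300 km/s peculiar velocity alone would
                        # shift this estimate by a comparable amount
```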
@@ -144,7 +160,7 @@
- The output files providing estimates of the physical properties are now generated both in the form of text and FITS files. (Médéric Boquien)
- When using the `dustatt_calzleit` module, choosing δ≠0 leads to an effective E(B-V) different from the one set by the user. Now the E(B-V) will always correspond to the one specified by the user. This means that at fixed E(B-V), A(V) depends on δ. (Médéric Boquien)
- The pcigale-mock tool has been merged into pcigale-plots; the mock plots can be obtained with the "mock" command. (Médéric Boquien)
-- The `sfhdelayed` module is now initialised with _init_code() to be consistent with the way things are done in other modules. This should give a slight speedup under some circumstances too. (Médéric Boquien)
+- The `sfhdelayed` module is now initialised with \_init_code() to be consistent with the way things are done in other modules. This should give a slight speedup under some circumstances too. (Médéric Boquien)
- In `sfhfromfile`, the specification of the time grid was vague and therefore could lead to incorrect results if it was not properly formatted by the end user. The description has been clarified and we now check that the time starts from 0 and that the time step is always 1 Myr. If it is not the case we raise an exception. (Médéric Boquien)
- When the redshift is not indicated in pcigale.ini, the analysis module fills the list of redshifts from the redshifts indicated in the input flux file. This is inefficient as analysis modules should not have to modify the configuration. Now this is done when interpreting pcigale.ini before calling the relevant analysis module. As a side effect, "pcigale check" now returns the total number of models that cigale will compute rather than the number of models per redshift bin. (Médéric Boquien)
- The optionally saved spectra in the `pdf_analysis` and `savefluxes` modules were saved in the VO-table format. The most important downside is that it is very slow to write to, which proves to be a major bottleneck in the computing speed. To overcome this issue, we rather save the spectra using the FITS format. Instead of having the spectra (including the various components) and the SFH in a single file, now we have one file for the spectra and one file for the SFH.
@@ -178,7 +194,7 @@
## 0.8.0 (2015-12-01)
### Added
-- The evaluation of the parameters is always done linearly. This can be a problem when estimating the SFR or the stellar mass for instance, as it is usual to estimate rather their log. Because the log is non-linear, the likelihood-weighted mean of the log is not the log of the likelihood-weighted mean. Therefore the estimation of the log of these parameters has to be done during the analysis step. This is now possible. The variables to be analysed in log just need to be indicated with the suffix "_log", for instance "stellar.m\_star\_log". (Médéric Boquien, idea suggested by Samir Salim)
+- The evaluation of the parameters is always done linearly. This can be a problem when estimating the SFR or the stellar mass for instance, as it is usual to estimate rather their log. Because the log is non-linear, the likelihood-weighted mean of the log is not the log of the likelihood-weighted mean. Therefore the estimation of the log of these parameters has to be done during the analysis step. This is now possible. The variables to be analysed in log just need to be indicated with the suffix "\_log", for instance "stellar.m\_star\_log". (Médéric Boquien, idea suggested by Samir Salim)
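The non-commutation of the log and the weighted mean can be seen with a toy case of two equally likely models (hypothetical values):

```python
import numpy as np

values = np.array([1., 100.])    # e.g. stellar masses of two models
weights = np.array([.5, .5])     # equal likelihoods

log_of_mean = np.log10(np.sum(weights * values))   # log10(50.5) ~ 1.70
mean_of_log = np.sum(weights * np.log10(values))   # (0 + 2) / 2 = 1.00
# The two estimates differ by ~0.7 dex, hence the dedicated "_log" analysis.
```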
### Fixed
- Running the scripts in parallel trigger a deadlock on OS X with python 3.5. A workaround has been implemented. (Médéric Boquien)
......
@@ -20,9 +20,10 @@ import numpy as np
from scipy import interpolate
import scipy.constants as cst
from astropy.table import Table
-from pcigale.data import (Database, Filter, M2005, BC03, Fritz2006,
+from pcigale.data import (Database, Filter, M2005, BC03, BC03_SSP, Fritz2006,
Dale2014, DL2007, DL2014, NebularLines,
-NebularContinuum, Schreiber2016, THEMIS)
+NebularContinuum, Schreiber2016, THEMIS,
+Yggdrasil_SSP)
def read_bc03_ssp(filename):
@@ -410,6 +411,85 @@ def build_bc2003(base, res):
ssp_lumin
))
def build_bc2003_ssp(base, res):
bc03_dir = os.path.join(os.path.dirname(__file__), 'bc03/')
# Metallicities associated with each key
metallicity = {
"m22": 0.0001,
"m32": 0.0004,
"m42": 0.004,
"m52": 0.008,
"m62": 0.02,
"m72": 0.05
}
for key, imf in itertools.product(metallicity, ["salp", "chab"]):
ssp_filename = "{}bc2003_{}_{}_{}_ssp.ised_ASCII".format(bc03_dir, res,
key, imf)
color3_filename = "{}bc2003_lr_{}_{}_ssp.3color".format(bc03_dir, key,
imf)
color4_filename = "{}bc2003_lr_{}_{}_ssp.4color".format(bc03_dir, key,
imf)
print("Importing {}...".format(ssp_filename))
# Read the desired information from the color files
color_table = []
color3_table = np.genfromtxt(color3_filename).transpose()
color4_table = np.genfromtxt(color4_filename).transpose()
color_table.append(color4_table[6]) # Mstar
color_table.append(color4_table[7]) # Mgas
color_table.append(10 ** color3_table[5]) # NLy
color_table = np.array(color_table)
ssp_time, ssp_wave, ssp_lumin = read_bc03_ssp(ssp_filename)
base.add_bc03_ssp(BC03_SSP(
imf,
metallicity[key],
ssp_time,
ssp_wave,
color_table,
ssp_lumin
))
def build_yggdrasil_ssp(base):
yggdrasil_dir = os.path.join(os.path.dirname(__file__), 'yggdrasil/')
# Metallicities and gas covering fractions of the Yggdrasil models
metallicities = ["0.004", "0.008", "0.02"]
fcovs = ["0", "0.5"]
for Z, fcov in itertools.product(metallicities, fcovs):
filename = f"{yggdrasil_dir}Z={Z}_kroupa_IMF_fcov_{fcov}_SFR_inst_Spectra"
print(f"Importing {filename}...")
with open(filename) as f:
specallages = "".join(f.readlines()[2::]).split('\n\n')[:-1]
for i in range(len(specallages)):
specallages[i] = specallages[i].split('\n')
ssp_time = np.array([float(item[0].split()[-1]) for item in specallages])
ssp_info = np.array([float(item[2].split()[-1]) for item in specallages])
# Normalisation from 10⁶ to 1 Msun
ssp_info *= 1e-6
minwlsize = int(np.min([float(item[3].split()[-1]) for item in specallages]))
ssp_wave = np.array([float(line.split(' ')[0]) for line in specallages[0][7:]])[-minwlsize:]
# Conversion from Å to nm
ssp_wave *= .1
ssp_lumin = np.empty((minwlsize, ssp_time.size))
for i, spec in enumerate(specallages):
ssp_lumin[:, i] = np.array([float(line.split(' ')[-1]) for line in spec[7:]])[-minwlsize:]
# Conversion from erg/s/Å/10⁶ Msun to W/nm/Msun
ssp_lumin *= 1e-7 * 10 * 1e-6
base.add_yggdrasil_ssp(Yggdrasil_SSP(float(Z), float(fcov), ssp_time,
ssp_wave, ssp_info, ssp_lumin))
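The combined factor applied to `ssp_lumin` just above can be decomposed term by term:

```python
# erg/s -> W: 1e-7; per Angstrom -> per nm: x10; per 1e6 Msun -> per Msun: 1e-6
erg_s_to_W = 1e-7
per_A_to_per_nm = 10.
per_1e6Msun_to_per_Msun = 1e-6

factor = erg_s_to_W * per_A_to_per_nm * per_1e6Msun_to_per_Msun  # 1e-12
```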
def build_dale2014(base):
models = []
@@ -860,6 +940,7 @@ def build_base(bc03res='lr'):
print('#' * 78)
print("3- Importing Bruzual and Charlot 2003 SSP\n")
build_bc2003_ssp(base, bc03res)
build_bc2003(base, bc03res)
print("\nDONE\n")
print('#' * 78)
@@ -899,6 +980,11 @@ def build_base(bc03res='lr'):
print("\nDONE\n")
print('#' * 78)
print("11- Importing the Yggdrasil SSP")
build_yggdrasil_ssp(base)
print("\nDONE\n")
print('#' * 78)
base.session.close_all()
......
@@ -30,26 +30,30 @@ def save_chi2(obs, variable, models, chi2, values):
@lru_cache(maxsize=None)
-def compute_corr_dz(model_z, obs_dist):
+def compute_corr_dz(model_z, obs):
"""The mass-dependent physical properties are computed assuming the
redshift of the model. However because we round the observed redshifts to
two decimals, there can be a difference of 0.005 in redshift between the
-models and the actual observation. At low redshift, this can cause a
+models and the actual observation. This causes two issues. First there is a
+difference in the luminosity distance. At low redshift, this can cause a
discrepancy in the mass-dependent physical properties: ~0.35 dex at z=0.010
-vs 0.015 for instance. Therefore we correct these physical quantities by
-multiplying them by corr_dz.
+vs 0.015 for instance. In addition, the 1+z dimming will be different.
+We compute here the correction factor for these two effects.
Parameters
----------
model_z: float
Redshift of the model.
-obs_dist: float
-Luminosity distance of the observed object.
+obs: instance of the Observation class
+Object containing the distance and redshift of the object
"""
if model_z == 0.:
-return (obs_dist / (10. * parsec)) ** 2.
-return (obs_dist / cosmo.luminosity_distance(model_z).value) ** 2.
+mod_distance = 10. * parsec
+else:
+mod_distance = cosmo.luminosity_distance(model_z).value * 1e6 * parsec
+return (obs.distance / mod_distance) ** 2. * \
+(1. + model_z) / (1. + obs.redshift)
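A standalone sketch of this correction (the `lum_dist_m` callable is a stand-in for the cosmology-based luminosity distance in metres, not the actual cigale API):

```python
PARSEC = 3.0856775814913673e16  # metres

def corr_dz_sketch(model_z, obs_distance_m, obs_redshift, lum_dist_m=None):
    # Models at z=0 are normalised to an absolute distance of 10 pc.
    if model_z == 0.:
        mod_distance = 10. * PARSEC
    else:
        mod_distance = lum_dist_m(model_z)
    # Distance-mismatch term times the differential (1+z) dimming term.
    return ((obs_distance_m / mod_distance) ** 2.
            * (1. + model_z) / (1. + obs_redshift))

# An object at exactly 10 pc with z=0 needs no correction.
print(corr_dz_sketch(0., 10. * PARSEC, 0.))  # 1.0
```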
def dchi2_over_ds2(s, obsdata, obsdata_err, obslim, obslim_err, moddata,
@@ -151,6 +155,47 @@ def _compute_scaling(models, obs, corr_dz, wz):
return num/denom
def _compute_scaling_mag(models, obs, corr_dz, wz):
"""Compute the scaling offset in magnitude space to be applied to the model
fluxes to best fit the observations. Note that we loop over the bands to
avoid the creation of an array of the same size as the model_fluxes array.
Because we loop on the bands and not on the models, the impact on the
performance should be small.
Parameters
----------
models: ModelsManagers class instance
Contains the models (fluxes, intensive, and extensive properties).
obs: Observation class instance
Contains the fluxes, intensive properties, extensive properties and
their errors, for a single observation.
corr_dz: float
Correction factor to scale the extensive properties to the right
distance
wz: slice
Selection of the models at the redshift of the observation or all the
redshifts in photometric-redshift mode.
Returns
-------
scaling: array
Scaling factors minimising the χ²
"""
_ = list(models.flux.keys())[0]
num = np.zeros_like(models.flux[_][wz])
denom = np.zeros_like(models.flux[_][wz])
for band, flux in obs.flux.items():
# Multiplications are faster than divisions, so we directly use the
# inverse error
inv_err2 = 1. / obs.flux_err[band] ** 2.
model = models.flux[band][wz]
num += (model - flux) * inv_err2
denom += inv_err2
return num/denom
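Setting the derivative of the χ² with respect to the offset to zero shows that the best-fit offset is the inverse-variance weighted mean of (model − observation). A numpy sketch with hypothetical values:

```python
import numpy as np

mod = np.array([21.3, 22.1, 23.0])   # model values in magnitude-like units
obs = np.array([21.0, 21.9, 22.6])   # observed values
err = np.array([0.1, 0.1, 0.2])      # uncertainties

# Same computation as the loop above, vectorised over three bands.
inv_err2 = 1. / err ** 2.
scaling = np.sum((mod - obs) * inv_err2) / np.sum(inv_err2)
```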
def _correct_scaling_ul(scaling, mod, obs, wz):
"""Correct the scaling factor when one or more fluxes and/or properties are
upper limits.
@@ -236,7 +281,7 @@ def compute_chi2(models, obs, corr_dz, wz, lim_flag):
scaling of the models to obtain the minimum χ²
"""
limits = lim_flag and (len(obs.flux_ul) > 0 or len(obs.extprop_ul) > 0)
-scaling = _compute_scaling(models, obs, corr_dz, wz)
+scaling = _compute_scaling_mag(models, obs, corr_dz, wz)
# Some observations may not have flux values in some filter(s), but
# they can have upper limit(s).
@@ -252,7 +297,7 @@ def compute_chi2(models, obs, corr_dz, wz, lim_flag):
# inverse error
inv_flux_err = 1. / obs.flux_err[band]
model = models.flux[band][wz]
-chi2 += ((flux - model * scaling) * inv_flux_err) ** 2.
+chi2 += ((flux - model + scaling) * inv_flux_err) ** 2.
# Computation of the χ² from intensive properties
for name, prop in obs.intprop.items():
......
@@ -112,7 +112,7 @@ def sed(idx, midx):
gbl_models.intprop[prop][idx] = np.nan
else:
for band in gbl_models.flux:
-gbl_models.flux[band][idx] = sed.compute_fnu(band)
+gbl_models.flux[band][idx] = 2.5 * np.log10(sed.compute_fnu(band))
for prop in gbl_models.extprop:
gbl_models.extprop[prop][idx] = sed.info[prop]
for prop in gbl_models.intprop:
@@ -142,7 +142,7 @@ def analysis(idx, obs):
z = np.array(
gbl_models.conf['sed_modules_params']['redshifting']['redshift'])
wz = slice(np.abs(obs.redshift-z).argmin(), None, z.size)
-corr_dz = compute_corr_dz(z[wz.start], obs.distance)
+corr_dz = compute_corr_dz(z[wz.start], obs)
else: # We do not know the redshift so we use the full grid
wz = slice(0, None, 1)
corr_dz = 1.
@@ -153,7 +153,7 @@ def analysis(idx, obs):
if np.any(chi2 < -np.log(np.finfo(np.float64).tiny) * 2.):
# We use the exponential probability associated with the χ² as
# likelihood function.
-likelihood = np.exp(-chi2 / 2.)
+likelihood = np.exp(-.5 * chi2)
wlikely = np.where(np.isfinite(likelihood))
# If all the models are valid, it is much more efficient to use a slice
if likelihood.size == wlikely[0].size:
@@ -185,17 +185,17 @@ def analysis(idx, obs):
else:
values = gbl_models.extprop[prop][wz]
_ = lambda x: x
-mean, std = weighted_param(_(values[wlikely] * scaling_l * corr_dz),
+mean, std = weighted_param(_(values[wlikely] * 10**(-.4*scaling_l) * corr_dz),
likelihood)
gbl_results.bayes.extmean[prop][idx] = mean
gbl_results.bayes.exterror[prop][idx] = std
if gbl_models.conf['analysis_params']['save_chi2'] is True:
save_chi2(obs, prop, gbl_models, chi2,
-values * scaling * corr_dz)
+values * 10**(-.4*scaling) * corr_dz)
for band in gbl_results.bayes.fluxmean:
values = gbl_models.flux[band][wz]
-mean, std = weighted_param(values[wlikely] * scaling_l,
+mean, std = weighted_param(values[wlikely] * 10**(-.4*scaling_l),
likelihood)
gbl_results.bayes.fluxmean[band][idx] = mean
gbl_results.bayes.fluxerror[band][idx] = std
@@ -235,24 +235,41 @@ def bestfit(oidx, obs):
# difference between the object and the grid redshifts.
params = deepcopy(gbl_params.from_index(best_index))
if obs.redshift >= 0.:
model_z = params[gbl_params.modules.index('redshifting')]['redshift']
params[gbl_params.modules.index('redshifting')]['redshift'] = obs.redshift
-corr_dz = compute_corr_dz(obs.redshift, obs.distance)
+# Correct fluxes for the fact that the scaling factor was computed on
+# the grid redshift. Because of the difference in redshift the distance
+# is different and must be reflected in the scaling
+corr_scaling = compute_corr_dz(model_z, obs) / \
+compute_corr_dz(obs.redshift, obs)
else: # The model redshift is always exact in redshifting mode
-corr_dz = 1.
+corr_scaling = 1.
sed = gbl_warehouse.get_sed(gbl_params.modules, params)
-scaling = gbl_results.best.scaling[oidx]
+sed = deepcopy(sed)
+# Handle the case where the distance does not correspond to the redshift.
+if obs.redshift >= 0.:
+corr_dz = (obs.distance / sed.info['universe.luminosity_distance']) ** 2
+else:
+corr_dz = 1.
+scaling = gbl_results.best.scaling[oidx] * corr_scaling
for band in gbl_results.best.flux:
-gbl_results.best.flux[band][oidx] = sed.compute_fnu(band) * scaling
+gbl_results.best.flux[band][oidx] = sed.compute_fnu(band) * 10**(-.4*scaling)
# If the distance is user defined, the redshift-based luminosity distance
# of the model is probably incorrect so we replace it
sed.add_info('universe.luminosity_distance', obs.distance, force=True)
for prop in gbl_results.best.intprop:
gbl_results.best.intprop[prop][oidx] = sed.info[prop]
for prop in gbl_results.best.extprop:
-gbl_results.best.extprop[prop][oidx] = sed.info[prop] * scaling \
+gbl_results.best.extprop[prop][oidx] = sed.info[prop] * 10**(-.4*scaling) \
* corr_dz
if gbl_conf['analysis_params']["save_best_sed"]:
-sed.to_fits('out/{}'.format(obs.id), scaling)
+sed.to_fits('out/{}'.format(obs.id), scaling * corr_dz)
gbl_counter.inc()
@@ -24,6 +24,7 @@ import numpy as np
from .filters import Filter
from .m2005 import M2005
from .bc03 import BC03
from .bc03_ssp import BC03_SSP
from .dale2014 import Dale2014
from .dl2007 import DL2007
from .dl2014 import DL2014
@@ -32,6 +33,7 @@ from .nebular_continuum import NebularContinuum
from .nebular_lines import NebularLines
from .schreiber2016 import Schreiber2016
from .themis import THEMIS
from .yggdrasil_ssp import Yggdrasil_SSP
DATABASE_FILE = pkg_resources.resource_filename(__name__, 'data.db')
@@ -116,6 +118,50 @@ class _BC03(BASE):
self.spec_table = ssp.spec_table
class _BC03_SSP(BASE):
"""Storage for Bruzual and Charlot 2003 SSP
"""
__tablename__ = "bc03_ssp"
imf = Column(String, primary_key=True)
metallicity = Column(Float, primary_key=True)
time_grid = Column(PickleType)
wavelength_grid = Column(PickleType)
info_table = Column(PickleType)
spec_table = Column(PickleType)
def __init__(self, ssp):
self.imf = ssp.imf
self.metallicity = ssp.metallicity
self.time_grid = ssp.time_grid
self.wavelength_grid = ssp.wavelength_grid
self.info_table = ssp.info_table
self.spec_table = ssp.spec_table
class _Yggdrasil_SSP(BASE):
"""Storage for the Yggdrasil SSP
"""
__tablename__ = "yggdrasil_ssp"
metallicity = Column(Float, primary_key=True)
fcov = Column(Float, primary_key=True)
time_grid = Column(PickleType)
wavelength_grid = Column(PickleType)
info_table = Column(PickleType)
spec_table = Column(PickleType)
def __init__(self, ssp):
self.metallicity = ssp.metallicity
self.fcov = ssp.fcov
self.time_grid = ssp.time_grid
self.wavelength_grid = ssp.wavelength_grid
self.info_table = ssp.info_table
self.spec_table = ssp.spec_table
class _Dale2014(BASE):
"""Storage for Dale et al (2014) infra-red templates
"""
@@ -444,6 +490,108 @@ class Database(object):
"""
return self._get_parameters(_BC03)
def add_bc03_ssp(self, ssp_bc03):
"""
Add a Bruzual and Charlot 2003 SSP to pcigale database
Parameters
----------
ssp_bc03: pcigale.data.BC03_SSP
"""
if self.is_writable:
ssp = _BC03_SSP(ssp_bc03)
self.session.add(ssp)
try:
self.session.commit()
except exc.IntegrityError:
self.session.rollback()
raise DatabaseInsertError('The SSP is already in the base.')
else:
raise Exception('The database is not writable.')
def get_bc03_ssp(self, imf, metallicity):
"""
Query the database for the Bruzual and Charlot 2003 SSP corresponding
to the given initial mass function and metallicity.
Parameters
----------
imf: string
Initial mass function (salp for Salpeter, chab for Chabrier)
metallicity: float
0.02 for Solar metallicity
Returns
-------
ssp: pcigale.data.BC03_SSP
The BC03_SSP object.
Raises
------
DatabaseLookupError: if the requested SSP is not in the database.
"""
result = self.session.query(_BC03_SSP)\
.filter(_BC03_SSP.imf == imf)\
.filter(_BC03_SSP.metallicity == metallicity)\
.first()
if result:
return BC03_SSP(result.imf, result.metallicity, result.time_grid,
result.wavelength_grid, result.info_table,
result.spec_table)
else:
raise DatabaseLookupError(
"The BC03 SSP for imf <{0}> and metallicity <{1}> is not in "
"the database.".format(imf, metallicity))
def get_bc03_ssp_parameters(self):
"""Get parameters for the Bruzual & Charlot 2003 stellar models.
Returns
-------
parameters: dictionary
dictionary of parameters and their values
"""
return self._get_parameters(_BC03_SSP)
def add_yggdrasil_ssp(self, ssp_yggdrasil):
"""
Add a Yggdrasil SSP to the pcigale database
Parameters
----------
ssp_yggdrasil: pcigale.data.Yggdrasil_SSP
"""
if self.is_writable:
ssp = _Yggdrasil_SSP(ssp_yggdrasil)
self.session.add(ssp)
try:
self.session.commit()
except exc.IntegrityError:
self.session.rollback()
raise DatabaseInsertError('The SSP is already in the base.')
else:
raise Exception('The database is not writable.')
def get_yggdrasil_ssp(self, metallicity, fcov):
"""Query the database for the Yggdrasil SSP corresponding to the given
metallicity and gas covering fraction.
"""
result = self.session.query(_Yggdrasil_SSP)\
.filter(_Yggdrasil_SSP.metallicity == metallicity)\
.filter(_Yggdrasil_SSP.fcov == fcov)\
.first()
if result:
return Yggdrasil_SSP(result.metallicity, result.fcov,
result.time_grid, result.wavelength_grid,
result.info_table, result.spec_table)
else:
raise DatabaseLookupError(
"The Yggdrasil SSP for metallicity <{}> and fcov <{}> is not in"
" the database.".format(metallicity, fcov))
def get_yggdrasil_ssp_parameters(self):
"""Get parameters for the Yggdrasil stellar models."""
return self._get_parameters(_Yggdrasil_SSP)
def add_dl2007(self, models):
"""
Add a list of Draine and Li (2007) models to the database.
......
# -*- coding: utf-8 -*-
# Copyright (C) 2013 Centre de données Astrophysiques de Marseille
# Licensed under the CeCILL-v2 licence - see Licence_CeCILL_V2-en.txt
# Author: Yannick Roehlly
import numpy as np
class BC03_SSP(object):
def __init__(self, imf, metallicity, time_grid, wavelength_grid,
info_table, spec_table):
if imf in ['salp', 'chab']:
self.imf = imf
else:
raise ValueError('IMF must be either salp for Salpeter or '
'chab for Chabrier.')
self.metallicity = metallicity
self.time_grid = time_grid
self.wavelength_grid = wavelength_grid
self.info_table = info_table
self.spec_table = spec_table
import numpy as np
class Yggdrasil_SSP(object):
def __init__(self, metallicity, fcov, time_grid, wavelength_grid,
info_table, spec_table):
self.metallicity = metallicity
self.fcov = fcov
self.time_grid = time_grid
self.wavelength_grid = wavelength_grid
self.info_table = info_table
self.spec_table = spec_table
@@ -35,7 +35,7 @@ class ObservationsManagerPassbands(object):
at each iteration.
"""
-def __init__(self, config, params, defaulterror=0.1, modelerror=0.1,
+def __init__(self, config, params, defaulterror=0.1, modelerror=0.,
threshold=-9990.):
self.conf = config
@@ -235,7 +235,7 @@ class ObservationsManagerPassbands(object):
np.fabs(self.table[err]))
for prop in self.intprops:
err = prop + '_err'
-self.table[name] = np.random.normal(fits.best.intprop[prop],
+self.table[prop] = np.random.normal(fits.best.intprop[prop],
np.fabs(self.table[err]))
for prop in self.extprops:
err = prop + '_err'
@@ -304,7 +304,8 @@ class Observation(object):
if self.redshift == 0.:
self.distance = 10. * parsec
elif self.redshift > 0.:
-self.distance = cosmo.luminosity_distance(self.redshift).value
+self.distance = cosmo.luminosity_distance(self.redshift).value \
+* 1e6 * parsec
else:
self.distance = np.nan
self.flux = {k: row[k] for k in cls.bands
......
@@ -397,7 +397,7 @@ class ResultsManager(object):
name="best."+prop))
for band in self.obs.bands:
-if band.startswith('line.') or band.startswith('linefilter.''):
+if band.startswith('line.') or band.startswith('linefilter.'):
unit = 'W/m^2'
else:
unit = 'mJy'
......
"""
Bruzual and Charlot (2003) stellar emission module for an SSP
=============================================================
This module implements the Bruzual and Charlot (2003) Single Stellar
Populations.
"""
from collections import OrderedDict
import numpy as np
from . import SedModule
from ..data import Database
class BC03SSP(SedModule):
parameter_list = OrderedDict([
("imf", (
"cigale_list(dtype=int, options=0. & 1.)",
"Initial mass function: 0 (Salpeter) or 1 (Chabrier).",
0
)),
("metallicity", (
"cigale_list(options=0.0001 & 0.0004 & 0.004 & 0.008 & 0.02 & "
"0.05)",
"Metallicity. Possible values are: 0.0001, 0.0004, 0.004, 0.008, "
"0.02, 0.05.",
0.02
)),
("separation_age", (
"cigale_list(dtype=int, minvalue=0)",
"Age [Myr] of the separation between the young and the old star "
"populations. The default value is 10^7 years (10 Myr). Set "
"to 0 not to differentiate ages (only an old population).",
10
))
])
def _init_code(self):
"""Read the SSP from the database."""
self.imf = int(self.parameters["imf"])
self.metallicity = float(self.parameters["metallicity"])
self.separation_age = int(self.parameters["separation_age"])
with Database() as database:
if self.imf == 0:
self.ssp = database.get_bc03_ssp('salp', self.metallicity)
elif self.imf == 1:
self.ssp = database.get_bc03_ssp('chab', self.metallicity)
else:
raise Exception("IMF #{} unknown".format(self.imf))
def process(self, sed):
"""Add the convolution of a Bruzual and Charlot SSP to the SED
Parameters
----------
sed: pcigale.sed.SED
SED object.
"""
if 'ssp.index' in sed.info:
index = sed.info['ssp.index']
else:
raise Exception('The stellar models do not correspond to a pure SSP.')
if self.ssp.time_grid[index] <= self.separation_age:
spec_young = self.ssp.spec_table[:, index]
info_young = self.ssp.info_table[:, index]
spec_old = np.zeros_like(spec_young)
info_old = np.zeros_like(info_young)
else:
spec_old = self.ssp.spec_table[:, index]
info_old = self.ssp.info_table[:, index]
spec_young = np.zeros_like(spec_old)
info_young = np.zeros_like(info_old)
info_all = info_young + info_old
info_young = dict(zip(["m_star", "m_gas", "n_ly"], info_young))
info_old = dict(zip(["m_star", "m_gas", "n_ly"], info_old))
info_all = dict(zip(["m_star", "m_gas", "n_ly"], info_all))
# We compute the Lyman continuum luminosity as it is important to
# compute the energy absorbed by the dust before ionising gas.
wave = self.ssp.wavelength_grid
w = np.where(wave <= 91.1)
lum_lyc_young, lum_lyc_old = np.trapz([spec_young[w], spec_old[w]],
wave[w])
# We do similarly for the total stellar luminosity
lum_young, lum_old = np.trapz([spec_young, spec_old], wave)
sed.add_module(self.name, self.parameters)
sed.add_info("stellar.imf", self.imf)
sed.add_info("stellar.metallicity", self.metallicity)
sed.add_info("stellar.old_young_separation_age", self.separation_age)
sed.add_info("stellar.age", self.ssp.time_grid[index])
sed.add_info("stellar.m_star_young", info_young["m_star"], True)
sed.add_info("stellar.m_gas_young", info_young["m_gas"], True)
sed.add_info("stellar.n_ly_young", info_young["n_ly"], True)
sed.add_info("stellar.lum_ly_young", lum_lyc_young, True)
sed.add_info("stellar.lum_young", lum_young, True)
sed.add_info("stellar.m_star_old", info_old["m_star"], True)
sed.add_info("stellar.m_gas_old", info_old["m_gas"], True)
sed.add_info("stellar.n_ly_old", info_old["n_ly"], True)
sed.add_info("stellar.lum_ly_old", lum_lyc_old, True)
sed.add_info("stellar.lum_old", lum_old, True)
sed.add_info("stellar.m_star", info_all["m_star"], True)
sed.add_info("stellar.m_gas", info_all["m_gas"], True)
sed.add_info("stellar.n_ly", info_all["n_ly"], True)
sed.add_info("stellar.lum_ly", lum_lyc_young + lum_lyc_old, True)
sed.add_info("stellar.lum", lum_young + lum_old, True)
sed.add_contribution("stellar.old", wave, spec_old)
sed.add_contribution("stellar.young", wave, spec_young)
# SedModule to be returned by get_module
Module = BC03SSP
"""
Simple screen extinction module
=====================================================================
This module implements various screen extinction laws, including MW, LMC, and
SMC laws.
"""
from collections import OrderedDict
import numpy as np
from . import SedModule
def ccm(wave, Rv):
""" Computes the MW extinction law using the Cardelli, Clayton & Mathis
(1989) curve valid from 1250 Angstroms to 3.3 microns.
Parameters
----------
wave : numpy.ndarray (1-d)
Wavelengths in nm.
Rv : float
Ratio of total to selective extinction, A_V / E(B-V).
"""
x = 1e3 / wave
# In the paper the condition is 0.3 < x < 1.1.
# However, using just x < 1.1 avoids an artificial break at 0.3, with
# something positive above 0.3 and 0 below.
cond1 = x < 1.1
cond2 = (x >= 1.1) & (x < 3.3)
cond3 = (x >= 3.3) & (x < 5.9)
cond4 = (x >= 5.9) & (x < 8.0)
cond5 = (x >= 8.0) & (x <= 11.)
fcond1 = lambda wn: Rv * .574 * wn**1.61 - .527 * wn**1.61
fcond2 = lambda wn: 1.0 * (Rv * np.polyval([-.505, 1.647, -0.827, -1.718,
1.137, .701, -0.609, 0.104, 1.],
wn - 1.82) +
np.polyval([3.347, -10.805, 5.491, 11.102,
-7.985, -3.989, 2.908, 1.952, 0.],
wn - 1.82))
fcond3 = lambda wn: 1.0 * (Rv * (1.752 - 0.316 * wn -
(0.104 / ((wn - 4.67)**2 + 0.341))) +
(-3.090 + 1.825 * wn +
(1.206 / ((wn - 4.62)**2 + 0.263))))
fcond4 = lambda wn: 1.0 * (Rv * (1.752 - 0.316 * wn -
(0.104 / ((wn - 4.67)**2 + 0.341)) +
np.polyval([-0.009779, -0.04473, 0., 0.],
wn - 5.9)) +
(-3.090 + 1.825 * wn +
(1.206 / ((wn - 4.62)**2 + 0.263)) +
np.polyval([0.1207, 0.2130, 0., 0.],
wn - 5.9)))
fcond5 = lambda wn: 1.0 * (Rv * (np.polyval([-0.070, 0.137, -0.628, -1.073],
wn-8.)) +
np.polyval([0.374, -0.420, 4.257, 13.670],
wn - 8.))
return np.piecewise(x, [cond1, cond2, cond3, cond4, cond5],
[fcond1, fcond2, fcond3, fcond4, fcond5])
def Pei92(wave, Rv=None, law='mw'):
""" Compute the extinction law using the Pei (1992) MW, LMC, and SMC
curves valid from 912 Angstroms to 25 microns.
Parameters
----------
wave : numpy.ndarray (1-d)
Wavelengths in nm.
Rv : float
Ratio of total to selective extinction, A_V / E(B-V).
law : string
name of the extinction curve to use: 'mw','lmc' or 'smc'
"""
wvl = wave * 1e-3
if law.lower() == 'smc':
if Rv is None:
Rv = 2.93
a_coeff = np.array([185, 27, 0.005, 0.010, 0.012, 0.03])
wvl_coeff = np.array([0.042, 0.08, 0.22, 9.7, 18, 25])
b_coeff = np.array([90, 5.50, -1.95, -1.95, -1.80, 0.0])
n_coeff = np.array([2.0, 4.0, 2.0, 2.0, 2.0, 2.0])
elif law.lower() == 'lmc':
if Rv is None:
Rv = 3.16
a_coeff = np.array([175, 19, 0.023, 0.005, 0.006, 0.02])
wvl_coeff = np.array([0.046, 0.08, 0.22, 9.7, 18, 25])
b_coeff = np.array([90, 5.5, -1.95, -1.95, -1.8, 0.0])
n_coeff = np.array([2.0, 4.5, 2.0, 2.0, 2.0, 2.0])
elif law.lower() == 'mw':
if Rv is None:
Rv = 3.08
a_coeff = np.array([165, 14, 0.045, 0.002, 0.002, 0.012])
wvl_coeff = np.array([0.046, 0.08, 0.22, 9.7, 18, 25])
b_coeff = np.array([90, 4.0, -1.95, -1.95, -1.8, 0.0])
n_coeff = np.array([2.0, 6.5, 2.0, 2.0, 2.0, 2.0])
else:
raise ValueError("Law must be 'mw', 'lmc', or 'smc'.")
Alambda_over_Ab = np.zeros(len(wvl))
for a, wv, b, n in zip(a_coeff, wvl_coeff, b_coeff, n_coeff):
Alambda_over_Ab += a / ((wvl / wv)**n + (wv / wvl)**n + b)
# Normalise with Av
Alambda_over_Av = (1 / Rv + 1) * Alambda_over_Ab
# set 0 for wvl < 91.2nm
Alambda_over_Av[wvl < 91.2 * 1e-3] = 0
# Set 0 for wvl > 30 microns
Alambda_over_Av[wvl > 30] = 0
# Transform Alambda_over_Av into Alambda_over_E(B-V)
Alambda_over_Ebv = Rv * Alambda_over_Av
return Alambda_over_Ebv
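A quick consistency check of this parametrisation: A(λ)/A(V) evaluated at the V band (0.55 µm) should be close to unity. The coefficients below are the SMC values from the function above; the tolerance is loose because the analytic fit is approximate:

```python
import numpy as np

# Pei (1992) SMC coefficients, as in Pei92 above
a = np.array([185., 27., 0.005, 0.010, 0.012, 0.03])
wv = np.array([0.042, 0.08, 0.22, 9.7, 18., 25.])  # term wavelengths in µm
b = np.array([90., 5.50, -1.95, -1.95, -1.80, 0.0])
n = np.array([2.0, 4.0, 2.0, 2.0, 2.0, 2.0])
Rv = 2.93

wvl = 0.55  # µm, V band
xi = np.sum(a / ((wvl / wv) ** n + (wv / wvl) ** n + b))  # A_lambda / A_B
Alam_over_Av = (1. / Rv + 1.) * xi  # A_B/A_V = 1 + 1/Rv, so this should be ~1
```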
class DustExtinction(SedModule):
"""Screen extinction law
This module computes the screen extinction from various classical
extinction laws for the MW, the LMC, and the SMC.
The extinction is computed for all the components and is added to the SED as
a negative contribution.
"""
parameter_list = OrderedDict([
("E_BV", (
"cigale_list(minvalue=0.)",
"E(B-V), the colour excess.",
0.3
)),
("Rv", (
"cigale_list(minvalue=0.)",
"Ratio of total to selective extinction, A_V / E(B-V). The "
"standard value is 3.1 for MW using CCM89. For SMC and LMC using "
"Pei92 the values should be 2.93 and 3.16.",
3.1
)),
("law", (
"cigale_list(dtype=int, options=0 & 1 & 2)",
"Extinction law to apply. The values are 0 for CCM, 1 for SMC, and "
"2 for LMC.",
0
)),
("filters", (
"string()",
"Filters for which the extinction will be computed and added to "
"the SED information dictionary. You can give several filter "
"names separated by a & (don't use commas).",
"B_B90 & V_B90 & FUV"
))
])
def _init_code(self):
"""Read the module parameters."""
self.ebv = float(self.parameters["E_BV"])
self.law = int(self.parameters["law"])
self.Rv = float(self.parameters["Rv"])
self.filter_list = [item.strip() for item in
self.parameters["filters"].split("&")]
# We cannot compute the extinction until we know the wavelengths. Yet,
# we reserve the object.
self.att = None
self.lineatt = {}
def process(self, sed):
"""Add the extinction component to each of the emission components.
Parameters
----------
sed: pcigale.sed.SED object
"""
wl = sed.wavelength_grid
# Fλ fluxes (only from continuum) in each filter before extinction.
flux_noatt = {filt: sed.compute_fnu(filt) for filt in self.filter_list}
# Compute stellar extinction curve
if self.att is None:
if self.law == 0:
self.att = 10 ** (-.4 * ccm(wl, self.Rv) * self.ebv)
elif self.law == 1:
self.att = 10 ** (-.4 * Pei92(wl, Rv=self.Rv, law='smc') *
self.ebv)
elif self.law == 2:
self.att = 10 ** (-.4 * Pei92(wl, self.Rv, law='lmc') *
self.ebv)
# Compute nebular extinction curves
if len(self.lineatt) == 0:
names = [k for k in sed.lines]
linewl = np.array([sed.lines[k][0] for k in names])
if self.law == 0:
self.lineatt['nebular'] = ccm(wl, self.Rv)
for name, att in zip(names, ccm(linewl, self.Rv)):
self.lineatt[name] = att
elif self.law == 1:
self.lineatt['nebular'] = Pei92(wl, law='smc')
for name, att in zip(names, Pei92(linewl, law='smc')):
self.lineatt[name] = att
elif self.law == 2:
self.lineatt['nebular'] = Pei92(wl, law='lmc')
for name, att in zip(names, Pei92(linewl, law='lmc')):
self.lineatt[name] = att
for k, v in self.lineatt.items():
self.lineatt[k] = 10. ** (-.4 * v * self.ebv)
dust_lumin = 0.
contribs = [contrib for contrib in sed.contribution_names if
'absorption' not in contrib]
for contrib in contribs:
luminosity = sed.get_lumin_contribution(contrib)
if 'nebular' in contrib:
extinction_spec = luminosity * (self.lineatt['nebular'] - 1.)
else:
extinction_spec = luminosity * (self.att - 1.)
dust_lumin -= np.trapz(extinction_spec, wl)
sed.add_module(self.name, self.parameters)
sed.add_contribution("attenuation." + contrib, wl,
extinction_spec)
for name, (linewl, old, young) in sed.lines.items():
sed.lines[name] = (linewl, old * self.lineatt[name],
young * self.lineatt[name])
# Total extinction
if 'dust.luminosity' in sed.info:
sed.add_info("dust.luminosity",
sed.info["dust.luminosity"] + dust_lumin, True, True)
else:
sed.add_info("dust.luminosity", dust_lumin, True)
# Fλ fluxes (only from continuum) in each filter after extinction.
flux_att = {filt: sed.compute_fnu(filt) for filt in self.filter_list}
# Attenuation in each filter
for filt in self.filter_list:
sed.add_info("attenuation." + filt,
-2.5 * np.log10(flux_att[filt] / flux_noatt[filt]))
sed.add_info('attenuation.E_BV', self.ebv)
# SedModule to be returned by get_module
Module = DustExtinction
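The attenuation values stored per filter above follow directly from the linear attenuation factor applied to each component. A minimal sketch, assuming a constant k(λ) = Aλ/E(B-V) over the bandpass (hypothetical values, not taken from the source):

```python
import numpy as np

# The module multiplies each luminosity component by
# 10**(-0.4 * k * E(B-V)), where k = Alambda/E(B-V).
k = 3.1
ebv = 0.3
att = 10 ** (-0.4 * k * ebv)   # linear attenuation factor

flux_noatt = 2.0e-16           # arbitrary flux before extinction
flux_att = flux_noatt * att

# attenuation.<filter> stores -2.5 log10(F_att / F_noatt); for a flat k
# this reduces to k * E(B-V) magnitudes.
a_mag = -2.5 * np.log10(flux_att / flux_noatt)
assert np.isclose(a_mag, k * ebv)
```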
......@@ -64,7 +64,10 @@ class MBB(SedModule):
self.epsilon = float(self.parameters["epsilon_mbb"])
self.T = float(self.parameters["t_mbb"])
self.beta = float(self.parameters["beta_mbb"])
self.energy_balance = bool(self.parameters["energy_balance"])
if type(self.parameters["energy_balance"]) is str:
self.energy_balance = self.parameters["energy_balance"].lower() == 'true'
else:
self.energy_balance = bool(self.parameters["energy_balance"])
if self.epsilon < 0.:
raise Exception("Error, epsilon_mbb must be ≥ 0.")
......
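The same string-to-Boolean conversion is repeated in every module touched by this commit. A hypothetical shared helper (a sketch, not part of the source) would factor it out:

```python
def str_to_bool(value):
    """Interpret a parameter value as a Boolean.

    Values read from a parameters file arrive as strings such as "True"
    or "false"; anything else falls back to Python truthiness.
    """
    if isinstance(value, str):
        return value.lower() == 'true'
    return bool(value)
```

Each module could then write, e.g., `self.energy_balance = str_to_bool(self.parameters["energy_balance"])`.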
......@@ -95,7 +95,10 @@ class NebularEmission(SedModule):
self.fesc = float(self.parameters['f_esc'])
self.fdust = float(self.parameters['f_dust'])
self.lines_width = float(self.parameters['lines_width'])
self.emission = bool(self.parameters["emission"])
if type(self.parameters["emission"]) is str:
self.emission = self.parameters["emission"].lower() == 'true'
else:
self.emission = bool(self.parameters["emission"])
if self.fesc < 0. or self.fesc > 1:
raise Exception("Escape fraction must be between 0 and 1")
......
......@@ -73,7 +73,10 @@ class Sfh2Exp(SedModule):
self.burst_age = int(self.parameters["burst_age"])
age = int(self.parameters["age"])
sfr_0 = float(self.parameters["sfr_0"])
normalise = bool(self.parameters["normalise"])
if type(self.parameters["normalise"]) is str:
normalise = self.parameters["normalise"].lower() == 'true'
else:
normalise = bool(self.parameters["normalise"])
# Time grid and age. If needed, the age is rounded to the inferior Myr
time_grid = np.arange(age)
......
......@@ -63,7 +63,10 @@ class SfhBuat08(SedModule):
def _init_code(self):
self.velocity = float(self.parameters["velocity"])
self.age = int(self.parameters["age"])
normalise = bool(self.parameters["normalise"])
if type(self.parameters["normalise"]) is str:
normalise = self.parameters["normalise"].lower() == 'true'
else:
normalise = bool(self.parameters["normalise"])
# Time grid and age. If needed, the age is rounded to the inferior Myr
time_grid = np.arange(self.age)
......
......@@ -52,7 +52,10 @@ class SfhQuenchSmooth(SedModule):
def _init_code(self):
self.quenching_age = int(self.parameters["quenching_time"])
self.quenching_factor = float(self.parameters["quenching_factor"])
self.normalise = bool(self.parameters["normalise"])
if type(self.parameters["normalise"]) is str:
    self.normalise = self.parameters["normalise"].lower() == 'true'
else:
    self.normalise = bool(self.parameters["normalise"])
def process(self, sed):
"""
......
......@@ -52,7 +52,10 @@ class SfhQuenchTrunk(SedModule):
def _init_code(self):
self.quenching_age = int(self.parameters["quenching_age"])
self.quenching_factor = float(self.parameters["quenching_factor"])
self.normalise = bool(self.parameters["normalise"])
if type(self.parameters["normalise"]) is str:
    self.normalise = self.parameters["normalise"].lower() == 'true'
else:
    self.normalise = bool(self.parameters["normalise"])
def process(self, sed):
"""
......
......@@ -63,7 +63,8 @@ class SFHDelayed(SedModule):
)),
("sfr_A", (
"cigale_list(minvalue=0.)",
"Value of SFR at t = 0 in M_sun/yr.",
"Multiplicative factor controlling the SFR if normalise is False. "
"For instance without any burst: SFR(t)=sfr_A×t×exp(-t/τ)/τ²",
1.
)),
("normalise", (
......@@ -80,7 +81,10 @@ class SFHDelayed(SedModule):
self.age_burst = int(self.parameters["age_burst"])
self.f_burst = float(self.parameters["f_burst"])
sfr_A = float(self.parameters["sfr_A"])
normalise = bool(self.parameters["normalise"])
if type(self.parameters["normalise"]) is str:
normalise = self.parameters["normalise"].lower() == 'true'
else:
normalise = bool(self.parameters["normalise"])
# Time grid for each component
t = np.arange(self.age_main)
......
......@@ -59,7 +59,8 @@ class SFHDelayedBQ(SedModule):
)),
("sfr_A", (
"cigale_list(minvalue=0.)",
"Value of SFR at t = 0 in M_sun/yr.",
"Multiplicative factor controlling the SFR if normalise is False. "
"For instance without any burst/quench: SFR(t)=sfr_A×t×exp(-t/τ)/τ²",
1.
)),
("normalise", (
......@@ -75,7 +76,10 @@ class SFHDelayedBQ(SedModule):
self.age_bq = int(self.parameters["age_bq"])
self.r_sfr = float(self.parameters["r_sfr"])
sfr_A = float(self.parameters["sfr_A"])
normalise = bool(self.parameters["normalise"])
if type(self.parameters["normalise"]) is str:
normalise = self.parameters["normalise"].lower() == 'true'
else:
normalise = bool(self.parameters["normalise"])
# Delayed SFH
t = np.arange(self.age_main)
......
......@@ -59,9 +59,12 @@ class SfhFromFile(SedModule):
def _init_code(self):
filename = self.parameters['filename']
normalise = bool(self.parameters["normalise"])
age = int(self.parameters['age'])
self.sfr_column_number = int(self.parameters['sfr_column'])
if type(self.parameters["normalise"]) is str:
normalise = self.parameters["normalise"].lower() == 'true'
else:
normalise = bool(self.parameters["normalise"])
table = read_table(filename)
self.sfr = table.columns[self.sfr_column_number].data.astype(float)
......
......@@ -73,7 +73,10 @@ class SfhPeriodic(SedModule):
self.tau_bursts = float(self.parameters["tau_bursts"])
age = int(self.parameters["age"])
sfr_A = float(self.parameters["sfr_A"])
normalise = bool(self.parameters["normalise"])
if type(self.parameters["normalise"]) is str:
normalise = self.parameters["normalise"].lower() == 'true'
else:
normalise = bool(self.parameters["normalise"])
time_grid = np.arange(0, age)
self.sfr = np.zeros_like(time_grid, dtype=float)
......
"""
Simple module to reduce the SFH to a single SSP
===========================================================
This module implements a star formation history (SFH) through a single SSP.
"""
from collections import OrderedDict
import numpy as np
from . import SedModule
class SSP(SedModule):
"""Instantaneous burst corresponding to a model-provided SSP
This module sets the SED star formation history (SFH) as a single stellar
population
"""
parameter_list = OrderedDict([
("index", (
"cigale_list(dtype=int, minvalue=0)",
"Index of the SSP to use.",
0
))
])
def _init_code(self):
self.index = int(self.parameters["index"])
def process(self, sed):
"""Set the star formation history to a single instantaneous burst (SSP).
Parameters
----------
sed: pcigale.sed.SED object
"""
sed.add_module(self.name, self.parameters)
# Add the sfh and the output parameters to the SED.
sed.sfh = np.array([0.])
sed.add_info("ssp.index", self.index)
# SedModule to be returned by get_module
Module = SSP
"""
Yggdrasil stellar emission module for an SSP
============================================
This module implements the Yggdrasil single stellar
populations.
"""
from collections import OrderedDict
import numpy as np
from . import SedModule
from ..data import Database
class YggdrasilSSP(SedModule):
parameter_list = OrderedDict([
("metallicity", (
"cigale_list(options=0.004 & 0.008 & 0.02)",
"Metallicity. Possible values are: 0.004, 0.008, and 0.02.",
0.02
)),
("fcov", (
"cigale_list(options=0 & 0.5)",
"Covering fraction. Possible values are 0 and 0.5.",
0.5
)),
("separation_age", (
"cigale_list(dtype=int, minvalue=0)",
"Age [Myr] of the separation between the young and the old star "
"populations. The default value is 10 Myr. Set to 0 not to "
"differentiate ages (only an old population).",
10
))
])
def _init_code(self):
"""Read the SSP from the database."""
self.metallicity = float(self.parameters["metallicity"])
self.fcov = float(self.parameters["fcov"])
self.separation_age = int(self.parameters["separation_age"])
with Database() as database:
self.ssp = database.get_yggdrasil_ssp(self.metallicity,
self.fcov)
def process(self, sed):
"""Add the emission of the selected Yggdrasil SSP to the SED
Parameters
----------
sed: pcigale.sed.SED
SED object.
"""
if 'ssp.index' in sed.info:
index = sed.info['ssp.index']
else:
raise Exception('The stellar models do not correspond to a pure SSP.')
if self.ssp.time_grid[index] <= self.separation_age:
spec_young = self.ssp.spec_table[:, index]
info_young = self.ssp.info_table[index]
spec_old = np.zeros_like(spec_young)
info_old = np.zeros_like(info_young)
else:
spec_old = self.ssp.spec_table[:, index]
info_old = self.ssp.info_table[index]
spec_young = np.zeros_like(spec_old)
info_young = np.zeros_like(info_old)
info_all = info_young + info_old
wave = self.ssp.wavelength_grid
lum_young, lum_old = np.trapz([spec_young, spec_old], wave)
sed.add_module(self.name, self.parameters)
sed.add_info("stellar.metallicity", self.metallicity)
sed.add_info("stellar.fcov", self.fcov)
sed.add_info("stellar.old_young_separation_age", self.separation_age)
sed.add_info("stellar.age", self.ssp.time_grid[index])
sed.add_info("stellar.m_star_young", info_young, True)
sed.add_info("stellar.lum_young", lum_young, True)
sed.add_info("stellar.m_star_old", info_old, True)
sed.add_info("stellar.lum_old", lum_old, True)
sed.add_info("stellar.m_star", info_all, True)
sed.add_info("stellar.lum", lum_young + lum_old, True)
sed.add_contribution("stellar.old", wave, spec_old)
sed.add_contribution("stellar.young", wave, spec_young)
# SedModule to be returned by get_module
Module = YggdrasilSSP
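The young/old bookkeeping in `process` can be illustrated with a toy SSP grid (hypothetical ages and spectra, not the Yggdrasil data):

```python
import numpy as np

# Toy grid: four SSP ages (Myr) and one 3-point spectrum per age.
time_grid = np.array([1, 5, 20, 100])
spec_table = np.arange(12, dtype=float).reshape(3, 4)  # (wavelength, age)
separation_age = 10  # Myr

index = 2  # a 20 Myr SSP: older than the separation age
if time_grid[index] <= separation_age:
    spec_young = spec_table[:, index]
    spec_old = np.zeros_like(spec_young)
else:
    spec_old = spec_table[:, index]
    spec_young = np.zeros_like(spec_old)

# All the emission ends up in the old component here.
assert np.all(spec_young == 0.)
assert np.array_equal(spec_old, spec_table[:, 2])
```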
......@@ -100,7 +100,7 @@ class Configuration(object):
["* sfhdelayed (delayed SFH with optional exponential burst)"] +
["* sfhdelayedbq (delayed SFH with optional constant burst/quench)"
] +
["* sfhfromfile (abitrary SFH read from an input file)"] +
["* sfhfromfile (arbitrary SFH read from an input file)"] +
["* sfhperiodic (periodic SFH, exponential, rectangle or delayed"
")"] +
["SSP:"] +
......
......@@ -405,7 +405,7 @@ def _mock_worker(exact, estimated, param, nologo):
plt.figimage(image, 510, 55, origin='upper', zorder=10, alpha=1)
plt.tight_layout()
plt.savefig('out/mock_{}.pdf'.format(param))
plt.savefig('out/mock_{}.pdf'.format(param.replace('/', '_')))
plt.close()
......