......@@ -12,6 +12,7 @@ with their current email address and affiliation.
Laboratoire d'Astrophysique de Marseille, France
* David Corre <david.corre@lam.fr>
Laboratoire d'Astrophysique de Marseille, France
* Yannick Roehlly <yannick.roehlly@lam.fr>
Laboratoire d'Astrophysique de Marseille, France
* Héctor Salas Olave <hector.salas.o@gmail.com>
Universidad de Antofagasta, Chile
# Change Log
## Unreleased
### Added
- The (1+z1)/(1+z2) factor between observed and grid flux densities caused by the differential redshifting is now taken into account. With a default grid redshift rounding of two decimals this yields a difference of at most 0.5% in the estimated physical properties at z=0 and even less at higher z. (Médéric Boquien)
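The magnitude of this correction can be checked with a short standalone sketch (illustrative only, not pcigale code):

```python
# Size of the differential-redshifting correction when the grid
# redshift is rounded to two decimals (offset of at most 0.005).
def dz_factor(z_obs, z_grid):
    """(1 + z_obs) / (1 + z_grid) factor between observed and grid fluxes."""
    return (1. + z_obs) / (1. + z_grid)

# Worst case at z=0: an object at z=0.005 lands on the z=0.00 grid point.
err_z0 = abs(dz_factor(0.005, 0.0) - 1.)  # ~0.5%
# The same redshift offset at z=1 gives a smaller relative error.
err_z1 = abs(dz_factor(1.005, 1.0) - 1.)  # ~0.25%
print(err_z0, err_z1)
```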
### Changed
### Fixed
- Make sure we can plot the PDF of equivalent widths. (Médéric Boquien)
- Fix a crash when generating a mock catalogue containing intensive properties. (Médéric Boquien)
- In the `sfhdelayed` and `sfhdelayedbq` modules, provide the correct description for the sfr_A parameter. (Médéric Boquien & Laure Ciesla)
- Internally the luminosity distance was erroneously stored in Mpc rather than in m for non-zero redshifts. This has now been standardised to m. (Médéric Boquien)
- As the best-fit properties are computed at the exact observed redshift, correct the scaling factor as it is computed at the grid redshift. This corrects for slight offsets on the best-fit properties when the input redshift has more decimals than the grid redshift. (Médéric Boquien)
- Fix the pip install by making pcigale.managers discoverable. (Yannick Roehlly)
### Optimised
## 2018.0 (2018-11-06)
### Added
- It is now possible to optionally indicate the distance in Mpc in the input file. If present it will be used in lieu of the distance computed from the redshift. This is especially useful in the nearby universe where the redshift is a very poor indicator of the actual distance. (Médéric Boquien)
......@@ -144,7 +157,7 @@
- The output files providing estimates of the physical properties are now generated both in the form of text and FITS files. (Médéric Boquien)
- When using the `dustatt_calzleit module`, choosing ẟ≠0 leads to an effective E(B-V) different from the one set by the user. Now the E(B-V) will always correspond to the one specified by the user. This means that at fixed E(B-V), A(V) depends on ẟ. (Médéric Boquien)
- The pcigale-mock tool has been merged into pcigale-plots; the mock plots can be obtained with the "mock" command. (Médéric Boquien)
- The `sfhdelayed` module is now initialised with \_init_code() to be consistent with the way things are done in other modules. This should give a slight speedup under some circumstances too. (Médéric Boquien)
- In `sfhfromfile`, the specification of the time grid was vague and therefore could lead to incorrect results if it was not properly formatted by the end user. The description has been clarified and we now check that the time starts from 0 and that the time step is always 1 Myr. If it is not the case we raise an exception. (Médéric Boquien)
- When the redshift is not indicated in pcigale.ini, the analysis module fills the list of redshifts from the redshifts indicated in the input flux file. This is inefficient as analysis modules should not have to modify the configuration. Now this is done when interpreting pcigale.ini, before calling the relevant analysis module. As a side effect, "pcigale check" now returns the total number of models that cigale will compute rather than the number of models per redshift bin. (Médéric Boquien)
- The optionally saved spectra in the `pdf_analysis` and `savefluxes` modules were saved in the VO-table format. The most important downside is that it is very slow to write, which proved to be a major bottleneck in the computing speed. To overcome this issue, we now save the spectra using the FITS format. Instead of a single file containing both the spectra (including the various components) and the SFH, we now have one file for the spectra and one file for the SFH.
......@@ -178,7 +191,7 @@
## 0.8.0 (2015-12-01)
### Added
- The evaluation of the parameters is always done linearly. This can be a problem when estimating the SFR or the stellar mass for instance as it is usual to estimate their log rather. Because the log is non-linear, the likelihood-weighted mean of the log is not the log of the likelihood-weighted mean. Therefore the estimation of the log of these parameters has to be done during the analysis step. This is now possible. The variables to be analysed in log just need to be indicated with the suffix "\_log", for instance "stellar.m\_star\_log". (Médéric Boquien, idea suggested by Samir Salim)
### Fixed
- Running the scripts in parallel trigger a deadlock on OS X with python 3.5. A workaround has been implemented. (Médéric Boquien)
......
......@@ -20,7 +20,7 @@ import numpy as np
from scipy import interpolate
import scipy.constants as cst
from astropy.table import Table
from pcigale.data import (Database, Filter, M2005, BC03, BPASSv2, Fritz2006,
Dale2014, DL2007, DL2014, NebularLines,
NebularContinuum, Schreiber2016, THEMIS)
......@@ -411,6 +411,100 @@ def build_bc2003(base, res):
))
def build_bpassv2(base, bpassres):
bpass_dir = os.path.join(os.path.dirname(__file__), 'bpassv2.2/')
# Time grid (1 Myr to 13.7 Gyr with 1 Myr step)
ssp_timegrid = np.arange(1, 13700)
# The wavelength grid of the BPASS models is extremely refined. This is too
# refined for our purpose. By default we interpolate on the grid of the low
# resolution BC03 models. We remove the wavelengths beyond 10 microns as
# they are out of the range covered by BPASS.
spec_wave_lr = base.get_bc03('salp', 0.02).wavelength_grid
spec_wave_lr = spec_wave_lr[spec_wave_lr <= 10000.]
# Metallicities associated to each key
metal = {
"em5": 0.00001,
"em4": 0.0001,
"001": 0.001,
"002": 0.002,
"003": 0.003,
"004": 0.004,
"006": 0.006,
"008": 0.008,
"010": 0.010,
"014": 0.014,
"020": 0.020,
"030": 0.030,
"040": 0.040
}
# IMF associated to each key
imf = {
"100_100": 0,
"100_300": 1,
"135_100": 2,
"135_300": 3,
"135all_100": 4,
"170_100": 5,
"170_300": 6,
"chab100": 7,
"chab300": 8
}
basename = "{}{}/{}-{}-imf_{}.z{}.dat.gz"
for key_metal, key_imf, binary in itertools.product(metal, imf,
['bin', 'sin']):
specname = basename.format(bpass_dir, key_imf, 'spectra', binary,
key_imf, key_metal)
ionname = basename.format(bpass_dir, key_imf, 'ionizing', binary,
key_imf, key_metal)
massname = basename.format(bpass_dir, key_imf, 'starmass', binary,
key_imf, key_metal)
print("Importing {}...".format(specname))
spec = np.genfromtxt(specname)
# Get the wavelengths and convert them from Å to nm
spec_wave = spec[:, 0] * 0.1
# Get the ionizing flux and normalize it from 10⁶ Msun to 1 Msun
ion = 10**np.genfromtxt(ionname)[:, 1] * 1e-6
# Get the stellar mass and normalize it from 10⁶ Msun to 1 Msun
mass = np.genfromtxt(massname)[:, 1] * 1e-6
# Convert from Lsun/Å for 10⁶ Msun to W/nm for 1 Msun
spec = spec[:, 1:] * (10. * 3.828e26 * 1e-6)
if bpassres == 'lr':
spec = 10**interpolate.interp1d(np.log10(spec_wave), np.log10(spec),
axis=0, assume_sorted=True)(np.log10(spec_wave_lr))
spec[np.where(np.isnan(spec))] = 0.
# Timegrid from the models in Myr
# Time grid of the models in Myr
spec_timegrid = np.logspace(0., 5., spec.shape[1])
ssp_lumin = interpolate.interp1d(spec_timegrid, spec)(ssp_timegrid)
ssp_mass = interpolate.interp1d(spec_timegrid, mass)(ssp_timegrid)
ssp_ion = interpolate.interp1d(spec_timegrid, ion)(ssp_timegrid)
ssp_info = np.array([ssp_mass, ssp_ion])
base.add_bpassv2(BPASSv2(
imf[key_imf],
metal[key_metal],
binary == 'bin',
ssp_timegrid,
spec_wave_lr if bpassres == 'lr' else spec_wave,
ssp_info,
ssp_lumin
))
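The downsampling above interpolates log-flux against log-wavelength before going back to linear units. A standalone sketch (toy power-law spectrum, not the BPASS data) shows why this is a good choice: a power law is linear in log-log space, so linear interpolation reproduces it exactly on the coarse grid.

```python
import numpy as np
from scipy import interpolate

# Finely sampled power-law "spectrum" on a high-resolution grid.
wave_hr = np.logspace(1., 4., 10000)  # 10 nm to 10 µm
spec_hr = wave_hr ** -2.
# Coarse target grid spanning the same range.
wave_lr = np.logspace(1., 4., 100)

# Interpolate log(flux) vs log(wavelength), then convert back.
spec_lr = 10. ** interpolate.interp1d(
    np.log10(wave_hr), np.log10(spec_hr),
    assume_sorted=True)(np.log10(wave_lr))

# The power law is recovered to numerical precision on the coarse grid.
print(np.max(np.abs(spec_lr / wave_lr ** -2. - 1.)))
```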
def build_dale2014(base):
models = []
dale2014_dir = os.path.join(os.path.dirname(__file__), 'dale2014/')
......@@ -843,7 +937,7 @@ def build_themis(base):
base.add_themis(models)
def build_base(bc03res='lr'):
def build_base(bc03res='lr', bpassres='lr'):
base = Database(writable=True)
base.upgrade_base()
......@@ -864,32 +958,37 @@ def build_base(bc03res='lr'):
print("\nDONE\n")
print('#' * 78)
print("4- Importing the BPASS v2.2 SSP\n")
build_bpassv2(base, bpassres)
print("\nDONE\n")
print('#' * 78)
print("5- Importing Draine and Li (2007) models\n")
build_dl2007(base)
print("\nDONE\n")
print('#' * 78)
print("6- Importing the updated Draine and Li (2014) models\n")
build_dl2014(base)
print("\nDONE\n")
print('#' * 78)
print("7- Importing Fritz et al. (2006) models\n")
build_fritz2006(base)
print("\nDONE\n")
print('#' * 78)
print("8- Importing Dale et al (2014) templates\n")
build_dale2014(base)
print("\nDONE\n")
print('#' * 78)
print("9- Importing nebular lines and continuum\n")
build_nebular(base)
print("\nDONE\n")
print('#' * 78)
print("10- Importing Schreiber et al (2016) models\n")
build_schreiber2016(base)
print("\nDONE\n")
print('#' * 78)
......
......@@ -30,26 +30,30 @@ def save_chi2(obs, variable, models, chi2, values):
@lru_cache(maxsize=None)
def compute_corr_dz(model_z, obs):
"""The mass-dependent physical properties are computed assuming the
redshift of the model. However because we round the observed redshifts to
two decimals, there can be a difference of 0.005 in redshift between the
models and the actual observation. At low redshift, this can cause a
models and the actual observation. This causes two issues. First there is a
difference in the luminosity distance. At low redshift, this can cause a
discrepancy in the mass-dependent physical properties: ~0.35 dex at z=0.010
vs 0.015 for instance. Therefore we correct these physical quantities by
multiplying them by corr_dz.
vs 0.015 for instance. In addition, the 1+z dimming will be different.
We compute here the correction factor for these two effects.
Parameters
----------
model_z: float
Redshift of the model.
obs: instance of the Observation class
Object containing the distance and redshift of the observed object.
"""
if model_z == 0.:
mod_distance = 10. * parsec
else:
mod_distance = cosmo.luminosity_distance(model_z).value * 1e6 * parsec
return (obs.distance / mod_distance) ** 2. * \
(1. + model_z) / (1. + obs.redshift)
def dchi2_over_ds2(s, obsdata, obsdata_err, obslim, obslim_err, moddata,
......
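The correction computed by compute_corr_dz combines a squared ratio of luminosity distances with the differential (1+z) dimming. A standalone numeric sketch (toy distances and a hypothetical `corr_dz` helper mirroring the function above, without the astropy cosmology used by pcigale):

```python
parsec = 3.0857e16  # m

def corr_dz(model_z, obs_distance, obs_redshift, model_distance=None):
    """Correction factor between the model and the observed scalings.

    model_distance is the luminosity distance of the model in metres;
    for model_z == 0 the conventional 10 pc is used instead.
    """
    if model_z == 0.:
        model_distance = 10. * parsec
    return (obs_distance / model_distance) ** 2. * \
        (1. + model_z) / (1. + obs_redshift)

# An object at 25 Mpc fitted at the z=0 grid point: the correction
# reduces to the (d / 10 pc)² term, (2.5e6)**2 = 6.25e12.
d_obs = 25. * 1e6 * parsec
print(corr_dz(0., d_obs, 0.))
```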
......@@ -142,7 +142,7 @@ def analysis(idx, obs):
z = np.array(
gbl_models.conf['sed_modules_params']['redshifting']['redshift'])
wz = slice(np.abs(obs.redshift-z).argmin(), None, z.size)
corr_dz = compute_corr_dz(z[wz.start], obs)
else: # We do not know the redshift so we use the full grid
wz = slice(0, None, 1)
corr_dz = 1.
......@@ -235,16 +235,32 @@ def bestfit(oidx, obs):
# difference between the object and the grid redshifts.
params = deepcopy(gbl_params.from_index(best_index))
if obs.redshift >= 0.:
model_z = params[gbl_params.modules.index('redshifting')]['redshift']
params[gbl_params.modules.index('redshifting')]['redshift'] = obs.redshift
# Correct fluxes for the fact that the scaling factor was computed on
# the grid redshift. Because of the difference in redshift the distance
# is different and must be reflected in the scaling
corr_scaling = compute_corr_dz(model_z, obs) / \
compute_corr_dz(obs.redshift, obs)
else: # The model redshift is always exact in redshifting mode
corr_scaling = 1.
sed = gbl_warehouse.get_sed(gbl_params.modules, params)
scaling = gbl_results.best.scaling[oidx] * corr_scaling
for band in gbl_results.best.flux:
gbl_results.best.flux[band][oidx] = sed.compute_fnu(band) * scaling
# If the distance is user defined, the redshift-based luminosity distance
# of the model is probably incorrect so we replace it
sed.add_info('universe.luminosity_distance', obs.distance, force=True)
for prop in gbl_results.best.intprop:
gbl_results.best.intprop[prop][oidx] = sed.info[prop]
......
......@@ -16,7 +16,8 @@ SqlAlchemy ORM to store the data in a unique SQLite3 database.
"""
import pkg_resources
from sqlalchemy import create_engine, exc, Column, String, Float, PickleType
from sqlalchemy import Integer, Boolean
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import class_mapper, sessionmaker
import numpy as np
......@@ -24,6 +25,7 @@ import numpy as np
from .filters import Filter
from .m2005 import M2005
from .bc03 import BC03
from .bpassv2 import BPASSv2
from .dale2014 import Dale2014
from .dl2007 import DL2007
from .dl2014 import DL2014
......@@ -115,6 +117,28 @@ class _BC03(BASE):
self.info_table = ssp.info_table
self.spec_table = ssp.spec_table
class _BPASSv2(BASE):
"""Storage for the BPASS v2 SSP
"""
__tablename__ = "bpassv2"
imf = Column(Integer, primary_key=True)
metallicity = Column(Float, primary_key=True)
binary = Column(Boolean, primary_key=True)
time_grid = Column(PickleType)
wavelength_grid = Column(PickleType)
info_table = Column(PickleType)
spec_table = Column(PickleType)
def __init__(self, ssp):
self.imf = ssp.imf
self.metallicity = ssp.metallicity
self.binary = ssp.binary
self.time_grid = ssp.time_grid
self.wavelength_grid = ssp.wavelength_grid
self.info_table = ssp.info_table
self.spec_table = ssp.spec_table
class _Dale2014(BASE):
"""Storage for Dale et al (2014) infra-red templates
......@@ -444,6 +468,36 @@ class Database(object):
"""
return self._get_parameters(_BC03)
def add_bpassv2(self, ssp_bpassv2):
if self.is_writable:
ssp = _BPASSv2(ssp_bpassv2)
self.session.add(ssp)
try:
self.session.commit()
except exc.IntegrityError:
self.session.rollback()
raise DatabaseInsertError('The SSP is already in the base.')
else:
raise Exception('The database is not writable.')
def get_bpassv2(self, imf, metallicity, binary):
result = self.session.query(_BPASSv2)\
.filter(_BPASSv2.imf == imf)\
.filter(_BPASSv2.metallicity == metallicity)\
.filter(_BPASSv2.binary == binary)\
.first()
if result:
return BPASSv2(result.imf, result.metallicity, result.binary,
result.time_grid, result.wavelength_grid,
result.info_table, result.spec_table)
else:
raise DatabaseLookupError(
"The BPASS SSP for imf {}, metallicity {}, and binary {} "
"is not in the database.".format(imf, metallicity, binary))
def get_bpassv2_parameters(self):
return self._get_parameters(_BPASSv2)
def add_dl2007(self, models):
"""
Add a list of Draine and Li (2007) models to the database.
......
# -*- coding: utf-8 -*-
# Copyright (C) Universidad de Antofagasta
# Licensed under the CeCILL-v2 licence - see Licence_CeCILL_V2-en.txt
# Author: Médéric Boquien
import numpy as np
class BPASSv2(object):
"""BPASS Single Stellar Populations version 2
This class holds the data associated with a single stellar population
(SSP) from BPASS v2. Compared to the pristine SSP:
* The time grid ranges from 1 Myr to 13.7 Gyr with 1 Myr steps.
* The SSP are all interpolated on this new grid.
* The wavelength is given in nm rather than Å.
* The spectra are given in W/nm rather than Lsun/Å.
"""
def __init__(self, imf, metallicity, binary, time_grid, wavelength_grid,
info_table, spec_table):
"""Create a new single stellar population from the BPASS v2 models.
Parameters
----------
imf: int
Index of the initial mass function (IMF), from 0 to 8, following
the ordering of the BPASS v2 model files.
metallicity: float
The metallicity. Possible values are 0.00001, 0.0001, 0.001, 0.002,
0.003, 0.004, 0.006, 0.008, 0.010, 0.014, 0.020, 0.030, and 0.040.
binary: boolean
True for binary stellar populations, False for single stars.
time_grid: array of floats
The time grid in Myr used in the info_table and the spec_table.
wavelength_grid: array of floats
The wavelength grid in nm used in spec_table.
info_table: 2D array of floats
Table giving the stellar mass in Msun and the rate of H-ionizing
photons in s⁻¹ (first axis) at each age of the time grid (second
axis).
spec_table: 2D array of floats
Spectrum of the SSP in W/nm (first axis) every 1 Myr (second axis).
"""
self.imf = imf
self.metallicity = metallicity
self.binary = binary
self.time_grid = time_grid
self.wavelength_grid = wavelength_grid
self.info_table = info_table
self.spec_table = spec_table
def convolve(self, sfh, separation_age):
"""Convolve the SSP with a Star Formation History
Given an SFH, this method convolves the info table and the SSP
luminosity spectrum.
Parameters
----------
sfh: array of floats
Star Formation History in Msun/yr.
separation_age: float
Age separating the young from the old stellar populations in Myr.
Returns
-------
spec_young: array of floats
Spectrum in W/nm of the young stellar populations.
spec_old: array of floats
Same as spec_young but for the old stellar populations.
info_young: dictionary
Dictionary containing the physical properties of the young
stellar populations:
* "m_star": total mass in stars in Msun
* "n_ly": rate of H-ionizing photons in s⁻¹
info_old: dictionary
Same as info_young but for the old stellar populations.
info_all: dictionary
Same as info_young but for the entire stellar population. Also
contains "age_mass", the stellar mass-weighted age
"""
# We cut the SSP to the maximum age considered to simplify the
# computation.
info_table = self.info_table[:, :sfh.size]
spec_table = self.spec_table[:, :sfh.size]
# The convolution is just a matter of reverting the SFH and computing
# the sum of the data from the SSP one to one product. This is done
# using the dot product. The 1e6 factor is because the SFH is in solar
# mass per year.
info_young = 1e6 * np.dot(info_table[:, :separation_age],
sfh[-separation_age:][::-1])
spec_young = 1e6 * np.dot(spec_table[:, :separation_age],
sfh[-separation_age:][::-1])
info_old = 1e6 * np.dot(info_table[:, separation_age:],
sfh[:-separation_age][::-1])
spec_old = 1e6 * np.dot(spec_table[:, separation_age:],
sfh[:-separation_age][::-1])
info_all = info_young + info_old
info_young = dict(zip(["m_star", "n_ly"], info_young))
info_old = dict(zip(["m_star", "n_ly"], info_old))
info_all = dict(zip(["m_star", "n_ly"], info_all))
info_all['age_mass'] = np.average(self.time_grid[:sfh.size],
weights=info_table[0, :] * sfh[::-1])
return spec_young, spec_old, info_young, info_old, info_all
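The reversed-SFH dot product used in convolve() is a discrete convolution evaluated at the age of the galaxy. A minimal numpy check with toy SSP and SFH values (illustrative numbers only):

```python
import numpy as np

n_age = 5
# Toy SSP luminosity: one value per age step (a row of spec_table).
ssp = np.array([5., 4., 3., 2., 1.])
# Toy SFH in Msun/yr over the same n_age steps of 1 Myr.
sfh = np.array([1., 0., 2., 0., 1.])

# Dot product with the time-reversed SFH, with the 1e6 factor because
# the SFH is in solar masses per year and the steps are 1 Myr...
direct = 1e6 * np.dot(ssp, sfh[::-1])
# ...matches the full discrete convolution evaluated at the final age.
full = 1e6 * np.convolve(ssp, sfh)[n_age - 1]
print(direct, full)
```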
......@@ -235,7 +235,7 @@ class ObservationsManagerPassbands(object):
np.fabs(self.table[err]))
for prop in self.intprops:
err = prop + '_err'
self.table[prop] = np.random.normal(fits.best.intprop[prop],
np.fabs(self.table[err]))
for prop in self.extprops:
err = prop + '_err'
......@@ -304,7 +304,8 @@ class Observation(object):
if self.redshift == 0.:
self.distance = 10. * parsec
elif self.redshift > 0.:
self.distance = cosmo.luminosity_distance(self.redshift).value \
* 1e6 * parsec
else:
self.distance = np.nan
self.flux = {k: row[k] for k in cls.bands
......
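The 1e6 * parsec factor above converts the Mpc returned by astropy's luminosity_distance into the metres now used internally. A quick sketch with a toy distance value (not computed from a cosmology):

```python
parsec = 3.0857e16           # m
D_L_mpc = 44.                # toy luminosity distance in Mpc
# Mpc -> m: 1 Mpc = 1e6 pc, 1 pc = 3.0857e16 m.
distance_m = D_L_mpc * 1e6 * parsec
print(distance_m)            # ~1.36e24 m
```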
......@@ -397,7 +397,7 @@ class ResultsManager(object):
name="best."+prop))
for band in self.obs.bands:
if band.startswith('line.') or band.startswith('linefilter.'):
unit = 'W/m^2'
else:
unit = 'mJy'
......
# -*- coding: utf-8 -*-
# Copyright (C) 2017 Universidad de Antofagasta
# Licensed under the CeCILL-v2 licence - see Licence_CeCILL_V2-en.txt
# Author: Médéric Boquien
"""
BPASS v2 stellar emission module
==================================================
This module implements the BPASS v2 Single Stellar Populations.
"""
from collections import OrderedDict
import numpy as np
from . import SedModule
from ..data import Database
class BPASSv2(SedModule):
"""BPASS v2 stellar emission module
This SED creation module convolves the SED star formation history with a
BPASS v2 single stellar population to add a stellar component to the SED.
"""
parameter_list = OrderedDict([
("imf", (
"cigale_list(dtype=int, options=0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8)",
"Initial mass function: 0 (-1.30 from 0.1 to 0.5 Msun and -2.00 "
"from 0.5 to 300 Msun), 1 (-1.30 from 0.1 to 0.5 Msun and -2.00 "
"from 0.5 to 100 Msun), 2 (-2.35 from 0.1 to 100 Msun), 3 (-1.30 "
"from 0.1 to 0.5 Msun and -2.35 from 0.5 to 300 Msun), 4 (-1.30 "
"from 0.1 to 0.5 Msun and -2.35 from 0.5 to 100 Msun), 5 (-1.30 "
"from 0.1 to 0.5 Msun and -2.70 from 0.5 to 300 Msun), 6 (-1.30 "
"from 0.1 to 0.5 Msun and -2.70 from 0.5 to 100 Msun), 7 ("
"Chabrier up to 100 Msun), 8 (Chabrier up to 300 Msun).",
2
)),
("metallicity", (
"cigale_list(options=0.001 & 0.002 & 0.003 & 0.004 & 0.006 & "
"0.008 & 0.010 & 0.014 & 0.020 & 0.030 & 0.040)",
"Metallicity. Possible values are: 0.001, 0.002, 0.003, 0.004, "
"0.006, 0.008, 0.010, 0.014, 0.020, 0.030, 0.040.",
0.02
)),
("binary", (
"cigale_list(options=0 & 1)",
"Single (0) or binary (1) stellar populations.",
0
)),
("separation_age", (
"cigale_list(dtype=int, minvalue=0)",
"Age [Myr] of the separation between the young and the old star "
"populations. The default value is 10^7 years (10 Myr). Set "
"to 0 to not differentiate ages (only an old population).",
10
))
])
def _init_code(self):
"""Read the SSP from the database."""
self.imf = int(self.parameters["imf"])
self.metallicity = float(self.parameters["metallicity"])
self.separation_age = int(self.parameters["separation_age"])
self.binary = bool(self.parameters["binary"])
with Database() as db:
self.ssp = db.get_bpassv2(self.imf, self.metallicity, self.binary)
self.wave = self.ssp.wavelength_grid
self.w_lymanc = np.where(self.wave <= 91.1)
def process(self, sed):
"""Add the convolution of a BPASS v2 SSP to the SED
Parameters
----------
sed: pcigale.sed.SED
SED object.
"""
out = self.ssp.convolve(sed.sfh, self.separation_age)
spec_young, spec_old, info_young, info_old, info_all = out
# We compute the Lyman continuum luminosity as it is important to
# compute the energy absorbed by the dust before ionising gas.
wave_lymanc = self.wave[self.w_lymanc]
lum_lyc_young = np.trapz(spec_young[self.w_lymanc], wave_lymanc)
lum_lyc_old = np.trapz(spec_old[self.w_lymanc], wave_lymanc)
# We do similarly for the total stellar luminosity
lum_young, lum_old = np.trapz([spec_young, spec_old], self.wave)
sed.add_module(self.name, self.parameters)
sed.add_info("stellar.imf", self.imf)
sed.add_info("stellar.metallicity", self.metallicity)
sed.add_info("stellar.binary", float(self.binary))
sed.add_info("stellar.m_star_young", info_young["m_star"], True)
sed.add_info("stellar.n_ly_young", info_young["n_ly"], True)
sed.add_info("stellar.lum_ly_young", lum_lyc_young, True)
sed.add_info("stellar.lum_young", lum_young, True)
sed.add_info("stellar.m_star_old", info_old["m_star"], True)
sed.add_info("stellar.n_ly_old", info_old["n_ly"], True)
sed.add_info("stellar.lum_ly_old", lum_lyc_old, True)
sed.add_info("stellar.lum_old", lum_old, True)
sed.add_info("stellar.m_star", info_all["m_star"], True)
sed.add_info("stellar.n_ly", info_all["n_ly"], True)
sed.add_info("stellar.lum", lum_young + lum_old, True)
sed.add_contribution("stellar.old", self.wave, spec_old)
sed.add_contribution("stellar.young", self.wave, spec_young)
# SedModule to be returned by get_module
Module = BPASSv2
......@@ -63,7 +63,8 @@ class SFHDelayed(SedModule):
)),
("sfr_A", (
"cigale_list(minvalue=0.)",
"Multiplicative factor controlling the SFR if normalise is False. "
"For instance without any burst: SFR(t)=sfr_A×t×exp(-t/τ)/τ²",
1.
)),
("normalise", (
......
......@@ -59,7 +59,8 @@ class SFHDelayedBQ(SedModule):
)),
("sfr_A", (
"cigale_list(minvalue=0.)",
"Multiplicative factor controlling the SFR if normalise is False. "
"For instance without any burst/quench: SFR(t)=sfr_A×t×exp(-t/τ)/τ²",
1.
)),
("normalise", (
......
......@@ -100,7 +100,7 @@ class Configuration(object):
["* sfhdelayed (delayed SFH with optional exponential burst)"] +
["* sfhdelayedbq (delayed SFH with optional constant burst/quench)"
] +
["* sfhfromfile (arbitrary SFH read from an input file)"] +
["* sfhperiodic (periodic SFH, exponential, rectangle or delayed"
")"] +
["SSP:"] +
......
......@@ -405,7 +405,7 @@ def _mock_worker(exact, estimated, param, nologo):
plt.figimage(image, 510, 55, origin='upper', zorder=10, alpha=1)
plt.tight_layout()
plt.savefig('out/mock_{}.pdf'.format(param.replace('/', '_')))
plt.close()
......
......@@ -11,21 +11,24 @@ from setuptools import find_packages, setup
class custom_build(build):
user_options = [
('bc03res=', None, 'Resolution of the BC03 models, hr or lr.'),
('bpassres=', None, 'Resolution of the BPASS models, hr or lr.')
]
description = 'Build the pcigale database.'
def initialize_options(self):
build.initialize_options(self)
self.bc03res = 'lr'
self.bpassres = 'lr'
def finalize_options(self):
assert self.bc03res in ('lr', 'hr'), 'bc03res must be hr or lr!'
assert self.bpassres in ('lr', 'hr'), 'bpassres must be hr or lr!'
build.finalize_options(self)
def run(self):
# Build the database.
import database_builder
database_builder.build_base(self.bc03res, self.bpassres)
# Proceed with the build
build.run(self)
......