Commit d84a9751 authored by Médéric Boquien

Merge branch 'develop' into feature/pcigale-plots

parents 828881bb c8bf4f2c
@@ -2,12 +2,20 @@
## Unreleased
### Added
- The (1+z1)/(1+z2) factor between observed and grid flux densities caused by the differential redshifting is now taken into account. With a default grid redshift rounding of two decimals this yields a difference of at most 0.5% in the estimated physical properties at z=0 and even less at higher z. (Médéric Boquien)
### Changed
### Fixed
- Make sure we can plot the PDF of equivalent widths. (Médéric Boquien)
- Fix a crash when generating a mock catalogue containing intensive properties. (Médéric Boquien)
- In the `sfhdelayed` and `sfhdelayedbq` modules, provide the correct description for the sfr_A parameter. (Médéric Boquien & Laure Ciesla)
- Internally the luminosity distance was erroneously stored in Mpc rather than in m for non-zero redshifts. This has now been standardised to m. (Médéric Boquien)
- As the best-fit properties are computed at the exact observed redshift, correct the scaling factor as it is computed at the grid redshift. This corrects for slight offsets on the best-fit properties when the input redshift has more decimals than the grid redshift. (Médéric Boquien)
- Fix the pip install by making pcigale.managers discoverable. (Yannick Roehlly)
- When using a parameters file, Boolean values were not correctly interpreted. (Médéric Boquien, reported by Eric Martínez, INAOE)
- Make sure that the best-fit models are stored with the correct scaling factor when the distance is given explicitly. (Médéric Boquien)
### Optimised
- Slight speedup of the computation of the likelihood from the χ². (Médéric Boquien)
- The fritz2006 module should now run faster thanks to an optimisation of the computation of the luminosity of the various AGN components. (Médéric Boquien & Guang Yang)
## 2018.0 (2018-11-06)
### Added
@@ -153,7 +161,7 @@
- The output files providing estimates of the physical properties are now generated both in the form of text and FITS files. (Médéric Boquien)
- When using the `dustatt_calzleit module`, choosing ẟ≠0 leads to an effective E(B-V) different from the one set by the user. Now the E(B-V) will always correspond to the one specified by the user. This means that at fixed E(B-V), A(V) depends on ẟ. (Médéric Boquien)
- The pcigale-mock tool has been merged into pcigale-plots; the mock plots can be obtained with the "mock" command. (Médéric Boquien)
- The `sfhdelayed` module is now initialised with _init_code() to be consistent with the way things are done in other modules. This should give a slight speedup under some circumstances too. (Médéric Boquien)
- The `sfhdelayed` module is now initialised with \_init_code() to be consistent with the way things are done in other modules. This should give a slight speedup under some circumstances too. (Médéric Boquien)
- In `sfhfromfile`, the specification of the time grid was vague and therefore could lead to incorrect results if it was not properly formatted by the end user. The description has been clarified and we now check that the time starts from 0 and that the time step is always 1 Myr. If it is not the case we raise an exception. (Médéric Boquien)
- When the redshift is not indicated in pcigale.ini, the analysis module fills the list of redshifts from the redshifts indicated in the input flux file. This is inefficient as analysis modules should not have to modify the configuration. Now this is done when interpreting pcigale.ini before calling the relevant analysis module. As a side effect, "pcigale check" now returns the total number of models that cigale will compute rather than the number of models per redshift bin. (Médéric Boquien)
- The optionally saved spectra in the `pdf_analysis` and `savefluxes` modules were saved in the VO-table format. The most important downside is that it is very slow to write to, which proves to be a major bottleneck in the computing speed. To overcome this issue, we now save the spectra using the FITS format. Instead of a single file containing both the spectra (including the various components) and the SFH, there is now one file for the spectra and one file for the SFH.
@@ -187,7 +195,7 @@
## 0.8.0 (2015-12-01)
### Added
- The evaluation of the parameters is always done linearly. This can be a problem when estimating the SFR or the stellar mass for instance, as it is usual to estimate their logarithm instead. Because the log is non-linear, the likelihood-weighted mean of the log is not the log of the likelihood-weighted mean. Therefore the estimation of the log of these parameters has to be done during the analysis step. This is now possible. The variables to be analysed in log just need to be indicated with the suffix "_log", for instance "stellar.m\_star\_log". (Médéric Boquien, idea suggested by Samir Salim)
- The evaluation of the parameters is always done linearly. This can be a problem when estimating the SFR or the stellar mass for instance, as it is usual to estimate their logarithm instead. Because the log is non-linear, the likelihood-weighted mean of the log is not the log of the likelihood-weighted mean. Therefore the estimation of the log of these parameters has to be done during the analysis step. This is now possible. The variables to be analysed in log just need to be indicated with the suffix "\_log", for instance "stellar.m\_star\_log". (Médéric Boquien, idea suggested by Samir Salim)
### Fixed
- Running the scripts in parallel trigger a deadlock on OS X with python 3.5. A workaround has been implemented. (Médéric Boquien)
......
@@ -30,26 +30,30 @@ def save_chi2(obs, variable, models, chi2, values):
@lru_cache(maxsize=None)
def compute_corr_dz(model_z, obs_dist):
def compute_corr_dz(model_z, obs):
"""The mass-dependent physical properties are computed assuming the
redshift of the model. However because we round the observed redshifts to
two decimals, there can be a difference of 0.005 in redshift between the
models and the actual observation. At low redshift, this can cause a
models and the actual observation. This causes two issues. First there is a
difference in the luminosity distance. At low redshift, this can cause a
discrepancy in the mass-dependent physical properties: ~0.35 dex at z=0.010
vs 0.015 for instance. Therefore we correct these physical quantities by
multiplying them by corr_dz.
vs 0.015 for instance. In addition, the 1+z dimming will be different.
We compute here the correction factor for these two effects.
Parameters
----------
model_z: float
Redshift of the model.
obs_dist: float
Luminosity distance of the observed object.
obs: instance of the Observation class
Object containing the distance and redshift of an object
"""
if model_z == 0.:
return (obs_dist / (10. * parsec)) ** 2.
return (obs_dist / cosmo.luminosity_distance(model_z).value) ** 2.
mod_distance = 10. * parsec
else:
mod_distance = cosmo.luminosity_distance(model_z).value * 1e6 * parsec
return (obs.distance / mod_distance) ** 2. * \
(1. + model_z) / (1. + obs.redshift)
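A back-of-the-envelope check of the ~0.35 dex figure quoted in the docstring, using the low-redshift approximation d_L ≈ cz/H0 with an assumed H0 (the cosmology values here are illustrative, not part of the patch):

```python
import math

# At low redshift the luminosity distance is approximately linear in z,
# so the flux-scaling correction between a grid model rounded to z=0.010
# and an observation at z=0.015 is dominated by the squared distance ratio,
# with a small extra (1+z) dimming term. Assumed H0, hypothetical values.
c = 299792.458            # speed of light, km/s
H0 = 70.0                 # assumed Hubble constant, km/s/Mpc
d_model = c * 0.010 / H0  # d_L of the grid model, Mpc
d_obs = c * 0.015 / H0    # d_L of the observation, Mpc

corr_dz = (d_obs / d_model) ** 2. * (1. + 0.010) / (1. + 0.015)
print(round(math.log10(corr_dz), 2))  # 0.35
```

This reproduces the ~0.35 dex discrepancy between z=0.010 and z=0.015 mentioned above.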
def dchi2_over_ds2(s, obsdata, obsdata_err, obslim, obslim_err, moddata,
......
@@ -142,7 +142,7 @@ def analysis(idx, obs):
z = np.array(
gbl_models.conf['sed_modules_params']['redshifting']['redshift'])
wz = slice(np.abs(obs.redshift-z).argmin(), None, z.size)
corr_dz = compute_corr_dz(z[wz.start], obs.distance)
corr_dz = compute_corr_dz(z[wz.start], obs)
else: # We do not know the redshift so we use the full grid
wz = slice(0, None, 1)
corr_dz = 1.
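The slice above selects all models at the grid redshift closest to the observation; it relies on the redshift index varying fastest in the flattened model grid. A minimal sketch with a hypothetical 3-redshift × 4-parameter grid:

```python
import numpy as np

# Hypothetical flattened grid: 3 redshifts x 4 parameter sets, with the
# redshift index varying fastest, as the slice in analysis() assumes.
z = np.array([0.0, 0.1, 0.2])
grid_z = np.tile(z, 4)        # redshift of each flattened model
obs_redshift = 0.11

# Start at the closest grid redshift, then step by z.size to stay on it.
wz = slice(np.abs(obs_redshift - z).argmin(), None, z.size)
print(np.unique(grid_z[wz]))  # [0.1]: only models at the closest grid z
```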
@@ -153,7 +153,7 @@ def analysis(idx, obs):
if np.any(chi2 < -np.log(np.finfo(np.float64).tiny) * 2.):
# We use the exponential probability associated with the χ² as
# likelihood function.
likelihood = np.exp(-chi2 / 2.)
likelihood = np.exp(-.5 * chi2)
wlikely = np.where(np.isfinite(likelihood))
# If all the models are valid, it is much more efficient to use a slice
if likelihood.size == wlikely[0].size:
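The guard above compares χ² against -2 ln(tiny) because exp(-χ²/2) underflows beyond that point. A short sketch of the bound (sys.float_info.min equals np.finfo(np.float64).tiny):

```python
import math
import sys

# Likelihoods are only representable while chi^2 stays below -2 ln(tiny),
# where tiny is the smallest normal float64; past that, exp(-chi2 / 2)
# underflows to zero and the model must be discarded.
tiny = sys.float_info.min          # ~2.2e-308
chi2_max = -2. * math.log(tiny)    # ~1416.8

print(math.exp(-.5 * (chi2_max - 1.)) > 0.)     # True: still representable
print(math.exp(-.5 * (chi2_max + 100.)) == 0.)  # True: underflows to zero
```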
@@ -235,16 +235,32 @@ def bestfit(oidx, obs):
# difference between the object and the grid redshifts.
params = deepcopy(gbl_params.from_index(best_index))
if obs.redshift >= 0.:
model_z = params[gbl_params.modules.index('redshifting')]['redshift']
params[gbl_params.modules.index('redshifting')]['redshift'] = obs.redshift
corr_dz = compute_corr_dz(obs.redshift, obs.distance)
# Correct fluxes for the fact that the scaling factor was computed on
# the grid redshift. Because of the difference in redshift the distance
# is different and must be reflected in the scaling
corr_scaling = compute_corr_dz(model_z, obs) / \
compute_corr_dz(obs.redshift, obs)
else: # The model redshift is always exact in redshifting mode
corr_dz = 1.
corr_scaling = 1.
sed = gbl_warehouse.get_sed(gbl_params.modules, params)
scaling = gbl_results.best.scaling[oidx]
# Handle the case where the distance does not correspond to the redshift.
if obs.redshift >= 0.:
corr_dz = (obs.distance / sed.info['universe.luminosity_distance']) ** 2
else:
corr_dz = 1.
scaling = gbl_results.best.scaling[oidx] * corr_scaling
for band in gbl_results.best.flux:
gbl_results.best.flux[band][oidx] = sed.compute_fnu(band) * scaling
# If the distance is user defined, the redshift-based luminosity distance
# of the model is probably incorrect so we replace it
sed.add_info('universe.luminosity_distance', obs.distance, force=True)
for prop in gbl_results.best.intprop:
gbl_results.best.intprop[prop][oidx] = sed.info[prop]
@@ -253,6 +269,6 @@ def bestfit(oidx, obs):
* corr_dz
if gbl_conf['analysis_params']["save_best_sed"]:
sed.to_fits('out/{}'.format(obs.id), scaling)
sed.to_fits('out/{}'.format(obs.id), scaling * corr_dz)
gbl_counter.inc()
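In the ratio of the two compute_corr_dz calls above, the observed distance cancels out, so corr_scaling depends only on the grid and exact redshifts. A sketch under a hypothetical linear low-z luminosity distance (the 4283 Mpc/z slope is made up for illustration):

```python
# Minimal stand-in for compute_corr_dz: distance ratio squared times the
# differential (1+z) dimming between the model and the observation.
def corr_dz(model_z, obs_distance, obs_redshift, d_lum):
    return (obs_distance / d_lum(model_z)) ** 2. * \
        (1. + model_z) / (1. + obs_redshift)

def d_lum(z):
    return 4283. * z          # hypothetical d_L in Mpc, low-z approximation

obs_z, model_z = 0.015, 0.010  # exact redshift vs rounded grid redshift
obs_dist = d_lum(obs_z)

# obs_dist cancels in the ratio, leaving only grid-vs-exact redshift terms.
corr_scaling = corr_dz(model_z, obs_dist, obs_z, d_lum) / \
    corr_dz(obs_z, obs_dist, obs_z, d_lum)
expected = (d_lum(obs_z) / d_lum(model_z)) ** 2. * (1. + model_z) / (1. + obs_z)
print(abs(corr_scaling - expected) < 1e-12)  # True
```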
@@ -304,7 +304,8 @@ class Observation(object):
if self.redshift == 0.:
self.distance = 10. * parsec
elif self.redshift > 0.:
self.distance = cosmo.luminosity_distance(self.redshift).value
self.distance = cosmo.luminosity_distance(self.redshift).value \
* 1e6 * parsec
else:
self.distance = np.nan
self.flux = {k: row[k] for k in cls.bands
......
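A small sketch of the unit convention this hunk enforces: distances are stored in metres, so the Mpc value returned by astropy's luminosity_distance must be scaled by 1e6 * parsec, while z = 0 objects sit at the conventional 10 pc (the d_L value below is made up):

```python
# Distances are standardised to metres throughout.
parsec = 3.0856775814913673e16   # m, as in scipy.constants.parsec

distance_z0 = 10. * parsec       # z == 0: fluxes defined at 10 pc
d_L_Mpc = 42.7                   # hypothetical luminosity_distance output, Mpc
distance = d_L_Mpc * 1e6 * parsec  # z > 0: Mpc converted to m

print(distance / distance_z0)    # how much farther than the 10 pc reference
```

Omitting the 1e6 * parsec factor was precisely the bug noted in the changelog (luminosity distance stored in Mpc rather than in m for non-zero redshifts).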
@@ -100,6 +100,10 @@ class Fritz2006(SedModule):
self.fritz2006 = base.get_fritz2006(self.r_ratio, self.tau,
self.beta, self.gamma,
self.opening_angle, self.psy)
self.l_agn_scatt = np.trapz(self.fritz2006.lumin_scatt,
x=self.fritz2006.wave)
self.l_agn_agn = np.trapz(self.fritz2006.lumin_agn,
x=self.fritz2006.wave)
def process(self, sed):
"""Add the IR re-emission contributions
@@ -128,10 +132,8 @@ class Fritz2006(SedModule):
if self.fracAGN < 1.:
agn_power = luminosity * (1./(1.-self.fracAGN) - 1.)
l_agn_therm = agn_power
l_agn_scatt = np.trapz(agn_power * self.fritz2006.lumin_scatt,
x=self.fritz2006.wave)
l_agn_agn = np.trapz(agn_power * self.fritz2006.lumin_agn,
x=self.fritz2006.wave)
l_agn_scatt = agn_power * self.l_agn_scatt
l_agn_agn = agn_power * self.l_agn_agn
l_agn_total = l_agn_therm + l_agn_scatt + l_agn_agn
else:
......
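The optimisation above exploits the linearity of the trapezoidal integral: trapz(a·f, x) == a·trapz(f, x), so the wavelength-integrated luminosity of each AGN component can be computed once in _init_code() and simply rescaled by agn_power for every SED. A sketch with a hypothetical grid and spectrum:

```python
import numpy as np

# np.trapz was renamed np.trapezoid in NumPy 2.0; support both.
trapz = getattr(np, "trapezoid", None) or np.trapz

wave = np.linspace(10., 1000., 500)  # hypothetical wavelength grid
lumin = 1. / wave ** 2.              # hypothetical component spectrum
agn_power = 3.7

slow = trapz(agn_power * lumin, x=wave)  # integral recomputed per SED
fast = agn_power * trapz(lumin, x=wave)  # precomputed integral, rescaled
print(np.isclose(slow, fast))            # True
```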
@@ -64,6 +64,9 @@ class MBB(SedModule):
self.epsilon = float(self.parameters["epsilon_mbb"])
self.T = float(self.parameters["t_mbb"])
self.beta = float(self.parameters["beta_mbb"])
if type(self.parameters["energy_balance"]) is str:
self.energy_balance = self.parameters["energy_balance"].lower() == 'true'
else:
self.energy_balance = bool(self.parameters["energy_balance"])
if self.epsilon < 0.:
......
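This string-to-boolean parsing is repeated verbatim across the emission and SFH modules below; a hypothetical helper (not part of the patch) capturing the pitfall being fixed:

```python
def param_to_bool(value):
    """Hypothetical helper factoring out the parsing repeated above.

    Parameters read from a text file arrive as strings, and bool("False")
    is True in Python, so strings need an explicit comparison."""
    if isinstance(value, str):
        return value.lower() == 'true'
    return bool(value)

print(bool("False"))           # True: the pitfall
print(param_to_bool("False"))  # False
print(param_to_bool(True))     # True
```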
@@ -95,6 +95,9 @@ class NebularEmission(SedModule):
self.fesc = float(self.parameters['f_esc'])
self.fdust = float(self.parameters['f_dust'])
self.lines_width = float(self.parameters['lines_width'])
if type(self.parameters["emission"]) is str:
self.emission = self.parameters["emission"].lower() == 'true'
else:
self.emission = bool(self.parameters["emission"])
if self.fesc < 0. or self.fesc > 1:
......
@@ -73,6 +73,9 @@ class Sfh2Exp(SedModule):
self.burst_age = int(self.parameters["burst_age"])
age = int(self.parameters["age"])
sfr_0 = float(self.parameters["sfr_0"])
if type(self.parameters["normalise"]) is str:
normalise = self.parameters["normalise"].lower() == 'true'
else:
normalise = bool(self.parameters["normalise"])
# Time grid and age. If needed, the age is rounded to the inferior Myr
......
@@ -63,6 +63,9 @@ class SfhBuat08(SedModule):
def _init_code(self):
self.velocity = float(self.parameters["velocity"])
self.age = int(self.parameters["age"])
if type(self.parameters["normalise"]) is str:
normalise = self.parameters["normalise"].lower() == 'true'
else:
normalise = bool(self.parameters["normalise"])
# Time grid and age. If needed, the age is rounded to the inferior Myr
......
@@ -52,7 +52,10 @@ class SfhQuenchSmooth(SedModule):
def _init_code(self):
self.quenching_age = int(self.parameters["quenching_time"])
self.quenching_factor = float(self.parameters["quenching_factor"])
self.normalise = bool(self.parameters["normalise"])
if type(self.parameters["normalise"]) is str:
normalise = self.parameters["normalise"].lower() == 'true'
else:
normalise = bool(self.parameters["normalise"])
def process(self, sed):
"""
......
@@ -52,7 +52,10 @@ class SfhQuenchTrunk(SedModule):
def _init_code(self):
self.quenching_age = int(self.parameters["quenching_age"])
self.quenching_factor = float(self.parameters["quenching_factor"])
self.normalise = bool(self.parameters["normalise"])
if type(self.parameters["normalise"]) is str:
normalise = self.parameters["normalise"].lower() == 'true'
else:
normalise = bool(self.parameters["normalise"])
def process(self, sed):
"""
......
@@ -81,6 +81,9 @@ class SFHDelayed(SedModule):
self.age_burst = int(self.parameters["age_burst"])
self.f_burst = float(self.parameters["f_burst"])
sfr_A = float(self.parameters["sfr_A"])
if type(self.parameters["normalise"]) is str:
normalise = self.parameters["normalise"].lower() == 'true'
else:
normalise = bool(self.parameters["normalise"])
# Time grid for each component
......
@@ -76,6 +76,9 @@ class SFHDelayedBQ(SedModule):
self.age_bq = int(self.parameters["age_bq"])
self.r_sfr = float(self.parameters["r_sfr"])
sfr_A = float(self.parameters["sfr_A"])
if type(self.parameters["normalise"]) is str:
normalise = self.parameters["normalise"].lower() == 'true'
else:
normalise = bool(self.parameters["normalise"])
# Delayed SFH
......
@@ -59,9 +59,12 @@ class SfhFromFile(SedModule):
def _init_code(self):
filename = self.parameters['filename']
normalise = bool(self.parameters["normalise"])
age = int(self.parameters['age'])
self.sfr_column_number = int(self.parameters['sfr_column'])
if type(self.parameters["normalise"]) is str:
normalise = self.parameters["normalise"].lower() == 'true'
else:
normalise = bool(self.parameters["normalise"])
table = read_table(filename)
self.sfr = table.columns[self.sfr_column_number].data.astype(np.float)
......
@@ -73,6 +73,9 @@ class SfhPeriodic(SedModule):
self.tau_bursts = float(self.parameters["tau_bursts"])
age = int(self.parameters["age"])
sfr_A = float(self.parameters["sfr_A"])
if type(self.parameters["normalise"]) is str:
normalise = self.parameters["normalise"].lower() == 'true'
else:
normalise = bool(self.parameters["normalise"])
time_grid = np.arange(0, age)
......