Commit 5ea521a4 authored by Médéric Boquien's avatar Médéric Boquien

Correct the mass-dependent physical quantities for the difference between the redshift of the model and the redshift of the observation. Because the model redshift is rounded to two decimal places, there can be a redshift difference of up to 0.005 when CIGALE computes the list of redshifts itself. This can lead to differences of more than 0.35 dex at z=0.01, for instance.
parent 00ec0da3
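To make the size of the effect concrete, here is a minimal standalone sketch (not part of the commit, but using the same WMAP7 cosmology from astropy that the patch imports). At low redshift the luminosity distance grows almost linearly with z, so scaling a model at z=0.015 to an observation at z=0.010 is off by a factor of (D_L(0.015)/D_L(0.010))² ≈ 2.25, i.e. roughly 0.35 dex on the mass-dependent quantities.

import numpy as np
from astropy.cosmology import WMAP7 as cosmology

# Example from the commit message: a redshift mismatch of 0.005 near z=0.01.
obs_z, model_z = 0.010, 0.015
corr_dz = (cosmology.luminosity_distance(obs_z).value /
           cosmology.luminosity_distance(model_z).value)**2.
print(abs(np.log10(corr_dz)))  # ~0.35 dex offset without the correction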
@@ -18,6 +18,7 @@
 - In the presence of upper limits, correct the scaling factor of the models to the observations before computing the χ², not after. (Médéric Boquien)
 - When called without arguments, pcigale-plots would crash and display the backtrace. Now it displays a short help showing how to use it. (Médéric Boquien)
 - For sfh2exp, when setting the scale of the SFH with sfr0, the normalisation was incorrect by a factor exp(-1/tau_main). (Médéric Boquien)
+- The mass-dependent physical properties are computed assuming the redshift of the model. However, because we round the observed redshifts to two decimals, there can be a difference of 0.005 in redshift between the models and the actual observation if CIGALE computes the list of redshifts itself. At low redshift, this can cause a discrepancy in the mass-dependent physical properties: ~0.35 dex at z=0.010 vs 0.015, for instance. Therefore we now evaluate these physical quantities at the observed redshift at full precision. (Médéric Boquien, issue reported by Samir Salim)
 ### Optimised
 - Prior to version 0.7.0, we needed to maintain the list of redshifts for all the computed models. Past 0.7.0 we just infer the redshift from a list of unique redshifts. This means that we can now discard the list of redshifts for all the models and only keep the list of unique redshifts. This saves ~8 MB of memory for every 10⁶ models. The models should be computed slightly faster, but the difference is within the measurement noise. (Médéric Boquien)
@@ -8,6 +8,7 @@
 import time
+from astropy.cosmology import WMAP7 as cosmology
 import numpy as np
 import scipy.stats as st
@@ -195,14 +196,28 @@ def analysis(idx, obs):
     obs_fluxes = np.array([obs[name] for name in gbl_filters])
     obs_errors = np.array([obs[name + "_err"] for name in gbl_filters])
+    obs_z = obs['redshift']
     nobs = np.where(np.isfinite(obs_fluxes))[0].size

-    if obs['redshift'] >= 0.:
+    if obs_z >= 0.:
         # We pick the models with the closest redshift using a slice to
         # work on views of the arrays and not on copies to save on RAM.
-        wz = slice(np.abs(obs['redshift'] - gbl_z).argmin(), None, gbl_z.size)
+        idx_z = np.abs(obs_z - gbl_z).argmin()
+        model_z = gbl_z[idx_z]
+        wz = slice(idx_z, None, gbl_z.size)
+
+        # The mass-dependent physical properties are computed assuming the
+        # redshift of the model. However, because we round the observed redshifts
+        # to two decimals, there can be a difference of 0.005 in redshift
+        # between the models and the actual observation. At low redshift, this
+        # can cause a discrepancy in the mass-dependent physical properties:
+        # ~0.35 dex at z=0.010 vs 0.015 for instance. Therefore we correct these
+        # physical quantities by multiplying them by corr_dz.
+        corr_dz = (cosmology.luminosity_distance(obs_z).value /
+                   cosmology.luminosity_distance(model_z).value)**2.
     else:  # We do not know the redshift so we use the full grid
         wz = slice(0, None, 1)
+        corr_dz = 1.

     chi2, scaling = compute_chi2(gbl_model_fluxes[wz, :], obs_fluxes,
                                  obs_errors, gbl_lim_flag)
@@ -250,7 +265,7 @@
     # We correct the mass-dependent parameters
     for key in sed.mass_proportional_info:
-        sed.info[key] *= scaling[best_index_z]
+        sed.info[key] *= scaling[best_index_z] * corr_dz

     # We compute the weighted average and standard deviation using the
     # likelihood as weight.
@@ -265,7 +280,8 @@
         if variable in sed.mass_proportional_info:
             mean, std = weighted_param(_(gbl_model_variables[wz, i][wlikely]
-                                         * scaling[wlikely]), likelihood)
+                                         * scaling[wlikely] * corr_dz),
+                                       likelihood)
         else:
             mean, std = weighted_param(_(gbl_model_variables[wz, i][wlikely]),
                                        likelihood)
@@ -289,11 +305,11 @@
     if gbl_save['chi2']:
         save_chi2(obs['id'], gbl_analysed_variables,
                   sed.mass_proportional_info, gbl_model_variables[wz, :],
-                  scaling, chi2 / (nobs - 1))
+                  scaling * corr_dz, chi2 / (nobs - 1))
     if gbl_save['pdf']:
         save_pdf(obs['id'], gbl_analysed_variables,
                  sed.mass_proportional_info, gbl_model_variables[wz, :],
-                 scaling, likelihood, wlikely)
+                 scaling * corr_dz, likelihood, wlikely)

     with gbl_n_computed.get_lock():
         gbl_n_computed.value += 1
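A side note on the slicing above (and on the changelog item about keeping only the unique redshifts): the step of gbl_z.size in the slice implies that the flattened model grid stores the redshift index as the fastest-varying axis, so a strided slice returns a view of every model at the chosen redshift without copying anything. The sketch below is a standalone illustration with placeholder names (unique_z, model_fluxes), not the actual pcigale globals.

import numpy as np

unique_z = np.array([0.00, 0.01, 0.02])    # unique model redshifts
n_params = 4                               # placeholder number of parameter sets

# Flattened grid laid out as index = i_param * unique_z.size + i_z,
# i.e. the redshift index varies fastest.
model_fluxes = np.arange(n_params * unique_z.size, dtype=float)

obs_z = 0.012
idx_z = np.abs(obs_z - unique_z).argmin()  # closest model redshift: 0.01
wz = slice(idx_z, None, unique_z.size)     # strided view, no copy
print(model_fluxes[wz])                    # [ 1.  4.  7. 10.]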