Commit 11dbd038 authored by BURGARELLA Denis

Merge branch 'release/v0.7.0'

parents a571f2cb deb7dc52
# Change Log
## 0.7.0 (2015-11-19)
### Added
- The pcigale-mock utility has been added to generate plots comparing the exact and pcigale-estimated parameters. This requires pcigale to be run beforehand with the pdf_analysis module and the mock_flag option set to True. (Denis Burgarella and Médéric Boquien)
- The pcigale-filter utility has been added to easily list, plot, add, and remove filters without having to rebuild the database entirely. (Médéric Boquien)
- It is now possible to analyse the flux in a band as a regular parameter. It can be useful for flux predictions. (Yannick Roehlly)
- The redshift can now be used as a free parameter, enabling pcigale to estimate the photometric redshift. (Médéric Boquien)
- When running "pcigale genconf", the list of modules is automatically checked against the list of official modules. If modules are missing, information is printed on the screen indicating the level of severity (information, warning, or error) and the list of modules that can be used. (Médéric Boquien)
### Changed
- The galaxy_mass parameter was very ambiguous. In reality it corresponds to the integral of the SFH. Consequently it has been renamed sfh.integrated. (Médéric Boquien)
- In the Calzetti attenuation module, add a warning saying that if the power-law slope is different from 0, E(B-V) will no longer be the real one. (Yannick Roehlly)
- Add "B_B90" to the list of computed attenuations so that users can calculate the effective E(B-V). (Yannick Roehlly)
- Computing the parameters and their uncertainties through the histogram of the PDF is slow and can introduce biases in some cases. Rather, now the estimated values of the parameters and the corresponding uncertainties are simply computed from the weighted mean and standard deviation of the models that are at least 0.1% as likely as the best model to reproduce the observations. The differences in the estimates are very small except when very few models are used. (Médéric Boquien)
- Magic values to indicate invalid values (e.g. values lower than -99) are difficult to handle safely. They have been replaced with NaN wherever appropriate. The logic of the input flux file stays the same for the time being but the magic values are converted internally after reading it. Users are advised to replace magic values with NaN. The output files now use NaN instead of magic number to indicate invalid values. (Médéric Boquien)
- Rename the AGN fraction added by the dale2014 module from agn.fracAGN to agn.fracAGN_dale2014 to avoid a conflict with the fritz2006 module. (Yannick Roehlly)
- Remove the radio component from the dale2014 model so that it can be used with the more flexible radio module, courtesy of Daniel Dale. (Laure Ciesla and Médéric Boquien)
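The likelihood-weighted estimation scheme described above can be sketched as follows; `estimate_parameter` is a hypothetical helper for illustration, not pcigale's actual API:

```python
import numpy as np

def estimate_parameter(chi2, values, threshold=1e-3):
    # Likelihood of each model, relative to the best (lowest chi^2) one.
    likelihood = np.exp(-chi2 / 2.)
    likelihood /= likelihood.max()
    # Keep only models at least 0.1% as likely as the best model.
    keep = likelihood >= threshold
    # Weighted mean and standard deviation over the retained models.
    mean = np.average(values[keep], weights=likelihood[keep])
    std = np.sqrt(np.average((values[keep] - mean) ** 2,
                             weights=likelihood[keep]))
    return mean, std
```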
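The NaN convention mentioned above makes invalid values easy to handle with NumPy's NaN-aware reductions; the -90 threshold below is purely illustrative:

```python
import numpy as np

fluxes = np.array([12.3, -9999., 4.5, -99.])
# Convert magic values to NaN after reading the input file.
fluxes[fluxes < -90.] = np.nan
# NaN-aware reductions then skip invalid values without special-casing.
valid_mean = np.nanmean(fluxes)
```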
### Fixed
- The SFH is modelled using discrete star formation episodes every 1 Myr. As the SFH is not really continuous (the input single stellar populations do not allow us to compute that properly), we should not integrate SFH(t) but simply sum SFH(t), as t is discrete. In most cases the difference is very small. The only case where it makes a difference is for a very rapidly varying SFH, for instance with τ=1 Myr. (Médéric Boquien)
- Ensure that the flux can be computed even if the redshifting module has not been applied. By default in that case we assume a distance of 10 parsecs. While in practice it should never happen as the redshifting module is mandatory, this can be more important when using pcigale as a library. (Médéric Boquien and Yannick Roehlly)
- When called without arguments, pcigale would crash. Now it displays a brief message to remind how it should be invoked. (Médéric Boquien)
- Raise an exception instead of crash when an unknown IMF is requested. (Médéric Boquien)
- When the error column for a flux was present in the observation table but not in the used column list (when the user prefers to use a default error instead of the real one), the error on the flux was set to 0. (Yannick Roehlly)
- For some reason a point in the GALEX FUV filter has a negative transmission. That should not happen. After comparison with the filter on the GALEX website it has been set to 0. (Médéric Boquien)
- Shorten the left and right 0 parts of the pseudo D4000 filter so that it can be applied on smaller spectra. (Yannick Roehlly)
- To compute the reduced χ², we need to divide by the number of bands minus 1 (not the number of bands), because we consider that the models depend on one meta-parameter. (Médéric Boquien)
- The test to determine when to take upper limits into account did not work according to the specifications. Now upper limits are always taken into account when they should be. (Médéric Boquien)
- The nebular emission could be counted in excess in the dust luminosity, as both the lines and the Lyman continuum could be attenuated. Now we do not extend the attenuation below 91 nm. Also, a new component has been added that specifically handles the Lyman continuum absorption by gas, conserving the information about the intrinsic stellar Lyman continuum if need be. (Yannick Roehlly and Médéric Boquien)
- The Flambda table in the VO-table export did not reflect the fact that it stores luminosity densities. Accordingly, it has been renamed Llambda. (Yannick Roehlly and Médéric Boquien)
- When the flux file contains a mix of spaces and tabulations as column separators, pcigale discards the header and takes the first data line as the header. Now pcigale properly handles such a combination. Bug reported by Paola Santini. (Médéric Boquien)
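The sum-versus-integral point above can be illustrated with a rapidly declining SFH sampled every 1 Myr (illustrative values, not pcigale's internals):

```python
import numpy as np

t = np.arange(100.)            # ages in Myr, sampled every 1 Myr
tau = 1.                       # rapidly varying SFH: tau = 1 Myr
sfh = np.exp(-t / tau)         # SFR at each discrete step

mass_sum = sfh.sum()           # correct for a discrete, 1 Myr-sampled SFH
mass_int = np.trapz(sfh, t)    # trapezoidal integration noticeably differs here
```

For a slowly varying SFH the two values nearly coincide; with τ=1 Myr they differ by roughly a third.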
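The reduced χ² correction amounts to the following trivial sketch, assuming χ² and the number of bands are already known:

```python
def reduced_chi2(chi2, n_bands):
    # The scaling of the models counts as one meta-parameter, so the
    # number of degrees of freedom is n_bands - 1, not n_bands.
    return chi2 / (n_bands - 1)
```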
### Optimised
- Major speedup when building the database by inserting multiple models into the database at once, rather than one model at a time. On an SSD, the total run time of "python build" goes from 5m20s to 2m42s. The speedup should be even more spectacular on a rotating hard drive. (Médéric Boquien)
- Memory usage reduction using in-place operations (e.g. a/=2 rather than a=a/2, saving the creation of a temporary array the size of a) where possible. (Médéric Boquien)
- The creation and handling of mock catalogues has been streamlined. (Médéric Boquien)
- Slight speedup using np.full() where possible to create an array with all elements set to the same value. (Médéric Boquien)
- Computing the scaling factors and the χ² in one step over the entire grid of models is very memory-intensive, leading to out-of-memory issues when using multiple cores. Rather, let's compute them band by band, as this avoids the creation of large temporary arrays, while keeping the computation fast. (Médéric Boquien).
- Each core copied the subset of models corresponding to the redshift of the object to be analysed. This is a problem as it can strongly increase memory usage with the number of cores, especially when there are many models and just one redshift. Rather than making a copy, we use a view, which not only saves a considerable amount of memory but is also faster as there is no need to allocate new, large arrays. This is made possible as models are regularly ordered with redshift. (Médéric Boquien)
- Various minor optimisations. (Médéric Boquien)
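The in-place-operation optimisation relies on NumPy reusing the array's existing buffer instead of allocating a temporary, e.g.:

```python
import numpy as np

a = np.ones(1000)
buf = a.__array_interface__['data'][0]   # address of a's data buffer
a /= 2                                   # in-place: same buffer is reused
assert a.__array_interface__['data'][0] == buf
b = a / 2                                # out-of-place: a new array is allocated
```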
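The view-instead-of-copy optimisation works because models are ordered by redshift, so each redshift occupies a contiguous slice of the grid; a sketch with made-up arrays:

```python
import numpy as np

fluxes = np.zeros((4, 3))                # 4 models x 3 bands
redshifts = np.array([0., 0., 1., 1.])   # models regularly ordered by redshift

# All models at z = 1 form a contiguous block, so basic slicing
# returns a view on the big grid instead of copying it.
lo = np.searchsorted(redshifts, 1., side='left')
hi = np.searchsorted(redshifts, 1., side='right')
subset = fluxes[lo:hi]
assert subset.base is fluxes             # shares memory: no copy was made
```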
## 0.6.0 (2015-09-07)
### Added
- New module to compute a star formation history as described in Buat et al. 2008. (Yannick Roehlly)
- New module to compute a periodic SFH. Each star formation episode can be exponential, "delayed", or rectangular. (Médéric Boquien and Denis Burgarella)
- New module performing a quench on the star formation history. (Yannick Roehlly)
- New module to compute the emission of a modified black body. (Denis Burgarella)
- New module to compute the physical properties measured on the emission spectrum (e.g. spectral indices, ultraviolet slope, etc.). (Denis Burgarella)
- New pseudo filters to compute line fluxes and spectral indices. (Yannick Roehlly)
- The dust masses are now computed for the draine2007 and draine2014 modules. (Médéric Boquien)
### Changed
- The nebular_lines_width parameter of the nebular module is now called lines_width as the nebular prefix was redundant. (Médéric Boquien)
- Prefix the output variables of SFH-related modules with "sfh" to facilitate their identification in the output files. (Médéric Boquien)
- Prefix the output variables of the fritz2006 AGN module with "agn" to facilitate their identification in the output files. (Médéric Boquien)
- Prefix the redshift with "universe". (Médéric Boquien)
- With pcigale-plot, draw the spectra only to λ=50 cm as the models do not extend much further and there is very rarely any observation beyond λ=21 cm. (Médéric Boquien)
- As pcigale is getting much faster, display the number of computed models every 250 models rather than every 100 models. (Médéric Boquien)
- Give default values for the dl2014, sfh_buat, and sfhdelayed modules to allow for quick test runs. (Médéric Boquien)
- Now pcigale-plots plots errors on upper limits. (Denis Burgarella)
### Fixed
- When plotting, round the redshift to two decimals to match the redshift of the model. (Médéric Boquien)
- Ensure that all the input parameters of the nebular module are also output so it is possible to analyse them. (Médéric Boquien)
- Properly take the Lyman continuum photons absorbed by dust into account to compute the dust emission. (Médéric Boquien)
- Improve the readability of the pcigale-plots generated spectra by drawing the observed fluxes on top of other lines. (Médéric Boquien)
- The nebular components are now plotted with pcigale-plots. (Médéric Boquien)
- When a filter that was not in the database was called, pcigale would crash ungracefully as the exception invoked does not exist in Python 3.x. Now use a Python 3.x exception. (Médéric Boquien)
- The dustatt_powerlaw module could not identify which physical components to attenuate when the nebular module was called. (Médéric Boquien)
- The displayed counter for the number of objects already analysed could be slightly offset from the number of models actually computed. (Médéric Boquien)
- Change the method to compute the χ² in the presence of upper limits as the previous method did not always converge. (Denis Burgarella and Médéric Boquien)
### Optimised
- When plotting, do not recompute the luminosity distance, which is very slow, but rather get the one computed during the analysis and that is given in the output files. (Médéric Boquien)
- Adding new physical components with a wavelength sampling different from that of the pre-existing grid is slow, as a common grid has to be computed and all components interpolated over it. The nebular lines are especially bad in that respect, as each line is sampled with 19 points. This is excessive: sampling over 9 points barely changes the fluxes while speeding up the computation of the models by ~20%. (Médéric Boquien)
- Rather than resampling the filter transmission on the wavelength grid every time the flux is computed in a given filter, put the resampled filter into a cache. (Médéric Boquien)
- Rather than recomputing every time the merged wavelength grid from two different wavelength grids (for instance when adding a new physical component or when integrating the spectrum in a filter), put the results in a cache. (Médéric Boquien)
- Before adding a new component to a SED, we first copy the original SED without that component from the cache. This copy can be very slow when done automatically by python. We rather do this copy manually, which is much faster. (Médéric Boquien)
- When adding a new physical component with a different wavelength sampling, rather than reinterpolating all the components over the new grid, compute the interpolation only for new wavelengths. (Médéric Boquien)
- Various minor optimisations. (Médéric Boquien)
- When computing the flux in filters, np.trapz() becomes a bottleneck of the code. A large part of the time is actually spent on safeguards and on operations for nD arrays. However here we only have 1D arrays and some variables can be cached, which allows some optimisations to compute fluxes faster. (Médéric Boquien)
- The output parameters of a model were stored in an ordered dictionary. While convenient to keep the order of insertion it is very slow as it is implemented in pure Python for versions up to 3.4. Rather we use a regular dictionary and we reorder the parameters alphabetically. (Médéric Boquien)
- To store the SED in memory and retrieve them later, we index them with the list of parameters used to compute them. We serialise those using JSON. However JSON is slow. As these data are purely internal, rather use marshal, which is much faster than JSON. (Médéric Boquien)
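The merged-wavelength-grid cache can be sketched as follows; `merged_grid` and the bytes-based key are illustrative, not pcigale's internals:

```python
import numpy as np

_grid_cache = {}

def merged_grid(wave1, wave2):
    # Arrays are not hashable, so key the cache on their raw bytes.
    key = (wave1.tobytes(), wave2.tobytes())
    if key not in _grid_cache:
        # Compute the sorted union of the two grids only once.
        _grid_cache[key] = np.unique(np.concatenate((wave1, wave2)))
    return _grid_cache[key]
```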
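A stripped-down 1-D trapezoidal rule of the kind alluded to above (a sketch, not pcigale's actual implementation) simply drops np.trapz's nD handling and safeguards:

```python
import numpy as np

def trapz_1d(y, x):
    # Plain 1-D trapezoidal rule; np.diff(x) can be cached when the
    # wavelength grid is reused across many models.
    dx = np.diff(x)
    return np.sum(dx * (y[1:] + y[:-1])) * 0.5
```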
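The marshal-based indexing can be sketched as below; the parameter list is hypothetical:

```python
import marshal

# Hypothetical list of parameters identifying a model in the internal cache.
params = ("sfhdelayed", 2.0, "bc03", 0.02)
key = marshal.dumps(params)   # purely internal, much faster than json.dumps
```

marshal only round-trips simple built-in types, which is why it is suitable here but not for external data.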
## 0.5.1 (2015-04-28)
### Changed
- Set the default dale2014 AGN fraction to 0 to avoid the accidental inclusion of AGN. (Denis Burgarella)
- Rename the averaged SFRs: there are now two, averaged over 10 Myr (sfh.sfr10Myrs) and 100 Myr (sfh.sfr100Myrs). (Denis Burgarella)
- Improve the documentation of the savefluxes module. (Denis Burgarella)
### Fixed
- Correction of the x-axis limits. (Denis Burgarella)
- Fix the detection of the presence of the agn.fritz2006_therm in pcigale-plots. (Denis Burgarella)
- Correct the wavelength in the SCUBA 450 μm filter. (Denis Burgarella)
- Install the ancillary data required to make plots. (Yannick Roehlly)
## 0.5.0 (2015-04-02)
## 0.4.0 (2014-10-09)
## 0.3.0 (2014-07-06)
## 0.2.0 (2014-06-10)
## 0.1.0 (2014-05-26)
@@ -117,17 +117,17 @@ def read_bc03_ssp(filename):
# The time grid is in year, we want Myr.
time_grid = np.array(time_grid, dtype=float)
time_grid = time_grid * 1.e-6
time_grid *= 1.e-6
# The first "long" vector encountered is the wavelength grid. The value
# are in Ångström, we convert it to nano-meter.
wavelength = np.array(full_table.pop(0), dtype=float)
wavelength = wavelength * 0.1
wavelength *= 0.1
# The luminosities are in Solar luminosity (3.826.10^33 ergs.s-1) per
# Ångström, we convert it to W/nm.
luminosity = np.array(full_table, dtype=float)
luminosity = luminosity * 3.826e27
luminosity *= 3.826e27
# Transposition to have the time in the second axis.
luminosity = luminosity.transpose()
@@ -137,6 +137,7 @@ def read_bc03_ssp(filename):
def build_filters(base):
filters = []
filters_dir = os.path.join(os.path.dirname(__file__), 'filters/')
for filter_file in glob.glob(filters_dir + '*.dat'):
with open(filter_file, 'r') as filter_file_read:
@@ -165,8 +166,9 @@ def build_filters(base):
new_filter.effective_wavelength = np.mean(
filter_table[0][filter_table[1] > 0]
def build_m2005(base):
@@ -226,7 +228,7 @@ def build_m2005(base):
[age_grid_orig, lambda_grid_orig, flux_orig] = \
spec_table[:, spec_table[1, :] == wavelength]
flux_orig = flux_orig * 10 * 1.e-7 # From erg/s^-1/Å to W/nm
age_grid_orig = age_grid_orig * 1000 # Gyr to Myr
age_grid_orig *= 1000 # Gyr to Myr
flux_regrid = interpolate.interp1d(age_grid_orig,
@@ -329,6 +331,7 @@ def build_bc2003(base):
def build_dale2014(base):
models = []
dale2014_dir = os.path.join(os.path.dirname(__file__), 'dale2014/')
# Getting the alpha grid for the templates
@@ -369,10 +372,9 @@ def build_dale2014(base):
lumin[lumin < 0] = 0
lumin[wave < 2E3] = 0
norm = np.trapz(lumin, x=wave)
lumin = lumin/norm
base.add_dale2014(Dale2014(fraction, alpha_grid[al-1], wave, lumin))
lumin /= norm
models.append(Dale2014(fraction, alpha_grid[al-1], wave, lumin))
# Emission from dust heated by AGN - Quasar template
filename = dale2014_dir + "shi_agn.regridded.extended.dat"
print("Importing {}...".format(filename))
@@ -381,12 +383,15 @@ def build_dale2014(base):
wave *= 1e3
lumin_quasar = 10**lumin_quasar / wave
norm = np.trapz(lumin_quasar, x=wave)
lumin_quasar = lumin_quasar / norm
lumin_quasar /= norm
models.append(Dale2014(1.0, 0.0, wave, lumin_quasar))
base.add_dale2014(Dale2014(1.0, 0.0, wave, lumin_quasar))
def build_dl2007(base):
models = []
dl2007_dir = os.path.join(os.path.dirname(__file__), 'dl2007/')
qpah = {
@@ -442,7 +447,7 @@ def build_dl2007(base):
# Conversion from Jy cm² sr¯¹ H¯¹to W nm¯¹ (kg of dust)¯¹
lumin *= conv/MdMH[model]
base.add_dl2007(DL2007(qpah[model], umin, umin, wave, lumin))
models.append(DL2007(qpah[model], umin, umin, wave, lumin))
for umax in umaximum:
filename = dl2007_dir + "U{}/U{}_{}_MW3.1_{}.txt".format(umin,
@@ -459,10 +464,12 @@ def build_dl2007(base):
# Conversion from Jy cm² sr¯¹ H¯¹to W nm¯¹ (kg of dust)¯¹
lumin *= conv/MdMH[model]
base.add_dl2007(DL2007(qpah[model], umin, umax, wave, lumin))
models.append(DL2007(qpah[model], umin, umax, wave, lumin))
def build_dl2014(base):
models = []
dl2014_dir = os.path.join(os.path.dirname(__file__), 'dl2014/')
qpah = {"000": 0.47, "010": 1.12, "020": 1.77, "030": 2.50, "040": 3.19,
@@ -515,7 +522,7 @@ def build_dl2014(base):
# Conversion from Jy cm² sr¯¹ H¯¹to W nm¯¹ (kg of dust)¯¹
lumin *= conv/MdMH[model]
base.add_dl2014(DL2014(qpah[model], umin, umin, 1.0, wave, lumin))
models.append(DL2014(qpah[model], umin, umin, 1.0, wave, lumin))
for al in alpha:
filename = (dl2014_dir + "U{}_1e7_MW3.1_{}/spec_{}.dat"
.format(umin, model, al))
@@ -529,11 +536,12 @@ def build_dl2014(base):
# Conversion from Jy cm² sr¯¹ H¯¹to W nm¯¹ (kg of dust)¯¹
lumin *= conv/MdMH[model]
base.add_dl2014(DL2014(qpah[model], umin, 1e7, al, wave,
models.append(DL2014(qpah[model], umin, 1e7, al, wave, lumin))
def build_fritz2006(base):
models = []
fritz2006_dir = os.path.join(os.path.dirname(__file__), 'fritz2006/')
# Parameters of Fritz+2006
@@ -590,16 +598,19 @@ def build_fritz2006(base):
lumin_agn *= 1e-4
# Normalization of the lumin_therm to 1W
norm = np.trapz(lumin_therm, x=wave)
lumin_therm = lumin_therm / norm
lumin_scatt = lumin_scatt / norm
lumin_agn = lumin_agn / norm
lumin_therm /= norm
lumin_scatt /= norm
lumin_agn /= norm
base.add_fritz2006(Fritz2006(params[4], params[3], params[2],
models.append(Fritz2006(params[4], params[3], params[2],
params[1], params[0], psy[n], wave,
lumin_therm, lumin_scatt, lumin_agn))
def build_nebular(base):
models_lines = []
models_cont = []
lines_dir = os.path.join(os.path.dirname(__file__), 'nebular/')
# Number of Lyman continuum photon to normalize the nebular continuum
@@ -628,14 +639,9 @@ def build_nebular(base):
ratio2 = ratio2/ratio2[w]
ratio3 = ratio3/ratio3[w]
lines = NebularLines(np.float(Z), -3., wave, ratio1)
lines = NebularLines(np.float(Z), -2., wave, ratio2)
lines = NebularLines(np.float(Z), -1., wave, ratio3)
models_lines.append(NebularLines(np.float(Z), -3., wave, ratio1))
models_lines.append(NebularLines(np.float(Z), -2., wave, ratio2))
models_lines.append(NebularLines(np.float(Z), -1., wave, ratio3))
filename = "{}continuum_{}.dat".format(lines_dir, Z)
print("Importing {}...".format(filename))
@@ -651,15 +657,12 @@ def build_nebular(base):
cont2 *= conv
cont3 *= conv
cont = NebularContinuum(np.float(Z), -3., wave, cont1)
cont = NebularContinuum(np.float(Z), -2., wave, cont2)
cont = NebularContinuum(np.float(Z), -1., wave, cont3)
models_cont.append(NebularContinuum(np.float(Z), -3., wave, cont1))
models_cont.append(NebularContinuum(np.float(Z), -2., wave, cont2))
models_cont.append(NebularContinuum(np.float(Z), -1., wave, cont3))
def build_base():
base = Database(writable=True)
@@ -14,4 +14,4 @@
1648.07 0.1601
1705.61 0.1188
1750.00 0.1050
1810.83 -0.0009
1810.83 0.0000
# PSEUDO_D4000
# energy
# D4000 pseudo filter
3600.0 0.0
3610.0 0.0
3620.0 0.0
3630.0 0.0
3640.0 0.0
3650.0 0.0
3660.0 0.0
3670.0 0.0
3680.0 0.0
3690.0 0.0
3700.0 0.0
3710.0 0.0
3720.0 0.0
3730.0 0.0
3740.0 0.0
3750.0 0.0
3760.0 0.0
3770.0 0.0
3780.0 0.0
3790.0 0.0
3800.0 0.0
3810.0 0.0
3820.0 0.0
3830.0 0.0
3840.0 0.0
3849.0 0.0
3850.0 -0.1
@@ -57,20 +33,3 @@
4100.0 0.1
4101.0 0.0
4110.0 0.0
4120.0 0.0
4130.0 0.0
4140.0 0.0
4150.0 0.0
4160.0 0.0
4170.0 0.0
4180.0 0.0
4190.0 0.0
4200.0 0.0
4210.0 0.0
4220.0 0.0
4230.0 0.0
4240.0 0.0
4250.0 0.0
4260.0 0.0
4270.0 0.0
4280.0 0.0
# energy
# Lyman continuum pseudo filter
91 1
100 1
110 1
120 1
130 1
140 1
150 1
160 1
170 1
180 1
190 1
200 1
210 1
220 1
230 1
240 1
250 1
260 1
270 1
280 1
290 1
300 1
310 1
320 1
330 1
340 1
350 1
360 1
370 1
380 1
390 1
400 1
410 1
420 1
430 1
440 1
450 1
460 1
470 1
480 1
490 1
500 1
510 1
520 1
530 1
540 1
550 1
560 1
570 1
580 1
590 1
600 1
610 1
620 1
630 1
640 1
650 1
660 1
670 1
680 1
690 1
700 1
710 1
720 1
730 1
740 1
750 1
760 1
770 1
780 1
790 1
800 1
810 1
820 1
830 1
840 1
850 1
860 1
870 1
880 1
890 1
900 1
910 1
911 1
912 0
920 0
@@ -15,14 +15,16 @@ __version__ = "0.1-alpha"
def init(config):
"Create a blank configuration file."
"""Create a blank configuration file.
print("The initial configuration file was created. Please complete it "
"with the data file name and the pcigale modules to use.")
def genconf(config):
"Generate the full configuration."
"""Generate the full configuration.
print("The configuration file has been updated. Please complete the "
"various module parameters and the data file columns to use in "
@@ -30,7 +32,8 @@ def genconf(config):
def check(config):
"Check the configuration."
"""Check the configuration.
# TODO: Check if all the parameters that don't have default values are
# given for each module.
print("With this configuration, pcigale must compute {} "
@@ -41,7 +44,8 @@ def check(config):
def run(config):
"Run the analysis."
"""Run the analysis.
data_file = config.configuration['data_file']
column_list = config.configuration['column_list']
creation_modules = config.configuration['creation_modules']
@@ -87,18 +91,21 @@ def main():
run_parser = subparsers.add_parser('run', help=run.__doc__)
args = parser.parse_args()
if args.config_file:
config = Configuration(args.config_file)
if len(sys.argv) == 1:
config = Configuration()
if args.parser == 'init':
elif args.parser == 'genconf':
elif args.parser == 'check':
elif args.parser == 'run':
args = parser.parse_args()
if args.config_file:
config = Configuration(args.config_file)
config = Configuration()
if args.parser == 'init':
elif args.parser == 'genconf':
elif args.parser == 'check':
elif args.parser == 'run':
@@ -5,6 +5,7 @@
# Author: Yannick Roehlly & Denis Burgarella
from importlib import import_module
import numpy as np
from astropy.table import Column
@@ -148,20 +149,18 @@ def get_module(module_name):
def adjust_errors(flux, error, tolerance, lim_flag, default_error=0.1,
"""Adjust the errors replacing the 0 values by the default error and
adding the systematic deviation.
The systematic deviation change the error to:
sqrt( error² + (flux * deviation)² )
def adjust_data(fluxes, errors, tolerance, lim_flag, default_error=0.1,
"""Adjust the fluxes and errors replacing the invalid values by NaN, and
adding the systematic deviation. The systematic deviation changes the
errors to: sqrt(errors² + (fluxes*deviation)²)
flux: array of floats
error: array of floats
Observational error in the same unit as the fluxes.
fluxes: array of floats
Observed fluxes.
errors: array of floats
Observational errors in the same unit as the fluxes.
tolerance: float
Tolerance threshold under flux error is considered as 0.
lim_flag: boolean
@@ -179,38 +178,35 @@ def adjust_errors(flux, error, tolerance, lim_flag, default_error=0.1,
# The arrays must have the same lengths.
if len(flux) != len(error):
if len(fluxes) != len(errors):
raise ValueError("The flux and error arrays must have the same "
# We copy the error array not to modify the original one.
error = np.copy(error)
# We copy the arrays not to modify the original ones.
fluxes = fluxes.copy()
errors = errors.copy()
# We define:
# 1) upper limit == (lim_flag==True) and
# [(flux > tolerance) and (-9990. < error < tolerance)]
# 2) no data == (flux < -9990.) and (error < -9990.)
# Note that the upper-limit test must be performed before the no-data test
# because if lim_flag is False, we process upper limits as no-data.
# Replace errors below tolerance by the default one.
mask_noerror = np.logical_and(flux > tolerance, error < -9990.)
error[mask_noerror] = (default_error * flux[mask_noerror])
# We set invalid data to NaN
mask_invalid = np.where((fluxes <= tolerance) | (errors < -9990.))
fluxes[mask_invalid] = np.nan
errors[mask_invalid] = np.nan
mask_limflag = np.logical_and.reduce(
(flux > tolerance, error >= -9990., error < tolerance))
# Replace missing errors by the default ones.
mask_noerror = np.where((fluxes > tolerance) & ~np.isfinite(errors))
errors[mask_noerror] = (default_error * fluxes[mask_noerror])
# Replace upper limits by no data if lim_flag==False
if not lim_flag:
flux[mask_limflag] = -9999.
error[mask_limflag] = -9999.
mask_ok = np.logical_and(flux > tolerance, error > tolerance)
mask_limflag = np.where((fluxes > tolerance) & (errors < tolerance))
fluxes[mask_limflag] = np.nan
errors[mask_limflag] = np.nan
# Add the systematic error.
error[mask_ok] = np.sqrt(error[mask_ok]**2 +
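The hunk above is truncated; the systematic-deviation step its docstring describes, errors → sqrt(errors² + (fluxes·deviation)²), can be sketched independently (names are illustrative):

```python
import numpy as np

def add_systematic_deviation(fluxes, errors, deviation=0.1):
    # Apply errors -> sqrt(errors**2 + (fluxes * deviation)**2)
    # on valid (finite) data only; NaN entries are left untouched.
    errors = errors.copy()
    ok = np.isfinite(fluxes) & np.isfinite(errors)
    errors[ok] = np.sqrt(errors[ok] ** 2 + (fluxes[ok] * deviation) ** 2)
    return errors
```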