Commit 0d5eb1ab authored by Médéric Boquien's avatar Médéric Boquien

Merge branch 'develop' into schreiber16

parents 259519f7 46e87695
......@@ -2,12 +2,50 @@
## Unreleased
### Added
- When using the savefluxes module, all the output parameters were saved. This is not efficient when the user is only interested in some of the output parameters. We introduce the "variables" configuration parameter for savefluxes to list the output parameters the user wants to save. If the list is left empty, all parameters are saved, preserving the current behaviour. This should increase the speed substantially while also saving memory. (Médéric Boquien)
- Similarly to the savefluxes module, in the pdf_analysis module if the list of physical properties is left empty, all physical parameters are now analysed. (Médéric Boquien)
- It is now possible to pass the parameters of the models to be computed from a file rather than having to indicate them in pcigale.ini. This means that the models do not necessarily need to be computed on a systematic grid of parameters. The name of the file is passed as an argument to the parameters\_file keyword in pcigale.ini. If this is done, the creation\_modules argument is ignored. Finally, the file must be formatted as follows: each row is a different model and each column a different parameter. The columns must follow the naming scheme module\_name.parameter\_name, for instance "bc03.imf". (Médéric Boquien)
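As a sketch of the expected layout (the parameter columns below are purely illustrative), such a parameters file could look like:

```
# one row per model, one column per parameter, named module_name.parameter_name
bc03.imf  sfh2exp.tau_main  sfh2exp.age
0         3000              5000
1         1000              2000
```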
### Changed
- The estimates of the physical parameters from the analysis of the PDF and from the best fit were recorded in separate files. This can be bothersome when trying to compare quantities from different files. Rather, we generate a single file containing all quantities. The ones estimated from the analysis of the PDF are prefixed with "bayes" and the ones from the best fit with "best". (Médéric Boquien)
- To homogenize input and output files, the "observation_id" has been changed to "id" in the output files. (Médéric Boquien)
- The output files providing estimates of the physical properties are now generated both in the form of text and FITS files. (Médéric Boquien)
- When using the dustatt_calzleit module, choosing δ≠0 led to an effective E(B-V) different from the one set by the user. Now the E(B-V) will always correspond to the one specified by the user. This means that at fixed E(B-V), A(V) depends on δ. (Médéric Boquien)
- The pcigale-mock tool has been merged into pcigale-plots; the mock plots can be obtained with the "mock" command.
- The sfhdelayed module is now initialised with _init_code() to be consistent with the way things are done in other modules. This should also give a slight speedup under some circumstances. (Médéric Boquien)
- In sfhfromfile, the specification of the time grid was vague and could therefore lead to incorrect results if it was not properly formatted by the end user. The description has been clarified and we now check that the time starts from 0 and that the time step is always 1 Myr. If that is not the case, an exception is raised. (Médéric Boquien)
- When the redshift is not indicated in pcigale.ini, the analysis module fills the list of redshifts from the redshifts indicated in the input flux file. This is inefficient as analysis modules should not have to modify the configuration. Now this is done when interpreting pcigale.ini, before calling the relevant analysis module. As a side effect, "pcigale check" now returns the total number of models that cigale will compute rather than the number of models per redshift bin. (Médéric Boquien)
### Fixed
- To estimate parameters in log, pcigale determines which variables end with the "_log" string and removes it to compute the models. However, in some circumstances it was overzealous. This has been fixed. (Médéric Boquien)
- When estimating a parameter in log, the values were neither scaled appropriately nor taken in log when saving the related χ² and PDF. (Médéric Boquien)
- In the presence of upper limits, correct the scaling factor of the models to the observations before computing the χ², not after. (Médéric Boquien)
- When called without arguments, pcigale-plots would crash and display the backtrace. Now it displays a short help message showing how to use it. (Médéric Boquien)
- For sfh2exp, when setting the scale of the SFH with sfr0, the normalisation was incorrect by a factor exp(-1/tau_main). (Médéric Boquien)
- The mass-dependent physical properties are computed assuming the redshift of the model. However, because we round the observed redshifts to two decimals, there can be a difference of 0.005 in redshift between the models and the actual observation if CIGALE computes the list of redshifts itself. At low redshift, this can cause a discrepancy in the mass-dependent physical properties: ~0.35 dex at z=0.010 vs 0.015 for instance. Therefore we now evaluate these physical quantities at the observed redshift at full precision. (Médéric Boquien, issue reported by Samir Salim)
- In the sfhfromfile module, an extraneous offset in the column index caused the previous column to be read as the SFR rather than the selected one. (Médéric Boquien)
- In sfhfromfile, cigale crashed if the SFR column contained integers. Now it is systematically converted to float. (Médéric Boquien)
- The order of the parameters for the analysis modules would change each time a new pcigale.ini was generated. Now the order is fixed. (Médéric Boquien)
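The magnitude of the redshift-rounding discrepancy described above can be checked with a quick back-of-the-envelope computation. This sketch assumes that, at low redshift, the luminosity distance is roughly proportional to z (ignoring cosmological corrections) and that mass-proportional quantities scale as d_L²:

```python
import math

# Fitting a z=0.015 object with a model rounded to z=0.010 rescales
# mass-proportional quantities by (d_L ratio)² ~ (z ratio)², which in
# log corresponds to a bias of roughly:
bias_dex = math.log10((0.015 / 0.010) ** 2)
print(f"{bias_dex:.2f} dex")  # ≈0.35 dex, matching the reported discrepancy
```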
### Optimised
- Prior to version 0.7.0, we needed to maintain the list of redshifts for all the computed models. Since 0.7.0 we just infer the redshift from a list of unique redshifts. This means that we can now discard the list of redshifts for all the models and only keep the list of unique redshifts. This saves ~8 MB of memory for every 10⁶ models. The models should also be computed slightly faster, though the difference is within the measurement noise. (Médéric Boquien)
- The sfhfromfile module is now fully initialised when it is instantiated rather than when processing the SED. The speedup should be especially noticeable when processing many different SEDs. (Médéric Boquien)
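The ~8 MB figure in the optimisation above follows directly from the size of a float64 array; a minimal illustration (the redshift grid here is hypothetical):

```python
import numpy as np

# One float64 redshift per model costs 8 bytes, so 10**6 models need
# 8 MB, while the list of unique redshifts is negligible in comparison.
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 2.0, 21)          # hypothetical unique redshifts
redshifts = rng.choice(grid, size=10**6)  # per-model list (old scheme)
print(redshifts.nbytes)                   # 8000000 bytes ≈ 8 MB
print(np.unique(redshifts).nbytes)        # a few hundred bytes at most
```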
## 0.8.1 (2015-12-07)
### Fixed
- To estimate parameters in log, pcigale determines which variables end with the "_log" string and removes it to compute the models. However, in some circumstances it was overzealous. This has been fixed. (Médéric Boquien)
## 0.8.0 (2015-12-01)
### Added
- The evaluation of the parameters is always done linearly. This can be a problem when estimating the SFR or the stellar mass, for instance, as it is usual to estimate their log instead. Because the log is non-linear, the likelihood-weighted mean of the log is not the log of the likelihood-weighted mean. Therefore the estimation of the log of these parameters has to be done during the analysis step. This is now possible. The variables to be analysed in log just need to be indicated with the suffix "_log", for instance "stellar.m_star_log". (Médéric Boquien, idea suggested by Samir Salim)
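A small numerical sketch of why averaging must happen in log space (the masses and likelihood weights below are hypothetical):

```python
import numpy as np

# The likelihood-weighted mean of the log is not the log of the
# likelihood-weighted mean, so log-quantities must be averaged in log
# space during the analysis itself.
mstar = np.array([1e9, 1e10, 1e11])  # hypothetical stellar masses
weights = np.array([0.2, 0.5, 0.3])  # normalised likelihoods
log_of_mean = np.log10(np.sum(weights * mstar))
mean_of_log = np.sum(weights * np.log10(mstar))
print(log_of_mean, mean_of_log)  # ≈10.55 vs 10.10: the two estimates differ
```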
### Fixed
- Running the scripts in parallel triggered a deadlock on OS X with Python 3.5. A workaround has been implemented. (Médéric Boquien)
- When no dust emission module was used, pcigale genconf complained that no dust attenuation module was used. The message now correctly refers to dust emission rather than attenuation. (Médéric Boquien and Laure Ciesla)
- Allowing more flexibility to read ASCII files broke the handling of FITS files. It is now fixed. (Yannick Roehlly)
### Optimised
### Changed
- The attenuation.ebvs\_main and attenuation.ebvs\_old parameters are no longer present as they were duplicates of attenuation.E\_BVs.stellar.old and attenuation.E\_BVs.stellar.young (that are still available).
## 0.7.0 (2015-11-19)
### Added
......@@ -96,7 +134,7 @@
## 0.5.1 (2015-04-28)
### Changed
- Set the default dale2014 AGN fraction to 0 to avoid the accidentl inclusion of AGN. (Denis Burgarella)
- Set the default dale2014 AGN fraction to 0 to avoid the accidental inclusion of AGN. (Denis Burgarella)
- Modify the name of the averaged SFR: two averaged SFRs over 10 (sfh.sfr10Myrs) and 100Myrs (sfh.sfr100Myrs). (Denis Burgarella)
- Improve the documentation of the savefluxes module. (Denis Burgarella)
......
......@@ -2,10 +2,10 @@
# energy
# H beta pseudo filter
4827.87 0.0
4827.875 -0.2857959417043478
4847.87 -0.2857959417043478
4847.875 0.34782608695652173
4876.625 0.34782608695652173
4876.63 -0.2857959417043478
4891.625 -0.2857959417043478
4827.875 0.2857959417043478
4847.87 0.2857959417043478
4847.875 -0.34782608695652173
4876.625 -0.34782608695652173
4876.63 0.2857959417043478
4891.625 0.2857959417043478
4891.63 0.0
......@@ -2,14 +2,14 @@
# energy
# H delta pseudo filter
4041.5 0.0
4041.6000000000004 -0.1415428166967742
4079.75 -0.1415428166967742
4041.6000000000004 0.1415428166967742
4079.75 0.1415428166967742
4079.8 0.0
4083.3999999999996 0.0
4083.5 0.25806451612903225
4122.25 0.25806451612903225
4083.5 -0.25806451612903225
4122.25 -0.25806451612903225
4122.3 0.0
4128.4 0.0
4128.5 -0.1415428166967742
4161.0 -0.1415428166967742
4128.5 0.1415428166967742
4161.0 0.1415428166967742
4161.1 0.0
......@@ -2,12 +2,12 @@
# energy
# H gamma pseudo filter
4283.4 0.0
4283.5 -0.11268875366857142
4319.74 -0.11268875366857142
4319.75 0.22857142857142856
4363.5 0.22857142857142856
4283.5 0.11268875366857142
4319.74 0.11268875366857142
4319.75 -0.22857142857142856
4363.5 -0.22857142857142856
4363.6 0.0
4367.200000000001 0.0
4367.25 -0.11268875366857142
4419.75 -0.11268875366857142
4367.25 0.11268875366857142
4419.75 0.11268875366857142
4419.8 0.0
......@@ -2,14 +2,14 @@
# energy
# Mg2 pseudo filter
4895.12 0.0
4895.125 -0.07843137254117646
4957.625 -0.07843137254117646
4895.125 0.07843137254117646
4957.625 0.07843137254117646
4957.63 0.0
5154.120000000001 0.0
5154.125 0.23529411764705882
5196.625 0.23529411764705882
5154.125 -0.23529411764705882
5196.625 -0.23529411764705882
5196.63 0.0
5301.12 0.0
5301.125 -0.07843137254117646
5366.125 -0.07843137254117646
5301.125 0.07843137254117646
5366.125 0.07843137254117646
5366.130000000001 0.0
......@@ -8,8 +8,8 @@ import multiprocessing as mp
import sys
from .session.configuration import Configuration
from .analysis_modules import get_module as get_analysis_module
from .analysis_modules.utils import ParametersHandler
from .analysis_modules import get_module
from .handlers.parameters_handler import ParametersHandler
__version__ = "0.1-alpha"
......@@ -36,28 +36,17 @@ def check(config):
"""
# TODO: Check if all the parameters that don't have default values are
# given for each module.
print("With this configuration, pcigale must compute {} "
"SEDs.".format(ParametersHandler(
config.configuration['creation_modules'],
config.configuration['creation_modules_params']
).size))
configuration = config.configuration
print("With this configuration, cigale will compute {} "
"models.".format(ParametersHandler(configuration).size))
def run(config):
"""Run the analysis.
"""
data_file = config.configuration['data_file']
column_list = config.configuration['column_list']
creation_modules = config.configuration['creation_modules']
creation_modules_params = config.configuration['creation_modules_params']
analysis_module = get_analysis_module(config.configuration[
'analysis_method'])
analysis_module_params = config.configuration['analysis_method_params']
cores = config.configuration['cores']
analysis_module.process(data_file, column_list, creation_modules,
creation_modules_params, analysis_module_params,
cores)
configuration = config.configuration
analysis_module = get_module(configuration['analysis_method'])
analysis_module.process(configuration)
def main():
......
......@@ -31,8 +31,7 @@ class AnalysisModule(object):
# module parameter.
self.parameters = kwargs
def _process(self, data_file, column_list, creation_modules,
creation_modules_params, parameters):
def _process(self, configuration):
"""Do the actual analysis
This method is responsible for the fitting / analysis process
......@@ -40,19 +39,8 @@ class AnalysisModule(object):
Parameters
----------
data_file: string
Name of the file containing the observations to be fitted.
column_list: array of strings
Names of the columns from the data file to use in the analysis.
creation_modules: array of strings
Names (in the right order) of the modules to use to build the SED.
creation_modules_params: array of array of dictionaries
Array containing all the possible combinations of configurations
for the creation_modules. Each 'inner' array has the same length as
the creation_modules array and contains the configuration
dictionary for the corresponding module.
parameters: dictionary
Configuration for the module.
configuration: dictionary
Contents of pcigale.ini in the form of a dictionary
Returns
-------
......@@ -61,8 +49,7 @@ class AnalysisModule(object):
"""
raise NotImplementedError()
def process(self, data_file, column_list, creation_modules,
creation_modules_params, parameters):
def process(self, configuration):
"""Process with the analysis
This method is responsible for checking the module parameters before
......@@ -72,19 +59,8 @@ class AnalysisModule(object):
Parameters
----------
data_file: string
Name of the file containing the observations to be fitted.
column_list: array of strings
Names of the columns from the data file to use in the analysis.
creation_modules: array of strings
Names (in the right order) of the modules to use to build the SED.
creation_modules_params: array of array of dictionaries
Array containing all the possible combinations of configurations
for the creation_modules. Each 'inner' array has the same length as
the creation_modules array and contains the configuration
dictionary for the corresponding module.
parameters: dictionary
Configuration for the module.
configuration: dictionary
Contents of pcigale.ini in the form of a dictionary
Returns
-------
......@@ -95,6 +71,7 @@ class AnalysisModule(object):
KeyError: when not all the needed parameters are given.
"""
parameters = configuration['analysis_method_params']
# For parameters that are present on the parameter_list with a default
# value and that are not in the parameters dictionary, we add them
# with their default value.
......@@ -124,8 +101,7 @@ class AnalysisModule(object):
"expected one." + message)
# We do the actual processing
self._process(data_file, column_list, creation_modules,
creation_modules_params, parameters)
self._process(configuration)
def get_module(module_name):
......
......@@ -25,6 +25,7 @@ reduced χ²) is given for each observation.
"""
from collections import OrderedDict
import ctypes
import multiprocessing as mp
from multiprocessing.sharedctypes import RawArray
......@@ -34,29 +35,29 @@ import numpy as np
from ...utils import read_table
from .. import AnalysisModule, complete_obs_table
from .utils import save_table_analysis, save_table_best, analyse_chi2
from .utils import save_results, analyse_chi2
from ...warehouse import SedWarehouse
from .workers import sed as worker_sed
from .workers import init_sed as init_worker_sed
from .workers import init_analysis as init_worker_analysis
from .workers import analysis as worker_analysis
from ..utils import ParametersHandler, backup_dir
from ..utils import backup_dir
from ...handlers.parameters_handler import ParametersHandler
# Tolerance threshold under which any flux or error is considered as 0.
TOLERANCE = 1e-12
# Limit the redshift to this number of decimals
REDSHIFT_DECIMALS = 2
class PdfAnalysis(AnalysisModule):
"""PDF analysis module"""
parameter_list = dict([
parameter_list = OrderedDict([
("analysed_variables", (
"array of strings",
"List of the variables (in the SEDs info dictionaries) for which "
"the statistical analysis will be done.",
"List of the physical properties to estimate. Leave empty to "
"analyse all the physical properties (not recommended when there "
"are many models).",
["sfh.sfr", "sfh.sfr10Myrs", "sfh.sfr100Myrs"]
)),
("save_best_sed", (
......@@ -90,8 +91,7 @@ class PdfAnalysis(AnalysisModule):
))
])
def process(self, data_file, column_list, creation_modules,
creation_modules_params, config, cores):
def process(self, conf):
"""Process with the psum analysis.
The analysis is done in two steps which can both run on multiple
......@@ -102,19 +102,8 @@ class PdfAnalysis(AnalysisModule):
Parameters
----------
data_file: string
Name of the file containing the observations to fit.
column_list: list of strings
Name of the columns from the data file to use for the analysis.
creation_modules: list of strings
List of the module names (in the right order) to use for creating
the SEDs.
creation_modules_params: list of dictionaries
List of the parameter dictionaries for each module.
config: dictionary
Dictionary containing the configuration.
core: integer
Number of cores to run the analysis on
conf: dictionary
Contents of pcigale.ini in the form of a dictionary
"""
np.seterr(invalid='ignore')
......@@ -125,35 +114,38 @@ class PdfAnalysis(AnalysisModule):
backup_dir()
# Initialise variables from input arguments.
analysed_variables = config["analysed_variables"]
creation_modules = conf['creation_modules']
creation_modules_params = conf['creation_modules_params']
analysed_variables = conf['analysis_method_params']["analysed_variables"]
analysed_variables_nolog = [variable[:-4] if variable.endswith('_log')
else variable for variable in
analysed_variables]
n_variables = len(analysed_variables)
save = {key: config["save_{}".format(key)].lower() == "true"
save = {key: conf['analysis_method_params']["save_{}".format(key)].lower() == "true"
for key in ["best_sed", "chi2", "pdf"]}
lim_flag = config["lim_flag"].lower() == "true"
mock_flag = config["mock_flag"].lower() == "true"
lim_flag = conf['analysis_method_params']["lim_flag"].lower() == "true"
mock_flag = conf['analysis_method_params']["mock_flag"].lower() == "true"
filters = [name for name in column_list if not name.endswith('_err')]
filters = [name for name in conf['column_list'] if not
name.endswith('_err')]
n_filters = len(filters)
# Read the observation table and complete it by adding error where
# none is provided and by adding the systematic deviation.
obs_table = complete_obs_table(read_table(data_file), column_list,
filters, TOLERANCE, lim_flag)
obs_table = complete_obs_table(read_table(conf['data_file']),
conf['column_list'], filters, TOLERANCE,
lim_flag)
n_obs = len(obs_table)
w_redshifting = creation_modules.index('redshifting')
if list(creation_modules_params[w_redshifting]['redshift']) == ['']:
z = np.unique(np.around(obs_table['redshift'],
decimals=REDSHIFT_DECIMALS))
creation_modules_params[w_redshifting]['redshift'] = z
del z
z = np.array(creation_modules_params[w_redshifting]['redshift'])
# The parameters handler allows us to retrieve the models parameters
# from a 1D index. This is useful in that we do not have to create
# a list of parameters as they are computed on-the-fly. It also has
# nice goodies such as finding the index of the first parameter to
# have changed between two indices or the number of models.
params = ParametersHandler(creation_modules, creation_modules_params)
params = ParametersHandler(conf)
n_params = params.size
# Retrieve an arbitrary SED to obtain the list of output parameters
......@@ -173,24 +165,19 @@ class PdfAnalysis(AnalysisModule):
# not write on the same section.
# We put the shape in a tuple along with the RawArray because workers
# need to know the shape to create the numpy array from the RawArray.
model_redshifts = (RawArray(ctypes.c_double, n_params),
(n_params))
model_fluxes = (RawArray(ctypes.c_double,
n_params * n_filters),
model_fluxes = (RawArray(ctypes.c_double, n_params * n_filters),
(n_params, n_filters))
model_variables = (RawArray(ctypes.c_double,
n_params * n_variables),
model_variables = (RawArray(ctypes.c_double, n_params * n_variables),
(n_params, n_variables))
initargs = (params, filters, analysed_variables, model_redshifts,
model_fluxes, model_variables, time.time(),
mp.Value('i', 0))
if cores == 1: # Do not create a new process
initargs = (params, filters, analysed_variables_nolog, model_fluxes,
model_variables, time.time(), mp.Value('i', 0))
if conf['cores'] == 1: # Do not create a new process
init_worker_sed(*initargs)
for idx in range(n_params):
worker_sed(idx)
else: # Analyse observations in parallel
with mp.Pool(processes=cores, initializer=init_worker_sed,
else: # Compute the models in parallel
with mp.Pool(processes=conf['cores'], initializer=init_worker_sed,
initargs=initargs) as pool:
pool.map(worker_sed, range(n_params))
......@@ -208,17 +195,18 @@ class PdfAnalysis(AnalysisModule):
best_chi2 = (RawArray(ctypes.c_double, n_obs), (n_obs))
best_chi2_red = (RawArray(ctypes.c_double, n_obs), (n_obs))
initargs = (params, filters, analysed_variables, model_redshifts,
model_fluxes, model_variables, time.time(),
mp.Value('i', 0), analysed_averages, analysed_std,
best_fluxes, best_parameters, best_chi2, best_chi2_red,
save, lim_flag, n_obs)
if cores == 1: # Do not create a new process
initargs = (params, filters, analysed_variables, z, model_fluxes,
model_variables, time.time(), mp.Value('i', 0),
analysed_averages, analysed_std, best_fluxes,
best_parameters, best_chi2, best_chi2_red, save, lim_flag,
n_obs)
if conf['cores'] == 1: # Do not create a new process
init_worker_analysis(*initargs)
for idx, obs in enumerate(obs_table):
worker_analysis(idx, obs)
else: # Analyse observations in parallel
with mp.Pool(processes=cores, initializer=init_worker_analysis,
with mp.Pool(processes=conf['cores'],
initializer=init_worker_analysis,
initargs=initargs) as pool:
pool.starmap(worker_analysis, enumerate(obs_table))
......@@ -226,12 +214,9 @@ class PdfAnalysis(AnalysisModule):
print("\nSaving results...")
save_table_analysis('analysis_results.txt', obs_table['id'],
analysed_variables, analysed_averages,
analysed_std)
save_table_best('best_models.txt', obs_table['id'], best_chi2,
best_chi2_red, best_parameters, best_fluxes, filters,
info)
save_results("results", obs_table['id'], analysed_variables,
analysed_averages, analysed_std, best_chi2, best_chi2_red,
best_parameters, best_fluxes, filters, info)
if mock_flag is True:
......@@ -256,11 +241,11 @@ class PdfAnalysis(AnalysisModule):
for idx, name in enumerate(filters):
mock_table[name] = mock_fluxes[:, idx]
initargs = (params, filters, analysed_variables, model_redshifts,
model_fluxes, model_variables, time.time(),
mp.Value('i', 0), analysed_averages, analysed_std,
best_fluxes, best_parameters, best_chi2,
best_chi2_red, save, lim_flag, n_obs)
initargs = (params, filters, analysed_variables, z, model_fluxes,
model_variables, time.time(), mp.Value('i', 0),
analysed_averages, analysed_std, best_fluxes,
best_parameters, best_chi2, best_chi2_red, save,
lim_flag, n_obs)
if cores == 1: # Do not create a new process
init_worker_analysis(*initargs)
for idx, mock in enumerate(mock_table):
......@@ -272,12 +257,10 @@ class PdfAnalysis(AnalysisModule):
print("\nSaving results...")
save_table_analysis('analysis_mock_results.txt', mock_table['id'],
analysed_variables, analysed_averages,
analysed_std)
save_table_best('best_mock_models.txt', mock_table['id'],
best_chi2, best_chi2_red, best_parameters,
best_fluxes, filters, info)
save_results("results_mock", mock_table['id'], analysed_variables,
analysed_averages, analysed_std, best_chi2,
best_chi2_red, best_parameters, best_fluxes, filters,
info)
print("Run completed!")
......
......@@ -32,7 +32,7 @@ def save_best_sed(obsid, sed, norm):
sed.to_votable(OUT_DIR + "{}_best_model.xml".format(obsid), mass=norm)
def save_pdf(obsid, name, model_variable, likelihood):
def _save_pdf(obsid, name, model_variable, likelihood):
"""Compute and save the PDF to a FITS file
We estimate the probability density functions (PDF) of the parameter from
......@@ -83,7 +83,40 @@ def save_pdf(obsid, name, model_variable, likelihood):
table.write(OUT_DIR + "{}_{}_pdf.fits".format(obsid, name))
def save_chi2(obsid, name, model_variable, chi2):
def save_pdf(obsid, names, mass_proportional, model_variables, scaling,
likelihood, wlikely):
"""Compute and save the PDF of analysed variables
Parameters
----------
obsid: string
Name of the object. Used to prepend the output file name
names: list of strings
Analysed variables names
mass_proportional: list of strings
Names of the variables that are proportional to the mass
model_variables: array
Values of the model variables
scaling: array
Scaling factors of the models
likelihood: 1D array
Likelihood of the "likely" models
wlikely: array
Indices of the "likely" models
"""
for i, name in enumerate(names):
if name.endswith('_log'):
if name[:-4] in mass_proportional:
model_variable = np.log10(model_variables[:, i][wlikely] *
scaling[wlikely])
else:
model_variable = np.log10(model_variables[:, i][wlikely])
else:
if name in mass_proportional:
model_variable = (model_variables[:, i][wlikely] *
scaling[wlikely])
else:
model_variable = model_variables[:, i][wlikely]
_save_pdf(obsid, name, model_variable, likelihood)
def _save_chi2(obsid, name, model_variable, chi2):
"""Save the best reduced χ² versus an analysed variable
Parameters
......@@ -103,94 +136,107 @@ def save_chi2(obsid, name, model_variable, chi2):
table.write(OUT_DIR + "{}_{}_chi2.fits".format(obsid, name))
def save_table_analysis(filename, obsid, analysed_variables, analysed_averages,
analysed_std):
"""Save the estimated values derived from the analysis of the PDF
def save_chi2(obsid, names, mass_proportional, model_variables, scaling, chi2):
"""Save the best reduced χ² versus analysed variables
Parameters
----------
filename: name of the file to save
Name of the output file
obsid: table column
Names of the objects
analysed_variables: list
Analysed variable names
analysed_averages: RawArray
Analysed variables values estimates
analysed_std: RawArray
Analysed variables errors estimates
obsid: string
Name of the object. Used to prepend the output file name
names: list of strings
Analysed variables names
model_variables: array
Values of the model variables
scaling: array
Scaling factors of the models
chi2: array
Reduced χ² of the models
"""
np_analysed_averages = np.ctypeslib.as_array(analysed_averages[0])
np_analysed_averages = np_analysed_averages.reshape(analysed_averages[1])
np_analysed_std = np.ctypeslib.as_array(analysed_std[0])
np_analysed_std = np_analysed_std.reshape(analysed_std[1])
result_table = Table()
result_table.add_column(Column(obsid.data, name="observation_id"))
for index, variable in enumerate(analysed_variables):
result_table.add_column(Column(
np_analysed_averages[:, index],
name=variable
))
result_table.add_column(Column(
np_analysed_std[:, index],
name=variable+"_err"
))
result_table.write(OUT_DIR + filename, format='ascii.fixed_width',
delimiter=None)
def save_table_best(filename, obsid, chi2, chi2_red, variables, fluxes,
filters, info_keys):
"""Save the values corresponding to the best fit
for i, name in enumerate(names):
if name.endswith('_log'):
if name[:-4] in mass_proportional:
model_variable = np.log10(model_variables[:, i] * scaling)
else:
model_variable = np.log10(model_variables[:, i])
else:
if name in mass_proportional:
model_variable = model_variables[:, i] * scaling
else:
model_variable = model_variables[:, i]
_save_chi2(obsid, name, model_variable, chi2)