Patch v0.13.1 (#176)
* deploy_ghpages.yml: add missing dependency (#161)

* Bump VERSION to v0.14.0-dev (#162)

* bump VERSION to 0.14.0-dev
* add Python 3.9 to conda build

* Removed Ctot from second order term in chemicalequalibrium polynomial (#163)

* Fix prescaling of data and vanishing confidence intervals in `snlls` (#166)

* parse_multidatasets: fix bug in parsing so that prescaling works also for single signals

* tests: fix snlls test

* fitregmodel: adapt to changes in parse_multidatasets

* tests: refactor test as relative assertion

* Update README.md (#167)

* deerload: Fix bug when loading BES3T files with entries in manipulation history layer (#164)

* deerload: fix bug when loading BES3T files with entries in manipulation history layer

* add unit tests for deerload

* deerload: fix error in parsing of XGF files

* rename test data files consistently

* fix tests

Co-authored-by: Stefan Stoll <[email protected]>

* Fix behavior of weights in global fitting  (#171)

* fitregmodel: fix behaviour of weights in global fitting

* fitparamodel: add test for global weights behaviour

* lsqcomponents: fix usage of global weights

* fitmultimodel: fix behaviour of weights in global fitting

* fitmultimodel: return list of fitted signals instead of single array, fix scaling of output fit signals and UQ, fix plotting

* fitmultimodel: fix scaling issue in global fitting

* snlls: fix behaviour of weights in global fitting

* fitmodel: add tests to check behaviour of weights in global fitting

* fitmultimodel: fix error when plotting single dataset fits

* fitmultimodel: fix bug in plotting of single dataset fits

* fitmultimodel: fix error in previous commit

* fitmodel: Fix uncertainty quantification when fitting dipolar evolution functions (#172)

* fitmodel: fix errors in uncertainty quantification for dipolar evolution functions

* modify tests that missed the error

* fitmodel: fixed error found by tests

* fitparamodel: fix scale of output fit.model and its uncertainty (#173)

* snlls: allow `extrapenalty` to take linear parameters as input (#175)

* snlls: allow `extrapenalty` to take linear parameters as input

* snlls: fix order of concatenation of Jacobians

* Optimize default weights in global fitting according to the datasets' noise levels (#174)

* optimize default weights in global fitting according to the datasets' noise levels

* add tests

* fix bug in calculation of default weights for arbitrary scales

* refactor DER_SNR method, add reference

Co-authored-by: Stefan Stoll <[email protected]>

* VERSION: bump to v0.13.1

* CHANGELOG: update for v0.13.1

* CHANGELOG: minor edits

* fix GHA to run on a PR to merge into any branch

* fix GHA to run docs CI on PR to merge into any branch

Co-authored-by: Maxx Tessmer <[email protected]>
Co-authored-by: Stefan Stoll <[email protected]>
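Several of the commits above revolve around how per-dataset weights enter a global fit: each dataset's residual vector is scaled by its global weight before the joint least-squares solve. As a rough, self-contained sketch of that idea (a hypothetical one-parameter example, not DeerLab code):

```python
import numpy as np

def fit_global_slope(xs, ys, weights):
    """Weighted global fit of the shared slope in y = a*x across several
    datasets: each dataset's rows are scaled by its global weight before
    the joint least-squares solve."""
    A = np.concatenate([w*x for w, x in zip(weights, xs)])
    b = np.concatenate([w*y for w, y in zip(weights, ys)])
    a, *_ = np.linalg.lstsq(A[:, None], b, rcond=None)
    return a[0]
```

With weights (1, 0) the fit reproduces the first dataset's slope exactly, and equal weights land in between; this is the kind of behaviour the weight-related fixes above are concerned with in the actual fit functions.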
3 people authored May 18, 2021
1 parent 83f2503 commit 12219df
Showing 41 changed files with 6,110 additions and 162 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/ci_PR.yml
@@ -2,7 +2,7 @@ name: DeerLab PR tests
on:
pull_request:
branches:
- main
- "**"

jobs:
build:
1 change: 1 addition & 0 deletions .github/workflows/deploy_ghpages.yml
@@ -30,6 +30,7 @@ jobs:
python -m pip install numpydoc
python -m pip install sphinx-gallery
python -m pip install sphinxcontrib-httpdomain
python -m pip install sphinxcontrib-ghcontributors
python -m pip install m2r2
python -m pip install sphinx==1.8.4
sudo apt install texlive-extra-utils
2 changes: 1 addition & 1 deletion .github/workflows/docs_PR.yml
@@ -3,7 +3,7 @@ name: Docs Build PR test
on:
pull_request:
branches:
- main
- "**"
paths:
- 'docsrc/**'
- '.github/workflows/deploy_ghpages.yml'
17 changes: 17 additions & 0 deletions CHANGELOG.md
@@ -1,4 +1,21 @@

Release v0.13.1 - May 2021
---------------------------------

#### Overall changes

- Fixed the behaviour of global weights throughout DeerLab fit functions. In some fit functions, the keyword argument ``weights`` had either no effect or not the intended effect on the results. Also fixes the behaviour of built-in plots for global fits ([#168](https://github.com/JeschkeLab/DeerLab/issues/168), [#171](https://github.com/JeschkeLab/DeerLab/pull/171)).
- Optimized the default weights in global fitting according to the datasets' noise levels ([#169](https://github.com/JeschkeLab/DeerLab/issues/169), [#174](https://github.com/JeschkeLab/DeerLab/pull/174)).
- Fixed a bug in ``snlls`` that was causing the confidence intervals in ``snlls``, ``fitmodel`` and ``fitmultimodel`` to vanish for large signal scales ([#165](https://github.com/JeschkeLab/DeerLab/issues/165), [#166](https://github.com/JeschkeLab/DeerLab/pull/166)).
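The noise-adapted default weights mentioned above can be sketched as follows; this is an illustration under assumptions, not DeerLab's exact implementation. Each dataset's noise level is estimated with the DER_SNR estimator (a scaled median of second-order finite differences, robust to smooth signal structure) and the dataset is weighted inversely proportional to it:

```python
import numpy as np

def der_snr(y):
    """DER_SNR noise estimate: a median of second-order finite
    differences, scaled to be unbiased for Gaussian noise while
    remaining insensitive to smooth signal structure."""
    y = np.asarray(y, dtype=float)
    diffs = np.abs(2*y[2:-2] - y[:-4] - y[4:])
    return 1.482602/np.sqrt(6)*np.median(diffs)

def default_weights(datasets):
    """Weights inversely proportional to each dataset's noise level,
    normalized to sum to one (the normalization is an assumption)."""
    w = 1/np.array([der_snr(V) for V in datasets])
    return w/w.sum()
```

Under this scheme a noisier dataset automatically contributes less to the global objective, which is the behaviour described in [#174](https://github.com/JeschkeLab/DeerLab/pull/174).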

#### Specific changes
- ``deerload``: Corrected a bug that happened in certain BES3T Bruker spectrometer files, when there are entries under the ``MANIPULATION HISTORY LAYER`` section at the end of the descriptor file. Also fixed the reading of ``.XGF`` partner files ([#164](https://github.com/JeschkeLab/DeerLab/pull/164)).
- ``snlls``: The keyword argument ``extrapenalty`` now requires a function that takes both non-linear and linear parameters. Corrected the name of the keyword in the documentation ([#175](https://github.com/JeschkeLab/DeerLab/pull/175)).
- ``fitparamodel``: Fixed the scaling of the output ``FitResult.model`` and ``FitResult.modelUncert`` ([#173](https://github.com/JeschkeLab/DeerLab/pull/173)).
- ``ex_pseudotitration_parameter_free``: Removed ``Ctot`` from second order term in the ``chemicalequalibrium`` polynomial ([#163](https://github.com/JeschkeLab/DeerLab/pull/163)).
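For the ``extrapenalty`` change, the supplied callable now receives both the non-linear and the linear parameters. A minimal sketch of such a callable (the penalty choice and its strength are illustrative assumptions, not from DeerLab's documentation):

```python
import numpy as np

# Hypothetical penalty with the post-v0.13.1 signature: it receives the
# non-linear parameters AND the linear parameters, and returns a vector
# that is appended to the least-squares residual. This example penalizes
# linear amplitudes whose sum deviates from one; the strength of 100 is
# an assumed, illustrative value.
def extrapenalty(nonlinpar, linpar):
    return 100*np.atleast_1d(np.sum(linpar) - 1)
```

A callable of this shape could then be passed as ``snlls(..., extrapenalty=extrapenalty)``.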

---------------------------------

Release v0.13.0 - April 2021
---------------------------------

13 changes: 7 additions & 6 deletions README.md
@@ -1,6 +1,7 @@
# DeerLab

[![https://jeschkelab.github.io/DeerLab/](https://img.shields.io/pypi/v/deerlab)](https://pypi.org/project/DeerLab/)
[![https://img.shields.io/conda/v/JeschkeLab/deerlab](https://img.shields.io/conda/v/JeschkeLab/deerlab)](https://anaconda.org/jeschkelab/deerlab)
[![Website](https://img.shields.io/website?down_message=offline&label=Documentation&up_message=online&url=https%3A%2F%2Fjeschkelab.github.io%2FDeerLab%2Findex.html)](https://jeschkelab.github.io/DeerLab/)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/deerlab)](https://www.python.org/downloads/)
![PyPI - Downloads](https://img.shields.io/pypi/dm/deerlab?color=brightgreen)
@@ -20,17 +21,17 @@ All additional dependencies are automatically downloaded and installed during th

### Setup

A pre-built distribution can be installed using `pip`.
A pre-built distribution can be installed from the PyPI repository using `pip` or from the Anaconda repository using `conda`.

First, ensure that `pip` is up-to-date. From a terminal (preferably with admin privileges) use the following command:
From a terminal (preferably with admin privileges) use the following command to install from PyPI:

python -m pip install --upgrade pip
python -m pip install deerlab

Next, install DeerLab with
or the following command to install from Anaconda:

python -m pip install deerlab
conda install deerlab -c JeschkeLab

More details on the installation of DeerLab can be found [here](https://jeschkelab.github.io/DeerLab/installation.html).
More details on the installation and updating of DeerLab can be found [here](https://jeschkelab.github.io/DeerLab/installation.html).

### Citation

2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
v0.13.0
v0.13.1
2 changes: 1 addition & 1 deletion conda.recipe/package_conda.bat
@@ -1,6 +1,6 @@
@echo off

set versions=3.6 3.7 3.8
set versions=3.6 3.7 3.8 3.9
set platforms=osx-64 linux-32 linux-64 win-32 win-64

:: Get the path to the Anaconda executable
14 changes: 8 additions & 6 deletions deerlab/deerload.py
@@ -130,7 +130,7 @@ def deerload(fullbasename, plot=False, full_output=False, *args,**kwargs):
index = AxisNames.index(a)
axisname = a+'TYP'
axistype = parDESC[axisname]
if Dimensions[index] > 1:
if Dimensions[index] == 1:
pass
else:
if 'IGD'== axistype:
@@ -149,11 +149,11 @@

dt_axis = dt_axis.newbyteorder(byteorder)
# Open and read companion file
with open(companionfilename,'rb') as fp:
if fp > 0:
try:
with open(companionfilename,'rb') as fp:
abscissa[:Dimensions[index],index] = np.frombuffer(fp.read(),dtype=dt_axis)
else:
warn(f'Could not read companion file {companionfilename} for nonlinear axis. Assuming linear axis.')
except:
warn(f'Could not read companion file {companionfilename} for nonlinear axis. Assuming linear axis.')
axistype='IDX'
if axistype == 'IDX':
minimum = float(parDESC[str(a+'MIN')])
@@ -280,7 +280,9 @@ def read_description_file(DSCFileName):
reKeyValue = re.compile(r"(\w+)\W+(.*)")

for line in allLines:


if 'MANIPULATION HISTORY LAYER' in line:
break
# Layer/section header (possible values: #DESC, #SPL, #DSL, #MHL)
mo1 = reSectionHeader.search(line)
if mo1:
58 changes: 21 additions & 37 deletions deerlab/fitmodel.py
@@ -17,7 +17,7 @@
def fitmodel(Vexp, t, r, dd_model='P', bg_model=bg_hom3d, ex_model=ex_4pdeer,
dd_par0=None, bg_par0=None, ex_par0=None, verbose=False,
dd_lb=None, bg_lb=None, ex_lb=None, dd_ub=None, bg_ub=None, ex_ub=None,
weights=1, uq='covariance', regparam='aic', tol=1e-10,maxiter=1e8):
weights=None, uq='covariance', regparam='aic', tol=1e-10,maxiter=1e8):
r"""
Fits a dipolar model to the experimental signal ``V`` with time axis ``t``, using
distance axis ``r``. The model is specified by the distance distribution (dd),
@@ -98,7 +98,7 @@ def fitmodel(Vexp, t, r, dd_model='P', bg_model=bg_hom3d, ex_model=ex_4pdeer,
weights : array_like, optional
Array of weighting coefficients for the individual signals in global fitting,
the default is all weighted equally. If not specified all datasets are weighted equally.
the default is all weighted equally. If not specified all datasets are weighted inversely proportional to their noise levels.
regparam : str or scalar, optional
Method for the automatic selection of the optimal regularization parameter:
@@ -358,7 +358,7 @@ def multiPathwayModel(par):
return Ks, Bs
# =========================================================================

def splituq(full_uq,Pfit,Vfit,Bfit,parfit_,Kfit,scales=1):
def splituq(param_uq,Pfit_uq,Pfit,Vfit,Bfit,parfit_,Kfit,scales=1):
# =========================================================================
"""
Uncertainty quantification
@@ -370,26 +370,11 @@ def splituq(full_uq,Pfit,Vfit,Bfit,parfit_,Kfit,scales=1):
# Pre-allocation
paruq_bg,paruq_ex,Bfit_uq,Vmod_uq,Vunmod_uq,Vfit_uq = ([],[],[],[],[],[])

# Retrieve full covariance matrix
if isinstance(full_uq,list):
paruq = full_uq[0]
else:
paruq = full_uq
covmat = paruq.covmat

Nparam = len(parfit_)
paramidx = np.arange(Nparam)

# Full parameter set uncertainty
# -------------------------------
subcovmat = covmat[np.ix_(paramidx,paramidx)]
paruq = UQResult('covariance',parfit_,subcovmat,lb,ub)

# Background parameters uncertainty
# ---------------------------------
for jj in range(nSignals):
if includeBackground[jj]:
bgsubcovmat = paruq.covmat[np.ix_(bgidx[jj],bgidx[jj])]
bgsubcovmat = param_uq.covmat[np.ix_(bgidx[jj],bgidx[jj])]
paruq_bg.append( UQResult('covariance',parfit_[bgidx[jj]],bgsubcovmat,lb[bgidx[jj]],ub[bgidx[jj]]))
else:
paruq_bg.append([None])
@@ -398,15 +383,15 @@ def splituq(full_uq,Pfit,Vfit,Bfit,parfit_,Kfit,scales=1):
# ----------------------------------
for jj in range(nSignals):
if includeExperiment[jj]:
exsubcovmat = paruq.covmat[np.ix_(exidx[jj],exidx[jj])]
exsubcovmat = param_uq.covmat[np.ix_(exidx[jj],exidx[jj])]
paruq_ex.append( UQResult('covariance',parfit_[exidx[jj]],exsubcovmat,lb[exidx[jj]],ub[exidx[jj]]))
else:
paruq_ex.append([None])

# Distribution parameters uncertainty
# ------------------------------------
if parametricDistribution:
ddsubcovmat = paruq.covmat[np.ix_(ddidx,ddidx)]
ddsubcovmat = param_uq.covmat[np.ix_(ddidx,ddidx)]
paruq_dd = UQResult('covariance',parfit_[ddidx],ddsubcovmat,lb[ddidx],ub[ddidx])
else:
paruq_dd = [None]
@@ -420,15 +405,13 @@ def splituq(full_uq,Pfit,Vfit,Bfit,parfit_,Kfit,scales=1):
Pfcn = lambda par: dd_model(r,par[ddidx])
else:
Pfcn = lambda _: np.ones_like(r)/np.trapz(np.ones_like(r),r)
Pfit_uq = paruq.propagate(Pfcn,nonneg)
else:
Pfit_uq = full_uq[1]
Pfit_uq = param_uq.propagate(Pfcn,nonneg)

# Background uncertainty
# -----------------------
for jj in range(nSignals):
if includeExperiment[jj]:
Bfit_uq.append( paruq.propagate(lambda par:scales[jj]*multiPathwayModel(par)[1][jj]) )
Bfit_uq.append( param_uq.propagate(lambda par:scales[jj]*multiPathwayModel(par)[1][jj]) )
else:
Bfit_uq.append([None])

@@ -439,7 +422,7 @@ def splituq(full_uq,Pfit,Vfit,Bfit,parfit_,Kfit,scales=1):
Lam0fcn = lambda par: ex_model[jj](par)[0]
Bfcn = lambda par: scales[jj]*multiPathwayModel(par)[1][jj]
Vunmod_fcn = lambda par: Lam0fcn(par[exidx[jj]])*Bfcn(par)
Vunmod_uq.append( paruq.propagate(lambda par:Vunmod_fcn(par)) )
Vunmod_uq.append( param_uq.propagate(lambda par:Vunmod_fcn(par)) )
else:
Vunmod_uq.append([None])

@@ -449,26 +432,28 @@ def splituq(full_uq,Pfit,Vfit,Bfit,parfit_,Kfit,scales=1):
if includeForeground and parametricDistribution:
# Full parametric signal
Vmodel = lambda par: scales[jj]*multiPathwayModel(par)[0][jj]@Pfcn(par[ddidx])
Vfit_uq.append( paruq.propagate(Vmodel))
Vfit_uq.append( param_uq.propagate(Vmodel))
elif includeForeground and np.all(~includeExperiment & ~includeBackground):
Vmodel = lambda _: Kfit[jj]@Pfit
# Dipolar evolution function
J = Kfit[jj]
Vcovmat = J@covmat@J.T
Vcovmat = J@Pfit_uq.covmat@J.T
Vfit_uq.append( UQResult('covariance',Vfit[jj],Vcovmat))
elif includeForeground:
# Parametric signal with parameter-free distribution
Vmodel = lambda par: scales[jj]*multiPathwayModel(par[paramidx])[0][jj]@Pfit
Vfit_uq.append( paruq.propagate(Vmodel) )
Vmodel = lambda par: scales[jj]*multiPathwayModel(par)[0][jj]@Pfit
Vfit_uq.append( param_uq.propagate(Vmodel) )
else:
Vfit_uq.append([None])

# Modulated contribution uncertainty
# -----------------------------
for jj in range(nSignals):
if includeForeground:
if includeForeground and np.all(~includeExperiment & ~includeBackground):
Vmod_uq.append(Vfit_uq)
elif includeForeground:
Vmod_fcn = lambda par: Vmodel(par) - Vunmod_fcn(par)
Vmod_uq.append( paruq.propagate(Vmod_fcn) )
Vmod_uq.append( param_uq.propagate(Vmod_fcn))
else:
Vmod_uq.append([None])

@@ -525,7 +510,7 @@ def regularization_analysis(Vexp):
parfit = np.asarray([None])

if uqanalysis and uq=='covariance':
Vfit_uq, Pfit_uq, Bfit_uq, Vmod_uq, Vunmod_uq, paruq_bg, paruq_ex, paruq_dd = splituq(Pfit_uq,Pfit,Vfit,Bfit,parfit,Ks,scales)
Vfit_uq, Pfit_uq, Bfit_uq, Vmod_uq, Vunmod_uq, paruq_bg, paruq_ex, paruq_dd = splituq(None,Pfit_uq,Pfit,Vfit,Bfit,parfit,Ks,scales)
return fit, Pfit, Vfit, Bfit, Vmod, Vunmod, parfit, Pfit_uq, Vfit_uq, Bfit_uq, Vmod_uq, Vunmod_uq, paruq_bg, paruq_ex, paruq_dd, scales, alphaopt
else:
return fit, Pfit, Vfit, Bfit, Vmod, Vunmod, parfit, scales, alphaopt
@@ -565,7 +550,7 @@ def nonlinear_lsq_analysis(Vexp):
Vmod, Vunmod = calculate_Vmod_Vunmod(parfit,Vfit,Bfit,scales)

if uqanalysis and uq=='covariance':
Vfit_uq, Pfit_uq, Bfit_uq, Vmod_uq, Vunmod_uq, paruq_bg, paruq_ex, paruq_dd = splituq(param_uq,Pfit,Vfit,Bfit,parfit,None, scales)
Vfit_uq, Pfit_uq, Bfit_uq, Vmod_uq, Vunmod_uq, paruq_bg, paruq_ex, paruq_dd = splituq(param_uq,None,Pfit,Vfit,Bfit,parfit,None, scales)
return fit, Pfit, Vfit, Bfit, Vmod, Vunmod, parfit, Pfit_uq, Vfit_uq, Bfit_uq, Vmod_uq, Vunmod_uq, paruq_bg, paruq_ex, paruq_dd,scales,alphaopt
else:
return fit, Pfit, Vfit, Bfit, Vmod, Vunmod, parfit, scales, alphaopt
@@ -583,7 +568,7 @@ def separable_nonlinear_lsq_analysis(Vexp):
prescales = [1 for V in Vexp]
Vexp_ = [Vexp[i]/prescales[i] for i in range(nSignals)]

def scale_constraint(nonlinpar):
def scale_constraint(nonlinpar,linpar):
# --------------------------------------------------------
penalty = np.zeros(nSignals)
for i in range(nSignals):
@@ -607,7 +592,6 @@ def scale_constraint(nonlinpar):
Pfit = fit.lin
param_uq = fit.nonlinUncert
Pfit_uq = fit.linUncert
snlls_uq = [param_uq,Pfit_uq]
alphaopt = fit.regparam
scales = fit.scale

@@ -628,7 +612,7 @@ def scale_constraint(nonlinpar):
Vmod, Vunmod = calculate_Vmod_Vunmod(parfit,Vfit,Bfit,scales)

if uqanalysis and uq=='covariance':
Vfit_uq, _, Bfit_uq, Vmod_uq, Vunmod_uq, paruq_bg, paruq_ex, paruq_dd = splituq(snlls_uq, Pfit, Vfit, Bfit, parfit, Kfit, scales)
Vfit_uq, _, Bfit_uq, Vmod_uq, Vunmod_uq, paruq_bg, paruq_ex, paruq_dd = splituq(param_uq,Pfit_uq, Pfit, Vfit, Bfit, parfit, Kfit, scales)
return fit, Pfit, Vfit, Bfit, Vmod, Vunmod, parfit, Pfit_uq, Vfit_uq, Bfit_uq, Vmod_uq, Vunmod_uq, paruq_bg, paruq_ex, paruq_dd,scales,alphaopt
else:
return fit, Pfit, Vfit, Bfit, Vmod, Vunmod, parfit, scales, alphaopt