
Non-Linear Least-Squares Minimization

and Curve-Fitting for Python


Release 1.2.0

Matthew Newville, Till Stensitzki, Renee Otten, and others

Apr 17, 2023


CONTENTS

1 Getting started with Non-Linear Least-Squares Fitting

2 Downloading and Installation
  2.1 Prerequisites
  2.2 Downloads
  2.3 Installation
  2.4 Development Version
  2.5 Testing
  2.6 Acknowledgements
  2.7 Copyright, Licensing, and Re-distribution

3 Getting Help

4 Frequently Asked Questions
  4.1 What’s the best way to ask for help or submit a bug report?
  4.2 Why did my script break when upgrading from lmfit 0.8.3 to 0.9.0?
  4.3 I get import errors from IPython
  4.4 How can I fit multi-dimensional data?
  4.5 How can I fit multiple data sets?
  4.6 How can I fit complex data?
  4.7 How should I cite LMFIT?
  4.8 I get errors from NaN in my fit. What can I do?
  4.9 Why are Parameter values sometimes stuck at initial values?
  4.10 Why are uncertainties in Parameters sometimes not determined?
  4.11 Can Parameters be used for Array Indices or Discrete Values?

5 Parameter and Parameters
  5.1 The Parameter class
  5.2 The Parameters class
  5.3 The create_params() function
  5.4 Simple Example

6 Performing Fits and Analyzing Outputs
  6.1 The minimize() function
  6.2 Writing a Fitting Function
  6.3 Choosing Different Fitting Methods
  6.4 MinimizerResult – the optimization result
  6.5 Getting and Printing Fit Reports
  6.6 Using an Iteration Callback Function
  6.7 Using the Minimizer class
  6.8 Minimizer.emcee() - calculating the posterior probability distribution of parameters

7 Modeling Data and Curve Fitting
  7.1 Motivation and simple example: Fit data to Gaussian profile
  7.2 The Model class
  7.3 The ModelResult class
  7.4 Composite Models: adding (or multiplying) Models

8 Built-in Fitting Models in the models module
  8.1 Peak-like models
  8.2 Linear and Polynomial Models
  8.3 Periodic Models
  8.4 Step-like models
  8.5 Exponential and Power law models
  8.6 Two dimensional Peak-like models
  8.7 User-defined Models
  8.8 Example 1: Fit Peak data to Gaussian, Lorentzian, and Voigt profiles
  8.9 Example 2: Fit data to a Composite Model with pre-defined models
  8.10 Example 3: Fitting Multiple Peaks – and using Prefixes
  8.11 Example 4: Using a Spline Model

9 Calculation of confidence intervals
  9.1 Method used for calculating confidence intervals
  9.2 A basic example
  9.3 Working without standard error estimates
  9.4 Calculating and visualizing maps of χ²
  9.5 An advanced example for evaluating confidence intervals
  9.6 Confidence Interval Functions

10 Bounds Implementation

11 Using Mathematical Constraints
  11.1 Overview
  11.2 Supported Operators, Functions, and Constants
  11.3 Using Inequality Constraints
  11.4 Advanced usage of Expressions in lmfit

12 Release Notes
  12.1 Version 1.2.0 Release Notes (April 05, 2023)
  12.2 Version 1.1.0 Release Notes (November 27, 2022)
  12.3 Version 1.0.3 Release Notes (October 14, 2021)
  12.4 Version 1.0.2 Release Notes (February 7, 2021)
  12.5 Version 1.0.1 Release Notes
  12.6 Version 1.0.0 Release Notes
  12.7 Version 0.9.15 Release Notes
  12.8 Version 0.9.14 Release Notes
  12.9 Version 0.9.13 Release Notes
  12.10 Version 0.9.12 Release Notes
  12.11 Version 0.9.10 Release Notes
  12.12 Version 0.9.9 Release Notes
  12.13 Version 0.9.6 Release Notes
  12.14 Version 0.9.5 Release Notes
  12.15 Version 0.9.4 Release Notes
  12.16 Version 0.9.3 Release Notes
  12.17 Version 0.9.0 Release Notes

13 Examples gallery
  13.1 Fit with Data in a pandas DataFrame
  13.2 Using an ExpressionModel
  13.3 Fit Using Inequality Constraint
  13.4 Fit Using differential_evolution Algorithm
  13.5 Fit Using Bounds
  13.6 Fit with Algebraic Constraint
  13.7 Fit Specifying Different Reduce Function
  13.8 Building a lmfit model with SymPy
  13.9 Fit Multiple Data Sets
  13.10 Fit using the Model interface
  13.11 Fit Specifying a Function to Compute the Jacobian
  13.12 Outlier detection via leave-one-out
  13.13 Emcee and the Model Interface
  13.14 Complex Resonator Model
  13.15 Model Selection using lmfit and emcee
  13.16 Calculate Confidence Intervals
  13.17 Fit Two Dimensional Peaks
  13.18 Global minimization using the brute method (a.k.a. grid search)

14 Examples from the documentation
  14.1 Examples from the documentation

Python Module Index

Index

Lmfit provides a high-level interface to non-linear optimization and curve fitting problems for Python. It builds on
and extends many of the optimization methods of scipy.optimize. Initially inspired by (and named for) extending the
Levenberg-Marquardt method from scipy.optimize.leastsq, lmfit now provides a number of useful enhancements to
optimization and data fitting problems, including:
• Using Parameter objects instead of plain floats as variables. A Parameter has a value that can be varied during
the fit or kept at a fixed value. It can have upper and/or lower bounds. A Parameter can even have a value that is
constrained by an algebraic expression of other Parameter values. As a Python object, a Parameter can also have
attributes such as a standard error, assigned after a fit that estimates uncertainties.
• Ease of changing fitting algorithms. Once a fitting model is set up, one can change the fitting algorithm used to
find the optimal solution without changing the objective function.
• Improved estimation of confidence intervals. While scipy.optimize.leastsq will automatically calculate uncer-
tainties and correlations from the covariance matrix, the accuracy of these estimates is sometimes questionable.
To help address this, lmfit has functions to explicitly explore parameter space and determine confidence levels
even for the most difficult cases. Additionally, lmfit will use the numdifftools package (if installed) to estimate
parameter uncertainties and correlations for algorithms that do not natively support this in SciPy.
• Improved curve-fitting with the Model class. This extends the capabilities of scipy.optimize.curve_fit, allowing
you to turn a function that models your data into a Python class that helps you parametrize and fit data with that
model.
• Many built-in models for common lineshapes are included and ready to use.
The lmfit package is Free software, using an Open Source license. The software and this document are works in
progress. If you are interested in participating in this effort please use the lmfit GitHub repository.

CHAPTER ONE

GETTING STARTED WITH NON-LINEAR LEAST-SQUARES FITTING

The lmfit package provides simple tools to help you build complex fitting models for non-linear least-squares problems
and apply these models to real data. This section gives an overview of the concepts and describes how to set up and
perform simple fits. Some basic knowledge of Python, NumPy, and modeling data are assumed – this is not a tutorial
on why or how to perform a minimization or fit data, but is rather aimed at explaining how to use lmfit to do these
things.
In order to do a non-linear least-squares fit of a model to data or for any other optimization problem, the main task
is to write an objective function that takes the values of the fitting variables and calculates either a scalar value to be
minimized or an array of values that are to be minimized, typically in the least-squares sense. For many data fitting
processes, the latter approach is used, and the objective function should return an array of (data-model), perhaps
scaled by some weighting factor such as the inverse of the uncertainty in the data. For such a problem, the chi-square
($\chi^2$) statistic is often defined as:

$$\chi^2 = \sum_i^N \frac{\left[y_i^{\mathrm{meas}} - y_i^{\mathrm{model}}(\mathbf{v})\right]^2}{\epsilon_i^2}$$

where $y_i^{\mathrm{meas}}$ is the set of measured data, $y_i^{\mathrm{model}}(\mathbf{v})$ is the model calculation, $\mathbf{v}$ is the set of variables in the model to be
optimized in the fit, and $\epsilon_i$ is the estimated uncertainty in the data.
In a traditional non-linear fit, one writes an objective function that takes the variable values and calculates the residual
array $y_i^{\mathrm{meas}} - y_i^{\mathrm{model}}(\mathbf{v})$, or the residual array scaled by the data uncertainties, $[y_i^{\mathrm{meas}} - y_i^{\mathrm{model}}(\mathbf{v})]/\epsilon_i$, or some other
weighting factor.
As a simple concrete example, one might want to model data with a decaying sine wave, and so write an objective
function like this:

from numpy import exp, sin


def residual(variables, x, data, uncertainty):
    """Model a decaying sine wave and subtract data."""
    amp = variables[0]
    phaseshift = variables[1]
    freq = variables[2]
    decay = variables[3]

    model = amp * sin(x*freq + phaseshift) * exp(-x*x*decay)

    return (data-model) / uncertainty

To perform the minimization with scipy.optimize, one would do this:


from numpy import linspace, random
from scipy.optimize import leastsq

# generate synthetic data with noise
x = linspace(0, 100)
noise = random.normal(size=x.size, scale=0.2)
data = 7.5 * sin(x*0.22 + 2.5) * exp(-x*x*0.01) + noise

# generate experimental uncertainties
uncertainty = abs(0.16 + random.normal(size=x.size, scale=0.05))

variables = [10.0, 0.2, 3.0, 0.007]
out = leastsq(residual, variables, args=(x, data, uncertainty))

Though it is wonderful to be able to use Python for such optimization problems, and the SciPy library is robust and
easy to use, the approach here is not terribly different from how one would do the same fit in C or Fortran. There are
several practical challenges to using this approach, including:
a) The user has to keep track of the order of the variables, and their meaning – variables[0] is the amplitude,
variables[2] is the frequency, and so on, although there is no intrinsic meaning to this order.
b) If the user wants to fix a particular variable (not vary it in the fit), the residual function has to be altered to
have fewer variables, and have the corresponding constant value passed in some other way. While reasonable
for simple cases, this quickly becomes a significant amount of work for more complex models, and greatly complicates
modeling for people not intimately familiar with the details of the fitting code.
c) There is no simple, robust way to put bounds on values for the variables, or enforce mathematical relationships
between the variables. While some optimization methods in SciPy do provide bounds, they require bounds to
be set for all variables with separate arrays that are in the same arbitrary order as variable values. Again, this is
acceptable for small or one-off cases, but becomes painful if the fitting model needs to change.
d) In some cases, constraints can be placed on Parameter values, but this is a pretty opaque and complex process.
While these shortcomings can be worked around with some work, they are all essentially due to the use of arrays or
lists to hold the variables. This closely matches the implementation of the underlying Fortran code, but does not fit
very well with Python’s rich selection of objects and data structures. The key concept in lmfit is to define and use
Parameter objects instead of plain floating point numbers as the variables for the fit. Using Parameter objects (or
the closely related Parameters – a dictionary of Parameter objects), allows one to do the following:
a) forget about the order of variables and refer to Parameters by meaningful names.
b) place bounds on Parameters as attributes, without worrying about preserving the order of arrays for variables
and boundaries, and without relying on the solver to support bounds itself.
c) fix Parameters, without having to rewrite the objective function.
d) place algebraic constraints on Parameters.
To illustrate the value of this approach, we can rewrite the above example for the decaying sine wave as:

from numpy import exp, sin

from lmfit import minimize, Parameters


def residual(params, x, data, uncertainty):
    amp = params['amp']
    phaseshift = params['phase']
    freq = params['frequency']
    decay = params['decay']

    model = amp * sin(x*freq + phaseshift) * exp(-x*x*decay)

    return (data-model) / uncertainty

params = Parameters()
params.add('amp', value=10)
params.add('decay', value=0.007)
params.add('phase', value=0.2)
params.add('frequency', value=3.0)

out = minimize(residual, params, args=(x, data, uncertainty))

At first look, we simply replaced a list of values with a dictionary, so that we can access Parameters by name. Just by
itself, this is better as it allows separation of the objective function from the code using it.
Note that creation of Parameters here could also be done as:
New in version 1.2.0.

from lmfit import create_params

params = create_params(amp=10, decay=0.007, phase=0.2, frequency=3.0)

where keyword/value pairs set Parameter names and their initial values.
Whether using create_params() or Parameters, the resulting params object is an instance of Parameters,
which acts like a dictionary, with keys being the Parameter name and values being individual Parameter objects.
These Parameter objects hold the value and several other attributes that control how a Parameter acts. For example,
Parameters can be fixed or bounded; setting attributes to control this behavior can be done during definition, as with:

params = Parameters()
params.add('amp', value=10, vary=False)
params.add('decay', value=0.007, min=0.0)
params.add('phase', value=0.2)
params.add('frequency', value=3.0, max=10)

Here vary=False will prevent the value from changing in the fit, and min=0.0 will set a lower bound on that param-
eter’s value. The same thing can be accomplished by providing a dictionary of attribute values to create_params():
New in version 1.2.0.

params = create_params(amp={'value': 10, 'vary': False},
                       decay={'value': 0.007, 'min': 0},
                       phase=0.2,
                       frequency={'value': 3.0, 'max': 10})

Parameter attributes can also be modified after they have been created:

params['amp'].vary = False
params['decay'].min = 0.10


Importantly, our objective function remains unchanged. This means the objective function can simply express the
parametrized phenomenon to be calculated, accessing Parameter values by name and separating the choice of param-
eters to be varied in the fit.
The params object can be copied and modified to make many user-level changes to the model and fitting process. Of
course, most of the information about how your data is modeled goes into the objective function, but the approach here
allows some external control; that is, control by the user performing the fit, instead of by the author of the objective
function.
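
As a brief sketch of that kind of user-level change (reusing the params, residual, x, data, and uncertainty objects defined above), one might copy the Parameters, fix one value, and re-run the fit without touching the objective function:

fixed = params.copy()
fixed['frequency'].vary = False
out_fixed = minimize(residual, fixed, args=(x, data, uncertainty))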
Finally, in addition to the Parameters approach to fitting data, lmfit allows switching optimization methods without
changing the objective function, provides tools for generating fitting reports, and provides a better determination of
Parameters confidence levels.

CHAPTER TWO

DOWNLOADING AND INSTALLATION

2.1 Prerequisites

Lmfit works with Python versions 3.7 and higher. Version 0.9.15 is the final version to support Python 2.7.
Lmfit requires the following Python packages, with versions given:
• NumPy version 1.19 or higher.
• SciPy version 1.6 or higher.
• asteval version 0.9.28 or higher.
• uncertainties version 3.1.4 or higher.
All of these are readily available on PyPI, and are installed automatically if installing with pip install lmfit.
In order to run the test suite, the pytest, pytest-cov, and flaky packages are required. Some functionality requires
the emcee (version 3+), corner, pandas, Jupyter, matplotlib, dill, or numdifftools packages. These are not installed
automatically, but we highly recommend each of them.
For building the documentation and generating the examples gallery, matplotlib, emcee (version 3+), corner, Sphinx,
sphinx-gallery, jupyter_sphinx, ipykernel, Pillow, and SymPy are required. For generating the PDF documentation, the
Python packages sphinxcontrib-svg2pdfconverter and cairosvg are also required, as well as the LaTeX tool Latexmk
(which is included by default in some LaTeX distributions).
Please refer to setup.cfg under options.extras_require for a list of all dependencies that are needed if you want
to participate in the development of lmfit. You can install all these dependencies automatically by doing pip install
lmfit[all], or select only a subset (e.g., dev, doc, or test).
Please note: the “original” python setup.py install is deprecated, but we will provide a shim setup.py file for
as long as Python and/or setuptools allow the use of this legacy command.

2.2 Downloads

The latest stable version of lmfit is 1.2.0 and is available from PyPI. Check the Release Notes for a list of changes
compared to earlier releases.


2.3 Installation

The easiest way to install lmfit is with:

pip install lmfit

For Anaconda Python, lmfit is not an official package, but several Anaconda channels provide it, allowing installation
with (for example):

conda install -c conda-forge lmfit

2.4 Development Version

To get the latest development version from the lmfit GitHub repository, use:

git clone https://github.com/lmfit/lmfit-py.git

and install using:

pip install --upgrade build pip setuptools wheel

to install the required build dependencies and then do:

python -m build
pip install ".[all]'

to generate the wheel and install lmfit with all its dependencies.
We welcome all contributions to lmfit! If you cloned the repository for this purpose, please read CONTRIBUTING.md
for more detailed instructions.

2.5 Testing

A battery of test scripts that can be run with the pytest testing framework is distributed with lmfit in the tests folder.
These are automatically run as part of the development process. For any release or any master branch from the git
repository, running pytest should run all of these tests to completion without errors or failures.
Many of the examples in this documentation are distributed with lmfit in the examples folder, and should also run for
you. Some of these examples assume that matplotlib has been installed and is working correctly.

2.6 Acknowledgements

Many people have contributed to lmfit. The attribution of credit in a
project such as this is difficult to get perfect, and there are no doubt
important contributions that are missing or under-represented here. Please
consider this file as part of the code and documentation that may have bugs
that need fixing.

Some of the largest and most important contributions (in approximate order
of size of the contribution to the existing code) are from:

Matthew Newville wrote the original version and maintains the project.

Renee Otten wrote the brute force method, implemented the basin-hopping
and AMPGO global solvers, implemented uncertainty calculations for scalar
minimizers and has greatly improved the code, testing, and documentation
and overall project.

Till Stensitzki wrote the improved estimates of confidence intervals, and
contributed many tests, bug fixes, and documentation.

A. R. J. Nelson added differential_evolution, emcee, and greatly improved
the code, docstrings, and overall project.

Antonino Ingargiola wrote much of the high level Model code and has
provided many bug fixes and improvements.

Daniel B. Allan wrote much of the original version of the high level Model
code, and many improvements to the testing and documentation.

Austen Fox fixed many of the built-in model functions and improved the
testing and documentation of these.

Michal Rawlik added plotting capabilities for Models.

The method used for placing bounds on parameters was derived from the
clear description in the MINUIT documentation, and adapted from
J. J. Helmus's Python implementation in leastsqbounds.py.

E. O. Le Bigot wrote the uncertainties package, a version of which was
used by lmfit for many years, and is now an external dependency.

The original AMPGO code came from Andrea Gavana and was adopted for
lmfit.

The propagation of parameter uncertainties to uncertainties in a Model
was adapted from the excellent description at
https://www.astro.rug.nl/software/kapteyn/kmpfittutorial.html#confidence-and-prediction-intervals,
which references the original work of: J. Wolberg, Data Analysis Using the
Method of Least Squares, 2006, Springer.

Additional patches, bug fixes, and suggestions have come from Faustin
Carter, Christoph Deil, Francois Boulogne, Thomas Caswell, Colin Brosseau,
nmearl, Gustavo Pasquevich, Clemens Prescher, LiCode, Ben Gamari, Yoav
Roam, Alexander Stark, Alexandre Beelen, Andrey Aristov, Nicholas Zobrist,
Ethan Welty, Julius Zimmermann, Mark Dean, Arun Persaud, Ray Osborn, @lneuhaus,
Marcel Stimberg, Yoshiera Huang, Leon Foks, Sebastian Weigand, Florian LB,
Michael Hudson-Doyle, Ruben Verweij, @jedzill4, @spalato, Jens Hedegaard Nielsen,
Martin Majli, Kristian Meyer, @azelcer, Ivan Usov, and many others.

The lmfit code obviously depends on, and owes a very large debt to the code
in scipy.optimize. Several discussions on the SciPy-user and lmfit mailing
lists have also led to improvements in this code.

2.7 Copyright, Licensing, and Re-distribution

The LMFIT-py code is distributed under the following license:

BSD-3

Copyright 2022 Matthew Newville, The University of Chicago
               Renee Otten, Brandeis University
               Till Stensitzki, Freie Universitat Berlin
               A. R. J. Nelson, Australian Nuclear Science and Technology Organisation
               Antonino Ingargiola, University of California, Los Angeles
               Daniel B. Allen, Johns Hopkins University
               Michal Rawlik, Eidgenossische Technische Hochschule, Zurich

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice,
   this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in the
   documentation and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its
   contributors may be used to endorse or promote products derived from this
   software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.

Some code has been taken from the scipy library whose licence is below.

Copyright (c) 2001, 2002 Enthought, Inc.
All rights reserved.

Copyright (c) 2003-2019 SciPy Developers.
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

a. Redistributions of source code must retain the above copyright notice,
   this list of conditions and the following disclaimer.
b. Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in the
   documentation and/or other materials provided with the distribution.
c. Neither the name of Enthought nor the names of the SciPy Developers
   may be used to endorse or promote products derived from this software
   without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS
BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,
OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
THE POSSIBILITY OF SUCH DAMAGE.

Some code has been taken from the AMPGO library of Andrea Gavana, which was
released under an MIT license.

CHAPTER THREE

GETTING HELP

If you have questions, comments, or suggestions for LMFIT, please use the mailing list. This provides an on-line
conversation that is both well archived and easily searched with standard web searches. If you find a bug in the
code or documentation, use GitHub Issues to submit a report. If you have an idea for how to solve the problem and are
familiar with Python and GitHub, submitting a GitHub Pull Request would be greatly appreciated.
If you are unsure whether to use the mailing list or the Issue tracker, please start a conversation on the mailing list.
That is, the problem you’re having may or may not be due to a bug. If it is due to a bug, creating an Issue from the
conversation is easy. If it is not a bug, the problem will be discussed and then the Issue will be closed. While one can
search through closed Issues on GitHub, these are not so easily searched, and the conversation is not easily useful to
others later. Starting the conversation on the mailing list with “How do I do this?” or “Why didn’t this work?” instead
of “This should work and doesn’t” is generally preferred, and will better help others with similar questions. Of course,
there is not always an obvious way to decide if something is a Question or an Issue, and we will try our best to engage
in all discussions.

CHAPTER FOUR

FREQUENTLY ASKED QUESTIONS

A list of common questions.

4.1 What’s the best way to ask for help or submit a bug report?

See Getting Help.

4.2 Why did my script break when upgrading from lmfit 0.8.3 to 0.9.0?

See Version 0.9.0 Release Notes.

4.3 I get import errors from IPython

If you see something like:

from IPython.html.widgets import Dropdown

ImportError: No module named 'widgets'

then you need to install the ipywidgets package, try: pip install ipywidgets.

4.4 How can I fit multi-dimensional data?

The fitting routines accept data arrays that are one-dimensional and double precision. So you need to convert the data
and model (or the value returned by the objective function) to be one-dimensional. A simple way to do this is to use
numpy.ndarray.flatten, for example:

def residual(params, x, data=None):
    ....
    resid = calculate_multidim_residual()
    return resid.flatten()
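
As a slightly more concrete sketch (the 2-D Gaussian model here is purely illustrative, not part of lmfit), the objective function evaluates the model on the full grid and flattens the residual before returning it:

import numpy as np

def residual(params, x, y, data):
    """Residual for a 2-D Gaussian on a grid, flattened to 1-D."""
    amp = params['amp']
    xc, yc = params['xc'], params['yc']
    sigma = params['sigma']
    model = amp * np.exp(-((x - xc)**2 + (y - yc)**2) / (2 * sigma**2))
    return (data - model).flatten()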


4.5 How can I fit multiple data sets?

As above, the fitting routines accept data arrays that are one-dimensional and double precision. So you need to convert
the sets of data and models (or the value returned by the objective function) to be one-dimensional. A simple way to
do this is to use numpy.concatenate. As an example, here is a residual function to simultaneously fit two lines to two
different arrays. As a bonus, the two lines share the ‘offset’ parameter:

import numpy as np


def fit_function(params, x=None, dat1=None, dat2=None):
    model1 = params['offset'] + x * params['slope1']
    model2 = params['offset'] + x * params['slope2']

    resid1 = dat1 - model1
    resid2 = dat2 - model2
    return np.concatenate((resid1, resid2))

4.6 How can I fit complex data?

As with working with multi-dimensional data, you need to convert your data and model (or the value returned by the
objective function) to be double precision, floating point numbers. The simplest approach is to use numpy.ndarray.view,
perhaps like:

import numpy as np


def residual(params, x, data=None):
    ....
    resid = calculate_complex_residual()
    return resid.view(float)

Alternately, you can use the lmfit.Model class to wrap a fit function that returns a complex vector. It will automatically
apply the above prescription when calculating the residual. The benefit to this method is that you also get access to the
plot routines from the ModelResult class, which are also complex-aware.
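
A minimal sketch of that approach (the complex Lorentzian model function here is illustrative, not one of the built-in models):

import lmfit

def complex_lorentzian(f, amp, f0, gamma):
    # returns a complex-valued array; Model applies the view(float)
    # prescription above automatically when computing the residual
    return amp / (f0 - f - 1j*gamma)

model = lmfit.Model(complex_lorentzian)
# result = model.fit(complex_data, f=f, amp=1, f0=100, gamma=1)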


4.7 How should I cite LMFIT?

See https://dx.doi.org/10.5281/zenodo.11813

4.8 I get errors from NaN in my fit. What can I do?

The solvers used by lmfit use NaN (see https://en.wikipedia.org/wiki/NaN) values as signals that the calculation cannot
continue. If any value in the residual array (typically (data-model)*weight) is NaN, then calculations of chi-square
or comparisons with other residual arrays to try to find a better fit will also give NaN and fail. There is no sensible way
for lmfit or any of the optimization routines to know how to handle such NaN values. They indicate that numerical
calculations are not sensible and must stop.
This means that if your objective function (if using minimize) or model function (if using Model) generates a NaN,
the fit will stop immediately. If your objective or model function generates a NaN, you really must handle that.

4.8.1 nan_policy

If you are using lmfit.Model and the NaN values come from your data array and are meant to indicate missing values,
or if you are using lmfit.minimize() with the same basic intention, then it might be possible to get a successful fit in
spite of the NaN values. To do this, you can add a nan_policy='omit' argument to lmfit.minimize(), or when
creating a lmfit.Model, or when running lmfit.Model.fit().
In order for this to be effective, the number of NaN values cannot ever change during the fit. If the NaN values come
from the data and not the calculated model, that should be the case.
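
For example (assuming a Model named model and data, params, and x as elsewhere in this document):

result = model.fit(data, params, x=x, nan_policy='omit')

or, when using the minimize() function:

out = minimize(residual, params, args=(x, data), nan_policy='omit')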

4.8.2 Common sources of NaN

If you are seeing errors due to NaN values, you will need to figure out where they are coming from and eliminate them.
It is sometimes difficult to tell what causes NaN values. Keep in mind that all values should be assumed to be either
scalar values or numpy arrays of double precision real numbers when fitting. Some of the most likely causes of NaNs
are:
• taking sqrt(x) or log(x) where x is negative.
• doing x**y where x is negative. Since y is real, there will be a fractional component, and a negative number to
a fractional exponent is not a real number.
• doing x/y where both x and y are 0.
If you use these very common constructs in your objective or model function, you should take some caution for what
values you are passing these functions and operators. Many special functions have similar limitations and should also
be viewed with some suspicion if NaNs are being generated.
A related problem is the generation of Inf (Infinity in floating point), which generally comes from exp(x) where x has
values greater than 700 or so, so that the resulting value is greater than 1.e308. Inf is only slightly better than NaN.
It will completely ruin the ability to do the fit. However, unlike NaN, it is also usually clear how to handle Inf, as
you probably won’t ever have values greater than 1.e308 and can therefore (usually) safely clip the argument passed to
exp() to be smaller than about 700.
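
A simple guard along those lines (a sketch, not lmfit functionality) is to clip the argument before exponentiating:

import numpy as np

def safe_exp(x):
    # clip the argument so exp() cannot overflow to Inf
    return np.exp(np.clip(x, -700, 700))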


4.9 Why are Parameter values sometimes stuck at initial values?

In order for a Parameter to be optimized in a fit, changing its value must have an impact on the fit residual (data-model
when curve fitting, for example). If a fit has not changed one or more of the Parameters, it means that changing those
Parameters did not change the fit residual.
Normally (that is, unless you specifically provide a function for calculating the derivatives, in which case you probably
would not be asking this question ;)), the fitting process begins by making a very small change to each Parameter value
to determine which way and how large of a change to make for the parameter: This is the derivative or Jacobian (change
in residual per change in parameter value). By default, the change made for each variable Parameter is to multiply its
value by (1.0+1.0e-8) or so (unless the value is below about 1.e-15, in which case it adds 1.0e-8). If that small change
does not change the residual, then the value of the Parameter will not be updated.
Parameter values that are “way off” are a common reason for Parameters being stuck at initial values. As an example,
imagine fitting peak-like data with an x range of 0 to 10, a peak centered at 6, and a width of 1 or 2 or so, as in the
example at doc_model_gaussian.py. A Gaussian function with an initial value for the peak center of 5 and an initial
width of 5 will almost certainly find a good fit. An initial value of the peak center of -50 will end up being stuck with a
“bad fit” because a small change in Parameters will still lead the modeled Gaussian to have no intensity over the actual
range of the data. You should make sure that initial values for Parameters are reasonable enough to actually affect the
fit. As it turns out in the example linked to above, changing the center value to any value between about 0 and 10 (that
is, the data range) will result in a good fit.
Another common cause for Parameters being stuck at initial values is when the initial value is at a boundary value. For
this case, too, a small change in the initial value for the Parameter will still leave the value at the boundary value and
not show any real change in the residual.
If you’re using bounds, make sure the initial values for the Parameters are not at the boundary values.
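
A quick check of that condition (a sketch using only the documented Parameter attributes) might look like:

for name, par in params.items():
    if par.vary and (par.value <= par.min or par.value >= par.max):
        print(f"warning: {name} starts at or beyond a boundary value")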
Finally, one reason for a Parameter to not change is that it is actually being used as a discrete value. This is discussed
below in Can Parameters be used for Array Indices or Discrete Values?.

4.10 Why are uncertainties in Parameters sometimes not determined?

In order for Parameter uncertainties to be estimated, each variable Parameter must actually change the fit, and cannot
be stuck at an initial value or at a boundary value. See Why are Parameter values sometimes stuck at initial values? for
why values may not change from their initial values.

4.11 Can Parameters be used for Array Indices or Discrete Values?

The short answer is “No”: variables in all of the fitting methods used in lmfit (and all of those available in
scipy.optimize) are treated as continuous values, and represented as double precision floating point values. As an important
example, you cannot have a variable that is somehow constrained to be an integer.
Still, it is a rather common question of how to fit data to a model that includes a breakpoint, perhaps

$$f(x; x_0, a, b, c) = \begin{cases} c & \text{for } x < x_0 \\ a + b x^2 & \text{for } x > x_0 \end{cases}$$

You might implement that with a model function and use it to fit data like this:


import numpy as np

import lmfit


def quad_off(x, x0, a, b, c):
    model = a + b * x**2
    model[np.where(x < x0)] = c
    return model


x0 = 19
b = 0.02
a = 2.0
xdat = np.linspace(0, 100, 101)
ydat = a + b * xdat**2
ydat[np.where(xdat < x0)] = a + b * x0**2
ydat += np.random.normal(scale=0.1, size=xdat.size)

mod = lmfit.Model(quad_off)
pars = mod.make_params(x0=22, a=1, b=1, c=1)

result = mod.fit(ydat, pars, x=xdat)
print(result.fit_report())

[[Model]]
Model(quad_off)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 14
# data points = 101
# variables = 4
chi-square = 3.19745850
reduced chi-square = 0.03296349
Akaike info crit = -340.729188
Bayesian info crit = -330.268706
R-squared = 0.99999099
[[Variables]]
x0: 22.0000000 +/- 1.7288e-05 (0.00%) (init = 22)
a: 1.99533131 +/- 0.03754894 (1.88%) (init = 1)
b: 0.02000201 +/- 7.3258e-06 (0.04%) (init = 1)
c: 9.32421159 +/- 0.03870840 (0.42%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(a, b) = -0.8368
C(x0, a) = +0.1871
C(x0, b) = -0.1508

This will not result in a very good fit, as the value for x0 cannot be found by making a small change in its value.
Specifically, model[np.where(x < x0)] will give the same result for x0=22 and x0=22.001, and so that value is
not changed during the fit.
There are a couple ways around this problem. First, you may be able to make the fit depend on x0 in a way that is not
just discrete. That depends on your model function. A second option is to treat the break not as a hard break but as a

more gentle transition with a sigmoidal function, such as an error function. Like the break-point, these will go from 0
to 1, but more gently and with some finite value leaking into neighboring points. The amount of leakage or width of
the step can also be adjusted.
A simple modification of the above to use an error function would look like this and give better fit results:

import numpy as np
from scipy.special import erf

import lmfit


def quad_off(x, x0, a, b, c):
    m1 = a + b * x**2
    m2 = c * np.ones(len(x))
    # step up from 0 to 1 at x0:   (erf(x-x0)+1)/2
    # step down from 1 to 0 at x0: (1-erf(x-x0))/2
    model = m1 * (erf(x-x0)+1)/2 + m2 * (1-erf(x-x0))/2
    return model


x0 = 19
b = 0.02
a = 2.0
xdat = np.linspace(0, 100, 101)
ydat = a + b * xdat**2
ydat[np.where(xdat < x0)] = a + b * x0**2
ydat += np.random.normal(scale=0.1, size=xdat.size)

mod = lmfit.Model(quad_off)
pars = mod.make_params(x0=22, a=1, b=1, c=1)

result = mod.fit(ydat, pars, x=xdat)
print(result.fit_report())

[[Model]]
Model(quad_off)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 56
# data points = 101
# variables = 4
chi-square = 1.01206740
reduced chi-square = 0.01043368
Akaike info crit = -456.915660
Bayesian info crit = -446.455178
R-squared = 0.99999715
[[Variables]]
x0: 19.4038028 +/- 0.34484695 (1.78%) (init = 22)
a: 1.96097115 +/- 0.02047153 (1.04%) (init = 1)
b: 0.02000347 +/- 4.0388e-06 (0.02%) (init = 1)
c: 9.24175811 +/- 0.02313785 (0.25%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(a, b) = -0.8294
C(x0, a) = +0.1760
C(x0, b) = -0.1412

The natural width of the error function is about 2 x units, but you can adjust this, shortening it with erf((x-x0)*2)
to give a sharper transition for example.
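
One variation along those lines (the quad_off_width function here is hypothetical, not from the documentation examples) makes the transition width itself a fit parameter:

import numpy as np
from scipy.special import erf

import lmfit

def quad_off_width(x, x0, a, b, c, width):
    # smaller 'width' gives a sharper transition at x0
    step = (erf((x - x0)/width) + 1) / 2
    return (a + b * x**2) * step + c * (1 - step)

mod = lmfit.Model(quad_off_width)
pars = mod.make_params(x0=22, a=1, b=1, c=1, width=1)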

CHAPTER FIVE

PARAMETER AND PARAMETERS

This chapter describes the Parameter object, which is a key concept of lmfit.
A Parameter is the quantity to be optimized in all minimization problems, replacing the plain floating point number
used in the optimization routines from scipy.optimize. A Parameter has a value that can either be varied in the
fit or held at a fixed value, and can have lower and/or upper bounds placed on the value. It can even have a value that
is constrained by an algebraic expression of other Parameter values. Since Parameter objects live outside the core
optimization routines, they can be used in all optimization routines from scipy.optimize. By using Parameter
objects instead of plain variables, the objective function does not have to be modified to reflect every change of what
is varied in the fit, or whether bounds can be applied. This simplifies the writing of models, allowing general models
that describe the phenomenon and gives the user more flexibility in using and testing variations of that model.
Whereas a Parameter expands on an individual floating point variable, the optimization methods actually still need
an ordered group of floating point variables. In the scipy.optimize routines this is required to be a one-dimensional
numpy.ndarray. In lmfit, this one-dimensional array is replaced by a Parameters object, which works as an ordered
dictionary of Parameter objects with a few additional features and methods. That is, while the concept of a Parameter
is central to lmfit, one normally creates and interacts with a Parameters instance that contains many Parameter ob-
jects. For example, the objective functions you write for lmfit will take an instance of Parameters as its first argument.
A table of parameter values, bounds, and other attributes can be printed using Parameters.pretty_print().
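
For example, a short sketch of building a Parameters instance and printing its table:

from lmfit import Parameters

params = Parameters()
params.add('amp', value=10, min=0)
params.add('decay', value=0.05, vary=False)
params.pretty_print()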

5.1 The Parameter class

class Parameter(name, value=None, vary=True, min=-inf, max=inf, expr=None,
                brute_step=None, user_data=None)
A Parameter is an object that can be varied in a fit.
It is a central component of lmfit, and all minimization and modeling methods use Parameter objects.
A Parameter has a name attribute, and a scalar floating point value. It also has a vary attribute that describes
whether the value should be varied during the minimization. Finite bounds can be placed on the Parameter’s
value by setting its min and/or max attributes. A Parameter can also have its value determined by a mathematical
expression of other Parameter values held in the expr attribute. Additional attributes include brute_step used as
the step size in a brute-force minimization, and user_data, reserved exclusively for the user's needs.
After a minimization, a Parameter may also gain other attributes, including stderr holding the estimated standard
error in the Parameter’s value, and correl, a dictionary of correlation values with other Parameters used in the
minimization.
Parameters
• name (str) – Name of the Parameter.
• value (float, optional) – Numerical Parameter value.


• vary (bool, optional) – Whether the Parameter is varied during a fit (default is True).
• min (float, optional) – Lower bound for value (default is -numpy.inf, no lower
bound).
• max (float, optional) – Upper bound for value (default is numpy.inf, no upper bound).
• expr (str, optional) – Mathematical expression used to constrain the value during the
fit (default is None).
• brute_step (float, optional) – Step size for grid points in the brute method (default
is None).
• user_data (optional) – User-definable extra attribute used for a Parameter (default is
None).
stderr
The estimated standard error for the best-fit value.
Type
float
correl
A dictionary of the correlation with the other fitted Parameters of the form:

{'decay': 0.404, 'phase': -0.020, 'frequency': 0.102}

Type
dict

See Bounds Implementation for details on the math used to implement the bounds with min and max.
The expr attribute can contain a mathematical expression that will be used to compute the value for the Parameter
at each step in the fit. See Using Mathematical Constraints for more details and examples of this feature.
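
For instance, a brief sketch of a constrained Parameter (assuming a params instance as above):

params.add('sigma', value=1.0, min=0)
# 'fwhm' always follows 2*sqrt(2*ln(2))*sigma and is never varied independently
params.add('fwhm', expr='2.3548200 * sigma')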
set(value=None, vary=None, min=None, max=None, expr=None, brute_step=None, is_init_value=True)
Set or update Parameter attributes.
Parameters
• value (float, optional) – Numerical Parameter value.
• vary (bool, optional) – Whether the Parameter is varied during a fit.
• min (float, optional) – Lower bound for value. To remove a lower bound you must
use -numpy.inf.
• max (float, optional) – Upper bound for value. To remove an upper bound you must
use numpy.inf.
• expr (str, optional) – Mathematical expression used to constrain the value during the
fit. To remove a constraint you must supply an empty string.
• brute_step (float, optional) – Step size for grid points in the brute method. To
remove the step size you must use 0.
• is_init_value (bool, optional) – Whether to set value as init_value, when setting
value.


Notes

Each argument to set() has a default value of None, which will leave the current value for the attribute
unchanged. Thus, to lift a lower or upper bound, passing in None will not work. Instead, you must set these
to -numpy.inf or numpy.inf, as with:

par.set(min=None)        # leaves lower bound unchanged
par.set(min=-numpy.inf)  # removes lower bound

Similarly, to clear an expression, pass a blank string (not None!), as with:

par.set(expr=None)  # leaves expression unchanged
par.set(expr='')    # removes expression

Explicitly setting a value or setting vary=True will also clear the expression.
Finally, to clear the brute_step size, pass 0, not None:

par.set(brute_step=None)  # leaves brute_step unchanged
par.set(brute_step=0)     # removes brute_step

5.2 The Parameters class

class Parameters(usersyms=None)
A dictionary of Parameter objects.
It should contain all Parameter objects that are required to specify a fit model. All minimization and Model fitting
routines in lmfit will use exactly one Parameters object, typically given as the first argument to the objective
function.
All keys of a Parameters() instance must be strings and valid Python symbol names, so that the name must match
[a-z_][a-z0-9_]* and cannot be a Python reserved word.
All values of a Parameters() instance must be Parameter objects.
A Parameters() instance includes an asteval Interpreter used for evaluation of constrained Parameters.
Parameters() support copying and pickling, and have methods to convert to and from serializations using json
strings.
Parameters
usersyms (dict, optional) – Dictionary of symbols to add to the asteval.Interpreter
(default is None).
add(name, value=None, vary=True, min=-inf, max=inf, expr=None, brute_step=None)
Add a Parameter.
Parameters
• name (str or Parameter) – If name refers to a Parameter object it will be added directly
to the Parameters instance, otherwise a new Parameter object with name string is created
before adding it. In both cases, name must match [a-z_][a-z0-9_]* and cannot be a
Python reserved word.
• value (float, optional) – Numerical Parameter value, typically the initial value.
• vary (bool, optional) – Whether the Parameter is varied during a fit (default is True).


• min (float, optional) – Lower bound for value (default is -numpy.inf, no lower
bound).
• max (float, optional) – Upper bound for value (default is numpy.inf, no upper
bound).
• expr (str, optional) – Mathematical expression used to constrain the value during the
fit (default is None).
• brute_step (float, optional) – Step size for grid points in the brute method (default
is None).

Examples

>>> params = Parameters()
>>> params.add('xvar', value=0.50, min=0, max=1)
>>> params.add('yvar', expr='1.0 - xvar')

which is equivalent to:

>>> params = Parameters()
>>> params['xvar'] = Parameter(name='xvar', value=0.50, min=0, max=1)
>>> params['yvar'] = Parameter(name='yvar', expr='1.0 - xvar')

add_many(*parlist)
Add many parameters, using a sequence of tuples.
Parameters
*parlist (sequence of tuple or Parameter) – A sequence of tuples, or a sequence of
Parameter instances. If it is a sequence of tuples, then each tuple must contain at least
a name. The order in each tuple must be (name, value, vary, min, max, expr,
brute_step).

Examples

>>> params = Parameters()
# add with tuples: (NAME VALUE VARY MIN MAX EXPR BRUTE_STEP)
>>> params.add_many(('amp', 10, True, None, None, None, None),
...                 ('cen', 4, True, 0.0, None, None, None),
...                 ('wid', 1, False, None, None, None, None),
...                 ('frac', 0.5))
# add a sequence of Parameters
>>> f = Parameter('par_f', 100)
>>> g = Parameter('par_g', 2.)
>>> params.add_many(f, g)

pretty_print(oneline=False, colwidth=8, precision=4, fmt='g',
             columns=['value', 'min', 'max', 'stderr', 'vary', 'expr', 'brute_step'])

Pretty-print of parameters data.
Parameters
• oneline (bool, optional) – If True prints a one-line parameters representation (default
is False).


• colwidth (int, optional) – Column width for all columns specified in columns (de-
fault is 8).
• precision (int, optional) – Number of digits to be printed after floating point (default
is 4).
• fmt ({'g', 'e', 'f'}, optional) – Single-character numeric formatter. Valid values are:
‘g’ floating point and exponential (default), ‘e’ exponential, or ‘f’ floating point.
• columns (list of str, optional) – List of Parameter attribute names to print (default is
to show all attributes).
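For instance, a brief sketch showing only a few columns (the parameter names here are illustrative):

params = Parameters()
params.add('amp', value=10, min=0)
params.add('decay', value=0.05, vary=False)
params.pretty_print(columns=['value', 'min', 'max', 'vary'])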
valuesdict()
Return an ordered dictionary of parameter values.
Returns
A dictionary of name:value pairs for each Parameter.
Return type
dict
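A minimal sketch (parameter names are illustrative):

>>> params = Parameters()
>>> params.add('amp', value=10)
>>> params.add('decay', value=0.05)
>>> params.valuesdict()
{'amp': 10, 'decay': 0.05}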
dumps(**kws)
Represent Parameters as a JSON string.
Parameters
**kws (optional) – Keyword arguments that are passed to json.dumps.
Returns
JSON string representation of Parameters.
Return type
str
See also:
dump, loads, load, json.dumps
dump(fp, **kws)
Write JSON representation of Parameters to a file-like object.
Parameters
• fp (file-like object) – An open and .write()-supporting file-like object.
• **kws (optional) – Keyword arguments that are passed to dumps.
Returns
Return value from fp.write(): the number of characters written.
Return type
int
See also:
dumps, load, json.dump
eval(expr)
Evaluate a statement using the asteval Interpreter.
Parameters
expr (str) – An expression containing parameter names and other symbols recognizable by
the asteval Interpreter.
Returns
The result of evaluating the expression.


Return type
float
loads(s, **kws)
Load Parameters from a JSON string.
Parameters
**kws (optional) – Keyword arguments that are passed to json.loads.
Returns
Updated Parameters from the JSON string.
Return type
Parameters

Notes

Current Parameters will be cleared before loading the data from the JSON string.
See also:
dump, dumps, load, json.loads
load(fp, **kws)
Load JSON representation of Parameters from a file-like object.
Parameters
• fp (file-like object) – An open and .read()-supporting file-like object.
• **kws (optional) – Keyword arguments that are passed to loads.
Returns
Updated Parameters loaded from fp.
Return type
Parameters
See also:
dump, loads, json.load

Warning: Saving Parameters with user-added functions to the _asteval interpreter using dump() and
dumps() may not be easily recovered with load() and loads(). See Saving and Loading Models for further
discussion.
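A minimal round-trip sketch (the filename is illustrative):

from lmfit import Parameters

params = Parameters()
params.add('amp', value=10, min=0)

json_string = params.dumps()             # serialize to a JSON string
restored = Parameters().loads(json_string)

with open('params.json', 'w') as fp:     # ... or write to and read from a file
    params.dump(fp)
with open('params.json') as fp:
    restored2 = Parameters().load(fp)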

5.3 The create_params() function

The create_params() function is probably the easiest way to make a Parameters object: Parameter names are given
as keyword arguments, with values that are either the numerical initial value for the Parameter or a dictionary with
keyword/value pairs for value as well as other Parameter attributes such as min, max, and expr.
create_params(**kws)
Create lmfit.Parameters instance and set initial values and attributes.
Parameters
**kws – Keywords are parameter names; values are either numerical initial values or dictionaries of Parameter values and attributes.


Return type
Parameters instance

Notes

1. keyword arguments will be used to create parameter names.
2. values can either be numbers (floats or integers) to set the parameter value, or can be dictionaries with any
of the following keywords: value, vary, min, max, expr, brute_step, or is_init_value to set those
parameter attributes.
3. for each parameter, is_init_value controls whether to set init_value when setting value, and defaults
to True.

Examples

>>> params = create_params(amplitude=2, center=200,
...                        sigma={'value': 3, 'min': 0},
...                        fwhm={'expr': '2.0*sigma'})

5.4 Simple Example

A basic example making use of Parameters and the minimize() function (discussed in the next chapter) might look
like this:

# <examples/doc_parameters_basic.py>
import numpy as np

from lmfit import Minimizer, Parameters, create_params, report_fit

# create data to be fitted
x = np.linspace(0, 15, 301)
np.random.seed(2021)
data = (5.0 * np.sin(2.0*x - 0.1) * np.exp(-x*x*0.025) +
        np.random.normal(size=x.size, scale=0.2))


# define objective function: returns the array to be minimized
def fcn2min(params, x, data):
    """Model a decaying sine wave and subtract data."""
    amp = params['amp']
    shift = params['shift']
    omega = params['omega']
    decay = params['decay']
    model = amp * np.sin(x*omega + shift) * np.exp(-x*x*decay)
    return model - data


# create a set of Parameters
params = Parameters()
params.add('amp', value=10, min=0)
params.add('decay', value=0.1)
params.add('shift', value=0.0, min=-np.pi/2., max=np.pi/2.)
params.add('omega', value=3.0)

# ... or use
params = create_params(amp=dict(value=10, min=0),
                       decay=0.1,
                       omega=3,
                       shift=dict(value=0, min=-np.pi/2, max=np.pi/2))

# do fit, here with the default leastsq algorithm
minner = Minimizer(fcn2min, params, fcn_args=(x, data))
result = minner.minimize()

# calculate final result
final = data + result.residual

# write error report
report_fit(result)

# try to plot results
try:
    import matplotlib.pyplot as plt
    plt.plot(x, data, '+')
    plt.plot(x, final)
    plt.show()
except ImportError:
    pass
# <end of examples/doc_parameters_basic.py>

Here, the objective function explicitly unpacks each Parameter value. This can be simplified using the Parameters
valuesdict() method, which would make the objective function fcn2min above look like:

def fcn2min(params, x, data):
    """Model a decaying sine wave and subtract data."""
    v = params.valuesdict()
    model = v['amp'] * np.sin(x*v['omega'] + v['shift']) * np.exp(-x*x*v['decay'])
    return model - data

The results are identical, and the difference is a stylistic choice.



6 Performing Fits and Analyzing Outputs

As shown in the previous chapter, a simple fit can be performed with the minimize() function. For more sophisticated
modeling, the Minimizer class can be used to gain a bit more control, especially when using complicated constraints
or comparing results from related fits.

6.1 The minimize() function

The minimize() function is a wrapper around Minimizer for running an optimization problem. It takes an objective
function (the function that calculates the array to be minimized), a Parameters object, and several optional arguments.
See Writing a Fitting Function for details on writing the objective function.
minimize(fcn, params, method='leastsq', args=None, kws=None, iter_cb=None, scale_covar=True,
nan_policy='raise', reduce_fcn=None, calc_covar=True, max_nfev=None, **fit_kws)
Perform the minimization of the objective function.
The minimize function takes an objective function to be minimized, a dictionary (Parameters) containing the
model parameters, and several optional arguments including the fitting method.
Parameters
• fcn (callable) – Objective function to be minimized. When method is ‘leastsq’ or
‘least_squares’, the objective function should return an array of residuals (difference be-
tween model and data) to be minimized in a least-squares sense. With the scalar methods the
objective function can either return the residuals array or a single scalar value. The function
must have the signature:

fcn(params, *args, **kws)

• params (Parameters) – Contains the Parameters for the model.


• method (str, optional) – Name of the fitting method to use. Valid values are:
– ’leastsq’: Levenberg-Marquardt (default)
– ’least_squares’: Least-Squares minimization, using Trust Region Reflective method
– ’differential_evolution’: differential evolution
– ’brute’: brute force method
– ’basinhopping’: basinhopping
– ’ampgo’: Adaptive Memory Programming for Global Optimization
– ’nelder’: Nelder-Mead
– ’lbfgsb’: L-BFGS-B


– ’powell’: Powell
– ’cg’: Conjugate-Gradient
– ’newton’: Newton-CG
– ’cobyla’: Cobyla
– ’bfgs’: BFGS
– ’tnc’: Truncated Newton
– ’trust-ncg’: Newton-CG trust-region
– ’trust-exact’: nearly exact trust-region
– ’trust-krylov’: Newton GLTR trust-region
– ’trust-constr’: trust-region for constrained optimization
– ’dogleg’: Dog-leg trust-region
– ’slsqp’: Sequential Least Squares Programming
– ’emcee’: Maximum likelihood via Monte-Carlo Markov Chain
– ’shgo’: Simplicial Homology Global Optimization
– ’dual_annealing’: Dual Annealing optimization
In most cases, these methods wrap and use the method of the same name from scipy.optimize,
or use scipy.optimize.minimize with the same method argument. Thus ‘leastsq’
will use scipy.optimize.leastsq, while ‘powell’ will use scipy.optimize.minimize(...,
method=’powell’).
For more details on the fitting methods please refer to the SciPy docs.
• args (tuple, optional) – Positional arguments to pass to fcn.
• kws (dict, optional) – Keyword arguments to pass to fcn.
• iter_cb (callable, optional) – Function to be called at each fit iteration. This function
should have the signature:

iter_cb(params, iter, resid, *args, **kws),

where params will have the current parameter values, iter the iteration number, resid the
current residual array, and *args and **kws as passed to the objective function.
• scale_covar (bool, optional) – Whether to automatically scale the covariance matrix
(default is True).
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – Specifies action if fcn (or
a Jacobian) returns NaN values. One of:
– ’raise’ : a ValueError is raised
– ’propagate’ : the values returned from userfcn are un-altered
– ’omit’ : non-finite values are filtered
• reduce_fcn (str or callable, optional) – Function to convert a residual array to a
scalar value for the scalar minimizers. See Notes in Minimizer.
• calc_covar (bool, optional) – Whether to calculate the covariance matrix (default is
True) for solvers other than ‘leastsq’ and ‘least_squares’. Requires the numdifftools package
to be installed.


• max_nfev (int or None, optional) – Maximum number of function evaluations (default
is None). The default value depends on the fitting method.
• **fit_kws (dict, optional) – Options to pass to the minimizer being used.
Returns
Object containing the optimized parameters and several goodness-of-fit statistics.
Return type
MinimizerResult
Changed in version 0.9.0: Return value changed to MinimizerResult.

Notes

The objective function should return the value to be minimized. For the Levenberg-Marquardt algorithm from
leastsq(), this returned value must be an array, with a length greater than or equal to the number of fitting variables
in the model. For the other methods, the return value can either be a scalar or an array. If an array is returned,
the sum-of-squares of the array will be sent to the underlying fitting method, effectively doing a least-squares
optimization of the return values.
A common use for args and kws would be to pass in other data needed to calculate the residual, including such
things as the data array, dependent variable, uncertainties in the data, and other data structures for the model
calculation.
On output, params will be unchanged. The best-fit values and, where appropriate, estimated uncertainties and
correlations, will all be contained in the returned MinimizerResult. See MinimizerResult – the optimization
result for further details.
This function is simply a wrapper around Minimizer and is equivalent to:

fitter = Minimizer(fcn, params, fcn_args=args, fcn_kws=kws,
                   iter_cb=iter_cb, scale_covar=scale_covar,
                   nan_policy=nan_policy, reduce_fcn=reduce_fcn,
                   calc_covar=calc_covar, **fit_kws)
fitter.minimize(method=method)

6.2 Writing a Fitting Function

An important component of a fit is writing a function to be minimized – the objective function. Since this function will
be called by other routines, there are fairly stringent requirements for its call signature and return value. In principle,
your function can be any Python callable, but it must look like this:
func(params, *args, **kws):
Calculate objective residual to be minimized from parameters.
Parameters
• params (Parameters) – Parameters.
• args – Positional arguments. Must match args argument to minimize().
• kws – Keyword arguments. Must match kws argument to minimize().
Returns
Residual array (generally data-model) to be minimized in the least-squares sense.


Return type
numpy.ndarray. The length of this array cannot change between calls.
A common use for the positional and keyword arguments would be to pass in other data needed to calculate the residual,
including such things as the data array, dependent variable, uncertainties in the data, and other data structures for the
model calculation.
The objective function should return the value to be minimized. For the Levenberg-Marquardt algorithm from
leastsq(), this returned value must be an array, with a length greater than or equal to the number of fitting vari-
ables in the model. For the other methods, the return value can either be a scalar or an array. If an array is returned, the
sum of squares of the array will be sent to the underlying fitting method, effectively doing a least-squares optimization
of the return values.
Since the function will be passed in a dictionary of Parameters, it is advisable to unpack these to get numerical values
at the top of the function. A simple way to do this is with Parameters.valuesdict(), as shown below:

from numpy import exp, sign, sin, pi


def residual(pars, x, data=None, eps=None):
    # unpack parameters: extract .value attribute for each parameter
    parvals = pars.valuesdict()
    period = parvals['period']
    shift = parvals['shift']
    decay = parvals['decay']

    if abs(shift) > pi/2:
        shift = shift - sign(shift)*pi

    if abs(period) < 1.e-10:
        period = sign(period)*1.e-10

    model = parvals['amp'] * sin(shift + x/period) * exp(-x*x*decay*decay)

    if data is None:
        return model
    if eps is None:
        return model - data
    return (model - data) / eps

In this example, x is a positional (required) argument, while the data array is actually optional (so that the function
returns the model calculation if the data is omitted). Also note that the model calculation will divide x by the value
of the period Parameter. It might be wise to ensure this parameter cannot be 0. It would be possible to use bounds on
the Parameter to do this:

params['period'] = Parameter(name='period', value=2, min=1.e-10)

but putting this directly in the function with:

if abs(period) < 1.e-10:
    period = sign(period)*1.e-10

is also a reasonable approach. Similarly, one could place bounds on the shift parameter to restrict it to values between
-pi/2 and pi/2.
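Putting this together, a minimal sketch of calling minimize() with this residual function (assuming x, data, and
eps arrays already exist) might look like:

from lmfit import create_params, minimize

params = create_params(amp=10, period=5, shift=0.1, decay=0.05)
result = minimize(residual, params, args=(x,), kws={'data': data, 'eps': eps})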


6.3 Choosing Different Fitting Methods

By default, the Levenberg-Marquardt algorithm is used for fitting. While often criticized, notably because it finds only
a local minimum, this approach has some distinct advantages: it is fast, well-behaved for most curve-fitting needs, and
makes it easy to estimate uncertainties for, and correlations between, pairs of fit variables, as discussed in
MinimizerResult – the optimization result.
Alternative algorithms can also be used by providing the method keyword to the minimize() function or the
Minimizer.minimize() method, as listed in the Table of Supported Fitting Methods. If you have the numdifftools
package installed, lmfit will try to estimate the covariance matrix and determine parameter uncertainties and
correlations if calc_covar is True (default).
Table of Supported Fitting Methods:

Fitting Method                                        method arg to minimize() or Minimizer.minimize()
Levenberg-Marquardt                                   leastsq or least_squares
Nelder-Mead                                           nelder
L-BFGS-B                                              lbfgsb
Powell                                                powell
Conjugate Gradient                                    cg
Newton-CG                                             newton
COBYLA                                                cobyla
BFGS                                                  bfgs
Truncated Newton                                      tnc
Newton CG trust-region                                trust-ncg
Exact trust-region                                    trust-exact
Newton GLTR trust-region                              trust-krylov
Constrained trust-region                              trust-constr
Dogleg                                                dogleg
Sequential Least Squares Programming                  slsqp
Differential Evolution                                differential_evolution
Brute force method                                    brute
Basinhopping                                          basinhopping
Adaptive Memory Programming for Global Optimization   ampgo
Simplicial Homology Global Optimization               shgo
Dual Annealing                                        dual_annealing
Maximum likelihood via Monte-Carlo Markov Chain       emcee
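For example, selecting the Nelder-Mead solver (reusing the residual function, params, x, and data from the
previous sections as an assumption) requires only the method keyword:

result = minimize(residual, params, args=(x,), kws={'data': data},
                  method='nelder')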

Note: The objective function for the Levenberg-Marquardt method must return an array, with more elements than
variables. All other methods can return either a scalar value or an array. The Monte-Carlo Markov Chain or emcee
method has two different operating methods when the objective function returns a scalar value. See the documentation
for emcee.

Warning: Much of this documentation assumes that the Levenberg-Marquardt (leastsq) method is used. Many
of the fit statistics and estimates for uncertainties in parameters discussed in MinimizerResult – the optimization
result are done unconditionally only for this (and the least_squares) method. Lmfit versions newer than 0.9.11
provide the capability to use numdifftools to estimate the covariance matrix and calculate parameter uncertainties
and correlations for other methods as well.

6.4 MinimizerResult – the optimization result

New in version 0.9.0.


An optimization with minimize() or Minimizer.minimize() will return a MinimizerResult object. This is an
otherwise plain container object (that is, with no methods of its own) that simply holds the results of the minimization.
These results will include several pieces of informational data such as status and error messages, fit statistics, and the
updated parameters themselves.
Importantly, the parameters passed in to Minimizer.minimize() will not be changed. To find the best-fit values,
uncertainties, and so on for each parameter, one must use the MinimizerResult.params attribute. For example, to
print the fitted values, bounds, and other parameter attributes in a well-formatted text table you can execute:

result.params.pretty_print()

with result being a MinimizerResult object. Note that the method pretty_print() accepts several arguments
for customizing the output (e.g., column width, numeric format, etcetera).
class MinimizerResult(**kws)
The results of a minimization.
Minimization results include data such as status and error messages, fit statistics, and the updated (i.e., best-fit)
parameters themselves in the params attribute.
The list of (possible) MinimizerResult attributes is given below:
params
The best-fit parameters resulting from the fit.
Type
Parameters
status
Termination status of the optimizer. Its value depends on the underlying solver. Refer to message for details.
Type
int
var_names
Ordered list of variable parameter names used in optimization, and useful for understanding the values in
init_vals and covar.
Type
list
covar
Covariance matrix from minimization, with rows and columns corresponding to var_names.
Type
numpy.ndarray
init_vals
List of initial values for variable parameters using var_names.


Type
list
init_values
Dictionary of initial values for variable parameters.
Type
dict
nfev
Number of function evaluations.
Type
int
success
True if the fit succeeded, otherwise False.
Type
bool
errorbars
True if uncertainties were estimated, otherwise False.
Type
bool
message
Message about fit success.
Type
str
call_kws
Keyword arguments sent to underlying solver.
Type
dict
ier
Integer error value from scipy.optimize.leastsq (‘leastsq’ method only).
Type
int
lmdif_message
Message from scipy.optimize.leastsq (‘leastsq’ method only).
Type
str
nvarys
Number of variables in fit: 𝑁varys .
Type
int
ndata
Number of data points: 𝑁 .
Type
int


nfree
Degrees of freedom in fit: 𝑁 − 𝑁varys .
Type
int
residual
Residual array Residi . Return value of the objective function when using the best-fit values of the param-
eters.
Type
numpy.ndarray
chisqr
Chi-square: \chi^2 = \sum_i^N [\mathrm{Resid}_i]^2.
Type
float
redchi
Reduced chi-square: \chi^2_\nu = \chi^2/(N - N_{varys}).
Type
float
aic
Akaike Information Criterion statistic: N \ln(\chi^2/N) + 2N_{varys}.
Type
float
bic
Bayesian Information Criterion statistic: N \ln(\chi^2/N) + \ln(N) N_{varys}.
Type
float
flatchain
A flatchain view of the sampling chain from the emcee method.
Type
pandas.DataFrame
show_candidates()
pretty_print() representation of candidates from the brute fitting method.

6.4.1 Goodness-of-Fit Statistics

Table of Fit Results: These values, including the standard Goodness-of-Fit statistics, are all attributes of
the MinimizerResult object returned by minimize() or Minimizer.minimize().


Attribute Name   Description / Formula
nfev             number of function evaluations
nvarys           number of variables in fit: N_{varys}
ndata            number of data points: N
nfree            degrees of freedom in fit: N - N_{varys}
residual         residual array, returned by the objective function: {Resid_i}
chisqr           chi-square: \chi^2 = \sum_i^N [\mathrm{Resid}_i]^2
redchi           reduced chi-square: \chi^2_\nu = \chi^2/(N - N_{varys})
aic              Akaike Information Criterion statistic (see below)
bic              Bayesian Information Criterion statistic (see below)
var_names        ordered list of variable parameter names used for init_vals and covar
covar            covariance matrix (with rows/columns using var_names)
init_vals        list of initial values for variable parameters
call_kws         dict of keyword arguments sent to underlying solver

Note that the calculation of chi-square and reduced chi-square assumes that the returned residual array is scaled
properly to the uncertainties in the data. For these statistics to be meaningful, the person writing the function to be
minimized must scale them properly.
After a fit using the leastsq() or least_squares() method has completed successfully, standard errors for the fitted
variables and correlations between pairs of fitted variables are automatically calculated from the covariance matrix. For
other methods, the calc_covar parameter (default is True) in the Minimizer class determines whether or not to use
the numdifftools package to estimate the covariance matrix. The standard error (estimated 1𝜎 error-bar) goes into
the stderr attribute of the Parameter. The correlations with all other variables will be put into the correl attribute
of the Parameter – a dictionary with keys for all other Parameters and values of the corresponding correlation.
In some cases, it may not be possible to estimate the errors and correlations. For example, if a variable actually has no
practical effect on the fit, it will likely cause the covariance matrix to be singular, making standard errors impossible to
estimate. Placing bounds on varied Parameters makes it more likely that errors cannot be estimated, as being near the
maximum or minimum value makes the covariance matrix singular. In these cases, the errorbars attribute of the fit
result (MinimizerResult object) will be False.

6.4.2 Akaike and Bayesian Information Criteria

The MinimizerResult includes the traditional chi-square and reduced chi-square statistics:

\chi^2 = \sum_i^N r_i^2
\chi^2_\nu = \chi^2/(N - N_{varys})

where r is the residual array returned by the objective function (likely to be (data-model)/uncertainty for data
modeling usages), N is the number of data points (ndata), and N_{varys} is the number of variable parameters.
Also included are the Akaike Information Criterion and Bayesian Information Criterion statistics, held in the aic and
bic attributes, respectively. These give slightly different measures of the relative quality for a fit, trying to balance
quality of fit with the number of variable parameters used in the fit. These are calculated as:

aic = N \ln(\chi^2/N) + 2N_{varys}
bic = N \ln(\chi^2/N) + \ln(N) N_{varys}

When comparing fits with different numbers of varying parameters, one typically selects the model with lowest reduced
chi-square, Akaike information criterion, and/or Bayesian information criterion. Generally, the Bayesian information
criterion is considered the most conservative of these statistics.
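As a sketch of how these statistics might be compared (result1 and result2 are assumed to be MinimizerResult
objects from fits of two competing models to the same data):

for label, res in (('model 1', result1), ('model 2', result2)):
    print(f'{label}: redchi={res.redchi:.4g}  aic={res.aic:.4g}  bic={res.bic:.4g}')
# prefer the model with the lower AIC/BIC, all else being equal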


6.4.3 Uncertainties in Variable Parameters, and their Correlations

As mentioned above, when a fit is complete the uncertainties for fitted Parameters as well as the correlations between
pairs of Parameters are usually calculated. This happens automatically either when using the default leastsq()
method, the least_squares() method, or for most other fitting methods if the highly-recommended numdifftools
package is available. The estimated standard error (the 1𝜎 uncertainty) for each variable Parameter will be contained in
the stderr, while the correl attribute for each Parameter will contain a dictionary of the correlation with each other
variable Parameter.
These estimates of the uncertainties are done by inverting the Hessian matrix which represents the second derivative
of fit quality for each variable parameter. There are situations for which the uncertainties cannot be estimated, which
generally indicates that this matrix cannot be inverted because the fit is not actually sensitive to one of the
variables. This can happen if a Parameter is stuck at an upper or lower bound, if the variable is simply not used by the
fit, or if the value for the variable is such that it has no real influence on the fit.
In principle, the scale of the uncertainties in the Parameters is closely tied to the goodness-of-fit statistics chi-square and
reduced chi-square (chisqr and redchi). The standard errors or 1𝜎 uncertainties are those that increase chi-square by
1. Since a “good fit” should have redchi of around 1, this requires that the data uncertainties (and to some extent the
sampling of the N data points) are correct. Unfortunately, it is often not the case that one has high-quality estimates of the
data uncertainties (getting the data is hard enough!). Because of this common situation, the uncertainties reported and
held in stderr are not those that increase chi-square by 1, but those that increase chi-square by reduced chi-square. This
is equivalent to rescaling the uncertainty in the data such that reduced chi-square would be 1. To be clear, this rescaling
is done by default because if reduced chi-square is far from 1, this rescaling often makes the reported uncertainties
sensible, and if reduced chi-square is near 1 it does little harm. If you have good scaling of the data uncertainty and
believe the scale of the residual array is correct, this automatic rescaling can be turned off using scale_covar=False.
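If you do trust the scale of the residual array, a minimal sketch of disabling the rescaling (reusing the residual
function, params, x, data, and eps from earlier as an assumption) would be:

result = minimize(residual, params, args=(x,), kws={'data': data, 'eps': eps},
                  scale_covar=False)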
Note that the simple (and fast!) approach to estimating uncertainties and correlations by inverting the second derivative
matrix assumes that the components of the residual array (if, indeed, an array is used) are distributed around 0 with a
normal (Gaussian) distribution, and that a map of probability distributions for pairs would be elliptical – the size of the
ellipse gives the uncertainty itself and the eccentricity of the ellipse gives the correlation. This simple approach to
assessing uncertainties ignores outliers, highly asymmetric uncertainties, and complex correlations between Parameters.
In fact, it is not too hard to come up with problems where such effects are important. Our experience is that the
automated results are usually the right scale and quite reasonable as initial estimates, but a more thorough exploration of
the Parameter space using the tools described in Minimizer.emcee() - calculating the posterior probability distribution
of parameters and An advanced example for evaluating confidence intervals can give a more complete understanding
of the distributions and relations between Parameters.

6.5 Getting and Printing Fit Reports

fit_report(inpars, modelpars=None, show_correl=True, min_correl=0.1, sort_pars=False, correl_mode='list')
Generate a report of the fitting results.
The report contains the best-fit values for the parameters and their uncertainties and correlations.
Parameters
• inpars (Parameters) – Input Parameters from fit or MinimizerResult returned from a fit.
• modelpars (Parameters, optional) – Known Model Parameters.
• show_correl (bool, optional) – Whether to show list of sorted correlations (default is
True).
• min_correl (float, optional) – Smallest correlation in absolute value to show (default
is 0.1).


• sort_pars (bool or callable, optional) – Whether to show parameter names sorted in alphanumerical
order. If False (default), then the parameters will be listed in the order they were added to the Parameters
dictionary. If callable, then this (one argument) function is used to extract a comparison key from each list
element.
• correl_mode ({'list', 'table'} str, optional) – Mode for how to show correlations.
Can be either ‘list’ (default) to show a sorted (if sort_pars is True) list of correlation values,
or ‘table’ to show a complete, formatted table of correlations.
Returns
Multi-line text of fit report.
Return type
str
An example using this to write out a fit report would be:

# <examples/doc_fitting_withreport.py>
from numpy import exp, linspace, pi, random, sign, sin

from lmfit import create_params, fit_report, minimize

p_true = create_params(amp=14.0, period=5.46, shift=0.123, decay=0.032)

def residual(pars, x, data=None):
    """Model a decaying sine wave and subtract data."""
    vals = pars.valuesdict()
    amp = vals['amp']
    per = vals['period']
    shift = vals['shift']
    decay = vals['decay']

    if abs(shift) > pi/2:
        shift = shift - sign(shift)*pi
    model = amp * sin(shift + x/per) * exp(-x*x*decay*decay)
    if data is None:
        return model
    return model - data

random.seed(0)
x = linspace(0.0, 250., 1001)
noise = random.normal(scale=0.7215, size=x.size)
data = residual(p_true, x) + noise

fit_params = create_params(amp=13, period=2, shift=0, decay=0.02)

out = minimize(residual, fit_params, args=(x,), kws={'data': data})

print(fit_report(out))
# <end examples/doc_fitting_withreport.py>

which would give as output:


[[Fit Statistics]]
# fitting method = leastsq
# function evals = 83
# data points = 1001
# variables = 4
chi-square = 498.811759
reduced chi-square = 0.50031270
Akaike info crit = -689.222517
Bayesian info crit = -669.587497
[[Variables]]
amp: 13.9121959 +/- 0.14120321 (1.01%) (init = 13)
period: 5.48507038 +/- 0.02666520 (0.49%) (init = 2)
shift: 0.16203673 +/- 0.01405662 (8.67%) (init = 0)
decay: 0.03264539 +/- 3.8015e-04 (1.16%) (init = 0.02)
[[Correlations]] (unreported correlations are < 0.100)
C(period, shift) = +0.7974
C(amp, decay) = +0.5816
C(amp, shift) = -0.2966
C(amp, period) = -0.2432
C(shift, decay) = -0.1819
C(period, decay) = -0.1496

To be clear, you can get at all of these values from the fit result out and out.params. For example, a crude printout
of the best-fit variables and standard errors could be done as:

print('-------------------------------')
print('Parameter Value Stderr')
for name, param in out.params.items():
print(f'{name:7s} {param.value:11.5f} {param.stderr:11.5f}')

-------------------------------
Parameter Value Stderr
amp 13.91220 0.14120
period 5.48507 0.02667
shift 0.16204 0.01406
decay 0.03265 0.00038

6.6 Using an Iteration Callback Function

An iteration callback function is a function to be called at each iteration, just after the objective function is called. The
iteration callback allows user-supplied code to be run at each iteration, and can be used to abort a fit.
iter_cb(params, iter, resid, *args, **kws):
User-supplied function to be run at each iteration.
Parameters
• params (Parameters) – Parameters.
• iter (int) – Iteration number.
• resid (numpy.ndarray) – Residual array.
• args – Positional arguments. Must match args argument to minimize()


• kws – Keyword arguments. Must match kws argument to minimize()


Returns
Iteration abort flag.
Return type
None for normal behavior, any value like True to abort the fit.
Normally, the iteration callback would have no return value or return None. To abort a fit, have this function return a
value that is True (including any non-zero integer). The fit will also abort if any exception is raised in the iteration
callback. When a fit is aborted this way, the parameters will have the values from the last iteration. The fit statistics are
not likely to be meaningful, and uncertainties will not be computed.
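A minimal sketch of such a callback (the printed message and the abort criterion here are illustrative, not part of
lmfit; the residual function, params, x, and data are assumed from earlier examples):

def my_iter_cb(params, iter, resid, *args, **kws):
    # print progress and abort the fit after 500 iterations
    print(f'iteration {iter}: chi-square = {(resid**2).sum():.5g}')
    if iter > 500:
        return True   # any True-like return value aborts the fit

result = minimize(residual, params, args=(x,), kws={'data': data},
                  iter_cb=my_iter_cb)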

6.7 Using the Minimizer class

For full control of the fitting process, you will want to create a Minimizer object.
class Minimizer(userfcn, params, fcn_args=None, fcn_kws=None, iter_cb=None, scale_covar=True,
nan_policy='raise', reduce_fcn=None, calc_covar=True, max_nfev=None, **kws)
A general minimizer for curve fitting and optimization.
Parameters
• userfcn (callable) – Objective function that returns the residual (difference between
model and data) to be minimized in a least-squares sense. This function must have the sig-
nature:

userfcn(params, *fcn_args, **fcn_kws)

• params (Parameters) – Contains the Parameters for the model.


• fcn_args (tuple, optional) – Positional arguments to pass to userfcn.
• fcn_kws (dict, optional) – Keyword arguments to pass to userfcn.
• iter_cb (callable, optional) – Function to be called at each fit iteration. This function
should have the signature:

iter_cb(params, iter, resid, *fcn_args, **fcn_kws)

where params will have the current parameter values, iter the iteration number, resid the
current residual array, and *fcn_args and **fcn_kws are passed to the objective function.
• scale_covar (bool, optional) – Whether to automatically scale the covariance matrix
(default is True).
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – Specifies action if userfcn
(or a Jacobian) returns NaN values. One of:
– ’raise’ : a ValueError is raised (default)
– ’propagate’ : the values returned from userfcn are un-altered
– ’omit’ : non-finite values are filtered
• reduce_fcn (str or callable, optional) – Function to convert a residual array to a
scalar value for the scalar minimizers. Optional values are (where r is the residual array):
– None : sum-of-squares of residual (default) = (r*r).sum()
– ’negentropy’ : neg entropy, using normal distribution = rho*log(rho).sum(), where rho = exp(-r*r/2)/(sqrt(2*pi))
– ’neglogcauchy’ : neg log likelihood, using Cauchy distribution = -log(1/(pi*(1+r*r))).sum()
– callable : must take one argument (r) and return a float.
• calc_covar (bool, optional) – Whether to calculate the covariance matrix (de-
fault is True) for solvers other than 'leastsq' and 'least_squares'. Requires the
numdifftools package to be installed.
• max_nfev (int or None, optional) – Maximum number of function evaluations (de-
fault is None). The default value depends on the fitting method.
• **kws (dict, optional) – Options to pass to the minimizer being used.

Notes

The objective function should return the value to be minimized. For the Levenberg-Marquardt algorithm from
leastsq() or least_squares(), this returned value must be an array, with a length greater than or equal to
the number of fitting variables in the model. For the other methods, the return value can either be a scalar or
an array. If an array is returned, the sum-of-squares of the array will be sent to the underlying fitting method,
effectively doing a least-squares optimization of the return values. If the objective function returns non-finite
values then a ValueError will be raised because the underlying solvers cannot deal with them.
A common use for the fcn_args and fcn_kws would be to pass in other data needed to calculate the residual,
including such things as the data array, dependent variable, uncertainties in the data, and other data structures
for the model calculation.
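For instance, a sketch of constructing a Minimizer that combines a scalar solver with a robust reduce_fcn (the
residual function, params, x, and data are assumed from the earlier examples):

mini = Minimizer(residual, params, fcn_args=(x,), fcn_kws={'data': data},
                 reduce_fcn='neglogcauchy')
result = mini.minimize(method='nelder')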
The Minimizer object has a few public methods:
Minimizer.minimize(method='leastsq', params=None, **kws)
Perform the minimization.
Parameters
• method (str, optional) – Name of the fitting method to use. Valid values are:
– ’leastsq’: Levenberg-Marquardt (default)
– ’least_squares’: Least-Squares minimization, using Trust Region Reflective method
– ’differential_evolution’: differential evolution
– ’brute’: brute force method
– ’basinhopping’: basinhopping
– ’ampgo’: Adaptive Memory Programming for Global Optimization
– ’nelder’: Nelder-Mead
– ’lbfgsb’: L-BFGS-B
– ’powell’: Powell
– ’cg’: Conjugate-Gradient
– ’newton’: Newton-CG
– ’cobyla’: Cobyla


– ’bfgs’: BFGS
– ’tnc’: Truncated Newton
– ’trust-ncg’: Newton-CG trust-region
– ’trust-exact’: nearly exact trust-region
– ’trust-krylov’: Newton GLTR trust-region
– ’trust-constr’: trust-region for constrained optimization
– ’dogleg’: Dog-leg trust-region
– ’slsqp’: Sequential Least Squares Programming
– ’emcee’: Maximum likelihood via Monte-Carlo Markov Chain
– ’shgo’: Simplicial Homology Global Optimization
– ’dual_annealing’: Dual Annealing optimization
In most cases, these methods wrap and use the method with the same name from scipy.optimize,
or use scipy.optimize.minimize with the same method argument. Thus ‘leastsq’ will use
scipy.optimize.leastsq, while ‘powell’ will use scipy.optimize.minimize(..., method=’powell’).
For more details on the fitting methods please refer to the SciPy documentation.
For more details on the fitting methods please refer to the SciPy documentation.
• params (Parameters, optional) – Parameters of the model to use as starting values.
• **kws (optional) – Additional arguments are passed to the underlying minimization
method.
Returns
Object containing the optimized parameters and several goodness-of-fit statistics.
Return type
MinimizerResult
Changed in version 0.9.0: Return value changed to MinimizerResult.
Minimizer.leastsq(params=None, max_nfev=None, **kws)
Use Levenberg-Marquardt minimization to perform a fit.
It assumes that the input Parameters have been initialized, and a function to minimize has been properly set up.
When possible, this calculates the estimated uncertainties and variable correlations from the covariance matrix.
This method calls scipy.optimize.leastsq and, by default, numerical derivatives are used.
Parameters
• params (Parameters, optional) – Parameters to use as starting point.
• max_nfev (int or None, optional) – Maximum number of function evaluations. De-
faults to 2000*(nvars+1), where nvars is the number of variable parameters.
• **kws (dict, optional) – Minimizer options to pass to scipy.optimize.leastsq.
Returns
Object containing the optimized parameters and several goodness-of-fit statistics.
Return type
MinimizerResult
Changed in version 0.9.0: Return value changed to MinimizerResult.


Minimizer.least_squares(params=None, max_nfev=None, **kws)
Least-squares minimization using scipy.optimize.least_squares.
This method wraps scipy.optimize.least_squares, which has built-in support for bounds and robust loss functions.
By default it uses the Trust Region Reflective algorithm with a linear loss function (i.e., the standard least-squares
problem).
Parameters
• params (Parameters, optional) – Parameters to use as starting point.
• max_nfev (int or None, optional) – Maximum number of function evaluations. De-
faults to 2000*(nvars+1), where nvars is the number of variable parameters.
• **kws (dict, optional) – Minimizer options to pass to scipy.optimize.least_squares.
Returns
Object containing the optimized parameters and several goodness-of-fit statistics.
Return type
MinimizerResult
Changed in version 0.9.0: Return value changed to MinimizerResult.
Minimizer.scalar_minimize(method='Nelder-Mead', params=None, max_nfev=None, **kws)
Scalar minimization using scipy.optimize.minimize.
Perform fit with any of the scalar minimization algorithms supported by scipy.optimize.minimize. Default argu-
ment values are:

scalar_minimize() arg   Default Value   Description
method                  'Nelder-Mead'   fitting method
tol                     1.e-7           fitting and parameter tolerance
hess                    None            Hessian of objective function

Parameters
• method (str, optional) – Name of the fitting method to use. One of:
– ’Nelder-Mead’ (default)
– ’L-BFGS-B’
– ’Powell’
– ’CG’
– ’Newton-CG’
– ’COBYLA’
– ’BFGS’
– ’TNC’
– ’trust-ncg’
– ’trust-exact’
– ’trust-krylov’
– ’trust-constr’
– ’dogleg’


– ’SLSQP’
– ’differential_evolution’
• params (Parameters, optional) – Parameters to use as starting point.
• max_nfev (int or None, optional) – Maximum number of function evaluations. De-
faults to 2000*(nvars+1), where nvars is the number of variable parameters.
• **kws (dict, optional) – Minimizer options pass to scipy.optimize.minimize.
Returns
Object containing the optimized parameters and several goodness-of-fit statistics.
Return type
MinimizerResult

Changed in version 0.9.0: Return value changed to MinimizerResult.

Notes

If the objective function returns a NumPy array instead of the expected scalar, the sum-of-squares of the array
will be used.
Note that bounds and constraints can be set on Parameters for any of these methods, so are not supported sepa-
rately for those designed to use bounds. However, if you use the differential_evolution method you must
specify finite (min, max) for each varying Parameter.
Minimizer.prepare_fit(params=None)
Prepare parameters for fitting.
Prepares and initializes model and Parameters for subsequent fitting. This routine prepares the conversion of
Parameters into fit variables, organizes parameter bounds, and parses, “compiles” and checks constraint ex-
pressions. The method also creates and returns a new instance of a MinimizerResult object that contains the
copy of the Parameters that will actually be varied in the fit.
Parameters
params (Parameters, optional) – Contains the Parameters for the model; if None, then the
Parameters used to initialize the Minimizer object are used.
Return type
MinimizerResult

Notes

This method is called directly by the fitting methods, and it is generally not necessary to call this function ex-
plicitly.
Changed in version 0.9.0: Return value changed to MinimizerResult.
Minimizer.brute(params=None, Ns=20, keep=50, workers=1, max_nfev=None)
Use the brute method to find the global minimum of a function.
The following parameters are passed to scipy.optimize.brute and cannot be changed:


brute() arg   Value   Description
full_output   1       Return the evaluation grid and the objective function’s values on it.
finish        None    No “polishing” function is to be used after the grid search.
disp          False   Do not print convergence messages (when finish is not None).

It assumes that the input Parameters have been initialized, and a function to minimize has been properly set up.
Parameters
• params (Parameters, optional) – Contains the Parameters for the model. If None, then
the Parameters used to initialize the Minimizer object are used.
• Ns (int, optional) – Number of grid points along the axes, if not otherwise specified (see
Notes).
• keep (int, optional) – Number of best candidates from the brute force method that are
stored in the candidates attribute. If ‘all’, then all grid points from scipy.optimize.brute
are stored as candidates.
• workers (int or map-like callable, optional) – For parallel evaluation of the grid
(see scipy.optimize.brute for more details).
• max_nfev (int or None, optional) – Maximum number of function evaluations (de-
fault is None). Defaults to 200000*(nvarys+1).
Returns
Object containing the parameters from the brute force method. The return values (x0, fval,
grid, Jout) from scipy.optimize.brute are stored as brute_<parname> attributes. The Min-
imizerResult also contains the candidates attribute and show_candidates() method.
The candidates attribute contains the parameters and chisqr from the brute force method
as a namedtuple, ('Candidate', ['params', 'score']), sorted on the (lowest) chisqr
value. To access the values for a particular candidate one can use result.candidate[#].
params or result.candidate[#].score, where a lower # represents a better candidate. The
show_candidates() method uses the pretty_print() method to show a specific candidate-#
or all candidates when no number is specified.
Return type
MinimizerResult
New in version 0.9.6.

Notes

The brute() method evaluates the function at each point of a multidimensional grid of points. The grid
points are generated from the parameter ranges using Ns and (optional) brute_step. The implementation in
scipy.optimize.brute requires finite bounds and the range is specified as a two-tuple (min, max) or slice-object
(min, max, brute_step). A slice-object is used directly, whereas a two-tuple is converted to a slice object
that interpolates Ns points from min to max, inclusive.
In addition, the brute() method in lmfit, handles three other scenarios given below with their respective slice-
object:
• lower bound (min) and brute_step are specified:
range = (min, min + Ns * brute_step, brute_step).
• upper bound (max) and brute_step are specified:
range = (max - Ns * brute_step, max, brute_step).


• numerical value (value) and brute_step are specified:
range = (value - (Ns//2) * brute_step, value + (Ns//2) * brute_step, brute_step).
For more information, check the examples in examples/lmfit_brute_example.ipynb.
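A minimal sketch of a brute-force grid search (parameter values, bounds, and step sizes here are illustrative; the
residual function, x, and data are assumed from the earlier examples):

params = Parameters()
params.add('amp', value=10, min=5, max=15, brute_step=0.5)
params.add('period', value=5, min=1, max=10, brute_step=0.5)
params.add('shift', value=0.1, vary=False)
params.add('decay', value=0.05, min=0.0, max=0.2, brute_step=0.01)

mini = Minimizer(residual, params, fcn_args=(x,), fcn_kws={'data': data})
result = mini.brute(keep=25)
result.show_candidates(1)   # pretty-print the best candidate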
Minimizer.basinhopping(params=None, max_nfev=None, **kws)
Use the basinhopping algorithm to find the global minimum.
This method calls scipy.optimize.basinhopping using the default arguments. The default minimizer is BFGS,
but since lmfit supports parameter bounds for all minimizers, the user can choose any of the solvers present in
scipy.optimize.minimize.
Parameters
• params (Parameters, optional) – Contains the Parameters for the model. If None, then
the Parameters used to initialize the Minimizer object are used.
• max_nfev (int or None, optional) – Maximum number of function evaluations (de-
fault is None). Defaults to 200000*(nvarys+1).
• **kws (dict, optional) – Minimizer options to pass to scipy.optimize.basinhopping.
Returns
Object containing the optimization results from the basinhopping algorithm.
Return type
MinimizerResult
New in version 0.9.10.
Minimizer.ampgo(params=None, max_nfev=None, **kws)
Find the global minimum of a multivariate function using AMPGO.
AMPGO stands for ‘Adaptive Memory Programming for Global Optimization’ and is an efficient algorithm to
find the global minimum.
Parameters
• params (Parameters, optional) – Contains the Parameters for the model. If None, then
the Parameters used to initialize the Minimizer object are used.
• max_nfev (int, optional) – Maximum number of total function evaluations. If None
(default), the optimization will stop after totaliter number of iterations (see below).
• **kws (dict, optional) – Minimizer options to pass to the ampgo algorithm, the options
are listed below:

local: str, optional
Name of the local minimization method. Valid options are:
- `'L-BFGS-B'` (default)
- `'Nelder-Mead'`
- `'Powell'`
- `'TNC'`
- `'SLSQP'`
local_opts: dict, optional
Options to pass to the local minimizer (default is
None).
maxfunevals: int, optional
Maximum number of function evaluations. If None
(default), the optimization will stop after
`totaliter` number of iterations (deprecated: use
`max_nfev` instead).
totaliter: int, optional
Maximum number of global iterations (default is 20).
maxiter: int, optional
Maximum number of `Tabu Tunneling` iterations during
each global iteration (default is 5).
glbtol: float, optional
Tolerance whether or not to accept a solution after a
tunneling phase (default is 1e-5).
eps1: float, optional
Constant used to define an aspiration value for the
objective function during the Tunneling phase (default
is 0.02).
eps2: float, optional
Perturbation factor used to move away from the latest
local minimum at the start of a Tunneling phase
(default is 0.1).
tabulistsize: int, optional
Size of the (circular) tabu search list (default is 5).
tabustrategy: {'farthest', 'oldest'}, optional
Strategy to use when the size of the tabu list exceeds
`tabulistsize`. It can be `'oldest'` to drop the oldest
point from the tabu list or `'farthest'` (default) to
drop the element farthest from the last local minimum
found.
disp: bool, optional
Set to True to print convergence messages (default is
False).

Returns
Object containing the parameters from the ampgo method, with fit parameters, statistics and such.
The return values (x0, fval, eval, msg, tunnel) are stored as ampgo_<parname> attributes.
Return type
MinimizerResult
New in version 0.9.10.

Notes

The Python implementation was written by Andrea Gavana in 2014
(http://infinity77.net/global_optimization/index.html).
The details of the AMPGO algorithm are described in the paper “Adaptive Memory Programming for Constrained
Global Optimization” located here:
http://leeds-faculty.colorado.edu/glover/fred%20pubs/416%20-%20AMP%20(TS)%20for%20Constrained%20Global%20Opt%20w%20Lasdon%20et%20al%20.pdf
Minimizer.shgo(params=None, max_nfev=None, **kws)
Use the SHGO algorithm to find the global minimum.


SHGO stands for “simplicial homology global optimization” and calls scipy.optimize.shgo using its default ar-
guments.
Parameters
• params (Parameters, optional) – Contains the Parameters for the model. If None, then
the Parameters used to initialize the Minimizer object are used.
• max_nfev (int or None, optional) – Maximum number of function evaluations. De-
faults to 200000*(nvars+1), where nvars is the number of variable parameters.
• **kws (dict, optional) – Minimizer options to pass to the SHGO algorithm.
Returns
Object containing the parameters from the SHGO method. The return values specific to
scipy.optimize.shgo (x, xl, fun, funl, nfev, nit, nlfev, nlhev, and nljev) are stored as
shgo_<parname> attributes.
Return type
MinimizerResult
New in version 0.9.14.
Minimizer.dual_annealing(params=None, max_nfev=None, **kws)
Use the dual_annealing algorithm to find the global minimum.
This method calls scipy.optimize.dual_annealing using its default arguments.
Parameters
• params (Parameters, optional) – Contains the Parameters for the model. If None, then
the Parameters used to initialize the Minimizer object are used.
• max_nfev (int or None, optional) – Maximum number of function evaluations. De-
faults to 200000*(nvars+1), where nvars is the number of variables.
• **kws (dict, optional) – Minimizer options to pass to the dual_annealing algorithm.
Returns
Object containing the parameters from the dual_annealing method. The return values specific to
scipy.optimize.dual_annealing (x, fun, nfev, nhev, njev, and nit) are stored as da_<parname>
attributes.
Return type
MinimizerResult
New in version 0.9.14.
Minimizer.emcee(params=None, steps=1000, nwalkers=100, burn=0, thin=1, ntemps=1, pos=None,
reuse_sampler=False, workers=1, float_behavior='posterior', is_weighted=True, seed=None,
progress=True, run_mcmc_kwargs={})
Bayesian sampling of the posterior distribution.
The method uses the emcee Markov Chain Monte Carlo package and assumes that the prior is Uniform. You
need to have emcee version 3 or newer installed to use this method.
Parameters
• params (Parameters, optional) – Parameters to use as starting point. If this is not spec-
ified then the Parameters used to initialize the Minimizer object are used.
• steps (int, optional) – How many samples you would like to draw from the posterior
distribution for each of the walkers.


• nwalkers (int, optional) – Should be set so that nwalkers >> nvarys, where nvarys
is the number of parameters being varied during the fit. ‘Walkers are the members of the
ensemble. They are almost like separate Metropolis-Hastings chains but, of course, the pro-
posal distribution for a given walker depends on the positions of all the other walkers in the
ensemble.’ - from the emcee webpage.
• burn (int, optional) – Discard this many samples from the start of the sampling regime.
• thin (int, optional) – Only accept 1 in every thin samples.
• ntemps (int, deprecated) – ntemps has no effect.
• pos (numpy.ndarray, optional) – Specify the initial positions for the sampler, an ndar-
ray of shape (nwalkers, nvarys). You can also initialise using a previous chain of the
same nwalkers and nvarys. Note that nvarys may be one larger than you expect it to be if
your userfcn returns an array and is_weighted=False.
• reuse_sampler (bool, optional) – Set to True if you have already run emcee with the
Minimizer instance and want to continue to draw from its sampler (and so retain the chain
history). If False, a new sampler is created. The keywords nwalkers, pos, and params will
be ignored when this is set, as they will be set by the existing sampler. Important: the
Parameters used to create the sampler must not change in-between calls to emcee. Alteration
of Parameters would include changed min, max, vary and expr attributes. This may happen,
for example, if you use an altered Parameters object and call the minimize method in-between
calls to emcee.
• workers (Pool-like or int, optional) – For parallelization of sampling. It can be
any Pool-like object with a map method that follows the same calling sequence as the built-in
map function. If int is given as the argument, then a multiprocessing-based pool is spawned
internally with the corresponding number of parallel processes. ‘mpi4py’-based paralleliza-
tion and ‘joblib’-based parallelization pools can also be used here. Note: because of multi-
processing overhead it may only be worth parallelising if the objective function is expensive
to calculate, or if there are a large number of objective evaluations per step (nwalkers *
nvarys).
• float_behavior (str, optional) – Meaning of float (scalar) output of objective func-
tion. Use ‘posterior’ if it returns a log-posterior probability or ‘chi2’ if it returns 𝜒2 . See
Notes for further details.
• is_weighted (bool, optional) – Has your objective function been weighted by mea-
surement uncertainties? If is_weighted=True then your objective function is assumed
to return residuals that have been divided by the true measurement uncertainty (data -
model) / sigma. If is_weighted=False then the objective function is assumed to return
unweighted residuals, data - model. In this case emcee will employ a positive measure-
ment uncertainty during the sampling. This measurement uncertainty will be present in the
output params and output chain with the name __lnsigma. A side effect of this is that you
cannot use this parameter name yourself. Important: this parameter only has any effect if
your objective function returns an array. If your objective function returns a float, then this
parameter is ignored. See Notes for more details.
• seed (int or numpy.random.RandomState, optional) – If seed is an int, a new
numpy.random.RandomState instance is used, seeded with seed. If seed is already a
numpy.random.RandomState instance, then that numpy.random.RandomState instance is
used. Specify seed for repeatable minimizations.
• progress (bool, optional) – Print a progress bar to the console while running.
• run_mcmc_kwargs (dict, optional) – Additional (optional) keyword arguments that are
passed to emcee.EnsembleSampler.run_mcmc.

Returns
MinimizerResult object containing updated params, statistics, etc. The updated params rep-
resent the median of the samples, while the uncertainties are half the difference of the 15.87
and 84.13 percentiles. The MinimizerResult contains a few additional attributes: chain contains
the samples and has shape ((steps - burn) // thin, nwalkers, nvarys). flatchain is
a pandas.DataFrame of the flattened chain, which can be accessed with result.flatchain[parname].
lnprob contains the log probability for each sample in chain; the sample with the highest prob-
ability corresponds to the maximum likelihood estimate. acor is an array containing the auto-
correlation time for each parameter, if the auto-correlation time can be computed from the chain.
Finally, acceptance_fraction is an array of the fraction of steps accepted for each walker.
Return type
MinimizerResult
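
For example, a minimal sketch of inspecting these attributes (assuming res is the MinimizerResult returned by an
emcee run with a varied parameter named t1; the parameter name is illustrative):

print(res.chain.shape)                 # ((steps - burn) // thin, nwalkers, nvarys)
print(res.flatchain['t1'].median())    # flatchain is a pandas.DataFrame
print(res.lnprob.shape)                # log probability of each sample
print(res.acceptance_fraction.mean())  # mean acceptance fraction over walkers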

Notes

This method samples the posterior distribution of the parameters using Markov Chain Monte Carlo. It calcu-
lates the log-posterior probability of the model parameters, F, given the data, D: ln p(F_true|D). This ‘posterior
probability’ is given by:

\ln p(F_{\rm true}|D) \propto \ln p(D|F_{\rm true}) + \ln p(F_{\rm true})

where ln p(D|F_true) is the ‘log-likelihood’ and ln p(F_true) is the ‘log-prior’. The default log-prior encodes
prior information known about the model: the log-prior probability is -numpy.inf (impossible) if any of
the parameters is outside its limits, and zero if all the parameters are inside their bounds (a uniform prior). The
log-likelihood function is [1]:

\ln p(D|F_{\rm true}) = -\frac{1}{2} \sum_n \left[ \frac{\left(g_n(F_{\rm true}) - D_n\right)^2}{s_n^2} + \ln(2\pi s_n^2) \right]

The first term represents the residual (g being the generative model, D_n the data and s_n the measurement uncer-
tainty). This gives χ² when summed over all data points. The objective function may also return the log-posterior
probability, ln p(F_true|D). Since the default log-prior term is zero, the objective function can also just return
the log-likelihood, unless you wish to create a non-uniform prior.
If the objective function returns a float value, this is assumed by default to be the log-posterior proba-
bility (the float_behavior default is ‘posterior’). If your objective function returns χ², then you should use
float_behavior='chi2' instead.
By default, objective functions may return an ndarray of (possibly weighted) residuals. In this case, use
is_weighted to select whether these are correctly weighted by measurement uncertainty. Note that this ignores the
second term above, so to calculate a correct log-posterior probability value your objective function should
return a float value. With is_weighted=False the data uncertainty, s_n, will be treated as a nuisance pa-
rameter to be marginalized out. This uses a strictly positive uncertainty, the same for each data point (ho-
moscedasticity): s_n = exp(__lnsigma). __lnsigma will be present in MinimizerResult.params, as well as in
Minimizer.chain, and nvarys will be increased by one.
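
As a sketch of the float-returning case (the linear model, data, and parameter names here are invented for
illustration, not taken from the example below): an objective function that returns a scalar χ² and is sampled with
float_behavior='chi2':

import numpy as np
import lmfit

np.random.seed(7)
x = np.linspace(0, 10, 101)
y = 2.0*x + 1.0 + np.random.normal(0, 0.2, x.size)
sigma = 0.2   # known measurement uncertainty

def chi2_objective(params):
    v = params.valuesdict()
    model = v['m']*x + v['b']
    return np.sum(((y - model)/sigma)**2)   # scalar chi-square, not residuals

params = lmfit.create_params(m=dict(value=1.5, min=0, max=5),
                             b=dict(value=0.0, min=-5, max=5))
out = lmfit.minimize(chi2_objective, params, method='emcee',
                     float_behavior='chi2', burn=100, steps=500, progress=False)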
References

[1] https://emcee.readthedocs.io

6.8 Minimizer.emcee() - calculating the posterior probability distribution of parameters

Minimizer.emcee() can be used to obtain the posterior probability distribution of parameters, given a set of ex-
perimental data. Note that this method does not actually perform a fit at all. Instead, it explores parameter space to
determine the probability distributions for the parameters, but without an explicit goal of attempting to refine the so-
lution. It should not be used for fitting, but it is a useful method to more thoroughly explore the parameter space
around the solution after a fit has been done and thereby get an improved understanding of the probability distribution
for the parameters. It may be able to refine your estimate of the most likely values for a set of parameters, but it will
not iteratively find a good solution to the minimization problem. To use this method effectively, you should first use
another minimization method and then use this method to explore the parameter space around those best-fit values.
To illustrate this, we’ll use an example problem of fitting data to a double-exponential decay function, including a
modest amount of Gaussian noise in the data. Note that this example is the same problem used in An advanced example
for evaluating confidence intervals for evaluating confidence intervals in the parameters, which is a similar goal to the
one here.

import matplotlib.pyplot as plt
import numpy as np

import lmfit

x = np.linspace(1, 10, 250)
np.random.seed(0)
y = 3.0 * np.exp(-x / 2) - 5.0 * np.exp(-(x - 0.1) / 10.) + 0.1 * np.random.randn(x.size)

Create a Parameter set for the initial guesses:

p = lmfit.Parameters()
p.add_many(('a1', 4.), ('a2', 4.), ('t1', 3.), ('t2', 3., True))

def residual(p):
    v = p.valuesdict()
    return v['a1'] * np.exp(-x / v['t1']) + v['a2'] * np.exp(-(x - 0.1) / v['t2']) - y

Solving with minimize() gives the Maximum Likelihood solution. Note that we use the robust Nelder-Mead method
here. The default Levenberg-Marquardt method seems to have difficulty with exponential decays, though it can refine
the solution if starting near the solution:

mi = lmfit.minimize(residual, p, method='nelder', nan_policy='omit')
lmfit.printfuncs.report_fit(mi.params, min_correl=0.5)

[[Variables]]
a1: 2.98623689 +/- 0.15010519 (5.03%) (init = 4)
a2: -4.33525597 +/- 0.11765821 (2.71%) (init = 4)
t1: 1.30993186 +/- 0.13449653 (10.27%) (init = 3)
t2: 11.8240752 +/- 0.47172598 (3.99%) (init = 3)
[[Correlations]] (unreported correlations are < 0.500)
C(a2, t2) = +0.9876
C(a2, t1) = -0.9278
C(t1, t2) = -0.8852
C(a1, t1) = -0.6093

and plotting the fit using the Maximum Likelihood solution gives the graph below:

plt.plot(x, y, 'o')
plt.plot(x, residual(mi.params) + y, label='best fit')
plt.legend()
plt.show()

Note that the fit here (for which the numdifftools package is installed) does estimate and report uncertainties in the
parameters and correlations for the parameters, and reports the correlation of parameters a2 and t2 to be very high.
As we’ll see, these estimates are pretty good, but when faced with such high correlation, it can be helpful to get the full
probability distribution for the parameters. MCMC methods are very good for this.
Furthermore, we wish to deal with the data uncertainty. This is called marginalisation of a nuisance parameter. emcee
requires a function that returns the log-posterior probability. The log-posterior probability is a sum of the log-prior
probability and log-likelihood functions. The log-prior probability is assumed to be zero if all the parameters are within
their bounds and -np.inf if any of the parameters are outside their bounds.
If the objective function returns an array of unweighted residuals (i.e., data-model) as is the case here, you can
use is_weighted=False as an argument for emcee. In that case, emcee will automatically add/use the __lnsigma
parameter to estimate the true uncertainty in the data. To place boundaries on this parameter one can do:

mi.params.add('__lnsigma', value=np.log(0.1), min=np.log(0.001), max=np.log(2))

Now we have to set up the minimizer and do the sampling (again, just to be clear, this is not doing a fit):

res = lmfit.minimize(residual, method='emcee', nan_policy='omit', burn=300, steps=1000,
                     thin=20, params=mi.params, is_weighted=False, progress=False)

As mentioned in the Notes for Minimizer.emcee(), the is_weighted argument will be ignored if your objective
function returns a float instead of an array. For the documentation we set progress=False; the default is to print a
progress bar to the Terminal if the tqdm package is installed.
The success of the method (i.e., whether or not the sampling went well) can be assessed by checking the integrated
autocorrelation time and/or the acceptance fraction of the walkers. For this specific example the autocorrelation time
could not be estimated because the “chain is too short”. Instead, we plot the acceptance fraction per walker; its
mean value suggests that the sampling worked as intended (as a rule of thumb, the value should be between 0.2 and
0.5).

plt.plot(res.acceptance_fraction, 'o')
plt.xlabel('walker')
plt.ylabel('acceptance fraction')
plt.show()

With the results from emcee, we can visualize the posterior distributions for the parameters using the corner package:

import corner

emcee_plot = corner.corner(res.flatchain, labels=res.var_names,
                           truths=list(res.params.valuesdict().values()))


The values reported in the MinimizerResult are the medians of the probability distributions and a 1-σ quantile,
estimated as half the difference between the 15.87 and 84.13 percentiles. Printing these values:

print('median of posterior probability distribution')
print('--------------------------------------------')
lmfit.report_fit(res.params)

median of posterior probability distribution
--------------------------------------------
[[Variables]]
a1: 2.98945718 +/- 0.14033921 (4.69%) (init = 2.986237)
a2: -4.34687243 +/- 0.12131092 (2.79%) (init = -4.335256)
t1: 1.32883916 +/- 0.13766047 (10.36%) (init = 1.309932)
t2: 11.7836194 +/- 0.47719763 (4.05%) (init = 11.82408)
__lnsigma: -2.32559226 +/- 0.04542650 (1.95%) (init = -2.302585)
[[Correlations]] (unreported correlations are < 0.100)
C(a2, t2) = +0.9811
C(a2, t1) = -0.9377
C(t1, t2) = -0.8943
C(a1, t1) = -0.5076
C(a1, a2) = +0.2140
C(a1, t2) = +0.1777

You can see that this recovered the right uncertainty level on the data. Note that these values agree pretty well with the
results, uncertainties and correlations found by the fit and using numdifftools to estimate the covariance matrix. That
is, even though the parameters a2, t1, and t2 are all highly correlated and do not display perfectly Gaussian probability
distributions, the probability distributions found by explicitly sampling the parameter space are not so far from elliptical
as to make the simple (and much faster) estimates from inverting the covariance matrix completely invalid.
As mentioned above, the result from emcee reports the median values, which are not necessarily the same as the
Maximum Likelihood Estimate. To obtain the values for the Maximum Likelihood Estimation (MLE) we find the
location in the chain with the highest probability:

highest_prob = np.argmax(res.lnprob)
hp_loc = np.unravel_index(highest_prob, res.lnprob.shape)
mle_soln = res.chain[hp_loc]
for i, par in enumerate(p):
    p[par].value = mle_soln[i]

print('\nMaximum Likelihood Estimation from emcee ')
print('-------------------------------------------------')
print('Parameter MLE Value Median Value Uncertainty')
fmt = ' {:5s} {:11.5f} {:11.5f} {:11.5f}'.format
for name, param in p.items():
    print(fmt(name, param.value, res.params[name].value,
              res.params[name].stderr))

Maximum Likelihood Estimation from emcee
-------------------------------------------------
Parameter MLE Value Median Value Uncertainty
a1 2.93839 2.98946 0.14034
a2 -4.35274 -4.34687 0.12131
t1 1.34310 1.32884 0.13766
t2 11.78782 11.78362 0.47720

Here the differences between the MLE and median values are seen to be below 0.5%, and well within the estimated
1-σ uncertainties.
Finally, we can use the samples from emcee to work out the 1- and 2-σ error estimates.

print('\nError estimates from emcee:')
print('------------------------------------------------------')
print('Parameter -2sigma -1sigma median +1sigma +2sigma')

for name in p.keys():
    quantiles = np.percentile(res.flatchain[name],
                              [2.275, 15.865, 50, 84.135, 97.725])
    median = quantiles[2]
    err_m2 = quantiles[0] - median
    err_m1 = quantiles[1] - median
    err_p1 = quantiles[3] - median
    err_p2 = quantiles[4] - median
    fmt = ' {:5s} {:8.4f} {:8.4f} {:8.4f} {:8.4f} {:8.4f}'.format
    print(fmt(name, err_m2, err_m1, median, err_p1, err_p2))

Error estimates from emcee:
------------------------------------------------------
Parameter -2sigma -1sigma median +1sigma +2sigma
a1 -0.2656 -0.1362 2.9895 0.1445 0.3141
a2 -0.3209 -0.1309 -4.3469 0.1118 0.1985
t1 -0.2377 -0.1305 1.3288 0.1448 0.3278
t2 -1.0677 -0.4807 11.7836 0.4739 0.8990

And we see that the initial estimates for the 1-σ standard errors using numdifftools were not too bad. We’ll return
to this example problem in An advanced example for evaluating confidence intervals and use a different method to
calculate the 1- and 2-σ error bars.



CHAPTER SEVEN

MODELING DATA AND CURVE FITTING

A common use of least-squares minimization is curve fitting, where one has a parametrized model function meant
to explain some phenomena and wants to adjust the numerical values for the model so that it most closely matches
some data. With scipy, such problems are typically solved with scipy.optimize.curve_fit, which is a wrapper around
scipy.optimize.leastsq. Since lmfit’s minimize() is also a high-level wrapper around scipy.optimize.leastsq it can be
used for curve-fitting problems. While it offers many benefits over scipy.optimize.leastsq, using minimize() for many
curve-fitting problems still requires more effort than using scipy.optimize.curve_fit.
The Model class in lmfit provides a simple and flexible approach to curve-fitting problems. Like
scipy.optimize.curve_fit, a Model uses a model function – a function that is meant to calculate a model for some phe-
nomenon – and then uses that to best match an array of supplied data. Beyond that similarity, its interface is rather
different from scipy.optimize.curve_fit, for example in that it uses Parameters, but also offers several other important
advantages.
In addition to allowing you to turn any model function into a curve-fitting method, lmfit also provides canonical defi-
nitions for many known lineshapes such as Gaussian or Lorentzian peaks and Exponential decays that are widely used
in many scientific domains. These are available in the models module that will be discussed in more detail in the next
chapter (Built-in Fitting Models in the models module). We mention it here as you may want to consult that list before
writing your own model. For now, we focus on turning Python functions into high-level fitting models with the Model
class, and using these to fit data.

7.1 Motivation and simple example: Fit data to Gaussian profile

Let’s start with a simple and common example of fitting data to a Gaussian peak. As we will see, there is a built-in
GaussianModel class that can help do this, but here we’ll build our own. We start with a simple definition of the
model function:

from numpy import exp, linspace, random

def gaussian(x, amp, cen, wid):
    return amp * exp(-(x-cen)**2 / wid)

We want to use this function to fit to data 𝑦(𝑥) represented by the arrays y and x. With scipy.optimize.curve_fit, this
would be:

from scipy.optimize import curve_fit

x = linspace(-10, 10, 101)
y = gaussian(x, 2.33, 0.21, 1.51) + random.normal(0, 0.2, x.size)

init_vals = [1, 0, 1]  # for [amp, cen, wid]
best_vals, covar = curve_fit(gaussian, x, y, p0=init_vals)

That is, we create data, make an initial guess of the model values, and run scipy.optimize.curve_fit with the model func-
tion, data arrays, and initial guesses. The results returned are the optimal values for the parameters and the covariance
matrix. It’s simple and useful, but it misses the benefits of lmfit.
With lmfit, we create a Model that wraps the gaussian model function, which automatically generates the appropriate
residual function, and determines the corresponding parameter names from the function signature itself:

from lmfit import Model

gmodel = Model(gaussian)
print(f'parameter names: {gmodel.param_names}')
print(f'independent variables: {gmodel.independent_vars}')

parameter names: ['amp', 'cen', 'wid']
independent variables: ['x']

As you can see, the Model gmodel determined the names of the parameters and the independent variables. By default,
the first argument of the function is taken as the independent variable, held in independent_vars, and the rest of
the function’s positional arguments (and, in certain cases, keyword arguments – see below) are used for Parameter
names. Thus, for the gaussian function above, the independent variable is x, and the parameters are named amp,
cen, and wid – all taken directly from the signature of the model function. As we will see below, you can modify
the default assignment of independent variable / arguments and specify yourself what the independent variable is and
which function arguments should be identified as parameter names.
Parameters are not created when the model is created. The model knows what the parameters should be named,
but nothing about the scale and range of your data. To help you create Parameters for a Model, each model has a
make_params() method that will generate parameters with the expected names. You will have to do this, or make
Parameters some other way (say, with create_params()), and assign initial values for all Parameters. You can also
assign other attributes when doing this:

params = gmodel.make_params()

This creates the Parameters but does not automatically give them initial values since it has no idea what the scale
should be. If left unspecified, the initial values will be -Inf, which will generally fail to give useful results. You can
set initial values for parameters with keyword arguments to make_params():

params = gmodel.make_params(cen=0.3, amp=3, wid=1.25)

or assign them (and other parameter properties) after the Parameters class has been created.
A Model has several methods associated with it. For example, one can use the eval() method to evaluate the model or
the fit() method to fit data to this model with a Parameter object. Both of these methods can take explicit keyword
arguments for the parameter values. For example, one could use eval() to calculate the predicted function:

x_eval = linspace(0, 10, 201)
y_eval = gmodel.eval(params, x=x_eval)

or with:

y_eval = gmodel.eval(x=x_eval, cen=6.5, amp=100, wid=2.0)

62 Chapter 7. Modeling Data and Curve Fitting


Non-Linear Least-Squares Minimization and Curve-Fitting for Python, Release 1.2.0

Admittedly, this is a slightly long-winded way to calculate a Gaussian function, given that you could have called your
gaussian function directly. But now that the model is set up, we can use its fit() method to fit this model to data,
as with:

result = gmodel.fit(y, params, x=x)

or with:

result = gmodel.fit(y, x=x, cen=0.5, amp=10, wid=2.0)

Putting everything together, included in the examples folder with the source code, is:

# <examples/doc_model_gaussian.py>
import matplotlib.pyplot as plt
from numpy import exp, loadtxt, pi, sqrt

from lmfit import Model

data = loadtxt('model1d_gauss.dat')
x = data[:, 0]
y = data[:, 1]

def gaussian(x, amp, cen, wid):
    """1-d gaussian: gaussian(x, amp, cen, wid)"""
    return (amp / (sqrt(2*pi) * wid)) * exp(-(x-cen)**2 / (2*wid**2))

gmodel = Model(gaussian)
result = gmodel.fit(y, x=x, amp=5, cen=5, wid=1)

print(result.fit_report())

plt.plot(x, y, 'o')
plt.plot(x, result.init_fit, '--', label='initial fit')
plt.plot(x, result.best_fit, '-', label='best fit')
plt.legend()
plt.show()
# <end examples/doc_model_gaussian.py>

which is pretty compact and to the point. The returned result will be a ModelResult object. As we will see below,
this has many components, including a fit_report() method, which will show:

[[Model]]
Model(gaussian)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 33
# data points = 101
# variables = 3
chi-square = 3.40883599
reduced chi-square = 0.03478404
Akaike info crit = -336.263713
Bayesian info crit = -328.418352
R-squared = 0.98533348
[[Variables]]
amp: 8.88021893 +/- 0.11359522 (1.28%) (init = 5)
cen: 5.65866102 +/- 0.01030495 (0.18%) (init = 5)
wid: 0.69765478 +/- 0.01030505 (1.48%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(amp, wid) = +0.5774

As the script shows, the result will also have init_fit for the fit with the initial parameter values and a best_fit for
the fit with the best fit parameter values. These can be used to generate the following plot:

which shows the data in blue dots, the best fit as a solid green line, and the initial fit as a dashed orange line.
Note that the model fitting was really performed with:

gmodel = Model(gaussian)
result = gmodel.fit(y, params, x=x, amp=5, cen=5, wid=1)

These lines clearly express that we want to turn the gaussian function into a fitting model, and then fit the 𝑦(𝑥) data
to this model, starting with values of 5 for amp, 5 for cen and 1 for wid. In addition, all the other features of lmfit are
included: Parameters can have bounds and constraints and the result is a rich object that can be reused to explore the
model fit in detail.
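
For example (a sketch reusing gmodel, x, and y from the script above), bounds can be attached to the Parameters
before fitting:

params = gmodel.make_params(amp=5, cen=5, wid=1)
params['wid'].set(min=0)           # a peak width cannot be negative
params['cen'].set(min=0, max=10)   # keep the center inside the data range
result = gmodel.fit(y, params, x=x)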


7.2 The Model class

The Model class provides a general way to wrap a pre-defined function as a fitting model.
class Model(func, independent_vars=None, param_names=None, nan_policy='raise', prefix='', name=None, **kws)
Create a model from a user-supplied model function.
The model function will normally take an independent variable (generally, the first argument) and a series of
arguments that are meant to be parameters for the model. It will return an array of data to model some data as
for a curve-fitting problem.
Parameters
• func (callable) – Function to be wrapped.
• independent_vars (list of str, optional) – Arguments to func that are independent vari-
ables (default is None).
• param_names (list of str, optional) – Names of arguments to func that are to be made
into parameters (default is None).
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• prefix (str, optional) – Prefix used for the model.
• name (str, optional) – Name for the model. When None (default) the name is the same
as the model function (func).
• **kws (dict, optional) – Additional keyword arguments to pass to model function.

Notes

1. Parameter names are inferred from the function arguments, and a residual function is automatically con-
structed.
2. The model function must return an array that will be the same size as the data being modeled.
3. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data

Examples

The model function will normally take an independent variable (generally, the first argument) and a series of
arguments that are meant to be parameters for the model. Thus, a simple peak using a Gaussian defined as:

>>> import numpy as np
>>> def gaussian(x, amp, cen, wid):
...     return amp * np.exp(-(x-cen)**2 / wid)

can be turned into a Model with:


>>> gmodel = Model(gaussian)

this will automatically discover the names of the independent variables and parameters:

>>> print(gmodel.param_names, gmodel.independent_vars)
['amp', 'cen', 'wid'] ['x']

7.2.1 Model class Methods

Model.eval(params=None, **kwargs)
Evaluate the model with supplied parameters and keyword arguments.
Parameters
• params (Parameters, optional) – Parameters to use in Model.
• **kwargs (optional) – Additional keyword arguments to pass to model function.
Returns
Value of model given the parameters and other arguments.
Return type
numpy.ndarray, float, int or complex

Notes

1. if params is None, the values for all parameters are expected to be provided as keyword arguments.
2. If params is given, and a keyword argument for a parameter value is also given, the keyword argument will
be used in place of the value in params (see the sketch below).
3. all non-parameter arguments for the model function, including all the independent variables, will need to
be passed in using keyword arguments.
4. The return type is generally numpy.ndarray, but may depend on the model function and input independent
variables. That is, return values may be Python float, int, or complex values.
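
A short sketch of these rules, reusing the gmodel, params, and x_eval from the Gaussian example above:

y1 = gmodel.eval(params, x=x_eval)                 # values taken from params
y2 = gmodel.eval(params, x=x_eval, amp=10)         # keyword overrides params['amp']
y3 = gmodel.eval(x=x_eval, amp=10, cen=5, wid=2)   # no params: all values as keywords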
Model.fit(data, params=None, weights=None, method='leastsq', iter_cb=None, scale_covar=True,
verbose=False, fit_kws=None, nan_policy=None, calc_covar=True, max_nfev=None, **kwargs)
Fit the model to the data using the supplied Parameters.
Parameters
• data (array_like) – Array of data to be fit.
• params (Parameters, optional) – Parameters to use in fit (default is None).
• weights (array_like, optional) – Weights to use for the calculation of the fit residual
[i.e., weights*(data-fit)]. Default is None; must have the same size as data.
• method (str, optional) – Name of fitting method to use (default is ‘leastsq’).
• iter_cb (callable, optional) – Callback function to call at each iteration (default is
None).
• scale_covar (bool, optional) – Whether to automatically scale the covariance matrix
when calculating uncertainties (default is True).


• verbose (bool, optional) – Whether to print a message when a new parameter is added
because of a hint (default is True).
• fit_kws (dict, optional) – Options to pass to the minimizer being used.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – What to do when encoun-
tering NaNs when fitting Model.
• calc_covar (bool, optional) – Whether to calculate the covariance matrix (default is
True) for solvers other than ‘leastsq’ and ‘least_squares’. Requires the numdifftools pack-
age to be installed.
• max_nfev (int or None, optional) – Maximum number of function evaluations (de-
fault is None). The default value depends on the fitting method.
• **kwargs (optional) – Arguments to pass to the model function, possibly overriding pa-
rameters.
Return type
ModelResult

Notes

1. if params is None, the values for all parameters are expected to be provided as keyword arguments. Mixing
params and keyword arguments is deprecated (see Model.eval).
2. all non-parameter arguments for the model function, including all the independent variables will need to
be passed in using keyword arguments.
3. Parameters are copied on input, so that the original Parameter objects are unchanged, and the updated values
are in the returned ModelResult.

Examples

Take t to be the independent variable and data to be the curve we will fit. Use keyword arguments to set initial
guesses:

>>> result = my_model.fit(data, tau=5, N=3, t=t)

Or, for more control, pass a Parameters object.

>>> result = my_model.fit(data, params, t=t)

Model.guess(data, x, **kws)
Guess starting values for the parameters of a Model.
This is not implemented for all models, but is available for many of the built-in models.
Parameters
• data (array_like) – Array of data (i.e., y-values) to use to guess parameter values.
• x (array_like) – Array of values for the independent variable (i.e., x-values).
• **kws (optional) – Additional keyword arguments, passed to model function.
Returns
Initial, guessed values for the parameters of a Model.


Return type
Parameters
Raises
NotImplementedError – If the guess method is not implemented for a Model.

Notes

Should be implemented for each model subclass to run self.make_params(), update starting values and return a
Parameters object.
Changed in version 1.0.3: Argument x is now explicitly required to estimate starting values.
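
A minimal sketch using one of the built-in models that does implement guess() (assuming x and y are 1-D data
arrays containing a peak):

from lmfit.models import GaussianModel

gmod = GaussianModel()
params = gmod.guess(y, x=x)    # estimate amplitude, center, sigma from the data
result = gmod.fit(y, params, x=x)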
Model.make_params(verbose=False, **kwargs)
Create a Parameters object for a Model.
Parameters
• verbose (bool, optional) – Whether to print out messages (default is False).
• **kwargs (optional) – Parameter names and initial values, or dictionaries of values and
attributes.
Returns
params – Parameters object for the Model.
Return type
Parameters

Notes

1. Parameter values can be numbers (floats or ints) to set the parameter value, or dictionaries with any of the
following keywords: value, vary, min, max, expr, brute_step, is_init_value to set those parameter
attributes.
2. This method will also apply any default values or parameter hints that may have been defined for the model.

Example

>>> gmodel = GaussianModel(prefix='peak_') + LinearModel(prefix='bkg_')
>>> gmodel.make_params(peak_center=3200, bkg_offset=0, bkg_slope=0,
...                    peak_amplitude=dict(value=100, min=2),
...                    peak_sigma=dict(value=25, min=0, max=1000))

Model.set_param_hint(name, **kwargs)
Set hints to use when creating parameters with make_params().
The given hint can include optional bounds and constraints (value, vary, min, max, expr), which will be
used by Model.make_params() when building default parameters.
While this can be used to set initial values, Model.make_params or the function create_params should be pre-
ferred for creating parameters with initial values.
The intended use here is to control how a Model should create parameters, such as setting bounds that are required
by the mathematics of the model (for example, that a peak width cannot be negative), or to define common
constrained parameters.


Parameters
• name (str) – Parameter name, can include the model’s prefix or not.
• **kwargs (optional) – Arbitrary keyword arguments, each of which needs to be a Parameter
attribute. Can be any of the following:
– value (float, optional): Numerical Parameter value.
– vary (bool, optional): Whether the Parameter is varied during a fit (default is True).
– min (float, optional): Lower bound for value (default is -numpy.inf, no lower bound).
– max (float, optional): Upper bound for value (default is numpy.inf, no upper bound).
– expr (str, optional): Mathematical expression used to constrain the value during the fit.

Example

>>> model = GaussianModel()
>>> model.set_param_hint('sigma', min=0)

See Using parameter hints.


Model.print_param_hints(colwidth=8)
Print a nicely aligned text-table of parameter hints.
Parameters
colwidth (int, optional) – Width of each column, except for first and last columns.

7.2.2 Model class Attributes

func
The model function used to calculate the model.
independent_vars
List of strings for names of the independent variables.
nan_policy
Describes what to do for NaNs that indicate missing values in the data. The choices are:
• 'raise': Raise a ValueError (default)
• 'propagate': Do not check for NaNs or missing values. The fit will try to ignore them.
• 'omit': Remove NaNs or missing observations in data. If pandas is installed, pandas.isnull() is used,
otherwise numpy.isnan() is used.
name
Name of the model, used only in the string representation of the model. By default this will be taken from the
model function.


opts
Extra keyword arguments to pass to model function. Normally this will be determined internally and should not
be changed.
param_hints
Dictionary of parameter hints. See Using parameter hints.
param_names
List of strings of parameter names.
prefix
Prefix used for name-mangling of parameter names. The default is ''. If a particular Model has arguments
amplitude, center, and sigma, these would become the parameter names. Using a prefix of 'g1_' would
convert these parameter names to g1_amplitude, g1_center, and g1_sigma. This can be essential to avoid
name collision in composite models.

7.2.3 Determining parameter names and independent variables for a function

The Model created from the supplied function func will create a Parameters object; names are inferred from the
function arguments, and a residual function is automatically constructed.
By default, the independent variable is taken as the first argument to the function. You can, of course, explicitly set this,
and will need to do so if the independent variable is not first in the list, or if there is actually more than one independent
variable.
If not specified, Parameters are constructed from all positional arguments and all keyword arguments that have a default
value that is numerical, except the independent variable, of course. Importantly, the Parameters can be modified after
creation. In fact, you will have to do this because none of the parameters have valid initial values. In addition, one can
place bounds and constraints on Parameters, or fix their values.
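
For example, a sketch of adjusting Parameters after creation, reusing the gmodel from the Gaussian example above:

params = gmodel.make_params()
params['amp'].set(value=10, min=0)       # initial value plus a lower bound
params['cen'].set(value=5)               # initial value only
params['wid'].set(value=1, vary=False)   # fix the width at 1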

7.2.4 Explicitly specifying independent_vars

As we saw for the Gaussian example above, creating a Model from a function is fairly easy. Let’s try another one:

import numpy as np
from lmfit import Model

def decay(t, tau, N):
    return N*np.exp(-t/tau)

decay_model = Model(decay)
print(f'independent variables: {decay_model.independent_vars}')

params = decay_model.make_params()
print('\nParameters:')
for pname, par in params.items():
    print(pname, par)

independent variables: ['t']

Parameters:
tau <Parameter 'tau', value=-inf, bounds=[-inf:inf]>
N <Parameter 'N', value=-inf, bounds=[-inf:inf]>

Here, t is assumed to be the independent variable because it is the first argument to the function. The other function
arguments are used to create parameters for the model.
If you want tau to be the independent variable in the above example, you can say so:

decay_model = Model(decay, independent_vars=['tau'])
print(f'independent variables: {decay_model.independent_vars}')

params = decay_model.make_params()
print('\nParameters:')
for pname, par in params.items():
    print(pname, par)

independent variables: ['tau']

Parameters:
t <Parameter 't', value=-inf, bounds=[-inf:inf]>
N <Parameter 'N', value=-inf, bounds=[-inf:inf]>

You can also supply multiple values for multi-dimensional functions with multiple independent variables. In fact, the
meaning of independent variable here is simple, and based on how it treats arguments of the function you are modeling:
independent variable
A function argument that is not a parameter or otherwise part of the model, and that will be required to be
explicitly provided as a keyword argument for each fit with Model.fit() or evaluation with Model.eval().
Note that independent variables are not required to be arrays, or even floating point numbers.
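
As a hypothetical sketch (the function and names here are invented for illustration), the independent variable config
below is a plain dict rather than an array:

import numpy as np
from lmfit import Model

def spectrum(config, amp, tau):
    # 'config' is a dict: the independent variable can be any Python object
    return amp * np.exp(-config['x'] / tau) + config['offset']

smodel = Model(spectrum)    # 'config', the first argument, is the independent variable
params = smodel.make_params(amp=3, tau=2)
y_calc = smodel.eval(params, config={'x': np.linspace(0, 5, 50), 'offset': 0.1})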

7.2.5 Functions with keyword arguments

If the model function had keyword parameters, these would be turned into Parameters if the supplied default value was
a valid number (but not None, True, or False).

def decay2(t, tau, N=10, check_positive=False):
    if check_positive:
        arg = abs(t)/max(1.e-9, abs(tau))
    else:
        arg = t/tau
    return N*np.exp(arg)

mod = Model(decay2)
params = mod.make_params()
print('Parameters:')
for pname, par in params.items():
    print(pname, par)


Parameters:
tau <Parameter 'tau', value=-inf, bounds=[-inf:inf]>
N <Parameter 'N', value=10, bounds=[-inf:inf]>

Here, even though N is a keyword argument to the function, it is turned into a parameter, with the default numerical
value as its initial value. By default, it is permitted to be varied in the fit – the 10 is taken as an initial value, not a fixed
value. On the other hand, the check_positive keyword argument was not converted to a parameter because it has a
boolean default value. In some sense, check_positive becomes like an independent variable to the model. However,
because it has a default value it is not required to be given for each model evaluation or fit, as independent variables
are.

7.2.6 Defining a prefix for the Parameters

As we will see in the next chapter when combining models, it is sometimes necessary to decorate the parameter names in
the model, but still have them be correctly used in the underlying model function. This would be necessary, for example,
if two parameters in a composite model (see Composite Models : adding (or multiplying) Models or examples in the
next chapter) would have the same name. To avoid this, we can add a prefix to the Model which will automatically
do this mapping for us.

def myfunc(x, amplitude=1, center=0, sigma=1):
    # function definition, for now just ``pass``
    pass

mod = Model(myfunc, prefix='f1_')
params = mod.make_params()
print('Parameters:')
for pname, par in params.items():
    print(pname, par)

Parameters:
f1_amplitude <Parameter 'f1_amplitude', value=1, bounds=[-inf:inf]>
f1_center <Parameter 'f1_center', value=0, bounds=[-inf:inf]>
f1_sigma <Parameter 'f1_sigma', value=1, bounds=[-inf:inf]>

You would refer to these parameters as f1_amplitude and so forth, and the model will know to map these to the
amplitude argument of myfunc.
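
A short sketch of why this matters, anticipating the composite models of the next chapter: two copies of the same
function can coexist in one model because their prefixes keep the parameter names distinct:

mod = Model(myfunc, prefix='f1_') + Model(myfunc, prefix='f2_')
params = mod.make_params()
print(sorted(params.keys()))
# ['f1_amplitude', 'f1_center', 'f1_sigma', 'f2_amplitude', 'f2_center', 'f2_sigma']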

7.2.7 Initializing model parameter values

As mentioned above, creating a model does not automatically create the corresponding Parameters. These can be
created with either the create_params() function, or the Model.make_params() method of the corresponding
instance of Model.
When creating Parameters, each parameter is created with an invalid initial value of -Inf if it is not set explicitly. That
is to say, parameter values must be initialized in order for the model to evaluate a finite result or be used in a fit. There
are a few different ways to do this:
1. You can supply initial values in the definition of the model function.
2. You can initialize the parameters when creating parameters with Model.make_params().
3. You can create a Parameters object with Parameters or create_params().


4. You can supply initial values for the parameters when calling Model.eval() or Model.fit() methods.
Generally, using the Model.make_params() method is recommended. The methods described above can be mixed,
allowing you to overwrite initial values at any point in the process of defining and using the model.

Initializing values in the function definition

To supply initial values for parameters in the definition of the model function, you can simply supply a default value:

def myfunc(x, a=1, b=0):
    return a*x + 10*a - b

instead of using:

def myfunc(x, a, b):
    return a*x + 10*a - b

This has the advantage of working at the function level – all parameters with keywords can be treated as options. It
also means that some default initial value will always be available for the parameter.

Initializing values with Model.make_params()

When creating parameters with Model.make_params() you can specify initial values. To do this, use keyword ar-
guments for the parameter names. You can either set initial values as numbers (floats or ints) or as dictionaries with
keywords of (value, vary, min, max, expr, brute_step, and is_init_value) to specify these parameter attributes.

mod = Model(myfunc)

# simply supply initial values
pars = mod.make_params(a=3, b=0.5)

# supply initial values, attributes for bounds, etcetera:
pars_bounded = mod.make_params(a=dict(value=3, min=0),
                               b=dict(value=0.5, vary=False))

Creating a Parameters object directly

You can also create your own Parameters directly using create_params(). This is independent of using the Model
class, but is essentially equivalent to Model.make_params() except with less checking of errors for model prefixes
and so on.

from lmfit import create_params

mod = Model(myfunc)

# simply supply initial values
pars = create_params(a=3, b=0.5)

# supply initial values and attributes for bounds, etc:
pars_bounded = create_params(a=dict(value=3, min=0),
                             b=dict(value=0.5, vary=False))

Because less error checking is done, Model.make_params() should probably be preferred when using Models.


Initializing parameter values for a model with keyword arguments

Finally, you can explicitly supply initial values when using a model. That is, as with Model.make_params(), you can
include values as keyword arguments to either the Model.eval() or Model.fit() methods:

x = linspace(0, 10, 100)
y_eval = mod.eval(x=x, a=7.0, b=-2.0)
y_sim = y_eval + random.normal(0, 0.2, x.size)
out = mod.fit(y_sim, pars, x=x, a=3.0, b=0.0)

These approaches to initialization provide many opportunities for setting initial values for parameters. The methods
can be combined, so that you can set parameter hints but then change the initial value explicitly with Model.fit().
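
A sketch combining these approaches, reusing myfunc, x, and y_sim from the examples above:

mod = Model(myfunc)
mod.set_param_hint('a', min=0)           # a hint, applied by make_params()
pars = mod.make_params(a=3, b=0.5)       # initial values at creation time
out = mod.fit(y_sim, pars, x=x, b=1.0)   # keyword overrides b's starting value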

7.2.8 Using parameter hints

After a model has been created, but prior to creating parameters with Model.make_params(), you can define param-
eter hints for that model. This allows you to set other parameter attributes for bounds, whether it is varied in the fit, or
set a default constraint expression for a parameter. You can also set the initial value, but that is not really the intention
of the method, which is really to let you say things about the idealized Model, for example that some values may not
make sense for some parameters, or that some parameters might be a small change from another parameter and so be
fixed or constrained by default.
To set a parameter hint, you can use Model.set_param_hint(), as with:

mod = Model(myfunc)
mod.set_param_hint('bounded_parameter', min=0, max=1.0)
pars = mod.make_params()

Parameter hints are discussed in more detail in section Using parameter hints.
Parameter hints are stored in a model’s param_hints attribute, which is simply a nested dictionary:

print('Parameter hints:')
for pname, par in mod.param_hints.items():
    print(pname, par)

Parameter hints:
bounded_parameter {'min': 0, 'max': 1.0}

You can change this dictionary directly or use the Model.set_param_hint() method. Either way, these parameter
hints are used by Model.make_params() when making parameters.
Parameter hints also allow you to create new parameters. This can be useful to make derived parameters with constraint
expressions. For example to get the full-width at half maximum of a Gaussian model, one could use a parameter hint
of:

mod = Model(gaussian)
mod.set_param_hint('wid', min=0)
mod.set_param_hint('fwhm', expr='2.3548*wid')
params = mod.make_params(amp={'value': 10, 'min': 0.1, 'max': 2000},
                         cen=5.5, wid=1.25)
params.pretty_print()


Name   Value   Min   Max   Stderr  Vary   Expr        Brute_Step
amp    10      0.1   2000  None    True   None        None
cen    5.5     -inf  inf   None    True   None        None
fwhm   2.944   -inf  inf   None    False  2.3548*wid  None
wid    1.25    0     inf   None    True   None        None

With that definition, the value (and uncertainty) of the fwhm parameter will be reported in the output of any fit done
with that model.

7.2.9 Data Types for data and independent data with Model

The model as defined by your model function will use the independent variable(s) you specify to best match the data
you provide. The model is meant to be an abstract representation for data, but when you do a fit with Model.fit(),
you really need to pass in values for the data to be modeled and the independent data used to calculate that data.
The mathematical solvers used by lmfit all work exclusively with 1-dimensional numpy arrays of datatype (dtype)
float64. The value of the calculation (model-data)*weights, using the calculation of your model function and
the data and weights you pass in, will be coerced to a 1-dimensional ndarray with dtype float64 when it is passed to
the solver.
If the data you pass to Model.fit() is not an ndarray of dtype float64 but is instead a tuple of numbers, a list of
numbers, or a pandas.Series, it will be coerced into an ndarray. If your data is a list, tuple, or Series of complex
numbers, it will be coerced to an ndarray with dtype complex128.
If your data is a numpy array of dtype float32, it will not be coerced to float64, as we assume this was an intentional
choice. That may make all of the calculations done in your model function be in single-precision which may make fits
less sensitive, but the values will be converted to float64 before being sent to the solver, so the fit should work.
The independent data for models using Model are meant to be truly independent, and are not required to be strictly
numerical or objects that are easily converted to arrays of numbers. That is, independent data for a model could be a
dictionary, an instance of a user-defined class, or other type of structured data. You can use independent data any way
you want in your model function.
But, as with almost all the examples given here, independent data is often also a 1-dimensional array of values, say
x, and a simple view of the fit would be to plot the data as y as a function of x. Again, this is not required, but it is
very common. Because of this very common usage, if your independent data is a tuple or list of numbers or pandas.
Series, it will be coerced to be an ndarray of dtype float64. But as with the primary data, if your independent data
is an ndarray of some different dtype (float32, uint16, etc.), it will not be coerced to float64, as we assume this
was intentional.

Note: Data and independent data that are tuples or lists of numbers, or pandas.Series, will be coerced to an ndarray
of dtype float64 before passing to the model function. Data with other dtypes (or independent data of other object
types such as dicts) will not be coerced to float64.


7.2.10 Saving and Loading Models

New in version 0.9.8.


It is sometimes desirable to save a Model for later use outside of the code used to define the model. Lmfit provides a
save_model() function that will save a Model to a file. There is also a companion load_model() function that can
read this file and reconstruct a Model from it.
Saving a model turns out to be somewhat challenging. The main issue is that Python is not normally able to serialize
a function (such as the model function making up the heart of the Model) in a way that can be reconstructed into a
callable Python object. The dill package can sometimes serialize functions, but with the limitation that it can be used
only in the same version of Python. In addition, class methods used as model functions will not retain the rest of the
class attributes and methods, and so may not be usable. With all those warnings, it should be emphasized that if you
are willing to save or reuse the definition of the model function as Python code, then saving the Parameters and rest of
the components that make up a model presents no problem.
If the dill package is installed, the model function will also be saved using it. But because saving the model function
is not always reliable, saving a model will always save the name of the model function. The load_model() takes an
optional funcdefs argument that can contain a dictionary of function definitions with the function names as keys and
function objects as values. If one of the dictionary keys matches the saved name, the corresponding function object
will be used as the model function. If it is not found by name, and if dill was used to save the model, and if dill is
available at run-time, an attempt will be made to use the dill-encoded function. Note that this approach will generally allow you
to save a model that can be used by another installation of the same version of Python, but may not work across Python
versions. For preserving fits for extended periods of time (say, archiving for documentation of scientific results), we
strongly encourage you to save the full Python code used for the model function and fit process.
save_model(model, fname)
Save a Model to a file.
Parameters
• model (Model) – Model to be saved.
• fname (str) – Name of file for saved Model.
load_model(fname, funcdefs=None)
Load a saved Model from a file.
Parameters
• fname (str) – Name of file containing saved Model.
• funcdefs (dict, optional) – Dictionary of custom function names and definitions.
Returns
Model object loaded from file.
Return type
Model
As a simple example, one can save a model as:

# <examples/doc_model_savemodel.py>
import numpy as np

from lmfit.model import Model, save_model

def mysine(x, amp, freq, shift):
    return amp * np.sin(x*freq + shift)

sinemodel = Model(mysine)
pars = sinemodel.make_params(amp=1, freq=0.25, shift=0)

save_model(sinemodel, 'sinemodel.sav')
# <end examples/doc_model_savemodel.py>

To load that later, one might do:

# <examples/doc_model_loadmodel.py>
import os
import sys

import matplotlib.pyplot as plt
import numpy as np

from lmfit.model import load_model

if not os.path.exists('sinemodel.sav'):
    os.system(f"{sys.executable} doc_model_savemodel.py")

def mysine(x, amp, freq, shift):
    return amp * np.sin(x*freq + shift)

data = np.loadtxt('sinedata.dat')
x = data[:, 0]
y = data[:, 1]

model = load_model('sinemodel.sav', funcdefs={'mysine': mysine})
params = model.make_params(amp=dict(value=3, min=0),
                           freq=0.52,
                           shift=dict(value=0, min=-1, max=1))

result = model.fit(y, params, x=x)
print(result.fit_report())

plt.plot(x, y, 'o')
plt.plot(x, result.best_fit, '-')
plt.show()
# <end examples/doc_model_loadmodel.py>

See also Saving and Loading ModelResults.


7.3 The ModelResult class

A ModelResult (which had been called ModelFit prior to version 0.9) is the object returned by Model.fit(). It
is a subclass of Minimizer, and so contains many of the fit results. Of course, it knows the Model and the set of
Parameters used in the fit, and it has methods to evaluate the model, to fit the data (or re-fit the data with changes to
the parameters, or fit with different or modified data) and to print out a report for that fit.
While a Model encapsulates your model function, it is fairly abstract and does not contain the parameters or data used
in a particular fit. A ModelResult does contain parameters and data as well as methods to alter and re-do fits. Thus the
Model is the idealized model while the ModelResult is the messier, more complex (but perhaps more useful) object
that represents a fit with a set of parameters to data with a model.
A ModelResult has several attributes holding values for fit results, and several methods for working with fits. These
include statistics inherited from Minimizer useful for comparing different models, including chisqr, redchi, aic,
and bic.
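
For instance, a minimal sketch of inspecting those statistics (assuming result is a ModelResult returned by
Model.fit()):

print(f'chi-square:         {result.chisqr:.4f}')
print(f'reduced chi-square: {result.redchi:.4f}')
print(f'AIC, BIC:           {result.aic:.2f}, {result.bic:.2f}')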
class ModelResult(model, params, data=None, weights=None, method='leastsq', fcn_args=None,
fcn_kws=None, iter_cb=None, scale_covar=True, nan_policy='raise', calc_covar=True,
max_nfev=None, **fit_kws)
Result from the Model fit.
This has many attributes and methods for viewing and working with the results of a fit using Model. It inherits
from Minimizer, so that it can be used to modify and re-run the fit for the Model.
Parameters
• model (Model) – Model to use.
• params (Parameters) – Parameters with initial values for model.
• data (array_like, optional) – Data to be modeled.
• weights (array_like, optional) – Weights to multiply (data-model) for fit residual.
• method (str, optional) – Name of minimization method to use (default is ‘leastsq’).
• fcn_args (sequence, optional) – Positional arguments to send to model function.
• fcn_kws (dict, optional) – Keyword arguments to send to model function.
• iter_cb (callable, optional) – Function to call on each iteration of fit.
• scale_covar (bool, optional) – Whether to scale covariance matrix for uncertainty
evaluation.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – What to do when encoun-
tering NaNs when fitting Model.
• calc_covar (bool, optional) – Whether to calculate the covariance matrix (default is
True) for solvers other than ‘leastsq’ and ‘least_squares’. Requires the numdifftools pack-
age to be installed.
• max_nfev (int or None, optional) – Maximum number of function evaluations (de-
fault is None). The default value depends on the fitting method.
• **fit_kws (optional) – Keyword arguments to send to minimization routine.


7.3.1 ModelResult methods

ModelResult.eval(params=None, **kwargs)
Evaluate model function.
Parameters
• params (Parameters, optional) – Parameters to use.
• **kwargs (optional) – Options to send to Model.eval().
Returns
Array or value for the evaluated model.
Return type
numpy.ndarray, float, int, or complex
ModelResult.eval_components(params=None, **kwargs)
Evaluate each component of a composite model function.
Parameters
• params (Parameters, optional) – Parameters, defaults to ModelResult.params.
• **kwargs (optional) – Keyword arguments to pass to model function.
Returns
Keys are prefixes of component models, and values are the estimated model value for each com-
ponent of the model.
Return type
dict
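
A sketch of typical usage, assuming result came from fitting a composite model built with prefixes 'peak_' and
'bkg_' to data y(x):

import matplotlib.pyplot as plt

comps = result.eval_components(x=x)
plt.plot(x, comps['peak_'], label='peak component')
plt.plot(x, comps['bkg_'], label='background component')
plt.legend()
plt.show()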
ModelResult.fit(data=None, params=None, weights=None, method=None, nan_policy=None, **kwargs)
Re-perform fit for a Model, given data and params.
Parameters
• data (array_like, optional) – Data to be modeled.
• params (Parameters, optional) – Parameters with initial values for model.
• weights (array_like, optional) – Weights to multiply (data-model) for fit residual.
• method (str, optional) – Name of minimization method to use (default is ‘leastsq’).
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – What to do when encoun-
tering NaNs when fitting Model.
• **kwargs (optional) – Keyword arguments to send to minimization routine.
ModelResult.fit_report(modelpars=None, show_correl=True, min_correl=0.1, sort_pars=False,
correl_mode='list')
Return a printable fit report.
The report contains fit statistics and best-fit values with uncertainties and correlations.
Parameters
• modelpars (Parameters, optional) – Known Model Parameters.
• show_correl (bool, optional) – Whether to show list of sorted correlations (default is
True).


• min_correl (float, optional) – Smallest correlation in absolute value to show (default
is 0.1).
• sort_pars (callable, optional) – Whether to show parameter names sorted in al-
phanumerical order (default is False). If False, then the parameters will be listed in the
order as they were added to the Parameters dictionary. If callable, then this (one argument)
function is used to extract a comparison key from each list element.
• correl_mode ({'list', 'table'} str, optional) – Mode for how to show correlations.
Can be either ‘list’ (default) to show a sorted (if sort_pars is True) list of correlation values,
or ‘table’ to show a complete, formatted table of correlations.
Returns
Multi-line text of fit report.
Return type
str
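For example, to show the correlations as a full table rather than the default sorted list (a sketch, assuming an existing ModelResult named result):

>>> print(result.fit_report(correl_mode='table', min_correl=0.3))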
ModelResult.summary()
Return a dictionary with statistics and attributes of a ModelResult.
Returns
Dictionary of statistics and many attributes from a ModelResult.
Return type
dict

Notes

1. Values for data arrays are not included.


2. The result summary dictionary will include the following entries:

model, method, ndata, nvarys, nfree, chisqr, redchi, aic, bic, rsquared, nfev, max_nfev,
aborted, errorbars, success, message, lmdif_message, ier, nan_policy, scale_covar,
calc_covar, ci_out, col_deriv, flatchain, call_kws, var_names, user_options, kws,
init_values, best_values, and params.

where ‘params’ is a list of parameter “states”: tuples with entries of (name, value, vary, expr, min,
max, brute_step, stderr, correl, init_value, user_data).
3. The result will include only plain Python objects, and so should be easily serializable with JSON or similar
tools.
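Because the summary holds only plain Python objects, it can be serialized directly; a minimal sketch writing it to JSON (assuming an existing ModelResult named result):

>>> import json
>>> with open('fit_summary.json', 'w') as fh:
...     json.dump(result.summary(), fh, indent=2)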
ModelResult.conf_interval(**kwargs)
Calculate the confidence intervals for the variable parameters.
Confidence intervals are calculated using the confidence.conf_interval() function and keyword arguments
(**kwargs) are passed to that function. The result is stored in the ci_out attribute so that it can be accessed
without recalculating them.
ModelResult.ci_report(with_offset=True, ndigits=5, **kwargs)
Return a formatted text report of the confidence intervals.
Parameters
• with_offset (bool, optional) – Whether to subtract best value from all other values
(default is True).
• ndigits (int, optional) – Number of significant digits to show (default is 5).


• **kwargs (optional) – Keyword arguments that are passed to the conf_interval function.
Returns
Text of formatted report on confidence intervals.
Return type
str
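A typical pattern (a sketch, assuming an existing ModelResult named result) is to calculate the intervals once and then print the formatted report:

>>> result.conf_interval(sigmas=[1, 2])
>>> print(result.ci_report())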
ModelResult.eval_uncertainty(params=None, sigma=1, **kwargs)
Evaluate the uncertainty of the model function.
This can be used to give confidence bands for the model from the uncertainties in the best-fit parameters.
Parameters
• params (Parameters, optional) – Parameters, defaults to ModelResult.params.
• sigma (float, optional) – Confidence level, i.e. how many sigma (default is 1).
• **kwargs (optional) – Values of options, independent variables, etcetera.
Returns
Uncertainty at each value of the model.
Return type
numpy.ndarray

Notes

1. This is based on the excellent and clear example from https://www.astro.rug.nl/software/kapteyn/kmpfittutorial.html#confidence-and-prediction-intervals, which references the original work of: J. Wolberg, Data Analysis Using the Method of Least Squares, 2006, Springer.
2. The value of sigma is the number of sigma values, and is converted to a probability. Values of 1, 2, or 3 give probabilities of 0.6827, 0.9545, and 0.9973, respectively. If the sigma value is < 1, it is interpreted as the probability itself. That is, sigma=1 and sigma=0.6827 will give the same results, within precision errors.
3. Also sets attributes of dely for the uncertainty of the model (which will be the same as the array returned
by this method) and dely_comps, a dictionary of dely for each component.

Examples

>>> out = model.fit(data, params, x=x)
>>> dely = out.eval_uncertainty(x=x)
>>> plt.plot(x, data)
>>> plt.plot(x, out.best_fit)
>>> plt.fill_between(x, out.best_fit-dely,
...                  out.best_fit+dely, color='#888888')

ModelResult.plot(datafmt='o', fitfmt='-', initfmt='--', xlabel=None, ylabel=None, yerr=None, numpoints=None, fig=None, data_kws=None, fit_kws=None, init_kws=None, ax_res_kws=None, ax_fit_kws=None, fig_kws=None, show_init=False, parse_complex='abs', title=None)
Plot the fit results and residuals using matplotlib.
The method will produce a matplotlib figure (if the package is available) with both the results of the fit and the residuals plotted. If the fit model included weights, errorbars will also be plotted. To show the initial conditions for the fit, pass the argument show_init=True.
Parameters


• datafmt (str, optional) – Matplotlib format string for data points.


• fitfmt (str, optional) – Matplotlib format string for fitted curve.
• initfmt (str, optional) – Matplotlib format string for initial conditions for the fit.
• xlabel (str, optional) – Matplotlib format string for labeling the x-axis.
• ylabel (str, optional) – Matplotlib format string for labeling the y-axis.
• yerr (numpy.ndarray, optional) – Array of uncertainties for data array.
• numpoints (int, optional) – If provided, the final and initial fit curves are evaluated not
only at data points, but refined to contain numpoints points in total.
• fig (matplotlib.figure.Figure, optional) – The figure to plot on. The default is
None, which means use the current pyplot figure or create one if there is none.
• data_kws (dict, optional) – Keyword arguments passed to the plot function for data
points.
• fit_kws (dict, optional) – Keyword arguments passed to the plot function for fitted
curve.
• init_kws (dict, optional) – Keyword arguments passed to the plot function for the
initial conditions of the fit.
• ax_res_kws (dict, optional) – Keyword arguments for the axes for the residuals plot.
• ax_fit_kws (dict, optional) – Keyword arguments for the axes for the fit plot.
• fig_kws (dict, optional) – Keyword arguments for a new figure, if a new one is created.
• show_init (bool, optional) – Whether to show the initial conditions for the fit (default
is False).
• parse_complex ({'abs', 'real', 'imag', 'angle'}, optional) – How to reduce com-
plex data for plotting. Options are one of: ‘abs’ (default), ‘real’, ‘imag’, or ‘angle’, which
correspond to the NumPy functions with the same name.
• title (str, optional) – Matplotlib format string for figure title.
Return type
matplotlib.figure.Figure
See also:

ModelResult.plot_fit
Plot the fit results using matplotlib.
ModelResult.plot_residuals
Plot the fit residuals using matplotlib.


Notes

The method combines ModelResult.plot_fit and ModelResult.plot_residuals.
If yerr is specified or if the fit model included weights, then matplotlib.axes.Axes.errorbar is used to plot the data. If yerr is not specified and the fit includes weights, yerr is set to 1/self.weights.
If model returns complex data, yerr is treated the same way that weights are in this case.
If fig is None then matplotlib.pyplot.figure(**fig_kws) is called, otherwise fig_kws is ignored.
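A minimal usage sketch (assuming an existing ModelResult named result and an array dy of 1-sigma data uncertainties) that produces and saves the combined figure:

>>> fig = result.plot(yerr=dy, show_init=True)
>>> fig.savefig('fit_and_residuals.png')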
ModelResult.plot_fit(ax=None, datafmt='o', fitfmt='-', initfmt='--', xlabel=None, ylabel=None, yerr=None,
numpoints=None, data_kws=None, fit_kws=None, init_kws=None, ax_kws=None,
show_init=False, parse_complex='abs', title=None)
Plot the fit results using matplotlib, if available.
The plot will include the data points, the initial fit curve (optional, with show_init=True), and the best-fit curve.
If the fit model included weights or if yerr is specified, errorbars will also be plotted.
Parameters
• ax (matplotlib.axes.Axes, optional) – The axes to plot on. The default is None, which means use the current pyplot axis or create one if there is none.
• datafmt (str, optional) – Matplotlib format string for data points.
• fitfmt (str, optional) – Matplotlib format string for fitted curve.
• initfmt (str, optional) – Matplotlib format string for initial conditions for the fit.
• xlabel (str, optional) – Matplotlib format string for labeling the x-axis.
• ylabel (str, optional) – Matplotlib format string for labeling the y-axis.
• yerr (numpy.ndarray, optional) – Array of uncertainties for data array.
• numpoints (int, optional) – If provided, the final and initial fit curves are evaluated not
only at data points, but refined to contain numpoints points in total.
• data_kws (dict, optional) – Keyword arguments passed to the plot function for data
points.
• fit_kws (dict, optional) – Keyword arguments passed to the plot function for fitted
curve.
• init_kws (dict, optional) – Keyword arguments passed to the plot function for the
initial conditions of the fit.
• ax_kws (dict, optional) – Keyword arguments for a new axis, if a new one is created.
• show_init (bool, optional) – Whether to show the initial conditions for the fit (default
is False).
• parse_complex ({'abs', 'real', 'imag', 'angle'}, optional) – How to reduce com-
plex data for plotting. Options are one of: ‘abs’ (default), ‘real’, ‘imag’, or ‘angle’, which
correspond to the NumPy functions with the same name.
• title (str, optional) – Matplotlib format string for figure title.
Return type
matplotlib.axes.Axes
See also:


ModelResult.plot_residuals
Plot the fit residuals using matplotlib.
ModelResult.plot
Plot the fit results and residuals using matplotlib.

Notes

For details about plot format strings and keyword arguments see documentation of matplotlib.axes.Axes.plot.
If yerr is specified or if the fit model included weights, then matplotlib.axes.Axes.errorbar is used to plot the data. If yerr is not specified and the fit includes weights, yerr is set to 1/self.weights.
If model returns complex data, yerr is treated the same way that weights are in this case.
If ax is None then matplotlib.pyplot.gca(**ax_kws) is called.
ModelResult.plot_residuals(ax=None, datafmt='o', yerr=None, data_kws=None, fit_kws=None,
ax_kws=None, parse_complex='abs', title=None)
Plot the fit residuals using matplotlib, if available.
If yerr is supplied or if the model included weights, errorbars will also be plotted.
Parameters
• ax (matplotlib.axes.Axes, optional) – The axes to plot on. The default is None, which means use the current pyplot axis or create one if there is none.
• datafmt (str, optional) – Matplotlib format string for data points.
• yerr (numpy.ndarray, optional) – Array of uncertainties for data array.
• data_kws (dict, optional) – Keyword arguments passed to the plot function for data
points.
• fit_kws (dict, optional) – Keyword arguments passed to the plot function for fitted
curve.
• ax_kws (dict, optional) – Keyword arguments for a new axis, if a new one is created.
• parse_complex ({'abs', 'real', 'imag', 'angle'}, optional) – How to reduce com-
plex data for plotting. Options are one of: ‘abs’ (default), ‘real’, ‘imag’, or ‘angle’, which
correspond to the NumPy functions with the same name.
• title (str, optional) – Matplotlib format string for figure title.
Return type
matplotlib.axes.Axes
See also:

ModelResult.plot_fit
Plot the fit results using matplotlib.
ModelResult.plot
Plot the fit results and residuals using matplotlib.


Notes

For details about plot format strings and keyword arguments see documentation of matplotlib.axes.Axes.plot.
If yerr is specified or if the fit model included weights, then matplotlib.axes.Axes.errorbar is used to plot the data. If yerr is not specified and the fit includes weights, yerr is set to 1/self.weights.
If model returns complex data, yerr is treated the same way that weights are in this case.
If ax is None then matplotlib.pyplot.gca(**ax_kws) is called.

7.3.2 ModelResult attributes

aic
Floating point best-fit Akaike Information Criterion statistic (see MinimizerResult – the optimization result).
best_fit
numpy.ndarray result of model function, evaluated at provided independent variables and with best-fit parameters.
best_values
Dictionary with parameter names as keys, and best-fit values as values.
bic
Floating point best-fit Bayesian Information Criterion statistic (see MinimizerResult – the optimization result).
chisqr
Floating point best-fit chi-square statistic (see MinimizerResult – the optimization result).
ci_out
Confidence interval data (see Calculation of confidence intervals) or None if the confidence intervals have not
been calculated.
covar
numpy.ndarray (square) covariance matrix returned from fit.
data
numpy.ndarray of data to compare to model.
dely
numpy.ndarray of estimated uncertainties in the y values of the model from ModelResult.eval_uncertainty() (see Calculating uncertainties in the model function).
dely_comps
A dictionary of estimated uncertainties in the y values of the model components, from ModelResult.eval_uncertainty() (see Calculating uncertainties in the model function).
errorbars
Boolean for whether error bars were estimated by fit.
ier
Integer returned code from scipy.optimize.leastsq.
init_fit
numpy.ndarray result of model function, evaluated at provided independent variables and with initial parameters.
init_params
Initial parameters.


init_values
Dictionary with parameter names as keys, and initial values as values.
iter_cb
Optional callable function, to be called at each fit iteration. This must take arguments of (params, iter,
resid, *args, **kws), where params will have the current parameter values, iter the iteration, resid the
current residual array, and *args and **kws as passed to the objective function. See Using a Iteration Callback
Function.
jacfcn
Optional callable function, to be called to calculate Jacobian array.
lmdif_message
String message returned from scipy.optimize.leastsq.
message
String message returned from minimize().
method
String naming fitting method for minimize().
call_kws
Dict of keyword arguments actually sent to the underlying solver with minimize().
model
Instance of Model used for model.
ndata
Integer number of data points.
nfev
Integer number of function evaluations used for fit.
nfree
Integer number of free parameters in fit.
nvarys
Integer number of independent, freely varying variables in fit.
params
Parameters used in fit; will contain the best-fit values.
redchi
Floating point reduced chi-square statistic (see MinimizerResult – the optimization result).
residual
numpy.ndarray for residual.
rsquared
Floating point $R^2$ statistic, defined for data $y$ and best-fit model $f$ as

$$R^2 = 1 - \frac{\sum_i (y_i - f_i)^2}{\sum_i (y_i - \bar{y})^2}$$
scale_covar
Boolean flag for whether to automatically scale covariance matrix.
success
Boolean value of whether fit succeeded.
weights
numpy.ndarray (or None) of weighting values to be used in fit. If not None, it will be used as a multiplicative
factor of the residual array, so that weights*(data - fit) is minimized in the least-squares sense.
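As an example of how weights are typically used, the inverse of the per-point measurement uncertainties can be passed to Model.fit (a minimal sketch; gmodel, y, x, and an uncertainty array eps are assumed from a previous setup):

result = gmodel.fit(y, params, x=x, weights=1.0/eps)

so that the minimized quantity weights*(data - fit) becomes (data - fit)/eps, the usual chi-square residual.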

7.3.3 Calculating uncertainties in the model function

We return to the first example above and ask not only for the uncertainties in the fitted parameters but for the range of val-
ues that those uncertainties mean for the model function itself. We can use the ModelResult.eval_uncertainty()
method of the model result object to evaluate the uncertainty in the model with a specified level for 𝜎.
That is, adding:

dely = result.eval_uncertainty(sigma=3)
plt.fill_between(x, result.best_fit-dely, result.best_fit+dely, color="#ABABAB",
                 label=r'3-$\sigma$ uncertainty band')

to the example fit to the Gaussian at the beginning of this chapter will give 3-𝜎 bands for the best-fit Gaussian, and
produce the figure below.

New in version 1.0.4.


If the model is a composite built from multiple components, the ModelResult.eval_uncertainty() method will evaluate the uncertainty of both the full model (often the sum of multiple components) as well as the uncertainty in each component. The uncertainty of the full model will be held in result.dely, and the uncertainties for each component will be held in the dictionary result.dely_comps, with keys that are the component prefixes.
An example script shows how the uncertainties in components of a composite model can be calculated and used:

# <examples/doc_model_uncertainty2.py>
import matplotlib.pyplot as plt
import numpy as np

from lmfit.models import ExponentialModel, GaussianModel

dat = np.loadtxt('NIST_Gauss2.dat')
x = dat[:, 1]
y = dat[:, 0]

model = (GaussianModel(prefix='g1_') +
GaussianModel(prefix='g2_') +
ExponentialModel(prefix='bkg_'))

params = model.make_params(bkg_amplitude=100, bkg_decay=80,
                           g1_amplitude=3000, g1_center=100, g1_sigma=10,
                           g2_amplitude=3000, g2_center=150, g2_sigma=10)

result = model.fit(y, params, x=x)


print(result.fit_report(min_correl=0.5))

comps = result.eval_components(x=x)
dely = result.eval_uncertainty(sigma=3)

fig, axes = plt.subplots(2, 2, figsize=(12.8, 9.6))

axes[0][0].plot(x, y, 'o', color='#99002299', markersize=3, label='data')
axes[0][0].plot(x, result.best_fit, '-', label='best fit')
axes[0][0].plot(x, result.init_fit, '--', label='initial fit')
axes[0][0].set_title('data, initial fit, and best-fit')
axes[0][0].legend()

axes[0][1].plot(x, y, 'o', color='#99002299', markersize=3, label='data')
axes[0][1].plot(x, result.best_fit, '-', label='best fit')
axes[0][1].fill_between(x, result.best_fit-dely, result.best_fit+dely,
                        color="#8A8A8A", label=r'3-$\sigma$ band')
axes[0][1].set_title('data, best-fit, and uncertainty band')
axes[0][1].legend()

axes[1][0].plot(x, result.best_fit, '-', label=r'best fit, 3-$\sigma$ band')
axes[1][0].fill_between(x,
                        result.best_fit-result.dely,
                        result.best_fit+result.dely,
                        color="#8A8A8A")

axes[1][0].plot(x, comps['bkg_'], label=r'background, 3-$\sigma$ band')
axes[1][0].fill_between(x,
                        comps['bkg_']-result.dely_comps['bkg_'],
                        comps['bkg_']+result.dely_comps['bkg_'],
                        color="#8A8A8A")

axes[1][0].plot(x, comps['g1_'], label=r'Gaussian #1, 3-$\sigma$ band')
axes[1][0].fill_between(x,
                        comps['g1_']-result.dely_comps['g1_'],
                        comps['g1_']+result.dely_comps['g1_'],
                        color="#8A8A8A")

axes[1][0].plot(x, comps['g2_'], label=r'Gaussian #2, 3-$\sigma$ band')
axes[1][0].fill_between(x,
                        comps['g2_']-result.dely_comps['g2_'],
                        comps['g2_']+result.dely_comps['g2_'],
                        color="#8A8A8A")
axes[1][0].set_title('model components with uncertainty bands')
axes[1][0].legend()

axes[1][1].plot(x, result.best_fit, '-', label='best fit')
axes[1][1].plot(x, 10*result.dely, label=r'3-$\sigma$ total (x10)')
axes[1][1].plot(x, 10*result.dely_comps['bkg_'], label=r'3-$\sigma$ background (x10)')
axes[1][1].plot(x, 10*result.dely_comps['g1_'], label=r'3-$\sigma$ Gaussian #1 (x10)')
axes[1][1].plot(x, 10*result.dely_comps['g2_'], label=r'3-$\sigma$ Gaussian #2 (x10)')
axes[1][1].set_title('uncertainties for model components')
axes[1][1].legend()

plt.show()
# <end examples/doc_model_uncertainty2.py>

[[Model]]
    ((Model(gaussian, prefix='g1_') + Model(gaussian, prefix='g2_')) + Model(exponential, prefix='bkg_'))

[[Fit Statistics]]
# fitting method = leastsq
# function evals = 55
# data points = 250
# variables = 8
chi-square = 1247.52821
reduced chi-square = 5.15507524
Akaike info crit = 417.864631
Bayesian info crit = 446.036318
R-squared = 0.99648654
[[Variables]]
g1_amplitude: 4257.77399 +/- 42.3838008 (1.00%) (init = 3000)
g1_center: 107.030957 +/- 0.15006868 (0.14%) (init = 100)
g1_sigma: 16.6725789 +/- 0.16048222 (0.96%) (init = 10)
g2_amplitude: 2493.41715 +/- 36.1696228 (1.45%) (init = 3000)
g2_center: 153.270104 +/- 0.19466723 (0.13%) (init = 150)
g2_sigma: 13.8069453 +/- 0.18680099 (1.35%) (init = 10)
bkg_amplitude: 99.0183280 +/- 0.53748639 (0.54%) (init = 100)
bkg_decay: 90.9508824 +/- 1.10310769 (1.21%) (init = 80)
g1_fwhm: 39.2609222 +/- 0.37790675 (0.96%) == '2.3548200*g1_sigma'
g1_height: 101.880228 +/- 0.59217122 (0.58%) == '0.3989423*g1_amplitude/max(1e-15, g1_sigma)'
g2_fwhm: 32.5128710 +/- 0.43988270 (1.35%) == '2.3548200*g2_sigma'
g2_height: 72.0455936 +/- 0.61721901 (0.86%) == '0.3989423*g2_amplitude/max(1e-15, g2_sigma)'

[[Correlations]] (unreported correlations are < 0.500)


C(g1_amplitude, g1_sigma) = +0.8243
C(g2_amplitude, g2_sigma) = +0.8154
C(bkg_amplitude, bkg_decay) = -0.6946
C(g1_sigma, g2_center) = +0.6842
C(g1_center, g2_amplitude) = -0.6689
C(g1_center, g2_sigma) = -0.6520
C(g1_amplitude, g2_center) = +0.6477
C(g1_center, g2_center) = +0.6205
C(g1_center, g1_sigma) = +0.5075
C(g1_amplitude, bkg_decay) = -0.5074


7.3.4 Saving and Loading ModelResults

New in version 0.9.8.


As with saving models (see section Saving and Loading Models), it is sometimes desirable to save a ModelResult,
either for later use or to organize and compare different fit results. Lmfit provides a save_modelresult() function
that will save a ModelResult to a file. There is also a companion load_modelresult() function that can read this
file and reconstruct a ModelResult from it.
As discussed in section Saving and Loading Models, there are challenges to saving model functions that may make it
difficult to restore a saved ModelResult in a way that can be used to perform a fit. Use of the optional funcdefs
argument is generally the most reliable way to ensure that a loaded ModelResult can be used to evaluate the model
function or redo the fit.
save_modelresult(modelresult, fname)
Save a ModelResult to a file.
Parameters
• modelresult (ModelResult) – ModelResult to be saved.
• fname (str) – Name of file for saved ModelResult.
load_modelresult(fname, funcdefs=None)
Load a saved ModelResult from a file.
Parameters
• fname (str) – Name of file containing saved ModelResult.
• funcdefs (dict, optional) – Dictionary of custom function names and definitions.
Returns
ModelResult object loaded from file.
Return type
ModelResult
An example of saving a ModelResult is:

# <examples/doc_model_savemodelresult.py>
import numpy as np

from lmfit.model import save_modelresult


from lmfit.models import GaussianModel

data = np.loadtxt('model1d_gauss.dat')
x = data[:, 0]
y = data[:, 1]

gmodel = GaussianModel()
result = gmodel.fit(y, x=x, amplitude=5, center=5, sigma=1)

save_modelresult(result, 'gauss_modelresult.sav')

print(result.fit_report())
# <end examples/doc_model_savemodelresult.py>

To load that later, one might do:


# <examples/doc_model_loadmodelresult.py>
import os
import sys

import matplotlib.pyplot as plt


import numpy as np

from lmfit.model import load_modelresult

if not os.path.exists('gauss_modelresult.sav'):
os.system(f"{sys.executable} doc_model_savemodelresult.py")

data = np.loadtxt('model1d_gauss.dat')
x = data[:, 0]
y = data[:, 1]

result = load_modelresult('gauss_modelresult.sav')
print(result.fit_report())

plt.plot(x, y, 'o')
plt.plot(x, result.best_fit, '-')
plt.show()
# <end examples/doc_model_loadmodelresult.py>

7.4 Composite Models : adding (or multiplying) Models

One of the more interesting features of the Model class is that Models can be added together or combined with basic
algebraic operations (add, subtract, multiply, and divide) to give a composite model. The composite model will have
parameters from each of the component models, with all parameters being available to influence the whole model. This
ability to combine models will become even more useful in the next chapter, when pre-built subclasses of Model are
discussed. For now, we'll consider a simple example and build a model of a Gaussian plus a line, to model a peak with a background. For such a simple problem, we could just build a model that included both components:

def gaussian_plus_line(x, amp, cen, wid, slope, intercept):
    """line + 1-d gaussian"""
    gauss = (amp / (sqrt(2*pi) * wid)) * exp(-(x-cen)**2 / (2*wid**2))
    line = slope*x + intercept
    return gauss + line

and use that with:

mod = Model(gaussian_plus_line)

But we already had a function for a Gaussian, and maybe we'll discover that a linear background isn't sufficient, which would mean the model function would have to be changed.
Instead, lmfit allows models to be combined into a CompositeModel. As an alternative to including a linear back-
ground in our model function, we could define a linear function:


def line(x, slope, intercept):
    """a line"""
    return slope*x + intercept

and build a composite model with just:

mod = Model(gaussian) + Model(line)

This model has parameters for both component models, and can be used as:

# <examples/doc_model_two_components.py>
import matplotlib.pyplot as plt
from numpy import exp, loadtxt, pi, sqrt

from lmfit import Model

data = loadtxt('model1d_gauss.dat')
x = data[:, 0]
y = data[:, 1] + 0.25*x - 1.0

def gaussian(x, amp, cen, wid):
    """1-d gaussian: gaussian(x, amp, cen, wid)"""
    return (amp / (sqrt(2*pi) * wid)) * exp(-(x-cen)**2 / (2*wid**2))


def line(x, slope, intercept):
    """a line"""
    return slope*x + intercept


mod = Model(gaussian) + Model(line)

pars = mod.make_params(amp=5, cen=5, wid={'value': 1, 'min': 0},
                       slope=0, intercept=1)

result = mod.fit(y, pars, x=x)
print(result.fit_report())

plt.plot(x, y, 'o')
plt.plot(x, result.init_fit, '--', label='initial fit')
plt.plot(x, result.best_fit, '-', label='best fit')
plt.legend()
plt.show()
# <end examples/doc_model_two_components.py>

which prints out the results:

[[Model]]
(Model(gaussian) + Model(line))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 55
# data points = 101
# variables = 5
chi-square = 2.57855517
reduced chi-square = 0.02685995
Akaike info crit = -360.457020
Bayesian info crit = -347.381417
R-squared = 0.99194643
[[Variables]]
amp: 8.45930976 +/- 0.12414531 (1.47%) (init = 5)
cen: 5.65547889 +/- 0.00917673 (0.16%) (init = 5)
wid: 0.67545513 +/- 0.00991697 (1.47%) (init = 1)
slope: 0.26484403 +/- 0.00574892 (2.17%) (init = 0)
intercept: -0.96860189 +/- 0.03352202 (3.46%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(slope, intercept) = -0.7954
C(amp, wid) = +0.6664
C(amp, intercept) = -0.2216
C(amp, slope) = -0.1692
C(cen, slope) = -0.1618
C(wid, intercept) = -0.1477
C(cen, intercept) = +0.1287
C(wid, slope) = -0.1127

and shows the plot on the left.

On the left, the data is shown as blue dots, the total fit as a solid green line, and the initial fit as an orange dashed line. The figure on the right again shows the data as blue dots, with the Gaussian component as an orange dashed line and the linear component as a green dashed line. It is created using the following code:

comps = result.eval_components()
plt.plot(x, y, 'o')
plt.plot(x, comps['gaussian'], '--', label='Gaussian component')
plt.plot(x, comps['line'], '--', label='Line component')

The components were generated after the fit using the ModelResult.eval_components() method of the result, which returns a dictionary of the components, using keys of the model name (or prefix if that is set). This will use the parameter values in result.params and the independent variables (x) used during the fit. Note that while the ModelResult held in result does store the best parameters and the best estimate of the model in result.best_fit, the original model and parameters in pars are left unaltered.


You can apply this composite model to other data sets, or evaluate the model at other values of x. You may want to do this to give a finer or coarser spacing of data points, or to extrapolate the model outside the fitting range. This can be done with:

xwide = linspace(-5, 25, 3001)
predicted = mod.eval(result.params, x=xwide)

In this example, the argument names for the model functions do not overlap. If they had, the prefix argument to
Model would have allowed us to identify which parameter went with which component model. As we will see in the
next chapter, using composite models with the built-in models provides a simple way to build up complex models.
class CompositeModel(left, right, op[, **kws ])
Combine two models (left and right) with binary operator (op).
Normally, one does not have to explicitly create a CompositeModel, but can use normal Python operators +, -,
*, and / to combine components as in:

>>> mod = Model(fcn1) + Model(fcn2) * Model(fcn3)

Parameters
• left (Model) – Left-hand model.
• right (Model) – Right-hand model.
• op (callable binary operator) – Operator to combine left and right models.
• **kws (optional) – Additional keywords are passed to Model when creating this new
model.

Notes

The two models can use different independent variables.


Note that when using built-in Python binary operators, a CompositeModel will automatically be constructed for you.
That is, doing:

mod = Model(fcn1) + Model(fcn2) * Model(fcn3)

will create a CompositeModel. Here, left will be Model(fcn1), op will be operator.add(), and right will
be another CompositeModel that has a left attribute of Model(fcn2), an op of operator.mul(), and a right of
Model(fcn3).
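Because left, op, and right are attributes of the resulting CompositeModel, this nesting can be inspected directly. A minimal sketch, assuming fcn1, fcn2, and fcn3 are model functions as above:

import operator

mod = Model(fcn1) + Model(fcn2) * Model(fcn3)
print(mod.op is operator.add)        # True: top-level operator is addition
print(mod.left)                      # Model(fcn1)
print(mod.right.op is operator.mul)  # True: right side is Model(fcn2) * Model(fcn3)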
To use a binary operator other than +, -, *, or / you can explicitly create a CompositeModel with the appropriate
binary operator. For example, to convolve two models, you could define a simple convolution function, perhaps as:

import numpy as np

def convolve(dat, kernel):
    """simple convolution of two arrays"""
    npts = min(len(dat), len(kernel))
    pad = np.ones(npts)
    tmp = np.concatenate((pad*dat[0], dat, pad*dat[-1]))
    out = np.convolve(tmp, kernel, mode='valid')
    noff = int((len(out) - npts) / 2)
    return (out[noff:])[:npts]

which extends the data in both directions so that the convolving kernel function gives a valid result over the data range.
Because this function takes two array arguments and returns an array, it can be used as the binary operator. A full script
using this technique is here:

# <examples/doc_model_composite.py>
import matplotlib.pyplot as plt
import numpy as np

from lmfit import CompositeModel, Model
from lmfit.lineshapes import gaussian, step

# create data from broadened step
x = np.linspace(0, 10, 201)
y = step(x, amplitude=12.5, center=4.5, sigma=0.88, form='erf')
np.random.seed(0)
y = y + np.random.normal(scale=0.35, size=x.size)

def jump(x, mid):
    """Heaviside step function."""
    o = np.zeros(x.size)
    imid = max(np.where(x <= mid)[0])
    o[imid:] = 1.0
    return o

def convolve(arr, kernel):
    """Simple convolution of two arrays."""
    npts = min(arr.size, kernel.size)
    pad = np.ones(npts)
    tmp = np.concatenate((pad*arr[0], arr, pad*arr[-1]))
    out = np.convolve(tmp, kernel, mode='valid')
    noff = int((len(out) - npts) / 2)
    return out[noff:noff+npts]

# create Composite Model using the custom convolution operator
mod = CompositeModel(Model(jump), Model(gaussian), convolve)

# create parameters for model. Note that 'mid' and 'center' will be highly
# correlated. Since 'mid' is used as an integer index, it will be very
# hard to fit, so we fix its value
pars = mod.make_params(amplitude=dict(value=1, min=0),
center=3.5,
sigma=dict(value=1.5, min=0),
mid=dict(value=4, vary=False))

# fit this model to data array y
result = mod.fit(y, params=pars, x=x)
print(result.fit_report())

# generate components
comps = result.eval_components(x=x)

# plot results
fig, axes = plt.subplots(1, 2, figsize=(12.8, 4.8))

axes[0].plot(x, y, 'bo')
axes[0].plot(x, result.init_fit, 'k--', label='initial fit')
axes[0].plot(x, result.best_fit, 'r-', label='best fit')
axes[0].legend()

axes[1].plot(x, y, 'bo')
axes[1].plot(x, 10*comps['jump'], 'k--', label='Jump component')
axes[1].plot(x, 10*comps['gaussian'], 'r-', label='Gaussian component')
axes[1].legend()

plt.show()
# <end examples/doc_model_composite.py>

which prints out the results:

[[Model]]
(Model(jump) <function convolve at 0x130d64dc0> Model(gaussian))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 33
# data points = 201
# variables = 3
chi-square = 24.7562335
reduced chi-square = 0.12503148
Akaike info crit = -414.939746
Bayesian info crit = -405.029832
R-squared = 0.99632577
[[Variables]]
mid: 4 (fixed)
amplitude: 0.62508458 +/- 0.00189732 (0.30%) (init = 1)
center: 5.50853669 +/- 0.00973231 (0.18%) (init = 3.5)
sigma: 0.59576097 +/- 0.01348579 (2.26%) (init = 1.5)
[[Correlations]] (unreported correlations are < 0.100)
C(amplitude, center) = +0.3292
C(amplitude, sigma) = +0.2680

and shows the plots:


Using composite models with built-in or custom operators allows you to build complex models from testable sub-
components.



CHAPTER EIGHT

BUILT-IN FITTING MODELS IN THE MODELS MODULE

Lmfit provides several built-in fitting models in the models module. These pre-defined models each subclass from the
Model class of the previous chapter and wrap relatively well-known functional forms, such as Gaussian, Lorentzian,
and Exponential that are used in a wide range of scientific domains. In fact, all the models are based on simple, plain
Python functions defined in the lineshapes module. In addition to wrapping a function into a Model, these models
also provide a guess() method that is intended to give a reasonable set of starting values from a data array that closely
approximates the data to be fit.
As shown in the previous chapter, a key feature of the Model class is that models can easily be combined to give a com-
posite CompositeModel. Thus, while some of the models listed here may seem pretty trivial (notably, ConstantModel
and LinearModel), the main point of having these is to be able to use them in composite models. For example, a
Lorentzian plus a linear background might be represented as:

from lmfit.models import LinearModel, LorentzianModel

peak = LorentzianModel()
background = LinearModel()
model = peak + background

Almost all the models listed below are one-dimensional, with an independent variable named x. Many of these models
represent a function with a distinct peak, and so share common features. To maintain uniformity, common parameter
names are used whenever possible. Thus, most models have a parameter called amplitude that represents the overall intensity (or area) of a peak or function, and a sigma parameter that gives a characteristic width.
After a list of built-in models, a few examples of their use are given.

8.1 Peak-like models

There are many peak-like models available. These include GaussianModel, LorentzianModel, VoigtModel,
PseudoVoigtModel, and some less commonly used variations. Most of these models are unit-normalized and share
the same parameter names so that you can easily switch between models and interpret the results. The amplitude
parameter is the multiplicative factor for the unit-normalized peak lineshape, and so will represent the strength of that
peak or the area under that curve. The center parameter will be the centroid x value. The sigma parameter is the
characteristic width of the peak, with many functions using (𝑥 − 𝜇)/𝜎 where 𝜇 is the centroid value. Most of these
peak functions will have two additional parameters derived from and constrained by the other parameters. The first of
these is fwhm which will hold the estimated “Full Width at Half Max” for the peak, which is often easier to compare
between different models than sigma. The second of these is height which will contain the maximum value of the
peak, typically the value at 𝑥 = 𝜇. Finally, each of these models has a guess() method that uses data to make a fairly
crude but usually sufficient guess for the value of amplitude, center, and sigma, and sets a lower bound of 0 on the
value of sigma.
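For example, a typical workflow with a peak-like model uses guess() to generate the starting parameters and then fits (a minimal sketch, assuming arrays x and y hold the data):

from lmfit.models import GaussianModel

model = GaussianModel()
params = model.guess(y, x=x)       # crude but usually sufficient starting values
result = model.fit(y, params, x=x)
print(result.fit_report())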


8.1.1 GaussianModel

class GaussianModel(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A model based on a Gaussian or normal distribution lineshape.
The model has three Parameters: amplitude, center, and sigma. In addition, parameters fwhm and height are
included as constraints to report full width at half maximum and maximum peak height, respectively.

$$f(x; A, \mu, \sigma) = \frac{A}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/2\sigma^2}$$

where the parameter amplitude corresponds to $A$, center to $\mu$, and sigma to $\sigma$. The full width at half maximum is $2\sigma\sqrt{2\ln 2}$, approximately $2.3548\sigma$.
For more information, see: https://en.wikipedia.org/wiki/Normal_distribution
Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Mod-
els that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data
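Because fwhm and height are defined by constraint expressions, they show up in a fit result like any other parameter, with uncertainties propagated from the varying parameters (a sketch, assuming result is a completed fit of this model):

print(result.params['fwhm'].value, result.params['fwhm'].stderr)
print(result.params['height'].value, result.params['height'].stderr)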

8.1.2 LorentzianModel

class LorentzianModel(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A model based on a Lorentzian or Cauchy-Lorentz distribution function.
The model has three Parameters: amplitude, center, and sigma. In addition, parameters fwhm and height are
included as constraints to report full width at half maximum and maximum peak height, respectively.

$$f(x; A, \mu, \sigma) = \frac{A}{\pi}\left[\frac{\sigma}{(x-\mu)^2 + \sigma^2}\right]$$

where the parameter amplitude corresponds to $A$, center to $\mu$, and sigma to $\sigma$. The full width at half maximum is $2\sigma$.
For more information, see: https://en.wikipedia.org/wiki/Cauchy_distribution
Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are independent variables (default is ['x']).


• prefix (str, optional) – String to prepend to parameter names, needed to add two Mod-
els that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data

8.1.3 SplitLorentzianModel

class SplitLorentzianModel(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A model based on a Lorentzian or Cauchy-Lorentz distribution function.
The model has four parameters: amplitude, center, sigma, and sigma_r. In addition, parameters fwhm and height
are included as constraints to report full width at half maximum and maximum peak height, respectively.
‘Split’ means that the width of the distribution is different between left and right slopes.

$$f(x; A, \mu, \sigma, \sigma_r) = \frac{2A}{\pi(\sigma + \sigma_r)}\left[\frac{\sigma^2}{(x-\mu)^2 + \sigma^2}\, H(\mu - x) + \frac{\sigma_r^2}{(x-\mu)^2 + \sigma_r^2}\, H(x - \mu)\right]$$

where the parameter amplitude corresponds to $A$, center to $\mu$, sigma to $\sigma$, sigma_r to $\sigma_r$, and $H(x)$ is a Heaviside step function:

$$H(x) = 0 \;|\; x < 0, \qquad 1 \;|\; x \ge 0$$

The full width at half maximum is $\sigma + \sigma_r$. Just as with the Lorentzian model, the integral of this function from $-\infty$ to $+\infty$ equals amplitude.
For more information, see: https://en.wikipedia.org/wiki/Cauchy_distribution
Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Mod-
els that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.


Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data

8.1.4 VoigtModel

class VoigtModel(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A model based on a Voigt distribution function.
The model has four Parameters: amplitude, center, sigma, and gamma. By default, gamma is constrained to
have a value equal to sigma, though it can be varied independently. In addition, parameters fwhm and height
are included as constraints to report full width at half maximum and maximum peak height, respectively. The
definition for the Voigt function used here is:

$$f(x; A, \mu, \sigma, \gamma) = \frac{A\,\textrm{Re}[w(z)]}{\sigma\sqrt{2\pi}}$$

where

$$z = \frac{x - \mu + i\gamma}{\sigma\sqrt{2}}, \qquad w(z) = e^{-z^2}\,\textrm{erfc}(-iz)$$

and erfc() is the complementary error function. As above, amplitude corresponds to $A$, center to $\mu$, and sigma to $\sigma$. The parameter gamma corresponds to $\gamma$. If gamma is kept at the default value (constrained to sigma), the full width at half maximum is approximately $3.6013\sigma$.
For more information, see: https://en.wikipedia.org/wiki/Voigt_profile
Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Mod-
els that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data
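To vary gamma independently of sigma, the default constraint expression can be cleared after the parameters are created (a sketch of common usage, assuming a VoigtModel named model and data arrays x and y):

params = model.make_params(amplitude=10, center=5, sigma=1)
params['gamma'].set(value=0.7, vary=True, expr='')  # remove the gamma=sigma constraint
result = model.fit(y, params, x=x)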


8.1.5 PseudoVoigtModel

class PseudoVoigtModel(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A model based on a pseudo-Voigt distribution function.
This is a weighted sum of a Gaussian and Lorentzian distribution function that share values for amplitude (𝐴),
center (𝜇), and full width at half maximum fwhm (and so has constrained values of sigma (𝜎) and height (max-
imum peak height). The parameter fraction (𝛼) controls the relative weight of the Gaussian and Lorentzian
components, giving the full definition of:

$$f(x; A, \mu, \sigma, \alpha) = \frac{(1-\alpha)A}{\sigma_g\sqrt{2\pi}}\, e^{-(x-\mu)^2/2\sigma_g^2} + \frac{\alpha A}{\pi}\left[\frac{\sigma}{(x-\mu)^2 + \sigma^2}\right]$$

where $\sigma_g = \sigma/\sqrt{2\ln 2}$ so that the full width at half maximum of each component and of the sum is $2\sigma$. The guess() function always sets the starting value for fraction at 0.5.
For more information, see: https://en.wikipedia.org/wiki/Voigt_profile#Pseudo-Voigt_Approximation
Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Mod-
els that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data

8.1.6 MoffatModel

class MoffatModel(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A model based on the Moffat distribution function.
The model has four Parameters: amplitude (𝐴), center (𝜇), a width parameter sigma (𝜎), and an exponent beta
(𝛽). In addition, parameters fwhm and height are included as constraints to report full width at half maximum
and maximum peak height, respectively.
$$f(x; A, \mu, \sigma, \beta) = A\left[\left(\frac{x - \mu}{\sigma}\right)^2 + 1\right]^{-\beta}$$

The full width at half maximum is $2\sigma\sqrt{2^{1/\beta} - 1}$. The guess() function always sets the starting value for beta to 1.
Note that for $\beta = 1$ the Moffat has a Lorentzian shape. For more information, see: https://en.wikipedia.org/wiki/Moffat_distribution


Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Mod-
els that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data

8.1.7 Pearson4Model

class Pearson4Model(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A model based on a Pearson IV distribution.
The model has five parameters: amplitude (𝐴), center (𝜇), sigma (𝜎), expon (𝑚) and skew (𝜈). In addition,
parameters fwhm, height and position are included as constraints to report estimates for the approximate full
width at half maximum (20% error), the peak height, and the peak position (the position of the maximal function
value), respectively. The fwhm value has an error of about 20% in the parameter range expon: (0.5, 1000], skew:
[-1000, 1000].

$$f(x; A, \mu, \sigma, m, \nu) = A\, \frac{\left|\frac{\Gamma(m + i\nu/2)}{\Gamma(m)}\right|^2}{\sigma\,\beta(m - \frac{1}{2}, \frac{1}{2})} \left[1 + \frac{(x-\mu)^2}{\sigma^2}\right]^{-m} \exp\left(-\nu \arctan\left(\frac{x-\mu}{\sigma}\right)\right)$$

where $\beta$ is the beta function (see scipy.special.beta). The guess() function always gives a starting value of 1.5 for expon, and 0 for skew.
For more information, see: https://en.wikipedia.org/wiki/Pearson_distribution#The_Pearson_type_IV_distribution
Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Mod-
els that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.


Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data

8.1.8 Pearson7Model

class Pearson7Model(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A model based on a Pearson VII distribution.
The model has four parameters: amplitude (𝐴), center (𝜇), sigma (𝜎), and exponent (𝑚). In addition, parameters
fwhm and height are included as constraints to report estimates for the full width at half maximum and maximum
peak height, respectively.

$$f(x; A, \mu, \sigma, m) = \frac{A}{\sigma\,\beta(m - \frac{1}{2}, \frac{1}{2})}\left[1 + \frac{(x-\mu)^2}{\sigma^2}\right]^{-m}$$

where $\beta$ is the beta function (see scipy.special.beta). The guess() function always gives a starting value for exponent of 1.5.
For more information, see: https://en.wikipedia.org/wiki/Pearson_distribution#The_Pearson_type_VII_distribution
Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Mod-
els that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data


8.1.9 StudentsTModel

class StudentsTModel(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A model based on a Student’s t-distribution function.
The model has three Parameters: amplitude (𝐴), center (𝜇), and sigma (𝜎). In addition, parameters fwhm and
height are included as constraints to report full width at half maximum and maximum peak height, respectively.
$$f(x; A, \mu, \sigma) = \frac{A\,\Gamma(\frac{\sigma+1}{2})}{\sqrt{\sigma\pi}\,\Gamma(\frac{\sigma}{2})}\left[1 + \frac{(x-\mu)^2}{\sigma}\right]^{-\frac{\sigma+1}{2}}$$

where $\Gamma(x)$ is the gamma function.
For more information, see: https://en.wikipedia.org/wiki/Student%27s_t-distribution
Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Mod-
els that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data

8.1.10 BreitWignerModel

class BreitWignerModel(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A model based on a Breit-Wigner-Fano function.
The model has four Parameters: amplitude (𝐴), center (𝜇), sigma (𝜎), and q (𝑞).
$$f(x; A, \mu, \sigma, q) = \frac{A\,(q\sigma/2 + x - \mu)^2}{(\sigma/2)^2 + (x - \mu)^2}$$
For more information, see: https://en.wikipedia.org/wiki/Fano_resonance
Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Mod-
els that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.


Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data

8.1.11 LognormalModel

class LognormalModel(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A model based on the Log-normal distribution function.
The model has three Parameters: amplitude ($A$), center ($\mu$), and sigma ($\sigma$). In addition, parameters fwhm and height are included as constraints to report estimates of full width at half maximum and maximum peak height, respectively.

$$f(x; A, \mu, \sigma) = \frac{A}{\sigma\sqrt{2\pi}}\, \frac{e^{-(\ln(x)-\mu)^2/2\sigma^2}}{x}$$

For more information, see: https://en.wikipedia.org/wiki/Lognormal
Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Mod-
els that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data

8.1.12 DampedOscillatorModel

class DampedOscillatorModel(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A model based on the Damped Harmonic Oscillator Amplitude.
The model has three Parameters: amplitude (𝐴), center (𝜇), and sigma (𝜎). In addition, the parameter height is
included as a constraint to report the maximum peak height.
$$f(x; A, \mu, \sigma) = \frac{A}{\sqrt{[1 - (x/\mu)^2]^2 + (2\sigma x/\mu)^2}}$$
For more information, see: https://en.wikipedia.org/wiki/Harmonic_oscillator#Amplitude_part


Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Mod-
els that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data

8.1.13 DampedHarmonicOscillatorModel

class DampedHarmonicOscillatorModel(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A model based on a variation of the Damped Harmonic Oscillator.
The model follows the definition given in DAVE/PAN (see: https://www.ncnr.nist.gov/dave) and has four Param-
eters: amplitude (𝐴), center (𝜇), sigma (𝜎), and gamma (𝛾). In addition, parameters fwhm and height are included
as constraints to report estimates for full width at half maximum and maximum peak height, respectively.

$$f(x; A, \mu, \sigma, \gamma) = \frac{A\sigma}{\pi[1 - \exp(-x/\gamma)]}\left[\frac{1}{(x-\mu)^2 + \sigma^2} - \frac{1}{(x+\mu)^2 + \sigma^2}\right]$$

where $\gamma = kT$, $k$ is the Boltzmann constant in $\mathrm{eV}\,\mathrm{K}^{-1}$, and $T$ is the temperature in $\mathrm{K}$.
For more information, see: https://en.wikipedia.org/wiki/Harmonic_oscillator
Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Mod-
els that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.


Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data

8.1.14 ExponentialGaussianModel

class ExponentialGaussianModel(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A model of an Exponentially modified Gaussian distribution.
The model has four Parameters: amplitude (𝐴), center (𝜇), sigma (𝜎), and gamma (𝛾).
f(x; A, \mu, \sigma, \gamma) = \frac{A\gamma}{2} \exp\left[\gamma(\mu - x + \gamma\sigma^2/2)\right] \operatorname{erfc}\left(\frac{\mu + \gamma\sigma^2 - x}{\sqrt{2}\sigma}\right)
where erfc() is the complementary error function.
For more information, see: https://en.wikipedia.org/wiki/Exponentially_modified_Gaussian_distribution
Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are
independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Models that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data

8.1.15 SkewedGaussianModel

class SkewedGaussianModel(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A skewed Gaussian model, using a skewed normal distribution.
The model has four Parameters: amplitude (𝐴), center (𝜇), sigma (𝜎), and gamma (𝛾).
f(x; A, \mu, \sigma, \gamma) = \frac{A}{\sigma\sqrt{2\pi}} e^{-(x-\mu)^2/2\sigma^2} \left\{ 1 + \operatorname{erf}\left[\frac{\gamma(x - \mu)}{\sigma\sqrt{2}}\right] \right\}
where erf() is the error function.
For more information, see: https://en.wikipedia.org/wiki/Skew_normal_distribution


Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are
independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Models that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data

8.1.16 SkewedVoigtModel

class SkewedVoigtModel(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A skewed Voigt model, modified using a skewed normal distribution.
The model has five Parameters: amplitude (A), center (μ), sigma (σ), and gamma (γ), as usual for a Voigt
distribution, plus a new Parameter skew.

f(x; A, \mu, \sigma, \gamma, \mathrm{skew}) = \mathrm{Voigt}(x; A, \mu, \sigma, \gamma) \left\{ 1 + \operatorname{erf}\left[\frac{\mathrm{skew}(x - \mu)}{\sigma\sqrt{2}}\right] \right\}
where erf() is the error function.
For more information, see: https://en.wikipedia.org/wiki/Skew_normal_distribution
Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are
independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Models that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.


Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data

8.1.17 ThermalDistributionModel

class ThermalDistributionModel(independent_vars=['x'], prefix='', nan_policy='raise', form='bose', **kwargs)
Return a thermal distribution function.
Variable form defines the kind of distribution as below with three Parameters: amplitude (𝐴), center (𝑥0 ), and
kt (𝑘𝑡). The following distributions are available:
• ‘bose’ : Bose-Einstein distribution (default)
• ‘maxwell’ : Maxwell-Boltzmann distribution
• ‘fermi’ : Fermi-Dirac distribution
The functional forms are defined as (a short usage sketch follows at the end of this entry):

f(x; A, x_0, kt, \mathrm{form}=\mathrm{'bose'}) = \frac{1}{A \exp\left(\frac{x - x_0}{kt}\right) - 1}

f(x; A, x_0, kt, \mathrm{form}=\mathrm{'maxwell'}) = \frac{1}{A \exp\left(\frac{x - x_0}{kt}\right)}

f(x; A, x_0, kt, \mathrm{form}=\mathrm{'fermi'}) = \frac{1}{A \exp\left(\frac{x - x_0}{kt}\right) + 1}

Notes

• kt should be defined in the same units as x (k_B = 8.617 \times 10^{-5} eV/K).


• set 𝑘𝑡 < 0 to implement the energy loss convention common in scattering research.

For more information, see: http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/disfcn.html


Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are
independent variables default is [‘x’]).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Mod-
els that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.


Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data
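To illustrate the form options above, here is a minimal sketch (the parameter values are illustrative, not from the source):

import numpy as np
from lmfit.models import ThermalDistributionModel

# evaluate a Fermi-Dirac step; with x in eV, kt ~ 0.025 eV at room temperature
mod = ThermalDistributionModel(form='fermi')
params = mod.make_params(amplitude=1.0, center=5.0, kt=0.025)

x = np.linspace(4.5, 5.5, 201)
y = mod.eval(params, x=x)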

8.1.18 DoniachModel

class DoniachModel(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A model of a Doniach-Sunjic asymmetric lineshape.
This model is used in photo-emission and has four Parameters: amplitude (𝐴), center (𝜇), sigma (𝜎), and gamma
(𝛾). In addition, parameter height is included as a constraint to report maximum peak height.
f(x; A, \mu, \sigma, \gamma) = \frac{A}{\sigma^{1-\gamma}} \frac{\cos\left[\pi\gamma/2 + (1 - \gamma)\arctan\left((x - \mu)/\sigma\right)\right]}{\left[1 + \left((x - \mu)/\sigma\right)^2\right]^{(1-\gamma)/2}}

For more information, see: https://www.casaxps.com/help_manual/line_shapes.htm


Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are
independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Models that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data

8.2 Linear and Polynomial Models

These models correspond to polynomials of some degree. Of course, lmfit is a very inefficient way to do linear re-
gression (see numpy.polyfit or scipy.stats.linregress), but these models may be useful as one of many components of a
composite model. The SplineModel below corresponds to a cubic spline.
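For instance, here is a minimal sketch of using one of these models as the background component of a composite fit (the data and parameter values are hypothetical, not from the source):

import numpy as np
from lmfit.models import GaussianModel, QuadraticModel

# hypothetical data: a Gaussian peak on a gently curved background
x = np.linspace(0, 10, 201)
np.random.seed(0)
y = (3.0 - 0.2*x + 0.05*x**2
     + 4.0*np.exp(-(x - 5.0)**2 / (2*0.6**2))
     + 0.1*np.random.randn(x.size))

peak = GaussianModel(prefix='peak_')
bkg = QuadraticModel(prefix='bkg_')
mod = peak + bkg

pars = bkg.make_params(a=0.0, b=0.0, c=y.min())
pars += peak.guess(y - y.min(), x=x)   # rough starting point for the peak

out = mod.fit(y, pars, x=x)
print(out.fit_report())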


8.2.1 ConstantModel

class ConstantModel(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


Constant model, with a single Parameter: c.
Note that this is ‘constant’ in the sense of having no dependence on the independent variable x, not in the sense
of being non-varying. To be clear, c will be a Parameter that will be varied in the fit (by default, of course).
Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are
independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Models that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data

8.2.2 LinearModel

class LinearModel(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


Linear model, with two Parameters: intercept and slope.
Defined as:

𝑓 (𝑥; 𝑚, 𝑏) = 𝑚𝑥 + 𝑏

with slope for 𝑚 and intercept for 𝑏.


Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are
independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Models that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.


Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data

8.2.3 QuadraticModel

class QuadraticModel(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A quadratic model, with three Parameters: a, b, and c.
Defined as:

f(x; a, b, c) = a x^2 + b x + c

Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are
independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Models that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data

8.2.4 PolynomialModel

class PolynomialModel(degree=7, independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A polynomial model with up to 7 Parameters, specified by degree.
f(x; c_0, c_1, \ldots, c_7) = \sum_{i=0}^{7} c_i x^i

with parameters c0, c1, . . . , c7. The supplied degree will specify how many of these are actual variable parame-
ters. This uses numpy.polyval for its calculation of the polynomial.
Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are
independent variables (default is ['x']).


• prefix (str, optional) – String to prepend to parameter names, needed to add two Models that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data
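A minimal usage sketch (synthetic data; the values are hypothetical):

import numpy as np
from lmfit.models import PolynomialModel

x = np.linspace(-3, 3, 101)
np.random.seed(0)
y = 1.0 - 2.0*x + 0.5*x**3 + 0.2*np.random.randn(x.size)

mod = PolynomialModel(degree=3)   # creates parameters c0, c1, c2, c3
params = mod.guess(y, x=x)        # starting values from a quick polynomial fit
out = mod.fit(y, params, x=x)
print(out.fit_report())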

8.2.5 SplineModel

class SplineModel(xknots, independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A 1-D cubic spline model with a variable number of knots and parameters s0, s1, . . . , sN, for N knots.
The user must supply a list or ndarray xknots: the x values for the ‘knots’ which control the flexibility of the
spline function.
The parameters s0, . . . , sN (where N is the size of xknots) will correspond to the y values for the spline knots
at the x=xknots positions where the highest order derivative will be discontinuous. The resulting curve will not
necessarily pass through these knot points, but for finely-spaced knots, the spline parameter values will be very
close to the y values of the resulting curve.
The maximum number of knots supported is 300.
Using the guess() method to initialize parameter values is highly recommended.
Parameters
• xknots (list of floats or ndarray, required) – x-values of knots for spline.
• independent_vars (list of str, optional) – Arguments to the model function that are
independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Models that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.

Notes

1. There must be at least 4 knot points, and not more than 300.
2. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data


8.3 Periodic Models

These models correspond to periodic functions.

8.3.1 SineModel

class SineModel(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A model based on a sinusoidal lineshape.
The model has three Parameters: amplitude, frequency, and shift.

𝑓 (𝑥; 𝐴, 𝜑, 𝑓 ) = 𝐴 sin(𝑓 𝑥 + 𝜑)

where the parameter amplitude corresponds to 𝐴, frequency to 𝑓 , and shift to 𝜑. All are constrained to be non-
negative, and shift additionally to be smaller than 2𝜋.
Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are
independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Models that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data
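A minimal usage sketch (synthetic data; the values are hypothetical, and guess() should provide workable starting values here):

import numpy as np
from lmfit.models import SineModel

x = np.linspace(0, 20, 401)
np.random.seed(0)
y = 1.5*np.sin(2.0*x + 0.3) + 0.1*np.random.randn(x.size)

mod = SineModel()
params = mod.guess(y, x=x)
out = mod.fit(y, params, x=x)
print(out.fit_report())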

8.4 Step-like models

Two models represent step-like functions, and share many characteristics.

8.4.1 StepModel

class StepModel(independent_vars=['x'], prefix='', nan_policy='raise', form='linear', **kwargs)


A model based on a Step function.
The model has three Parameters: amplitude (𝐴), center (𝜇), and sigma (𝜎).
There are four choices for form:
• ‘linear’ (default)
• ‘atan’ or ‘arctan’ for an arc-tangent function


• ‘erf’ for an error function
• ‘logistic’ for a logistic function (for more information, see: https://en.wikipedia.org/wiki/Logistic_function)
The step function starts with a value 0 and ends with a value of 𝐴 rising to 𝐴/2 at 𝜇, with 𝜎 setting the charac-
teristic width. The functional forms are defined as:

f(x; A, \mu, \sigma, \mathrm{form}=\mathrm{'linear'}) = A \min\left[1, \max\left(0, \alpha + 1/2\right)\right]

f(x; A, \mu, \sigma, \mathrm{form}=\mathrm{'arctan'}) = A\left[1/2 + \arctan(\alpha)/\pi\right]

f(x; A, \mu, \sigma, \mathrm{form}=\mathrm{'erf'}) = A\left[1 + \operatorname{erf}(\alpha)\right]/2

f(x; A, \mu, \sigma, \mathrm{form}=\mathrm{'logistic'}) = A\left[1 - \frac{1}{1 + e^{\alpha}}\right]

where \alpha = (x - \mu)/\sigma.
Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are
independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Models that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data
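To see how the four form choices above compare, here is a minimal sketch evaluating each with the same parameters (values hypothetical):

import numpy as np
from lmfit.models import StepModel

x = np.linspace(0, 10, 201)
for form in ('linear', 'arctan', 'erf', 'logistic'):
    mod = StepModel(form=form)
    params = mod.make_params(amplitude=1.0, center=5.0, sigma=1.0)
    y = mod.eval(params, x=x)   # each form rises from 0 to 1, passing 0.5 at x=5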

8.4.2 RectangleModel

class RectangleModel(independent_vars=['x'], prefix='', nan_policy='raise', form='linear', **kwargs)


A model based on a Step-up and Step-down function.
The model has five Parameters: amplitude (𝐴), center1 (𝜇1 ), center2 (𝜇2 ), sigma1 (𝜎1 ), and sigma2 (𝜎2 ).
There are four choices for form, which is used for both the Step up and the Step down:
• ‘linear’ (default)
• ‘atan’ or ‘arctan’ for an arc-tangent function
• ‘erf’ for an error function
• ‘logistic’ for a logistic function (for more information, see: https://en.wikipedia.org/wiki/Logistic_function)


The function starts with a value 0 and transitions to a value of A, taking the value A/2 at μ1, with σ1 setting the
characteristic width. The function then transitions back from A to 0, taking the value A/2 at μ2, with σ2 setting the
characteristic width. The functional forms are defined as:

f(x; A, \mu, \sigma, \mathrm{form}=\mathrm{'linear'}) = A\left\{\min\left[1, \max\left(-1, \alpha_1\right)\right] + \min\left[1, \max\left(-1, \alpha_2\right)\right]\right\}/2

f(x; A, \mu, \sigma, \mathrm{form}=\mathrm{'arctan'}) = A\left[\arctan(\alpha_1) + \arctan(\alpha_2)\right]/\pi

f(x; A, \mu, \sigma, \mathrm{form}=\mathrm{'erf'}) = A\left[\operatorname{erf}(\alpha_1) + \operatorname{erf}(\alpha_2)\right]/2

f(x; A, \mu, \sigma, \mathrm{form}=\mathrm{'logistic'}) = A\left[1 - \frac{1}{1 + e^{\alpha_1}} - \frac{1}{1 + e^{\alpha_2}}\right]

where \alpha_1 = (x - \mu_1)/\sigma_1 and \alpha_2 = -(x - \mu_2)/\sigma_2.
Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are
independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Models that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data
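A minimal sketch of evaluating a rectangle that rises near x=2 and falls near x=8 (values hypothetical):

import numpy as np
from lmfit.models import RectangleModel

mod = RectangleModel(form='erf')
params = mod.make_params(amplitude=5.0, center1=2.0, sigma1=0.3,
                         center2=8.0, sigma2=0.3)
x = np.linspace(0, 10, 501)
y = mod.eval(params, x=x)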

8.5 Exponential and Power law models

8.5.1 ExponentialModel

class ExponentialModel(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A model based on an exponential decay function.
The model has two Parameters: amplitude (𝐴) and decay (𝜏 ) and is defined as:

f(x; A, \tau) = A e^{-x/\tau}

For more information, see: https://en.wikipedia.org/wiki/Exponential_decay


Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are
independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Models that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.


• **kwargs (optional) – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data

8.5.2 PowerLawModel

class PowerLawModel(independent_vars=['x'], prefix='', nan_policy='raise', **kwargs)


A model based on a Power Law.
The model has two Parameters: amplitude (𝐴) and exponent (𝑘) and is defined as:

f(x; A, k) = A x^k

For more information, see: https://en.wikipedia.org/wiki/Power_law


Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are
independent variables (default is ['x']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Models that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data

8.6 Two dimensional Peak-like models

The one example of a two-dimensional peak is a two-dimensional Gaussian.


8.6.1 Gaussian2dModel

class Gaussian2dModel(independent_vars=['x', 'y'], prefix='', nan_policy='raise', **kwargs)


A model based on a two-dimensional Gaussian function.
The model has two independent variables x and y and five Parameters: amplitude, centerx, sigmax, centery, and
sigmay. In addition, parameters fwhmx, fwhmy, and height are included as constraints to report the two full widths
at half maximum and the maximum peak height, respectively.

f(x, y; A, \mu_x, \sigma_x, \mu_y, \sigma_y) = A \, g(x; A=1, \mu_x, \sigma_x) \, g(y; A=1, \mu_y, \sigma_y)

where subfunction g(x; A, \mu, \sigma) is a Gaussian lineshape:

g(x; A, \mu, \sigma) = \frac{A}{\sigma\sqrt{2\pi}} e^{-(x-\mu)^2/2\sigma^2}.

Parameters
• independent_vars (list of str, optional) – Arguments to the model function that are
independent variables (default is ['x', 'y']).
• prefix (str, optional) – String to prepend to parameter names, needed to add two Models that have parameter names in common.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kwargs (optional) – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data
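Since the model takes two independent variables, one convenient approach is to pass the data as flattened 1-D arrays; a minimal sketch with synthetic data (values hypothetical):

import numpy as np
from lmfit.models import Gaussian2dModel

np.random.seed(0)
xg, yg = np.meshgrid(np.linspace(-5, 5, 41), np.linspace(-4, 4, 33))
x, y = xg.ravel(), yg.ravel()
z = 3.0*np.exp(-x**2/2 - (y - 1.0)**2/(2*1.5**2)) + 0.05*np.random.randn(x.size)

mod = Gaussian2dModel()
params = mod.guess(z, x=x, y=y)
out = mod.fit(z, params, x=x, y=y)
print(out.fit_report())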

8.7 User-defined Models

As shown in the previous chapter (Modeling Data and Curve Fitting), it is fairly straightforward to build fitting models
from parametrized Python functions. The number of model classes listed so far in the present chapter should make it
clear that this process is not too difficult. Still, it is sometimes desirable to build models from a user-supplied function.
This may be especially true if model-building is built-in to some larger library or application for fitting in which the
user may not be able to easily build and use a new model from Python code.
The ExpressionModel allows a model to be built from a user-supplied expression. This uses the asteval module also
used for mathematical constraints as discussed in Using Mathematical Constraints.


8.7.1 ExpressionModel

class ExpressionModel(expr, independent_vars=None, init_script=None, nan_policy='raise', **kws)


ExpressionModel class.
Generate a model from user-supplied expression.
Parameters
• expr (str) – Mathematical expression for model.
• independent_vars (list of str or None, optional) – Variable names to use as independent variables.
• init_script (str or None, optional) – Initial script to run in asteval interpreter.
• nan_policy ({'raise', 'propagate', 'omit'}, optional) – How to handle NaN and
missing values in data. See Notes below.
• **kws (optional) – Keyword arguments to pass to Model.

Notes

1. each instance of ExpressionModel will create and use its own version of an asteval interpreter.
2. prefix is not supported for ExpressionModel.
3. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
• ‘raise’ : raise a ValueError (default)
• ‘propagate’ : do nothing
• ‘omit’ : drop missing data

Since the point of this model is that an arbitrary expression will be supplied, the determination of what are the parameter
names for the model happens when the model is created. To do this, the expression is parsed, and all symbol names are
found. Names that are already known (there are over 500 function and value names in the asteval namespace, including
most Python built-ins, more than 200 functions inherited from NumPy, and more than 20 common lineshapes defined
in the lineshapes module) are not converted to parameters. Unrecognized names are expected to be names of either
parameters or independent variables. If independent_vars is the default value of None, and if the expression contains
a variable named x, that will be used as the independent variable. Otherwise, independent_vars must be given.
For example, if one creates an ExpressionModel as:

from lmfit.models import ExpressionModel

mod = ExpressionModel('off + amp * exp(-x/x0) * sin(x*phase)')

The name exp will be recognized as the exponential function, so the model will be interpreted to have parameters named
off, amp, x0 and phase. In addition, x will be assumed to be the sole independent variable. In general, there is no
obvious way to set default parameter values or parameter hints for bounds, so this will have to be handled explicitly.
To evaluate this model, you might do the following:

from numpy import exp, linspace, sin

x = linspace(0, 10, 501)

params = mod.make_params(off=0.25, amp=1.0, x0=2.0, phase=0.04)
y = mod.eval(params, x=x)
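Since ExpressionModel cannot set parameter hints automatically, bounds and other attributes must be set explicitly on the resulting Parameters; a minimal sketch continuing the example above (the choices here are illustrative):

params['x0'].set(value=2.0, min=1.e-6)   # the decay constant should stay positive
params['amp'].set(min=0)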


While many custom models can be built with a single-line expression (especially since the names of the lineshapes
like gaussian, lorentzian and so on, as well as many NumPy functions, are available), more complex models will
inevitably require multi-line functions. You can include such Python code with the init_script argument. The
text of this script is evaluated when the model is initialized (and before the actual expression is parsed), so that you can
define functions to be used in your expression.
As a probably unphysical example, to make a model that is the derivative of a Gaussian function times the logarithm
of a Lorentzian function, you could define this in a script:

script = """
def mycurve(x, amp, cen, sig):
    loren = lorentzian(x, amplitude=amp, center=cen, sigma=sig)
    gauss = gaussian(x, amplitude=amp, center=cen, sigma=sig)
    return log(loren) * gradient(gauss) / gradient(x)
"""

and then use this with ExpressionModel as:

mod = ExpressionModel('mycurve(x, height, mid, wid)', init_script=script,
                      independent_vars=['x'])

As above, this will interpret the parameter names to be height, mid, and wid, and build a model that can be used to
fit data.

8.8 Example 1: Fit Peak data to Gaussian, Lorentzian, and Voigt pro-
files

Here, we will fit data to three similar lineshapes, in order to decide which might be the better model. We will start with
a Gaussian profile, as in the previous chapter, but use the built-in GaussianModel instead of writing one ourselves.
This is a slightly different version from the one in the previous example, in that the parameter names are different and have
built-in default values. We will simply use:

from numpy import loadtxt

from lmfit.models import GaussianModel

data = loadtxt('test_peak.dat')
x = data[:, 0]
y = data[:, 1]

mod = GaussianModel()

pars = mod.guess(y, x=x)
out = mod.fit(y, pars, x=x)

print(out.fit_report(min_correl=0.25))

which prints out the results:

[[Model]]
Model(gaussian)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 25
# data points = 401
# variables = 3
chi-square = 29.9943157
reduced chi-square = 0.07536260
Akaike info crit = -1033.77437
Bayesian info crit = -1021.79248
R-squared = 0.99045513
[[Variables]]
amplitude: 30.3135789 +/- 0.15712752 (0.52%) (init = 43.62238)
center: 9.24277046 +/- 0.00737497 (0.08%) (init = 9.25)
sigma: 1.23218496 +/- 0.00737506 (0.60%) (init = 1.35)
fwhm: 2.90157379 +/- 0.01736695 (0.60%) == '2.3548200*sigma'
height: 9.81457271 +/- 0.05087308 (0.52%) == '0.3989423*amplitude/max(1e-15, sigma)'

[[Correlations]] (unreported correlations are < 0.250)


C(amplitude, sigma) = +0.5774

We see a few interesting differences from the results of the previous chapter. First, the parameter names are longer.
Second, there are fwhm and height parameters, to give the full-width-at-half-maximum and maximum peak height,
respectively. And third, the automated initial guesses are pretty good. A plot of the fit:

shows a decent match to the data – the fit worked with no explicit setting of initial parameter values. Looking more
closely, the fit is not perfect, especially in the tails of the peak, suggesting that a different peak shape, with longer
tails, should be used. Perhaps a Lorentzian would be better? To do this, we simply replace GaussianModel with
LorentzianModel:


from lmfit.models import LorentzianModel

mod = LorentzianModel()

with the rest of the script as above. Perhaps predictably, the first thing we try gives results that are worse, as seen by
comparing the fit statistics:

[[Model]]
Model(lorentzian)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 25
# data points = 401
# variables = 3
chi-square = 53.7535387
reduced chi-square = 0.13505914
Akaike info crit = -799.830322
Bayesian info crit = -787.848438
R-squared = 0.98289441
[[Variables]]
amplitude: 38.9726380 +/- 0.31386754 (0.81%) (init = 54.52798)
center: 9.24439393 +/- 0.00927645 (0.10%) (init = 9.25)
sigma: 1.15483177 +/- 0.01315708 (1.14%) (init = 1.35)
fwhm: 2.30966354 +/- 0.02631416 (1.14%) == '2.0000000*sigma'
height: 10.7421504 +/- 0.08634317 (0.80%) == '0.3183099*amplitude/max(1e-15, sigma)'

[[Correlations]] (unreported correlations are < 0.250)


C(amplitude, sigma) = +0.7087

and also by visual inspection of the fit to the data (figure below).


The tails are now too big, and the value for χ² almost doubled. A Voigt model does a better job. Using VoigtModel,
this is as simple as using:

from lmfit.models import VoigtModel

mod = VoigtModel()

with all the rest of the script as above. This gives:

[[Model]]
Model(voigt)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 25
# data points = 401
# variables = 3
chi-square = 14.5448627
reduced chi-square = 0.03654488
Akaike info crit = -1324.00615
Bayesian info crit = -1312.02427
R-squared = 0.99537150
[[Variables]]
amplitude: 35.7553799 +/- 0.13861559 (0.39%) (init = 65.43358)
center: 9.24411179 +/- 0.00505496 (0.05%) (init = 9.25)
sigma: 0.73015485 +/- 0.00368473 (0.50%) (init = 0.8775)
gamma: 0.73015485 +/- 0.00368473 (0.50%) == 'sigma'
fwhm: 2.62949983 +/- 0.01326979 (0.50%) == '1.0692*gamma+sqrt(0.8664*gamma**2+5.545083*sigma**2)'

height: 10.2204068 +/- 0.03959933 (0.39%) == '(amplitude/(max(1e-15, sigma*sqrt(2*pi))))*wofz((1j*gamma)/(max(1e-15, sigma*sqrt(2)))).real'
[[Correlations]] (unreported correlations are < 0.250)
C(amplitude, sigma) = +0.6513

which has a much better value for χ² and the other goodness-of-fit measures, and an obviously better match to the data
as seen in the figure below (left).

Fit to peak with Voigt model (left) and Voigt model with gamma varying independently of sigma (right).
Can we do better? The Voigt function has a 𝛾 parameter (gamma) that can be distinct from sigma. The default behavior
used above constrains gamma to have exactly the same value as sigma. If we allow these to vary separately, does the fit
improve? To do this, we have to change the gamma parameter from a constrained expression and give it a starting value
using something like:

mod = VoigtModel()
pars = mod.guess(y, x=x)
pars['gamma'].set(value=0.7, vary=True, expr='')

which gives:

[[Model]]
Model(voigt)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 26
# data points = 401
# variables = 4
chi-square = 10.9301767
reduced chi-square = 0.02753193
Akaike info crit = -1436.57602
Bayesian info crit = -1420.60017
R-squared = 0.99652177
[[Variables]]
amplitude: 34.1914716 +/- 0.17946974 (0.52%) (init = 65.43358)
center: 9.24374846 +/- 0.00441904 (0.05%) (init = 9.25)
sigma: 0.89518951 +/- 0.01415479 (1.58%) (init = 0.8775)
gamma: 0.52540156 +/- 0.01857994 (3.54%) (init = 0.7)
fwhm: 2.72573678 +/- 0.01363994 (0.50%) == '1.0692*gamma+sqrt(0.8664*gamma**2+5.545083*sigma**2)'
height: 10.0872197 +/- 0.03482126 (0.35%) == '(amplitude/(max(1e-15, sigma*sqrt(2*pi))))*wofz((1j*gamma)/(max(1e-15, sigma*sqrt(2)))).real'

[[Correlations]] (unreported correlations are < 0.250)


C(sigma, gamma) = -0.9285
C(amplitude, gamma) = +0.8210
C(amplitude, sigma) = -0.6512

and the fit shown on the right above.


Comparing the two fits with the Voigt function, we see that χ² is definitely improved with a separately varying gamma
parameter. In addition, the two values for gamma and sigma differ significantly – well outside the estimated uncertainties.
More compelling, reduced χ² is improved even though a fourth variable has been added to the fit. In the simplest
statistical sense, this suggests that gamma is a significant variable in the model. In addition, we can use either the Akaike
or the Bayesian Information Criterion (see Akaike and Bayesian Information Criteria) to assess how much more likely
the model with variable gamma is to explain the data than the model with gamma fixed to the value of sigma. According
to theory, exp(−(AIC1 − AIC0)/2) gives the probability that a model with AIC1 is more likely than a model with AIC0.
For the two models here, with AIC values of -1436 and -1324 (Note: if we had more carefully set the value for weights
based on the noise in the data, these values might be positive, but their difference would be roughly the same), this
says that the model with gamma fixed to sigma has a probability less than 5.e-25 of being the better model.
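That probability can be checked directly from the reported AIC values (a minimal sketch):

import numpy as np

aic_free, aic_fixed = -1436, -1324
# relative likelihood that the gamma-fixed model is the better one:
print(np.exp(-(aic_fixed - aic_free)/2))   # exp(-56), about 4.8e-25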

8.9 Example 2: Fit data to a Composite Model with pre-defined mod-


els

Here, we repeat the point made at the end of the last chapter that instances of the Model class can be added together to
make a composite model. By using the large number of built-in models available, it is therefore very simple to build
models that contain multiple peaks and various backgrounds. An example of a simple fit to a noisy step function plus
a constant:

# <examples/doc_builtinmodels_stepmodel.py>
import matplotlib.pyplot as plt
import numpy as np

from lmfit.models import LinearModel, StepModel

x = np.linspace(0, 10, 201)
y = np.ones_like(x)
y[:48] = 0.0
y[48:77] = np.arange(77-48)/(77.0-48)
np.random.seed(0)
y = 110.2 * (y + 9e-3*np.random.randn(x.size)) + 12.0 + 2.22*x

step_mod = StepModel(form='erf', prefix='step_')
line_mod = LinearModel(prefix='line_')

pars = line_mod.make_params(intercept=y.min(), slope=0)
pars += step_mod.guess(y, x=x, center=2.5)

mod = step_mod + line_mod
out = mod.fit(y, pars, x=x)

print(out.fit_report())

plt.plot(x, y)
plt.plot(x, out.init_fit, '--', label='initial fit')
plt.plot(x, out.best_fit, '-', label='best fit')
plt.legend()
plt.show()
# <end examples/doc_builtinmodels_stepmodel.py>

After constructing step-like data, we first create a StepModel telling it to use the erf form (see details above), and a
LinearModel. We set initial values, in one case using the data and the guess() method for the initial step function
parameters, and make_params() arguments for the linear component. After making a composite model, we run fit()
and report the results, which gives:

[[Model]]
(Model(step, prefix='step_', form='erf') + Model(linear, prefix='line_'))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 55
# data points = 201
# variables = 5
chi-square = 593.709621
reduced chi-square = 3.02913072
Akaike info crit = 227.700173
Bayesian info crit = 244.216697
R-squared = 0.99897798
[[Variables]]
line_slope: 1.87162383 +/- 0.09318592 (4.98%) (init = 0)
line_intercept: 12.0964588 +/- 0.27606017 (2.28%) (init = 11.58574)
step_amplitude: 112.858576 +/- 0.65391731 (0.58%) (init = 134.7378)
step_center: 3.13494787 +/- 0.00516602 (0.16%) (init = 2.5)
step_sigma: 0.67393440 +/- 0.01091158 (1.62%) (init = 1.428571)
[[Correlations]] (unreported correlations are < 0.100)
C(line_slope, step_amplitude) = -0.8791
C(step_amplitude, step_sigma) = +0.5643
C(line_slope, step_sigma) = -0.4569
C(line_intercept, step_center) = +0.4269
C(line_slope, line_intercept) = -0.3093
C(line_slope, step_center) = -0.2338
C(line_intercept, step_sigma) = -0.1372
C(line_intercept, step_amplitude) = -0.1173
C(step_amplitude, step_center) = +0.1095

with a plot of

128 Chapter 8. Built-in Fitting Models in the models module


Non-Linear Least-Squares Minimization and Curve-Fitting for Python, Release 1.2.0

8.10 Example 3: Fitting Multiple Peaks – and using Prefixes

As shown above, many of the models have similar parameter names. For composite models, this could lead to a problem
of having parameters for different parts of the model having the same name. To overcome this, each Model can have a
prefix attribute (normally set to a blank string) that will be put at the beginning of each parameter name. To illustrate,
we fit one of the classic datasets from the NIST StRD suite involving a decaying exponential and two Gaussians.

# <examples/doc_builtinmodels_nistgauss.py>
import matplotlib.pyplot as plt
import numpy as np

from lmfit.models import ExponentialModel, GaussianModel

dat = np.loadtxt('NIST_Gauss2.dat')
x = dat[:, 1]
y = dat[:, 0]

exp_mod = ExponentialModel(prefix='exp_')
pars = exp_mod.guess(y, x=x)

gauss1 = GaussianModel(prefix='g1_')
pars.update(gauss1.make_params(center=dict(value=105, min=75, max=125),
sigma=dict(value=15, min=0),
amplitude=dict(value=2000, min=0)))

gauss2 = GaussianModel(prefix='g2_')
pars.update(gauss2.make_params(center=dict(value=155, min=125, max=175),
sigma=dict(value=15, min=0),
amplitude=dict(value=2000, min=0)))

mod = gauss1 + gauss2 + exp_mod

init = mod.eval(pars, x=x)
out = mod.fit(y, pars, x=x)

print(out.fit_report(correl_mode='table'))

fig, axes = plt.subplots(1, 2, figsize=(12.8, 4.8))

axes[0].plot(x, y)
axes[0].plot(x, init, '--', label='initial fit')
axes[0].plot(x, out.best_fit, '-', label='best fit')
axes[0].legend()

comps = out.eval_components(x=x)
axes[1].plot(x, y)
axes[1].plot(x, comps['g1_'], '--', label='Gaussian component 1')
axes[1].plot(x, comps['g2_'], '--', label='Gaussian component 2')
axes[1].plot(x, comps['exp_'], '--', label='Exponential component')
axes[1].legend()

plt.show()
# <end examples/doc_builtinmodels_nistgauss.py>

where we give a separate prefix to each model (they all have an amplitude parameter). The prefix values are attached
transparently to the models.
Note that the calls to make_params() used the bare names, without the prefix. We could have used the prefixes, but
because we used the individual model gauss1 and gauss2, there was no need.
Note also in the example here that we explicitly set bounds on many of the parameter values.
The fit results printed out are:

[[Model]]
((Model(gaussian, prefix='g1_') + Model(gaussian, prefix='g2_')) + Model(exponential, prefix='exp_'))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 46
# data points = 250
# variables = 8
chi-square = 1247.52821
reduced chi-square = 5.15507524
Akaike info crit = 417.864631
Bayesian info crit = 446.036318
R-squared = 0.99648654
[[Variables]]
exp_amplitude: 99.0183278 +/- 0.53748593 (0.54%) (init = 162.2102)
exp_decay: 90.9508853 +/- 1.10310778 (1.21%) (init = 93.24905)
g1_amplitude: 4257.77360 +/- 42.3836478 (1.00%) (init = 2000)
g1_center: 107.030956 +/- 0.15006851 (0.14%) (init = 105)
g1_sigma: 16.6725772 +/- 0.16048381 (0.96%) (init = 15)
g1_fwhm: 39.2609181 +/- 0.37791049 (0.96%) == '2.3548200*g1_sigma'
g1_height: 101.880230 +/- 0.59217173 (0.58%) == '0.3989423*g1_amplitude/max(1e-15, g1_sigma)'
g2_amplitude: 2493.41735 +/- 36.1697789 (1.45%) (init = 2000)
g2_center: 153.270102 +/- 0.19466802 (0.13%) (init = 155)
g2_sigma: 13.8069464 +/- 0.18679695 (1.35%) (init = 15)
g2_fwhm: 32.5128735 +/- 0.43987320 (1.35%) == '2.3548200*g2_sigma'
g2_height: 72.0455941 +/- 0.61722243 (0.86%) == '0.3989423*g2_amplitude/max(1e-15, g2_sigma)'

[[Correlations]] (unreported correlations are < 0.100)


C(g1_amplitude, g1_sigma) = +0.8243
C(g2_amplitude, g2_sigma) = +0.8154
C(exp_amplitude, exp_decay) = -0.6946
C(g1_sigma, g2_center) = +0.6842
C(g1_center, g2_amplitude) = -0.6689
C(g1_center, g2_sigma) = -0.6520
C(g1_amplitude, g2_center) = +0.6477
C(g1_center, g2_center) = +0.6205
C(g1_center, g1_sigma) = +0.5075
C(exp_decay, g1_amplitude) = -0.5074
C(g1_sigma, g2_amplitude) = -0.4915
C(g2_center, g2_sigma) = -0.4889
C(g1_sigma, g2_sigma) = -0.4826
C(g2_amplitude, g2_center) = -0.4763
C(exp_decay, g2_amplitude) = -0.4270
C(g1_amplitude, g1_center) = +0.4183
C(g1_amplitude, g2_sigma) = -0.4010
C(g1_amplitude, g2_amplitude) = -0.3071
C(exp_amplitude, g2_amplitude) = +0.2821
C(exp_decay, g1_sigma) = -0.2520
C(exp_decay, g2_sigma) = -0.2329
C(exp_amplitude, g2_sigma) = +0.1714
C(exp_decay, g2_center) = -0.1514
C(exp_amplitude, g1_amplitude) = +0.1478
C(exp_decay, g1_center) = +0.1055

We get a very good fit to this problem (described at the NIST site as of average difficulty, but the tests there are generally
deliberately challenging) by applying reasonable initial guesses and putting modest but explicit bounds on the parameter
values. The overall fit is shown on the left, with its individual components displayed on the right:

One final point on setting initial values. From looking at the data itself, we can see the two Gaussian peaks are rea-
sonably well separated but do overlap. Furthermore, we can tell that the initial guess for the decaying exponential
component was poorly estimated because we used the full data range. We can simplify the initial parameter values by
using this, and by defining an index_of() function to limit the data range. That is, with:

def index_of(arrval, value):
    """Return index of array *at or below* value."""
    if value < min(arrval):
        return 0
    return max(np.where(arrval <= value)[0])

ix1 = index_of(x, 75)
ix2 = index_of(x, 135)
ix3 = index_of(x, 175)

exp_mod.guess(y[:ix1], x=x[:ix1])
gauss1.guess(y[ix1:ix2], x=x[ix1:ix2])
gauss2.guess(y[ix2:ix3], x=x[ix2:ix3])

we can get a better initial estimate (see below).


The fit converges to the same answer, giving identical values (to the precision printed in the report), but in fewer
steps, and without any bounds on parameters at all:

[[Model]]
((Model(gaussian, prefix='g1_') + Model(gaussian, prefix='g2_')) + Model(exponential, prefix='exp_'))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 37
# data points = 250
# variables = 8
chi-square = 1247.52821
reduced chi-square = 5.15507524
Akaike info crit = 417.864631
Bayesian info crit = 446.036318
R-squared = 0.99648654
[[Variables]]
exp_amplitude: 99.0183265 +/- 0.53748764 (0.54%) (init = 94.53724)
exp_decay: 90.9508884 +/- 1.10310753 (1.21%) (init = 111.1985)
g1_amplitude: 4257.77384 +/- 42.3839276 (1.00%) (init = 3189.648)
g1_center: 107.030957 +/- 0.15006934 (0.14%) (init = 106.5)
g1_sigma: 16.6725783 +/- 0.16048220 (0.96%) (init = 14.5)
g1_fwhm: 39.2609209 +/- 0.37790669 (0.96%) == '2.3548200*g1_sigma'
g1_height: 101.880228 +/- 0.59216965 (0.58%) == '0.3989423*g1_amplitude/max(1e-15, g1_sigma)'
g2_amplitude: 2493.41698 +/- 36.1699974 (1.45%) (init = 2818.337)
g2_center: 153.270103 +/- 0.19466966 (0.13%) (init = 150)
g2_sigma: 13.8069440 +/- 0.18680331 (1.35%) (init = 15)
g2_fwhm: 32.5128679 +/- 0.43988817 (1.35%) == '2.3548200*g2_sigma'
g2_height: 72.0455954 +/- 0.61722287 (0.86%) == '0.3989423*g2_amplitude/max(1e-15, g2_sigma)'

[[Correlations]] (unreported correlations are < 0.100)


C(g1_amplitude, g1_sigma) = +0.8243
C(g2_amplitude, g2_sigma) = +0.8154
C(exp_amplitude, exp_decay) = -0.6946
C(g1_sigma, g2_center) = +0.6842
C(g1_center, g2_amplitude) = -0.6689
C(g1_center, g2_sigma) = -0.6521
C(g1_amplitude, g2_center) = +0.6477
C(g1_center, g2_center) = +0.6205
C(g1_center, g1_sigma) = +0.5075
C(exp_decay, g1_amplitude) = -0.5074
C(g1_sigma, g2_amplitude) = -0.4914
C(g2_center, g2_sigma) = -0.4890
C(g1_sigma, g2_sigma) = -0.4826
C(g2_amplitude, g2_center) = -0.4763
C(exp_decay, g2_amplitude) = -0.4270
C(g1_amplitude, g1_center) = +0.4183
C(g1_amplitude, g2_sigma) = -0.4011
C(g1_amplitude, g2_amplitude) = -0.3071
C(exp_amplitude, g2_amplitude) = +0.2821
C(exp_decay, g1_sigma) = -0.2520
C(exp_decay, g2_sigma) = -0.2329
C(exp_amplitude, g2_sigma) = +0.1714
C(exp_decay, g2_center) = -0.1514
C(exp_amplitude, g1_amplitude) = +0.1478
C(exp_decay, g1_center) = +0.1055

This script is in the file doc_builtinmodels_nistgauss2.py in the examples folder, and the figure above shows an
improved initial estimate of the data.

8.11 Example 4: Using a Spline Model

In the example above, the two peaks might represent the interesting part of the data, and the exponential decay could
be viewed as a “background” which might be due to other physical effects or part of some response of the instrumentation
used to make the measurement. That is, the background might be well understood to be modeled as an exponential
decay, as in the example above, and so easily included in the full analysis. As the results above show, there is some – but
not huge – correlation of the parameters between the peak amplitudes and the decay of the exponential function. That
means that it is helpful to include all of those components in a single fit, as the uncertainties in the peak amplitudes
(which would be interpreted as “line strength” or “area”) will reflect some of the uncertainty in how well we modeled
the background.
Sometimes a background is more complex or at least has a less obvious functional form. In these cases, it can be
useful to use a spline to model part of the curve. Just for completeness, a spline is a piecewise continuous polynomial
function (typically made of cubic polynomials) that has a series of x values known as “knots” at which the highest order
derivative is allowed to be discontinuous. By adding more knots, the spline function has more flexibility to follow a
particular function.
As an example (see the example file “doc_builtinmodels_splinemodel.py”), we start with data with a single peak and


a background that is hard to characterize clearly as a simple decay, as it has some oscillatory structure.

import numpy as np
import matplotlib.pyplot as plt
from lmfit.models import SplineModel, GaussianModel

data = np.loadtxt('test_splinepeak.dat')
x = data[:, 0]
y = data[:, 1]

plt.plot(x, y, label='data')
plt.legend()
plt.show()

which shows (figure below):

There is definitely a peak there, so we could start with building a model for a Gaussian peak, say with:

model = GaussianModel(prefix='peak_')
params = model.make_params(amplitude=8, center=16, sigma=1)

To account for that changing background, we’ll use a spline, but need to know where to put the “knots”. Picking points
away from the peak makes sense – we don’t want to fit the peak – but we want it to have some flexibility near the peak.
Let's try spacing knot points at x=1, 3, ..., 13, then skip over the peak at around x=16 and then pick up knot
points at x=19, 21, 23, 25.

knot_xvals = np.array([1, 3, 5, 7, 9, 11, 13, 19, 21, 23, 25])

bkg = SplineModel(prefix='bkg_', xknots=knot_xvals)
params.update(bkg.guess(y, x))

Note that we used bkg.guess() to guess the initial values of the spline parameters and then update the params
Parameters object with these 11 parameters to account for the spline. These will be very close to the y values at the
knot x values. The precise definition of the spline knot parameters is not “the y-values through which the resulting
spline curve goes”, but these values are pretty good estimates for the resulting spline values. You’ll see below that these
initial values are close.
With a spline background defined, we can create a composite model, and run a fit.

model = model + bkg

params['peak_amplitude'].min = 0
params['peak_center'].min = 10
params['peak_center'].max = 20

out = model.fit(y, params, x=x)
print(out.fit_report(min_correl=0.3))

You’ll see that we first set some “sanity bounds” on the peak parameters to prevent the peak from going completely
wrong. This really is not necessary in this case, but it is often a reasonable thing to do - the general advice for this is to
be generous in the bounds, not overly restrictive.
This fit will print out a report of

[[Model]]
(Model(gaussian, prefix='peak_') + Model(spline_model, prefix='bkg_'))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 92
# data points = 501
# variables = 14
chi-square = 52.6611549
reduced chi-square = 0.10813379
Akaike info crit = -1100.61674
Bayesian info crit = -1041.58425
R-squared = 0.94690612
[[Variables]]
peak_amplitude: 12.2231135 +/- 0.29554108 (2.42%) (init = 8)
peak_center: 16.4280869 +/- 0.01091051 (0.07%) (init = 16)
peak_sigma: 0.72096400 +/- 0.01336667 (1.85%) (init = 1)
peak_fwhm: 1.69774046 +/- 0.03147610 (1.85%) == '2.3548200*peak_sigma'
peak_height: 6.76360674 +/- 0.09854044 (1.46%) == '0.3989423*peak_amplitude/max(1e-15, peak_sigma)'
bkg_s0: 3.51175736 +/- 0.04941392 (1.41%) (init = 3.787995)
bkg_s1: 3.72930068 +/- 0.09558236 (2.56%) (init = 3.959487)
bkg_s2: 4.26846495 +/- 0.12650286 (2.96%) (init = 4.384009)
bkg_s3: 4.42375490 +/- 0.10170203 (2.30%) (init = 4.431971)
bkg_s4: 4.49590448 +/- 0.10615552 (2.36%) (init = 4.243976)
bkg_s5: 3.96515315 +/- 0.09336555 (2.35%) (init = 4.115153)
bkg_s6: 3.35531899 +/- 0.12669985 (3.78%) (init = 3.965325)
bkg_s7: 2.89909752 +/- 0.16190211 (5.58%) (init = 2.788437)
bkg_s8: 2.82656963 +/- 0.13445495 (4.76%) (init = 2.984317)
bkg_s9: 3.43338680 +/- 0.15987281 (4.66%) (init = 3.383491)
bkg_s10: 3.73024843 +/- 0.12096865 (3.24%) (init = 3.791937)
[[Correlations]] (unreported correlations are < 0.300)
C(bkg_s7, bkg_s8) = -0.8192
C(peak_amplitude, peak_sigma) = +0.7987
C(bkg_s8, bkg_s9) = -0.7063
C(bkg_s5, bkg_s6) = -0.6950
C(peak_amplitude, bkg_s7) = -0.6878
C(bkg_s2, bkg_s3) = -0.6672
C(bkg_s9, bkg_s10) = -0.6060
C(bkg_s3, bkg_s4) = -0.5743
C(bkg_s1, bkg_s2) = -0.5646
C(bkg_s4, bkg_s5) = -0.5542
C(bkg_s7, bkg_s9) = +0.5216
C(peak_sigma, bkg_s7) = -0.5193
C(peak_amplitude, bkg_s8) = +0.5185
C(bkg_s0, bkg_s1) = +0.4448
C(peak_sigma, bkg_s8) = +0.3733
C(peak_center, bkg_s6) = +0.3599
C(bkg_s4, bkg_s6) = +0.3597
C(bkg_s0, bkg_s2) = -0.3595
C(bkg_s2, bkg_s4) = +0.3504
C(bkg_s8, bkg_s10) = +0.3455
C(bkg_s6, bkg_s7) = -0.3332
C(peak_center, bkg_s7) = -0.3301
C(peak_amplitude, bkg_s9) = -0.3206

From this we can make a few observations. First, the correlation between the “spline” parameters and the “peak”
parameters is noticeable, but not extremely high – that's good, and the estimated uncertainties do account for this
correlation. The spline components are correlated with each other (especially with the N-1 and N+1 spline parameters).
Second, we can see that the initial values for the background spline parameters are pretty good.
We can plot the results and fit components with

comps = out.eval_components()
plt.plot(x, out.best_fit, label='best fit')
plt.plot(x, comps['bkg_'], label='background')
plt.plot(x, comps['peak_'], label='peak')
plt.legend()

which will generate the plot shown below:

If you're interested in seeing the locations of the knots, you might do:

knot_yvals = np.array([o.value for o in out.params.values() if o.name.startswith('bkg')])
plt.plot(knot_xvals, knot_yvals, 'o', color='black', label='spline knots values')

which will be shown as:

You might be interested in trying to assess what impact the selection of the knots has on the resulting peak intensity. For
example, you might try some of the following sets of knot values:

knot_xvals1 = np.array([1, 3, 5, 7, 9, 11, 13, 19, 21, 23, 25])
knot_xvals2 = np.array([1, 3, 5, 7, 9, 11, 13, 16, 19, 21, 23, 25])
knot_xvals3 = np.array([1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25])

and re-run the fit with these different sets of knot points. The results are shown in the table below.
Table of Peak amplitudes with varying spline points

spline x points    N     Peak amplitude value and uncertainty
knot_xvals1        11    12.223 (0.295)
knot_xvals2        12    11.746 (0.594)
knot_xvals3        13    12.052 (0.872)

Adding more spline points, especially near the peak center around x=16.4, can impact the measurement of the amplitude,
but the uncertainty increases dramatically enough to mostly cover the same range of values. This is an interesting
case of adding more parameters to a fit and having the uncertainties in the fitted parameters get worse. The interested
reader is encouraged to explore the fit reports and plot these different cases.
Finally, the basic case above used 11 spline points to fit the baseline. In fact, it would be reasonable to ask whether that
is enough parameters to fit the full spectrum. Imposing that there is also a Gaussian peak nearby makes the spline fit
only the background; without the Gaussian, the spline could fit the full curve. By way of example, we'll just try
increasing the number of spline points to fit this data:

plt.plot(x, y, 'o', label='data')

for nknots in (10, 15, 20, 25):
    model = SplineModel(prefix='bkg_', xknots=np.linspace(0, 25, nknots))
    params = model.guess(y, x)
    out = model.fit(y, params, x=x)
    plt.plot(x, out.best_fit, label=f'best-fit ({nknots} knots)')

plt.legend()
plt.show()

which will show the fit below:

By itself, 10 knots does not give a very good fit, but 25 knots or more does give a very good fit to the peak. This should
give some confidence that the fit with 11 parameters for the background spline is acceptable, but also give some reason
to be careful in selecting the number of spline points to use.



CHAPTER NINE

CALCULATION OF CONFIDENCE INTERVALS

The lmfit confidence module allows you to explicitly calculate confidence intervals for variable parameters. For most
models, it is not necessary since the estimation of the standard error from the estimated covariance matrix is normally
quite good.
But for some models, the sum of two exponentials for example, the approximation begins to fail. For this case, lmfit
has the function conf_interval() to calculate confidence intervals directly. This is substantially slower than using
the errors estimated from the covariance matrix, but the results are more robust.

9.1 Method used for calculating confidence intervals

The F-test is used to compare our null model, which is the best fit we have found, with an alternate model, where one
of the parameters is fixed to a specific value. The value is changed until the difference between \chi^2_0 and \chi^2_f
can't be explained by the loss of a degree of freedom within a certain confidence.

F(P_{\mathrm{fix}}, N - P) = \left(\frac{\chi^2_f}{\chi^2_0} - 1\right) \frac{N - P}{P_{\mathrm{fix}}}

N is the number of data points and P the number of parameters of the null model. 𝑃𝑓 𝑖𝑥 is the number of fixed parameters
(or to be more clear, the difference of number of parameters between our null model and the alternate model).
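To make the statistic concrete, here is a minimal sketch of how such an F value maps to a confidence probability using scipy.stats.f. The chi-square numbers and counts here are made up purely for illustration:

import scipy.stats

# hypothetical numbers: 100 data points, 2 varying parameters in the
# null model, one parameter fixed in the alternate model
N, P, P_fix = 100, 2, 1
chi2_0, chi2_f = 96.0, 101.3   # made-up chi-square values

F = (chi2_f/chi2_0 - 1) * (N - P)/P_fix
# confidence level at which the chi-square increase becomes significant
prob = scipy.stats.f.cdf(F, P_fix, N - P)
print(f"F = {F:.3f}, confidence = {prob:.3f}")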
Adding a log-likelihood method is under consideration.

9.2 A basic example

First we create an example problem:

import numpy as np

import lmfit

x = np.linspace(0.3, 10, 100)
np.random.seed(0)
y = 1/(0.1*x) + 2 + 0.1*np.random.randn(x.size)
pars = lmfit.Parameters()
pars.add_many(('a', 0.1), ('b', 1))

def residual(p):
    return 1/(p['a']*x) + p['b'] - y


Before we can generate the confidence intervals, we have to run a fit, so that the automated estimate of the standard errors can be used as a starting point:

mini = lmfit.Minimizer(residual, pars)
result = mini.minimize()

print(lmfit.fit_report(result.params))

[[Variables]]
a: 0.09943896 +/- 1.9322e-04 (0.19%) (init = 0.1)
b: 1.98476942 +/- 0.01222678 (0.62%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(a, b) = +0.6008

Now it is just a simple function call to calculate the confidence intervals:

ci = lmfit.conf_interval(mini, result)
lmfit.printfuncs.report_ci(ci)

99.73% 95.45% 68.27% _BEST_ 68.27% 95.45% 99.73%


a: -0.00059 -0.00039 -0.00019 0.09944 +0.00019 +0.00039 +0.00060
b: -0.03764 -0.02477 -0.01229 1.98477 +0.01229 +0.02477 +0.03764

This shows the best-fit values for the parameters in the _BEST_ column, and parameter values that are at the varying confidence levels given by steps in σ. As we can see, the estimated error is almost the same, and the uncertainties are well behaved: going from 1-σ (68% confidence) to 3-σ (99.7% confidence) uncertainties is fairly linear. It can also be seen that the errors are fairly symmetric around the best fit value. For this problem, it is not necessary to calculate confidence intervals, and the estimates of the uncertainties from the covariance matrix are sufficient.

9.3 Working without standard error estimates

Sometimes the estimation of the standard errors from the covariance matrix fails, especially if values are near given
bounds. Hence, to find the confidence intervals in these cases, it is necessary to set the errors by hand. Note that the
standard error is only used to find an upper limit for each value, hence the exact value is not important.
To set the step-size to 10% of the initial value we loop through all parameters and set it manually:

for p in result.params:
    result.params[p].stderr = abs(result.params[p].value * 0.1)
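With these hand-set standard errors in place, the confidence intervals can then be calculated as in the basic example above. A short sketch, reusing the mini and result objects from that example:

# the manually-set stderr values are used only as search starting points
ci = lmfit.conf_interval(mini, result)
lmfit.printfuncs.report_ci(ci)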


9.4 Calculating and visualizing maps of χ²

The estimated values for the 1-σ standard error calculated by default for each fit include the effects of correlation between pairs of variables, but assume the uncertainties are symmetric. While it doesn't exactly say what the values of the n-σ uncertainties would be, the implication is that the n-σ error is simply n²σ.
The conf_interval() function described above improves on these automatically (and quickly) calculated uncertainties by explicitly finding n-σ confidence levels in both directions; it does not assume that the uncertainties are symmetric. This function also takes into account the correlations between pairs of variables, but it does not convey this information very well.
For even further exploration of the confidence levels of parameter values, it can be useful to calculate maps of χ² values for pairs of variables around their best fit values and visualize these as contour plots. Typically, pairs of variables will have elliptical contours of constant n-σ level, with highly-correlated pairs of variables having high ratios of major and minor axes.
The conf_interval2d() function can calculate 2-D arrays or maps of either probability or δχ² = χ² − χ²_best for any pair of variables. Visualizing these can help better understand the nature of the uncertainties and correlations between parameters. To illustrate this, we'll start with an example fit to data to which we deliberately add components not accounted for in the model, and with slightly non-Gaussian noise; a constructed but "real-world" example:

# <examples/doc_confidence_chi2_maps.py>
import matplotlib.pyplot as plt
import numpy as np

from lmfit import conf_interval, conf_interval2d, report_ci
from lmfit.lineshapes import gaussian
from lmfit.models import GaussianModel, LinearModel

sigma_levels = [1, 2, 3]

rng = np.random.default_rng(seed=102)

#########################
# set up data -- deliberately adding imperfections and
# a small amount of non-Gaussian noise
npts = 501
x = np.linspace(1, 100, num=npts)
noise = rng.normal(scale=0.3, size=npts) + 0.2*rng.f(3, 9, size=npts)
y = (gaussian(x, amplitude=83, center=47., sigma=5.)
     + 0.02*x + 4 + 0.25*np.cos((x-20)/8.0) + noise)

mod = GaussianModel() + LinearModel()
params = mod.make_params(amplitude=100, center=50, sigma=5,
                         slope=0, intercept=2)
out = mod.fit(y, params, x=x)
print(out.fit_report())

#########################
# run conf_interval, print report
sigma_levels = [1, 2, 3]
ci = conf_interval(out, out, sigmas=sigma_levels)

print("## Confidence Report:")
report_ci(ci)

[[Model]]
(Model(gaussian) + Model(linear))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 31
# data points = 501
# variables = 5
chi-square = 103.861381
reduced chi-square = 0.20939794
Akaike info crit = -778.348033
Bayesian info crit = -757.265003
R-squared = 0.93782756
[[Variables]]
amplitude: 78.8171374 +/- 1.21910939 (1.55%) (init = 100)
center: 47.0751649 +/- 0.07576660 (0.16%) (init = 50)
sigma: 4.93298753 +/- 0.07984021 (1.62%) (init = 5)
slope: 0.01839006 +/- 7.1957e-04 (3.91%) (init = 0)
intercept: 4.39234411 +/- 0.04420227 (1.01%) (init = 0)
fwhm: 11.6162977 +/- 0.18800933 (1.62%) == '2.3548200*sigma'
height: 6.37412722 +/- 0.08603873 (1.35%) == '0.3989423*amplitude/max(1e-15, sigma)'

[[Correlations]] (unreported correlations are < 0.100)


C(slope, intercept) = -0.8421
C(amplitude, sigma) = +0.6371
C(amplitude, intercept) = -0.3373
C(sigma, intercept) = -0.2149
C(center, slope) = -0.1026

## Confidence Report:
99.73% 95.45% 68.27% _BEST_ 68.27% 95.45% 99.73%
amplitude: -3.62610 -2.41983 -1.21237 78.81714 +1.22111 +2.45479 +3.70515
center : -0.22849 -0.15214 -0.07584 47.07516 +0.07587 +0.15225 +0.22873
sigma : -0.23335 -0.15640 -0.07870 4.93299 +0.08000 +0.16158 +0.24509
slope : -0.00217 -0.00144 -0.00072 0.01839 +0.00072 +0.00144 +0.00217
intercept: -0.13326 -0.08860 -0.04423 4.39234 +0.04421 +0.08854 +0.13312

The reports show that we obtained a pretty good fit, and that the automated estimates of the uncertainties are actually pretty good; they agree to the second decimal place. But we also see that some of the uncertainties do become noticeably asymmetric at high n-σ levels.
We'll plot this data and fit, and then further explore these uncertainties using conf_interval2d():

#########################
# plot initial fit
colors = ('#2030b0', '#b02030', '#207070')
fig, axes = plt.subplots(2, 3, figsize=(15, 9.5))

axes[0, 0].plot(x, y, 'o', markersize=3, label='data', color=colors[0])
axes[0, 0].plot(x, out.best_fit, label='fit', color=colors[1])
axes[0, 0].set_xlabel('x')
axes[0, 0].set_ylabel('y')
axes[0, 0].legend()

aix, aiy = 0, 0
nsamples = 30
for pairs in (('sigma', 'amplitude'), ('intercept', 'amplitude'),
              ('slope', 'intercept'), ('slope', 'center'), ('sigma', 'center')):
    xpar, ypar = pairs
    print("Generating chi-square map for ", pairs)
    c_x, c_y, dchi2_mat = conf_interval2d(out, out, xpar, ypar,
                                          nsamples, nsamples,
                                          nsigma=3.5, chi2_out=True)
    # sigma matrix: sigma increases chi_square
    # from chi_square_best
    # to chi_square + sigma**2 * reduced_chi_square
    # so: sigma = sqrt(dchi2 / reduced_chi_square)
    sigma_mat = np.sqrt(abs(dchi2_mat)/out.redchi)

    # you could calculate the matrix of probabilities from sigma as:
    # prob_mat = scipy.special.erf(sigma_mat/np.sqrt(2))

    aix += 1
    if aix == 2:
        aix = 0
        aiy += 1
    ax = axes[aix, aiy]

    cnt = ax.contour(c_x, c_y, sigma_mat, levels=sigma_levels,
                     colors=colors, linestyles='-')
    ax.clabel(cnt, inline=True, fmt=r"$\sigma=%.0f$", fontsize=13)

    # draw boxes for estimated uncertainties:
    # dotted : scaled stderr from initial fit
    # dashed : values found from conf_interval()
    xv = out.params[xpar].value
    xs = out.params[xpar].stderr
    yv = out.params[ypar].value
    ys = out.params[ypar].stderr

    cix = ci[xpar]
    ciy = ci[ypar]
    nc = len(sigma_levels)
    for i in sigma_levels:
        # dotted line: scaled stderr
        ax.plot((xv-i*xs, xv+i*xs, xv+i*xs, xv-i*xs, xv-i*xs),
                (yv-i*ys, yv-i*ys, yv+i*ys, yv+i*ys, yv-i*ys),
                linestyle='dotted', color=colors[i-1])

        # dashed line: refined uncertainties from conf_interval
        xsp, xsm = cix[nc+i][1], cix[nc-i][1]
        ysp, ysm = ciy[nc+i][1], ciy[nc-i][1]
        ax.plot((xsm, xsp, xsp, xsm, xsm), (ysm, ysm, ysp, ysp, ysm),
                linestyle='dashed', color=colors[i-1])

    ax.set_xlabel(xpar)
    ax.set_ylabel(ypar)
    ax.grid(True, color='#d0d0d0')
plt.show()
# <end examples/doc_confidence_chi2_maps.py>

Generating chi-square map for ('sigma', 'amplitude')

Generating chi-square map for ('intercept', 'amplitude')

Generating chi-square map for ('slope', 'intercept')

Generating chi-square map for ('slope', 'center')

Generating chi-square map for ('sigma', 'center')

Here we made contours for the n-σ levels from the 2-D array of χ² by noting that the n-σ level will have χ² increased by n²χ²_ν, where χ²_ν is the reduced chi-square.
The dotted boxes show the scaled values of the standard errors from the initial fit, and the dashed boxes show the confidence levels from conf_interval(). You can see that the notion of increasing χ² by χ²_ν works very well, and that there is a small asymmetry in the uncertainties for the amplitude and sigma parameters.


9.5 An advanced example for evaluating confidence intervals

Now we look at a problem where calculating the error from the approximated covariance can lead to misleading results: the same double exponential problem shown in Minimizer.emcee() - calculating the posterior probability distribution of parameters. In fact, such a problem is particularly hard for the Levenberg-Marquardt method, so we first estimate the results using the slower but robust Nelder-Mead method. We can then compare the uncertainties computed (if the numdifftools package is installed) with those estimated using Levenberg-Marquardt around the previously found solution. We can also compare to the results of using emcee.

# <examples/doc_confidence_advanced.py>
import matplotlib.pyplot as plt
import numpy as np

import lmfit

x = np.linspace(1, 10, 250)
np.random.seed(0)
y = 3.0*np.exp(-x/2) - 5.0*np.exp(-(x-0.1)/10.) + 0.1*np.random.randn(x.size)

p = lmfit.create_params(a1=4, a2=4, t1=3, t2=3)

def residual(p):
    return p['a1']*np.exp(-x/p['t1']) + p['a2']*np.exp(-(x-0.1)/p['t2']) - y

# create Minimizer
mini = lmfit.Minimizer(residual, p, nan_policy='propagate')

# first solve with Nelder-Mead algorithm
out1 = mini.minimize(method='Nelder')

# then solve with Levenberg-Marquardt using the
# Nelder-Mead solution as a starting point
out2 = mini.minimize(method='leastsq', params=out1.params)

lmfit.report_fit(out2.params, min_correl=0.5)

ci, trace = lmfit.conf_interval(mini, out2, sigmas=[1, 2], trace=True)
lmfit.printfuncs.report_ci(ci)

# plot data and best fit
plt.figure()
plt.plot(x, y)
plt.plot(x, residual(out2.params) + y, '-')
plt.show()

# plot confidence intervals (a1 vs t2 and a2 vs t2)
fig, axes = plt.subplots(1, 2, figsize=(12.8, 4.8))
cx, cy, grid = lmfit.conf_interval2d(mini, out2, 'a1', 't2', 30, 30)
ctp = axes[0].contourf(cx, cy, grid, np.linspace(0, 1, 11))
fig.colorbar(ctp, ax=axes[0])
axes[0].set_xlabel('a1')
axes[0].set_ylabel('t2')

cx, cy, grid = lmfit.conf_interval2d(mini, out2, 'a2', 't2', 30, 30)
ctp = axes[1].contourf(cx, cy, grid, np.linspace(0, 1, 11))
fig.colorbar(ctp, ax=axes[1])
axes[1].set_xlabel('a2')
axes[1].set_ylabel('t2')
plt.show()

# plot dependence between two parameters
fig, axes = plt.subplots(1, 2, figsize=(12.8, 4.8))
cx1, cy1, prob = trace['a1']['a1'], trace['a1']['t2'], trace['a1']['prob']
cx2, cy2, prob2 = trace['t2']['t2'], trace['t2']['a1'], trace['t2']['prob']

axes[0].scatter(cx1, cy1, c=prob, s=30)
axes[0].set_xlabel('a1')
axes[0].set_ylabel('t2')

axes[1].scatter(cx2, cy2, c=prob2, s=30)
axes[1].set_xlabel('t2')
axes[1].set_ylabel('a1')
plt.show()
# <end examples/doc_confidence_advanced.py>

which will report:

[[Variables]]
a1: 2.98622095 +/- 0.14867027 (4.98%) (init = 2.986237)
a2: -4.33526363 +/- 0.11527574 (2.66%) (init = -4.335256)
t1: 1.30994276 +/- 0.13121215 (10.02%) (init = 1.309932)
t2: 11.8240337 +/- 0.46316956 (3.92%) (init = 11.82408)
[[Correlations]] (unreported correlations are < 0.500)
C(a2, t2) = +0.9871
C(a2, t1) = -0.9246
C(t1, t2) = -0.8805
C(a1, t1) = -0.5988

95.45% 68.27% _BEST_ 68.27% 95.45%


a1: -0.27285 -0.14165 2.98622 +0.16354 +0.36343
a2: -0.30440 -0.13219 -4.33526 +0.10689 +0.19684
t1: -0.23392 -0.12494 1.30994 +0.14660 +0.32369
t2: -1.01937 -0.48813 11.82403 +0.46045 +0.90439

Again we called conf_interval(), this time with tracing and only for 1- and 2-σ. Comparing these two different estimates, we see that the estimate for a1 is reasonably well approximated from the covariance matrix, but the estimates for a2, and especially for t1 and t2, are very asymmetric, and that going from 1-σ (68% confidence) to 2-σ (95% confidence) is not very predictable.
Plots of the confidence region are shown in the figures below for a1 and t2 (left), and a2 and t2 (right):


Neither of these plots is very much like an ellipse, which is implicitly assumed by the approach using the covariance matrix. The plots actually look quite a bit like those found with MCMC and shown in the "corner plot" in Minimizer.emcee() - calculating the posterior probability distribution of parameters. In fact, comparing the confidence interval results here with the results for the 1- and 2-σ error estimated with emcee, we can see that the agreement is pretty good and that the asymmetry in the parameter distributions is reflected well in the asymmetry of the uncertainties.
The trace returned as the optional second argument from conf_interval() contains a dictionary for each variable parameter. The values are dictionaries with arrays of values for each variable, and an array of corresponding probabilities for the corresponding cumulative variables. This can be used to show the dependence between two parameters:

fig, axes = plt.subplots(1, 2, figsize=(12.8, 4.8))
cx1, cy1, prob = trace['a1']['a1'], trace['a1']['t2'], trace['a1']['prob']
cx2, cy2, prob2 = trace['t2']['t2'], trace['t2']['a1'], trace['t2']['prob']

axes[0].scatter(cx1, cy1, c=prob, s=30)
axes[0].set_xlabel('a1')
axes[0].set_ylabel('t2')

axes[1].scatter(cx2, cy2, c=prob2, s=30)
axes[1].set_xlabel('t2')
axes[1].set_ylabel('a1')

plt.show()

which shows the trace of values:


As an alternative/complement to the confidence intervals, the Minimizer.emcee() method uses Markov Chain Monte Carlo to sample the posterior probability distribution. These distributions demonstrate the range of solutions that the data supports, and we refer to Minimizer.emcee() - calculating the posterior probability distribution of parameters, where this methodology was used on the same problem.
Credible intervals (the Bayesian equivalent of the frequentist confidence interval) can be obtained with this method. MCMC can be used for model selection, to determine outliers, to marginalize over nuisance parameters, etcetera. For example, you may have fractionally underestimated the uncertainties on a dataset. MCMC can be used to estimate the true level of uncertainty on each data point. A tutorial on the possibilities offered by MCMC can be found at https://jakevdp.github.io/blog/2014/03/11/frequentism-and-bayesianism-a-practical-intro/.

9.6 Confidence Interval Functions

conf_interval(minimizer, result, p_names=None, sigmas=None, trace=False, maxiter=200, verbose=False, prob_func=None)
Calculate the confidence interval (CI) for parameters.
The parameter for which the CI is calculated will be varied, while the remaining parameters are re-optimized to
minimize the chi-square. The resulting chi-square is used to calculate the probability with a given statistic (e.g.,
F-test). This function uses a 1d-rootfinder from SciPy to find the values resulting in the searched confidence
region.
Parameters
• minimizer (Minimizer) – The minimizer to use, holding objective function.
• result (MinimizerResult) – The result of running minimize().
• p_names (list, optional) – Names of the parameters for which the CI is calculated. If
None (default), the CI is calculated for every parameter.
• sigmas (list, optional) – The sigma-levels to find (default is [1, 2, 3]). See Notes
below.
• trace (bool, optional) – Defaults to False; if True, each result of a probability calcula-
tion is saved along with the parameter. This can be used to plot so-called “profile traces”.
• maxiter (int, optional) – Maximum number of iterations to find an upper limit (default is 200).
• verbose (bool, optional) – Print extra debugging information (default is False).


• prob_func (None or callable, optional) – Function to calculate the probability from the optimized chi-square. Default is None and uses the built-in function f_compare (i.e., F-test).
Returns
• output (dict) – A dictionary containing a list of (sigma, vals)-tuples for each parameter.
• trace_dict (dict, optional) – Only if trace is True. A dictionary whose keys are the fixed parameter names; the values are again dicts with the variable names as keys, plus an additional key 'prob'. Each contains an array of the corresponding values.
See also:
conf_interval2d

Notes

The values for sigma are taken as the number of standard deviations for a normal distribution and converted to
probabilities. That is, the default sigma=[1, 2, 3] will use probabilities of 0.6827, 0.9545, and 0.9973. If
any of the sigma values is less than 1, that will be interpreted as a probability. That is, a value of 1 and 0.6827
will give the same results, within precision.

Examples

>>> from lmfit.printfuncs import *
>>> mini = Minimizer(some_func, params)
>>> result = mini.leastsq()
>>> report_errors(result.params)
... #report
>>> ci = conf_interval(mini, result)
>>> report_ci(ci)
... #report

Now with quantiles for the sigmas and using the trace.

>>> ci, trace = conf_interval(mini, result, sigmas=[0.5, 1, 2, 3], trace=True)
>>> fixed = trace['para1']['para1']
>>> free = trace['para1']['not_para1']
>>> prob = trace['para1']['prob']

This makes it possible to plot the dependence between free and fixed parameters.
conf_interval2d(minimizer, result, x_name, y_name, nx=10, ny=10, limits=None, prob_func=None, nsigma=5, chi2_out=False)
Calculate confidence regions for two fixed parameters.
The method itself is explained in conf_interval: here we are fixing two parameters.
Parameters
• minimizer (Minimizer) – The minimizer to use, holding objective function.
• result (MinimizerResult) – The result of running minimize().
• x_name (str) – The name of the parameter which will be the x direction.


• y_name (str) – The name of the parameter which will be the y direction.
• nx (int, optional) – Number of points in the x direction (default is 10).
• ny (int, optional) – Number of points in the y direction (default is 10).
• limits (tuple, optional) – Should have the form ((x_upper, x_lower),
(y_upper, y_lower)). If not given, the default is nsigma*stderr in each direction.
• prob_func (None or callable, deprecated) – Starting with version 1.2, this argu-
ment is unused and has no effect.
• nsigma (float or int, optional) – Multiplier of stderr for limits (default is 5).
• chi2_out (bool) – Whether to return chi-square at each coordinate instead of probability.
Returns
• x (numpy.ndarray) – X-coordinates (same shape as nx).
• y (numpy.ndarray) – Y-coordinates (same shape as ny).
• grid (numpy.ndarray) – 2-D array (with shape (nx, ny)) containing the calculated proba-
bilities or chi-square.
See also:
conf_interval

Examples

>>> mini = Minimizer(some_func, params)
>>> result = mini.leastsq()
>>> x, y, gr = conf_interval2d(mini, result, 'para1', 'para2')
>>> plt.contour(x, y, gr)

ci_report(ci, with_offset=True, ndigits=5)

Return text of a report for confidence intervals.
Parameters
• ci (dict) – The result of conf_interval(): a dictionary containing a list of (sigma,
vals)-tuples for each parameter.
• with_offset (bool, optional) – Whether to subtract best value from all other values
(default is True).
• ndigits (int, optional) – Number of significant digits to show (default is 5).
Returns
Text of formatted report on confidence intervals.
Return type
str
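As a short usage sketch (assuming a ci dictionary as returned by conf_interval() earlier in this chapter), the report text can be generated and printed directly:

# format the confidence intervals computed above as plain text
report_text = lmfit.ci_report(ci, with_offset=True, ndigits=5)
print(report_text)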



CHAPTER TEN

BOUNDS IMPLEMENTATION

This section describes the implementation of Parameter bounds. The MINPACK-1 implementation used in
scipy.optimize.leastsq for the Levenberg-Marquardt algorithm does not explicitly support bounds on parameters, and
expects to be able to fully explore the available range of values for any Parameter. Simply placing hard constraints
(that is, resetting the value when it exceeds the desired bounds) prevents the algorithm from determining the partial
derivatives, and leads to unstable results.
Instead of placing such hard constraints, bounded parameters are mathematically transformed using the formulation devised (and documented) for MINUIT. This is implemented following (and borrowing heavily from) the leastsqbound package from J. J. Helmus. Parameter values are mapped from internally used, freely variable values $P_{\mathrm{internal}}$ to bounded parameters $P_{\mathrm{bounded}}$. When both min and max bounds are specified, the mapping is:

$$P_{\mathrm{internal}} = \arcsin\Big(\frac{2(P_{\mathrm{bounded}} - \min)}{\max - \min} - 1\Big)$$

$$P_{\mathrm{bounded}} = \min + \big(\sin(P_{\mathrm{internal}}) + 1\big)\,\frac{\max - \min}{2}$$

With only an upper limit max supplied, but min left unbounded, the mapping is:

$$P_{\mathrm{internal}} = \sqrt{(\max - P_{\mathrm{bounded}} + 1)^2 - 1}$$

$$P_{\mathrm{bounded}} = \max + 1 - \sqrt{P_{\mathrm{internal}}^2 + 1}$$

With only a lower limit min supplied, but max left unbounded, the mapping is:

$$P_{\mathrm{internal}} = \sqrt{(P_{\mathrm{bounded}} - \min + 1)^2 - 1}$$

$$P_{\mathrm{bounded}} = \min - 1 + \sqrt{P_{\mathrm{internal}}^2 + 1}$$

With these mappings, the value for the bounded Parameter cannot exceed the specified bounds, though the internally
varied value can be freely varied.
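As a minimal sketch (not lmfit's internal code, just the min/max formulas above expressed in NumPy), the mapping and its inverse might look like:

import numpy as np

def to_internal(p_bounded, pmin, pmax):
    # map a bounded value to the freely varying internal value
    return np.arcsin(2*(p_bounded - pmin)/(pmax - pmin) - 1)

def to_bounded(p_internal, pmin, pmax):
    # map an internal value back into the [pmin, pmax] interval
    return pmin + (np.sin(p_internal) + 1)*(pmax - pmin)/2.0

# round-trip check, and a demonstration that any internal value
# maps into the bounds
print(to_bounded(to_internal(3.7, 0, 10), 0, 10))   # ~3.7
print(to_bounded(123.456, 0, 10))                   # stays within [0, 10]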
It bears repeating that code from leastsqbound was adapted to implement the transformation described above. The challenging part (thanks again to Jonathan J. Helmus!) here is to re-transform the covariance matrix so that the uncertainties can be estimated for bounded Parameters. This is done by using the derivative dP_internal/dP_bounded from the equations above to re-scale the Jacobian matrix before constructing the covariance matrix from it. Tests show that this re-scaling of the covariance matrix works quite well, and that uncertainties estimated for bounded parameters are quite reasonable. Of course, if the best fit value is very close to a boundary, the derivative-estimated uncertainty and correlations for that parameter may not be reliable.
The MINUIT documentation recommends caution in using bounds. Setting bounds can certainly increase the number
of function evaluations (and so computation time), and in some cases may cause some instabilities, as the range of
acceptable parameter values is not fully explored. On the other hand, preliminary tests suggest that using max and min
to set clearly outlandish bounds does not greatly affect performance or results.



CHAPTER ELEVEN

USING MATHEMATICAL CONSTRAINTS

Being able to fix variables to a constant value or place upper and lower bounds on their values can greatly simplify
modeling real data. These capabilities are key to lmfit’s Parameters. In addition, it is sometimes highly desirable to
place mathematical constraints on parameter values. For example, one might want to require that two Gaussian peaks
have the same width, or have amplitudes that are constrained to add to some value. Of course, one could rewrite the
objective or model function to place such requirements, but this is somewhat error-prone, and limits the flexibility so
that exploring constraints becomes laborious.
To simplify the setting of constraints, Parameters can be assigned a mathematical expression of other Parameters,
builtin constants, and builtin mathematical functions that will be used to determine its value. The expressions used for
constraints are evaluated using the asteval module, which uses Python syntax, and evaluates the constraint expressions
in a safe and isolated namespace.
This approach to mathematical constraints allows one to not have to write a separate model function for two Gaussians
where the two sigma values are forced to be equal, or where amplitudes are related. Instead, one can write a more
general two Gaussian model (perhaps using GaussianModel) and impose such constraints on the Parameters for a
particular fit.

11.1 Overview

Just as one can place bounds on a Parameter, or keep it fixed during the fit, so too can one place mathematical constraints
on parameters. The way this is done with lmfit is to write a Parameter as a mathematical expression of the other
parameters and a set of pre-defined operators and functions. The constraint expressions are simple Python statements,
allowing one to place constraints like:

from lmfit import Parameters

pars = Parameters()
pars.add('frac_curve1', value=0.5, min=0, max=1)
pars.add('frac_curve2', expr='1-frac_curve1')

As the value of the frac_curve1 parameter is updated at each step in the fit, the value of frac_curve2 will be updated so that the two values are constrained to add to 1.0. Of course, such a constraint could be placed in the fitting function, but the use of such constraints allows the end-user to modify the model of a more general-purpose fitting function.
Nearly any valid mathematical expression can be used, and a variety of built-in functions are available for flexible modeling.


11.2 Supported Operators, Functions, and Constants

The mathematical expressions used to define constrained Parameters need to be valid Python expressions. As you would expect, the operators +, -, *, /, and ** are supported. In fact, a much more complete set can be used, including Python's bit- and logical operators:

+, -, *, /, **, &, |, ^, <<, >>, %, and, or,
==, >, >=, <, <=, !=, ~, not, is, is not, in, not in

The values for e (2.7182818...) and pi (3.1415926...) are available, as are several supported mathematical and trigonometric functions:

abs, acos, acosh, asin, asinh, atan, atan2, atanh, ceil,
copysign, cos, cosh, degrees, exp, fabs, factorial,
floor, fmod, frexp, fsum, hypot, isinf, isnan, ldexp,
log, log10, log1p, max, min, modf, pow, radians, sin,
sinh, sqrt, tan, tanh, trunc

In addition, all Parameter names will be available in the mathematical expressions. Thus, with parameters for a few
peak-like functions:

pars = Parameters()
pars.add('amp_1', value=0.5, min=0, max=1)
pars.add('cen_1', value=2.2)
pars.add('wid_1', value=0.2)

The following expressions are all valid:

pars.add('amp_2', expr='(2.0 - amp_1**2)')
pars.add('wid_2', expr='sqrt(pi)*wid_1')
pars.add('cen_2', expr='cen_1 * wid_2 / max(wid_1, 0.001)')

In fact, almost any valid Python expression is allowed. A notable example is that Python’s 1-line if expression is
supported:

pars.add('param_a', value=1)
pars.add('param_b', value=2)
pars.add('test_val', value=100)

pars.add('bounded', expr='param_a if test_val/2. > 100 else param_b')

which is equivalent to the more familiar:

if pars['test_val'].value/2. > 100:
    bounded = pars['param_a'].value
else:
    bounded = pars['param_b'].value


11.3 Using Inequality Constraints

A rather common question is how to set up constraints that use an inequality, say, x + y ≤ 10. This can be done with algebraic constraints by recasting the problem as x + y = δ and δ ≤ 10. That is, first, allow x to be held by the freely varying parameter x. Next, define a parameter delta to be variable with a maximum value of 10, and define parameter y as delta - x:

pars = Parameters()
pars.add('x', value=5, vary=True)
pars.add('delta', value=5, max=10, vary=True)
pars.add('y', expr='delta-x')

The essential point is that an inequality still implies that a variable (here, delta) is needed to describe the constraint.
The secondary point is that upper and lower bounds can be used as part of the inequality to make the definitions more
convenient.
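As a quick hedged sketch (reusing the pars defined just above), one can verify that y tracks delta - x, so that x + y can never exceed the bound placed on delta:

pars['delta'].value = 8
pars.update_constraints()   # re-evaluate expression-based parameters
print(pars['x'].value + pars['y'].value)  # 8.0, which is <= 10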

11.4 Advanced usage of Expressions in lmfit

The expression used in a constraint is converted to a Python Abstract Syntax Tree, which is an intermediate version of
the expression – a syntax-checked, partially compiled expression. Among other things, this means that Python’s own
parser is used to parse and convert the expression into something that can easily be evaluated within Python. It also
means that the symbols in the expressions can point to any Python object.
In fact, the use of Python’s AST allows a nearly full version of Python to be supported, without using Python’s built-in
eval() function. The asteval module actually supports most Python syntax, including for- and while-loops, conditional
expressions, and user-defined functions. There are several unsupported Python constructs, most notably the class
statement, so that new classes cannot be created, and the import statement, which helps make the asteval module
safe from malicious use.
One important feature of the asteval module is that you can add domain-specific functions into it, for later use in constraint expressions. To do this, you would use the _asteval attribute of the Parameters class, which contains a complete AST interpreter. The asteval interpreter uses a flat namespace, implemented as a single dictionary. That means you can preload any Python symbol into the namespace for the constraints, for example this Lorentzian function:

def mylorentzian(x, amp, cen, wid):
    "lorentzian function: wid = half-width at half-max"
    return (amp / (1 + ((x-cen) / wid)**2))

You can add this user-defined function to the asteval interpreter of the Parameters class:

from lmfit import Parameters

pars = Parameters()
pars._asteval.symtable['lorentzian'] = mylorentzian

and then initialize the Minimizer class with this parameter set:

from lmfit import Minimizer

def userfcn(x, params):
    pass


fitter = Minimizer(userfcn, pars)

Alternatively, one can first initialize the Minimizer class and add the function to the asteval interpreter of Minimizer.
params afterwards:

pars = Parameters()
fitter = Minimizer(userfcn, pars)
fitter.params._asteval.symtable['lorentzian'] = mylorentzian

In both cases the user-defined lorentzian() function can now be used in constraint expressions.
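As a brief sketch of what this enables (the parameter names below are illustrative, not from a specific example), a constraint expression can now call the preloaded function:

# hypothetical parameters, just to show the constraint syntax
pars.add('amp', value=5.0)
pars.add('cen', value=10.0)
pars.add('wid', value=2.0)
# peak height evaluated with the user-defined lorentzian() at x = cen
pars.add('peak_height', expr='lorentzian(cen, amp, cen, wid)')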



CHAPTER TWELVE

RELEASE NOTES

This section discusses changes between versions, especially changes significant to the use and behavior of the library.
This is not meant to be a comprehensive list of changes. For such a complete record, consult the lmfit GitHub repository.

12.1 Version 1.2.0 Release Notes (April 05, 2023)

New features:
• add create_params function (PR #844)
• add chi2_out and nsigma options to conf_interval2d()
• add ModelResult.summary() to return many resulting fit statistics and attributes into a JSON-able dict.
• add correl_table() function to lmfit.printfuncs and correl_mode option to fit_report() and
ModelResult.fit_report() to optionally display a RST-formatted table of a correlation matrix.
Bug fixes/enhancements:
• fix bug when setting param.vary=True for a constrained parameter (Issue #859; PR #860)
• fix bug in reported uncertainties for constrained parameters by better propagating uncertainties (Issue #855; PR #856)
• Coercing of user input data and independent data for Model to float64 ndarrays is somewhat less aggressive and
will not increase the precision of numpy ndarrays (see Data Types for data and independent data with Model for
details). The resulting calculation from a model or objective function is more aggressively coerced to float64.
(Issue #850; PR #853)
• the default value of epsfcn is increased to 1.e-10 to allow for handling of data with precision less than float64
(Issue #850; PR #853)
• fix conf_interval2d to use “increase chi-square by sigma**2*reduced chi-square” to give the sigma-level
probabilities (Issue #848; PR #852)
• fix reading of older ModelResult (Issue #845; included in PR #844)
• fix deepcopy of Parameters and user data (mguhyo; PR #837)
• improve Model.make_params and create_params to take optional dict of Parameter attributes (PR #844)
• fix reporting of nfev from least_squares to better reflect actual number of function calls (Issue #842; PR
#844)
• fix bug in Model.eval when mixing parameters and keyword arguments (PR #844, #839)
• re-adds residual to saved Model result (PR #844, #830)


• ConstantModel and ComplexConstantModel will return an ndarray of the same shape as the independent
variable x (JeppeKlitgaard, Issue #840; PR #841)
• update tests for latest versions of NumPy and SciPy.
• many fixes of doc typos and updates of dependencies, pre-commit hooks, and CI.

12.2 Version 1.1.0 Release Notes (November 27, 2022)

New features:
• add Pearson4Model (@lellid; PR #800)
• add SplineModel (PR #804)
• add R^2 rsquared statistic to fit outputs and reports for Model fits (Issue #803; PR #810)
• add calculation of dely for model components of composite models (Issue #761; PR #826)
Bug fixes/enhancements:
• make sure variable spercent is always defined in params_html_table functions (reported by @MySlientWind; Issue #768, PR #770)
• always initialize the variables success and covar in the MinimizerResult (reported by Marc W. Pound; PR #771)
• build package following PEP517/PEP518; use pyproject.toml and setup.cfg; leave setup.py for now (PR
#777)
• components used to create a CompositeModel can now have different independent variables (@JulianHochhaus; Discussion #787; PR #788)
• fixed function definition for StepModel(form='linear'), which was not consistent with the other ones (@matpompili; PR #794)
• fixed height factor for Gaussian2dModel, was not correct (@matpompili; PR #795)
• for covariances with negative diagonal elements, we set the covariance to None (PR #813)
• fixed linear mode for RectangleModel (@arunpersaud; Issue #815; PR #816)
• report correct initial values for parameters with bounds (Issue #820; PR #821)
• allow recalculation of confidence intervals (@jagerber48; PR #798)
• include ‘residual’ in JSON output of ModelResult.dumps (@mac01021; PR #830)
• supports and is tested against Python 3.11; updated minimum required version of SciPy, NumPy, and asteval (PR
#832)
Deprecations:
• remove support for Python 3.6 which reached EOL on 2021-12-23 (PR #790)


12.3 Version 1.0.3 Release Notes (October 14, 2021)

Potentially breaking change:


• argument x is now required for the guess method of Models (Issue #747; PR #748)
To get reasonable estimates for starting values one should always supply both x and y values; in some cases it would
work when only providing data (i.e., y-values). With the change above, x is now required in the guess method call,
so scripts might need to be updated to explicitly supply x.
Bug fixes/enhancements:
• do not overwrite user-specified figure titles in Model.plot() functions and allow setting with title keyword
argument (PR #711)
• preserve Parameters subclass in deepcopy (@jenshnielsen; PR #719)
• coerce data and independent_vars to NumPy array with dtype=float64 or dtype=complex128 where applicable (Issues #723 and #728)
• fix collision between parameter names in built-in models and user-specified parameters (Issue #710 and PR #732)
• correct error message in PolynomialModel (@kremeyer; PR #737)
• improved handling of altered JSON data (Issue #739; PR #740, reported by Matthew Giammar)
• map max_nfev to maxiter when using differential_evolution (PR #749, reported by Olivier B.)
• correct use of noise versus experimental uncertainty in the documentation (PR #751, reported by Andrés Zelcer)
• specify return type of eval method more precisely and allow for plotting of (Complex)ConstantModel by co-
ercing their float, int, or complex return value to a numpy.ndarray (Issue #684 and PR #754)
• fix dho (Damped Harmonic Oscillator) lineshape (PR #755; @rayosborn)
• reset Minimizer._abort to False before starting a new fit (Issue #756 and PR #757; @azelcer)
• fix typo in guess_from_peak2d (@ivan-usovl; PR #758)
Various:
• update asteval dependency to >= 0.9.22 to avoid DeprecationWarnings from NumPy v1.20.0 (PR #707)
• remove incorrectly spelled DonaichModel and donaich lineshape, deprecated in version 1.0.1 (PR #707)
• remove occurrences of OrderedDict throughout the code; dict is order-preserving since Python 3.6 (PR #713)
• update the contributing instructions (PR #718; @martin-majlis)
• (again) defer import of matplotlib to when it is needed (@zobristnicholas; PR #721)
• fix description of name argument in Parameters.add (@kristianmeyerr; PR #725)
• update dependencies, make sure a functional development environment is installed on Windows (Issue #712)
• use setuptools_scm for version info instead of versioneer (PR #729)
• transition to using f-strings (PR #730)
• mark test_manypeaks_speed.py as flaky to avoid intermittent test failures (repeat up to 5 times; PR #745)
• update scipy dependency to >= 1.14.0 (PR #751)
• improvement to output of examples in sphinx-gallery and use higher resolution figures (PR #753)
• remove deprecated functions lmfit.printfuncs.report_errors and asteval argument in Parameters
class (PR #759)


12.4 Version 1.0.2 Release Notes (February 7, 2021)

Version 1.0.2 officially supports Python 3.9 and has dropped support for Python 3.5. The minimum version of the
following dependencies were updated: asteval>=0.9.21, numpy>=1.18, and scipy>=1.3.
New features:
• added two-dimensional Gaussian lineshape and model (PR #642; @mpmdean)
• all built-in models are now registered in lmfit.models.lmfit_models; new Model class attribute
valid_forms (PR #663; @rayosborn)
• added a SineModel (PR #676; @lneuhaus)
• add the run_mcmc_kwargs argument to Minimizer.emcee to pass to the emcee.EnsembleSampler.
run_mcmc function (PR #694; @rbnvrw)
Bug fixes:
• ModelResult.eval_uncertainty should use provided Parameters (PR #646)
• center in lognormal model can be negative (Issue #644, PR #645; @YoshieraHuang)
• restore best-fit values after calculation of covariance matrix (Issue #655, PR #657)
• add helper-function not_zero to prevent ZeroDivisionError in lineshapes and use in exponential lineshape (Issue
#631, PR #664; @s-weigand)
• save last_internal_values and use to restore internal values if fit is aborted (PR #667)
• dumping a fit using the lbfgsb method now works, convert bytes to string if needed (Issue #677, PR #678;
@leonfoks)
• fix use of callable Jacobian for scalar methods (PR #681; @mstimberg)
• preserve float/int types when encoding for JSON (PR #696; @jedzill4)
• better support for saving/loading of ExpressionModels and assure that init_params and init_fit are set when
loading a ModelResult (PR #706)
Various:
• update minimum dependencies (PRs #688, #693)
• improvements in coding style, docstrings, CI, and test coverage (PRs #647, #649, #650, #653, #654; #685, #668,
#689)
• fix typo in Oscillator (PR #658; @flothesof)
• add example using SymPy (PR #662)
• allow better custom pool for emcee() (Issue #666, PR #667)
• update NIST Strd reference functions and tests (PR #670)
• make building of documentation cross-platform (PR #673; @s-weigand)
• relax module name check in test_check_ast_errors for Python 3.9 (Issue #674, PR #675; @mwhudson)
• fix/update layout of documentation, now uses the sphinx13 theme (PR #687)
• fixed DeprecationWarnings reported by NumPy v1.2.0 (PR #699)
• increase value of tiny and check for it in bounded parameters to avoid “parameter not moving from initial value”
(Issue #700, PR #701)


• add max_nfev to basinhopping and brute (now supported everywhere in lmfit) and set to more uniform
default values (PR #701)
• use Azure Pipelines for CI, drop Travis (PRs #696 and #702)

12.5 Version 1.0.1 Release Notes

Version 1.0.1 is the last release that supports Python 3.5. All newer versions will require Python 3.6+, so that we can use formatted string literals (f-strings) and rely on dictionaries being ordered.
New features:
• added thermal distribution model and lineshape (PR #620; @mpmdean)
• introduced a new argument max_nfev to uniformly specify the maximum number of function evaluations (PR #610). Please note: all other arguments (e.g., maxfev, maxiter, ...) will no longer be passed to the underlying solver. A warning will be emitted stating that one should use max_nfev.
• the attribute call_kws was added to the MinimizerResult class and contains the keyword arguments that are
supplied to the solver in SciPy.
Bug fixes:
• fixes to the load and __setstate__ methods of the Parameter class
• fixed failure of ModelResult.dump() due to missing attributes (Issue #611, PR #623; @mpmdean)
• guess_from_peak function now also works correctly with decreasing x-values or when using pandas (PRs #627
and #629; @mpmdean)
• the Parameter.set() method now correctly first updates the boundaries and then the value (Issue #636, PR
#637; @arunpersaud)
Various:
• fixed typo for the use of expressions in the documentation (Issue #610; @jkrogager)
• removal of PY2-compatibility and unused code and improved test coverage (PRs #619, #631, and #633)
• removed deprecated isParameter function and automatic conversion of an uncertainties object (PR #626)
• inaccurate FWHM calculations were removed from built-in models, others labeled as estimates (Issue #616 and
PR #630)
• corrected spelling mistake for the Doniach lineshape and model (Issue #634; @rayosborn)
• removed unsupported/untested code for IPython notebooks in lmfit/ui/*

12.6 Version 1.0.0 Release Notes

Version 1.0.0 supports Python 3.5, 3.6, 3.7, and 3.8


New features:
• no new features are introduced in 1.0.0.
Improvements:
• support for Python 2 and use of the six package are removed. (PR #612)
Various:


• documentation updates to clarify the use of emcee. (PR #614)

12.7 Version 0.9.15 Release Notes

Version 0.9.15 is the last release that supports Python 2.7; it now also fully supports Python 3.8.
New features, improvements, and bug fixes:
• move application of parameter bounds to setter instead of getter (PR #587)
• add support for non-array Jacobian types in least_squares (Issue #588, @ezwelty in PR #589)
• add more information (i.e., acor and acceptance_fraction) about emcee fit (@j-zimmermann in PR #593)
• “name” is now a required positional argument for Parameter class, update the magic methods (PR #595)
• fix nvars count and bound handling in confidence interval calculations (Issue #597, PR #598)
• support Python 3.8; requires asteval >= 0.9.16 (PR #599)
• only support emcee version 3 (i.e., no PTSampler anymore) (PR #600)
• fix and refactor prob_func in confidence interval calculations (PR #604)
• fix adding Parameters with custom user-defined symbols (Issue #607, PR #608; thanks to @gbouvignies for the
report)
Various:
• bump requirements to LTS version of SciPy/NumPy and code clean-up (PR #591)
• documentation updates (PR #596, and others)
• improve test coverage and Travis CI updates (PR #595, and others)
• update pre-commit hooks and configuration in setup.cfg
To-be deprecated:
• function Parameter.isParameter and conversion from uncertainties.core.Variable to value in _getval (PR #595)

12.8 Version 0.9.14 Release Notes

New features:
• the global optimizers shgo and dual_annealing (new in SciPy v1.2) are now supported (Issue #527; PRs #545
and #556)
• eval method added to the Parameter class (PR #550 by @zobristnicholas)
• avoid ZeroDivisionError in printfuncs.params_html_table (PR #552 by @aaristov and PR #559)
• add parallelization to brute method (PR #564, requires SciPy v1.3)
Bug fixes:
• consider only varying parameters when reporting potential issues with calculating errorbars (PR #549) and com-
pare value to both min and max (PR #571)
• guard against division by zero in lineshape functions and FWHM and height expression calculations (PR #545)
• fix issues with restoring a saved Model (Issue #553; PR #554)
• always set result.method for emcee algorithm (PR #558)


• more careful adding of parameters to handle out-of-order constraint expressions (Issue #560; PR #561)
• make sure all parameters in Model.guess() use prefixes (PRs #567 and #569)
• use inspect.signature for PY3 to support wrapped functions (Issue #570; PR #576)
• fix result.nfev for brute method when using parallelization (Issue #578; PR #579)
Various:
• remove “missing” in the Model class (replaced by nan_policy) and “drop” as option to nan_policy (replaced by omit), deprecated since 0.9 (PR #565)
• deprecate ‘report_errors’ in printfuncs.py (PR #571)
• updates to the documentation to use jupyter-sphinx to include examples/output (PRs #573 and #575)
• include a Gallery with examples in the documentation using sphinx-gallery (PR #574 and #583)
• improve test-coverage (PRs #571, #572 and #585)
• add/clarify warning messages when NaN values are detected (PR #586)
• several updates to docstrings (Issue #584; PR #583, and others)
• update pre-commit hooks and several docstrings

12.9 Version 0.9.13 Release Notes

New features:
• Clearer warning message in fit reports when uncertainties should but cannot be estimated, including guesses of
which Parameters to examine (#521, #543)
• SplitLorentzianModel and split_lorentzian function (#523)
• HTML representations for Parameter, MinimizerResult, and Model so that they can be printed better with Jupyter
(#524, #548)
• support parallelization for differential evolution (#526)
Bug fixes:
• delay import of matplotlib (and so, the selection of its backend) as late as possible (#528, #529)
• fix for saving, loading, and reloading ModelResults (#534)
• fix to leastsq to report the best-fit values, not the values tried last (#535, #536)
• fix synchronization of all parameter values on Model.guess() (#539, #542)
• improve deprecation warnings for outdated nan_policy keywords (#540)
• fix for edge case in gformat() (#547)
Project management:
• using pre-commit framework to improve and enforce coding style (#533)
• added code coverage report to github main page
• updated docs, github templates, added several tests.
• dropped support and testing for Python 3.4.


12.10 Version 0.9.12 Release Notes

Lmfit package is now licensed under BSD-3.


New features:
• SkewedVoigtModel was added as built-in model (Issue #493)
• Parameter uncertainties and correlations are reported for least_squares
• Plotting of complex-valued models is now handled in ModelResult class (PR #503)
• A model’s independent variable is allowed to be an object (Issue #492)
• Added usersyms to Parameters() initialization to make it easier to add custom functions and symbols (Issue
#507)
• the numdifftools package can be used to calculate parameter uncertainties and correlations for all solvers that
do not natively support this (PR #506)
• emcee can now be used as method keyword-argument to Minimizer.minimize and minimize function, which
allows for using emcee in the Model class (PR #512; see examples/example_emcee_with_Model.py)
(Bug)fixes:
• asteval errors are now flushed after raising (Issue #486)
• max_time and evaluation time for ExpressionModel increased to 1 hour (Issue #489)
• loading a saved ModelResult now restores all attributes (Issue #491)
• development versions of scipy and emcee are now supported (Issue #497 and PR #496)
• ModelResult.eval() do no longer overwrite the userkws dictionary (Issue #499)
• running the test suite requires pytest only (Issue #504)
• improved FWHM calculation for VoigtModel (PR #514)

12.11 Version 0.9.10 Release Notes

Two new global algorithms were added: basinhopping and AMPGO. Basinhopping wraps the method present in scipy,
and more information can be found in the documentation (basinhopping() and scipy.optimize.basinhopping). The
Adaptive Memory Programming for Global Optimization (AMPGO) algorithm was adapted from Python code written
by Andrea Gavana. A more detailed explanation of the algorithm is available in the AMPGO paper and specifics for
lmfit can be found in the ampgo() function.
Lmfit uses the external uncertainties (https://github.com/lebigot/uncertainties) package (available on PyPI), instead of
distributing its own fork.
An AbortFitException is now raised when the fit is aborted by the user (i.e., by using iter_cb).
Bugfixes:
• all exceptions are allowed when trying to import matplotlib
• simplify and fix corner-case errors when testing closeness of large integers


12.12 Version 0.9.9 Release Notes

Lmfit now uses the asteval (https://github.com/newville/asteval) package instead of distributing its own copy. The
minimum required asteval version is 0.9.12, which is available on PyPI. If you see import errors related to asteval,
please make sure that you actually have the latest version installed.

12.13 Version 0.9.6 Release Notes

Support for SciPy 0.14 has been dropped: SciPy 0.15 is now required. This is especially important for lmfit maintenance, as it means we can now rely on SciPy having code for differential evolution and do not need to keep a local copy.
A brute force method was added, which can be used either with Minimizer.brute() or using the method='brute'
option to Minimizer.minimize(). This method requires finite bounds on all varying parameters, or that parameters
have a finite brute_step attribute set to specify the step size.
Custom cost functions can now be used for the scalar minimizers using the reduce_fcn option.
Many improvements to documentation and docstrings in the code were made. As part of that effort, all API documen-
tation in this main Sphinx documentation now derives from the docstrings.
Uncertainties in the resulting best-fit for a model can now be calculated from the uncertainties in the model parameters.
Parameters have two new attributes: brute_step, to specify the step size when using the brute method, and
user_data, which is unused but can be used to hold additional information the user may desire. This will be pre-
served on copy and pickling.
Several bug fixes and cleanups.
Versioneer was updated to 0.18.
Tests can now be run either with nose or pytest.

12.14 Version 0.9.5 Release Notes

Support for Python 2.6 and SciPy 0.13 has been dropped.

12.15 Version 0.9.4 Release Notes

Some support for the new least_squares routine from SciPy 0.17 has been added.
Parameters can now be used directly in floating point or array expressions, so that extracting the value with sigma = params['sigma'].value is no longer necessary. The older, explicit usage still works, but the docs, samples, and tests have been updated to use the simpler usage.
Support for Python 2.6 and SciPy 0.13 is now explicitly deprecated and will be dropped in version 0.9.5.


12.16 Version 0.9.3 Release Notes

Models involving complex numbers have been improved.


The emcee module can now be used for uncertainty estimation.
Many bug fixes, and an important fix for performance slowdown on getting parameter values.
ASV benchmarking code added.

12.17 Version 0.9.0 Release Notes

This upgrade makes an important, non-backward-compatible change to the way many fitting scripts and programs will
work. Scripts that work with version 0.8.3 will not work with version 0.9.0 and vice versa. The change was not made
lightly or without ample discussion, and is really an improvement. Modifying scripts that did work with 0.8.3 to work
with 0.9.0 is easy, but needs to be done.

12.17.1 Summary

The upgrade from 0.8.3 to 0.9.0 introduced the MinimizerResult class (see MinimizerResult – the optimization result), which is now used to hold the return value from minimize() and Minimizer.minimize(). This returned object contains many goodness-of-fit statistics, and holds the optimized parameters from the fit. Importantly, the parameters passed into minimize() and Minimizer.minimize() are no longer modified by the fit. Instead, a copy of the passed-in parameters is made, which is changed and returned as the params attribute of the returned MinimizerResult.

12.17.2 Impact

This upgrade means that a script that does:

my_pars = Parameters()
my_pars.add('amp', value=300.0, min=0)
my_pars.add('center', value=5.0, min=0, max=10)
my_pars.add('decay', value=1.0, vary=False)

result = minimize(objfunc, my_pars)

will still work, but that my_pars will NOT be changed by the fit. Instead, my_pars is copied to an internal set of parameters that is changed in the fit, and this copy is then put in result.params. To look at fit results, use result.params, not my_pars.
This has the effect that my_pars will still hold the starting parameter values, while all of the results from the fit are
held in the result object returned by minimize().
If you want to do an initial fit and then refine that result with a different fitting method, you can do something like:

result1 = minimize(objfunc, my_pars, method='nelder')
result1.params['decay'].vary = True
result2 = minimize(objfunc, result1.params, method='leastsq')

and have access to all of the starting parameters my_pars, the result of the first fit result1, and the result of the final
fit result2.


12.17.3 Discussion

The main goals for making this change were to:

1. Give a better return value to minimize() and Minimizer.minimize() that can hold all of the information
about a fit. By having the return value be an instance of the MinimizerResult class, it can hold an arbitrary
amount of information that is easily accessed by attribute name, and can even be given methods (a small
sketch follows this list). Using objects is good!
2. Limit or even eliminate the amount of “state information” a Minimizer holds. By state information, we mean
how much of the previous fit is remembered after a fit is done. Keeping (and especially using) such information
about a previous fit means that a Minimizer might give different results, even for the same problem, if run a
second time. While it is desirable to be able to adjust a set of Parameters and re-run a fit to get an improved
result, doing this by changing an internal attribute (Minimizer.params) has the undesirable side effect of not
being able to “go back”, and makes it somewhat cumbersome to keep track of changes made while adjusting
parameters and re-running fits.
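
A minimal sketch of the first point (attribute names as in the current API; not part of the original notes):

result = minimize(objfunc, my_pars)

# goodness-of-fit statistics are plain attributes of the MinimizerResult
print(result.success, result.nfev)   # convergence flag and number of evaluations
print(result.chisqr, result.redchi)  # chi-square and reduced chi-square
print(result.aic, result.bic)        # Akaike and Bayesian information criteria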



CHAPTER

THIRTEEN

EXAMPLES GALLERY

Below are examples of the different things you can do with lmfit. Click on any image to see the complete source code
and output.
We encourage users (i.e., YOU) to submit user-guide-style, documented, and preferably self-contained examples of
how you use lmfit for inclusion in this gallery! Please note that many of the examples below currently do not follow
these guidelines yet.

13.1 Fit with Data in a pandas DataFrame

Simple example demonstrating how to read in the data using pandas and supply the elements of the DataFrame to
lmfit.

import pandas as pd

from lmfit.models import LorentzianModel

read the data into a pandas DataFrame, and use the x and y columns:

dframe = pd.read_csv('peak.csv')

model = LorentzianModel()
params = model.guess(dframe['y'], x=dframe['x'])

result = model.fit(dframe['y'], params, x=dframe['x'])

and gives the fitting results:

print(result.fit_report())

[[Model]]
Model(lorentzian)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 21
# data points = 101
# variables = 3
chi-square = 13.0737250
reduced chi-square = 0.13340536
Akaike info crit = -200.496119
Bayesian info crit = -192.650757
R-squared = 0.98351484
[[Variables]]
amplitude: 39.1530621 +/- 0.62389897 (1.59%) (init = 50.7825)
center: 9.22379948 +/- 0.01835867 (0.20%) (init = 9.3)
sigma: 1.15503770 +/- 0.02603721 (2.25%) (init = 1.3)
fwhm: 2.31007541 +/- 0.05207442 (2.25%) == '2.0000000*sigma'
height: 10.7899571 +/- 0.17160652 (1.59%) == '0.3183099*amplitude/max(1e-15, sigma)'

[[Correlations]] (unreported correlations are < 0.100)
C(amplitude, sigma) = +0.7087

and the plot below:

result.plot_fit()

Total running time of the script: ( 0 minutes 0.324 seconds)


13.2 Using an ExpressionModel

ExpressionModels allow a model to be built from a user-supplied expression. See:
https://lmfit.github.io/lmfit-py/builtin_models.html#user-defined-models

import matplotlib.pyplot as plt
import numpy as np

from lmfit.models import ExpressionModel

Generate synthetic data for the user-supplied model:

x = np.linspace(-10, 10, 201)

amp, cen, wid = 3.4, 1.8, 0.5

y = amp * np.exp(-(x-cen)**2 / (2*wid**2)) / (np.sqrt(2*np.pi)*wid)

np.random.seed(2021)
y = y + np.random.normal(size=x.size, scale=0.01)

Define the ExpressionModel and perform the fit:

gmod = ExpressionModel("amp * exp(-(x-cen)**2 /(2*wid**2))/(sqrt(2*pi)*wid)")

result = gmod.fit(y, x=x, amp=5, cen=5, wid=1)

this results in the following output:

print(result.fit_report())

[[Model]]
Model(_eval)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 52
# data points = 201
# variables = 3
chi-square = 0.01951689
reduced chi-square = 9.8570e-05
Akaike info crit = -1851.19580
Bayesian info crit = -1841.28588
R-squared = 0.99967271
[[Variables]]
amp: 3.40625133 +/- 0.00512077 (0.15%) (init = 5)
cen: 1.80121155 +/- 8.6847e-04 (0.05%) (init = 5)
wid: 0.50029616 +/- 8.6848e-04 (0.17%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(amp, wid) = +0.5774

plt.plot(x, y, 'o')
plt.plot(x, result.init_fit, '--', label='initial fit')
plt.plot(x, result.best_fit, '-', label='best fit')
plt.legend()


Total running time of the script: ( 0 minutes 0.272 seconds)

13.3 Fit Using Inequality Constraint

Sometimes specifying boundaries using min and max is not sufficient, and more complicated (inequality) constraints
are needed. In the example below, the center of the Lorentzian peak is constrained to be between 0 and 5 units away
from the center of the Gaussian peak.
See also: https://lmfit.github.io/lmfit-py/constraints.html#using-inequality-constraints

import matplotlib.pyplot as plt
import numpy as np

from lmfit import Minimizer, create_params, report_fit
from lmfit.lineshapes import gaussian, lorentzian

def residual(pars, x, data):
    model = (gaussian(x, pars['amp_g'], pars['cen_g'], pars['wid_g']) +
             lorentzian(x, pars['amp_l'], pars['cen_l'], pars['wid_l']))
    return model - data

Generate the simulated data using a Gaussian and Lorentzian lineshape:


np.random.seed(0)
x = np.linspace(0, 20.0, 601)

data = (gaussian(x, 21, 6.1, 1.2) + lorentzian(x, 10, 9.6, 1.3) +
        np.random.normal(scale=0.1, size=x.size))

Create the fitting parameters and set an inequality constraint for cen_l. First, we add a new fitting parameter
peak_split, which can take values between 0 and 5. Afterwards, we constrain the value of cen_l with the
expression 'peak_split+cen_g':

pfit = create_params(amp_g=10, cen_g=5, wid_g=1, amp_l=10,
                     peak_split=dict(value=2.5, min=0, max=5),
                     cen_l=dict(expr='peak_split+cen_g'),
                     wid_l=dict(expr='wid_g'))

mini = Minimizer(residual, pfit, fcn_args=(x, data))
out = mini.leastsq()
best_fit = data + out.residual

Performing a fit, here using the leastsq algorithm, gives the following fitting results:

report_fit(out.params)

[[Variables]]
amp_g: 21.2722842 +/- 0.05138772 (0.24%) (init = 10)
cen_g: 6.10496396 +/- 0.00334613 (0.05%) (init = 5)
wid_g: 1.21434954 +/- 0.00327317 (0.27%) (init = 1)
amp_l: 9.46504173 +/- 0.05445415 (0.58%) (init = 10)
peak_split: 3.52163544 +/- 0.01004618 (0.29%) (init = 2.5)
cen_l: 9.62659940 +/- 0.01066172 (0.11%) == 'peak_split+cen_g'
wid_l: 1.21434954 +/- 0.00327317 (0.27%) == 'wid_g'
[[Correlations]] (unreported correlations are < 0.100)
C(amp_g, wid_g) = +0.6199
C(amp_g, peak_split) = +0.3796
C(wid_g, peak_split) = +0.3445
C(amp_g, amp_l) = -0.2951
C(cen_g, amp_l) = -0.2761
C(amp_g, cen_g) = +0.1936
C(wid_g, amp_l) = -0.1651
C(cen_g, wid_g) = +0.1546

and the resulting figure:

plt.plot(x, data, 'o')
plt.plot(x, best_fit, '--', label='best fit')
plt.legend()
plt.show()


Total running time of the script: ( 0 minutes 0.319 seconds)

13.4 Fit Using differential_evolution Algorithm

This example compares the leastsq and differential_evolution algorithms on a fairly simple problem.

import matplotlib.pyplot as plt
import numpy as np

import lmfit

def resid(params, x, ydata):
    decay = params['decay'].value
    offset = params['offset'].value
    omega = params['omega'].value
    amp = params['amp'].value

    y_model = offset + amp * np.sin(x*omega) * np.exp(-x/decay)
    return y_model - ydata

Generate synthetic data and set-up Parameters with initial values/boundaries:


decay = 5
offset = 1.0
amp = 2.0
omega = 4.0

np.random.seed(2)
x = np.linspace(0, 10, 101)
y = offset + amp*np.sin(omega*x) * np.exp(-x/decay)
yn = y + np.random.normal(size=y.size, scale=0.450)

params = lmfit.Parameters()
params.add('offset', 2.0, min=0, max=10.0)
params.add('omega', 3.3, min=0, max=10.0)
params.add('amp', 2.5, min=0, max=10.0)
params.add('decay', 1.0, min=0, max=10.0)

Perform the fits and show fitting results and plot:

o1 = lmfit.minimize(resid, params, args=(x, yn), method='leastsq')

print("# Fit using leastsq:")
lmfit.report_fit(o1)

# Fit using leastsq:

[[Fit Statistics]]
# fitting method = leastsq
# function evals = 65
# data points = 101
# variables = 4
chi-square = 21.7961792
reduced chi-square = 0.22470288
Akaike info crit = -146.871969
Bayesian info crit = -136.411487
[[Variables]]
offset: 0.96333089 +/- 0.04735890 (4.92%) (init = 2)
omega: 3.98700839 +/- 0.02079709 (0.52%) (init = 3.3)
amp: 1.80253587 +/- 0.19401928 (10.76%) (init = 2.5)
decay: 5.76279753 +/- 1.04073348 (18.06%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(amp, decay) = -0.7550

o2 = lmfit.minimize(resid, params, args=(x, yn), method='differential_evolution')

print("\n\n# Fit using differential_evolution:")
lmfit.report_fit(o2)

# Fit using differential_evolution:

[[Fit Statistics]]
# fitting method = differential_evolution
# function evals = 1425
# data points = 101
# variables = 4
chi-square = 21.7961792
reduced chi-square = 0.22470288
Akaike info crit = -146.871969
Bayesian info crit = -136.411487
[[Variables]]
offset: 0.96333133 +/- 0.04735903 (4.92%) (init = 2)
omega: 3.98700854 +/- 0.02121810 (0.53%) (init = 3.3)
amp: 1.80252620 +/- 0.19022409 (10.55%) (init = 2.5)
decay: 5.76284501 +/- 1.00452608 (17.43%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(amp, decay) = -0.7434

plt.plot(x, yn, 'o', label='data')
plt.plot(x, yn+o1.residual, '-', label='leastsq')
plt.plot(x, yn+o2.residual, '--', label='diffev')
plt.legend()

Total running time of the script: ( 0 minutes 0.518 seconds)


13.5 Fit Using Bounds

A major advantage of using lmfit is that one can specify boundaries on fitting parameters, even if the underlying
algorithm in SciPy does not support them. For more information on how this is implemented, please refer to:
https://lmfit.github.io/lmfit-py/bounds.html
The example below shows how to set boundaries using the min and max attributes of fitting parameters.
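
Before the example, some background (a sketch of the MINUIT-style bounds mapping described in the lmfit documentation; external_value and internal_value are hypothetical helper names, not lmfit API). The fit actually varies an unbounded internal value, and the bounded value seen by the objective function is recovered with a sine transform:

import numpy as np

# map an unbounded internal value into [vmin, vmax] ...
def external_value(internal, vmin, vmax):
    return vmin + (np.sin(internal) + 1.0) * (vmax - vmin) / 2.0

# ... and back again
def internal_value(external, vmin, vmax):
    return np.arcsin(2.0 * (external - vmin) / (vmax - vmin) - 1.0)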

import matplotlib.pyplot as plt

from numpy import exp, linspace, pi, random, sign, sin

from lmfit import create_params, minimize
from lmfit.printfuncs import report_fit

create the ‘true’ Parameter values and residual function:

p_true = create_params(amp=14.0, period=5.4321, shift=0.12345, decay=0.010)


def residual(pars, x, data=None):
    argu = (x * pars['decay'])**2
    shift = pars['shift']
    if abs(shift) > pi/2:
        shift = shift - sign(shift)*pi
    model = pars['amp'] * sin(shift + x/pars['period']) * exp(-argu)
    if data is None:
        return model
    return model - data

Generate synthetic data and initialize fitting Parameters:

random.seed(0)
x = linspace(0, 250, 1500)
noise = random.normal(scale=2.8, size=x.size)
data = residual(p_true, x) + noise

fit_params = create_params(amp=dict(value=13, max=20, min=0),
                           period=dict(value=2, max=10),
                           shift=dict(value=0, max=pi/2., min=-pi/2.),
                           decay=dict(value=0.02, max=0.1, min=0))

Perform the fit and show the results:

out = minimize(residual, fit_params, args=(x,), kws={'data': data})
fit = residual(out.params, x)

report_fit(out, modelpars=p_true, correl_mode='table')

[[Fit Statistics]]
# fitting method = leastsq
# function evals = 79
# data points = 1500
# variables = 4
chi-square = 11301.3646
reduced chi-square = 7.55438813
Akaike info crit = 3037.18756
Bayesian info crit = 3058.44044
[[Variables]]
amp: 13.8904759 +/- 0.24410753 (1.76%) (init = 13), model_value = 14
period: 5.44026387 +/- 0.01416106 (0.26%) (init = 2), model_value = 5.4321
shift: 0.12464389 +/- 0.02414210 (19.37%) (init = 0), model_value = 0.12345
decay: 0.00996363 +/- 2.0275e-04 (2.03%) (init = 0.02), model_value = 0.01
[[Correlations]]
+----------+----------+----------+----------+----------+
| Variable | amp | period | shift | decay |
+----------+----------+----------+----------+----------+
| amp | +1.0000 | -0.0700 | -0.0870 | +0.5757 |
| period | -0.0700 | +1.0000 | +0.7999 | -0.0404 |
| shift | -0.0870 | +0.7999 | +1.0000 | -0.0502 |
| decay | +0.5757 | -0.0404 | -0.0502 | +1.0000 |
+----------+----------+----------+----------+----------+

plt.plot(x, data, 'o', label='data')
plt.plot(x, fit, label='best fit')
plt.legend()
plt.show()


Total running time of the script: ( 0 minutes 0.468 seconds)

13.6 Fit with Algebraic Constraint

Example of how to apply algebraic constraints using the expr attribute.

import matplotlib.pyplot as plt

from numpy import linspace, random

from lmfit.lineshapes import gaussian, lorentzian
from lmfit.models import GaussianModel, LinearModel, LorentzianModel

random.seed(0)
x = linspace(0.0, 20.0, 601)

data = (gaussian(x, amplitude=21, center=8.1, sigma=1.2) +
        lorentzian(x, amplitude=10, center=9.6, sigma=2.4) +
        0.01 + x*0.05 + random.normal(scale=0.23, size=x.size))

model = (GaussianModel(prefix='g_') + LorentzianModel(prefix='l_') +
         LinearModel(prefix='line_'))

params = model.make_params(g_amplitude=10, g_center=9, g_sigma=1,
                           line_slope=0, line_intercept=0)

params.add(name='total_amplitude', value=20)
params.set(l_amplitude=dict(expr='total_amplitude - g_amplitude'))
params.set(l_center=dict(expr='1.5+g_center'))
params.set(l_sigma=dict(expr='2*g_sigma'))

data_uncertainty = 0.021  # estimate of data error (for all data points)

init = model.eval(params, x=x)
result = model.fit(data, params, x=x, weights=1.0/data_uncertainty)

print(result.fit_report())

plt.plot(x, data, '+')
plt.plot(x, init, '--', label='initial fit')
plt.plot(x, result.best_fit, '-', label='best fit')
plt.legend()
plt.show()


[[Model]]
((Model(gaussian, prefix='g_') + Model(lorentzian, prefix='l_')) + Model(linear, prefix='line_'))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 65
# data points = 601
# variables = 6
chi-square = 71878.3055
reduced chi-square = 120.803875
Akaike info crit = 2887.26503
Bayesian info crit = 2913.65660
R-squared = -22.1967107
[[Variables]]
g_amplitude: 21.1877635 +/- 0.32192128 (1.52%) (init = 10)
g_center: 8.11125903 +/- 0.01162987 (0.14%) (init = 9)
g_sigma: 1.20925819 +/- 0.01170853 (0.97%) (init = 1)
l_amplitude: 9.41261441 +/- 0.61672968 (6.55%) == 'total_amplitude - g_amplitude'
l_center: 9.61125903 +/- 0.01162987 (0.12%) == '1.5+g_center'
l_sigma: 2.41851637 +/- 0.02341707 (0.97%) == '2*g_sigma'
line_slope: 0.04615727 +/- 0.00170178 (3.69%) (init = 0)
line_intercept: 0.05128584 +/- 0.02448063 (47.73%) (init = 0)
g_fwhm: 2.84758536 +/- 0.02757149 (0.97%) == '2.3548200*g_sigma'
g_height: 6.98998378 +/- 0.05837066 (0.84%) == '0.3989423*g_amplitude/max(1e-15, g_sigma)'
l_fwhm: 4.83703275 +/- 0.04683414 (0.97%) == '2.0000000*l_sigma'
l_height: 1.23882905 +/- 0.08992735 (7.26%) == '0.3183099*l_amplitude/max(1e-15, l_sigma)'
total_amplitude: 30.6003779 +/- 0.36481425 (1.19%) (init = 20)
[[Correlations]] (unreported correlations are < 0.100)
C(g_amplitude, g_sigma) = +0.8662
C(g_amplitude, g_center) = +0.7496
C(line_slope, line_intercept) = -0.7144
C(g_center, total_amplitude) = -0.6952
C(g_center, g_sigma) = +0.6227
C(g_amplitude, total_amplitude) = -0.6115
C(line_intercept, total_amplitude) = -0.5883
C(g_sigma, total_amplitude) = -0.4115
C(g_center, line_intercept) = +0.3868
C(g_amplitude, line_intercept) = +0.1834
C(g_amplitude, line_slope) = +0.1825
C(g_sigma, line_slope) = +0.1739

Total running time of the script: ( 0 minutes 0.309 seconds)


13.7 Fit Specifying Different Reduce Function

The reduce_fcn specifies how to convert a residual array to a scalar value for the scalar minimizers. The
default value is None (i.e., "sum of squares of residual"); alternatives are negentropy, neglogcauchy, or a
user-specified callable. For more information, please refer to:
https://lmfit.github.io/lmfit-py/fitting.html#using-the-minimizer-class
Here, we use as an example the negative log-likelihood of the Cauchy distribution (Student's t with one degree
of freedom) for robust fitting of data with outliers.
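
A user-specified callable simply maps the residual array to the scalar that the minimizer reduces. As a minimal sketch (an illustration, not part of the original example; soft_l1 is a hypothetical name, and the commented call mirrors the fits below):

import numpy as np

# a user-defined reduce_fcn receives the residual array and must
# return a single scalar value for the scalar minimizer
def soft_l1(residual):
    return np.sum(2.0 * (np.sqrt(1.0 + residual**2) - 1.0))

# hypothetical usage:
# out = lmfit.minimize(resid, params, args=(x, yn), method='L-BFGS-B',
#                      reduce_fcn=soft_l1)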

import matplotlib.pyplot as plt
import numpy as np

import lmfit

def resid(params, x, ydata):
    decay = params['decay'].value
    offset = params['offset'].value
    omega = params['omega'].value
    amp = params['amp'].value

    y_model = offset + amp * np.sin(x*omega) * np.exp(-x/decay)
    return y_model - ydata

Generate synthetic data with noise/outliers and initialize fitting Parameters:

decay = 5
offset = 1.0
amp = 2.0
omega = 4.0

np.random.seed(2)
x = np.linspace(0, 10, 101)
y = offset + amp * np.sin(omega*x) * np.exp(-x/decay)
yn = y + np.random.normal(size=y.size, scale=0.250)

outliers = np.random.randint(int(len(x)/3.0), len(x), int(len(x)/12))
yn[outliers] += 5*np.random.random(len(outliers))

params = lmfit.create_params(offset=2.0, omega=3.3, amp=2.5,
                             decay=dict(value=1, min=0))

Perform fits using the L-BFGS-B method with different reduce_fcn:

method = 'L-BFGS-B'
o1 = lmfit.minimize(resid, params, args=(x, yn), method=method)
print("# Fit using sum of squares:\n")
lmfit.report_fit(o1)

# Fit using sum of squares:

[[Fit Statistics]]
# fitting method = L-BFGS-B
# function evals = 130
# data points = 101
# variables = 4
chi-square = 32.1674767
reduced chi-square = 0.33162347
Akaike info crit = -107.560626
Bayesian info crit = -97.1001440
[[Variables]]
offset: 1.10392444 +/- 0.05751441 (5.21%) (init = 2)
omega: 3.97313428 +/- 0.02073920 (0.52%) (init = 3.3)
amp: 1.69977054 +/- 0.21587474 (12.70%) (init = 2.5)
decay: 7.65901541 +/- 1.87209240 (24.44%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(amp, decay) = -0.7733

o2 = lmfit.minimize(resid, params, args=(x, yn), method=method,
                    reduce_fcn='neglogcauchy')
print("\n\n# Robust Fit, using log-likelihood with Cauchy PDF:\n")
lmfit.report_fit(o2)

# Robust Fit, using log-likelihood with Cauchy PDF:

[[Fit Statistics]]
# fitting method = L-BFGS-B
# function evals = 135
# data points = 101
# variables = 4
chi-square = 33.5081334
reduced chi-square = 0.34544467
Akaike info crit = -103.436564
Bayesian info crit = -92.9760822
[[Variables]]
offset: 1.02005965 +/- 0.06642640 (6.51%) (init = 2)
omega: 3.98224426 +/- 0.02898702 (0.73%) (init = 3.3)
amp: 1.83231421 +/- 0.27241865 (14.87%) (init = 2.5)
decay: 5.77327486 +/- 1.45140618 (25.14%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(amp, decay) = -0.7584
C(offset, amp) = -0.1067

plt.plot(x, y, 'o', label='true function')
plt.plot(x, yn, '--*', label='with noise+outliers')
plt.plot(x, yn+o1.residual, '-', label='sum of squares fit')
plt.plot(x, yn+o2.residual, '-', label='robust fit')
plt.legend()
plt.show()


Total running time of the script: ( 0 minutes 0.313 seconds)

13.8 Building a lmfit model with SymPy

SymPy is a Python library for symbolic mathematics. It can be very useful to build a model with SymPy and then apply
that model to the data with lmfit. This example shows how to do that. Please note that this example requires both the
sympy and matplotlib packages.

import matplotlib.pyplot as plt
import numpy as np
import sympy
from sympy.parsing import sympy_parser

import lmfit

Instead of creating the SymPy symbols explicitly and building an expression with them, we will use the SymPy parser.

gauss_peak1 = sympy_parser.parse_expr('A1*exp(-(x-xc1)**2/(2*sigma1**2))')
gauss_peak2 = sympy_parser.parse_expr('A2*exp(-(x-xc2)**2/(2*sigma2**2))')
exp_back = sympy_parser.parse_expr('B*exp(-x/xw)')

model_list = sympy.Array((gauss_peak1, gauss_peak2, exp_back))
model = sum(model_list)
print(model)

A1*exp(-(x - xc1)**2/(2*sigma1**2)) + A2*exp(-(x - xc2)**2/(2*sigma2**2)) + B*exp(-x/xw)

We are using SymPy’s lambdify function to make a function from the model expressions. We then use these functions
to generate some fake data.

model_list_func = sympy.lambdify(list(model_list.free_symbols), model_list)
model_func = sympy.lambdify(list(model.free_symbols), model)

Generate synthetic data with noise and plot the data.

np.random.seed(1)
x = np.linspace(0, 10, 40)
param_values = dict(x=x, A1=2, sigma1=1, sigma2=1, A2=3, xc1=2, xc2=5, xw=4, B=5)
y = model_func(**param_values)
yi = model_list_func(**param_values)
yn = y + np.random.randn(y.size)*0.4

plt.plot(x, yn, 'o')
plt.plot(x, y)
for c in yi:
    plt.plot(x, c, color='0.7')


Next, we will just create a lmfit model from the function and fit the data.

lm_mod = lmfit.Model(model_func, independent_vars=('x'))
res = lm_mod.fit(data=yn, **param_values)

res.plot_fit()
plt.plot(x, y, label='true')
plt.legend()


The nice thing about using SymPy is that we can easily modify our fit function. Let's assume we know that the widths
of both Gaussians are identical. Similarly, we assume that the ratio between the two Gaussians is fixed to 3:2 for some
reason. Both constraints can be expressed by simply substituting the variables.

model2 = model.subs('sigma2', 'sigma1').subs('A2', '3/2*A1')
model2_func = sympy.lambdify(list(model2.free_symbols), model2)
lm_mod = lmfit.Model(model2_func, independent_vars=('x'))
param2_values = dict(x=x, A1=2, sigma1=1, xc1=2, xc2=5, xw=4, B=5)
res2 = lm_mod.fit(data=yn, **param2_values)
res2.plot_fit()
plt.plot(x, y, label='true')
plt.legend()


Total running time of the script: ( 0 minutes 1.584 seconds)

13.9 Fit Multiple Data Sets

Fitting multiple (simulated) Gaussian data sets simultaneously.
All minimizers require the residual array to be one-dimensional. Therefore, in the objective function we need to
flatten the array before returning it.
TODO: this could/should be using the Model interface / built-in models!

import matplotlib.pyplot as plt
import numpy as np

from lmfit import Parameters, minimize, report_fit

def gauss(x, amp, cen, sigma):
    """Gaussian lineshape."""
    return amp * np.exp(-(x-cen)**2 / (2.*sigma**2))


def gauss_dataset(params, i, x):
    """Calculate Gaussian lineshape from parameters for data set."""
    amp = params[f'amp_{i+1}']
    cen = params[f'cen_{i+1}']
    sig = params[f'sig_{i+1}']
    return gauss(x, amp, cen, sig)


def objective(params, x, data):
    """Calculate total residual for fits of Gaussians to several data sets."""
    ndata, _ = data.shape
    resid = 0.0*data[:]

    # make residual per data set
    for i in range(ndata):
        resid[i, :] = data[i, :] - gauss_dataset(params, i, x)

    # now flatten this to a 1D array, as minimize() needs
    return resid.flatten()

Create five simulated Gaussian data sets

np.random.seed(2021)
x = np.linspace(-1, 2, 151)
data = []
for _ in np.arange(5):
amp = 0.60 + 9.50*np.random.rand()
cen = -0.20 + 1.20*np.random.rand()
sig = 0.25 + 0.03*np.random.rand()
dat = gauss(x, amp, cen, sig) + np.random.normal(size=x.size, scale=0.1)
data.append(dat)
data = np.array(data)

Create five sets of fitting parameters, one per data set

fit_params = Parameters()
for iy, y in enumerate(data):
fit_params.add(f'amp_{iy+1}', value=0.5, min=0.0, max=200)
fit_params.add(f'cen_{iy+1}', value=0.4, min=-2.0, max=2.0)
fit_params.add(f'sig_{iy+1}', value=0.3, min=0.01, max=3.0)

Constrain the values of sigma to be the same for all peaks by assigning sig_2, ..., sig_5 to be equal to sig_1.

for iy in (2, 3, 4, 5):
    fit_params[f'sig_{iy}'].expr = 'sig_1'

Run the global fit and show the fitting result

out = minimize(objective, fit_params, args=(x, data))
report_fit(out.params)

[[Variables]]
amp_1: 6.32742010 +/- 0.02279089 (0.36%) (init = 0.5)
cen_1: 0.68049261 +/- 0.00126458 (0.19%) (init = 0.4)
sig_1: 0.25755570 +/- 4.9426e-04 (0.19%) (init = 0.3)
amp_2: 6.98604753 +/- 0.02296733 (0.33%) (init = 0.5)
cen_2: 0.50433700 +/- 0.00114536 (0.23%) (init = 0.4)
sig_2: 0.25755570 +/- 4.9426e-04 (0.19%) == 'sig_1'
amp_3: 7.11643510 +/- 0.02300415 (0.32%) (init = 0.5)
cen_3: -0.08260274 +/- 0.00112437 (1.36%) (init = 0.4)
sig_3: 0.25755570 +/- 4.9426e-04 (0.19%) == 'sig_1'
amp_4: 6.10197422 +/- 0.02273421 (0.37%) (init = 0.5)
cen_4: 0.07386098 +/- 0.00131130 (1.78%) (init = 0.4)
sig_4: 0.25755570 +/- 4.9426e-04 (0.19%) == 'sig_1'
amp_5: 9.23910555 +/- 0.02368872 (0.26%) (init = 0.5)
cen_5: 0.34443083 +/- 8.6605e-04 (0.25%) (init = 0.4)
sig_5: 0.25755570 +/- 4.9426e-04 (0.19%) == 'sig_1'
[[Correlations]] (unreported correlations are < 0.100)
C(sig_1, amp_5) = -0.3742
C(sig_1, amp_3) = -0.2968
C(sig_1, amp_2) = -0.2919
C(amp_1, sig_1) = -0.2664
C(sig_1, amp_4) = -0.2575
C(amp_3, amp_5) = +0.1111
C(amp_2, amp_5) = +0.1092

Plot the data sets and fits

plt.figure()
for i in range(5):
y_fit = gauss_dataset(out.params, i, x)
plt.plot(x, data[i, :], 'o', x, y_fit, '-')


Total running time of the script: ( 0 minutes 0.278 seconds)

13.10 Fit using the Model interface

This notebook shows a simple example of using the lmfit.Model class. For more information please refer to:
https://lmfit.github.io/lmfit-py/model.html#the-model-class.

import numpy as np
from pandas import Series

from lmfit import Model, Parameter, report_fit

The Model class is a flexible, concise curve fitter. I will illustrate fitting example data to an exponential decay.

def decay(t, N, tau):
    return N*np.exp(-t/tau)

The parameters are in no particular order. We’ll need some example data. I will use N=7 and tau=3, and add a little
noise.

t = np.linspace(0, 5, num=1000)
np.random.seed(2021)
data = decay(t, 7, 3) + np.random.randn(t.size)


Simplest Usage

model = Model(decay, independent_vars=['t'])
result = model.fit(data, t=t, N=10, tau=1)

The Model infers the parameter names by inspecting the arguments of the function, decay. Then I passed the
independent variable, t, and initial guesses for each parameter. A residual function is automatically defined, and a
least-squares regression is performed.
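
As a quick check (a sketch, not part of the original notebook), the inferred names are available on the model object:

print(model.param_names)       # expected: ['N', 'tau']
print(model.independent_vars)  # expected: ['t']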
We can immediately see the best-fit values:

print(result.values)

{'N': 7.146693193035486, 'tau': 2.898028980706703}

and use these best-fit parameters for plotting with the plot function:

result.plot()


We can review the best-fit Parameters by accessing result.params:

result.params.pretty_print()

Name Value Min Max Stderr Vary Expr Brute_Step
N 7.147 -inf inf 0.0913 True None None
tau 2.898 -inf inf 0.06299 True None None

More information about the fit is stored in the result, which is an lmfit.MinimizerResult object (see:
https://lmfit.github.io/lmfit-py/fitting.html#lmfit.minimizer.MinimizerResult)
Specifying Bounds and Holding Parameters Constant
Above, the Model class implicitly builds Parameter objects from keyword arguments of fit that match the arguments
of decay. You can build the Parameter objects explicitly; the following is equivalent.


result = model.fit(data, t=t,
                   N=Parameter('N', value=10),
                   tau=Parameter('tau', value=1))
report_fit(result.params)

[[Variables]]
N: 7.14669319 +/- 0.09130428 (1.28%) (init = 10)
tau: 2.89802898 +/- 0.06299118 (2.17%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(N, tau) = -0.7533

By building Parameter objects explicitly, you can specify bounds (min, max) and set parameters constant
(vary=False).

result = model.fit(data, t=t,
                   N=Parameter('N', value=7, vary=False),
                   tau=Parameter('tau', value=1, min=0))
report_fit(result.params)

[[Variables]]
N: 7 (fixed)
tau: 2.97663118 +/- 0.04347476 (1.46%) (init = 1)

Defining Parameters in Advance
Passing parameters to fit can become unwieldy. As an alternative, you can extract the parameters from model like
so, set them individually, and pass them to fit.

params = model.make_params(N=10, tau={'value': 1, 'min': 0})

result = model.fit(data, params, t=t)
report_fit(result.params)

[[Variables]]
N: 7.14669316 +/- 0.09130423 (1.28%) (init = 10)
tau: 2.89802901 +/- 0.06299127 (2.17%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(N, tau) = -0.7533

Keyword arguments override params, resetting value and all other properties (min, max, vary).

result = model.fit(data, params, t=t, tau=1)
report_fit(result.params)

[[Variables]]
N: 7.14669316 +/- 0.09130423 (1.28%) (init = 10)
tau: 2.89802901 +/- 0.06299127 (2.17%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(N, tau) = -0.7533

The input parameters are not modified by fit. They can be reused, retaining the same initial value. If you want to use
the result of one fit as the initial guess for the next, simply pass params=result.params.
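
A minimal sketch of that chaining (an illustration, reusing the objects above):

# seed a refinement fit with the best-fit parameters of the previous one
refined = model.fit(data, params=result.params, t=t)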


TODO/FIXME: it is not clear that there ever was a "helpful exception"; currently this raises ValueError: The input
contains nan values.
A Helpful Exception
All this implicit magic makes it very easy for the user to neglect to set a parameter. The fit function checks for this
and raises a helpful exception.

# result = model.fit(data, t=t, tau=1)  # N unspecified

An extra parameter that cannot be matched to the model function will throw a UserWarning, but it will not raise,
leaving open the possibility of unforeseen extensions calling for some parameters.
Weighted Fits
Use the weights argument to perform a weighted fit, as in the code below. If you prefer to think of the fit in terms
of uncertainties sigma, use weights=1/sigma.

weights = np.arange(len(data))
result = model.fit(data, params, t=t, weights=weights)
report_fit(result.params)

[[Variables]]
N: 6.98535179 +/- 0.28002384 (4.01%) (init = 10)
tau: 2.97268236 +/- 0.11134755 (3.75%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(N, tau) = -0.9311

Handling Missing Data
By default, attempting to fit data that includes a NaN, which conventionally indicates a "missing" observation, raises a
lengthy exception. You can choose to omit (i.e., skip over) missing values instead.

data_with_holes = data.copy()
data_with_holes[[5, 500, 700]] = np.nan  # Replace arbitrary values with NaN.

model = Model(decay, independent_vars=['t'], nan_policy='omit')
result = model.fit(data_with_holes, params, t=t)
report_fit(result.params)

[[Variables]]
N: 7.15448795 +/- 0.09181809 (1.28%) (init = 10)
tau: 2.89285089 +/- 0.06306004 (2.18%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(N, tau) = -0.7542

If you don't want to ignore missing values, you can set the model to raise proactively, checking for missing values
before attempting the fit. Uncomment the following to see the error:

# model = Model(decay, independent_vars=['t'], nan_policy='raise')
# result = model.fit(data_with_holes, params, t=t)

The default setting is nan_policy='raise', which does check for NaNs and raises an exception when present.
Null-checking relies on pandas.isnull if it is available. If pandas cannot be imported, it silently falls back on
numpy.isnan.
Data Alignment


Imagine a collection of time series data with different lengths. It would be convenient to define one sufficiently long
array t and use it for each time series, regardless of length. pandas (https://pandas.pydata.org/pandas-docs/stable/)
provides tools for aligning indexed data. And, unlike most wrappers to scipy.leastsq, Model can handle pandas
objects out of the box, using its data alignment features.
Here I take just a slice of the data and fit it to the full t. It is automatically aligned to the correct section of t using
Series’ index.

model = Model(decay, independent_vars=['t'])

truncated_data = Series(data)[200:800]  # data points 200-800
t = Series(t)  # all 1000 points
result = model.fit(truncated_data, params, t=t)
report_fit(result.params)

[[Variables]]
N: 7.10725864 +/- 0.24259071 (3.41%) (init = 10)
tau: 2.92503564 +/- 0.13481789 (4.61%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(N, tau) = -0.9320

Data with missing entries and an unequal length still aligns properly.

model = Model(decay, independent_vars=['t'], nan_policy='omit')

truncated_data_with_holes = Series(data_with_holes)[200:800]
result = model.fit(truncated_data_with_holes, params, t=t)
report_fit(result.params)

[[Variables]]
N: 7.11270194 +/- 0.24334895 (3.42%) (init = 10)
tau: 2.92065227 +/- 0.13488230 (4.62%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(N, tau) = -0.9320

Total running time of the script: ( 0 minutes 0.440 seconds)

13.11 Fit Specifying a Function to Compute the Jacobian

Specifying an analytical function to calculate the Jacobian can speed up the fitting procedure.

import matplotlib.pyplot as plt
import numpy as np

from lmfit import Minimizer, Parameters

def func(pars, x, data=None):
    a, b, c = pars['a'], pars['b'], pars['c']
    model = a * np.exp(-b*x) + c
    if data is None:
        return model
    return model - data

def dfunc(pars, x, data=None):
    a, b = pars['a'], pars['b']
    v = np.exp(-b*x)
    return np.array([v, -a*x*v, np.ones(len(x))])


def f(var, x):
    return var[0] * np.exp(-var[1]*x) + var[2]


params = Parameters()
params.add('a', value=10)
params.add('b', value=10)
params.add('c', value=10)

a, b, c = 2.5, 1.3, 0.8

x = np.linspace(0, 4, 50)
y = f([a, b, c], x)
np.random.seed(2021)
data = y + 0.15*np.random.normal(size=x.size)

Fit without analytic derivative:

min1 = Minimizer(func, params, fcn_args=(x,), fcn_kws={'data': data})
out1 = min1.leastsq()
fit1 = func(out1.params, x)

Fit with analytic derivative:

min2 = Minimizer(func, params, fcn_args=(x,), fcn_kws={'data': data})
out2 = min2.leastsq(Dfun=dfunc, col_deriv=1)
fit2 = func(out2.params, x)

Comparison of fit to exponential decay with/without analytical derivatives to model = a*exp(-b*x) + c:

print(f'"true" parameters are: a = {a:.3f}, b = {b:.3f}, c = {c:.3f}\n\n'
      '|=========================================\n'
      '| Statistic/Parameter | Without | With |\n'
      '|-----------------------------------------\n'
      f'| N Function Calls | {out1.nfev:d} | {out2.nfev:d} |\n'
      f'| Chi-square | {out1.chisqr:.4f} | {out2.chisqr:.4f} |\n'
      f"| a | {out1.params['a'].value:.4f} | {out2.params['a'].value:.4f} |\n"
      f"| b | {out1.params['b'].value:.4f} | {out2.params['b'].value:.4f} |\n"
      f"| c | {out1.params['c'].value:.4f} | {out2.params['c'].value:.4f} |\n"
      '------------------------------------------')

"true" parameters are: a = 2.500, b = 1.300, c = 0.800

|=========================================
| Statistic/Parameter | Without | With |
|-----------------------------------------
| N Function Calls | 39 | 12 |
| Chi-square | 1.0920 | 1.0920 |
| a | 2.5635 | 2.5635 |
| b | 1.3585 | 1.3585 |
| c | 0.8241 | 0.8241 |
------------------------------------------

and the best-fit to the synthetic data (with added noise) is the same for both methods:

plt.plot(x, data, 'o', label='data')
plt.plot(x, fit1, label='without analytical derivative')
plt.plot(x, fit2, '--', label='with analytical derivative')
plt.legend()

Total running time of the script: ( 0 minutes 0.256 seconds)


13.12 Outlier detection via leave-one-out

Outliers can sometimes be identified by assessing the influence of each datapoint. To assess the influence of one point,
we fit the dataset without the point and compare the result with the fit of the full dataset. The code below shows how
to do this with lmfit. Note that the presented method is very basic.

from collections import defaultdict

import matplotlib.pyplot as plt
import numpy as np

import lmfit

Generate test data and model:

x = np.linspace(0.3, 10, 100)
np.random.seed(1)
y = 1.0 / (0.1 * x) + 2.0 + 3 * np.random.randn(x.size)

params = lmfit.Parameters()
params.add_many(('a', 0.1), ('b', 1))


def func(x, a, b):
    return 1.0 / (a * x) + b

Make five points outliers:

idx = np.random.randint(0, x.size, 5)
y[idx] += 10 * np.random.randn(idx.size)

Fit the data:

model = lmfit.Model(func, independent_vars=['x'])
fit_result = model.fit(y, x=x, a=0.1, b=2)

and gives the plot and fitting results below:

fit_result.plot_fit()
plt.plot(x[idx], y[idx], 'o', label='outliers')
plt.show()


print(fit_result.fit_report())

[[Model]]
Model(func)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 13
# data points = 100
# variables = 2
chi-square = 1338.34458
reduced chi-square = 13.6565773
Akaike info crit = 263.401856
Bayesian info crit = 268.612196
R-squared = 0.70062332
[[Variables]]
a: 0.08937623 +/- 0.00590174 (6.60%) (init = 0.1)
b: 1.51298991 +/- 0.46229147 (30.55%) (init = 2)
[[Correlations]] (unreported correlations are < 0.100)
C(a, b) = +0.6008

Fit the dataset while omitting one data point:

best_vals = defaultdict(lambda: np.zeros(x.size))
stderrs = defaultdict(lambda: np.zeros(x.size))
chi_sq = np.zeros_like(x)
for i in range(x.size):
    idx2 = np.arange(0, x.size)
    idx2 = np.delete(idx2, i)
    tmp_x = x[idx2]
    tmp = model.fit(y[idx2], x=tmp_x, a=fit_result.params['a'],
                    b=fit_result.params['b'])
    chi_sq[i] = tmp.chisqr
    for p in tmp.params:
        tpar = tmp.params[p]
        best_vals[p][i] = tpar.value
        stderrs[p][i] = (tpar.stderr / fit_result.params[p].stderr)

Plot the influence of each point on the reduced chi-square:

fig, ax = plt.subplots()
ax.plot(x, (fit_result.chisqr - chi_sq) / chi_sq)
ax.scatter(x[idx], fit_result.chisqr / chi_sq[idx] - 1, color='r',
label='outlier')
ax.set_ylabel(r'Relative red. $\chi^2$ change')
ax.set_xlabel('x')
ax.legend()


Plot the influence on the parameter value and error of each point:

fig, axs = plt.subplots(4, figsize=(4, 7), sharex='col')

axs[0].plot(x, best_vals['a'])
axs[0].scatter(x[idx], best_vals['a'][idx], color='r', label='outlier')
axs[0].set_ylabel('best a')

axs[1].plot(x, best_vals['b'])
axs[1].scatter(x[idx], best_vals['b'][idx], color='r', label='outlier')
axs[1].set_ylabel('best b')

axs[2].plot(x, stderrs['a'])
axs[2].scatter(x[idx], stderrs['a'][idx], color='r', label='outlier')
axs[2].set_ylabel('err a change')

axs[3].plot(x, stderrs['b'])
axs[3].scatter(x[idx], stderrs['b'][idx], color='r', label='outlier')
axs[3].set_ylabel('err b change')

axs[3].set_xlabel('x')


Total running time of the script: ( 0 minutes 1.251 seconds)


13.13 Emcee and the Model Interface

import corner
import matplotlib.pyplot as plt
import numpy as np

import lmfit

Set up a double-exponential function and create a Model:

def double_exp(x, a1, t1, a2, t2):
    return a1*np.exp(-x/t1) + a2*np.exp(-(x-0.1) / t2)


model = lmfit.Model(double_exp)

Generate some fake data from the model with added noise:

truths = (3.0, 2.0, -5.0, 10.0)

x = np.linspace(1, 10, 250)
np.random.seed(0)
y = double_exp(x, *truths) + 0.1*np.random.randn(x.size)

Create model parameters and give them initial values:

p = model.make_params(a1=4, t1=3, a2=4, t2=3)

Fit the model using a traditional minimizer, and show the output:

result = model.fit(data=y, params=p, x=x, method='Nelder', nan_policy='omit')

lmfit.report_fit(result)
result.plot()


[[Fit Statistics]]
# fitting method = Nelder-Mead
# function evals = 609
# data points = 250
# variables = 4
chi-square = 2.33333982
reduced chi-square = 0.00948512
Akaike info crit = -1160.54007
Bayesian info crit = -1146.45423
R-squared = 0.94237407
[[Variables]]
a1: 2.98623689 +/- 0.15010519 (5.03%) (init = 4)
t1: 1.30993186 +/- 0.13449653 (10.27%) (init = 3)
a2: -4.33525597 +/- 0.11765821 (2.71%) (init = 4)
t2: 11.8240752 +/- 0.47172598 (3.99%) (init = 3)
[[Correlations]] (unreported correlations are < 0.100)
C(a2, t2) = +0.9876
C(t1, a2) = -0.9278
C(t1, t2) = -0.8852
C(a1, t1) = -0.6093
C(a1, a2) = +0.2973
C(a1, t2) = +0.2319

Calculate parameter covariance using emcee:
• start the walkers out at the best-fit values
• set is_weighted to False to estimate the noise weights
• set some sensible priors on the uncertainty to keep the MCMC in check

emcee_kws = dict(steps=5000, burn=500, thin=20, is_weighted=False,
                 progress=False)
emcee_params = result.params.copy()
emcee_params.add('__lnsigma', value=np.log(0.1), min=np.log(0.001), max=np.log(2.0))

run the MCMC algorithm and show the results:

result_emcee = model.fit(data=y, x=x, params=emcee_params, method='emcee',
                         nan_policy='omit', fit_kws=emcee_kws)

lmfit.report_fit(result_emcee)

[[Fit Statistics]]
# fitting method = emcee
# function evals = 500000
# data points = 250
# variables = 5
chi-square = 245.221790
reduced chi-square = 1.00090526
Akaike info crit = 5.17553688
Bayesian info crit = 22.7828415
R-squared = -5.05618323
[[Variables]]
a1: 2.99546858 +/- 0.14834594 (4.95%) (init = 2.986237)
t1: 1.32127391 +/- 0.14079400 (10.66%) (init = 1.309932)
a2: -4.34376940 +/- 0.12389335 (2.85%) (init = -4.335256)
t2: 11.7937607 +/- 0.48879633 (4.14%) (init = 11.82408)
__lnsigma: -2.32712371 +/- 0.04514314 (1.94%) (init = -2.302585)
[[Correlations]] (unreported correlations are < 0.100)
C(a2, t2) = +0.9810
C(t1, a2) = -0.9367
C(t1, t2) = -0.8949
C(a1, t1) = -0.5154
C(a1, a2) = +0.2197
C(a1, t2) = +0.1868


result_emcee.plot_fit()
plt.plot(x, model.eval(params=result.params, x=x), '--', label='Nelder')
plt.legend()

Check the acceptance fraction to see whether emcee performed well:

plt.plot(result_emcee.acceptance_fraction, 'o')
plt.xlabel('walker')
plt.ylabel('acceptance fraction')


Try to compute the autocorrelation time:

if hasattr(result_emcee, "acor"):
    print("Autocorrelation time for the parameters:")
    print("----------------------------------------")
    for i, p in enumerate(result.params):
        print(f'{p} = {result_emcee.acor[i]:.3f}')

Autocorrelation time for the parameters:
----------------------------------------
a1 = 61.334
t1 = 85.867
a2 = 86.046
t2 = 84.745

Plot the parameter covariances returned by emcee using corner:

emcee_corner = corner.corner(result_emcee.flatchain, labels=result_emcee.var_names,
                             truths=list(result_emcee.params.valuesdict().values()))


print("\nmedian of posterior probability distribution")


print('--------------------------------------------')
lmfit.report_fit(result_emcee.params)

median of posterior probability distribution
--------------------------------------------
[[Variables]]
a1: 2.99546858 +/- 0.14834594 (4.95%) (init = 2.986237)
t1: 1.32127391 +/- 0.14079400 (10.66%) (init = 1.309932)
a2: -4.34376940 +/- 0.12389335 (2.85%) (init = -4.335256)
t2: 11.7937607 +/- 0.48879633 (4.14%) (init = 11.82408)
__lnsigma: -2.32712371 +/- 0.04514314 (1.94%) (init = -2.302585)
[[Correlations]] (unreported correlations are < 0.100)
C(a2, t2) = +0.9810
C(t1, a2) = -0.9367
C(t1, t2) = -0.8949
C(a1, t1) = -0.5154
C(a1, a2) = +0.2197
C(a1, t2) = +0.1868

Find the maximum likelihood solution:

highest_prob = np.argmax(result_emcee.lnprob)
hp_loc = np.unravel_index(highest_prob, result_emcee.lnprob.shape)
mle_soln = result_emcee.chain[hp_loc]
print("\nMaximum Likelihood Estimation (MLE):")
print('----------------------------------')
for ix, param in enumerate(emcee_params):
    print(f"{param}: {mle_soln[ix]:.3f}")

quantiles = np.percentile(result_emcee.flatchain['t1'], [2.28, 15.9, 50, 84.2, 97.7])
print(f"\n\n1 sigma spread = {0.5 * (quantiles[3] - quantiles[1]):.3f}")
print(f"2 sigma spread = {0.5 * (quantiles[4] - quantiles[0]):.3f}")

Maximum Likelihood Estimation (MLE):
----------------------------------
a1: 2.971
t1: 1.317
a2: -4.336
t2: 11.815
__lnsigma: -2.336

1 sigma spread = 0.141
2 sigma spread = 0.291

Total running time of the script: ( 0 minutes 35.405 seconds)

13.14 Complex Resonator Model

This notebook shows how to fit the parameters of a complex resonator, using lmfit.Model and defining a custom
Model class.
Following Khalil et al. (https://arxiv.org/abs/1108.3117), we can model the forward transmission of a microwave
resonator with total quality factor Q, coupling quality factor Q_e, and resonant frequency f_0 using:

S_{21}(f) = 1 - \frac{Q\,Q_e^{-1}}{1 + 2jQ(f - f_0)/f_0}

S_21 is thus a complex function of a real frequency.
By allowing Q_e to be complex, this model can take into account mismatches in the input and output transmission
impedances.


import matplotlib.pyplot as plt
import numpy as np

import lmfit

Since scipy.optimize and lmfit require real parameters, we represent Q_e as Q_e_real + 1j*Q_e_imag.

def linear_resonator(f, f_0, Q, Q_e_real, Q_e_imag):
    Q_e = Q_e_real + 1j*Q_e_imag
    return 1 - (Q * Q_e**-1 / (1 + 2j * Q * (f - f_0) / f_0))

The standard practice for defining an lmfit model is as follows:

class ResonatorModel(lmfit.model.Model):
    __doc__ = "resonator model" + lmfit.models.COMMON_INIT_DOC

    def __init__(self, *args, **kwargs):
        # pass in the defining equation so the user doesn't have to later
        super().__init__(linear_resonator, *args, **kwargs)

        self.set_param_hint('Q', min=0)  # enforce Q is positive

    def guess(self, data, f=None, **kwargs):
        verbose = kwargs.pop('verbose', None)
        if f is None:
            return
        argmin_s21 = np.abs(data).argmin()
        fmin = f.min()
        fmax = f.max()
        f_0_guess = f[argmin_s21]  # guess that the resonance is the lowest point
        Q_min = 0.1 * (f_0_guess/(fmax-fmin))  # assume the user isn't trying to fit just a small part of a resonance curve
        delta_f = np.diff(f)  # assume f is sorted
        min_delta_f = delta_f[delta_f > 0].min()
        Q_max = f_0_guess/min_delta_f  # assume data actually samples the resonance reasonably
        Q_guess = np.sqrt(Q_min*Q_max)  # geometric mean, why not?
        Q_e_real_guess = Q_guess/(1-np.abs(data[argmin_s21]))
        if verbose:
            print(f"fmin={fmin}, fmax={fmax}, f_0_guess={f_0_guess}")
            print(f"Qmin={Q_min}, Q_max={Q_max}, Q_guess={Q_guess}, Q_e_real_guess={Q_e_real_guess}")
        params = self.make_params(Q=Q_guess, Q_e_real=Q_e_real_guess, Q_e_imag=0, f_0=f_0_guess)
        params[f'{self.prefix}Q'].set(min=Q_min, max=Q_max)
        params[f'{self.prefix}f_0'].set(min=fmin, max=fmax)
        return lmfit.models.update_param_vals(params, self.prefix, **kwargs)

Now let’s use the model to generate some fake data:

resonator = ResonatorModel()
true_params = resonator.make_params(f_0=100, Q=10000, Q_e_real=9000, Q_e_imag=-9000)

f = np.linspace(99.95, 100.05, 100)
true_s21 = resonator.eval(params=true_params, f=f)
noise_scale = 0.02
np.random.seed(123)
measured_s21 = true_s21 + noise_scale*(np.random.randn(100) + 1j*np.random.randn(100))

plt.plot(f, 20*np.log10(np.abs(measured_s21)))
plt.ylabel('|S21| (dB)')
plt.xlabel('MHz')
plt.title('simulated measurement')

Try out the guess method we added:

guess = resonator.guess(measured_s21, f=f, verbose=True)

fmin=99.95, fmax=100.05, f_0_guess=100.00353535353536
Qmin=100.00353535354105, Q_max=99003.50000055433, Q_guess=3146.537781821432, Q_e_real_guess=5082.2474265369565

And now fit the data using the guessed values as a starting point:

result = resonator.fit(measured_s21, params=guess, f=f, verbose=True)

print(result.fit_report() + '\n')
result.params.pretty_print()

[[Model]]
Model(linear_resonator)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 41
# data points = 200
# variables = 4
chi-square = 0.08533642
reduced chi-square = 4.3539e-04
Akaike info crit = -1543.89425
Bayesian info crit = -1530.70099
R-squared = -8.5336e+13
[[Variables]]
f_0: 100.000096 +/- 7.0309e-05 (0.00%) (init = 100.0035)
Q: 10059.4972 +/- 142.294636 (1.41%) (init = 3146.538)
Q_e_real: 9180.62017 +/- 133.777681 (1.46%) (init = 5082.247)
Q_e_imag: -9137.03667 +/- 133.769692 (1.46%) (init = 0)
[[Correlations]] (unreported correlations are < 0.100)
C(Q, Q_e_real) = +0.5175
C(f_0, Q_e_imag) = +0.5175
C(f_0, Q_e_real) = +0.5151
C(Q, Q_e_imag) = -0.5150

Name Value Min Max Stderr Vary Expr Brute_Step
Q 1.006e+04 100 9.9e+04 142.3 True None None
Q_e_imag -9137 -inf inf 133.8 True None None
Q_e_real 9181 -inf inf 133.8 True None None
f_0 100 99.95 100 7.031e-05 True None None

Now we’ll make some plots of the data and fit. Define a convenience function for plotting complex quantities:

def plot_ri(data, *args, **kwargs):
    plt.plot(data.real, data.imag, *args, **kwargs)


fit_s21 = resonator.eval(params=result.params, f=f)
guess_s21 = resonator.eval(params=guess, f=f)

plt.figure()
plot_ri(measured_s21, '.')
plot_ri(fit_s21, '.-', label='best fit')
plot_ri(guess_s21, '--', label='initial fit')
plt.legend()
plt.xlabel('Re(S21)')
plt.ylabel('Im(S21)')

plt.figure()
plt.plot(f, 20*np.log10(np.abs(measured_s21)), '.')
plt.plot(f, 20*np.log10(np.abs(fit_s21)), '.-', label='best fit')
plt.plot(f, 20*np.log10(np.abs(guess_s21)), '--', label='initial fit')
plt.legend()
plt.ylabel('|S21| (dB)')
plt.xlabel('MHz')



Total running time of the script: ( 0 minutes 0.736 seconds)

13.15 Model Selection using lmfit and emcee

FIXME: this is a useful example; however, it doesn't run correctly anymore as the PTSampler was removed in emcee
v3...
lmfit.emcee can be used to obtain the posterior probability distribution of parameters, given a set of experimental data.
This notebook shows how it can be used for Bayesian model selection.

import matplotlib.pyplot as plt
import numpy as np

import lmfit

Define a Gaussian lineshape and generate some data:

def gauss(x, a_max, loc, sd):
    return a_max * np.exp(-((x - loc) / sd)**2)


x = np.linspace(3, 7, 250)
np.random.seed(0)
y = 4 + 10 * x + gauss(x, 200, 5, 0.5) + gauss(x, 60, 5.8, 0.2)
dy = np.sqrt(y)
y += dy * np.random.randn(y.size)

Plot the data:

plt.errorbar(x, y)

Define the normalised residual for the data:

def residual(p, just_generative=False):
    v = p.valuesdict()
    generative = v['a'] + v['b'] * x
    M = 0
    while f'a_max{M}' in v:
        generative += gauss(x, v[f'a_max{M}'], v[f'loc{M}'], v[f'sd{M}'])
        M += 1

    if just_generative:
        return generative
    return (generative - y) / dy

Create a Parameter set for the initial guesses:

def initial_peak_params(M):
    p = lmfit.Parameters()

    # a and b give a linear background
    a = np.mean(y)
    b = 1

    # a_max, loc and sd are the amplitude, location and SD of each Gaussian
    # component
    a_max = np.max(y)
    loc = np.mean(x)
    sd = (np.max(x) - np.min(x)) * 0.5

    p.add_many(('a', a, True, 0, 10), ('b', b, True, 1, 15))

    for i in range(M):
        p.add_many((f'a_max{i}', 0.5 * a_max, True, 10, a_max),
                   (f'loc{i}', loc, True, np.min(x), np.max(x)),
                   (f'sd{i}', sd, True, 0.1, np.max(x) - np.min(x)))
    return p

Solving with minimize gives the Maximum Likelihood solution.

p1 = initial_peak_params(1)
mi1 = lmfit.minimize(residual, p1, method='differential_evolution')
lmfit.printfuncs.report_fit(mi1.params, min_correl=0.5)

From inspection of the data above we can tell that there is going to be more than 1 Gaussian component, but how
many are there? A Bayesian approach can be used for this model selection problem. We can do this with lmfit.emcee,
which uses the emcee package to do a Markov Chain Monte Carlo sampling of the posterior probability distribution.
lmfit.emcee requires a function that returns the log-posterior probability. The log-posterior probability is a sum of the
log-prior probability and log-likelihood functions.
The log-prior probability encodes information about what you already believe about the system. lmfit.emcee assumes
that this log-prior probability is zero if all the parameters are within their bounds and -np.inf if any of the parameters
are outside their bounds. As such it’s a uniform prior.
The log-likelihood function is given below. To use non-uniform priors, the corresponding log-prior terms should be included in lnprob; as written, lnprob is just the log-likelihood used for the sampling.

def lnprob(p):
    # residual(..., just_generative=True) returns the generative model itself
    resid = residual(p, just_generative=True)
    return -0.5 * np.sum(((resid - y) / dy)**2 + np.log(2 * np.pi * dy**2))
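
To use a non-uniform prior, the corresponding log-prior term can be added inside this function. As a minimal sketch (hypothetical: the choice of the background parameter 'a' and the values of mu and tau are illustrative, not part of this example), a Gaussian prior would look like:

def lnprob_gaussian_prior(p, mu=4.0, tau=2.0):
    # log-likelihood, identical to lnprob above
    model = residual(p, just_generative=True)
    lnlike = -0.5 * np.sum(((model - y) / dy)**2 + np.log(2 * np.pi * dy**2))
    # Gaussian log-prior on 'a'; mu and tau are assumed hyper-parameters
    lnprior = -0.5 * ((p['a'].value - mu) / tau)**2 - np.log(tau * np.sqrt(2 * np.pi))
    return lnprior + lnlike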

To start with we have to create the minimizers and burn them in. We create 4 different minimizers representing 0, 1, 2 or 3 Gaussian contributions. To do the model selection we have to integrate over the log-posterior distribution to see which has the highest probability. This is done using the thermodynamic_integration_log_evidence method of the sampler attribute contained in the lmfit.Minimizer object.

# Work out the log-evidence for different numbers of peaks:
total_steps = 310
burn = 300
thin = 10
ntemps = 15
workers = 1  # the multiprocessing does not work with sphinx-gallery
log_evidence = []
res = []

# set up the Minimizers
for i in range(4):
    p0 = initial_peak_params(i)
    # you can't use lnprob as a userfcn with minimize because it needs to be
    # maximised
    mini = lmfit.Minimizer(residual, p0)
    out = mini.minimize(method='differential_evolution')
    res.append(out)

mini = []
# burn in the samplers
for i in range(4):
    # do the sampling
    mini.append(lmfit.Minimizer(lnprob, res[i].params))
    out = mini[i].emcee(steps=total_steps, ntemps=ntemps, workers=workers,
                        reuse_sampler=False, float_behavior='posterior',
                        progress=False)
    # get the evidence
    print(i, total_steps, mini[i].sampler.thermodynamic_integration_log_evidence())
    log_evidence.append(mini[i].sampler.thermodynamic_integration_log_evidence()[0])

Once we've burned in the samplers we have to do a collection run. We thin out the MCMC chain to reduce autocorrelation between successive samples.

for j in range(6):
    total_steps += 100
    for i in range(4):
        # do the sampling
        res = mini[i].emcee(burn=burn, steps=100, thin=thin, ntemps=ntemps,
                            workers=workers, reuse_sampler=True, progress=False)
        # get the evidence
        print(i, total_steps, mini[i].sampler.thermodynamic_integration_log_evidence())
        log_evidence.append(mini[i].sampler.thermodynamic_integration_log_evidence()[0])

plt.plot(log_evidence[-4:])
plt.ylabel('Log-evidence')
plt.xlabel('number of peaks')

The Bayes factor is related to the exponential of the difference between the log-evidence values. Thus, 0 peaks is not
very likely compared to 1 peak. But 1 peak is not as good as 2 peaks. 3 peaks is not that much better than 2 peaks.
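
Written out, with $Z_i$ denoting the evidence for the model with $i$ peaks, the Bayes factor comparing two models is

$$B_{ij} = \frac{Z_i}{Z_j} = \exp(\ln Z_i - \ln Z_j),$$

which is exactly what the ratios computed below estimate.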

r01 = np.exp(log_evidence[-4] - log_evidence[-3])
r12 = np.exp(log_evidence[-3] - log_evidence[-2])
r23 = np.exp(log_evidence[-2] - log_evidence[-1])

print(r01, r12, r23)

These numbers tell us that the zero-peak model is essentially ruled out relative to one peak (the ratio is effectively zero). Two peaks is 7e49 times more likely than one peak, while three peaks is only 1.1 times more likely than two peaks. With this data one would say that two peaks is sufficient.
Caution has to be taken with these values. The log-priors for this sampling are uniform but improper, i.e. they are not normalised properly. Internally the lnprior probability is calculated as 0 if all parameters are within their bounds and -np.inf if any parameter is outside the bounds. The lnprob function defined above is the log-likelihood alone. Remember that the log-posterior probability is equal to the sum of the log-prior and log-likelihood probabilities. Extra terms can be added to the lnprob function to calculate the normalised log-probability. These terms would look something like:
$$\log\left(\prod_i \frac{1}{\max_i - \min_i}\right)$$

where $\max_i$ and $\min_i$ are the upper and lower bounds for parameter $i$, and the prior is a uniform distribution. Other types of prior are possible. For example, you might expect the prior to be Gaussian.
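
As a minimal sketch (assuming, as holds for the Parameters built by initial_peak_params above, that every varying parameter has finite bounds), a normalised variant of lnprob could be written as:

def lnprob_normalised(p):
    # log-likelihood, identical to lnprob above
    model = residual(p, just_generative=True)
    lnlike = -0.5 * np.sum(((model - y) / dy)**2 + np.log(2 * np.pi * dy**2))
    # normalisation of the uniform priors: log(prod_i 1/(max_i - min_i))
    lnprior = -np.sum([np.log(par.max - par.min) for par in p.values() if par.vary])
    return lnprior + lnlike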
Total running time of the script: ( 0 minutes 0.000 seconds)

13.16 Calculate Confidence Intervals

import matplotlib.pyplot as plt
from numpy import argsort, exp, linspace, pi, random, sign, sin, unique
from scipy.interpolate import interp1d

from lmfit import (Minimizer, conf_interval, conf_interval2d, create_params,
                   report_ci, report_fit)

Define the residual function, specify “true” parameter values, and generate a synthetic data set with some noise:

def residual(pars, x, data=None):
    argu = (x*pars['decay'])**2
    shift = pars['shift']
    if abs(shift) > pi/2:
        shift = shift - sign(shift)*pi
    model = pars['amp']*sin(shift + x/pars['period']) * exp(-argu)
    if data is None:
        return model
    return model - data

p_true = create_params(amp=14.0, period=5.33, shift=0.123, decay=0.010)

x = linspace(0.0, 250.0, 2500)
random.seed(2021)
noise = random.normal(scale=0.7215, size=x.size)
data = residual(p_true, x) + noise

Create fitting parameters and set initial values:

fit_params = create_params(amp=13.0, period=2, shift=0.0, decay=0.020)

Set up the minimizer, perform the fit using the leastsq algorithm, and show the report:

mini = Minimizer(residual, fit_params, fcn_args=(x,), fcn_kws={'data': data})
out = mini.leastsq()

fit = residual(out.params, x)
report_fit(out)

[[Fit Statistics]]
# fitting method = leastsq
# function evals = 95
# data points = 2500
# variables = 4
chi-square = 1277.24638
reduced chi-square = 0.51171730
Akaike info crit = -1670.96059
Bayesian info crit = -1647.66441
[[Variables]]
amp: 14.0708269 +/- 0.04936878 (0.35%) (init = 13)
period: 5.32980958 +/- 0.00273143 (0.05%) (init = 2)
shift: 0.12156317 +/- 0.00482312 (3.97%) (init = 0)
decay: 0.01002489 +/- 4.0726e-05 (0.41%) (init = 0.02)
[[Correlations]] (unreported correlations are < 0.100)
C(period, shift) = +0.8002
C(amp, decay) = +0.5758

Calculate the confidence intervals for parameters and display the results:

ci, tr = conf_interval(mini, out, trace=True)
report_ci(ci)

          99.73%    95.45%    68.27%    _BEST_    68.27%    95.45%    99.73%
 amp   : -0.14795  -0.09863  -0.04934  14.07083  +0.04939  +0.09886  +0.14847
 period: -0.00818  -0.00546  -0.00273   5.32981  +0.00273  +0.00548  +0.00822
 shift : -0.01446  -0.00964  -0.00482   0.12156  +0.00482  +0.00965  +0.01449
 decay : -0.00012  -0.00008  -0.00004   0.01002  +0.00004  +0.00008  +0.00012

The trace returned by conf_interval can then be used to plot the dependence between the parameters:

names = out.params.keys()
i = 0
gs = plt.GridSpec(4, 4)
sx = {}
sy = {}
for fixed in names:
    j = 0
    for free in names:
        if j in sx and i in sy:
            ax = plt.subplot(gs[i, j], sharex=sx[j], sharey=sy[i])
        elif i in sy:
            ax = plt.subplot(gs[i, j], sharey=sy[i])
            sx[j] = ax
        elif j in sx:
            ax = plt.subplot(gs[i, j], sharex=sx[j])
            sy[i] = ax
        else:
            ax = plt.subplot(gs[i, j])
            sy[i] = ax
            sx[j] = ax
        if i < 3:
            plt.setp(ax.get_xticklabels(), visible=False)
        else:
            ax.set_xlabel(free)

        if j > 0:
            plt.setp(ax.get_yticklabels(), visible=False)
        else:
            ax.set_ylabel(fixed)

        res = tr[fixed]
        prob = res['prob']
        f = prob < 0.96

        x, y = res[free], res[fixed]
        ax.scatter(x[f], y[f], c=1-prob[f], s=25*(1-prob[f]+0.5))
        ax.autoscale(1, 1)
        j += 1
    i += 1

It is also possible to calculate the confidence regions for two fixed parameters using the function conf_interval2d:

names = list(out.params.keys())

plt.figure()
for i in range(4):
    for j in range(4):
        indx = 16 - j*4 - i
        ax = plt.subplot(4, 4, indx)
        ax.ticklabel_format(style='sci', scilimits=(-2, 2), axis='y')

        # set up labels and tick marks
        ax.tick_params(labelleft=False, labelbottom=False)
        if indx in (2, 5, 9, 13):
            plt.ylabel(names[j])
            ax.tick_params(labelleft=True)
        if indx == 1:
            ax.tick_params(labelleft=True)
        if indx in (13, 14, 15, 16):
            plt.xlabel(names[i])
            ax.tick_params(labelbottom=True)
            [label.set_rotation(45) for label in ax.get_xticklabels()]

        if i != j:
            x, y, m = conf_interval2d(mini, out, names[i], names[j], 20, 20)
            plt.contourf(x, y, m, linspace(0, 1, 10))

            x = tr[names[i]][names[i]]
            y = tr[names[i]][names[j]]
            pr = tr[names[i]]['prob']
            s = argsort(x)
            plt.scatter(x[s], y[s], c=pr[s], s=30, lw=1)
        else:
            x = tr[names[i]][names[i]]
            y = tr[names[i]]['prob']

            t, s = unique(x, True)
            f = interp1d(t, y[s], 'slinear')
            xn = linspace(x.min(), x.max(), 50)
            plt.plot(xn, f(xn), lw=1)
            plt.ylabel('prob')
            ax.tick_params(labelleft=True)

plt.tight_layout()
plt.show()
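
For a single pair of parameters the same machinery can be used directly. A minimal sketch (reusing mini and out from above; the 30x30 grid size is an arbitrary choice):

cx, cy, prob_grid = conf_interval2d(mini, out, 'amp', 'period', 30, 30)
plt.contourf(cx, cy, prob_grid, linspace(0, 1, 11))
plt.xlabel('amp')
plt.ylabel('period')
plt.colorbar(label='probability')
plt.show()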

Total running time of the script: ( 0 minutes 15.395 seconds)

13.17 Fit Two Dimensional Peaks

This example illustrates how to handle two-dimensional data with lmfit.

import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import griddata

import lmfit
from lmfit.lineshapes import gaussian2d, lorentzian


13.17.1 Two-dimensional Gaussian

We start by considering a simple two-dimensional Gaussian function, which depends on coordinates (x, y). The most general case of experimental data will be irregularly sampled and noisy. Let's simulate some:

npoints = 10000
np.random.seed(2021)
x = np.random.rand(npoints)*10 - 4
y = np.random.rand(npoints)*5 - 3
z = gaussian2d(x, y, amplitude=30, centerx=2, centery=-.5, sigmax=.6, sigmay=.8)
z += 2*(np.random.rand(*z.shape)-.5)
error = np.sqrt(z+1)
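
For reference, the gaussian2d lineshape used here is, up to lmfit's exact normalization convention (a hedged aside; see the lmfit.lineshapes documentation for the authoritative definition), the product of two normalised 1-D Gaussians:

$$f(x, y) = \frac{A}{2\pi\sigma_x\sigma_y}\exp\left[-\frac{(x-\mu_x)^2}{2\sigma_x^2} - \frac{(y-\mu_y)^2}{2\sigma_y^2}\right]$$

with amplitude $A$, center $(\mu_x, \mu_y)$ and widths $(\sigma_x, \sigma_y)$.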

To plot this, we can interpolate the data onto a grid.

X, Y = np.meshgrid(np.linspace(x.min(), x.max(), 100),
                   np.linspace(y.min(), y.max(), 100))
Z = griddata((x, y), z, (X, Y), method='linear', fill_value=0)

fig, ax = plt.subplots()
art = ax.pcolor(X, Y, Z, shading='auto')
plt.colorbar(art, ax=ax, label='z')
ax.set_xlabel('x')
ax.set_ylabel('y')
plt.show()


In this case, we can use a built-in model to fit:

model = lmfit.models.Gaussian2dModel()
params = model.guess(z, x, y)
result = model.fit(z, x=x, y=y, params=params, weights=1/error)
lmfit.report_fit(result)

[[Fit Statistics]]
# fitting method = leastsq
# function evals = 87
# data points = 10000
# variables = 5
chi-square = 20618.1774
reduced chi-square = 2.06284916
Akaike info crit = 7245.87992
Bayesian info crit = 7281.93162
R-squared = 0.28378389
[[Variables]]
amplitude: 27.4195833 +/- 0.65062974 (2.37%) (init = 16.51399)
centerx: 1.99705425 +/- 0.01405864 (0.70%) (init = 1.940764)
centery: -0.49516158 +/- 0.01907800 (3.85%) (init = -0.5178641)
sigmax: 0.54740777 +/- 0.01224965 (2.24%) (init = 1.666582)
sigmay: 0.73300589 +/- 0.01617042 (2.21%) (init = 0.8332836)
fwhmx: 1.28904676 +/- 0.02884573 (2.24%) == '2.3548200*sigmax'
fwhmy: 1.72609693 +/- 0.03807842 (2.21%) == '2.3548200*sigmay'
height: 10.8758308 +/- 0.35409534 (3.26%) == '0.1591549*amplitude/(max(1e-15, sigmax)*max(1e-15, sigmay))'
[[Correlations]] (unreported correlations are < 0.100)
C(amplitude, sigmay) = +0.2464
C(amplitude, sigmax) = +0.2314

To check the fit, we can evaluate the function on the same grid we used before and make plots of the data, the fit and
the difference between the two.

fig, axs = plt.subplots(2, 2, figsize=(10, 10))

vmax = np.nanpercentile(Z, 99.9)

ax = axs[0, 0]
art = ax.pcolor(X, Y, Z, vmin=0, vmax=vmax, shading='auto')
plt.colorbar(art, ax=ax, label='z')
ax.set_title('Data')

ax = axs[0, 1]
fit = model.func(X, Y, **result.best_values)
art = ax.pcolor(X, Y, fit, vmin=0, vmax=vmax, shading='auto')
plt.colorbar(art, ax=ax, label='z')
ax.set_title('Fit')

ax = axs[1, 0]
fit = model.func(X, Y, **result.best_values)
art = ax.pcolor(X, Y, Z-fit, vmin=0, vmax=10, shading='auto')
plt.colorbar(art, ax=ax, label='z')
ax.set_title('Data - Fit')

for ax in axs.ravel():
    ax.set_xlabel('x')
    ax.set_ylabel('y')
axs[1, 1].remove()
plt.show()


13.17.2 Two-dimensional off-axis Lorentzian

We now go on to show a harder example, in which the peak has a Lorentzian profile and an off-axis anisotropic shape.
This can be handled by applying a suitable coordinate transform and then using the lorentzian function that lmfit
provides in the lineshapes module.

def lorentzian2d(x, y, amplitude=1., centerx=0., centery=0., sigmax=1., sigmay=1.,
                 rotation=0):
    """Return a two dimensional lorentzian.

    The maximum of the peak occurs at ``centerx`` and ``centery``
    with widths ``sigmax`` and ``sigmay`` in the x and y directions
    respectively. The peak can be rotated by choosing the value of ``rotation``
    in radians.
    """
    xp = (x - centerx)*np.cos(rotation) - (y - centery)*np.sin(rotation)
    yp = (x - centerx)*np.sin(rotation) + (y - centery)*np.cos(rotation)
    R = (xp/sigmax)**2 + (yp/sigmay)**2

    return 2*amplitude*lorentzian(R)/(np.pi*sigmax*sigmay)

Data can be simulated and plotted in the same way as we did before.

npoints = 10000
x = np.random.rand(npoints)*10 - 4
y = np.random.rand(npoints)*5 - 3
z = lorentzian2d(x, y, amplitude=30, centerx=2, centery=-.5, sigmax=.6,
sigmay=1.2, rotation=30*np.pi/180)
z += 2*(np.random.rand(*z.shape)-.5)
error = np.sqrt(z+1)

X, Y = np.meshgrid(np.linspace(x.min(), x.max(), 100),
                   np.linspace(y.min(), y.max(), 100))
Z = griddata((x, y), z, (X, Y), method='linear', fill_value=0)

fig, ax = plt.subplots()
ax.set_xlabel('x')
ax.set_ylabel('y')
art = ax.pcolor(X, Y, Z, shading='auto')
plt.colorbar(art, ax=ax, label='z')
plt.show()


To fit, create a model from the function. Don’t forget to tell lmfit that both x and y are independent variables. Keep
in mind that lmfit will take the function keywords as default initial guesses in this case and that it will not know that
certain parameters only make physical sense over restricted ranges. For example, peak widths should be positive and
the rotation can be restricted over a quarter circle.

model = lmfit.Model(lorentzian2d, independent_vars=['x', 'y'])
params = model.make_params(amplitude=10, centerx=x[np.argmax(z)],
                           centery=y[np.argmax(z)])
params['rotation'].set(value=.1, min=0, max=np.pi/2)
params['sigmax'].set(value=1, min=0)
params['sigmay'].set(value=2, min=0)

result = model.fit(z, x=x, y=y, params=params, weights=1/error)
lmfit.report_fit(result)

[[Fit Statistics]]
# fitting method = leastsq
# function evals = 73
# data points = 10000
# variables = 6
chi-square = 11287.3823
reduced chi-square = 1.12941588
Akaike info crit = 1223.00402
Bayesian info crit = 1266.26606
R-squared = 0.57742880
[[Variables]]
amplitude: 25.6417887 +/- 0.50569636 (1.97%) (init = 10)
centerx: 2.00326033 +/- 0.01163397 (0.58%) (init = 2.051478)
centery: -0.49692376 +/- 0.01680902 (3.38%) (init = -0.478231)
sigmax: 0.51074241 +/- 0.01027264 (2.01%) (init = 1)
sigmay: 1.11741198 +/- 0.02190772 (1.96%) (init = 2)
rotation: 0.48130689 +/- 0.01542118 (3.20%) (init = 0.1)
[[Correlations]] (unreported correlations are < 0.100)
C(centerx, centery) = +0.5767
C(amplitude, sigmax) = +0.3342
C(amplitude, sigmay) = +0.3046

The process of making plots to check it worked is the same as before.

fig, axs = plt.subplots(2, 2, figsize=(10, 10))

vmax = np.nanpercentile(Z, 99.9)

ax = axs[0, 0]
art = ax.pcolor(X, Y, Z, vmin=0, vmax=vmax, shading='auto')
plt.colorbar(art, ax=ax, label='z')
ax.set_title('Data')

ax = axs[0, 1]
fit = model.func(X, Y, **result.best_values)
art = ax.pcolor(X, Y, fit, vmin=0, vmax=vmax, shading='auto')
plt.colorbar(art, ax=ax, label='z')
ax.set_title('Fit')

ax = axs[1, 0]
fit = model.func(X, Y, **result.best_values)
art = ax.pcolor(X, Y, Z-fit, vmin=0, vmax=10, shading='auto')
plt.colorbar(art, ax=ax, label='z')
ax.set_title('Data - Fit')

for ax in axs.ravel():
    ax.set_xlabel('x')
    ax.set_ylabel('y')
axs[1, 1].remove()
plt.show()


Total running time of the script: ( 0 minutes 3.080 seconds)

13.18 Global minimization using the brute method (a.k.a. grid search)

This notebook shows a simple example of using lmfit.minimize.brute that uses the method with the same name
from scipy.optimize.
The method computes the function’s value at each point of a multidimensional grid of points, to find the global minimum
of the function. It behaves identically to scipy.optimize.brute in case finite bounds are given on all varying
parameters, but will also deal with non-bounded parameters (see below).


import copy

import matplotlib.pyplot as plt
import numpy as np
from matplotlib.colors import LogNorm

from lmfit import Minimizer, create_params, fit_report

Let's start with the example given in the documentation of SciPy:
“We illustrate the use of brute to seek the global minimum of a function of two variables that is given as the sum of a
positive-definite quadratic and two deep “Gaussian-shaped” craters. Specifically, define the objective function f as the
sum of three other functions, f = f1 + f2 + f3. We suppose each of these has a signature (z, *params), where
z = (x, y), and params and the functions are as defined below.”
First, we create a set of Parameters where all variables except x and y are given fixed values. Just as in the documentation
we will do a grid search between -4 and 4 and use a stepsize of 0.25. The bounds can be set as usual with the min
and max attributes, and the stepsize is set using brute_step.

params = create_params(a=dict(value=2, vary=False),
                       b=dict(value=3, vary=False),
                       c=dict(value=7, vary=False),
                       d=dict(value=8, vary=False),
                       e=dict(value=9, vary=False),
                       f=dict(value=10, vary=False),
                       g=dict(value=44, vary=False),
                       h=dict(value=-1, vary=False),
                       i=dict(value=2, vary=False),
                       j=dict(value=26, vary=False),
                       k=dict(value=1, vary=False),
                       l=dict(value=-2, vary=False),
                       scale=dict(value=0.5, vary=False),
                       x=dict(value=0.0, vary=True, min=-4, max=4, brute_step=0.25),
                       y=dict(value=0.0, vary=True, min=-4, max=4, brute_step=0.25))

Second, create the three functions and the objective function:

def f1(p):
    par = p.valuesdict()
    return (par['a'] * par['x']**2 + par['b'] * par['x'] * par['y'] +
            par['c'] * par['y']**2 + par['d']*par['x'] + par['e']*par['y'] +
            par['f'])


def f2(p):
    par = p.valuesdict()
    return (-1.0*par['g']*np.exp(-((par['x']-par['h'])**2 +
                                   (par['y']-par['i'])**2) / par['scale']))


def f3(p):
    par = p.valuesdict()
    return (-1.0*par['j']*np.exp(-((par['x']-par['k'])**2 +
                                   (par['y']-par['l'])**2) / par['scale']))


def f(params):
    return f1(params) + f2(params) + f3(params)

Performing the actual grid search is done with:

fitter = Minimizer(f, params)
result = fitter.minimize(method='brute')

This will step x and y from -4 up to 4 (not inclusive) in increments of 0.25.

grid_x, grid_y = (np.unique(par.ravel()) for par in result.brute_grid)
print(grid_x)

[-4. -3.75 -3.5 -3.25 -3. -2.75 -2.5 -2.25 -2. -1.75 -1.5 -1.25
-1. -0.75 -0.5 -0.25 0. 0.25 0.5 0.75 1. 1.25 1.5 1.75
2. 2.25 2.5 2.75 3. 3.25 3.5 3.75]

The objective function is evaluated on this grid, and the raw output from scipy.optimize.brute is stored in the
MinimizerResult as brute_<parname> attributes. These attributes are:
result.brute_x0 – A 1-D array containing the coordinates of a point at which the objective function had its minimum
value.

print(result.brute_x0)

[-1. 1.75]

result.brute_fval – Function value at the point x0.

print(result.brute_fval)

-2.8923637137222027

result.brute_grid – Representation of the evaluation grid. It has the same length as x0.

print(result.brute_grid)

[[[-4.   -4.   -4.   ... -4.   -4.   -4.  ]
  [-3.75 -3.75 -3.75 ... -3.75 -3.75 -3.75]
  [-3.5  -3.5  -3.5  ... -3.5  -3.5  -3.5 ]
  ...
  [ 3.25  3.25  3.25 ...  3.25  3.25  3.25]
  [ 3.5   3.5   3.5  ...  3.5   3.5   3.5 ]
  [ 3.75  3.75  3.75 ...  3.75  3.75  3.75]]

 [[-4.   -3.75 -3.5  ...  3.25  3.5   3.75]
  [-4.   -3.75 -3.5  ...  3.25  3.5   3.75]
  [-4.   -3.75 -3.5  ...  3.25  3.5   3.75]
  ...
  [-4.   -3.75 -3.5  ...  3.25  3.5   3.75]
  [-4.   -3.75 -3.5  ...  3.25  3.5   3.75]
  [-4.   -3.75 -3.5  ...  3.25  3.5   3.75]]]

result.brute_Jout – Function values at each point of the evaluation grid, i.e., Jout = func(*grid).

print(result.brute_Jout)

[[134.         119.6875     106.25       ...  74.18749997  85.24999999  97.1875    ]
 [129.125      115.         101.75       ...  74.74999948  85.99999987  98.12499997]
 [124.5        110.5625      97.5        ...  75.5624928   86.99999818  99.31249964]
 ...
 [ 94.12499965  85.24999772  77.24998843 ... 192.         208.5        225.875     ]
 [ 96.49999997  87.81249979  79.99999892 ... 199.8125     216.5        234.0625    ]
 [ 99.125       90.62499998  82.99999992 ... 207.875      224.75       242.5       ]]

Reassuringly, the obtained results are identical to using the method in SciPy directly!
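
To check this yourself, a minimal sketch of the direct SciPy call is shown below (the helper f_scipy and the plain-float args tuple are ours, mirroring the Parameters defined above):

from scipy import optimize

args = (2, 3, 7, 8, 9, 10, 44, -1, 2, 26, 1, -2, 0.5)

def f_scipy(z, *args):
    x, y = z
    a, b, c, d, e, ff, g, h, i, j, k, l, scale = args
    f1 = a*x**2 + b*x*y + c*y**2 + d*x + e*y + ff
    f2 = -g*np.exp(-((x - h)**2 + (y - i)**2) / scale)
    f3 = -j*np.exp(-((x - k)**2 + (y - l)**2) / scale)
    return f1 + f2 + f3

# same grid as lmfit used: -4 to 4 in steps of 0.25, and no polishing step
x0, fval, grid, Jout = optimize.brute(f_scipy, (slice(-4, 4, 0.25), slice(-4, 4, 0.25)),
                                      args=args, full_output=True, finish=None)
print(x0, fval)  # should reproduce result.brute_x0 and result.brute_fval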
Example 2: fit of a decaying sine wave
In this example, we will explain some of the options of the algorithm.
We start off by generating some synthetic data with noise for a decaying sine wave, define an objective function, and
create/initialize a Parameter set.

x = np.linspace(0, 15, 301)
np.random.seed(7)
noise = np.random.normal(size=x.size, scale=0.2)
data = (5. * np.sin(2*x - 0.1) * np.exp(-x*x*0.025) + noise)
plt.plot(x, data, 'o')
plt.show()

def fcn2min(params, x, data):
    """Model decaying sine wave, subtract data."""
    amp = params['amp']
    shift = params['shift']
    omega = params['omega']
    decay = params['decay']
    model = amp * np.sin(x*omega + shift) * np.exp(-x*x*decay)
    return model - data


In contrast to the implementation in SciPy (as shown in the first example), varying parameters do not need to have
finite bounds in lmfit. However, if a parameter does not have finite bounds, then it does need a brute_step attribute
specified:

params = create_params(amp=dict(value=7, min=2.5, brute_step=0.25),
                       decay=dict(value=0.05, brute_step=0.005),
                       shift=dict(value=0.0, min=-np.pi/2., max=np.pi/2),
                       omega=dict(value=3, max=5, brute_step=0.25))

Our initial parameter set is now defined as shown below, and this will determine how the grid is set up.

params.pretty_print()

Name     Value      Min      Max   Stderr     Vary     Expr Brute_Step
amp          7      2.5      inf     None     True     None       0.25
decay     0.05     -inf      inf     None     True     None      0.005
omega        3     -inf        5     None     True     None       0.25
shift        0   -1.571    1.571     None     True     None       None

First, we initialize a Minimizer and perform the grid search:

fitter = Minimizer(fcn2min, params, fcn_args=(x, data))
result_brute = fitter.minimize(method='brute', Ns=25, keep=25)
print(fit_report(result_brute))

[[Fit Statistics]]
# fitting method = brute
# function evals = 375000
# data points = 301
# variables = 4
chi-square = 11.9353671
reduced chi-square = 0.04018642
Akaike info crit = -963.508878
Bayesian info crit = -948.680437
## Warning: uncertainties could not be estimated:
[[Variables]]
amp: 5.00000000 (init = 7)
decay: 0.02500000 (init = 0.05)
shift: -0.13089969 (init = 0)
omega: 2.00000000 (init = 3)

We used two new parameters here: Ns and keep. The parameter Ns determines the 'number of grid points along the axes', similarly to its usage in SciPy. Together with brute_step, min and max for a Parameter it will dictate how the grid is set up:

(1) finite bounds are specified ("SciPy implementation"): uses brute_step if present (as in the example above) or uses Ns to generate the grid. The latter scenario, which interpolates Ns points from min to max (inclusive), is shown here for the parameter shift:

par_name = 'shift'
indx_shift = result_brute.var_names.index(par_name)
grid_shift = np.unique(result_brute.brute_grid[indx_shift].ravel())
print(f"parameter = {par_name}\nnumber of steps = {len(grid_shift)}\ngrid = {grid_shift}")

parameter = shift
number of steps = 25
grid = [-1.57079633 -1.43989663 -1.30899694 -1.17809725 -1.04719755 -0.91629786
-0.78539816 -0.65449847 -0.52359878 -0.39269908 -0.26179939 -0.13089969
0. 0.13089969 0.26179939 0.39269908 0.52359878 0.65449847
0.78539816 0.91629786 1.04719755 1.17809725 1.30899694 1.43989663
1.57079633]

If finite bounds are not set for a certain parameter then the user must specify brute_step - three more scenarios are
considered here:
(2) lower bound (min) and brute_step are specified: range = (min, min + Ns * brute_step, brute_step)

par_name = 'amp'
indx_shift = result_brute.var_names.index(par_name)
grid_shift = np.unique(result_brute.brute_grid[indx_shift].ravel())
print(f"parameter = {par_name}\nnumber of steps = {len(grid_shift)}\ngrid = {grid_shift}")


parameter = amp
number of steps = 25
grid = [2.5 2.75 3. 3.25 3.5 3.75 4. 4.25 4.5 4.75 5. 5.25 5.5 5.75
6. 6.25 6.5 6.75 7. 7.25 7.5 7.75 8. 8.25 8.5 ]

(3) upper bound (max) and brute_step are specified: range = (max - Ns * brute_step, max, brute_step)

par_name = 'omega'
indx_shift = result_brute.var_names.index(par_name)
grid_shift = np.unique(result_brute.brute_grid[indx_shift].ravel())
print(f"parameter = {par_name}\nnumber of steps = {len(grid_shift)}\ngrid = {grid_shift}")

parameter = omega
number of steps = 25
grid = [-1.25 -1. -0.75 -0.5 -0.25 0. 0.25 0.5 0.75 1. 1.25 1.5
1.75 2. 2.25 2.5 2.75 3. 3.25 3.5 3.75 4. 4.25 4.5
4.75]

(4) numerical value (value) and brute_step are specified: range = (value - (Ns//2) * brute_step, value
+ (Ns//2) * brute_step, brute_step)

par_name = 'decay'
indx_shift = result_brute.var_names.index(par_name)
grid_shift = np.unique(result_brute.brute_grid[indx_shift].ravel())
print(f"parameter = {par_name}\nnumber of steps = {len(grid_shift)}\ngrid = {grid_shift}")

parameter = decay
number of steps = 24
grid = [-1.00000000e-02 -5.00000000e-03 5.20417043e-18 5.00000000e-03
1.00000000e-02 1.50000000e-02 2.00000000e-02 2.50000000e-02
3.00000000e-02 3.50000000e-02 4.00000000e-02 4.50000000e-02
5.00000000e-02 5.50000000e-02 6.00000000e-02 6.50000000e-02
7.00000000e-02 7.50000000e-02 8.00000000e-02 8.50000000e-02
9.00000000e-02 9.50000000e-02 1.00000000e-01 1.05000000e-01]

The MinimizerResult contains all the usual best-fit parameters and fitting statistics. For example, the optimal solution
from the grid search is given below together with a plot:

print(fit_report(result_brute))

[[Fit Statistics]]
# fitting method = brute
# function evals = 375000
# data points = 301
# variables = 4
chi-square = 11.9353671
reduced chi-square = 0.04018642
Akaike info crit = -963.508878
Bayesian info crit = -948.680437
## Warning: uncertainties could not be estimated:
[[Variables]]
amp: 5.00000000 (init = 7)
decay: 0.02500000 (init = 0.05)
shift: -0.13089969 (init = 0)
omega: 2.00000000 (init = 3)

plt.plot(x, data, 'o')
plt.plot(x, data + fcn2min(result_brute.params, x, data), '--')
plt.show()

We can see that this fit is already very good, which is what we should expect since our brute force grid is sampled rather finely and encompasses the "correct" values.

In a more realistic, complicated example the brute method will be used to get reasonable values for the parameters and to perform another minimization (e.g., using leastsq) with those as starting values. That is where the keep parameter comes into play: it determines the "number of best candidates from the brute force method that are stored in the candidates attribute". In the example above we store the best-ranking 25 solutions (the default value is 50, and storing all the grid points can be accomplished by choosing 'all'). The candidates attribute contains the parameters and chisqr from the brute force method as a namedtuple, ('Candidate', ['params', 'score']), sorted on the (lowest) chisqr value. To access the values for a particular candidate one can use result.candidate[#].params or result.candidate[#].score, where a lower # represents a better candidate. The show_candidates(#) method uses the pretty_print() method to show a specific candidate-# or all candidates when no number is specified.

The optimal fit is, as usual, stored in the MinimizerResult.params attribute and is, therefore, identical to result_brute.show_candidates(1).

result_brute.show_candidates(1)

Candidate #1, chisqr = 11.935
Name     Value      Min      Max   Stderr     Vary     Expr Brute_Step
amp          5      2.5      inf     None     True     None       0.25
decay    0.025     -inf      inf     None     True     None      0.005
omega        2     -inf        5     None     True     None       0.25
shift  -0.1309   -1.571    1.571     None     True     None       None

In this case, the next-best scoring candidate already has a noticeably larger chisqr:

result_brute.show_candidates(2)

Candidate #2, chisqr = 13.994
Name     Value      Min      Max   Stderr     Vary     Expr Brute_Step
amp       4.75      2.5      inf     None     True     None       0.25
decay    0.025     -inf      inf     None     True     None      0.005
omega        2     -inf        5     None     True     None       0.25
shift  -0.1309   -1.571    1.571     None     True     None       None

and is, therefore, probably not so likely... However, as said above, in most cases you'll want to do another minimization using the solutions from the brute method as starting values. That can be easily accomplished as shown in the code below, where we now perform a leastsq minimization starting from the top-25 solutions and accept the solution if the chisqr is lower than the previously 'optimal' solution:

best_result = copy.deepcopy(result_brute)

for candidate in result_brute.candidates:
    trial = fitter.minimize(method='leastsq', params=candidate.params)
    if trial.chisqr < best_result.chisqr:
        best_result = trial

From the leastsq minimization we obtain the following parameters for the best result:

print(fit_report(best_result))

[[Fit Statistics]]
# fitting method = leastsq
# function evals = 21
# data points = 301
# variables = 4
chi-square = 10.8653514
reduced chi-square = 0.03658367
Akaike info crit = -991.780924
Bayesian info crit = -976.952483
[[Variables]]
amp: 5.00323085 +/- 0.03805940 (0.76%) (init = 5)
decay: 0.02563850 +/- 4.4572e-04 (1.74%) (init = 0.03)
shift: -0.09162987 +/- 0.00978382 (10.68%) (init = 0)
omega: 1.99611629 +/- 0.00316225 (0.16%) (init = 2)
[[Correlations]] (unreported correlations are < 0.100)
C(shift, omega) = -0.7855
C(amp, decay) = +0.5838
C(amp, shift) = -0.1208

As expected the parameters have not changed significantly as they were already very close to the “real” values, which
can also be appreciated from the plots below.

plt.plot(x, data, 'o')
plt.plot(x, data + fcn2min(result_brute.params, x, data), '-',
         label='brute')
plt.plot(x, data + fcn2min(best_result.params, x, data), '--',
         label='brute followed by leastsq')
plt.legend()

Finally, the results from the brute force grid-search can be visualized using the rather lengthy Python function below
(which might get incorporated in lmfit at some point).

def plot_results_brute(result, best_vals=True, varlabels=None,
                       output=None):
    """Visualize the result of the brute force grid search.

    The output file will display the chi-square value per parameter and contour
    plots for all combination of two parameters.

    Inspired by the `corner` package (https://github.com/dfm/corner.py).

    Parameters
    ----------
    result : :class:`~lmfit.minimizer.MinimizerResult`
        Contains the results from the :meth:`brute` method.
    best_vals : bool, optional
        Whether to show the best values from the grid search (default is True).
    varlabels : list, optional
        If None (default), use `result.var_names` as axis labels, otherwise
        use the names specified in `varlabels`.
    output : str, optional
        Name of the output PDF file (default is 'None')
    """
    npars = len(result.var_names)
    _fig, axes = plt.subplots(npars, npars)

    if not varlabels:
        varlabels = result.var_names
    if best_vals and isinstance(best_vals, bool):
        best_vals = result.params

    for i, par1 in enumerate(result.var_names):
        for j, par2 in enumerate(result.var_names):

            # parameter vs chi2 in case of only one parameter
            if npars == 1:
                axes.plot(result.brute_grid, result.brute_Jout, 'o', ms=3)
                axes.set_ylabel(r'$\chi^{2}$')
                axes.set_xlabel(varlabels[i])
                if best_vals:
                    axes.axvline(best_vals[par1].value, ls='dashed', color='r')

            # parameter vs chi2 profile on top
            elif i == j and j < npars-1:
                if i == 0:
                    axes[0, 0].axis('off')
                ax = axes[i, j+1]
                red_axis = tuple(a for a in range(npars) if a != i)
                ax.plot(np.unique(result.brute_grid[i]),
                        np.minimum.reduce(result.brute_Jout, axis=red_axis),
                        'o', ms=3)
                ax.set_ylabel(r'$\chi^{2}$')
                ax.yaxis.set_label_position("right")
                ax.yaxis.set_ticks_position('right')
                ax.set_xticks([])
                if best_vals:
                    ax.axvline(best_vals[par1].value, ls='dashed', color='r')

            # parameter vs chi2 profile on the left
            elif j == 0 and i > 0:
                ax = axes[i, j]
                red_axis = tuple(a for a in range(npars) if a != i)
                ax.plot(np.minimum.reduce(result.brute_Jout, axis=red_axis),
                        np.unique(result.brute_grid[i]), 'o', ms=3)
                ax.invert_xaxis()
                ax.set_ylabel(varlabels[i])
                if i != npars-1:
                    ax.set_xticks([])
                else:
                    ax.set_xlabel(r'$\chi^{2}$')
                if best_vals:
                    ax.axhline(best_vals[par1].value, ls='dashed', color='r')

            # contour plots for all combinations of two parameters
            elif j > i:
                ax = axes[j, i+1]
                red_axis = tuple(a for a in range(npars) if a not in (i, j))
                X, Y = np.meshgrid(np.unique(result.brute_grid[i]),
                                   np.unique(result.brute_grid[j]))
                lvls1 = np.linspace(result.brute_Jout.min(),
                                    np.median(result.brute_Jout)/2.0, 7, dtype='int')
                lvls2 = np.linspace(np.median(result.brute_Jout)/2.0,
                                    np.median(result.brute_Jout), 3, dtype='int')
                lvls = np.unique(np.concatenate((lvls1, lvls2)))
                ax.contourf(X.T, Y.T,
                            np.minimum.reduce(result.brute_Jout, axis=red_axis),
                            lvls, norm=LogNorm())
                ax.set_yticks([])
                if best_vals:
                    ax.axvline(best_vals[par1].value, ls='dashed', color='r')
                    ax.axhline(best_vals[par2].value, ls='dashed', color='r')
                    ax.plot(best_vals[par1].value, best_vals[par2].value, 'rs', ms=3)
                if j != npars-1:
                    ax.set_xticks([])
                else:
                    ax.set_xlabel(varlabels[i])
                if j - i >= 2:
                    axes[i, j].axis('off')

    if output is not None:
        plt.savefig(output)

and finally, to generate the figure:

plot_results_brute(result_brute, best_vals=True, varlabels=None)
plt.show()

Total running time of the script: ( 0 minutes 22.850 seconds)



CHAPTER FOURTEEN

EXAMPLES FROM THE DOCUMENTATION

14.1 Examples from the documentation

Below are all the examples that are part of the lmfit documentation.

14.1.1 doc_model_savemodel.py

# <examples/doc_model_savemodel.py>
import numpy as np

from lmfit.model import Model, save_model

def mysine(x, amp, freq, shift):
    return amp * np.sin(x*freq + shift)

sinemodel = Model(mysine)
pars = sinemodel.make_params(amp=1, freq=0.25, shift=0)

save_model(sinemodel, 'sinemodel.sav')
# <end examples/doc_model_savemodel.py>

Total running time of the script: ( 0 minutes 0.006 seconds)

14.1.2 doc_model_savemodelresult.py

[[Model]]
Model(gaussian)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 33
# data points = 101
# variables = 3
chi-square = 3.40883599
reduced chi-square = 0.03478404
Akaike info crit = -336.263713
Bayesian info crit = -328.418352
R-squared = 0.98533348
[[Variables]]
amplitude: 8.88021907 +/- 0.11359530 (1.28%) (init = 5)
center: 5.65866105 +/- 0.01030493 (0.18%) (init = 5)
sigma: 0.69765480 +/- 0.01030508 (1.48%) (init = 1)
fwhm: 1.64285148 +/- 0.02426660 (1.48%) == '2.3548200*sigma'
height: 5.07800563 +/- 0.06495769 (1.28%) == '0.3989423*amplitude/max(1e-15, sigma)'
[[Correlations]] (unreported correlations are < 0.100)
    C(amplitude, sigma) = +0.5774

# <examples/doc_model_savemodelresult.py>
import numpy as np

from lmfit.model import save_modelresult
from lmfit.models import GaussianModel

data = np.loadtxt('model1d_gauss.dat')
x = data[:, 0]
y = data[:, 1]

gmodel = GaussianModel()
result = gmodel.fit(y, x=x, amplitude=5, center=5, sigma=1)

save_modelresult(result, 'gauss_modelresult.sav')

print(result.fit_report())
# <end examples/doc_model_savemodelresult.py>

Total running time of the script: ( 0 minutes 0.037 seconds)

14.1.3 doc_confidence_basic.py

[[Variables]]
a: 0.09943896 +/- 1.9322e-04 (0.19%) (init = 0.1)
b: 1.98476942 +/- 0.01222678 (0.62%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(a, b) = +0.6008
99.73% 95.45% 68.27% _BEST_ 68.27% 95.45% 99.73%
a: -0.00059 -0.00039 -0.00019 0.09944 +0.00019 +0.00039 +0.00060
b: -0.03764 -0.02477 -0.01229 1.98477 +0.01229 +0.02477 +0.03764


# <examples/doc_confidence_basic.py>
import numpy as np

import lmfit

x = np.linspace(0.3, 10, 100)
np.random.seed(0)
y = 1/(0.1*x) + 2 + 0.1*np.random.randn(x.size)

pars = lmfit.create_params(a=0.1, b=1)


def residual(p):
    return 1/(p['a']*x) + p['b'] - y


mini = lmfit.Minimizer(residual, pars)
result = mini.minimize()

print(lmfit.fit_report(result.params))

ci = lmfit.conf_interval(mini, result)
lmfit.printfuncs.report_ci(ci)
# <end examples/doc_confidence_basic.py>

Total running time of the script: ( 0 minutes 0.175 seconds)


14.1.4 doc_model_loadmodelresult.py

[[Model]]
Model(gaussian)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 33
# data points = 101
# variables = 3
chi-square = 3.40883599
reduced chi-square = 0.03478404
Akaike info crit = -336.263713
Bayesian info crit = -328.418352
R-squared = 0.98533348
[[Variables]]
amplitude: 8.88021907 +/- 0.11359530 (1.28%) (init = 5)
center: 5.65866105 +/- 0.01030493 (0.18%) (init = 5)
sigma: 0.69765480 +/- 0.01030508 (1.48%) (init = 1)
fwhm: 1.64285148 +/- 0.02426660 (1.48%) == '2.3548200*sigma'
height: 5.07800563 +/- 0.06495769 (1.28%) == '0.3989423*amplitude/max(1e-15, sigma)'
[[Correlations]] (unreported correlations are < 0.100)
    C(amplitude, sigma) = +0.5774


# <examples/doc_model_loadmodelresult.py>
import os
import sys

import matplotlib.pyplot as plt
import numpy as np

from lmfit.model import load_modelresult

if not os.path.exists('gauss_modelresult.sav'):
    os.system(f"{sys.executable} doc_model_savemodelresult.py")

data = np.loadtxt('model1d_gauss.dat')
x = data[:, 0]
y = data[:, 1]

result = load_modelresult('gauss_modelresult.sav')
print(result.fit_report())

plt.plot(x, y, 'o')
plt.plot(x, result.best_fit, '-')
plt.show()
# <end examples/doc_model_loadmodelresult.py>

Total running time of the script: ( 0 minutes 0.216 seconds)


14.1.5 doc_model_loadmodelresult2.py

[[Model]]
    ((Model(gaussian, prefix='g1_') + Model(gaussian, prefix='g2_')) + Model(exponential, prefix='exp_'))

[[Fit Statistics]]
# fitting method = leastsq
# function evals = 46
# data points = 250
# variables = 8
chi-square = 1247.52821
reduced chi-square = 5.15507524
Akaike info crit = 417.864631
Bayesian info crit = 446.036318
R-squared = 0.99648654
[[Variables]]
exp_amplitude: 99.0183278 +/- 0.53748593 (0.54%) (init = 162.2102)
exp_decay: 90.9508853 +/- 1.10310778 (1.21%) (init = 93.24905)
g1_amplitude: 4257.77360 +/- 42.3836478 (1.00%) (init = 2000)
g1_center: 107.030956 +/- 0.15006851 (0.14%) (init = 105)
g1_sigma: 16.6725772 +/- 0.16048381 (0.96%) (init = 15)
g1_fwhm: 39.2609181 +/- 0.37791049 (0.96%) == '2.3548200*g1_sigma'
g1_height: 101.880230 +/- 0.59217173 (0.58%) == '0.3989423*g1_amplitude/max(1e-15, g1_sigma)'
g2_amplitude: 2493.41735 +/- 36.1697789 (1.45%) (init = 2000)
g2_center: 153.270102 +/- 0.19466802 (0.13%) (init = 155)
g2_sigma: 13.8069464 +/- 0.18679695 (1.35%) (init = 15)
g2_fwhm: 32.5128735 +/- 0.43987320 (1.35%) == '2.3548200*g2_sigma'
g2_height: 72.0455941 +/- 0.61722243 (0.86%) == '0.3989423*g2_amplitude/max(1e-15, g2_sigma)'
[[Correlations]] (unreported correlations are < 0.100)
C(g1_amplitude, g1_sigma) = +0.8243
C(g2_amplitude, g2_sigma) = +0.8154
C(exp_amplitude, exp_decay) = -0.6946
C(g1_sigma, g2_center) = +0.6842
C(g1_center, g2_amplitude) = -0.6689
C(g1_center, g2_sigma) = -0.6520
C(g1_amplitude, g2_center) = +0.6477
C(g1_center, g2_center) = +0.6205
C(g1_center, g1_sigma) = +0.5075
C(exp_decay, g1_amplitude) = -0.5074
C(g1_sigma, g2_amplitude) = -0.4915
C(g2_center, g2_sigma) = -0.4889
C(g1_sigma, g2_sigma) = -0.4826
C(g2_amplitude, g2_center) = -0.4763
C(exp_decay, g2_amplitude) = -0.4270
C(g1_amplitude, g1_center) = +0.4183
C(g1_amplitude, g2_sigma) = -0.4010
C(g1_amplitude, g2_amplitude) = -0.3071
C(exp_amplitude, g2_amplitude) = +0.2821
C(exp_decay, g1_sigma) = -0.2520
C(exp_decay, g2_sigma) = -0.2329
C(exp_amplitude, g2_sigma) = +0.1714
C(exp_decay, g2_center) = -0.1514
C(exp_amplitude, g1_amplitude) = +0.1478
C(exp_decay, g1_center) = +0.1055

# <examples/doc_model_loadmodelresult2.py>
import os
import sys

import matplotlib.pyplot as plt
import numpy as np

from lmfit.model import load_modelresult

if not os.path.exists('nistgauss_modelresult.sav'):
    os.system(f"{sys.executable} doc_model_savemodelresult2.py")

dat = np.loadtxt('NIST_Gauss2.dat')
x = dat[:, 1]
y = dat[:, 0]

result = load_modelresult('nistgauss_modelresult.sav')
print(result.fit_report())

plt.plot(x, y, 'o')
plt.plot(x, result.best_fit, '-')
plt.show()
# <end examples/doc_model_loadmodelresult2.py>

Total running time of the script: ( 0 minutes 0.228 seconds)

14.1.6 doc_model_gaussian.py

[[Model]]
Model(gaussian)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 33
# data points = 101
# variables = 3
chi-square = 3.40883599
reduced chi-square = 0.03478404
Akaike info crit = -336.263713
Bayesian info crit = -328.418352
R-squared = 0.98533348
[[Variables]]
amp: 8.88021893 +/- 0.11359522 (1.28%) (init = 5)
cen: 5.65866102 +/- 0.01030495 (0.18%) (init = 5)
wid: 0.69765478 +/- 0.01030505 (1.48%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(amp, wid) = +0.5774

# <examples/doc_model_gaussian.py>
import matplotlib.pyplot as plt
from numpy import exp, loadtxt, pi, sqrt

from lmfit import Model

data = loadtxt('model1d_gauss.dat')
x = data[:, 0]
y = data[:, 1]

def gaussian(x, amp, cen, wid):
    """1-d gaussian: gaussian(x, amp, cen, wid)"""
    return (amp / (sqrt(2*pi) * wid)) * exp(-(x-cen)**2 / (2*wid**2))

gmodel = Model(gaussian)
result = gmodel.fit(y, x=x, amp=5, cen=5, wid=1)

print(result.fit_report())

plt.plot(x, y, 'o')
plt.plot(x, result.init_fit, '--', label='initial fit')
plt.plot(x, result.best_fit, '-', label='best fit')
plt.legend()
plt.show()
# <end examples/doc_model_gaussian.py>

Total running time of the script: ( 0 minutes 0.232 seconds)


14.1.7 doc_model_loadmodel.py

[[Model]]
Model(mysine)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 25
# data points = 101
# variables = 3
chi-square = 7.68903767
reduced chi-square = 0.07845957
Akaike info crit = -254.107813
Bayesian info crit = -246.262451
R-squared = 0.97252133
[[Variables]]
amp: 2.32733694 +/- 0.03950824 (1.70%) (init = 3)
freq: 0.50098739 +/- 5.7726e-04 (0.12%) (init = 0.52)
shift: 0.53605324 +/- 0.03383110 (6.31%) (init = 0)
[[Correlations]] (unreported correlations are < 0.100)
C(freq, shift) = -0.8663


# <examples/doc_model_loadmodel.py>
import os
import sys

import matplotlib.pyplot as plt
import numpy as np

from lmfit.model import load_model

if not os.path.exists('sinemodel.sav'):
    os.system(f"{sys.executable} doc_model_savemodel.py")


def mysine(x, amp, freq, shift):
    return amp * np.sin(x*freq + shift)


data = np.loadtxt('sinedata.dat')
x = data[:, 0]
y = data[:, 1]

model = load_model('sinemodel.sav', funcdefs={'mysine': mysine})
params = model.make_params(amp=dict(value=3, min=0),
                           freq=0.52,
                           shift=dict(value=0, min=-1, max=1))

result = model.fit(y, params, x=x)
print(result.fit_report())

plt.plot(x, y, 'o')
plt.plot(x, result.best_fit, '-')
plt.show()
# <end examples/doc_model_loadmodel.py>

Total running time of the script: ( 0 minutes 0.228 seconds)


14.1.8 doc_model_with_nan_policy.py

[[Model]]
Model(gaussian)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 22
# data points = 99
# variables = 3
chi-square = 3.27990355
reduced chi-square = 0.03416566
Akaike info crit = -331.323278
Bayesian info crit = -323.537918
R-squared = 0.98570688
[[Variables]]
amplitude: 8.82064881 +/- 0.11686114 (1.32%) (init = 5)
center: 5.65906365 +/- 0.01055590 (0.19%) (init = 6)
sigma: 0.69165307 +/- 0.01060640 (1.53%) (init = 1)
fwhm: 1.62871849 +/- 0.02497615 (1.53%) == '2.3548200*sigma'
height: 5.08770952 +/- 0.06488251 (1.28%) == '0.3989423*amplitude/max(1e-15, sigma)'
[[Correlations]] (unreported correlations are < 0.100)
    C(amplitude, sigma) = +0.6105


# <examples/doc_model_with_nan_policy.py>
import matplotlib.pyplot as plt
import numpy as np

from lmfit.models import GaussianModel

data = np.loadtxt('model1d_gauss.dat')
x = data[:, 0]
y = data[:, 1]

y[44] = np.nan
y[65] = np.nan

# nan_policy = 'raise'
# nan_policy = 'propagate'
nan_policy = 'omit'

gmodel = GaussianModel()
result = gmodel.fit(y, x=x, amplitude=5, center=6, sigma=1,
nan_policy=nan_policy)

print(result.fit_report())

# make sure nans are removed for plotting:
x_ = x[np.where(np.isfinite(y))]
y_ = y[np.where(np.isfinite(y))]

plt.plot(x_, y_, 'o')
plt.plot(x_, result.init_fit, '--', label='initial fit')
plt.plot(x_, result.best_fit, '-', label='best fit')
plt.legend()
plt.show()
# <end examples/doc_model_with_nan_policy.py>

Total running time of the script: ( 0 minutes 0.239 seconds)


14.1.9 doc_builtinmodels_stepmodel.py

[[Model]]
(Model(step, prefix='step_', form='erf') + Model(linear, prefix='line_'))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 55
# data points = 201
# variables = 5
chi-square = 593.709621
reduced chi-square = 3.02913072
Akaike info crit = 227.700173
Bayesian info crit = 244.216697
R-squared = 0.99897798
[[Variables]]
line_slope: 1.87162383 +/- 0.09318592 (4.98%) (init = 0)
line_intercept: 12.0964588 +/- 0.27606017 (2.28%) (init = 11.58574)
step_amplitude: 112.858576 +/- 0.65391731 (0.58%) (init = 134.7378)
step_center: 3.13494787 +/- 0.00516602 (0.16%) (init = 2.5)
step_sigma: 0.67393440 +/- 0.01091158 (1.62%) (init = 1.428571)
[[Correlations]] (unreported correlations are < 0.100)
C(line_slope, step_amplitude) = -0.8791
C(step_amplitude, step_sigma) = +0.5643
C(line_slope, step_sigma) = -0.4569
C(line_intercept, step_center) = +0.4269
C(line_slope, line_intercept) = -0.3093
C(line_slope, step_center) = -0.2338
C(line_intercept, step_sigma) = -0.1372
C(line_intercept, step_amplitude) = -0.1173
C(step_amplitude, step_center) = +0.1095

# <examples/doc_builtinmodels_stepmodel.py>
import matplotlib.pyplot as plt
import numpy as np

from lmfit.models import LinearModel, StepModel

x = np.linspace(0, 10, 201)
y = np.ones_like(x)
y[:48] = 0.0
y[48:77] = np.arange(77-48)/(77.0-48)
np.random.seed(0)
y = 110.2 * (y + 9e-3*np.random.randn(x.size)) + 12.0 + 2.22*x

step_mod = StepModel(form='erf', prefix='step_')
line_mod = LinearModel(prefix='line_')

pars = line_mod.make_params(intercept=y.min(), slope=0)
pars += step_mod.guess(y, x=x, center=2.5)

mod = step_mod + line_mod
out = mod.fit(y, pars, x=x)

print(out.fit_report())

plt.plot(x, y)
plt.plot(x, out.init_fit, '--', label='initial fit')
plt.plot(x, out.best_fit, '-', label='best fit')
plt.legend()
plt.show()
# <end examples/doc_builtinmodels_stepmodel.py>

Total running time of the script: ( 0 minutes 0.245 seconds)


14.1.10 doc_model_uncertainty.py

[[Model]]
Model(gaussian)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 33
# data points = 101
# variables = 3
chi-square = 3.40883599
reduced chi-square = 0.03478404
Akaike info crit = -336.263713
Bayesian info crit = -328.418352
R-squared = 0.98533348
[[Variables]]
amp: 8.88021893 +/- 0.11359522 (1.28%) (init = 5)
cen: 5.65866102 +/- 0.01030495 (0.18%) (init = 5)
wid: 0.69765478 +/- 0.01030505 (1.48%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(amp, wid) = +0.5774


# <examples/doc_model_uncertainty.py>
import matplotlib.pyplot as plt
from numpy import exp, loadtxt, pi, sqrt

from lmfit import Model

data = loadtxt('model1d_gauss.dat')
x = data[:, 0]
y = data[:, 1]

def gaussian(x, amp, cen, wid):
    """1-d gaussian: gaussian(x, amp, cen, wid)"""
    return (amp / (sqrt(2*pi) * wid)) * exp(-(x-cen)**2 / (2*wid**2))

gmodel = Model(gaussian)
result = gmodel.fit(y, x=x, amp=5, cen=5, wid=1)

print(result.fit_report())

dely = result.eval_uncertainty(sigma=3)

plt.plot(x, y, 'o')
plt.plot(x, result.init_fit, '--', label='initial fit')
plt.plot(x, result.best_fit, '-', label='best fit')
plt.fill_between(x, result.best_fit-dely, result.best_fit+dely,
color="#ABABAB", label=r'3-$\sigma$ uncertainty band')
plt.legend()
plt.show()
# <end examples/doc_model_uncertainty.py>

Total running time of the script: ( 0 minutes 0.249 seconds)


14.1.11 doc_model_two_components.py

[[Model]]
(Model(gaussian) + Model(line))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 55
# data points = 101
# variables = 5
chi-square = 2.57855517
reduced chi-square = 0.02685995
Akaike info crit = -360.457020
Bayesian info crit = -347.381417
R-squared = 0.99194643
[[Variables]]
amp: 8.45930976 +/- 0.12414531 (1.47%) (init = 5)
cen: 5.65547889 +/- 0.00917673 (0.16%) (init = 5)
wid: 0.67545513 +/- 0.00991697 (1.47%) (init = 1)
slope: 0.26484403 +/- 0.00574892 (2.17%) (init = 0)
intercept: -0.96860189 +/- 0.03352202 (3.46%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(slope, intercept) = -0.7954
C(amp, wid) = +0.6664
C(amp, intercept) = -0.2216
C(amp, slope) = -0.1692
C(cen, slope) = -0.1618
C(wid, intercept) = -0.1477
C(cen, intercept) = +0.1287
C(wid, slope) = -0.1127

# <examples/doc_model_two_components.py>
import matplotlib.pyplot as plt
from numpy import exp, loadtxt, pi, sqrt

from lmfit import Model

data = loadtxt('model1d_gauss.dat')
x = data[:, 0]
y = data[:, 1] + 0.25*x - 1.0

def gaussian(x, amp, cen, wid):
    """1-d gaussian: gaussian(x, amp, cen, wid)"""
    return (amp / (sqrt(2*pi) * wid)) * exp(-(x-cen)**2 / (2*wid**2))


def line(x, slope, intercept):
    """a line"""
    return slope*x + intercept


mod = Model(gaussian) + Model(line)
pars = mod.make_params(amp=5, cen=5, wid={'value': 1, 'min': 0},
                       slope=0, intercept=1)

result = mod.fit(y, pars, x=x)
print(result.fit_report())

plt.plot(x, y, 'o')
plt.plot(x, result.init_fit, '--', label='initial fit')
plt.plot(x, result.best_fit, '-', label='best fit')
plt.legend()
plt.show()
# <end examples/doc_model_two_components.py>

Total running time of the script: ( 0 minutes 0.247 seconds)


14.1.12 doc_fitting_withreport.py

[[Fit Statistics]]
# fitting method = leastsq
# function evals = 83
# data points = 1001
# variables = 4
chi-square = 498.811759
reduced chi-square = 0.50031270
Akaike info crit = -689.222517
Bayesian info crit = -669.587497
[[Variables]]
amp: 13.9121959 +/- 0.14120321 (1.01%) (init = 13)
period: 5.48507038 +/- 0.02666520 (0.49%) (init = 2)
shift: 0.16203673 +/- 0.01405662 (8.67%) (init = 0)
decay: 0.03264539 +/- 3.8015e-04 (1.16%) (init = 0.02)
[[Correlations]] (unreported correlations are < 0.100)
C(period, shift) = +0.7974
C(amp, decay) = +0.5816
C(amp, shift) = -0.2966
C(amp, period) = -0.2432
C(shift, decay) = -0.1819
C(period, decay) = -0.1496

# <examples/doc_fitting_withreport.py>
from numpy import exp, linspace, pi, random, sign, sin

from lmfit import create_params, fit_report, minimize

p_true = create_params(amp=14.0, period=5.46, shift=0.123, decay=0.032)

def residual(pars, x, data=None):
    """Model a decaying sine wave and subtract data."""
    vals = pars.valuesdict()
    amp = vals['amp']
    per = vals['period']
    shift = vals['shift']
    decay = vals['decay']

    if abs(shift) > pi/2:
        shift = shift - sign(shift)*pi
    model = amp * sin(shift + x/per) * exp(-x*x*decay*decay)
    if data is None:
        return model
    return model - data

random.seed(0)
x = linspace(0.0, 250., 1001)
noise = random.normal(scale=0.7215, size=x.size)
data = residual(p_true, x) + noise

fit_params = create_params(amp=13, period=2, shift=0, decay=0.02)

out = minimize(residual, fit_params, args=(x,), kws={'data': data})

print(fit_report(out))
# <end examples/doc_fitting_withreport.py>

Total running time of the script: ( 0 minutes 0.008 seconds)
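fit_report() accepts a few options to tune the report. A short sketch using the objects from this script: modelpars prints the known "true" parameter values next to the fitted ones, and min_correl raises the threshold for reporting correlations:

print(fit_report(out, modelpars=p_true, min_correl=0.5))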

14.1.13 doc_model_savemodelresult2.py

[[Model]]
((Model(gaussian, prefix='g1_') + Model(gaussian, prefix='g2_')) + Model(exponential, prefix='exp_'))

[[Fit Statistics]]
# fitting method = leastsq
# function evals = 46
# data points = 250
# variables = 8
chi-square = 1247.52821
reduced chi-square = 5.15507524
Akaike info crit = 417.864631
Bayesian info crit = 446.036318
R-squared = 0.99648654
[[Variables]]
exp_amplitude: 99.0183278 +/- 0.53748593 (0.54%) (init = 162.2102)
exp_decay: 90.9508853 +/- 1.10310778 (1.21%) (init = 93.24905)
g1_amplitude: 4257.77360 +/- 42.3836478 (1.00%) (init = 2000)
g1_center: 107.030956 +/- 0.15006851 (0.14%) (init = 105)
g1_sigma: 16.6725772 +/- 0.16048381 (0.96%) (init = 15)
g1_fwhm: 39.2609181 +/- 0.37791049 (0.96%) == '2.3548200*g1_sigma'
g1_height: 101.880230 +/- 0.59217173 (0.58%) == '0.3989423*g1_amplitude/max(1e-15, g1_sigma)'
g2_amplitude: 2493.41735 +/- 36.1697789 (1.45%) (init = 2000)
g2_center: 153.270102 +/- 0.19466802 (0.13%) (init = 155)
g2_sigma: 13.8069464 +/- 0.18679695 (1.35%) (init = 15)
g2_fwhm: 32.5128735 +/- 0.43987320 (1.35%) == '2.3548200*g2_sigma'
g2_height: 72.0455941 +/- 0.61722243 (0.86%) == '0.3989423*g2_amplitude/max(1e-15, g2_sigma)'

[[Correlations]] (unreported correlations are < 0.100)


C(g1_amplitude, g1_sigma) = +0.8243
C(g2_amplitude, g2_sigma) = +0.8154
C(exp_amplitude, exp_decay) = -0.6946
C(g1_sigma, g2_center) = +0.6842
C(g1_center, g2_amplitude) = -0.6689
C(g1_center, g2_sigma) = -0.6520
C(g1_amplitude, g2_center) = +0.6477
C(g1_center, g2_center) = +0.6205
C(g1_center, g1_sigma) = +0.5075
C(exp_decay, g1_amplitude) = -0.5074
C(g1_sigma, g2_amplitude) = -0.4915
C(g2_center, g2_sigma) = -0.4889
C(g1_sigma, g2_sigma) = -0.4826
C(g2_amplitude, g2_center) = -0.4763
C(exp_decay, g2_amplitude) = -0.4270
C(g1_amplitude, g1_center) = +0.4183
C(g1_amplitude, g2_sigma) = -0.4010
C(g1_amplitude, g2_amplitude) = -0.3071
C(exp_amplitude, g2_amplitude) = +0.2821
C(exp_decay, g1_sigma) = -0.2520
C(exp_decay, g2_sigma) = -0.2329
C(exp_amplitude, g2_sigma) = +0.1714
C(exp_decay, g2_center) = -0.1514
C(exp_amplitude, g1_amplitude) = +0.1478
C(exp_decay, g1_center) = +0.1055

# <examples/doc_model_savemodelresult2.py>
import numpy as np

from lmfit.model import save_modelresult
from lmfit.models import ExponentialModel, GaussianModel

dat = np.loadtxt('NIST_Gauss2.dat')
x = dat[:, 1]
y = dat[:, 0]

exp_mod = ExponentialModel(prefix='exp_')
pars = exp_mod.guess(y, x=x)

gauss1 = GaussianModel(prefix='g1_')
pars.update(gauss1.make_params(center=dict(value=105, min=75, max=125),
                               sigma=dict(value=15, min=0),
                               amplitude=dict(value=2000, min=0)))

gauss2 = GaussianModel(prefix='g2_')
pars.update(gauss2.make_params(center=dict(value=155, min=125, max=175),
                               sigma=dict(value=15, min=0),
                               amplitude=dict(value=2000, min=0)))

mod = gauss1 + gauss2 + exp_mod

init = mod.eval(pars, x=x)

result = mod.fit(y, pars, x=x)

save_modelresult(result, 'nistgauss_modelresult.sav')

print(result.fit_report())
# <end examples/doc_model_savemodelresult2.py>

Total running time of the script: ( 0 minutes 0.061 seconds)
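The saved result can be restored later with load_modelresult(), the counterpart to save_modelresult(). A minimal sketch (for models built from user-defined functions, a funcdefs dictionary would also be needed; for builtin models as here it is not):

from lmfit.model import load_modelresult

result = load_modelresult('nistgauss_modelresult.sav')
print(result.fit_report())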

14.1.14 doc_builtinmodels_nistgauss2.py

[[Model]]
((Model(gaussian, prefix='g1_') + Model(gaussian, prefix='g2_')) + Model(exponential, prefix='exp_'))

[[Fit Statistics]]
# fitting method = leastsq
# function evals = 37
# data points = 250
# variables = 8
chi-square = 1247.52821
reduced chi-square = 5.15507524
Akaike info crit = 417.864631
Bayesian info crit = 446.036318
R-squared = 0.99648654
[[Variables]]
exp_amplitude: 99.0183265 +/- 0.53748764 (0.54%) (init = 94.53724)
exp_decay: 90.9508884 +/- 1.10310753 (1.21%) (init = 111.1985)
g1_amplitude: 4257.77384 +/- 42.3839276 (1.00%) (init = 3189.648)
g1_center: 107.030957 +/- 0.15006934 (0.14%) (init = 106.5)
g1_sigma: 16.6725783 +/- 0.16048220 (0.96%) (init = 14.5)
g1_fwhm: 39.2609209 +/- 0.37790669 (0.96%) == '2.3548200*g1_sigma'
g1_height: 101.880228 +/- 0.59216965 (0.58%) == '0.3989423*g1_amplitude/max(1e-15, g1_sigma)'
g2_amplitude: 2493.41698 +/- 36.1699974 (1.45%) (init = 2818.337)
g2_center: 153.270103 +/- 0.19466966 (0.13%) (init = 150)
g2_sigma: 13.8069440 +/- 0.18680331 (1.35%) (init = 15)
g2_fwhm: 32.5128679 +/- 0.43988817 (1.35%) == '2.3548200*g2_sigma'
g2_height: 72.0455954 +/- 0.61722287 (0.86%) == '0.3989423*g2_amplitude/max(1e-15, g2_sigma)'

[[Correlations]] (unreported correlations are < 0.500)


C(g1_amplitude, g1_sigma) = +0.8243
C(g2_amplitude, g2_sigma) = +0.8154
C(exp_amplitude, exp_decay) = -0.6946
C(g1_sigma, g2_center) = +0.6842
C(g1_center, g2_amplitude) = -0.6689
C(g1_center, g2_sigma) = -0.6521
C(g1_amplitude, g2_center) = +0.6477
C(g1_center, g2_center) = +0.6205
C(g1_center, g1_sigma) = +0.5075
C(exp_decay, g1_amplitude) = -0.5074

# <examples/doc_nistgauss2.py>
import matplotlib.pyplot as plt
import numpy as np

from lmfit.models import ExponentialModel, GaussianModel

dat = np.loadtxt('NIST_Gauss2.dat')
x = dat[:, 1]
y = dat[:, 0]

exp_mod = ExponentialModel(prefix='exp_')
gauss1 = GaussianModel(prefix='g1_')
gauss2 = GaussianModel(prefix='g2_')

def index_of(arrval, value):
    """Return index of array *at or below* value."""
    if value < min(arrval):
        return 0
    return max(np.where(arrval <= value)[0])

ix1 = index_of(x, 75)
ix2 = index_of(x, 135)
ix3 = index_of(x, 175)

pars1 = exp_mod.guess(y[:ix1], x=x[:ix1])
pars2 = gauss1.guess(y[ix1:ix2], x=x[ix1:ix2])
pars3 = gauss2.guess(y[ix2:ix3], x=x[ix2:ix3])

pars = pars1 + pars2 + pars3

mod = gauss1 + gauss2 + exp_mod

out = mod.fit(y, pars, x=x)

print(out.fit_report(min_correl=0.5))

plt.plot(x, y)
plt.plot(x, out.init_fit, '--', label='initial fit')
plt.plot(x, out.best_fit, '-', label='best fit')
plt.legend()
plt.show()
# <end examples/doc_nistgauss2.py>

Total running time of the script: ( 0 minutes 0.270 seconds)
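Since x is sorted in this example, the index_of() helper could also be written with numpy.searchsorted; a minimal, equivalent sketch:

def index_of(arrval, value):
    """Return index of sorted array *at or below* value."""
    return max(0, np.searchsorted(arrval, value, side='right') - 1)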


14.1.15 doc_model_with_iter_callback.py

ITER -1 ['peak_amplitude = 3.00000', 'peak_center = 6.00000', 'peak_sigma = 2.00000', 'bkg_slope = 0.00000', 'bkg_intercept = 0.00000', 'peak_fwhm = 4.70964', 'peak_height = 0.59841']
ITER 0 ['peak_amplitude = 3.00000', 'peak_center = 6.00000', 'peak_sigma = 2.00000', 'bkg_slope = 0.00000', 'bkg_intercept = 0.00000', 'peak_fwhm = 4.70964', 'peak_height = 0.59841']
ITER 1 ['peak_amplitude = 3.00000', 'peak_center = 6.00000', 'peak_sigma = 2.00000', 'bkg_slope = 0.00000', 'bkg_intercept = 0.00000', 'peak_fwhm = 4.70964', 'peak_height = 0.59841']
ITER 2 ['peak_amplitude = 3.00004', 'peak_center = 6.00000', 'peak_sigma = 2.00000', 'bkg_slope = 0.00000', 'bkg_intercept = 0.00000', 'peak_fwhm = 4.70964', 'peak_height = 0.59842']
ITER 3 ['peak_amplitude = 3.00000', 'peak_center = 6.00004', 'peak_sigma = 2.00000', 'bkg_slope = 0.00000', 'bkg_intercept = 0.00000', 'peak_fwhm = 4.70964', 'peak_height = 0.59841']
ITER 4 ['peak_amplitude = 3.00000', 'peak_center = 6.00000', 'peak_sigma = 2.00003', 'bkg_slope = 0.00000', 'bkg_intercept = 0.00000', 'peak_fwhm = 4.70970', 'peak_height = 0.59841']
ITER 5 ['peak_amplitude = 3.00000', 'peak_center = 6.00000', 'peak_sigma = 2.00000', 'bkg_slope = 0.00001', 'bkg_intercept = 0.00000', 'peak_fwhm = 4.70964', 'peak_height = 0.59841']
ITER 6 ['peak_amplitude = 3.00000', 'peak_center = 6.00000', 'peak_sigma = 2.00000', 'bkg_slope = 0.00000', 'bkg_intercept = 0.00001', 'peak_fwhm = 4.70964', 'peak_height = 0.59841']
ITER 7 ['peak_amplitude = 28.06096', 'peak_center = 19.99268', 'peak_sigma = 6.62280', 'bkg_slope = -0.20866', 'bkg_intercept = 3.28814', 'peak_fwhm = 15.59550', 'peak_height = 1.69033']
ITER 8 ['peak_amplitude = 28.06125', 'peak_center = 19.99268', 'peak_sigma = 6.62280', 'bkg_slope = -0.20866', 'bkg_intercept = 3.28814', 'peak_fwhm = 15.59550', 'peak_height = 1.69034']
ITER 9 ['peak_amplitude = 28.06096', 'peak_center = 19.99267', 'peak_sigma = 6.62280', 'bkg_slope = -0.20866', 'bkg_intercept = 3.28814', 'peak_fwhm = 15.59550', 'peak_height = 1.69033']
ITER 10 ['peak_amplitude = 28.06096', 'peak_center = 19.99268', 'peak_sigma = 6.62288', 'bkg_slope = -0.20866', 'bkg_intercept = 3.28814', 'peak_fwhm = 15.59568', 'peak_height = 1.69031']
ITER 11 ['peak_amplitude = 28.06096', 'peak_center = 19.99268', 'peak_sigma = 6.62280', 'bkg_slope = -0.20866', 'bkg_intercept = 3.28814', 'peak_fwhm = 15.59550', 'peak_height = 1.69033']
ITER 12 ['peak_amplitude = 28.06096', 'peak_center = 19.99268', 'peak_sigma = 6.62280', 'bkg_slope = -0.20866', 'bkg_intercept = 3.28818', 'peak_fwhm = 15.59550', 'peak_height = 1.69033']
ITER 13 ['peak_amplitude = 69.61513', 'peak_center = 11.64712', 'peak_sigma = 18.26748', 'bkg_slope = 0.15397', 'bkg_intercept = 3.00619', 'peak_fwhm = 43.01664', 'peak_height = 1.52032']
ITER 14 ['peak_amplitude = 26.14334', 'peak_center = 19.99783', 'peak_sigma = 17.92626', 'bkg_slope = -0.18990', 'bkg_intercept = 3.74340', 'peak_fwhm = 42.21311', 'peak_height = 0.58181']
ITER 15 ['peak_amplitude = 26.14361', 'peak_center = 19.99783', 'peak_sigma = 17.92626', 'bkg_slope = -0.18990', 'bkg_intercept = 3.74340', 'peak_fwhm = 42.21311', 'peak_height = 0.58182']
ITER 16 ['peak_amplitude = 26.14334', 'peak_center = 19.99783', 'peak_sigma = 17.92626', 'bkg_slope = -0.18990', 'bkg_intercept = 3.74340', 'peak_fwhm = 42.21311', 'peak_height = 0.58181']
ITER 17 ['peak_amplitude = 26.14334', 'peak_center = 19.99783', 'peak_sigma = 17.92644', 'bkg_slope = -0.18990', 'bkg_intercept = 3.74340', 'peak_fwhm = 42.21355', 'peak_height = 0.58180']
ITER 18 ['peak_amplitude = 26.14334', 'peak_center = 19.99783', 'peak_sigma = 17.92626', 'bkg_slope = -0.18989', 'bkg_intercept = 3.74340', 'peak_fwhm = 42.21311', 'peak_height = 0.58181']
ITER 19 ['peak_amplitude = 26.14334', 'peak_center = 19.99783', 'peak_sigma = 17.92626', 'bkg_slope = -0.18990', 'bkg_intercept = 3.74343', 'peak_fwhm = 42.21311', 'peak_height = 0.58181']
ITER 20 ['peak_amplitude = 28.96026', 'peak_center = 19.98918', 'peak_sigma = 18.29596', 'bkg_slope = -0.27575', 'bkg_intercept = 4.75376', 'peak_fwhm = 43.08370', 'peak_height = 0.63148']
ITER 21 ['peak_amplitude = 28.96056', 'peak_center = 19.98918', 'peak_sigma = 18.29596', 'bkg_slope = -0.27575', 'bkg_intercept = 4.75376', 'peak_fwhm = 43.08370', 'peak_height = 0.63148']
ITER 22 ['peak_amplitude = 28.96026', 'peak_center = 19.98917', 'peak_sigma = 18.29596', 'bkg_slope = -0.27575', 'bkg_intercept = 4.75376', 'peak_fwhm = 43.08370', 'peak_height = 0.63148']
ITER 23 ['peak_amplitude = 28.96026', 'peak_center = 19.98918', 'peak_sigma = 18.29616', 'bkg_slope = -0.27575', 'bkg_intercept = 4.75376', 'peak_fwhm = 43.08416', 'peak_height = 0.63147']
ITER 24 ['peak_amplitude = 28.96026', 'peak_center = 19.98918', 'peak_sigma = 18.29596', 'bkg_slope = -0.27574', 'bkg_intercept = 4.75376', 'peak_fwhm = 43.08370', 'peak_height = 0.63148']
ITER 25 ['peak_amplitude = 28.96026', 'peak_center = 19.98918', 'peak_sigma = 18.29596', 'bkg_slope = -0.27575', 'bkg_intercept = 4.75381', 'peak_fwhm = 43.08370', 'peak_height = 0.63148']
ITER 26 ['peak_amplitude = 49.50287', 'peak_center = 18.93364', 'peak_sigma = 0.30541', 'bkg_slope = -0.35174', 'bkg_intercept = 4.83315', 'peak_fwhm = 0.71918', 'peak_height = 64.66412']
ITER 27 ['peak_amplitude = 29.93395', 'peak_center = 19.97768', 'peak_sigma = 17.89130', 'bkg_slope = -0.29348', 'bkg_intercept = 4.93342', 'peak_fwhm = 42.13079', 'peak_height = 0.66747']
ITER 28 ['peak_amplitude = 29.93426', 'peak_center = 19.97768', 'peak_sigma = 17.89130', 'bkg_slope = -0.29348', 'bkg_intercept = 4.93342', 'peak_fwhm = 42.13079', 'peak_height = 0.66748']
ITER 29 ['peak_amplitude = 29.93395', 'peak_center = 19.97767', 'peak_sigma = 17.89130', 'bkg_slope = -0.29348', 'bkg_intercept = 4.93342', 'peak_fwhm = 42.13079', 'peak_height = 0.66747']
ITER 30 ['peak_amplitude = 29.93395', 'peak_center = 19.97768', 'peak_sigma = 17.89149', 'bkg_slope = -0.29348', 'bkg_intercept = 4.93342', 'peak_fwhm = 42.13124', 'peak_height = 0.66746']
ITER 31 ['peak_amplitude = 29.93395', 'peak_center = 19.97768', 'peak_sigma = 17.89130', 'bkg_slope = -0.29347', 'bkg_intercept = 4.93342', 'peak_fwhm = 42.13079', 'peak_height = 0.66747']
ITER 32 ['peak_amplitude = 29.93395', 'peak_center = 19.97768', 'peak_sigma = 17.89130', 'bkg_slope = -0.29348', 'bkg_intercept = 4.93347', 'peak_fwhm = 42.13079', 'peak_height = 0.66747']
ITER 33 ['peak_amplitude = 33.93754', 'peak_center = 19.80974', 'peak_sigma = 14.52919', 'bkg_slope = -0.31329', 'bkg_intercept = 5.00443', 'peak_fwhm = 34.21363', 'peak_height = 0.93186']
ITER 34 ['peak_amplitude = 33.93789', 'peak_center = 19.80974', 'peak_sigma = 14.52919', 'bkg_slope = -0.31329', 'bkg_intercept = 5.00443', 'peak_fwhm = 34.21363', 'peak_height = 0.93187']
ITER 35 ['peak_amplitude = 33.93754', 'peak_center = 19.80971', 'peak_sigma = 14.52919', 'bkg_slope = -0.31329', 'bkg_intercept = 5.00443', 'peak_fwhm = 34.21363', 'peak_height = 0.93186']
ITER 36 ['peak_amplitude = 33.93754', 'peak_center = 19.80974', 'peak_sigma = 14.52935', 'bkg_slope = -0.31329', 'bkg_intercept = 5.00443', 'peak_fwhm = 34.21399', 'peak_height = 0.93185']
ITER 37 ['peak_amplitude = 33.93754', 'peak_center = 19.80974', 'peak_sigma = 14.52919', 'bkg_slope = -0.31328', 'bkg_intercept = 5.00443', 'peak_fwhm = 34.21363', 'peak_height = 0.93186']
ITER 38 ['peak_amplitude = 33.93754', 'peak_center = 19.80974', 'peak_sigma = 14.52919', 'bkg_slope = -0.31329', 'bkg_intercept = 5.00448', 'peak_fwhm = 34.21363', 'peak_height = 0.93186']
ITER 39 ['peak_amplitude = 39.94969', 'peak_center = 11.63266', 'peak_sigma = 17.05427', 'bkg_slope = -0.30912', 'bkg_intercept = 4.82241', 'peak_fwhm = 40.15974', 'peak_height = 0.93452']
ITER 40 ['peak_amplitude = 39.95010', 'peak_center = 11.63266', 'peak_sigma = 17.05427', 'bkg_slope = -0.30912', 'bkg_intercept = 4.82241', 'peak_fwhm = 40.15974', 'peak_height = 0.93453']
ITER 41 ['peak_amplitude = 39.94969', 'peak_center = 11.63237', 'peak_sigma = 17.05427', 'bkg_slope = -0.30912', 'bkg_intercept = 4.82241', 'peak_fwhm = 40.15974', 'peak_height = 0.93452']
ITER 42 ['peak_amplitude = 39.94969', 'peak_center = 11.63266', 'peak_sigma = 17.05445', 'bkg_slope = -0.30912', 'bkg_intercept = 4.82241', 'peak_fwhm = 40.16016', 'peak_height = 0.93451']
ITER 43 ['peak_amplitude = 39.94969', 'peak_center = 11.63266', 'peak_sigma = 17.05427', 'bkg_slope = -0.30911', 'bkg_intercept = 4.82241', 'peak_fwhm = 40.15974', 'peak_height = 0.93452']
ITER 44 ['peak_amplitude = 39.94969', 'peak_center = 11.63266', 'peak_sigma = 17.05427', 'bkg_slope = -0.30912', 'bkg_intercept = 4.82246', 'peak_fwhm = 40.15974', 'peak_height = 0.93452']
ITER 45 ['peak_amplitude = 43.44365', 'peak_center = 7.65269', 'peak_sigma = 0.17915', 'bkg_slope = -0.30320', 'bkg_intercept = 3.85514', 'peak_fwhm = 0.42187', 'peak_height = 96.74096']
ITER 46 ['peak_amplitude = 40.21652', 'peak_center = 11.53095', 'peak_sigma = 15.41291', 'bkg_slope = -0.30665', 'bkg_intercept = 4.70376', 'peak_fwhm = 36.29462', 'peak_height = 1.04095']
ITER 47 ['peak_amplitude = 40.21693', 'peak_center = 11.53095', 'peak_sigma = 15.41291', 'bkg_slope = -0.30665', 'bkg_intercept = 4.70376', 'peak_fwhm = 36.29462', 'peak_height = 1.04096']
ITER 48 ['peak_amplitude = 40.21652', 'peak_center = 11.53065', 'peak_sigma = 15.41291', 'bkg_slope = -0.30665', 'bkg_intercept = 4.70376', 'peak_fwhm = 36.29462', 'peak_height = 1.04095']
ITER 49 ['peak_amplitude = 40.21652', 'peak_center = 11.53095', 'peak_sigma = 15.41307', 'bkg_slope = -0.30665', 'bkg_intercept = 4.70376', 'peak_fwhm = 36.29500', 'peak_height = 1.04094']
ITER 50 ['peak_amplitude = 40.21652', 'peak_center = 11.53095', 'peak_sigma = 15.41291', 'bkg_slope = -0.30665', 'bkg_intercept = 4.70376', 'peak_fwhm = 36.29462', 'peak_height = 1.04095']
ITER 51 ['peak_amplitude = 40.21652', 'peak_center = 11.53095', 'peak_sigma = 15.41291', 'bkg_slope = -0.30665', 'bkg_intercept = 4.70381', 'peak_fwhm = 36.29462', 'peak_height = 1.04095']
ITER 52 ['peak_amplitude = 40.73607', 'peak_center = 10.91338', 'peak_sigma = 11.96371', 'bkg_slope = -0.30679', 'bkg_intercept = 4.50063', 'peak_fwhm = 28.17238', 'peak_height = 1.35839']
ITER 53 ['peak_amplitude = 40.73648', 'peak_center = 10.91338', 'peak_sigma = 11.96371', 'bkg_slope = -0.30679', 'bkg_intercept = 4.50063', 'peak_fwhm = 28.17238', 'peak_height = 1.35840']
ITER 54 ['peak_amplitude = 40.73607', 'peak_center = 10.91308', 'peak_sigma = 11.96371', 'bkg_slope = -0.30679', 'bkg_intercept = 4.50063', 'peak_fwhm = 28.17238', 'peak_height = 1.35839']
ITER 55 ['peak_amplitude = 40.73607', 'peak_center = 10.91338', 'peak_sigma = 11.96384', 'bkg_slope = -0.30679', 'bkg_intercept = 4.50063', 'peak_fwhm = 28.17268', 'peak_height = 1.35837']
ITER 56 ['peak_amplitude = 40.73607', 'peak_center = 10.91338', 'peak_sigma = 11.96371', 'bkg_slope = -0.30679', 'bkg_intercept = 4.50063', 'peak_fwhm = 28.17238', 'peak_height = 1.35839']
ITER 57 ['peak_amplitude = 40.73607', 'peak_center = 10.91338', 'peak_sigma = 11.96371', 'bkg_slope = -0.30679', 'bkg_intercept = 4.50068', 'peak_fwhm = 28.17238', 'peak_height = 1.35839']
ITER 58 ['peak_amplitude = 40.86269', 'peak_center = 9.92047', 'peak_sigma = 5.29355', 'bkg_slope = -0.31543', 'bkg_intercept = 4.04328', 'peak_fwhm = 12.46536', 'peak_height = 3.07957']
ITER 59 ['peak_amplitude = 40.86311', 'peak_center = 9.92047', 'peak_sigma = 5.29355', 'bkg_slope = -0.31543', 'bkg_intercept = 4.04328', 'peak_fwhm = 12.46536', 'peak_height = 3.07960']
ITER 60 ['peak_amplitude = 40.86269', 'peak_center = 9.92015', 'peak_sigma = 5.29355', 'bkg_slope = -0.31543', 'bkg_intercept = 4.04328', 'peak_fwhm = 12.46536', 'peak_height = 3.07957']
ITER 61 ['peak_amplitude = 40.86269', 'peak_center = 9.92047', 'peak_sigma = 5.29361', 'bkg_slope = -0.31543', 'bkg_intercept = 4.04328', 'peak_fwhm = 12.46550', 'peak_height = 3.07953']
ITER 62 ['peak_amplitude = 40.86269', 'peak_center = 9.92047', 'peak_sigma = 5.29355', 'bkg_slope = -0.31542', 'bkg_intercept = 4.04328', 'peak_fwhm = 12.46536', 'peak_height = 3.07957']
ITER 63 ['peak_amplitude = 40.86269', 'peak_center = 9.92047', 'peak_sigma = 5.29355', 'bkg_slope = -0.31543', 'bkg_intercept = 4.04332', 'peak_fwhm = 12.46536', 'peak_height = 3.07957']
ITER 64 ['peak_amplitude = 34.49433', 'peak_center = 6.96011', 'peak_sigma = 3.79883', 'bkg_slope = -0.21584', 'bkg_intercept = 3.04624', 'peak_fwhm = 8.94557', 'peak_height = 3.62249']
ITER 65 ['peak_amplitude = 34.49469', 'peak_center = 6.96011', 'peak_sigma = 3.79883', 'bkg_slope = -0.21584', 'bkg_intercept = 3.04624', 'peak_fwhm = 8.94557', 'peak_height = 3.62253']
ITER 66 ['peak_amplitude = 34.49433', 'peak_center = 6.95978', 'peak_sigma = 3.79883', 'bkg_slope = -0.21584', 'bkg_intercept = 3.04624', 'peak_fwhm = 8.94557', 'peak_height = 3.62249']
ITER 67 ['peak_amplitude = 34.49433', 'peak_center = 6.96011', 'peak_sigma = 3.79888', 'bkg_slope = -0.21584', 'bkg_intercept = 3.04624', 'peak_fwhm = 8.94568', 'peak_height = 3.62245']
ITER 68 ['peak_amplitude = 34.49433', 'peak_center = 6.96011', 'peak_sigma = 3.79883', 'bkg_slope = -0.21584', 'bkg_intercept = 3.04624', 'peak_fwhm = 8.94557', 'peak_height = 3.62249']
ITER 69 ['peak_amplitude = 34.49433', 'peak_center = 6.96011', 'peak_sigma = 3.79883', 'bkg_slope = -0.21584', 'bkg_intercept = 3.04628', 'peak_fwhm = 8.94557', 'peak_height = 3.62249']
ITER 70 ['peak_amplitude = 0.08766', 'peak_center = 8.87255', 'peak_sigma = 0.10402', 'bkg_slope = -0.26833', 'bkg_intercept = 4.93654', 'peak_fwhm = 0.24494', 'peak_height = 0.33620']
ITER 71 ['peak_amplitude = 30.67432', 'peak_center = 8.04096', 'peak_sigma = 1.99254', 'bkg_slope = -0.19527', 'bkg_intercept = 2.91723', 'peak_fwhm = 4.69208', 'peak_height = 6.14155']
ITER 72 ['peak_amplitude = 30.67464', 'peak_center = 8.04096', 'peak_sigma = 1.99254', 'bkg_slope = -0.19527', 'bkg_intercept = 2.91723', 'peak_fwhm = 4.69208', 'peak_height = 6.14161']
ITER 73 ['peak_amplitude = 30.67432', 'peak_center = 8.04063', 'peak_sigma = 1.99254', 'bkg_slope = -0.19527', 'bkg_intercept = 2.91723', 'peak_fwhm = 4.69208', 'peak_height = 6.14155']
ITER 74 ['peak_amplitude = 30.67432', 'peak_center = 8.04096', 'peak_sigma = 1.99257', 'bkg_slope = -0.19527', 'bkg_intercept = 2.91723', 'peak_fwhm = 4.69214', 'peak_height = 6.14147']
ITER 75 ['peak_amplitude = 30.67432', 'peak_center = 8.04096', 'peak_sigma = 1.99254', 'bkg_slope = -0.19527', 'bkg_intercept = 2.91723', 'peak_fwhm = 4.69208', 'peak_height = 6.14155']
ITER 76 ['peak_amplitude = 30.67432', 'peak_center = 8.04096', 'peak_sigma = 1.99254', 'bkg_slope = -0.19527', 'bkg_intercept = 2.91726', 'peak_fwhm = 4.69208', 'peak_height = 6.14155']
ITER 77 ['peak_amplitude = 21.89977', 'peak_center = 7.48302', 'peak_sigma = 1.18769', 'bkg_slope = -0.19932', 'bkg_intercept = 3.46743', 'peak_fwhm = 2.79680', 'peak_height = 7.35606']
ITER 78 ['peak_amplitude = 21.90000', 'peak_center = 7.48302', 'peak_sigma = 1.18769', 'bkg_slope = -0.19932', 'bkg_intercept = 3.46743', 'peak_fwhm = 2.79680', 'peak_height = 7.35614']
ITER 79 ['peak_amplitude = 21.89977', 'peak_center = 7.48269', 'peak_sigma = 1.18769', 'bkg_slope = -0.19932', 'bkg_intercept = 3.46743', 'peak_fwhm = 2.79680', 'peak_height = 7.35606']
ITER 80 ['peak_amplitude = 21.89977', 'peak_center = 7.48302', 'peak_sigma = 1.18771', 'bkg_slope = -0.19932', 'bkg_intercept = 3.46743', 'peak_fwhm = 2.79684', 'peak_height = 7.35595']
ITER 81 ['peak_amplitude = 21.89977', 'peak_center = 7.48302', 'peak_sigma = 1.18769', 'bkg_slope = -0.19932', 'bkg_intercept = 3.46743', 'peak_fwhm = 2.79680', 'peak_height = 7.35606']
ITER 82 ['peak_amplitude = 21.89977', 'peak_center = 7.48302', 'peak_sigma = 1.18769', 'bkg_slope = -0.19932', 'bkg_intercept = 3.46746', 'peak_fwhm = 2.79680', 'peak_height = 7.35606']
ITER 83 ['peak_amplitude = 24.51986', 'peak_center = 7.64789', 'peak_sigma = 1.23846', 'bkg_slope = -0.20238', 'bkg_intercept = 3.36745', 'peak_fwhm = 2.91635', 'peak_height = 7.89852']
ITER 84 ['peak_amplitude = 24.52011', 'peak_center = 7.64789', 'peak_sigma = 1.23846', 'bkg_slope = -0.20238', 'bkg_intercept = 3.36745', 'peak_fwhm = 2.91635', 'peak_height = 7.89861']
ITER 85 ['peak_amplitude = 24.51986', 'peak_center = 7.64756', 'peak_sigma = 1.23846', 'bkg_slope = -0.20238', 'bkg_intercept = 3.36745', 'peak_fwhm = 2.91635', 'peak_height = 7.89852']
ITER 86 ['peak_amplitude = 24.51986', 'peak_center = 7.64789', 'peak_sigma = 1.23848', 'bkg_slope = -0.20238', 'bkg_intercept = 3.36745', 'peak_fwhm = 2.91639', 'peak_height = 7.89841']
ITER 87 ['peak_amplitude = 24.51986', 'peak_center = 7.64789', 'peak_sigma = 1.23846', 'bkg_slope = -0.20238', 'bkg_intercept = 3.36745', 'peak_fwhm = 2.91635', 'peak_height = 7.89852']
ITER 88 ['peak_amplitude = 24.51986', 'peak_center = 7.64789', 'peak_sigma = 1.23846', 'bkg_slope = -0.20238', 'bkg_intercept = 3.36748', 'peak_fwhm = 2.91635', 'peak_height = 7.89852']
ITER 89 ['peak_amplitude = 24.52133', 'peak_center = 7.63730', 'peak_sigma = 1.22397', 'bkg_slope = -0.20264', 'bkg_intercept = 3.36998', 'peak_fwhm = 2.88223', 'peak_height = 7.99252']
ITER 90 ['peak_amplitude = 24.52158', 'peak_center = 7.63730', 'peak_sigma = 1.22397', 'bkg_slope = -0.20264', 'bkg_intercept = 3.36998', 'peak_fwhm = 2.88223', 'peak_height = 7.99260']
ITER 91 ['peak_amplitude = 24.52133', 'peak_center = 7.63697', 'peak_sigma = 1.22397', 'bkg_slope = -0.20264', 'bkg_intercept = 3.36998', 'peak_fwhm = 2.88223', 'peak_height = 7.99252']
ITER 92 ['peak_amplitude = 24.52133', 'peak_center = 7.63730', 'peak_sigma = 1.22399', 'bkg_slope = -0.20264', 'bkg_intercept = 3.36998', 'peak_fwhm = 2.88227', 'peak_height = 7.99240']
ITER 93 ['peak_amplitude = 24.52133', 'peak_center = 7.63730', 'peak_sigma = 1.22397', 'bkg_slope = -0.20264', 'bkg_intercept = 3.36998', 'peak_fwhm = 2.88223', 'peak_height = 7.99252']
ITER 94 ['peak_amplitude = 24.52133', 'peak_center = 7.63730', 'peak_sigma = 1.22397', 'bkg_slope = -0.20264', 'bkg_intercept = 3.37001', 'peak_fwhm = 2.88223', 'peak_height = 7.99252']
ITER 95 ['peak_amplitude = 24.52372', 'peak_center = 7.63753', 'peak_sigma = 1.22415', 'bkg_slope = -0.20264', 'bkg_intercept = 3.36986', 'peak_fwhm = 2.88265', 'peak_height = 7.99214']
ITER 96 ['peak_amplitude = 24.52397', 'peak_center = 7.63753', 'peak_sigma = 1.22415', 'bkg_slope = -0.20264', 'bkg_intercept = 3.36986', 'peak_fwhm = 2.88265', 'peak_height = 7.99222']
ITER 97 ['peak_amplitude = 24.52372', 'peak_center = 7.63720', 'peak_sigma = 1.22415', 'bkg_slope = -0.20264', 'bkg_intercept = 3.36986', 'peak_fwhm = 2.88265', 'peak_height = 7.99214']
ITER 98 ['peak_amplitude = 24.52372', 'peak_center = 7.63753', 'peak_sigma = 1.22416', 'bkg_slope = -0.20264', 'bkg_intercept = 3.36986', 'peak_fwhm = 2.88269', 'peak_height = 7.99202']
ITER 99 ['peak_amplitude = 24.52372', 'peak_center = 7.63753', 'peak_sigma = 1.22415', 'bkg_slope = -0.20264', 'bkg_intercept = 3.36986', 'peak_fwhm = 2.88265', 'peak_height = 7.99214']
ITER 100 ['peak_amplitude = 24.52372', 'peak_center = 7.63753', 'peak_sigma = 1.22415', 'bkg_slope = -0.20264', 'bkg_intercept = 3.36989', 'peak_fwhm = 2.88265', 'peak_height = 7.99214']
ITER 101 ['peak_amplitude = 24.52371', 'peak_center = 7.63753', 'peak_sigma = 1.22415', 'bkg_slope = -0.20264', 'bkg_intercept = 3.36986', 'peak_fwhm = 2.88264', 'peak_height = 7.99214']
ITER 101 ['peak_amplitude = 24.52371', 'peak_center = 7.63753', 'peak_sigma = 1.22415', 'bkg_slope = -0.20264', 'bkg_intercept = 3.36986', 'peak_fwhm = 2.88264', 'peak_height = 7.99214']

Nfev = 101
[[Model]]
(Model(gaussian, prefix='peak_') + Model(linear, prefix='bkg_'))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 101
# data points = 401
# variables = 5
chi-square = 20.0043556
reduced chi-square = 0.05051605
Akaike info crit = -1192.20257
Bayesian info crit = -1172.23276
R-squared = 0.99377421
[[Variables]]
peak_amplitude: 24.5237052 +/- 0.16281835 (0.66%) (init = 3)
peak_center: 7.63752785 +/- 0.00746969 (0.10%) (init = 6)
peak_sigma: 1.22414559 +/- 0.00811005 (0.66%) (init = 2)
bkg_slope: -0.20264093 +/- 0.00204346 (1.01%) (init = 0)
bkg_intercept: 3.36986054 +/- 0.02653942 (0.79%) (init = 0)
peak_fwhm: 2.88264251 +/- 0.01909771 (0.66%) == '2.3548200*peak_sigma'
peak_height: 7.99214038 +/- 0.04318559 (0.54%) == '0.3989423*peak_amplitude/max(1e-15, peak_sigma)'
[[Correlations]] (unreported correlations are < 0.100)
C(bkg_slope, bkg_intercept) = -0.8574
C(peak_amplitude, peak_sigma) = +0.6681
C(peak_amplitude, bkg_intercept) = -0.5260
C(peak_sigma, bkg_intercept) = -0.3514
C(peak_amplitude, bkg_slope) = +0.2858
C(peak_sigma, bkg_slope) = +0.1909
C(peak_center, bkg_slope) = -0.1451
C(peak_center, bkg_intercept) = +0.1244

# <examples/doc_with_itercb.py>
import matplotlib.pyplot as plt
from numpy import linspace, random

from lmfit.lineshapes import gaussian
from lmfit.models import GaussianModel, LinearModel

def per_iteration(pars, iteration, resid, *args, **kws):
    print(" ITER ", iteration, [f"{p.name} = {p.value:.5f}" for p in pars.values()])

x = linspace(0., 20, 401)
y = gaussian(x, amplitude=24.56, center=7.6543, sigma=1.23)
random.seed(2021)
y = y - .20*x + 3.333 + random.normal(scale=0.23, size=x.size)

mod = GaussianModel(prefix='peak_') + LinearModel(prefix='bkg_')

pars = mod.make_params(peak_amplitude=dict(value=3.0, min=0),
                       peak_center=dict(value=6.0, min=0, max=20),
                       peak_sigma=2.0,
                       bkg_intercept=0,
                       bkg_slope=0)

out = mod.fit(y, pars, x=x, iter_cb=per_iteration)

plt.plot(x, y, '--')

print(f'Nfev = {out.nfev}')
print(out.fit_report())

plt.plot(x, out.best_fit, '-', label='best fit')
plt.legend()
plt.show()
# <end examples/doc_with_itercb.py>


Total running time of the script: ( 0 minutes 0.262 seconds)
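The iteration callback can also be used to abort a fit: if the callback returns True, the minimization stops. A minimal sketch of a variant of per_iteration with a hypothetical cutoff of 50 evaluations:

def per_iteration(pars, iteration, resid, *args, **kws):
    print(" ITER ", iteration, [f"{p.name} = {p.value:.5f}" for p in pars.values()])
    return iteration > 50   # returning True aborts the fit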

14.1.16 doc_parameters_valuesdict.py

[[Fit Statistics]]
# fitting method = leastsq
# function evals = 64
# data points = 301
# variables = 4
chi-square = 12.1867036
reduced chi-square = 0.04103267
Akaike info crit = -957.236198
Bayesian info crit = -942.407756
[[Variables]]
amp: 5.03088059 +/- 0.04005824 (0.80%) (init = 10)
decay: 0.02495457 +/- 4.5396e-04 (1.82%) (init = 0.1)
omega: 2.00026310 +/- 0.00326183 (0.16%) (init = 3)
shift: -0.10264952 +/- 0.01022294 (9.96%) (init = 0)
[[Correlations]] (unreported correlations are < 0.100)
C(omega, shift) = -0.7852
C(amp, decay) = +0.5840
C(amp, shift) = -0.1179


# <examples/doc_parameters_valuesdict.py>
import numpy as np

from lmfit import Minimizer, create_params, report_fit

# create data to be fitted
x = np.linspace(0, 15, 301)
np.random.seed(2021)
data = (5.0 * np.sin(2.0*x - 0.1) * np.exp(-x*x*0.025) +
        np.random.normal(size=x.size, scale=0.2))

# define objective function: returns the array to be minimized
def fcn2min(params, x, data):
    """Model a decaying sine wave and subtract data."""
    v = params.valuesdict()
    model = v['amp'] * np.sin(x * v['omega'] + v['shift']) * np.exp(-x*x*v['decay'])
    return model - data

# create a set of Parameters
params = create_params(amp=dict(value=10, min=0),
                       decay=0.1,
                       omega=3.0,
                       shift=dict(value=0.0, min=-np.pi/2., max=np.pi/2))

# do fit, here with the default leastsq algorithm
minner = Minimizer(fcn2min, params, fcn_args=(x, data))
result = minner.minimize()

# calculate final result
final = data + result.residual

# write error report
report_fit(result)

# try to plot results
try:
    import matplotlib.pyplot as plt
    plt.plot(x, data, '+')
    plt.plot(x, final)
    plt.show()
except ImportError:
    pass
# <end of examples/doc_parameters_valuesdict.py>

Total running time of the script: ( 0 minutes 0.239 seconds)
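Besides report_fit(), a Parameters object can print a quick table of values, bounds, and vary flags with its pretty_print() method. A one-line sketch reusing the result above:

result.params.pretty_print()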


14.1.17 doc_builtinmodels_peakmodels.py


[[Model]]
Model(gaussian)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 25
# data points = 401
# variables = 3
chi-square = 29.9943157
reduced chi-square = 0.07536260
Akaike info crit = -1033.77437
Bayesian info crit = -1021.79248
R-squared = 0.99045513
[[Variables]]
amplitude: 30.3135789 +/- 0.15712752 (0.52%) (init = 43.62238)
center: 9.24277046 +/- 0.00737497 (0.08%) (init = 9.25)
sigma: 1.23218496 +/- 0.00737506 (0.60%) (init = 1.35)
fwhm: 2.90157379 +/- 0.01736695 (0.60%) == '2.3548200*sigma'
height: 9.81457271 +/- 0.05087308 (0.52%) == '0.3989423*amplitude/max(1e-15, sigma)'

[[Correlations]]
+-----------+-----------+-----------+-----------+
| Variable | amplitude | center | sigma |
+-----------+-----------+-----------+-----------+
| amplitude | +1.0000 | -0.0000 | +0.5774 |
| center | -0.0000 | +1.0000 | -0.0000 |
| sigma | +0.5774 | -0.0000 | +1.0000 |
+-----------+-----------+-----------+-----------+
[[Model]]
Model(lorentzian)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 25
# data points = 401
# variables = 3
chi-square = 53.7535387
reduced chi-square = 0.13505914
Akaike info crit = -799.830322
Bayesian info crit = -787.848438
R-squared = 0.98289441
[[Variables]]
amplitude: 38.9726380 +/- 0.31386754 (0.81%) (init = 54.52798)
center: 9.24439393 +/- 0.00927645 (0.10%) (init = 9.25)
sigma: 1.15483177 +/- 0.01315708 (1.14%) (init = 1.35)
fwhm: 2.30966354 +/- 0.02631416 (1.14%) == '2.0000000*sigma'
height: 10.7421504 +/- 0.08634317 (0.80%) == '0.3183099*amplitude/max(1e-15, sigma)'

[[Correlations]]
+-----------+-----------+-----------+-----------+
| Variable | amplitude | center | sigma |
+-----------+-----------+-----------+-----------+
| amplitude | +1.0000 | -0.0002 | +0.7087 |
| center | -0.0002 | +1.0000 | -0.0002 |
| sigma | +0.7087 | -0.0002 | +1.0000 |
+-----------+-----------+-----------+-----------+
[[Model]]
Model(voigt)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 25
# data points = 401
# variables = 3
chi-square = 14.5448627
reduced chi-square = 0.03654488
Akaike info crit = -1324.00615
Bayesian info crit = -1312.02427
R-squared = 0.99537150
[[Variables]]
amplitude: 35.7553799 +/- 0.13861559 (0.39%) (init = 65.43358)
center: 9.24411179 +/- 0.00505496 (0.05%) (init = 9.25)
sigma: 0.73015485 +/- 0.00368473 (0.50%) (init = 0.8775)
gamma: 0.73015485 +/- 0.00368473 (0.50%) == 'sigma'
fwhm: 2.62949983 +/- 0.01326979 (0.50%) == '1.0692*gamma+sqrt(0.8664*gamma**2+5.545083*sigma**2)'
height: 10.2204068 +/- 0.03959933 (0.39%) == '(amplitude/(max(1e-15, sigma*sqrt(2*pi))))*wofz((1j*gamma)/(max(1e-15, sigma*sqrt(2)))).real'

[[Correlations]]
+-----------+-----------+-----------+-----------+
| Variable | amplitude | center | sigma |
+-----------+-----------+-----------+-----------+
| amplitude | +1.0000 | -0.0001 | +0.6513 |
| center | -0.0001 | +1.0000 | -0.0001 |
| sigma | +0.6513 | -0.0001 | +1.0000 |
+-----------+-----------+-----------+-----------+
[[Model]]
Model(voigt)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 25
# data points = 401
# variables = 3
chi-square = 14.5448627
reduced chi-square = 0.03654488
Akaike info crit = -1324.00615
Bayesian info crit = -1312.02427
R-squared = 0.99537150
[[Variables]]
amplitude: 35.7553799 +/- 0.13861559 (0.39%) (init = 65.43358)
center: 9.24411179 +/- 0.00505496 (0.05%) (init = 9.25)
sigma: 0.73015485 +/- 0.00368473 (0.50%) (init = 0.8775)
gamma: 0.73015485 +/- 0.00368473 (0.50%) == 'sigma'
fwhm: 2.62949983 +/- 0.01326979 (0.50%) == '1.0692*gamma+sqrt(0.8664*gamma**2+5.545083*sigma**2)'
height: 10.2204068 +/- 0.03959933 (0.39%) == '(amplitude/(max(1e-15, sigma*sqrt(2*pi))))*wofz((1j*gamma)/(max(1e-15, sigma*sqrt(2)))).real'

[[Correlations]]
+-----------+-----------+-----------+-----------+
| Variable | amplitude | center | sigma |
+-----------+-----------+-----------+-----------+
| amplitude | +1.0000 | -0.0001 | +0.6513 |
| center | -0.0001 | +1.0000 | -0.0001 |
| sigma | +0.6513 | -0.0001 | +1.0000 |
+-----------+-----------+-----------+-----------+


# <examples/doc_builtinmodels_peakmodels.py>
import matplotlib.pyplot as plt
from numpy import loadtxt

from lmfit.models import GaussianModel, LorentzianModel, VoigtModel

data = loadtxt('test_peak.dat')
x = data[:, 0]
y = data[:, 1]

# Gaussian model
mod = GaussianModel()
pars = mod.guess(y, x=x)
out = mod.fit(y, pars, x=x)

print(out.fit_report(correl_mode='table'))

plt.plot(x, y)
plt.plot(x, out.best_fit, '-', label='Gaussian Model')
plt.legend()
plt.show()

# Lorentzian model
mod = LorentzianModel()
pars = mod.guess(y, x=x)
out = mod.fit(y, pars, x=x)

print(out.fit_report(correl_mode='table'))

plt.figure()
plt.plot(x, y, '-')
plt.plot(x, out.best_fit, '-', label='Lorentzian Model')
plt.legend()
plt.show()

# Voigt model
mod = VoigtModel()
pars = mod.guess(y, x=x)
out = mod.fit(y, pars, x=x)

print(out.fit_report(correl_mode='table'))

fig, axes = plt.subplots(1, 2, figsize=(12.8, 4.8))

axes[0].plot(x, y, '-')
axes[0].plot(x, out.best_fit, '-', label='Voigt Model\ngamma constrained')
axes[0].legend()
# allow the gamma parameter to vary in the fit
pars['gamma'].vary = True
out_gamma = mod.fit(y, pars, x=x)
print(out.fit_report(correl_mode='table'))

axes[1].plot(x, y, '-')
axes[1].plot(x, out_gamma.best_fit, '-', label='Voigt Model\ngamma unconstrained')
axes[1].legend()

plt.show()
# <end examples/doc_builtinmodels_peakmodels.py>

Total running time of the script: ( 0 minutes 1.027 seconds)
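The three peak shapes can also be compared programmatically. A minimal sketch, reusing x and y from this script, that loops over the models and prints goodness-of-fit statistics (redchi and aic are standard ModelResult attributes):

for mod in (GaussianModel(), LorentzianModel(), VoigtModel()):
    pars = mod.guess(y, x=x)
    out = mod.fit(y, pars, x=x)
    print(f'{mod.name}: reduced chi-square = {out.redchi:.5f}, AIC = {out.aic:.1f}')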

14.1.18 doc_builtinmodels_nistgauss.py

[[Model]]
((Model(gaussian, prefix='g1_') + Model(gaussian, prefix='g2_')) + Model(exponential, prefix='exp_'))

[[Fit Statistics]]
# fitting method = leastsq
# function evals = 46
# data points = 250
# variables = 8
chi-square = 1247.52821
reduced chi-square = 5.15507524
Akaike info crit = 417.864631
Bayesian info crit = 446.036318
R-squared = 0.99648654
[[Variables]]
exp_amplitude: 99.0183278 +/- 0.53748593 (0.54%) (init = 162.2102)
exp_decay: 90.9508853 +/- 1.10310778 (1.21%) (init = 93.24905)
g1_amplitude: 4257.77360 +/- 42.3836478 (1.00%) (init = 2000)
g1_center: 107.030956 +/- 0.15006851 (0.14%) (init = 105)
g1_sigma: 16.6725772 +/- 0.16048381 (0.96%) (init = 15)
g1_fwhm: 39.2609181 +/- 0.37791049 (0.96%) == '2.3548200*g1_sigma'
g1_height: 101.880230 +/- 0.59217173 (0.58%) == '0.3989423*g1_amplitude/max(1e-15, g1_sigma)'
g2_amplitude: 2493.41735 +/- 36.1697789 (1.45%) (init = 2000)
g2_center: 153.270102 +/- 0.19466802 (0.13%) (init = 155)
g2_sigma: 13.8069464 +/- 0.18679695 (1.35%) (init = 15)
g2_fwhm: 32.5128735 +/- 0.43987320 (1.35%) == '2.3548200*g2_sigma'
g2_height: 72.0455941 +/- 0.61722243 (0.86%) == '0.3989423*g2_amplitude/max(1e-15, g2_sigma)'

[[Correlations]]
+---------------+---------------+---------------+---------------+---------------+---------------+---------------+---------------+---------------+
| Variable      | exp_amplitude | exp_decay     | g1_amplitude  | g1_center     | g1_sigma      | g2_amplitude  | g2_center     | g2_sigma      |
+---------------+---------------+---------------+---------------+---------------+---------------+---------------+---------------+---------------+
| exp_amplitude | +1.0000       | -0.6946       | +0.1478       | -0.0467       | +0.0218       | +0.2821       | +0.0331       | +0.1714       |
| exp_decay     | -0.6946       | +1.0000       | -0.5074       | +0.1055       | -0.2520       | -0.4270       | -0.1514       | -0.2329       |
| g1_amplitude  | +0.1478       | -0.5074       | +1.0000       | +0.4183       | +0.8243       | -0.3071       | +0.6477       | -0.4010       |
| g1_center     | -0.0467       | +0.1055       | +0.4183       | +1.0000       | +0.5075       | -0.6689       | +0.6205       | -0.6520       |
| g1_sigma      | +0.0218       | -0.2520       | +0.8243       | +0.5075       | +1.0000       | -0.4915       | +0.6842       | -0.4826       |
| g2_amplitude  | +0.2821       | -0.4270       | -0.3071       | -0.6689       | -0.4915       | +1.0000       | -0.4763       | +0.8154       |
| g2_center     | +0.0331       | -0.1514       | +0.6477       | +0.6205       | +0.6842       | -0.4763       | +1.0000       | -0.4889       |
| g2_sigma      | +0.1714       | -0.2329       | -0.4010       | -0.6520       | -0.4826       | +0.8154       | -0.4889       | +1.0000       |
+---------------+---------------+---------------+---------------+---------------+---------------+---------------+---------------+---------------+

# <examples/doc_builtinmodels_nistgauss.py>
import matplotlib.pyplot as plt
import numpy as np

from lmfit.models import ExponentialModel, GaussianModel

dat = np.loadtxt('NIST_Gauss2.dat')
x = dat[:, 1]
y = dat[:, 0]

exp_mod = ExponentialModel(prefix='exp_')
pars = exp_mod.guess(y, x=x)

gauss1 = GaussianModel(prefix='g1_')
pars.update(gauss1.make_params(center=dict(value=105, min=75, max=125),
                               sigma=dict(value=15, min=0),
                               amplitude=dict(value=2000, min=0)))

gauss2 = GaussianModel(prefix='g2_')
pars.update(gauss2.make_params(center=dict(value=155, min=125, max=175),
                               sigma=dict(value=15, min=0),
                               amplitude=dict(value=2000, min=0)))

mod = gauss1 + gauss2 + exp_mod

init = mod.eval(pars, x=x)

out = mod.fit(y, pars, x=x)

print(out.fit_report(correl_mode='table'))

fig, axes = plt.subplots(1, 2, figsize=(12.8, 4.8))

axes[0].plot(x, y)
axes[0].plot(x, init, '--', label='initial fit')
axes[0].plot(x, out.best_fit, '-', label='best fit')
axes[0].legend()

comps = out.eval_components(x=x)
axes[1].plot(x, y)
axes[1].plot(x, comps['g1_'], '--', label='Gaussian component 1')
axes[1].plot(x, comps['g2_'], '--', label='Gaussian component 2')
axes[1].plot(x, comps['exp_'], '--', label='Exponential component')
axes[1].legend()

plt.show()
# <end examples/doc_builtinmodels_nistgauss.py>

Total running time of the script: ( 0 minutes 0.525 seconds)
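After the fit, ModelResult.best_values holds a plain dictionary of the fitted parameter values, convenient for further calculations. A short sketch using the out object from this script:

print(out.best_values['g1_center'], out.best_values['g2_center'])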


14.1.19 doc_parameters_basic.py

[[Fit Statistics]]
# fitting method = leastsq
# function evals = 64
# data points = 301
# variables = 4
chi-square = 12.1867036
reduced chi-square = 0.04103267
Akaike info crit = -957.236198
Bayesian info crit = -942.407756
[[Variables]]
amp: 5.03088059 +/- 0.04005824 (0.80%) (init = 10)
decay: 0.02495457 +/- 4.5396e-04 (1.82%) (init = 0.1)
omega: 2.00026310 +/- 0.00326183 (0.16%) (init = 3)
shift: -0.10264952 +/- 0.01022294 (9.96%) (init = 0)
[[Correlations]] (unreported correlations are < 0.100)
C(omega, shift) = -0.7852
C(amp, decay) = +0.5840
C(amp, shift) = -0.1179


# <examples/doc_parameters_basic.py>
import numpy as np

from lmfit import Minimizer, Parameters, create_params, report_fit

# create data to be fitted
x = np.linspace(0, 15, 301)
np.random.seed(2021)
data = (5.0 * np.sin(2.0*x - 0.1) * np.exp(-x*x*0.025) +
        np.random.normal(size=x.size, scale=0.2))

# define objective function: returns the array to be minimized
def fcn2min(params, x, data):
    """Model a decaying sine wave and subtract data."""
    amp = params['amp']
    shift = params['shift']
    omega = params['omega']
    decay = params['decay']
    model = amp * np.sin(x*omega + shift) * np.exp(-x*x*decay)
    return model - data

# create a set of Parameters
params = Parameters()
params.add('amp', value=10, min=0)
params.add('decay', value=0.1)
params.add('shift', value=0.0, min=-np.pi/2., max=np.pi/2.)
params.add('omega', value=3.0)

# ... or use
params = create_params(amp=dict(value=10, min=0),
                       decay=0.1,
                       omega=3,
                       shift=dict(value=0, min=-np.pi/2, max=np.pi/2))

# do fit, here with the default leastsq algorithm
minner = Minimizer(fcn2min, params, fcn_args=(x, data))
result = minner.minimize()

# calculate final result
final = data + result.residual

# write error report
report_fit(result)

# try to plot results
try:
    import matplotlib.pyplot as plt
    plt.plot(x, data, '+')
    plt.plot(x, final)
    plt.show()
except ImportError:
    pass
# <end of examples/doc_parameters_basic.py>

Total running time of the script: ( 0 minutes 0.233 seconds)
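Individual results can also be pulled out of result.params directly; each Parameter carries its best-fit value and standard error. A minimal sketch:

amp = result.params['amp']
print(f'amp = {amp.value:.4f} +/- {amp.stderr:.4f}')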

14.1.20 doc_model_composite.py

[[Model]]
(Model(jump) <function convolve at 0x13bd86170> Model(gaussian))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 33
# data points = 201
# variables = 3
chi-square = 24.7562335
reduced chi-square = 0.12503148
Akaike info crit = -414.939746
Bayesian info crit = -405.029832
R-squared = 0.99632577
[[Variables]]
mid: 4 (fixed)
amplitude: 0.62508458 +/- 0.00189732 (0.30%) (init = 1)
center: 5.50853669 +/- 0.00973231 (0.18%) (init = 3.5)
sigma: 0.59576097 +/- 0.01348579 (2.26%) (init = 1.5)
[[Correlations]] (unreported correlations are < 0.100)
C(amplitude, center) = +0.3292
C(amplitude, sigma) = +0.2680


# <examples/doc_model_composite.py>
import matplotlib.pyplot as plt
import numpy as np

from lmfit import CompositeModel, Model
from lmfit.lineshapes import gaussian, step

# create data from broadened step
x = np.linspace(0, 10, 201)
y = step(x, amplitude=12.5, center=4.5, sigma=0.88, form='erf')
np.random.seed(0)
y = y + np.random.normal(scale=0.35, size=x.size)

def jump(x, mid):
    """Heaviside step function."""
    o = np.zeros(x.size)
    imid = max(np.where(x <= mid)[0])
    o[imid:] = 1.0
    return o

def convolve(arr, kernel):
    """Simple convolution of two arrays."""
    npts = min(arr.size, kernel.size)
    pad = np.ones(npts)
    tmp = np.concatenate((pad*arr[0], arr, pad*arr[-1]))
    out = np.convolve(tmp, kernel, mode='valid')
    noff = int((len(out) - npts) / 2)
    return out[noff:noff+npts]

# create Composite Model using the custom convolution operator
mod = CompositeModel(Model(jump), Model(gaussian), convolve)

# create parameters for model. Note that 'mid' and 'center' will be highly
# correlated. Since 'mid' is used as an integer index, it will be very
# hard to fit, so we fix its value
pars = mod.make_params(amplitude=dict(value=1, min=0),
                       center=3.5,
                       sigma=dict(value=1.5, min=0),
                       mid=dict(value=4, vary=False))

# fit this model to data array y
result = mod.fit(y, params=pars, x=x)

print(result.fit_report())

# generate components
comps = result.eval_components(x=x)

# plot results
fig, axes = plt.subplots(1, 2, figsize=(12.8, 4.8))

axes[0].plot(x, y, 'bo')
axes[0].plot(x, result.init_fit, 'k--', label='initial fit')
axes[0].plot(x, result.best_fit, 'r-', label='best fit')
axes[0].legend()

axes[1].plot(x, y, 'bo')
axes[1].plot(x, 10*comps['jump'], 'k--', label='Jump component')
axes[1].plot(x, 10*comps['gaussian'], 'r-', label='Gaussian component')
axes[1].legend()

plt.show()
# <end examples/doc_model_composite.py>

Total running time of the script: ( 0 minutes 0.494 seconds)
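The fitted composite model can be evaluated on any grid, not just the data points. A minimal sketch that draws a smoother curve through the fit, reusing the result object above:

xfine = np.linspace(0, 10, 1001)
yfine = result.eval(x=xfine)
plt.plot(xfine, yfine, label='fit on finer grid')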

14.1.21 doc_builtinmodels_splinemodel.py

[[Model]]
(Model(gaussian, prefix='peak_') + Model(spline_model, prefix='bkg_'))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 92
# data points = 501
# variables = 14
chi-square = 52.6611549
reduced chi-square = 0.10813379
Akaike info crit = -1100.61674
Bayesian info crit = -1041.58425
R-squared = 0.94690612
[[Variables]]
peak_amplitude: 12.2231138 +/- 0.29554074 (2.42%) (init = 8)
peak_center: 16.4280869 +/- 0.01091050 (0.07%) (init = 16)
peak_sigma: 0.72096402 +/- 0.01336666 (1.85%) (init = 1)
peak_fwhm: 1.69774050 +/- 0.03147609 (1.85%) == '2.3548200*peak_sigma'
peak_height: 6.76360675 +/- 0.09854036 (1.46%) == '0.3989423*peak_amplitude/max(1e-15, peak_sigma)'
bkg_s0: 3.51175736 +/- 0.04941392 (1.41%) (init = 3.787995)
bkg_s1: 3.72930068 +/- 0.09558236 (2.56%) (init = 3.959487)
bkg_s2: 4.26846495 +/- 0.12650286 (2.96%) (init = 4.384009)
bkg_s3: 4.42375491 +/- 0.10170203 (2.30%) (init = 4.431971)
bkg_s4: 4.49590447 +/- 0.10615551 (2.36%) (init = 4.243976)
bkg_s5: 3.96515316 +/- 0.09336554 (2.35%) (init = 4.115153)
bkg_s6: 3.35531898 +/- 0.12669983 (3.78%) (init = 3.965325)
bkg_s7: 2.89909737 +/- 0.16190201 (5.58%) (init = 2.788437)
bkg_s8: 2.82656972 +/- 0.13445491 (4.76%) (init = 2.984317)
bkg_s9: 3.43338674 +/- 0.15987280 (4.66%) (init = 3.383491)
bkg_s10: 3.73024845 +/- 0.12096864 (3.24%) (init = 3.791937)
[[Correlations]] (unreported correlations are < 0.300)
C(bkg_s7, bkg_s8) = -0.8192
C(peak_amplitude, peak_sigma) = +0.7987
C(bkg_s8, bkg_s9) = -0.7063
C(bkg_s5, bkg_s6) = -0.6950
C(peak_amplitude, bkg_s7) = -0.6878
C(bkg_s2, bkg_s3) = -0.6672
C(bkg_s9, bkg_s10) = -0.6060
C(bkg_s3, bkg_s4) = -0.5743
C(bkg_s1, bkg_s2) = -0.5646
C(bkg_s4, bkg_s5) = -0.5542
C(bkg_s7, bkg_s9) = +0.5216
C(peak_sigma, bkg_s7) = -0.5192
C(peak_amplitude, bkg_s8) = +0.5185
C(bkg_s0, bkg_s1) = +0.4448
C(peak_sigma, bkg_s8) = +0.3733
C(peak_center, bkg_s6) = +0.3599
C(bkg_s4, bkg_s6) = +0.3597
C(bkg_s0, bkg_s2) = -0.3595
C(bkg_s2, bkg_s4) = +0.3504
C(bkg_s8, bkg_s10) = +0.3455
C(bkg_s6, bkg_s7) = -0.3332
C(peak_center, bkg_s7) = -0.3301
C(peak_amplitude, bkg_s9) = -0.3206


# <examples/doc_builtinmodels_splinemodel.py>
import matplotlib.pyplot as plt
import numpy as np

from lmfit.models import GaussianModel, SplineModel

data = np.loadtxt('test_splinepeak.dat')
x = data[:, 0]
y = data[:, 1]

plt.plot(x, y, label='data')

model = GaussianModel(prefix='peak_')
params = model.make_params(amplitude=dict(value=8, min=0),
                           center=dict(value=16, min=5, max=25),
                           sigma=dict(value=1, min=0))

# make a background spline with knots evenly spaced over the background,
# but sort of skipping over where the peak is
knot_xvals3 = np.array([1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25])
knot_xvals2 = np.array([1, 3, 5, 7, 9, 11, 13, 16, 19, 21, 23, 25]) # noqa: E241
knot_xvals1 = np.array([1, 3, 5, 7, 9, 11, 13, 19, 21, 23, 25]) # noqa: E241

bkg = SplineModel(prefix='bkg_', xknots=knot_xvals1)
params.update(bkg.guess(y, x))

model = model + bkg

plt.plot(x, model.eval(params, x=x), label='initial')

out = model.fit(y, params, x=x)

print(out.fit_report(min_correl=0.3))
comps = out.eval_components()

plt.plot(x, out.best_fit, label='best fit')
plt.plot(x, comps['bkg_'], label='background')
plt.plot(x, comps['peak_'], label='peak')

knot_yvals = np.array([o.value for o in out.params.values() if o.name.startswith('bkg')])
plt.plot(knot_xvals1, knot_yvals, 'o', color='black', label='spline knots values')
plt.legend()
plt.show()

# knot positions          | peak amplitude
# 11, 13, 19, 21          | 12.223  0.295
# 11, 13, 16, 19, 21      | 11.746  0.594
# 11, 13, 15, 17, 19, 21  | 12.052  0.872


plt.plot(x, y, 'o', label='data')

for nknots in (10, 15, 20, 25, 30):
    model = SplineModel(prefix='bkg_', xknots=np.linspace(0, 25, nknots))
    params = model.guess(y, x)
    out = model.fit(y, params, x=x)
    plt.plot(x, out.best_fit, label=f'best-fit ({nknots} knots)')
plt.legend()
plt.show()

# <end examples/doc_builtinmodels_splinemodel.py>

Total running time of the script: ( 0 minutes 0.506 seconds)
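The knot-count comparison above can be made quantitative with an information criterion. A minimal sketch printing the AIC for each knot count (out.aic is part of every fit result):

for nknots in (10, 15, 20, 25, 30):
    model = SplineModel(prefix='bkg_', xknots=np.linspace(0, 25, nknots))
    out = model.fit(y, model.guess(y, x), x=x)
    print(f'{nknots} knots: AIC = {out.aic:.1f}')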

14.1.22 doc_confidence_advanced.py


[[Variables]]
a1: 2.98622095 +/- 0.14867027 (4.98%) (init = 2.986237)
a2: -4.33526363 +/- 0.11527574 (2.66%) (init = -4.335256)
t1: 1.30994276 +/- 0.13121215 (10.02%) (init = 1.309932)
t2: 11.8240337 +/- 0.46316956 (3.92%) (init = 11.82408)
[[Correlations]] (unreported correlations are < 0.500)
C(a2, t2) = +0.9871
C(a2, t1) = -0.9246
C(t1, t2) = -0.8805
C(a1, t1) = -0.5988
95.45% 68.27% _BEST_ 68.27% 95.45%
a1: -0.27285 -0.14165 2.98622 +0.16354 +0.36343
a2: -0.30440 -0.13219 -4.33526 +0.10689 +0.19684
t1: -0.23392 -0.12494 1.30994 +0.14660 +0.32369
t2: -1.01937 -0.48813 11.82403 +0.46045 +0.90439

# <examples/doc_confidence_advanced.py>
import matplotlib.pyplot as plt
import numpy as np

import lmfit

x = np.linspace(1, 10, 250)
np.random.seed(0)
y = 3.0*np.exp(-x/2) - 5.0*np.exp(-(x-0.1)/10.) + 0.1*np.random.randn(x.size)

p = lmfit.create_params(a1=4, a2=4, t1=3, t2=3)

def residual(p):
    return p['a1']*np.exp(-x/p['t1']) + p['a2']*np.exp(-(x-0.1)/p['t2']) - y

# create Minimizer
mini = lmfit.Minimizer(residual, p, nan_policy='propagate')

# first solve with Nelder-Mead algorithm
out1 = mini.minimize(method='Nelder')

# then solve with Levenberg-Marquardt using the
# Nelder-Mead solution as a starting point
out2 = mini.minimize(method='leastsq', params=out1.params)

lmfit.report_fit(out2.params, min_correl=0.5)

ci, trace = lmfit.conf_interval(mini, out2, sigmas=[1, 2], trace=True)
lmfit.printfuncs.report_ci(ci)

# plot data and best fit
plt.figure()
plt.plot(x, y)
plt.plot(x, residual(out2.params) + y, '-')
plt.show()

# plot confidence intervals (a1 vs t2 and a2 vs t2)
fig, axes = plt.subplots(1, 2, figsize=(12.8, 4.8))
cx, cy, grid = lmfit.conf_interval2d(mini, out2, 'a1', 't2', 30, 30)
ctp = axes[0].contourf(cx, cy, grid, np.linspace(0, 1, 11))
fig.colorbar(ctp, ax=axes[0])
axes[0].set_xlabel('a1')
axes[0].set_ylabel('t2')

cx, cy, grid = lmfit.conf_interval2d(mini, out2, 'a2', 't2', 30, 30)


ctp = axes[1].contourf(cx, cy, grid, np.linspace(0, 1, 11))
fig.colorbar(ctp, ax=axes[1])
axes[1].set_xlabel('a2')
axes[1].set_ylabel('t2')
plt.show()

# plot dependence between two parameters


fig, axes = plt.subplots(1, 2, figsize=(12.8, 4.8))

cx1, cy1, prob = trace['a1']['a1'], trace['a1']['t2'], trace['a1']['prob']
cx2, cy2, prob2 = trace['t2']['t2'], trace['t2']['a1'], trace['t2']['prob']

axes[0].scatter(cx1, cy1, c=prob, s=30)
axes[0].set_xlabel('a1')
axes[0].set_ylabel('t2')

axes[1].scatter(cx2, cy2, c=prob2, s=30)
axes[1].set_xlabel('t2')
axes[1].set_ylabel('a1')
plt.show()
# <end examples/doc_confidence_advanced.py>

Total running time of the script: ( 0 minutes 5.473 seconds)
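
Beyond the printed report, the ci dictionary returned by conf_interval can be used directly: each entry is a list of (sigma-level probability, parameter value) pairs, ordered from the lowest bound through the best-fit value to the highest. A minimal sketch, using the ci computed above:

for name, intervals in ci.items():
    best = intervals[len(intervals)//2][1]
    lo, hi = intervals[0][1], intervals[-1][1]
    print(f'{name}: {best:.5f}  (2-sigma range: {lo:.5f} to {hi:.5f})')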

14.1.23 doc_model_uncertainty2.py

[[Model]]
((Model(gaussian, prefix='g1_') + Model(gaussian, prefix='g2_')) + Model(exponential, prefix='bkg_'))


[[Fit Statistics]]
# fitting method = leastsq
# function evals = 55
# data points = 250
# variables = 8
chi-square = 1247.52821
reduced chi-square = 5.15507524
Akaike info crit = 417.864631
Bayesian info crit = 446.036318
R-squared = 0.99648654
[[Variables]]
g1_amplitude: 4257.77399 +/- 42.3838008 (1.00%) (init = 3000)
g1_center: 107.030957 +/- 0.15006868 (0.14%) (init = 100)
g1_sigma: 16.6725789 +/- 0.16048222 (0.96%) (init = 10)
g2_amplitude: 2493.41715 +/- 36.1696228 (1.45%) (init = 3000)
g2_center: 153.270104 +/- 0.19466723 (0.13%) (init = 150)
g2_sigma: 13.8069453 +/- 0.18680099 (1.35%) (init = 10)
bkg_amplitude: 99.0183280 +/- 0.53748639 (0.54%) (init = 100)
bkg_decay: 90.9508824 +/- 1.10310769 (1.21%) (init = 80)
g1_fwhm: 39.2609222 +/- 0.37790675 (0.96%) == '2.3548200*g1_sigma'
g1_height: 101.880228 +/- 0.59217122 (0.58%) == '0.3989423*g1_amplitude/max(1e-15, g1_sigma)'
g2_fwhm: 32.5128710 +/- 0.43988270 (1.35%) == '2.3548200*g2_sigma'
g2_height: 72.0455936 +/- 0.61721901 (0.86%) == '0.3989423*g2_amplitude/max(1e-15, g2_sigma)'
[[Correlations]] (unreported correlations are < 0.500)
C(g1_amplitude, g1_sigma) = +0.8243
C(g2_amplitude, g2_sigma) = +0.8154
C(bkg_amplitude, bkg_decay) = -0.6946
C(g1_sigma, g2_center) = +0.6842
C(g1_center, g2_amplitude) = -0.6689
C(g1_center, g2_sigma) = -0.6520
C(g1_amplitude, g2_center) = +0.6477
C(g1_center, g2_center) = +0.6205
C(g1_center, g1_sigma) = +0.5075
C(g1_amplitude, bkg_decay) = -0.5074

# <examples/doc_model_uncertainty2.py>
import matplotlib.pyplot as plt
import numpy as np

from lmfit.models import ExponentialModel, GaussianModel

dat = np.loadtxt('NIST_Gauss2.dat')
x = dat[:, 1]
y = dat[:, 0]

model = (GaussianModel(prefix='g1_') +
         GaussianModel(prefix='g2_') +
         ExponentialModel(prefix='bkg_'))

params = model.make_params(bkg_amplitude=100, bkg_decay=80,
                           g1_amplitude=3000, g1_center=100, g1_sigma=10,
                           g2_amplitude=3000, g2_center=150, g2_sigma=10)

result = model.fit(y, params, x=x)
print(result.fit_report(min_correl=0.5))

comps = result.eval_components(x=x)
dely = result.eval_uncertainty(sigma=3)

fig, axes = plt.subplots(2, 2, figsize=(12.8, 9.6))

axes[0][0].plot(x, y, 'o', color='#99002299', markersize=3, label='data')
axes[0][0].plot(x, result.best_fit, '-', label='best fit')
axes[0][0].plot(x, result.init_fit, '--', label='initial fit')
axes[0][0].set_title('data, initial fit, and best-fit')
axes[0][0].legend()

axes[0][1].plot(x, y, 'o', color='#99002299', markersize=3, label='data')
axes[0][1].plot(x, result.best_fit, '-', label='best fit')
axes[0][1].fill_between(x, result.best_fit-dely, result.best_fit+dely,
                        color="#8A8A8A", label=r'3-$\sigma$ band')
axes[0][1].set_title('data, best-fit, and uncertainty band')
axes[0][1].legend()

axes[1][0].plot(x, result.best_fit, '-', label=r'best fit, 3-$\sigma$ band')
axes[1][0].fill_between(x,
                        result.best_fit-result.dely,
                        result.best_fit+result.dely,
                        color="#8A8A8A")

axes[1][0].plot(x, comps['bkg_'], label=r'background, 3-$\sigma$ band')
axes[1][0].fill_between(x,
                        comps['bkg_']-result.dely_comps['bkg_'],
                        comps['bkg_']+result.dely_comps['bkg_'],
                        color="#8A8A8A")

axes[1][0].plot(x, comps['g1_'], label=r'Gaussian #1, 3-$\sigma$ band')
axes[1][0].fill_between(x,
                        comps['g1_']-result.dely_comps['g1_'],
                        comps['g1_']+result.dely_comps['g1_'],
                        color="#8A8A8A")

axes[1][0].plot(x, comps['g2_'], label=r'Gaussian #2, 3-$\sigma$ band')
axes[1][0].fill_between(x,
                        comps['g2_']-result.dely_comps['g2_'],
                        comps['g2_']+result.dely_comps['g2_'],
                        color="#8A8A8A")
axes[1][0].set_title('model components with uncertainty bands')
axes[1][0].legend()

axes[1][1].plot(x, result.best_fit, '-', label='best fit')
axes[1][1].plot(x, 10*result.dely, label=r'3-$\sigma$ total (x10)')
axes[1][1].plot(x, 10*result.dely_comps['bkg_'], label=r'3-$\sigma$ background (x10)')
axes[1][1].plot(x, 10*result.dely_comps['g1_'], label=r'3-$\sigma$ Gaussian #1 (x10)')
axes[1][1].plot(x, 10*result.dely_comps['g2_'], label=r'3-$\sigma$ Gaussian #2 (x10)')
axes[1][1].set_title('uncertainties for model components')
axes[1][1].legend()

plt.show()
# <end examples/doc_model_uncertainty2.py>

Total running time of the script: ( 0 minutes 1.170 seconds)
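
Note that eval_uncertainty() also stores its results on the ModelResult, which is why the plotting code above can read result.dely and result.dely_comps without recomputing them. A short check, assuming the result object from the fit above and lmfit's documented caching of these attributes:

dely1 = result.eval_uncertainty(sigma=1)     # 1-sigma band for the full model
print(np.allclose(dely1, result.dely))       # True: the band is cached on the result
print(sorted(result.dely_comps.keys()))      # per-component bands: ['bkg_', 'g1_', 'g2_']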

14.1.24 doc_fitting_emcee.py

[[Variables]]
a1: 2.98623689 +/- 0.15010519 (5.03%) (init = 4)
a2: -4.33525597 +/- 0.11765821 (2.71%) (init = 4)
t1: 1.30993186 +/- 0.13449653 (10.27%) (init = 3)
t2: 11.8240752 +/- 0.47172598 (3.99%) (init = 3)
[[Correlations]] (unreported correlations are < 0.500)
C(a2, t2) = +0.9876
C(a2, t1) = -0.9278
C(t1, t2) = -0.8852
C(a1, t1) = -0.6093
The chain is shorter than 50 times the integrated autocorrelation time for 5 parameter(s). Use this estimate with caution and run a longer chain!

N/50 = 20;
tau: [42.15955322 47.347426 48.71211873 46.7985718 40.89881208]

median of posterior probability distribution
--------------------------------------------
[[Variables]]
a1: 2.98945718 +/- 0.14033921 (4.69%) (init = 2.986237)
a2: -4.34687243 +/- 0.12131092 (2.79%) (init = -4.335256)
t1: 1.32883916 +/- 0.13766047 (10.36%) (init = 1.309932)
t2: 11.7836194 +/- 0.47719763 (4.05%) (init = 11.82408)
__lnsigma: -2.32559226 +/- 0.04542650 (1.95%) (init = -2.302585)
[[Correlations]] (unreported correlations are < 0.100)
C(a2, t2) = +0.9811
C(a2, t1) = -0.9377
C(t1, t2) = -0.8943
C(a1, t1) = -0.5076
C(a1, a2) = +0.2140
C(a1, t2) = +0.1777

Maximum Likelihood Estimation from emcee
-------------------------------------------------
Parameter MLE Value Median Value Uncertainty
a1 2.93839 2.98946 0.14034
a2 -4.35274 -4.34687 0.12131
t1 1.34310 1.32884 0.13766
t2 11.78782 11.78362 0.47720

Error Estimates from emcee
------------------------------------------------------
Parameter -2sigma -1sigma median +1sigma +2sigma
a1 -0.2656 -0.1362 2.9895 0.1445 0.3141
a2 -0.3209 -0.1309 -4.3469 0.1118 0.1985
t1 -0.2377 -0.1305 1.3288 0.1448 0.3278
t2 -1.0677 -0.4807 11.7836 0.4739 0.8990

# <examples/doc_fitting_emcee.py>
import numpy as np

import lmfit

try:
    import matplotlib.pyplot as plt
    HASPYLAB = True
except ImportError:
    HASPYLAB = False

try:
    import corner
    HASCORNER = True
except ImportError:
    HASCORNER = False

x = np.linspace(1, 10, 250)

np.random.seed(0)
y = (3.0*np.exp(-x/2) - 5.0*np.exp(-(x-0.1) / 10.) +
     0.1*np.random.randn(x.size))

p = lmfit.Parameters()
p.add_many(('a1', 4), ('a2', 4), ('t1', 3), ('t2', 3., True))

def residual(p):
    v = p.valuesdict()
    return v['a1']*np.exp(-x/v['t1']) + v['a2']*np.exp(-(x-0.1) / v['t2']) - y

mi = lmfit.minimize(residual, p, method='nelder', nan_policy='omit')
lmfit.printfuncs.report_fit(mi.params, min_correl=0.5)

if HASPYLAB:
    plt.figure()
    plt.plot(x, y, 'o')
    plt.plot(x, residual(mi.params) + y, label='best fit')
    plt.legend()
    plt.show()

# Place bounds on the ln(sigma) parameter that emcee will automatically add
# to estimate the true uncertainty in the data since is_weighted=False
mi.params.add('__lnsigma', value=np.log(0.1), min=np.log(0.001), max=np.log(2))

res = lmfit.minimize(residual, method='emcee', nan_policy='omit', burn=300,
                     steps=1000, thin=20, params=mi.params, is_weighted=False,
                     progress=False)

if HASPYLAB and HASCORNER:
    emcee_corner = corner.corner(res.flatchain, labels=res.var_names,
                                 truths=list(res.params.valuesdict().values()))
    plt.show()

if HASPYLAB:
    plt.plot(res.acceptance_fraction, 'o')
    plt.xlabel('walker')
    plt.ylabel('acceptance fraction')
    plt.show()

if hasattr(res, "acor"):
    print("Autocorrelation time for the parameters:")
    print("----------------------------------------")
    for i, par in enumerate(p):
        print(par, res.acor[i])

print("\nmedian of posterior probability distribution")


print('--------------------------------------------')
lmfit.report_fit(res.params)

# find the maximum likelihood solution
highest_prob = np.argmax(res.lnprob)
hp_loc = np.unravel_index(highest_prob, res.lnprob.shape)
mle_soln = res.chain[hp_loc]
for i, par in enumerate(p):
    p[par].value = mle_soln[i]

print('\nMaximum Likelihood Estimation from emcee ')
print('-------------------------------------------------')
print('Parameter MLE Value Median Value Uncertainty')
fmt = ' {:5s} {:11.5f} {:11.5f} {:11.5f}'.format
for name, param in p.items():
    print(fmt(name, param.value, res.params[name].value,
              res.params[name].stderr))

if HASPYLAB:
    plt.figure()
    plt.plot(x, y, 'o')
    plt.plot(x, residual(mi.params) + y, label='Nelder-Mead')
    plt.plot(x, residual(res.params) + y, '--', label='emcee')
    plt.legend()
    plt.show()

print('\nError Estimates from emcee ')
print('------------------------------------------------------')
print('Parameter -2sigma -1sigma median +1sigma +2sigma ')

for name in p.keys():
    quantiles = np.percentile(res.flatchain[name],
                              [2.275, 15.865, 50, 84.135, 97.725])
    median = quantiles[2]
    err_m2 = quantiles[0] - median
    err_m1 = quantiles[1] - median
    err_p1 = quantiles[3] - median
    err_p2 = quantiles[4] - median
    fmt = ' {:5s} {:8.4f} {:8.4f} {:8.4f} {:8.4f} {:8.4f}'.format
    print(fmt(name, err_m2, err_m1, median, err_p1, err_p2))

Total running time of the script: ( 0 minutes 7.875 seconds)
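
The report above warns that the chain is shorter than 50 times the integrated autocorrelation time. The usual remedy is simply a longer chain; a minimal sketch, reusing the residual and starting parameters from this example (the step counts here are illustrative only):

res_long = lmfit.minimize(residual, method='emcee', nan_policy='omit',
                          burn=1000, steps=5000, thin=20,
                          params=mi.params, is_weighted=False,
                          progress=False)
if hasattr(res_long, 'acor'):
    print('longest autocorrelation time:', max(res_long.acor))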

14.1.25 doc_confidence_chi2_maps.py

# <examples/doc_confidence_chi2_maps.py>
import matplotlib.pyplot as plt
import numpy as np

from lmfit import conf_interval, conf_interval2d, report_ci
from lmfit.lineshapes import gaussian
from lmfit.models import GaussianModel, LinearModel

sigma_levels = [1, 2, 3]

rng = np.random.default_rng(seed=102)

set up data – deliberately adding imperfections and a small amount of non-Gaussian noise

npts = 501
x = np.linspace(1, 100, num=npts)

noise = rng.normal(scale=0.3, size=npts) + 0.2*rng.f(3, 9, size=npts)

y = (gaussian(x, amplitude=83, center=47., sigma=5.)
     + 0.02*x + 4 + 0.25*np.cos((x-20)/8.0) + noise)

mod = GaussianModel() + LinearModel()

params = mod.make_params(amplitude=100, center=50, sigma=5,
                         slope=0, intercept=2)

out = mod.fit(y, params, x=x)
print(out.fit_report(correl_mode='table'))

[[Model]]
(Model(gaussian) + Model(linear))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 31
# data points = 501
# variables = 5
chi-square = 103.861381
reduced chi-square = 0.20939794
Akaike info crit = -778.348033
Bayesian info crit = -757.265003
R-squared = 0.93782756
[[Variables]]
amplitude: 78.8171374 +/- 1.21910939 (1.55%) (init = 100)
center: 47.0751649 +/- 0.07576660 (0.16%) (init = 50)
sigma: 4.93298753 +/- 0.07984021 (1.62%) (init = 5)
slope: 0.01839006 +/- 7.1957e-04 (3.91%) (init = 0)
intercept: 4.39234411 +/- 0.04420227 (1.01%) (init = 0)
fwhm: 11.6162977 +/- 0.18800933 (1.62%) == '2.3548200*sigma'
height: 6.37412722 +/- 0.08603873 (1.35%) == '0.3989423*amplitude/max(1e-15,␣
˓→sigma)'

[[Correlations]]
+-----------+-----------+-----------+-----------+-----------+-----------+
| Variable | amplitude | center | sigma | slope | intercept |
+-----------+-----------+-----------+-----------+-----------+-----------+
| amplitude | +1.0000 | -0.0074 | +0.6371 | +0.0721 | -0.3373 |
| center | -0.0074 | +1.0000 | -0.0048 | -0.1026 | +0.0864 |
| sigma | +0.6371 | -0.0048 | +1.0000 | +0.0459 | -0.2149 |
| slope | +0.0721 | -0.1026 | +0.0459 | +1.0000 | -0.8421 |
| intercept | -0.3373 | +0.0864 | -0.2149 | -0.8421 | +1.0000 |
+-----------+-----------+-----------+-----------+-----------+-----------+
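
The same correlations are also available programmatically: after a fit, each Parameter carries a correl dictionary mapping the other variable names to their correlation values. A one-line check, using the out result above:

print(out.params['slope'].correl['intercept'])   # about -0.84, matching the table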

run conf_interval, print report

ci = conf_interval(out, out, sigmas=sigma_levels)

print("## Confidence Report:")
report_ci(ci)

## Confidence Report:
99.73% 95.45% 68.27% _BEST_ 68.27% 95.45% 99.73%
amplitude: -3.62610 -2.41983 -1.21237 78.81714 +1.22111 +2.45479 +3.70515
center : -0.22849 -0.15214 -0.07584 47.07516 +0.07587 +0.15225 +0.22873
sigma : -0.23335 -0.15640 -0.07870 4.93299 +0.08000 +0.16158 +0.24509
slope : -0.00217 -0.00144 -0.00072 0.01839 +0.00072 +0.00144 +0.00217
intercept: -0.13326 -0.08860 -0.04423 4.39234 +0.04421 +0.08854 +0.13312

plot initial fit

colors = ('#2030b0', '#b02030', '#207070')

fig, axes = plt.subplots(2, 3, figsize=(15, 9.5))

axes[0, 0].plot(x, y, 'o', markersize=3, label='data', color=colors[0])
axes[0, 0].plot(x, out.best_fit, label='fit', color=colors[1])
axes[0, 0].set_xlabel('x')
axes[0, 0].set_ylabel('y')
axes[0, 0].legend()

aix, aiy = 0, 0
nsamples = 50
explicitly_calculate_sigma = True

for pairs in (('sigma', 'amplitude'), ('intercept', 'amplitude'),
              ('slope', 'intercept'), ('slope', 'center'), ('sigma', 'center')):

    xpar, ypar = pairs
    if explicitly_calculate_sigma:
        print("Generating chi-square map for ", pairs)
        c_x, c_y, chi2_mat = conf_interval2d(out, out, xpar, ypar,
                                             nsamples, nsamples, nsigma=3.5,
                                             chi2_out=True)
        # explicitly calculate the sigma matrix: sigma increases chi-square
        # from chi_square_best to chi_square_best + sigma**2 * reduced_chi_square,
        # so: sigma = sqrt((chi2 - chi2_best) / reduced_chi_square)
        chi2_min = chi2_mat.min()
        sigma_mat = np.sqrt((chi2_mat-chi2_min)/out.redchi)
    else:
        print("Generating sigma map for ", pairs)
        # or, just calculate the matrix of probabilities directly:
        c_x, c_y, sigma_mat = conf_interval2d(out, out, xpar, ypar,
                                              nsamples, nsamples, nsigma=3.5)

    aix += 1
    if aix == 2:
        aix = 0
        aiy += 1
    ax = axes[aix, aiy]

    cnt = ax.contour(c_x, c_y, sigma_mat, levels=sigma_levels, colors=colors,
                     linestyles='-')
    ax.clabel(cnt, inline=True, fmt=r"$\sigma=%.0f$", fontsize=13)

    # draw boxes for estimated uncertainties:
    #   dotted: scaled stderr from initial fit
    #   dashed: values found from conf_interval()
    xv = out.params[xpar].value
    xs = out.params[xpar].stderr
    yv = out.params[ypar].value
    ys = out.params[ypar].stderr

    cix = ci[xpar]
    ciy = ci[ypar]
    nc = len(sigma_levels)

    for i in sigma_levels:
        # dotted line: scaled stderr
        ax.plot((xv-i*xs, xv+i*xs, xv+i*xs, xv-i*xs, xv-i*xs),
                (yv-i*ys, yv-i*ys, yv+i*ys, yv+i*ys, yv-i*ys),
                linestyle='dotted', color=colors[i-1])

        # dashed line: refined uncertainties from conf_interval
        xsp, xsm = cix[nc+i][1], cix[nc-i][1]
        ysp, ysm = ciy[nc+i][1], ciy[nc-i][1]
        ax.plot((xsm, xsp, xsp, xsm, xsm), (ysm, ysm, ysp, ysp, ysm),
                linestyle='dashed', color=colors[i-1])

    ax.set_xlabel(xpar)
    ax.set_ylabel(ypar)
    ax.grid(True, color='#d0d0d0')
plt.show()
# <end examples/doc_confidence_chi2_maps.py>

Generating chi-square map for ('sigma', 'amplitude')
Generating chi-square map for ('intercept', 'amplitude')
Generating chi-square map for ('slope', 'intercept')
Generating chi-square map for ('slope', 'center')
Generating chi-square map for ('sigma', 'center')

Total running time of the script: ( 1 minutes 4.966 seconds)
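
The sigma maps contoured above can also be converted to two-sided confidence probabilities, since P(|Z| < sigma) = erf(sigma/sqrt(2)). A minimal sketch, using the sigma_mat from the last loop iteration above:

from scipy.special import erf

prob_mat = erf(sigma_mat / np.sqrt(2))   # ~0.683 at sigma=1, ~0.954 at sigma=2
print(prob_mat.min(), prob_mat.max())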
