:mod:`statistics` --- Mathematical statistics functions
=======================================================

.. module:: statistics
   :synopsis: Mathematical statistics functions

.. moduleauthor:: Steven D'Aprano <steve+python@pearwood.info>
.. sectionauthor:: Steven D'Aprano <steve+python@pearwood.info>

.. versionadded:: 3.4

**Source code:** :source:`Lib/statistics.py`

.. testsetup:: *

   from statistics import *
   import math
   __name__ = '<doctest>'

--------------

This module provides functions for calculating mathematical statistics of
numeric (:class:`~numbers.Real`-valued) data.

The module is not intended to be a competitor to third-party libraries such
as `NumPy <https://numpy.org>`_, `SciPy <https://scipy.org/>`_, or
proprietary full-featured statistics packages aimed at professional
statisticians such as Minitab, SAS and Matlab.  It is aimed at the level of
graphing and scientific calculators.

Unless explicitly noted, these functions support :class:`int`,
:class:`float`, :class:`~decimal.Decimal` and :class:`~fractions.Fraction`.
Behaviour with other types (whether in the numeric tower or not) is
currently unsupported.  Collections with a mix of types are also undefined
and implementation-dependent.  If your input data consists of mixed types,
you may be able to use :func:`map` to ensure a consistent result, for
example: ``map(float, input_data)``.
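
For instance, a list mixing ``int``, ``float``, and ``Fraction`` values can be
normalized to ``float`` before averaging (a small illustration, not one of the
module's original examples):

.. doctest::

   >>> from fractions import Fraction
   >>> mixed = [1, Fraction(3, 4), 2.5]
   >>> round(mean(map(float, mixed)), 2)
   1.42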

Some datasets use ``NaN`` (not a number) values to represent missing data.
Since NaNs have unusual comparison semantics, they cause surprising or
undefined behaviors in the statistics functions that sort data or that count
occurrences.  The functions affected are ``median()``, ``median_low()``,
``median_high()``, ``median_grouped()``, ``mode()``, ``multimode()``, and
``quantiles()``.  The ``NaN`` values should be stripped before calling these
functions::

    >>> from statistics import median
    >>> from math import isnan
    >>> from itertools import filterfalse

    >>> data = [20.7, float('NaN'), 19.2, 18.3, float('NaN'), 14.4]
    >>> sorted(data)  # This has surprising behavior
    [20.7, nan, 14.4, 18.3, 19.2, nan]
    >>> median(data)  # This result is unexpected
    16.35

    >>> sum(map(isnan, data))    # Number of missing values
    2
    >>> clean = list(filterfalse(isnan, data))  # Strip NaN values
    >>> clean
    [20.7, 19.2, 18.3, 14.4]
    >>> sorted(clean)  # Sorting now works as expected
    [14.4, 18.3, 19.2, 20.7]
    >>> median(clean)  # This result is now well defined
    18.75


Averages and measures of central location
-----------------------------------------

These functions calculate an average or typical value from a population
or sample.

======================= ===============================================================
:func:`mean`            Arithmetic mean ("average") of data.
:func:`fmean`           Fast, floating point arithmetic mean, with optional weighting.
:func:`geometric_mean`  Geometric mean of data.
:func:`harmonic_mean`   Harmonic mean of data.
:func:`kde`             Estimate the probability density distribution of the data.
:func:`kde_random`      Random sampling from the PDF generated by kde().
:func:`median`          Median (middle value) of data.
:func:`median_low`      Low median of data.
:func:`median_high`     High median of data.
:func:`median_grouped`  Median (50th percentile) of grouped data.
:func:`mode`            Single mode (most common value) of discrete or nominal data.
:func:`multimode`       List of modes (most common values) of discrete or nominal data.
:func:`quantiles`       Divide data into intervals with equal probability.
======================= ===============================================================


Measures of spread
------------------

These functions calculate a measure of how much the population or sample
tends to deviate from the typical or average values.

======================= =============================================
:func:`pstdev`          Population standard deviation of data.
:func:`pvariance`       Population variance of data.
:func:`stdev`           Sample standard deviation of data.
:func:`variance`        Sample variance of data.
======================= =============================================

Statistics for relations between two inputs
-------------------------------------------

These functions calculate statistics regarding relations between two inputs.

========================= =====================================================
:func:`covariance`        Sample covariance for two variables.
:func:`correlation`       Pearson and Spearman's correlation coefficients.
:func:`linear_regression` Slope and intercept for simple linear regression.
========================= =====================================================


Function details
----------------

Note: The functions do not require the data given to them to be sorted.
However, for reading convenience, most of the examples show sorted sequences.

.. function:: mean(data)

   Return the sample arithmetic mean of *data* which can be a sequence or iterable.

   The arithmetic mean is the sum of the data divided by the number of data
   points.  It is commonly called "the average", although it is only one of many
   different mathematical averages.  It is a measure of the central location of
   the data.

   If *data* is empty, :exc:`StatisticsError` will be raised.

   Some examples of use:

   .. doctest::

      >>> mean([1, 2, 3, 4, 4])
      2.8
      >>> mean([-1.0, 2.5, 3.25, 5.75])
      2.625

      >>> from fractions import Fraction as F
      >>> mean([F(3, 7), F(1, 21), F(5, 3), F(1, 3)])
      Fraction(13, 21)

      >>> from decimal import Decimal as D
      >>> mean([D("0.5"), D("0.75"), D("0.625"), D("0.375")])
      Decimal('0.5625')

   .. note::

      The mean is strongly affected by `outliers
      <https://en.wikipedia.org/wiki/Outlier>`_ and is not necessarily a
      typical example of the data points.  For a more robust, although less
      efficient, measure of `central tendency
      <https://en.wikipedia.org/wiki/Central_tendency>`_, see :func:`median`.

   The sample mean gives an unbiased estimate of the true population mean,
   so that when taken on average over all the possible samples,
   ``mean(sample)`` converges on the true mean of the entire population.  If
   *data* represents the entire population rather than a sample, then
   ``mean(data)`` is equivalent to calculating the true population mean μ.

.. function:: fmean(data, weights=None)

   Convert *data* to floats and compute the arithmetic mean.

   This runs faster than the :func:`mean` function and it always returns a
   :class:`float`.  The *data* may be a sequence or iterable.  If the input
   dataset is empty, raises a :exc:`StatisticsError`.

   .. doctest::

      >>> fmean([3.5, 4.0, 5.25])
      4.25

   Optional weighting is supported.  For example, a professor assigns a
   grade for a course by weighting quizzes at 20%, homework at 20%, a
   midterm exam at 30%, and a final exam at 30%:

   .. doctest::

      >>> grades = [85, 92, 83, 91]
      >>> weights = [0.20, 0.20, 0.30, 0.30]
      >>> fmean(grades, weights)
      87.6

   If *weights* is supplied, it must be the same length as the *data* or
   a :exc:`ValueError` will be raised.

   .. versionadded:: 3.8

   .. versionchanged:: 3.11
      Added support for *weights*.

.. function:: geometric_mean(data)

   Convert *data* to floats and compute the geometric mean.

   The geometric mean indicates the central tendency or typical value of the
   *data* using the product of the values (as opposed to the arithmetic mean
   which uses their sum).

   Raises a :exc:`StatisticsError` if the input dataset is empty,
   if it contains a zero, or if it contains a negative value.
   The *data* may be a sequence or iterable.

   No special efforts are made to achieve exact results.
   (However, this may change in the future.)

   .. doctest::

      >>> round(geometric_mean([54, 24, 36]), 1)
      36.0

   .. versionadded:: 3.8

.. function:: harmonic_mean(data, weights=None)

   Return the harmonic mean of *data*, a sequence or iterable of
   real-valued numbers.  If *weights* is omitted or *None*, then
   equal weighting is assumed.

   The harmonic mean is the reciprocal of the arithmetic :func:`mean` of the
   reciprocals of the data.  For example, the harmonic mean of three values *a*,
   *b* and *c* will be equivalent to ``3/(1/a + 1/b + 1/c)``.  If one of the
   values is zero, the result will be zero.
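
   The harmonic mean of ``2``, ``4``, and ``8`` can be checked against the
   formula directly (an illustrative sketch, not one of the module's original
   examples):

   .. doctest::

      >>> round(harmonic_mean([2, 4, 8]), 4)
      3.4286
      >>> round(3 / (1/2 + 1/4 + 1/8), 4)
      3.4286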

   The harmonic mean is a type of average, a measure of the central
   location of the data.  It is often appropriate when averaging
   ratios or rates, for example speeds.

   Suppose a car travels 10 km at 40 km/hr, then another 10 km at 60 km/hr.
   What is the average speed?

   .. doctest::

      >>> harmonic_mean([40, 60])
      48.0

   Suppose a car travels 40 km/hr for 5 km, and when traffic clears,
   speeds up to 60 km/hr for the remaining 30 km of the journey.  What
   is the average speed?

   .. doctest::

      >>> harmonic_mean([40, 60], weights=[5, 30])
      56.0

   :exc:`StatisticsError` is raised if *data* is empty, any element
   is less than zero, or if the weighted sum isn't positive.

   The current algorithm has an early-out when it encounters a zero
   in the input.  This means that the subsequent inputs are not tested
   for validity.  (This behavior may change in the future.)
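
   For instance, a zero short-circuits the computation, so an invalid
   negative value appearing after the zero goes unnoticed (an illustration of
   the current behavior, which may change):

   .. doctest::

      >>> harmonic_mean([3, 0, -2])
      0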

   .. versionadded:: 3.6

   .. versionchanged:: 3.10
      Added support for *weights*.

.. function:: kde(data, h, kernel='normal', *, cumulative=False)

   `Kernel Density Estimation (KDE)
   <https://www.itm-conferences.org/articles/itmconf/pdf/2018/08/itmconf_sam2018_00037.pdf>`_:
   Create a continuous probability density function or cumulative
   distribution function from discrete samples.

   The basic idea is to smooth the data using `a kernel function
   <https://en.wikipedia.org/wiki/Kernel_(statistics)>`_
   to help draw inferences about a population from a sample.

   The degree of smoothing is controlled by the scaling parameter *h*
   which is called the bandwidth.  Smaller values emphasize local
   features while larger values give smoother results.

   The *kernel* determines the relative weights of the sample data
   points.  Generally, the choice of kernel shape does not matter
   as much as the more influential bandwidth smoothing parameter.

   Kernels that give some weight to every sample point include
   *normal* (*gauss*), *logistic*, and *sigmoid*.

   Kernels that only give weight to sample points within the bandwidth
   include *rectangular* (*uniform*), *triangular*, *parabolic*
   (*epanechnikov*), *quartic* (*biweight*), *triweight*, and *cosine*.

   If *cumulative* is true, will return a cumulative distribution function.

   A :exc:`StatisticsError` will be raised if the *data* sequence is empty.

   `Wikipedia has an example
   <https://en.wikipedia.org/wiki/Kernel_density_estimation#Example>`_
   where we can use :func:`kde` to generate and plot a probability
   density function estimated from a small sample:

   .. doctest::

      >>> sample = [-2.1, -1.3, -0.4, 1.9, 5.1, 6.2]
      >>> f_hat = kde(sample, h=1.5)
      >>> xarr = [i/100 for i in range(-750, 1100)]
      >>> yarr = [f_hat(x) for x in xarr]

   The points in ``xarr`` and ``yarr`` can be used to make a PDF plot:

   .. image:: kde_example.png
      :alt: Scatter plot of the estimated probability density function.

   .. versionadded:: 3.13

.. function:: kde_random(data, h, kernel='normal', *, seed=None)

   Return a function that makes a random selection from the estimated
   probability density function produced by ``kde(data, h, kernel)``.

   Providing a *seed* allows reproducible selections.  In the future, the
   values may change slightly as more accurate kernel inverse CDF estimates
   are implemented.  The seed may be an integer, float, str, or bytes.

   A :exc:`StatisticsError` will be raised if the *data* sequence is empty.

   Continuing the example for :func:`kde`, we can use
   :func:`kde_random` to generate new random selections from an
   estimated probability density function:

      >>> data = [-2.1, -1.3, -0.4, 1.9, 5.1, 6.2]
      >>> rand = kde_random(data, h=1.5, seed=8675309)
      >>> new_selections = [rand() for i in range(10)]
      >>> [round(x, 1) for x in new_selections]
      [0.7, 6.2, 1.2, 6.9, 7.0, 1.8, 2.5, -0.5, -1.8, 5.6]

   .. versionadded:: 3.13

.. function:: median(data)

   Return the median (middle value) of numeric data, using the common "mean of
   middle two" method.  If *data* is empty, :exc:`StatisticsError` is raised.
   *data* can be a sequence or iterable.

   The median is a robust measure of central location and is less affected by
   the presence of outliers.  When the number of data points is odd, the
   middle data point is returned:

   .. doctest::

      >>> median([1, 3, 5])
      3

   When the number of data points is even, the median is interpolated by taking
   the average of the two middle values:

   .. doctest::

      >>> median([1, 3, 5, 7])
      4.0

   This is suited for when your data is discrete, and you don't mind that the
   median may not be an actual data point.

   If the data is ordinal (supports order operations) but not numeric (doesn't
   support addition), consider using :func:`median_low` or :func:`median_high`
   instead.

.. function:: median_low(data)

   Return the low median of numeric data.  If *data* is empty,
   :exc:`StatisticsError` is raised.  *data* can be a sequence or iterable.

   The low median is always a member of the data set.  When the number of data
   points is odd, the middle value is returned.  When it is even, the smaller of
   the two middle values is returned.

   .. doctest::

      >>> median_low([1, 3, 5])
      3
      >>> median_low([1, 3, 5, 7])
      3

   Use the low median when your data are discrete and you prefer the median to
   be an actual data point rather than interpolated.

.. function:: median_high(data)

   Return the high median of data.  If *data* is empty, :exc:`StatisticsError`
   is raised.  *data* can be a sequence or iterable.

   The high median is always a member of the data set.  When the number of data
   points is odd, the middle value is returned.  When it is even, the larger of
   the two middle values is returned.

   .. doctest::

      >>> median_high([1, 3, 5])
      3
      >>> median_high([1, 3, 5, 7])
      5

   Use the high median when your data are discrete and you prefer the median to
   be an actual data point rather than interpolated.

.. function:: median_grouped(data, interval=1.0)

   Estimates the median for numeric data that has been `grouped or binned
   <https://en.wikipedia.org/wiki/Data_binning>`_ around the midpoints
   of consecutive, fixed-width intervals.

   The *data* can be any iterable of numeric data with each value being
   exactly the midpoint of a bin.  At least one value must be present.

   The *interval* is the width of each bin.

   For example, demographic information may have been summarized into
   consecutive ten-year age groups with each group being represented
   by the 5-year midpoints of the intervals:

   .. doctest::

      >>> from collections import Counter
      >>> demographics = Counter({
      ...    25: 172,   # 20 to 30 years old
      ...    35: 484,   # 30 to 40 years old
      ...    45: 387,   # 40 to 50 years old
      ...    55:  22,   # 50 to 60 years old
      ...    65:   6,   # 60 to 70 years old
      ... })
      ...

   The 50th percentile (median) is the 536th person out of the 1071
   member cohort.  That person is in the 30 to 40 year old age group.

   The regular :func:`median` function would assume that everyone in the
   tricenarian age group was exactly 35 years old.  A more tenable
   assumption is that the 484 members of that age group are evenly
   distributed between 30 and 40.  For that, we use
   :func:`median_grouped`:

   .. doctest::

      >>> data = list(demographics.elements())
      >>> median(data)
      35
      >>> round(median_grouped(data, interval=10), 1)
      37.5

   The caller is responsible for making sure the data points are separated
   by exact multiples of *interval*.  This is essential for getting a
   correct result.  The function does not check this precondition.

   Inputs may be any numeric type that can be coerced to a float during
   the interpolation step.

.. function:: mode(data)

   Return the single most common data point from discrete or nominal *data*.
   The mode (when it exists) is the most typical value and serves as a
   measure of central location.

   If there are multiple modes with the same frequency, returns the first one
   encountered in the *data*.  If the smallest or largest of those is
   desired instead, use ``min(multimode(data))`` or ``max(multimode(data))``.
   If the input *data* is empty, :exc:`StatisticsError` is raised.
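
   For example, when ``3`` and ``2`` are tied for most common, the first one
   encountered wins, while :func:`multimode` exposes every mode (a small
   illustration, not one of the module's original examples):

   .. doctest::

      >>> mode([3, 3, 2, 2, 1])
      3
      >>> min(multimode([3, 3, 2, 2, 1]))
      2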

   ``mode`` assumes discrete data and returns a single value.  This is the
   standard treatment of the mode as commonly taught in schools:

   .. doctest::

      >>> mode([1, 1, 2, 3, 3, 3, 3, 4])
      3

   The mode is unique in that it is the only statistic in this package that
   also applies to nominal (non-numeric) data:

   .. doctest::

      >>> mode(["red", "blue", "blue", "red", "green", "red", "red"])
      'red'

   .. versionchanged:: 3.8
      Now handles multimodal datasets by returning the first mode encountered.
      Formerly, it raised :exc:`StatisticsError` when more than one mode was
      found.

.. function:: multimode(data)

   Return a list of the most frequently occurring values in the order they
   were first encountered in the *data*.  Will return more than one result if
   there are multiple modes or an empty list if the *data* is empty:

   .. doctest::

      >>> multimode('aabbbbccddddeeffffgg')
      ['b', 'd', 'f']
      >>> multimode('')
      []

   .. versionadded:: 3.8

.. function:: pstdev(data, mu=None)

   Return the population standard deviation (the square root of the population
   variance).  See :func:`pvariance` for arguments and other details.

   .. doctest::

      >>> pstdev([1.5, 2.5, 2.5, 2.75, 3.25, 4.75])
      0.986893273527251

.. function:: pvariance(data, mu=None)

   Return the population variance of *data*, a non-empty sequence or iterable
   of real-valued numbers.  Variance, or second moment about the mean, is a
   measure of the variability (spread or dispersion) of data.  A large
   variance indicates that the data is spread out; a small variance indicates
   it is clustered closely around the mean.

   If the optional second argument *mu* is given, it should be the *population*
   mean of the *data*.  It can also be used to compute the second moment around
   a point that is not the mean.  If it is missing or ``None`` (the default),
   the arithmetic mean is automatically calculated.
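
   For instance, the second moment of ``[2, 4]`` about zero is
   ``(2² + 4²) / 2 = 10`` (an illustrative example, not one of the module's
   original examples):

   .. doctest::

      >>> pvariance([2, 4], mu=0.0)
      10.0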

   Use this function to calculate the variance from the entire population.  To
   estimate the variance from a sample, the :func:`variance` function is usually
   a better choice.

   Raises :exc:`StatisticsError` if *data* is empty.

   Examples:

   .. doctest::

      >>> data = [0.0, 0.25, 0.25, 1.25, 1.5, 1.75, 2.75, 3.25]
      >>> pvariance(data)
      1.25

   If you have already calculated the mean of your data, you can pass it as the
   optional second argument *mu* to avoid recalculation:

   .. doctest::

      >>> mu = mean(data)
      >>> pvariance(data, mu)
      1.25

   Decimals and Fractions are supported:

   .. doctest::

      >>> from decimal import Decimal as D
      >>> pvariance([D("27.5"), D("30.25"), D("30.25"), D("34.5"), D("41.75")])
      Decimal('24.815')

      >>> from fractions import Fraction as F
      >>> pvariance([F(1, 4), F(5, 4), F(1, 2)])
      Fraction(13, 72)

   .. note::

      When called with the entire population, this gives the population variance
      σ².  When called on a sample instead, this is the biased sample variance
      s², also known as variance with N degrees of freedom.

      If you somehow know the true population mean μ, you may use this
      function to calculate the variance of a sample, giving the known
      population mean as the second argument.  Provided the data points are a
      random sample of the population, the result will be an unbiased estimate
      of the population variance.

.. function:: stdev(data, xbar=None)

   Return the sample standard deviation (the square root of the sample
   variance).  See :func:`variance` for arguments and other details.

   .. doctest::

      >>> stdev([1.5, 2.5, 2.5, 2.75, 3.25, 4.75])
      1.0810874155219827

.. function:: variance(data, xbar=None)

   Return the sample variance of *data*, an iterable of at least two real-valued
   numbers.  Variance, or second moment about the mean, is a measure of the
   variability (spread or dispersion) of data.  A large variance indicates that
   the data is spread out; a small variance indicates it is clustered closely
   around the mean.

   If the optional second argument *xbar* is given, it should be the *sample*
   mean of *data*.  If it is missing or ``None`` (the default), the mean is
   automatically calculated.

   Use this function when your data is a sample from a population.  To calculate
   the variance from the entire population, see :func:`pvariance`.

   Raises :exc:`StatisticsError` if *data* has fewer than two values.

   Examples:

   .. doctest::

      >>> data = [2.75, 1.75, 1.25, 0.25, 0.5, 1.25, 3.5]
      >>> variance(data)
      1.3720238095238095

   If you have already calculated the sample mean of your data, you can pass it
   as the optional second argument *xbar* to avoid recalculation:

   .. doctest::

      >>> m = mean(data)
      >>> variance(data, m)
      1.3720238095238095

   This function does not attempt to verify that you have passed the actual mean
   as *xbar*.  Using arbitrary values for *xbar* can lead to invalid or
   impossible results.

   Decimal and Fraction values are supported:

   .. doctest::

      >>> from decimal import Decimal as D
      >>> variance([D("27.5"), D("30.25"), D("30.25"), D("34.5"), D("41.75")])
      Decimal('31.01875')

      >>> from fractions import Fraction as F
      >>> variance([F(1, 6), F(1, 2), F(5, 3)])
      Fraction(67, 108)

   .. note::

      This is the sample variance s² with Bessel's correction, also known as
      variance with N-1 degrees of freedom.  Provided that the data points are
      representative (e.g. independent and identically distributed), the result
      should be an unbiased estimate of the true population variance.

      If you somehow know the actual population mean μ you should pass it to the
      :func:`pvariance` function as the *mu* parameter to get the variance of a
      sample.

.. function:: quantiles(data, *, n=4, method='exclusive')

   Divide *data* into *n* continuous intervals with equal probability.
   Returns a list of ``n - 1`` cut points separating the intervals.

   Set *n* to 4 for quartiles (the default).  Set *n* to 10 for deciles.  Set
   *n* to 100 for percentiles, which gives the 99 cut points that separate
   *data* into 100 equal-sized groups.  Raises :exc:`StatisticsError` if *n*
   is not at least 1.

   The *data* can be any iterable containing sample data.  For meaningful
   results, the number of data points in *data* should be larger than *n*.
   Raises :exc:`StatisticsError` if there is not at least one data point.

   The cut points are linearly interpolated from the
   two nearest data points.  For example, if a cut point falls one-third
   of the distance between two sample values, ``100`` and ``112``, the
   cut-point will evaluate to ``104``.

   The *method* for computing quantiles can be varied depending on
   whether the *data* includes or excludes the lowest and
   highest possible values from the population.

   The default *method* is "exclusive" and is used for data sampled from
   a population that can have more extreme values than found in the
   samples.  The portion of the population falling below the *i-th* of
   *m* sorted data points is computed as ``i / (m + 1)``.  Given nine
   sample values, the method sorts them and assigns the following
   percentiles: 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%.

   Setting the *method* to "inclusive" is used for describing population
   data or for samples that are known to include the most extreme values
   from the population.  The minimum value in *data* is treated as the 0th
   percentile and the maximum value is treated as the 100th percentile.
|
|
|
|
The portion of the population falling below the *i-th* of *m* sorted
|
|
|
|
data points is computed as ``(i - 1) / (m - 1)``. Given 11 sample
|
|
|
|
values, the method sorts them and assigns the following percentiles:
|
|
|
|
0%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, 100%.
|
2019-05-18 14:18:29 -03:00
|
|
|
|
2019-04-23 04:06:35 -03:00
|
|
|
.. doctest::
|
|
|
|
|
|
|
|
# Decile cut points for empirically sampled data
|
|
|
|
>>> data = [105, 129, 87, 86, 111, 111, 89, 81, 108, 92, 110,
|
|
|
|
... 100, 75, 105, 103, 109, 76, 119, 99, 91, 103, 129,
|
|
|
|
... 106, 101, 84, 111, 74, 87, 86, 103, 103, 106, 86,
|
|
|
|
... 111, 75, 87, 102, 121, 111, 88, 89, 101, 106, 95,
|
|
|
|
... 103, 107, 101, 81, 109, 104]
|
|
|
|
>>> [round(q, 1) for q in quantiles(data, n=10)]
|
|
|
|
[81.0, 86.2, 89.0, 99.4, 102.5, 103.6, 106.0, 109.8, 111.0]
|
|
|
|
|
|
|
|
.. versionadded:: 3.8
|
|
|
|
|
2023-10-01 01:35:54 -03:00
|
|
|
.. versionchanged:: 3.13
|
|
|
|
No longer raises an exception for an input with only a single data point.
|
|
|
|
This allows quantile estimates to be built up one sample point
|
|
|
|
at a time becoming gradually more refined with each new data point.
|
|
|
|
|
2021-04-25 08:45:09 -03:00
|
|
|
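To make the difference between the two methods concrete, here is a minimal side-by-side comparison (the five-point dataset is made up for illustration):

```python
from statistics import quantiles

data = [1, 2, 3, 4, 5]

# "exclusive" assumes the population extends beyond the observed min and
# max, so the outer cut points land outside the observed values' spacing.
print(quantiles(data, n=4, method='exclusive'))   # [1.5, 3.0, 4.5]

# "inclusive" treats 1 and 5 as the true population extremes,
# pulling the outer cut points inward.
print(quantiles(data, n=4, method='inclusive'))   # [2.0, 3.0, 4.0]
```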
.. function:: covariance(x, y, /)

   Return the sample covariance of two inputs *x* and *y*.  Covariance
   is a measure of the joint variability of two inputs.

   Both inputs must be of the same length (no less than two), otherwise
   :exc:`StatisticsError` is raised.

   Examples:

   .. doctest::

      >>> x = [1, 2, 3, 4, 5, 6, 7, 8, 9]
      >>> y = [1, 2, 3, 1, 2, 3, 1, 2, 3]
      >>> covariance(x, y)
      0.75
      >>> z = [9, 8, 7, 6, 5, 4, 3, 2, 1]
      >>> covariance(x, z)
      -7.5
      >>> covariance(z, x)
      -7.5

   .. versionadded:: 3.10
.. function:: correlation(x, y, /, *, method='linear')

   Return the `Pearson's correlation coefficient
   <https://en.wikipedia.org/wiki/Pearson_correlation_coefficient>`_
   for two inputs.  Pearson's correlation coefficient *r* takes values
   between -1 and +1.  It measures the strength and direction of a linear
   relationship.

   If *method* is "ranked", computes `Spearman's rank correlation coefficient
   <https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient>`_
   for two inputs.  The data is replaced by ranks.  Ties are averaged so that
   equal values receive the same rank.  The resulting coefficient measures the
   strength of a monotonic relationship.

   Spearman's correlation coefficient is appropriate for ordinal data or for
   continuous data that doesn't meet the linear proportion requirement for
   Pearson's correlation coefficient.

   Both inputs must be of the same length (no less than two), and must not be
   constant; otherwise :exc:`StatisticsError` is raised.

   Example with `Kepler's laws of planetary motion
   <https://en.wikipedia.org/wiki/Kepler's_laws_of_planetary_motion>`_:

   .. doctest::

      >>> # Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune
      >>> orbital_period = [88, 225, 365, 687, 4331, 10_756, 30_687, 60_190]    # days
      >>> dist_from_sun = [58, 108, 150, 228, 778, 1_400, 2_900, 4_500]         # million km

      >>> # Show that a perfect monotonic relationship exists
      >>> correlation(orbital_period, dist_from_sun, method='ranked')
      1.0

      >>> # Observe that a linear relationship is imperfect
      >>> round(correlation(orbital_period, dist_from_sun), 4)
      0.9882

      >>> # Demonstrate Kepler's third law: There is a linear correlation
      >>> # between the square of the orbital period and the cube of the
      >>> # distance from the sun.
      >>> period_squared = [p * p for p in orbital_period]
      >>> dist_cubed = [d * d * d for d in dist_from_sun]
      >>> round(correlation(period_squared, dist_cubed), 4)
      1.0

   .. versionadded:: 3.10

   .. versionchanged:: 3.12
      Added support for Spearman's rank correlation coefficient.
.. function:: linear_regression(x, y, /, *, proportional=False)

   Return the slope and intercept of `simple linear regression
   <https://en.wikipedia.org/wiki/Simple_linear_regression>`_
   parameters estimated using ordinary least squares.  Simple linear
   regression describes the relationship between an independent variable *x* and
   a dependent variable *y* in terms of this linear function:

      *y = slope \* x + intercept + noise*

   where ``slope`` and ``intercept`` are the regression parameters that are
   estimated, and ``noise`` represents the
   variability of the data that was not explained by the linear regression
   (it is equal to the difference between predicted and actual values
   of the dependent variable).

   Both inputs must be of the same length (no less than two), and
   the independent variable *x* cannot be constant;
   otherwise a :exc:`StatisticsError` is raised.

   For example, we can use the `release dates of the Monty
   Python films <https://en.wikipedia.org/wiki/Monty_Python#Films>`_
   to predict the cumulative number of Monty Python films
   that would have been produced by 2019
   assuming that they had kept the pace.

   .. doctest::

      >>> year = [1971, 1975, 1979, 1982, 1983]
      >>> films_total = [1, 2, 3, 4, 5]
      >>> slope, intercept = linear_regression(year, films_total)
      >>> round(slope * 2019 + intercept)
      16

   If *proportional* is true, the independent variable *x* and the
   dependent variable *y* are assumed to be directly proportional.
   The data is fit to a line passing through the origin.
   Since the *intercept* will always be 0.0, the underlying linear
   function simplifies to:

      *y = slope \* x + noise*

   Continuing the example from :func:`correlation`, we look to see
   how well a model based on major planets can predict the orbital
   distances for dwarf planets:

   .. doctest::

      >>> model = linear_regression(period_squared, dist_cubed, proportional=True)
      >>> slope = model.slope

      >>> # Dwarf planets: Pluto, Eris, Makemake, Haumea, Ceres
      >>> orbital_periods = [90_560, 204_199, 111_845, 103_410, 1_680]  # days
      >>> predicted_dist = [math.cbrt(slope * (p * p)) for p in orbital_periods]
      >>> list(map(round, predicted_dist))
      [5912, 10166, 6806, 6459, 414]

      >>> [5_906, 10_152, 6_796, 6_450, 414]   # actual distance in million km
      [5906, 10152, 6796, 6450, 414]

   .. versionadded:: 3.10

   .. versionchanged:: 3.11
      Added support for *proportional*.

Exceptions
----------

A single exception is defined:

.. exception:: StatisticsError

   Subclass of :exc:`ValueError` for statistics-related exceptions.

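Because it subclasses :exc:`ValueError`, an existing ``except ValueError`` handler will also catch it.  A minimal demonstration:

```python
from statistics import StatisticsError, mean

# StatisticsError participates in the ValueError hierarchy.
assert issubclass(StatisticsError, ValueError)

try:
    mean([])                        # no data points
except ValueError as exc:           # also catches StatisticsError
    print(type(exc).__name__)       # StatisticsError
```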
:class:`NormalDist` objects
---------------------------

:class:`NormalDist` is a tool for creating and manipulating normal
distributions of a `random variable
<http://www.stat.yale.edu/Courses/1997-98/101/ranvar.htm>`_.  It is a
class that treats the mean and standard deviation of data
measurements as a single entity.

Normal distributions arise from the `Central Limit Theorem
<https://en.wikipedia.org/wiki/Central_limit_theorem>`_ and have a wide range
of applications in statistics.

.. class:: NormalDist(mu=0.0, sigma=1.0)

   Returns a new *NormalDist* object where *mu* represents the `arithmetic
   mean <https://en.wikipedia.org/wiki/Arithmetic_mean>`_ and *sigma*
   represents the `standard deviation
   <https://en.wikipedia.org/wiki/Standard_deviation>`_.

   If *sigma* is negative, raises :exc:`StatisticsError`.

   .. attribute:: mean

      A read-only property for the `arithmetic mean
      <https://en.wikipedia.org/wiki/Arithmetic_mean>`_ of a normal
      distribution.

   .. attribute:: median

      A read-only property for the `median
      <https://en.wikipedia.org/wiki/Median>`_ of a normal
      distribution.

   .. attribute:: mode

      A read-only property for the `mode
      <https://en.wikipedia.org/wiki/Mode_(statistics)>`_ of a normal
      distribution.

   .. attribute:: stdev

      A read-only property for the `standard deviation
      <https://en.wikipedia.org/wiki/Standard_deviation>`_ of a normal
      distribution.

   .. attribute:: variance

      A read-only property for the `variance
      <https://en.wikipedia.org/wiki/Variance>`_ of a normal
      distribution.  Equal to the square of the standard deviation.
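A quick sketch of how these properties relate for a single distribution (the parameters are arbitrary):

```python
from statistics import NormalDist

nd = NormalDist(mu=100, sigma=15)

# For a normal distribution the mean, median, and mode all coincide,
# and the variance is the square of the standard deviation.
assert nd.mean == nd.median == nd.mode == 100
assert nd.stdev == 15
assert nd.variance == 15 ** 2 == 225
```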
   .. classmethod:: NormalDist.from_samples(data)

      Makes a normal distribution instance with *mu* and *sigma* parameters
      estimated from the *data* using :func:`fmean` and :func:`stdev`.

      The *data* can be any :term:`iterable` and should consist of values
      that can be converted to type :class:`float`.  If *data* does not
      contain at least two elements, raises :exc:`StatisticsError` because it
      takes at least one point to estimate a central value and at least two
      points to estimate dispersion.

   .. method:: NormalDist.samples(n, *, seed=None)

      Generates *n* random samples for a given mean and standard deviation.
      Returns a :class:`list` of :class:`float` values.

      If *seed* is given, creates a new instance of the underlying random
      number generator.  This is useful for creating reproducible results,
      even in a multi-threading context.

      .. versionchanged:: 3.13

         Switched to a faster algorithm.  To reproduce samples from previous
         versions, use :func:`random.seed` and :func:`random.gauss`.
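The effect of *seed* can be sketched as follows: two calls made with the same seed draw from freshly created generators in the same state, so they yield identical samples (the distribution parameters and seed value are arbitrary):

```python
from statistics import NormalDist

nd = NormalDist(mu=10, sigma=2.5)

# Same seed, same fresh generator state, identical samples.
a = nd.samples(5, seed=8675309)
b = nd.samples(5, seed=8675309)
assert a == b

# The result is a plain list of floats.
assert len(a) == 5 and all(isinstance(s, float) for s in a)
```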
   .. method:: NormalDist.pdf(x)

      Using a `probability density function (pdf)
      <https://en.wikipedia.org/wiki/Probability_density_function>`_, compute
      the relative likelihood that a random variable *X* will be near the
      given value *x*.  Mathematically, it is the limit of the ratio ``P(x <=
      X < x+dx) / dx`` as *dx* approaches zero.

      The relative likelihood is computed as the probability of a sample
      occurring in a narrow range divided by the width of the range (hence
      the word "density").  Since the likelihood is relative to other points,
      its value can be greater than ``1.0``.
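Two consequences of the text above can be verified directly: the standard normal density peaks at ``1 / sqrt(2 * pi)``, and a sufficiently narrow distribution has a peak density above ``1.0``:

```python
from math import sqrt, tau          # tau is 2 * pi
from statistics import NormalDist

# Peak density of the standard normal is 1 / sqrt(2 * pi), about 0.3989.
assert abs(NormalDist().pdf(0.0) - 1 / sqrt(tau)) < 1e-12

# A density is not a probability: a narrow distribution exceeds 1.0 at
# its peak, even though the total area under the curve is still 1.
assert NormalDist(mu=0.0, sigma=0.1).pdf(0.0) > 1.0
```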
   .. method:: NormalDist.cdf(x)

      Using a `cumulative distribution function (cdf)
      <https://en.wikipedia.org/wiki/Cumulative_distribution_function>`_,
      compute the probability that a random variable *X* will be less than or
      equal to *x*.  Mathematically, it is written ``P(X <= x)``.

   .. method:: NormalDist.inv_cdf(p)

      Compute the inverse cumulative distribution function, also known as the
      `quantile function <https://en.wikipedia.org/wiki/Quantile_function>`_
      or the `percent-point
      <https://web.archive.org/web/20190203145224/https://www.statisticshowto.datasciencecentral.com/inverse-distribution-function/>`_
      function.  Mathematically, it is written ``x : P(X <= x) = p``.

      Finds the value *x* of the random variable *X* such that the
      probability of the variable being less than or equal to that value
      equals the given probability *p*.
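The two methods are inverses of one another, which can be sketched with a familiar IQ-style distribution (mean 100, standard deviation 15, chosen only for illustration):

```python
from statistics import NormalDist

iq = NormalDist(100, 15)

# One standard deviation above the mean covers about 84.13% of outcomes.
p = iq.cdf(115)
print(round(p, 4))                      # 0.8413

# inv_cdf() undoes cdf(): it recovers the original x from the probability.
assert abs(iq.inv_cdf(p) - 115) < 1e-9

# The median: half the probability mass lies at or below the mean.
assert iq.inv_cdf(0.5) == 100.0
```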
   .. method:: NormalDist.overlap(other)

      Measures the agreement between two normal probability distributions.
      Returns a value between 0.0 and 1.0 giving `the overlapping area for
      the two probability density functions
      <https://www.rasch.org/rmt/rmt101r.htm>`_.
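A brief sketch of the boundary behaviour (the parameters here are arbitrary): identical distributions overlap completely, and the overlap shrinks as the means move apart:

```python
from statistics import NormalDist

n1 = NormalDist(2.4, 1.6)

# A distribution overlaps an identical distribution completely.
assert n1.overlap(NormalDist(2.4, 1.6)) == 1.0

# Moving the mean further away reduces the shared area.
near = n1.overlap(NormalDist(3.2, 1.6))
far = n1.overlap(NormalDist(8.0, 1.6))
assert 0.0 < far < near < 1.0
```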
   .. method:: NormalDist.quantiles(n=4)

      Divide the normal distribution into *n* continuous intervals with
      equal probability.  Returns a list of ``n - 1`` cut points separating
      the intervals.

      Set *n* to 4 for quartiles (the default).  Set *n* to 10 for deciles.
      Set *n* to 100 for percentiles which gives the 99 cut points that
      separate the normal distribution into 100 equal sized groups.
   .. method:: NormalDist.zscore(x)

      Compute the
      `Standard Score <https://www.statisticshowto.com/probability-and-statistics/z-score/>`_
      describing *x* in terms of the number of standard deviations
      above or below the mean of the normal distribution:
      ``(x - mean) / stdev``.

      .. versionadded:: 3.9
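For instance, with the SAT-style distribution used later on this page (mean 1060, standard deviation 195), a score of 1255 sits exactly one standard deviation above the mean:

```python
from statistics import NormalDist

sat = NormalDist(1060, 195)

# (1255 - 1060) / 195 == 1.0: one standard deviation above the mean.
assert sat.zscore(1255) == 1.0

# Scores below the mean give negative z-scores.
assert sat.zscore(865) == -1.0
```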
   Instances of :class:`NormalDist` support addition, subtraction,
   multiplication and division by a constant.  These operations
   are used for translation and scaling.  For example:

   .. doctest::

      >>> temperature_february = NormalDist(5, 2.5)             # Celsius
      >>> temperature_february * (9/5) + 32                     # Fahrenheit
      NormalDist(mu=41.0, sigma=4.5)

   Dividing a constant by an instance of :class:`NormalDist` is not supported
   because the result wouldn't be normally distributed.

   Since normal distributions arise from additive effects of independent
   variables, it is possible to `add and subtract two independent normally
   distributed random variables
   <https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables>`_
   represented as instances of :class:`NormalDist`.  For example:

   .. doctest::

      >>> birth_weights = NormalDist.from_samples([2.5, 3.1, 2.1, 2.4, 2.7, 3.5])
      >>> drug_effects = NormalDist(0.4, 0.15)
      >>> combined = birth_weights + drug_effects
      >>> round(combined.mean, 1)
      3.1
      >>> round(combined.stdev, 1)
      0.5

   .. versionadded:: 3.8

Examples and Recipes
--------------------


Classic probability problems
****************************

:class:`NormalDist` readily solves classic probability problems.

For example, given `historical data for SAT exams
<https://nces.ed.gov/programs/digest/d17/tables/dt17_226.40.asp>`_ showing
that scores are normally distributed with a mean of 1060 and a standard
deviation of 195, determine the percentage of students with test scores
between 1100 and 1200, after rounding to the nearest whole number:

.. doctest::

   >>> sat = NormalDist(1060, 195)
   >>> fraction = sat.cdf(1200 + 0.5) - sat.cdf(1100 - 0.5)
   >>> round(fraction * 100.0, 1)
   18.4

Find the `quartiles <https://en.wikipedia.org/wiki/Quartile>`_ and `deciles
<https://en.wikipedia.org/wiki/Decile>`_ for the SAT scores:

.. doctest::

   >>> list(map(round, sat.quantiles()))
   [928, 1060, 1192]
   >>> list(map(round, sat.quantiles(n=10)))
   [810, 896, 958, 1011, 1060, 1109, 1162, 1224, 1310]

Monte Carlo inputs for simulations
**********************************

To estimate the distribution for a model that isn't easy to solve
analytically, :class:`NormalDist` can generate input samples for a `Monte
Carlo simulation <https://en.wikipedia.org/wiki/Monte_Carlo_method>`_:

.. doctest::

   >>> def model(x, y, z):
   ...     return (3*x + 7*x*y - 5*y) / (11 * z)
   ...
   >>> n = 100_000
   >>> X = NormalDist(10, 2.5).samples(n, seed=3652260728)
   >>> Y = NormalDist(15, 1.75).samples(n, seed=4582495471)
   >>> Z = NormalDist(50, 1.25).samples(n, seed=6582483453)
   >>> quantiles(map(model, X, Y, Z))       # doctest: +SKIP
   [1.4591308524824727, 1.8035946855390597, 2.175091447274739]

Approximating binomial distributions
************************************

Normal distributions can be used to approximate `Binomial
distributions <https://mathworld.wolfram.com/BinomialDistribution.html>`_
when the sample size is large and when the probability of a successful
trial is near 50%.

For example, an open source conference has 750 attendees and two rooms with a
500 person capacity.  There is a talk about Python and another about Ruby.
In previous conferences, 65% of the attendees preferred to listen to Python
talks.  Assuming the population preferences haven't changed, what is the
probability that the Python room will stay within its capacity limits?

.. doctest::

   >>> n = 750             # Sample size
   >>> p = 0.65            # Preference for Python
   >>> q = 1.0 - p         # Preference for Ruby
   >>> k = 500             # Room capacity

   >>> # Approximation using the cumulative normal distribution
   >>> from math import sqrt
   >>> round(NormalDist(mu=n*p, sigma=sqrt(n*p*q)).cdf(k + 0.5), 4)
   0.8402

   >>> # Exact solution using the cumulative binomial distribution
   >>> from math import comb, fsum
   >>> round(fsum(comb(n, r) * p**r * q**(n-r) for r in range(k+1)), 4)
   0.8402

   >>> # Approximation using a simulation
   >>> from random import seed, binomialvariate
   >>> seed(8675309)
   >>> mean(binomialvariate(n, p) <= k for i in range(10_000))
   0.8406

Naive Bayesian classifier
*************************

Normal distributions commonly arise in machine learning problems.

Wikipedia has a `nice example of a Naive Bayesian Classifier
<https://en.wikipedia.org/wiki/Naive_Bayes_classifier#Person_classification>`_.
The challenge is to predict a person's gender from measurements of normally
distributed features including height, weight, and foot size.

We're given a training dataset with measurements for eight people.  The
measurements are assumed to be normally distributed, so we summarize the data
with :class:`NormalDist`:

.. doctest::

   >>> height_male = NormalDist.from_samples([6, 5.92, 5.58, 5.92])
   >>> height_female = NormalDist.from_samples([5, 5.5, 5.42, 5.75])
   >>> weight_male = NormalDist.from_samples([180, 190, 170, 165])
   >>> weight_female = NormalDist.from_samples([100, 150, 130, 150])
   >>> foot_size_male = NormalDist.from_samples([12, 11, 12, 10])
   >>> foot_size_female = NormalDist.from_samples([6, 8, 7, 9])

Next, we encounter a new person whose feature measurements are known but whose
gender is unknown:

.. doctest::

   >>> ht = 6.0        # height
   >>> wt = 130        # weight
   >>> fs = 8          # foot size

Starting with a 50% `prior probability
<https://en.wikipedia.org/wiki/Prior_probability>`_ of being male or female,
we compute the posterior as the prior times the product of likelihoods for the
feature measurements given the gender:

.. doctest::

   >>> prior_male = 0.5
   >>> prior_female = 0.5
   >>> posterior_male = (prior_male * height_male.pdf(ht) *
   ...                   weight_male.pdf(wt) * foot_size_male.pdf(fs))

   >>> posterior_female = (prior_female * height_female.pdf(ht) *
   ...                     weight_female.pdf(wt) * foot_size_female.pdf(fs))

The final prediction goes to the largest posterior.  This is known as the
`maximum a posteriori
<https://en.wikipedia.org/wiki/Maximum_a_posteriori_estimation>`_ or MAP:

.. doctest::

   >>> 'male' if posterior_male > posterior_female else 'female'
   'female'

..
   # These modelines must appear within the last ten lines of the file.
   kate: indent-width 3; remove-trailing-space on; replace-tabs on; encoding utf-8;