Fitting probability distributions#

Fitting probability distributions to data is an important task in any discipline of science and engineering, as these distributions can be used to derive quantitative statements about the risks and frequencies of uncertain quantities; probabilistic tools can be applied to model this uncertainty. In this workshop, you will work with a dataset of your choosing to estimate a distribution and evaluate the quality of your fit.

The goals of this project are to:

  1. Choose a reasonable distribution function for your chosen dataset by analyzing the statistics of the observations.

  2. Fit the chosen distribution by moments.

  3. Assess the fit by computing probabilities analytically.

  4. Assess the fit using goodness-of-fit techniques and computer code.

The project is divided into three parts: 1) data analysis, 2) pen-and-paper work (math practice!), and 3) programming.

# Let us load some required libraries 
import numpy as np                       # For math
import matplotlib.pyplot as plt          # For plotting
from matplotlib.gridspec import GridSpec # For plotting
import pandas as pd                      # For file-wrangling
from scipy import stats                  # For math

# This is just cosmetic - it updates the font size for our plots
plt.rcParams.update({'font.size': 14})

Please choose one of the following datasets:

  1. Observations of the compressive strength of concrete. The compressive strength of concrete is key for the safety of infrastructure and buildings. However, many boundary conditions influence the final resistance of the concrete, such as the cement content, the environmental temperature, or the age of the concrete. (You can read more about the dataset here)

  2. ERA5 predictions of the hourly temperature at 2 m height during the summer months (June, July, August) from 2005 to 2025. ERA5 predictions are re-analysis data, which are observation-corrected weather model predictions. Like most climate data, they depend on chaotic global weather dynamics, which makes precise long-term predictions difficult. (The dataset is extracted from here.)

import os
from urllib.request import urlretrieve

# This function searches for the data files we require, and downloads them from a server if unsuccessful
def findfile(fname):
    if not os.path.isfile(fname):
        print(f"Downloading {fname}...")
        urlretrieve('http://files.mude.citg.tudelft.nl/'+fname, fname)

# We download two datasets for concrete and temperature data
findfile('dataset_concrete.csv')
findfile('dataset_temperature.csv')
# Please choose one of the datasets below
viable_datasets = ['concrete','temperature']
dataset = 'concrete' # Choose one dataset from the list above

# Automated check to see if the user selection is viable
assert dataset in viable_datasets, "Dataset must be in {}. You have selected {}.".format(str(viable_datasets),dataset)

# Load the data
data = np.genfromtxt('dataset_{}.csv'.format(dataset), skip_header=1)

# Set the axis labels for the dataset
if dataset == "concrete":
    label = "Concrete compressive strength [MPa]"
    number_bins = 10 # Depending on the number of data points, we may want to use different bin numbers for the histogram
elif dataset == "temperature":
    label = "Summer air temperature in Delft [°C]"
    number_bins = 20 # Depending on the number of data points, we may want to use different bin numbers for the histogram

Now, let us clean and plot the data.

# Clean the data by removing all NaN entries
data = data[~np.isnan(data)]

# Create a figure that shows the time series
plt.figure(figsize=(10, 6))

# GridSpec allows you to specify the size and relative positions of subplots, which can be very useful for plotting
gs = GridSpec(
    nrows = 1, # We want one row
    ncols = 2, # We want two columns
    width_ratios = [1,0.2]) # The second column should only be 20% as wide as the first column

# In the first subplot, we plot the raw data series
plt.subplot(gs[0,0])
plt.plot(data,'ok')
plt.xlabel("observation number")
plt.ylabel(label)
ylims = plt.gca().get_ylim()

# In the second subplot, we plot the histogram
plt.subplot(gs[0,1])
plt.hist(data, orientation='horizontal', color='lightblue', rwidth=0.9, bins = number_bins)
plt.xlabel("frequency")
plt.gca().set_ylim(ylims)

In the figure above, you can see all the observations of your chosen dataset. There is no clear pattern in the observations. Let's see what the statistics look like!

# Statistics
df_describe = pd.DataFrame(data)
df_describe.describe()

Task 1.1: Using ONLY the statistics calculated in the previous lines:

  • Choose an appropriate distribution to model the data from the following: (1) Gumbel, (2) Uniform, and (3) Gaussian.
  • Justify your choice (a numerical sanity check is sketched below).

Your answer here.
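If it helps with the justification, the symmetry of the data can be checked numerically using only the statistics computed above; a minimal sketch (the variable names are illustrative):

# Optional sanity check using only the describe() statistics: a symmetric
# distribution (Gaussian, Uniform) has a mean close to the median, while a
# right-skewed distribution (e.g. Gumbel) has mean > median
summary = df_describe.describe()
mean = summary.loc['mean', 0]
median = summary.loc['50%', 0]
std = summary.loc['std', 0]
print(f"(mean - median) / std = {(mean - median) / std:.3f}")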

Part 2: Use pen and paper!#

Once you have selected the appropriate distribution, you are going to fit it by moments manually and check the fit by computing some probabilities analytically. Remember that you have all the information you need in the textbook. Do not use any computer code for this section; you have to do it with pen and paper. You can use the notebook as a calculator.

Task 2.1: Fit the selected distribution by moments.
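For instance, if you were to select the Gumbel distribution (a hypothetical choice; use the distribution you picked in Task 1.1), its mean and variance are \(E[X] = \mu + \gamma\beta\) and \(\mathrm{Var}[X] = \pi^2\beta^2/6\), where \(\gamma \approx 0.5772\) is the Euler-Mascheroni constant. Matching these to the sample mean \(\bar{x}\) and standard deviation \(s\) gives

\[ \beta = \frac{s\sqrt{6}}{\pi}, \qquad \mu = \bar{x} - \gamma\beta. \]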

Your answer here.

We can now check the fit by manually computing some probabilities from the fitted distribution and comparing them with the empirical ones.

Task 2.2: Check the fit of the distribution:

  • Use the values obtained from the statistical inspection: the min, 25%, 50%, 75% and max values. What are the non-exceedance probabilities (from the empirical distribution) that correspond to those values?
  • Compute the values of the random variable corresponding to those probabilities using the fitted distribution.
  • Compare the obtained values with the empirical ones and assess the fit.
  • You can summarize your answers in the following table (report your values with 3-4 significant digits max, as needed).

Tip: To compute the predicted quantiles for the minimum and maximum values, you can use the expected non-exceedance probabilities of the minimum and maximum of a dataset of the same size as your observations. Recall how you computed the non-exceedance probability of the first- and last-rank samples in an empirical CDF; a reminder follows below.
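With \(n\) ranked observations \(x_{(1)} \le \dots \le x_{(n)}\), a common plotting position (assumed here; check the definition used in your textbook) assigns

\[ P[X \le x_{(i)}] = \frac{i}{n+1}, \]

so the minimum and maximum correspond to non-exceedance probabilities \(1/(n+1)\) and \(n/(n+1)\), respectively.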

|  | Minimum value | P25% | P50% | P75% | Maximum value |
| --- | --- | --- | --- | --- | --- |
| Non-exceedance probability [\(-\)] |  | 0.25 | 0.50 | 0.75 |  |
| Empirical quantiles |  |  |  |  |  |
| Predicted quantiles |  |  |  |  |  |

Part 3: Let's do it with Python!#

Now, let's assess the performance using further goodness-of-fit metrics and see whether they are consistent with the previous analysis. Note that you have the pseudo-code for the empirical CDF in the reader.

Task 3.1: Prepare a function to compute the empirical cumulative distribution function.

# def ecdf(YOUR_CODE_HERE):
#     """Write a function that returns [non_exceedance_probabilities, sorted_values]."""
#     YOUR_CODE_HERE # may be more than one line
#     return [non_exceedance_probabilities, sorted_values]
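One possible implementation, as a minimal sketch (assuming the \(i/(n+1)\) plotting position discussed above):

def ecdf(observations):
    """Return [non_exceedance_probabilities, sorted_values] for a 1D array."""
    sorted_values = np.sort(observations)
    n = sorted_values.size
    # Ranks i = 1, ..., n mapped to i / (n + 1), keeping probabilities in (0, 1)
    non_exceedance_probabilities = np.arange(1, n + 1) / (n + 1)
    return [non_exceedance_probabilities, sorted_values]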

Task 3.2: Transform the parameters of the selected distribution you fitted by moments to loc-scale-shape.

Hint: the distributions are listed in our online textbook, but it is also critical to make sure that the formulation in the book is identical to that of the Python package we are using. You can do this by finding the page of the relevant distribution in the Scipy.stats documentation.

Your answer here.
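As a hypothetical example (adapt to the distribution you selected, and verify against your textbook's formulation), the mappings between common textbook parameters and the scipy.stats keyword arguments are:

# Hypothetical examples; adapt to the distribution you selected in Task 1.1
# Gumbel:   stats.gumbel_r(loc=mu, scale=beta)   # loc = mu, scale = beta
# Uniform:  stats.uniform(loc=a, scale=b - a)    # loc = a, scale = b - a
# Gaussian: stats.norm(loc=mean, scale=std)      # loc = mean, scale = std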

Task 3.3: Assess the goodness of fit of the distribution you fitted using the method of moments by:

  • Visually comparing the empirical and fitted PDF.
  • Using the exceedance plot in log-scale.
  • Using the QQplot.

Interpret them. Do you reach a conclusion similar to that in the previous section?

Hint: Use Scipy's built-in functions (watch out for the definition of the parameters!). A possible completion of the template is sketched after the code cell below.

# loc = YOUR_CODE_HERE
# scale = YOUR_CODE_HERE

# fig, axes = plt.subplots(1, 1, figsize=(10, 5))
# axes.hist(YOUR_CODE_HERE,
#           edgecolor='k', linewidth=0.2, color='cornflowerblue',
#           label='Empirical PDF', density=True, bins=number_bins)
# axes.plot(YOUR_CODE_HERE, YOUR_CODE_HERE,
#           'k', linewidth=2, label='YOUR_DISTRIBUTION_NAME_HERE PDF')
# axes.set_xlabel(label)
# axes.set_title('PDF', fontsize=18)
# axes.legend()

# fig, axes = plt.subplots(1, 1, figsize=(10, 5))

# axes.step(YOUR_CODE_HERE, YOUR_CODE_HERE,
#           color='k', label='Empirical exceedance')
# axes.plot(YOUR_CODE_HERE, YOUR_CODE_HERE,
#           color='cornflowerblue', label='YOUR_DISTRIBUTION_NAME_HERE exceedance')
# axes.set_xlabel(label)
# axes.set_ylabel('${P[X > x]}$')
# axes.set_title('Exceedance probability in log-scale', fontsize=18)
# axes.set_yscale('log')
# axes.legend()
# axes.grid()

# fig, axes = plt.subplots(1, 1, figsize=(10, 5))

# axes.scatter(YOUR_CODE_HERE, YOUR_CODE_HERE,
#              color='cornflowerblue', label='YOUR_DISTRIBUTION_NAME_HERE')
# axes.set_xlabel('Observed ' + label)
# axes.set_ylabel('Estimated ' + label)
# axes.set_title('QQplot', fontsize=18)
# xlims = axes.get_xlim()
# ylims = axes.get_ylim()
# lower = np.min([xlims[0], ylims[0]])
# upper = np.max([xlims[1], ylims[1]])
# axes.plot([lower, upper], [lower, upper], 'k')  # 1:1 reference line
# axes.grid()
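For reference, here is a possible completion of the template, assuming (hypothetically) a Gumbel fit with the method-of-moments parameters from Part 2 and the ecdf function from Task 3.1; substitute your own distribution and parameters:

# Hypothetical completion assuming a Gumbel fit; swap in your own distribution
beta = data.std() * np.sqrt(6) / np.pi   # method-of-moments scale
mu = data.mean() - 0.5772 * beta         # method-of-moments location
loc, scale = mu, beta

non_exceedance_probabilities, sorted_values = ecdf(data)
x_range = np.linspace(data.min(), data.max(), 200)

# PDF comparison
fig, axes = plt.subplots(1, 1, figsize=(10, 5))
axes.hist(data, edgecolor='k', linewidth=0.2, color='cornflowerblue',
          label='Empirical PDF', density=True, bins=number_bins)
axes.plot(x_range, stats.gumbel_r.pdf(x_range, loc=loc, scale=scale),
          'k', linewidth=2, label='Gumbel PDF')
axes.set_xlabel(label)
axes.set_title('PDF', fontsize=18)
axes.legend()

# Exceedance probability in log-scale
fig, axes = plt.subplots(1, 1, figsize=(10, 5))
axes.step(sorted_values, 1 - non_exceedance_probabilities,
          color='k', label='Empirical exceedance')
axes.plot(x_range, 1 - stats.gumbel_r.cdf(x_range, loc=loc, scale=scale),
          color='cornflowerblue', label='Gumbel exceedance')
axes.set_xlabel(label)
axes.set_ylabel('${P[X > x]}$')
axes.set_title('Exceedance probability in log-scale', fontsize=18)
axes.set_yscale('log')
axes.legend()
axes.grid()

# QQplot: observed quantiles vs. quantiles predicted at the same probabilities
fig, axes = plt.subplots(1, 1, figsize=(10, 5))
axes.scatter(sorted_values,
             stats.gumbel_r.ppf(non_exceedance_probabilities, loc=loc, scale=scale),
             color='cornflowerblue', label='Gumbel')
axes.set_xlabel('Observed ' + label)
axes.set_ylabel('Estimated ' + label)
axes.set_title('QQplot', fontsize=18)
xlims = axes.get_xlim()
ylims = axes.get_ylim()
lower = np.min([xlims[0], ylims[0]])
upper = np.max([xlims[1], ylims[1]])
axes.plot([lower, upper], [lower, upper], 'k')  # 1:1 reference line
axes.grid()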

By Max Ramgraber, Patricia Mares Nasarre and Robert Lanzafame, Delft University of Technology. CC BY 4.0, more info on the Credits page of the Workbook.