GMRFModel

class menpo.model.GMRFModel(samples, graph, mode='concatenation', n_components=None, dtype=<class 'numpy.float64'>, sparse=True, n_samples=None, bias=0, incremental=False, verbose=False)[source]

Bases: GMRFVectorModel

Trains a Gaussian Markov Random Field (GMRF).

Parameters
  • samples (list or iterable of Vectorizable) – List or iterable of samples to build the model from.

  • graph (UndirectedGraph or DirectedGraph or Tree) – The graph that defines the relations between the features.

  • n_samples (int, optional) – If provided, then samples must be an iterator that yields n_samples. If not provided, then samples has to be a list (so that we know how large the data matrix needs to be).

  • mode ({'concatenation', 'subtraction'}, optional) –

    Defines the feature vector of each edge. Assuming that \(\mathbf{x}_i\) and \(\mathbf{x}_j\) are the feature vectors of two adjacent vertices (\(i,j:(v_i,v_j)\in E\)), the edge’s feature vector in the case of 'concatenation' is

    \[\left[{\mathbf{x}_i}^T, {\mathbf{x}_j}^T\right]^T\]

    and in the case of 'subtraction'

    \[\mathbf{x}_i - \mathbf{x}_j\]

  • n_components (int or None, optional) – When None (default), the covariance matrix of each edge is inverted using np.linalg.inv. If an int, it is inverted using a truncated SVD with the specified number of components.

  • dtype (numpy.dtype, optional) – The data type of the GMRF’s precision matrix. For example, it can be set to numpy.float32 for single precision or to numpy.float64 for double precision. Depending on the size of the precision matrix, this option can save a lot of memory.

  • sparse (bool, optional) – When True, the GMRF’s precision matrix has type scipy.sparse.bsr_matrix, otherwise it is a numpy.ndarray.

  • bias (int, optional) – Default normalization of the edge covariance matrices is by (N - 1), where N is the number of observations given (unbiased estimate). If bias is 1, then normalization is by N.

  • incremental (bool, optional) – Set to True if the user intends to incrementally update the GMRF. Note that, when True, the model occupies twice the memory.

  • verbose (bool, optional) – If True, the progress of the model’s training is printed.
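
A minimal construction sketch (the chain graph, the toy PointCloud samples and the mode choice below are illustrative assumptions, not requirements of the API):

    import numpy as np
    from menpo.model import GMRFModel
    from menpo.shape import PointCloud, UndirectedGraph

    # Chain graph over 3 vertices (0 - 1 - 2), defined by its
    # symmetric adjacency matrix.
    adjacency = np.array([[0, 1, 0],
                          [1, 0, 1],
                          [0, 1, 0]])
    graph = UndirectedGraph(adjacency)

    # Toy training set: each sample is a 2D PointCloud whose 3 points
    # correspond to the 3 graph vertices (feature length k = 2 per vertex).
    samples = [PointCloud(np.random.randn(3, 2)) for _ in range(100)]

    model = GMRFModel(samples, graph, mode='concatenation')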

Notes

Let us denote a graph as \(G=(V,E)\), where \(V=\{v_1,v_2,\ldots,v_{|V|}\}\) is the set of \(|V|\) vertices and there is an edge \((v_i,v_j)\in E\) for each pair of connected vertices. Let us also assume that we have a set of random variables \(X=\{X_i\},\forall i:v_i\in V\), which represent an abstract feature vector of length \(k\) extracted from each vertex \(v_i\), i.e. \(\mathbf{x}_i,i:v_i\in V\).

A GMRF is described by an undirected graph, where the vertices stand for random variables and the edges impose statistical constraints on these random variables. Thus, the GMRF models the set of random variables with a multivariate normal distribution

\[p(X=\mathbf{x}|G)\sim\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})\]

We denote by \(\mathbf{Q}\) the block-sparse precision matrix that is the inverse of the covariance matrix \(\boldsymbol{\Sigma}\), i.e. \(\mathbf{Q}=\boldsymbol{\Sigma}^{-1}\). By applying the GMRF we make the assumption that the random variables satisfy the three Markov properties (pairwise, local and global) and that the blocks of the precision matrix that correspond to non-adjacent vertices are zero [1], i.e.

\[\mathbf{Q}_{ij}=\mathbf{0}_{k\times k},\forall i,j:(v_i,v_j)\notin E\]
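
To make the block structure concrete, the following numpy-only sketch assembles a precision matrix for a 3-vertex chain graph by accumulating per-edge precision blocks (random SPD stand-ins for the inverted edge covariances) and verifies that the block of the non-adjacent first and last vertices is zero:

    import numpy as np

    k = 2                     # feature vector length per vertex
    edges = [(0, 1), (1, 2)]  # chain graph: 0 - 1 - 2
    n_vertices = 3

    Q = np.zeros((n_vertices * k, n_vertices * k))
    for i, j in edges:
        # Stand-in for the inverted covariance of the concatenated
        # edge feature [x_i^T, x_j^T]^T (a random SPD matrix).
        A = np.random.randn(2 * k, 2 * k)
        Q_e = A @ A.T + np.eye(2 * k)
        # Scatter the four k x k blocks of Q_e into the full Q.
        Q[i*k:(i+1)*k, i*k:(i+1)*k] += Q_e[:k, :k]
        Q[i*k:(i+1)*k, j*k:(j+1)*k] += Q_e[:k, k:]
        Q[j*k:(j+1)*k, i*k:(i+1)*k] += Q_e[k:, :k]
        Q[j*k:(j+1)*k, j*k:(j+1)*k] += Q_e[k:, k:]

    # The first and last vertices are not connected by an edge,
    # so their off-diagonal block is exactly zero.
    assert np.allclose(Q[0:k, 2*k:3*k], 0)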

References

[1] H. Rue and L. Held. “Gaussian Markov Random Fields: Theory and Applications”, CRC Press, 2005.

[2] E. Antonakos, J. Alabort-i-Medina, and S. Zafeiriou. “Active Pictorial Structures”, IEEE Conference on Computer Vision & Pattern Recognition (CVPR), Boston, MA, USA, pp. 5435-5444, June 2015.

increment(samples, n_samples=None, verbose=False)[source]

Update the mean and precision matrix of the GMRF by updating the distributions of all the edges.

Parameters
  • samples (list or iterable of Vectorizable) – List or iterable of samples to build the model from.

  • n_samples (int, optional) – If provided, then samples must be an iterator that yields n_samples. If not provided, then samples has to be a list (so that we know how large the data matrix needs to be).

  • verbose (bool, optional) – If True, the progress of the model’s incremental update is printed.
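
A brief sketch, assuming the model was trained with incremental=True and reusing the toy setup from the construction example (new_samples is hypothetical):

    model = GMRFModel(samples, graph, incremental=True)
    new_samples = [PointCloud(np.random.randn(3, 2)) for _ in range(20)]
    model.increment(new_samples)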

mahalanobis_distance(samples, subtract_mean=True, square_root=False)[source]

Compute the Mahalanobis distance given a sample \(\mathbf{x}\) or an array of samples \(\mathbf{X}\), i.e.

\[\sqrt{(\mathbf{x}-\boldsymbol{\mu})^T \mathbf{Q} (\mathbf{x}-\boldsymbol{\mu})} \text{ or } \sqrt{(\mathbf{X}-\boldsymbol{\mu})^T \mathbf{Q} (\mathbf{X}-\boldsymbol{\mu})}\]

Parameters
  • samples (Vectorizable or list of Vectorizable) – The new data sample or a list of samples.

  • subtract_mean (bool, optional) – When True, the mean vector is subtracted from the data vector.

  • square_root (bool, optional) – If False, the squared Mahalanobis distance is returned.
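
For example, with the model from the construction sketch (the test sample is hypothetical):

    test_sample = PointCloud(np.random.randn(3, 2))
    d_sq = model.mahalanobis_distance(test_sample)  # squared distance
    d = model.mahalanobis_distance(test_sample, square_root=True)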

mean()[source]

Return the mean of the model.

Returns

mean (Vectorizable) – The mean of the model.
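
With the PointCloud-based sketch above, the mean comes back as the same Vectorizable type as the training samples:

    mean_shape = model.mean()  # a PointCloud in the toy setup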

principal_components_analysis(max_n_components=None)[source]

Returns a PCAModel with the principal components.

Note that the eigenvalue decomposition is applied directly on the precision matrix and then the eigenvalues are inverted.

Parameters

max_n_components (int or None, optional) – The maximum number of principal components. If None, all the components are returned.

Returns

pca (PCAModel) – The PCA model.
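
A usage sketch, continuing the toy setup above (the component cap is an arbitrary choice):

    pca = model.principal_components_analysis(max_n_components=5)
    print(pca.n_components)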