Haralick Textural Features

From David's Wiki

Revision as of 14:45, 3 May 2022

These are a set of image features introduced by Robert Haralick et al.

Algorithm

Texture vs Tone

Each image will have a tone and a texture:

  • Tone - average color in a patch
  • Texture - "variation of features of discrete gray tone"

Gray-Tone Spatial-Dependence matrices

Today, these are known as Gray Level Co-occurrence Matrix (GLCM).

For an image with \(\displaystyle N_g\) gray tones, a Gray-Tone Spatial-Dependence matrix is an \(\displaystyle N_g \times N_g\) symmetric matrix where entry \(\displaystyle i,j\) contains the number of occurrences where a pixel with value \(\displaystyle i\) neighbors a pixel with value \(\displaystyle j\).

In an image each pixel will have eight neighboring pixels, except at the edges:

Nearest neighbors to *, labeled by angle:

  135°   90°   45°
   0°     *     0°
   45°   90°  135°

Then \(\displaystyle P(i,j,d,\alpha)\) is the number of occurrences where a pixel with value \(\displaystyle i\) and a pixel with value \(\displaystyle j\) are distance \(\displaystyle d\) apart along angle \(\displaystyle \alpha \in \{0^\circ, 45^\circ, 90^\circ, 135^\circ\}\).

If we fix d=1, then we get four matrices of co-occurrences, one along each direction:

  • \(\displaystyle P_{H} = \{P(i,j,1, 0^\circ)\}\)
  • \(\displaystyle P_{V} = \{P(i,j,1, 90^\circ)\}\)
  • \(\displaystyle P_{LD} = \{P(i,j,1, 135^\circ)\}\)
  • \(\displaystyle P_{RD} = \{P(i,j,1, 45^\circ)\}\)

For the horizontal direction in an \(\displaystyle N_x \times N_y\) image, each of the \(\displaystyle N_y\) rows contributes \(\displaystyle 2(N_x-1)\) neighboring pairs (each pair counted in both orders), so in total \(\displaystyle R=2 N_y(N_x-1)\); similarly, the vertical direction has \(\displaystyle R=2 N_x(N_y-1)\) pairs.
For the diagonal directions, there are \(\displaystyle R=2 (N_x - 1)(N_y-1)\) pairs.
Each co-occurrence matrix \(\displaystyle P\) can be normalized by dividing each entry by \(\displaystyle R\) to get \(\displaystyle p= P/R\).
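As a sketch of the construction above (plain NumPy, not any particular library's API), the four normalized d=1 matrices for a small 4×4 test image with four gray tones:

```python
import numpy as np

def glcm(img, n_gray, offset):
    """Symmetric gray-tone spatial-dependence matrix for one (row, col)
    offset at distance d=1, normalized by the pair count R."""
    dr, dc = offset
    P = np.zeros((n_gray, n_gray))
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                # count each pair in both orders so P is symmetric
                P[img[r, c], img[r2, c2]] += 1
                P[img[r2, c2], img[r, c]] += 1
    return P / P.sum()  # p = P / R

# 4x4 test image with N_g = 4 gray tones (values 0..3)
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])

# (row, col) offsets for the four d=1 directions
offsets = {"0":   (0, 1),    # horizontal, P_H
           "90":  (1, 0),    # vertical, P_V
           "45":  (-1, 1),   # right diagonal, P_RD
           "135": (-1, -1)}  # left diagonal, P_LD
p = {angle: glcm(img, 4, off) for angle, off in offsets.items()}
```

Here the horizontal pair count is R = 2·4·(4−1) = 24 and each diagonal has R = 2·3·3 = 18, matching the formulas above; each normalized matrix sums to 1.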

Features

There are 14 values Haralick et al. compute per co-occurrence matrix. The mean and range of each value over the four directional matrices are used to get 28 features.

  1. Angular second moment: \(\displaystyle f_1 = \sum_i \sum_j p(i,j)^2\)
  2. Contrast: \(\displaystyle f_2 = \sum_{n=0}^{N_g-1} n^2 \{ \underset{|i-j|=n}{\sum_{i=1}^{N_g}\sum_{j=1}^{N_g}} p(i,j) \}\)
  3. Correlation: \(\displaystyle f_3 = \frac{\sum_i \sum_j (ij)p(i,j) - \mu_x \mu_y}{\sigma_x \sigma_y}\)
  4. Sum of squares variance: \(\displaystyle f_4 = \sum_i \sum_j (i-\mu)^2 p(i,j)\)
  5. Inverse difference moment: \(\displaystyle f_5 = \sum_i \sum_j \frac{1}{1+(i-j)^2} p(i,j)\)
  6. Sum Average: \(\displaystyle f_6 = \sum_{i=2}^{2N_g} ip_{x+y}(i)\)
  7. Sum Variance: \(\displaystyle f_7 = \sum_{i=2}^{2N_g} (i-f_6)^2 p_{x+y}(i)\)
    • Note that the original paper has a typo here: it writes \(\displaystyle f_8\) in this formula, which is usually read as \(\displaystyle f_6\) (the sum average).
  8. Sum Entropy: \(\displaystyle f_8 = -\sum_{i=2}^{2N_g} p_{x+y}(i) \log p_{x+y}(i)\)
  9. Entropy: \(\displaystyle f_9 = - \sum_i \sum_j p(i,j) \log(p(i,j))\)
  10. Difference Variance: \(\displaystyle f_{10} = \operatorname{var}(p_{x-y})\)
  11. Difference Entropy: \(\displaystyle f_{11} = -\sum_{i=0}^{N_{g}-1} p_{x-y}(i) \log p_{x-y}(i)\)
  12. Information Measures of Correlation 1: \(\displaystyle f_{12} = \frac{HXY - HXY1}{\max\{HX, HY\}}\)
  13. Information Measures of Correlation 2: \(\displaystyle f_{13} = \sqrt{1 - \exp[-2(HXY2 - HXY)]}\)
    • Here \(\displaystyle HXY = f_9\); \(\displaystyle HX, HY\) are the entropies of \(\displaystyle p_x, p_y\); \(\displaystyle HXY1 = -\sum_i \sum_j p(i,j) \log(p_x(i) p_y(j))\); and \(\displaystyle HXY2 = -\sum_i \sum_j p_x(i) p_y(j) \log(p_x(i) p_y(j))\).
  14. Maximal Correlation Coefficient: \(\displaystyle f_{14} = (\text{second largest eigenvalue of } Q)^{1/2}\) where \(\displaystyle Q(i,j) = \sum_k \frac{p(i,k)p(j,k)}{p_x(i)p_y(k)}\)
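The maximal correlation coefficient can be sketched as follows (plain NumPy; assumes every gray tone occurs in the image, so the marginals are nonzero and Q is well defined):

```python
import numpy as np

def max_corr_coeff(p):
    """f14 = sqrt(second-largest eigenvalue of Q), where
    Q(i,j) = sum_k p(i,k) p(j,k) / (p_x(i) p_y(k)).
    Assumes nonzero marginals (every gray tone occurs)."""
    px = p.sum(axis=1)  # p_x(i)
    py = p.sum(axis=0)  # p_y(k)
    Q = (p / px[:, None]) @ (p / py[None, :]).T
    # Q is row-stochastic, so its largest eigenvalue is 1;
    # sort eigenvalues descending and take the second.
    eig = np.sort(np.linalg.eigvals(Q).real)[::-1]
    return np.sqrt(max(eig[1], 0.0))  # clip tiny negatives from round-off

# normalized horizontal GLCM of a small test image
p = np.array([[4, 2, 1, 0],
              [2, 4, 0, 0],
              [1, 0, 6, 1],
              [0, 0, 1, 2]]) / 24.0
f14 = max_corr_coeff(p)
```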

Notation:

  • \(\displaystyle p(i,j)\) = the \(\displaystyle i,j\) entry of the normalized co-occurrence matrix
  • \(\displaystyle p_x(i) = \sum_j p(i,j)\) and \(\displaystyle p_y(j) = \sum_i p(i,j)\) = marginal probabilities
  • \(\displaystyle p_{x+y}(k) = \underset{i+j=k}{\sum_i \sum_j} p(i,j)\) for \(\displaystyle k = 2, \dots, 2N_g\)
  • \(\displaystyle p_{x-y}(k) = \underset{|i-j|=k}{\sum_i \sum_j} p(i,j)\) for \(\displaystyle k = 0, \dots, N_g-1\)
  • \(\displaystyle N_g\) = number of gray tones
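Using this notation, a sketch of a few of the features computed from one normalized matrix (plain NumPy; `haralick_subset` is a hypothetical helper name, not a library function):

```python
import numpy as np

def haralick_subset(p):
    """A few of the 14 features from one normalized Ng x Ng GLCM p.
    A sketch following the formulas above, not a reference implementation."""
    n_gray = p.shape[0]
    i, j = np.indices((n_gray, n_gray)) + 1     # gray tones indexed 1..Ng
    f1 = (p ** 2).sum()                          # angular second moment
    f2 = sum(n ** 2 * p[abs(i - j) == n].sum()   # contrast
             for n in range(n_gray))
    f5 = (p / (1 + (i - j) ** 2)).sum()          # inverse difference moment
    k = np.arange(2, 2 * n_gray + 1)
    p_sum = np.array([p[i + j == m].sum() for m in k])  # p_{x+y}
    f6 = (k * p_sum).sum()                       # sum average
    f7 = ((k - f6) ** 2 * p_sum).sum()           # sum variance
    nz = p[p > 0]                                # skip log(0) terms
    f9 = -(nz * np.log(nz)).sum()                # entropy
    return f1, f2, f5, f6, f7, f9

# normalized horizontal GLCM of a small test image
p = np.array([[4, 2, 1, 0],
              [2, 4, 0, 0],
              [1, 0, 6, 1],
              [0, 0, 1, 2]]) / 24.0
f1, f2, f5, f6, f7, f9 = haralick_subset(p)
```

The 28 final features are then the mean and range of each such value over the four directional matrices.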

Resources

Implementations