METHOD FOR CHARACTERISING THE ANISOTROPY OF THE TEXTURE OF A DIGITAL IMAGE

Abstract
This characterizing method comprises:
 estimating (28) the scalar coefficients τ_{m }of an even function τ(θ) defined in [0; 2π] that minimizes the following criterion C:
where:
 β_{j }are terms estimated from an acquired digital image,
 τ(θ) is a π-periodic function defined in the interval [0; 2π],
 Γ(θ) is the function defined by the following relationship:


 where:
 {circumflex over (V)} is the discrete Fourier transform of a convolution kernel v,
 H is an estimated Hurst exponent of the acquired image,
f) then, calculating (30), depending on the estimate of the scalar coefficients τ_{m}, an anisotropy index that characterizes the anisotropy of the image, this index varying monotonically as a function of the statistical dispersion of the values of the function τ(θ) for θ varying between 0 and π.
11 Claims
 1. A method comprising characterizing the anisotropy of the texture of a digital image, wherein characterizing comprises:
 a) acquiring a digital image formed from pixels, each pixel being associated with a light intensity and with a position in space ℝ^{d}, where d is a natural integer higher than or equal to two;
b) automatically converting the acquired image in order to obtain a converted image I_{j,k}, the conversion comprising applying a modification T_{j,k} of the image that makes each pixel of the acquired image rotate from one position to the next by an angle α_{j} about a point or an axis and that enlarges or shrinks the image by a factor γ_{k}, where α_{j}=arg(u_{jk}) and γ_{k}=‖u_{jk}‖², u_{jk} being a vector that completely characterizes the modification T_{j,k}, the indices j and k uniquely identifying the angle α_{j} and the factor γ_{k}, respectively; then, for each converted image, calculating a K-increment V_{j,k}[m] for each pixel of position m of the converted image, this K-increment being calculated by applying a convolution kernel v by means of the following formula;
Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11.
Specification
The invention relates to a method for characterizing the anisotropy of the texture of a digital image. The invention also relates to a method for classifying digital images depending on the anisotropy of their texture. The invention lastly relates to a data storage medium and to a computer for implementing these methods.
Patent application WO 2016/042269 A1 describes a method for estimating the Hurst exponent H of the texture of an image, together with terms β_{j} that vary depending on the characteristics of the texture of this image in one particular direction corresponding to an angle α_{j}. This method is very good at identifying the anisotropy of an image.
It has also been proposed to construct an index A that characterizes the anisotropy of the texture of the image from the terms β_{j}. This index is called the “anisotropy index”. For example, the calculation of an anisotropy index A from the terms β_{j} is described in the following articles:
 F. J. P. Richard: “Analysis of anisotropic brownian textures and application to lesion detection in mammograms”, Procedia Environmental Sciences 27 (2015), pages 16–20,
 F. J. P Richard: “Anisotropy indices for the characterization of brownian textures and their application for breast images”, 2016
 F. J. P. Richard: “Some anisotropy indices for the characterization of Brownian texture and their application to breast images”, SPATIAL STATISTICS, vol. 18, Feb. 17, 2016, pages 147–162,
 F. J. P. Richard: “Tests of isotropy for rough textures of trended images”, STATISTICA SINICA, vol. 26, no. 3, Jul. 1, 2016, pages 1–32.
In these articles, the index A is dependent on the average of the terms β_{j}. For a given angle α_{j}, the term β_{j }varies as a function of characteristics of the texture in this given direction but also as a function of the Hurst exponent H. It will be recalled that the Hurst exponent H is a global characteristic of the texture that is independent of the orientation of the image. Thus, when a variation in the term β_{j }is observed, it is not possible to know simply whether this variation is due to a modification of the anisotropy of the image or to a modification of the texture in its entirety and therefore of the Hurst exponent H. Thus, this index A varies not only as a function of the anisotropy of the texture of the image but also as a function of the Hurst exponent H of the texture of this image.
The invention aims to provide a method for characterizing the anisotropy of an image using an anisotropy index that varies as a function of the anisotropy of the texture while being much less sensitive to variations in the Hurst exponent H of the same texture. One subject thereof is therefore a method as claimed in claim 1.
The claimed method estimates, from the terms β_{j}, the coefficients of a function τ(θ), here referred to as the asymptotic topothesia function. This function τ(θ) has the particularity of returning a value that characterizes the texture of the image in the direction θ while being almost completely independent of the value of the Hurst exponent H associated with the same texture. Constructing the anisotropy index in this way, so that it varies monotonically as a function of the statistical dispersion of the function τ(θ), therefore yields an index that varies as a function of the anisotropy of the texture while being practically independent of the value of the Hurst exponent H of the same texture.
Embodiments of this method may have one or more of the features of the dependent claims.
These embodiments of the characterizing method furthermore have the following advantages:
 The calculation of the estimates of the scalar coefficients of the function τ(θ) by virtue of a linear relationship between these coefficients and the terms β_{j} very substantially accelerates the execution of the characterizing method.
 The fact that the optimal value of the parameter λ is calculated allows the estimation of the scalar coefficients of the function τ(θ) to be improved, and therefore an index that is even less sensitive to variations in the Hurst exponent H to be obtained.
Another subject of the invention is a method for automatically classifying digital images depending on the anisotropy of their texture.
The invention also relates to a data storage medium, containing instructions for implementing the claimed method, when these instructions are executed by a computer.
The invention also relates to a computer for implementing the claimed method.
The invention will be better understood on reading the following description, which is given merely by way of nonlimiting example with reference to the appended drawings, in which:
In these figures, the same references have been used to reference elements that are the same. In the rest of this description, features and functions well known to those skilled in the art have not been described in detail.
In this description, the following mathematical notations and conventions have been adopted, unless otherwise indicated:
 the interval [X, Y] is the interval of all the integer numbers higher than or equal to X and lower than or equal to Y, where X and Y are themselves integers;
 a vector A in a space of d dimensions (such as ℝ^{d}) has coordinates A_{1}, A_{2}, . . . , A_{d};
 [0, X]^{d} is the product [0,X_{1}]×[0,X_{2}]× . . . ×[0,X_{d}], where X is a vector of ℝ^{d} of coordinates X_{1}, X_{2}, . . . , X_{d}, such that the ith coordinate U_{i} of a vector U of [0, X]^{d} belongs to the interval [0,X_{i}], where i is an index higher than or equal to 1 and lower than or equal to d;
 |X| is the sum of the components of the vector X, such that |X|=X_{1}+X_{2}+ . . . +X_{d};
 ‖X‖² is the Euclidean norm to the power of two of the vector X, such that ‖X‖²=X_{1}^{2}+X_{2}^{2}+ . . . +X_{d}^{2}.
In this description, anisotropy is understood to mean the fact that the properties of the texture of the image differ depending on the direction in which they are observed.
The texture of a digital image is generally defined relative to the spatial distribution of variations in the intensity and/or variations in the tone of the pixels forming the digital image. Texture is a manifestation of the Hölderian regularity of the image. The notion of texture is for example defined:
 in the work “Handbook of Texture Analysis”, M. Mirmehdi et al., eds., World Scientific, October 2008 in the chapter “Introduction to Texture Analysis” by E. R. Davies, or even:
 in section “I. Introduction” of the article by Robert M. Haralick et al., “Textural features for image classification”, IEEE Transactions on Systems, Man and Cybernetics, vol. SMC-3, no. 6, pages 610–621, November 1973.
The anisotropy of an image may be caused by two factors: texture and “tendency”. Typically, texture corresponds to shortrange (i.e. highfrequency) variations in the intensity of the pixels whereas tendency relates to longerrange (i.e. lowfrequency) variations in the intensity of the pixels.
Here, it is the texture, and above all its anisotropy, that is of interest for characterization of the image 2. For example, when the image 2 represents a biological tissue, the anisotropic character of the texture of the image may give an indication as to the presence or the risk of development of cancerous cells within this tissue. In this example, the image 2 is a mammogram.
Image 2 is formed from a plurality of pixels. Each pixel is associated with:
 a pixel intensity value, and with
 a position p in the space ℝ^{d},
where d is a natural integer higher than or equal to 2 that represents the dimension of the image 2. Here, in this example, d=2.
Thus, spatially, the pixels of the image 2 are arranged in a lattice in the space ℝ^{d}. Preferably, the resolution of the image is the same along all the d axes of the image. Below, the set of possible positions of the pixels of the image 2 is denoted [0, N]^{d}, where N is a vector that codes the size of the image and the components of which are strictly positive natural integers, so that N belongs to ℕ^{d}. This notation means that the coordinates p_{1}, p_{2}, . . . , p_{d} of the position p of a pixel of the image belong, respectively, to the sets [0,N_{1}], [0,N_{2}], . . . , [0,N_{d}], where N_{1}, N_{2}, . . . , N_{d} are the coordinates of N. Here, the image 2 has a square shape of (N_{1}+1)*(N_{2}+1) size, where N_{1}+1=N_{2}+1 and N_{1}+1 is the length of one side of this square expressed in number of pixels. For example, the image 2 is a zone of interest extracted from an image of larger size. The sides of the image 2 have a length larger than or equal to 50 pixels or to 100 pixels or to 500 pixels.
In this example, the light intensity of the pixels is encoded in grayscale, in 8bit grayscale for example. The pixel intensity values are integer numbers belonging to the interval [0,255].
The device 12 comprises to this end:
 a programmable computer 14, such as a microprocessor,
 a data storage medium 16, such as a memory,
 an interface 18 for acquiring a digital image.
The interface 18 allows the image 2 to be acquired. Typically, the digital image is generated by an electronic imagecapturing apparatus such as a radiography apparatus. The computer 14 executes the instructions stored in the medium 16. The medium 16 in particular contains instructions for implementing the method of
The anisotropy of the image 2 is identified and characterized using a certain number of operations.
In this example, the image 2 is modelled as an intrinsic random Gaussian field. In other words, the intensity value associated with each pixel of the image 2 is said to correspond to a Gaussian random variable Z. The concept of intrinsic random Gaussian fields is defined in more detail in the following work: J. P. Chilès et al., “Geostatistics: Modeling Spatial Uncertainty”, J. Wiley, 2nd edition, 2012.
The intensity value associated with the pixel the position of which in the image 2 is given by the position p is denoted Z[p]. For example, an orthonormal coordinate system having as origin the null vector (0)^{d} is defined in ℝ^{d}. The position p belongs to ℝ^{d}.
An example of an implementation of the method for automatically characterizing the anisotropy of a texture will now be described with reference to the flowchart of
In a step 20, the image 2 is automatically acquired by the interface 18 and stored, for example, in the medium 16. The notation “I” will be used below to refer to this image 2.
In this example, of dimension d=2, the normalized image 2 is modelled by a square matrix Z of (N_{1}+1)*(N_{2}+1) size. The coefficients of this matrix Z are the Z[p] corresponding to the intensity of the pixels of position p. The components of the vector p give the position of this coefficient in the matrix Z. For example, Z[p] is the coefficient of the p_{1}th row and the p_{2}th column of Z, where p_{1 }and p_{2 }are the coordinates of the position p in [0, N]^{2}.
Next, in a step 22, geometric conversions are applied to the image 2 in order to obtain a series of converted images I_{j,k}. These conversions comprise modifications T_{j,k }of the image 2 that each include:
 a rotation by an angle α_{j}, and
 scaling by a scale factor γ_{k}.
Below, the image obtained after the modification T_{j,k }has been applied to the acquired image I is denoted T_{j,k}(I).
Each modification T_{j,k} is characterized uniquely by a vector u_{jk} of the space ℤ^{2}\{(0,0)}, such that α_{j}=arg(u_{jk}) and γ_{k}=‖u_{jk}‖². The space ℤ^{2}\{(0,0)} is the space ℤ^{2} deprived of the point of coordinates (0,0).
The indices “j” and “k” are integer indices that uniquely identify the angle α_{j }and the factor γ_{k}, respectively. The index j varies between 1 and n_{j}. To simplify notation, “rotation j” and “scaling k” will be spoken of below when referring to a rotation of angle α_{j }and to scaling by a factor γ_{k}, respectively.
The rotation j makes each of the pixels of the image 2 rotate by the angle α_{j }from a starting position to an end position about the same predetermined point or the same predetermined axis. Typically, this point or this axis of rotation passes through the geometric center of the image. The rotation here occurs about the geometric center of the image. The geometric center of a digital image is defined as being the centroid of the positions of all of the pixels of the image, each weighted by a coefficient of the same value.
The scaling k enlarges or shrinks the image via a homothety of factor γ_{k}. In the following examples, the center of the homothety is the geometric center of the image.
These modifications T_{j,k} are applied for at least two, and, preferably, at least three or four angles α_{j} of different values. Advantageously, the various values of the angles α_{j} are distributed as uniformly as possible between 0° and 180° while respecting the constraint that the vector u_{jk} must belong to the space ℤ^{2}\{(0,0)}. The number n_{j} of different values for the angle α_{j} is generally chosen not to be too high in order to limit the number of calculations to be carried out. For example, this number n_{j} is chosen to be lower than 150 or 100. A good compromise consists in choosing at least four different values for the angle α_{j} and, preferably, at least 10 or 20 different values. For each angle α_{j}, modifications T_{j,k} are applied for at least two, and, preferably, at least three or four or five, different scalings γ_{k}.
The values of the factor γ_{k }are for example higher than or equal to 1 and lower than or equal to 10^{2 }or 8^{2 }or 4^{2}. Preferably, the various values of the factor γ_{k }are distributed as uniformly as possible in the chosen interval of values. For example, here, the scalings γ_{k }used are all scalings for which the following condition is met: the Euclidean norm of the vector u_{jk }belongs to the interval [√2; 10].
For example, the angles of rotation α_{j }are chosen depending on the horizontal and vertical directions of the image 2. For example, to make two rotations j1 and j2, values α_{j1}=90° and α_{j2}=180° are chosen, where j1 and j2 are particular values of the index j. The angles are here expressed with respect to the horizontal axis of the image 2.
In this example, of dimension d=2, the modifications T_{j,k }applied are the following, expressed here in matrix form:
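The matrix form of the modifications is not reproduced here, but a plausible 2-D sketch can be given under one assumption: that T_{j,k} is the rotation-scaling matrix built directly from the integer vector u_{jk}, which is consistent with the requirement, discussed below, that transformed pixels land on existing lattice positions. The function name is illustrative only.

```python
import numpy as np

def modification_matrix(u):
    """Rotation-scaling matrix T associated with an integer vector u = (u1, u2).

    This matrix maps the pixel lattice to itself, rotating by arg(u); the
    document parameterizes the scaling through gamma = |u|^2.  The exact
    matrix form is an assumption, not reproduced from the patent.
    """
    u1, u2 = u
    return np.array([[u1, -u2],
                     [u2,  u1]])

# Example: u = (1, 1) gives alpha_j = 45 degrees and gamma_k = |u|^2 = 2,
# and |u| = sqrt(2) satisfies the stated constraint |u| in [sqrt(2); 10].
u = (1, 1)
T = modification_matrix(u)
alpha = np.degrees(np.arctan2(u[1], u[0]))
gamma = u[0] ** 2 + u[1] ** 2
```

Because the entries of T are integers, T·p is always a lattice point, which is what makes the later selection of the set E possible without interpolation.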
In step 22, K-increments are calculated for each of the converted images T_{j,k}(I). This calculation includes filtering intended to remove tendencies of polynomial form of order strictly lower than K. More precisely, for each image T_{j,k}(I), a filter is applied allowing the K-increment V_{j,k} of this image T_{j,k}(I) to be calculated. It is the K-increment of this image T_{j,k}(I) that forms the converted image I_{j,k}. The K-increment of this image is not calculated for all the points of the image T_{j,k}(I), but only for certain of them, as will be seen below.
The K-increment concept is for example defined in more detail in the following work: J. P. Chilès et al., “Geostatistics: Modeling Spatial Uncertainty”, J. Wiley, 2nd edition, 2012.
Here, the filtering is carried out by means of a convolution kernel denoted “v”, in order to achieve linear filtering. Below, the “filter v” will be spoken of when referring to this convolution kernel.
The filter v is defined for the set [0,L]^{d}. This filter v is characterized by a characteristic polynomial Q_{v}(z) defined by:
Here, the filter v is a matrix and the quantity v[p] is a particular scalar value for this filter for the position p, where p is a vector of [0,L]^{d}. This value v[p] is zero if the vector p does not belong to [0,L]^{d}. Equivalently, the filter v is also said to have a bounded support in [0,L]^{d}. This filter v is different from the null function that, for any value of the vector p, has a value v[p] of zero. The notation z^{p }here designates the monomial z_{1}^{p1}*z_{2}^{p2}* . . . *z_{d}^{pd}.
The filter v is thus parameterized with the vector L, which is a vector of [0, N]^{d}. Generally, the vector L is chosen so as to be contained in the image I. Therefore, preferably, values of L are used that respect, for any i ranging from 1 to d, the relationship L_{i}<<N_{i}, i.e. N_{i} is at least 10 times or 100 times higher than L_{i}.
In addition, the filter v is such that its characteristic polynomial Q_{v}(z) meets the following condition:
∀a∈[0,K]^{d} such that |a|≤K, then
where the constant K is a nonzero natural integer and ∂^{a}Q_{v}/∂z_{1}^{a1} . . . ∂z_{d}^{ad} is the partial derivative of the polynomial Q_{v}(z) with respect to the components of the vector z, the symbol ∂z_{i}^{ai} indicating a differentiation of the polynomial Q_{v}(z) of order a_{i} with respect to the variable z_{i}, where z_{i} is the ith component of the vector z and a_{i} is the ith component of the vector a, i being an integer index higher than or equal to 1 and lower than or equal to d.
Filtering the image T_{j,k}(I) with the filter v allows the effect of “tendency” to be removed from the subsequent calculations of the method when said tendency takes the form of a polynomial of order P_{o}, provided that the value of the constant K is chosen as follows:
 K≥P_{o}+1 if d is lower than or equal to 4, and
 K≥P_{o}/2+d/4 if d>4.
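The rule above for choosing the order K can be sketched as a small helper. The function name is hypothetical, and taking the ceiling in the d > 4 case is an assumption, since K must be an integer:

```python
import math

def min_increment_order(P_o, d):
    """Smallest admissible increment order K for a polynomial tendency of
    degree P_o in dimension d, following the rule stated above:
    K >= P_o + 1 when d <= 4, and K >= P_o/2 + d/4 otherwise."""
    if d <= 4:
        return P_o + 1
    return math.ceil(P_o / 2 + d / 4)

# For a mammogram (d = 2) with a tendency of degree P_o = 2, K = 3 suffices.
min_increment_order(2, 2)
```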
The K-increments of the image T_{j,k}(I), denoted V_{j,k}, are then calculated by virtue of the filter v as follows:
where:
 V_{j,k}[m] is a K-increment calculated in the image T_{j,k}(I) for the pixel of position m, with m a vector belonging to a set E that will be defined below;
 the product T_{j,k}·p corresponds to the application of the modification T_{j,k} to the pixel of position p of the image I and expresses the coordinates in ℝ^{d}, after application of the modification T_{j,k}, of the pixel that initially had the position p in the image I,
 v[p] is the value of the filter v for the value of p.
For each image T_{j,k}(I), the K-increment is calculated only for those pixels of the image T_{j,k}(I) the positions of which belong to a set E. The set E only contains positions:
 that belong to the image I, and
 that, whatever the applied modification T_{j,k}, occupy a position that already exists in the image I, after application of this modification T_{j,k}.
Furthermore, for any position m belonging to E, the pixels of positions “m−T_{j,k}·p” occupy a position contained in the image I.
The number of positions that belong to the set E is denoted n_{E}.
Thus, quadratic variations are calculated solely for those points of the converted image for which no interpolation is necessary. It is thus possible to make use of rotations j of any angle, in contrast to the case of projections. Specifically, if a projection is carried out in a, for example, diagonal direction of the image, some projected points have a position that does not belong to the set [0,N]^{d}. In other words, they do not form part of the lattice. It is therefore necessary to employ an interpolation in order to determine the intensity value associated with the points that do belong to this lattice. This introduces an approximation and therefore an error. With the modifications T_{j,k }and then the selection of the points of the set E, the reliability of the method is improved.
In this example, the filtering is carried out within the same formula as the application of the modifications T_{j,k}.
With this choice of the constant K, the filtering generates increments V_{j,k}[m] of order K. This filtering makes it possible to not take into account any anisotropy of the images caused by tendency, but rather only the anisotropy of the texture of the underlying image. As a result, the reliability of the characterizing method is improved.
The constant K must therefore be chosen as described above depending on the nature of the tendency present in the image 2. Typically, in the case of a mammogram, the polynomial degree P_{o} of the tendency is lower than or equal to 2. For example, the value of the constant K is selected by a user of the method. To this end, step 22 here comprises acquiring a value of the vector L and a value of the constant K.
In this example, the filter v is chosen as follows:
if the vector p belongs to [0,L]^{d }and v[p]=0 if not, where the terms C^{p}_{L }are binomial coefficients.
With this particular filter, the condition expressed above on the characteristic polynomial Q_{v}(z) is met if K=|L|−1. Thus, the value of K is deduced from the acquired value of the parameter L.
Thus, in this particular case, filtering the image T_{j,k}(I) with the filter v allows the effect of “tendency” to be removed when the latter has a polynomial form of order P_{o}, provided that the parameter L is chosen as follows:
 |L|=P_{o}+2 if d is lower than or equal to 4, and
 |L|=P_{o}/2+d/4+1 if d>4.
In this example, of dimension d=2, the vector L has two components, L_{1} and L_{2}. To remove a tendency of polynomial degree P_{o}=2, it is necessary to choose L_{1} and L_{2} such that |L| is equal to four. Preferably, L_{1} and L_{2} are chosen such that L_{1}=4 and L_{2}=0. Specifically, by choosing values, for the coordinates of the vector L, that are sufficiently different from each other, the directional sensitivity of the filter is increased. Thus, it will react more markedly to, and will therefore filter more effectively, variations that are oriented in one particular direction. In contrast, a filter for which L_{1} and L_{2} are chosen such that L_{1}=2 and L_{2}=2 would be less sensitive to directional signals and less effective.
In this embodiment, the filter v is defined by the following relationship:
if the vector p belongs to [0,L_{1}]×{0} and v[p]=0 if not, where L_{1} belongs to ℕ\{0}. Here, L_{1}=4. Under these conditions, the order of the kernel v is equal to K=L_{1}−1.
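A kernel matching this description can be sketched under one assumption: that the binomial coefficients C^{p}_{L1} mentioned above carry alternating signs, so that the characteristic polynomial is Q_{v}(z)=(1−z)^{L1}, whose derivatives up to order L1−1 vanish, giving increments of order K=L1−1:

```python
from math import comb

def binomial_kernel(L1):
    """1-D kernel v[p] = (-1)^p * C(L1, p) for p = 0..L1 (assumed signs).

    Its characteristic polynomial is Q_v(z) = (1 - z)^L1, so the kernel
    annihilates polynomial tendencies of degree up to K - 1 = L1 - 2
    and yields increments of order K = L1 - 1.
    """
    return [(-1) ** p * comb(L1, p) for p in range(L1 + 1)]

# With L1 = 4 as above, the kernel is [1, -4, 6, -4, 1].
binomial_kernel(4)
```

Note that the kernel's moments of order 0 and 1 are zero, which is what removes constant and linear tendencies.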
In this step, step 22, for each different value of j and k, the computer 14 carries out in succession the following operations:
 application of the modification T_{j,k }to the image 2 in order to obtain the image T_{j,k}(I), then
 application of the filter v to the image T_{j,k}(I) in order to obtain the converted image I_{j,k}.
Next, in a step 24, for each image I_{j,k}, the p-variation W_{j,k} associated with this image I_{j,k} is calculated. The concept of p-variations is well known to those skilled in the art of statistics and probabilities. Here, it is calculated in the following way:
In the above equation, the symbol “q” is used instead of the symbol “p” conventionally used in this equation in order to avoid any confusion with the symbol “p” used in this description to designate the position of a pixel. In this example, a particular form of p-variations is used: “quadratic variations” or “2-variations”, for which q=2. Thus, the quadratic variation W_{j,k} of the image I_{j,k} is calculated in the following way, from the K-increments calculated after filtering in step 22:
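As an illustrative sketch, not the patent's exact procedure, the K-increments V_{j,k}[m] and the quadratic variation W_{j,k} of one converted image could be computed as follows, approximating the set E by the positions m for which every index m − T·p stays inside the image:

```python
import numpy as np

def quadratic_variation(Z, T, v):
    """K-increments V[m] = sum_p v[p] * Z[m - T @ p], followed by the
    quadratic variation W = (1/n_E) * sum_m V[m]^2 (the q = 2 case).

    Z : 2-D array of pixel intensities,
    T : 2x2 integer modification matrix,
    v : 1-D kernel supported on [0, L1] x {0}.
    """
    n1, n2 = Z.shape
    offsets = [T @ np.array([p, 0]) for p in range(len(v))]
    V, n_E = [], 0
    for m1 in range(n1):
        for m2 in range(n2):
            pos = [(m1 - o[0], m2 - o[1]) for o in offsets]
            if all(0 <= a < n1 and 0 <= b < n2 for a, b in pos):
                V.append(sum(c * Z[a, b] for c, (a, b) in zip(v, pos)))
                n_E += 1
    return np.sum(np.square(V)) / n_E
```

With the order-1 kernel [1, -2, 1], any affine intensity ramp yields W = 0, illustrating how the filtering discards the tendency.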
These quadratic variations W_{j,k }contain information that is important for the identification of anisotropy. To extract this information, the procedure is as follows.
In a step 26, a covariance analysis comprising a statistical regression is carried out on all the variations W_{j,k }calculated for each of the images I_{j,k }in order to estimate:
 the value of the Hurst exponent H of the image I and,
 a term β_{j}.
The statistical regression is defined by the following relationship:
log(W_{j,k}) = log(‖u_{jk}‖²)·H + β_{j} + ε_{j,k},
where:
 ‖u_{jk}‖² is the Euclidean norm to the power of two of the vector u_{jk},
 H is the Hurst exponent of the image I;
 β_{j} is a quantity that does not depend on the scaling k; this parameter is analogous to a so-called intercept parameter of the regression, except that it depends on the rotation j;
 ε_{j,k }is an error term of the regression the statistical properties of which are predetermined and set by the user. For example, the error terms ε_{j,k }are intercorrelated Gaussian random variables.
It will be recalled that the Hurst exponent H is a physical quantity that is independent of the rotations of the image.
Thus, a number n_{j }of terms β_{j }is obtained, n_{j }being the number of different rotations applied to the image I.
For example, if it is decided to make do with only the two rotations j1 and j2 described above, the regression is carried out on the basis of all the quadratic variations calculated for j1 and for j2. Thus, two terms β_{j1 }and β_{j2 }are obtained.
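A minimal sketch of such a regression, assuming plain ordinary least squares with one common slope H and one intercept β_j per rotation (the document allows more general, correlated error terms ε_{j,k}), could be:

```python
import numpy as np

def estimate_H_and_beta(W, gamma):
    """Least-squares fit of log(W[j, k]) = H * log(gamma[k]) + beta[j].

    W     : (n_j, n_k) array of quadratic variations,
    gamma : (n_k,) scale factors gamma_k = |u_jk|^2, assumed here to be
            the same for every rotation j (a simplification).
    Returns the estimated Hurst exponent H and the n_j intercepts beta_j.
    """
    n_j, n_k = W.shape
    y = np.log(W).ravel()
    X = np.zeros((n_j * n_k, 1 + n_j))
    X[:, 0] = np.tile(np.log(gamma), n_j)      # common slope: H
    for j in range(n_j):                       # one intercept per rotation j
        X[j * n_k:(j + 1) * n_k, 1 + j] = 1.0
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[0], coef[1:]
```

On synthetic data generated exactly from the model, the fit recovers H and the β_j up to numerical precision.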
At this stage, in a step 28, the computer 14 estimates the scalar coefficients τ_{m }of an even function τ(θ) called the asymptotic topothesia function. The function τ(θ) is continuous in the interval [0; 2π]. The function τ(θ) is a function that minimizes the following criterion C:
where:
 the symbol “τ*Γ(θ)” designates the function obtained by carrying out the circular convolution of the functions τ(θ) and Γ(θ),
 Γ(θ) is the function defined by the following relationship:
where:
 {circumflex over (V)} is the discrete Fourier transform of the kernel v,
 H is the Hurst exponent of the acquired image,
 ρ is the variable of integration.
The scalar coefficients τ_{m }of the function τ(θ) are defined by the following relationship:
where:
 M is an integer number higher than one,
 τ_{m }are the scalar coefficients of the function τ(θ),
 f_{m}(θ) are the functions of a basis of π-periodic functions defined in the interval [0; 2π].
A π-periodic function is a periodic function of period π.
In this embodiment, the π-periodic function basis used is a Fourier basis. Therefore, the function τ(θ) is here defined by the following relationship:
where τ_{0}, τ_{1,m} and τ_{2,m} are the scalar coefficients of the function τ(θ).
The number M is a constant that is defined beforehand, for example by the user. Generally, this number M is lower than the number n_{j} of different angles α_{j}. Typically, this number M is also higher than or equal to 2 or 4. Typically, the number M is chosen so that the number of scalar coefficients of the function τ(θ) lies in the interval [0.35n_{j}, 0.75n_{j}] or in the interval [0.45n_{j}, 0.55n_{j}].
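As a small illustration of this rule of thumb (a hypothetical helper; the document states the target interval for the coefficient count 2M+1 but no selection procedure):

```python
def fourier_order(n_j):
    """Smallest M whose coefficient count 2M + 1 reaches the lower bound
    0.35 * n_j of the recommended interval (sketch of the rule of thumb)."""
    M = 1
    while 2 * M + 1 < 0.35 * n_j:
        M += 1
    return M

# For n_j = 20 rotations: M = 3, i.e. 7 coefficients, within [7, 15].
fourier_order(20)
```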
In this embodiment, and in the context specified above, it has been possible to establish a linear relationship between an approximation τ* of the coefficients of the function τ(θ) and the estimated terms β_{j}. This relationship is the following:
τ*=(L^{T}L+λR)^{−1}L^{T}β (3)
where:
 τ* is the vector (τ_{0}*, τ_{1,1}*, τ_{2,1}*, τ_{1,2}*, τ_{2,2}*, . . . , τ_{1,M−1}*, τ_{2,M−1}*, τ_{1,M}*, τ_{2,M}*)^{T}, the coefficients τ_{0}*, τ_{1,1}*, τ_{2,1}*, τ_{1,2}*, τ_{2,2}*, . . . , τ_{1,M−1}*, τ_{2,M−1}*, τ_{1,M}*, τ_{2,M}* being the estimates of the coefficients τ_{0}, τ_{1,1}, τ_{2,1}, τ_{1,2}, τ_{2,2}, . . . , τ_{1,M−1}, τ_{2,M−1}, τ_{1,M}, τ_{2,M}, respectively,
 L is a matrix of n_{j}×(2M+1) size the kth row of which is defined by the following relationship:
({circumflex over (μ)}[0],{circumflex over (μ)}[1]cos(α_{k}),{circumflex over (μ)}[1]sin(α_{k}), . . . ,{circumflex over (μ)}[M]cos(Mα_{k}),{circumflex over (μ)}[M] sin(Mα_{k}))
 where {circumflex over (μ)}[0], {circumflex over (μ)}[1], . . . , {circumflex over (μ)}[M] are the coefficients of the discrete Fourier transform of the following function μ(θ): μ(θ)=cos(θ)^{2H}, where H is the Hurst exponent of the acquired image,
 λ is a predetermined parameter,
 R is a diagonal matrix of (2M+1)×(2M+1) size the coefficients of which on the diagonal are, in order: 0, 2, 2, 5, 5, . . . , (1+M^{2}), (1+M^{2}),
 the symbol “^{T}” designates the transpose operation,
 β is the vector (β_{1}, β_{2}, . . . , β_{nj−1}, β_{nj})^{T}.
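Relationship (3) can be sketched directly from these definitions, assuming the listed (2M+1)-tuples form the rows of L and that the Fourier coefficients μ̂[m] are supplied by the caller:

```python
import numpy as np

def estimate_tau(beta, alphas, mu_hat, lam):
    """Ridge-type estimate tau* = (L^T L + lam * R)^{-1} L^T beta.

    beta   : (n_j,) estimated regression intercepts beta_j,
    alphas : (n_j,) rotation angles alpha_j in radians,
    mu_hat : (M+1,) Fourier coefficients mu^[0..M] of mu(theta),
    lam    : penalization parameter lambda.
    """
    n_j, M = len(alphas), len(mu_hat) - 1
    L = np.zeros((n_j, 2 * M + 1))
    L[:, 0] = mu_hat[0]
    for m in range(1, M + 1):
        L[:, 2 * m - 1] = mu_hat[m] * np.cos(m * alphas)
        L[:, 2 * m]     = mu_hat[m] * np.sin(m * alphas)
    # Diagonal of R: 0, then 1 + m^2 twice for each m = 1..M (2, 2, 5, 5, ...).
    diag = [0.0] + [1.0 + m * m for m in range(1, M + 1) for _ in (0, 1)]
    R = np.diag(diag)
    return np.linalg.solve(L.T @ L + lam * R, L.T @ beta)
```

With λ = 0 and as many rotations as coefficients, the estimate reduces to an exact interpolation of β, which is a convenient sanity check.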
The inventors have also established that the optimum value of the parameter λ is equal or very close to a value λ*. The value λ* is obtained using the following relationship:
where:
 κ is equal to v_{+}/v_{−}, v_{+} and v_{−} being the lowest and the highest eigenvalues of the matrix L^{T}L, respectively,
 trace(X) is a function that returns the sum of the diagonal coefficients of a square matrix X,
 V(β) is the covariance matrix of the estimator of the vector β,
 ‖β‖² is the Euclidean norm to the power of two of the vector β.
Thus, in this embodiment, in step 28, the computer 14 calculates the value λ* using the above relationship. Next, it chooses a value of the parameter λ close to the value λ*. For example, it chooses, possibly randomly, a value of the parameter λ in the interval [0; 1.3λ*] or [0; 1.1λ*]. Most often, the value of the parameter λ is chosen in the interval [0.7λ*; 1.3λ*] or [0.9λ*; 1.1λ*]. Here, the value of the parameter λ is systematically set equal to the value λ*. Lastly, the computer estimates the coefficients τ_{0}, τ_{1,m}, τ_{2,m} using relationship (3).
The relationship (4) was established by searching for a numerical expression for the coefficients τ_{0}*, τ_{1,m}*, τ_{2,m}* that minimizes not the criterion C directly but a penalized criterion C_{A}. This penalized criterion C_{A }is for example the following:
The term that is subtracted from the criterion C in relationship (4) is known as the “penalty”.
In addition, in the case where the filter v is that defined by relationship (1), then the circular convolution τ*Γ(θ) may be written in the following form:
τ*Γ(θ)=γ·(τ*μ(θ)) (6)
where:
 μ(θ) is defined by the following relationship:
μ(θ)=|cos(θ)|^{2H }
 γ is a constant that is independent of the value θ,
 the symbol “*” is a circular convolution such that τ*μ(θ) is the function resulting from the circular convolution of the functions τ(θ) and μ(θ).
The constant γ is defined by the following relationship:
where the symbols used in the above relationship were defined above and ∥·∥^{2} is the squared Euclidean norm.
Thus, the penalized criterion C_{A }may be written in the following form:
C_{A}=∥Lτ−β∥^{2}+λτ^{T}Rτ
where the symbols L, τ, β, λ and R were defined above and ∥·∥^{2} is the squared Euclidean norm. The vector τ* is the solution that minimizes the criterion C_{A}.
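Since the criterion C_{A} is quadratic in τ, its minimizer has the closed form of a generalized ridge regression, τ* = (L^{T}L + λR)^{−1}L^{T}β. The following is a minimal numerical sketch of that solve; the matrix L and the vector β below are random stand-ins, not the quantities defined above:

```python
import numpy as np

def solve_penalized(L, beta, lam, R):
    """Minimize ||L @ tau - beta||^2 + lam * tau^T R tau.

    Closed-form generalized ridge solution:
    tau* = (L^T L + lam * R)^{-1} L^T beta.
    """
    A = L.T @ L + lam * R
    return np.linalg.solve(A, L.T @ beta)

# Illustrative stand-in data (not the actual matrices of the method).
rng = np.random.default_rng(0)
M = 3                                   # number of harmonics
n_j = 8                                 # number of rotation angles
L = rng.standard_normal((n_j, 2 * M + 1))
beta = rng.standard_normal(n_j)
# Diagonal penalty matrix: 0, 2, 2, 5, 5, ..., (1 + M^2), (1 + M^2).
diag = [0.0] + [1.0 + m**2 for m in range(1, M + 1) for _ in (0, 1)]
R = np.diag(diag)

tau_star = solve_penalized(L, beta, lam=0.1, R=R)
```

At the returned point the gradient of C_{A}, namely 2L^{T}(Lτ−β)+2λRτ, vanishes, which can be used as a sanity check of the solve.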
For a given value of the angle θ comprised between zero and π, the value of the function τ(θ) depends on the characteristics of the texture of the image in the direction θ. This value is independent of the value of the Hurst exponent H. Thus, the statistical dispersion of the values of the function τ(θ) for θ varying between 0 and π is representative of the anisotropy of the texture of the image. For example, the statistical dispersion of the function τ(θ) is represented by an index A that is dependent on the sum of the deviations ∥τ−M_{τ}∥_{Lp} for θ varying between 0 and π, where:
 τ is the function τ(θ),
 M_{τ }is an average value of the values of the function τ(θ) for θ varying from 0 to π or an approximation of this average, and
 ∥·∥_{Lp} is the L1 norm if p is equal to 1, the L2 norm if p is equal to 2 and so on, p being strictly higher than zero.
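As an illustration of this dispersion-based index, the following sketch computes A with p equal to 2 from sampled values of τ(θ); the sampled values are illustrative stand-ins, not values produced by the method above:

```python
import numpy as np

def anisotropy_index(tau_values, p=2):
    """Dispersion of sampled tau(theta) values around their mean.

    With p = 2 this is the square root of the sum of squared
    deviations, so a larger A indicates a more anisotropic texture.
    """
    tau = np.asarray(tau_values, dtype=float)
    m_tau = tau.mean()                  # M_tau: average of tau(theta)
    return np.sum(np.abs(tau - m_tau) ** p) ** (1.0 / p)

# Illustrative samples of tau(theta) for theta in [0, pi).
isotropic = [1.0, 1.0, 1.0, 1.0]        # flat tau(theta): no anisotropy
anisotropic = [0.2, 1.8, 0.2, 1.8]      # strongly direction-dependent
A_iso = anisotropy_index(isotropic)
A_aniso = anisotropy_index(anisotropic)
```

A flat τ(θ) gives A equal to zero, while a strongly direction-dependent τ(θ) gives a large A, in line with the statement that a higher index means a more anisotropic image.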
Thus, the statistical dispersion of the function τ(θ) may be the variance or the standard deviation of the values of this function in [0; π]. For example, here, Lp is chosen equal to 2 and the calculated index A is equal to the square root of the sum defined above. Thus, the higher the value of the index A, the greater the anisotropy of the image. Under these conditions, the index A is defined by the following relationship:
Here, in a step 30, the computer 14 calculates the index A using the following formula, which corresponds to relationship (7):
The flowchart of
 R. Johnson, P. Messier, W. Sethares, et al.: “Pursuing automated classification of historic photographic papers from raking light images”, J. Am. Inst. Conserv. 53(3):159–170, 2014, and
 P. Messier, R. Johnson, H. Wilhelm, W. Sethares, A. Klein, et al.: “Automated surface texture classification of inkjet and photographic media”, in NIP & Digital Fabrication Conference, pages 85–91, Society for Imaging Science and Technology, 2013.
In a step 40, a plurality of digital images 2 are automatically acquired. Among the acquired images, certain correspond to gloss paper, others to satin paper and others to matte paper.
In a step 42, for each of these images, the anisotropy index A and the Hurst exponent H are calculated by implementing the method of
In a step 44, the acquired images are automatically classified with respect to one another depending on their index A and exponent H calculated in step 42. This classification is for example achieved by means of a classifier such as a classifying algorithm based on neural networks or a support vector machine.
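As a hedged sketch of step 44, the following replaces the classifier (neural network or support vector machine) by a simple nearest-centroid rule in the (A, H) feature plane; the feature values and class labels below are illustrative stand-ins, not measured data:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Compute one centroid per class in the (A, H) feature space."""
    labels = sorted(set(y))
    return {c: np.mean([x for x, t in zip(X, y) if t == c], axis=0)
            for c in labels}

def nearest_centroid_predict(centroids, x):
    """Assign a sample to the class whose centroid is closest."""
    return min(centroids,
               key=lambda c: np.linalg.norm(np.asarray(x) - centroids[c]))

# Illustrative (A, H) pairs for two paper types; not measured values.
X = [(0.1, 0.3), (0.15, 0.35), (0.8, 0.6), (0.85, 0.55)]
y = ["matte", "matte", "gloss", "gloss"]
centroids = nearest_centroid_fit(X, y)
pred = nearest_centroid_predict(centroids, (0.12, 0.32))
```

Any classifier taking the two-dimensional feature vector (A, H) could be substituted here; the nearest-centroid rule is only the simplest such choice.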
The graph of
Variants of the Image:
The pixels of the image 2 may have other intensity values. The intensity value of each pixel may be a real number, or may take more than 256 levels. For example, the image 2 may be color-coded. In this case, the color image is separated into a plurality of monochromatic images, each corresponding to one of the color channels used to form the color image. The method is then applied separately to each of these monochromatic images.
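A minimal sketch of this per-channel treatment, assuming the color image is stored as a height × width × channels array; the `analyze` callable is a hypothetical stand-in for the anisotropy method applied to one monochromatic image:

```python
import numpy as np

def per_channel(color_image, analyze):
    """Apply a single-channel analysis to each color channel.

    color_image: array of shape (height, width, n_channels).
    analyze: callable taking one 2-D monochromatic image.
    """
    return [analyze(color_image[..., c])
            for c in range(color_image.shape[-1])]

# Illustrative use: the "analysis" here is just the mean intensity,
# standing in for the anisotropy computation.
rng = np.random.default_rng(1)
img = rng.random((4, 4, 3))             # toy RGB image with values in [0, 1)
results = per_channel(img, analyze=lambda mono: float(mono.mean()))
```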
The image 2 may not have a square shape. For example, when the dimension d=2, the image may have a rectangular or even a trapezoidal shape. When the image does not have a regular shape, the notions of “horizontal” and “vertical” direction are replaced by reference directions suited to the geometry of the image. For example, in the case of an image of triangular shape, the base and height of the triangle will be used as reference.
The dimension d of the images may be higher than two. For example, the image 2 may be a hypercube of dimension d.
The image 2 may be other than a mammogram or a photograph of a sheet of paper. For example, it may be a photograph of bone tissue. The anisotropy of the texture of the image then provides information on the presence of bone pathologies, such as osteoporosis. Other broader fields of application may be envisioned, such as other types of biological tissues, aerial or satellite images, geological images, or photographs of materials. Generally, the method is applicable to any type of irregular and textured image, such as an image obtained from any image-capturing electronic apparatus.
Variants of the Method for Estimating the Terms β_{j }and H:
Other modifications T_{j,k} may be used. For example, when the dimension d=3, the modifications T_{j,k} cause a rotation, identified by the index j, about a given axis of rotation and a scaling, identified by the index k, in a given direction of the image. For example, the following modifications may be used:

 The above modifications T_{j,k }cause a rotation about an axis and a scaling in a direction that is not parallel to this axis.
The values of the angle α_{j }may be different. Preferably, values of the angle α_{j }that do not require interpolations are chosen. However, it is also possible to choose values of the angle α_{j }such that it is necessary to interpolate the pixels of the converted image to find the values associated with each position p comprised in the set E.
As a variant, the rotation and the scaling are not applied at the same time.
Other filters v may be used to calculate the Kincrements. For example, to calculate the Kincrements it is also possible to use any filter the convolution kernel of which is defined as being equal to the convolution:
 of any convolution kernel v1, and
 of a convolution kernel v2 equal to the kernel v described above.
In the particular case where the kernel v1 is an identity matrix, the filter v described above is obtained. In contrast, choosing a kernel v1 different from the identity matrix makes it possible to construct many different filters from the filters v described above, all suitable for calculating the Kincrements.
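The composition of kernels can be illustrated in one dimension with NumPy's `convolve`: a Dirac kernel [1] plays the role of the identity, so the composed filter reduces to v2 itself, while a non-trivial v1 yields a genuinely different filter. The kernels below are illustrative, not those defined by relationship (1):

```python
import numpy as np

# A new filter kernel defined as the convolution of two kernels v1 and v2.
v2 = np.array([1.0, -2.0, 1.0])     # e.g. a second-order increment kernel
identity = np.array([1.0])          # Dirac kernel: the identity of convolution
composed_id = np.convolve(identity, v2)   # identical to v2

# A non-trivial v1 produces a different, longer kernel.
v1 = np.array([1.0, -1.0])
composed = np.convolve(v1, v2)      # coefficients of (1 - x)(1 - 2x + x^2)
```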
The filtering may be implemented differently in step 22. In particular, the conversion and the filtering are not necessarily applied simultaneously, but may be applied in separate steps.
As a variant, all the conversions T_{j,k }are first applied to the image I, then, subsequently, the filters are applied to each of the images T_{j,k}(I).
The value of K may be different. In particular, with the filter v chosen in the example, when the image I has no trend, i.e. when M equals 0, then preferably K=2 or, if d>4, K=1+d/4.
As a variant, a plurality of filters v_{i} are applied to each image T_{j,k}(I) in step 22. The number of different filters v_{i} applied to a given image T_{j,k}(I) is denoted R. In this case, the converted image obtained by applying the filter v_{i} to the image T_{j,k}(I) is denoted I_{i,j,k} and the Kincrement of this image at position m in this image is denoted V_{i,j,k}[m], where “i” is an index that uniquely identifies the applied filter v_{i}. This index “i” is here different from the index “i” used above as a dummy variable, in particular with reference to the partial derivative of the polynomial Q_{v}(z). It is thus possible to calculate a plurality of quadratic variations for each image T_{j,k}(I), one for each filter v_{i} applied to this image T_{j,k}(I). The quadratic variation calculated for the image I_{i,j,k} is denoted W_{i,j,k}.
To this end, step 22 comprises an operation of selecting filters v_{i}, for example from a predefined filter library.
The quadratic variations W_{i,j,k }are then calculated in step 24 using the following relationship:
In step 26, the regression is then carried out in the following way, on account of the n_{i }applied filters: log(W_{i,j,k})=log(μ_{j,k}^{2})*H+β_{i,j}+ε_{i,j,k}, where:
 β_{i,j }is the term β_{j }associated with the filter v_{i};
 ε_{i,j,k }is the error term ε_{j,k }associated with the filter v_{i}.
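The log-log regression above can be sketched on synthetic, noise-free data: quadratic variations are generated to follow the linear model exactly, and the exponent H and the term β are recovered by ordinary least squares (here via `np.polyfit`). The scale factors below are illustrative stand-ins:

```python
import numpy as np

# Synthetic check of the log-log regression: generate values that follow
# log(W_k) = x_k * H + beta exactly, where x_k is the log of the squared
# scale factor, then recover H and beta by ordinary least squares.
H_true, beta_true = 0.7, -1.3
scales = np.array([1.0, 2.0, 3.0, 4.0])     # illustrative scale factors
x = np.log(scales ** 2)                      # regressors: log of squared scales
logW = x * H_true + beta_true                # noise-free model values

H_est, beta_est = np.polyfit(x, logW, 1)     # slope = H, intercept = beta
```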
Thus, a number n_{b }of terms β_{i,j }is obtained, where n_{b}=n_{j}*n_{i}, n_{j }being the number of different rotations applied to the image I. In step 28, the method of
When a plurality of filters v_{i }are used, the number of filters v_{i }applied may vary from one image T_{j,k}(I) to the next, provided however that to one filter i there correspond at least two rotations j and, for each of these rotations j, at least two scalings k.
Variants of the Estimation of τ*:
The penalty used in the criterion C_{A} may be different. Provided that the penalty is a differentiable function, then it is possible to determine a linear relationship, such as relationship (3), that directly expresses the estimation of the coefficients τ_{m} as a function of the terms β_{j}. In particular, it is possible to find such a linear relationship whatever the filter v and the basis of π-periodic functions f_{m}(θ) used. The filter v may therefore be different from that defined by relationship (1) and the basis used may also be different from the Fourier basis. When the filter v is different from that defined by relationship (1) or when the basis is different from the Fourier basis, the linear relationship is different from that defined by relationship (3).
The penalty used in the criterion C_{A} may also be a nondifferentiable function. In this case, it may be difficult or even impossible to establish a linear relationship between the estimate of the coefficients τ_{m} and the terms β_{j}. For example, the penalty may use an L1 norm of the function τ(θ), which is nondifferentiable. In this case, other methods may be used to approximate the coefficients of the function τ(θ) that minimize this penalized criterion. For example, the coefficients τ_{0}, τ_{1,m}, τ_{2,m} that minimize the criterion C_{A} are estimated by executing a known minimization algorithm such as the iterative shrinkage-thresholding algorithm (ISTA) or the fast iterative shrinkage-thresholding algorithm (FISTA).
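A minimal ISTA sketch for such an L1-penalized variant, minimizing ∥Lτ − β∥² + λ∥τ∥₁ by alternating a gradient step on the smooth term with soft thresholding; the matrix and vector below are random stand-ins, not the L and β of the method:

```python
import numpy as np

def ista(L, beta, lam, n_iter=500):
    """Minimize ||L @ tau - beta||^2 + lam * ||tau||_1 by ISTA.

    Each iteration takes a gradient step on the smooth quadratic
    term, then applies the soft-thresholding operator, which is the
    proximal operator of the L1 penalty.
    """
    step = 1.0 / (2 * np.linalg.norm(L, 2) ** 2)   # 1 / Lipschitz constant
    tau = np.zeros(L.shape[1])
    for _ in range(n_iter):
        grad = 2 * L.T @ (L @ tau - beta)          # gradient of the smooth term
        z = tau - step * grad
        tau = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return tau

# Illustrative stand-in data.
rng = np.random.default_rng(2)
L_mat = rng.standard_normal((12, 5))
beta_vec = rng.standard_normal(12)
tau_hat = ista(L_mat, beta_vec, lam=0.5)
```

With this step size ISTA decreases the penalized objective monotonically, so the returned τ achieves an objective value no worse than the zero vector it starts from.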
The variant described here allows the values of the coefficients τ_{0}, τ_{1,m}, τ_{2,m} that minimize the criterion C_{A} to be estimated without requiring a linear numerical relationship, such as relationship (3), that directly yields an estimate of these coefficients τ_{0}, τ_{1,m}, τ_{2,m} from the values of the terms β_{j}.
The above embodiment was described in the particular case where the function τ(θ) is decomposed in a Fourier basis of π-periodic functions. However, the method described above also works if the function τ(θ) is decomposed in any other basis of π-periodic functions f_{m} in which each function f_{m} is defined in the interval [0; 2π]. In particular, it is not necessary for the basis of the π-periodic functions f_{m} to be an orthogonal basis. Thus, in the basis of the functions f_{m}, the function τ(θ) is defined by the following relationship:
where the τ_{m} are the scalar coefficients of the function τ(θ). For example, as a variant, the functions f_{m} are piecewise-constant π-periodic functions in [0; π]. A piecewise-constant function is a function that takes constant values in a plurality of immediately successive subintervals of [0; π].
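A short sketch of evaluating τ(θ) in such a piecewise-constant π-periodic basis, where f_m is taken to be the indicator of the m-th of M equal subintervals of [0; π); the coefficients below are illustrative:

```python
import numpy as np

def tau_piecewise(theta, coeffs):
    """Evaluate tau(theta) in a piecewise-constant pi-periodic basis.

    f_m is the indicator of the m-th of len(coeffs) equal subintervals
    of [0, pi); pi-periodicity extends the function to [0, 2*pi].
    """
    theta = np.mod(theta, np.pi)        # reduce to [0, pi): pi-periodicity
    m = np.minimum((theta / np.pi * len(coeffs)).astype(int),
                   len(coeffs) - 1)     # subinterval index of each angle
    return np.asarray(coeffs)[m]

coeffs = [0.2, 1.8, 0.2, 1.8]           # illustrative coefficients tau_m
vals = tau_piecewise(np.array([0.1, 1.0, 2.0, 3.0]), coeffs)
```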
The method for minimizing the criterion C or the criterion C_{A }by means of known algorithms for minimizing such a criterion may be implemented whatever the form of the adopted function f_{m}.
As a variant, the number M may be higher than or equal to the number n_{j}.
Variants of the Calculation of the Index A:
As a variant, the index A is calculated only for angles θ equal to the angles α_{j }and not for all the values of θ comprised between 0 and π. In this case, for example, the index A is only dependent on the sum of the following deviations:
The classification may be carried out differently in step 44. For example, the order of classification of the images may be chosen differently.