ANALYSIS APPARATUS, IMAGING SYSTEM, AND STORAGE MEDIUM

Abstract
An analysis apparatus includes an estimator configured to estimate spectral information of an object based on information acquired by a spectroscope, and a processor configured to tag the spectral information of the object with a feature amount of the object. The estimator performs different estimation processing for each of a plurality of spectroscopes, and the processor performs regression processing common to the plurality of spectroscopes.
16 Claims
 1. An analysis apparatus comprising:
an estimator configured to estimate spectral information of an object based on information acquired by a spectroscope; and a processor configured to tag the spectral information of the object with a feature amount of the object, wherein the estimator performs different estimation processing for each of a plurality of spectroscopes, and wherein the processor performs regression processing common to the plurality of spectroscopes. (Claims 2 to 15 are dependent claims.)
16. A non-transitory computer-readable storage medium storing a program that causes a computer to execute a method that includes the steps of:
estimating spectral information of an object based on information acquired by a spectroscope; and tagging the spectral information of the object with a feature amount of the object, wherein the estimating step performs different estimation processing for each of a plurality of spectroscopes, and wherein the tagging step performs regression processing common to the plurality of spectroscopes.
Specification
The present invention relates to an analysis apparatus that analyzes a feature amount of an object using spectral (or spectroscopic) information.
One conventionally known method analyzes a feature amount of an object based on spectral information acquired by a spectroscope such as a hyperspectral camera or a multispectral camera. Japanese Patent Application Laid-Open No. 2000-214091 discloses a method of analyzing a feature amount of an object based on spectral information by tagging the spectral information with the feature amount of the object.
The relationship between the spectral information and the feature amount of the object differs according to the type of the spectroscope used to acquire the spectral information. It is thus necessary to construct, for each spectroscope, an analysis apparatus that analyzes the feature amount of the object. As a result, because constructing the analysis apparatus is arduous, the feature amount of the object cannot be efficiently analyzed.
The present invention provides an analysis apparatus, an imaging system, and a storage medium, each of which can easily and efficiently analyze an object.
An analysis apparatus according to one aspect of the present invention includes an estimator configured to estimate spectral information of an object based on information acquired by a spectroscope, and a processor configured to tag the spectral information of the object with a feature amount of the object. The estimator performs different estimation processing for each of a plurality of spectroscopes, and the processor performs regression processing common to the plurality of spectroscopes. An imaging system according to another aspect of the present invention includes the above analysis apparatus and the spectroscope.
A non-transitory computer-readable storage medium according to another aspect of the present invention stores a program that causes a computer to execute a method that includes the steps of estimating spectral information of an object based on information acquired by a spectroscope, and tagging the spectral information of the object with a feature amount of the object. The estimating step performs different estimation processing for each of a plurality of spectroscopes, and the tagging step performs regression processing common to the plurality of spectroscopes.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Referring now to the accompanying drawings, a detailed description will be given of embodiments according to the present invention.
Referring now to
In
Reference numerals 110 and 120 denote cameras (imaging apparatuses) as spectroscopes configured to acquire spectral information. The cameras 110 and 120 are different types of cameras. In this embodiment, the camera 110 divides the wavelength into three, and the camera 120 divides the wavelength into five. Reference numeral 111 denotes spectral information acquired by capturing the object 100 using the camera 110. Reference numeral 121 denotes spectral information acquired by capturing the object 100 using the camera 120. In each of the spectral information 111 and 121, the abscissa axis represents wavelength information and the ordinate axis represents an intensity.
The spectrum 101 thus includes information that is fine in the wavelength direction so that a curve can be displayed. On the other hand, the spectral information acquired by the cameras 110 and 120 is spectral information rounded to three or five points due to the spectral performances of the cameras 110 and 120. The spectral information 111 and 121 are different from each other in sensitivity to each wavelength as well as the number of wavelengths. Thus, for example, the same “red (R)” information also has different values in each of the cameras 110 and 120.
Reference numerals 112 and 122 denote material amounts, such as a chlorophyll a amount, obtained through the material analysis of the spectral information 111 and 121. Since the same object 100 is captured, the material amounts 112 and 122 should be the same. In calculating the material amounts 112 and 122 using the different spectral information 111 and 121, the prior art needs to develop an identifier suitable for each of the spectral information 111 and 121, and cannot equally treat the spectral information 111 and 121.
On the other hand, this embodiment uses the material analysis technology 102 for identifying a material based on the spectral information 111 and 121 and can use the same analysis technology and analysis data regardless of the spectral characteristic of the cameras 110 and 120. Thereby, this embodiment can complement the data amount of the material analysis technology 102 among the different cameras 110 and 120 and compare the material amounts acquired by the different cameras 110 and 120 with each other.
As illustrated in
Referring now to
In
The spectrum 101 of the object 100 is rounded to information such as the spectral information 211 and 321 when it is multiplied by the spectral characteristics 201 and 301 of the cameras 110 and 120. The spectral information 211 and 321 may be calculated based on transmittances of the lenses of the cameras 110 and 120, marginal light losses, and the like. Since the acquired spectral information 211 and 321 is calculated based on the different spectral characteristics 201 and 301, it is difficult to compare them as they are. Accordingly, this embodiment converts the spectral information 211 and 321 into the estimated spectra 212 and 322. Thereby, the spectral information of the object 100 can be compared irrespective of the spectral characteristics 201 and 301.
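The rounding of a fine spectrum into a few channel values described above can be sketched as follows; the spectrum, the Gaussian channel responses, and all numerical values are hypothetical illustrations, not data from this disclosure.

```python
import numpy as np

# Hypothetical fine spectrum of the object (a stand-in for spectrum 101).
wavelengths = np.arange(400, 701, 10)                # nm, 31 points
spectrum = np.linspace(0.2, 0.6, wavelengths.size)

# One Gaussian response per channel (e.g. B, G, R), a stand-in for the
# spectral characteristics 201 and 301 of the cameras.
responses = np.stack([np.exp(-((wavelengths - c) / 40.0) ** 2)
                      for c in (450, 550, 650)])

# Each channel value is the spectrum weighted by the channel response, so
# the 31-point spectrum collapses to 3 points of spectral information.
channel_values = responses @ spectrum
```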
Any method may be adopted as long as the estimated spectra 212 and 322 can be obtained, but this embodiment describes a method using the characteristics of the multivariate analysis and the regression analysis. However, the present invention is not limited to this method, and an estimated spectrum may be obtained using another method.
Referring now to
The multivariate analysis is a technology of analyzing data 401 containing a plurality of variables. In general, many variables complicate the calculation, and thus the number of variables should be as small as possible while the characteristic of the data 401 is maintained. Accordingly, the data 401 containing a plurality of variables is compressed. Reference numeral 402 denotes a compression technology for converting the multivariable data 401 into three-variable compressed data. The technology available for the compression of the data 401 includes the principal component analysis (PCA), the independent component analysis (ICA), sparse coding, and the like. These methods can compress the data 401 losslessly.
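The compression technology 402 can be sketched with the principal component analysis via a singular value decomposition; the random training spectra and their dimensions below are hypothetical stand-ins for the data 401.

```python
import numpy as np

# Hypothetical training data: n samples of m-point spectra (data 401).
rng = np.random.default_rng(0)
n, m = 50, 71
spectra = rng.random((n, m))

# Mean-center each wavelength variable (average value set to 0).
p_avg = spectra.mean(axis=0)
centered = spectra - p_avg

# PCA via SVD: keep the three leading basis vectors p1, p2, p3.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
P = Vt[:3]                         # (3, m) basis vectors
scores = centered @ P.T            # (n, 3) scores t: the compressed data

# Reconstruction from only three variables per sample.
reconstructed = scores @ P + p_avg
```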
Reference numeral 403 denotes a regression technology (regression analysis). The regression technology 403 tags the data 401, which has been differently compressed for each sample number n by the compression technology 402, with the material amount as teacher information (training dataset). When it is assumed that the original data 401 is the spectrum 101 of the object 100 and has m pieces of wavelength information, f_{n}(λ) corresponding to the data 401 is expressed by the following expression (1).
f_{n}(λ)=[a_{1n}λ_{1}, a_{2n}λ_{2}, a_{3n}λ_{3}, . . . , a_{mn}λ_{m}] (1)
In the expression (1), n is the sample number, and the coefficient a_{mn} of each wavelength λ_{m} changes for a different sample number. In the compression technology 402, as expressed by the following expressions (2) and (3), the feature amount expressed by the expression (1) is converted into a form that can be explained by a three-dimensional variable.
f_{n}(λ)=t_{1n}p_{1}+t_{2n}p_{2}+t_{3n}p_{3}+α_{n} (2)
p_{1}=[a_{11}λ_{1}, a_{21}λ_{2}, a_{31}λ_{3}, . . . , a_{m1}λ_{m}] (3)
In the expression (2), t is a score of each sample number n. p is a basis vector extracted from a large amount of data represented by the expression (1), and the score t indicates a ratio of the basis. Thus, the score t is calculated for each of the basis vectors p_{1}, p_{2}, and p_{3}. In the expression (2), α_{n} is a residual that cannot be explained with the basis vectors p. Substituting the expression (3) into the expression (2) shows that the compressed form follows the same expression as the expression (1). The above principal component analysis (PCA), independent component analysis (ICA), sparse coding, or the like can be used to derive the expression (3).
The regression technology 403 can use a variety of regression methods, but the most common linear regression will be exemplified. Where it is assumed that Chl_{n }is a material amount, such as a chlorophyll amount, to be identified with the sample number n, a relational expression expressed by a linear sum can be shown in the following expression (4).
Chl_{n}=k_{1}t_{1n}+k_{2}t_{2n}+k_{3}t_{3n} (4)
The expression (4) can calculate how the material amount Chl_{n} as the training dataset is tagged with each score t in the n samples.
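The regression of the expression (4) can be sketched as a least-squares fit of the material amount to the scores t; the scores and the coefficients k below are hypothetical stand-ins.

```python
import numpy as np

# Hypothetical scores t (n = 40 samples, 3 components) and the coefficients
# k1, k2, k3 that the regression should recover.
rng = np.random.default_rng(1)
T = rng.random((40, 3))
k_true = np.array([2.0, -0.5, 1.2])
chl = T @ k_true                        # material amounts (training dataset)

# Least-squares fit of expression (4): Chl_n = k1*t1n + k2*t2n + k3*t3n.
k, *_ = np.linalg.lstsq(T, chl, rcond=None)
```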
This embodiment uses a spectrum estimating technology utilizing the characteristics of the multivariate analysis and regression analysis. The spectra 101 of n objects 100 are prepared. The spectra 101 are compressed by the compression technology 402 and each of them is converted into a form that can be described by a small number of variables, such as three variables.
Each spectral information 211 and 321 is calculated where it is assumed that the spectra 101 are captured with the spectral characteristics 201 and 301 of the cameras 110 and 120. The spectral information 211 and 321 is substituted into the expression (4) in place of the material amount (material information) Chl_{n}, and tagged with the previously compressed information (data) of the spectrum 101. For example, the camera 110 can acquire information of the three components B, G, and R. The following expression (5) is calculated with the score t of the spectrum 101, and the value of each coefficient k is obtained.
B_{n}=k_{1b}t_{1n}+k_{2b}t_{2n}+k_{3b}t_{3n}
G_{n}=k_{1g}t_{1n}+k_{2g}t_{2n}+k_{3g}t_{3n}
R_{n}=k_{1r}t_{1n}+k_{2r}t_{2n}+k_{3r}t_{3n} (5)
The value of each coefficient k is previously obtained, and the spectral information 211 acquired by the camera 110 is substituted into the left side. As a result, three simultaneous equations can be made for the three unknown scores t, and the scores t can be calculated. Since the camera 120 has five pieces of spectral information, five equations of the form of the expression (5) can be created, and the three unknown scores t can likewise be calculated. Once the scores t are calculated, the estimated spectra 212 and 322 can be obtained by the invertible transformation, which is one characteristic of the compression technology 402.
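The solution of the simultaneous equations of the expression (5) for the unknown scores t can be sketched as follows; the coefficient matrix and the score values are hypothetical, and a five-channel camera is handled by least squares.

```python
import numpy as np

# Hypothetical coefficient matrix K: one row of (k1, k2, k3) per channel.
K = np.array([[0.80, 0.10, 0.05],    # B channel
              [0.30, 0.70, 0.10],    # G channel
              [0.10, 0.20, 0.90]])   # R channel
t_true = np.array([0.311, -0.089, -0.007])
c = K @ t_true                       # channel values the camera would record

# Three channels give three simultaneous equations for three unknown scores.
t = np.linalg.solve(K, c)

# A five-channel camera gives five equations for the same three unknowns;
# least squares yields the consistent solution.
K5 = np.vstack([K, [[0.50, 0.50, 0.20], [0.20, 0.40, 0.60]]])
t5, *_ = np.linalg.lstsq(K5, K5 @ t_true, rcond=None)
```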
The thus obtained estimated spectra 212 and 322 have the same information shape regardless of the spectral characteristics 201 and 301 of the cameras 110 and 120. Accordingly, once the desired material analysis technology can be formed based on the information of the estimated spectra 212 and 322, a common material analysis technology can be applied to the cameras 110 and 120. One major characteristic is that the imaging apparatus sensitive only to the visible region such as the cameras 110 and 120 can restore data in the wavelength range of the spectrum (data 401) previously used for the learning. Hence, the cameras 110 and 120 can estimate the spectrum in the nearinfrared region (700 to 1100 nm) as in the estimated spectra 212 and 322.
In general, the material analysis needs to prepare a large amount of data tagged with training dataset as described above. Conventionally, when different information is acquired by a camera or a spectroscope, it is necessary to collect training dataset individually suitable for each spectral information 211 and 321.
On the other hand, the material analysis technology utilizing the estimated spectrum according to this embodiment can analyze a material using the same training dataset regardless of the spectral characteristic of the spectroscope or the camera. Hence, the training dataset available to the material analysis can be efficiently increased without depending on the spectroscope. A new spectroscope can also be compared, through the spectral characteristics 201 and 301, with the information previously acquired by different spectroscopes, and can analyze the material, even though this information is not derived from the same hardware.
A variety of methods are applicable so as to identify the material using the estimated spectra 212 and 322. For example, as described above, the multivariate analysis technology, machine learning, deep learning, and the like are applicable. Characteristically, these methods maintain the essential process of finally tagging the features of the estimated spectra 212 and 322 with the training dataset. On the other hand, the score t obtained in the course of the spectral estimation has the same meaning as the coefficient obtained by compressing the estimated spectra 212 and 322, and therefore represents the feature amount itself. This embodiment may thus perform the material regression at the stage of obtaining the score t, even if the estimated spectra 212 and 322 are not obtained. This embodiment refers to a coefficient representative of the wavelength characteristic of each of the estimated spectra 212 and 322, such as the score t, as a spectral coefficient.
This embodiment obtains the estimated spectra 212 and 322 based on the previously prepared spectrum 101 of the object 100, and cannot derive the correct spectrum for a different object. However, the gist of this embodiment is not the estimation of the correct spectrum 101, but the evaluation of the different spectral information 211 and 321 in the same manner with the same dimension. Hence, as long as the conversion is compatible with the comparison, it is unnecessary to use the spectrum 101 of the correct object 100. For example, in comparing the spectral information of rice leaves, there is no practical problem in using the spectrum 101 of a camellia leaf.
In the calculation, this embodiment uniformly applies the same coefficient k to the spectral information 211 and 321 acquired by the cameras 110 and 120. On the other hand, the objects captured by the cameras 110 and 120 are not always the same object 100. It may thus be determined whether or not the spectral information 211 and 321 corresponds to the predetermined object 100 before the spectrum is estimated. In order to recognize the object, a general image recognition technology or the like can be used. This embodiment also enables a recognition using the score t obtained by the expression (5) as it is.
Numerical examples will be shown below, which relate to the spectral characteristic (spectrum 101) at the wavelengths of 400 nm to 1100 nm, the derived basis vectors p_{1}, p_{2}, and p_{3}, an average value p_{avg} for each wavelength, and a coefficient a. The leaf data of the spectrum 101 is used as the training dataset. The actual analysis needs to use a large amount of training dataset, but the spectrum 101 is used for the analysis as one of the large number of data. In applying the compression technology 402, an averaging process may be performed so that the average value of each variable of the data 401 is set to 0. The compression technology 402 obtains the feature amount of the data and the axial direction having a large variation amount on the basis of the 0 point in calculating the basis vector p. Unless the average value is set to 0, the scattering of the averages directly affects the derivation of the feature values, and thus the averaging process may be used.
As will also be described in each of the following examples, the average value of the material amount may be set to 0 in the regression technology 403. By averaging the material amount, its fluctuation is also represented by positive and negative values. This is convenient because the feature amount derived by the compression technology 402 is also obtained with positive and negative values with respect to 0. The average value used for this averaging can be processed as a constant term in this embodiment. Let the average value be a constant term and the compressed representation value be a variable term. Then, the influence of the variable term is large in identifying the feature, and the influence of the constant term is large in representing the absolute value of the material amount. Accordingly, this embodiment may provide estimation processing based on the remaining variable term (the variable term obtained by excluding the constant term), obtained by subtracting (excluding) a constant term exceeding 30% of the absolute value (which differs depending on the wavelength) from the spectral information acquired by the spectroscope. As described above, the constant term can use the average value or the like, and the variable term can be efficiently extracted by the difference processing. The absolute value of the variable term may be smaller than the absolute value of the constant term.
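The separation into a constant term and a variable term can be sketched as a simple difference processing; all values below are hypothetical illustrations.

```python
import numpy as np

# Hypothetical spectral information and a constant term (e.g. the average
# value for each wavelength).
signal = np.array([0.52, 0.48, 0.61, 0.70])     # spectral information
constant = np.array([0.50, 0.45, 0.55, 0.65])   # constant term

# The variable term is extracted by difference processing; in this sketch
# its absolute value is smaller than that of the constant term at every
# wavelength, as the embodiment notes.
variable = signal - constant
assert np.all(np.abs(variable) < np.abs(constant))
```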
In this embodiment, the estimator 11 may perform different estimation processing for each of the plurality of spectroscopes, and the processor 12 performs regression processing common to the plurality of spectroscopes. The spectral information of the object may be a spectrum of the object. The spectral information of the object may be a spectral coefficient. The spectral coefficient may be a coefficient, such as a score t, representing the wavelength characteristic of the reflection light or the transmission light of the object (the wavelength characteristics of the estimated spectra 212 and 322). The estimator 11 may estimate the spectral coefficient of the object through four arithmetic operations to the spectral information acquired by the spectroscopes. The expression for estimating the spectral coefficient of the object may be tagged by the linear regression, such as the expression (4).
The estimator 11 may estimate the spectral coefficient of the object based on the spectral information acquired by the spectroscope, and estimate the spectrum of the object based on the spectral coefficient of the object through different processing according to the wavelength of the reflection light or the transmission light of the object. The feature amount of the object may be property information of the object (material amount or information indicating a component or state of the object). The processor 12 may determine whether or not the object is a predetermined object based on the spectral information of the object estimated by the estimator 11 (recognition technology). Each example will be described in detail below.
Referring now to
As illustrated in the spectral characteristic 601, the spectroscope according to this example has four pieces of information: blue (B) shown by a dotted line, green (G) shown by a solid line, red (R) shown by a broken line, and infrared (IR) shown by an alternate long and short dash line. In other words, the spectroscope according to this example acquires the spectrum 101 as four points of information, as in the spectral information 611.
Table 1 shows the coefficients k (k_{1}, k_{2}, k_{3}) in the expression (5) relating to B, G, R, and IR in this example. Table 2 shows the average values C_{avg} of each of B, G, R, and IR. Using the coefficient k and the average value C_{avg} provides the matrix and simultaneous equations represented by the following expression (6).
The first term on the left side in the expression (6) is the spectral information 611 acquired by the spectroscope according to this example, the second term is shown in Table 2, and the coefficients on the right side are shown in Table 1. The spectral information 611 corresponding to the spectrum 101 enables the numerical values of t_{1}=0.311, t_{2}=−0.089, and t_{3}=−0.007 to be derived from the expression (6). The spectrum f(λ) is obtained with these values and the following expression (7).
f(λ)=t_{1}·p_{1}+t_{2}·p_{2}+t_{3}·p_{3}+p_{avg} (7)
Each basis vector p and the average value p_{avg} utilize the values provided in the following numerical examples. These calculations provide the estimated spectrum 612. The estimated spectrum 612 and the spectrum 101 have a very high correlation, and a correlation coefficient of 0.99 is obtained. Comparing the calculated absolute values of the variable term t_{1}·p_{1}+t_{2}·p_{2}+t_{3}·p_{3} for the estimated spectrum 612 with the constant term of the numerical example confirms that the absolute value of the constant term is larger in all wavelength regions. It can thus be understood that the constant term is generally dominant and that only the variable term maintains the estimation accuracy.
One material analysis technology for the estimated spectrum 612 derives a value of the NDVI (normalized difference vegetation index) using a simple expression. The NDVI is generally used as an index of the leaf vegetation and growth degree, and is known to be obtained by the following expression (8).
NDVI=(IR−R)/(IR+R) (8)
In the expression (8), IR is a near-infrared reflectance, and R is a reflectance at a red wavelength. For a leaf, red is the wavelength used for the photosynthesis, and IR is a reflective area showing the growth degree depending on the leaf structure. When R is set to a wavelength of 650 nm and IR to a wavelength of 850 nm from the estimated spectrum, the object 100 has an NDVI of 0.64.
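The NDVI of the expression (8) can be sketched as follows; the reflectance values are hypothetical and chosen only so that the result is close to this example's 0.64.

```python
# Sketch of expression (8): NDVI = (IR - R) / (IR + R).
def ndvi(ir: float, r: float) -> float:
    """Normalized difference vegetation index from IR and R reflectances."""
    return (ir - r) / (ir + r)

# Hypothetical reflectances at IR = 850 nm and R = 650 nm.
value = ndvi(0.45, 0.10)   # ≈ 0.64, close to this example's result
```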
Referring now to
As illustrated in the spectral characteristic 701, the spectroscope according to this example can acquire three pieces of information: green (G) shown by a solid line, red (R) shown by a broken line, and infrared (IR) shown by an alternate long and short dash line. In other words, the spectroscope according to this example acquires the spectrum 101 as three points of information, as in the spectral information 711.
Table 3 shows the coefficients k(k_{1}, k_{2}, k_{3}) in the expression (5) relating to G, R, and IR in this example. Table 4 shows average values C_{avg }of each of G, R, and IR. The analysis pursuant to the expression (6) provides numerical values of t_{1}=0.305, t_{2}=−0.100, and t_{3}=−0.008. The calculation using these values and the expression (7) provides the estimated spectrum 712. The estimated spectrum 712 and the spectrum 101 have a very high correlation, and a correlation coefficient of 0.99 is obtained.
This example once converts the spectral information 711 into the scores t_{1}, t_{2}, and t_{3}, which have low direct correlations with the spectral characteristic 701. This corresponds to outputting the same scores t_{1}, t_{2}, and t_{3} for the spectral information of the same object 100, regardless of the shape or sensitivity of the spectral characteristic 701. In the examples 1 and 2, the spectrum 101 of the same object 100 is acquired, and the spectral characteristics 601 and 701 show shapes different from each other. Nevertheless, when the calculated scores t_{1}, t_{2}, and t_{3} are compared, it is understood that the values are very close between the examples 1 and 2.
This example provides the analysis such that the absolute values of the vectors p_{1}, p_{2}, and p_{3} are 1. This is because expressing them as unit vectors is more convenient for explaining the orthogonality of these vectors p_{1}, p_{2}, and p_{3}, and the like. When it is assumed that the vectors p_{1}, p_{2}, and p_{3} are unit vectors and the scores t_{1}, t_{2}, and t_{3} are arranged in descending order of the compression contribution ratio, the score t_{1} is likely to have the maximum absolute value. On the other hand, the scores t_{2} and t_{3} have low contribution ratios and serve as a fine adjustment.
In this example, the error (permissible error or tolerance) of the absolute value of the score t_{1 }having the highest contribution ratio may be 30% or less regardless of the shape of the spectral characteristic 701 or the spectroscope. In other words, when the score t_{1 }is 0.311, absolute values of ±30% are taken as permissible values, which are values from 0.218 to 0.404. Comparing the examples 1 and 2 with each other, the score t_{1 }having the maximum value falls within a predetermined range.
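The ±30% permissible range on the score t_{1} can be sketched as a simple check; the reference value 0.311 is taken from the example 1, and the range 0.218 to 0.404 matches the text.

```python
# Sketch of the permissible (tolerance) check on the score t1.
def within_tolerance(t1: float, reference: float = 0.311,
                     tol: float = 0.30) -> bool:
    """True if |t1| falls within the ±30% range around |reference|."""
    return abs(reference) * (1 - tol) <= abs(t1) <= abs(reference) * (1 + tol)

# 0.311 (example 1) and 0.305 (example 2) both fall within 0.218 to 0.404.
```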
It is effective for this example to increase the efficiency of the compression technology 402 in order to reduce the error to 30% or less and thus to mitigate the influence of the spectral characteristic 701. A devised compression technology 402 can adjust the contribution ratios of the scores t_{1}, t_{2}, and t_{3}; the devising means will be described in detail in the following example 3. Since the scores t_{2} and t_{3} with low contribution ratios are adjustment terms for obtaining the estimated spectrum 712, their values may differ greatly depending on the spectroscope. In this example, the object 100 has an NDVI of 0.64, and a result very close to that in the example 1 can be obtained.
The comparative example calculates the NDVI using the information on the spectroscopes illustrated in the examples 1 and 2. In general, the NDVI obtained by the expression (8) sets the infrared information as IR and the red information as R regardless of the shape of the spectral characteristics. With the IR and R information of the spectral information 611 and 711, the NDVI of 0.37 for the spectroscope according to the example 1 is significantly different from the NDVI of −0.40 for the example 2. This difference largely depends on the shapes and sizes of the spectral characteristics 601 and 701, and is caused particularly by the low sensitivity to the IR in the example 2.
The conventional method is significantly influenced by the spectral characteristics 601 and 701 of the spectroscopes, and the results of the material analyses are not the same even for the same object 100. It was thus necessary to separately construct the analysis technology for each of the spectroscopes according to the examples 1 and 2. On the other hand, the examples 1 and 2 can obtain substantially the same value using the common analysis technology.
Referring now to
Table 5 shows the coefficients k(k_{1}, k_{2}, k_{3}) of the expression (5) relating to B, G, and R in this example. Table 6 shows average values C_{avg }of each of B, G and R. The analysis according to the expression (6) provides numerical values of t_{1}=0.252, t_{2}=0.138, and t_{3}=−0.003. The calculation using these values and the expression (7) provides the estimated spectrum 212. The correlation between the estimated spectrum 212 and the spectrum 101 is very high, and a correlation coefficient of 0.99 is obtained. In this example, the object 100 has NDVI of 0.67, which is very close to that of the example 1.
In this example, the values somewhat shift from those in the examples 1 and 2. Referring now to
As can be understood from
Accordingly, this example performs the analysis in order of the spectral information 211, the spectral coefficient t, and the estimated spectrum 212. The conversion from the spectral coefficient t to the estimated spectrum 212 provides the estimated spectrum 212 by “different coefficient processing or a conversion into different information according to each wavelength.” For example, assume that the wavelength of 400 to 700 nm is used for the reflectance and the value of the wavelength of 700 to 1100 nm is used for the absorbance (1/log R).
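The wavelength-dependent conversion above can be sketched as follows; reading the example's absorbance "(1/log R)" as log10(1/R) is an assumption of this sketch, not a statement of the disclosed method.

```python
import math

# Sketch of the assumed piecewise conversion: keep the reflectance for
# 400-700 nm, and use an absorbance-style value for 700-1100 nm.
def convert(wavelength_nm: float, r: float) -> float:
    if wavelength_nm <= 700:
        return r                     # reflectance region
    return math.log10(1.0 / r)       # absorbance-style region (assumption)
```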
This example uses the principal component analysis for the compression technology 402. With data using a simple reflectance, the compression contribution ratio of the score t_{1} (how much of the original data could be compressed) is 0.67, but it can be greatly improved to 0.93. In addition to this conversion, a method of multiplying each variable by a different coefficient is also effective. For example, the score t_{1} gains a compression contribution ratio of 0.96 by multiplying the original data 401 by the coefficient "a" illustrated in the numerical example and then performing the compression technology 402. Since the coefficient "a" increases the compression contribution ratio, it can be obtained based on the method of the compression technology.
This example has described in detail the method for eliminating the influence of α in the expression (2), but the present invention is not limited to this method. For example, a method may be used which prepares the original data 401 from which α has been removed, adds the influence of α to the regression expression (4), or constructs the regression expression (4) so that the influence of α is ignorable. These methods express the original data 401 with a small number of variables by the compression technology 402 and express the influence of the spectroscope (imaging apparatus) through the regression technology 403. In particular, the regression expression (4) is suitable for expressing the influence of the spectroscope (imaging apparatus). Thus, in order to improve the contribution ratio and the accuracy, the regression is not limited to the linear sum expression.
Referring now to
Table 7 shows the coefficients k (k_{1}, k_{2}, k_{3}) of the expression (5) relating to B, G, LG, O, and R in this example. Table 8 shows the average values C_{avg} of each of B, G, LG, O, and R. The analysis according to the expression (6) provides numerical values of t_{1}=0.250, t_{2}=0.146, and t_{3}=−0.002. The calculation using these values and the expression (7) can provide the estimated spectrum 322. The correlation between the estimated spectrum 322 and the spectrum 101 is very high, and a correlation coefficient of 0.99 is obtained. In this example, the object 100 has an NDVI of 0.67, which is very close to that of the example 1.
This example has a slight shift in the values, as in the example 3. Referring now to
Similar to the example 3, this example makes a conversion to improve the compression contribution ratio.
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
The analysis apparatus according to each example can analyze the feature amount of the object using the same material recognizer and material analyzer, regardless of the spectroscope that acquires the spectral information. Hence, each example can provide an analysis apparatus, an imaging apparatus, and a program, each of which can easily and efficiently analyze an object.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
For example, a description was made mainly on the spectral information for the original data 401, but statistically the same type of data is similarly applicable. For example, even when information on different spaces is connected, numerical information such as position information, temperature information, and humidity information can be similarly used.
This application claims the benefit of Japanese Patent Application No. 2018-82119, filed on Apr. 23, 2018, which is hereby incorporated by reference herein in its entirety.