Autofocusing method

0 Associated Cases
0 Associated Defendants
0 Accused Products
14 Forward Citations
0 Petitions
1 Assignment
First Claim
1. A method of carrying out autofocusing by displacing a lens along the direction of an optical axis of said lens in accordance with a correlation between outputs from a pair of solid-state image sensors, said method comprising the steps of:
 passing light through said lens and impinging the light passed through said lens on both of said solid-state image sensors;
scanning each of said pair of solid-state image sensors thereby producing respective outputs A(n) and B(n);
converting said outputs A(n) and B(n) into digital data A'(n) and B'(n);
calculating Fourier transforms F_{1} (u), F_{2} (u), G_{1} (u) and G_{2} (u) using the following equations, ##EQU8## where N is the number of photoelectric elements in each of said pair of solid-state image sensors, u=0, 1, 2, . . . , M, and M is an integer which is equal to or smaller than (N/2)-1;
calculating the following values F_{0}, G_{0}, F(u) and G(u) from said F_{1} (u), F_{2} (u), G_{1} (u) and G_{2} (u), ##EQU9## calculating the following value f*g, using said F_{0}, G_{0}, F(u) and G(u), in a timed sequence at a predetermined time period for variable t, ##EQU10## determining an amount of displacement of said lens in accordance with a value of said variable t which gives a maximum value for said f*g, whereby if said amount of displacement thus determined is outside of a predetermined range, said lens is displaced to an in-focus position according to said amount of displacement thus determined.
Abstract
An autofocusing method includes a scanning step for scanning a pair of solid-state image sensors on which light from a subject of interest impinges after passing through a focusing lens, an A/D conversion step, a Fourier transformation step, a convolution operation step, a peak detection step for determining an amount of displacement of said lens from the results of said convolution, and a lens displacement step for displacing said lens according to the displacement amount thus determined. The present autofocusing method is least susceptible to noise and is not adversely affected by differences in contrast of the subject of interest.
17 Citations
Lens barrel  
Patent #
US 7,505,216 B2
Filed 06/14/2007

Current Assignee
Ricoh Company Limited

Sponsoring Entity
Ricoh Company Limited

Lens barrel  
Patent #
US 7,259,923 B2
Filed 08/09/2004

Current Assignee
Ricoh Company Limited

Sponsoring Entity
Ricoh Company Limited

Lens barrel  
Patent #
US 20050068638A1
Filed 08/09/2004

Current Assignee
Ricoh Company Limited

Sponsoring Entity
Ricoh Company Limited

Fast focus assessment system and method for imaging  
Patent #
US 6,753,919 B1
Filed 11/24/1999

Current Assignee
Iridian Technologies Inc.

Sponsoring Entity
Iridian Technologies Inc.

Automatic focusing device employing a frequency domain representation of a digital video signal  
Patent #
US 5,210,564 A
Filed 12/21/1990

Current Assignee
Ricoh Company Limited

Sponsoring Entity
Ricoh Company Limited

Optical filtering device and method for using the same  
Patent #
US 5,128,706 A
Filed 02/25/1991

Current Assignee
Asahi Kogaku Kogyo Kabushiki Kaisha

Sponsoring Entity
Asahi Kogaku Kogyo Kabushiki Kaisha

Optical filtering device and method for using the same  
Patent #
US 4,908,644 A
Filed 03/22/1988

Current Assignee
Asahi Kogaku Kogyo Kabushiki Kaisha

Sponsoring Entity
Asahi Kogaku Kogyo Kabushiki Kaisha

Automatic focussing system  
Patent #
US 4,814,889 A
Filed 10/07/1987

Current Assignee
General Electric Company

Sponsoring Entity
General Electric Company

Depth-of-focus imaging process method  
Patent #
US 4,661,986 A
Filed 05/18/1984

Current Assignee
RCA Corporation

Sponsoring Entity
RCA Corporation

Three-dimensional range finder  
Patent #
US 4,664,512 A
Filed 12/14/1984

Current Assignee
Naoki Shimizu

Sponsoring Entity
Naoki Shimizu

Range finding method and apparatus  
Patent #
US 4,695,156 A
Filed 07/03/1986

Current Assignee
Westinghouse Electric Company LLC

Sponsoring Entity
Westinghouse Electric Company LLC

Imaging terminal having focus control  
Patent #
US 8,692,927 B2
Filed 01/19/2011

Current Assignee
Hand Held Products Incorporated

Sponsoring Entity
Hand Held Products Incorporated

Autofocusing optical imaging device  
Patent #
US 8,760,563 B2
Filed 10/19/2010

Current Assignee
Hand Held Products Incorporated

Sponsoring Entity
Hand Held Products Incorporated

Autofocusing optical imaging device  
Patent #
US 9,036,054 B2
Filed 06/20/2014

Current Assignee
Hand Held Products Incorporated

Sponsoring Entity
Hand Held Products Incorporated

Method and apparatus for determining focus direction and amount  
Patent #
US 4,333,007 A
Filed 07/10/1980

Current Assignee
Honeywell Incorporated

Sponsoring Entity
Honeywell Incorporated

Focus condition detecting device  
Patent #
US 4,253,752 A
Filed 09/26/1979

Current Assignee
Nikon Corporation

Sponsoring Entity
Nikon Corporation

Method and system for detecting sharpness of an object image  
Patent #
US 4,133,606 A
Filed 11/11/1975

Current Assignee
Canon Ayutthaya Limited

Sponsoring Entity
Canon Ayutthaya Limited

5 Claims
 3. The method of claim 2 wherein said value of variable t giving a maximum value for f*g is determined by comparing said L(I) with one another and designated as I(MAX), which is then compared with a pair of predetermined values C_{1} and C_{2}, thereby indicating a too-close condition if I(MAX) is larger than C_{1}, a too-far condition if I(MAX) is smaller than C_{2}, and an in-focus condition if I(MAX) is smaller than C_{1} but larger than C_{2}.
 4. The method of claim 3 wherein the lens is moved only when said I(MAX) is either larger than C_{1} or smaller than C_{2} and said amount of displacement of said lens is determined according to said I(MAX).
 5. The method of claim 4 further comprising the step of displaying a focusing condition determined by said step of comparing I(MAX) with C_{1} and C_{2}.
Specification
1. Field of the Invention
This invention relates to an autofocusing method and, more particularly, to a method to be applied to an autofocus system of a photographic camera.
2. Description of the Prior Art
An autofocusing method is well known in the art in which the light reflected from a subject to be photographed is led through a lens, to be brought into the in-focus position, onto a pair of solid-state image sensors, and the lens is displaced in the direction of the optical axis in accordance with a correlation between the outputs from the solid-state image sensors. A typical example is Honeywell's TLC system.
In this type of prior art autofocusing method, it is assumed that the outputs from the pair of solid-state image sensors have the same functional form, so the method is highly susceptible to noise. Furthermore, if the output characteristics of the photoelectric elements within each solid-state sensor are scattered, the outputs, which should inherently have the same functional form, are distorted by such scatter and differ from one another. It is thus required to use, for each of the paired solid-state image sensors, a sensor having no scatter in the output characteristics of its photoelectric elements. However, since the manufacturing yield of such a solid-state image sensor is low, it is difficult to lower its cost, which pushes up the cost of the autofocusing system.
Moreover, in this type of prior art autofocusing method, the correlation between the outputs from the paired solid-state image sensors is evaluated by the following evaluation function ##EQU1## where A(n) and B(n) are the outputs from the respective solid-state image sensors for n = 1 to N, with N indicating the number of photoelectric elements in each of the solid-state image sensors, and α, β and k are set values. As is clear from the form of the evaluation function, it is affected by the contrast of the subject to be photographed. Thus, if the contrast is low, there arise such disadvantages as deterioration in focusing accuracy and substantial delay in reaching the in-focus condition.
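The exact evaluation function is shown only as an image (##EQU1##) in this text, so as an illustration, here is a hedged sketch of one common measure of this family, a sum of absolute differences between shifted sensor outputs; the function name `evaluate` and the roles of `alpha`, `beta` and `k` are assumptions, and only the contrast dependence the passage criticizes is demonstrated, not the patent's own formula.

```python
# Hedged sketch: a sum-of-absolute-differences evaluation of the kind described.
# alpha, beta and k stand in for the "set values" named in the passage.
def evaluate(A, B, k, alpha=0, beta=0):
    """Sum of absolute differences between shifted sensor outputs."""
    N = len(A)
    total = 0
    for n in range(N):
        ia = n + alpha
        ib = n + beta + k
        if 0 <= ia < N and 0 <= ib < N:
            total += abs(A[ia] - B[ib])
    return total

# The measure sums raw intensity differences, so halving the subject's
# contrast halves the evaluation value as well -- the contrast dependence
# that degrades focusing accuracy for low-contrast subjects.
A = [10, 40, 10, 40, 10, 40]
B = list(A)
full_contrast = evaluate(A, B, k=1)
low_contrast = evaluate([a // 2 for a in A], [b // 2 for b in B], k=1)
```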
It is therefore a primary object of the present invention to obviate the disadvantages of the prior art as described above and to provide an improved autofocusing method.
Another object of the present invention is to provide an improved autofocusing method which is least susceptible to noises.
A further object of the present invention is to provide an improved autofocusing method which is not adversely affected by the contrast of a subject to be focused.
A still further object of the present invention is to provide an autofocus method and system which can be advantageously applied to an autofocus mechanism of a photographic camera.
Other objects, advantages and novel features of the present invention will become apparent from the following detailed description of the invention when considered in conjunction with the accompanying drawings.
FIG. 1 is a block diagram showing the overall structure of an autofocusing system constructed in accordance with one embodiment of the present invention; and
FIG. 2 is a flow chart showing a sequence of steps in the autofocusing method of the present invention.
The autofocusing method of the present invention includes such steps as a scanning step for scanning each of a pair of solid-state image sensors, an analog-to-digital (A/D) conversion step, a Fourier transformation step, a convolution step, a peak detecting step for determining the amount of displacement of a lens from the results of the convolution, and a displacement step for displacing the lens to the in-focus position.
After passing through a lens which is to be brought into the in-focus position, the light impinges on a pair of solid-state image sensors. During the scanning step, each of the paired solid-state image sensors is scanned so that an output is produced in timed sequence in accordance with the intensity distribution of the light. This output is supplied to the A/D conversion step, where the analog output is converted into a digital quantity. The digital quantity thus obtained is subjected to Fourier transformation at the Fourier transformation step, and its results are then subjected to the convolution operation. According to the results obtained from the convolution step, the amount of displacement for the lens to be brought into the in-focus position is determined at the peak detecting step. Finally, at the lens displacing step, the lens is moved to the in-focus position based on the amount of displacement thus determined.
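The sequence of steps just described can be sketched end to end. This is a simplified stand-in (8-bit quantization for the A/D step, a brute-force circular correlation peak for the convolution and peak-detection steps), not the patent's actual units, and all function names are hypothetical.

```python
def ad_convert(samples, levels=256, vmax=1.0):
    """A/D step: quantize analog samples in [0, vmax] to integer levels."""
    return [min(levels - 1, int(s / vmax * (levels - 1))) for s in samples]

def correlation_peak(a, b):
    """Brute-force stand-in for the convolution and peak-detection steps:
    the circular shift t of b that best matches a."""
    N = len(a)
    def score(t):
        return sum(a[n] * b[(n + t) % N] for n in range(N))
    return max(range(N), key=score)

def autofocus_displacement(A, B):
    """Scan outputs A(n), B(n) -> digitize -> best-match shift, a stand-in
    for the quantity from which the lens displacement is determined."""
    return correlation_peak(ad_convert(A), ad_convert(B))

# Two sensor outputs of the same scene, one shifted by three elements:
A = [0.1, 0.2, 0.9, 0.3, 0.1, 0.1, 0.1, 0.2]
B = A[3:] + A[:3]
```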
Now, the fundamental principle of the present invention will be described below.
If two functions p(x) and q(x) are present, one method to evaluate the similarity between the two is to use the following equation.
φ(ξ)=∫{p(x)-q(x-ξ)}.sup.2 dx (1)
In principle, the present invention uses this technique. If the above equation (1) is developed, we have
φ(ξ)=∫{p(x)}.sup.2 dx+∫{q(x-ξ)}.sup.2 dx-2∫p(x)q(x-ξ)dx (2).
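The equivalence of equations (1) and (2) can be checked numerically on discrete samples, with sums over a circular window standing in for the integrals; the circular indexing is an assumption made only to keep the sketch self-contained.

```python
def phi_direct(p, q, xi):
    """phi(xi) = sum over x of {p(x) - q(x - xi)}^2  (equation (1))."""
    N = len(p)
    return sum((p[x] - q[(x - xi) % N]) ** 2 for x in range(N))

def phi_expanded(p, q, xi):
    """The same quantity via the equation-(2) expansion:
    sum p^2 + sum q^2 - 2 * sum p(x)*q(x - xi)."""
    N = len(p)
    pp = sum(v * v for v in p)
    qq = sum(v * v for v in q)  # shift-invariant under circular indexing
    cross = sum(p[x] * q[(x - xi) % N] for x in range(N))
    return pp + qq - 2.0 * cross

p = [1.0, 3.0, 2.0, 5.0]
q = [2.0, 1.0, 3.0, 2.0]
```

As equation (1) suggests, the measure vanishes when the two functions coincide at zero shift.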
This can be further modified in form as
φ(ξ)=[∫{p(x)}.sup.2 dx+∫{q(x-ξ)}.sup.2 dx] ##EQU2##
Now, a consideration will be given to the factor in the second term of the above equation (3), which is ##EQU3## The above equation (4) becomes 0 if ξ=0 and p(x)=q(x). This is the condition in which two of the same function are superposed one on top of the other.
Let p(x) and q(x) be the functions obtained as outputs from the pair of solid-state image sensors. Since the light impinging on each of the pair of solid-state image sensors emanates from the same subject to be focused or photographed and passes through a lens which is to be brought into the in-focus position, p(x) and q(x) should be similar in form even in the presence of a scatter in the output characteristics of the photoelectric elements in each of the sensors. In view of this, the closer the maximum value of the second term in the equation (4), which is ##EQU4## is to unity, the closer the focusing lens should be to the in-focus position; thus, from the value of ξ which gives the maximum value of the equation (5), the amount of displacement of the lens can be determined.
Moreover, as described previously, p(x) and q(x) are inherently similar to each other; thus, if p(x) is a periodic function, then q(x) is also a periodic function having the same period. As a result, the numerator and the denominator in the above equation (5) have a common periodic factor. Thus, even if periodicity is present in p(x) and q(x), the above equation (5) is not adversely affected by that periodicity.
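The behavior claimed for the equation-(5) term can be illustrated with a discrete sketch, again with circular indexing as a simplifying assumption: the value peaks at unity when the shift ξ aligns the two outputs, and scaling both outputs by a common contrast factor leaves it unchanged, since the factor cancels between numerator and denominator.

```python
def normalized_similarity(p, q, xi):
    """The equation-(5) term: 2*sum(p(x)*q(x - xi)) / (sum p^2 + sum q^2),
    with circular indexing as a simplifying assumption."""
    N = len(p)
    cross = sum(p[x] * q[(x - xi) % N] for x in range(N))
    denom = sum(v * v for v in p) + sum(v * v for v in q)
    return 2.0 * cross / denom

p = [0.2, 0.9, 0.4, 0.1, 0.3, 0.2]
q = p[2:] + p[:2]                 # the same pattern shifted by two elements
best = max(range(len(p)), key=lambda xi: normalized_similarity(p, q, xi))
```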
Examination of the form of the numerator in the above equation (5) reveals that the Fourier transform of this numerator is a product of the Fourier transforms of p(x) and q(x), according to the well-known theorem of superposition. Accordingly, in calculating the above equation (5), it is convenient to subject p(x) and q(x) to Fourier transformation first, carry out the calculation in the transform domain, and then apply the inverse Fourier transformation.
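This Fourier-domain shortcut can be verified numerically. One detail worth noting for real-valued sensor outputs: since the numerator of equation (5) is a correlation rather than a plain convolution, its discrete Fourier transform equals the transform of p times the complex conjugate of the transform of q. A minimal pure-Python check, with circular indexing assumed:

```python
import cmath

def dft(x):
    """Discrete Fourier transform (direct O(N^2) form; fine for a sketch)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * u * n / N) for n in range(N))
            for u in range(N)]

def circular_correlation(p, q):
    """c(xi) = sum over x of p(x) * q(x - xi), indices wrapped mod N."""
    N = len(p)
    return [sum(p[x] * q[(x - xi) % N] for x in range(N)) for xi in range(N)]

p = [1.0, 2.0, 0.0, -1.0]
q = [0.5, 1.0, 1.5, 0.0]

# Transform of the correlation ...
C = dft(circular_correlation(p, q))
# ... equals P(u) times the complex conjugate of Q(u), term by term:
P, Q = dft(p), dft(q)
product = [P[u] * Q[u].conjugate() for u in range(len(p))]
```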
Now, the present invention will be described in more detail by way of embodiments with reference to the drawings. FIG. 1 shows in block form an autofocusing system constructed in accordance with one embodiment of the present invention. As shown, light L coming from the same subject to be focused or photographed impinges, through a focusing lens (not shown), on a pair of solid-state image sensors A and B. It is to be noted that each of the image sensors A and B includes N photoelectric elements, as is well known to one skilled in the art.
Now, scanning is carried out at each of the pair of solid-state image sensors A and B so that the intensity of light received by each of the photoelectric elements is converted into a series of output signals A(1), A(2), . . . , A(n), . . . , A(N) and B(1), B(2), . . . , B(n), . . . , B(N). These output signals A(n) and B(n), where n = 1 to N, are then applied to an A/D converter to be converted into digital values A'(n) and B'(n), which are then supplied to a Fourier transformation unit to be subjected to the Fourier transformation operation. That is, A'(n) and B'(n) are Fourier-transformed in the following manner, ##EQU5## where u=0, 1, 2, . . . , M, and M is an integer which is equal to or smaller than (N/2)-1, a set number appropriately determined from the spatial frequency components of the subject to be photographed and the required response speed of the autofocusing operation.
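The transform images (##EQU5##) are not reproduced in this text; a plausible reading, offered here as an assumption, is that F_{1} and F_{2} (and likewise G_{1} and G_{2}) are the cosine and sine components of the discrete Fourier transform of A'(n) (respectively B'(n)) for u = 0 to M:

```python
import math

def dft_components(x, M):
    """Cosine and sine sums of x for u = 0..M.

    An assumed reading of the transform step: the 2*pi*u*n/N kernel is not
    confirmed by the text, only that F1/F2 are 'Fourier transforms' of A'(n).
    """
    N = len(x)
    cos_part = [sum(x[n] * math.cos(2 * math.pi * u * n / N) for n in range(N))
                for u in range(M + 1)]
    sin_part = [sum(x[n] * math.sin(2 * math.pi * u * n / N) for n in range(N))
                for u in range(M + 1)]
    return cos_part, sin_part

A_digital = [3, 1, 4, 1, 5, 9, 2, 6]      # stand-in for the digitized A'(n)
M = len(A_digital) // 2 - 1               # M at its upper bound (N/2)-1
F1, F2 = dft_components(A_digital, M)     # cosine and sine components
```

Keeping only u up to (N/2)-1 discards the redundant upper half of the real-signal spectrum, which is consistent with the bound stated in the text.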
Outputs F_{1} (u), F_{2} (u), G_{1} (u) and G_{2} (u), where u=0, 1, 2, . . . , M, from the Fourier transformation unit are then supplied to a convolution unit, where a convolution operation is carried out using these values. That is, at the convolution unit, the following values are calculated from F_{1} (u), F_{2} (u), G_{1} (u) and G_{2} (u). ##EQU6## Then, using these values, the following equation is calculated, ##EQU7## where f*g corresponds to the previously described equation (5).
The result f*g is outputted in timed sequence at a predetermined pitch in the variable t. That is, with t_{I} =χTI, where T is a clock period, χ is a constant, and I is a positive integer I=1, 2, 3, . . . , I_{N}, with I_{N} a set value, the values f*g(t_{I})=L(I) are outputted in timed sequence in the order of I=1, 2, . . . and supplied to a peak detecting unit, where the peak detecting operation is carried out in the following manner.
That is, as in the process shown in the flow chart of FIG. 2, the successively supplied L(1), L(2), L(3), . . . , L(I), . . . , L(I_{N}) are compared with one another, thereby detecting a maximum L(MAX). The I which gives L(MAX) is designated I(MAX), which is then compared with the previously set values C_{1} and C_{2}, whereby it is determined whether
I(MAX)>C_{1},
I(MAX)<C_{2}, or
C_{1} >I(MAX)>C_{2}.
That is, the values of C_{1} and C_{2} are set such that if I(MAX) is larger than C_{1}, it indicates the too-close condition; on the other hand, if I(MAX) is smaller than C_{2}, it indicates the too-far condition. Thus, if I(MAX) is larger than C_{1} or smaller than C_{2}, it is necessary to displace the lens so as to attain the in-focus condition. The amount of movement of the lens is determined according to I(MAX), which is then supplied to a pair of decoders, one of which is connected to a lens drive to move the lens to the in-focus position and the other of which is connected to a display where the out-of-focus condition, too close or too far, is displayed appropriately. Then, the above process is carried out again from the beginning of the scanning step. On the other hand, if C_{2} <I(MAX)<C_{1}, the in-focus condition is attained so that there is no need to move the lens. In this case, the in-focus condition is indicated.
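The peak-detection and decision flow of FIG. 2 can be sketched as follows. The function name `focus_decision` and the string labels are hypothetical, and C_{1} > C_{2} is assumed, per the inequalities listed above.

```python
def focus_decision(L_values, C1, C2):
    """Peak detection per FIG. 2: find I(MAX), the 1-based index of the
    largest L(I), then classify it against the thresholds C1 > C2."""
    I_max = max(range(1, len(L_values) + 1), key=lambda I: L_values[I - 1])
    if I_max > C1:
        return I_max, "too close"   # lens must move; amount set from I(MAX)
    if I_max < C2:
        return I_max, "too far"     # lens must move the other way
    return I_max, "in focus"        # C2 < I(MAX) < C1: no movement needed
```

One of the returned values would drive the lens decoder, the other the display decoder, mirroring the pair of decoders described above.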
While the above provides a full and complete disclosure of the preferred embodiments of the present invention, various modifications, alternate constructions and equivalents may be employed without departing from the true spirit and scope of the invention. Therefore, the above description and illustration should not be construed as limiting the scope of the invention, which is defined by the appended claims.