System and method for automatically setting image acquisition controls
Abstract
A system and method use computer image processing to set the parameters of a camera's image acquisition device. The image parameters are set automatically by analyzing the image to be taken and adjusting the controls according to the subject matter, as an expert photographer would. The system can run in a fully automatic mode that chooses the best image parameters, or in a “guided mode” where the user is prompted with choices when a number of alternate settings would be reasonable.
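As an illustration only (the patent does not disclose source code, and all names and parameter values below are hypothetical), the four claimed units — temporary image acquisition, scene analysis, photographic expert, and permanent image acquisition — could be sketched as a simple pipeline:

```python
from dataclasses import dataclass

@dataclass
class CaptureParams:
    aperture: float    # f-number
    shutter_s: float   # exposure time in seconds
    focus_m: float     # focus distance in metres

def analyze_scene(temp_image):
    """Scene analysis unit: extract features from the temporary image
    and generate a scene classification.  A stand-in heuristic; a real
    unit would use face detection, depth/motion segmentation, etc."""
    return "portrait" if temp_image.get("faces", 0) > 0 else "landscape"

def expert_adjust(scene_class):
    """Photographic expert unit: map the scene classification to
    image capture parameters (illustrative values only)."""
    table = {
        "portrait":  CaptureParams(aperture=2.8, shutter_s=1 / 125, focus_m=2.0),
        "landscape": CaptureParams(aperture=11.0, shutter_s=1 / 250, focus_m=float("inf")),
    }
    return table[scene_class]

def capture(temp_image):
    """Full pipeline: temporary image -> analysis -> expert -> parameters
    for the permanent image acquisition unit."""
    scene = analyze_scene(temp_image)
    return scene, expert_adjust(scene)
```

For example, a temporary image in which a face was detected would be classified as a portrait and captured with a wide aperture for a shallow depth of field.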
425 Citations
56 Claims
-
1. A system for automatically setting image acquisition controls of a camera, comprising:
-
a temporary image acquisition unit which acquires a temporary image;
a scene analysis unit which extracts information from said temporary image and generates a scene classification from said extracted information;
a photographic expert unit which adjusts image capture parameters based on said scene classification; and
a permanent image acquisition unit which acquires a permanent image based on the image capture parameters adjusted by said photographic expert unit. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41)
2. The system of claim 1, further comprising:
a user interface to prompt a user of the camera and to receive input from the user in response to user prompts, wherein the photographic expert unit optionally runs in a “guided mode” where the user is prompted with choices of a number of alternate settings.
-
4. The system of claim 3, wherein information corresponding to user-adjusted image capture parameters is supplied as feedback to the photographic expert unit through the user interface.
-
5. The system of claim 4, wherein mapping between input characteristics and image parameters is learned automatically by the photographic expert unit observing settings input by the user for different kinds of images.
-
6. The system of claim 1, wherein the photographic expert unit determines a subject based on extracted features within said scene classification.
-
7. The system of claim 6, wherein the photographic expert unit chooses a proper exposure for said subject determined.
-
8. The system of claim 7, wherein the photographic expert unit chooses a shutter speed and aperture combination based upon reflected light from said subject.
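Choosing a shutter speed and aperture from reflected light is conventionally done with the APEX exposure relations. The sketch below is a minimal illustration of those standard relations, not the patent's method; the luminance figure and calibration constant are typical textbook values:

```python
import math

K = 12.5  # typical calibration constant for a reflected-light meter

def exposure_value(luminance_cd_m2, iso):
    """APEX: EV = log2(L * S / K), from metered scene luminance L and ISO S."""
    return math.log2(luminance_cd_m2 * iso / K)

def shutter_for(ev, aperture):
    """Solve EV = log2(N^2 / t) for the exposure time t at f-number N."""
    return aperture ** 2 / 2 ** ev

ev = exposure_value(4000, 100)      # bright sunlight at ISO 100 -> EV about 15
t = shutter_for(ev, aperture=16.0)  # f/16 -> t = 1/125 s
```

Any shutter/aperture pair with the same EV gives the same exposure, so the expert unit is free to trade aperture (depth of field) against shutter speed (motion blur) for the determined subject.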
-
9. The system of claim 1, further comprising:
an illumination device and wherein the photographic expert unit controls the illumination device to compensate for poor illumination.
-
10. The system of claim 9, wherein the photographic expert unit controls the illumination device to compensate for a back-lit subject.
-
11. The system of claim 1, wherein a focal length of a lens is one of the image capture parameters.
-
12. The system of claim 1, wherein lighting of an imaged scene is analyzed by the photographic expert unit to determine exposure parameters of the acquired image.
-
13. The system of claim 1, wherein distances of multiple objects in an imaged scene are used by the photographic expert unit to estimate optimal or candidate focus depths.
-
14. The system of claim 1, wherein distances of objects in an imaged scene are used by the photographic expert unit to estimate optimal or candidate settings for focus depth and aperture to obtain desirable depth of field characteristics.
-
15. The system of claim 1, wherein speeds of objects in an image are estimated and used by the photographic expert unit to determine optimal or candidate exposure speed settings.
-
16. The system of claim 1, wherein objects in the image are recognized and a style of photography is determined by the photographic expert unit based upon the objects.
-
17. The system of claim 1, wherein the photographic expert unit includes a storage medium for storing said extracted information.
-
18. The system of claim 17, wherein the extracted information includes at least one of image features, scene components, and imaging parameters.
-
19. The system of claim 17, wherein the stored information is used for indexing at a later time.
-
20. The system of claim 19, wherein the stored information includes time reference and duration of observed features.
-
21. The system of claim 1, wherein the extracted information includes image features based on one or more of the following:
- depth segmentation, motion segmentation, illumination evaluation, color analysis, geometric features, texture analysis, and scale determination.
-
22. The system of claim 1, wherein the extracted information includes scene components that include one or more of the following:
- human faces, people, buildings, vehicles, walls, furniture, vegetation, skyline, horizon, and ground plane.
-
23. The system of claim 1, wherein said scene classification is one of a portrait, close-up portrait, group shot, action photo, natural landscape, cityscape, still-life, macro close-up, crowd scene, backlit object, motor sports, sea scene, athletic event, stage performance, silhouette, flowers, architectural landmark, animal, and night sky.
-
24. The system of claim 1, wherein the image capture parameters include one or more of the following optical controls:
- focus distance, aperture, shutter speed, zoom or focal length, filter application, filter parameters and imager sensitivity.
-
25. The system of claim 1, wherein the scene analysis unit determines a subject region of the image.
-
26. The system of claim 25, wherein the scene analysis unit determines a subject region of the image based on one or more of the following:
- a flesh tone area, a geometric model of a face, an area with high motion, the closest region, and the most central region.
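One of the cues listed in claim 26, the “most central region,” is simple to make concrete. The following is a hypothetical sketch (function and argument names are the author's own, not the patent's): given candidate subject bounding boxes, it picks the one whose centre lies closest to the image centre:

```python
import math

def most_central_region(regions, width, height):
    """regions: list of (x, y, w, h) candidate subject bounding boxes.
    Returns the box whose centre is closest to the image centre."""
    cx, cy = width / 2, height / 2
    def centre_dist(box):
        x, y, w, h = box
        return math.hypot(x + w / 2 - cx, y + h / 2 - cy)
    return min(regions, key=centre_dist)
```

A real scene analysis unit would combine this with the other listed cues (flesh tone, face geometry, motion, proximity) rather than rely on centrality alone.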
-
27. The system of claim 6, wherein said determined subject is a human face.
-
28. The system of claim 27, wherein color balance of the permanent image acquisition unit is altered to improve flesh tone color rendition in the face.
-
29. The system of claim 28, wherein the face is differentially exposed relative to the remainder of the image to improve contrast.
-
30. The system of claim 1, further comprising:
at least one auxiliary non-imaging sensor supplying an input into the scene analysis unit.
-
31. The system of claim 30, wherein the at least one auxiliary sensor includes one or more of the following:
- an accelerometer, a tilt sensor, an ambient light sensor, an ultraviolet light sensor, a temperature sensor, an optical triangulation unit, an ultrasonic range finder, an audio sensor, a motion sensor, a global positioning system and a compass.
-
32. The system of claim 1, further comprising:
a controllable mechanical pointing system coupled to the permanent image acquisition device.
-
33. The system of claim 1, wherein the photographic expert unit determines a shutter speed to eliminate blurring due to camera shake.
-
34. The system of claim 1, wherein the photographic expert unit determines an exact time at which to acquire the permanent image based on the scene classification.
-
35. The system of claim 1, further comprising:
a user interface which informs a user of the camera of the scene classification and allows the user to either confirm this scene classification or override the photographic expert unit and choose a different scene classification.
-
36. The system of claim 35, wherein the scene analysis unit learns an improved mapping between the extracted features and said scene classification based on feedback from the user.
-
37. The system of claim 1, further comprising:
a user interface which informs a user of the camera of the image parameters selected and allows the user to optionally override these parameters.
-
38. The system of claim 1, further comprising:
a user interface which informs a user of the camera of anomalous situations and/or non-optimal settings.
-
39. The system of claim 1, wherein said scene analysis unit generates a plurality of hypotheses of scene classifications, and wherein said system further comprises:
a scene arbitration unit which selects said scene classification from said plurality of hypotheses of scene classifications.
-
40. The system of claim 39, wherein said scene arbitration unit selects said scene classification from said plurality of hypotheses of scene classifications based on one of a rules-based, statistical, or neural network-based process.
-
41. The system of claim 40, wherein said rules-based or statistical process selects said scene classification as an optimal one of said hypotheses of scene classifications based on said extracted information.
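The arbitration of claims 39–41 could be sketched as follows. This is an illustrative assumption, not the patented process: the statistical step picks the highest-confidence hypothesis, and a hypothetical priority table stands in for the rules that break ties:

```python
# Illustrative tie-break rules; the patent does not specify these.
PRIORITY = {"portrait": 2, "group shot": 1}

def arbitrate(hypotheses):
    """Scene arbitration unit sketch.
    hypotheses: list of (scene_class, confidence) pairs from the
    scene analysis unit.  Maximise confidence first (statistical step),
    then break ties by rule priority (rules-based step)."""
    return max(hypotheses, key=lambda h: (h[1], PRIORITY.get(h[0], 0)))[0]
```

For instance, given equally confident "portrait" and "group shot" hypotheses, the tie-break rule selects "portrait". A neural-network-based arbiter, also contemplated by claim 40, would replace this scoring with a learned classifier.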
-
42. A computer implemented method for automatically setting image acquisition controls of a camera, said method comprising the steps of:
-
acquiring a temporary image;
extracting at least one of image features and scene components from said temporary image;
generating a scene classification from said at least one of the extracted image features and scene components;
adjusting image capture parameters based on the scene classification generated in said generating step; and
acquiring a permanent image based on the image capture parameters adjusted in said adjusting step. - View Dependent Claims (43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54)
43. The method of claim 42, further comprising:
prompting a user of the camera with choices of a number of alternate settings; and
receiving input from the user in response to user prompts, wherein the photographic expert unit optionally runs in a “guided mode.”
-
44. The method of claim 43, further comprising:
- receiving user feedback to adjust image capture parameters.
-
45. The method of claim 44, further comprising:
automatically learning mapping between input characteristics and image parameters by observing settings input by the user for different kinds of images.
-
46. The method of claim 42, further comprising:
determining an intended subject based on extracted features from the scene classification.
-
47. The method of claim 46, further comprising:
choosing a proper focus distance for a subject region of an image of the determined intended subject.
-
48. The method of claim 46, further comprising:
choosing a proper exposure for a subject region of an image of the determined intended subject.
-
49. The method of claim 48, further comprising:
choosing a shutter speed and aperture combination based upon reflected light from the determined intended subject.
-
50. The method of claim 42, further comprising:
-
generating a plurality of hypotheses of scene classifications; and
selecting said scene classification from said plurality of hypotheses of scene classifications.
-
-
51. The method of claim 50, further comprising:
requesting a scene classification choice from a user of the camera.
-
52. The method of claim 50, wherein said scene classification is selected from said plurality of hypotheses of scene classifications based on one of a rules-based or statistical process.
-
53. The method of claim 42, further comprising:
saving said scene classification and imaging parameter information associated with an image.
-
54. The method of claim 52, wherein said rules-based or statistical process selects said scene classification as an optimal one of said hypotheses of scene classifications based on at least one of said extracted image features and scene components.
-
55. A system for automatically setting image acquisition controls of a camera, comprising:
-
a temporary image acquisition unit for acquiring a temporary image;
a permanent image acquisition unit with controllable image capture parameters for the acquisition of a permanent image;
a scene analysis unit for receiving input from the temporary image acquisition unit, extracting image features and scene components, and using these to generate a parameterized scene classification; and
a photographic expert unit for receiving said scene classification from the scene analysis unit and generating as an output image capture parameters, wherein the photographic expert unit adjusts the image capture parameters of the permanent image acquisition unit based on the scene classification. - View Dependent Claims (56)
-
Specification