Development and testing of an artificial active vision system
An active vision system is one where the “camera” isn’t static but moves around the environment. Sensorimotor information accumulates incrementally and is processed as it arrives, eventually producing a response.
In my case, the task was to categorise images (hand-written letters and black-and-white human faces of different genders and emotional expressions). The “camera” was a small square of pixels that crawled over the images over time.
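A minimal sketch of what such a moving “camera” might look like (the function name, patch size and clipping behaviour are my assumptions, not details from the thesis):

```python
import numpy as np

def read_patch(image, x, y, size=5):
    """Read a small square 'retina' of pixels at position (x, y).

    Hypothetical sketch: the position is clipped to the image borders,
    so the agent always receives a full size x size input.
    """
    h, w = image.shape
    x = int(np.clip(x, 0, w - size))
    y = int(np.clip(y, 0, h - size))
    return image[y:y + size, x:x + size]

# The agent's motor output moves the patch over the image step by step;
# the flattened patch is then fed to the controller as sensory input.
image = np.random.rand(28, 28)   # stand-in for a hand-written letter
patch = read_patch(image, 10.7, 3.2)
print(patch.shape)  # (5, 5)
```

At every time step the controller only ever sees this tiny window, so any categorisation has to emerge from how the window is moved over time.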
The experimental setup was within the Evolutionary Robotics framework: a simulated e-puck, controlled by a CTRNN (continuous-time recurrent neural network) evolved with a genetic algorithm, moved over the image and explored it; the fitness was the ability to correctly categorise the presented images.
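For readers unfamiliar with CTRNNs, here is a minimal Euler-integrated sketch of the standard dynamics; the network size, parameters and integration step are illustrative assumptions, not the ones used in the thesis:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class CTRNN:
    """Minimal continuous-time recurrent neural network (Euler integration)."""

    def __init__(self, n, dt=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.dt = dt
        self.tau = np.ones(n)             # neuron time constants
        self.bias = rng.normal(size=n)    # neuron biases
        self.w = rng.normal(size=(n, n))  # recurrent weights (w[j, i]: j -> i)
        self.y = np.zeros(n)              # internal neuron states

    def step(self, inputs):
        # tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + bias_j) + I_i
        drive = self.w.T @ sigmoid(self.y + self.bias) + inputs
        self.y += self.dt * (-self.y + drive) / self.tau
        return sigmoid(self.y + self.bias)  # firing rates as outputs

net = CTRNN(8)
for _ in range(100):
    out = net.step(np.zeros(8))
print(out.shape)  # (8,)
```

In the evolutionary setting, the weights, biases and time constants would be encoded in a genome, and a genetic algorithm would select networks by their categorisation fitness; some output neurons would drive the robot’s movement over the image.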
The good results showed, for example, that saccades (the quick eye movements we unconsciously use to scan the environment) are very efficient at discarding “useless” areas of the field of view and progressively focusing on the important parts. The best evolved controllers categorised images very quickly once they learned to focus on the corners of the mouth, the nose and the eyebrows, which are apparently very good discriminants of human gender and emotion.
The thesis can be found here, but it’s in Italian only.