BACKGROUND: For centuries, art lovers have wondered about the Mona Lisa's mysterious smile and what she may have been thinking or feeling at the time she was painted. Now, scientists in the Netherlands have used emotion-recognition software to determine the Mona Lisa's emotions as she sat for her portrait by Leonardo da Vinci. Another application of emotion-recognition software might be detecting terror suspects on the basis of their emotions, not just their physical characteristics.
HOW IT WORKS: The software fits a network of points over a 3D model of the face of Leonardo's most famous subject, then analyzes features such as the curvature of the lips and the crinkles around the eyes to produce a score for each of six basic emotions. The software scored the Mona Lisa for happiness, disgust, fear, and anger, but found no evidence of surprise or sadness. The exact breakdown of the Mona Lisa's emotions, according to the program, was 83 percent happy, 9 percent disgusted, 6 percent fearful, and 2 percent angry.
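The scoring step described above can be sketched as a simple normalization: per-emotion raw scores are converted into percentages that sum to 100. The raw values below are hypothetical, chosen only to reproduce the breakdown reported here; the feature-measurement stage and the actual software are not shown.

```python
# Hypothetical sketch of the scoring stage: raw scores for the six basic
# emotions are normalized into percentages. The raw values are illustrative,
# not output from the actual emotion-recognition software.

def normalize_scores(raw_scores):
    """Convert raw per-emotion scores into percentages summing to 100."""
    total = sum(raw_scores.values())
    return {emotion: 100 * score / total for emotion, score in raw_scores.items()}

# Illustrative raw scores matching the article's reported breakdown.
raw = {
    "happiness": 83.0,
    "disgust": 9.0,
    "fear": 6.0,
    "anger": 2.0,
    "surprise": 0.0,  # no evidence found
    "sadness": 0.0,   # no evidence found
}

percentages = normalize_scores(raw)
```

Because the illustrative raw scores already sum to 100, the normalized percentages match the article's figures directly; with arbitrary raw scores, the same function would still yield a breakdown summing to 100.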
IN DIVINE PROPORTION: Fans of The Da Vinci Code know that the Mona Lisa's smile isn't the only mystery associated with Leonardo's masterpiece. In 1509, mathematician Fra Luca Pacioli published a book about the golden ratio, which Leonardo illustrated and which drew on the work of artist Piero della Francesca: if a line is divided into two unequal segments such that the ratio of the longer segment to the shorter equals the ratio of the whole line to the longer segment, that ratio is approximately 1.618. Some art historians say that within the painting, the relationship between the Mona Lisa's right shoulder and cheek and her left shoulder and cheek forms a golden triangle whose shortest sides are in divine proportion to its base.
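The definition above can be checked numerically. Writing the shorter segment as 1 and the longer as φ, the condition longer/shorter = whole/longer becomes φ = (1 + φ)/φ, i.e. φ² = φ + 1, whose positive root is (1 + √5)/2. A minimal check:

```python
import math

# The golden ratio is the positive root of x^2 = x + 1.
phi = (1 + math.sqrt(5)) / 2

# Verify the defining property from the text: with a shorter segment of 1
# and a longer segment of phi, longer/shorter equals whole/longer.
shorter = 1.0
longer = phi
whole = shorter + longer

assert math.isclose(longer / shorter, whole / longer)
print(round(phi, 3))  # → 1.618
```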
ROBOTS THAT FEEL? Emotion-recognition software is part of the field of human-computer interaction, a technology that allows computers to respond appropriately to human beings by reading the expressions on their faces. The most famous example is a robot called Kismet, built by researchers at MIT, which reacts to tone of voice and to visual cues such as body movement, posture, and facial expression in order to interact socially with human beings. The robot does not experience emotions, but its creators hope to create a convincing simulation as Kismet continues to learn.
The Institute of Electrical and Electronics Engineers, Inc., contributed to the information contained in the TV portion of this report.