New Technology Could Enable Glasses-Free Viewing of e-Readers, Smartphones and Displays
Researchers at the MIT Media Laboratory and the University of California at Berkeley have developed a new display technology that automatically corrects for vision defects—no glasses or contact lenses required. The technique could lead to dashboard-mounted GPS displays that farsighted drivers can consult without putting their glasses on, or electronic readers that eliminate the need for reading glasses, among other applications, according to the researchers.
“The first spectacles were invented in the 13th century,” said Gordon Wetzstein, a research scientist at the Media Lab and one of the display’s co-creators. “Today, of course, we have contact lenses and surgery, but it’s all invasive in the sense that you either have to put something in your eye, wear something on your head, or undergo surgery. We have a different solution that basically puts the glasses on the display, rather than on your head. It will not be able to help you see the rest of the world more sharply, but today, we spend a huge portion of our time interacting with the digital world.”
The display is a variation on a glasses-free 3D technology also developed by the Media Lab’s Camera Culture group, headed by Ramesh Raskar. But where the 3D display projects slightly different images to the viewer’s left and right eyes, the vision-correcting display projects slightly different images to different parts of the viewer’s pupil.
A vision defect is a mismatch between the eye’s focal distance—the distance at which it can actually bring objects into focus—and the distance of the object it’s trying to focus on. Essentially, the new display simulates an image at the correct focal distance—somewhere between the display and the viewer’s eye.
The difficulty with this approach is that simulating a single pixel in the virtual image requires multiple pixels of the physical display. The angle at which light should seem to arrive from the simulated image is sharper than the angle at which light would arrive from the same image displayed on the screen. So the physical pixels projecting light to the right side of the pupil have to be offset to the left, and the pixels projecting light to the left side of the pupil have to be offset to the right.
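The offset geometry described above follows from similar triangles: a ray that should appear to come from a point on the virtual image plane, after passing through a given point on the pupil, must actually originate farther out on the physical screen, on the opposite side. The sketch below illustrates this with hypothetical distances (25 cm for the virtual plane, 40 cm for the screen, a 4 mm pupil); none of these numbers come from the researchers’ prototype.

```python
# Hypothetical numbers for illustration only: a virtual image plane
# 25 cm from the eye and a physical screen 40 cm away (assumptions,
# not the prototype's actual dimensions).
d_virtual = 0.25   # metres to the simulated focal plane
d_screen = 0.40    # metres to the physical display

def screen_position(x_virtual, pupil_x):
    """Where on the screen a ray must originate so that, after
    passing through pupil point pupil_x, it appears to come from
    lateral position x_virtual on the virtual plane. Pure similar-
    triangle geometry: extend the ray from the pupil through the
    virtual point until it reaches the screen's depth."""
    return pupil_x + (x_virtual - pupil_x) * d_screen / d_virtual

# A pixel on the optical axis (x_virtual = 0) seen through the right
# edge of a 4 mm pupil (pupil_x = +2 mm) must be drawn LEFT of
# centre, and vice versa -- the offsets the article describes.
x_for_right_of_pupil = screen_position(0.0, 0.002)   # negative: left
x_for_left_of_pupil = screen_position(0.0, -0.002)   # positive: right
print(x_for_right_of_pupil, x_for_left_of_pupil)
```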
The use of multiple on-screen pixels to simulate a single virtual pixel would drastically reduce the image resolution. But this problem turns out to be very similar to a problem that Wetzstein, Raskar, and colleagues solved in their 3D displays, which also had to project different images at different angles.
The researchers discovered that there is, in fact, a great deal of redundancy between the images required to simulate different viewing angles. The algorithm that computes the image to be displayed onscreen can exploit that redundancy, allowing individual screen pixels to participate simultaneously in the projection of different viewing angles. The MIT and Berkeley researchers were able to adapt that algorithm to the problem of vision correction, so the new display incurs only a modest loss in resolution.
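The redundancy argument can be made concrete with a toy linear system (a sketch of the general idea, not the group’s actual algorithm): treat each (pupil-region, target-pixel) pair as one ray, record which screen pixels contribute to it, and solve jointly for the screen image. Because rays headed to different pupil regions pass through overlapping sets of screen pixels, one physical pixel can serve several viewing angles at once, which is what limits the resolution loss.

```python
import numpy as np

# Toy sketch: rows of A mark which screen pixels each ray -- one
# (pupil sample, target pixel) pair -- passes through; b holds the
# intensity each ray should deliver. The sparsity pattern here is
# random for illustration; a real system would derive it from the
# display/pupil geometry.
rng = np.random.default_rng(0)
n_rays, n_pixels = 12, 8          # more rays than pixels: shared pixels
A = (rng.random((n_rays, n_pixels)) < 0.4).astype(float)
b = rng.random(n_rays)            # hypothetical target intensities

# Least-squares solve: the single screen image that best satisfies
# all rays simultaneously, letting each pixel participate in the
# projection of multiple viewing angles.
screen, *_ = np.linalg.lstsq(A, b, rcond=None)
print(screen.shape)               # one value per physical screen pixel
```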
In the researchers’ prototype, however, display pixels do have to be masked from the parts of the pupil for which they’re not intended. That requires that a transparency patterned with an array of pinholes be laid over the screen, blocking more than half the light it emits.
But early versions of the 3D display faced the same problem, and the MIT researchers solved it by instead using two liquid-crystal displays (LCDs) in parallel. Carefully tailoring the images displayed on the LCDs to each other allows the system to mask perspectives while letting much more light pass through. Wetzstein envisions that commercial versions of a vision-correcting screen would use the same technique.
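Dual-layer LCD displays of this kind are commonly formulated as a low-rank factorization problem: find per-layer transmittance patterns whose pointwise product reproduces the target light field. The fragment below is a hedged illustration of that general idea using NMF-style multiplicative updates on a rank-1 toy problem; the variable names, the rank-1 restriction, and the omission of the physical [0, 1] transmittance constraint are all simplifying assumptions, not details from the paper.

```python
import numpy as np

# Toy target "light field": rows index angle, columns index position.
# Random data stands in for a real computed light field.
rng = np.random.default_rng(1)
target = rng.random((16, 16))

# One pattern per LCD layer; light passing through both layers is
# modelled as their outer product (rank-1 -- a simplification).
front = np.ones(16)
back = np.ones(16)
for _ in range(200):
    # Multiplicative NMF-style updates keep both patterns nonnegative.
    front *= (target @ back) / (front * (back @ back) + 1e-9)
    back *= (target.T @ front) / (back * (front @ front) + 1e-9)

approx = np.outer(front, back)    # light field the stacked LCDs emit
error = np.linalg.norm(target - approx)
print(error)
```

Because the two layers multiply rather than mask, far more light reaches the viewer than with a pinhole transparency, at the cost of solving this optimization for every frame.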
Indeed, he says, the same screens could both display 3D content and correct for vision defects, all glasses-free. They could also reproduce another Camera Culture project, which diagnoses vision defects. So the same device could, in effect, determine the user’s prescription and automatically correct for it.
“Most people in mainstream optics would have said, ‘Oh, this is impossible,’” said Chris Dainty, a professor at the University College London Institute of Ophthalmology and Moorfields Eye Hospital. “But Ramesh’s group has the art of making the apparently impossible possible.”
By Eye² Staff