
The breakthrough is a different type of sensor that captures what are known as light fields -- basically, all the light that is moving in all directions in the view of the camera. That offers several advantages over traditional photography, the most revolutionary of which is that photos no longer need to be focused before they are taken.
Lytro's camera works by positioning an array of tiny lenses between the main lens and the image sensor, with the microlenses measuring both the total amount of light coming in and its direction.
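As far as I can tell, the trick is that each microlens covers a small patch of sensor pixels, and which pixel under a microlens lights up depends on which part of the main lens the ray came through. So the microlens index gives you position and the pixel offset under it gives you direction. Here's a toy numpy sketch of that idea -- the dimensions are made up, not Lytro's actual layout:

```python
import numpy as np

# Toy plenoptic sensor (hypothetical dimensions, not Lytro's):
# a 4x4 grid of microlenses, each covering a 3x3 patch of pixels.
N_LENS, N_PIX = 4, 3

# raw[i, j, u, v]: light recorded at pixel (u, v) under microlens (i, j).
# (i, j) is the ray's spatial position on the sensor plane; (u, v) is its
# incoming angle, because rays arriving from different directions land on
# different pixels behind the microlens.
raw = np.random.rand(N_LENS, N_LENS, N_PIX, N_PIX)

# A conventional sensor just sums all directions at each position,
# throwing the angular information away:
conventional = raw.sum(axis=(2, 3))   # shape (4, 4): direction is gone

# The light-field sensor keeps the full 4D array, so you can instead
# pick one direction across the whole sensor -- equivalent to a pinhole
# "sub-aperture" view through one part of the main lens:
sub_aperture = raw[:, :, 1, 1]        # shape (4, 4): one direction only
```

The price is resolution: a sensor with one pixel per final-image pixel now spends a whole patch of pixels on each image point.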
There are some neat demo photos on their site -- clicking in the Flash app moves the focus point -- which show you the theoretical end result, but I don't understand how this works at all. Their "simple explanation" just says,
"Recording light fields requires an innovative, entirely new kind of sensor called a light field sensor. The light field sensor captures the color, intensity and vector direction of the rays of light. This directional information is completely lost with traditional camera sensors, which simply add up all the light rays and record them as a single amount of light."
which I'm pretty sure is the same as saying:

Their paper is mostly about how you simulate N different cameras once you have the light-field info, but I still don't understand how you de-focus light that has already passed through a lens (and there is a lens in front of this array of tiny lenses) or how having one lens per pixel gets you the direction of the ray. Because throwing away that information is what lenses are for.