Friday, October 20, 2017

Weatherproof TTL Serial JPEG Camera with NTSC Video and IR LEDs

https://www.adafruit.com/product/613




TECHNICAL DETAILS


  • Metal housing size: 2" x 2" x 2.5"
  • Weight: 150 grams
  • Image sensor: 1/4" CMOS
  • CMOS pixels: 0.3M (VGA)
  • Pixel size: 5.6 µm x 5.6 µm
  • Output format: Standard JPEG/M-JPEG
  • White balance: Automatic
  • Exposure: Automatic
  • Gain: Automatic
  • Shutter: Electronic rolling shutter
  • SNR: 45 dB
  • Dynamic range: 60 dB
  • Max analog gain: 16 dB
  • Frame speed: 640x480 at 30 fps
  • Scan mode: Progressive scan
  • Viewing angle: 60 degrees
  • Monitoring distance: 10 meters, maximum 15 meters (adjustable)
  • Image size: VGA (640x480), QVGA (320x240), QQVGA (160x120)
  • Baud rate: Default 38400, maximum 115200
  • Current draw: 75 mA with IR LEDs off; an extra 250 mA with IR LEDs on
  • Operating voltage: DC +5V
  • Communication: 3.3V TTL serial (three wires: TX, RX, GND; see the capture sketch below)
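
For reference, here's what a capture looks like in software. The module speaks the VC0706 command set over its TTL serial line; the sketch below is a minimal Python/pyserial example assuming the default 38400 baud, a placeholder port name, and the VC0706 command framing as documented by Adafruit. Treat it as a sketch, not a drop-in driver.

    import serial  # pyserial

    # Placeholder port name; use your system's serial device.
    cam = serial.Serial("/dev/ttyUSB0", baudrate=38400, timeout=3)

    def cmd(payload, reply_len):
        # Every VC0706 command starts with 0x56 (sign) + 0x00 (serial number).
        cam.write(bytes([0x56, 0x00]) + bytes(payload))
        return cam.read(reply_len)

    cmd([0x36, 0x01, 0x00], 5)          # FBUF_CTRL: freeze the current frame
    resp = cmd([0x34, 0x01, 0x00], 9)   # GET_FBUF_LEN: returns 4-byte JPEG size
    length = int.from_bytes(resp[5:9], "big")

    # READ_FBUF: data type 0x00, transfer mode 0x0A, 4-byte start address,
    # 4-byte transfer length, 2-byte read delay.
    cmd([0x32, 0x0C, 0x00, 0x0A,
         0x00, 0x00, 0x00, 0x00,
         *length.to_bytes(4, "big"),
         0x00, 0x0A], 0)
    data = cam.read(5 + length + 5)     # the JPEG is bracketed by 5-byte replies
    with open("photo.jpg", "wb") as f:
        f.write(data[5:5 + length])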

Monday, September 25, 2017

Gigabit Multimedia Serial Link (GMSL)

Gigabit Multimedia Serial Link (GMSL) serializer and deserializer (SerDes)

Right now this is a de facto standard built around Maxim's chips.
It seems to be used almost exclusively in the self-driving car / automotive industry.

Power and data are carried over a single coax cable to a GMSL camera.





Maxim Integrated's MAX9272A and MAX9275 gigabit multimedia serial link (GMSL) serializers and deserializers (SerDes) used in the Surround View Kit are designed primarily for automotive video applications such as ADAS and infotainment. Maxim's GMSL SerDes technology provides a compression-free alternative to Ethernet, delivering 10x faster data rates, 50% lower cabling costs, and better EMC. The ADAS Starter Kit comes with 20 cm coax cables, but they can be swapped for longer ones, as Maxim's GMSL chipsets drive 15 meters of coax or shielded twisted-pair (STP) cabling, thereby providing the margin required for robust and versatile designs.

https://www.maximintegrated.com/en/products/interface/high-speed-signaling/gmsl.html

http://shop.leopardimaging.com/product.sc?productId=283

https://www.macnica.eu/products/imi

https://www.renesas.com/en-us/solutions/automotive/adas/solution-kits/adas-view-solution-kit.html

https://www.renesas.com/en-us/solutions/automotive/adas/solution-kits/adas-surround-view-kit.html

Friday, August 04, 2017

JPEG2000 GPU Codec toolkit

http://comprimato.com/


Ultra-high speed compression and life-like viewing experience starts here. JPEG2000 Codec for GPU and CPU. Comprimato's JPEG2000 GPU Codec toolkit helps Media & Entertainment and Geospatial Imaging technology companies keep it real and with more accurate decision-making power.

Camera in a furniture screw







CIA Hacking Tool "Dumbo" Hack WebCams & Corrupt Video Recordings

https://gbhackers.com/cia-hacking-tool-dumba-hack-webcams/



Saturday, July 29, 2017

Michael Ossmann Pulls DSSS Out of Nowhere | Hackaday

http://hackaday.com/2017/07/29/michael-ossmann-pulls-dsss-out-of-nowhere/

Altspace VR closes

https://www.wired.com/story/altspace-vr-closes/


Altspace tweeted the unexpected news: "It is with tremendously heavy hearts that we must let you know that we are closing down AltspaceVR very soon." The site had been unable to close its latest round of funding, it elaborated in a blog post, and will be shutting down next week.

Friday, July 28, 2017

The world's only single-lens Monocentric wide-FOV light field camera.


The operative word here is "monocentric".

Monocentric

[Monocentric eyepiece diagram]
A monocentric eyepiece is an achromatic triplet lens with two pieces of crown glass cemented on both sides of a flint glass element. The elements are thick, strongly curved, and their surfaces have a common center, giving it the name "monocentric". It was invented by Hugo Adolf Steinheil around 1883. This design, like the solid eyepiece designs of Robert Tolles, Charles S. Hastings, and E. Wilfred Taylor, is free from ghost reflections and gives a bright, high-contrast image, a desirable feature when it was invented (before anti-reflective coatings).


A Wide-Field-of-View Monocentric Light Field Camera
Donald G. Dansereau, Glenn Schuster, Joseph Ford, and Gordon Wetzstein, Stanford University, Department of Electrical Engineering

http://www.computationalimaging.org/wp-content/uploads/2017/04/LFMonocentric.pdf

Abstract

Light field (LF) capture and processing are important in an expanding range of computer vision applications, offering rich textural and depth information and simplification of conventionally complex tasks. Although LF cameras are commercially available, no existing device offers wide field-of-view (FOV) imaging. This is due in part to the limitations of fisheye lenses, for which a fundamentally constrained entrance pupil diameter severely limits depth sensitivity. In this work we describe a novel, compact optical design that couples a monocentric lens with multiple sensors using microlens arrays, allowing LF capture with an unprecedented FOV. Leveraging capabilities of the LF representation, we propose a novel method for efficiently coupling the spherical lens and planar sensors, replacing expensive and bulky fiber bundles. We construct a single sensor LF camera prototype, rotating the sensor relative to a fixed main lens to emulate a wide-FOV multi-sensor scenario. Finally, we describe a processing tool chain, including a convenient spherical LF parameterization, and demonstrate depth estimation and post-capture refocus for indoor and outdoor panoramas with 15 × 15 × 1600 × 200 pixels (72 MPix) and a 138° FOV.


---------

Designing a 4D camera for robots

Stanford engineers have developed a 4D camera with an extra-wide field of view. They believe this camera can be better than current options for close-up robotic vision and augmented reality.

Stanford has created a 4D camera that can capture 140 degrees of information. The new technology would be the perfect addition to robots and autonomous vehicles. The 4D camera relies on light field photography which allows it to gather such a wide degree of information.



A light field camera, or standard plenoptic camera, works by capturing information about the light field emanating from a scene. It measures the intensity of the light in the scene and also the direction the light rays travel. Traditional photography captures only the light intensity.



The researchers proudly call their design the "first-ever single-lens, wide field of view, light field camera." The camera uses the information it has gathered about the light at the scene in combination with the 2D image to create the 4D image.
This means the photo can be refocused after the image has been captured. The researchers cleverly use the analogy of the difference between looking out a window and through a peephole to describe the difference between traditional photography and the new technology. They say, "A 2D photo is like a peephole because you can't move your head around to gain more information about depth, translucency or light scattering. Looking through a window, you can move and, as a result, identify features like shape, transparency and shininess."
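
That post-capture refocusing is typically done with the textbook shift-and-add technique: each sub-aperture view is shifted in proportion to its offset in the aperture plane, then all views are averaged. A minimal numpy sketch of that idea follows (my illustration, not the Stanford pipeline; integer-pixel shifts stand in for proper sub-pixel interpolation):

    import numpy as np

    def refocus(lf, alpha):
        # lf[u, v, y, x]: 4D light field of U x V sub-aperture views.
        U, V, H, W = lf.shape
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                # Shift each view in proportion to its aperture offset;
                # alpha selects the virtual focal plane.
                dy = int(round((u - U // 2) * (1 - 1 / alpha)))
                dx = int(round((v - V // 2) * (1 - 1 / alpha)))
                out += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
        return out / (U * V)

    # Toy 5x5-view light field; sweeping alpha sweeps the focus depth.
    lf = np.random.rand(5, 5, 64, 64)
    image = refocus(lf, alpha=1.2)
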
The 4D camera's unique qualities make it perfect for use with robots. For instance, images captured by a search and rescue robot could be zoomed in and refocused to provide important information to base control. The imagery produced by the 4D camera could also have applications in augmented reality, as the information-rich images could help with better quality rendering.
The 4D camera is still at a proof-of-concept stage, and too big for any of the possible future applications. But now that the technology is at a working stage, smaller and lighter versions can be developed. The researchers explain the motivation for creating a camera specifically for robots. Donald Dansereau, a postdoctoral fellow in electrical engineering, explains, "We want to consider what would be the right camera for a robot that drives or delivers packages by air. We're great at making cameras for humans but do robots need to see the way humans do? Probably not."
The research will be presented at the computer vision conference CVPR 2017 on July 23.
http://news.stanford.edu/press-releases/2017/07/21/new-camera-impro-virtual-reality/

First images from the world's only single-lens wide-FOV light field camera.

From CVPR 2017 paper "A Wide-Field-of-View Monocentric Light Field Camera", 
   http://dgd.vision/Projects/LFMonocentric/

This parallax pan scrolls through a 138-degree, 72-MPix light field captured using our optical prototype. Shifting the virtual camera position over a circular trajectory during the pan reveals the parallax information captured by the LF.

There is no post-processing or alignment between fields; this is the raw light field as measured by the camera.



Other related work:

http://spie.org/newsroom/6666-panoramic-full-frame-imaging-with-monocentric-lenses-and-curved-fiber-bundles

Monocentric lens-based multi-scale optical systems and methods of use US 9256056 B2





High Definition, Low Delay, SDR-Based Video Transmission in UAV Applications


Software-defined radio (SDR) is a radio communication system where components that have been typically implemented in hardware (e.g. mixers, filters, amplifiers, modulators/demodulators, detectors, etc.) are instead implemented by means of software on a personal computer or embedded system. While the concept of SDR is not new, the rapidly evolving capabilities of digital electronics render practical many processes which used to be only theoretically possible.

https://en.wikipedia.org/wiki/Software-defined_radio


SDR changes everything when it comes to radio and is the future of video.

We are having a meetup every Saturday at 4 PM at the Hacker Dojo in Santa Clara, CA.

https://www.meetup.com/Fly-by-SDR-Hacker-Club-Prep-for-Darpa-SDR-Hackfest/



High Definition, Low Delay, SDR-Based Video Transmission in UAV Applications


Abstract

Integrated RF agile transceivers are not only widely employed in software-defined radio (SDR) architectures in cellular telephone base stations, such as multiservice distributed access system (MDAS) and small cell, but also for wireless HD video transmission for industrial, commercial, and military applications, such as unmanned aerial vehicles (UAVs). This article will examine a wideband wireless video signal chain implementation using the AD9361/AD9364 integrated transceiver ICs, the amount of data transmitted, the corresponding RF occupied signal bandwidth, the transmission distance, and the transmitter's power. It will also describe the implementation of the PHY layer of OFDM and present frequency-hopping time test results for avoiding RF interference. Finally, we will discuss the advantages and disadvantages between Wi-Fi and the RF agile transceiver in wideband wireless applications.

http://www.analog.com/en/analog-dialogue/articles/high-definition-low-delay-sdr-based-video-transmission-in-uav-applications.html
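
As a toy illustration of the OFDM PHY concept the abstract mentions (not ADI's actual implementation), here is a minimal one-symbol OFDM round-trip in numpy over an ideal channel: QPSK-map the bits, take an IFFT, prepend a cyclic prefix, then undo it all at the receiver.

    import numpy as np

    N, CP = 64, 16                           # subcarriers, cyclic prefix length
    bits = np.random.randint(0, 2, (N, 2))
    # QPSK mapping: one bit each on the I and Q axes.
    syms = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

    tx = np.fft.ifft(syms) * np.sqrt(N)      # OFDM modulation
    tx = np.concatenate([tx[-CP:], tx])      # prepend cyclic prefix

    rx = np.fft.fft(tx[CP:]) / np.sqrt(N)    # receiver: drop CP, FFT back
    rec = np.stack([rx.real > 0, rx.imag > 0], axis=1).astype(int)
    assert np.array_equal(rec, bits)         # ideal channel: exact recovery

A real PHY layers pilots, equalization, coding, and time/frequency synchronization on top of this skeleton.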


Tuesday, July 25, 2017

A New Sampling Algorithm Could Eliminate Sensor Saturation (scitechdaily.com)

https://science.slashdot.org/story/17/07/22/0537231/a-new-sampling-algorithm-could-eliminate-sensor-saturation


Baron_Yam shared an article from Science Daily: Researchers from MIT and the Technical University of Munich have developed a new technique that could lead to cameras that can handle light of any intensity, and audio that doesn't skip or pop. Virtually any modern information-capture device -- such as a camera, audio recorder, or telephone -- has an analog-to-digital converter in it, a circuit that converts the fluctuating voltages of analog signals into strings of ones and zeroes. Almost all commercial analog-to-digital converters (ADCs), however, have voltage limits. If an incoming signal exceeds that limit, the ADC either cuts it off or flatlines at the maximum voltage. This phenomenon is familiar as the pops and skips of a "clipped" audio signal or as "saturation" in digital images -- when, for instance, a sky that looks blue to the naked eye shows up on-camera as a sheet of white.

Last week, at the International Conference on Sampling Theory and Applications, researchers from MIT and the Technical University of Munich presented a technique that they call unlimited sampling, which can accurately digitize signals whose voltage peaks are far beyond an ADC's voltage limit. The consequence could be cameras that capture all the gradations of color visible to the human eye, audio that doesn't skip, and medical and environmental sensors that can handle both long periods of low activity and the sudden signal spikes that are often the events of interest.

One of the paper's authors explains: "The idea is very simple. If you have a number that is too big to store in your computer memory, you can take the modulo of the number."
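
A toy numpy illustration of that modulo idea (mine, not the paper's full algorithm, which uses higher-order differences and tolerates noise): if sampling is dense enough that consecutive true samples differ by less than the ADC's folding threshold, the folded samples can be unwrapped exactly.

    import numpy as np

    lam = 1.0                                  # ADC folding threshold
    def fold(x):                               # centered modulo into [-lam, lam)
        return (x + lam) % (2 * lam) - lam

    t = np.linspace(0, 1, 2000)
    f = 3.0 * np.sin(2 * np.pi * 5 * t)        # peaks far beyond the ADC limit
    y = fold(f)                                # what a modulo/self-reset ADC records

    # Successive true samples differ by much less than lam here, so re-folding
    # the first difference of y recovers the true difference; a cumulative sum
    # then rebuilds the signal up to its starting value.
    d = fold(np.diff(y))
    f_hat = np.concatenate([[y[0]], y[0] + np.cumsum(d)])
    print(np.max(np.abs(f_hat - f)))           # ~0 (float round-off only)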

Sunday, June 18, 2017

Road to the Holodeck: Lightfields and Volumetric VR


It's come and gone, but it looked to be very interesting.

https://www.eventbrite.com/e/road-to-the-holodeck-lightfields-and-volumetric-vr-tickets-34087827610#

What's a lightfield, you ask?

Several technologies are required for VR's holy grail: the fabled holodeck. We already have graphical VR experiences that let us move through volumetric spaces, such as video games. And we have photorealistic media that lets us look around a 360° scene from a fixed position (a.k.a. head tracking).
But what about the best of both worlds?
We're talking volumetric spaces in which you can move around, but are also photorealistic. In addition to things like positional tracking and lots of processing horsepower, the heart of this vision is lightfields. They define how photons hit our eyes and render what we see.
Because it's a challenge to capture photorealistic imagery from every possible angle in a given space -- as our eyes do in real reality -- the art of lightfields in VR involves extrapolating many vantage points, once a fixed point is captured. And that requires clever algorithms, processing, and a whole lot of data.

Friday, June 16, 2017

What is HDbitT?



A new standard protocol for digital connectivity, and the next-generation solution for AV over IP.

HDbitT stands for High-Definition Digital bit Transmission Technology. It is a new standard protocol for digital connectivity, specialized in professional audio/video-over-IP delivery and transmission.






HDbitT enables transmission of high-definition audio/video up to 4Kx2K@60Hz via network cable, optical fiber, power line, wireless, and other transmission media. It provides more stable performance, better image clarity, longer transmission distance, and other significant advantages, easily meeting the requirement for long-distance transmission of HD signals without any converter.

It supposedly uses an unnamed compression algorithm to achieve 1080p60 over Ethernet in 18 Mb/s of bandwidth. There are also products with built-in wireless or Ethernet-over-powerline links.




http://www.hdbitt.org/what-is-hdbitt.html





UPDATE:
 

Jon (J-Tech Digital)
Jul 28, 12:47 CDT
Hello John,

Thank you for contacting J-Tech Digital. HDbitT is sort of proprietary. It was built on an already known protocol (TCP/IP), but does not work with other TCP/IP HDMI extenders. It also uses multicast (if your network switch does not support multicast, your switch will treat the traffic as broadcast). If you would like to know any other information please let me know.
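
Since the extender streams over IP multicast, a receiver on the same switch would join the group roughly as below. This is a generic Python sketch; the group address and port are hypothetical, as the device's actual values aren't documented here.

    import socket
    import struct

    GROUP, PORT = "239.255.42.42", 5004    # hypothetical group/port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Join the multicast group on all interfaces. An IGMP-snooping switch
    # forwards the stream only to ports that have joined; without snooping
    # it floods the traffic like broadcast, as the vendor notes above.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        packet, addr = sock.recvfrom(2048)
        print(len(packet), "bytes from", addr)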




Wednesday, June 07, 2017

EyeQue Personal Vision Tracker

http://www.eyeque.com/

A series of pixel-powered tests to determine your refraction measurement and generate your EyeGlass Numbers.



The EyeQue Personal Vision Tracker does not provide a prescription.

EyeQue provides users with a refractive tool which, when operated correctly, measures the user’s refractive correction. Results can be used to determine if corrective eyewear would be beneficial. Personal testing does not replace the need for regular eye health exams provided by an eye doctor.

Tuesday, June 06, 2017

SF WebRTC Meetup @Symphony - May 18, 2017

Fwd: Qt Newsletter: Qt 5.9 LTS is out, Lytro customer case and more


Qt Newsletter: Qt 5.9 is out, Lytro customer case and more!


Lytro customer case

Enabling the touch-control user interface for Lytro's groundbreaking new light field cameras

The Lytro Cinema and Lytro Immerge cameras use light-field technology to open up a whole new world of visual options for film, TV, and virtual reality applications.

Lytro chose the Qt cross-platform UI framework tool to rapidly create the user interface for the two cameras. The cameras' controls are integrated in standalone touchscreen tablets that not only allow users to set up a shot, but also start making image adjustments – even before post-production. Read the whole story.



Upcoming webinars


Qt for Embedded Device Creation and Boundary Devices, 7 June

Meet Qt 5.9, 13 June

Do's and Don'ts of QML, 28 June

 

Other events
SIGGRAPH 2017: SIGGRAPH is the world's largest, most influential annual conference and exhibition in computer graphics and interactive techniques.

Join us at booth #849 to learn how The Qt Company can help you with your needs.

Videos from Qt World Summit 2016

Videos from Qt World Summit 2016 are available. Come see the exciting keynotes from Kevin Kelly, LG, Ubuntu, AMD, Tableau, BEP Marine as well as strategy and developer talks.



Thursday, June 01, 2017

Kopin & Goertek Reveal Smallest VR Headset w/ 2Kx2K Res @ 120 Hz

The Smallest VR Headset - about Half the Size and Weight of Traditional devices - offers Cinema-like Image Quality



Kopin and Goertek are unveiling their groundbreaking VR headset reference design, codenamed Elf VR, at AWE. The new design aims to eliminate the barriers that have long stood in the way of delivering an effective VR experience, overcoming limitations related to uncomfortably bulky and heavy headset designs, low resolution, sluggish frame rates, and the annoying screen-door effect. The new Elf reference design features Kopin's "Lightning™" OLED microdisplay panel, offering an incredible 2048 x 2048 resolution in each eye (more than three times the resolution of the Oculus Rift or HTC Vive) at an unbelievable pixel density of 2,940 pixels per inch, five times more than standard displays.











FOR IMMEDIATE RELEASE
For more information contact:



KOPIN AND GOERTEK LAUNCH ERA OF SEAMLESS VIRTUAL REALITY WITH CUTTING-EDGE NEW REFERENCE DESIGN
The Smallest VR Headset - about Half the Size and Weight of Traditional devices - offers Film-like Images


SANTA CLARA, CA – June 1st, 2017 - Kopin Corporation (NASDAQ:KOPN) ("Kopin") today kicked off the era of Seamless Virtual Reality. On stage at Augmented World Expo, the Company showcased a groundbreaking reference design, codenamed Elf VR, for a new Head-Mounted Display created with its partner Goertek Inc. ("Goertek"), the world leader in VR headset manufacturing.

When brought to market, the new design will eliminate the barriers that have long stood in the way of delivering an effective VR experience. In fact, traditional attempts at VR headsets have been uncomfortably bulky and heavy, while low resolution and sluggish framerates caused screen door effect and nausea, making them usable for only tens of minutes at a time at best.

Kopin’s Lightning Display – A new approach to VR

To resolve these issues, Kopin's engineers drew on three decades of display experience to create the "Lightning™" OLED microdisplay panel, putting an end to the dreaded screen-door effect with 2048 x 2048 resolution in each eye, more than three times the resolution of the Oculus Rift or HTC Vive, at an unbelievable pixel density of 2,940 pixels per inch, five times more than standard displays.

Kopin first showcased its Lightning display at CES 2017, to overwhelming acclaim and a coveted CES Innovation Award. PC Magazine wrote that “the most advanced display I saw came from Kopin” and Anandtech said “Seeing is believing…I quite literally could not see anything resembling aliasing on the display even with a 10x loupe to try and look more closely.”

In addition, the panel runs at 120 Hz refresh rate, 33% faster than what traditional HMDs offer - for reduced motion blur, latency and flicker. As a result, nausea and fatigue are eliminated. Because Kopin’s panel is OLED-based and has integrated driver circuits, it requires much less power, battery life can be extended, and heat output is substantially reduced.

"It is now time for us to move beyond our conventional expectation of what virtual reality can be and strive for more," explained Kopin founder and CEO John Fan. "Great progress has been made this year, although challenges remain. This reference design, created with our partner Goertek, is a significant achievement. It is much lighter and fully 40% smaller than standard solutions, so that it can be worn for long periods without discomfort. At the same time, our OLED microdisplay panel achieves such high resolution and frame rate that it delivers a VR experience that truly approaches reality for markets including gaming, pro applications or film."

In addition to the game-changing new design, Kopin previously announced an alliance with BOE Technology Group Co., Ltd. (BOE) and Yunnan OLiGHTEK Opto-Electronic Technology Co., Ltd. for OLED display manufacturing. As part of that alliance, all parties will contribute up to $150 million to establish a high-volume, state-of-the-art facility to manufacture OLED microdisplays to support the growing AR and VR markets. The new facility, which would be the world's largest OLED-on-silicon manufacturing center, will be managed by BOE and is expected to be built in Kunming, Yunnan Province, China over the next two years. BOE is the world leader in display panels for mobile phones and tablets.

Technical specs:
  • Elf VR is equipped with Kopin "Lightning" OLED microdisplay panels, which feature 2048 x 2048 resolution per panel, providing binocular 4K image resolution at a 120 Hz refresh rate. Combining 4K ultra-high image resolution with the 120 Hz refresh rate, Elf VR provides very smooth images with excellent quality and effectively reduces the sense of vertigo.
  • The microdisplay panels are manufactured with advanced, ultra-precise processing techniques. Pixel density is increased by approximately 400% compared to conventional TFT-LCD, OLED, and AMOLED displays, and the screen size can be reduced to approximately 1/5 at a similar pixel resolution level.
  • Elf VR also adopts an advanced optical solution with a compact multi-lens design, which reduces the thickness of the optical module by around 60% and the total weight of the VR HMD by around 50%, significantly improving the user experience for long wearing sessions.
  • The reference design supports two novel optics solutions – 70 degrees FOV for film-like beauty or 100 degrees FOV for deep immersion.

UploadVR implodes


What makes this shocking is that this was a well-funded startup, to the tune of $5.75 million, and they allegedly had a kink room with a bed where employees had sex.

They were written up in Forbes and included in its 30 Under 30 list of the best and brightest entrepreneurs.

https://www.crunchbase.com/organization/upload-vr#/entity
Total Equity Funding: $5.75M in 2 Rounds from 15 Investors
Most Recent Funding: $4.5M Series A on May 16, 2017
Headquarters: San Francisco, California
Description: Upload is dedicated to accelerating the success of the virtual reality industry through inspiring community experiences.
Founders:
Categories: Digital Entertainment, Media and Entertainment, Virtual Reality
Website: http://upload.io


https://en.wikipedia.org/wiki/UploadVR


Friday, May 26, 2017

Fwd: custom paper vr glasses


---------- Forwarded message ----------
From: Wendy
Subject: Re:custom paper vr glasses

Dear  Friends,

Nice day.

This is Wendy from the Lionstar company in China. We are a professional manufacturer of 3D glasses.

We know you are in the market for custom paper VR glasses. Maybe you want to get more information on suppliers.

 

As an ISO-certified and GMC, SGS, and BV audited factory, we produce high-quality products with 100% environmentally friendly materials, modern production lines, and strict QC rules.

 

We have cooperated with Disney, Volkswagen, KFC, McDonald's, SONY, LG, Skyworth, and Lenovo…

 

If you are interested and need an electronic catalog or prices, please kindly email us at lionstaroo8@lionstar-optic.com.

Best regards

Wendy

 
 
Wendy
Shenzhen Lionstar Technology Co.,ltd
Tel: 0086-755-84866026 Mobile/whatsapp: 0086-15277402946
Address: 5th Floor, No.1 Factory, 4 Chuangye Road, Zhangbei, Xinlian Community, Longgang District, 518172, Shenzhen, China
Website: www.lionstar-optic.com  Email: sales008@lionstar-optic.com  Skype: 2206915735@qq.com

Wednesday, March 01, 2017

Color constancy: why a picture with no red pixels can look red.


I think this actually solves the issues I was having with machine vision systems in uncontrolled lighting conditions.


Thursday, February 09, 2017

Piet is a language that interprets graphic files as source code.



This is “Hello World” in Piet:


It could also be written this way:


Piet is an esoteric language that interprets graphic files as source code. Each block of color is interpreted according to its hue, its brightness, and its size. There’s nothing missing in either of these examples; there’s no written code hiding behind the pictures. If you load either of these graphics into a Piet interpreter, you’ll get the console output “Hello World”.

https://www.quora.com/What-programming-language-has-a-cool-Hello-World-program
http://www.dangermouse.net/esoteric/piet.html
https://esolangs.org/wiki/Piet
http://www.majcher.com/code/piet/Piet-Interpreter.html
https://www.bertnase.de/npiet/

It runs on a stack-based virtual machine, similar in spirit to the Java JVM, so it should be possible to compile real code into these images and run it.
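
The hue/lightness command table is small enough to sketch. Per the Piet spec, the operation executed when the pointer moves from one color block to the next depends only on the hue step and lightness step between the two colors (and "push" pushes the size of the block being exited). A minimal Python lookup, not a full interpreter:

    # Hue cycle and lightness order from the Piet specification.
    HUES = ["red", "yellow", "green", "cyan", "blue", "magenta"]
    LIGHT = ["light", "normal", "dark"]

    # OPS[hue_step][lightness_step]; hue step 0 / lightness step 0 is a no-op.
    OPS = [
        [None,        "push",        "pop"],
        ["add",       "subtract",    "multiply"],
        ["divide",    "mod",         "not"],
        ["greater",   "pointer",     "switch"],
        ["duplicate", "roll",        "in(number)"],
        ["in(char)",  "out(number)", "out(char)"],
    ]

    def op_for(prev, curr):
        # prev/curr: (hue, lightness) of two adjacent color blocks.
        dh = (HUES.index(curr[0]) - HUES.index(prev[0])) % 6
        dl = (LIGHT.index(curr[1]) - LIGHT.index(prev[1])) % 3
        return OPS[dh][dl]

    print(op_for(("red", "light"), ("red", "normal")))  # -> push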


Wednesday, February 08, 2017

Gradient-index (GRIN) optics

https://en.wikipedia.org/wiki/Gradient-index_optics

Gradient-index (GRIN) optics is the branch of optics covering optical effects produced by a gradual variation of the refractive index of a material. Such variations can be used to produce lenses with flat surfaces, or lenses that do not have the aberrations typical of traditional spherical lenses. Gradient-index lenses may have a refraction gradient that is spherical, axial, or radial.

History

In 1854, J. C. Maxwell suggested a lens whose refractive index distribution would allow for every region of space to be sharply imaged. Known as the Maxwell fisheye lens, it involves a spherical index function and would be expected to be spherical in shape as well (Maxwell, 1854). This lens, however, is impractical to make and has little usefulness, since only points on the surface and within the lens are sharply imaged and extended objects suffer from extreme aberrations.

In 1905, R. W. Wood used a dipping technique to create a gelatin cylinder with a refractive index gradient that varied symmetrically with the radial distance from the axis. Disk-shaped slices of the cylinder were later shown to have plane faces with a radial index distribution. He showed that even though the faces of the lens were flat, they acted like converging or diverging lenses depending on whether the index was decreasing or increasing with radial distance (Wood, 1905).

In 1964, a posthumous book by R. K. Luneburg was published in which he described a lens that focuses incident parallel rays of light onto a point on the opposite surface of the lens (Luneburg, 1964). This also limits the applications of the lens because it is difficult to use it to focus visible light; however, it has some usefulness in microwave applications.
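
For concreteness, the two classical designs above have standard closed-form index profiles: the Maxwell fisheye uses n(r) = n0 / (1 + (r/R)^2), and the Luneburg lens uses n(r) = sqrt(2 - (r/R)^2) for r ≤ R. A quick numpy sketch tabulating both (R and n0 chosen purely for illustration):

    import numpy as np

    R, n0 = 1.0, 2.0                  # lens radius and center index (illustrative)
    r = np.linspace(0.0, R, 6)        # radial positions from center to edge

    n_fisheye = n0 / (1.0 + (r / R) ** 2)     # Maxwell fisheye profile
    n_luneburg = np.sqrt(2.0 - (r / R) ** 2)  # Luneburg lens profile

    for ri, nf, nl in zip(r, n_fisheye, n_luneburg):
        print(f"r={ri:.2f}  fisheye n={nf:.3f}  Luneburg n={nl:.3f}")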