Friday, October 20, 2017

Weatherproof TTL Serial JPEG Camera with NTSC Video and IR LEDs

https://www.adafruit.com/product/613




TECHNICAL DETAILS


  • Metal housing size: 2" x 2" x 2.5"
  • Weight: 150 grams
  • Image sensor: 1/4 inch CMOS
  • CMOS pixels: 0.3 MP (VGA)
  • Pixel size: 5.6 µm × 5.6 µm
  • Output format: Standard JPEG/M-JPEG
  • White balance: Automatic
  • Exposure: Automatic
  • Gain: Automatic
  • Shutter: Electronic rolling shutter
  • SNR: 45 dB
  • Dynamic range: 60 dB
  • Max analog gain: 16 dB
  • Frame speed: 640×480 at 30 fps
  • Scan mode: Progressive scan
  • Viewing angle: 60 degrees
  • Monitoring distance: 10 meters, maximum 15 meters (adjustable)
  • Image size: VGA (640×480), QVGA (320×240), QQVGA (160×120)
  • Baud rate: Default 38400, maximum 115200
  • Current draw: 75 mA with IR LEDs off; an extra 250 mA with IR LEDs on
  • Operating voltage: DC +5 V
  • Communication: 3.3 V TTL (three wires: TX, RX, GND)
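For a sense of how simple the three-wire serial interface is, here is a minimal Python sketch that snaps a photo over the TTL link. It assumes the camera speaks the VC0706 command set that Adafruit's tutorials and libraries use for this module, and that TX/RX are wired to a USB-serial adapter; the port path, timeouts, and baud rate (the 38400 default above) are illustrative.

import serial  # pyserial

PORT = "/dev/ttyUSB0"   # assumption: camera wired to a USB-serial adapter

def cmd(ser, command, args=b""):
    # Every VC0706 command is 0x56, serial number 0x00, command byte, args.
    ser.write(bytes([0x56, 0x00, command, len(args)]) + args)
    return ser.read(16)

with serial.Serial(PORT, 38400, timeout=2) as ser:
    cmd(ser, 0x36, b"\x00")                      # FBUF_CTRL: freeze current frame
    reply = cmd(ser, 0x34, b"\x00")              # GET_FBUF_LEN
    length = int.from_bytes(reply[-4:], "big")   # JPEG size in the last 4 bytes

    # READ_FBUF: from offset 0, read `length` bytes with a 10 ms transfer delay.
    args = b"\x00\x0a" + bytes(4) + length.to_bytes(4, "big") + b"\x00\x0a"
    ser.write(bytes([0x56, 0x00, 0x32, len(args)]) + args)

    buf = b""                                    # JPEG framed by two 5-byte acks
    while len(buf) < length + 10:
        chunk = ser.read(length + 10 - len(buf))
        if not chunk:
            break
        buf += chunk
    with open("photo.jpg", "wb") as f:
        f.write(buf[5:5 + length])               # strip the leading ack

At 38400 baud a VGA JPEG takes on the order of ten seconds to transfer, which is why the spec's 115200 maximum matters for anything interactive.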

Monday, September 25, 2017

Gigabit Multimedia Serial Link (GMSL)

Gigabit Multimedia Serial Link (GMSL) serializer and deserializer (SerDes)

Right now this is a de facto standard built around Maxim's chips. It seems to be used almost exclusively in the self-driving car / automotive industry.

Power and data are carried over a single coax cable to a GMSL camera.





Maxim Integrated's MAX9272A and MAX9275 gigabit multimedia serial link (GMSL) serializer and deserializer (SerDes) chips used in the Surround View Kit are designed primarily for automotive video applications such as ADAS and infotainment. Maxim's GMSL SerDes technology provides a compression-free alternative to Ethernet, delivering 10x faster data rates, 50% lower cabling costs, and better EMC. The ADAS Starter Kit comes with 20 cm coax cables, but they can be exchanged for longer ones, as Maxim's GMSL chipsets drive 15 meters of coax or shielded twisted-pair (STP) cabling, thereby providing the margin required for robust and versatile designs.

https://www.maximintegrated.com/en/products/interface/high-speed-signaling/gmsl.html

http://shop.leopardimaging.com/product.sc?productId=283

https://www.macnica.eu/products/imi

https://www.renesas.com/en-us/solutions/automotive/adas/solution-kits/adas-view-solution-kit.html

https://www.renesas.com/en-us/solutions/automotive/adas/solution-kits/adas-surround-view-kit.html

Friday, August 04, 2017

JPEG2000 GPU Codec toolkit

http://comprimato.com/


Ultra-high-speed compression and a life-like viewing experience start here. A JPEG2000 codec for GPU and CPU: Comprimato's JPEG2000 GPU Codec toolkit helps Media & Entertainment and Geospatial Imaging technology companies keep it real, with more accurate decision-making power.

Camera in a furniture screw







CIA Hacking Tool "Dumbo" Hack WebCams & Corrupt Video Recordings

https://gbhackers.com/cia-hacking-tool-dumba-hack-webcams/




Saturday, July 29, 2017

Michael Ossmann Pulls DSSS Out of Nowhere | Hackaday

http://hackaday.com/2017/07/29/michael-ossmann-pulls-dsss-out-of-nowhere/

Altspace VR closes

https://www.wired.com/story/altspace-vr-closes/


Altspace tweeted the unexpected news: "It is with tremendously heavy hearts that we must let you know that we are closing down AltspaceVR very soon." The site had been unable to close its latest round of funding, it elaborated in a blog post, and will be shutting down next week.

Friday, July 28, 2017

The world's only single-lens Monocentric wide-FOV light field camera.


The operative word here is monocentric.

Monocentric

[Monocentric eyepiece diagram]
A monocentric eyepiece is an achromatic triplet lens with two pieces of crown glass cemented on both sides of a flint glass element. The elements are thick and strongly curved, and their surfaces have a common center, giving it the name "monocentric". It was invented by Hugo Adolf Steinheil around 1883. This design, like the solid eyepiece designs of Robert Tolles, Charles S. Hastings, and E. Wilfred Taylor, is free from ghost reflections and gives a bright, high-contrast image, a desirable feature when it was invented (before anti-reflective coatings).


A Wide-Field-of-View Monocentric Light Field Camera
Donald G. Dansereau, Glenn Schuster, Joseph Ford, and Gordon Wetzstein, Stanford University, Department of Electrical Engineering

http://www.computationalimaging.org/wp-content/uploads/2017/04/LFMonocentric.pdf

Abstract

Light field (LF) capture and processing are important in an expanding range of computer vision applications, offering rich textural and depth information and simplification of conventionally complex tasks. Although LF cameras are commercially available, no existing device offers wide field-of-view (FOV) imaging. This is due in part to the limitations of fisheye lenses, for which a fundamentally constrained entrance pupil diameter severely limits depth sensitivity. In this work we describe a novel, compact optical design that couples a monocentric lens with multiple sensors using microlens arrays, allowing LF capture with an unprecedented FOV. Leveraging capabilities of the LF representation, we propose a novel method for efficiently coupling the spherical lens and planar sensors, replacing expensive and bulky fiber bundles. We construct a single-sensor LF camera prototype, rotating the sensor relative to a fixed main lens to emulate a wide-FOV multi-sensor scenario. Finally, we describe a processing tool chain, including a convenient spherical LF parameterization, and demonstrate depth estimation and post-capture refocus for indoor and outdoor panoramas with 15 × 15 × 1600 × 200 pixels (72 MPix) and a 138° FOV.


---------

Designing a 4D camera for robots

Stanford engineers have developed a 4D camera with an extra-wide field of view. They believe this camera can be better than current options for close-up robotic vision and augmented reality.

Stanford has created a 4D camera that can capture 140 degrees of information. The new technology would be a perfect addition to robots and autonomous vehicles. The 4D camera relies on light field photography, which is what allows it to gather information over such a wide field of view.



A light field camera, or standard plenoptic camera, works by capturing information about the light field emanating from the scene: it measures both the intensity of the light and the direction the light rays travel. Traditional photography captures only the light intensity.



The researchers proudly call their design the "first-ever single-lens, wide field of view, light field camera." The camera uses the information it has gathered about the light at the scene, in combination with the 2D image, to create the 4D image.
This means the photo can be refocused after the image has been captured. The researchers cleverly use the analogy of the difference between looking out a window and through a peephole to describe the difference between traditional photography and the new technology. They say, "A 2D photo is like a peephole because you can't move your head around to gain more information about depth, translucency or light scattering. Looking through a window, you can move and, as a result, identify features like shape, transparency and shininess."
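Post-capture refocus like this is easy to sketch. Below is a minimal shift-and-sum refocus over a toy 4D light field array, the classic plenoptic rendering trick, not the Stanford group's actual pipeline; the L[u, v, s, t] layout and the alpha parameterization are my assumptions.

import numpy as np

def refocus(L, alpha):
    # Shift each sub-aperture view in proportion to its (u, v) offset from the
    # center, then average: points at the chosen depth add up sharply.
    U, V, S, T = L.shape
    uc, vc = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round((u - uc) * (1 - 1 / alpha)))
            dv = int(round((v - vc) * (1 - 1 / alpha)))
            out += np.roll(L[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# Example: a random 15 x 15 grid of 200 x 200 views, refocused at two depths.
lf = np.random.rand(15, 15, 200, 200)
near, far = refocus(lf, 0.8), refocus(lf, 1.2)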
The 4D camera’s unique qualities make it perfect for use with robots.  For instance, images captured by a search and rescue robot could be zoomed in and refocused to provide important information to base control. The imagery produced by the 4D camera could also have application in augmented reality as the information rich images could help with better quality rendering.  
The 4D camera is still at a proof-of-concept stage, and too big for any of the possible future applications. But now that the technology is at a working stage, smaller and lighter versions can be developed. The researchers explain the motivation for creating a camera specifically for robots. Donald Dansereau, a postdoctoral fellow in electrical engineering, explains: "We want to consider what would be the right camera for a robot that drives or delivers packages by air. We're great at making cameras for humans but do robots need to see the way humans do? Probably not."
The research will be presented at the computer vision conference CVPR 2017 on July 23.
http://news.stanford.edu/press-releases/2017/07/21/new-camera-impro-virtual-reality/

First images from the world's only single-lens wide-FOV light field camera.

From CVPR 2017 paper "A Wide-Field-of-View Monocentric Light Field Camera", 
   http://dgd.vision/Projects/LFMonocentric/

This parallax pan scrolls through a 138-degree, 72-MPix light field captured using our optical prototype. Shifting the virtual camera position over a circular trajectory during the pan reveals the parallax information captured by the LF.

There is no post-processing or alignment between fields; this is the raw light field as measured by the camera.



Other related work:

http://spie.org/newsroom/6666-panoramic-full-frame-imaging-with-monocentric-lenses-and-curved-fiber-bundles

Monocentric lens-based multi-scale optical systems and methods of use US 9256056 B2





High Definition, Low Delay, SDR-Based Video Transmission in UAV Applications


Software-defined radio (SDR) is a radio communication system where components that have typically been implemented in hardware (e.g. mixers, filters, amplifiers, modulators/demodulators, detectors, etc.) are instead implemented by means of software on a personal computer or embedded system. While the concept of SDR is not new, the rapidly evolving capabilities of digital electronics render practical many processes that used to be only theoretically possible.

https://en.wikipedia.org/wiki/Software-defined_radio


SDR changes everything when it comes to radio and is the future of video.

We are having a meetup every Saturday at 4 PM at the Hacker Dojo in Santa Clara, CA.

https://www.meetup.com/Fly-by-SDR-Hacker-Club-Prep-for-Darpa-SDR-Hackfest/



High Definition, Low Delay, SDR-Based Video Transmission in UAV Applications


Abstract

Integrated RF agile transceivers are not only widely employed in software-defined radio (SDR) architectures in cellular telephone base stations, such as multiservice distributed access systems (MDAS) and small cells, but also for wireless HD video transmission for industrial, commercial, and military applications, such as unmanned aerial vehicles (UAVs). This article will examine a wideband wireless video signal chain implementation using the AD9361/AD9364 integrated transceiver ICs, the amount of data transmitted, the corresponding RF occupied signal bandwidth, the transmission distance, and the transmitter's power. It will also describe the implementation of the PHY layer of OFDM and present frequency hopping time test results for avoiding RF interference. Finally, we will discuss the advantages and disadvantages of Wi-Fi versus the RF agile transceiver in wideband wireless applications.

http://www.analog.com/en/analog-dialogue/articles/high-definition-low-delay-sdr-based-video-transmission-in-uav-applications.html
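To make the "PHY layer of OFDM" part concrete, here is a minimal baseband OFDM modulator in Python/NumPy. It is a hedged textbook sketch: the 64-point FFT, 16-sample cyclic prefix, and QPSK mapping are generic illustrative choices, not the parameters of the Analog Devices reference design.

import numpy as np

N_FFT, N_CP = 64, 16          # subcarrier count and cyclic-prefix length (assumed)

def ofdm_symbol(bits):
    # Map bit pairs to unit-power QPSK points, one per subcarrier.
    assert len(bits) == 2 * N_FFT
    b = np.asarray(bits).reshape(-1, 2)
    qpsk = ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)
    # IFFT to time domain (scaled to keep unit average power), then
    # prepend the last N_CP samples as the cyclic prefix.
    time = np.fft.ifft(qpsk) * np.sqrt(N_FFT)
    return np.concatenate([time[-N_CP:], time])

sym = ofdm_symbol(np.random.randint(0, 2, 2 * N_FFT))
print(len(sym))               # 80 complex baseband samples per symbol

A receiver reverses the steps: drop the prefix, FFT, and slice the QPSK points back to bits. Frequency hopping, as in the article, just retunes the transceiver's LO between bursts.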


Tuesday, July 25, 2017

A New Sampling Algorithm Could Eliminate Sensor Saturation (scitechdaily.com)

https://science.slashdot.org/story/17/07/22/0537231/a-new-sampling-algorithm-could-eliminate-sensor-saturation


Baron_Yam shared an article from Science Daily: Researchers from MIT and the Technical University of Munich have developed a new technique that could lead to cameras that can handle light of any intensity, and audio that doesn't skip or pop. Virtually any modern information-capture device -- such as a camera, audio recorder, or telephone -- has an analog-to-digital converter in it, a circuit that converts the fluctuating voltages of analog signals into strings of ones and zeroes. Almost all commercial analog-to-digital converters (ADCs), however, have voltage limits. If an incoming signal exceeds that limit, the ADC either cuts it off or flatlines at the maximum voltage. This phenomenon is familiar as the pops and skips of a "clipped" audio signal or as "saturation" in digital images -- when, for instance, a sky that looks blue to the naked eye shows up on-camera as a sheet of white.

Last week, at the International Conference on Sampling Theory and Applications, researchers from MIT and the Technical University of Munich presented a technique that they call unlimited sampling, which can accurately digitize signals whose voltage peaks are far beyond an ADC's voltage limit. The consequence could be cameras that capture all the gradations of color visible to the human eye, audio that doesn't skip, and medical and environmental sensors that can handle both long periods of low activity and the sudden signal spikes that are often the events of interest.

One of the paper's authors explains that "The idea is very simple. If you have a number that is too big to store in your computer memory, you can take the modulo of the number."
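The modulo idea is easy to demonstrate numerically. The toy below folds a signal that swings far beyond the ADC range, then recovers it from the folded samples alone. It is a simplified sketch of the principle, assuming the signal is oversampled enough that consecutive samples never differ by more than half the ADC range; the paper's actual guarantees and reconstruction algorithm are more sophisticated.

import numpy as np

LIMIT = 1.0                                    # ADC range: values stored mod LIMIT

t = np.linspace(0, 1, 2000)
signal = 3.5 * np.sin(2 * np.pi * 3 * t)       # peaks at 3.5x the ADC limit
folded = np.mod(signal, LIMIT)                 # all a modulo ADC would record

# Unwrap: map each sample-to-sample difference back into [-LIMIT/2, LIMIT/2),
# then integrate to rebuild the original waveform.
d = np.diff(folded)
d = np.mod(d + LIMIT / 2, LIMIT) - LIMIT / 2
recovered = folded[0] + np.concatenate([[0.0], np.cumsum(d)])

print(np.max(np.abs(recovered - signal)))      # ~1e-12: exact up to float error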

Sunday, June 18, 2017

Road to the Holodeck: Lightfields and Volumetric VR


It's come and gone, but it looked to be very interesting.

https://www.eventbrite.com/e/road-to-the-holodeck-lightfields-and-volumetric-vr-tickets-34087827610#

What's a lightfield, you ask?

Several technologies are required for VR's holy grail: the fabled holodeck. We already have graphical VR experiences that let us move through volumetric spaces, such as video games. And we have photorealistic media that lets us look around a 360-degree scene from a fixed position (a.k.a. head tracking).
But what about the best of both worlds?
We're talking volumetric spaces in which you can move around, but are also photorealistic. In addition to things like positional tracking and lots of processing horsepower, the heart of this vision is lightfields. They define how photons hit our eyes and render what we see.
Because it's a challenge to capture photorealistic imagery from every possible angle in a given space -- as our eyes do in real reality -- the art of lightfields in VR involves extrapolating many vantage points once a fixed point is captured. And that requires clever algorithms, processing, and a whole lot of data.

Friday, June 16, 2017

What is HDbitT?



A new standard protocol for digital connectivity, and the next-generation solution for AV over IP.

HDbitT stands for High-Definition Digital bit Transmission Technology. It is a new standard protocol for digital connectivity, specialized in professional audio/video over IP delivery and transmission.






HDbitT enables high-definition audio/video up to 4Kx2K@60Hz to be transmitted via network cable, optical fiber, power line, wireless, and other transmission mediums. It provides more stable performance, better image clarity, longer transmission distance, and other significant advantages, easily meeting the requirement for long-distance transmission of HD signals without any converter.

It supposedly uses an unnamed compression algorithm to achieve 1080p/60 over Ethernet in 18 Mb/s of bandwidth. There are also products with built-in wireless or Ethernet-over-powerline links.
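A quick sanity check on that bandwidth claim (my arithmetic, not anything from HDbitT's documentation): raw 1080p/60 at 24 bits per pixel is 1920 x 1080 x 60 x 24 ≈ 2.99 Gb/s, so fitting it into 18 Mb/s implies roughly 165:1 compression -- aggressive, but well within reach of a modern interframe video codec.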




http://www.hdbitt.org/what-is-hdbitt.html





UPDATE:
 

Jon (J-Tech Digital)
Jul 28, 12:47 CDT
Hello John,

Thank you for contacting J-Tech Digital. HDbitT is sort of proprietary. It was built on an already known protocol (TCP/IP), but does not work with other TCP/IP HDMI extenders. It also uses multicast (if your network switch does not support multicast, your switch will treat the traffic as broadcast). If you would like to know any other information please let me know.
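The multicast detail matters in practice: an HDbitT receiver is essentially subscribing to a UDP multicast group, and a switch without IGMP snooping floods that traffic out every port. Here is a minimal sketch of joining an IPv4 multicast group with Python's standard socket API; the group address and port are made-up placeholders, not documented HDbitT values.

import socket
import struct

GROUP, PORT = "239.255.42.42", 5004   # placeholders; HDbitT's real values are undocumented

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP takes the group address plus the local interface (0.0.0.0 = any).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(65536)
    print(f"{len(data)} bytes from {addr}")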




Wednesday, June 07, 2017

EyeQue Personal Vision Tracker

http://www.eyeque.com/

A series of pixel-powered tests to determine your refraction measurement and generate your EyeGlass Numbers.



The EyeQue Personal Vision Tracker does not provide a prescription.

EyeQue provides users with a refractive tool which, when operated correctly, measures the user’s refractive correction. Results can be used to determine if corrective eyewear would be beneficial. Personal testing does not replace the need for regular eye health exams provided by an eye doctor.

Tuesday, June 06, 2017

SF WebRTC Meetup @Symphony - May 18, 2017

Fwd: Qt Newsletter: Qt 5.9 LTS is out, Lytro customer case and more


Qt Newsletter: Qt 5.9 is out, Lytro customer case and more!


Lytro customer case

Enabling the touch-control user interface for Lytro's groundbreaking new light field cameras

The Lytro Cinema and Lytro Immerge cameras use light-field technology to open up a whole new world of visual options for film, TV, and virtual reality applications.

Lytro chose the Qt cross-platform UI framework tool to rapidly create the user interface for the two cameras. The cameras' controls are integrated in standalone touchscreen tablets that not only allow users to set up a shot, but also start making image adjustments – even before post-production. Read the whole story.



Upcoming webinars


Qt for Embedded Device Creation and Boundary Devices, 7 June

Meet Qt 5.9, 13 June

Do's and Don'ts of QML, 28 June

 

Other events

SIGGRAPH 2017 - SIGGRAPH is the world's largest, most influential annual conference and exhibition in computer graphics and interactive techniques.

Join us at booth #849 to learn how The Qt Company can help you with your needs.

Videos from Qt World Summit 2016

Videos from Qt World Summit 2016 are available. Come see the exciting keynotes from Kevin Kelly, LG, Ubuntu, AMD, Tableau, BEP Marine as well as strategy and developer talks.



Thursday, June 01, 2017

Kopin & Goertek Reveal Smallest VR Headset w/ 2Kx2K Res @ 120 Hz

The Smallest VR Headset - about Half the Size and Weight of Traditional devices - offers Cinema-like Image Quality



Kopin and Goertek are unveiling their groundbreaking VR headset reference design, codenamed Elf VR, at AWE. The new design will eliminate the barriers that have long stood in the way of delivering an effective VR experience, overcoming limitations related to uncomfortably bulky and heavy headset designs, low resolution, sluggish frame rates, and the annoying screen-door effect. The new Elf reference design features Kopin's "Lightning™" OLED microdisplay panel, offering an incredible 2048 x 2048 resolution in each eye - more than three times the resolution of Oculus Rift or HTC Vive - at an unbelievable pixel density of 2,940 pixels per inch, five times more than standard displays.


In addition, the panel runs at 120 Hz refresh rate, 33% faster than what traditional HMDs offer - for reduced motion blur, latency and flicker. As a result, nausea and fatigue are eliminated. Because Kopin’s panel is OLED-based and has integrated driver circuits, it requires much less power, battery life can be extended, and heat output is substantially reduced.









FOR IMMEDIATE RELEASE



KOPIN AND GOERTEK LAUNCH ERA OF SEAMLESS VIRTUAL REALITY WITH CUTTING-EDGE NEW REFERENCE DESIGN
The Smallest VR Headset - about Half the Size and Weight of Traditional devices - offers Film-like Images


SANTA CLARA, CA - June 1st, 2017 - Kopin Corporation (NASDAQ:KOPN) ("Kopin") today kicked off the era of Seamless Virtual Reality. On stage at Augmented World Expo, the Company showcased a groundbreaking reference design, codenamed Elf VR, for a new Head-Mounted Display created with its partner Goertek Inc. ("Goertek"), the world leader in VR headset manufacturing.

When brought to market, the new design will eliminate the barriers that have long stood in the way of delivering an effective VR experience. In fact, traditional attempts at VR headsets have been uncomfortably bulky and heavy, while low resolution and sluggish frame rates caused the screen-door effect and nausea, making them usable for only tens of minutes at a time, at best.

Kopin’s Lightning Display – A new approach to VR

To resolve these issues, Kopin's engineers utilized three decades of display experience to create the "Lightning™" OLED microdisplay panel, putting an end to the dreaded screen-door effect with 2048 x 2048 resolution in each eye - more than three times the resolution of Oculus Rift or HTC Vive - at an unbelievable pixel density of 2,940 pixels per inch, five times more than standard displays.

Kopin first showcased its Lightning display at CES 2017, to overwhelming acclaim and a coveted CES Innovation Award. PC Magazine wrote that “the most advanced display I saw came from Kopin” and Anandtech said “Seeing is believing…I quite literally could not see anything resembling aliasing on the display even with a 10x loupe to try and look more closely.”

In addition, the panel runs at 120 Hz refresh rate, 33% faster than what traditional HMDs offer - for reduced motion blur, latency and flicker. As a result, nausea and fatigue are eliminated. Because Kopin’s panel is OLED-based and has integrated driver circuits, it requires much less power, battery life can be extended, and heat output is substantially reduced.

“It is now time for us to move beyond our conventional expectation of what virtual reality can be and strive for more,” explained Kopin founder and CEO John Fan. “Great progress has been made this year, although challenges remain. This reference design, created with our partner Goertek, is a significant achievement. It is much lighter and fully 40% smaller than standard solutions, so that it can be worn for long periods without discomfort. At the same time, our OLED microdisplay panel achieves such high resolution and frame rate that it delivers a VR experience that truly approaches reality for markets including gaming, pro applications or film.”

In addition to the game-changing new design, Kopin previously announced an alliance with BOE Technology Group Co. Ltd. (BOE) and Yunnan OLiGHTEK Opto-Electronic Technology Co., Ltd. for OLED display manufacturing. As part of that alliance, all parties will contribute up to $150 million to establish a high-volume, state-of-the-art facility to manufacture OLED microdisplays to support the growing AR and VR markets. The new facility, which would be the world's largest OLED-on-silicon manufacturing center, will be managed by BOE and is expected to be built in Kunming, Yunnan Province, China over the next two years. BOE is the world leader in display panels for mobile phones and tablets.

Technical specs:
  • Elf VR is equipped with Kopin "Lightning" OLED microdisplay panels, which feature 2048 x 2048 resolution per panel, providing binocular 4K image resolution at a 120 Hz refresh rate. Combining 4K ultra-high image resolution with the 120 Hz refresh rate, Elf VR provides very smooth images with excellent quality and effectively reduces the sense of vertigo.
  • The microdisplay panels are manufactured with advanced, ultra-precise processing techniques. Their pixel density is increased by approximately 400% compared to conventional TFT-LCD, OLED, and AMOLED displays, and the screen size can be reduced to approximately 1/5 at a similar pixel resolution.
  • Elf VR also adopts an advanced optical solution with a compact multi-lens design, which reduces the thickness of the optical module by around 60% and the total weight of the VR HMD by around 50%, significantly improving the user experience for long wearing sessions.
  • The reference design supports two novel optics solutions: 70 degrees FOV for film-like beauty, or 100 degrees FOV for deep immersion.

UploadVR implodes


What makes this shocking is that this was a well-funded startup, to the tune of $5.75 million, and they reportedly had a kink room with a bed, and employees having sex.

They were written up in Forbes as part of the 30 Under 30 list of the best and brightest entrepreneurs.

https://www.crunchbase.com/organization/upload-vr#/entity
Total equity funding: $5.75M in 2 rounds from 15 investors
Most recent funding: $4.5M Series A on May 16, 2017
Headquarters: San Francisco, California
Description: Upload is dedicated to accelerating the success of the virtual reality industry through inspiring community experiences.
Categories: Digital Entertainment, Media and Entertainment, Virtual Reality
Website: http://upload.io


https://en.wikipedia.org/wiki/UploadVR


Friday, May 26, 2017

Fwd: custom paper vr glasses


---------- Forwarded message ----------
From: Wendy
Subject: Re:custom paper vr glasses

Dear Friends,

Nice day.

This is Wendy from the Lionstar company in China. We are a professional manufacturer of 3D glasses.

We know you are in the market for custom paper VR glasses, so maybe you would like more information about suppliers.

As an ISO-certificated and GMC-, SGS-, and BV-audited factory, we produce high-quality products with 100% environmentally friendly materials, modern production lines, and strict QC rules.

We have now cooperated with Disney, Volkswagen, KFC, McDonald's, SONY, LG, Skyworth and Lenovo…

If you are interested and need an electronic catalog or prices, please email us at lionstaroo8@lionstar-optic.com.

Best regards

Wendy

 
 
Wendy
Shenzhen Lionstar Technology Co., Ltd.
Tel: 0086-755-84866026  Mobile/WhatsApp: 0086-15277402946
Address: 5th Floor, No. 1 Factory, 4 Chuangye Road, Zhangbei, Xinlian Community, Longgang District, 518172, Shenzhen, China
Website: www.lionstar-optic.com  Email: sales008@lionstar-optic.com  Skype: 2206915735@qq.com

Wednesday, March 01, 2017

Color constancy: why a picture with no red pixels can look red.


I think this actually solves the issues I was having with machine vision systems in uncontrolled lighting conditions.


Thursday, February 09, 2017

Piet is a language that interprets graphic files as source code.



This is “Hello World” in Piet:


It could also be written this way:


Piet is an esoteric language that interprets graphic files as source code. Each block of color is interpreted according to its hue, its brightness, and its size. There’s nothing missing in either of these examples; there’s no written code hiding behind the pictures. If you load either of these graphics into a Piet interpreter, you’ll get the console output “Hello World”.

https://www.quora.com/What-programming-language-has-a-cool-Hello-World-program
http://www.dangermouse.net/esoteric/piet.html
https://esolangs.org/wiki/Piet
http://www.majcher.com/code/piet/Piet-Interpreter.html
https://www.bertnase.de/npiet/

It runs on a stack-based virtual machine similar to the Java JVM, so it should be possible to compile real code into these images and run them.
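Piet's core dispatch rule is compact enough to sketch. The operation executed is determined by how far the hue and the lightness step between the color block the pointer exits and the one it enters. Below is a hedged Python sketch of just that lookup, following the command table in the spec at dangermouse.net; it is not a full interpreter (no block-size tracking, pointer direction, or white/black handling), and the helper names are my own.

HUES = ["red", "yellow", "green", "cyan", "blue", "magenta"]   # hue cycle
LIGHT = ["light", "normal", "dark"]                            # lightness cycle

OPS = [  # OPS[hue_change][lightness_change], per the Piet specification
    [None,        "push",        "pop"],
    ["add",       "subtract",    "multiply"],
    ["divide",    "mod",         "not"],
    ["greater",   "pointer",     "switch"],
    ["duplicate", "roll",        "in(number)"],
    ["in(char)",  "out(number)", "out(char)"],
]

def op(prev, cur):
    # prev and cur are (hue, lightness) pairs, e.g. ("red", "light").
    dh = (HUES.index(cur[0]) - HUES.index(prev[0])) % 6
    dl = (LIGHT.index(cur[1]) - LIGHT.index(prev[1])) % 3
    return OPS[dh][dl]

print(op(("red", "light"), ("red", "normal")))   # -> push (of the block's size)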


Wednesday, February 08, 2017

Gradient-index (GRIN) optics

https://en.wikipedia.org/wiki/Gradient-index_optics

Gradient-index (GRIN) optics is the branch of optics covering optical effects produced by a gradual variation of the refractive index of a material. Such variations can be used to produce lenses with flat surfaces, or lenses that do not have the aberrations typical of traditional spherical lenses. Gradient-index lenses may have a refraction gradient that is spherical, axial, or radial.

History

In 1854, J. C. Maxwell suggested a lens whose refractive index distribution would allow for every region of space to be sharply imaged. Known as the Maxwell fisheye lens, it involves a spherical index function and would be expected to be spherical in shape as well (Maxwell, 1854). This lens, however, is impractical to make and has little usefulness, since only points on the surface and within the lens are sharply imaged, and extended objects suffer from extreme aberrations.

In 1905, R. W. Wood used a dipping technique to create a gelatin cylinder with a refractive index gradient that varied symmetrically with the radial distance from the axis. Disk-shaped slices of the cylinder were later shown to have plane faces with a radial index distribution. He showed that even though the faces of the lens were flat, they acted as converging or diverging lenses depending on whether the index was decreasing or increasing relative to the radial distance (Wood, 1905).

In 1964, a posthumous book by R. K. Luneburg was published in which he described a lens that focuses incident parallel rays of light onto a point on the opposite surface of the lens (Luneburg, 1964). This limits the applications of the lens because it is difficult to use it to focus visible light; however, it has some usefulness in microwave applications.
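For intuition about why a flat-faced GRIN element focuses light: with the classic parabolic profile, paraxial rays oscillate sinusoidally about the axis, so a quarter-pitch rod acts as a lens. A small numeric sketch follows, with illustrative constants and my own variable names, not anything from the article.

import numpy as np

n0, A = 1.5, 0.25        # axial index and gradient constant (1/mm^2); illustrative
r0, slope0 = 0.5, 0.0    # ray height (mm) and slope at the entrance face

def n(r):
    # Parabolic radial profile: n(r) = n0 * (1 - (A/2) r^2)
    return n0 * (1 - 0.5 * A * r ** 2)

# The paraxial ray equation in this profile, r'' = -A r, gives a sinusoidal path.
w = np.sqrt(A)
z = np.linspace(0, 30, 301)                              # distance along rod (mm)
r = r0 * np.cos(w * z) + (slope0 / w) * np.sin(w * z)    # ray height vs. z

print(f"index on axis: {n0}, at r0: {n(r0):.4f}")
print(f"ray pitch: {2 * np.pi / w:.2f} mm (quarter pitch = {np.pi / (2 * w):.2f} mm)")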

Saturday, January 28, 2017

THE EIDOPHOR TELEVISION SYSTEM


I have shared an article from Hack-a-day on this earlier. 


Then I ran across this great little article.


THE EIDOPHOR TELEVISION SYSTEM

Note: The information presented here is based on articles or papers by the following: E. Labin, S.M.P.T.E. Journal, April 1950; Earl I. Sponable, S.M.P.T.E. Journal, April 1953; E. Baumann, S.M.P.T.E. Journal, April 1953; and the Eidophor training manual and brochures, supplied by Bernhard Merk, Switzerland.
 
From the earliest days of television, large theater-size screen images were a goal for most, if not all, of the television pioneers. Some companies in the movie industry, such as Twentieth Century Fox, were also very interested at the time, because this might provide additional income from their theaters. So they actively promoted and supported the development of suitable systems that might accomplish large-screen theater television.
The Eidophor system was an example of this and it was in use extensively from the early 50s, until well into the 80s. EIDOPHOR is a Greek word combination meaning "Image Bearer". Invented in 1939, the actual development work began in the early 40s in Zurich, Switzerland, under the direction of Professor Dr. Fritz Fisher. After considering the many problems, he soon came to the conclusion that a very powerful arc light source would be necessary to provide sufficient brightness on a theater size screen. The next problem was how he could efficiently modulate such an intense source of light.
Dr. Fisher reviewed all of the light modulators previously used, particularly the Kerr cell, as was used by Dr. Alexanderson in his large-screen television work. He found the efficiency of this cell to be much too low for his purposes and so continued his search. Undoubtedly, Dr. Fisher would also have considered the Jeffree cell, used in the Scophony theater systems. Unlike the Kerr cell, which exhibits no memory characteristics whatsoever, the Jeffree cell was able to store as many as 200 to 300 picture elements, providing a significant increase in image brightness on the screen. But even this amount of improvement was not enough to satisfy Dr. Fisher's goal for brightness.
Dr. Fisher went on to review some work done by Foucault on the optics of telescopes, and also by Toepler, who had described an optical system referred to as the "Toepler Schlieren" (in German, Schlieren means "streaks" or "striae").
His earliest design based on their work was similar to the drawing shown here on the right. This is a light control system based on the phase contrast principle and is a variation of the Schlieren optical arrangement.
The arc lamp at A, together with the condenser lens B, produces a uniform illumination of the plane C. A light-modulating or controlling medium is placed in this plane, between the bar-and-slit systems at F and G. A field lens is placed so that it images bar system F upon the opaque bars of system G. The image point at H is located in the image plane C of the objective lens D. This projection lens would therefore image the point H at point H' on the projection screen E.
But this cannot happen, because the light beams are being completely blocked off by the bars of system G. It should be noted that the incident illumination of every image point at H is blocked by the strips of the bar system G. However, if a control medium of some sort is located at the image plane C and could be deformed in a suitable way, diffraction of the light beams would occur. Those diffracted parts of the beams could pass through the slits in system G and on to the projection screen as image-forming light.
The next drawing, here on the right, shows a control medium, consisting of a liquid oil film of approximately 0.02 mm thickness at the image plane C. For the sake of this illustration, consider this oil film as being supported on a thin, flat glass plate.
This layer of liquid is called the Eidophor liquid. It takes the place of an emulsion on the usual motion picture film in the film gate, as one would find in the usual projector. If the layer of Eidophor oil is of uniform thickness and homogeneous, light passing through the oil film will not be diffracted anywhere in the image plane C and all of the light passing will be blocked by the bars of G. No light can reach the screen.
The next step is to create a form of optical inhomogeneity in the oil film, point by point, that will diffract the light beam past the bars and through the slits of system G. This is done with a beam of electrons from an electron gun, scanning an approximately 3 by 4 inch raster directly on the oil layer. The electron gun, operating at a 15-kilovolt level, deposits electric charges point by point, corresponding to the scanned picture. These charges cause minute wave-shaped corrugations in the surface of the oil layer. Where the oil surface is corrugated, as at H1 on the surface C in the drawing, those light rays passing through this point are diffracted and are no longer blocked at G, instead passing through the slits and on to the screen. The more the Eidophor surface is distorted, the more intense is the light reaching the screen. A brightness range of 1:300 has been obtained.
The drawing to the right shows the relationship between the brightness A along a line of the image and the amount of the wave-shaped deformation B in the surface of the Eidophor liquid. The amount of deformation on the Eidophor surface is proportional to the desired brightness level for the corresponding point on the screen.
The Eidophor principle of modulation is for the cathode beam to scan the Eidophor surface, controlled by a video signal in such a way that the resulting deformations are proportional to the instantaneous values of the controlling signal. The actual controlling element is the spot size of the electron beam. The smaller the spot size is, the deeper the deformation of the Eidophor will be, causing more diffraction of light to take place, in turn producing a brighter spot on the screen.
The wave-shaped deformations are caused by electrostatic forces in the oil film, due to the electrical charges placed on the Eidophor surface by the scanning electron gun. The wavelength of these deformations is constant, but their height is proportional to the level of the video signal. As the illumination of the image points on the screen are always proportional to the height of the waves at the corresponding point on the Eidophor, the distribution of light over the projection screen corresponds to the video signal and thus to the object being reproduced.
The deformation commences at the moment that the electron beam scans a particular point of the image. By a suitable choice of the conductivity and viscosity of the Eidophor oil, the deformation can be preserved for a considerable part of the image scanning period, so that it disappears shortly before the next scan of that point. In the ideal case, the deformation of the oil should remain for the duration of one picture period, but then decay as quickly as possible. In practice, 70% of the ideal is achieved. Since the screen illumination is maintained for this part of the scanning period, a substantial increase in screen brightness occurs due to this light storage effect.
After considerable testing, the results were encouraging. A simplified, compact prototype model was developed, illustrated in the figure below. Notice that it uses only one bar-and-slit assembly, which is reflective and actually does double duty.
Another change in this prototype was the addition of a color wheel, developed especially for the Eidophor system by the Columbia Broadcasting System, using its field sequential color knowledge and techniques. But before this unit could be completed, Dr. Fisher had died and his work was carried on by his associates, directed by Professor Baumann and Dr. Thiemann.
Since there is an electron gun in this system, it might be well to point out that the electron gun and the Eidophor oil can only operate in a vacuum. The Eidophor oil characteristics are subject to change with temperature, so the system includes a means to stabilize the temperature of the spherical mirror and the Eidophor oil in contact with it. This is accomplished with a small external refrigeration system.
Another view of the Eidophor projector is given here. It shows a side view of the vacuum chamber containing the lens systems, electron gun, spherical mirror, and the Eidophor oil surface on the spherical mirror. The mirror rotates at about one revolution per hour to prevent a gradual build-up of charge that would otherwise change the characteristics of the Eidophor oil film. This drawing shows an arc lamp, but later it was found that certain xenon lamps could also be used effectively.
The drawing on the right shows the approximate size of the Eidophor projector. The space requirements are similar to those of a standard 35 mm movie film projector, as found in most projection booths in theaters around the world. Not shown in this drawing are the various power supplies and the vacuum pump that are normally contained in the same cabinet as the Eidophor projector. Also not shown here are the cabinets that house the various signal-associated electronic circuits. The overall dimensions of this machine were approximately 5 feet high, 5.5 feet long, and 2.5 feet wide. The weight of this assembly was 1800 pounds.
This photo to the left shows a complete system, including the two upright cabinets (6) containing the low-level electronic circuits and their power supplies. In the main assembly, the projection arc lamp (5) is located at the top left, and the vacuum pump and auxiliary services equipment (4) are directly below it. The color wheel (3) is located at the top center. The Eidophor projector (1) is at the lower and center left. The projection light beam hood (2) is at the top right.
In later models, like the one pictured below, the arc light was abandoned in favor of high-intensity xenon lamps rated at either 3000 or 5000 watts. A color-dot sequential system was also incorporated, replacing the CBS field-sequential method, and the purchaser was then given the choice of using the NTSC, PAL, SECAM, or HDTV color systems.
The overall specifications of the more recent models of the Eidophor systems were most impressive. They included: screen sizes up to approximately 40 by 50 feet; 80 times brighter than the best three-tube CRT systems; up to 1250 lines horizontal at 120 Hz vertical; a video bandwidth of 50 MHz; all-digital control; white-field brightness levels of over 10,000 lumens; and projection throws of over 650 feet.
What a fantastic system! An engineering marvel, if there ever was one!! Fabulous!!! (Editor's comment)
In spite of all that, though, the Eidophor is becoming obsolete. It looks as if it will undoubtedly be replaced by the LCD and/or the DLP device manufactured by Texas Instruments, basically an integrated circuit with teeny, tiny little movable mirrors (pardon the scientific terms).
Peter F. Yanczer