Friday, August 04, 2017

JPEG2000 GPU Codec toolkit

http://comprimato.com/


Ultra-high-speed compression and a life-like viewing experience start here. A JPEG2000 codec for GPU and CPU. Comprimato's JPEG2000 GPU Codec toolkit helps Media & Entertainment and Geospatial Imaging technology companies keep imagery true to the source for more accurate decision-making.

Camera in a furniture screw

CIA Hacking Tool "Dumbo" Hack WebCams & Corrupt Video Recordings

https://gbhackers.com/cia-hacking-tool-dumba-hack-webcams/



Saturday, July 29, 2017

Michael Ossmann Pulls DSSS Out of Nowhere | Hackaday

http://hackaday.com/2017/07/29/michael-ossmann-pulls-dsss-out-of-nowhere/

Altspace VR closes

https://www.wired.com/story/altspace-vr-closes/


Altspace tweeted the unexpected news: "It is with tremendously heavy hearts that we must let you know that we are closing down AltspaceVR very soon." The site had been unable to close its latest round of funding, it elaborated in a blog post, and will be shutting down next week.

Friday, July 28, 2017

The world's only single-lens Monocentric wide-FOV light field camera.


The operative word here is Monocentric.

Monocentric

Monocentric eyepiece diagram
A Monocentric is an achromatic triplet lens with two pieces of crown glass cemented to either side of a flint glass element. The elements are thick and strongly curved, and their surfaces share a common center, giving the design the name "monocentric". It was invented by Hugo Adolf Steinheil around 1883. This design, like the solid eyepiece designs of Robert Tolles, Charles S. Hastings, and E. Wilfred Taylor, is free from ghost reflections and gives a bright, high-contrast image, a desirable feature when it was invented (before anti-reflective coatings).


A Wide-Field-of-View Monocentric Light Field Camera
Donald G. Dansereau, Glenn Schuster, Joseph Ford, and Gordon Wetzstein
Stanford University, Department of Electrical Engineering

http://www.computationalimaging.org/wp-content/uploads/2017/04/LFMonocentric.pdf

Abstract
Light field (LF) capture and processing are important in an expanding range of computer vision applications, offering rich textural and depth information and simplification of conventionally complex tasks. Although LF cameras are commercially available, no existing device offers wide field-of-view (FOV) imaging. This is due in part to the limitations of fisheye lenses, for which a fundamentally constrained entrance pupil diameter severely limits depth sensitivity. In this work we describe a novel, compact optical design that couples a monocentric lens with multiple sensors using microlens arrays, allowing LF capture with an unprecedented FOV. Leveraging capabilities of the LF representation, we propose a novel method for efficiently coupling the spherical lens and planar sensors, replacing expensive and bulky fiber bundles. We construct a single sensor LF camera prototype, rotating the sensor relative to a fixed main lens to emulate a wide-FOV multi-sensor scenario. Finally, we describe a processing tool chain, including a convenient spherical LF parameterization, and demonstrate depth estimation and post-capture refocus for indoor and outdoor panoramas with 15 × 15 × 1600 × 200 pixels (72 MPix) and a 138° FOV.


---------

Designing a 4D camera for robots

Stanford engineers have developed a 4D camera with an extra-wide field of view. They believe this camera can be better than current options for close-up robotic vision and augmented reality.

Stanford has created a 4D camera that can capture 140 degrees of information. The new technology would be the perfect addition to robots and autonomous vehicles. The 4D camera relies on light field photography which allows it to gather such a wide degree of information.



A light field camera, or standard plenoptic camera, works by capturing information about the light field emanating from the scene: it measures both the intensity of the light and the direction the light rays travel. Traditional photography captures only the light intensity.
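Because a light field records direction as well as intensity, images can be refocused after capture by shifting each sub-aperture view and averaging. A minimal sketch of that shift-and-sum idea in numpy (the 4D array layout and integer-pixel shifts are illustrative assumptions, not the paper's actual pipeline):

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-sum refocus of a 4D light field.

    lf: array of shape (U, V, S, T) -- sub-aperture images indexed by
        angular coordinates (u, v) and spatial coordinates (s, t).
    alpha: refocus parameter; each sub-aperture image is shifted in
        proportion to its angular offset from the center, then averaged.
    """
    U, V, S, T = lf.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # integer shift proportional to angular offset (sketch only;
            # real pipelines use sub-pixel interpolation)
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

With alpha = 0 this reduces to a plain average over the sub-aperture images; sweeping alpha moves the synthetic focal plane through the scene.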



The researchers call their design the "first-ever single-lens, wide field of view, light field camera." The camera combines the directional information it gathers about light at the scene with the 2D image to create the 4D image.
This means the photo can be refocused after the image has been captured. The researchers use the analogy of looking out a window versus through a peephole to describe the difference between traditional photography and the new technology: "A 2D photo is like a peephole because you can't move your head around to gain more information about depth, translucency or light scattering. Looking through a window, you can move and, as a result, identify features like shape, transparency and shininess."
The 4D camera’s unique qualities make it perfect for use with robots.  For instance, images captured by a search and rescue robot could be zoomed in and refocused to provide important information to base control. The imagery produced by the 4D camera could also have application in augmented reality as the information rich images could help with better quality rendering.  
The 4D camera’s unique qualities make it perfect for use with robots.  For instance, images captured by a search and rescue robot could be zoomed in and refocused to provide important information to base control. The imagery produced by the 4D camera could also have application in augmented reality as the information rich images could help with better quality rendering.  
The 4D camera is still at a proof-of-concept stage, and too big for any of its possible future applications. But now that the technology is at a working stage, smaller and lighter versions can be developed. Donald Dansereau, a postdoctoral fellow in electrical engineering, explains the motivation for creating a camera specifically for robots: "We want to consider what would be the right camera for a robot that drives or delivers packages by air. We're great at making cameras for humans but do robots need to see the way humans do? Probably not."
The research will be presented at the computer vision conference CVPR 2017 on July 23.
http://news.stanford.edu/press-releases/2017/07/21/new-camera-impro-virtual-reality/

First images from the world's only single-lens wide-FOV light field camera.

From CVPR 2017 paper "A Wide-Field-of-View Monocentric Light Field Camera", 
   http://dgd.vision/Projects/LFMonocentric/

This parallax pan scrolls through a 138-degree, 72-MPix light field captured using our optical prototype. Shifting the virtual camera position over a circular trajectory during the pan reveals the parallax information captured by the LF.

There is no post-processing or alignment between fields, this is the raw light field as measured by the camera.



Other related work:

http://spie.org/newsroom/6666-panoramic-full-frame-imaging-with-monocentric-lenses-and-curved-fiber-bundles

Monocentric lens-based multi-scale optical systems and methods of use US 9256056 B2





High Definition, Low Delay, SDR-Based Video Transmission in UAV Applications


Software-defined radio (SDR) is a radio communication system where components that have been typically implemented in hardware (e.g. mixers, filters, amplifiers, modulators/demodulators, detectors, etc.) are instead implemented by means of software on a personal computer or embedded system. While the concept of SDR is not new, the rapidly evolving capabilities of digital electronics render practical many processes which used to be only theoretically possible.

https://en.wikipedia.org/wiki/Software-defined_radio
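To make the "hardware stages become software" point concrete, here is a toy sketch of an FM modulator and demodulator operating on complex baseband samples in numpy. The sample rate, deviation, and test tone are all illustrative assumptions; a real SDR would get its IQ samples from hardware like an RTL-SDR or an AD9361 front end.

```python
import numpy as np

fs = 48_000                               # sample rate (Hz), illustrative
t = np.arange(fs) / fs
msg = np.sin(2 * np.pi * 3 * t)           # 3 Hz message tone

# "Modulator" in software: frequency-modulate a complex baseband carrier
kf = 1000.0                               # frequency deviation (Hz)
phase = 2 * np.pi * kf * np.cumsum(msg) / fs
iq = np.exp(1j * phase)                   # complex IQ samples

# "Demodulator" in software: the instantaneous frequency is the phase
# difference between successive samples, scaled back to message units
demod = np.angle(iq[1:] * np.conj(iq[:-1])) * fs / (2 * np.pi * kf)
```

The demodulated signal reproduces the message tone (offset by one sample); mixers and filters are just more array arithmetic in the same style.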


SDR changes everything when it comes to radio and is the future of video.

We are having a meetup every Saturday at 4 PM at the Hacker Dojo in Santa Clara, CA.

https://www.meetup.com/Fly-by-SDR-Hacker-Club-Prep-for-Darpa-SDR-Hackfest/



High Definition, Low Delay, SDR-Based Video Transmission in UAV Applications


Abstract

Integrated RF agile transceivers are not only widely employed in software-defined radio (SDR) architectures in cellular telephone base stations, such as multiservice distributed access system (MDAS) and small cell, but also for wireless HD video transmission for industrial, commercial, and military applications, such as unmanned aerial vehicles (UAVs). This article will examine a wideband wireless video signal chain implementation using the AD9361/AD9364 integrated transceiver ICs, the amount of data transmitted, the corresponding RF occupied signal bandwidth, the transmission distance, and the transmitter's power. It will also describe the implementation of the PHY layer of OFDM and present frequency-hopping time test results for avoiding RF interference. Finally, we will discuss the advantages and disadvantages of Wi-Fi versus the RF agile transceiver in wideband wireless applications.

http://www.analog.com/en/analog-dialogue/articles/high-definition-low-delay-sdr-based-video-transmission-in-uav-applications.html
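The abstract mentions implementing an OFDM PHY layer. The core OFDM step is small enough to sketch: an IFFT plus cyclic prefix on transmit, and the inverse on receive. The FFT size, prefix length, and QPSK mapping below are illustrative assumptions, not the parameters of the AD9361 design in the article.

```python
import numpy as np

N_FFT, CP = 64, 16                       # subcarriers and cyclic-prefix length

def ofdm_tx(symbols):
    """Map one block of frequency-domain symbols to a time-domain OFDM symbol."""
    x = np.fft.ifft(symbols, N_FFT)
    return np.concatenate([x[-CP:], x])  # prepend cyclic prefix

def ofdm_rx(samples):
    """Strip the cyclic prefix and return the frequency-domain symbols."""
    return np.fft.fft(samples[CP:], N_FFT)

# QPSK symbols on all subcarriers
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, (N_FFT, 2))
syms = (2 * bits[:, 0] - 1 + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
rx = ofdm_rx(ofdm_tx(syms))              # noiseless loopback
```

Over a noiseless channel the received symbols match the transmitted ones exactly; the cyclic prefix is what makes multipath equalization a per-subcarrier multiply in a real link.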


Tuesday, July 25, 2017

A New Sampling Algorithm Could Eliminate Sensor Saturation (scitechdaily.com)

https://science.slashdot.org/story/17/07/22/0537231/a-new-sampling-algorithm-could-eliminate-sensor-saturation


Baron_Yam shared an article from Science Daily: Researchers from MIT and the Technical University of Munich have developed a new technique that could lead to cameras that can handle light of any intensity, and audio that doesn't skip or pop. Virtually any modern information-capture device -- such as a camera, audio recorder, or telephone -- has an analog-to-digital converter in it, a circuit that converts the fluctuating voltages of analog signals into strings of ones and zeroes. Almost all commercial analog-to-digital converters (ADCs), however, have voltage limits. If an incoming signal exceeds that limit, the ADC either cuts it off or flatlines at the maximum voltage. This phenomenon is familiar as the pops and skips of a "clipped" audio signal or as "saturation" in digital images -- when, for instance, a sky that looks blue to the naked eye shows up on-camera as a sheet of white.

Last week, at the International Conference on Sampling Theory and Applications, researchers from MIT and the Technical University of Munich presented a technique that they call unlimited sampling, which can accurately digitize signals whose voltage peaks are far beyond an ADC's voltage limit. The consequence could be cameras that capture all the gradations of color visible to the human eye, audio that doesn't skip, and medical and environmental sensors that can handle both long periods of low activity and the sudden signal spikes that are often the events of interest.

One of the paper's authors explains: "The idea is very simple. If you have a number that is too big to store in your computer memory, you can take the modulo of the number."
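A toy illustration of that modulo idea: model a self-reset ADC that folds out-of-range voltages back into its range, then recover the original signal by detecting and undoing the fold jumps. This is an unwrap-style sketch assuming the signal is sampled densely enough that it changes by less than the ADC range between samples and that the first sample is in range; the actual unlimited sampling paper uses higher-order differences.

```python
import numpy as np

LAM = 1.0  # ADC range: the converter only keeps values in [-LAM, LAM)

def fold(x):
    """Self-reset ADC model: fold samples into [-LAM, LAM) via modulo."""
    return (x + LAM) % (2 * LAM) - LAM

def unfold(y):
    """Undo the 2*LAM fold jumps, assuming the true signal changes by
    less than LAM between samples and y[0] is unfolded."""
    d = np.diff(y)
    jumps = np.round(d / (2 * LAM)) * 2 * LAM   # detect fold events
    return np.concatenate([[y[0]], y[0] + np.cumsum(d - jumps)])

t = np.linspace(0, 1, 500)
x = 3.0 * np.sin(2 * np.pi * 2 * t)    # peaks far beyond the ADC limit
y = fold(x)                            # what the saturating-free ADC records
x_hat = unfold(y)                      # recovered full-range signal
```

The folded record never exceeds the ADC range, yet the original 3x-over-range signal is recovered exactly, which is the "take the modulo, then invert it" trick in miniature.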

Sunday, June 18, 2017

Road to the Holodeck: Lightfields and Volumetric VR


It's come and gone, but it looked to be very interesting.

https://www.eventbrite.com/e/road-to-the-holodeck-lightfields-and-volumetric-vr-tickets-34087827610#

What's a lightfield, you ask?

Several technologies are required for VR's holy grail: the fabled holodeck. We already have graphical VR experiences that let us move through volumetric spaces, such as video games. And we have photorealistic media that lets us look around a 360-degree scene from a fixed position (with head tracking).
But what about the best of both worlds?
We're talking volumetric spaces in which you can move around, but are also photorealistic. In addition to things like positional tracking and lots of processing horsepower, the heart of this vision is lightfields. They define how photons hit our eyes and render what we see.
Because it's a challenge to capture photorealistic imagery from every possible angle in a given space -- as our eyes do in real reality -- the art of lightfields in VR involves extrapolating many vantage points once a fixed point is captured. And that requires clever algorithms, processing, and a whole lot of data.