Friday, October 20, 2017

Weatherproof TTL Serial JPEG Camera with NTSC Video and IR LEDs


  • Metal Housing size: 2" x 2" x 2.5"
  • Weight: 150 grams
  • Image sensor: CMOS 1/4 inch
  • CMOS pixels: 0.3M (300K, i.e. VGA)
  • Pixel size: 5.6 µm × 5.6 µm
  • Output format: Standard JPEG/M-JPEG
  • White balance: Automatic
  • Exposure: Automatic
  • Gain: Automatic
  • Shutter: Electronic rolling shutter
  • SNR: 45 dB
  • Dynamic range: 60 dB
  • Max analog gain: 16 dB
  • Frame speed: 30 fps at 640×480
  • Scan mode: Progressive scan
  • Viewing angle: 60 degrees
  • Monitoring distance: 10 meters, maximum 15 meters (adjustable)
  • Image size: VGA (640×480), QVGA (320×240), QQVGA (160×120)
  • Baud rate: Default 38400, maximum 115200
  • Current draw: 75 mA with IR LEDs off; 250 mA extra with IR LEDs on
  • Operating voltage: DC +5V
  • Communication: 3.3V TTL (three-wire: TX, RX, GND)
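Serial JPEG camera modules in this class typically speak the VC0706 command protocol (frames of 0x56, a serial number, a command byte, then arguments). The snippet below is a minimal sketch of the command-frame builders under that assumption; the command bytes and argument layouts should be verified against your module's datasheet, and the frames would be sent over a 3.3V TTL serial port (e.g. with pyserial at the default 38400 baud).

```python
import struct

SERIAL_NUM = 0x00  # camera serial number; 0x00 on most modules

def _cmd(command, args=b""):
    """Build a VC0706-style command frame: 0x56, serial#, command, arg count, args."""
    return bytes([0x56, SERIAL_NUM, command, len(args)]) + args

def cmd_take_picture():
    # FBUF_CTRL (0x36) with STOP_CURRENT_FRAME (0x00) freezes a frame in the buffer
    return _cmd(0x36, bytes([0x00]))

def cmd_frame_length():
    # GET_FBUF_LEN (0x34) asks for the size of the current frame (0x00)
    return _cmd(0x34, bytes([0x00]))

def cmd_read_frame(offset, length, delay=0x0A):
    # READ_FBUF (0x32): transfer-mode bytes, 32-bit offset and length, interbyte delay
    payload = (bytes([0x00, 0x0A])
               + struct.pack(">II", offset, length)
               + struct.pack(">H", delay))
    return _cmd(0x32, payload)
```

A capture would then loop: send `cmd_take_picture()`, read the length reply, and page through the frame buffer with `cmd_read_frame()` until the JPEG end marker (0xFF 0xD9) appears.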

Monday, September 25, 2017

Gigabit Multimedia Serial Link (GMSL)

Gigabit Multimedia Serial Link (GMSL) serializer and deserializer (SerDes)

Right now this is a de facto standard built around Maxim's chips.
It is used almost exclusively in the self-driving car / automotive industry.

Power and data are carried over a single coax cable to a GMSL camera.

Maxim Integrated's MAX9272A and MAX9275 gigabit multimedia serial link (GMSL) serializers and deserializers (SerDes) used in the Surround View Kit are designed primarily for automotive video applications such as ADAS and infotainment. Maxim's GMSL SerDes technology provides a compression-free alternative to Ethernet, delivering 10x faster data rates, 50% lower cabling costs, and better EMC. The ADAS Starter Kit ships with 20 cm coax cables, but they can be exchanged for longer ones: Maxim's GMSL chipsets drive 15 meters of coax or shielded twisted-pair (STP) cabling, providing the margin required for robust and versatile designs.

Friday, August 04, 2017

JPEG2000 GPU Codec toolkit

Ultra-high-speed compression and a life-like viewing experience start here. A JPEG2000 codec for GPU and CPU: Comprimato's JPEG2000 GPU Codec toolkit helps media & entertainment and geospatial-imaging companies deliver realistic imagery and more accurate decision-making.

Camera in a furniture screw

CIA Hacking Tool "Dumbo" Hack WebCams & Corrupt Video Recordings

Saturday, July 29, 2017

Michael Ossmann Pulls DSSS Out of Nowhere | Hackaday

Altspace VR closes

Altspace tweeted the unexpected news: "It is with tremendously heavy hearts that we must let you know that we are closing down AltspaceVR very soon." The site had been unable to close its latest round of funding, it elaborated in a blog post, and will be shutting down next week.

Friday, July 28, 2017

The world's only single-lens Monocentric wide-FOV light field camera.

The operative word here is "monocentric".


Monocentric eyepiece diagram
A monocentric is an achromatic triplet lens with two pieces of crown glass cemented on both sides of a flint glass element. The elements are thick, strongly curved, and their surfaces share a common center, giving the design the name "monocentric". It was invented by Hugo Adolf Steinheil around 1883. This design, like the solid eyepiece designs of Robert Tolles, Charles S. Hastings, and E. Wilfred Taylor, is free from ghost reflections and gives a bright, high-contrast image, a desirable feature when it was invented (before anti-reflective coatings).

A Wide-Field-of-View Monocentric Light Field Camera
Donald G. Dansereau, Glenn Schuster, Joseph Ford, and Gordon Wetzstein, Stanford University, Department of Electrical Engineering

Abstract: Light field (LF) capture and processing are important in an expanding range of computer vision applications, offering rich textural and depth information and simplification of conventionally complex tasks. Although LF cameras are commercially available, no existing device offers wide field-of-view (FOV) imaging. This is due in part to the limitations of fisheye lenses, for which a fundamentally constrained entrance pupil diameter severely limits depth sensitivity. In this work we describe a novel, compact optical design that couples a monocentric lens with multiple sensors using microlens arrays, allowing LF capture with an unprecedented FOV. Leveraging capabilities of the LF representation, we propose a novel method for efficiently coupling the spherical lens and planar sensors, replacing expensive and bulky fiber bundles. We construct a single sensor LF camera prototype, rotating the sensor relative to a fixed main lens to emulate a wide-FOV multi-sensor scenario. Finally, we describe a processing tool chain, including a convenient spherical LF parameterization, and demonstrate depth estimation and post-capture refocus for indoor and outdoor panoramas with 15 × 15 × 1600 × 200 pixels (72 MPix) and a 138° FOV.


Designing a 4D camera for robots

Stanford engineers have developed a 4D camera with an extra-wide field of view. They believe this camera can be better than current options for close-up robotic vision and augmented reality.

Stanford has created a 4D camera that can capture a 138-degree field of view. The new technology would be a natural fit for robots and autonomous vehicles. The 4D camera relies on light field photography, which allows it to gather such a wide field of information.

A light field camera, or standard plenoptic camera, works by capturing information about the light field emanating from the scene. It measures both the intensity of the light in the scene and the direction the light rays travel; traditional photography captures only the light intensity.

The researchers call their design the "first-ever single-lens, wide field of view, light field camera." The camera combines the directional information it gathers about light at the scene with the 2D image to create the 4D image.
This means the photo can be refocused after the image has been captured. The researchers use the analogy of looking out a window versus looking through a peephole to describe the difference between traditional photography and the new technology: "A 2D photo is like a peephole because you can't move your head around to gain more information about depth, translucency or light scattering. Looking through a window, you can move and, as a result, identify features like shape, transparency and shininess."
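Post-capture refocus is, at its core, synthetic-aperture (shift-and-add) integration over the sub-aperture views of the light field. The paper itself uses a spherical LF parameterization; the sketch below is a minimal planar version of the idea, where the 4D indexing `lf[u][v][s][t]` and the integer per-view shift are simplifying assumptions.

```python
def refocus(lf, slope):
    """Shift-and-add refocus of a 4D light field.

    lf[u][v] is a 2D sub-aperture image (list of rows); `slope` is an
    integer per-view pixel shift that selects the in-focus depth plane.
    """
    U, V = len(lf), len(lf[0])
    S, T = len(lf[0][0]), len(lf[0][0][0])
    out = [[0.0] * T for _ in range(S)]
    for u in range(U):
        for v in range(V):
            # shift each view proportionally to its offset from the center view
            du, dv = slope * (u - U // 2), slope * (v - V // 2)
            for s in range(S):
                for t in range(T):
                    # clamp shifted coordinates at the image border
                    ss = min(max(s + du, 0), S - 1)
                    tt = min(max(t + dv, 0), T - 1)
                    out[s][t] += lf[u][v][ss][tt]
    n = U * V
    return [[x / n for x in row] for row in out]
```

A scene point whose per-view parallax matches `slope` adds up coherently (sharp), while points at other depths spread out (blurred), which is exactly why focus can be chosen after capture.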
The 4D camera’s unique qualities make it perfect for use with robots.  For instance, images captured by a search and rescue robot could be zoomed in and refocused to provide important information to base control. The imagery produced by the 4D camera could also have application in augmented reality as the information rich images could help with better quality rendering.  
The 4D camera is still at a proof-of-concept stage, and too big for any of the possible future applications. But now that the technology is at a working stage, smaller and lighter versions can be developed. The researchers explain the motivation for creating a camera specifically for robots. Donald Dansereau, a postdoctoral fellow in electrical engineering, explains, "We want to consider what would be the right camera for a robot that drives or delivers packages by air. We're great at making cameras for humans but do robots need to see the way humans do? Probably not."
The research will be presented at the computer vision conference CVPR 2017 on July 23.

First images from the world's only single-lens wide-FOV light field camera.

From CVPR 2017 paper "A Wide-Field-of-View Monocentric Light Field Camera",

This parallax pan scrolls through a 138-degree, 72-MPix light field captured using our optical prototype. Shifting the virtual camera position over a circular trajectory during the pan reveals the parallax information captured by the LF.

There is no post-processing or alignment between fields, this is the raw light field as measured by the camera.

Other related work:

Monocentric lens-based multi-scale optical systems and methods of use US 9256056 B2

High Definition, Low Delay, SDR-Based Video Transmission in UAV Applications

Software-defined radio (SDR) is a radio communication system where components that have typically been implemented in hardware (e.g. mixers, filters, amplifiers, modulators/demodulators, detectors, etc.) are instead implemented by means of software on a personal computer or embedded system. While the concept of SDR is not new, the rapidly evolving capabilities of digital electronics render practical many processes which used to be only theoretically possible.
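As a toy illustration of "hardware components implemented in software", here is a minimal software mixer and low-pass filter in plain Python: together they form a crude digital downconverter front end. A real SDR would use optimized FIR filters and vectorized math; the sample rate and tone frequency below are just illustrative.

```python
import cmath
import math

def mix_down(samples, freq, rate):
    """Software mixer: multiply by a complex exponential to shift `freq` to DC."""
    return [s * cmath.exp(-2j * math.pi * freq * n / rate)
            for n, s in enumerate(samples)]

def moving_average(samples, taps):
    """Crude low-pass FIR (boxcar) to reject the image left by the mixer."""
    out = []
    for n in range(len(samples)):
        window = samples[max(0, n - taps + 1): n + 1]
        out.append(sum(window) / len(window))
    return out

# Example: a 10 kHz real tone sampled at 48 kHz, mixed down to DC and filtered.
tone = [math.cos(2 * math.pi * 10000 * n / 48000) for n in range(64)]
baseband = moving_average(mix_down(tone, 10000, 48000), 12)
```

After mixing, the tone sits at DC (amplitude 0.5) plus an unwanted image at twice the tone frequency; a 12-tap boxcar at these rates happens to null that image exactly, leaving a near-constant baseband signal.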

SDR changes everything when it comes to radio and is the future of video.

We have a meetup every Saturday at 4 PM at the Hacker Dojo in Santa Clara, CA.


Integrated RF agile transceivers are not only widely employed in software-defined radio (SDR) architectures in cellular telephone base stations, such as multiservice distributed access systems (MDAS) and small cells, but also for wireless HD video transmission in industrial, commercial, and military applications, such as unmanned aerial vehicles (UAVs). This article will examine a wideband wireless video signal chain implementation using the AD9361/AD9364 integrated transceiver ICs, the amount of data transmitted, the corresponding occupied RF signal bandwidth, the transmission distance, and the transmitter's power. It will also describe the implementation of the OFDM PHY layer and present frequency-hopping test results for avoiding RF interference. Finally, we will discuss the advantages and disadvantages of Wi-Fi versus the RF agile transceiver in wideband wireless applications.
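The article's PHY parameters aren't given here, but the core OFDM idea (map bits onto subcarriers, inverse-transform to the time domain, prepend a cyclic prefix) can be sketched generically. The subcarrier count, cyclic-prefix length, and QPSK mapping below are illustrative assumptions, not the AD9361 design's actual numerology.

```python
import cmath
import math

def dft(x, inverse=False):
    """Naive O(N^2) DFT/IDFT; adequate for a small OFDM symbol demo."""
    N = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(sign * 2j * math.pi * k * n / N) for n in range(N))
           for k in range(N)]
    return [v / N for v in out] if inverse else out

# Gray-coded QPSK constellation (bit pair -> complex subcarrier symbol)
QPSK = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}

def ofdm_modulate(bits, n_sub=8, cp=2):
    """Map bit pairs to QPSK subcarriers, IDFT to time domain, prepend cyclic prefix."""
    syms = [QPSK[(bits[i], bits[i + 1])] for i in range(0, 2 * n_sub, 2)]
    time = dft(syms, inverse=True)
    return time[-cp:] + time  # cyclic prefix guards against multipath ISI

def ofdm_demodulate(samples, n_sub=8, cp=2):
    """Strip the cyclic prefix, DFT back to subcarriers, slice QPSK to bits."""
    syms = dft(samples[cp:cp + n_sub])
    bits = []
    for s in syms:
        bits += [int(s.imag < 0), int(s.real < 0)]  # invert the QPSK map by sign
    return bits
```

A loopback run (`ofdm_demodulate(ofdm_modulate(bits))`) recovers the input bits; a real link would add pilot subcarriers, equalization, and channel coding on top of this skeleton.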