Tuesday, June 30, 2015

Watch: The way this camera zooms in on the Moon is mind-blowing - ScienceAlert

http://www.sciencealert.com/watch-the-way-this-camera-zooms-in-on-the-moon-is-mind-blowing

Sunday, June 28, 2015

Kiwi drone racers get ahead of the competition | NZNews | 3 News

http://www.3news.co.nz/nznews/kiwi-drone-racers-get-ahead-of-the-competition-2015062418

What could have been possible back in 1981


https://www.youtube.com/watch?v=MWdG413nNkI

I thought we were pushing the limits at the time with playback of digital audio out of memory, and later even streaming it off a floppy drive.

This goes to show that not only the audio but also the video could have been done.
Now, to be honest, they are using a Sound Blaster card, which didn't exist for another eight years or so (the first Sound Blaster shipped in 1989).

The only things that have changed are optimization and algorithms, improvements in software development tools, completely open documentation, countless examples, and easy access to video source material.

I wasn't able to capture video until around 1984; it was black-and-white thresholded, looked dreadful, and took several seconds to capture a single image. I wasn't able to capture proper grayscale until 1987, and that required a board that cost several thousand dollars. Color video capture didn't come until 1990, on a Sun workstation; all of this was exotic and extremely expensive at the time.

It wasn't until 1995, with the $500 Matrox Meteor, that decent high-quality video capture was possible. We were just reaching 90 to 120 MHz Pentiums at that time.

Imagine what could be possible with today's hardware if only the software were optimized.






Sunday, June 14, 2015

Future of Adobe Photoshop Demo - GPU Technology Conference - YouTube

Demonstration of turning a light field image into a viewable image using the GPU to accelerate performance.



https://youtu.be/lcwm4yaom4w

http://www.netbooknews.com/9917/futur... There was a lot going on during the Day 1 keynote at Nvidia's GPU conference, but as far as consumers are concerned there was one significant demo. It involved Adobe showing off a technology being developed around a plenoptic lens, aka one of those lenses that looks like a bug's eye. By attaching a number of small lenses to a camera, users can take a picture that looks, at first, like something an insect might see, with each lens capturing a small part of the entire picture. Using an algorithm Adobe invented, the multitude of little images can be resolved into a single photo. The possibilities for manipulating the photo are endless: 2D images are made 3D, and the focal point can be adjusted in real time so the background comes into focus and the subject goes blurry. Fingers crossed this gets into Adobe CS6 or CS7.
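
For a sense of how that refocusing trick can work, here is a minimal sketch of the classic shift-and-add approach, assuming the lenslet sub-images have already been extracted into a 4D array. This illustrates the general technique, not Adobe's actual algorithm:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Synthetic refocusing by shift-and-add.

    lightfield: 4D array (u, v, y, x) of sub-aperture images,
                one small image per lenslet viewpoint.
    alpha:      focal-plane parameter; 0 leaves views unshifted,
                other values focus nearer or farther.
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W), dtype=np.float64)
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # central viewpoint, then accumulate.
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

Changing `alpha` re-renders the same capture at a different focal plane, which is exactly the "adjust the focal point after the fact" effect in the demo; each view's shift-and-accumulate is independent, which is why it maps so well onto a GPU.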

Light Field Cameras for 3D Imaging and Reconstruction | Thoughts from Tom Kurke


This is an excellent article discussing light field / plenoptic cameras, Pelican Imaging, and Lytro, and listing the significant published papers in this area.
http://3dsolver.com/light-field-cameras-for-3d-imaging/



Superresolution with Plenoptic Camera 2.0


Excellent paper

Abstract:
This work is based on the plenoptic 2.0 camera, which captures an array of real images focused on the object. We show that this very fact makes it possible to use the camera data with super-resolution techniques, which enables the focused plenoptic camera to achieve high spatial resolution. We derive the conditions under which the focused plenoptic camera can capture radiance data suitable for super resolution. We develop an algorithm for super resolving those images. Experimental results are presented that show a 9× increase in spatial resolution compared to the basic plenoptic 2.0 rendering approach.

http://www.tgeorgiev.net/Superres.pdf
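
As a toy illustration of the underlying idea (not the paper's actual registration and rendering pipeline), multi-view super-resolution amounts to interleaving samples from several low-resolution views onto a finer grid according to their known sub-pixel offsets:

```python
import numpy as np

def interleave_superres(views, shifts, scale):
    """Toy multi-frame super-resolution: place samples from several
    low-res views onto a finer grid using their known sub-pixel
    shifts, then average overlapping contributions.

    views:  list of 2D arrays, all the same low-res shape
    shifts: list of (dy, dx) sub-pixel offsets, in low-res pixels
    scale:  integer upsampling factor (e.g. 3 gives 9x more pixels)
    """
    h, w = views[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for img, (dy, dx) in zip(views, shifts):
        ys = (np.arange(h)[:, None] * scale + round(dy * scale)) % (h * scale)
        xs = (np.arange(w)[None, :] * scale + round(dx * scale)) % (w * scale)
        acc[ys, xs] += img
        cnt[ys, xs] += 1
    cnt[cnt == 0] = 1   # leave unobserved fine-grid cells at zero
    return acc / cnt
```

The focused plenoptic camera is a natural fit for this because neighboring microimages observe the same scene region at slightly different offsets, which is where the 9× figure comes from.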



Image and Depth from a Conventional Camera with a Coded Aperture



Image and Depth from a Conventional Camera with a Coded Aperture
Anat Levin, Rob Fergus, Fredo Durand, Bill Freeman

Abstract
A conventional camera captures blurred versions of scene information away from the plane of focus. Camera systems have been proposed that allow for recording all-focus images, or for extracting depth, but to record both simultaneously has required more extensive hardware and reduced spatial resolution. We propose a simple modification to a conventional camera that allows for the simultaneous recovery of both (a) high resolution image information and (b) depth information adequate for semi-automatic extraction of a layered depth representation of the image.
Our modification is to insert a patterned occluder within the aperture of the camera lens, creating a coded aperture. We introduce a criterion for depth discriminability which we use to design the preferred aperture pattern. Using a statistical model of images, we can recover both depth information and an all-focus image from single photographs taken with the modified camera. A layered depth map is then extracted, requiring user-drawn strokes to clarify layer assignments in some cases. The resulting sharp image and layered depth map can be combined for various photographic applications, including automatic scene segmentation, post-exposure refocusing, or re-rendering of the scene from an alternate viewpoint. 

http://groups.csail.mit.edu/graphics/CodedAperture/
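
A rough sketch of the depth-estimation side of this idea, assuming per-depth blur kernels derived by scaling the aperture pattern. The paper uses a principled statistical image prior; this stand-in just scores gradient sparsity after Wiener deconvolution:

```python
import numpy as np

def wiener_deconv(img, psf, snr=0.01):
    """Frequency-domain Wiener deconvolution (simple constant-SNR form).
    The PSF is zero-padded to the image size; the circular shift this
    introduces does not affect the gradient-sparsity score below."""
    padded = np.zeros_like(img, dtype=float)
    padded[:psf.shape[0], :psf.shape[1]] = psf
    H = np.fft.fft2(padded)
    G = np.fft.fft2(img)
    return np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + snr)))

def estimate_depth(patch, kernels, snr=0.01):
    """Score each candidate depth by deconvolving with the aperture
    pattern scaled for that depth: the correct scale gives a clean,
    sparse-gradient result, while wrong scales produce ringing."""
    scores = []
    for k in kernels:                    # one kernel per candidate depth
        x = wiener_deconv(patch, k, snr)
        scores.append(np.abs(np.diff(x, axis=0)).sum()
                      + np.abs(np.diff(x, axis=1)).sum())
    return int(np.argmin(scores))        # index of best-fitting depth
```

The coded aperture matters because its broadband pattern makes the wrong-scale deconvolutions ring badly, whereas a conventional circular aperture leaves the candidate depths hard to tell apart.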


Tuesday, June 09, 2015

Glasses-free VR display


http://polarscreens.com/

A virtual reality 3D display that uses head tracking to optimize the 3D effect for a single viewer.



https://youtu.be/4E9HZZ9UgcI
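
The core trick in head-tracked displays like this is recomputing an off-axis (asymmetric) projection every frame from the tracked head position. A minimal sketch of that calculation in screen-centered coordinates; illustrative only, not Polar Screens' implementation:

```python
def off_axis_frustum(eye, screen_w, screen_h, near, far):
    """Asymmetric view frustum for a head-tracked display.

    eye: (x, y, z) of the viewer's eye in screen-centered coordinates,
         x right, y up, z out of the screen (z > 0 in front of it).
    Returns OpenGL-style (left, right, bottom, top) at the near plane;
    re-run every frame as the tracker updates the eye position.
    """
    # Project the physical screen edges onto the near plane.
    s = near / eye[2]
    left   = (-screen_w / 2 - eye[0]) * s
    right  = ( screen_w / 2 - eye[0]) * s
    bottom = (-screen_h / 2 - eye[1]) * s
    top    = ( screen_h / 2 - eye[1]) * s
    return left, right, bottom, top
```

For stereo, the same function is called once per eye with slightly offset positions; because the frustum follows the viewer, the screen behaves like a window into the scene, but only for that one tracked viewer.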




Saturday, June 06, 2015

Building your own SDR-based Passive Radar on a Shoestring | Hackaday

Passive radar built from a software-defined radio and low-cost hardware.

http://hackaday.com/2015/06/05/building-your-own-sdr-based-passive-radar-on-a-shoestring/
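
At the heart of passive radar is the cross-ambiguity function: correlating the surveillance channel against delayed, Doppler-shifted copies of the direct broadcast signal. A naive sketch of the computation (real implementations use FFT-based batching, but the idea is the same):

```python
import numpy as np

def cross_ambiguity(ref, surv, fs, max_delay, doppler_bins):
    """Brute-force cross-ambiguity surface for passive radar.

    ref:          samples of the direct-path (broadcast) reference channel
    surv:         samples of the surveillance channel (echoes)
    fs:           sample rate in Hz
    max_delay:    largest bistatic delay to search, in samples
    doppler_bins: list of Doppler shifts to search, in Hz

    Returns a 2D array indexed by [doppler, delay]; peaks correspond
    to targets at that delay/Doppler. O(N * delays * dopplers), so
    keep the capture short for this naive version.
    """
    n = np.arange(len(ref))
    out = np.empty((len(doppler_bins), max_delay), dtype=complex)
    for i, fd in enumerate(doppler_bins):
        # Remove the candidate Doppler shift from the surveillance channel.
        shifted = surv * np.exp(-2j * np.pi * fd * n / fs)
        for d in range(max_delay):
            out[i, d] = np.vdot(ref[:len(ref) - d], shifted[d:])
    return np.abs(out)
```

With broadcast FM or DVB-T as the illuminator, two cheap SDR dongles sharing a clock are enough to feed the two channels, which is what makes the shoestring budget possible.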

VideoCoreIV-AG100-R Raspberry Pi GPU docs


https://docs.broadcom.com/doc/12358545

and just in case

https://github.com/doe300/VC4C/tree/master/doc

You want:   VideoCoreIV-AG100-R.pdf 


Original, now-dead link:
http://www.broadcom.com/docs/support/videocore/VideoCoreIV-AG100-R.pdf


How to optimize Raspberry Pi code using its GPU « Pete Warden's blog

http://petewarden.com/2014/08/07/how-to-optimize-raspberry-pi-code-using-its-gpu/


Computational cameras

Computational Cameras: Convergence of Optics and Processing

 Changyin Zhou, Student Member, IEEE, and Shree K. Nayar, Member, IEEE

Abstract—A computational camera uses a combination of optics and processing to produce images that cannot be captured with traditional cameras. In the last decade, computational imaging has emerged as a vibrant field of research. A wide variety of computational cameras has been demonstrated to encode more useful visual information in the captured images, as compared with conventional cameras. In this paper, we survey computational cameras from two perspectives. First, we present a taxonomy of computational camera designs according to the coding approaches, including object side coding, pupil plane coding, sensor side coding, illumination coding, camera arrays and clusters, and unconventional imaging systems. Second, we use the abstract notion of light field representation as a general tool to describe computational camera designs, where each camera can be formulated as a projection of a high-dimensional light field to a 2-D image sensor. We show how individual optical devices transform light fields and use these transforms to illustrate how different computational camera designs (collections of optical devices) capture and encode useful visual information.


http://www1.cs.columbia.edu/CAVE/publications/pdfs/Zhou_TIP11.pdf
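
The paper's light-field framing is compact enough to state in a few lines of code: a camera is a projection of the 4D light field onto the 2D sensor, and designs like the coded aperture above just change the weighting of that projection. A toy sketch of the idea:

```python
import numpy as np

def render(lightfield, aperture_mask):
    """Model a camera as a projection of the 4D light field L(u, v, s, t)
    onto a 2D sensor: each pixel integrates rays over the pupil plane,
    weighted by the aperture code A(u, v).

    lightfield:    array of shape (U, V, S, T)
    aperture_mask: array of shape (U, V); all-ones = conventional open
                   aperture, a binary pattern = coded aperture.
    """
    # I(s, t) = sum over (u, v) of A(u, v) * L(u, v, s, t)
    return np.tensordot(aperture_mask, lightfield, axes=([0, 1], [0, 1]))
```

Seen this way, the plenoptic, coded-aperture, and camera-array designs from the entries above are all just different choices of how to sample or weight the same 4D function before it collapses to 2D.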


Node.JS truly on Android and iOS | Oguz Bastemur

http://oguzbastemur.blogspot.com/2015/03/nodejs-truly-on-android-and-ios.html



OpenMAX

http://en.m.wikipedia.org/wiki/OpenMAX

OpenMAX (Open Media Acceleration), often shortened to "OMX", is a non-proprietary and royalty-free cross-platform set of C-language programming interfaces that provides abstractions for routines especially useful for audio, video, and still image processing. It is intended for low-power and embedded devices (including smartphones, game consoles, digital media players, and set-top boxes) that need to efficiently process large amounts of multimedia data in predictable ways, such as video codecs, graphics libraries, and other functions for video, image, audio, voice and speech.

RealTime Data Compression: LZ4 Frame format : Final specifications

http://fastcompression.blogspot.fr/2013/04/lz4-streaming-format-final.html
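
For reference, producing a spec-conformant LZ4 frame from Python takes one call, assuming the `lz4` bindings are installed (`capture.raw` is just a placeholder file name):

```python
import lz4.frame  # pip install lz4; assumes the python-lz4 bindings

data = open("capture.raw", "rb").read()  # placeholder input file

# Compress into a self-contained LZ4 frame: magic number, frame
# descriptor (flags, optional content size), data blocks, end mark.
compressed = lz4.frame.compress(data, content_checksum=True)

# Any LZ4-frame-aware tool (e.g. the lz4 CLI) can decode the result.
restored = lz4.frame.decompress(compressed)
assert restored == data
```

The point of the frame format over raw LZ4 blocks is exactly this interoperability: the header and checksums make the stream self-describing, so producers and consumers written against different libraries agree on its contents.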



Thursday, June 04, 2015

VR demo & fireside chat with Amir Rubin, industry pioneer & CEO of Sixense

http://www.youtube.com/watch?v=48dt9Q3FJYs&sns=em


#TWiSTLive!! Virtual reality is an exciting & hotly debated space. Its potential is massive, but will it ever really "arrive"? In today's live show, recorded at Samsung Global Innovation Center, Jason answers this question with a resounding YES, as he demos the latest & greatest VR and hosts a riveting fireside chat with Amir Rubin, VR pioneer & CEO of Sixense, a premier virtual reality platform. After a lively demo of Sixense's cutting-edge, award-winning products, Jason talks with Amir about his inspirations, the state of the field, and what's coming. We learn why VR is not just about gaming but about every industry (esp. healthcare, education), why motion sickness is no longer an issue, how technologies like 3D printing help VR immensely, how developers are building amazing applications on the Sixense platform, how every phone going forward will be VR ready, why VR is so effective in training for high-risk jobs like welders & pilots, why service providers will give VR headsets away for free, how VR is going to be a new way for creative people to monetize their skills, how movie studios can use VR for you to actually experience being the super hero, why the biggest challenge facing VR is educating consumers -- and much much more! 

Full show notes: http://goo.gl/v3JaLc