Monday, December 21, 2015
Matte vs Glossy Monitors
https://pcmonitors.info/articles/matte-vs-glossy-monitors/
A great discussion of screen surfaces and optical coatings.
Sunday, December 20, 2015
The Electrowetting Display
Electrowetting displays are just as capable as the liquid crystal displays in tablets and notebooks, but they are three times more efficient. Johan Feenstra, who heads Samsung's electronic display research center in the Netherlands, explains how they work.
http://spectrum.ieee.org/consumer-electronics/portable-devices/lighter-brighter-displays
Bright, full-color e-paper under development by Ricoh
This e-paper is currently under development by Ricoh.
It has a unique structure, with layers of a new electrochromic material that turn magenta, yellow, and cyan from their transparent state. In this way, Ricoh's e-paper enables a bright, full-color display, like ordinary paper, which hasn't been possible with e-paper until now.
This prototype is a 3.5-inch, QVGA display, with a pixel density of 113.6 ppi. Its reflectivity is 70%. Compared with current color-filter displays, this e-paper is 2.5 times brighter. It has a color reproduction range of 35%, higher than that of a newspaper, which is 31% in Japan.
"To produce colors, CMY subtractive mixing is ideal, and that's the model used in printing. We've implemented this by coating the panel with layers of yellow, magenta, and cyan. Ordinarily, if you try to use layers like this, you need an electrode driver for each layer. But in this display we're developing, the electrodes are active TFTs. So, we can achieve all colors with a single TFT, by switching the electrodes on the display side."
The material used for the chromic layers is transparent in its oxidized state, but becomes colored when it's reduced. To achieve a color display, rewriting is done three times in the order magenta, yellow, cyan. As the spaces between the electrochromic layers are narrow, at about 2 microns, the result is an ideal color-mixing display.
"The stage we're at right now is, we're checking that this model works with an actual active panel. Regarding the color drive, we haven't refined this yet, so switching takes over a second for each color. But the reaction speed of the chromic material is, ideally, about 100 ms."
"From now on, we'd like to increase the size to 6 inches, then 10 inches. We also want to work on achieving stable driving and faster response."
Saturday, November 14, 2015
ogv.js: An Ogg Theora and Vorbis Video Decoder in JavaScript
Wednesday, November 11, 2015
Tuesday, November 10, 2015
Fwd: A great use for virtual reality headsets.
Perhaps also hinting at possible virtual tourism in the very near future (Oculus Rift is set to be released in 2016).
This Robot Will Let Kids In Hospital Explore Zoos Through Virtual Reality
A community called "Robots for Good" has come together to help kids stuck in Great Ormond Street Hospital in London visit the zoo. If the name hasn't given it…
Thursday, November 05, 2015
Mechanical television
https://en.wikipedia.org/wiki/Mechanical_television
http://hackaday.com/2010/04/13/mechanical-scanning-television/
http://www.earlytelevision.org/mechanical_tv.html
http://bs.cyty.com/menschen/e-etzold/archiv/TV/mechanical/scanningdisc.htm
http://www.home1.stofanet.dk/television/
http://www.home1.stofanet.dk/television/pjgn.html
Wednesday, October 28, 2015
Saturday, October 17, 2015
Tuesday, October 13, 2015
Wednesday, October 07, 2015
Wednesday, September 23, 2015
Friday, September 18, 2015
Monday, September 07, 2015
Saturday, September 05, 2015
Augmented Pixels: Indoor Navigation Platform for Drones
Augmented Pixels has been actively developing technology (including SLAM) to ensure safe flights as well as intuitive and easy navigation using Augmented Reality.
They came up with a platform that significantly reduces accident rates and minimizes the effect of the "human factor". Moreover, the drone can be programmed to fly around and land by itself.
Tiny 3D Camera Offers Brain Surgery Innovation
http://www.jpl.nasa.gov/news/news.php?feature=4702
Harish Manohara, principal investigator of the project at JPL, is working in collaboration with surgeon Dr. Hrayr Shahinian of the Skull Base Institute in Los Angeles, who approached JPL to create this technology.
“Multi-Angle and Rear Viewing Endoscopic tooL” (MARVEL) denotes an auxiliary endoscope, now undergoing development, that a surgeon would use in conjunction with a conventional endoscope to obtain additional perspective.
http://neurosciencenews.com/marvel-3d-neurosurgery-camera-2573/
Monday, August 31, 2015
Tuesday, August 25, 2015
Fwd: Visual Intelligence Through Deep Learning; Open UC Santa Cruz Faculty Position; More
Begin forwarded message:
From: "Embedded Vision Insights from the Embedded Vision Alliance" <newsletter@embeddedvisioninsights.com>
Date: August 25, 2015 at 10:53:20 PM GMT+8
To: john.sokol@gmail.com
Subject: Visual Intelligence Through Deep Learning; Open UC Santa Cruz Faculty Position; More
Embedded Vision Insights
VOL. 5, NO. 15 | A NEWSLETTER FROM THE EMBEDDED VISION ALLIANCE | Late August 2015
IN THIS EDITION
- New Content from the Embedded Vision Summit and Alliance Member Meeting
- Embedded Lucas-Kanade Tracking: Theory and Implementation
- Has Neural Network Processors' Time Come?
- Embedded Vision Community Conversations
- Embedded Vision in the News
LETTER FROM THE EDITOR
The Alliance continues to publish videos of great presentations from May's Embedded Vision Summit. Make sure you check out, for example, the highly rated keynote "Enabling Ubiquitous Visual Intelligence Through Deep Learning," by Dr. Ren Wu, formerly distinguished scientist at Baidu's Institute of Deep Learning (IDL). Dr. Wu shares an insider's perspective on the practical use of neural networks for vision.
In "Navigating the Vision API Jungle: Which API Should You Use and Why?", Neil Trevett, President of the Khronos Group, maps the landscape of APIs for vision software development. Long-time Alliance collaborator Gary Bradski, President of the OpenCV Foundation, provides an insider's perspective on the new version of OpenCV and how vision developers can utilize it in his presentation, "The OpenCV Open Source Computer Vision Library: Latest Developments." Also make sure to take a look at "Harman's Augmented Navigation Platform: The Convergence of ADAS and Navigation" from that company's Vice President of Technology Strategy, Alon Atsmon.
Roberto Mijat, Visual Computing Marketing Manager at ARM, explores when it makes sense to utilize a graphics core as a coprocessor in his presentation, "Understanding the Role of Integrated GPUs in Vision Applications." And echoing Dr. Wu's neural network focus, Jeff Gehlhaar, Vice President of Technology at Qualcomm, used his presentation "Deep-learning-based Visual Perception in Mobile and Embedded Devices: Opportunities and Challenges" to discuss the benefits, challenges and solutions for implementing neural networks in mobile and embedded devices. And the insights continued the next day at the quarterly Alliance Member Meeting: in "Combining Vision, Machine Learning and Natural Language Processing to Answer Everyday Questions," Faris Alqadah, CEO and Co-Founder of QM Scientific, explains how his firm is simplifying shopping for consumers by extracting and organizing product information from many data sources.
While you're on the Alliance website, make sure to check out all the other great content recently published there. And for timely notification of the publication of new content, subscribe to our RSS feed and Facebook, Google+, LinkedIn company and group, and Twitter social media channels. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Please don't hesitate to let me know how the Alliance can better serve your needs.
Brian Dipert
Editor-In-Chief, Embedded Vision Alliance
FEATURED VIDEOS
"Embedded Lucas-Kanade Tracking: How It Works, How to Implement It, and How to Use It," a Presentation from Goksel Dedeoglu of PercepTonic
Goksel Dedeoglu, Ph.D., Founder and Lab Director of PercepTonic, presents the "Embedded Lucas-Kanade Tracking: How It Works, How to Implement It, and How to Use It" tutorial at the May 2014 Embedded Vision Summit. This tutorial is intended for technical audiences interested in learning about the Lucas-Kanade (LK) tracker, also known as the Kanade-Lucas-Tomasi (KLT) tracker. Invented in the early 1980s, this method has been widely used to estimate pixel motion between two consecutive frames. Dedeoglu explains how the LK tracker works and discusses its advantages and limitations, as well as how to make it more robust and useful. Using DSP-optimized functions from TI's Vision Library (VLIB), he also shows how to detect feature points in real time and track them from one frame to the next using the LK algorithm. He demonstrates this on Texas Instruments' C6678 Keystone DSP, where he detects and tracks thousands of Harris corner features in 1080p HD resolution video.
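For readers who want to experiment with the same technique on a desktop rather than a DSP, here is a minimal sketch using OpenCV's pyramidal Lucas-Kanade tracker. The video filename and parameter values are illustrative assumptions, and it seeds the tracker with goodFeaturesToTrack rather than the VLIB Harris-corner pipeline used in the talk.

import cv2

cap = cv2.VideoCapture("input.mp4")  # assumed local video file
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
# Detect corner features to seed the tracker.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=1000, qualityLevel=0.01, minDistance=7)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal LK: estimate where each feature moved between consecutive frames.
    nxt, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                                winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    for (x0, y0), (x1, y1) in zip(pts[good].reshape(-1, 2), nxt[good].reshape(-1, 2)):
        cv2.line(frame, (int(x0), int(y0)), (int(x1), int(y1)), (0, 255, 0), 1)
    cv2.imshow("LK tracks", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
    prev_gray, pts = gray, nxt[good].reshape(-1, 1, 2)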
Introduction to the Embedded Vision Opportunity and the Embedded Vision Alliance Community
Jeff Bier, Founder of the Embedded Vision Alliance and President and Co-Founder of BDTI, presents the introductory remarks at the May 2015 Embedded Vision Summit. Jeff provides an overview of the embedded vision market opportunity, challenges, solutions and trends. He also introduces the Embedded Vision Alliance and the resources it offers for both product creators and potential members, and reviews the event agenda and other logistics.
FEATURED ARTICLES
Neural Network Processors: Has Their Time Come?
Lately, neural network algorithms have been gaining prominence in computer vision and other fields where there's a need to extract insights from ambiguous data. Classical computer vision algorithms typically attempt to identify objects by first detecting small features, then finding collections of these small features to identify larger features, and then reasoning about these larger features to deduce the presence and location of an object of interest (such as a face). These approaches can work well when the objects of interest are fairly uniform and the imaging conditions are favorable, but they often struggle when conditions are more challenging. An alternative approach, convolutional neural networks ("CNNs"), massively parallel algorithms made up of layers of computation nodes, has been showing impressive results on these more challenging problems.
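To make the contrast concrete, here is a toy Python sketch of the basic CNN building block, a convolution followed by a nonlinearity, with a hand-picked edge kernel standing in for one learned filter. Real networks stack many such layers and learn the kernel weights from data; this is only an illustration of the operation, not any particular network.

import numpy as np

def conv2d(image, kernel):
    # Valid 2-D cross-correlation of a grayscale image with a small kernel.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

# Toy image with a vertical dark-to-bright edge; the kernel responds along it.
image = np.zeros((8, 8))
image[:, 4:] = 1.0
edge_kernel = np.array([[-1.0, 0.0, 1.0]] * 3)
feature_map = relu(conv2d(image, edge_kernel))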
Facebook Oculus Acquires Pebbles Interfaces for Gesture Control
Last month, Facebook's subsidiary Oculus reported that it had acquired Israel-based Pebbles Interfaces, which has spent the past five years developing technology that uses custom optics, sensor systems and algorithms to detect and track hand movement. Pebbles Interfaces will be joining the hardware engineering and computer vision teams at Oculus to help advance virtual reality, tracking, and human-computer interactions.
FEATURED COMMUNITY DISCUSSION
FEATURED NEWS
Upcoming Free Qualcomm Vuforia Webinar Discusses Enabling Mobile Apps to See
Intel Expands Developer Opportunities as Computing Expands Across All Areas of Peoples' Lives
Altera Launches Worldwide SoC FPGA Developers Forums
ON Semiconductor Introduces Series of Advanced Image Co-Processors for Next Generation Automotive Camera
Sunday, August 23, 2015
Let's build a humanoid robot with computer vision
---------- Forwarded message ----------
From: Hack A Robot <info@meetup.com>
Date: Saturday, August 22, 2015
Subject: Invitation: Let's build a humanoid robot with computer vision
6:30 PM
TBD
South bay area, CA
(Venue of this event is TBD. Looking for suggestions/offers to host this event as well.) Earlier this year, I created Hackabot Nano (a feature-rich, Arduino-compatible wheeled robot). It was crowdfunded through Kickstarter. The robot was p...
WebRTC Codec Wars: Rebooted
Wednesday, September 9, 2015
6:00 PM
Just when we thought we were done with the video codec wars in WebRTC – we found out we're only just beginning.
In the past several weeks we've seen the names Thor, Daala, VP9 and H.265 thrown in the news as potential candidates to replace our current generation of video codecs. How is that going to play out, and where are we headed with all this?
We can't be sure but we think Tsahi Levent-Levi can make a few educated guesses about it.
Join TokBox and Tsahi to discuss the power plays of the video coding industry.
Come along to TokBox HQ at 6:00pm for a 6:30pm start.
As usual, we will provide pizza and beer.
We look forward to seeing you there!
About the speaker: Tsahi Levent-Levi is an Independent Analyst and Consultant for WebRTC.
Tsahi Levent-Levi has over 15 years of experience in the telecommunications, VoIP and 3G industry as an engineer, manager, marketer and CTO. Tsahi is an entrepreneur, independent analyst and consultant, assisting companies to form a bridge between technologies and business strategy in the domain of telecommunications.
Tsahi has an MSc in Computer Science, and an MBA degree specializing in Entrepreneurship and Strategy. Tsahi has been granted three patents related to 3G-324M and VoIP. He acted as the chairman of various activity groups within the IMTC, an organization focusing on interoperability of multimedia communications.
Tsahi is the author and editor of bloggeek.me, which focuses on the ecosystem and business opportunities around WebRTC.
Monday, August 17, 2015
In 1975, this Kodak employee invented the digital camera.
Curious Minds video streaming service
Saturday, August 08, 2015
Tuesday, August 04, 2015
Sunday, August 02, 2015
Robot stereo vision
From http://smprobotics.com/
They make an Unmanned Ground Video Surveillance Vehicle based on this.
Wednesday, July 22, 2015
Wednesday, July 01, 2015
Tuesday, June 30, 2015
Sunday, June 28, 2015
What could have been possible back in 1981
Thursday, June 18, 2015
Wednesday, June 17, 2015
Tuesday, June 16, 2015
Monday, June 15, 2015
Sunday, June 14, 2015
Future of Adobe Photoshop Demo - GPU Technology Conference - YouTube
https://youtu.be/lcwm4yaom4w
http://www.netbooknews.com/9917/futur... There was a lot going on during the Day 1 keynote at Nvidia's GPU conference, but as far as consumers are concerned there was one significant demo. The demo involved Adobe showing off a technology being developed using a plenoptic lens, aka one of those lenses that looks like a bug's eye. By attaching a number of small lenses to a camera, users can take a picture that looks, at first, like something an insect might see, with each sub-image showing a small part of the entire scene. Using an algorithm Adobe invented, the multitude of little images can be resolved into a single photo. The possibilities for manipulating the photo are endless: 2D images can be made 3D, and the focal point can be adjusted in real time so the background comes into focus and the subject goes blurry. Fingers crossed this gets into Adobe CS6 or CS7.
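The refocusing trick described above can be sketched in a few lines: shift each sub-aperture view by an amount proportional to its position in the aperture, then average. This is a simplified shift-and-add sketch of my own, not Adobe's algorithm, and it assumes the sub-views have already been extracted from the microlens data.

import numpy as np

def refocus(subviews, shift):
    # subviews: dict mapping sub-aperture coordinates (u, v) to HxW grayscale images.
    # shift: pixels of parallax compensated per unit of (u, v); varying it moves
    # the synthetic focal plane, so refocusing happens entirely after capture.
    acc = None
    for (u, v), img in subviews.items():
        moved = np.roll(np.roll(img, int(round(u * shift)), axis=1),
                        int(round(v * shift)), axis=0)
        acc = moved if acc is None else acc + moved
    return acc / len(subviews)

# Sweeping shift over a range of values produces a focal stack from one exposure.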
Light Field Cameras for 3D Imaging and Reconstruction | Thoughts from Tom Kurke
This is an excellent article on light field / plenoptic cameras, Pelican Imaging, and Lytro, and it lists the significant published papers in this area.
http://3dsolver.com/light-field-cameras-for-3d-imaging/
Superresolution with Plenoptic Camera 2.0
Excellent paper
Abstract:
This work is based on the plenoptic 2.0 camera, which captures an array of real images focused on the object. We show that this very fact makes it possible to use the camera data with super-resolution techniques, which enables the focused plenoptic camera to achieve high spatial resolution. We derive the conditions under which the focused plenoptic camera can capture radiance data suitable for super resolution. We develop an algorithm for super resolving those images. Experimental results are presented that show a 9× increase in spatial resolution compared to the basic plenoptic 2.0 rendering approach.
Categories and Subject Descriptors (according to ACM CCS): I.4.3 [Image Processing and Computer Vision, Imaging Geometry, Super Resolution]
http://www.tgeorgiev.net/Superres.pdf
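The focused-plenoptic super-resolution in the paper is considerably more involved, but the core intuition, placing many low-resolution observations of the same scene taken at known sub-pixel offsets onto a finer grid, can be sketched as a naive shift-and-add. The offsets are assumed known here; this is an illustration of the idea, not the paper's algorithm.

import numpy as np

def shift_and_add_sr(frames, shifts, scale):
    # frames: list of HxW low-resolution arrays of the same scene.
    # shifts: list of (dy, dx) sub-pixel offsets for each frame (assumed known).
    # scale: integer upsampling factor of the output grid.
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    count = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        ys = (np.arange(h) * scale + int(round(dy * scale))) % (h * scale)
        xs = (np.arange(w) * scale + int(round(dx * scale))) % (w * scale)
        acc[np.ix_(ys, xs)] += frame
        count[np.ix_(ys, xs)] += 1
    return acc / np.maximum(count, 1)  # grid cells never observed stay zero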
Image and Depth from a Conventional Camera with a Coded Aperture
Anat Levin, Rob Fergus, Fredo Durand, Bill Freeman
Abstract
A conventional camera captures blurred versions of scene information away from the plane of focus. Camera systems have been proposed that allow for recording all-focus images, or for extracting depth, but to record both simultaneously has required more extensive hardware and reduced spatial resolution. We propose a simple modification to a conventional camera that allows for the simultaneous recovery of both (a) high resolution image information and (b) depth information adequate for semi-automatic extraction of a layered depth representation of the image. Our modification is to insert a patterned occluder within the aperture of the camera lens, creating a coded aperture. We introduce a criterion for depth discriminability which we use to design the preferred aperture pattern. Using a statistical model of images, we can recover both depth information and an all-focus image from single photographs taken with the modified camera. A layered depth map is then extracted, requiring user-drawn strokes to clarify layer assignments in some cases. The resulting sharp image and layered depth map can be combined for various photographic applications, including automatic scene segmentation, post-exposure refocussing, or re-rendering of the scene from an alternate viewpoint.
http://groups.csail.mit.edu/graphics/CodedAperture/
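A rough way to see how a coded aperture yields depth: deblur the image with the aperture pattern scaled to each candidate depth, and prefer the depth whose deconvolution produces the most natural-looking (sparse-gradient) result. The sketch below uses plain Wiener deconvolution and an L1 gradient score; the paper itself uses a sparse image prior and a more careful deconvolution, so treat this as an approximation of the idea, not the authors' method.

import numpy as np

def wiener_deconv(blurred, psf, snr=0.01):
    # Frequency-domain Wiener deconvolution of a grayscale image by a given PSF.
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + snr) * G
    return np.real(np.fft.ifft2(F))

def depth_score(blurred, psf_at_depth):
    # Lower is better: deconvolving with the correctly scaled coded-aperture PSF
    # tends to give an image with sparse gradients (few, clean edges).
    sharp = wiener_deconv(blurred, psf_at_depth)
    return np.sum(np.abs(np.diff(sharp, axis=0))) + np.sum(np.abs(np.diff(sharp, axis=1)))

# For each depth hypothesis, scale the aperture pattern to the expected blur size,
# score the deconvolution, and keep the best-scoring depth per image region.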
Thursday, June 11, 2015
Samsung shows retail-ready transparent, mirrored OLED - CNET
http://www.cnet.com/uk/news/samsung-shows-retail-ready-transparent-mirrored-oled/
Tuesday, June 09, 2015
Glasses-free VR display
http://polarscreens.com/
A virtual reality 3D display that uses head tracking to optimize the 3D effect for a single viewer.
https://youtu.be/4E9HZZ9UgcI
Saturday, June 06, 2015
Building your own SDR-based Passive Radar on a Shoestring | Hackaday
http://hackaday.com/2015/06/05/building-your-own-sdr-based-passive-radar-on-a-shoestring/
VideoCoreIV-AG100-R Raspberry Pi GPU docs
https://docs.broadcom.com/doc/12358545
and just in case
https://github.com/doe300/VC4C/tree/master/doc
You want: VideoCoreIV-AG100-R.pdf
Original, now-dead link
Computational cameras
Changyin Zhou, Student Member, IEEE, and Shree K. Nayar, Member, IEEE
Abstract—A computational camera uses a combination of optics and processing to produce images that cannot be captured with traditional cameras. In the last decade, computational imaging has emerged as a vibrant field of research. A wide variety of computational cameras has been demonstrated to encode more useful visual information in the captured images, as compared with conventional cameras. In this paper, we survey computational cameras from two perspectives. First, we present a taxonomy of computational camera designs according to the coding approaches, including object side coding, pupil plane coding, sensor side coding, illumination coding, camera arrays and clusters, and unconventional imaging systems. Second, we use the abstract notion of light field representation as a general tool to describe computational camera designs, where each camera can be formulated as a projection of a high-dimensional light field to a 2-D image sensor. We show how individual optical devices transform light fields and use these transforms to illustrate how different computational camera designs (collections of optical devices) capture and encode useful visual information.
Index Terms—Computer vision, imaging, image processing, optics.
http://www1.cs.columbia.edu/CAVE/publications/pdfs/Zhou_TIP11.pdf
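The paper's unifying view, that every camera is some projection of the 4-D light field onto a 2-D sensor, can be illustrated with a tiny sketch: weight the angular samples with an aperture (or coding) mask and integrate. This is my own illustration of the abstraction, not code from the paper.

import numpy as np

def render_image(lightfield, aperture_mask):
    # lightfield: 4-D array L[u, v, y, x]; aperture_mask: 2-D weights over (u, v).
    # Integrating the angular dimensions under the mask projects the light field
    # onto a 2-D image, which is what any physical camera ultimately records.
    return np.tensordot(aperture_mask, lightfield, axes=([0, 1], [0, 1])) / aperture_mask.sum()

# A single nonzero mask entry models a pinhole, an all-ones mask an open aperture,
# and a binary pattern a coded aperture: different cameras from the same light field.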
OpenMAX
OpenMAX (Open Media Acceleration), often shortened as "OMX", is a non-proprietary and royalty-free cross-platform set of C-language programming interfaces that provides abstractions for routines especially useful for audio, video, and still image processing. It is intended for low-power and embedded devices (including smartphones, game consoles, digital media players, and set-top boxes) that need to efficiently process large amounts of multimedia data in predictable ways, such as video codecs, graphics libraries, and other functions for video, image, audio, voice and speech.
Thursday, June 04, 2015
VR demo & fireside chat with Amir Rubin, industry pioneer & CEO of Sixense
#TWiSTLive!! Virtual reality is an exciting & hotly debated space. Its potential is massive, but will it ever really "arrive"? In today's live show, recorded at Samsung Global Innovation Center, Jason answers this question with a resounding YES, as he demos the latest & greatest VR and hosts a riveting fireside chat with Amir Rubin, VR pioneer & CEO of Sixense, a premier virtual reality platform. After a lively demo of Sixense's cutting-edge, award-winning products, Jason talks with Amir about his inspirations, the state of the field, and what's coming. We learn why VR is not just about gaming but about every industry (esp. healthcare, education), why motion sickness is no longer an issue, how technologies like 3D printing help VR immensely, how developers are building amazing applications on the Sixense platform, how every phone going forward will be VR ready, why VR is so effective in training for high-risk jobs like welding & piloting, why service providers will give VR headsets away for free, how VR is going to be a new way for creative people to monetize their skills, how movie studios can use VR for you to actually experience being the super hero, why the biggest challenge facing VR is educating consumers -- and much much more!
Full show notes: http://goo.gl/v3JaLc
Sunday, May 31, 2015
Saturday, May 30, 2015
Billboard Advertising Banned Products In Russia Hides If It Recognizes Cops
from the next-slide dept.
Friday, May 29, 2015
Tuesday, May 26, 2015
Monday, May 25, 2015
Saturday, May 23, 2015
Wednesday, May 13, 2015
Tuesday, May 12, 2015
Monday, May 11, 2015
New thin, flat lenses focus light as sharply as curved lenses
I think this would be considered a phase hologram
Friday, May 08, 2015
Thursday, May 07, 2015
Sunday, May 03, 2015
Sunday, April 26, 2015
Monday, April 20, 2015
Friday, April 10, 2015
Tuesday, April 07, 2015
Friday, March 06, 2015
Monday, March 02, 2015
Thursday, February 26, 2015
Thursday, February 19, 2015
Saturday, February 14, 2015
Thursday, February 12, 2015
FLIR to Manufacture Thermal Imagers at TowerJazz
GlobeNewswire: FLIR is partnering with TowerJazz to manufacture its next-generation IR technology for smartphone, security, industrial and other markets. "The partnership between TowerJazz and FLIR has been excellent. Together, we have rapidly implemented a state of the art manufacturing capability in Newport Beach, CA. The Newport Beach site can facilitate commercial, ITAR (International Traffic in Arms Regulations), and more recently Category 1A & 1B "Trusted" services through our subsidiary, Jazz Semiconductor Trusted Foundry (JSTF), as accredited by the U.S. Department of Defense. In particular, we developed an industry leading capability for advanced micro thermal pixel array production," said David Howard, Executive Director and Fellow, TowerJazz.
These micro thermal pixel arrays include novel, high density pixels that enable FLIR to offer more powerful sensors with higher resolution in compact form factors. "When we combine the micro thermal arrays with integrated FLIR wafer level optics, software and MSX technology with TowerJazz process and manufacturing capability, we are able to deliver breakthrough product performance and value," said Tom Surran, COO at FLIR Systems.
Tuesday, February 10, 2015
Ultra-miniature Lensless Computational Imagers and Sensors
This is a huge revolution in imaging and cameras.
Monday, February 09, 2015
What are Mattel and Google doing with View-Master?
With a View-Master topped teaser (which you can see after the break), Google and Mattel invoked one of our favorite childhood memories -- and frequent inspiration for low-budget virtual reality shenanigans. The two are planning an "exclusive announcement and product debut" ahead of the New York Toy Fair next week, but other than the View-Master theme there's little to go on. Mattel's Fisher-Price division tried a View-Master comeback for the digital age in 2012, although all trace of it is gone now. We'll have to wait until next Friday to see for ourselves what they're planning, but we invite your wildest speculation until then. So what are you thinking -- a plastic pair of branded Mattel VR goggles based on the Cardboard project, or maybe a Hot Wheel based on something else Google has been working on?
LG unveils VR for G3, its own virtual reality headset inspired by Google Cardboard
Today LG announced the impending launch of its own virtual reality headset, a low-tech device which sits somewhere between Samsung VR and Google’s DIY Cardboard kit. LG calls it “VR for G3.”
As the name implies, LG will bundle its virtual reality headset with its G3 smartphone. The promotion will occur in “select markets” in the coming months. For now, that’s all we know about the release date.
Here’s how the device works, according to LG:
The design of VR for G3 is based on the blueprint for Google Cardboard, available online for home DIY fans. The neodymium ring magnet on the side of the VR for G3 works with the magnetic gyroscope sensor in the G3 to select applications and scroll through menus without touching the display. VR for G3 requires no assembly other than inserting the phone in the viewer.
http://venturebeat.com/2015/02/09/lg-teases-vr-for-g3-a-virtual-reality-headset-inspired-by-google-cardboard/
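The magnet-plus-sensor input trick works because sliding the ring magnet produces a sharp spike in the magnetic field the phone measures; a crude detector can be sketched as below. The threshold and the use of raw magnetometer samples are my own assumptions for illustration, not anything published by LG or Google.

import numpy as np

def detect_magnet_clicks(mag_samples, threshold=25.0):
    # mag_samples: Nx3 array of magnetometer readings (microtesla).
    # A sharp jump in field magnitude between consecutive samples is treated as
    # a "button press" from the sliding magnet on the side of the viewer.
    magnitude = np.linalg.norm(mag_samples, axis=1)
    jumps = np.abs(np.diff(magnitude))
    return np.nonzero(jumps > threshold)[0] + 1  # indices of candidate clicks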
Not in front of the telly: Warning over 'listening' TV
Samsung is warning customers about discussing personal information in front of their smart television set.
The warning applies to TV viewers who control their Samsung Smart TV using its voice activation feature.
Such TV sets "listen" to some of what is said in front of them and may share details they hear with Samsung or third parties, it said.
Privacy campaigners said the technology smacked of the telescreens in George Orwell's 1984, which spied on citizens.
www.bbc.com/news/technology-31296188