Monday, December 21, 2015

Matte vs Glossy Monitors



https://pcmonitors.info/articles/matte-vs-glossy-monitors/

Great discussion of Screen Surfaces and optical coatings.

Sunday, December 20, 2015

The Electrowetting Display





Electrowetting displays are just as capable as the liquid crystal displays in tablets and notebooks, but they are three times more efficient. Johan Feenstra, who heads Samsung's electronic display research center in the Netherlands, explains how they work.


http://spectrum.ieee.org/consumer-electronics/portable-devices/lighter-brighter-displays

Bright, full-color e-paper under development by Ricoh





This e-paper is currently under development by Ricoh.

It has a unique structure, with layers of a new electrochromic material that turn magenta, yellow, and cyan from their transparent state. In this way, Ricoh's e-paper enables a bright, full-color display, like ordinary paper, which hasn't been possible with e-paper until now.

This prototype is a 3.5-inch, QVGA display, with a pixel density of 113.6 ppi. Its reflectivity is 70%. Compared with current color-filter displays, this e-paper is 2.5 times brighter. It has a color reproduction range of 35%, higher than that of a newspaper, which is 31% in Japan.

"To produce colors, CMY subtractive mixing is ideal, and that's the model used in printing. We've implemented this by coating the panel with layers of yellow, magenta, and cyan. Ordinarily, if you try to use layers like this, you need an electrode driver for each layer. But in this display we're developing, the electrodes are active TFTs. So, we can achieve all colors with a single TFT, by switching the electrodes on the display side."

The material used for the chromic layers is transparent in its oxidized state, but becomes colored when it's reduced. To achieve a color display, rewriting is done three times in the order magenta, yellow, cyan. As the spaces between the electrochromic layers are narrow, at about 2 microns, the result is an ideal color-mixing display.
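
To make the subtractive model concrete, here is a minimal numerical sketch (not Ricoh's actual drive scheme): each colored layer is treated as an ideal filter that absorbs one additive primary, and light is attenuated twice as it passes down to the white backplane and back out.

```python
import numpy as np

def reflected_rgb(c, m, y, paper_reflectance=0.7):
    """Very simplified model of a stacked CMY electrochromic display.

    c, m, y are switching levels (0 = transparent/oxidized,
    1 = fully colored/reduced) of the cyan, magenta and yellow layers.
    Each colored layer absorbs one additive primary:
        cyan    absorbs red
        magenta absorbs green
        yellow  absorbs blue
    Light passes through the stack, reflects off the white backplane,
    and passes through again, so each layer attenuates twice.
    """
    transmission = np.array([
        1.0 - c,   # red   survives the cyan layer
        1.0 - m,   # green survives the magenta layer
        1.0 - y,   # blue  survives the yellow layer
    ])
    return paper_reflectance * transmission ** 2

print(reflected_rgb(0, 0, 0))  # all layers transparent -> white (paper reflectance)
print(reflected_rgb(1, 0, 0))  # cyan layer only
print(reflected_rgb(1, 1, 1))  # all layers colored -> black
```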

"The stage we're at right now is, we're checking that this model works with an actual active panel. Regarding the color drive, we haven't refined this yet, so switching takes over a second for each color. But the reaction speed of the chromic material is, ideally, about 100 ms."

"From now on, we'd like to increase the size to 6 inches, then 10 inches. We also want to work on achieving stable driving and faster response."

Saturday, November 14, 2015

OGV.JS: AN OGG THEORA AND VORBIS VIDEO DECODER IN JAVASCRIPT



Brion Vibber has been working on ogv.js, an Ogg video player in JavaScript that supports both audio and video. We've seen video codecs in JavaScript before, such as Broadway (H.264), Route9 (WebM/VP8), and more, but mostly without audio support to go along with them. We have audio codecs in JS too, just not combined with video yet. That changes with ogv.js.


Tuesday, November 10, 2015

Fwd: A great use for virtual reality headsets.


Perhaps also hinting at possible virtual tourism in the very near future (Oculus Rift is set to be released in 2016).

 

This Robot Will Let Kids In Hospital Explore Zoos Through Virtual Reality

A community called "Robots for Good" has come together to help kids stuck in Great Ormond Street Hospital in London visit the zoo. If the name hasn't given it…

iflscience.com

 


Thursday, November 05, 2015

Google Open Spherical Camera API

Mechanical television



https://en.wikipedia.org/wiki/Mechanical_television



http://hackaday.com/2010/04/13/mechanical-scanning-television/

http://www.earlytelevision.org/mechanical_tv.html

http://bs.cyty.com/menschen/e-etzold/archiv/TV/mechanical/scanningdisc.htm

http://www.home1.stofanet.dk/television/

http://www.home1.stofanet.dk/television/pjgn.html

Saturday, September 05, 2015

Augmented Pixels: Indoor Navigation Platform for Drones

Drones are notoriously difficult to handle indoors: they are hard to control, and it is hard to keep them from crashing into walls or people.

Augmented Pixels has been actively developing technology (including SLAM) to ensure safe flights as well as intuitive and easy navigation using Augmented Reality.

They came up with a platform that significantly reduces accident rates and minimizes the effect of the "human factor". Moreover, the drone can be programmed to fly around and land by itself.



The prospects for this technology include a wide range of use cases (e.g. inspection of premises for security, creation of 360-degree tours, etc.). Augmented Pixels is located in Palo Alto, CA.

Tiny 3D Camera Offers Brain Surgery Innovation


http://www.jpl.nasa.gov/news/news.php?feature=4702

Harish Manohara, principal investigator of the project at JPL, is working in collaboration with surgeon Dr. Hrayr Shahinian at the Skull Base Institute in Los Angeles, who approached JPL to create this technology.
MARVEL's camera is a mere 0.2 inch (4 millimeters) in diameter and about 0.6 inch (15 millimeters) long. It is attached to a bendable "neck" that can sweep left or right, looking around corners with up to a 120-degree arc. This allows for a highly maneuverable endoscope.
http://www.skullbaseinstitute.com/press/adjustable-viewing-angle-endoscopic-brain-surgery.htm

“Multi-Angle and Rear Viewing Endoscopic tooL” (MARVEL) denotes an auxiliary endoscope, now undergoing development, that a surgeon would use in conjunction with a conventional endoscope to obtain additional perspective.


http://neurosciencenews.com/marvel-3d-neurosurgery-camera-2573/

To operate on the brain, doctors need to see fine details on a small scale. A tiny camera that could produce 3-D images from inside the brain would help surgeons see more intricacies of the tissue they are handling and lead to faster, safer procedures.
An endoscope with such a camera is being developed at NASA’s Jet Propulsion Laboratory in Pasadena, California. MARVEL, which stands for Multi Angle Rear Viewing Endoscopic tooL, has been honored this week with the Outstanding Technology Development award from the Federal Laboratory Consortium. An endoscope is a device that examines the interior of a body part.
“With one of the world’s smallest 3-D cameras, MARVEL is designed for minimally invasive brain surgery,” said Harish Manohara, principal investigator of the project at JPL. Manohara is working in collaboration with surgeon Dr. Hrayr Shahinian at the Skull Base Institute in Los Angeles, who approached JPL to create this technology.
MARVEL’s camera is a mere 0.2 inch (4 millimeters) in diameter and about 0.6 inch (15 millimeters) long. It is attached to a bendable “neck” that can sweep left or right, looking around corners with up to a 120-degree arc. This allows for a highly maneuverable endoscope.
Operations with the small camera would not require the traditional open craniotomy, a procedure in which surgeons take out large parts of the skull. Craniotomies result in higher costs and longer stays in hospitals than surgery using an endoscope.
Stereo imaging endoscopes that employ traditional dual-camera systems are already in use for minimally invasive surgeries elsewhere in the body. But surgery on the brain requires even more miniaturization. That’s why, instead of two, MARVEL has only one camera lens.
To generate 3-D images, MARVEL’s camera has two apertures — akin to the pupil of the eye — each with its own color filter. Each filter transmits distinct wavelengths of red, green and blue light, while blocking the bands to which the other filter is sensitive. The system includes a light source that produces all six colors of light to which the filters are attuned. Images from each of the two sets are then merged to create the 3-D effect.
A laboratory prototype of MARVEL, one of the world’s smallest 3-D cameras. MARVEL is in the center foreground. On the display is a 3-D image of the interior of a walnut, taken by MARVEL previously, which has characteristics similar to that of a brain. Credit: NASA/JPL-Caltech/Skull Base Institute.
Now that researchers have demonstrated a laboratory prototype, the next step is a clinical prototype that meets the requirements of the U.S. Food and Drug Administration. The researchers will refine the engineering of the tool to make it suitable for use in real-world medical settings.
In the future, the MARVEL camera technology could also have applications for space exploration. A miniature camera such as this could be put on small robots that explore other worlds, delivering intricate 3-D views of geological features of interest.
“You can implement a zoom function and get close-up images showing the surface roughness of rock and other microscopic details,” Manohara said.
“As a skull base surgeon with a specific vision of endoscopic brain surgery, it has been a privilege and a great personal honor working with the JPL team over the past eight years to realize this project,” Shahinian said.
MARVEL is being developed at JPL for the Skull Base Institute, which has licensed the technology from the California Institute of Technology. JPL is managed for NASA by Caltech.
Source: Elizabeth Landau – NASA’s Jet Propulsion Laboratory
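
The complementary-filter, dual-aperture design is essentially a single-lens stereo camera, so once the two color-coded views have been demultiplexed they can be treated like any stereo pair. Here is a rough OpenCV sketch, assuming the two views are already available as separate images (`view_left.png` and `view_right.png` are placeholders; this is generic block-matching stereo, not JPL's actual processing):

```python
import cv2
import numpy as np

# Hypothetical file names; in the real instrument the two views come from
# the two color-coded apertures of a single lens, not from two files.
left = cv2.imread("view_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("view_right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo: disparity is inversely proportional to depth.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Normalize for viewing and save a depth-like image.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("disparity.png", vis)
```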

Tuesday, August 25, 2015

Fwd: Visual Intelligence Through Deep Learning; Open UC Santa Cruz Faculty Position; More



Sent from my iPad

Begin forwarded message:

From: "Embedded Vision Insights from the Embedded Vision Alliance" <newsletter@embeddedvisioninsights.com>
Date: August 25, 2015 at 10:53:20 PM GMT+8
To: john.sokol@gmail.com
Subject: Visual Intelligence Through Deep Learning; Open UC Santa Cruz Faculty Position; More

Embedded Vision Insights
embedded-vision.com
VOL. 5, NO. 15 A NEWSLETTER FROM THE EMBEDDED VISION ALLIANCE Late August 2015
LETTER FROM THE EDITOR

Dear Colleague,

The Alliance continues to publish videos of great presentations from May's Embedded Vision Summit. Make sure you check out, for example, the highly rated keynote "Enabling Ubiquitous Visual Intelligence Through Deep Learning," by Dr. Ren Wu, formerly distinguished scientist at Baidu's Institute of Deep Learning (IDL). Dr. Wu shares an insider's perspective on the practical use of neural networks for vision.

In "Navigating the Vision API Jungle: Which API Should You Use and Why?", Neil Trevett, President of the Khronos Group, maps the landscape of APIs for vision software development. Long-time Alliance collaborator Gary Bradski, President of the OpenCV Foundation, provides an insider's perspective on the new version of OpenCV and how vision developers can utilize it in his presentation, "The OpenCV Open Source Computer Vision Library: Latest Developments." Also make sure to take a look at "Harman's Augmented Navigation Platform: The Convergence of ADAS and Navigation" from that company's Vice President of Technology Strategy, Alon Atsmon.

Roberto Mijat, Visual Computing Marketing Manager at ARM, explores when it makes sense to utilize a graphics core as a coprocessor in his presentation, "Understanding the Role of Integrated GPUs in Vision Applications." And echoing Dr. Wu's neural network focus, Jeff Gehlhaar, Vice President of Technology at Qualcomm, used his presentation "Deep-learning-based Visual Perception in Mobile and Embedded Devices: Opportunities and Challenges" to discuss the benefits, challenges and solutions for implementing neural networks in mobile and embedded devices. And the insights continued the next day at the quarterly Alliance Member Meeting: in "Combining Vision, Machine Learning and Natural Language Processing to Answer Everyday Questions," Faris Alqadah, CEO and Co-Founder of QM Scientific, explains how his firm is simplifying shopping for consumers by extracting and organizing product information from many data sources.

While you're on the Alliance website, make sure to check out all the other great content recently published there. And for timely notification of the publication of new content, subscribe to our RSS feed and Facebook, Google+, LinkedIn company and group, and Twitter social media channels. Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications. Please don't hesitate to let me know how the Alliance can better serve your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

FEATURED VIDEOS

"Embedded Lucas-Kanade Tracking: How It Works, How to Implement It, and How to Use It," a Presentation from Goksel Dedeoglu of PerceptonicPercepTonic
Goksel Dedeoglu, Ph.D., Founder and Lab Director of PercepTonic, presents the "Embedded Lucas-Kanade Tracking: How It Works, How to Implement It, and How to Use It" tutorial at the May 2014 Embedded Vision Summit. This tutorial is intended for technical audiences interested in learning about the Lucas-Kanade (LK) tracker, also known as the Kanade-Lucas-Tomasi (KLT) tracker. Invented in the early '80s, this method has been widely used to estimate pixel motion between two consecutive frames. Dedeoglu explains how the LK tracker works and discusses its advantages and limitations, as well as how to make it more robust and useful. Using DSP-optimized functions from TI's Vision Library (VLIB), he also shows how to detect feature points in real time and track them from one frame to the next using the LK algorithm. He demonstrates this on Texas Instruments' C6678 Keystone DSP, where he detects and tracks thousands of Harris corner features in 1080p HD resolution video.
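
For readers who want to try the same Harris-corner-plus-pyramidal-LK combination on a PC rather than a Keystone DSP, both pieces are available directly in OpenCV. A minimal Python sketch (the input file name is a placeholder, and the parameters are generic defaults rather than anything from the talk):

```python
import cv2

cap = cv2.VideoCapture("input.mp4")  # placeholder input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Detect Harris corners to track (useHarrisDetector=True selects Harris scoring).
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=1000, qualityLevel=0.01,
                              minDistance=7, useHarrisDetector=True, k=0.04)

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade: estimate where each corner moved in the new frame.
    new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None, **lk_params)
    good_new = new_pts[status.ravel() == 1]
    good_old = pts[status.ravel() == 1]
    # Draw the motion of each successfully tracked corner.
    for (x0, y0), (x1, y1) in zip(good_old.reshape(-1, 2), good_new.reshape(-1, 2)):
        cv2.line(frame, (int(x0), int(y0)), (int(x1), int(y1)), (0, 255, 0), 1)
    cv2.imshow("LK tracks", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
    prev_gray, pts = gray, good_new.reshape(-1, 1, 2)
```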

Introduction to the Embedded Vision Opportunity and the Embedded Vision Alliance Community
Jeff Bier, Founder of the Embedded Vision Alliance and President and Co-Founder of BDTI, presents the introductory remarks at the May 2015 Embedded Vision Summit. Jeff provides an overview of the embedded vision market opportunity, challenges, solutions and trends. He also introduces the Embedded Vision Alliance and the resources it offers for both product creators and potential members, and reviews the event agenda and other logistics.

More Videos

FEATURED ARTICLES

Neural Network Processors: Has Their Time Come?
Lately, neural network algorithms have been gaining prominence in computer vision and other fields where there's a need to extract insights based on ambiguous data. Classical computer vision algorithms typically attempt to identify objects by first detecting small features, then finding collections of these small features to identify larger features, and then reasoning about these larger features to deduce the presence and location of an object of interest (such as a face). These approaches can work well when the objects of interest are fairly uniform and the imaging conditions are favorable, but they often struggle when conditions are more challenging. An alternative approach, convolutional neural networks ("CNNs"), which are massively parallel algorithms made up of layers of computation nodes, has been showing impressive results on these more challenging problems. More
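
The phrase "layers of computation nodes" is easier to picture in code. Below is a toy NumPy sketch of a two-layer convolutional stack with random weights standing in for learned ones; it is meant only to show the structure (convolve, sum over input maps, apply a nonlinearity, repeat), not to be a usable classifier.

```python
import numpy as np
from scipy.signal import convolve2d

def conv_layer(feature_maps, filters):
    """One CNN layer: each output map is a sum of 2-D convolutions over
    all input maps, followed by a ReLU nonlinearity."""
    out = []
    for f in filters:                      # f has shape (n_in, kh, kw)
        acc = sum(convolve2d(fm, k, mode="same") for fm, k in zip(feature_maps, f))
        out.append(np.maximum(acc, 0.0))   # ReLU
    return out

rng = np.random.default_rng(0)
image = rng.random((64, 64))               # stand-in for a grayscale input image

layer1 = conv_layer([image], rng.normal(size=(8, 1, 3, 3)))   # 8 low-level feature maps
layer2 = conv_layer(layer1, rng.normal(size=(16, 8, 3, 3)))   # 16 mid-level feature maps
print(len(layer2), layer2[0].shape)
```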

Facebook Oculus Acquires Pebbles Interfaces for Gesture Control
Last month, Facebook's subsidiary Oculus reported that it had acquired Pebbles Interfaces. Based in Israel, Pebbles Interfaces has spent the past five years developing technology that uses custom optics, sensor systems and algorithms to detect and track hand movement. Pebbles Interfaces will be joining the hardware engineering and computer vision teams at Oculus to help advance virtual reality, tracking, and human-computer interactions. More

More Articles

FEATURED COMMUNITY DISCUSSION

Faculty Position Open at UC Santa Cruz

More Community Discussions

FEATURED NEWS

Upcoming Free Qualcomm Vuforia Webinar Discusses Enabling Mobile Apps to See

Intel Expands Developer Opportunities as Computing Expands Across All Areas of Peoples' Lives

Altera Launches Worldwide SoC FPGA Developers Forums

ON Semiconductor Introduces Series of Advanced Image Co-Processors for Next Generation Automotive Camera

More News



Sunday, August 23, 2015

Let's build a humanoid robot with computer vision


---------- Forwarded message ----------
From: Hack A Robot <info@meetup.com>
Date: Saturday, August 22, 2015
Subject: Invitation: Let's build a humanoid robot with computer vision



Hack A Robot
Added by Thomas Lee
Thursday, August 27, 2015
6:30 PM
South bay area
TBD
South bay area, CA

(Venue of this event is TBD. Looking for suggestions/offers to host this event as well.) Earlier this year, I created Hackabot Nano (a feature-rich Arduino compatible wheeled robot). It was crowdfunded through Kickstarter. The robot was p...

3D projectors and HMD viewers in Huaqiang market, Shenzhen, China

WebRTC Codec Wars: Rebooted

  • Wednesday, September 9, 2015

    6:00 PM

  • Tokbox

    501 2nd Street, San Francisco, CA (map)

  • Just when we thought we were done with the video codec wars in WebRTC – we found out we're only just beginning.

    In the past several weeks we've seen the names Thor, Daala, VP9 and H.265 thrown in the news as potential candidates to replace our current generation of video codecs. How is that going to play out, and where are we headed with all this?

    We can't be sure but we think Tsahi Levent-Levi can make a few educated guesses about it.

    Join TokBox and Tsahi to discuss the power plays of the video coding industry.

    Come along to TokBox HQ at 6:00pm for a 6:30pm start.

    As usual, we will provide pizza and beer.

    We look forward to seeing you there!


    About the speaker:

    Tsahi Levent-Levi is an Independent Analyst and Consultant for WebRTC.

    Tsahi Levent-Levi has over 15 years of experience in the telecommunications, VoIP and 3G industry as an engineer, manager, marketer and CTO. Tsahi is an entrepreneur, independent analyst and consultant, assisting companies to form a bridge between technologies and business strategy in the domain of telecommunications.

    Tsahi has an MSc in Computer Science, and an MBA degree specializing in Entrepreneurship and Strategy. Tsahi has been granted three patents related to 3G-324M and VoIP. He acted as the chairman of various activity groups within the IMTC, an organization focusing on interoperability of multimedia communications.

    Tsahi is the author and editor of bloggeek.me, which focuses on the ecosystem and business opportunities around WebRTC.


Sunday, August 02, 2015

Robot stereo vision





From http://smprobotics.com/



They make an Unmanned Ground Video Surveillance Vehicle based on this.


Sunday, June 28, 2015

Kiwi drone racers get ahead of the competition | NZNews | 3 News

http://www.3news.co.nz/nznews/kiwi-drone-racers-get-ahead-of-the-competition-2015062418

What could have been possible back in 1981


https://www.youtube.com/watch?v=MWdG413nNkI

I thought we were pushing the limits at the time by playing back digital audio from memory and later even streaming it off a floppy drive.

This goes to show that not only could the audio have been done, but video as well.
Now, to be honest, they are using a Sound Blaster card, which didn't exist for another six years or so (August 1987).

The only things that have changed are optimization and algorithms, improvements in software development tools, completely open documentation, countless examples, and easy access to video source material.

I wasn't able to capture video until around 1984; it was black-and-white thresholded, looked dreadful, and took several seconds to capture one image. I wasn't able to capture proper grayscale until 1987, and that was on a board that cost several thousand dollars. Color video capture didn't come until 1990, on a Sun workstation; all of this was exotic and super expensive at the time.

It wasn't until 1995, with the $500 Matrox Meteor, that decent high-quality video capture was possible. We were just reaching 90 to 120 MHz Pentiums at that time.

Imagine what could be possible with today's hardware if only the software were optimized.






Sunday, June 14, 2015

Future of Adobe Photoshop Demo - GPU Technology Conference - YouTube

Demonstration of turning a light field image into a viewable image using the GPU to accelerate performance.



https://youtu.be/lcwm4yaom4w

http://www.netbooknews.com/9917/futur... There was a lot going on during the Day 1 keynote at Nvidia's GPU conference; as far as consumers are concerned, there was one significant demo. The demo involved Adobe showing off a technology being developed using a plenoptic lens, a.k.a. one of those lenses that looks like a bug eye. By attaching a number of small lenses to a camera, users can take a picture that looks, at first, like something an insect might see, with each lenslet capturing a small part of the entire picture. Using an algorithm Adobe invented, the multitude of little images can be resolved into a single photo. The possibilities are endless for how you can manipulate the photo. 2D images are made 3D, and they can adjust the focal point in real time so the background comes into focus and the subject goes blurry. Fingers crossed this gets into Adobe CS6 or CS7.
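
The refocusing trick in the demo is, at its core, the textbook shift-and-add operation on the light field: each sub-aperture (lenslet) view sees the scene from a slightly different position, so shifting the views against one another before averaging moves the synthetic plane of focus. A minimal sketch, assuming the sub-aperture images have already been extracted into a 4-D array (this is the generic algorithm, not Adobe's GPU implementation):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(views, alpha):
    """Shift-and-add refocusing of a light field.

    views: array of sub-aperture images with shape (U, V, H, W), i.e. one
           image per (u, v) position on the main-lens aperture.
    alpha: refocus parameter; 0 keeps the original focal plane, positive or
           negative values move the synthetic focal plane forward or back.
    """
    U, V, H, W = views.shape
    acc = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the aperture center.
            du = alpha * (u - (U - 1) / 2.0)
            dv = alpha * (v - (V - 1) / 2.0)
            acc += nd_shift(views[u, v], (du, dv), order=1, mode="nearest")
    return acc / (U * V)
```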

Light Field Cameras for 3D Imaging and Reconstruction | Thoughts from Tom Kurke


This is an excellent article about light field / plenoptic cameras, Pelican Imaging, and Lytro, listing all the significant published papers in this area.
http://3dsolver.com/light-field-cameras-for-3d-imaging/



Superresolution with Plenoptic Camera 2.0


Excellent paper

Abstract:
This work is based on the plenoptic 2.0 camera, which captures an array of real images focused on the object. We show that this very fact makes it possible to use the camera data with super-resolution techniques, which enables the focused plenoptic camera to achieve high spatial resolution. We derive the conditions under which the focused plenoptic camera can capture radiance data suitable for super-resolution. We develop an algorithm for super-resolving those images. Experimental results are presented that show a 9× increase in spatial resolution compared to the basic plenoptic 2.0 rendering approach.

http://www.tgeorgiev.net/Superres.pdf



Image and Depth from a Conventional Camera with a Coded Aperture



Image and Depth from a Conventional Camera with a Coded Aperture
Anat Levin, Rob Fergus, Fredo Durand, Bill Freeman

Abstract
A conventional camera captures blurred versions of scene information away from the plane of focus. Camera systems have been proposed that allow for recording all-focus images, or for extracting depth, but to record both simultaneously has required more extensive hardware and reduced spatial resolution. We propose a simple modification to a conventional camera that allows for the simultaneous recovery of both (a) high resolution image information and (b) depth information adequate for semi-automatic extraction of a layered depth representation of the image.
Our modification is to insert a patterned occluder within the aperture of the camera lens, creating a coded aperture. We introduce a criterion for depth discriminability which we use to design the preferred aperture pattern. Using a statistical model of images, we can recover both depth information and an all-focus image from single photographs taken with the modified camera. A layered depth map is then extracted, requiring user-drawn strokes to clarify layer assignments in some cases. The resulting sharp image and layered depth map can be combined for various photographic applications, including automatic scene segmentation, post-exposure refocusing, or re-rendering of the scene from an alternate viewpoint.

http://groups.csail.mit.edu/graphics/CodedAperture/
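
A toy version of the coded-aperture idea can be sketched in a few lines: deconvolve the photograph with the aperture code scaled to several candidate blur sizes (i.e. candidate depths) and keep the scale whose result looks most like a natural image. The sketch below scores candidates crudely by gradient sparsity instead of the paper's statistical image prior, and assumes the aperture code is known:

```python
import numpy as np

def wiener_deconv(image, kernel, snr=0.01):
    """Frequency-domain Wiener deconvolution with a flat noise model."""
    H = np.fft.fft2(kernel, s=image.shape)
    G = np.fft.fft2(image)
    F = np.conj(H) / (np.abs(H) ** 2 + snr) * G
    return np.real(np.fft.ifft2(F))

def scaled_kernel(code, scale):
    """Nearest-neighbour rescale of the binary aperture code to a given size."""
    idx = (np.arange(scale)[:, None] * code.shape[0] // scale,
           np.arange(scale)[None, :] * code.shape[1] // scale)
    k = code[idx].astype(float)
    return k / k.sum()

def best_blur_scale(image, code, scales):
    """Deconvolve with each candidate blur scale; prefer the result whose
    gradients stay sparse (wrong scales tend to produce heavy ringing)."""
    scores = []
    for s in scales:
        est = wiener_deconv(image, scaled_kernel(code, s))
        grad = np.abs(np.diff(est, axis=0)).sum() + np.abs(np.diff(est, axis=1)).sum()
        scores.append(grad)
    return scales[int(np.argmin(scores))]

# Example: a 5x5 binary aperture code and a search over a few blur scales.
code = np.array([[1, 0, 1, 1, 0],
                 [0, 1, 1, 0, 1],
                 [1, 1, 0, 1, 0],
                 [0, 1, 1, 0, 1],
                 [1, 0, 0, 1, 1]])
# blurred = ...  # a coded-aperture photograph loaded as a float array
# print(best_blur_scale(blurred, code, scales=[5, 9, 13, 17]))
```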


Tuesday, June 09, 2015

Glasses-free VR display


http://polarscreens.com/

A virtual reality 3D display that uses head tracking to optimize the 3D effect for a single viewer.



https://youtu.be/4E9HZZ9UgcI




Saturday, June 06, 2015

Building your own SDR-based Passive Radar on a Shoestring | Hackaday

A software-defined radio (SDR) based passive radar using low-cost hardware.

http://hackaday.com/2015/06/05/building-your-own-sdr-based-passive-radar-on-a-shoestring/

VideoCoreIV-AG100-R Raspberry Pi GPU docs


https://docs.broadcom.com/doc/12358545

and just in case

https://github.com/doe300/VC4C/tree/master/doc

You want:   VideoCoreIV-AG100-R.pdf 


Original, now dead link:
http://www.broadcom.com/docs/support/videocore/VideoCoreIV-AG100-R.pdf


How to optimize Raspberry Pi code using its GPU « Pete Warden's blog

http://petewarden.com/2014/08/07/how-to-optimize-raspberry-pi-code-using-its-gpu/


Computational cameras

Computational Cameras: Convergence of Optics and Processing

Changyin Zhou, Student Member, IEEE, and Shree K. Nayar, Member, IEEE

Abstract—A computational camera uses a combination of optics and processing to produce images that cannot be captured with traditional cameras. In the last decade, computational imaging has emerged as a vibrant field of research. A wide variety of computational cameras has been demonstrated to encode more useful visual information in the captured images, as compared with conventional cameras. In this paper, we survey computational cameras from two perspectives. First, we present a taxonomy of computational camera designs according to the coding approaches, including object side coding, pupil plane coding, sensor side coding, illumination coding, camera arrays and clusters, and unconventional imaging systems. Second, we use the abstract notion of light field representation as a general tool to describe computational camera designs, where each camera can be formulated as a projection of a high-dimensional light field to a 2-D image sensor. We show how individual optical devices transform light fields and use these transforms to illustrate how different computational camera designs (collections of optical devices) capture and encode useful visual information.


http://www1.cs.columbia.edu/CAVE/publications/pdfs/Zhou_TIP11.pdf
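
The survey's central formulation, each camera as a projection of a high-dimensional light field onto a 2-D sensor, can be written down directly. A small sketch with a discrete 4-D light field L[u, v, s, t]: a conventional camera integrates over the pupil coordinates (u, v), while a pupil-plane-coded camera weights that integral with a mask (toy data below, not from the paper):

```python
import numpy as np

def render_image(light_field, pupil_mask=None):
    """Project a discrete 4-D light field L[u, v, s, t] onto a 2-D sensor.

    A conventional camera integrates all rays over the pupil (u, v);
    a pupil-plane-coded camera weights the rays with a mask over (u, v).
    """
    U, V, S, T = light_field.shape
    if pupil_mask is None:
        pupil_mask = np.ones((U, V))          # open aperture
    return np.tensordot(pupil_mask, light_field,
                        axes=([0, 1], [0, 1])) / pupil_mask.sum()

rng = np.random.default_rng(0)
L = rng.random((5, 5, 64, 64))                 # toy light field
conventional = render_image(L)
checkerboard = (np.indices((5, 5)).sum(axis=0) % 2).astype(float)
coded = render_image(L, pupil_mask=checkerboard)  # simple pupil-plane code
```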


Node.JS truly on Android and iOS | Oguz Bastemur

http://oguzbastemur.blogspot.com/2015/03/nodejs-truly-on-android-and-ios.html


Sent from my iPad

OpenMAX

http://en.m.wikipedia.org/wiki/OpenMAX

OpenMAX (Open Media Acceleration), often shortened to "OMX", is a non-proprietary and royalty-free cross-platform set of C-language programming interfaces that provides abstractions for routines especially useful for audio, video, and still-image processing. It is intended for low-power and embedded system devices (including smartphones, game consoles, digital media players, and set-top boxes) that need to efficiently process large amounts of multimedia data in predictable ways, such as video codecs, graphics libraries, and other functions for video, image, audio, voice and speech.

RealTime Data Compression: LZ4 Frame format : Final specifications

http://fastcompression.blogspot.fr/2013/04/lz4-streaming-format-final.html


Sent from my iPad

Thursday, June 04, 2015

VR demo & fireside chat with Amir Rubin, industry pioneer & CEO of Sixense

http://www.youtube.com/watch?v=48dt9Q3FJYs&sns=em


#TWiSTLive!! Virtual reality is an exciting & hotly debated space. Its potential is massive, but will it ever really "arrive"? In today's live show, recorded at Samsung Global Innovation Center, Jason answers this question with a resounding YES, as he demos the latest & greatest VR and hosts a riveting fireside chat with Amir Rubin, VR pioneer & CEO of Sixense, a premiere virtual reality platform. After a lively demo of Sixense's cutting-edge, award-winning products, Jason talks with Amir about his inspirations, the state of the field, and what's coming. We learn why VR is not just about gaming but about every industry (esp. healthcare, education), why motion sickness is no longer an issue, how technologies like 3D printing help VR immensely, how developers are building amazing applications on the Sixense platform, how every phone going forward will be VR ready, why VR is so effective in training for high-risk jobs like welders & pilots, why service providers will give VR headsets away for free, how VR is going to be a new way for creative people to monetize their skills, how movie studios can use VR for you to actually experience being the super hero, why the biggest challenge facing VR is educating consumers -- and much much more! 

Full show notes: http://goo.gl/v3JaLc

Saturday, May 30, 2015

Billboard Advertising Banned Products In Russia Hides If It Recognizes Cops



Posted by samzenpus  
from the next-slide dept.
m.alessandrini writes: In response to a ban on food imported from the European Union, an Italian grocery in Russia hired an ad agency to create a billboard with a camera and facial recognition software that's able to change to a different ad when it recognizes the uniform of Russian cops. Gizmodo reports: "With the aid of a camera and facial recognition software, the technology was slightly tweaked to instead recognize the official symbols and logos on the uniforms worn by Russian police. And as they approached the billboard featuring the advertisement for Don Giulio Salumeria's imported Italian goods, it would automatically change to an ad for a Matryoshka doll shop instead."
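
No details of the agency's actual software are public, but the overall pattern (camera frame in, detector decides, displayed creative swapped) is simple to sketch. The cascade file and ad images below are hypothetical placeholders standing in for whatever classifier and creatives were really used:

```python
import cv2

# Hypothetical pre-trained detector standing in for the agency's uniform/insignia model.
detector = cv2.CascadeClassifier("police_uniform_cascade.xml")

primary_ad = cv2.imread("italian_goods_ad.png")     # placeholder creatives
fallback_ad = cv2.imread("matryoshka_shop_ad.png")

cam = cv2.VideoCapture(0)
while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hits = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Show the innocuous ad whenever the detector fires, otherwise the real one.
    cv2.imshow("billboard", fallback_ad if len(hits) > 0 else primary_ad)
    if cv2.waitKey(30) == 27:   # Esc to quit
        break
```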

Monday, April 20, 2015

Thursday, February 12, 2015

FLIR to Manufacture Thermal Imagers at TowerJazz


GlobeNewswire: FLIR is partnering with TowerJazz to manufacture its next-generation IR technology for smartphone, security, industrial and other markets. "The partnership between TowerJazz and FLIR has been excellent. Together, we have rapidly implemented a state of the art manufacturing capability in Newport Beach, CA. The Newport Beach site can facilitate commercial, ITAR (International Traffic in Arms Regulations), and more recently Category 1A & 1B "Trusted" services through our subsidiary, Jazz Semiconductor Trusted Foundry (JSTF), as accredited by the U.S. Department of Defense. In particular, we developed an industry leading capability for advanced micro thermal pixel array production," said David Howard, Executive Director and Fellow, TowerJazz.

These micro thermal pixel arrays include novel, high density pixels that enable FLIR to offer more powerful sensors with higher resolution in compact form factors. "When we combine the micro thermal arrays with integrated FLIR wafer level optics, software and MSX technology with TowerJazz process and manufacturing capability, we are able to deliver breakthrough product performance and value," said Tom Surran, COO at FLIR Systems.

Russian computer giant acquires part of Zecotek 3D printer development

http://3dprintingindustry.com/2014/07/17/russian-computer-giant-acquire-part-zecotek-3d-printer-development/


Tuesday, February 10, 2015

Monday, February 09, 2015

What are Mattel and Google doing with View-Master?

From: http://www.engadget.com/2015/02/06/google-mattel-view-master/

With a View-Master topped teaser (which you can see after the break), Google and Mattel invoked one of our favorite childhood memories -- and frequent inspiration for low-budget virtual reality shenanigans. The two are planning an "exclusive announcement and product debut" ahead of the New York Toy Fair next week, but other than the View-Master theme there's little to go on. Mattel's Fisher-Price division tried a View-Master comeback for the digital age in 2012, although all trace of it is gone now. We'll have to wait until next Friday to see for ourselves what they're planning, but we invite your wildest speculation until then. So what are you thinking -- a plastic pair of branded Mattel VR goggles based on the Cardboard project, or maybe a Hot Wheel based on something else Google has been working on?

LG unveils VR for G3, its own virtual reality headset inspired by Google Cardboard



Today LG announced the impending launch of its own virtual reality headset, a low-tech device which sits somewhere between Samsung VR and Google’s DIY Cardboard kit. LG calls it “VR for G3.”
As the name implies, LG will bundle its virtual reality headset with its G3 smartphone. The promotion will occur in “select markets” in the coming months. For now, that’s all we know about the release date.
Here’s how the device works, according to LG:
The design of VR for G3 is based on the blueprint for Google Cardboard, available online for home DIY fans. The neodymium ring magnet on the side of the VR for G3 works with the magnetic gyroscope sensor in the G3 to select applications and scroll through menus without touching the display. VR for G3 requires no assembly other than inserting the phone in the viewer.

http://venturebeat.com/2015/02/09/lg-teases-vr-for-g3-a-virtual-reality-headset-inspired-by-google-cardboard/


Not in front of the telly: Warning over 'listening' TV

Samsung said personal information could be scooped up by the Smart TV

Samsung is warning customers about discussing personal information in front of their smart television set.

The warning applies to TV viewers who control their Samsung Smart TV using its voice activation feature.

Such TV sets "listen" to some of what is said in front of them and may share details they hear with Samsung or third parties, it said.

Privacy campaigners said the technology smacked of the telescreens in George Orwell's 1984, which spied on citizens.

www.bbc.com/news/technology-31296188

Wednesday, January 28, 2015

Hacker Dojo Lightning Talks, CES 2015

Monday, January 26, 2015

WebP image format



http://en.wikipedia.org/wiki/WebP

WebP is an image format employing both lossy and lossless compression. It is currently developed by Google, based on technology acquired with the purchase of On2 Technologies. As a derivative of the VP8 video format, it is a sister project to the WebM multimedia container format. WebP-related software is released under a BSD license.
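
Pillow can read and write WebP (when built with libwebp support), so experimenting with the format from Python is a one-liner per file. A quick sketch with placeholder file names:

```python
from PIL import Image

img = Image.open("photo.png")                            # placeholder input image
img.save("photo_lossy.webp", "WEBP", quality=80)         # lossy, VP8-derived coding
img.save("photo_lossless.webp", "WEBP", lossless=True)   # lossless mode

# Reading it back works like any other Pillow-supported format.
roundtrip = Image.open("photo_lossless.webp")
print(roundtrip.size, roundtrip.mode)
```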