Wednesday, August 31, 2016

Fwd: Low-cost, Low-power Neural Networks; Real-time Object Detection and Classification; More


---------- Forwarded message ----------
From: Embedded Vision Insights from the Embedded Vision Alliance <newsletter@embeddedvisioninsights.com>
Date: Tue, Aug 30, 2016 at 7:36 AM
Subject: Low-cost, Low-power Neural Networks; Real-time Object Detection and Classification; More



embedded-vision.com
VOL. 6, NO. 17 A NEWSLETTER FROM THE EMBEDDED VISION ALLIANCE Late August 2016
FEATURED VIDEOS

"Tailoring Convolutional Neural Networks for Low-Cost, Low-Power Implementation," a Presentation from Synopsys
Deep learning-based object detection using convolutional neural networks (CNN) has recently emerged as one of the leading approaches for achieving state-of-the-art detection accuracy for a wide range of object classes. Most of the current CNN-based detection algorithm implementations run on high-performance computing platforms that include high-end general-purpose processors and GP-GPUs. These CNN implementations have significant computing power and memory requirements. Bruno Lavigueur, Project Leader for Embedded Vision at Synopsys, presents the company's experience in reducing the complexity of the CNN graph to make the resulting algorithm amenable to low-cost and low-power computing platforms. This involves reducing the compute requirements, memory size for storing convolution coefficients, and moving from floating point to 8 and 16 bit fixed point data widths. Lavigueur demonstrates results for a face detection application running on a dedicated low-cost and low-power multi-core platform optimized for CNN-based applications.
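The move from floating point to 8- and 16-bit fixed point mentioned above can be illustrated with a generic symmetric-quantization sketch. This is not Synopsys's actual tool flow; the function name and per-tensor scaling scheme are assumptions chosen for simplicity.

```python
import numpy as np

def quantize_fixed_point(weights, bits=8):
    """Quantize float weights to signed fixed-point integers with a
    per-tensor scale, so that weights ~= q * scale."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit
    scale = np.max(np.abs(weights)) / qmax
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

# Example: quantize a small random 3x3 convolution kernel to 8 bits.
rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 3)).astype(np.float32)
q, scale = quantize_fixed_point(kernel, bits=8)

# Worst-case reconstruction error is half a quantization step.
error = np.max(np.abs(kernel - q * scale))
```

Storing `q` instead of `kernel` cuts coefficient memory by 4x versus 32-bit floats, which is one of the savings the presentation describes.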

"An Augmented Navigation Platform: The Convergence of ADAS and Navigation," a Presentation from Harman
Until recently, advanced driver assistance systems (ADAS) and in-car navigation systems have evolved as separate standalone systems. Today, however, the combination of available embedded computing power and modern computer vision algorithms enables the merger of these functions into an immersive driver information system. In this presentation, Alon Atsmon, Vice President of Technology Strategy at Harman International, discusses the company's activities in ADAS and navigation. He explores how computer vision enables more intelligent systems with more natural user interfaces, and highlights some of the challenges associated with using computer vision to deliver a seamless, reliable experience for the driver. Atsmon also demonstrates Harman's latest unified Augmented Navigation platform which overlays ADAS warnings and navigation instructions over a camera feed or over the driver's actual road view.

More Videos

FEATURED ARTICLES

A Design Approach for Real Time Classifiers (PathPartner Technology)
Object detection and classification is a supervised learning process in machine vision to recognize patterns or objects from images or other data, according to Sudheesh TV and Anshuman S Gauriar, Technical Leads at PathPartner Technology. It is a major component in advanced driver assistance systems (ADAS), for example, where it is commonly used to detect pedestrians, vehicles, traffic signs etc. The offline classifier training process fetches sets of selected images and other data containing objects of interest, extracts features from this input, and maps them to corresponding labelled classes in order to generate a classification model. Real-time inputs are categorized based on the pre-trained classification model in an online process which finally decides whether or not the object is present. More
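The offline-training / online-classification split described above can be sketched with a deliberately minimal stand-in. The code below trains a nearest-centroid classifier on toy 2-D feature vectors (stand-ins for features extracted from images) and then categorizes a new input against the pre-trained model; it is not PathPartner's method, only the generic train-then-classify pattern.

```python
import numpy as np

def train(features, labels):
    """Offline stage: build a 'classification model' by averaging the
    labelled feature vectors of each class into a centroid."""
    classes = np.unique(labels)
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def classify(model, x):
    """Online stage: assign a real-time feature vector to the class
    whose centroid is nearest."""
    classes, centroids = model
    return classes[np.argmin(np.linalg.norm(centroids - x, axis=1))]

# Toy data: class 0 clusters near the origin, class 1 near (5, 5).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
model = train(X, y)
```

A real ADAS classifier would replace the centroid model with a trained CNN or SVM and the toy vectors with HOG or learned features, but the offline/online division is the same.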

Pokemon Go-es to Show the Power of AR (ARM)
Pokemon Go is an awesome concept, says Freddi Jeffries, Content Marketer at ARM. While she's a strong believer in VR (virtual reality) as a driving force in how we will handle much of our lives in the future, she can now see that apps like this have the potential to take AR (augmented reality) mainstream much faster than VR. More

More Articles

FEATURED NEWS

Intel Announces Tools for RealSense Technology Development

Auviz Systems Announces Video Content Analysis Platform for FPGAs

Sighthound Joins the Embedded Vision Alliance

Qualcomm Helps Make Your Mobile Devices Smarter with New Snapdragon Machine Learning Software Development Kit

Basler Addressing the Industry's Hot Topics at VISION Stuttgart

More News

UPCOMING INDUSTRY EVENTS

ARC Processor Summit: September 13, 2016, Santa Clara, California

Deep Learning for Vision Using CNNs and Caffe: A Hands-on Tutorial: September 22, 2016, Cambridge, Massachusetts

IEEE International Conference on Image Processing (ICIP): September 25-28, 2016, Phoenix, Arizona

SoftKinetic DepthSense Workshop: September 26-27, 2016, San Jose, California

Sensors Midwest (use code EVA for a free Expo pass): September 27-28, 2016, Rosemont, Illinois

Embedded Vision Summit: May 1-3, 2017, Santa Clara, California



Embedded Vision Alliance · 1646 North California Blvd., Suite 220 · Walnut Creek, California 94596 · USA


Thursday, July 21, 2016

Fwd: Jovision 3.0MP Starlight Test Report


---------- Forwarded message ----------
From: Lulu Wang <us@jovision.com>
Date: Thu, Jul 21, 2016 at 8:40 AM
Subject: Jovision 3.0MP Starlight Test Report



Dear John Sokol,

We now have 3.0MP/4.0MP starlight IP cameras. Please see the pictures below.


First-class starlight cameras, developed through our own R&D.


Lulu Wang
Jovision Technology Co., Ltd

Address: 12th Floor, No.3 Building, Aosheng Square, No.1166 Xinluo Street, Jinan, China    ZIP: 250101
Website: http://en.jovision.com    Tel: 0086-0531-55691778-8668
E-mail: us@jovision.com    Skype: lulu-jovision
 

Saturday, June 04, 2016

Flat lens promises possible revolution in optics

http://www.bbc.com/news/science-environment-36438686

[Image: This electron microscope image shows the structure of the lens (white line is 0.002mm long). Credit: Federico Capasso]
A flat lens made of paint whitener on a sliver of glass could revolutionise optics, according to its US inventors.
Just 2mm across and finer than a human hair, the tiny device can magnify nanoscale objects and gives a sharper focus than top-end microscope lenses.
It is the latest example of the power of metamaterials, whose novel properties emerge from their structure.
Shapes on the surface of this lens are smaller than the wavelength of light involved: a thousandth of a millimetre.
"In my opinion, this technology will be game-changing," said Federico Capasso of Harvard University, the senior author of a report on the new lens which appears in the journal Science.
The lens is quite unlike the curved disks of glass familiar from cameras and binoculars. Instead, it is made of a thin layer of transparent quartz coated in millions of tiny pillars, each just tens of nanometres across and hundreds high.
Singly, each pillar interacts strongly with light. Their combined effect is to slice up a light beam and remould it as the rays pass through the array (see video below).
[Video: Light passing through the "metalens" is focussed by the array of nanostructures on its surface. Credit: Capasso Lab/Harvard]
Computer calculations are needed to find the exact pattern which will replicate the focussing effect of a conventional lens.
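For an ideal flat lens, the target of those calculations is a hyperbolic phase profile: each point on the surface must delay light just enough that every path arrives at the focal point in phase. A minimal sketch of that target profile, assuming the standard textbook formula rather than the authors' actual design code:

```python
import numpy as np

def metalens_phase(r, wavelength, focal_length):
    """Required phase shift (radians) at radius r from the lens centre.

    Equalizing the optical path from every surface point to the focus
    gives phi(r) = (2*pi/wavelength) * (f - sqrt(r**2 + f**2)),
    which is what the nanopillar pattern must approximate."""
    return (2 * np.pi / wavelength) * (
        focal_length - np.sqrt(r**2 + focal_length**2)
    )

# Example: sample the profile of a 1mm-radius lens for 405nm light
# with a 2mm focal length (illustrative numbers, not the paper's).
r = np.linspace(0.0, 1e-3, 5)
phi = metalens_phase(r, wavelength=405e-9, focal_length=2e-3)
```

Each pillar's size and position is then chosen so that, locally, it imparts this phase (modulo 2*pi) to the transmitted light.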
The advantage, Prof Capasso said, is that these "metalenses" avoid shortfalls - called aberrations - that are inherent in traditional glass optics.
"The quality of our images is actually better than with a state-of-the-art objective lens. I think it is no exaggeration to say that this is potentially revolutionary."
Those comparisons were made against top-end lenses used in research microscopes, designed to achieve absolute maximum magnification. The focal spot of the flat lens was typically 30% sharper than its competition, meaning that in a lab setting, finer details can be revealed.
But the technology could be revolutionary for another reason, Prof Capasso maintains.
"The conventional fabrication of shaped lenses depends on moulding and essentially goes back to 19th Century technology.
"But our lenses, being planar, can be fabricated in the same foundries that make computer chips. So all of a sudden the factories that make integrated circuits can make our lenses."
And with ease. Electronics manufacturers making microprocessors and memory chips routinely craft components far smaller than the pillars in the flat lenses. Yet a memory chip containing billions of components may cost just a few pounds.
[Image: The lens is much more compact than a traditional microscope objective. Credit: Federico Capasso]
Mass production is the key to managing costs, which is why Prof Capasso sees cell-phone cameras as an obvious target. Most of their other components, including the camera's detector, are already made with chip technology. Extending that to include the lens would be natural, he argues.
There are many other potential uses: mass-produced cameras for quality control in factories, light-weight optics for virtual-reality headsets, even contact lenses. "We can make these on soft materials," Prof Capasso assured the BBC.
The prototype lenses are 2mm across, but only because of the limitations of the Harvard manufacturing equipment. In principle, the method could scale to any size, Prof Capasso said.
"Once you have the foundry - you want a 12-inch lens? Feel free, you can make a 12-inch lens. There's no limit."
The precise character of the lens depends on the layout and composition of the pillars. Paint-whitener - titanium dioxide - is used to make the pillars, because it is transparent and interacts strongly with visible light. It is also cheap.
[Image: The minuscule pillars have a powerful effect on light passing through. Credit: Peter Allen/Harvard]
The team has previously worked with silicon, which functions well in the infrared. Other materials could be used to make ultraviolet lenses.
Or to get a different focus, engineers could change the size, spacing and orientation of the pillars. It simply means doing the computer calculations and dialling the results into the new design.
The team is already working on beating the performance of its first prototypes. Watch this space, they say - if possible, with a pair of metalenses.