Friday, December 30, 2016

Massive DDoS attacks coming from IoT cameras

https://blog.cloudflare.com/say-cheese-a-snapshot-of-the-massive-ddos-attacks-coming-from-iot-cameras/

Thursday, December 22, 2016

The OpenMV project - OpenMV Cam M7

https://openmv.io/

The OpenMV Cam M7 is powered by the 216 MHz ARM Cortex-M7 processor, which can execute up to 2 instructions per clock.

512KB of RAM, enabling 640x480 grayscale images/video (up to 320x240 for RGB565 stills)

MicroPython gets 64KB more heap space (~100KB total) on the OpenMV Cam M7, so you can do more in MicroPython now

$65 retail ($55 to pre-order)

The OpenMV project is about creating low-cost, extensible, Python-powered machine vision modules and aims at becoming the "Arduino of Machine Vision". Our goal is to bring machine vision algorithms closer to makers and hobbyists. We've done the difficult and time-consuming algorithm work for you, leaving more time for your creativity!
The OpenMV Cam is like a super-powerful Arduino with a camera on board that you program in Python. We make it easy to run machine vision algorithms on what the OpenMV Cam sees, so you can track colors, detect faces, and more in seconds, and then control I/O pins in the real world.
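As a rough idea of what programming it looks like, here is a minimal MicroPython color-blob-tracking sketch of the kind OpenMV ships as examples; the LAB threshold values below are arbitrary placeholders, not tuned for any real target:

import sensor, time

sensor.reset()                           # initialize the camera sensor
sensor.set_pixformat(sensor.RGB565)      # color mode
sensor.set_framesize(sensor.QVGA)        # 320x240
sensor.skip_frames(time=2000)            # let auto exposure settle
clock = time.clock()

# LAB color thresholds -- placeholder values, tune these for your target color
thresholds = [(30, 100, 15, 127, 15, 127)]

while True:
    clock.tick()
    img = sensor.snapshot()
    for blob in img.find_blobs(thresholds, pixels_threshold=200, area_threshold=200):
        img.draw_rectangle(blob.rect())          # mark the blob in the frame buffer
        img.draw_cross(blob.cx(), blob.cy())
    print(clock.fps())                           # frames per second while tracking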


Monday, December 12, 2016

Fwd: live "gov" webcams on oahu


---------- Forwarded message ----------
From: WW
Date: Mon, Dec 12, 2016 at 9:10 AM
Subject: live "gov" webcams on oahu
To: John Sokol 


http://www.honolulu.gov/cameras.html

there's many more, but thought this was interesting, 
  because they are "gov public info" cams

on the traffic cams, 
  i once followed a red convertible full of girls thru town (LOL)
    don't know if you can still do that or not....  

anyway, thought you might like to see and put this in your link files

Sunday, December 11, 2016

SYNQ - Video API built for developers

https://www.synq.fm/

SYNQ is a cloud-based video API.

What it can do:
  • Simple video upload and storage
  • Transcoding into various formats for a variety of platforms
  • A customizable, embedded player
  • Attaching custom video metadata (like tags, groups, playlists, and so on)
  • Webhook notifications for various events
  • Geo-local content delivery

Multiple libraries for mobile, web and servers. Use the client libraries for Python, JavaScript, iOS, Android and others, or call the API directly through HTTP POST requests.

Automatically switch between several Content Delivery Networks (CDNs) to increase performance and improve user experience.
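For a sense of what "directly through HTTP POST requests" can look like, here is a hedged Python sketch using the requests library; the base URL, endpoint paths, parameter names and response fields below are hypothetical placeholders for illustration, not SYNQ's documented API (see synq.fm for the real thing):

import requests

API_KEY = "YOUR_API_KEY"                          # hypothetical credential
BASE = "https://api.example-video-service.com"    # placeholder host, not SYNQ's real endpoint

# 1. Create a video object and get back an id plus an upload target (hypothetical call)
video = requests.post(BASE + "/v1/video/create", data={"api_key": API_KEY}).json()

# 2. Upload the file itself with a multipart POST
with open("clip.mp4", "rb") as f:
    requests.post(video["upload_url"], files={"file": f})

# 3. Attach custom metadata such as tags (hypothetical call)
requests.post(BASE + "/v1/video/update",
              data={"api_key": API_KEY,
                    "video_id": video["video_id"],
                    "userdata": '{"tags": ["demo"]}'})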

Thursday, December 08, 2016

Magic Leap is actually way behind, like we always suspected it was

http://www.theverge.com/2016/12/8/13894000/magic-leap-ar-microsoft-hololens-way-behind 

Magic Leap’s allegedly revolutionary augmented reality technology may in fact be years away from completion and, as it stands now, is noticeably inferior to Microsoft’s HoloLens headset, according to a report from The Information. The report, which incorporates an interview with Magic Leap CEO Rony Abovitz, reveals that the company posted a misleading product demo last year showcasing its technology. The company has also had trouble miniaturizing its AR technology from a bulky helmet-sized device into the pair of everyday glasses that Abovitz has repeatedly claimed the finished product will be.

YES, MORE BULLSHITTERS. Meanwhile I can't get the several AR startups I'm advising any money, because guys like this sucked it all up.

Monday, December 05, 2016

Google Glass Teardown





What's inside Google Glass?

Tuesday, November 29, 2016

MIT Creates AI Able to See Two Seconds Into the Future


http://www.dailygalaxy.com/my_weblog/2016/11/mit-creates-ai-that-is-able-to-see-two-seconds-into-the-future-on-monday-the-massachusetts-institute-of-technology-announce.html


The Massachusetts Institute of Technology announced its new artificial intelligence. Based on a photograph alone, it can predict what will happen next, then generate a one-and-a-half-second video clip depicting that possible future.


When we see two people meet, we can often predict what happens next: a handshake, a hug, or maybe even a kiss. Our ability to anticipate actions is thanks to intuitions born out of a lifetime of experiences.

Machines, on the other hand, have trouble making use of complex knowledge like that. Computer systems that predict actions would open up new possibilities ranging from robots that can better navigate human environments, to emergency response systems that predict falls, to Google Glass-style headsets that feed you suggestions for what to do in different situations.

This week researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have made an important new breakthrough in predictive vision, developing an algorithm that can anticipate interactions more accurately than ever before.

http://web.mit.edu/vondrick/tinyvideo/paper.pdf

http://web.mit.edu/vondrick/tinyvideo/

Generating Videos with Scene Dynamics

Carl Vondrick (MIT), Hamed Pirsiavash (University of Maryland, Baltimore County), Antonio Torralba (MIT)

NIPS 2016

Abstract

We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene's foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.
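To make the foreground/background untangling concrete, here is a rough PyTorch sketch of a two-stream generator in the spirit of the paper: a 2D stream produces one static background image, a 3D spatio-temporal stream produces a moving foreground plus a mask, and the two are composited. This is not the authors' code (that is linked from the project page above) and the layer sizes are only illustrative:

import torch
import torch.nn as nn

class TwoStreamGenerator(nn.Module):
    # video = mask * foreground + (1 - mask) * background, with a static background
    def __init__(self, z_dim=100, ch=64):
        super().__init__()
        # Background stream: 2D transposed convs -> one 64x64 RGB image
        self.bg = nn.Sequential(
            nn.ConvTranspose2d(z_dim, ch * 8, 4, 1, 0), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 8, ch * 4, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose2d(ch, 3, 4, 2, 1), nn.Tanh())
        # Foreground stream: 3D transposed convs -> a 32-frame spatio-temporal volume
        self.fg = nn.Sequential(
            nn.ConvTranspose3d(z_dim, ch * 8, (2, 4, 4), 1, 0), nn.ReLU(True),
            nn.ConvTranspose3d(ch * 8, ch * 4, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose3d(ch * 4, ch * 2, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose3d(ch * 2, ch, 4, 2, 1), nn.ReLU(True))
        self.fg_rgb = nn.Sequential(nn.ConvTranspose3d(ch, 3, 4, 2, 1), nn.Tanh())
        self.fg_mask = nn.Sequential(nn.ConvTranspose3d(ch, 1, 4, 2, 1), nn.Sigmoid())

    def forward(self, z):
        n = z.size(0)
        bg = self.bg(z.view(n, -1, 1, 1))        # (N, 3, 64, 64) static background
        body = self.fg(z.view(n, -1, 1, 1, 1))   # shared foreground features
        fg = self.fg_rgb(body)                   # (N, 3, 32, 64, 64) moving foreground
        mask = self.fg_mask(body)                # (N, 1, 32, 64, 64) where motion happens
        bg = bg.unsqueeze(2).expand_as(fg)       # tile the static image across time
        return mask * fg + (1 - mask) * bg

print(TwoStreamGenerator()(torch.randn(2, 100)).shape)   # torch.Size([2, 3, 32, 64, 64])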






Monday, November 28, 2016

Build hardware synchronized 360 VR camera with YI 4K action cameras


http://open.yitechnology.com/vrcamera.html

http://www.yijump.com/

YI 4K Action Camera is your perfect pick for building a VR camera. The camera boasts high-resolution image detail powered by strong video capture and encoding capabilities, long battery life and suitable camera geometry. This is what makes us stand out and why we were recognized and chosen as a partner by Google for the next version of its VR camera, Google Jump - www.yijump.com
There are a number of ways to build a VR camera with YI 4K Action Cameras; the differences are mainly in how you control multiple cameras to start and stop recording. In general, we would like all cameras to start and stop recording synchronously so you can easily record and stitch your virtual reality video.
The easiest solution is to manually control the cameras one by one. It is convenient and quick; however, it doesn't guarantee synchronized recording.
A better solution, therefore, is to make good use of Wi-Fi: all cameras are set to work in Wi-Fi mode and are connected to a smartphone hotspot or a Wi-Fi router. Once setup is done, you should be able to control all cameras with a smartphone app over Wi-Fi. For details, please check out https://github.com/YITechnology/YIOpenAPI
Please note that this solution also comes with its limitations. For instance, when there are too many cameras or Wi-Fi interference is severe, controlling the cameras via the smartphone app can sometimes fail. Also, synchronized video capture is not guaranteed since Wi-Fi is not a real-time communication protocol.
You can also control all cameras with a Bluetooth-connected remote control. The limitations, however, are similar to those of the Wi-Fi solution.
There are also solutions which try to synchronize video files offline after recording is finished. This is normally done by detecting the same audio signal or video motion in the video files and aligning them. Since these solutions do not control the recording start time, the synchronization error is at least one frame.
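To illustrate the Wi-Fi approach above (and why it is only approximately synchronized), here is a hedged Python sketch that fires a start-record command at several cameras from parallel threads. The control port and JSON payload are hypothetical placeholders; the real message format is defined by the YI Open API linked above:

import json, socket, threading

CAMERA_IPS = ["192.168.1.101", "192.168.1.102", "192.168.1.103"]
CTRL_PORT = 7878                       # placeholder control port (assumption)
START_CMD = {"msg_id": 513}            # placeholder "start recording" message (assumption)

def start_recording(ip):
    # One TCP connection per camera; network latency and Wi-Fi retransmissions
    # make the actual start times drift, which is why this is not frame-accurate.
    with socket.create_connection((ip, CTRL_PORT), timeout=2) as s:
        s.sendall((json.dumps(START_CMD) + "\n").encode())

threads = [threading.Thread(target=start_recording, args=(ip,)) for ip in CAMERA_IPS]
for t in threads:
    t.start()
for t in threads:
    t.join()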

HARDWARE SYNCHRONIZED RECORDING

In this article, we introduce a hardware solution to the synchronization problem: all cameras are connected with a multi-endpoint cable, over which recording start and stop commands are transmitted in real time among the cameras, producing high-resolution, synchronized virtual reality video.


Saturday, November 26, 2016

Breathing Life into Shape (SIGGRAPH 2014)







This has serious implications for security, interrogation, marketing, health care and medicine.



Soon high-res 3D depth cameras will be cheap and commonplace.

Pinlight Displays: Wide Field of View Augmented Reality Eyeglasses (SIGG...

Slim near-eye display using pinhole aperture arrays


Boom, mic drop.

This just got trivial.

Friday, November 25, 2016

LOW-COST VIDEO STREAMING WITH A WEBCAM AND RASPBERRY PI

http://hackaday.com/2016/11/25/low-cost-video-streaming-with-a-webcam-and-raspberry-pi/


http://videos.cctvcamerapros.com/raspberry-pi/ip-camera-raspberry-pi-youtube-live-video-streaming-server.html


Spoiler alert: basically they use the Raspberry Pi to connect to the CCTV IP camera over RTSP and then send a live stream up to YouTube - up and out through your firewall and NAT to YouTube, or to any other service that will accept RTMP.

Most of this is a good NOVICE GUIDE to setting up and configuring the Pi and buying an IP camera from them.

  1. You can download the source code for the BASH script here.

#!/bin/bash
# Relay an RTSP stream from an IP camera to YouTube Live over RTMP.
# ffmpeg passes the camera's H.264 video through untouched (-c:v copy) and
# adds a silent AAC audio track (YouTube expects audio), for up to 12 hours.

SERVICE="ffmpeg"
RTSP_URL="rtsp://192.168.0.119:554/video.pro1"   # IP camera's RTSP stream
YOUTUBE_URL="rtmp://a.rtmp.youtube.com/live2"    # YouTube Live ingest server
YOUTUBE_KEY="dn7v-5g6p-1d3w-c3da"                # stream key from the YouTube Live dashboard

COMMAND="sudo ffmpeg -f lavfi -i anullsrc -rtsp_transport tcp -i ${RTSP_URL} -tune zerolatency -vcodec libx264 -t 12:00:00 -pix_fmt + -c:v copy -c:a aac -strict experimental -f flv ${YOUTUBE_URL}/${YOUTUBE_KEY}"

# Only launch ffmpeg if it isn't already running (handy when this script runs from cron).
if sudo /usr/bin/pgrep $SERVICE > /dev/null
then
        echo "${SERVICE} is already running."
else
        echo "${SERVICE} is NOT running! Starting now..."
        $COMMAND
fi


They have a bunch of neat Pi video projects on their website.
http://videos.cctvcamerapros.com/raspberry-pi

Wednesday, November 23, 2016

Chronos 1.4 high-speed camera up to 21,600fps



Chronos 1.4 is a purpose-designed, professional high-speed camera in the palm of your hand. With a 1.4 gigapixel-per-second throughput, you can capture stunning high-speed video at up to 1280x1024 resolution. Frame rate ranges from 1,057fps at full resolution up to 21,600fps at minimum resolution. (A quick arithmetic check of these numbers follows the spec list below.)

Features and specs

See the full specs in the Chronos 1.4 Datasheet
  • 1280x1024 1057fps CMOS image sensor with 1.4Gpx/s throughput
  • Higher frame rates at lower resolution (see table below)
  • Sensor dimensions 8.45 x 6.76mm, 6.6um pixel pitch
  • Global shutter - no “jello” effect during high-motion scenes
  • Electronic shutter from 1/fps down to 2us (1/500,000 s)
  • CS and C mount lens support
  • Focus peaking (focus assist) and zebra exposure indicator
  • ISO 320-5120 (Color), 740-11840 (Monochrome) sensitivity
  • 5" 800x480 touchscreen (multitouch, capacitive)
  • Machined aluminum case
  • Record time 4s (8GB) or 8s (16GB)
  • Continuous operation on AC adapter (17-22V 40W)
  • 1.75h runtime on user-replaceable EN-EL4a battery
  • Gigabit ethernet remote control and video download*
  • Audio IO and internal microphone*
  • HDMI video output*
  • Two channel 1Msa/s waveform capture*
  • Storage: SD card, two USB host ports (flash drives/hard drives), eSATA 3G
  • Trigger: TTL, switch closure, image change*, sound*, accelerometer*
  • Low-noise variable-speed fan - camera can run indefinitely without overheating
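A quick arithmetic check of the headline numbers, assuming roughly 12-bit raw pixels (an assumption; the bit depth is not stated above):

px_per_sec = 1280 * 1024 * 1057       # full-resolution pixel throughput
print(px_per_sec / 1e9)               # ~1.39, i.e. the quoted 1.4 gigapixels per second

bytes_per_sec = px_per_sec * 1.5      # ~12-bit raw pixels = 1.5 bytes each (assumption)
print(8e9 / bytes_per_sec)            # ~3.9 s into 8 GB, consistent with the "4s (8GB)" spec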

https://www.kickstarter.com/projects/1714585446/chronos-14-high-speed-camera/description

Monday, November 21, 2016

Sunday, November 20, 2016

ELP contact info

Maker of USB webcam boards.

Wednesday, August 31, 2016

Fwd: Low-cost, Low-power Neural Networks; Real-time Object Detection and Classification; More


---------- Forwarded message ----------
From: Embedded Vision Insights from the Embedded Vision Alliance <newsletter@embeddedvisioninsights.com>
Date: Tue, Aug 30, 2016 at 7:36 AM
Subject: Low-cost, Low-power Neural Networks; Real-time Object Detection and Classification; More



embedded-vision.com
VOL. 6, NO. 17 A NEWSLETTER FROM THE EMBEDDED VISION ALLIANCE Late August 2016
FEATURED VIDEOS

"Tailoring Convolutional Neural Networks for Low-Cost, Low-Power Implementation," a Presentation from SynopsysSynopsys
Deep learning-based object detection using convolutional neural networks (CNN) has recently emerged as one of the leading approaches for achieving state-of-the-art detection accuracy for a wide range of object classes. Most of the current CNN-based detection algorithm implementations run on high-performance computing platforms that include high-end general-purpose processors and GP-GPUs. These CNN implementations have significant computing power and memory requirements. Bruno Lavigueur, Project Leader for Embedded Vision at Synopsys, presents the company's experience in reducing the complexity of the CNN graph to make the resulting algorithm amenable to low-cost and low-power computing platforms. This involves reducing the compute requirements, memory size for storing convolution coefficients, and moving from floating point to 8 and 16 bit fixed point data widths. Lavigueur demonstrates results for a face detection application running on a dedicated low-cost and low-power multi-core platform optimized for CNN-based applications.
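The floating-point-to-fixed-point step mentioned above is easy to sketch. Below is a minimal numpy illustration of symmetric 8-bit quantization of one weight tensor; it shows the general idea only and is not Synopsys' tool flow:

import numpy as np

def quantize_int8(weights):
    # Map float weights onto the signed 8-bit range [-127, 127] using one
    # shared scale factor, a simple stand-in for fixed-point conversion.
    scale = np.abs(weights).max() / 127.0
    return np.round(weights / scale).astype(np.int8), scale

w = np.random.randn(64, 3, 3, 3).astype(np.float32)    # e.g. one conv layer's kernels
q, scale = quantize_int8(w)
w_back = q.astype(np.float32) * scale                   # dequantize to inspect the error
print("max abs quantization error:", np.abs(w - w_back).max())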

"An Augmented Navigation Platform: The Convergence of ADAS and Navigation," a Presentation from HarmanHarman
Until recently, advanced driver assistance systems (ADAS) and in-car navigation systems have evolved as separate standalone systems. Today, however, the combination of available embedded computing power and modern computer vision algorithms enables the merger of these functions into an immersive driver information system. In this presentation, Alon Atsmon, Vice President of Technology Strategy at Harman International, discusses the company's activities in ADAS and navigation. He explores how computer vision enables more intelligent systems with more natural user interfaces, and highlights some of the challenges associated with using computer vision to deliver a seamless, reliable experience for the driver. Atsmon also demonstrates Harman's latest unified Augmented Navigation platform which overlays ADAS warnings and navigation instructions over a camera feed or over the driver's actual road view.

More Videos

FEATURED ARTICLES

A Design Approach for Real Time Classifiers (PathPartner Technology)
Object detection and classification is a supervised learning process in machine vision to recognize patterns or objects from images or other data, according to Sudheesh TV and Anshuman S Gauriar, Technical Leads at PathPartner Technology. It is a major component in advanced driver assistance systems (ADAS), for example, where it is commonly used to detect pedestrians, vehicles, traffic signs etc. The offline classifier training process fetches sets of selected images and other data containing objects of interest, extracts features from this input, and maps them to corresponding labelled classes in order to generate a classification model. Real-time inputs are categorized based on the pre-trained classification model in an online process which finally decides whether or not the object is present. More
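As a toy version of that offline-training / online-classification split, here is a hedged scikit-image + scikit-learn sketch: extract HOG features from labelled crops, fit a linear SVM offline, then categorize a new crop with the pre-trained model. Random arrays stand in for real pedestrian / background image crops:

import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def features(img):
    # Histogram-of-oriented-gradients descriptor for one 128x64 grayscale crop
    return hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Offline training: labelled crops -> feature vectors -> classification model
rng = np.random.default_rng(0)
crops = rng.random((200, 128, 64))             # stand-ins for labelled image crops
labels = rng.integers(0, 2, 200)               # 1 = object of interest, 0 = background
model = LinearSVC().fit([features(c) for c in crops], labels)

# Online classification: a new input is categorized with the pre-trained model
new_crop = rng.random((128, 64))
print(model.predict([features(new_crop)]))     # array([0]) or array([1])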

Pokemon Go-es to Show the Power of AR (ARM)
Pokemon Go is an awesome concept, says Freddi Jeffries, Content Marketer at ARM. While she's a strong believer in VR (virtual reality) as a driving force in how we will handle much of our lives in the future, she can now see that apps like this have the potential to take AR (augmented reality) mainstream much faster than VR. More

More Articles

FEATURED NEWS

Intel Announces Tools for RealSense Technology Development

Auviz Systems Announces Video Content Analysis Platform for FPGAs

Sighthound Joins the Embedded Vision Alliance

Qualcomm Helps Make Your Mobile Devices Smarter with New Snapdragon Machine Learning Software Development Kit

Basler Addressing the Industry's Hot Topics at VISION Stuttgart

More News

UPCOMING INDUSTRY EVENTS

ARC Processor Summit: September 13, 2016, Santa Clara, California

Deep Learning for Vision Using CNNs and Caffe: A Hands-on Tutorial: September 22, 2016, Cambridge, Massachusetts

IEEE International Conference on Image Processing (ICIP): September 25-28, 2016, Phoenix, Arizona

SoftKinetic DepthSense Workshop: September 26-27, 2016, San Jose, California

Sensors Midwest (use code EVA for a free Expo pass): September 27-28, 2016, Rosemont, Illinois

Embedded Vision Summit: May 1-3, 2017, Santa Clara, California





Thursday, July 21, 2016

Fwd: Jovision 3.0MP Starlight Test Report


---------- Forwarded message ----------
From: Lulu Wang <us@jovision.com>
Date: Thu, Jul 21, 2016 at 8:40 AM
Subject: Jovision 3.0MP Starlight Test Report



Dear John Sokol,

We now have 3.0MP/4.0MP starlight IP cameras. Please see the following pictures.


First-class starlight R&D by ourselves.


Lulu Wang
    
Jovision Technology Co., Ltd

Address:12th Floor,No.3 Building, Aosheng Square,No.1166 Xinluo Street, Jinan, China     ZIP: 250101
Website: http://en.jovision.com     Tel: 0086-0531-55691778-8668    
E-mail: us@jovision.com               Skype:lulu-jovision
 

Saturday, June 04, 2016

Flat lens promises possible revolution in optics

http://www.bbc.com/news/science-environment-36438686

[Image: electron microscope view of the lens structure; the white line is 0.002mm long. Credit: Federico Capasso]
A flat lens made of paint whitener on a sliver of glass could revolutionise optics, according to its US inventors.
Just 2mm across and finer than a human hair, the tiny device can magnify nanoscale objects and gives a sharper focus than top-end microscope lenses.
It is the latest example of the power of metamaterials, whose novel properties emerge from their structure.
Shapes on the surface of this lens are smaller than the wavelength of light involved: a thousandth of a millimetre.
"In my opinion, this technology will be game-changing," said Federico Capasso of Harvard University, the senior author of a report on the new lens which appears in the journal Science.
The lens is quite unlike the curved disks of glass familiar from cameras and binoculars. Instead, it is made of a thin layer of transparent quartz coated in millions of tiny pillars, each just tens of nanometres across and hundreds high.
Singly, each pillar interacts strongly with light. Their combined effect is to slice up a light beam and remould it as the rays pass through the array (see video below).
[Video: light passing through the "metalens" is focussed by the array of nanostructures on its surface. Credit: Capasso Lab/Harvard]
Computer calculations are needed to find the exact pattern which will replicate the focussing effect of a conventional lens.
The advantage, Prof Capasso said, is that these "metalenses" avoid shortfalls - called aberrations - that are inherent in traditional glass optics.
"The quality of our images is actually better than with a state-of-the-art objective lens. I think it is no exaggeration to say that this is potentially revolutionary."
Those comparisons were made against top-end lenses used in research microscopes, designed to achieve absolute maximum magnification. The focal spot of the flat lens was typically 30% sharper than its competition, meaning that in a lab setting, finer details can be revealed.
But the technology could be revolutionary for another reason, Prof Capasso maintains.
"The conventional fabrication of shaped lenses depends on moulding and essentially goes back to 19th Century technology.
"But our lenses, being planar, can be fabricated in the same foundries that make computer chips. So all of a sudden the factories that make integrated circuits can make our lenses."
And with ease. Electronics manufacturers making microprocessors and memory chips routinely craft components far smaller than the pillars in the flat lenses. Yet a memory chip containing billions of components may cost just a few pounds.
[Image: the two lenses side by side; the flat lens is much more compact than a traditional microscope objective. Credit: Federico Capasso]
Mass production is the key to managing costs, which is why Prof Capasso sees cell-phone cameras as an obvious target. Most of their other components, including the camera's detector, are already made with chip technology. Extending that to include the lens would be natural, he argues.
There are many other potential uses: mass-produced cameras for quality control in factories, light-weight optics for virtual-reality headsets, even contact lenses. "We can make these on soft materials," Prof Capasso assured the BBC.
The prototype lenses are 2mm across, but only because of the limitations of the Harvard manufacturing equipment. In principle, the method could scale to any size, Prof Capasso said.
"Once you have the foundry - you want a 12-inch lens? Feel free, you can make a 12-inch lens. There's no limit."
The precise character of the lens depends on the layout and composition of the pillars. Paint-whitener - titanium dioxide - is used to make the pillars, because it is transparent and interacts strongly with visible light. It is also cheap.
[Illustration: light hitting the lens; the minuscule pillars have a powerful effect on light passing through. Credit: Peter Allen/Harvard]
The team has previously worked with silicon, which functions well in the infrared. Other materials could be used to make ultraviolet lenses.
Or to get a different focus, engineers could change the size, spacing and orientation of the pillars. It simply means doing the computer calculations and dialling the results into the new design.
The team is already working on beating the performance of its first prototypes. Watch this space, they say - if possible, with a pair of metalenses.
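Those "computer calculations" amount to prescribing, at every point of the flat surface, the phase delay that makes all rays arrive at the focus in step. A minimal numpy sketch of that standard hyperboloidal phase profile (the wavelength and focal length below are illustrative, not the Harvard design):

import numpy as np

wavelength = 550e-9        # green light, metres (illustrative)
f = 200e-6                 # focal length, metres (illustrative)

# Phase required at radius r so every ray reaches the focal point in phase:
#   phi(r) = (2*pi/wavelength) * (f - sqrt(r**2 + f**2))
r = np.linspace(0, 1e-3, 5)                      # radii across a 2mm-diameter lens
phi = (2 * np.pi / wavelength) * (f - np.sqrt(r**2 + f**2))
print(np.mod(phi, 2 * np.pi))                    # each nanopillar only needs phi modulo 2*pi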

Wednesday, May 25, 2016

visualsfm - Google Search

https://www.google.com/search?q=visualsfm



Sell XS-6003-9 CCTV fisheye lens with 240-degree ultra-wide angle for panoramic camera - EC21 Mobile

http://m.ec21.com/mobile/sDetails.jsp?offerId=23626063



Behringer U-Control UCA202 | Sweetwater.com

http://www.sweetwater.com/store/detail/UCA202


16-bit/48kHz, 2-channel USB Audio Interface with Optical Out

Portable USB Interface

Are you looking for a compact and easy-to-use USB interface, one that you can take anywhere with you? The Behringer U-Control UCA202 comes in at about the same size as a smartphone, making it extremely portable. The USB-powered interface has two RCA inputs, two RCA outputs, a headphone out, and an optical digital output. The 16-bit/48kHz converters ensure high-quality audio into your computer and low-latency playback out of it. Connect your instruments or mixer to your computer with the UCA202 and record to your heart's content. The Behringer U-Control UCA202 comes with free downloadable audio recording and editing software.
Behringer U-Control UCA202
  • 2 In/2 Out USB Audio Interface
  • Mac or PC, no drivers required
  • 16-bit/48kHz converters
  • Low latency
  • Headphone output
  • Optical out
  • USB powered
  • Free DAW download
The Behringer U-Control UCA202 is the perfect, portable USB interface.

Tech Specs

Computer Connectivity: USB 2.0
Form Factor: Desktop
Simultaneous I/O: 2 x 2
A/D Resolution: 16-bit/48kHz
Analog Inputs: 2 x RCA
Analog Outputs: 2 x RCA
Digital Outputs: 1 x Optical
Bus Powered: Yes
Depth: 0.87"
Width: 3.46"
Height: 2.36"
Weight: 0.26 lbs.
Manufacturer Part Number: UCA202

Visual odometry - Wikipedia, the free encyclopedia

https://en.m.wikipedia.org/wiki/Visual_odometry



GitHub - slawrence/videojs-vr

https://github.com/slawrence/videojs-vr



VR Plugin Example

http://slawrence.io/static/projects/video/



PhaseSpace Motion Capture

http://www.phasespace.com/



Spreadtrum Communications, Inc.

http://www.spreadtrum.com/en/SC7730A.html



svo visual odometry - Google Search

https://www.google.com/search?q=svo+visual+odometry



Camera model

http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/FUSIELLO/node3.html#SECTION00021000000000000000



Forecasting Virtual Reality & Live Stream with Wowza Streaming Cloud

https://www.wowza.com/blog/virtual-reality-and-360-degree-streaming-with-wowza



Mozilla Wants To Bring Virtual Reality To The Browser | TechCrunch

http://techcrunch.com/2015/01/24/mozilla-wants-to-bring-virtual-reality-to-the-browser/



WebVR

https://mozvr.com/webvr-spec/



TojiCode: Bringing VR to Chrome

http://blog.tojicode.com/2014/07/bringing-vr-to-chrome.html



Wednesday, April 06, 2016

Fwd: Got what it takes to win an Auggie?



---------- Forwarded message ----------
From: Augmented World Expo <info@augmentedreality.org>
Date: Wednesday, April 6, 2016
Subject: Got what it takes to win an Auggie?
To: sokol@videotechnology.com


Only two weeks left to submit your nomination for the 2016 Auggie Awards. Submit your product/project today!

Think you have what it takes to win the Auggies?

Only two weeks left to submit your product or project for this year's Auggie Awards

The Auggie Awards showcase the best of the best in the Augmented World. This year features 11 categories in augmented reality, virtual reality and wearable technology. Past years' winners have included CastAR, Vuforia, Epson, Orbotix, Metaio (now Apple) and more.

Nominations are open now until April 21.

To enter
1. Hit the button below
2. Complete the entry form
3. Submit your entry (no cost)
4. Share your nomination  

Submit your nomination now
Winners are not only recognized on the world's largest stage for AR, VR and wearables at AWE USA 2016 on June 1. They will also walk away with an official Auggie trophy (below)!

2016 Auggie Award Categories


1. Best Headset or Smart Glasses: Connected eyewear including smart glasses, goggles, head-mounted displays (HMD), heads-up displays (HUD), and helmets

2. Best Hardware: Wearables, gesture and input devices, sensors, cameras, displays, stationary installations, chips and robotics

3. Best App: AR, VR and wearable applications available to the public in app stores

4. Best Campaign: Marketing campaigns for major brands that use AR, VR and wearable technology

5. Best Enterprise Solution: Solutions that solve business problems using AR, VR and wearable technology

6. Best Tool: Software Developer Kits, platforms and other creation tools that enable the creation of solutions and experiences in AR, VR and wearable technology

7. Best Game or Toy: Connected toy or gaming solutions that enable digital interaction with the physical world or physical interaction in the virtual world

8. Best Art or Film: Art Installations or films including design concepts and documentary films that explore the augmented or virtual world

9. Best in Show - Augmented Reality: Any presenting AR company at AWE 2016 (exhibitor, speaker) is eligible to be nominated and the winner will be selected through popular vote at the show

10. Best in Show - Virtual Reality: Any presenting VR company at AWE 2016 (exhibitor, speaker) is eligible to be nominated and the winner will be selected through popular vote at the show

11. Best in Show - Wearable Technology: Any presenting wearable tech company at AWE 2016 (exhibitor, speaker) is eligible to be nominated and the winner will be selected through popular vote at the show

Submit your nomination now