Showing posts with label motion tracking.

Friday, July 04, 2014

Touchless Gesture Navigation


Video Link http://bcove.me/tseqw1bf

This video demonstrates touchless gesture navigation using Murata's ultrasonic sensor, the world's smallest surface-mountable ultrasonic sensor.

Murata has developed the world's first surface-mount ultrasonic sensor.
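The basic idea behind this kind of touchless navigation is simple: the sensor reports the range to the nearest object, and a gesture is inferred from how that range changes over time. The sketch below is purely illustrative (the threshold and sample values are assumptions, not Murata specifications):

```python
# Hypothetical sketch: detecting a hand "swipe" from one ultrasonic
# range sensor, assuming evenly timed distance samples in centimeters.
# The threshold values here are illustrative, not Murata specifications.

HAND_PRESENT_CM = 20.0  # hand counts as "present" closer than this

def detect_swipe(samples):
    """Return 'toward', 'away', or None from a burst of range samples.

    A swipe is a short run of hand-present readings; the direction is
    taken from whether the hand moved closer or farther overall.
    """
    present = [d for d in samples if d < HAND_PRESENT_CM]
    if len(present) < 3:          # too brief to be a deliberate gesture
        return None
    delta = present[-1] - present[0]
    if abs(delta) < 2.0:          # essentially stationary: a hover, not a swipe
        return None
    return "away" if delta > 0 else "toward"

# Hand approaches the sensor: 18 cm down to 8 cm.
print(detect_swipe([35.0, 18.0, 14.0, 10.0, 8.0, 30.0]))  # toward
```

A real implementation would also debounce the readings and reject multi-target echoes, but the range-over-time principle is the same.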


Friday, June 20, 2014

Amazon Fire Phone Uses 4 IR Cameras for 3D UI


This technology looks a lot like the Leap Motion, but instead of two cameras, Amazon is using four.


The Verge, Mashable:

Amazon Fire Phone uses four front-facing IR cameras to create a 3D user interface: "Perhaps the most useful application shown off so far is simply tilting the phone to scroll through a web page like The Washington Post or to page through a Kindle book. Of course, we've seen other tilting features before and they've left a lot to be desired, but Amazon's demo seems to make these features seem far better than previous attempts. The question will be whether or not the extra processing power and potential battery life drain to run these multiple cameras will be worth it."

Monday, June 16, 2014

FreeTrack: free optical motion tracking application for Windows





FreeTrack is a free optical motion tracking application for Microsoft Windows, released under the GNU General Public License. Its main function is inexpensive head tracking in computer games and simulations but can also be used for general computer accessibility, in particular hands-free computing. Tracking is sensitive enough that only small head movements are required so that the user's eyes never leave the screen.

Head motions are tracked with six degrees of freedom (6DOF): yaw, pitch, roll, left/right, up/down, and forward/back. This is done by means of a video capture device, typically a webcam, which is placed in front of the user and tracks a rigid point-model headpiece. This point model usually consists of infrared LEDs, but it can also use normal LEDs or even retroreflective material illuminated by a source of infrared light.
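To give a feel for how a point model yields pose, here is a simplified sketch (not FreeTrack's actual solver) recovering two of the six degrees of freedom from a 3-point headpiece: roll from the tilt of the line through the outer LEDs, and depth from how large the known LED baseline appears on screen. The baseline and focal-length values are assumptions for illustration:

```python
import math

# Illustrative sketch (not FreeTrack's actual algorithm): recovering roll
# and depth from a 3-point headpiece as seen by a webcam. Points are
# (x, y) pixel positions of the left, top, and right LEDs.

BASELINE_CM = 14.0      # assumed real-world distance between the outer LEDs
FOCAL_PX = 700.0        # assumed camera focal length in pixels

def roll_and_depth(left, top, right):
    """Roll comes from the tilt of the line through the outer LEDs;
    depth comes from the apparent size of the known baseline."""
    dx = right[0] - left[0]
    dy = right[1] - left[1]
    roll_deg = math.degrees(math.atan2(dy, dx))
    apparent_px = math.hypot(dx, dy)
    depth_cm = FOCAL_PX * BASELINE_CM / apparent_px  # pinhole camera model
    return roll_deg, depth_cm

# Head level and ~70 cm from the camera: outer LEDs 140 px apart.
roll, depth = roll_and_depth((250, 240), (320, 180), (390, 240))
print(round(roll, 1), round(depth, 1))  # 0.0 70.0
```

Yaw and pitch fall out similarly, from the asymmetry of the three points; a full solver fits all six parameters to the projected model at once.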

FreeTrack can output head tracking data directly using the TrackIR, SimConnect, and FSUIPC interfaces; programs that support these interfaces are regarded as having FreeTrack support. General input devices can also be emulated, specifically mouse, keyboard, and joystick (via PPJoy).


Tuesday, May 22, 2012

Leap - FUTURE OF MOTION GESTURE CONTROL

Motion control startup Leap Motion has demoed its Leap 3D motion control system, which can track motion to around 0.01mm accuracy — 100 times more accurate than the Kinect. Rather than taking Microsoft’s approach, Leap Motion creates a personal 3D workspace of about four cubic feet. The Leap consists of a small USB device with industry-standard sensors and cameras that, in tandem with the company’s software, can track multiple objects and recognize gestures. Leap’s designers showed off OS navigation and web browsing using a single finger, writing, pinch-to-zoom, precision drawing, 3D modeling, and gaming. From what we can see, it looks to be a very precise system, capable of recognizing objects in your hands and tracking them instead of your digits. Leap Motion is releasing an SDK and also handing out free sensors to “qualified developers” that want to develop for the system.

Monday, December 12, 2011

Video-based air harp

From Hackaday:
http://hackaday.com/2011/12/12/get-ready-to-play-some-wicked-air-harp



Who needs a tactile interface when you can wave your hands in the air to make music? Air String makes that possible and surprisingly it does so without the use of a Kinect sensor.
In the image above, you can see that two green marker caps are used as plectra to draw music out of the non-existent strings. Judiciously perched atop that Analysis and Design of Digital Systems with VHDL textbook is a camcorder recording an image of the player. This signal is processed by an FPGA (hence the textbook) in real-time, and shown on the monitor seen to the right. A set of guides are overlaid on the image, so the player knows where to pluck to get the notes she is expecting.
The program is designed to pick up on bright green colors as the inputs. It works like a charm as you can see in the video after the break. The team of Cornell students responsible for the project also mention a few possible improvements like adding a distance sensor (ultrasonic rangefinder?) so that depth can be used for the dynamics of the sound.
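The color-keying step described above reduces to a simple per-pixel test plus a centroid. This sketch is illustrative (it is not the students' FPGA code, and the thresholds are assumptions): find the bright-green pixels in an RGB frame, compute their centroid, and map its x position onto one of several virtual strings.

```python
# Sketch of the color-keying idea (illustrative, not the Cornell team's
# FPGA implementation): find the centroid of "bright green" pixels in an
# RGB frame, then map its x position onto one of several virtual strings.

def is_bright_green(r, g, b):
    return g > 180 and r < 100 and b < 100   # assumed threshold values

def marker_centroid(frame):
    """frame is a list of rows; each pixel is an (r, g, b) tuple."""
    hits = [(x, y)
            for y, row in enumerate(frame)
            for x, (r, g, b) in enumerate(row)
            if is_bright_green(r, g, b)]
    if not hits:
        return None
    n = len(hits)
    return (sum(x for x, _ in hits) / n, sum(y for _, y in hits) / n)

def string_for_x(x, frame_width, n_strings=8):
    """Which virtual string a marker at column x would pluck."""
    return min(int(x * n_strings / frame_width), n_strings - 1)

# Tiny 4x2 test frame with one green pixel at column 2, row 1.
frame = [[(0, 0, 0)] * 4 for _ in range(2)]
frame[1][2] = (10, 250, 20)
cx, cy = marker_centroid(frame)
print(cx, cy, string_for_x(cx, 4))  # 2.0 1.0 4
```

The FPGA version does the same test in hardware on the live video stream, which is what makes the real-time overlay possible without a PC in the loop.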

Monday, October 31, 2011

realtime multi-tracking radar systems

This is very cool. Thanks Bill.





Ability to simultaneously monitor the speed of oncoming or outgoing vehicles in up to four lanes of contiguous traffic using a single speed sensor.



The sensor automatically measures vehicle speeds in the control area and creates a pair of high-resolution images for each violation:
  • Wide-angle image of multiple targets in the traffic flow situation, with the violator(s) clearly identified;
  • A close-up image of each violator with a clearly visible license plate.
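Radar speed sensors like this infer vehicle speed from the Doppler shift of the reflected signal: v = f_doppler * c / (2 * f_transmit). A minimal sketch of that arithmetic, assuming a K-band carrier (the 24.125 GHz value is a common choice, not taken from this product's spec):

```python
# A continuous-wave Doppler radar infers radial speed from the frequency
# shift of the reflected signal:  v = f_doppler * c / (2 * f_transmit).
# The 24.125 GHz carrier is a common K-band value, assumed here.

C = 299_792_458.0          # speed of light, m/s
F_TX = 24.125e9            # transmit frequency, Hz (K-band)

def speed_kmh(f_doppler_hz):
    """Radial speed (km/h) implied by a measured Doppler shift."""
    v_ms = f_doppler_hz * C / (2 * F_TX)
    return v_ms * 3.6

# A ~2.2 kHz shift corresponds to roughly 49 km/h.
print(round(speed_kmh(2200.0), 1))
```

Tracking several lanes at once means resolving multiple simultaneous Doppler returns and associating each with a target, which is where the "multi-tracking" in these systems comes in.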
---------- Forwarded message ----------
From: BC
Date: Mon, Oct 31, 2011 at 9:08 PM
Subject: realtime multi-tracking systems?
To: John Sokol <john.sokol@gmail.com>

http://www.peakgainsystems.com/en/cordon.html

Tuesday, August 09, 2011

Disney Research Turns Mo-Cap Inside-Out With Body-Mounted Cameras

From Wired:

By mounting cameras on actors' bodies, Disney hopes to make motion-capture more natural.
Images: Disney Research/Carnegie Mellon University
Researchers from Disney’s R&D lab want to turn motion-capture inside-out. Instead of pointing an array of cameras at an actor, they propose the performer wear 20 cameras on their body.
Motion capture — which is used to turn real-life actions and movements into digitized animations for CGI characters — has become a mainstay in Hollywood and game design. But the current setup, where cameras track pingpong-ball-style markers on an actor, has its limitations. The actor is normally confined to a studio, so capturing a big action like running outside or swinging on monkey bars is tough (if not impossible) for traditional mo-cap systems.
At Siggraph 2011, an annual conference on computer graphics in Vancouver, British Columbia, researchers from Carnegie Mellon University and Disney Research revealed their new take on the idea. Instead of pointing cameras at the actor, they put the cameras directly on the performers.

Researchers used velcro to mount 20 lightweight cameras on the limbs and trunk of each performer. The cameras then harnessed a process called “structure from motion,” which analyzes the images from the camera to estimate the location and direction of the lens.
A computer algorithm then collates all the data to automatically build a digital skeleton and position its limbs in accordance with the actor. The system even makes a rough 3-D scan of the environment for elements like the floor and handholds, to provide context for the animator.
Right now, the new mo-cap system has some hurdles to overcome. It can take an entire day to process a minute of motion, and it doesn’t have the fidelity of traditional motion capture. But Carnegie Mellon professor Takeo Kanade says that fidelity will improve as the resolution of these small video cameras increases.


From Slashdot: Breaking Motion Capture Out of the Studio
"Traditional motion capture techniques use cameras to meticulously record the movements of actors inside studios, enabling those movements to be translated into digital models. But by turning the cameras around — mounting almost two dozen, outward-facing cameras on the actors themselves — scientists at Disney Research Pittsburgh, and Carnegie Mellon University have shown that motion capture can occur almost anywhere — in natural environments, over large areas, and outdoors."


Saturday, February 26, 2011

Way before Kinect there was Pantomation in 1978

Pantomation was a very early tracking chromakey system from the 1970s. Originally intended for music scoring, the system was adapted to other styles of performance art. While crude by modern standards, the concept was decades ahead of its time; it can reasonably be considered an early forebear of systems like Microsoft's Project Natal.





On Slashdot: Kinect's Grandaddy Running On an Apple IIe In 1978
"30 years before words like performance capture, augmented reality, or avatars were around — let alone commonplace — experimental film and video artist Tom DeWitt created a system that features aspects of all of them. Pantomation let users interact in real-time with a digital environment and props. It was built using Apple IIe's, analog video gear, and lots of custom hacking and patching. He's currently working on a holographic 3D system that's similarly ahead of its time."

The Slashdot article title is incorrect: the Apple IIe didn't exist in 1978. It was probably an Apple II (released in 1977); the Apple IIe wasn't released until 1983.

The minicomputer they talk about in this video is the PDP-8/L.

Read more:
Video History Project: Electronic Body Arts