Wednesday, September 30, 2009

Multi-camera rig makes trees say cheese

How do you photograph something that is 300 feet tall?

Photographer Nick Nichols spent a year planning the nearly impossible: a top-to-bottom photograph of a 300-foot-tall redwood tree, now the centerpiece of the October issue of National Geographic Magazine.

Watch "Explorer: Climbing Redwood Giants" on the National Geographic Channel; the original airing was September 29 at 10 PM.

Augmented Reality

The Incredible New World of Augmented Reality

Here is a collection of images of augmented reality rigs and wearable computers that already exist.
They all look ridiculous and are expensive.

ARToolKit is a software library for building Augmented Reality (AR) applications. These are applications that involve the overlay of virtual imagery on the real world.
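The heart of such an overlay is simple: once the tracker gives you the camera's pose relative to a marker, virtual geometry is just projected through a pinhole camera model. The sketch below is pure Python for illustration, not ARToolKit's actual API; the pose matrix and intrinsics are made-up example values.

```python
def project_point(point, pose, focal, cx, cy):
    """Project a 3D point (in marker coordinates) to pixel coordinates.

    pose is a 3x4 [R|t] camera-from-marker matrix -- in ARToolKit the
    tracker computes this for you; here it and the pinhole intrinsics
    (focal, cx, cy) are assumed inputs for illustration only.
    """
    # Transform into camera coordinates: cam = R * point + t
    cam = [sum(pose[r][c] * point[c] for c in range(3)) + pose[r][3]
           for r in range(3)]
    x, y, z = cam
    # Pinhole projection onto the image plane
    return (cx + focal * x / z, cy + focal * y / z)

# A marker 2 units straight ahead of the camera, no rotation:
pose = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 2.0]]
center = project_point((0, 0, 0), pose, 500, 320, 240)
```

Projecting all eight corners of a virtual cube this way and drawing lines between them is essentially what an AR "overlay" is.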

Tuesday, September 22, 2009

Some entertaining VR 360 shots

Some entertaining VR 360 shots, just for fun:

New Year's 2009 Times Square:

This one you can actually wander through (follow the arrows on the floor): an actual 360 photograph that you can walk through. Cool!

A moment of serenity:

Other interesting things related to this:
Build a Google-style panorama rig for $300
IEEE Spectrum: DIY Street-View Camera
Microsoft LifeCam NX-6000 cameras (×8)
D-Link USB hubs (×2)
GlobalSat BU-353 GPS receiver
Laptop running Ubuntu Linux

UVC driver:
Webcam device driver:
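The article doesn't describe the rig's software, but the core bookkeeping, tagging each captured frame with the nearest GPS fix from the BU-353, might look like this (the timestamps, tuple layout, and 1 Hz fix rate are all assumptions):

```python
def tag_frames(frame_times, gps_fixes):
    """Attach the nearest GPS fix to each captured frame.

    frame_times: capture timestamps in seconds.
    gps_fixes: list of (timestamp, lat, lon) tuples from the receiver.
    A real rig would interpolate between fixes; nearest-fix matching
    is the simplest thing that works.
    """
    tagged = []
    for t in frame_times:
        ts, lat, lon = min(gps_fixes, key=lambda fix: abs(fix[0] - t))
        tagged.append((t, lat, lon))
    return tagged

fixes = [(0, 37.0, -122.0), (1, 37.1, -122.1), (2, 37.2, -122.2)]
tagged = tag_frames([0.1, 1.4], fixes)
```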

Monday, September 21, 2009

A one-eyed filmmaker gets a camera eye

Take a one-eyed filmmaker, an unemployed engineer, and a vision for something that's never been done before, and you have the EyeBorg Project. Rob Spence and Kosta Grammatis are trying to make history by embedding a video camera and a transmitter in a prosthetic eye. That eye is going in Rob's eye socket, and will record the world from a perspective that's never been seen before.

The Eyeborg Project: there is another cool video on the main site.

This short video by Rob Spence shows the operation in which surgeons removed his sightless eye. Warning: Graphic imagery may be unsettling to many viewers.

Google Talks: Photo Tech EDU - A lecture series.

Google Talks: Photo Tech EDU

The goal of PhotoTechEDU is to offer a short course in photographic technology for engineers.
The course will teach:
  • useful properties of light and image formation
  • theory and techniques of photographic optics and image capture
  • theory of colorimetry and techniques of color reproduction
  • and lots more...

Day 1: Photo Technology Overview  Speaker: Richard Lyon     SLIDES

Day 2: Photo Technology Overview Continued Speaker: Richard Lyon  SLIDES
Overview of front-end photographic technology with more optics, waves, diffraction, silicon photosensors, and color sensing methods.

Day 3: Ray Tracing, Lenses, and Mirrors Speaker: Rom Clement   SLIDES
Principles of ray tracing for simple optical components: thin spherical lenses and spherical mirrors. The notions of real/virtual objects and real/virtual images will be highlighted, as well as the computation of the position and magnification of the image. The case of thick lenses will also be mentioned, including the notion of nodal and cardinal points.
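The thin-lens computation the talk covers fits in a couple of lines. Under the usual sign convention, a negative magnification means an inverted real image:

```python
def image_distance(f, d_obj):
    """Thin-lens equation: 1/f = 1/d_obj + 1/d_img (distances in mm)."""
    return 1.0 / (1.0 / f - 1.0 / d_obj)

def magnification(f, d_obj):
    """Lateral magnification m = -d_img / d_obj (negative = inverted)."""
    return -image_distance(f, d_obj) / d_obj
```

For example, a 50 mm lens with an object at 100 mm (twice the focal length) forms a real, inverted, life-size image 100 mm behind the lens.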

Day 4: Contrast, MTF, Flare, and Noise Speaker: Iain McClatchie   SLIDES
Most consumers know that more megapixels is better. But why does a 6MP DSLR take nicer pictures than a 10MP point-and-shoot? This talk will follow light from the surface to the sensor, exploring some of the things that degrade images along the way.

Day 5: Silicon Image Sensors Speaker: Richard Lyon  SLIDES
In this session we examine how digital cameras capture images via the interaction of light with silicon in CCD and CMOS image sensors, including sampling and aliasing effects, noise effects, etc.

Day 6: Digital Camera Image Processing Speaker: Richard Lyon  SLIDES
In this session we examine the steps that a digital camera goes through to take raw data from an image sensor and make a photograph out of it. There are more steps than you might imagine, arranged in what is usually termed a pipeline, and is sometimes implemented on pipelined hardware, to get to a pleasing photographic rendering of the scene. 
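A toy sketch of a few of those pipeline stages, black-level subtraction, white balance, and gamma encoding, gives a feel for the "pipeline" idea. All the constants here are invented illustrations; real cameras also demosaic, denoise, color-correct, and more:

```python
def develop(raw_rgb, black=64, wb=(2.0, 1.0, 1.5), gamma=2.2, white=1023):
    """Toy in-camera pipeline for a list of (r, g, b) raw pixel values.

    Stages: subtract the sensor's black level, normalize, apply
    per-channel white-balance gains, clip, then gamma-encode.
    All parameter defaults are made-up illustrative numbers.
    """
    out = []
    for r, g, b in raw_rgb:
        px = []
        for value, gain in zip((r, g, b), wb):
            linear = max(value - black, 0) / (white - black) * gain
            px.append(min(linear, 1.0) ** (1.0 / gamma))  # gamma-encode
        out.append(tuple(px))
    return out

developed = develop([(64, 1023, 64)])  # black red/blue, saturated green
```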

Day 7: Lossy Image Compression Speaker: Ashok Popat   SLIDES
This session covers lossy image compression of many forms, including: Transform coding: DCT and relationship to KLT and filter banks; Energy compaction, zonal sampling, and bit allocation; Scalar quantization: Lloyd-Max, entropy constrained; Wavelets and embedded zerotrees; Vector quantization: k-means, Lloyd algorithm.
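The energy-compaction property behind zonal sampling can be demonstrated directly with a small orthonormal DCT-II (a pure-Python sketch; real codecs use fast 2-D transforms):

```python
import math

def dct2(x):
    """Orthonormal 1-D DCT-II, the transform behind JPEG-style coding."""
    n = len(x)
    coeffs = []
    for k in range(n):
        scale = math.sqrt((1 if k == 0 else 2) / n)
        coeffs.append(scale * sum(v * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                                  for i, v in enumerate(x)))
    return coeffs

# Energy compaction: a smooth signal's energy collapses into a few
# low-frequency coefficients, which is what makes coarse quantization
# of the remaining coefficients so cheap.
signal = [float(i) for i in range(8)]   # smooth ramp
coeffs = dct2(signal)
energy = sum(c * c for c in coeffs)     # equals sum(v*v) by Parseval
low = sum(c * c for c in coeffs[:2])    # energy in the first 2 of 8 coeffs
```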

Day 8: Diffraction and Interference in Imaging Speaker: Rom Clement   SLIDES
This session addresses effects of the wave nature of light. This approach will allow us to talk about the phenomena of interference as well as diffraction. The understanding of the notion of diffraction will be used to determine the Rayleigh criteria and finally the resolving power of an optical system. In the second part of the lecture, we will study gratings using the wave approach. An example of an amateur spectroscope for astronomy using a reflective grating will be shown.
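The Rayleigh criterion the session derives reduces to a one-liner; here with an Airy-disk helper for the corresponding sensor-side spot size (values in the test are just illustrative):

```python
def rayleigh_limit(wavelength_m, aperture_m):
    """Angular resolution (radians) of a diffraction-limited aperture:
    theta = 1.22 * lambda / D (the Rayleigh criterion)."""
    return 1.22 * wavelength_m / aperture_m

def airy_diameter(wavelength_m, f_number):
    """Diameter of the Airy disk on the sensor: d = 2.44 * lambda * N.
    At f/8 in green light this is about 10.7 microns -- comparable to
    or larger than small sensor pixels, which is why diffraction caps
    the resolving power of an optical system."""
    return 2.44 * wavelength_m * f_number
```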

Day 9: Amateur Astrophotography Speaker: Ben Lutch   SLIDES
This session covers amateur astrophotography, particularly automation, gear (cameras, telescopes) and some of the technical challenges of photographing dim objects across the universe from your backyard or remote observatory.

Day 10: Image Compression Part 2 Speaker: Ashok Popat   SLIDES
Continues discussion of lossy image compression and introduces lossless image compression. Recap of transform coding; intro to vector quantization and wavelet-based approaches; basic theory of lossless compression; entropy coding techniques; context-based lossless image compression. Will emphasize underlying principles rather than details of specific methods.

Day 11: Document Image Analysis with Leptonica Speaker: Dan Bloomberg   SLIDES
Graphics typically takes a representation of an image or scene and renders it in raster form. This normally occurs through a well-specified process. What happens when you try to go the other way, from a raster image to a description of its contents? The process, so easy for humans, is not easy for machines, because the input raster data can be highly variable and the interpretation of the contents somewhat arbitrary. We'll talk about how this 'inverse graphics' process can be accomplished quickly and usually with sufficient accuracy for most applications, using rasters of document images as input. The 'trick' is to use the image as the primary...

Day 12: High Dynamic Range Image Capture Speaker: Greg Ward   SLIDES
Although digital imaging has been around for nearly 40 years, the past decade has seen the nearly complete replacement of analog film by digital sensors, most of which capture only a tenth the dynamic range of the black and white film Ansel Adams worked with 80 years ago. Using multiple exposure techniques (or advanced sensors), it is in fact possible to turn this around and capture ten times the range of film, equaling or surpassing the capacity of human vision. Greg will introduce some of the concepts and present a few details of HDR imaging in the context of digital photography, and demonstrate interactive software he has developed to assist in the...
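The multiple-exposure technique can be sketched for a single pixel: weight each exposure, divide out the exposure time to get back to radiance, and average. This is a simplified Debevec-style merge; the hat weighting is a common choice, not necessarily what Greg's software uses:

```python
def merge_exposures(pixels, times):
    """Estimate scene radiance for one pixel from several exposures.

    pixels: linearized values in [0, 1]; times: exposure times in
    seconds. The hat weight down-weights clipped (near 0 or 1) and
    therefore unreliable samples.
    """
    num = den = 0.0
    for z, t in zip(pixels, times):
        w = 1.0 - abs(2.0 * z - 1.0)   # 0 at the extremes, 1 at mid-gray
        num += w * z / t
        den += w
    return num / den if den else 0.0
```

For a true radiance of 0.5, exposures of 0.2 s and 0.8 s record 0.1 and 0.4; the merge recovers 0.5 exactly.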

Day 13: Light field (plenoptic) photography – Not Available

Day 14: Exposing Digital Forgeries from Inconsistencies in Lighting Speaker: Hany Farid
With the advent of high-resolution digital cameras, powerful personal computers and sophisticated photo-editing software, the manipulation of digital images is becoming more common. To this end, we have been developing a suite of tools to detect tampering in digital images. I will discuss two related techniques for exposing forgeries from inconsistencies in lighting. In each case we show how to estimate the direction to a light source from only a single image: inconsistencies across the image are then used as evidence of tampering.

Day 15: The Gigapxl Project – Not Available

Day 16: Multi-View Image Compositions Speaker: Lihi Zelnik   SLIDES
Pictures taken by a rotating camera can be registered and blended on the sphere into a smooth panorama. What is left to be done, in order to obtain a flat panorama, is projecting the spherical image onto a picture plane. This step is unfortunately not obvious: the surface of the sphere may not be flattened onto a page without some form of distortion. Distortions are also unavoidable when mosaicking images taken while the point of view changes and/or the scene changes (e.g., due to objects moving). In such cases no geometrically consistent mosaic may be obtained. Artists have explored this problem and demonstrated that geometrical consistency is not the...
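The projection step the abstract describes can be sketched for the rectilinear (gnomonic) case, where the unavoidable distortion at wide angles is easy to see:

```python
import math

def rectilinear(lon, lat):
    """Project a viewing direction (lon/lat on the sphere, in radians)
    onto a flat picture plane one unit away (gnomonic projection).
    Straight lines stay straight, but scale blows up toward 90 degrees,
    which is why one flat frame can never hold a full panorama."""
    x = math.tan(lon)
    y = math.tan(lat) / math.cos(lon)
    return x, y
```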

Day 17: Color Management - Not Available

Day 18: Non-destructive, Selective, Non-linear, and Non-Modal Editing of Photographs Speakers: Uwe Steinmueller & Fabio Riccardi  SLIDES
For photo editing tools there is a lot of buzz about non-destructive or parametric editing. While this is quite standard for today's RAW converters, it is not sufficient for a more refined imaging workflow. The talk will demonstrate the need for a non-destructive, selective, non-linear, and non-modal editing workflow.

Day 19: Inkjet printing coming of age  Speaker: Uwe Steinmueller   SLIDES
Today's fine-art printing is dominated mainly by pigment-based inkjet printers. Why do some photographers find inkjet printing so easy and others so hard? The talk presents an overview of inkjet printing in 2007 (printers, inks, papers, software, color management).

Day 20: (topic not yet cleared for release from Google) Not Available

Day 21: Visualization Via Matlab: Color Profiles, Ray Tracing, Diffraction Speaker: Richard Lyon SLIDES

Some topics that we've heard about from other points of view are fruitfully re-examined through some explorations using Matlab. First, we have a bit of code that reads ICC profiles and shows us what's in them, including some plots so that we can see what sorts of data are used to represent colorspace transformations. Second, we look at ray tracing for a non-Gaussian, non-paraxial, real-world situation, in which tracing rays numerically leads to an understanding of an aberration that we always need to deal with in digital cameras: the effect of a flat filter in the optical path. Third, brute-force numerical simulation of light's propagation as waves leads...

Day 22: Measuring, Interpreting and Correcting Optical Aberrations in the Human Eye Speaker: Eric Gross
Every lens has flaws. The eye—arguably the most important lens—has more than its share. This talk is about recent technology that lets physicians measure and understand these wavefront aberrations. Some of these aberrations degrade our vision, while others seem to enhance it. Finally, I'll talk about efforts and technology to correct those aberrations.

Day 23: Raw Files and Formats Speaker: David Cardinal
David Cardinal, veteran nature photographer and software developer, discusses raw files: their history, contents, and politics, including some of the do's and don'ts for raw file reading and writing. David has written extensively on raw files for magazines including PC Magazine and Outdoor Photographer, and in his books on the Nikon D1 series of cameras, and has written DigitalPro for Windows, an image management system including a raw file reader and writer that is featured in this month's (June 2007) Dr. Dobb's Journal.

Day 24: Dequantization, Texture, and Superresolution  Speaker: Lance Williams & Diego Rother  SLIDES  and more SLIDES

Quantization is an intrinsic feature of digital signals, an unavoidable nonlinearity in representation. It's instructive to analyze scalar quantization as part of the signal sampling process, how it characterizes the signal, and how signals so characterized can best be reconstructed. The resulting process has many advantages over conventional interpolation. It can reconstruct image structures such as edges and contours with sub-sample precision, an important ingredient of "superresolution." An open question is whether analogous techniques have a role to play in vector quantization. In images, vector quantization is often used spatially, to encode groups of pixels at a time. Typically this is used for data compression. Recent work in texture synthesis, however, appeals to vector quantization to characterize and reproduce the joint statistics of pixel values in a "random" field. In current usage, this is an optimization for what would otherwise be a big search; vector quantization is used to index spatial blocks of "texture." In an early effort in this area, vector quantization was used to synthesize texture by capturing correlations among spatial VQ blocks at different scales. This idea leads to a characterization and interpolation of texture that delivers fine detail at all scales, in the manner of a fractal. After a very brief review of leading texture synthesis techniques, some recent work in continuous scaling of image textures will be demonstrated, and the prospects for combining the superresolution of image structures and image textures will be discussed.
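A minimal example of the conventional scalar quantizer and midpoint reconstruction that the talk takes as its baseline (the talk's point being that smarter, structure-aware reconstruction can beat it):

```python
def quantize(x, step):
    """Uniform scalar quantizer: index of the cell containing x."""
    return round(x / step)

def dequantize(index, step):
    """Conventional reconstruction at the cell center. Round-trip error
    is bounded by step/2 -- the bound that smarter dequantization
    exploits signal structure to improve on."""
    return index * step

step = 0.25
errors = [abs(dequantize(quantize(v, step), step) - v)
          for v in (0.0, 0.1, 0.37, 0.9)]
```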

Day 25: Open source based high resolution cameras Speaker: Andrey N. Filippov
Andrey will explain the designs and applications of Elphel, Inc.'s intelligent, network-enabled cameras based on open source hardware and software. Google currently uses Elphel cameras for book scanning and for capturing street imagery in Google Maps. Andrey hopes Elphel's newest modular cameras, the Model 353 and the Model 363, will attract software engineers and FPGA hardware engineers interested in exploring high-definition videography and other innovative applications.

Day 26: Image quality testing & real-world challenges Speaker: Norman Koren   SLIDES

Day 27: Focus on Resolution: Degradations in Image Acquisition Speaker: Ken Turkowski  SLIDES
We investigate the digital image acquisition process for factors that adversely affect the quality of acquired imagery. The physical properties of optical systems impose a fundamental limit to the resolution of captured imagery. Real-world optics manifest several types of aberrations, and we show how these types of aberrations affect resolution. Image acquisition devices are imperfect, and have aliasing and other sampling artifacts that affect resolution. We derive the 70% rule for digital imagery, and verify it experimentally. There is a distinction between resolution and...

Day 28: Capturing more Light: Pragmatic use of HDR Capture and Tonemapping Speaker: Uwe Steinmueller
The high contrast of normal sunny daylight scenes is often hard to capture with a single shot. HDR images work around these limitations by combining multiple images. This talk is about the ins and outs of handling a higher dynamic range, and of course also the limitations of this technique. The content is the result of extensive practical work photographing urban...

Day 29: Photographing VR Panoramas Speaker: Scott Highton
Scott, one of the pioneers of virtual reality photography, will present an overview of methods and techniques for photographing VR panoramas. While VR panoramas have become common for online tours in the real estate and travel industries, where low-quality point-and-shoot technique seems to prevail, Scott focuses on the higher-end and higher-quality approaches that yield memorable and evocative imagery. He was the first independent photographer contracted by Apple to work with and test QuickTime VR, as well as an early photographic consultant and contract photographer during the development of IPIX's PhotoBubble technology. Specializing in photography of extreme locations and environments, he was the first to use both technologies underwater. Scott has been a commercial photographer, documentary cinematographer, and writer for close to 30 years, and is in the process of finishing his long-awaited book on virtual reality photography techniques. He has lectured at a number of photo industry events, and produces the Virtu...

Day 30: Imaging optics for the next decade Speaker: Robert E. Fischer  (THIS IS THE BEST TALK)
Digital cameras in their many forms will continue to be one of the primary drivers towards new technologies in optics as well as improvements of classical technologies. This has been well illustrated in the past 5-10 years which has seen, for example, the development of compression molded glass aspheric lenses for improved performance and packaging. The incorporation of injection molded plastic lenses and possibly hybrid refractive/diffractive surfaces will grow. Furthermore, as the trend continues towards smaller pixels as well as more pixels in a given sensor, the imaging optics will be further driven towards higher image quality. Zoom lenses will increase in their zoom range, yet there will be a continuing emphasis towards smaller and smaller packaging. The optics and their associated mechanics will need to be more robust with respect to stray light such as flare, glare, ghost images, and other undesirable image anomalies. And our optics must be more robust with respect to environmental effects such as thermal soaks and gradients. And with all of the above, customers will want lower cost too. It is going to be a fun ride over the next 5-10 years so fasten your seat belt and hold on real tight to the safety bar!

Robert is CEO of Optics 1 in Westlake Village, CA, a past president of the SPIE, and a winner of that society's highest award, the Gold Medal for outstanding engineering or scientific accomplishments in optics and electro-optics. Mr. Fischer's technical interests are in optical system design and engineering, in particular lens design. He is also interested in optical component and system manufacturing, assembly, and testing. His interests extend from the deep UV through the visible and on to the thermal infrared. He is known for his tireless efforts to advance optical science, engineering and scholarship. He served as a book editor of the McGraw-Hill Series on Optical and Electro-Optical Engineering, and as executive editor of OE Reports, bringing timely and practical information to professionals in the field.

Day 31: Color Balance: Babies, Rugs & Sunsets Speaker: Paul Hubel
Achieving pleasing color balance is one of the most important and difficult problems in photographic systems; if the color balance is off, other image quality attributes drop in importance and the result is unacceptable. In this talk I will discuss the differences between color balance and white balance for both photographic and machine vision applications, and I will outline the literature of the subject. I will explain some of the more basic and some of the more advanced methods and relate these to complexity and system calibration issues. I will give several examples that show how some methods can fail and why some images can be extremely difficult. I will touch on the relationship between color balance and color perception and how this differentiates photographic systems from machine vision systems.
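One of the "basic methods" the abstract alludes to is gray-world balancing: scale each channel so the scene averages out to neutral gray. A sketch, with the classic failure mode built right into its assumption (a frame-filling red rug is not gray on average, so it gets miscorrected):

```python
def gray_world(pixels):
    """Gray-world white balance for a list of (r, g, b) pixels in [0, 1].

    Scales each channel so its mean matches the overall mean, under the
    assumption that the scene averages to neutral gray.
    """
    n = len(pixels)
    means = [sum(px[c] for px in pixels) / n for c in range(3)]
    target = sum(means) / 3.0
    gains = [target / m if m else 1.0 for m in means]
    return [tuple(px[c] * gains[c] for c in range(3)) for px in pixels]

balanced = gray_world([(0.8, 0.4, 0.2), (0.4, 0.2, 0.1)])
```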

Dr. Paul M. Hubel has been working as Chief Image Scientist at Foveon, Inc. since 2002. His work includes the design of image processing algorithms and sensor architectures for high, middle, and low end cameras. Before joining Foveon, Dr. Hubel worked for ten years as a Principal Project Scientist at Hewlett-Packard Laboratories working on color algorithms for digital cameras, photofinishing, scanners, copiers, and printers.

Dr. Hubel received his B.Sc. in Optics from The University of Rochester in 1986, and his D.Phil. from Oxford University in 1990. His D.Phil thesis is titled "Colour Reflection Holography". As a graduate student, Dr. Hubel worked part-time at the Rowland Institute for Science under Dr. E.H. Land and later as a Post Doctoral Fellow at the MIT-Media Laboratory. Dr. Hubel has published over 30 technical papers, book chapters, and authored 25 patents. Dr. Hubel is a member of IS&T and SPIE, and has served as the technical and general chair of the IS&T/SID Color Imaging Conference.

Day 32: Art, Science and Reality of High Dynamic Range (HDR) Imaging Speaker: John McCann
High Dynamic Range (HDR) image capture and display has become an important engineering topic. The discipline of reproducing scenes with a high range of luminances has a five-century history that includes painting, photography, electronic imaging and image processing. HDR images are superior to conventional images. There are two fundamental scientific issues that control HDR image capture and reproduction. The first is the range of information that can be measured using different techniques. The second is the range of image information that can be utilized by humans. Optical veiling glare severely limits the range of luminance that can be captured and seen.

In recent experiments, we measured camera and human responses to calibrated HDR test targets. We calibrated a 4.3-log-unit test target, with minimal and maximal glare from a changeable surround. Glare is an uncontrolled spread of an image-dependent fraction of scene luminance in cameras and in the eye. We use this standard test target to measure the range of luminances that can be captured on a camera's image plane. Further, we measure the appearance of these test luminance patches. It is the improved quantization of digital data and the preservation of the scene's spatial information that cause the improvement in quality in HDR reproductions. HDR is better than conventional imaging, despite the fact that the multiple-exposure HDR reproduction of luminance is inaccurate. This talk describes the history of HDR image processing techniques including painting, photography, and electronic image processing (analog and digital) over the past 40 years. It reviews the development of Retinex theory, and other spatial-image-processing algorithms, that calculate appearance in images from arrays of radiances.

John McCann received a B.A. degree in Biology from Harvard University in 1964. He worked in, and later managed, the Vision Research Laboratory at Polaroid from 1961 to 1996. He has studied human color vision, digital image processing, large format instant photography and the reproduction of fine art. He is a Fellow of IS&T. He is a past President of IS&T and the Artists Foundation, Boston. He is currently consulting and continuing his research on color vision. He is the IS&T/OSA 2002 Edwin H. Land Medalist and IS&T 2005 Honorary Member and will be a 2008 Fellow of the Optical Society of America. 


A related talk by Dick Lyon at Berkeley pictures meeting: The Ideal Camera – An Outside-the-Box Analysis – slides

Richard Lyon's site on Phototech talks.

Video Surveillance System That Reasons Like a Human

From Slashdot: Video Surveillance System That Reasons Like a Human

"BRS Labs has created a technology it calls Behavioral Analytics which uses cognitive reasoning, much like the human brain, to process visual data and to identify criminal and terroristic activities. Built on a framework of cognitive learning engines and computer vision, AISight, provides an automated and scalable surveillance solution that analyzes behavioral patterns, activities and scene content without the need for human training, setup, or programming."

AISight™ from BRS Labs.

Anti-Photo Shield

Russian Billionaire Installs Anti-Photo Shield on Giant Yacht [Wired]
Roman Abramovich zaps snappers with laser shield [Times]
Celebrity Photographer ‘Laser Shield’ - Is It Legal? [Amateur Photographer]

Sounds good, but I'm not too sure how well it will work in practice.

What I did find that works was ultra-high-powered IR LEDs.
Cameras either white out, or the AGC kicks in and your face is blacked out.

Another interesting thing is that people have to avert their eyes from looking directly at you, but because they cannot see IR they never become consciously aware that they are avoiding looking at you.

Staring directly at one of these high-power IR sources is like looking into the sun: your eyes bug out and you are eventually forced to look away. I suppose you could go blind if you insist on staring. I'd bet these would work fantastically on a ship.

As for the article, it mentions:
Lasers sweep the surroundings and when they detect a CCD, they fire a bolt of light right at the camera to obliterate any photograph.

Infrared lasers detect the electronic light sensors in nearby cameras, known as charge-coupled devices. When the system detects such a device, it fires a focused beam of light at the camera, disrupting its ability to record a digital image.
The beams can also be activated manually by security guards if they spot a photographer loitering.

Well, I don't see any way to detect a CCD, or these days a CMOS image sensor. I guess there could be a bit of a "red eye" retroreflection effect, but then it would also trigger when a human looks at it.

And secondly, firing a laser while the camera's shutter is not open will do nothing to the film, or in the digital case, to the stored image. But if the camera operator was looking through a reflex (through-the-lens) viewfinder, you'd probably blow the retina out of the back of his eye, leaving him permanently blind.

In addition, lasers operate at only a few select frequencies, and there are already some excellent interference filters that can block just those narrow bands. So if you know what laser is being used, just place a $500 filter over the camera for that wavelength and you'd be able to keep snapping away, completely unaffected.
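An idealized model of that countermeasure shows the idea: a notch (band-stop) interference filter kills the laser line and passes everything else. The numbers are illustrative; real filters have finite edge steepness and less-than-ideal transmission:

```python
def transmission(wavelength_nm, notch_center_nm, notch_width_nm):
    """Idealized notch interference filter.

    Returns the fraction of light transmitted at a given wavelength:
    zero inside the stop band around the laser line, a typical
    broadband transmission (0.9, an assumed figure) everywhere else.
    """
    if abs(wavelength_nm - notch_center_nm) <= notch_width_nm / 2.0:
        return 0.0   # inside the stop band: laser blocked
    return 0.9       # rest of the visible spectrum passes
```

So a filter notched at 532 nm stops a green laser cold while the camera keeps seeing the rest of the scene.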

UPDATE (9/22/2009): I learned about another article that makes some good points:
How to ZAP a Camera: Using Lasers to Temporarily Neutralize Camera Sensors by Michael Naimark
A Google search of "anti paparazzi device" yielded two hits, both about near-identical devices called "Eagle Eye" and "Backflash" (and both unfindable as actual products). These devices apparently couple a light sensor to a flash unit: when a flash of light is detected, the devices instantaneously flash back. They're both small, made to be worn, and claim to obscure a portion of the photographic image near them whenever a flash is used (ostensibly as protection against intruding photographers). If these devices work, they obviously would only work for still, flash photography.
Antisensor lasers are capable of scanning a region looking for "glints" of reflected light coming from lenses aimed at them, then switching to a high energy laser capable of overloading or destroying the sensor (or whatever) behind the lens. The U.S. developed such a system called the Stingray and deployed two tank-based prototypes in Saudi Arabia during the Gulf War (they allegedly were not used). The Stingray's range of operation is claimed to be several kilometers. It's not clear if (or how) the Stingray could discriminate between lenses and eyeballs, or between sensors behind a lens and human eyeballs behind a lens.
Maybe I was wrong. I don't think you can destroy one, but I can see how a little image processing could distinguish a telephoto lens from an eyeball, then shine a low-power (~5 mW) laser at that camera; without any sort of filtering, that would ruin any photographs they try to take, washing out the image with bright red or green, or even blue, as desired. But anything that could damage the film or camera would also injure the photographer.
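A sketch of the detection half of that idea: threshold for very bright pixels and keep only small connected blobs, since a lens glint is a tiny, intense spot rather than a broad highlight. A real system would also gate on the illuminating laser (the glint should appear only when the laser is on); this just shows the image-processing core:

```python
def find_glints(image, threshold=200, max_size=4):
    """Find small, very bright spots (candidate lens glints) in a
    grayscale image given as a list of rows of pixel values.

    Simple threshold + 4-connected flood fill; blobs larger than
    max_size pixels are rejected as ordinary highlights (windows,
    chrome, etc.). Returns a list of (row, col) centroids.
    """
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    glints = []
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and not seen[y][x]:
                stack, blob = [(y, x)], []
                seen[y][x] = True
                while stack:                      # flood-fill one blob
                    cy, cx = stack.pop()
                    blob.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx]
                                and image[ny][nx] >= threshold):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(blob) <= max_size:         # small spot: glint
                    gy = sum(p[0] for p in blob) / len(blob)
                    gx = sum(p[1] for p in blob) / len(blob)
                    glints.append((gy, gx))
    return glints
```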

If anyone needs help defeating this, drop me a line.

Sunday, September 20, 2009

Sousveillance - inverse surveillance

Sousveillance (pronounced /suːˈveɪləns/, French pronunciation: [suvɛjɑ̃s]) as well as inverse surveillance are terms coined by Steve Mann to describe the recording of an activity from the perspective of a participant in the activity, typically by way of small portable or wearable recording devices that often stream continuous live video to the Internet.

Inverse surveillance is a proper subset of sousveillance with a particular emphasis on the "watchful vigilance from underneath" and a form of surveillance inquiry or legal protection involving the recording, monitoring, study, or analysis of surveillance systems, proponents of surveillance, and possibly also recordings of authority figures and their actions. Inverse surveillance is typically an activity undertaken by those who are generally the subject of surveillance, an analysis of the surveilled from the perspective of a participant in a society under surveillance.

Sousveillance typically involves community-based recording from first person perspectives, without necessarily involving any specific political agenda, whereas inverse-surveillance is a form of sousveillance that is typically directed at, or used to collect data to analyze or study, surveillance or its proponents.

Read the Wiki Entry.

With the advent of cell phone cameras that can record video, wearable computers, and augmented reality gear, this is eventually going to become ubiquitous, leveling the playing field.

Somewhat related:

We Live in Public (2009)

The new film We Live in Public focuses on Josh, whom the film calls "the greatest internet pioneer you've never heard of." *cough* The film offers a window into Josh's psyche, and the impacts of living in a digital, recorded age. The director talks about this web entrepreneur's fascination with privacy, and with recording life's every moment. This includes a six-month stint living under 24-hour live surveillance online, which led to his mental collapse.

Saturday, September 19, 2009

3D imaging off a microscope using a microlens array

Paper here: Light Field Microscopy

The video mostly says it all. I wonder if we can do this with a real-world HD video stream? If anyone has any ideas, contact me; I'd love to work on this in open-air regular video and photography.
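For anyone curious, the refocusing half of the technique is easy to sketch in one spatial dimension: shift each sub-aperture view in proportion to its position behind the microlens, then average. Integer shifts only, for clarity; this is the shift-and-add idea from the plenoptic literature, stripped way down:

```python
def refocus(light_field, shift):
    """Synthetic refocusing by shift-and-add.

    light_field[u] is a 1-D sub-aperture image (a list of floats);
    view u is shifted by u * shift pixels before averaging. The shift
    that matches an object's disparity brings it into sharp focus.
    """
    n = len(light_field[0])
    out = []
    for x in range(n):
        total = count = 0
        for u, view in enumerate(light_field):
            xs = x + u * shift            # sheared sample position
            if 0 <= xs < n:
                total += view[xs]
                count += 1
        out.append(total / count if count else 0.0)
    return out

# A point source whose image moves one pixel per view (disparity -1):
lf = [[0.0] * 7 for _ in range(3)]
for u in range(3):
    lf[u][3 - u] = 1.0
sharp = refocus(lf, -1)    # refocused at the point's depth
blurred = refocus(lf, 0)   # focused at a different depth
```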

Interestingly, it is possible to do the reverse too:
Development of a natural 3D display
Kengo Kikuta and Yasuhiro Takaki

Wednesday, September 16, 2009

HD video off android phone!

See this amazing demo of HD video off a cell phone!

ZiiLABS is a wholly-owned subsidiary of Creative Technology Ltd., the same people who made the Sound Blaster audio card (one of the first audio cards for the PC). They say they have over 800 R&D engineers and have invested US$1 billion and 10,000 man-years in media processing solutions. They have offices in the UK, China, USA, and Singapore.

Well I am impressed, considering this is the first time I have heard about them.

They make the ZMS-05 media processor / chipset, claimed to scale from 10 gigaflops up to petaflops for handling media-intensive applications.

It uses what they call StemCell Computing technology, which sounds like a type of cell processor.

This is very similar to the Enumera chip, the 25x, that I was trying to prototype with Chuck Moore. Man, I can't believe that opportunity fell apart!

They have a development board for their ZMS-05 chip.

ZiiLABS provides robust Linux-based Board Support Packages (BSPs) supporting the ZMS-05-based Plaszma hardware platforms such as the Zii EGG, ZMS-05 EVM, and ZMS-05 System Module. These reference BSPs ensure that a fully operational kernel supporting the native Plaszma OS or Android is ready for use, and they support the board-specific modules and hardware features exposed on each platform.

Here is one of their press releases.

The web site they give is

Thursday, September 10, 2009

6 Displays off one graphics card.

Eyefinity pushes over 24 million pixels with one next-gen Radeon

Using one graphics card and GPU, AMD drives six monitors at a resolution of 2560x1600 each, for a total of 24,576,000 pixels (24.6 megapixels).

With their driver it can be configured to appear in Windows as one gigantic 7680x3200 screen, so software does not have to be adapted to support it. Pre-existing games should just come up and work.
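The arithmetic, for the record (six 2560x1600 panels arranged three wide by two high):

```python
# Six 2560x1600 panels in a 3-wide, 2-high grid, as in the Eyefinity demo.
cols, rows = 3, 2
panel_w, panel_h = 2560, 1600
desktop_w = cols * panel_w      # combined desktop width
desktop_h = rows * panel_h      # combined desktop height
total_pixels = desktop_w * desktop_h
```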

Wednesday, September 02, 2009

Mini projectors


WowWee™ :: Cinemin™ Swivel
WowWee is a leading designer, developer, marketer and distributor of innovative hi-tech consumer robotic and entertainment products.

Now I like this... but now I need to upgrade my phone... I saw that Microvision had one of these that works inside the phone. I haven't even looked at the costs...

This one is $349.99; you can get almost the same thing from a toy store for $100, called an EyeClops. I have one, and it's really cool.