Thursday, May 31, 2012


---------- Forwarded message ----------
From: "3D CineCast" <>
Date: May 31, 2012 4:05 AM
Subject: 3D CineCast
To: <>

3D CineCast


Posted: 30 May 2012 07:13 AM PDT

Broadcast contribution applications like newsgathering, event broadcasting or content exchange currently benefit from the wide availability of high-speed networks. These high-bandwidth links open the way to higher video quality and to distinctive operational requirements such as lower end-to-end delays or the ability to store the content for later editing.

Because only light video compression is needed, the complexity of common long-GOP codecs can be avoided, and simpler methods like intra-only compression can be considered. These techniques compress pictures independently, which is highly desirable when low latency and error robustness are of major importance. Several intra-only codecs, like JPEG 2000 or MPEG-2 Intra, are available today, but they might not meet all broadcasters' needs.

AVC-I, which is simply an intra-only version of H.264/AVC compression, offers a significant bit-rate reduction over MPEG-2 Intra, while keeping the same advantages in terms of interoperability. AVC-I was standardized in 2005, but broadcast contribution products supporting it were not launched until 2011. It may therefore be seen as a brand-new technology, and studies have to be performed to evaluate whether it matches currently available technologies in operational use cases.

Why Intra Compression?
Video compression uses spatial and temporal redundancies to reduce the bit rate needed to transmit or store video content. When exploiting temporal redundancies, predicted pixels are found in already decoded adjacent pictures, while spatial prediction is built with pixels found in the same picture. Long-GOP compression makes use of both methods, and intra-only compression is restricted to spatial prediction.

Long-GOP approaches are more efficient than intra-only compression, but they also have distinct disadvantages:
  • Handling picture dependencies when seeking in a file may be complex. This makes editing a long-GOP file a difficult task.

  • Any decoding error might spread from a picture to the following ones and span a full GOP. This means that a single transmission error can affect decoding for several hundred milliseconds of video and, therefore, be very noticeable.

  • Encoding and decoding delay might be higher with long-GOP techniques than with intra-only because of the complexity of the compression tools.
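
The error-robustness point above is easy to quantify: a transmission error in a long-GOP stream can corrupt every dependent picture up to the next GOP boundary, while an intra-only error is confined to a single picture. A minimal sketch (the GOP length and frame rate below are illustrative, not figures from the article):

```python
def error_span_ms(gop_length_frames: int, frame_rate_hz: float) -> float:
    """Worst-case visible duration of a single transmission error."""
    return 1000.0 * gop_length_frames / frame_rate_hz

# A hypothetical 12-frame GOP at 25 fps versus intra-only coding:
long_gop = error_span_ms(12, 25.0)  # 480 ms -- "several hundred milliseconds"
intra = error_span_ms(1, 25.0)      # 40 ms -- one picture only

print(f"long-GOP worst case: {long_gop:.0f} ms, intra-only: {intra:.0f} ms")
```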

Another problem inherent to long-GOP compression relates to video quality that varies significantly from picture to picture. For example, the figure below depicts the PSNR along the sequence ParkJoy when encoding it in long-GOP and in intra-only. While the quality of the long-GOP pictures is always higher than that of their intra-only counterparts, it varies considerably. On the other hand, the quality of consecutive intra-only coded pictures is much more stable.
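
The metric plotted in that figure, PSNR, is derived from the mean squared error between the source and decoded pictures. A generic per-picture computation (a sketch, not the measurement code behind the figure):

```python
import math

def psnr(reference, decoded, max_value=255):
    """Peak signal-to-noise ratio, in dB, between two equal-length pixel lists."""
    assert len(reference) == len(decoded)
    mse = sum((r - d) ** 2 for r, d in zip(reference, decoded)) / len(reference)
    if mse == 0:
        return float("inf")  # identical pictures
    return 10.0 * math.log10(max_value ** 2 / mse)

# An 8-bit picture that is off by one code value everywhere scores ~48.13 dB.
print(round(psnr([255, 128, 0, 64], [254, 127, 1, 65]), 2))
```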

Long-GOP versus intra-only compression.

Therefore, intra-only compression might be a better choice than long-GOP when:
  • Enough bandwidth is available on the network;
  • Low end-to-end latency is a decisive requirement;
  • Streams have to be edited; and
  • The application is sensitive to transmission errors.

Several intra-only codecs are currently available to broadcasters to serve the needs of contribution applications:
  • MPEG-2 Intra — This version of MPEG-2 compression is restricted to the use of I-frames, removing P-frames and B-frames.

  • JPEG 2000 — This codec is a significantly more efficient successor to JPEG that was standardized in 2000.

  • VC-2 — Also known as Dirac-Pro, this codec has been designed by BBC Research and was standardized by SMPTE in 2009. Like JPEG 2000, it uses wavelet compression.

Older codecs like MPEG-2 Intra benefit from a large base of interoperable equipment but lack coding efficiency. On the other hand, more recent formats like JPEG 2000 are more efficient but lack interoperability. Consequently, there is a need for a codec that is both efficient and able to ensure interoperability between equipment from various vendors.

What is AVC-I?
AVC-I designates a fully compliant variant of the H.264/AVC video codec restricted to the intra toolset. In other words, it is just plain H.264/AVC using only I-frames. However, some form of uniformity is needed to ensure interoperability between equipment provided by various vendors. Therefore, ISO/ITU introduced a precise definition in the form of profiles (compression toolsets) in the H.264/AVC standard.

H.264/AVC Intra Profiles
Provision for using only I-frame coding was introduced in the second edition of the H.264/AVC standard with the inclusion of four specific profiles: High 10 Intra, High 4:2:2 Intra, High 4:4:4 Intra and CAVLC 4:4:4 Intra. They can be described as simple sets of constraints over profiles dedicated to professional applications. The table below gives an overview of the main limitations introduced by these profiles:

Because the intra profiles are defined as reduced toolsets of commonly used H.264/AVC profiles, they don't introduce new features, technologies or even stream syntax. Therefore, AVC-I video streams can be used within systems that already support standard H.264/AVC video streams. This enables the use of file containers like MPEG files or MXF, transports like MPEG-2 TS or RTP, audio codecs like MPEG Audio or Dolby Digital, and many metadata standards.

AVC-I and JPEG-2000 Artifacts
Below 100Mb/s, a problematic defect was observed on both codecs: pictures can exhibit an annoying flicker. This issue is caused by temporal instability in the coding decisions, amplified by noise. It seems to appear below 85Mb/s with JPEG 2000 and below 75Mb/s with AVC-I, and it worsens as the bit rate decreases. At 50Mb/s and below, the flicker is extremely problematic, and it was felt that the video quality was too low for high-quality broadcast contribution applications, even when the source is downscaled to 1440 × 1080 or 960 × 720.

Around 100Mb/s, both codecs perform well, even on challenging content. Pictures are flicker-free, and coding artifacts are difficult to notice. However, noise and film grain look low-pass filtered, and their structure sometimes seems slightly modified. Even so, this was not felt to be an important issue.

All these defects become less visible as the bit rate is increased. But while AVC-I picture quality rises uniformly, some JPEG 2000 products may still exhibit blurriness artifacts, even at 180Mb/s. Using available JPEG 2000 contribution pairs, no bit rate was found at which compression is visually lossless on all high-definition broadcast content. On the other hand, some AVC-I encoders appeared visually lossless at 150Mb/s, even when encoding grainy content like movies.

Bit Rates in Contribution
The subjective analysis of an actual AVC-I implementation on various broadcast contribution content allows us to categorize its usage according to the available transmission bandwidth. The table below presents findings on 1080i25 and 720p50 high-definition formats:

Because AVC-I does not make use of temporal redundancies, 30Hz content (1080i30 or 720p60) is more difficult to encode than 25Hz material. To achieve the same perceived video quality, bit rates therefore have to be raised by 20 percent.
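
The 20-percent figure translates directly into transmission budgets. A quick worked example (the 100Mb/s starting point is illustrative):

```python
def rate_for_30hz(rate_25hz_mbps: float, uplift: float = 0.20) -> float:
    """Bit rate needed for 30Hz material to match the perceived quality
    achieved on 25Hz material at the given rate."""
    return rate_25hz_mbps * (1.0 + uplift)

# 100 Mb/s at 1080i25 would call for roughly 120 Mb/s at 1080i30.
print(f"{rate_for_30hz(100.0):.0f} Mb/s")
```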

The availability of high-speed networks for contribution applications enables broadcasters to use intra-only video compression codecs instead of the more traditional long-GOP formats. This allows them to benefit from distinctive advantages like: low encoding and decoding delays; more constant video quality; easy editability when the content is stored; and lower sensitivity to transmission errors. However, currently available intra-only video codecs force a choice between interoperability and coding efficiency.

AVC-I, being just the restriction of standard H.264/AVC to intra-only coding, avoids making difficult compromises. It is more efficient than other available intra-only codecs, but, more importantly, it benefits from the strong standardization efforts that permitted H.264/AVC to replace MPEG-2 in many broadcast applications.

Finally, a subjective study across a range of products from multiple vendors identified specific coding artifacts that may occur and confirmed the visual superiority of AVC-I versus MPEG-2 and JPEG 2000, when measured at high bit rates.

Pierre Larbier, CTO for ATEME, Broadcast Engineering

EBUCore: the Dublin Core for Media

Posted: 30 May 2012 05:30 AM PDT

EBUCore was first published in 2000. It was originally a set of definitions for audio archives, applied to the Dublin Core, which is itself a generic set of descriptive terminology that can be applied to any content. XML was then in its infancy but its use would grow dramatically, demanding more structured information to describe audiovisual content. Since then, other semantic languages have greatly influenced the way this information is modelled. EBUCore followed this evolution to become what it is today: the Dublin Core for media, a framework that can be used to describe just about any media content imaginable.

EBUCore is the fruit of well-defined requirements and an understanding of user and developer habits. User friendliness, flexibility, adaptability and scalability are more important than richness and comprehensiveness allied to impossible compliance rules. The richer the metadata, the higher the likelihood that implementers will reinvent their own. History is full of such examples. The golden rule for EBUCore was and remains "keep it simple and tailor it for media".

EBUCore covers 90% of users' needs and its use is no longer restricted to audio or archives. Based on the simple and flexible EBU Class Conceptual Data Model (CCDM), EBUCore's ontology (categories and structure), which is expressed in RDF/OWL (Resource Description Framework/Web Ontology Language), can be used right through to the delivery of content to the end user. It responds to the need for more effective querying. It also paves the way for effective metadata enrichment using Linked Open Data (LOD).

EBUCore was designed to be a metadata specification for "users with different needs" and duly serves this goal. Delegates at the EBU's Production Technology Seminar last January heard a wealth of evidence pointing to the key role that EBUCore is now playing. Several speakers explained how they have deliberately chosen and benefited from EBUCore.

The EBU-AMWA FIMS project, creating a vendor-neutral specification to interconnect production equipment, has adopted EBUCore. The FIMS 1.0 specification uses EBUCore as its core descriptive and technical metadata. FIMS is a vital project for the future of file-based production and feedback received from participants has influenced the most recent version of EBUCore. Early adopters of FIMS, such as Bloomberg, are using this metadata.

The UK's Digital Production Partnership (DPP), which recently published its new specification for file-based programme delivery, is mapping its metadata to EBUCore and TV-Anytime. (TV-Anytime was co-founded by the EBU, who chaired the metadata activities and now actively maintains the specification on behalf of ETSI).

The work on EBUCore and the EBU's CCDM greatly influenced the development of the W3C Ontology for Media Resources, and vice versa. MA-ONT, as it is known, is a subset of the EBUCore ontology, and the RDF/OWL representation rules are common to both. This work is also being used to propose extensions for describing TV and radio programmes and associated services and schedules.

EBUCore is also used as the solution for metadata aggregation in EUScreen, the European audiovisual archives portal and now a key contributor to Europeana, the European digital library. Two forms of EBUCore are used in this context, the EBUCore XML metadata schema and also the EBUCore RDF ontology.

Other on-going or planned activities using EBUCore include:
• EBUCore will be listed as a formal metadata type by SMPTE. The EBU is arranging for software to be available to embed EBUCore metadata in formats such as XML or JSON.

• The NoTube project has combined egtaMeta (an EBU specification extending EBUCore for the exchange of commercials) and TV-Anytime to develop innovative solutions in targeted advertising.

• EBUCore is also used in combination with MPEG-7 in the VISION Cloud project exploring technologies for storage in the cloud. The EBU is directly involved in the definition and promotion of the new MPEG-7 AVDP profile.

• Singapore's national broadcaster, MediaCorp, has implemented and adapted EBUCore/SMMCore into its internal company metadata framework.

• The EBU is engaged with several broadcasters for the adaptation of EBUCore in different contexts such as a common metadata format for file exchange.

The above is just a small selection of developments. For example, EBUCore is also republished by the Audio Engineering Society (AES) as AES60, and is available in XML, SMPTE KLV, JSON and RDF/OWL.

Watch this space as the EBU will soon publish a user-friendly EBUCore mapping tool on its website.

By Jean-Pierre Evain, EBU Technical Magazine

Let's DASH!

Posted: 30 May 2012 05:08 AM PDT

In the last century, access to video delivered over networks was almost exclusively dominated by scheduled consumption on dedicated devices – broadcasters distributed premium content at a specific time to TV sets. Broadband internet, both fixed and mobile, as well as highly capable devices such as smartphones and tablets have changed video consumption patterns dramatically in recent years. Video is now consumed on-demand on a multiplicity of devices according to the schedule of the user.

Recent studies conclude that mobile data traffic will grow by a factor of 26 between 2011 and 2016 and that by 2016 video traffic will account for at least two-thirds of the total. The popularity of video also leads to dramatic data needs on the fixed internet. In North America, real-time entertainment traffic (excluding p2p video) today contributes more than 50% of the downstream traffic at peak periods, with notably 30% from Netflix and 11% from YouTube.

HTTP Delivers
The astonishing thing is that these data needs are not driven by traditional broadcast, IP multicast or managed walled-garden services, but by over-the-top video providers. One of the cornerstones of this success is the use of HTTP as the delivery protocol. HTTP enables reach, universal access, connectivity to any device, fixed-mobile convergence, reliability, robustness, and the reuse of existing delivery infrastructure for scalable distribution.

One of the few downsides of HTTP-based delivery is the lack of bitrate guarantees. This can be addressed by enabling the video client to dynamically switch between different quality/bitrate versions of the same content and therefore to adapt to changing network conditions. The provider offers the same media content in different versions and the client can itself select and switch to the appropriate version to ensure continuous playback. The figure below shows a typical distribution architecture for dynamic adaptive streaming over HTTP. HTTP-based Content Delivery Networks (CDNs) have been proven to provide an easy, cost-efficient and scalable means for large-scale video streaming services.
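
The client-side switching logic described above reduces to picking the highest-rate version that fits the currently measured throughput. A minimal sketch (the bitrate ladder and safety margin are invented for illustration):

```python
def pick_representation(bitrates_bps, measured_throughput_bps, safety=0.8):
    """Choose the highest bitrate that fits within a safety margin of the
    measured throughput; fall back to the lowest representation otherwise."""
    budget = measured_throughput_bps * safety
    candidates = [b for b in sorted(bitrates_bps) if b <= budget]
    return candidates[-1] if candidates else min(bitrates_bps)

ladder = [1_000_000, 3_000_000, 6_000_000, 12_000_000]  # same content, four versions
print(pick_representation(ladder, 8_000_000))  # link carries the 6 Mb/s version
print(pick_representation(ladder, 900_000))    # degraded link: lowest version
```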


Setting a Standard
MPEG has taken the lead on defining a unified format for enabling Dynamic Adaptive Streaming over HTTP (DASH). MPEG-DASH was ratified in 2011 and published as a standard (ISO/IEC 23009-1) in April 2012. It is an evolution of existing proprietary technologies that also addresses new requirements and use cases. DASH enables convergence by addressing mobile, wireless and fixed access networks, different devices such as smartphones, tablets, PCs, laptops, gaming consoles and televisions, as well as different content sources such as on-demand providers, broadcasters, or user-generated content offerings.

The standard defines two basic formats: the Media Presentation Description (MPD) uses XML to provide a manifest of the available content, its various alternatives, their URL addresses, and other characteristics; and Segments, which contain the actual media streams in the form of chunks, in single or multiple files. In the context of part 1 of MPEG-DASH the focus is on media formats based on the ISO Base Media File Format and the MPEG-2 Transport Stream.
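
The MPD/Segment split can be illustrated by parsing a toy manifest. The fragment below is hand-written for illustration (real MPDs carry many more attributes, such as profiles and segment timing), with the namespace and element names following the ISO/IEC 23009-1 schema:

```python
import xml.etree.ElementTree as ET

# A minimal, hand-written MPD fragment: one Period, one video
# AdaptationSet, two Representations at different bitrates.
MPD = """<?xml version="1.0" encoding="UTF-8"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static">
  <Period>
    <AdaptationSet mimeType="video/mp4">
      <Representation id="low" bandwidth="1000000"/>
      <Representation id="high" bandwidth="6000000"/>
    </AdaptationSet>
  </Period>
</MPD>"""

ns = {"dash": "urn:mpeg:dash:schema:mpd:2011"}
root = ET.fromstring(MPD)
reps = root.findall(".//dash:Representation", ns)
ladder = {r.get("id"): int(r.get("bandwidth")) for r in reps}
print(ladder)  # {'low': 1000000, 'high': 6000000}
```

A real client would next resolve each Representation's segment URLs and feed the chosen chunks to the decoder.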

With these basic formats MPEG-DASH provides for a very wide range of use cases and features, including support for server and client-side component synchronization, efficient trick modes, simple splicing and (targeted) ad insertion, and session metrics. DASH can also support multiple Digital Rights Management systems, content metadata, and advanced codecs including 3D video and multi-channel audio.

Towards Deployment
With the completion of the standard the focus has shifted towards deployment and commercialization of DASH. In this context MPEG will later this year publish Conformance Software and Implementation Guidelines and continues to work on client implementations and optimizations. This is especially relevant for a stable and consistent user experience under varying network conditions.

On the distribution side, coming optimizations include DASH for CDNs – to improve efficiency, scalability and user experience – along with integration into mobile networks and transition between unicast and multicast distribution.

The creation of the DASH Promoters' Group will help to address interoperability and promotional activities. The EBU is among the 50 major industry players that make up this group. Support is also provided for other standards planning to include MPEG-DASH to enable over-the-top video, including HbbTV, DLNA, the Open IPTV Forum and 3GPP. Furthermore, the W3C consortium is considering extensions to the HTML5 video tag that would aid the integration of DASH into web browsers.

The significant efforts currently under way to deploy DASH in a wide range of contexts raise the expectation that MPEG-DASH will become the format for dynamic adaptive streaming over HTTP.

By Thomas Stockhammer, EBU Technical Magazine
You are subscribed to email updates from 3D CineCast
To stop receiving these emails, you may unsubscribe now.
Email delivery powered by Google
Google Inc., 20 West Kinzie, Chicago IL USA 60610

Tuesday, May 29, 2012

Cook sizes up TV prospects for Apple - CNET Mobile

The Horse in Motion, by Leland Stanford, 1882

First edition of the documentation of the first recognized motion picture. $500 @ Feldman's of Menlo Park, California.

Saturday, May 26, 2012

Analyst: Apple should sell the 'world's first non-TV TV'

By  | May 24, 2012, 2:11pm PDT
Summary: The problem with TV isn’t the screen we watch it on, it’s what’s on the screen.
Speculation that Apple is preparing to enter the TV business is at fever pitch, but according to one analyst, the best way for the Cupertino giant to break into this market is by thinking outside the box and beginning to manufacture the “world's first non-TV TV”.
According to James McQuivey, vice president and principal analyst at Forrester, the “TV business is a tough nut to crack” because the “content is still controlled by monopolists unlikely to give Apple the keys to their content archives,” and that Apple introducing a new screen for people to watch content on is unlikely to change anything.
Apple, he claims, has to do “something very different”. And in his opinion, that “something very different” is the iHub.
“Apple should sell the world’s first non-TV TV,” writes McQuivey. “Instead of selling a replacement for the TV you just bought, Apple should convince millions of Apple fans that they need a new screen in their lives. Call it the iHub, a 32-inch screen with touch, gesture, voice, and iPad control that can be hung on the wall wherever the family congregates for planning, talking, or eating - in more and more US homes, that room is the dining room or eat-in kitchen.”
McQuivey believes that the key to success is not content, but apps.
“By pushing developers to create apps that serve as the hub of family life - complete with shared calendars, photo and video viewers, and FaceTime for chatting with grandma - this non-TV TV could take off, ultimately positioning Apple to replace your 60-inch set once it’s ready to retire.”
The problem with McQuivey’s giant, wall-mounted, multi-user iPad is that it doesn’t really bring anything new to the equation. Putting aside the ergonomic issues related to using a 32-inch wall-mounted touch screen device, what does this device do that can’t already be done with an iPad, a Mac, or, for that matter, a whole host of other devices?
Another problem I see with this idea is that while it side-steps the competition in the TV market by being a “non-TV TV,” the device will undoubtedly have to compete for wall/floor/shelf space with a TV. People have limited space to put anything as big as what McQuivey is proposing, and there’s a good chance that the space that he’s thinking that people are going to fill with an iHub is already filled with — you guessed it — a TV.
McQuivey mentions how Xbox 360 owners generate more online video views on TVs than viewers of any other device, but then fails to make the connection between the Xbox 360, which is a box that connects to almost every TV in existence, and the Apple TV, another box that connects to almost every TV in existence.
If Microsoft can change people’s viewing habits with a device that doesn’t have a screen, why does Apple need to make a device with a screen to achieve the same outcome?
As much as I would like to see Apple do something to revolutionize TV, I’m not convinced that any revolution will have anything to do with a screen whatsoever. The problem with TV isn’t the screen we watch it on, it’s what’s displayed on that screen.

Video Dance Floor -- LED Dance Floor light

I just thought this was really cool.  It's probably cheap enough to tile walls with it.

The LED video dance floor is a new digital display device. Through its control system you can show any video or Flash file from your computer on the floor, blending virtual stage scenery with a live performance. It uses a high-strength resin mask supported by solid die-cast aluminium, has a large load-bearing capacity, is easy to connect, and can be walked on directly thanks to its protective structure. Panels connect seamlessly to one another in any combination. The product is suitable for performances such as disco stages and concerts.
  • 31 DMX channels
  • Auto / sound operation
  • 8 groups of R/G/B LEDs
  • Lamp: 720 pcs 10mm LEDs (R240/G240/B240)
  • 3-pin XLR serial input/output
  • 3-pin power supply input/output
  • Input voltage: AC 90-250V, 50-60Hz
  • Fuse: T3.15A
  • Max power consumption: 80VA
  • Stand-alone and master/slave operation; auto and sound-activated modes
  • R, G, B additive color mixing
  • Dimensions (L×W×H): 100cm × 100cm × 15cm
  • Gross weight: 35kg
NEW LED Video Dance Floor P31.25 for Stage Indoor Light, LED Screen Display, Curtain Screen, LED Video Screen

Startups Try to Help Microsoft's Kinect Grow Up Beyond Gaming - Bloomberg

Wednesday, May 23, 2012

Cameras perched on power lines steal electricity

Fwd: $50,000 film budget XXXXX

---------- Forwarded message ----------
From: XXXX
Date: Wed, May 23, 2012 at 6:12 AM
Subject: $50,000 budget XXXXX
To: John Sokol <>

John -

I told you previously how you could do XXXXX for
$50,000 or even less utilizing some of the techniques that you can see
at the successful (7 seasons, 50 full episodes) "Hidden Frontier"
website (which is basically filmed for free in the guy's home using
"Green Screen" techniques).

But now, Marc Zicree (one of the writers from "Star Trek", "Babylon 5"
and "Sliders", etc.) has teamed up with one of the "Battlestar
Galactica" directors and they're shooting their own movie for about
$75,000 with special effects equivalent to "Avatar" and they told
everyone how they did it on one of the most recent (in the last week)
"Coast to Coast" shows right here (they start about 12 minutes in):

Here's the Kickstarter page - looks like they got their $75,000 in 3 days:

And here's a really good page on it:

And, as a reminder, here's the original "Hidden Frontier" page with
all 50 some episodes (I point out that their writers are nowhere close
to Marc Zicree above, but it's the techniques I wanted you to study).

Their other more recent series also use the same techniques:

Anyway, Marc Zicree and his team raised $75,000 in 3 days flat this
week at Kickstarter and they're now pushing for $150,000, but,
honestly, they had planned to be able to shoot their project for as
low as $50,000 if necessary, so they've got enough to do it right now
and they're definitely going to finish this thing which proves my
point again:

Yes, if you follow Marc's (or the "Hidden Frontier") approach as
described above, you can most definitely shoot the "9 Billion Names of
God" for about $50,000 or less and still come out with a quality film.


So now you know how to do it.

So if you were really serious about actually making XXXX all
along, then there's now no excuse.  Since it will now hardly cost
anything, all you have to do now is to start it BUT you have to also

Or to quote Yoda, "There is no 'try'.  There is only do."

Something for you to think about.

Fwd: The next Hollywood digital movie camera

From film producer friend.

---------- Forwarded message ----------

Your cell phone:

If they can really go 4K, that matches the best digital cameras in
Hollywood.  This literally means anyone with their own cell phone can make a full length movie (just remember to download your video
regularly to your editing computer) (although when the memristers arrive in the next year or so, you'll be able to store terrabytes on your cell phone).

Hollywood is going to shit bricks over this when it happens.

And I predict the first "Cell Phone-Made Movie Festival" a la Sundance within the next half decade.

The Real Reason Why They Repeat the Same 20 Songs on the Radio & TV Nationwide - True Skool Network

Tuesday, May 22, 2012

Microsoft Boosts Support for Multiple Monitors in Windows 8 | News & Opinion |

Remembering Eugene Polley, an Inventor of the TV Remote - Forbes


Motion control startup Leap Motion has demoed its Leap 3D motion control system, which can track motion to around 0.01mm accuracy — 100 times more accurate than the Kinect. Rather than taking Microsoft’s approach, Leap Motion creates a personal 3D workspace of about four cubic feet. The Leap consists of a small USB device with industry-standard sensors and cameras that, in tandem with the company’s software, can track multiple objects and recognize gestures. Leap’s designers showed off OS navigation and web browsing using a single finger, writing, pinch-to-zoom, precision drawing, 3D modeling, and gaming. From what we can see, it looks to be a very precise system, capable of recognizing objects in your hands and tracking them instead of your digits. Leap Motion is releasing an SDK and also handing out free sensors to “qualified developers” that want to develop for the system.

Thursday, May 17, 2012

Comcast Now Lets You Use 300GB of Bandwidth a Month [VIDEO]

Fwd: Chinese government movie agency uses Zaxel's Mathematically Lossless Compression to Archive Movies

---------- Forwarded message ----------
From: Nori Suzuki <>
Date: Thu, May 17, 2012 at 4:26 PM
Subject: Chinese government movie agency uses Zaxel's Mathematically Lossless Compression to Archive Movies
To: "" <>

Today Zaxel announced that the movie agency under the Chinese government has been using Zaxel's mathematically lossless compression to archive their digital intermediaries.

China's State Administration of Radio, Film, and Television (SARFT) controls television stations, radio stations, and movie studios nationwide.

SARFT's movie division first purchased Zaxel's servers with ZLC, Zaxel's mathematically lossless compression, ten years ago, and has been archiving digital intermediaries ever since.

ZLC is versatile: it works with more color spaces and bit depths than JPEG 2000 mathematically lossless compression. ZLC can encode the RGB, YUV, XYZ, and RAW color spaces at color depths of 8, 10, 12, and 16 bits. Moreover, ZLC easily adapts to color spaces and color depths not mentioned above.

ZLC runs fast: it compresses and decompresses 4K 10-bit RGB files at more than 60 frames per second on inexpensive graphics cards, such as the NVIDIA GeForce GTX 580.

ZLC is efficient: the compression ratio on RGB frames is 2 to 1, while the compression ratio on RAW is 6 to 1.
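
Those ratios translate into concrete archive sizes. A rough estimate for a hypothetical two-hour film in 4K 10-bit RGB at 24 fps (the frame geometry and running time are assumptions, not Zaxel figures):

```python
def archive_bytes(width, height, bit_depth, samples_per_pixel,
                  frames, compression_ratio):
    """Approximate compressed size, in bytes, of an image sequence."""
    bits_per_frame = width * height * samples_per_pixel * bit_depth
    return bits_per_frame * frames / 8 / compression_ratio

frames = 2 * 3600 * 24  # two hours at 24 fps
raw = archive_bytes(4096, 2160, 10, 3, frames, 1.0)
zlc = archive_bytes(4096, 2160, 10, 3, frames, 2.0)
print(f"uncompressed: {raw/1e12:.2f} TB, ZLC at 2:1: {zlc/1e12:.2f} TB")
```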

In comparison, JPEG 2000 mathematically lossless compression needs an expensive accelerator card and still compresses 4K RGB files at only 10 frames per second. JPEG 2000 cannot compress RAW files, either.

Being implemented as a software application, ZLC runs on a microprocessor or on a GPU.

Nori Suzuki
President & CEO
Zaxel Systems, Inc.
2045 Martin Avenue, Suite 206
Santa Clara, CA 95050, USA
+1-408-727-6403 X 107, cell +1-650-533-8456

Zaxel, Inc.
Minami Ohi 3-37-10, Asano Building 2fl
Shinagawa-ku, Tokyo, Japan 140-0013

Wednesday, May 16, 2012

Kinect + 3D TV = Virtual Reality

Zoff's Virtual Mirror -How to Get a Virtual Glasses Fitting

Free Viewpoint Television

A Free-Viewpoint Virtual Mirror with Marker-Less User Interaction

Android based transparent HMD now available in Japan for $771

Epson Moverio BT-100 Head Mounted Display now available in Japan for $771. Looks like it came out in November.

BBC News - Google patents augmented reality Project Glass design

Video reveal: BBC super-sizing “the first truly digital Olympics” | paidContent

Nvidia Makes the GPU Virtual | PCWorld

Fwd: NCTA Cableshow: Automated Content Recognition Solutions

---------- Forwarded message ----------
From: "Jay Friedman" <>
Date: May 16, 2012 6:17 AM
Subject: NCTA Cableshow: Automated Content Recognition Solutions
To: <>

Audible Magic Logo              NCTA Cableshow 2012

Automated Content Recognition Solutions

Visit Us at the NCTA Cableshow to Experience the Magic

Booth #2043

A revolution is occurring in the living room. Socially engaged TV viewing, interactive advertising, and a more compelling personalized TV experience are quickly becoming pervasive realities. Operators also are seeking the ability to detect and substitute advertising in video streams.

Our SmartID ACR solutions are at the center of this revolution, providing the means for smart devices to become content-aware, and enabling them to create a whole new world of interactivity for your programming and advertising. Using the magic of our patented digital fingerprint technology, set-top boxes, smart TVs, smart phones and tablets can now automatically recognize the video content viewers watch.

At the NCTA Cableshow in Boston, we'll be demonstrating two new SmartID solutions to content producers, programmers, apps developers, and service providers — our Live TViD and SmartSync services. The Live TViD service automatically identifies TV content even if it is live, never-before-seen, reality-based material. The SmartSync service enables companies to create interactive, time-based experiences synchronized to video content.

Jay Friedman, VP of Marketing & Client Services for Audible Magic, will also be presenting and showing demos of our technology in action at Imagine Park on Tuesday, between 12:15-1:00pm. Follow this link for more information on this session.

We'll be at:

NCTA Cableshow
Booth #2043
May 21-23, 2012

Audible Magic already works with such companies as TVPlus, Accelerated Media, Coincident.TV, Miso, TV Globo, yap.TV, WatchWith, Tribune Media, Rovi Corporation, Cinran, Facebook, MySpace and more than 200 other companies.

We hope you'll visit our booth to learn how the magic we do can also work for you.


Audible Magic
985 University Avenue, Suite 35
Los Gatos, California 95030


Flexible Displays Landing in 2012, But Not in Apple Gear | Gadget Lab

Digital Signage Expo | The world's largest international trade show dedicated to digital signage, interactive technology, and Out-of-Home networks.

Digital Signage Today

Screenmedia Expo 2012 - May 16th-17th - Earls Court - London - Screenmedia - digital signage and interactive media - Digital Place-Based Media - interactive screens - ATMs - Kiosks - Connected Screenmedia - Connected content - Interactive connection

Tuesday, May 15, 2012

EU fines flat-screen panel companies for illegal cartel

Fwd: Samsung Kills Two Birds With One Stone With Smart Dual View OLED TV

One of the better ideas I wanted to patent, but I lacked the resources to do so.

Well it's out now.

---------- Forwarded message ----------
From: "NPD DisplaySearch News" <>
Date: May 15, 2012 7:58 AM
Subject: Samsung Kills Two Birds With One Stone With Smart Dual View OLED TV
To: <>



Please see the link below for a recent blog post from NPD DisplaySearch regarding last week's debut of Samsung's "smart dual view" OLED TV, which uses the panel's fast switching speed to show two channels simultaneously. This technology was revealed during the "2012 Samsung Premium TV Showcase" in Korea last Thursday.


For more insight from NPD DisplaySearch analysts Ken Park and Yoonsung Chung, please visit the blog post here.


Warm regards,

Lauren Leetun, APR

SAVVY Public Relations



Monday, May 14, 2012

Re: Unified Streaming Platform

The so-called adaptive streaming does not work with Android
Gingerbread; it may work with Ice Cream Sandwich.
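Whatever the client platform, the core of adaptive streaming is the same: the player measures throughput and picks the highest rendition that fits, with some headroom. A minimal sketch of that selection logic (the bitrate ladder below is illustrative, not from any real HLS/DASH manifest):

```python
# Renditions as (label, required_kbps). Real manifests advertise these
# per-variant; the exact numbers here are assumptions for illustration.
RENDITIONS = [("240p", 400), ("360p", 800), ("480p", 1400), ("720p", 2800)]

def pick_rendition(measured_kbps, headroom=0.8):
    """Choose the highest rendition whose bitrate fits within the measured
    bandwidth, keeping headroom so playback can absorb throughput dips."""
    budget = measured_kbps * headroom
    best = RENDITIONS[0]  # always fall back to the lowest rung
    for label, kbps in RENDITIONS:
        if kbps <= budget:
            best = (label, kbps)
    return best[0]

print(pick_rendition(3000))  # 3000 * 0.8 = 2400, so 480p fits but 720p does not
print(pick_rendition(500))
```

A player re-runs this per segment, which is why quality shifts mid-stream on a variable connection; a Gingerbread device without native HLS support never gets to this step at all.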


On 5/14/12, John Sokol <> wrote:

Unified Streaming Platform

Thursday, May 10, 2012

Fwd: Catch up from Digicave & Time-Slice Films

---------- Forwarded message ----------
From: Digicave <>
Date: Thu, May 10, 2012 at 8:53 AM
Subject: Catch up from Digicave & Time-Slice Films
To: JOHN <>

NAB Follow up - May 2012
Long Live Las Vegas
Digicave and Time-Slice Films once again took a stand this year in the International Research Park at NAB. As always we would like to thank Skip Pizzi for his wonderful support.

This year's 'Tech Lounge' was an enormous success with attendees taking the opportunity for a well-earned rest on leather sofas to view the next generation Digicave content. Using iPads to view models and animations floating above a touch screen table, attendees also had a chance to see Time-Slice Films' cutting-edge HD Go-Pro camera array, and footage of their astonishing Rip-Curl shoot in Fiji.

Whilst there Digicave's CEO Callum Rex Reid had a chance to catch up with Butch Stearns of the Pulse Network, to discuss how our exciting content capture works. You can watch the video here.
Science Stories Success
In our last newsletter, we introduced our project with The British Science Museum creating an augmented reality tour featuring TV journalist and writer James May.

This App has now gone live, and has enjoyed coverage from some of the top gadget and technology blogs in the country, along with features on the BBC and Sky. Wired UK, for example, calls it an "appropriately awesome use" of augmented reality. The tour covers nine exhibits in total at the Science Museum, and we look forward to continuing to expand it as an exciting educational platform.
Coming Soon
Keep an eye out for new content on our website and most importantly our own Sculptural Photography app, soon to be released on Apple's App store and Google Play.

Tuesday, May 08, 2012

The Wretched State of GPU Transcoding - Slashdot

"This story began as an investigation into why Cyberlink's Media Espresso software produced video files of wildly varying quality and size depending on which GPU was used for the task. It then expanded into a comparison of several alternate solutions. Our goal was to find a program that would encode at a reasonably high quality level (~1GB per hour was the target) and require a minimal level of expertise from the user. The conclusion, after weeks of work and going blind staring at enlarged images, is that the state of 'consumer' GPU transcoding is still a long, long way from prime time use. In short, it's simply not worth using the GPU to accelerate your video transcodes; it's much better to simply use Handbrake, which uses your CPU."

Thursday, May 03, 2012

Researcher Causes Endless Restart Loop on Samsung TVs

Italian security researcher Luigi Auriemma was trying to play a trick on his brother when he accidentally discovered two vulnerabilities in current Samsung TVs and Blu-ray players that could allow an attacker to gain remote access to those devices.
Auriemma claims that the vulnerabilities affect all Samsung devices with support for remote controllers, and that the vulnerable protocol is present on both TVs and Blu-ray devices.
One of the bugs leads to a loop of endless restarts, while the other could cause a potential buffer overflow.

Auriemma discovered the issues accidentally. He told Threatpost via email that he was trying to play a trick on his brother. He only wanted to send a remote controller request with a funny message, but he ended up nearly destroying the TV.
Exploiting Auriemma's vulnerabilities requires only that the devices be connected to a Wi-Fi network.
As background, Auriemma explains that when the device receives a controller packet, it displays a message informing users that a new 'remote' has been detected and prompts the user to 'allow' or 'deny' access. Included with this remote packet is a string field used for the name of the device. Auriemma found that if he altered the name string to contain line feeds and other invalid characters, the device would enter an endless loop.
Auriemma says that nothing happens for the first five seconds, but then he lost control of the TV, both manually on the control panel and with the remote. After another five seconds, he claims, the TV automatically restarts. The process then repeats itself forever, even after unplugging the TV. Eventually, Auriemma managed to reset the TV in service mode. He writes that users can avoid the situation altogether by hitting 'exit' when prompted to 'allow' or 'deny' the new remote device.
As for the buffer overflow, Auriemma determined that he could crash devices by setting the MAC address field to a long string. He is only guessing that this is a buffer overflow, and he told Threatpost via email that the vulnerability would be much more "attractive" if it were in fact one.
“The bugs have been tested on a d6000 and d6050 TV, but it's highly possible that many of the Samsung devices supporting this protocol are vulnerable because d6xxx is a recent TV and usually these 'core' components are like libraries shared with other devices that make use of the same protocol,” he said via email.
Auriemma claims there is no fix for these bugs because he was unable to report them to Samsung, and he has received no word from the company. He claims that Samsung doesn't even have a channel through which to report these types of bugs.
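Both bugs boil down to unvalidated string fields in the remote-controller packet: a device name with embedded control characters, and a MAC address of unbounded length. Samsung's actual parsing code is not public, so the following is only a sketch of the defensive checks a receiver could apply (field names and limits are assumptions for illustration):

```python
import re
import string

MAX_NAME_LEN = 64  # assumed cap; the real firmware's limit, if any, is unknown
MAC_RE = re.compile(r"^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")

def sanitize_name(raw):
    """Drop control characters (line feeds etc.) and clamp length before the
    name is ever rendered in the 'allow/deny' prompt."""
    cleaned = "".join(
        ch for ch in raw
        if ch in string.printable and ch not in "\r\n\t\x0b\x0c"
    )
    return cleaned[:MAX_NAME_LEN]

def valid_mac(raw):
    """Accept only an exact aa:bb:cc:dd:ee:ff form, so an oversized string
    can never reach a fixed-size buffer downstream."""
    return bool(MAC_RE.match(raw))

print(sanitize_name("evil\nremote"))                      # control chars stripped
print(valid_mac("00:11:22:33:44:55"), valid_mac("X" * 500))
```

Rejecting malformed fields at the packet boundary would have neutralized both the restart loop and the suspected overflow, regardless of how the rest of the protocol handler is written.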