Wednesday, December 31, 2014

Mesmerizing Quake demake runs on a decades-old oscilloscope


Sunday, December 28, 2014

Effect of Police Body-Worn Cameras on Use of Force and Citizens’ Complaints

The Effect of Police Body-Worn Cameras on Use of Force and Citizens’ Complaints Against the Police: A Randomized Controlled Trial



Police use-of-force continues to be a major source of international concern, inviting interest from academics and practitioners alike. Whether justified or unnecessary/excessive, the exercise of power by the police can potentially tarnish their relationship with the community. Police misconduct can translate into complaints against the police, which carry large economic and social costs. The question we try to answer is: do body-worn-cameras reduce the prevalence of use-of-force and/or citizens’ complaints against the police?


We empirically tested the use of body-worn-cameras by measuring the effect of videotaping police–public encounters on incidents of police use-of-force and complaints, in randomized-controlled settings. Over 12 months, we randomly-assigned officers to “experimental-shifts,” during which they were equipped with body-worn HD cameras that recorded all contacts with the public, or to “control-shifts” without the cameras (n = 988). We nominally defined use-of-force, both unnecessary/excessive and reasonable, as a non-desirable response in police–public encounters. We estimate the causal effect of body-worn-videos on the two outcome variables using both between-group differences, estimated with a Poisson regression model, and before-after estimates from interrupted time-series analyses.
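The between-group comparison the abstract describes can be sketched in a few lines. Everything below is invented for illustration (the treatment indicator, the per-shift rates, the counts) and is not the study's data or code; the useful fact is that for a Poisson GLM with a single binary covariate, the maximum-likelihood incident-rate ratio reduces to the ratio of the two group means.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 988                                         # shifts, matching the study's n
camera = rng.integers(0, 2, size=n)             # 1 = body-camera shift (hypothetical)
true_rate = np.where(camera == 1, 0.05, 0.10)   # assumed incident rates, made up
incidents = rng.poisson(true_rate)              # simulated incidents per shift

# For a Poisson GLM with one binary covariate, the fitted rate ratio
# is simply the ratio of group means:
rate_ctrl = incidents[camera == 0].mean()
rate_cam = incidents[camera == 1].mean()
irr = rate_cam / rate_ctrl
print(f"control rate {rate_ctrl:.3f}, camera rate {rate_cam:.3f}, IRR {irr:.2f}")
```

An IRR well below 1 would correspond to the paper's finding that force was roughly twice as likely on control shifts.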


We found that the likelihood of force being used in control conditions was roughly twice that in experimental conditions. Similarly, a pre/post analysis of use-of-force and complaints data also supports this result: the number of complaints filed against officers dropped from 0.7 complaints per 1,000 contacts to 0.07 per 1,000 contacts. We discuss the findings in terms of theory, research methods, policy and future avenues of research on body-worn-videos.

Saturday, December 27, 2014

Workshop on Light Field Imaging to be held at Stanford on February 12, 2015

Workshop on Light Field Imaging 
February 12, 2015 
MacKenzie Conference Room, Huang Engineering Center
Stanford University
We invite you to join us on February 12, 2015 at Stanford University to explore the exciting area of research and product development in Light Field Imaging. 
The Workshop on Light Field Imaging will include a summary of the state-of-the-art research and a glimpse into the future of technologies designed to capture and create light rays in a three dimensional scene. Participants will leave with a better understanding of the concept of a light field as it is used in geometric optics, computer vision, computer graphics and computational photography. The Workshop will include talks that summarize recent advances in light field cameras and light field displays, as well as applications of these technologies in entertainment, consumer devices, industrial applications and medical imaging. The Workshop will also include an interactive session with experts from industry and academia addressing questions about the killer applications and challenges in product development, new areas for research and graduate training, and the future of light field imaging. There will also be a technology demo session that will include presentations by research labs and startup companies.
You can now register for the Workshop on Light Field Imaging that will be held at Stanford on February 12, 2015. Registration is limited to 200 people and we are rapidly approaching this limit, so you should register now if you intend to participate.
Visit our website to get updates on the program. A list of companies that will be participating in the Interactive Demo Session will be published in the coming weeks.
If you would like to receive future announcements about this event, be sure to subscribe to our mailing list at

Friday, December 26, 2014

lensfree holographic on-chip microscopy

Actually this shouldn't be that hard to do. It is computational photography at its finest.

It should be able to put normal optical microscopes to shame. It is volumetric, viewable in 3D, and could even go multi-spectral.

There's no information on the specifics of the optics, but the sample must go directly on the imaging chip or very close to it.

So cleaning and reuse are my only questions.

Lens-free microscope can detect cancer at the cellular level

UCLA researchers develop device that can do the work of pathology lab microscopes

 The latest invention is the first lens-free microscope that can be used for high-throughput 3-D tissue imaging — an important need in the study of disease.

“This is a milestone in the work we’ve been doing,” said Ozcan, who also is the associate director of UCLA’s California NanoSystems Institute. “This is the first time tissue samples have been imaged in 3D using a lens-free on-chip microscope.”

The device works by using a laser or light-emitting-diode to illuminate a tissue or blood sample that has been placed on a slide and inserted into the device. A sensor array on a microchip — the same type of chip that is used in digital cameras, including cellphone cameras — captures and records the pattern of shadows created by the sample.

The device processes these patterns as a series of holograms, forming 3-D images of the specimen and giving medical personnel a virtual depth-of-field view. An algorithm color codes the reconstructed images, making the contrasts in the samples more apparent than they would be in the holograms and making any abnormalities easier to detect.
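At its simplest, reconstructing such a hologram means numerically back-propagating the recorded diffraction pattern to the sample plane. Here is a minimal angular-spectrum sketch; the wavelength, pixel pitch, and propagation distance are made-up illustrative values, and the multi-height phase retrieval the UCLA group actually uses is omitted.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field a distance z using the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * kz * z) * (arg > 0)          # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative parameters only (not the actual device's):
wavelength = 532e-9     # green illumination, metres
dx = 1.12e-6            # sensor pixel pitch
z = -300e-6             # back-propagate 300 um toward the sample

hologram = np.random.rand(256, 256)              # stand-in for a captured frame
recon = angular_spectrum(np.sqrt(hologram), wavelength, dx, z)
print(recon.shape)
```

Changing `z` refocuses digitally after capture, which is exactly the "virtual depth-of-field" behavior described above.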

Wide-field computational imaging of pathology slides using lens-free on-chip microscopy

Alon Greenbaum, Yibo Zhang,  Alborz Feizi, Ping-Luen Chung, Wei Luo, Shivani R. Kandukuri and Aydogan Ozcan

Optical examination of microscale features in pathology slides is one of the gold standards to diagnose disease. However, the use of conventional light microscopes is partially limited owing to their relatively high cost, bulkiness of lens-based optics, small field of view (FOV), and requirements for lateral scanning and three-dimensional (3D) focus adjustment. We illustrate the performance of a computational lens-free, holographic on-chip microscope that uses the transport-of-intensity equation, multi-height iterative phase retrieval, and rotational field transformations to perform wide-FOV imaging of pathology samples with comparable image quality to a traditional transmission lens-based microscope. The holographically reconstructed image can be digitally focused at any depth within the object FOV (after image capture) without the need for mechanical focus adjustment and is also digitally corrected for artifacts arising from uncontrolled tilting and height variations between the sample and sensor planes. Using this lens-free on-chip microscope, we successfully imaged invasive carcinoma cells within human breast sections, Papanicolaou smears revealing a high-grade squamous intraepithelial lesion, and sickle cell anemia blood smears over a FOV of 20.5 mm2. The resulting wide-field lens-free images had sufficient image resolution and contrast for clinical evaluation, as demonstrated by a pathologist’s blinded diagnosis of breast cancer tissue samples, achieving an overall accuracy of ~99%. By providing high-resolution images of large-area pathology samples with 3D digital focus adjustment, lens-free on-chip microscopy can be useful in resource-limited and point-of-care settings.

Toward giga-pixel nanoscopy on a chip: a computational wide-field look at the nano-scale without the use of lenses

The development of lensfree on-chip microscopy in the past decade has opened up various new possibilities for biomedical imaging across ultra-large fields of view using compact, portable, and cost-effective devices. However, until recently, its ability to resolve fine features and detect ultra-small particles has not rivalled the capabilities of the more expensive and bulky laboratory-grade optical microscopes. In this Frontier Review, we highlight the developments over the last two years that have enabled computational lensfree holographic on-chip microscopy to compete with and, in some cases, surpass conventional bright-field microscopy in its ability to image nano-scale objects across large fields of view, yielding giga-pixel phase and amplitude images. Lensfree microscopy has now achieved a numerical aperture as high as 0.92, with a spatial resolution as small as 225 nm across a large field of view e.g., >20 mm2. Furthermore, the combination of lensfree microscopy with self-assembled nanolenses, forming nano-catenoid minimal surfaces around individual nanoparticles has boosted the image contrast to levels high enough to permit bright-field imaging of individual particles smaller than 100 nm. These capabilities support a number of new applications, including, for example, the detection and sizing of individual virus particles using field-portable computational on-chip microscopes.
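The two headline figures are mutually consistent under the common half-pitch diffraction criterion d = λ/(2·NA), assuming that is the criterion being used; a quick back-of-the-envelope check:

```python
# Half-pitch diffraction limit: d = lambda / (2 * NA).
NA = 0.92               # reported numerical aperture
d = 225e-9              # reported resolution, metres
wavelength = 2 * NA * d # implied illumination wavelength
print(round(wavelength * 1e9))  # 414 (nm), i.e. violet-blue illumination
```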

Thursday, December 18, 2014


I have permission to share this.

---------- Forwarded message ----------
From: Morales, Gerry

Hi John ,

My name is Gerry Morales. I am a recruiter on the IMDB team in Santa Monica, California. We will be holding an interview event on January 9th to interview Software Development Engineers who would be interested in working for IMDB. We are looking for software developers who are passionate about writing high-quality code, solving big technical problems and delivering awesome customer experiences. We use a range of technologies and programming languages including Amazon Web Services, Java, HTML5/CSS3/JQuery and git.

If you are excited about building the next generation of digital products that will be used by millions of people worldwide, please respond with your resume attached. I will then add your resume into our system and reach out to you to discuss the next steps in the process.

I look forward to hearing from you!

Gerry Morales  |  Technical Recruiter  at IMDB

Wednesday, December 17, 2014

Sony Leaks Reveal Hollywood Is Trying To Break DNS


from the scorched-net-policy dept.
schwit1 sends this report from The Verge: Most anti-piracy tools take one of two paths: they either target the server that's sharing the files (pulling videos off YouTube or taking down sites like The Pirate Bay) or they make it harder to find (delisting offshore sites that share infringing content). But leaked documents reveal a frightening line of attack that's currently being considered by the MPAA: What if you simply erased any record that the site was there in the first place? To do that, the MPAA's lawyers would target the Domain Name System that directs traffic across the internet.

The tactic was first proposed as part of the Stop Online Piracy Act (SOPA) in 2011, but three years after the law failed in Congress, the MPAA has been looking for legal justification for the practice in existing law and working with ISPs like Comcast to examine how a system might work technically. If a takedown notice could blacklist a site from every available DNS provider, the URL would be effectively erased from the internet. No one's ever tried to issue a takedown notice like that, but this latest memo suggests the MPAA is looking into it as a potentially powerful new tool in the fight against piracy.

Saturday, December 13, 2014

Leaked Emails Reveal MPAA Plans To Pay Elected Officials To Attack Google

from the holy-fuck dept

Okay, it's no secret that the MPAA hates Google. It doesn't take a psychology expert to figure that out. But in the last few days, some of the leaks from the Sony Pictures hack have revealed the depths of that hatred, raising serious questions about how the MPAA abuses the legal process in corrupt and dangerous ways. The most serious charge -- unfortunately completely buried by this report at The Verge -- is that it appears the MPAA and the major Hollywood studios directly funded various state Attorneys General in their efforts to attack and shame Google. Think about that for a second.

There's a lot of background here that's important (beyond just the MPAA really hates Google). First, as you know, the MPAA has certainly not given up on its SOPA desire to get certain websites completely blocked. The leaked emails reveal a lot more about that (which we'll get to). Second, a year ago, the MPAA hired a pitbull of an anti-piracy lawyer in naming Steve Fabrizio its General Counsel. Fabrizio has spent the last decade and a half or so deeply involved in litigating a bunch of anti-piracy battles at both the RIAA and the MPAA/RIAA's favorite big law firm, Jenner & Block. This is not a guy you hire if you're looking to innovate. This is a guy you hire if you want to get into knock-down, dirty legal fights.

Third, there is the role of state Attorneys General. A recent NY Times article detailed how lobbyists have figured out ways to effectively "lobby" state Attorneys General to do their bidding. Frequently, this is around getting the state AGs to drop investigations (and potential lawsuits) against companies. The article is somewhat eye-opening, as it's hard to distinguish much of what's discussed from straight up bribery. There is talk of lavish events, travel and dinners all paid for by corporate lobbyists for state AGs, often followed soon after with dropped, or reduced investigations. In one case, an AG told staff not to start an investigation into a public company without first getting his approval. Campaign funding is a big part of it as well, as these lobbyists dump lots of money into AG campaigns. And it's no secret that the state Attorney General position is often seen as a stepping stone to a Governorship or US Senate job.

We've discussed in the past that state Attorneys General are often the biggest grandstanders, as their main goal in certain investigations seems to be about generating headlines for themselves, rather than any real legal basis. More than four years ago, we wrote about Topix CEO Chris Tolles' experience being hounded by state Attorneys' General so they could get a bunch of headlines out of something in which everyone admitted Topix wasn't actually doing anything illegal. Along those lines, we've noted that popular tech companies have increasingly been a target for state AGs -- because they're almost sure to generate headlines. We've also noted that state AGs have been pushing for changes to federal laws, like Section 230 of the CDA, to allow them to further go after big tech companies for things like actions of their users.

Not surprisingly, Google has been a popular target for some state AGs. In the past, we've written about state Attorneys General from Nebraska and Oklahoma blaming Google for videos made by users, and about Texas' Attorney General going after Google for supposed antitrust violations (based on the same claims that the FTC later dropped entirely). But the state Attorney General with the biggest chip on his shoulder for Google has absolutely been Mississippi Attorney General Jim Hood, who seemed to think that it was Google's fault that he could find counterfeit goods via search. A few months later, he was back blaming Google for infringement online as well.

This was no accident. What's come out of the Sony Pictures Leak is not just that the MPAA was buddying up to state Attorneys General, but that the MPAA was funding some of this activity and actively supporting the investigation. The leaked emails reveal that rather than seeing that NY Times article about corporate/AG corruption as a warning sign, the MPAA viewed it as a playbook: not for preventing investigations, but for encouraging and funding them. This appears to go way beyond that NY Times article. This isn't campaign donations or inviting AGs to speak at lavish events and paying for the travel. This is flat out paying AGs to investigate Google (even on issues unrelated to copyright infringement) and then promising to get extra press attention to those articles.

Here's the Verge's summary of a key email (which the Verge doesn't even seem to realize why it's so damning):
May 8, 2014: Fabrizio to group. "We’ve had success to date in motivating the AGs; however as they approach the CID phase, the AGs will need greater levels of legal support." He outlines two options, ranging from $585,000 to $1.175 million, which includes legal support for AGs (through Jenner) and optional investigation and analysis of ("ammunition / evidence against") Goliath. Both options include at least $85,000 for communication (e.g. "Respond to / rebut Goliath's public advocacy, amplify negative Goliath news, [and] seed media stories based on investigation and AG actions.").
"Goliath" is the MPAA's rather transparent "codename" for Google. CID stands for a "civil investigative demand" -- which is a form of an administrative subpoena, demanding information from a company, related to an investigation.

What seems to come out from these emails is that the MPAA, in coordination with the major Hollywood studios, agreed to willfully pay tons of money indirectly to state AGs (and Hood in particular) to get them to investigate Google (using the time and labor of the MPAA's favorite law firm -- and the one that Fabrizio just left). That goes way beyond anything discussed in that NY Times article, and certainly smacks of serious illegality. It's difficult to see how this isn't bribing a public official to attack a company they dislike.

Not only that, but it shows that the MPAA and the studios were aware of Hood's plans well before they happened, suggesting that he or his office has been coordinating with Hollywood on their plans and that the specific CIDs are actually written by the MPAA's lawyers themselves:
A report from the previous February suggests that the Goliath group drafted civil investigative demands (similar to a subpoena) to be issued by the attorneys general. "Some subset of AGs (3-5, but Hood alone if necessary) should move toward issuing CIDs before mid-May," the email says.
And, more recent emails (from just in October) show that they know that another CID is apparently coming and that the MPAA intends to use that CID for negotiating leverage against Google. This follows a claim that Google was pissed off at the MPAA for mocking its recent search algorithm changes to further push down sites that may link to infringing materials (it's not like we didn't warn everyone that the MPAA wouldn't be satisfied with Google's changes). Either way, the MPAA's Fabrizio brushes off concerns that Google has, telling the studios not to worry, that Google should be more willing to talk after Hood sends out his next CID:
After a dispute over Google’s most recent anti-piracy measures in October, Fabrizio suggested further action may be yet to come. "We believe Google is overreacting — and dramatically so. Their reaction seems tactical (or childish)," the email reads. "Following the issuance of the CID [civil investigative demand] by [Mississippi attorney general Jim] Hood (which may create yet another uproar by Google), we may be in a position for more serious discussions with Google."
While the Verge report is focused on the "sexy" topic of the MPAA having an "anti-Google" (er... "Goliath") working group, the real story here is that it appears that this infatuation with taking down Google has extended to funding state politicians in their investigations and attacks on Google, even when it's on totally unrelated issues (the initial CID was about counterfeit drugs -- which is an issue that the MPAA likes to mock Google over by totally misrepresenting some actual, but historical, bad behavior).

And beyond that, the MPAA is showing that part of its plan is to fund "media stories based on" the Attorneys General investigations. Remember, so much AG activity these days is driven by what's going to get them into the headlines. Setting aside nearly $100,000 from the MPAA to get a state AG some headlines for an investigation paid for by the MPAA, using administrative subpoenas written by the MPAA... all designed to attack a company they don't like (which actually has done pretty much exactly what they'd been asking for in downranking sites that lead to infringing works), is really stunning.

I get that it's natural to dislike a company or organization that has undermined your business model. It happens. But there are different ways to respond to it. One is to innovate and compete. Another is to use the legal process to throw hurdles in their path. This is the distinction between "market entrepreneurs" and "political entrepreneurs" that Andy Kessler has described. What the MPAA appears to have done in the last few months, however, certainly suggests that the organization, with the help of the major studios, went beyond just lobbying and political pressure, to actually funding elected officials to try to attack a company they didn't like. And, at the very least, this also has to raise serious questions about Mississippi Attorney General Jim Hood and who he takes orders from. Is he really "protecting" the people of Mississippi? Or is he focused on gobbling up Hollywood's money and promotion?

Bluetooth Handset Gloves - talk to the hand.

Thursday, December 11, 2014

Catherine Crump: The small and surprisingly dangerous detail the police track about you

A very unsexy-sounding piece of technology could mean that the police know where you go, with whom, and when: the automatic license plate reader. These cameras are innocuously placed all across small-town America to catch known criminals, but as lawyer and TED Fellow Catherine Crump shows, the data they collect in aggregate could have disastrous consequences for everyone the world over.

FLIR Lepton miniature Thermal Imager & Breakout Board

Flir Lepton Thermal Camera Breakout

FLIR Lepton Thermal Imager - Batch 4. Now with Breakout Board!

The FLIR Lepton™ is the most compact longwave infrared (LWIR) camera core exclusively available here in small quantities for prototypers, makers, and hobbyists. It packs a resolution of 80 × 60 pixels into a camera body that is smaller than a dime. This Lepton is shutterless with a 51° HFOV lens.

SystemPlus Publishes FLIR Lepton Reverse Engineering

Amazing Technology Invented By MIT - Tangible Media

Tuesday, December 09, 2014

Grand Jury Indicts The Man Who Filmed Eric Garner's Death

On Wednesday, a Staten Island grand jury decided not to return an indictment for the police officer who put Eric Garner in a chokehold shortly before his death. A different Staten Island grand jury was less...

Class Says Comcast Piggybacks on Homes to Set Up Public Network



SAN FRANCISCO (CN) - A federal class action accuses Comcast of surreptitiously making its residential customers bear the cost of using their wireless routers to set up a secondary public wi-fi network.
Lead plaintiff Toyer Grear sued Comcast on Dec. 4.
He claims that Comcast saw its millions of residential customers as an opportunity to compete with major cellular carriers such as AT&T and Verizon. Though Comcast does not have cellular towers, its customers' households "could be used as infrastructure for a national wi-fi network," the complaint states.
So Comcast supplied its residential customers with new wireless routers equipped to broadcast their home wi-fi signals and additional wi-fi signals for the public, selectively activating the routers to broadcast the secondary public network (the "Xfinity wifi hotspot") across the country, with the goal of enabling 8 million hotspots by the end of 2014, according to the lawsuit.
"Public" in this case does not mean "free," but that access is available to anyone who pays to use a particular wi-fi hotspot.
Grear claims that Comcast does not request customers' authorization to use their residential equipment and networks for public use.
"Indeed, Comcast's contract with its customers is so vague that it is unclear as to whether Comcast even addresses this practice at all," the lawsuit claims.
In using its customers' home networks to build a national network, Comcast
"has externalized the costs of its national wi-fi network onto its customers," Grear says in the complaint.
He claims that the new routers use much more electricity than regular routers, and that this is "a cost borne by the unwitting customer."
Engineers at Speedify, a technology company that increases Internet connection speeds, ran tests on Comcast's new routers and determined that "Comcast will be pushing tens of millions of dollars per month of the electricity bills needed to run their nationwide public wi-fi network onto consumers," the complaint states.
Based on the results of this study, Grear claims, Comcast's residential customers can expect electricity cost increases as great as 30 to 40 percent.
In addition, Grear claims, the Xfinity hotspots slow down the speed of customers' home wi-fi networks, since these home networks are available for use by strangers.
They also expose Comcast's residential customers' data to increased privacy and security risks, according to the complaint.
Comcast declined to comment.
Grear seeks certification of a class of all households in the United States that have subscribed to Comcast's Xfinity Internet Service, and a subclass of all California households that have subscribed to the service.
He also seeks declaratory judgment, an injunction, restitution and damages for violations of the Computer Fraud and Abuse Act, the Comprehensive Computer Data Access and Fraud Act and California's Unfair Competition Law. He is represented by Gillian Wade and Sara Avila, with Milstein Adelman, of Santa Monica.

Sunday, December 07, 2014

(Tele)Visions of Tomorrow

This is a great read:
(Tele)Visions of Tomorrow

The introduction of television to the American public has long been one of the most-discussed aspects of the 1939 World's Fair. But did anyone there realize how important this moment would be?

The Official Guide Book of the New York World's Fair expends only two sentences on television. More space is given to Nature’s Mistakes, a barnyard freak show featuring a bull with “skin so transparent that the veins are visible.” It’s possible that the editors, confronted with crazy items like Elektro the Moto-Man and a transatlantic “rocket gun,” assumed television was just another pie-in-the-sky fantasy.

Introducing Television at the Fair

The 1939 World’s Fair in New York was the coming-out party for television. For almost two decades before, networks and entrepreneurs were experimenting with the new electronic technology, hoping to perfect a mass communication system that would surpass radio.

ZANO, A Palm-Size Nano Drone With Built-In HD Video Capture

Computer Classic: "The Incredible Machine" 1968 Western Electric AT&T 15min

Thursday, December 04, 2014

New optical technique extracts audio from video

Those formerly silent walls can "talk" now: Researchers have demonstrated a simple optical technique by which audio information can be extracted from high-speed video recordings. The method uses an image-matching process based on vibration from sound waves, and is reported in an article appearing in the November issue of the journal Optical Engineering, published by SPIE, the international society for optics and photonics.

"One of the intriguing aspects of the paper is the ability to recover spoken words from a video of objects in the room," said journal Associate Editor Reiner Eschbach, a Research Fellow at Xerox Corp. "The paper shows that the sound creates minute vibrations in objects and that these vibrations ― given the right equipment ― can be picked up from a video signal. This is an interesting foray into a new application space and will, in my view, trigger interesting research in the field."
The article, "Audio extraction from silent high-speed video using an ," was authored by Zhaoyang Wang, Hieu Nguyen, and Jason Quisberth of the Department of Engineering of the Catholic University of America, and is available from the SPIE Digital Library.
The technique is based on the fact that sound waves are mechanical waves that cause air to vibrate when traveling, the paper notes. That vibration through air can cause vibration of objects located in its traveling path, especially if the objects are lightweight, thin, and flexible, such as a piece of paper. The vibrations, although usually with small amplitudes, can be detected and analyzed algorithmically, and audio reconstructed based on those calculations.
The authors used a subset-based image-correlation approach to detect the motions of points on the surface of an object, capturing target images with a high-speed camera and applying the Gauss-Newton algorithm and a few other measures to achieve very fast and highly accurate image matching. Because the detected vibrations are directly related to sound waves, a simple model was used to reconstruct the original audio information of the .
While other recent work in the area reports on more sophisticated techniques to compute motion signals, the authors chose a simpler image-matching approach to measure vibration. Because light can travel through air considerably farther than sound and can pass through glass, they anticipate that the technique may find applications such as the passive detection of conversations inside of a building from a far distance, Wang said. "We are currently improving the technique to increase its accuracy and sensitivity, make the measurements in real-time, and remove interference from other sources."
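The core idea above can be sketched on synthetic data. The sketch below replaces the paper's subset-based image correlation with Gauss-Newton refinement by a much simpler 1-D cross-correlation with parabolic peak interpolation, and every parameter (frame rate, vibration amplitude, edge profile) is invented for illustration: a bright edge vibrates by a fraction of a pixel at 440 Hz, and the recovered per-frame shift signal is the "audio."

```python
import numpy as np

def frame_shift(ref, frame):
    """Estimate the sub-pixel 1-D shift of `frame` relative to `ref`
    via cross-correlation with a parabolic fit around the peak."""
    corr = np.correlate(frame - frame.mean(), ref - ref.mean(), mode="full")
    k = corr.argmax()
    if 0 < k < len(corr) - 1:                    # parabolic interpolation
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        k = k + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return k - (len(ref) - 1)

# Synthetic "high-speed video": a Gaussian edge vibrating at 440 Hz,
# sampled at 4000 frames/s (values invented for illustration).
fps, n_frames, width = 4000, 400, 128
t = np.arange(n_frames) / fps
x = np.arange(width)
audio_true = 0.3 * np.sin(2 * np.pi * 440 * t)   # sub-pixel vibration, in pixels
frames = np.array([np.exp(-((x - 64 - a) ** 2) / 18) for a in audio_true])

shifts = np.array([frame_shift(frames[0], f) for f in frames])
# The recovered shift signal tracks the vibration, i.e. the audio:
print(np.corrcoef(shifts, audio_true)[0, 1])
```

In a real setup the shift series would then be resampled and filtered to produce playable audio, as the paper describes.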

FIREFOX VR - Experimental FIREFOX builds with VR interfaces.



MozVR is our open lab, a VR website about VR websites, where we share experiments and code.

You will need a VR-enabled build of Firefox for Mac or PC, and an Oculus Rift headset. Support for additional devices coming soon. MozVR will also work with VR-enabled builds of Chromium. Once you have your VR-enabled browser and Rift, check our quick Read Me for configuration tips. On your first run, pressing "Enter VR" will prompt you to grant Fullscreen permission. Grant it and check the "Remember" option, if one is present. You will then be able to experience MozVR.

Paolo Favaro: Portable Light Field Imaging: Extended Depth of Field, Ali...

From ICCP11 Hosted by Carnegie Mellon University, Robotics Institute

April 8, 2011


Portable light field cameras have demonstrated capabilities beyond conventional cameras. In a single snapshot, they enable digital image refocusing, i.e., the ability to change the camera focus after taking the snapshot, and 3D reconstruction. We show that they also achieve a larger depth of field while maintaining the ability to reconstruct detail at high resolution. More interestingly, we show that their depth of field is essentially inverted compared to regular cameras. Crucial to the success of the light field camera is the way it samples the light field, trading off spatial vs. angular resolution, and how aliasing affects the light field. We present a novel algorithm that estimates a full resolution sharp image and a full resolution depth map from a single input light field image. The algorithm is formulated in a variational framework and it is based on novel image priors designed for light field images. We demonstrate the algorithm on synthetic and real images captured with our own light field camera, and show that it can outperform other computational camera systems.
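The digital refocusing capability mentioned in the abstract is commonly illustrated with a shift-and-add over the angular views of the light field. The toy sketch below is not Favaro's variational algorithm: the 4-D `lf[u, v, y, x]` layout, integer `np.roll` shifts, and all parameters are simplifying assumptions purely to show the idea.

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add refocusing of a light field lf[u, v, y, x].

    alpha scales the per-view shift; alpha = 0 keeps the original
    focal plane, larger |alpha| focuses nearer or farther. Integer
    shifts via np.roll keep the sketch dependency-free.
    """
    U, V, H, W = lf.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# Toy 5x5-view light field of random values, just to exercise the code:
lf = np.random.rand(5, 5, 64, 64)
img = refocus(lf, alpha=1.0)
print(img.shape)
```

With `alpha = 0` this reduces to averaging the views (the original focal plane); sweeping `alpha` produces the focal stack that refocusing demos scrub through.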


Paolo Favaro received the D.Ing. degree from Università di Padova, Italy in 1999, and the M.Sc. and Ph.D. degrees in electrical engineering from Washington University in St. Louis in 2002 and 2003 respectively. He was a postdoctoral researcher in the computer science department of the University of California, Los Angeles and subsequently at Cambridge University, UK. Dr. Favaro is now lecturer (assistant professor) at Heriot-Watt University and Honorary Fellow at the University of Edinburgh, UK. His research interests are in computer vision, computational photography, machine learning, signal and image processing, estimation theory, inverse problems and variational techniques. He is also a member of the IEEE.

Rambus Lensless Camera Demo


Rambus published a YouTube video with Patrick Gill demonstrating the company's lensless camera operation:

Dr. Patrick Gill demonstrates a diffraction-based lensless imaging system

Meanwhile, it appears that Rambus somewhat downplays its image sensor activities in its recent investor presentations. For example, in the Nov. 2014 presentation, imaging appears on only one slide (#29):

Gangnam Style Has Been Viewed So Many Times It Broke YouTube’s Code

PSY’s Gangnam Style has been viewed so many times that it broke YouTube’s view counter, making it the very first video to exceed the range of a 32-bit signed integer (2,147,483,647 views).
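For context, a signed 32-bit counter tops out at 2,147,483,647. A quick sketch of the wraparound (the `wrap_int32` helper is hypothetical; Python integers are arbitrary-precision, so two's-complement behavior has to be simulated explicitly):

```python
INT32_MAX = 2**31 - 1  # 2,147,483,647 -- the largest signed 32-bit value

def wrap_int32(n):
    """Simulate two's-complement 32-bit signed integer wraparound."""
    return (n + 2**31) % 2**32 - 2**31

views = INT32_MAX
print(wrap_int32(views))      # 2147483647 -- still fits
print(wrap_int32(views + 1))  # -2147483648 -- one more view wraps negative
```

YouTube reportedly moved the counter to a 64-bit integer, which pushes the ceiling past nine quintillion views.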

Tuesday, December 02, 2014

The Digital Sex Industry

Soon, virtual reality is going to crash into our lives in a way we never even imagined. Though dating and masturbating have long been commandeered by the web, it has so far acted only as a kind of middleman. Now we're nearing the possibility of falling in love with your computer, as meeting your dream partner could be as easy as slipping on an Oculus Rift, the most advanced virtual reality headset in the world.

The Digital Love Industry (Full Length)

Intel posts promotional videos for its 3D camera-based RealSense

Another Two Video Promotions from Intel

Intel keeps posting promotional videos for its 3D camera-based RealSense technology. The first one shows refocusing capability similar to Lytro, Pelican Imaging and some Nokia products:

The second video demos 3D scanning with the Intel® RealSense™ Developer Kit.

Monday, December 01, 2014

Haptic holograms let you touch the void in VR - New Scientist

Obama Calls For $75 Million In Funding for 50,000 Police Body Cameras

Today President Obama proposed $263 million in funding for law enforcement to help avoid another disaster like the ongoing mess in Ferguson, Missouri…

David Brin said:
Cop cams... a trend predicted way back in 1997... were resisted and resisted -- till they suddenly became a no-brainer obvious. Now: The Obama Administration is proposing " $75 million for a Body Worn Cameras Partnership, which would help states purchase and store the new equipment." Next step of course (also circa 1997)? Often-stopped youths will step out of the car armed... with their own cameras.

Friday, November 28, 2014

Infrared Thermal Imaging Hack-a-thon

FLIR ONE Hack-a-thon: 

Infrared Thermal Imaging Camera for your Smartphone

At the HackerDojo
Start: Friday, December 12, 2014, at 1:00pm
End: Sunday, December 14, 2014, at 9:00am
Contact: Carlos.Uranga@Hackerdojo.Com; interim assistance: Anil.Reddy@Hackerdojo.Com, Jaun.Alvarez@Hackerdojo.Com
Fee: free for the first 50 participants, $15 per person thereafter. Click on the link above to reserve your ticket now!

Join FLIR and HackerDojo in a 38-hour Hack-a-thon to develop cool and interesting iOS apps for the FLIR ONE!
Cash, products, and promotional prizes for best apps in four categories.

Event Agenda

Friday 12/12
4:00 p.m.- 6:00 p.m. - FLIR ONE Hack-A-Thon Event Kickoff and Keynotes

6:00 p.m.- 12:00 a.m. - FLIR ONE Hack-A-Thon with Hourly Prize Drawings*

*Must be present to win

Saturday 12/13
12:00 a.m.- 12:00 a.m. - FLIR ONE Hack-A-Thon Continues with Hourly Prize Drawings

Sunday 12/14
12:00 a.m.- 8:00 a.m. - FLIR ONE Hack-A-Thon Continues

8:00 a.m.- 12:00 p.m. - Developer Presentations and Demonstrations

12:00 p.m.- 1:00 p.m. - Catered Lunch and Judging

1:00 p.m. - Award Ceremony

Best New App Prizes

1st - $5000 +FLIR ONEs for the entire team + FLIR FX

2nd - $2000 + FLIR ONEs for the entire team + FLIR FX

3rd - $1000 + FLIR ONEs for the entire team

4th - $500 + FLIR ONEs for the entire team

5th - FLIR ONEs for the entire team

Additional prizes: Best app in each category - $1000 + FLIR ONEs
App categories: Work, Home, Play, Games/Entertainment

Ideally teams of 1-4, hacking the future!

Check back weekly for further updates.

SoftKinetic DepthSense Time-of-Flight 3D Camera Teardown

Another pill-cam teardown

Wednesday, November 26, 2014

Police use of body cameras cuts violence and complaints

"I think we've opened some eyes in the law enforcement world. We've shown the potential," said Tony Farrar, Rialto's police chief. "It's catching on."
Body-worn cameras are not new. Devon and Cornwall police launched a pilot scheme in 2006 and forces in Strathclyde, Hampshire and the Isle of Wight, among others, have also experimented.

But Rialto's randomised controlled study has seized attention because it offers scientific – and encouraging – findings: after cameras were introduced in February 2012, public complaints against officers plunged 88% compared with the previous 12 months. Officers' use of force fell by 60%.

"When you know you're being watched you behave a little better. That's just human nature," said Farrar. "As an officer you act a bit more professional, follow the rules a bit better."

Cops Deleted Video, But it Survived on the Cloud

Denver, CO — The Denver police department has been accused of using excessive force after a video, which they allegedly deleted, survived on the cloud and was turned in to FOX 31.

After the violence against the pregnant woman subsided, Frasier says, the Denver police officers became interested in his Samsung tablet.

Frasier told FOX 31 that officers on the scene threatened him with arrest, demanded he turn over all photos and video to them, and then seized his tablet over his objections.

“When he took it, I said, ‘Hey! You can’t do that. You need a warrant for that!’ and he said, ‘What program did you take the video with? Where is that?’” Frasier said.

He said police ignored his objections and dug through his personal photos without obtaining a court order.

“The first officer that comes up to ask me about my witness statement brings me to the police car and says we could do this the easy way or we could do this the hard way,” Frasier said. “It was taken as ‘You can either cooperate and give us what we want or we’re going to incarcerate you.’”

According to Frasier, when he got back his tablet, the video was gone. “I couldn’t believe it. My heart dropped. I know I just shot that video, like it’s not on there now?” Frasier said.

Frasier said it’s “possible” both he and the police officer who looked through his tablet “missed seeing” the clip in his files. However, he suspects that in reality the clip was deleted, either intentionally or by mistake.

When he got back home that evening, Frasier synced his tablet with his cloud storage and within a few moments the video reappeared.
“It was very well known that the video was shot and things were done on the video that shouldn’t be leaked out, that it would be bad for the reputations of the police officers,” Frasier said.

Despite his friends telling him to delete the video for fear that the officers would seek revenge, Frasier did the courageous thing and submitted it to FOX 31, and for this he deserves credit.

Monday, November 24, 2014

Sony Promotes 4K Resolution for Security Applications

Sony published a YouTube video showing 4K technology for security cameras:

Another Sony video demos the 5-axis optical image stabilization in the Alpha 7 II, said to be the first in a full-frame camera.

Tuesday, November 18, 2014

Fwd: video book with 3 sensors

---------- Forwarded message ----------
From: Masrui video brochure
Subject: video book with 3 sensors

5-inch video book; each page has a video.

Sunday, November 16, 2014

The Motley Fool predicts the end of TV.



Wednesday, November 12, 2014

Mozilla's new MozVR virtual reality web site.

Mozilla's Research VR Team just launched MozVR to foster VR-native sites. The page lets Oculus Rift owners browse technology demos (in a Rift-friendly interface, naturally) that show off what VR can do on the web.

Mozilla is sharing the code, tools and tutorials for its own front end.

Tuesday, November 11, 2014

The Work of Dr. Harold Edgerton as presented by Dr. Kim Vandiver

This may seem somewhat off topic, but if you love photography and history, this is about the first flash photos.

Edgerton's work in stop motion photography advanced many areas of science and technology.

Temporal Aliasing of Guitar Strings

This neat little trick is possible because of the rolling shutter of the phone's camera. Note that the 'waves' seen are temporal artifacts; the strings are actually vibrating up and down while remaining horizontal.
Read this blog post for an excellent explanation:
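The effect is easy to reproduce numerically. In the sketch below, a string that moves rigidly up and down is sampled column by column by a rolling shutter; the frequency and per-column readout time are assumed values chosen for illustration, not measurements from the video:

```python
import numpy as np

# A string oscillating up and down as a rigid line: y(t) = A * sin(2*pi*f*t).
# A rolling shutter samples each column x at a slightly later time,
# so a purely temporal oscillation gets recorded as a spatial wave.

A = 1.0            # vibration amplitude (arbitrary units)
f = 110.0          # string frequency, Hz (e.g. the A string of a guitar)
line_time = 30e-6  # time to read one column of the sensor (assumed value)

x = np.arange(720)               # column index across the string
t = x * line_time                # capture time of each column
y_recorded = A * np.sin(2 * np.pi * f * t)

# The recorded "shape" is a sine in x with spatial frequency f * line_time
# cycles per column, even though the real string stayed horizontal.
cycles_across_frame = f * line_time * len(x)
```

Plotting `y_recorded` against `x` shows a spatial sine wave of roughly 2.4 cycles across the frame, even though the simulated string was perfectly straight at every instant.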

Thursday, November 06, 2014

Optical super reflector: for a specific wavelength, the light is perfectly reflected

MIT Creates World's First 'Perfect Mirror' with Zero Distortion, Signaling Breakthrough for Solar Power

Scientists at MIT just announced that they have created the perfect mirror – and it could signal a breakthrough for solar power technology. The team’s “perfect mirror” is capable of reflecting any type of wave — light, sound, or water — with absolutely zero distortion, so it could provide a huge boost to concentrated solar power installations, which use mirrors to focus concentrated beams of sunlight onto a specific area.

Ever feel like what you see in the mirror can’t possibly be accurate? Well, there’s actually some truth to that. All mirrors absorb some of the light waves that hit them or scatter photons around in different directions, resulting in slight distortion. When it comes to solar power installations, these tiny distortions can add up to big efficiency losses.
Marin Soljačić and colleagues from MIT’s photonics and electromagnetics group didn’t set out to create a perfect mirror that could eliminate these inefficiencies – as in many scientific discoveries, they stumbled upon it while investigating something else. ExtremeTech explains:
The team was studying the behavior of a photonic crystal — in this case, a silicon wafer with a nanopatterned layer of silicon nitride on top — that had had holes drilled into it, forming a lattice. These holes are so small that they can only accommodate a single light wave. At most angles, light was partially absorbed by the photonic crystal, as they expected — but with a specific wavelength of red light, at an angle of 35 degrees, the light was perfectly reflected. Every photon that was emitted by the red light source was perfectly bounced back, at exactly the right angle, with no absorption or scattering.
This work is “very significant, because it represents a new kind of mirror which, in principle, has perfect reflectivity,” says A. Douglas Stone, in a press release. Stone is a professor of physics at Yale University who was not involved in this research. The finding, he says, “is surprising because it was believed that photonic crystal surfaces still obeyed the usual laws of refraction and reflection,” but in this case they do not.
The researchers are still trying to figure out why this deviation from known scientific laws took place. However, there is some excitement about what a perfect mirror could mean for various industries. The most obvious application is more powerful and efficient lasers, but concentrated solar power and fiber optics could also be improved.

~$300 for stereo video on the Pi.

Dual camera input Pi compute module: $215

Camera boards are $30 each.

I think this price is mostly driven by OpenCV users and other hobbyists bidding prices up too high for use inside anything but high-ticket products.

Magic Leap raises $542 million for Augmented Reality (AR).

Google is leading a huge $542 million round of funding for the secretive startup Magic Leap, which is said to be working on augmented reality glasses that can create digital objects that appear to exist in the world around you. Though little is known about what Magic Leap is working on, Google is placing a big bet on it: in addition to the funding, Android and Chrome leader Sundar Pichai will join Magic Leap's board, as will Google's corporate development vice-president Don Harrison. The funding is also coming directly from Google itself — not from an investment arm like Google Ventures — all suggesting this is a strategic move to align the two companies and eventually partner when the tech is more mature down the road.
Magic Leap's technology currently takes the shape of something like a pair of glasses, according to The Wall Street Journal. Rather than displaying images on the glasses or projecting them out into the world, Magic Leap's glasses reportedly project their image right onto their wearer's eyes — and apparently to some stunning effects.
"It was incredibly natural and almost jarring — you’re in the room, and there’s a dragon flying around, it’s jaw-dropping and I couldn’t get the smile off of my face," Thomas Tull, CEO of Legendary Pictures, tells the Journal. Legendary also took part in this round of investment, alongside Qualcomm, Kleiner Perkins, Andreessen Horowitz, and Obvious Ventures, among others. Qualcomm's executive chairman, Paul Jacobs, is also joining Magic Leap's board.
The eclectic mix of companies participating in this investment round speaks to how broadly Magic Leap sees its potential. Its founder says that he wants the company to become "a creative hub for gamers, game designers, writers, coders, musicians, filmmakers, and artists." Legendary, which makes films including Godzilla and The Dark Knight, is interested in its potential for movies. Google likely sees far more ways to put it to use.
The technology sounds like it could be an obvious companion to Google Glass, but for now the Journal reports that they're not being integrated. Magic Leap declined to comment on what might happen down the road. Nonetheless, the investment in Magic Leap appears to be Google betting on augmented reality as the future of computing, pitting it against virtual reality competitors. Eventually, it'll likely be facing off against Facebook's Oculus Rift — the biggest name in VR right now, and one that Facebook was willing to pay $2 billion for.

Magic Leap also says that it may "positively transform the process of education."
Magic Leap is run and was founded by Rony Abovitz, who previously founded the medical robotics company Mako Surgical, which was sold for $1.65 billion last year. The Journal reports that Abovitz has a biomedical engineering degree from the University of Miami. He previously gave a bizarre, psychedelic TEDx talk involving 2001, green and purple apes, and a punk band. His new company, which has been around since 2011, is headquartered in Florida, so it isn't exactly the typical tech startup out of Silicon Valley. Abovitz says the location allows Magic Leap to recruit globally. It currently has over 100 employees.
Though Magic Leap's product sounds like a pair of augmented reality glasses, Abovitz and his company dislike the term. Magic Leap brands its effect as "Cinematic Reality," which sounds a bit cooler but doesn't really mean anything just yet. "Those are old terms – virtual reality, augmented reality. They have legacy behind them," Abovitz told the South Florida Business Journal back in February, after closing an initial round of funding. "They are associated with things that didn’t necessarily deliver on a promise or live up to expectations. We have the term cinematic reality because we are disassociated with those things. … When you see this, you will see that this is computing for the next 30 or 40 years. To go farther and deeper than we’re going, you would be changing what it means to be human."
This is all something that Google is eager to view the results of. "We are looking forward to Magic Leap's next stage of growth, and to seeing how it will shape the future of visual computing," Pichai says in a statement. What exactly Google will do with augmented reality is still unknown, but, much like how Google has managed to control a great deal of mobile computing through Android, it's been looking ahead to ensure that it doesn't miss out on the next leap either. It declined to provide further comment on the investment.
Talking to TechCrunch, Abovitz says that Magic Leap should be launching a product for consumers "relatively soon." There's no stated target date for now, though, and it sounds like it still has some development to do.

Stereo Vision and Depth Mapping with Two Raspi Camera Modules

The Raspberry Pi has a port for a camera connector, allowing it to capture 1080p video and stream it to a network without having to deal with the craziness of webcams and the improbability of capturing 1080p video over USB. The Raspberry Pi compute module is a little more advanced; it breaks out two camera connectors, theoretically giving the Raspberry Pi stereo vision and depth mapping. [David Barker] put a compute module and two cameras together making this build a reality.

The use of stereo vision for computer vision and robotics research has been around much longer than other methods of depth mapping, like a repurposed Kinect, but so far the hardware to do it has been a little hard to come by. You need two cameras, obviously, but the software techniques are well understood in the relevant literature.

[David] connected two cameras to a Pi compute module and implemented three different versions of the software: one in Python and NumPy running on a 3GHz x86 box, a version in C running on both x86 and the Pi's ARM core, and another in assembler for the VideoCore on the Pi. Assembly is the way to go here – on the x86 platform, Python did the parallax computations in 63 seconds while C managed them in 56 milliseconds. On the Pi, C took 1 second, and the VideoCore took 90 milliseconds. This translates to a frame rate of about 12 FPS on the Pi, more than enough for some very, very interesting robotics work.
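Since the post's software wasn't linked, here is a generic textbook sketch of what the Python/NumPy route might look like: naive sum-of-absolute-differences block matching on rectified images. The `disparity_map` helper and its parameters are illustrative assumptions, not [David Barker]'s actual code:

```python
import numpy as np

def disparity_map(left, right, max_disp=16, block=5):
    """Naive sum-of-absolute-differences (SAD) block matching.

    left, right: rectified grayscale images as 2D float arrays.
    Returns an integer disparity per pixel; depth is then
    baseline * focal_length / disparity for nonzero disparities.
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            # try every candidate disparity and keep the best match
            costs = [
                np.abs(patch - right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1]).sum()
                for d in range(max_disp)
            ]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

Real pipelines add cost aggregation and subpixel refinement; this brute-force version is exactly the kind of workload where the C and VideoCore implementations win big.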

There are some better pictures of what this setup can do over on the Raspi blog. We couldn’t find a link to the software that made this possible, so if anyone has a link, drop it in the comments.

Lytro Hits Up The Enterprise With The Introduction Of The Lytro Platform And Dev Kit


Lytro has been working for three years to build a brand new type of camera with light field technology, and while the tech itself is quite incredible, transforming that into a viable business has proven difficult.
Until now, the company has been selling special cameras: the original Lytro and the newer, photographer-friendly Illum. It's a difficult business in fast flux, given that so many entry-level photographers now have a camera in their smartphone and more serious photographers want a proper DSLR.
And so, Lytro is adding a new revenue stream to its business with the launch of the Lytro Development Kit. For now, it’s a software development kit that comes with an API for integrating light field technology into applications in a number of imaging fields, including holography, microscopy, architecture, and security.
Lytro’s light field sensor captures the direction light is traveling relative to the shot, rather than capturing light on a single plane. This, paired with Lytro’s software, allows for image refocusing after the shot, among other depth-based features. With the LDK, Lytro is looking to open up that functionality to other fields and businesses, with a revenue stream coming from the enterprise side.
Alongside access to the API and the Lytro processing engine, the company will also be working alongside partners to develop custom devices and photography hardware to accomplish their specific, industry-based goals.
With the launch, Lytro has announced four major partnerships with organizations already on the platform, including NASA’s Jet Propulsion Laboratory, a medical devices startup called General Sensing, the Army Night Vision and Electronic Sensors Directorate, and an unnamed “industrial partner” applying the tech to work with nuclear reactors.
Pricing starts at $20,000 for access to the platform. You can learn more here.