Tuesday, February 04, 2020

Neural network upscaling of an 1896 film


Denis Shiryaev created a 60 fps, 4K version of the 1896 film "Arrival of a Train at La Ciotat" using several neural networks



https://www.youtube.com/watch?v=3RYNThid23g





https://www.reddit.com/r/videos/comments/eyoxfb/oc_i_have_made_60_fps_4k_version_of_1896_movie/

Upscaled version, with newly added sound, of a classic B&W film: Arrival of a Train at La Ciotat, The Lumière Brothers, 1896

Source used to upscale: https://youtu.be/MT-70ni4Ddo

Algorithms that were used:
 ››› To upscale to 4K – Gigapixel AI – Topaz Labs https://topazlabs.com/gigapixel-ai/
 ››› To increase the frame rate to 60 fps – DAIN, https://sites.google.com/view/wenbobao/dain

This is the interesting bit:
Depth-Aware Video Frame Interpolation
Wenbo Bao*, Wei-Sheng Lai#, Chao Ma*, Xiaoyun Zhang*, Zhiyong Gao*, Ming-Hsuan Yang#&

*Shanghai Jiao Tong University, #University of California, Merced, &Google

Abstract
Video frame interpolation aims to synthesize non-existent frames in between the original frames. While significant advances have been made with deep convolutional neural networks, the quality of interpolation is often reduced by large object motion or occlusion. In this work, we propose to explicitly detect occlusion by exploring the depth cue in frame interpolation. Specifically, we develop a depth-aware flow projection layer to synthesize intermediate flows that preferentially sample closer objects over farther ones. In addition, we learn hierarchical features as contextual information. The proposed model then warps the input frames, depth maps, and contextual features based on the optical flow and local interpolation kernels to synthesize the output frame. Our model is compact, efficient, and fully differentiable, so all of its components can be optimized. We conduct extensive experiments to analyze the effect of the depth-aware flow projection layer and hierarchical contextual features. Quantitative and qualitative results demonstrate that the proposed model performs favorably against state-of-the-art frame interpolation methods on a wide variety of datasets.
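
The depth-aware flow projection layer is the core trick. Below is a minimal NumPy sketch of the idea as the abstract describes it, not the authors' implementation (DAIN implements this as a differentiable CUDA layer inside a full network); the function name and the rounding-based projection are my own simplifications.

import numpy as np

def depth_aware_flow_projection(flow01, depth0, t=0.5):
    """Approximate the intermediate flow F_{t->0} by projecting the forward
    flow F_{0->1} to time t. Where several source pixels project onto the
    same target location, their flows are averaged with inverse-depth
    weights, so closer (smaller-depth) objects dominate the result.

    flow01: (H, W, 2) optical flow from frame 0 to frame 1, in pixels
    depth0: (H, W) depth map of frame 0
    """
    H, W, _ = flow01.shape
    num = np.zeros((H, W, 2))  # inverse-depth-weighted flow accumulator
    den = np.zeros((H, W))     # accumulated inverse-depth weights
    inv_d = 1.0 / (depth0 + 1e-6)
    for y in range(H):
        for x in range(W):
            # Where pixel (x, y) lands at time t along its flow vector.
            tx = int(round(x + t * flow01[y, x, 0]))
            ty = int(round(y + t * flow01[y, x, 1]))
            if 0 <= tx < W and 0 <= ty < H:
                num[ty, tx] += inv_d[y, x] * (-t) * flow01[y, x]
                den[ty, tx] += inv_d[y, x]
    # Pixels nothing projected onto keep zero flow (holes to be inpainted).
    return np.where(den[..., None] > 0, num / (den[..., None] + 1e-12), 0.0)

The inverse-depth weighting is what makes the projection "depth-aware": when a near and a far object both claim the same intermediate pixel, the near object's flow wins, which is the occlusion behavior the paper is after.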


Update: a colorized version of this video, produced with the DeOldify neural network: https://youtu.be/EqbOhqXHL7E





This is an upscaled version of the gorgeous video by Bard Canning of the Curiosity rover's descent, uploaded on Sep 13, 2012.
Source: https://youtu.be/Esj5juUzhpU

The video was upscaled to 4K with the Gigapixel AI software, frame by frame; 60 FPS was achieved with After Effects frame blending. I made this video for fun, to spread the love of space travel. Here is my Telegram channel: http://t.me/denissexy Here is a comparison: https://gfycat.com/diligentgianteaste... Here is a tutorial (in Russian) on how to upscale things: https://vc.ru/76580
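
For anyone who wants to reproduce the general workflow: Gigapixel AI and After Effects are GUI tools, but the same split-frames / upscale-each-frame / reassemble pipeline can be sketched in Python with ffmpeg and Pillow. Everything below is a hypothetical stand-in, not the author's actual process: plain Lanczos resampling substitutes for Gigapixel AI, ffmpeg's minterpolate filter substitutes for After Effects frame blending, and the file names are placeholders.

import subprocess
from pathlib import Path
from PIL import Image

SRC = "source_video.mp4"                        # placeholder input file
FRAMES, UP = Path("frames"), Path("frames_4k")
FRAMES.mkdir(exist_ok=True); UP.mkdir(exist_ok=True)

def upscale_frame(src: Path, dst: Path, factor: int = 4) -> None:
    # Stand-in for Gigapixel AI: plain Lanczos resampling.
    img = Image.open(src)
    img.resize((img.width * factor, img.height * factor),
               Image.LANCZOS).save(dst)

# 1. Split the source video into individual frames.
subprocess.run(["ffmpeg", "-i", SRC, str(FRAMES / "%06d.png")], check=True)

# 2. Upscale every frame, one by one.
for frame in sorted(FRAMES.glob("*.png")):
    upscale_frame(frame, UP / frame.name)

# 3. Reassemble at the source frame rate (adjust "24" to match your source)
#    and interpolate up to 60 fps with ffmpeg's motion interpolation.
subprocess.run([
    "ffmpeg", "-framerate", "24", "-i", str(UP / "%06d.png"),
    "-vf", "minterpolate=fps=60", "-pix_fmt", "yuv420p", "out_4k60.mp4",
], check=True)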


Friday, November 01, 2019

OmniVision announces world record for smallest image sensor


OmniVision, a developer of advanced digital imaging solutions, has announced that it has won a place in the Guinness Book of World Records with the development of its OV6948 image sensor—it now holds the record for the smallest image sensor in the world. Along with the sensor, the company also announced the development of a camera module based on the sensor called the CameraCubeChip.

https://techxplore.com/news/2019-10-omnivision-world-smallest-image-sensor.html

Thursday, October 10, 2019

NeTV2 - overlaying content on encrypted video signals. DMCA Lawsuit Progress

NeTV2 by Alphamax
An open video development board in a PCI Express form factor that supports overlaying content on encrypted video signals. Let's bring open video to the digital age!

https://www.crowdsupply.com/alphamax/netv2

Bugfix and DMCA Lawsuit Progress


by Andrew H


Dear backers,
Things have been progressing quietly behind the scenes on NeTV2. Here are two major developments that you might find particularly relevant.
First, the lawsuit challenging the constitutionality of the anti-circumvention provisions of Section 1201 of the DMCA with respect to the NeTV2 is finally moving forward again. After a 2.5-year hiatus on the judge's bench, a ruling was made that allows the case to proceed toward a preliminary injunction, which could enable much-requested features such as alpha blending and automatic colorspace selection, as well as finally opening the door to image processing and ML applications.
... Removed...
Finally, if you have any ideas or applications for NeTV2 that rely upon the right to access plaintext video, please add them to the public list of ideas.
Thanks to everyone who has logged an issue here – the US government has made an argument that the 1201 exemptions sought around the NeTV2 would only benefit a very limited audience. This public list of ideas helps refute this argument, as well as the argument that non-infringing applications of the NeTV2 hardware are just “hypothetical uses”. This list helps to explain to regulators why the features that would be enabled by access to the plaintext video streams rise above the level of a “mere inconvenience.”
Happy hacking,
-b.

https://www.crowdsupply.com/alphamax/netv2/updates/bugfix-and-dmca-lawsuit-progress



Sunday, May 12, 2019

New A.I. camera that can spot you from 28 miles away



A new camera can photograph you from 45 kilometers away
Developed in China, the lidar-based system can cut through city smog to resolve human-sized features at vast distances.
by Emerging Technology from the arXiv
May 3, 2019

Long-distance photography on Earth is a tricky challenge. Capturing enough light from a subject at great distances is not easy. And even then, the atmosphere introduces distortions that can ruin the image; so does pollution, which is a particular problem in cities. That makes it hard to get any kind of image beyond a distance of a few kilometers or so (assuming the camera is mounted high enough off the ground to cope with Earth’s curvature).

But in recent years, researchers have begun to exploit sensitive photodetectors to do much better. These detectors are so sensitive they can pick up single photons and use them to piece together images of subjects up to 10 kilometers (six miles) away.

Nevertheless, physicists would love to do even better. And today, Zheng-Ping Li and colleagues from the University of Science and Technology of China in Shanghai show how to photograph subjects up to 45 km (28 miles) away in a smog-plagued urban environment. Their technique uses single-photon detectors combined with a unique computational imaging algorithm that achieves super-high-resolution images by knitting together the sparsest of data points.

The new technique is relatively straightforward in principle. It is based on light detection and ranging, or lidar: illuminating the subject with laser light and then creating an image from the reflected light.

The big advantage of this kind of active imaging is that the photons reflected from the subject return to the detector within a specific time window that depends on the distance. So any photons that arrive outside this window can be ignored.

This “gating” dramatically reduces the noise created by unwanted photons from elsewhere in the environment. And it allows lidar systems to be highly sensitive and distance specific.
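
As a rough sketch of the gating arithmetic (mine, not the paper's code): the round trip to a target at distance d takes 2d/c, so an echo from 45 km arrives about 300 microseconds after the pulse, and a 10 ns gate around that instant accepts light only from a slice of scene about 1.5 m deep.

C = 299_792_458.0  # speed of light, m/s

def gate_photons(arrival_times_s, target_distance_m, gate_width_s=10e-9):
    """Keep only photon timestamps consistent with a round trip to the target.

    arrival_times_s: detection times in seconds, measured from the laser pulse.
    A gate of width dt accepts a depth slice of c * dt / 2 around the target,
    so the default 10 ns gate spans about 1.5 m.
    """
    expected = 2.0 * target_distance_m / C  # round-trip time of flight
    return [t for t in arrival_times_s
            if abs(t - expected) <= gate_width_s / 2]

print(2.0 * 45_000 / C)  # echo delay from 45 km: ~3.0e-4 s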

To make the new system even better in urban environments, Zheng-Ping and co use an infrared laser with a wavelength of 1550 nanometers, a repetition rate of 100 kilohertz, and a modest power of 120 milliwatts. This wavelength makes the system eye-safe and allows the team to filter out solar photons that would otherwise overwhelm the detector.

The researchers send and receive these photons through the same optical apparatus—an ordinary astronomical telescope with an aperture of 280 mm. The reflected photons are then detected by a commercial single-photon detector. To create an image, the researchers scan the field of view using a piezo-controlled mirror that can tilt up, down, and side to side.

In this way, they can create two-dimensional images. But by changing the gating timings, they can pick up photons reflected from different distances to build a 3D image.
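
Here is a toy sketch of that scan-and-gate depth mapping; it assumes the photons have already been grouped per mirror position, and the real pipeline is far more elaborate. At each scan position, histogram the photon arrival times, treat the strongest bin as the echo, and convert its time of flight into a distance via d = c * t / 2.

import numpy as np

C = 299_792_458.0  # speed of light, m/s

def depth_map_from_scan(photon_times, bin_width_s=1e-9):
    """photon_times: dict mapping (row, col) mirror positions to lists of
    photon arrival times in seconds after the laser pulse.
    Returns a (rows, cols) array of estimated distances in meters.
    With 1 ns bins, the depth quantization is c * 1e-9 / 2 = 0.15 m.
    """
    rows = 1 + max(r for r, _ in photon_times)
    cols = 1 + max(c for _, c in photon_times)
    depth = np.full((rows, cols), np.nan)
    for (r, c), times in photon_times.items():
        times = np.asarray(times, dtype=float)
        if times.size == 0:
            continue
        nbins = 1 + int(np.ceil((times.max() - times.min()) / bin_width_s))
        hist, edges = np.histogram(times, bins=nbins)
        peak = int(np.argmax(hist))
        tof = 0.5 * (edges[peak] + edges[peak + 1])  # echo time of flight
        depth[r, c] = C * tof / 2.0                  # distance = c * t / 2
    return depth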

The final advance the team has made is to develop an algorithm that knits an image together using the single-photon data. This kind of computational imaging has advanced in leaps and bounds in recent years, allowing researchers to create images from relatively small sets of data.
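
The actual reconstruction in the paper is sophisticated (3D deconvolution with sparsity priors); purely to illustrate the "knit an image from sparse data" idea, here is a toy fill-in of my own that fits the observed pixels while penalizing roughness everywhere else. All parameters are arbitrary.

import numpy as np

def knit_sparse_image(counts, mask, n_iter=300, lam=0.5, step=0.2):
    """Toy stand-in for the paper's computational reconstruction.

    Gradient descent on: sum over observed pixels of (x - counts)^2,
    plus lam times a roughness penalty (discrete Laplacian). Observed
    pixels stay close to their photon counts; unobserved pixels are
    filled in smoothly from their neighbors.

    counts: (H, W) photon counts; mask: (H, W) bool, True where observed.
    """
    fill = counts[mask].mean() if mask.any() else 0.0
    x = np.where(mask, counts.astype(float), fill)
    for _ in range(n_iter):
        data_grad = np.where(mask, x - counts, 0.0)
        # np.roll wraps at the borders, which is fine for a sketch.
        laplacian = (-4.0 * x
                     + np.roll(x, 1, 0) + np.roll(x, -1, 0)
                     + np.roll(x, 1, 1) + np.roll(x, -1, 1))
        x -= step * (data_grad - lam * laplacian)
    return x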

The results speak for themselves. The team set up the new camera on the 20th floor of a building on Chongming Island in Shanghai and pointed it at the Pudong Civil Aviation Building across the river, some 45 km away.

[Image: single-pixel-resolution imaging results]

Conventional images taken through the telescope show nothing other than noise. But the new technique produces images with a spatial resolution of about 60 cm, which resolves building windows. “This result demonstrates the superior capability of the near-infrared single-photon LiDAR system to resolve targets through smog,” say the team.

That’s also significantly better than the conventional diffraction limit of 1 meter at 45 km, and certainly better than other recently developed algorithms. The image here shows the potential of the technique in images taken in daylight from a distance of 21 km. “Our results open a new venue for high-resolution, fast, low-power 3D optical imaging over ultralong ranges,” say Zheng-Ping and co.

That’s interesting work that has a wide range of applications. The team mention remote sensing, airborne surveillance, and target recognition and identification. Indeed, the entire device is about the size of a large shoebox and so is relatively portable.

And Zheng-Ping and co say it can be significantly improved. “Our system is feasible for imaging at a few hundreds of kilometers by refining the setup, and thus represents a significant milestone towards rapid, low-power, and high-resolution LiDAR over extra-long ranges,” they say.

So keep smiling—they may be watching.

Ref: arxiv.org/abs/1904.10341: Single-Photon Computational 3D Imaging at 45 km