Showing posts with label Super resolution. Show all posts

Monday, December 21, 2020

Computational Imaging and Microscopy



This is an excellent talk: it takes one of the most complex subjects and breaks it down simply, from the beginning.


The key part starts at 14:38 into the video.

It took me the better part of 25 years to learn the secret of zooming in on a license plate from an impossibly zoomed image. It was something that, when shown in sci-fi TV shows from the '70s through the '90s, I just assumed was bullshit.

I eventually learned the secret from one of the top digital imaging experts, someone I've known for 25 years, well after his retirement. He implemented it on super-top-secret hardware for satellite imaging when I was still in grade school. It relies on the fact that every pixel in your image represents a sine cardinal contribution from the whole image. Also known as the sinc function, it is the continuous inverse Fourier transform of a rectangular pulse, and it can be thought of as something like a Gaussian-modulated sine wave, although I am not sure those are exactly equivalent, even though I've seen it implemented that way in RF applications.
In his case the lens had to be extremely well understood, and in the end the process generated convolution filters that could be run quickly and efficiently.
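The sinc idea above can be made concrete in one dimension: for a bandlimited signal, every sample contributes a shifted sinc, and summing those contributions reconstructs the signal between the samples (Whittaker-Shannon interpolation). This is only a toy sketch of that principle, with made-up signal and sample rates, not the satellite-imaging implementation described above:

```python
import numpy as np

def sinc_interpolate(samples, t_samples, t_query):
    """Whittaker-Shannon reconstruction: each sample contributes a
    shifted sinc, and their sum recovers the bandlimited signal at
    arbitrary query times."""
    T = t_samples[1] - t_samples[0]  # uniform sampling interval
    # np.sinc is the normalized sinc: sin(pi*x) / (pi*x).
    return np.array([np.sum(samples * np.sinc((t - t_samples) / T))
                     for t in t_query])

# Coarsely sample a 3 Hz tone at 16 Hz (comfortably above Nyquist)...
t_coarse = np.arange(0, 1, 1 / 16)
x_coarse = np.sin(2 * np.pi * 3 * t_coarse)

# ...then evaluate the reconstruction on a 4x denser grid ("zooming in").
t_fine = np.arange(0, 1, 1 / 64)
x_fine = sinc_interpolate(x_coarse, t_coarse, t_fine)
```

At the original sample times the shifted sincs are exactly 1 or 0, so the reconstruction passes through every sample; in a real pipeline this interpolation would be folded into compact convolution kernels, as described above.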

What is interesting is seeing this generalized into a generic computational imaging problem. With more information it may actually yield better results, but it will most likely take much more computation.





Computational imaging involves the joint design of imaging system hardware and software, optimizing across the entire pipeline from acquisition to reconstruction. This talk will describe new methods for computational microscopy with coded illumination, based on a simple and inexpensive hardware modification of a commercial microscope. Traditionally, one must trade field-of-view for resolution; with our methods we can have both, resulting in Gigapixel-scale images with resolution beyond the diffraction limit of the system. Our reconstruction algorithms are based on large-scale nonlinear non-convex optimization procedures for phase retrieval.

Laura Waller leads the Computational Imaging Lab, which develops new methods for optical imaging, with optics and computational algorithms designed jointly. She holds the Ted Van Duzer Endowed Professorship and is a Senior Fellow at the Berkeley Institute of Data Science (BIDS), with affiliations in Bioengineering and Applied Sciences & Technology. Laura was a Postdoctoral Researcher and Lecturer of Physics at Princeton University from 2010-2012 and received BS, MEng and PhD degrees from MIT in 2004, 2005 and 2010, respectively. She is a Moore Foundation Data-Driven Investigator, Bakar fellow, Distinguished Graduate Student Mentoring awardee, NSF CAREER awardee and Packard Fellow.
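The phase retrieval mentioned in the abstract is, at its core, the problem of recovering a complex field from magnitude-only measurements. The talk's actual reconstructions use large-scale non-convex solvers, but the classic Gerchberg-Saxton iteration gives the flavor; this is a minimal sketch of that older method, not the one from the talk:

```python
import numpy as np

def gerchberg_saxton(source_mag, target_mag, iters=200, seed=0):
    """Phase retrieval by alternating projections: repeatedly enforce
    the known magnitudes in the spatial and Fourier domains while
    keeping only the evolving phase estimate."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, source_mag.shape)  # random start
    field = source_mag * np.exp(1j * phase)
    for _ in range(iters):
        F = np.fft.fft2(field)
        F = target_mag * np.exp(1j * np.angle(F))        # impose Fourier magnitude
        field = np.fft.ifft2(F)
        field = source_mag * np.exp(1j * np.angle(field))  # impose spatial magnitude
    return field

# Toy example: magnitudes of a known complex field and of its transform.
rng = np.random.default_rng(1)
g = rng.uniform(0.1, 1.0, (8, 8)) * np.exp(1j * rng.uniform(0, 2 * np.pi, (8, 8)))
est = gerchberg_saxton(np.abs(g), np.abs(np.fft.fft2(g)))
```

The recovered field always satisfies the spatial magnitude constraint by construction; how well its Fourier magnitudes match is what the iteration (and, in modern systems, the non-convex optimizer) works to improve.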

Tuesday, February 04, 2020

Neural network upscaling of 1896 footage


Denis Shiryaev created an upscaled 60 fps 4K version of the 1896 film "Arrival of a Train at La Ciotat" using several neural networks.



https://www.youtube.com/watch?v=3RYNThid23g





https://www.reddit.com/r/videos/comments/eyoxfb/oc_i_have_made_60_fps_4k_version_of_1896_movie/

An upscaled version, with sound added, of a classic B&W movie: Arrival of a Train at La Ciotat, The Lumière Brothers, 1896

Source used to upscale: https://youtu.be/MT-70ni4Ddo

Algorithms that were used:
 ››› To upscale to 4K – Gigapixel AI – Topaz Labs https://topazlabs.com/gigapixel-ai/
 ››› To add FPS – DAIN, https://sites.google.com/view/wenbobao/dain

This is the interesting bit:
Depth-Aware Video Frame Interpolation
Wenbo Bao*, Wei-Sheng Lai#, Chao Ma*, Xiaoyun Zhang*, Zhiyong Gao*, Ming-Hsuan Yang#&

*Shanghai Jiao Tong University,     #University of California, Merced,   &Google

Abstract
Video frame interpolation aims to synthesize non-existent frames in-between the original frames. While significant advances have been made from the deep convolutional neural networks, the quality of interpolation is often reduced due to large object motion or occlusion. In this work, we propose to explicitly detect the occlusion by exploring the depth cue in frame interpolation. Specifically, we develop a depth-aware flow projection layer to synthesize intermediate flows that preferably sample closer objects than farther ones. In addition, we learn hierarchical features as the contextual  information. The proposed model then warps the input frames, depth maps, and contextual features based on the optical flow and local interpolation kernels for synthesizing the output frame. Our model is compact, efficient, and fully differentiable to optimize all the components. We conduct extensive experiments to analyze the effect of the depth-aware flow projection layer and hierarchical contextual features. Quantitative and qualitative results demonstrate that the proposed model performs favorably against state-of-the-art frame interpolation methods on a wide variety of datasets. 
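For contrast with DAIN's depth-aware, flow-based warping, the simplest way to add frames is plain linear blending (what video editors call frame blending, used for the Curiosity video below). It ghosts on anything that moves, which is exactly the failure mode flow-based methods address. A toy sketch, with made-up frame data:

```python
import numpy as np

def blend_midpoint(frame_a, frame_b, t=0.5):
    """Naive frame interpolation: the in-between frame is just a
    weighted average of its two neighbors. Fast, but moving objects
    ghost; flow-based methods like DAIN warp pixels along motion
    vectors instead."""
    return ((1.0 - t) * frame_a + t * frame_b).astype(frame_a.dtype)

# Two tiny grayscale "frames" with a bright pixel moving right:
a = np.zeros((4, 4), dtype=np.float32); a[2, 0] = 1.0
b = np.zeros((4, 4), dtype=np.float32); b[2, 2] = 1.0

mid = blend_midpoint(a, b)
```

Instead of a single pixel at the true in-between position (2, 1), the blended frame has two half-intensity ghosts at the start and end positions, which is why flow-based interpolation looks so much better on fast motion.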


 Update: Colorized by DeOldify Neural Network version of this video: https://youtu.be/EqbOhqXHL7E





This is an upscaled version of the gorgeous video of the Curiosity descent by Bard Canning, uploaded on Sep 13, 2012.
 source: https://youtu.be/Esj5juUzhpU

The video was upscaled to 4K with Gigapixel AI software, frame by frame; 60 FPS was achieved with After Effects frame blending. I made this video for fun, to spread the love of space travel. Here is my telegram channel: http://t.me/denissexy Here is a comparison: https://gfycat.com/diligentgianteaste... Here is a tutorial on how to upscale things, in Russian: https://vc.ru/76580


Sunday, June 14, 2015

Superresolution with Plenoptic Camera 2.0


An excellent paper.

Abstract:
 This work is based on the plenoptic 2.0 camera, which captures an array of real images focused on the object. We show that this very fact makes it possible to use the camera data with super-resolution techniques, which enables the focused plenoptic camera to achieve high spatial resolution. We derive the conditions under which the focused plenoptic camera can capture radiance data suitable for super resolution. We develop an algorithm for super resolving those images. Experimental results are presented that show a 9× increase in spatial resolution compared to the basic plenoptic 2.0 rendering approach. Categories and Subject Descriptors (according to ACM CCS): I.4.3 [Image Processing and Computer Vision, Imaging Geometry, Super Resolution]:
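The paper's premise, that the focused plenoptic camera's microimages sample the scene at sub-pixel offsets suitable for super-resolution, builds on the classic shift-and-add scheme: interleave several low-resolution images with known subpixel shifts onto a finer grid. This is only a minimal sketch of that idea with integer high-res-grid shifts and made-up data; the paper handles real optics and derives when the radiance data actually supports it:

```python
import numpy as np

def shift_and_add(low_res_frames, shifts, factor):
    """Classic shift-and-add super-resolution: place each low-res
    image, offset by its known subpixel shift (in high-res pixel
    units), onto a finer grid and average overlapping contributions."""
    h, w = low_res_frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for img, (dy, dx) in zip(low_res_frames, shifts):
        # Map each low-res pixel to its shifted high-res location.
        ys = np.arange(h) * factor + dy
        xs = np.arange(w) * factor + dx
        acc[np.ix_(ys, xs)] += img
        cnt[np.ix_(ys, xs)] += 1
    return acc / np.maximum(cnt, 1)

# Toy example: four shifted low-res views together cover every
# high-res pixel exactly once, so the "scene" is recovered exactly.
hi = np.arange(16.0).reshape(4, 4)
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
lows = [hi[dy::2, dx::2] for dy, dx in shifts]
recon = shift_and_add(lows, shifts, factor=2)
```

With fewer views or imperfectly known shifts the grid is only partially covered and the problem becomes the kind of inverse problem the paper's algorithm is built to solve.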

http://www.tgeorgiev.net/Superres.pdf