Monday, December 21, 2020

Computational Imaging and Microscopy



This is an excellent talk; it takes one of the most complex subjects and breaks it down simply from the beginning.  


     14:38 into the video. 

  It took me the better part of 25 years to learn the secret of zooming in on a license plate from an impossibly zoomed-in image, something that, when shown in sci-fi TV shows in the '70s, '80s, and '90s, I just assumed was bullshit.  

 I eventually learned the secret from one of the top digital imaging experts, someone I've known for 25 years, and well after his retirement.  He had implemented it on super-top-secret hardware for satellite imaging when I was still in grade school.  It relies on the fact that every pixel in a properly sampled image can be treated as the weight on a shifted sine cardinal (sinc) function spanning the whole image.  The sinc function is the continuous inverse Fourier transform of a rectangular pulse; it looks like a decaying oscillation, but it is not the same thing as a Gaussian-modulated sine wave, even though similar-looking pulses turn up in RF applications. 
In his case the lens had to be extremely well characterized, and in the end the process generated convolution filters that could be run quickly and efficiently. 

What is interesting is to see this generalized into a generic computational imaging problem.  With more information it may actually yield better results, but it will most likely require much more computation.
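The sinc observation is essentially the Whittaker-Shannon sampling theorem: each sample is the coefficient of a shifted sinc, so a band-limited image can be resampled on a finer grid. Below is a minimal, hedged 1D sketch in NumPy (nothing like the classified satellite pipeline, and on its own it only interpolates; the lens characterization he mentions is what turns this into a real deconvolution/sharpening filter). A 2D version applies the same kernel separably to rows and columns and, once truncated and windowed, becomes exactly the kind of fast convolution filter described above.

import numpy as np

def sinc_interpolate(samples, upsample=4):
    """Whittaker-Shannon reconstruction: every sample contributes one shifted
    sinc to the continuous signal, which we evaluate on a finer grid."""
    n = np.arange(len(samples))                        # original sample positions
    t = np.arange(len(samples) * upsample) / upsample  # finer evaluation grid
    # Matrix of shifted sincs (one column per original sample) times the samples.
    return np.sinc(t[:, None] - n[None, :]) @ samples

# Toy example: a coarsely sampled band-limited tone, reconstructed 4x denser.
x = np.sin(2 * np.pi * 0.1 * np.arange(32))
fine = sinc_interpolate(x, upsample=4)
print(len(x), "->", len(fine))   # 32 -> 128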





Computational imaging involves the joint design of imaging system hardware and software, optimizing across the entire pipeline from acquisition to reconstruction. This talk will describe new methods for computational microscopy with coded illumination, based on a simple and inexpensive hardware modification of a commercial microscope. Traditionally, one must trade field-of-view for resolution; with our methods we can have both, resulting in Gigapixel-scale images with resolution beyond the diffraction limit of the system. Our reconstruction algorithms are based on large-scale nonlinear non-convex optimization procedures for phase retrieval.

Laura Waller leads the Computational Imaging Lab, which develops new methods for optical imaging, with optics and computational algorithms designed jointly. She holds the Ted Van Duzer Endowed Professorship and is a Senior Fellow at the Berkeley Institute of Data Science (BIDS), with affiliations in Bioengineering and Applied Sciences & Technology. Laura was a Postdoctoral Researcher and Lecturer of Physics at Princeton University from 2010-2012 and received BS, MEng and PhD degrees from MIT in 2004, 2005 and 2010, respectively. She is a Moore Foundation Data-Driven Investigator, Bakar Fellow, Distinguished Graduate Student Mentoring awardee, NSF CAREER awardee and Packard Fellow.
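For a sense of what "phase retrieval" means here, the textbook starting point is the Gerchberg-Saxton alternating-projection loop: bounce between two planes related by a Fourier transform, enforcing the measured amplitude in each plane while keeping the evolving phase. A minimal NumPy sketch follows; Waller's actual reconstructions (coded illumination, large-scale non-convex solvers) are far more sophisticated, so treat this only as an illustration of the basic idea.

import numpy as np

def gerchberg_saxton(near_amp, far_amp, iters=200):
    """Classic alternating projections: keep the current phase estimate while
    forcing the amplitudes measured in the near field and the far field."""
    rng = np.random.default_rng(0)
    field = near_amp * np.exp(1j * 2 * np.pi * rng.random(near_amp.shape))
    for _ in range(iters):
        far = np.fft.fft2(field)
        far = far_amp * np.exp(1j * np.angle(far))       # impose far-field amplitude
        field = np.fft.ifft2(far)
        field = near_amp * np.exp(1j * np.angle(field))  # impose near-field amplitude
    return np.angle(field)                               # recovered phase estimate

# Synthetic 64x64 example: amplitudes consistent with some unknown phase object.
truth = np.exp(1j * np.random.default_rng(1).random((64, 64)))
phase = gerchberg_saxton(np.abs(truth), np.abs(np.fft.fft2(truth)))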

Tuesday, December 15, 2020

Transmit a video stream to a PAL analog TV using low-frequency PWM

 



Uses an STM32F411 with a slow 6.86 MHz PWM output to generate the modulated transmit signal; the 9th harmonic lands at 61.71 MHz, which is picked up on channel 3 of the TV.
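A quick sanity check of those numbers (only the figures quoted above are assumed): a roughly square PWM carrier has energy at odd multiples of its fundamental, and the 9th multiple of ~6.86 MHz is right at 61.71 MHz.

# Harmonics of the PWM fundamental implied by the project's numbers.
f_pwm = 61.71e6 / 9                  # ~6.857 MHz fundamental
for k in range(1, 12):
    print(f"harmonic {k:2d}: {k * f_pwm / 1e6:6.2f} MHz")
# harmonic 9 prints 61.71 MHz, in the VHF range the TV tuner can receive.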

https://hackaday.io/project/171977-pal-streamer

https://github.com/zst123/PAL-Streamer






He used an AVR ATTiny85 to generate PWM waveforms which were picked up by his TV.

https://hackaday.io/project/4348-attiny85-does-ntsc-over-vhf

Thursday, December 10, 2020

A camera that can look inside the keyhole to read the key pattern!




Currently $345 USD

The LockTech LTKS Kwikset Decoder is a WiFi-enabled digital scope that, when used with a compatible iOS or Android smartphone, makes decoding these locks ridiculously easy and fast!

Features:
- Decodes all current SmartKey locks (GEN 1, 2, 3, & 4) and SmartKey Control Key cylinders as well.
- A real glass mirror for the clearest image possible.
- Internal LED eliminates glare off the front of the lock.
- Position Alignment Spacers eliminate the guesswork of where you're looking in the lock and of locating individual wafers/pins during the decoding process.
- LED dimmer allows the user to increase or decrease the brightness inside of the lock.
- Live Video Display Feed, SnapShot Mode, or Video Mode.
- Rechargeable battery
- Magnetic Protective Storage Cap
- Spacers, Protective Cap, and Laminated Depth Chart are tethered for convenience.

 System requirement:

Android 4.2 and iOS 8.0 or later


https://www.internationalkeysupply.com/products/locktech-ltks-wi-fi-enabled-decoder-for-kwikset-smartkey-locks

Saturday, December 05, 2020

Thursday, October 29, 2020

Watch "The First Live Light Field Videocamera | SEBI" on YouTube

https://youtu.be/hiYc8iRK-Ok 

Wooptix say their SEBI development kit for their liquid lens "light field" camera is available now -- camera alone, or with LookingGlass display etc. Any ideas of prices, expertise level needed?
"The development kit contains the SEBI camera, a high speed liquid lens, a FPGA lens controller synchronizing the liquid lens and the image sensor, two camera link cables for data transfer and a PC for live algorithm processing using Wooptix' proprietary software. Using the SEBI software, the camera can acquire either snapshots or video sequences, obtaining an all-in-focus image and a corresponding depth map for each image frame"


Thursday, October 22, 2020

Wednesday, September 23, 2020

Facebook: Video@Scale 2020

https://videoscale2020.splashthat.com/

OCTOBER 22, 2020

10:00AM – 1:00PM


YOU'RE INVITED TO VIDEO @SCALE REMOTE EDITION


BUILDING DISTRIBUTED VIDEO SYSTEMS




Video @Scale is an invitation-only technical conference for engineers who develop or manage large-scale video systems serving millions of people. The development of large-scale video systems involves complex, unprecedented engineering challenges. The @Scale community focuses on bringing people together to discuss these challenges and collaborate on the development of new solutions.


 


This year, we will be hosting our Video @Scale event virtually. 




 


AGENDA

SESSION #1: VIDEO QUALITY (10:00 AM - 11:00 AM PST)
  • Keynote: Rajeev Rajan
  • Video Coding Standardization: Ioannis Katsavounidis
  • Video Encoding Parameter Selection with Hybrid Software/Hardware Approach: Nick Wu
  • VMAF: Zhi Li | Netflix

SESSION #2: SCALABILITY & RELIABILITY (11:00 AM - 12:00 PM PST)
  • Yet Another Live Video Delivery Architecture: Kirill Pugin
  • Scaling I/O to Millions of Videos: David Zhang
  • Providing Better Video Experience for the Next Billion Users: Denise Noyes
  • Bytes Range Addressing with LL-HLS: Will Law | Akamai

SESSION #3: PANEL + Q&A (12:00 PM - 1:00 PM PST)
Video Trends During COVID-19
  • Jaron Schaeffer | Google
  • Connie Goshgarian | AT&T
  • Li-Tal Mashiach | Facebook
  • Tremain Wheatley | Facebook (Moderator)


Tuesday, September 22, 2020

Engineers produce a fisheye lens that’s completely flat


The single piece of glass produces crisp panoramic images.

https://news.mit.edu/2020/flat-fisheye-lens-0918

To capture panoramic views in a single shot, photographers typically use fisheye lenses — ultra-wide-angle lenses made from multiple pieces of curved glass, which distort incoming light to produce wide, bubble-like images. Their spherical, multipiece design makes fisheye lenses inherently bulky and often costly to produce.


Now engineers at MIT and the University of Massachusetts at Lowell have designed a wide-angle lens that is completely flat. It is the first flat fisheye lens to produce crisp, 180-degree panoramic images. The design is a type of “metalens,” a wafer-thin material patterned with microscopic features that work together to manipulate light in a specific way.


In this case, the new fisheye lens consists of a single flat, millimeter-thin piece of glass covered on one side with tiny structures that precisely scatter incoming light to produce panoramic images, just as a conventional curved, multielement fisheye lens assembly would. The lens works in the infrared part of the spectrum, but the researchers say it could be modified to capture images using visible light as well.


The new design could potentially be adapted for a range of applications, with thin, ultra-wide-angle lenses built directly into smartphones and laptops, rather than physically attached as bulky add-ons. The low-profile lenses might also be integrated into medical imaging devices such as endoscopes, as well as in virtual reality glasses, wearable electronics, and other computer vision devices.


“This design comes as somewhat of a surprise, because some have thought it would be impossible to make a metalens with an ultra-wide-field view,” says Juejun Hu, associate professor in MIT’s Department of Materials Science and Engineering. “The fact that this can actually realize fisheye images is completely outside expectation.


This isn’t just light-bending — it’s mind-bending.”


Hu and his colleagues have published their results today in the journal Nano Letters. Hu’s MIT coauthors are Mikhail Shalaginov, Fan Yang, Peter Su, Dominika Lyzwa, Anuradha Agarwal, and Tian Gu, along with Sensong An and Hualiang Zhang of UMass Lowell.



Design on the back side


Metalenses, while still largely at an experimental stage, have the potential to significantly reshape the field of optics. Previously, scientists have designed metalenses that produce high-resolution and relatively wide-angle images of up to 60 degrees. To expand the field of view further would traditionally require additional optical components to correct for aberrations, or blurriness — a workaround that would add bulk to a metalens design.


Hu and his colleagues instead came up with a simple design that does not require additional components and keeps a minimum element count. Their new metalens is a single transparent piece made from calcium fluoride with a thin film of lead telluride deposited on one side. The team then used lithographic techniques to carve a pattern of optical structures into the film.


Each structure, or “meta-atom,” as the team refers to them, is shaped into one of several nanoscale geometries, such as a rectangular or a bone-shaped configuration, that refracts light in a specific way. For instance, light may take longer to scatter, or propagate off one shape versus another — a phenomenon known as phase delay.


In conventional fisheye lenses, the curvature of the glass naturally creates a distribution of phase delays that ultimately produces a panoramic image. The team determined the corresponding pattern of meta-atoms and carved this pattern into the back side of the flat glass.
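As a rough illustration of "a distribution of phase delays" (a generic flat focusing-lens profile, not the paper's actual fisheye design): each meta-atom at radius r must delay the light so that all paths arrive at the focus in step, which gives the standard hyperbolic phase profile, wrapped to 2π for fabrication. The wavelength and focal length below are illustrative values only.

import numpy as np

# Generic flat-lens phase profile (illustration only, NOT the paper's design):
# the phase each meta-atom must add so that rays from radius r reach the focus
# in step with the central ray.
wavelength_m = 5.2e-6                 # mid-infrared, illustrative
focal_m = 2e-3                        # illustrative focal length
r = np.linspace(0.0, 0.5e-3, 512)     # radial positions across the lens

phi = (2 * np.pi / wavelength_m) * (focal_m - np.sqrt(r**2 + focal_m**2))
phi_wrapped = np.mod(phi, 2 * np.pi)  # what the fabricated meta-atom pattern encodes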


"We've designed the back side structures in such a way that each part can produce a perfect focus," Hu says.


On the front side, the team placed an optical aperture, or opening for light.


“When light comes in through this aperture, it will refract at the first surface of the glass, and then will get angularly dispersed,” Shalaginov explains. “The light will then hit different parts of the backside, from different and yet continuous angles. As long as you design the back side properly, you can be sure to achieve high-quality imaging across the entire panoramic view.”


Across the panorama


In one demonstration, the new lens is tuned to operate in the mid-infrared region of the spectrum. The team used the imaging setup equipped with the metalens to snap pictures of a striped target. They then compared the quality of pictures taken at various angles across the scene, and found the new lens produced images of the stripes that were crisp and clear, even at the edges of the camera’s view, spanning nearly 180 degrees.


“It shows we can achieve perfect imaging performance across almost the whole 180-degree view, using our methods,” Gu says.


In another study, the team designed the metalens to operate at a near-infrared wavelength using amorphous silicon nanoposts as the meta-atoms. They plugged the metalens into a simulation used to test imaging instruments. Next, they fed the simulation a scene of Paris, composed of black and white images stitched together to make a panoramic view. They then ran the simulation to see what kind of image the new lens would produce.


“The key question was, does the lens cover the entire field of view? And we see that it captures everything across the panorama,” Gu says. “You can see buildings and people, and the resolution is very good, regardless of whether you’re looking at the center or the edges.”


The team says the new lens can be adapted to other wavelengths of light. To make a similar flat fisheye lens for visible light, for instance, Hu says the optical features may have to be made smaller than they are now, to better refract that particular range of wavelengths. The lens material would also have to change. But the general architecture that the team has designed would remain the same.


The researchers are exploring applications for their new lens, not just as compact fisheye cameras, but also as panoramic projectors, as well as depth sensors built directly into smartphones, laptops, and wearable devices.


“Currently, all 3D sensors have a limited field of view, which is why when you put your face away from your smartphone, it won’t recognize you,” Gu says. “What we have here is a new 3D sensor that enables panoramic depth profiling, which could be useful for consumer electronic devices.”


This research was funded in part by DARPA under the EXTREME Program.

World's largest camera sensor snaps first ever 3,200-megapixel photos

Saturday, September 19, 2020

Tuesday, September 01, 2020

500'000€ Prize for Compressing Human Knowledge


http://prize.hutter1.net/

The Task

Losslessly compress the 1GB file enwik9 to less than 116MB. More precisely:
  • Create a Linux or Windows compressor comp.exe of size S1 that compresses enwik9 to archive.exe of size S2 such that S:=S1+S2 < L := 116'673'681 = previous record.
  • If run, archive.exe produces (without input from other sources) a 10^9 byte file that is identical to enwik9.
  • If we can verify your claim, you are eligible for a prize of 500'000€×(1-S/L). Minimum claim is 5'000€ (1% improvement).
  • Restrictions: Must run in ≲100 hours using a single CPU core, within the RAM (<10GB) and HDD limits of our test machine (http://browser.primatelabs.com/v4/cpu/145066).
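To make the prize formula concrete, here is the arithmetic from the rules above (a sketch only; the official requirements are on the prize page):

# Prize arithmetic from the rules above.
L = 116_673_681                       # previous record (bytes): compressor + archive
def prize_eur(S):
    return 500_000 * (1 - S / L)      # claims below 5,000 EUR (1% improvement) don't qualify

print(round(prize_eur(115_000_000)))  # a ~1.4% smaller total -> about 7,173 EUR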

Friday, August 21, 2020

Glenn Weiss live-switched the entire DNC convention from his house.


Alex Lightman on Facebook Said:

How to make yourself highly employable 1 of 100

This is Glenn Weiss. He live-switched the entire DNC convention from his house.

Note to self: learn to live-switch. Seems like a useful and highly scalable skill.

I have the video equipment. Now I just need a convention or something with important people saying impressive things in inspiring ways and I’m gtg.





Wednesday, August 19, 2020

PJSIP is an Open Source Embedded SIP protocol stack written in C.

PJSIP

PJSIP is an Open Source Embedded SIP protocol stack written in C.
The development of PJSIP is mainly focused on having a small footprint, modular, and very portable SIP stack for embedded development purposes (although it’s perfectly good for Win32/Linux/MacOS as well). Some of the characteristics of PJSIP:
  • it is built on top of PJLIB, and since PJLIB is a very very portable library, basically PJSIP can run on any platforms where PJLIB is ported (including platforms where normally it would be hard to port existing programs to, such as Symbian and some custom OSes).
  • it has quite a small footprint, although probably it’s not the smallest SIP stack on the planet (the smallest SIP stack would be a stack that does nothing!),
  • it is quite customizable and modular, meaning that features that are not needed won’t get linked into the executable,
  • it has pretty good performance (thousands of calls per second), and
  • it has quite a lot of SIP features.
A high-level SIP multimedia user agent API is available for both C and Python language.
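As a rough illustration of that high-level API, here is a minimal sketch using the classic pjsua Python binding (newer releases ship a pjsua2 binding instead); the SIP server, user, and password below are placeholders, and error handling is omitted.

# Minimal outgoing-call sketch with the classic pjsua Python wrapper.
# Account details are placeholders, not a real SIP service.
import pjsua as pj

lib = pj.Lib()
try:
    lib.init()                                                  # default UA/log/media configs
    lib.create_transport(pj.TransportType.UDP, pj.TransportConfig(5060))
    lib.start()
    acc = lib.create_account(pj.AccountConfig("sip.example.com", "alice", "secret"))
    call = acc.make_call("sip:bob@sip.example.com")             # place the call
finally:
    lib.destroy()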

Links



https://www.pjsip.org/

https://github.com/pjsip/pjproject


New: Video codec VP8 & VP9!

PJSIP is a free and open source multimedia communication library written in C language implementing standard based protocols such as SIP, SDP, RTP, STUN, TURN, and ICE. It combines signaling protocol (SIP) with rich multimedia framework and NAT traversal functionality into high level API that is portable and suitable for almost any type of systems ranging from desktops, embedded systems, to mobile handsets.

PJSIP is both compact and feature rich. It supports audio, video, presence, and instant messaging, and has extensive documentation. PJSIP is very portable. On mobile devices, it abstracts system dependent features and in many cases is able to utilize the native multimedia capabilities of the device.

Learning VoIP, RTP and SIP (aka awesome pjsip) @ Medium.com


https://stackoverflow.com/questions/tagged/pjsip


This is interesting
Debugging SIP message traffic with PJSIP History

Bus Passenger Counting System


The people counting function can be integrated into a mobile DVR or into cameras. 

You can choose the solution as you like.
If you're interested in the system, pls kindly let me know. TKS. My skype: levy-he. My email: levy.he@jasanwit.com

Wednesday, August 12, 2020

Sony Exmor IMX219

  • Sony Exmor IMX219 Sensor Capable of 4K30 1080P60 720P180 8MP Still
  • 3280 (H) x 2464 (V) Active Pixel Count
  • Maximum of 1080P30 and 8MP Stills in Raspberry Pi Board
  • 2A Power Supply Highly Recommended


The specs on this camera are impressive, and it takes decent pictures. 

The field of view is rather limited: the angle of view is 62.2 x 48.8 degrees.

But the aperture is so small that it is practically a pinhole camera, which can make it useful for a number of applications. 
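Those numbers are enough to back out the effective focal length. The sketch below assumes the commonly quoted IMX219 figures (1.12 µm pixels, 3280 active columns), so treat the result as approximate.

import math

# Effective focal length implied by the quoted horizontal angle of view.
pixel_pitch_mm = 1.12e-3                  # nominal IMX219 pixel size
sensor_width_mm = 3280 * pixel_pitch_mm   # ~3.67 mm active width
hfov_deg = 62.2

focal_mm = (sensor_width_mm / 2) / math.tan(math.radians(hfov_deg / 2))
print(f"effective focal length ~{focal_mm:.2f} mm")   # roughly 3 mm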





Arducam Multi Camera Adapter Module for Raspberry Pi - Stereo 3D cameras


Arducam Multi Camera Adapter Module V2.1 for Raspberry Pi 4 B, 3B+, Pi 3, Pi 2, Model A/B/B+, Work with 5MP or 8MP Cameras

$49.99 on Amazon


  • Support Raspberry Pi Model A/B/B+, Pi 2, and Raspberry Pi 4, 3, and 3B+.
  • Accommodate up to four 5MP or 8MP Raspberry Pi cameras on a multi camera adapter board.
  • All camera ports are FFC (flexible flat cable) connectors. Demo: youtu.be/DRIeM5uMy0I
  • Cameras work sequentially, not simultaneously (high resolution still image photography demo).
  • Note: no mixing of 5MP and 8MP cameras is allowed (low resolution, low frame rate video surveillance demo with 4 cameras).






Product description
Compared to the previous multi-camera adapter module, which could only support 5MP RPi cameras, the new multi-camera adapter module V2.1 is designed to connect a maximum of four 5MP or 8MP cameras to a single CSI camera port on a Raspberry Pi board. Because the high-speed CSI camera MIPI signal integrity is sensitive to long cable connections, this adapter board does not support stacking and can only connect 4 cameras at maximum; that already covers most use cases, such as 360-degree photography and surveillance, and adding more cameras would degrade camera performance.
The previous model works with Raspbian 9.8 and earlier, and does NOT work with Raspbian 9.9 and later, so this model was released to address that issue.
Please note that Raspberry Pi multi-camera adapter board is a nascent product that may have some stability issues and limitations because of the cable’s signal integrity and RPi's closed source video core libraries, so use it at your own risk.

Features
Accommodate 4 Raspberry Pi cameras on a single RPi board
Support 5MP OV5647 or 8MP IMX219 camera, no mixing allowed
3 GPIOs required for multiplexing
Cameras work sequentially, not simultaneously
Low resolution, low frame rate video surveillance demo with 4 cameras
High resolution still image photography demo
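A hedged sketch of how the "3 GPIOs required for multiplexing" get used: the adapter routes the single CSI port through an I2C-controlled switch plus select lines. The I2C address, register value, and GPIO pins below are placeholders for illustration only; use the values from Arducam's own example code.

# Hypothetical camera-selection sketch for the multi-camera adapter board.
# MUX_ADDR, the register value, and SEL_PINS are PLACEHOLDERS, not Arducam's
# documented values; only the overall pattern (I2C mux + 3 select GPIOs) is real.
import smbus
import RPi.GPIO as GPIO

I2C_BUS = 1
MUX_ADDR = 0x70                      # placeholder I2C switch address
SEL_PINS = [4, 17, 18]               # placeholder select GPIOs (BCM numbering)

bus = smbus.SMBus(I2C_BUS)
GPIO.setmode(GPIO.BCM)
GPIO.setup(SEL_PINS, GPIO.OUT)

def select_camera(mux_value, levels):
    """Route the CSI port to one camera, then capture with raspistill/picamera."""
    bus.write_byte_data(MUX_ADDR, 0x00, mux_value)
    for pin, level in zip(SEL_PINS, levels):
        GPIO.output(pin, level)

select_camera(0x04, (0, 0, 1))       # placeholder settings for "camera A"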




https://www.arducam.com/product-category/cameras-for-raspberrypi/raspberry-pi-camera-multi-cam-adapter-stereo/

https://www.arducam.com/


This is even better for 3D Stereo vision.
https://www.arducam.com/raspberry-pi-stereo-camera-hat-arducam/

What is OSVR?

https://en.wikipedia.org/wiki/Open_Source_Virtual_Reality

https://osvr.github.io/


OSVR presentation at Boston VR meetup Jul 2015

OSVR Software Framework - Core - April 2015



 OSVR is an open-source software platform for virtual and augmented reality. It allows discovery, configuration and operation of hundreds of VR/AR devices and peripherals. OSVR supports multiple game engines, and operating systems and provides services such as asynchronous time warp and direct mode in support of low-latency rendering. OSVR software is provided free under Apache 2.0 license and is maintained by Sensics.


Looks like this all started in the 2016 VR boom. Sensics, founded in the late 1990s for VR development, appears to be gone: the website went down in the middle of 2019, and their domain went dead and has since been hijacked. Yuval Boger, the CEO, left in February 2018.

https://www.reddit.com/r/OSVR/comments/bgby9y/what_happened_to_osvr_read_me_if_youre_new/

Sometime quite a while ago (early 2018 maybe?) I was informed in a personal email conversation with a Razer employee that Razer’s team was no longer focusing on OSVR, and instead had directed their efforts to supporting OpenXR.
More recently, I was directed to this tweet by former Sensics employee JeroMiya, which seems to indicate that Sensics, the other major OSVR partner, has dissolved.



What is OSVR?
OSVR or Open Source Virtual Reality is a software platform that allows different types of VR technologies to reside on the same ecosystem. This means end to end compatibility across different brands and devices, allowing you to use any OSVR supported HMDs, controllers and other technologies together, giving you the ability to customize the combination of hardware you’d like to use.

Think of OSVR as a software that allows you to customize your VR rig the same way you can customize your PC. When buying a PC it doesn’t matter what brand of monitor, printer, keyboard, graphics card or CPU you want to use – they all work together, allowing you to get a truly customized experience.

This is what OSVR is driving for the VR industry and to date it is the world’s most supported open VR ecosystem. It puts the power of choice in your hands.



NOLO Instructions: Use NOLO with OCULUS DK2 to play SteamVR
https://www.youtube.com/watch?v=qgL7NHixIX8

https://www.nolovr.com/ocdk2

StereoPi: Stereo Camera Vision Board for the Raspberry Pi Compute Module.



[Photo: wide-angle Waveshare camera pair]

Front view: [photo in the original post]
Top view: [photo in the original post]
Dimensions: 90x40 mm
Camera: 2 x CSI 15 lanes cable
GPIO: 40 classic Raspberry PI GPIO
USB: 2 x USB type A, 1 USB on a pins
Ethernet: RJ45
Storage: Micro SD (for CM3 Lite)
Monitor: HDMI out
Power: 5V DC
Supported Raspberry Pi: Raspberry Pi Compute Module 3, Raspberry Pi CM 3 Lite, Raspberry Pi CM 1
Supported cameras: Raspberry Pi camera OV5647, Raspberry Pi camera Sony IMX219, HDMI In (single mode)
Firmware update: MicroUSB connector
Power switch: Yes! No more connect-disconnect MicroUSB cable for power reboot!
Status: we have fully tested ready-to-production samples
That’s all that I wanted to cover today. If you have any questions I will be glad to answer.
Project website is http://stereopi.com

https://www.crowdsupply.com/virt2real/stereopi




------

StereoPi - companion computer on Raspberry Pi with stereo video support
https://diydrones.com/profiles/blogs/stereopi-companion-computer-on-raspberry-pi-with-stereo-video-1


 StereoPi is a carrier board for the Raspberry Pi Compute Module 3 for playing with stereo video and OpenCV. It could be interesting for those who study computer vision or make drones and robots (3D FPV).
It works with stock Raspbian; you only need to put a dt-blob.bin file in the boot partition to enable the second camera. This means you can use raspivid, raspistill, and other traditional tools for working with pictures and video.
FYI, stereo mode has been supported in Raspbian since 2014; you can read the implementation story on the Raspberry Pi forum.
Before diving into the technical details let me show you some real work examples.
1. Capture image:
raspistill -3d sbs -w 1280 -h 480 -o 1.jpg
and you get this:
[example side-by-side capture from the original post]
You can download original file here.
2. Capture video:

raspivid -3d sbs -w 1280 -h 480 -o 1.h264

and you get this:
[animated GIF of the captured stereo video]

You can download original captured video fragment (converted to mp4) here.
3. Using Python and OpenCV you can experiment with depth map:
[screenshot: depth map experiment]
For this example I used slightly modified code from my previous project 3Dberry (https://github.com/realizator/3dberry-turorial)
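A minimal version of that depth-map idea (not the 3Dberry code itself) is to split the side-by-side capture from step 1 and run OpenCV's block matcher; for good results you would calibrate and rectify the pair first.

import cv2

# Split the side-by-side (sbs) capture into left/right halves, then compute a
# rough disparity map with OpenCV's block matcher. Calibration/rectification
# is skipped here, so this is only a quick experiment, not a tuned pipeline.
pair = cv2.imread("1.jpg", cv2.IMREAD_GRAYSCALE)      # file from the raspistill example
h, w = pair.shape
left, right = pair[:, : w // 2], pair[:, w // 2 :]

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)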
I used this pair of cameras for taking the pictures in examples above:
[Photo: the Waveshare camera pair used for these examples]



SLP (StereoPi Livestream Playground) Raspbian Image

https://wiki.stereopi.com/index.php?title=SLP_(StereoPi_Livestream_Playground)_Raspbian_Image#SLP_Admin_panel_options



https://github.com/realizator

https://github.com/realizator/StereoVision

https://github.com/realizator/stereopi-fisheye-robot

https://github.com/search?p=3&q=stereopi&type=Repositories

360 Deg Green Screen Studio

Building a 360 Deg Green Screen Studio

Tuesday, August 11, 2020

ShaderToy - OpenGL GL/ES Programming




ShaderToy is one of several websites that let you code interactively with GLSL ES shaders, which run directly on the GPU in real time.

GLSL ES is very C-like, but it's not C. It's intended to run in parallel on the GPU.

https://www.shadertoy.com/browse




https://en.wikipedia.org/wiki/OpenGL_ES
OpenGL for Embedded Systems (OpenGL ES or GLES) is a subset[2] of the OpenGL computer graphics rendering application programming interface (API) for rendering 2D and 3D computer graphics such as those used by video games, typically hardware-accelerated using a graphics processing unit (GPU). It is designed for embedded systems like smartphones, tablet computers, video game consoles and PDAs. OpenGL ES is the "most widely deployed 3D graphics API in history".[3]

 Almost all rendering features of the transform and lighting stage, such as the specification of materials and light parameters formerly specified by the fixed-function API, are replaced by shaders written by the graphics programmer.



https://shadertoyunofficial.wordpress.com/



The principles of painting with maths  https://www.youtube.com/watch?v=0ifChJ0nJfM