Monday, December 21, 2020
Computational Imaging and Microscopy
Tuesday, December 15, 2020
Transmit a video stream to a PAL analog TV using low-frequency PWM
Uses an STM32F411 with a slow 6.86MHz PWM output to generate a modulated transmit signal; the 9th harmonic at 61.71MHz is picked up on channel 3 of the TV.
https://hackaday.io/project/171977-pal-streamer
https://github.com/zst123/PAL-Streamer
In a similar earlier project, an AVR ATtiny85 was used to generate PWM waveforms that a TV picked up directly.
https://hackaday.io/project/4348-attiny85-does-ntsc-over-vhf
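Both projects exploit the fact that a square wave at frequency f carries energy at its odd harmonics (3f, 5f, 7f, ...), so PWM near 6.857 MHz radiates a usable 9th harmonic at roughly 61.71 MHz. Below is a minimal C sketch of such a timer setup, assuming a 96 MHz timer clock, TIM1 channel 1, and standard CMSIS register names; it is an illustration of the technique, not the project's actual code.

#include "stm32f4xx.h"  /* CMSIS device header, assumed available */

/* Generate ~6.857 MHz PWM on TIM1 CH1. With a 96 MHz timer clock,
 * a 14-tick period gives 96 MHz / 14 = 6.857 MHz, whose 9th harmonic
 * sits near 61.71 MHz. GPIO alternate-function setup is omitted. */
void pwm_carrier_init(void)
{
    RCC->APB2ENR |= RCC_APB2ENR_TIM1EN;                 /* clock the timer */

    TIM1->PSC    = 0;                                   /* count at full timer clock */
    TIM1->ARR    = 14 - 1;                              /* 14-tick period -> 6.857 MHz */
    TIM1->CCR1   = 7;                                   /* ~50% duty; vary to modulate */
    TIM1->CCMR1 |= TIM_CCMR1_OC1M_2 | TIM_CCMR1_OC1M_1; /* PWM mode 1 */
    TIM1->CCER  |= TIM_CCER_CC1E;                       /* enable CH1 output */
    TIM1->BDTR  |= TIM_BDTR_MOE;                        /* main output enable (TIM1 is advanced) */
    TIM1->CR1   |= TIM_CR1_CEN;                         /* start the counter */
}

Video itself is then encoded by modulating the PWM (e.g., its duty cycle), which modulates the harmonic's amplitude; see the project write-ups for the PAL/NTSC timing details.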
Thursday, December 10, 2020
A camera that can look inside the keyhole to read the key pattern!
The LockTech LTKS Kwikset Decoder is a Wi-Fi-enabled digital scope that, when used with a compatible iOS or Android smartphone, makes decoding these locks ridiculously easy and fast!
Features:
- Decodes all current SmartKey locks (GEN 1, 2, 3, & 4) and SmartKey Control Key cylinders as well.
- A real glass mirror for the clearest image possible.
- Internal LED eliminates glare off the front of the lock.
- Position Alignment Spacers eliminate the guesswork of where you're looking in the lock and of locating individual wafers/pins during the decoding process.
- LED dimmer allows the user to increase or decrease the brightness inside of the lock.
- Live Video Display Feed, SnapShot Mode, or Video Mode.
- Rechargeable battery
- Magnetic Protective Storage Cap
- Spacers, Protective Cap, and Laminated Depth Chart are tethered for convenience.
System requirement:
Android 4.2 or later, iOS 8.0 or later
Monday, December 07, 2020
Saturday, December 05, 2020
Thursday, October 29, 2020
Watch "The First Live Light Field Videocamera | SEBI" on YouTube
Wednesday, October 28, 2020
Monday, October 26, 2020
Friday, October 23, 2020
Thursday, October 22, 2020
Wednesday, September 23, 2020
Facebook: Video@Scale 2020
https://videoscale2020.splashthat.com/
OCTOBER 22, 2020
10:00AM – 1:00PM
YOU'RE INVITED TO VIDEO @SCALE REMOTE EDITION
BUILDING DISTRIBUTED VIDEO SYSTEMS
Video @Scale is an invitation-only technical conference for engineers who develop or manage large-scale video systems serving millions of people. The development of large-scale video systems involves complex, unprecedented engineering challenges. The @Scale community focuses on bringing people together to discuss these challenges and collaborate on the development of new solutions.
This year, we will be hosting our Video @Scale event virtually.
AGENDA
SESSION #1
VIDEO QUALITY
10:00 AM - 11:00 AM PST
KEYNOTE
Rajeev Rajan
VIDEO CODING STANDARDIZATION
Ioannis Katsavounidis
VIDEO ENCODING PARAMETER SELECTION WITH HYBRID SOFTWARE/HARDWARE APPROACH
Nick Wu
VMAF
Zhi Li | Netflix
SESSION #2
SCALABILITY & RELIABILITY
11:00 AM - 12:00 PM PST
YET ANOTHER LIVE VIDEO DELIVERY ARCHITECTURE
Kirill Pugin
SCALING I/O TO MILLIONS OF VIDEOS
David Zhang
PROVIDING BETTER VIDEO EXPERIENCE FOR THE NEXT BILLION USERS
Denise Noyes
BYTE RANGE ADDRESSING WITH LL-HLS
Will Law | Akamai
SESSION #3
PANEL + Q&A
12:00 PM - 1:00 PM PST
VIDEO TRENDS DURING COVID-19
Jaron Schaeffer | Google
Connie Goshgarian | AT&T
Li-Tal Mashiach | Facebook
Tremain Wheatley | Facebook (Moderator)
Tuesday, September 22, 2020
Engineers produce a fisheye lens that’s completely flat
The single piece of glass produces crisp panoramic images.
https://news.mit.edu/2020/flat-fisheye-lens-0918
To capture panoramic views in a single shot, photographers typically use fisheye lenses — ultra-wide-angle lenses made from multiple pieces of curved glass, which distort incoming light to produce wide, bubble-like images. Their spherical, multipiece design makes fisheye lenses inherently bulky and often costly to produce.
Now engineers at MIT and the University of Massachusetts at Lowell have designed a wide-angle lens that is completely flat. It is the first flat fisheye lens to produce crisp, 180-degree panoramic images. The design is a type of “metalens,” a wafer-thin material patterned with microscopic features that work together to manipulate light in a specific way.
In this case, the new fisheye lens consists of a single flat, millimeter-thin piece of glass covered on one side with tiny structures that precisely scatter incoming light to produce panoramic images, just as a conventional curved, multielement fisheye lens assembly would. The lens works in the infrared part of the spectrum, but the researchers say it could be modified to capture images using visible light as well.
The new design could potentially be adapted for a range of applications, with thin, ultra-wide-angle lenses built directly into smartphones and laptops, rather than physically attached as bulky add-ons. The low-profile lenses might also be integrated into medical imaging devices such as endoscopes, as well as in virtual reality glasses, wearable electronics, and other computer vision devices.
“This design comes as somewhat of a surprise, because some have thought it would be impossible to make a metalens with an ultra-wide field of view,” says Juejun Hu, associate professor in MIT’s Department of Materials Science and Engineering. “The fact that this can actually realize fisheye images is completely outside expectation. This isn’t just light-bending — it’s mind-bending.”
Hu and his colleagues have published their results today in the journal Nano Letters. Hu’s MIT coauthors are Mikhail Shalaginov, Fan Yang, Peter Su, Dominika Lyzwa, Anuradha Agarwal, and Tian Gu, along with Sensong An and Hualiang Zhang of UMass Lowell.
Design on the back side
Metalenses, while still largely at an experimental stage, have the potential to significantly reshape the field of optics. Previously, scientists have designed metalenses that produce high-resolution and relatively wide-angle images of up to 60 degrees. To expand the field of view further would traditionally require additional optical components to correct for aberrations, or blurriness — a workaround that would add bulk to a metalens design.
Hu and his colleagues instead came up with a simple design that does not require additional components and keeps a minimum element count. Their new metalens is a single transparent piece made from calcium fluoride with a thin film of lead telluride deposited on one side. The team then used lithographic techniques to carve a pattern of optical structures into the film.
Each structure, or “meta-atom,” as the team refers to them, is shaped into one of several nanoscale geometries, such as a rectangular or a bone-shaped configuration, that refracts light in a specific way. For instance, light may take longer to scatter, or propagate off one shape versus another — a phenomenon known as phase delay.
In conventional fisheye lenses, the curvature of the glass naturally creates a distribution of phase delays that ultimately produces a panoramic image. The team determined the corresponding pattern of meta-atoms and carved this pattern into the back side of the flat glass.
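For orientation, a conventional (non-fisheye) metalens that focuses normally incident light of wavelength \lambda at focal length f must impose the textbook hyperbolic phase profile

\varphi(r) = \frac{2\pi}{\lambda}\left(f - \sqrt{r^2 + f^2}\right)

at radius r from the center, so that every ray arrives at the focus in phase. The fisheye design generalizes the same idea: the back-side meta-atom pattern is chosen so that each incidence angle across the 180-degree field sees its own near-perfect focusing profile. (This is the standard metalens formula, given here for context only; it is not the specific phase function from the Nano Letters paper.)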
“We’ve designed the back side structures in such a way that each part can produce a perfect focus,” Hu says.
On the front side, the team placed an optical aperture, or opening for light.
“When light comes in through this aperture, it will refract at the first surface of the glass, and then will get angularly dispersed,” Shalaginov explains. “The light will then hit different parts of the backside, from different and yet continuous angles. As long as you design the back side properly, you can be sure to achieve high-quality imaging across the entire panoramic view.”
Across the panorama
In one demonstration, the new lens is tuned to operate in the mid-infrared region of the spectrum. The team used the imaging setup equipped with the metalens to snap pictures of a striped target. They then compared the quality of pictures taken at various angles across the scene, and found the new lens produced images of the stripes that were crisp and clear, even at the edges of the camera’s view, spanning nearly 180 degrees.
“It shows we can achieve perfect imaging performance across almost the whole 180-degree view, using our methods,” Gu says.
In another study, the team designed the metalens to operate at a near-infrared wavelength using amorphous silicon nanoposts as the meta-atoms. They plugged the metalens into a simulation used to test imaging instruments. Next, they fed the simulation a scene of Paris, composed of black and white images stitched together to make a panoramic view. They then ran the simulation to see what kind of image the new lens would produce.
“The key question was, does the lens cover the entire field of view? And we see that it captures everything across the panorama,” Gu says. “You can see buildings and people, and the resolution is very good, regardless of whether you’re looking at the center or the edges.”
The team says the new lens can be adapted to other wavelengths of light. To make a similar flat fisheye lens for visible light, for instance, Hu says the optical features may have to be made smaller than they are now, to better refract that particular range of wavelengths. The lens material would also have to change. But the general architecture that the team has designed would remain the same.
The researchers are exploring applications for their new lens, not just as compact fisheye cameras, but also as panoramic projectors, as well as depth sensors built directly into smartphones, laptops, and wearable devices.
“Currently, all 3D sensors have a limited field of view, which is why when you put your face away from your smartphone, it won’t recognize you,” Gu says. “What we have here is a new 3D sensor that enables panoramic depth profiling, which could be useful for consumer electronic devices.”
This research was funded in part by DARPA under the EXTREME Program.
Saturday, September 19, 2020
YouTube-like: GUBA.com, "Welcome to the Gigantic Usenet Binaries Archive (GUBA)", 1998.
Tuesday, September 15, 2020
Friday, September 11, 2020
Tuesday, September 01, 2020
500'000€ Prize for Compressing Human Knowledge
http://prize.hutter1.net/
The Task
Losslessly compress the 1GB file enwik9 to less than 116MB. More precisely:
- Create a Linux or Windows compressor comp.exe of size S1 that compresses enwik9 to archive.exe of size S2, such that S := S1+S2 < L := 116'673'681 = previous record.
- If run, archive.exe produces (without input from other sources) a 10^9-byte file that is identical to enwik9.
- If we can verify your claim, you are eligible for a prize of 500'000€×(1-S/L). Minimum claim is 5'000€ (1% improvement).
- Restrictions: must run in ≲100 hours using a single CPU core, with <10GB RAM and <100GB HDD, on our test machine (http://browser.primatelabs.com/v4/cpu/145066).
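As a quick sanity check of the payout formula, here is a minimal C sketch; the record size L and the euro figures come from the rules above, while the new total size S is a made-up example:

#include <stdio.h>

int main(void)
{
    const long   L = 116673681;   /* previous record: S1 + S2 in bytes */
    const long   S = 115000000;   /* hypothetical new total S = S1 + S2 */
    const double prize = 500000.0 * (1.0 - (double)S / (double)L);

    /* Claims below the 5'000 EUR minimum (a 1% improvement) are not eligible. */
    if (S < L && prize >= 5000.0)
        printf("Eligible: prize = %.0f EUR\n", prize);   /* ~7173 EUR here */
    else
        printf("Not eligible: improvement is below 1%%\n");
    return 0;
}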
Thursday, August 27, 2020
Friday, August 21, 2020
Glenn Weiss live-switched the entire DNC convention from his house.
This is Glenn Weiss. He live-switched the entire DNC convention from his house.
Note to self: learn to live-switch. Seems like a useful and highly scalable skill.
I have the video equipment. Now I just need a convention or something with important people saying impressive things in inspiring ways and I’m gtg.
Wednesday, August 19, 2020
PJSIP is an Open Source Embedded SIP protocol stack written in C.
PJSIP
- it is built on top of PJLIB, and since PJLIB is a highly portable library, PJSIP can basically run on any platform to which PJLIB has been ported (including platforms where it would normally be hard to port existing programs, such as Symbian and some custom OSes),
- it has quite a small footprint, although it's probably not the smallest SIP stack on the planet (the smallest SIP stack would be a stack that does nothing!),
- it is quite customizable and modular, meaning that features that are not needed won’t get linked into the executable,
- it has pretty good performance (thousands of calls per second), and
- it has quite a lot of SIP features.
Links
- PJSIP – SIP Stack Features
- PJSIP – Open Source SIP Stack
- PJSUA – Console Based SIP User Agent
- PJMEDIA – Small footprint media stack
- PJSIP-JNI – A Java wrapper for the PJSIP library
- Open Source VOIP Software
https://www.pjsip.org/
https://github.com/pjsip/pjproject
New: Video codec VP8 & VP9!
PJSIP is a free and open source multimedia communication library written in the C language, implementing standards-based protocols such as SIP, SDP, RTP, STUN, TURN, and ICE. It combines the SIP signaling protocol with a rich multimedia framework and NAT traversal functionality into a high-level API that is portable and suitable for almost any type of system, ranging from desktops and embedded systems to mobile handsets.
PJSIP is both compact and feature rich. It supports audio, video, presence, and instant messaging, and has extensive documentation. PJSIP is very portable. On mobile devices, it abstracts system dependent features and in many cases is able to utilize the native multimedia capabilities of the device.
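To give a feel for the high-level API, here is a minimal sketch using PJSUA-LIB (the high-level library behind the PJSUA agent listed above) that brings the stack up with a UDP transport; error handling is reduced to early returns, and account registration and call placement are left as comments:

#include <pjsua-lib/pjsua.h>

int main(void)
{
    pjsua_config           cfg;
    pjsua_logging_config   log_cfg;
    pjsua_transport_config tcfg;

    /* Create the pjsua instance before anything else. */
    if (pjsua_create() != PJ_SUCCESS) return 1;

    /* Initialize with default configurations. */
    pjsua_config_default(&cfg);
    pjsua_logging_config_default(&log_cfg);
    if (pjsua_init(&cfg, &log_cfg, NULL) != PJ_SUCCESS) return 1;

    /* Listen for SIP over UDP on the standard port. */
    pjsua_transport_config_default(&tcfg);
    tcfg.port = 5060;
    if (pjsua_transport_create(PJSIP_TRANSPORT_UDP, &tcfg, NULL) != PJ_SUCCESS) return 1;

    /* Start the stack; it can now send and receive SIP messages. */
    if (pjsua_start() != PJ_SUCCESS) return 1;

    /* From here: pjsua_acc_add() to register an account,
     * pjsua_call_make_call() to place a call. */

    pjsua_destroy();
    return 0;
}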
Learning VoIP, RTP and SIP (aka awesome pjsip) @ Medium.com
https://stackoverflow.com/questions/tagged/pjsip
This is interesting
Debugging SIP message traffic with PJSIP History
Bus Passenger Counting System
Tuesday, August 18, 2020
Thursday, August 13, 2020
Wednesday, August 12, 2020
Sony Exmor IMX219
- Sony Exmor IMX219 Sensor Capable of 4K30 1080P60 720P180 8MP Still
- 3280 (H) x 2464 (V) Active Pixel Count
- Maximum of 1080P30 and 8MP Stills in Raspberry Pi Board
- 2A Power Supply Highly Recommended
Arducam Multi Camera Adapter Module for Raspberry Pi - Stereo 3D cameras
Arducam Multi Camera Adapter Module V2.1 for Raspberry Pi 4 B, 3B+, Pi 3, Pi 2, Model A/B/B+, Work with 5MP or 8MP Cameras
$49.99 on Amazon
- Support Raspberry Pi Model A/B/B+, Pi 2 and Raspberry Pi 4, 3, 3b+.
- Accommodate up to four 5MP or 8MP Raspberry Pi cameras on a multi camera adapter board.
- All camera ports are FFC (flexible flat cable) connectors, Demo: youtu.be/DRIeM5uMy0I
- Cameras work sequentially, not simultaneously.
- Note: no mixing of 5MP and 8MP cameras is allowed.
- Demos: high-resolution still image photography, and low-resolution, low-frame-rate video surveillance with 4 cameras.
Product description
The previous model works with Raspbian 9.8 and earlier, and does NOT work with Raspbian 9.9 and later; this model was released to address that issue.
Please note that the Raspberry Pi multi-camera adapter board is a nascent product that may have stability issues and limitations because of the cable's signal integrity and the RPi's closed-source VideoCore libraries, so use it at your own risk.
Features
- Accommodates 4 Raspberry Pi cameras on a single RPi board
- Supports 5MP OV5647 or 8MP IMX219 cameras, no mixing allowed
- 3 GPIOs required for multiplexing (see the sketch after this list)
- Cameras work sequentially, not simultaneously
- Low-resolution, low-frame-rate video surveillance demo with 4 cameras
- High-resolution still image photography demo
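As a rough illustration of the sequential-camera model, here is a hedged C sketch using the wiringPi library. The pin numbers and select logic are hypothetical: the real board also needs an I2C write to its onboard multiplexer, so consult Arducam's demo code for the actual sequence.

#include <wiringPi.h>

/* Hypothetical select-line assignments (BCM numbering); the real
 * Arducam board uses different pins plus an I2C register write. */
#define SEL_A   17
#define SEL_B   27
#define SEL_EN  22

/* Route the shared CSI bus to one of the four cameras. */
static void select_camera(int cam)           /* cam = 0..3 */
{
    digitalWrite(SEL_A,  (cam & 1) ? HIGH : LOW);
    digitalWrite(SEL_B,  (cam & 2) ? HIGH : LOW);
    digitalWrite(SEL_EN, LOW);               /* enable line, assumed active-low */
}

int main(void)
{
    wiringPiSetupGpio();                     /* use BCM GPIO numbering */
    pinMode(SEL_A,  OUTPUT);
    pinMode(SEL_B,  OUTPUT);
    pinMode(SEL_EN, OUTPUT);

    for (int cam = 0; cam < 4; cam++) {
        select_camera(cam);
        delay(100);                          /* let the mux settle */
        /* capture one frame here, e.g. by invoking raspistill */
    }
    return 0;
}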
https://www.arducam.com/product-category/cameras-for-raspberrypi/raspberry-pi-camera-multi-cam-adapter-stereo/
https://www.arducam.com/
This is even better for 3D Stereo vision.
https://www.arducam.com/raspberry-pi-stereo-camera-hat-arducam/
What is OSVR?
https://osvr.github.io/
OSVR presentation at Boston VR meetup Jul 2015
OSVR Software Framework - Core - April 2015
OSVR is an open-source software platform for virtual and augmented reality. It allows discovery, configuration, and operation of hundreds of VR/AR devices and peripherals. OSVR supports multiple game engines and operating systems, and provides services such as asynchronous time warp and direct mode in support of low-latency rendering. OSVR software is provided free under the Apache 2.0 license and is maintained by Sensics.
https://www.reddit.com/r/OSVR/comments/bgby9y/what_happened_to_osvr_read_me_if_youre_new/
Sometime quite a while ago (early 2018 maybe?) I was informed in a personal email conversation with a Razer employee that Razer’s team was no longer focusing on OSVR, and instead had directed their efforts to supporting OpenXR.
More recently, I was directed to this tweet by former Sensics employee JeroMiya, which seems to indicate that Sensics, the other major OSVR partner, has dissolved.
Think of OSVR as a software that allows you to customize your VR rig the same way you can customize your PC. When buying a PC it doesn’t matter what brand of monitor, printer, keyboard, graphics card or CPU you want to use – they all work together, allowing you to get a truly customized experience.
This is what OSVR is driving for the VR industry and to date it is the world’s most supported open VR ecosystem. It puts the power of choice in your hands.
NOLO Instructions: Use NOLO with OCULUS DK2 to play SteamVR
https://www.youtube.com/watch?v=qgL7NHixIX8
https://www.nolovr.com/ocdk2
StereoPi : Stereo Camera Vision Board for the PI Zero Modules.
Camera: 2 x CSI 15 lanes cable
GPIO: 40 classic Raspberry PI GPIO
USB: 2 x USB Type-A, 1 x USB on a pin header
Ethernet: RJ45
Storage: Micro SD (for CM3 Lite)
Monitor: HDMI out
Power: 5V DC
Supported Raspberry Pi: Raspberry Pi Compute Module 3, Raspberry Pi CM 3 Lite, Raspberry Pi CM 1
Supported cameras: Raspberry Pi camera OV5647, Raspberry Pi camera Sony IMX219, HDMI In (single mode)
Firmware update: MicroUSB connector
Power switch: Yes! No more connect-disconnect MicroUSB cable for power reboot!
Status: we have fully tested, production-ready samples
https://www.crowdsupply.com/virt2real/stereopi
------
StereoPi - companion computer on Raspberry Pi with stereo video support
https://diydrones.com/profiles/blogs/stereopi-companion-computer-on-raspberry-pi-with-stereo-video-1
You can download original file here.
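# raspivid flags: -3d sbs = side-by-side stereo mode, -w/-h = frame size, -o = raw H.264 output file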
raspivid -3d sbs -w 1280 -h 480 -o 1.h264
and you get this:
You can download original captured video fragment (converted to mp4) here.
SLP (StereoPi Livestream Playground) Raspbian Image
https://github.com/realizator
https://github.com/realizator/StereoVision
https://github.com/realizator/stereopi-fisheye-robot
https://github.com/search?p=3&q=stereopi&type=Repositories
Tuesday, August 11, 2020
ShaderToy - OpenGL GL/ES Programming
Shader Toy is one of several websites that let you interactively code GLSL ES shaders, which run directly on the GPU in real time.
GLSL ES is very C-like, but it is not C; it is designed to run in parallel on the GPU.
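For a taste, here is ShaderToy's canonical starter shader. mainImage is the per-pixel entry point the site calls, and iResolution and iTime are built-in uniforms it supplies:

// Default ShaderToy shader: a time-animated color gradient.
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord / iResolution.xy;

    // Time-varying pixel color
    vec3 col = 0.5 + 0.5 * cos(iTime + uv.xyx + vec3(0.0, 2.0, 4.0));

    // Output to screen
    fragColor = vec4(col, 1.0);
}

Paste this into a new shader on the site and it compiles and runs immediately; every pixel evaluates the function in parallel on the GPU.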
https://www.shadertoy.com/browse
https://en.wikipedia.org/wiki/OpenGL_ES
OpenGL for Embedded Systems (OpenGL ES or GLES) is a subset of the OpenGL computer graphics rendering application programming interface (API) for rendering 2D and 3D computer graphics such as those used by video games, typically hardware-accelerated using a graphics processing unit (GPU). It is designed for embedded systems like smartphones, tablet computers, video game consoles and PDAs. OpenGL ES is the "most widely deployed 3D graphics API in history".
Almost all rendering features of the transform and lighting stage, such as the specification of materials and light parameters formerly specified by the fixed-function API, are replaced by shaders written by the graphics programmer.
https://shadertoyunofficial.wordpress.com/