Thursday, August 27, 2020

Immersive Light Field Video with a Layered Mesh Representation

Friday, August 21, 2020

Glenn Weiss live-switched the entire DNC convention from his house.


Alex Lightman on Facebook Said:

How to make yourself highly employable 1 of 100

This is Glenn Weiss. He live-switched the entire DNC convention from his house.

Note to self: learn to live-switch. Seems like a useful and highly scalable skill.

I have the video equipment. Now I just need a convention or something with important people saying impressive things in inspiring ways and I’m gtg.





Wednesday, August 19, 2020

PJSIP is an Open Source Embedded SIP protocol stack written in C.

PJSIP

PJSIP is an Open Source Embedded SIP protocol stack written in C.
The development of PJSIP is mainly focused on having a small footprint, modular, and very portable SIP stack for embedded development purposes (although it’s perfectly good for Win32/Linux/MacOS as well). Some of the characteristics of PJSIP:
  • it is built on top of PJLIB, and since PJLIB is a very portable library, PJSIP can basically run on any platform where PJLIB has been ported (including platforms that are normally hard to target, such as Symbian and some custom OSes),
  • it has quite a small footprint, although it's probably not the smallest SIP stack on the planet (the smallest SIP stack would be one that does nothing!),
  • it is quite customizable and modular, meaning that features that are not needed won't get linked into the executable,
  • it has pretty good performance (thousands of calls per second), and
  • it has quite a lot of SIP features.
A high-level SIP multimedia user agent API is available for both the C and Python languages.
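Since SIP is a plain-text protocol, here's a rough feel for what a stack like PJSIP builds and parses under the hood: a minimal REGISTER request sketched in plain Python (no PJSIP required; the host names, tags, and Call-ID are made-up example values).

```python
# Build a minimal SIP REGISTER request by hand, just to show the
# plain-text format that a stack like PJSIP constructs and parses.
# Host names, tags, and the Call-ID are made-up example values.

def build_register(user, domain, cseq=1):
    lines = [
        f"REGISTER sip:{domain} SIP/2.0",
        f"Via: SIP/2.0/UDP client.{domain};branch=z9hG4bK776asdhds",
        f"From: <sip:{user}@{domain}>;tag=49583",
        f"To: <sip:{user}@{domain}>",
        f"Call-ID: 843817637684230@client.{domain}",
        f"CSeq: {cseq} REGISTER",
        f"Contact: <sip:{user}@client.{domain}>",
        "Expires: 3600",
        "Content-Length: 0",
    ]
    # SIP uses CRLF line endings and a blank line to end the headers.
    return "\r\n".join(lines) + "\r\n\r\n"

msg = build_register("alice", "example.com")
print(msg.splitlines()[0])  # REGISTER sip:example.com SIP/2.0
```

The high-level user agent API mentioned above hides all of this behind account and registration objects, so you never format these messages yourself.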

Links



https://www.pjsip.org/

https://github.com/pjsip/pjproject


New: Video codec VP8 & VP9!

PJSIP is a free and open source multimedia communication library written in the C language, implementing standards-based protocols such as SIP, SDP, RTP, STUN, TURN, and ICE. It combines the signaling protocol (SIP) with a rich multimedia framework and NAT traversal functionality into a high-level API that is portable and suitable for almost any type of system, from desktops and embedded systems to mobile handsets.

PJSIP is both compact and feature rich. It supports audio, video, presence, and instant messaging, and has extensive documentation. PJSIP is very portable. On mobile devices, it abstracts system dependent features and in many cases is able to utilize the native multimedia capabilities of the device.

Learning VoIP, RTP and SIP (aka awesome pjsip) @ Medium.com


https://stackoverflow.com/questions/tagged/pjsip


This is interesting
Debugging SIP message traffic with PJSIP History

Bus Passenger Counting System


The people counting function can be integrated into a mobile DVR or into cameras.

You can choose whichever solution you like.
If you're interested in the system, please let me know. My Skype: levy-he. My email: levy.he@jasanwit.com

Wednesday, August 12, 2020

Sony Exmor IMX219

  • Sony Exmor IMX219 Sensor Capable of 4K30 1080P60 720P180 8MP Still
  • 3280 (H) x 2464 (V) Active Pixel Count
  • Maximum of 1080P30 video and 8MP stills on a Raspberry Pi board
  • 2A Power Supply Highly Recommended


The specs on the camera are impressive, and it takes decent pictures.

The field of view is rather limited: 62.2 x 48.8 degrees.

But the aperture is so small that it's practically a pinhole camera, which can make it useful for a number of applications.
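That angle of view follows from simple pinhole geometry: the IMX219's 1.12 µm pixels over the 3280 x 2464 active array give a roughly 3.67 x 2.76 mm sensing area, and the Pi Camera v2 lens has a 3.04 mm focal length. A quick sanity check:

```python
import math

# Pinhole-camera angle of view: fov = 2 * atan(sensor_size / (2 * f))
pixel_um = 1.12            # IMX219 pixel pitch in microns
w_px, h_px = 3280, 2464    # active pixel array
f_mm = 3.04                # Pi Camera v2 lens focal length

w_mm = w_px * pixel_um / 1000.0   # ~3.67 mm sensor width
h_mm = h_px * pixel_um / 1000.0   # ~2.76 mm sensor height

h_fov = math.degrees(2 * math.atan(w_mm / (2 * f_mm)))
v_fov = math.degrees(2 * math.atan(h_mm / (2 * f_mm)))
print(f"{h_fov:.1f} x {v_fov:.1f} degrees")  # close to the quoted 62.2 x 48.8
```

So the narrow field of view is baked into the lens choice, not the sensor.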





Arducam Multi Camera Adapter Module for Raspberry Pi - Stereo 3D cameras


Arducam Multi Camera Adapter Module V2.1 for Raspberry Pi 4 B, 3B+, Pi 3, Pi 2, Model A/B/B+, Work with 5MP or 8MP Cameras

$49.99 on Amazon


  • Supports Raspberry Pi Model A/B/B+, Pi 2, Pi 3/3B+, and Pi 4.
  • Accommodates up to four 5MP or 8MP Raspberry Pi cameras on one multi camera adapter board.
  • All camera ports are FFC (flexible flat cable) connectors. Demo: youtu.be/DRIeM5uMy0I
  • Cameras work sequentially, not simultaneously.
  • Note: no mixing of 5MP and 8MP cameras is allowed.
  • Demos: high resolution still image photography; low resolution, low frame rate video surveillance with 4 cameras.






Product description
Compared to the previous multi-camera adapter module, which only supported 5MP RPi cameras, the new multi-camera adapter module V2.1 is designed to connect a maximum of four 5MP or 8MP cameras to a single CSI camera port on a Raspberry Pi board. Because the high-speed CSI camera MIPI signal integrity is sensitive to long cable connections, this adapter board does not support stacking and can connect at most 4 cameras. This covers most use cases, such as 360-degree view photography and surveillance; adding more cameras would degrade camera performance.
The previous model works with Raspbian 9.8 and earlier, and does NOT work with Raspbian 9.9 and onward, so this model was released to tackle that issue.
Please note that the Raspberry Pi multi-camera adapter board is a nascent product that may have some stability issues and limitations because of the cable's signal integrity and RPi's closed-source VideoCore libraries, so use it at your own risk.

Features
Accommodates 4 Raspberry Pi cameras on a single RPi board
Supports 5MP OV5647 or 8MP IMX219 cameras, no mixing allowed
3 GPIOs required for multiplexing
Cameras work sequentially, not simultaneously
Low resolution, low frame rate video surveillance demo with 4 cameras
High resolution still image photography demo
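The sequential switching above boils down to driving a few select/enable lines before each capture. A sketch of the idea (the pin-state table here is hypothetical, not Arducam's actual mapping; check their documentation for the real pins):

```python
# Hypothetical 3-GPIO camera multiplexer: 2 select lines + 1 enable line.
# Real boards define their own pin mapping; this only illustrates the
# sequential-selection idea behind "3 GPIOs required for multiplexing".

SELECT_TABLE = {
    0: (0, 0, 1),  # (sel_a, sel_b, enable) routes camera A to the CSI port
    1: (1, 0, 1),  # camera B
    2: (0, 1, 1),  # camera C
    3: (1, 1, 1),  # camera D
}

def select_camera(index):
    """Return the GPIO levels that would route camera `index` to the CSI port."""
    if index not in SELECT_TABLE:
        raise ValueError("this adapter supports at most 4 cameras")
    return SELECT_TABLE[index]

# Capture from each camera in turn -- they cannot run simultaneously.
for cam in range(4):
    sel_a, sel_b, enable = select_camera(cam)
    # Here you would drive the GPIOs, wait for the mux to settle,
    # then run raspistill to grab a frame from the selected camera.
    print(f"camera {cam}: sel_a={sel_a} sel_b={sel_b} enable={enable}")
```

This is also why the board can only do low frame rate video across all four cameras: every frame requires switching the mux and re-selecting a camera.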




https://www.arducam.com/product-category/cameras-for-raspberrypi/raspberry-pi-camera-multi-cam-adapter-stereo/

https://www.arducam.com/


This is even better for 3D Stereo vision.
https://www.arducam.com/raspberry-pi-stereo-camera-hat-arducam/

What is OSVR?

https://en.wikipedia.org/wiki/Open_Source_Virtual_Reality

https://osvr.github.io/


OSVR presentation at Boston VR meetup Jul 2015

OSVR Software Framework - Core - April 2015



OSVR is an open-source software platform for virtual and augmented reality. It allows discovery, configuration and operation of hundreds of VR/AR devices and peripherals. OSVR supports multiple game engines and operating systems, and provides services such as asynchronous time warp and direct mode in support of low-latency rendering. OSVR software is provided free under the Apache 2.0 license and is maintained by Sensics.


Looks like this all started in the 2016 VR boom. Sensics, founded in the late 1990s for VR development, appears to be gone: the website went down in mid-2019, their domain lapsed, and it has since been hijacked. Yuval Boger, the CEO, left in Feb 2018.

https://www.reddit.com/r/OSVR/comments/bgby9y/what_happened_to_osvr_read_me_if_youre_new/

Sometime quite a while ago (early 2018 maybe?) I was informed in a personal email conversation with a Razer employee that Razer’s team was no longer focusing on OSVR, and instead had directed their efforts to supporting OpenXR.
More recently, I was directed to this tweet by former Sensics employee JeroMiya, which seems to indicate that Sensics, the other major OSVR partner, has dissolved.



What is OSVR?
OSVR, or Open Source Virtual Reality, is a software platform that allows different types of VR technologies to reside in the same ecosystem. This means end-to-end compatibility across different brands and devices, allowing you to use any OSVR-supported HMDs, controllers and other technologies together and to customize the combination of hardware you'd like to use.

Think of OSVR as a software that allows you to customize your VR rig the same way you can customize your PC. When buying a PC it doesn’t matter what brand of monitor, printer, keyboard, graphics card or CPU you want to use – they all work together, allowing you to get a truly customized experience.

This is what OSVR is driving for the VR industry and to date it is the world’s most supported open VR ecosystem. It puts the power of choice in your hands.



NOLO Instructions: Use NOLO with OCULUS DK2 to play SteamVR
https://www.youtube.com/watch?v=qgL7NHixIX8

https://www.nolovr.com/ocdk2

StereoPi : Stereo Camera Vision Board for the PI Zero Modules.



[image: Waveshare wide-angle camera pair]

Front view: [image]
Top view: [image]
Dimensions: 90x40 mm
Camera: 2 x CSI 15 lanes cable
GPIO: 40 classic Raspberry PI GPIO
USB: 2 x USB type A, 1 USB on a pins
Ethernet: RJ45
Storage: Micro SD (for CM3 Lite)
Monitor: HDMI out
Power: 5V DC
Supported Raspberry Pi: Raspberry Pi Compute Module 3, Raspberry Pi CM 3 Lite, Raspberry Pi CM 1
Supported cameras: Raspberry Pi camera OV5647, Raspberry Pi camera Sony IMX219, HDMI In (single mode)
Firmware update: MicroUSB connector
Power switch: Yes! No more connecting and disconnecting the MicroUSB cable to power-cycle!
Status: we have fully tested ready-to-production samples
That’s all that I wanted to cover today. If you have any questions I will be glad to answer.
Project website is http://stereopi.com

https://www.crowdsupply.com/virt2real/stereopi




------

StereoPi - companion computer on Raspberry Pi with stereo video support
https://diydrones.com/profiles/blogs/stereopi-companion-computer-on-raspberry-pi-with-stereo-video-1


StereoPi is a carrier board for the Raspberry Pi Compute Module 3 for playing with stereo video and OpenCV. It could be interesting for those who study computer vision or build drones and robots (3D FPV).
It works with stock Raspbian; you only need to put a dt-blob.bin file on the boot partition to enable the second camera. That means you can use raspivid, raspistill and other traditional tools to work with pictures and video.
JFYI, stereo mode has been supported in Raspbian since 2014; you can read the implementation story on the Raspberry Pi forum.
Before diving into the technical details let me show you some real work examples.
1. Capture image:
raspistill -3d sbs -w 1280 -h 480 -o 1.jpg
and you get this:
[image: captured side-by-side photo, 1280 x 480]
You can download original file here.
2. Capture video:

raspivid -3d sbs -w 1280 -h 480 -o 1.h264

and you get this:
[image: captured video fragment]

You can download original captured video fragment (converted to mp4) here.
3. Using Python and OpenCV you can experiment with a depth map:
[image: depth map screenshot]
For this example I used slightly modified code from my previous project 3Dberry (https://github.com/realizator/3dberry-turorial)
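Since the side-by-side capture is just one wide frame with the left eye in the left half, splitting it for a stereo matcher is a slice operation. A minimal NumPy sketch (the actual disparity computation, e.g. OpenCV's StereoBM, needs calibrated and rectified images, so it's left out here):

```python
import numpy as np

def split_sbs(frame):
    """Split a side-by-side stereo frame into (left, right) views."""
    h, w = frame.shape[:2]
    half = w // 2
    return frame[:, :half], frame[:, half:]

# Stand-in for a 1280x480 grayscale frame, as captured with
# `raspistill -3d sbs -w 1280 -h 480`.
sbs = np.zeros((480, 1280), dtype=np.uint8)
left, right = split_sbs(sbs)
print(left.shape, right.shape)  # (480, 640) (480, 640)
```

From there, something like `cv2.StereoBM_create().compute(left, right)` on rectified views produces the disparity map shown in the screenshot.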
I used this pair of cameras for taking the pictures in examples above:
[image: Waveshare camera pair]



SLP (StereoPi Livestream Playground) Raspbian Image

https://wiki.stereopi.com/index.php?title=SLP_(StereoPi_Livestream_Playground)_Raspbian_Image#SLP_Admin_panel_options



https://github.com/realizator

https://github.com/realizator/StereoVision

https://github.com/realizator/stereopi-fisheye-robot

https://github.com/search?p=3&q=stereopi&type=Repositories

360 Deg Green Screen Studio

Building a 360 Deg Green Screen Studio

Tuesday, August 11, 2020

ShaderToy - OpenGL GL/ES Programming




Shader Toy is one of several websites that let you code interactively with GLSL ES shaders, which run directly on the GPU in real time.

GLSL ES is very C-like, but it's not C. It's intended to run parallelized on the GPU.
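The mental model for these shaders is one small function evaluated independently for every pixel. ShaderToy itself is GLSL, but the contract can be mimicked in plain Python to show the idea:

```python
import math

def main_image(u, v, t=0.0):
    """Toy 'shader': given normalized pixel coords (u, v) in [0, 1],
    return an (r, g, b) color. Same contract as ShaderToy's mainImage."""
    r = 0.5 + 0.5 * math.cos(t + u * 6.2831)
    g = 0.5 + 0.5 * math.cos(t + v * 6.2831 + 2.0)
    b = 0.5 + 0.5 * math.cos(t + (u + v) * 6.2831 + 4.0)
    return (r, g, b)

# A GPU evaluates this for every pixel in parallel; here we just loop
# over a tiny 4x2 "screen".
width, height = 4, 2
image = [[main_image(x / width, y / height) for x in range(width)]
         for y in range(height)]
print(len(image), len(image[0]))  # 2 4
```

Because each pixel is computed with no knowledge of its neighbours, the GPU can run millions of these evaluations at once, which is what makes the real-time demos possible.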

https://www.shadertoy.com/browse




https://en.wikipedia.org/wiki/OpenGL_ES
OpenGL for Embedded Systems (OpenGL ES or GLES) is a subset[2] of the OpenGL computer graphics rendering application programming interface (API) for rendering 2D and 3D computer graphics such as those used by video games, typically hardware-accelerated using a graphics processing unit (GPU). It is designed for embedded systems like smartphones, tablet computers, video game consoles and PDAs. OpenGL ES is the "most widely deployed 3D graphics API in history".[3]

 Almost all rendering features of the transform and lighting stage, such as the specification of materials and light parameters formerly specified by the fixed-function API, are replaced by shaders written by the graphics programmer.



https://shadertoyunofficial.wordpress.com/



The principles of painting with maths  https://www.youtube.com/watch?v=0ifChJ0nJfM



Monday, August 10, 2020

VR180 Cameras

Just reposting this 2018 article as a reminder to myself for further research.

---------------------------
Lenovo and Yi have both launched 180-degree VR cameras that capture 3D videos and photos in 4K for use on Google’s Daydream platform. Google said that it is also partnering with other manufacturers including LG and Panasonic to bring other 180-degree cameras to the market, with different models sporting different features.
Google and its partners are not the first companies to tout 180-degree VR video. Californian start-up LucidCam brought its VR180 camera to market in mid-2017.
The Lenovo Mirage Camera and YI Technology’s YI Horizon VR180 Camera will hit shelves in the second quarter, and a camera from LG will be coming later this year. For professional creators, the Z Cam K1 Pro recently launched, and Panasonic is building VR180 support for its just-announced GH5 cameras with a new add-on.

VR180 cameras are designed to let users capture special moments in crisp 3D VR, preserving memories in stunning immersive detail. By capturing the action in 180 degrees as opposed to 360 degrees, these devices put the shooter in control of what footage is captured, while also lightening the load by not recording unnecessary content.
Footage captured on VR180 cameras can be viewed and shared in either 2D or 3D. A VR headset, including even a Google Cardboard headset, is all that’s required to experience footage in VR.
Lenovo Mirage Camera is just over 4″ long and 2″ tall, and weighs less than 5 ounces. The device has dual 13MP cameras for 3D, or stereoscopic, recording. Lenovo launched the device at CES along with its new VR headset.
Yi Horizon uses an Ambarella H2V95 chipset that supports 5.7K video and photo capture. With the accompanying YI 360 App, users can easily share their 3D VR videos. The integrated 2.2" retina touchscreen supports a flip design.

JPEG XR (JPEG extended range)

https://en.wikipedia.org/wiki/JPEG_XR

JPEG XR[4] (JPEG extended range[5]) is a still-image compression standard and file format for continuous tone photographic images, based on technology originally developed and patented by Microsoft under the name HD Photo (formerly Windows Media Photo).[6] It supports both lossy and lossless compression, and is the preferred image format for Ecma-388 Open XML Paper Specification documents.
Support for the format is available in Adobe Flash Player 11.0, Adobe AIR 3.0, Sumatra PDF 2.1, Windows Imaging Component, .NET Framework 3.0, Windows Vista, Windows 7, Windows 8, Internet Explorer 9, Internet Explorer 10, Internet Explorer 11, and Pale Moon 27.2.[7][8][9] As of August 2014, there were still no cameras that shoot photos in the JPEG XR (.JXR) format.

Facebook’s new 3D photos





https://techcrunch.com/2018/06/07/how-facebooks-new-3d-photos-work/

Johannes Kopf is a research scientist at Facebook's Seattle office, where its Camera and computational photography departments are based. Kopf is co-author (with University College London's Peter Hedman) of the paper describing the methods by which the depth-enhanced imagery is created; they will present it at SIGGRAPH in August.
http://visual.cs.ucl.ac.uk/pubs/instant3d/





http://stereo.jpn.org/eng/stphmkr/makedm/

How to make Facebook 3D Photo from Stereo pair

Download the latest StereoPhoto Maker (ver 5.29b or later).

I use the DMAG (Depth Map Automatic Generator) 64-bit software to create the depth map from a stereo pair.
Great thanks to Ugo Capeto 3D, who made the DMAG software!
Attention: DMAG works on 64-bit Windows only!
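DMAG and tools like it rest on the same stereo relation: for a rectified pair, depth is inversely proportional to disparity (depth = focal length in pixels x baseline / disparity). A quick worked example with made-up calibration numbers:

```python
# Depth from disparity for a rectified stereo pair:
#     depth = focal_length_in_pixels * baseline / disparity
# The calibration values below are made-up illustrative numbers.
f_px = 1400.0        # focal length in pixels (from stereo calibration)
baseline_m = 0.065   # distance between the two lenses, in meters
disparity_px = 40.0  # pixel shift of a feature between left and right views

depth_m = f_px * baseline_m / disparity_px
print(f"{depth_m:.2f} m")  # a 40 px shift puts the feature roughly 2.3 m away
```

This is also why nearby objects are easier to estimate than distant ones: disparity shrinks toward zero with distance, so small matching errors blow up the depth estimate far away.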

3D Stereoscopic Photography

http://3dstereophoto.blogspot.com/


the3dconverter2

Download Here:
http://3dstereophoto.blogspot.com/2019/11/2d-to-3d-image-conversion-software-3d.html




wigglemaker.

DMAG5


I haven't tried it yet; if you do, let me know.


Extracting Depth Map Photos for the Looking Glass

https://blog.lookingglassfactory.com/process/extracting-depth-map-photos-for-the-looking-glass/

If you're unaware of what Looking Glass is, it's truly amazing:
a lifelike 3D volumetric display, done using some sort of holographic or lenticular layer in front of a 2D display. It has a solid 2-inch-thick acrylic brick that holds the optics together in a super solid fashion. Anyhow, the end result is amazing.






RGB and depth map images


The article goes into how to extract and manipulate a depth map using exiftool.

https://exiftool.org/

ExifTool by Phil Harvey

Read, Write and Edit Meta Information!
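For context on what exiftool is digging out: depth-enabled JPEGs in Google's format embed the depth map as a base64 payload in XMP metadata (the GDepth namespace). The sketch below shows that structure on a synthetic byte string rather than a real file; real photos split the payload across Extended XMP segments, which exiftool handles for you.

```python
import base64
import re

# A synthetic stand-in for the XMP block inside a depth-enabled JPEG.
# Real files carry a much larger base64 payload split across segments.
fake_depth_png = base64.b64encode(b"\x89PNG fake depth bytes").decode()
xmp = (f'<x:xmpmeta><rdf:Description GDepth:Format="RangeInverse" '
       f'GDepth:Data="{fake_depth_png}"/></x:xmpmeta>').encode()

def extract_depth(blob):
    """Find the GDepth:Data attribute and decode its base64 payload."""
    m = re.search(rb'GDepth:Data="([^"]+)"', blob)
    return base64.b64decode(m.group(1)) if m else None

depth = extract_depth(xmp)
print(depth[:4])  # b'\x89PNG' -- the payload is itself a PNG image
```

With exiftool the equivalent is a one-liner that dumps the binary GDepth:Data tag straight to a PNG file.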


Seems there is a magic undocumented command that might only run on macOS:
-MPImage2




StereoPhoto Maker’s Looking Glass Web Tools site.


http://stereo.jpn.org/lkg/depth/depthe.html  
This is cool; it seems the web page can detect and render directly to the Looking Glass.