Wednesday, October 11, 2023
Eulerian Video Magnification
Read more about it here: http://people.csail.mit.edu/mrub/vidmag/
Software and Code
Eulerian Video Magnification code
MATLAB code and executables implementing Eulerian video processing for amplifying color and motion changes.
Phase Based Video Motion Processing code
MATLAB code implementing the new and improved phase-based motion magnification pipeline.
Learning-based Video Motion Magnification code
TensorFlow implementation of the learning-based motion magnification pipeline.
Videoscope
Web interface for motion and color magnification. Upload your videos and have them magnified!
Wednesday, September 13, 2023
Sun Microsystems - Scott McNealy, First Global Internet Stream Christm...
https://github.com/johnsokol/holiday_greeting_1992
Tuesday, August 22, 2023
Was this plumber intentionally scamming this elderly widower? #plumbing #...
Sunday, August 13, 2023
Sun Microsystems - Scott McNealy, First Global Internet Stream, December 1992.
I need an old copy of Solaris 2.4 or 2.5 on a SPARCstation or an emulator to capture an MPEG-4 of this.
The player will play the TESTME or holiday greeting 1992 files out to local audio and a standard X window.
I'm still looking for the source code for these.
This should be a precursor to Sun's CellB Video Encoding (RFC 2029):
http://www.cs.columbia.edu/~hgs/rtp/drafts/draft-ietf-avt-cellb-06.txt
"CellB, derived from CellA, has been optimized for network-based video applications. It is computationally symmetric in both encode and decode. CellB utilizes a fixed colormap and vector quantization techniques in the YUV color space to achieve compression."
I came in, implemented that change to their codec in a few days, and did the stream. I have no idea what happened to that code after my brief contract ended in 1992.
John
Tuesday, August 01, 2023
Universal Media Server: Streamlining Your Media Sharing Experience
In today's fast-paced digital age, our lives are intertwined with various smart devices. We capture memories through images, listen to music on the go, and indulge in our favorite movies and TV shows whenever we desire. However, as the number of devices we own increases, the need for seamless media sharing becomes apparent. Universal Media Server (UMS) emerges as the ultimate solution, providing a DLNA-compliant UPnP Media Server capable of effortlessly sharing video, audio, and images across most modern devices.
A Heritage of Stability: From PS3 Media Server to Universal Media Server
UMS has a rich history rooted in the popular PS3 Media Server, created by shagrath. The objective behind developing Universal Media Server was to enhance stability and file-compatibility, guaranteeing a smoother user experience. Building on the foundations of PS3 Media Server, UMS adopted an array of improvements and new features, making it a robust and versatile media server for all.
DLNA and UPnP: Powering Seamless Media Sharing
DLNA (Digital Living Network Alliance) and UPnP (Universal Plug and Play) are the technologies behind UMS's ability to share media across devices. DLNA provides a set of guidelines for compatible devices to discover and communicate with each other over a network. UPnP, on the other hand, facilitates the automatic discovery and configuration of devices within a network. These two technologies work harmoniously within UMS, enabling it to act as a bridge between various devices, ensuring your media content is accessible on all connected platforms.
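To make the discovery step concrete, here is a minimal sketch of an SSDP search (the discovery half of UPnP) in Node.js, using only the built-in dgram module. It multicasts an M-SEARCH request and prints whatever DLNA media servers, such as UMS, reply with:

const dgram = require('dgram');

const SSDP_ADDR = '239.255.255.250';
const SSDP_PORT = 1900;

// M-SEARCH asks every device on the LAN matching the ST (search target) to reply.
const mSearch = Buffer.from(
  'M-SEARCH * HTTP/1.1\r\n' +
  'HOST: 239.255.255.250:1900\r\n' +
  'MAN: "ssdp:discover"\r\n' +
  'MX: 2\r\n' +
  'ST: urn:schemas-upnp-org:device:MediaServer:1\r\n' +
  '\r\n'
);

const socket = dgram.createSocket('udp4');

socket.on('message', (msg, rinfo) => {
  // Each reply carries a LOCATION header pointing at the device description XML.
  console.log(`reply from ${rinfo.address}:\n${msg.toString()}`);
});

socket.send(mSearch, SSDP_PORT, SSDP_ADDR, () => {
  // Give devices a few seconds to answer, then close the socket.
  setTimeout(() => socket.close(), 3000);
});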
Features and Benefits of Universal Media Server
1. Wide Device Compatibility: UMS supports an extensive range of devices, including smart TVs, gaming consoles, smartphones, tablets, and more. Regardless of the manufacturer, as long as the devices are DLNA-compliant, UMS will efficiently cater to your media-sharing needs.
2. Streaming Around the Home: With UMS, you can enjoy your favorite videos, music, and images from any room. Because DLNA/UPnP operates over the local network, any compatible device on the same network as the server has your media content just a few taps away.
3. Customizable Transcoding: Not all devices support the same media formats. UMS takes care of this by offering transcoding capabilities, converting media files on the fly to a format compatible with the target device. This ensures smooth playback on devices with limited format support (a sketch of how on-the-fly transcoding works follows this list).
4. User-Friendly Interface: UMS boasts an intuitive and user-friendly interface that simplifies the media-sharing process. Even those who aren't tech-savvy can navigate and set up the server with ease.
5. Regular Updates and Support: The development team behind UMS is dedicated to providing constant updates, bug fixes, and new features to enhance the overall user experience. Additionally, an active community of users ensures a wealth of support and assistance when needed.
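Under the hood, the on-the-fly transcoding mentioned in item 3 usually means piping the source file through an external engine as the device pulls bytes; UMS bundles engines such as FFmpeg for this. Here is a minimal Node.js sketch of the idea, assuming ffmpeg is on the PATH and a hypothetical input file sample.mkv (an illustration of the technique, not UMS's actual pipeline):

const http = require('http');
const { spawn } = require('child_process');

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'video/mp2t' });
  // Transcode to H.264/AAC in an MPEG-TS container and write it to stdout,
  // streaming straight to the client; nothing is written to disk.
  const ffmpeg = spawn('ffmpeg', [
    '-i', 'sample.mkv',   // hypothetical input file
    '-c:v', 'libx264',
    '-c:a', 'aac',
    '-f', 'mpegts',
    'pipe:1',
  ]);
  ffmpeg.stdout.pipe(res);
  // Stop transcoding if the client disconnects mid-stream.
  req.on('close', () => ffmpeg.kill('SIGKILL'));
}).listen(8080);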
Setting Up Universal Media Server
Getting started with UMS is a breeze. Simply download the software from the official website and install it on your computer. Once installed, you can customize the settings according to your preferences. Then, start adding your media libraries, and UMS will take care of the rest. Within minutes, your media will be accessible across all your connected devices.
Conclusion
In conclusion, Universal Media Server has successfully evolved from its predecessor, PS3 Media Server, to become a dependable and versatile media-sharing solution. Its DLNA-compliant UPnP Media Server capabilities, combined with its wide device compatibility and user-friendly interface, make it an ideal choice for anyone looking to streamline their media-sharing experience. With UMS, you can effortlessly share videos, audio, and images across modern devices, ensuring your cherished memories and entertainment content are always within reach. So, why wait? Embrace the world of Universal Media Server and revolutionize your media-sharing journey today.
https://github.com/UniversalMediaServer/UniversalMediaServer
https://www.universalmediaserver.com/
Saturday, May 13, 2023
Here's a simple example of a VR project in Three.js, updated for WebXR:
import * as THREE from 'three';
import { VRButton } from 'three/examples/jsm/webxr/VRButton.js';
import { XRControllerModelFactory } from 'three/examples/jsm/webxr/XRControllerModelFactory.js';

let renderer, scene, camera;
let controller1, controller2;
let xrControllerModelFactory;
init();
animate();
function init() {
// Set up the WebGL renderer
renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
// Set up the scene
scene = new THREE.Scene();
// Set up the camera
camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
// Set up VR
renderer.xr.enabled = true;
navigator.xr.isSessionSupported('immersive-vr').then(function(supported) {
if (supported) {
const button = VRButton.createButton(renderer); // VRButton ships with the Three.js examples
document.body.appendChild(button);
}
});
// Set up controllers
controller1 = renderer.xr.getController(0);
scene.add(controller1);
controller2 = renderer.xr.getController(1);
scene.add(controller2);
xrControllerModelFactory = new XRControllerModelFactory();
const controllerModel1 = xrControllerModelFactory.createControllerModel(controller1);
controller1.add(controllerModel1);
const controllerModel2 = xrControllerModelFactory.createControllerModel(controller2);
controller2.add(controllerModel2);
// Add other objects to your scene here
}
function animate() {
renderer.setAnimationLoop(render);
}
function render() {
// Render the scene through the camera
renderer.render(scene, camera);
}
In this example, we're setting up a Three.js scene with a single camera and enabling WebXR. We're also adding controllers, which will be represented in the VR scene and will move to match the movements of the user's VR controllers. Finally, we're setting up an animation loop to continuously render the scene.
Please note that this is a simple example; a real WebXR application might involve more complex setups, including interaction handling, complex 3D models, and more. Remember to include the three.js library and any additional modules such as XRControllerModelFactory, which is used to create 3D models of the XR controllers.
How to port code from WebVR to WebXR:
Deprecated APIs: Replace all deprecated WebVR APIs with their WebXR equivalents. WebXR uses a different set of APIs that are more flexible and comprehensive than those in WebVR.
Session Handling: In WebXR, instead of directly requesting a VR display and starting a VR session, you would need to ask the user agent for an XR device, then start a session.
Reference Spaces: WebXR introduces the concept of reference spaces, which replace the eye-level and stage-level frames of reference in WebVR. You'll have to update your code to use these new reference spaces.
Input Sources: WebXR has a more flexible and comprehensive system for handling input from a variety of devices. If your WebVR code is designed to handle input from specific devices, you'll need to update it to use the new WebXR input source APIs (a short sketch follows this list).
Viewer and Frame Data: In WebVR, you use VRDisplay.getFrameData() to get the data needed to render a frame. In WebXR, you use the XRFrame.getViewerPose() method.
Animation Loop: In WebVR, you use VRDisplay.requestAnimationFrame(). In WebXR, you use XRSession.requestAnimationFrame().
Rendering: In WebXR, you don't render to the canvas directly. Instead, you render to an XRWebGLLayer that is part of an XRSession.
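Here is the input-source sketch referenced above: a minimal example of the WebXR input APIs, assuming an active xrSession (the variable is defined in the examples below):

// 'select' fires on the primary action (e.g. trigger press) of any input source.
xrSession.addEventListener('select', (event) => {
  const source = event.inputSource;
  console.log(`select from ${source.handedness} ${source.targetRayMode} input`);
});

// Input sources can also be polled, e.g. from the render loop, to read gamepad state.
function pollInputs() {
  for (const source of xrSession.inputSources) {
    if (source.gamepad && source.gamepad.buttons[0].pressed) {
      // buttons[0] is the primary trigger in the standard WebXR gamepad layout
      // ... handle a held trigger here ...
    }
  }
}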
Sure, let's take a simple "Hello, World!" example of a rotating cube in WebVR and port it to WebXR. Note that the code snippets below are simplified for illustration purposes and might require additional setup and error handling for a full production application.
WebVR Version
let vrDisplay, frameData;
initWebGL();
initWebVR();
function initWebGL() {
// WebGL setup goes here
}
function initWebVR() {
navigator.getVRDisplays().then(function(displays) {
if(displays.length > 0) {
vrDisplay = displays[0];
frameData = new VRFrameData();
vrDisplay.requestAnimationFrame(loop);
}
});
}
function loop() {
vrDisplay.getFrameData(frameData);
render(frameData); // your rendering function
vrDisplay.requestAnimationFrame(loop);
}
WebXR Version
let xrSession, xrReferenceSpace;
initWebGL();
initWebXR();
function initWebGL() {
// WebGL setup goes here; for WebXR the context must be created XR-compatible,
// e.g. gl = canvas.getContext('webgl', { xrCompatible: true });
}
async function initWebXR() {
if (navigator.xr) {
// Note: requestSession() must be called from a user gesture, such as a button click.
xrSession = await navigator.xr.requestSession("immersive-vr", { optionalFeatures: ['local-floor', 'bounded-floor'] });
xrSession.updateRenderState({ baseLayer: new XRWebGLLayer(xrSession, gl) });
xrReferenceSpace = await xrSession.requestReferenceSpace('local-floor');
xrSession.requestAnimationFrame(loop);
}
}
function loop(time, xrFrame) {
let xrViewerPose = xrFrame.getViewerPose(xrReferenceSpace);
if (xrViewerPose) {
let glLayer = xrSession.renderState.baseLayer;
gl.bindFramebuffer(gl.FRAMEBUFFER, glLayer.framebuffer);
for (let xrView of xrViewerPose.views) {
// Render from the point of view of xrView
render(xrView); // your rendering function
}
}
xrSession.requestAnimationFrame(loop);
}
This code creates a VR session, sets up the WebGL context for rendering in VR, requests a 'local-floor' reference space (which provides an origin at the user's floor level), and enters the render loop.
Remember, these are simplified examples. In a real application, you would need to handle errors, manage WebGL resources, and account for different types of VR devices and input methods. You would also need a way to start and stop the VR session based on user input.
You may also want to consider using a library such as Three.js, which can handle much of this complexity for you and provides a higher-level API for 3D rendering. Three.js has solid WebXR support; its older WebVR support has been removed in favor of WebXR.
The transition from WebVR to WebXR primarily involved a change in the APIs used to interact with VR/AR hardware. Here are some of the most important changes:
navigator.getVRDisplays(): This was a WebVR method used to get a list of all connected VR displays. In WebXR, the equivalent is navigator.xr.requestSession(), which requests a session for XR content. Unlike getVRDisplays(), requestSession() returns a promise for a single XRSession object instead of a list of displays.
VRDisplay: This interface represented a VR display in WebVR. It has been replaced by the XRSystem interface in WebXR. XRSystem provides methods to get information about the XR device, check whether a given session mode is supported, and request an XRSession.
VRDisplay.requestAnimationFrame(): This method requested an animation frame for the VR display in WebVR. The WebXR equivalent is XRSession.requestAnimationFrame(). It behaves in a similar way, but is called on an XRSession object instead of a VRDisplay.
VRDisplay.requestPresent(): This WebVR method was used to start presenting to the VR display. It has been replaced by the XRSession.updateRenderState() method in WebXR. Instead of directly presenting to the display, you now update the render state of the XR session.
VRFrameData: This interface represented all the data needed to render a single frame in WebVR. In WebXR, the equivalent is the XRFrame interface. You use the getViewerPose() and getPose() methods of XRFrame to get the data you need for rendering.
VREyeParameters: In WebVR, this interface was used to get information about the eyes of the VR display. In WebXR, the concept of eyes has been replaced with XRViews. You get a list of XRViews from an XRViewerPose object, which you can get from an XRFrame.
VRDisplayCapabilities: This interface represented the capabilities of a VR display in WebVR. In WebXR, capabilities are represented by the XRSessionMode and XRSessionInit dictionaries. You check whether a given session mode is supported using the XRSystem.isSessionSupported() method.
VRStageParameters: This interface represented the stage parameters in WebVR. In WebXR, stages are represented by bounded reference spaces. You request one by calling XRSession.requestReferenceSpace() with the 'bounded-floor' option (a short sketch follows this list).
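And the sketch promised above: requesting a bounded reference space and reading the play-area outline it exposes, assuming this runs inside an async function with an active immersive-vr session that was granted the 'bounded-floor' feature:

const space = await xrSession.requestReferenceSpace('bounded-floor');
// boundsGeometry is an array of DOMPointReadOnly vertices outlining the
// play area on the floor plane (y is 0 for every point).
for (const point of space.boundsGeometry) {
  console.log(`boundary vertex at x=${point.x.toFixed(2)}, z=${point.z.toFixed(2)}`);
}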
These changes reflect a shift in how the APIs conceptualize VR/AR hardware and sessions. The goal of these changes was to make the APIs more flexible, able to handle a wider range of hardware, and better aligned with how modern VR/AR hardware works.