Showing posts with label transcoding. Show all posts

Wednesday, March 01, 2023

Livepeer (LPT): Decentralized Video Transcoding Services

 

https://livepeer.org/primer


Livepeer is an Ethereum-based protocol that distributes video transcoding work throughout its decentralized network. The protocol aims to provide cost-efficient, secure, and reliable infrastructure that can handle today's high demand for video streaming.


Livepeer is a scalable Platform-as-a-Service (PaaS) for startups and organizations looking to add live or on-demand video to their offerings. At its core, Livepeer is an Ethereum-based protocol for video transcoding, which refers to the reformatting of a video to suit a variety of bandwidths and devices. Designed to make streaming more reliable while reducing costs, Livepeer acts as a decentralized marketplace connecting developers building applications that integrate live video with transcoding providers. The network’s native token is the Livepeer Token (LPT).
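To make "reformatting a video to suit a variety of bandwidths and devices" concrete, here is a minimal sketch of a typical adaptive-bitrate rendition ladder. The resolutions and bitrates below are illustrative values, not Livepeer defaults.

```python
# Illustrative only: a transcoder takes one source rendition and produces
# several lower-bandwidth renditions for different devices and connections.
SOURCE = {"resolution": (1920, 1080), "bitrate_kbps": 6000}

LADDER = [
    {"resolution": (1280, 720), "bitrate_kbps": 3000},
    {"resolution": (854, 480),  "bitrate_kbps": 1500},
    {"resolution": (640, 360),  "bitrate_kbps": 800},
]

def transcode_jobs(source, ladder):
    """Describe one transcode job per target rendition."""
    return [
        {"from": source["resolution"], "to": r["resolution"],
         "target_kbps": r["bitrate_kbps"]}
        for r in ladder
    ]

for job in transcode_jobs(SOURCE, LADDER):
    print(job)
```

Each job in the list is one unit of transcoding work of the kind Livepeer's network distributes.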


At its core, Livepeer seeks to offer a scalable and cost-effective infrastructure solution that can meet today’s demand for streaming. Beyond accounting for 80% of internet bandwidth consumption, video streaming is also extremely costly on the computing side, largely because video distributors must first transcode video before broadcasting it. The resulting costs have meant that many video streaming companies have resorted to selling user data and subjecting users to ads to earn the revenue needed to pay their infrastructure bills. Livepeer seeks to offer a decentralized, token-incentivized, and open network to replace this model — and claims that it can reduce costs by up to 50x compared to legacy methods.


How Does Livepeer Work?

First, nodes called Broadcasters send video streams to the network for transcoding. These streams are received by Orchestrators: users who contribute their computer’s CPU, GPU, and bandwidth to the network in exchange for fees, denominated in ether (ETH), charged to Broadcasters. To become an Orchestrator, you must stake LPT, and that stake can be slashed if you behave maliciously or perform your work poorly.


Orchestrators serve as coordinators, responsible for ensuring that video is correctly transcoded. They send the video to Transcoder hardware that encodes and reformats it before sending it back to the Orchestrators. Work is distributed to Orchestrators in proportion to the amount of LPT they have staked. The Transcoders that perform the work are often GPUs that are busy mining digital currencies but have video-encoding ASICs sitting idle during the mining process. Livepeer enables these ASICs to be put to use, driving additional revenue for the operators, all without disrupting their mining operations.
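Distributing work "in proportion to the amount of LPT they have staked" amounts to stake-weighted selection. A minimal sketch, with made-up orchestrator names and stakes:

```python
import random

# Hypothetical stakes (in LPT) for three orchestrators; values are made up.
stakes = {"orch_a": 5000, "orch_b": 3000, "orch_c": 2000}

def pick_orchestrator(stakes, rng=random):
    """Select an orchestrator with probability proportional to its stake."""
    names = list(stakes)
    weights = [stakes[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Over many draws, orch_a (with half the total stake) should win
# roughly half the time.
rng = random.Random(42)
draws = [pick_orchestrator(stakes, rng) for _ in range(10_000)]
share_a = draws.count("orch_a") / len(draws)
print(f"orch_a selected in {share_a:.1%} of draws")
```

This is only the selection principle; the live protocol's actual assignment logic involves more factors (price, latency, performance).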


If you are a LPT holder but do not wish to participate as an Orchestrator or Transcoder, you can stake your LPT tokens with an Orchestrator in exchange for a portion of the Orchestrator’s earned rewards and fees.
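The delegation split described above is simple pro-rata arithmetic: the Orchestrator keeps a commission, and the remainder is divided among Delegators by stake. A sketch with hypothetical numbers (the parameter names are illustrative, not Livepeer's contract fields):

```python
def split_rewards(total_reward, reward_cut, delegator_stakes):
    """Split one round's reward between an orchestrator and its delegators.

    reward_cut is the fraction the orchestrator keeps as commission;
    the remainder is shared among delegators pro rata to their stakes.
    """
    orchestrator_share = total_reward * reward_cut
    remainder = total_reward - orchestrator_share
    total_stake = sum(delegator_stakes.values())
    delegator_shares = {
        name: remainder * stake / total_stake
        for name, stake in delegator_stakes.items()
    }
    return orchestrator_share, delegator_shares

# 100 LPT reward, 25% commission, two delegators staking 600 and 400 LPT:
orch, delegators = split_rewards(100.0, 0.25, {"alice": 600, "bob": 400})
print(orch, delegators)  # 25.0 {'alice': 45.0, 'bob': 30.0}
```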


Consensus on Livepeer

Livepeer utilizes a two-layer consensus mechanism. First, the Livepeer ledger and its transactions are recorded on and secured by the Ethereum blockchain. The second consensus layer handles the distribution of newly generated LPT and verifies that transcoding work has been done correctly. This layer utilizes a Delegated Proof-of-Stake (DPoS) model in which Orchestrators act as validators — nodes that participate in the protocol to ensure proper payment settlement, token distribution, and security. When an Orchestrator performs transcoding work, Broadcaster nodes can self-validate or outsource to other Orchestrators to check for mistakes and malicious behavior. This is a costly operation, so Livepeer randomly verifies only a small percentage of the work done.
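Verifying only a small random sample keeps costs low while still making cheating risky, since any given segment might be checked. A sketch of the sampling idea; the 2% rate is an arbitrary illustration, as the protocol only commits to a "small percentage":

```python
import random

def should_verify(rng, sample_rate=0.02):
    """Spot-check a transcoded segment with probability sample_rate."""
    return rng.random() < sample_rate

# Over 100,000 segments, roughly 2% get re-checked.
rng = random.Random(0)
checked = sum(should_verify(rng) for _ in range(100_000))
print(f"verified {checked} of 100,000 segments")
```

Because a dishonest Orchestrator cannot predict which segments will be sampled, the expected penalty from slashing outweighs the savings from skipping work.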


The Livepeer Token (LPT)

LPT is designed to act as a coordination and incentive mechanism that helps keep the network as cost-effective, reliable, and secure as possible. It serves as a bonding mechanism to financially incentivize Orchestrators to act honestly, thus securing the network.


New Livepeer tokens are minted at the conclusion of periods known as ‘rounds,’ and are distributed to Delegators and Orchestrators in proportion to their stakes. This is intended to give those who participate in Livepeer more ownership over the network than those who do not participate. One round corresponds to roughly 24 hours. The inflation rate of LPT adjusts automatically depending upon how many tokens are staked out of the total supply in circulation. This is designed to keep participation in the network at a desirable level.
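The self-adjusting inflation described above can be sketched as a simple feedback rule: if staking participation is below some target, the per-round inflation rate ticks up to encourage staking, and if it is above, the rate ticks down. The 50% target and the step size below are illustrative parameters, not the live protocol's current values:

```python
def next_inflation(inflation, staked, total_supply,
                   target_rate=0.50, step=0.00005):
    """Adjust the per-round inflation rate toward a target participation rate.

    All parameters here are illustrative; the real protocol's target and
    step size are set by governance.
    """
    participation = staked / total_supply
    if participation < target_rate:
        return inflation + step          # too little staked: raise rewards
    elif participation > target_rate:
        return max(0.0, inflation - step)  # too much staked: lower rewards
    return inflation

# With only 40% of the supply staked, inflation rises for the next round:
print(next_inflation(0.0003, staked=40e6, total_supply=100e6))
```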


Livepeer’s decentralized architecture provides video broadcasters with an alternative to the costly, centralized infrastructure that has traditionally been relied on. However, broadcasters are not the only stakeholders who stand to benefit. Livepeer’s model could enable video streaming companies to explore new business models which don’t rely on selling user data and serving ads — creating a better experience for consumers.


Likewise, Livepeer predicts that its technology could make a variety of new services possible, such as pay-as-you-go content consumption and creator-economy streaming applications that better align content creators, consumers, and the platforms themselves. Livepeer also provides a long-needed decentralized solution for embedding video into decentralized applications (dApps).

Thursday, July 30, 2020

Testing the Mobfish free trial of VRStudio.

https://mobfish.net/features/cloud-transcoding/






https://johnsokol.mobfish.studio/webvr

OMG, there are no more WebVR browsers; it was discontinued about two years ago, and everything is now WebXR.

Well, it plays fine on a desktop, and it works on the Oculus Go.


Tuesday, July 14, 2020

Nageru: Taking free software video mixing into 2016 (FOSDEM 2016)







https://nageru.sesse.net/



Nageru (投げる), a modern free software video mixer

Nageru (a pun on the Japanese verb nageru, meaning to throw or cast) is a live video mixer. It takes in inputs from one or more video cards (any DeckLink PCI card via Blackmagic's drivers, and Intensity Shuttle USB3 and UltraStudio SDI USB3 cards via bmusb), mixes them together based on the operator's desire and a theme written in Lua, and outputs a high-quality H.264 stream over TCP suitable for further transcoding and/or distribution. Nageru is free software, licensed under the GNU General Public License, version 3 or later.
Nageru aims to produce high-quality output, both in terms of audio and video, while still running on modest hardware. The reference system for two 720p60 inputs is a ThinkPad X240, i.e., an ultraportable dual-core with a not-very-fast GPU. Nageru's performance scales almost linearly with the available GPU power; e.g., if you have a GPU that's twice as fast as mine (which is not hard to find at all these days; desktop GPUs are frequently more like 10x), going to 1080p60 will only cost you about 10% more CPU power.
Various real-world examples of videos produced by Nageru:
  • The stream at Solskogen 2016–2018 was made with Nageru (in collaboration with other software and some hardware); you can view a copy of the 2016 streams and 2017 streams at YouTube (although YouTube doesn't seem to deal properly with 50/60 fps switches, causing jerkiness in some videos).
  • Fyrrom in 2016, and again in 2017.
  • All videos from Trøndisk 2017 (an example of multi-camera and overlay graphics), Norwegian ultimate championships 2018 (same, but with the newer native CEF support), and Trøndisk 2018 (also with Futatabi integration).
  • The Norwegian municipality of Frøya is live streaming all of their council meetings using Nageru (Norwegian only).
  • Breizhcamp, a French technology conference, used Nageru in 2018 and 2019. If you speak French, you can watch their keynote about it (itself produced with Nageru) and all their other video online.

Futatabi (再び), a multi-camera free software instant replay system with slow motion

Futatabi is a multi-camera instant replay system with slow motion. It supports efficient real-time interpolation using optical flow, making for full-framerate output without having to use special high-framerate cameras. (Of course, interpolation can only take you so far, and the results will depend on the type of content.) Futatabi is currently in alpha. It is distributed and built together with Nageru.

Documentation

Nageru and Futatabi have extensive documentation at https://nageru.sesse.net/doc/. In addition, you can see the FOSDEM 2016 talk introducing Nageru, although it covers only 1.0.0 and a lot of things have happened since then:
There was a talk about Futatabi at FOSDEM 2019, too (covering 1.8.2):








https://nageru.sesse.net/doc-1.8.6/streaming.html