Tuesday, June 30, 2020

Fwd: Face recognition Temperature Measurement Terminal

---------- Forwarded message ---------

This high-precision detection algorithm can detect body temperature whether or not the subject is wearing a mask. It integrates image acquisition, face detection, face tracking, face comparison, and body-temperature detection, with a 1-second detection speed.

It announces the temperature by voice, and the LED light turns red if the temperature is abnormal.

The machine can be installed at any entrance (hotel, CBD building, school, shopping mall, bus station, apartment, ...).

The latest 10,000 records can be checked on a computer.

It can also be used as a touch-free face-recognition punch-in/punch-out machine.

Ching Deng

Masrui Technology (HK) Co., Ltd. 

Shenzhen Masrui Technology Co., Ltd. 

M(Wechat/WhatsApp): +86 13652367942



Sunday, June 28, 2020

What Is CoaXPress? | Vision Campus


CoaXPress | CXP


The CoaXPress (CXP) standard was originally launched by various companies in the industrial image processing sector. The goal was to develop a fast data interface that could also carry large data volumes across greater distances. The first CoaXPress interfaces were introduced in 2008 at "Vision" in Stuttgart, the leading trade fair for industrial image processing. After three more years of development, CXP 1.0 was officially released as a standard in 2011, and it has since established itself in industrial image processing. The standard was then developed further into CoaXPress 2.0. An interface with the CoaXPress 1.0/1.1 standard supports data rates as high as 6.25 Gbps. The transmission speed of the CoaXPress 2.0 standard is twice as high, at up to 12.5 Gbps. This allows for even higher resolutions and frame rates compared to other efficient standards. In contrast to the preceding interface, the new standard needs only half as many cables to transfer the same amount of data.

Thanks to the combined triggering and power supply (power over CXP), only one CoaXPress cable is needed, which can have a maximum cable length of 40 meters – another benefit of this update.

Friday, June 26, 2020

Exact Recompression

Is it possible to "reverse engineer" a compressed lossy image or audio file?

That is, instead of re-encoding and incurring more compression artifacts, is it possible to work out what the original compressed data looked like and repack it into the original compressed file without any loss or degradation?

Researchers at Cambridge were able to revert a decompressed bitmap to its JPEG source, as covered in the paper Exact JPEG recompression. The authors are able to recover the compression parameters (quantized DCT coefficients) for 96% of the blocks in images compressed at JPEG quality 75.


Exact JPEG recompression

 Andrew B. Lewis, Markus G. Kuhn 
Computer Laboratory, University of Cambridge, 


We present a variant of the JPEG baseline image compression algorithm optimized for images that were generated by a JPEG decompressor. It inverts the computational steps of one particular JPEG decompressor implementation (Independent JPEG Group, IJG), and uses interval arithmetic and an iterative process to infer the possible values of intermediate results during the decompression, which are not directly evident from the decompressor output due to rounding. We applied our exact recompressor on a large database of images, each compressed at ten different quality factors. At the default IJG quality factor 75, our implementation reconstructed the exact quantized transform coefficients in 96% of the 64-pixel image blocks. For blocks where exact reconstruction is not feasible, our implementation can output transform-coefficient intervals, each guaranteed to contain the respective original value. Where different JPEG images decompress to the same result, we can output all possible bit-streams. At quality factors 90 and above, exact recompression becomes infeasible due to combinatorial explosion; but 68% of blocks still recompressed exactly. 

Matt Montag discussed the possibility of doing this with MP3.

I am sure it should be possible to train neural networks to do this effectively.

Tuesday, June 23, 2020

Sun Microsystems - CELL-B encoding from 1992



Sun Microsystems - Scott McNealy, First Global Internet Stream, December 1992.
I need an old copy of Solaris 2.4 or 2.5 on a SPARCstation or emulator to capture an MPEG-4 of this.
We tried QEMU and the video played, but no audio...
The player will play the TESTME or holiday greeting 1992 files out to local audio and a standard X window.
Still looking for the source code for these, but it's been lost in time. 
This should be a precursor to : Sun's CellB Video Encoding rfc2029
"CellB, derived from CellA, has been optimized for network-based video applications. It is computationally symmetric in both encode and decode. CellB utilizes a fixed colormap and vector quantization techniques in the YUV color space to achieve compression."


5.1 CelB

The CELL-B encoding is a proprietary encoding proposed by Sun Microsystems. The byte stream format is described in work in progress [12]. The author can be contacted at Michael F. Speer Sun Microsystems Computer Corporation 2550 Garcia Ave MailStop UMPK14-305 Mountain View, CA 94043 United States electronic mail: michael.speer@eng.sun.com

  [12] M. F. Speer and D. Hoffman, "RTP payload format of CellB video
       encoding," Work in Progress, Internet Engineering Task Force,
       Aug.  1995.

https://tools.ietf.org/html/draft-ietf-avt-profile-04   March 24, 1995 It is not permissible to use distinct payload types to multiplex several media concurrently onto a single RTP session (e.g., to concurrently send PCMU audio and CelB video over the same RTP session). Some payload types may designate a combination of both audio and video, both within the same packet or differentiated by information within the payload. Currently, the MPEG Transport encapsulation is the only such payload type. 

     CelB: The CELL-B encoding is a proprietary encoding proposed by Sun
          Microsystems. The byte stream format is described in RFC TBD.

Source code was published in the drafts but later removed.
Upon further investigation there are two versions of the code, but other than
variable cleanup (u_char is now U_INT8), not much has changed.

draft-ietf-avt-cellb-profile-01.txt  This is the only one that mentions NetVideo. 
draft-ietf-avt-cellb-profile-03.txt    (all the rest are the same code) 

After resolving countless build issues, it turns out this code is missing the lookup tables, making it almost unusable
as is, although there are Sun documents that explain the format. 

But finding working code is far faster than reconstructing a dead format from what looks
like an incomplete set of specifications. 

I found the source code for the CellB codec in VIC,
a Lawrence Berkeley National Labs whiteboard and video conferencing application from 1993/94.


I later also found a copy in the source code for NV, which I will soon post a link to.

ViewPoint Compressed Packet Video (CPV)

Doing a little bit of code archaeology on old video conferencing code from 25 years ago.

I ran across an interesting codec I wasn't familiar with: CPV.

In the early days the same codecs kept getting referenced. Some, like JPEG and H.261, are ubiquitous and still around.  Others, like CPV, NV, and PicW, are still mysteries, at least to me, and about to be forgotten in time.

     CelB: The CELL-B encoding is a proprietary encoding proposed by Sun
          Microsystems. The byte stream format is described in RFC TBD.

     CPV: This proprietary encoding, "Compressed Packet Video", is
          implemented by Concept, Bolter, and ViewPoint Systems video codecs.
          For further information, contact:  Glenn Norem, President
          ViewPoint Systems, Inc.
          2247 Wisconsin Street, Suite 110
          Dallas, TX 75229-2037
          United States
          Phone: +1-214-243-0634

     JPEG: The encoding  is  specified  in  ISO  Standards  10918-1  and
          10918-2. The RTP payload format is as specified in RFC TBD.

     H261: The encoding is specified in CCITT/ITU-T standard H.261.  The
          packetization and RTP-specific properties are described in RFC

     HDCC: The HDCC encoding is a proprietary encoding used  by  Silicon
          Graphics. Contact

          inperson@sgi.com for further details.

     MPV: MPV designates the use MPEG-I and MPEG-II video encoding  ele-
          mentary  streams  as  specified in ISO Standards ISO/IEC 11172
          and 13818-2, respectively. The RTP payload format is as speci-
          fied in RFC TBD, Section 3.

     MP2T: MP2T designates the use of  MPEG-II  transport  streams,  for
          either  audio  or video. The encapsulation is described in RFC
          TBD, Section 2.

     nv:  The encoding is implemented in the program 'nv'  developed  at
          Xerox PARC by Ron Frederick.

     CUSM: The encoding is implemented in the program CU-SeeMe developed
          at  Cornell  University by Dick Cogger, Scott Brim, Tim Dorcey
          and John Lynn.

     PicW: The encoding is  implemented  in  the  program  PictureWindow
          developed at Bolt, Beranek and Newman (BBN).

https://tools.ietf.org/html/draft-ietf-avt-profile-00  December 15, 1992

Internet Engineering Task Force          Audio-Video Transport Working Group
INTERNET-DRAFT                                                H. Schulzrinne
                                                      AT&T Bell Laboratories
                                                           December 15, 1992
                                                            Expires:  5/1/93

   Sample Profile for the Use of RTP for Audio and Video Conferences with
                              Minimal Control
                                31      H261
                                30      Bolt
                                29      dvc
                                28      nv

                     Table 2:  Default Video Encodings

Bolt is Bolter - CPV

V2 of this document extended to: 
                                31      H261
                                30      Bolt
                                29      PicW
                                28      nv
                                27      CUSM
                                26      JPEG

                     Table 2:  Standard Video Encodings

I just got a hold of the NV (Xerox PARC, Network Video) source code, which I plan to make sure is available somewhere on github.

https://tools.ietf.org/html/draft-ietf-avt-profile-03  October 20, 1993

CPV: This encoding, "Compressed Packet Video" is implemented by Concept, Bolter, and ViewPoint Systems video codecs.
RFC 1890                       AV Profile                   January 1996

      PT         encoding      audio/video    clock rate    channels
                 name          (A/V)          (Hz)          (audio)
      0          PCMU          A              8000          1
      1          1016          A              8000          1
      2          G721          A              8000          1
      3          GSM           A              8000          1
      4          unassigned    A              8000          1
      5          DVI4          A              8000          1
      6          DVI4          A              16000         1
      7          LPC           A              8000          1
      8          PCMA          A              8000          1
      9          G722          A              8000          1
      10         L16           A              44100         2
      11         L16           A              44100         1
      12         unassigned    A
      13         unassigned    A
      14         MPA           A              90000        (see text)
      15         G728          A              8000          1
      16--23     unassigned    A
      24         unassigned    V
      25         CelB          V              90000
      26         JPEG          V              90000
      27         unassigned    V
      28         nv            V              90000
      29         unassigned    V
      30         unassigned    V
      31         H261          V              90000
      32         MPV           V              90000
      33         MP2T          AV             90000
      34--71     unassigned    ?
      72--76     reserved      N/A            N/A           N/A
      77--95     unassigned    ?
      96--127    dynamic       ?

   Table 2: Payload types (PT) for standard audio and video encodings

From NV 2.7 code and similar in the 3.2 tree.

more bolter_decode.h
/****************************************************************************/
/* bolter_decode.h -- Return codes from Bolter_Decode subroutine            */
/****************************************************************************/

#define VxSUCCESS      0   /* Packet successfully decoded */
#define VxEXTRAPIXEL   1   /* Extra pixel past end of pixmap */
#define VxBADADDRESS   2   /* Bad address in video data */
#define VxUNTERMINATED 3   /* Packet not terminated */
#define VxBADHEADER    4   /* Bad video header format */
#define VxBADLENGTH    5   /* Unreasonable packet length */
#define VxNONMOTION    6   /* Non-motion video packet found */

src/nv.c:#ifdef BOLTER
src/nv.c:        (void) Bolter_Decode(source[i].image, packet+8, len-8);
src/nv.c:#endif BOLTER

From NV (Xerox PARC, Network Video version 3.3).
There are binary objects for it:

7592 Feb 22  1994 cpv_decode-alpha.o
2849 Nov  1  1994 cpv_decode-bsdi.o
4176 Feb 18  1994 cpv_decode-dec5k.o
3120 Feb 18  1994 cpv_decode-hp.o
4112 Feb 17  1994 cpv_decode-irix4.o
5156 Mar  7  1994 cpv_decode-irix5.o
2764 Feb 17  1994 cpv_decode-sunos4.o
4132 Mar  7  1994 cpv_decode-sunos5.o

/src/cpv.h Feb 3, 1994
/*                                                                          */
/*      cpv.h -- Return codes from CPV_Decode subroutine to decode          */
/*      Concept/Bolter/ViewPoint Compressed Packet Video (CPV) (TM)         */
/*                                                                          */
/*                                                                          */
/*      Copyright (c) 1994 by the University of Southern California.        */
/*      All rights reserved.                                                */
/*                                                                          */
/*      de-CPV-ware(TM), CPV(TM), and Compressed Packet Video(TM) are       */
/*      trademarks of VIEWPOINT SYSTEMS, INC., 2247 Wisconsin Street        */
/*      #110, DALLAS, TEXAS 75229                                           */
/*                                                                          */
/*      Permission to use, copy, and distribute this software and its       */
/*      documentation in binary form for non-commercial purposes and        */
/*      without fee is hereby granted, provided that the above copyright    */
/*      notice appears in all copies, that both the copyright notice and    */
/*      this permission notice appear in supporting documentation, and      */
/*      that any documentation, advertising materials, and other            */
/*      materials related to such distribution and use acknowledge that     */
/*      the software was developed by the University of Southern            */
/*      California, Information Sciences Institute.  The name of the        */
/*      University may not be used to endorse or promote products derived   */
/*      from this software without specific prior written permission.       */
/*                                                                          */
/*      THE UNIVERSITY OF SOUTHERN CALIFORNIA makes no representations      */
/*      about the suitability of this software for any purpose.  THIS       */
/*                                                                          */
/*      Other copyrights might apply to parts of this software and are      */
/*      so noted when applicable.                                           */
/*                                                                          */
/*                                                                          */
/*      This software decompression function is also known as               */
/*      "de-CPV-ware", Version 1.0.                                         */
/*                                                                          */
/*                                                                          */
/*      Author:           Stephen Casner, casner@isi.edu                    */
/*                        USC Information Sciences Institute                */
/*                        4676 Admiralty Way                                */
/*                        Marina del Rey, CA 90292-6695                     */
/*                                                                          */
/*                                                                          */
/*      Programming Interface                                               */
/*                                                                          */
/*      int CPV_Decode(image, pktptr, pktlen)                               */
/*          struct vidimage {      Output image arrays CPV_WIDTHxCPV_HEIGHT */
/*              unsigned char *y_data;    Pixel luminance image             */
/*              char *uv_data;            Pixel chrominance image           */
/*              short width, height;      Size of pixel image               */
/*                                        Other stuff here that we ignore   */
/*          } *image;              Pointer to output image array struct     */
/*          unsigned char *pktptr; Pointer to start of video data in packet */
/*          int pktlen;            Length of video data as received         */
/*                                                                          */
/*      The caller is responsible for allocating the image output           */
/*      arrays y_data and uv_data; these arrays continuously maintain       */
/*      the output image as it is updated for each call.  For each          */
/*      call, one packet of compressed video data is decompressed and       */
/*      written as Y and UV pixels at the addressed locations in the        */
/*      y_data and uv_data output arrays.  The image is in "4:2:2"          */
/*      format; that is, for each horizontal pair of Y pixels there is      */
/*      one U and one V pixel that together form the chrominance shared     */
/*      by both Y pixels.  The conversion from the encoding of video        */
/*      data in the packet to Y and UV pixels is accomplished with          */
/*      lookup tables indexed by an RGB value of 5 bits each.  These        */
/*      tables occupy a total of 98304 bytes.  These lookup tables may      */
/*      be supplied by the caller in the two arrays:                        */
/*                                                                          */
/*      extern unsigned char rgb2y[32768];     RGB to Y or B&W table    */
/*      extern unsigned short rgb2uv[32768];   RGB to U & V table       */
/*                                                                          */
/*      Or, if this module is compiled with FIRST_ENTRY defined,            */
/*      static arrays will be allocated and on the first call tables        */
/*      values will be calculated.                                          */
/*                                                                          */
/*      For each rectangular area of the image which has been updated,      */
/*      a call is made to the following routine to allow the caller to      */
/*      further process and display those parts of the image:               */
/*                                                                          */
/*      extern void VidImage_UpdateRect(image, x, y, width, height);        */
/*          struct vidimage *image; Pointer to output image array struct    */
/*          int x,y;                Offsets of area within image (left,top) */
/*          int width,height;       Size of update area in pixels           */
/*                                                                          */
/*      The return value from CPV_decode is an integer success/failure      */
/*      code.                                                               */
/*                                                                          */

#define CPV_SUCCESS     0       /* Packet successfully decoded */
#define CPV_EXTRAPIXEL  1       /* Extra pixel past end of pixmap */
#define CPV_BADADDRESS  2       /* Bad address in video data */
#define CPV_UNTERM      3       /* Packet not terminated */
#define CPV_BADHEADER   4       /* Bad video header format */
#define CPV_BADLENGTH   5       /* Unreasonable packet length */
#define CPV_NONMOTION   6       /* Non-motion video packet found */

#define CPV_WIDTH       256
#define CPV_HEIGHT      200

extern int CPV_Decode(vidimage_t *image, unsigned char *data, int len);


Saturday, June 20, 2020

A history of video conferencing (VC) technology

Lifted from:

I started streaming audio across the LAN in 1988, and video across the Stanford IP network in 1989, and was invited to work on video at Sun in 1992. 

So I have the original global internet video stream I did at Sun in December 1992.
13000 viewers using 384 rebroadcast servers.

Now posted at:

  6/22/2020 With a little persuasion I convinced Ron Frederick to post this up on github. 

A history of video conferencing (VC) technology

"This 'telephone' has too many shortcomings to be seriously considered as a means of communication. The device is inherently of no value to us." -- Western Union internal memo, 1876.
The Bell System Picturephone, 1964
  • 1956: AT&T builds the first Picturephone test system
  • 1964: AT&T introduces Picturephone at the World's Fair, New York
  • 1970: AT&T offers Picturephone for $160 per month
  • 1971: Ericsson demonstrates the first trans-atlantic video telephone (LME) call
  • 1973 Dec: ARPAnet packet voice experiments [1]
  • 1976 Mar: Network Voice Protocol (NVP), by Danny Cohen, USC/ISI
  • 1981 Jul: Packet Video Protocol (PVP), by Randy Cole, USC/ISI [2]
  • 1982: CCITT (forerunner of the ITU-T) standard H.120 (2 Mbit/s) video coding, by European COST 211 project
  • 1982: Compression Labs begins selling $250,000 VC system, $1,000 per hour lines
  • 1986: PictureTel's $80,000 VC system, $100 per hour lines
  • 1987: Mitsubishi sells $1,500 still-picture phone
  • 1989: Mitsubishi drops still-picture phone
  • 1990: TWBnet packet audio/video experiments, vt (audio) and pvp (video) from ISI/BBN [3]
  • 1990: CCITT standard H.261 (p x 64) video coding
  • 1990 Dec: CCITT standard H.320 for ISDN conferencing
  • 1991: PictureTel unveils $20,000 black-and-white VC system, $30 per hour lines
  • 1991: IBM and PictureTel demonstrate videophone on PC
  • 1991 Feb: DARTnet voice experiments, Voice Terminal (vt) program from USC/ISI [4]
  • 1991 Jun: DARTnet packet video test between ISI and BBN.
  • 1991 Aug: UCB/LBNL's audio tool vat releases for DARTnet use
  • 1991 Sep: First audio/video conference (H.261 hardware codec) at DARTnet
  • 1991 Dec: dvc (receive-only) program, by Paul Milazzo from BBN, IETF meeting, Santa Fe [5]
  • 1992: AT&T's $1,500 videophone for home market
  • 1992 Mar: World's first MBone audio cast (vat), 23rd IETF, San Diego
  • 1992 Jul: MBone audio/video casts (vat/dvc), 24th IETF, Boston [6]
  • 1992 Jul: INRIA Videoconferencing System (IVS), by Thierry Turletti from INRIA [7]
  • 1992 Sep: CU-SeeMe v0.19 for Macintosh (without audio), by Tim Dorcey from Cornell [8]
  • 1992 Nov: Network Video (nv) v1.0, by Ron Frederick from Xerox PARC, 25th IETF, Washington DC [9]

       The nv code was thought to be LOST and no longer available.   6/22/2020 Correction: with a little persuasion I convinced Ron Frederick to post it up on github.

  • 1992 Dec: Real-time Transport Protocol (RTP) v1, by Henning Schulzrinne
  • 1992 Dec: First Global Video Stream, over Sun's private network. 13,000 viewers.
    This becomes Sun's CellB.
  • 1993 Apr: CU-SeeMe v0.40 for Macintosh (with multipoint conferencing)
  • 1993 May: Network Video (nv) v3.2 (with color video)
  • 1993 Oct: vic initial alpha, by Steven McCanne and Van Jacobson from UCB/LBNL [10]

       Code for VIC is at https://github.com/johnsokol/LBL-VIC,
       the only source code for Sun's CellB video compression.
  • 1993 Nov: VocalChat v1.0, an audio conferencing software for Novell IPX networks [11]
  • 1994 Feb: CU-SeeMe v0.70b1 for Macintosh (with audio) , audio code by Charley Kline's Maven [12]
  • 1994 Apr: CU-SeeMe v0.33b1 for Windows (without audio), by Steve Edgar from Cornell
  • 1994 Jun: John Sokol forms NetSys Inc and begins building the first CDN for video streaming of livecam Motion JPEG streams. 
  • 1995 Feb: VocalTec Internet Phone v1.0 for Windows (without video)
  • 1995 Aug: CU-SeeMe v0.66b1 for Windows (with audio)
  • 1996 Jan: Real-time Transport Protocol (RTP) v2, by IETF avt-wg [13]
  • 1996 Mar: ITU-T standard H.263 (p x 8) video coding for low bit-rate communication
  • 1996 Mar: VocalTec Telephony Gateway
  • 1996 May: ITU-T standard H.324 for POTS conferencing
  • 1996 Jul: ITU-T standard T.120 for data conferencing
  • 1996 Aug: Microsoft NetMeeting v1.0 (without video)
  • 1996 Oct: ITU-T standard H.323 v1, by ITU-T SG 16 [14]
  • 1996 Nov: VocalTec Surf&Call, the first Web to phone plugin
  • 1996 Dec: Microsoft NetMeeting v2.0b2 (with video) [15]
  • 1996 Dec: VocalTec Internet Phone v4.0 for Windows (with video)
  • 1997 Jul: Virtual Room Videoconferencing System (VRVS), Caltech-CERN project
  • 1997 Sep: Resource ReSerVation Protocol (RSVP) v1, by IETF rsvp-wg
  • 1998 Jan: ITU-T standard H.323 v2
  • 1998 Jan: ITU-T standard H.263 v2 (H.263+) video coding
  • 1998 Apr: CU-SeeMe v1.0 for Windows and Macintosh (with color video), from Cornell
  • 1998 May: Cornell's CU-SeeMe development team has completed their work and has gone on to other projects
  • 1998 Oct: ISO/IEC standard MPEG-4 v1, by ISO/IEC JTC1/SC29/WG11 (MPEG)
  • 1999 Feb: Session Initiation Protocol (SIP) makes proposed standard, by IETF mmusic-wg [16]
  • 1999 Apr: Microsoft NetMeeting v3.0b (with gatekeeper)
  • 1999 Aug: ITU-T H.26L Test Model Long-term (TML) project , by ITU-T SG16/Q.6 (VCEG)
  • 1999 Sep: ITU-T standard H.323 v3
  • 1999 Oct: NAT compatible version of iVisit, v2.3b5 for Windows and Mac
  • 1999 Oct: Media Gateway Control Protocol (MGCP) v1, IETF
  • 1999 Dec: Microsoft NetMeeting v3.01 service pack 1 (4.4.3388)
  • 1999 Dec: ISO/IEC standard MPEG-4 v2
  • 2000 May: Columbia SIP user agent sipc v1.30
  • 2000 Oct: Samsung releases the first MPEG-4 streaming 3G (CDMA2000-1x) video cell phone
  • 2000 Nov: ITU-T standard H.323 v4
  • 2000 Nov: MEGACO/H.248 Protocol v1, by IETF megaco-wg and ITU-T SG 16
  • 2000 Dec: Microsoft NetMeeting v3.01 service pack 2 (4.4.3396)
  • 2000 Dec: ISO/IEC Motion JPEG 2000 (JPEG 2000, Part 3) project, by ISO/IEC JTC1/SC29/WG1 (JPEG)
  • 2001 Jun: Windows XP Messenger supports the SIP
  • 2001 Sep: World's first trans-atlantic tele gallbladder surgery (operation Lindbergh)
  • 2001 Oct: NTT DoCoMo sells $570 3G (WCDMA) mobile videophone
  • 2001 Oct: TV reporters use $7,950 portable satellite videophone to broadcast live from Afghanistan
  • 2001 Oct: Microsoft NetMeeting v3.01 (4.4.3400) on XP
  • 2001 Dec: JVT video coding (H.26L and MPEG-4 Part 10) project, by ITU-T SG16/Q.6 (VCEG) and ISO/IEC JTC1/SC29/WG11 (MPEG)
  • 2002 Jun: World's first 3G video cell phone roaming
  • 2002 Dec: JVT completes the technical work leading to ITU-T H.264
  • 2003 May: ITU-T recommendation H.264 advanced video coding
Sources: Wall Street Journal (27 February 1996), The MBone FAQ, rem-conf listserv, The MBone listserv, CU-SeeMe listserv, RTP: Historical Notes, and a few PostScripts (*.ps).
Notes and References
[1] Danny Cohen, "Specifications for the Network Voice Protocol (NVP)", RFC 741, Internet Engineering Task Force, November 1977.
"The major objective of ARPA's Network Secure Communications (NSC) project is to develop and demonstrate the feasibility of secure, high-quality, low-bandwidth, real-time, full-duplex (two-way) digital voice communications over packet-switched computer communications networks. The Network Voice Protocol (NVP), implemented first in December 1973, and has been in use since then for local and transnet real-time voice communication over the ARPANET."
[2] Randy Cole, "PVP - A Packet Video Protocol", Internal Document, USC/ISI, July 1981.
"The Packet Video Protocol (PVP) is a set of extensions to the Network Voice Protocol (NVP-II) and consists mostly of a data protocol for transmission of video data. No specific changes to the NVP-II protocol are necessary for the PVP."
[3] Eve M. Schooler, "A Distributed Architecture for Multimedia Conference Control", ISI research report ISI/RR-91-289, November 1991.
"Voice Terminal (VT) program and Packet Video Program (PVP) were originally implemented on a BBN Butterfly multiprocessor. VT and PVP digitize and packetize data, using the Netowrk Voice Protocol (NVP) for audio and the Packet Video Protocol (PVP) for video. They transmit this data across the network using the experimental Stream Protocol (SP) and the Terrestrial Wideband Network (TWBnet)."
[4] DARTnet : A trans-continental IP network of about a dozen research sites connected by T1 trunks.
November 1988, small group (MIT, BBN, UDel, ISI, SLI, PARC, LBL) led by Bob Braden of USC/ISI proposes testbed net to DARPA. This becomes DARPA Research Testbed Net (DARTnet).
DARTnet has since evolved to CAIRN, which presently connects 27 institutions in the US and Britain.
[5] Tim Dorcey, "CU-SeeMe Desktop VideoConferencing Software", Connexions, Volume 9, No.3, March 1995.
"In fact, it was Paul Milazzo's demonstration of such a tool in 1991 that inspired development of CU-SeeMe."
[6] The video used for the July 1992 Internet Engineering Task Force (IETF) was the Desktop Video Conferencing (DVC) program from BBN, written by Paul Milazzo and Bob Clements.
They have made available a receive-only program, but they retain a proprietary interest in the version that is capable of sending.
This program has since become a product, called PictureWindow.
[7] INRIA Videoconferencing System (IVS) was used to broadcast part of the IETF 25 meeting held in Washington, DC during 16-20 November, 1992. The broadcast was sent over the Mbone. See Internet Monthly Report published in November 1992.
[8] "When development of CU-SeeMe began in July 1992, the only real-time videoconferencing software for the Internet required expensive hardware which severely limited the number of potential senders and receivers. Working with Richard Cogger in the summer of 1992, Tim Dorcey wrote the original version of CU-SeeMe."
URL: http://cu-seeme.cornell.edu
As the Macintosh did not have IP multicast support, CU-SeeMe took a more traditional approach and developed a multipoint server (Reflector) that CU-SeeMe clients could connect to.
[9] For the November 1992 IETF and several events since then, they have used two other programs.
The first is the Network Video (nv) program from Ron Frederick at Xerox PARC.
Also available from INRIA is the IVS program written by Thierry Turletti (as mentioned previously).
Van Jacobson, "Old timers might remember that the first, binary-only, release of nv happened 24 hours before the November 1992 IETF where it was first used."
[10] vic (vi/deo c/onference) was inspired by nv. Portions of vic (the ordered dither, the nv-format codec, and some of the video capture code) were derived from the nv source code.
An early version of vic was used by the Xunet research community to carry out distributed meetings in Fall 1993.
vic change history at http://www-nrg.ee.lbl.gov/vic/CHANGES.html
[11] In 1991, five high-school friends established ClassX, a start-up software company, in Raanana, Israel.
Two years later, four of them joined VocalTec, where they developed VocalChat v1.0-2.5 and the Internet Phone.
Ofer Shem Tov, "VocalChat early version was introduced first time in PCEXPO end of June 1993 in New York. It did half duplex calls over Novell IPX networks. VocalChat v1.0 was released in Comdex Fall, November 1993, in Las Vegas, it was a finished version of the PCEXPO product. First long distance call was done on Bell South Novell network from Atlanta to Miami. VocalChat 2.02 LAN and WAN were released in June 1994 and included voice mail, address book, TCP/IP support and support of VocalTec Compression Card (VCC) for low bandwidth links.
VocalChat GTI (Gateway To the Internet) was released in October 1994. It was focused on the Internet and required the VCC card."
[12] Charley Kline, "I got annoyed at the Fall 1992 IETF when told that the only serious platform for multimedia conferencing was a hefty Unix workstation. I figured a Macintosh has better audio processing ability than a Sun (true!), so set about to write an audio conferencing tool for the Macintosh that would interoperate with the popular vat program for Unix."
URL: http://spiffy.ci.uiuc.edu/~kline/cvk-ido.html
[13] Henning Schulzrinne, "Real-time Transport Protocol (RTP) is the Internet-standard protocol for the transport of real-time data, including audio and video. It can be used for media-on-demand as well as interactive services such as Internet telephony. RTP consists of a data and a control part. The latter is called RTP Control Protocol (RTCP)."
RTP has its roots in the early work done using Network Voice Protocol 2 (NVP-II) with vat and nevot in 1991, which in turn has its roots in the Network Voice Protocol (NVP) experiments of the early 1970s.
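The RTP header itself is small and fixed, which is part of why the protocol aged so well. A minimal sketch of unpacking the 12-byte fixed header, written here from RFC 3550 rather than from any of the tools above:

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Unpack the fixed 12-byte RTP header (RFC 3550, section 5.1)."""
    if len(packet) < 12:
        raise ValueError("too short for an RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,          # 2 for standard RTP
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,   # codec id; 25 = CelB in RFC 3551's static table
        "sequence": seq,
        "timestamp": ts,
        "ssrc": ssrc,                # identifies the media source
    }
```

Audio and video travel as separate RTP streams distinguished by SSRC and payload type, with RTCP carrying the control and synchronization information alongside.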
[14] H.323 : "Visual telephone systems and equipment for Local Area Networks which provide a non-guaranteed Quality of Service." (original title)
"Packet-based multimedia communications systems." (revised title in H.323 v2 drafts)
The four main H.323 network components: Terminals, Gateways, Gatekeepers, and Multipoint Control Units (MCUs).
H.320 (N-ISDN), H.321 (B-ISDN, ATM), H.322 (GQoS LAN), H.323 (H.320 over LAN), H.324 (SCN), H.324 M (Mobile).
[15] Toby Nixon, "Microsoft NetMeeting version 2.0 and below uses an alternative call setup procedure that is permitted for combined H.323/T.120 terminals. Because NetMeeting was originally a T.120-based product (without H.323 support), it sets up the T.120 (data conference) call first, and then the H.323 (audio and video conference) call."
Current versions of NetMeeting are not compliant with the H.323 standard as they do not attempt to register with a gatekeeper, a required function.
[16] SIP is a simple signaling protocol for Internet conferencing and telephony.
H.323 is an ITU-T standard, while SIP is the IETF approach.
The pioneering video conferencing tools :
  • CU-SeeMe [video]
    from Cornell University
    platform : Apple Macintosh
  • DVC [video]
    from BBN
    platform : Sun SPARC
  • IVS [video & audio]
    from INRIA
    platform : Sun SPARC, HP, SGI and DEC stations
  • NV [video]
    from Xerox PARC
    platform : Sun SPARC, SGI and DEC stations

6/22/2020 With a little persuasion I convinced Ron Frederick to post this on GitHub.


I am told it will build and run on modern machines, and Ron was surprised at how well his codec performs against more modern ones.

Sun - CELLB video compression.

So I have the original global internet video stream I did at Sun in December 1992.

Now posted at:

So the code posted in:
  has a number of small compile issues, and once past those I realized a number of lookup tables are missing.

The key to performance is to precompute these tables. But without the table generators, it's very hard to understand what's going on once the algorithm has been reduced to cascading and recursive lookup tables.
It may be possible to regenerate them from the paper and the RFCs.
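As an illustration of the technique — not the actual CellB tables, which are YUV vector-quantization codebooks (see RFC 2029, the CellB RTP payload format) — here is how a codec trades a per-pixel computation for a precomputed table, and why losing the generator makes the table opaque:

```python
# Illustrative only: a toy gamma-style transform, NOT CellB's real tables.
def build_table():
    """Precompute a 256-entry lookup table for a per-pixel transform.
    This generator function is what documents where the values come from."""
    return [min(255, int((i / 255.0) ** 0.5 * 255 + 0.5)) for i in range(256)]

TABLE = build_table()

def transform(pixels):
    # The hot loop is now a bare table lookup -- fast, but if you ship only
    # TABLE without build_table(), the 256 numbers are just magic constants.
    return [TABLE[p] for p in pixels]
```

Cascade a few of these (one table indexing into another) and the original arithmetic disappears entirely from the shipped code, which is exactly the reverse-engineering problem described above.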

The good news is I did find the one last remaining copy of a CellB codec in the wild.

Inside the ancient LBL VIC package is the only copy of a CellB codec on the internet, I am in the process of extracting it.

It's almost worthless compared to JPEG.

Anyhow, there is a transport stream format where the audio and video are multiplexed. My file is in an RTP format.
I am still reading several RFCs and following the other breadcrumbs to the solution that are still out there.

On Fri, Jun 19, 2020 at 4:41 PM John Sokol wrote:

In all likelihood everything we need is in the link below.

Would be super cool to build a wasm player or even just regular javascript. 

On Fri, Jun 19, 2020 at 4:04 PM John Sokol  wrote:

On Fri, Jun 19, 2020 at 2:31 PM Allen   wrote:
yes, I got it running. It was easy (qemu on MacOS), but like I said, no audio.
 On Jun 19, 2020, at 14:15, John Sokol   wrote:
So you did get the emulator running.  How hard was it?  If you have that working, capture a video clip please.

I have to spend money to get the parts to capture this video and post it on YouTube.

On Fri, Jun 19, 2020 at 10:41 AM Allen  wrote:
Oh, and is it not possible to convert your video format to something else? I actually recently ran your player on an emulator, but the emu didn't support audio.

On Jun 19, 2020, at 10:29, John Sokol wrote:

   Thank you so much for those files and the workstations back.

So, what I really want to do is get a video capture of that video playing and put it on YouTube.
Do you have the video cable for those Suns? The whole keyboard and mouse setup would be great too; I just need them for a few hours.


John Sokol