Friday, July 31, 2020

Projection Tool - poor man's laser cutter.

PolyProjector

This is a tool for projecting the individual shapes of a 3d model onto sheets of cardboard to create physical models. It's kind of like a poor man's laser cutter. It's built on top of the Unity game engine.




Source code:  https://github.com/greengiant83/PolyProjector

You can download a precompiled version of the app here:

Windows 10: https://github.com/greengiant83/PolyProjector/raw/master/App%20Builds/PolyProjector%20-%20Windows.zip


After calibrating the screen, press L on your keyboard to load the 3d model. Use < and > (comma and period) to cycle through the faces of your model. The mouse wheel can be used to rotate the polygon for easier working. Some more information can be found in this video:
https://www.youtube.com/watch?v=iDshWkUkbXk



The cool thing here is it was made by Matt Bell, a virtual reality expert who wanted to make physical objects.
That's sort of the reverse of my path: I have the digital manufacturing down but want to put things into VR now.




ImmersionVR has a great article about VR video and VR360

https://immersionvr.co.uk/about-360vr/vr-video/

What is VR video?

VR video and 360 VR are essentially interchangeable terms. They refer to videos that are captured using specialist omnidirectional cameras which enable the filming of an entire 360 degrees at the same time.
In the finished video the user is free to look around the entire scene. In contrast to regular videos, VR videos provide an immersive, interactive experience. The user often experiences the feeling of actually “being there”.
Contrary to popular belief, you do not need a special device in order to view 360 videos. They can be viewed on the vast majority of devices, including mobile.
The user can swipe or scroll across the screen in order to view the entire 360 degrees of the scene. This can be particularly useful for commercial applications, such as virtual reality real estate tours.



DepthKit - Depth Image Video - Free Viewpoint Video - Volumetric Video



https://www.depthkit.tv/
DepthKit for A-Frame

An A-Frame component for rendering volumetric videos captured using DepthKit (i.e. Kinect + DSLR) in WebVR. The component wraps DepthKit.js which provides a similar interface for Three.js projects.


https://github.com/juniorxsound/DepthKit-A-Frame

https://orfleisher.com/aframe

https://github.com/juniorxsound/DepthKit-for-Max


DepthKit for Max/Msp/Jitter

A sample Max patch demonstrating a workflow for playing volumetric video in Max/Msp/Jitter using DepthKit combined-per-pixel exports. Supports rendering a mesh, wireframe and points.
DepthKit in Max


Thursday, July 30, 2020

Seurat : system for image-based scene simplification for VR

https://developers.google.com/vr/discover/seurat



Seurat

Seurat is a scene simplification technology designed to process very complex 3D scenes into a representation that renders efficiently on mobile 6DoF VR systems.
Seurat works by taking advantage of the fact that VR scenes are typically viewed from within a limited viewing region (the box on the left below), and leverages this to optimize the geometry and textures in your scene.
It takes RGBD images (color and depth) as input and generates a textured mesh, targeting a configurable number of triangles, texture size, and fill rate, to simplify scenes beyond what traditional methods can achieve.
Seurat is available as an open source project on GitHub, and includes plugin support for generating depth images for scenes in both Unity and Unreal.

Seurat - Documentation

What is Seurat?

Seurat is a system for image-based scene simplification for VR. It converts complex 3D scenes with millions of triangles, including complex lighting and shading effects, into just tens of thousands of triangles that can be rendered very efficiently on 6DOF devices with little loss in visual quality. It delivers high fidelity graphics on mobile VR devices. (One way to think of it is as serving, on 6DoF devices, the same role that stereo panoramas serve on 3DoF devices.)
The processing pipeline for static environments generates data for a single headbox (e.g. 1 m³ of space). Input data can be generated with any rendering system, e.g. a real-time game engine or an offline ray tracer. We have plugins for Unity, Unreal and Maya. Seurat outputs a mesh with an RGBA texture atlas, which can be rendered in any real-time engine. Dynamic content can be composited on top of the static Seurat environments.



Optimizing for 6DOF mobile VR with Google's Seurat









I lifted this from a 3D photo off Facebook.  Still trying to make sense of it...
You can see this image has been broken into subimages. I suspect these are mapped onto various planes at different Z-buffer levels, something that renders quickly in WebGL.



VR180 Test on Mobfish...




https://vr180test.mobfish.studio/webvr


Testing the Mobfish free trial of VRStudio.

https://mobfish.net/features/cloud-transcoding/






https://johnsokol.mobfish.studio/webvr

OMG, there are no more WebVR browsers; it was discontinued about 2 years ago and everything is now WebXR.

Well, it plays fine on a desktop and it works on the Oculus Go.


Wednesday, July 29, 2020

How The 'Anti-Paparazzi' Scarf That Ruins Photos Really Works



It's retroreflective fabric, the kind they sell for chromakey backdrops, with a silk screen print on it.


Video Analytics.

A JavaScript SDK for tracking events and revenue to Amplitude.

https://github.com/amplitude/Amplitude-JavaScript

https://developers.amplitude.com/docs

How Amplitude Works

To understand how Amplitude works, let’s walk through a hypothetical example.
Tunes is a standard music player for mobile devices that has common actions like playing a song, skipping a song, shuffling play, and sharing a song.
Using Amplitude, you can track all the actions your users make in detail and better understand what’s working and what’s not.

What actions will Amplitude keep track of?

Amplitude gives you the power to determine what’s important to your experience. You can choose to track anything and everything.
For example, in Tunes, you could track the music control buttons the user presses or even how many songs each user has listened to in each session.



Tuesday, July 28, 2020

AR Augmented Reality PCB board inspector.

https://www.instagram.com/p/CCbGRfqDJBS/


inspectarteam

inspectAR uses the board files from @adskeagle and @kicadsolutions, or the ipc2581b files from @altiumdesign, @mentorpcb and many other EDA’s to create the AR overlays.
Save yourself hours of time flipping through datasheets, pinout diagrams and everything else that reduces your productivity. Eliminate the frustration of recounting pins for test points, or searching for specific traces.


Friday, July 24, 2020

360 Green Screen Studio in San Jose.

Free STUN, WebRTC, Web Socket Servers, Storage and Media Processing


found on: https://free-for.dev/

videoinu — Create and edit screen recordings and other videos online

Google Meet — Use Google Meet for your business's online video meeting needs. Meet provides secure, easy-to-join online meetings.

meet.jit.si — One click video conversations, screen sharing, for free

riot.im — A decentralized communication tool built on Matrix. Group chats, direct messaging, encrypted file transfers, voice and video chats, and easy integration with other services.


talky.io — Free group video chat. Anonymous. Peer‑to‑peer. No plugins, signup, or payment required

whereby.com — One click video conversations, for free (formerly known as appear.in)

wistia.com — Video hosting with viewer analytics, HD video delivery and marketing tools to help understand your visitors, 25 videos and Wistia branded player

yammer.com — Private social network standalone or for MS Office 365. Free with a bit less admin tools and user management features

zoom.us — Secure Video and Web conferencing, add-ons available. Free limited to 40 minutes

STUN, WebRTC, Web Socket Servers and Other Routers

conveyor.cloud — Visual Studio extension to expose IIS Express to the local network or over a tunnel to a public URL.
Hamachi — LogMeIn Hamachi is a hosted VPN service that lets you securely extend LAN-like networks to distributed teams with free plan allows unlimited networks with up to 5 peoples
Radmin VPN - Connect multiple computers together via a VPN enabling LAN-like networks. Unlimited peers. (Hamachi alternative)
localhost.run — Instantly share your localhost environment! No download required. Run your app on port 8080 and then run this command and share the URL.
ngrok.com — Expose locally running servers over a tunnel to a public URL.
segment.com — Hub to translate and route events to other third-party services. 100,000 events/month free
serveo.net — Quickly expose any local port to the public internet on a serveo subdomain using an SSH tunnel, includes SSH GUI to replay requests over HTTP.
stun:global.stun.twilio.com:3478?transport=udp — Twilio STUN
stun:stun.l.google.com:19302 — Google STUN
webhookrelay.com — Manage, debug, fan-out and proxy all your webhooks to public or internal (ie: localhost) destinations. Also, expose servers running in a private network over a tunnel by getting a public HTTP endpoint (https://yoursubdomain.webrelay.io <----> http://localhost:8080).
ZeroTier — FOSS managed virtual Ethernet as a service. Unlimited end-to-end encrypted networks of 100 clients on free plan. Clients for desktop/mobile/NA; web interface for configuration of custom routing rules and approval of new client nodes on private networks.

Storage and Media Processing
sirv.com — Smart Image CDN with on-the-fly image optimization and resizing. Free tier includes 500 MB of storage and 2 GB bandwidth.

image4.io — Image upload, powerful manipulations, storage and delivery for websites and apps, with SDK's, integrations and migration tools. Free tier includes 25 credits. 1 credit is equal to 1 GB of CDN usage, 1GB of storage or 1000 image transformations.
cloudimage.com — Full image optimization and CDN service with 1500+ Points of Presence around the world. A variety of image resizing, compression, watermarking functions. Open source plugins for responsive images, 360 image making and image editing. Free monthly plan with 25GB of CDN traffic and 25GB of cache storage and unlimited transformations.
cloudinary.com — Image upload, powerful manipulations, storage and delivery for sites and apps, with libraries for Ruby, Python, Java, PHP, Objective-C and more. Perpetual free tier includes 7,500 images/month, 2 GB storage, 5 GB bandwidth
easyDB.io — one-click, hosted database provider. They provide a database for the programming language of your choice for development purposes. The DB is ephemeral and will be deleted after 24 or 72 hours on the free tier.
embed.ly — Provides APIs for embedding media in a webpage, responsive image scaling, extracting elements from a webpage. Free for up to 5,000 URLs/month at 15 requests/second
filestack.com — File picker, transform and deliver, free for 250 files, 500 transformations and 3 GB bandwidth
gumlet.com — Image resize-as-a-service. It also optimizes images and performs delivery via CDN. Free tier includes 1 GB bandwidth and unlimited number of image processing every month for 1 year.
image-charts.com — Unlimited image chart generation with a watermark
jsonbin.io — Free JSON data storage service, ideal for small-scale web apps, website, mobile apps.
kraken.io — Image optimization for website performance as a service, free plan up to 1 MB file size
npoint.io — JSON store with collaborative schema editing
otixo.com — Encrypt, share, copy and move all your cloud storage files from one place. Basic plan provides unlimited files transfer with 250 MB max. file size and allows 5 encrypted files
packagecloud.io — Hosted Package Repositories for YUM, APT, RubyGem and PyPI. Limited free plans, open source plans available via request
piio.co — Responsive image optimization and delivery for every website. Free plan for developers and personal websites. Includes free CDN, WebP and Lazy Loading out of the box.
placeholder.com — A quick and simple image placeholder service
placekitten.com — A quick and simple service for getting pictures of kittens for use as placeholders
plot.ly — Graph and share your data. Free tier includes unlimited public files and 10 private files
podio.com — You can use Podio with a team of up to five people and try out the features of the Basic Plan, except user management
QuickChart — Generate embeddable image charts, graphs, and QR codes
redbooth.com — P2P file syncing, free for up to 2 users
shrinkray.io — Free image optimization of GitHub repos
tinypng.com — API to compress and resize PNG and JPEG images, offers 500 compressions for free each month
transloadit.com — Handles file uploads and encoding of video, audio, images, documents. Free for Open source, charities, and students via the GitHub Student Developer Pack. Commercial applications get 2 GB free for test driving

Tuesday, July 21, 2020

Introducing Kandao Meeting 360° All-in-one conference camera







Kandao Meeting, the 360° conferencing camera, combines two high-quality fisheye lenses, an 8-mic array and a speaker. It can be put in the center of a meeting room to capture all the attendees while automatically focusing on people as they speak.

-Learn More:
https://kandaovr.com/kandao-meeting/




Thursday, July 16, 2020

Tiny Camera Backpack Turns Beetles Into FPV Inspection Robots - Hackster.io

PEAK QUALITY 3D 180 IMMERSIVE VIDEO IN OCULUS GO AND GEAR VR


http://echeng.com/articles/peak-quality-3d-180-oculus/


Skybox VR, Virtual Desktop, Pixvana SPIN Player

Excellent article and I am collecting FFMPEG commands to try to understand how to do this sort of encoding.

ffmpeg -i "in.mov" -vf "scale=4096x2048:out_range=full:out_color_matrix=bt709" -c:v libx264 -preset fast -crf 18 -pix_fmt yuv420p -c:a copy -g 60 -movflags faststart "out.mp4"

Finally, I used Facebook Spatial Workstation’s FB360 Encoder to mux the video and audio, outputting in “FB360 Matroska” for Oculus headsets, and in “YouTube Video (with 1st order ambiX)” for other players like Pixvana SPIN Player.


Tuesday, July 14, 2020

Create a WebRTC P2P Shooting App with the Ricoh THETA Plug-in


https://community.theta360.guide/t/create-a-webrtc-p2p-shooting-app-with-the-theta-plug-in/4050/2


Has links for an Open Source Java based websocket communications to stream data from the camera.

Nageru: Taking free software video mixing into 2016 (FOSDEM 2016)







https://nageru.sesse.net/



Nageru (投げる), a modern free software video mixer

Nageru (a pun on the Japanese verb nageru, meaning to throw or cast) is a live video mixer. It takes in inputs from one or more video cards (any DeckLink PCI card via Blackmagic's drivers, and Intensity Shuttle USB3 and UltraStudio SDI USB3 cards via bmusb), mixes them together based on the operator's desire and a theme written in Lua, and outputs a high-quality H.264 stream over TCP suitable for further transcoding and/or distribution. Nageru is free software, licensed under the GNU General Public License, version 3 or later.
Nageru aims to produce high-quality output, both in terms of audio and video, while still running on modest hardware. The reference system for two 720p60 inputs is a ThinkPad X240, ie. an ultraportable dual-core with a not-very-fast GPU. Nageru's performance scales almost linearly with the available GPU power; e.g., if you have a GPU that's twice as fast as mine (which is not hard to find at all these days; desktop GPUs are frequently more like 10x), going to 1080p60 will only cost you about 10% more CPU power.
Various real-world examples of videos produced by Nageru:
  • The stream at Solskogen 2016–2018 was made with Nageru (in collaboration with other software and some hardware); you can view a copy of the 2016 streams and 2017 streams at YouTube (although YouTube doesn't seem to deal properly with 50/60 fps switches, causing jerkiness in some videos).
  • Fyrrom in 2016, and again in 2017.
  • All videos from Trøndisk 2017 (an example of multi-camera and overlay graphics), Norwegian ultimate championships 2018 (same, but with the newer native CEF support), and Trøndisk 2018 (also with Futatabi integration).
  • The Norwegian municipality of Frøya is live streaming all of their council meetings using Nageru (Norwegian only).
  • Breizhcamp, a French technology conference, used Nageru in 2018 and 2019. If you speak French, you can watch their keynote about it (itself produced with Nageru) and all their other video online.

Futatabi (再び), a multi-camera free software instant replay system with slow motion

Futatabi is a multi-camera instant replay system with slow motion. It supports efficient real-time interpolation using optical flow, making for full-framerate output without having to use special high-framerate cameras. (Of course, interpolation can only take you so far, and the results will depend on the type of content.) Futatabi is currently in alpha. It is distributed and built together with Nageru.

Documentation

Nageru and Futatabi have extensive documentation at https://nageru.sesse.net/doc/. In addition, you can see the FOSDEM 2016 talk introducing Nageru, although it covers only 1.0.0 and a lot of things have happened since then:
There was a talk about Futatabi at FOSDEM 2019, too (covering 1.8.2):








https://nageru.sesse.net/doc-1.8.6/streaming.html






High-Fidelity Generative Image Compression



https://hific.github.io/

We combine Generative Adversarial Networks (GANs) with learned compression to obtain a state-of-the-art generative lossy compression system. In the paper, we investigate normalization layers, generator and discriminator architectures, training strategies, as well as perceptual losses. In a user study, we show that our method is preferred to previous state-of-the-art approaches even if they use more than 2× the bitrate.






360° VR TRANSCODING LIVE | ON DEMAND

https://cubemap.io/

360° TRANSCODING
LIVE | ON DEMAND
Cubemap optimizes video for Virtual Reality by stitching videos faster, and more efficiently than any other solution, no GPU required.

H.266 (VVC) New Codec Announced: Laying the Groundwork for 8K Streaming

https://ymcinema.com/2020/07/10/h-266-vvc-new-codec-announced-laying-the-groundwork-for-8k-streaming/


The Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute (HHI) has announced the standardization of a new codec called H.266 or VVC (Versatile Video Coding), which requires about half the bit rate of H.265 (HEVC) without a reduction in quality. The goal: to lay the groundwork for 8K streaming.

Support ultra-high-resolution: Up to 16K
VVC should support resolutions from 4K up to 16K as well as 360° videos, and supports YCbCr 4:4:4, 4:2:2, and 4:2:0 with 10 to 16 bits per component




The Versatile Video Coding (VVC) Standard on the Final Stretch by Benjamin Bross

Monday, July 13, 2020

Focal Surface Displays - Varifocal displays



https://en.wikipedia.org/wiki/Varifocal_lens

https://www.oculus.com/blog/oculus-research-to-present-focal-surface-display-discovery-at-siggraph/

https://www.oculus.com/blog/introducing-the-team-behind-half-dome-facebook-reality-labs-varifocal-prototype/


ffmpeg stereo 3d VR180

Sorry, these are some raw notes and highlights.


https://ffmpeg.org/ffmpeg-filters.html#stereo3d


hequirect
Half equirectangular projection.

target_geometry
The target geometry of the output image/video. The following values are valid options:
rectilinear (default)
fisheye
panoramic
equirectangular
fisheye_orthographic
fisheye_stereographic
fisheye_equisolid
fisheye_thoby
dfisheye
Dual fisheye.

fisheye
Fisheye projection.

Examples


  • Convert equirectangular video to cubemap with 3x2 layout and 1% padding using bicubic interpolation:
    ffmpeg -i input.mkv -vf v360=e:c3x2:cubic:out_pad=0.01 output.mkv
    
  • Extract back view of Equi-Angular Cubemap:
    ffmpeg -i input.mkv -vf v360=eac:flat:yaw=180 output.mkv
    
  • Convert transposed and horizontally flipped Equi-Angular Cubemap in side-by-side stereo format to equirectangular top-bottom stereo format:
    v360=eac:equirect:in_stereo=sbs:in_trans=1:ih_flip=1:out_stereo=tb
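Since these notes are aimed at VR180, one more v360 example may help: converting a side-by-side VR180 file (half-equirectangular per eye, the hequirect format mentioned above) to full equirectangular while keeping side-by-side stereo. This is an untested sketch based on the filter docs; the file names are hypothetical and it needs a recent ffmpeg (4.3+) for hequirect:

```shell
# Hypothetical VR180 input: side-by-side stereo, half-equirect per eye.
# v360 remaps each eye from hequirect to equirect, preserving SBS stereo.
ffmpeg -i vr180_sbs.mp4 \
  -vf "v360=hequirect:equirect:in_stereo=sbs:out_stereo=sbs" \
  -c:a copy vr180_equirect_sbs.mp4
```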
    

https://trac.ffmpeg.org/wiki/Stereoscopic

ffmpeg -i top-and-bottom.mov -vf stereo3d=abl:sbsl -c:a copy side-by-side.mov
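For a quick check of side-by-side footage on a normal 2D monitor, the same stereo3d filter can also output a red/cyan anaglyph (hedged sketch; file names are placeholders):

```shell
# Side-by-side (left eye first) to red/cyan Dubois anaglyph for 2D preview.
ffmpeg -i side-by-side.mov -vf stereo3d=sbsl:arcd -c:a copy anaglyph.mov
```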


ENCODING VR 360 AND 180 IMMERSIVE VIDEOS FOR OCULUS MEDIA STUDIO AND SIDELOADING INTO OCULUS QUEST AND GO

https://echeng.com/articles/encoding-for-oculus-media-studio/



Sunday, July 12, 2020

Exploring the Kandao QooCam 4K - pass 1




I was hoping to get live streaming off this camera, and it would appear that if you have a 64-bit Android device their app can show a live video feed?

There is a Live option on the Windows app that appears to do nothing.

The USB connection doesn't show up as a video device.


You can connect to the device over wifi.

To do this, power on the device and wait for the power-up tune to play.

One well-timed (approx 1 second) power button push and it will beep and the blue wifi light will come on.

Scan your wifi network SSIDs and you will see something like:

QOOCAM-06515

Your camera ID will vary based on the last 5 digits of the serial number.

The password is: 12345678
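On a Linux box running NetworkManager, joining the camera's access point can be scripted. Untested sketch; the SSID is the example above and will differ per camera:

```shell
# List nearby SSIDs to find the camera's access point.
nmcli device wifi list
# Join the camera's AP (SSID suffix varies with your serial number).
nmcli device wifi connect "QOOCAM-06515" password "12345678"
```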




From Linux, this command works but I was seeing many errors.
ffplay  rtsp:192.168.1.1:554/live


sokol@nuc2:~/Desktop/vr180$ ffplay  rtsp:192.168.1.1:554/live
ffplay version 2.8.15-0ubuntu0.16.04.1 Copyright (c) 2003-2018 the FFmpeg developers
  built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.10) 20160609
  configuration: --prefix=/usr --extra-version=0ubuntu0.16.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-libx264 --enable-libopencv
  libavutil      54. 31.100 / 54. 31.100
  libavcodec     56. 60.100 / 56. 60.100
  libavformat    56. 40.101 / 56. 40.101
  libavdevice    56.  4.100 / 56.  4.100
  libavfilter     5. 40.101 /  5. 40.101
  libavresample   2.  1.  0 /  2.  1.  0
  libswscale      3.  1.101 /  3.  1.101
  libswresample   1.  2.101 /  1.  2.101
  libpostproc    53.  3.100 / 53.  3.100
[h264 @ 0x7f95580008c0] RTP: missed 129 packetsKB sq=    0B f=0/0 
[h264 @ 0x7f95580008c0] RTP: missed 37 packets0KB sq=    0B f=0/0 
[h264 @ 0x7f95580008c0] left block unavailable for requested intra mode at 0 44
[h264 @ 0x7f95580008c0] error while decoding MB 0 44, bytestream 659
[h264 @ 0x7f95580008c0] concealing 1969 DC, 1969 AC, 1969 MV errors in P frame
[h264 @ 0x7f95580008c0] RTP: missed 28 packets0KB sq=    0B f=0/0 
[h264 @ 0x7f95580008c0] RTP: missed 41 packets0KB sq=    0B f=0/0 
[h264 @ 0x7f95580008c0] RTP: missed 31 packets0KB sq=    0B f=0/0 
[h264 @ 0x7f95580008c0] RTP: missed 27 packets0KB sq=    0B f=0/0 
[h264 @ 0x7f95580008c0] RTP: missed 25 packets0KB sq=    0B f=0/0 
[h264 @ 0x7f95580008c0] RTP: missed 121 packetsKB sq=    0B f=0/0 
[h264 @ 0x7f95580008c0] RTP: missed 33 packets0KB sq=    0B f=0/0 
[h264 @ 0x7f95580008c0] RTP: missed 65 packets0KB sq=    0B f=0/0 
[h264 @ 0x7f95580008c0] RTP: missed 25 packets0KB sq=    0B f=0/0 
[h264 @ 0x7f95580008c0] RTP: missed 33 packets0KB sq=    0B f=0/0 
[h264 @ 0x7f95580008c0] RTP: missed 33 packets0KB sq=    0B f=0/0 
[h264 @ 0x7f95580008c0] RTP: missed 29 packets0KB sq=    0B f=0/0 
Input #0, rtsp, from 'rtsp:192.168.1.1:554/live': sq=    0B f=0/0 
  Metadata:
    title           : H.264 Video. Streamed by iCatchTek.
    comment         : H264
  Duration: N/A, start: 0.801333, bitrate: N/A
    Stream #0:0: Video: h264 (High), yuv420p, 1920x960, 15 fps, 14.99 tbr, 90k tbn, 30 tbc
[h264 @ 0x7f955812a200] left block unavailable for requested intra mode at 0 44
[h264 @ 0x7f955812a200] error while decoding MB 0 44, bytestream 659
[h264 @ 0x7f955812a200] concealing 1969 DC, 1969 AC, 1969 MV errors in P frame
[h264 @ 0x7f95580008c0] RTP: missed 41 packets9KB sq=    0B f=1/1 
[h264 @ 0x7f95580008c0] RTP: missed 65 packets1KB sq=    0B f=1/1 
[h264 @ 0x7f95580008c0] RTP: missed 25 packets2KB sq=    0B f=1/1 
[h264 @ 0x7f95580008c0] RTP: missed 41 packets
[h264 @ 0x7f95580008c0] RTP: missed 33 packets
    Last message repeated 1 times
[h264 @ 0x7f95580008c0] RTP: missed 161 packetsKB sq=    0B f=1/1 
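Those "RTP: missed N packets" errors are UDP loss over the wifi link. Forcing RTSP interleaved over TCP is worth a try, and the same URL can be recorded without re-encoding. Untested against this camera:

```shell
# Play over TCP to avoid UDP packet loss on the wifi link.
ffplay -rtsp_transport tcp rtsp://192.168.1.1:554/live
# Or record ~30 seconds without re-encoding.
ffmpeg -rtsp_transport tcp -i rtsp://192.168.1.1:554/live -c copy -t 30 capture.mp4
```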
 




$ nmap -v -sN 192.168.1.1
Starting Nmap 7.70 ( https://nmap.org ) at 2020-07-06 08:11 ric
Initiating ARP Ping Scan at 08:11
Scanning 192.168.1.1 [1 port]
Completed ARP Ping Scan at 08:11, 2.16s elapsed (1 total hosts)
mass_dns: warning: Unable to determine any DNS servers. Reverse DNS is disabled. Try using --system-dns or specify valid servers with --dns-servers
Initiating NULL Scan at 08:11
Scanning 192.168.1.1 [1000 ports]
Completed NULL Scan at 08:11, 6.23s elapsed (1000 total ports)
Nmap scan report for 192.168.1.1
Host is up (0.018s latency).
Not shown: 997 closed ports
PORT     STATE         SERVICE
21/tcp   open|filtered ftp
554/tcp  open|filtered rtsp
9200/tcp open|filtered wap-wsp
MAC Address: CC:4B:73:35:B4:2E (Ampak Technology)

Read data files from: C:\Program Files (x86)\Nmap
Nmap done: 1 IP address (1 host up) scanned in 15.72 seconds
           Raw packets sent: 1004 (40.148KB) | Rcvd: 998 (39.908KB)



$ nmap -v -A 192.168.1.1
Starting Nmap 7.70 ( https://nmap.org ) at 2020-07-06 08:12 ric
NSE: Loaded 148 scripts for scanning.
NSE: Script Pre-scanning.
Initiating NSE at 08:12
Completed NSE at 08:12, 0.00s elapsed
Initiating NSE at 08:12
Completed NSE at 08:12, 0.00s elapsed
Initiating ARP Ping Scan at 08:12
Scanning 192.168.1.1 [1 port]
Completed ARP Ping Scan at 08:12, 2.46s elapsed (1 total hosts)
mass_dns: warning: Unable to determine any DNS servers. Reverse DNS is disabled. Try using --system-dns or specify valid servers with --dns-servers
Initiating SYN Stealth Scan at 08:12
Scanning 192.168.1.1 [1000 ports]
Discovered open port 554/tcp on 192.168.1.1
Discovered open port 21/tcp on 192.168.1.1
Discovered open port 9200/tcp on 192.168.1.1
Completed SYN Stealth Scan at 08:12, 3.94s elapsed (1000 total ports)
Initiating Service scan at 08:12
Scanning 3 services on 192.168.1.1
Completed Service scan at 08:12, 6.01s elapsed (3 services on 1 host)
Initiating OS detection (try #1) against 192.168.1.1
Retrying OS detection (try #2) against 192.168.1.1
Retrying OS detection (try #3) against 192.168.1.1
Retrying OS detection (try #4) against 192.168.1.1
Retrying OS detection (try #5) against 192.168.1.1
NSE: Script scanning 192.168.1.1.
Initiating NSE at 08:13
Completed NSE at 08:13, 10.04s elapsed
Initiating NSE at 08:13
Completed NSE at 08:13, 0.00s elapsed
Nmap scan report for 192.168.1.1
Host is up (0.0042s latency).
Not shown: 997 closed ports
PORT     STATE SERVICE    VERSION
21/tcp   open  tcpwrapped
554/tcp  open  rtsp       DoorBird video doorbell rtspd
9200/tcp open  tcpwrapped
MAC Address: CC:4B:73:35:B4:2E (Ampak Technology)
No exact OS matches for host (If you know what OS is running on it, see https://nmap.org/submit/ ).
TCP/IP fingerprint:
OS:SCAN(V=7.70%E=4%D=7/6%OT=554%CT=1%CU=34247%PV=Y%DS=1%DC=D%G=Y%M=CC4B73%T
OS:M=5F02CF11%P=i686-pc-windows-windows)SEQ(CI=I%II=RI)ECN(R=N)T1(R=Y%DF=N%
OS:T=FF%S=O%A=S+%F=AS%RD=0%Q=)T1(R=N)T2(R=N)T3(R=N)T4(R=Y%DF=N%T=FF%W=E420%
OS:S=A%A=S%F=AR%O=%RD=0%Q=)T5(R=Y%DF=N%T=FF%W=E420%S=A%A=S+%F=AR%O=%RD=0%Q=
OS:)T6(R=Y%DF=N%T=FF%W=E420%S=A%A=S%F=AR%O=%RD=0%Q=)T7(R=Y%DF=N%T=FF%W=E420
OS:%S=A%A=S+%F=AR%O=%RD=0%Q=)U1(R=Y%DF=N%T=FF%IPL=38%UN=0%RIPL=G%RID=G%RIPC
OS:K=G%RUCK=G%RUD=G)IE(R=Y%DFI=S%T=FF%CD=S)

Network Distance: 1 hop
Service Info: Device: webcam

TRACEROUTE
HOP RTT     ADDRESS
1   4.20 ms 192.168.1.1

NSE: Script Post-scanning.
Initiating NSE at 08:13
Completed NSE at 08:13, 0.01s elapsed
Initiating NSE at 08:13
Completed NSE at 08:13, 0.01s elapsed
Read data files from: C:\Program Files (x86)\Nmap
OS and Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 52.54 seconds
           Raw packets sent: 1241 (60.438KB) | Rcvd: 1047 (43.282KB)
$