Ffmpeg

compile
Compiling ffmpeg for the Raspberry Pi 4: the script (gist) only works on RPiOS, together with the https://pastebin.com/CZBzJtGq modification, as the OMX library doesn't work on the new bullseye build (24 Apr. 2022). OMX provides the hardware acceleration the modules need for h264 conversion, reducing streaming bandwidth from ~10 meg to ~700 k; use the previous RPiOS build if this can't be sorted out on bullseye.

https://gist.github.com/moritzmhmk/48e5ed9c4baa5557422f16983900ca95 OMX issue: using the hardware gpu to avoid cpu usage with ffmpeg. Capturing video from the rpi camera with ffmpeg can take anywhere from under 5% to 100% of the cpu (rpi zero), depending on whether ffmpeg uses hardware acceleration. Many github issues suggest the h264_omx codec to use the gpu, but it does not ship with the default ffmpeg on Raspbian. Instead, the v4l2 driver provided by Raspbian can be used to get hardware-accelerated h264 output. Setting the video size on the input also avoids a (cpu) scale filter.
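A minimal sketch of that v4l2 route (not verbatim from the gist): it assumes the camera appears as /dev/video0 and that the ffmpeg build includes the h264_v4l2m2m encoder (check with ffmpeg -encoders | grep v4l2).

# hardware-accelerated h264 capture via the V4L2 memory-to-memory encoder
# setting -video_size on the input is what avoids the cpu scale filter
ffmpeg -f v4l2 -video_size 1280x720 -framerate 30 -i /dev/video0 \
    -c:v h264_v4l2m2m -b:v 2M out.mp4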

 * https://github.com/georgmartius/vid.stab (and http://public.hronopik.de/vid.stab/) to be added to the ffmpeg pastebin build script.
 * https://ffmpeg.org/ffmpeg-protocols.html ffmpeg protocols and command line options.
 * https://gist.github.com/Brainiarc7/95c9338a737aa36d9bb2931bed379219 instructions on setting up FFmpeg and Libav to use VAAPI-based hardware accelerated encoding (on supported platforms) for H.264 (and H.265 on supported hardware) video formats.
 * https://docs.nvidia.com/video-technologies/video-codec-sdk/ffmpeg-with-nvidia-gpu/ nvidia cuda compile.
 * https://trac.ffmpeg.org/wiki/HWAccelIntro
 * https://github.com/Freescale/libimxvpuapi freescale vpu library.
 * https://github.com/pyke369/sffmpeg sffmpeg is a simple CMake-based full-featured FFmpeg build helper; linked from https://sonnati.wordpress.com/2011/08/30/ffmpeg-%e2%80%93-the-swiss-army-knife-of-internet-streaming-%e2%80%93-part-iv/

streaming
 * https://retroresolution.com/ retroarch ffmpeg recording and live streaming scripts; "Recording Live Gameplay in RetroPie’s RetroArch Emulators Natively on the Raspberry Pi with ffmpeg".
 * https://retroresolution.com/#li_stream_twitch stream to twitch.
 * https://wiki.friendlyelec.com/wiki/index.php/NanoPi_NEO_Air#Connect_to_DVP_Camera_CAM500B connecting the CAM500B DVP camera to the NanoPi NEO Air.

vaapi
 * https://01.org/linuxmedia
 * https://github.com/intel/libva VA-API is an open-source library and API specification which provides access to graphics hardware acceleration capabilities for video processing. It consists of a main library and driver-specific acceleration backends for each supported hardware vendor.
 * https://github.com/intel/libva-utils
 * https://github.com/intel/gmmlib The Intel(R) Graphics Memory Management Library provides device-specific and buffer management for the Intel(R) Graphics Compute Runtime for OpenCL(TM) and the Intel(R) Media Driver for VAAPI.
 * https://github.com/intel/media-driver The Intel(R) Media Driver for VAAPI is a VA-API (Video Acceleration API) user mode driver supporting hardware accelerated decoding, encoding, and video post processing for GEN based graphics hardware.
 * https://github.com/mypopydev/ffmpeg_build verify ffmpeg with VA-API hwaccel with the command: ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device /dev/dri/renderD128 -i input.mp4 -c:v h264_vaapi output.mp4
 * http://libav.org/ ffmpeg fork, see http://blog.pkh.me/p/13-the-ffmpeg-libav-situation.html. See the man pages for usage of the avconv command: a very fast video and audio converter that can also grab from a live audio/video source, convert between arbitrary sample rates, and resize video on the fly with a high-quality polyphase filter. avconv reads from an arbitrary number of input "files" (regular files, pipes, network streams, grabbing devices, etc.), specified by the "-i" option, and writes to an arbitrary number of output "files", specified by a plain output filename; anything on the command line which cannot be interpreted as an option is considered an output filename.
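A hedged sketch of a full hardware path on VAAPI (decode, scale, and encode all on the gpu); it assumes /dev/dri/renderD128 is the render node and that the build includes the scale_vaapi filter and h264_vaapi encoder:

# decode, resize and re-encode without leaving the gpu
ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device /dev/dri/renderD128 \
    -i input.mp4 -vf 'scale_vaapi=w=1280:h=720' -c:v h264_vaapi output.mp4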

Stream UDP over ssh
Streaming a video file via UDP over ssh (sshMesh) and socat. Not a practical solution, as ssh only forwards TCP. See streaming for the rtsp-over-tcp-over-ssh link.

server:
socat -d -d TCP-LISTEN:6800,fork,reuseaddr UDP:localhost:8500
ffplay -protocol_whitelist file,rtp,udp -i config1.sdp   # only start ffplay after the client streams the udp file, or use mplayer (better latency): mplayer sdp://config1.sdp
config1.sdp is created by the ffmpeg command on the client side; copy this file over to the server side.

client:
function sshportforwardAndSocat {
    # background tunnel: local TCP 6800 -> server TCP 6800
    ssh -f -N -L 6800:localhost:6800 theroom@192.168.1.98
    # relay the local RTP/UDP stream from ffmpeg into the TCP tunnel
    socat -d -d UDP-LISTEN:8500,fork,reuseaddr TCP:localhost:6800
}

function streaMmp4file {
    echo "https://trac.ffmpeg.org/wiki/StreamingGuide"
    echo "copy the config1.sdp file, generated by ffmpeg, over to the server side"
    sleep 5
    local thefile="redisdata.mp4"
    local videohost="127.0.0.1"
    local videoport="8500"
    local audioport="9010"    # unused here; reserved for a separate audio stream
    # stream the video track only (audio disabled) as RTP/UDP and write the SDP description
    ffmpeg -re -i "$thefile" -vcodec copy -an -sdp_file config1.sdp \
        -f rtp rtp://"$videohost":"$videoport"
}

sshportforwardAndSocat
streaMmp4file

tech
http://www.tecmint.com/ffmpeg-commands-for-video-audio-and-image-conversion-in-linux/

https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu

https://stackoverflow.com/questions/40088222/ffmpeg-convert-video-to-images

http://www.bugcodemaster.com/article/extract-images-frame-frame-video-file-using-ffmpeg
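A quick sketch of the frame-extraction case (filenames are only examples); the fps filter sets how many frames per second get written out:

# one frame per second as numbered PNGs
ffmpeg -i input.mp4 -vf fps=1 frame_%04d.png
# every single frame
ffmpeg -i input.mp4 frame_%06d.png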

mp3
https://superuser.com/questions/704493/ffmpeg-convert-m4a-files-to-mp3-without-significant-loss-of-information-quali

https://trac.ffmpeg.org/wiki/Encode/MP3

https://coderwall.com/p/zbevoq/convert-m4a-to-mp3-with-ffmpeg
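A hedged one-liner for the m4a-to-mp3 case (filenames are examples); -q:a 2 picks a high-quality VBR setting for libmp3lame:

ffmpeg -i input.m4a -c:a libmp3lame -q:a 2 output.mp3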

https://rg3.github.io/youtube-dl/download.html
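To go straight to mp3 with youtube-dl (the URL is a placeholder):

youtube-dl -x --audio-format mp3 "https://example.com/watch?v=..."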

See Tahoe for stripping metadata.

links

 * Gist code
 * Gprs_and_wifi_streaming and Cctv_cameras modding, rooting webcams.