FFmpeg
* Linux: [https://johnvansickle.com/ffmpeg/ https://johnvansickle.com/ffmpeg/]
* Windows: [https://ffmpeg.zeranoe.com/builds/ https://ffmpeg.zeranoe.com/builds/]
If you need NVENC support, you can build FFmpeg with https://github.com/markus-perl/ffmpeg-build-script.
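With an NVENC-enabled build you can then encode on the GPU. A minimal sketch (the preset and quality values are assumptions to tune for your content; filenames are placeholders):
<syntaxhighlight lang="bash">
# Hardware H.264 encode on an NVIDIA GPU; -cq selects constant-quality mode
ffmpeg -i input.mp4 -c:v h264_nvenc -preset p5 -cq 23 -c:a copy output.mp4
</syntaxhighlight>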
Basic usage is as follows:
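The simplest form reads one input and writes one output, with the container inferred from the output extension (a minimal sketch; filenames are placeholders):
<syntaxhighlight lang="bash">
# Transcode input.mp4 to a Matroska file using the default codecs
ffmpeg -i input.mp4 output.mkv
</syntaxhighlight>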
If you want better-quality GIFs, you can use the following filter_complex:
<pre>
[0]split=2[v1][v2];[v1]palettegen=stats_mode=full[palette];[v2][palette]paletteuse=dither=sierra2_4a
</pre>
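For example, a complete invocation using this filtergraph might look like this (filenames are placeholders):
<pre>
ffmpeg -i input.mp4 -filter_complex "[0]split=2[v1][v2];[v1]palettegen=stats_mode=full[palette];[v2][palette]paletteuse=dither=sierra2_4a" output.gif
</pre>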
Another script for converting video to GIF with reasonable quality is described at [https://superuser.com/questions/556029/how-do-i-convert-a-video-to-gif-using-ffmpeg-with-reasonable-quality https://superuser.com/questions/556029/how-do-i-convert-a-video-to-gif-using-ffmpeg-with-reasonable-quality].
===Pipe to stdout===
Below is an example of piping only the raw video stream to stdout:
<pre>
ffmpeg -i video.webm -pix_fmt rgb24 -f rawvideo -
</pre>
In Python, you can read it as follows:
<syntaxhighlight lang="python">
import subprocess
import numpy as np

video_width = 1920
video_height = 1080
ffmpeg_command = ["ffmpeg", "-i", "video.webm",
                  "-pix_fmt", "rgb24", "-f", "rawvideo", "-"]
ffmpeg_process = subprocess.Popen(ffmpeg_command,
                                  stdout=subprocess.PIPE,
                                  stderr=subprocess.DEVNULL)  # avoid blocking on unread stderr
# Read one frame's worth of bytes (3 bytes per RGB24 pixel); repeat for subsequent frames.
raw_image = ffmpeg_process.stdout.read(
    video_width * video_height * 3)
image = (np.frombuffer(raw_image, dtype=np.uint8)
         .reshape(video_height, video_width, 3))
</syntaxhighlight>
==Filters==
===Stack and Unstack===
To stack, see [https://ffmpeg.org/ffmpeg-all.html#hstack <code>hstack</code>], [https://ffmpeg.org/ffmpeg-all.html#vstack <code>vstack</code>].
To unstack, see <code>crop</code>.
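A sketch of both directions (assuming two inputs with the same height; filenames are placeholders):
<pre>
ffmpeg -i left.mp4 -i right.mp4 -filter_complex "[0][1]hstack=inputs=2" stacked.mp4
ffmpeg -i stacked.mp4 -vf "crop=iw/2:ih:0:0" left_again.mp4
</pre>
The <code>crop</code> arguments are width:height:x:y, so <code>iw/2:ih:0:0</code> recovers the left half.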
Add a background to transparent images.<br>
<pre>
ffmpeg -i in.mov -filter_complex "[0]format=pix_fmts=yuva420p,split=2[bg][fg];[bg]drawbox=c=white@1:replace=1:t=fill[bg];[bg][fg]overlay=format=auto" -c:a copy new.mov
</pre>
===Draw Text===
This overlays the frame number on each frame; see https://stackoverflow.com/questions/15364861/frame-number-overlay-with-ffmpeg
<pre>
ffmpeg -i input -vf "drawtext=fontfile=Arial.ttf: text='%{frame_num}': start_number=1: x=(w-tw)/2: y=h-(2*lh): fontcolor=black: fontsize=20: box=1: boxcolor=white: boxborderw=5" -c:a copy output
</pre>
==JavaScript API==
To use FFmpeg in a browser, see [https://ffmpegwasm.netlify.app/ ffmpegwasm].
This is used in https://davidl.me/apps/media/index.html.
==My Preferences==
My preferences for encoding video are as follows.
===AV1===
Prefer AV1 for encoding video on modern devices.
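A sketch using the SVT-AV1 encoder (the CRF and preset values are assumptions; lower presets are slower but compress better, and <code>-crf</code> for libsvtav1 requires a reasonably recent FFmpeg):
<syntaxhighlight lang="bash">
ffmpeg -i input.mp4 -c:v libsvtav1 -crf 30 -preset 6 -c:a libopus -b:a 128k output.mkv
</syntaxhighlight>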
===H265/HEVC===
H265/HEVC is now a good tradeoff between size, quality, and compatibility.
This has been supported on devices since Android 5.0 (2014).
<syntaxhighlight lang="bash">
ffmpeg -i "$1" -c:v libx265 -crf 23 -preset slow -pix_fmt yuv444p10le -c:a libopus -b:a 128k "$2"
</syntaxhighlight>
;Notes
* The pixel format <code>yuv444p10le</code> is 10-bit color without chroma subsampling. If your source is already 8-bit with subsampled chroma, you can use <code>yuv420p</code> instead for 8-bit color and 4:2:0 chroma subsampling.
===H264===
Use H264 if you need compatibility with very old or low-end devices.
<syntaxhighlight lang="bash">
# libfdk_aac requires an FFmpeg build with --enable-libfdk-aac; otherwise use the built-in aac encoder.
ffmpeg -i "$1" -c:v libx264 -crf 28 -preset medium -pix_fmt yuv420p -c:a libfdk_aac -b:a 128k "$2"
</syntaxhighlight>
===Opus===
For streaming:
<syntaxhighlight lang="bash">
ffmpeg -i input.wav -c:a libopus -b:a 96k output.opus
</syntaxhighlight>
See https://wiki.xiph.org/Opus_Recommended_Settings.