FFmpeg

* [https://www.ffmpeg.org/doxygen/trunk/structAVFrame.html <code>AVFrame</code>] Decoded audio or video data.
* [https://www.ffmpeg.org/doxygen/trunk/structSwsContext.html <code>SwsContext</code>] Used for image scaling, colorspace conversion, and pixel format conversion operations.
====Pixel Formats====
[https://www.ffmpeg.org/doxygen/4.0/pixfmt_8h.html Reference]<br>
Pixel formats are stored as <code>AVPixelFormat</code> enums.<br>
Below are descriptions for a few common pixel formats.<br>
Note that the exact sizes of buffers may vary depending on alignment.<br>
=====AV_PIX_FMT_RGB24=====
This is the standard 24 bits per pixel interleaved RGB format.<br>
In your AVFrame, data[0] contains a single buffer of interleaved samples: RGBRGBRGB...<br>
Each pixel takes <math>3</math> bytes, so linesize[0] is typically <math>3 * width</math> bytes per row, though it may be larger when rows are padded for alignment.
=====AV_PIX_FMT_YUV420P=====
This is a planar YUV pixel format with chroma subsampling.<br>
Each pixel has its own luma component (Y), but each <math>2 \times 2</math> block of pixels shares one pair of chrominance components (U, V).<br>
In your AVFrame, data[0] contains the Y plane, data[1] the U plane, and data[2] the V plane.<br>
data[0] will typically be <math>width * height</math> bytes.<br>
data[1] and data[2] will typically be <math>width * height / 4</math> bytes each.<br>


===Muxing to memory===