Yami is short for Yet Another Media Infrastructure; it is YUMMY for your video experience on Linux-like platforms.
Yami is a core building block for media solutions, with efficient and simple APIs and no external dependencies. It parses video streams and decodes/encodes them, leveraging hardware acceleration. It targets two types of usage:
- Hardware-accelerated codecs are the only missing piece (an existing media framework is already in place), for example: Chrome OS/Android, possibly ffmpeg.
- The usage is simple enough that only the codec matters, for example: remote desktop client/server, video surveillance.
Yami serves as a development kit for VAAPI; it has rich tests/examples to demonstrate and verify VAAPI features. The tests can run without a window system (X11/Wayland, etc.) and also support advanced features like texture video (dma_buf, etc.).
Yami doesn't target the GStreamer world, though it can be tested through gst-omx (in an inefficient way).
- git clone git://anongit.freedesktop.org/vaapi/libva
- git clone git://anongit.freedesktop.org/vaapi/intel-driver
- git clone https://github.com/01org/libyami.git
- There is no special dependency for building the libyami core library.
- If you want to build the yami decode tests with texture-video support, additional EGL/GLES2 packages are required (sudo apt-get install libgles2-mesa-dev libegl1-mesa-dev).
#!/bin/sh
if [ -n "$1" ]; then
export YAMI_ROOT_DIR="$1"
else
export YAMI_ROOT_DIR="/opt/yami"
fi
export VAAPI_PREFIX="${YAMI_ROOT_DIR}/vaapi"
export LIBYAMI_PREFIX="${YAMI_ROOT_DIR}/libyami"
ADD_PKG_CONFIG_PATH="${VAAPI_PREFIX}/lib/pkgconfig/:${LIBYAMI_PREFIX}/lib/pkgconfig/"
ADD_LD_LIBRARY_PATH="${VAAPI_PREFIX}/lib/:${LIBYAMI_PREFIX}/lib/"
ADD_PATH="${VAAPI_PREFIX}/bin/"
PLATFORM_ARCH_64=`uname -a | grep x86_64`
if [ -n "$PKG_CONFIG_PATH" ]; then
export PKG_CONFIG_PATH="${ADD_PKG_CONFIG_PATH}:$PKG_CONFIG_PATH"
elif [ -n "$PLATFORM_ARCH_64" ]; then
export PKG_CONFIG_PATH="${ADD_PKG_CONFIG_PATH}:/usr/lib/pkgconfig/:/usr/lib/x86_64-linux-gnu/pkgconfig/"
else
export PKG_CONFIG_PATH="${ADD_PKG_CONFIG_PATH}:/usr/lib/pkgconfig/:/usr/lib/i386-linux-gnu/pkgconfig/"
fi
export LD_LIBRARY_PATH="${ADD_LD_LIBRARY_PATH}:$LD_LIBRARY_PATH"
export PATH="${ADD_PATH}:$PATH"
echo "*======================current configuration============================="
echo "* VAAPI_PREFIX: $VAAPI_PREFIX"
echo "* LIBYAMI_PREFIX: ${LIBYAMI_PREFIX}"
echo "* LD_LIBRARY_PATH: ${LD_LIBRARY_PATH}"
echo "* PATH: $PATH"
echo "*========================================================================="
echo "* vaapi: git clean -dxf && ./autogen.sh --prefix=\$VAAPI_PREFIX && make -j8 && make install"
echo "* libyami: git clean -dxf && ./autogen.sh --prefix=\$LIBYAMI_PREFIX --enable-tests --enable-tests-gles && make -j8 && make install"
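Putting the pieces together, a from-scratch build session might look like the sketch below. The prefix derivation mirrors the environment script above (using its default install root), so the install locations are visible; the build commands themselves are the ones echoed by the script and are shown as comments since they require the checkouts and a toolchain:

```shell
# Derive the install prefixes exactly as the environment script above does,
# using its default root (pass a different root to the script to change this).
export YAMI_ROOT_DIR="/opt/yami"
VAAPI_PREFIX="${YAMI_ROOT_DIR}/vaapi"
LIBYAMI_PREFIX="${YAMI_ROOT_DIR}/libyami"
echo "libva/intel-driver install to: ${VAAPI_PREFIX}"
echo "libyami installs to:           ${LIBYAMI_PREFIX}"

# Then, in each checkout (libva first, then intel-driver, then libyami),
# source the environment script (saved here under a name of our choosing) and build:
#   . ./yamienv.sh
#   git clean -dxf && ./autogen.sh --prefix=$VAAPI_PREFIX && make -j8 && make install
# For libyami, configure with:
#   ./autogen.sh --prefix=$LIBYAMI_PREFIX --enable-tests --enable-tests-gles
```

Building libva and intel-driver into their own prefix first ensures libyami's configure step picks them up via the PKG_CONFIG_PATH set by the script.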
video playback
./decode -i test.h264 [-m 0/1/2/3/4/5]
Full list of decode options:
-i media file to decode
-w wait before quit
-f fourcc of dumped raw frames
-o output directory for dumped frames
-m <render mode>
0: dump video frame to file
1: render to X window
2: texture: render to Pixmap + texture from Pixmap
3: texture: export video frame as drm name (RGBX) + texture from drm name
4: texture: export video frame as dma_buf(RGBX) + texture from dma_buf
5: texture: export video frame as dma_buf(NV12) + texture from dma_buf
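The numeric -m values are easy to mix up; a tiny wrapper of our own (the function name and the readable mode names are ours, the mode numbering is the list above) can make invocations self-describing:

```shell
# mode_flag NAME -> prints the -m value for a readable render-mode name
# (helper of ours; the 0..5 mapping comes from the decode option list above)
mode_flag() {
    case "$1" in
        dump)        echo 0 ;;  # dump video frame to file
        x11)         echo 1 ;;  # render to X window
        pixmap)      echo 2 ;;  # render to Pixmap + texture from Pixmap
        drm-name)    echo 3 ;;  # texture from DRM name (RGBX)
        dmabuf-rgbx) echo 4 ;;  # texture from dma_buf (RGBX)
        dmabuf-nv12) echo 5 ;;  # texture from dma_buf (NV12)
        *)           echo "unknown mode: $1" >&2; return 1 ;;
    esac
}

# Usage (assuming ./decode was built as described above):
#   ./decode -i test.h264 -m "$(mode_flag dmabuf-nv12)"
```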
h264 encoder
./h264encode -i raw.yuv -s I420 -W width_value -H height_value -c AVC -o out.h264
./h264encode -i ./bear_320x192_40frames.yuv -s I420 -W 320 -H 192 -c AVC -o out.h264
encode with camera
./h264encode -i /dev/video0 -s YUY2 -W 640 -H 480 -c AVC -o out.h264
The full list of supported options is:
-i <source yuv filename> load YUV from a file
-W <width> -H <height>
-o <coded file> optional
-b <bitrate> optional
-f <frame rate> optional
-c <codec: AVC|VP8|JPEG> Note: not all codecs are supported yet
-s <fourcc: NV12|I420|YUY2|YV12> Note: not all formats are supported yet
-N <number of frames to encode (default 50 for camera)>, useful for camera input
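When picking a value for the optional -b option, a rough rule of thumb of ours (not something yami prescribes) is about 0.1 bit per pixel per frame; the helper below only does that arithmetic:

```shell
# suggest_bitrate WIDTH HEIGHT FPS -> prints a rough bitrate in bits/s
# (heuristic of ours: ~0.1 bit per pixel per frame; tune to taste)
suggest_bitrate() {
    W=$1; H=$2; FPS=$3
    echo $(( W * H * FPS / 10 ))
}

# e.g., for the camera example above (assuming h264encode was built):
#   ./h264encode -i /dev/video0 -s YUY2 -W 640 -H 480 -c AVC \
#       -b "$(suggest_bitrate 640 480 30)" -o out.h264
```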
- Test libyami with gst-omx: https://github.com/01org/gst-omx/wiki
- Links to all related projects: https://github.com/orgs/01org/teams/01-org-openmax
- We generally follow the WebKit coding style in C++ code (common/, vaapi/, decoder/, encoder/, v4l2/, tests/): http://www.webkit.org/coding/coding-style.html
- codecparsers keeps the existing style inherited from GStreamer
- You can enable the style check with "cp pre-commit .git/hooks/"; it runs after git commit, so you can check/accept/reject the suggested coding style after the initial commit.
- Start work on libyami by forking it to your own account.
- Enable the coding-style check by:
cp pre-commit .git/hooks/
then follow the prompts.
- Commit/push to your forked repo and test it on Chrome OS/Ubuntu.
- Create a pull request.
- Wait for others' review.
- Update your forked repo following the feedback; we recommend a force update to avoid unnecessary commit history, and your pull request is updated automatically.
- Repeat the above two steps until there is no disagreement.
- You can integrate your patch (from your forked repo) once the following conditions are met:
- 1+ member flags OK when the patch is less than 50 lines.
- 2+ members flag OK when the patch is more than 50 lines.
- At least one working day has passed since the latest patch update.
- Integrate the patches manually (NOT with the button on the web page):
git push git@github.com:01org/libyami.git HEAD:master
We do not use the 'Merge pull request' button on the web page because:
- it creates a redundant merge commit in the log.
- it creates an unclear parent-child relation between commits when two pull requests are based on the same commit.
- the pull request is still automatically closed on the web page after the manual push.
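The effect of this policy (linear history, no redundant merge commit) can be seen with a throwaway local repo; the sketch below is a stand-in for the real flow against github, and the branch and identity names are placeholders:

```shell
# Demonstrate fast-forward-only integration on a scratch repo: the reviewed
# branch is merged with --ff-only, so no merge commit is ever created.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "base"
main=$(git symbolic-ref --short HEAD)   # master or main, depending on git version
git checkout -q -b fix-branch           # placeholder for the reviewed PR branch
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "reviewed patch"
git checkout -q "$main"
git merge --ff-only fix-branch          # fast-forward only: refuses a merge commit
git log --oneline                       # two linear commits, zero merges
```

With real pull requests the same idea applies: fetch the contributor's branch, rebase it on master if needed, fast-forward master, and push as shown above.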
- VP8 encoder
- VP9 decoder
- JPEG encoder
- V4L2 wrapper for yami
- Unify encode input frame and decode output frame (transcoding)
- decode: h264, vp8, jpeg
- encode: h264
- pure C language API wrapper
- va drm backend support:
- By default, yami uses the X11 backend; when there is no X server, yami falls back to the DRM backend automatically.
- With the "--disable-x11" configure option, libyami can be built without X11.
- tests: h264/vp8/jpeg decode test; h264 encode test; c-api decoder/encoder test; v4l2 encoder test.
- video textures are supported for decode/render (TFP, DRM name and dma_buf)
- camera input is supported for encode
- various fourcc/format support for encode input frames
- V4L2 codec interface wrapper: it simulates the V4L2 codec interface
- tests for: camera + jpegdec; camera + jpegdec + h264enc
- coded buffer pool, supporting mmap of coded data
- h264 decode optimization to avoid scanning the start code again for NALU input (use VideoDecodeBuffer.flag to indicate the buffer type)
- use getOutput(VideoFrameRawData* ...) in omx, then drop VideoRenderBuffer *getOutput()
- get hw aligned width/height after start()
- B frame h264 encoder support
- dma_buf support for encode input
- use yami in ffmpeg
- h265 decoder
- legacy codec: mpeg2 decoder/encoder, vc1 decoder, mpeg4/h263 decoder/encoder
- vasink on ubuntu
related links: