This repository has been archived by the owner on Oct 23, 2023. It is now read-only.
halleyzhao edited this page Oct 24, 2014 · 97 revisions

Introduction

Yami is short for Yet Another Media Infrastructure; it is YUMMY for your video experience on Linux-like platforms.

Yami is a core building block for media solutions, with efficient, simple APIs and no external dependencies. It parses video streams and decodes/encodes them, leveraging hardware acceleration. It targets two types of usage:

  • A hardware-accelerated codec is the only missing piece (an existing media framework is already in place), for example: ChromeOS/Android, maybe ffmpeg.
  • The usage is simple enough that only the codec is of interest, for example: remote desktop client/server, video surveillance.

Yami serves as a development kit for VAAPI; it has rich tests/examples to demonstrate/verify VAAPI features. The tests can run without a window system (X11/Wayland etc.), and also support advanced features like texture video (dma_buf etc.).

Yami doesn't target the GStreamer world, though it can be tested through gst-omx (in an inefficient way).

Infrastructure Diagram

[diagram: yami infrastructure]

Build & Run

related git repos

  1. git clone git://anongit.freedesktop.org/vaapi/libva
  2. git clone git://anongit.freedesktop.org/vaapi/intel-driver
  3. git clone https://github.com/01org/libyami.git

environments:

  • there are no special dependencies to build the libyami core library.
  • if you want to build yami decode tests with texture-video support, additional EGL/GLES2 packages are required. (sudo apt-get install libgles2-mesa-dev libegl1-mesa-dev)

example script for build environments

#!/bin/sh
if [ -n "$1" ]; then
    export YAMI_ROOT_DIR=$1
else
    export YAMI_ROOT_DIR="/opt/yami"
fi

export VAAPI_PREFIX="${YAMI_ROOT_DIR}/vaapi"
export LIBYAMI_PREFIX="${YAMI_ROOT_DIR}/libyami"
ADD_PKG_CONFIG_PATH="${VAAPI_PREFIX}/lib/pkgconfig/:${LIBYAMI_PREFIX}/lib/pkgconfig/"
ADD_LD_LIBRARY_PATH="${VAAPI_PREFIX}/lib/:${LIBYAMI_PREFIX}/lib/"
ADD_PATH="${VAAPI_PREFIX}/bin/"

PLATFORM_ARCH_64=`uname -a | grep x86_64`
if [ -n "$PKG_CONFIG_PATH" ]; then
    export PKG_CONFIG_PATH="${ADD_PKG_CONFIG_PATH}:$PKG_CONFIG_PATH"
elif [ -n "$PLATFORM_ARCH_64" ]; then
    export PKG_CONFIG_PATH="${ADD_PKG_CONFIG_PATH}:/usr/lib/pkgconfig/:/usr/lib/x86_64-linux-gnu/pkgconfig/"
else
    export PKG_CONFIG_PATH="${ADD_PKG_CONFIG_PATH}:/usr/lib/pkgconfig/:/usr/lib/i386-linux-gnu/pkgconfig/"
fi

export LD_LIBRARY_PATH="${ADD_LD_LIBRARY_PATH}:$LD_LIBRARY_PATH"

export PATH="${ADD_PATH}:$PATH"

echo "*======================current configuration============================="
echo "* VAAPI_PREFIX:               $VAAPI_PREFIX"
echo "* LIBYAMI_PREFIX:             ${LIBYAMI_PREFIX}"
echo "* LD_LIBRARY_PATH:            ${LD_LIBRARY_PATH}"
echo "* PATH:                       $PATH"
echo "*========================================================================="

echo "* vaapi:      git clean -dxf && ./autogen.sh --prefix=\$VAAPI_PREFIX && make -j8 && make install"
echo "* libyami:    git clean -dxf && ./autogen.sh --prefix=\$LIBYAMI_PREFIX --enable-tests --enable-tests-gles && make -j8 && make install"
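Assuming the script above is saved as yami-env.sh (a hypothetical filename), a typical session sources it and then runs the two build commands it echoes; the intel-driver step is an assumption based on the repo list above, installed under the same VAAPI prefix:

```shell
# Source (not execute) the script so the exported variables land in this shell.
. ./yami-env.sh /opt/yami

# Build and install libva first, since both the driver and libyami depend on it.
cd libva
git clean -dxf && ./autogen.sh --prefix="$VAAPI_PREFIX" && make -j8 && make install

# Assumed step: the intel-driver repo, installed under the same VAAPI prefix.
cd ../intel-driver
git clean -dxf && ./autogen.sh --prefix="$VAAPI_PREFIX" && make -j8 && make install

# Finally build libyami with the tests enabled.
cd ../libyami
git clean -dxf && ./autogen.sh --prefix="$LIBYAMI_PREFIX" --enable-tests --enable-tests-gles && make -j8 && make install
```

Sourcing (`.`) rather than executing matters: an executed script exports variables only into its own subshell, so PKG_CONFIG_PATH and friends would be lost when it exits.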

Tests/Examples

  • video playback

 ./decode -i test.h264 [-m 0/1/2/3/4/5]
    

Full list of decode options:

   -i media file to decode
   -w wait before quit
   -f dumped raw frame fourcc
   -o dumped output dir
   -m <render mode>
      0: dump video frame to file
      1: render to X window
      2: texture: render to Pixmap + texture from Pixmap
      3: texture: export video frame as drm name (RGBX) + texture from drm name
      4: texture: export video frame as dma_buf(RGBX) + texture from dma_buf
      5: texture: export video frame as dma_buf(NV12) + texture from dma_buf
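For instance, a hypothetical session on a VAAPI-capable machine (after building with --enable-tests) might exercise a few of the modes listed above:

```shell
# Mode 0 dumps decoded frames to files; -f picks the fourcc, -o the output dir.
./decode -i test.h264 -m 0 -f I420 -o ./dump
# Mode 1 renders into an X window.
./decode -i test.h264 -m 1
# Mode 5 exports each frame as an NV12 dma_buf and samples it as a GLES texture.
./decode -i test.h264 -m 5
```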
  • h264 encoder

     ./h264encode -i raw.yuv -s I420 -W width_value -H height_value -c AVC -o out.h264
     ./h264encode -i ./bear_320x192_40frames.yuv -s I420 -W 320 -H 192 -c AVC -o out.h264
    
  • encode with camera

     ./h264encode -i /dev/video0 -s YUY2 -W 640 -H 480 -c AVC -o out.h264
    
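Before pointing the encoder at a camera, it can help to confirm the device actually delivers YUY2. Assuming the v4l-utils package is installed (an extra tool, not a yami dependency):

```shell
# List the capture pixel formats of /dev/video0; look for 'YUYV' (YUY2).
v4l2-ctl --device=/dev/video0 --list-formats
```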

The full list of supported options is:

    -i <source yuv filename> load YUV from a file
    -W <width> -H <height>
    -o <coded file> optional
    -b <bitrate> optional
    -f <frame rate> optional
    -c <codec: AVC|VP8|JPEG> Note: not supported yet
    -s <fourcc: NV12|I420|YUY2|YV12> Note: not supported yet
    -N <number of frames to encode (default 50 for camera), useful for camera input>
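Because raw YUV carries no header, -W, -H and -s must match the input file exactly. A quick sanity check is to compare the file size against the frame arithmetic; for the bear_320x192_40frames.yuv clip used above:

```shell
# I420 stores a full-resolution Y plane plus quarter-resolution U and V planes,
# so each frame occupies W*H*3/2 bytes.
W=320; H=192; FRAMES=40
FRAME_BYTES=$((W * H * 3 / 2))
TOTAL=$((FRAME_BYTES * FRAMES))
echo "frame: $FRAME_BYTES bytes, file: $TOTAL bytes"
# Compare TOTAL against, e.g., `stat -c %s bear_320x192_40frames.yuv`.
```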

Coding Style

  • we generally follow the WebKit coding style in C++ code (common/, vaapi/, decoder/, encoder/, v4l2/ and tests/): http://www.webkit.org/coding/coding-style.html
  • codecparsers keeps the existing style from gst
  • you can enable the style check with "cp pre-commit .git/hooks/"; it runs after git commit, so you can check/accept/reject the suggested coding style after the initial commit.

Code Review Process

  1. start work on libyami by forking it to your home repo
  2. enable the coding style check: cp pre-commit .git/hooks/
    then follow the prompts.
  3. commit/push to your home repo, test it on ChromeOS/Ubuntu
  4. create a 'pull request'
  5. wait for others' review
  6. update your home repo following the feedback
    we recommend a force update to avoid unnecessary commit history; your pull request gets updated automatically.
  7. repeat the above two steps until there is no disagreement
  8. you can integrate your patch (from your home repo) once these conditions are met:
  • 1+ member flags ok when the patch is less than 50 lines.
  • 2+ members flag ok when the patch is more than 50 lines.
  • at least one work day has passed since the latest patch update.
  9. integrate the patches manually (NOT with the button on the web page)

we do not use the 'Merge pull request' button on the web page, because:

  • it creates a redundant commit log.
  • it creates an unclear parent-child relation between commits when two merge requests are based on the same commit id.

After manual integration, the pull request is closed automatically on the web page.
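The manual integration step might look like the following sketch; the remote URL and branch names are placeholders, not part of the project documentation:

```shell
# Fetch the contributor's reviewed branch (names are hypothetical).
git remote add contributor https://github.com/contributor/libyami.git
git fetch contributor
# Replay the approved patches onto master so history stays linear,
# instead of creating the merge commit the web button would produce.
git checkout master
git cherry-pick contributor/fix-branch~2..contributor/fix-branch
git push origin master
```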

In Progress Features

  • VP8 encoder
  • VP9 decoder
  • JPEG encoder
  • V4L2 wrapper for yami
  • Unify encode input frame and decode output frame (transcoding)

Supported Features

  • decode: h264, vp8, jpeg
  • encode: h264
  • pure C language API wrapper
  • va drm backend support:
  • by default, yami uses the X11 backend; when there is no X server, yami falls back to the DRM backend automatically.
  • with the "--disable-x11" configure option, libyami can be built without X11.
  • tests: h264/vp8/jpeg decode test; h264 encode test; c-api decoder/encoder test; v4l2 encoder test.
  • video textures are supported for decode/render (TFP, DRM name and dma_buf)
  • camera input is supported for encode
  • various fourcc/format support for the encode input frame
  • V4L2 codec interface wrapper: it simulates the V4L2 codec interface

TODO

  • test for: camera + jpegdec; camera+jpegdec+h264enc
  • coded buffer pool, support mmap of coded data
  • h264 decode optimization to avoid scanning the start code again for nalu input (use VideoDecodeBuffer.flag to indicate the buffer type)
  • use getOutput(VideoFrameRawData* ...) in omx, then drop VideoRenderBuffer *getOutput()
  • get hw aligned width/height after start()
  • B frame h264 encoder support
  • dma_buf support for encode input
  • use yami in ffmpeg
  • h265 decoder
  • legacy codec: mpeg2 decoder/encoder, vc1 decoder, mpeg4/h263 decoder/encoder
  • vasink on ubuntu
