Ver Very very Slow #2231

Open
sysmaya opened this issue Oct 17, 2024 · 17 comments

@sysmaya

sysmaya commented Oct 17, 2024

What an absurdly slow and exasperating thing.
2 hours to make a 5 minute video??
There are faster alternatives:
FFmpeg only
VidGear

  • Anything is faster than MoviePY, it is faster to make the video in

Expected Behavior

You expect a 5-minute video to take less than 10 minutes to complete.

Actual Behavior

2 long hours waiting to build the video

Steps to Reproduce the Problem

Install MoviePy
Load a 5-minute WAV audio
Place some image clips
Add 50 text subtitles (a minimal sketch of such a script follows)
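A minimal sketch of the workload described above (file names, durations, and subtitle text are hypothetical; MoviePy 1.x API assumed):

# Hypothetical repro sketch, not the reporter's exact script
from moviepy.editor import (AudioFileClip, CompositeVideoClip, ImageClip,
                            TextClip)

audio = AudioFileClip("narration.wav")                            # ~5-minute WAV
background = ImageClip("slide.png").set_duration(audio.duration)

# 50 short text subtitles spread over the audio (TextClip needs ImageMagick in v1.x)
subtitles = [
    TextClip(f"subtitle {i}", fontsize=40, color="white")
    .set_start(i * 6)
    .set_duration(5)
    .set_position(("center", "bottom"))
    for i in range(50)
]

video = CompositeVideoClip([background, *subtitles]).set_audio(audio)
video.write_videofile("out.mp4", fps=24)                          # this is the slow step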

@sysmaya sysmaya added the bug Issues that report (apparent) bugs. label Oct 17, 2024
@steinathan

Very slow indeed, found an alternative yet?

@sysmaya
Author

sysmaya commented Oct 18, 2024

Very slow indeed, found an alternative yet?

I'm writing my own functions to use ffmpeg, they are 10 times faster. But it's tedious.

import os
import random
import re
import string
import subprocess
import time

def ffmpeg_image_overlay(start_time, duration, x='(W-w)/2', y='(H-h)/2'):
    """
    Generates an overlay filter for FFmpeg.
    Args:
        start_time (float): Start time in seconds to display the image.
        duration (float): Duration in seconds to display the image.
        x (str): Expression for the horizontal position of the image.
        y (str): Expression for the vertical position of the image.
    Returns:
        str: Overlay filter for FFmpeg.
    """
    return f"[0:v][1:v] overlay={x}:{y}:enable='between(t,{start_time},{start_time + duration})'"

def ffmpeg_run_command(video_input, video_output, filtro, imageFile=''):
    """
    Run the FFmpeg command with the provided filter.
    Args:
        video_input (str): Path of the input video file.
        video_output (str): Path to the output video file.
        filtro (str): FFmpeg filter to apply to the video.
        imageFile (str): Optional image file to overlay on the video.
    Returns:
        bool: True if the command succeeds, False if it fails.
    """
    inicioExec = time.time()
    filtro = re.sub(r'\s+', ' ', filtro)

    # Random temporary name for the output file
    caracteres = string.ascii_uppercase + string.digits
    nombreTMP = ''.join(random.choice(caracteres) for _ in range(12)) + '.mp4'

    ffmpeg_path = r'C:\ffmpeg\bin\ffmpeg.exe'
    directorio_actual = os.path.dirname(os.path.abspath(__file__))
    video_input = os.path.join(directorio_actual, video_input)

    if imageFile == '':
        comando = [
            ffmpeg_path,
            '-i', video_input,              # Input video
            '-vf', filtro,                  # Video filter that overlays the text
            '-codec:a', 'copy',             # Copy the audio unchanged
            '-stats',                       # Show the progress banner
            nombreTMP                       # Output file
        ]
    else:
        comando = [
            ffmpeg_path,
            '-i', video_input,              # Input video
            '-i', imageFile,                # Input image
            '-filter_complex', filtro,      # Video filter that overlays the image
            '-pix_fmt', 'yuv420p',          # Pixel format
            '-codec:a', 'copy',             # Copy the audio unchanged
            '-v', 'quiet',                  # Hide all messages except the progress banner
            '-stats',                       # Show the progress banner
            nombreTMP                       # Output file
        ]

    try:
        subprocess.run(comando, check=True)
        duracion_ejecucion = time.time() - inicioExec
        minutos = int(duracion_ejecucion // 60)
        segundos = int(duracion_ejecucion % 60)
        milisegundos = int((duracion_ejecucion - int(duracion_ejecucion)) * 1000)
        print(f"run_ffmpeg() OK: {minutos}:{segundos}:{milisegundos}")

        if os.path.exists(video_output):
            os.remove(video_output)
        os.rename(nombreTMP, video_output)
        return True
    except subprocess.CalledProcessError as e:
        print(f"run_ffmpeg() Error: {e}")
        return False

@ericmadureira

Hi everyone, haven't used the lib yet but this issue caught my attention. Is rendering really taking that long?
I'm looking for "video edit via code" libs to automate video editing.

@steinathan

It is slow, but that's just because of ffmpeg - moviepy doesn't make efficient use of ffmpeg. But if you're on Mac, it can get about 30% faster by using the hevc_videotoolbox encoder.
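For example, a sketch of handing that encoder to ffmpeg through MoviePy's codec parameter (file names are placeholders; hevc_videotoolbox only exists in macOS builds of ffmpeg):

# Sketch: pass the macOS hardware encoder through write_videofile's codec argument
from moviepy.editor import VideoFileClip

clip = VideoFileClip("input.mp4")  # hypothetical input
clip.write_videofile("output.mp4", codec="hevc_videotoolbox", audio_codec="aac")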

I'm currently using moviepy in my app: https://dub.sh/voidface

@ericmadureira

Have you heard of https://www.remotion.dev/ or https://re.video/?
I'm reading about them. I'm interested in editing content faster without using GUI programs like CapCut, etc.

@steinathan

Yes, I've heard about them but haven't used them. If you're just looking for editing and are comfortable with React, then Remotion is the way to go - you'd still need a headless browser to render the video, but it's faster.

My use case is mainly automation and Python - it's sad there are no faster alternatives to moviepy.

@JoelOnyedika

NGL, moviepy is slow like crazy. I'm planning to use it in my SaaS project, which uses ReactJS on the frontend, Django on the backend, and MoviePy + Flask for the video processing, to be deployed on a VPS, but I think I'll be forced to move to ffmpeg because MoviePy is way too slow and, seriously, it has way too many bugs. MoviePy can't work with the latest ffmpeg binary - it just breaks, which is the worst nightmare in production.

And well, I still don't get the Python community: moviepy is the best video processing library for Python, which is basically the go-to language for automation, and yet it has little to no support. That's just unfortunate, really unfortunate.

@steinathan

steinathan commented Oct 29, 2024

@JoelOnyedika that's true, moviepy is slow, but I'm able to use the latest ffmpeg with the dev branch

Nevertheless, I did a comparison of raw ffmpeg with moviepy and the speed was almost the same

My use case is mainly adding watermarks, resizing, adding audio, and cropping, and it gets almost the same speed

So I copied the ffmpeg commands moviepy generates and ran them outside moviepy, and got the same results

The only part that seems a bit slower is write_videofile, which I'm running asynchronously with asyncio.to_thread
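A sketch of that pattern (clip and file names are placeholders):

# Sketch: keep the event loop responsive by pushing the blocking export to a thread
import asyncio
from moviepy.editor import VideoFileClip

async def render(clip, path: str) -> None:
    # asyncio.to_thread (Python 3.9+) runs the blocking call in a worker thread
    await asyncio.to_thread(clip.write_videofile, path, codec="libx264")

asyncio.run(render(VideoFileClip("input.mp4"), "output.mp4"))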

Check out my repo
https://github.com/steinathan/reelsmaker

[edit]
FFmpeg is a lot faster if you're copying codecs (stream copy instead of re-encoding)
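For instance, a stream-copy trim sketched with subprocess (file names and timestamps are hypothetical); -c copy skips decoding and encoding entirely, but it can't be combined with filters that need the decoded frames:

# Sketch: copy a 60-second excerpt starting at 10s without re-encoding
# (cuts land on keyframes, so the start may be slightly off)
import subprocess

subprocess.run(
    ["ffmpeg", "-y", "-ss", "10", "-i", "input.mp4",
     "-t", "60", "-c", "copy", "clip.mp4"],
    check=True,
)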

@JoelOnyedika

Okay, thanks a lot. But what do you mean by using the dev branch? Please drop a link.

@MitchMunn

You should clone moviepy from source and build it. The latest release is from 2020, meaning if you just pip install moviepy you will get that version. I believe some work has been done to improve the write time since.

Also, don't use method='compose' if you are concatenating video files - especially if you have them nested.
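In code, that means leaving concatenate_videoclips on its default "chain" method when the clips share a resolution (a sketch; the file names are placeholders):

# Sketch: "chain" streams the clips back to back; "compose" builds a
# CompositeVideoClip and re-renders every frame, which is much slower.
from moviepy.editor import VideoFileClip, concatenate_videoclips

clips = [VideoFileClip(p) for p in ("a.mp4", "b.mp4", "c.mp4")]  # hypothetical files
final = concatenate_videoclips(clips, method="chain")            # not method="compose"
final.write_videofile("joined.mp4")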

@JoelOnyedika

JoelOnyedika commented Nov 17, 2024

@steinathan Yo bro, do you know how to do progressive text highlighting synchronized with text-to-speech narration? This effect is often called "karaoke-style highlighting" or "progressive text highlighting" and is commonly used in educational videos and lyric videos.

I have been trying to do something like this in moviepy but I cannot get it right. Do you know a logic I can use to get it right? Here is a sample video: https://rpie.b-cdn.net/wp-content/uploads/2023/11/telegram-cloud-document-2-5240322943975701817.mp4

@steinathan

steinathan commented Nov 17, 2024

I gave up on moviepy and now use plain FFmpeg with ffmpeg-python

but you can do that with a combination of these two libs (ASS subtitles). Bro to bro - that's what I've been using, but it's very manual, so I'm finally switching to remotion.dev because it's just CSS

import pysubs2
from pyonfx import Ass, Line

# convert casual SRT to ASS (if you have an SRT already)
subs = pysubs2.load(
    "/tmp/vid_r70pddalx9j4z1qjwfc0okd.srt"
)
subs.info["PlayResX"] = 1280  # type: ignore
subs.info["PlayResY"] = 720  # type: ignore

subs.styles["Default"] = pysubs2.SSAStyle(
    bold=True,
    fontsize=30,
    fontname="Luckiest Guy",
    shadow=5.0,
    alignment=pysubs2.Alignment.BOTTOM_CENTER,
)

out = "/tmp/subtitles.ass"
# save so we can work with PyonFX
subs.save(out)

# Load the .ass file
io = Ass(out)

# Get the events (lines) in the script
meta, styles, lines = io.get_data()

def sub(line: Line, l: Line):
    # TODO: add karaoke effects
    l.text = "{\\fad(%d,%d)}%s" % (300, 200, line.text)
    io.write_line(l)


for line in lines:
    sub(line, line.copy())


io.path_output = out
io.save()

Then burn it in with ffmpeg:

./ffmpeg -i input.mp4 -vf "ass=subtitles.ass" -c:v libx264 -crf 23 -preset medium -c:a copy output.mp4 -y 

https://github.com/CoffeeStraw/PyonFX
https://pysubs2.readthedocs.io/
https://aegisub.org/


@JoelOnyedika

Damn. It's so manual. Thanks btw.

@sysmaya
Author

sysmaya commented Nov 18, 2024

This video was made with moviepy, but only for the transitions; the text was made with pure ffmpeg, as were the curtain and the initial video, and they were joined together with ffmpeg.
https://www.youtube.com/watch?v=RQ_aRkg009s

@JoelOnyedika

@steinathan when you were using remotion, did you experience this issue with remotion's webpack config? Here is the issue: remotion-dev/remotion#4546

@bzczb

bzczb commented Nov 24, 2024

In large part it's slow because there's a LOT of redundant "inspection" going on while rendering: see #2094

Subtitles are also slow to render for different reasons.

@aperture147

Don't try to create one long blank clip and place your clips in it with the set_start function. Build multiple small clips with the same size and a transparent background, place your content there, concatenate them, then use the result in a composite clip later.

Example:

# Clip() stands in for whatever clip you actually build (ImageClip, TextClip, ...)
bg_clip_list = []

bg_clip_a = Clip()
bg_clip_list.append(bg_clip_a)
bg_clip_b = Clip()
bg_clip_list.append(bg_clip_b)
bg_clip_c = Clip()
bg_clip_list.append(bg_clip_c)
bg_clip_d = Clip()
bg_clip_list.append(bg_clip_d)

fg_clip_list = []

# make these clips' backgrounds transparent
fg_clip_a = Clip()
fg_clip_list.append(fg_clip_a)
fg_clip_b = Clip()
fg_clip_list.append(fg_clip_b)
fg_clip_c = Clip()
fg_clip_list.append(fg_clip_c)
fg_clip_d = Clip()
fg_clip_list.append(fg_clip_d)

bg_clip = concatenate_videoclips(bg_clip_list)
fg_clip = concatenate_videoclips(fg_clip_list)

your_clip = CompositeVideoClip([bg_clip, fg_clip])

Moviepy implements the timeline and layers using ffmpeg filters, which is the slowest thing ffmpeg can do, and honestly I can't think of any better implementation without using libavcodec directly. This implementation makes the timeline and layers horribly slow. I've switched to libopenshot now.
