Lightning Whisper MLX

An incredibly fast implementation of Whisper optimized for Apple Silicon.

Whisper Decoding Speed

10x faster than Whisper CPP, 4x faster than the current MLX Whisper implementation.

Features

  • Batched Decoding -> Higher Throughput
  • Distilled Models -> Faster Decoding (fewer layers)
  • Quantized Models -> Faster Memory Movement
  • Coming Soon: Speculative Decoding -> Faster Decoding with Assistant Model

Installation

Install Lightning Whisper MLX using pip:

pip install lightning-whisper-mlx

Usage

Models

["tiny", "small", "distil-small.en", "base", "medium", distil-medium.en", "large", "large-v2", "distil-large-v2", "large-v3", "distil-large-v3"]

Quantization

[None, "4bit", "8bit"]

Example

from lightning_whisper_mlx import LightningWhisperMLX

whisper = LightningWhisperMLX(model="distil-medium.en", batch_size=12, quant=None)

text = whisper.transcribe(audio_path="/audio.mp3")['text']

print(text)
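The quantization options listed above plug into the same constructor. A minimal sketch of loading a 4-bit quantized model; the model name and audio path are just the placeholders reused from the example above:

from lightning_whisper_mlx import LightningWhisperMLX

# 4-bit quantized weights -> less memory movement per decoding step
whisper = LightningWhisperMLX(model="distil-medium.en", batch_size=12, quant="4bit")

text = whisper.transcribe(audio_path="/audio.mp3")['text']

print(text)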

Notes

  • The default batch_size is 12. A higher value gives better throughput, but you may run into memory issues. The right setting depends mostly on model size: smaller models can use a higher batch size, larger models need a lower one. Also keep your unified memory in mind! See the sketch below for one way to pick a value.
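A minimal sketch of that heuristic, assuming you choose the value yourself. The helper pick_batch_size and the numbers inside it are illustrative assumptions, not values from the library:

from lightning_whisper_mlx import LightningWhisperMLX

def pick_batch_size(model: str) -> int:
    # Illustrative heuristic only: smaller models -> larger batches,
    # larger models -> smaller batches. Tune to fit your unified memory.
    if model in ("tiny", "base", "small", "distil-small.en"):
        return 24
    if model in ("medium", "distil-medium.en"):
        return 12
    return 6  # "large", "large-v2", "distil-large-v2", "large-v3", "distil-large-v3"

name = "large-v3"
whisper = LightningWhisperMLX(model=name, batch_size=pick_batch_size(name), quant=None)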

Credits

  • Mustafa - Creator of Lightning Whisper MLX
  • Awni - Implementation of Whisper MLX (I built on top of this)
  • Vaibhav - Inspired me to build this (he created a version optimized for CUDA)
