Header Image

TRANSCRIBE_DEVICE: Transcribes using the GPU (CUDA only) or the CPU. Accepts 'cpu', 'gpu', or 'cuda'. (default: cpu)

WHISPER_MODEL: Can be: 'tiny', 'tiny.en', 'base', 'base.en', 'small', 'small.en', 'medium', 'medium.en', 'large-v1', 'large-v2', 'large-v3', 'large', 'distil-large-v2', 'distil-medium.en', 'distil-small.en' (default: medium)

CONCURRENT_TRANSCRIPTIONS: Number of files to transcribe in parallel (default: 2)

WHISPER_THREADS: Number of threads to use during computation (default: 4)

MODEL_PATH: Where the WHISPER_MODEL is stored. Defaults to a 'models' folder in the directory where you execute the script. (default: ./models)

PROCADDEDMEDIA: Generates subtitles for all newly added media, regardless of existing external/embedded subtitles (subject to SKIPIFINTERNALSUBLANG) (default: True)

PROCMEDIAONPLAY: Generates subtitles for all played media, regardless of existing external/embedded subtitles (subject to SKIPIFINTERNALSUBLANG) (default: True)


NAMESUBLANG: Sets the language code used to name the subtitle file. Instead of using EN, I'm using AA so it doesn't mix with existing external EN subs, and AA sorts higher in Plex's subtitle list. (default: aa)
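
As a minimal sketch of how the language code ends up in the sidecar filename (this `subtitle_path` helper is hypothetical; subgen's actual naming logic may differ, but Plex and Jellyfin pick up sidecar subtitles named `<media>.<lang>.srt`):

```python
from pathlib import Path

def subtitle_path(media_file: str, sub_lang: str = "aa") -> str:
    """Build the sidecar subtitle filename next to the media file.

    Hypothetical helper illustrating NAMESUBLANG: the media
    extension is swapped for '.<sub_lang>.srt'.
    """
    return str(Path(media_file).with_suffix(f".{sub_lang}.srt"))
```

With the default `aa`, `/tv/Show/episode.mkv` becomes `/tv/Show/episode.aa.srt`, which sorts above external `en` subtitles in Plex.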

SKIPIFINTERNALSUBLANG: Skips subtitle generation if the file already has an internal subtitle matching this three-letter language code (default: eng)

WORD_LEVEL_HIGHLIGHT: Highlights each word as it's spoken in the subtitle. (default: False)


PLEXSERVER: Set this to your local Plex server address and port (default: http://plex:32400)

PLEXTOKEN: Set this to your Plex token (default: token here)

JELLYFINSERVER: Set to your Jellyfin server address/port (default: http://jellyfin:8096)

JELLYFINTOKEN: Generate a token inside the Jellyfin interface (default: token here)

WEBHOOKPORT: Change this if you need a different port for your webhook (default: 9000)

USE_PATH_MAPPING: Similar to Sonarr and Radarr path mapping, this will attempt to replace paths on file systems that don't have identical paths. Currently only one path replacement is supported. (default: False)


PATH_MAPPING_FROM: This is the path of my media as seen by my Plex server (default: /tv)

PATH_MAPPING_TO: This is the path of that same folder as seen by my Mac Mini, which runs the script (default: /Volumes/TV)
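
The mapping is a simple prefix swap, which can be sketched like this (a hypothetical `map_path` helper, not subgen's exact code; the defaults are the documented example values):

```python
def map_path(path: str,
             mapping_from: str = "/tv",
             mapping_to: str = "/Volumes/TV") -> str:
    """Replace the media-server path prefix with the local one.

    Minimal sketch of USE_PATH_MAPPING: only a single prefix
    replacement is applied, mirroring the documented behavior.
    Paths that don't start with the prefix pass through unchanged.
    """
    if path.startswith(mapping_from):
        return mapping_to + path[len(mapping_from):]
    return path
```

So a webhook reporting `/tv/Show/episode.mkv` is transcribed from `/Volumes/TV/Show/episode.mkv` on the machine running the script.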

TRANSCRIBE_FOLDERS: Takes a pipe-separated ('|') list of folders, iterates through them, and queues their files for subtitle generation if they don't have internal subtitles (default: )
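
Parsing the pipe-separated value might look like the following (a hypothetical sketch, not subgen's exact implementation):

```python
def parse_transcribe_folders(value: str) -> list[str]:
    """Split the TRANSCRIBE_FOLDERS value into individual folder paths.

    Hypothetical parsing sketch: entries are split on '|' and
    blank entries are dropped, so the empty default yields no folders.
    """
    return [folder.strip() for folder in value.split("|") if folder.strip()]
```

For example, `TRANSCRIBE_FOLDERS=/tv|/movies` yields two folders to scan.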

TRANSCRIBE_OR_TRANSLATE: Takes either 'transcribe' or 'translate'. Transcribe will transcribe the audio in the same language as the input. Translate will transcribe and translate into English. (default: transcribe)


COMPUTE_TYPE: Sets the compute type using the options described at https://github.com/OpenNMT/CTranslate2/blob/master/docs/quantization.md (default: auto)

DEBUG: Provides some debug data that can be helpful for troubleshooting path mapping and other issues. If set to True, any modification to the script will auto-reload it (if it isn't actively transcribing). Useful for making small tweaks without re-downloading the whole file. (default: True)


FORCE_DETECTED_LANGUAGE_TO: Forces the model to use this language instead of the detected one; takes a two-letter language code. (default: )

CLEAR_VRAM_ON_COMPLETE: Deletes the model and runs garbage collection when the queue is empty. Good if you need the VRAM for something else. (default: True)


UPDATE: If True, pulls the latest subgen.py from the repository; if False, uses the original subgen.py built into the Docker image. Standalone users can use this with launcher.py to get updates. (default: False)


APPEND: Will add the following at the end of a subtitle: 'Transcribed by whisperAI with faster-whisper ({whisper_model}) on {datetime.now()}' (default: False)


MONITOR: Monitors TRANSCRIBE_FOLDERS for real-time changes to see if we need to generate subtitles (default: False)


USE_MODEL_PROMPT: When set to True, uses the default prompt stored in greetings_translations ('Hello, welcome to my lecture.') to try to force the use of punctuation in transcriptions that would otherwise lack it. (default: False)


CUSTOM_MODEL_PROMPT: If USE_MODEL_PROMPT is True, you can override the default prompt (see [prompt engineering in whisper](https://medium.com/axinc-ai/prompt-engineering-in-whisper-6bb18003562d) for great examples). (default: )

LRC_FOR_AUDIO_FILES: Generates LRC files (instead of SRT) for the file types: '.mp3', '.flac', '.wav', '.alac', '.ape', '.ogg', '.wma', '.m4a', '.m4b', '.aac', '.aiff' (default: True)
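
The decision is just an extension check against that list, roughly like this (a hypothetical `wants_lrc` helper, not subgen's exact code):

```python
from pathlib import Path

# Audio extensions listed for LRC_FOR_AUDIO_FILES.
AUDIO_EXTENSIONS = {'.mp3', '.flac', '.wav', '.alac', '.ape', '.ogg',
                    '.wma', '.m4a', '.m4b', '.aac', '.aiff'}

def wants_lrc(filename: str) -> bool:
    """Return True if this file type gets an LRC file instead of SRT."""
    return Path(filename).suffix.lower() in AUDIO_EXTENSIONS
```

Video files fall through to the normal SRT path; the comparison is case-insensitive, so `.MP3` matches too.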


CUSTOM_REGROUP: Attempts to regroup some of the segments to produce a cleaner-looking subtitle. See #68 for discussion. Set to blank to use Stable-TS's default regroup algorithm of cm_sp=,* /,_sg=.5_mg=.3+3_sp=.* /。/?/? (default: cm_sl=84_sl=42++++++1)

DETECT_LANGUAGE_LENGTH: Detects the language using the first X seconds of the audio. (default: 30)