The Softvelum team is continuously improving the Nimble Streamer voice recognition feature set. Real-time transcription and translation of live streams into subtitles has become a key requirement for live streaming, so we keep working on it to give our customers the best options available. Previously, Nimble Streamer introduced native AI-based speech recognition with the Whisper engine and integrated with Speechmatics, a robust cloud-based transcription and translation service.
Now Nimble Streamer has integration with KWIKmotion Real-Time AI-Powered Captions. KWIKmotion provides accurate real-time speech-to-text transcription and allows switching between multiple languages, including English, French, Arabic, Persian, Mandarin, and more.
Nimble Streamer sends audio from your live stream to the KWIKmotion speech recognition endpoint. KWIKmotion then processes the audio in real time and returns captions to Nimble, which injects them into the corresponding HLS output stream as WebVTT subtitles.
Note that KWIKmotion service pricing applies to the transcription process; you need to contact KWIKmotion for an exact quote. Softvelum is not affiliated with KWIKmotion.
Now, let’s see how to enable KWIKmotion processing in Nimble Streamer.
Prerequisites
In order to enable and set up KWIKmotion transcription in Nimble Streamer, you need to have the following:
- A WMSPanel account with an active subscription.
- Nimble Streamer installed on Ubuntu 24.04 and registered in WMSPanel. Other OSes and versions will be supported later.
- Live Transcoder installed, with its license activated and registered on your Nimble Streamer instance. You can do this easily via the panel UI.
- An Addenda license activated and registered on your Nimble Streamer instance.
In order to add the Nimble Transcriber transcription engine, run the following commands and restart the Nimble instance as shown:
sudo apt install nimble-transcriber
sudo service nimble restart
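If you’d like to double-check that the package is installed and Nimble is back up, you can run an optional sanity check with standard Ubuntu commands; this is not a required step:
dpkg -l nimble-transcriber
sudo service nimble status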
You also need a KWIKmotion endpoint URL and API key. Please contact KWIKmotion to obtain them for your case.
Now let’s proceed with the setup.
Set up Nimble Streamer for KWIKmotion ASR
Once your Nimble instance has the speech recognition package installed, you may enable transcription for that server in general as well as for any particular live stream application.
Enable WebVTT for Nimble instance
If you’d like to enable transcription on the server level, go to the Nimble Streamer top menu, click “Live Streams Settings” and select the server where you want to enable transcription. Then open the Global tab, check the “Generate WebVTT for audio” checkbox and save the settings.

You may also select a particular output application where you’d like to enable transcription, or create a new app setting. Just select the application in the “Applications” tab, check the “Generate WebVTT for audio” checkbox for it and save the settings.
You may also enable the generation of CEA-708 subtitles globally and at the application level; please read this article for more details.
After you apply all the settings, you need to restart the input stream. Once the restarted input is picked up by the Nimble instance, the output HLS stream will have WebVTT subtitles carrying the transcribed and translated closed captions, according to the settings described below.
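As a quick sanity check, you can fetch the HLS master playlist and look for the subtitles rendition; the host, port and stream URL below are placeholders for your actual output:
curl -s http://your-nimble-host:8081/live/stream/playlist.m3u8 | grep EXT-X-MEDIA
If WebVTT generation is active, the playlist should contain #EXT-X-MEDIA entries with TYPE=SUBTITLES.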
Define KWIKmotion settings
Add the following line to the nimble.conf file:
transcriber_config_path = /etc/nimble/transcriber-config.json
Then restart the Nimble instance:
sudo service nimble restart
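For reference, both steps can be done from the command line as in the sketch below, assuming the default config location /etc/nimble/nimble.conf on Ubuntu:
echo 'transcriber_config_path = /etc/nimble/transcriber-config.json' | sudo tee -a /etc/nimble/nimble.conf
sudo service nimble restart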
Below is an example of a basic transcriber-config.json configuration:
{
  "kwikmotion_params": [
    {
      "api_url": "wss://livecc01.kwikmotion.com:8004",
      "api_key": "<your_api_key>"
    },
    {"app": "live", "stream": "stream", "lang": "ar"},
    {"app": "live", "stream": "stream-ar", "lang": "ar", "target_langs": "en", "webvtt_style": "line:70% position:50% align:center"}
  ]
}
The following parameters are used:
- “api_url” and “api_key” are your KWIKmotion endpoint URL and API key, respectively.
- “app” defines the Nimble Streamer application.
- “stream” defines the respective stream. It’s optional; if you don’t specify it, the language settings are applied to all streams within the specified app.
- “lang” is the two-letter language code of the stream’s content.
- “target_langs” is a list of language codes for automatic translation. If it’s not specified, no translation will be added.
- “webvtt_style” defines the display parameters of the captions.
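For instance, a minimal hypothetical entry without the “stream” key would apply one language setting to every stream in the “live” application; the language codes here are just placeholders:
{"app": "live", "lang": "en", "target_langs": "ar"}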
Changes to this file are applied without a Nimble restart. However, you need to restart the input stream in order to transcribe it with the new language settings.
The resulting subtitles can be:
- Rendered by players supporting WebVTT for HLS.
- Distributed through your existing streaming delivery setup.
This makes the integration suitable for OTT services, live events, digital media workflows, and any other scenario requiring real-time captioning.
Please let us know if you have any questions, issues or suggestions for our voice recognition feature set.