WhisperCore is a modular, actor-safe Swift framework for real-time and file-based audio transcription built on Whisper.cpp. It is designed for embedding into native SwiftUI and UIKit apps, as well as cross-platform frameworks such as Flutter or React Native.
WhisperCore provides a simplified API for:
- Loading Whisper models (async or callback-based)
- Managing microphone permissions
- Starting, stopping, or toggling audio recording
- Transcribing audio from files
- Optional playback of audio after transcription
- Resetting internal state between sessions
- Receiving transcription results or errors via delegate
It is ideal for voice interfaces, command processing, dictation, or AI-driven mobile assistants.
- `Whisper/initializeModel(at:)` – Async/await model loading
- `Whisper/initializeModel(at:log:completion:)` – Callback-based model loading
- `Whisper/callRequestRecordPermission()` – Requests mic permission from the user
- `Whisper/startRecording()` – Begins microphone capture
- `Whisper/stopRecording()` – Ends microphone capture
- `Whisper/toggleRecording()` – Toggles between recording and idle
- `Whisper/transcribeSample(from:)` – Transcribes a given audio file
- `Whisper/enablePlayback(_:)` – Enables or disables audio playback
- `Whisper/reset()` – Resets internal state, clearing models and sessions
- `Whisper/canTranscribe()` – Indicates whether transcription is currently possible
- `Whisper/isRecording()` – Returns whether audio recording is active
- `Whisper/isModelLoaded()` – Returns whether a model is loaded
- `Whisper/getMessageLogs()` – Returns internal logs from WhisperCore
- `Whisper/benchmark()` – Runs a model benchmark (DEBUG builds only)
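Putting the file-transcription and state-query calls above together, a minimal flow might look like the sketch below. This is illustrative only: it assumes `transcribeSample(from:)` accepts a file `URL` and can be awaited, and that the state queries return `Bool` — verify the actual signatures in WhisperCore before relying on them.

```swift
import Foundation

// Sketch: transcribe a bundled audio file once a model is loaded.
// Assumes transcribeSample(from:) takes a URL and is async throws,
// and that the state queries return Bool.
func transcribeBundledSample(using whisper: Whisper) async throws {
    guard whisper.isModelLoaded(), whisper.canTranscribe() else {
        print("Model not ready:", whisper.getMessageLogs())
        return
    }

    // Hypothetical sample file shipped in the app bundle.
    guard let sampleURL = Bundle.main.url(forResource: "sample", withExtension: "wav") else {
        return
    }

    whisper.enablePlayback(true)  // play the audio back after transcription
    try await whisper.transcribeSample(from: sampleURL)
    // The transcript arrives via WhisperDelegate.didTranscribe(_:).

    whisper.reset()               // clear state before the next session
}
```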
To receive transcriptions or error feedback, assign a delegate conforming to WhisperDelegate:
- `WhisperDelegate/didTranscribe(_:)` – Called with the transcribed text
- `WhisperDelegate/recordingFailed(_:)` – Called when microphone access or recording fails
- `WhisperDelegate/failedToTranscribe(_:)` – Called when transcription fails
- `WhisperDelegate/startRecording(_:)` – Called when audio recording has started
- `WhisperDelegate/stopRecording(_:)` – Called when audio recording has stopped
```swift
class MyHandler: WhisperDelegate {
    func didTranscribe(_ text: String) {
        print("Transcript:", text)
    }

    func recordingFailed(_ error: Error) {
        print("Recording error:", error.localizedDescription)
    }

    func failedToTranscribe(_ error: Error) {
        print("Transcription error:", error.localizedDescription)
    }
}

let whisper = Whisper()
let handler = MyHandler()       // keep a strong reference to the delegate
whisper.delegate = handler

Task {
    // Example path to a bundled model
    let modelPath = Bundle.main.path(forResource: "ggml-base.en", ofType: "bin")!
    try await whisper.initializeModel(at: modelPath)
    await whisper.callRequestRecordPermission()
    await whisper.startRecording()
    // ... wait or monitor user gesture ...
    await whisper.stopRecording()
}
```

| Model Name | Info | Size | Download URL |
|---|---|---|---|
| tiny | F16 | 75 MiB | tiny.bin |
| tiny-q5_1 | Quantized | 31 MiB | tiny-q5_1.bin |
| tiny-q8_0 | Quantized | 42 MiB | tiny-q8_0.bin |
| tiny.en | F16 (English) | 75 MiB | tiny.en.bin |
| tiny.en-q5_1 | Quantized | 31 MiB | tiny.en-q5_1.bin |
| tiny.en-q8_0 | Quantized | 42 MiB | tiny.en-q8_0.bin |
| base.en | F16 (English) | 142 MiB | base.en.bin |
| base.en-q5_1 | Quantized | 57 MiB | base.en-q5_1.bin |
| base.en-q8_0 | Quantized | 78 MiB | base.en-q8_0.bin |
| small.en-q5_1 | Quantized | 181 MiB | small.en-q5_1.bin |
| small.en-q8_0 | Quantized | 252 MiB | small.en-q8_0.bin |
| large-v3-turbo-q5_0 | Quantized | 547 MiB | large-v3-turbo-q5_0.bin |
| large-v3-turbo-q8_0 | Quantized | 834 MiB | large-v3-turbo-q8_0.bin |
💡 Tip: Smaller quantized models like `tiny-q5_1` load faster and are ideal for lower-end devices or testing. Use `base.en` or larger for more accurate results.

✅ Recommended default model for English-only apps: `ggml-base.en.bin` (142 MiB)

You can also explore the Whisper.cpp GitHub repo for more models, quantization options, and platform-specific setup (including iOS).
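If you prefer to fetch a model at runtime instead of bundling it, the sketch below downloads one with `URLSession`. The hosting URL is an assumption based on the upstream whisper.cpp project's Hugging Face model repository and may change; verify it before shipping.

```swift
import Foundation

// Sketch: download a ggml model into the app's Application Support directory.
// The base URL is an assumption (upstream whisper.cpp publishes models on
// Hugging Face) — confirm it against the Whisper.cpp repo.
func downloadModel(named name: String = "ggml-base.en.bin") async throws -> URL {
    let base = URL(string: "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/")!
    let remote = base.appendingPathComponent(name)

    let support = try FileManager.default.url(
        for: .applicationSupportDirectory,
        in: .userDomainMask,
        appropriateFor: nil,
        create: true
    )
    let destination = support.appendingPathComponent(name)

    // Reuse a previously downloaded copy if present.
    if FileManager.default.fileExists(atPath: destination.path) {
        return destination
    }

    let (tempURL, _) = try await URLSession.shared.download(from: remote)
    try FileManager.default.moveItem(at: tempURL, to: destination)
    return destination
}
```

The returned path can then be handed to `initializeModel(at:)` in place of a bundled model path.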
- WhisperCore iOS Demo – Project used to build WhisperCore.
WhisperCore is released under the MIT License.
See LICENSE for details.