Releases: software-mansion/react-native-executorch
v0.5.2
v0.5.1
v0.5.0 🚀
Announcing React Native ExecuTorch v0.5.0 ⚛️
What’s new?
🏎️ A full native code rewrite based on C++ JSI bindings enabling zero-copy data transfer between native code and JS
🔥 Up to 2x faster Whisper transcription on iOS
📷 CLIP - extract semantic meaning of images
🚴 Significant improvements to the performance of text embeddings
🧘 Rewrite of the STT streaming API delivering better transcription quality & DX
🐐 Specify LLM structured output schemas using zod
👌 LLMs now correctly handle Unicode/emoji in outputs
⚠️ Breaking changes
- Multiple model instances: JSI bindings remove the static singleton limitation. You can now create and run multiple instances of the same model simultaneously.
- We've replaced separate URL imports with bundled model objects. Instead of manually specifying modelSource, tokenizerSource, and tokenizerConfigSource, just pass a single object like LLAMA3_2_1B. The individual URLs are still accessible as properties if you need them.
```typescript
import {
  useLLM,
  LLAMA3_2_1B,
} from 'react-native-executorch';

// Current API:
const llm = useLLM({ model: LLAMA3_2_1B });

// Previous API:
const llama = useLLM({
  modelSource: LLAMA3_2_1B,
  tokenizerSource: LLAMA3_2_TOKENIZER,
  tokenizerConfigSource: LLAMA3_2_TOKENIZER_CONFIG,
});
```
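With the static-singleton limitation gone, the same model can now be instantiated more than once. A minimal sketch of what that enables (the component name and usage pattern are illustrative, not from the official docs; the hook and model object mirror the API shown above):

```typescript
import { useLLM, LLAMA3_2_1B } from 'react-native-executorch';

function TwoAssistants() {
  // Two independent instances of the same model, side by side.
  // Prior to v0.5.0, the static singleton allowed only one at a time.
  const summarizer = useLLM({ model: LLAMA3_2_1B });
  const translator = useLLM({ model: LLAMA3_2_1B });

  // Each instance keeps its own state and can generate independently.
  // ... render UI driving both instances here ...
  return null;
}
```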
v0.4.8
v0.4.7
What's Changed
- chore: update aar path to point to branch instead of release by @NorbertKlockiewicz in #441
Full Changelog: v0.4.6...v0.4.7
v0.4.6
Full Changelog: v0.4.5...v0.4.6
- Changed version of executorch.aar to point to a specific release
v0.3.4
Release Notes:
- Changed version of executorch.aar to point to a specific release
v0.4.5
v0.4.4
v0.4.3
- Removed leftover react-native-audio-api dependency
- Fixed an error when transcribing an audio file shorter than `windowSize + overlapSeconds` (a single-chunk file) with the `transcribe` method
- Fixed an issue where inference could be started on an LLM model that was already generating a response
Full Changelog: v0.4.2...v0.4.3