Machine Learning Systems
Production Android AI with ExecuTorch 1.0 - deploy PyTorch models to mobile with NPU acceleration and a 50 KB footprint
LLM inference on mobile via Capacitor — run quantized GGUF models on-device
📱 Optimized ML for edge devices, showcasing efficient model deployment, GPU-CPU memory transfer optimization, and real-world edge AI applications. 🤖
On-device text embedding generation for iOS and Android via Capacitor
Model download and serving orchestration for Dust — Capacitor bridge
Android ML model server — download management, session caching, accelerator probing
Standalone ONNX runtime session management and preprocessing for Dust — iOS/macOS
INT8 quantization of MobileNetV2, for both learning purposes and production-oriented iOS mobile inference.
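As a rough illustration of what INT8 quantization of a model like MobileNetV2 involves, the sketch below shows the affine (asymmetric) quantization arithmetic that such pipelines typically rely on: mapping a float range onto the INT8 range via a scale and zero-point. This is plain Python with illustrative names and values, not code from the repository above.

```python
# Minimal sketch of affine (asymmetric) INT8 quantization, the scheme
# commonly used for weights/activations in mobile inference.
# All names and values here are illustrative, not from the repository.

def quant_params(xs, qmin=-128, qmax=127):
    """Derive scale and zero-point mapping [min(xs), max(xs)] onto INT8."""
    lo = min(min(xs), 0.0)  # range must include 0 so that 0.0 maps exactly
    hi = max(max(xs), 0.0)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(xs, scale, zp, qmin=-128, qmax=127):
    """Map floats to clamped INT8 values."""
    return [max(qmin, min(qmax, round(x / scale + zp))) for x in xs]

def dequantize(qs, scale, zp):
    """Recover approximate floats from INT8 values."""
    return [(q - zp) * scale for q in qs]

weights = [-0.62, -0.1, 0.0, 0.33, 1.5]
s, zp = quant_params(weights)
q = quantize(weights, s, zp)
recovered = dequantize(q, s, zp)
# per-element round-trip error is bounded by scale/2
```

Real toolchains (PyTorch quantization, ONNX Runtime, LiteRT) apply the same idea per tensor or per channel, with calibration data choosing the float range.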
Claude Code skill for Google LiteRT, the on-device AI/ML deployment framework
Core Capacitor bridge for the Dust on-device ML framework — iOS and Android
A lightweight, mobile-optimized Neural Machine Translation (NMT) framework in PyTorch. LingoLite features a modern transformer architecture with state-of-the-art optimizations for efficient multilingual translation on resource-constrained devices.
Android ONNX runtime session management and preprocessing for Dust
Standalone model server business logic for iOS — download, caching, accelerator probing