# MIAA-774
| Feature | Detail |
|---|---|
| Parameters | 774 B (dense) → ≈ 120 B active per token via 64‑expert MoE |
| Modalities | Text, static images, audio waveforms, short video clips (≤ 30 s), source code |
| Training data | 12 TB of curated multimodal corpora (WebText‑5, LAION‑5B, AudioSet‑2, GitHub‑Code‑3, YouTube‑8M‑V) |
| Compute budget | 1.8 M GPU‑hours on 512 × A100‑80 GB (≈ 2 PFLOP‑days) |
| Tokenizer | Unified byte‑pair encoder (BPE) with 256 K tokens that can embed image patches, audio frames, and code tokens |
| Inference cost | 0.9 USD per 1 M tokens (text) or 1.2 USD per 1 M image‑tokens (≈ 32 × 32 patches) |
| License | “MIAA‑Open” – non‑commercial research use free; commercial use via paid API or on‑prem container |
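The pricing row above translates directly into a per-request cost estimate. The sketch below is illustrative only: the rates are taken from the table, but the function name and the example token counts are assumptions, not part of any official MIAA API.

```python
# Hedged sketch: estimate MIAA-774 inference cost from the published rates.
# Rates come from the spec table; the token counts below are made-up examples.
TEXT_RATE_USD_PER_M = 0.9    # USD per 1M text tokens
IMAGE_RATE_USD_PER_M = 1.2   # USD per 1M image tokens (~32x32 patches each)

def estimate_cost(text_tokens: int, image_tokens: int = 0) -> float:
    """Return the estimated USD cost for one request (hypothetical helper)."""
    return (text_tokens / 1_000_000) * TEXT_RATE_USD_PER_M \
         + (image_tokens / 1_000_000) * IMAGE_RATE_USD_PER_M

# Example: 2M text tokens plus 0.5M image tokens
cost = estimate_cost(2_000_000, 500_000)  # 2 * 0.9 + 0.5 * 1.2 = 2.40 USD
```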
```python
# Simple multimodal prompt
from miaalib import MIAAClient

client = MIAAClient()  # constructor arguments (e.g. API key) omitted in the original

response = client.generate(
    text="Explain this chart:",
    image="https://example.com/sales_q1.png",
    max_tokens=512,
    temperature=0.2,
)
```
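The sparsity figures in the spec table can be sanity-checked with quick arithmetic. This sketch assumes uniformly sized experts and ignores any shared (non-expert) parameters, neither of which the table specifies:

```python
# Sanity-check the MoE sparsity figures from the spec table above.
# Assumption: all 64 experts are the same size and there are no shared params.
total_params_b = 774    # total parameters, in billions
active_params_b = 120   # parameters active per token, in billions
num_experts = 64

active_fraction = active_params_b / total_params_b    # fraction of weights used per token
params_per_expert_b = total_params_b / num_experts    # size of one expert, in billions

# Under the uniform-expert assumption, roughly this many experts fire per token:
experts_active = active_params_b / params_per_expert_b  # ~10 of 64
```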