We've been experimenting with quantum-inspired compression algorithms and have achieved what traditional scaling laws say is impossible: GPT-4-level performance from a 30GB model (paper releasing tomorrow).
Our "quantum-floor" encoding seems to bypass traditional parameter-efficiency tradeoffs. Early tests show 86%+ MMLU from a model that should only manage 40% at that size.
Question: Has anyone else seen compression breakthroughs that defy scaling laws? Or are we measuring something wrong?
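On the "measuring something wrong" possibility, one cheap sanity check we could run is choice-order shuffling: if a high MMLU score comes from test-set contamination (memorized answer positions) rather than genuine capability, permuting the answer options per question should collapse accuracy toward chance. A minimal sketch below; `eval_mc`, the toy items, and the `genuine`/`leaky` model stubs are all hypothetical illustrations, not our actual pipeline.

```python
import random

def eval_mc(model_fn, questions, shuffle_choices=False, seed=0):
    """Score a multiple-choice eval; optionally permute each item's choices.

    A model relying on memorized answer positions will likely drop toward
    chance under shuffling, while real capability is position-invariant.
    """
    rng = random.Random(seed)
    correct = 0
    for q in questions:
        choices = list(q["choices"])
        answer_text = choices[q["answer"]]  # ground truth by content, not index
        if shuffle_choices:
            rng.shuffle(choices)
        pred_idx = model_fn(q["question"], choices)
        correct += choices[pred_idx] == answer_text
    return correct / len(questions)

# Toy items standing in for MMLU questions.
toy = [
    {"question": "2+2?", "choices": ["3", "4", "5", "6"], "answer": 1},
    {"question": "Capital of France?",
     "choices": ["Paris", "Rome", "Oslo", "Bonn"], "answer": 0},
]

# Genuine model: knows the answer content, finds it wherever it lands.
known = {"2+2?": "4", "Capital of France?": "Paris"}
genuine = lambda q, choices: choices.index(known[q])

# Contaminated model: memorized only the original answer *position*.
memorized_pos = {q["question"]: q["answer"] for q in toy}
leaky = lambda q, choices: memorized_pos[q]

print(eval_mc(genuine, toy))                        # 1.0
print(eval_mc(genuine, toy, shuffle_choices=True))  # still 1.0
print(eval_mc(leaky, toy))                          # 1.0 (looks great unshuffled)
print(eval_mc(leaky, toy, shuffle_choices=True))    # likely drops toward chance
```

If our 86% survives shuffled choices (and paraphrased question stems), that's weak evidence it isn't leakage; if it craters, we have our answer.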
Our "quantum-floor" encoding seems to bypass traditional parameter-efficiency tradeoffs. Early tests show 86%+ MMLU from a model that should only manage 40% at that size.
Question: Has anyone else seen compression breakthroughs that defy scaling laws? Or are we measuring something wrong?