LazuliQ Photon (Beta)
An ultra-smart, lightweight reasoning model built for everyday tasks. Fine-tuned for impeccable logic, expansive knowledge, and drastically reduced bias.
Built upon the robust Qwen2.5-1.5B-Instruct base, Photon pushes the boundaries of what small language models can achieve.
Extensively trained on high-quality reasoning datasets, Photon breaks down complex everyday tasks with step-by-step clarity.
Rigorous safety and alignment fine-tuning keeps outputs objective, naturally formatted, and cleanly structured.
Despite its lightweight 1.5B footprint, advanced data curation allows Photon to punch far above its weight class in general knowledge.
A unique Retrieval-Augmented Generation technology that runs securely and fully offline on the server. It fundamentally improves Photon's accuracy through dynamic knowledge injection, without relying on slow external internet tool calls. Super fast, completely private, and ideal for smaller LLMs.
Operates securely in isolated, internet-free server environments, ensuring total data privacy. (Endpoint local deployment architecture coming soon.)
By eliminating external web-based API calls, knowledge augmentation happens at near-instant speeds alongside inference.
Dramatically increases the factual accuracy of outputs, anchoring the lightweight model to reliable ground truth.
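The offline retrieval flow described above can be sketched roughly as follows. This is a minimal illustration, not Photon's actual pipeline: the toy corpus, the bag-of-words scoring, and the helper names (`retrieve`, `build_prompt`) are all illustrative stand-ins; a production system would use a dense embedding model.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a dense encoder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query, entirely locally.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # Inject the retrieved passages into the prompt before local inference,
    # so no external web API is ever called.
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Photon is built on the Qwen2.5-1.5B-Instruct base model.",
    "The attention layer uses 12 query heads and 2 KV heads.",
    "Bananas are rich in potassium.",
]
print(build_prompt("How many KV heads does Photon use?", corpus))
```

Because retrieval and generation share the same machine, the augmentation step adds only microseconds of overhead rather than a network round-trip.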
Modern Decoder-only Transformer architecture optimized for maximal efficiency and vast context understanding.
Grouped-Query Attention with 12 query heads and 2 KV heads. Drastically reduces VRAM usage and significantly speeds up inference, especially during long-context 128K tasks.
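The VRAM saving is easy to check with a back-of-envelope KV-cache calculation. The layer count and head dimension below (28 layers, head_dim 128, fp16 cache) are assumed from the published Qwen2.5-1.5B configuration; treat the exact figures as an estimate.

```python
def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_val: int = 2) -> int:
    # Two cached tensors (K and V) per layer; fp16 = 2 bytes per value.
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_val

LAYERS, HEAD_DIM, SEQ = 28, 128, 131_072  # assumed config, 128K context

mha = kv_cache_bytes(LAYERS, 12, HEAD_DIM, SEQ)  # full multi-head attention
gqa = kv_cache_bytes(LAYERS, 2, HEAD_DIM, SEQ)   # grouped-query attention

print(f"MHA KV cache: {mha / 2**30:.1f} GiB")   # → 21.0 GiB
print(f"GQA KV cache: {gqa / 2**30:.1f} GiB")   # → 3.5 GiB, 6x smaller
```

Caching keys and values for only 2 heads instead of 12 cuts the 128K-context cache sixfold, which is what makes long contexts feasible on a small-model VRAM budget.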
Rotary Positional Embeddings optimized specifically for vast context handling, maintaining coherent understanding across contexts up to 128,000 tokens.
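The core idea behind RoPE can be shown in a few lines: each (even, odd) pair of a query or key vector is rotated by a position-dependent angle, so attention scores depend only on the relative offset between tokens, not their absolute positions. This is a generic textbook sketch, not Photon's tuned variant.

```python
import math

def rope(vec: list[float], pos: int, base: float = 10000.0) -> list[float]:
    # Rotate each (even, odd) pair by an angle proportional to the position.
    d = len(vec)
    out = []
    for i in range(0, d, 2):
        theta = pos * base ** (-i / d)
        c, s = math.cos(theta), math.sin(theta)
        x, y = vec[i], vec[i + 1]
        out += [x * c - y * s, x * s + y * c]
    return out

def dot(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# The score between two rotated vectors depends only on their relative offset:
q, k = [1.0, 0.0, 0.5, 0.5], [0.3, 0.7, 1.0, 0.2]
d1 = dot(rope(q, 5), rope(k, 3))      # positions 5 and 3   (offset 2)
d2 = dot(rope(q, 105), rope(k, 103))  # positions 105 and 103 (offset 2)
print(abs(d1 - d2) < 1e-9)  # → True
```

This relative-position property is why RoPE degrades gracefully as contexts grow, and why it is the standard choice for long-context models.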
Modern LLM staples that ensure strong training stability and peak runtime performance.
Keeps the model's footprint incredibly small (1.31B non-embedding parameters) without sacrificing linguistic nuance.
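The 1.31B non-embedding figure can be sanity-checked with simple arithmetic, assuming the publicly listed Qwen2.5-1.5B configuration (vocabulary 151,936, hidden size 1,536, roughly 1.54B total parameters, with the input embedding tied to the output head).

```python
# Back-of-envelope parameter budget (config values assumed, see lead-in).
VOCAB, HIDDEN = 151_936, 1_536
TOTAL = 1.54e9  # approximate total parameter count

embedding = VOCAB * HIDDEN       # one matrix; tied with the output head
non_embedding = TOTAL - embedding

print(f"Embedding params:     {embedding / 1e9:.2f}B")      # → 0.23B
print(f"Non-embedding params: {non_embedding / 1e9:.2f}B")  # → 1.31B
```

Tying the embedding and output-head weights means the 0.23B vocabulary matrix is counted only once, which is a large share of the savings at this scale.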