We are announcing our first lineup of on-premise LLMs: X1 Large 8k and 32k, pre-trained and fine-tuned versions of Llama 2 70B that outperform Claude 2 on MT-Bench with a score of 8.1 vs. 8. (A white paper with results on all benchmarks is coming soon.)
X1 Large is available for further fine-tuning and pre-training. Try it out here and let us know what you think!
The problems today:
- Pre-training: Existing large language models (LLMs) can't easily be further pre-trained on an organization's own text data, which hinders their effectiveness in specialized domains like healthcare, legal, and finance.
- Fine-tuning: Without the ability to fine-tune LLMs for specific output structures or formats, it's hard to adapt them to critical areas that require tailored responses.
- Privacy: Organizations handling sensitive customer data face trust and compliance challenges when sending it to third-party providers like OpenAI and Anthropic.
X1 Large:
- Performance: Achieves an MT-Bench score of 8.1, surpassing Claude 2, after pre-training and fine-tuning.
- Customization: Our unique pre-training and fine-tuning capabilities provide unrivaled performance for industry-specific use cases.
- Security: Offers secure on-premise deployment, ensuring data privacy for enterprises.
- State-of-the-art RAG: We're partnering with @Mano AI to bring state-of-the-art retrieval-augmented generation (RAG) to on-prem deployments over your petabytes of data.
Our ask:
Try out the demo here with your prompts and let us know how it performs! Email us at founders@gigaml.com if you want fine-tuning and pre-training access, in-cloud or on-premise.
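For a rough sense of what calling an on-premise deployment could look like, here is a minimal sketch assuming the server exposes an OpenAI-compatible chat-completions endpoint; the base URL, model name, and API key below are illustrative placeholders, not a documented API.

```python
# Sketch: querying a hypothetical on-premise X1 Large deployment through an
# OpenAI-compatible chat-completions API. The base URL, model name, and key
# are placeholders for illustration only.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed address of the on-prem server
    api_key="not-needed-on-prem",         # placeholder; on-prem servers often ignore this
)

response = client.chat.completions.create(
    model="x1-large-32k",  # placeholder model identifier
    messages=[
        {"role": "system", "content": "You are a helpful assistant for clinical documentation."},
        {"role": "user", "content": "Summarize the key findings in this discharge note: ..."},
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
```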
Coming soon:
- X1 Large Med: continued pre-training on medical data.
- X1 Large Law: continued pre-training on legal databases from countries around the world.