Snowflake & Meta Launch Llama 3.1 in Cortex AI

News Desk

Snowflake (NYSE: SNOW), the AI Data Cloud company, has announced a partnership with Meta to host the Llama 3.1 collection of multilingual open-source large language models (LLMs) in Snowflake Cortex AI, giving enterprises the tools to build powerful AI applications at scale. Meta’s most advanced open-source LLM, Llama 3.1 405B, will be optimized by Snowflake’s AI Research Team for both inference and fine-tuning, making it usable across a wide range of enterprise workloads.

The Llama 3.1 405B model is now available within Snowflake Cortex AI. To serve it, Snowflake has developed a system stack that delivers real-time, high-throughput inference with up to 3x lower latency and 1.4x higher throughput than existing open-source solutions. The stack supports a 128K context window, enabling detailed, context-rich applications, and allows the model to be fine-tuned on a single GPU node, significantly reducing cost and complexity for developers.
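For developers, inference against the hosted model is exposed through Cortex AI's SQL surface. The snippet below is a minimal sketch of calling it from Python via the Snowflake connector and the SNOWFLAKE.CORTEX.COMPLETE function; the credentials are placeholders, and the model identifier 'llama3.1-405b' is an assumption about how the model is named in Cortex.

```python
# Minimal sketch: calling Llama 3.1 405B through Snowflake Cortex AI from Python.
# Requires the snowflake-connector-python package; all credentials below are
# placeholders, and the model identifier 'llama3.1-405b' is assumed.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder account locator
    user="my_user",            # placeholder user
    password="my_password",    # placeholder credential
    warehouse="my_warehouse",
    database="my_database",
    schema="public",
)

try:
    cur = conn.cursor()
    # SNOWFLAKE.CORTEX.COMPLETE(model, prompt) runs inference inside the AI Data
    # Cloud, so the prompt and any customer data it references stay in Snowflake.
    cur.execute(
        "SELECT SNOWFLAKE.CORTEX.COMPLETE('llama3.1-405b', %s)",
        ("Summarize last quarter's customer feedback in three bullet points.",),
    )
    print(cur.fetchone()[0])
finally:
    conn.close()
```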

By partnering with Meta, Snowflake lets customers seamlessly access, fine-tune, and deploy Meta’s newest models within the AI Data Cloud. The integration takes a comprehensive approach to trust and safety, so enterprises can confidently leverage these advanced AI tools.
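As a rough illustration of that fine-tune-and-deploy loop, the sketch below assumes the Cortex fine-tuning function follows a FINETUNE('CREATE', ...) call shape, uses a smaller Llama 3.1 base model identifier, and reads from hypothetical TRAIN_DATA and VAL_DATA tables of prompt/completion pairs; treat the exact signature and names as assumptions rather than confirmed syntax.

```python
# Hedged sketch: starting a Cortex fine-tuning job on a Llama 3.1 base model.
# The FINETUNE('CREATE', ...) call shape, the 'llama3.1-8b' identifier, and the
# prompt/completion column names are assumptions; TRAIN_DATA and VAL_DATA are
# hypothetical tables of training examples.
import snowflake.connector

conn = snowflake.connector.connect(account="my_account", user="my_user",
                                    password="my_password", warehouse="my_warehouse",
                                    database="my_database", schema="public")
try:
    cur = conn.cursor()
    cur.execute("""
        SELECT SNOWFLAKE.CORTEX.FINETUNE(
            'CREATE',
            'my_support_model',                          -- name for the tuned model
            'llama3.1-8b',                               -- assumed base-model identifier
            'SELECT prompt, completion FROM TRAIN_DATA', -- training examples
            'SELECT prompt, completion FROM VAL_DATA'    -- validation examples
        )
    """)
    # The call is expected to return a job identifier that can be polled for status.
    print(cur.fetchone()[0])
finally:
    conn.close()
```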

Snowflake’s AI Research Team continues to push the boundaries of open-source AI technology. Its recent contributions include the Massive LLM Inference and Fine-Tuning System Optimization Stack, developed in collaboration with open-source projects such as DeepSpeed, Hugging Face, and vLLM. The stack addresses the core challenges of serving models of this size, achieving low-latency inference and high throughput despite their extensive memory requirements, and thereby makes them practical for a wide range of AI applications.

The introduction of Snowflake Cortex Guard further strengthens the company’s commitment to AI safety. The feature screens model output for harmful content, providing an additional layer of protection for any LLM application or asset built in Cortex AI, and it draws on Meta’s Llama Guard 2 to enhance the safety and reliability of AI deployments.
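In practice, this kind of filtering can be pictured as a per-call option. The sketch below assumes Cortex Guard is toggled with a 'guardrails' flag in the options object of SNOWFLAKE.CORTEX.COMPLETE; the option name and the smaller 'llama3.1-70b' model identifier are assumptions, and the credentials are placeholders.

```python
# Hedged sketch: requesting a completion with content filtering enabled.
# The 'guardrails' option name is an assumption about how Cortex Guard is
# switched on for a COMPLETE call; credentials are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(account="my_account", user="my_user",
                                    password="my_password", warehouse="my_warehouse",
                                    database="my_database", schema="public")
try:
    cur = conn.cursor()
    cur.execute("""
        SELECT SNOWFLAKE.CORTEX.COMPLETE(
            'llama3.1-70b',
            [{'role': 'user', 'content': 'Draft a reply to this customer complaint: ...'}],
            {'guardrails': TRUE}  -- assumed switch that routes output through Cortex Guard
        )
    """)
    # With the options-object form, COMPLETE returns a JSON string containing the
    # model output plus usage metadata.
    print(cur.fetchone()[0])
finally:
    conn.close()
```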

Feedback from industry leaders has been overwhelmingly positive. Dave Lindley, Sr. Director of Data Products at E15 Group, expressed excitement about the ability to leverage Meta’s Llama models within Snowflake Cortex AI to gain valuable insights from customer data. Ryan Klapper from Hakkoda highlighted the importance of Snowflake’s safety measures in enabling innovation with generative AI. Matthew Scullion, CEO and co-founder of Matillion, emphasized the flexibility and choice that the addition of Llama 3.1 brings to their offerings. Kevin Niparko, VP of Product and Technology Strategy at Twilio Segment, praised the platform’s ability to empower businesses to generate intelligent insights and engage with customers more effectively.

Snowflake and Meta’s partnership marks a significant step forward in making advanced AI tools more accessible and efficient for enterprises. The combination of cutting-edge technology, optimized infrastructure, and a strong focus on safety and trust positions Snowflake Cortex AI as a leading platform for enterprise-grade AI applications.

Snowflake is a cloud-based data platform that enables organizations to manage and analyze data at scale. Meta, formerly known as Facebook, is a global leader in social media and AI technologies.

Both companies state commitments to sustainability and responsible AI, with ongoing work on energy-efficient infrastructure and ethical AI practices.