Snowflake has entered into a global partnership with Mistral AI, a provider of large language model (LLM) solutions. The multi-year partnership, which includes a parallel investment in Mistral AI’s Series A from Snowflake Ventures, gives Snowflake customers access to Mistral AI’s newest and most powerful LLM, Mistral Large, which offers reasoning capabilities, is proficient in code and mathematics, and is fluent in five languages – French, English, German, Spanish and Italian. It can also process hundreds of pages of documents in a single call.

In addition, Snowflake customers can gain access to Mixtral 8x7B, Mistral AI’s open-source model, and Mistral 7B, Mistral AI’s first foundation model optimised for low latency with a low memory requirement and high throughput for its size. The models are available to customers in public preview as a part of Snowflake Cortex, Snowflake’s fully managed LLM and vector search service that enables organisations to accelerate analytics and quickly build AI apps securely with their enterprise data.

“By partnering with Mistral AI, Snowflake is putting one of the most powerful LLMs on the market directly in the hands of customers, empowering every user to build cutting-edge, AI-powered apps with simplicity and scale,” says Sridhar Ramaswamy, CEO of Snowflake. “With Snowflake as the trusted data foundation, we’re transforming how enterprises harness the power of LLMs through Snowflake Cortex so they can cost-effectively address new AI use cases within the security and privacy boundaries of the Data Cloud.”

Arthur Mensch, CEO and co-founder of Mistral AI, adds: “With our models available in the Snowflake Data Cloud, we are able to further democratise AI so users can create more sophisticated AI apps that drive value at a global scale.”

At Snowday 2023, Snowflake Cortex first announced support for industry-leading LLMs for specialised tasks such as sentiment analysis, translation, and summarisation, alongside foundation LLMs – starting with Meta AI’s Llama 2 model – for use cases including retrieval-augmented generation. By partnering with Mistral AI and expanding the suite of foundation LLMs in Snowflake Cortex, the company is continuing to invest in generative AI, giving organisations a path to bring generative AI to every part of the business.

To deliver a serverless experience that makes AI accessible to a broad set of users, Snowflake Cortex eliminates the lengthy procurement cycles and complex management of GPU infrastructure by partnering with NVIDIA to deliver a full-stack accelerated computing platform that leverages NVIDIA Triton Inference Server, among other tools.

With Snowflake Cortex LLM functions now in public preview, Snowflake users can leverage AI with their enterprise data to support a wide range of use cases. Users with SQL skills can leverage smaller LLMs to cost-effectively address specific tasks such as sentiment analysis, translation, and summarisation in seconds. For more complex use cases, Python developers can go from concept to full-stack AI apps such as chatbots in minutes, combining the power of foundation LLMs – including Mistral AI’s LLMs in Snowflake Cortex – with chat elements (soon to be in public preview) within Streamlit in Snowflake.
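As an illustration of how SQL users might call these functions, the sketch below uses Snowflake Cortex’s task-specific and general-purpose LLM functions; the table and column names are hypothetical, and availability of specific models depends on the account’s region and preview status.

```sql
-- Task-specific function: score the sentiment of each review
-- (product_reviews is a hypothetical table)
SELECT review_text,
       SNOWFLAKE.CORTEX.SENTIMENT(review_text) AS sentiment_score
FROM product_reviews;

-- General-purpose completion: call Mistral Large directly from SQL
-- (contracts and contract_text are hypothetical)
SELECT SNOWFLAKE.CORTEX.COMPLETE(
         'mistral-large',
         'Summarise the key risks in this contract: ' || contract_text
       ) AS risk_summary
FROM contracts;
```

Because the functions run inside Snowflake, the enterprise data never leaves the platform’s security and governance boundary.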

Source: A-Team Insight