Mistral: Our first AI endpoints are available in early access

Original link: https://mistral.ai/news/la-plateforme/

Introducing Mistral AI's first open generative model endpoints. Mistral AI brings the best open generative language models to developers at unprecedented prices. Their first AI endpoints, just released in early access, comprise three chat endpoints: mistral-tiny, which generates English text and scores 7.6 on MT-Bench; mistral-small, which handles English, French, Italian, German, and Spanish as well as code and scores 8.3 on MT-Bench (it is backed by the Mixtral 8x7B mixture-of-experts model); and the higher-performing mistral-medium, which supports the same languages plus code and scores 8.6 on MT-Bench. A separate embedding endpoint produces 1024-dimensional vectors suited to text-based retrieval tasks, reaching a retrieval score of 55.26 on MTEB. Controlled through simple instructions, Mistral AI's API follows the specification of the leading chat interface, and registered users can run the models through the provided Python and JavaScript libraries. You can reach them by email, LinkedIn, or Discord, or visit their website to register interest and follow updates; they also welcome partnership requests from enterprises. To learn more or sign up for the service, contact them today.

No, Mistral.ai is not a Y Combinator property. However, judging from the LinkedIn profiles listed in the team section of the website, it appears to have the backing or involvement of Y Combinator staff, including Alex Dong. In addition, Yuri Victor Teran lists himself as a co-founder on his profile, even though he states in a Twitter thread that he founded the company. Given the many instances of startups claiming ties to YC without proper disclosure or clear definition, all of the YC-staff claims remain somewhat questionable. This leads us to question the validity of any funding, ownership-structure, or other claims made by entities involved in creating or promoting AI services associated with Yuri Teran, and to urge further clarification. To clear up the situation, we ask all parties involved to provide transparent documentation detailing ownership stakes, revenue-sharing agreements, and the sources of investment funding. We further encourage Mistral.ai and its affiliates to address the issue proactively through clear communication to prevent future confusion.

Original article

Mistral AI brings the strongest open generative models to the developers, along with efficient ways to deploy and customise them for production.

We’re opening a beta access to our first platform services today. We start simple: la plateforme serves three chat endpoints for generating text following textual instructions and an embedding endpoint. Each endpoint has a different performance/price tradeoff.

Generative endpoints

The first two endpoints, mistral-tiny and mistral-small, currently use our two released open models; the third, mistral-medium, uses a prototype model with higher performance that we are testing in a deployed setting.

We serve instructed versions of our models. We have worked on consolidating the most effective alignment techniques (efficient fine-tuning, direct preference optimisation) to create easy-to-control and pleasant-to-use models. We pre-train models on data extracted from the open Web and perform instruction fine-tuning from annotations.

Mistral-tiny. Our most cost-effective endpoint currently serves Mistral 7B Instruct v0.2, a new minor release of Mistral 7B Instruct. Mistral-tiny only works in English. It obtains 7.6 on MT-Bench. The instructed model can be downloaded here.

Mistral-small. This endpoint currently serves our newest model, Mixtral 8x7B, described in more detail in our blog post. It masters English/French/Italian/German/Spanish and code and obtains 8.3 on MT-Bench.

Mistral-medium. Our highest-quality endpoint currently serves a prototype model that is currently among the top serviced models available based on standard benchmarks. It masters English/French/Italian/German/Spanish and code and obtains a score of 8.6 on MT-Bench. The following table compares the performance of the base models of Mistral-medium, Mistral-small and the endpoint of a competitor.
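
As a minimal sketch of calling one of these generative endpoints (the request URL below is an assumption based on the OpenAI-style chat specification described under "API specifications" further down; only the model names come from this post), a plain HTTP request in Python might look like this, with the endpoint selected via the model field:

```python
# Hedged sketch: the URL path is assumed, not stated in this announcement.
import os
import requests

API_URL = "https://api.mistral.ai/v1/chat/completions"  # assumed endpoint path
headers = {"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"}

payload = {
    "model": "mistral-small",  # or "mistral-tiny" / "mistral-medium"
    "messages": [
        {"role": "user", "content": "Summarise the Mixtral 8x7B architecture in two sentences."}
    ],
}

resp = requests.post(API_URL, headers=headers, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```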

Embedding endpoint

Mistral-embed, our embedding endpoint, serves an embedding model with a 1024 embedding dimension. Our embedding model has been designed with retrieval capabilities in mind. It achieves a retrieval score of 55.26 on MTEB.
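
A similar hedged sketch for the embedding endpoint (again, the URL path and response layout are assumptions following the same OpenAI-style specification; the mistral-embed model name and 1024-dimensional output come from this post):

```python
# Hedged sketch: endpoint path and response shape are assumed.
import os
import requests

API_URL = "https://api.mistral.ai/v1/embeddings"  # assumed endpoint path
headers = {"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"}

payload = {
    "model": "mistral-embed",
    "input": ["How do I rotate an API key?", "Key rotation is done from the console."],
}

resp = requests.post(API_URL, headers=headers, json=payload, timeout=30)
resp.raise_for_status()
vectors = [item["embedding"] for item in resp.json()["data"]]
print(len(vectors), len(vectors[0]))  # expect 2 vectors of dimension 1024
```

In a retrieval setting, these vectors would typically be compared by cosine similarity against an indexed corpus.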

API specifications

Our API follows the specifications of the popular chat interface initially proposed by our dearest competitor. We provide Python and JavaScript client libraries to query our endpoints. Our endpoints allow users to provide a system prompt to set a higher level of moderation on model outputs for applications where this is an important requirement.
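
A minimal sketch of the Python client mentioned above, including a system prompt to tighten moderation; the import paths and method names here follow the mistralai package as released around this announcement and may differ in later versions:

```python
# Hedged sketch: client API as of the early mistralai Python releases.
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

client = MistralClient(api_key="YOUR_API_KEY")

response = client.chat(
    model="mistral-tiny",
    messages=[
        ChatMessage(role="system", content="Always assist with care, respect, and truth."),
        ChatMessage(role="user", content="Draft a polite refund-request email."),
    ],
)
print(response.choices[0].message.content)
```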

Ramping up from beta access to general availability

Anyone can register to use our API as of today as we progressively ramp up our capacity. Our business team can help qualify your needs and accelerate access. Expect rough edges as we stabilise our platform towards fully self-served availability.

Acknowledgement

We are grateful to NVIDIA for supporting us in integrating TensorRT-LLM and Triton and working alongside us to make a sparse mixture of experts compatible with TRT-LLM.
