Search results for "MPT"
02:48
🚀 Miracle Play (MPT) Trading Contest Kicks Off with $10,000 Worth of Prizes! ⏳ Event Period: 04.22-04.29 11:00 AM [UTC+8] ✅ Trade $MPT to win a share of $8,000 ✅ Exclusive benefits for new users to share a $1,000 prize pool ✅ Invite new users and enjoy $1,000 in rewards 💸 Get involved: https://www.gate.io/zh/article/36075 #Gateio #MPT #Trade
04:57
According to a report by Webmaster's Home on January 6, the TinyLlama team released TinyLlama, a high-performance open-source AI model that occupies only 637 MB. TinyLlama is a compact version of Meta's open-source language model Llama 2: it has 1 billion parameters, is well suited to multi-domain language-model research, and its final version outperforms existing open-source language models of comparable size, including Pythia-1.4B, OPT-1.3B, and MPT-1.3B. The report notes that TinyLlama can be deployed on edge devices and can also be used to assist speculative decoding of large models.
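The report's mention of assisting "speculative decoding of large models" refers to using TinyLlama as a small draft model whose proposed tokens a larger model then verifies. Below is a minimal sketch using Hugging Face transformers' assisted-generation API; the checkpoint IDs are assumptions (the Llama 2 target is gated on the Hub), not details from the report.

```python
# Sketch: TinyLlama as a draft model for speculative (assisted) decoding.
# Model IDs are assumptions; check the Hub for exact checkpoint names.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

draft_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed checkpoint name
target_id = "meta-llama/Llama-2-7b-hf"           # assumed; gated on the Hub

# Assisted generation requires draft and target to share a tokenizer;
# TinyLlama uses the Llama 2 tokenizer, so the two are compatible.
tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(
    target_id, torch_dtype=torch.float16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    draft_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = tokenizer(
    "Edge devices can run small language models because",
    return_tensors="pt",
).to(target.device)

# The small model proposes tokens; the large model verifies them, which
# usually cuts latency without changing the target model's output distribution.
out = target.generate(**prompt, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```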
07:39
According to a Kechuangban Daily report on August 2, Baidu Smart Cloud's Qianfan large-model platform has completed a new round of upgrades, adding full access to 33 large models, including the complete LLaMA 2 series, ChatGLM2, RWKV, MPT, Dolly, OpenLLaMA, and Falcon, which makes it the platform with the largest number of large models in China. The onboarded models have undergone a second round of performance enhancement on the Qianfan platform, and model-inference costs can be reduced by 50%. At the same time, Qianfan has launched a preset prompt-template library with 103 templates covering more than ten scenarios, including dialogue, games, programming, and writing. The upgrade also released a number of new plug-ins.
06:55
According to IT House's report on June 25, AI startup MosaicML recently released its language model MPT-30B. The model has 30 billion parameters, and its training cost is "only a fraction of that of other comparable models," which the company says will expand the use of AI models across a wider range of fields. Naveen Rao, CEO and co-founder of MosaicML, said that training MPT-30B cost 700,000 US dollars (about 5.0244 million yuan), far below the tens of millions of dollars required for comparable products such as GPT-3. In addition, because MPT-30B is cheaper and smaller, it can be trained more quickly and is better suited to deployment on local hardware. MosaicML used ALiBi and FlashAttention to optimize the model, enabling longer context lengths and higher GPU-compute utilization. MosaicML is also one of the few labs able to use NVIDIA H100 GPUs; compared with its previous results, per-GPU throughput has increased by more than 2.4 times, which shortens training completion time.
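For context, MPT checkpoints are published on the Hugging Face Hub with custom modeling code, and the ALiBi positional scheme mentioned in the report is what allows the context window to be extended beyond the training length at load time. The sketch below assumes the mosaicml/mpt-30b Hub ID and the config knobs documented on MosaicML's model cards; the extended sequence length and hardware settings are illustrative, not MosaicML's official recipe.

```python
# Sketch: loading MPT-30B with transformers. MPT repos ship custom
# modeling code, hence trust_remote_code=True.
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "mosaicml/mpt-30b"  # assumed Hub ID

config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
# ALiBi lets inference run past the training context length (assumption:
# 16k here; the model card documents the supported settings).
config.max_seq_len = 16384
# config.attn_config["attn_impl"] = "triton"  # FlashAttention-style kernel, per the model card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",  # 30B parameters need multiple GPUs or CPU offload
)

prompt = tokenizer("MosaicML trained MPT-30B on", return_tensors="pt").to(model.device)
out = model.generate(**prompt, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```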