Today we're announcing a partnership with Amazon Web Services (AWS) that will bring tens of millions of AWS Graviton cores into our compute portfolio.
This partnership marks a further expansion of our diversified AI infrastructure and will help scale the systems behind Meta AI and the agentic experiences we're building for billions of people.
Learn more: https://about.fb.com/news/2026/04/meta-partners-with-aws-on-graviton-chips-to-power-agentic-ai/
Custom silicon is key to scaling the next generation of AI capabilities. We're detailing the evolution of the Meta Training and Inference Accelerator (MTIA), our in-house family of chips designed to power next-generation AI experiences.
Traditional chips take years to develop, while model architectures evolve month to month. To close that gap, we accelerated MTIA development, shipping four generations in just two years.
Check out our roadmap and technical specs here: https://ai.meta.com/blog/meta-mtia-scale-ai-chips-for-billions/?utm_source=twitter&utm_medium=organic_social&utm_content=image&utm_campaign=mtia
ICYMI: @alexandr_wang spoke at the India AI Impact Summit, sharing Meta's vision for personal superintelligence and how developers in India can use AI to tackle major societal challenges.
Highlights below 👇, and watch the full talk here: https://www.youtube.com/live/WgW7cC-kHgY?si=rzOWRsir_oobx-D9&t=8871
We're hosting a Reddit AMA with the researchers behind SAM 3 + SAM 3D + SAM Audio.
Join us tomorrow at 2pm PT.
https://www.reddit.com/r/LocalLLaMA/comments/1pp9w31/ama_with_the_meta_researchers_behind_sam_3_sam_3d/
Introducing SAM 3D, the newest addition to the SAM collection, bringing common sense 3D understanding of everyday images. SAM 3D includes two models:
🛋️ SAM 3D Objects for object and scene reconstruction
🧑‍🤝‍🧑 SAM 3D Body for human pose and shape estimation
Both models achieve
Today we’re excited to unveil a new generation of Segment Anything Models:
1️⃣ SAM 3 enables detecting, segmenting and tracking of objects across images and videos, now with short text phrases and exemplar prompts.
🔗 Learn more about SAM 3: https://ai.meta.com/blog/segment-anyt
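For a sense of what those prompts yield downstream: each matching object comes back as one binary instance mask plus a confidence score. A minimal, model-free sketch of consuming such masks; the toy masks and scores below are stand-ins for real SAM 3 output, not its actual API:

```python
import numpy as np

# Toy instance masks standing in for SAM 3 output on one image: a real result
# is one binary mask plus a confidence score per object matching the prompt.
h, w = 6, 8
masks = np.zeros((2, h, w), dtype=bool)
masks[0, 1:3, 1:4] = True   # first matched instance
masks[1, 3:6, 4:8] = True   # second matched instance
scores = [0.92, 0.81]       # toy confidences

for mask, score in zip(masks, scores):
    ys, xs = np.nonzero(mask)
    box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    print(f"score={score}: area={int(mask.sum())} px, box (x0,y0,x1,y1)={box}")
```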
Introducing Meta Omnilingual Automatic Speech Recognition (ASR), a suite of models providing ASR capabilities for over 1,600 languages, including 500 low-coverage languages never before served by any ASR system.
While most ASR systems focus on a limited set of languages that are
New from Meta FAIR: Code World Model (CWM), a 32B-parameter research model designed to explore how world models can transform code generation and reasoning about code.
We believe in advancing research in world modeling and are sharing CWM under a research license to help empower
Introducing DINOv3: a state-of-the-art computer vision model trained with self-supervised learning (SSL) that produces powerful, high-resolution image features. For the first time, a single frozen vision backbone outperforms specialized solutions on multiple long-standing dense h
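Concretely, "single frozen backbone" means the SSL features are used as-is and only a lightweight task head is trained. A minimal linear-probe sketch in PyTorch; the hub entry point below loads DINOv2 as a stand-in (DINOv3 entry points may differ), and the 384-dim feature size is the ViT-S assumption:

```python
import torch
import torch.nn as nn

# Stand-in for a frozen SSL backbone (DINOv2 ViT-S/14 here; DINOv3 loaders may differ).
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False       # the backbone stays frozen; only the head trains

head = nn.Linear(384, 10)         # 384 = ViT-S feature dim (assumption); 10 classes

x = torch.randn(8, 3, 224, 224)   # a batch of images (224 is divisible by patch 14)
with torch.no_grad():
    feats = backbone(x)           # (8, 384) global image features
logits = head(feats)              # only the linear head receives gradients
```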
🏆 We're thrilled to announce that Meta FAIR’s Brain & AI team won 1st place at the prestigious Algonauts 2025 brain modeling competition.
Their 1B parameter model, TRIBE (Trimodal Brain Encoder), is the first deep neural network trained to predict brain responses to stimuli
Today Mark shared Meta’s vision for the future of personal superintelligence for everyone.
Read his full letter here: https://www.meta.com/superintelligence/
We're rapidly expanding our AI infrastructure and have adopted a novel approach of building weather-proof tents to house GPU clusters. This enables us to get new data centers online in months instead of years. 🚀
Read more in this @FastCompany article: https://www.fastcompany.co
We're thrilled to see our advanced ML models and EMG hardware — which transform neural signals controlling muscles at the wrist into commands that seamlessly drive computer interactions — appearing in the latest edition of @Nature.
Read the story: https://www.nature.com/articles/
Today Mark announced Meta's major AI compute investment.
See his post: https://www.facebook.com/zuck/videos/2300161320399228/?rdid=TPmGsInugvCAl19v&share_url=https%3A%2F%2Fwww.facebook.com%2Fshare%2Fv%2F1AnKhQouDb%2F
Announcing the newest releases from Meta FAIR. We're releasing groundbreaking new models, benchmarks, and datasets that will transform the way researchers approach molecular property prediction, language processing, and neuroscience.
1️⃣ Open Molecules 2025 (OMol25): A dataset
We're releasing model weights for our 8B-parameter Dynamic Byte Latent Transformer, an alternative to traditional tokenization methods with the potential to redefine the standards for language model efficiency and reliability.
Learn more about how Dynamic Byte Latent
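For intuition on "an alternative to traditional tokenization": a byte-latent transformer consumes raw UTF-8 bytes and groups them into variable-length patches, instead of looking tokens up in a fixed BPE vocabulary. A toy illustration of the input side; BLT's real patching is driven by a learned entropy model, so the space-splitting rule below is only a stand-in:

```python
# Toy contrast: fixed tokenizer IDs vs. raw bytes grouped into patches.
text = "Dynamic Byte Latent Transformer"

# Byte-level input: every string maps to UTF-8 bytes, so there is no
# out-of-vocabulary problem and the "vocabulary" is just 256 symbols.
byte_ids = list(text.encode("utf-8"))
print(byte_ids[:8])  # [68, 121, 110, 97, 109, 105, 99, 32]

# BLT groups bytes into variable-length patches; the real model sizes patches
# with a learned entropy model. Here we fake it by splitting on spaces.
patches = [list(w.encode("utf-8")) for w in text.split(" ")]  # stand-in heuristic
print([len(p) for p in patches])  # per-patch byte counts the latent model sees
```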
Introducing Meta Perception Language Model (PLM): an open & reproducible vision-language model tackling challenging visual tasks.
Learn more about how PLM can help the open source community build more capable computer vision systems.
Read the research paper, and download th
Take a look under the hood of Llama 4 Scout and Llama 4 Maverick – our most advanced AI models yet 🧵
Today is the start of a new era of natively multimodal AI innovation.
Today, we’re introducing the first Llama 4 models: Llama 4 Scout and Llama 4 Maverick — our most advanced models yet and the best in their class for multimodality.
Llama 4 Scout
• 17B-active-parameter model
Llama has now been downloaded over 1 Billion times!
A note to:
The researchers at Meta training these models — and those building on the research in other labs.
The developers and enthusiasts on r/LocalLlama, @huggingface and more; experimenting with new models and creating
Research paper from Meta FAIR and @bcbl_ researchers – Brain-to-Text Decoding: A Non-invasive Approach via Typing ➡️ https://ai.meta.com/research/publications/brain-to-text-decoding-a-non-invasive-approach-via-typing/?utm_source=twitter&utm_medium=organic_social&utm_content=video
Introducing Aria Gen 2, next-generation glasses that we hope will enable researchers from industry and academia to unlock new work in machine perception, contextual AI, robotics and more.
Aria Gen 2 details + sign up for availability updates ➡️ https://www.meta.com/blog/project-
OpenBioLLM-8B and OpenBioLLM-70B are new fine-tuned Llama models developed by Saama to streamline tasks that can accelerate clinical trials, opening up new possibilities in personalized medicine. More details in their research paper ➡️ https://aclanthology.org/2024.bionlp-1.51.pd
New research from Meta FAIR: Large Concept Models (LCM), a fundamentally different paradigm for language modeling that decouples reasoning from language representation, inspired by how humans can plan high-level thoughts to communicate.
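To make "decouples reasoning from language representation" concrete: an LCM predicts the next sentence-level concept embedding rather than the next token. A toy sketch of that training objective; in the paper the concept space is SONAR sentence embeddings and the predictor is a transformer over the concept sequence, so the hash-based embeddings and tiny MLP here are just runnable stand-ins:

```python
import torch
import torch.nn as nn

# Toy stand-in for a sentence encoder: deterministic random vectors per sentence.
# The real LCM operates in the SONAR sentence-embedding space.
def embed(sentence: str, dim: int = 256) -> torch.Tensor:
    g = torch.Generator().manual_seed(sum(sentence.encode("utf-8")))
    return torch.randn(dim, generator=g)

sentences = ["Plan the outline.", "Draft each section.", "Revise for clarity."]
concepts = torch.stack([embed(s) for s in sentences])  # (3, 256) concept sequence

# The "language model" now regresses the next concept vector, not the next token.
lcm = nn.Sequential(nn.Linear(256, 512), nn.GELU(), nn.Linear(512, 256))
pred_next = lcm(concepts[:-1])                 # predict concept t+1 from concept t
loss = nn.functional.mse_loss(pred_next, concepts[1:])
print(float(loss))
```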
As we continue to explore new post-training techniques, today we're releasing Llama 3.3 — a new open source model that delivers leading performance and quality across text-based use cases such as synthetic data generation at a fraction of the inference cost.
Today at Meta FAIR we’re announcing three new cutting-edge developments in robotics and touch perception — and releasing a collection of artifacts to empower the community to build on this work.
Details on all of this new work ➡️ https://ai.meta.com/blog/fair-robotics-open-sourc
We previously shared our research on Layer Skip, an end-to-end solution for accelerating LLMs from researchers at Meta FAIR. It achieves this by executing a subset of an LLM’s layers and utilizing subsequent layers for verification and correction. We’re now releasing inference ht
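The control flow behind that sentence, reduced to a runnable toy: draft tokens cheaply from an early exit, then verify the drafted span with one full-depth pass, keeping the longest agreeing prefix (plus the full model's correction at the first mismatch). The ToyLM below is a stand-in for illustration; it is not the released Llama-based inference code:

```python
import torch
import torch.nn as nn

class ToyLM(nn.Module):
    """Toy stand-in for an LLM with per-layer access; Layer Skip targets
    Llama-family models, this exists only to make the control flow runnable."""
    def __init__(self, vocab=32, dim=16, layers=8):
        super().__init__()
        torch.manual_seed(0)
        self.emb = nn.Embedding(vocab, dim)
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(layers))
        self.head = nn.Linear(dim, vocab)      # shared early-exit / final head

    def forward_to_layer(self, ids, k):
        h = self.emb(ids)
        for layer in self.layers[:k]:
            h = torch.tanh(layer(h))
        return self.head(h)                    # logits after the first k layers

    def forward_full(self, ids):
        return self.forward_to_layer(ids, len(self.layers))

@torch.no_grad()
def self_speculative_generate(model, ids, exit_layer=2, draft_len=4, max_len=16):
    while ids.shape[-1] < max_len:
        cur, drafts = ids, []
        for _ in range(draft_len):             # 1) cheap drafting via early exit
            tok = model.forward_to_layer(cur, exit_layer)[:, -1].argmax(-1, keepdim=True)
            drafts.append(tok)
            cur = torch.cat([cur, tok], dim=-1)
        full = model.forward_full(cur)         # 2) one full-depth verification pass
        accepted = []
        for i, tok in enumerate(drafts):
            best = full[:, ids.shape[-1] + i - 1].argmax(-1, keepdim=True)
            accepted.append(best)
            if not torch.equal(best, tok):     # 3) first mismatch: keep the
                break                          #    corrected token, drop the rest
        ids = torch.cat([ids, *accepted], dim=-1)
    return ids

print(self_speculative_generate(ToyLM(), torch.tensor([[1, 2, 3]])))
```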
We want to make it easier for more people to build with Llama — so today we're releasing new quantized versions of Llama 3.2 1B & 3B that deliver 2-4x increases in inference speed and, on average, a 56% reduction in model size and a 41% reduction in memory footprint.
Detai
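Back-of-the-envelope math for what those reductions mean at the 1B scale, assuming a bf16 baseline of 2 bytes per parameter; the 56% and 41% figures come from the post above, while the parameter count is approximate:

```python
# Rough footprint math for the quantized 1B model (approximate, for intuition).
params = 1.24e9                  # Llama 3.2 1B parameter count (approx.)
bf16_gb = params * 2 / 1e9       # 2 bytes/param baseline -> ~2.5 GB
quant_gb = bf16_gb * (1 - 0.56)  # 56% average size reduction from the post
print(f"{bf16_gb:.2f} GB -> {quant_gb:.2f} GB on disk")
mem_gb = bf16_gb * (1 - 0.41)    # 41% smaller runtime memory footprint
print(f"~{mem_gb:.2f} GB peak memory vs {bf16_gb:.2f} GB baseline")
```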