Today we're announcing a partnership with Amazon Web Services (AWS) to bring tens of millions of AWS Graviton cores into our compute portfolio. This collaboration marks a further expansion of our diversified AI infrastructure and will help scale the systems behind Meta AI and agentic experiences for billions of people. Learn more: https://about.fb.com/news/2026/04/meta-partners-with-aws-on-graviton-chips-to-power-agentic-ai/
2026-04-24 · AIatMeta · Open ↗
🔜
2026-04-10 · AIatMeta · Open ↗
Try Muse Spark today in the Meta AI app or at https://meta.ai/.
2026-04-09 · AIatMeta · Open ↗
5/ https://x.com/Nain1sh/status/2041977532535468468
2026-04-09 · AIatMeta · Open ↗
See how the community is putting Muse Spark to work and play 🧵👇 1/ https://x.com/skirano/status/2041926272226398393
2026-04-09 · AIatMeta · Open ↗
To improve reasoning performance without a significant latency hit, we can scale up the number of parallel agents collaborating on a hard problem. Traditional inference-time scaling has a single agent think for longer; Muse Spark instead uses a multi-agent collaboration scheme to reach stronger performance at comparable latency.
2026-04-08 · AIatMeta · Open ↗
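To make the parallel-agent idea above concrete, here is a minimal sketch of scaling across agents rather than across thinking time. The `query_model` coroutine and majority-vote aggregation are invented stand-ins; Muse Spark's actual orchestration is not public.

```python
import asyncio
import random
from collections import Counter

async def query_model(prompt: str, agent_id: int) -> str:
    """Hypothetical stand-in for one agent's model call. A real system
    would hit an inference endpoint; we simulate latency and answers."""
    await asyncio.sleep(random.uniform(0.1, 0.3))  # simulated decode time
    return random.choice(["42", "42", "42", "41"])  # mostly-agreeing agents

async def solve_in_parallel(prompt: str, n_agents: int = 8) -> str:
    # Launch all agents concurrently: wall-clock latency stays close to
    # a single call, while total compute scales with n_agents.
    answers = await asyncio.gather(
        *(query_model(prompt, i) for i in range(n_agents))
    )
    # Aggregate by majority vote (self-consistency); a richer scheme
    # could have agents critique and refine each other's drafts.
    return Counter(answers).most_common(1)[0][0]

print(asyncio.run(solve_in_parallel("What is 6 * 7?")))
```

Because the agents run concurrently, latency tracks the slowest single call rather than the number of agents, which is the contrast with letting one agent think longer.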
With Muse Spark, we are on an efficient and predictable scaling trajectory. We look forward to sharing increasingly capable models soon on the path to personal superintelligence.
2026-04-08 · AIatMeta · Open ↗
To build personal superintelligence, model capabilities should scale predictably and efficiently. Below, we share how we study and track Muse Spark's scaling behavior along three axes: pre-training, reinforcement learning, and test-time reasoning. 🧵👇 Starting with pre-training: over the past 9 months we rebuilt our pre-training stack, improving model architecture, optimization algorithms, and data curation to get more capability per unit of compute. To evaluate the new recipe rigorously, we fit scaling laws over a series of small models and compared the training compute (FLOPs) required to reach a given performance level. The result: to reach the same performance, Muse Spark needs over an order of magnitude less compute than our previous model, Llama 4 Maverick, making it significantly more efficient than the leading foundation models available for comparison.
2026-04-08 · AIatMeta · Open ↗
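The methodology described in this thread, fitting scaling laws on small models and comparing the compute needed to hit a target performance, can be reproduced in miniature. The (FLOPs, loss) points below are invented for illustration, not Meta's measurements.

```python
import numpy as np

# Synthetic (training FLOPs, eval loss) points for two training recipes.
flops    = np.array([1e19, 1e20, 1e21, 1e22])
loss_old = np.array([2.80, 2.45, 2.18, 1.98])  # previous recipe
loss_new = np.array([2.40, 2.12, 1.90, 1.74])  # improved recipe

def fit_power_law(flops, loss):
    # loss = a * C^(-b)  <=>  log(loss) = log(a) - b * log(C)
    slope, log_a = np.polyfit(np.log(flops), np.log(loss), deg=1)
    return np.exp(log_a), -slope

def flops_to_reach(target_loss, a, b):
    # Invert the fitted law: C = (a / L)^(1/b)
    return (a / target_loss) ** (1.0 / b)

a_old, b_old = fit_power_law(flops, loss_old)
a_new, b_new = fit_power_law(flops, loss_new)

target = 2.0
ratio = flops_to_reach(target, a_old, b_old) / flops_to_reach(target, a_new, b_new)
print(f"Old recipe needs ~{ratio:.0f}x the compute of the new one at loss {target}")
```

With these made-up points the ratio comes out around an order of magnitude, which is the shape of the comparison the post describes.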
Muse Spark is natively designed to integrate visual information across domains and tools. It excels at visual STEM problems, entity recognition, and grounding, and can power interactive experiences, such as helping troubleshoot a home appliance with dynamic annotations.
2026-04-08 · AIatMeta · Open ↗
Personal superintelligence will help people understand their own health. We worked with more than 1,000 physicians to build training data for more accurate, more comprehensive answers. It can also generate interactive presentations that break down health information, such as the nutritional content of different foods or the muscles activated during a workout.
2026-04-08 · AIatMeta · Open ↗
The core innovation in SAM 3.1 is "object multiplexing", which lets the model track up to 16 objects in a single forward pass. Previously, each object required its own forward pass; with multiplexing, SAM 3.1 processes all tracked objects simultaneously, eliminating redundant computation and easing memory bottlenecks. For videos with a moderate number of objects, this doubles processing speed, lifting throughput on a single H100 GPU from 16 to 32 frames per second.
2026-03-27 · AIatMeta · Open ↗
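The speedup from object multiplexing is the classic win from batching: instead of one forward pass per tracked object, all object queries share a single pass over the frame. A toy PyTorch sketch of the pattern (not the SAM 3.1 implementation):

```python
import torch
import torch.nn as nn

class ToyTracker(nn.Module):
    """Stand-in for a tracker head; not the real SAM 3.1 model."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.head = nn.Linear(dim, dim)

    def forward(self, frame_feats: torch.Tensor, obj_queries: torch.Tensor):
        # frame_feats: (dim,), obj_queries: (n_objects, dim)
        return self.head(obj_queries) + frame_feats

tracker = ToyTracker()
frame = torch.randn(256)        # features for one video frame
objects = torch.randn(16, 256)  # queries for up to 16 tracked objects

# Naive: one forward pass per object, redoing per-frame work 16 times.
per_object = torch.stack(
    [tracker(frame, objects[i:i + 1])[0] for i in range(16)]
)

# Multiplexed: all objects processed in a single forward pass.
multiplexed = tracker(frame, objects)

assert torch.allclose(per_object, multiplexed, atol=1e-6)
```

Batching removes the repeated per-frame computation and lets the GPU run one large matrix multiply instead of 16 small ones, which is the kind of saving behind the 16 → 32 fps figure.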
SAM 3.1 is here! A drop-in update to SAM 3, it introduces object multiplexing to significantly speed up video processing without sacrificing accuracy. We're sharing this update with the community so that high-performance applications can run on smaller, more widely available hardware. 🔗 Model weights: https://huggingface.co/facebook/sam3.1 🔗 Code: https://github.com/facebookresearch/sam3
2026-03-27 · AIatMeta · Open ↗
Without retraining, TRIBE v2 reliably predicts brain responses for unseen individuals, improving on prior methods by nearly 2-3x on movie and audiobook tasks. We're open-sourcing the model, code, paper, and demo to support neuroscience research, to build stronger AI from insights about the brain, and to accelerate breakthroughs in diagnosing and treating neurological disorders through computational simulation. 🔗 Paper: https://t.co/lcizLXXOZR 🔗 Model: https://t.co/fBm7P2Kdag 🔗 Code: https://t.co/USpnvXYbsp
2026-03-26 · AIatMeta · Open ↗
Today we're introducing TRIBE v2 (Trimodal Brain Encoder), a foundation model that predicts how the human brain responds to almost any audiovisual stimulus. Building on the architecture that won the Algonauts 2025 brain modeling competition, TRIBE v2 uses more than 500 hours of fMRI recordings from over 700 people to build a "digital twin" of neural activity, with zero-shot prediction for new subjects, new languages, and new tasks. Try the demo and learn more: https://aidemos.atmeta.com/tribev2/
2026-03-26 · AIatMeta · Open ↗
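For intuition about what a trimodal brain encoder computes, here is a minimal sketch with invented dimensions and a simple linear readout to voxels; the real TRIBE v2 architecture is described in the linked paper.

```python
import torch
import torch.nn as nn

class ToyTrimodalEncoder(nn.Module):
    """Illustrative trimodal encoder, not the real TRIBE v2: fuses
    per-timestep video, audio, and text features, then predicts
    fMRI voxel responses."""
    def __init__(self, d_video=768, d_audio=512, d_text=1024,
                 d_model=256, n_voxels=1000):
        super().__init__()
        self.proj_v = nn.Linear(d_video, d_model)
        self.proj_a = nn.Linear(d_audio, d_model)
        self.proj_t = nn.Linear(d_text, d_model)
        self.temporal = nn.GRU(d_model, d_model, batch_first=True)
        self.readout = nn.Linear(d_model, n_voxels)

    def forward(self, v, a, t):
        # v: (B, T, d_video), a: (B, T, d_audio), t: (B, T, d_text)
        fused = self.proj_v(v) + self.proj_a(a) + self.proj_t(t)
        h, _ = self.temporal(fused)  # model slow temporal dynamics
        return self.readout(h)       # (B, T, n_voxels)

model = ToyTrimodalEncoder()
B, T = 2, 30  # two clips, 30 timesteps (e.g., one per fMRI TR)
pred = model(torch.randn(B, T, 768), torch.randn(B, T, 512),
             torch.randn(B, T, 1024))
print(pred.shape)  # torch.Size([2, 30, 1000])
```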
CHMv2 is already supporting public-sector work across the US, Europe, and the rest of the world. By open-sourcing these results, we aim to accelerate research and provide a scientific basis for decisions on carbon offsets, reforestation, and land management worldwide. 🔗 Read the paper: https://arxiv.org/abs/2603.06382 🔗 Download the model: https://github.com/facebookresearch/dinov3/
2026-03-12 · AIatMeta · Open ↗
We're releasing Canopy Height Maps v2 (CHMv2), an open-source model for high-resolution global forest canopy mapping developed with @WorldResources. CHMv2 builds on DINOv3 Sat-L, a vision model optimized specifically for satellite imagery, delivering significant gains in accuracy, detail, and global consistency. 🔗 Learn more: https://ai.meta.com/blog/world-resources-institute-dino-canopy-height-maps-v2/?utm_source=twitter&utm_medium=organic_social&utm_content=video&utm_campaign=mtia
2026-03-12 · AIatMeta · Open ↗
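CHMv2's setup, dense regression on features from a backbone tuned for satellite imagery, follows a common frozen-backbone pattern, sketched below with a stand-in backbone and an invented regression head; see the dinov3 repository for the actual model and loading code.

```python
import torch
import torch.nn as nn

class FrozenBackboneStub(nn.Module):
    """Stand-in for a frozen satellite-image backbone (e.g. DINOv3
    Sat-L); returns a grid of patch features for an input image."""
    def __init__(self, dim=1024, patch=16):
        super().__init__()
        self.dim, self.patch = dim, patch

    @torch.no_grad()
    def forward(self, img):  # img: (B, 3, H, W)
        B, _, H, W = img.shape
        return torch.randn(B, self.dim, H // self.patch, W // self.patch)

class CanopyHeightHead(nn.Module):
    """Hypothetical dense regression head: patch features -> meters."""
    def __init__(self, dim=1024):
        super().__init__()
        self.regress = nn.Sequential(
            nn.Conv2d(dim, 256, 1), nn.ReLU(), nn.Conv2d(256, 1, 1))

    def forward(self, feats, out_hw):
        height = self.regress(feats)  # (B, 1, h, w) per-patch heights
        # Upsample per-patch predictions back to pixel resolution.
        return nn.functional.interpolate(height, size=out_hw, mode="bilinear")

backbone, head = FrozenBackboneStub(), CanopyHeightHead()
tile = torch.randn(1, 3, 256, 256)  # one satellite image tile
pred = head(backbone(tile), out_hw=(256, 256))
print(pred.shape)  # torch.Size([1, 1, 256, 256]), canopy height map
```

Only the small head needs training; keeping the backbone frozen is what makes a single vision model reusable across tasks like this.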
Custom silicon is key to scaling the next generation of AI. We're detailing the evolution of the Meta Training and Inference Accelerator (MTIA), our in-house chip family built to power the next generation of AI experiences. Traditional chips take years to develop, while model architectures change on a timescale of months. To close that gap, we accelerated MTIA development, shipping four generations in just two years. See our roadmap and specs here: https://ai.meta.com/blog/meta-mtia-scale-ai-chips-for-billions/?utm_source=twitter&utm_medium=organic_social&utm_content=image&utm_campaign=mtia
2026-03-11 · AIatMeta · Open ↗
Meta 🤝 AMD Today we're announcing a multi-year agreement with @AMD to bring their latest Instinct GPUs into our global infrastructure. With roughly 6GW of data center capacity planned for this deployment, we continue to grow our compute to accelerate frontier AI model development and bring personal superintelligence to billions of people. Learn more: https://about.fb.com/news/2026/02/meta-amd-partner-longterm-ai-infrastructure-agreement/
2026-02-24 · AIatMeta · Open ↗
ICYMI: @alexandr_wang spoke at the India AI Impact Summit, sharing Meta's vision for personal superintelligence and how developers in India can use AI to tackle major societal challenges. Highlights 👇, full talk here: https://www.youtube.com/live/WgW7cC-kHgY?si=rzOWRsir_oobx-D9&t=8871
2026-02-19 · AIatMeta · Open ↗
Our Chief AI Officer @alexandr_wang will be at the India AI Impact Summit. Date: Thursday, February 19 Time: 1:53 PM IST // 12:23 AM PST Watch live here: https://www.youtube.com/live/WgW7cC-kHgY
2026-02-18 · AIatMeta · Open ↗
This week we're headed to India for the AI Impact Summit & Expo 🇮🇳 Visit the Meta booth (Hall 3, Booth 3.7) to meet the team and check out: 📚 Demos of frontier research, including omnilingual automatic speech recognition (ASR) and SeamlessExpressive ⚡ Lightning talks from our experts on how AI delivers real-world value in language, accessibility, and healthcare 👓 Hands-on time with our latest AI glasses, including Oakley Meta Vanguard See you there!
2026-02-16 · AIatMeta · Open ↗
Our Segment Anything Model (SAM) is helping improve flood monitoring and disaster response. Learn how @USRAedu and @USGS fine-tuned SAM to clear a key bottleneck in real-time river mapping, enabling faster, more scalable, and more affordable disaster preparedness: https://ai.meta.com/blog/usra-sam-flood-emergencies/?utm_source=twitter&utm_medium=organic_social&utm_content=video&utm_campaign=sam
2025-12-22 · AIatMeta · Open ↗
We've open-sourced Perception Encoder Audiovisual (PE-AV), the engine behind SAM Audio's state-of-the-art audio separation. Built on the Perception Encoder models we released earlier this year, PE-AV integrates audio and visual perception and achieves state-of-the-art results across audiovisual benchmarks. Its native multimodal support helps with everyday tasks, including sound detection and richer audiovisual scene understanding. 🔗 Read the paper: https://t.co/RLWJOgG2uz 🔗 Get the code: https://t.co/1L5ZqCZlxq
2025-12-18 · AIatMeta · Open ↗
We're hosting a Reddit AMA with the researchers behind SAM 3 + SAM 3D + SAM Audio. Join us tomorrow at 2 PM PT. https://www.reddit.com/r/LocalLLaMA/comments/1pp9w31/ama_with_the_meta_researchers_behind_sam_3_sam_3d/
2025-12-17 · AIatMeta · Open ↗
Introducing Muse Spark, the first in the Muse family of models from Meta Superintelligence Labs. Muse Spark is a natively multimodal reasoning model with support for tool calling, visual chain-of-thought, and multi-agent orchestration. Muse Spark is available today: https://twitter.com/AIatMeta/status/2041910285653737975/photo/1
2026-04-08 · AIatMeta · Open ↗
Introducing SAM 3D, the newest addition to the SAM collection, bringing common sense 3D understanding of everyday images. SAM 3D includes two models: 🛋️ SAM 3D Objects for object and scene reconstruction 🧑‍🤝‍🧑 SAM 3D Body for human pose and shape estimation Both models achieve
2025-11-19 · AIatMeta · Open ↗
Today we’re excited to unveil a new generation of Segment Anything Models: 1️⃣ SAM 3 enables detecting, segmenting and tracking of objects across images and videos, now with short text phrases and exemplar prompts. 🔗 Learn more about SAM 3: https://ai.meta.com/blog/segment-anyt
2025-11-19 · AIatMeta · Open ↗
Introducing Meta Omnilingual Automatic Speech Recognition (ASR), a suite of models providing ASR capabilities for over 1,600 languages, including 500 low-coverage languages never before served by any ASR system. While most ASR systems focus on a limited set of languages that are
2025-11-10 · AIatMeta · Open ↗
New from Meta FAIR: Code World Model (CWM), a 32B-parameter research model designed to explore how world models can transform code generation and reasoning about code. We believe in advancing research in world modeling and are sharing CWM under a research license to help empower
2025-09-24 · AIatMeta · Open ↗
Introducing DINOv3: a state-of-the-art computer vision model trained with self-supervised learning (SSL) that produces powerful, high-resolution image features. For the first time, a single frozen vision backbone outperforms specialized solutions on multiple long-standing dense h
2025-08-14 · AIatMeta · Open ↗
🏆 We're thrilled to announce that Meta FAIR’s Brain & AI team won 1st place at the prestigious Algonauts 2025 brain modeling competition. Their 1B parameter model, TRIBE (Trimodal Brain Encoder), is the first deep neural network trained to predict brain responses to stimuli
2025-08-11 · AIatMeta · Open ↗
Today Mark shared Meta’s vision for the future of personal superintelligence for everyone. Read his full letter here: https://www.meta.com/superintelligence/ https://twitter.com/AIatMeta/status/1950543458609037550/video/1
2025-07-30 · AIatMeta · Open ↗
We're rapidly expanding our AI infrastructure and have adopted a novel approach of building weather-proof tents to house GPU clusters. This enables us to get new data centers online in months instead of years. 🚀 Read more in this @FastCompany article: https://www.fastcompany.co
2025-07-24 · AIatMeta · Open ↗
We’re thrilled to see our advanced ML models and EMG hardware — that transform neural signals controlling muscles at the wrist into commands that seamlessly drive computer interactions — appearing in the latest edition of @Nature. Read the story: https://www.nature.com/articles/
2025-07-23 · AIatMeta · Open ↗
Today Mark announced Meta's major AI compute investment. See his post: https://www.facebook.com/zuck/videos/2300161320399228/?rdid=TPmGsInugvCAl19v&share_url=https%3A%2F%2Fwww.facebook.com%2Fshare%2Fv%2F1AnKhQouDb%2F https://twitter.com/AIatMeta/status/1944783224288465165/photo/
2025-07-14 · AIatMeta · Open ↗
Announcing the newest releases from Meta FAIR. We’re releasing new groundbreaking models, benchmarks, and datasets that will transform the way researchers approach molecular property prediction, language processing, and neuroscience. 1️⃣ Open Molecules 2025 (OMol25): A dataset
2025-05-14 · AIatMeta · Open ↗
We’re releasing model weights for our 8B- parameter Dynamic Byte Latent Transformer, an alternative to traditional tokenization methods with the potential to redefine the standards for language model efficiency and reliability. Learn more about how Dynamic Byte Latent https://tw
2025-05-12 · AIatMeta · Open ↗
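The "dynamic" part of the Dynamic Byte Latent Transformer refers to how bytes are grouped into latent patches: a new patch starts where the next byte is hard to predict, so difficult regions get more capacity. A toy illustration, with a crude diversity score standing in for the small byte-level LM's entropy estimates:

```python
import numpy as np

def next_byte_surprise(data: bytes, window: int = 4) -> np.ndarray:
    # Toy proxy for next-byte entropy: local byte diversity in a small
    # context window. The actual model uses a learned byte-level LM.
    scores = np.zeros(len(data))
    for i in range(len(data)):
        ctx = data[max(0, i - window):i + 1]
        scores[i] = len(set(ctx)) / len(ctx)
    return scores

def dynamic_patches(data: bytes, threshold: float = 0.5) -> list[bytes]:
    """Start a new patch wherever the surprise score spikes, so
    hard-to-predict spans are split finely and easy runs stay whole."""
    scores = next_byte_surprise(data)
    patches, start = [], 0
    for i in range(1, len(data)):
        if scores[i] > threshold:
            patches.append(data[start:i])
            start = i
    patches.append(data[start:])
    return patches

print(dynamic_patches(b"aaaaaaaaXbYbbbbbbbb"))
# Long predictable runs stay in one patch; the surprising middle splits.
```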
Introducing Meta Perception Language Model (PLM): an open & reproducible vision-language model tackling challenging visual tasks. Learn more about how PLM can help the open source community build more capable computer vision systems. Read the research paper, and download th
2025-05-07 · AIatMeta · Open ↗
@willccbb https://twitter.com/AIatMeta/status/1908620595371388992/photo/1
2025-04-05 · AIatMeta · Open ↗
Take a look under the hood of Llama 4 Scout and Llama 4 Maverick – our most advanced AI models yet 🧵 https://twitter.com/AIatMeta/status/1908618302676697317/photo/1
2025-04-05 · AIatMeta · Open ↗
Today is the start of a new era of natively multimodal AI innovation. Today, we’re introducing the first Llama 4 models: Llama 4 Scout and Llama 4 Maverick — our most advanced models yet and the best in their class for multimodality. Llama 4 Scout • 17B-active-parameter model
2025-04-05 · AIatMeta · Open ↗
Llama has now been downloaded over 1 Billion times! A note to: The researchers at Meta training these models — and those building on the research in other labs. The developers and enthusiasts on r/LocalLlama, @huggingface and more; experimenting with new models and creating
2025-03-18 · AIatMeta · Open ↗
Research paper from Meta FAIR and @bcbl_ researchers – Brain-to-Text Decoding: A Non-invasive Approach via Typing ➡️ https://ai.meta.com/research/publications/brain-to-text-decoding-a-non-invasive-approach-via-typing/?utm_source=twitter&utm_medium=organic_social&utm_content=video
2025-03-03 · AIatMeta · Open ↗
Introducing Aria Gen 2, next generation glasses that we hope will enable researchers from industry and academia to unlock new work in machine perception, contextual AI, robotics and more. Aria Gen 2 details + sign up for availability updates ➡️ https://www.meta.com/blog/project-
2025-02-27 · AIatMeta · Open ↗
OpenBioLLM-8B and OpenBioLLM-70B are new fine-tuned Llama models developed by Saama to streamline tasks that can accelerate clinical trials, opening up new possibilities in personalized medicine. More details in their research paper ➡️ https://aclanthology.org/2024.bionlp-1.51.pd
2025-01-17 · AIatMeta · Open ↗
New research from Meta FAIR: Large Concept Models (LCM) is a fundamentally different paradigm for language modeling that decouples reasoning from language representation, inspired by how humans can plan high-level thoughts to communicate. https://twitter.com/AIatMeta/status/18712
2024-12-23 · AIatMeta · Open ↗
As we continue to explore new post-training techniques, today we're releasing Llama 3.3 — a new open source model that delivers leading performance and quality across text-based use cases such as synthetic data generation at a fraction of the inference cost. https://twitter.com/A
2024-12-06 · AIatMeta · Open ↗
Today at Meta FAIR we’re announcing three new cutting-edge developments in robotics and touch perception — and releasing a collection of artifacts to empower the community to build on this work. Details on all of this new work ➡️ https://ai.meta.com/blog/fair-robotics-open-sourc
2024-10-31 · AIatMeta · Open ↗
We previously shared our research on Layer Skip, an end-to-end solution for accelerating LLMs from researchers at Meta FAIR. It achieves this by executing a subset of an LLM’s layers and utilizing subsequent layers for verification and correction. We’re now releasing inference ht
2024-10-29 · AIatMeta · Open ↗
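Layer Skip's decoding scheme amounts to self-speculative decoding: the early-exit path (a subset of layers) drafts several tokens cheaply, then the full model verifies the draft and accepts the longest agreeing prefix. A self-contained toy of that accept/correct loop, with stand-in "models" instead of the released inference code:

```python
import random

random.seed(0)
VOCAB = list("abcd")

def full_model_next(ctx: str) -> str:
    # Stand-in for the full-depth model: a deterministic toy rule.
    return VOCAB[(len(ctx) * 2 + ord(ctx[-1])) % 4]

def early_exit_next(ctx: str) -> str:
    # Stand-in for the early-exit draft path: usually agrees with the
    # full model, occasionally wrong, mimicking a cheaper predictor.
    return full_model_next(ctx) if random.random() < 0.8 else random.choice(VOCAB)

def self_speculative_decode(prompt: str, n_tokens: int, n_draft: int = 4) -> str:
    out = prompt
    while len(out) - len(prompt) < n_tokens:
        # 1) Draft n_draft tokens cheaply via the early-exit path.
        ctx, draft = out, []
        for _ in range(n_draft):
            tok = early_exit_next(ctx)
            draft.append(tok)
            ctx += tok
        # 2) Verify: accept draft tokens while the full model agrees;
        #    on the first disagreement, emit the full model's token.
        #    (Layer Skip reuses the remaining layers for this check and
        #    batches the verification into a single forward pass.)
        for tok in draft:
            target = full_model_next(out)
            if tok == target:
                out += tok     # accepted: a token at draft-path cost
            else:
                out += target  # rejected: corrected by the full model
                break
    return out[len(prompt):][:n_tokens]

print(self_speculative_decode("ab", 12))
```

The output is identical to plain full-model decoding; the saving is that accepted draft tokens only needed the cheap early-exit pass plus a shared verification pass.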
We want to make it easier for more people to build with Llama — so today we’re releasing new quantized versions of Llama 3.2 1B & 3B that deliver up to 2-4x increases in inference speed and, on average, 56% reduction in model size, and 41% reduction in memory footprint. Detai
2024-10-24 · AIatMeta · Open ↗
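As a back-of-the-envelope check on the size claim, here is an illustrative calculation for a ~1B-parameter model. The parameter count and bit-width mix are assumptions chosen to show how an average reduction in the reported range can arise, not the exact recipe behind these releases.

```python
def gb(n_params: float, bits: float) -> float:
    # Bytes = params * bits / 8; report in gigabytes.
    return n_params * bits / 8 / 1e9

n = 1.24e9            # illustrative parameter count for a ~1B model
baseline = gb(n, 16)  # BF16 checkpoint

# Assume ~75% of weights quantized to ~4 bits and the rest (e.g.,
# embeddings) kept at 16-bit; an invented mix for illustration.
quantized = gb(n * 0.75, 4) + gb(n * 0.25, 16)

print(f"BF16:      {baseline:.2f} GB")
print(f"Quantized: {quantized:.2f} GB")
print(f"Reduction: {1 - quantized / baseline:.0%}")  # ~56%
```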