Today's Digest

X Andrej Karpathy: Someone recently suggested to me that the reason OpenClaw moment was so big is because it's the first time a large group of non-te…

X Andrej Karpathy: Judging by my tl there is a growing gap in understanding of AI capability. The first issue I think is around recency and tier of u…

GitHub karpathy: Karpathy's newly released minimal ChatGPT reproduction; the full training-to-inference stack is just a few thousand lines of readable code, aiming to bring "train a ChatGPT for about a hundred dollars" within reach of individuals.

GitHub karpathy: Karpathy's early teaching-grade GPT implementation; the code is short enough to read in one sitting, and it has long served as the shortest-path entry point for understanding Transformer training and inference.

GitHub anthropics: Anthropic has published one of its internal engineering take-home interview problems, first-hand material for understanding the company's engineering taste and evaluation standards.

Summary + take: OpenAI's newly open-sourced multi-agent orchestration framework; the focus is not a code-writing coding agent but… | Take: Symphony is positioned more like workflow infrastructure: the real value is that it takes "multi-agent collaboration" and…

Summary + take: An update to OpenAI's official examples repo, which usually reflects the new patterns they want developers to adopt first (tool use, str… | Take: cookbook updates are worth tracking on their own: they show which new AP…

Summary + take: Karpathy's BPE tokenizer trainer rewritten in Rust, turning tiktoken… | Take: rustbpe fills in the "black box" of tokenizer training: it makes tokenizer…

Summary + take: An update to OpenAI's official Python SDK, which usually surfaces new interface details, parameter changes, or default-path… | Take: official SDK commits are often an early indicator of API direction; for teams building integrations and multi-model plat…

Summary + take: Anthropic's official Claude Agent SDK examples repo, covering code agents, fil… | Take: demo repos often expose an SDK's boundaries and recommended patterns earlier than the docs; for teams choosing an agent stack…

Someone recently suggested to me that the reason OpenClaw moment was so big is because it's the first time a large group of non-technical pe...

Source: X Andrej Karpathy

Tags: #x_profiles #extended

Author:

Original: Someone recently suggested to me that the reason OpenClaw moment was so big is because it's the first time a large group of non-technical people (who otherwise only knew AI as synonymous with ChatGPT as a website) experienced the latest agentic models.

Link: https://twitter.com/karpathy/status/2042341482531864741

Take: Around "Someone recently suggested to me that the re...", what really matters is whether it ends up affecting teams' model selection, capability boundaries, and product experience.

Judging by my tl there is a growing gap in understanding of AI capability. The first issue I think is around recency and tier of use.

Source: X Andrej Karpathy

Tags: #x_profiles #extended

Author:

Original: Judging by my tl there is a growing gap in understanding of AI capability. The first issue I think is around recency and tier of use. I think a lot of people tried the free tier of ChatGPT somewhere last year and allowed it to inform their views on AI a little too much. This is a group of reactions laughing at various quirks of the models, hallucinations, etc. Yes I also saw the viral videos of OpenAI's Advanced Voice mode fumbling simple queries like "should I drive or walk to the carwash". The thing is that these free and old/deprecated models don't reflect the capability in the latest round of state of the art agentic models of this year, especially OpenAI Codex and Claude Code.

But that brings me to the second issue. Even if people paid $200/month to use the state of the art models, a lot of the capabilities are relatively "peaky" in highly technical areas. Typical queries around search, writing, advice, etc. are *not* the domain that has made the most noticeable and dramatic strides in capability. Partly, this is due to the technical details of reinforcement learning and its use of verifiable rewards. But partly, it's also because these use cases are not sufficiently prioritized by the companies in their hillclimbing because they don't lead to as much value. The goldmines are elsewhere, and the focus comes along.

So that brings me to the second group of people, who *both* 1) pay for and use the state of the art frontier agentic models (OpenAI Codex, Claude Code) and 2) do so professionally in technical domains like programming, math and research. This group of people is subject to the highest amount of "AI Psychosis" because the recent improvements in these domains as of this year have been nothing short of staggering. When you hand a computer terminal to one of these models, you can now watch them melt programming problems that you'd normally expect to take days/weeks of work. It's this second group of people that assigns a much greater gravity to the capabilities, their slope, and various cyber-related repercussions.

TLDR the people in these two groups are speaking past each other. It really is simultaneously the case that OpenAI's free and I think slightly orphaned "Advanced Voice Mode" will fumble the dumbest questions in your Instagram's reels and *at the same time*, OpenAI's highest-tier and paid Codex model will go off for 1 hour to coherently restructure an entire code base, or find and exploit vulnerabilities in computer systems. This part really works and has made dramatic strides because of 2 properties: 1) these domains offer explicit reward functions that are verifiable, meaning they are easily amenable to reinforcement learning training (e.g. unit tests passed yes or no, in contrast to writing, which is much harder to explicitly judge), but also 2) they are a lot more valuable in b2b settings, meaning that the biggest fraction of the team is focused on improving them. So here we are.

Quoted tweet, staysaasy (@staysaasy): "The degree to which you are awed by AI is perfectly correlated with how much you use AI to code." https://nitter.net/staysaasy/status/2042063369432183238#m

链接:https://twitter.com/karpathy/status/2042334451611693415

Take: Following "Judging by my tl there is a growing gap in understanding of...", the thing to watch next is whether security incidents change enterprise procurement, integration, and pre-launch compliance gates.

karpathy/nanochat

Source: GitHub karpathy

Tags: #github_orgs #extended

Author:

Original: Karpathy's newly released minimal ChatGPT reproduction; the full training-to-inference stack is just a few thousand lines of readable code, aiming to bring "train a ChatGPT for about a hundred dollars" within reach of individuals.

Link: https://github.com/karpathy/nanochat

Take: What's most worth looking at in nanochat isn't performance but that, for the first time, it compresses the full ChatGPT training + inference pipeline to a granularity an individual can read through and run; it's most valuable for developers who want to master the internals.

karpathy/minGPT

Source: GitHub karpathy

Tags: #github_orgs #extended

Author:

Original: Karpathy's early teaching-grade GPT implementation; the code is short enough to read in one sitting, and it has long served as the shortest-path entry point for understanding Transformer training and inference.

Link: https://github.com/karpathy/minGPT

Take: minGPT's value isn't production readiness but textbook-level clarity: it best suits engineers who want to build a training loop from scratch and confirm they truly understand GPT.
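
The core mechanism minGPT teaches can be sketched in a few lines. Below is a minimal single-head causal self-attention in plain Python, with toy dimensions and identity Q/K/V projections so only the masking and weighting logic remains; it is an illustrative sketch, not minGPT's actual code:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def causal_self_attention(x):
    """Single-head causal self-attention over a list of d-dim vectors.

    Toy version: queries, keys, and values are the inputs themselves
    (identity projections), so only the causal masking and the
    attention-weighted averaging are shown.
    """
    d = len(x[0])
    out = []
    for t, q in enumerate(x):
        # Attend only to positions <= t (the causal mask).
        scores = [sum(qi * ki for qi, ki in zip(q, x[s])) / math.sqrt(d)
                  for s in range(t + 1)]
        w = softmax(scores)
        # Weighted sum of the visible value vectors.
        out.append([sum(w[s] * x[s][i] for s in range(t + 1))
                    for i in range(d)])
    return out
```

Position 0 can only see itself, so its output equals its input; later positions mix in earlier ones. minGPT adds the learned Q/K/V projections, multiple heads, and batched tensor ops on top of exactly this loop.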

anthropics/original_performance_takehome

Source: GitHub anthropics

Tags: #github_orgs #extended

Author:

Original: Anthropic has published one of its internal engineering take-home interview problems, first-hand material for understanding the company's engineering taste and evaluation standards.

Link: https://github.com/anthropics/original_performance_takehome

Take: The signal here isn't the problem itself but that Anthropic is opening up its hiring bar; very useful for anyone trying to understand their engineering culture and evaluation yardstick.

openai/symphony

Source: GitHub openai

Tags: #github_orgs #extended

Author:

Original: OpenAI's newly open-sourced multi-agent orchestration framework; the focus is not a code-writing coding agent but task isolation, delegation, and team-level collaboration.

Link: https://github.com/openai/symphony

Take: Symphony is positioned more like workflow infrastructure: the real value is that it standardizes the implementation details of "multi-agent collaboration", rather than shipping yet another coding agent.

openai/openai-cookbook

Source: GitHub openai

Tags: #github_orgs #extended

Author:

Original: An update to OpenAI's official examples repo, which usually reflects the new patterns they want developers to adopt first (tool use, structured output, the Responses API, etc.).

Link: https://github.com/openai/openai-cookbook

Take: cookbook updates are worth tracking on their own: they show which new APIs and usage paths OpenAI wants developers to default to, an early signal of the roadmap.

karpathy/rustbpe

Source: GitHub karpathy

Tags: #github_orgs #extended

Author:

Original: Karpathy's BPE tokenizer trainer rewritten in Rust, turning the opaque training process behind tiktoken into code you can study and experiment with.

Link: https://github.com/karpathy/rustbpe

Take: rustbpe fills in the "black box" of tokenizer training: it makes tokenizer-variant experiments, teaching, and reproduction far more approachable, with researchers benefiting first.
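
The training loop rustbpe opens up is conceptually small: repeatedly count adjacent token pairs and fuse the most frequent one. A minimal sketch of that greedy merge loop in Python on a character-level toy corpus (illustrative only, not rustbpe's implementation):

```python
from collections import Counter

def train_bpe(text, num_merges):
    """Learn BPE merges: repeatedly fuse the most frequent adjacent pair."""
    tokens = list(text)          # start from individual characters
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:
            break                # no pair repeats; nothing left to merge
        merges.append((a, b))
        # Rewrite the token stream using the new merged symbol.
        out, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
                out.append(a + b)
                i += 2
            else:
                out.append(tokens[i])
                i += 1
        tokens = out
    return merges, tokens
```

Real trainers (tiktoken's, rustbpe) operate on byte sequences, respect word boundaries via a pre-tokenization regex, and keep pair counts incrementally instead of rescanning per merge, but the learned artifact is the same ordered merge list.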

openai/openai-python

Source: GitHub openai

Tags: #github_orgs #extended

Author:

Original: An update to OpenAI's official Python SDK, which usually surfaces new interface details, parameter changes, or default-path adjustments ahead of any announcement.

Link: https://github.com/openai/openai-python

Take: Official SDK commits are often an early indicator of API direction, and more informative than press releases for teams building integrations and multi-model platforms.

anthropics/claude-agent-sdk-demos

Source: GitHub anthropics

Tags: #github_orgs #extended

Author:

Original: Anthropic's official Claude Agent SDK examples repository, covering typical usage such as code agents, file editing, and toolchain orchestration.

Link: https://github.com/anthropics/claude-agent-sdk-demos

Take: Demo repos often expose an SDK's boundaries and recommended patterns earlier than the docs; for teams currently choosing an agent stack, this is the material most worth running through first.

anthropics/prompt-eng-interactive-tutorial

Source: GitHub anthropics

Tags: #github_orgs #extended

Author:

Original: Anthropic's official interactive prompt-engineering tutorial, following the structure of their internal training material; well suited for teams that want to systematically shore up prompt fundamentals.

Link: https://github.com/anthropics/prompt-eng-interactive-tutorial

Take: Its value isn't showmanship but collapsing prompt engineering from an "art" into something teachable and testable. Especially worth working through for teams just getting started with Claude.

OpenAI Full Fan Mode Contest: Terms & Conditions

Source: OpenAI Blog

Tags: #ai_engineering_blogs #core

Author:

Original: The rules page for OpenAI's Full Fan Mode contest, covering eligibility, judging, prizes, and so on.

Link: https://openai.com/index/full-fan-mode-contest-terms-conditions

Take: There is exactly one reason a marketing page like this is worth including: it reveals which user scenarios OpenAI is pushing the product toward. Low information density, but a clear signal.

The next phase of enterprise AI

Source: OpenAI Blog

Tags: #ai_engineering_blogs #core

Author:

Original: OpenAI outlines the next phase of enterprise AI, as adoption accelerates across industries with Frontier, ChatGPT Enterprise, Codex, and company-wide AI agents.

Link: https://openai.com/index/next-phase-of-enterprise-ai

Take: Around "The next phase of enterprise AI", what really matters is whether it ends up affecting teams' model selection, capability boundaries, and product experience.

karpathy/autoresearch

Source: GitHub karpathy

Tags: #github_orgs #extended

Author:

Original: AI agents running research on single-GPU nanochat training automatically

Link: https://github.com/karpathy/autoresearch

Take: Around karpathy/autoresearch, what really matters is whether it ends up affecting teams' model selection, capability boundaries, and product experience.

anthropics/skills

Source: GitHub anthropics

Tags: #github_orgs #extended

Author:

Original: Public repository for Agent Skills

Link: https://github.com/anthropics/skills

Take: The core of anthropics/skills isn't novelty but whether it improves engineering efficiency, deployment stability, or developer workflows.

Extrinsic Hallucinations in LLMs

Source: Lilian Weng Lil'Log

Tags: #ai_engineering_blogs #core

Author:

Original: Hallucination in large language models usually refers to the model generating unfaithful, fabricated, inconsistent, or nonsensical content. As a term, hallucination has been somewhat generalized to cases when the model makes mistakes. Here, I would like to narrow down the problem of hallucination to cases where the model output is fabricated and not grounded by either the provided context or world knowledge. There are two types of hallucination:

- In-context hallucination: The model output should be consistent with the source content in context.
- Extrinsic hallucination: The model output should be grounded by the pre-training dataset. However, given the size of the pre-training dataset, it is too expensive to retrieve and identify conflicts per generation. If we consider the pre-training data corpus as a proxy for world knowledge, we essentially try to ensure the model output is factual and verifiable by external world knowledge. Equally importantly, when the model does not know about a fact, it should say so.

This post focuses on extrinsic hallucination. To avoid hallucination, LLMs need to be (1) factual and (2) acknowledge not knowing the answer when applicable.

链接:https://lilianweng.github.io/posts/2024-07-07-hallucination/

Take: Extrinsic Hallucinations in LLMs is better judged by its practical adoption value than by whether it generates fresh discussion.
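
The in-context/extrinsic distinction in the excerpt can be made concrete with a toy check: for in-context hallucination, the question is simply whether the output's claims are supported by the provided source. A naive word-overlap grounding check in Python, purely illustrative (real systems use NLI models or claim decomposition, not bag-of-words overlap; the function names and threshold are this sketch's own):

```python
import re

def grounding_score(context, sentence):
    """Fraction of the sentence's content words that appear in the context.

    A crude proxy for 'is this sentence supported by the source text'.
    """
    stop = {"the", "a", "an", "is", "are", "was", "were", "of", "in",
            "to", "and", "that", "it", "on", "for"}
    def words(s):
        return {w for w in re.findall(r"[a-z']+", s.lower())} - stop
    ctx, sent = words(context), words(sentence)
    if not sent:
        return 1.0
    return len(sent & ctx) / len(sent)

def flag_ungrounded(context, sentences, threshold=0.5):
    """Return output sentences whose overlap with the context is below threshold."""
    return [s for s in sentences if grounding_score(context, s) < threshold]
```

A flagged sentence is only a candidate hallucination; the point of the sketch is the shape of the problem (output claims checked against a reference), which for extrinsic hallucination would need world knowledge rather than a supplied context.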

Diffusion Models for Video Generation

Source: Lilian Weng Lil'Log

Tags: #ai_engineering_blogs #core

Author:

Original: Diffusion models have demonstrated strong results on image synthesis in past years. Now the research community has started working on a harder task--using it for video generation. The task itself is a superset of the image case, since an image is a video of 1 frame, and it is much more challenging because:

- It has extra requirements on temporal consistency across frames in time, which naturally demands more world knowledge to be encoded into the model.
- In comparison to text or images, it is more difficult to collect large amounts of high-quality, high-dimensional video data, let alone text-video pairs.

Required Pre-read: Please make sure you have read the previous blog on "What are Diffusion Models?" for image generation before continuing here.

Link: https://lilianweng.github.io/posts/2024-04-12-diffusion-video/

Take: Diffusion Models for Video Generation is better judged by its practical adoption value than by whether it generates fresh discussion.

Thinking about High-Quality Human Data

Source: Lilian Weng Lil'Log

Tags: #ai_engineering_blogs #core

Author:

Original: [Special thank you to Ian Kivlichan for many useful pointers (e.g. the 100+ year old Nature paper "Vox populi") and nice feedback.] High-quality data is the fuel for modern deep learning model training. Most of the task-specific labeled data comes from human annotation, such as classification tasks or RLHF labeling (which can be constructed as a classification format) for LLM alignment training. Lots of ML techniques in the post can help with data quality, but fundamentally human data collection involves attention to detail and careful execution. The community knows the value of high-quality data, but somehow we have this subtle impression that "everyone wants to do the model work, not the data work" (Sambasivan et al. 2021).

Link: https://lilianweng.github.io/posts/2024-02-05-human-data-quality/

Take: More than surface-level specs, Thinking about High-Quality Human Data is worth watching for whether it delivers real improvements in reasoning quality, retrieval effectiveness, or usability.
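
The "Vox populi" pointer in the excerpt is about aggregating noisy human judgments. The simplest aggregator such annotation pipelines start from is plain per-item majority vote; a minimal sketch in Python (more refined methods, e.g. Dawid-Skene, additionally weight annotators by estimated reliability):

```python
from collections import Counter

def majority_vote(annotations):
    """Aggregate per-item labels from multiple annotators by majority vote.

    annotations: dict mapping item id -> list of labels from different raters.
    Returns item id -> (label, agreement), where agreement is the fraction of
    raters who chose the winning label -- a cheap per-item quality signal.
    """
    result = {}
    for item, labels in annotations.items():
        label, n = Counter(labels).most_common(1)[0]
        result[item] = (label, n / len(labels))
    return result
```

Low-agreement items are the ones worth routing back for re-annotation or adjudication, which is where the "attention to detail and careful execution" in the excerpt actually lands.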

Adversarial Attacks on LLMs

Source: Lilian Weng Lil'Log

Tags: #ai_engineering_blogs #core

Author:

Original: The use of large language models in the real world has been strongly accelerated by the launch of ChatGPT. We (including my team at OpenAI, shoutout to them) have invested a lot of effort to build default safe behavior into the model during the alignment process (e.g. via RLHF). However, adversarial attacks or jailbreak prompts could potentially trigger the model to output something undesired. A large body of groundwork on adversarial attacks is on images, which, differently, operates in the continuous, high-dimensional space. Attacks for discrete data like text have been considered a lot more challenging, due to the lack of direct gradient signals. My past post on Controllable Text Generation is quite relevant to this topic, as attacking LLMs is essentially controlling the model to output a certain type of (unsafe) content.

Link: https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/

Take: From Adversarial Attacks on LLMs, the thing to watch next is whether security incidents change enterprise procurement, integration, and pre-launch compliance gates.

LLM Powered Autonomous Agents

Source: Lilian Weng Lil'Log

Tags: #ai_engineering_blogs #core

Author:

Original: Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concept demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.

Agent System Overview. In an LLM-powered autonomous agent system, LLM functions as the agent's brain, complemented by several key components:

- Planning. Subgoal decomposition: the agent breaks down large tasks into smaller, manageable subgoals, enabling efficient handling of complex tasks. Reflection and refinement: the agent can do self-criticism and self-reflection over past actions, learn from mistakes and refine them for future steps, thereby improving the quality of final results.
- Memory. Short-term memory: all of in-context learning (see Prompt Engineering) can be considered as utilizing the short-term memory of the model to learn. Long-term memory: this provides the agent with the capability to retain and recall (infinite) information over extended periods, often by leveraging an external vector store and fast retrieval.
- Tool use. The agent learns to call external APIs for extra information that is missing from the model weights (often hard to change after pre-training), including current information, code execution capability, access to proprietary information sources and more.

(Figure: overview of an LLM-powered autonomous agent system.)

Component One: Planning. A complicated task usually involves many steps. An agent needs to know what they are and plan ahead.

Link: https://lilianweng.github.io/posts/2023-06-23-agent/

Take: The core of LLM Powered Autonomous Agents isn't novelty but whether it improves engineering efficiency, deployment stability, or developer workflows.
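
The planning/memory/tool-use components in the excerpt compose into a simple control loop. A minimal sketch in Python with the LLM stubbed out as a callable (the loop structure and the step format are this sketch's own assumptions; real agents such as AutoGPT add retries, long-term memory stores, and richer tool schemas):

```python
def run_agent(llm, tools, task, max_steps=5):
    """Plan -> act -> observe loop with a short-term memory of observations.

    llm(prompt) -> dict like {"tool": name, "args": {...}} or {"answer": ...}
    tools       -> dict mapping tool name to a callable
    """
    memory = [f"task: {task}"]          # short-term memory = the context window
    for _ in range(max_steps):
        step = llm("\n".join(memory))   # planning: decide the next action
        if "answer" in step:            # the agent decides it is done
            return step["answer"]
        tool = tools[step["tool"]]      # tool use: call an external capability
        obs = tool(**step.get("args", {}))
        memory.append(f"{step['tool']} -> {obs}")  # material for reflection
    return None                          # gave up within the step budget
```

Everything interesting lives in what `llm` returns: the same loop covers subgoal decomposition (the model emits the next action given the transcript so far) and reflection (past tool observations sit in the prompt for the next planning call).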