Issue 46 | karpathy/rustbpe
Today's Summary
X Andrej Karpathy: Someone recently suggested to me that the reason the OpenClaw moment was so big is because it's the first time a large group of non-te…
X Andrej Karpathy: Judging by my tl there is a growing gap in understanding of AI capability. The first issue I think is around recency and tier of u…
GitHub karpathy: Karpathy's newly released minimal ChatGPT reproduction; the full training-to-inference stack is only a few thousand lines of readable code, aiming to bring "run a ChatGPT for a hundred dollars" within an individual's hands-on reach.
GitHub karpathy: Karpathy's early teaching-grade GPT implementation, short enough to read in one sitting; it has long served as the shortest-path entry point for understanding Transformer training and inference.
GitHub anthropics: Anthropic has published its internal engineering take-home interview problem, first-hand material for understanding their engineering taste and evaluation criteria.
Summary + take: OpenAI's newly open-sourced multi-agent orchestration framework; the focus is not a code-writing coding agent but… | Take: Symphony is positioned more like workflow infrastructure: the real value is that it turns "multi-agent collaboration"…
Summary + take: updates to OpenAI's official examples repository usually reflect the new patterns they want developers to adopt first (tool use, str… | Take: cookbook updates are worth tracking on their own: they reveal which new AP… OpenAI wants developers to default to.
Summary + take: Karpathy's BPE tokenizer trainer rewritten in Rust turns tiktoken… | Take: rustbpe fills in the "black box" of tokenizer training: it lets tokenizer…
Summary + take: updates to OpenAI's official Python SDK often surface new interface details, parameter changes, or default-path… | Take: official SDK commits are often an early signal of API direction; for integration and multi-model platform tea…
Summary + take: Anthropic's official Claude Agent SDK examples repository, covering code agents, fil… | Take: demo repos often expose an SDK's boundaries and recommended patterns earlier than the docs do; for teams choosing an agent stack…
Someone recently suggested to me that the reason the OpenClaw moment was so big is because it's the first time a large group of non-technical pe...
Tags: #x_profiles #extended
Text: Someone recently suggested to me that the reason the OpenClaw moment was so big is because it's the first time a large group of non-technical people (who otherwise only knew AI as synonymous with ChatGPT as a website) experienced the latest agentic models.
Judging by my tl there is a growing gap in understanding of AI capability. The first issue I think is around recency and tier of use.
Tags: #x_profiles #extended
Text: Judging by my tl there is a growing gap in understanding of AI capability. The first issue I think is around recency and tier of use. I think a lot of people tried the free tier of ChatGPT somewhere last year and allowed it to inform their views on AI a little too much. This group's reactions amount to laughing at various quirks of the models, hallucinations, etc. Yes, I also saw the viral videos of OpenAI's Advanced Voice mode fumbling simple queries like "should I drive or walk to the carwash". The thing is that these free and old/deprecated models don't reflect the capability of the latest round of state-of-the-art agentic models of this year, especially OpenAI Codex and Claude Code.
But that brings me to the second issue. Even if people paid $200/month to use the state-of-the-art models, a lot of the capabilities are relatively "peaky" in highly technical areas. Typical queries around search, writing, advice, etc. are *not* the domain that has made the most noticeable and dramatic strides in capability. Partly, this is due to the technical details of reinforcement learning and its use of verifiable rewards. But partly, it's also because these use cases are not sufficiently prioritized by the companies in their hillclimbing, because they don't lead to as much value. The goldmines are elsewhere, and the focus comes along.
So that brings me to the second group of people, who *both* 1) pay for and use the state-of-the-art frontier agentic models (OpenAI Codex, Claude Code) and 2) do so professionally in technical domains like programming, math and research. This group of people is subject to the highest amount of "AI Psychosis", because the recent improvements in these domains as of this year have been nothing short of staggering. When you hand a computer terminal to one of these models, you can now watch them melt programming problems that you'd normally expect to take days/weeks of work. It's this second group of people that assigns a much greater gravity to the capabilities, their slope, and various cyber-related repercussions.
TLDR: the people in these two groups are speaking past each other. It really is simultaneously the case that OpenAI's free and, I think, slightly orphaned "Advanced Voice Mode" will fumble the dumbest questions in your Instagram reels and, *at the same time*, OpenAI's highest-tier, paid Codex model will go off for 1 hour to coherently restructure an entire code base, or find and exploit vulnerabilities in computer systems. This part really works and has made dramatic strides because of 2 properties: 1) these domains offer explicit reward functions that are verifiable, meaning they are easily amenable to reinforcement learning training (e.g. unit tests passed, yes or no; in contrast to writing, which is much harder to judge explicitly), but also 2) they are a lot more valuable in b2b settings, meaning that the biggest fraction of the teams is focused on improving them. So here we are.
Quoting staysaasy (@staysaasy): "The degree to which you are awed by AI is perfectly correlated with how much you use AI to code." https://nitter.net/staysaasy/status/2042063369432183238#m
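The "verifiable rewards" point is the crux of why coding improved so fast, and it is concrete enough to sketch. Below is a minimal, hypothetical illustration (the `verifiable_reward` harness is invented for this digest, not any lab's actual training code, and assumes pytest is installed): the reward is 1.0 if the model's candidate code passes the unit tests and 0.0 otherwise, exactly the kind of binary, checkable signal that RL can climb and that writing quality does not offer.

```python
import os
import subprocess
import sys
import tempfile

def verifiable_reward(candidate_code: str, test_code: str) -> float:
    """Binary reward: 1.0 if the candidate passes the unit tests, else 0.0.

    Hypothetical toy harness (assumes pytest is installed); real RL
    pipelines sandbox execution far more carefully than this.
    """
    with tempfile.TemporaryDirectory() as tmp:
        with open(os.path.join(tmp, "solution.py"), "w") as f:
            f.write(candidate_code)
        with open(os.path.join(tmp, "test_solution.py"), "w") as f:
            f.write(test_code)
        # The pytest exit code is the entire reward signal: no judge model,
        # no human rater, just "did the tests pass".
        result = subprocess.run(
            [sys.executable, "-m", "pytest", "-q", "test_solution.py"],
            cwd=tmp, capture_output=True, timeout=60,
        )
    return 1.0 if result.returncode == 0 else 0.0

# A correct solution earns 1.0; a buggy one would earn 0.0.
print(verifiable_reward(
    "def add(a, b):\n    return a + b\n",
    "from solution import add\n\ndef test_add():\n    assert add(2, 3) == 5\n",
))
```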
karpathy/nanochat
Tags: #github_orgs #extended
Text: Karpathy's newly released minimal ChatGPT reproduction; the full stack from training to inference is only a few thousand lines of readable code, aiming to bring "run a ChatGPT for a hundred dollars" within an individual's hands-on reach.
karpathy/minGPT
Tags: #github_orgs #extended
Text: Karpathy's early teaching-grade GPT implementation, short enough to read in one sitting; it has long served as the shortest-path entry point for understanding Transformer training and inference.
anthropics/original_performance_takehome
Tags: #github_orgs #extended
Text: Anthropic has published its internal engineering take-home interview problem, first-hand material for understanding their engineering taste and evaluation criteria.
Link: https://github.com/anthropics/original_performance_takehome
openai/symphony
Tags: #github_orgs #extended
Text: OpenAI's newly open-sourced multi-agent orchestration framework; the emphasis is not on a code-writing coding agent but on task isolation, delegation, and team-level collaboration.
openai/openai-cookbook
Tags: #github_orgs #extended
Text: Updates to OpenAI's official examples repository, which usually reflect the new patterns they want developers to adopt first (tool use, structured output, the responses API, and so on).
karpathy/rustbpe
Tags: #github_orgs #extended
Text: Karpathy's BPE tokenizer trainer rewritten in Rust, turning the opaque training process inside tiktoken into code you can study and experiment with.
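For anyone who has only consumed trained tokenizers, the training loop itself is small enough to sketch. Here is a toy, pure-Python version of the idea (illustrative only; this is not rustbpe's actual code, which pre-splits text with a regex and is heavily optimized): start from raw bytes, then repeatedly count adjacent token pairs and merge the most frequent one into a new token id.

```python
from collections import Counter

def train_bpe(text: str, num_merges: int) -> dict:
    """Toy BPE trainer: begin with the 256 byte values as the vocabulary,
    then greedily merge the most frequent adjacent pair num_merges times.
    """
    ids = list(text.encode("utf-8"))  # initial tokens: raw byte values
    merges = {}                       # (id, id) -> new token id
    next_id = 256
    for _ in range(num_merges):
        pairs = Counter(zip(ids, ids[1:]))
        if not pairs:
            break
        pair = pairs.most_common(1)[0][0]
        merges[pair] = next_id
        # Replace every occurrence of the winning pair with the new id.
        out, i = [], 0
        while i < len(ids):
            if i + 1 < len(ids) and (ids[i], ids[i + 1]) == pair:
                out.append(next_id)
                i += 2
            else:
                out.append(ids[i])
                i += 1
        ids = out
        next_id += 1
    return merges

# 10 merges over a tiny corpus; real vocabularies use tens of thousands.
print(train_bpe("low lower lowest low low", 10))
```

Encoding new text is then just replaying the learned merges in order, which is why the trained artifact is nothing more than an ordered merge table.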
openai/openai-python
Tags: #github_orgs #extended
Text: Updates to OpenAI's official Python SDK, which often surface new interface details, parameter changes, or default-path adjustments ahead of any announcement.
anthropics/claude-agent-sdk-demos
Tags: #github_orgs #extended
Text: Anthropic's official Claude Agent SDK examples repository, covering typical uses such as code agents, file editing, and toolchain orchestration.
anthropics/prompt-eng-interactive-tutorial
Tags: #github_orgs #extended
Text: Anthropic's official interactive prompt-engineering tutorial, structured after their internal training material, well suited for teams systematically shoring up prompt fundamentals.
Link: https://github.com/anthropics/prompt-eng-interactive-tutorial
OpenAI Full Fan Mode Contest: Terms & Conditions
Tags: #ai_engineering_blogs #core
Text: The rules page for OpenAI's Full Fan Mode contest, covering eligibility, judging, prizes, and more.
Link: https://openai.com/index/full-fan-mode-contest-terms-conditions
The next phase of enterprise AI
Tags: #ai_engineering_blogs #core
Text: OpenAI outlines the next phase of enterprise AI, as adoption accelerates across industries with Frontier, ChatGPT Enterprise, Codex, and company-wide AI agents.
karpathy/autoresearch
Tags: #github_orgs #extended
Text: AI agents automatically running research on single-GPU nanochat training.
anthropics/skills
Tags: #github_orgs #extended
Text: Public repository for Agent Skills.
Extrinsic Hallucinations in LLMs
Tags: #ai_engineering_blogs #core
Text: Hallucination in large language models usually refers to the model generating unfaithful, fabricated, inconsistent, or nonsensical content. As a term, hallucination has been somewhat generalized to cases when the model makes mistakes. Here, I would like to narrow down the problem of hallucination to cases where the model output is fabricated and not grounded by either the provided context or world knowledge. There are two types of hallucination:
- In-context hallucination: the model output should be consistent with the source content in context.
- Extrinsic hallucination: the model output should be grounded by the pre-training dataset. However, given the size of the pre-training dataset, it is too expensive to retrieve and identify conflicts per generation. If we consider the pre-training data corpus as a proxy for world knowledge, we essentially try to ensure the model output is factual and verifiable by external world knowledge. Equally importantly, when the model does not know about a fact, it should say so.
This post focuses on extrinsic hallucination. To avoid hallucination, LLMs need to be (1) factual and (2) acknowledge not knowing the answer when applicable.
Link: https://lilianweng.github.io/posts/2024-07-07-hallucination/
Diffusion Models for Video Generation
Tags: #ai_engineering_blogs #core
Text: Diffusion models have demonstrated strong results on image synthesis in past years. Now the research community has started working on a harder task: using them for video generation. The task itself is a superset of the image case, since an image is a video of 1 frame, and it is much more challenging because:
- It has extra requirements on temporal consistency across frames in time, which naturally demands more world knowledge to be encoded into the model.
- In comparison to text or images, it is more difficult to collect large amounts of high-quality, high-dimensional video data, let alone text-video pairs.
Required pre-read: please make sure you have read the previous blog on "What are Diffusion Models?" for image generation before continuing here.
Link: https://lilianweng.github.io/posts/2024-04-12-diffusion-video/
Thinking about High-Quality Human Data
Tags: #ai_engineering_blogs #core
Text: [Special thanks to Ian Kivlichan for many useful pointers (e.g. the 100+ year old Nature paper "Vox Populi") and nice feedback.] High-quality data is the fuel for modern deep learning model training. Most task-specific labeled data comes from human annotation, such as classification tasks or RLHF labeling (which can be constructed as a classification task) for LLM alignment training. Lots of the ML techniques in the post can help with data quality, but fundamentally human data collection involves attention to detail and careful execution. The community knows the value of high-quality data, but somehow we have this subtle impression that "everyone wants to do the model work, not the data work" (Sambasivan et al. 2021).
Link: https://lilianweng.github.io/posts/2024-02-05-human-data-quality/
Adversarial Attacks on LLMs
Tags: #ai_engineering_blogs #core
Text: The use of large language models in the real world has been strongly accelerated by the launch of ChatGPT. We (including my team at OpenAI, shoutout to them) have invested a lot of effort to build default safe behavior into the model during the alignment process (e.g. via RLHF). However, adversarial attacks or jailbreak prompts could still trigger the model to output something undesired. A large body of groundwork on adversarial attacks is on images, which, unlike text, occupy a continuous, high-dimensional space. Attacks on discrete data like text have been considered much more challenging, due to the lack of direct gradient signals. My past post on Controllable Text Generation is quite relevant to this topic, as attacking LLMs is essentially controlling the model to output a certain type of (unsafe) content.
Link: https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/
LLM Powered Autonomous Agents
Tags: #ai_engineering_blogs #core
Text: Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concept demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.
Agent System Overview: in an LLM-powered autonomous agent system, the LLM functions as the agent's brain, complemented by several key components:
- Planning. Subgoal and decomposition: the agent breaks down large tasks into smaller, manageable subgoals, enabling efficient handling of complex tasks. Reflection and refinement: the agent can do self-criticism and self-reflection over past actions, learn from mistakes and refine them for future steps, thereby improving the quality of final results.
- Memory. Short-term memory: I would consider all the in-context learning (see Prompt Engineering) as utilizing the short-term memory of the model to learn. Long-term memory: this provides the agent with the capability to retain and recall (infinite) information over extended periods, often by leveraging an external vector store and fast retrieval.
- Tool use. The agent learns to call external APIs for extra information that is missing from the model weights (often hard to change after pre-training), including current information, code execution capability, access to proprietary information sources and more.
[Figure: overview of an LLM-powered autonomous agent system.]
Component One, Planning: a complicated task usually involves many steps. An agent needs to know what they are and plan ahead.
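The planning / memory / tool-use decomposition above maps naturally onto a simple control loop. Here is a minimal sketch under stated assumptions: `llm` is a placeholder for any chat-completion call, and the JSON tool-calling protocol and `TOOLS` registry are invented for illustration, not any particular framework's API. Short-term memory is just the growing message list; a long-term memory would add a vector-store lookup before each step.

```python
import json

def llm(messages: list[dict]) -> str:
    """Placeholder for a real model call (e.g., an HTTP request to an API)."""
    raise NotImplementedError

# Toy tool registry; eval with empty builtins is for illustration only.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def run_agent(task: str, max_steps: int = 8) -> str:
    # Short-term memory: the in-context message history.
    messages = [
        {"role": "system", "content":
         "Plan step by step. To use a tool, reply with JSON "
         '{"tool": name, "input": ...}; otherwise reply {"answer": ...}.'},
        {"role": "user", "content": task},
    ]
    for _ in range(max_steps):
        reply = json.loads(llm(messages))
        if "answer" in reply:          # the agent decided it is done
            return reply["answer"]
        # Tool use: call the external function, feed the observation back
        # so the next planning step can reflect on it.
        observation = TOOLS[reply["tool"]](reply["input"])
        messages.append({"role": "assistant", "content": json.dumps(reply)})
        messages.append({"role": "user", "content": f"Observation: {observation}"})
    return "Gave up after max_steps."
```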