Today's Digest

GitHub karpathy:A positive developer community for builders and agents.

GitHub anthropics:A collection of notebooks/recipes showcasing some fun and effective ways of using Claude.

GitHub openai:Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.

GitHub anthropics:Official, Anthropic-managed directory of high quality Claude Code Plugins.

GitHub anthropics:Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster by executing routine tasks, explaining complex code, and handling git workflows - all through natural language commands.

Summary: A lightweight, powerful framework for multi-agent workflows | Take: For openai/openai-agents-python, what matters is whether it improves multi-step collaboration, memory management, and reliable delivery, not just demo performance.

Summary: Use Codex from Claude Code to review code or delegate tasks. | Take: The core question for openai/codex-plugin-cc is not novelty but whether it improves engineering efficiency, deployment stability, or developer workflows.

Summary: Anthropic's educational courses | Take: anthropics/courses is better judged by its practical adoption value than by whether it generates new discussion buzz.

Summary: Lightweight coding agent that runs in your terminal | Take: openai/codex is better judged by its practical adoption value than by whether it generates new discussion buzz.

Summary: LLM101n: Let's build a Storyteller | Take: karpathy/LLM101n is better judged by its practical adoption value than by whether it generates new discussion buzz.

karpathy/KarpathyTalk

Source: GitHub karpathy

Tags: #github_orgs #extended

Author:

Original: A positive developer community for builders and agents.

Link: https://github.com/karpathy/KarpathyTalk

Take: karpathy/KarpathyTalk is better judged by its practical adoption value than by whether it generates new discussion buzz.

anthropics/claude-cookbooks

Source: GitHub anthropics

Tags: #github_orgs #extended

Author:

Original: A collection of notebooks/recipes showcasing some fun and effective ways of using Claude.

Link: https://github.com/anthropics/claude-cookbooks

Take: anthropics/claude-cookbooks is better judged by its practical adoption value than by whether it generates new discussion buzz.

openai/evals

Source: GitHub openai

Tags: #github_orgs #extended

Author:

Original: Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.

Link: https://github.com/openai/evals

Take: Beyond surface specs, what matters for openai/evals is whether it delivers real improvements in reasoning quality, retrieval effectiveness, or usability.
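To make the framing concrete, here is a minimal sketch of what the simplest kind of registry benchmark checks. This is illustrative only, not the openai/evals API: an exact-match eval compares each model completion against a set of acceptable reference answers and reports accuracy.

```python
# Illustrative sketch (not the openai/evals API): the simplest registry
# benchmark is an exact-match eval -- compare a model completion against
# one or more "ideal" answers and report the fraction that match.
from dataclasses import dataclass

@dataclass
class Sample:
    prompt: str
    ideal: list[str]    # acceptable reference answers
    completion: str     # what the model actually returned

def exact_match_accuracy(samples: list[Sample]) -> float:
    """Fraction of samples whose completion matches an ideal answer."""
    hits = sum(s.completion.strip() in s.ideal for s in samples)
    return hits / len(samples)

samples = [
    Sample("2+2=", ["4"], "4"),
    Sample("Capital of France?", ["Paris"], "Lyon"),
]
print(exact_match_accuracy(samples))  # 0.5
```

Real registry evals layer model-graded and fuzzy-match variants on top of this basic pattern, but the accuracy-over-samples loop is the core contract.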

anthropics/claude-plugins-official

Source: GitHub anthropics

Tags: #github_orgs #extended

Author:

Original: Official, Anthropic-managed directory of high quality Claude Code Plugins.

Link: https://github.com/anthropics/claude-plugins-official

Take: The core question for anthropics/claude-plugins-official is not novelty but whether it improves engineering efficiency, deployment stability, or developer workflows.

anthropics/claude-code

Source: GitHub anthropics

Tags: #github_orgs #extended

Author:

Original: Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster by executing routine tasks, explaining complex code, and handling git workflows - all through natural language commands.

Link: https://github.com/anthropics/claude-code

Take: For anthropics/claude-code, what matters is whether it improves multi-step collaboration, memory management, and reliable delivery, not just demo performance.

openai/openai-agents-python

Source: GitHub openai

Tags: #github_orgs #extended

Author:

Original: A lightweight, powerful framework for multi-agent workflows

Link: https://github.com/openai/openai-agents-python

Take: For openai/openai-agents-python, what matters is whether it improves multi-step collaboration, memory management, and reliable delivery, not just demo performance.

openai/codex-plugin-cc

Source: GitHub openai

Tags: #github_orgs #extended

Author:

Original: Use Codex from Claude Code to review code or delegate tasks.

Link: https://github.com/openai/codex-plugin-cc

Take: The core question for openai/codex-plugin-cc is not novelty but whether it improves engineering efficiency, deployment stability, or developer workflows.

anthropics/courses

Source: GitHub anthropics

Tags: #github_orgs #extended

Author:

Original: Anthropic's educational courses

Link: https://github.com/anthropics/courses

Take: anthropics/courses is better judged by its practical adoption value than by whether it generates new discussion buzz.

openai/codex

Source: GitHub openai

Tags: #github_orgs #extended

Author:

Original: Lightweight coding agent that runs in your terminal

Link: https://github.com/openai/codex

Take: openai/codex is better judged by its practical adoption value than by whether it generates new discussion buzz.

karpathy/LLM101n

Source: GitHub karpathy

Tags: #github_orgs #extended

Author:

Original: LLM101n: Let's build a Storyteller

Link: https://github.com/karpathy/LLM101n

Take: karpathy/LLM101n is better judged by its practical adoption value than by whether it generates new discussion buzz.

karpathy/nanoGPT

Source: GitHub karpathy

Tags: #github_orgs #extended

Author:

Original: The simplest, fastest repository for training/finetuning medium-sized GPTs.

Link: https://github.com/karpathy/nanoGPT

Take: Beyond surface specs, what matters for karpathy/nanoGPT is whether it delivers real improvements in reasoning quality, retrieval effectiveness, or usability.
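Part of what makes nanoGPT approachable is how little machinery sits between raw text and training data. A minimal sketch of the character-level tokenization its shakespeare_char dataset uses: every distinct character in the corpus becomes a token id.

```python
# Sketch of nanoGPT-style character-level tokenization: build a vocabulary
# of unique characters, then map text <-> integer ids for training.
text = "hello world"

chars = sorted(set(text))                     # vocabulary: unique characters
stoi = {ch: i for i, ch in enumerate(chars)}  # char -> id
itos = {i: ch for ch, i in stoi.items()}      # id -> char

def encode(s: str) -> list[int]:
    return [stoi[c] for c in s]

def decode(ids: list[int]) -> str:
    return "".join(itos[i] for i in ids)

assert decode(encode(text)) == text           # round-trips losslessly
print(len(chars))  # 8 distinct characters in "hello world"
```

The real repo then packs the encoded ids into binary train/val files; the encode/decode pair above is the whole tokenizer.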

karpathy/rendergit

Source: GitHub karpathy

Tags: #github_orgs #extended

Author:

Original: Render any git repo into a single static HTML page for humans or LLMs

Link: https://github.com/karpathy/rendergit

Take: The core question for karpathy/rendergit is not novelty but whether it improves engineering efficiency, deployment stability, or developer workflows.
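The underlying idea is simple enough to sketch. This is an illustrative stdlib-only version of the concept, not rendergit's actual code: walk a checkout and concatenate every file into one static HTML page so a human or an LLM can read the whole repo as a single document.

```python
# Illustrative sketch of the rendergit idea (not its actual implementation):
# flatten a directory tree into one static HTML page, one <pre> per file.
import html
from pathlib import Path

def render_repo(root: str) -> str:
    parts = ["<html><body>"]
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            rel = path.relative_to(root)
            parts.append(f"<h2>{html.escape(str(rel))}</h2>")
            parts.append(f"<pre>{html.escape(path.read_text(errors='replace'))}</pre>")
    parts.append("</body></html>")
    return "\n".join(parts)
```

Escaping each file body with `html.escape` keeps source code from being interpreted as markup, which is the one subtlety in an otherwise trivial flattening.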

Introducing the Child Safety Blueprint

Source: OpenAI Blog

Tags: #ai_engineering_blogs #core

Author:

Original: Discover OpenAI’s Child Safety Blueprint—a roadmap for building AI responsibly with safeguards, age-appropriate design, and collaboration to protect and empower young people online.

Link: https://openai.com/index/introducing-child-safety-blueprint

Take: Introducing the Child Safety Blueprint is better judged by its practical adoption value than by whether it generates new discussion buzz.

Someone recently suggested to me that the reason the OpenClaw moment was so big is because it's the first time a large group of non-technical people experienced the latest agentic models.

Source: X Andrej Karpathy

Tags: #x_profiles #extended

Author:

Original: Someone recently suggested to me that the reason the OpenClaw moment was so big is because it's the first time a large group of non-technical people (who otherwise only knew AI as synonymous with ChatGPT as a website) experienced the latest agentic models.

Link: https://twitter.com/karpathy/status/2042341482531864741

Take: What really matters about this thread is whether it shifts teams' model choices, performance expectations, and product experience.

Judging by my tl there is a growing gap in understanding of AI capability. The first issue I think is around recency and tier of use.

Source: X Andrej Karpathy

Tags: #x_profiles #extended

Author:

Original: Judging by my tl there is a growing gap in understanding of AI capability. The first issue I think is around recency and tier of use. I think a lot of people tried the free tier of ChatGPT somewhere last year and allowed it to inform their views on AI a little too much. This is a group of reactions laughing at various quirks of the models, hallucinations, etc. Yes I also saw the viral videos of OpenAI's Advanced Voice mode fumbling simple queries like "should I drive or walk to the carwash". The thing is that these free and old/deprecated models don't reflect the capability in the latest round of state of the art agentic models of this year, especially OpenAI Codex and Claude Code.

But that brings me to the second issue. Even if people paid $200/month to use the state of the art models, a lot of the capabilities are relatively "peaky" in highly technical areas. Typical queries around search, writing, advice, etc. are *not* the domain that has made the most noticeable and dramatic strides in capability. Partly, this is due to the technical details of reinforcement learning and its use of verifiable rewards. But partly, it's also because these use cases are not sufficiently prioritized by the companies in their hillclimbing because they don't lead to as much value. The goldmines are elsewhere, and the focus comes along.

So that brings me to the second group of people, who *both* 1) pay for and use the state of the art frontier agentic models (OpenAI Codex, Claude Code) and 2) do so professionally in technical domains like programming, math and research. This group of people is subject to the highest amount of "AI Psychosis" because the recent improvements in these domains as of this year have been nothing short of staggering. When you hand a computer terminal to one of these models, you can now watch them melt programming problems that you'd normally expect to take days/weeks of work. It's this second group of people that assigns a much greater gravity to the capabilities, their slope, and various cyber-related repercussions.

TLDR the people in these two groups are speaking past each other. It really is simultaneously the case that OpenAI's free and I think slightly orphaned "Advanced Voice Mode" will fumble the dumbest questions in your Instagram's reels and *at the same time*, OpenAI's highest-tier and paid Codex model will go off for 1 hour to coherently restructure an entire code base, or find and exploit vulnerabilities in computer systems. This part really works and has made dramatic strides because of 2 properties: 1) these domains offer explicit reward functions that are verifiable, meaning they are easily amenable to reinforcement learning training (e.g. unit tests passed yes or no, in contrast to writing, which is much harder to explicitly judge), but also 2) they are a lot more valuable in b2b settings, meaning that the biggest fraction of the team is focused on improving them. So here we are.

Quoted tweet — staysaasy (@staysaasy): "The degree to which you are awed by AI is perfectly correlated with how much you use AI to code." https://nitter.net/staysaasy/status/2042063369432183238#m

Link: https://twitter.com/karpathy/status/2042334451611693415

Take: The thread's own explanation is the point: capability judgments depend on which tier and generation of model someone actually uses, so watch whether teams' evaluations keep pace with this year's frontier agentic models rather than last year's free tiers.
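The thread's point about verifiable rewards can be made concrete with a toy sketch. This is illustrative, not any lab's actual training code: run a candidate solution against unit tests and score pass/fail, a binary signal reinforcement learning can optimize directly, unlike subjective judgments of writing quality.

```python
# Toy "verifiable reward" in the sense the thread describes: execute a
# candidate solution against unit tests and return 1.0 only if all pass.
def reward(candidate_src: str, tests: list[tuple[int, int]]) -> float:
    """1.0 if the candidate's `solve` passes every test, else 0.0."""
    namespace: dict = {}
    try:
        exec(candidate_src, namespace)  # load the candidate's solve()
        solve = namespace["solve"]
        ok = all(solve(x) == y for x, y in tests)
    except Exception:
        ok = False                      # crashes score zero, too
    return 1.0 if ok else 0.0

tests = [(2, 4), (3, 9)]                # expect solve(x) == x*x
print(reward("def solve(x): return x * x", tests))  # 1.0
print(reward("def solve(x): return x + x", tests))  # 0.0
```

This is exactly why coding and math have pulled ahead: the grader is cheap, objective, and automatic, where "did this essay improve?" is none of those.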

State of Open Source on Hugging Face: Spring 2026

Source: Hugging Face Blog

Tags: #ai_engineering_blogs #core

Author:

Original: State of Open Source on Hugging Face: Spring 2026

Link: https://huggingface.co/blog/huggingface/state-of-os-hf-spring-2026

Take: State of Open Source on Hugging Face: Spring 2026 is better judged by its practical adoption value than by whether it generates new discussion buzz.

Friend Bubbles: Enhancing Social Discovery on Facebook Reels

Source: Meta Engineering

Tags: #engineering_ai_infra_blogs #extended

Author:

Original: Friend bubbles in Facebook Reels highlight Reels your friends have liked or reacted to, helping you discover new content and making it easier to connect over shared interests. This article explains the technical architecture behind friend bubbles, including how machine learning estimates relationship strength and ranks content your friends have interacted with to create more…

Link: https://engineering.fb.com/2026/03/18/ml-applications/friend-bubbles-enhancing-social-discovery-on-facebook-reels/

Take: Friend Bubbles: Enhancing Social Discovery on Facebook Reels is better judged by its practical adoption value than by whether it generates new discussion buzz.
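The ranking idea the post describes can be sketched in a few lines. This is a hedged illustration only; the feature names and weights below are hypothetical, not Meta's actual model: weight each Reel by the estimated relationship strength of the friends who reacted to it, then surface the highest-scoring ones.

```python
# Hypothetical sketch of friend-weighted ranking: score each reel by the
# summed relationship strength of the friends who reacted to it.
def rank_reels(reels, strength):
    """Return reel ids sorted by friend-weighted score, best first."""
    def score(reel):
        _, reacting_friends = reel
        return sum(strength.get(f, 0.0) for f in reacting_friends)
    return [reel_id for reel_id, _ in sorted(reels, key=score, reverse=True)]

strength = {"alice": 0.9, "bob": 0.3, "carol": 0.6}  # hypothetical scores
reels = [("r1", ["bob"]), ("r2", ["alice", "carol"]), ("r3", ["carol"])]
print(rank_reels(reels, strength))  # ['r2', 'r3', 'r1']
```

A production system would learn the strength scores and blend many more signals, but the core shape, per-friend affinity feeding a content ranker, is what the article is describing.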

GPT 5.4 is a big step for Codex

Source: Interconnects AI

Tags: #hidden_high_value

Author:

Original: On evaluating and understanding the frontier of agents, and why I still turn to Claude.

Link: https://www.interconnects.ai/p/gpt-54-is-a-big-step-for-codex

Take: GPT 5.4 is a big step for Codex is better judged by its practical adoption value than by whether it generates new discussion buzz.

Build a Domain-Specific Embedding Model in Under a Day

Source: Hugging Face Blog

Tags: #ai_engineering_blogs #core

Author:

Original: Build a Domain-Specific Embedding Model in Under a Day

Link: https://huggingface.co/blog/nvidia/domain-specific-embedding-finetune

Take: Beyond surface specs, what matters for Build a Domain-Specific Embedding Model in Under a Day is whether it delivers real improvements in reasoning quality, retrieval effectiveness, or usability.

Lossy self-improvement

Source: Interconnects AI

Tags: #hidden_high_value

Author:

Original: The case for why self-improvement is real but it doesn't lead to fast takeoff.

Link: https://www.interconnects.ai/p/lossy-self-improvement

Take: Lossy self-improvement is better judged by its practical adoption value than by whether it generates new discussion buzz.