Issue 56 | openai/privacy-filter
Today's Digest
GitHub openai:A lightweight, powerful framework for multi-agent workflows and voice agents
GitHub openai:Official JavaScript/TypeScript library for the OpenAI API
GitHub anthropics:Open source repository of plugins primarily intended for knowledge workers to use in Claude Cowork
GitHub karpathy:Inference Llama 2 in one file of pure C
GitHub openai:OpenAI Privacy Filter
Summary + Opinion: LLM Council works together to answer your hardes…|Opinion: karpathy/llm-council is better judged by its practical adoption value than by whether it…
Summary + Opinion: A tiny scalar-valued autograd engine and a neura…|Opinion: for karpathy/micrograd, the real question is whether it will make it into teams' default toolchains,…
Summary + Opinion: Reference and an example for the Bluetooth API f…|Opinion: if anthropics/claude-desktop-buddy can cut integration costs and mainten…
Summary + Opinion: Neural Networks: Zero to Hero|Opinion: karpathy/nn-zero-to-hero is better judged by its practical adoption value than by whether it…
Summary + Opinion: Introducing GPT-5.5, our smartest model yet—fast…|Opinion: the point of Introducing GPT-5.5 is not novelty, but whether it improves engineering efficiency and deployment stab…
openai/openai-agents-js
Tags: #github_orgs #extended
Author:
Original: A lightweight, powerful framework for multi-agent workflows and voice agents
openai/openai-node
Tags: #github_orgs #extended
Author:
Original: Official JavaScript/TypeScript library for the OpenAI API
anthropics/knowledge-work-plugins
Tags: #github_orgs #extended
Author:
Original: Open source repository of plugins primarily intended for knowledge workers to use in Claude Cowork
karpathy/llama2.c
Tags: #github_orgs #extended
Author:
Original: Inference Llama 2 in one file of pure C
openai/privacy-filter
karpathy/llm-council
Tags: #github_orgs #extended
Author:
Original: LLM Council works together to answer your hardest questions
karpathy/micrograd
Tags: #github_orgs #extended
Author:
Original: A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API
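The "tiny scalar-valued autograd engine" described above can be illustrated with a minimal self-contained sketch in the same spirit. This is not micrograd's actual code, only a toy showing the core idea: each scalar `Value` node records a backward closure, and `backward()` topologically orders the graph and propagates gradients in reverse.

```python
class Value:
    """Minimal scalar autograd node, illustrative of the micrograd idea."""
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None  # closure that propagates grad to children
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically order the graph, then apply chain rule in reverse.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

a = Value(2.0)
b = Value(3.0)
c = a * b + a   # c = a*b + a = 8
c.backward()
# dc/da = b + 1 = 4.0, dc/db = a = 2.0
```

The real library layers a small neural-net API on top of exactly this kind of node.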
anthropics/claude-desktop-buddy
Tags: #github_orgs #extended
Author:
Original: Reference and an example for the Bluetooth API for makers in Claude Cowork Claude Code Desktop
karpathy/nn-zero-to-hero
Tags: #github_orgs #extended
Author:
Original: Neural Networks: Zero to Hero
Introducing GPT-5.5
Tags: #ai_engineering_blogs #core
Author:
Original: Introducing GPT-5.5, our smartest model yet—faster, more capable, and built for complex tasks like coding, research, and data analysis across tools.
GPT-5.5 System Card
Tags: #ai_engineering_blogs #core
Author:
Original: GPT-5.5 System Card
Automations
Tags: #ai_engineering_blogs #core
Author:
Original: Learn how to automate tasks in Codex using schedules and triggers to create reports, summaries, and recurring workflows without manual effort.
Top 10 uses for Codex at work
Tags: #ai_engineering_blogs #core
Author:
Original: Explore 10 practical Codex use cases to automate tasks, create deliverables, and turn real inputs into outputs across tools, files, and workflows.
Link: https://openai.com/academy/top-10-use-cases-codex-for-work
Plugins and skills
Tags: #ai_engineering_blogs #core
Author:
Original: Learn how to use Codex plugins and skills to connect tools, access data, and follow repeatable workflows to automate tasks and improve results.
Working with Codex
Tags: #ai_engineering_blogs #core
Author:
Original: Learn how to set up your Codex workspace, create threads and projects, manage files, and start completing tasks with step-by-step guidance.
huggingface/peft
Tags: #github_orgs #extended
Author:
Original: PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
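PEFT's best-known method, LoRA, freezes the pretrained weight matrix and trains only a low-rank additive update. The numpy sketch below illustrates that idea only; it is not the `peft` library's API, and all names in it are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 16, 16, 4, 8

W = rng.standard_normal((d_in, d_out))      # frozen pretrained weight
A = rng.standard_normal((d_in, r)) * 0.01   # trainable low-rank factor (down-projection)
B = np.zeros((r, d_out))                    # zero-init so the adapter starts as a no-op

def adapted_forward(x):
    # LoRA: y = x W + (alpha / r) * x A B, where only A and B are trained.
    return x @ W + (alpha / r) * (x @ A @ B)

x = rng.standard_normal(d_in)
# Before training, the zero-initialized B makes the adapter a no-op.
assert np.allclose(adapted_forward(x), x @ W)
# Trainable parameters: d_in*r + r*d_out = 128, vs d_in*d_out = 256 frozen.
```

The payoff is the parameter count: the adapter trains far fewer weights than full fine-tuning, and the update can be merged into `W` or kept separate.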
SpaceX Test Like You Fly Documentary 4K [video]
Tags: #research_community #extended
Author:
Original: Article URL: https://www.youtube.com/watch?v=PHWvbbQGb5g Comments URL: https://news.ycombinator.com/item?id=47898884 Points: 1 Comments: 0
langchain-ai/rag-from-scratch
Tags: #github_orgs #extended
Author:
Original: langchain-ai/rag-from-scratch, a recently updated repository.
About the security content of iOS 26.4.2 and iPadOS 26.4.2
Tags: #research_community #extended
Author:
Original: Article URL: https://support.apple.com/en-us/127002 Comments URL: https://news.ycombinator.com/item?id=47898850 Points: 2 Comments: 0
Show HN: ShadowPEFT Centralized and Detachable Parameter-Efficient Fine-Tuning
Tags: #research_community #extended
Author:
Original: Unlike LoRA and its variants, which inject trainable parameters directly into the Transformer's weights and thus require tight coupling with the backbone, ShadowPEFT enhances the frozen large base model by adding a lightweight, centralized, pretrainable, and detachable shadow network. This shadow network operates in parallel with the base model, delivering learned corrections to each decoder layer. Because the shadow module is architecturally decoupled from the backbone, it can be independently trained, stored, and deployed, which benefits edge-computing and edge-cloud collaborative scenarios. Comments URL: https://news.ycombinator.com/item?id=47898816 Points: 4 Comments: 1
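The architecture described above can be sketched in a few lines. This is a hypothetical illustration based only on the Show HN description, not ShadowPEFT's actual code: the "decoder layers" here are plain linear maps rather than Transformer blocks, and the names `forward`, `base_weights`, and `shadow_weights` are invented. The point it shows is the decoupling: a frozen base stack runs unchanged, while a parallel zero-initialized shadow module adds a learned correction to each layer's output and can be detached without altering base behavior.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_layers = 8, 3

# Frozen base model: a stack of fixed "decoder layers" (here, plain linear maps).
base_weights = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(n_layers)]

# Detachable shadow network: one small trainable correction matrix per layer,
# zero-initialized so the untrained shadow is a no-op.
shadow_weights = [np.zeros((d, d)) for _ in range(n_layers)]

def forward(x, shadow=None):
    h = x
    for i, W in enumerate(base_weights):
        h = np.tanh(h @ W)            # frozen base computation, never modified
        if shadow is not None:
            h = h + h @ shadow[i]     # shadow delivers a correction to this layer
    return h

x = rng.standard_normal(d)
base_out = forward(x)                  # shadow detached: pure base model
shadow_out = forward(x, shadow_weights)
# Detaching the zero-initialized shadow changes nothing until it is trained,
# and the shadow parameters can be trained/stored/deployed separately.
assert np.allclose(base_out, shadow_out)
```

Because only `shadow_weights` would be trained, the module could be shipped to an edge device independently of the frozen backbone, which is the deployment benefit the description claims.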