Issue 75 | Building a safe, effective sandbox to enable Codex on Windows
Today's Digest
OpenAI Blog: Learn how OpenAI built a secure sandbox for Codex on Windows, enabling safe, efficient coding agents with controlled file access and network restrictions.
GitHub anthropics: A suite of plugins for legal workflows
GitHub openai: openai/openai-builder-lab, a recently updated repository.
GitHub karpathy: LLM training in simple, raw C/CUDA
OpenAI Blog: See how finance teams can use Codex to build MBRs, reporting packs, variance bridges, model checks, and planning scenarios from real work inputs.
Summary + Opinion: anthropics/financial-services recently updated r…|Opinion: the heart of anthropics/financial-services is not novelty, but whether it can…
Summary + Opinion: Lightweight Android capture developer tools for…|Opinion: the heart of openai/snap-o is not novelty, but whether it improves engineering efficiency, deployment stability, or developer…
Summary + Opinion: VQVAEs, GumbelSoftmaxes and friends|Opinion: karpathy/deep-vector-quantization is better judged by its practical adoption value than…
Summary + Opinion: Transformers: State-of-the-art Machine Learning…|Opinion: karpathy/transformers is better judged by its practical adoption value, rather than just whether it…
Summary + Opinion: Video+code lecture on building nanoGPT from scra…|Opinion: karpathy/build-nanogpt is better judged by its practical adoption value, rather than just…
Building a safe, effective sandbox to enable Codex on Windows
Tags: #ai_engineering_blogs #core
Author:
Original: Learn how OpenAI built a secure sandbox for Codex on Windows, enabling safe, efficient coding agents with controlled file access and network restrictions.
anthropics/claude-for-legal
Tags: #github_orgs #extended
Author:
Original: A suite of plugins for legal workflows
openai/openai-builder-lab
Tags: #github_orgs #extended
Author:
Original: Recently updated repository.
karpathy/llm.c
Tags: #github_orgs #extended
Author:
Original: LLM training in simple, raw C/CUDA
How finance teams use Codex
Tags: #ai_engineering_blogs #core
Author:
Original: See how finance teams can use Codex to build MBRs, reporting packs, variance bridges, model checks, and planning scenarios from real work inputs.
anthropics/financial-services
Tags: #github_orgs #extended
Author:
Original: Recently updated repository.
openai/snap-o
Tags: #github_orgs #extended
Author:
Original: Lightweight Android capture developer tools for macOS
karpathy/deep-vector-quantization
Tags: #github_orgs #extended
Author:
Original: VQVAEs, GumbelSoftmaxes and friends
karpathy/transformers
Tags: #github_orgs #extended
Author:
Original: Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
karpathy/build-nanogpt
Tags: #github_orgs #extended
Author:
Original: Video+code lecture on building nanoGPT from scratch
How NVIDIA engineers and researchers build with Codex
Tags: #ai_engineering_blogs #core
Author:
Original: Teams use Codex with GPT-5.5 to ship production systems and turn research ideas into runnable experiments.
What Parameter Golf taught us about AI-assisted research
Tags: #ai_engineering_blogs #core
Author:
Original: Parameter Golf brought together 1,000+ participants and 2,000+ submissions to explore AI-assisted machine learning research, coding agents, quantization, and novel model design under strict constraints.
AutoScout24 scales engineering with AI-powered workflows
Tags: #ai_engineering_blogs #core
Author:
Original: Learn how AutoScout24 Group uses Codex and ChatGPT to speed development cycles, improve code quality, and expand AI adoption.
This works really well btw, at the end of your query ask your LLM to "structure your response as HTML", then view the generated file in your browser
Tags: #x_profiles #extended
Author:
Original: This works really well btw: at the end of your query, ask your LLM to "structure your response as HTML", then view the generated file in your browser. I've also had some success asking the LLM to present its output as slideshows, etc.

More generally, imo audio is the human-preferred input to AIs, but vision (images/animations/video) is the preferred output from them. Around a third of our brains are a massively parallel processor dedicated to vision; it is the 10-lane superhighway of information into the brain. As AI improves, I think we'll see a progression that takes advantage: 1) raw text (hard/effortful to read), 2) markdown (bold, italic, headings, tables, a bit easier on the eyes). nitter.net/zan2434/status/2046982…

There are also improvements necessary and pending at the input. Neither audio nor text nor video alone is enough; e.g. I feel a need to point/gesture at things on the screen, similar to all the things you would do with a person physically next to you and your computer screen.

TLDR: The input/output mind meld between humans and AIs is ongoing, and there is a lot of work to do and significant progress to be made, well before jumping all the way into Neuralink-esque BCIs and all that. For what it's worth, at the current stage, a hot tip: try asking for HTML.

Thariq (@trq212) x.com/i/article/205279610060…
https://nitter.net/trq212/status/2052809885763747935#m
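The tip above is easy to script. Below is a minimal sketch: append the HTML instruction to a query, then write whatever reply the model returns to a file and open it in the default browser. The instruction wording, file name, and helper names are illustrative assumptions, not part of the original post; the actual model call is left to whatever client you use.

```python
import webbrowser
from pathlib import Path

# Assumed instruction text, paraphrasing the tip in the post above.
HTML_SUFFIX = "\n\nStructure your response as HTML."

def wrap_query(query: str) -> str:
    """Append the HTML-output instruction to a user query."""
    return query + HTML_SUFFIX

def save_and_open(html_reply: str, path: str = "response.html") -> str:
    """Write the model's HTML reply to disk and open it in the browser."""
    out = Path(path)
    out.write_text(html_reply, encoding="utf-8")
    webbrowser.open(out.resolve().as_uri())  # view the generated file
    return str(out)

# Usage: send wrap_query("Compare these two designs for me") to your
# chat model of choice, then pass its reply to save_and_open().
```

The same pattern extends to the post's slideshow idea: swap the suffix for something like "present your output as a self-contained HTML slideshow".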
How ChatGPT adoption broadened in early 2026
Tags: #ai_engineering_blogs #core
Author:
Original: ChatGPT adoption surged in Q1 2026, with fastest growth among users over 35 and more balanced gender usage, signaling broader mainstream AI adoption.
Greenhouse and Icehouse Earth
Tags: #research_community #extended
Author:
Original: Article URL: https://en.wikipedia.org/wiki/Greenhouse_and_icehouse_Earth Comments URL: https://news.ycombinator.com/item?id=48130818 Points: 1 Comments: 1
Link: https://en.wikipedia.org/wiki/Greenhouse_and_icehouse_Earth
Show HN: Building a universal device experience [video]
Tags: #research_community #extended
Author:
Original: I have been working on this project for a few years now, and the end goal is to make a ubiquitous, natural user experience for interacting with machines. The long-term goal is to build a fully agentic experience that drives the UI for you (generative UI) and displays it on whatever device you are on. Still a long way to go, but I am laying the foundation for it. Happy to address any questions here. Comments URL: https://news.ycombinator.com/item?id=48130816 Points: 1 Comments: 0
Nginx Rift
Tags: #research_community #extended
Author:
Original: Article URL: https://depthfirst.com/nginx-rift Comments URL: https://news.ycombinator.com/item?id=48130811 Points: 1 Comments: 1
Ask HN: Efflora_run DAG Regression Test
Tags: #research_community #extended
Author:
Original: efflora_run DAG regression test 2026-05-14 11:30:24 Comments URL: https://news.ycombinator.com/item?id=48130803 Points: 1 Comments: 0
Vibe code your next presentation
Tags: #research_community #extended
Author:
Original: Article URL: https://www.variant.art/ Comments URL: https://news.ycombinator.com/item?id=48130779 Points: 1 Comments: 1