The April 30 and May 7 Codex CLI releases expand the terminal agent with persisted goals, `codex update`, Vim composer editing, richer status-line controls, and more structured plugin and hook workflows. Together, they push Codex further toward a durable, team-friendly command-line environment for long-running coding work.
OpenAI has added a Chrome extension for Codex that lets the agent operate inside a real signed-in browser session, which makes authenticated workflows like Gmail, Salesforce, LinkedIn, and internal admin tools practical instead of fragile. The update also introduces host-based approval flows, allowlist and blocklist controls, and additional browser-safety guardrails so teams can decide when Codex may touch a site and when it must ask first.
Anthropic's Applied AI team released a free 24-minute workshop on prompting Claude more effectively. It focuses on practical prompt structure, context, and workflow habits.
colss is a small open-source Python library for evaluating math-style string expressions over NumPy arrays, with support for logical operators, arithmetic, ternaries, and conditional forms. The repo also shows compatibility with Pandas, Polars, and standard Python arrays, and the API is aimed at reducing verbosity for longer formulas without requiring manual variable registration.
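The idea of evaluating a string formula directly over arrays can be sketched in plain NumPy. The `eval_expr` helper below is a hypothetical stand-in for what a library like colss provides, not its actual API; it just shows how a ternary-style conditional expression maps onto array operations.

```python
import numpy as np

def eval_expr(expr: str, **arrays):
    """Evaluate a math-style string expression over NumPy arrays.

    Hypothetical illustration only -- this is NOT colss's real API.
    The expression is evaluated with builtins disabled and only the
    supplied arrays (plus np) in scope.
    """
    namespace = {"np": np, **arrays}
    return eval(expr, {"__builtins__": {}}, namespace)

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# A conditional form: where a > 1, take a + b, otherwise take b.
result = eval_expr("np.where(a > 1, a + b, b)", a=a, b=b)
# result -> array([4., 7., 9.])
```

A real library in this space would typically add its own parser so users can write infix ternaries (e.g. `a > 1 ? a + b : b`) without registering variables by hand, which is the verbosity reduction the blurb describes.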
Dolly is a workplace AI product that creates a per-employee “digital twin,” connects to each person’s tools, learns their communication style, and responds to messages on their behalf. The team is opening access to its first 20 organizations and pitching the product as a way to reclaim time lost to inbox and Slack churn.
Released May 8, SuperSplat v2.25.1 keeps the browser-based 3D Gaussian splat editor on a maintenance track. The patch updates npm dependencies and standardizes splat orientation from DataTable.transform, which should reduce format-specific drift.
The benchmark looks broadly sane: Qwen3.6-27B is running across two V100 32GB cards in llama.cpp tensor-parallel mode with flash attention and an unquantized KV cache. The big story is not a misconfiguration, but the expected throughput drop as prefill depth climbs into long-context territory.
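For reference, a setup like the one described might be launched with something along these lines. This is a sketch, not the benchmark's actual command: the model filename and context size are assumptions, and flag spellings can vary between llama.cpp versions (newer builds take a value for `--flash-attn`, for example).

```shell
# Hypothetical llama.cpp launch approximating the described setup:
# two-GPU tensor split, flash attention on, f16 (unquantized) KV cache.
llama-server \
  --model ./qwen-27b-f16.gguf \
  --n-gpu-layers 99 \
  --tensor-split 1,1 \
  --flash-attn \
  --cache-type-k f16 --cache-type-v f16 \
  --ctx-size 32768
```

With a config like this, prompt-processing throughput falls as the prefill grows, since attention cost over the existing KV cache scales with context length; that is the expected drop the blurb refers to, not a misconfiguration.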