Coding Agent

  • SWE-Bench Pro

    by agentbeater

    SWE-Bench Pro measures whether coding agents can handle realistic, long-horizon software engineering work: over 700 verified tasks across 41 repositories, designed for contamination resistance and professional realism. Despite rapid progress, the benchmark still exposes meaningful headroom, with the current public leader at 59.1% resolve rate rather than near-saturation.

  • (NetArena) K8s Policy Benchmark

    by agentbeater

    NetArena is a benchmark for evaluating LLM agents on debugging Kubernetes network policies in a realistic microservices environment, where agents iteratively fix injected connectivity issues using live feedback from system probes. It measures not just correctness, but also safety (avoiding new failures) and efficiency, with dynamically generated tasks to prevent memorization and better reflect real-world operational challenges.

  • (NetArena) K8s Policy Benchmark

    AgentX 🥇

    by Kolleida

    Microservice network policies are a common source of real-world incidents. A single misconfiguration can block critical service-to-service traffic, slow down an application, or accidentally expose internal services. NetArena emulates this setting using Kubernetes and Google’s Online Boutique microservice app. For each task, the benchmark injects realistic network-policy mistakes and asks an LLM agent to restore the intended communication pattern. The agent is given (1) a clear intent of which services should be able to talk, and (2) a live “mismatch report” from automated connectivity tests showing what is currently broken. It then proposes one command at a time, which the harness executes, returning updated results for iterative debugging. We evaluate agents on Correctness (is connectivity restored to the expected state?), Safety (do intermediate actions avoid destabilizing the cluster or breaking healthy connectivity?), and Latency (how many iterations to resolution?). NetArena’s green agent is novel in two ways. (1) It generates tasks and ground truth dynamically, so agents cannot memorize data and results carry less statistical bias. (2) It evaluates what real systems care about, especially an agent’s safety, revealing when an output looks reasonable but still violates safety constraints and creates operational risk.
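    The propose→execute→re-probe loop above can be sketched as a minimal harness. Everything here is illustrative, not NetArena's real API: the `Cluster` stand-in, the `allow src dst` command syntax, the service names, and the scoring rule are all assumptions.

    ```python
    from dataclasses import dataclass, field

    # Intended connectivity: (src, dst) pairs that SHOULD work (hypothetical intent).
    INTENT = {("frontend", "cartservice"), ("frontend", "checkoutservice")}

    @dataclass
    class Cluster:
        """Toy stand-in for a live cluster: a set of currently allowed (src, dst) edges."""
        allowed: set = field(default_factory=set)

        def probe(self):
            """Mismatch report: intended pairs currently broken, plus pairs that
            work but were never intended (unintended exposure)."""
            return {"broken": INTENT - self.allowed, "extra": self.allowed - INTENT}

        def apply(self, command):
            """Execute one 'allow src dst' command, mimicking a policy edit."""
            verb, src, dst = command.split()
            if verb == "allow":
                self.allowed.add((src, dst))

    def scripted_agent(report):
        """Trivial agent: fix the first broken pair listed in the report."""
        broken = sorted(report["broken"])
        return f"allow {broken[0][0]} {broken[0][1]}" if broken else None

    def evaluate(cluster, agent, max_steps=10):
        """Run the iterative debugging loop and score the episode."""
        safety_violations = 0
        for step in range(1, max_steps + 1):
            report = cluster.probe()
            if not report["broken"]:
                # Correct iff connectivity is restored with no unintended exposure.
                return {"correct": not report["extra"],
                        "safety_violations": safety_violations,
                        "latency": step - 1}
            before = set(cluster.allowed)
            cluster.apply(agent(report))
            # Safety: an action must not newly expose a pair outside the intent.
            if (cluster.allowed - INTENT) - (before - INTENT):
                safety_violations += 1
        return {"correct": False, "safety_violations": safety_violations,
                "latency": max_steps}

    # A cluster where one intended edge was broken by an injected policy fault.
    result = evaluate(Cluster(allowed={("frontend", "checkoutservice")}), scripted_agent)
    ```

    The scripted agent resolves the fault in one command, so the episode scores correct, safe, and latency 1; an LLM agent would replace `scripted_agent` and could of course take longer or violate the safety check.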

  • Terminal Bench 2.0

    by agentbeater

    Terminal-Bench 2.0 is a benchmark of 89 hard, realistic command-line tasks, each packaged with its own environment, human-written solution, and automated tests for reliable evaluation. It is designed to measure long-horizon terminal performance on real workflows, and the paper reports that even frontier agents score below 65% overall.

  • (NetArena) Data Center Planning Benchmark

    by Kolleida

    Capacity planning tackles a high-stakes question: how do we add or move data center resources to meet growing demand without wasting capacity or risking downtime? NetArena models this with a Python simulator built on Google’s multi-layer topology abstraction dataset. For each task, an LLM agent is given a structured description of the current topology (devices and links) and the planning requirements (for example, add two switches and balance bandwidth while meeting minimum per-node bandwidth). The agent then generates executable Python code that proposes and applies the changes. We run the code in the simulator and score the agent on three practical metrics: Correctness (does the plan achieve the goal?), Safety (does it avoid violating safety constraints?), and Latency (how quickly does it produce a usable plan?). NetArena’s green agent is novel in two ways. (1) It generates tasks and ground truth dynamically, so agents cannot memorize data and results carry less statistical bias. (2) It evaluates what real systems care about, especially an agent’s safety, revealing when an output looks reasonable but still violates safety constraints and creates operational risk.
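    A toy version of the "apply the plan in a simulator, then score it" flow might look like the following. The topology representation, the plan format, the 40 Gbps minimum, and the safety rule are all invented for illustration; NetArena's actual simulator and constraints will differ.

    ```python
    MIN_NODE_BW = 40  # Gbps; hypothetical per-node minimum from the task spec

    def apply_plan(topology, plan):
        """Apply a plan (list of (op, node, bw) tuples) to a topology dict
        mapping node -> aggregate link bandwidth in Gbps."""
        topo = dict(topology)
        for op, node, bw in plan:
            if op == "add_switch":
                topo[node] = bw
            elif op == "add_link":
                topo[node] = topo.get(node, 0) + bw
        return topo

    def score(topology, plan, required_new_switches=2):
        """Score an applied plan on correctness and a simple safety rule."""
        topo = apply_plan(topology, plan)
        new_switches = sum(1 for op, _, _ in plan if op == "add_switch")
        correct = (new_switches == required_new_switches
                   and all(bw >= MIN_NODE_BW for bw in topo.values()))
        # Safety constraint (assumed): the plan must never remove an existing node.
        safe = all(node in topo for node in topology)
        return {"correct": correct, "safe": safe}

    # Current topology: two leaf switches, one under-provisioned.
    current = {"leaf1": 60, "leaf2": 30}
    # Agent-proposed plan: add the two required switches and uplift the weak leaf.
    plan = [("add_switch", "sw1", 50), ("add_switch", "sw2", 50),
            ("add_link", "leaf2", 20)]
    result = score(current, plan)
    ```

    In the real benchmark the agent emits the planning code itself; the harness plays the role of `score`, checking goal satisfaction and safety against dynamically generated ground truth.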

  • (NetArena) Routing Configuration Benchmark

    by Kolleida

    Routing misconfigurations are a reactive, high-stakes operations task: small errors, such as a broken link or a missing route, can quietly break connectivity and escalate into widespread outages. NetArena captures this setting in a Mininet-based emulator. Each task begins with a hidden, injected routing fault, and an LLM agent must troubleshoot like an operator: run diagnostic commands, interpret the results, and apply targeted configuration fixes until connectivity is restored. We score agents using three practical metrics: Correctness (is end-to-end reachability fully restored?), Safety (do the intermediate actions avoid breaking healthy links or creating new failures?), and Latency (how many steps are needed to converge?). NetArena’s green agent is novel in two ways. (1) It generates tasks and ground truth dynamically, so agents cannot memorize data and results carry less statistical bias. (2) It evaluates what real systems care about, especially an agent’s safety, revealing when an output looks reasonable but still violates safety constraints and creates operational risk.
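    The diagnose-and-fix loop described above can be sketched with a toy next-hop network in place of Mininet. The command vocabulary (`ping`, `route add`), the router names, and the injected fault are all assumptions for illustration only.

    ```python
    class ToyNetwork:
        """Routers hold next-hop tables; reachability = following hops to dst."""
        def __init__(self, routes):
            self.routes = routes  # {router: {dst: next_hop}}

        def ping(self, src, dst):
            """Follow next hops from src; True if dst is reached, False on a
            missing route or a loop."""
            node, seen = src, set()
            while node != dst and node not in seen:
                seen.add(node)
                node = self.routes.get(node, {}).get(dst)
                if node is None:
                    return False
            return node == dst

        def run(self, command):
            """Execute one operator-style command and return its output."""
            parts = command.split()
            if parts[0] == "ping":
                return "ok" if self.ping(parts[1], parts[2]) else "unreachable"
            if parts[:2] == ["route", "add"]:  # route add <router> <dst> <next_hop>
                self.routes.setdefault(parts[2], {})[parts[3]] = parts[4]
                return "added"
            return "unknown command"

    # Injected fault: r1 has lost its route toward h2 (r2 still forwards fine).
    net = ToyNetwork({"r1": {}, "r2": {"h2": "h2"}})

    # A scripted "agent" episode: diagnose, apply a targeted fix, re-verify.
    steps = 0
    for cmd in ["ping r1 h2", "route add r1 h2 r2", "ping r1 h2"]:
        out = net.run(cmd)
        steps += 1
        if cmd.startswith("ping") and out == "ok":
            break
    restored = net.ping("r1", "h2")
    ```

    Here the episode converges in three steps with reachability restored; the benchmark's Safety metric would additionally penalize any intermediate command that broke a healthy path, which this scripted fix avoids.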
