---
license: mit
task_categories:
  - text-generation
tags:
  - agent-evals
  - linear
  - graphql
---

# Agent-Diff: Linear Bench Mini

This dataset is part of the Agent-Diff benchmark, presented in the paper *Agent-Diff: Benchmarking LLM Agents on Enterprise API Tasks via Code Execution with State-Diff-Based Evaluation*.

Website | GitHub | Paper


## Context

The Linear Bench suite runs inside the Agent-Diff isolation engine, where a dedicated Postgres schema replays the Linear GraphQL API. Agents interact with Linear's public API surface to complete CRUD-style tasks (creating issues, moving workflow states, adding labels, etc.). Task success is determined by whether the agent produced the expected change in environment state, evaluated via a novel state-diff contract.
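To make the state-diff idea concrete, here is a minimal sketch of diffing two environment snapshots. This is an illustration of the evaluation concept only, not the engine's actual implementation; the snapshot shape (a mapping from record id to record dict) and the `inserts`/`updates`/`deletes` grouping are assumptions for the example.

```python
def state_diff(before, after):
    """Classify changes between two snapshots (dicts mapping record id -> record)."""
    inserts = [after[k] for k in after.keys() - before.keys()]
    deletes = [before[k] for k in before.keys() - after.keys()]
    updates = [
        {"id": k, "before": before[k], "after": after[k]}
        for k in before.keys() & after.keys()
        if before[k] != after[k]
    ]
    return {"inserts": inserts, "updates": updates, "deletes": deletes}


# Example: the agent moved issue "1" to done and created issue "2".
before = {"1": {"title": "Fix login bug", "state": "todo"}}
after = {
    "1": {"title": "Fix login bug", "state": "done"},
    "2": {"title": "Add dark mode", "state": "todo"},
}
print(state_diff(before, after))
```

Task success can then be decided by comparing such a diff against the expected changes for the task, rather than by parsing the agent's output.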

## Files

- `data/train.jsonl`: Training tasks and prompts.
- `dataset_infos.json`: Dataset configuration.
- `seeds/linear_default.json`: Initial state configuration for the Linear environment.
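Since the task file is plain JSON Lines, it can be inspected without any special tooling. A minimal loader sketch (the record field names are not documented here, so print the keys of the first task to discover them):

```python
import json


def load_jsonl(path):
    """Read a JSON Lines file into a list of dicts, skipping blank lines."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]


# tasks = load_jsonl("data/train.jsonl")
# print(len(tasks), list(tasks[0].keys()))
```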

## Usage

To use this benchmark, interact with the isolated environment through the `agent-diff` Python SDK.

### Install the SDK

```bash
pip install agent-diff
```

### Initialize and Run

```python
from agent_diff import AgentDiff

# Initialize an isolated environment from the linear template
client = AgentDiff()
env = client.init_env(
    templateService="linear",
    templateName="linear_default",
    impersonateUserId="U01AGENBOT9",
    TTL="3600"
)

# Take a "before" snapshot
run = client.start_run(envId=env.environmentId)

# Your agent performs tasks using the environment URL:
# print(env.environmentUrl)

# Compute the diff (changes in the environment) and get results
diff = client.diff_run(runId=run.runId)

# Inspect changes
print(diff.diff['inserts'])   # New records added by the agent
print(diff.diff['updates'])   # Modified records

# Clean up
client.delete_env(envId=env.environmentId)
```

For more details on evaluations and the state-diff framework, please refer to the official GitHub repository.