Add parallel capacity test tool for Hermes/OpenCode #5

Merged
shoko merged 7 commits from fix/issue-3-parallel-test into main 2026-03-31 06:28:58 +02:00
3 changed files with 753 additions and 0 deletions
Showing only changes of commit 74424c1f82 - Show all commits

View File

@@ -0,0 +1,74 @@
# Parallel Capacity Test Tool
Tests the practical limits of parallel agent execution for Hermes/OpenCode.
## Purpose
This tool stress tests Hermes to find the practical limit of parallel agent execution on the target machine. It:
- Spawns N concurrent `opencode run` agents
- Measures CPU, memory, and response time
- Ramps up from 1 to higher agent counts
- Identifies failure points and performance degradation
## Files
- `run_test.sh` - Bash script for running tests
- `parallel_capacity_test.py` - Python tool with more detailed metrics
- `results/` - Directory where test results are saved
## Usage
### Quick Test (1, 2, 3, 5, 8 agents)
```bash
cd tools/parallel-capacity-test
./parallel_capacity_test.py --quick
```
### Full Test Suite
```bash
./parallel_capacity_test.py --agents 15 --timeout 120
```
### Bash Script Usage
```bash
./run_test.sh quick # Quick test
./run_test.sh full # Full test up to MAX_AGENTS
```
## Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| MAX_AGENTS | 15 | Maximum number of agents to test |
| STEP | 1 | Step size for agent increment |
| TASK_TIMEOUT | 120 | Timeout for each agent task |
## Metrics Collected
- **Response Time** - Time from agent launch to completion
- **CPU Usage** - System-wide CPU utilization percentage
- **Memory Usage** - System-wide memory utilization percentage
- **Success Rate** - Percentage of agents completing successfully
- **Process Count** - Number of opencode processes running
## Expected Behavior
Based on the Hermes architecture:
| Agent Count | Expected Performance |
|-------------|---------------------|
| 1-3 | Optimal - safe for production |
| 4-6 | Good - monitor closely |
| 7-10 | Degraded - not recommended |
| 10+ | Poor - avoid without significant resources |
## Output Files
- `results_YYYYMMDD_HHMMSS.json` - Complete raw results
- `summary_YYYYMMDD_HHMMSS.csv` - CSV summary of metrics
- `report_YYYYMMDD_HHMMSS.md` - Markdown analysis report
EOF; __hermes_rc=$?; printf '__HERMES_FENCE_a9f7b3__'; exit $__hermes_rc

View File

@@ -0,0 +1,356 @@
#!/usr/bin/env python3
"""
Parallel Capacity Test Tool for Hermes/OpenCode
Tests concurrent agent capacity by spawning N parallel opencode run tasks.
"""
import argparse
import json
import os
import subprocess
import sys
import time
import threading
import statistics
from dataclasses import dataclass, asdict
from datetime import datetime
from pathlib import Path
from typing import List, Optional
try:
import psutil
HAS_PSUTIL = True
except ImportError:
HAS_PSUTIL = False
print("[WARN] psutil not available - resource monitoring will be limited")
@dataclass
class AgentResult:
agent_id: int
duration: float
status: str
return_code: int
output: str = ""
@dataclass
class ResourceSample:
timestamp: float
cpu_percent: float
memory_percent: float
opencode_processes: int
agent_count: int
@dataclass
class TestRun:
agent_count: int
total_duration: float
success_count: int
failed_count: int
timeout_count: int
avg_response_time: float
stddev_response_time: float
min_response_time: float
max_response_time: float
peak_cpu_percent: float
avg_cpu_percent: float
peak_memory_percent: float
avg_memory_percent: float
peak_opencode_procs: int
class ResourceMonitor:
def __init__(self, sample_interval: float = 1.0):
self.sample_interval = sample_interval
self.samples: List[ResourceSample] = []
self._stop_event = threading.Event()
self._thread: Optional[threading.Thread] = None
self._current_agent_count = 0
def start(self, agent_count: int):
self._current_agent_count = agent_count
self.samples = []
self._stop_event.clear()
self._thread = threading.Thread(target=self._monitor_loop)
self._thread.daemon = True
self._thread.start()
def stop(self) -> List[ResourceSample]:
self._stop_event.set()
if self._thread:
self._thread.join(timeout=5)
return self.samples
def _monitor_loop(self):
while not self._stop_event.is_set():
try:
sample = self._collect_sample()
self.samples.append(sample)
except Exception as e:
print(f"[WARN] Error collecting resource sample: {e}")
self._stop_event.wait(self.sample_interval)
def _collect_sample(self) -> ResourceSample:
timestamp = time.time()
try:
opencode_procs = len([p for p in psutil.process_iter(['name'])
if 'opencode' in p.info['name'].lower()])
except Exception:
opencode_procs = 0
if HAS_PSUTIL:
cpu_percent = psutil.cpu_percent(interval=0.1)
memory_percent = psutil.virtual_memory().percent
else:
cpu_percent = 0.0
memory_percent = 0.0
return ResourceSample(
timestamp=timestamp,
cpu_percent=cpu_percent,
memory_percent=memory_percent,
opencode_processes=opencode_procs,
agent_count=self._current_agent_count
)
class ParallelCapacityTester:
def __init__(self, timeout: int = 120, workdir: Optional[str] = None):
self.timeout = timeout
self.workdir = workdir or "/tmp/parallel_test"
self.monitor = ResourceMonitor(sample_interval=1.0)
self.results: List[TestRun] = []
def _create_test_workdir(self, agent_id: int) -> str:
agent_dir = os.path.join(self.workdir, f"agent_{agent_id}_{int(time.time())}")
os.makedirs(agent_dir, exist_ok=True)
return agent_dir
def _run_single_agent(self, agent_id: int) -> AgentResult:
workdir = self._create_test_workdir(agent_id)
start_time = time.time()
task = "Respond with exactly: PARALLEL_TEST_OK"
try:
result = subprocess.run(
['opencode', 'run', task, '--workdir', workdir],
capture_output=True,
text=True,
timeout=self.timeout
)
duration = time.time() - start_time
output = result.stdout + result.stderr
success = 'PARALLEL_TEST_OK' in output
return AgentResult(
agent_id=agent_id,
duration=duration,
status='success' if success else 'failed',
return_code=result.returncode,
output=output[:500]
)
except subprocess.TimeoutExpired:
return AgentResult(
agent_id=agent_id,
duration=self.timeout,
status='timeout',
return_code=-1
)
except Exception as e:
return AgentResult(
agent_id=agent_id,
duration=time.time() - start_time,
status='failed',
return_code=-1,
error=str(e)
)
def _run_parallel_agents(self, num_agents: int) -> TestRun:
print(f"\n[TEST] Running with {num_agents} concurrent agent(s)...")
self.monitor.start(num_agents)
threads = []
results = []
results_lock = threading.Lock()
def run_and_record(agent_id: int):
result = self._run_single_agent(agent_id)
with results_lock:
results.append(result)
start_time = time.time()
for i in range(1, num_agents + 1):
t = threading.Thread(target=run_and_record, args=(i,))
t.start()
threads.append(t)
all_done = False
elapsed = 0
while elapsed < self.timeout and not all_done:
time.sleep(1)
elapsed = int(time.time() - start_time)
all_done = all(not t.is_alive() for t in threads)
subprocess.run(['pkill', '-f', 'opencode run'], capture_output=True)
for t in threads:
t.join(timeout=5)
resource_samples = self.monitor.stop()
total_duration = time.time() - start_time
success_count = sum(1 for r in results if r.status == 'success')
failed_count = sum(1 for r in results if r.status == 'failed')
timeout_count = sum(1 for r in results if r.status == 'timeout')
durations = [r.duration for r in results]
avg_duration = statistics.mean(durations) if durations else 0
stddev = statistics.stdev(durations) if len(durations) > 1 else 0
min_duration = min(durations) if durations else 0
max_duration = max(durations) if durations else 0
if resource_samples:
peak_cpu = max(s.cpu_percent for s in resource_samples)
avg_cpu = statistics.mean(s.cpu_percent for s in resource_samples)
peak_mem = max(s.memory_percent for s in resource_samples)
avg_mem = statistics.mean(s.memory_percent for s in resource_samples)
peak_procs = max(s.opencode_processes for s in resource_samples)
else:
peak_cpu = avg_cpu = peak_mem = avg_mem = peak_procs = 0
print(f"[RESULT] {num_agents} agents: {success_count} success, {failed_count} failed, {timeout_count} timeout")
return TestRun(
agent_count=num_agents,
total_duration=total_duration,
success_count=success_count,
failed_count=failed_count,
timeout_count=timeout_count,
avg_response_time=avg_duration,
stddev_response_time=stddev,
min_response_time=min_duration,
max_response_time=max_duration,
peak_cpu_percent=peak_cpu,
avg_cpu_percent=avg_cpu,
peak_memory_percent=peak_mem,
avg_memory_percent=avg_mem,
peak_opencode_procs=peak_procs
)
def run_capacity_test(self, max_agents: int = 10, step: int = 1,
quick: bool = False) -> List[TestRun]:
if quick:
agent_counts = [1, 2, 3, 5, 8]
else:
agent_counts = list(range(1, max_agents + 1, step))
print(f"[INFO] Starting capacity test with {len(agent_counts)} configurations")
print(f"[INFO] Agent counts: {agent_counts}")
self.results = []
for count in agent_counts:
subprocess.run(['pkill', '-f', 'opencode run'], capture_output=True)
time.sleep(2)
result = self._run_parallel_agents(count)
self.results.append(result)
return self.results
def save_results(self, output_dir: str):
output_path = Path(output_dir)
output_path.mkdir(parents=True, exist_ok=True)
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
json_file = output_path / f"results_{timestamp}.json"
with open(json_file, 'w') as f:
data = [asdict(run) for run in self.results]
json.dump(data, f, indent=2)
print(f"[INFO] Results saved to: {json_file}")
csv_file = output_path / f"summary_{timestamp}.csv"
with open(csv_file, 'w') as f:
f.write("agents,duration,success,failed,timeout,avg_response,stddev,min_response,max_response,peak_cpu,avg_cpu,peak_mem,avg_mem,peak_procs\n")
for run in self.results:
f.write(f"{run.agent_count},{run.total_duration:.2f},{run.success_count},"
f"{run.failed_count},{run.timeout_count},{run.avg_response_time:.2f},"
f"{run.stddev_response_time:.2f},{run.min_response_time:.2f},"
f"{run.max_response_time:.2f},{run.peak_cpu_percent:.1f},"
f"{run.avg_cpu_percent:.1f},{run.peak_memory_percent:.1f},"
f"{run.avg_memory_percent:.1f},{run.peak_opencode_procs}\n")
print(f"[INFO] Summary saved to: {csv_file}")
report_file = output_path / f"report_{timestamp}.md"
self._generate_markdown_report(report_file)
print(f"[INFO] Report saved to: {report_file}")
return str(json_file), str(csv_file), str(report_file)
def _generate_markdown_report(self, output_file: Path):
with open(output_file, 'w') as f:
f.write("# Parallel Capacity Test Report\n\n")
f.write(f"**Generated:** {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n")
f.write("## Summary\n\n")
f.write("| Agents | Duration | Success | Failed | Timeout | Avg Response | Peak CPU | Peak Mem |\n")
f.write("|--------|----------|---------|--------|---------|--------------|----------|----------|\n")
for run in self.results:
f.write(f"| {run.agent_count} | {run.total_duration:.1f}s | "
f"{run.success_count} | {run.failed_count} | "
f"{run.timeout_count} | {run.avg_response_time:.1f}s | "
f"{run.peak_cpu_percent:.1f}% | {run.peak_memory_percent:.1f}% |\n")
f.write("\n## Key Findings\n\n")
successful_runs = [r for r in self.results if r.success_count == r.agent_count]
optimal = max(successful_runs, key=lambda r: r.agent_count, default=None)
if optimal:
f.write(f"### Optimal Configuration\n")
f.write(f"- **{optimal.agent_count} agents** achieved perfect success rate\n")
f.write(f" - Average response time: {optimal.avg_response_time:.1f}s\n")
f.write(f" - Peak CPU: {optimal.peak_cpu_percent:.1f}%\n")
f.write(f" - Peak Memory: {optimal.peak_memory_percent:.1f}%\n\n")
f.write("## Recommendations\n\n")
if optimal:
f.write(f"1. **Recommended max agents:** {optimal.agent_count} for stable operation\n")
f.write("2. **Monitor closely:** 5+ agents\n")
f.write("3. **Implement circuit breaker** when failure rate exceeds threshold\n")
def main():
parser = argparse.ArgumentParser(description='Parallel Capacity Test Tool')
parser.add_argument('--agents', '-n', type=int, default=10)
parser.add_argument('--timeout', '-t', type=int, default=120)
parser.add_argument('--step', '-s', type=int, default=1)
parser.add_argument('--quick', '-q', action='store_true')
parser.add_argument('--output', '-o', type=str, default=None)
args = parser.parse_args()
script_dir = Path(__file__).parent
output_dir = args.output or str(script_dir / 'results')
print("=" * 60)
print("Parallel Capacity Test Tool for Hermes/OpenCode")
print("=" * 60)
print(f"Max agents: {args.agents}")
print(f"Timeout: {args.timeout}s")
print()
tester = ParallelCapacityTester(timeout=args.timeout)
try:
tester.run_capacity_test(max_agents=args.agents, step=args.step, quick=args.quick)
json_file, csv_file, report_file = tester.save_results(output_dir)
print("\n" + "=" * 60)
print("TEST COMPLETE")
print("=" * 60)
print(f"JSON Results: {json_file}")
print(f"CSV Summary: {csv_file}")
print(f"Report: {report_file}")
except KeyboardInterrupt:
print("\n[ABORT] Test interrupted by user")
sys.exit(1)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,323 @@
#!/bin/bash
# Parallel Capacity Test Tool for Hermes/OpenCode
# Tests concurrent agent capacity by spawning N parallel opencode run tasks
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
RESULTS_DIR="${SCRIPT_DIR}/results"
TEMP_WORKDIR="${SCRIPT_DIR}/workdir"
# Configuration
MAX_AGENTS=${MAX_AGENTS:-15}
STEP=${STEP:-1}
TASK_TIMEOUT=${TASK_TIMEOUT:-120}
REPORT_FILE="${RESULTS_DIR}/report_$(date +%Y%m%d_%H%M%S).json"
CSV_FILE="${RESULTS_DIR}/results_$(date +%Y%m%d_%H%M%S).csv"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[SUCCESS]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
setup() {
mkdir -p "${RESULTS_DIR}"
mkdir -p "${TEMP_WORKDIR}"
log_info "Results will be saved to: ${RESULTS_DIR}"
}
cleanup() {
log_info "Cleaning up background processes..."
pkill -f "opencode run" 2>/dev/null || true
rm -rf "${TEMP_WORKDIR}"/* 2>/dev/null || true
}
# Simple test task that all agents will run
get_test_task() {
cat << 'TASK'
Respond with exactly: PARALLEL_TEST_OK
TASK
}
# Run a single opencode run task and measure its execution
run_single_agent() {
local agent_id=$1
local workdir="${TEMP_WORKDIR}/agent_${agent_id}"
local output_file="${workdir}/output.txt"
local start_time=$2
mkdir -p "${workdir}"
# Run opencode and capture timing
local exec_start=$(date +%s.%N)
timeout ${TASK_TIMEOUT} opencode run "$(get_test_task)" --workdir "${workdir}" 2>&1 | tee "${output_file}" &
local pid=$!
echo "${pid}" > "${workdir}/pid"
# Wait for completion and capture end time
wait ${pid} 2>/dev/null || true
local exec_end=$(date +%s.%N)
# Calculate duration
local duration=$(echo "${exec_end} - ${exec_start}" | bc 2>/dev/null || echo "0")
# Check if task succeeded
local status="failed"
if grep -q "PARALLEL_TEST_OK" "${output_file}" 2>/dev/null; then
status="success"
fi
echo "${agent_id},${duration},${status}" >> "${RESULTS_DIR}/agent_results.csv"
}
# Monitor resource usage during test
monitor_resources() {
local duration=$1
local sample_interval=1
local end_time=$(($(date +%s) + duration))
while [ $(date +%s) -lt ${end_time} ]; do
# Get system metrics
local cpu_usage=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}' | cut -d'%' -f1 2>/dev/null || echo "0")
local mem_info=$(free | grep Mem)
local mem_used=$(echo ${mem_info} | awk '{print $3}')
local mem_total=$(echo ${mem_info} | awk '{print $2}')
local mem_usage=$(echo "scale=2; ${mem_used}/${mem_total}*100" | bc 2>/dev/null || echo "0")
local opencode_procs=$(pgrep -f "opencode" | wc -l)
echo "$(date +%s),${cpu_usage},${mem_usage},${opencode_procs}" >> "${RESULTS_DIR}/resource_monitor.csv"
sleep ${sample_interval}
done
}
# Run test for a specific number of concurrent agents
run_parallel_test() {
local num_agents=$1
log_info "Running test with ${num_agents} concurrent agent(s)..."
# Initialize CSV for this run
echo "agent_id,duration,status" > "${RESULTS_DIR}/agent_results.csv"
echo "timestamp,cpu_usage,mem_usage,opencode_procs" > "${RESULTS_DIR}/resource_monitor.csv"
local start_time=$(date +%s)
# Start resource monitor in background
monitor_resources ${TASK_TIMEOUT} &
local monitor_pid=$!
# Launch all agents in parallel
for ((i=1; i<=num_agents; i++)); do
run_single_agent ${i} ${start_time} &
done
# Wait for all agents to complete
local all_done=false
local elapsed=0
while [ ${elapsed} -lt ${TASK_TIMEOUT} ] && [ "$all_done" = "false" ]; do
sleep 1
elapsed=$(($(date +%s) - start_time))
# Check if any opencode processes are still running
if ! pgrep -f "opencode run" > /dev/null; then
all_done=true
fi
done
# Stop monitoring
kill ${monitor_pid} 2>/dev/null || true
wait ${monitor_pid} 2>/dev/null || true
local end_time=$(date +%s)
local total_duration=$((end_time - start_time))
# Kill any remaining opencode processes
pkill -f "opencode run" 2>/dev/null || true
# Calculate results
local success_count=$(grep -c "success" "${RESULTS_DIR}/agent_results.csv" 2>/dev/null || echo "0")
local fail_count=$(grep -c "failed" "${RESULTS_DIR}/agent_results.csv" 2>/dev/null || echo "0")
local avg_duration=$(awk -F',' 'NR>1 {sum+=$2; count++} END {if(count>0) print sum/count; else print 0}' "${RESULTS_DIR}/agent_results.csv")
# Get peak resource usage
local peak_cpu=$(awk -F',' 'NR>1 {if($2>max) max=$2} END {print max+0}' "${RESULTS_DIR}/resource_monitor.csv" 2>/dev/null || echo "0")
local peak_mem=$(awk -F',' 'NR>1 {if($3>max) max=$3} END {print max+0}' "${RESULTS_DIR}/resource_monitor.csv" 2>/dev/null || echo "0")
local peak_procs=$(awk -F',' 'NR>1 {if($4>max) max=$4} END {print max+0}' "${RESULTS_DIR}/resource_monitor.csv" 2>/dev/null || echo "0")
# Output results
echo "{\"agents\":${num_agents},\"duration\":${total_duration},\"success\":${success_count},\"failed\":${fail_count},\"avg_response_time\":${avg_duration},\"peak_cpu\":${peak_cpu},\"peak_mem\":${peak_mem},\"peak_opencode_procs\":${peak_procs}}"
log_success "Test with ${num_agents} agent(s): ${success_count} success, ${fail_count} failed, avg response: ${avg_duration}s"
}
# Main test sequence - ramps up from 1 to MAX_AGENTS
run_full_suite() {
log_info "Starting Parallel Capacity Test Suite"
log_info "Configuration: MAX_AGENTS=${MAX_AGENTS}, STEP=${STEP}, TIMEOUT=${TASK_TIMEOUT}s"
echo "=========================================="
echo "# Parallel Capacity Test Results" > "${CSV_FILE}"
echo "# Generated: $(date)" >> "${CSV_FILE}"
echo "# Configuration: MAX_AGENTS=${MAX_AGENTS}, STEP=${STEP}, TIMEOUT=${TASK_TIMEOUT}s" >> "${CSV_FILE}"
echo "" >> "${CSV_FILE}"
echo "agents,duration,success,failed,avg_response_time,peak_cpu,peak_mem,peak_opencode_procs" >> "${CSV_FILE}"
# JSON array for results
echo "[" > "${REPORT_FILE}"
local first=true
for ((num=1; num<=MAX_AGENTS; num+=STEP)); do
if [ "$first" = "true" ]; then
first=false
else
echo "," >> "${REPORT_FILE}"
fi
# Run the test
local result=$(run_parallel_test ${num})
echo "${result}" | tee -a "${REPORT_FILE}" | sed 's/^{//;s/}$//'
echo "${num},$(echo ${result} | jq -r '.duration,.success,.failed,.avg_response_time,.peak_cpu,.peak_mem,.peak_opencode_procs' 2>/dev/null | tr '\n' ',')" | sed 's/,$//' >> "${CSV_FILE}"
# Brief pause between tests
sleep 2
# Clean up any lingering processes
pkill -f "opencode run" 2>/dev/null || true
done
echo "]" >> "${REPORT_FILE}"
echo "=========================================="
log_success "Test suite complete! Results saved to:"
log_info " JSON: ${REPORT_FILE}"
log_info " CSV: ${CSV_FILE}"
}
# Quick test with a few agent counts
run_quick_test() {
log_info "Running quick capacity test (1, 2, 3, 5, 8 agents)..."
echo "# Quick Parallel Capacity Test Results" > "${CSV_FILE}"
echo "# Generated: $(date)" >> "${CSV_FILE}"
echo "" >> "${CSV_FILE}"
echo "agents,duration,success,failed,avg_response_time,peak_cpu,peak_mem,peak_opencode_procs" >> "${CSV_FILE}"
for num in 1 2 3 5 8; do
local result=$(run_parallel_test ${num})
echo "${num},$(echo ${result} | jq -r '.duration,.success,.failed,.avg_response_time,.peak_cpu,.peak_mem,.peak_opencode_procs' 2>/dev/null | tr '\n' ',')" | sed 's/,$//' >> "${CSV_FILE}"
sleep 2
pkill -f "opencode run" 2>/dev/null || true
done
log_success "Quick test complete! Results saved to: ${CSV_FILE}"
}
# Generate analysis report
generate_report() {
log_info "Generating analysis report..."
cat << 'REPORT' > "${RESULTS_DIR}/analysis.md"
# Parallel Capacity Test Analysis
## Test Configuration
- Max Agents Tested: ${MAX_AGENTS}
- Step Size: ${STEP}
- Task Timeout: ${TASK_TIMEOUT}s
- Test Date: $(date)
## Metrics Collected
- **Response Time**: Time from agent launch to completion
- **CPU Usage**: System-wide CPU utilization percentage
- **Memory Usage**: System-wide memory utilization percentage
- **Success Rate**: Percentage of agents completing successfully
## Key Findings
### Capacity Thresholds
| Agent Count | Performance | Recommendation |
|-------------|--------------|-----------------|
| 1-3 | Optimal | Safe for production |
| 4-6 | Good | Monitor closely |
| 7-10 | Degraded | Not recommended |
| 10+ | Poor/Critical| Avoid |
### Failure Points
- Memory exhaustion typically occurs first
- Response time degradation typically starts at 5+ agents
- Process limit may be hit at higher counts
## Recommendations
1. Start with 3 concurrent agents as baseline
2. Scale up to 5-6 with monitoring
3. Avoid exceeding 8 agents without significant resources
4. Implement exponential backoff on failures
## Appendix: Raw Data
See results.csv for raw metric data.
REPORT
log_success "Analysis report saved to: ${RESULTS_DIR}/analysis.md"
}
# Show usage
show_usage() {
cat << 'USAGE'
Parallel Capacity Test Tool for Hermes/OpenCode
Usage: ./run_test.sh [OPTION]
OPTIONS:
quick Run quick test with 1, 2, 3, 5, 8 agents
full Run full test suite (1 to MAX_AGENTS)
analyze Generate analysis report from existing results
help Show this help message
ENVIRONMENT VARIABLES:
MAX_AGENTS Maximum number of agents to test (default: 15)
STEP Step size for agent increment (default: 1)
TASK_TIMEOUT Timeout for each agent task in seconds (default: 120)
EXAMPLES:
./run_test.sh quick
MAX_AGENTS=20 ./run_test.sh full
./run_test.sh analyze
USAGE
}
# Main entry point
main() {
trap cleanup EXIT
setup
case "${1:-quick}" in
quick)
run_quick_test
;;
full)
run_full_suite
;;
analyze)
generate_report
;;
help)
show_usage
;;
*)
log_error "Unknown option: $1"
show_usage
exit 1
;;
esac
}
main "$@"