
Kugetsu

Name background: Kugetsu (月掴, "grasping the moon") is derived from Jujutsu Kaisen's "Tokusa no Kage Boujutsu" (Shadow Art Style) — a technique that summons up to ten different creatures from the user's shadow. This project embodies the concept of one orchestrator managing multiple specialized agents working in parallel.

Overview

Kugetsu is an agent orchestration system that enables parallel task execution across multiple repositories. Inspired by the IT department metaphor:

  • Human acts as executive, reviewing and approving
  • PM (Project Manager) Agent coordinates and delegates tasks
  • Coding Agents execute tasks autonomously on assigned issues

The core idea: instead of working through issues one-by-one, a PM spawns multiple coding agents in parallel — similar to Hermes running multiple tasks, but scaled to a full team workflow.

Why

When you have 10 issues, you typically work through them sequentially. With Kugetsu:

  • PM prioritizes and splits tasks
  • Coding agents work in parallel on their own branches
  • PM reviews and merges to a release branch
  • Human provides final approval to master/main

This means your focus shifts from doing to overseeing — reviewing PRs, not writing code.
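The fan-out step of this workflow can be sketched in shell. This is a minimal, self-contained illustration using a throwaway repository; the paths and the agent command are hypothetical stand-ins, not Kugetsu's actual scripts:

```shell
#!/usr/bin/env sh
set -e
# Sketch: one git worktree and one background agent per issue.
# Uses a throwaway repo; the agent command is a stand-in.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=pm@example.com -c user.name=pm \
  commit -q --allow-empty -m init
WORKTREES_DIR="$repo/worktrees"   # illustrative path only

for issue in 101 102 103; do
  wt="$WORKTREES_DIR/issue-$issue"
  git worktree add -q -b "issue-$issue" "$wt" HEAD
  # nohup alone makes the process survive the shell exiting;
  # no `disown` is needed on top of it.
  nohup sh -c "echo agent-$issue done" > "$wt/agent.log" 2>&1 &
done
wait
git worktree list   # one line per worktree: the main checkout plus three agents
```

Each agent works on its own branch in its own directory, so the PM can later review and merge each branch independently.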

Status

Phase 3: Chat Integration (Implemented)

  • PM Agent with git worktree isolation per session
  • Chat Agent via Telegram gateway
  • Parallel capacity testing tool available

See Architecture for full system design and phase status.

Capacity Planning

Based on parallel capacity testing (tools/parallel-capacity-test/):

  Resource                 Value
  Memory per agent         ~340 MB
  Recommended max agents   5
  Timeout threshold        8+ agents
  Memory limit             1 GB per agent (configurable)
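A quick back-of-envelope check of the table, as a shell sketch:

```shell
# At ~340 MB per agent, the recommended maximum of 5 agents
# needs roughly 1.7 GB of RAM in total.
per_agent_mb=340
max_agents=5
total_mb=$((per_agent_mb * max_agents))
echo "~${total_mb} MB for ${max_agents} agents"   # prints: ~1700 MB for 5 agents
```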

Observed Behavior

  • 1-5 agents: 100% success rate, ~6-9s avg response time
  • 8+ agents: Timeouts occur due to resource contention
  • Scaling is roughly linear up to 5 agents

Recommendations

  1. Limit max parallel agents to 5 for stable operation
  2. Monitor memory usage when scaling beyond 3 agents
  3. Configure memory limit via --memory-limit flag based on available RAM
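
Recommendation 3 can be approximated with a plain `ulimit` in a subshell. This is a generic sketch of the mechanism, not Kugetsu's actual implementation of --memory-limit:

```shell
# Sketch: cap one agent's virtual memory inside a subshell, so the
# limit does not leak into the parent shell. The agent command below
# is a hypothetical stand-in.
limit_kb=$((1024 * 1024))   # 1 GB, expressed in KB for `ulimit -v`
(
  ulimit -v "$limit_kb"     # applies only within this subshell
  sh -c 'echo agent started' > /dev/null   # stand-in for the real agent
)
```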

Documentation

License

MIT
