Use Case: Autonomous Task Execution
Learn how to assign tasks to Orgn Copilot, Origin's autonomous coding agent, so your team can delegate repetitive or lower-priority work and stay focused on what matters most.
Origin's Orgn Copilot is an autonomous coding agent that can be assigned to tasks just like a human teammate. Once assigned, it spins up a sandboxed worktree, reads the codebase, plans its approach, and executes, committing changes and reporting progress without you needing to be present.
This use case walks through assigning a task to Orgn Copilot and monitoring it to completion.
When to Delegate to Orgn Copilot
Orgn Copilot is best suited to tasks that are well-defined but time-consuming: the kind of work that matters but shouldn't occupy a senior engineer's afternoon. Good candidates include:
- Refactors: rename variables, extract components, simplify logic, enforce consistent patterns
- Test coverage: add unit, snapshot, or integration tests for existing functions
- Documentation: write JSDoc, inline comments, or README sections
- Cleanup: remove dead code, fix lint warnings, delete stale comments
- Boilerplate: add TypeScript interfaces, PropTypes, or error boundaries
Tasks that require product judgment, multi-system coordination, or user-facing design decisions are better kept with a human.
Step 1: Find or Create the Task
Open your project's Dashboard and locate the task you want to delegate, or create a new one. The task should have a clear, scoped title; Orgn Copilot uses the task title and description as its primary instruction set.
For this example the task is:
Refactor code to simplify readability
Good task titles are specific and actionable. Vague titles like "Fix things" produce unfocused results.
Step 2: Assign to Orgn Copilot
Open the task detail panel and click Assign.
The assignee dropdown has two sections:
- Team members: human collaborators on the project
- Chat Agents: AI agents available to the project. Orgn Copilot appears here, listed as Autonomous coding agent
Click Orgn Copilot to assign it.
Step 3: The Agent Starts Automatically
Once assigned, Origin immediately:
- Changes the task status from To Do to In Progress
- Creates a Worktree: an isolated branch-based sandbox scoped to this task
- Launches a Copilot session inside that worktree
You'll see the activity log update in real time:
- Sid Lais assigned AI agent
- Status changed from To Do to In Progress
- Worktree #1: [task title], Trial created
The agent is now running. You don't need to stay in this view.
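Conceptually, the sandbox works like a git worktree: a separate checkout on its own branch. Origin creates and manages this for you, but a minimal sketch in plain git (with hypothetical branch and directory names) shows the isolation model:

```shell
# Illustrative sketch only: Origin creates the worktree automatically.
# Branch and directory names below are hypothetical.
git init -q demo-repo && cd demo-repo
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"

# An isolated, branch-based sandbox scoped to one task:
git worktree add ../task-refactor -b copilot/task-refactor

# The agent commits on copilot/task-refactor; the original checkout
# and its branch stay untouched until you choose to merge.
git worktree list
```

Because the worktree lives on its own branch, anything the agent does can be reviewed, amended, or discarded without affecting your working copy.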
Step 4: Monitor the Session
Click into the Worktree to open the live Copilot session. You can observe the agent's full reasoning and execution stream as it works.
The session view shows:
Thought blocks: the agent's internal reasoning before each action. These explain why it's doing what it's doing.
Tool calls: actions the agent takes, such as searching the codebase, listing files, reading specific modules, or editing files. Each is shown with its result.
To-do list: the agent breaks its plan into sub-items and checks them off as it goes. For example:
To-dos 0/5 — Remove misleading 'This file has been deleted.' comments from all .tsx files
Changed Files: a live diff panel showing every modification made so far. You can expand this at any time to review edits before the agent finishes.
The model used by the agent is shown at the bottom of the session (e.g., GPT-OSS 120B (NEAR TEE)). You can intervene at any point by typing a message directly into the session.
Step 5: Review and Merge
When the agent finishes, the session ends and the task moves to your review queue. From the Worktree panel you can:
- Review the diff: see every file changed, line by line
- Open with VS Code or your preferred editor for a deeper review
- Create a PR: the worktree is on its own branch, ready for a pull request
- Request changes: leave a comment or reassign the task if the result needs adjustment
Only merge when you're satisfied. The agent's work is isolated to its branch; your main branch is never touched until you choose to merge.
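If you prefer to do this review from a terminal, the equivalent git flow is straightforward. The snippet below is a self-contained sketch with hypothetical branch names; in practice you would run only the diff and merge steps inside your real repository, or click Create a PR instead of merging locally:

```shell
# Self-contained demo repo standing in for your project
# (branch names are hypothetical).
git init -q review-demo && cd review-demo
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "base"

# Simulate the agent's branch with one commit on it.
git checkout -q -b copilot/task-refactor
echo "simplified helper" > util.txt
git add util.txt
git -c user.email=demo@example.com -c user.name=demo \
    commit -q -m "agent: simplify readability"
git checkout -q -   # back to the base branch

# Review every changed line before merging:
git diff HEAD...copilot/task-refactor

# Merge only when satisfied; the base branch is untouched until now.
git -c user.email=demo@example.com -c user.name=demo \
    merge -q --no-ff -m "Merge agent work" copilot/task-refactor
```

The `HEAD...branch` triple-dot form shows exactly what the agent added relative to the common ancestor, which matches what the Worktree diff panel displays.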
Tips for Better Results
Write a description, not just a title. The task description is part of the agent's instruction context. A few sentences explaining the goal, the relevant files, and what "done" looks like significantly improves output quality.
Attach context documents. If the task involves a specific component or pattern, link the relevant spec or design doc as a context attachment. The agent will read it before starting.
Keep scope tight. A single well-scoped task produces better results than one large, vague task. If a task touches more than 5–6 files or requires judgment calls, break it up.
Intervene early if needed. If the agent's first few tool calls are heading in the wrong direction, send a correction in the session chat. It's cheaper to course-correct early than to let it finish and re-run.
Use the to-do list as a progress signal. The agent creates its own to-do breakdown before executing. If the to-do list looks wrong or incomplete, that's the right moment to step in and clarify.