REF. 2604-01 IN PROGRESS

Line-Level Conversation Branching in an AI Command Line Interface

Date
April 2026
Client
Open Source / OpenAI
Deliverable
OpenAI Codex CLI Contribution
Source
GitHub
AI CLI UX Rust Open Source Codex

Single-threaded AI conversation is unnatural for anything complex. You're reading a twenty-step guide. Step three needs clarification. You ask, and now step three's answer, the follow-up, and the correction are all embedded in the same stream as steps four through twenty. You've also lost your place, so now you have to scroll up, find the line where you left off, and continue with step four.

Stacked conditionals make it worse. "What do I do if step three fails?" generates an answer that lives nowhere near step three. By step twelve you're managing a conversation that has branched in your head but not on screen.

This contribution to OpenAI's Codex CLI makes the aside natural. Users can branch from the line where the question arose, handle it in a child thread with its own multi-turn conversation, and return to the parent conversation at exactly the point where they left. The main thread stays clean. The aside stays contained. No re-orientation required.

There is a secondary efficiency. Every aside that stays in the main thread consumes context that the model carries forward: re-orientations, conditional answers, corrections that no longer apply. A contained branch takes its own context budget. The main thread stays focused on what it was doing, and so does the model working in it.

The implementation anchors the fork to a byte offset in the raw assistant response: precise enough to survive session reloads, and structurally aware enough to expand to a logical boundary, such as a full list item rather than the middle of one.
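A minimal sketch of the anchoring idea, not the actual Codex CLI code: the names (`BranchAnchor`, `snap_to_line_start`) are hypothetical, and real logical-boundary expansion would parse markdown structure rather than just snapping to the preceding newline. Searching the raw bytes for `\n` is safe in UTF-8 because a newline byte never occurs inside a multi-byte sequence.

```rust
/// Hypothetical anchor type: a byte offset into the raw assistant response.
#[derive(Debug, Clone, Copy, PartialEq)]
struct BranchAnchor {
    byte_offset: usize,
}

/// Snap a raw byte offset back to the start of its logical line. List items
/// ("- ", "1. ", etc.) begin at a line start, so this yields a full-item
/// boundary instead of a position in the middle of one.
fn snap_to_line_start(text: &str, offset: usize) -> BranchAnchor {
    let clamped = offset.min(text.len());
    // Byte-level search avoids char-boundary panics on arbitrary offsets.
    let start = text.as_bytes()[..clamped]
        .iter()
        .rposition(|&b| b == b'\n')
        .map_or(0, |i| i + 1);
    BranchAnchor { byte_offset: start }
}

fn main() {
    let response = "Steps:\n1. Install deps\n2. Configure the TUI\n3. Run";
    // An offset landing mid-way through "Configure" snaps back to "2. ...".
    let anchor = snap_to_line_start(response, 30);
    assert_eq!(&response[anchor.byte_offset..anchor.byte_offset + 2], "2.");
    println!("anchored at byte {}", anchor.byte_offset);
}
```

Because the anchor is a byte offset into the stored raw response rather than a rendered screen position, it stays valid when the session is reloaded and the transcript is re-rendered.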

Find the GitHub fork here: https://github.com/mkiiim/codex/tree/tui/line-branching-faithful-clean
Or just clone with: git clone --branch tui/line-branching-faithful-clean https://github.com/mkiiim/codex.git