Conversation

@benhaotang (Owner)

```mermaid
graph TB
    subgraph Input
    A[Large Context]
    end

    subgraph "Context Processing"
    B[Split Into Chunks]
    C1[Process Chunk 1]
    C2[Process Chunk 2]
    C3[Process Chunk N]
    D[Combine Notes/Comments]
    end

    subgraph "Iterative Refinement"
    E1[Search Planning]
    E2[Writing Planning]
    F1[Generate Queries/Next Steps]
    F2[Refine Writing Plan]
    G1[Final Search Decision]
    G2[Final Report Writing]
    end

    A --> B
    B --> C1
    B --> C2
    B --> C3
    C1 --> D
    C2 --> D
    C3 --> D

    D --> E1
    D --> E2
    E1 --> F1
    E2 --> F2
    F1 --> G1
    F2 --> G2

    style A fill:#f9f,stroke:#333,stroke-width:2px
    style G1 fill:#ccf,stroke:#333,stroke-width:2px
    style G2 fill:#ccf,stroke:#333,stroke-width:2px

    classDef chunk fill:#fcf,stroke:#333,stroke-width:1px
    class C1,C2,C3 chunk
    classDef process fill:#cfc,stroke:#333,stroke-width:1px
    class B,D,E1,E2,F1,F2 process
```

The token count is now estimated and the context is split into chunks; the model then either leaves comments per chunk or refines its plan based on the chunks.
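
To make the flow in the diagram concrete, here is a minimal sketch of the split-and-comment step. It is an illustration only, not the implementation in this PR: `llm_call`, the per-chunk token budget, and the ~4-characters-per-token heuristic are all assumptions.

```python
# Minimal sketch of the chunked-context flow, assuming a generic
# llm_call(prompt) -> str helper (hypothetical) and a rough
# ~4 characters-per-token estimate instead of a real tokenizer.

from typing import Callable, List

CHARS_PER_TOKEN = 4          # crude heuristic; swap in a real tokenizer if available
MAX_TOKENS_PER_CHUNK = 4000  # assumed per-chunk budget


def estimate_tokens(text: str) -> int:
    """Estimate the token count from the character length."""
    return max(1, len(text) // CHARS_PER_TOKEN)


def split_into_chunks(context: str, max_tokens: int = MAX_TOKENS_PER_CHUNK) -> List[str]:
    """Split a large context into roughly equal chunks under the token budget."""
    total = estimate_tokens(context)
    n_chunks = -(-total // max_tokens)        # ceiling division
    size = -(-len(context) // n_chunks)       # characters per chunk
    return [context[i:i + size] for i in range(0, len(context), size)]


def process_chunks(context: str, llm_call: Callable[[str], str]) -> str:
    """Leave notes/comments per chunk, then combine them for the refinement stage."""
    notes = []
    for i, chunk in enumerate(split_into_chunks(context), start=1):
        notes.append(llm_call(f"Chunk {i}: leave short notes on the relevant citations:\n{chunk}"))
    # The combined notes feed search planning and writing planning
    # (the "Iterative Refinement" stage in the diagram above).
    return "\n".join(notes)
```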

benhaotang marked this pull request as ready for review February 27, 2025 02:10
benhaotang self-assigned this Feb 27, 2025
@benhaotang (Owner, Author) commented Mar 5, 2025

This seems to work well only with Claude Sonnet 3.5/3.7, so this PR will be kept as a draft for now. I think the ultimate solution is a memory feature for all the citations, to support less capable models.
