Merge pull request #218 from l3vels/docs/file-datasource
Docs: index types and response modes
okradze authored Oct 10, 2023
2 parents dfb8803 + 391dc6c commit 0025dad
Showing 5 changed files with 108 additions and 6 deletions.
4 changes: 2 additions & 2 deletions apps/ui/.env.develop
@@ -31,8 +31,8 @@ REACT_APP_DECENTRALIZED_SPEAKER_LINK=https://github.com/l3vels/L3AGI/blob/main/d

REACT_APP_TEAM_LINK=https://github.com/l3vels/L3AGI/blob/main/docs/team.md

-REACT_APP_INDEX_TYPES_LINK=https://gpt-index.readthedocs.io/en/latest/core_modules/data_modules/index/index_guide.html#
-REACT_APP_RESPONSE_MODES_LINK=https://gpt-index.readthedocs.io/en/latest/core_modules/query_modules/response_synthesizers/usage_pattern.html#configuring-the-response-mode
+REACT_APP_INDEX_TYPES_LINK=https://github.com/l3vels/L3AGI/blob/main/docs/index-types.md
+REACT_APP_RESPONSE_MODES_LINK=https://github.com/l3vels/L3AGI/blob/main/docs/response-modes.md

REACT_APP_STATIC_URL=https://static.l3vels.xyz
REACT_APP_DATA_TEST_MODE=true
4 changes: 2 additions & 2 deletions apps/ui/.env.local
@@ -33,8 +33,8 @@ REACT_APP_DECENTRALIZED_SPEAKER_LINK=https://github.com/l3vels/L3AGI/blob/main/d

REACT_APP_TEAM_LINK=https://github.com/l3vels/L3AGI/blob/main/docs/team.md

-REACT_APP_INDEX_TYPES_LINK=https://gpt-index.readthedocs.io/en/latest/core_modules/data_modules/index/index_guide.html#
-REACT_APP_RESPONSE_MODES_LINK=https://gpt-index.readthedocs.io/en/latest/core_modules/query_modules/response_synthesizers/usage_pattern.html#configuring-the-response-mode
+REACT_APP_INDEX_TYPES_LINK=https://github.com/l3vels/L3AGI/blob/main/docs/index-types.md
+REACT_APP_RESPONSE_MODES_LINK=https://github.com/l3vels/L3AGI/blob/main/docs/response-modes.md


# Images
4 changes: 2 additions & 2 deletions apps/ui/.env.production
@@ -27,8 +27,8 @@ REACT_APP_DEBATES_LINK=https://github.com/l3vels/L3AGI/blob/main/docs/team.md#ty
REACT_APP_DECENTRALIZED_SPEAKER_LINK=https://github.com/l3vels/L3AGI/blob/main/docs/team.md#types-of-teams
REACT_APP_TEAM_LINK=https://github.com/l3vels/L3AGI/blob/main/docs/team.md

-REACT_APP_INDEX_TYPES_LINK=https://gpt-index.readthedocs.io/en/latest/core_modules/data_modules/index/index_guide.html#
-REACT_APP_RESPONSE_MODES_LINK=https://gpt-index.readthedocs.io/en/latest/core_modules/query_modules/response_synthesizers/usage_pattern.html#configuring-the-response-mode
+REACT_APP_INDEX_TYPES_LINK=https://github.com/l3vels/L3AGI/blob/main/docs/index-types.md
+REACT_APP_RESPONSE_MODES_LINK=https://github.com/l3vels/L3AGI/blob/main/docs/response-modes.md

REACT_APP_STATIC_URL=https://static.l3vels.xyz
REACT_APP_DATA_TEST_MODE=true
58 changes: 58 additions & 0 deletions docs/index-types.md
@@ -0,0 +1,58 @@
# Index Types

This guide describes how each index works with diagrams. We also visually highlight our "Response Synthesis" modes.

Some terminology:

- **Node**: Corresponds to a chunk of text from a Document. LlamaIndex takes in Document objects and internally parses/chunks them into Node objects.
- **Response Synthesis**: Our module which synthesizes a response given the retrieved Nodes. You can see how to
[specify different response modes](setting-response-mode).
See below for an illustration of how each response mode works.
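
The Document-to-Node parsing described above can be pictured with a small sketch. This is an illustrative toy, not LlamaIndex's actual internals: the `Node` fields, `chunk_document` helper, and fixed character chunk size are simplified assumptions.

```python
# Hypothetical sketch of Document -> Node chunking; not the real LlamaIndex
# implementation, which chunks by tokens/sentences rather than raw characters.
from dataclasses import dataclass

@dataclass
class Node:
    text: str    # the chunk of text this Node holds
    doc_id: str  # which Document the chunk came from

def chunk_document(doc_id: str, text: str, chunk_size: int = 16) -> list[Node]:
    """Split a document's text into sequential fixed-size Nodes."""
    return [
        Node(text=text[i:i + chunk_size], doc_id=doc_id)
        for i in range(0, len(text), chunk_size)
    ]

nodes = chunk_document("doc-1", "LlamaIndex parses documents into node chunks.")
```

Joining the Node texts back together recovers the original document, which is the key invariant of chunking.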

## Summary Index

The summary index simply stores Nodes as a sequential chain.

![](https://docs.llamaindex.ai/en/stable/_images/list.png)

### Querying

During query time, if no other query parameters are specified, LlamaIndex simply loads all Nodes in the list into
our Response Synthesis module.

![](https://docs.llamaindex.ai/en/stable/_images/list_query.png)

The summary index also offers numerous ways of querying, from an embedding-based query that
fetches the top-k neighbors, to the addition of a keyword filter, as seen below:

![](https://docs.llamaindex.ai/en/stable/_images/list_filter_query.png)

## Vector Store Index

The vector store index stores each Node and a corresponding embedding in a [Vector Store](vector-store-index).

![](https://docs.llamaindex.ai/en/stable/_images/vector_store.png)

### Querying

Querying a vector store index involves fetching the top-k most similar Nodes, and passing
those into our Response Synthesis module.

![](https://docs.llamaindex.ai/en/stable/_images/vector_store_query.png)
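
Top-k similarity retrieval can be sketched with toy embeddings. This is an assumption-laden illustration: the `top_k` function, the two-dimensional vectors, and the in-memory dict stand in for a real embedding model and vector store.

```python
# Illustrative top-k retrieval over toy embeddings using cosine similarity;
# a real vector store index delegates this to an embedding model and a store.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_k(query: list[float], store: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the k Node ids whose embeddings are most similar to the query."""
    ranked = sorted(store, key=lambda nid: cosine(query, store[nid]), reverse=True)
    return ranked[:k]

store = {"n1": [1.0, 0.0], "n2": [0.0, 1.0], "n3": [0.9, 0.1]}
```

The retrieved top-k Nodes are what get passed on to Response Synthesis.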

## Tree Index

The tree index builds a hierarchical tree from a set of Nodes (which become leaf nodes in this tree).

![](https://docs.llamaindex.ai/en/stable/_images/tree.png)

### Querying

Querying a tree index involves traversing from root nodes down
to leaf nodes. By default (`child_branch_factor=1`), a query
chooses one child node given a parent node. If `child_branch_factor=2`, a query
chooses two child nodes per parent.

![](https://docs.llamaindex.ai/en/stable/_images/tree_query.png)
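
The traversal can be sketched as a recursive descent. This is a hedged toy: the `relevance` scorer (keyword overlap) and the dict-based tree are illustrative stand-ins for LlamaIndex's LLM-guided child selection.

```python
# Toy tree-index traversal: at each level, keep the child_branch_factor most
# relevant children (here "relevance" is just word overlap), down to the leaves.
def relevance(query: str, text: str) -> int:
    return len(set(query.lower().split()) & set(text.lower().split()))

def traverse(node: dict, query: str, child_branch_factor: int = 1) -> list[str]:
    """Return the leaf texts reached by following the top-scoring children."""
    children = node.get("children", [])
    if not children:
        return [node["text"]]  # leaf node: collect its text
    best = sorted(children, key=lambda c: relevance(query, c["text"]), reverse=True)
    leaves = []
    for child in best[:child_branch_factor]:  # one child per parent by default
        leaves.extend(traverse(child, query, child_branch_factor))
    return leaves

tree = {
    "text": "summary of everything",
    "children": [
        {"text": "about cats", "children": [{"text": "cats purr"}]},
        {"text": "about dogs", "children": [{"text": "dogs bark"}]},
    ],
}
```

With the default branch factor the query follows a single path; raising it widens the search at each level.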

For more information, please visit the [LlamaIndex docs](https://docs.llamaindex.ai/en/stable/core_modules/data_modules/index/index_guide.html).
44 changes: 44 additions & 0 deletions docs/response-modes.md
@@ -0,0 +1,44 @@
# Response Modes

1. **Refine Mode**:

- Best for detailed answers.
- Processes each text chunk individually with a unique LLM call per Node.
- Uses the `text_qa_template` for the first chunk, then sequentially refines the answer with `refine_template` for each subsequent chunk.
- If a chunk is too lengthy, it is split with `TokenTextSplitter`, creating additional chunks for processing.

2. **Compact Mode (default)**:

- Essentially, a more efficient version of refine in terms of LLM calls.
- Similar to refine but begins by concatenating the chunks.
- Aims to minimize LLM calls by packing as much text as possible within the context window.
- Splits oversized texts and treats each as a “chunk” for the refine synthesizer.

3. **Tree Summarize Mode**:

- Ideal for generating summaries.
- Utilizes the `summary_template` prompt for all chunk concatenations.
- Sends each chunk/split to the `summary_template`, with no refining.
- Recursive: answers from chunks are treated as new chunks until only one chunk remains.

4. **Simple Summarize Mode**:

- Provides quick summaries but may omit certain details.
- Directly truncates all text chunks to fit a single LLM prompt.

5. **No Text Mode**:

- Only retrieves the Nodes that would have been sent to the LLM, without actually querying the LLM.
- The retrieved Nodes can be inspected in `response.source_nodes`.

6. **Accumulate Mode**:

- Suitable when querying each text chunk individually.
- Uses the given query on every text chunk and accumulates responses into an array.
- The outcome is a merged string of all responses.

7. **Compact Accumulate Mode**:

- Merges the approach of the compact mode with the accumulate mode.
- “Compacts” each LLM prompt and runs the query on each text chunk.

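
The difference between refine and compact (modes 1 and 2) can be sketched with a counter on LLM calls. This is a hedged toy model, not LlamaIndex's implementation: `fake_llm`, the prompt strings, and the character-count context window are placeholder assumptions.

```python
# Toy contrast of refine vs. compact: refine issues one LLM call per chunk,
# while compact first packs chunks into as few prompts as fit a context window.
calls: list[str] = []

def fake_llm(prompt: str) -> str:
    """Placeholder for a real model call; records each prompt it receives."""
    calls.append(prompt)
    return f"answer#{len(calls)}"

def refine(chunks: list[str], query: str) -> str:
    answer = fake_llm(f"{query}\n{chunks[0]}")            # first chunk: qa prompt
    for chunk in chunks[1:]:
        answer = fake_llm(f"{query}\n{answer}\n{chunk}")  # later chunks: refine prompt
    return answer

def compact(chunks: list[str], query: str, window: int = 20) -> str:
    packed, current = [], ""
    for chunk in chunks:
        if current and len(current) + len(chunk) > window:
            packed.append(current)  # window full: start a new packed prompt
            current = ""
        current += chunk
    packed.append(current)
    return refine(packed, query)   # refine over the (fewer) packed chunks
```

For four short chunks that all fit one window, refine makes four LLM calls while compact makes only one, which is exactly the efficiency gain the compact mode description claims.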
For more information, please visit the [LlamaIndex docs](https://docs.llamaindex.ai/en/stable/core_modules/query_modules/query_engine/response_modes.html).
