Swe-related changes #773

Merged 52 commits on May 16, 2025
Commits
5cda9a3
Refine tool descriptions and error handling for 'multiple' argument. …
JegernOUTT Apr 9, 2025
c4b993f
Merge branch 'dev' into swe-boosted-prompt
JegernOUTT Apr 16, 2025
310dcf0
Add models "o3" and "o4-mini" to known_models.rs
JegernOUTT Apr 21, 2025
c182a83
Refactor YAML configuration and code logic for problem-solving workfl…
JegernOUTT Apr 22, 2025
a16ca5f
Refine YAML task strategy and enhance error handling in chat client c…
JegernOUTT Apr 23, 2025
b552760
Enhance deep analysis with iterative refinement process
JegernOUTT Apr 23, 2025
36e2e1b
Merge branch 'dev' into swe-boosted-prompt
JegernOUTT Apr 25, 2025
bcdf0be
Replace `deep_analysis` with `strategic_planning` tool, add `critique…
JegernOUTT Apr 28, 2025
bb30df0
Handle decorated definitions in Python AST parsing and add test case
JegernOUTT Apr 28, 2025
15639a8
Remove root_cause_analysis tool and update related tools
JegernOUTT Apr 29, 2025
c24b049
Make `status` module public within the crate, enhance `RagStatus` and…
JegernOUTT Apr 29, 2025
4b7e4bc
Refactor tar file extraction with retries and improved error handling
JegernOUTT Apr 30, 2025
b6b23a2
Merge branch 'dev' into swe-boosted-prompt-verified
JegernOUTT Apr 30, 2025
320fde4
Update known models, improve token handling
JegernOUTT Apr 30, 2025
0bb3218
Refactor path handling logic to correctly manage absolute paths in to…
JegernOUTT Apr 30, 2025
c7387f5
Remove redundant test for parsing decorated Python code
JegernOUTT Apr 30, 2025
52ca88e
Increase shared memory allocation to 8g in Docker container creation …
JegernOUTT Apr 30, 2025
13006b8
Enhance strategic planning guidance and refine coding task strategy s…
JegernOUTT May 1, 2025
316928a
Add ToolDebugScript for Python Debugging and Update Docker Manager
JegernOUTT May 1, 2025
c5926f5
Refactor YAML strategy steps to improve clarity and flow
JegernOUTT May 1, 2025
c2331cc
Merge branch 'dev' into swe-boosted-prompt-verified
JegernOUTT May 2, 2025
5fe7f3d
Merge branch 'dev' into swe-boosted-prompt-verified
JegernOUTT May 2, 2025
bb49091
Correct typos and improve debug reports in tool_debug_script.rs
JegernOUTT May 4, 2025
d8016ce
Add suggested fix step to tool_debug_script and update usage guidance
JegernOUTT May 6, 2025
1538e05
Enhance regex search tool with path and text match separation, improv…
JegernOUTT May 6, 2025
16b37b3
Merge branch 'dev' into swe-boosted-prompt-verified
JegernOUTT May 6, 2025
f420f4a
Fix line number type conversion in regex search tool
JegernOUTT May 6, 2025
93def56
rm docker_wait_indexing_done
JegernOUTT May 6, 2025
77e27e9
Refine debug script workflow and enable strategic planning tool
JegernOUTT May 7, 2025
b2bd0c1
Remove critique and improvement iterations from strategic planning tool
JegernOUTT May 7, 2025
2dee2e7
Merge branch 'dev' into swe-boosted-prompt-verified
JegernOUTT May 7, 2025
8ef0efa
Refine PROMPT_AGENTIC_TOOLS and SOLVER_PROMPT instructions in customi…
JegernOUTT May 7, 2025
7dbe781
Remove automatic line inclusion when AST is unavailable and add handl…
JegernOUTT May 7, 2025
dc12dbc
Simplify duplicate content check and update test plan in YAML config
JegernOUTT May 8, 2025
ae69397
Enhance SOLVER_PROMPT with alternative solution suggestions for robus…
JegernOUTT May 9, 2025
0ff57ef
Remove root cause analysis from strategic planning procedure
JegernOUTT May 13, 2025
a00dfa2
Merge branch 'dev' into swe-boosted-prompt-verified
JegernOUTT May 14, 2025
c4c3d3b
Remove deprecated critique and debug_script tools, simplify Docker co…
JegernOUTT May 14, 2025
a0804f5
Refactor tool limit calculations for better scaling with context length
JegernOUTT May 14, 2025
694c9ee
Refactor file search and tagging logic in tools
JegernOUTT May 14, 2025
b301c1e
Remove bold formatting from task list in YAML configuration file
JegernOUTT May 14, 2025
38bf3ef
Remove commented-out code related to change requests and file inspect…
JegernOUTT May 14, 2025
23464f3
fix warnings
JegernOUTT May 14, 2025
23aa27f
Refactor tool filtering logic in ChatMode::EXPLORE and ChatMode::AGENT
JegernOUTT May 14, 2025
710cd3e
Update subchat tool parameters to optimize for thinking model type
JegernOUTT May 14, 2025
bd62022
Enhance logging with detailed request parameters in forward_to_openai…
JegernOUTT May 14, 2025
a0baab1
Add test comment warning against tool calls in tool_locate_search.rs
JegernOUTT May 14, 2025
a9d88b9
Refine system prompt for clearer task instructions and improved tool …
JegernOUTT May 14, 2025
2a6b4d5
Add `what_to_find` parameter for contextual file location
JegernOUTT May 15, 2025
97068f4
Improve tool_locate_search.rs with new `cat` tool and clarify prompts
JegernOUTT May 15, 2025
2a8c279
Merge branch 'dev' into swe-useful-changes
JegernOUTT May 15, 2025
6d984f1
Remove tool_relevant_files and update tool references
JegernOUTT May 16, 2025
6 changes: 3 additions & 3 deletions refact-agent/engine/README.md
@@ -63,9 +63,9 @@ Installable by the end user:

- [x] Code completion with RAG
- [x] Chat with tool usage
- [x] definition() references() tools
- [x] vecdb search() with scope (semantic search)
- [x] regex_search() with scope (pattern matching)
- [x] search_symbol_definition() search_symbol_usages() tools
- [x] search_semantic() with scope (semantic search)
- [x] search_pattern() with scope (pattern matching)
- [x] @file @tree @web @definition @references @search mentions in chat
- [x] locate() uses test-time compute to find good project cross-section
- [x] Latest gpt-4o gpt-4o-mini
@@ -548,7 +548,10 @@ def print_messages(

def con(x):
if console:
console.print(x)
try:
console.print(x)
except:
print(x)

def _is_tool_call(m: Message) -> bool:
return m.tool_calls is not None and len(m.tool_calls) > 0
7 changes: 6 additions & 1 deletion refact-agent/engine/src/ast/ast_db.rs
@@ -294,10 +294,15 @@ pub async fn doc_remove(ast_index: Arc<AMutex<AstDB>>, cpath: &String)
}

pub async fn doc_defs(ast_index: Arc<AMutex<AstDB>>, cpath: &String) -> Vec<Arc<AstDefinition>>
{
let db = ast_index.lock().await.sleddb.clone();
doc_def_internal(db, cpath)
}

pub fn doc_def_internal(db: Arc<Db>, cpath: &String) -> Vec<Arc<AstDefinition>>
{
let to_search_prefix = filesystem_path_to_double_colon_path(cpath);
let d_prefix = format!("d|{}::", to_search_prefix.join("::"));
let db = ast_index.lock().await.sleddb.clone();
let mut defs = Vec::new();
let mut iter = db.scan_prefix(d_prefix);
while let Some(Ok((_, value))) = iter.next() {
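The hunk above extracts a synchronous core (`doc_def_internal`) from the async `doc_defs`, so callers that already hold a database handle (e.g. the tree printer in `at_tree.rs`) can query definitions without an async context. A minimal sketch of the pattern, with illustrative names and a `Vec` standing in for the sled database:

```rust
use std::sync::{Arc, Mutex};

struct Index {
    entries: Vec<String>,
}

// Entry point: acquires the lock, clones out the data handle, then delegates.
// (In the PR this is the async fn that clones `sleddb` out of the AstDB mutex.)
fn doc_defs(index: Arc<Mutex<Index>>, prefix: &str) -> Vec<String> {
    let entries = index.lock().unwrap().entries.clone();
    doc_defs_internal(&entries, prefix)
}

// Extracted synchronous core: usable from any caller that already has the data.
fn doc_defs_internal(entries: &[String], prefix: &str) -> Vec<String> {
    entries.iter().filter(|e| e.starts_with(prefix)).cloned().collect()
}
```

The design choice: only the lock acquisition stays async-adjacent; the actual scan is pure and reusable.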
2 changes: 1 addition & 1 deletion refact-agent/engine/src/ast/ast_structs.rs
@@ -65,7 +65,7 @@ pub struct AstDB {
pub ast_max_files: usize,
}

#[derive(Serialize, Clone)]
#[derive(Serialize, Deserialize, Clone, Debug)]
pub struct AstStatus {
#[serde(skip)]
pub astate_notify: Arc<ANotify>,
72 changes: 35 additions & 37 deletions refact-agent/engine/src/at_commands/at_tree.rs
@@ -3,18 +3,16 @@ use std::path::PathBuf;
use std::sync::{Arc, RwLock};

use async_trait::async_trait;
use sled::Db;
use tokio::sync::Mutex as AMutex;
use tracing::warn;

use crate::ast::ast_structs::AstDB;
// use crate::ast::ast_indexer_thread::AstIndexService;
// use crate::ast::treesitter::structs::SymbolType;
use crate::ast::ast_structs::{AstDB, SymbolType};
use crate::at_commands::at_commands::{AtCommand, AtCommandsContext, AtParam};
use crate::at_commands::at_file::return_one_candidate_or_a_good_error;
use crate::at_commands::execute_at::AtCommandMember;
use crate::call_validation::{ChatMessage, ContextEnum};
use crate::files_correction::{correct_to_nearest_dir_path, get_project_dirs, paths_from_anywhere};
// use crate::files_in_workspace::Document;


pub struct AtTree {
@@ -113,36 +111,27 @@ pub fn construct_tree_out_of_flat_list_of_paths(paths_from_anywhere: &Vec<PathBu
root_nodes
}

fn _print_symbols(_entry: &PathsHolderNode) -> String
{
// XXX fix tree
// if let Some(ast) = ast_index_maybe {
// let doc = Document { doc_path: entry.path.clone(), doc_text: None };
// match ast.get_by_file_path(RequestSymbolType::Declaration, &doc) {
// Ok(symbols) => {
// let symbols_list = symbols
// .iter()
// .filter(|x| x.symbol_type == SymbolType::StructDeclaration
// || x.symbol_type == SymbolType::FunctionDeclaration)
// .filter(|x| !x.name.is_empty() && !x.name.starts_with("anon-"))
// .map(|x| x.name.clone())
// .collect::<Vec<String>>()
// .join(", ");
// if !symbols_list.is_empty() { format!(" ({symbols_list})") } else { "".to_string() }
// }
// Err(_) => "".to_string()
// }
// } else {
"".to_string()
// }
fn _print_symbols(db: Arc<Db>, entry: &PathsHolderNode) -> String {
let cpath = entry.path.to_string_lossy().to_string();
let defs = crate::ast::ast_db::doc_def_internal(db.clone(), &cpath);
let symbols_list = defs
.iter()
.filter(|x| match x.symbol_type {
SymbolType::StructDeclaration | SymbolType::TypeAlias | SymbolType::FunctionDeclaration => true,
_ => false
})
.map(|x| x.name())
.collect::<Vec<String>>()
.join(", ");
if !symbols_list.is_empty() { format!(" ({symbols_list})") } else { "".to_string() }
}

fn _print_files_tree(
async fn _print_files_tree(
tree: &Vec<PathsHolderNodeArc>,
ast_db: Option<Arc<AMutex<AstDB>>>,
maxdepth: usize,
) -> String {
fn traverse(node: &PathsHolderNodeArc, depth: usize, maxdepth: usize, ast_db: Option<Arc<AMutex<AstDB>>>) -> Option<String> {
fn traverse(node: &PathsHolderNodeArc, depth: usize, maxdepth: usize, db_mb: Option<Arc<Db>>) -> Option<String> {
if depth > maxdepth {
return None;
}
@@ -151,14 +140,18 @@ fn _print_files_tree(
let indent = " ".repeat(depth);
let name = if node.is_dir { format!("{}/", node.file_name()) } else { node.file_name() };
if !node.is_dir {
output.push_str(&format!("{}{}{}\n", indent, name, _print_symbols(&node)));
if let Some(db) = &db_mb {
output.push_str(&format!("{}{}{}\n", indent, name, _print_symbols(db.clone(), &node)));
} else {
output.push_str(&format!("{}{}\n", indent, name));
}
return Some(output);
}
output.push_str(&format!("{}{}\n", indent, name));
let (mut dirs, mut files) = (0, 0);
let mut child_output = String::new();
for child in &node.child_paths {
if let Some(child_str) = traverse(child, depth + 1, maxdepth, ast_db.clone()) {
if let Some(child_str) = traverse(child, depth + 1, maxdepth, db_mb.clone()) {
child_output.push_str(&child_str);
} else {
dirs += child.0.read().unwrap().is_dir as usize;
@@ -175,7 +168,12 @@

let mut result = String::new();
for node in tree {
if let Some(output) = traverse(&node, 0, maxdepth, ast_db.clone()) {
let db_mb = if let Some(ast) = ast_db.clone() {
Some(ast.lock().await.sleddb.clone())
} else {
None
};
if let Some(output) = traverse(&node, 0, maxdepth, db_mb.clone()) {
result.push_str(&output);
} else {
break;
@@ -184,20 +182,20 @@
result
}

fn _print_files_tree_with_budget(
async fn _print_files_tree_with_budget(
tree: Vec<PathsHolderNodeArc>,
char_limit: usize,
ast_db: Option<Arc<AMutex<AstDB>>>,
) -> String {
let mut good_enough = String::new();
for maxdepth in 1..20 {
let bigger_tree_str = _print_files_tree(&tree, ast_db.clone(), maxdepth);
let bigger_tree_str = _print_files_tree(&tree, ast_db.clone(), maxdepth).await;
if bigger_tree_str.len() > char_limit {
break;
}
good_enough = bigger_tree_str;
}
return good_enough;
good_enough
}

pub async fn print_files_tree_with_budget(
@@ -218,10 +216,11 @@
}
match ast_module_option {
Some(ast_module) => {
crate::ast::ast_indexer_thread::ast_indexer_block_until_finished(ast_module.clone(), 20_000, true).await;
let ast_db: Option<Arc<AMutex<AstDB>>> = Some(ast_module.lock().await.ast_index.clone());
Ok(_print_files_tree_with_budget(tree, char_limit, ast_db.clone()))
Ok(_print_files_tree_with_budget(tree, char_limit, ast_db.clone()).await)
}
None => Ok(_print_files_tree_with_budget(tree, char_limit, None)),
None => Ok(_print_files_tree_with_budget(tree, char_limit, None).await),
}
}

@@ -267,7 +266,6 @@ impl AtCommand for AtTree {
};

let use_ast = args.iter().any(|x| x.text == "--ast");

let tree = print_files_tree_with_budget(ccx.clone(), tree, use_ast).await.map_err(|err| {
warn!("{}", err);
err
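The rewritten `_print_symbols` in this file filters a file's definitions down to type and function declarations and renders them as a parenthesized suffix on the tree line. A self-contained sketch of that filtering step (hypothetical `Def` struct standing in for `AstDefinition`):

```rust
enum SymbolType { StructDeclaration, TypeAlias, FunctionDeclaration, Other }

struct Def {
    name: String,
    symbol_type: SymbolType,
}

// Keep only declarations worth showing in the tree; join them as " (a, b)".
// Returns an empty string when nothing survives the filter, so the tree line
// stays clean for files with no interesting symbols.
fn symbols_suffix(defs: &[Def]) -> String {
    let list = defs
        .iter()
        .filter(|d| matches!(
            d.symbol_type,
            SymbolType::StructDeclaration | SymbolType::TypeAlias | SymbolType::FunctionDeclaration
        ))
        .map(|d| d.name.clone())
        .collect::<Vec<String>>()
        .join(", ");
    if !list.is_empty() { format!(" ({list})") } else { String::new() }
}
```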
2 changes: 1 addition & 1 deletion refact-agent/engine/src/file_filter.rs
@@ -3,7 +3,7 @@ use std::fs;
use std::os::unix::fs::PermissionsExt;
use std::path::PathBuf;

const LARGE_FILE_SIZE_THRESHOLD: u64 = 180*1024; // 180k files (180k is ~0.2% of all files on our dataset)
const LARGE_FILE_SIZE_THRESHOLD: u64 = 4096*1024; // 4Mb files
const SMALL_FILE_SIZE_THRESHOLD: u64 = 5; // 5 Bytes

pub const SOURCE_FILE_EXTENSIONS: &[&str] = &[
18 changes: 12 additions & 6 deletions refact-agent/engine/src/forward_to_openai_endpoint.rs
@@ -49,9 +49,12 @@ pub async fn forward_to_openai_style_endpoint(
data["temperature"] = serde_json::Value::from(temperature);
}
data["max_completion_tokens"] = serde_json::Value::from(sampling_parameters.max_new_tokens);
info!("NOT STREAMING TEMP {}", sampling_parameters.temperature
.map(|x| x.to_string())
.unwrap_or("None".to_string()));
info!("Request: model={}, reasoning_effort={}, T={}, n={}, stream=false",
model_rec.name,
sampling_parameters.reasoning_effort.clone().map(|x| x.to_string()).unwrap_or("none".to_string()),
sampling_parameters.temperature.clone().map(|x| x.to_string()).unwrap_or("none".to_string()),
sampling_parameters.n.clone().map(|x| x.to_string()).unwrap_or("none".to_string())
);
if is_passthrough {
passthrough_messages_to_json(&mut data, prompt, &model_rec.name);
} else {
@@ -133,9 +136,12 @@ pub async fn forward_to_openai_style_endpoint_streaming(
}
data["max_completion_tokens"] = serde_json::Value::from(sampling_parameters.max_new_tokens);

info!("STREAMING TEMP {}", sampling_parameters.temperature
.map(|x| x.to_string())
.unwrap_or("None".to_string()));
info!("Request: model={}, reasoning_effort={}, T={}, n={}, stream=true",
model_rec.name,
sampling_parameters.reasoning_effort.clone().map(|x| x.to_string()).unwrap_or("none".to_string()),
sampling_parameters.temperature.clone().map(|x| x.to_string()).unwrap_or("none".to_string()),
sampling_parameters.n.clone().map(|x| x.to_string()).unwrap_or("none".to_string())
);

if let Some(meta) = meta {
data["meta"] = json!(meta);
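The new log lines above repeat the same idiom four times: render an `Option`-valued sampling parameter as a string, falling back to `"none"`. That idiom can be factored into a tiny helper (illustrative name; the PR keeps the inline `map`/`unwrap_or` form):

```rust
// Render an optional value for logging, with "none" as the absent marker.
// Works for any type that implements ToString (f32 temperature, usize n, etc.).
fn opt_to_log<T: ToString>(v: &Option<T>) -> String {
    v.as_ref().map(|x| x.to_string()).unwrap_or_else(|| "none".to_string())
}
```

With a helper like this, the `info!` call shrinks to `info!("T={}, n={}", opt_to_log(&params.temperature), opt_to_log(&params.n))`.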
3 changes: 2 additions & 1 deletion refact-agent/engine/src/http/routers/v1/chat.rs
@@ -48,7 +48,8 @@ pub fn available_tools_by_chat_mode(current_tools: Vec<Value>, chat_mode: &ChatM
ChatMode::NO_TOOLS => {
vec![]
},
ChatMode::EXPLORE | ChatMode::AGENT => current_tools,
ChatMode::AGENT => filter_out_tools(&current_tools, &vec!["search_symbol_definition", "search_symbol_usages", "search_pattern", "search_semantic"]),
ChatMode::EXPLORE => current_tools,
ChatMode::CONFIGURE => filter_out_tools(&current_tools, &vec!["tree", "locate", "knowledge", "search"]),
ChatMode::PROJECT_SUMMARY => keep_tools(&current_tools, &vec!["cat", "tree", "bash"]),
}
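The hunk above splits `EXPLORE` and `AGENT`: agent mode now drops the four low-level search tools. A sketch of the `filter_out_tools` behavior it relies on — note the real function operates on `serde_json::Value` tool definitions, so plain strings here are a simplification:

```rust
// Remove blocked tool names from the available set, keeping order.
fn filter_out_tools(tools: &[String], blocked: &[&str]) -> Vec<String> {
    tools
        .iter()
        .filter(|t| !blocked.contains(&t.as_str()))
        .cloned()
        .collect()
}
```

Usage mirroring the AGENT arm: pass the full tool list plus `["search_symbol_definition", "search_symbol_usages", "search_pattern", "search_semantic"]` as the block list.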
10 changes: 5 additions & 5 deletions refact-agent/engine/src/http/routers/v1/status.rs
@@ -1,20 +1,20 @@
use axum::Extension;
use axum::response::Result;
use hyper::{Body, Response, StatusCode};
use serde::Serialize;
use serde::{Deserialize, Serialize};

use crate::ast::ast_structs::AstStatus;
use crate::custom_error::ScratchError;
use crate::global_context::SharedGlobalContext;

#[derive(Serialize)]
#[derive(Serialize, Deserialize, Debug)]
pub struct RagStatus {
pub ast: Option<AstStatus>,
ast_alive: String,
pub ast_alive: String,
#[cfg(feature="vecdb")]
pub vecdb: Option<crate::vecdb::vdb_structs::VecDbStatus>,
vecdb_alive: String,
vec_db_error: String,
pub vecdb_alive: String,
pub vec_db_error: String,
}

pub async fn get_rag_status(gcx: SharedGlobalContext) -> RagStatus {
11 changes: 8 additions & 3 deletions refact-agent/engine/src/integrations/integr_pdb.rs
@@ -122,15 +122,20 @@ impl Tool for ToolPdb {
if python_command.is_empty() {
python_command = "python3".to_string();
}
if command_args.windows(2).any(|w| w == ["-m", "pdb"]) {
if command_args.iter().any(|x| x.starts_with("python")) {
if !command_args.windows(2).any(|w| w == ["-m", "pdb"]) {
let python_index = command_args.iter().position(|x| x.starts_with("python")).ok_or("Open a new session by running pdb(\"python -m pdb my_script.py\")")?;
command_args.insert(python_index + 1, "pdb".to_string());
command_args.insert(python_index + 1, "-m".to_string());
}
let output = start_pdb_session(&python_command, &mut command_args, &session_hashmap_key, &workdir_maybe, gcx.clone(), 10).await?;
return Ok(tool_answer(output, tool_call_id));
}

let command_session = {
let gcx_locked = gcx.read().await;
gcx_locked.integration_sessions.get(&session_hashmap_key)
.ok_or("There is no active pdb session in this chat, you can open it by running pdb(\"python -m pdb my_script.py\")")?
.ok_or("There is no active pdb session in this chat. Open a new session by running pdb(\"python -m pdb my_script.py\")")?
.clone()
};

Expand All @@ -139,7 +144,7 @@ impl Tool for ToolPdb {
.ok_or("Failed to downcast to PdbSession")?;

let output = match command_args[0].as_str() {
"kill" => {
"kill" | "q" | "quit" => {
let mut gcx_locked = gcx.write().await;
gcx_locked.integration_sessions.remove(&session_hashmap_key);
"Pdb session has been killed".to_string()
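The pdb hunk above auto-injects `-m pdb` after the interpreter when the user runs a plain `python my_script.py`. The argument rewriting can be sketched as a standalone helper (hypothetical function name; the real code mutates the tool's parsed `command_args` in place):

```rust
// If the command invokes python without "-m pdb", insert the two tokens
// right after the interpreter so the script runs under the debugger.
fn ensure_pdb_module(mut args: Vec<String>) -> Result<Vec<String>, String> {
    if args.iter().any(|x| x.starts_with("python")) {
        if !args.windows(2).any(|w| w == ["-m", "pdb"]) {
            let python_index = args
                .iter()
                .position(|x| x.starts_with("python"))
                .ok_or("no python interpreter found in arguments".to_string())?;
            // Insert in reverse order so the result reads "python -m pdb ...".
            args.insert(python_index + 1, "pdb".to_string());
            args.insert(python_index + 1, "-m".to_string());
        }
        return Ok(args);
    }
    Err("expected a command starting with python".to_string())
}
```

The `windows(2)` check makes the rewrite idempotent: a command that already contains `-m pdb` passes through untouched.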
6 changes: 5 additions & 1 deletion refact-agent/engine/src/known_models.json
@@ -432,7 +432,11 @@
"scratchpad": "PASSTHROUGH",
"tokenizer": "hf://Xenova/gpt-4o",
"similar_models": [
"openai/o3-mini"
"openai/o3-mini",
"o3",
"openai/o3",
"o4-mini",
"openai/o4-mini"
]
},
"o4-mini": {
8 changes: 4 additions & 4 deletions refact-agent/engine/src/postprocessing/pp_context_files.rs
@@ -89,7 +89,8 @@ fn collect_lines_from_files(
let file = lines.first().unwrap().file_ref.clone();
if file.symbols_sorted_by_path_len.is_empty() {
info!("{file_name} ignoring skeletonize because no symbols found in the file, maybe the file format is not supported or the file is empty");
lines.iter_mut().for_each(|x| x.take_ignoring_floor = true);
// Don't automatically mark all lines for inclusion when AST is unavailable
// This will be handled based on specific line range requests in convert_input_into_usefullness
}
}

@@ -316,10 +317,9 @@
anything = true;
if first_line == 0 { first_line = i; }
if i > prev_line + 1 {
out.push_str("...\n".to_string().as_str());
out.push_str("...\n");
}
out.push_str(&line_ref.line_content);
out.push_str("\n");
out.push_str(&format!("{:4} | {}\n", line_ref.line_n + 1, line_ref.line_content));
prev_line = i;
}
if last_line > prev_line + 1 {
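The second hunk in this file changes the merged output format: instead of raw content, each line is now prefixed with its 1-based number, right-aligned in a four-character column. A sketch of the per-line rendering (helper name is illustrative):

```rust
// Render one context line the way pp_limit_and_merge now emits it:
// line numbers are 0-based internally, 1-based and width-4 in the output.
fn render_line(line_n: usize, content: &str) -> String {
    format!("{:4} | {}\n", line_n + 1, content)
}
```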
@@ -348,7 +348,7 @@ fn compress_duplicate_context_files(messages: &mut Vec<ChatMessage>) -> Result<(
if !compressed_files.is_empty() {
let compressed_files_str = compressed_files.join(", ");
if remaining_files.is_empty() {
let summary = format!("💿 Duplicate context file compressed: '{}' files were shown earlier in the conversation history", compressed_files_str);
let summary = format!("💿 Duplicate files compressed: '{}' files were shown earlier in the conversation history. Do not ask for these files again.", compressed_files_str);
messages[file.msg_idx].content = ChatContent::SimpleText(summary);
messages[file.msg_idx].role = "cd_instruction".to_string();
tracing::info!("Stage 0: Fully compressed ContextFile at index {}: all {} files removed",
14 changes: 4 additions & 10 deletions refact-agent/engine/src/scratchpads/scratchpad_utils.rs
@@ -65,17 +65,11 @@ pub fn max_tokens_for_rag_chat_by_tools(
};

let tool_limit = match tool.function.name.as_str() {
"search" | "regex_search" | "definition" | "references" | "cat" if is_cat_with_lines => {
if context_files_len < crate::http::routers::v1::chat::CHAT_TOP_N {
// Scale down proportionally to how much we exceed the context limit
let scaling_factor = crate::http::routers::v1::chat::CHAT_TOP_N as f64 / context_files_len as f64;
(4096.0 * scaling_factor) as usize
} else {
4096
}
"search_semantic" | "search_pattern" | "search_symbol_definition" | "search_symbol_usages" | "cat" if is_cat_with_lines => {
(4096 * context_files_len).min(base_limit / 2).max(4096)
},
"cat" | "locate" => 8192,
_ => 4096, // Default limit for other tools
"cat" | "locate" => (8192 * context_files_len).min(base_limit / 2).max(8192),
_ => (4096 * context_files_len).min(base_limit / 2).max(4096)
};

overall_tool_limit += tool_limit;
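The new budget formula above replaces the old top-N scaling: each tool's token limit now grows linearly with the number of context files, is capped at half the base budget, and never drops below the per-tool floor. The arithmetic in isolation:

```rust
// Per-tool token budget from the revised max_tokens_for_rag_chat_by_tools:
// per_file is the floor (4096 for search tools, 8192 for cat/locate).
fn tool_token_limit(per_file: usize, context_files_len: usize, base_limit: usize) -> usize {
    (per_file * context_files_len).min(base_limit / 2).max(per_file)
}
```

For example, with a 64k base budget a search tool gets 4096 tokens for one file, and tops out at 32k (half the budget) once enough files are in play.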