WIP: Fragments #638
Conversation
Here's a bit of a weird problem. We're storing […] (Lines 476 to 482 in 9b2d4d6)

Here we are only even creating that […]. But now we need that database at the start of that CLI command, in case we need to resolve any fragment aliases using it. It's a bit weird that it's called […].
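The alias resolution that comment describes could be sketched like this. This is a minimal illustration against an in-memory SQLite database - the table names, columns, and helper are assumptions for illustration, not the PR's actual schema:

```python
import sqlite3


def resolve_fragment(db: sqlite3.Connection, ref: str) -> str:
    """Resolve ref as a fragment alias first, falling back to a fragment hash.

    Hypothetical schema: fragments(id, hash, content) and
    fragment_aliases(alias, fragment_id).
    """
    row = db.execute(
        """
        select fragments.content from fragment_aliases
        join fragments on fragments.id = fragment_aliases.fragment_id
        where fragment_aliases.alias = ?
        """,
        (ref,),
    ).fetchone()
    if row is None:
        # Not an alias - try treating ref as a fragment hash instead
        row = db.execute(
            "select content from fragments where hash = ?", (ref,)
        ).fetchone()
    if row is None:
        raise KeyError(f"Unknown fragment: {ref}")
    return row[0]


db = sqlite3.connect(":memory:")
db.executescript(
    """
    create table fragments (id integer primary key, hash text, content text);
    create table fragment_aliases (alias text, fragment_id integer);
    insert into fragments values (1, 'abc123', 'Documentation text');
    insert into fragment_aliases values ('docs', 1);
    """
)
print(resolve_fragment(db, "docs"))    # -> Documentation text
print(resolve_fragment(db, "abc123"))  # -> Documentation text
```

This is why the database has to exist before the prompt runs: any `-f alias` reference has to be looked up in it first.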
Lots more fragments available now - I added Datasette, […]
Claude idea: since fragments are visible on the […]. This is the simplest thing that could possibly work, but I think it would be worthwhile.
Refs:
TODO:

- Make the `llm prompt -f alias` form of the `-f` option work
- `llm fragments list` needs non-JSON output (that truncates) - this should work similar to the new `llm logs --short` mode
- `llm fragments -q one -q two`
- `-f fragment-hash` should work in addition to `-f fragment-alias`
- `llm fragments` family of commands
- `-f $URL` follows up to three redirects
- `prompt_json` still duplicates the prompt - I can use https://pypi.org/project/condense-json/ for that
- `llm logs` to correctly display prompts using this new mechanism
- `<details><summary>` by default in `llm logs` output, to make that output easier to visually scan. Could have a `-e/--expand` option to expand those fragments
- If the fragment filename ends in `.py` or `.js` then it gets wrapped in a syntax highlight block with the correct language tag - bit of a weird feature, maybe have this turned on by `llm logs --syntax` or similar? Might be easier to ignore programming languages but still wrap in triple backticks if it looks like code - if most of the lines are less than 120 chars for example
- `llm fragments show hash-or-alias` - to show the full fragment based on the hash from the previous command

Stretch goals:

- `llm chat` - so you can do `llm chat -f docs`
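The `prompt_json` deduplication item in the TODO list could work roughly like this plain-Python sketch. The actual plan is to use the condense-json package - this is not that library's API, and the `$fragment` reference key is an assumption for illustration:

```python
import json


def condense(obj, fragments):
    """Replace any string value that exactly matches a stored fragment
    with a short {"$fragment": hash} reference, so prompt_json no longer
    duplicates the full fragment text. Illustrative only."""
    if isinstance(obj, dict):
        return {k: condense(v, fragments) for k, v in obj.items()}
    if isinstance(obj, list):
        return [condense(v, fragments) for v in obj]
    if isinstance(obj, str):
        for hash_, text in fragments.items():
            if obj == text:
                return {"$fragment": hash_}
    return obj


fragments = {"abc123": "A very long shared documentation fragment"}
prompt_json = {"prompt": "A very long shared documentation fragment"}
print(json.dumps(condense(prompt_json, fragments)))
# {"prompt": {"$fragment": "abc123"}}
```

Displaying the prompt back in `llm logs` would then mean reversing the substitution against the fragments table.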
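The log-rendering items in the TODO list - `<details><summary>` collapsing, the `-e/--expand` option, the extension-to-language mapping, and the "most lines under 120 chars" heuristic - could be sketched together like this. Every name here (`LANG_BY_EXT`, `render_fragment`, the exact markup) is an assumption, not code from the PR:

```python
# Hypothetical extension -> language-tag mapping for the syntax idea
LANG_BY_EXT = {".py": "python", ".js": "javascript"}


def looks_like_code(text: str, max_line_length: int = 120) -> bool:
    # Heuristic from the TODO: treat text as code if nearly every
    # non-blank line is shorter than 120 characters (unwrapped prose
    # paragraphs tend to be longer). Deliberately rough.
    lines = [line for line in text.splitlines() if line.strip()]
    if not lines:
        return False
    return sum(len(line) < max_line_length for line in lines) / len(lines) > 0.9


def render_fragment(fragment: str, filename: str = "", expand: bool = False) -> str:
    ext = "." + filename.rsplit(".", 1)[-1] if "." in filename else ""
    lang = LANG_BY_EXT.get(ext)
    if lang:
        # Known extension: fence with the correct language tag
        body = f"```{lang}\n{fragment}\n```"
    elif looks_like_code(fragment):
        # No language known, but looks like code: plain triple backticks
        body = f"```\n{fragment}\n```"
    else:
        body = fragment
    if expand:  # the proposed -e/--expand option shows everything inline
        return body
    # Default: collapse behind <details> with the first line as a summary
    summary = fragment.splitlines()[0][:60] if fragment else ""
    return f"<details><summary>{summary}</summary>\n\n{body}\n\n</details>"


print(render_fragment("def add(a, b):\n    return a + b", "utils.py"))
```

One design question this surfaces: the heuristic fires on short prose too, which is presumably why the TODO hedges it with "if it looks like code".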