Merge master into feature/flare-inline #7261
Merged: Hweinstock merged 31 commits into feature/flare-inline from autoMerge/feature/flare-inline on May 13, 2025
Conversation
## Problem

We introduced `editorState` in the data instrumentation launch. The service imposes a 40k character limit on the `text` field.

## Solution

Implement a check on text length. If the text exceeds 40k characters, take at most 20k characters from each side of the current cursor position, so the final text is always at most 40k.

Validated against the prod endpoint that inline works for files > 40k characters. Example request id: `57bbbe65-fbe7-47fc-81c4-c65262f47ce8`

---
- Treat all work as PUBLIC. Private `feature/x` branches will not be squash-merged at release time.
- Your code changes must meet the guidelines in [CONTRIBUTING.md](https://github.com/aws/aws-toolkit-vscode/blob/master/CONTRIBUTING.md#guidelines).
- License: I confirm that my contribution is made under the terms of the Apache 2.0 license.
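The truncation described above can be sketched roughly as follows. This is an illustrative helper, not the toolkit's actual implementation; the function name and parameters are hypothetical.

```typescript
// Hypothetical sketch: keep at most `maxEach` characters on each side of the
// cursor, so the returned text never exceeds 2 * maxEach (40k with defaults).
function truncateAroundCursor(text: string, cursorOffset: number, maxEach: number = 20_000): string {
    const start = Math.max(0, cursorOffset - maxEach)
    const end = Math.min(text.length, cursorOffset + maxEach)
    return text.slice(start, end)
}
```

Note that when the cursor is near the start or end of the file, one side contributes fewer characters, so the result can be shorter than 40k but never longer.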
This new name more accurately represents what this class is for: it is just a util to create the "Amazon Q generating" inline message.

- Class is renamed
- File is renamed and moved out of the "stateTracker" folder

Signed-off-by: nkomonen-amazon <[email protected]>
…lableCustomization (#7242) (#7266)

This reverts commit 98b0d5d.

## Problem

It regressed #7181 and broke it: the profile will be changed, but the customization is always swapped back to the default.

## Solution

Revert the commit.
## Problem

Inconsistent behavior when opening agent tabs (/review, /doc, etc.). When the tab is reused it keeps the prompt input options visible, but when a new tab is created it doesn't.

https://github.com/user-attachments/assets/2ff7264f-f7a3-46f6-9a34-e29835768833

## Solution

Set `promptInputOptions` to empty when an existing tab is reused.
I saw while testing that the "Amazon Q is generating..." message got stuck at some point. I think this fix should avoid that.

Signed-off-by: nkomonen-amazon <[email protected]>
This was a regression that appeared while doing the port to flare; now we show the spinning symbol when generating a suggestion. Additionally, the file was renamed more appropriately since it now only contains the status-bar-related code.

Signed-off-by: nkomonen-amazon <[email protected]>
## Problem

The tutorial trackers aren't implemented when using the language server.

## Solution

- Re-add the inlineLineAnnotationController (inlineChatTutorialAnnotation) for adding hints with inline chat
- Re-add the lineAnnotationController (inlineTutorialAnnotation) for adding the inline suggestions tutorial

## Notes

In a future PR I'll fully deprecate the old trackers.
## Problem

The clientId from `clientParams.initializationOptions?.aws?.clientInfo?.clientId` is random on every restart.

## Solution

Use the client id from the telemetry utils.
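The idea of a stable client id can be sketched as a generate-once, persist, and reuse pattern. This is an illustrative sketch, not the toolkit's telemetry utils; the store key and function name are hypothetical, and a `Map` stands in for the extension's persistent state.

```typescript
import { randomUUID } from 'crypto'

// Hypothetical sketch: the id is generated once and persisted, so restarts
// reuse the same value instead of producing a new random one each time.
function getClientId(store: Map<string, string>): string {
    let id = store.get('telemetryClientId')
    if (id === undefined) {
        id = randomUUID()
        store.set('telemetryClientId', id)
    }
    return id
}
```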
…7234)

## Problem

The only metric it looks like we're missing for inline on the vscode side is `codewhisperer_clientComponentLatency`.

## Solution

codewhisperer_clientComponentLatency uses a very similar implementation as before. The only differences are:

1. codewhispererCredentialFetchingLatency is no longer relevant because the token is always injected into the language server, and it doesn't need to build the client on demand like before.
   - This causes the preprocessing latency to decrease, because it used to include the time it takes to fetch the credentials.
2. Postprocessing latency is much lower because once we get the result, vscode instantly displays it; we no longer have control of that.

Example metric now:

```
2025-05-06 11:53:59.858 [debug] telemetry: codewhisperer_clientComponentLatency { Metadata: { codewhispererAllCompletionsLatency: '792.7122090000048', codewhispererCompletionType: 'Line', codewhispererCredentialFetchingLatency: '0', codewhispererCustomizationArn: 'arn:aws:codewhisperer:us-east-1:12345678910:customization/AAAAAAAAAA', codewhispererEndToEndLatency: '792.682249999998', codewhispererFirstCompletionLatency: '792.6440000000002', codewhispererLanguage: 'java', codewhispererPostprocessingLatency: '0.019500000002153683', codewhispererPreprocessingLatency: '0.007166999996115919', codewhispererRequestId: 'XXXXXXXXXXXXXXXXXXXXXXXXXXX', codewhispererTriggerType: 'AutoTrigger', credentialStartUrl: 'https://XXXXX.XXXXX.com/start', awsAccount: 'not-set', awsRegion: 'us-east-1' }, Value: 1, Unit: 'None', Passive: true }
```
## Problem

We no longer need local workspace context, since it's now provided through flare.

## Solution

Deprecate it.
## Problem

For some reason inline suggestions won't auto-complete in function args, e.g.

```
def getName(
```

or

```
def getName(firstName,
```

if you don't provide a range.

## Solution

Provide a range similar to what the old implementation was doing.
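To illustrate what "provide a range" means here: a minimal sketch, assuming the range spans from the cursor to the end of the current line (a common choice for inline completion items, since it lets the suggestion replace trailing text such as auto-closed brackets). The `Pos` interface stands in for `vscode.Position`, and the helper name is hypothetical; this is not the toolkit's actual implementation.

```typescript
// Simplified stand-in for vscode.Position
interface Pos { line: number; character: number }

// Hypothetical sketch: build the completion range from the cursor position
// to the end of the line, so the suggestion can overwrite trailing text.
function completionRange(cursor: Pos, lineText: string): { start: Pos; end: Pos } {
    return {
        start: cursor,
        end: { line: cursor.line, character: lineText.length },
    }
}
```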
When the server crashes and then restarts, we will emit a metric to indicate it crashed. When querying, look for: `metadata.metricName: languageServer_crash` & `metadata.id: AmazonQ`

Signed-off-by: nkomonen-amazon <[email protected]>
## Problem

Inline tests don't work with the language server.

## Solution

- I removed the `${name} invoke on unsupported filetype` test because you can't trigger inline completions at all for unsupported filetypes (vscode doesn't allow it; you need to specify file extensions).
- Hack: I have no way to spy on actual inline completions, so I added a global variable that lets the tests know when recommendations are being generated. This allows us to wait before accepting/rejecting requests.
- Hack: codewhisperer_perceivedLatency and codewhisperer_serviceInvocation don't have the result field set in flare, so they cause a lot of noisy logs. TODO: add the result field there.
## Problem

FileCreationFailed exceptions are displayed as UnknownException in telemetry. This exception is new and we want to separate it out from other unknown exceptions.

## Solution

Return an API service error with `FileCreationFailedException`.
## Problem

The extension version sent to the Q LSP is hardcoded.

## Solution

Send the actual extension version.

BEFORE:

```
aws-sdk-nodejs/2.1692.0 darwin/v23.10.0 AWS-Language-Servers AWS-CodeWhisperer/0.1.0 AmazonQ-For-VSCode/0.0.1 Visual-Studio-Code---Insiders/1.100.0-insider ClientId/c342ab45-6aba-4118-b48c-44dcedb10a78 promise
```

AFTER:

```
aws-sdk-nodejs/2.1692.0 darwin/v23.10.0 AWS-Language-Servers AWS-CodeWhisperer/0.1.0 AmazonQ-For-VSCode/testPluginVersion Visual-Studio-Code---Insiders/1.100.0-insider ClientId/c342ab45-6aba-4118-b48c-44dcedb10a78 promise
```
## Problem

- No logs are emitted for telemetry events.

## Solution

- Add logs when telemetry events are emitted.
## Problem

When typing a decent amount of text in quick succession, the language server gets throttled in its requests to the backend. This is because we send a request for recommendations on every keystroke, causing the language server to make a backend request on each keystroke. This is rightfully getting throttled by the backend.

## Solution

- The ideal behavior is that we only make a request to the language server, and thus to the backend, when typing stops. This is therefore an ideal use case for `debounce`; however, we need to extend debounce slightly, as outlined below.
- Apply `debounce` to the recommendations such that we wait 20 ms after typing stops before fetching the results.
- By applying this at the recommendation level, none of the inline latency metrics are affected.

### Debounce Changes

Let f be some debounced function that takes a string argument. Our current debounce does the following:

```
f('a')
f('ab')
f('abc')
(pause for debounce delay) -> f would be called with 'a'
```

The issue is that for suggestions, this means the language server request will be made with stale context (i.e. not including our most recent content). What we want instead is for the case above to call f with `'abc'`, not with `'a'` or `'ab'`. We can accomplish this by adding a flag to `debounce` that lets us choose whether it is called with the first args of the debounce interval (the default, `'a'` in the example above) or the most recent args (`'abc'` in the example above).

## Verification

- I did not notice the added latency when testing inline. However, it does seem slower than prod, with and without this change.
- I was not able to get a throttling exception.
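The debounce variant described above can be sketched like this. It is a minimal illustration, not the toolkit's actual util; the `useLatestArgs` flag name is hypothetical.

```typescript
// Hypothetical sketch of a debounce with a flag controlling which arguments
// reach the wrapped function. With useLatestArgs=true, the most recent call's
// arguments win, so the eventual request includes the newest editor content.
function debounce<A extends unknown[]>(
    fn: (...args: A) => void,
    delayMs: number,
    useLatestArgs: boolean = false
): (...args: A) => void {
    let timer: ReturnType<typeof setTimeout> | undefined
    let pendingArgs: A | undefined
    return (...args: A) => {
        if (pendingArgs === undefined || useLatestArgs) {
            pendingArgs = args
        }
        if (timer !== undefined) {
            clearTimeout(timer)
        }
        timer = setTimeout(() => {
            const a = pendingArgs as A
            pendingArgs = undefined
            timer = undefined
            fn(...a)
        }, delayMs)
    }
}
```

With the flag set, the `f('a'); f('ab'); f('abc')` sequence from the example fires once, with `'abc'`; with the default, it fires once with `'a'`.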
…7291

## Problem

`workspaceIdentifier` should be a string:

- aws/language-server-runtimes#497

## Solution

Pass `extensionContext.storageUri?.path`.
## Problem

New telemetry metrics were [added](aws/aws-toolkit-common#1023) to aws-toolkit-common.

## Solution

Consume the latest version of the aws-toolkit-common package.
## Problem

Adding logging statements at the top level is extremely noisy, since that part still triggers on each keystroke. Moving it could also improve latency, since any computation at the top level is redone on each keystroke.

## Solution

- Move the debounce up a layer.
- Remove outdated tests, since the debounce util already has tests for all this logic.
- Add new tests for the new behavior.

## Verification

- I tested this side-by-side with the previous version and didn't notice a difference.
## Problem

VS Code treats each cell in a notebook as a separate editor. As a result, when building the left- and right-contexts for the completion from the current editor, we were limited to just the current cell, which might be very small and/or reference variables and functions defined in other cells. That meant that completions never used the context of other cells when making suggestions, and were often _very_ generic. #7031

## Solution

The `extractContextForCodeWhisperer` function now checks if it is being called in a cell in a Jupyter notebook. If so, it collects the surrounding cells to use as context, respecting the maximum context length. During this process, Markdown cells have each line prefixed with a language-specific comment character.
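The Markdown-to-comment step described above can be sketched as follows. The function name and default prefix are illustrative, not the actual toolkit code; the real logic picks the comment character based on the notebook's language.

```typescript
// Hypothetical sketch: prefix every line of a Markdown cell with the target
// language's comment marker, so the cell can be embedded in code context
// without breaking the surrounding code.
function markdownCellAsComments(cellText: string, commentPrefix: string = '# '): string {
    return cellText
        .split('\n')
        .map(line => commentPrefix + line)
        .join('\n')
}
```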
Hweinstock approved these changes on May 13, 2025.