Pre-intern some const infer vars #141499

Open: wants to merge 2 commits into base: master
Conversation

compiler-errors (Member)

Cache const vars like we do for ty/region vars

rustbot (Collaborator) commented May 24, 2025

r? @nnethercote

rustbot has assigned @nnethercote.
They will have a look at your PR within the next two weeks and either review your PR or reassign to another reviewer.

Use r? to explicitly pick a reviewer

@rustbot rustbot added S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue. labels May 24, 2025
compiler-errors (Member, Author)

@bors try @rust-timer queue


@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label May 24, 2025
bors (Collaborator) commented May 24, 2025

⌛ Trying commit 5771471 with merge 5221322...

bors added a commit that referenced this pull request May 24, 2025
Pre-intern some const infer vars

Cache const vars like we do for ty/region vars
compiler-errors (Member, Author)

@nnethercote: You authored #107869, but presumably didn't add a fast path for const infer vars back then because we still had to store the type of the const in the ty::Const, so it wasn't possible to implement at the time (thanks @BoxyUwU for #125958 🎉).

Let's see how this fares.

Some questions:

  • Do you have any intuition about what values you chose for NUM_PREINTERNED_*?
  • Somewhat unrelated, but was it just not worthwhile to pre-intern int and float vars (the non-fresh ones) when you were testing out that PR?

I also didn't pre-intern fresh const vars, but those may benefit from this too. Maybe a good follow-up after this, though.
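For readers unfamiliar with the technique: pre-interning allocates the first N infer vars once at interner construction, so the hot path becomes a plain array index instead of a trip through the general interner. A minimal standalone sketch of the idea; all names here (the `Interner` struct, `NUM_PREINTERNED_CONST_VARS` as a concrete value, the `Const` stand-in) are hypothetical, not the actual rustc implementation:

```rust
use std::collections::HashMap;

// Hypothetical threshold; the real NUM_PREINTERNED_* values were tuned
// from profile data, not chosen arbitrarily like this one.
const NUM_PREINTERNED_CONST_VARS: u32 = 32;

/// A const inference variable id (stand-in for rustc's `ConstVid`).
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct ConstVid(u32);

/// Stand-in for an interned `ty::Const`.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Const(u32);

struct Interner {
    /// Consts for vids 0..NUM_PREINTERNED_CONST_VARS, built once up front.
    preinterned: Vec<Const>,
    /// Fallback for the (rarer) larger vids.
    slow: HashMap<u32, Const>,
}

impl Interner {
    fn new() -> Self {
        Interner {
            preinterned: (0..NUM_PREINTERNED_CONST_VARS).map(Const).collect(),
            slow: HashMap::new(),
        }
    }

    /// Fast path: small indices are a direct array lookup; everything
    /// else falls back to the hash-based interner.
    fn const_var(&mut self, vid: ConstVid) -> Const {
        match self.preinterned.get(vid.0 as usize) {
            Some(&c) => c,
            None => *self.slow.entry(vid.0).or_insert(Const(vid.0)),
        }
    }
}

fn main() {
    let mut interner = Interner::new();
    assert_eq!(interner.const_var(ConstVid(3)), Const(3)); // fast path
    assert_eq!(interner.const_var(ConstVid(1000)), Const(1000)); // slow path
    println!("ok");
}
```

The same shape is what #107869 applied to ty/region vars; this PR extends it to const infer vars now that a `ty::Const` no longer carries its type.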

bors (Collaborator) commented May 24, 2025

☀️ Try build successful - checks-actions
Build commit: 5221322 (5221322a8e78f5547d7c0b5585ff992545789beb)


rust-timer (Collaborator)

Finished benchmarking commit (5221322): comparison URL.

Overall result: ❌ regressions - please read the text below

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

| | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 3.0% | [3.0%, 3.0%] | 1 |
| Regressions ❌ (secondary) | 0.1% | [0.1%, 0.1%] | 1 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 3.0% | [3.0%, 3.0%] | 1 |

Max RSS (memory usage)

Results (primary -4.1%, secondary -2.2%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

| | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | 3.7% | [3.7%, 3.7%] | 1 |
| Improvements ✅ (primary) | -4.1% | [-4.1%, -4.1%] | 1 |
| Improvements ✅ (secondary) | -3.3% | [-5.8%, -2.2%] | 5 |
| All ❌✅ (primary) | -4.1% | [-4.1%, -4.1%] | 1 |

Cycles

Results (primary 3.3%, secondary -4.5%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

| | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 3.3% | [3.3%, 3.3%] | 1 |
| Regressions ❌ (secondary) | 0.9% | [0.9%, 0.9%] | 1 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -6.3% | [-7.1%, -5.3%] | 3 |
| All ❌✅ (primary) | 3.3% | [3.3%, 3.3%] | 1 |

Binary size

Results (primary 1.1%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

| | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 1.1% | [1.1%, 1.1%] | 1 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 1.1% | [1.1%, 1.1%] | 1 |

Bootstrap: 776.203s -> 778.493s (0.30%)
Artifact size: 366.33 MiB -> 366.31 MiB (-0.01%)

@rustbot rustbot added perf-regression Performance regression. and removed S-waiting-on-perf Status: Waiting on a perf run to be completed. labels May 24, 2025
nnethercote (Contributor)

> Some questions:
>
> * Do you have any intuition about what values you chose for `NUM_PREINTERNED_*`?

It would have been based on measured data, either trying to hit some fraction of actual occurrences without pre-interning too many, or maybe the actual icount results.
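One way such a threshold could be derived from measured data (a hypothetical sketch, not the methodology actually used): record how often each infer-var index is looked up during an instrumented run, then take the smallest N whose prefix covers some target fraction of lookups.

```rust
fn main() {
    // Hypothetical histogram: lookups per infer-var index, as might be
    // gathered from an instrumented compiler run. Indices are naturally
    // skewed toward small values, which is what makes pre-interning pay off.
    let lookups_per_index: [u64; 8] = [500, 300, 120, 50, 20, 6, 3, 1];
    let total: u64 = lookups_per_index.iter().sum();

    // Smallest N such that pre-interning indices 0..N covers >= 95% of lookups.
    let mut covered = 0u64;
    let mut n = 0usize;
    for (i, &count) in lookups_per_index.iter().enumerate() {
        covered += count;
        if covered * 100 >= total * 95 {
            n = i + 1;
            break;
        }
    }
    // With this toy data, indices 0..4 cover 970/1000 = 97% of lookups.
    assert_eq!(n, 4);
    println!("pre-intern the first {} indices", n);
}
```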

> * Somewhat unrelated, but was it just not worthwhile to pre-intern int and float vars (the non-_fresh_ ones) when you were testing out that PR?

I don't remember specifically, but probably the profile data didn't point at that being useful. You'd have to try it to be certain.

> I also didn't pre-intern fresh const vars, but those may benefit from this too. Maybe a good follow-up after this, though.

There is a very small benefit here for ctfe-stress, small enough that you have to click "show non-relevant results" to see it. I'm guessing you tried this not because it showed up in a profile, but because you saw that some types and regions were pre-interned and figured it was worth trying with consts too?

The perf results suggest it's borderline whether this is worth merging; what do you think?

Labels
perf-regression Performance regression. S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue.
5 participants