Issues: pytorch/xla
#8918 Standardize XLA loop APIs
Labels: enhancement (New feature or request)
Opened Apr 1, 2025 by rpsilva-aws

#8917 Sliced add returns wrong output
Labels: pytorch divergence (XLA behavior doesn't match PyTorch eager frontend)
Opened Apr 1, 2025 by vealocia

#8915 [Deprecation Tracking] API deprecation timeline summary
Labels: usability (Bugs/features related to improving the usability of PyTorch/XLA)
Opened Apr 1, 2025 by zpcore

#8913 Large number of graph breaks with flash_attention on the dynamo openxla backend
Labels: dynamo, performance
Opened Mar 31, 2025 by bhavya01

#8910 Output shape from flash attention is not as expected
Labels: bug (Something isn't working), pallas
Opened Mar 31, 2025 by lsy323

#8906 Profiler and use_spmd() order
Labels: documentation, distributed (SPMD and other distributed things)
Opened Mar 31, 2025 by ysiraichi

#8901 Torch-XLA gets stuck with large max_new_tokens when running HF CausalLM inference
Labels: performance
Opened Mar 28, 2025 by Zantares

#8899 The Stable Diffusion notebook is broken
Labels: bug, documentation
Opened Mar 27, 2025 by zhanyong-wan

#8897 2D linear upsample with align_corners=False doesn't match PyTorch
Labels: xla:gpu, pytorch divergence
Opened Mar 27, 2025 by ysiraichi

#8884 BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending. Error
Labels: bug, needs reproduction
Opened Mar 25, 2025 by oayk23

#8880 Check "Autograd" code generation custom operation edge case is covered by tests
Labels: lowering (ATen Operation lowering), tech debt (Technical Debt Is Evil), testing (Testing and coverage related issues)
Opened Mar 24, 2025 by pgmoka

#8877 Create a nightly torch_xla wheel without version name
Labels: enhancement
Opened Mar 24, 2025 by bhavya01

#8869 torch_xla.experimental.custom_kernel.flash_attention output does not match F.scaled_dot_product_attention on TPU
Labels: pallas, pytorch divergence
Opened Mar 21, 2025 by NickLucche

#8862 Replace xm.mark_step with torch_xla.sync() in examples and tests
Labels: documentation, enhancement, usability
Opened Mar 19, 2025 by tengyifei

#8861 Document the difference between device= vs .to(device)
Labels: documentation, enhancement
Opened Mar 19, 2025 by tengyifei

#8860 Replace upstream PyTorch GRU module
Labels: enhancement, lowering, tracing (Lazy Tensor tracing)
Opened Mar 19, 2025 by tengyifei

#8859 Improve torch_xla.compile documentation
Labels: documentation, enhancement
Opened Mar 19, 2025 by tengyifei

#8858 Document the difference between tracing time and execution time
Labels: documentation, enhancement
Opened Mar 19, 2025 by tengyifei

#8854 torch.distributed.all_reduce not converted to StableHLO
Labels: bug, distributed, stablehlo (StableHLO related work)
Opened Mar 19, 2025 by AleksKnezevic

#8853 Add documentation covering all of our environment variables and their meanings
Labels: documentation, usability
Opened Mar 19, 2025 by miladm

#8850 Add a 2-slice pallas training test in pre-submit CI
Labels: testing, xla:tpu (TPU specific issues and PRs)
Opened Mar 18, 2025 by tengyifei

#8847 How to compile torch-xla from source?
Labels: build (Build process related matters, e.g. build system), question
Opened Mar 18, 2025 by south-ocean