torch._foreach_mul_ segmentation fault #113156
Labels
module: crash (Problem manifests as a hard crash, as opposed to a RuntimeError)
module: edge cases (Adversarial inputs unlikely to occur in practice)
module: error checking (Bugs related to incorrect/lacking error checking)
module: numpy (Related to numpy support, and also numpy compatibility of our operators)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
🐛 Describe the bug
The following example causes a segmentation fault. Strangely, only `torch._foreach_mul_` has this problem; `torch._foreach_div_`
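(The original snippet was embedded in the report and is not reproduced above. The sketch below is only an assumed shape of the failing call, guessed from the numpy/edge-case labels; the `np.ones((2, 2))` second argument is an assumption, not the verbatim reproduction.)

```python
import numpy as np
import torch

# Hypothetical reproduction sketch: an adversarial numpy array is passed where
# _foreach_mul_ expects its scalar/tensor second argument.
tensors = [torch.ones(2, 2)]
other = np.ones((2, 2))  # assumed input; the report's actual input may differ

torch._foreach_div_(tensors, other)  # per the report, the div_ variant is fine
torch._foreach_mul_(tensors, other)  # per the report, this hard-crashes (SIGSEGV)
```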
is fine. Although I haven't fully understood the direct cause of the segmentation fault, I think it may be caused by the code below, where `mul_.Scalar` calls `mul.out` while `div_.Scalar` calls `div_.Tensor`:

pytorch/aten/src/ATen/native/BinaryOps.cpp, lines 997 to 999 at c6f435b

pytorch/aten/src/ATen/native/BinaryOps.cpp, lines 897 to 899 at c6f435b
And after I rewrote `mul_.Scalar` to work like `div_.Scalar`, it works!

Versions
PyTorch version: 2.2.0a0+git0d669f0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (aarch64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.20.5
Libc version: glibc-2.31
Python version: 3.9.17 (main, Jul 5 2023, 20:44:37) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.19.90-2107.6.0.0098.oe1.bclinux.aarch64-aarch64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
cc @malfet @mruberry @rgommers