Reset GPU memory after CUDA errors
Sometimes a CUDA program crashes during execution before its memory is freed. As a result, device memory remains occupied.
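You can confirm this with plain nvidia-smi, or query just the memory counter:
nvidia-smi --query-gpu=memory.used --format=csv
If none of your processes are running but the reported usage stays high, the memory is indeed stuck.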
There are several solutions:
1.
Try using:
nvidia-smi --gpu-reset
nvidia-smi -r
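In practice the reset usually needs root privileges and, with newer drivers, an explicit GPU index; it will also be refused while any process still holds the device, and it may not be supported on some consumer GPUs. A typical invocation (assuming the stuck GPU is index 0):
sudo nvidia-smi --gpu-reset -i 0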
2.
Although it should be unnecessary to do this in anything other than exceptional circumstances, the recommended way to do this on Linux hosts is to unload the nvidia driver by doing
sudo rmmod nvidia
with suitable root privileges and then reloading it with
sudo modprobe nvidia
If the machine is running X11, you will need to stop it manually beforehand and restart it afterwards. The driver initialisation process should eliminate any prior state on the device.
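On recent driver versions several companion modules depend on nvidia and must be unloaded first. A sketch of the full sequence (module names and the display-manager service vary by distribution and driver version):
sudo systemctl stop display-manager   # stop X11/Wayland first; the service may be gdm, lightdm or sddm
sudo rmmod nvidia_uvm nvidia_drm nvidia_modeset nvidia
sudo modprobe nvidia
sudo systemctl start display-manager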
3.
This method works for me:
Check what is using your GPU memory with
sudo fuser -v /dev/nvidia*
Your output will look something like this:
                     USER        PID ACCESS COMMAND
/dev/nvidia0:        root       1256 F...m  Xorg
                     username   2057 F...m  compiz
                     username   2759 F...m  chrome
                     username   2777 F...m  chrome
                     username  20450 F...m  python
                     username  20699 F...m  python
Then kill the PIDs that you no longer need, either in htop or with
sudo kill -9 PID
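For example, to get rid of the two leftover python processes from the sample output above (the PIDs are of course just illustrative):
sudo kill -9 20450 20699
If fuser is not installed, nvidia-smi can also list the compute processes holding memory (field names may differ slightly between driver versions):
nvidia-smi --query-compute-apps=pid,process_name --format=csv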
4.
Or simply reboot:
sudo reboot