Message boards : Application Code Discussion : GPU RAM Requirements
Joined: 13 Mar 18 Posts: 9 Credit: 66,232,294 RAC: 0
I am seeing errors such as the following when trying to run 8 WUs per GPU:

Error creating context (-6): CL_OUT_OF_HOST_MEMORY

https://milkyway.cs.rpi.edu/milkyway/result.php?resultid=2304133664

How much GPU RAM do WUs require on average? Is it unreasonable to expect 8 WUs to run on a GPU with only 12 GB of RAM? I ask because when I run only 4 WUs per GPU, GPU utilization is nowhere near 100%, which is why I would like to run more at once. Could someone point me to the relevant lines in the code? I'd be happy to take a look to better understand the GPU RAM allocations.
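A note on the error code: in OpenCL, -6 is CL_OUT_OF_HOST_MEMORY, which clCreateContext returns when the runtime fails to allocate the host-side resources a new context needs, so the failure can occur before any per-WU buffers even touch VRAM. A minimal sketch (illustrative only, not the project's actual code) of the failing call, plus the standard query for a device's total global memory:

```c
/* Illustrative sketch, not MilkyWay@home source. Shows the call that
 * fails with -6 in the posted result, and the device-memory query
 * that is useful for budgeting concurrent WUs. */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    cl_ulong global_mem = 0;
    cl_int err;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    /* Total global (device) memory; dividing by an observed per-WU
     * footprint gives a rough upper bound on concurrent WUs. */
    clGetDeviceInfo(device, CL_DEVICE_GLOBAL_MEM_SIZE,
                    sizeof global_mem, &global_mem, NULL);
    printf("Global memory: %llu MB\n",
           (unsigned long long)(global_mem >> 20));

    /* Each WU runs as its own process and creates its own context;
     * this is the call that produces "Error creating context (-6)". */
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    if (err != CL_SUCCESS) {
        printf("Error creating context (%d)\n", err); /* -6 == CL_OUT_OF_HOST_MEMORY */
        return 1;
    }
    clReleaseContext(ctx);
    return 0;
}
```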
Joined: 29 Jul 14 Posts: 19 Credit: 3,451,802,406 RAC: 0
Yeah, it seems like the work units use up to 1.5 GB of VRAM on NVIDIA cards for whatever reason. On AMD cards they only use around 100 MB per work unit; I'm not sure why the NVIDIA work units need so much more. I would also like to know if there's a way to reduce the VRAM usage on NVIDIA cards.
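One way to test whether per-context driver overhead (rather than the WU's own buffers) dominates on NVIDIA is a small hypothetical probe that opens several contexts on one device, roughly mimicking N WUs each holding a context. Each real WU is a separate process, so an in-process loop is only an approximation, but watching nvidia-smi while it runs shows how much memory the driver reserves per context:

```c
/* Hypothetical probe, not project code: open several contexts on one
 * GPU and report where creation starts to fail. */
#include <stdio.h>
#include <CL/cl.h>

#define MAX_CTX 16  /* assumed upper bound for the experiment */

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    cl_context ctx[MAX_CTX];
    cl_int err;
    int n;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    for (n = 0; n < MAX_CTX; n++) {
        ctx[n] = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
        if (err != CL_SUCCESS) {
            printf("Context %d failed with error %d\n", n + 1, err);
            break;
        }
    }
    printf("Created %d context(s)\n", n);

    while (n-- > 0)  /* release only the contexts that succeeded */
        clReleaseContext(ctx[n]);
    return 0;
}
```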
Joined: 13 Mar 18 Posts: 9 Credit: 66,232,294 RAC: 0
> Yeah, it seems like the work units use up to 1.5 GB of VRAM on NVIDIA cards for whatever reason. On AMD cards they only use around 100 MB per work unit; I'm not sure why the NVIDIA work units need so much more. I would also like to know if there's a way to reduce the VRAM usage on NVIDIA cards.

Thanks for sharing! This provides a good lead on where the issue may be. The GPUs I am using are from NVIDIA, and the 1.5 GB VRAM observation is consistent with what I'm seeing: 8 WUs/GPU × 1.5 GB/WU = 12 GB, right at the limit of a 12 GB card.
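For anyone tuning the WU count: the number of concurrent tasks per GPU is set client-side in BOINC's app_config.xml in the project directory. A minimal sketch, assuming the GPU application's short name is milkyway (confirm the exact name in your client_state.xml): a gpu_usage of 0.125 yields 8 WUs per GPU, while raising it to 0.15 caps it at 6, which would leave roughly 3 GB of headroom on a 12 GB card if each WU really holds ~1.5 GB.

```xml
<!-- Minimal app_config.xml sketch; the app <name> is an assumption,
     check client_state.xml for the name used on your host. -->
<app_config>
  <app>
    <name>milkyway</name>
    <gpu_versions>
      <!-- 0.125 of a GPU per task => 8 concurrent WUs per GPU -->
      <gpu_usage>0.125</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```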
Joined: 28 Sep 17 Posts: 19 Credit: 60,732,047 RAC: 0
Maybe the Volta cards run so fast that they need more VRAM to keep up.