Message boards : Number crunching : AMD Ryzen 7 1800X CPU task takes over 20 hours?
Joined: 17 Feb 17 Posts: 21 Credit: 8,511,880 RAC: 0
So I just want to make sure this is normal. Task: N-Body Simulation 1.76, de_nbody_04_23_2019_v176_40k__data__3_1556550902_54648_0. (Not sure how much of that is relevant.) But now for the interesting part: unless I'm missing something, this amount of data should only take a few hours on this processor. It isn't.

State: Running
Received: 2019-05-01 1:22:44 PM
Report deadline: 2019-05-13 1:19:06 PM
Estimated computation size: 15,994 GFLOPs
CPU time: 13:36:38
CPU time since checkpoint: 00:00:04
Elapsed time: 13:50:40
Estimated time remaining: 17:50:46
Fraction done: 43.686%
Virtual memory size: 13.59 MB
Working set size: 17.79 MB
Directory: slots/14
Process ID: 18784
Progress rate: 3.240% per hour
Executable: milkyway_nbody_1.76_windows_x86_64.exe

Is there something wrong with this WU or my machine? I should also note that the CPU temperatures are well below their usual 60-plus and are hovering in the high 50s. The fan isn't revving up to the higher rpm range either, and the clock speed is stable at 3.7 GHz. I'm running this on the CPU alongside GPU work; I have the processor set to use 92% to account for the one thread the GPU needs, and those tasks appear to be flying along just fine on the RX 570. I'm not sure what to do at this point. Help appreciated!
Joined: 24 Jan 11 Posts: 716 Credit: 560,076,732 RAC: 72,089
Ignore the estimated time remaining. That is only a guess by BOINC, since it has only seen 5 tasks so far on your host. BOINC can't accurately predict runtimes until the host has validated 11 tasks for each application that are not overflows, 100% radar-blanked, or errors.
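For reference, the "Estimated time remaining" shown in the task properties can be reproduced from the other fields in the same dump. A minimal sketch, assuming the estimate is a straight extrapolation from elapsed time and fraction done (which is consistent with the numbers quoted in the first post, though the real BOINC client blends in per-application history as described above):

```python
def estimated_time_remaining(elapsed_seconds: float, fraction_done: float) -> float:
    """Extrapolate total runtime from progress so far, then subtract elapsed time.

    This is a simplified model, not the BOINC client's actual code path.
    """
    if fraction_done <= 0:
        raise ValueError("no progress yet; estimate undefined")
    projected_total = elapsed_seconds / fraction_done
    return projected_total - elapsed_seconds

# Numbers from the first post: 13:50:40 elapsed at 43.686% done.
elapsed = 13 * 3600 + 50 * 60 + 40
remaining = estimated_time_remaining(elapsed, 0.43686)
print(remaining / 3600)  # ~17.85 hours, matching the reported 17:50:46
```

The point is that the figure says nothing about the work actually left; it only rescales the progress seen so far, so early in a task (or with few validated tasks on record) it can swing wildly.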
Joined: 17 Feb 17 Posts: 21 Credit: 8,511,880 RAC: 0
> Ignore the estimated time remaining. That is only a guess by BOINC since it has only seen 5 tasks so far on your host. BOINC can't accurately predict runtimes on tasks until the host has validated 11 tasks for each application that are not overflows, 100% radar blanked or errors.

I'm not sure if this means anything, but here are two different tasks with drastically different results. It also looks like I'm hitting virtual memory now, which could explain things.

Application: Milkyway@home N-Body Simulation 1.76
Name: de_nbody_04_23_2019_v176_40k__data__3_1556550902_54648
State: Suspended - computer is in use
Received: 2019-05-01 1:22:44 PM
Report deadline: 2019-05-13 1:19:06 PM
Estimated computation size: 15,994 GFLOPs
CPU time: 1d 04:30:31
CPU time since checkpoint: 00:00:14
Elapsed time: 1d 05:14:44
Estimated time remaining: 02:04:10
Fraction done: 93.391%
Virtual memory size: 13.61 MB
Working set size: 1.42 MB
Directory: slots/14
Process ID: 16916
Progress rate: 3.240% per hour
Executable: milkyway_nbody_1.76_windows_x86_64.exe

Application: Milkyway@home N-Body Simulation 1.76
Name: de_nbody_04_23_2019_v176_40k__data__1_1556550902_83400
State: Suspended - computer is in use
Received: 2019-05-02 11:17:26 PM
Report deadline: 2019-05-14 11:13:49 PM
Estimated computation size: 41,239 GFLOPs
CPU time: 07:56:09
CPU time since checkpoint: 00:00:13
Elapsed time: 08:12:19
Estimated time remaining: 03:34:18
Fraction done: 63.672%
Virtual memory size: 12.64 MB
Working set size: 1.42 MB
Directory: slots/7
Process ID: 13492
Progress rate: 7.920% per hour
Executable: milkyway_nbody_1.76_windows_x86_64.exe

Does the estimated computation size not have anything to do with how long the task takes? It seems like the bigger WU is going to finish much sooner (in less than half the time) than the theoretically shorter, smaller one. I think next week I'll wipe this machine and install a fresh copy of Windows just to be sure. I'll give it a few days to settle. I just hope I don't get more errors in the meantime...
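A quick back-of-the-envelope check on the two tasks quoted above makes the mismatch concrete. Assuming the reported progress rates hold steady (a simplification; rates can drift over a run), the implied total runtimes and effective throughputs are:

```python
def projected_runtime_hours(progress_rate_pct_per_hour: float) -> float:
    """Hours to reach 100% if the reported progress rate holds for the whole run."""
    return 100.0 / progress_rate_pct_per_hour

# Progress rates from the two task dumps above.
small_task_hours = projected_runtime_hours(3.240)  # 15,994 GFLOPs task -> ~30.9 h
large_task_hours = projected_runtime_hours(7.920)  # 41,239 GFLOPs task -> ~12.6 h

# Effective throughput differs by roughly 6x between the two workunits,
# which is why the "bigger" task finishes first.
small_throughput = 15_994 / small_task_hours   # ~518 GFLOPs/h
large_throughput = 41_239 / large_task_hours   # ~3,266 GFLOPs/h
```

In other words, the "Estimated computation size" is the project's up-front guess at the work in the unit, while the progress rate reflects how the simulation actually behaves, and for N-Body those two can diverge badly.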
Joined: 24 Jan 11 Posts: 716 Credit: 560,076,732 RAC: 72,089
> Does the estimated computation size not have anything to do with how long the task takes? Seems like the bigger WU is going to be much shorter (less than half the time) as the theoretical shorter and smaller one.

Yes, generally it does. But it also depends on how the project scientists set up the application's expected results for a calculation. Some tasks are more difficult than others and may require more FLOPS to compute. Calculation difficulty also factors into how much credit is awarded in the classic BOINC code, but the creation of the CreditNew credit algorithm pretty much threw a monkey wrench into that business. That topic is a political minefield. The project scientists determine how much credit they award and how.

I would not bother with reinstalling Windows. That won't change the difficulty of different task calculations; you are trying to compare apples to oranges. Just let BOINC run and do its thing. You can't change how it works.
Joined: 17 Feb 17 Posts: 21 Credit: 8,511,880 RAC: 0
> Does the estimated computation size not have anything to do with how long the task takes? Seems like the bigger WU is going to be much shorter (less than half the time) as the theoretical shorter and smaller one.

That makes sense. The task might have more difficult calculations during the process, so the smaller number may mean longer compute times. I'm not too hung up on credit, especially with a project like this: my GPUs don't hold a candle to just about anything out there, and I'm in it to progress the science, not for bragging rights at this point. I'd rather throw that $100 at the projects for funding than throw together a new machine I don't actually have room for. ;)

I just wanted to make sure this was normal for these tasks. The N-Body runtimes seem to vary wildly per task, which is what made me question this to begin with; I wasn't sure if they should be consistent with each other. I'm used to coming off of WCG, where just about every project is pretty consistent with the runtimes you receive.
Joined: 24 Jan 11 Posts: 716 Credit: 560,076,732 RAC: 72,089
> I'm used to coming off of WCG, where just about every project is pretty consistent with the runtimes you receive.

And I'm used to the varying runtimes of the tasks we get at Seti, which depend on the origin of the tasks (which antenna) and also the way the data was gathered. Typically one type of task from the Arecibo telescope takes twice as long to compute as a task from the Green Bank telescope, even though the task sizes are identical. I've just accepted that some tasks from any particular project are easier or harder to crunch, and I don't worry about it.
Joined: 17 Feb 17 Posts: 21 Credit: 8,511,880 RAC: 0
> I'm used to coming off of WCG, where just about every project is pretty consistent with the runtimes you receive.

Thanks again. Exactly what I was looking for. :)
©2025 Astroinformatics Group