Welcome to MilkyWay@home

Message boards : Number crunching : Isn't this a waste of my CPU resources?
Grzegorz Skoczylas

Joined: 2 Feb 12
Posts: 2
Credit: 1,621,952
RAC: 59
Message 69090 - Posted: 20 Sep 2019, 14:14:27 UTC

Today is 20 September. In the BOINC Manager I see, among others, two MilkyWay tasks with the following data:

  • Task #1: progress: 44.758%, elapsed: 6d 02:33:44, remaining: 7d 12:50:53, deadline: 24 September, 04:04:47
  • Task #2: progress: 20.537%, elapsed: 3d 05:55:39, remaining: 12d 13:25:52, deadline: 25 September, 23:37:01


Does it make any sense to continue these tasks? It looks like there is no chance they will be completed before the deadline!

I regularly have similar problems with MilkyWay tasks. Apart from MilkyWay, I also run tasks for several other projects (Asteroids, Einstein, SETI). No other project has such problems.

The two computers on which the MW tasks are calculated are more or less evenly loaded, i.e. there are no periods of significantly higher load. What's more, one of these computers does almost nothing else.

The problem is with the Milkyway@home Separation 1.46 application.

It looks as if MilkyWay has a problem with correctly estimating the complexity of its tasks.
It seems to me that it makes no sense to keep calculating tasks long after the deadline.
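A quick back-of-the-envelope check (a Python sketch with my own variable names; BOINC's internal estimator is more involved) shows why Task #1 cannot finish in time: extrapolating the elapsed time linearly from the reported progress reproduces BOINC's own remaining-time estimate, and that estimate exceeds the time left until the deadline.

```python
def to_seconds(days, hours, minutes, seconds):
    """Convert a d/h/m/s duration to seconds."""
    return ((days * 24 + hours) * 60 + minutes) * 60 + seconds

elapsed = to_seconds(6, 2, 33, 44)   # Task #1: 6d 02:33:44 at 44.758% progress
progress = 0.44758

# Assume progress grows roughly linearly with compute time:
projected_total = elapsed / progress
projected_remaining = projected_total - elapsed

# Time from the post (20 Sep 14:14:27) to the deadline (24 Sep 04:04:47):
until_deadline = to_seconds(3, 13, 50, 20)

print(projected_remaining / 86400)           # ~7.5 days, close to BOINC's own estimate
print(projected_remaining > until_deadline)  # True: the task cannot make the deadline
```

The same arithmetic applied to Task #2 gives an even larger shortfall.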

Joseph Stateson

Joined: 18 Nov 08
Posts: 222
Credit: 1,155,614,257
RAC: 182,166
Message 69091 - Posted: 20 Sep 2019, 14:36:00 UTC - in response to Message 69090.  
Last modified: 20 Sep 2019, 14:38:44 UTC

Your Quadro P1000 has horrendous FP64 performance: 59 GFLOPS.

Your Quadro K1100M is even worse: 22.59 GFLOPS.

This app makes extensive use of double-precision floating point, and your statistics seem to confirm the problem.

Not sure why they use FP64 so much. I thought the program was originally coded in Fortran, but no, it is in C.
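As a rough illustration (my own arithmetic, using only the GFLOPS figures quoted above): for a workload bound by FP64 throughput, runtime scales roughly inversely with the card's FP64 rating, so the same task should take about 2.6x as long on the K1100M as on the P1000.

```python
# FP64 throughput figures quoted above, in GFLOPS.
p1000_fp64 = 59.0    # Quadro P1000
k1100m_fp64 = 22.59  # Quadro K1100M

# For an FP64-bound kernel, runtime is roughly (work / throughput),
# so the relative slowdown of the K1100M vs the P1000 is:
slowdown = p1000_fp64 / k1100m_fp64
print(round(slowdown, 2))  # ~2.61x longer on the K1100M
```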
Keith Myers

Joined: 24 Jan 11
Posts: 273
Credit: 131,303,596
RAC: 96,740
Message 69095 - Posted: 20 Sep 2019, 18:05:21 UTC - in response to Message 69090.  

Quoting Message 69090:

  • Task #1: progress: 44.758%, elapsed: 6d 02:33:44, remaining: 7d 12:50:53, deadline: 24 September, 04:04:47
  • Task #2: progress: 20.537%, elapsed: 3d 05:55:39, remaining: 12d 13:25:52, deadline: 25 September, 23:37:01

Does it make any sense to continue these tasks? After all, it looks like there is no chance that they will be completed before the deadline!

Your Quadro normally crunches these tasks in about 600 seconds. The fact that the tasks you show have been running for 6 days means the card's drivers have gone walkabout. Reboot the host.
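Before rebooting, a quick way to check whether the driver has actually wedged is to query it with the standard nvidia-smi tool (a sketch of my own, not something from this thread; if the query hangs or errors while a GPU is installed, a reboot or driver reload is the usual fix):

```shell
# Check whether the NVIDIA driver still answers queries.
if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
    echo "driver responsive"
else
    echo "driver not responding (or nvidia-smi missing) - reboot the host"
fi
```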
Phoenix

Joined: 5 Feb 11
Posts: 3
Credit: 1,508,690
RAC: 70
Message 69110 - Posted: 23 Sep 2019, 2:09:55 UTC

Jobs do not run properly.
I do not wish to waste my computer time.
I'm done.
Grzegorz Skoczylas

Joined: 2 Feb 12
Posts: 2
Credit: 1,621,952
RAC: 59
Message 69112 - Posted: 23 Sep 2019, 7:41:50 UTC - in response to Message 69095.  

The website http://dell.com/support says that all my drivers are up to date.

Other projects also use the GPU, but they don't have such problems.
mikey

Joined: 8 May 09
Posts: 2303
Credit: 393,321,660
RAC: 36,490
Message 69113 - Posted: 23 Sep 2019, 10:32:23 UTC - in response to Message 69112.  

Quoting Message 69112:

The website http://dell.com/support informs that all drivers are up to date. It seems to me that other projects also use the GPU, but they don't have such problems.

This is not a driver problem; it's a configuration problem on the project's server side. If you just wait 10 minutes your cache will be refilled, so it's not that nobody gets any workunits at all; it just takes longer than we crunchers would like it to take.


©2020 Astroinformatics Group