Message boards : Number crunching : GFLOPS backwards
Joined: 23 May 23  Posts: 1  Credit: 6,563,544  RAC: 30,224
A problem I have seen for a long time is the GFLOPS number for each work unit. The more GFLOPS, the faster the unit runs! For example: 4,079 GFLOPS, about 3 hours 30 minutes run time; 65,284 GFLOPS, about 20 minutes run time. A side effect of this is that when one of those 65,284 GFLOP work units is downloaded, BOINC thinks it will take a full day to run and doesn't download anything else until it finishes.
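A rough back-of-the-envelope check shows where that "full day" figure comes from, assuming a host rated at about 1 GFLOPS/s by its BOINC benchmark (the exact rating varies per machine; this is a sketch, not the actual client code):

    # Sketch: the client's usual runtime estimate is the task's estimated
    # computation divided by the host's estimated speed.
    rsc_fpops_est = 65_284e9   # estimated computation of the task, in FLOPs
    host_flops = 1.0e9         # assumed host speed, FLOPs per second
    print(rsc_fpops_est / host_flops / 3600)   # ~18 hours estimated

With an estimate that large the client believes its work buffer is already full, so it requests nothing else even though the task actually finishes in about 20 minutes.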
Joined: 24 Jan 11  Posts: 715  Credit: 555,443,660  RAC: 38,775
"A problem I have seen for a long time is the GFLOPS number for each work unit. The more GFLOPS, the faster the unit runs!"

You are correct. I've got one of each type running. The estimated speed is the same for both tasks, at 0.92 GFLOPS/sec, but the 4,079 GFLOPs task is going to run for 1 hour 50 minutes and the 65,284 GFLOPs task is only estimated to run 22 minutes.

[Edit] I pinged Kevin to this thread for his attention.

[Edit2] This is backwards from the standard BOINC client convention that the "estimated GFLOPs" task property reflects the total amount of computation needed to crunch the task. Disregarding BOINC's broken ability to properly calculate GFLOPS for GPUs, it should get this right for CPU computation based on each host's benchmark profile.
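As a quick consistency check of that convention (a minimal sketch; the two GFLOPs figures and client estimates are taken from the post above):

    # Under the standard convention, estimated runtime should scale roughly
    # linearly with the task's estimated GFLOPs.
    small = {"gflops": 4_079,  "client_est_min": 110}   # ~1 h 50 m
    big   = {"gflops": 65_284, "client_est_min": 22}

    work_ratio    = big["gflops"] / small["gflops"]                  # ~16x more claimed work
    runtime_ratio = big["client_est_min"] / small["client_est_min"]  # ~0.2x the estimated runtime
    print(work_ratio, runtime_ratio)

A task claiming roughly sixteen times the computation ends up with one fifth of the runtime estimate, which is exactly the "backwards" behaviour being reported.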
Joined: 4 Jul 09  Posts: 97  Credit: 17,382,328  RAC: 1,691
I had not been able to put my finger on what I was seeing different in my downloads. My four PCs are downloading work ... but yes, it is in smaller amounts. Hopefully this will be the answer and it can easily be corrected.

In October of 1969 I took an oath to support and defend the Constitution of the United States against all enemies, foreign and domestic; there was no expiration date.
Joined: 9 Aug 22  Posts: 82  Credit: 2,849,739  RAC: 6,876
I will put it on the list of things to look at this week.
Joined: 24 Jan 11  Posts: 715  Credit: 555,443,660  RAC: 38,775
Thanks Kevin, appreciated.
Joined: 4 Jul 09  Posts: 97  Credit: 17,382,328  RAC: 1,691
Once the GFLOPS issue is resolved, volunteers who process multiple projects on a system may see an increase in the number of MW tasks that download when their system makes a MW task request. Perhaps after the current application rework and testing is complete, Kevin will be able to work this issue into his schedule.

Bill F

In October of 1969 I took an oath to support and defend the Constitution of the United States against all enemies, foreign and domestic; there was no expiration date.
Joined: 4 Jul 09  Posts: 97  Credit: 17,382,328  RAC: 1,691
Kevin, have you been able to set aside any time to revisit the GFLOPS calculation issue for the project?

Thanks
Bill F

In October of 1969 I took an oath to support and defend the Constitution of the United States against all enemies, foreign and domestic; there was no expiration date.
Joined: 24 Jan 11  Posts: 715  Credit: 555,443,660  RAC: 38,775
+1
Joined: 8 Sep 21  Posts: 5  Credit: 20,844,146  RAC: 10,426
And us slow guys get them done in two days or so. It kinda comes in groups that last for a couple of weeks or so, then gets back to normal for a week or so. My average looks like a curved sawtooth. Well, it doesn't cost me anything, I suppose.
Joined: 4 Jul 09  Posts: 97  Credit: 17,382,328  RAC: 1,691
Well, if the project did a better job of estimating the CPU power of each client system, and a better job of estimating how many tasks to give each client system (GFLOPS fixed ...), then some of the sawtooth might not be so extreme.
Joined: 16 Mar 10  Posts: 213  Credit: 108,362,278  RAC: 4,516
Just a thought -- isn't the problem with the GFLOPS estimate that it is based in some way on the number of "particles" in the data being analyzed, but it doesn't (or cannot) take account of where they are relative to the main area of computational focus? (I may be phrasing that badly...) If the vast majority of a calculation involves "empty space", it probably runs a lot quicker. If it takes a lot of effort at workunit generation time to decide whether that might happen, that could be a problem :-) -- however, the consistency of the inverse relationship between GFLOPS estimates and actual run time suggests that a simpler fix would actually cover most cases...

Cheers - Al.
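One way to test whether that inverse relationship really is consistent would be to fit actual run time against the estimated GFLOPs over a batch of completed tasks. A hypothetical sketch; the sample numbers below are made up purely for illustration and do not come from MilkyWay's data:

    import math

    # (estimated GFLOPs, actual runtime in seconds) -- illustrative values only
    completed = [(4_079, 12_600), (5_120, 10_800), (32_000, 2_400), (65_284, 1_200)]

    # Fit log(runtime) = a + b*log(gflops).  A slope b near -1 would mean a
    # simple reciprocal correction to the estimate covers most cases.
    xs = [math.log(g) for g, _ in completed]
    ys = [math.log(t) for _, t in completed]
    n = len(xs)
    b = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
        (n * sum(x * x for x in xs) - sum(xs) ** 2)
    print(f"fitted exponent: {b:.2f}")   # strongly negative -> inverse relationship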
Joined: 4 Jul 09  Posts: 97  Credit: 17,382,328  RAC: 1,691
Nope ... I think that you might be thinking of a different science estimate, not GFLOPS, which is not science related in any way. GFLOPS is a calculation that the BOINC project (MilkyWay@home) makes regarding the processing power of each client system. Is it an old, slow system with few cores, or is it a new, powerful system with lots of cores and speed? The calculated estimate is used by the project to send more work to a powerful system and less to the older, slower systems. The thought is more work for those who can handle it and less for those who might struggle to complete by the task deadline.

Bill F
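A rough sketch of the sizing idea Bill describes, i.e. scaling how much work is sent to how fast the scheduler thinks the host is. This is a simplification for illustration, not the actual MilkyWay/BOINC scheduler code, and the function name and numbers are hypothetical:

    def tasks_to_send(host_gflops_per_s, requested_seconds, task_gflops):
        # Estimate how long one task takes on this host, then fill the request.
        est_task_runtime_s = task_gflops / host_gflops_per_s
        return max(1, int(requested_seconds / est_task_runtime_s))

    print(tasks_to_send(0.92, 4 * 3600, 4_079))   # slow host: ~3 tasks
    print(tasks_to_send(8.0,  4 * 3600, 4_079))   # fast host: ~28 tasks

An inverted GFLOPs figure feeds straight into this kind of estimate, which is one reason hosts can end up with far too much or far too little work queued.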
Joined: 16 Mar 10  Posts: 213  Credit: 108,362,278  RAC: 4,516
BillF, you are quite right; I was thinking of the "Estimated computation size" (which is cited in GFLOPs [note the small 's'] when reported in the BOINC Manager's task properties). I should've either called it something else or got the capitalization right :-) -- I'll plead that I paid more attention to the thread title than I should have done. I seem to recall (possibly a couple of "admins" ago) that it was explained that the likely computation size was based on a [simplistic] knowledge of where in the complete data stream the WU was targeting and the nature of the stream at that point. Hence my [poorly labelled] observations.

Cheers - Al.