Message boards :
Number crunching :
GFLOPS backwards
| Author | Message |
|---|---|
|
Joined: 23 May 23 Posts: 1 Credit: 12,074,282 RAC: 17,174 |
A problem I have seen for a long time is the GFLOPS number for each work unit: the more GFLOPS, the faster the unit runs! For example, a 4,079 GFLOPS work unit takes about 3 hours 30 minutes of run time, while a 65,284 GFLOPS work unit takes about 20 minutes. A side effect of this is that when one of those 65,284 GFLOPS work units is downloaded, BOINC thinks it will take a full day to run and doesn't download anything else until it finishes. |
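That download side effect can be sketched in a few lines. This is an illustrative model only, not the actual BOINC client code (the real client also applies duration-correction factors and per-app-version speed statistics); the 0.92 GFLOPS/s host speed is the figure reported later in this thread:

```python
# Illustrative sketch (NOT actual BOINC client code): the client keeps a
# buffer of N seconds of estimated work and stops requesting tasks once
# the estimated runtime of the queued work fills that buffer.

def should_request_work(queued_fpops, host_flops, buffer_seconds):
    """Request more work only while the estimated queue runtime is short."""
    estimated_queue_runtime = queued_fpops / host_flops  # seconds
    return estimated_queue_runtime < buffer_seconds

host_flops = 0.92e9      # ~0.92 GFLOPS/s, as reported in this thread
buffer = 0.5 * 86400     # assume half a day of buffered work

# A 65,284-GFLOP estimate looks like ~19.7 hours of work on this host, so
# the client stops asking for more, even though the task actually
# finishes in about 20 minutes.
print(should_request_work(65284e9, host_flops, buffer))  # False
print(should_request_work(4079e9, host_flops, buffer))   # True
```

With an inflated fpops estimate the queue looks full after a single task, which matches the observed "doesn't download anything else until it finishes" behaviour.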
Keith Myers Joined: 24 Jan 11 Posts: 738 Credit: 565,020,458 RAC: 19,016 |
A problem I have seen for a long time is the GFLOPS number for each work unit. The more GFLOPS the faster the unit runs! You are correct. I've got one of each type running. The estimated speed is the same for both tasks at 0.92 GFLOPS/sec, but the 4,079 GFLOPS task is going to run for 1 hour 50 minutes while the 65,284 GFLOPS task is only estimated to run 22 minutes. [Edit] I pinged Kevin to this thread for his attention. [Edit2] This is backwards from the standard BOINC client convention that the task property of estimated GFLOPs reflects the total amount of computation needed to crunch the task. Disregarding BOINC's broken ability to properly calculate GFLOPS for GPUs, it should get this correct for CPU computation based on the benchmark profile of each host.
|
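The standard convention Keith describes can be made concrete with a minimal sketch (simplified; this is not the actual client implementation): the estimated-GFLOPs task property is the total work, so estimated runtime should grow with it, the opposite of what this thread observes.

```python
# Standard BOINC convention, simplified: the estimated-fpops task property
# is the TOTAL floating-point work, so a bigger number should mean a
# LONGER estimated runtime on the same host.

def estimated_runtime(rsc_fpops_est, host_flops):
    """Estimated runtime in seconds = total work / host speed."""
    return rsc_fpops_est / host_flops

host_flops = 0.92e9  # ~0.92 GFLOPS/s, as reported for both tasks above

small = estimated_runtime(4079e9, host_flops)   # ~4,434 s, about 74 minutes
large = estimated_runtime(65284e9, host_flops)  # ~70,961 s, about 19.7 hours

# Under the convention the 65,284-GFLOP task should be the long one;
# in practice it finished in roughly 22 minutes.
print(round(small), round(large))
```

The mismatch between this convention and the observed runtimes is exactly the "backwards" behaviour the thread title refers to.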
Bill F Joined: 4 Jul 09 Posts: 107 Credit: 18,271,281 RAC: 3,594 |
I had not been able to put my finger on what I was seeing different in my downloads. My four PCs are downloading work, but in smaller amounts. Hopefully this will be the answer and it can easily be corrected. In October of 1969 I took an oath to support and defend the Constitution of the United States against all enemies, foreign and domestic; there was no expiration date.
|
Kevin Roux Joined: 9 Aug 22 Posts: 96 Credit: 4,474,690 RAC: 13 |
I will put it on the list of things to look at this week. |
Keith Myers Joined: 24 Jan 11 Posts: 738 Credit: 565,020,458 RAC: 19,016 |
Thanks Kevin, appreciated.
|
Bill F Joined: 4 Jul 09 Posts: 107 Credit: 18,271,281 RAC: 3,594 |
Once the GFLOPS issue is resolved, volunteers who process multiple projects on a system may see an increase in the number of MW tasks downloaded when their system makes a MW task request. Perhaps after the current application rework and testing is complete, Kevin will be able to work this issue into his schedule. Bill F
|
Bill F Joined: 4 Jul 09 Posts: 107 Credit: 18,271,281 RAC: 3,594 |
Kevin, have you been able to set aside any time to revisit the GFLOPS calculation issue for the project? Thanks, Bill F
|
Keith Myers Joined: 24 Jan 11 Posts: 738 Credit: 565,020,458 RAC: 19,016 |
+1
|
|
Joined: 8 Sep 21 Posts: 11 Credit: 26,268,636 RAC: 17,353 |
And us slow guys get them done in two days or so. It kinda comes in groups that last for a couple of weeks, then gets back to normal for a week or so. My average looks like a curved sawtooth. Well, it doesn't cost me anything, I suppose. |
Bill F Joined: 4 Jul 09 Posts: 107 Credit: 18,271,281 RAC: 3,594 |
Well, if the project did a better job of estimating the CPU power of each client system, and a better job of estimating how many tasks to give each client system (GFLOPS fixed...), then some of the sawtooth might not be so extreme. |
|
Joined: 16 Mar 10 Posts: 217 Credit: 110,351,906 RAC: 1,937 |
Just a thought -- isn't the problem with the GFLOPS estimate that it is based in some way on the number of "particles" in the data being analyzed but it doesn't (or cannot) take account of where they are relative to the main area of computational focus? (I may be phrasing that badly...) If the vast majority of a calculation involves "empty space" it probably runs a lot quicker. If it takes a lot of effort at workunit generation time to decide whether that might happen, that could be a problem :-) -- however, the consistency of the inverse relationship between GFLOPS estimates and actual run time suggests that a simpler fix would actually cover most cases... Cheers - Al. |
Bill F Joined: 4 Jul 09 Posts: 107 Credit: 18,271,281 RAC: 3,594 |
Nope ... I think that you might be thinking of a different science estimate, not GFLOPS, which is not science-related in any way. GFLOPS here is a calculation that the BOINC project (MilkyWay@home) makes regarding the processing power of each client system. Is it an old, slow system with few cores, or is it a new, powerful system with lots of cores and speed? The calculated estimate is used by the project to send more work to a powerful system and less to the older, slower systems. The thought is more work for those who can handle it, and less for those who might struggle to complete tasks by the deadline. Bill F |
|
Joined: 16 Mar 10 Posts: 217 Credit: 110,351,906 RAC: 1,937 |
BillF, you are quite right; I was thinking of the "Estimated computation size" (which it cites in GFLOPs [note the small 's']) when it is reported in the BOINC Manager's task properties. I should've either called it something else or got the capitalization right :-) -- I'll plead that I paid more attention to the thread title than I should have done. I seem to recall (possibly a couple of "admins" ago) it being explained that the likely computation size was based on a [simplistic] knowledge of where in the complete data stream the WU was targeting and the nature of the stream at that point. Hence my [poorly labelled] observations. Cheers - Al. |
Bill F Joined: 4 Jul 09 Posts: 107 Credit: 18,271,281 RAC: 3,594 |
I believe that Kevin is looking at the coding, which, once he fixes it, will improve the calculated download capacities (per system). For users like myself who crunch for over 36 projects, when my BOINC client says "let's go get some MilkyWay tasks" it will probably get a lot more; the current broken, backwards GFLOPS is saying "send fewer tasks". As of today it looks like there are about 6,100 active MilkyWay users working about 13,000 active systems. The fix, when implemented, will bump the production of the MilkyWay project. Bill F
|
Bill F Joined: 4 Jul 09 Posts: 107 Credit: 18,271,281 RAC: 3,594 |
Kevin, can you update the GFLOPS status for those following this thread? Thanks, Bill F
|
Bill F Joined: 4 Jul 09 Posts: 107 Credit: 18,271,281 RAC: 3,594 |
Kevin, now that the current application is in, running, and largely stable, can you revisit the backwards GFLOPS calculation issue? This could result in a positive improvement to the project with improved task distribution. Respectfully, Bill F
|
|
Joined: 11 Sep 24 Posts: 10 Credit: 10,421 RAC: 0 |
We have changed our GFLOPS calculation to be much more accurate than it was. The new calculation will be included in upcoming updates. |
|
Joined: 5 Oct 25 Posts: 2 Credit: 1,957 RAC: 15 |
With a lack of work units for my three other BOINC projects, and not long after I installed Podman, apparently successfully, at BOINC's suggestion, I added MilkyWay@home. I'm running 3 cores of my Ryzen 9 5900X at 20% of CPU time in order to keep its temperatures (and my electricity bill) within reasonable limits. Unlike my other projects (when running: climateprediction.net, LHC@home and Rosetta@home), MilkyWay vastly overruns its anticipated calculation time, to the extent that most of my first batch were aborted while waiting even to start due to overrunning the start date. I am currently running my second unit, which has been running for about two weeks and has a deadline of a week ago, yet the Elapsed Time is claimed to be 1 day, 17:10:36, with 1:06:53 to go. Am I doing something wrong? I notice another thread, now a few months old, referring to miscalculated time estimates. Even BOINC is now recommending the unit be aborted. |
Bill F Joined: 4 Jul 09 Posts: 107 Credit: 18,271,281 RAC: 3,594 |
We have changed our gflops calculation to be much more accurate than what it was. The new calculation will be included in upcoming updates. Thank you. This issue has been hanging for a very long time, and your fix will aid in improved download amounts that more closely match each system's capabilities and loading. You mentioned upcoming updates. Would this be an upgrade to the applications, or something else? Respectfully, Bill F
|
Keith Myers Joined: 24 Jan 11 Posts: 738 Credit: 565,020,458 RAC: 19,016 |
A new app that fixes this long running problem will be most appreciated.
|
©2025 Astroinformatics Group