21)
Message boards :
News :
Scheduled Maintenance Concluded
(Message 65821)
Posted 15 Nov 2016 by Arivald Ha'gel Post:
IMHO your PC will still be saturated :) The current max of 80 tasks will take your PC at least 10 minutes to crunch through :) If Jake increases bundle sizes, this will be even easier to achieve.
22)
Message boards :
News :
Scheduled Maintenance Concluded
(Message 65819)
Posted 15 Nov 2016 by Arivald Ha'gel Post: OK, for what it's worth, I downloaded the 1.43 Nvidia apps for WinXP and Win10 a while ago. Results so far: for 12h of work I got 3 "unable to validate", about 1200 "validated", and a little above 200 "validation inconclusive".

As for CPU: since the fast Modification Fit, the app took 1 CPU core for the first few seconds (3-4s), then only GPU, but needed 1 CPU core at the end for 5-6s. This essentially creates a situation where a WU needs about as much CPU time as GPU time (on my system at least) :) That's why I run a few at the same time, so the GPU never rests. I run 4, and at first I start only 2. After a few seconds, I start the next 2. That way their CPU/GPU cycles are not identical, so the GPU stays saturated constantly.

Other projects (e.g. SETI, Einstein) have much larger work units sent to GPUs; each WU lasts from 15 minutes to an hour on my Radeon R9 290. Is there a reason MW ones are much smaller and have to be bundled?

It's just a methodology; it isn't necessarily true that bigger is better. I thought that one of those bundled WUs, but right now I can't find that info. For example, ClimatePrediction@Home has WUs that take 10 or even more DAYS. That doesn't mean they're great; they do have checkpointing and they upload their data periodically, but it's still quite a lot of time. The same goes for some subprojects in PrimeGrid, except there a single error causes 2-3 days' worth of GPU processing to go to waste, since it's not possible to upload a partial result. Thus there needs to be some reason behind the WU size :)

Although we might have a problem with a larger number of Hosts spamming computation errors and overall "unable to validate" results. This was seen some time ago when some Hosts were "rejecting" several thousand WUs per hour. For this problem with bigger WUs there is a solution: send bundles only to "proven" hosts, where "proven" means more than 1000 "Consecutive valid tasks". This is already tracked in "Hosts" -> "Details" -> "Application details". Of course, 1000 can be changed to any reasonable amount.

Bigger bundles would also increase my relative queue size (the arithmetic is sketched below). Previously I had 80 tasks, each taking 30/4 = 7.5s (since I process 4 tasks at the same time). This totaled 600s, or 10 minutes. Right now my queue is 80 tasks, each taking 2min01s, so it gives me 80 * 121/4 = 2420s = 40min20s. That's a lot better. However, in the event of server problems, that only gives me 40 minutes of work. Bigger bundles would let us be prepared for any server connection problems, whether local or remote.
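A minimal sketch of the queue-duration arithmetic above, assuming the per-task times and 4-way concurrency reported in this post (the function name and numbers are illustrative, not MilkyWay@Home code):

```python
def queue_duration_s(num_tasks, seconds_per_task, concurrency):
    """Wall-clock time to drain a queue when `concurrency` tasks run at once."""
    return num_tasks * seconds_per_task / concurrency

# Old single-WU tasks: 80 tasks x 30 s each, 4 at a time -> 600 s (10 min).
print(queue_duration_s(80, 30, 4))
# Bundled (x5) tasks: 80 tasks x 121 s each, 4 at a time -> 2420 s (~40 min 20 s).
print(queue_duration_s(80, 121, 4))
```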
23)
Message boards :
News :
Scheduled Maintenance Concluded
(Message 65811)
Posted 14 Nov 2016 by Arivald Ha'gel Post: Jake, would it be possible to create a subproject for bundles of 25/50/100? Right now we have these subprojects:
MilkyWay@Home
MilkyWay@Home N-Body Simulation
and MilkyWay@Home is clearly both CPU & GPU. Wouldn't it be better to split it up a little:
MilkyWay@Home CPU (single WU)
MilkyWay@Home GPU (bundle of 5 (20?) WUs - for lower-end GPUs)
MilkyWay@Home GPU (bundle of 50 (100?) WUs - for high-end GPUs)

Although we might have a problem with a larger number of Hosts spamming computation errors and overall "unable to validate" results. This was seen some time ago when some Hosts were "rejecting" several thousand WUs per hour.

A bundle of 5 still takes only 2 minutes (when computing 4 at the same time, so essentially 30s for 5 old WUs). Thus I think a subproject that bundles more WUs into a task would still be a valid idea, especially since a bundle of 5 takes 120-130s on my PC while a single WU took ~26-30s (when computing 4 at a time). There is some improvement, so bundles of 20/50/100 would potentially increase our throughput even further; a rough comparison is sketched below. Also, as I have mentioned, increasing the "min time to contact" from 30s to 5 min would also decrease the load on the server.
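A rough way to compare effective throughput per old-style WU using the timings quoted above (a sketch with illustrative numbers and names, not project code):

```python
def seconds_per_old_wu(task_seconds, wus_per_task, concurrency):
    """Effective wall-clock seconds spent per old-style WU."""
    return task_seconds / (wus_per_task * concurrency)

# Single WU: ~26-30 s per task, 4 tasks at a time -> ~7 s per WU.
print(seconds_per_old_wu(28, 1, 4))
# Bundle of 5: ~120-130 s per task, 4 at a time -> ~6.25 s per WU.
print(seconds_per_old_wu(125, 5, 4))
```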
24)
Message boards :
News :
Scheduled Maintenance Concluded
(Message 65803)
Posted 14 Nov 2016 by Arivald Ha'gel Post: I just love that it takes less than 5 times as long as a single, non-bundled WU, but the credits are x5. There are slight problems, but I think we will make it past them. My WUs are already validated 50/50 :)
25)
Message boards :
News :
Scheduled Maintenance Concluded
(Message 65800)
Posted 14 Nov 2016 by Arivald Ha'gel Post: The application works OK right now, except for reverting to 0% (but that's not an immediate issue). So I believe it would be proper to increase the min time to contact the MilkyWay@Home server to 5 minutes, or at least definitely much more than the current 30s (or so); the rough effect on request rate is sketched below.
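A back-of-the-envelope look at why a longer minimum contact interval cuts scheduler load (a sketch only; the ~30s figure is the poster's estimate of the current setting):

```python
def max_requests_per_hour(min_contact_interval_s):
    """Upper bound on scheduler requests per host per hour for a given interval."""
    return 3600 / min_contact_interval_s

print(max_requests_per_hour(30))   # ~30 s interval -> up to 120 requests/hour/host
print(max_requests_per_hour(300))  # 5 min interval -> at most 12 requests/hour/host
```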
26)
Message boards :
News :
Scheduled Maintenance Concluded
(Message 65712)
Posted 12 Nov 2016 by Arivald Ha'gel Post: I think Jesse hit the nail on the head in message 65703. I think I already stated that in message 65688, but sure. I said that there's no GPU usage in opencl_ati tasks.
27)
Message boards :
News :
Scheduled Maintenance Concluded
(Message 65688)
Posted 11 Nov 2016 by Arivald Ha'gel Post: Looks like the Milkyway@Home 1.42 (opencl_ati_101) app is a CPU-only app. No GPU usage (at all). 4 WUs have gone back to 0% from around 77-78%. (Copied from the slot directory of one of the WUs after it went back to 0%)
28)
Message boards :
News :
Scheduled Maintenance Friday November 11th
(Message 65636)
Posted 10 Nov 2016 by Arivald Ha'gel Post: Good luck! How many WUs will be bundled in a single task? (I hope for at least 10) :) I hope that my R280X will finally have enough work to do... it's getting cold in my place :)
29)
Message boards :
News :
Updated Server Daemons and Libraries
(Message 65513)
Posted 22 Oct 2016 by Arivald Ha'gel Post: The Radeon 280X is best at double precision work - all the TOP computers have this specific GPU. It is, however, one of the least efficient per watt for single precision work. There are two projects that require double precision support: MW@H and PrimeGrid. Not many people want to spend cash just to grind prime numbers. I dedicate my 280X to MilkyWay@Home, and I'll buy a new, efficient GPU for single precision work (either a GTX 1080, or most likely a Radeon 480). Once upon a time I was flamed for wishing that old, almost ancient CPUs be excluded from MW@H. It was said that we need all the FLOPS we can get. Riiight... and now we specifically ignore GPU users' requests for a working project... because of what?
30)
Message boards :
News :
Updated Server Daemons and Libraries
(Message 65461)
Posted 17 Oct 2016 by Arivald Ha'gel Post: For me the forum works rather OK (though not the best). However, checking my tasks takes ages - there's a problem in the DB. The server status is also not optimistic - 1.3M workunits waiting for validation. It seems the validator is overloaded... either from the DB perspective, or we just need one more. Workunit distribution doesn't work perfectly either; I still get some work done on my "backup" projects.
31)
Message boards :
News :
Updated Server Daemons and Libraries
(Message 65351)
Posted 29 Sep 2016 by Arivald Ha'gel Post: My PC (Radeon 280X) also isn't getting workunits. Most often it gets 0 tasks when requesting new tasks. The problem still persists. I can even say it's worse: 2k credits for 2016-09-29, so about 340k too low...
32)
Message boards :
News :
Updated Server Daemons and Libraries
(Message 65304)
Posted 27 Sep 2016 by Arivald Ha'gel Post: My PC (Radeon 280X) also isn't getting workunits. Most often it gets 0 tasks when requesting new tasks. It started having problems on 2016-09-12, 2016-09-14 - 2016-09-20, and once again since 2016-09-23. The PC has a capacity of ~350k credits per day, and has had days when it was < 100k (even below 20k). http://boincstats.com/en/stats/61/user/detail/1021475/lastDays
33)
Message boards :
News :
MilkyWay@home Version 1.38 Released
(Message 65190)
Posted 20 Sep 2016 by Arivald Ha'gel Post: What about "Using a target frequency of 60.0" not conforming to the "MilkyWay@Home preferences"? Will it be coupled with the preferences like in old versions, or will it be removed from the logs (because it doesn't give any real information if it's not coupled to the preferences)?

On a side note, I fixed the server running out of work units, so that should no longer be a problem.

It seems that's not true. My PC still runs out of work. I have the event log if you want. For the last few days my PC has been computing only 30-35% of the time. Here's the link: http://www.filedropper.com/boinceventlog
34)
Message boards :
Number crunching :
"Number of Tasks Today" not working
(Message 64973)
Posted 3 Aug 2016 by Arivald Ha'gel Post: I see many Hosts that generate a LOT of computation errors and validate errors. Like: http://milkyway.cs.rpi.edu/milkyway/host_app_versions.php?hostid=606779
Max tasks per day: 10000
Number of tasks today: 16588
So it should not receive new tasks, but it still does - 16,000+ erroneous tasks. That way most "Validation inconclusive" tasks end up as "Unable to validate". Also, why is the default "Tasks per day" 5000? That means we allow such Hosts to spoil at least 5000 workunits per day. Do we really not care about the scheduler, server resources and volunteer resources? I also assume that once a workunit fails, someone has to manually decide why and whether it should be repeated. Do we really need that? Shouldn't "Tasks per day" start at about 500 and go higher with each "accepted" unit, as sketched below?
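A minimal sketch of the adaptive per-host quota this post proposes (BOINC's actual daily-quota logic differs; the class name, thresholds and update rules here are illustrative assumptions, not server code):

```python
class HostQuota:
    """Illustrative per-host daily task quota: starts low, grows on valid
    results, shrinks quickly on errors."""

    def __init__(self, start=500, floor=1, ceiling=10000):
        self.quota = start
        self.floor = floor
        self.ceiling = ceiling

    def on_valid_result(self):
        # Reward proven hosts with a slightly larger daily allowance.
        self.quota = min(self.ceiling, self.quota + 1)

    def on_error(self):
        # Cut the allowance for hosts spamming computation errors.
        self.quota = max(self.floor, self.quota // 2)

    def may_send_work(self, tasks_sent_today):
        return tasks_sent_today < self.quota
```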
35)
Message boards :
Number crunching :
New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new!
(Message 63859)
Posted 8 Aug 2015 by Arivald Ha'gel Post: That is visible in my profile (Poland) :) Today it's 33 degrees air temperature, and up to 43 degrees ground temperature.
36)
Message boards :
Number crunching :
New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new!
(Message 63855)
Posted 7 Aug 2015 by Arivald Ha'gel Post: It's quite hot outside (up to 40 degrees C). Today the card reached 76 degrees with 93% fan speed. It's HOT out here...
37)
Message boards :
Number crunching :
New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new!
(Message 63853)
Posted 6 Aug 2015 by Arivald Ha'gel Post: It's quite hot outside (up to 40 degrees C). The card reaches 71 degrees; it could be cooler, but it's located next to my 3rd card. The second R280X is not crunching, since it's on "top", and when stressed it can reach up to 91 degrees...
38)
Message boards :
Number crunching :
New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new!
(Message 63851)
Posted 5 Aug 2015 by Arivald Ha'gel Post: Both the Windows and Linux schedulers prioritize physical cores over logical cores, so don't worry. What's "fun" is that my R280X (GPU 1000MHz, RAM 850MHz), when computing 4 workunits at the same time, has an average time (divided by the number of simultaneous workunits) of 22s per workunit. I have yet to try 1.1GHz, however in the middle of summer it might just be idiotic to strain the cooling that much.
39)
Message boards :
Number crunching :
New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new!
(Message 63512)
Posted 4 May 2015 by Arivald Ha'gel Post: Then I don't really know what to do. Have you tried the "last resort" - calling the manufacturer/vendor?
40)
Message boards :
Number crunching :
New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new!
(Message 63507)
Posted 3 May 2015 by Arivald Ha'gel Post: That's indeed strange, even more so since 1550MHz memory seems a little high... Well, sometimes a different version of MSI Afterburner helps. You might want to try MSI Afterburner 3.0.0 Beta 15: http://goo.gl/h95FWE Like the guy from https://www.youtube.com/watch?v=-IJiWMwK11I, I'm able to downclock the memory. Without this, my R9 280Xs would not be stable enough for S@H or Einstein@Home.