1)
Message boards :
News :
Server Issues
(Message 74537)
Posted 22 Oct 2022 by zioriga Post: I have 300 WUs to report, but the update command generates these messages:
10/22/2022 8:07:36 AM | Milkyway@Home | Fetching scheduler list
10/22/2022 8:07:38 AM | | Project communication failed: attempting access to reference site
10/22/2022 8:07:39 AM | | Internet access OK - project servers may be temporarily down.
and communication is deferred for 24 hours |
2)
Message boards :
News :
New Poll Regarding GPU Application of N-Body
(Message 70996)
Posted 22 Jul 2021 by zioriga Post: And do you mean NVidia or ATI GPUs? Usually ATI cards are faster than NVidia cards when double precision is required |
3)
Message boards :
Number crunching :
Only one WU per Computer ?
(Message 67980)
Posted 3 Jan 2019 by zioriga Post: - resource share = 0, i.e. backup project. Thanks a lot! That was the solution. Now it's working fine and I have 80 WUs in the queue |
4)
Message boards :
Number crunching :
Only one WU per Computer ?
(Message 67978)
Posted 3 Jan 2019 by zioriga Post: Is there a reason why I receive only one WU per computer (one per GPU and one per CPU)? I checked on my Win 10 / NVidia 1080 on an Intel i7 5820 with BOINC 7.14.2 (the same PC also has an ATI 560), and on another of my computers with Linux Mint 17.3 / NVidia 660 with BOINC 7.2.42, but all behave the same way. I asked someone else on my team and things are different there (people receive up to 160 WUs per PC). Some time ago I received many more WUs (I remember a number near 40 or more) |
5)
Message boards :
News :
GPU Issues Mega Thread
(Message 65154)
Posted 16 Sep 2016 by zioriga Post: I found the same problem with the Windows 10 Anniversary update; after reinstalling the NVIDIA driver, everything runs correctly on my GTX 1080 |
6)
Message boards :
Number crunching :
Work fetch errors
(Message 57858)
Posted 9 Apr 2013 by zioriga Post: I submitted this problem to the BOINC development group (David Anderson). The problem is solved in the new BOINC version (7.0.60) |
7)
Message boards :
Number crunching :
Work fetch errors
(Message 57853)
Posted 8 Apr 2013 by zioriga Post: I also have the same problem |
8)
Message boards :
Number crunching :
Problems with "Use GPU while computer is in use" setting
(Message 36708)
Posted 22 Feb 2010 by zioriga Post: I waited for news about the problem I submitted, but nothing happened!! The problem is still alive, also in the .24 (CUDA 23) version. Is there an estimate for a solution? |
9)
Message boards :
Number crunching :
Problems with "Use GPU while computer is in use" setting
(Message 34380)
Posted 8 Dec 2009 by zioriga Post: Right now the only way to bypass this problem is, when I receive some WUs, to set the client to "Use GPU while computer is in use", wait until all the WUs finish, and then restore the previous setting, "Don't use GPU while computer is in use". |
10)
Message boards :
Number crunching :
Problems with "Use GPU while computer is in use" setting
(Message 34379)
Posted 8 Dec 2009 by zioriga Post: Despite the new 0.24 version, the problem I submitted has not been resolved!! |
11)
Message boards :
Number crunching :
Problems with "Use GPU while computer is in use" setting
(Message 32878)
Posted 29 Oct 2009 by zioriga Post: @Cluster Physik: As far as I remember, I've never changed the "Write to disk at most every" parameter in any of the projects I'm crunching. BTW, in Milkyway this parameter is 60 seconds |
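For readers following along: the "Write to disk at most every" preference discussed above is the BOINC checkpoint interval, stored in the client's global_prefs.xml. A minimal fragment showing the 60-second value mentioned in the post (element names follow the standard BOINC preferences format; the surrounding file contains many other elements omitted here):

```xml
<global_preferences>
    <!-- Checkpoint ("write to disk") at most every 60 seconds -->
    <disk_interval>60</disk_interval>
</global_preferences>
```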
12)
Message boards :
Number crunching :
Problems with "Use GPU while computer is in use" setting
(Message 32870)
Posted 29 Oct 2009 by zioriga Post: @David. Warning! I said "Not" to use the GPU while the computer is in use in the BOINC Manager Advanced Preferences, whereas in the Milkyway computing preferences it is set to "Use GPU while computer is in use". BTW, I submitted this problem to BOINC Alpha Test |
13)
Message boards :
Number crunching :
Problems with "Use GPU while computer is in use" setting
(Message 32850)
Posted 28 Oct 2009 by zioriga Post: Sorry, but the "leave application in memory" option was already enabled. And this problem is specific to this project; all the other CUDA projects work fine in this situation |
14)
Message boards :
Number crunching :
Problems with "Use GPU while computer is in use" setting
(Message 32847)
Posted 28 Oct 2009 by zioriga Post: I've found this error with the "Use GPU while computer is in use" checkbox unchecked. If you don't use the computer, progress runs correctly; if you start working, progress stops (which is correct!!), but if you then stop working again, progress restarts from the beginning (0.0). Worse still, if you repeat this cycle several times, the elapsed time can reach many hours and the WU aborts with a "Computation error". WinXP 64b - BM 6.10.16 64b - GTX260 - CUDA 191.07 |
15)
Message boards :
Number crunching :
New faster application?
(Message 9715)
Posted 5 Feb 2009 by zioriga Post: I tried the zslip optimized application on my AMD 5200 X2 - Windows XP 64b, but all the WUs crashed. I'm asking whether those applications are accepted by the project administrator. For now I'm discarding those applications and returning to the normal application. http://www.zslip.com/ |
16)
Message boards :
Number crunching :
First WU ended with computational error
(Message 123)
Posted 6 Oct 2007 by zioriga Post: OK, thanks. I hope you'll debug it soon!! In the meantime, do I lose credits? (in other words, did I crunch for nothing?) Bye bye |
17)
Message boards :
Number crunching :
First WU ended with computational error
(Message 118)
Posted 5 Oct 2007 by zioriga Post: This is the message for this first WU. Windows XP SP2 on an AMD 3000 (32b):
05/10/2007 21.47.37|Milkyway@home|Starting test_wu_33_12
05/10/2007 21.47.38|Milkyway@home|Starting task test_wu_33_12 using astronomy version 105
05/10/2007 22.21.32|Milkyway@home|Deferring communication for 2 hr 35 min 5 sec
05/10/2007 22.21.32|Milkyway@home|Reason: Unrecoverable error for result test_wu_33_12 ( - exit code -1073741819 (0xc0000005))
05/10/2007 22.21.32|Milkyway@home|Computation for task test_wu_33_12 finished |
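A note on the exit code in the log above: BOINC logs the process exit status as a signed 32-bit integer, and -1073741819 is the two's-complement form of the Windows NTSTATUS value 0xC0000005 (an access violation, i.e. the application crashed). A minimal sketch of the conversion (the helper name `ntstatus` is illustrative, not a BOINC function):

```python
def ntstatus(exit_code: int) -> str:
    """Convert a signed 32-bit process exit code, as logged by BOINC,
    into the unsigned Windows NTSTATUS value it represents."""
    return hex(exit_code & 0xFFFFFFFF)

print(ntstatus(-1073741819))  # 0xc0000005 (STATUS_ACCESS_VIOLATION)
```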
©2024 Astroinformatics Group