Posts by kashi
1) Message boards : News : Scheduled Server Maintenance 2/21 (Message 67143)
Posted 26 Feb 2018 by Profile kashi
Yes me too. Reset project, detached and reattached about 5 times. Even uninstalled and reinstalled BOINC. No tasks, won't even download executable file.
2) Message boards : Number crunching : New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new! (Message 67100)
Posted 17 Feb 2018 by Profile kashi
GTX 970, stock GPU 1329 MHz, (CPU, i7 6700K @4.2GHz)..................................273s......kashi

Driver 382.53, Win 8.1, BOINC 7.6.22.
Average of seven 227.23-credit tasks.
3) Message boards : Number crunching : "WU Freezes BOINC Manager" Redux... (Message 54575)
Posted 1 Jun 2012 by Profile kashi
BOINC 6.12.43 can be found at:

http://boinc.berkeley.edu/dl/?C=M;O=D

This is the same link I posted before.
4) Message boards : Number crunching : 7970 crunching more WUs (Message 54048)
Posted 16 Apr 2012 by Profile kashi
I use Process Lasso, as mentioned by Sabroe_SMC, to set processor affinity of applications when I need to do so. Used it in the past with my HD 5970 on PrimeGrid because the earlier Catalyst drivers used to have the busy wait bug with dual core GPUs. So each GPU task would use a full CPU core although they only needed about half a CPU core to run efficiently. Used Process Lasso to assign the ppsieve OpenCL application to one CPU core so both GPU cores were supported by one CPU core instead of one each. This freed a CPU core to be used for CPU projects. CPU project applications I set the affinity to the other 7 CPU cores.
5) Message boards : Number crunching : tasks being sent to wrong gpu card (Message 53895)
Posted 3 Apr 2012 by Profile kashi
Check that the "Remaining (estimated)" time for tasks is correct. I remember when POEM changed to fixed credit tasks, the DCF was tiny so the Remaining time went very high. I had to use a Minimum work buffer of 10 days or change the <flops> value in my app_info.xml file in order to increase the number of tasks in my cache. The DCF would adjust over a few days, but if a resend task with the much smaller estimated computation size was downloaded and processed then the DCF would go way down again.
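For reference, the <flops> workaround lives in the app_version section of app_info.xml. A minimal sketch is below; the app name, executable name, and the flops value are all placeholders for illustration, so copy the real names from your client_state.xml and tune the value yourself:

```xml
<app_info>
  <app>
    <name>example_app</name> <!-- placeholder: use the project's real app name -->
  </app>
  <file_info>
    <name>example_app_1.0_windows_x86_64.exe</name> <!-- placeholder executable name -->
    <executable/>
  </file_info>
  <app_version>
    <app_name>example_app</app_name>
    <version_num>100</version_num>
    <!-- A larger <flops> lowers the estimated runtime BOINC computes for each
         task, which increases how many tasks work fetch will request. -->
    <flops>1.0e11</flops> <!-- illustrative value only -->
    <file_ref>
      <file_name>example_app_1.0_windows_x86_64.exe</file_name>
      <main_program/>
    </file_ref>
  </app_version>
</app_info>
```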

Newer versions of BOINC report double the GFLOPS value for ATI cards. The older calculation was based on code by Crunch3r and related to double precision. This may affect GPU task work fetch.

You are doing 2 different types of work unit on SETI, perhaps this causes fluctuations in the DCF value. Fluctuating DCF could cause work fetch problems for both GPU projects. Although different cards are assigned to different projects, BOINC work fetch algorithms for the 2 GPUs are possibly combined, so the amount of work in the cache on one project affects when and how many tasks are downloaded for the other project.

You could try suspending one GPU project and see if the other project then downloads work. While you're at it you could suspend any CPU projects too. It shouldn't make a difference but development BOINC versions may use the ncpus value in strange ways.

You could also try setting your CPU projects to No new tasks and then increasing Minimum work buffer to the maximum 10 days to see if that forces GPU work download before cache is dry.
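If you prefer editing files to using the Manager dialog, the same buffer setting can go in global_prefs_override.xml in the BOINC data directory (re-read via the Manager's read-local-prefs option or a client restart). A minimal sketch, with the 10-day figure mirroring the suggestion above:

```xml
<global_preferences>
  <!-- Minimum work buffer ("Connect about every X days" in older clients) -->
  <work_buf_min_days>10</work_buf_min_days>
  <!-- Extra buffer requested on top of the minimum -->
  <work_buf_additional_days>0.5</work_buf_additional_days>
</global_preferences>
```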

You've been checking the Event Log, I suppose, to ensure you haven't had error tasks causing your daily quota to be reduced.

Maybe you could get greater detail in your Event Log by using a cc_config.xml with the extra logging commands related to work fetch.
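A minimal cc_config.xml along those lines (placed in the BOINC data directory) might look like this; work_fetch_debug and sched_op_debug are the flags most relevant to work-fetch problems, though they make the Event Log very verbose:

```xml
<cc_config>
  <log_flags>
    <!-- log the client's work-fetch decisions per project and resource -->
    <work_fetch_debug>1</work_fetch_debug>
    <!-- log scheduler requests and replies -->
    <sched_op_debug>1</sched_op_debug>
  </log_flags>
</cc_config>
```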

6) Message boards : Number crunching : tasks being sent to wrong gpu card (Message 53883)
Posted 1 Apr 2012 by Profile kashi
With development BOINC 7.0.xx versions, Connect about every x.xx days has now effectively become Minimum work buffer. In fact in the later 7.0.xx versions it has been renamed. If you leave it at 0 days which was previously recommended for an always on connection it will not download any new tasks until your cache is empty. The value needs to be set above 0 if you want to download work before the cache is empty.

This new method of controlling work download may cause tasks to run in high priority mode on projects that have a short deadline. The higher the value you use for Connect about every x.xx days/Minimum work buffer, the more likely it is that tasks may go into high priority mode. It depends on the deadline; for example, on projects with a short deadline of 2 days, tasks may run high priority with a value above 0.7-0.8 days. This can cause trouble if they run out of order while leaving other tasks "waiting to run". In later BOINC versions (after 7.0.14, I think) high priority tasks run in "earliest deadline first" order, which overcomes this problem on most projects. WCG can still upset the applecart though if a computer becomes a trusted host and gets sent repair tasks with a shorter deadline than normal tasks. These repair tasks may start to run in high priority mode as soon as they are downloaded.
7) Message boards : Number crunching : 6990 (Message 53843)
Posted 29 Mar 2012 by Profile kashi
It's not on the MilkyWay "GPU Requirements" thread list and is a Barts LE similar to the Barts Pro and XT of the HD 68xx cards which also don't support double precision. You can see on page 7 here where AMD states HD 6790 does not support double precision.
8) Message boards : Number crunching : 6990 (Message 53798)
Posted 26 Mar 2012 by Profile kashi
....I am in the process of installing a Sapphire HD 6790 GPU on my #1 PC AMD Phenom II X6 1090T, running @ 3.8 GHz. I hope to begin crunching Milky Way tasks on Monday 26 March 2012. Can't wait....

HD 6790 does not have double precision capability required for MilkyWay@Home.
9) Message boards : Number crunching : GPU Requirements (Message 53595)
Posted 9 Mar 2012 by Profile kashi
Have you tried the following?

Setting "Run only the selected applications MilkyWay@Home: yes
MilkyWay@Home N-Body Simulation: no"

Setting "If no work for selected applications is available, accept work from other applications?" to No

Aborting any nbody tasks in your cache

Suspending DistrRTgen

Increasing your "Minimum work buffer" in BOINC Manager, Tools>Computing preferences>network usage tab
10) Message boards : Number crunching : "Missed" Status (Message 53588)
Posted 8 Mar 2012 by Profile kashi
It would be hard to catch MilkyWay tasks, since they no longer have a separate upload but have the results included in the stderr section. BOINCTasks would need to poll in the time between task completion and reporting. For separation tasks being processed on a fast GPU this time can be very short.
11) Message boards : Number crunching : More CPU use since last week (Message 53533)
Posted 4 Mar 2012 by Profile kashi
The avg_ncpus value for a single task is of no concern as long as it is less than 1. How much CPU time is used for each task is all that matters.

As your computers are hidden, it is likely that few if any will respond, because only you can see what is happening on your computer.
12) Message boards : Number crunching : "WU Freezes BOINC Manager" Redux... (Message 53516)
Posted 2 Mar 2012 by Profile kashi
Hope 6.12.43 works OK for you.

To find the details of your hosts:

From project website: Your account>Computers on this account>click Free-DC(or BOINCstats) logo under first column, Computer ID.

From Free-DC website: On your "User by CPID Stats page" click your score for a current project. Hosts will be shown on the page you go to, click ID of host you are interested in. Only works if hosts aren't hidden.

Changes in host ID for the same computer mean that not all projects always show under the one host ID, especially for projects you have not recently contributed to, if you are no longer attached.
13) Message boards : Number crunching : "WU Freezes BOINC Manager" Redux... (Message 53499)
Posted 1 Mar 2012 by Profile kashi
Being careful about development BOINC versions is a healthy attitude, I understand your reluctance. I have used two of the 7.0.xx versions due to POEM OpenCL requirements so am a bit more comfortable now about using development versions to overcome specific problems. The very unstable versions with severe problems are usually quickly removed from the list of downloadable versions.

MB Atlanos is using 6.10.43 on a number of different projects on a Core2 Duo with Darwin 10.8.0. Also Cap is using 6.12.41 on a 24 core Xeon X5650 with Darwin 10.8.0.

Just because it doesn't become the recommended version doesn't necessarily mean a development version has problems. It can be that the improvements/changes are only minor or only needed by a relatively small number of users, so the recommended version is not changed. If the recommended version were changed to suit a very small number of users, then the number of problems caused by a large number of people updating to the new recommended BOINC version would outweigh the benefit of the fixes for the few. This is possibly why the recommended version is not changed very often.

The last few 6.12.xx development versions had more limited testing than usual as resources were moved to developing the 6.13.xx versions. Many of the changes in the 6.12.xx versions after 6.12.36 addressed virtual machine issues for Test4Theory.
14) Message boards : Number crunching : "WU Freezes BOINC Manager" Redux... (Message 53486)
Posted 29 Feb 2012 by Profile kashi
I think 6.12.43 was the last of the older kind similar to the current release version. As far as I know the new 7.0.xx type with the new scheduling code started at 6.13.0.
15) Message boards : Number crunching : "WU Freezes BOINC Manager" Redux... (Message 53476)
Posted 28 Feb 2012 by Profile kashi
....And, finally, for the time being I'll just clear MWAH/BOINC with a daily restart to keep the issue at bay until I'm ready to devote some time to chasing BOINC v.7 glitches!....

Not going to try 6.12.41 or 6.12.43 from my second link? Perhaps the best option if you don't mind restarting every day. Who knows if those versions have something else that could cause a problem.

I would never have found the reference to the fix if you hadn't found the first ticket reference from scasady.
16) Message boards : Number crunching : "WU Freezes BOINC Manager" Redux... (Message 53465)
Posted 28 Feb 2012 by Profile kashi
"fixed in 6.12.41"

http://boinc.berkeley.edu/trac/ticket/1144#comment:1

http://boinc.berkeley.edu/dl/?C=M;O=D
17) Message boards : Number crunching : "WU Freezes BOINC Manager" Redux... (Message 53441)
Posted 26 Feb 2012 by Profile kashi
Fair enough, I wasn't aware that MilkyWay GPU was not available for many Apple computers.

You had mentioned nbody tasks in your previous thread so I thought they may be involved somehow. So if it still occurs even when the nbody application is not selected and only separation tasks are processed, it cannot be due to one application or the other but to MilkyWay itself.

So what is different about MilkyWay compared to other projects, possibly in the network area? I haven't done any nbody tasks so I don't know if they are the same, but I know that separation tasks no longer send an upload file but include the results when they are reported. Perhaps the Mac BOINC client is waiting for uploads or acknowledgements that never happen? This may not be relevant but it's the only difference from other projects I can think of.

I can't offer any Apple specific help at all, I have never used an Apple computer and know almost nothing about them.
18) Message boards : Number crunching : "WU Freezes BOINC Manager" Redux... (Message 53435)
Posted 26 Feb 2012 by Profile kashi
One possibility is the scheduling/handover interaction between mt tasks and single threaded tasks. I remember when I did AQUA mt tasks I sometimes had the problem of idle cores, so the combination of mt tasks and single threaded tasks has a history of causing trouble in BOINC. You may have the opposite problem of an nbody mt task starting or attempting to start causing some separation tasks to freeze and boinc.exe becoming unresponsive.

I would not process separation tasks on a CPU myself because I feel CPU resources are more efficiently used to do nbody tasks or other CPU projects. I don't see the point in taking 2 hours on a CPU core for a task that can be done in under a minute on the last three generations of ATI/AMD cards.

If the nbody tasks are currently too short and cause work fetch issues I would share CPU resources with another project. Or, if I had no GPU, I would just give the MilkyWay project a miss until the nbody tasks are a more reasonable length and/or the number of tasks allowed in the cache is increased, so that selecting nbody tasks only no longer causes work fetch problems. 5 seconds of runtime is just too short with the current number of tasks allowed. Also, the shorter the mt tasks are, the more often BOINC switches from and to single threaded tasks as the mt tasks start or finish, so there is greater potential for trouble to occur, and for it to occur more often. If the CPU is busy at the time of the switch and it is delayed/interrupted then this could possibly cause the problem, especially as you are using all 8 cores for BOINC.

The first thing you could try is configuring BOINC to use 7 cores. This leaves a core free for operating system processes and so minimises/avoids momentary CPU overcommitment. Some do not like this idea because it reduces BOINC contribution slightly, but due to contention issues the reduction is less than the expected 12.5% because the remaining tasks speed up slightly. In the case of highly cache/memory-intensive CPU tasks the reduction in total work performed is so low that it is of no concern.
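The 7-of-8-cores suggestion corresponds to the "use at most X% of the processors" preference; in global_prefs_override.xml that is max_ncpus_pct, and 87.5% of 8 logical cores leaves one core free:

```xml
<global_preferences>
  <!-- 87.5% of 8 logical cores = 7 cores used by BOINC, 1 left free -->
  <max_ncpus_pct>87.5</max_ncpus_pct>
</global_preferences>
```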

If that didn't work you could then perhaps try limiting the number of CPU cores used by nbody tasks to 5 with an app_info.xml file. With the current short tasks you could possibly even limit nbody to 4, 3 or 2 cores but this would allow more than 1 nbody task to run at the same time. You would need to include both the nbody and the separation application in the app_info.xml if you wanted to do both. The theory here is that a smaller number of CPU cores being switched by BOINC when mt tasks start or finish has less potential to cause trouble. May not work of course, just something to experiment with if you wish.
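As a sketch of that app_info.xml approach: the <avg_ncpus> (and, in clients that honour it, <max_ncpus>) elements in the app_version section are what cap the cores budgeted for the mt task. The app name, version number, executable name, and the --nthreads switch below are assumptions for illustration; copy the real names from your client_state.xml and check the application's actual command line options first:

```xml
<app_info>
  <app>
    <name>milkyway_nbody</name>
  </app>
  <file_info>
    <name>milkyway_nbody_example.exe</name> <!-- placeholder: copy the real name from client_state.xml -->
    <executable/>
  </file_info>
  <app_version>
    <app_name>milkyway_nbody</app_name>
    <version_num>102</version_num>
    <plan_class>mt</plan_class>
    <!-- Budget 5 cores per nbody task instead of all 8 -->
    <avg_ncpus>5</avg_ncpus>
    <max_ncpus>5</max_ncpus>
    <!-- Assumed thread-count switch; verify against the app's options -->
    <cmdline>--nthreads 5</cmdline>
    <file_ref>
      <file_name>milkyway_nbody_example.exe</file_name>
      <main_program/>
    </file_ref>
  </app_version>
</app_info>
```

Remember that both the nbody and the separation application would need entries in the same app_info.xml if you want to run both.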

A further option is to try a future development version of BOINC which has had some changes made to mt task job scheduling. This is designed to fix the idle CPUs problem, but the changes may also affect your issues. These changes were only made 3 days ago so you will have to wait until a development BOINC version with the changes included is released. Changeset 25312 in boinc

You should only try a development version of BOINC after you have exhausted all other options. Only use as a last resort. The usual disclaimers apply about development versions of BOINC. May be unstable, may cause loss of all work, etc., etc. One particular thing to note is that you can't revert to a current release version of BOINC from development 7.0.xx versions without losing all work in progress. This is of particular importance to those who contribute to projects with very long running tasks such as Climate Prediction.
19) Message boards : Number crunching : getting errors with new v1.02 separation application? (Message 53372)
Posted 22 Feb 2012 by Profile kashi
Yes that's true, it would depend on what you are used to and how you use the computer. I remember when I was using a HD 5970 and a HD 5870, things were slowed a little. Mouse clicks and opening files were not noticeably affected but the time it took for webpages to display had increased a bit. I had thought it was my internet connection until one day when I had stopped GPU processing because both MilkyWay and Collatz were down at the same time. Webpages displayed almost instantly and I thought the phone line must have been upgraded. When MilkyWay came back and I started up GPU crunching again I noticed that the slight delay on webpage display had returned and the penny dropped.

I didn't say lag was non-existent with a target frequency of 100, I said it was unbearable on this computer with it lower than 100. The lag I experienced with more recent MilkyWay versions running 2 concurrent tasks when I used the default or a value less than 100 for --gpu target frequency was in another league altogether to the "normal" lag of a GPU used for the screen also being used for processing. Mouse clicks were non-responsive for a long time and the screen updated very slowly or froze completely. Basically the computer became unusable.

I'm currently running POEM OpenCL application on a single 5870 and it is a lot less GPU intensive than many other GPU projects. Even running 4 tasks concurrently and using 3 CPU cores to support GPU processing, GPU load is still only 90% and current draw is relatively low. Webpages display noticeably faster than when I'm running more intensive GPU projects at 99-100% GPU load.
20) Message boards : Number crunching : getting errors with new v1.02 separation application? (Message 53361)
Posted 21 Feb 2012 by Profile kashi
Yes that's probably it. The "324MB available" made me think the OpenCL line was referring to the HD 4290. Either way she's a mixed up, shook up girl.




Copyright © 2018 AstroInformatics Group