Welcome to MilkyWay@home

Posts by Vid Vidmar*

41) Message boards : Number crunching : Cruncher's MW Concerns (Message 33083)
Posted 5 Nov 2009 by Vid Vidmar*
Post:
Again some reactionary posts. I've looked at my results and can say the credits have not changed in any way. A run time on my 4870 of 50 to 52 seconds still gets me 53.45 credits.

I think Travis doubled the size of the WUs that were taking ~26 seconds and giving ~26.7 credits, as I can't find any of those listed in my results any more, and there are no WUs with more than 53 credits listed.


Again, due to such fast result purging, I cannot prove my past observations, and things seem to have been sorted out by now. I would also like to apologize for my earlier behaviour, to which contributed the fact that my ATIs had experienced an unexplained slowdown of about 20%, discovered only after I made those posts. However, I still claim that there were WUs that ran 1.5x longer than others with the same credit grants, though I will never be able to prove it. And one small suggestion before I drop this topic completely: it wouldn't hurt if Travis told us not only that there was an increase in runtime, but also by how much, and which WUs it affects.
42) Message boards : Number crunching : No Thankyou (Message 33069)
Posted 5 Nov 2009 by Vid Vidmar*
Post:
I doubt it ... history says far more people threaten to go away than actually do so ... and in any case I am not leaving ... :)

Though I will admit that I am not a super-user type in that I only have a few GPUs and those I have are not the top of the line even ... still, better than a poke in the eye with a sharp stick (or even a dull one...)


I can agree with that; sometimes leaving a project takes some effort too, and for some who are so fed up, even that effort is too much, especially those who have many computers (some of them scattered around), so it's easier not to do anything.
Also, I must admit that I overreacted a bit in my previous posts, as I discovered that, for some unknown reason, my ATIs had been running a bit (around 20%) slower from 10am UTC till now (I noticed that my Collatz tasks took 20% longer to complete). However, even accounting for this, the new WUs are still some 1.6 times longer than before but grant the same credit, which still stinks.

But I fail to understand that eye poking thing you wrote at the end of your post.
BR,
43) Message boards : Number crunching : No Thankyou (Message 33065)
Posted 5 Nov 2009 by Vid Vidmar*
Post:
That said though, I don't think a server upgrade alone will suffice... The GPUs need more complex work so that they are not pounding away all the time...

Collatz just increased the task size by 50% to slow the activity there ... not sure if it is working or not, only time will tell ... and by working I mean reducing the server load ... I know the tasks are taking longer, though on my fastest the time only went from 9-10 minutes a task to ~15-17 minutes per ... still, it is an increase ... which will slow my hit rate ...


And the same was tried here, but with only half a mind put into it. Sad, and damn sloppy. But it will help reduce the load on the server to almost zero, as there will be a mass exodus from here...
44) Message boards : Number crunching : Cruncher's MW Concerns (Message 33062)
Posted 5 Nov 2009 by Vid Vidmar*
Post:
From the news section on the front page:

I was just about to post about the sensibility of this move (increasing WU length) in the "Thank you" thread, until I checked my results. Times have doubled while credits remain the same, so we were screwed once again. And just when I got MW to crunch on my 5870 without VPU crashes and recoveries every 12-24h. So, I'd like to thank you for sticking it up our arses by only doing one half of what you were supposed to do, once again!
I'm joining boosted and other top users in going 100% Collatz. Thanks for making at least this decision an easy one.


Vid, have you been listening to Laibach at extreme volumes? :-)

Nobody at RPI is trying to 'screw' you or anyone else that is crunching here. Let's collar that dog and give Travis and his team a bit of time to work things out. The MW team and the faculty that run this project want it to succeed!

BTW - Laibach rules!


Well, I do in fact live in Ljubljana (which was called Laibach while we were under Austro-Hungarian rule; German speakers still call it Laibach). And no, I don't like them or listen to their music very much. I am more a drum'n'bass person, which I enjoy at any volume; you might call it extreme at the moment, yes.
I know that at RPI nobody is intentionally trying to screw anyone among us (unless some sympathies have been formed through the PM system), but this project has demonstrated more sloppiness when applying changes than one would consider reasonable. Everything here seems to be only half done: every change only half implemented, the user base less than half considered, caches less than half full, and so on. As for posting such news on the front page, well, let's see: jobs are longer, great, less stress for the servers; jobs do run, great for science; however, the only "compensation" users get for running this project just got halved yet again after numerous previous reductions. I think these facts speak for themselves. If it wasn't intentional, then someone has a very looooooong way ahead of him to learn how to properly change or fix things.
I hate to compare one project to another, as each has its own individual characteristics; however, the way changes such as longer runtimes are implemented should be comparable across projects. Recently over at Collatz, job runtimes were extended by 50% and credits were adjusted accordingly at the same time (not retroactively), as many other projects have done before (even SETI). Which leads me to believe the changes here were made on impulse, without any planning or consideration prior to acting.
45) Message boards : Number crunching : Cruncher's MW Concerns (Message 33059)
Posted 5 Nov 2009 by Vid Vidmar*
Post:
From the news section on the front page:
November 4, 2009
I've started some new searches with larger sized workunits, so hopefully these will help the server strain. Let us know how they're running.
--Travis

I was just about to post about the sensibility of this move (increasing WU length) in the "Thank you" thread, until I checked my results. Times have doubled while credits remain the same, so we were screwed once again. And just when I got MW to crunch on my 5870 without VPU crashes and recoveries every 12-24h. So, I'd like to thank you for sticking it up our arses by only doing one half of what you were supposed to do, once again!
I'm joining boosted and other top users in going 100% Collatz. Thanks for making at least this decision an easy one.
46) Message boards : Number crunching : No Work.... (Message 32917)
Posted 1 Nov 2009 by Vid Vidmar*
Post:
Validator needs some serious kicking. Wasn't this Collatz's weekend grippe?
47) Message boards : Number crunching : HD5870 (Message 32901)
Posted 31 Oct 2009 by Vid Vidmar*
Post:
I did it! :D
Take a lookie! Yes, there are a 4870 AND a 5870 doing it together, each grunching 3 WUs, all under Win XP 64-bit. Yesterday after work I started tweaking the w and f cmdline parameters, and today I hit the sweet spot with w1.1 and f80. GPU load is around 90-97% on both cards with NO VPU recoveries.
Hope this info helps anyone...
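In case anyone wants to copy this: the parameters go on the app's command line, which for those of us using app_info.xml means the <cmdline> tag of the app_version block, roughly like below (surrounding tags elided; w1.1/f80 are just the values that happened to suit my two cards, so check your opti app's readme — as I understand it, f sets the size of the work packages and w the pause between them):

    <app_version>
        <app_name>milkyway</app_name>
        ...
        <cmdline>w1.1 f80</cmdline>
    </app_version>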

[edit]written and edited on that same computer[/edit]
BR,
48) Message boards : Number crunching : HD5870 (Message 32789)
Posted 26 Oct 2009 by Vid Vidmar*
Post:
...
I haven't Tweaked anything because I really don't have any idea what the Tweaks are for, so I'd just be flying blind if I started Tweaking. I run Collatz exclusively now, where I don't have any VPU errors and no Tweaking is required, at least not for my systems anyway ... :)


You can read all about them in the readme file that comes with the ATI optimized apps. So, if you are using those apps, you should have already read it ;D
BR,

49) Message boards : Number crunching : HD5870 (Message 32765)
Posted 26 Oct 2009 by Vid Vidmar*
Post:
I just had my Win XP 64-bit box with 2 5870's running the MilkyWay project for 90 minutes before it had a VPU error. As long as I don't do anything else with the box, like opening files or web browsing, the cards seem to stay running. But as soon as I do something, like I just did to incur the VPU error, it won't keep running the WUs. The WUs say they're running but they're not, and I have to exit & restart BOINC to get the WUs actually running again.


Hey.
Congrats on this success. I could never keep my VPU working for longer than a couple of ms after a WU start. From your description, I'd guess that the VPU hangs whenever there is a massive screen update. Could it be: the system requests a screen update, during which a part of the MW code runs, suspending the screen update, which in turn does something to the running drivers...? Have you tried tweaking the f, w and b command line parameters? I will try f60 and/or w2.0 just to see whether shorter work packages, or more time between them, have any effect at all.
BR,
50) Message boards : Number crunching : HD5870 (Message 32659)
Posted 23 Oct 2009 by Vid Vidmar*
Post:
BUMP!
Any news on getting MW to run on the 5870 under WinXP 64? Collatz is running superfine, so I don't think it's a big thing to fix the MW ATI app.
BR,
51) Message boards : Number crunching : Milkyway not running... (Message 32603)
Posted 21 Oct 2009 by Vid Vidmar*
Post:
Hi all,

In short, this is my problem:

I would like to run Milkyway for most of the time and use Collatz as a backup project. I do have some network drops and it doesn't reconnect automatically so I really need a backup project.

My dual-core runs Collatz, SETI and Primegrid with resource share 100 and MW with resource share 700.

However, although the resource share is respected by the CPU-only projects (SETI & Primegrid), it seems to be ignored by the ATI GPU-only projects. Collatz is running almost all the time....

Any hints?

BTW, I use client v6.10.13, all but Primegrid with app_info. No problems at all crunching, i.e. no errors, just the resource share problem. BOINC connects every 0.1 days and has an additional work buffer of 4 days.


It might be because MW is low on or out of work?
On the other hand, so far BOINC supports resource share only per project. I proposed to make it per resource (CPU, GPU, ...) and was calmly ignored. I also reported a bug in the feature that for a while enabled me to achieve per-resource shares (using 2 clients in parallel; I was even able to process MW on CPU and GPU at the same time), but no devs stirred. So, if you happen to make any of the devs even flinch, all kudos to you.
BR,
52) Message boards : Number crunching : Little help for those using opti apps. (Message 32546)
Posted 19 Oct 2009 by Vid Vidmar*
Post:
VM is out of the question. Been there, done that. So far VMs are good only for easy tasks != crunching.
Instead I will expand my one-BOINC-CC-per-resource idea a bit further. At this point I won't promise when, as what I have in mind requires some BOINC CC code tweaking, with which I'm unfortunately insufficiently familiar. But in time, I'll show you it can be done and it will work.
BR,


Well, yesterday I did it! With both the --allow_multiple_clients command line parameter and the <suppress_net_info> tag in cc_config.xml, I was able to run MW on CPU and GPU concurrently on one computer AND with different resource shares for each resource, WITHOUT the use of a VM. However, my triumph was a short one. After the second restart of the GPU-designated CC, it refused to start, saying that another BOINC instance was running. Well, it was running; that's why I supplied the --allow_multiple_clients parameter in the first place. After that, I tried almost all versions from 5.10.45 up. No go. Another bug, I guess, caused by something stored in one of the configuration files (anyone care to compile a current CC with the another-instance-running check commented out? It would make my day!).
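For anyone wanting to try the same setup, this is roughly the recipe (directory names and the RPC port number are just examples from my box; --dir and --gui_rpc_port are the standard client switches for the data directory and the manager port):

    rem "CPU" client: its own data directory, default RPC port
    boinc.exe --dir C:\BOINC_cpu

    rem "GPU" client: second instance with its own data directory and
    rem its own GUI RPC port, so the two managers don't collide
    boinc.exe --allow_multiple_clients --dir C:\BOINC_gpu --gui_rpc_port 31418

plus, in each client's cc_config.xml:

    <cc_config>
        <options>
            <suppress_net_info>1</suppress_net_info>
        </options>
    </cc_config>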
Anyway, after being successful with this experiment (the concept at least, as the software is obviously broken somewhere), I decided to try out my new HD5870 again (that is, on Win XP 64-bit). This time I put it beside the 4870. Drivers installed OK and both cards are running, but MW fails to run (locks up graphics) on either card, while Collatz runs fine. This leads me to think it's a driver thing, and I hope/wish/eagerly await that it will be fixed here soon, as it's just Collatz for my ATIs now.
BR,
53) Message boards : Number crunching : HD5870 (Message 32379)
Posted 15 Oct 2009 by Vid Vidmar*
Post:
Is anyone successful in grunching with a 5870 under Win XP64? I got mine running w/o a problem, but whenever I start an MW ATI app, the card freezes.
BR,
54) Message boards : Number crunching : Little help for those using opti apps. (Message 32357)
Posted 14 Oct 2009 by Vid Vidmar*
Post:
VM is out of the question. Been there, done that. So far VMs are good only for easy tasks != crunching.
Instead I will expand my one-BOINC-CC-per-resource idea a bit further. At this point I won't promise when, as what I have in mind requires some BOINC CC code tweaking, with which I'm unfortunately insufficiently familiar. But in time, I'll show you it can be done and it will work.
BR,
55) Message boards : Number crunching : Little help for those using opti apps. (Message 32317)
Posted 12 Oct 2009 by Vid Vidmar*
Post:
Hello.
Can anyone point me to an app_info.xml that is 6.10.x-ready for CPU and GPU opti apps, like the one for SETI (by the Lunatics crew), or should I try to make my own? (I'd consider the existence of such an .xml a great help for those of us using opti apps)
BR,

All opti apps come with an app_info file that is needed to run them.
Apps -> http://www.brilliantsite.com


Hey banditwolf. Thanks for your answer, which is correct for the question as asked. So I'll refine it: the Lunatics crew have provided an app_info.xml that allows one to crunch CPU (Multibeam and Astropulse) and GPU (Multibeam) WUs on the same computer. I was wondering whether anyone has done the same for MilkyWay?
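In the meantime, here is the rough skeleton I would start from (the executable name, version number and cmdline values below are placeholders from my own experiments, so check them against your opti app's readme before using any of this):

    <app_info>
        <app>
            <name>milkyway</name>
        </app>
        <file_info>
            <name>astronomy_ati_opt.exe</name>  <!-- your opti app's exe -->
            <executable/>
        </file_info>
        <app_version>
            <app_name>milkyway</app_name>
            <version_num>19</version_num>       <!-- match the server's app version -->
            <avg_ncpus>0.05</avg_ncpus>         <!-- fraction of a CPU the GPU app needs -->
            <max_ncpus>1</max_ncpus>
            <coproc>
                <type>ATI</type>
                <count>1</count>
            </coproc>
            <cmdline>w1.1 f80</cmdline>
            <file_ref>
                <name>astronomy_ati_opt.exe</name>
                <main_program/>
            </file_ref>
        </app_version>
    </app_info>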

OTOH this would be just a step toward the complicated resource usage policy I'd like to achieve (on a Q9450 with an ATI 4870: on the CPU, MW (25%), SETI (25%), Primegrid (25%), other backup projects and the CPU part of GPU apps (25%); on the GPU, MW (99%) and Collatz as backup (1%)). So far I have been successful in implementing most of it by running 2 BOINC core clients in parallel, one for the CPU and the other for the GPU; however, it's impossible to have both attached to the same project(s), as the server scheduler recognizes both as the same computer, assigns a new CPID and new WUs, and then all hell breaks loose.
I went so far as to try changing hostnames (the <domain> tag) in client_state.xml, but on restart the client overwrites my changes.

For now, I have run out of ideas on what to try next, so I eagerly await suggestions.

BR,
56) Message boards : Number crunching : Little help for those using opti apps. (Message 32272)
Posted 11 Oct 2009 by Vid Vidmar*
Post:
Hello.
Can anyone point me to an app_info.xml that is 6.10.x-ready for CPU and GPU opti apps, like the one for SETI (by the Lunatics crew), or should I try to make my own? (I'd consider the existence of such an .xml a great help for those of us using opti apps)
BR,
57) Message boards : Number crunching : Grunching (Message 29860)
Posted 28 Aug 2009 by Vid Vidmar*
Post:
laviathan and verstapp, was that compared to the stock MW CPU app, or the optimized one? I came to a factor of 29 when comparing the GPU and CPU opti app runtimes on my C2D E8500 and ATI HD 4870.
BR,
58) Message boards : Number crunching : Problem with ATI GPU crunching (Message 29859)
Posted 28 Aug 2009 by Vid Vidmar*
Post:
Hey.
BOINC 6.10.x is still unstable and lots of people have great difficulties with it. However, I found a solution that accomplishes exactly what you wish to do.
Read here to get the idea.

So, with 1. already implemented in the BOINC CC, it's just a matter of getting the right number of WUs running on the "ATI" client. I did this by setting ncpus to 3 in cc_config.xml and avg_cpus and max_cpus to 1 in app_info.xml. That way I am always running 3 MW WUs concurrently (as long as there are MW WUs available), regardless of the work amount and scheduling of the "CPU" client.
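Concretely, the two fragments look roughly like this (tag layout from memory; in app_info.xml the tags are spelled avg_ncpus/max_ncpus, if I recall correctly):

    cc_config.xml:

    <cc_config>
        <options>
            <ncpus>3</ncpus>  <!-- pretend 3 CPUs, so the client runs 3 tasks at once -->
        </options>
    </cc_config>

    app_info.xml, inside the <app_version> block:

    <avg_ncpus>1</avg_ncpus>
    <max_ncpus>1</max_ncpus>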

Using this method, in just a bit more than 2 weeks, my RAC has risen from ~60k to 80k (and is still rising, expected to settle at ~90k).
BR,
59) Message boards : Number crunching : Conflict MW (ATI) & Aqua CPU (Message 29394)
Posted 15 Aug 2009 by Vid Vidmar*
Post:
Success!
I am now a happy cruncher. I did bork a couple of WUs though, but hey, now I have all the cores doing their CPU stuff and my ATI card chewing happily on 3 MW WUs at any given moment (if there is work OFC). If anyone is interested in details, I'll be happy to share them.
BR,
60) Message boards : Number crunching : Conflict MW (ATI) & Aqua CPU (Message 29318)
Posted 13 Aug 2009 by Vid Vidmar*
Post:
Look what I just found in the checkin notes:
David May 12 2008
- client: add <allow_multiple_clients> cc_config.xml option
- client: remove stress_shmem code
...
David May 12 2008
- client: change --allow_multiple_clients to a command line option
(it can't go in the config file)


Now, on to try it out. [edit]checkin quote[/edit]
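If I read the notes right, the switch goes on the client's command line, so starting a second instance should look something like this (the data directory is just an example; --dir is the standard switch for pointing a client at its own data dir):

    boinc.exe --allow_multiple_clients --dir C:\BOINC2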
BR,


