
Posts by Vid Vidmar*

21) Message boards : Number crunching : Multiple GPU's (Message 39093)
Posted 24 Apr 2010 by Vid Vidmar*
Post:
No.
22) Message boards : Number crunching : Question for GPU code writers of the world ... :) (Message 39009)
Posted 22 Apr 2010 by Vid Vidmar*
Post:
... we could schedule more than one task from more than one project to run "at the same time" though they would be interleaved ... but this would address Claggy's problem quite well and address my utilization problem as well ... SaH would get slices as needed to run its kernel and when it was done release the GPU for kernels from MW to be run (for example) getting the best use out of a scarce resource ...


I have had no problems running Collatz and MW on the same (ATI) card "at the same time", when I could trick BOINC into it (because of BOINC's FIFO handling of GPU tasks). So I really miss your point here, unless it's quite a different story in the CUDA world.
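For reference, newer BOINC clients can express this kind of sharing directly through an optional app_config.xml placed in a project's directory. A minimal sketch, assuming a client recent enough to support that file; the <name> must match the project's application name (milkyway here):

<app_config>
	<app>
		<name>milkyway</name>
		<gpu_versions>
			<gpu_usage>0.5</gpu_usage>	<!-- each task claims half the GPU -->
			<cpu_usage>0.05</cpu_usage>	<!-- CPU fraction reserved per GPU task -->
		</gpu_versions>
	</app>
</app_config>

With each project's tasks claiming 0.5 of the GPU this way, the client can run one task from each project on the same card.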
BR
23) Message boards : News : testing validator for new application (Message 38979)
Posted 21 Apr 2010 by Vid Vidmar*
Post:
<app_info>

	<app>
		<name>milkyway</name>
	</app>

	<file_info>
		<name>astronomy_0.20_x64_SSE3.exe</name>
		<executable/>
	</file_info>

	<file_info>
		<name>astronomy_0.23_ATI_x64.exe</name>
		<executable/>
	</file_info>

	<file_info>
		<name>brook64.dll</name>
		<executable/>
	</file_info>

	<app_version>
		<app_name>milkyway</app_name>
		<version_num>20</version_num>
		<avg_ncpus>1</avg_ncpus>
		<max_ncpus>1</max_ncpus>
		<platform>windows_x86_64</platform>
		<file_ref>
			<file_name>astronomy_0.20_x64_SSE3.exe</file_name>
			<main_program/>
		</file_ref>
	</app_version>

	<app_version>
		<app_name>milkyway</app_name>
		<version_num>21</version_num>
		<flops>1.0e11</flops>
		<avg_ncpus>0.05</avg_ncpus>
		<max_ncpus>0.5</max_ncpus>
		<platform>windows_x86_64</platform>
		<plan_class>ati13ati</plan_class>
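		<!-- a count below 1 lets the client run more than one task on the GPU at once; 2 x 0.48 fits on one card -->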
		<coproc>
			<type>ATI</type>
			<count>0.48</count>
		</coproc>
		<cmdline>b-1</cmdline>
		<file_ref>
			<file_name>astronomy_0.23_ATI_x64.exe</file_name>
			<main_program/>
		</file_ref>
		<file_ref>
			<file_name>brook64.dll</file_name>
		</file_ref>
	</app_version>

</app_info>

With this I crunch on both GPU and CPU.
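Note that the client reads app_info.xml only at startup, so restart BOINC after creating or editing the file.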
BR
24) Message boards : News : testing validator for new application (Message 38973)
Posted 21 Apr 2010 by Vid Vidmar*
Post:
One more question, maybe.

Why can't we crunch on the CPU AND the GPU (ATI) at the same time for MW?



JMG

I can, and so can you. Just make entries for both the CPU and GPU apps in app_info.xml. I will paste the contents of mine as soon as I get home from work (that should be around 2 hours from now).
BR
25) Message boards : Number crunching : It is fermi? How do you think? (Message 37845)
Posted 29 Mar 2010 by Vid Vidmar*
Post:
That, and a required hash check for all apps. It shouldn't be hard to get a new hash for one of Cluster Physik's apps verified and accepted when they need to be updated, and this would at least make it more difficult to use unsanctioned code. Of course, hash checks -can- be spoofed, but hopefully no one is that dedicated to messing up the project.


Hash checks don't work here since he's using the anonymous platform. The only way I see to prevent this is to implement the double precision check for the anonymous platform (plan class cuda23) too, like it's done for the stock CUDA app.


Or going to an initial replication of 2...
/me hides quickly
26) Message boards : News : Scripting to remove WUs from certain searches (Message 37835)
Posted 29 Mar 2010 by Vid Vidmar*
Post:
Aborting tasks is soooo yesterday! One needs to write a script to weed out those pesky long runners, and, even worse, one risks getting banned! Running SP apps, that's the way to go! No scripting, no sanctions, 4x+ the credits!
27) Message boards : Number crunching : It is fermi? How do you think? (Message 37834)
Posted 29 Mar 2010 by Vid Vidmar*
Post:
Hmmm... Since no action was taken, it appears that there is nothing wrong with running SP apps. So I think I'll install them too, as DP just wastes electricity. I wonder: if many crunchers were to switch to SP apps, would the staff remain so indifferent?
28) Message boards : Number crunching : It is fermi? How do you think? (Message 37767)
Posted 26 Mar 2010 by Vid Vidmar*
Post:
I think this is fake. Somebody modified the science app to do only 1/4 of the work. I hope the project admins will look into it.
29) Message boards : Number crunching : How can I run two WU's concurrently? (Message 37705)
Posted 23 Mar 2010 by Vid Vidmar*
Post:
I know I need to use the count tag, but I can't find which file this would go in. Any help would be appreciated.

VJG

The file is called app_info.xml, and it needs to be put in the project directory.
Here is what I use:
<app_info>

	<app>
		<name>milkyway</name>
	</app>

	<file_info>
		<name>astronomy_0.20_x64_SSE3.exe</name>
		<executable/>
	</file_info>

	<file_info>
		<name>astronomy_0.20b_ATI_x64_ati.exe</name>
		<executable/>
	</file_info>

	<file_info>
		<name>brook64.dll</name>
		<executable/>
	</file_info>

	<app_version>
		<app_name>milkyway</app_name>
		<version_num>20</version_num>
		<avg_ncpus>1</avg_ncpus>
		<max_ncpus>1</max_ncpus>
		<platform>windows_x86_64</platform>
		<file_ref>
			<file_name>astronomy_0.20_x64_SSE3.exe</file_name>
			<main_program/>
		</file_ref>
	</app_version>

	<app_version>
		<app_name>milkyway</app_name>
		<version_num>21</version_num>
		<flops>1.0e11</flops>
		<avg_ncpus>0.05</avg_ncpus>
		<max_ncpus>0.5</max_ncpus>
		<platform>windows_x86_64</platform>
		<plan_class>ati13ati</plan_class>
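		<!-- the "count" tag you asked about: 0.48 means each task claims just under half the GPU, so two WUs run concurrently -->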
		<coproc>
			<type>ATI</type>
			<count>0.48</count>
		</coproc>
		<cmdline>b-1</cmdline>
		<file_ref>
			<file_name>astronomy_0.20b_ATI_x64_ati.exe</file_name>
			<main_program/>
		</file_ref>
		<file_ref>
			<file_name>brook64.dll</file_name>
		</file_ref>
	</app_version>

</app_info>

BR
30) Message boards : Number crunching : more credits :) (Message 36961)
Posted 5 Mar 2010 by Vid Vidmar*
Post:
...
Anymore it does not matter if you crunch 1 or 2 at a time; they take an identical time.

1 at a time: 5:06
2 at a time: 10:12

No benefit at all.


Not true. If you run only one WU at a time, the GPU will sit idle between WUs. If you run two WUs at a time, offset from each other, the GPU is never idle.
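A quick illustration with made-up numbers: say each WU needs 12 s of GPU work plus 2 s of CPU-side setup and readback during which the GPU sits idle.

1 at a time: 12 / (12 + 2) = ~86% GPU utilization
2 at a time, offset: one task's GPU work overlaps the other's CPU phase, so utilization approaches 100%.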
BR,
31) Message boards : Number crunching : Optimized Apps for Both CPU and GPU (Message 36709)
Posted 22 Feb 2010 by Vid Vidmar*
Post:
You can do it if you do not use the "optimized apps"

The ATI app is the exact same app as what is in the optimized app package.


I wouldn't be so sure about that. On my home computer I have the MW app_info set up for the optimized CPU and GPU apps, each with its own distinct version number (20 CPU, 21 GPU), and on occasion I did receive WUs for both. I even took some screenshots, which I can post on ImageShack or some similar service when I get home, if anyone would like to take a look at them.
BR
32) Message boards : Number crunching : Testing ATI Application Availability (Message 35541)
Posted 12 Jan 2010 by Vid Vidmar*
Post:
So why is the app at 0.38 CPUs and taking about 25 seconds longer on my 4870?


Did you use the b-1 tag in your app_info?

I am back to using an app_info, as I did not like that slower crunching; of course, I also had to change my prefs to use the CPU, as it will not get any work when it asks for GPU tasks.


Yes, sux, doesn't it.
And there is no indication whatsoever that this might get fixed anytime (soon? lol). Anyway, it's time for me to dust off my almost forgotten "non DA power BOINC" CC plans and bring them into existence. The main features I intend to implement are:
- independent resource/project pairing, allowing the user to specify exactly which resources each attached project may use (see the sketch below)
- independent resource shares per resource type (now a resource share is global across all types of resources)
- true backup projects (now it is only possible to approximate this functionality using very low resource shares for "would-be" backup projects, which, in combination with the current flaky scheduler - not to mention the most idiotic "workaround" ever, the GPU FIFO execution rule - just doesn't function properly)
- benchmarking only in case of a resource change (new CPU, GPU) or in similar relevant cases

I'm still in the design/pseudocode phase, so it is still uncertain whether this will require writing the client/manager from scratch, depending on the usability of the current code. If anyone is interested and/or willing to help, PM me.
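To illustrate the first point, a purely hypothetical config fragment - every tag name below is invented for illustration and is not real BOINC syntax:

<project_policy>
	<project_url>http://milkyway.cs.rpi.edu/milkyway/</project_url>
	<allow_resource>ati_gpu</allow_resource>	<!-- invented: this project may only use the ATI GPU -->
	<resource_share resource="ati_gpu">60</resource_share>	<!-- invented: per-resource share instead of one global share -->
</project_policy>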

[edit]Made the subject of my reply bold[/edit]
BR,
33) Message boards : Number crunching : MW Preferences are Gone! (Message 35536)
Posted 12 Jan 2010 by Vid Vidmar*
Post:
Hi,

The computer has no work; it is only for Milkyway@home, CPU.
It has been waiting for work for more than 18 hours now.


The server seems to be sending out work... Is anyone else having this problem?


Ohhh, yes.
Since yesterday, or the day before, I get work only on CPU requests. When asking for GPU work, I get the response "No work sent". It's this computer; I'm using app_info.xml to have two WUs running concurrently. I can post its contents after I get home.
BR,
34) Message boards : Number crunching : Testing ATI Application Availability (Message 35164)
Posted 7 Jan 2010 by Vid Vidmar*
Post:
Win XP 64, ATI 5870, 9.12 drivers, no CCC, running smoothly. Or is that only a 32-bit XP problem?
BR,
35) Message boards : Number crunching : New Dr in the House (Message 33383)
Posted 20 Nov 2009 by Vid Vidmar*
Post:
GZ!
36) Message boards : Number crunching : No Thankyou (Message 33108)
Posted 7 Nov 2009 by Vid Vidmar*
Post:
>eyes, sticks
Reminds me of a [home-made] safety notice I saw in a lab the other day -

"Do not look into the laser with your remaining eye."


Brilliant!
When I stop rolling on the floor laughing, I need to get a laser from somewhere, so I can hang such a notice beside it. :)
BR,
37) Message boards : Number crunching : No Thankyou (Message 33096)
Posted 6 Nov 2009 by Vid Vidmar*
Post:
Ah, I see (therefore no eyes poked) :D
It was just the first time I ran across this saying and I didn't know how to interpret it. Now it makes sense; thank you for the explanation.
BR,
38) Message boards : Number crunching : Cruncher's MW Concerns (Message 33090)
Posted 5 Nov 2009 by Vid Vidmar*
Post:
Thank you.
So they should all take about the same amount of time on the same hardware? And how much variation in run times should be considered normal? I am asking because I discovered that there may be some other instabilities, besides VPU Recover events, caused by the combination of MW and the 9.10 drivers, and I would like to be able to detect them properly instead of making false assumptions and accusations.
BR,
39) Message boards : Number crunching : Cruncher's MW Concerns (Message 33088)
Posted 5 Nov 2009 by Vid Vidmar*
Post:
However, I still claim that there were WUs that ran 1.5x longer than others with the same credit grants, but I will never be able to prove it.


Such a defeatist attitude... ;-)

I have 3 reported tasks from my Pentium 4 right now. One took around 7250 seconds, while the other two took around 5100 seconds, all three getting 53.45 credits.

My average credit per day on that system when I stopped processing here a few days ago was around 800.

7250 + (5100 * 2) = 17450 total runtime seconds thus far.

24 * 60 * 60 = 86400 seconds in a day

86400 / 17450 = 4.9513

53.45 * 3 * 4.9513 = 793.94 ~= "around 800"

Yes, it may "stink" to get several of the longer-running tasks in a row, but there are also plenty of the shorter-running tasks to average things out over the long term.


Today was the first time I got any of those; a couple of caches full of them, without normal ones, so combined with the front-page news, sure it made my blood boil. And thank you for confirming my claim that such workunits exist. ;)


And just a small suggestion before I drop this topic completely: it wouldn't hurt if Travis told us not only that there was an increase in runtime, but also by how much, and which WUs are affected.


Personally, I think he just meant that they were starting up 3s (3-stream) searches instead of 1- or 2-stream ones, which generally makes the server really sluggish. The runtime variation in the tasks has been there for weeks, if not months, so this is a tempest in a teapot...


In my case, the de_14_3s_const type WUs would take 12 s on my 5870, the _2s_const ones 17 s, and the _1s_const ones 23 s to complete. The new de_s222_3s_best, which as I see normally takes around 23 s, took 45 s in that batch I got. Now do some mental extrapolation as to what that would mean for _2s and _1s WUs (spelled out below). Combine that with the news blurb, and there you have it. That is why I think some more clarity as to which WUs have been lengthened, and by how much, would be in order.
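To spell out that extrapolation (assuming the old 12:17:23 ratio between _3s, _2s and _1s run times carries over to the lengthened batch):

_2s: 45 * 17 / 12 = ~64 s
_1s: 45 * 23 / 12 = ~86 s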
BR

[edit]quotes[/edit]
40) Message boards : Number crunching : No Thankyou (Message 33086)
Posted 5 Nov 2009 by Vid Vidmar*
Post:
Also, I must admit that I overreacted a bit in my previous posts, as I discovered that, for some unknown reason, my ATIs were running a bit (around 20%) slower from 10:00 UTC until now (I noticed that my Collatz tasks took 20% longer to complete). However, even accounting for this, the new WUs are still some 1.6 times longer than before but grant the same credit, which still stinks.

The newer Collatz tasks grant proportional credit, at least those that I have checked. There are still some older tasks lying about, so you may get a mix of old 500-something and newer 700-something tasks...

But I fail to understand that eye poking thing you wrote at the end of your post.

Well, having a few slow-to-medium GPUs working for me is a lot more fun than the alternatives I was suggesting... then again, maybe some out there enjoy poking their eyes with sticks... :)


Hey Paul.
What I meant to say is that, after looking at my Collatz results, I figured out that something was wrong with my cards. I was already used to the new Collatz runtimes and new credits; however, during that "slow" period those already longer times got another 20% longer, which, considering the pretty consistent run times over there, signaled to me that there was a problem on my side of the line.

Eyes and sticks don't go together too well, but I still fail to see what eyes on sticks have to do with GPU crunching, except for the fact that, with eyes on a stick, one cannot use a GPU for much more than crunching and heating.
BR


