Welcome to MilkyWay@home

Posts by Wrend

41) Message boards : News : Scheduled Maintenance Concluded (Message 65862)
Posted 16 Nov 2016 by Profile Wrend
Post:
Hey Everyone,

Sorry for the silence yesterday; I decided to take a day off to recharge a bit after the huge push I've been making for the last two weeks. I will be working on the Linux GPU apps today, then the Mac applications, and if I have time I will look into fixing the cosmetic issues with the progress bar.

Jake


Thanks for the update. Sounds like you could have used the break and things seem to be running rather well overall now.

As for the cosmetic changes, I actually rather like being able to see on the progress bar when one of the bundled tasks ends. It's helping me troubleshoot some issues I seem to be having on my end (see my last couple of posts), since I can monitor bundled task progress, CPU load, and GPU load in real time. But hey, that's just me.

Best of luck and thanks for all your efforts.
42) Message boards : News : Scheduled Maintenance Concluded (Message 65860)
Posted 16 Nov 2016 by Profile Wrend
Post:

I seem to be getting some computation errors now though, so I'm going to try dialing it back down to 5 tasks per GPU to see if that fixes it. We'll see. I've typically been running 4 per GPU in the more recent past (the last few months).

Still getting some errors. It may be on the CPU end as the tasks seem to occasionally error out at the same time when they switch between the bundled tasks and load the CPU.


My setup has a 0% failure rate. I think it's more likely a GPU issue than a CPU one.

I'm not sure, but it doesn't look like it's the GPUs, since the WU errors seem to occur when the tasks hit the CPU at the same time, and I get far fewer errors (seemingly none) when they don't hit the CPU at the same time.

https://i.imgur.com/qiY2tv0.png

I do have my CPU overclocked a little bit (my GPUs are not), so it may be related to that, given the changing load spikes the CPU was going through.

I'll mess around with it some more and report back if I find anything conclusive.
43) Message boards : Number crunching : New WU Progress Bar (Message 65858)
Posted 16 Nov 2016 by Profile Wrend
Post:
I like that it goes back to 0, knowing that it is effectively running 5 WUs in 1, since it lets me monitor the progress of each WU in the package.

Currently I'm running 5 of the new 1.43 WUs in parallel on each of my 2 Titan Black cards, 10 in total. Crunching time per new WU is 3:33 (93s) to 3:36 (96s) in total, loading each GPU up to about 78%, so total productivity does seem to have improved for me as well.

Very nice job, it seems, and bundling these is probably easier on both the servers and most hosts, with less downtime and fewer communication needs overall.


Sorry, the times in seconds are not correct. I was thinking 1 minute instead of 3 for some reason, so the times in seconds posted should have been 213s to 216s.
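For anyone double-checking the arithmetic, here's a minimal sketch of the conversion. The "one bundle every ~43s per GPU" figure is my own back-of-the-envelope estimate and assumes the 5 concurrent bundles stay evenly staggered, which real runs only approximate.

```python
# Minimal sketch: convert the reported bundle times and estimate the
# effective completion rate per GPU.

def mmss_to_seconds(t: str) -> int:
    """Convert an 'M:SS' string such as '3:33' into seconds."""
    minutes, seconds = t.split(":")
    return int(minutes) * 60 + int(seconds)

bundles_per_gpu = 5

for bundle_time in ("3:33", "3:36"):
    total = mmss_to_seconds(bundle_time)   # 213s and 216s, not 93-96s
    rate = total / bundles_per_gpu         # one bundle finishes roughly this often per GPU
    print(f"{bundle_time} -> {total}s per bundle, one bundle every ~{rate:.0f}s per GPU")
```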
44) Message boards : News : Scheduled Maintenance Concluded (Message 65857)
Posted 16 Nov 2016 by Profile Wrend
Post:

See my previous post for Titan Black performance. If left running, this will likely place my host PC within the top 5 performing PCs crunching for MW@H.


Cool. My single Radeon 280X can crunch up to 380k-400k per day,
crunching 4 WUs at the same time. Times are around 110-130s, so 22-26s per WU.

Updated to 6 MW@H 1.43 WU bundles running per GPU, 12 total. Also allocated more CPU headroom, so WUs are finishing in about 3:28 (88s) total. GPUs are loaded up to around 93%, VRAM up to around 3668MB (59% on SLIed Titan Black cards, so the memory usage is doubled from being mirrored between cards – expect roughly half this on independent cards).


3:28 is 208s (not 88s)

208 / 6 (single-card performance) = ~35s.
So it would appear that the Titan Black gets around 70% of the R280X's performance in MW@H.


Yeah, sorry, I was thinking 1 minute instead of 3 for some reason. I blame it on not being awake enough; at least that's my excuse. Either way, this PC was within the top 5 not too long ago. (Edit: And of course it's too late to edit those posts now...)

Also, I recall others recommending running 8 MW@H tasks per GPU on Titan Black cards so that they're kept more consistently loaded, but I personally don't want to push mine that hard.

I seem to be getting some computation errors now though, so I'm going to try dialing it back down to 5 tasks per GPU to see if that fixes it. We'll see. I've typically been running 4 per GPU in the more recent past (the last few months).

Cheers.

...

Update:

Still getting some errors. It may be on the CPU end as the tasks seem to occasionally error out at the same time when they switch between the bundled tasks and load the CPU.

For now I will assume this is on my end and not an issue with the WU, though I'm not sure about it.

This problem may also work itself out over time as the WU tasks drift apart and stop hitting the CPU at the same time. I might also try further limiting CPU usage, though I had increased it to get faster turnaround between tasks.
45) Message boards : News : Scheduled Maintenance Concluded (Message 65855)
Posted 16 Nov 2016 by Profile Wrend
Post:
Like what? I have a 600W PSU on the XP machine but limited cooling. The Win10 machine is an HP Slimline requiring a half-height board and only has a 350W PSU.


For AMD/ATI, the Radeon 280/280X is still the best for DP. They're also quite cheap right now (around $150? each).

As for Nvidia, there are the GeForce GTX Titan and GeForce GTX Titan Black (from the GeForce 700 series), but they're extra costly; I believe more than $500 each. Where I am they're essentially unavailable. I saw some on UK eBay at 700 pounds each... oh my eyes.

The Titan should be almost twice as efficient per watt in DP, but in the old MW@H app it performed a little worse than the R280X per second (so I assume that per watt it's about the same).
See benchmark thread:
https://milkyway.cs.rpi.edu/milkyway/forum_thread.php?id=3551&postid=64162#64162

In that thread you can see that really ANYTHING from AMD/ATI is better than Nvidia in terms of DP.

For either, a 600W PSU should be enough.


See my previous post for Titan Black performance. If left running, this will likely place my host PC within the top 5 performing PCs crunching for MW@H.

Cheers.
46) Message boards : News : Scheduled Maintenance Concluded (Message 65854)
Posted 16 Nov 2016 by Profile Wrend
Post:
Updated to 6 MW@H 1.43 WU bundles running per GPU, 12 total. Also allocated more CPU headroom, so WUs are finishing in about 3:28 (88s) total. GPUs are loaded up to around 93%, VRAM up to around 3668MB (59% on SLIed Titan Black cards, so the memory usage is doubled from being mirrored between cards – expect roughly half this on independent cards).

There can be significant CPU load spikes between the individually bundled GPU tasks, even though the CPU otherwise tends to mostly idle. I've allocated up to 0.75 CPUs per GPU WU, leaving 33% of my 12-thread CPU free.
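In case it helps anyone set up something similar, below is a minimal sketch of how the "6 tasks per GPU" and "0.75 CPUs per GPU WU" figures translate into BOINC app_config.xml values. The app name "milkyway" is an assumption on my part; check the <name> entries in your own client_state.xml for what your client actually uses.

```python
# Minimal sketch: derive BOINC app_config.xml values for running N tasks
# per GPU with a given CPU reservation per task, then print the fragment.
# NOTE: the <name> value below is an assumption; confirm the app name
# from the <app> entries in your own client_state.xml.

tasks_per_gpu = 6       # 6 bundles per GPU, 12 total across 2 cards
cpu_per_task = 0.75     # CPUs reserved per GPU WU

gpu_usage = 1 / tasks_per_gpu   # fraction of a GPU each task claims (~0.17)

print(f"""<app_config>
  <app>
    <name>milkyway</name>
    <gpu_versions>
      <gpu_usage>{gpu_usage:.2f}</gpu_usage>
      <cpu_usage>{cpu_per_task}</cpu_usage>
    </gpu_versions>
  </app>
</app_config>""")
```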

Loads: https://i.imgur.com/8wygOs6.png

Config: https://i.imgur.com/SGKV9XD.png

Still seems to be running great.
47) Message boards : News : Scheduled Maintenance Concluded (Message 65851)
Posted 16 Nov 2016 by Profile Wrend
Post:
As I mentioned here → http://milkyway.cs.rpi.edu/milkyway/forum_thread.php?id=4058&postid=65850#65850

I like that it goes back to 0, knowing that it is effectively running 5 WUs in 1, since it lets me monitor the progress of each WU in the package.

Currently I'm running 5 of the new 1.43 WUs in parallel on each of my 2 Titan Black cards, 10 in total. Crunching time per new WU is 3:33 (93s) to 3:36 (96s) in total, loading each GPU up to about 78%, so total productivity does seem to have improved for me as well.

Very nice job, it seems, and bundling these is probably easier on both the servers and most hosts, with less downtime and fewer communication needs overall.


Good job, guys. I'm looking forward to keeping an eye on these and seeing how well they work overall now.

Cheers.
48) Message boards : Number crunching : New WU Progress Bar (Message 65850)
Posted 16 Nov 2016 by Profile Wrend
Post:
I like that it goes back to 0, knowing that it is effectively running 5 WUs in 1, since it lets me monitor the progress of each WU in the package.

Currently I'm running 5 of the new 1.43 WUs in parallel on each of my 2 Titan Black cards, 10 in total. Crunching time per new WU is 3:33 (93s) to 3:36 (96s) in total, loading each GPU up to about 78%, so total productivity does seem to have improved for me as well.

Very nice job, it seems, and bundling these is probably easier on both the servers and most hosts, with less downtime and fewer communication needs overall.
49) Message boards : Number crunching : Android/ARM WUs all failing with computation errors (Message 65849)
Posted 15 Nov 2016 by Profile Wrend
Post:
And how many of those WUs were at the expense of a more capable platform being left idle? Hopefully none.

I'll be impressed if you can get within the top 10 MW@H hosts by performance in 5 weeks, as I've been able to do with my PC in less time. In the larger picture, though, my computer isn't comparable to most, so it's kind of a moot point.

Of course it's great that you want to help with your mobile and alternative devices as well, and I'd have no issue with it if it were practical, efficient, and productive on the administrative/server side of things, taking into account all available host resources and their performance capacities. I just have my doubts about it.
50) Message boards : Number crunching : Tracking in Windows 7 (Message 65845)
Posted 15 Nov 2016 by Profile Wrend
Post:
Windows 7 will be the last Windows OS I use on my PCs. I still enjoy using it for the time being though and don't have any problems with it beyond Microsoft's occasional incompetence.

I've gotten used to Debian/KDE (Linux) on my other computers and in VMs. I'd have already moved over if it weren't for legacy support and Windows 7 working so well for me. I sometimes leave my main Windows 7 computer up and running for months at a time and do things like gaming on it, crunching for BOINC, web surfing, media consumption, server hosting, and some production-related things.

My laptop is Debian/KDE only now, though. I game on it with some of my Steam-hosted games and Minecraft on occasion. I also enjoy the greater customization and the much richer and more capable software packages available to me on this Linux platform, so it isn't surprising that beyond legacy software needs, Windows doesn't have much to offer me that I'm interested in.

In my professional and personal experience, the vast majority of people aren't competent enough to manage and administer their computer systems effectively, regardless of the OS they're using, and they eventually let their PCs degrade in security, reliability, and performance to the point where they just end up buying a new system every few years. Microsoft's privacy policies are probably the least of their concerns, though I think that is an important issue people should be aware of as well.
51) Message boards : Number crunching : Android/ARM WUs all failing with computation errors (Message 65843)
Posted 15 Nov 2016 by Profile Wrend
Post:
I'm not really sure how practical it is to use these mobile devices anyway. Besides not being very powerful compared to full-size, higher-end PCs, they're specifically made and optimized to be idle most of the time and aren't suited to being up and crunching in the background for extended periods.

In practical terms, I think supporting them is more of a novelty to generate interest in these scientific projects, which of course is a good thing as well, but it's not an efficient use of project resources. There are much more powerful and available platforms that are left idle too often as it is. These mobile devices deserve to be at the bottom of the priority list, if not left off altogether, until it becomes more practical to use them.

This isn't meant to offend anyone. I myself very much enjoy using, customizing, and playing around with some of these devices. I think it's important to acknowledge their limitations and their more fitting usage scenarios, though.
52) Message boards : Number crunching : New 1.42 WU's seem not to use the GPU! (Message 65841)
Posted 15 Nov 2016 by Profile Wrend
Post:
I'll keep an eye on this and on further WU changes. I very much prefer GPU workloads for MW@H, as I have two Titan Black cards optimized for FP64 double precision; that's one of the main reasons I choose to crunch for MW@H, since I feel these cards can do more good here.

I've been slacking a little bit more recently – this computer had been holding within the top 10 and sometimes top 5 performing host systems for MW@H for several months – but I've been messing more with other 3D applications outside of BOINC.

This is the first BOINC project I started crunching for, and I'd rather not go back to GPUGrid, as it can't take advantage of the double-precision capability of my graphics cards, the one thing they really excel at to this day over other "consumer" cards.

Thanks, and best of luck.
53) Message boards : Number crunching : New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new! (Message 64511)
Posted 26 Apr 2016 by Profile Wrend
Post:
Numbers are very slightly slower (about an additional second) with the additional CPU tasks I'm currently running, it seems. In case anything else changes in the meantime that would throw off the results in the link I posted, here's a picture instead that's more relevant to this specific test...

https://i.imgur.com/HgG7EJu.png

I now return you to your regularly scheduled thread... (and sorry for the post bombing – I wish I could edit posts over an hour old instead).
54) Message boards : Number crunching : New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new! (Message 64510)
Posted 26 Apr 2016 by Profile Wrend
Post:
...

The GPU tasks I'm running that I mentioned are the 160.88* and 20 credit tasks. I believe anyone is able to check this host's listed results to confirm this as they please.

...


Correction: *106.88

Also, here are the validated host results for this computer, if anyone wants to check... http://milkyway.cs.rpi.edu/milkyway/results.php?hostid=478184&offset=0&show_names=0&state=4&appid=

Don't get me wrong, I wouldn't mind if it only took 24 seconds to crunch these work units on my Titan Black cards. ;)
55) Message boards : Number crunching : New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new! (Message 64509)
Posted 26 Apr 2016 by Profile Wrend
Post:
...

Welcome, and thanks for your efforts!

With distributed computing we can all do our parts and every little bit helps.

Some of us have pretty ridiculous computer systems that we've spent many thousands of dollars on. These computers are an exception though, and I would presume that the majority of work is actually done on more modest systems.
56) Message boards : Number crunching : New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new! (Message 64508)
Posted 26 Apr 2016 by Profile Wrend
Post:
...


Update 2:

It had been a little while – about a month – since I updated Windows and my Nvidia drivers and since I had restarted this computer. So I did those and performed some more tests to help eliminate any potential performance-influencing factors. However, nothing really changed.

Even with the updated Nvidia drivers, SLI disabled, no VM running for my game servers, and no additional separate BOINC CPU tasks running, the results remain identical to my former results with Double Precision optimization disabled and enabled.

The GPU tasks I'm running that I mentioned are the 160.88 and 20 credit tasks. I believe anyone is able to check this host's listed results to confirm this as they please.

I can only assume that there is some flaw in the methodology used by me or by the others who have tested and submitted results for Titan Black cards.

Cheers.
57) Message boards : Number crunching : New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new! (Message 64506)
Posted 25 Apr 2016 by Profile Wrend
Post:
...

Update:

Completion times are identical with Double Precision optimization disabled in the Nvidia Control Panel while running 1 GPU task per GPU and one CPU thread per task – 00:02:25 (145s) and 00:00:35 (35s) respectively.

However... these same kinds of tasks load the GPU about 3 to 4 times more when Double Precision optimization is disabled! This means I can only effectively run 1 or maybe 2 tasks simultaneously per GPU before the GPU's load capacity is full and performance is significantly bottlenecked. Somewhat strangely, maxing out the load on the GPUs this way also seems to spill the workload over onto the CPU.

And again... these results are all on my Titan Black cards (EVGA GeForce GTX Titan Black Superclocked). The "Superclocked" Titan Blacks basically just have very slightly higher base and boost clock rates, which is effectively irrelevant when using Double Precision optimization, as that locks the clock speed to 966MHz while raising the GPU voltage a little. As mentioned, though, I'm getting identical completion times with Double Precision optimization disabled and the GPUs boosting up a little anyway.
58) Message boards : Number crunching : New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new! (Message 64505)
Posted 25 Apr 2016 by Profile Wrend
Post:
Just an FYI that I don't notice any single-task speed difference between running 1 or 5 tasks simultaneously per GPU on my Titan Black cards, nor do I notice any difference between using one CPU thread per GPU task or 0.09 CPU threads per GPU task.

With Double Precision optimization enabled in the Nvidia Control Panel, 5 tasks running simultaneously per GPU only loads them to about 79% and only loads the VRAM to about 36% above background usage (43% total) while using SLI. I also game on this computer, host game servers on a GNU/Linux (Debian "Jessie" 64-bit/KDE) VM, et cetera, and typically run 4 tasks simultaneously per GPU (8 total) and 8 tasks simultaneously on the CPU.

Using SLI essentially doubles VRAM usage, as it mirrors memory on both GPUs, even though BOINC doesn't take advantage of SLI (nor would it really make sense for it to), so it is not recommended to use SLI unless you're using it for something else and have more than enough VRAM to play around with, as I do.

I'm using an i7-3930K with all cores clocked to 4.2GHz and the OS I'm using is Windows 7 Pro 64bit SP1.

Titan Black GPUs are clocked to 966MHz, and as mentioned, Double Precision optimization is enabled in the Nvidia Control Panel.

The current GPU MilkyWay@Home 1.02 (opencl_nvidia) application tasks take about 00:02:25 to complete...

...and the current GPU MilkyWay@Home Separation (Modified Fit) 1.36 (opencl_nvidia_101) application tasks take about 00:00:35 to complete.
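Since the per-task times above don't seem to change with how many tasks run side by side, a rough throughput estimate is just concurrency × seconds-per-day ÷ seconds-per-task. Here's a minimal sketch using the times from this post; it ignores gaps between tasks and CPU hand-off, so real numbers will be somewhat lower.

```python
# Minimal sketch: rough tasks/day/GPU from per-task time and concurrency.
# Assumes per-task time is unaffected by running several tasks at once
# (as observed above) and ignores idle gaps between tasks.

SECONDS_PER_DAY = 24 * 60 * 60

def tasks_per_day(per_task_seconds: float, concurrent: int) -> float:
    return concurrent * SECONDS_PER_DAY / per_task_seconds

for name, seconds in (("MilkyWay@Home 1.02", 145), ("Separation (Modified Fit) 1.36", 35)):
    for n in (1, 4):
        print(f"{name}: {n} at a time -> ~{tasks_per_day(seconds, n):,.0f} tasks/day/GPU")
```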
59) Message boards : Cafe MilkyWay : Are Dark Energy and Gravity Actually the Same Thing? (Message 64502)
Posted 25 Apr 2016 by Profile Wrend
Post:
So to clarify this a little, the idea is basically that space-time is pushing mass away to form some kind of uniformity or equilibrium, non-displacement (if you will).

Space-time pushing mass away in this way may need some kind of "extradimensional" component to make more sense, perhaps related to the multidimensional geometry and size, age, or even rate of the universe in some way.

Then again, maybe it is just ripples on the surface of a pond pushing things apart, some kind of space-time quantum pressure effect. Who can say for sure? Either way, I'll be glad when we find out. :)
60) Message boards : Cafe MilkyWay : Are Dark Energy and Gravity Actually the Same Thing? (Message 64501)
Posted 25 Apr 2016 by Profile Wrend
Post:
I was reading a Reddit Q&A from MilkyWay@home and came across this... https://www.reddit.com/r/science/comments/3rr7pd/science_ama_series_we_are_milkywayathome_and_prof/cwr3p1s

It turns out that matter and space are intertwined, as proposed by Albert Einstein (who seems to never have had a good hair day) in his theory of General Relativity. Matter bends space. The curvature of space and the density of matter are related to the expansion rate of space. If that seems too complicated, then I guess you could say that gravity is holding the space in place.

Prof. Heidi Jo Newberg
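(For reference, the relationship described above between the expansion rate, the density of matter, and the curvature of space is, I believe, the one captured by the Friedmann equation; this is just a sketch from memory, not anything from the Q&A itself.)

```latex
% Friedmann equation (sketch): expansion rate H in terms of the matter/energy
% density rho, the spatial curvature k, and the cosmological constant Lambda.
H^2 = \left(\frac{\dot{a}}{a}\right)^2
    = \frac{8\pi G}{3}\,\rho - \frac{k c^2}{a^2} + \frac{\Lambda c^2}{3}
```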


For something like 15 years now I've had the notion that dark energy and gravity are really the same thing, ever since around the time it was discovered and made public that the observable universe is not only expanding, but expanding at an accelerating rate. The reason I've thought this is somewhat related to the well-known analogy of balls placed on a sheet, with each ball creating an indentation and the balls being "pulled" toward, and rolling into, each other's indentations. Of course in space this happens in 3D rather than on a plane, but that's kind of beside the point. In the analogy, the sheet is pushing back against the balls and, so that the balls displace it less, pushes them together.

Maybe space operates in a similar manner, seeing as how gravity is observably the curvature of space-time as it relates to and interacts with mass. So maybe on the smaller scale where masses are relatively close, this force (in effect) appears to be pulling objects together, when it could be that it's really how the objects are interacting with space-time dilation. On the larger scale, maybe this appears as though very distant objects are being pushed away from each other.

I of course don't have the kind of data and resources at my disposal to investigate these things further really, so I'm just kind of throwing the notion out there to see what sticks.

Cheers.


