
Posts by DaveSun

21) Message boards : Number crunching : Results purged too quickly (Message 1013)
Posted 11 Dec 2007 by DaveSun
Post:
I was seeing how running db_purge as a daemon went (to keep the database small and everything running faster), because the assimilator takes the results and does all the crunching we need, so there's really no reason for us to keep them there. But if you guys would like to see your results up for a while, I can change it to only run the purge every couple of days instead.


This would be a good change. It will keep the database lean while still giving those of us who like to review what we have done the time to do so. Being able to look at the past couple of days' results also lets us track how the project is progressing, and it can give us an indication of potential problems with the project (yeah, gives us something to ask questions about) :)
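For reference, the server-side change being discussed would live in the project's config.xml. A minimal sketch, assuming the stock BOINC db_purge daemon and its --min_age_days option (the exact flags used by this project's installation are an assumption):

```xml
<daemons>
  <daemon>
    <!-- Purge only results/workunits whose activity is older than
         2 days, so recent results stay visible on the web pages. -->
    <cmd>db_purge -d 2 --min_age_days 2</cmd>
  </daemon>
</daemons>
```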
22) Message boards : Number crunching : server really back up (Message 797)
Posted 29 Nov 2007 by DaveSun
Post:
Looks like everything is running fine here also. I have one machine that looks like it's taken a vacation but I'll be able to check it in a couple hours.

Good job Travis.
23) Message boards : Number crunching : Please check this host (Message 556)
Posted 24 Nov 2007 by DaveSun
Post:
Check out this host: over the last few days its claimed/granted credit jumped from 1.xx to 25x.xx per result with no increase in crunch time.

Even without a quorum requirement there should be a check for this type of overclaim.
24) Message boards : Number crunching : Per Host Limit (Message 522)
Posted 21 Nov 2007 by DaveSun
Post:

11/21/2007 02:09:28|Milkyway@home|Sending scheduler request: To fetch work. Requesting 100470 seconds of work, reporting 0 completed tasks
11/21/2007 02:09:33|Milkyway@home|Scheduler request succeeded: got 0 new tasks
11/21/2007 02:09:33|Milkyway@home|Message from server: No work sent
11/21/2007 02:09:33|Milkyway@home|Message from server: (reached per-host limit of 8 tasks)

Oops, I forgot. Edited:

I only have 1 WU left and still get this message.


Have the other 7 results reported, or have they only been uploaded? If the Tasks tab of BOINC Manager shows 8 tasks and 7 of them have a status of "Ready to Report", then the limit of 8 still applies. Once they are reported you should get more work, subject to the other limits for your host.
25) Message boards : Number crunching : Per Host Limit (Message 502)
Posted 19 Nov 2007 by DaveSun
Post:
Hello to all fellow crunchers here at Milkyway@home

I noticed a while ago in the messages tab:

19/11/2007 15:46:11|Milkyway@home|Message from server: No work sent
19/11/2007 15:46:11|Milkyway@home|Message from server: (reached per-host limit of 8 tasks)

Is this right?
Are there new limits being set?

Previously the quota was set in the thousands per host.

Kind Regards,

John Gray :0)


At the moment there is a limit of 2,000 units per core per day, but the project is set to limit the number of units per host to 8 at any one time. As these are reported back you receive more. The only people who might have a major problem with this would be those that don't have a continuous connection and run this project exclusively.
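For anyone curious where those two numbers come from: both limits are standard BOINC scheduler settings in the project's config.xml. A sketch using current BOINC option names (whether this project uses exactly these names and values is an assumption):

```xml
<config>
  <!-- Maximum results a host may be granted per core per day. -->
  <daily_result_quota>2000</daily_result_quota>
  <!-- Maximum unreported tasks a host may hold at one time. -->
  <max_wus_in_progress>8</max_wus_in_progress>
</config>
```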
26) Message boards : Number crunching : is this output correct ? (Message 452)
Posted 15 Nov 2007 by DaveSun
Post:
If it is of any assistance, I see the same general output as Ensor, with a few exceptions, on all of the systems I have crunching at the moment.

Intel PIII 850MHz Windows 2000 SP4 BOINC v5.10.28
AMD Athlon 700MHz Windows 2000 SP4 BOINC v5.10.28
Intel P4 3.2GHz Windows XP SP2 BOINC v5.10.28

I have also had the same output from 2 other systems that I have used here:
Intel PII 400MHz Windows 2000 SP4 BOINC v5.10.28
Intel P4 2.8GHz Windows 2000 SP4 BOINC v5.10.28

<core_client_version>5.10.28</core_client_version>
<![CDATA[
<stderr_txt>
Unrecognized XML in GLOBAL_PREFS::parse_override: suspend_if_no_recent_input
Skipping: 0.000000
Skipping: /suspend_if_no_recent_input
reading parameters file: parameters.txt
APP: astronomy reading volume from file: volume.txt
APP: astronomy reading integral checkpoint file
APP: astronomy read integral checkpoint finished
APP: astronomy integral checkpointing
APP: astronomy integral checkpoint done

/snipped more of the same

APP: astronomy reading likelihood checkpoint file
APP: astronomy read likelihood checkpoint finished
APP: astronomy likelihood checkpointing
APP: astronomy likelihood checkpoint done


**********
**********

Memory Leaks Detected!!!

Memory Statistics:
0 bytes in 0 Free Blocks.
166 bytes in 4 Normal Blocks.
12916 bytes in 8 CRT Blocks.
0 bytes in 0 Ignore Blocks.
0 bytes in 0 Client Blocks.
Largest number used: 6413860 bytes.
Total allocations: 11473064 bytes.

Dumping objects ->
c:\research\boinc_samples\astronomy\parameters.c(168) : {400179} normal block at 0x01715208, 64 bytes long.
Data: <-I(} ? /g + 0@> 2D 25 CB 49 28 7D E7 3F BA 2F 67 B6 2B 04 30 40
{56} normal block at 0x00665130, 12 bytes long.
Data: < >f Cf Kf > 98 3E 66 00 F8 43 66 00 D0 4B 66 00
c:\research\boinc\api\boinc_api.c(160) : {51} normal block at 0x00662960, 4 bytes long.
Data: < x > 00 00 78 00
c:\research\boinc\lib\parse.c(142) : {50} normal block at 0x006628D8, 86 bytes long.
Data: < <color_scheme>T> 0A 3C 63 6F 6C 6F 72 5F 73 63 68 65 6D 65 3E 54
Object dump complete.


</stderr_txt>
]]>
27) Message boards : Number crunching : exit code -2147483645 (0x80000003)) (Message 435)
Posted 14 Nov 2007 by DaveSun
Post:
Just got this error from this Result / Workunit; it ran almost to completion before erroring out. The BOINC Manager messages are:
11/13/2007 6:20:32 PM|Milkyway@home|Computation for task gs_4_1195065653_26479_0 finished
11/13/2007 6:20:32 PM|Milkyway@home|Output file gs_4_1195065653_26479_0_0 for task gs_4_1195065653_26479_0 absent
28) Message boards : Number crunching : exit code -2147483645 (0x80000003)) (Message 415)
Posted 12 Nov 2007 by DaveSun
Post:
This error also causes a Windows failed-application error, which can be (and is) reported to MS.

I've gotten a bunch of those.


Are these still happening? I think they should all have been removed by now.


Just reported this one; it is queued to be sent out again.



©2022 Astroinformatics Group