New WUs

Profile Ben
Project administrator
Project developer
Project tester
Project scientist
Joined: 17 Nov 14
Posts: 316
Credit: 1
RAC: 0
Message 1806 - Posted: 27 Jan 2015, 9:23:05 UTC - in response to Message 1805.

Pros: I will have my results sooner :)
Cons: I'm still trying to figure out how to change the deadline... (found it!)

Profile Ben
Project administrator
Project developer
Project tester
Project scientist
Joined: 17 Nov 14
Posts: 316
Credit: 1
RAC: 0
Message 1807 - Posted: 27 Jan 2015, 12:36:41 UTC - in response to Message 1806.

308,000 new tasks, and more are coming.
Deadline changed to 5 days.

And the progress bar is up! (You may need to refresh your browser cache.)
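
For context, and as an assumption on my part rather than anything Ben has confirmed: in BOINC a task's report deadline is normally the moment the task is sent plus the workunit's delay_bound, so "deadline changed to 5 days" presumably means new workunits are being created with a 5-day delay_bound. A minimal, purely illustrative Python sketch of that relationship:

    import time

    FIVE_DAYS = 5 * 24 * 60 * 60  # delay_bound in seconds (assumed value)

    def report_deadline(sent_time: float, delay_bound: float = FIVE_DAYS) -> float:
        # BOINC-style deadline: the time the task was sent to the host
        # plus the workunit's delay_bound.
        return sent_time + delay_bound

    # A task sent right now would be due back five days from now:
    print(time.ctime(report_deadline(time.time())))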

RNR
Joined: 12 Apr 13
Posts: 58
Credit: 1,961,436
RAC: 0
Message 1808 - Posted: 27 Jan 2015, 12:50:31 UTC - in response to Message 1807.

Fresh off the press. Downloading WUs now. Cheers~

Aurel
Joined: 10 Feb 13
Posts: 84
Credit: 275,734
RAC: 0
Message 1812 - Posted: 27 Jan 2015, 17:26:29 UTC - in response to Message 1807.

The progress bar is not working under 32-bit Linux. [app progress bar...]

Profile Ben
Project administrator
Project developer
Project tester
Project scientist
Joined: 17 Nov 14
Posts: 316
Credit: 1
RAC: 0
Message 1814 - Posted: 27 Jan 2015, 17:28:58 UTC - in response to Message 1812.

That's not my fault I guess. It depends on the browser.

Aurel
Joined: 10 Feb 13
Posts: 84
Credit: 275,734
RAC: 0
Message 1816 - Posted: 27 Jan 2015, 17:31:05 UTC - in response to Message 1814.

> That's not my fault I guess. It depends on the browser.

I mean the application's progress meter (per unit).

Wu Shichao
Joined: 2 Oct 13
Posts: 8
Credit: 48,020
RAC: 0
Message 1834 - Posted: 28 Jan 2015, 4:10:45 UTC - in response to Message 1807.

> 308,000 new tasks, and more are coming.
> Deadline changed to 5 days.
>
> And the progress bar is up! (You may need to refresh your browser cache.)

That's great!

Larry
Joined: 25 Nov 13
Posts: 10
Credit: 54,976
RAC: 0
Message 1836 - Posted: 31 Jan 2015, 11:40:32 UTC
Last modified: 31 Jan 2015, 12:11:25 UTC

Hey,

Changing/shortening the deadline hurts those of us running multiple BOINC projects "at home." Why was this change so necessary? And why implement it so quickly, without any discussion of the cons?

The WU batch turn-around time on this project was already faster than any other BOINC project I know of.

Consider giving back a little extra deadline time to those of us running multiple BOINC projects!

Profile Ben
Project administrator
Project developer
Project tester
Project scientist
Joined: 17 Nov 14
Posts: 316
Credit: 1
RAC: 0
Message 1837 - Posted: 1 Feb 2015, 11:09:51 UTC

Hi, sorry, I don't have internet in my new studio. I'm using my phone right now :)

Users were asking for a shorter deadline, so I changed it from 10 to 5 days.

You think 5 days is too short?

Bryan
Joined: 22 Apr 13
Posts: 2
Credit: 6,002,444
RAC: 64
Message 1838 - Posted: 1 Feb 2015, 13:46:05 UTC - in response to Message 1837.



> You think 5 days is too short?


No, 5 days is adequate. The project shouldn't change to meet the lowest common denominator.

Those who run multiple projects can increase the resource "share" setting for FIND to compensate, or reduce the size of their cached WUs. Either will reduce their turnaround time.
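
A concrete way to do the second of those, for anyone who prefers files over the BOINC Manager preferences dialog: the client reads local overrides from global_prefs_override.xml in its data directory, and (if I remember the element names right) work_buf_min_days / work_buf_additional_days control how many days of work it keeps cached. A rough Python sketch; treat the path and values as placeholders for your own setup:

    # Sketch: write a small global_prefs_override.xml asking the BOINC client
    # to keep only ~0.5 days of work cached. Put it in the BOINC data
    # directory, then reread preferences (or restart the client).
    override = """<global_preferences>
        <work_buf_min_days>0.5</work_buf_min_days>
        <work_buf_additional_days>0.25</work_buf_additional_days>
    </global_preferences>
    """

    with open("global_prefs_override.xml", "w") as f:
        f.write(override)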

Larry
Joined: 25 Nov 13
Posts: 10
Credit: 54,976
RAC: 0
Message 1839 - Posted: 1 Feb 2015, 19:21:14 UTC

Yes, 5 days is too short. Only one user that I can see asked for this change. There was absolutely no need to change this parameter to suit the elite! (BTW, my hardware is far from the lowest common denominator.) Every other BOINC project that I am aware of uses a deadline of 10 days or more. In most cases the WU batches for this project finish before you're ready for the next one anyway.

And, editing specific BOINC project parameters within my manager should not be a requirement for user participation.

If my humble CPU cycles are not worthy, and if their efforts are invalidated by unnecessarily short deadlines, then perhaps they are better spent elsewhere.

Bryan
Joined: 22 Apr 13
Posts: 2
Credit: 6,002,444
RAC: 64
Message 1840 - Posted: 1 Feb 2015, 19:48:29 UTC - in response to Message 1839.

Larry, no one said your "humble" CPU cycles are worthless. But if you choose to spread them around, then it is up to you to set your BOINC preferences to accommodate that choice. Just drop your cache size and the problem goes away. Why should others have to wait for you to get around to crunching WUs that you've downloaded and then chosen not to work on?

BTW, I would strongly suggest you stay away from SRBase ... it has a 2-day deadline.

Ananas
Joined: 8 Jun 13
Posts: 128
Credit: 1,947,833
RAC: 0
Message 1841 - Posted: 1 Feb 2015, 21:47:36 UTC
Last modified: 1 Feb 2015, 21:51:55 UTC

IMO, 5 days would be fine if the estimated runtime and the real runtime at least roughly matched all the time.

But 5 days is too short, even with a fairly small cache of 0.5 days and only a few active BOINC projects, when the core client receives a long series of short-running results, adjusts its correction factor accordingly, and then receives a lot of very long-running ones, forcing it into panic mode.

This problem would matter less if the runtimes varied in shorter cycles, so the core clients would not have time to adjust to the short ones - but currently there are often enough consecutive short ones for the correction factor to adjust to them.

Once the server is set to limit the number of concurrent tasks per core, one could consider such a short deadline again, but for now I think it would be better to go back to the longer deadline.

BTW: limiting the number of cached tasks per core will probably even speed up a result batch, as currently some hosts stuff their caches while others with small caches run out of work long before all the results are back.
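
To put rough numbers on the effect Ananas describes (the figures below are invented, and the duration formula is the classic BOINC client estimate as I understand it: estimated duration = rsc_fpops_est / host_flops, scaled by the duration correction factor):

    # Illustration of DCF-driven over-fetch (all numbers invented).
    cache_days = 0.5
    cores = 4
    server_estimate_h = 2.0   # per-task duration implied by rsc_fpops_est
    dcf = 0.1                 # correction factor after a run of ~12-minute tasks
    long_runtime_h = 4.0      # actual runtime of the long tasks that arrive next

    predicted_h = server_estimate_h * dcf                         # 0.2 h per task
    tasks_fetched = round(cache_days * 24 * cores / predicted_h)  # ~240 tasks
    backlog_days = tasks_fetched * long_runtime_h / cores / 24
    print(tasks_fetched, round(backlog_days, 1))  # 240 tasks, ~10 days of work
    # ...which is double a 5-day deadline even though the cache was set to half a day.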

Profile Ben
Project administrator
Project developer
Project tester
Project scientist
Joined: 17 Nov 14
Posts: 316
Credit: 1
RAC: 0
Message 1842 - Posted: 2 Feb 2015, 10:34:11 UTC

So, I can change the deadline a bit.

The estimated time is not very accurate (sorry :( ).
What do you propose? 6 days, 7 days?

Aurel
Joined: 10 Feb 13
Posts: 84
Credit: 275,734
RAC: 0
Message 1843 - Posted: 2 Feb 2015, 15:57:48 UTC - in response to Message 1842.

> So, I can change the deadline a bit.
>
> The estimated time is not very accurate (sorry :( ).
> What do you propose? 6 days, 7 days?

Well, 7 days would be a good deadline. I vote for one week. ;)

Larry
Joined: 25 Nov 13
Posts: 10
Credit: 54,976
RAC: 0
Message 1844 - Posted: 2 Feb 2015, 17:23:01 UTC

7 days is a good compromise. I'll try it out, and thanks for the other options offered.

My concern was raised because this last batch of WUs/tasks required manual intervention on my part to keep several of them from being invalidated by completing and reporting past the deadline.

I just really dislike "wasting" my contributed CPU cycles, as do we all, I'm sure.

Ananas
Joined: 8 Jun 13
Posts: 128
Credit: 1,947,833
RAC: 0
Message 1845 - Posted: 2 Feb 2015, 17:39:08 UTC - in response to Message 1842.
Last modified: 2 Feb 2015, 17:47:34 UTC

Yes, I agree with Larry: 2 more days would have avoided the need to micromanage the cache contents during the last batch, perhaps together with a cache limit of 30 per core if it's a mix like the last one.

The previous batches (well, those I have seen) did not have such a wide runtime spectrum with those very high peaks, so a per-core limit was not required.

You can still increase the limit later, when (and if) it turns out that they are all of the shorter-running type.

P.S.: fewer cached results will mean shorter turnaround times, so your batch will be done somewhat faster - in the final phase, when the server-side cache is empty, more active hosts will be involved instead of just a few that pushed their work stacks to the limit.

Dayle Diamond
Joined: 5 Dec 12
Posts: 62
Credit: 4,116,833
RAC: 1,127
Message 1846 - Posted: 2 Feb 2015, 19:25:37 UTC

No, PLEASE don't limit us to 30 per core.

The split second the server doesn't have any tasks in the queue, my PC downloads second- and third-priority tasks, which can take hours and hours to complete. A larger limit, like 100 per core, would give us more wiggle room.

As for tasks timing out, do any of us really need to have a week's worth of work on the computer? Set it to a day or two and you should be okay.
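
Whether 30, 50, or 100 per core is safe really depends on how long the tasks turn out to run; a quick feasibility check (the 3.5-hour runtime is an invented figure, substitute your own):

    # A per-core cache limit only coexists with the deadline if a full
    # per-core queue can be crunched before the deadline expires.
    def fits_deadline(limit_per_core: int, task_runtime_h: float, deadline_days: float) -> bool:
        return limit_per_core * task_runtime_h <= deadline_days * 24

    for limit in (30, 50, 100):
        print(limit, fits_deadline(limit, task_runtime_h=3.5, deadline_days=5))
    # 30 -> True (105 h), 50 and 100 -> False (175 h and 350 h) against a 120 h deadline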

Profile Charles Dennett
Joined: 18 Dec 14
Posts: 88
Credit: 3,342,826
RAC: 0
Message 1847 - Posted: 2 Feb 2015, 19:34:28 UTC

FYI, the current limit appears to be 50/core. In my experience, it's been like that since I joined.

A 7 day deadline is fine with me.

Charlie

Ananas
Joined: 8 Jun 13
Posts: 128
Credit: 1,947,833
RAC: 0
Message 1848 - Posted: 2 Feb 2015, 21:23:13 UTC - in response to Message 1846.
Last modified: 2 Feb 2015, 21:25:10 UTC

> ... As for tasks timing out, do any of us really need to have a week's worth of work on the computer? Set it to a day or two and you should be okay.

That's exactly the problem. With FPOPS_EST and the correction factor adjusted to a series of short-running results, you can end up sitting on a 5-day cache even if you have set your cache size to less than a day.

I received the full 800 (50 per core) with only a 0.8-day cache setting - but then came more than 200 results that ran more than 40 times longer than the short ones, followed by a lot that ran about 30 times longer. If all 800 had been of that very long-running type, about one third of them would have missed the deadline. Fortunately, some short ones arrived after the long-running series, so they all made it in time.
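
The "about one third" figure checks out if the long tasks ran somewhere around 3.5-4 hours each (a guess on my part; no exact runtimes are given in the post):

    # Sanity check of the "about one third would have missed it" estimate.
    # With 50 tasks queued per core and a 120 h (5-day) deadline, everything
    # scheduled beyond hour 120 in the per-core queue is late.
    per_core = 50
    deadline_h = 5 * 24
    long_task_h = 3.6            # assumed runtime of the long-running tasks

    queued_h = per_core * long_task_h                 # 180 h of work per core
    late_fraction = max(0.0, (queued_h - deadline_h) / queued_h)
    print(round(late_fraction, 2))                    # 0.33 -> roughly one third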
