When a WU finishes, the client tries several different ways to upload the result. Then it downloads a new WU and goes to work.
Every 6 hours, the autosend timer wakes up a task to see if there is anything left to send (e.g., consider a case where the new WU that just started is expected to run for many days). The 6 hours is timed from the last restart, which is not correlated with the end of a WU, so the two can happen 9 minutes apart, or any other interval up to 6 hours.
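To make the arithmetic concrete, here's a little illustrative sketch in Python (the 6-hour period is the real one; the timestamps are invented):

Code:
from datetime import datetime, timedelta
import math

AUTOSEND_PERIOD = timedelta(hours=6)  # fixed period, counted from the last client restart

def next_autosend(restart, wu_done):
    # First autosend tick at or after the moment the WU finished.
    periods = math.ceil((wu_done - restart) / AUTOSEND_PERIOD)
    return restart + periods * AUTOSEND_PERIOD

restart = datetime(2008, 1, 1, 0, 0)  # invented: client restarted at midnight
done = datetime(2008, 1, 1, 5, 51)    # invented: result finished at 05:51
print(next_autosend(restart, done) - done)  # 0:09:00, the 9-minute case above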
There may be other reasons, too, but you didn't include enough of your FAHlog.txt for me to tell whether I guessed the situation that applied to you.
Thanks . . . I'd just never seen 25-52 MB result files being sent back . . . it takes some time, too; my cable modem service has a relatively low cap on upload speed (~56 KB/s).
I'm working over a 56k dial-up link to Stanford, so I hear you. I've had the joy of these HUGE WUs lately. The points are nice (~1900), but the 2 1/2 hour uploads on completion are a stretch. Probably time to seriously consider broadband. No fair laughing .......
Speaking of which, how about a feature to "preload" the next WU when the previous one hits the point of being, say, 98% complete? That way the old results can be queued up for upload, and the transfers that may take a while don't keep the rest of the system from producing the next batch of results in the meantime.
I've been toying with the idea of setting up alternating instances of fah, with a log-file monitor firing off the 2nd instance as soon as it detects that the first one has completed and is uploading. Run each with -oneunit and it would yield much better hardware utilization, especially for people on slow connections.
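Something like this minimal sketch is what I have in mind (the binary name, directory layout, and the exact log line to match are all assumptions; check your own FAHlog.txt for the wording):

Code:
import subprocess
import time

LOG = "instance1/FAHlog.txt"       # assumed layout: one directory per instance
MARKER = "Sending work to server"  # assumed upload line; verify against your log

def wait_for(path, marker):
    # Tail the log file and block until the marker shows up.
    with open(path) as f:
        f.seek(0, 2)               # jump to the end; only watch new lines
        while True:
            line = f.readline()
            if not line:
                time.sleep(5)
                continue
            if marker in line:
                return

wait_for(LOG, MARKER)
# First instance is uploading; start the second one for a single unit.
subprocess.Popen(["./fah6", "-oneunit"], cwd="instance2")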
The only problem is that F@H's WUs are dependent on the previous ones being returned. Some projects may need all or most of the first batch of WUs back and processed before they can set up the next set.
What you might need is an option whereby dial-up users can get smaller WUs, which they can complete faster and which have less data to upload to the server, so folding won't bog down their connection.
Baowoulf wrote: The only problem is that F@H's WUs are dependent on the previous ones being returned. Some projects may need all or most of the first batch of WUs back and processed before they can set up the next set.
I'm not sure where you're getting this. It takes days/weeks before results get processed to create further WUs. There are almost always further units available, even if the current unit gets corrupted/lost and has to be re-sent to someone else for re-processing. Pre-loading another already-available unit when completion of the previous one is imminent would have a minimal impact on latency and a potentially significant impact on throughput.
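To put rough numbers on it (mine, purely illustrative):

Code:
# Purely illustrative numbers: a multi-day WU over a slow link.
compute_h = 48.0   # assumed hours of computation per WU
transfer_h = 2.5   # upload time from the dial-up post above, plus a bit of download

serial = compute_h / (compute_h + transfer_h)
print(f"CPU busy {serial:.1%} of the time serially, ~100% with overlapped transfers")
# ~95% vs ~100%: a few percent more throughput, per-WU latency essentially unchanged.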
The bigpackets setting (now small/normal/big) does allocation based on size. Typically it is the 'big' setting that generates large result files. SMP WUs are all classified 'big' due to their requirements AFAIK, and having a normal/big distinction would surely help. I reckon there aren't any SMP projects that fall into the 'normal' bucket.
@shatteredsilicon, you might want to specify a different machine ID for the "preload" instance, but I'm still unsure whether you'll be assigned a different WU. Also, watch out for bad interactions between the instances during that 2% window when you'll effectively have 8 core processes. Last time I heard, FAH-SMP has some limitations when running multiple instances simultaneously.
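If you do try it, something along these lines is probably the safest starting point (just a sketch; the directory names are mine, and I'm assuming each directory has been taken through ./fah6 -configonly beforehand so its client.cfg carries its own machine ID):

Code:
import subprocess

def start_instance(workdir):
    # One directory per instance, each with its own client.cfg/queue.dat and
    # a distinct machine ID (set beforehand via the -configonly step).
    return subprocess.Popen(["./fah6", "-oneunit"], cwd=workdir)

# e.g. start_instance("instance2") once the monitor sees instance1 uploading.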
I'm not sure where you're getting this. It takes days/weeks before results get processed to create further WUs. [...]
I'm getting it from posts by mods/PG and others who have responded to people wanting to download more than one WU at a time, or wanting to do what the poster before mine suggested. My "all or most" estimate wasn't meant to be taken as an exact number, but it has been said that projects need previous WUs sent back before giving out future WUs to be worked on.
I'm not trying to start a nit-picking exchange, but you did say
SMP WUs are all classified 'big' due to their requirements AFAIK, and having a normal/big distinction would surely help. I reckon there aren't any SMP projects that fall into the 'normal' bucket.
I am working in the OS X environment. Things do change between the different architectures, but I have my client configured to receive "normal" size files. My machine/system profile indicates that I should receive SMP files, apparently. Some come down around 5 MB and they go back around 25 MB, so far.
Just trying to keep the discussion objective......
[...] it has been said that projects need previous WUs sent back before giving out future WUs to be worked on.
Within reason. I'm pretty sure that at most times there are more uncalculated WUs available than there are instances of F@H running.
I have my client configured to receive "normal" size files. [...] Some come down around 5 MB and they go back around 25 MB, so far.
I have it set to "small", and I quite regularly end up returning 50 MB result sets from my a2-core SMP units. I'm not at all convinced that the bigpackets setting does anything for SMP WUs.