YMMV - Possible PPD improvement

Maybe, but in the interest of experimentation I'm keen to give it a go when I move back to WCG. I noticed that when I moved my 4p rig off WCG for a day, the credits went up a few points per hour when it returned. Something strange is definitely afoot in the world of WCG CreditNew.

pututu, I think I've finally found the right checkbox to tick to show the daily scores for each of my machines. It's fairly dire reading though.

My overall average across sub-projects is about 27.5 points per hour, made up of the 4p at about 18 and the Ryzen-based rigs at about 32.
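For anyone new to the thread, PPHr here is just WCG points earned divided by CPU hours of runtime. A minimal sketch of the arithmetic (the numbers below are illustrative only, not from any specific rig):

```python
def points_per_hour(points, runtime_hours):
    """WCG points earned divided by CPU runtime in hours."""
    return points / runtime_hours

# Illustrative only: a task granting 55 points after 2 hours of CPU time
print(points_per_hour(55, 2.0))  # 27.5
```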

Now, here's the kicker. When I recently moved the TRs and the 4p to Linux there was a crazy increase in processing speed and PPD, with the 4p stabilising after a few days at about 35 points per hour. This number is for OET, which seems to be the best sub-project for Linux now that FAH has no work. For me at least, post-stabilisation, MIP has been the worst-scoring sub-project on both platforms. There isn't a stable number for the TRs yet as they have been off defending on other projects.

I'll try to find the time to post badges/scores on the WCG badge thread tomorrow.

Let me know if you need any other data.
phoenicis, thanks. Please keep me posted. Today, I just got my sapphire badge for MIP. Averaging around 39.1 PPHr for my 2683v3 and 2695v2 combined.

BTW, if you can post the Linux machine results, that would confirm whether Linux is still faster and generates more PPHr than Windows. PPHr is the most important metric when we are participating in challenges.
If you guys don't mind, I'm consolidating the MIP PPHr numbers that were posted in the WCG milestone thread here.

Skillz - 22.3 PPHr
applejacks - 23.5
ChristianVirtual - 21.2
pututu - 39.1
Whyyyyy? I mainly run on Ryzen... why am I the loser here? Do I need more ponies?
Lol, you can never have enough ponies CV.

You're not the loser, I am. I reckon my MIP pph is around 19.63.

p.s. The trick to picking up HSTB is to download a couple of days of work and then only leave HSTB selected while the cache of work gradually runs down.

Whilst running another project (in my case Numberfields), this can be maintained by running the recently shared 'update' script (swapping the WCG URL for the Leiden URL and changing the time delay to 250 secs) for a couple of hours a day. The project releases HSTB tasks between approx. 2 and 5 minutes after the hour and half-hour, over a period of 2-3 hours per day.

From looking at task receipt time this is normally from about 12:30 UTC Mon-Wed and 06:30 UTC Thu-Fri but YMMV. I'm not sure about weekend timing.

Using the script for 2 hours yesterday bagged about 55 HSTB tasks across 6 computers.
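For anyone scripting this, the observed schedule can be encoded as a quick check before firing off updates. This is only a sketch of my observations (the Mon-Wed 12:30 and Thu-Fri 06:30 UTC starts and the 3-hour window length are assumptions drawn from task receipt times, nothing official from the project):

```python
from datetime import datetime, timezone

# Observed (not official) HSTB release windows:
# Mon-Wed from ~12:30 UTC, Thu-Fri from ~06:30 UTC, lasting ~2-3 hours.
WINDOWS = {0: (12, 30), 1: (12, 30), 2: (12, 30), 3: (6, 30), 4: (6, 30)}
WINDOW_HOURS = 3  # assume the longer end of the 2-3 hour range

def in_hstb_window(now):
    """True if 'now' (a timezone-aware UTC datetime) falls in a release window."""
    start = WINDOWS.get(now.weekday())
    if start is None:  # weekend timing unknown
        return False
    start_min = start[0] * 60 + start[1]
    now_min = now.hour * 60 + now.minute
    return start_min <= now_min < start_min + WINDOW_HOURS * 60

# Example: Monday 13:00 UTC falls inside the Mon-Wed window
print(in_hstb_window(datetime(2017, 10, 9, 13, 0, tzinfo=timezone.utc)))  # True
```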
Great; that 3,33 trick via crontab really helps. Finally I have 10 assignments in the pipeline; that's a great start towards my planned 2-year minimum on each project (though FAH has been over for a while now and I'm stuck at one year and something).

Thanks for the hints!
pututu, I've given the method a go on the TR rigs with the following OET results:

Firstly, to set the scene: an example TR thread under Windows 10 gave a pph of 32.48, although this is drawn from the small data set still available on the WCG results screen.

rjcpc OET Win10 Sample.png

Then the same thing under Linux gave a pph of 52.84.

OET DC4Linux Sample.png

Then running at 50% of CPUs (no overclock) gave a pph of 83.40 per thread, an effective whole-rig rate of 41.70.

OET DC4Linux 50% Sample.png

Finally, the outcome following the switch to 'no network activity' and reversion to 100% CPU gave a pph of 95.21.

OET DC4Linux NoNetwork Sample.png

I only ran this for about two thirds of a day and there were GPU variables involved (GPUGrid, PrimeGrid and Collatz running at different times), so I'm not entirely confident in the validity of the testing environment.

That being said, the outcome seems quite promising. Also, I can confirm that, on all the AMD chips I have, Linux seems to be faster than Windows for most WCG sub-projects. The Opteron 6174 was achieving a Windows pph of about 20 versus the current Linux pph of 36-39.

So, thanks, I'd certainly be tempted to use the approach for competitions/sprints.
Actually, the only 2P setup that outperforms your TR is the 2699v4 x2 setup here, with 88 threads. The guy is averaging close to 78K PPD.
Looking at the second TR rig now. It didn't do quite so well, with a pph of about 70, which could be down to the GPU variables I mentioned earlier, a shorter run time, or poor prep on that rig by me.

In 'normal' mode both rigs are currently running at around 60 pph which may be a residual effect from the exercise. I'll have to see at what level it stabilises.

An awful lot of jobs from the faster TR have been thrown into the pending-verification queue, so something, somewhere, might not have liked what I did, lol.

Badge hunting permitting, I'd give OET a go pututu. It's been a lot kinder to me than MIP has.
I've already got sapphire for OET. I don't have Linux installed on any of my rigs so I'll try the VM approach.

To be honest, the best-PPHr WCG sub-project is actually FAH, where I averaged around 128 PPHr. On the 2695v2, this is about 76K PPD. Unfortunately there is no more work for FAH until the researchers have had a chance to analyze the data.

In FAH, the WUs identified with "non-rigid" gave the best PPD. In MIP, I notice that there are some WUs with shorter run times and better PPHr, but there is no way to identify such WUs from their batch names. Per step 3 above, once you have a good PPHr you should disable the network; this will give you the best result in MIP when you finish crunching the longer-duration MIP tasks.

FightAIDS@Home: 44,495,351 / 345,293 / 5:245:05:31:01

Oh btw, would be great to post a pic of your TR(y). Maybe might inspire a few [H]ordes here to upgrade. Thanks.

I had a brief taste of the FAH pph when I first started the recent run. Based on that I decided to leave it until I'd achieved Emerald on everything else. What can I say... mistakes were made, lol.

I'll try to take a pic of a TR rig when I'm home at the weekend.
phoenicis, for the 2683v3 I'm averaging about 69 PPHr with Linux in VirtualBox, so the PPHr is much higher than MIP. Thanks for the suggestion as I haven't run OET for quite some time.



Giving this a shot, hopefully I did it correctly.

Last night before I went to bed (~4AM) I set one of my 4P rigs to 50% CPU. It filled up with 1100+ tasks. Went to bed. Woke up about half an hour ago (10:30 right now) and disabled networking, then set CPU to 100%. Now it's crunching on 48 cores. I'll let that eat through some of those ~1100 tasks it has and then release the kraken on them.

Then I did the same to another 4P: set CPU to 50% with ~1100 tasks. Gonna let it run for a few hours, then do the above steps.

Will keep doing this in a rotation today with my 4P boxen. Don't want to cripple them all at once just in case this doesn't work.
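To plan a rotation like this, a rough drain-time estimate helps decide when each box needs its network switched back on. A sketch under simple assumptions (a constant completion rate, with the cache and completed-task numbers purely illustrative; plug in whatever your own rig shows):

```python
def hours_to_drain(tasks_cached, tasks_done, hours_elapsed):
    """Estimate hours until the remaining bunker runs dry at the observed rate."""
    rate = tasks_done / hours_elapsed            # tasks per hour so far
    return (tasks_cached - tasks_done) / rate    # remaining tasks / rate

# Illustrative: 1100 tasks cached, 260 done in the first 8 hours
print(round(hours_to_drain(1100, 260, 8.0), 1))  # ~25.8 hours left
```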
Skillz, I forgot that some of your 4P rigs were Interlagos builds. I haven't tried the method on my 6174 because I figured the 50% setting wouldn't speed each core up much.

The credits for my 32 threaders are starting to drop again so I'm jumping on board as well. At least for the next 24 hours while I'm home anyway.
I have no idea if it will work or not. I am just going through the process. I doubt I'll even check the individual stats to see if it does anything. So hopefully it works.
I'll keep my fingers crossed but you should be fine. I was playing around on the 4p using the iptables command you provided and found that blocking WCG for just an hour provided a pph boost after access was allowed again. A strange and yet nice surprise.

We ought to time our bunker release with pututu tomorrow for shock and awe effect :D
Well, the rig I started at 10:30 this AM had 1100 tasks. It's completed 260 of them already at 6:40PM.
I've made similar progress and should be running dry at about 18:00 UTC tomorrow.
I think that's 1PM EST. Let's aim for 12 EST, which is 17:00 UTC tomorrow. I am pretty sure my box will keep crunching work until then. I might have a second box I can unload that'll have a partial bunker period.
Sounds good. I'll plan to start unloading 4 rigs at 17:05 so they should be complete by the 18:00 UTC stats run. I've left the 4p running normally to try and cover up our subterfuge although I don't think anybody will be fooled.

pututu, have you got anything that will be ready to drop at about that time?
We got a few big rigs coming online that might help mask it.

WFeather has a 2P coming online, ChelseaOilman has his rigs going online tonight at midnight, and meaty has a Xeon X5680 that he said he's loaded up. So maybe production won't suffer too much.
With all that coming online I'm almost dreaming that 3rd place is possible. At the very least we should hopefully be able to outrun the TAAT bombs coming our way on the last day of the race.
Don't make me go all Churchillian, lol. Glass half full, please.

I'm convincing myself that our current predicament is because so much bunkering is going on.
I've got one rig that I can unload tomorrow around 16:00 UTC or 8:00AM PST.

Another rig probably 14 hours after that.

I'll just keep maximizing my output from each rig and probably do one more round of bunkering as the finish line is just under 3 days 22 hours to go.
then I will try to unload mine around 16:00 also.
We are 115k points ahead of 5th right now, so we are probably gonna lose 4th in the next hourly update. We have 2 hours to go before 16:00 UTC, so hopefully once we dump our load we will jump back way ahead of them.
Nice. I'm not sure how much I did; I'm thinking around 700K WCG points. I'll let that rig run at 50% for a couple more hours, then repeat the process.
I didn't experience much of a boost but, if for nothing but the lols, me too.