LOL, the N-body tasks are set to 16 threads, and they don't change even when I properly set the number of CPUs and restart. Hmm, they might not run at peak efficiency. It appears the thread count for these N-body tasks is set by the project's servers.
Edit: So far for my Threadripper build I have sorted out the CPU, heatsink, RAM, storage (old HD), and PSU. Still need a case and motherboard.
MN Scout, that behavior of work units locking in the core count at download time applies to pretty much all MT projects. To get around it, you typically have to set up an app_config assigning a limit on the threads used, and that only works if it is in place before you download the work units. Anyone crunching with GPUs in their boxes should make sure to do this anyway, to guarantee the MT CPU tasks are not stealing threads away from feeding the GPUs.
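For the MilkyWay N-body app, an app_config.xml along these lines in the project directory does the trick (a sketch; the app_name and plan_class here are my guesses for that project — check the app_name entries in your own client_state.xml, and adjust the thread count to taste):

```xml
<app_config>
  <app_version>
    <!-- App and plan-class names vary per project; verify in client_state.xml -->
    <app_name>milkyway_nbody</app_name>
    <plan_class>mt</plan_class>
    <!-- Tell the scheduler how many cores to budget per task -->
    <avg_ncpus>8</avg_ncpus>
  </app_version>
</app_config>
```

Remember it only applies to tasks downloaded after the client reads the file ("Options → Read config files" or a restart).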
There are two main task types: one that gives 203.92 credits per task and one that gives 227.62.
Running one HD7970 @ 1 GHz, Win7, driver 14.9, 4 tasks per GPU, no CPU tasks running, and the default MW preference settings.
Avg run time for 203.92 task is about 40.3s
Avg run time for 227.62 task is about 31.7s
Both results are averaged over 100 tasks in total.
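As a rough sanity check on what that throughput is worth (a sketch, assuming the ~40 s average is wall-clock time per task while 4 run concurrently, with no idle time between tasks — if the card actually serializes them, divide accordingly):

```python
# Rough credits-per-day estimate for one HD 7970 running MW tasks.
# Assumes avg_runtime_s is wall-clock per task with `concurrent`
# tasks in flight at once, and no gaps between tasks.
SECONDS_PER_DAY = 86_400

def estimate_ppd(avg_runtime_s: float, credits_per_task: float, concurrent: int) -> float:
    tasks_per_day = concurrent * SECONDS_PER_DAY / avg_runtime_s
    return tasks_per_day * credits_per_task

print(round(estimate_ppd(40.3, 203.92, 4)))  # 203.92-credit tasks
print(round(estimate_ppd(31.7, 227.62, 4)))  # 227.62-credit tasks
```

Under those assumptions the two task types land in the same ballpark of daily output, so there's no strong reason to chase one over the other.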
I can fire up two Nvidia cards, but they are not as efficient as the 7970. Will see how it goes.
XS output is not catching up with us yet. Not sure if they are bunkering, considering the tasks finish very quickly, or perhaps they have all the time in the world to fire up lots of BOINC instances. Haven't seen any SUSA members in their team yet.
Only caveman is running at full blast atm. Here are his rigs before he hides them.
My rig with two 1080 Tis crashed while I was at work today... then when it came back online it was only running two GPU tasks and favoring CPU over GPU... had to use an app_config with project_max_concurrent, but that fixed it.
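For anyone who hasn't used it: project_max_concurrent sits at the top level of the project's app_config.xml (a sketch; the cap of 4 is just an example value):

```xml
<app_config>
  <!-- Cap the total number of this project's tasks running at once,
       across all apps. The client re-reads this on
       "Options -> Read config files" or on restart. -->
  <project_max_concurrent>4</project_max_concurrent>
</app_config>
```

Capping the CPU-heavy project this way frees up threads so the GPU tasks stay fed.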
Is anyone getting an Amazon p3 spot instance? I think 10 MilkyWay tasks could run at once on the V100 in just over a minute. I don't have the time to learn how to set it up, and I don't understand the impact of spot vs. reserved-duration instances.
Pretty sure you should save your money on those for another sprint. I'm guessing we are going to walk all over them this sprint, as it runs well on GPU. They always get a good lead since they seem to have their bunker game on point, and apparently too much time on their hands. I could be wrong; feel free to call me out on it in a few days if I am.
After the sprint race got underway yesterday, XS released their CSG bunker. Pretty sure they had been prepping this for some time, hoping CSG would get selected at some point. This is a project that is quite difficult to catch up on if you only start bunkering once it is announced. Just a gentle reminder....
Any benefit to me buying something like a 7970 or 2080X? Debating buying one and parking it on MilkyWay to try to keep consistent points going there and hopefully gain some positions. Are they even good at anything else?
I’m currently running MilkyWay on both of my 280X’s and will soon fire up a brand new 280 I found this week in my inventory. It’s a warranty replacement from XFX I had completely forgotten about until I went rummaging through all my old video card boxes for a VGA to DVI adapter. Picked up a box for a 280 that was heavy and realized the box was still sealed shut. I then remembered what happened and that I received the replacement a couple of years ago when I wasn’t doing any GPU crunching anymore due to electricity costs. Thing has been sitting on my shelf ever since. Anyway, that bitch is getting some electrons flowing through it soon. Going to hit MW first. Looking to at least hit my 100 million milestone there, may go to 200 million.
My Nvidia GPUs will stay on PrimeGrid for a while, as they are currently making me 9-10 million PPD there on PPS Sieve, and my goal is 1 billion in that subproject (~350 million to go).
I think it's because of this constraint applied to the work unit.
So theoretically, you can download lots of tasks and then intentionally abort them. Rinse and repeat indefinitely. An aborted task still counts toward the quorum of 5 total tasks sent out. This will certainly affect those who bunker (like XS and us). There seem to be a lot of aborted tasks, at least among the tasks I'm currently bunkering. Ditto for tasks with errors. I'm guessing some people abort because the tasks are too long, or for whatever other reasons. I don't know if there is a limit on how many tasks you can abort. It is certainly a dirty strategy, but it is still a strategy that anyone can employ. Nothing to stop them, as far as I know.
I am experimenting and have today turned down the power limit on all of my GPUs to 80%. I have always run my GPUs at stock clocks with no tweaking, but I now want to see if I can save some power $$$ without hitting my PPD too badly. Will need a couple of days to assess the impact on PPD, but just two hours after turning down the limits on all of them, the ambient temp in my basement has gone down by 6 degrees F. There are currently six 1070 Tis, a 980 Ti, and two R9 280Xs running PPS Sieve and MW down there. Amazing how much heat they make when running full bore 24x7.
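On the Nvidia cards this can be scripted with nvidia-smi (a sketch; the exact wattage depends on each card's default limit, e.g. 80% of a 1070 Ti's 180 W default would be 144 W, and setting limits needs admin/root):

```shell
# Show current, default, and min/max enforceable power limits
nvidia-smi -q -d POWER

# Set a 144 W limit on GPU 0 (80% of a 180 W default, as an example)
nvidia-smi -i 0 -pl 144
```

The 280Xs would need the AMD-side tools instead, since nvidia-smi only sees the Nvidia cards.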