If these specs are true, then it's a pretty depressing-looking card.
I'm running a 3440x1440 34" for my main and a 2560x1440 for my secondary, and I can run WoW or Rift on the main and STO, Netflix, or a VM on the secondary at the same time without any issues. That's a heck of a lot of screen real estate, and a lot of pixels. It may not be as much as a 4K screen, but it is still not enough to make my 980 Ti breathe heavy.

In 1080p and 1440p, sure. 4K? No way. My 980 Ti in sig is screaming UPGRADE ME! Bouncing between 27-34 fps in AC: Origins is killing me. And on a 100-inch screen, 1080p vs. 4K w/HDR is night vs. day, literally. From the numbers, the 1080 Ti looks like a 40-50% increase over the 980 Ti in most games I play. So add in another 20%; yeah, this card is really for us 980 Ti users looking for the next level and needing it bad. I would bet NVIDIA uses the 980 or 980 Ti to show performance gains on this 1180.
Gaping Poop Producer?
You down with GPP (Yeah you know me) [Repeat: x3]
Who's down with GPP (Every last homie)
You down with GPP (Yeah you know me) [Repeat: x3]
Who's down with GPP (All the nerdies)
With the Titan V, is it assumed there won't be a Titan flavor on the gaming side again, or at least not in this next gen?
I'll buy once GPP is gone. For now I will buy used Nvidia GPUs on ebay just so Nvidia doesn't get my hard earned money.
How do they know what the performance is going to be if they don't even know anything about the architecture changes yet? Or is Turing just a modified Volta?
Not "skipping," just logically progressing. Why would they go 4-5-6-7-8-9-10-20?
It was so often that I'd be reading a news bit, with the usual dosage of irony/sarcasm (which I do love) followed by a "thanks cageymaru!", that I actually thought you were a meme, lol.
So you're an actual person. O.K.!
(I don't exactly follow contemporary.. trends? You'd be surprised how perplexing the internet can be at times)
Sounds like it's pretty much a side-grade to the 1080Ti, idk, maybe around 10% on top like the 1080 was to the 980Ti? I should just be content and sit this one out... or go the used 1080Ti route.
Isn't it that the 1080 was at least 20% faster than a factory-overclocked 980 Ti?

Sounds like it's pretty much a side-grade to the 1080Ti, idk, maybe around 10% on top like the 1080 was to the 980Ti? I should just be content and sit this one out... or go the used 1080Ti route.
It is a side-grade to a 1080 Ti; it doesn't compete with it. It replaces the 1080 and 1070, which it won't be a side-grade on.
Isn't it that the 1080 was at least 20% faster than a factory-overclocked 980 Ti?
https://en.wikipedia.org/wiki/GeForce_800M_series

there was no 8 series
I can't wait to get 20 new cards in the mining rigs...
A 1080 replacement that's significantly faster for the same MSRP? Should be a winner. Meanwhile AMD is busy making videos crying about GPP lol!!
it is interesting... on this.
I have 2 x 8-pin connectors, and I think I have the small power supply.
I think I have the 425W power supply. This link says there are two options: 685 W / 425 W.
So I could upgrade the PSU as well...
I don't think they'd give you 2 x 8-pin connectors on a 425W PSU; that makes me suspect you've got the 685. OTOH, even the 425 should be enough to run a 150W card unless you've got the biggest Xeon and maxed out your RAM and HDD capacity.
You might need to unscrew it to check, but the wattage should be printed on the side of the PSU somewhere. The important number is what's available on the 12V rail, not the headline number. (For modern designs these tend to be very close or the same, since almost everything is 12V now.)
Perhaps get out a calculator if you can't do the math in your head, because that is actually more bandwidth than the 1080 Ti.

GDDR6 and only a 256-bit memory interface, and the same number of CUDA cores as the 1080 Ti?
Sure, it's likely much faster RAM, but a step back from the 352-bit interface of the 1080 Ti.
Sounds more like a future: "Woulda, Coulda, Shoulda"
Perhaps get out a calculator if you can't do the math in your head because that is actually more bandwidth than the 1080ti.
You are making no sense, as the effective bandwidth is what matters and it's irrelevant how it achieves that.

Still, bus width has a huge role in memory performance. Sure, bandwidth is part of it, but sometimes, depending on the game's coding, bus width does a more important job when the maximum bandwidth hasn't been achieved.
You are making no sense as the effective bandwidth is what matters and it's irrelevant how it achieves that.
I don't recall, but if they did, they did not have the market share to cause the kind of damage NVIDIA is currently capable of.

Didn't 3Dfx pull this exclusivity crap?
2048 cores

Ehm, nope. Big nope there. First learn a bit about game coding and programming, then return here and say the same.

Using the highway analogy: if the bus width is the number of lanes and the bus speed is how fast the cars are driving, then the bandwidth is the product of the two, and it reflects the amount of traffic the channel can convey per second. Following the same analogy, more lanes at the same car speed result in a more efficient final "throughput." More bus lanes mean faster access to data streaming, larger data sizes, and faster I/O and cache operations, which also means coming closer to the maximum "theoretical" bandwidth, since the Double Data Rate interface isn't always fully efficient. That's true not only for VRAM but also for system RAM: run whatever benchmark you want on your system and you'll see that the bandwidth your machine achieves isn't always what the theoretical numbers suggest. The same principle applies to the overall GPU architecture: pixel fill rate, texel fill rate, shader arithmetic rate, and rasterization rate all go together with the maximum theoretical bandwidth numbers.

The bandwidth calculation just gives a "peak" theoretical maximum bandwidth, which isn't always achieved and is sometimes exceeded; whether it's reached depends on many factors, but to put it simply, on the type of texturing and cache programming:
256-bit / 8 = 32 bytes
32 × 10008 MHz = 320256 MB/s maximum theoretical bandwidth
352-bit / 8 = 44 bytes
44 × 11008 MHz = 484352 MB/s maximum theoretical bandwidth
384-bit / 8 = 48 bytes
48 × 7012 MHz = 336576 MB/s maximum theoretical bandwidth
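The arithmetic above can be sketched as a quick helper (a minimal illustration of the lanes-times-speed formula; the function name is my own):

```python
# Theoretical peak memory bandwidth: (bus width in bits / 8) bytes moved
# per transfer, times the effective memory clock in MHz, gives MB/s.
def peak_bandwidth_mbs(bus_width_bits: int, effective_clock_mhz: int) -> int:
    return (bus_width_bits // 8) * effective_clock_mhz

# The three configurations from the post:
print(peak_bandwidth_mbs(256, 10008))  # 320256 MB/s
print(peak_bandwidth_mbs(352, 11008))  # 484352 MB/s
print(peak_bandwidth_mbs(384, 7012))   # 336576 MB/s
```

As the post stresses, these are only peak numbers; benchmarks rarely reach them exactly.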
However, when you put the GPUs to a test to measure a real-world bandwidth usage scenario, you find things like this:
[Attachment 67947: real-world bandwidth benchmark results]
Do I need to explain what's going on here with texture performance and texture compression?
I don't recall, but if they did, they did not have the market share to cause the kind of damage NVIDIA is currently capable of.
I believe 3dfx did away with AIB partners after acquiring STB Systems and sold its GPUs alone. I could be wrong, but I don't recall seeing a Voodoo3, 4, or 5 from any AIB partner.
Wallowing in the muck of avarice, much?
Why would anyone want even more problems with availability by doing a glowing mining review?