AdoredTV Discusses Leaked AMD 'Rome' Specifications

cageymaru

Fully [H]
Joined
Apr 10, 2003
Messages
21,633
AdoredTV says his sources indicate that AMD has switched to a 9-chiplet design for the 7nm "Rome" server CPU. 64 cores, 128 threads, 256MB of L3 cache, 128 PCIe 4.0 lanes, 70% - 80% more performance, and much more information about the upcoming product launch has surfaced in the video below.

All roads lead to Rome.
 

Neapolitan6th

[H]ard|Gawd
Joined
Nov 18, 2016
Messages
1,182
Saw that last night. I hope the rumors are true, as these multi-chip designs seem incredibly interesting.

Really curious how that center control module would juggle all those cores. It'll be interesting to see how any of this impacts Ryzen's latencies. 256MB of cache ... Oh boy
 

oROEchimaru

Supreme [H]ardness
Joined
Jun 1, 2004
Messages
4,662
I love AMD and love what they're doing. Here's the problem: licensing costs are killing off these chips. My company and other peers in the industry won't upgrade because Microsoft, VMware, Citrix, etc. are gouging everyone on licensing.

a. "Why have a 64-core CPU" when you can have an "Intel Xeon 16-core CPU?"

Because while you may save money on the AMD chip, and it may be faster, your SQL license will be $100k-200k more expensive due to the "per core" BS licensing models.

I'm not sure how AMD could help bring costs down for their clients; maybe by re-bundling "cores"? It would be cool if a 64-core chip presented itself as "4 cores" (4 cores of 16 on the back end), similar to logical cores. Do something to make them cost-effective for servers and licensing. Anyone else? I don't see this mentioned in articles by enthusiasts.
 

Mohonri

Supreme [H]ardness
Joined
Jul 29, 2005
Messages
5,765
Yeah, the software vendors need to think about reworking their licensing schemes, given the recent explosion in core counts.
 

Deleted member 93354

Guest
I love AMD and love what they're doing. Here's the problem: licensing costs are killing off these chips. My company and other peers in the industry won't upgrade because Microsoft, VMware, Citrix, etc. are gouging everyone on licensing.

a. "Why have a 64-core CPU" when you can have an "Intel Xeon 16-core CPU?"

Because while you may save money on the AMD chip, and it may be faster, your SQL license will be $100k-200k more expensive due to the "per core" BS licensing models.

I'm not sure how AMD could help bring costs down for their clients; maybe by re-bundling "cores"? It would be cool if a 64-core chip presented itself as "4 cores" (4 cores of 16 on the back end), similar to logical cores. Do something to make them cost-effective for servers and licensing. Anyone else? I don't see this mentioned in articles by enthusiasts.

Pricing structures don't quite work that way. But to your point, a 64-core EPYC would be just as effective as four 16-core Xeons, and more cost-effective because you're dealing with a single chassis.
 

Nobu

[H]F Junkie
Joined
Jun 7, 2007
Messages
8,967
If your workloads benefit from 64c, then the licensing costs will be a non-issue; you're already paying for that many cores, after all. Getting a 64c machine would be roughly the same as getting six 12c machines, except the density is higher. What's more important is whether the other costs (power, cooling, system/platform, etc.) remain reasonable, and whether the single system can be worked into your workflow.
 

Deleted member 93354

Guest
If your workloads benefit from 64c, then the licensing costs will be a non-issue; you're already paying for that many cores, after all. Getting a 64c machine would be roughly the same as getting six 12c machines, except the density is higher. What's more important is whether the other costs (power, cooling, system/platform, etc.) remain reasonable, and whether the single system can be worked into your workflow.

^-------This guy gets it.
 

Lakados

Supreme [H]ardness
Joined
Feb 3, 2014
Messages
7,314
I love AMD and love what they're doing. Here's the problem: licensing costs are killing off these chips. My company and other peers in the industry won't upgrade because Microsoft, VMware, Citrix, etc. are gouging everyone on licensing.

a. "Why have a 64-core CPU" when you can have an "Intel Xeon 16-core CPU?"

Because while you may save money on the AMD chip, and it may be faster, your SQL license will be $100k-200k more expensive due to the "per core" BS licensing models.

I'm not sure how AMD could help bring costs down for their clients; maybe by re-bundling "cores"? It would be cool if a 64-core chip presented itself as "4 cores" (4 cores of 16 on the back end), similar to logical cores. Do something to make them cost-effective for servers and licensing. Anyone else? I don't see this mentioned in articles by enthusiasts.
First off, for Microsoft, VMware, and Citrix you only pay for the physical cores, not the virtual cores (and the first 16 physical cores are included in the base license). Secondly, this allows you to dramatically decrease the number of sockets, and even the number of machines altogether. Why run two dual-socket servers when you can run one single-socket server in their place? Once you get them running, you virtualize your individual servers, and the licensing there is only paid based on the number of cores you assign to the individual VMs. When all is said and done, it usually ends up being cheaper. The bonus of decreased power consumption from the server and its associated cooling systems is just icing on the cake.
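A rough back-of-the-envelope sketch of that consolidation math, with made-up placeholder prices (not actual Microsoft or VMware list pricing):

Code:
# Hypothetical consolidation: two dual-socket 16-core boxes vs. one
# single-socket 64-core box. All prices are placeholders for illustration.
PER_CORE_LICENSE = 1000       # assumed per-core software license
PER_SOCKET_HYPERVISOR = 3000  # assumed per-socket hypervisor license

def box_cost(sockets, cores_per_socket):
    cores = sockets * cores_per_socket
    return cores * PER_CORE_LICENSE + sockets * PER_SOCKET_HYPERVISOR

old = 2 * box_cost(sockets=2, cores_per_socket=16)  # 64 cores, 4 sockets
new = box_cost(sockets=1, cores_per_socket=64)      # 64 cores, 1 socket

print(f"old: ${old:,}  new: ${new:,}  saved: ${old - new:,}")
# Same core count, but three fewer socket licenses and one less chassis.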
 

Lakados

Supreme [H]ardness
Joined
Feb 3, 2014
Messages
7,314
If your workloads benefit from 64c, then the licensing costs will be a non-issue; you're already paying for that many cores, after all. Getting a 64c machine would be roughly the same as getting six 12c machines, except the density is higher. What's more important is whether the other costs (power, cooling, system/platform, etc.) remain reasonable, and whether the single system can be worked into your workflow.
Yeah, more and more I'm finding that even my 4 or 5 year old servers are scaling very well with software upgrades, to the point where CPU usage has dropped significantly but the HDDs are getting worked double time. I've had to do more to upgrade from gigabit to SFP+ fiber modules and SSDs than to increase CPU counts. Just keeping the general core count the same has worked wonders with the better per-core performance of the newer parts, plus throwing more RAM in there for good measure. My costs are way down with this trend.
 

MHzTweaker

n00b
Joined
Jul 29, 2009
Messages
47
What an AMAZING time for hardware....again! It's probably been almost 10 years, since socket X58 and SSDs were newish, that I've been this thrilled to build a new "MAIN" rig. I started building my current Threadripper 2 rig two weeks ago, and it replaced my i7-5930K last Saturday morning. I have no regrets!!! AMD has the core counts, and with the better single-core performance that 7nm should bring, Intel has got to be distraught. Don't get me wrong, I've been in the Intel camp for the entire last 10 years, but with some resentment. I have X58, X79, X99, Z97, Z170, Z270, and Z370 based systems. Now add a Ryzen 1700 and a 2950X. Very happy to see some healthy competition again.
 

Eickst

[H]ard|Gawd
Joined
Aug 24, 2005
Messages
1,884
Yeah, more and more I'm finding that even my 4 or 5 year old servers are scaling very well with software upgrades, to the point where CPU usage has dropped significantly but the HDDs are getting worked double time. I've had to do more to upgrade from gigabit to SFP+ fiber modules and SSDs than to increase CPU counts. Just keeping the general core count the same has worked wonders with the better per-core performance of the newer parts, plus throwing more RAM in there for good measure. My costs are way down with this trend.

Same here. We've started getting the higher-clocked CPUs instead of the "as many cores as I can get" CPUs, because of licensing and the general performance of CPUs these days.
 

lostinseganet

[H]ard|Gawd
Joined
Oct 8, 2008
Messages
1,207
Saw that last night. I hope the rumors are true, as these multi-chip designs seem incredibly interesting.

Really curious how that center control module would juggle all those cores. It'll be interesting to see how any of this impacts Ryzen's latencies. 256MB of cache ... Oh boy
Finally, enough CPU for VRChat :p
 

Riccochet

Fully [H]
Joined
Apr 11, 2007
Messages
28,210
I love AMD and love what they're doing. Here's the problem: licensing costs are killing off these chips. My company and other peers in the industry won't upgrade because Microsoft, VMware, Citrix, etc. are gouging everyone on licensing.

a. "Why have a 64-core CPU" when you can have an "Intel Xeon 16-core CPU?"

Because while you may save money on the AMD chip, and it may be faster, your SQL license will be $100k-200k more expensive due to the "per core" BS licensing models.

I'm not sure how AMD could help bring costs down for their clients; maybe by re-bundling "cores"? It would be cool if a 64-core chip presented itself as "4 cores" (4 cores of 16 on the back end), similar to logical cores. Do something to make them cost-effective for servers and licensing. Anyone else? I don't see this mentioned in articles by enthusiasts.

If your SQL servers are VMs, you only pay for the cores you assign to the VM. Where you save on licensing is the "per socket" licensing for VMware. So, go ahead and load up a 4-socket Xeon system and pay out the ass for VMware licenses, or slap a single AMD in there and save some money.
 

psyclist

Gawd
Joined
Jan 25, 2005
Messages
844
Things are finally moving at a good pace again in the CPU world. I'm gonna hold off till 2020 (or at least try to wait): DDR5 and PCIe 5.0, I think that's the time to build a new main rig. Looking forward to all that rolls out between now and then, though! Keep those Intel feet to the fire, AMD!
 

ecktt

Limp Gawd
Joined
Oct 22, 2004
Messages
415
I love AMD and love what they're doing. Here's the problem: licensing costs are killing off these chips. My company and other peers in the industry won't upgrade because Microsoft, VMware, Citrix, etc. are gouging everyone on licensing.

a. "Why have a 64-core CPU" when you can have an "Intel Xeon 16-core CPU?"

Because while you may save money on the AMD chip, and it may be faster, your SQL license will be $100k-200k more expensive due to the "per core" BS licensing models.

I'm not sure how AMD could help bring costs down for their clients; maybe by re-bundling "cores"? It would be cool if a 64-core chip presented itself as "4 cores" (4 cores of 16 on the back end), similar to logical cores. Do something to make them cost-effective for servers and licensing. Anyone else? I don't see this mentioned in articles by enthusiasts.


This guy gets it! Server hardware cost is the smallest part of the TCO. Licenses and annual support waaaaay exceed the cost of everything else.

That said, WTF. First they moved the memory controller and cache off the CPU because... performance. Remember what we used to call the north bridge and external cache? Then they moved the cache back onto the CPU because... performance. Then they moved the memory controller back onto the CPU because... performance. Now they're taking the memory controller back out, but leaving it on the CPU package, because... performance? It's a friggin' discrete north bridge.

Seems like they're going in circles. Also, that AdoredTV guy is just anti-Intel (based on his other videos).
 

Oldmodder

Gawd
Joined
Aug 24, 2018
Messages
706
Holy too-tight spandex, Batman! :eek:

Still have my Opteron 185 ("Denmark") CPU/machine; I had great OC fun on that one for a while, but very soon it will have to go, as I have no need for it: I'm donating my current machine to my friend, and it will become the backup machine to my Threadripper.
The "Denmark" CPU will end up on my keychain, or go into the old CPU collection just ahead of the Northwood EE CPU.
 

ecktt

Limp Gawd
Joined
Oct 22, 2004
Messages
415
If your SQL servers are VMs, you only pay for the cores you assign to the VM. Where you save on licensing is the "per socket" licensing for VMware. So, go ahead and load up a 4-socket Xeon system and pay out the ass for VMware licenses, or slap a single AMD in there and save some money.

That's not what M$ told us when we got audited. I wish they did!

We pay out our @$$ for VMware either way! Annual support costs more than a new Nutanix deployment ...... with hardware!
 

Nobu

[H]F Junkie
Joined
Jun 7, 2007
Messages
8,967
This guy gets it! Server hardware cost is the smallest part of the TCO. Licenses and annual support waaaaay exceed the cost of everything else.

That said, WTF. First they moved the memory controller and cache off the CPU because... performance. Remember what we used to call the north bridge and external cache? Then they moved the cache back onto the CPU because... performance. Then they moved the memory controller back onto the CPU because... performance. Now they're taking the memory controller back out, but leaving it on the CPU package, because... performance? It's a friggin' discrete north bridge.

Seems like they're going in circles. Also, that AdoredTV guy is just anti-Intel (based on his other videos).
Technology has changed a lot. Expansion cards in the past had limited memory and processing power, and leaned heavily on the CPU and main memory. Having a cache and controller between the expansion cards, CPU, and memory made sense because of the bandwidth and latency limitations of the time. Once buses became faster and components started having more local memory, it made more sense to have the memory controller and caches as close to the CPU as possible. As process sizes shrank, more components could be squeezed close together and latency went down. Now the cost isn't as high to have the memory controller on a separate chip, but you still want it close to the CPU to keep latency down.
 

BinarySynapse

[H]F Junkie
Joined
Feb 6, 2006
Messages
15,103
This guy gets it! Server hardware cost is the smallest part of the TCO. Licenses and annual support waaaaay exceed the cost of everything else.

That said, WTF. First they moved the memory controller and cache off the CPU because... performance. Remember what we used to call the north bridge and external cache? Then they moved the cache back onto the CPU because... performance. Then they moved the memory controller back onto the CPU because... performance. Now they're taking the memory controller back out, but leaving it on the CPU package, because... performance? It's a friggin' discrete north bridge.

Seems like they're going in circles. Also, that AdoredTV guy is just anti-Intel (based on his other videos).


It's just the typical cycle of performance improvement: find a way to make something faster than what you have now, then find a way to make it faster again. The "make it faster" stage usually means discrete components and/or higher-speed narrow data paths. Then you hit a speed limit, and the only way to make it faster again usually involves integration to reduce transmission times, or making a wider path to get more data through at a time (or both).
 

Deleted member 93354

Guest
This guy gets it! Server hardware cost is the smallest part of the TCO. Licenses and annual support waaaaay exceed the cost of everything else.

That said, WTF. First they moved the memory controller and cache off the CPU because... performance. Remember what we used to call the north bridge and external cache? Then they moved the cache back onto the CPU because... performance. Then they moved the memory controller back onto the CPU because... performance. Now they're taking the memory controller back out, but leaving it on the CPU package, because... performance? It's a friggin' discrete north bridge.

Seems like they're going in circles. Also, that AdoredTV guy is just anti-Intel (based on his other videos).

You're not getting it. The way the licenses work out, this is in your favor.

White rooms break drive data apart from CPU clusters using fiber. (Someone brought up CPU loads being low and data I/O being the bottleneck. What do you think all those PCIe lanes are for?)

I've been working with these blocks for a while. These aren't chips on a motherboard with copper interconnects, but close-proximity chips on an MCP interposer. There is little speed letdown.

Today's chips are actually multiple stages working independently, from prefetch to branch prediction / speculative execution, memory controller, cache, ALU, FPU, and MIMD/vector extensions.

Now, admittedly, putting all these separate pieces of silicon on an interposer is not as fast as putting them on one piece of silicon. This is especially true with cache conflicts/misses. But you have a lot more resources at your disposal over a wider area, making it easier to cool. Plus, it's easier to speed-bin lots of fast small chips than one fast huge chip.

That large center chip handles I/O, memory, and video instructions. All take considerable silicon. The ALU, FPU, cache, and branch predictor are small and manageable units.
 

kandrey89

Limp Gawd
Joined
Jul 11, 2015
Messages
182
ZenX is the Rome-Vega hybrid; it's following Apple's X numbering (the numeral 10): nine chiplets plus a GPU as the tenth.
 

DrBorg

Gawd
Joined
Jan 22, 2005
Messages
555
What an AMAZING time for hardware....again! It's probably been almost 10 years, since socket X58 and SSDs were newish, that I've been this thrilled to build a new "MAIN" rig.....

If you still have an X58 chipset lying around, find a Xeon X5670 for it; it overclocks like you would not believe.

I'm running one at 4.5GHz and it stays at 60C maxed out. If you have the right mobo you can run faster; mine won't run it over a 22x multiplier, and the chip supports 24x. And 96GB of memory, lol.
The i7-920 had issues with more than 16GB.
 

N4CR

Supreme [H]ardness
Joined
Oct 17, 2011
Messages
4,948
AMD went on about active interposers briefly, then utter silence, absolute crickets. I think that's the secret sauce here, if not for this release then for Zen 3 for sure. I really want to know if they're using an active interposer to allow for an 8-core CCX, and maybe even two 8-core CCXs per die, with the savings from dropping most of the interconnect down to the active interposer. Combine the die shrink with the lack of IF and you have the room and the performance advantage.
That would be a significant upgrade and make the most sense, versus eight individual dies as per Charlie's stupid leak. Yeah, they'd screw the latency, complexity, and cost into a ditch with 8 dies... come on, Charlie. Potentially over twice the failure rate of EPYC when you start doubling the dies; I'm sure a statistician will be here to correct my shitty math soon ;).. Basically that seems overly complex to me, with a far too large socket and too much latency.
But if they've done the massive L3 with an 8-core CCX on an active interposer for now, it will give great gains as well.


Either way it will be interesting, but I'm holding out for two 8-core CCXs per die. It would make TR-style Zen cores possible, but with the TR WX memory controller limitation.
 

Pieter3dnow

Supreme [H]ardness
Joined
Jul 29, 2009
Messages
6,784
AMD went on about active interposers briefly, then utter silence, absolute crickets. I think that's the secret sauce here, if not for this release then for Zen 3 for sure. I really want to know if they're using an active interposer to allow for an 8-core CCX, and maybe even two 8-core CCXs per die, with the savings from dropping most of the interconnect down to the active interposer. Combine the die shrink with the lack of IF and you have the room and the performance advantage.
That would be a significant upgrade and make the most sense, versus eight individual dies as per Charlie's stupid leak. Yeah, they'd screw the latency, complexity, and cost into a ditch with 8 dies... come on, Charlie. Potentially over twice the failure rate of EPYC when you start doubling the dies; I'm sure a statistician will be here to correct my shitty math soon ;).. Basically that seems overly complex to me, with a far too large socket and too much latency.
But if they've done the massive L3 with an 8-core CCX on an active interposer for now, it will give great gains as well.


Either way it will be interesting, but I'm holding out for two 8-core CCXs per die. It would make TR-style Zen cores possible, but with the TR WX memory controller limitation.

In a way, AMD will be ahead of what is worth buying for consumers, because software doesn't yet scale across that many cores (which will change now).
And what they're doing on the server platform is outright amazing, but that is still very different from what consumers need. I hope that in the coming years we get something aimed more at the desktop than "binned" server parts.
 

Meaker

Official representative of powernotebooks.com!
Joined
Jan 10, 2004
Messages
924
AMD went on about active interposers briefly, then utter silence, absolute crickets. I think that's the secret sauce here, if not for this release then for Zen 3 for sure. I really want to know if they're using an active interposer to allow for an 8-core CCX, and maybe even two 8-core CCXs per die, with the savings from dropping most of the interconnect down to the active interposer. Combine the die shrink with the lack of IF and you have the room and the performance advantage.
That would be a significant upgrade and make the most sense, versus eight individual dies as per Charlie's stupid leak. Yeah, they'd screw the latency, complexity, and cost into a ditch with 8 dies... come on, Charlie. Potentially over twice the failure rate of EPYC when you start doubling the dies; I'm sure a statistician will be here to correct my shitty math soon ;).. Basically that seems overly complex to me, with a far too large socket and too much latency.
But if they've done the massive L3 with an 8-core CCX on an active interposer for now, it will give great gains as well.


Either way it will be interesting, but I'm holding out for two 8-core CCXs per die. It would make TR-style Zen cores possible, but with the TR WX memory controller limitation.

Smaller chips have a higher yield rate and are more reliable.
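For the statistician-bait upthread, here's a minimal sketch of why, using the classic Poisson yield model; the defect density and die areas are made-up illustration numbers, not real TSMC figures:

Code:
import math

# Poisson yield model: Y = exp(-A * D), with A = die area in cm^2 and
# D = defect density in defects/cm^2. D here is purely hypothetical.
D = 0.2  # defects per cm^2 (assumed for illustration)

def die_yield(area_mm2):
    return math.exp(-(area_mm2 / 100.0) * D)

print(f"one ~700 mm^2 monolithic die: {die_yield(700):.1%}")  # ~24.7%
print(f"one ~75 mm^2 chiplet:         {die_yield(75):.1%}")   # ~86.1%

# A defective chiplet wastes ~75 mm^2 of silicon; a defective monolithic
# die wastes ~700 mm^2, so usable silicon per wafer rises dramatically.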
 

ole-m

Limp Gawd
Joined
Oct 5, 2015
Messages
452
Yeah, more and more I'm finding that even my 4 or 5 year old servers are scaling very well with software upgrades, to the point where CPU usage has dropped significantly but the HDDs are getting worked double time. I've had to do more to upgrade from gigabit to SFP+ fiber modules and SSDs than to increase CPU counts. Just keeping the general core count the same has worked wonders with the better per-core performance of the newer parts, plus throwing more RAM in there for good measure. My costs are way down with this trend.

We've noticed the same.
However, we've halved our server count over the years, and we're doing MORE.
Our disk enclosures have increased by a lot, but the total cost of ports on the network side and server side is a big reduction. Even though we're buying increasingly higher-tier, faster, more bleeding-edge 40/100Gbit modules, we've reduced the number of modules in use, thus reducing the switch/router count, and on top of that we've cut power consumption by 40% by increasing the core count per server.
 

Riccochet

Fully [H]
Joined
Apr 11, 2007
Messages
28,210

That's not what M$ told us when we got audited. I wish they did!

We pay out our @$$ for VMware either way! Annual support costs more than a new Nutanix deployment ...... with hardware!

Then you got screwed, because M$ SQL licensing has been "per core" since 2012. It even says "per core" on their own pricing site.

https://www.microsoft.com/en-us/sql-server/sql-server-2017-pricing

We're in the process of replacing our 12 Cisco UCS dual-socket Sandy Bridge based blades with EPYC-based UCS blades: higher core density per socket, higher IPC per core. We're looking at 8 blades. That will lower our VMware licensing costs, and possibly our SQL licensing costs as well, if we can go from, say, 8 cores per SQL VM to 4 or 6. This may not be the case for some people, but we're due for a hardware refresh.
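As an illustration of that right-sizing math (the per-core price below is a placeholder, not Microsoft's actual list price, though the 4-core minimum per VM is a real SQL Server core-licensing rule):

Code:
# Hypothetical SQL Server per-core licensing for a fleet of VMs.
PRICE_PER_CORE = 7000   # placeholder, not an actual list price
MIN_CORES_PER_VM = 4    # SQL Server core licensing's per-VM minimum

def sql_cost(vcores_per_vm, n_vms):
    billed = max(vcores_per_vm, MIN_CORES_PER_VM)
    return billed * PRICE_PER_CORE * n_vms

before = sql_cost(vcores_per_vm=8, n_vms=12)  # 8-core VMs on old blades
after = sql_cost(vcores_per_vm=6, n_vms=12)   # 6-core VMs after IPC uplift

print(f"before: ${before:,}  after: ${after:,}  saved: ${before - after:,}")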
 

Nobu

[H]F Junkie
Joined
Jun 7, 2007
Messages
8,967
Then you got screwed, because M$ SQL licensing has been "per core" since 2012. It even says "per core" on their own pricing site.

https://www.microsoft.com/en-us/sql-server/sql-server-2017-pricing

We're in the process of replacing our 12 Cisco UCS dual-socket Sandy Bridge based blades with EPYC-based UCS blades: higher core density per socket, higher IPC per core. We're looking at 8 blades. That will lower our VMware licensing costs, and possibly our SQL licensing costs as well, if we can go from, say, 8 cores per SQL VM to 4 or 6. This may not be the case for some people, but we're due for a hardware refresh.
Thinking about it, maybe they bought the wrong license by mistake?
 
Joined
May 10, 2016
Messages
634
Thinking about it, maybe they bought the wrong license by mistake?

The last time I looked, it's both: a 2-processor minimum, with an 8-core minimum per processor, for Windows Server licensing. SQL is either Server+CAL, i.e. "user who benefits from the use of the system" (per the 2014 multiplexing memo), or per-core.
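For anyone doing the math, a minimal sketch of those Windows Server core-licensing minimums (the 8-core-per-processor and 16-core-per-server floors are the published rules; the price is a made-up placeholder):

Code:
# Windows Server core licensing: every physical core must be licensed,
# with floors of 8 core licenses per processor and 16 per server.
PRICE_PER_CORE = 500  # hypothetical placeholder, not a real list price

def licensed_cores(sockets, cores_per_socket):
    per_proc = max(cores_per_socket, 8)   # 8-core minimum per processor
    return max(sockets * per_proc, 16)    # 16-core minimum per server

for sockets, cores in [(1, 64), (2, 16), (2, 6)]:
    n = licensed_cores(sockets, cores)
    print(f"{sockets}S x {cores}c -> license {n} cores (${n * PRICE_PER_CORE:,})")
# 1S x 64c -> 64 cores; 2S x 16c -> 32; 2S x 6c -> 16 (the floors kick in)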
 

Uvaman2

2[H]4U
Joined
Jan 4, 2016
Messages
3,143
Hopefully Navi will have some chiplet scalability sauce baked in too. Rome is looking great; I'm surprised how much the Zen architecture seems to have changed in such a short time.
 

oROEchimaru

Supreme [H]ardness
Joined
Jun 1, 2004
Messages
4,662
If your SQL servers are VMs, you only pay for the cores you assign to the VM. Where you save on licensing is the "per socket" licensing for VMware. So, go ahead and load up a 4-socket Xeon system and pay out the ass for VMware licenses, or slap a single AMD in there and save some money.


That is not how it works with SQL. If you want a non-VM server, a SQL license will be 2x as expensive for AMD as for Intel due to the core count.
 

Riccochet

Fully [H]
Joined
Apr 11, 2007
Messages
28,210
That is not how it works with SQL. If you want a non-VM server, a SQL license will be 2x as expensive for AMD as for Intel due to the core count.

Who TF doesn't run it as a VM? And if that's the case, you only buy the CPU for the needs of the standalone server, not some high-core-count, multi-socket server for a standalone SQL instance.
 

psyclist

Gawd
Joined
Jan 25, 2005
Messages
844
Intel is trying to stay in the news cycle and steal some thunder from AMD's presentation tomorrow. Adored was right before about the dual-die Cascade Lake chip, and I'm guessing he's right about Rome as well. 2019 is gonna be a good year! And if the rumors are true, AMD has done it again and surpassed Intel in performance: 64 cores vs 48, with AMD pushing a much lower TDP and thus able to boost for longer. Looking forward to what Intel answers with...
 

oROEchimaru

Supreme [H]ardness
Joined
Jun 1, 2004
Messages
4,662
Who TF doesn't run it as a VM? And if that's the case, you only buy the CPU for the needs of the standalone server, not some high-core-count, multi-socket server for a standalone SQL instance.


Why would you run a server as a VM when you need it dedicated? VMs are handy if you're separating out databases or have many smaller projects and need to partition system resources. Adding a VM to a single box doesn't magically add any value, just costs.
 

Deleted member 93354

Guest
Why would you run a server as a VM when you need it dedicated? VMs are handy if you're separating out databases or have many smaller projects and need to partition system resources. Adding a VM to a single box doesn't magically add any value, just costs.

When you are teaching classes.
When you are spawning processes for a web API and want to isolate users.
For advanced security initiatives (antivirus companies do this to test the effects of viruses in a sandbox).
For dividing up work resources for build machines.
For multiple test environments.

The list goes on and on.
 