
Intel’s Nova Lake-S LGA1954 52 cores

Problem is: 14 P-cores is HEDT (specs in sig).

/fight me.

You clearly didn't look at mine.

Our systems were HEDT. In 2019. 4-core Nehalem in 2009, 6-core Sandy Bridge-E in 2011, etc. Extra memory channels, extra PCIe slots (all mine are in use, how about you?), most of the server platform features plus OC; it's generally about having more cores and other features over the standard desktop platform.

16 P-cores has been on mainstream desktop since AMD's 3950X, and 24 is coming with Ryzen 10k. You can still get Threadripper/Xeon-W chips with fewer cores than desktop, but that's super entry-level stuff for those that need the platform more than the cores. Intel is just spamming E-cores, and I don't think that delivers the real experience they want us to believe it does over a comparably priced all-P-core AMD CPU; I'd rather they upped the good-core count with that sweet AVX-512.

A chip that just has more cores than its brethren, maybe because of a second chiplet, but is still socketed on the same limited desktop motherboards doesn't deserve to share the HEDT label with the likes of TR and Xeon-W (and the former true HEDT Core(-X) products). Core 2 Quad wasn't the start of HEDT, either; it was just a higher tier of a mainstream platform from before there was true differentiation.
 
Those new Arctic Wolf E-cores should support AVX10.2 too, and if their generational jump is similar to what Panther Lake achieved on laptop, they could be close to 12th/13th-gen P-cores in some workloads.

Running AVX-512 natively at 512 bits on the P-cores plus a 256-bit emulation mode on so many E-cores is maybe not ideal, but we will have to see how well it works.
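
To make the mixed-width concern concrete, here's a minimal dispatch sketch in Python (a hypothetical helper, not any real library's API): code that picks a kernel width from CPU feature flags, where a hybrid chip's OS-visible flags are the intersection of what every core type supports.

```python
def pick_vector_width(flags):
    """Pick the widest SIMD kernel the reported feature flags allow.

    `flags` is a set of lowercase CPUID feature names, e.g. as parsed
    from /proc/cpuinfo on Linux. Hypothetical helper for illustration.
    """
    if "avx512f" in flags:
        return 512  # native 512-bit path (P-cores only today)
    if "avx2" in flags:
        return 256  # the 256-bit path an AVX10/256 E-core could run
    return 64       # scalar fallback

# On a hybrid part the scheduler-safe flag set is the intersection of
# all cores' flags, so AVX2-only E-cores drag the package down to 256.
p_core = {"avx2", "avx512f"}
e_core = {"avx2"}
print(pick_vector_width(p_core & e_core))  # → 256
```

If the E-cores' 256-bit AVX10 mode counts as "AVX-512 compatible" to the OS, the dispatch question becomes whether software sees one width or two.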

PCIe lane capacity/bandwidth will be quite something. The thing that really does not feel HEDT is the 2 memory channels (16P+32E on 288 MB of L3 is well above the HEDT threshold imo; it is so much more than the current 9950X3D line, and Zen 6 could set a new bar too, of course). It has a giant L3 cache and DDR5-8000 to help, but with current memory prices a 4x32 5600-type kit would have been an interesting option instead of going for those 2x64 8000 kits that we can imagine will cost a small fortune.
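
For rough numbers on that memory trade-off, here is the theoretical-peak arithmetic (treating each channel as 64 bits wide and ignoring real-world efficiency):

```python
def ddr_peak_gbs(mt_per_s, channels, channel_bits=64):
    """Theoretical peak bandwidth in GB/s: transfers/s * channels * bytes."""
    return mt_per_s * 1e6 * channels * channel_bits / 8 / 1e9

dual_ddr5_8000 = ddr_peak_gbs(8000, 2)  # the 2x64 kit the platform targets
quad_ddr5_5600 = ddr_peak_gbs(5600, 4)  # hypothetical 4-channel alternative
print(f"2ch DDR5-8000: {dual_ddr5_8000:.1f} GB/s")  # 128.0 GB/s
print(f"4ch DDR5-5600: {quad_ddr5_5600:.1f} GB/s")  # 179.2 GB/s
```

Even at DDR5-8000, two channels top out well below what four slower channels would deliver, which is the crux of the "two channels doesn't feel HEDT" complaint.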
 
You clearly didn't look at mine.

Our systems were HEDT. In 2019. 4-core Nehalem in 2009, 6-core Sandy Bridge-E in 2011, etc. Extra memory channels, extra PCIe slots (all mine are in use, how about you?), most of the server platform features plus OC; it's generally about having more cores and other features over the standard desktop platform.
I saw your sig; my response was tongue in cheek. I want the extra PCIe lanes so I can run 3 M.2 SSDs without sharing slots or running through the chipset. I really don't have any pressing need for multiple cards in the PCIe slots, but it's nice to know they're there in case Nvidia brings back some form of multi-GPU implementation (DLSS 5?).

16 P-cores has been on mainstream desktop since AMD's 3950X, and 24 is coming with Ryzen 10k. You can still get Threadripper/Xeon-W chips with fewer cores than desktop, but that's super entry-level stuff for those that need the platform more than the cores. Intel is just spamming E-cores, and I don't think that delivers the real experience they want us to believe it does over a comparably priced all-P-core AMD CPU; I'd rather they upped the good-core count with that sweet AVX-512.
Agreed, AVX-512 is a game changer that Intel has pretty much had to abandon due to E-cores. I guess we'll have to see how the 256-bit implementation on the E-cores works before forming an opinion.

A chip that just has more cores than its brethren, maybe because of a second chiplet, but is still socketed on the same limited desktop motherboards doesn't deserve to share the HEDT label with the likes of TR and Xeon-W (and the former true HEDT Core(-X) products). Core 2 Quad wasn't the start of HEDT, either; it was just a higher tier of a mainstream platform from before there was true differentiation.
Agreed.
 
A chip that just has more cores than its brethren, maybe because of a second chiplet, but is still socketed on the same limited desktop motherboards doesn't deserve to share the HEDT label with the likes of TR and Xeon-W (and the former true HEDT Core(-X) products). Core 2 Quad wasn't the start of HEDT, either; it was just a higher tier of a mainstream platform from before there was true differentiation.
I wouldn't call any Threadripper or Xeon HEDT. Those are workstation. Even the later Intel X series were slipping hard towards workstation: basically overclocked server chips with ECC disabled, but not quite as fast as the desktop chips in games most of the time because they didn't clock as high. HEDT to me is what I used to build an SLI rig with a pair of GTX 680s back in 2012. Then I added a NIC that wanted an 8x slot. So to me HEDT = cram a lot of stuff into a gaming rig: a Socket 2011 X79 board and an i7-3820 that was only a little more $ than a regular desktop-model i7 and about as fast CPU-wise, with lots more I/O and RAM capacity. If you wanted, you could pay more for a 6-core CPU. To me it's not current HEDT if it can't keep up with current desktop chips in games. I like more than 8 cores for occasional hobby use, but I'm not going to pay up for an expensive workstation setup that runs games slower.

I did build an X299 system, largely because at the time desktop boards just didn't have enough slots. I wanted at least five 4x PCI-e devices minimum: 3 M.2 + a 10Gb SFP+ NIC + some extra for expansion, because it seems like I always end up wanting to add something over the life of a rig. That X299 build could provide an 8x slot for my NIC, 3 onboard + 4 on a 16x card for 7 M.2, and still run the vid card at 16x. Fast forward to now and basic Z890 boards have 6, usually 4 M.2 and 2 4x slots. Some AMD boards have 5-6, but they tend to get into that "use this and that turns off" mess, and some of it is PCI-e 3.0 because they steal lanes from SATA ports. I want an "HEDT" chipset that has more PCI-e lanes and lets me stuff more storage and cards into a rig. It would be nice if it had a bit better CPU, but at this point a "9" desktop CPU is plenty good enough for me.
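
As a back-of-the-envelope check on that X299 layout, here's a rough CPU lane tally in Python (assuming a 44-lane Skylake-X part; entry-level models had fewer):

```python
# Hypothetical tally of the CPU PCIe lanes in the X299 build above.
cpu_lanes = 44  # top Skylake-X parts; lower-end chips had 28
devices = {
    "video card (x16)": 16,
    "10Gb SFP+ NIC (x8)": 8,
    "quad-M.2 carrier card (x16)": 16,
}
used = sum(devices.values())
print(f"{used} of {cpu_lanes} CPU lanes used, {cpu_lanes - used} spare")
# The 3 onboard M.2 slots ride the chipset/DMI link instead, which is
# why they don't count against the CPU lane budget here.
```

The same tally against a mainstream chip's 20-24 CPU lanes shows why the build didn't fit on a desktop board of that era.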
 
I wouldn't call any Threadripper or Xeon HEDT. Those are workstation. Even the later Intel X series were slipping hard towards workstation: basically overclocked server chips with ECC disabled, but not quite as fast as the desktop chips in games most of the time because they didn't clock as high. HEDT to me is what I used to build an SLI rig with a pair of GTX 680s back in 2012. Then I added a NIC that wanted an 8x slot. So to me HEDT = cram a lot of stuff into a gaming rig: a Socket 2011 X79 board and an i7-3820 that was only a little more $ than a regular desktop-model i7 and about as fast CPU-wise, with lots more I/O and RAM capacity. If you wanted, you could pay more for a 6-core CPU. To me it's not current HEDT if it can't keep up with current desktop chips in games. I like more than 8 cores for occasional hobby use, but I'm not going to pay up for an expensive workstation setup that runs games slower.

I did build an X299 system, largely because at the time desktop boards just didn't have enough slots. I wanted at least five 4x PCI-e devices minimum: 3 M.2 + a 10Gb SFP+ NIC + some extra for expansion, because it seems like I always end up wanting to add something over the life of a rig. That X299 build could provide an 8x slot for my NIC, 3 onboard + 4 on a 16x card for 7 M.2, and still run the vid card at 16x. Fast forward to now and basic Z890 boards have 6, usually 4 M.2 and 2 4x slots. Some AMD boards have 5-6, but they tend to get into that "use this and that turns off" mess, and some of it is PCI-e 3.0 because they steal lanes from SATA ports. I want an "HEDT" chipset that has more PCI-e lanes and lets me stuff more storage and cards into a rig. It would be nice if it had a bit better CPU, but at this point a "9" desktop CPU is plenty good enough for me.
Bearing in mind that X299 coupled with an X-series CPU also allowed for quad-channel memory, I can get bandwidth that's right up there with many DDR5 systems. It was something that set HEDT parts above entry-level consumer-grade parts, and I think this is the point Grebuloner is making: increased core count alone doesn't really make a CPU an HEDT part.
 

Intel "Nova Lake-S" Uses 2L-ILM Socket for Better CPU Cooler Contact

by AleksandarK Today, 03:09
In the latest series of leaks about Intel's upcoming "Nova Lake" chips, we learn that the desktop "Nova Lake-S" designs will feature a new socket mounting and independent loading mechanism (ILM). According to exclusive information from VideoCardz, the NVL-S desktop designs on the LGA-1954 socket will include a new 2L-ILM socket mechanism to improve contact with CPU coolers. Given that Intel's upcoming NVL-S desktop processors will have up to 52 high-frequency, power-hungry cores, a better mounting mechanism is definitely needed. This 2L-ILM is a two-level independent loading mechanism, using one lever on each side of the CPU to secure it into the LGA-1954 socket. Applying pressure on both sides of the socket ensures improved flatness while the locking mechanism does its job.

Intel categorizes its desktop sockets in various forms. One of the most basic is the Default-ILM, along with the RL-ILM. As seen with "Arrow Lake," Intel's motherboards divide the socket locking mechanism into the ILM and RL-ILM designs, applied depending on the sector. For the lower-end sector, the Default-ILM socket is used, while the RL-ILM is applied to higher-end overclocking motherboards, providing better pressure and ensuring a flatter surface contact for a cooler. Noctua and Cooler Master already differentiate these two in their CPU coolers, but the mounting hardware is generally the same for both. The RL-ILM is simply a better design for overclocking due to the improved surface contact it provides.
 
So... a 2 lever setup like my socket 2011 and 2066 rigs? All that was old is new again!
lol, gotta keep all those tiny little pins connected when that long-ass chip tries to flex. Whenever I closed one of those sockets, I was always worried I was going to crack something from the amount of force that second lever required.

With the pin density increases, I think it's time to move on from desktop clamshell ILMs to server-style socket loading: either no ILM at all, with the cooler holding the chip in like Intel's server sockets, or the multi-screw slot-loading ILM from AMD.
 
lol, gotta keep all those tiny little pins connected when that long-ass chip tries to flex. Whenever I closed one of those sockets, I was always worried I was going to crack something from the amount of force that second lever required.
I always found LGA2066 to be somewhat less daunting, as there was less pressure on a single lever. FCLGA1700 really wasn't a great design: the ILM flexed the IHS, resulting in uneven contact between the IHS and the cooler unless a contact frame was used. Going back to two levers will hopefully resolve this problem 100%.

I will say, AMD's TRx socket is an awesome design.
 

Intel Core Ultra "Nova Lake-S" Desktop Core Configurations Surface

by btarunr Today, 04:23
CPU core configurations of Intel's next-generation Core Ultra Series 4 "Nova Lake-S" desktop processors were leaked to the web by VideoCardz. These processors will remain disaggregated, tile-based chips, much like current Core Ultra Series 2 "Arrow Lake-S," but Intel will probably rearrange components around the various tiles. These chips will also introduce low-power island E-cores (LPE-cores) to the socketed desktop platform. The new processors will be designed for Socket LGA1954, which will be significantly different from the current LGA1851, while retaining cooler compatibility.

Intel intends LGA1954 to have longevity spanning several processor generations, lasting until the end of the decade. The Core Ultra Series 4 will span a wide range of price points, starting with Core Ultra 3 with the lowest core count, going all the way up to Core Ultra 9 (probably X9). Perhaps the most interesting bit of news is the introduction of dual-die processors: SKUs with two Compute tiles connected to the SoC tile. This allows Intel to achieve extremely high CPU core counts. Since this is essentially the same approach to high core counts as AMD's, both Compute tiles have equal access to memory and PCIe.
 
One of the Ultra 7s has only 4 P-cores max?
Looks like there are other Ultra 7s with 8 P-cores.

View: https://youtu.be/KIK4fgsYtqQ?si=tnqWDUgIEBpnLFKp

That's not unlike Intel, but I don't think those 4 P-core Core Ultra 7s are going to be mainline desktop chips.

9>7>5>3 isn't strict in terms of performance. The pecking order only holds within a family (HX, H, or U). In recent generations desktop chips have been "S", although it doesn't show up in the branding; K, T, and non-K are all "S", and if you treat them all as one "S" family the pecking order doesn't hold up: an i5 K beating up on an i9 T, etc.

Intel's latest Core Ultra 7 and 9 H models have 4 P-cores. Panther Lake uses 4P+8E+4LPE. Arrow Lake "U" laptop models only have 2 P-cores, including the Core Ultra 7 265U. That's a "15W" chip for thin and light laptops.

The 4 p-core 7 and the low end 5 and 3 might be embedded CPUs. It's also possible it's "Core" rather than "Core Ultra". I could see them doing that. They made this Core Ultra/Core split and they've been busily rebranding Raptor Lake as Core 100 and Core 200. Bartlett Lake came out as "Core 200" on LGA1700, but Arrow Lake is TSMC so they may not want to recycle it for their next batch of "Core" chips. Meteor Lake desktop was cancelled, so they don't have an Intel 3 or Intel 4 option. What to do? Maybe cut down Nova Lake and call it "Core" instead of "Core Ultra"? So basically I think those chips are planned but the names in the leak aren't totally accurate.
 
I just hope there is a "KF" version of these CPUs, since I will be pairing it with an RTX 6090, so I do not need some anemic onboard IGP using power and heating the CPU when it will NEVER be used.
 
Cheaper is an obvious reason to pick a KF when that's the case. Reasons to pay more for an iGPU despite owning a discrete GPU: resale value (or a second life as a server if you don't sell it; even headless, it's nice to have the possibility of hooking up a monitor for initial setup and whatnot), and emergency GPU issues/debugging. The power usage of a disabled iGPU must be minuscule (even enabled but unused it is minimal), and with Quick Sync often being better than a discrete GPU for some things, it's probably better to keep it enabled. It's a non-issue; it would be different if the KF did not have the actual silicon for the iGPU on there and you gained something by its absence, but that's not the case.
 
Cheaper is an obvious reason to pick a KF when that's the case. Reasons to pay more for an iGPU despite owning a discrete GPU: resale value (or a second life as a server if you don't sell it; even headless, it's nice to have the possibility of hooking up a monitor for initial setup and whatnot), and emergency GPU issues/debugging. The power usage of a disabled iGPU must be minuscule (even enabled but unused it is minimal), and with Quick Sync often being better than a discrete GPU for some things, it's probably better to keep it enabled. It's a non-issue; it would be different if the KF did not have the actual silicon for the iGPU on there and you gained something by its absence, but that's not the case.
I am not selling my systems; my RTX 4090 system goes to my wife. And I dislike midget "GPUs" on my CPU; I am not a console user.
 
I am not selling my systems; my RTX 4090 system goes to my wife. And I dislike midget "GPUs" on my CPU; I am not a console user.
All fair enough, but they are always there; it is not like the KF does not have them, they are just disabled. If the option existed to buy a CPU that did not waste silicon on the GPU and gave you more CPU in exchange, then whether Intel makes one for a new generation would be a bigger deal, but as it works now it does not matter.

I would not worry about it; they will not start throwing away CPUs just because the silicon in the GPU part failed anytime soon. They have been binning them and have had a market for them established for a long time now; AMD is the one with the iGPU always enabled. Buying a 9950X does not make you a console user.
 
I actually wouldn't be all that surprised if iGPUs became useful in future games even on systems with a dGPU for offloading non-graphics processing. The idea here isn't that you'd need it, but rather it would relieve your vid card of some work so it could concentrate on graphics. Eventually we're going to see AI creeping into games. There are good uses and bad uses for that, but one of the good ones IMHO would be making NPC enemies smarter about fighting. Maybe the NPUs could be useful there as well. Then there's stuff like physics processing. Presently there are too many systems out there that either don't have an iGPU or have one that's too weak to bother with, but I think that'll change over time. Arrow Lake was a big improvement over Raptor Lake, and AM5 has an iGPU whereas previous AMD generations didn't.
 