Intel's Computex 2018 Keynote in 8 Minutes

FrgMstr

Just Plain Mean
Staff member
Joined
May 18, 1997
Messages
53,878
Did you miss the 70-minute-long keynote by Intel last night at Computex 2018? That is a damn shame. I know a lot of HardOCP readers get a bit chapped when they have to sit around and listen to company execs drone on forever to get a tidbit of information. I have to hand it to Engadget: they took the 70 minutes of droning (and Intel is known for drones) and edited it down to 8 minutes and 33 seconds. Kudos!

Check out the video.

Here at Computex 2018, Intel showed off everything from a 28-core processor to an AI-powered rock band. Oh, and NBA star Jeremy Lin dropped in to talk a bit about his experience owning and managing a Dota esports team.
 

Chris_B

Supreme [H]ardness
Joined
May 29, 2001
Messages
5,289
Presumably they're still cheaping out using their *cough cough* "high grade" TIM on any forthcoming processors. It's amazing that AMD can do it right despite being not that financially well off (compared to Intel), while Intel execs wipe their asses with $100 bills yet can't give the nod to use solder.
 

pcgeekesq

[H]ard|Gawd
Joined
Apr 23, 2012
Messages
1,399
Kudos to Engadget, I'm sure they did it to make a point (to paraphrase Sturgeon's Law, 90% of every presentation is crap.) But I loathe the new trend (at other websites) towards video reporting instead of text. It's easier for them I guess, and maybe the kiddies like it, but it's a waste of my time, because I read much faster than talking heads can talk.

The only news video I want to watch is SpaceX booster landings. I'm waiting for the next Falcon Heavy double-landing, hopefully it will be at night. :)
 
Joined
Nov 6, 2004
Messages
917
Kudos to Engadget, I'm sure they did it to make a point (to paraphrase Sturgeon's Law, 90% of every presentation is crap.) But I loathe the new trend (at other websites) towards video reporting instead of text. It's easier for them I guess, and maybe the kiddies like it, but it's a waste of my time, because I read much faster than talking heads can talk.

The only news video I want to watch is SpaceX booster landings. I'm waiting for the next Falcon Heavy double-landing, hopefully it will be at night. :)


Oh god so very much this.

Let me read it. I don't need to watch people talk about it. 80% of communication is non-verbal, and it's mainly the bullshit part that convinces you things are better than they are.

Let me read it. I had to get tested for work a couple of years back, and I read ~4-5 times faster than a 'normal' person, which is like 6-8x faster than the spoken word. I got better shit to do.
 

vegeta535

[H]F Junkie
Joined
Jul 19, 2013
Messages
9,686
Presumably they're still cheaping out using their *cough cough* "high grade" TIM on any forthcoming processors. It's amazing that AMD can do it right despite being not that financially well off (compared to Intel), while Intel execs wipe their asses with $100 bills yet can't give the nod to use solder.
AMD doesn't solder their lower-end chips and APUs. Give it time and they will also switch everything to TIM.
 
Last edited:

alamox

Gawd
Joined
Jun 6, 2014
Messages
596
Just love the bit with the ASUS CEO checking off his boxes: he throws "with AI" into the middle of a sentence, without explaining how or why, but just so you know it comes with AI... oh my...
 

Chris_B

Supreme [H]ardness
Joined
May 29, 2001
Messages
5,289
AMD doesn't solder their lower-end chips and APUs. Give it time and they will also switch everything to TIM.

You don't know that they will. The lower-end stuff probably doesn't require solder as much as the higher-end CPUs. The temp drops in delidded Intels alone make their use of paste farcical, and it seems to be an area AMD wants to steer clear of.
 

FrgMstr

Just Plain Mean
Staff member
Joined
May 18, 1997
Messages
53,878
AMD doesn't solder their lower-end chips and APUs. Give it time and they will also switch everything to TIM.
Given how heavily they have been marketing their soldered TIM, I don't think you are going to see that happen on their high-end desktop chips any time soon.
 

Formula.350

[H]ard|Gawd
Joined
Sep 30, 2011
Messages
1,102
Was the full presentation as lackluster as this condensed one?
By that I mean, was the crowd just as unimpressed by it as they were by this? From the presentations like this that I've seen in the past (full disclosure: I've only watched a dozen or so, game and hardware alike), the crowd has never needed prompting to get excited and cheer... Typically it has been the crowd erupting first, and then the presenter following with their own excitement. Hell, based on how quiet the cheering was even after being prompted, and the fact that oftentimes there are company employees in the audience, I wouldn't put it past said employees to be making up the bulk of the cheering. :(

What surprises me is that this is Intel, which has very, very loyal fans, who like any fanbase will get excited easily over something new... and yet, that wasn't happening. I had read the 8086K news blurb before watching this and went into it impressed at all cores running at 5GHz. Aaaand then the shoe drops. "5GHz Turbo Boost" womp womp.

No mention of 10nm either? heh

And was anyone else a bit caught off guard by the sheer size of that case, or is it just me?!
upload_2018-6-5_16-0-49.png


Is that insulated metal plumbing for phase-change running in the bottom as well??
Or is there just an ice-bath under the desk pumping in extra cold water?
upload_2018-6-5_16-6-29.png



Maybe if they used a soldered heatspreader they wouldn't need such massive cooling to keep those 28 cores running at speed.
Just sayin'....
 
Joined
Jun 4, 2008
Messages
2,811
Haha, I noticed the huge cooling tubes coming off of both of these rigs too. 28 cores is a lot to keep cool, but to have such crazy cooling and call it 5GHz is a bit off. Nothing impressive at all. I believe they have gotten too complacent for too long and no one really cares any more. Here is hoping AMD keeps it up and gets their ass back in gear to make some nice stuff again.
 

Formula.350

[H]ard|Gawd
Joined
Sep 30, 2011
Messages
1,102
Haha, I noticed the huge cooling tubes coming off of both of these rigs too. 28 cores is a lot to keep cool, but to have such crazy cooling and call it 5GHz is a bit off. Nothing impressive at all. I believe they have gotten too complacent for too long and no one really cares any more. Here is hoping AMD keeps it up and gets their ass back in gear to make some nice stuff again.
I mean, it's obvious that they're using Xeon parts, given the hexa-channel memory it's running... but man... Considering this is a rather late-stage Aorus motherboard in there (unless it was a server/workstation board they already had in production with an Aorus heatsink chucked on last-minute), it seems like it's been in the pipe for a while. So is there enough time for Intel to work out enough refinements to lower the thermal envelope enough that they'll be able to release something that can cope, with even high-end water cooling, for 28 cores at 4.5GHz? (That's a genuine question, too, not a jab)

Even with top-tier binning, it seems like they'd be asking a lot... The Xeon Platinum 8180 is $10K and 205W, with clocks of 2.5GHz base and 3.8GHz Boost. After looking this up, something tells me this 28-core is not going to be all it's cracked up to be. I can't imagine what Intel could disable that would merit a massive drop in price, so let's assume from the start that just removing the 8-socket SMP support will cut the price in half (there's not much else you can remove and remain competitive with Threadripper). Now you've got a $5,000 CPU.
Next, assuming you're getting, like TR, the crème de la crème of binned chips, we'll say the base clock has increased to... 3.2GHz (+700MHz over the 8180) and Boost to that 5GHz (+1.2GHz!). Removing scientific extrapolation from the picture (since I lack the mathematical prowess to figure it out), if we assume 12.195MHz per watt and ignore that power would increase solely from the bump to 3.2GHz (never mind any additional voltage), that's still a jump to 262W. Fair, then, to assume we're at least going to be north of 300W for 3.2GHz? I won't even speculate where it would be with just a few cores boosting to 5GHz (assuming other cores are not throttled back, which I'd assume they would be, if not fully put to sleep).
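The napkin math above can be sketched out like this (a rough illustration only: it assumes perfectly linear MHz-per-watt scaling from the 8180's published 205W / 2.5GHz figures, and real power scales worse than linearly with clock once voltage rises):

```python
# Back-of-envelope TDP scaling from the Xeon Platinum 8180's figures
# (205 W TDP at a 2.5 GHz base clock), assuming linear MHz-per-watt.
# Real power grows faster than this once extra voltage is needed,
# so treat the result as an optimistic floor.
base_clock_mhz = 2500
base_tdp_w = 205

mhz_per_watt = base_clock_mhz / base_tdp_w        # ~12.195 MHz per watt

target_clock_mhz = 3200                           # speculated 28-core base clock
estimated_tdp_w = target_clock_mhz / mhz_per_watt

print(f"{mhz_per_watt:.3f} MHz/W")                # ~12.195 MHz/W
print(f"~{estimated_tdp_w:.0f} W at 3.2 GHz")     # ~262 W
```

Which is exactly where the "north of 300W once voltage enters the picture" guess comes from.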

I wouldn't be looking forward to trying to cool this thing, mainly in terms of the cost associated with it. :S
 

dgz

Supreme [H]ardness
Joined
Feb 15, 2010
Messages
5,838
Dummy RGB sticks hehehhe. Oh man, why is this a thing. Still beats wood screws though. Don't mind the fridge behind the desk lol
 

Formula.350

[H]ard|Gawd
Joined
Sep 30, 2011
Messages
1,102
The 1m53s point is the motherboard... holy sh... lmao
Looks like 32 true phases, no doubling junk. Eight 8-pin CPU power connectors.
As he said, and I wouldn't be surprised, there may be one phase per core, with the rest being for the IMC/SoC.

And wow! AMD went straight for the nuts and are going to release 28 and 32 core Threadripper 2 chips!
The heat is on, for Intel!! [Pun and no-pun, intended!]
No wonder Intel is doing what they can to churn out a 5GHz 28 Core demo... o_0
 

SvenBent

2[H]4U
Joined
Sep 13, 2008
Messages
3,319
Seen it ages ago, still "meh" about it. By the time this alleged damage would occur odds are the chip would be long gone anyway.

I think you underestimate how long these chips need to hold up in server environments.
Your use case is not the only one in the world.
 

Chris_B

Supreme [H]ardness
Joined
May 29, 2001
Messages
5,289
I think you underestimate how long these chips need to hold up in server environments.
Your use case is not the only one in the world.

Well, if that were the case, surely AMD are aware of it? If it's as much of an issue as some seem to think, you would think Threadripper and Epyc would be pasted instead of soldered, as they're pretty much the server-side versions of the Ryzen series.
 

SvenBent

2[H]4U
Joined
Sep 13, 2008
Messages
3,319
Well, if that were the case, surely AMD are aware of it? If it's as much of an issue as some seem to think, you would think Threadripper and Epyc would be pasted instead of soldered, as they're pretty much the server-side versions of the Ryzen series.

I think you didn't read the article, or misunderstood it, because you are using the reverse of the article's argument to prove it's wrong.
Try reading it again and you will see exactly why the article does not claim what you say it's claiming.
 

Formula.350

[H]ard|Gawd
Joined
Sep 30, 2011
Messages
1,102
I think you didn't read the article, or misunderstood it, because you are using the reverse of the article's argument to prove it's wrong.
Try reading it again and you will see exactly why the article does not claim what you say it's claiming.
Dunno dude... it made sense to me?

der8auer's argument in the article was that heating and cooling cycles can cause micro-fractures in the solder, whereas that is not a problem with paste. While paste doesn't offer as high a thermal transfer as solder, it is claimed to provide 'more reliable' conductivity in the long term due to no micro-fracturing. However, if we're honest, a server rarely shuts down, so there won't really be that many heating and cooling cycles overall. My Minecraft server is run out of a top-tier data center and currently its uptime is 820 days. :p

So as I understood it, Chris's point was that if there was such a problem with soldered heatspreaders, and if the micro-fracturing was prevalent enough to be an issue, then AMD wouldn't still be soldering them. Either that, or AMD must not be aware of it.

Though, according to the comments on his page, the Xeons apparently are still soldered, so the overall argument for why soldering is not good is moot. As such, it reinforces the argument that Intel is just cost-cutting, since they opt not to solder their desktop chips. If it is in fact true and Xeons do have soldered heatspreaders (server-socketed Xeons, not embedded or desktop-chip based), then until Intel moves away from soldered heatspreaders completely, it doesn't really seem like a valid argument.


Here's my opinion though... I know who der8auer is and as such I respect his knowledge/opinion, but I think he may be over-analyzing the situation a bit. Most of us are aware of metal fatigue and how micro-fractures can ultimately build up enough to cause a metal to fail. If you aren't, after enough hands-on experience with electronics you will be; it's as simple as bending a paperclip. Granted, a paperclip is an extreme situation, but if you straighten out the one bend towards the end enough times, it will eventually break off. Different metals can take different amounts of this depending on how hard or soft the metal is; a razor blade, for example, will just snap in half. I'm straying, though. My point is that most solderable metals have quite a low melting point and as such are 'soft metals' that can flex/bend rather easily. Lots of imperceptible flexing, as in the case of a CPU, isn't going to be enough to cause the amount of micro-fractures that amounts to a complete failure, but over time I'm sure SOME will indeed form.

Now, where I'm going with this is that this isn't as big of a problem with things like power cables. Bend them often enough and they'll have plenty of micro-fractures, but they'll still deliver power sufficiently up until a capacitor fails in the PSU. SATA cables, with their signaling, can be impacted by too much bending, and it's suggested not to bend them. Yet how often have any of us experienced a situation where there has been a noticeable degradation in performance, even on cables that have seen multiple system upgrades? I've personally never seen any fail at all, and the only SATA cables I've had that didn't work were an entire set that came with an ASRock 990FX Fatal1ty review sample, which don't work in any system I've ever tried them in.

Point is, the contact area between the die and heatspreader is far larger than a small wire in a SATA cable, and yet those cables manage to hold up just fine after years and years of use, even after lots of bending during system upgrades. The heatspreader solder is going to undergo far less flex because, remember, it's tightly clamped down between a CPU cooler and a motherboard backplate with a socket in between. Even then, I would imagine the die would crack from these same thermal cycles before the micro-fractures would cause an issue in the thermal performance of the solder.

Overall, der8auer's point is more applicable to sub-zero cooling, I imagine, where the metal is cooled down to a point where it becomes brittle, so changes in temperature have a different impact compared to our above-ambient situations.

(Again, I am not a metallurgist, nor am I as knowledgeable/experienced as der8auer. I'm just applying my everyday logic and general knowledge to the situation.)
 

Nobu

[H]F Junkie
Joined
Jun 7, 2007
Messages
8,678
Silicon is much harder than the (indium) solder, and thicker as well, so it'd take much more force to cause any sort of fracture in it. But it's also more fragile, in that if force is applied in a specific way (near the edge, for instance) it will fracture easily. The solder is malleable, but it's also sticky and expands and contracts at a different rate than silicon. This expansion causes extreme forces in all directions parallel with the plane of the solder, which eventually (after some number of cycles) causes microfractures in the solder. This will happen eventually no matter how small the change in temperature (unless you somehow manage to prevent any temperature change), but larger fluctuations will cause it to fail sooner.
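The expansion mismatch described above can be put in rough numbers. This is a purely illustrative sketch: the CTE figures are assumed textbook values (indium ~32 ppm/K, silicon ~2.6 ppm/K), and the die size and temperature swing are hypothetical:

```python
# Rough shear from thermal-expansion mismatch between a silicon die
# and an indium solder joint over one heat-up. All inputs are assumed
# illustrative values, not vendor data.
cte_indium_ppm_k = 32.1    # coefficient of thermal expansion, assumed
cte_silicon_ppm_k = 2.6    # assumed

die_edge_um = 20_000.0     # hypothetical 20 mm die edge, in micrometres
delta_t_k = 60.0           # hypothetical idle-to-load swing in kelvin

# Differential expansion across the die edge: L * (a1 - a2) * dT
mismatch_um = die_edge_um * (cte_indium_ppm_k - cte_silicon_ppm_k) * 1e-6 * delta_t_k
print(f"~{mismatch_um:.1f} um of differential expansion per cycle")  # ~35.4 um
```

Tens of micrometres of differential movement across a rigidly bonded joint, repeated every thermal cycle, is exactly the kind of strain that fatigues a soft metal over time.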
 
Last edited:

Formula.350

[H]ard|Gawd
Joined
Sep 30, 2011
Messages
1,102
Silicon is much harder than the (indium) solder, and thicker as well, so it'd take much more force to cause any sort of fracture in it. But it's also more fragile, in that if force is applied in a specific way (near the edge, for instance) it will fracture easily. The solder is malleable, but it's also sticky and expands and contracts at a different rate than silicon. This expansion causes extreme forces in all directions parallel with the plane of the solder, which eventually (after some number of cycles) causes microfractures in the solder.
I suppose what I meant by the silicon die being damaged was in reference to his image. While I know his painted depiction of bending is so extreme as to illustrate the point, I'm also curious exactly how much flexing it can take, period. I've indeed crushed a corner of a die before, unfortunately rendering that chip kaput (fortunately, it was just an AMD E-350 mITX, so not the end of the world), but that experience is also why I question how much they'd be able to take in terms of overall flex.

This will happen eventually no matter how small the change in temperature (unless you somehow manage to prevent any temperature change), but larger fluctuations will cause it to fail sooner.
When you say "will cause it to fail sooner", are you implying an outright separation of the solder from the die/heatspreader? Or just failure in the sense of lots and lots of micro-fractures, numerous enough to impact thermal performance?


Also, a point I forgot to add to my previous post: I can't help but feel that the gradual change in temperature the solder sees has to decrease the severity of the micro-fractures forming? It's not like we're applying 100W of heat output through it in an instant; it's a gradual climb to the idle temp, and then from there a bit quicker climb to a loaded temp. In most cases that load isn't the absolute max, Stress-Test-Benchmark-Scenario type of load, either.

But are you saying that the gradual change doesn't matter? That it isn't a case of how slow or fast the solder (indium) flexes, but the simple fact that there is flex at all, which induces micro-fracturing?
 
D

Deleted member 204526

Guest
So weird to see Intel on their heels like this. Great, but weird.
 

Formula.350

[H]ard|Gawd
Joined
Sep 30, 2011
Messages
1,102
So you DID totally miss the point about size of the die.

Let me recap it then

Big die = very few issues with soldering
Small die = big issues with soldering
lol Yes, I ramble, I know.

But you're right, I had forgotten about that data point. Not ashamed to admit that. Though it still seems like only half the picture.

Still, that doesn't explain why Ryzen, with a smaller die than Skylake, is soldered then.
 

Nobu

[H]F Junkie
Joined
Jun 7, 2007
Messages
8,678
I suppose what I meant by the silicon die being damaged was in reference to his image. While I know his painted depiction of bending is so extreme as to illustrate the point, I'm also curious exactly how much flexing it can take, period. I've indeed crushed a corner of a die before, unfortunately rendering that chip kaput (fortunately, it was just an AMD E-350 mITX, so not the end of the world), but that experience is also why I question how much they'd be able to take in terms of overall flex.
As far as flexing, I'd imagine not much. It's very hard, but brittle. Still, the connection between it and the substrate would probably be damaged first, unless excess force was applied near a corner of the die.
When you say "will cause it to fail sooner", are you implying an outright separation of the solder from the die/heatspreader? Or just failure in the sense of lots and lots of micro-fractures, numerous enough to impact thermal performance?
I'm not really sure, honestly. Probably the former, eventually, but maybe not a complete separation, per se. In this case I just meant failure as in microfractures.

Also, a point I forgot to add to my previous post: I can't help but feel that the gradual change in temperature the solder sees has to decrease the severity of the micro-fractures forming? It's not like we're applying 100W of heat output through it in an instant; it's a gradual climb to the idle temp, and then from there a bit quicker climb to a loaded temp. In most cases that load isn't the absolute max, Stress-Test-Benchmark-Scenario type of load, either.

But are you saying that the gradual change doesn't matter? That it isn't a case of how slow or fast the solder (indium) flexes, but the simple fact that there is flex at all, which induces micro-fracturing?
The solder will experience much more extreme changes in temperature than the sensor data will lead you to believe. They (the OS drivers) just don't sample the data frequently enough for you to see it, and they don't tell you the temperature of the solder but that of the thermal junction. The actual temperature will depend on the thermal junction temperature and the thermal resistance of the die, solder, heatspreader, TIM, heatsink, and air, and probably some other things (I haven't studied thermodynamics, just picked up some knowledge while trying to figure out AMD's weird reporting of temperature). Otoh, I guess it's possible that the solder (being attached to something hotter than itself) might remain at a constant temperature, but I'd think that would depend on the heatspreader being able to dissipate the heat faster than (or at least as fast as) the solder can accumulate it from the die, which seems unlikely to me.
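The junction-vs-solder distinction above can be sketched as a simple series thermal-resistance model: heat flowing from the die to ambient crosses each layer in turn, and each layer adds a temperature rise of P × R. Every resistance value below is hypothetical, for illustration only:

```python
# Simple series thermal-resistance model from CPU die to ambient air.
# All resistance values (K/W) are hypothetical, for illustration only.
power_w = 150.0     # heat flowing through the stack
ambient_c = 25.0

# Layers from the junction outwards: (name, thermal resistance in K/W)
stack = [
    ("die",           0.02),
    ("solder (TIM1)", 0.05),
    ("heatspreader",  0.03),
    ("paste (TIM2)",  0.10),
    ("heatsink",      0.15),
]

# Summing P * R over the whole stack gives the junction's rise above ambient.
junction_c = ambient_c + power_w * sum(r for _, r in stack)
print(f"junction: {junction_c:.1f} C")   # 25 + 150 * 0.35 = 77.5 C

# Temperature at the hot side of each layer, walking in from ambient:
temp_c = ambient_c
for name, r in reversed(stack):
    temp_c += power_w * r
    print(f"hot side of {name}: {temp_c:.1f} C")
```

The point the model makes: the solder sits well below the reported junction temperature, and its actual temperature swings with load are set by the whole stack, not by what the OS sensor shows.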
 
Last edited:

Chris_B

Supreme [H]ardness
Joined
May 29, 2001
Messages
5,289
So you DID totally miss the point about size of the die.

Let me recap it then

Big die = very few issues with soldering
Small die = big issues with soldering


Well, it's up to them to find a suitable alternative that doesn't cause upwards of a 15°C temperature differential. The thermal paste Intel are using simply doesn't seem to be of good enough quality to get the job done.
 

Formula.350

[H]ard|Gawd
Joined
Sep 30, 2011
Messages
1,102
Well, it's up to them to find a suitable alternative that doesn't cause upwards of a 15°C temperature differential. The thermal paste Intel are using simply doesn't seem to be of good enough quality to get the job done.
In fairness to Intel (as much as I've joked about it being 'Radio Shack TIM bought in bulk'), if what der8auer said is true and it is made by Dow Corning, then the quality of it is definitely there... But the thermal performance is clearly lacking, and, personally, I can only suspect that it's a TIM that perhaps wasn't intended for the level of heat that Intel is subjecting it to. There's a reason we utilize silver, and now even diamond, in compounds, as they conduct heat better. Once upon a time, that sort of exotic ingredient wasn't needed and Radio Shack-grade TIM was more than sufficient to get the job done... We're now pumping vastly higher amounts of energy into a very small amount of space, thereby creating much more heat, so I just can't help but think that the 'recipe' D-C used isn't ideal for this sort of heat load.

(And yes, I do understand that you most likely meant 'quality' in terms of performance, but I felt what I wrote was still worth it for other readers' sake as well.)
 