40 Comments
PeachNCream - Friday, March 23, 2018 - link
It's good to see some work being put into reducing power and cooling requirements in Pascal GPUs, which may have raised the performance bar a little in exchange for higher TDPs across all performance categories, but it might be too little, too late. The lack of consumer transparency is a minor problem as well, because NVIDIA could be advertising the change and encouraging OEMs to offer the newer, cooler version in full-sized notebooks instead of just utility-limited ultrabooks. That way consumers could get a quieter, better laptop without giving up discrete graphics, as long as a sacrifice in performance is the understood cost. Overall it seems like NVIDIA is giving up potential sales, but I continue to hope they'll reach further up the mobile GPU stack with future versions, and maybe look at doing something about their desktop chips as well.

plopke - Friday, March 23, 2018 - link
I strongly disagree with "The lack of consumer transparency is a minor problem". It could easily have been fixed by calling it the MX150-E for energy efficiency; now you get NO CLUE what you're buying. And those numbers make it look like the performance difference might be big enough for people to care.

Major manufacturers/online stores already lack details or correct specifications. The cynical side of me says: just call everything an MX150, who cares if it runs 10-30% faster...
Morawka - Saturday, March 24, 2018 - link
Look at the TDP of your chassis; that should tell you which version you'll get. I'm wondering if you can simply flash the higher-TDP BIOSes onto these to unlock their full potential.

Vayra - Sunday, March 25, 2018 - link
'Should'... but who are we really trusting here, in the end? Nvidia? The OEM? It really is anyone's guess, and that is bad.

Alexvrb - Sunday, March 25, 2018 - link
Do you currently publish a list of "chassis TDPs" we can peruse? Gosh golly gee, that's a swell solution to the problem. Surely the average consumer will know to stop and search Morawka's Database of Chassis TDPs while standing at Best Buy, to know what they're getting.

OEMs and their sheisty marketing.
PeachNCream - Monday, March 26, 2018 - link
Sorry about the poor word choice causing confusion. When I said it's a minor problem, I meant that it would be very easy to address, by either NVIDIA changing the GPU model number or by the OEMs very clearly showing the lower-power MX150's specifications on boxes and in marketing specs. I wasn't trying to downplay the significance of the performance differences between the two identically named graphics processors. That concern got lost amid my excitement about a heat-reduced GPU finally making its way into laptops.

SleepyFE - Friday, March 30, 2018 - link
WHAT THE HELL ARE YOU ON ABOUT!! HEAT REDUCED (read with outraged intonation)!! It's straight-up performance reduced! And yes, the heat does go down as well when you decrease performance, but "heat reduced" would imply no performance penalty. I am glad you feel free enough to voice your opinion. Keep it up. But know that it was noted as a clumsy attempt at mitigating NVIDIA's shenanigans. Since it's the first comment, one might even wonder if it's by a PR firm: "It's not a degradation, your laptop will be cooler and quieter." (That last sentence is the gist of the first comment and not my opinion.)

Glock24 - Friday, March 23, 2018 - link
This is nothing new. In the past there have been mobile GPUs with very different performance characteristics but the same name. Off the top of my head, I remember the mobile GeForce 7600 had several variations: some with fewer pipelines, some with a 128-bit memory bus, others with a 64-bit memory bus, some GDDR2, some GDDR3, and so on. But all were called GeForce 7600. And there were other similar cases, I just don't remember the specific models.

willis936 - Friday, March 23, 2018 - link
I too remember multiple instances of this. I feel like this should break some law that incurs a healthy fine, if it doesn't already.
shabby - Friday, March 23, 2018 - link
I think it's the OEM's responsibility to inform customers that the GPU is slower; they know what they're getting from nVidia.

Samus - Saturday, March 24, 2018 - link
True. Typical Lenovo scam here. They should clearly market the product with the stated clock speeds.

Xaguoc - Saturday, March 24, 2018 - link
NVIDIA's 860M is a prime example. It came with variations of DDR3, GDDR3 and GDDR5 VRAM. Oh, and 2GB/4GB versions on top of the VRAM lottery.

Vayra - Sunday, March 25, 2018 - link
Yes, and LO AND BEHOLD, the Nvidia specs page actually has info:
https://www.geforce.com/hardware/notebook-gpus/gef...
Now look at the MX150's
https://www.geforce.com/hardware/notebook-gpus/gef...
vithrell - Monday, March 26, 2018 - link
Hiding under the 860M name was one of two chips from DIFFERENT ARCHITECTURES: Maxwell GM107 or Kepler GK104, with an almost two-fold difference in TDP and unified shader count.

https://en.wikipedia.org/wiki/List_of_Nvidia_graph...
Alexvrb - Sunday, March 25, 2018 - link
It's not new, but the fact that they've been doing this crap for years doesn't make it cool. I hated it then, I hate it now. Second, the RAM type was usually better documented in the specs, not that this helped your average consumer the way a higher or lower model designation would.

Anonymous Blowhard - Friday, March 23, 2018 - link
I imagine that a 30% haircut in performance is justification enough - in the EU at least, where consumer protection laws exist - for a product return.

The_Assimilator - Friday, March 23, 2018 - link
It seems like the ROP/TMU/shader counts are identical between the two, so my question would be: can the lower-clocked version be overclocked to the same levels as the "ordinary" version? Because if it can, this isn't really a different product, it's the same product with a slower/more energy-efficient BIOS.

OTOH, if the GPUs and GDDR5 aren't capable of reaching the same clocks, then yes, I would class these as distinct products, and they should be named accordingly.
watzupken - Saturday, March 24, 2018 - link
While the specs are the same, it may be possible to OC it to the original clockspeed. However, I say "may" because Nvidia always imposes a clockspeed lock, i.e. it's either completely locked, or it only allows you to OC up to a certain clockspeed.

Anyway, any changes in specs should be clearly conveyed to customers. I believe both Nvidia and AMD are guilty of such shady practices. If the name does not clearly highlight a difference, they are selling a product that is not just different, but also worse. If they changed the specs to make it faster, I believe people would welcome it. But it is never the case that a product of the same model gets a spec bump for free.
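On the overclocking question above, you can at least read back what clock cap the vBIOS actually imposes. A minimal sketch (assuming `nvidia-smi` is on the PATH, and assuming the widely reported boost clocks of roughly 1532 MHz for the standard MX150 and roughly 1038 MHz for the low-power variant; those figures come from third-party reviews, not from Nvidia):

```python
import subprocess

# Reported boost clocks of the two MX150 variants (MHz). These are
# assumptions taken from review coverage, not official Nvidia specs.
STANDARD_BOOST_MHZ = 1532
LOW_POWER_BOOST_MHZ = 1038

def classify_mx150(max_sm_clock_mhz: int) -> str:
    """Guess the variant from the maximum SM clock the driver reports."""
    # Pick whichever reference boost clock is closer to the reported cap.
    if abs(max_sm_clock_mhz - STANDARD_BOOST_MHZ) <= abs(max_sm_clock_mhz - LOW_POWER_BOOST_MHZ):
        return "standard (25W-class) MX150"
    return "low-power (10W-class) MX150"

def query_max_sm_clock() -> int:
    """Ask nvidia-smi for the maximum SM clock in MHz (needs Nvidia drivers)."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=clocks.max.sm",
         "--format=csv,noheader,nounits"],
        text=True)
    return int(out.strip().splitlines()[0])

if __name__ == "__main__":
    print(classify_mx150(query_max_sm_clock()))
```

A locked-down vBIOS should show up here as a low maximum clock regardless of what overclocking tools claim to set.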
Ej24 - Friday, March 23, 2018 - link
They already have an MX130... It probably has about the same performance and power draw. This will only confuse consumers as to what performance they can really expect.

SquarePeg - Friday, March 23, 2018 - link
Would it have killed them to show a bit of honesty and just label this GPU the MX140? Nvidia is trying really hard to be the dishonest Intel of 15 years ago.

mr_tawan - Saturday, March 24, 2018 - link
I'd hazard a guess that the MX130 would be outperformed and may draw more power. It is a 940MX in disguise, after all.

oRAirwolf - Friday, March 23, 2018 - link
I honestly don't understand the point of these GPUs. The performance difference between an MX150 and regular Intel GPUs is so minimal that you really aren't going to get any additional benefit out of them. 1.2 teraflops at the high end is really not enough to play any modern games. All it's doing is draining extra power and providing very little in return. It seems like more of a selling point to just have an Nvidia logo sticker on a laptop than any meaningful performance difference.

Retycint - Friday, March 23, 2018 - link
That's not true though. A normally clocked MX150, such as in the HP Spectre x360, typically offers anywhere from 100 to 500% of the performance of the Intel UHD 620, depending on the game. Even the underclocked variants, in the Zenbook 13 for example, handily outperform the UHD 620. So I don't see how an MX150 could possibly be useless.

kfishy - Friday, March 23, 2018 - link
I have a NUC with Skylake Iris graphics with eDRAM and also a laptop with an MX150; the MX150 handily beats even the Intel GPU with eDRAM.

kfishy - Friday, March 23, 2018 - link
Oh, and not to mention that the TDP of the integrated GPU counts towards the entire chip's TDP, so when you game it's throttle, throttle, throttle.

Alexvrb - Sunday, March 25, 2018 - link
If it's an Intel machine, as Retycint said, it beats the everliving SNOT out of the iGPU. Especially on the kind of mid-tier chips the MX150 tends to ship with. So for Intel-based systems I totally get it.

With that being said, a Ryzen 2700U with decent RAM is fairly competitive with even a full-clock MX150, at an incredibly low TDP (even if an OEM up-TDPs to 25W, it still undercuts total CPU/GPU power by a LOT). I hope we see more Ryzen APU designs trickling down to more price points.
mr_tawan - Monday, March 26, 2018 - link
While it might not be worth gaming on these GPUs, mobile GPUs can help with some 2D graphics apps (Clip Studio, Photoshop, Mischief, etc.). Intel's iGPUs tend to choke on these apps, especially on 2K screens and above. That's why some manufacturers put GPUs on business laptops.

mathew7 - Monday, March 26, 2018 - link
You're basing your data on a GPU-only workload. Add a CPU workload and Intel's IGP gets neutered because of chip power/TDP limitations. A friend of mine had a laptop with an additional Radeon GPU, which gave 8 fps in FurMark compared to the IGP's 9 fps. But when I also started 2 CPU threads (FurMark's built-in CPU stress), the IGP went to 3 fps while the Radeon stayed at 8 fps. That is the advantage of a discrete GPU.

f4tali - Friday, March 23, 2018 - link
I'm in the market for one of these or similar, but this whole category is downright depressing atm...

- Been seeing a few 940MX pop up with 8th gen Intel :(
- The only Ryzen option is the Acer in my country and it costs almost $990! (800 Euro). And that's for the 2500U.
- MX150 options here are so extortionately priced that I may as well go for something with a GTX 1050.
Kinda overkill for me though, for just a bit of SETI crunching and maybe the odd ol' game. TBH I really wanted to try the Ryzen, but with only one option at such a ridiculous price....
eva02langley - Monday, March 26, 2018 - link
If I were you, I would wait until something more interesting is released on Ryzen mobile.

kmmatney - Tuesday, March 27, 2018 - link
I'm also in the market for something similar. I don't think the 940MX is worth getting. The MX150 is about twice as fast as the 940MX, and is the slowest discrete GPU worth getting. There is also a low-end Radeon 530 dGPU, about as fast as the 940MX, and I don't see the point in that one at all. You're right that the GTX 1050 isn't much more expensive, and is way faster, although you'll need more cooling.

I'm very impressed with the 8250U CPU: for heavy workloads it's the same speed as the 8550U, and for light workloads it's only a hair slower. An 8250U with a GTX 1050 would be a sweet spot for price/performance.
Valantar - Saturday, March 24, 2018 - link
Why Nvidia didn't simply name this the MX140 is beyond me. 25% performance variation within identically named products with no indication whatsoever that this is the case? Yeah, that's not okay.

BigCapitalist - Saturday, March 24, 2018 - link
Only someone out of their mind would buy this GPU, much less the last one.

AMD Vega is beating this thing with 2x the RAM and 3x the performance for the same price.

Sadly, the majority of consumers, probably 80%, will buy the POS MX150 not knowing anything about specs, because people are in general stupid.
deepblue08 - Saturday, March 24, 2018 - link
They should call it the MX150 SE, like they did in the GeForce 4 days.

SquarePeg - Saturday, March 24, 2018 - link
I think Nvidia got tired of people calling them the "Shitty Edition" GPUs.

HStewart - Saturday, March 24, 2018 - link
What I would like is an NVidia version of the 8709G's EMIB on a notebook. My biggest concern with the 8709G as a future purchase, in say a Dell XPS 15 2-in-1, is support for professional 3D programs like LightWave 3D 2018. From what I can see so far, Vega RX support is not great in the professional graphics area.

Khenglish - Sunday, March 25, 2018 - link
This whole thing of the clock reduction being Nvidia's fault is pure BS; it's Lenovo's.

For mobile GPUs, the way things have always worked is that Nvidia (and AMD) recommends clocks to the OEM, and the OEM decides what clocks they will actually ship. OEMs can even ship at higher than recommended clocks (e.g. MSI's 680M was 771 MHz while all others were 720 MHz), just like the factory overclocked cards you see in the desktop space. Unlike in the desktop space, though, OEMs can ship lower than recommended, as you see here.
For around the past decade OEMs have been sticking very closely to the Nvidia recommended clocks. This is the first GPU in a while where shipped clocks are significantly below Nvidia recommended. In the early 2000s though it was very difficult to find any mobile GPU that actually shipped at the recommended clocks.
Memory implementation is also purely up to the OEM. An OEM can put DDR3 on a card instead of GDDR5 and call it the same card, despite it having half the memory throughput. This was done particularly often on Fermi and Kepler cards.
So in short, stop blaming Nvidia for something Lenovo did. If you want to complain about the GeForce Partner Program, though... that does seem to be a very real thing to be upset with Nvidia over.

Spunjji - Monday, March 26, 2018 - link
Spunjji - Monday, March 26, 2018 - link
You need to re-read the article. We are dealing with two distinct device IDs, so no, it is NOT a vendor-specific issue to do with the card being under-clocked in the system's BIOS.

Differing amounts and qualities of RAM were also, once again, an nVidia thing and not a vendor thing. They specifically allow for it in their specs, so yes, it is the vendor's choice which RAM they use, but it is entirely nVidia's fault that the choice is invisible to the end user.
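Since the two variants carry distinct device IDs, the variant can be identified in software without trusting the OEM's spec sheet. A rough sketch for Linux, reading the PCI IDs out of sysfs (the IDs 0x1D10 for the standard MX150 and 0x1D12 for the low-power one are taken from third-party coverage of this story and should be treated as assumptions, not official Nvidia documentation):

```python
from pathlib import Path
from typing import Optional

NVIDIA_VENDOR_ID = 0x10DE
# Device IDs reported for the two MX150 variants; assumptions from
# third-party coverage, not from Nvidia's own documentation.
MX150_VARIANTS = {
    0x1D10: "standard MX150 (device ID 1D10)",
    0x1D12: "low-power MX150 (device ID 1D12)",
}

def find_mx150_variant(sysfs_root: str = "/sys/bus/pci/devices") -> Optional[str]:
    """Scan PCI devices via Linux sysfs and report any known MX150 variant."""
    for dev in Path(sysfs_root).glob("*"):
        try:
            vendor = int((dev / "vendor").read_text(), 16)
            device = int((dev / "device").read_text(), 16)
        except (OSError, ValueError):
            continue  # not a readable PCI device entry
        if vendor == NVIDIA_VENDOR_ID and device in MX150_VARIANTS:
            return MX150_VARIANTS[device]
    return None

if __name__ == "__main__":
    print(find_mx150_variant() or "no known MX150 variant found")
```

On Windows, the equivalent check is looking for `VEN_10DE&DEV_1D10` vs `DEV_1D12` in the GPU's hardware IDs in Device Manager.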
SaturnusDK - Monday, March 26, 2018 - link
Essentially you can just assume any product with an MX150 is this "MX130 Ti" bottom-of-the-barrel version, unless it's explicitly stated that the high-TDP, real MX150 is used.

MigRath - Tuesday, March 27, 2018 - link
For anyone who purchased a laptop with the underperforming MX150 GPU: The law firm of Migliaccio and Rathod LLP has recently opened an investigation into Nvidia's deceptive marketing of the GPU. As other comments have already noted, it is difficult to know which MX150 chip a notebook or ultrabook uses. Considering the fact that the slower MX150 variant takes a 20-25% performance hit over its identically-named counterpart, this is no small issue.

More information is found here: http://www.classlawdc.com/2018/03/27/nvidia-mx-150...