25 Comments
Jeanlou - Thursday, December 1, 2005 - link
Hello,
I just bumped into the AnandTech video card tests, and I'm really impressed!
As a Belgian Vision Systems Integration Consultant (since 1979), I'm very interested in the ability to compare these 3 cards (Realizm 200 vs FireGL X3 256 vs NVIDIA Quadro FX 4000).
I just had a bad experience with the Realizm 200 (!)
On an ASUS NCCH-DL motherboard: dual Xeon 2.8GHz, 2GB DDR400, Seagate SCSI Ultra 320 HDD, 2 EIZO monitors (Monitor N°1 = L985EX at 1600x1200 px, Monitor N°2 = L565 at 1280x1024 px), Windows XP Pro SP2 32-bit on partition C:\ (16GB), Windows XP Pro x64 Edition on partition D:\ (16GB), plus extended partitions (two logical, E:\ and F:\). All NTFS.
Using the main monitor for image analysis (quality control) and the slave monitor for tools, I was unable to get a stable image at 1600 by 1200 pixels, while the Wildcat4 7110, or even the VP990 Pro, give a very stable screen at maximum resolution. But the 7110 and the VP990 Pro don't have drivers for Windows XP x64.
I tried everything: latest BIOS, latest chipset drivers...
Even 3Dlabs was unable to give me the necessary support, and they no longer answer!
As soon as I reduced the resolution of the main monitor to 1280 by 1024, everything was stable, but that's not what I want; I need the maximum resolution on the main monitor.
The 3Dlabs resolution table in the specs gives 3840 by 2400 pixels maximum!
I sent it back, and I'm looking for another card.
I wonder if the FireGL X3 256 will do the job?
We also use another monitor from EIZO (S2410W) with 1920 by 1200 pixels!
What exactly are the resolutions possible with the FireGL X3 256 using 2 monitors? I can't find them in the specs.
Any comment will be appreciated,
Best regards,
Jean
kaissa - Sunday, February 20, 2005 - link
Excellent article. I hope that you make workstation graphics card comparisons a regular article. How about an article on workstation notebooks? Thanks a lot.
laverdir - Thursday, December 30, 2004 - link
dear derek wilson,
could you tell us how big the performance difference between NUMA and UMA is in general in these tests?
and it would be great if you could post Maya-related results for the Quadro FX 4000 with NUMA enabled.
seasonal greetings
RedNight - Tuesday, December 28, 2004 - link
This is the best workstation graphics card review I have read in ages. Not only does it present the positives and negatives of each of the principal cards in question, it presents them in relation to high-end mainstream cards and thereby helps many, including myself, understand the real differences in performance. Also, by innovatively including AutoCAD and gaming tests, one gets a clear indication of when the workstation cards are necessary and when they would be a waste of money. Thanks
DerekWilson - Monday, December 27, 2004 - link
Dubb,
Thanks for letting us know about that one :-) We'll have to have a nice long talk with NV's workstation team about what exactly is going on there. They very strongly gave us the idea that the feature set wasn't present on GeForce cards.
#19, NUMA was disabled because most people running a workstation with 4 or fewer GB of RAM on a 32-bit machine will not be running with the PAE kernel installed. We wanted to test with a setup most people would be running under the circumstances. We will test NUMA capabilities in the future.
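For reference, a minimal sketch of how one might check whether the running kernel exposes NUMA at all, assuming libnuma is installed (on a 32-bit non-PAE kernel, the call simply fails):

/* Sketch: verify NUMA availability on a Linux test box.
 * Assumes libnuma is installed; build with: gcc check_numa.c -lnuma */
#include <stdio.h>
#include <numa.h>

int main(void)
{
    if (numa_available() < 0) {
        /* Typical result on a 32-bit, non-PAE kernel like the setup
         * described above: the NUMA API is simply unavailable. */
        printf("NUMA not available on this kernel\n");
        return 1;
    }
    printf("NUMA available, highest node: %d\n", numa_max_node());
    return 0;
}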
#20,
When we test workstation CPU performance or system performance, POVRay will be a possible inclusion. Thanks for the suggestion.
Derek Wilson
mbhame - Sunday, December 26, 2004 - link
Please include POVRay benchies in workstation tests.
Myrandex - Saturday, December 25, 2004 - link
I wonder why NUMA was fully supported yet disabled. Maybe instabilities or something.
Dubb - Friday, December 24, 2004 - link
http://newbietech.net/eng/qtoq/index.php
http://forums.guru3d.com/showthread.php?s=2347485b...
Dubb - Friday, December 24, 2004 - link
uhhh.. my softquadro'd 5900 Ultra begs to differ. as would all the 6800 > QFX4000 mods being done by people on guru3d's RivaTuner forum.
I thought you guys knew that just because nvidia says something doesn't mean it's true?
they must consider "physically different silicon" to be "we moved a resistor or two"...
DerekWilson - Friday, December 24, 2004 - link
By high-end features, I wasn't talking about texturing or programmatic vertex or fragment shading (which is high-end in the consumer space).
I was rather talking about hardware support for: AA lines and points, overlay plane support, two-sided lighting (fixed-function path), logic operations, fast pixel read-back speeds, and dual 10-bit 400MHz RAMDACs with 2 dual-link DVI-I connectors supporting 3840x2400 on a single display (the IBM T221 comes to mind).
There are other features, but these are key. In products like Maya and 3D Studio, not having overlay plane support creates an absolutely noticeable performance hit. It really does depend on how you push the cards. We do prefer the in-application benchmarks to SPECviewperf. Even the SPECapc tests can give a better feel for where things will fall -- because the entire system is a factor rather than just the gfx card and CPU.
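To make that concrete, here's a minimal fixed-function OpenGL sketch of the features in question. The API is identical on consumer cards; the difference is whether the driver and hardware accelerate these paths or fall back to slow ones:

/* Sketch: the fixed-function workstation features discussed above. */
#include <GL/gl.h>

void enable_workstation_paths(void)
{
    /* Antialiased lines and points */
    glEnable(GL_LINE_SMOOTH);
    glEnable(GL_POINT_SMOOTH);
    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);

    /* Two-sided lighting (fixed-function path) */
    glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);

    /* Logic operations on color writes (e.g. XOR for rubber-banding) */
    glEnable(GL_COLOR_LOGIC_OP);
    glLogicOp(GL_XOR);
}

/* Pixel read-back: often the real bottleneck on consumer parts */
void read_back(int w, int h, unsigned char *buf)
{
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buf);
}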
#14, Dubb -- I hate to be the one to tell you this -- GeForce and Quadro are physically different silicon now (NV40 and NV40GL). AFAIK, ever since GF4/Quadro4, it has been impossible to softquadro an nvidia card. The Quadro team uses the GeForce as its base core, but then adds on workstation-class features.
Sword - Friday, December 24, 2004 - link
Hi again,
I want to add to my first post that there were 2 parts and a complex assembly (>110 very complex parts without simplified rep).
The amount of data to process was pretty high (XP shows >400 MB, and it can go up to 600 MB).
About the specific features, I believe that most CAD users do not use them. People like me, mechanical engineers and other engineers, are using software like Pro/E, UGS, Solidworks, Inventor and Catia for solid modeling without any textures or special effects.
My comment was really to point out that the high-end features seem useless in real-world engineering applications.
I still believe that for 3D multimedia content there is a place for high-end workstations, and the SPECviewperf benchmark is a good tool for that.
Dubb - Friday, December 24, 2004 - link
how about throwing in soft-quadro'd cards? when people realize with a little effort they can take a $350 6800GT to near-q4000 performance, that changes the pricing issue a bit.
Slaimus - Friday, December 24, 2004 - link
If the Realizm 200 performs this well, it will be scary to see the 800 in action.
DerekWilson - Friday, December 24, 2004 - link
dvinnen, workstation cards are higher margin -- selling consumer parts may be higher volume, but the competition is harder as well. Creative would have to really change their business model if they wanted to sell consumer parts.
Sword, like we mentioned, the size of the data set tested has a large impact on performance in our tests. Also, Draven31 is correct -- a lot depends on the specific features that you end up using during your normal work day.
Draven31, 3dlabs drivers have improved greatly with the Realizm compared to what we've seen in the past. In fact, the Realizm does a much better job of video overlay playback as well.
Since one feature of the Quadro and Realizm cards is their ability to run genlock/framelock video walls, perhaps a video playback/editing test would make a good addition to our benchmark suite.
Draven31 - Friday, December 24, 2004 - link
Coming up with the difference between the SPECviewperf tests and real-world 3D work means finding out which "high-end card" features the test is using and then turning them off in the tests. With NVidia cards, this usually starts with antialiased lines. It also depends on whether the application you are running even uses these features... in Lightwave3D, the 'pro' cards and the consumer cards are very comparable performance-wise because it doesn't use these so-called 'high-end' features very extensively.
And while they may be faster in some Viewperf tests, 3dLabs drivers generally suck. Having owned and/or used several, I can tell you any app that uses DirectX overlays as part of its display routines is going to either be slow or not work at all. For actual application use, 3dLabs cards are useless. I've seen 3dLabs cards choke on DirectX apps, and that includes both games and applications that do windowed video playback on the desktop (for instance, video editing and compositing apps).
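Something like this rough sketch is all it takes to see the gap -- time the same scene with one "workstation" feature toggled (draw_wireframe_scene here is a stand-in for whatever your app actually renders):

/* Sketch of the methodology above: time identical draws with a
 * high-end feature (antialiased lines) on vs. off. */
#include <GL/gl.h>
#include <time.h>

extern void draw_wireframe_scene(void);   /* hypothetical scene callback */

double time_scene(int aa_lines)
{
    struct timespec t0, t1;
    if (aa_lines) glEnable(GL_LINE_SMOOTH);
    else          glDisable(GL_LINE_SMOOTH);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < 100; i++)
        draw_wireframe_scene();
    glFinish();                            /* wait for the GPU to drain */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}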
Sword - Thursday, December 23, 2004 - link
Hi everyone,
I am a mechanical engineer in Canada and I am a fan of AnandTech.
Last year I made a very big comparison of mainstream vs. workstation video cards for internal use at the company I work for.
The goal was to compare the different systems (and mainly video cards) to see if, in Pro/ENGINEER and the kind of work we do, we could take real advantage of a high-end workstation video card.
My conclusion is very clear: in SPECviewperf there is a huge difference between mainstream and workstation video cards. BUT, in day-to-day work, there is no real difference in our results.
To summarize, I made a benchmark in Pro/E using trail files with 3 of our most complex parts. I compared shading, wireframe and hidden line, and I also verified the regeneration time for each part. The benchmark was almost 1 hour long. I compared a 3Dlabs product, ATI professional, Nvidia professional and Nvidia mainstream.
My point is: do not believe SPECviewperf!! Make your own comparison with your actual day-to-day work to see if you really have to spend $1000 per video card. Also, take the time to choose the right components so you minimize calculation time.
If anyone at Anandtech is willing to take a look at my study, I am willing to share the results.
Thank you
dvinnen - Thursday, December 23, 2004 - link
I always wondered why Creative (they own 3dLabs) never made a consumer edition of the Wildcat. Seems like a smallish market when it wouldn't be all that hard to expand into consumer cards.
Cygni - Thursday, December 23, 2004 - link
I'm surprised by the power of the Wildcat, really... great for the dollar.
DerekWilson - Thursday, December 23, 2004 - link
mattsaccount,
glad we could help out with that :-)
there have been some reports of people getting consumer-level drivers to install on workstation-class parts, which should give better performance numbers for the ATI and NVIDIA parts under games if possible. But keep in mind that the trend in workstation parts is to clock them at lower speeds than the current highest-end consumer-level products for heat and stability reasons.
if you're a gamer who's insane about performance, you'd be much better off paying $800 on ebay for the ultra rare uberclocked parts from ATI and NVIDIA than going out and getting a workstation class card.
Now, if you're a programmer, having access to the workstation level features is fun and interesting. But probably not worth the money in most cases.
Only people who want workstation class features should buy workstation class cards.
Derek Wilson
mattsaccount - Thursday, December 23, 2004 - link
Yes, very interesting. This gives me and lots of others something to point to when someone asks why they shouldn't get the multi-thousand-dollar video card if they want top gaming performance :)
DerekWilson - Thursday, December 23, 2004 - link
johnsonx,
thanks for the suggestion. we're definitely exploring options for other workstation articles.
since this is the first of the graphics workstation articles we've tackled in quite a while, we wanted to start with current technology (R4xx, NV4x, and WC Realizm based parts). There aren't currently lower-end parts (with the exception of the Wildcat Realizm 100) based on the technology we tested for this article.
thanks again. let us know if there's anything else we can look into doing for future reviews.
Derek Wilson
johnsonx - Thursday, December 23, 2004 - link
How about benchmarking some of the lower Quadro and FireGL cards? ATI has the FireGL 9600 (aka FireGL T2-128), FireGL 9700 (aka FireGL X1), and FireGL 9800 (aka FireGL X2-256t) at $250, $500 and $600 price points respectively. Comparable Quadros are available as well.
For many professional uses, a workstation-class card (with attendant workstation-class, certified drivers) is desired, but ultra-high performance isn't important. It'd be nice to see the comparative performance of the lower-end cards.
DerekWilson - Thursday, December 23, 2004 - link
ksherman,
You may have some luck with the 6600GT under AutoCAD, especially if you don't intend to push the graphics subsystem as much as we did (no AA lines, less tess, etc...), but depending on the Pro/E workload, you may have trouble.
The SPECviewperf viewsets test a much larger workload than the OCUS benchmark. If you're working with smaller data, you should be fine, but if we're talking millions of verts, you're going to have increasing amounts of trouble with a 128MB card.
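Rough numbers, assuming a simple interleaved position+normal vertex format (our back-of-the-envelope, not measured data):

/* Sketch: why millions of vertices strain a 128MB card.
 * A plain interleaved vertex is position + normal as floats = 24 bytes. */
#include <stdio.h>

int main(void)
{
    long verts        = 4 * 1000 * 1000;        /* hypothetical model     */
    long vertex_bytes = verts * (3 + 3) * 4;    /* xyz + normal, 4B float */
    /* Double-buffered 1600x1200 framebuffer: 2x color (32bpp) + depth */
    long fb_bytes     = 1600L * 1200 * 4 * 3;
    printf("geometry: %ld MB, framebuffer: %ld MB of 128 MB\n",
           vertex_bytes >> 20, fb_bytes >> 20); /* ~91 MB + ~21 MB */
    return 0;
}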
Derek Wilson
ksherman - Thursday, December 23, 2004 - link
You guys should throw in a few mainstream graphics cards for comparison. I am trying to build a system whose primary use will be with Pro/Engineer and AutoCAD, and I certainly do not have the money for a $1000+ video card. I'm just wondering how the other cards match up (like the 6600GT AGP).
Speedo - Thursday, December 23, 2004 - link
Nice review!