
57 Comments


  • Spivonious - Monday, August 25, 2008 - link

    I don't think nVidia or AMD will try to force Lucid out of the market. If I can actually get a 100% increase in performance from purchasing a second video card, I will.

    This chip only means more sales for nVidia and AMD.
  • 7Enigma - Tuesday, August 26, 2008 - link

    But that doesn't help their bottom line in the end. Right now CF and SLI are not very popular due to their scaling and custom profile issues. Because of that, many people spring for the highest priced single card they can afford. This keeps the market segment basically tiered the way any business would like. You have low end parts, mid-grade, and uber parts.

    Now throw in the possibility that this Hydra chip works as specified. That three-tier system just falls apart. When you look at most of the non-mainstream parts from both sides (for example Nvidia's GTX 280, GTX 260, and say the 9800/8800 GTS), you'll notice that while the prices of those chips are drastically different, the performance is not nearly as different. This makes sense from an R&D standpoint to recoup costs, but from a logical standpoint shelling out $650 for the 280 when it debuted WOULD NOT make sense if two 260s or two 9800s were significantly faster for the same or less total $$$.

    That's why both ATI/AMD and Nvidia don't want them in the market. It destroys the pricing structure, and would place much more influence on the bang-for-the-buck parts (currently this would hurt Nvidia with their 280 and slightly favor ATI/AMD with their cheaper 4870 and 4850).

    Why would I spend twice as much for a 30% increase in performance with a top-of-the-line single-card solution, when I could just get two of the cheaper version for a near 60% increase over the single top card (using general performance of the latest cards)? Sure I'd need a board to support it, but it would make SLI/CF mobos MUCH MUCH more attractive than they currently are (I have no plans to purchase a dual-slot mobo with my upcoming build....unless we can get some actual data before Jan09...not likely).
  • jnanster - Tuesday, August 26, 2008 - link

    This is terrible!
    I was all set to buy a new system in a few months.
    Now I have to wait again, again.
  • shin0bi272 - Tuesday, August 26, 2008 - link

    lol sorry dude... but hey this way you can wait for 8 core nehalem cpus too.
  • TheDoc9 - Monday, August 25, 2008 - link

    This article reads like the same sort of hype-machine drivel that many of the dot-com wonder companies used before the 2001 collapse so they could get investors interested.

    The writer of this piece is fortunately skeptical, and he should be, even more so. I hope I'm wrong and we see this technology in a year or so, but it reminds me of Constellation 3D.
  • shin0bi272 - Sunday, August 24, 2008 - link

    The way they outlined it in one of their diagrams: an instruction usually goes from the CPU to the northbridge to the GPUs, and then the GPUs sort out which card should render the command. The Hydra changes that to CPU to northbridge to Hydra to whichever GPU is ready for a new instruction. Which means it's essentially taking the place of the little bridge between the GPUs and of the chip that makes the decision about which card renders the scene.
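
    In rough pseudocode, that routing idea might look something like the sketch below (purely illustrative Python; the GPU class, the queued counter, and submit() are invented stand-ins, not anything Lucid has documented):

        # Toy sketch: route each intercepted draw call to whichever card is least busy.
        class GPU:
            def __init__(self, name):
                self.name = name
                self.queued = 0            # pretend count of outstanding work items

            def submit(self, call):
                self.queued += 1
                print(f"{self.name} <- {call}")

        def dispatch(call, gpus):
            # pick the card with the least outstanding work
            target = min(gpus, key=lambda g: g.queued)
            target.submit(call)

        cards = [GPU("card0"), GPU("card1")]
        for call in ["draw terrain", "draw water", "draw models", "draw HUD"]:
            dispatch(call, cards)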

    Nvidia and AMD could have put a chip like this on their motherboards, yeah, but then you wouldn't need to buy two of the same card (and it would possibly work with the competitor's cards too, like the Hydra does). Nvidia never tried a motherboard chip to my knowledge, and ATI did at one time do a Y-cable and software-controlled card selection, but I don't believe they had a chip on the motherboard either. That reminds me of the difference between a software RAID 5 card and a hardware RAID 5 card. The hardware RAID card has much better performance, but it costs 3x as much. Cost could still be a factor with this chip too. I mean, if it adds an extra 20 or 50 dollars to a motherboard, gamers will have no problem with that. But if it's an extra 200 dollars, would they? Gotta make back all that R&D money somehow, even if Intel backed them.

    Another question is: will this solution require a multi-Y-cable type device like ATI used to use? If the different cards are rendering the scene at different times, it would stand to reason. Or will one card be designated as the output card and all finished scenes be sent to that card? That would probably be a bad idea latency-wise, but who wants to buy a 4-way Y-split cable? Then again, if I'm going to get linear performance out of SLI I can spring for a cable. You could even make a 4-way hub sort of device so that all of the cards feed into it and then one cable goes to the monitor. Could also do a multi-in and multi-out hub for multiple monitors (though you might not need to; it could just be easier to add and subtract monitors with one).
  • computerfarmer - Sunday, August 24, 2008 - link

    It is nice to hear about new products. I hope to see this work.

    I am still waiting for the AMD 790GX/SB750 review.
  • MamiyaOtaru - Sunday, August 24, 2008 - link

    What are the odds this will be cross platform? If it relies on drivers for doing a lot of stuff odds are it will not be, which would make it a nonstarter for me. And yes I know close to no one cares ;) I do though and I'd be interested to know.
  • metro15 - Sunday, August 24, 2008 - link

    hey. they do not need any motherboard manufacturer. Imagine an Intel Larrabee graphics card with many cores synchronized with a Lucid chip. The performance would be unbeatable.
  • pool1892 - Sunday, August 24, 2008 - link

    larrabee does not need hydra. it will reconfigure itself to suit the load. and with something like larrabee gen2 it will have qpi, which results in much lower latencies and much higher bandwidth.
    larrabee could even achieve more than linear scaling. (theoretically more cores could result in fewer context changes, which means more cache hits and fewer waiting cycles - this will of course not happen in reality)
  • haplo602 - Sunday, August 24, 2008 - link

    The more I am reading about this Hydra thing, the more I believe it will turn out to be a hoax. Look at the thing in a logical way.

    1. we want multi-GPU scaling to be as close to linear as possible
    2. we cannot manipulate the scene data, since we don't know what the rendered scene actually is (we can't identify objects in a reasonable way)
    3. the existing cards are already fast enough at actually rendering the scene

    This boils down to an engine that offloads the actual scene set-up. If you look at the current SLI/CF mechanics, they either work in AFR mode or in split render mode. ATI/NVIDIA know enough about graphics to get to the same ideas Lucid did. However they abandoned the approach for some reason. That reason is consistency.

    You cannot pick objects from a scene in any reliable way. Of course there are ways to separate objects. After all, the programmer will usually send one stream of rendering commands per object, etc. But that is not the rule.

    You cannot do scene set-up on separate objects (things like removing objects, or parts of them, that are not visible) unless you are using some kind of z-buffer manipulation at the end.

    I know too little about shader programs to say how they work, but they also seem like a major issue in splitting a scene.

    The ATI/NVIDIA approach is the only reasonable one, and the only reason they don't scale linearly is the scene set-up step. Each card has to do the same scene set-up every frame, so this is the one thing that cannot be parallelised in a reasonable way, and it lowers the gain in performance.
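
    To put made-up numbers on that: if scene set-up were, say, 20% of a frame and only the remaining 80% could be split across cards, the best case would be roughly:

        # Amdahl-style arithmetic with invented numbers: duplicated set-up caps scaling.
        setup, render = 0.2, 0.8            # assumed fractions of single-card frame time
        for cards in (1, 2, 4):
            speedup = 1.0 / (setup + render / cards)   # set-up repeats on every card
            print(f"{cards} card(s): {speedup:.2f}x")
        # -> 1.00x, 1.67x, 2.50x rather than 1x, 2x, 4x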

    If Lucid has found a way to do scene set-up only once and split it into the relevant parts for each card, they will have grave issues with the optimised rendering paths for different DX/OGL/card versions. At some point, they will exhibit the same issues current CF/SLI does.

    ATI/NVIDIA can simply implement this in software by making a GPU hypervisor engine.
  • Clauzii - Sunday, August 24, 2008 - link

    Good post! Thumbs up :)
  • pool1892 - Sunday, August 24, 2008 - link

    ya, to me it is sort of the other way round - and still i agree. i am not sure what to expect, this is a technique i could imagine working.
    but it seems to be a job for much stronger hardware - there is pattern recognition, on-the-fly optimization and balancing (different games will clearly be limited by different stages of the hardware rendering pipeline), qos (no latencies and sync) and many other things.
    i have a hard time believing that this little programmable chip can do that amount of work without utilizing the cpu and without local memory besides 16+16k of L1, while it has to handle massive throughput.
    so either they have found a REALLY clever trick or amd and nvidia could do the same, from a much better position, being in control of the complete environment. and well: why haven't they?
  • LOPOPO - Sunday, August 24, 2008 - link

    If this thing works as it claims...... I would not be surprised. We know the problem with SLI/CF: management, pure and simple. The fact that we are all so astounded by this box speaks volumes about how much we are used to being screwed by Nvidia and ATI/AMD. It is obvious that Hydra allocates system resources far better than current solutions. The fact that it can do this and draw 5W (supposedly) just goes to show you how flawed SLI/CF really are.
    This seemingly impending paradigm shift is occurring because card makers have a one-track mind (bigger is better). Add more memory... add more speed... more stream processors, throw in ridiculous names, and that equals success. But not really. For them (AMD/Nvidia), yes; for you... somewhat, depending on how you shop. Nowadays performance demands are higher than ever, and AMD/Nvidia solutions always mean more power draw, which creates more heat, which must be dissipated, which of course necessitates a larger-profile card and cooler. Extremely inefficient.
    It appears as if these newcomers are not trying to fit a square peg into a round hole. Can or could established card makers do this or something like it? Of course. But why, when the consumer is perfectly happy spending ridiculous amounts of money for an extra 10 fps? AMD/Nvidia keep costs down and maximize profit; it's all good for them. Consumers, on the other hand, rarely see the big picture. Such is the way this sector of the economy works: faster, more memory, die shrinks... never smarter, leaner, more efficient, and never the ever-elusive dynamic software/hardware architecture that adjusts to given tasks. Those are my two cents, and all of the above is contingent on the validity of Lucid's claims. I hope they are more valid than Nvidia's claims of 60% scaling in Crysis.
  • jeff4321 - Saturday, August 23, 2008 - link

    C'mon, how can they perform better than AMD's Crossfire or NVIDIA's SLI? Teams at AMD and NVIDIA know the intimate details of their boards. They know what they're doing.

    Besides, someone could implement this kind of solution without hardware (the hardware is probably there to prevent folks from running the software without the company getting revenue). Most likely, what this hardware and software do is this: their API interception code directs all of the underlying cards to render parts of the frame to a surface in the framebuffer. The framebuffer is transferred to system memory. And then, depending on how you want to do things, you composite in system memory, or you direct the video card that is driving the display to treat the system-memory surface as an overlay surface.
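
    A stripped-down illustration of that composite-in-system-memory step (toy Python with invented names; real code would be reading back framebuffer surfaces through the graphics API):

        # Toy split-frame compositing: card 0 "renders" the top half, card 1 the
        # bottom half, and the halves are stitched together in system memory.
        WIDTH, HEIGHT = 8, 4

        def read_back(card_id, rows):
            # stand-in for copying that card's portion of the frame to system memory
            return [[card_id] * WIDTH for _ in range(rows)]

        top = read_back(0, HEIGHT // 2)
        bottom = read_back(1, HEIGHT - HEIGHT // 2)
        frame = top + bottom                 # composited frame in system memory
        for row in frame:
            print(row)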

    None of this requires magic hardware (unless you want to go really fast). This is how SLI and Crossfire work. Since AMD and NVIDIA designed their hardware and software, they can add hardware acceleration magic (things like synchronizing the two boards' scanout, directly transferring scanout data through the SLI or Crossfire cable, or making groups of boards look like one). Unfortunately for Lucid, I doubt that AMD or NVIDIA gave them any secret sauce, so Lucid cannot leverage that hardware acceleration.

    Their ASIC is just a PCIe switch with an endpoint device for software security.
  • whatthehey - Saturday, August 23, 2008 - link

    I'm glad you're so incredibly knowledgeable that you can say what something does and how it works without ever seeing it or working on the project. Obviously nVidia and ATI don't want to give away their secrets, just like Lucid isn't going to give away theirs. Will this work? We don't know for sure yet. Is it better than SLI and Crossfire? We don't know that either. What I do know for certain is that there are plenty of games that are GPU limited that still don't get better than 30 to 50% scaling with current SLI/Crossfire. More than that, I know that most games don't come anywhere near even 50% scaling when going from dual GPUs to quad GPUs.

    I think the whole point of this chip is to do the compositing and splitting up of rendering tasks "really fast". I also think that the current ATI and nVidia solutions are less than ideal, given we need custom profiles for every game in order to see any benefit. What I'm most worried about is that the Lucid chip will just transfer the need for custom profiles from nVidia and ATI over to Lucid - a completely unproven company at this point.

    For now, I'm interested in seeing concrete numbers and independent testing. The world is full of successful inventions that were deemed impossible or "smoke and mirrors" by dullards that just couldn't think outside the box. This Hydra chip may turn out to be exactly what you state, but I'm more inclined to wait and see rather than trusting on people like you to tell us what can and can't be done.
  • shin0bi272 - Saturday, August 23, 2008 - link

    I'm with whatthehey. You are lucky to get a 40 or 50% performance boost with current multi-GPU solutions, and IIRC the game has to support either CrossFire or SLI. So if you are running, say, UT3 with AMD's CrossFire, you are SOL for getting ANY boost. BUUUUT if the Hydra tech works as advertised (or even close to it), it will be night and day compared to current solutions.

    Even if this chip is exclusive to Intel's mobos, it will outperform either solution from AMD/Nvidia, since it isn't alternating screens or portions of the screen via hardware over a tiny bridge (which adds latency). This chip is sort of like the hardware XOR chip on a RAID 5 card in that it just makes a decision about which card to send data to. The Hydra's ONLY job is to intercept a command being sent to the graphics card(s) and send it to the one that's not working as hard or is ready for a new operation. That doesn't take a lot of power or time as long as the software is efficient in telling the chip what graphics card(s) you have.

    I read another comment that said: "the hydra is a tensilica diamond based programmable risc controller with custom logic around it running at 225mhz. it uses about 5watt."

    For an explanation of RISC vs. CISC, visit: http://cse.stanford.edu/class/sophomore-college/pr...

    This chip does essentially one thing and does it very, very fast.
  • pool1892 - Sunday, August 24, 2008 - link

    i made the tensilica 5-watt risc chip comment - and the thing that is most interesting to me is that it is programmable to an extent. it is maybe best to imagine a dsp with a multitude of presets, each of which accelerates a different load. if i understand it correctly, hydra will auto-optimize itself to suit different applications. this way you get near-dsp throughput for many different usage models (that is, different games) and you do not need the special units big fpga chips have.
    i just wonder where this optimization takes place, since hydra only has 16+16k of memory - and lucid talks about very low cpu utilization. (we are talking about a basic AI engine or really large table lookups)

    risc vs. cisc is beside the point here; there are no real cisc chips left in the market (macro/micro ops and so on - this has been gone since the pentium pro and the "weird shift from alpha to athlon"TM^^)
  • jeff4321 - Saturday, August 23, 2008 - link

    If it is strictly a software solution (where they call into DX for the multiple boards, and eventually the rendered data makes it into system memory and the master board outputs the frame from system memory), of course it will work. Will it be fast and responsive? I don't know. If it is, you will see the same improvement in SLI or Crossfire, because NVIDIA or ATI will figure out how the Lucid software is configuring their devices. If you look at the block diagrams in the article, Lucid uses application profiles to determine how to configure the devices.
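
    Conceptually, a profile lookup like that is nothing exotic; something like the toy table below (entries and field names are invented, not taken from Lucid's or anyone's actual driver):

        # Toy per-application profile table, looked up by executable name.
        PROFILES = {
            "crysis.exe": {"mode": "split_frame", "balance": 0.55},
            "ut3.exe":    {"mode": "alternate_frame"},
        }
        DEFAULT = {"mode": "alternate_frame"}

        def profile_for(exe_name):
            return PROFILES.get(exe_name.lower(), DEFAULT)

        print(profile_for("Crysis.exe"))     # -> {'mode': 'split_frame', 'balance': 0.55}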

    A good comparison to Lucid's system is ATI's Software Crossfire (the Crossfire solution after the master-slave boards, but before Crossfire X cable like NVIDIA's SLI). Since ATI no longer runs this way, the Crossfire X solution is probably better. I doubt that ATI would stop using the software approach to multi-GPU solutions unless there were a benefit; the Crossfire X port makes the silicon bigger and it makes the board cost more because of the board traces and physical port.

    I doubt that their hardware does any compositing for the video stream. That would involve reverse engineering how each device driver talks to the board. Not impossible, just unlikely because of the effort. (Also, interacting with the ATI and NVIDIA device drivers would be quite dangerous, because each device driver assumes that it is in control of the hardware. The Lucid hardware or software, if it talks to the hardware directly, would make the driver and the board incoherent and lead to a system crash.)

    The smoke and mirrors here is the requirement for their ASIC. The actual approach is the tried-and-true solution for graphics hardware: the computation of the color values for each pixel is (mostly) independent of the adjacent pixels; therefore, you just add more hardware to make it faster.
  • JarredWalton - Sunday, August 24, 2008 - link

    You know, doing it in software makes SLI and CF more CPU-limited than single GPUs, so unless you're really GPU-limited, scaling isn't as good as it could be. The whole point of this ASIC seems to be to handle the compositing and assignment of tasks in hardware, thus making it faster and relieving the CPU of such tasks. That's not smoke and mirrors to me... at least, not if it works.

    It seems like we're still six months or so away from seeing actual hardware in our hands. My impression is also that their goal is to get the hardware to split up generic DX/OGL streams even if it doesn't have a profile, though with a profile it could do a better job. Also, judging by the images we've been shown (http://www.dailytech.com/Chipmaker+Hydras+Stunning...; more details at http://www.pcper.com/article.php?aid=607), the breaking up of tasks and compositing is FAR more involved than what SLI and CF are doing, and probably makes more sense. (I wasn't at IDF, so I didn't see this in person.)

    "Tried and true" has a few synonyms you might want to put in there instead. "Conservative" is one, and so is "stagnation". Just like AMD stagnated with Athlon 64, NVIDIA and ATI seem to be dragging their heels when it comes to true innovation in the GPU industry. GPGPU is the most interesting thing to come out in the past few years, and what do we get? Two proprietary approaches to GPGPU, so that developers need to code for either NVIDIA *or* ATI -- or do twice as much work to support both.

    That's a lot like SLI, where NVIDIA wants us to use their GPUs with *their* chipset, and they have been aggressive in preventing other companies from supporting SLI without help from NVIDIA. (ATI is only marginally better - unless something has changed and CF now runs on SLI chipsets without a custom BIOS? But at least ATI will license the tech to Intel.) It would hardly be surprising if a third party were to come out and say "*BEEP* you guys! I'm going to do this in an agnostic fashion and let the users decide."

    Whether or not the Lucid Hydra chip works, I can't imagine anyone outside of NVIDIA and ATI employees actually wanting it to fail. You might as well bury your head in the sand and scream loudly that you want all competition and progress to stop. (It won't, of course, but at least if your head is buried you won't be able to tell the difference.)
  • jeff4321 - Sunday, August 24, 2008 - link

    If you think that NVIDIA and AMD have been stagnant, you haven't seen the graphics industry change. The basic graphics pipeline hasn't changed. It simply got smaller. A current NVIDIA or ATI GPU probably has as much computation power as an SGI workstation from the 90's. GPGPU is a natural extension of graphics hardware. Once the graphics hardware becomes powerful enough, it starts to resemble a general purpose machine, so you build it that way. It's possible because the design space for the GPU can do more (Moore's Law).

    Since it's early in the deployment of using a GPU as an application-defined co-processor, I would expect there to be competing APIs. Believe it or not, in the late eighties, x87 wasn't the only floating point processor available for x86's. Intel's 387 was slower than Weitek's floating point unit. Weitek lost because the next generation CPUs at the time started integrating floating point. Who will win? The team that has better development tools or the team that exclusively runs the next killer app.

    Dynamically changing between AFR and splitting the scene is hard to do. I'm sure that ATI and NVIDIA have experimented with this in-house, and they are either doing it now or have decided that it kills performance because of the overhead of changing it on the fly. How Lucid can do better than the designers of the device drivers and ASICs, I don't know.
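
    Just to make the difficulty concrete, the naive version of "change it on the fly" is a trivial heuristic like the sketch below (invented thresholds, purely illustrative); the hard part is doing the switch without stalling the pipeline or breaking frame pacing:

        # Toy heuristic: if the cards' recent frame times diverge a lot, switch from
        # alternate-frame rendering to split-frame. The 25% threshold is arbitrary.
        def pick_mode(frame_times_ms):
            avg = sum(frame_times_ms) / len(frame_times_ms)
            spread = max(frame_times_ms) - min(frame_times_ms)
            return "split_frame" if spread > 0.25 * avg else "alternate_frame"

        print(pick_mode([16.0, 16.5]))   # -> alternate_frame
        print(pick_mode([16.0, 24.0]))   # -> split_frame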

    Lucid Hydra is not competition for either NVIDIA or ATI. The Lucid Hydra chip is a mechanism for the principals of the company to get rich when Intel buys them to get access to Multi-GPU software for Larrabee. It'll be a good deal for the principals, but probably a bad deal for Intel.

    Licensing Crossfire and SLI is a business decision. Both technologies cost a bundle to develop. Both companies want to maximize return.
  • AnnonymousCoward - Saturday, August 23, 2008 - link

    I'm afraid this solution will cause unacceptable lag. If the lag isn't inherent, maybe the solution will require a minimum "max frames to render ahead / Prerender limit". I don't buy their "negligible" BS answer.

    Does SLI require a minimum? I got the impression it does, from what I've read in the past. I don't have SLI, and use RivaTuner to set mine to "1".
  • Aethelwolf - Saturday, August 23, 2008 - link

    Let's pretend, if only for a moment, that I was a GPU company interested in giving a certain other GPU company a black eye. And let's say I have this strategy where I design for the middle range and then scale up and down. I would be seriously haggling with Lucid right now to become a partner, so they supply me, and pretty much only me, besides Intel, with their Hydra engine.
  • DerekWilson - Saturday, August 23, 2008 - link

    that'd be cool, but lucid will sell more parts if they work with everyone.

    they're interested in making lots of money ... maybe amd and intel could do that for them, but i think the long term solution is to support as much as possible.
  • Sublym3 - Saturday, August 23, 2008 - link

    Correct me if I am wrong, but isn't this technology still dependent on making the hardware specifically for each DirectX version?

    So when a new DirectX or OpenGL version comes out, not only will we have to update our video cards but also our motherboards at the same time?

    Not to mention this will probably jack up the price on already expensive motherboards.

    Seems like a step backwards to me...
  • DerekWilson - Saturday, August 23, 2008 - link

    you are both right and wrong --

    yes, they need to update the technology for each new directx and opengl release.

    BUT

    they don't need to update the hardware at all. the hardware is just a smart switch with a compositor.

    to support a new directx or opengl version, you would only need to update the driver / software for the hydra 100 ...

    just like a regular video card.
  • magao - Saturday, August 23, 2008 - link

    There seems to be a strong correlation between Intel's claims about Larrabee, and Lucid's claims about Hydra.

    This is pure speculation, but I wouldn't be surprised if Hydra is the behind-the-scenes technology that makes Larrabee work.
  • Aethelwolf - Saturday, August 23, 2008 - link

    I think this is the case. Hydra and Larrabee appear to be made for each other. I won't be surprised if they end up mating.

    From a programmer's view, Larrabee is very, very exciting tech. If it fails in the PC space, it might be resurrected when next-gen consoles come along, since it is fully programmable and claims linear performance (thanks to hydra?).
  • DerekWilson - Saturday, August 23, 2008 - link

    i'm sure intel will love hydra for allowing their platforms to support linear scaling with multigpu solutions.

    but larrabee won't have anything near the same scaling issues that nvidia and amd have in scaling to multi-gpu -- larrabee may not even need this to get near-linear scaling in a multi-gpu situation.

    essentially they just need to build an smp system and it will work -- shared mem and all ...

    their driver would need to optimize differently, but that would be about it.
  • GmTrix - Saturday, August 23, 2008 - link

    If larrabee doesn't need hydra to get near linear scaling isn't hydra just providing a way for amd and nvidia to compete with it?
  • pool1892 - Saturday, August 23, 2008 - link

    i think it is possible to build a solution like this, but this thing has a lot to do, on-the-fly qos and scheduling and optimizing and so on. with data in the gigabits/s. sounds like a heavy duty cisco switch.
    i can imagine this working, but the chip will be a heavyweight - and it will be power consuming and expensive.
    and it only has potential in the marketplace if the price premium for a mainboard with hydra beats the faster graphics you can buy for this premium. that will be tough.
    larrabee is as usual a totally different animal, hydra could very well be a software feature for it (esp. with qpi in gen 2)
  • pool1892 - Saturday, August 23, 2008 - link

    gotta correct myself - after a little digging: the hydra is a tensilica diamond based programmable risc controller with custom logic around it, running at 225mhz. it uses about 5 watts. this is a tiny chip, it might be affordable. (but how is lucid going to earn money? and: they have to optimize their driver and the programmable parts of the chip for different rendering techniques in different games - who is paying for that?)
  • Goty - Saturday, August 23, 2008 - link

    I don't see this as a bad thing for GPU makers, personally. Since ATI no longer has anything like the "master card" for crossfire, as long as they're selling two GPUs to people running multi-card systems, they're not losing out. Sure, they may lose a bit of money on the mainboard side of things since consumers will be able to use any chipset they want with this technology, but the margin on the GPU silicon is probably higher than that on the chipset side, anyhow.
  • yyrkoon - Saturday, August 23, 2008 - link

    "Lucid also makes what seems like a ridiculous claim. They say that in some cases they could see higher than linear scaling. The reason they claim this should be possible is that the CPU will be offloaded by their hardware and doesn't need to worry about as much so that overall system performance will go up. We sort of doubt this, and hearing such claims makes us nervous. They did state that this was not the norm, but rather the exception. If it happens at all it would have to be the exception, but it still seems way too out there for me to buy it."

    Come now, guys . . . if a CPU-dependent game such as World in Conflict could offload the CPU by 10%, would it not make sense that the CPU could then do an additional 10% of work, thus offering more performance? I am not saying I believe this is possible myself, but taking Lucid at their word, this just makes sense to me.
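
    Back-of-the-envelope with made-up numbers (assuming CPU and GPU work overlap, so frame time is whichever of the two is slower):

        # Invented numbers only, to show where an offload-driven gain would come from.
        single = max(30, 20)             # one card, CPU-bound: 30 ms/frame (~33 fps)
        dual   = max(30, 20 / 2)         # second GPU, no offload: still CPU-bound at 30 ms
        hydra  = max(30 * 0.8, 20 / 2)   # second GPU plus a hypothetical 20% CPU offload: 24 ms
        for ms in (single, dual, hydra):
            print(f"{1000 / ms:.0f} fps")   # 33, 33, 42 fps
        # the offload, not the extra GPU, moves the needle here; actually beating 2x over a
        # single card would need the CPU work cut by more than half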

    "The demo we saw behind closed doors with Lucid did show a video playing on one 9800 GT while the combination of it and one other 9800 GT worked together to run Crysis DX9 with the highest possible settings at 40-60 fps (in game) with a resolution of 1920x1200. Since I've not tested Crysis DX9 mode on 9800 GT I have no idea how good this is, but it at least sounds nice."

    Just going from this review, and assuming you meant a 9800GTX/GTX+: 47-41 FPS average with 16x AF/ 0x AA.

    "An explanation for this is the fact that the Hydra software can keep requesting and queuing up tasks beyond what graphics cards could do, so that the CPU is able to keep going and send more graphics API calls than it would normally. This seems like it would introduce more lag to us, but they assured us that the opposite is true. If the Hydra engine speeds things up over all, that's great. But it certainly takes some time to do its processing and we'd love to know what it is."

    Wait a minute . . . did you not just mention on a previous page somewhere that the number of cards supported was limited due to latency implications? . . .

    "Of course, while it seems like an all or nothing situation that would serve no purpose but to destroy the experience of end users, NVIDIA and ATI have lots of resources to work on this sort of "problem" and I'm sure they'll try their best to come up with something. Maybe one day they'll wake up and realize (especially if one starts to dominate over the other other) that Microsoft and Intel got slammed with antitrust suits for very similar practices."

    OR, they could just purchase the company outright, which seems to me to be what Lucid may have been aiming for to begin with. After that, the buying company could do whatever they please, such as killing the project, or completely decimating the opposite camp *if* the hardware truly does what it claims. At least where gaming is concerned . . . and we all know that IGPs make up a very large portion of home systems.

    Now what I have to say is that this totally smells like the gaming physics "fiasco". Buy the hardware now, and the hardware is dead in a year or two. Sure, a few games implemented features that leveraged these cards, but do you think developers are going to write code for hardware that has gone the way of the dodo? Probably not.

    The idea is interesting yes, but I will believe it when I see the hardware on sale at the egg . . .
  • DerekWilson - Saturday, August 23, 2008 - link

    it was not 9800 gtx cards -- they were GT cards ... lower performance, single slot.

    also game devs won't have to optimize for it, so there is no problem with them ignoring the situation -- if it works it works
  • yyrkoon - Saturday, August 23, 2008 - link

    9800GTX/GTX+ benchmarks ---> http://www.guru3d.com/article/geforce-9800-gtx-512...
  • JarredWalton - Saturday, August 23, 2008 - link

    9800 GT FTW! (http://www.newegg.com/Product/ProductList.aspx?Sub...)

    Basically, performance is close to (identical with, really) that of the 8800 GT. You know, this goes along with the whole "let's rename the 8800 GT and 8800 GTS 512MB to 9800 parts, because after all G92 is GeForce 9 hardware." Why the 8800 GT was ever launched with that name remains something of a mystery... well, except that performance was about the same as the 8800 GTX.
  • yyrkoon - Saturday, August 23, 2008 - link

    So basically just an 8800 GTS with fewer ROPs? nVidia's naming convention definitely leaves a lot to be desired : /
  • Lakku - Saturday, August 23, 2008 - link

    Who are nVidia and AMD/ATi supposed to strong-arm in this situation? I don't think they would be in any kind of position to strong-arm ANYONE, if this works as advertised. Why? Because they'd have to strong-arm Intel (apparently a very big investor in this tech and company) to do so, and that's just not going to happen. Intel need only put this on their own Intel-branded gaming or consumer boards, and/or Intel can strong-arm Asus and the others into putting this chip onto their motherboards if they want Intel chipsets, still by far the best-selling PC chipsets. If this works as advertised, it's probably Intel who will be the biggest winner... and maybe us end users in some way, provided Intel and this company don't charge outrageous prices for this tech.
  • djc208 - Monday, August 25, 2008 - link

    Easy: like the author stated, nVidia just writes in some code that looks for the Hydra software or hardware and shuts down parts of the driver. Therefore you can't use their hardware on a system running or equipped with Hydra. If it were a unified front, then Intel would have only Larrabee to use with this for gaming.

    The problem I see is that it could upset the market if the boycott isn't universal. If ATI let their hardware work with this and nVidia didn't, then it could seriously hurt nVidia, as there would be even less reason to go with their chipsets or graphics cards at the high end, where nVidia likes to play.

    More likely is that ATI/nVidia will quickly push out something along the same lines and now we'll have three competing solutions, and then ATI and nVidia will lock out Hydra since they offer an alternative, just like now.

    All this assumes that Hydra works the way it's said to, if not then all bets are off.
  • GTVic - Friday, August 22, 2008 - link

    This company is not making graphics cards, and to use their product you have to buy more graphics cards. Seems like a win-win situation. AMD and nVidia can dump development on CrossFire/SLI and sales go up.
  • DerekWilson - Saturday, August 23, 2008 - link

    if nvidia dumps sli then there is zero reason for them to be in the chipset business right now.

    they are no longer needed for AMD because AMD isn't making horrid chipsets anymore. they aren't needed for Intel because Intel builds awesome motherboards.

    the only value add nvidia has on the platform side is sli. period.

    they do not want to see it become irrelevant.
  • shin0bi272 - Friday, August 22, 2008 - link

    This is a gamer's dream (assuming it works as advertised) and a video card maker's nightmare.

    If they really wanted to demo it, they probably should have been running 2 systems side by side, one with 1 card and one with the Hydra running 2 cards, to show the actual difference. Maybe also not run Crysis, since Crysis has framerate issues on any system... maybe run 3DMark Vantage (I know it's not an actual game, but it's a standardized program), especially if it's transparent to the game and hardware.

    Personally, if AMD and Nvidia have a problem with this technology and they disable it (or force me to so I can play any game), there's still Intel's Larrabee on the horizon, and I'm sure Intel wouldn't disable the Hydra, so I'd just dump AMD and Nvidia altogether to get linear performance increases (again assuming it works).

    On top of that, AMD and Nvidia have their own performance issues and competition to worry about, especially now that the physics war has begun (AMD hooking up with Havok and Nvidia buying Ageia).

    I think both AMD and Nvidia should embrace this technology and abandon their own approaches so that they can concentrate more on individual card performance, since the performance gains with both SLI and CrossFire aren't linear and this promises to be. Even if it's not 100% linear, a 90% speed gain is still better than either of the other solutions.

    The game designers would also love this technology, because they wouldn't have to worry about enabling SLI or CrossFire in their games; they could concentrate on the actual gameplay and making the game fun and cool looking.
  • shin0bi272 - Friday, August 22, 2008 - link

    Oh, also I forgot to mention that the article did say that you would have to have 2 of the same brand of card, so you'd still be locked into one manufacturer. So it's not like you'd be mixing an Nvidia 280 with an AMD 4870 X2. So AMD and Nvidia really shouldn't have a huge problem with it.
  • Diesel Donkey - Friday, August 22, 2008 - link

    That is false. The article states that any combination of two, three, or four cards from either AMD or Nvidia can be used. That's one reason this technology would be so amazing if it actually works and is implemented successfully.
  • The Preacher - Saturday, August 23, 2008 - link

    I don't think you would like some portions of the same screen rendered by nvidia and others by ATI since they will look different and could create some discontinuities in the final image.
  • DerekWilson - Saturday, August 23, 2008 - link

    they try really hard to render nearly the same image ... but if you played half-life 2 then this would be an issue.

    also, to enable this they would have to wait for vista to allow it (i think) ... thing is they are building a wddm driver ... so ... nvidia's display driver wouldn't be "running" either? I don't really know how that works.
  • jordanclock - Friday, August 22, 2008 - link

    No, he is right. You can't have an nVidia card with an AMD card. As it stands, Windows won't allow two graphics drivers to run in 3D mode. This was addressed in the first article featuring this technology.
  • prophet001 - Friday, August 22, 2008 - link

    how amazing would this be. nice article with what you were given.
  • MrHanson - Friday, August 22, 2008 - link

    I think having a separate box with its own power supply (or supplies) is ideal for something like this. That way, if you want to add 2 or more GPUs to your Hydra system, you don't have to rip apart your computer and put in a different motherboard and power supply. I imagine this system will probably come with its own mainboard and power supply with several separate PCIe x16 slots for scalability. Also, if you were to upgrade your motherboard and CPU, you wouldn't have to worry about getting a motherboard with enough PCIe x16 slots or whether the motherboard supports the Hydra engine. Any ol' motherboard with one PCI Express slot will do.


  • TonyB - Friday, August 22, 2008 - link

    but can it play crysis?
  • Googer - Sunday, August 24, 2008 - link

    Please let that three-year-old former Engadget inside joke die. It's starting to get old; send it to the joke graveyard, it's past its prime.
  • TonyB - Sunday, August 24, 2008 - link

    F YOU, two of my friends died trying to run Crysis.
  • UnlimitedInternets36 - Saturday, August 23, 2008 - link

    I've got one better: Crysis: Warhead, all settings maxed @ 120fps @ 2560x1600 FTW!
  • InuYasha - Saturday, August 23, 2008 - link

    but does it blend?
  • PrinceGaz - Saturday, August 23, 2008 - link

    Yes, it blends.
  • Lightnix - Friday, August 22, 2008 - link

    At 40-60 fps at 1920x1200.
