Original Link: https://www.anandtech.com/show/986
Intel Developer Forum Fall 2002 - Preview
by Anand Lal Shimpi on September 9, 2002 6:13 PM EST - Posted in Trade Shows
The Fall 2002 Intel Developer Forum doesn't officially start for another couple of hours, but we're already set up and ready to bring you some information directly from the forum in San Jose, California.
Our first meeting of the day was with members of Intel's Microprocessor Research Lab and, as you might expect, the topic of discussion was quite interesting. We'll get to some of what was disclosed in a bit, but first we'll talk a little about the show itself.
For the first time in quite a while, the Intel Developer Forum has not been crashed by AMD; if you'll remember, just six months ago AMD stole the show with the world's first public demonstration of Hammer. According to AMD, they won't be at IDF this time:
"The real reason is that this is the week of September 11th and we didn't think it would be appropriate to set up shop at IDF to bang a drum or show off the latest hardware/software/other bits when most folks have their minds on friends and family during this somber time."
So unfortunately (and understandably) you won't be seeing any more Hammer photos or information during our week at IDF.
With that said, there are a number of things that you will be seeing this week. There's going to be a lot of talk about the first Hyper-Threading enabled Pentium 4 CPUs, we're going to learn a little more about the mobile wonder known as Banias, and although we won't see a demonstration at the show, some Prescott information should be flowing this week as well.
As usual, to stay on top of our IDF coverage be sure to subscribe to the AnandTech Newsletter as we'll be sending out most of our coverage over the Newsletter first.
The Future of CPUs - Dual Threshold Voltage?
There has been a dramatic change in the way Intel goes about designing microprocessors over the past decade; power never used to be a primary concern when architecting a CPU - it was more of an afterthought. Over the past several years Intel has done some things to improve power efficiency, but performance was still the primary goal. Today, Intel doesn't begin architecting a processor before defining a power limit for the CPU; based on that "power budget," they go on to design the CPU to meet its other constraints and goals.
The most popular ways of reducing power consumption and heat production are improving manufacturing processes and shrinking cores; the unfortunate downside is that with a smaller core, power density obviously increases, and you end up having to deal with a much bigger problem - removing a lot of heat from a very small area.
In order to deal with this problem of increasing power density, there is a lot that must be done inside the core itself. It turns out that during normal CPU operation, a number of transistors switch much faster than they need to. Transistors critical to the high-performance execution of tasks obviously must operate as fast as possible, but many other transistors in the CPU are either unused or relatively idle during normal operation - yet they still run at the same frequency and voltage as the rest of the CPU. This is where the idea of Dual Threshold Voltage (Dual Vt) comes into play.
You're very familiar with the single voltage that your CPU works off of, often referred to as its "core voltage" or "Vcore." This is the voltage at which all of the millions of transistors in the CPU operate but, as we just mentioned, it isn't the most efficient arrangement. Intel is currently researching the idea of feeding future CPUs two different voltages - a high and a low voltage. These Dual Vt CPUs would then be architected to run non-critical transistors at the lower voltage while the rest run at the higher voltage. The benefit is that transistors that don't need to switch as fast end up running slower, reducing the overall power consumption of the CPU.
Slowing down the non-critical transistors won't actually reduce the CPU's performance, because identical transistors don't all switch at the same speed. There's a distribution of speeds at which identical transistors will switch; the "high" voltage effectively keeps transistors running as fast as they can, while the "low" voltage ensures that transistors operate at the slower end of their distribution curve.
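To put rough numbers on the idea: the dynamic switching power of a group of transistors scales with the square of the voltage they run at (P ∝ V²), so even a modest drop on the non-critical rail adds up. Below is a back-of-the-envelope sketch in C++; the voltage, capacitance and activity figures are our own illustrative assumptions, not Intel's numbers.

```cpp
#include <cstdio>

// Back-of-the-envelope sketch of why a second, lower voltage rail saves
// power. Dynamic switching power is roughly P = a * C * V^2 * f
// (activity factor, switched capacitance, voltage, frequency).
// All figures below are illustrative assumptions, not Intel data.

double dynamic_power(double activity, double cap, double volts, double freq) {
    return activity * cap * volts * volts * freq;
}

int main() {
    const double freq = 3.0e9;   // 3 GHz clock (assumed)
    const double cap  = 1.0e-9;  // switched capacitance per domain (assumed)

    // Single-rail design: every transistor runs at 1.5 V.
    double single = dynamic_power(0.2, 2 * cap, 1.5, freq);

    // Dual-rail design: the critical half stays at 1.5 V,
    // the non-critical half drops to 1.2 V.
    double dual = dynamic_power(0.2, cap, 1.5, freq)
                + dynamic_power(0.2, cap, 1.2, freq);

    std::printf("single rail: %.2f W, dual rail: %.2f W (%.0f%% saved)\n",
                single, dual, 100.0 * (1.0 - dual / single));
}
```

With these toy numbers, dropping the non-critical half of the transistors from 1.5 V to 1.2 V trims total dynamic power by roughly 18% without touching the critical paths.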
Taking Advantage of Hyper-Threading
Ever since we first started benchmarking Intel's Hyper-Threading technology, we've realized that it's going to be quite some time before the general population sees a performance increase from it. As a brief recap, remember that Hyper-Threading allows multiple threads to be executed on a single CPU at the same time by banking on the threads using different execution units on the chip. This works out perfectly if you happen to be running two applications that perform dramatically different types of operations, or an inherently multithreaded app; for a lot of today's single-threaded (or sequential) applications, what ends up happening instead is that a lot of inefficiency is introduced as two threads contend for the same execution resources.
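To see why contention hurts, consider a toy experiment you can try yourself. The sketch below is our own illustration, not an Intel benchmark: it pits two identical floating-point threads against each other, then pairs a floating-point thread with an integer thread; the loop bodies and iteration counts are arbitrary assumptions.

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

// Sinks keep the compiler from optimizing the loops away.
volatile double fp_sink;
volatile long int_sink;

// Floating-point-heavy loop: keeps the FP execution units busy.
double fp_work(long iters) {
    double x = 1.0;
    for (long i = 0; i < iters; ++i)
        x = x * 1.0000001 + 0.0000001;
    return x;
}

// Integer-heavy loop: exercises the integer ALUs instead.
long int_work(long iters) {
    long x = 1;
    for (long i = 0; i < iters; ++i)
        x = x * 3 + 1;
    return x;
}

// Run two tasks on two threads and report the wall-clock time in ms.
template <class F1, class F2>
long time_pair(F1 f1, F2 f2) {
    auto start = std::chrono::steady_clock::now();
    std::thread a(f1), b(f2);
    a.join();
    b.join();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
}

int main() {
    const long N = 200000000;
    // Two identical FP threads fight over the same execution units...
    long same = time_pair([&] { fp_sink = fp_work(N); },
                          [&] { fp_sink = fp_work(N); });
    // ...while an FP thread and an integer thread can share the core.
    long mixed = time_pair([&] { fp_sink = fp_work(N); },
                           [&] { int_sink = int_work(N); });
    std::printf("two FP threads: %ld ms, FP + integer: %ld ms\n", same, mixed);
}
```

On a CPU with a single thread of execution, the two pairings should finish in about the same time; with Hyper-Threading enabled, the mixed pair has a much better shot at overlapping, which is exactly the kind of workload the technology was designed for.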
We've already seen in our own internal tests that enabling Hyper-Threading can reduce performance in everyday applications such as Word, Excel or 3D games by a significant amount (5 - 20%). At the same time, we're well aware that Intel will be introducing Hyper-Threading on the desktop before the end of this year, so the question remains: how will Intel convince end users to enable the feature if all it currently does is reduce performance?
One potential solution is an idea known as pseudo-multithreading, which allows a thread to be spawned while a single-threaded application executes, and used to speculatively fetch data that will eventually be needed. One of the biggest performance limitations in today's systems is the speed of main memory; if a piece of data is not found in cache, performance clearly suffers as the CPU must go to much slower main memory to get the data it needs. The concept of pseudo-multithreading calls for the creation of a helper thread whose job is to speculatively fetch data that may be needed by the parent thread; that data would then be pulled into the CPU's cache, potentially increasing performance by avoiding cache misses.
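As a rough illustration of the concept, here's what a hand-written helper thread might look like. This is our own sketch, not Intel's mechanism: the data layout is an assumption made for the example, and __builtin_prefetch is a real GCC/Clang intrinsic standing in for whatever prefetch method Intel would actually use.

```cpp
#include <cstdio>
#include <thread>
#include <vector>

// Sketch of the helper-thread idea: the helper runs ahead of the real
// work, touching data so it is already in cache when the parent arrives.

struct Node {
    long value;
    Node* next;
    char pad[48];  // pad each node out to roughly a cache line
};

// Helper thread: chases the list doing no real work, so it naturally
// runs ahead of the parent and warms the cache along the way.
void helper(Node* head) {
    for (Node* p = head; p != nullptr; p = p->next)
        __builtin_prefetch(p->next);  // GCC/Clang prefetch intrinsic
}

// Parent thread: the actual single-threaded computation. Without the
// helper, every node it touches could be a cache miss.
long parent_work(Node* head) {
    long sum = 0;
    for (Node* p = head; p != nullptr; p = p->next)
        sum += p->value;
    return sum;
}

int main() {
    // Build a large linked list. (Allocated contiguously for brevity;
    // a helper matters most when nodes are scattered in memory.)
    const int N = 1 << 20;
    std::vector<Node> pool(N);
    for (int i = 0; i < N; ++i) {
        pool[i].value = i;
        pool[i].next = (i + 1 < N) ? &pool[i + 1] : nullptr;
    }

    std::thread t(helper, &pool[0]);   // speculative helper thread
    long sum = parent_work(&pool[0]);  // the real work
    t.join();
    std::printf("sum = %ld\n", sum);
}
```

Because the helper does no real computation, it naturally stays ahead of the parent thread; the harder problems - keeping it from running too far ahead and evicting data the parent still needs - are the kind of thing the software support described below would have to manage.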
Creating these helper threads would require some software support, but their benefit is that they can be integrated into currently available applications - the only question that remains is the best way of invoking them. All that's necessary is for a small piece of code to be inserted into the application to trigger one of these helper threads; one option Intel is currently considering is releasing a tool that would run during the installation of your applications and insert the appropriate triggers into the applications' executables.
It's good to hear that Intel has something up its sleeve, but we're still wondering exactly when these things will be introduced. We already know when Hyper-Threading is coming to the desktop, but unless the feature offers a compelling performance increase from the start, it could get a bad rap that would hurt the launch considerably.
More on Multicore CPUs
The final topic of discussion in our meeting this morning was multicore CPUs; with technology announcements such as BBUL it's clear that Intel does see a future with multicore CPUs, but what sort of future still remains a question.
In our Inside Intel article we talked about the possibility of having two CPU cores - one high-performing core for all critical execution and a lower-performing core for everything else. Today Intel extended the multicore possibilities by talking about the benefits of multiple specialized cores, which could vastly change the way server CPUs are viewed as opposed to desktop CPUs; currently the two are very similar, but if the future does lie with multicore designs, they could become dramatically different as time moves on.
For example, a server CPU with multiple cores could potentially include application-specific cores as part of its array - things like a TCP/IP offload engine or a core optimized for compression. A desktop CPU, on the other hand, could have fewer cores, specialized instead for the tasks of a conventional desktop system.
The first Intel CPU to actually put multicore technology to use will be the Itanium; however, Intel was not able to give us a timeframe.
Final Words
That's it for this preview of the Fall 2002 Intel Developer Forum but stay tuned as there's much more to come. The first keynote is scheduled to start shortly so be sure to check your mail for the Newsletter and check the front page for updates as we get them.