7 Comments
venice1 - Thursday, February 26, 2009 - link
Well written and informative, Liz. What is your assessment of the overall attitude, energy, and expectations of this year's attendees? Has VMware risen to a level that validates this event every year? Have you been able to attend any of the workshops? What are your thoughts on holding VMworld Europe two years in a row at Cannes, and is the city an ideal location for fostering growth and partnerships within the virtualization world? Anxiously waiting for your next article.
LizVD - Friday, February 27, 2009 - link
Thanks for the compliment, and I'm currently working on it. The big problem with being a journalist at these conferences is that they don't really expect the press to attend the breakout sessions, and as such they stuff our schedule with a lot of less interesting stuff. ;-)
Now that we're back home, more updates are forthcoming; expect another post later today!
duploxxx - Friday, February 27, 2009 - link
I have been to the event for several years now, even before it was called "world", back when it was just "TX". It's clear that the focus has changed since then: it used to be all about the technical side, real VM freaks, but now there's a growing focus on the marketing aspects, and in my opinion the original true technical content is fading a bit. The breakout sessions are decent, but there are very few really advanced ones; 90% of them will do for the average everyday user, and that's what you see more and more at VMworld: customers who already have a system deployed attending these sessions, because they don't need anything more technical than that.
The location is fine: nice weather, nice venue. And don't forget it's about 4,500 people; it's not so easy to find the right place for that.
has407 - Wednesday, February 25, 2009 - link
The only time things need to sync is when the VM produces a side effect that is externally visible, i.e., I/O. All such activity is visible to the hypervisor. The only information that needs to be sent is the VM state that has changed since the last update, which the hypervisor should be able to figure out with relatively little effort.
I expect the algorithm is more like "since the last I/O, or after X seconds". In any case, the amount of data that needs to be sent is considerably less than maintaining a "clock per clock" copy, and yields exactly the same results, except that some CPU cycles might be duplicated when the shadow copy starts; but if there's been no I/O during that period, then no harm, no foul.
Obviously other optimizations are possible depending on the type of I/O, as certain types might not require an update. For example, reading from a disk wouldn't necessarily require one.
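The sync-on-I/O idea in the comment above can be sketched in a few lines. Everything here is hypothetical (the class and method names are invented for illustration); it is a toy model of the concept, not VMware's actual FT implementation:

```python
import time


class PrimaryVM:
    """Toy model of "sync only on externally visible I/O".

    State deltas are shipped to a shadow copy only when the VM performs
    I/O, or when a fallback timeout expires -- never "clock per clock".
    """

    def __init__(self, sync_interval=5.0):
        self.state = {}            # full VM state (pages, registers, ...)
        self.dirty = {}            # changes since the last sync
        self.shadow = {}           # what the shadow copy currently holds
        self.sync_interval = sync_interval
        self.last_sync = time.monotonic()

    def write_memory(self, addr, value):
        # Internal state changes are cheap: just track the delta locally.
        # Repeated writes to the same address collapse into one entry.
        self.state[addr] = value
        self.dirty[addr] = value

    def do_io(self, device, payload):
        # An externally visible side effect: sync *before* emitting it,
        # so the shadow could take over without diverging.
        self._sync()
        return f"{device} <- {payload}"

    def tick(self):
        # Periodic fallback: "since the last I/O, or after X seconds".
        if time.monotonic() - self.last_sync >= self.sync_interval:
            self._sync()

    def _sync(self):
        # Only the delta crosses the wire, not the whole VM state.
        self.shadow.update(self.dirty)
        self.dirty.clear()
        self.last_sync = time.monotonic()
```

The key point is that `write_memory` only records a delta locally; the shadow copy is touched only on externally visible I/O or the periodic timeout.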
haplo602 - Wednesday, February 25, 2009 - link
Well, if you think about it, there is only one way the fault tolerance feature can work: progressive VMotion.
It starts a VMotion, and when that completes, it simply keeps syncing the VM's page table data between the original VM and the shadow copy while the VM is not running on a CPU (or it interrupts the VM to do so).
Something like that was done long ago with the L4 microkernel running a Linux instance inside, but the amount of data was much smaller back then.
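The "progressive VMotion" idea described above, iteratively copying dirty pages while the VM keeps running until the remaining delta is small enough to finish during a brief pause, can be sketched like this (hypothetical names; a toy model of pre-copy migration, not VMware's code):

```python
def progressive_sync(source_pages, read_dirty_pages, threshold=8):
    """Pre-copy style migration sketch.

    source_pages:     dict mapping page number -> page contents.
    read_dirty_pages: callable returning the pages dirtied since the
                      previous round (the VM keeps running meanwhile).
    threshold:        when the dirty set is this small, we pretend to
                      pause the VM, ship the final delta, and resume.
    """
    shadow = dict(source_pages)        # initial full copy (the "VMotion")
    while True:
        dirty = read_dirty_pages()     # pages touched during the last round
        if len(dirty) <= threshold:
            # Small enough: brief pause, final delta, done.
            shadow.update(dirty)
            return shadow
        shadow.update(dirty)           # keep copying while the VM runs
```

In a real hypervisor the dirty set would come from write-protected page tables or hardware dirty bits; here it is just a callback so the convergence loop is visible.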
duploxxx - Wednesday, February 25, 2009 - link
It's actually the record and replay functionality that is already available in Workstation 6, but this time enhanced; of course, to be FT it needs the "ok" from the other side. They implemented interesting things to reduce the overhead of reads and writes, and it has proven to be very powerful; now we only need this with more than one vCPU. A nice add-on is the self-healing :) one goes down, a new one comes up.
LizVD - Thursday, February 26, 2009 - link
Indeed, later that day we had the chance to interview Lionel Cavalliere of VMware EMEA and grill him a bit about the subject. It seems that, at this point, Fault Tolerance will indeed be limited to single-vCPU VMs and will incur quite a bit of overhead. Not quite ready for company-critical database systems and such, I guess.
I like the idea, though: using a combination of existing techniques to provide a completely new way of handling High Availability. It definitely goes to show how much the IT landscape has changed since virtualization came along.
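The record/replay scheme discussed in these last comments can be sketched as follows: the primary logs every non-deterministic input, and the backup replays that log so deterministic execution drives it to an identical state. It also hints at why a single vCPU is the easy case: with several vCPUs, the memory-access interleaving itself becomes a non-deterministic input that would have to be logged too. All names here are hypothetical; this is a toy model, not VMware's implementation:

```python
class RecordReplayPair:
    """Toy model of lockstep record/replay fault tolerance.

    The primary logs every non-deterministic input (I/O results,
    interrupts, timer reads); deterministic instructions need no log
    entries. The backup replays the log to reach the same state.
    """

    def __init__(self):
        self.log = []
        self.primary_state = 0
        self.backup_state = 0

    def primary_step(self, nondeterministic_input):
        # Record the input, then apply the deterministic transition.
        self.log.append(nondeterministic_input)
        self.primary_state = self._transition(
            self.primary_state, nondeterministic_input)

    def failover(self):
        # "Self-healing": if the primary dies, the backup finishes
        # replaying the log and is promoted to primary.
        for event in self.log:
            self.backup_state = self._transition(self.backup_state, event)
        return self.backup_state

    @staticmethod
    def _transition(state, event):
        # Stand-in for deterministic guest execution: same state and
        # same inputs always yield the same next state.
        return state * 31 + event
```

Because `_transition` is deterministic, the backup's replayed state matches the primary's exactly, which is the property the FT feature relies on.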