Original Link: https://www.anandtech.com/show/6074/ocz-vertex-4-review-128gb



When OCZ released the Vertex 4 in April, it brought exceptionally strong write performance. Based on OCZ's Everest 2 controller (Marvell IP with custom firmware), the Vertex 4 began OCZ's transition away from SandForce for its high-end drives. However, as we noted in our review, sequential read performance at low queue depths needed work in the launch firmware.

Fortunately, OCZ was well aware of the issue and it only took them a bit over a month to come up with a firmware update to address low queue depth sequential read performance. We updated our Vertex 4s (including the 128GB model that was missing in our initial review) to the new 1.4 firmware and ran them through our suite. By the time we finished running our 1.4 tests, OCZ had already released an even faster 1.5 firmware, so we decided to kill two birds with one stone and combine the two updates into one article. 

The 1.4 Firmware

With the latest versions of OCZ's Toolbox, you can now update your drive's firmware even if you have Intel's RST drivers installed. The Toolbox actually downloads the drive's firmware from OCZ's servers before updating your drive, so you'll need an active internet connection. I have noticed that older RST drivers may trigger a firmware file not found error during the update process, but the latest RST drivers work fine, as do Windows 7's standard AHCI drivers. The Toolbox update is only possible on secondary drives, not the drive Windows booted from.

Note: Upgrading to the 1.4 firmware is destructive, meaning that your SSD will be erased in the process. It's therefore absolutely necessary to back up your data before upgrading, unless you're fine with losing everything on the drive.

  • Increased read performance at low queue depths
  • Improved sequential write performance for 128GB and 256GB models
  • Increased performance under specific workloads of mixed reads and writes
  • Improved host compatibility with dated/uncommon BIOS revisions
  • Improved stability when resuming from S3/S4 on older generation motherboards
  • Increased read performance on small file sizes (lower than 4K)
 
The release notes are promising. Read performance at low queue depths is exactly what needed fixing, and 1.4 claims to address it directly. OCZ also published an updated performance table, reproduced below:
 
OCZ Vertex 4 with 1.4 Firmware Specifications
Capacity         | 64GB     | 128GB              | 256GB              | 512GB
Sequential Read  | 460MB/s  | 535MB/s -> 550MB/s | 535MB/s -> 550MB/s | 535MB/s -> 550MB/s
Sequential Write | 220MB/s  | 200MB/s -> 420MB/s | 380MB/s -> 465MB/s | 475MB/s
4K Random Read   | 70K IOPS | 90K IOPS           | 90K IOPS           | 95K IOPS
4K Random Write  | 50K IOPS | 85K IOPS           | 85K IOPS           | 85K IOPS

The 64GB model was introduced along with the 1.4 firmware and will ship with the new firmware from the start, hence only one set of performance figures. As for the other capacities, sequential read performance is up by 15MB/s. That's not a significant increase, although keep in mind that we're already very close to the limits of 6Gbps SATA. However, this data doesn't tell us whether sequential read performance at low queue depths is what it should be. As we discovered in our review, increasing the queue depth led to better results.

Sequential write performance, on the other hand, is significantly improved on the 128GB and 256GB models. The 128GB model had fairly poor write performance at 200MB/s before the update, but the 1.4 firmware brings that to 420MB/s. That's an increase of over 100%, which is unusual for a firmware update but certainly welcome. The 256GB model also gets an 85MB/s (~22%) boost in sequential write performance. Random read and write speeds remain unchanged for all models.
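
The math behind those percentages is straightforward; here's a quick Python check against the figures in OCZ's table:

```python
# Quick check of the write speed gains quoted above, using OCZ's own figures.
before_after = {
    "128GB": (200, 420),  # MB/s, pre-1.4 firmware -> 1.4 firmware
    "256GB": (380, 465),
}

for capacity, (old, new) in before_after.items():
    gain_mbs = new - old
    gain_pct = gain_mbs / old * 100
    print(f"{capacity}: +{gain_mbs} MB/s ({gain_pct:.0f}% increase)")

# 128GB: +220 MB/s (110% increase)
# 256GB: +85 MB/s (22% increase)
```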

The 1.5 Firmware

Note: The 1.5 upgrade is destructive if you're upgrading from the 1.4 RC or older firmware. If you're upgrading from the final version of the 1.4 firmware, the upgrade is not destructive. We still recommend having an up-to-date backup of your data, because something can always go wrong and result in data loss.

  • Improved sequential file transfer performance for 128GB, 256GB and 512GB models
  • Optimized idle garbage collection algorithms to extend the benefits of performance
    mode by enabling the feature across a greater percentage of the drive
  • Improved HBA / RAID card compatibility
  • Further improved compatibility with desktop and mobile ATA security features
  • Corrected a corner case issue where the ‘Remaining Life’ SMART attribute could be reported incorrectly

 

OCZ Vertex 4 with Firmware 1.5 Specifications
Capacity         | 128GB              | 256GB              | 512GB
Sequential Read  | 550MB/s -> 560MB/s | 550MB/s -> 560MB/s | 550MB/s -> 560MB/s
Sequential Write | 420MB/s -> 430MB/s | 465MB/s -> 510MB/s | 475MB/s -> 510MB/s

The 1.5 firmware provides more incremental improvements than the 1.4 firmware did. Sequential read speed is up by 10MB/s (~2%) and sequential write speeds are up by 2-10% depending on the capacity. The 1.5 firmware apparently brings no performance gains for the 64GB model. The other notable change in the 1.5 firmware is enhanced garbage collection, which actually ties into a unique performance mode OCZ introduced with the 1.4 firmware.

The Performance Mode

With the 1.4 firmware OCZ introduced a two-mode operating structure for most capacities of the Vertex 4. As long as less than 50% of the drive is in use, the Vertex 4 operates in a performance mode, delivering better sequential performance. Once you hit the 50% mark, the drive switches to its standard mode (similar to the maximum performance of the pre-1.4 firmware).

This mode switching is mostly transparent to the end user with one exception. When you cross the 50% threshold, the Vertex 4 has to reorganize all pages on the drive. During this reorganization performance is impacted. The entire process should only take a matter of minutes, and it only happens once, but it's worth keeping in mind. 
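
OCZ hasn't disclosed how the Everest 2 firmware implements this internally, but the behavior as described is simple to picture. Here's a purely illustrative Python sketch of a 50% threshold mode switch; the class and method names are hypothetical, not OCZ's code:

```python
# Purely illustrative sketch of a 50% used-capacity mode switch. OCZ has not
# disclosed how the Everest 2 firmware actually implements this; the class
# and method names here are hypothetical.
PERFORMANCE, STANDARD = "performance", "standard"

class ModeSwitchingDrive:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.mode = PERFORMANCE  # a mostly-empty drive starts in performance mode

    def write(self, num_bytes):
        self.used += num_bytes
        # Once more than half the drive holds user data, reorganize once and
        # drop back to the standard mode.
        if self.mode == PERFORMANCE and self.used > self.capacity // 2:
            self.reorganize_pages()   # one-time hit; performance dips briefly
            self.mode = STANDARD

    def reorganize_pages(self):
        pass  # placeholder: the real drive shuffles NAND pages internally
```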

You may remember Intel did something similar (on-the-fly internal data reorganization) after the first X25-M firmware update; however, that process took much longer.

This isn't the only performance trick OCZ has up its sleeve, but it is something that is enabled by the fact that OCZ finally has full, low-level control over the Vertex 4's firmware.



The 128GB Vertex 4

For the initial Vertex 4 review, we only had 256GB and 512GB samples to play with. It's common for manufacturers to only have one or two models to send to reviewers, especially if the product hasn't launched to the public yet. However, we finally have a 128GB Vertex 4 and we of course put it through our regular test suite. Before we get to the benchmarks, let's have a look inside the 128GB Vertex 4:

 

The PCB is the same as in the 256GB and 512GB models. Aside from the Indilinx Everest 2 controller, there are eight of Intel's 25nm synchronous MLC NAND packages, coupled with 512MB of DDR3-1333 DRAM from Micron.

 

Flip the PCB over and we find exactly the same major components sans the controller. There are sixteen NAND packages in total, meaning that each is a single-die 8GB package. The second DRAM chip brings total cache to 1GB, although OCZ has said that only early units come with 1GB of DRAM. The 128GB and 256GB units found in stores today should come with 512MB of DRAM instead.



Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews. Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see).

We perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
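
To visualize what that workload looks like, here is a minimal Python sketch of the access pattern; it is purely illustrative and not Iometer code:

```python
import random

# Not Iometer itself - just a sketch of the access pattern the 4KB random test
# generates: 4KB transfers, 4KB-aligned, spread over an 8GB region of the drive.
# The real test keeps three IOs outstanding and runs for three minutes.
SPAN_BYTES = 8 * 1024**3   # 8GB test region
IO_SIZE = 4 * 1024         # 4KB transfers

def random_offsets(count):
    """Yield 4KB-aligned byte offsets within the 8GB span."""
    slots = SPAN_BYTES // IO_SIZE
    for _ in range(count):
        yield random.randrange(slots) * IO_SIZE

# The first few offsets of a run:
for offset in random_offsets(5):
    print(offset)
```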

Desktop Iometer - 4KB Random Read (4K Aligned)

Desktop Iometer - 4KB Random Write (4K Aligned) - 8GB LBA Space

Desktop Iometer - 4KB Random Write (8GB LBA Space QD=32)

Neither the 1.4 nor the 1.5 firmware promised any improvements in random read/write performance.

As for the 128GB model, which is new to our tests, its random read/write performance is actually on par with the 256GB and 512GB capacities. This is a pleasant surprise, as the difference between the 128GB and 256GB capacities in many drive families can be quite substantial (e.g. Samsung's SSD 830).

Sequential Read/Write Speed

To measure sequential performance we ran a one minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.

Desktop Iometer - 128KB Sequential Read (4K Aligned)

The graph above is a relief: the 1.4 firmware clearly fixes the low queue depth sequential read issue. Not only is the issue fixed, the resulting sequential read performance is also very good.

Desktop Iometer - 128KB Sequential Write (4K Aligned)

OCZ promised increased sequential write performance for the 128GB, 256GB and 512GB models with the 1.5 firmware, but at least in Iometer, at low queue depths, the difference is negligible. We'll take a look at AS-SSD and ATTO at higher queue depths next.



AS-SSD Incompressible Sequential Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers, while other drives continue to work at roughly the same speed as with compressible data.

Incompressible Sequential Read Performance - AS-SSD

AS-SSD's sequential read performance has surprisingly gone downhill with the 1.4 firmware. The drops aren't huge, only 17.1MB/s (~4%) for the 256GB model and 31.5MB/s (~7%) for the 512GB one. While the 1.5 firmware brings some improvement, it's not enough to bring performance back to where it originally was.

Incompressible Sequential Write Performance - AS-SSD

OCZ claimed increased sequential write speeds with both the 1.4 and 1.5 firmwares. Iometer didn't show any significant changes, but AS-SSD is telling a different story. The 256GB model gets a substantial boost of over 100MB/s, and the 1.5 firmware should give a small increase as well if the 512GB model's scores are anything to go by. The Vertex 4 is definitely in a class of its own when it comes to high queue depth sequential write performance, although we'll find out how big of a deal this is for real world performance once we get to our AnandTech Storage Bench results.



Performance vs. Transfer Size

All of our Iometer sequential tests happen at a queue depth of 1, which is indicative of a light desktop workload. That said, it isn't too far-fetched to see much higher queue depths on the desktop. The performance of these SSDs also varies greatly with the size of the transfer. For this next test we turn to ATTO and run sequential transfers over a 2GB span of LBAs at a queue depth of 4, varying the size of the transfers.
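
For a concrete picture of the sweep, here's a rough Python sketch of the test's shape; this isn't ATTO's code, and the transfer sizes listed are representative rather than ATTO's exact set:

```python
# Rough sketch of the sweep ATTO performs: sequential transfers over a 2GB span
# of LBAs at a queue depth of 4, repeated once per transfer size. This is not
# ATTO's code, and the transfer sizes below are representative rather than
# ATTO's exact set.
SPAN_BYTES = 2 * 1024**3
QUEUE_DEPTH = 4
TRANSFER_SIZES = [2**n * 1024 for n in range(1, 11)]  # 2KB .. 1024KB

for size in TRANSFER_SIZES:
    ios = SPAN_BYTES // size  # back-to-back transfers covering the whole span
    print(f"{size // 1024:>4}KB transfers: {ios:>7} sequential IOs at QD{QUEUE_DEPTH}")
```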


Read performance is now better and more consistent, especially at smaller transfer sizes. There is still an odd drop in performance at the 512KB transfer size, but overall performance scales better with the new firmwares as the transfer size increases. Compared to other drives, though, the Vertex 4's read performance between 2KB and 64KB transfer sizes still isn't all that impressive.

Write performance at all capacities and transfer sizes is great. Only Intel's 240GB SSD 520 is faster, but this is expected since it uses a SandForce controller and ATTO tests with highly compressible data.



AnandTech Storage Bench 2011

Last year we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. Anand assembled the traces out of frustration with the majority of what we have today in terms of SSD benchmarks.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.

Originally we kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system. Later, however, we created what we refer to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. This represents the load you'd put on a drive after nearly two weeks of constant usage. And it takes a long time to run.

1) The MOASB, officially called AnandTech Storage Bench 2011—Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. Our thinking was that it's during application installs, file copies, downloading, and multitasking with all of this that you can really notice performance differences between drives.

2) We tried to cover as many bases as possible with the software incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II and WoW are both a part of the test), as well as general use stuff (application installing, virus scanning). We included a large amount of email downloading, document creation, and editing as well. To top it all off we even use Visual Studio 2008 to build Chromium during the test.

The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:

AnandTech Storage Bench 2011—Heavy Workload IO Breakdown
IO Size | % of Total
4KB     | 28%
16KB    | 10%
32KB    | 10%
64KB    | 4%

Only 42% of all operations are sequential; the rest ranges from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.
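
For reference, the operation counts above work out to a roughly 55/45 read/write split:

```python
# The Heavy suite's read/write balance, computed from the operation counts above.
reads, writes = 2_168_893, 1_783_447
total = reads + writes
print(f"reads:  {reads / total:.1%}")   # ~54.9% of all operations
print(f"writes: {writes / total:.1%}")  # ~45.1% of all operations
```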

Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result we're going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time we'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, we will also break out performance into reads, writes, and combined. The reason we do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.

There's also a new light workload for 2011. This is a far more reasonable, typical every day use case benchmark. It has lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback, as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB but it's still multiple times more write intensive than what we were running last year.

We don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea. The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests.

AnandTech Storage Bench 2011—Heavy Workload

We'll start out by looking at average data rate throughout our new heavy workload test:

Heavy Workload 2011 - Average Data Rate

The 1.4 update brought a small boost in our Heavy suite. As you can see in the IO breakdown table above, most IOs are small, and the 1.4 and 1.5 firmwares brought only marginal improvements to small transfer size performance.

Heavy Workload 2011 - Average Read Speed

As I mentioned on the previous page, the Vertex 4 still lacks read performance at small transfer sizes and the graph above supports this. The average write performance is very good, however, and definitely makes up for the middle of the road read performance.

Heavy Workload 2011 - Average Write Speed

The next three charts just represent the same data, but in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy for during this entire test. Note that disk busy time excludes any and all idles; this is just how long the SSD was busy doing something:

Heavy Workload 2011 - Disk Busy Time

Heavy Workload 2011 - Disk Busy Time (Reads)

Heavy Workload 2011 - Disk Busy Time (Writes)

 



AnandTech Storage Bench 2011, Light Workload

Our new light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio does a better job of mimicking a typical light workload (although even lighter workloads would be far more read centric). The IO breakdown is similar to the heavy workload at small IOs; however, you'll notice that there are far fewer large IO transfers:

AnandTech Storage Bench 2011—Light Workload IO Breakdown
IO Size | % of Total
4KB     | 27%
16KB    | 8%
32KB    | 6%
64KB    | 5%

Light Workload 2011 - Average Data Rate

Performance improved a bit with the latest OCZ firmwares in our light workload, but not enough to substantially change its position. As we mentioned in our initial review, the Vertex 4 seems best suited for the truly intense, heavy multitasking workloads. 

Once again the issue is small read IO performance, which you can see from the chart below:

Light Workload 2011 - Average Read Speed

...and once again write performance is very good:

Light Workload 2011 - Average Write Speed

Light Workload 2011 - Disk Busy Time

Light Workload 2011 - Disk Busy Time (Reads)

Light Workload 2011 - Disk Busy Time (Writes)



Final Words

The latest Vertex 4 firmware updates provide a noticeable boost in sequential write speed at 128GB and 256GB capacities, and the low queue depth sequential read performance issue has also been fixed. Our primary complaints from the initial review, at least from a performance perspective, have been addressed.  

That being said, there's still room for improvement. Small file size sequential read performance needs work. Thankfully most sequential reads in client workloads tend to be in the sweet spot for the Vertex 4, but there are some applications that do a lot of small sequential IO (e.g. web browser cache accesses). 

The Vertex 4 continues to do a great job addressing one of the major performance issues with SSDs: maintaining great write performance. The V4 always tested very well under the most strenuous of circumstances. Now the trick is bringing mass appeal to the drive, which is admittedly more about ensuring compatibility and reliability than improving performance with small files.

OCZ's new performance mode in the Vertex 4's firmware is pretty unique. While the specifics of what's going on internally are unknown, (somewhat) dynamically switching between performance states depending on the amount of space used on the drive is an interesting idea. I don't know how practical it is for the majority of users (I tend to run most of my drives well above 50% capacity), but innovation should always be encouraged. In this case, it's innovation that's the direct result of having complete access to the controller's firmware - an important step for OCZ in its evolution as a drive maker.
