
Testing SSD Drives with DRBD: Intel DC S3700 Series


Over the next few weeks we’ll be posting results from tests that we’ve run against various manufacturers’ SSD drives, including Intel, SanDisk, and Micron, to name a few.

The first post in this series goes over our findings on the Intel DC S3700 Series 800GB SATA SSD drives.

Background:
Intel Corporation designs, manufactures, and sells integrated digital technology platforms worldwide. The company produces SSDs as well as the NAND flash memory products used inside them.

For those who are unfamiliar with the “shared nothing” high availability approach to block-level synchronous data replication: DRBD uses two (2) separate servers so that if one (1) fails, the other takes over. Synchronous replication is completely transaction safe and is used for 100% data protection. DRBD has been available as part of the mainline Linux kernel since version 2.6.33.

This post reviews DRBD in an active/passive configuration using synchronous replication (DRBD’s Protocol C). Server A is active and server B is passive. Due to DRBD’s positioning in the Linux kernel (just above the disk scheduler), DRBD is application agnostic. It can work with any filesystem, database, or application that writes data to disk on Linux.
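
For readers who want to picture the setup, a minimal two-node, Protocol C resource configuration might look like the sketch below. The host names, addresses, and backing device path are illustrative assumptions, not the exact configuration used in these tests:

    resource r0 {
        protocol C;                     # fully synchronous replication
        device    /dev/drbd0;           # replicated block device presented to the application
        disk      /dev/sdb1;            # backing SSD partition (assumed)
        meta-disk internal;

        on server-a {                   # active node (hypothetical host name)
            address 192.168.10.1:7789;
        }
        on server-b {                   # passive node (hypothetical host name)
            address 192.168.10.2:7789;
        }
    }

Any filesystem or application is then pointed at /dev/drbd0 rather than at the backing disk directly.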

High Availability Testing: Sequential Read/Writes
Objective: Determine the performance implications of synchronous replication when using high performance Intel SSD drives.

In the initial test, LINBIT used a 10GbE connection between the servers. The Ethernet connection’s latency became the bottleneck when replicating data, so we replaced the 10GbE link with Dolphin interconnect cards, removing the latency constraint.
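
As a rough illustration, a sequential write run against the DRBD device can be driven with a tool such as fio. The device path, block size, and job parameters below are examples only, not the exact settings behind the numbers in this post:

    fio --name=seqwrite --filename=/dev/drbd0 --rw=write \
        --bs=1M --size=10G --ioengine=libaio --iodepth=32 \
        --direct=1 --group_reporting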

Each test was run 5 times; the averages are displayed below:

[Chart 1.0: Sequential read/write throughput (MB/s) for the bare SSD, DRBD, and DRBD with EXT4]

The advertised Intel drive speeds are 500MB/s read and 460MB/s write. As you can see from the table above, installing DRBD introduced negligible write overhead, and mounting an EXT4 filesystem on top of DRBD only incurs a 1.98% performance hit.

With DRBD running, the SSDs still worked at or above the drive’s advertised speed. In every sequential read/write scenario, the high performance Intel SSD drives with DRBD performed near or above their advertised speeds. An overhead of 0.5-2% is a small price to pay for guaranteed data integrity.

The data from Chart 1.0 above, represented graphically:

[Graph: Intel sequential read/write throughput]

High Availability Testing: Random Read/Write Tests
Objective: Mimic production scenarios by using random reads and writes to determine the performance implications of synchronous replication.

Having established the maximum sequential speeds of DRBD replication with the Intel DC S3700 800GB SSDs, we dig deeper using random read and write tests. Random reads and writes simulate how many applications and databases behave in a production environment.
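
As an illustration, a mixed random read/write workload of this kind can be generated with fio; again, the parameters below are example values rather than the exact settings used for these results:

    fio --name=randrw --filename=/dev/drbd0 --rw=randrw --rwmixread=70 \
        --bs=4k --ioengine=libaio --iodepth=32 --direct=1 \
        --runtime=60 --time_based --group_reporting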

[Table: Random read/write IOPS for the bare SSD with EXT4 and for DRBD]

The data demonstrates that in this type of environment, deploying DRBD for local data replication with Intel hardware has minimal impact on overall performance compared to running a single SSD, and can even improve it.

As of DRBD 8.4, DRBD can do read balancing, which increases the read performance of the DRBD device. As you can see, read performance surpasses that of a single Intel SSD by up to 63.9%. This feature lets you make use of the otherwise idle peer server that would just be sitting there waiting for a failover.
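
Read balancing is configured per resource in the disk section. A minimal sketch, assuming the example resource shown earlier, looks like this (the policy chosen here is only one of several available; it is not necessarily the one used in these tests):

    resource r0 {
        disk {
            # distribute read requests across both nodes;
            # other policies include prefer-local (the default),
            # prefer-remote and least-pending
            read-balancing round-robin;
        }
        # ... remaining resource definition as before ...
    }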

We saw 11,367 IOPS when writing to the SSD with the EXT4 filesystem without DRBD installed and 11,480 IOPS when replicating writes with DRBD. This represents a slight performance enhancement when using DRBD and synchronously replicating data. The performance improvements are even bigger for reads.

[Graph: Intel random read/write IOPS results]

Increased performance when using DRBD is counterintuitive. There is natural overhead when synchronously replicating data, so why are the disks performing faster? DRBD is carefully optimized for performance. This involves flushing kernel-internal request queues where it makes sense from DRBD’s point of view, which can lead to the effect that a certain test pattern gets executed faster with DRBD than without it.

In random read/write mode, it is safe to say that using these technologies together will enhance service availability with minimal performance implications.

Stay tuned next week for our findings on SanDisk’s Optimus Ascend™ 2.5” 800GB SAS SSD drives.

Authored by: Greg Eckert, Matt Kereczman, Devin Vance

 

