benchmarking the storage: iSCSI Enterprise Target (+Software RAID0)

This time we set up a software RAID0 volume with 2 drives, which gave a volume size of 976784130 sectors = 476945 megabytes. The volume is exported with the iSCSI Enterprise Target software for Linux.

You may ask: why only 2 drives this time? The answer: the guy who set up the Linux machine did not know how to delete the RAID5 volume he had created for the previous benchmark, so there were only two 250 gigabyte drives left… and yes: we sometimes have a subliminal feeling that we have to hurt him.
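
For reference, a setup like this can be sketched with mdadm and the iSCSI Enterprise Target. This is a minimal sketch, not our exact configuration: the device names and the target IQN below are assumptions, adjust them to your own system.

```shell
# Create a software RAID0 array from the two remaining 250 GB drives
# (device names /dev/sdb and /dev/sdc are assumed)
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# Export the array via the iSCSI Enterprise Target: append a target
# definition with one LUN to /etc/ietd.conf (the IQN is a made-up example)
cat >> /etc/ietd.conf << 'EOF'
Target iqn.2006-01.example.com:storage.raid0
        Lun 0 Path=/dev/md0,Type=blockio
EOF

# Restart the target daemon so it picks up the new configuration
/etc/init.d/iscsi-target restart
```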

Interface transfer rate with a block size of 128 sectors at 0.0 percent of the capacity:

sequential read rate, medium (unthrottled): 65399 Kilobyte/s
sequential read rate, read-ahead (latency 1.08 ms): 67237 Kilobyte/s
repeated sequential read (“coretest”): 55854 Kilobyte/s

sustained transfer rate (block size: 128 sectors):
read:

  • Average: 54098.5 Kilobyte/s
  • Minimum: 51692.8 Kilobyte/s
  • Maximum: 54530.8 Kilobyte/s

write:

  • Average: 30086.6 Kilobyte/s
  • Minimum: 29700.0 Kilobyte/s
  • Maximum: 30895.6 Kilobyte/s

access time read:

  • Average: 13.61 ms
  • Minimum: 0.11 ms
  • Maximum: 26.78 ms

access time write:

  • Average: 15.70 ms
  • Minimum: 0.36 ms
  • Maximum: 41.20 ms

access time read (<504 MByte):

  • Average: 6.22 ms
  • Minimum: 0.11 ms
  • Maximum: 21.31 ms

access time write (<504 MByte):

  • Average: 7.31 ms
  • Minimum: 0.29 ms
  • Maximum: 26.66 ms

benchmarking the storage: iSCSI Enterprise Target (+Software RAID5)

This time we set up a software RAID5 volume with 3 drives, which gave a volume size of 976784130 sectors = 476945 megabytes. The volume is exported with the iSCSI Enterprise Target software for Linux.
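
The RAID5 variant can be created the same way. Again a minimal sketch with assumed device names, not our exact commands:

```shell
# Create a software RAID5 array from three drives (assumed names);
# one drive's worth of capacity goes to parity, so three 250 GB
# drives leave about two drives' worth of usable space
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Check the initial parity sync before benchmarking --
# results measured while the array is still resyncing would be skewed
cat /proc/mdstat
```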

Interface transfer rate with a block size of 128 sectors at 0.0 percent of the capacity:

sequential read rate, medium (unthrottled): 61384 Kilobyte/s
sequential read rate, read-ahead (latency 1.15 ms): 67472 Kilobyte/s
repeated sequential read (“coretest”): 54294 Kilobyte/s

sustained transfer rate (block size: 128 sectors):
read:

  • Average: 51913.4 Kilobyte/s
  • Minimum: 49738.2 Kilobyte/s
  • Maximum: 63889.7 Kilobyte/s

write:

  • Average: 9080.2 Kilobyte/s
  • Minimum: 6650.9 Kilobyte/s
  • Maximum: 10129.3 Kilobyte/s

access time read:

  • Average: 13.47 ms
  • Minimum: 0.12 ms
  • Maximum: 28.65 ms

access time write:

  • Average: 38.82 ms
  • Minimum: 10.10 ms
  • Maximum: 108.19 ms

access time read (<504 MByte):

  • Average: 6.09 ms
  • Minimum: 0.12 ms
  • Maximum: 19.58 ms

access time write (<504 MByte):

  • Average: 14.42 ms
  • Minimum: 0.37 ms
  • Maximum: 75.51 ms

benchmarking the storage: Promise VTrak m500i

After all the strange things we had to deal with using the Promise VTrak, we ran some benchmarks (each complete run took about 10 hours).

So here are the results of the first test:

We created a hardware RAID5 volume with 5 drives, which gave a volume size of 1945310850 sectors = 949859 megabytes.
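
As a sanity check, the sector-to-megabyte conversion is easy to reproduce. A small sketch, assuming the usual 512-byte sectors:

```shell
# Convert a sector count to binary megabytes, rounded to the nearest MByte:
# (sectors * 512 bytes) / 1048576 bytes per MByte, +524288 for rounding
sectors_to_megabytes() {
    echo $(( ($1 * 512 + 524288) / 1048576 ))
}

# The 5-drive hardware RAID5 volume from above
sectors_to_megabytes 1945310850   # prints 949859
```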

Here are the results in all their beauty:

Interface transfer rate with a block size of 128 sectors at 0.0 percent of the capacity:

sequential read rate, medium (unthrottled): 43570 Kilobyte/s
sequential read rate, read-ahead (latency 1.62 ms): 39447 Kilobyte/s
repeated sequential read (“coretest”): 68155 Kilobyte/s

sustained transfer rate (block size: 128 sectors):
read:

  • Average: 44896.2 Kilobyte/s
  • Minimum: 18416.7 Kilobyte/s
  • Maximum: 45645.9 Kilobyte/s

write:

  • Average: 29821.7 Kilobyte/s
  • Minimum: 12688.0 Kilobyte/s
  • Maximum: 30792.4 Kilobyte/s

access time read:

  • Average: 13.58 ms
  • Minimum: 4.13 ms
  • Maximum: 104.33 ms

access time write:

  • Average: 9.58 ms
  • Minimum: 0.48 ms
  • Maximum: 2930.85 ms

access time read (<504 MByte):

  • Average: 6.14 ms
  • Minimum: 0.48 ms
  • Maximum: 37.94 ms

access time write (<504 MByte):

  • Average: 3.74 ms
  • Minimum: 0.47 ms
  • Maximum: 1258.26 ms