Reading a recent article from Brian Morin of Condusiv Technologies, we noted a couple of great points we can relate to:
"Typically, as soon as we mention "fragmentation" and "SAN" in the same sentence, an 800 pound gorilla walks into the room and we’re met with some resistance as there is an assumption that RAID controllers and technologies within the SAN mitigate the problem of fragmentation at the physical layer......
As much as SAN technologies do a good job of managing blocks at the physical layer, the real problem why SAN performance degrades over time has nothing to do with the physical disk layer but rather fragmentation that is inherent to the Windows file system at the logical disk software layer.
In a SAN environment, the physical layer is abstracted from the Windows OS, so Windows doesn't even see the physical layer at all – that’s the SAN's job. Windows references the logical disk layer at the file system level.
Fragmentation is inherent to the fabric of Windows. When Windows writes a file, it is not aware of the size of the file or file extension, so it will break that file apart into multiple pieces with each piece allocated to its own address at the logical disk layer. Therefore, the logical disk becomes fragmented BEFORE the SAN even receives the data.
How does a fragmented logical disk create performance problems? Unnecessary IOPS (input/output operations per second). If Windows sees a file existing as 20 separate pieces at the logical disk level, it will execute 20 separate I/O commands to process the whole file. That's a lot of unnecessary I/O overhead to the server and, particularly, a lot of unnecessary IOPS to the underlying SAN for every write and subsequent read.
By eliminating the Windows I/O "tax" at the source, organizations achieve greater I/O density, improved throughput, and less I/O required for any given workload. Fragmentation prevention at the top of the technology stack ultimately means systems can process more data in less time.
Many administrators are led to believe they need to buy more IOPS to improve storage performance when in fact, the Windows I/O tax has made them more IOP dependent than they need to be because much of their workload is fractured I/O."
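The "20 pieces, 20 I/O commands" point can be illustrated with a toy model. This is a hedged sketch, not actual NTFS or SAN behaviour: it simply treats each contiguous extent of a file as costing one read command, which is the relationship the article describes. The file sizes and cluster layouts are hypothetical.

```python
# Toy model of the Windows I/O "tax": a file stored as N extents
# (fragments) at the logical disk layer needs N read commands,
# while the same data in one contiguous extent needs only one.
# Illustrative only -- not a model of real NTFS internals.

def read_ops_for_file(extents):
    """Each contiguous extent costs one I/O command to read."""
    return len(extents)

# The same 1 MB file laid out two ways (start cluster, cluster count):
contiguous = [(0, 256)]                          # one contiguous run
fragmented = [(i * 40, 13) for i in range(20)]   # 20 scattered pieces

print(read_ops_for_file(contiguous))   # 1 I/O command
print(read_ops_for_file(fragmented))   # 20 I/O commands
```

The data read is identical in both cases; only the number of commands issued to the storage layer differs, which is where the extra IOPS load on the SAN comes from.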
We have often seen Domino servers labouring along with files in millions of fragments, while organisations spend huge sums of money on storage systems and virtual infrastructure, convinced they can't be having the very problem they are seeing. Yet the evidence is right there. The inconvenient truth is that addressing the Windows I/O "tax" at the source is part of optimising the entire technology stack. Organisations that understand this, and start at the source, will achieve improved throughput with less I/O required, and will therefore maximise their investment in the entire system.
It's akin to driving around with a 500-pound anvil in the trunk of your car, oblivious to the load because you're running a powerful new V8 engine!