One aspect of this discussion has confused me, and I was hoping somebody would address it...
I believe I have seen slower-than-expected pvmove times in the past (though I only rarely do it, so it has never particularly concerned me). When I saw it, my first assumption was that the pvmove had to be done "carefully", ensuring that every segment was moved in such a way that it was definitely in one place or definitely in the other, and never "neither" or "both". This is particularly important if the volume is mounted and actively in use, as it was in my case.
Would these safety checks not reduce overall performance? Sure, it might transfer one segment at full speed, but then pause to do some book-keeping, fully syncing the data and metadata out to both physical volumes to ensure the operation was still crash-safe?
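To make that concrete, here is a minimal sketch in Python of the effect I mean. It is purely illustrative - pvmove actually layers a device-mapper mirror over each segment and lets the kernel handle synchronization, so this is not its real implementation - and the segment size and paths are made up for the example:

import os
import time

SEGMENT_SIZE = 4 * 1024 * 1024  # hypothetical 4 MiB "extent"

def checkpointed_copy(src_path, dst_path):
    # Copy one segment at a time, with a durability barrier after
    # each one, so a crash leaves every segment either fully on the
    # source or fully on the destination - never half-moved.
    copied = 0
    start = time.monotonic()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            segment = src.read(SEGMENT_SIZE)
            if not segment:
                break
            dst.write(segment)
            dst.flush()
            os.fsync(dst.fileno())  # the pipeline stalls here...
            # ...and real code would also persist and fsync metadata
            # recording "segment N now lives on the destination".
            copied += len(segment)
    elapsed = time.monotonic() - start
    print(f"{copied / elapsed / 1e6:.1f} MByte/s with per-segment fsync")

Each fsync drains the device queue before the next segment starts, so even if the raw copy runs at full speed, per-segment barriers alone could plausibly account for a noticeable slowdown.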
As for SAN speeds - I don't think LVM has ever been proven to be a bottleneck for me. On our new OpenStack cluster, I am seeing 550+ MByte/s with iSCSI-backed disks, and 700+ MByte/s with NFS-backed disks (with read and write caches disabled). I don't even look at LVM as a cause for concern here, as there is usually something else at play. In fact, on the same OpenStack cluster, I am using LVM on NVMe drives, with an XFS LV backing the QCOW2 images, and I can get 2,000+ MByte/s sustained with this setup. Again, LVM isn't even a performance consideration for me.
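For what it's worth, when I want to rule LVM in or out, a crude sequential-write check on a filesystem sitting on the LV is usually enough. A minimal sketch (the mount point is hypothetical; the fsync is there so the page cache doesn't inflate the number):

import os
import time

PATH = "/mnt/test/throughput.bin"  # hypothetical mount point on the LV
CHUNK = b"\0" * (4 * 1024 * 1024)  # 4 MiB writes
TOTAL = 1024 * 1024 * 1024         # 1 GiB total

start = time.monotonic()
with open(PATH, "wb") as f:
    written = 0
    while written < TOTAL:
        f.write(CHUNK)
        written += len(CHUNK)
    f.flush()
    os.fsync(f.fileno())           # force data to stable storage
elapsed = time.monotonic() - start
os.unlink(PATH)
print(f"sequential write: {TOTAL / elapsed / 1e6:.0f} MByte/s")

Run it once against a filesystem on the raw device and once on the LV; if the two numbers match to within noise, LVM is not the bottleneck.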