My simulated data center hardware configuration is now complete. My 4 HP hosts are running XenServer 6.2 in a single resource pool. I wanted to test the high availability feature, so I added a SAN device. The SAN is a D-Link 1550-04, which I found for a ridiculously cheap price. I’ll review it in detail later.
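For anyone recreating this, pooling the hosts is a one-line job per member. Here is a minimal sketch, assuming the first host is already acting as pool master; the address and credentials are placeholders for my lab values:

```
# Run on each of the three member hosts; the master needs nothing.
xe pool-join master-address=192.168.1.10 \
  master-username=root master-password=<password>

# Verify from the master that all four hosts joined
xe host-list params=name-label,enabled
```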
I created two iSCSI volumes on the SAN. One is 10 gigabytes, for the high availability “heartbeat”. The other, at 1 terabyte, serves as the primary shared VM storage. All the VMs will use this shared iSCSI volume rather than local storage. If I can get the D-Link to replicate with the Synology, I will add disaster recovery as well.
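Attaching those volumes as shared storage repositories and switching on HA comes down to a couple of `xe` commands. A minimal sketch, with placeholder addresses, IQNs, and UUIDs standing in for my real ones:

```
# Create the shared 1 TB VM storage SR over iSCSI (values are placeholders;
# xe sr-probe will report the targetIQN and SCSIid for your target)
xe sr-create name-label="Shared VM Storage" shared=true \
  type=lvmoiscsi content-type=user \
  device-config:target=192.168.20.5 \
  device-config:targetIQN=iqn.2013-01.local.san:vmstore \
  device-config:SCSIid=<scsi-id>

# Repeat for the small heartbeat SR, then enable HA against it
xe pool-ha-enable heartbeat-sr-uuids=<heartbeat-sr-uuid>
```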
The network configuration has been a challenge. I read the excellent Citrix whitepaper on the subject; however, I’m still not sure I have the optimal settings. I decided to add an additional NIC to each server, bringing them to a total of 4. I divided the NICs as follows (see the sketch after this list for the corresponding setup):
- 1 for the management network
- 2 (bonded) for the VM (user) network
- 1 for the storage network, which also sits on its own VLAN
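Here is a minimal sketch of that layout in `xe` terms, assuming eth1/eth2 are the bonded pair, eth3 is the storage NIC, and VLAN 100 and all UUIDs are placeholders:

```
# Bond eth1 and eth2 for VM traffic: look up their PIF UUIDs,
# create a network to carry the bond, then bond the two PIFs
# (repeat on each pool member with that host's own PIF UUIDs)
xe pif-list device=eth1 params=uuid,host-name-label
xe network-create name-label="VM Network (bonded)"
xe bond-create network-uuid=<vm-net-uuid> pif-uuids=<eth1-pif-uuid>,<eth2-pif-uuid>

# Tag the storage NIC (eth3) onto its own VLAN
xe network-create name-label="Storage VLAN"
xe vlan-create pif-uuid=<eth3-pif-uuid> network-uuid=<storage-net-uuid> vlan=100
```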
Ideally, I would have 6 NICs so that I could allocate 2 to each of these roles. That would follow the primary reliability directive: No Single Point of Failure. Given my needs, though, 4 should be plenty; I will accept some risk with my two single points of failure.
The part I don’t fully understand is multipathing. If, for example, I bond two NICs dedicated to storage, and even connect them to two switches, XenServer will not see this as multipathing. From what I can tell, multipathing works at the iSCSI layer (separate sessions over separate subnets) while bonding works at the NIC layer, which is why the same pair of NICs can’t do both. Apparently multipathing is a good thing. I don’t know how to reconcile these without adding yet another pair of bonded NICs. Is the choice multipathing or bonding? Or is the answer simply more NICs?
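For when I do sort it out, here is a minimal sketch of how multipathing gets switched on for a XenServer 6.x host, assuming the host UUID is a placeholder and the host is taken out of service first:

```
# Disable the host (maintenance mode) before changing multipath settings
xe host-disable uuid=<host-uuid>

# Enable multipathing and select the device-mapper handler
xe host-param-set uuid=<host-uuid> other-config:multipathing=true
xe host-param-set uuid=<host-uuid> other-config:multipathhandle=dmp
xe host-enable uuid=<host-uuid>
```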
Within the pool, the network issue is much simpler. I can create a variety of virtual networks, some of which will be entirely private. I can already see that the production pipeline will require several of these private networks.
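Creating one of those private networks is trivial: a network with no physical interface (PIF) attached carries only VM-to-VM traffic. A minimal sketch, with a hypothetical name; note that in XenServer 6.2 a plain internal network like this is private to a single host, and spanning one across the pool requires the Distributed Virtual Switch Controller:

```
# No PIF attached, so this network stays internal to the host;
# "pipeline-private-01" is just a placeholder name
xe network-create name-label="pipeline-private-01" \
  name-description="Private network for the production pipeline"
```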