Xen Host Configuration

My simulated data center hardware configuration is now complete. My four HP hosts are running XenServer 6.2 in a single resource pool. I wanted to test the high availability feature, so I added a SAN device. The SAN is a DLINK 1550-04 that I found for a ridiculously cheap price; I’ll review it in detail later.

I created two iSCSI volumes on the SAN. One is 10 gigabytes for the high availability “heartbeat”; the other, at 1 terabyte, serves as the primary shared VM storage. All the VMs will use this shared iSCSI volume rather than local storage. If I can get the DLINK to replicate with the Synology, I will add disaster recovery as well.
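For reference, attaching an iSCSI LUN as a shared storage repository can be sketched with the xe CLI. The target IP, IQN, and SCSIid below are placeholders, not my SAN’s actual values:

```shell
# Probe the target to discover IQNs and LUNs (placeholder address).
xe sr-probe type=lvmoiscsi device-config:target=10.0.21.10

# Create the shared VM storage SR on the 1 TB LUN (placeholder IQN/SCSIid).
xe sr-create name-label="SAN VM Storage" shared=true type=lvmoiscsi \
  device-config:target=10.0.21.10 \
  device-config:targetIQN=iqn.2013-01.com.example:vmstore \
  device-config:SCSIid=<scsi-id-from-probe>

# Enable HA using the small heartbeat SR.
xe pool-ha-enable heartbeat-sr-uuids=<uuid-of-heartbeat-SR>
```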

The network configuration has been a challenge. I read the excellent Citrix whitepaper on the subject; however, I’m still not sure I have the optimal settings. I decided to add an additional NIC to each server, giving them a total of 4. I divided the NICs as follows:

  • 1 for the management network
  • 2 (bonded) for the VM (user) network
  • 1 for the storage network which is also its own VLAN

Ideally, I would need 6 NICs so that I could allocate 2 to each of these roles, following the primary reliability directive: no single point of failure. Given my needs, though, 4 should be plenty; I will accept some risk with my two single points of failure.
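As a sketch of the bonded pair in the list above (assuming you look up the network and PIF UUIDs first), XenServer creates bonds via the xe CLI:

```shell
# Create a network for the bonded VM interface, then bond two PIFs onto it.
xe network-create name-label="VM Bond Network"
xe bond-create network-uuid=<vm-network-uuid> \
  pif-uuids=<pif-uuid-of-eth1>,<pif-uuid-of-eth2>
```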

The part I don’t fully understand is multipathing. If, for example, I bond two NICs dedicated to storage and even connect them to two switches, Xen will not see this as multipathing. Apparently multipathing is a good thing, but I don’t know how to reconcile the two without adding yet another pair of bonded NICs. Is the choice multipathing or bonding? Or is the answer simply more NICs?
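For what it’s worth, XenServer 6.x turns multipathing on per host rather than per bond (with the host in maintenance mode and its storage repositories unplugged). A hedged sketch:

```shell
# Enable dm-multipath on a host (run while the host is in maintenance mode).
xe host-param-set uuid=<host-uuid> other-config:multipathing=true
xe host-param-set uuid=<host-uuid> other-config:multipathhandle=dmp
```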

Within the pool, the network issue is much simpler. I can create a variety of virtual networks, some of which will be entirely private. I can already see that the production pipeline will require several of these private networks.

Categories: Hardware


8 replies

  1. First, let me say that you have a very nice blog. Very well written.

    Both multipathing and bonding address failover. When I think of multipathing, I usually think of a SAN on a fibre network instead of cat5/6. That said, both provide failover. To use multipathing you need two paths; these could be two single network cables or two sets of bonded network cables (which I guess would technically be four paths).

    That is my take, hope it helps and keep up the good work.


    • Thanks for the comment. Is it unusual to use multipathing with iSCSI and cat6 Ethernet? Xen insists that multipathed connections use different subnets so I had to artificially create two for my SAN VLAN. I guess that Xen’s rationale is that you are using two different switches.

      Networking really isn’t my area of expertise but I did like configuring my switches and router. My conclusion was that you can’t have too many network cards in your servers.


      • Hi,

        Linux bonding (link aggregation in generic terms) provides high availability/failover and/or load balancing (depending on the chosen mode) at layer 2, the Ethernet level. Since iSCSI runs over TCP/IP, a bonded Ethernet link (with a single IP address) appears to it as a single path.
        iSCSI multipath needs two different IP (layer 3) “routes” between initiator and target.
        In your case, with two dual-link bonds, you could, for example, create two storage VLANs, add a VLAN interface for one of them on each bond, and give each an address on a different subnet to get your two paths.

        In the end a possible config could be:

        bond0 (eth0+eth3)

        bond0.10: Management VLAN
        bond0.21: Storage VLAN 1

        bond1 (eth1+eth2)

        bond1.30: Users/VMs VLAN
        bond1.22: Storage VLAN 2

        Ethernet interfaces are paired so that, since you have two dual-interface controllers, each bond contains interfaces from different controllers (no controller SPOF).
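On a plain Linux host, the two storage paths above might look like this in a Debian-style /etc/network/interfaces (addresses and VLAN IDs are illustrative; XenServer manages its own bridges, so treat this purely as a conceptual sketch):

```text
# bond0 = eth0 + eth3 (no address of its own; VLANs ride on top)
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth3
    bond-mode 802.3ad

# Storage VLAN 1 on bond0 -- first iSCSI path
auto bond0.21
iface bond0.21 inet static
    address 10.0.21.11
    netmask 255.255.255.0

# Storage VLAN 2 on bond1 -- second iSCSI path, on a different subnet
auto bond1.22
iface bond1.22 inet static
    address 10.0.22.11
    netmask 255.255.255.0
```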



      • Great comment thanks. If I understand correctly what you are proposing, it is to create VLAN trunks on the bonded interfaces?


  2. Aaron, I don’t think it is unusual, but I would choose either 1. bonding or 2. multipathing, not both. Either scenario would require two NICs to the switch(es).


  3. re,

    (sorry for breaking threading but had no ‘reply’ link under your answer to my post…)

    About trunks, yes, exactly. In fact, the configuration I use in production today has 3×2 GbE interfaces in bonded-pair trunks, with each VLAN subinterface added to its own bridge. My config goes like this:

    eth1+eth2 > bond0

    bond0.10 > vmbr10 : Management
    bond0.11 > vmbr11 : Cluster back-channel

    eth3+eth4 > bond1

    bond1.20 > vmbr20 : storage (DRBD replication)
    bond1.21 > vmbr21 : backup

    eth5+eth0 > bond2

    bond2.30 > vmbr30 : services
    bond2.31 > vmbr31 : users
    bond2.32 > vmbr32 : guests

    Bonds use LACP layer3+4 mode with one leg connected to each member of a dual-switch stack (Netgear M5300). Switch ports are likewise configured as LAG’d trunks accepting and transmitting only known tagged frames.

    A host’s network configuration may seem hairy when handled by hand, but it shouldn’t be once placed under configuration management (CM).
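On the Linux side, the LACP layer3+4 mode mentioned above corresponds to these kernel bonding driver options (a sketch; option names follow the Debian ifenslave convention):

```text
# Kernel bonding driver options for LACP with layer3+4 hashing
bond-mode 802.3ad
bond-xmit-hash-policy layer3+4
bond-miimon 100
bond-lacp-rate fast
```

The layer3+4 hash policy spreads flows across the bond’s legs by IP address and port, which helps iSCSI and other multi-flow traffic actually use both links.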


    • Thanks for this. I didn’t know about this technique, but I like it a lot. The problem is that now you’ve made me want to go back and change my network config… 🙂


      • Tough ! 😀
        Network migration… ultimate level of infra automation and testing, the day you become a DevOps Jedi !
        May the Force be with you 😉

