
 Distributed Virtual Router (DVR)

DVR stands for Distributed Virtual Router. A DVR configuration provides high availability for OpenStack Networking across compute nodes. The layer-2 (L2) and layer-3 (L3) agents run together on each compute node, with the L2 agent operating in an enhanced DVR mode that manages the OVS rules. In this scenario, you do not need a separate network node unless a legacy one is already in place.

Here is a sample network topology showing how distributed routing occurs.


Figure 6.1. DVR configuration diagram

The DVR agent is responsible for creating, updating, and deleting the routers in the router namespaces. For every client in a network owned by the router, the DVR agent pre-populates an ARP entry. By pre-populating ARP entries across compute nodes, the distributed virtual router ensures that traffic reaches the correct destination. The integration bridge on a compute node recognizes an incoming frame's source MAC address as a DVR-unique MAC address, because the L2 agent on every compute node knows all of the DVR-unique MAC addresses used in the cloud. The agent replaces the DVR-unique MAC address with the MAC address of the local subnet interface (the green subnet in the figure) and forwards the frame to the instance.

By default, distributed routing is not turned on. When enable_distributed_routing is set to True, the L2 agent handles the DVR ports detected on the integration bridge. Additionally, once you have enabled distributed routing, the Networking service creates only distributed routers when a tenant creates a router with neutron router-create.
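Once distributed routing is enabled, a router created with the standard CLI is distributed. An administrator can also request the mode explicitly; the router name demo-router below is only an example:

```shell
# neutron router-create --distributed True demo-router
```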

 Configure Distributed Virtual Router (DVR)

  1. Edit the ovs_neutron_plugin.ini file to change enable_distributed_routing to True:

    enable_distributed_routing = True
  2. Edit the /etc/neutron/neutron.conf file to set the base MAC address that the DVR system uses for unique MAC allocation with the dvr_base_mac setting:

    dvr_base_mac = fa:16:3f:00:00:00

    This dvr_base_mac value must be different from the base_mac value assigned to virtual ports, both to ensure port isolation and to aid troubleshooting. The default is fa:16:3f:00:00:00. The first three octets are always kept; if you want a fourth octet fixed as well, replace the fourth octet (00) with a value of your own. The remaining octets are randomly generated.

  3. Edit the /etc/neutron/neutron.conf file to set router_distributed to True.

    router_distributed = True
  4. Edit the l3_agent.ini file to set agent_mode to dvr on compute nodes for multi-node deployments:

    agent_mode = dvr

    When using a separate networking host, set agent_mode to dvr_snat. Also use dvr_snat for DevStack or other single-host deployments.

  5. In the [ml2] section, edit the ml2_conf.ini file to add l2population:

    mechanism_drivers = openvswitch,l2population
  6. In the [agent] section of the ml2_conf.ini file, set these configuration options to these values:

    l2_population = True
    tunnel_types = vxlan
    enable_distributed_routing = True
  7. Restart the OVS L2 agent.

    • Ubuntu/Debian:

      # service neutron-plugin-openvswitch-agent restart
    • RHEL/CentOS/Fedora:

      # service neutron-openvswitch-agent restart
    • SLES/openSUSE:

      # service openstack-neutron-openvswitch-agent restart
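The allocation scheme from step 2 can be illustrated with a short shell sketch. This is illustration only (Neutron performs the allocation internally, and the variable names here are invented): the first three octets of the default dvr_base_mac are preserved and the remaining three are randomized.

```shell
# Illustration only: mimic the DVR unique-MAC allocation scheme.
# With the default dvr_base_mac = fa:16:3f:00:00:00, the first three
# octets are preserved and the last three are randomly generated.
base_prefix="fa:16:3f"
# Read three random bytes and format them as colon-separated hex octets.
rand_suffix=$(od -An -N3 -tx1 /dev/urandom | awk '{printf "%s:%s:%s", $1, $2, $3}')
dvr_mac="${base_prefix}:${rand_suffix}"
echo "$dvr_mac"
```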

 DVR requirements

  • You must use the ML2 plug-in for Open vSwitch (OVS) to enable DVR.

  • Be sure that your firewall or security groups allow UDP traffic over the VLAN, GRE, or VXLAN port to pass between the compute hosts.
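As an example, the rule below permits VXLAN traffic between compute hosts on an iptables-based host firewall. It assumes the default VXLAN UDP port 4789; adjust the port if your deployment overrides it, and use a matching rule for GRE (IP protocol 47) or your VLAN transport where applicable:

```shell
# iptables -A INPUT -p udp --dport 4789 -j ACCEPT
```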

 DVR limitations

  • Distributed virtual router configurations work with the Open vSwitch Modular Layer 2 driver only for Juno.

  • To enable true north-south bandwidth between hypervisors (compute nodes), you must use public IP addresses for every compute node and enable floating IPs.

  • For now, based on the current Neutron design and architecture, DHCP cannot be distributed across compute nodes.
