
 Chapter 5. Deployment

OpenStack Networking provides considerable flexibility when deploying networking in support of a compute environment. As a result, the exact layout of a deployment depends on the expected workloads, the expected scale, and the available hardware.

For demonstration purposes, this chapter concentrates on a networking deployment that consists of these types of nodes:

  • Service node: The service node exposes the networking API to clients and handles incoming requests before forwarding them to a message queue, where they are picked up and acted on by the other nodes. The service node hosts both the networking service itself and the active networking plug-in. In environments that use controller nodes to host the client-facing APIs and the schedulers for all services, the controller node also fulfills the role of service node as the term is used in this chapter.

  • Network node: The network node handles the majority of the networking workload. It hosts the DHCP agent, the Layer-3 (L3) agent, the Layer-2 (L2) agent, and the metadata proxy. For plug-ins that require an agent, it also runs an instance of the plug-in agent, as does every other system that handles data packets in an environment where such plug-ins are in use. Both the Open vSwitch and Linux Bridge mechanism drivers include such an agent.

  • Compute node: The compute node hosts the compute instances themselves. To connect compute instances to the networking services, compute nodes must also run the L2 agent. Like all other systems that handle data packets, they must also run an instance of the plug-in agent where one is required.
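The division of responsibilities above can be summarized as a mapping from node type to the networking components each node hosts. The sketch below is illustrative only: the service names follow common upstream conventions (e.g. an Open vSwitch-based deployment), but the exact set of agents depends on the plug-in and mechanism drivers in use.

```python
# Illustrative sketch: which OpenStack Networking components run on each
# node type in this chapter's example deployment. Service names are
# assumptions based on common upstream naming; the actual set varies by
# plug-in and mechanism driver.
COMPONENTS = {
    "service": [
        "neutron-server",               # networking API + active plug-in
    ],
    "network": [
        "neutron-dhcp-agent",           # DHCP for tenant networks
        "neutron-l3-agent",             # L3 routing
        "neutron-metadata-agent",       # metadata proxy
        "neutron-openvswitch-agent",    # L2 / plug-in agent (OVS assumed)
    ],
    "compute": [
        "neutron-openvswitch-agent",    # L2 agent on every data-path node
    ],
}

def components_for(node_type):
    """Return the list of components expected on the given node type."""
    return COMPONENTS[node_type]
```

Note that the L2 plug-in agent appears on both the network and compute nodes, reflecting the rule that every system handling data packets must run it.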
