Network Service

Goal 1: Add a first-class, customer-facing service for the management of network infrastructure within an OpenStack cloud. This will allow service providers to offer "Networking as a Service" (NaaS) to their customers.

Goal 2: Allow the customer to start and stop network-related services provided by the service provider. These might be load-balancers, firewalls or tunnels, for example. The service provider may charge for these services, so starting one may be a chargeable event.

Goal 3: Allow the customer to configure rich network topologies within the cloud. These topologies will include private connections between VMs, and connections between VMs and network services such as those mentioned in Goal 2. Of course, this reconfiguration must happen without affecting other tenants within the cloud.

Goal 4: Allow the customer to extend their networks from the cloud to a remote site. This is a simple extension of Goal 3 where the customer would configure a connection from their VMs to a bridging device within the cloud, which would then bridge to the appropriate remote site.

Goal 5: Allow the service provider to select and plug in third-party technologies as appropriate. This may be for extended features, improved performance, or reduced complexity or cost. For example, one service provider may choose to offer their firewall service based on hardened Linux VMs, but another one may choose to use commercial firewall software instead.

Goal 6: The extent to which customers can manipulate their own network infrastructure will depend upon the service provider and the underlying technologies that they have deployed. Goal 6 is to gracefully manage the disparity between various deployments. The service provider must be free to limit what customers can do, and the API must gracefully handle that.

Goal 7: Support the network topologies already in use today (Topologies 1 and 2 below).
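One way Goal 6 could surface in the API is through capability discovery: the client learns which operations a given deployment offers before attempting them, and an unsupported operation produces a graceful error rather than an opaque failure. The sketch below is purely illustrative; the class and operation names (`DeploymentCapabilities`, `create_network`, and so on) are invented for this example and are not part of any real API.

```python
# Hypothetical sketch of capability discovery for Goal 6: the service
# provider advertises what customers may do, and the client checks the
# advertised set before issuing a request. All names are illustrative.

class DeploymentCapabilities:
    def __init__(self, supported_operations):
        # e.g. {"create_network", "attach_vif"} -- provider-defined
        self.supported = frozenset(supported_operations)

    def allows(self, operation):
        return operation in self.supported

def try_operation(caps, operation):
    """Return a graceful error instead of failing opaquely."""
    if not caps.allows(operation):
        return {"status": 501, "reason": operation + " not offered by this provider"}
    return {"status": 200, "reason": "ok"}

# A restrictive deployment (e.g. Topology 2, where customers manage nothing):
caps = DeploymentCapabilities({"list_networks"})
print(try_operation(caps, "create_network"))  # graceful 501, not a crash
```

The same pattern lets one client library work against very different deployments, which is the disparity Goal 6 asks the API to absorb.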

Use Cases

Below are a number of example network topologies. For each topology you can consider two use-cases -- the customer who wishes to deploy their resources within such a topology, and the service provider who wants to allow the customer to do that.
Regardless of the topology selected, each VIF on each VM will need an IP address (IPv4 or IPv6). The address may be given to the VM in one of several ways (described below). IP address injection is orthogonal to the topology choice -- any topology may be used in combination with any address injection scheme.

Topology 1: Isolated per-tenant networks

Each tenant has an isolated network, which the tenant accesses via a VPN gateway. Tenants are free to do whatever they want on their network, because it is completely isolated from those of other tenants.
This is the NASA Nebula model today.
This is currently implemented using one VLAN per tenant, and with one instance of nova-network per tenant to act as the VPN gateway. Note that this same model could equally be implemented with different technologies (e.g. a commercial VPN gateway, or using GRE tunnels instead of VLANs for isolation). One aim of this blueprint is to consider the model independently from the underlying implementation technology.
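The one-VLAN-per-tenant scheme described above can be modelled as a simple allocator: each tenant is assigned one VLAN ID from a provider-configured pool on first use, and isolation follows from the VLAN tag. This is an illustrative model of the allocation idea only, not the nova-network implementation.

```python
# Illustrative model of one-VLAN-per-tenant isolation (Topology 1).
# Not the actual nova-network code; just the allocation idea.

class VlanPool:
    def __init__(self, first=100, last=199):
        # Provider-configured range of usable VLAN IDs.
        self.free = list(range(first, last + 1))
        self.by_tenant = {}

    def network_for(self, tenant_id):
        """Return the tenant's VLAN ID, allocating one on first use."""
        if tenant_id not in self.by_tenant:
            if not self.free:
                raise RuntimeError("VLAN pool exhausted")
            self.by_tenant[tenant_id] = self.free.pop(0)
        return self.by_tenant[tenant_id]

pool = VlanPool()
print(pool.network_for("tenant-a"))  # 100
print(pool.network_for("tenant-b"))  # 101
print(pool.network_for("tenant-a"))  # 100 again -- same isolated network
```

An implementation based on GRE tunnels instead of VLANs would change only what the allocated identifier means, which is exactly why the blueprint treats the model independently from the technology.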

Topology 2: Direct internet connections

Each VM has a single public IP address and is connected directly to the Internet. Tenants may not choose their IP address or manage anything about the network topology.
This is the Rackspace Cloud model today.

Topology 3: Firewall service

Like Topology 1, but the VPN gateway is replaced by a firewall service that can be managed by the customer through the OpenStack Networking API. The public side of the firewall would usually be connected directly to the Internet. The firewall itself is provided by the service provider.
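A customer-managed firewall service implies an API for manipulating rules within whatever limits the provider imposes. The sketch below shows a minimal default-deny rule set; the class and field names are invented for illustration and do not correspond to a real API.

```python
# Hypothetical sketch of a customer-managed firewall rule set (Topology 3).
# Class and field names are illustrative; a real API would define its own.

class FirewallRules:
    def __init__(self):
        self.rules = []  # explicit allow rules; everything else is denied

    def allow(self, proto, port):
        """Customer adds an allow rule through the Networking API."""
        self.rules.append({"proto": proto, "port": port})

    def permits(self, proto, port):
        return any(r["proto"] == proto and r["port"] == port
                   for r in self.rules)

fw = FirewallRules()
fw.allow("tcp", 443)             # customer opens HTTPS to their VMs
print(fw.permits("tcp", 443))    # True
print(fw.permits("tcp", 22))     # False -- default deny
```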

Topology 4: Customer-owned gateway

Like Topologies 1 and 3, but instead of the gateway being a service provided by the cloud and managed through the Networking API, the customer instead provides their own gateway, as a virtual appliance.
The customer would have most of their VMs attached to their isolated network, but would be able to configure one (or more) of their VMs to have two interfaces, one connected to the isolated network, and one connected to the Internet. The customer would be responsible for the software within the publicly-connected VMs, and would presumably install a gateway or firewall therein.
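The dual-homed VM described above amounts to a server request carrying two VIFs, one per network. A sketch of what such a request body might look like follows; the key and network names are invented for illustration and are not part of any real API.

```python
# Illustrative request body for a dual-homed gateway VM (Topology 4).
# All key and network names here are invented for the example.

gateway_vm = {
    "name": "customer-gateway",
    "interfaces": [
        {"network": "isolated-net-1", "role": "private"},  # tenant network
        {"network": "public", "role": "internet"},         # public side
    ],
}

# Ordinary VMs in the same topology attach only to the isolated network:
app_vm = {
    "name": "app-server-1",
    "interfaces": [{"network": "isolated-net-1", "role": "private"}],
}

print(len(gateway_vm["interfaces"]))  # 2 -- one private, one public
print(len(app_vm["interfaces"]))      # 1 -- private only
```

Topology 5 uses the same two-interface shape, just with web servers rather than a gateway appliance in the dual-homed role.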

Topology 5: Multi-tier applications

Like Topology 4, but rather than running a gateway or a firewall, the customer is expected to run web servers instead. These would serve content to the public interfaces, and would contact the backend tiers via the private network.
In this topology, it's very likely that there would be more than one web server with a public Internet connection.

