New features in OpenStack Neutron
OpenStack's Stein release offers a variety of network connectivity-as-a-service enhancements to support 5G, the IIoT, and edge computing use cases.
The community of infrastructure developers working on Neutron—the network connectivity-as-a-service project used by 92% of production OpenStack deployments, according to the 2018 OpenStack User Survey—has been busy extending the project to support new use cases driven by the rollout of 5G, the Industrial Internet of Things (IIoT), and edge computing.
OpenStack is the open source cloud infrastructure software project that provides compute, storage, and networking services for bare-metal, container, and VM workloads. To get a sense of the core functionality and additional services, check out the OpenStack map.
The platform has a modular architecture that works across industry segments because infrastructure operators can choose the components they need to manage their infrastructure in the way that best supports their application workloads. The modules are also pluggable, which provides further flexibility and lets them work with a specific storage backend or software-defined networking (SDN) controller.
Neutron is the OpenStack project that provides a de facto standard REST API to manage and configure networking services and make them available to other components, such as Nova. According to Alok Kumar:
In very simple terms, Neutron:
- Allows users to create and manage network objects, such as networks, subnets, and ports, which other OpenStack services can use through a REST API.
- Enables a large number of operators to implement complex sets of networking technologies to power their network infrastructure through the use of agents, plugins, and drivers.
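To make the API concrete, here is a hedged sketch (not from the article) of the JSON request bodies behind those REST calls. The endpoint paths follow Neutron's v2.0 API; all names, UUIDs, and addresses are placeholders.

```python
import json

# Hypothetical request body for POST /v2.0/networks
network = {"network": {"name": "demo-net", "admin_state_up": True}}

# Hypothetical request body for POST /v2.0/subnets; the network_id
# comes back in the response to the network create call.
subnet = {
    "subnet": {
        "network_id": "<network-uuid>",
        "ip_version": 4,
        "cidr": "192.0.2.0/24",  # documentation range (RFC 5737)
    }
}

# Hypothetical request body for POST /v2.0/ports
port = {"port": {"network_id": "<network-uuid>", "name": "demo-port"}}

# Each body is sent as JSON, with a Keystone token in the
# X-Auth-Token header of the request.
print(json.dumps(network))
```

Other OpenStack services consume the same API: Nova, for example, creates or binds a port like this when it boots an instance on a network.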
The roadmap for Neutron in OpenStack's Stein release (slated for April 10) has lots of enhancements. Following are some of the more interesting updates.
SR-IOV VF-to-VF mirroring
Port mirroring is a well-known technique for monitoring network traffic without affecting performance. Based on specific rules, traffic from a network port is mirrored to an analyzer, where it can be processed without disrupting the original flow. While this challenge has been solved for physical ports, demand has grown to mirror traffic between virtual functions (VFs), with the network interface card (NIC) providing support for the operation. Providing an API for this service is crucial, as SR-IOV has become a widely used capability.
Guaranteed minimum bandwidth
Quality of service (QoS) is an important area where the OpenStack team spends a lot of time and effort on enhancements. For network-heavy applications, it is crucial to have a minimum amount of network bandwidth available. Work began during the Rocky cycle to provide scheduling based on minimum bandwidth requirements. The team has already demonstrated this new feature and plans to finalize it by the time Stein is released. As part of the enhancements, Neutron treats bandwidth as a resource and works with the Nova OpenStack compute service to schedule the instance on a host where the requested amount is available.
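As a hedged sketch of how this surfaces in Neutron's QoS API (the paths follow the v2.0 API; the policy name, bandwidth value, and UUID below are placeholders), a minimum-bandwidth rule is created under a QoS policy, and the policy is then attached to a port:

```python
import json

# Hypothetical request body for POST /v2.0/qos/policies
policy = {"policy": {"name": "guaranteed-bw", "shared": False}}

# Hypothetical request body for
# POST /v2.0/qos/policies/{policy_id}/minimum_bandwidth_rules
rule = {
    "minimum_bandwidth_rule": {
        "min_kbps": 1_000_000,   # guarantee roughly 1 Gbps
        "direction": "egress",
    }
}

# Attaching the policy to a port (PUT /v2.0/ports/{port_id}) is what
# lets the scheduler treat the requested bandwidth as a resource when
# placing the instance.
port_update = {"port": {"qos_policy_id": "<policy-uuid>"}}
print(json.dumps(rule))
```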
Using hardware acceleration is becoming more and more common as we move toward use cases such as augmented and virtual reality and other scenarios that 5G will bring to the table. OpenStack has a new project to provide a hardware acceleration framework: Cyborg. The Cyborg and Neutron teams are working together to provide joint management of NICs with field-programmable gate array (FPGA) capabilities to make it possible to bind Neutron ports to this type of card.
Smart NIC support
As OpenStack manages bare-metal workloads in addition to VMs and containers, it is crucial for the team to continuously look into enhancements in this area as well. The Neutron team is actively working on support for smart NICs that will enable bare-metal networking with feature parity to the virtualization use case. With this functionality, the number of bare-metal compute hosts per deployment can be significantly increased, as it eliminates the need for an agent running on the hosts and for remote procedure calls (RPC) as a communication channel between software components.
Better scalability and performance
Neutron is already in use at massive scale by users like AT&T and CERN. The team is working to push scalability and performance even further in the Stein release. For instance, Neutron already supports creating ports in a bulk request; however, the functionality can be tuned to make it faster, which is one of the targets for this release. In addition, a performance sub-team was formed to take targeted measurements and implement further improvements to make the service faster.
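For illustration, here is a hedged sketch of the bulk form (the network UUID and port names are placeholders): instead of one POST /v2.0/ports request per port, a single request carries a list of port definitions, saving per-request API and database overhead.

```python
import json

# Hypothetical request body for a bulk POST /v2.0/ports: the "ports"
# key holds a list of port definitions instead of a single "port" dict.
bulk_request = {
    "ports": [
        {"network_id": "<network-uuid>", "name": f"port-{i}"}
        for i in range(4)
    ]
}

# One round trip creates all four ports; the response mirrors the
# request shape, returning a list of the created ports.
print(json.dumps(bulk_request)[:60])
```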
Working to serve users and integrate across communities
The community of Neutron developers is engaged closely with other open source networking projects, helping to extend cross-project integration to address an ever-expanding set of networking use cases. To address the need for integration testing in a full-stack environment, contributors are working closely together with the Open Platform for Network Functions Virtualization (OPNFV) community. There is also collaboration in the area of standardization with ETSI NFV.
Furthermore, as a result of collaboration, you can use Neutron with various SDN controllers and technologies such as MidoNet, OpenDaylight, Tungsten Fabric, BaGPipe, and BGP VPN.
If you are interested in further details about larger design and development activities, take a look at the Neutron specs on the documentation webpage. To learn more about the services provided by Neutron, check the online documentation, which the community keeps current.
Also, a wide range of Neutron technical sessions and user stories will be featured at the Open Infrastructure Summit, April 29–May 1 in Denver. Check out the schedule of Neutron-related sessions and see what applies to your use case. If you are interested in participating in deeper technical discussions, stay for the Project Teams Gathering (PTG) right after the Summit in Denver.