ONAP: First Steps with NFV and Datacenter Automation
Telecommunications is going through a paradigm shift, and Network Function Virtualization (NFV) is the key driver behind this change. For the purposes of this blog post, NFV is an umbrella term for the entire NFV ecosystem: from Infrastructure as a Service (IaaS) platforms like OpenStack, to the “softwareization” of the network (SDN and more), to Virtual Network Function (VNF) onboarding, and, perhaps most importantly, management and orchestration (MANO).
A MANO system is a software component that provisions and manages infrastructure. It talks to IaaS APIs and manages networking, storage, and compute. It provisions and orchestrates network functions, keeps an inventory of all resources, scales network services up and down, and performs self-healing through closed-loop automation. And that's only scratching the surface.
While MANO systems are usually associated with telecoms and NFV, MANO also has a place in datacenter automation and is equally applicable in that context. At Interdynamix, we expect MANO to become part of standard datacenter automation as the technology matures and as datacenter operators require more automation.
It’s MANO that this post will focus on: specifically, the open source MANO system ONAP, the Open Network Automation Platform.
ONAP is a massive open source project. Its purpose is to provide a platform for “real-time, policy-driven orchestration and automation of physical and virtual network functions.” It allows organizations to use agile methodologies to manage network infrastructure. You want closed loop automation? ONAP can do it. You want monitoring and metrics? ONAP can do it. You want to deploy AI and ML models? ONAP can do it.
ONAP provides a common platform for telecommunications, cable, and cloud operators and their solution providers to rapidly design, implement, and manage differentiated services. It provides orchestration, automation, and end-to-end lifecycle management of network services. Additionally, it provides tools for network service design and a framework for closed-loop automation.
Like other complex open source systems such as OpenStack, ONAP is a collection of loosely coupled subprojects. While some of these projects will almost always be deployed, others are optional. The point is that what actually makes up a particular ONAP deployment is fluid, and depends on what datacenter or telecom operators want to achieve with it.
When we work with clients who want ONAP, one of the first things we do is establish a shared understanding that there may be no such thing as a “generic ONAP install.” In fact, most production ONAP deployments today comprise a small number of ONAP components that provide a specific automation function, rather than a “full ONAP” deployment.
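In practice, a partial deployment is expressed through Helm override values that enable only the components you need. The snippet below is an illustrative sketch: the component keys shown (`so`, `sdnc`, `aai`, `clamp`) follow the enabled/disabled pattern used by OOM's Helm charts, but exact names and defaults vary between ONAP releases.

```shell
# Illustrative overrides file for a selective ONAP deployment.
# Component keys follow the enabled/disabled pattern used by OOM's
# Helm charts; exact names and defaults vary by ONAP release.
cat > onap-overrides.yaml <<'EOF'
# Enable only the pieces needed for a specific automation function
so:                 # Service Orchestrator
  enabled: true
sdnc:               # SDN Controller
  enabled: true
aai:                # Active and Available Inventory
  enabled: true
clamp:              # Closed-loop automation
  enabled: false
EOF
```

An overrides file like this is what turns a “full ONAP” chart into the small, purpose-built deployment most operators actually run.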
OOM – ONAP Operations Manager
ONAP is complex: its pieces end up being hundreds of containers, all of which need to be deployed and tied together in an automated fashion. Again, like OpenStack or Kubernetes, ONAP has various distributions that do this. Installer, distribution, configuration management, call it what you will: it’s a piece of software that can be used to set up and manage the lifecycle of ONAP itself.
This is where the ONAP Operations Manager, or OOM, comes in. OOM is the de facto standard tool for deploying and managing ONAP. For better or worse, it’s a complex system in its own right, even though all OOM does is deploy and manage ONAP.
Currently, OOM works best in conjunction with Kubernetes: it automatically deploys selected ONAP components into an existing Kubernetes cluster. Most organizations’ first attempt to use and understand ONAP will involve OOM. That can turn into a daunting task, as, at the very least, it requires a working Kubernetes cluster with the proper networking, storage, and additional components (such as Helm, a package manager of sorts for Kubernetes).
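Assuming a working Kubernetes cluster with Helm installed, an OOM-driven deployment boils down to pointing Helm at the OOM charts and installing the selected components. The function below is a hedged sketch, not a definitive procedure: the local chart repository URL, the release name (`dev`), and the OOM `deploy` Helm plugin are assumptions drawn from upstream OOM documentation, and the details vary by ONAP release.

```shell
# Hypothetical sketch of an OOM-style ONAP deployment flow.
# The chart repo URL, release name ("dev"), and the OOM "deploy"
# Helm plugin are assumptions; consult the OOM docs for your release.
deploy_onap() {
  # Register the locally built OOM charts (OOM docs use a local chart repo)
  helm repo add local http://127.0.0.1:8879

  # Install the selected ONAP components into the "onap" namespace,
  # using an overrides file to enable only what is needed
  helm deploy dev local/onap --namespace onap -f onap-overrides.yaml
}

# deploy_onap   # run only against a cluster prepared for ONAP
```

Note that the heavy lifting (networking, persistent storage, resource sizing) happens before these two commands ever run, which is where most of the real effort goes.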
Get Started with ONAP and OOM
At Interdynamix we have deep expertise in deploying ONAP using OOM:
- First, we help organizations determine what parts of ONAP are applicable to their requirements.
- Then we determine how best to fit Kubernetes and ONAP into their environment, be it in the lab or otherwise.
- Next, we deploy the selected ONAP components into Kubernetes.
- Finally, we validate the deployment and hand it over to the client.
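The validation step usually starts with confirming that every ONAP pod has settled into a healthy state. A minimal check, assuming the components were deployed into an `onap` namespace, might look like this:

```shell
# Minimal post-deployment health check (assumes the "onap" namespace).
# Defined as a function so it can be rerun after upgrades as well.
check_onap_pods() {
  # List any pods that are not yet Running or Succeeded
  kubectl get pods --namespace onap \
    --field-selector=status.phase!=Running,status.phase!=Succeeded
}

# check_onap_pods   # an empty result means all pods have settled
```

Deeper validation (exercising the ONAP portal, onboarding a test VNF) builds on top of a basic check like this one.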