Network Functions Virtualization (NFV)
Concepts
Network Functions Virtualization (NFV):
Virtualizes network functions using software on VMs
Replaces proprietary hardware (e.g., NAT, firewall, DNS)
NFV Benefits
Decouples hardware & software
Scalable, flexible resource usage
Runs on COTS x86 servers
Virtualized Devices
Network Functions: Routers, switches, CPE, DPI
Compute Devices: Firewalls, IDS, management tools
Storage: Network-attached file/database servers
Traditional vs NFV
Traditional: Fixed, closed, hardware-dependent
NFV: Dynamic, shared, software-driven
NFV Principles
Service Chaining
VNFs = Modular blocks
Traffic flows through multiple VNFs
Enables desired end-to-end functionality
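As a toy illustration of service chaining, the sketch below models each VNF as a Python function and pushes a packet through the chain in order. All names and rules are hypothetical, not from any NFV standard:

# Toy service chain: each VNF is a function taking and returning a
# packet (a dict); the chain applies them in order, as in
# firewall -> NAT -> load balancer. All names/rules are hypothetical.

def firewall(pkt):
    if pkt["src"] == "10.0.0.99":    # toy rule: block one source
        return None                  # drop the packet
    return pkt

def nat(pkt):
    pkt["src"] = "203.0.113.1"       # rewrite private source to a public address
    return pkt

def load_balancer(pkt):
    backends = ["192.168.1.10", "192.168.1.11"]
    pkt["dst"] = backends[hash(pkt["src"]) % len(backends)]  # pick a backend
    return pkt

def run_chain(pkt, chain):
    """Push the packet through each VNF; stop early if one drops it."""
    for vnf in chain:
        pkt = vnf(pkt)
        if pkt is None:
            return None
    return pkt

print(run_chain({"src": "10.0.0.5", "dst": "198.51.100.7"},
                [firewall, nat, load_balancer]))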
Management and Orchestration
Manages VNF lifecycle: create, monitor, relocate, shut down
Handles service chaining, infrastructure, and billing
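A minimal sketch of the lifecycle idea, assuming a simplified state machine; state and action names are illustrative, not taken from the ETSI specs:

# Sketch of the VNF lifecycle MANO drives: create, monitor, relocate,
# shut down. The state machine below is a simplification.

TRANSITIONS = {
    ("null", "create"): "running",
    ("running", "relocate"): "running",      # moved to another host, still running
    ("running", "shutdown"): "terminated",
}

def step(state, action):
    """Apply one lifecycle action, rejecting disallowed transitions."""
    if (state, action) not in TRANSITIONS:
        raise ValueError(f"illegal transition: {action} from {state}")
    return TRANSITIONS[(state, action)]

state = "null"
for action in ("create", "relocate", "shutdown"):
    state = step(state, action)
    print(f"{action}: state is now {state}")
# Monitoring is continuous while in "running" and is omitted here.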
Distributed Architecture
VNFs consist of VNFCs (components)
VNFCs can run across multiple hosts
Enables scalability and redundancy
High-Level NFV Framework
Three Domains of NFV
Virtualized Network Functions (VNFs):
Software-based network functions running on the NFVI
NFV Infrastructure (NFVI):
Virtualizes compute, storage, and network resources
Management and Orchestration (MANO):
Manages the VNF lifecycle and NFVI resources
Handles virtualization-specific tasks
VNF Relationships
VNF Forwarding Graph (VNF FG): Specifies traffic flow between VNFs (e.g., firewall → NAT → load balancer)
VNF Set: VNFs grouped without defined connectivity (e.g., a server pool)
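To make the distinction concrete, a small sketch: the forwarding graph is ordered (each VNF names its successor and traffic follows the edges), while the set has membership but no defined connectivity. Names are hypothetical:

vnf_forwarding_graph = {
    "firewall": "nat",
    "nat": "load_balancer",
    "load_balancer": None,             # end of the chain
}

vnf_set = {"web_1", "web_2", "web_3"}  # e.g., a server pool, no defined connectivity

def walk(graph, start):
    """Yield VNFs in the order traffic traverses them."""
    node = start
    while node is not None:
        yield node
        node = graph[node]

print(" -> ".join(walk(vnf_forwarding_graph, "firewall")))
# firewall -> nat -> load_balancer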
NFV Benefits
Reduced Capital Expenditure (CapEx):
Commodity hardware, equipment consolidation, pay-as-you-grow models
Eliminates overprovisioning
Reduced Operational Expenditure (OpEx):
Lower power, space, and management costs
Simplified network control
Faster Service Deployment:
Quick rollout of services
Reduced risk, faster ROI, agile service evolution
Interoperability:
Standardized, open interfaces
Resource Efficiency:
Shared platform for multiple users/apps/tenants
Agility & Flexibility:
Dynamic scaling to meet demand
Geo- or customer-specific service targeting
Open Ecosystem & Innovation:
Encourages small players and academia
Enables new services and revenue streams at lower risk
NFV Requirements
Portability & Interoperability:
Run VNFs from various vendors on standard hardware
Decouple software from hardware via standardized interfaces
Performance Trade-offs:
Manage potential performance drops due to lack of specialized hardware
Minimize latency and overhead with efficient software/hypervisors
Migration & Coexistence:
Support hybrid environments (physical + virtual appliances)
Maintain compatibility with existing interfaces
Management & Orchestration:
Unified architecture aligned with standards
Simplified control and monitoring of VNFs
Automation:
Essential for scalability and operational efficiency
Security & Resilience:
Maintain network integrity, availability, and resistance to attacks
Network Stability:
Ensure stability during VNF relocation, failure recovery, or attacks
Simplicity:
Aim for simpler operations compared to legacy systems
Integration:
Enable multi-vendor compatibility without high costs
Ecosystem must support validation and third-party maintenance
NFV Reference Architecture
Four Key Components
NFV Infrastructure (NFVI):
Virtualizes compute, storage, and network resources into resource pools.
VNF/EMS:
Software-based network functions (VNFs) plus Element Management Systems (EMS).
NFV Management and Orchestration (MANO):
Manages and orchestrates VNFs and infrastructure resources (compute, storage, network).
OSS/BSS:
Business and operational support systems.
Three Architectural Layers
Infrastructure Layer:
NFVI + Virtualized Infrastructure Manager (VIM)
VNF Layer:
VNFs, EMS, and VNF Managers
Management & Orchestration Layer:
OSS/BSS + NFV Orchestrator
NFV Management and Orchestration (MANO) Functional Blocks:
NFV Orchestrator:
Manages network service (NS) and VNF lifecycles, global resources, and authorization.
VNF Manager:
Handles VNF instance lifecycle.
Virtualized Infrastructure Manager (VIM):
Manages virtual/physical resource interaction.
Key Reference Points (Interfaces):
Vi-Ha: Interface to physical hardware
Vn-Nf: VNF to virtual infrastructure API
Nf-Vi: NFVI to VIM
Or-Vnfm: Orchestrator to VNF Manager
Vi-Vnfm: VNF Manager to VIM
Or-Vi: Orchestrator to VIM
Os-Ma: Orchestrator to OSS/BSS
Ve-Vnfm: VNF lifecycle management
Se-Ma: Access to deployment templates & infrastructure models
NFV Implementation Overview
Standards & Open Source:
ETSI ISG NFV:
Developing standards for NFV interfaces and components.
OPNFV (Open Platform for NFV):
Launched by the Linux Foundation (2014) to accelerate NFV adoption.
Key Objectives of OPNFV:
Build integrated, tested open source NFV platform
Ensure operator-driven validation of releases
Contribute to & align with relevant open source projects
Foster open NFV ecosystem (standards + software)
Promote OPNFV as preferred open reference platform
Initial Focus of OPNFV
Development of:
NFV Infrastructure (NFVI)
Virtual Infrastructure Manager (VIM)
APIs to Management and Orchestration and VNFs
Provides a common base for vendors to build VNF and MANO solutions
NFV Infrastructure
Core Domains
Compute Domain
Commercial-off-the-shelf (COTS) servers and storage (high-volume, standard hardware)
Hypervisor Domain
Abstracts hardware for VMs
Mediates compute resources for VNFs
Infrastructure Network Domain (IND)
High-volume switches forming a configurable network
Delivers infrastructure-level network services
Container Interface in NFV (European Telecommunications Standards Institute (ETSI) Perspective)
Important Clarification
ETSI states that "Container Interface" ≠ container virtualization.
Interface Types
Functional Block Interface:
Connects two software blocks (can be on different hosts)
Enables inter-block communication
Container Interface:
Execution environment on a single physical host
Hosts and runs a functional block locally
Key Features:
Highlights execution on physical hosts
Distinguishes communication (functional block interfaces) from execution environments (container interfaces)
Crucial for understanding VM/VNF behavior within NFVI
ETSI NFVI Architecture: Key Insights
VNF architecture is separate from the hosting NFVI architecture.
VNFs and NFVI consist of distinct domains (compute, hypervisor, network).
Management & Orchestration (MANO) is a separate domain, but often overlaps with NFVI (e.g., element management).
Container Interface:
Connects VNFs to NFVI
Hosts VNF & MANO functions as VMs on the same physical host
Three-Layer VNF Deployment
Physical Resources
Virtualization Layer
Application Layer (VNFs)
→ All typically co-located on the same physical host.
Interface Types (as shown in the figure)
Container Interfaces: Same host (Interfaces: 4, 6, 7, 12)
Functional Block Interfaces: Cross-host or distributed (Interfaces: 3, 8, 9, 10, 11, 14)
Legacy Network Interfaces: For integration with non-NFV networks (Interfaces: 1, 2, 5, 13)
NFV vs. SDN
NFV: VNFs run on the same host as the virtualization software
SDN: Control and data planes often run on separate physical hosts
Deployment of NFVI Containers
VNF Components (VNFCs) and Deployment
Single Host Setup (Fig. a)
One VM per VNFC
Multiple VMs run on one host via a hypervisor
Each VM = 1 VNFC
Hosted on the compute container interface
Distributed Setup (Fig. b)
Multiple VNFCs form one VNF
VNFCs can run across different compute nodes
Nodes interconnected via infrastructure network domain
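A hypothetical placement map contrasting the two deployments: (a) all VNFCs of a VNF as VMs on one host, (b) VNFCs spread across compute nodes connected by the infrastructure network domain. Host and VNFC names are invented for illustration:

single_host = {
    "host-1": ["vnfc-a", "vnfc-b", "vnfc-c"],  # Fig. a: one hypervisor, one VM per VNFC
}

distributed = {
    "host-1": ["vnfc-a"],                      # Fig. b: same VNF, two compute nodes
    "host-2": ["vnfc-b", "vnfc-c"],            # interconnected via the IND
}

def hosts_running(placement, vnf_components):
    """Compute nodes that must be up and reachable for the VNF to work."""
    return {host for host, vnfcs in placement.items()
            if any(c in vnfcs for c in vnf_components)}

print(hosts_running(single_host, ["vnfc-a", "vnfc-b", "vnfc-c"]))   # {'host-1'}
print(hosts_running(distributed, ["vnfc-a", "vnfc-b", "vnfc-c"]))   # {'host-1', 'host-2'}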
Logical Structure of NFVI Domains
NFVI Logical Structure (ISG NFV)
ISG NFV standards define the logical architecture of NFVI domains and their interconnections.
Supports both open source and proprietary implementations.
Provides a framework for development and identifies key interfaces between components.
Compute Domain
CPU/Memory: COTS processor and main memory for executing VNFC code.
Internal Storage: Local nonvolatile storage (e.g., flash).
Accelerators: Optional hardware for security, networking, and packet processing.
External Storage: Access to secondary memory via a storage controller.
NIC: Connects the compute node to the infrastructure network (Interface 14 – Ha/CSr-Ha/Nr).
Control/Admin Agent: Interfaces with the VIM (see Fig. 7.8).
Eswitch: Server-embedded switch, functionally part of the infrastructure network.
Execution Environment: Presented to the hypervisor by the server/storage hardware (Interface 12 – VI-Ha/CSr).
Eswitch
Two VNF Workloads:
Control Plane: Protocols like BGP, CPU-intensive, light I/O.
Data Plane: Routing/switching tasks & high I/O demands.
Challenge in Virtualized Environment:
Traffic goes through the hypervisor's virtual switch.
Adds processing overhead and latency.
Eswitch Solution:
Bypasses hypervisor.
Enables direct memory access (DMA) to NIC.
Boosts performance with zero processor overhead.
NFVI Implementation Using Compute Domain Nodes
VNF Structure & NFVI Nodes
VNFs: Made of one or more VNFCs (VNF Components)
VNFCs: Run as software on VMs via hypervisors on compute nodes
Virtual Links: Defined via the Infrastructure Network Domain (IND)
NFVI Node: Group of physical devices managed as one; supports VNF execution
Types of Compute Domain Nodes
Compute Node: Executes fast, deterministic instructions
Gateway Node: Connects the NFVI to transport and legacy networks
Storage Node: Offers storage via local or remote access (e.g., NFS, Fibre Channel)
Network Node: Provides switching/routing using compute and storage
NFVI-PoP (Point of Presence)
Multiple physical devices per compute domain
Distributed locations enable service diversity
Deployment Scenarios
Monolithic Operator – Single organization owns the hardware and VNFs (e.g., private cloud)
Operator Hosting Virtual Network Operators (VNOs) – Hosts multiple operators (e.g., hybrid cloud)
Hosted Network Operator – IT organization runs the infrastructure; the operator runs VNFs (e.g., BT, Verizon)
Hosted Communication Providers – Multiple providers hosted (e.g., community cloud)
Hosted Communication + Application Providers – Adds public application hosting (e.g., public cloud)
Managed Service on Customer Premises – Provider equipment at the client site
Managed Service on Customer Equipment – VNFs on client-owned hardware
Hypervisor Domain
Abstraction Layer: Manages hardware for VM operations (start, stop, scale, migrate)
Main Components:
Compute/Storage Management: Provides virtualized compute and storage access for VMs
Network Management: Virtualizes NICs for VMs
VM Management & API: Supports VNFC execution (Interface 7)
Control/Admin Agent: Interfaces with the VIM
vSwitch: Virtual Ethernet switch connecting the virtual NICs of VMs
vSwitch Operation
Same Host: VNFs connect via local vSwitch
Different Hosts: Traffic flows from the source vSwitch → NIC → external switch → target NIC → target vSwitch → VNF
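A toy model of the same-host case: a minimal learning vSwitch that maps source MACs to ports and floods unknown destinations. Purely illustrative; real vSwitches (e.g., Open vSwitch) do far more:

# Toy vSwitch: forwards frames between the virtual NICs of co-located
# VMs, learning which port each source MAC sits behind.
class VSwitch:
    def __init__(self):
        self.mac_table = {}  # source MAC -> port it was seen on

    def forward(self, frame, in_port):
        self.mac_table[frame["src"]] = in_port       # learn the sender's port
        out_port = self.mac_table.get(frame["dst"])
        if out_port is None:
            return "flood"                           # unknown destination: all ports
        if out_port == in_port:
            return "drop"                            # already on the right segment
        return out_port

sw = VSwitch()
print(sw.forward({"src": "aa:aa", "dst": "bb:bb"}, in_port=1))  # flood (dst unknown)
print(sw.forward({"src": "bb:bb", "dst": "aa:aa"}, in_port=2))  # 1 (learned earlier)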