Traffic Management Performs Testing For Triple-Play Services

Oct. 13, 2006
As data, voice, and video services emerge, their efficient deployment and proper operation will depend on FPGA-based traffic-management functions for emerging Internet protocols and QoS capabilities.

Triple-play telecommunications services—data, voice, and video—will add to the complexity of testing next-generation wireless system architectures. To handle this latency-sensitive traffic, emerging protocols and new hardware are needed that provide quality of service (QoS) via traffic management. Triple-play requirements are influencing network technologies like Universal Mobile Telecommunication System (UMTS), WiMAX, and digital-subscriber-line (DSL) broadband networks. These influences are driving architectural changes as well as testing requirements for triple-play services. Today's original equipment manufacturers (OEMs) need flexible architectures, which can be enabled by field-programmable-gate-array (FPGA)-based network-processing and traffic-management functions.

To understand the role of QoS, traffic management, and FPGA implementations, take a look at a current UMTS wireless network. UMTS is a third-generation (3G) wireless system that delivers high-bandwidth data and voice services to mobile users. This system evolved from the Global System for Mobile Communications (GSM). UMTS has a new air interface, which is based on Wideband Code Division Multiple Access (WCDMA). In addition, it has an Internet Protocol (IP) core network that is based on General Packet Radio Service (GPRS). Voice and data transport is performed by the transport-layer nodes. In contrast, the call-control layer nodes generally perform the call-control function (Fig. 1).

The transport-layer node can be built on an asynchronous-transfer-mode (ATM) switch, an IP packet switch, or an IP router. The node consists of an optional interface to call-control layer nodes, a host processor, adjacent-node interfaces, and switch fabric. Together, the adjacent-node interfaces and switch fabric form the voice and data path. Two parts of a transport-layer node are implemented in programmable logic: the call-control layer interface and the voice/data path (Fig. 2). The call-control layer interface is the interface logic to call-control layer nodes, such as HSS, CSCF, and MGCF. The voice and data path uses the Internet protocol to transport packet voice and data within the UMTS wireless network. The main functions of a packet-voice and data-path implementation include the following (Fig. 3):

  • Physical-layer processing: The physical-layer function processes SONET/SDH or T/E/J frame headers and extracts point-to-point-protocol (PPP) packets on the receiver side. On the transmitter side, it places PPP packets into the frame payload and adds the frame header.
  • Higher-layer processing: The higher-layer function performs parsing, framing, packet classification, and modification. Encryption and compression processors are usually supported. In addition, special processors are often used to accelerate the process. The queuing and traffic-manager function places packets on different priority queues and drops packets according to the traffic condition.
  • Switching: The switch fabric performs switching and routing functions for voice and data. It also contains a queue manager.
  • Control and management: The control and management function performs path control and collects data for management purposes.

Traffic-management functions must be created and tested for emerging Internet triple-play protocols and QoS capabilities. These functions will play a key role in enabling the efficient deployment of triple-play equipment and verifying its proper operation. With such a wide range of factors driving the testing requirements, not all market requirements can be met with fixed application-specific standard products (ASSPs) and application-specific integrated circuits (ASICs). IPTV test equipment must measure the device-under-test (DUT) response in terms of latency, throughput, missing packets, transaction rate, and mean opinion score (MOS). Furthermore, equipment must be able to assign different QoS parameters (physical/logical ports, priorities, classes, distribution, etc.) in order to test a DUT's ability to correctly implement QoS policies on its traffic.
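The core DUT metrics named above (latency, throughput, missing packets) all fall out of comparing send and receive timestamps on sequence-numbered test packets. The sketch below is a simplified illustration, not a description of any particular tester; the `Probe` structure and `summarize` name are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Probe:
    """One test packet: sequence number, send time, and receive time
    (None if the DUT dropped it).  Field names are illustrative."""
    seq: int
    tx_time: float          # seconds
    rx_time: Optional[float]

def summarize(probes, duration_s, payload_bytes):
    """Reduce timestamped probes to throughput, packet loss, and
    average one-way latency."""
    received = [p for p in probes if p.rx_time is not None]
    latencies = [p.rx_time - p.tx_time for p in received]
    return {
        "throughput_bps": len(received) * payload_bytes * 8 / duration_s,
        "lost_packets": len(probes) - len(received),
        "avg_latency_s": sum(latencies) / len(latencies) if latencies else None,
    }
```

A real tester would additionally track the transaction rate and MOS, but those require protocol-level state beyond this per-packet view.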

An FPGA-based traffic manager reference design can provide QoS capabilities for triple play (Fig. 4). Programmable traffic-manager solutions can support high-speed throughput in a solution that can adapt to the changing market. The solution can be demonstrated in a hardware environment using a traffic-manager board. By using FPGA technology to enforce QoS, communications-equipment OEMs can quickly deliver customized test services to specific markets.

Test equipment for triple-play services must create or emulate many different types of Layer 4-7 traffic (HTTP, FTP, e-mail, video streams, audio streams, and VoIP). Each traffic pattern needs to be tested under different QoS characteristics to effectively evaluate a DUT's ability to correctly handle these different types. Measuring the performance for triple-play services means emulating and evaluating protocols in a step-by-step approach:

  1. Create several baseline traffic types and measure the performance of a DUT when the traffic is run in isolation.
  2. Combine all traffic types and re-assess the performance in terms of throughput, latency, and data loss.
  3. Adjust QoS parameters on certain traffic flows and implement QoS policies on the DUT. It is then possible to measure the DUT's ability to properly prioritize certain streams within a triple-play environment.
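The first two steps of this procedure can be sketched as a small test harness: baseline each traffic type in isolation, then re-measure with all types combined. The `measure` callback stands in for whatever drives the DUT and collects metrics; both names are hypothetical:

```python
def run_qos_test(flows, measure):
    """Steps 1 and 2 of the procedure above: measure each flow alone,
    then all flows together.  `measure` takes a list of flow names and
    returns whatever metrics the tester produces."""
    results = {}
    for flow in flows:                    # step 1: isolation baselines
        results[flow] = measure([flow])
    results["combined"] = measure(flows)  # step 2: all types combined
    return results
```

Step 3 would then repeat the combined run while sweeping QoS parameters on selected flows.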


Internet baseline traffic consists of web accesses, mail, FTP, P2P, and other forms of business traffic. This traffic is distinguished by well-known port pairs: HTTP requests target TCP port 80, POP3 transactions target port 110, and so on. In contrast, video baseline traffic can consist of actual video streams or be emulated through the use of scripts. These streams simulate the behavior of video traffic through the DUT. Delay, jitter, and throughput must be measured. Both unicast for video on demand and multicast for broadcast services over UDP are required.
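The jitter measurement for video streams is commonly done with the smoothed interarrival-jitter estimator defined for RTP in RFC 3550 (J = J + (|D| - J)/16, where D is the change in transit time between consecutive packets). A minimal sketch over matched send/receive timestamps:

```python
def interarrival_jitter(tx_times, rx_times):
    """Smoothed interarrival jitter per RFC 3550: for each consecutive
    packet pair, fold the change in transit time into a running
    estimate with gain 1/16."""
    j = 0.0
    prev_transit = None
    for tx, rx in zip(tx_times, rx_times):
        transit = rx - tx
        if prev_transit is not None:
            j += (abs(transit - prev_transit) - j) / 16.0
        prev_transit = transit
    return j
```

A perfectly paced stream (constant transit time) yields zero jitter; any variation in transit time raises the estimate.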

Voice-over-Internet-protocol (VoIP) traffic must be tested bi-directionally over multiple channels (48 VoIP pairs) with several different types of codec algorithms (G.711u, G.711a, G.723.1-ACELP, G.723.1-MPMLQ, G.729, and G.726). The quality of the voice calls is measured using the mean opinion score of voice conversations; the MOS therefore indicates the effectiveness of the network at carrying voice traffic.
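Testers typically estimate MOS objectively rather than by human scoring. One standard approach is the ITU-T G.107 E-model, which computes a transmission rating factor R from impairments (codec, delay, loss) and maps it to an estimated MOS. The R computation itself is involved; the final mapping is a simple polynomial:

```python
def r_to_mos(r):
    """ITU-T G.107 mapping from the E-model rating factor R to an
    estimated MOS.  R is clamped to [0, 100]; a toll-quality call
    (R near 93) maps to a MOS around 4.4."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)
```

Each codec in the list above starts from a different baseline R (its equipment-impairment factor), which is why MOS ceilings differ between, say, G.711 and G.729.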

Once the baseline types are characterized, the traffic is combined and reevaluated—usually with one type earmarked with QoS and the others without—until all combinations are measured. Various QoS parameters are then applied to quantify traffic throughput and response times under different QoS combinations. Generally, Internet baseline traffic has the lowest priority because data services are not adversely affected by packet delays. Video traffic may have the next-highest priority. As long as the audio track (which is typically sent as part of a stream) does not get broken, a few missing video frames will not seriously degrade perceived appearance. VoIP traffic would typically have the highest priority, as voice services are very sensitive to latency and corrupted data packets—particularly with third-generation (3G) and fourth-generation (4G) wireless.
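The priority ordering described above (VoIP highest, video next, Internet data lowest) amounts to a strict-priority dequeue policy. A minimal sketch, with the class names chosen to match the article:

```python
from collections import deque

PRIORITY = ["voip", "video", "data"]   # highest priority first

queues = {cls: deque() for cls in PRIORITY}

def enqueue(cls, packet):
    queues[cls].append(packet)

def dequeue():
    """Strict priority: always drain the highest non-empty class, so a
    waiting VoIP packet preempts any queued video or data packet."""
    for cls in PRIORITY:
        if queues[cls]:
            return queues[cls].popleft()
    return None
```

Strict priority is the simplest such policy; production traffic managers usually combine it with weighted schemes so low-priority data cannot be starved indefinitely.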

A typical communications tester is a chassis-based heterogeneous architecture, very similar to the one shown in Fig. 3. It has line-interface cards, network-processing/traffic-management cards, switch-fabric cards, and a control central processing unit (CPU). (Note that smaller models may combine these functions.) Due to the variety of line interfaces and the multiple protocols to be processed (classification, editing, and policing), multi-protocol framers and network-processing units (NPUs), which are primarily ASSPs, have been developed. Many of these ASSPs require FPGA bridging solutions. Some of today's larger FPGAs can also perform the framing and network-processing functions.

Previous data-only packet equipment had little need for traffic management toward the line interfaces (primarily buffering under back-pressure mechanisms). In addition, traffic demanded only modest QoS requirements toward the switch fabric. As the line-interface types diversified and triple-play QoS requirements intensified, heavier traffic management became essential in both directions. Because the ingress and egress directions have different requirements, FPGA-based traffic management becomes a viable solution.

The programmability of an FPGA solution allows a traffic manager to be adapted to support changing test requirements and emerging services. Altera's FPGA-based traffic-manager reference design, for example, interfaces to a network processor on one side and a fabric interface chip (FIC) on the other. In the ingress direction, traffic flows into the line card encapsulated in Ethernet, SONET/SDH, RPR, or OTN frames. The framer or MAC device performs the necessary processing and transmits the traffic to the NPU with a Layer 2 header attached. The NPU, in turn, performs classification, modification, and forwarding operations as dictated by the protocols being processed. The NPU also adds headers to the packet to communicate information to downstream devices, including both the ingress and egress traffic managers as well as the egress NPU. The header for the ingress traffic manager contains information such as class of service (CoS), multicast, and drop precedence. This information is needed for the device to appropriately prioritize and switch the traffic.
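Extracting the CoS, multicast, and drop-precedence fields from such a header is a straightforward bit-field unpack. The 16-bit layout below is purely hypothetical for illustration; real NPU-to-traffic-manager headers are device- and design-specific:

```python
def parse_npu_header(hdr: int) -> dict:
    """Unpack an assumed 16-bit NPU header:
       bits 15-13  class of service (0-7)
       bit  12     multicast flag
       bits 11-10  drop precedence (0-3)
       bits  9-0   reserved in this sketch."""
    return {
        "cos": (hdr >> 13) & 0x7,
        "multicast": bool((hdr >> 12) & 0x1),
        "drop_precedence": (hdr >> 10) & 0x3,
    }
```

In the FPGA design this unpack is a wiring of bit slices rather than shifts, which is precisely why the parsing "can be customized to locate parameters within any location of the header," as described next.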

The packet reassembler block receives data from the SPI-4.2 interface and parses the header for relevant parameters, such as CoS. The parsing can be customized to locate parameters within any location of the header. This information is stored in the control memory along with pointers to the packets. The reassembler block converts the received data into fixed-sized cells. It then communicates with the buffer manager block to ensure that the configured memory partitions are not violated by packet enqueues.
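The conversion of variable-length packets into fixed-size cells can be sketched as follows; the 64-byte cell size and zero padding of the final cell are illustrative assumptions, not parameters of the reference design:

```python
CELL_BYTES = 64  # assumed cell size for illustration

def segment(packet: bytes, cell_bytes: int = CELL_BYTES) -> list:
    """Split a variable-length packet into fixed-size cells,
    zero-padding the last cell to the full cell size."""
    cells = []
    for off in range(0, len(packet), cell_bytes):
        cells.append(packet[off:off + cell_bytes].ljust(cell_bytes, b"\x00"))
    return cells
```

Fixed-size cells simplify the downstream memory controller and scheduler, since every enqueue and dequeue moves a constant-size unit.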

Next, the buffer manager passes the enqueue request to the queue manager block, which maintains state and pointer information for each of the queues. If the enqueue is valid, the reassembler block selects an available pointer to external memory and passes the segmented packets to the packet-memory controller for storage in external memory. The list of available pointers is maintained both off-chip in packet memory and in an on-chip cache using the FPGA's embedded memory. After the packet has been written to external memory, it is eligible to be transmitted by the scheduler.
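The split between an off-chip free list and a small on-chip cache of pointers can be modeled as below. The class name, sizes, and refill policy are illustrative assumptions; the point is that allocations normally hit the fast on-chip copy and only refill from bulk memory when it runs dry:

```python
class PointerPool:
    """Free list of packet-memory pointers with a small on-chip cache."""

    def __init__(self, total_pointers, cache_size=8):
        self.cache = list(range(cache_size))                 # fast on-chip copy
        self.off_chip = list(range(cache_size, total_pointers))  # bulk free list
        self.cache_size = cache_size

    def alloc(self):
        """Return a free pointer, refilling the cache from off-chip
        memory when it is empty; None when no pointers remain."""
        if not self.cache and self.off_chip:
            for _ in range(min(self.cache_size, len(self.off_chip))):
                self.cache.append(self.off_chip.pop())
        return self.cache.pop() if self.cache else None

    def free(self, ptr):
        """Return a pointer, preferring the on-chip cache."""
        if len(self.cache) < self.cache_size:
            self.cache.append(ptr)
        else:
            self.off_chip.append(ptr)
```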

For its part, the scheduler examines all of the ports that have data to send. It chooses the next port according to a hierarchical scheduling scheme. The scheduling algorithms are configurable, and the entire scheduling block is customizable to support proprietary scheduling algorithms. When the scheduler has selected a cell from a packet for transmission, it issues a request to the queue manager, which initiates the dequeue operation through the reassembler block.
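One common two-level hierarchy is round-robin across ports with strict priority among classes within each port. The sketch below illustrates that particular combination under stated assumptions (lower class index = higher priority); the reference design's actual algorithms are configurable and not specified here:

```python
from collections import deque

class HierScheduler:
    """Two-level scheduler: round-robin over ports, strict priority
    (lower class index first) within each port."""

    def __init__(self, n_ports, n_classes):
        self.q = [[deque() for _ in range(n_classes)] for _ in range(n_ports)]
        self.n_ports = n_ports
        self.next_port = 0

    def enqueue(self, port, cls, cell):
        self.q[port][cls].append(cell)

    def dequeue(self):
        """Scan ports round-robin from next_port; within the first port
        that has data, drain its highest-priority non-empty class."""
        for i in range(self.n_ports):
            port = (self.next_port + i) % self.n_ports
            for cls_q in self.q[port]:
                if cls_q:
                    self.next_port = (port + 1) % self.n_ports
                    return cls_q.popleft()
        return None
```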

Packets that have been scheduled are sent through the FIC interface. The current FIC interface also utilizes the System Packet Interface (SPI) 4.2 MegaCore function. A header is added to each of the cells so the fabric can switch them to the appropriate port with the appropriate priority. The cells flow through the switch fabric. The egress FIC then converts the traffic back to a SPI-4.2 interface for the egress traffic manager.

In summary, business and technology uncertainties for triple play will be in flux for the foreseeable future, driving emerging protocols and equipment upgrades. OEMs must create flexible, scalable platforms for both transport and test equipment. By using FPGA-based silicon technology to enforce QoS, equipment OEMs can meet the changing needs of their customers by rapidly delivering customized test services to specific markets. OEMs also can future-proof their traffic-manager solutions by relying on the flexibility of an FPGA.
