Agilent Technologies Inc.

EEsof EDA
395 Page Mill Road
Palo Alto, CA, 94303
Website: http://www.agilent.com/find/eesof





Systematic Troubleshooting of HSDPA Mobile Data Throughput

By Richard Maguire
Next-generation HSDPA mobile devices promise 3.6 Mb/s download speeds to UMTS customers. Unlike current network services, however, this new high-speed capability is not exploited by the mobiles themselves; customers use the device as a modem and rely on their PCs to fully utilize high-speed data. The combination of high-speed cellular data and end-user computers introduces new and sometimes confusing bottlenecks into the traditional mobile/cell system, so new understanding and techniques are needed to deliver a satisfactory end-user experience.
The aim of this paper is to introduce mobile development engineers to the factors impacting data throughput when customers actually connect the mobile device. Areas addressed are:
1. The test setup and how it has changed with HSDPA
2. A basic introduction to the layers involved and how they work together
3. Tips for tuning the system and ensuring your part of the design performs well

The Mobile Data System

Figure 1. The mobile data system
Figure 1 shows a proposed system for R&D testing of data throughput. While not exhaustive, this system emulates conditions close to customer use while giving some control over system variables useful for tuning throughput. For example, in place of the Internet we use a dedicated server PC, eliminating the uncontrolled latencies experienced on the real network. Even with this reduced system you may still see greatly reduced throughput, so some discussion of the system components is warranted.

System Protocol Layers

Figure 2. System protocol layers
Figure 2 shows a simplified diagram of the protocol layers involved in our mobile data system. When faced with poor data throughput, we can see from this diagram that many layers are involved. The bottleneck layers discussed in this paper are highlighted in yellow; we will get data transfer going using the fewest layers possible and work outward from there.

Radio Bearer Test Mode

Starting with the least complexity minimizes troublesome layer interactions and lets us validate the system core before adding more layers. For HSDPA devices we use Radio Bearer Test Mode to validate the layers common to just the cell and mobile. During data transfer, RB Test Mode exercises only RF and MAC, providing the connection we need to validate the system at the lowest layer. This paper assumes the mobile has a solid RF design, so when using RB Test Mode to validate MAC, set the relative power of the downlink data channels high to eliminate any throughput issues caused by RF. Figure 3 shows the basic protocol flow during RB Test Mode, highlighting the focus on proving MAC before adding further system layers. Also ensure the network emulator has enough flexibility and bandwidth to stress MAC to maximum throughput.

Figure 3. Radio Bearer Test Mode
Once we successfully transfer 3.6 Mb/s in RB Test Mode, we can be confident in the mobile/cell foundation for throughput testing and add further system components. We keep the network emulator settings used for RB Test Mode and connect the client and server computers to the system. The client/server pair provides a means to test the next layers in our system, using UDP as our tool.

UDP Flood

Figure 4. UDP flood
Figure 4 shows the next troubleshooting step: UDP flood. UDP flooding is a server-initiated process that sends data directly to the client IP address; since no real service has been negotiated, the client has no entities listening for UDP and simply discards the data at the IP layer. UDP flooding is used to validate RLC at 3.6 Mb/s before moving on to our first true customer application: streaming video.
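A UDP flood needs nothing more than a raw datagram sender. The following Python sketch illustrates the idea; the destination address, port, and payload size are illustrative assumptions, not values from this paper.

```python
import socket
import time

def udp_flood(dest_ip, dest_port, seconds=2.0, payload_size=1400):
    """Blast UDP datagrams at the client and report the offered rate in Mb/s.

    No listener is required on the client side: with no entity bound to the
    port, the client's IP stack simply discards each datagram on arrival.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * payload_size      # dummy data; content is irrelevant
    sent_bytes = 0
    start = time.monotonic()
    while time.monotonic() - start < seconds:
        sent_bytes += sock.sendto(payload, (dest_ip, dest_port))
    elapsed = time.monotonic() - start
    sock.close()
    return sent_bytes * 8 / elapsed / 1e6

# Example: flood a client PC at a hypothetical address
# rate = udp_flood("192.168.0.10", 9999)
# print(f"offered load: {rate:.1f} Mb/s")
```

On the server PC the offered load should comfortably exceed 3.6 Mb/s; the network emulator's throughput measurement then shows what actually survives the air interface.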

Streaming Video

Figure 5. Streaming Video
Streaming video servers use UDP to send video data, allowing us to validate client and server issues and system interconnections as the next step in troubleshooting. Since streaming servers communicate over IP, validate your client/server setup over a direct connection, via a LAN crossover cable or through a hub, before connecting the entire system together as shown in Figure 5. Our experience shows video streaming can be tricky to get started, so a direct client/server connection lets you verify application settings, operation, and throughput before running the service over HSDPA.

Running the application through the complete mobile data system will likely yield throughput below 3.6 Mb/s, requiring some application tuning. One quick check for an application issue is to log throughput with more than one movie streaming. Using the network emulator's throughput measure, or one provided in the OS, we see whether raw data throughput meets our 3.6 Mb/s goal. If a single movie does not sustain 3.6 Mb/s but multiple movies together reach it, we have validated the system's capacity for high data rates at this stage, but the application should be tuned to deliver the best experience with just one movie. For one popular streaming server we found a configuration file that allows editing the data transfer buffer/window sizes; after doubling the window sizes, you may find you now get maximum data throughput with only a single file playing. When using video streaming to verify system throughput, movies made for web delivery will not give adequate results, so ensure the video has been sampled at a high enough rate to stress system capacity.
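If your OS lacks a convenient throughput display, raw interface counters work just as well. The sketch below is a Linux-specific assumption (it reads `/proc/net/dev`), and the interface name is a placeholder for whatever carries the mobile connection on your client PC.

```python
import time

def iface_throughput_mbps(iface, interval=1.0):
    """Sample the received-byte counter for `iface` twice, `interval` seconds
    apart, and return the download rate in Mb/s (Linux /proc/net/dev only)."""
    def rx_bytes():
        with open("/proc/net/dev") as f:
            for line in f:
                name, sep, rest = line.partition(":")
                if sep and name.strip() == iface:
                    return int(rest.split()[0])   # first field: rx bytes
        raise ValueError(f"interface {iface!r} not found")
    before = rx_bytes()
    time.sleep(interval)
    return (rx_bytes() - before) * 8 / interval / 1e6

# Example: sample the hypothetical dial-up interface while a movie streams
# print(f"{iface_throughput_mbps('ppp0'):.2f} Mb/s")
```

Logging this once per second while one, then several, movies stream makes the single-stream versus multi-stream comparison described above easy to capture.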

FTP

Transferring files using FTP continues our testing by introducing the TCP layers and completing our mobile data system as shown in Figure 1. Three areas in the system will likely need tuning to achieve full throughput, and we discuss each: the TCP Receive Window, the Server Send Buffer, and the Server Socket Buffer.

TCP Receive Window

During FTP transfers, the client receives data and uses a buffer to keep the transfer running smoothly. If the server sends data too fast, this client buffer fills up, at which point the client tells the server to stop sending. This start/stop/start sequence slows throughput but can be addressed by increasing the size of this buffer, the TCP Receive Window. Searching the Internet for this term turns up ample information and even free tools for setting the value on most operating systems. Default settings on most operating systems are too small for cellular data and must be changed. For 3.6 Mb/s throughput, we have found a value around 64 KB sufficient for this type of testing.
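On most operating systems the receive window is adjusted with a registry key or sysctl, as the tools mentioned above do; at the socket level the same knob is `SO_RCVBUF`. The Python sketch below shows the idea for a single client socket; the 64 KB figure is the one from the text.

```python
import socket

RX_WINDOW = 64 * 1024   # ~64 KB, the value found adequate for 3.6 Mb/s

def make_client_socket(rx_window=RX_WINDOW):
    """Create a TCP socket with an enlarged receive buffer.

    SO_RCVBUF must be set before connect() so the stack can advertise the
    larger window during the TCP handshake.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rx_window)
    return s

s = make_client_socket()
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
# Note: Linux reports back roughly double the requested value, which it
# reserves for kernel bookkeeping; other systems echo the request as-is.
s.close()
```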

Server Send Buffer

Most FTP server software uses a send buffer to transmit definable-sized blocks of data in short bursts rather than a continuous stream to the client. If this buffer is too small, and most are by default, high-speed data cannot be achieved in our system. We have found most default buffer sizes too small to counteract the long delay times in mobile data systems; setting this buffer to match the 64 KB TCP Receive Window addresses this bottleneck. The combined sizing of send and receive buffers is sometimes calculated as the Bandwidth Delay Product, but use caution with this method, as the usual calculation will not give the correct size for our system. Typical BDP calculations use ping to determine the delay between client and server; this works for our system too, provided the ping is run during actual data transfer rather than while the connection is idle. In mobile data systems the delay grows as data transfer increases, so a ping taken at idle again yields buffer sizes that are too small. Regardless, 64 KB should be a good enough number for 3.6 Mb/s systems.
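The BDP arithmetic itself is a one-liner and makes a useful sanity check on the 64 KB figure. The round-trip times below are illustrative assumptions, not measurements from this paper.

```python
def bdp_bytes(bandwidth_mbps, rtt_ms):
    """Bandwidth Delay Product: bytes that must be in flight to keep the
    link full, i.e. (bits per second / 8) * round-trip time in seconds."""
    return int(bandwidth_mbps * 1_000_000 / 8 * rtt_ms / 1000)

# 3.6 Mb/s link, assuming a loaded-link RTT of ~150 ms (pinged during transfer)
print(bdp_bytes(3.6, 150))   # 67500 bytes, close to the 64 KB rule of thumb

# The same link pinged while idle might show ~50 ms, suggesting a buffer
# of only ~22 KB -- too small, exactly the pitfall described above
print(bdp_bytes(3.6, 50))    # 22500 bytes
```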

Server Socket Buffer

Most available FTP servers do not give access to this necessary buffer size, but fortunately a few do. The socket buffer size can be tied directly to the send buffer size; if it is too small, the send buffer will overflow the socket buffer and poor throughput will result. Use an FTP server that gives access to both the send and socket buffers, setting the socket buffer a bit larger than the send buffer; 65 KB or more is good enough for our system. Be mindful not to set buffer sizes too large, as some applications then stall while waiting for the buffer to fill before communicating between server and client. Using the 64/65 KB values presented in this paper will likely achieve the system performance needed to validate your product's throughput numbers.
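Real FTP servers expose both sizes as configuration settings rather than code, but the relationship is easy to show at the socket level. In this Python sketch the send block and socket buffer values follow the 64/65 KB guidance above; treat it as an illustration of the sizing rule, not of any particular server's internals.

```python
import socket

SEND_BLOCK = 64 * 1024    # application-level send block, matching the
                          # 64 KB TCP Receive Window on the client
SOCKET_BUF = 65 * 1024    # socket buffer set a bit larger than the block

def make_server_socket(sndbuf=SOCKET_BUF):
    """Create a server-side TCP socket with an enlarged socket send buffer,
    so one full application send block fits without overflowing it."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, sndbuf)
    return s

srv = make_server_socket()
granted = srv.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
assert granted >= SEND_BLOCK   # the whole 64 KB block fits at once
srv.close()
```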

Summary

Figure 6. Summary
Figure 6 represents our completed mobile data system, highlighting the areas that should be understood and configured before putting much weight on your data throughput results. While some in this business may be content with data performance measured at the simple cell/mobile level, end customers care about the throughput delivered at the system level, so take a dedicated approach to ensuring your device functions not just as you designed it, but as customers will use it. Good luck with HSDPA!

Glossary of Terms

BDP — Bandwidth Delay Product
HSDPA — High-Speed Downlink Packet Access
LAN — Local Area Network
PING — Packet Internet Groper
TCP — Transmission Control Protocol
UDP — User Datagram Protocol
UMTS — Universal Mobile Telecommunications System

About the Author

Richard Maguire is a product marketing engineer for Agilent Technologies. He can be reached at (509) 921-4421; richard_maguire@agilent.com.

© 2006 Advantage Business Media