Abstract

Popular mobile apps have hundreds of millions of users. Consequently, servers that support such apps can receive as many as hundreds of thousands of user requests every second. Certain applications trigger requests from a large number of devices such that the requests arrive at the server at nearly the same time. This causes a sharp spike in the number of user requests to be processed by the server and congestion throughout the network stack, which can result in errors and dropped user requests. This disclosure presents techniques that schedule incoming user requests such that the histogram of load-versus-time is relatively flat and smooth. The flatter load-versus-time curve thus obtained results in higher system availability, lower error rates, less strain on load balancers, and greater user satisfaction.
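One way to flatten the load-versus-time histogram, shown in the minimal sketch below, is to spread synchronized, non-urgent requests over a short window by adding a random delay on the client before sending. The function and parameter names (`send_with_jitter`, `max_jitter_seconds`) are illustrative assumptions, not terminology from the disclosure itself.

```python
import random
import time

def send_with_jitter(send_request, max_jitter_seconds=30.0):
    """Delay a non-urgent request by a random offset before sending it.

    When many devices would otherwise send at nearly the same instant,
    drawing each delay uniformly from [0, max_jitter_seconds] spreads
    arrivals over the window, flattening the server's request-rate curve.
    """
    delay = random.uniform(0.0, max_jitter_seconds)
    time.sleep(delay)  # a real mobile client would use an async timer instead
    return send_request()
```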

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
