Choosing the size of a jitter buffer for calls conducted over a network involves a trade-off between the probability of buffer under-runs and user-perceived delay, which impedes communication. The techniques of this disclosure perform data-driven optimization of the jitter buffer for audio and/or video calls conducted over the internet. Event logs containing data about network conditions experienced during completed calls are fed to an optimizer that calculates the optimal jitter buffer size. The optimizer's output serves as the training target for offline training of a machine-learning model that predicts optimal jitter buffer size. The trained model is then employed to control jitter buffer size during a call. Controlling jitter buffer size with these techniques can reduce buffer under-runs without increasing average delay; alternatively, depending on the weights in the optimization criterion, the techniques can reduce average delay while keeping the probability of buffer under-run unchanged.
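The disclosure does not specify the form of the optimizer, but the trade-off it describes can be illustrated with a minimal sketch: given logged per-packet delay samples from a completed call, pick the candidate buffer size that minimizes a weighted sum of under-run probability and delay. The function name, the weights `w_underrun` and `w_delay`, and the cost form are illustrative assumptions, not the disclosed method.

```python
def optimal_buffer_size(delays_ms, w_underrun=1.0, w_delay=0.01):
    """Illustrative optimizer: pick the buffer size (in ms) that
    minimizes a weighted cost of under-run probability and delay.
    The weights and cost form are assumptions for illustration."""
    candidates = sorted(set(delays_ms))
    n = len(delays_ms)
    best_size, best_cost = None, float("inf")
    for size in candidates:
        # A packet whose delay exceeds the buffer size arrives too
        # late to be played out, i.e. it causes a buffer under-run.
        underrun_prob = sum(d > size for d in delays_ms) / n
        # Larger buffers reduce under-runs but add user-perceived delay;
        # the weights trade one against the other, as in the disclosure.
        cost = w_underrun * underrun_prob + w_delay * size
        if cost < best_cost:
            best_size, best_cost = size, cost
    return best_size
```

Run over many logged calls, the resulting sizes could serve as training targets for the offline machine-learning model; raising `w_underrun` relative to `w_delay` favors fewer under-runs at the cost of longer buffers, and vice versa.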
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Creusen, Ivo; Walter, Oliver; and Lundin, Henrik. "Control of jitter buffer size using machine learning." Technical Disclosure Commons, December 06, 2017.