Abstract
Dynamic systems and processes can be controlled to drive them toward a desired (e.g., optimal) state. Depending on the system, the controller may range from a simple model to a complex algorithm. For real-world problems, we can collect data and apply machine learning (ML) approaches to develop these control systems; reinforcement learning is a common choice. In this paper we present a simplified method for problems with three characteristics: a relatively small number of actions; the ability to predict the resulting state for a given action; and, for each state, a default action that keeps the system stable in the absence of external factors. Our approach permits fast model training and uses a relatively simple control process. As an example, this approach can control the jitter buffer of a real-time telecommunications system that encounters stochastic packet losses and delays.
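The three characteristics above suggest a simple control loop: for each candidate action, predict the resulting state, score it with a learned value function, and keep the default action unless another action is predicted to do better. The following is a minimal sketch under stated assumptions; `predict_next_state`, `value`, `DEFAULT_ACTION`, and the `margin` parameter are hypothetical stand-ins, not names from the disclosure, and the trivial predictor and value function stand in for trained ML models.

```python
# Hypothetical sketch of the control loop implied by the abstract.
# In practice, predict_next_state and value would be trained ML models;
# here they are trivial stand-ins so the loop structure is runnable.

def predict_next_state(state, action):
    # Stand-in predictor: assume each action shifts the state by its value.
    return state + action

def value(state):
    # Stand-in value function: prefer states close to a target of 0.
    return -abs(state)

DEFAULT_ACTION = 0  # the action that keeps the system stable absent external factors

def choose_action(state, actions, margin=0.0):
    """Pick the action whose predicted next state scores best.

    The default action is kept unless another action improves the
    predicted value by more than `margin`, which biases the controller
    toward stability.
    """
    best_action = DEFAULT_ACTION
    best_value = value(predict_next_state(state, DEFAULT_ACTION))
    for action in actions:
        v = value(predict_next_state(state, action))
        if v > best_value + margin:
            best_action, best_value = action, v
    return best_action
```

Because the action set is small, this exhaustive one-step lookahead stays cheap, which is what makes the approach simpler and faster to train than a full reinforcement-learning pipeline.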
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
González, Pablo Barrera and Creusen, Ivo, "Fixed-policy value-function prediction for stable control applications using machine learning", Technical Disclosure Commons, (May 06, 2021)
https://www.tdcommons.org/dpubs_series/4280