Mark Schott


Traditional computing systems use simple, fast-to-compute heuristics to inform decisions such as allocating fractions of input/output bandwidth when switching among tasks. Although fast to execute, heuristics alone fail to account for the context and overall behavior of the computing system and cannot make decisions that account for hardware components, energy resources, or the environment in which the device operates. A machine-learning-assisted or neural-network-based scheduler can make inferences or predictions based on system information, conditions, and dynamics, and can therefore allocate system resources more optimally during input and output operations. However, such a scheduler can be slow to run, which may interfere with or disrupt normal computing system operations without an appreciable benefit. A hybrid, machine-learning-assisted scheduler combines insights from a machine-learning model with traditional heuristic rules so that the computing system makes more accurate predictions with respect to input and output operation scheduling while still enjoying the fast response time of traditional heuristics.
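The hybrid design described above can be sketched in code. The following is a minimal illustration, not the disclosed implementation: every allocation decision takes the fast heuristic path, while a slow model (represented here by a hypothetical stub that nudges the heuristic's weights toward a latency target) runs only periodically, off the hot path. All names, parameters, and the adjustment rule are assumptions for illustration.

```python
def heuristic_share(task_priority, weights):
    """Fast path: compute a bandwidth fraction from priority and tuned weights."""
    base, scale = weights
    return min(1.0, base + scale * task_priority)

def model_adjust(weights, observed_latency_ms, target_latency_ms=10.0):
    """Slow path: stand-in for an ML model that retunes the heuristic's weights.

    A real scheduler would run model inference here; this stub just nudges
    the weights down when observed latency exceeds the target.
    """
    base, scale = weights
    error = (observed_latency_ms - target_latency_ms) / target_latency_ms
    return (max(0.0, base - 0.01 * error), max(0.0, scale - 0.01 * error))

class HybridScheduler:
    """Hybrid scheduler: heuristic on every decision, model adjustment rarely."""

    def __init__(self, adjust_every=100):
        self.weights = (0.1, 0.2)        # initial heuristic parameters (assumed)
        self.adjust_every = adjust_every  # how often the slow model path runs
        self.decisions = 0

    def allocate(self, task_priority, observed_latency_ms):
        # Fast heuristic decision on every call.
        share = heuristic_share(task_priority, self.weights)
        self.decisions += 1
        # Invoke the expensive model only occasionally, so its cost is
        # amortized and normal operation is not disrupted.
        if self.decisions % self.adjust_every == 0:
            self.weights = model_adjust(self.weights, observed_latency_ms)
        return share
```

The key design point the sketch captures is that model inference never sits on the per-decision path: the heuristic keeps its fast response time, and the model's learned context reaches it indirectly, through periodically retuned parameters.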

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.