Document Type

Conference Paper


Available under a Creative Commons Attribution Non-Commercial Share Alike 4.0 International Licence


Electrical and electronic engineering, Communication engineering and systems, telecommunications

Publication Details

Published version on IEEE Xplore


To lower the cost of providing cloud resources, a network manager would like to estimate how busy the servers will be in the near future. This estimate is a necessary input when deciding whether to scale computing capacity up or down. We formulate the problem of estimating cloud computational requirements as an integrated framework comprising a learning stage and an action stage. In the learning stage, we use Machine Learning (ML) models to predict the video Quality of Delivery (QoD) metric for cloud-hosted servers; the knowledge gained in this process then drives resource management decisions in the action stage. We train the ML model weights conditional on the system load. Numerical results demonstrate performance gains of ≈ 59% for the proposed technique over state-of-the-art methods, achieved using fewer computational resources.
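The two-stage structure described in the abstract can be sketched as follows. This is a minimal illustrative sketch only: the regime thresholds, the mean-based predictor, and the QoD targets are all assumptions for exposition, not the paper's actual models or parameters.

```python
from statistics import mean

class LoadConditionedPredictor:
    """Learning stage (sketch): keep a separate QoD history per
    system-load regime and predict the next QoD as the mean of
    that regime's history. The regime boundaries are hypothetical."""

    def __init__(self):
        self.history = {"low": [], "medium": [], "high": []}

    def regime(self, load):
        # Map a utilisation fraction in [0, 1] to a coarse load regime.
        if load < 0.4:
            return "low"
        if load < 0.8:
            return "medium"
        return "high"

    def observe(self, load, qod):
        # Record an observed QoD sample under the matching load regime,
        # mimicking weights trained conditional on system load.
        self.history[self.regime(load)].append(qod)

    def predict(self, load):
        # Predict QoD for the regime the current load falls into;
        # return None if no data has been seen for that regime yet.
        h = self.history[self.regime(load)]
        return mean(h) if h else None

def scaling_action(predicted_qod, target_qod=0.9):
    """Action stage (sketch): scale up when predicted delivery quality
    falls below the target, scale down when comfortably above it."""
    if predicted_qod is None:
        return "hold"
    if predicted_qod < target_qod:
        return "scale_up"
    if predicted_qod > target_qod + 0.05:
        return "scale_down"
    return "hold"
```

For example, after observing degraded QoD at high load (`observe(0.9, 0.7)`), a later prediction at similar load falls below the target and yields `"scale_up"`, while consistently high QoD at low load yields `"scale_down"`.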