Using History to Improve Mobile Application Adaptation
Prior work has shown the value of changing application fidelity to adapt to varying resource levels in a mobile environment. Choosing the right fidelity requires us to predict its effect on resource consumption. In this paper, we describe a history-based mechanism for such predictions. Our approach generates predictors that are specialized to the hardware on which the application runs, and to the specific input data on which it operates. We are able to predict the CPU consumption of a complex graphics application to within 20% and the energy consumption of fetching and rendering web images to within 15%.
A key strategy in mobile computing is adapting application behavior to resource availability and user goals. Changing application fidelity — the quality of results presented to the user — has been shown to be effective in adapting application resource consumption to varying resource availability [7, 11, 12]. Fidelity is an application-specific notion of the "goodness" of a computed result or data object: for example, the JPEG Quality Factor of a lossily compressed image, or the precision bound of a floating point computation. Naturally, there is a tradeoff between fidelity and resource consumption: a lower fidelity consumes fewer resources, but at the cost of presenting a more degraded result to the user. Fidelity is not always a single real number: there could be multiple fidelity metrics, each of which could be discrete or continuous. The ultimate goal of fidelity adaptation is to improve a mobile user's computing experience by delivering results quickly, with low battery drain and minimal distraction.

Consider a graphics computation that operates on a 3-D model in a mobile augmented reality application. The latency of the computation depends both on the CPU consumed by the computation and on the CPU demands of other applications. The CPU consumption depends on the fidelity — the resolution of the model. If we could predict the CPU consumption as a function of fidelity, we could combine this with CPU load information to predict latency. This lets us characterize the tradeoff between fidelity and latency, and pick good operating points: for good interactive response, we might always pick the highest fidelity that keeps the latency below 200 ms. In this paper we show how history-based prediction enables the system to learn an application's behavior and predict its resource consumption.
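The operating-point selection described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names `pick_fidelity` and `predict_cpu`, the representation of load as an available-CPU fraction, and the discrete fidelity set are all assumptions made for the example.

```python
def pick_fidelity(fidelities, predict_cpu, available_cpu, latency_budget=0.2):
    """Return the highest fidelity whose predicted latency fits the budget.

    fidelities     -- candidate fidelity levels (e.g. model resolutions)
    predict_cpu(f) -- predicted CPU time in seconds at fidelity f
                      (hypothetical predictor interface)
    available_cpu  -- fraction of the CPU available to this application,
                      derived from load information (0 < available_cpu <= 1)
    latency_budget -- latency target in seconds (200 ms for interactivity)
    """
    best = None
    for f in sorted(fidelities):
        # Less available CPU stretches the same work over more wall-clock time.
        latency = predict_cpu(f) / available_cpu
        if latency <= latency_budget:
            best = f  # keep the highest fidelity that still fits
    return best
```

With a predictor that charges 50 ms of CPU per resolution step and an otherwise idle machine, fidelity 4 is the highest level that keeps latency at or under 200 ms; halving the available CPU drops the choice to fidelity 2.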
We have augmented Odyssey, an operating system platform for adaptation, with a history-based prediction system that monitors, logs, and predicts application resource consumption as a function of the fidelity. Our initial experience suggests that history-based prediction is feasible. We can predict to within 20% the CPU consumption of a 3-D graphics computation — typical of those found at the heart of augmented reality applications. We can also predict the energy consumption of fetching lossily compressed web images to within 15%. Our current prototype has a CPU overhead of 0.22% for a typical application; we expect the overhead to be even lower in a production version of the code.
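The monitor-log-predict cycle can be sketched as a small history-based predictor. This is an illustrative stand-in, not Odyssey's actual mechanism: the class name, the choice of a single scalar fidelity, and the simple least-squares linear fit are all assumptions; a real system would use whatever functional form fits the application's observed behavior.

```python
class HistoryPredictor:
    """Learn resource consumption as a function of fidelity from logged runs.

    A minimal sketch: each completed operation logs its (fidelity, measured
    consumption) pair, and prediction fits consumption = a * fidelity + b
    by ordinary least squares over the accumulated history.
    """

    def __init__(self):
        self.samples = []  # list of (fidelity, measured consumption) pairs

    def log(self, fidelity, consumption):
        """Record one observed run for later fitting."""
        self.samples.append((fidelity, consumption))

    def predict(self, fidelity):
        """Predict consumption at the given fidelity from the history."""
        n = len(self.samples)
        sx = sum(f for f, _ in self.samples)
        sy = sum(c for _, c in self.samples)
        sxx = sum(f * f for f, _ in self.samples)
        sxy = sum(f * c for f, c in self.samples)
        # Closed-form least-squares slope and intercept.
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        b = (sy - a * sx) / n
        return a * fidelity + b
```

Because the predictor is trained only on runs observed on this machine with this application's inputs, it is automatically specialized to the hardware and input data, as the paper's approach requires.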