International Journal of Information Technology and Computer Science (IJITCS)

ISSN: 2074-9007 (Print), ISSN: 2074-9015 (Online)

Published By: MECS Press

IJITCS Vol.11, No.2, Feb. 2019

Tuning Stacked Auto-encoders for Energy Consumption Prediction: A Case Study



Muhammed Maruf Öztürk

Index Terms

Deep Learning; Stacked Auto-Encoder (SAE); Energy Consumption Prediction


Energy is a requirement for electronic devices, and the processor is one of the most significant computer components in terms of energy consumption. Concern about the energy consumption of computers has grown considerably in recent years. Accurate information about a processor's energy consumption makes it possible to predict its energy flow characteristics. However, traditional classifiers may not improve the accuracy of energy consumption prediction. Deep learning shows great promise for predicting the energy consumption of a processor, and the stacked auto-encoder has emerged as a robust type of deep learning model. This work investigates the effects of tuning a stacked auto-encoder for predicting the energy consumption of a computer processor. A grid-search-based training method is adopted to explore the parameter space, and a data preprocessing algorithm is proposed to prepare the data for prediction. According to the obtained results, the method provides, on average, a 0.2% accuracy improvement along with a remarkable reduction in parameter tuning error. Further, in receiver operating characteristic (ROC) curve analysis, the tuned stacked auto-encoder was able to increase the area under the curve by up to 0.5.
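The grid-search-based tuning described in the abstract can be sketched in a few lines. The following is a minimal illustration only: the hyperparameter names, value ranges, and scoring function are assumptions for demonstration, not the paper's actual grid or training procedure; in practice `evaluate` would train a stacked auto-encoder on the preprocessed energy data and return a validation score.

```python
from itertools import product

# Candidate hyperparameter values for a stacked auto-encoder
# (illustrative ranges; the paper's actual grid is not reproduced here).
grid = {
    "hidden_units": [32, 64, 128],
    "learning_rate": [0.01, 0.001],
    "epochs": [50, 100],
}

def evaluate(params):
    """Placeholder for training an SAE and returning a validation score.
    A dummy function stands in here so the sketch is runnable."""
    return 1.0 / (1 + abs(params["hidden_units"] - 64)) - params["learning_rate"]

def grid_search(grid, score_fn):
    """Exhaustively score every combination in the grid; keep the best."""
    best_params, best_score = None, float("-inf")
    for values in product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = grid_search(grid, evaluate)
print(best["hidden_units"])  # prints 64
```

Exhaustive grid search is simple and reproducible but scales multiplicatively with the number of parameters; for larger spaces, a validation-based early stop per candidate keeps the cost manageable.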

Cite This Paper

Muhammed Maruf Öztürk, "Tuning Stacked Auto-encoders for Energy Consumption Prediction: A Case Study", International Journal of Information Technology and Computer Science (IJITCS), Vol.11, No.2, pp.1-8, 2019. DOI: 10.5815/ijitcs.2019.02.01

