Research reports
Research stays or sabbaticals:
Tokyo Institute of Technology - Matsuoka Lab
Year: 2016

Research areas
  • Electronic and communications technology
  • Computer science and information technology

Data
Description
Erasmus Mundus EASED: a six-month doctoral mobility stay at Tokyo Institute of Technology.

Supercomputers have reached massive energy consumption due to computational demand, so there is an urgent need to keep them on a more scalable curve. In recent years there has been rising interest in reducing the power consumption of these systems. Recent research focuses on adjusting their power states by reducing clock frequency, applying power capping, and analyzing the thermal impact on static consumption. These techniques rely on power models to predict the power consumption of the infrastructure. However, power consumption in these complex systems involves a vast number of interacting variables of different natures, possibly with non-linear dependencies, so extracting the relationships between the most representative parameters and the power consumption requires enormous effort and knowledge of the problem.

We propose an automatic methodology based on Grammatical Evolution to obtain a model that minimizes the power prediction error of a supercomputer node incorporating both CPU and GPU devices. We monitor the system at runtime using performance counters, frequency, and power measurements. This evolutionary technique provides both feature engineering and symbolic regression to infer accurate models that depend only on the most suitable variables, requiring little designer expertise or effort. Our work improves the possibilities of deriving proactive energy-efficient policies in supercomputers that are simultaneously aware of complex considerations of different natures.

Data were collected by monitoring real measurements from a node of TSUBAME-KFC. This supercomputer is a state-of-the-art prototype for the next-generation TSUBAME 3.0, and achieved world No. 1 on the Green500 ranking in November 2013 and June 2014. Each of its nodes consists of 2 CPUs and 4 GPUs, based on the Intel Xeon E5-2620 v2 (Ivy Bridge) and the NVIDIA Tesla K80.

This work contributes to the accurate power modeling of CPU-GPU based supercomputers. Our Grammatical Evolution (GE) based automatic approach does not require designer expertise to describe the complex relationships between parameters and power consumption sources. Results show average errors of ±3 W for GPU modeling and ±1 W for CPU modeling, in both training and testing phases. Future work will aim to provide a complete model for the entire CPU-GPU node by applying the proposed methodology.
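To make the methodology concrete, the core of Grammatical Evolution can be sketched as a genotype-to-phenotype mapping: a genome of integer codons is decoded through a grammar into a candidate model expression, which is then scored against monitored data. The grammar, the variable names (`cpu_util`, `freq`, `temp`), and the samples below are illustrative assumptions for demonstration, not the actual performance counters or grammar used on TSUBAME-KFC.

```python
import random

# Toy BNF-style grammar for candidate power-model expressions.
# Terminals such as "cpu_util", "freq", and "temp" are hypothetical
# monitored variables, stand-ins for the real performance counters.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"], ["<const>"]],
    "<op>": [["+"], ["-"], ["*"]],
    "<var>": [["cpu_util"], ["freq"], ["temp"]],
    "<const>": [["0.5"], ["2.0"], ["10.0"]],
}

def map_genome(genome, symbol="<expr>", depth=0, max_depth=5):
    """Decode a list of integer codons into an expression string.

    Each codon selects a production rule modulo the number of choices for
    the current non-terminal; recursion is capped by forcing a terminal
    branch. Returns the decoded text and the unconsumed genome tail.
    """
    if symbol not in GRAMMAR:
        return symbol, genome  # terminal symbol: emit verbatim
    choices = GRAMMAR[symbol]
    if symbol == "<expr>" and (depth >= max_depth or not genome):
        choices = [["<var>"]]  # stop runaway recursion
    codon, rest = (genome[0], genome[1:]) if genome else (0, [])
    production = choices[codon % len(choices)]
    parts = []
    for sym in production:
        text, rest = map_genome(rest, sym, depth + 1, max_depth)
        parts.append(text)
    return " ".join(parts), rest

def prediction_error(expr, samples):
    """Mean absolute power-prediction error of a candidate model."""
    total = 0.0
    for row in samples:
        predicted = eval(expr, {"__builtins__": {}}, dict(row))
        total += abs(predicted - row["power"])
    return total / len(samples)

def random_search(samples, generations=200, genome_len=12, seed=0):
    """Minimal stand-in for the evolutionary loop: the real method evolves
    genomes with selection, crossover, and mutation; here we just sample
    random genomes and keep the best-scoring decoded model."""
    rng = random.Random(seed)
    best_expr, best_err = None, float("inf")
    for _ in range(generations):
        genome = [rng.randrange(100) for _ in range(genome_len)]
        expr, _ = map_genome(genome)
        err = prediction_error(expr, samples)
        if err < best_err:
            best_expr, best_err = expr, err
    return best_expr, best_err
```

For instance, the genome `[0, 1, 0, 2, 2, 1]` decodes to the model `cpu_util * 2.0`, and its error is evaluated directly against the monitored power column of each sample. The appeal of the mapping is exactly what the report describes: the search operates on flat integer genomes, while the grammar guarantees that every decoded candidate is a syntactically valid model built only from the monitored variables.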
International
Yes
Location
Tokyo, Japan
Type
Start date
11/10/2016
End date
10/05/2017

This activity belongs to the research reports.

Participants

Related research groups, departments, centres, and R&D institutes
  • Creator: Research group: Laboratorio de Sistemas Integrados (LSI)
  • Department: Ingeniería Electrónica