Overlapped Data Processing Scheme for Accelerating Training and Validation in Machine Learning

Jinseo Choi, Donghyun Kang

Research output: Contribution to journal › Article › peer-review


Abstract

For several years, machine learning (ML) technologies have opened up new opportunities for solving traditional problems on top of a rich set of hardware resources. Unfortunately, ML workloads sometimes waste available hardware resources (e.g., CPU and GPU) because they spend considerable time waiting for a previous step of the ML procedure to finish. In this paper, we first study the data flows of the ML procedure in detail to identify avoidable performance bottlenecks. Then, we propose ol.data, the first software-based data processing scheme that aims to (1) overlap the training and validation steps within one epoch or across two adjacent epochs, and (2) perform validation steps in parallel, which significantly improves not only the computation time but also resource utilization. To confirm the effectiveness of ol.data, we implemented a convolutional neural network (CNN) model with ol.data and compared it with two traditional approaches, NumPy (i.e., the baseline) and tf.data, on three different datasets. As a result, we confirmed that ol.data reduces the inference time by up to 41.8% and increases the utilization of CPU and GPU resources by up to 75.7% and 38.7%, respectively.
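
The paper's ol.data implementation is not reproduced here. As a rough, hedged illustration of the overlapping idea described in the abstract, the following Python sketch runs the validation of epoch N on a background thread while the training of epoch N+1 proceeds; the function names (train_one_epoch, validate) and the use of ThreadPoolExecutor are assumptions made for illustration only, not the authors' code.

    # Minimal sketch of the overlapping idea (assumed, not the paper's ol.data code):
    # validation of epoch N runs on a worker thread while training of epoch N+1
    # continues on the main thread.
    from concurrent.futures import ThreadPoolExecutor
    import time

    def train_one_epoch(epoch):
        time.sleep(0.5)                      # stand-in for the GPU-bound training work
        return f"weights@{epoch}"

    def validate(weights):
        time.sleep(0.3)                      # stand-in for the CPU-bound validation work
        return f"val_result({weights})"

    EPOCHS = 3
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = None                       # validation job of the previous epoch
        for epoch in range(EPOCHS):
            weights = train_one_epoch(epoch)
            if pending is not None:
                print(pending.result())      # collect the overlapped validation result
            pending = pool.submit(validate, weights)  # overlaps with the next training epoch
        print(pending.result())              # drain the final validation job

In this sketch the validation of each epoch overlaps with the training of the following epoch, which is one of the two overlap patterns (adjacent epochs) the abstract mentions; running several validation shards in parallel would require additional worker threads.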

Original language: English
Pages (from-to): 72015-72023
Number of pages: 9
Journal: IEEE Access
Volume: 10
DOIs
State: Published - 2022

Keywords

  • CPU/GPU utilization
  • Machine learning
  • multiple threads
  • overlapping
  • TensorFlow
