Deep Reinforcement Learning Approach for Material Scheduling Considering High-Dimensional Environment of Hybrid Flow-Shop Problem
Authors: Chang-Bae Gil, Jee-Hyong Lee
Manufacturing sites face various scheduling problems that must be solved to manufacture products efficiently and reduce costs. With the development of smart factory technology, many elements of manufacturing sites have become unmanned and more complex. Moreover, because several processes are mixed in a single production line, the need for efficient material scheduling has emerged. The aim of this study is to solve the material scheduling problem for many machines in a hybrid flow-shop environment using deep reinforcement learning. Most previous work has ignored conditions that are critical for solving practical problems. Such conditions make scheduling more complex and difficult to solve: they enlarge the state and action spaces and make learning in an environment with many machines problematic. In this study, a reinforcement learning approach was developed that considers practical factors, such as processing time and material transfer, to solve realistic manufacturing scheduling problems. Additionally, a method to simplify the high-dimensional environment space of manufacturing sites was established for efficient learning. Through experiments, we show that our approach can schedule materials effectively in multi-process lines, which contributes to realistic manufacturing intelligence.
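To make the idea concrete, the following is a minimal, hypothetical sketch of RL-based material dispatching in a toy hybrid flow-shop. It is not the paper's model: the paper uses deep reinforcement learning, while this sketch uses tabular Q-learning so it can run without any deep learning framework. All names and parameters (`N_MACHINES`, `PROC_TIME`, the queue-length cap `CAP`, the reward) are illustrative assumptions. The cap on observed queue lengths loosely mirrors the paper's theme of simplifying a high-dimensional state space for efficient learning.

```python
import random
from collections import defaultdict

# Toy hybrid flow-shop dispatching sketch (illustrative, NOT the paper's model).
# State: tuple of per-machine queue lengths, clipped at CAP -- a crude stand-in
# for the paper's simplification of a high-dimensional manufacturing state.
# Action: index of the machine to which the next material is dispatched.

N_MACHINES = 3
PROC_TIME = [2, 3, 4]   # assumed per-machine mean processing times
CAP = 3                 # clip observed queue lengths to shrink the state space

def step(state, action):
    """Dispatch one material to machine `action`, advance one time unit."""
    queues = list(state)
    queues[action] += 1
    # Each busy machine finishes a job with probability 1/PROC_TIME per tick
    # (a simplified, stochastic processing model).
    for m in range(N_MACHINES):
        if queues[m] > 0 and random.random() < 1.0 / PROC_TIME[m]:
            queues[m] -= 1
    reward = -sum(queues)  # penalize work-in-process to encourage balance
    next_state = tuple(min(q, CAP) for q in queues)
    return next_state, reward

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning over the clipped queue-length state."""
    Q = defaultdict(lambda: [0.0] * N_MACHINES)
    for _ in range(episodes):
        state = (0,) * N_MACHINES
        for _ in range(30):  # dispatch 30 materials per episode
            if random.random() < eps:
                action = random.randrange(N_MACHINES)
            else:
                action = max(range(N_MACHINES), key=lambda a: Q[state][a])
            nxt, r = step(state, action)
            Q[state][action] += alpha * (r + gamma * max(Q[nxt]) - Q[state][action])
            state = nxt
    return Q

if __name__ == "__main__":
    random.seed(0)
    Q = train()
    start = (0,) * N_MACHINES
    print("greedy first dispatch:", max(range(N_MACHINES), key=lambda a: Q[start][a]))
```

In a deep RL version, the `Q` table would be replaced by a neural network over a richer state (processing times, transfer status, machine availability), which is where the state-space simplification the abstract describes becomes essential.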