| Abstract |
Plastic injection molding plays a critical role in modern manufacturing by enabling high-volume production of complex plastic parts. However, achieving both high product quality and profitability remains a persistent challenge due to dynamic operating conditions such as fluctuating electricity prices, variable environmental factors, and mold degradation. This study proposes a deep reinforcement learning (DRL)-based framework for real-time optimization of injection molding process parameters, integrating both quality assurance and cost-efficiency into a unified control objective. A profit function was formulated to reflect real-world economic constraints, incorporating resin usage, mold maintenance cost, and time-of-use electricity pricing. To support efficient offline training, surrogate models for quality classification and cycle time regression were developed to simulate process outcomes. These models enabled the training of DRL agents using Soft Actor-Critic (SAC) and Proximal Policy Optimization (PPO), which were selected for their stability and sample efficiency in continuous control tasks. Experimental validation showed that the proposed DRL framework effectively adapts to seasonal and operational variations, maintaining product quality while maximizing profit. Compared to a genetic algorithm (GA), a widely used global optimization method, the DRL models achieved comparable profitability with up to 135× faster inference, making them well suited for real-time deployment. The framework's scalability and adaptability suggest strong potential for broader application in intelligent, data-driven manufacturing systems.
|