OBRABOTKAMETALLOV Vol. 23 No. 3 2021 MATERIAL SCIENCE. EQUIPMENT. INSTRUMENTS

Research methodology

Milling operations were performed on a DMG MORI DMU 50 universal milling machine with 9 kW power and a maximum rotation speed nmax = 8,000 min⁻¹. The workpiece was made of austenitic stainless steel AISI 321 with the chemical composition, wt. %: C ≤ 0.12; Si ≤ 0.8; Mn ≤ 2.0; P ≤ 0.035; S ≤ 0.02; Ni 9–11; Cr 17–19; Ti < 0.8; Fe – bal. The cutting tools were fine-grained hard-alloy (carbide) mills with a multilayer (TiN and TiAlN) PVD coating, with diameters of 6, 8, 10, and 12 mm, from Sandvik Coromant.

During the experiments, tool radius wear (r, mm) was measured by levels using a Heidenhain TT140 contact sensor. Surface roughness after milling was measured with a SURFCOM 1800D profilometer; according to the standard, the error of this device is 3 %. A 50 % Gaussian filter was applied. The basic length (cutoff step) was set to 0.8 mm (ISO 4288:1996) for all measurements, since the expected roughness range was 0.5 < Rz ≤ 10 µm. Each trace was performed three times in the direction of the tool feed.

The experimental design included the controlled factors ap, fz, γ, D, and V, and the uncontrolled factors W and r. The response parameter was the surface roughness Rz.

After the experiments, ANN models were built in Python using the TensorFlow and Keras libraries for neural network creation, training, and regularization; NumPy for array operations; and Scikit-learn for data preprocessing. The experimental data were standardized and normalized and then divided into training and test sets of 70 % and 30 % of the total number of experiments, corresponding to 28 training runs and 12 test runs. The neural network training algorithm used was the back-propagation (BP) method, which calculates the gradient of the loss function with respect to the weights of the neural network.
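The 70/30 split and preprocessing described above can be sketched with Scikit-learn as follows. The synthetic array, the factor ordering (ap, fz, γ, D, V, W, r), and fitting the scaler on the training set only are illustrative assumptions; the paper's actual 40-run dataset is not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((40, 7))   # placeholder inputs: ap, fz, gamma, D, V, W, r
y = rng.random(40)        # placeholder response: Rz

# 70 % / 30 % split of 40 experiments -> 28 training and 12 test samples
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Standardize features; the scaler is fitted on the training set only
scaler = StandardScaler().fit(X_train)
X_train_std = scaler.transform(X_train)
X_test_std = scaler.transform(X_test)

print(X_train_std.shape, X_test_std.shape)  # (28, 7) (12, 7)
```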
During forward propagation, the input data passes through the network and produces an output prediction. The prediction error is then calculated and propagated back through the network, from the output layer through all hidden layers to the input layer; at each layer, the gradient of the error with respect to the weights is computed. The optimizer (Adam) updates the weights according to the computed gradients. These steps are repeated for each training epoch, allowing the predictions to improve iteratively. Hyperparameter values were tested, and the best values for the models are presented in Table 1.

Results and discussion

One of the most important distributions in statistics is the normal distribution, which describes the typical behavior of many phenomena. To determine the distribution of Rz, measured data obtained after machining 512 surfaces with coolant were analyzed. For all surfaces, the technological parameters were ap = 0.2 mm and ae = 0.4 mm. The roughness results (Rz) are shown in Fig. 2.

Table 1
Hyperparameters for the BPNN model

Model: Sequential
Activation (hidden layers): Leaky ReLU
Kernel_regularizer: l1 = 0.0001, l2 = 0.0001
Dropout: 0.01
Optimizer: Adam
Learning_rate: 0.001
Loss: mean_squared_error
Metrics: MSE, RMSE, MAE
Batch size: 16
Epochs: 500
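The forward pass, error back-propagation, and weight update described above can be sketched in plain NumPy for a single hidden layer. This is an illustrative minimal example, not the paper's model: the layer sizes and data are arbitrary, and one step of plain gradient descent stands in for the Adam optimizer; the Leaky ReLU activation and MSE loss follow Table 1.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((16, 7))            # one batch of 16 samples, 7 inputs
y = rng.random((16, 1))            # illustrative target values (Rz)

W1, b1 = 0.1 * rng.normal(size=(7, 8)), np.zeros(8)
W2, b2 = 0.1 * rng.normal(size=(8, 1)), np.zeros(1)
alpha, lr = 0.01, 0.001            # Leaky ReLU slope, learning rate

# Forward propagation: input -> hidden layer -> output prediction
z1 = X @ W1 + b1
h = np.where(z1 > 0, z1, alpha * z1)     # Leaky ReLU
y_hat = h @ W2 + b2
loss = np.mean((y_hat - y) ** 2)         # mean squared error

# Backward propagation: gradient of the loss w.r.t. each weight,
# computed layer by layer from the output back to the input
d_yhat = 2 * (y_hat - y) / len(X)
dW2 = h.T @ d_yhat
db2 = d_yhat.sum(axis=0)
dh = d_yhat @ W2.T
dz1 = dh * np.where(z1 > 0, 1.0, alpha)  # Leaky ReLU derivative
dW1 = X.T @ dz1
db1 = dz1.sum(axis=0)

# Weight update (one gradient-descent step; Adam would adapt lr per weight)
W1 -= lr * dW1; b1 -= lr * db1
W2 -= lr * dW2; b2 -= lr * db2
```

Repeating these steps over every batch for each of the 500 epochs reproduces the iterative training loop described in the text.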