An Efficient Training Algorithm of Restricted Boltzmann Machines

Abstract

The paper deals with a relevant applied problem related to artificial neural network training. An approach to its solution based on the idea of random search is proposed. An original training algorithm implementing Boltzmann annealing has been developed, and its convergence in probability to the global optimum has been proved. It is also shown that the proposed algorithm can be easily modified to train any artificial neural network, so it has good prospects for solving applied problems using neural network technologies in general. Experimental studies were carried out in which, using the color raster image compression problem as an example, the proposed algorithm was compared with the well-known adaptive moment algorithm, one of the best gradient methods for training neural networks. Image compression was performed using an ensemble of n Gauss-Bernoulli restricted Boltzmann machines. The use of an ensemble of n machines in combination with a specially developed parallelization procedure made it possible to reduce the computational complexity of the training process and increase the speed of the proposed algorithm. The experiments showed that the proposed approach is not inferior to gradient methods in terms of speed. Moreover, the developed training algorithm turned out to be more than twice as effective as the adaptive moment algorithm in terms of the quality of the solution obtained.

Keywords: Annealing method, gradient descent method, training, neural networks, restricted Boltzmann machine

Introduction

Modern society is moving from the post-industrial to the informational stage of its development. Huge arrays of numerical data are generated daily, which stimulates the development of information and computer technologies (Chen et al., 2022). Recently, neural network technology for data processing has become widespread. The technology is based on a flexible mathematical model, the artificial neural network, with the help of which a wide range of applied problems is solved. To tune a neural network to a specific subject area, it is necessary to train it. Training is a typical optimization problem: an objective function describing the solution quality is given, together with the data on which the optimal value of this function must be achieved. At the initial stage of neural network development, the gradient approach was, as a rule, used for training. This approach became widespread due to the high convergence rate of the methods that implement it (Nakamura et al., 2021). However, with the development of computer technology the situation has changed radically, and convergence rate has ceased to be the determining factor. This, in turn, has made it possible to develop other approaches to training.

The paper considers an alternative approach to training neural networks based on the idea of random search. An algorithm is proposed that implements one of the variants of the annealing method, and its efficiency is studied using the example of solving the color images compression problem.

Problem Statement

Consider the problem of color images compression, which can be described as follows.

Let a bitmap color image be given by a set of color pixels, each containing red, green and blue components. Each component takes a value from 0 to 255, that is, it is an 8-bit number. A single pixel can therefore be represented as an element of the Cartesian product of three 8-bit sets, and an image as an element of the Cartesian product of such sets over all of its pixels. As a result of image compression, a bit vector of, in a certain sense, minimum dimension should be constructed.

The problem of color images compression can be formally written as follows:

Let a set of color images be given. It is necessary to construct an algorithm such that:

$$A : \{0, \dots, 255\}^{3N} \rightarrow \{0, 1\}^{k}, \qquad A^{-1} : \{0, 1\}^{k} \rightarrow \{0, \dots, 255\}^{3N},$$

$$k \rightarrow \min, \qquad \sum_{x \in X} \sum_{i=1}^{3N} (x_i - y_i)^2 \rightarrow \min,$$

where x is the original image; y is the image restored from the compressed representation; N is the number of pixels in the image.

The inverse mapping A^{-1} may not be unique. The goal is to compress images as much as possible (k → min), while the squared error over the whole image set must not exceed a threshold that is set manually.
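
As an illustration of this quality criterion, the following sketch computes the total squared reconstruction error for a set of images, assuming hypothetical encode and decode callables standing in for A and A^{-1} (these names are illustrative and not from the paper):

```python
import numpy as np

def total_squared_error(images, encode, decode):
    """Sum of squared reconstruction errors over a set of 8-bit RGB images.

    `encode` and `decode` are placeholders for the mappings A and A^{-1}.
    Each image is an array of shape (N, 3) with values in 0..255.
    """
    error = 0.0
    for x in images:
        y = decode(encode(x))  # restored image
        error += np.sum((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return error

# The compression is acceptable if this error stays below a manually chosen
# threshold, while k is made as small as possible.
```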

Lower-dimensional representations can improve performance on many tasks, such as image compression, reconstruction and clustering (Liu et al., 2019). Color image compression occurs in many information retrieval problems (Dewi et al., 2021; Khanna et al., 2019; Knop et al., 2016). In such tasks, the restricted Boltzmann machine (RBM) is often used, since it makes it possible to extract informative features that are later used for image classification.

Research Questions

In the course of the study, the following questions were raised:

  • Is it possible to build an efficient RBM training algorithm implementing the annealing method?
  • How efficient is an algorithm based on the annealing method compared with the gradient descent algorithm?

Purpose of the Study

The answers to the questions raised above will help achieve the goal of the study and, in the future, contribute to the development of neural network training algorithms. This should increase solution quality in a wide range of applied problems.

Research Methods

RBM refers to models (architectures) of recurrent neural networks. The parameters that define the properties of the connections between the neurons of the layers are called weights, and the parameters of the neurons are called their characteristics. The characteristics depend on the type of distribution that the network generates. For Gauss-Bernoulli RBMs, often used for data compression, neurons have three characteristics: the biases of the hidden and visible neurons are set, respectively, as well as the variance parameters σ of the visible neurons. As a result, the RBM architecture can be described by four groups of parameters: the set of weights W, the biases b and c of the visible and hidden layers, and the variances σ of the neurons in the visible layer. That is, an RBM can be formally described as a parametric family (W, b, c, σ). By fixing certain elements in each of the sets, different subtypes of architectures can be specified.

Consider an RBM containing $n_1$ neurons in the input layer and $n_2$ neurons in the hidden layer. Then arbitrary parameter values $y = (y_1, y_2, \dots, y_R)$, $R = n_1 n_2 + 2 n_1 + n_2$, define a specific neural network of this type.
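
For concreteness, the four parameter groups and the total parameter count $R = n_1 n_2 + 2 n_1 + n_2$ can be laid out as in the following sketch; the notation (W, b, c, sigma) is assumed here and is not taken from the paper:

```python
import numpy as np

class GaussBernoulliRBMParams:
    """Sketch of the four parameter groups of a Gauss-Bernoulli RBM."""

    def __init__(self, n1, n2, rng=None):
        rng = rng or np.random.default_rng()
        self.W = rng.normal(0.0, 0.01, size=(n1, n2))  # weights between the layers
        self.b = np.zeros(n1)                          # visible-layer biases
        self.c = np.zeros(n2)                          # hidden-layer biases
        self.sigma = np.ones(n1)                       # visible-layer variance parameters

    def as_vector(self):
        """Flatten all groups into one vector of length n1*n2 + 2*n1 + n2."""
        return np.concatenate([self.W.ravel(), self.b, self.sigma, self.c])
```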

It is proposed to use an approach based on the idea of a random search to train the RBM. The optimization problem in this case can be formulated as follows.

Let Ω be the set of feasible solutions. In the case of neural networks of the type described above, it can be represented as a Cartesian product of subsets of admissible parameter values for each group of network parameters (the Gauss-Bernoulli RBM has four groups of parameters). In this case, for each group of parameters the set of feasible values is determined by the formula

$$[a_j; b_j]^{m_j},$$

where $m_j$ is the number of parameters in the group; $a_j$, $b_j$ are the lower and upper bounds of the values in the group.

Suppose that the objective function F is defined on the set of feasible solutions Ω, and that for each element x ∈ Ω there is a set of neighboring elements N(x) ⊆ Ω. Then the optimization problem can be formally specified as a triple (Ω, F, N). The set of neighboring elements determines the optimization algorithm; almost all of random search theory is built on a few restrictions imposed on this set.

Let us consider the possibility of solving the problem described above using Boltzmann annealing.

For this variant of the annealing method, a temperature sequence $T_0, T_1, T_2, \dots$ is specified, whose elements are related by:

$$T_k = \frac{T_0}{\ln(k + 2)}, \quad k > 0 \qquad (1)$$

Using these values, the transition probability from the current solution x to a new solution y is determined by the formula:

$$P(y \mid x) = \min\left\{1, \exp\left(\frac{F(x) - F(y)}{T_k}\right)\right\} \qquad (2)$$
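
A direct transcription of formulas (1) and (2) might look like the following sketch, where F denotes the objective function to be minimized and the function names are illustrative:

```python
import math
import random

def temperature(k, T0):
    """Boltzmann annealing schedule (1)."""
    return T0 / math.log(k + 2)

def accept_move(F_x, F_y, T_k):
    """Metropolis acceptance rule (2): True if the move from x to y is taken."""
    if F_y <= F_x:                       # improving moves are always accepted
        return True
    return random.random() < math.exp((F_x - F_y) / T_k)
```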

An algorithm that implements Boltzmann annealing (Kirkpatrick et al., 1983) is proposed below.

Parameter initialization.

Step 0. Initial values are set for the problem parameters $(y_1, y_2, \dots, y_R)$ and the temperature $T_0$.

General k-th iteration.

Step 1. Four random variables $n_1, n_2, n_3, n_4$ are generated according to the formula

$$n_i = R[0; m_i], \quad i = 1, \dots, 4,$$

where $R[a; b]$ is a realization of a uniformly distributed random variable on the segment $[a; b]$, $a, b \in \mathbb{R}$; $m_i$ is the number of parameters in the $i$-th group. The values $n_1, n_2, n_3, n_4$ specify how many parameters are changed in each group.

Step 2. Random permutations $\pi_1, \pi_2, \pi_3, \pi_4$ of lengths $m_1, m_2, m_3, m_4$ respectively are generated; their first $n_1, n_2, n_3, n_4$ elements specify the indices of the parameters to be changed in each of the parameter groups.

Let, for example, $J_i = \{y_{i1}, y_{i2}, \dots, y_{i n_i}\}$, $i = 1, \dots, 4$, $J_1, J_2, J_3, J_4 \subseteq \Omega$ be the resulting sets of parameters to be changed.

Step 3. A new solution $y' = (y'_1, y'_2, \dots, y'_R)$ is generated according to the formula:

$$y'_k = \begin{cases} y_k, & y_k \notin J_1 \cup J_2 \cup J_3 \cup J_4, \\ y_k + a_{ik}, & y_k \in J_i, \; i = 1, \dots, 4, \end{cases} \qquad a_{ik} = R[-l_i; l_i], \quad k = 1, \dots, n_1 n_2 + 2 n_1 + n_2,$$

where $l_1, l_2, l_3, l_4$ are algorithm parameters. These parameters determine the size of the set of neighboring elements, which is critical: it affects both the convergence speed and the final solution quality.

Step 4. The objective function value $F(y')$ is calculated.

Step 5. The new solution is accepted with the probability given by (2).

Step 6. The stopping criterion is checked: if the time allotted for training has expired, the algorithm terminates; otherwise the value of k is increased by one and the algorithm returns to Step 1.
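
Putting Steps 0-6 together, a compact sketch of the training loop could look as follows. The objective F, the group sizes m_i, the step bounds l_i, the per-group value bounds and the iteration budget are passed in; all names are illustrative, and clipping to the bounds is just one simple way to keep the solution feasible:

```python
import math
import numpy as np

def boltzmann_annealing(F, group_sizes, step_bounds, value_bounds, T0, budget_iters,
                        rng=None):
    rng = rng or np.random.default_rng()
    # Step 0: initial solution (one array per parameter group) and temperature T0.
    groups = [rng.uniform(a, b, size=m)
              for m, (a, b) in zip(group_sizes, value_bounds)]
    f_cur = F(groups)

    for k in range(budget_iters):                      # Step 6: iteration/time budget
        T_k = T0 / math.log(k + 2)                     # temperature schedule (1)
        candidate = [g.copy() for g in groups]
        for i, (g, m, l) in enumerate(zip(candidate, group_sizes, step_bounds)):
            n_i = rng.integers(0, m + 1)               # Step 1: how many parameters to change
            idx = rng.permutation(m)[:n_i]             # Step 2: which parameters to change
            g[idx] += rng.uniform(-l, l, size=n_i)     # Step 3: perturb the selected ones
            a, b = value_bounds[i]
            np.clip(g, a, b, out=g)                    # keep the candidate feasible
        f_new = F(candidate)                           # Step 4: objective value
        # Step 5: accept the candidate with the probability given by (2).
        if f_new <= f_cur or rng.random() < math.exp((f_cur - f_new) / T_k):
            groups, f_cur = candidate, f_new
    return groups, f_cur
```

In the RBM setting the four groups correspond to the weights, the visible biases, the hidden biases and the variance parameters.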

It is easy to show that the developed training algorithm is correct, in the sense that the algorithm does not make transitions to invalid solutions.

An important characteristic of this type of algorithm is the convergence property. For this variant of the annealing method, the following convergence theorem is known.

Theorem (Hajek, 1988).

1. For any state x that is not a local minimum,

$$\lim_{k \to +\infty} P(x_k = x) = 0.$$

2. If B is the set of local minima of depth d, then for any x from B

$$\lim_{k \to +\infty} P(x_k = x) = 0$$

if and only if

$$\sum_{k=1}^{+\infty} \exp(-d / T_k) = +\infty.$$

3. Let Ω* be the set of global minima and d* be the maximum of the depths of the local minima that do not coincide with any of the global minima. Then

$$\lim_{k \to +\infty} P(x_k \in \Omega^*) = 1$$

if and only if

$$\sum_{k=1}^{+\infty} \exp(-d^* / T_k) = +\infty. \qquad (3)$$

According to Hajek's (1988) theorem, for convergence the algorithm must satisfy the following conditions:

1. The problem (Ω, F, N) is irreducible and has the weak reversibility property.
2. The temperature sequence (1) decreases monotonically to zero and satisfies constraint (3).

These convergence conditions were formulated as statements and proved in (Krasnoproshin & Matskevich, 2022).
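
The second condition can be checked directly: substituting schedule (1) into the series in (3) turns it into a p-series, so (3) holds exactly when the initial temperature is not smaller than d* (a standard observation, spelled out here for completeness):

```latex
\sum_{k=1}^{+\infty} \exp\!\left(-\frac{d^{*}}{T_{k}}\right)
  = \sum_{k=1}^{+\infty} \exp\!\left(-\frac{d^{*}\ln(k+2)}{T_{0}}\right)
  = \sum_{k=1}^{+\infty} (k+2)^{-d^{*}/T_{0}}
  = +\infty
  \quad\Longleftrightarrow\quad
  T_{0} \ge d^{*}
```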

The proposed training algorithm can be easily modified for neural networks of any architecture. To do this, it suffices to determine the number of network parameter groups and to set the corresponding algorithm parameters from Step 3.

It should be noted that this algorithm allows the use of parallelization procedures, including those developed earlier to speed up the training of neural networks.

The theoretical guarantee of convergence makes it possible to obtain the global optimum, but when solving an applied problem it is also important to know how fast the algorithm converges. Therefore, to study the effectiveness of the proposed approach, experimental studies were carried out using the color image compression problem as an example.

The well-known CIFAR-10 dataset (2020), which contains 60,000 color raster images with a resolution of 32x32 pixels, was chosen as the experimental data. Each image contains exactly one object belonging to one of ten classes. In addition to the object, the images contain background, which greatly complicates compression.

For experimental studies, 8-fold, 16-fold and 32-fold compression ratios were chosen. Higher compression ratios lead to excessive losses, and lower compression ratios have no practical application in neural networks.

An ensemble of RBMs was used as a compression tool. For 8-fold compression, 256 Gauss-Bernoulli RBMs were used. 128 machines were used for 16x compression, and 64 machines for 32x compression.
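
For orientation only, and under the assumption (not stated above) that the compressed code is split evenly across the machines of the ensemble, the per-machine workload implied by these configurations is easy to compute:

```python
# Rough arithmetic under the even-split assumption: a 32x32 RGB image holds
# 32*32*3 values of 8 bits each, i.e. 24576 bits before compression.
image_bits = 32 * 32 * 3 * 8
for ratio, machines in [(8, 256), (16, 128), (32, 64)]:
    compressed_bits = image_bits // ratio      # total size of the compressed code
    per_machine = compressed_bits // machines  # hidden bits handled by one RBM
    print(f"{ratio}x compression: {compressed_bits} bits total, {per_machine} bits per machine")
```

Under this assumption each configuration leaves 12 bits per machine, so the individual machines stay small.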

One of the best modifications of the gradient method, the adaptive moment algorithm (Hamis et al., 2019), was used as the comparison algorithm. Many modifications exist for estimating the gradient: CD-N (Li et al., 2021), PCD (Oswin et al., 2018), N-PT-M (Brugge et al., 2013). To speed up training with the gradient method and achieve maximum quality (in terms of mean squared error), the CD-1 algorithm was chosen.

Training and validation were carried out on 4000 images. The remaining 52,000 images were used as a test set to check the quality of the resulting solution. To measure the quality, the functionals MSE, PSNR, PSNR_HVS, SSIM (Temel & AlRegib, 2019) were used.
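
For reference, the two pixel-level functionals can be computed as follows for 8-bit images; this is a generic sketch rather than the evaluation code used in the experiments (PSNR_HVS and SSIM require the implementations cited above):

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two 8-bit images of the same shape (lower is better)."""
    return np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    m = mse(x, y)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```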

The experiments were carried out on a computer running Lubuntu 20.04, with a 4-core Intel i7-4770K CPU, 16 GB of 1600 MHz RAM, and an NVIDIA RTX 3070 GPU with 5888 cores. The training time was measured using the gettimeofday function.

The results are presented in Table 1.

Table 1 - Training results for a restricted Boltzmann machine

Findings

Based on the experimental results, the following conclusions can be drawn. Thanks to the use of a special parallelization procedure, it was possible to achieve parity in performance with gradient methods. In terms of the MSE functional (the lower the value, the better), the proposed algorithm achieved a result more than twice as good as that of its opponent; in terms of the PSNR and PSNR_HVS functionals (the higher the value, the better), about 50% better; and in terms of the SSIM functional (higher is better), more than twice as good.

Conclusion

The paper proposes an approach to training neural networks based on random search and develops an original algorithm that implements Boltzmann annealing. The algorithm convergence in probability to the global optimum is proved, and it is shown that the proposed algorithm can be easily modified to train any artificial neural network.

As part of the experimental studies, the proposed algorithm was compared with the adaptive moment algorithm using the color bitmap image compression problem as an example.

The experiments showed that the proposed approach is not inferior to gradient methods in terms of speed. Moreover, according to various metrics, the developed algorithm turned out to be more efficient than the adaptive moment algorithm in terms of the quality of the solution obtained.

Thus, the proposed training approach, based on the idea of random search, has good prospects for solving applied problems using neural network technologies in general.

Acknowledgments

This work was supported by the Ministry of Science and Higher Education of the Russian Federation (Grant № 075-15-2022-1121).

References

  • Brugge, K., Fischer, A., & Igel, C. (2013). The flip-the-state transition operator for restricted Boltzmann machines. Machine Learning, 93(1), 53-69.

  • Chen, H., He, X., Yang, H., Qing, L., & Teng, Q. (2022). A Feature-Enriched Deep Convolutional Neural Network for JPEG Image Compression Artifacts Reduction and its Applications. IEEE Transactions on Neural Networks and Learning Systems, 33(1), 430-444.

  • CIFAR-10 dataset. (2020). Retrieved on 04 March 2022, from https://www.cs.toronto.edu/~kriz/cifar.html

  • Dewi, C., Chen, R.-C., Hendry, & Hung, H.-T. (2021). Experiment Improvement of Restricted Boltzmann Machine Methods for Image Classification. Vietnam Journal of Computer Science, 08(03), 417-432. https://doi.org/10.1142/s2196888821500184

  • Temel, D., & AlRegib, G. (2019). Perceptual image quality assessment through spectral analysis of error representations. Signal Processing: Image Communication, 70, 37-46.

  • Hajek, B. (1988). Cooling Schedules for Optimal Annealing. Mathematics of Operations Research, 13(2), 311-329.

  • Hamis, S., Zaharia, T., & Rousseau, O. (2019, June). Image compression at very low bitrate based on deep learned super-resolution. In 2019 IEEE 23rd International Symposium on Consumer Technologies (ISCT) (pp. 128-133). IEEE.

  • Khanna, M. T., Ralekar, C., Goel, A., Chaudhury, S., & Lall, B. (2019). Memorability-based image compression. IET Image Processing, 13, 1490-1501.

  • Kirkpatrick, S., Gelatt, C. D., Jr., & Vecchi, M. P. (1983). Optimization by Simulated Annealing. Science, 220(4598), 671-680.

  • Knop, M., Kapuściński, T., Mleczko, W. K., & Angryk, R. (2016). Neural Video Compression Based on RBM Scene Change Detection Algorithm. Artificial Intelligence and Soft Computing, Springer, 660-669.

  • Krasnoproshin, V. V., & Matskevich, V. V. (2022). Random search in neural networks training. Proceedings of the 13-th International Conference “Computer Data Analysis and Modeling” – CDAM’2022, Minsk, 96-99.

  • Li, X., Gao, X., & Wang, C. (2021). A Novel Restricted Boltzmann Machine Training Algorithm With Dynamic Tempering Chains. IEEE ACCESS, 9, 21939-21950. https://doi.org/10.1109/access.2020.3043599

  • Liu, W., Meng, F. Y., Liang, Y. S., Yang, H. X., & Wang, C. W. (2019). Loss Function Optimization Based on Adversarial Networks. In Fuzzy Systems and Data Mining V (pp. 619-634). IOS Press.

  • Nakamura, K., Derbel, B., Won, K.-J., & Hong, B.-W. (2021). Learning-Rate Annealing Methods for Deep Neural Networks. Electronics, 10(16), 2029. MDPI AG.

  • Oswin, K., Fischer, A., & Igel, C. (2018). Population-Contrastive-Divergence: Does consistency help with RBM training? Pattern Recognition Letters, 102, 1-7.

Copyright information

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License

Cite this article as:

Matskevich, V. V., & Stasiuk, V. A. (2023). An Efficient Training Algorithm of Restricted Boltzmann Machines. In P. Stanimorovic, A. A. Stupina, E. Semenkin, & I. V. Kovalev (Eds.), Hybrid Methods of Modeling and Optimization in Complex Systems, vol 1. European Proceedings of Computers and Technology (pp. 296-303). European Publisher. https://doi.org/10.15405/epct.23021.36