Article

Fast Overlapping Block Processing Algorithm for Feature Extraction

by Sadiq H. Abdulhussain 1,†, Basheera M. Mahmmod 1,†, Jan Flusser 2,3,†, Khaled A. AL-Utaibi 4,*,† and Sadiq M. Sait 5,†

1 Department of Computer Engineering, University of Baghdad, Al-Jadriya, Baghdad 10071, Iraq
2 Czech Academy of Sciences, Institute of Information Theory and Automation, Pod Vodárenskou věží 4, 182 08 Prague, Czech Republic
3 Faculty of Management, University of Economics, Jarosovska 1117/II, 377 01 Jindrichuv Hradec, Czech Republic
4 Department of Computer Engineering, University of Ha’il, Ha’il 55476, Saudi Arabia
5 Department of Computer Engineering, Center for Communications and IT Research, Research Institute, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2022, 14(4), 715; https://doi.org/10.3390/sym14040715
Submission received: 1 March 2022 / Revised: 20 March 2022 / Accepted: 22 March 2022 / Published: 1 April 2022

Abstract: In many video and image processing applications, the frames are partitioned into blocks, which are extracted and processed sequentially. In this paper, we propose a fast algorithm for calculating features of overlapping image blocks. We assume the features are projections of the block onto separable 2D basis functions (usually orthogonal polynomials), where we benefit from the symmetry with respect to the spatial variables. The main idea is based on a construction of auxiliary matrices that virtually extend the original image and make it possible to avoid time-consuming computation in loops. These matrices can be pre-calculated, stored, and used repeatedly, since they are independent of the image itself. We validated experimentally that the speed-up of the proposed method over traditional approaches reaches up to 20 times, depending on the block parameters.

1. Introduction

In most image and video processing applications, an image or a video frame is partitioned into blocks (usually overlapping ones) to make the processing local. Each block is then processed separately. We focus on transformations whose goal is to extract features. The features are stored in a memory location corresponding to the image block and are then utilized as local image descriptors for recognition.
Traditional approaches in computer vision applications partition the image into smaller two-dimensional blocks and process them sequentially in a double loop over the blocks. However, the image matrix is usually stored in memory in either row-major or column-major order. As a result, accessing the entire matrix sequentially has predictable memory behavior, since the accesses exhibit spatial locality. On the other hand, when the image is processed block by block, the memory access patterns are irregular, so cache misses and replacements increase. The speed gap between the CPU and memory is a major limiting factor of computer performance [1], which makes the increased cache misses and replacements a performance issue [2].
To improve the performance of feature extraction, several processes have to be eliminated, namely partitioning the image into blocks, processing the blocks sequentially, and accumulating the results. Motivated by this idea, we introduce in this paper a fast method for extracting local features from overlapping blocks.
The presented method may find applications wherever a block-wise image representation is used. We refer to a few sample papers where this approach was applied in various areas: plant biology [3], fingerprint recognition [4], face recognition in infrared images [5], facial expression classification [6], optical flow estimation [7], denoising of medical images [8], tamper detection [9], image compression [10], and scalable video coding [11].
The paper is organized as follows. The main idea of the method is introduced in Section 2. In Section 3, we present implementation details, a complexity analysis, and an experimental comparison with traditional approaches. Section 4 concludes the paper.

2. The Proposed Algorithm

The main idea is to avoid sequential processing of the blocks, which is usually implemented as a slow “for” loop. We propose special auxiliary matrices that can be pre-computed and that transform the sequential processing into a single-step one. Since the auxiliary matrices do not depend on the image content, they can be stored and used repeatedly for different images, which makes the method even more efficient.
Consider an image $I$ of size $N_y \times N_x$ partitioned into overlapping blocks of size $S_y \times S_x$ each (as shown in Figure 1), such that the number of blocks is $B_y \times B_x$, where $B_y = \frac{N_y}{S_y - 2v_y}$ and $B_x = \frac{N_x}{S_x - 2v_x}$, with $v_y$ and $v_x$ denoting the overlap sizes. Now, let us consider a separable integral transformation with kernel function $U_{nm}(x, y) = U_n(x)\,U_m(y)$. This can stand for the Fourier transform, Laplace transform, z-transform, cosine transform, and many others, but we are particularly interested in the moment transform, where $U_n(x)$ is a polynomial of degree $n$ (we refer to [12] for more information about polynomials and moments in image analysis). The results of this transformation for a single block $B_{i,j}$,
$$M_{n,m}^{i,j} = \sum_{x=0}^{S_x - 1} \sum_{y=0}^{S_y - 1} U_n(x)\, U_m(y)\, B_{i,j}(x, y),\tag{1}$$
are called the moments of the block. We arrange them into a moment matrix $M^{i,j}$, where $n$ and $m$ are the orders of the moments.
Clearly, the computation of the moments up to a given order can be expressed as a matrix multiplication
$$M^{i,j} = U_y\, B_{i,j}\, U_x^{T},\tag{2}$$
where $U_x$ and $U_y$ are the matrices of the discretized polynomials $U_n(x)$ and $U_m(y)$, respectively. It is noteworthy that in most programming environments, such as MATLAB and Python, matrix multiplication is much faster than nested loops thanks to optimized libraries such as the Intel Math Kernel Library (MKL) [13,14].
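The equivalence of the double sum in (1) and the matrix form in (2) can be checked in a few lines. The snippet below is an illustrative NumPy sketch, not the authors' code: the monomial basis $U_n(x) = x^n$, the block size, and all variable names are our own assumptions (any discrete polynomial basis works the same way).

```python
import numpy as np

S_y, S_x, order = 4, 4, 2                      # block size and max moment order
rng = np.random.default_rng(0)
B = rng.random((S_y, S_x))                     # one image block B_{i,j}

# Sampled basis matrices: row n holds U_n evaluated at x = 0..S-1
U_x = np.array([[x**n for x in range(S_x)] for n in range(order + 1)], float)
U_y = np.array([[y**m for y in range(S_y)] for m in range(order + 1)], float)

# Equation (1): explicit double sum over the block
M_loop = np.zeros((order + 1, order + 1))
for n in range(order + 1):
    for m in range(order + 1):
        M_loop[m, n] = sum(U_x[n, x] * U_y[m, y] * B[y, x]
                           for x in range(S_x) for y in range(S_y))

# Equation (2): the same moments as one matrix product
M_mat = U_y @ B @ U_x.T
assert np.allclose(M_loop, M_mat)
```

The zeroth-order entry of the result is simply the sum of the block's pixels, which is a convenient quick check.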
To compute the moments of all blocks of the image I using (2), we have
$$M = \begin{pmatrix}
U_y B_{1,1} U_x^T & U_y B_{1,2} U_x^T & \cdots & U_y B_{1,B_x} U_x^T \\
U_y B_{2,1} U_x^T & U_y B_{2,2} U_x^T & \cdots & U_y B_{2,B_x} U_x^T \\
\vdots & \vdots & \ddots & \vdots \\
U_y B_{B_y,1} U_x^T & U_y B_{B_y,2} U_x^T & \cdots & U_y B_{B_y,B_x} U_x^T
\end{pmatrix},\tag{3}$$
that can be equivalently rewritten into the form
$$M = \begin{pmatrix}
U_y & O & \cdots & O \\
O & U_y & \cdots & O \\
\vdots & \vdots & \ddots & \vdots \\
O & O & \cdots & U_y
\end{pmatrix}
\begin{pmatrix}
B_{1,1} & B_{1,2} & \cdots & B_{1,B_x} \\
B_{2,1} & B_{2,2} & \cdots & B_{2,B_x} \\
\vdots & \vdots & \ddots & \vdots \\
B_{B_y,1} & B_{B_y,2} & \cdots & B_{B_y,B_x}
\end{pmatrix}
\begin{pmatrix}
U_x & O & \cdots & O \\
O & U_x & \cdots & O \\
\vdots & \vdots & \ddots & \vdots \\
O & O & \cdots & U_x
\end{pmatrix}^{T}.\tag{4}$$
In a shorter notation, Equation (4) can be expressed as
$$M = R_y\, I_B\, R_x^{T}.\tag{5}$$
Matrix $I_B$ denotes the so-called extended image, which is formed by the blocks of the original image $I$ arranged in such a way that they do not overlap one another; see Figure 2.
An explicit construction of the extended image $I_B$ would be time-consuming because we would need to extract each block and shift it to a new location. Instead, we propose to perform this process implicitly by multiplying $I$ with “shift matrices” $A_x$ and $A_y$:
$$I_B = A_y\, I\, A_x^{T}.\tag{6}$$
Matrix $A_\alpha$ (where $\alpha$ stands for $x$ or $y$) is a rectangular matrix of size $(S_\alpha \cdot B_\alpha) \times N_\alpha$. It is composed of $B_\alpha$ unit submatrices of size $S_\alpha \times S_\alpha$, which are arranged diagonally and mutually shifted in the horizontal direction by $v_\alpha$.
[Equation (7), rendered as an image in the original: the explicit block structure of the shift matrix $A_\alpha$.]
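A minimal NumPy sketch of the shift matrix in Equation (6) follows. This is not the authors' code: the function name is ours, and we assume the convention that consecutive blocks start $S_\alpha - 2v_\alpha$ samples apart (which requires $2v_\alpha < S_\alpha$).

```python
import numpy as np

def shift_matrix(N, S, v):
    """Build the (S*B x N) shift matrix A for one direction (a sketch)."""
    stride = S - 2 * v                     # assumed distance between block starts
    Bn = (N - S) // stride + 1             # number of blocks that fit in N samples
    A = np.zeros((S * Bn, N))
    for j in range(S * Bn):                # j: coordinate in the extended image
        b, o = divmod(j, S)                # block index and offset inside the block
        A[j, b * stride + o] = 1.0         # matching coordinate in the original image
    return A

N_y, N_x, S, v = 8, 10, 4, 1
I = np.arange(N_y * N_x, dtype=float).reshape(N_y, N_x)
A_y, A_x = shift_matrix(N_y, S, v), shift_matrix(N_x, S, v)

I_B = A_y @ I @ A_x.T                      # extended image of Equation (6)

# The (1, 0)-th S x S tile of I_B must equal block B_{1,0} cut from I.
stride = S - 2 * v
assert np.array_equal(I_B[S:2 * S, :S], I[stride:stride + S, :S])
```

Each row of $A_\alpha$ contains a single 1, so the product merely copies (and duplicates) rows and columns of $I$, exactly as the block-diagonal picture of $A_\alpha$ suggests.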
Now, the computation of the moments can be performed directly without the necessity of constructing the extended image. Substituting Equation (6) into Equation (5), we obtain
$$M = R_y A_y\, I\, A_x^{T} R_x^{T},\tag{8}$$
which can be further simplified to the form
$$M = Q_y\, I\, Q_x^{T},\tag{9}$$
where $Q_x = R_x A_x$ and $Q_y = R_y A_y$.
Equation (9) constitutes the main result of the paper. The moments of all blocks of $I$, arranged into a matrix $M$ (see Figure 3), can be calculated by a single matrix multiplication, without any loops over the blocks and without constructing the extended image. The matrices $Q_x$ and $Q_y$ depend only on the polynomial basis functions, the block size, and the overlap, but not on the image $I$ itself. Hence, they can be pre-computed once and used repeatedly. Moreover, in most block-wise representations the blocks are square, the overlaps in both directions are equal, and the kernel functions of the transform are the same in both directions. Under these circumstances, the computation simplifies even further because $A_x = A_y$ and $R_x = R_y$.
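The whole pipeline $Q = RA$, $M = Q_y I Q_x^T$ can be sketched end to end as follows, again under illustrative assumptions (monomial basis, block stride $S - 2v$, hypothetical names); a directly computed per-block moment matrix serves as the reference.

```python
import numpy as np

def shift_matrix(N, S, v):
    """Shift matrix A for one direction (a sketch, stride = S - 2v assumed)."""
    stride = S - 2 * v
    Bn = (N - S) // stride + 1
    A = np.zeros((S * Bn, N))
    for j in range(S * Bn):
        b, o = divmod(j, S)
        A[j, b * stride + o] = 1.0
    return A

N, S, v, order = 12, 4, 1, 2
rng = np.random.default_rng(1)
I = rng.random((N, N))

U = np.array([[x**n for x in range(S)] for n in range(order + 1)], float)
A = shift_matrix(N, S, v)
Bn = A.shape[0] // S
R = np.kron(np.eye(Bn), U)              # block-diagonal copies of U
Q = R @ A                               # Q = R A, independent of the image

M = Q @ I @ Q.T                         # Equation (9): one multiplication

# Tile (i, j) of M must equal the directly computed moments of block B_{i,j}.
stride = S - 2 * v
i, j, p = 1, 2, order + 1
block = I[i * stride:i * stride + S, j * stride:j * stride + S]
assert np.allclose(M[i * p:(i + 1) * p, j * p:(j + 1) * p], U @ block @ U.T)
```

Since $Q_x$ and $Q_y$ carry no image information, in a batch setting they would be built once and reused across the whole dataset.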
Note that the proposed method is not restricted to 2D images. It can be generalized to 3D signals by introducing a third matrix $A_z$. The algorithm then proceeds analogously to the 2D case, as can be seen from the flowchart in Figure 4 (for more detail, see Algorithm 1).
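The 3D generalization mentioned above can be sketched with a single tensor contraction: one $Q$ matrix is applied along each axis of the volume. The formulation below (einsum, monomial basis, stride $S - 2v$) is our illustrative choice, not necessarily the authors' implementation.

```python
import numpy as np

def shift_matrix(N, S, v):
    """Shift matrix A for one axis (a sketch, stride = S - 2v assumed)."""
    stride = S - 2 * v
    Bn = (N - S) // stride + 1
    A = np.zeros((S * Bn, N))
    for j in range(S * Bn):
        b, o = divmod(j, S)
        A[j, b * stride + o] = 1.0
    return A

N, S, v, order = 8, 4, 1, 1
rng = np.random.default_rng(2)
V = rng.random((N, N, N))                            # a 3D signal

U = np.array([[x**n for x in range(S)] for n in range(order + 1)], float)
A = shift_matrix(N, S, v)
Q = np.kron(np.eye(A.shape[0] // S), U) @ A          # here Q_z = Q_y = Q_x

# Apply Q along the z, y and x axes in one contraction
M = np.einsum('ai,bj,ck,ijk->abc', Q, Q, Q, V)

# Check block (1, 0, 0) against a direct per-block computation
s, p = S - 2 * v, order + 1
blk = V[s:s + S, :S, :S]
direct = np.einsum('ai,bj,ck,ijk->abc', U, U, U, blk)
assert np.allclose(M[p:2 * p, :p, :p], direct)
```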
Algorithm 1 Generate auxiliary matrices for moment computation.
Input: $N_\alpha$, $v_\alpha$, $S_\alpha$
                $N_\alpha$ represents the size of the signal.
                $v_\alpha$ represents the overlap size.
                $S_\alpha$ represents the block size.
Output: $Q_\alpha$
1:  $B_\alpha = \frac{N_\alpha}{S_\alpha - 2 v_\alpha}$                                                                      ▹ Compute number of blocks $B_\alpha$
2:  $S_B = S_\alpha B_\alpha$                                                                                       ▹ Total length of the vector
3: Initialize A α                                                                                    ▹ Generate the matrix A α
4: for  i = 1 : N α  do
5:     for  j = 1 : S B  do
6:         if $i = j - \left( \left\lceil j / S_\alpha \right\rceil - 1 \right) 2 v_\alpha$  then
7:             A α ( i , j ) = 1
8:         end if
9:     end for
10: end for
11: Generate polynomial matrices U α
12:  $R_\alpha = I \otimes U_\alpha$
13:  $Q_\alpha = R_\alpha A_\alpha$
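A Python transcription of Algorithm 1 might look as follows. This is a sketch: the function name is hypothetical, the 0-indexed condition `i = j - (j // S) * 2 * v` mirrors line 6 of the algorithm, and the block count uses the standard sliding-window formula, which we assume matches the paper's $B_\alpha$ for valid overlaps ($2v < S$).

```python
import numpy as np

def generate_Q(N, v, S, U):
    """Build the auxiliary matrix Q = R A for one direction (a sketch).

    N : signal size, v : overlap size, S : block size,
    U : (orders x S) matrix of sampled polynomial values.
    """
    stride = S - 2 * v
    Bn = (N - S) // stride + 1          # step 1: number of blocks
    SB = S * Bn                         # step 2: total length of the vector
    A = np.zeros((SB, N))               # step 3: initialize A
    for j in range(SB):                 # steps 4-10: fill the shift matrix
        i = j - (j // S) * 2 * v        # the condition of line 6, 0-indexed
        A[j, i] = 1.0
    R = np.kron(np.eye(Bn), U)          # step 12: R = I (Kronecker) U
    return R @ A                        # step 13: Q = R A

# Usage sketch: order-0 moments (plain block sums) of a 1D signal
Q = generate_Q(10, 1, 4, np.ones((1, 4)))
sums = Q @ np.arange(10.0)              # one overlapping-block sum per block
```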

3. Performance Analysis

In this section, we present implementation details and an experimental analysis of the proposed algorithm. First, the computation cost analysis is presented to show the effectiveness of the proposed algorithm. Second, several numerical experiments are performed on various public datasets and the performance is compared with traditional methods. Finally, we present a similar experiment with 3D objects.

3.1. Computation Cost Analysis

In this section, we compare the computational complexity of our algorithm with that of traditional methods. The implementation of the proposed algorithm consists of the following four steps (described in Algorithms 1 and 2):
  • Input the user-defined parameters: image size, block size, overlap size, polynomial basis, and maximum moment order.
  • Generate the matrices $U_\alpha$, $R_\alpha$, and $A_\alpha$.
  • Calculate the matrices $Q_\alpha$.
  • Compute the moment matrix $M$ using (9).
For the traditional algorithms used in [15,16], the procedure is described in Algorithm 3.
Our hypothesis is that Step 3 of the traditional algorithms, which contains a “for” loop that must be run $B_y \cdot B_x$ times, is the bottleneck that makes the calculation slow. Below, we verify this hypothesis experimentally for various settings of the input parameters and various images (see Algorithm 2).
Algorithm 2 Moment computation using the proposed overlap block processing.
Input: Image $I$ with parameters $N_x$, $N_y$, $S_x$, $S_y$, $v_x$, and $v_y$
            $N_x$ and $N_y$ represent the size of the image.
            $S_x$ and $S_y$ represent the block size in the $x$ and $y$ directions.
            $v_x$ and $v_y$ represent the overlap size in the $x$ and $y$ directions.
Output: M
1: Generate polynomial matrices Q x and Q y using Algorithm 1
2: for each image in the dataset do
3:       $M = Q_y\, I\, Q_x^{T}$                                                                               ▹ Computing moments
4: end for

3.2. Numerical Experiments

In the first experiment, we used the well-known “boat” image, see Figure 5. The experiment was repeated 10 times for each combination of image size (from $128 \times 128$ to $1024 \times 1024$), block size (8, 16, and 32), and overlap.
Table 1 shows the computation time for an image size of $128 \times 128$. In addition, Table 1 includes the speed-up ratio between the proposed algorithm and the existing works [15,16] (see Algorithm 3). The results show that the time required to compute the moments using the proposed algorithm is lower than that of the existing works [15,16] for all tested moment orders (2, 4, and 8). The reported improvement (speed-up ratio) confirms that the proposed algorithm outperforms the existing works.
Algorithm 3 Moment computation using the traditional overlap block processing.
Input: Image $I$ with parameters $N_x$, $N_y$, $S_x$, $S_y$, $v_x$, and $v_y$
            $N_x$ and $N_y$ represent the size of the image.
            $S_x$ and $S_y$ represent the block size in the $x$ and $y$ directions.
            $v_x$ and $v_y$ represent the overlap size in the $x$ and $y$ directions.
Output: M
1:  Generate polynomial matrices $U_x$ and $U_y$
2:  $B_x = \frac{N_x}{S_x - 2 v_x}$                                            ▹ Compute number of blocks in the x direction
3:  $B_y = \frac{N_y}{S_y - 2 v_y}$                                             ▹ Compute number of blocks in the y direction
4:  for each image in the dataset do
5:     for $i = 1$ to $B_x$ do
6:         for $j = 1$ to $B_y$ do
7:            Compute start and end indices ( x s t a r t , x e n d , y s t a r t , y e n d ) for block B i , j
8:            Extract block B i , j
9:             $M^{i,j} = U_y\, B_{i,j}\, U_x^{T}$        ▹ Compute the moments of each block
10:         end for
11:     end for
12:  end for
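For comparison, the traditional block-wise procedure of Algorithm 3 can be sketched as below (illustrative names; the block stride $S - 2v$ is an assumed convention). This is exactly the double loop over blocks that the proposed method removes.

```python
import numpy as np

def moments_traditional(I, S, v, U):
    """Per-block moment computation of Algorithm 3 (a sketch)."""
    stride = S - 2 * v
    By = (I.shape[0] - S) // stride + 1
    Bx = (I.shape[1] - S) // stride + 1
    p = U.shape[0]                      # number of moment orders
    M = np.zeros((By * p, Bx * p))
    for i in range(By):                 # the slow double loop over blocks
        for j in range(Bx):
            ys, xs = i * stride, j * stride
            blk = I[ys:ys + S, xs:xs + S]                  # extract block B_{i,j}
            M[i * p:(i + 1) * p, j * p:(j + 1) * p] = U @ blk @ U.T
    return M

# Usage sketch: order-0 moments (block sums) of an 8x8 image, no overlap
M = moments_traditional(np.arange(64.0).reshape(8, 8), 4, 0, np.ones((1, 4)))
```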
It is noteworthy that the speed-up factor decreases as the moment order increases. This is because our algorithm does not alter the moment computation itself; it only handles the blocks more efficiently. For high-order moments, the moment computation takes more and more time, so the impact of block handling becomes less apparent. However, in most practical applications one usually works with low-order moments only.
We also repeated this series of experiments for images of the size 256 × 256 , 512 × 512 , and 1024 × 1024 pixels. The results are summarized in Table 2, Table 3 and Table 4, respectively. We can observe that the results are consistent, but the overall improvement for the given block size decreases as the image size increases. This is probably because the auxiliary matrices in our algorithm are large and their multiplication is not as fast as in the case of smaller images.
In the second experiment, two different datasets were employed: the ORL and FEI facial datasets. The ORL face database, obtained from AT&T [17], has been used by many researchers for evaluation purposes. It includes 40 distinct classes (persons). Each class contains 10 images, taken under different poses and lighting conditions. The size of each image is $92 \times 112$ pixels [18].
We calculated the block moments of all ORL images using the proposed method and the reference method. We ran the experiment ten times and calculated the average time. We used blocks of size $20 \times 16$ with overlaps (0,0), (2,2), and (4,4). The results are reported in Table 5; the time in milliseconds is an average over 10 runs and all images. As in the previous experiment, we observe a significant speed-up.
The FEI dataset [19] is a Brazilian facial dataset comprising 200 individuals. In the experiment, we included 10 images of each person, each of size $480 \times 640$. The images show various expressions and head poses. We used a block size of $48 \times 48$ and five overlap sizes, as shown in Table 6. The results again show a substantial speed-up in all settings.
Finally, we tested the performance of the 3D version of our algorithm. We used 19 models from the well-known McGill benchmark dataset [20]. We altered each sample by shifts and rotations such that 1252 versions of each object were generated, resulting in a total of 23,788 objects. Charlier polynomials were used in this experiment [21]. The speed analysis for various block sizes and overlaps is given in Table 7; the last column of the table shows the improvement factor.

4. Conclusions

In this paper, we proposed a method for fast calculation of features of overlapping image blocks. The main idea is based on the construction of auxiliary matrices that virtually “extend” the original image and make it possible to avoid time-consuming calculations in loops. These matrices can be pre-calculated, stored, and used repeatedly, since they are independent of the image itself. We verified experimentally that the speed-up, compared with the traditional approach, may reach up to 20 times depending on the block parameters. The method is applicable to the calculation of any integral features, such as moments and other transform coefficients (including Meixner [22] and Krawtchouk [23] moments), provided the multivariate basis functions of the transformation are separable. The algorithm may find applications wherever local, block-based image description and recognition is required.

Author Contributions

Conceptualization, S.H.A. and B.M.M.; methodology, S.H.A. and B.M.M.; software, S.H.A. and B.M.M.; validation, J.F., K.A.A.-U. and S.M.S.; investigation, K.A.A.-U., S.M.S. and B.M.M.; resources, J.F. and S.H.A.; writing—original draft preparation, B.M.M. and K.A.A.-U.; writing—review and editing, J.F. and S.M.S.; visualization, S.M.S., K.A.A.-U. and B.M.M.; project administration, S.H.A. and J.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Links to the datasets presented in this paper, where publicly available, can be found in the referenced papers.

Acknowledgments

The authors would like to thank the University of Baghdad, the University of Ha’il, and King Fahd University of Petroleum & Minerals for their help and support. Jan Flusser has been supported by the Czech Science Foundation under grant No. GA21-03921S, by Praemium Academiae, and by the Joint Laboratory Salome 2.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
2D    Two-dimensional
3D    Three-dimensional
MKL   Math Kernel Library
msec  Millisecond

References

1. Alted, F. Why modern CPUs are starving and what can be done about it. Comput. Sci. Eng. 2010, 12, 68–71.
2. Abdulhussain, S.H.; Rahman Ramli, A.; Mahmmod, B.M.; Iqbal Saripan, M.; Al-Haddad, S.; Baker, T.; Flayyih, W.N.; Jassim, W.A. A Fast Feature Extraction Algorithm for Image and Video Processing. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–8.
3. Oyewola, D.O.; Dada, E.G.; Misra, S.; Damaševičius, R. Detecting cassava mosaic disease using a deep residual convolutional neural network with distinct block processing. PeerJ Comput. Sci. 2021, 7, e352.
4. Kim, B.G.; Park, D.J. Adaptive image normalisation based on block processing for enhancement of fingerprint image. Electron. Lett. 2002, 38, 696–698.
5. Farokhi, S.; Sheikh, U.U.; Flusser, J.; Yang, B. Near infrared face recognition using Zernike moments and Hermite kernels. Inf. Sci. 2015, 316, 234–245.
6. Fan, X.; Tjahjadi, T. A dynamic framework based on local Zernike moment and motion history image for facial expression recognition. Pattern Recognit. 2017, 64, 399–406.
7. Li, Z.; Xiang, J.; Gong, L.; Blaauw, D.; Chakrabarti, C.; Kim, H.S. Low complexity, hardware-efficient neighbor-guided SGM optical flow for low-power mobile vision applications. IEEE Trans. Circuits Syst. Video Technol. 2018, 29, 2191–2204.
8. Rai, S.; Bhatt, J.S.; Patra, S. An unsupervised deep learning framework for medical image denoising. arXiv 2021, arXiv:2103.06575.
9. Mousa, A.K. Tamper Detection in Color Image. Baghdad Sci. J. 2008, 5, 155–159.
10. AL-Hadithy, S.S.; Ghadah, K.; Al-Khafaji; Siddeq, M. Adaptive 1-D Polynomial Coding of C621 Base for Image Compression. Turk. J. Comput. Math. Educ. (Turcomat) 2021, 12, 5720–5731.
11. Baldev, S.; Rathore, P.K.; Peesapati, R.; Anumandla, K.K. A directional and scalable streaming deblocking filter hardware architecture for HEVC decoder. Microprocess. Microsyst. 2021, 84, 104029.
12. Flusser, J.; Suk, T.; Zitová, B. 2D and 3D Image Analysis by Moments; John Wiley & Sons: Hoboken, NJ, USA, 2016.
13. Do, Q.; Acuña, S.; Kristiansen, J.I.; Agarwal, K.; Ha, P.H. Highly Efficient and Scalable Framework for High-Speed Super-Resolution Microscopy. IEEE Access 2021, 9, 97053–97067.
14. Rinkevicius, Z.; Li, X.; Vahtras, O.; Ahmadzadeh, K.; Brand, M.; Ringholm, M.; List, N.H.; Scheurer, M.; Scott, M.; Dreuw, A.; et al. VeloxChem: A Python-driven density-functional theory program for spectroscopy simulations in high-performance computing environments. Wiley Interdiscip. Rev. Comput. Mol. Sci. 2020, 10, e1457.
15. Hameed, I.M.; Abdulhussain, S.H. An efficient multistage CBIR based on Squared Krawtchouk–Tchebichef polynomials. In IOP Conference Series: Materials Science and Engineering; IOPscience: Samawah, Iraq, 2021; Volume 1090, p. 012100.
16. Parekh, R. Fundamentals of Image, Audio, and Video Processing Using MATLAB®: With Applications to Pattern Recognition; CRC Press: Boca Raton, FL, USA, 2021.
17. AT&T Corp. The Database of Faces; AT&T Corp.: Dallas, TX, USA, 2016.
18. Aggarwal, A.; Alshehri, M.; Kumar, M.; Sharma, P.; Alfarraj, O.; Deep, V. Principal component analysis, hidden Markov model, and artificial neural network inspired techniques to recognize faces. Concurr. Comput. Pract. Exp. 2021, 33, e6157.
19. FEI Face Database. 2010. Available online: https://fei.edu.br/~cet/facedatabase.html (accessed on 1 April 2021).
20. Siddiqi, K.; Zhang, J.; Macrini, D.; Shokoufandeh, A.; Bouix, S.; Dickinson, S. Retrieving articulated 3-D models using medial surfaces. Mach. Vis. Appl. 2008, 19, 261–275.
21. Abdul-Hadi, A.M.; Abdulhussain, S.H.; Mahmmod, B.M. On the computational aspects of Charlier polynomials. Cogent Eng. 2020, 7, 1763553.
22. Abdulhussain, S.H.; Mahmmod, B.M. Fast and efficient recursive algorithm of Meixner polynomials. J. Real-Time Image Process. 2021, 18, 2225–2237.
23. AL-Utaibi, K.A.; Abdulhussain, S.H.; Mahmmod, B.M.; Naser, M.A.; Alsabah, M.; Sait, S.M. Reliable Recurrence Algorithm for High-Order Krawtchouk Polynomials. Entropy 2021, 23, 1162.
Figure 1. Image I with partitioned blocks.
Figure 2. Extended image $I_B$ formed by non-overlapping blocks extracted from I.
Figure 3. The matrix of moments of the original blocks.
Figure 4. The proposed algorithm to generate the matrices for overlapped block processing.
Figure 5. Test image used in the experiment.
Table 1. Computation time comparison for image size of 128 × 128 .
Times in msec. Trad. = traditional [15,16], Prop. = proposed, Impr. = improvement (speed-up ratio).

| Block Size | Overlap Size | Trad. (Order 2) | Prop. (Order 2) | Impr. (Order 2) | Trad. (Order 4) | Prop. (Order 4) | Impr. (Order 4) | Trad. (Order 8) | Prop. (Order 8) | Impr. (Order 8) |
|---|---|---|---|---|---|---|---|---|---|---|
| 8 | 0 | 3.15 | 0.12 | 26.25 | 3.26 | 0.15 | 21.73 | 3.32 | 0.26 | 12.77 |
| 8 | 1 | 2.87 | 0.05 | 57.40 | 2.92 | 0.07 | 41.71 | 3.18 | 0.16 | 19.88 |
| 8 | 2 | 2.92 | 0.04 | 73.00 | 2.95 | 0.08 | 36.88 | 3.07 | 0.15 | 20.47 |
| 8 | 4 | 3.20 | 0.05 | 64.00 | 3.26 | 0.08 | 40.75 | 3.25 | 0.17 | 19.12 |
| 16 | 0 | 1.06 | 0.02 | 53.00 | 1.03 | 0.04 | 25.75 | 1.04 | 0.07 | 14.86 |
| 16 | 1 | 1.22 | 0.03 | 40.67 | 1.54 | 0.05 | 30.80 | 1.24 | 0.07 | 17.71 |
| 16 | 2 | 1.35 | 0.03 | 45.00 | 1.51 | 0.05 | 30.20 | 1.25 | 0.07 | 17.86 |
| 16 | 4 | 1.41 | 0.03 | 47.00 | 1.30 | 0.05 | 26.00 | 1.31 | 0.07 | 18.71 |
| 16 | 8 | 1.37 | 0.03 | 45.67 | 1.37 | 0.05 | 27.40 | 1.39 | 0.09 | 15.44 |
| 32 | 0 | 0.56 | 0.01 | 56.00 | 0.57 | 0.02 | 28.50 | 0.58 | 0.04 | 14.50 |
| 32 | 1 | 0.77 | 0.02 | 38.50 | 0.82 | 0.03 | 27.33 | 0.80 | 0.04 | 20.00 |
| 32 | 2 | 0.79 | 0.02 | 39.50 | 0.80 | 0.03 | 26.67 | 0.82 | 0.05 | 16.40 |
| 32 | 4 | 0.81 | 0.02 | 40.50 | 0.86 | 0.03 | 28.67 | 0.83 | 0.05 | 16.60 |
| 32 | 8 | 0.87 | 0.02 | 43.50 | 0.90 | 0.03 | 30.00 | 0.92 | 0.05 | 18.40 |
| 32 | 16 | 0.98 | 0.02 | 49.00 | 0.99 | 0.04 | 24.75 | 1.10 | 0.06 | 18.33 |
| Average Improvement | | | | 47.93 | | | 29.81 | | | 17.40 |
Table 2. Computation time comparison for image size of 256 × 256 .
Times in msec. Trad. = traditional [15,16], Prop. = proposed, Impr. = improvement (speed-up ratio).

| Block Size | Overlap Size | Trad. (Order 2) | Prop. (Order 2) | Impr. (Order 2) | Trad. (Order 4) | Prop. (Order 4) | Impr. (Order 4) | Trad. (Order 8) | Prop. (Order 8) | Impr. (Order 8) |
|---|---|---|---|---|---|---|---|---|---|---|
| 8 | 0 | 11.95 | 0.56 | 21.34 | 9.59 | 0.58 | 16.53 | 9.98 | 1.26 | 7.92 |
| 8 | 1 | 9.52 | 0.20 | 47.60 | 10.36 | 0.40 | 25.90 | 9.91 | 1.00 | 9.91 |
| 8 | 2 | 9.57 | 0.17 | 56.29 | 9.63 | 0.43 | 22.40 | 9.95 | 0.81 | 12.28 |
| 8 | 4 | 9.99 | 0.21 | 47.57 | 10.04 | 0.47 | 21.36 | 10.65 | 0.98 | 10.87 |
| 16 | 0 | 2.76 | 0.09 | 30.67 | 2.81 | 0.21 | 13.38 | 2.97 | 0.42 | 7.07 |
| 16 | 1 | 3.19 | 0.09 | 35.44 | 3.19 | 0.20 | 15.95 | 3.52 | 0.42 | 8.38 |
| 16 | 2 | 3.26 | 0.09 | 36.22 | 3.24 | 0.19 | 17.05 | 3.48 | 0.41 | 8.49 |
| 16 | 4 | 3.56 | 0.09 | 39.56 | 3.38 | 0.20 | 16.90 | 3.78 | 0.44 | 8.59 |
| 16 | 8 | 4.05 | 0.12 | 33.75 | 3.82 | 0.23 | 16.61 | 3.93 | 0.47 | 8.36 |
| 32 | 0 | 1.17 | 0.08 | 14.63 | 1.22 | 0.13 | 9.38 | 1.30 | 0.21 | 6.19 |
| 32 | 1 | 2.36 | 0.07 | 33.71 | 1.61 | 0.11 | 14.64 | 1.63 | 0.20 | 8.15 |
| 32 | 2 | 1.76 | 0.08 | 22.00 | 1.49 | 0.11 | 13.55 | 1.67 | 0.20 | 8.35 |
| 32 | 4 | 1.96 | 0.08 | 24.50 | 1.61 | 0.12 | 13.42 | 1.67 | 0.20 | 8.35 |
| 32 | 8 | 1.82 | 0.08 | 22.75 | 1.83 | 0.10 | 18.30 | 1.91 | 0.21 | 9.10 |
| 32 | 16 | 2.39 | 0.09 | 26.56 | 2.17 | 0.11 | 19.73 | 3.14 | 0.24 | 13.08 |
| Average Improvement | | | | 32.84 | | | 17.01 | | | 9.01 |
Table 3. Computation time comparison for image size of 512 × 512 .
Times in msec. Trad. = traditional [15,16], Prop. = proposed, Impr. = improvement (speed-up ratio).

| Block Size | Overlap Size | Trad. (Order 2) | Prop. (Order 2) | Impr. (Order 2) | Trad. (Order 4) | Prop. (Order 4) | Impr. (Order 4) | Trad. (Order 8) | Prop. (Order 8) | Impr. (Order 8) |
|---|---|---|---|---|---|---|---|---|---|---|
| 8 | 0 | 34.61 | 1.54 | 22.47 | 34.70 | 3.43 | 10.12 | 36.26 | 7.62 | 4.76 |
| 8 | 1 | 35.58 | 0.83 | 42.87 | 36.24 | 2.45 | 14.79 | 38.35 | 6.09 | 6.30 |
| 8 | 2 | 36.05 | 1.32 | 27.31 | 36.33 | 2.81 | 12.93 | 39.05 | 7.55 | 5.17 |
| 8 | 4 | 38.05 | 1.17 | 32.52 | 38.08 | 2.41 | 15.80 | 40.15 | 5.90 | 6.81 |
| 16 | 0 | 9.61 | 0.36 | 26.69 | 9.72 | 0.73 | 13.32 | 10.32 | 2.46 | 4.20 |
| 16 | 1 | 11.18 | 0.67 | 16.69 | 11.86 | 1.34 | 8.85 | 11.87 | 2.51 | 4.73 |
| 16 | 2 | 11.69 | 0.64 | 18.27 | 12.71 | 1.06 | 11.99 | 12.46 | 3.20 | 3.89 |
| 16 | 4 | 14.10 | 0.71 | 19.86 | 12.06 | 1.20 | 10.05 | 12.66 | 3.36 | 3.77 |
| 16 | 8 | 13.61 | 0.73 | 18.64 | 13.19 | 0.77 | 17.13 | 14.11 | 3.28 | 4.30 |
| 32 | 0 | 3.81 | 0.36 | 10.58 | 4.03 | 0.64 | 6.30 | 3.65 | 1.27 | 2.87 |
| 32 | 1 | 5.07 | 0.35 | 14.49 | 5.24 | 0.38 | 13.79 | 5.47 | 0.71 | 7.70 |
| 32 | 2 | 5.45 | 0.37 | 14.73 | 5.30 | 0.62 | 8.55 | 5.30 | 0.69 | 7.68 |
| 32 | 4 | 5.40 | 0.23 | 23.48 | 5.85 | 0.71 | 8.24 | 5.57 | 0.94 | 5.93 |
| 32 | 8 | 6.21 | 0.33 | 18.82 | 6.23 | 0.71 | 8.77 | 6.23 | 0.78 | 7.99 |
| 32 | 16 | 7.76 | 0.30 | 25.87 | 7.99 | 0.78 | 10.24 | 9.84 | 1.43 | 6.88 |
| Average Improvement | | | | 22.22 | | | 11.39 | | | 5.53 |
Table 4. Computation time comparison for image size of 1024 × 1024 .
Times in msec. Trad. = traditional [15,16], Prop. = proposed, Impr. = improvement (speed-up ratio).

| Block Size | Overlap Size | Trad. (Order 2) | Prop. (Order 2) | Impr. (Order 2) | Trad. (Order 4) | Prop. (Order 4) | Impr. (Order 4) | Trad. (Order 8) | Prop. (Order 8) | Impr. (Order 8) |
|---|---|---|---|---|---|---|---|---|---|---|
| 8 | 0 | 134.58 | 9.51 | 14.15 | 136.34 | 23.55 | 5.79 | 144.51 | 53.93 | 2.68 |
| 8 | 1 | 140.77 | 9.94 | 14.16 | 142.47 | 23.47 | 6.07 | 147.35 | 49.68 | 2.97 |
| 8 | 2 | 142.26 | 7.04 | 20.21 | 144.32 | 20.93 | 6.90 | 156.78 | 48.73 | 3.22 |
| 8 | 4 | 147.96 | 7.73 | 19.14 | 149.74 | 21.07 | 7.11 | 160.01 | 55.23 | 2.90 |
| 16 | 0 | 38.26 | 4.95 | 7.73 | 39.31 | 9.33 | 4.21 | 40.82 | 24.04 | 1.70 |
| 16 | 1 | 43.34 | 4.94 | 8.77 | 44.07 | 10.34 | 4.26 | 47.84 | 20.32 | 2.35 |
| 16 | 2 | 44.94 | 5.00 | 8.99 | 44.88 | 9.08 | 4.94 | 48.97 | 20.65 | 2.37 |
| 16 | 4 | 46.51 | 5.04 | 9.23 | 46.20 | 9.26 | 4.99 | 50.93 | 23.68 | 2.15 |
| 16 | 8 | 51.29 | 5.12 | 10.02 | 50.56 | 10.17 | 4.97 | 56.11 | 19.70 | 2.85 |
| 32 | 0 | 14.41 | 2.57 | 5.61 | 14.02 | 4.94 | 2.84 | 14.56 | 9.28 | 1.57 |
| 32 | 1 | 19.57 | 2.59 | 7.56 | 20.29 | 5.29 | 3.84 | 20.15 | 6.10 | 3.30 |
| 32 | 2 | 19.86 | 2.61 | 7.61 | 19.23 | 5.01 | 3.84 | 20.35 | 7.87 | 2.59 |
| 32 | 4 | 20.73 | 2.69 | 7.71 | 19.93 | 5.01 | 3.98 | 21.66 | 9.70 | 2.23 |
| 32 | 8 | 23.58 | 2.64 | 8.93 | 23.52 | 4.88 | 4.82 | 26.74 | 9.87 | 2.71 |
| 32 | 16 | 29.60 | 2.73 | 10.84 | 29.10 | 4.48 | 6.50 | 41.20 | 8.49 | 4.85 |
| Average Improvement | | | | 10.71 | | | 5.00 | | | 2.70 |
Table 5. Computation time (in msec) and improvement for the proposed and reference algorithms on the ORL dataset.
| Overlap Size | Traditional Algorithms (msec) | Proposed Algorithm (msec) | Improvement |
|---|---|---|---|
| (0,0) | 1.381 | 0.137 | 10.12 |
| (2,2) | 1.762 | 0.151 | 11.70 |
| (4,4) | 2.518 | 0.167 | 15.07 |
Table 6. Computation time (in msec) and improvement for the proposed and reference algorithms performed on the FEI dataset.
| Overlap Size | Traditional Algorithms (msec) | Proposed Algorithm (msec) | Improvement |
|---|---|---|---|
| (1,1) | 13.715 | 5.298 | 2.59 |
| (2,2) | 14.334 | 5.361 | 2.67 |
| (4,4) | 15.857 | 5.497 | 2.88 |
| (6,6) | 18.965 | 5.619 | 3.38 |
| (8,8) | 20.943 | 5.863 | 3.57 |
| Average | 16.770 | 5.530 | 3.03 |
Table 7. Computation time (msec) for the proposed and traditional algorithms in 3D.
| Block Size | Overlap Size | Traditional Algorithm (msec) | Proposed Algorithm (msec) | Improvement |
|---|---|---|---|---|
| 64 × 64 × 64 | (0,0,0) | 22.075 | 7.941 | 2.78 |
| 64 × 64 × 64 | (2,2,2) | 24.909 | 9.826 | 2.54 |
| 64 × 64 × 64 | (4,4,4) | 28.700 | 10.157 | 2.83 |
| 32 × 32 × 32 | (0,0,0) | 27.926 | 8.400 | 3.32 |
| 32 × 32 × 32 | (2,2,2) | 35.139 | 10.637 | 3.30 |
| 32 × 32 × 32 | (4,4,4) | 42.128 | 11.348 | 3.71 |
| 16 × 16 × 16 | (0,0,0) | 89.398 | 12.130 | 7.37 |
| 16 × 16 × 16 | (2,2,2) | 118.817 | 16.045 | 7.41 |
| 16 × 16 × 16 | (4,4,4) | 154.998 | 20.003 | 7.75 |
| Average | | | | 4.56 |