Fusion-features and visual-dictionary image recognition methods for apple classification in smart manufacturing / Ahsiah Ismail

Ahsiah, Ismail (2020) Fusion-features and visual-dictionary image recognition methods for apple classification in smart manufacturing / Ahsiah Ismail. PhD thesis, Universiti Malaya.


      Abstract

      Smart manufacturing enables an efficient manufacturing process to optimize production. The optimization is performed through data analytics that requires reliable and informative data as input. Therefore, in this research, two image recognition feature extraction methods, namely Curvelet Wavelet-Gray Level Co-occurrence Matrix (CW-GLCM) and Fuzzy-Spatial Pyramid Matching (F-SPM), are proposed to provide reliable inputs for vision-based apple classification in smart manufacturing. Feature extraction is one of the major steps that can influence the efficiency of the manufacturing process. The CW-GLCM method extracts fusion features and classifies them with a Decision Tree classifier, while the F-SPM method uses a visual-dictionary-based approach to extract visual-pattern features, whose output is processed by a Support Vector Machine (SVM) classifier. To evaluate the performance of the proposed methods, they are compared with five existing methods: Bag of Words (BOW), Spatial Pyramid Matching (SPM), Gray Level Co-occurrence Matrix (GLCM) texture analysis, Convolutional Neural Network (CNN) and Contrast-Limited Adaptive Histogram Equalization + GLCM + Extreme Learning Machine (CLAHE+GLCM+ELM). Three datasets, namely NDDA, NDDAW and DA, with a total of 1310 apple images are collected to test the proposed methods. The NDDA and NDDAW datasets are both binary-class datasets of defective and non-defective apples, with NDDAW containing more low-quality region images than NDDA. Conversely, the DA dataset comprises five different types of defective apples to be used in multi-class tests. The proposed methods are trained and evaluated using 10-fold cross-validation. Their classification accuracy, precision and recall rates are then measured, and training and testing times are also recorded. From the evaluation, the proposed F-SPM method attained 98.15% classification accuracy, 96.30% precision and 100% recall on NDDA; 91.07% accuracy, 100% precision and 84.85% recall on NDDAW; and 86.33% accuracy, 91.43% precision and 85.00% recall on the DA dataset. The F-SPM method outperformed the existing methods, especially on the NDDAW and DA datasets. Meanwhile, the CW-GLCM method obtained 98.15% accuracy, 96.30% precision and 100% recall on NDDA; 89.11% accuracy, 86.79% precision and 91.01% recall on NDDAW; and 85.20% accuracy, 88.33% precision and 85.00% recall on the DA dataset. The proposed CW-GLCM also achieved the highest percentage (100%) for all measurements (accuracy, precision and recall) and outperformed the other methods in recognizing the Bruise defect. These results indicate that both proposed methods are reliable and have the potential to be used for vision-based classification in smart manufacturing.
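      The GLCM-plus-classifier pipeline and the 10-fold evaluation protocol described in the abstract can be illustrated with a short sketch. The snippet below is not the thesis's CW-GLCM or F-SPM implementation (it omits the curvelet wavelet fusion and the fuzzy spatial pyramid steps); it is a minimal sketch, assuming scikit-image and scikit-learn, of extracting plain GLCM texture features, classifying them with a Decision Tree, and measuring accuracy, precision and recall under 10-fold cross-validation. The synthetic images and labels are placeholders standing in for the NDDA/NDDAW/DA apple datasets.

```python
# Minimal GLCM texture-feature + Decision Tree sketch with 10-fold cross-validation.
# Illustrative only: not the thesis's CW-GLCM/F-SPM methods, and the data are synthetic.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_validate

def glcm_features(gray_img, distances=(1,), angles=(0, np.pi / 2)):
    """Extract standard GLCM texture statistics from an 8-bit grayscale image."""
    glcm = graycomatrix(gray_img, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "dissimilarity", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Synthetic stand-in data: 60 random 64x64 grayscale patches with binary labels
# (defective vs. non-defective), purely so the sketch runs end to end.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(60, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 2, size=60)

# Build the feature matrix and evaluate with 10-fold cross-validation,
# reporting the same metrics used in the thesis (accuracy, precision, recall).
X = np.array([glcm_features(img) for img in images])
scores = cross_validate(DecisionTreeClassifier(random_state=0), X, labels,
                        cv=10, scoring=("accuracy", "precision", "recall"))
print("accuracy :", scores["test_accuracy"].mean())
print("precision:", scores["test_precision"].mean())
print("recall   :", scores["test_recall"].mean())
```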

      Item Type: Thesis (PhD)
      Additional Information: Thesis (PhD) – Faculty of Computer Science & Information Technology, Universiti Malaya, 2020.
      Uncontrolled Keywords: Image recognition; Feature extraction; Classification; Smart manufacturing; Data analytics
      Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
      Divisions: Faculty of Computer Science & Information Technology
      Depositing User: Mr Mohd Safri Tahir
      Date Deposited: 11 May 2023 03:58
      Last Modified: 11 May 2023 03:58
      URI: http://studentsrepo.um.edu.my/id/eprint/14388
