Abstract
Many cultures and countries have fish as a central element of their diet, particularly coastal countries such as Portugal, where the fishery and aquaculture sectors play an increasingly important role in the provision of food and nutrition. Consequently, fish-freshness evaluation is of great importance, yet it has so far relied on human judgement, which can be subjective and inconsistent.
This paper proposes an automated, non-invasive system for fish-freshness classification that takes fish images as input, together with a new seabream image dataset.
The dataset will be made publicly available for academic and scientific purposes upon publication of this paper. It includes metadata such as manually generated segmentation masks for the fish eye and body regions, as well as the time elapsed since capture.
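To illustrate how such a dataset could be consumed, the minimal Python sketch below reads images, eye/body masks and the time since capture. The directory layout, file names and CSV columns (metadata.csv, images/, masks_eye/, masks_body/, hours_since_capture) are assumptions for illustration only, not the actual structure of the released dataset.

```python
# Hypothetical reader for a seabream freshness dataset with eye/body masks and
# time-since-capture metadata. All paths and column names below are assumed.
import csv
from pathlib import Path
from PIL import Image

def load_seabream_dataset(root: str):
    """Yield (image, eye_mask, body_mask, hours_since_capture) tuples."""
    root = Path(root)
    with open(root / "metadata.csv", newline="") as f:      # assumed metadata file
        for row in csv.DictReader(f):
            name = row["image"]                              # assumed column name
            yield (
                Image.open(root / "images" / name).convert("RGB"),
                Image.open(root / "masks_eye" / name).convert("L"),
                Image.open(root / "masks_body" / name).convert("L"),
                float(row["hours_since_capture"]),           # assumed column name
            )
```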
For fish-freshness classification, four freshness levels are considered: very-fresh, fresh, not-fresh and spoiled. The proposed system starts with an image segmentation stage that automatically segments the fish eye region, followed by freshness classification based on the eye characteristics. The system employs transformers, for the first time in fish-freshness classification, both for segmentation, using SegFormer, and for feature extraction and freshness classification, using the Vision Transformer (ViT).
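As a rough illustration of such a two-stage pipeline, the sketch below chains a SegFormer segmentation model and a ViT classifier using the Hugging Face transformers library. The checkpoint names (nvidia/mit-b0, google/vit-base-patch16-224-in21k), the binary eye/background labelling and the bounding-box crop are assumptions rather than the authors' exact configuration, and both models would need to be fine-tuned on the seabream data before use.

```python
# Minimal sketch of the eye segmentation + freshness classification pipeline,
# assuming Hugging Face implementations of SegFormer and ViT.
import torch
from PIL import Image
from transformers import (
    SegformerImageProcessor, SegformerForSemanticSegmentation,
    ViTImageProcessor, ViTForImageClassification,
)

FRESHNESS_LABELS = ["very-fresh", "fresh", "not-fresh", "spoiled"]

# Stage 1: segment the fish eye region (binary: background vs. eye).
seg_processor = SegformerImageProcessor()
seg_model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b0", num_labels=2)  # assumed backbone; fine-tuned on seabream masks

# Stage 2: classify freshness from the cropped eye region.
cls_processor = ViTImageProcessor()
cls_model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k", num_labels=len(FRESHNESS_LABELS))

def classify_freshness(image_path: str) -> str:
    image = Image.open(image_path).convert("RGB")
    # Predict a per-pixel eye mask and upsample it to the original resolution.
    inputs = seg_processor(images=image, return_tensors="pt")
    with torch.no_grad():
        logits = seg_model(**inputs).logits
    mask = torch.nn.functional.interpolate(
        logits, size=image.size[::-1], mode="bilinear", align_corners=False
    ).argmax(dim=1)[0]
    ys, xs = torch.where(mask == 1)  # class index 1 assumed to be the eye
    if len(xs) == 0:
        return "eye not detected"
    # Crop a bounding box around the detected eye and classify its freshness.
    crop = image.crop((xs.min().item(), ys.min().item(),
                       xs.max().item() + 1, ys.max().item() + 1))
    inputs = cls_processor(images=crop, return_tensors="pt")
    with torch.no_grad():
        pred = cls_model(**inputs).logits.argmax(dim=-1).item()
    return FRESHNESS_LABELS[pred]
```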
Encouraging results have been obtained: the automatic fish eye region segmentation reaches a detection rate of 98.77%, an accuracy of 96.28% and an Intersection over Union (IoU) of 85.7%. The adopted ViT classification model, evaluated with a 5-fold cross-validation strategy, achieves a final classification accuracy of 80.8% and an F1 score of 81.0%, despite the relatively small dataset available for training.
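For reference, the sketch below shows how figures of this kind could be computed: a binary IoU over the segmented eye masks, and per-fold accuracy and F1 under 5-fold cross-validation using scikit-learn. The function names and the macro-averaged F1 are illustrative assumptions, not necessarily the exact protocol used in the paper.

```python
# Sketch of the evaluation metrics: binary mask IoU and 5-fold cross-validated
# accuracy/F1. The train_and_predict callable is left abstract on purpose.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, f1_score

def mask_iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection over Union for binary eye masks."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return float(inter / union) if union > 0 else 1.0

def cross_validate(features: np.ndarray, labels: np.ndarray, train_and_predict):
    """train_and_predict(train_idx, test_idx) -> predicted labels for test_idx."""
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    accs, f1s = [], []
    for train_idx, test_idx in skf.split(features, labels):
        preds = train_and_predict(train_idx, test_idx)
        accs.append(accuracy_score(labels[test_idx], preds))
        f1s.append(f1_score(labels[test_idx], preds, average="macro"))
    return float(np.mean(accs)), float(np.mean(f1s))
```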
Supported by organizations 1, 2, 3 and 4.
Acknowledgements
This work is partly funded by FCT/MEC under the project UID/50008/2020.
Copyright information
© 2024 Springer Nature Switzerland AG
About this paper
Cite this paper
Rodrigues, J.P., Pacheco, O.R., Correia, P.L. (2024). Seabream Freshness Classification Using Vision Transformers. In: Vasconcelos, V., Domingues, I., Paredes, S. (eds) Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications. CIARP 2023. Lecture Notes in Computer Science, vol 14469. Springer, Cham. https://doi.org/10.1007/978-3-031-49018-7_36
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-49017-0
Online ISBN: 978-3-031-49018-7