-
Publication Date
2022.07.21
-
Authors
Sunyong Seo; Sangwook Yoo; Semin Kim; Daeun Yoon; Jongha Lee
-
URL
https://ieeexplore.ieee.org/abstract/document/9867133
-
Paper Summary
Seo, S., Yoo, S., Kim, S., Yoon, D., & Lee, J. (2022, July). Facial Pore Segmentation Algorithm using Shallow CNN. In 2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS) (pp. 311-316). IEEE.
Abstract:
Pores are minute skin openings through which hair and sebum come out, and they appear as holes in the facial skin. Enlarged pores are one of the major concerns for people who care about their skin. Remedies include the use of cosmetics and pore-reduction medical procedures. Awareness of the condition of one's facial pores and appropriate management are required to prevent pore deterioration. Pore segmentation algorithms based on classical image processing are characterized by low accuracy and high computational costs. In addition, these algorithms require that input images be taken in light-controlled environments. These issues were resolved by using a light-specialized data augmentation method and a neural network with a narrow receptive field for identifying local features. We introduce Pore-Net, an algorithm that can be used on mobile devices to segment pores with a low computational cost, using selfie-camera images as input. Pore-Net has the following algorithm flow. First, a confidence map-based segmentation without an encoder-decoder form is applied to lower the computational cost on high-resolution input images. Second, pre- and post-processing based on a region-of-interest (ROI) derived from facial landmarks are performed so the algorithm works robustly on mobile devices. Pore-Net achieved the lowest computational cost in inference time and multiply-and-accumulates (MACs) when compared with binary segmentation models of similar performance in intersection-over-union (IoU).
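The abstract does not give Pore-Net's exact architecture, so the following is only a minimal sketch of the core idea as described: a shallow fully convolutional network with a narrow receptive field that predicts a per-pixel confidence map at full input resolution (no encoder-decoder downsampling), which is then thresholded into a binary pore mask. The layer count, channel width, ROI size, and threshold below are assumptions for illustration, not the authors' settings.

```python
# Minimal sketch (not the authors' implementation): a shallow fully
# convolutional network that keeps full input resolution (no encoder-decoder)
# and has a narrow receptive field, emitting a per-pixel pore confidence map.
# Layer count, channel width, ROI size, and threshold are assumptions.
import torch
import torch.nn as nn


class ShallowPoreNet(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        # Three 3x3 convolutions give a receptive field of only 7x7 pixels,
        # so the network responds to local texture such as pores rather than
        # global facial structure. No pooling/upsampling keeps the output at
        # the input resolution and the compute cost low.
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Per-pixel confidence in [0, 1], same spatial size as the input.
        return torch.sigmoid(self.features(x))


if __name__ == "__main__":
    net = ShallowPoreNet()
    roi = torch.rand(1, 3, 256, 256)        # facial ROI crop (assumed size)
    confidence = net(roi)                    # (1, 1, 256, 256) confidence map
    pore_mask = (confidence > 0.5).float()   # assumed threshold -> binary mask
    print(pore_mask.shape)
```

In the paper's pipeline, the ROI crop fed to the network would come from facial-landmark-based pre-processing, and the binary mask would be mapped back onto the original selfie image in post-processing; those steps are omitted here.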