Browsing by Author "Liu, Duan"
Item: Acrospire Detection in Malt Using Smartphone Images and Deep Learning (2023-04), Liu, Duan

Acrospire length is a visual indicator of the degree of malt modification in the malting process and of the uniformity of germination. Conventionally, maltsters rely on visually inspecting each malt kernel to judge the progress of malting and to make process decisions that deliver in-spec malt. However, this manual inspection, which can take up to 30 minutes for a single sample, is tedious, time-consuming, and subject to human error. Our proposed method uses smartphone imaging together with deep learning models to inspect acrospire lengths, replacing traditional manual inspection with higher throughput and accuracy. We split the task into three parts and created three novel datasets to train and test the deep learning models. Malt kernels were classified according to the ratio of acrospire length to kernel length: (0~¼), (¼~½), (½~¾), (¾~1), and overgrown (>1). The overgrown class, where the acrospire grows longer than the full length of the kernel, is considered undesirable in terms of both quality and economics. First, a dataset of 70 images of malt at different phases of germination, each image containing 150~250 kernels, was used to train a Mask R-CNN model and a Hybrid Task Cascade (HTC) model for instance segmentation of malt kernels. The Mask R-CNN model reached an instance segmentation average precision (AP75) of 0.77, while the HTC model achieved 0.84. After instance segmentation, a second dataset of 3155 images of single malt kernels was used to fine-tune Swin Transformer and ResNet50 image classification models to filter out kernels with the ventral side facing up. The Swin Transformer model reached an accuracy of 84% and the ResNet50 model an accuracy of 82%. Lastly, 1500 images of single malt kernels were used to train Mask R-CNN, Cascade Mask R-CNN, and HTC models to detect the acrospire region on each kernel. The Mask R-CNN model with a Swin Transformer backbone had the best segmentation average precision (AP75) at 0.818, while the HTC model had the best bounding-box average precision (AP75) at 0.848. The overall pipeline achieved a mean average precision (mAP) of 0.75 and reduced inspection time to approximately 2~3 minutes.
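
The sketch below is a minimal, illustrative wiring of the pipeline described above, not the thesis implementation. The pretrained torchvision Mask R-CNN stands in for the models the thesis trains on its custom malt datasets (Mask R-CNN, Cascade Mask R-CNN, HTC, Swin Transformer), and the helper `classify_ratio` is a hypothetical name; only the ratio-to-class thresholds follow the abstract.

```python
# Illustrative sketch only. The COCO-pretrained torchvision Mask R-CNN is a
# stand-in for the thesis models, which are fine-tuned on custom malt data;
# classify_ratio implements the five acrospire-length classes from the text.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

def classify_ratio(acrospire_len: float, kernel_len: float) -> str:
    """Bin the acrospire-to-kernel length ratio into the classes used in
    the thesis: (0~1/4), (1/4~1/2), (1/2~3/4), (3/4~1), overgrown (>1)."""
    r = acrospire_len / kernel_len
    if r > 1.0:
        return "overgrown"
    for upper, label in [(0.25, "0~1/4"), (0.50, "1/4~1/2"),
                         (0.75, "1/2~3/4"), (1.00, "3/4~1")]:
        if r <= upper:
            return label
    return "overgrown"

# Stage 1: instance segmentation of whole kernels in the smartphone image.
# COCO weights are a placeholder; the thesis fine-tunes on 70 annotated
# malt images with 150~250 kernels each.
segmenter = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()
image = torch.rand(3, 800, 800)            # placeholder RGB image in [0, 1]
with torch.no_grad():
    kernels = segmenter([image])[0]        # dict with boxes, scores, masks

# Stage 2 (filter ventral-side-up kernels with Swin/ResNet50) and stage 3
# (segment the acrospire region per cropped kernel) are omitted here; once
# both lengths are measured, the final label is a simple binning step:
print(classify_ratio(acrospire_len=5.5, kernel_len=8.0))   # -> "1/2~3/4"
```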