This study proposes an automated neurofibroma detection method for whole-body magnetic resonance imaging (WBMRI) based on radiomics and ensemble learning. A dynamic weighted box fusion mechanism integrating two-dimensional (2D) object detection and three-dimensional (3D) segmentation is developed, in which the fusion weights are adjusted dynamically according to each model's performance on its respective task. The 3D segmentation model leverages spatial structural information to compensate for the limited boundary perception of 2D methods. In addition, a radiomics-based false positive reduction strategy is introduced to improve the robustness of the detection system. The proposed method is evaluated on 158 clinical WBMRI cases with a total of 1,380 annotated tumor samples, using five-fold cross-validation. Experimental results show that, compared with the best-performing single model, the proposed approach achieves notable improvements in average precision, sensitivity, and overall performance metrics, while reducing the average number of false positives by 17.68. These findings demonstrate that the proposed method achieves high detection accuracy with enhanced false positive suppression and strong generalization potential.
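A minimal sketch of the box-fusion step is given below, assuming the 2D detector and the boxes derived from the 3D segmentation each carry a scalar weight reflecting validation performance. The IoU threshold, the default weights, and the cluster-and-average rule are illustrative assumptions; the abstract does not specify the exact dynamic weighting scheme.

```python
# Sketch of performance-weighted box fusion across a 2D detector and a
# 3D-segmentation-derived box source. Weights w2d/w3d stand in for the
# paper's dynamic, performance-driven weights and are assumptions here.
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2, ...)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def fuse_boxes(boxes_2d, boxes_3d, w2d=0.6, w3d=0.4, iou_thr=0.5):
    """Fuse two box lists; each box is (x1, y1, x2, y2, score)."""
    sources = [(b, w2d) for b in boxes_2d] + [(b, w3d) for b in boxes_3d]
    clusters = []  # each cluster: list of overlapping (box, source_weight)
    for box, w in sorted(sources, key=lambda t: -t[0][4]):
        for cl in clusters:
            if iou(box, cl[0][0]) >= iou_thr:
                cl.append((box, w))
                break
        else:
            clusters.append([(box, w)])
    fused = []
    for cl in clusters:
        ws = np.array([w * b[4] for b, w in cl])             # weight x confidence
        coords = np.array([b[:4] for b, _ in cl])
        box = (ws[:, None] * coords).sum(axis=0) / ws.sum()  # weighted coordinates
        score = ws.sum() / sum(w for _, w in cl)             # weighted mean confidence
        score *= min(len(cl), 2) / 2                         # penalize single-source boxes
        fused.append((*box, float(score)))
    return fused

det_2d = [(10, 10, 50, 50, 0.9)]   # toy example: one box from each source
seg_3d = [(12, 11, 52, 49, 0.8)]
print(fuse_boxes(det_2d, seg_3d))
```

In this reading, the "dynamic" adjustment would amount to re-estimating w2d and w3d from each model's validation metrics per fold or per task, rather than fixing them as above.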
This article combines deep learning with image analysis techniques to propose an effective classification method for distal radius fracture types. First, an extended three-layer cascaded U-Net segmentation network was used to accurately segment the joint surface and non-joint-surface regions that are most important for identifying fractures. Then, separate classifiers were trained on the joint surface and non-joint-surface images to distinguish fractures. Finally, the classification results of the two regions were combined to determine whether the case was normal or a type A, B, or C fracture. The accuracy rates for normal cases and type A, B, and C fractures on the test set were 0.99, 0.92, 0.91, and 0.82, respectively; for orthopedic experts, the average recognition accuracy rates were 0.98, 0.90, 0.87, and 0.81, respectively. The proposed automatic recognition method generally outperforms the experts and can be used for preliminary auxiliary diagnosis of distal radius fractures in scenarios without expert participation.
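As a concrete illustration of the final decision stage, the sketch below assumes each branch classifier outputs a fracture probability for its region and maps the pair of outputs to a normal/A/B/C label following the usual AO logic (A: extra-articular, B: partial articular, C: complete articular). The threshold and the combination rule itself are assumptions, as the abstract does not detail them.

```python
# Hypothetical combination rule for the two branch classifiers; the
# paper's exact fusion of the two results is not specified, so this
# follows AO semantics as a plausible reading.
def combine_predictions(p_joint: float, p_non_joint: float,
                        thr: float = 0.5) -> str:
    """Map the two branch fracture probabilities to a normal/A/B/C label."""
    joint_fx = p_joint >= thr          # fracture involves the joint surface
    non_joint_fx = p_non_joint >= thr  # fracture in the non-joint (metaphyseal) region
    if joint_fx and non_joint_fx:
        return "C"        # complete articular fracture
    if joint_fx:
        return "B"        # partial articular fracture
    if non_joint_fx:
        return "A"        # extra-articular fracture
    return "normal"

print(combine_predictions(0.8, 0.9))  # -> "C"
print(combine_predictions(0.1, 0.2))  # -> "normal"
```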
To address the loss of important features, faint details, and unclear textures in multimodal medical image fusion, this paper proposes a method for fusing computed tomography (CT) and magnetic resonance imaging (MRI) images using a generative adversarial network (GAN) and a convolutional neural network (CNN) under image enhancement. The generator operated on the high-frequency feature images, while two discriminators evaluated the fused images after the inverse transform. The high-frequency feature images were then fused by the trained GAN, and the low-frequency feature images were fused by a pre-trained CNN model based on transfer learning. Experimental results showed that, compared with current advanced fusion algorithms, the proposed method produced richer texture details and clearer contour edges in subjective evaluation. In the objective evaluation, QAB/F, information entropy (IE), spatial frequency (SF), structural similarity (SSIM), mutual information (MI), and visual information fidelity for fusion (VIFF) were 2.0%, 6.3%, 7.0%, 5.5%, 9.0%, and 3.3% higher than the best comparison results, respectively. The fused images can be effectively applied to medical diagnosis to further improve diagnostic efficiency.
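The following skeleton illustrates the two-branch design, substituting a simple Gaussian low-pass split for the paper's (unspecified) frequency transform and stub fusion rules for the trained GAN and transfer-learned CNN; all function names and the max-absolute/averaging rules are placeholders.

```python
# Skeleton of the high-/low-frequency two-branch fusion pipeline.
# fuse_high_gan and fuse_low_cnn are stand-ins for the trained GAN
# generator and the transfer-learned CNN, which the abstract does not detail.
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, sigma=2.0):
    """Split an image into low- and high-frequency parts (low + high == img)."""
    low = gaussian_filter(img, sigma)
    return low, img - low

def fuse_high_gan(h_ct, h_mri):
    # Placeholder for the trained GAN generator; here: keep the larger
    # absolute response per pixel, a common rule for detail layers.
    return np.where(np.abs(h_ct) >= np.abs(h_mri), h_ct, h_mri)

def fuse_low_cnn(l_ct, l_mri):
    # Placeholder for the transfer-learned CNN; here: simple averaging.
    return 0.5 * (l_ct + l_mri)

def fuse(ct, mri):
    l_ct, h_ct = decompose(ct)
    l_mri, h_mri = decompose(mri)
    # The inverse transform is plain addition for this additive decomposition.
    return fuse_low_cnn(l_ct, l_mri) + fuse_high_gan(h_ct, h_mri)

ct = np.random.rand(256, 256).astype(np.float32)   # stand-in CT slice
mri = np.random.rand(256, 256).astype(np.float32)  # stand-in MRI slice
fused = fuse(ct, mri)
```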