In the clinical diagnosis of brain tumors, accurate segmentation based on multimodal magnetic resonance imaging (MRI) is essential for determining tumor type, extent, and spatial boundaries. However, differences in imaging mechanisms, information emphasis, and feature distributions across MRI modalities pose significant challenges for precise tumor modeling and fusion-based segmentation. In recent years, fusion neural networks have provided effective strategies for integrating multimodal information and have become a major research focus in multimodal brain tumor segmentation. This review systematically surveys studies on fusion neural networks for multimodal brain tumor segmentation published since 2019. First, the fundamental concepts of multimodal data fusion and model fusion are introduced. Existing methods are then categorized into three types according to the level at which fusion occurs: prediction fusion models, feature fusion models, and stage fusion models, and their structural characteristics and segmentation performance are comparatively analyzed. Finally, current limitations are discussed, and potential directions for the development of fusion neural networks for multimodal MRI brain tumor segmentation are outlined. This review aims to provide a reference for the design and optimization of future multimodal brain tumor segmentation models.