L2NLF: a novel linear-to-nonlinear framework for multi-modal medical image registration

Balakrishnan G, Zhao A, Sabuncu MR, Guttag J, Dalca AV. An unsupervised learning model for deformable medical image registration. In Proceedings of the IEEE conference on computer vision and pattern recognition. 2018. p. 9252–60. https://doi.org/10.1109/CVPR.2018.00964.

Zhu F, Wang S, Li D, Li Q. Similarity attention-based CNN for robust 3D medical image registration. Biomed Signal Process Control. 2023;81:104403. https://doi.org/10.1016/j.bspc.2022.104403.

Huang J, Guo J, Pedrosa I, Fei B. Deep learning-based deformable registration of dynamic contrast-enhanced MR images of the kidney. Proc SPIE Int Soc Opt Eng. 2022;12034:213. https://doi.org/10.1117/12.2611768.

Shi HB, Lu LY, Yin MX, Zhong C, Yang F. Joint few-shot registration and segmentation self-training of 3D medical images. Biomed Signal Process Control. 2023;80:104294. https://doi.org/10.1016/j.bspc.2022.104294.

Zheng ZY, Cao WM, Duan Y, Cao GT, Lian DL. Multi-strategy mutual learning network for deformable medical image registration. Neurocomputing. 2022;501:102–12. https://doi.org/10.1016/j.neucom.2022.06.020.

He YT, Li TT, Ge RJ, Yang J, Kong YY, Zhu J, et al. Few-shot learning for deformable medical image registration with perception-correspondence decoupling and reverse teaching. IEEE J Biomed Health Inform. 2022;26:1177–87. https://doi.org/10.1109/JBHI.2021.3095409.

Wei DM, Ahmad S, Guo YY, Chen LY, Huang YZ, Ma L, et al. Recurrent tissue-aware network for deformable registration of infant brain MR images. IEEE Trans Med Imaging. 2022;41:1219–29. https://doi.org/10.1109/TMI.2021.3137280.

Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, et al. An image is worth 16×16 words: transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. 2020. https://doi.org/10.48550/arXiv.2010.11929.

Lin AL, Chen BZ, Xu JY, Zhang Z, Lu GM, Zhang D. DS-TransUNet: dual swin transformer U-net for medical image segmentation. IEEE Trans Instrum Meas. 2022. https://doi.org/10.1109/TIM.2022.3178991.

Yuan FN, Zhang ZX, Fang ZJ. An effective CNN and transformer complementary network for medical image segmentation. Pattern Recognit. 2023;136:109228. https://doi.org/10.1016/j.patcog.2022.109228.

He A, Wang K, Li T, Du C, Xia S, Fu H. H2Former: an efficient hierarchical hybrid transformer for medical image segmentation. IEEE Trans Med Imaging. 2023. https://doi.org/10.1109/TMI.2023.3264513.

Li B, Liu SK, Wu F, Li GH, Zhong ML, Guan XH. RT-Unet: an advanced network based on residual network and transformer for medical image segmentation. Int J Intell Syst. 2022;37:8565–82. https://doi.org/10.1002/int.22956.

Dalmaz O, Yurt M, Cukur T. ResViT: residual vision transformers for multimodal medical image synthesis. IEEE Trans Med Imaging. 2022;41:2598–614. https://doi.org/10.1109/TMI.2022.3167808.

Zhao B, Cheng TT, Zhang XR, Wang JJ, Zhu H, Zhao RC, et al. CT synthesis from MR in the pelvic area using Residual Transformer Conditional GAN. Comput Med Imaging Graph. 2023;103:102150. https://doi.org/10.1016/j.compmedimag.2022.102150.

Li Y, Zhou T, He K, Zhou Y, Shen D. Multi-scale transformer network with edge-aware pre-training for cross-modality MR image synthesis. IEEE Trans Med Imaging. 2023. https://doi.org/10.1109/TMI.2023.3288001.

Shankar V, Yousefi E, Manashty A, Blair D, Teegapuram D. Clinical-GAN: trajectory forecasting of clinical events using transformer and generative adversarial networks. Artif Intell Med. 2023;138:102507. https://doi.org/10.1016/j.artmed.2023.102507.

Gao S, Li XG, Li X, Li Z, Deng YQ. Transformer based tooth classification from cone-beam computed tomography for dental charting. Comput Biol Med. 2022;148:105880. https://doi.org/10.1016/j.compbiomed.2022.105880.

Ma ZQ, Xie QX, Xie PX, Fan F, Gao XX, Zhu J. HCTNet: a hybrid ConvNet-transformer network for retinal optical coherence tomography image classification. Biosensors-Basel. 2022;12:542. https://doi.org/10.3390/bios12070542.

Rodriguez MA, AlMarzouqi H, Liatsis P. Multi-label retinal disease classification using transformers. IEEE J Biomed Health Inform. 2023;27:2739–50. https://doi.org/10.1109/JBHI.2022.3214086.

Manzari ON, Ahmadabadi H, Kashiani H, Shokouhi SB, Ayatollahi A. MedViT: a robust vision transformer for generalized medical image classification. Comput Biol Med. 2023;157:106791. https://doi.org/10.1016/j.compbiomed.2023.106791.

Zhang J, Liu AP, Wang D, Liu Y, Wang ZJ, Chen X. Transformer-based end-to-end anatomical and functional image fusion. IEEE Trans Instrum Meas. 2022. https://doi.org/10.1109/TIM.2022.3200426.

Yu KX, Yang XM, Jeon S, Dou QY. An end-to-end medical image fusion network based on Swin-transformer. Microprocess Microsyst. 2023;98:104781. https://doi.org/10.1016/j.micpro.2023.104781.

Li WS, Zhang Y, Wang GF, Huang YP, Li RY. DFENet: a dual-branch feature enhanced network integrating transformers and convolutional feature learning for multimodal medical image fusion. Biomed Signal Process Control. 2023;80:104402. https://doi.org/10.1016/j.bspc.2022.104402.

Zhou Q, Ye SZ, Wen MW, Huang ZW, Ding MY, Zhang XM. Multi-modal medical image fusion based on densely-connected high-resolution CNN and hybrid transformer. Neural Comput Appl. 2022;34:21741–61. https://doi.org/10.1007/s00521-022-07635-1.

Chen J, Frey EC, He Y, Segars WP, Li Y, Du Y. TransMorph: transformer for unsupervised medical image registration. Med Image Anal. 2022;82:102615. https://doi.org/10.1016/j.media.2022.102615.

Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, et al. Swin transformer: hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF international conference on computer vision. 2021. p. 10012–22. https://doi.org/10.48550/arXiv.2103.14030.

Ma MR, Xu YB, Song L, Liu GX. Symmetric transformer-based network for unsupervised image registration. Knowl-Based Syst. 2022;257:109959. https://doi.org/10.1016/j.knosys.2022.109959.

Song XR, Chao HQ, Xu XN, Guo HT, Xu S, Turkbey B, et al. Cross-modal attention for multi-modal image registration. Med Image Anal. 2022;82:102612. https://doi.org/10.1016/j.media.2022.102612.

Liu Y, Wang W, Li Y, Lai H, Huang S, Yang X. Geometry-consistent adversarial registration model for unsupervised multi-modal medical image registration. IEEE J Biomed Health Inform. 2023. https://doi.org/10.1109/JBHI.2023.3270199.

Lian C, Li X, Kong L, Wang J, Zhang W, Huang X, et al. CoCycleReg: collaborative cycle-consistency method for multi-modal medical image registration. Neurocomputing. 2022;500:799–808. https://doi.org/10.1016/j.neucom.2022.05.113.

Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial networks. Commun ACM. 2020;63:139–44. https://doi.org/10.1145/3422622.

Zheng Y, Sui X, Jiang Y, Che T, Zhang S, Yang J, et al. SymReg-GAN: symmetric image registration with generative adversarial networks. IEEE Trans Pattern Anal Mach Intell. 2022;44:5631–46. https://doi.org/10.1109/TPAMI.2021.3083543.

Yan S, Wang C, Chen W, Lyu J. Swin transformer-based GAN for multi-modal medical image translation. Front Oncol. 2022;12:942511. https://doi.org/10.3389/fonc.2022.942511.

Han R, Jones CK, Lee J, Zhang X, Wu P, Vagdargi P, et al. Joint synthesis and registration network for deformable MR-CBCT image registration for neurosurgical guidance. Phys Med Biol. 2022. https://doi.org/10.1088/1361-6560/ac72ef.

Zhang JK, Wang YQ, Dai J, Cavichini M, Bartsch DUG, Freeman WR, et al. Two-step registration on multi-modal retinal images via deep neural networks. IEEE Trans Image Process. 2022;31:823–38. https://doi.org/10.1109/TIP.2021.3135708.

Sengupta D, Gupta P, Biswas A. A survey on mutual information based medical image registration algorithms. Neurocomputing. 2022;486:174–88. https://doi.org/10.1016/j.neucom.2021.11.023.

Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, et al. Attention is all you need. In Advances in neural information processing systems. 2017;30. https://doi.org/10.48550/arXiv.1706.03762.

Wang W, Chen W, Qiu Q, Chen L, Wu B, Lin B, et al. Crossformer++: a versatile vision transformer hinging on cross-scale attention. arXiv preprint arXiv:2303.06908. 2023. https://doi.org/10.48550/arXiv.2303.06908.

Menze BH, Jakab A, Bauer S, Kalpathy-Cramer J, Farahani K, Kirby J, et al. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans Med Imaging. 2015;34:1993–2024. https://doi.org/10.1109/TMI.2014.2377694.

Baid U, Ghodasara S, Mohan S, Bilello M, Calabrese E, Colak E, et al. The RSNA-ASNR-MICCAI BraTS 2021 benchmark on brain tumor segmentation and radiogenomic classification. arXiv preprint arXiv:2107.02314. 2021. https://doi.org/10.48550/arXiv.2107.02314.

Zhu L, Wang X, Ke Z, Zhang W, Lau RW. BiFormer: vision transformer with bi-level routing attention. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023. p. 10323–33. https://doi.org/10.48550/arXiv.2303.08810.

Ding M, Xiao B, Codella N, Luo P, Wang J, Yuan L. DaViT: dual attention vision transformers. In European conference on computer vision. Cham: Springer; 2022. p. 74–92. https://doi.org/10.1007/978-3-031-20053-3_5.
