Sun Y, Thakor N. Photoplethysmography revisited: from contact to noncontact, from point to imaging. IEEE Trans Biomed Eng. 2016;63(3):463–77. https://doi.org/10.1109/TBME.2015.2476337.
Gilgen-Ammann R, Schweizer T, Wyss T. RR interval signal quality of a heart rate monitor and an ECG Holter at rest and during exercise. Eur J Appl Physiol. 2019;119(7):1525–32. https://doi.org/10.1007/s00421-019-04142-5.
McDuff D. Camera measurement of physiological vital signs. ACM Comput Surv. 2023;55(9):176:1–176:40. https://doi.org/10.1145/3558518.
Poh MZ, McDuff DJ, Picard RW. Non-contact, automated cardiac pulse measurements using video imaging and blind source separation. Opt Express. 2010;18(10):10762–74. https://doi.org/10.1364/OE.18.010762.
de Haan G, Jeanne V. Robust pulse rate from chrominance-based rPPG. IEEE Trans Biomed Eng. 2013;60(10):2878–86. https://doi.org/10.1109/TBME.2013.2266196.
Yu Z, Li X, Zhao G. Facial-video-based physiological signal measurement: recent advances and affective applications. IEEE Signal Process Mag. 2021;38(6):50–8. https://doi.org/10.1109/MSP.2021.3106285.
Chen W, McDuff D. DeepPhys: video-based physiological measurement using convolutional attention networks. In: Computer vision—ECCV 2018: 15th European conference, Munich, Germany, September 8-14, 2018, Proceedings, Part II. Berlin, Heidelberg: Springer; 2018. p. 356–373. https://doi.org/10.1007/978-3-030-01216-8_22.
Ni A, Azarang A, Kehtarnavaz N. A review of deep learning-based contactless heart rate measurement methods. Sensors. 2021;21(11):3719. https://doi.org/10.3390/s21113719.
Lu H, Han H, Zhou SK. Dual-GAN: joint BVP and noise modeling for remote physiological measurement. In: 2021 IEEE/CVF conference on computer vision and pattern recognition (CVPR); 2021. p. 12399–12408.
Ho J, Jain A, Abbeel P. Denoising diffusion probabilistic models. In: Proceedings of the 34th international conference on neural information processing systems. NIPS’20. Red Hook, NY, USA: Curran Associates Inc.; 2020. p. 6840–6851.
Niu X, Han H, Shan S, Chen X. VIPL-HR: a multi-modal database for pulse estimation from less-constrained face video. In: Jawahar CV, Li H, Mori G, Schindler K, editors. Computer vision—ACCV 2018. Lecture notes in computer science. Cham: Springer; 2019. p. 562–76.
Stricker R, Müller S, Gross HM. Non-contact video-based pulse rate measurement on a mobile service robot. In: The 23rd IEEE international symposium on robot and human interactive communication; 2014. p. 1056–1062. https://ieeexplore.ieee.org/document/6926392.
Yu Z, Shen Y, Shi J, Zhao H, Torr P, Zhao G. PhysFormer: facial video-based physiological measurement with temporal difference transformer. In: 2022 IEEE/CVF conference on computer vision and pattern recognition (CVPR); 2022. p. 4176–4186.
Wang W, den Brinker AC, Stuijk S, de Haan G. Algorithmic principles of remote PPG. IEEE Trans Biomed Eng. 2017;64(7):1479–91. https://doi.org/10.1109/TBME.2016.2609282.
de Haan G, van Leest A. Improved motion robustness of remote-PPG by using the blood volume pulse signature. Physiol Meas. 2014;35(9):1913. https://doi.org/10.1088/0967-3334/35/9/1913.
Balakrishnan G, Durand F, Guttag J. Detecting pulse from head motions in video. In: 2013 IEEE conference on computer vision and pattern recognition; 2013. p. 3430–3437. https://ieeexplore.ieee.org/document/6619284.
Yu Z, Li X, Zhao G. Remote photoplethysmograph signal measurement from facial videos using spatio-temporal networks. In: 30th British machine vision conference 2019, BMVC 2019, Cardiff, UK, September 9-12, 2019. BMVA Press; 2019. p. 277. https://bmvc2019.org/wp-content/uploads/papers/0186-paper.pdf.
Yu Z, Shen Y, Shi J, Zhao H, Cui Y, Zhang J, et al. PhysFormer++: facial video-based physiological measurement with SlowFast temporal difference transformer. Int J Comput Vis. 2023;131(6):1307–30.
Luo C, Xie Y, Yu Z. PhysMamba: efficient remote physiological measurement with SlowFast temporal difference Mamba. In: Chinese conference on biometric recognition. Springer; 2024. p. 248–259.
Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial nets. In: Proceedings of the 27th international conference on neural information processing systems - Volume 2. NIPS’14. Cambridge, MA, USA: MIT Press; 2014. p. 2672–2680.
Song R, Chen H, Cheng J, Li C, Liu Y, Chen X. PulseGAN: learning to generate realistic pulse waveforms in remote photoplethysmography. IEEE J Biomed Health Inform. 2021;25(5):1373–84. https://doi.org/10.1109/JBHI.2021.3051176.
Sohl-Dickstein J, Weiss E, Maheswaranathan N, Ganguli S. Deep unsupervised learning using nonequilibrium thermodynamics. In: Proceedings of the 32nd international conference on machine learning. PMLR; 2015. p. 2256–2265. https://proceedings.mlr.press/v37/sohl-dickstein15.html.
Chen S, Wong KL, Chin JW, Chan TT, So RH. DiffPhys: enhancing signal-to-noise ratio in remote photoplethysmography signal using a diffusion model approach. Bioengineering. 2024;11(8):743.
Niu X, Shan S, Han H, Chen X. RhythmNet: end-to-end heart rate estimation from face via spatial-temporal representation. IEEE Trans Image Process. 2020;29:2409–23. https://doi.org/10.1109/TIP.2019.2947204.
Verkruysse W, Svaasand LO, Nelson JS. Remote plethysmographic imaging using ambient light. Opt Express. 2008;16(26):21434–45. https://doi.org/10.1364/OE.16.021434.
Kazemi V, Sullivan J. One millisecond face alignment with an ensemble of regression trees. In: 2014 IEEE conference on computer vision and pattern recognition; 2014. p. 1867–1874. https://ieeexplore.ieee.org/document/6909637.
Bobbia S, Macwan R, Benezeth Y, Mansouri A, Dubois J. Unsupervised skin tissue segmentation for remote photoplethysmography. Pattern Recognit Lett. 2019;124:82–90. https://doi.org/10.1016/j.patrec.2017.10.017.
Lee E, Chen E, Lee CY. Meta-rPPG: remote heart rate estimation using a transductive meta-learner. In: Vedaldi A, Bischof H, Brox T, Frahm JM, editors. Computer vision—ECCV 2020. Lecture notes in computer science, vol. 12372. Cham: Springer; 2020. p. 392–409.
Tulyakov S, Alameda-Pineda X, Ricci E, Yin L, Cohn JF, Sebe N. Self-adaptive matrix completion for heart rate estimation from face videos under realistic conditions. In: 2016 IEEE conference on computer vision and pattern recognition (CVPR); 2016. p. 2396–2404. https://ieeexplore.ieee.org/document/7780632.
Carreira J, Zisserman A. Quo Vadis, action recognition? A new model and the kinetics dataset. In: 2017 IEEE conference on computer vision and pattern recognition (CVPR). Honolulu, HI: IEEE; 2017. p. 4724–4733. http://ieeexplore.ieee.org/document/8099985/.
Niu X, Han H, Shan S, Chen X. SynRhythm: learning a deep heart rate estimator from general to specific. In: 2018 24th international conference on pattern recognition (ICPR); 2018. p. 3580–3585. https://ieeexplore.ieee.org/document/8546321.
Liu X, Hill B, Jiang Z, Patel S, McDuff D. EfficientPhys: enabling simple, fast and accurate camera-based cardiac measurement. In: 2023 IEEE/CVF winter conference on applications of computer vision (WACV). Waikoloa, HI, USA: IEEE; 2023. p. 4997–5006. https://ieeexplore.ieee.org/document/10030453/.