Self-supervised learning in medicine and healthcare – Nature.com

  • Rajpurkar, P., Chen, E., Banerjee, O. & Topol, E. J. AI in health and medicine. Nat. Med. 28, 31–38 (2022).

  • Sambasivan, N. et al. “Everyone wants to do the model work, not the data work”: data cascades in high-stakes AI. In Proc. 2021 CHI Conference on Human Factors in Computing Systems (Association for Computing Machinery, 2021); https://doi.org/10.1145/3411764.3445518

  • Russakovsky, O. et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115, 211–252 (2015).

  • Irvin, J. et al. CheXpert: a large chest radiograph dataset with uncertainty labels and expert comparison. In 33rd AAAI Conference on Artificial Intelligence, AAAI 2019, 31st Innovative Applications of Artificial Intelligence Conference, IAAI 2019 and the 9th AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019 590–597 (AAAI Press, 2019).

  • Huh, M., Agrawal, P. & Efros, A. What makes ImageNet good for transfer learning? Preprint at https://doi.org/10.48550/arXiv.1608.08614 (2016).

  • Chen, T., Kornblith, S., Norouzi, M. & Hinton, G. A simple framework for contrastive learning of visual representations. In Proc. 37th International Conference on Machine Learning (eds Daumé, H. III & Singh, A.) 1597–1607 (PMLR, 2020).

  • Chen, X., Fan, H., Girshick, R. & He, K. Improved baselines with momentum contrastive learning. Preprint at https://doi.org/10.48550/arXiv.2003.04297 (2020).

  • Zbontar, J., Jing, L., Misra, I., LeCun, Y. & Deny, S. Barlow Twins: self-supervised learning via redundancy reduction. In Proc. 38th International Conference on Machine Learning (eds Meila, M. & Zhang, T.) 12310–12320 (PMLR, 2021).

  • Sowrirajan, H., Yang, J., Ng, A. Y. & Rajpurkar, P. MoCo-CXR: MoCo pretraining improves representation and transferability of chest X-ray models. In Medical Imaging with Deep Learning 2021 727–743 (PMLR, 2021).

  • Soni, P. N., Shi, S., Sriram, P. L., Ng, A. Y. & Rajpurkar, P. Contrastive learning of heart and lung sounds for label-efficient diagnosis. Patterns 3, 100400 (2022).

  • Zhang, Y., Jiang, H., Miura, Y., Manning, C. D. & Langlotz, C. P. Contrastive learning of medical visual representations from paired images and text. Preprint at https://doi.org/10.48550/arXiv.2010.00747 (2020).

  • Sriram, A. et al. COVID-19 prognosis via self-supervised representation learning and multi-image prediction. Preprint at https://doi.org/10.48550/arXiv.2101.04909 (2021).

  • Han, Y., Chen, C., Tewfik, A. H., Ding, Y. & Peng, Y. Pneumonia detection on chest X-ray using radiomic features and contrastive learning. In 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI) 247–251 (IEEE Computer Society, 2021).

  • Azizi, S. et al. Big self-supervised models advance medical image classification. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV) 3458–3468 (IEEE Computer Society, 2021).

  • Vu, Y. N. T. et al. MedAug: contrastive learning leveraging patient metadata improves representations for chest X-ray interpretation. In Proc. 6th Machine Learning for Healthcare Conference (eds Jung, K. et al.) 755–769 (PMLR, 2021).

  • Lu, M. Y., Chen, R. J. & Mahmood, F. Semi-supervised breast cancer histology classification using deep multiple instance learning and contrastive predictive coding. In Medical Imaging 2020: Digital Pathology (eds Tomaszewski, J. E. & Ward, A. D.) 11320J (SPIE, 2020).

  • Yang, P., Hong, Z., Yin, X., Zhu, C. & Jiang, R. Self-supervised visual representation learning for histopathological images. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2021 (eds de Bruijne, M. et al.) 47–57 (Springer, 2021).

  • Srinidhi, C. L., Kim, S. W., Chen, F.-D. & Martel, A. L. Self-supervised driven consistency training for annotation efficient histopathology image analysis. Med. Image Anal. 75, 102256 (2022).

  • DiPalma, J., Suriawinata, A. A., Tafe, L. J., Torresani, L. & Hassanpour, S. Resolution-based distillation for efficient histology image classification. Artif. Intell. Med. 119, 102136 (2021).

  • Kiyasseh, D., Zhu, T. & Clifton, D. A. CLOCS: contrastive learning of cardiac signals across space, time and patients. In Proc. 38th International Conference on Machine Learning (eds Meila, M. & Zhang, T.) 5606–5615 (PMLR, 2021).

  • Banville, H. L. et al. Self-supervised representation learning from electroencephalography signals. In 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP) (IEEE Computer Society, 2019); https://doi.org/10.1109/MLSP.2019.8918693

  • Gopal, B. et al. 3KG: contrastive learning of 12-lead electrocardiograms using physiologically-inspired augmentations. In Proc. Machine Learning for Health (eds Roy, S. et al.) 156–167 (PMLR, 2021).

  • Jiao, J. et al. Self-supervised contrastive video-speech representation learning for ultrasound. Med. Image Comput. Comput. Assist. Interv. 12263, 534–543 (2020).

  • Wang, Y., Wang, J., Cao, Z. & Barati Farimani, A. Molecular contrastive learning of representations via graph neural networks. Nat. Mach. Intell. 4, 279–287 (2022).

  • Xie, Y., Xu, Z., Zhang, J., Wang, Z. & Ji, S. Self-supervised learning of graph neural networks: a unified review. IEEE Trans. Pattern Anal. Mach. Intell. (2022); https://doi.org/10.1109/TPAMI.2022.3170559

  • Meng, X., Ganoe, C. H., Sieberg, R. T., Cheung, Y. Y. & Hassanpour, S. Self-supervised contextual language representation of radiology reports to improve the identification of communication urgency. AMIA Jt. Summits Transl. Sci. Proc. 2020, 413–421 (2020).

  • Girgis, H. Z., James, B. T. & Luczak, B. B. Identity: rapid alignment-free prediction of sequence alignment identity scores using self-supervised general linear models. NAR Genom. Bioinform. 3, lqab001 (2021).

  • Li, Y. et al. BEHRT: transformer for electronic health records. Sci. Rep. 10, 7155 (2020).

  • Wang, X., Xu, Z., Tam, T., Yang, M. & Xu, D. Self-supervised image-text pre-training with mixed data in chest X-rays. Preprint at https://doi.org/10.48550/arXiv.2103.16022 (2021).

  • Rasmy, L., Xiang, Y., Xie, Z., Tao, C. & Zhi, D. Med-BERT: pretrained contextualized embeddings on large-scale structured electronic health records for disease prediction. npj Digit. Med. 4, 86 (2021).

  • Li, F. et al. Fine-tuning Bidirectional Encoder Representations From Transformers (BERT)-based models on large-scale electronic health record notes: an empirical study. JMIR Med. Inform. 7, e14830 (2019).

  • Kraljevic, Z. et al. Multi-domain clinical natural language processing with MedCAT: the Medical Concept Annotation Toolkit. Artif. Intell. Med. 117, 102083 (2021).

  • Kostas, D., Aroca-Ouellette, S. & Rudzicz, F. BENDR: using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data. Front. Hum. Neurosci. 15, 653659 (2021).

  • Baevski, A., Zhou, Y., Mohamed, A. & Auli, M. wav2vec 2.0: a framework for self-supervised learning of speech representations. In Advances in Neural Information Processing Systems (eds Larochelle, H. et al.) 12449–12460 (Curran Associates, 2020).

  • Boyd, J. et al. Self-supervised representation learning using visual field expansion on digital pathology. In 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) 639–647 (IEEE Computer Society, 2021).

  • Vaswani, A. et al. Attention is all you need. In Proc. 31st International Conference on Neural Information Processing Systems 6000–6010 (Curran Associates, 2017).

  • Jaegle, A. et al. Perceiver IO: a general architecture for structured inputs and outputs. In International Conference on Learning Representations 4039 (ICLR, 2022).

  • Akbari, H. et al. VATT: transformers for multimodal self-supervised learning from raw video, audio and text. In Advances in Neural Information Processing Systems (eds Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P. S. & Vaughan, J. W.) 24206–24221 (Curran Associates, 2021).

  • Nagrani, A. et al. Attention bottlenecks for multimodal fusion. In Advances in Neural Information Processing Systems (eds Ranzato, M. et al.) 14200–14213 (Curran Associates, 2021).

  • Choromanski, K. et al. Masked language modeling for proteins via linearly scalable long-context transformers. Preprint at https://doi.org/10.48550/arXiv.2006.03555 (2020).

  • Jumper, J. et al. Highly accurate protein structure prediction with AlphaFold. Nature 596, 583–589 (2021).

  • Rao, R. M. et al. MSA Transformer. In Proc. 38th International Conference on Machine Learning (eds Meila, M. & Zhang, T.) 8844–8856 (PMLR, 2021).

  • Lu, A. X., Zhang, H., Ghassemi, M. & Moses, A. Self-supervised contrastive learning of protein representations by mutual information maximization. Preprint at bioRxiv https://doi.org/10.1101/2020.09.04.283929 (2020).

  • Yang, C., Wu, Z., Zhou, B. & Lin, S. Instance localization for self-supervised detection pretraining. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 3986–3995 (IEEE Computer Society, 2021).

  • Jana, A. et al. Deep learning based NAS score and fibrosis stage prediction from CT and pathology data. In 2020 IEEE 20th International Conference on Bioinformatics and Bioengineering (BIBE) 981–986 (IEEE Computer Society, 2020).

  • Ohri, K. & Kumar, M. Review on self-supervised image recognition using deep neural networks. Knowl. Based Syst. 224, 107090 (2021).

  • Holmberg, O. G. et al. Self-supervised retinal thickness prediction enables deep learning from unlabelled data to boost classification of diabetic retinopathy. Nat. Mach. Intell. 2, 719–726 (2020).

  • Spahr, A., Bozorgtabar, B. & Thiran, J.-P. Self-taught semi-supervised anomaly detection on upper limb X-rays. In 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI) 1632–1636 (IEEE Computer Society, 2021).

  • Radford, A. et al. Learning transferable visual models from natural language supervision. In Proc. 38th International Conference on Machine Learning (eds Meila, M. & Zhang, T.) 8748–8763 (PMLR, 2021).

  • Geirhos, R. et al. Shortcut learning in deep neural networks. Nat. Mach. Intell. 2, 665–673 (2020).

  • Sagawa, S., Koh, P. W., Hashimoto, T. B. & Liang, P. Distributionally robust neural networks. In International Conference on Learning Representations 1796 (ICLR, 2020).

  • Fedorov, A. et al. Tasting the cake: evaluating self-supervised generalization on out-of-distribution multimodal MRI data. Preprint at https://doi.org/10.48550/arXiv.2103.15914 (2021).

  • Li, Z. et al. Domain generalization for mammography detection via multi-style and multi-view contrastive learning. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2021 (eds de Bruijne, M. et al.) 98–108 (Springer, 2021).

  • Endo, M., Krishnan, R., Krishna, V., Ng, A. Y. & Rajpurkar, P. Retrieval-based chest X-ray report generation using a pre-trained contrastive language-image model. In Proc. Machine Learning for Health (eds Roy, S. et al.) 209–219 (PMLR, 2021).

  • Chen, R. J. & Krishnan, R. G. Self-supervised vision transformers learn visual concepts in histopathology. In LMRL Workshop at Neural Information Processing Systems (NeurIPS, 2021).

  • Brown, T. et al. Language models are few-shot learners. In Advances in Neural Information Processing Systems (eds Larochelle, H. et al.) 1877–1901 (Curran Associates, 2020).

  • Logé, C. et al. Q-Pain: a question answering dataset to measure social bias in pain management. In Proc. Neural Information Processing Systems Track on Datasets and Benchmarks (eds Vanschoren, J. & Yeung, S.) 105 (NeurIPS, 2021).

  • Larrazabal, A. J., Nieto, N., Peterson, V., Milone, D. H. & Ferrante, E. Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis. Proc. Natl Acad. Sci. USA 117, 12592 (2020).

  • Gamble, P. et al. Determining breast cancer biomarker status and associated morphological features using deep learning. Commun. Med. 1, 14 (2021).
