
Adoption of Electronic Health Record Systems among U.S. Non-Federal Acute Care Hospitals: 2008–2015. ONC Data Brief. https://www.healthit.gov/sites/default/files/briefs/2015_hospital_adoption_db_v17.pdf (2016).
Adler-Milstein, J. et al. Electronic health record adoption in US hospitals: the emergence of a digital ‘advanced use’ divide. J. Am. Med. Inform. Assoc. 24, 1142–1148 (2017).
Bush, R. A., Kuelbs, C. L., Ryu, J., Jiang, W. & Chiang, G. J. Structured data entry in the electronic medical record: perspectives of pediatric specialty physicians and surgeons. J. Med. Syst. 41, 1–8 (2017).
Meystre, S. M., Savova, G. K., Kipper-Schuler, K. C. & Hurdle, J. F. Extracting information from textual documents in the electronic health record: a review of recent research. Yearb. Med. Inform. 17, 128–144 (2008).
Liang, H. et al. Evaluation and accurate diagnoses of pediatric diseases using artificial intelligence. Nat. Med. 25, 433–438 (2019).
Yang, J. et al. Assessing the prognostic significance of tumor-infiltrating lymphocytes in patients with melanoma using pathologic features identified by natural language processing. JAMA Netw. Open 4, e2126337 (2021).
Nadkarni, P. M., Ohno-Machado, L. & Chapman, W. W. Natural language processing: an introduction. J. Am. Med. Inform. Assoc. 18, 544–551 (2011).
LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
Collobert, R. et al. Natural language processing (almost) from scratch. J. Mach. Learn. Res. 12, 2493–2537 (2011).
Lample, G., Ballesteros, M., Subramanian, S., Kawakami, K. & Dyer, C. Neural architectures for named entity recognition. Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 260–270 (2016).
Lee, J. et al. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36, 1234–1240 (2020).
Vaswani, A. et al. Attention is all you need. Advances in Neural Information Processing Systems 30 (2017).
Wang, A. et al. GLUE: a multi-task benchmark and analysis platform for natural language understanding. Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. 353–355 (2018).
Wang, A. et al. SuperGLUE: a stickier benchmark for general-purpose language understanding systems. Advances in Neural Information Processing Systems 32 (2019).
Qiu, X. et al. Pre-trained models for natural language processing: a survey. Science China Technological Sciences 63, 1872–1897 (2020).
Tay, Y., Dehghani, M., Bahri, D. & Metzler, D. Efficient transformers: a survey. ACM Computing Surveys 55, 1–28 (2020).
Yu, J., Bohnet, B. & Poesio, M. Named entity recognition as dependency parsing. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 6470–6476 (2020).
Yamada, I., Asai, A., Shindo, H., Takeda, H. & Matsumoto, Y. LUKE: deep contextualized entity representations with entity-aware self-attention. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 6442–6454 (2020).
Li, X. et al. Dice loss for data-imbalanced NLP tasks. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 465–476 (2020).
Xu, B., Wang, Q., Lyu, Y., Zhu, Y. & Mao, Z. Entity structure within and throughout: modeling mention dependencies for document-level relation extraction. Proceedings of the AAAI Conference on Artificial Intelligence 35, 14149–14157 (2021).
Ye, D., Lin, Y. & Sun, M. Pack together: entity and relation extraction with levitated marker. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics 1, 4904–4917 (2021).
Cohen, A. D., Rosenman, S. & Goldberg, Y. Relation classification as two-way span-prediction. ArXiv arXiv:2010.04829 (2021).
Lyu, S. & Chen, H. Relation classification with entity type restriction. Findings of the Association for Computational Linguistics: ACL-IJCNLP. 390–395 (2021).
Wang, J. & Lu, W. Two are better than one: joint entity and relation extraction with table-sequence encoders. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 1706–1721 (2020).
Jiang, H. et al. SMART: robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2177–2190 (2020).
Yang, Z. et al. XLNet: generalized autoregressive pretraining for language understanding. Proceedings of the 33rd International Conference on Neural Information Processing Systems. 5753–5763 (2019).
Raffel, C. et al. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 21, 1–67 (2019).
Lan, Z. et al. ALBERT: a lite BERT for self-supervised learning of language representations. ArXiv arXiv:1909.11942 (2019).
Wang, S., Fang, H., Khabsa, M., Mao, H. & Ma, H. Entailment as few-shot learner. ArXiv arXiv:2104.14690 (2021).
Zhang, Z. et al. Semantics-aware BERT for language understanding. Proceedings of the AAAI Conference on Artificial Intelligence 34, 9628–9635 (2020).
Zhang, Z., Yang, J. & Zhao, H. Retrospective reader for machine reading comprehension. Proceedings of the AAAI Conference on Artificial Intelligence 35, 14506–14514 (2021).
Garg, S., Vu, T. & Moschitti, A. TANDA: transfer and adapt pre-trained transformer models for answer sentence selection. Proceedings of the AAAI Conference on Artificial Intelligence 34, 7780–7788 (2020).
Bommasani, R. et al. On the opportunities and risks of foundation models. ArXiv arXiv:2108.07258 (2021).
Floridi, L. & Chiriatti, M. GPT-3: its nature, scope, limits, and consequences. Minds Mach. 30, 681–694 (2020).
Gu, Y. et al. Domain-specific language model pretraining for biomedical natural language processing. ACM Trans. Comput. Healthc. 3, 1–23 (2022).
Shin, H.-C. et al. BioMegatron: larger biomedical domain language model. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 4700–4706 (2020).
Alsentzer, E. et al. Publicly available clinical BERT embeddings. in Proc. 2nd Clinical Natural Language Processing Workshop 72–78 (2019).
Johnson, A. E. W. et al. MIMIC-III, a freely accessible critical care database. Sci. Data 3, 160035 (2016).
Uzuner, Ö., South, B. R., Shen, S. & DuVall, S. L. 2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text. J. Am. Med. Inform. Assoc. 18, 552–556 (2011).
Sun, W., Rumshisky, A. & Uzuner, O. Evaluating temporal relations in clinical text: 2012 i2b2 Challenge. J. Am. Med. Inform. Assoc. 20, 806–813 (2013).
Yang, X. et al. Identifying relations of medications with adverse drug events using recurrent convolutional neural networks and gradient boosting. J. Am. Med. Inform. Assoc. 27, 65–72 (2020).
Yang, X. et al. A study of deep learning methods for de-identification of clinical notes in cross-institute settings. BMC Med. Inform. Decis. Mak. 19, 232 (2019).
Shoeybi, M. et al. Megatron-LM: training multi-billion parameter language models using model parallelism. ArXiv arXiv:1909.08053 (2020).
Levine, Y., Wies, N., Sharir, O., Bata, H. & Shashua, A. Limits to depth efficiencies of self-attention. Advances in Neural Information Processing Systems 33, 22640–22651 (2020).
Sennrich, R., Haddow, B. & Birch, A. Neural machine translation of rare words with subword units. in Proc. 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 1715–1725 (Association for Computational Linguistics, 2016).
Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 4171–4186 (2019).
Wu, Y., Xu, J., Jiang, M., Zhang, Y. & Xu, H. A study of neural word embeddings for named entity recognition in clinical text. AMIA Annu. Symp. Proc. 2015, 1326–1333 (2015).
Soysal, E. et al. CLAMP—a toolkit for efficiently building customized clinical natural language processing pipelines. J. Am. Med. Inform. Assoc. 25, 331–336 (2018).
Wu, Y., Jiang, M., Lei, J. & Xu, H. Named entity recognition in Chinese clinical text using deep neural network. Stud. Health Technol. Inform. 216, 624–628 (2015).
Wu, Y. et al. Combine factual medical knowledge and distributed word representation to improve clinical named entity recognition. in AMIA Annual Symposium Proceedings vol. 2018, 1110 (American Medical Informatics Association, 2018).
Yang, X. et al. Identifying relations of medications with adverse drug events using recurrent convolutional neural networks and gradient boosting. J. Am. Med. Inform. Assoc. 27, 65–72 (2020).
Kumar, S. A survey of deep learning methods for relation extraction. ArXiv arXiv:1705.03645 (2017).
Lv, X., Guan, Y., Yang, J. & Wu, J. Clinical relation extraction with deep learning. Int. J. Hybrid. Inf. Technol. 9, 237–248 (2016).
Wei, Q. et al. Relation extraction from clinical narratives using pre-trained language models. AMIA Annu. Symp. Proc. 2019, 1236–1245 (2020).
Guan, H. & Devarakonda, M. Leveraging contextual information in extracting long distance relations from clinical notes. AMIA Annu. Symp. Proc. 2019, 1051–1060 (2020).
Alimova, I. & Tutubalina, E. Multiple features for clinical relation extraction: a machine learning approach. J. Biomed. Inform. 103, 103382 (2020).
Mahendran, D. & McInnes, B. T. Extracting adverse drug events from clinical notes. AMIA Summits on Translational Science Proceedings. 420–429 (2021).
Yang, X., Zhang, H., He, X., Bian, J. & Wu, Y. Extracting family history of patients from clinical narratives: exploring an end-to-end solution with deep learning models. JMIR Med. Inform. 8, e22982 (2020).
Yang, X., Yu, Z., Guo, Y., Bian, J. & Wu, Y. Clinical relation extraction using transformer-based models. ArXiv arXiv:2107.08957 (2021).
Cer, D., Diab, M., Agirre, E., Lopez-Gazpio, I. & Specia, L. SemEval-2017 Task 1: semantic textual similarity multilingual and crosslingual focused evaluation. Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). 1–14 (2017).
Farouk, M. Measuring sentences similarity: a survey. ArXiv arXiv:1910.03940 (2019).
Ramaprabha, J., Das, S. & Mukerjee, P. Survey on sentence similarity evaluation using deep learning. J. Phys. Conf. Ser. 1000, 012070 (2018).
Gomaa, W. H. & Fahmy, A. A survey of text similarity approaches. International Journal of Computer Applications 68, 13–18 (2013).
Wang, Y. et al. MedSTS: a resource for clinical semantic textual similarity. Lang. Resour. Eval. 54, 57–72 (2020).
Rastegar-Mojarad, M. et al. BioCreative/OHNLP Challenge 2018. in Proc. 2018 ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics 575–575 (ACM, 2018).
Wang, Y. et al. Overview of the 2019 n2c2/OHNLP track on clinical semantic textual similarity. JMIR Med. Inform. 8, e23375 (2020).
Mahajan, D. et al. Identification of semantically similar sentences in clinical notes: iterative intermediate training using multi-task learning. JMIR Med. Inform. 8, e22508 (2020).
Dagan, I., Glickman, O. & Magnini, B. in Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Textual Entailment (eds. Quiñonero-Candela, J., Dagan, I., Magnini, B. & d’Alché-Buc, F.) 177–190 (Springer Berlin Heidelberg, 2006).
Williams, A., Nangia, N. & Bowman, S. R. A broad-coverage challenge corpus for sentence understanding through inference. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies 1, 1112–1122 (2018).
Bowman, S. R., Angeli, G., Potts, C. & Manning, C. D. A large annotated corpus for learning natural language inference. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. 632–642 (2015).
Shivade, C. MedNLI—a natural language inference dataset for the clinical domain. PhysioNet https://doi.org/10.13026/C2RS98 (2017).
Conneau, A., Kiela, D., Schwenk, H., Barrault, L. & Bordes, A. Supervised learning of universal sentence representations from natural language inference data. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. 670–680 (2017).
Rajpurkar, P., Zhang, J., Lopyrev, K. & Liang, P. SQuAD: 100,000+ questions for machine comprehension of text. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. 2383–2392 (2016).
Rajpurkar, P., Jia, R. & Liang, P. Know what you don’t know: unanswerable questions for SQuAD. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics 2, 784–789 (2018).
Zhu, M., Ahuja, A., Juan, D.-C., Wei, W. & Reddy, C. K. Question answering with long multiple-span answers. in Findings of the Association for Computational Linguistics: EMNLP 2020 3840–3849 (Association for Computational Linguistics, 2020).
Ben Abacha, A. & Demner-Fushman, D. A question-entailment approach to question answering. BMC Bioinforma. 20, 511 (2019).
Pampari, A., Raghavan, P., Liang, J. & Peng, N. emrQA: a large corpus for question answering on electronic medical records. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2357–2368 (2018).
Yue, X., Gutierrez, B. J. & Sun, H. Clinical reading comprehension: a thorough analysis of the emrQA dataset. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 4474–4486 (2020).