aDepartment of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro (Tiago Ribeiro, Miguel Mascarenhas Saraiva, Hélder Cardoso, João Afonso, Patrícia Andrade, Guilherme Macedo); bWGO Gastroenterology and Hepatology Training Center (Tiago Ribeiro, Miguel Mascarenhas Saraiva, Hélder Cardoso, João Afonso, Patrícia Andrade, Guilherme Macedo); cFaculty of Medicine of the University of Porto, Alameda Professor Hernâni Monteiro (Miguel Mascarenhas Saraiva, Hélder Cardoso, Patrícia Andrade, Guilherme Macedo); dFaculty of Engineering of the University of Porto (João P.S. Ferreira, Marco Parente, Renato Natal Jorge); eINEGI - Institute of Science and Innovation in Mechanical and Industrial Engineering (João P.S. Ferreira, Marco Parente, Renato Natal Jorge), Porto, Portugal
Background Capsule endoscopy (CE) is the first-line examination for the evaluation of patients with obscure gastrointestinal bleeding. A wide range of small intestinal vascular lesions with different hemorrhagic potential are frequently found in these patients. Nevertheless, reading CE exams is time-consuming and prone to errors. Convolutional neural networks (CNN) are artificial intelligence tools with high performance levels in image analysis. This study aimed to develop a CNN-based model for the identification and differentiation of vascular lesions with distinct hemorrhagic potential in CE images.
Methods The development of the CNN was based on a database of CE images. This database included images of normal small intestinal mucosa, red spots, and angiectasia/varices. The hemorrhagic risk was assessed by Saurin’s classification. For CNN development, 11,588 images (9525 normal mucosa, 1026 red spots, and 1037 angiectasia/varices) were ultimately extracted. Two image datasets were created for CNN training and testing.
Results The network was 91.8% sensitive and 95.9% specific for detection of vascular lesions, providing accurate predictions in 94.4% of cases. In particular, the CNN had a sensitivity and specificity of 97.1% and 95.3%, respectively, for detection of red spots. Detection of angiectasia/varices occurred with a sensitivity of 94.1% and a specificity of 95.1%. The CNN had a frame reading rate of 145 frames/sec.
Conclusions The developed algorithm is the first CNN-based model to accurately detect and distinguish enteric vascular lesions with different hemorrhagic risk. CNN-assisted CE reading may improve the diagnosis of these lesions and overall CE efficiency.
Keywords Capsule endoscopy, artificial intelligence, convolutional neural network, vascular lesions, gastrointestinal bleeding
Ann Gastroenterol 2021; 34 (6): 820-828
Capsule endoscopy (CE) has revolutionized the approach to patients with suspected small intestine disease, allowing noninvasive inspection of this portion of the gastrointestinal tract. The clinical value of CE has been demonstrated in a wide array of diseases, including the evaluation of patients with suspected small bowel hemorrhage, diagnosis and monitoring of Crohn’s disease activity, and detection of protruding small intestinal lesions [1-4].
Obscure gastrointestinal bleeding (OGIB), either overt or occult, is responsible for 5% of all gastrointestinal hemorrhage cases. OGIB is currently the most frequent indication for CE. The source of bleeding is located in the small intestine in most cases [5,6]. A classification of CE findings according to their bleeding potential has been proposed by Saurin et al [7]. This classification defines CE findings as having no bleeding potential (P0), uncertain bleeding potential (P1), or high bleeding potential (P2). Findings with high bleeding potential include large ulcers, angiectasia, and varices. Vascular lesions are among the most commonly diagnosed lesions during the investigation of OGIB. Angiectasia is the most commonly found lesion with high bleeding potential among patients undergoing CE for OGIB [5]. These lesions are associated with chronic blood loss and are most commonly found in the elderly, patients with chronic kidney disease, cardiovascular disease and cirrhosis [8,9]. Other vascular lesions include enteric varices and red spots, which may be found in portal hypertension or diseases with systemic involvement [10-12]. In fact, OGIB is the most common indication for CE in patients with cirrhosis [13]. Furthermore, although the bleeding potential of mucosal red spots is uncertain, they are frequently found during CE for investigation of unexplained iron deficiency anemia and can occur as manifestations of several conditions, including portal hypertensive enteropathy and systemic vasculitis [11,12].
Evaluation of CE exams can be a burdensome task. Each CE video comprises approximately 50,000 frames, requiring an average of 30-120 min for reading [14]. Thus, this process is time-consuming for the clinical gastroenterologist. Moreover, mucosal lesions may be restricted to a small number of frames, increasing the risk of overlooking significant lesions.
The existence of large image databases and enhanced computational power have boosted the development of artificial intelligence (AI) tools for automatic image analysis. Among the different types of AI, convolutional neural networks (CNN) have delivered promising results in diverse fields of medicine [15-17]. Endoscopic imaging, and particularly CE, is one of the branches which can benefit the most from the development of CNN-based tools for the automatic detection of lesions [18]. These technological advances may increase diagnostic rates and optimize the reading process, including its time cost, which constitutes one of the main drawbacks of CE. Therefore, we aimed to create a CNN capable of automatically detecting and differentiating small intestinal vascular lesions of distinct bleeding potential, including red spots, angiectasias, and varices.
Subjects who underwent CE during the period 2015-2020 in a single tertiary center (São João University Hospital, Porto, Portugal), either as inpatients or outpatients, were approached for enrolment in this retrospective study (n=1229). A total of 1483 CE exams were performed. Data retrieved from these examinations were used for the development, training and validation of a CNN-based model aimed at detecting vascular lesions and differentiating their bleeding potential. The full-length CE videos of all participants were reviewed (total number of frames: 67,214,009) and a total of 11,588 images of the enteric mucosa were ultimately extracted. The findings represented in the frames were labeled by 2 gastroenterologists (MMS, HC), each of whom had read more than 1500 CE exams prior to this study. The inclusion and final labeling of the frames followed a double-validation method, requiring consensus between the 2 researchers for the final decision.
This study was approved by the ethics committee of São João University Hospital/Faculty of Medicine of the University of Porto (No. CE 407/2020). The study was retrospective and non-interventional, and respected the original and subsequent revisions of the Declaration of Helsinki. Therefore, there was no interference in the conventional clinical management of any included patient. Any information deemed potentially identifying was omitted, and each patient was assigned a random number to guarantee effective data anonymization for the researchers involved in CNN development. A team with Data Protection Officer certification (Maastricht University) confirmed the non-traceability of the data and its conformity with the General Data Protection Regulation.
In all patients, the procedures were conducted using the PillCam™ SB3 system (Medtronic, Minneapolis, MN, USA). The system includes 3 major components: the endoscopic capsule, an array of sensors connected to a data recorder, and software for image review. The capsule measures 26.2 mm in length and 11.4 mm in width and has a high-resolution camera with a reported 156° field of view. The capture frame rate varies automatically between 2 and 6 frames per second, depending on the speed of progression of the endoscopic capsule. The battery of the endoscopic capsule has an estimated life of ≥8 h. The images were reviewed using PillCam™ Software v9 (Medtronic, Minneapolis, MN, USA) and processed to remove any patient-identifying information (name, operating number, date of procedure). Each extracted frame was stored and assigned a consecutive number.
Each patient received bowel preparation, which globally conformed with previously published guidelines by the European Society of Gastrointestinal Endoscopy [19]. Briefly, patients were advised to have a clear liquid diet on the day preceding capsule ingestion, with fasting during the night before examination. A bowel preparation consisting of 2 L of polyethylene glycol solution was used prior to the capsule ingestion. Simethicone was used as an anti-foaming agent. Prokinetic therapy (10 mg domperidone) was used if the capsule remained in the stomach 1 h after ingestion, upon image review on the data recorder worn by the patient. No eating was allowed for 4 h after the ingestion of the capsule.
Each frame was evaluated for the presence of vascular lesions or normal enteric mucosa. Images with vascular lesions were further categorized according to the specific type of lesion and its respective bleeding potential. The presence of red spots, angiectasias and varices was noted. Red spots were defined as punctate (<1 mm) flat lesions with a bright red area, within the mucosal layer, without vessel appearance [20]. Angiectasias were defined as well-demarcated bright red lesions consisting of tortuous and clustered capillary dilatations, within the mucosal layer [20]. Varices were defined as raised venous dilatations with a serpiginous appearance. The hemorrhagic potential of these lesions was ascertained according to Saurin’s classification [7]. This classification divides lesions into 3 levels of bleeding risk: P0 – no hemorrhagic potential; P1 – uncertain/intermediate hemorrhagic potential; P2 – high hemorrhagic potential. Red spots were classified as P1 lesions, whereas angiectasias and varices were classified as P2 [7].
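As a purely illustrative aid, the minimal sketch below (Python) encodes the correspondence between the lesion types defined above and Saurin’s categories; the label names and helper function are hypothetical and are not taken from the study’s code.

```python
# Hypothetical mapping of lesion types to Saurin's bleeding-risk categories,
# as used for labeling frames in this study (sketch only).
SAURIN_CATEGORY = {
    "normal_mucosa": "N",   # no vascular lesion
    "red_spot": "P1",       # uncertain/intermediate hemorrhagic potential
    "angiectasia": "P2",    # high hemorrhagic potential
    "varices": "P2",        # high hemorrhagic potential
}

def label_frame(lesion_type: str) -> str:
    """Return the Saurin category used as the classification label for a frame."""
    return SAURIN_CATEGORY[lesion_type]
```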
From the collected pool of images (n=11,588), 9525 contained normal enteric mucosa, 1026 had evidence of red spots (P1 lesions), and 1037 had angiectasia or varices (P2 lesions). This pool of images was split for the construction of training and validation image sets. The training dataset comprised 80% of the consecutively extracted images (n=9270), while the remaining 20% constituted the validation dataset (n=2318). The validation dataset was used to assess the performance of the CNN. A flowchart summarizing the study design and image selection for the development (training and validation) of the CNN is presented in Fig. 1.
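The text specifies only the 80/20 proportions of this split (9270 training vs. 2318 validation frames). As an illustration under that assumption, a split could be obtained as in the sketch below; the placeholder data, class-stratification, and random seed are assumptions, since the authors’ exact splitting procedure is not detailed.

```python
from sklearn.model_selection import train_test_split

# Placeholder frame index: one (path, label) pair per extracted frame, where
# 0 = normal mucosa, 1 = red spots (P1), 2 = angiectasia/varices (P2).
frames = [(f"frame_{i:06d}.png", i % 3) for i in range(1000)]
paths, labels = zip(*frames)

train_paths, val_paths, y_train, y_val = train_test_split(
    paths, labels,
    test_size=0.20,      # 20% of frames reserved for validation
    stratify=labels,     # keep class proportions similar in both sets (assumption)
    random_state=42,     # reproducibility
)
```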
Figure 1 Study flow chart for the training and validation phases. CNN, convolutional neural network; CE, capsule endoscopy; AUROC, area under the receiver operating characteristic curve
To create the CNN, we used the Xception model with its weights trained on ImageNet (a large-scale image dataset intended for the development of object recognition software). To transfer this learning to our data, we kept the convolutional layers of the model, removed the last fully connected layers, and attached fully connected layers sized according to the number of classes used to classify our endoscopic images. We used 2 blocks, each consisting of a fully connected layer followed by a dropout layer with a drop rate of 0.3. Following these 2 blocks, we added a dense layer with a size equal to the number of categories to classify (n=3). The learning rate (0.0001), batch size (32) and number of epochs (100) were set by trial and error. We used the TensorFlow 2.3 and Keras libraries to prepare the data and run the model. The analyses were performed on a computer equipped with a 2.1 GHz Intel® Xeon® Gold 6130 processor (Intel, Santa Clara, CA, USA) and dual NVIDIA Quadro® RTX™ 4000 graphics processing units (NVIDIA Corporate, Santa Clara, CA, USA).
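A minimal sketch of the transfer-learning setup described above, written with the TensorFlow/Keras API, is shown below. The input resolution, the widths of the 2 fully connected blocks, the optimizer and the loss function are not reported in the text and are assumptions made solely for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3  # normal mucosa, red spots (P1), angiectasia/varices (P2)

# Xception backbone pre-trained on ImageNet, without its original classifier head
base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(299, 299, 3),  # assumed input size
)

model = models.Sequential([
    base,
    layers.Dense(512, activation="relu"),   # fully connected block 1 (width assumed)
    layers.Dropout(0.3),
    layers.Dense(128, activation="relu"),   # fully connected block 2 (width assumed)
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # one output per category
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # learning rate as reported; optimizer assumed
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Training call (sketch): 100 epochs and batch size 32, as reported in the text;
# batching would be set when building the tf.data datasets.
# model.fit(train_dataset, validation_data=validation_dataset, epochs=100)
```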
The primary outcome measures included sensitivity, specificity, precision, and accuracy in differentiating between images containing normal mucosa, red spots and P2 lesions. In addition, we used receiver operating characteristic (ROC) curve analysis and the area under the ROC curve (AUROC) to measure the performance of our model in distinguishing between the 3 categories. The network’s classification was compared with the diagnosis provided by the specialists’ analysis, the latter being considered the gold standard. In addition to its diagnostic performance, the computational speed of the network was determined using the validation image dataset, by calculating the time required for the CNN to provide output for all images. For each image, the CNN calculated the probability for each of the 3 categories (normal mucosa, red spots and P2 lesions); a higher probability value translated into greater confidence in the CNN prediction. The category with the highest probability score was output as the CNN’s predicted classification (Fig. 2). Sensitivities, specificities, and precisions are presented as mean ± standard deviation. ROC curves were represented graphically and AUROCs calculated as means with 95% confidence intervals (CI), assuming a normal distribution of these variables. Statistical analysis was performed using scikit-learn v0.22.2 [21].
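For illustration only, the per-class sensitivity, specificity, precision and one-vs-rest AUROC described above could be derived from the network’s softmax outputs as sketched below; the arrays here are random placeholders standing in for the expert labels and the CNN probabilities, not the study data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Placeholder ground truth and softmax probabilities for the 2318 validation frames
# (classes: 0 = normal mucosa, 1 = red spots/P1, 2 = angiectasia and varices/P2).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=2318)
y_prob = rng.dirichlet(np.ones(3), size=2318)
y_pred = y_prob.argmax(axis=1)  # the highest-probability class is the CNN's output

# Per-class sensitivity, specificity and precision from the confusion matrix
cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2])
for k, name in enumerate(["normal mucosa", "P1 (red spots)", "P2 (angiectasia/varices)"]):
    tp = cm[k, k]
    fn = cm[k, :].sum() - tp
    fp = cm[:, k].sum() - tp
    tn = cm.sum() - tp - fn - fp
    print(f"{name}: sensitivity={tp/(tp+fn):.3f}, "
          f"specificity={tn/(tn+fp):.3f}, precision={tp/(tp+fp):.3f}")

# One-vs-rest AUROC per class, mirroring the ROC analysis described above
for k, name in enumerate(["normal", "P1", "P2"]):
    auc = roc_auc_score((y_true == k).astype(int), y_prob[:, k])
    print(f"AUROC {name}: {auc:.3f}")
```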
Figure 2 Output obtained from the application of the convolutional neural network. The bars represent the probability estimated by the network. The finding with the highest probability was output as the predicted classification. A blue bar represents a correct prediction; red bars represent an incorrect prediction. N, normal mucosa; P1, red spots; P2, angiectasia and varices
A total of 1229 patients underwent CE and were enrolled in this study, from which 11,588 frames were extracted. The validation dataset comprised 2318 images (20% of the extracted frames). It was composed of 206 (8.9%) images with red spots, 207 (8.9%) images with P2 findings (angiectasia and varices), and 1905 (82.2%) images with normal mucosa. The CNN evaluated each image and predicted a classification (normal mucosa, red spots or P2 lesions) that was compared with the classification provided by the specialists. The network demonstrated its learning ability, with increasing accuracy as data were repeatedly input into the multi-layer CNN (Fig. 3).
Figure 3 Evolution of the accuracy of the convolutional neural network during training and validation phases, as the training and validation datasets were repeatedly input into the neural network
The distribution of results is displayed in Table 1. Overall, the mean sensitivity and specificity of the CNN were 91.8±2.2% and 95.9±1.2%, respectively. The network provided accurate predictions in 94.4±3.7% of cases. The positive predictive value was 91.3±3.7% and the negative predictive value was 95.7±2.1%.
Table 1 Confusion matrix of the automatic detection vs. expert classification
We aimed to evaluate the CNN’s performance in the detection and distinction of enteric vascular lesions. The trained CNN had a sensitivity of 91.7%, specificity of 95.3%, and an accuracy of 94.1% for the detection of P1 lesions (red spots) (Table 2). The AUROC was 0.97. The network detected varices and angiectasia (P2 lesions) with a sensitivity, specificity and accuracy of 94.1%, 95.1% and 94.8%, respectively, and had an AUROC of 0.98. Classification as normal mucosa occurred with a sensitivity and specificity of 89.8% and 97.2% (Table 2), respectively, and an AUROC of 0.98. The ROC curves and respective AUROCs for detection of red spots, P2 lesions, and normal mucosa are represented in Fig. 4.
Table 2 CNN performance for detection and differentiation of red spots (P1) and P2 lesions
Figure 4 ROC analyses of the network’s performance in the detection of normal mucosa, P1 vascular lesions (red spots) and P2 vascular lesions (angiectasia and varices). ROC, receiver operating characteristic; AUC, area under the curve
The time required to produce outputs for the images in the validation dataset was calculated. The CNN completed the reading of the validation image set in 16 sec, which translates into an approximate reading rate of 145 frames/sec. At this rate, revision of a full-length CE video containing an estimated 50,000 frames would require approximately 6 min.
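The reported figures follow from simple arithmetic; the short sketch below merely restates the numbers given above (2318 validation frames read in 16 sec; an estimated 50,000 frames per full-length video).

```python
# Back-of-the-envelope check of the reading-rate figures reported above
n_validation_frames = 2318
elapsed_seconds = 16
rate = n_validation_frames / elapsed_seconds         # ~145 frames/sec
full_video_frames = 50_000
estimated_minutes = full_video_frames / rate / 60    # ~5.7 min
print(f"{rate:.0f} frames/sec, ~{estimated_minutes:.1f} min per full-length CE video")
```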
In this study, we developed an accurate deep learning tool for detection and differentiation of enteric lesions with distinct hemorrhagic potential. To the best of our knowledge, this is the first study to evaluate the performance of a CNN for detection of a wide range of vascular lesions with different bleeding potential. Our network reached high levels of performance in the detection of findings with uncertain and high bleeding potential. We believe that these results are promising for the development and introduction into clinical practice of tools for the automatic detection and classification of small intestine vascular lesions.
CE has revolutionized the etiologic investigation of the patient presenting with non-emergent OGIB, either overt or occult. The diagnostic yield of CE for OGIB is superior to that of most other noninvasive diagnostic methods and comparable to that of the much more invasive device-assisted double-balloon enteroscopy (DBE) [2]. The cost-effectiveness of CE in the setting of OGIB has been demonstrated [22,23]. Moreover, application of CE may synergistically enhance the diagnostic yield of deep enteroscopy techniques (e.g., DBE), thus selecting patients who may benefit from more invasive techniques with therapeutic potential [2,24].
The accurate and timely detection of small intestine lesions in CE is essential. Saurin et al have created a useful and pragmatic classification of the bleeding potential of lesions detected on CE [7]. Vascular lesions, including angiectasias and varices, present a high bleeding risk (P2), whereas red spots have uncertain/intermediate clinical significance [7]. Nevertheless, red spots are frequently found in patients with occult OGIB without other high-risk lesions and may be a sign of systemic disease [12]. However, the pleomorphism of these lesions, as well as their frequently small size, may increase the risk of missed lesions. To the best of our knowledge, this is the first CNN to specifically identify and distinguish red spots from other vascular lesions and normal mucosa.
Reading CE exams is a time-consuming task, and significant lesions may be restricted to a small number of frames, thus increasing the risk of overlooking significant lesions [25,26]. The application of AI tools to CE may allow these drawbacks to be overcome. Assistance of deep learning methods such as CNNs may improve detection of these lesions while shortening the time required for reading the images [27,28]. Recent application of these technologies has confirmed the high diagnostic performance of CNN-based models for small-bowel CE, including for the detection of ulcers and erosions, protruding lesions, celiac disease, luminal blood content, and angiectasia [25,29-33].
This is the first study reporting the performance of a CNN-based model for the detection of lesions with distinct hemorrhagic potential on CE images. To date, the existing studies regarding automatic detection of vascular lesions have focused on the detection of angiectasia. Noya et al reported the development of a CNN for detection of angiectasia with a sensitivity of 90%, a specificity of 96.8% and an AUROC of 0.93 [34]. Leenhardt et al developed a CNN capable of detecting angiectasia with a sensitivity of 100% and specificity of 96% [31]. However, that study was performed with frames extracted from the French national CE image database, which included only clean images. Therefore, their results may not be generalizable because of variations in bowel preparation and image artifacts. In 2019, Tsuboi et al developed a CNN-based system for automatic detection of angiectasia [35]. Their system proved to have a high diagnostic yield for the detection of angiectasia, with a sensitivity of 98.8% and specificity of 98.4%. However, this study failed to address the detection of lesions according to their hemorrhagic potential, which is clinically important, as lesions with different bleeding potential have distinct rebleeding rates [36]. More recently, a CNN model developed by Otani et al reported an AUROC of 0.95 for the detection of vascular lesions (angiectasias and venous malformations) [37].
A significant number of patients presenting with OGIB are concomitantly treated with antiplatelet or anticoagulant drugs for comorbid cardiovascular disease [38]. Special attention is required for these patients, as these therapies are associated with an increased risk of mucosal injury or bleeding from preexistent vascular lesions [38,39]. The detection of these lesions in CE is therefore important for the management of OGIB in this subset of patients. In this study, we developed a CNN-based model with a high diagnostic yield for the detection of red spots (P1) and P2 lesions (angiectasias and varices). Indeed, this is the first published CNN to differentiate vascular lesions rather than simply identify them, as reported for previously published networks. Furthermore, our CNN is able to stratify hemorrhagic risk by accurately classifying frames according to Saurin’s classification. The development of sensitive AI tools for the automatic detection of vascular lesions in CE images may improve the diagnostic yield of CE for these lesions, thus decreasing the number of negative CE exams in the setting of OGIB. Moreover, the development of algorithms capable of predicting the hemorrhagic potential of vascular lesions may help stratify patients who require further evaluation. This might translate into future gains in the adequate management of healthcare resources.
This work has several highlights. First, to our knowledge, this is the first study to evaluate the performance of a CNN for the detection of several types of vascular lesions. Moreover, we tested the ability of our model to detect and discriminate lesions with different bleeding potential. Second, our algorithm demonstrated high levels of performance in the detection and differentiation of such lesions. The sensitivity, specificity and AUROC for the detection of red spots and P2 lesions (angiectasias/varices) were, respectively, 91.7%, 95.3% and 0.97, and 94.1%, 95.1% and 0.98. Third, the architecture of our network demonstrated high image processing performance, with an approximate reading rate of 145 frames/sec. This performance is superior to that reported in most studies of automatic lesion detection, including vascular abnormalities [25,29-31,35]. We believe that this performance may, in the near future, translate into shorter reading times, thus overcoming one of the main drawbacks of CE. Further prospective multicentric studies are required to assess whether AI-assisted CE image reading translates into enhanced time efficiency compared with conventional reading.
This study had several limitations. First, our study focused on patients evaluated in a single center and was conducted in a retrospective manner. Thus, these promising results must be confirmed by robust prospective multicenter studies before application to clinical practice. Second, this model was developed using the PillCam SB3 system; therefore, our results may not be generalizable to other CE systems. Third, our system was developed using still frames, and assessing the performance of this technology on full-length videos is required before the clinical implementation of this model. Fourth, although our network demonstrated high processing speed, we did not assess whether CNN-assisted image review reduces the reading time compared with conventional reading. Finally, the number of included patients was relatively small and the number of images in the validation dataset was limited, restricting the interpretation of our results. Therefore, prospective investigation in larger patient sets is required to confirm these results before the introduction of these tools into clinical practice.
The implementation of AI tools in routine clinical practice is expected to grow in the near future. CE is a fertile ground for the development of deep learning-based tools for enhanced image processing. These tools may help reduce CE reading times, thus overcoming one of its main downsides, as well as improving the diagnostic accuracy of CE for multiple lesions.
In conclusion, we developed a CNN-based model capable of detecting and discriminating vascular lesions with distinct hemorrhagic potential. Our model achieved high levels of accuracy and excellent image processing performance. We believe that our results may help lay the foundations for the widespread application of AI technology in the field of CE.
What is already known:
Capsule endoscopy (CE) is the first-line approach for patients presenting with obscure gastrointestinal bleeding, as the bleeding source is located in the small bowel in most cases
Reading CE images is a monotonous, time-consuming and error-prone task
Small bowel vascular lesions are common causes of gastrointestinal bleeding and their identification in CE is often difficult
Artificial intelligence (AI) has shown promising diagnostic capacity across several medical fields
What the new findings are:
An AI tool based on a convolutional neural network detected and differentiated vascular lesions with distinct hemorrhagic potential with high sensitivity, specificity and accuracy
The AI algorithm demonstrated high image processing performance
Application of AI technologies to CE may improve its diagnostic performance as well as its time efficiency
1. Triester SL, Leighton JA, Leontiadis GI, et al. A meta-analysis of the yield of capsule endoscopy compared to other diagnostic modalities in patients with obscure gastrointestinal bleeding. Am J Gastroenterol 2005;100:2407-2418.
2. Teshima CW, Kuipers EJ, van Zanten SV, Mensink PB. Double balloon enteroscopy and capsule endoscopy for obscure gastrointestinal bleeding: an updated meta-analysis. J Gastroenterol Hepatol 2011;26:796-801.
3. Le Berre C, Trang-Poisson C, Bourreille A. Small bowel capsule endoscopy and treat-to-target in Crohn's disease: a systematic review. World J Gastroenterol 2019;25:4534-4554.
4. Cheung DY, Lee IS, Chang DK, et al; Korean Gut Images Study Group. Capsule endoscopy in small bowel tumors: a multicenter Korean study. J Gastroenterol Hepatol 2010;25:1079-1086.
5. Nennstiel S, Machanek A, von Delius S, et al. Predictors and characteristics of angioectasias in patients with obscure gastrointestinal bleeding identified by video capsule endoscopy. United European Gastroenterol J 2017;5:1129-1135.
6. Liao Z, Gao R, Xu C, Li ZS. Indications and detection, completion, and retention rates of small-bowel capsule endoscopy: a systematic review. Gastrointest Endosc 2010;71:280-286.
7. Saurin JC, Delvaux M, Gaudin JL, et al. Diagnostic value of endoscopic capsule in patients with obscure digestive bleeding: blinded comparison with video push-enteroscopy. Endoscopy 2003;35:576-584.
8. Igawa A, Oka S, Tanaka S, et al. Major predictors and management of small-bowel angioectasia. BMC Gastroenterol 2015;15:108.
9. Grooteman KV, Holleran G, Matheeuwsen M, van Geenen EJM, McNamara D, Drenth JPH. A risk assessment of factors for the presence of angiodysplasias during endoscopy and factors contributing to symptomatic bleeding and rebleeds. Dig Dis Sci 2019;64:2923-2932.
10. De Palma GD, Rega M, Masone S, et al. Mucosal abnormalities of the small bowel in patients with cirrhosis and portal hypertension: a capsule endoscopy study. Gastrointest Endosc 2005;62:529-534.
11. Goenka MK, Shah BB, Rai VK, Jajodia S, Goenka U. Mucosal changes in the small intestines in portal hypertension: first study using the Pillcam SB3 capsule endoscopy system. Clin Endosc 2018;51:563-569.
12. Ichikawa R, Hosoe N, Imaeda H, et al. Evaluation of small-intestinal abnormalities in adult patients with Henoch-Schönlein purpura using video capsule. Endoscopy 2011;43 Suppl 2 UCTN:E162-E163.
13. Dabos KJ, Yung DE, Bartzis L, Hayes PC, Plevris JN, Koulaouzidis A. Small bowel capsule endoscopy and portal hypertensive enteropathy in cirrhotic patients: results from a tertiary referral centre. Ann Hepatol 2016;15:394-401.
14. Wang A, Banerjee S, Barth BA, et al; ASGE Technology Committee. Wireless capsule endoscopy. Gastrointest Endosc 2013;78:805-815.
15. Yasaka K, Akai H, Abe O, Kiryu S. Deep learning with convolutional neural network for differentiation of liver masses at dynamic contrast-enhanced CT: a preliminary study. Radiology 2018;286:887-896.
16. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017;542:115-118.
17. Gargeya R, Leng T. Automated identification of diabetic retinopathy using deep learning. Ophthalmology 2017;124:962-969.
18. Soffer S, Klang E, Shimon O, et al. Deep learning for wireless capsule endoscopy:a systematic review and meta-analysis. Gastrointest Endosc 2020;92:831-839.
19. Rondonotti E, Spada C, Adler S, et al. Small-bowel capsule endoscopy and device-assisted enteroscopy for diagnosis and treatment of small-bowel disorders: European Society of Gastrointestinal Endoscopy (ESGE) Technical Review. Endoscopy 2018;50:423-446.
20. Leenhardt R, Li C, Koulaouzidis A, et al. Nomenclature and semantic description of vascular lesions in small bowel capsule endoscopy: an international Delphi consensus statement. Endosc Int Open 2019;7:E372-E379.
21. Pedregosa F, Varoquaux G, Gramfort A, et al. Scikit-learn: machine learning in Python. J Mach Learn Res 2011;12:2825-2830.
22. Marmo R, Rotondano G, Rondonotti E, et al; Club Italiano Capsula Endoscopica - CICE. Capsule enteroscopy vs. other diagnostic procedures in diagnosing obscure gastrointestinal bleeding: a cost-effectiveness study. Eur J Gastroenterol Hepatol 2007;19:535-542.
23. Otani K, Watanabe T, Shimada S, et al. Clinical utility of capsule endoscopy and double-balloon enteroscopy in the management of obscure gastrointestinal bleeding. Digestion 2018;97:52-58.
24. Pennazio M, Spada C, Eliakim R, et al. Small-bowel capsule endoscopy and device-assisted enteroscopy for diagnosis and treatment of small-bowel disorders: European Society of Gastrointestinal Endoscopy (ESGE) Clinical Guideline. Endoscopy 2015;47:352-376.
25. Saito H, Aoki T, Aoyama K, et al. Automatic detection and classification of protruding lesions in wireless capsule endoscopy images based on a deep convolutional neural network. Gastrointest Endosc 2020;92:144-151.
26. Koulaouzidis A, Iakovidis DK, Karargyris A, Plevris JN. Optimizing lesion detection in small-bowel capsule endoscopy:from present problems to future solutions. Expert Rev Gastroenterol Hepatol 2015;9:217-235.
27. Aoki T, Yamada A, Aoyama K, et al. Clinical usefulness of a deep learning-based system as the first screening on small-bowel capsule endoscopy reading. Dig Endosc 2020;32:585-591.
28. Ding Z, Shi H, Zhang H, et al. Gastroenterologist-level identification of small-bowel diseases and normal variants by capsule endoscopy using a deep-learning model. Gastroenterology 2019;157:1044-1054.
29. Aoki T, Yamada A, Aoyama K, et al. Automatic detection of erosions and ulcerations in wireless capsule endoscopy images based on a deep convolutional neural network. Gastrointest Endosc 2019;89:357-363.
30. Aoki T, Yamada A, Kato Y, et al. Automatic detection of blood content in capsule endoscopy images based on a deep convolutional neural network. J Gastroenterol Hepatol 2020;35:1196-1200.
31. Leenhardt R, Vasseur P, Li C, et al; CAD-CAP Database Working Group. A neural network algorithm for detection of GI angiectasia during small-bowel capsule endoscopy. Gastrointest Endosc 2019;89:189-194.
32. Klang E, Barash Y, Margalit RY, et al. Deep learning algorithms for automated detection of Crohn's disease ulcers by video capsule endoscopy. Gastrointest Endosc 2020;91:606-613.
33. Wang X, Qian H, Ciaccio EJ, et al. Celiac disease diagnosis from videocapsule endoscopy images with residual learning and deep feature extraction. Comput Methods Programs Biomed 2020;187:105236.
34. Noya F, Alvarez-Gonzalez MA, Benitez R. Automated angiodysplasia detection from wireless capsule endoscopy. Annu Int Conf IEEE Eng Med Biol Soc 2017;2017:3158-3161.
35. Tsuboi A, Oka S, Aoyama K, et al. Artificial intelligence using a convolutional neural network for automatic detection of small-bowel angioectasia in capsule endoscopy images. Dig Endosc 2020;32:382-390.
36. Silva JC, Pinho R, Ponte A, et al. Predicting the risk of rebleeding after capsule endoscopy in obscure gastrointestinal bleeding - External validation of the RHEMITT Score. Dig Dis 2020 Jul 8 [Online ahead of print]. doi:10.1159/000509986
37. Otani K, Nakada A, Kurose Y, et al. Automatic detection of different types of small-bowel lesions on capsule endoscopy images using a newly developed deep convolutional neural network. Endoscopy 2020;52:786-791.
38. Boal Carvalho P, Rosa B, Moreira MJ, Cotter J. New evidence on the impact of antithrombotics in patients submitted to small bowel capsule endoscopy for the evaluation of obscure gastrointestinal bleeding. Gastroenterol Res Pract 2014;2014:709217.
39. Cañas-Ventura A, Márquez L, Bessa X, et al. Outcome in obscure gastrointestinal bleeding after capsule endoscopy. World J Gastrointest Endosc 2013;5:551-558.