Method: Identification of dental implants using deep learning

author: Toshihito Takahashi, Kazunori Nozaki, Tomoya Gonda, Tomoaki Mameno, Masahiro Wada, Kazunori Ikebe | publisher: drg. Andreas Tjandra, Sp. Perio, FISID

Methods

Data collection

Panoramic radiographs were obtained from patients who received implant treatment in the Department of Prosthodontics, Gerodontology and Oral Rehabilitation at Osaka University Dental Hospital after January 2000. Panoramic radiographs with unknown implants were excluded, and a total of 1282 images were used to annotate implants. All images were JPEG files resized to 416 × 416 pixels. The images were randomly divided into two datasets: one for training (1026 images, 80%) and one for testing (256 images, 20%). The training dataset was used to build the model, while the testing dataset, which was independent of the training dataset, was used to assess the performance of the model produced from the training dataset.
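As a rough illustration of this preprocessing, the sketch below resizes a directory of JPEG radiographs to 416 × 416 pixels and draws a random 80/20 split. The directory names, the Pillow dependency, and the random seed are assumptions for the example, not details taken from the paper.

import os
import random
from PIL import Image  # Pillow, used here only for resizing

SRC_DIR = "panoramic_jpegs"  # hypothetical input directory
DST_DIR = "resized_416"      # hypothetical output directory
os.makedirs(DST_DIR, exist_ok=True)

# Resize every JPEG to the 416 x 416 input size used for YOLO v3.
images = [f for f in os.listdir(SRC_DIR) if f.lower().endswith(".jpg")]
for name in images:
    img = Image.open(os.path.join(SRC_DIR, name)).convert("RGB")
    img.resize((416, 416)).save(os.path.join(DST_DIR, name), "JPEG")

# Randomly divide the images 80/20 into training and testing sets.
random.seed(42)  # illustrative seed, not reported in the paper
random.shuffle(images)
split = int(len(images) * 0.8)
train_files, test_files = images[:split], images[split:]
print(len(train_files), "training images,", len(test_files), "testing images")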

Annotation of implants

Six implant systems manufactured by three companies were annotated manually in all panoramic radiographs using an annotation tool (labelImg). They comprised four systems with a straight apex: MK III and MK III Groovy (MK III/IIIG) by Nobel Biocare (Zürich, Switzerland), bone level implant (BL) by Straumann (Basel, Switzerland), and Genesio Plus ST (Genesio) by GC (Tokyo, Japan); and two systems with a tapered apex: MK IV and Speedy Groovy (MK IV/SG) by Nobel Biocare.
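labelImg typically stores each annotation as a Pascal VOC XML file. The sketch below, which assumes that XML layout and uses the grouped labels above as illustrative class names, converts one annotation file to the normalised text format commonly used for YOLO training; it is not the authors' actual conversion script.

import xml.etree.ElementTree as ET

# Grouped class labels taken from the text; the exact strings used
# during annotation are an assumption.
CLASSES = ["MK III/IIIG", "BL", "Genesio", "MK IV/SG"]

def voc_to_yolo(xml_path, txt_path):
    """Convert one labelImg Pascal VOC XML file to the YOLO text format
    (class_id x_center y_center width height, all normalised to 0-1)."""
    root = ET.parse(xml_path).getroot()
    w = float(root.find("size/width").text)
    h = float(root.find("size/height").text)
    lines = []
    for obj in root.findall("object"):
        cls = obj.find("name").text
        if cls not in CLASSES:
            continue  # skip any label outside the annotated systems
        box = obj.find("bndbox")
        xmin = float(box.find("xmin").text)
        ymin = float(box.find("ymin").text)
        xmax = float(box.find("xmax").text)
        ymax = float(box.find("ymax").text)
        lines.append("{} {:.6f} {:.6f} {:.6f} {:.6f}".format(
            CLASSES.index(cls),
            (xmin + xmax) / 2.0 / w, (ymin + ymax) / 2.0 / h,
            (xmax - xmin) / w, (ymax - ymin) / h))
    with open(txt_path, "w") as f:
        f.write("\n".join(lines))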

Deep learning algorithm

To implement the object detection algorithm, Python 3.5.2 and the Keras library 2.2.4 were used with TensorFlow 1.12.0 as the backend. The object detection application, You Only Look Once (YOLO) v3 [12], with fine-tuning was used, and the dataset was trained to detect implants. The training dataset was separated into 16 batches for every epoch, and 1000 epochs were run with a learning rate of 0.01.
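Reproducing the full YOLO v3 network definition is beyond the scope of this section, so the sketch below only shows how the reported settings (fine-tuning by freezing layers, a learning rate of 0.01, 16 batches per epoch) map onto Keras 2.2.4 calls. It uses a tiny stand-in classifier and dummy arrays, not the study's actual detector or data.

import numpy as np
from keras.models import Model
from keras.layers import Input, Conv2D, GlobalAveragePooling2D, Dense
from keras.optimizers import Adam

# Tiny stand-in network: the study fine-tuned YOLO v3, whose Keras
# definition is too long to reproduce here. This model exists only to
# show where the reported hyperparameters plug into Keras 2.2.4.
inputs = Input(shape=(416, 416, 3))
x = Conv2D(8, 3, strides=2, activation="relu")(inputs)
x = GlobalAveragePooling2D()(x)
outputs = Dense(4, activation="softmax")(x)  # grouped implant labels (illustrative)
model = Model(inputs, outputs)

# Fine-tuning: freeze the earlier layers and train only the head.
for layer in model.layers[:-1]:
    layer.trainable = False

model.compile(optimizer=Adam(lr=0.01),  # learning rate reported in the text
              loss="categorical_crossentropy")

# Dummy arrays standing in for the 1026 training radiographs.
x_train = np.random.rand(32, 416, 416, 3).astype("float32")
y_train = np.eye(4)[np.random.randint(0, 4, 32)].astype("float32")

# The text reports 16 batches per epoch and 1000 epochs; the epoch
# count is reduced here so the sketch finishes quickly.
model.fit(x_train, y_train, batch_size=len(x_train) // 16, epochs=10)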

Assessment of the learning result

The total number of implants of each system in all panoramic radiographs, the number identified correctly (true positives; TP), and the number identified as another type of implant system (false positives; FP) were recorded. The average precision (AP) of each implant system, the mean average precision (mAP) at an intersection over union (IoU) of more than 0.5, and the mean IoU (mIoU) were calculated. IoU was calculated as follows (Fig. 1):

IoU = area of overlap (region common to the ground-truth and predicted bounding boxes) / area of union (region covered by either the ground-truth or the predicted bounding box)
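This definition translates directly into a few lines of Python; the corner-coordinate box format (x_min, y_min, x_max, y_max) assumed below is an illustrative choice, not a detail from the paper.

def iou(box_a, box_b):
    """Intersection over union of two boxes given as (xmin, ymin, xmax, ymax)."""
    # Corners of the overlap rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: a predicted box that partially overlaps a ground-truth box.
print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # ~0.14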

AP values depend on the chosen IoU threshold (lower thresholds yield higher AP). In this study, the IoU threshold was set to 0.5, the value commonly used in other studies on object detection [13]. The mAP is calculated by averaging the AP over all classes; higher values indicate more accurate detection.
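For reference, the sketch below shows a standard way of counting TP and FP detections for one class at an IoU threshold of 0.5, reusing the iou function above. The paper does not describe its exact matching procedure, so the greedy matching of confidence-sorted predictions used here is an assumption.

def count_tp_fp(predictions, ground_truths, threshold=0.5):
    """Count true and false positives for one class on one image.

    Both arguments are lists of boxes in the same (xmin, ymin, xmax, ymax)
    format used by iou above; predictions are assumed to be sorted by
    descending confidence."""
    matched = set()
    tp, fp = 0, 0
    for pred in predictions:
        # Find the best-matching ground-truth box not yet used.
        best_iou, best_idx = 0.0, None
        for i, gt in enumerate(ground_truths):
            if i in matched:
                continue
            score = iou(pred, gt)
            if score > best_iou:
                best_iou, best_idx = score, i
        if best_iou >= threshold:
            tp += 1
            matched.add(best_idx)
        else:
            fp += 1
    return tp, fp

# Example: two detections against one ground-truth implant.
print(count_tp_fp([(12, 10, 52, 48), (200, 200, 240, 240)],
                  [(10, 10, 50, 50)]))  # (1, 1)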

 
