Showing 2 results for Tarokh

Fatemeh Bagheri, Mohammad Jafar Tarokh, Majid Ziaratban,
Volume 8, Issue 2 (7-2020)
Abstract

Background and objective: Automatic semantic segmentation of skin lesions is one of the most important medical requirements in the diagnosis and treatment of skin cancer, and researchers continually seek more accurate lesion segmentation systems. Developing an accurate lesion segmentation model supports timely diagnosis and appropriate treatment.
Material and Methods: In this study, a two-stage deep learning-based method is presented for accurate segmentation of skin lesions. In the first (detection) stage, the approximate location of the lesion in a dermoscopy image is estimated using a deep YOLOv2 network. A sub-image is cropped from the input dermoscopy image by adding a margin around the estimated lesion bounding box and is then resized to a predetermined normal size. In the second (segmentation) stage, the DeepLab convolutional neural network extracts the exact lesion area from the normalized image.
Results: A standard and well-known dataset of dermoscopic images, the ISBI 2017 dataset, is used to evaluate the proposed method and compare it with state-of-the-art methods. Our method achieved a Jaccard index of 79.05%, which is 2.55% higher than that of the winner of the ISIC 2017 challenge.
Conclusion: Experiments demonstrated that the proposed two-stage CNN-based lesion segmentation method outperformed other state-of-the-art methods on the well-known ISBI 2017 dataset. High accuracy in the detection stage is of the utmost importance. Using a YOLOv2-based detection stage before the segmentation stage, a DeepLab v3+ structure with an appropriate backbone network, data augmentation, and additional modes of the input images are the main reasons for the significant improvement.
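The crop-with-margin step between the two stages can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the 10% margin ratio, the output size, and the nearest-neighbour resize (used here to avoid extra dependencies) are all assumptions; the bounding-box format (top-left corner plus width/height) follows the usual YOLO-style detector output.

```python
import numpy as np

def crop_lesion_with_margin(image, bbox, margin_ratio=0.1, out_size=(513, 513)):
    """Crop a sub-image around a detected lesion bounding box with a margin,
    then resize it to a fixed network input size.

    Hypothetical helper: bbox is (x, y, width, height) in pixels; margin_ratio
    adds a fraction of the box size on each side, clipped to the image bounds.
    """
    h, w = image.shape[:2]
    x, y, bw, bh = bbox
    mx, my = int(bw * margin_ratio), int(bh * margin_ratio)
    x0, y0 = max(0, x - mx), max(0, y - my)
    x1, y1 = min(w, x + bw + mx), min(h, y + bh + my)
    crop = image[y0:y1, x0:x1]

    # Nearest-neighbour resize via integer index maps (stand-in for a proper
    # interpolator such as cv2.resize or PIL's Image.resize).
    oh, ow = out_size
    rows = np.arange(oh) * crop.shape[0] // oh
    cols = np.arange(ow) * crop.shape[1] // ow
    return crop[rows[:, None], cols]
```

Normalizing every crop to one fixed size is what lets the second-stage segmentation network operate on lesions of very different scales.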

Parisa Karimi Darabi, Mohammad Jafar Tarokh,
Volume 8, Issue 3 (10-2020)
Abstract

Background and Objectives: Currently, diabetes is one of the leading causes of death in the world. Due to several factors, the diagnosis of this disease is complex and prone to human error. This study aimed to analyze the risk of having diabetes based on laboratory information, lifestyle, and family history with the help of machine learning algorithms. Once the model is trained properly, people can examine their own risk of having diabetes.
Material and Methods: To classify patients, eight different machine learning algorithms (Logistic Regression, Nearest Neighbor, Decision Tree, Random Forest, Support Vector Machine, Naive Bayes, Neural Network, and Gradient Boosting) were implemented in Python and evaluated by accuracy, sensitivity, specificity, and ROC curve parameters.
Results: The model based on the gradient boosting algorithm showed the best performance, with a prediction accuracy of 95.50%.
Conclusion: In the future, this model can be used for the diagnosis of diabetes. This study provides a basis for further research and for developing models based on other machine learning algorithms.
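The eight-classifier comparison described above can be sketched in Python with scikit-learn. This is a minimal sketch, not the study's code: the patient dataset is not public, so a synthetic dataset stands in, and the train/test split, model hyperparameters, and metric choices are assumptions; only the list of eight algorithms and the four evaluation criteria come from the abstract.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the (non-public) patient dataset:
# label 1 = diabetic, label 0 = non-diabetic.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# The eight algorithms named in the abstract.
models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Nearest Neighbor": KNeighborsClassifier(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Support Vector Machine": SVC(probability=True, random_state=0),
    "Naive Bayes": GaussianNB(),
    "Neural Network": MLPClassifier(max_iter=1000, random_state=0),
    "Gradient Boosting": GradientBoostingClassifier(random_state=0),
}

results = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    proba = model.predict_proba(X_te)[:, 1]
    results[name] = {
        "accuracy": accuracy_score(y_te, pred),
        "sensitivity": recall_score(y_te, pred),               # true-positive rate
        "specificity": recall_score(y_te, pred, pos_label=0),  # true-negative rate
        "roc_auc": roc_auc_score(y_te, proba),
    }

for name, m in results.items():
    print(f"{name:22s} acc={m['accuracy']:.3f} auc={m['roc_auc']:.3f}")
```

Sensitivity and specificity are both recall values, computed with respect to the positive and negative class respectively, which is why a single `recall_score` call with a `pos_label` switch covers both.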



© 2021 CC BY-NC 4.0 | Jorjani Biomedicine Journal
