I am وهب خلف عربو


Lecturer


Education

University of Zakho, Faculty of Science, M.Sc. in Computer Science.

Faculty of Science (Department of Computer Science), University of Zakho

2015

University of Duhok, College of Education, B.Sc. in Computer Science.

College of Education, University of Duhok

2010

Academic Title

Lecturer

2023-05-23

Assistant Lecturer

2015-03-15

Scientific Research

Indonesian Journal of Computer Science (Volume 13, Issue 1)
Facial Beauty Prediction Based on Deep Learning: A Review

This review delves into Facial Beauty Prediction (FBP) using deep learning, specifically focusing on convolutional neural networks (CNNs). It synthesizes recent advancements in the field, examining diverse methodologies and key datasets like SCUT-FBP and SCUT-FBP5500. The review identifies trends in FBP research, including the evolution of deep learning models and the challenges of dataset biases and cultural specificity. The paper concludes by emphasizing the need for more inclusive and balanced datasets and suggests future research directions to enhance model fairness and address ethical implications.

 2024-02
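The CNN-based approaches surveyed above typically frame FBP as regressing a beauty score (for example, the mean rating in SCUT-FBP5500) from a face image on top of a pretrained backbone. A minimal Python/PyTorch sketch of that framing, assuming a hypothetical training step and not reproducing any specific reviewed model:

import torch
import torch.nn as nn
from torchvision import models

# Pretrained CNN backbone with a single-output regression head (assumed setup).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

def train_step(images, scores):
    # images: (batch, 3, 224, 224) face crops; scores: (batch,) mean beauty ratings.
    optimizer.zero_grad()
    preds = backbone(images).squeeze(1)
    loss = criterion(preds, scores)
    loss.backward()
    optimizer.step()
    return loss.item()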
Applied Soft Computing (Volume 140)
Letter: Application of Optimization Algorithms to Engineering Design Problems and Discrepancies in Mathematical Formulas

Engineering design optimization problems have attracted the attention of researchers since they appeared. Those who work on developing optimization algorithms, in particular, apply their developed algorithms to these problems in order to test their new algorithms' capabilities. Mathematical discrepancies emerge during the implementation of equations and constraints related to these engineering problems. This is due to errors made in writing or transmitting these equations from one paper to another. Maintaining these discrepancies will have a negative impact on the assessment and model performance verification of the newly developed algorithms, as well as on the decision-making process. To address this issue, this study investigates the mathematical discrepancies made by researchers in four well-known engineering design optimization problems (Welded Beam Design WBD, Speed Reducer Design SRD, Cantilever Beam Design CBD, and Multiple Disk Clutch Brake Design MDCBD). We have investigated some of the recently published papers in the literature, identified discrepancies in their mathematical formulas, and fixed them appropriately by referring to and comparing them with the original problem. Furthermore, all mathematical discrepancies, references, parameters, cost functions, constraints, and constraint errors are highlighted, arranged, and organized in tables. As a result, this work can help readers and researchers avoid confusion and wasted time when working on these engineering design optimization problems.

 2023-03
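For context, the Welded Beam Design (WBD) problem mentioned in the letter is usually benchmarked with the cost function below. The constants are the ones most commonly reported in the metaheuristics literature; since the letter's whole point is that published formulas disagree, this sketch (constraints omitted) should be checked against the original problem statement:

import numpy as np

def welded_beam_cost(x):
    # x = [h, l, t, b]: weld thickness, weld length, bar height, bar thickness.
    # Constants as commonly reported; verify against the original formulation.
    h, l, t, b = x
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

# Illustrative evaluation at a frequently cited near-optimal design.
print(welded_beam_cost(np.array([0.2057, 3.4705, 9.0366, 0.2057])))  # roughly 1.72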
Academic Journal of Nawroz University (Volume 11, Issue 4)
The Effect of Data Splitting Methods on Classification Performance in Wrapper-Based Cuttlefish Gene-Selection Model

Considering the high dimensionality of gene expression datasets, selecting informative genes is key to improving classification performance. The outcomes of data classification, on the other hand, are affected by the data splitting strategy used for the training-testing task. In light of the above facts, this paper aims to investigate the impact of three different data splitting methods on the performance of eight well-known classifiers when paired with the Cuttlefish algorithm (CFA) as a gene-selection method. The classification algorithms included in this study are K-Nearest Neighbors (KNN), Logistic Regression (LR), Gaussian Naive Bayes (GNB), Linear Support Vector Machine (SVM-L), Sigmoid Support Vector Machine (SVM-S), Random Forest (RF), Decision Tree (DT), and Linear Discriminant Analysis (LDA), whereas the tested data splitting methods are cross-validation (CV), train-test (TT), and train-validation-test (TVT). The efficacy of the investigated classifiers was evaluated on nine cancer gene expression datasets using various evaluation metrics, such as accuracy, F1-score, and the Friedman test. Experimental results revealed that LDA and SVM-L outperformed the other algorithms in general. In contrast, the RF and DT algorithms provided the worst results. On most of the datasets used, the results of all algorithms demonstrated that the train-test method of data splitting is more accurate than the train-validation-test method, while the cross-validation method was superior to both. Furthermore, RF and GNB were affected by the data splitting techniques less than the other classifiers, whereas LDA was the most affected.

 2022-11
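A minimal sketch of the three data splitting strategies compared in the paper (TT, TVT, and CV), here with an LDA classifier on synthetic data via scikit-learn; the Cuttlefish gene-selection wrapper itself is not included, and all sizes are placeholders:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split, cross_val_score

# Synthetic stand-in for a gene expression matrix (real data has far more features).
X, y = make_classification(n_samples=200, n_features=100, n_informative=10, random_state=0)

# Train-test (TT): one hold-out split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tt_acc = LinearDiscriminantAnalysis().fit(X_tr, y_tr).score(X_te, y_te)

# Train-validation-test (TVT): split the hold-out part again; the validation
# half would be used for tuning, and only the test half for the final score.
X_val, X_test, y_val, y_test = train_test_split(X_te, y_te, test_size=0.5, random_state=0)
tvt_acc = LinearDiscriminantAnalysis().fit(X_tr, y_tr).score(X_test, y_test)

# Cross-validation (CV): accuracy averaged over k folds.
cv_acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()

print(f"TT={tt_acc:.3f}  TVT={tvt_acc:.3f}  CV={cv_acc:.3f}")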
Computers, Materials & Continua (Volume 69, Issue 1)
Oversampling Method Based on Gaussian Distribution and K-Means Clustering

Learning from imbalanced data is one of the most challenging problems in binary classification, and this problem has gained more importance in recent years. When the class distribution is imbalanced, classical machine learning algorithms tend to move strongly towards the majority class and disregard the minority. Therefore, the accuracy may be high, but the model cannot recognize data instances in the minority class to classify them, leading to many misclassifications. Different methods have been proposed in the literature to handle the imbalance problem, but most are complicated and tend to simulate unnecessary noise. In this paper, we propose a simple oversampling method based on Multivariate Gaussian distribution and K-means clustering, called GK-Means. The new method aims to avoid generating noise and control imbalances between and within classes. Various experiments have been carried out with six classifiers and four oversampling methods. Experimental results on different imbalanced datasets show that the proposed GK-Means outperforms other oversampling methods and improves classification performance as measured by F1-score and Accuracy.

 2021-06
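A rough sketch of the general idea described in the abstract (cluster the minority class with K-means, fit a multivariate Gaussian to each cluster, and draw synthetic samples from it). This illustrates the concept under those assumptions only and is not the authors' exact GK-Means procedure:

import numpy as np
from sklearn.cluster import KMeans

def gaussian_kmeans_oversample(X_min, n_new, n_clusters=3, random_state=0):
    # X_min: minority-class samples; n_new: number of synthetic samples to create.
    rng = np.random.default_rng(random_state)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit_predict(X_min)

    # Spread the requested synthetic samples randomly over the clusters.
    per_cluster = np.bincount(rng.integers(0, n_clusters, size=n_new), minlength=n_clusters)
    synthetic = []
    for k, count in enumerate(per_cluster):
        cluster = X_min[labels == k]
        if count == 0 or len(cluster) < 2:
            continue
        mean = cluster.mean(axis=0)
        cov = np.cov(cluster, rowvar=False) + 1e-6 * np.eye(X_min.shape[1])  # regularize
        synthetic.append(rng.multivariate_normal(mean, cov, size=count))
    return np.vstack(synthetic) if synthetic else np.empty((0, X_min.shape[1]))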
Science Journal of University of Zakho (Volume 5, Issue 4)
Normalization Methods for Backpropagation: A Comparative Study

Neural Networks (NN) have been used by many researchers to solve problems in several domains, including classification and pattern recognition, and Backpropagation (BP) is one of the most well-known artificial neural network models. Constructing effective NN applications relies on characteristics such as the network topology, the learning parameters, and the normalization approaches for the input and output vectors. The input and output vectors for BP need to be normalized properly in order to achieve the best performance of the network. This paper applies several normalization methods to several UCI datasets and compares them to find the normalization method that works best with BP. Norm, Decimal scaling, Mean-Mad, Median-Mad, Min-Max, and Z-score normalization are considered in this study. The comparative study shows that the performance of Mean-Mad and Median-Mad is better than that of all the remaining methods. On the other hand, the worst results are produced by the Norm method.

 2017-12
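For reference, common column-wise definitions of the normalization methods compared in the study; these are assumed textbook forms, and the exact variants used in the paper (in particular the Norm and Mean-Mad/Median-Mad scalings) may differ:

import numpy as np

def min_max(x):
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))  # scale to [0, 1]

def z_score(x):
    return (x - x.mean(axis=0)) / x.std(axis=0)  # zero mean, unit variance

def decimal_scaling(x):
    return x / 10 ** np.ceil(np.log10(np.abs(x).max(axis=0)))  # shift the decimal point

def mean_mad(x):  # center by mean, scale by mean absolute deviation (assumed form)
    mean = x.mean(axis=0)
    return (x - mean) / np.abs(x - mean).mean(axis=0)

def median_mad(x):  # center by median, scale by median absolute deviation (assumed form)
    med = np.median(x, axis=0)
    return (x - med) / np.median(np.abs(x - med), axis=0)

def norm_scaling(x):  # 'Norm' method: divide each column by its Euclidean norm (assumed form)
    return x / np.linalg.norm(x, axis=0)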

Theses

2015-01-25
A Web-Based Application of Blood Bank Information System
