
Scientific Research

2024

Facial Beauty Prediction Based on Deep Learning: A Review

2024-02
Indonesian Journal of Computer Science (Issue: 1) (Volume: 13)
This review delves into Facial Beauty Prediction (FBP) using deep learning, specifically focusing on convolutional neural networks (CNNs). It synthesizes recent advancements in the field, examining diverse methodologies and key datasets like SCUT-FBP and SCUT-FBP5500. The review identifies trends in FBP research, including the evolution of deep learning models and the challenges of dataset biases and cultural specificity. The paper concludes by emphasizing the need for more inclusive and balanced datasets and suggests future research directions to enhance model fairness and address ethical implications.
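For orientation, a minimal sketch of how FBP is typically framed in the CNN-based work such a review covers: a convolutional backbone fine-tuned to regress a scalar attractiveness score. The ResNet-18 backbone, the 1-5 rating range, and the training details below are illustrative assumptions, not methods taken from the review itself.

```python
# Minimal sketch: facial beauty prediction framed as CNN regression.
# Backbone choice, score range, and loss are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

class FBPRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # In practice the backbone is usually initialized from ImageNet-pretrained weights
        self.backbone = models.resnet18(weights=None)
        # Replace the classification head with a single regression output
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, x):                     # x: (batch, 3, 224, 224) face crops
        return self.backbone(x).squeeze(1)    # predicted beauty scores

model = FBPRegressor()
criterion = nn.MSELoss()                      # regress against mean human ratings
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on dummy data
images = torch.randn(8, 3, 224, 224)
scores = torch.rand(8) * 4 + 1                # e.g. SCUT-FBP5500 uses 1-5 ratings
loss = criterion(model(images), scores)
loss.backward()
optimizer.step()
```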
2023

Letter: Application of Optimization Algorithms to Engineering Design Problems and Discrepancies in Mathematical Formulas

2023-03
Applied Soft Computing (Volume: 140)
Engineering design optimization problems have attracted the attention of researchers since they first appeared. Researchers who develop optimization algorithms, in particular, apply their algorithms to these problems in order to test their capabilities. Mathematical discrepancies emerge during the implementation of the equations and constraints of these engineering problems, due to errors made when writing or transmitting the equations from one paper to another. Propagating these discrepancies negatively affects the assessment and performance verification of newly developed algorithms, as well as the decision-making process. To address this issue, this study investigates the mathematical discrepancies made by researchers in four well-known engineering design optimization problems: Welded Beam Design (WBD), Speed Reducer Design (SRD), Cantilever Beam Design (CBD), and Multiple Disk Clutch Brake Design (MDCBD). We examined recently published papers in the literature, identified discrepancies in their mathematical formulas, and corrected them by comparing them against the original problem statements. Furthermore, all mathematical discrepancies, references, parameters, cost functions, constraints, and constraint errors are highlighted and organized in tables. As a result, this work can help readers and researchers avoid confusion and wasted time when working on these engineering design optimization problems.
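To illustrate the kind of formulation where such discrepancies arise, the sketch below shows the Welded Beam Design benchmark in its general shape: a scalar cost function plus inequality constraints handled with a penalty term. Only the widely cited cost function is written out; the constraint formulas are deliberately left as a placeholder to be copied from the original problem statement, which is the practice the letter recommends. The near-optimal design point used in the example is a commonly reported value and is illustrative only.

```python
# Hedged sketch of a constrained engineering-design benchmark (Welded Beam Design).
# The cost function is the commonly cited form; constraint formulas are left
# as a placeholder to be taken from the original problem statement.
import numpy as np

def wbd_cost(x):
    h, l, t, b = x                       # weld height, weld length, bar thickness, bar width
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

def wbd_constraints(x):
    # Placeholder: return g_i(x) <= 0 values (shear stress, bending stress,
    # buckling load, deflection, geometry) copied from the original formulation.
    return np.array([])

def penalized_cost(x, penalty=1e6):
    g = wbd_constraints(x)
    violation = np.sum(np.maximum(g, 0.0) ** 2)   # only violated constraints contribute
    return wbd_cost(x) + penalty * violation

# Example evaluation at a commonly reported near-optimal design (illustrative)
print(penalized_cost(np.array([0.2057, 3.4705, 9.0366, 0.2057])))
```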
2022

The Effect of Data Splitting Methods on Classification Performance in Wrapper-Based Cuttlefish Gene-Selection Model

2022-11
Academic Journal of Nawroz University (Issue: 4) (Volume: 11)
Considering the high dimensionality of gene expression datasets, selecting informative genes is key to improving classification performance. The outcomes of data classification, on the other hand, are affected by the data splitting strategy used for the training-testing task. In light of these facts, this paper investigates the impact of three different data splitting methods on the performance of eight well-known classifiers when paired with the Cuttlefish Algorithm (CFA) as a gene-selection method. The classification algorithms included in this study are K-Nearest Neighbors (KNN), Logistic Regression (LR), Gaussian Naive Bayes (GNB), Linear Support Vector Machine (SVM-L), Sigmoid Support Vector Machine (SVM-S), Random Forest (RF), Decision Tree (DT), and Linear Discriminant Analysis (LDA). The tested data splitting methods are cross-validation (CV), train-test (TT), and train-validation-test (TVT). The efficacy of the investigated classifiers was evaluated on nine cancer gene expression datasets using various evaluation metrics, such as accuracy, the F1-score, and the Friedman test. Experimental results revealed that LDA and SVM-L outperformed the other algorithms in general, whereas the RF and DT algorithms gave the worst results. On most of the datasets, the results of all algorithms showed that the train-test splitting method is more accurate than the train-validation-test method, while the cross-validation method was superior to both. Furthermore, RF and GNB were less affected by the data splitting techniques than the other classifiers, whereas LDA was the most affected.
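As a small illustration of two of the splitting strategies compared here, the sketch below evaluates a fixed classifier under a single train-test split and under 5-fold cross-validation. The toy dataset, the fold count, and the omission of the Cuttlefish gene-selection step are simplifying assumptions, not the paper's experimental setup.

```python
# Sketch: comparing train-test (TT) and cross-validation (CV) splitting
# around a fixed classifier. Dataset and settings are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import f1_score

X, y = load_breast_cancer(return_X_y=True)    # stand-in for a gene-expression matrix
clf = LinearDiscriminantAnalysis()            # LDA, one of the stronger classifiers reported

# Train-test (TT): a single stratified hold-out evaluation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf.fit(X_tr, y_tr)
tt_f1 = f1_score(y_te, clf.predict(X_te))

# Cross-validation (CV): every sample is tested exactly once across 5 folds
cv_f1 = cross_val_score(clf, X, y, cv=5, scoring="f1").mean()

print(f"TT F1: {tt_f1:.3f}  CV F1: {cv_f1:.3f}")
```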
2021

Oversampling Method Based on Gaussian Distribution and K-Means Clustering

2021-06
Computers, Materials & Continua (القضية : 1) (الحجم : 69)
Learning from imbalanced data is one of the most challenging problems in binary classification, and it has gained importance in recent years. When the class distribution is imbalanced, classical machine learning algorithms tend to favor the majority class strongly and disregard the minority. Therefore, the accuracy may be high, yet the model cannot recognize instances of the minority class, leading to many misclassifications. Different methods have been proposed in the literature to handle the imbalance problem, but most are complicated and tend to introduce unnecessary noise. In this paper, we propose a simple oversampling method based on the multivariate Gaussian distribution and K-means clustering, called GK-Means. The new method aims to avoid generating noise and to control imbalance between and within classes. Various experiments have been carried out with six classifiers and four oversampling methods. Experimental results on different imbalanced datasets show that the proposed GK-Means outperforms other oversampling methods and improves classification performance as measured by F1-score and accuracy.
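The sketch below illustrates the general idea described in the abstract, not the paper's exact GK-Means algorithm: cluster the minority class with K-means, fit a multivariate Gaussian to each cluster, and draw synthetic minority samples from those Gaussians. The cluster count, covariance regularization, and sample-allocation rule are assumptions made for illustration.

```python
# Sketch of Gaussian + K-means oversampling for the minority class.
# Cluster count, covariance regularization, and allocation rule are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def gaussian_kmeans_oversample(X_min, n_new, n_clusters=3, reg=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X_min)
    synthetic = []
    for c in range(n_clusters):
        members = X_min[km.labels_ == c]
        if len(members) < 2:
            continue
        # Allocate new samples proportionally to cluster size
        n_c = int(round(n_new * len(members) / len(X_min)))
        mean = members.mean(axis=0)
        cov = np.cov(members, rowvar=False) + reg * np.eye(X_min.shape[1])
        synthetic.append(rng.multivariate_normal(mean, cov, size=n_c))
    return np.vstack(synthetic)

# Illustrative use: generate 70 synthetic points for a toy 2-D minority class of 30
X_min = np.random.default_rng(1).normal(size=(30, 2))
X_new = gaussian_kmeans_oversample(X_min, n_new=70)
print(X_new.shape)
```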
2017

Normalization Methods for Backpropagation: A Comparative Study

2017-12
Science Journal of University of Zakho (Issue: 4) (Volume: 5)
Neural Networks (NN) have been used by many researchers to solve problems in several domains, including classification and pattern recognition, and Backpropagation (BP) is one of the most well-known artificial neural network models. Constructing effective NN applications relies on characteristics such as the network topology, the learning parameters, and the normalization approach for the input and output vectors. The input and output vectors for BP need to be normalized properly in order to achieve the best performance of the network. This paper applies several normalization methods to several UCI datasets and compares them to find the normalization method that works best with BP. Norm, Decimal Scaling, Mean-Mad, Median-Mad, Min-Max, and Z-score normalization are considered in this study. The comparative study shows that the performance of Mean-Mad and Median-Mad is better than that of all the remaining methods. On the other hand, the worst results are produced with the Norm method.
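For reference, the sketch below implements common forms of the normalization methods compared in the study, applied to a feature matrix before feeding it to a backpropagation network. The exact formulations used in the paper (for example whether Norm is applied per sample or per feature) may differ, and the epsilon guard against division by zero is an added assumption.

```python
# Common forms of the compared normalization methods, applied column-wise
# (Norm shown per sample). Exact definitions in the paper may differ slightly.
import numpy as np

EPS = 1e-12

def min_max(X):          # scales each feature to [0, 1]
    return (X - X.min(0)) / (X.max(0) - X.min(0) + EPS)

def z_score(X):          # zero mean, unit variance per feature
    return (X - X.mean(0)) / (X.std(0) + EPS)

def decimal_scaling(X):  # divide each feature by 10^j so |values| fall below 1
    j = np.ceil(np.log10(np.abs(X).max(0) + EPS))
    return X / (10.0 ** j)

def mean_mad(X):         # center on the mean, scale by mean absolute deviation
    mad = np.abs(X - X.mean(0)).mean(0)
    return (X - X.mean(0)) / (mad + EPS)

def median_mad(X):       # center on the median, scale by median absolute deviation
    med = np.median(X, axis=0)
    mad = np.median(np.abs(X - med), axis=0)
    return (X - med) / (mad + EPS)

def norm(X):             # each sample vector divided by its Euclidean norm
    return X / (np.linalg.norm(X, axis=1, keepdims=True) + EPS)

X = np.random.default_rng(0).normal(loc=50, scale=10, size=(100, 4))
print(median_mad(X).mean(0).round(3))   # roughly centered around zero
```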
