
Conference

2022

Compare K-Nearest Neighbor Method with Kernel Model in Nonparametric Functional Regression

2022-05
6th International Conference on Computational Mathematics and Engineering Science (CMES-2022), 20 to 22 May 2022, Ordu, Turkey
The aim of this research is to compare the K-Nearest Neighbor model with the kernel model for nonparametric functional data under a conditional expectation framework, in the case where the response is a scalar variable and the covariate is functional. We use the Nadaraya-Watson estimator for prediction with two kinds of semi-metrics: one built on second derivatives and one based on Functional Principal Component Analysis. Performance is assessed by computing mean square errors, and the results obtained from the K-Nearest Neighbor method are more accurate than those obtained from the kernel model. The method is illustrated by some applications.
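A minimal numerical sketch of the comparison (illustrative only, not the authors' code or data): both estimators predict a scalar response from curves, using the semi-metric built on second derivatives; the FPCA-based semi-metric is omitted, and the simulated curves, bandwidth, and neighborhood size are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated functional covariates: n curves observed on a common grid of m points.
n, m = 200, 50
t = np.linspace(0.0, 1.0, m)
a = rng.uniform(-1.0, 1.0, size=(n, 2))
X = a[:, [0]] * np.cos(2 * np.pi * t) + a[:, [1]] * np.sin(2 * np.pi * t)
y = a[:, 0] ** 2 + 0.1 * rng.standard_normal(n)  # scalar response

def semimetric_d2(x1, x2):
    """Semi-metric based on second derivatives: L2 distance of curvatures."""
    d2 = np.gradient(np.gradient(x1, t), t) - np.gradient(np.gradient(x2, t), t)
    return np.sqrt(np.trapz(d2 ** 2, t))

def predict_kernel(x0, Xtr, ytr, h):
    """Functional Nadaraya-Watson form: kernel-weighted mean of the responses."""
    d = np.array([semimetric_d2(x0, xi) for xi in Xtr])
    w = np.exp(-((d / h) ** 2))
    return np.sum(w * ytr) / np.sum(w)

def predict_knn(x0, Xtr, ytr, k):
    """Functional kNN: average response over the k closest curves."""
    d = np.array([semimetric_d2(x0, xi) for xi in Xtr])
    return ytr[np.argsort(d)[:k]].mean()

tr, te = slice(0, 150), slice(150, None)
mse_kernel = np.mean([(predict_kernel(x0, X[tr], y[tr], h=20.0) - y0) ** 2
                      for x0, y0 in zip(X[te], y[te])])
mse_knn = np.mean([(predict_knn(x0, X[tr], y[tr], k=10) - y0) ** 2
                   for x0, y0 in zip(X[te], y[te])])
```

The design difference is that the kernel estimator averages with smoothly decaying weights controlled by a global bandwidth, while kNN adapts the neighborhood size locally, which is often the source of its advantage on irregular designs.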

A Nonlinear Transformation Methods Using Covid-19 Data in the Kurdistan Region

2022-04
2022 International Conference on Computer Science and Software Engineering (CSASE)
Ordinary least squares (OLS) is the most widely used estimation method, due to tradition and its optimal properties for estimating the parameters of linear and nonlinear regression models. Nevertheless, in the presence of outliers in the data, OLS estimates become inefficient, and even a single unusual point can have a significant impact on the parameter estimates. One remedy in the presence of outliers is to use robust estimators rather than OLS. Another is to find a suitable nonlinear transformation that reduces anomalies, including non-additivity, heteroscedasticity, and non-normality, in multiple nonlinear regression. It might be beneficial to transform the response variable, the predictor variables, or both together, to present the equation in a simple functional form that is linear in the transformed variables. To illustrate the superior transformation function, we compare the squared correlation coefficient (coefficient of …
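The selection criterion described above can be sketched in a few lines: compare the squared correlation coefficient between the predictor and several candidate transformations of the response, and keep the transformation that makes the relation most linear. This is a generic illustration under assumed simulated data, not the paper's Covid-19 data or its full procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(1.0, 10.0, 200)
y = np.exp(0.4 * x) * rng.lognormal(sigma=0.2, size=200)  # multiplicative noise

def r_squared(u, v):
    """Squared Pearson correlation coefficient between two vectors."""
    return float(np.corrcoef(u, v)[0, 1] ** 2)

# Candidate transformations of the response; the one yielding the highest
# squared correlation with the predictor gives the most linear relation.
candidates = {"identity": y, "sqrt": np.sqrt(y), "log": np.log(y)}
scores = {name: r_squared(x, ty) for name, ty in candidates.items()}
best = max(scores, key=scores.get)
```

For this exponential-with-multiplicative-noise model the log transformation should score highest, since log y is linear in x with roughly constant error variance.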
2021

Multicollinearity Diagnostic Measure For Fixed Effect Panel Data Model

2021-08
the 5th ISM International Statistical Conference 2021 (ISM-V 2021)
It is now evident that high leverage points (HLPs) can induce a multicollinearity pattern in data in the fixed effect panel data model. The observations responsible for this phenomenon are called high leverage collinearity-enhancing observations (HLCEOs). The commonly used within group ordinary least squares (WOLS) estimator for estimating the parameters of the fixed effect panel data model is easily affected by HLCEOs. In their presence, the WOLS estimates may have large variances, which would lead to erroneous interpretation. Therefore, it is imperative to detect multicollinearity that is caused by HLPs. The Classical Variance Inflation Factor (CVIF) is the commonly used diagnostic method for detecting multicollinearity in panel data. However, it does not correctly diagnose multicollinearity in the presence of HLCEOs. Hence, in this paper a new method of diagnosing multicollinearity in panel data is proposed. The method is formulated by incorporating a robust within group estimator based on fast improvised MGT (WGM-FIMGT) and is denoted RVIF (WGM-FIMGT). The numerical evidence shows that our proposed method is very successful in diagnosing multicollinearity compared to the other methods in this study.
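The HLCEO phenomenon is easy to demonstrate with the classical VIF, defined as VIF_j = 1/(1 - R_j^2), where R_j^2 comes from regressing predictor j on the remaining predictors. In the sketch below (illustrative data, not the paper's panel setting or its robust WGM-FIMGT estimator), three uncorrelated predictors show VIFs near 1 until a single high leverage point is appended, after which the VIFs inflate sharply.

```python
import numpy as np

def vif(X):
    """Classical VIF: VIF_j = 1 / (1 - R_j^2), where R_j^2 is the fit of
    column j regressed (with intercept) on the remaining columns."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        yj = X[:, j]
        Zj = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Zj, yj, rcond=None)
        resid = yj - Zj @ beta
        r2 = 1.0 - resid.var() / yj.var()
        out[j] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 3))       # nearly orthogonal predictors
vif_clean = vif(X)

# One high leverage collinearity-enhancing observation (HLCEO): a single
# extreme row makes the columns look almost perfectly collinear.
X_hlceo = np.vstack([X, [50.0, 50.0, 50.0]])
vif_contam = vif(X_hlceo)
```

This is exactly the failure mode the abstract targets: the collinearity signal is an artifact of one observation, so a diagnostic built on a non-robust fit misreads the data.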
2020

The 4th International Conference on Mathematical Sciences and Computer Engineering

2020-09
ICMSCE 2017
The detection of multicollinearity is crucial so that proper remedial measures can be taken in its presence. The widely used diagnostic method to detect multicollinearity in multiple linear regression is the Classical Variance Inflation Factor (CVIF). It is now evident that the CVIF fails to correctly detect multicollinearity when high leverage points are present in a data set. The Robust Variance Inflation Factor (RVIF) has been introduced to remedy this problem. Nonetheless, the computation of RVIF takes a long time because it is based on the robust GM (DRGP) estimator, which depends on the Minimum Volume Ellipsoid (MVE) estimator and involves considerable computing time. In this study, we propose a fast RVIF (FRVIF) which takes less computing time. The results of the simulation study and numerical examples indicate that our proposed FRVIF successfully detects the multicollinearity problem at a faster rate compared to the other methods.

First International Virtual Conference on Statistics,

2020-05
First International Virtual Conference on Statistics,
Influential observations (IO) are observations that are responsible for misleading conclusions about the fit of a multiple linear regression model. Existing IO identification methods, such as the influential distance (ID), are not very successful in detecting IO. It is suspected that ID employs an inefficient method with a long computational running time for identifying the suspected IO at the initial step. Moreover, this method declares good leverage observations to be IO, resulting in misleading conclusions. In this paper, we propose a fast improvised influential distance (FIID) that can successfully identify IO, good leverage observations, and regular observations with a shorter computational running time. A Monte Carlo simulation study and real data examples show that FIID correctly identifies genuine IO in the multiple linear regression model with no masking and a negligible swamping rate.
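FIID itself is the authors' contribution and is not reproduced here. As a baseline for what "influential" means, the classical OLS diagnostics (leverage, i.e. hat values, and Cook's distance) flag a point that is both extreme in the predictor and far from the fitted line, as in this sketch with assumed simulated data:

```python
import numpy as np

def influence_diagnostics(x, y):
    """Leverage (hat values) and Cook's distance for simple OLS on [1, x]."""
    n = len(y)
    Z = np.column_stack([np.ones(n), x])
    H = Z @ np.linalg.inv(Z.T @ Z) @ Z.T        # hat matrix
    h = np.diag(H)                               # leverage of each observation
    resid = y - H @ y
    p = Z.shape[1]
    s2 = resid @ resid / (n - p)                 # residual variance estimate
    d = (resid ** 2 / (p * s2)) * h / (1.0 - h) ** 2  # Cook's distance
    return h, d

rng = np.random.default_rng(3)
x = rng.standard_normal(50)
y = 2.0 + 3.0 * x + 0.3 * rng.standard_normal(50)
x = np.append(x, 4.0)      # high leverage in x ...
y = np.append(y, -20.0)    # ... and far from the line: a genuinely influential point
h, d = influence_diagnostics(x, y)
worst = int(np.argmax(d))  # index of the most influential observation
```

Note the distinction the abstract draws: a good leverage observation (extreme x but consistent y) has high leverage yet small Cook's distance, and a sound identification method must not declare it to be an IO.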
2014

CIMPA-UNESCO-IRAQ School, Inverse Problems: Theory and Applications, Erbil, May 5, 2014.

2014-05
CIMPA
Participant
