Karwan Jacksi


Assistant Professor

Specialties

Web Technology

Education

Doctor of Philosophy

Computer Science at University of Zakho

2018

Master of Science

Computer Science at Uppsala University

2011

Bachelor of Science

Computer Science at University of Duhok

2007

Membership


2018-03-01 – present
Co-Chair

International Conference on Advanced Science and Engineering (ICOASE 2018)

2012-11-01 – present
Member

Computer Programmers Union

Academic Title

Assistant Professor

2019-05-13

Lecturer

2014-09-01

Assistant Lecturer

2011-10-30

Awards

Appreciation Letter

2018-01
President of University of Zakho

Appreciation letter for making the website of the University of Zakho the second most-visited website in Iraq among all Kurdistan Region universities.

Letter of Appreciation

2015-01
Minister of Higher Education and Scientific Research in KRG

This honor is for my high teaching quality and academic activities, including students' feedback, for the academic year 2012–2013.

Appreciation Letter

2014-05
President of University of Zakho

Appreciation letter from Dr. Lazgin A. Jameel (President of University of Zakho) for the successful work that has been done as coordinator in the Computer Science Department from Oct. 2011 to April 2014.

Appreciation Letter

2013-11
President of University of Zakho

Received an appreciation letter from the President of the University of Zakho for the efforts taken to organize the E-learning workshop at the university.

Top Students' Awards

2007-07
Deputy Prime Minister of Iraq

An award from Dr. Barham Salih (Deputy Prime Minister of Iraq) for top students.

Published Journal Articles

Data in Brief (Issue : 5) (Volume : 60)
KSTRV1: A scene text recognition dataset for central Kurdish in (Arabic-Based) script

Scene Text Recognition (STR) has advanced significantly in recent years, yet languages utilizing Arabic-based scripts, such as Kurdish, remain underrepresented in existing datasets. This paper introduces KSTRV1, the first large-scale dataset designed for Kurdish Scene Text Recognition (KSTR), addressing the lack of resources for non-Latin scripts. The dataset comprises 1,420 natural scene images and 19,872 cropped word samples, covering Kurdish (Sorani and Badini dialects), Arabic, and English. Additionally, 20,000 synthetic text instances have been generated to enhance the dataset’s diversity, quantity, and quality by incorporating varied fonts, orientations, distortions, and background complexities. KSTRV1 captures the multilingual landscape of the Kurdistan Region while addressing real-world challenges like occlusion, lighting variations, and script complexity. The dataset includes detailed annotations with bounding boxes, language identification, and text orientation labels, ensuring comprehensive support for training and evaluating STR models. By providing both natural and synthetic data, KSTRV1 enables the development of robust text recognition models, particularly for Central Kurdish, a low-resource language. The KSTRV1 dataset is publicly available at https://doi.org/10.5281/zenodo.15038953 and is expected to significantly contribute to research in multilingual STR, document analysis, and optical character recognition (OCR), facilitating more inclusive and accurate text recognition systems.

 2025-05
Digital Scholarship in the Humanities (Issue : 1) (Volume : 9)
A hybrid part-of-speech tagger with annotated Kurdish corpus: advancements in POS tagging

With the rapid growth of online content written in the Kurdish language, there is an increasing need to make it machine-readable and processable. Part of speech (POS) tagging is a critical aspect of natural language processing (NLP), playing a significant role in applications such as speech recognition, natural language parsing, information retrieval, and multiword term extraction. This study details the creation of the DASTAN corpus, the first POS-annotated corpus for the Sorani Kurdish dialect. The corpus, containing 74,258 words and thirty-eight tags, employs a hybrid approach utilizing the bigram hidden Markov model in combination with the Kurdish rule-based approach to POS tagging. This approach addresses two key problems that arise with rule-based approaches, namely misclassified words and ambiguity-related unanalyzed words. The proposed approach’s accuracy was assessed by training and testing it on the DASTAN corpus, yielding a 96% accuracy rate. Overall, this study’s findings demonstrate the effectiveness of the proposed hybrid approach and its potential to enhance NLP applications for Sorani Kurdish.

 2023-10
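
The bigram hidden Markov model at the core of the hybrid tagger described above can be sketched briefly. The following is a minimal illustration under stated assumptions: the toy corpus, tagset, and smoothing constants are hypothetical and are not drawn from the DASTAN corpus or the paper's rule-based component.

```python
# Minimal bigram-HMM POS tagger sketch with Viterbi decoding.
# Toy corpus and tagset are hypothetical, not the DASTAN corpus.
from collections import defaultdict

def train(tagged_sentences):
    """Collect bigram transition and emission counts from tagged data."""
    trans = defaultdict(lambda: defaultdict(int))   # counts for P(tag_i | tag_{i-1})
    emit = defaultdict(lambda: defaultdict(int))    # counts for P(word | tag)
    for sent in tagged_sentences:
        prev = "<s>"
        for word, tag in sent:
            trans[prev][tag] += 1
            emit[tag][word] += 1
            prev = tag
    return trans, emit

def prob(table, given, outcome):
    """Relative-frequency estimate with a tiny floor for unseen contexts."""
    total = sum(table[given].values())
    return table[given][outcome] / total if total else 1e-6

def viterbi(words, trans, emit):
    """Return the most likely tag sequence for `words`."""
    tags = list(emit)
    # best[t] = (score, path) over sequences ending in tag t
    best = {t: (prob(trans, "<s>", t) * prob(emit, t, words[0]) + 1e-12, [t])
            for t in tags}
    for w in words[1:]:
        new = {}
        for t in tags:
            s, p = max(
                ((best[pt][0] * prob(trans, pt, t) * (prob(emit, t, w) + 1e-12),
                  best[pt][1]) for pt in tags),
                key=lambda x: x[0])
            new[t] = (s, p + [t])
        best = new
    return max(best.values(), key=lambda x: x[0])[1]

corpus = [[("the", "DET"), ("cat", "N"), ("runs", "V")],
          [("a", "DET"), ("dog", "N"), ("sleeps", "V")]]
trans, emit = train(corpus)
print(viterbi(["the", "dog", "runs"], trans, emit))  # -> ['DET', 'N', 'V']
```

A real tagger would add smoothing for unknown words, which is where the paper's Kurdish rule-based component steps in for misclassified and ambiguous tokens.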
International Journal of Intelligent Systems and Applications in Engineering (Issue : 11) (Volume : 11)
An Intelligent and Advanced Kurdish Information Retrieval Approach with Ontologies: A Critical Analysis

Today, there are numerous ways of finding information: radio, TV, and the internet all provide answers. However, the internet stands out as particularly helpful; users can search by typing in questions related to any subject area they wish. Results appear as links to various documents available on the internet, some of which may not even be relevant due to the vast amount of material. Search engines reliant solely on keywords are incapable of making sense of raw data, making it time-consuming and costly to extract critical pieces from an immense collection of web pages. Due to these deficiencies, several concepts were born, such as the Semantic Web (SW) and ontologies. The SW serves as an excellent gateway for retrieving key information through various Information Retrieval (IR) techniques, yet plain IR algorithms are too simplistic to extract the semantic content from texts. IR, the SW, and ontologies are closely interconnected: the SW can be achieved through IR, indexing can lead to its creation on the web, and the SW is also built through ontologies. Ontologies can be combined with intelligent approaches to produce web content, which is then marked up as SW documents. Ontology is the backbone of the Semantic Web, which thereby becomes simpler to comprehend; ontology development is the process of creating and refining an ontology over time. This paper investigates various approaches, methodologies, and datasets used to address challenges in information retrieval, including corpus preparation, annotation techniques, query expansion, semantic reasoning, content alignment, and ontology-based retrieval systems.

 2023-09
International Journal of Intelligent Systems and Applications in Engineering (Issue : 11) (Volume : 3)
Web Solution for Processing and Visualizing Mass-Spectrometry Data and Protein Peptides Identified in Cancer Patients

This paper addresses the critical problem of processing and visualizing mass spectrometry data and protein peptides identified in cancer patients. The growing volume of data produced by advanced technologies, such as mass spectrometry, has necessitated the development of computer systems capable of effectively storing, analyzing, and presenting this data. In response to this challenge, a web-based solution is presented that empowers researchers and clinicians to gain valuable insights through network visualization of peptides and their associated data points across various cancer types and patient cohorts. By leveraging the power of Laravel on PHP 8, this system provides a robust foundation for efficient data processing and management. Additionally, the integration of an API enables seamless communication with a TypeScript and React-based front-end, resulting in an engaging and interactive user experience. The platform's ability to present the complex relationships between protein peptides and cancer-specific data in a network visualization format offers a powerful tool for researchers and clinicians to explore and interpret the data effectively. The development of this web-based solution contributes to the advancement of proteomics research and holds great potential for improving cancer treatment outcomes. By facilitating the exploration and analysis of mass spectrometry data and protein peptides, the system enables researchers to uncover valuable patterns and insights that can inform the development of more effective treatments for cancer patients. Through this work, we strive for a meaningful impact in the field of cancer research and provide a valuable resource for the scientific community.

 2023-07
Mathematics (Issue : 3) (Volume : 11)
A Semantics-Based Clustering Approach for Online Laboratories Using K-Means and HAC Algorithms

Due to the availability of a vast amount of unstructured data in various forms (e.g., the web, social networks, etc.), the clustering of text documents has become increasingly important. Traditional clustering algorithms have not been able to solve this problem because the semantic relationships between words could not accurately represent the meaning of the documents. Thus, semantic document clustering has been extensively utilized to enhance the quality of text clustering. This method is a form of unsupervised learning and involves grouping documents based on their meaning, not on common keywords. This paper introduces a new method that groups documents from online laboratory repositories based on the semantic similarity approach. In this work, the dataset is collected first by crawling the short real-time descriptions of the online laboratories' repositories from the Web. A vector space is created using term frequency–inverse document frequency (TF-IDF) and clustering is done using the K-Means and Hierarchical Agglomerative Clustering (HAC) algorithms with different linkages. Three scenarios are considered: without preprocessing (WoPP), preprocessing with stemming (PPwS), and preprocessing without stemming (PPWoS). Several metrics have been used for evaluating the experiments: Silhouette average, purity, V-measure, F1-measure, accuracy score, homogeneity score, completeness, and NMI score, across five datasets: online labs, 20 NewsGroups, Txt_sentoken, NLTK_Brown, and NLTK_Reuters. Finally, by creating an interactive webpage, the results of the proposed work are contrasted and visualized.

 2023-01
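
The TF-IDF-plus-K-Means pipeline described above can be sketched using only the standard library. The toy documents and the choice of k below are hypothetical illustrations, not the online-laboratory descriptions or settings used in the paper; real experiments would add preprocessing and the HAC variants as well.

```python
# Sketch of a TF-IDF + spherical K-Means document-clustering pipeline.
# Toy documents and k=2 are hypothetical, not the paper's dataset.
import math
from collections import Counter

def tfidf_vectors(docs):
    """Map each tokenized document to a {term: tf-idf} dictionary."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))        # document frequency
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({t: (tf[t] / len(d)) * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def kmeans(vecs, k, iters=10):
    """Assign each vector to the most similar centroid, then re-average."""
    centroids = [dict(v) for v in vecs[:k]]              # first k docs as seeds
    labels = [0] * len(vecs)
    for _ in range(iters):
        labels = [max(range(k), key=lambda c: cosine(v, centroids[c]))
                  for v in vecs]
        for c in range(k):
            members = [v for v, l in zip(vecs, labels) if l == c]
            if members:
                terms = {t for v in members for t in v}
                centroids[c] = {t: sum(v.get(t, 0.0) for v in members) / len(members)
                                for t in terms}
    return labels

docs = [["remote", "physics", "lab"], ["chemistry", "simulation"],
        ["virtual", "physics", "lab"], ["chemistry", "experiment", "simulation"]]
labels = kmeans(tfidf_vectors(docs), k=2)
print(labels)  # -> [0, 1, 0, 1]: physics labs together, chemistry docs together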
Turkish Journal of Computer and Mathematics Education (Issue : 4) (Volume : 12)
Task Scheduling Algorithms in Cloud Computing: A Review

Cloud computing is the requirement based on clients and provides many resources that aim to... See more

Cloud computing is the requirement based on clients and provides many resources that aim to share it as a service through the internet. For optimal use, Cloud computing resources such as storage, application, and other services need managing and scheduling these services. The principal idea behind the scheduling is to minimize loss time, workload, and maximize throughput. So, the scheduling task is essential to achieve accuracy and correctness on task completion. This paper gives an idea about various task scheduling algorithms in the cloud computing environment used by researchers. Finally, many authors applied different parameters like completion time, throughput, and cost to evaluate the system.

 2021-04
Qubahan Academic Journal (Issue : 2) (Volume : 1)
State of Art for Semantic Analysis of Natural Language Processing

Semantic analysis is an essential feature of the NLP approach. It indicates, in the appropriate... See more

Semantic analysis is an essential feature of the NLP approach. It indicates, in the appropriate format, the context of a sentence or paragraph. Semantics is about language significance study. The vocabulary used conveys the importance of the subject because of the interrelationship between linguistic classes. In this article, semantic interpretation is carried out in the area of Natural Language Processing. The findings suggest that the best-achieved accuracy of checked papers and those who relied on the Sentiment Analysis approach and the prediction error is minimal.

 2021-03
QALAAI ZANIST JOURNAL (Issue : 1) (Volume : 6)
An Automated Early Alert System for Natural Disaster Risk Reduction: A Review

According to the research published in the last decades, many peoples died due to natural... See more

According to the research published in the last decades, many peoples died due to natural disasters. So, some researchers tried to find a method and solution to reduce these disasters and risks. Lamentably, there is not any value system for a warning from certain dangerous disasters in the country. This suggestion is constructive to diagnose this kind of problem; every country follows different tactics. Based on the various sources of natural weather monitoring systems in the heterogeneous country regions, this review found no solution to warn the community in real-time. This examination is to find the weakness of the current situation as the growth of technology nowadays. Today mobile application's new technology helps an early alert system for natural disaster risk reduction (DRR) that authorities employed in several ways to reduce the natural disaster risks.

 2021-03
Indonesian Journal of Electrical Engineering and Computer Science (Issue : 1) (Volume : 22)
A state-of-the-art survey on semantic similarity for document clustering using GloVe and density-based algorithms

Semantic similarity is the process of identifying relevant data semantically. The traditional way of identifying... See more

Semantic similarity is the process of identifying relevant data semantically. The traditional way of identifying document similarity is by using synonymous keywords and syntactician. In comparison, semantic similarity is to find similar data using meaning of words and semantics. Clustering is a concept of grouping objects that have the same features and properties as a cluster and separate from those objects that have different features and properties. In semantic document clustering, documents are clustered using semantic similarity techniques with similarity measurements. One of the common techniques to cluster documents is the density-based clustering algorithms using the density of data points as a main strategic to measure the similarity between them. In this paper, a state-of-the-art survey is presented to analyze the density-based algorithms for clustering documents. Furthermore, the similarity and evaluation measures are investigated with the selected algorithms to grasp the common ones. The delivered review revealed that the most used density-based algorithms in document clustering are DBSCAN and DPC. The most effective similarity measurement has been used with densitybased algorithms, specifically DBSCAN and DPC, is Cosine similarity with F-measure for performance and accuracy evaluation.

 2021-02
Academic Journal of Nawroz University (Issue : 1) (Volume : 10)
The Importance of E-Learning in the Teaching Processor Secondary Schools /Review Article

This study explores the usefulness of e-learning in teaching in secondary institutions. The topic of... See more

This study explores the usefulness of e-learning in teaching in secondary institutions. The topic of using new information and communication technology for teaching and learning is very relevant in secondary education institutions. Henceforth, Students can manage the most recent Technologies better. In addition, the School must play an important role to give instructional classes to the teacher to build up their aptitudes on the utilization of present-day advancements and to encourage downloading E-educational module from the service's site. However, still there are deterrents with the application: First, right off the bat the substance of the educational programs is not perfect with E-learning. Second, shortcoming of the mechanical framework important for the foundation of the E-learning framework in general optional school. Third, low attention to understudies and educators about the significance of E-learning and absence of sufficient capability for chiefs and instructors where instructors experience issues in tolerating this kind of Education. This paper examines the concept and the description of e-learning as presented by different researchers, the role that e-learning plays in secondary education institutions in relation to teaching and learning processes, and the advantages and disadvantages of adopting and implementing it.

 2021-01
International Journal of Research -GRANTHAALAYAH (Issue : 8) (Volume : 8)
AN HRM SYSTEM FOR SMALL AND MEDIUM ENTERPRISES (SME)S BASED ON CLOUD COMPUTING TECHNOLOGY

Technology has changed our life and the way we work; however, technology has affected several... See more

Technology has changed our life and the way we work; however, technology has affected several methods of working in Small and Medium Enterprises (SME)s. Human Resource (HR) is one of the core components in businesses, and nowadays most businesses are using technology for daily basis tasks. However, it still is not used all over the world. In Kurdistan Region-Iraq (KRI), most of the SMEs still use the old way of working and follow the paper-based method for their daily basis tasks. According to a survey, more than seventy percent of SMEs in Kurdistan are not using software to manage human resource management tasks. However, some big companies are using HRMS; but even then, there is a lack of use of Cloud Technology. In this study, a model of the Enterprise Human Resource Management System (EHRMS) is proposed and implemented to solve the HR problems in this area using Cloud Technology. The proposed system consists of sixteen standard modules which used usually with famous HRM systems. The system has been developed by using several technologies such as CodeIgniter as a software framework. The system is launched and deployed on Amazon Web Service (AWS) Elastic Compute Cloud (EC2).

 2020-08
Journal of Applied Science and Technology Trends (Issue : 1) (Volume : 1)
Football Ontology Construction using Oriented Programming

According to the W3C, the semantic web is the future of the www. The data... See more

According to the W3C, the semantic web is the future of the www. The data that is based on the semantic web can be understood by machines and devices. The main component of the semantic web is the ontology, which is known as the backbone of the semantic web. There are many tools used to edit and create an ontology, however, few kinds of research construct an ontology using oriented programming. SPARQL and API OWL are used to access and edit ontologies, though they are not using oriented programming. The main objective of this paper is to build an ontology using oriented programming and allowable to access OWL entities. Owlready module is effectively used in sport ontology for football in 11 European Leagues.

 2020-03
Jurnal Informatika (Issue : 2) (Volume : 14)
State of the art document clustering algorithms based on semantic similarity

The constant success of the Internet made the number of text documents in electronic forms... See more

The constant success of the Internet made the number of text documents in electronic forms increases hugely. The techniques to group these documents into meaningful clusters are becoming critical missions. The traditional clustering method was based on statistical features, and the clustering was done using a syntactic notion rather than semantically. However, these techniques resulted in un-similar data gathered in the same group due to polysemy and synonymy problems. The important solution to this issue is to document clustering based on semantic similarity, in which the documents are grouped according to the meaning and not keywords. In this research, eighty papers that use semantic similarity in different fields have been reviewed; forty of them that are using semantic similarity based on document clustering in seven recent years have been selected for a deep study, published between the years 2014 to 2020. A comprehensive literature review for all the selected papers is stated. Detailed research and comparison regarding their clustering algorithms, utilized tools, and methods of evaluation are given. This helps in the implementation and evaluation of the clustering of documents. The exposed research is used in the same direction when preparing the proposed research. Finally, an intensive discussion comparing the works is presented, and the result of our research is shown in figures.

 2020-02
Science Journal of University of Zakho (Issue : 3) (Volume : 6)
A State of Art Survey for OS Performance Improvement

Through the huge growth of heavy computing applications which require a high level of performance,... See more

Through the huge growth of heavy computing applications which require a high level of performance, it is observed that the interest of monitoring operating system performance has also demanded to be grown widely. In the past several years since OS performance has become a critical issue, many research studies have been produced to investigate and evaluate the stability status of OSs performance. This paper presents a survey of the most important and state of the art approaches and models to be used for performance measurement and evaluation. Furthermore, the research marks the capabilities of the performance-improvement of different operating systems using multiple metrics. The selection of metrics which will be used for monitoring the performance depends on monitoring goals and performance requirements. Many previous works related to this subject have been addressed, explained in details, and compared to highlight the top important features that will very beneficial to be depended for the best approach selection.

 2018-09
International Journal of Engineering and Technology (Issue : 2) (Volume : 6)
Student Attendance Management System

Attendance management is important to every single organization; it can decide whether or not an... See more

Attendance management is important to every single organization; it can decide whether or not an organization such as educational institutions, public or private sectors will be successful in the future. Organizations will have to keep a track of people within the organization such as employees and students to maximize their performance. Managing student attendance during lecture periods has become a difficult challenge. The ability to compute the attendance percentage becomes a major task as manual computation produces errors, and wastes a lot of time. For the stated reason, an efficient Web-based application for attendance management system is designed to track student's activity in the class. This application takes attendance electronically and the records of the attendance are storing in a database. The system design using the Model, View, and Controller (MVC) architecture, and implemented using the power of Laravel Framework. JavaScript is adding to the application to improve the use of the system. MySQL used for the Application Database. The system designed in a way that can differentiate the hours of theoretical and practical lessons since the rate of them is different for calculating the percentages of the students' absence. Insertions, deletions, and changes of data in the system can do straightforward via the designed GUI without interacting with the tables. Different presentation of information is obtainable from the system. The test case of the system exposed that the system is working enormously and is ready to use to manage to attend students for any department of the University. INTRODUCTION Due to student's interest in classrooms, and whose is the largest union in the study enviro… Read more

 2018-02
International Journal of Advanced Computer Science and Applications (Issue : 1) (Volume : 9)
LOD explorer: Presenting the Web of data

The quantity of data published on the Web according to principles of Linked Data is... See more

The quantity of data published on the Web according to principles of Linked Data is increasing intensely. However, this data is still largely limited to be used up by domain professionals and users who understand Linked Data technologies. Therefore, it is essential to develop tools to enhance intuitive perceptions of Linked Data for lay users. The features of Linked Data point to various challenges for an easy-to-use data presentation. In this paper, Semantic Web and Linked Data technologies are overviewed, challenges to the presentation of Linked Data is stated, and LOD Explorer is presented with the aim of delivering a simple application to discover triplestore resources. Furthermore, to hide the technical challenges behind Linked Data and provide both specialist and non-specialist users, an interactive and effective way to explore RDF resources.

 2018-01
International Journal of Advanced Computer Science and Applications (Issue : 11) (Volume : 7)
State of the Art Exploration Systems for Linked Data: A Review

The ever-increasing amount of data available on the web is the result of the simplicity... See more

The ever-increasing amount of data available on the web is the result of the simplicity of sharing data over the current Web. To retrieve relevant information efficiently from this huge dataspace, a sophisticated search technology, which is further complicated due to the various data formats used, is crucial. Semantic Web (SW) technology has a prominent role in search engines to alleviate this issue by providing a way to understand the contextual meaning of data so as to retrieve relevant, high-quality results. An Exploratory Search System (ESS), is a featured data looking and search approach which helps searchers learn and explore their unclear topics and seeking goals through a set of actions. To retrieve high-quality retrievals for ESSs, Linked Open Data (LOD) is the optimal choice. In this paper, SW technology is reviewed, an overview of the search strategies is provided, and followed by a survey of the state of the art Linked Data Browsers (LDBs) and ESSs based on LOD. Finally, each of the LDBs and ESSs is compared with respect to several features such as algorithms, data presentations, and explanations.

 2016-11
2015International Journal of Scientific & Technology Research (Issue : 4) (Volume : 8)
Design And Implementation Of Online Submission and Peer Review System A Case Study Of E-Journal Of University Of Zakho

With the aim of designing and implementing a web-based article submission management system for academic... See more

With the aim of designing and implementing a web-based article submission management system for academic research papers, several international models such as Elsevier Editorial System and ICOCI, International Conference on Computing and Informatics, are studied and analyzed. Through this analysis, an open access web-based article submission and peer review system for Journal of University of Zakho (JUOZ) is employed. This kind of systems is not only capable of solving issues such as complex manuscript management, time-delays in the process of reviewing, and loss of manuscripts that occurs often in off-line paper submission and review processes, but also is capable to build the foundation for e-journal publications. Consequently, an active and rapid scholarly communication medium can be made. The implementation and deployment of this system can improve the rank of the university and the reputation and the globalization of science and technology research journals.

 2015-08
International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS) (Issue : 4) (Volume : 3)
Database Teaching in Different Universities: A Phenomenographic Research

In this research, the different teaching methodologies practiced in the basic database course taught in... See more

In this research, the different teaching methodologies practiced in the basic database course taught in different universities are discussed. This paper was written based on researched conducted through a questionnaire about university students in three different universities. The study was performed with a phenomenographic research approach among university staffs that have been graduated from University of Duhok , Nawroz University and University of Mosul . It investigates how and how well they have learned the basic database course during their bachelor degree.

 2015-05
INTERNATIONAL JOURNAL OF COMPUTER ENGINEERING IN RESEARCH TRENDS (Issue : 4) (Volume : 2)
Effects of Processes Forcing on CPU and Total Execution-Time Using Multiprocessor Shared Memory System

In this paper, the applications of Shared Memory systems towards the implementation of the Parallel... See more

In this paper, the applications of Shared Memory systems towards the implementation of the Parallel Processing approach is provided. Multiple tasks can be dealt with the applications of such systems by using the principles of Shared Memory Parallel Processing programming called Application-Program. The influences of forcing processes amongst processes of Shared Memory system relying on Parallel Processing approach principals are given. These influences are related with computing total and CPU execution times. The CPU usage is also determined with its changing manner depending on the load size and the number of participated CPUs.

 2015-04
International Journal of Scientific and Engineering Research (Issue : 3) (Volume : 6)
General method for data indexing using clustering methods

Indexing data plays a key role in data retrieval and search. New indexing techniques are... See more

Indexing data plays a key role in data retrieval and search. New indexing techniques are proposed frequently to improve search performance. Some data clustering methods are previously used for data indexing in data warehouses. In this paper, we discuss general concepts of data indexing, and clustering methods that are based on representatives. Then we present a general theme for indexing using clustering methods. There are two main processing schemes in databases, Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP). The proposed method is specific to stationary data like in OLAP. Having general indexing theme, different clustering methods are compared. Here we studied three representative based clustering methods; standard K-Means, Self Organizing Map (SOM) and Growing Neural Gas (GNG). Our study shows that in this context, GNG out performs K-Means and SOM.

 2015-03

Thesis

2018-03-29
An Improved Approach for Information Retrieval with Semantic-Web Crawling

The existing Web allows people to share data over the Internet with no trouble making... See more

The existing Web allows people to share data over the Internet effortlessly, making information ever more ubiquitous and massive. A powerful search technology is one of the main requirements for the success of the Web. However, with the huge amount of information available in various formats, it is difficult to retrieve relevant information. Semantic Web technology plays a major role in resolving this problem by permitting search engines to retrieve meaningful information. An exploratory search system, a special information-seeking and exploration approach, supports users who are unfamiliar with a topic, or whose search goals are vague and unfocused, in learning about and investigating a topic through a set of activities. To achieve exploratory search goals, Linked Open Data (LOD) can be used to help search systems retrieve related data so that the investigation task runs smoothly. The quantity of data published on the Web according to the principles of Linked Data is increasing rapidly. However, this data is still largely limited to consumption by domain professionals and users who understand Linked Data technologies. It is therefore essential to develop tools that give lay users an intuitive perception of Linked Data, whose characteristics pose various challenges for an easy-to-use presentation. In this research, Semantic Web and Linked Data technologies are reviewed, challenges in presenting Linked Data are stated, and LOD Explorer, a Web of Data exploration system, is presented, with the aim of delivering a simple application for discovering triplestore resources. The application hides the technical complexity of Linked Data and gives both specialist and non-specialist users an interactive and effective way to explore RDF resources. It is built with pure JavaScript and jQuery, without the need for any server-side software.
The efficiency of the system has been tested on different computing platforms (Windows, macOS, and Linux), and the experiments indicated outstanding performance. Finally, a usability evaluation was conducted with the System Usability Scale (SUS) to gain an in-depth understanding of the usefulness and usability of LOD Explorer, and the results confirm the usability and usefulness of the proposed system.

 2018
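A client-side explorer like the one described talks to a triplestore's SPARQL endpoint directly over HTTP. As a minimal sketch of that idea (in Python rather than the paper's JavaScript, with an assumed public DBpedia endpoint and an illustrative resource URI, none of which are taken from the paper):

```python
from urllib.parse import urlencode

# Assumed public SPARQL endpoint; LOD Explorer itself may target any triplestore.
ENDPOINT = "https://dbpedia.org/sparql"

def describe_resource(uri: str) -> str:
    """Build the GET request URL that asks the endpoint to DESCRIBE a resource."""
    query = f"DESCRIBE <{uri}>"
    params = urlencode({"query": query, "format": "application/ld+json"})
    return f"{ENDPOINT}?{params}"

# Example (hypothetical resource): the returned URL can be fetched by any HTTP client.
url = describe_resource("http://dbpedia.org/resource/Zakho")
```

Because the request is a plain GET against a public endpoint, a browser page with only JavaScript and jQuery can issue it, which is what makes a purely client-side explorer feasible.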
2011-05-17
Further Development of BitTorrent Simulator in Erlang

Among the many P2P file-sharing protocols in existence, BitTorrent is one of the few that has attracted significant attention from a wide range of users. It uses a variety of algorithms for peer selection, piece selection, and other tasks. A simulator that facilitates investigating different strategies for implementing the components of a P2P system is therefore of great advantage. An Erlang-based BitTorrent simulator was developed by the IT department at Uppsala University, and the network side of the project had been rewritten to improve the functionality of the application. In this thesis work, a new, modular design for the client side of the implementation was developed, documented, and incorporated into the application. All nodes run in parallel and communicate with each other through the newly developed network module. The implementation supports a variety of options for the BitTorrent simulator; algorithms with the typical structure can easily be exchanged and used to experiment with new ideas, to find out how the swarm is affected by different approaches to implementing BitTorrent clients and trackers. The report also reviews the structure of the previous thesis work and explains the modifications made to the previously developed network module.

 2011
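One of the exchangeable client algorithms such a simulator lets you experiment with is piece selection. The following is an illustrative sketch (in Python, not the thesis's Erlang code) of BitTorrent's standard rarest-first strategy: pick, among the pieces the client still needs, the one held by the fewest peers in the swarm.

```python
from collections import Counter

def rarest_first(needed, peer_bitfields):
    """Pick the rarest needed piece.

    needed: set of piece indices this client still lacks.
    peer_bitfields: list of sets, each the pieces one peer advertises.
    Returns a piece index, or None if no peer has any needed piece.
    """
    availability = Counter()
    for pieces in peer_bitfields:
        availability.update(pieces & needed)
    if not availability:
        return None  # nothing requestable yet
    # Tie-break on the lowest index so the choice is deterministic.
    return min(availability, key=lambda p: (availability[p], p))

# Piece 0 is held by one peer, piece 1 by two, piece 2 by three:
peers = [{0, 1, 2}, {1, 2}, {2}]
print(rarest_first({0, 1, 2}, peers))  # -> 0
```

Swapping this function for, say, a sequential or random selector is exactly the kind of experiment the simulator's modular client design is meant to support.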
2007-05-15
Link Characteristics

The Internet has become the main means of exchanging cultures and an important part of everyday activities. As the Internet grows quickly, procedures for probing it become more important. In this work, a new user-friendly system is proposed to carry out some Internet probing activities. The project relies on two key protocols: the 8-bit Time-To-Live (TTL) header field of Internet Protocol packets, and the Internet Control Message Protocol (ICMP). The Probing System (PS) helps administrators and developers check host reachability, network or Internet connectivity, and discover the routes followed by packets traveling to their destinations. The project also offers the administrator valuable information for detecting changes at any point on the line over a period of time, which helps to find strong and weak points, such as the number of dropped or lost packets. The project is validated by testing the desired network or Internet connection in order to obtain a picture of the efficiency of the Internet service under test; many practical examples have been used to test the performance of the Probing System.

 2007
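The route-discovery part of such a probing system rests on one mechanism: every router decrements a packet's TTL, and the router at which the TTL reaches zero sends back an ICMP Time Exceeded message, revealing itself. Real probing needs raw sockets and privileges, so the following is a pure-Python simulation of that principle (the router names are invented), not the actual Probing System:

```python
def probe_path(path, max_hops=30):
    """Simulate traceroute-style probing over a known router path.

    path: ordered list of router names ending at the destination.
    Returns the hops discovered by sending probes with TTL = 1, 2, 3, ...
    """
    discovered = []
    for ttl in range(1, max_hops + 1):
        remaining = ttl
        for hop in path:
            remaining -= 1           # each router decrements the TTL
            if remaining == 0:
                discovered.append(hop)  # this hop replies ICMP Time Exceeded
                break
        if discovered and discovered[-1] == path[-1]:
            break  # destination reached: stop probing
    return discovered

print(probe_path(["r1", "r2", "server"]))  # -> ['r1', 'r2', 'server']
```

A real implementation would set the `IP_TTL` socket option on each probe and read the source address of the returned ICMP message instead of consulting a predefined list.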

Conference

International Conference on Interactive Collaborative Robotics
 2023-06
DCPV: A Taxonomy for Deep Learning Model in Computer Aided System for Human Age Detection

Deep Learning prediction techniques are widely studied for their application to Human Age Prediction (HAP), with the aim of supporting prevention, treatment, and extended life expectancy. So far, most of the algorithms are based on facial images, MRI scans, and DNA methylation, which are used for training and testing in the domain but are rarely put into practice. The lack of real-world HAP applications is caused by several factors: no significant validation and evaluation of the systems in real-world scenarios, low performance, and technical complications. This paper presents the Data, Classification technique, Prediction, and View (DCPV) taxonomy, which specifies the major components required to implement a deep learning model for predicting human age. These components are to be considered and used as validation and evaluation criteria when introducing a deep learning HAP model. A taxonomy of the HAP system is a step towards a common baseline that helps end users and researchers gain a clear view of the constituents of deep learning prediction approaches, providing better scope for the future development of similar systems in the health domain. We assess the DCPV taxonomy by considering performance, accuracy, robustness, and model comparisons, and demonstrate its value by exploring state-of-the-art research within the domain of the HAP system.

5th International Conference on Engineering Technology and its Applications (IICETA)
 2022-09
Clustering Document based on Semantic Similarity Using Graph Base Spectral Algorithm

The Internet's continued growth has resulted in a significant rise in the number of electronic text documents, and grouping these materials into meaningful collections has become crucial. The old approach to document grouping relied on statistical characteristics and categorization based on syntactic rather than semantic information. This article introduces an approach for clustering texts based on their semantic similarity, using an efficient graph-based technique. Document summaries (synopses) are extracted from the Wikipedia and IMDB databases, and the downloaded documents are preprocessed with the NLTK dictionary to make them more convenient to use. A vector space is then modeled with TFIDF and converted to a numeric TFIDF matrix, and clustering is performed with spectral methods. The results are compared with previous work.
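A hedged sketch of that pipeline, with invented toy synopses standing in for the Wikipedia/IMDB data: TFIDF vectors are turned into a cosine-style similarity graph, which spectral clustering then partitions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import SpectralClustering

# Toy synopses (made up for illustration); the paper uses Wikipedia/IMDB ones.
docs = [
    "a detective investigates a murder in the city",
    "the detective solves the murder case",
    "a spaceship crew explores a distant planet",
    "astronauts travel to a far away planet",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
# Rows are L2-normalized, so the Gram matrix is a cosine-similarity graph.
similarity = (tfidf @ tfidf.T).toarray()

labels = SpectralClustering(
    n_clusters=2, affinity="precomputed", random_state=0
).fit_predict(similarity)
print(labels)  # the two crime synopses and the two sci-fi synopses group together
```

Using `affinity="precomputed"` makes the graph construction explicit; on real synopsis collections a sparsified k-nearest-neighbour graph would scale better.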

International Conference on Advanced Science and Engineering (2nd ICOASE)
 2022-08
Design a Clustering Document based Semantic Similarity System using TFIDF and K-Mean

The continuing success of the Internet has led to an enormous rise in the volume of electronic text records, and strategies for grouping these records into coherent groups are increasingly important. Traditional text clustering methods focus on statistical characteristics, using a syntactic rather than a semantic notion of similarity to do the clustering. A new approach for clustering documents based on textual similarity is presented in this paper. Text synopses from the Wikipedia and IMDB datasets are identified, tokenized, and stop-word filtered using the NLTK dictionary. Then, a vector space is created with TFIDF, and the K-mean algorithm is used to carry out the clustering. The results are shown as an interactive website.

ICR’22 International Conference on Innovations in Computing Research
 2022-08
Systematic Review for Selecting Methods of Document Clustering on Semantic Similarity of Online Laboratories Repository

In the era of digitalization, the number of electronic text documents on the Internet has been increasing rapidly, and organizing these documents into meaningful clusters is becoming a necessity, using several methods (e.g., TF-IDF, word embeddings) based on document clustering. Document clustering is the process of dynamically arranging documents into clusters such that the documents within a cluster are very similar to each other and dissimilar to those in other clusters. Traditional clustering algorithms do not take semantic relationships between words into account and therefore do not accurately represent the meaning of documents. Semantic information has been widely used to improve the quality of document clusters by grouping documents according to their meaning rather than their keywords. In this paper, twenty-five papers published in the last seven years (2016 to 2022) on semantic-similarity-based document clustering are systematically reviewed. The algorithms, similarity measures, tools, and evaluation methods they use are discussed as well. As a result, the survey shows that researchers used different datasets when applying semantic-similarity-based clustering to text. Accordingly, this paper proposes semantic-similarity-based clustering methods that can be used for the short-text semantic similarity found in an online laboratories repository.

International Conference on Advanced Science and Engineering (2nd ICOASE)
 2021-05
Distributed Denial of Service Attack Mitigation using High Availability Proxy and Network Load Balancing

Nowadays, cybersecurity threats are a big challenge to all organizations that present their services over the Internet. The Distributed Denial of Service (DDoS) attack is the most effective and most used attack, and it seriously affects the quality of service of every E-organization. Hence, mitigating this type of attack is a persistent need. In this paper, we used Network Load Balancing (NLB) and High Availability Proxy (HAProxy) as mitigation techniques: NLB on the Windows platform and HAProxy on the Linux platform. Moreover, Internet Information Services (IIS) 10.0 was deployed on Windows Server 2016 and Apache 2 on Linux Ubuntu 16.04 as web servers. We evaluated the efficiency of each load balancer in mitigating a synchronize (SYN) flood DDoS attack on each platform separately. The evaluation was performed on a real network, with average response time and average CPU usage as metrics. The results showed that NLB on the Windows platform achieved better performance in mitigating the SYN DDoS attack than HAProxy on the Linux platform, and the average response time of the Windows web servers was reduced with NLB. However, the impact of the SYN DDoS attack on the average CPU usage of the IIS 10.0 web servers was greater than on the Apache 2 web servers.
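A minimal HAProxy configuration in the spirit of the setup described above might look as follows. This is an illustrative sketch, not the paper's actual configuration; the backend addresses, connection limits, and timeouts are assumptions chosen to show how a proxy absorbs connection floods by capping concurrency and timing out slow clients.

```haproxy
frontend web_in
    bind *:80
    maxconn 5000          # cap concurrent connections reaching the proxy
    timeout client 10s    # drop clients that stall (helps against floods)
    default_backend web_pool

backend web_pool
    balance roundrobin    # spread load across the pool
    timeout connect 5s
    timeout server 10s
    server web1 192.168.1.11:80 check maxconn 1000
    server web2 192.168.1.12:80 check maxconn 1000
```

Because the proxy terminates client TCP connections itself, half-open SYN-flood connections never consume resources on the web servers behind it.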

International Conference on Advanced Science and Engineering (2nd ICOASE)
 2021-05
Clustering Documents based on Semantic Similarity using HAC and K-Mean Algorithms

The continuing success of the Internet has greatly increased the number of text documents in electronic formats, and techniques for grouping these documents into meaningful collections have become mission-critical. The traditional method of grouping documents based on statistical features used syntactic rather than semantic information. This article introduces a method for clustering documents based on semantic similarity. Document summaries are identified from the Wikipedia and IMDB datasets and then processed using the NLTK dictionary. A vector space is afterwards modeled with TFIDF, and the clustering is performed with the HAC and K-mean algorithms. The results are compared and visualized as an interactive webpage.

International Conference on Advanced Science and Engineering (2nd ICOASE)
 2021-05
Semantic Document Clustering using K-means algorithm and Ward's Method

Nowadays, in the age of technology, textual documents are growing rapidly over the Internet. Offline and online documents, websites, e-mails, and social network and blog posts are archived in electronic structured databases, and it is very hard to maintain and retrieve these documents without acceptable ranking and on-demand clustering. This paper presents an approach based on semantic similarity for clustering documents using the NLTK dictionary. Synopses from the IMDB and Wikipedia datasets are identified, tokenized, and stemmed; next, a vector space is constructed with TFIDF, and the clustering is done with Ward's method and the K-mean algorithm. WordNet is also used to cluster documents semantically. The results are visualized and presented as an interactive website describing the relationships between all clusters. For each algorithm, three scenarios are considered: 1) without preprocessing, 2) preprocessing without stemming, and 3) preprocessing with stemming. The Silhouette metric and seven other metrics are used to measure similarity across five different datasets. With the K-means algorithm, the best Silhouette similarity ratio over all clusters was acquired with the nltk-Reuters dataset, and the highest ratio is at k=10. Similarly, with Ward's algorithm, the highest Silhouette similarity ratio over all clusters was obtained using the IMDB and Wiki top-100-movies and nltk-brown datasets together, and the best ratio is at k=5 with the IMDB and Wiki top-100-movies dataset. The results are compared with the literature, and the outcome shows that Ward's method outperforms K-means for small datasets.
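The core comparison can be sketched as follows. This is a hedged, minimal version with invented toy synopses in place of the IMDB/Wikipedia data: the same TFIDF vectors are clustered by both K-means and Ward-linkage agglomerative clustering (HAC), and each result is scored with the Silhouette metric.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.metrics import silhouette_score

# Toy synopses (made up); the paper clusters IMDB/Wikipedia movie synopses.
docs = [
    "a detective investigates a murder in the city",
    "the detective solves the murder case downtown",
    "a spaceship crew explores a distant planet",
    "astronauts travel to a far away planet in space",
]
X = TfidfVectorizer(stop_words="english").fit_transform(docs).toarray()

kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
ward_labels = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(X)

# Higher Silhouette (closer to 1) means tighter, better-separated clusters.
print(silhouette_score(X, kmeans_labels), silhouette_score(X, ward_labels))
```

On real data the interesting part is sweeping `n_clusters` (the paper's k) and comparing the two algorithms' Silhouette curves, which is how the k=10 and k=5 optima above were identified.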

International Conference on Advanced Science and Engineering (2nd ICOASE)
 2021-05
Glove Word Embedding and DBSCAN algorithms for Semantic Document Clustering

In recently developed document clustering, word embedding plays the primary role in constructing semantics, by considering and measuring how often a specific word appears in its context. Word2vec and GloVe are the two word embeddings most used in document clustering, but previous works have not considered combining GloVe word embeddings with the DBSCAN clustering algorithm. In this work, the Wikipedia and IMDB datasets are preprocessed with and without stemming and fed to the GloVe word embedding algorithm; the resulting word vectors are then passed to the DBSCAN clustering algorithm. Seven metrics are used to evaluate the experiments: Silhouette average, purity, accuracy, F1, completeness, homogeneity, and NMI score. The experimental results are compared with those of TFIDF and K-means on six datasets, and this work outperforms the TFIDF and K-means approach on the four main evaluation metrics and in CPU time consumption.
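The second stage of that pipeline can be sketched in isolation. The two-dimensional vectors below are invented stand-ins for averaged GloVe document embeddings (which would normally be built from a pretrained embedding file); DBSCAN then finds dense groups and, unlike K-means, labels isolated documents as noise.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Stand-in document vectors; real ones would be averaged GloVe word vectors.
doc_vectors = np.array([
    [0.90, 0.10], [0.85, 0.15],   # two "crime" documents near each other
    [0.10, 0.90], [0.15, 0.85],   # two "sci-fi" documents near each other
    [0.50, 0.50],                 # an isolated document
])

# eps: neighbourhood radius; min_samples: density threshold for a core point.
labels = DBSCAN(eps=0.2, min_samples=2).fit_predict(doc_vectors)
print(labels)  # -> [ 0  0  1  1 -1]; -1 marks the noise point
```

Not needing a preset cluster count, and tolerating noise documents, are the properties that make DBSCAN an interesting counterpart to the K-means baseline in the comparison above.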

International Conference on Advanced Science and Engineering (2nd ICOASE)
 2021-05
Clustering Document based Semantic Similarity System using TFIDF and K-Mean

The steady success of the Internet has led to an enormous rise in the volume of electronic text records, and techniques for organizing these materials into meaningful bundles are increasingly important. The standard document clustering approach focused on statistical characteristics and clustered using a syntactic rather than a semantic notion of similarity. This paper provides a new way to group documents based on textual similarity. Text synopses from the Wikipedia and IMDB datasets are identified, tokenized, and stop-word filtered using the NLTK dictionary. The next step is to build a vector space with TFIDF and cluster it with the K-mean algorithm. The results were obtained under three proposed scenarios: 1) no preprocessing, 2) preprocessing without stemming, and 3) preprocessing with stemming. Good similarity ratios were obtained for the internal evaluation when using the txt-sentoken dataset for all K values, with the best ratio at K=20. For the external evaluation, purity measures were obtained; the V-measure on txt-sentoken and the accuracy on nltk-Reuters gave the best results in all three scenarios at K=20. As for time, the maximum time was consumed in the first scenario (no preprocessing), and the minimum time was recorded in the second scenario (without stemming).

2018 International Conference on Advanced Science and Engineering (ICOASE)
 2018-10
Distributed Cloud Computing and Distributed Parallel Computing: A Review

In this paper, we present a discussion of two of the hottest topics in this area, namely distributed parallel processing and distributed cloud computing. Various aspects are discussed in this review, such as whether these topics have been treated together in any previous works. Other aspects reviewed include the algorithms simulated in both distributed parallel computing and distributed cloud computing, where the goal is to process tasks over resources and then redistribute the computation among the servers for the sake of optimization; this helps improve system performance to the desired rates. In our review, we present some articles that explain the design of applications in distributed cloud computing, while others introduce the concept of decreasing response time in distributed parallel computing.

2018 International Conference on Advanced Science and Engineering (ICOASE)
 2018-10
Impact Analysis of HTTP and SYN Flood DDoS Attacks on Apache 2 and IIS 10.0 Web Servers

Nowadays, continuous access to Internet services is vital for most people. However, due to Denial of Service (DoS) attacks and their severe variant, Distributed Denial of Service (DDoS), online services sometimes become unavailable to users. Besides being dangerous and having a serious impact on Internet consumers, this attack comes in multiple types, such as Slowloris, ping of death, and UDP, ICMP, and SYN floods. In this paper, the effect of HTTP and SYN flood attacks on the most recent and widely used web servers is studied and evaluated. A systematic performance analysis is performed on Internet Information Services 10.0 (IIS 10.0) on Windows Server 2016 and Apache 2 on Linux Ubuntu 16.04 Long Term Support (LTS) server. Furthermore, the key performance metrics are average response time, average CPU usage, and standard deviation, as measures of the responsiveness, efficiency, and stability of the web …

2018 International Conference on Advanced Science and Engineering (ICOASE)
 2018-10
Internet of Things Security: A Survey

The Internet of Things (IoT) is a huge number of objects that communicate over a network or the Internet. These objects combine electronics, sensors, and software that controls how the other parts of the object work. Each object generates and collects data from its environment using sensors and transfers it to other objects or to a central database through a channel. Protecting this generated data and its transfer is one of the biggest challenges in IoT today, and one of the biggest concerns of all organizations that use IoT technology. In this paper, the most crucial research related to security in the IoT field is reviewed and discussed, taking into account the great power of quantum computers. Significant attributes of these studies are compared. IoT security ranges from software-layer security, board and chip, and vulnerable cryptography algorithms to protocol and network …

Proceedings of the 5th International Conference on Computing and Informatics
 2015-08
A SURVEY OF EXPLORATORY SEARCH SYSTEMS BASED ON LOD RESOURCES

The fact that the existing Web allows people to effortlessly share data over the Internet has resulted in the accumulation of vast amounts of information on the Web. Therefore, a powerful search technology that allows retrieval of relevant information is one of the main requirements for the success of the Web, a task complicated further by the many different formats used to store information. Semantic Web technology plays a major role in resolving this problem by permitting search engines to retrieve meaningful information. An exploratory search system, a special information-seeking and exploration approach, supports users who are unfamiliar with a topic, or whose search goals are vague and unfocused, in learning about and investigating a topic through a set of activities. To achieve exploratory search goals, Linked Open Data (LOD) can be used to help search systems retrieve related data so that the investigation task runs smoothly. This paper provides an overview of Semantic Web technology, Linked Data, and search strategies, followed by a survey of state-of-the-art exploratory search systems based on LOD. Finally, the systems are compared in various aspects, such as algorithms, result rankings, and explanations.

Presentation

University of Zakho
2017-01
Internet Communication

For the new coming students to the University

 2017
Eastern Mediterranean University
2015-12
Semantic web (Foundation – Architecture – Languages – Tools)

Development of the Web • Limitations of the current Web • Introduction to Semantic Web • Semantic Web Architecture and Languages • Semantic Web Tools • Who actually does the Semantic Web?

 2015
Istanbul Sabahattin Zaim University
2015-08
A Survey of Exploratory Search Systems Based on LOD Resources

WWW • Search Strategies • Semantic Web • Linked Data • Linked data browsers • Linked data recommenders • Linked data based exploratory search systems • Discussions • Computing Semantic Similarity • Database technique

 2015
School of Computing and Technology Eastern Mediterranean University
2014-11
The Semantic Web

• Motivation – Development of the Web – Limitations of the current Web • Technical Solution – Introduction to Semantic Web – Semantic Web Architecture and Languages – Semantic Web Tools

 2014
University of Zakho
2014-05
Database Teaching in Different Universities

Databases are an important part of computer science. • Different teaching methodologies are practiced in the basic database course taught in different universities. • Matching the course to the class level is a critical task. • The study is based on research conducted through a questionnaire among university students in three different universities. • The investigation was performed with a phenomenographic research approach among university staff who graduated from the Duhok, Nawroz, and Mosul universities. • It investigates how, and how well, they learned the basic database course during their bachelor's degree.

 2014
University of Zakho
2013-09
E-Learning System Design: Teacher-Student Websites

E-learning System Design Workshop: Teacher-Student Websites

 2013
University of Zakho
2012-04
Magnet Links

Background • Client-Server vs. Peer to Peer Model. • BitTorrent Protocol. • DHT Networks. • Peer Exchange. • Magnet Links • History • Use of Content Hashes • Technical Description • The Pirate Bay

 2012
Faculty of Science at University of Zakho
2011-11
Bit Torrent Protocol

BitTorrent Protocol Introduction, description and operation

 2011

Workshop

University of Zakho
2013-09
E-Learning System Design: Teacher-Student Websites

E-learning System Design Workshop: Teacher-Student Websites

 2013