Top Control Theory Research Articles in 2020

GENETIC ALGORITHM FOR EXAM TIMETABLING PROBLEM - A SPECIFIC CASE FOR JAPANESE UNIVERSITY FINAL PRESENTATION TIMETABLING

    Jiawei Li and Tad Gonsalves, Sophia University, Tokyo, Japan

    ABSTRACT

    This paper presents a Genetic Algorithm approach to a specific examination timetabling problem that is common in Japanese universities. The model is programmed in Excel VBA and can be run directly on Microsoft Excel worksheets. It uses a direct chromosome representation, and to satisfy hard and soft constraints it implements a constraint-based initialization operation, a constraint-based crossover operation, and a penalty-point system. To further improve result quality, the paper introduces an enhancement called initial-population pre-training. The proposed model was tested on real data from Sophia University, Tokyo, Japan. It produces acceptable results, and a comparison of results shows that the initial-population pre-training approach improves result quality.
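
    As a rough illustration of these ingredients, the sketch below (Python rather than the paper's Excel VBA, with hypothetical slot and examiner data) shows a direct chromosome, a penalty-point score, and a plain uniform crossover standing in for the paper's constraint-based operator:

```python
import random

# Illustrative stand-ins, not the paper's VBA code: 20 slots, 50
# presentations, one examiner per presentation (all hypothetical).
SLOTS, PRESENTATIONS = 20, 50
examiner_of = [random.randrange(10) for _ in range(PRESENTATIONS)]

def random_chromosome():
    # Direct representation: chromosome[i] = time slot of presentation i.
    return [random.randrange(SLOTS) for _ in range(PRESENTATIONS)]

def penalty(chromosome):
    # Penalty-point system: one point whenever an examiner is assigned
    # two presentations in the same slot (an example constraint clash).
    seen, points = set(), 0
    for pres, slot in enumerate(chromosome):
        key = (examiner_of[pres], slot)
        points += key in seen
        seen.add(key)
    return points

def crossover(a, b):
    # Uniform crossover; the paper's constraint-based operator would
    # additionally steer gene choices to keep hard constraints satisfied.
    return [ga if random.random() < 0.5 else gb for ga, gb in zip(a, b)]

best = min((random_chromosome() for _ in range(100)), key=penalty)
print(penalty(best))   # fewer penalty points = better timetable
```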

    KEYWORDS

    Examination timetabling problem, Excel VBA, Direct chromosome representation, Genetic Algorithm Improvement.



    Full Paper:
    https://aircconline.com/csit/papers/vol10/csit101701.pdf


    Volume Link:
    http://airccse.org/csit/V10N17.html



AN OPTIMIZED CLEANING ROBOT PATH GENERATION AND EXECUTION SYSTEM USING CELLULAR REPRESENTATION OF WORKSPACE

    Qile He¹ and Yu Sun², ¹Webber Academy, Calgary, Alberta, Canada, ²California State Polytechnic University, Pomona, USA

    ABSTRACT

    Many robot applications depend on solving the Complete Coverage Path Planning (CCPP) problem. In particular, robot vacuum cleaners have seen increased use in recent years, and some models offer room-mapping capability using sensors such as LiDAR. With the addition of room mapping, applied robotic cleaning has begun to transition from random-walk and heuristic path planning to an environment-aware approach. In this paper, a novel solution for the pathfinding and navigation of indoor robot cleaners is proposed. The proposed solution plans a path from an a priori cellular decomposition of the work environment. The planned path achieves complete coverage of the map and reduces duplicate coverage. The solution is implemented inside the ROS framework and validated with Gazebo simulation. The proposed algorithm is evaluated with metrics that capture efficiency in terms of speed, duplicate coverage, and distance travelled.
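
    For intuition, here is a minimal, generic boustrophedon (lawn-mower) sweep over a grid of cells, with free cells marked 1; it is not the authors' ROS implementation, only the textbook idea of visiting each decomposed cell once:

```python
# Serpentine sweep over a grid decomposition of the workspace.
def coverage_path(grid):
    path = []
    for r, row in enumerate(grid):
        # Alternate sweep direction on each row to avoid backtracking.
        cols = range(len(row)) if r % 2 == 0 else reversed(range(len(row)))
        for c in cols:
            if grid[r][c]:          # visit only free cells
                path.append((r, c))
    return path

free = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]
print(coverage_path(free))   # each free cell appears exactly once
```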

    KEYWORDS

    Complete Coverage Path Planning, Mobile Robots, Graph Theory.


    For More Details:
    https://aircconline.com/csit/papers/vol10/csit101502.pdf


    Volume Link:
    http://airccse.org/csit/V10N15.html


Parallel Data Extraction Using Word Embeddings

    Pintu Lohar and Andy Way, ADAPT Centre, Dublin City University, Ireland

    ABSTRACT

    Building a robust MT system requires a sufficiently large parallel corpus to be available as training data. In this paper, we propose to automatically extract parallel sentences from comparable corpora without using any MT system or even any parallel corpus at all. Instead, we use cross-lingual information retrieval (CLIR), average word embeddings, text similarity, and a bilingual dictionary, thus saving a significant amount of time and effort, as no MT system is involved in this process. We conduct experiments on two different kinds of data: (i) formal texts from the news domain, and (ii) user-generated content (UGC) from hotel reviews. The automatically extracted sentence pairs are then added to the already available parallel training data, and extended translation models are built from the concatenated data sets. Finally, we compare the performance of our new extended models against the baseline models built from the available data. The experimental evaluation reveals that our proposed approach is capable of improving the translation outputs for both the formal texts and the UGC.
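
    A minimal sketch of the averaging-and-matching step is given below; `embeddings` (word vectors) and `bilingual_dict` are assumed inputs and the threshold is arbitrary, so this only illustrates the idea rather than reproducing the paper's CLIR pipeline:

```python
import numpy as np

def sentence_vector(tokens, embeddings, dim=300):
    # Average word embedding; dimension 300 is an assumption.
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def extract_pairs(src_sents, tgt_sents, embeddings, bilingual_dict,
                  threshold=0.7):
    # Map source words through the bilingual dictionary, then keep the
    # most similar target sentence if it clears the similarity threshold.
    pairs = []
    tgt_vecs = [sentence_vector(t.split(), embeddings) for t in tgt_sents]
    for s in src_sents:
        translated = [bilingual_dict.get(w, w) for w in s.split()]
        sv = sentence_vector(translated, embeddings)
        scores = [cosine(sv, tv) for tv in tgt_vecs]
        best = int(np.argmax(scores))
        if scores[best] >= threshold:
            pairs.append((s, tgt_sents[best]))
    return pairs
```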

    KEYWORDS

    Machine Translation, parallel data, user-generated content, word embeddings, text similarity, comparable corpora.


    For More Details:
    https://aircconline.com/csit/papers/vol10/csit101521.pdf


    Volume Link:
    http://airccse.org/csit/V10N15.html


Deep Learning Roles Based Approach to Link Prediction in Networks

    Aman Gupta and Yadul Raghav, Indian Institute of Technology (BHU), Varanasi, India

    ABSTRACT

    The problem of predicting links has gained much attention in recent years due to its vast applications in domains such as sociology, network analysis, and information science. Many methods have been proposed for link prediction, such as RA, AA, and CCLP. These methods require hand-crafted structural features to calculate similarity scores between pairs of nodes in a network. Some methods use local structural information while others use the global information of a graph, but they do not indicate which properties are more informative than others. An in-depth analysis of these methods suggests that one way to overcome this problem is to consider both network structure and node attribute information to capture discriminative features for the link prediction task. We propose a deep learning Autoencoder-based Link Prediction (ALP) architecture for the latent representation of a graph, unified with non-negative matrix factorization to automatically determine the underlying roles in a network and then assign a mixed membership of these roles to each node. The idea is to use these roles as a feature vector for the link prediction task. Cosine similarity is then applied to the resulting features to compute the pairwise similarity score between nodes. We present the performance of the algorithm on real-world datasets, where it gives competitive results compared to other algorithms.
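
    As a rough sketch of the role-extraction and scoring steps (omitting the autoencoder part of ALP), the snippet below factorizes a toy adjacency matrix with scikit-learn's NMF and scores a candidate link by the cosine similarity of the two nodes' role vectors:

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy 4-node graph; in the paper the factorized features come from an
# autoencoder's latent representation, not the raw adjacency matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Each row of `roles` is a node's mixed membership over 2 latent roles.
roles = NMF(n_components=2, init="nndsvda", random_state=0).fit_transform(A)

def link_score(u, v):
    # Cosine similarity of role vectors as the pairwise link score.
    nu, nv = np.linalg.norm(roles[u]), np.linalg.norm(roles[v])
    return float(roles[u] @ roles[v] / (nu * nv)) if nu and nv else 0.0

print(link_score(0, 3))   # higher score suggests a more likely link
```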

    KEYWORDS

    Link Prediction, Deep Learning, Autoencoder, Latent Representation, Non-Negative Matrix Factorization.


    For More Details:
    https://aircconline.com/csit/papers/vol10/csit101416.pdf


    Volume Link:
    http://airccse.org/csit/V10N14.html

A GENETIC PROGRAMMING BASED HYPERHEURISTIC FOR PRODUCTION SCHEDULING IN APPAREL INDUSTRY

    Cecilia E. Nugraheni, Luciana Abednego, and Maria Widyarini, Parahyangan Catholic University, Bandung, Indonesia

    ABSTRACT

    The apparel industry is a type of textile industry. One of the scheduling problems found in apparel production can be classified as a Flow Shop Scheduling Problem (FSSP). GPHH is a genetic-programming-based hyper-heuristic technique for solving FSSP [1]. The algorithm aims to generate new heuristics from two basic (low-level) heuristics, namely the Palmer Algorithm and the Gupta Algorithm. This paper describes the implementation of the GPHH algorithm and the results of experiments conducted to determine its performance. The experimental results show that the proposed algorithm is promising and performs better than both the Palmer Algorithm and the Gupta Algorithm.
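
    For reference, one of the two low-level heuristics is easy to sketch: Palmer's slope index orders jobs so that those whose processing times grow toward later machines come first. The data below is illustrative, not from the paper:

```python
# Palmer's heuristic for flow-shop sequencing.
def palmer_order(proc_times):
    # proc_times[j][i] = processing time of job j on machine i.
    m = len(proc_times[0])
    def slope(job):
        # Weight 2i - m - 1 (i = 1..m) grows with machine position, so
        # jobs heavy on later machines get a larger slope index.
        return sum((2 * (i + 1) - m - 1) * t for i, t in enumerate(job))
    return sorted(range(len(proc_times)),
                  key=lambda j: slope(proc_times[j]), reverse=True)

jobs = [[3, 4, 5],   # times rise toward later machines -> schedule early
        [5, 4, 3],   # times fall -> schedule late
        [2, 2, 2]]
print(palmer_order(jobs))   # [0, 2, 1]
```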

    KEYWORDS

    Hyper-heuristic, Genetic Programming, Palmer Algorithm, Gupta Algorithm, Flow Shop Scheduling Problem, Apparel Industry


    For More Details:
    https://aircconline.com/csit/papers/vol10/csit101212.pdf


    Volume Link:
    http://airccse.org/csit/V10N12.html


VIRTFUN: FUNCTION OFFLOAD METHODOLOGY TO VIRTUALIZED ENVIRONMENT

    Carlos A. Petry and Rodolfo J. de Azevedo, Institute of Computing, University of Campinas, Campinas, Brazil

    ABSTRACT

    The use of virtual machines (VMs) has become popular, with substantial growth for both personal and commercial use, especially supported by the progress of hardware and software virtualization technologies. There are several reasons for this adoption, such as cost, customization, scalability, and flexibility. Distinct application domains, such as scientific, financial, and industrial, spanning from embedded to cloud systems, take advantage of such machines to meet computational processing demands. However, there are setbacks: hardware handling, resource use, performance, and management. This growth demands effective support from the underlying virtualization infrastructure, which directly affects the capacity of the hosts in the data centers and cloud environments that run these VMs. It is evident that native host processing performs better than VMs, especially when using accelerator devices, where the common solution is to assign each device to a specific VM instead of sharing it among multiple VMs. Beyond performance issues inside the host, we also need to consider VM performance when using accelerator devices. In this context, it is necessary to provide efficient mechanisms to manage and run VMs that can take advantage of high-performance devices, like FPGAs, or even of software resources on the host. To address this challenge, this paper proposes VirtFun, a methodology to improve the communication performance of applications running on VMs. To this end, we developed a framework able to offload pieces of an application's code (vFunctions) to the host by means of secure data sharing between the application and the device. The results of our experiments demonstrate significant acceleration for guest-application vFunctions: the speedup reached 340% compared to conventional network execution, with a maximum slowdown of 2.8% in the worst case and close to 0% in the best case relative to native execution.
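
    The core data path is easiest to see in miniature. The sketch below mimics it within a single Python process using a shared-memory buffer: the "guest" writes a vFunction argument, the "host" computes and writes the result back. This is purely conceptual; the real framework crosses the VM boundary and targets host devices such as FPGAs:

```python
from multiprocessing import shared_memory
import struct

# "Guest" side: place the argument of an offloaded call in shared memory
# instead of sending it over the network. Buffer name is hypothetical.
shm = shared_memory.SharedMemory(create=True, size=8, name="vfunc_args")
struct.pack_into("d", shm.buf, 0, 21.0)

# "Host" side: read the argument, run the function, write the result back.
(x,) = struct.unpack_from("d", shm.buf, 0)
struct.pack_into("d", shm.buf, 0, x * 2.0)   # the offloaded computation

# "Guest" side again: pick up the result from the same buffer.
(result,) = struct.unpack_from("d", shm.buf, 0)
print(result)   # 42.0
shm.close()
shm.unlink()
```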

    KEYWORDS

    Virtualization, performance, virtual machine, shared memory.


    For More Details:
    https://aircconline.com/csit/papers/vol10/csit101106.pdf


    Volume Link:
    http://airccse.org/csit/V10N11.html


TRUSTED COMPUTING IN DATA SCIENCE: VIABLE COUNTERMEASURE IN RISK MANAGEMENT PLAN

    Uchechukwu Emejeamara¹, Udochukwu Nwoduh² and Andrew Madu², ¹IEEE Computer Society, Connecticut Section, USA, ²Federal Polytechnic Nekede, Nigeria

    ABSTRACT

    The need for secure data systems has prompted the constant reinforcement of security systems in the attempt to prevent and mitigate risks associated with information security. The purpose of this paper is to examine the effectiveness of trusted computing in data science as a countermeasure in risk management planning. In the information age, it is evident that companies cannot ignore the impact of data, specifically big data, on their decision-making processes. Big data supports not only the proactive capacity to prevent unwarranted situations while exploiting opportunities, but also the ability to keep pace with market competition. However, since overreliance on data exposes the company, trusted computing components are necessary to guarantee that the data acquired, stored, and processed remains secure from internal and external malice. Numerous measures can be adopted to counter the risks associated with data exploitation and exposure due to data science practices. Nonetheless, trusted computing is a reasonable starting point for protecting provenance systems and big data systems through the establishment of a 'chain of trust' among the various computing components and platforms. The research reveals that trusted computing is most effective when combined with other hardware-based security solutions, since attack vectors can follow diverse paths. The results demonstrate the technology's potential for application in risk management.
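
    The 'chain of trust' rests on one small primitive that is easy to illustrate: a TPM Platform Configuration Register (PCR) is never written directly, only extended by hashing, so its final value commits to the entire boot-measurement sequence. A minimal sketch follows (SHA-256, with illustrative stage names):

```python
import hashlib

def extend(pcr, measurement):
    # PCR extend: new value = H(old value || H(measurement)).
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)                       # PCRs start zeroed at boot
for stage in [b"firmware", b"bootloader", b"kernel"]:
    pcr = extend(pcr, stage)          # each boot stage is measured in order

print(pcr.hex())   # a tampered stage anywhere changes this attestable value
```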

    KEYWORDS

    Trusted Computing, Security, Data, Data Science, Provenance, Risk Management, Big Data, Trusted Platform Module, Platform Configuration Register.


    For More Details:
    https://aircconline.com/csit/papers/vol10/csit100602.pdf


    Volume Link:
    http://airccse.org/csit/V10N06.html


ENHANCING NETWORK FORENSICS WITH PARTICLE SWARM AND DEEP LEARNING: THE PARTICLE DEEP FRAMEWORK

    Nickolaos Koroniotis and Nour Moustafa, School of Engineering and Information Technology, University of New South Wales Canberra, Canberra, Australia

    ABSTRACT

    The popularity of IoT smart things is rising due to the automation they provide and their effect on productivity. However, it has been proven that IoT devices are vulnerable to both well-established and new IoT-specific attack vectors. In this paper, we propose the Particle Deep Framework (PDF), a new network forensic framework for IoT networks that utilises Particle Swarm Optimisation to tune the hyperparameters of a deep MLP model and improve its performance. The PDF is trained and validated using the Bot-IoT dataset, a contemporary network-traffic dataset that combines normal IoT and non-IoT traffic with well-known botnet-related attacks. Through experimentation, we show that the performance of the deep MLP model is vastly improved, achieving an accuracy of 99.9% and a false alarm rate of close to 0%.
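
    As a hedged sketch of the tuning loop (not the authors' code), the snippet below runs a standard PSO over a two-dimensional hyperparameter space; `evaluate_mlp` is a toy surrogate for training and validating the deep MLP on Bot-IoT:

```python
import random

def evaluate_mlp(lr, hidden):
    # Toy surrogate objective; the real framework would train an MLP
    # with these hyperparameters and return a validation score.
    return -((lr - 0.01) ** 2) - ((hidden - 64) ** 2) * 1e-6

def pso(n_particles=10, iters=30, w=0.7, c1=1.5, c2=1.5):
    # Particles live in (learning rate, hidden units) space.
    pos = [[random.uniform(1e-4, 0.1), random.uniform(8, 256)]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pscore = [evaluate_mlp(*p) for p in pos]
    g = max(range(n_particles), key=lambda i: pscore[i])
    gbest, gscore = pbest[g][:], pscore[g]
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(2):   # standard velocity/position update
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - p[d])
                             + c2 * random.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            s = evaluate_mlp(*p)
            if s > pscore[i]:
                pbest[i], pscore[i] = p[:], s
                if s > gscore:
                    gbest, gscore = p[:], s
    return gbest, gscore

print(pso())   # best (learning rate, hidden units) found by the swarm
```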

    KEYWORDS

    Network forensics, Particle swarm optimization, Deep Learning, IoT, Botnets.


    For More Details:
    https://aircconline.com/csit/papers/vol10/csit100304.pdf


    Volume Link:
    http://airccse.org/csit/V10N03.html





