Topic Tracking and Visualization Method using Independent Topic Analysis
Takahiro Nishigaki1 and Kenta Yamamoto2 and Takashi Onoda1, 1Aoyama Gakuin University, Kanagawa, Japan and 2Graduate School of Science and Engineering, Aoyama Gakuin University, Kanagawa, Japan
In this paper, we propose a topic tracking and visualization method using Independent Topic Analysis. Independent Topic Analysis is a method for extracting mutually independent topics from document data using Independent Component Analysis. In recent years, as the amount of information has increased, there has been growing demand to analyse topic transitions in time-series documents and to track topics. For example, such analysis makes it possible to investigate the causes of trends and hoaxes on SNS and to predict future changes. However, Independent Topic Analysis has had no topic tracking method, nor any way to visualize topic tracking. Therefore, we extracted topics in each period, analysed topic transitions based on the similarity between topics, and proposed a method for tracking these four topics. In addition, we developed an interface that visualizes the time-series changes of the tracked topics and obtained effective results through user experiments.
Data Mining, Independent Topic Analysis, Text Mining, Topic Tracking
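The abstract does not specify the similarity measure used to link topics across periods; a minimal sketch, assuming cosine similarity between topic word-weight vectors of consecutive periods (the function names and threshold are illustrative, not from the paper), might look like:

```python
import math

def cosine(u, v):
    # Cosine similarity between two topic word-weight vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def track_topics(prev_topics, curr_topics, threshold=0.6):
    """For each topic of the previous period, find the most similar topic
    of the current period; a match above the threshold counts as a
    continuation of the same topic."""
    links = {}
    for i, p in enumerate(prev_topics):
        sims = [cosine(p, c) for c in curr_topics]
        j = max(range(len(sims)), key=sims.__getitem__)
        if sims[j] >= threshold:
            links[i] = j  # topic i of period t continues as topic j of t+1
    return links

# Toy topic vectors over a shared vocabulary of four words.
period1 = [[0.9, 0.1, 0.0, 0.0], [0.0, 0.0, 0.8, 0.2]]
period2 = [[0.0, 0.1, 0.9, 0.1], [0.8, 0.2, 0.0, 0.0]]
print(track_topics(period1, period2))  # {0: 1, 1: 0}
```

Chaining such links period by period yields the topic trajectories that the visualization interface would display.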
The Impact of AI on the Design of Reception Robot: A Case Study
Nguyen Dao Xuan Hai1 and Nguyen Truong Thinh2, 1Faculty of Mechanical Engineering, HCMC University of Technology and Education Ho Chi Minh City, Viet Nam and 2Department of Mechatronics, HCMC University of Technology and Education Ho Chi Minh City, Viet Nam
Service robots have recently attracted a lot of attention from the public. Integrated with artificial intelligence, modern service robots have great potential, as they are capable of performing many sophisticated human tasks. In this paper, the service robot named "MiABot", a receptionist robot, is described; it is a mobile robot with an autonomous platform that uses a differential drive and is controlled by a mini PC. The MiABot senses its surroundings with the aid of various electronic sensors, while mechanical actuators move it around. The robot's behaviour is determined by the program loaded onto its microcontrollers and PC, together with artificial intelligence. The experimental results demonstrated the feasibility and advantages of this predictive control for the trajectory tracking of a mobile robot. The service robot is designed to assist humans with reception tasks and will interact closely with groups of humans in their everyday environment. This means that it is essential to create models for natural and intuitive communication between humans and robots. The paper presents the theoretical basis of artificial intelligence and its application in the field of natural language processing. Besides, the robot software architecture is designed and developed. Robot operation modes and implementation are addressed and discussed; they cover the algorithm for human-robot interaction in natural language, as well as a simple approach for generating robot responses in arm gestures and emotion. Finally, system evaluation and testing are addressed.
Network Protocols, Wireless Network, Mobile Network, Virus, Worms & Trojan
Two Staged Prediction of Gastric Cancer Patient’s Survival Via Machine Learning Techniques
Peng Liu, Liuwen Li, Chen Yu, and Shumin Fei
Cancer is one of the most common causes of death in the world, and gastric cancer has the highest incidence in Asia. Predicting gastric cancer patients' survivability can inform patient care decisions and help doctors prescribe personalized medicine. Classification techniques have been widely used to predict the survivability of cancer patients. However, very little attention has been paid to patients who cannot survive. In this research, we consider survival prediction to be a two-staged problem. The first stage predicts the patient's five-year survivability. If the patient's predicted outcome is death, the second stage predicts the remaining lifespan of the patient. Our research proposes a custom ensemble method that integrates multiple machine learning algorithms. It exhibits a significant predictive improvement in both stages of prediction compared to state-of-the-art machine learning techniques. The base machine learning techniques include Decision Trees, Random Forest, AdaBoost, Gradient Boosting Machine (GBM), Artificial Neural Network (ANN), and the most popular GBM framework, LightGBM. The model is comprehensively evaluated on open-source cancer data provided by the Surveillance, Epidemiology, and End Results (SEER) Program, in terms of accuracy, area under the curve, F-score, precision, recall, and training and prediction time in the classification stage, and Root Mean Squared Error, Mean Absolute Error, and coefficient of determination (R2) in the regression stage.
Gastric Cancer, Cancer Survival Prediction, Machine Learning, Ensemble Learning, SEER
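The two-staged pipeline described above can be sketched in a few lines. The class name and the stand-in rule-based models below are hypothetical illustrations (the paper uses fitted ensembles such as LightGBM for both stages):

```python
class TwoStageSurvivalModel:
    """Sketch of the two-staged scheme: stage 1 classifies five-year
    survivability; only patients predicted to die proceed to stage 2,
    which regresses their remaining lifespan (here, in months)."""

    def __init__(self, classifier, regressor):
        self.classifier = classifier  # any fitted binary classifier
        self.regressor = regressor    # any fitted regressor

    def predict(self, patient):
        if self.classifier(patient):
            return ("survives_5_years", None)
        return ("dies_within_5_years", self.regressor(patient))

# Hypothetical stand-in models: a rule on tumour stage, a linear lifespan guess.
clf = lambda p: p["stage"] <= 2
reg = lambda p: max(6.0, 48.0 - 12.0 * p["stage"])

model = TwoStageSurvivalModel(clf, reg)
print(model.predict({"stage": 1}))  # ('survives_5_years', None)
print(model.predict({"stage": 4}))  # ('dies_within_5_years', 6.0)
```

The design choice worth noting is that the regression stage is trained and evaluated only on the non-surviving subpopulation, which is why the two stages use different metrics.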
Performance Evaluation of PRINCE-Based Glitch PUF with Several Selection Parts
Yusuke Nozaki and Masaya Yoshikawa, Department of Information Engineering, Meijo University, Nagoya, Japan
To enhance Internet of Things (IoT) security, lightweight ciphers and physically unclonable functions (PUFs) have attracted attention. Unlike the standard AES cipher, lightweight ciphers can be implemented on the tightly constrained embedded devices used in IoT. A PUF is a technology that extracts manufacturing variations in an LSI as the device's unique ID. Since manufacturing variations cannot be cloned physically, the ID generated by a PUF can be used for device authentication. In fact, a method combining a lightweight cipher (PRINCE) and a PUF (glitch PUF), called the PRINCE-based glitch PUF, has been proposed in recent years. However, the PRINCE-based glitch PUF was not optimized for PUF performance. Therefore, this study evaluates the detailed PUF performance of the PRINCE-based glitch PUF while changing its parameters. Experiments using a field-programmable gate array verify the PUF performance of the PRINCE-based glitch PUF with several parameters.
Hardware Security, Physically Unclonable Function, Glitch PUF, PRINCE, Lightweight Cipher
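"PUF performance" is commonly quantified by metrics such as uniqueness, the average pairwise inter-chip Hamming distance of the generated IDs (ideally 50%). The abstract does not state which metrics were used, so the following is a generic sketch, with hypothetical device IDs:

```python
def hamming_distance(a, b):
    # Number of differing bits between two equal-length ID bit strings.
    return sum(x != y for x, y in zip(a, b))

def uniqueness(ids):
    """Average pairwise inter-chip Hamming distance as a percentage of
    the ID length; the ideal value for a PUF is 50%."""
    n, bits = len(ids), len(ids[0])
    total, pairs = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += hamming_distance(ids[i], ids[j])
            pairs += 1
    return 100.0 * total / (pairs * bits)

# IDs generated by three hypothetical devices from the same challenge.
ids = ["10110010", "01100110", "10011100"]
print(round(uniqueness(ids), 1))  # 58.3
```

A companion metric, reliability, would instead compare repeated IDs from the same device and should approach 100%.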
Enhancing Network Forensics with Particle Swarm and Deep Learning: The Particle Deep Framework
Nickolaos Koroniotis and Nour Moustafa, University of New South Wales, Canberra
With more than 7 billion devices deployed in 2018 and double that number in 2019, smart IoT things are becoming ever more popular, as they provide automated services that improve performance and productivity while reducing operating costs. However, IoT devices have been shown to be vulnerable to both well-established and new IoT-specific attack vectors. In this paper, we propose the Particle Deep Framework (PDF), a new network forensic framework for IoT networks that utilises Particle Swarm Optimisation to tune the hyperparameters of a deep MLP model and improve its performance. The PDF is trained and validated using the Bot-IoT dataset, a contemporary network-traffic dataset that combines normal IoT and non-IoT traffic with well-known botnet-related attacks. Through experimentation, we show that the performance of the deep MLP model is vastly improved, achieving an accuracy of 99.9% and a false alarm rate of close to 0%.
Network forensics, Particle swarm optimization, Deep Learning, Neural Networks, IoT, Botnets
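A minimal sketch of the PSO loop used for hyperparameter tuning is shown below. This is not the paper's implementation: the objective here is a toy quadratic standing in for the MLP's validation loss, and the swarm parameters (`w`, `c1`, `c2`) are textbook defaults:

```python
import random

def pso(objective, bounds, n_particles=10, iters=50, w=0.5, c1=1.5, c2=1.5):
    """Minimal particle swarm optimiser over a box of hyperparameters.
    In the paper's setting the objective would be the validation loss of
    the deep MLP; here it is a toy quadratic."""
    random.seed(0)
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + pull toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for validation loss over (learning_rate, hidden_units).
loss = lambda p: (p[0] - 0.01) ** 2 + (p[1] - 64) ** 2
best, best_val = pso(loss, [(0.0001, 0.1), (8, 128)])
print(best, best_val)
```

In practice each objective evaluation would train and validate one MLP configuration, which is why PSO's small evaluation budget matters.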
Attribute-based Encryption of Personal Health Record in Cloud Computing Environment: A Short Review
Yuping Yan, Mohammed B. M. Kamel and Peter Ligeti, Department of Informatics, Eötvös Loránd University, Budapest, Hungary
The Attribute-Based Encryption (ABE) scheme, as a new cryptographic primitive, shows its advantages in fine-grained access control and a flexible one-to-many encryption mode. Through an in-depth study, we trace the development, major work, and research status of ABE. This paper mainly introduces the basic concepts of ABE, analyzes its open research problems, namely key abuse, revocation, and multi-authority settings, and discusses its applications in real use cases, especially personal health records in the cloud computing environment.
attribute-based encryption, personal health record, cloud computing environment.
Ontology-Based Model for Security Assessment: Predicting Cyberattacks through Threat Activity Analysis
Pavel Yermalovich and Mohamed Mejri, Faculté des Sciences et de Génie, Université Laval, Québec City, Canada
The prediction of an attack is essential for the prevention of potential risk. Risk forecasting therefore contributes greatly to the optimization of information security budget planning. This article focuses on the ontology and stages of a cyberattack, as well as the main representatives of the attacking side and their motivations.
Cyberattack, cyberattack prediction, ontology, ontology of cyberattack, information security, cybersecurity, IT security, data security, threat activity.
Energy Assessment Approach during Context Integration in New Generation Ubiquitous Networks
Vinodini Gupta and Dr. Padma Bonde, Computer Science and Engineering Department, Shri Shankracharya Technical Campus, Bhilai, Chhattisgarh, India
Newer technological advancements in ubiquitous networking have boosted users' digital dependency. The indispensable role of machines in almost every sphere of life has facilitated Human-Computer Interaction (HCI). Quality of Experience (QoE) is therefore an important aspect for analysing the performance of networks. However, achieving the desired level of user satisfaction in a dynamic environment is quite challenging, which has led to the proliferation of Context-Aware Computing (CAC). This paper focuses on various context-gathering mechanisms and system architectures, presents various aspects of Activity Recognition (AR) in ubiquitous networking, and contemplates how to improve the Quality of Context (QoC).
Activity Recognition (AR), Context Aware Computing (CAC), Quality of Experience (QoE), Quality of Context (QoC), Network throughput
RPDroid: Android Malware Detection using Ranked Permissions
Madan Upadhayay, Ashutosh Sharma, Gourav Garg and Anshul Arora, Discipline of Mathematics and Computing, Department of Applied Mathematics, Delhi Technological University, Delhi, India
The number of malware attacks on the Android platform has escalated over the past few years. These attacks pose significant threats such as financial loss, information leakage, and system damage. Their seriousness can be gauged from the fact that around 25 million Android smartphones were infected with malware within the first half of 2019. Keeping these threats in mind, we aim to develop a static, permissions-based Android malware detector. In this work, we first find the permissions that are frequently present in normal and malicious apps and rank the permissions based upon their frequency in the normal and malware datasets. Additionally, we apply different support thresholds to remove unnecessary and redundant permissions from the rankings. Further, we propose a novel algorithm that uses the ranked permissions above the specified threshold together with machine learning algorithms to detect Android malware. The experimental results demonstrate that, using the Random Forest classifier and a 5% support threshold, the proposed algorithm achieves 91.96% detection accuracy with a minimal set of 19 permissions.
Android Malware, Permissions, Static Detection, Malware Detection
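The ranking-plus-support-threshold step can be sketched as follows. The tiny permission sets below are hypothetical examples, not the paper's dataset, and the function name is illustrative:

```python
from collections import Counter

def rank_permissions(apps, support=0.05):
    """Rank permissions by how frequently they occur across a dataset of
    apps, dropping those whose support (fraction of apps requesting them)
    falls below the threshold."""
    counts = Counter(p for perms in apps for p in perms)
    n = len(apps)
    kept = [(p, c / n) for p, c in counts.items() if c / n >= support]
    return sorted(kept, key=lambda x: -x[1])  # most frequent first

# Hypothetical tiny dataset: each app is a set of requested permissions.
malware_apps = [
    {"SEND_SMS", "READ_CONTACTS", "INTERNET"},
    {"SEND_SMS", "INTERNET"},
    {"READ_SMS", "INTERNET"},
]
print(rank_permissions(malware_apps, support=0.5))
```

In the paper's pipeline, one such ranking is built for the normal dataset and one for the malware dataset, and the surviving ranked permissions become the feature set for the classifiers.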
Playing Virtual Musical Drums by MEMS 3D Accelerometer Sensor Data and Machine Learning
Shaikh Farhad Hossain, Kazuhisa Hirose, Shigehiko Kanaya and Md. Altaf-Ul-Amin, Computational Systems Biology Lab, Graduate School of Information Science, Nara Institute of Science and Technology (NAIST), 8916-5, Takayama, Ikoma, Nara 630-0192, Japan
In our lives, music is a vital part of entertainment, and musical instruments are among its important elements. For example, the acoustic drum plays a vital role when a song is sung. In the modern era, the style of musical instruments is changing while keeping the tunes identical, as with the electronic drum. In this work, we have developed "Virtual Musical Drums" by combining MEMS 3D accelerometer sensor data and machine learning. Machine learning is spreading into all arenas of AI for problem-solving, and MEMS sensors are shrinking large physical systems into smaller ones. In this work, we designed eight virtual drums for two sensors. We achieved 91.42% detection accuracy in the simulation environment and 88.20% detection accuracy in the real-time environment with 20% window overlap. Although the detection accuracy was satisfactory, the virtual drum sound was unrealistic. Therefore, we implemented multiple-hit detection within a fixed interval, sound intensity calibration, and parallel sound-tune processing, and selected virtual musical drum sound files based on acoustic drum sound patterns and durations. Finally, we completed our "Playing Virtual Musical Drums" system and successfully played the virtual drums like an acoustic drum. This work demonstrates a different application of MEMS sensors and machine learning, using sensor data for music entertainment with high accuracy.
Virtual musical drum, MEMS, SHIMMER, support vector machines (SVM) and k-Nearest Neighbors (kNN)
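The "multiple hit detection within a fixed interval" step can be illustrated as a debounce on the acceleration magnitude stream. The threshold, interval, and function name below are illustrative assumptions, not values from the paper:

```python
def detect_hits(magnitudes, threshold=2.0, min_gap=5):
    """Detect drum hits as threshold crossings of the acceleration
    magnitude, suppressing repeated detections within a fixed sample
    interval so one physical strike registers as one hit."""
    hits, last = [], -min_gap
    for t, m in enumerate(magnitudes):
        if m >= threshold and t - last >= min_gap:
            hits.append(t)
            last = t
    return hits

# Synthetic magnitude stream: the first strike "rings" over several samples
# but must be counted once; a second strike follows later.
signal = [0.1, 0.2, 3.1, 2.8, 2.2, 0.4, 0.1, 2.9, 0.3]
print(detect_hits(signal))  # [2, 7]
```

Each detected hit index would then trigger the corresponding drum sound file, with intensity scaled by the peak magnitude.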
Information Retrieval in Data Science Curricula
Duaa Bukhari, College of Education, Information & Technology, Long Island University, New York, USA
In the past decade, the world has been transformed by the rapidly evolving field of data science (DS). Data science, as an emerging interdisciplinary field, combines elements of mathematics, statistics, computer science, and knowledge of a particular application domain for the purpose of extracting meaningful information from the increasingly sophisticated array of data available in many settings. This study conducted a content analysis of the curricula of DS master's programs in the United States to explore which disciplines DS programs cover at the graduate level. In addition, the present study discusses how DS curricula relate to the field of information retrieval.
Data Science, Information Retrieval, Curricula, Master’s Programs, DS curriculum
The use of Convolutional Neural Network for Malware Classification
Shah Rukh Sajjad1, Bi Jiana1*, Shah Zaib Sajjad2, 1College of Science and Technology, Bohai University, Jinzhou, Liaoning 121013, China and 2NFC Institute of Engineering and Technology, Multan, 60000, Pakistan
Digital security is confronting an immense risk from malware. In recent years, there has been an increase in the volume of malware, reaching above 980 million samples in 2019. To identify and classify this pernicious software, complex details and patterns among the samples must be gathered, segregated, and analyzed. In this regard, Convolutional Neural Networks (CNNs), an architecture of Deep Neural Networks (DNNs), can offer a more efficient and accurate solution than conventional neural network (NN) systems. In this paper, we look into the consequences of using conventional NN systems and the benefits of using CNNs on a sample of malware provided by Microsoft. In 2015, Microsoft announced a malware classification challenge and released more than 21,000 malware samples. Many interesting solutions were put forward by scientists and students around the world. Inspired by their efforts, we also put forward a method: we convert the malware binary files into images and then train a CNN model to identify and categorize these malware samples into their respective families. With this method, we achieved a high accuracy of 98.80%.
DNN (Deep Neural Network), CNN (Convolutional Neural Network), NN (Neural Network)
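The binary-to-image conversion is the key preprocessing step: each byte of the executable becomes one grayscale pixel. A minimal sketch is below; the fixed width and zero-padding are common conventions in this line of work, but the paper's exact parameters are not stated in the abstract:

```python
def bytes_to_image(data, width=16):
    """Reshape a malware binary's bytes into a 2D grayscale image
    (one byte = one pixel, 0-255), zero-padding the final row so
    every row has the same width."""
    rows = []
    for i in range(0, len(data), width):
        row = list(data[i:i + width])
        row += [0] * (width - len(row))  # zero-pad the last row
        rows.append(row)
    return rows

# Toy "binary": 20 bytes become a 2-row, 16-pixel-wide grayscale image.
img = bytes_to_image(bytes(range(20)), width=16)
print(len(img), len(img[0]))  # 2 16
```

The resulting images are then resized to a fixed shape and fed to the CNN, so that malware families with similar code layouts produce visually similar textures.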
An improved Safety Detection Algorithm Towards Deadlock Avoidance
Momotaz Begum, Omar Faruque, Md. Waliur Rahman Miah and Bimal Chandra Das, Assistant Professor, Lecturer, Associate Professor, Department of CSE, DUET, Gazipur-1707, Bangladesh
In an operating system, the allocation of resources to processes is a critical issue, because resources are limited and sometimes cannot be shared among processes. Ineffective resource allocation may cause a deadlock situation in the system. The banker's algorithm and some modified algorithms are available to handle deadlock situations. However, the complexities of these algorithms are quite high. This paper presents an innovative technique for safe-state detection in a system based on the maximum resource requirements of processes and the minimum resources available. In our approach, the resource requirements of each process are kept sorted in a linked list, where it is easy to check whether a request exceeds the available resources. In our experiments, we compare our approach with other methods, including the original banker's algorithm. The results show that our proposed method has lower time and space complexity than the other methods.
deadlock avoidance, resource allocation, time complexity, space complexity, banker’s algorithm
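The idea of checking sorted requirements against availability can be sketched for a single resource type. This is an illustrative reconstruction from the abstract, not the paper's algorithm (which handles its data via a linked list and may differ in detail):

```python
def is_safe(available, allocated, max_need):
    """Safety check for a single resource type: visit processes in
    ascending order of remaining need (the paper keeps them sorted in a
    linked list) and grant each in turn, reclaiming its allocation when
    it finishes. If the smallest pending need cannot be met, no order can
    succeed, so the state is unsafe."""
    need = [(m - a, a) for m, a in zip(max_need, allocated)]
    for remaining, alloc in sorted(need):  # ascending remaining need
        if remaining > available:
            return False
        available += alloc  # process finishes and releases its resources
    return True

# Three processes: current allocations and maximum claims.
print(is_safe(available=3, allocated=[1, 4, 2], max_need=[4, 6, 8]))   # True
print(is_safe(available=1, allocated=[1, 4, 2], max_need=[5, 9, 8]))   # False
```

Sorting once and scanning gives an O(n log n) check, versus the O(n^2) worst case of the classical banker's safety algorithm, which repeatedly rescans for a grantable process.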
Entanglement in Shor's Factoring Algorithm
Jianing Tan, Zhihao Liu and Hanwu Chen, Southeast University, China
Quantum algorithms are well known for their higher efficiency compared to their classical counterparts. However, the origin of the speed-up offered by quantum algorithms is a debatable question. Using an entanglement measure based on the coefficient matrix, we investigate the entanglement features of the quantum states used in Shor's factoring algorithm. The results show that the algorithm generates no entanglement if and only if the order r is 1. Finally, comparing with previously published results (Proceedings: Mathematical, Physical and Engineering Sciences, 459(2036): 2011-2032, 2003; Physical Review A, 72(6): 062308, 2005), we give counterexamples showing that previous research partially neglected entanglement.
Shor's factoring algorithm, entanglement measure, coefficient matrices
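The order r at the heart of this result is the smallest r >= 1 with a^r ≡ 1 (mod N), the quantity Shor's algorithm finds by period finding. A classical sketch of that definition (illustrative only; the quantum part is of course the whole point of the algorithm) makes the r = 1 case concrete:

```python
from math import gcd

def multiplicative_order(a, n):
    """Order r of a modulo n: the smallest r >= 1 with a**r ≡ 1 (mod n).
    Shor's algorithm obtains r by quantum period finding on the state
    built from |x⟩|a^x mod n⟩."""
    assert gcd(a, n) == 1, "a must be coprime to n"
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

# r = 1 happens exactly when a ≡ 1 (mod n); only then is a^x constant in x,
# so the register state |x⟩|a^x mod n⟩ remains a product state.
print(multiplicative_order(1, 15))  # 1
print(multiplicative_order(7, 15))  # 4
```

This matches the abstract's claim: for any nontrivial base (r > 1), the value register correlates with the period of x and the two registers become entangled.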