Day 1 11/12/2020
Room #1

Opening by the Organizing Committee 09:00 - 09:05

Shanghai Time Zone (GMT+8)

Welcome message and presentation from EAI 09:05 - 09:15

Keynote speakers 09:15 - 12:00

Lunch 12:00 - 14:00

Keynote 14:00 - 15:00

Coffee Break 15:00 - 15:10

Security, Trust and Privacy in Cloud 15:10 - 16:30

13:00 - 13:15
A fraud detection approach based on combined feature weighting

Data mining technology has yielded fruitful results in the areas of crime discovery and intelligent decision-making. Credit cards are among the most popular payment methods, providing great convenience and efficiency. However, due to the vulnerabilities of credit card transactions, criminals are able to commit fraud that infringes on the interests of the state and citizens. Discovering potential fraudsters while guaranteeing high efficiency is therefore an extremely valuable problem to solve. In this work, we discuss the advantages and disadvantages of different models for detecting credit card fraud. We first introduce data preprocessing measures for handling imbalanced fraud detection datasets. Then we compare related models for fraudster recognition. We also propose a feature selection approach based on combined feature weights. Some future research directions are also envisioned.
Authors: Xiaoqian Liu (Jiangsu Police Institute), Chenfei Yu (Jiangsu Police Institute), Bin Xia (Nanjing University of Posts and Telecommunications), Haiyan Gu (Jiangsu Police Institute), Zhenli Wang (Jiangsu Police Institute),
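As a hypothetical illustration of combining feature weights (the abstract does not specify the exact combination rule), one simple scheme min-max-normalizes the weight vectors produced by two scoring methods and blends them into a single ranking. The example weights, feature names, and blend coefficient `beta` are all invented for this sketch.

```python
# Hypothetical combined feature weighting: normalize two weight vectors
# and blend them. All names and numbers below are illustrative only.

def min_max(weights):
    """Rescale a {feature: weight} dict to [0, 1]."""
    lo, hi = min(weights.values()), max(weights.values())
    return {f: (w - lo) / (hi - lo) if hi > lo else 0.0
            for f, w in weights.items()}

def combine(weights_a, weights_b, beta=0.5):
    """Blend two normalized weightings with coefficient beta."""
    a, b = min_max(weights_a), min_max(weights_b)
    return {f: beta * a[f] + (1 - beta) * b[f] for f in a}

ig  = {"amount": 0.9, "hour": 0.2, "merchant": 0.6}    # e.g. information gain
chi = {"amount": 30.0, "hour": 5.0, "merchant": 25.0}  # e.g. chi-squared
combined = combine(ig, chi)
ranking = sorted(combined, key=combined.get, reverse=True)
assert ranking == ["amount", "merchant", "hour"]
```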

13:15 - 13:30
Research on Distributed Trust Management in IoT

The widely used Internet of Things (IoT) has led to close cooperation between electronic devices, which requires strong reliability and trustworthiness of the devices involved in communication. However, current trust mechanisms have the following issues: (1) they rely heavily on a trusted third party, which may incur severe security issues if it is corrupted; (2) malicious evaluations of the involved devices may bias their trustrank. By introducing the concept of risk management into the trust mechanism, we propose a trust mechanism for distributed IoT devices in this paper. In the proposed mechanism, trustrank is quantified by normative trust and risk measures. Performance analysis shows that the proposed trust mechanism gives high-trust devices a higher probability of being selected and achieves a higher success rate of cooperation.
Authors: Ying Wang (Qufu Normal University), Dongfeng Wang (Qufu Normal University), Fengyin Li (Qufu Normal University),
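One way to picture a trustrank that combines normative trust with a risk measure is a weighted linear score; the sketch below is an illustration only (the linear form, the weight `alpha`, and the clamping are assumptions, not the paper's formulation).

```python
# Illustrative sketch (not the paper's exact formulation): fold a normative
# trust value and a risk measure into one trustrank score, then select the
# device with the highest score.

def trustrank(normative_trust, risk, alpha=0.7):
    """Combine normative trust and risk (both in [0, 1]) into one score."""
    score = alpha * normative_trust - (1.0 - alpha) * risk
    return max(0.0, min(1.0, score))   # clamp to [0, 1]

def select_device(devices):
    """devices: {name: (normative_trust, risk)} -> most trustworthy name."""
    return max(devices, key=lambda d: trustrank(*devices[d]))

candidates = {"sensor-a": (0.9, 0.1), "sensor-b": (0.5, 0.4)}
assert select_device(candidates) == "sensor-a"
```

Under this scoring, a device with high trust and low risk dominates, matching the selection behavior described in the performance analysis.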

13:30 - 13:45
Data Privacy Protection of Industrial Blockchain

This paper studies the data privacy protection of the industrial blockchain. Aiming at the privacy leakage problem of the industrial blockchain, and combining symmetric encryption with homomorphic encryption, a data privacy protection method is proposed to ensure the confidentiality and privacy of the industrial blockchain and improve the privacy security of industrial enterprises. This paper designs common basic chain code, security service chain code, and privacy protection chain code in the chain code layer, uses the AES algorithm to encrypt sensitive information of industrial enterprises, and uses the Paillier algorithm to encrypt maintenance costs. The privacy protection chain code can be used for secure access and fault event auditing to guarantee the privacy security of industrial enterprises. In addition, to unify chain code invocation, the call chain code is encapsulated to support fast, highly concurrent data upload and access. Finally, the existing sensitive and private data storage methods of the industrial blockchain are analyzed, and an experimental comparison with the method in this paper proves its effectiveness.
Authors: Huaqiu Long (Wuyi University/Intelligent Manufacturing Department), Jun Hou (Nanjing Institute of Industry Technology), Qianmu Li (Nanjing University of Science and Technology/School of Cyber Science and Engineering), Na Ma (Nanjing University of Science and Technology/ School of Cyber Science and Engineering), Jian Jiang (Jiangsu Zhongtian Internet Technology Co., Ltd.), Lianyong Qi (Qufu Normal University/ School of Information Science and Engineering), Xiaolong Xu (Nanjing University of Information Science and Technology), Xuyun Zhang (Macquarie University/Department of Computing/Australia),
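The key property the Paillier algorithm contributes here is additive homomorphism: multiplying two ciphertexts modulo n² yields an encryption of the sum of the plaintexts, so encrypted maintenance costs can be aggregated without decryption. A toy sketch of the standard scheme (tiny hard-coded primes for illustration only; real deployments use keys of 1024 bits or more):

```python
# Toy Paillier cryptosystem: demonstrates E(a) * E(b) mod n^2 -> a + b.
# NOT secure parameters -- the primes are deliberately tiny.
import random
from math import gcd

p, q = 293, 433                  # toy primes
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1)          # phi(n) works in place of lcm here
g = n + 1                        # standard simple generator choice

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic addition: two encrypted costs summed under encryption.
c_total = (encrypt(41) * encrypt(17)) % n2
assert decrypt(c_total) == 58
```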

13:45 - 14:00
Lightweight Anonymous Communication Model Based on Anonymous IBE

With the increasing application of big data technology, a large amount of personal information is stored and processed on the Internet, which raises the demand for privacy. In addition, the development of the mobile Internet and cloud computing requires communication models to be as efficient and low-bandwidth as possible without sacrificing security. To protect people's data privacy, this paper first presents a new anonymous Identity-Based Encryption (IBE) scheme, then designs a new lightweight anonymous communication model by introducing the proposed anonymous IBE scheme into an anonymous communication model. This model effectively guarantees the anonymity of system users and the security of messages during communication. Performance analysis shows that our communication model effectively resists traffic analysis attacks and node eavesdropping, achieving both communication security and anonymity. Compared with other anonymous communication systems, our scheme has significant advantages in efficiency at a relatively low cost, giving it good application prospects.
Authors: Yanli Wang (Qufu Normal University), Xinying Yu (Qufu Normal University), Fengyin Li (Qufu Normal University),

14:00 - 14:15
Exploring Self-Attention Mechanism of Deep Learning in Cloud Intrusion Detection

Cloud computing offers elastic and ubiquitous computing services, thereby receiving extensive attention recently. However, cloud servers have also become the targets of malicious attacks or hackers due to the centralization of data storage and computing facilities. Most intrusion attacks on cloud servers originate from internal or external networks. Intrusion detection is a prerequisite to designing anti-intrusion countermeasures for cloud systems. In this paper, we explore deep learning algorithms to design intrusion detection methods. In particular, we present a deep learning-based method integrating convolutional neural networks (CNN), a self-attention mechanism, and long short-term memory (LSTM), namely CNN-A-LSTM, for intrusion detection. CNN-A-LSTM leverages the merits of CNN in processing locally correlated data and extracting features, the temporal feature extraction capability of LSTM, and the self-attention mechanism to better extract features. We conduct extensive experiments on the KDDcup99 dataset to evaluate the performance of our CNN-A-LSTM model. Compared with other machine learning and deep learning models, CNN-A-LSTM shows superior performance.
Authors: Chenmao Lu (Macau University of Science and Technology), Hong-Ning Dai (Macau University of Science and Technology), Junhao Zhou (Macau University of Science and Technology), Hao Wang (Norwegian University of Science and Technology),
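The self-attention component can be sketched as single-head scaled dot-product attention, written here in pure Python for clarity; the Q = K = V simplification, the shapes, and the absence of learned projections are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal scaled dot-product self-attention sketch: each position's output
# is a softmax-weighted mixture of all positions' feature vectors.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(X):
    """X: list of feature vectors (seq_len x dim); Q = K = V = X here."""
    dim = len(X[0])
    out = []
    for q in X:
        # attention weights of this position over all positions
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(dim)
                  for k in X]
        w = softmax(scores)
        # weighted sum of the value vectors
        out.append([sum(wj * v[i] for wj, v in zip(w, X))
                    for i in range(dim)])
    return out
```

Because each output row is a convex combination of the input rows, attention re-weights features across the sequence without changing their dimensionality, which is why it slots naturally between the CNN and LSTM stages.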


Coffee Break 16:30 - 16:40

AI-based Cloud Applications 16:40 - 17:40

14:30 - 14:45
ECG Arrhythmia Heartbeat Classification Using Deep Learning Networks

This paper designs three deep learning network structures and three electrocardiogram (ECG) signal preprocessing methods and, on the same dataset, explores the impact and performance of the different preprocessing methods and models on ECG arrhythmia classification. The MIT-BIH database was used for classification. The preprocessing methods use the raw signal, the denoised signal, and hand-crafted wavelet features as three different model inputs. Three mainstream deep learning networks, convolutional neural networks (CNN), CNN + long short-term memory networks (LSTM), and CNN + LSTM + Attention, were designed for classification. For a fairer comparison, two types of evaluation were used: intra-patient and inter-patient.
Authors: Yang Yuxi (Hangzhou Normal University), Jin Linpeng (Hangzhou Normal University), Pan Zhigeng (Hangzhou Normal University),

14:45 - 15:00
Encoding Dual Semantic Knowledge for Text-enhanced Cloud Services

Topic modeling techniques have been widely applied in many cloud computing applications. However, few of them have tried to discover the latent semantic relationships of implicit topics and explicit words to generate a more comprehensive representation for each text. To fully exploit semantic knowledge for text classification in cloud computing systems, we attempt to encode topic and word features based on their latent relationships. The extracted topical information reorganizes the original textual structures in two ways: the topic extracted by Latent Dirichlet Allocation (LDA) is viewed as a textual extension, and the topic feature serves as a counterpart modality to the word. This paper proposes a Dual Semantic Embedding (DSE) method, which uses Convolutional Neural Networks (CNNs) to encode the dual semantic features of topics and words from the reorganized semantic structures. Experimental results show that DSE improves the performance of text classification and outperforms state-of-the-art feature generation baselines on micro-F1 and macro-F1 scores on real-world text classification datasets.
Authors: Shicheng Cui (Nanjing University of Science and Technology), Qianmu Li (Nanjing University of Science and Technology), Shu-Ching Chen (Florida International University), Jun Hou (Nanjing Vocational University of Industry Technology), Hanrui Zhang (Nanjing University of Science and Technology), Shunmei Meng (Nanjing University of Science and Technology),

15:00 - 15:15
IAS-BERT: An Information Gain Association Vector Semi-supervised BERT Model for Sentiment Analysis

With the popularity of large-scale corpora, statistics-based models have become mainstream in Natural Language Processing (NLP). The Bidirectional Encoder Representations from Transformers (BERT) model, as one of them, has achieved excellent results in various NLP tasks since its emergence. But it still has shortcomings, such as a poor capability for extracting local features and exploding training gradients. After analyzing the shortcomings of BERT, this paper proposes an Information-gain Association Vector Semi-supervised Bidirectional Encoder Representations from Transformers (IAS-BERT) model, which improves the capability of capturing local features. Considering the influence of a feature's polarity on overall sentiment and the association between two word embeddings, we apply information gain to the training corpus. The information gain results are then used as annotations of the training corpus to generate a new word embedding. At the same time, we use forward matching to optimize the computational overhead of IAS-BERT. We evaluate this model on public sentiment analysis datasets with good results. On the English dataset CoLA, its MCC reaches 42.9, and on SST-2 the accuracy reaches 96.5. It also achieves good results on Chinese datasets: on waimai_10k and weibo_senti_100k, its accuracy reaches 95.2 and 98.1, respectively.
Authors: Linkun Zhang (Qufu Normal University), Yuxia Lei (Qufu Normal University), Zhengyan Wang (Qufu Normal University),
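The entropy-based information gain applied to the training corpus can be sketched in its standard discrete form (the toy labels and feature values below are invented for illustration):

```python
# Standard discrete information gain: IG(Y; X) = H(Y) - sum_x p(x) H(Y|X=x).
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """How much knowing the feature reduces uncertainty about the label."""
    n = len(labels)
    cond = 0.0
    for x, cnt in Counter(feature_values).items():
        subset = [y for fx, y in zip(feature_values, labels) if fx == x]
        cond += (cnt / n) * entropy(subset)
    return entropy(labels) - cond

labels = ["pos", "pos", "neg", "neg"]
# A feature that determines the label perfectly has IG = H(labels) = 1 bit;
# an unrelated feature has IG = 0.
assert abs(information_gain(["a", "a", "b", "b"], labels) - 1.0) < 1e-9
assert abs(information_gain(["a", "b", "a", "b"], labels)) < 1e-9
```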

15:15 - 15:30
Personalized Medical Diagnosis Recommendation Based on Neutrosophic Sets and Spectral Clustering

With the development of cloud-based services and artificial intelligence technologies, personalized diagnosis recommender systems have become a hot research topic in medical services. An effective diagnosis recommendation model could help doctors and patients make more accurate predictions in clinical diagnosis. In this paper, we propose a novel personalized diagnosis recommendation method based on neutrosophic sets, spectral clustering, and web-based medical information to offer satisfactory web-based medical services. First, neutrosophic set theory is adopted to formulate the patients' personal information and symptom features into more interpretable neutrosophic sets with uniformly normalized values. Moreover, to make more accurate predictions, a spectral clustering scheme is integrated into a neutrosophic-based prediction approach to mine the similarity relationships between undiagnosed diseases and historical disease records. Finally, a deneutrosophication operation is applied to recommend the final fine-grained diagnoses with interpretable clinical meanings. Experimental results on four real-world medical diagnosis datasets validate the effectiveness of the proposed method.
Authors: Mengru Dong (Nanjing University of Science and Technology), Shunmei Meng (Nanjing University of Science and Technology), Lixia Chen (Jiangsu Second Chinese Medicine Hospital), Jing Zhang (Nanjing University of Science and Technology),
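As an illustration of the neutrosophic representation and the deneutrosophication step, the sketch below scores single-valued neutrosophic triples (T, I, F) with one common score function from the neutrosophic-set literature; the paper's exact operator and its medical data are not reproduced here, and the candidate diagnoses are invented.

```python
# Each candidate diagnosis is a single-valued neutrosophic triple
# (T, I, F) = (truth, indeterminacy, falsity), all in [0, 1].
# The score function below is one common choice in the literature,
# not necessarily the paper's deneutrosophication operator.

def score(t, i, f):
    """Map a neutrosophic triple to a crisp value in [0, 1]."""
    return (2.0 + t - i - f) / 3.0

def recommend(diagnoses):
    """diagnoses: {name: (T, I, F)} -> name with the highest crisp score."""
    return max(diagnoses, key=lambda d: score(*diagnoses[d]))

candidates = {"flu": (0.8, 0.1, 0.1), "cold": (0.5, 0.3, 0.4)}
assert recommend(candidates) == "flu"
```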


Coffee Break 17:40 - 17:50

Big Data Meeting Cloud 17:50 - 19:10

15:40 - 15:55
Cost-aware Big Data Stream Processing in Cloud Environment

The increasing size of big data and the speed with which it is generated have put a tremendous burden on cloud storage and communication systems. Network traffic and server capacity are crucial to building systems that are cost-aware during big data stream processing in an SDN-enabled cloud environment. The common approach to this problem has been various optimization techniques. In this paper, we propose a Software Defined Networking (SDN)-based cost optimization approach to address the problem. Although SDN has been shown to improve cloud system performance, little attention has been given to SDN-based cost optimization for the challenges of growing big data. To this end, we use the Spark Streaming Processing (SSP) approach. The proposed cost optimization approach is based on SDN within the cloud environment and focuses on optimizing communication and computational costs. We performed extensive experiments to validate the approach and compared it with a Spark Streaming baseline. The experimental results show that the proposed approach achieves better cost optimization than the baseline.
Authors: Ahmed Al-Mansoori (Deakin University), Jemal Abawajy (Deakin University), Morshed Chowdhury (Deakin University - Melbourne),

15:55 - 16:10
A Dual-Index Based Representation for Processing XPath Queries on Very Large XML Documents

Although XML processing has been studied intensively in recent years, the efficiency of evaluating XPath queries on XML documents is still a bottleneck when the documents are very large. In this study, we focus on three classes of XPath queries: backward, order-aware, and predicate-containing queries. To process them efficiently, we introduce a dual-index-based representation of a special data structure called the partial tree, which is designed for processing large XML documents with multiple computers. Our novel tree representation has two index sets that accelerate the evaluation of structural relationships between nodes, making it suitable for highly efficient XML processing. Experimental results show that our approach outperforms the state-of-the-art XML database BaseX in both absolute loading time and execution time for the target queries. The absolute execution time over 358 GB of XML data averages only seconds using 32 EC2 instances.
Authors: Wei Hao (Anhui University of Science and Technology), Kiminori Matsuzaki (Kochi University of Technology), Shigeyuki Sato (Kochi University of Technology),

16:10 - 16:25
A Concept Lattice Method for Eliminating Redundant Features

Microarray gene technology solves the problem of obtaining gene expression data. Quickly obtaining effective information from omics genes is a significant part of current research. Feature selection is an important step in data preprocessing and one of the key factors affecting an algorithm's capability for information extraction. Since a single feature selection method will bias feature subsets, we introduce ensemble learning to solve the problem of cluster redundancy. We propose a new method called Multi-Cluster minimum Redundancy (MCmR). First, features are clustered by the L1 norm. Then, redundant features among clusters are removed according to the mRMR algorithm. Finally, the features in the subset are ranked by their MCFS_score values, and the features with higher scores are used as the output. The concept lattice constructed by MCmR can reduce redundant concepts while maintaining its structure and improve the efficiency of data analysis. We verified the validity of MCmR on multiple disease gene datasets; its accuracy on the Prostate_Tumor, Lung_cancer, Breast_cancer and Leukemia datasets reached 95.4, 94.9, 96.0 and 95.8, respectively.
Authors: Zhengyan Wang (Qufu Normal University), Yuxia Lei (Qufu Normal University), Linkun Zhang (Qufu Normal University),
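The mRMR step can be sketched on discrete toy data (a generic minimum-redundancy maximum-relevance formulation, not MCmR's full pipeline with L1-norm clustering and MCFS_score ranking; the feature values below are invented):

```python
# Greedy mRMR: pick the feature maximizing mutual information with the
# label minus mean mutual information with already-selected features.
import math
from collections import Counter

def mutual_information(xs, ys):
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def mrmr(features, labels, k):
    """features: {name: list of discrete values}; returns k feature names."""
    selected = []
    while len(selected) < k:
        def score(f):
            relevance = mutual_information(features[f], labels)
            if not selected:
                return relevance
            redundancy = sum(mutual_information(features[f], features[s])
                             for s in selected) / len(selected)
            return relevance - redundancy
        selected.append(max((f for f in features if f not in selected),
                            key=score))
    return selected

labels   = [0, 0, 0, 1, 1, 1]
features = {"f_strong": [0, 0, 0, 1, 1, 0],   # closely tracks the label
            "f_copy":   [0, 0, 0, 1, 1, 0],   # redundant with f_strong
            "f_weak":   [0, 1, 0, 1, 0, 1]}   # weak but non-redundant
# The redundant copy is skipped in favor of the weaker, independent feature.
assert mrmr(features, labels, 2) == ["f_strong", "f_weak"]
```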

16:40 - 16:55
Research on The Development of Natural Human-Computer Interaction for Mobile Terminals

As an important part of computer systems, human-computer interaction (HCI) technology has developed quickly in computer science. It has evolved from humans adapting to computers to computers constantly adapting to humans. With the development of human-computer interaction, users increasingly prefer natural communication methods such as natural language, gestures, and vision over traditional keyboard and mouse input. The purpose of natural human-computer interaction is to let people interact with computers through existing cognitive habits and familiar behavioral patterns. In today's mobile Internet era, mobile terminals are widely used, and their portability and mobility make natural human-computer interaction all the more pressing for users. Natural human-computer interaction for mobile terminals has thus become a research hotspot. By analyzing the evolution of human-computer interaction for mobile terminals and drawing on the latest human-computer interface technologies (VR/AR, artificial intelligence, cloud computing, affective computing, etc.), this paper discusses the hot issues and development of natural human-computer interaction in mobile scenarios.
Authors: Qing Zhang (Computer Engineering College, Jimei University, XiaMen, China), Xiaoyong Lin (Xiamen University of Technology, XiaMen, China),

16:40 - 16:40
Knowledge Graphs Meet Crowdsourcing: A Brief Survey

In recent years, crowdsourcing has received widespread attention in both academia and industry and is widely used in machine learning, information retrieval, software engineering, and so on. The emergence of crowdsourcing undoubtedly facilitates Knowledge Graph (KG) technology. As an important branch of artificial intelligence, KG technology usually involves both machine intelligence and human intelligence; in the creation of knowledge graphs especially, human participation is indispensable, which provides a good scenario for the application of crowdsourcing. This paper first introduces the concepts of crowdsourcing and knowledge graphs and then discusses how crowdsourcing is utilized in knowledge graphs. The paper briefly reviews the crowdsourcing techniques used in the creation of knowledge graphs as well as some applications of knowledge graphs based on crowdsourcing.
Authors: Meilin Cao (Nanjing University of Science and Technology), Jing Zhang (Nanjing University of Science and Technology), Sunyue Xu (Nanjing University of Science and Technology), Zijian Ying (Nanjing University of Science and Technology),

Day 2 12/12/2020
Room #1