Nicolas Dejon1, 2, Chrystel Gaber1 and Gilles Grimaud2, 1Orange Labs, Châtillon, France, 2Univ. Lille, CNRS, Centrale Lille, UMR 9189 CRIStAL - Centre de Recherche en Informatique Signal et Automatique de Lille, F-59000 Lille, France
This article presents a hardware-based memory isolation solution for constrained devices. Existing solutions either target high-end embedded systems (typically ARM Cortex-A with a Memory Management Unit, MMU), such as the formally verified kernels seL4 and Pip, or target low-end devices, such as ACES, MINION, TrustLite and EwoK, but with limited flexibility since they propose a single level of isolation. Our approach adapts Pip so as to inherit its flexibility (multiple levels of isolation) while using the Memory Protection Unit (MPU) instead of the MMU, since the MPU is commonly available on constrained embedded systems (typically ARMv7 Cortex-M4 or ARMv8 Cortex-M33 and similar devices). This paper describes our design of Pip-MPU (Pip’s MPU-based variant) and the rationale behind our choices. We validate our proposal with an implementation on an nRF52840 development kit and perform various evaluations covering memory footprint, CPU cycles and energy consumption. We demonstrate that although our prototyped Pip-MPU incurs a 16% overhead on both performance and energy consumption, it reduces the attack surface of the accessible application memory from 100% down to 2% and the privileged operations by 99%. Pip-MPU takes less than 10 kB of Flash (6 kB for its core components) and 550 B of RAM.
constrained devices, MPU, memory isolation, Pip, OS kernel, secure systems.
Jean-Marie Kuate Fotso*1, Elie Fute Tagne2, Bénite Isaoura1, Guy Phares Fotso Fotso1, Pélagie Flore Temgoua Nanfack1, Patrice Abiama Ele3, 1National Committee for Development of Technologies, Ministry of Scientific Research and Innovation, Yaoundé, Cameroon, 2Department of Mathematics and Computer Science, University of Dschang, Department of Computer Engineering, University of Buea, Cameroon, 3Energy Research Laboratory, Institute of Geological and Mining Research, Yaoundé, Cameroon
Cloud computing has made it easier to access various forms of data and services through the Internet. Coupling it to sensor networks makes it possible to significantly overcome the storage and computing performance limits of heterogeneous objects in the Internet of Things (IoT). In Cameroon, IoT is not yet very widespread. We use it here to monitor and prevent urban fires in real time, using a set of temperature, humidity, gas, flame and electrical power sensors integrated on a NodeMCU, in order to detect fire outbreaks and generate an alert. The data collected is stored and processed on ThingSpeak; a second analysis is carried out locally. After calibrating the sensors, we analyzed the data and carried out a correlation test to identify the data most sensitive for the alert system.
Cloud computing, Fire, NodeMCU, IoT, electric current, Urbanization.
Mubeena Nazar1 and Minu R Nath2, 1Department of Computer Applications, College of Engineering Trivandrum, Kerala, India, 2Associate Professor, Department of Computer Applications, College of Engineering Trivandrum, Kerala, India
Navigating from place to place is one of the biggest problems for visually impaired people. The traditional white cane they use only detects obstacles once it touches them. The goal of this project is to solve this issue. The proposed solution employs the IoT paradigm to provide a medium between the blind and the environment. Here, we intend to develop a “Smart Blind Stick” that increases a blind person’s mobility by providing alerts about staircases, potholes, water, fire and other obstacles that might occur on his or her path. With the system, an emergency alert can be sent to the concerned persons. Furthermore, the proposed system includes an app that allows users to locate the stick using a buzzer sound and to configure its settings. The smart blind stick is user friendly, gives a quick response, has very low power consumption and is light in weight.
IoT-based smart blind stick, detection of various obstacles, sending alert messages, configuration settings using an app, buzzer to locate the stick.
Ada Alevizaki and Niki Trigoni, Department of Computer Science, University of Oxford
The increasingly home-centred way of living yields significant interest in analysing human behaviour at home. Estimating a person’s room-level position inside their house can provide essential information to improve situation awareness, but such information is constrained by the cost of the required infrastructure, as well as privacy concerns for the monitored household. In this paper, we contribute a comprehensive dataset that combines real-world BLE RSSI data and smartwatch IMU data from houses. We propose a probabilistic framework that leverages the two sensor modalities to effectively track the user around the rooms of the house without any additional infrastructure. Over time, through transition events and stay events, the model learns to infer the user’s room position, as well as a semantic map of the rooms of the house. Performance has been evaluated on the collected dataset. Our proposed approach achieves a 19.73% improvement over standard BLE RSSI localisation.
indoor localisation, semantic mapping, smartwatches, BLE, IMU.
Sangeetha S and Aishwarya Lakshmi, PSG College of Technology, Peelamedu, Coimbatore, 641004, Tamil Nadu, India
This is the age of instant gratification. Browsing an entire publication is a dawdle, hence we propose an application that summarizes all the readings in a snap using AI technologies. The system is composed of an Optical Character Recognition (OCR) engine to convert the image to text; a transformer to summarize the text, dispensing with recurrent networks; a feature prediction network that maps character embeddings to mel-spectrograms; and a GAN-based vocoder to convert spectrograms to time-domain waveforms. Through extensive experiments we demonstrate the digest podcast’s ability to recognize and summarize text and to synthesize speech for summarized audio generation.
segmentation, summarisation, mel-spectrogram, GAN, speech synthesis.
Xiaoyan Dai and Yisan Hsieh, Advanced Technology Research Institute, Minatomirai Research Center, Kyocera Corporation, Japan
Current Point-of-Sale processing is complex and time consuming. In this paper, we propose an image-based discount sticker and barcode “scan” system for automation. Recognition of discount stickers and barcodes is quite a big challenge, as different shooting conditions can result in different appearances. We design a deep learning classifier of various discount rates and barcodes based on the YOLACT detection network. We also propose a data augmentation method to generate varied data close to real scenes to improve the classification performance of the deep learning model. Evaluation on our original dataset shows that the proposed approach achieves high performance and is applicable in real-world scenarios.
Classification, data augmentation, discount sticker, barcode, image-based, deep learning.
Assia Kamal-idrissi1 and Abdelouadoud Kerarmi2, 1Ai movement, Center of Artificial Intelligence, Mohammed VI Polytechnic University, Rabat, Morocco, 2Lip6, Sorbonne University, Paris, France
The Failure Mode, Effects and Criticality Analysis (FMECA) method attempts to identify potential failure modes and treat failures before they occur, based on expert evaluation. However, this method is extremely cost-intensive in terms of failure modes since it evaluates each one of them. Moreover, it cannot properly treat uncertainty during logical reasoning, as it is based on subjective expert judgments and requires a lot of information. Previous studies proposed several versions of Fuzzy Logic but have not explicitly focused on the combinatorial complexity, nor justified the choice of membership functions in Fuzzy Logic modeling. In this paper, we develop an optimization-based approach, referred to as the Integrated Truth Table and Fuzzy Logic Model (ITTFLM), that smartly generates fuzzy logic rules using Truth Tables. The approach generates fuzzy rules quickly while ensuring consistency and non-redundancy through logical evaluation. We implement ITTFLM for three types of membership functions (Triangular, Trapezoidal, and Gaussian) to choose the function that best fits our real data. The ITTFLM was tested on fan data collected in real time from a plant machine. The experimental evaluation demonstrates that our model identifies the failure states with high accuracy and can deal with large numbers of rules, and thus meets the real-time constraints that usually impact user experience.
FMECA, Fuzzy Logic, Truth Table, Combinatorial Complexity, Real-time, Industrial fan motor, Proactive maintenance, Knowledge, Big Data, Artificial Intelligence.
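The truth-table-driven rule generation described in the ITTFLM abstract can be sketched as follows. This is a minimal illustration, not the paper's actual rule base: the linguistic terms, the two-input setup, and the max-severity output convention are assumptions made here for the example. Enumerating the full truth table with one output per antecedent guarantees completeness, consistency (no two rules share an antecedent) and non-redundancy (each antecedent appears once).

```python
from itertools import product

# Hypothetical linguistic terms for the fan-health inputs and output.
TERMS = ["Low", "Medium", "High"]
SEVERITY = {t: i for i, t in enumerate(TERMS)}

def generate_rules(n_inputs=2):
    """Enumerate every input-term combination (the truth table) and
    assign the output term by a simple max-severity convention, so the
    rule base is complete, consistent and free of duplicates."""
    rules = {}
    for combo in product(TERMS, repeat=n_inputs):
        rules[combo] = max(combo, key=SEVERITY.get)
    return rules

rules = generate_rules()        # 3^2 = 9 non-redundant rules
```

For n inputs with m terms each, the table has m^n rows, which is exactly the combinatorial growth the paper's logical evaluation is meant to keep in check.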
Divya Mereddy, Department of Computer Science, University Of Cincinnati, United States of America
An optimized itinerary for visiting n points in a tourist trip can be obtained by building a Hamiltonian clustering system over the N points, considering the K nearest neighbours and calculating the reachability distance between points. Note that R_ab != R_ba; this asymmetry decides which point should come first. The model builds on the Hamiltonian graph / travelling salesman problem and HDBSCAN clustering, while accounting for tourist constraints such as the limited total time spent per day and the estimated time to visit each place. While the algorithm suits multi-day trips in general, it is especially useful, and makes the process easy, when someone wants to cover a large number of points in few days, or is planning a world tour or other large tour with a considerably high number of visiting points.
Hamilton Graphs, Clustering, DBSCAN, Traveling Systems, Itinerary.
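The day-budgeted ordering idea in the itinerary abstract can be sketched with a greedy nearest-neighbour pass over an asymmetric travel-time table (R_ab != R_ba). All names, the toy data, and the "open a new day when the budget is exhausted" rule are illustrative assumptions; the paper's model additionally uses HDBSCAN clustering and reachability distances.

```python
def plan_days(points, visit_time, travel, day_budget):
    """Greedy nearest-neighbour Hamiltonian-path sketch: always hop to
    the cheapest reachable point (travel times may be asymmetric) and
    start a new day once the daily time budget would be exceeded."""
    remaining = set(points[1:])
    current = points[0]
    days, day, used = [], [current], visit_time[current]
    while remaining:
        nxt = min(remaining, key=lambda p: travel[(current, p)])
        cost = travel[(current, nxt)] + visit_time[nxt]
        if used + cost > day_budget:            # day is full: close it
            days.append(day)
            day, used, cost = [], 0, visit_time[nxt]
        day.append(nxt)
        used += cost
        remaining.remove(nxt)
        current = nxt
    days.append(day)
    return days

# Toy trip: minutes to visit each point, asymmetric travel minutes.
visit = {"A": 60, "B": 45, "C": 30}
travel = {("A", "B"): 20, ("B", "A"): 25, ("A", "C"): 50,
          ("C", "A"): 45, ("B", "C"): 25, ("C", "B"): 35}
plan = plan_days(["A", "B", "C"], visit, travel, day_budget=150)
# -> [['A', 'B'], ['C']]
```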
JABRI Ismail, Laboratory of Engineering Sciences (LSI), Polydisciplinary Faculty of Taza, Sidi Mohamed Ben Abdallah University, Fez, Morocco, ABOULBICHR Ahmed, Laboratory of Engineering Sciences (LSI), Polydisciplinary Faculty of Taza, Sidi Mohamed Ben Abdallah University, Fez, Morocco, Aziza EL OUAAZIZI, Laboratory of Artificial Intelligence, Data Sciences and Emergent Systems (LIASSE), Laboratory of Engineering Sciences (LSI), Sidi Mohamed Ben Abdallah University, Fez, Morocco
Nowadays, neural network models of dialogue generation (chatbots) show great promise for generating answers for chatty agents, but they are shortsighted in that they predict utterances one at a time while disregarding their impact on future outcomes. Modeling a dialogue’s future direction is critical for generating coherent, interesting dialogues, a need that has led traditional NLP dialogue models to rely on reinforcement learning. In this paper, we demonstrate how to combine these objectives by using deep reinforcement learning to predict future rewards in chatbot dialogue. The model simulates conversations between two virtual agents, with policy gradient methods used to reward sequences that exhibit three useful conversational characteristics: informativity, coherence, and ease of response (related to the forward-looking function). We assess our model on its diversity, length, and complexity with regard to humans. In dialogue simulation, evaluations demonstrated that the proposed model generates more interactive responses and encourages a more sustained, successful conversation. This work marks a preliminary step toward developing a neural conversational model based on the long-term success of dialogues.
Reinforcement learning, SEQ2SEQ model, Chatbot, NLP, Conversational agent.
Md. Sharifur Rahman and Pratheepan Yogarajah, Engineering and Intelligent Systems, Ulster University, Londonderry, UK
The performance of machine learning classifiers is mostly affected by the kind of classifier and the appropriate use of training data. This study examines five major classifiers with varying quantities of training data and validation procedures to discover the ideal mix of classifiers and validation strategies for achieving the highest accuracy rate when testing models with a small dataset.
Machine Learning, Accuracy, Precision, Recall, Cross-validation, Training dataset, F1 score.
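The validation strategies compared in the study above revolve around how a small dataset is split. As a minimal sketch of one such strategy, k-fold cross-validation can be written with the standard library alone; the splitter and the 20-sample/5-fold example are illustrative assumptions, not the study's actual data or classifiers.

```python
def k_fold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation;
    the last fold absorbs the remainder when k does not divide n."""
    fold = n // k
    idx = list(range(n))
    for i in range(k):
        test = idx[i * fold:(i + 1) * fold] if i < k - 1 else idx[i * fold:]
        test_set = set(test)
        yield [j for j in idx if j not in test_set], test

# Every sample is tested exactly once across the folds, which is why
# k-fold estimates are more stable than a single hold-out split when
# the dataset is small.
folds = list(k_fold_indices(20, 5))
```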
Le Cai1, Sam Ferguson2, Haiyan Lu2 and Gengfa Fang1, 1School of Electrical and Data Engineering, 2School of Computer Science, Faculty of Engineering & IT, University of Technology Sydney, Sydney, NSW, 2007, Australia
In music emotion recognition (MER), the high feature dimensionality is a challenge for many researchers, and there is no common consensus on the relation between audio features and emotion. A general MER system uses all available features to recognize the emotion. However, this is not an optimal solution since it includes irrelevant data, which may create noise. Thus, finding out which features are useful for MER is crucial, and it is the key to tackling this problem. In this study, we applied feature selection techniques to reduce the dimension of the dataset and eliminate redundant features according to feature importance. We created a Selected Feature Set (SFS) based on a feature selection algorithm (FSA) and benchmarked it by training models against the Complete Feature Set (CFS). The results indicate that using the SFS improves MER performance for both Random Forest (RF) and Support Vector Regression (SVR) models, while reducing the feature dimension for both. In MER tasks, the SVR model improved by 9.1% in valence and 14.3% in arousal; the RF model improved by 3.9% in valence but decreased by 1.4% in arousal. The results show that the feature selection process can potentially improve valence and arousal recognition performance in overall scenarios. It also has potential benefits for model efficiency and stability in the overall MER task.
Music emotion recognition, arousal-valence dimensions, audio features, feature selection, MediaEval.
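The importance-ranked feature selection described in the MER abstract can be sketched as follows. This is a stand-in using a plain correlation score rather than the paper's model-derived importances, and the column/target data are invented for illustration.

```python
def pearson(a, b):
    """Pearson correlation of two equal-length numeric sequences;
    returns 0.0 for a constant (zero-variance) feature."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return 0.0 if sa == 0 or sb == 0 else cov / (sa * sb)

def select_features(columns, target, k):
    """Score each feature column by |correlation| with the target
    (e.g. valence or arousal) and keep the indices of the k best."""
    scores = [abs(pearson(col, target)) for col in columns]
    ranked = sorted(range(len(columns)), key=lambda i: -scores[i])
    return sorted(ranked[:k])

# Column 0 tracks the target perfectly, column 2 is uninformative noise.
X = [[1, 2, 3, 4], [2, 1, 4, 3], [5, 5, 5, 5]]
y = [1, 2, 3, 4]
kept = select_features(X, y, k=2)   # -> [0, 1]
```

Dropping the low-scoring columns is what shrinks the feature dimension while (ideally) discarding only the noise-bearing features.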
Hend Faisal1,2, Hanan Hindy1, Samir Gaber3, Abdel-Badeeh Salem1, 1Faculty of Computer and Information Sciences, Ain Shams University, Egypt, 2Egyptian Computer Emergency and Readiness Team (EG-CERT), National Telecom Regulatory Authority (NTRA), 3Faculty of Engineering, Helwan University, Egypt
During the rapid evolution of technology, and in the era of digital transformation, attackers take advantage of the situation to spread malicious software (malware). Nowadays, malware is increasing at a terrifying rate, and it comes in different generations and forms, making it difficult for researchers to develop efficient tools for malware detection. Over the years, attacks became no longer limited to computer-based operating systems but extended to mobile-based ones, which makes it even harder for analysts and increases the need for more research in this direction. The technological evolution also gives researchers the chance to utilize Artificial Intelligence widely and leverage its capabilities in many fields in general, and in the field of malware detection in particular. This paper provides a literature review on malware detection using Artificial Intelligence techniques, specifically Machine Learning and Deep Learning techniques. The paper helps researchers gain a broad idea of the latest malware detection techniques, available datasets, challenges, and limitations.
Malware Detection, Artificial Intelligence, Machine Learning, Deep Learning, Android Malware.
Honoré Hounwanou and Mohamed Mejri, Université Laval, Québec QC G1V 0A6, Canada
The security of information systems is one of the most important concerns in today’s computer science field. It is almost impossible nowadays to find a business, authority or organization that does not rely on computer systems for its proper functioning. As a result, it becomes vital to ensure that the various programs we write work as expected and are not strewn with security vulnerabilities. Even the slightest security vulnerability can cause enormous damage and huge financial losses. Given a program P and a security policy Φ, this paper gives an approach for generating another program P′ that respects the policy Φ and behaves (with respect to trace equivalence) like P, except that it stops any execution path whenever the enforced security policy is about to be violated. The proposed approach transforms the problem of finding P′ into solving a linear system over a given algebra for which we know how to obtain the solution.
Program Rewriting, Formal Methods, Computer Security.
Christoff Jacobs, University of Johannesburg, South Africa
To stay competitive in the fast-evolving financial landscape, banks are enhancing their traditional capabilities with new innovative features to satisfy customer demands. Therefore, developing mobile banking apps requires a skilled understanding of software development techniques, protection mechanisms, and mobile security practices to safeguard customers against cyberattacks. In this context, secure development practices are a formidable challenge due to the increasing demand for more sophisticated mobile banking apps. Numerous software development complexities exist, along with a lack of guidelines and a standardised approach for creating secure mobile banking apps. This research aims to identify mobile banking app security drivers to develop a secure software development framework tailored for the banking sector. The security drivers can form the foundation for a novel framework to guide banks in creating secure mobile banking applications.
Mobile banking, software development frameworks, cybersecurity.
Sonam Pankaj and Amit Gautam, Saama Technologies
NLP augmentation has recently been gaining attention. Unlike computer vision, where image data augmentation is standard, text data augmentation in NLP is uncommon. Augmentation has shown its advantages where less data is available, and there it can play a huge role. We have implemented augmentation for pairwise sentence scoring in the biomedical domain. We investigate improving the performance of Bi-encoder sentence transformers using a silver dataset generated by cross-encoders, experimenting with our approach on biomedical domain datasets such as BIOSSES and MedNLI, where it has significantly improved the results.
Augmentation, datasets, sentence-transformers.
Oshan Niluminda and Uthpala Ekanayake, Department of Physical Sciences, Faculty of Applied Sciences, Rajarata University of Sri Lanka, Mihinthale, Sri Lanka
The transportation problem (TP) is one of the numerous difficulties organizations throughout the world must overcome. A TP has two different sorts of solutions: the Initial Basic Feasible Solution (IBFS) and the optimal solution. An IBFS can be found using the North-West Corner Rule, the Least Cost Method, or Vogel’s Approximation Method (VAM), and an optimal solution for the TP may be found using the Modified Distribution (MODI) Method or the Stepping Stone Method. This work examines an effective method for solving both balanced and unbalanced TPs using line (edge) colouring of a bipartite network. The optimal or nearly optimal solution of the TP is obtained by first converting the TP into network form and then applying the newly proposed algorithmic approach. This method takes fewer iterations than other current approaches to reach optimality.
Bipartite network, Line colouring, Transportation problem, Balanced and unbalanced, Optimal solution.
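To make the bipartite-network view of a TP concrete, here is a sketch of the classical Least Cost Method run directly on the supply-demand edges: each cell (i, j) of the cost matrix is an edge between supply node i and demand node j. This is only the baseline IBFS the abstract mentions, not the paper's edge-colouring algorithm; the cost matrix and quantities below are invented for illustration.

```python
def least_cost_allocation(supply, demand, cost):
    """Walk the bipartite supply-demand edges in increasing cost order
    and ship as much as possible on each edge (Least Cost Method, an
    IBFS for a balanced TP)."""
    supply, demand = supply[:], demand[:]       # don't mutate inputs
    alloc = {}
    edges = sorted((cost[i][j], i, j)
                   for i in range(len(supply)) for j in range(len(demand)))
    for c, i, j in edges:
        if supply[i] and demand[j]:
            q = min(supply[i], demand[j])
            alloc[(i, j)] = q
            supply[i] -= q
            demand[j] -= q
    return alloc

# Balanced toy instance: 2 sources, 2 destinations, total flow 50.
alloc = least_cost_allocation([20, 30], [10, 40], [[1, 3], [2, 4]])
```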
Ekanayake E.M.U.S.B.1, Daundasekara W.B.2, and Perera S.P.C3, 1Department of Physical Sciences, Faculty of Applied Sciences, Rajarata University of Sri Lanka, Mihinthale, Sri Lanka, 2Department of Mathematics, Faculty of Science, University of Peradeniya, Sri Lanka, 3Department of Engineering Mathematics, Faculty of Engineering, University of Peradeniya, Sri Lanka
The objective of the Transportation Problem (TP), a well-known optimization problem, is to reduce the overall cost of carrying goods from one site to another. This is a sizable, carefully planned process in the modern world. The literature offers a number of methods for resolving various transportation problems, but it mostly discusses transportation problems with equality constraints. The Interval Transportation Problem (ITP) has drawn much attention recently, and, as the literature shows, numerous methods for solving the TP have been created in the past. In this study, we propose a practical solution for handling mixed-constraint interval transportation problems. A modified Ant Colony Optimization (ACO) algorithm is used to transform the ITP with mixed constraints into a crisp transportation problem, producing a reduced solution. The Transition Rule and the Pheromone Update Rule are incorporated into the ACO algorithm to achieve this. The algorithmic strategy used in this study is simpler than well-known meta-heuristic algorithms found in the literature. Finally, the efficiency of the methodology is illustrated using numerical examples.
Interval numbers, Interval Transportation problem, Mixed constraints, Ant colony algorithm, Optimal solution.
Karthika Vijayan and Oshin Anand, Data Science Team, Sahaj AI, Bangalore, India
Conversational AI requires information extraction from text messages through a natural language understanding (NLU) capability, which generally entails the use of specific language models. However, the availability of such language-specific resources is not ensured for low-resource languages and for code mixing of languages. In this paper, we study the implementation of multilingual NLU through the development of a language-agnostic processing pipeline. We perform this study using the case of a conversational assistant implemented with the RASA framework. We build automatic assistants for answering text queries in different languages and in code-mixed language and, while doing so, experiment with different components in an NLU pipeline. Sparse and dense feature extraction accomplishes the language-agnostic composite featurization of text in our pipeline. We perform experiments with intent classification and entity extraction as part of information extraction from multilingual and code-mixed text. We confirmed the efficacy of the language-agnostic NLU pipeline (i) when dedicated language models are not available for the languages of interest, and (ii) in the case of code mixing. Our experiments delivered intent classification accuracies of 98.49%, 96.41% and 97.98% for the same queries in English, Hindi and Malayalam, respectively, without any dedicated language models. The language-agnostic processing showcased a relative improvement of 6.5% in intent classification from code-mixed Hindi-English text over that with dedicated language models.
Jinjin Cao1,2, Jie Cao1,2 and Youquan Wang1,2, 1College of Information Engineering, Nanjing University of Finance and Economics, Nanjing, China, 2Jiangsu Provincial Key Laboratory of E-Business, Nanjing University of Finance and Economics, Nanjing, China.
With the rise of deep neural networks, question answering based on machine reading comprehension has attracted more and more attention. Current question-answering systems, including transformer-based models, suffer from increasing model complexity and a large number of parameters, which makes them inefficient at extracting answers on constrained devices. In this paper, we propose an efficient knowledge distillation approach to tackle both the model complexity and the huge number of parameters. Specifically, the proposed architecture employs an intermediate model as a teacher assistant, and this "teacher assistant" can then teach the student more efficiently without consuming massive resources. We conduct experiments on the SQuAD dataset, and these experiments show that we achieve better results than the conventional knowledge distillation model.
Question Answering, Knowledge Distillation, Natural Language Processing, Teaching Assistant.
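The distillation step the abstract builds on can be sketched with the standard temperature-scaled soft-target loss; with a teacher assistant, the same loss is simply applied twice along the chain teacher → assistant → student. The temperature value and toy logits are illustrative assumptions, not the paper's configuration.

```python
import math

def softmax(logits, T):
    """Temperature-softened softmax: larger T flattens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Soft-target loss KL(teacher || student) at temperature T,
    scaled by T^2 so gradient magnitudes stay comparable across T."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when the student exactly reproduces the teacher's (softened) distribution and positive otherwise, which is what drives the student, or the assistant, toward its teacher.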
Dimpal Janu, Dept. of ECE, Malaviya National Institute of Technology, Jaipur, India, Kuldeep Singh, Dept. of ECE, Malaviya National Institute of Technology, Jaipur, India, Sandeep Kumar, Central Research Lab, Bharat Electronics Ltd., Ghaziabad, India
In this paper, we analyse the detection performance of cooperative spectrum sensing (CSS) models based on various Machine Learning (ML) and Deep Learning (DL) algorithms: the K-means clustering algorithm, the Gaussian Mixture Model (GMM), the Support Vector Machine (SVM) and the Decision Tree (DT), as well as DL architectures such as the Artificial Neural Network (ANN) and the Convolutional Neural Network (CNN). We evaluate the performance of the CSS models in a multi-antenna, multiple secondary user (SU) cognitive radio scenario, and also cater for the hidden node scenario. The system models adopted by existing DL-based CSS models have not considered such scenarios for detecting the presence of the primary user (PU). A sensing data fusion method is adopted: the fusion centre (FC) collects the SU data and computes statistical features. The FC divides the sensing data collected from all SUs into two clusters and extracts a one-dimensional feature vector, and these features are used to train the ML classifiers. For the DL-based models, the FC computes covariance matrices from the sensing data collected from each SU, and these covariance matrices are fed as input to the DL-based CSS models. The results show that the CNN-based models outperform the ANN and the other ML-based models in terms of detection probability and classification accuracy.
Cognitive Radio, Cooperative spectrum sensing, Support Vector Machine, K-means clustering, Gaussian Mixture Model.
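The covariance-matrix feature the FC feeds to the DL models in the abstract above can be sketched for one SU with a few antennas; the snapshot data here are invented, and a real receiver would use complex baseband samples rather than the real values used in this toy.

```python
def sample_covariance(snapshots):
    """Sample covariance matrix of per-antenna observations:
    snapshots is a list of length-m vectors, one per sensing instant.
    Under noise only the off-diagonal terms stay near zero; a PU
    signal correlates the antennas, which the CNN can pick up."""
    n, m = len(snapshots), len(snapshots[0])
    mean = [sum(s[a] for s in snapshots) / n for a in range(m)]
    C = [[0.0] * m for _ in range(m)]
    for s in snapshots:
        d = [s[a] - mean[a] for a in range(m)]
        for a in range(m):
            for b in range(m):
                C[a][b] += d[a] * d[b] / n
    return C

# Three snapshots from a hypothetical 2-antenna SU.
C = sample_covariance([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
```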
Ananya Chakraborty, Mampi Devi, Alak Roy, Department of Information Technology, Tripura University, India
Gesture recognition means recognizing the different expressions by which physically challenged or hearing-impaired people can communicate with the outer world. Among gestures, hand gestures are one of the most common forms of communication, and they can convey a wide range of meanings. Dance gesture recognition is one of the challenging tasks in pattern recognition involving hand gestures. It is a linguistic treatment of human motion by which we can depict dance drama and communicate with people culturally. The concept of dance gesture recognition can be used to classify the Manipuri classical dance of India, in which 25 single-hand gestures and 12 dual-hand gestures are available. Unlike other Indian classical dance forms (e.g., Bharatnatyam, Odissi, Kathak), there are no datasets available for Manipuri classical dance. In this thesis, a dataset of the 25 single-hand gestures of Manipuri classical dance, with 1500 mudras collected from 6 volunteers at different angles, is presented. An unbiased dataset is targeted to enhance gesture recognition. This thesis also presents a study of various methods for gesture recognition with their applications. Moreover, it presents four features for recognizing the 25 single-hand gestures of Manipuri dance, which are used to identify hand gestures using a skeletonization technique.
Gesture recognition, Single-hand gestures, Manipuri classical dance of India, Dataset, Skeletonization technique.
Marwa Tarchouli1,2, Sebastien Pelurson1, Thomas Guionnet1, Wassim Hamidouche2, Meriem Outtas2 and Olivier Deforges2, 1Ateme, Rennes, France, 2Univ. Rennes, INSA Rennes, CNRS, IETR - Rennes, France
End-to-end learned image and video codecs, based on auto-encoder architectures, adapt naturally to image resolution thanks to their convolutional nature. However, while coding high-resolution images, these codecs face hardware problems such as memory saturation. This paper proposes a patch-based image coding solution based on an end-to-end learned model, which aims to remedy the hardware limitation while maintaining the same quality as full-resolution image coding. Our method consists of coding overlapping patches of the image and reconstructing them into a decoded image using a weighting function. This approach manages to be on par with the performance of full-resolution image coding using an end-to-end learned model, and even slightly outperforms it, while being adaptable to different memory sizes. It is also compatible with any learned codec based on a conv/deconvolutional auto-encoder architecture, without having to retrain the model.
Auto-encoders, Image compression, Deblocking, block artifacts.
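The overlap-and-weight reconstruction described in the abstract can be sketched in 1-D with an identity "codec" standing in for the learned model; the linear ramp weights, patch sizes and signal are assumptions made for this toy, not the paper's actual weighting function.

```python
def split_patches(x, size, overlap):
    """Cut a 1-D signal into overlapping patches (assumes the step
    size - overlap tiles the signal evenly and overlap <= size // 2)."""
    step = size - overlap
    return [x[i:i + size] for i in range(0, len(x) - overlap, step)]

def merge_patches(patches, size, overlap, n):
    """Weighted overlap-add: in each overlap region the incoming patch
    ramps up while the outgoing patch ramps down, and the two weights
    always sum to one, so no blocking seam appears at patch borders."""
    out = [0.0] * n
    step = size - overlap
    last = len(patches) - 1
    for k, p in enumerate(patches):
        for j, v in enumerate(p):
            w = 1.0
            if k > 0 and j < overlap:            # left overlap: fade in
                w = (j + 1) / (overlap + 1)
            elif k < last and j >= step:         # right overlap: fade out
                w = (size - j) / (overlap + 1)
            out[k * step + j] += w * v
    return out

signal = [float(i) for i in range(10)]
patches = split_patches(signal, size=4, overlap=2)
restored = merge_patches(patches, size=4, overlap=2, n=10)
```

With a real learned codec, each patch would be encoded and decoded independently (bounding peak memory by the patch size) before the weighted merge; the identity codec here just makes the reconstruction exact so the weighting can be checked.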
Luoqiao Xiang1, Liaoying Zhao1, Shuhan Chen2 and Xiaorun Li2, 1Hangzhou Dianzi University, Hangzhou, China, 2Zhejiang University, Hangzhou, China.
Infrared and visible dual cameras are commonly used in UAV inspection. As a crucial preprocessing step, accurate registration can promote subsequent multi-source image fusion, which can improve detectability and reduce the false positive rate. Due to significant geometric distortions, gray-scale differences and partial overlap between infrared and visible images, feature-based methods or joint area-feature based methods cannot obtain satisfactory results. To solve this problem, this paper presents a novel registration method based on UAV imaging characteristics and intensity-structure similarity optimization. Reliable initial registration parameters are obtained by utilizing the UAV imaging parameters and the approximately coaxial imaging principle. To further improve registration accuracy, this paper proposes an intensity-structure similarity metric; the final rectification parameters are obtained by maximizing the proposed metric via the quantum particle swarm optimization (QPSO) method. Experimental results on infrared and visible images from UAV inspection of a photovoltaic power station demonstrate that the proposed method is competitive against traditional feature-based methods (FBMs), such as SIFT and SURF, and against joint area-feature based methods (AFBMs) based on SIFT combined with regional mutual information.
Unmanned Aerial Vehicle, Infrared Image, Visible Image, Image Registration, QPSO Algorithm, Region Mutual Information, Structural Similarity.
Rina Su1*, Yumeng Li2, Xin Yin3*, Tao Chen4, 1Sun Yat-sen University Library, Guangzhou 510205, 2School of Journalism and Communication, Jiangxi Normal University, Jiangxi 330022, 3School of Information Resource Management, Renmin University of China, Beijing 100872, 4School of Information Management, Sun Yat-sen University, Guangzhou 510205
The digitization of displaced archives is of great historical and cultural significance. Through the construction of digital humanistic platforms, represented by the MISS Platform, and the comprehensive application of IIIF technology, knowledge graph technology, ontology technology, and other popular information technologies, we find that the digital framework for displaced archives built through the MISS Platform can promote the establishment of a standardized cooperation and dialogue mechanism between the archives’ authorities and other government departments. At the same time, it can embed the work of archives in the construction of digital government and the economy, promote the exploration of the integration of archives management, data management, and information resource management, and ultimately promote the construction of a digital society. By fostering a new partnership between archives departments and enterprises, think tanks, research institutes, and industry associations, the role of multiple social subjects in the modernization of the archives governance system and governance capacity will be brought into play. The National Archives Administration has launched a special operation to recover scattered archives overseas, drawing up a list and a recovery action plan for archives lost to overseas institutions and individuals due to war and other reasons. Through the National Archives Administration, the State Administration of Cultural Heritage, the Ministry of Foreign Affairs, the Supreme People’s Court, the Supreme People’s Procuratorate, and the Ministry of Justice, specific recovery work is carried out by studying and applying international law.
Digital Humanities, Displaced Archive, MISS Platform, International Image Interoperability Framework (IIIF), Linked Data.
Wenyan Zhu1 and Yu Sun2, 1Sage Hill School, 20402 Newport Coast Dr, Newport Beach, USA, 2California State Polytechnic University, Pomona, CA, 91768, Irvine, CA 92620
It is difficult for a computer to classify a road sign correctly from an image. In this paper, we address this by using a large road sign dataset to train our machine learning model, which allows it to classify road sign images more accurately. Studies have shown the importance of diet in correlation with obesity and several chronic diseases. To reduce the incidence of diet-related diseases, we designed a mobile application for users to keep track of their nutritional intake and thus promote healthier eating patterns. We implemented a deep learning model in the application that can make predictions when given an image and analyze the nutrients of that food item. The sum of daily nutritional information is displayed to users on the dashboard, along with a letter grade to help them visualize their progress on healthy eating. Every past diet log is saved locally in SharedPreferences for the users to pull up as needed. The users have full control over how to use the application, and it is designed to raise awareness of how much of each nutrient is suggested daily in comparison to each individual’s consumption. We evaluated the effectiveness of the application with an experiment to test its accuracy, and the results supported the application’s potential and inspired ideas for future improvements.
Artificial Intelligence, Nutrients, Mobile App.
Ethan Nesel, Ahmed Ahmed and Yan Bai, School of Engineering and Technology, University of Washington Tacoma, Tacoma, Washington, USA
The healthcare sector routinely proves how humanity can push the limits of what is deemed technologically possible. This paper examines the benefits that 6G wireless communication may bring to healthcare infrastructure, focusing specifically on remote haptic telesurgery. To the best of our knowledge, this is the first experimental study that demonstrates the feasibility of remote haptic telesurgery in practice.
5G, 6G, Remote Telesurgery, Tactile Internet, Haptic Surgery, Smart Healthcare.
Danni Yang, University of California, Davis, 1 Shields Ave, Davis, CA 95616
Cancer is a disease that poses considerable threats to humans, and one reason it is so troublesome is that it is divided into many subtypes. Therefore, distinguishing the cancer type through cancer classification is very important for subsequent treatment. In this paper, we discuss how machine learning can help us perform cancer classification based on gene expression. We also describe the detailed classification process, starting from the measurement of useful gene expression, to the finding of data resources on cancer genes, and finally to the use of machine learning techniques. After explaining the conceptual knowledge of cancer classification, we review the literature to see how other researchers approach this task, what methods they used, and what conclusions they drew.
Gene Expression, Machine Learning.
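The final step of the process described above — applying a machine learning technique to expression profiles — can be sketched with a minimal nearest-centroid classifier, a method commonly applied to gene expression data. This is an illustrative toy, not the paper's method; the function names and two-gene vectors are invented, and real inputs would contain thousands of genes per sample.

```python
import math

def train_centroids(samples):
    """samples: list of (expression_vector, subtype_label) pairs.
    Computes the mean expression profile (centroid) of each subtype."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def classify(centroids, vec):
    """Assign the subtype whose centroid is closest in Euclidean distance."""
    def dist(c):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(c, vec)))
    return min(centroids, key=lambda lab: dist(centroids[lab]))
```

A new sample is labeled with whichever subtype's average profile it most resembles, which mirrors the intuition behind more sophisticated expression-based classifiers.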
Jai Sharma1 and Vidhyacharan Bhaskar2, 1Intern Researcher, Stanford University School of Medicine; 269 Campus Drive, Palo Alto, CA 94305, 2Professor, School of Engineering, San Francisco State University, 1600 Holloway Avenue, San Francisco, CA 94132
Glaucoma is the leading cause of irreversible blindness in people over the age of 60, accounting for 6.6 to 8% of all blindness in 2010, but there is still much to be learned about the genetic origins of the eye disease. With the modern development of Next-Generation Sequencing (NGS) technologies, scientists are starting to learn more about the genetic origins of Glaucoma. This research uses differential expression (DE) and gene ontology (GO) analyses to study the genetic differences between mice with severe Glaucoma and multiple control groups. Optic nerve head (ONH) and retina data samples of genome-wide RNA expression from NCBI (NIH) are used for pairwise comparison experimentation. In addition, principal component analysis (PCA) and dispersion visualization methods are employed to perform quality control tests of the sequenced data. Genes with skewed gene counts are also identified, as they may be marker genes for a particular severity of Glaucoma. The gene ontologies found in this experiment support existing knowledge of Glaucoma genesis, providing confidence that the results are valid. Future researchers can thoroughly study the gene lists generated by the DE and GO analyses to find potential activator or protector genes for Glaucoma in mice and develop drug treatments or gene therapies to slow or stop the progression of the disease. The overall goal is that, in the future, such treatments can be developed for humans as well, improving the quality of life of patients with Glaucoma and reducing Glaucoma blindness rates.
mRNA Sequence Data, Statistical Analysis, Differential Expression Analysis, Gene Ontology Analysis, Glaucoma, Ophthalmology.
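The pairwise differential-expression comparison mentioned in the abstract above can be illustrated, per gene, by a log2 fold change together with a Welch t statistic. This is a toy sketch under assumed inputs (two small lists of per-sample expression values for one gene), not the authors' actual pipeline, which would rely on established RNA-seq tooling.

```python
import math
from statistics import mean, variance

def de_stats(case, control):
    """case/control: per-sample expression values for one gene.
    Returns (log2 fold change, Welch t statistic) for the pairwise comparison."""
    m1, m2 = mean(case), mean(control)
    lfc = math.log2(m1 / m2)                       # magnitude of the change
    se = math.sqrt(variance(case) / len(case) +    # standard error with
                   variance(control) / len(control))  # unequal variances
    return lfc, (m1 - m2) / se
```

Genes with a large fold change and a large t statistic are the candidates a DE analysis would flag before feeding the resulting gene list into GO enrichment.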
Simisani Ndaba, Department of Computer Science, University of Botswana, Gaborone, Botswana
R is widely used by researchers in statistics and academia; in Botswana, it has been used in only a few studies for data analysis. The purpose of this paper is to synthesize research conducted in Botswana that has used R programming for data analysis, and to demonstrate to data scientists and to the R community in Botswana and internationally the gaps and applications in practice in research work using R in the context of Botswana. The paper followed the PRISMA methodology, and the articles were taken from information technology databases. The findings show that research conducted in Botswana using R programming spans health care, climatology, conservation, and physical geography, with rpart as the most used R package across the research areas. It was also found that many R packages are used in health care for genomics, plotting, and networking, and that classification was the most common model used across research areas.
R Programming, Botswana, R Package, Research Area, Data Analysis.
Swati Chandurkar, Saumya Parag Phadkar, Chinmay Singhania, Siddharth Poddar, Jai Suryawanshi, Pimpri Chinchwad College of Engineering, Pune, India
Most e-commerce websites use a recommendation system to provide a better user experience for customers. A key driver of this is CRM (Customer Relationship Management): CRM defines the value of a customer and the status of the company’s relationship with them. Customer segmentation divides customers into different groups based on their past activity on the website; this segregation places each customer in a group with other customers resembling their behavior. This helps the recommendation system produce a more customized experience for customers and adds intelligence to the system. RFM, or Recency, Frequency, Monetary, is a very popular customer segmentation technique that uses these three parameters to rank customers. In this survey, the authors review the proposed techniques and discuss their impact and relevance.
Customer Segmentation, RFM analysis, Machine Learning, Clustering.
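The RFM ranking described above can be sketched in a few lines: aggregate each customer's most recent purchase, purchase count, and total spend, then bin each dimension into ranked groups. The transaction schema and helper names below are illustrative, not taken from any surveyed paper.

```python
from collections import defaultdict

def rfm_table(transactions):
    """transactions: list of (customer_id, days_since_purchase, amount) tuples.
    Aggregates recency (fewest days ago), frequency, and monetary value."""
    recency, frequency, monetary = {}, defaultdict(int), defaultdict(float)
    for cust, days_ago, amount in transactions:
        recency[cust] = min(recency.get(cust, days_ago), days_ago)
        frequency[cust] += 1
        monetary[cust] += amount
    return {c: (recency[c], frequency[c], monetary[c]) for c in recency}

def rfm_scores(table, bins=3):
    """Rank each dimension into `bins` groups; higher score = better customer.
    Recency is inverted: the most recent purchasers get the top score."""
    custs = list(table)
    def score(values, reverse):
        order = sorted(custs, key=lambda c: values[c], reverse=reverse)
        return {c: bins - (i * bins) // len(custs) for i, c in enumerate(order)}
    r = score({c: table[c][0] for c in custs}, reverse=False)  # fewer days first
    f = score({c: table[c][1] for c in custs}, reverse=True)
    m = score({c: table[c][2] for c in custs}, reverse=True)
    return {c: (r[c], f[c], m[c]) for c in custs}
```

The resulting (R, F, M) triples are what a segmentation step would cluster or threshold to form customer groups for the recommender.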
Copyright © NLPTA 2022