Secure and Trusted Execution Framework for Virtualized Workloads (open access)

In this dissertation, we have analyzed various security and trust solutions for modern computing systems and proposed a framework that provides holistic security and trust across the entire lifecycle of a virtualized workload. The framework consists of three novel techniques and a set of guidelines. The three techniques provide the necessary elements for a secure and trusted execution environment, while the guidelines ensure that the virtualized workload remains in a secure and trusted state throughout its lifecycle. We have successfully implemented the framework and demonstrated that it provides security and trust guarantees at launch time, at any time during execution, and during an update of the virtualized workload. Given the proliferation of virtualization from cloud servers to embedded systems, the techniques presented in this dissertation can be implemented on most computing systems.
Date: August 2018
Creator: Kotikela, Srujan D
System: The UNT Digital Library
An Efficient Approach for Dengue Mitigation: A Computational Framework (open access)

Dengue mitigation is a major research area among scientists working towards effective management of the dengue epidemic. Effective dengue mitigation requires several important components: accurate epidemic modeling, efficient epidemic prediction, and efficient resource allocation for controlling the spread of the disease. Past studies assumed a homogeneous response of the dengue epidemic to climate conditions across regions; however, the epidemic is both climate dependent and geographically dependent, and a single global model is not sufficient to capture its local variations. We propose a novel epidemic modeling method that accounts for local variation by building a micro-ensemble of regressors for each region. Three regressors are used to construct the ensemble: support vector regression, ordinary least squares regression, and k-nearest neighbor regression. The best-performing regressors are selected into the ensemble, which determines the risk of dengue epidemic in each region in advance. The risk is then used in risk-based resource allocation. The proposed resource allocation builds on a genetic algorithm, with major modifications to its main components, …
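As a rough illustration of the per-region micro-ensemble, the following Python sketch scores the three candidate regressors by cross-validation and keeps the best performers. It assumes scikit-learn; the regressor settings, the keep-two rule, and the averaging step are illustrative choices, not the dissertation's exact configuration.

```python
# Sketch: per-region micro-ensemble of three candidate regressors,
# keeping only the best performers (selection rule is an assumption).
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

def build_region_ensemble(X, y, keep=2):
    """Score SVR, OLS, and kNN with cross-validation and keep the best."""
    candidates = {
        "svr": SVR(kernel="rbf"),
        "ols": LinearRegression(),
        "knn": KNeighborsRegressor(n_neighbors=5),
    }
    scores = {
        name: cross_val_score(m, X, y, cv=3,
                              scoring="neg_mean_absolute_error").mean()
        for name, m in candidates.items()
    }
    best = sorted(scores, key=scores.get, reverse=True)[:keep]
    return [candidates[name].fit(X, y) for name in best]

def predict_risk(ensemble, X):
    # Average the selected regressors' outputs as the region's risk score.
    return np.mean([m.predict(X) for m in ensemble], axis=0)
```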
Date: May 2019
Creator: Dinayadura, Nirosha
System: The UNT Digital Library
Methodical Evaluation of Processing-in-Memory Alternatives (open access)

In this work, I characterized a series of potential application kernels using a set of architectural and non-architectural metrics and compared four alternatives for processing-in-memory (PIM) cores: ARM cores, GPGPUs, coarse-grained reconfigurable dataflow cores (DF-PIM), and a domain-specific architecture using a SIMD PIM engine consisting of a series of multiply-accumulate (MAC) circuits. For each PIM alternative, I investigated how performance and energy efficiency change with respect to a series of system parameters, such as memory bandwidth and latency, number of PIM cores, DVFS states, and cache architecture. In addition, I compared the PIM core choices for a subset of applications and discussed how the application characteristics correlate with the achieved performance and energy efficiency. Furthermore, I compared the PIM alternatives to a host-centric solution that uses a traditional server-class CPU core or PIM-like cores acting as host-side accelerators instead of being part of 3D-stacked memories. Such insights can expose the achievable performance limits and shortcomings of certain PIM designs and show sensitivity to a series of system parameters (available memory bandwidth, application latency and bandwidth sensitivity, etc.). In addition, identifying the common application characteristics for PIM kernels provides opportunity to identify similar types of computation patterns in …
Date: May 2019
Creator: Scrbak, Marko
System: The UNT Digital Library
New Frameworks for Secure Image Communication in the Internet of Things (IoT) (open access)

The continuous expansion of technology, broadband connectivity, and the wide range of new devices in the IoT cause serious concerns regarding privacy and security. In addition, a key challenge in the IoT is the storage and management of massive data streams; for example, there is constant demand for images of acceptable size at the highest possible quality to serve the rapidly increasing number of multimedia applications. Given the fast evolution of the IoT, the effort in this dissertation contributes to resolving concerns related to the security and compression functions of image communication in the Internet of Things (IoT). This dissertation proposes frameworks for a secure digital camera (SDC) in the IoT. The objectives of this dissertation are twofold. On the one hand, the proposed framework architecture offers a double layer of protection, encryption and watermarking, that addresses issues related to security, privacy, and digital rights management (DRM) by applying a hardware architecture of the state-of-the-art image compression technique Better Portable Graphics (BPG), which achieves a high compression ratio with small file size. On the other hand, the proposed secure BPG (SBPG) framework is integrated with the digital camera. Thus, the proposed SBPG framework integrated with the SDC is suitable …
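The double-layer idea can be illustrated in software, though the dissertation realizes it as a hardware architecture around BPG. In this hedged sketch, a least-significant-bit watermark and the Fernet cipher (pyca/cryptography) stand in for the framework's watermarking and encryption layers, and BPG compression is omitted entirely; all names and parameters here are illustrative.

```python
# Illustrative sketch of the double-layer protection: watermark, then
# encrypt. LSB and Fernet are stand-ins; BPG compression is omitted.
import numpy as np
from cryptography.fernet import Fernet

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Embed watermark bits into the least significant bit of each pixel."""
    flat = image.flatten()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def protect(image: np.ndarray, bits: np.ndarray, key: bytes) -> bytes:
    marked = embed_watermark(image.astype(np.uint8), bits)
    # Second layer: encrypt the watermarked byte stream.
    return Fernet(key).encrypt(marked.tobytes())

key = Fernet.generate_key()
img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # toy image
wm = np.random.randint(0, 2, 128, dtype=np.uint8)           # toy watermark
ciphertext = protect(img, wm, key)
```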
Date: August 2016
Creator: Albalawi, Umar Abdalah S
System: The UNT Digital Library
Sensing and Decoding Brain States for Predicting and Enhancing Human Behavior, Health, and Security (open access)

The human brain acts as an intelligent sensor by helping in effective signal communication and execution of logical functions and instructions, thus coordinating all functions of the human body. More importantly, it shows the potential to combine prior knowledge with adaptive learning, thus ensuring constant improvement. These qualities help the brain interact efficiently with both the body (brain-body) and the environment (brain-environment). This dissertation attempts to apply brain-body-environment interactions (BBEI) to elevate human existence and enhance our day-to-day experiences. For instance, when one stepped out of the house in the past, one had to carry keys (for unlocking), money (for purchasing), and a phone (for communication). With the advent of smartphones, this scenario changed completely, and today it is often enough to carry just one's smartphone because all the above activities can be performed with a single device. In the future, with advanced research and progress in BBEI, one will be able to perform many activities by dictating them in one's mind, without any physical involvement. This dissertation aims to shift the paradigm of existing brain-computer interfaces from just 'control' to 'monitor, control, enhance, and restore' in three main areas - healthcare, transportation safety, and cryptography. …
Date: August 2016
Creator: Bajwa, Garima
System: The UNT Digital Library
Improving Software Quality through Syntax and Semantics Verification of Requirements Models (open access)

Software defects can frequently be traced to poorly-specified requirements. Many software teams manage their requirements using tools such as checklists and databases, which lack a formal semantic mapping to system behavior. Such a mapping can be especially helpful for safety-critical systems. Another limitation of many requirements analysis methods is that much of the analysis must still be done manually. We propose techniques that automate portions of the requirements analysis process, as well as clarify the syntax and semantics of requirements models using a variety of methods, including machine learning tools and our own tool, VeriCCM. The machine learning tools used help us identify potential model elements and verify their correctness. VeriCCM, a formalized extension of the causal component model (CCM), uses formal methods to ensure that requirements are well-formed, as well as providing the beginnings of a full formal semantics. We also explore the use of statecharts to identify potential abnormal behaviors from a given set of requirements. At each stage, we perform empirical studies to evaluate the effectiveness of our proposed approaches.
Date: December 2018
Creator: Gaither, Danielle
System: The UNT Digital Library

A Top-Down Policy Engineering Framework for Attribute-Based Access Control

The purpose of this study is to propose a top-down policy engineering framework for attribute-based access control (ABAC) that automatically extracts access control policies (ACPs) from requirements specification documents and then, using the extracted policies, builds or updates an ABAC model. We specify a procedure that consists of three main components: 1) ACP sentence identification, 2) policy element extraction, and 3) ABAC model creation and update. ACP sentence identification processes unrestricted natural language documents and identifies the sentences that carry ACP content. We propose and compare three methodologies from different disciplines, namely deep recurrent neural networks (RNN-based), biological immune systems (BIS-based), and a combination of multiple natural language processing techniques (PMI-based), in order to identify the most suitable methodology for separating ACP sentences from irrelevant text. Our evaluation results improve the state of the art by a margin of 5% F1 measure. To aid future research, we also introduce a new dataset that includes 5000 sentences from real-world policy documents. ABAC policy extraction extracts ACP elements such as subject, object, and action from the identified ACPs. Using semantic roles, we correctly identify ACP elements with an average F1 score of 75%, which bests the previous work by 15%. Furthermore, as SRL tools are often …
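To make the shape of the ACP sentence identification task concrete, here is a minimal stand-in classifier: a TF-IDF bag-of-words model with logistic regression, assuming scikit-learn. The dissertation's RNN-, BIS-, and PMI-based methods are far more sophisticated; the sentences and labels below are toy examples.

```python
# Minimal stand-in for ACP sentence identification: classify whether a
# sentence carries access control policy content.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "Only doctors can view patient medical records.",   # ACP
    "The hospital was founded in 1972.",                # not ACP
    "Nurses may update the medication schedule.",       # ACP
    "Visiting hours end at 8 pm.",                      # not ACP
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)
print(clf.predict(["Administrators can delete user accounts."]))
```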
Date: May 2020
Creator: Narouei, Masoud
System: The UNT Digital Library
Privacy Preserving Machine Learning as a Service (open access)

Machine learning algorithms based on neural networks have achieved remarkable results and are being used extensively in different domains. However, these algorithms require access to raw data, which is often privacy sensitive. To address this issue, we develop new techniques for running deep neural networks over encrypted data within the practical limitations of current homomorphic encryption schemes. We focus on training and classification with neural networks and convolutional neural networks (CNNs). First, we design methods for approximating the activation functions commonly used in CNNs (i.e., ReLU, Sigmoid, and Tanh) with low-degree polynomials, which is essential for efficient homomorphic encryption schemes. Then, we train neural networks with the approximating polynomials instead of the original activation functions and analyze the performance of the models. Finally, we implement neural networks and convolutional neural networks over encrypted data and measure the performance of the models.
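A minimal sketch of the activation-approximation step follows: fit a low-degree polynomial to ReLU over a bounded interval, since homomorphic schemes evaluate only additions and multiplications. The degree and interval here are illustrative choices, not the dissertation's exact parameters.

```python
# Fit a degree-2 polynomial to ReLU over an assumed input range; the
# polynomial then replaces ReLU during training over encrypted data.
import numpy as np

x = np.linspace(-4, 4, 1000)          # assumed input range of the layer
relu = np.maximum(0, x)

coeffs = np.polyfit(x, relu, deg=2)   # least-squares degree-2 fit
poly_relu = np.poly1d(coeffs)

max_err = np.max(np.abs(poly_relu(x) - relu))
print(coeffs, max_err)
```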
Date: May 2020
Creator: Hesamifard, Ehsan
System: The UNT Digital Library
Traffic Forecasting Applications Using Crowdsourced Traffic Reports and Deep Learning (open access)

Intelligent transportation systems (ITS) are essential tools for traffic planning, analysis, and forecasting that can utilize the huge amount of traffic data available nowadays. In this work, we aggregated detailed traffic flow sensor data, Waze reports, OpenStreetMap (OSM) features, and weather data from the California Bay Area over six months. Using these data, we studied three novel ITS applications using convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The first experiment analyzes the relation between roadway shapes and accident occurrence; results show that the speed limit and number of lanes are significant predictors of major accidents on highways. The second experiment presents a novel method for forecasting congestion severity using crowdsourced data only (Waze, OSM, and weather), without the need for traffic sensor data. The third experiment studies the improvement of traffic flow forecasting using accidents, number of lanes, weather, and time-related features; results show significant performance improvements when the additional features were used.
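As a hedged sketch of the RNN-based forecasting setup, the following model applies an LSTM to a window of past feature vectors (e.g., Waze report counts and weather) to predict the next congestion severity. It assumes TensorFlow/Keras; the window length, feature count, and architecture are illustrative, and the data below are random placeholders.

```python
# LSTM over a window of past feature vectors, predicting one value.
import numpy as np
import tensorflow as tf

WINDOW, FEATURES = 12, 8              # 12 past time steps, 8 features each

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),         # predicted congestion severity
])
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(256, WINDOW, FEATURES).astype("float32")  # toy data
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```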
Date: May 2020
Creator: Alammari, Ali
System: The UNT Digital Library

Multi-Source Large Scale Bike Demand Prediction

Current work on bike demand prediction mainly focuses on the cluster level and performs poorly when predicting the demand of a single station. In the first task, we introduce a context-based bike demand prediction model that predicts bike demand per station by synergistically combining the spatio-temporal network and environmental contexts. Furthermore, people's movement information is an important factor influencing the bike demand at each station; to better understand people's movements, we need to analyze the relationships between different places. In the second task, we propose an origin-destination model that learns place representations from large-scale movement data. Based on this movement information, we incorporate the place embeddings into our bike demand prediction model, which is built using multi-source large-scale datasets: New York Citi Bike data, New York taxi trip records, and New York POI data. Finally, as deep learning methods have been successfully applied to many fields such as image recognition and natural language processing, we incorporate deep learning into the bike demand prediction problem. In this task, we propose a deep spatial-temporal (DST) model, which contains three major components: spatial dependencies, temporal dependencies, …
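The origin-destination idea can be sketched as learning a place embedding by training an origin's vector to predict its trip destinations, loosely in the style of word2vec. This PyTorch sketch is an assumption about the setup; the embedding dimension, softmax head, and toy trip records are all placeholders.

```python
# Learn place vectors by predicting trip destinations from origins.
import torch
import torch.nn as nn

NUM_PLACES, DIM = 1000, 32

class ODModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.origin = nn.Embedding(NUM_PLACES, DIM)   # learned place vectors
        self.out = nn.Linear(DIM, NUM_PLACES)         # destination scores

    def forward(self, origin_ids):
        return self.out(self.origin(origin_ids))

model = ODModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

origins = torch.randint(0, NUM_PLACES, (64,))         # toy trip records
dests = torch.randint(0, NUM_PLACES, (64,))
loss = loss_fn(model(origins), dests)
loss.backward()
opt.step()                                            # one training step
```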
Date: May 2020
Creator: Zhou, Yang
System: The UNT Digital Library

A Performance and Security Analysis of Elliptic Curve Cryptography Based Real-Time Media Encryption

Access: Use of this item is restricted to the UNT Community
This dissertation emphasizes the security aspects of real-time media. The problems of existing real-time media protections are identified in this research, and viable solutions are proposed. First, the security of real-time media depends on the Secure Real-time Transport Protocol (SRTP) mechanism. We identified drawbacks of existing SRTP systems, whose symmetric key encryption schemes can be exploited by attackers; Elliptic Curve Cryptography (ECC), an asymmetric key cryptography scheme, is proposed to resolve these problems. Second, the ECC encryption scheme is based on elliptic curves. This dissertation explores the weaknesses of a widely used elliptic curve in terms of security and describes a more secure elliptic curve suitable for real-time media protection. Eighteen elliptic curves were tested in a real-time video transmission system, and fifteen elliptic curves were tested in a real-time audio transmission system. Based on the performance, the X9.62 standard 256-bit prime curve, NIST-recommended 256-bit prime curves, and Brainpool 256-bit prime curves were found to be suitable for real-time audio encryption. Likewise, the X9.62 standard 256-bit prime and 272-bit binary curves, and NIST-recommended 256-bit prime curves were found to be suitable for real-time video encryption. The weaknesses of NIST-recommended elliptic curves are discussed and a more secure new …
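For concreteness, here is an ECDH key agreement on the NIST P-256 prime curve, one of the curve families evaluated above, as it might be used to establish an SRTP session key. It uses the pyca/cryptography library; the HKDF derivation step and the key length are assumptions, not the dissertation's protocol.

```python
# ECDH on NIST P-256 (SECP256R1), deriving a 128-bit session key.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

alice_priv = ec.generate_private_key(ec.SECP256R1())
bob_priv = ec.generate_private_key(ec.SECP256R1())

# Each side combines its private key with the peer's public key.
alice_shared = alice_priv.exchange(ec.ECDH(), bob_priv.public_key())
bob_shared = bob_priv.exchange(ec.ECDH(), alice_priv.public_key())
assert alice_shared == bob_shared

# Derive a 128-bit master key from the shared secret (assumed step).
srtp_key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                info=b"srtp-master-key").derive(alice_shared)
```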
Date: December 2019
Creator: Sen, Nilanjan
System: The UNT Digital Library

Frameworks for Attribute-Based Access Control (ABAC) Policy Engineering

In this dissertation we propose semi-automated top-down policy engineering approaches for attribute-based access control (ABAC) development. Further, we propose a hybrid ABAC policy engineering approach to combine the benefits and address the shortcomings of both top-down and bottom-up approaches. In particular, we propose three frameworks: (i) ABAC attributes extraction, (ii) ABAC constraints extraction, and (iii) hybrid ABAC policy engineering. The attributes extraction framework comprises five modules that operate together to extract attribute values from natural language access control policies (NLACPs), map the extracted values to attribute keys, and assign each key-value pair to an appropriate entity. For the ABAC constraints extraction framework, we design a two-phase process to extract ABAC constraints from NLACPs. The process begins with the identification phase, which focuses on identifying the right boundary of constraint expressions. Next is the normalization phase, which aims at extracting the actual elements that pose a constraint. Finally, our hybrid ABAC policy engineering framework consists of five modules; it combines top-down and bottom-up policy engineering techniques to overcome the shortcomings of both approaches and to generate policies that are more intuitive and relevant to actual organization policies. With this, we believe that our work takes essential steps towards …
Date: August 2020
Creator: Alohaly, Manar
System: The UNT Digital Library
Kriging Methods to Exploit Spatial Correlations of EEG Signals for Fast and Accurate Seizure Detection in the IoMT (open access)

Epileptic seizure presents a formidable threat to the lives of its sufferers, who can be left unconscious within seconds of its onset. With a mortality rate at least twice that of the general population, it is a true cause for concern and has gained ample attention from various research communities. About 800 million people in the world will have at least one seizure experience in their lifespan. Injuries sustained during a seizure crisis are one of the leading causes of death in epilepsy; these can be prevented by early detection of seizures accompanied by a timely intervention mechanism. The research presented in this dissertation explores Kriging methods to exploit spatial correlations of electroencephalogram (EEG) signals from the brain for fast and accurate seizure detection in the Internet of Medical Things (IoMT) using edge computing paradigms, modeling the brain as a three-dimensional spatial object, similar to a geographical panorama. This dissertation proposes basic, hierarchical, and distributed Kriging models, with a deep neural network (DNN) wrapper in some instances. Experimental results from the models are highly promising for real-time seizure detection, with excellent performance in seizure detection latency and training time, as well as accuracy, sensitivity, and specificity, which compare …
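A self-contained sketch of ordinary Kriging, the core interpolation step, is shown below: it estimates an EEG reading at an unsampled scalp location from nearby electrode readings. The Gaussian variogram and its parameters are illustrative assumptions, and the electrode positions and readings are random toy data.

```python
# Ordinary Kriging in NumPy with an assumed Gaussian variogram.
import numpy as np

def gaussian_variogram(h, sill=1.0, rng=3.0):
    return sill * (1.0 - np.exp(-(h / rng) ** 2))

def ordinary_krige(coords, values, target):
    """coords: (n, 3) electrode positions; values: (n,); target: (3,)."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gaussian_variogram(d)
    A[n, n] = 0.0                      # Lagrange multiplier row/column
    b = np.ones(n + 1)
    b[:n] = gaussian_variogram(np.linalg.norm(coords - target, axis=1))
    w = np.linalg.solve(A, b)[:n]      # kriging weights
    return w @ values

electrodes = np.random.rand(8, 3)      # toy 3-D electrode positions
readings = np.random.rand(8)           # toy EEG amplitudes
print(ordinary_krige(electrodes, readings, np.array([0.5, 0.5, 0.5])))
```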
Date: August 2020
Creator: Olokodana, Ibrahim Latunde
System: The UNT Digital Library
Online Construction of Android Application Test Suites (open access)

Mobile applications play an important role in the dissemination of computing and information resources. They are often used in domains such as mobile banking, e-commerce, and health monitoring. Cost-effective testing techniques in these domains are critical. This dissertation contributes novel techniques for automatic construction of mobile application test suites. In particular, this work provides solutions that address the prohibitively large number of possible event sequences that must be sampled in GUI-based mobile applications. This work makes three major contributions: (1) an automated GUI testing tool, Autodroid, that implements a novel online approach to automatic construction of Android application test suites; (2) probabilistic and combinatorial-based algorithms that systematically sample the input space of Android applications to generate test suites with GUI/context events; and (3) empirical studies to evaluate the cost-effectiveness of our techniques on real-world Android applications. Our experiments show that our techniques achieve better code coverage and event coverage compared to random test generation. We demonstrate that our techniques are useful for automatic construction of Android application test suites in the absence of source code and preexisting abstract models of an application under test (AUT). The insights derived from our empirical studies provide guidance to researchers and practitioners involved …
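The following sketch illustrates the general shape of probabilistic test generation: sampling GUI event sequences while down-weighting already-covered events. The uniform-with-history bias and the event names are assumptions for illustration, not Autodroid's exact model.

```python
# Sample event sequences, biasing away from already-covered events.
import random

def sample_test_case(events, length=10, seen=None):
    """Build one event sequence, down-weighting previously chosen events."""
    seen = seen if seen is not None else {}
    sequence = []
    for _ in range(length):
        weights = [1.0 / (1 + seen.get(e, 0)) for e in events]
        choice = random.choices(events, weights=weights, k=1)[0]
        seen[choice] = seen.get(choice, 0) + 1
        sequence.append(choice)
    return sequence

gui_events = ["tap_login", "tap_menu", "scroll_list", "rotate", "back"]
coverage_counts = {}
suite = [sample_test_case(gui_events, 6, coverage_counts) for _ in range(3)]
print(suite)
```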
Date: December 2017
Creator: Adamo, David T., Jr.
System: The UNT Digital Library
Hybrid Approaches in Test Suite Prioritization (open access)

The rapid advancement of web and mobile application technologies has recently posed numerous challenges to the software engineering community, including how to cost-effectively test applications that have complex event spaces. Many software testing techniques attempt to cost-effectively improve the quality of such software. This dissertation primarily focuses on hybrid test suite prioritization, in which two or more criteria are used to prioritize a test suite, since a single criterion is often insufficient. The dissertation consists of the following contributions: (1) a weighted test suite prioritization technique that employs the distance between criteria as a weighting factor; (2) a coarse-to-fine-grained test suite prioritization technique that uses a multilevel approach to increase the granularity of the criteria at each subsequent iteration; (3) the Caret-HM tool for Android user-session-based testing, which allows testers to record, replay, and create heat maps from user interactions with Android applications via a web browser; and (4) Android user-session-based test suite prioritization techniques that utilize heuristics developed from user sessions created by Caret-HM. Each technique is evaluated empirically in its respective chapter. The proposed techniques generally show improved or equally good performance when compared to the baselines, depending on an …
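To show the mechanics of multi-criteria prioritization, this sketch orders tests by a weighted sum of two normalized criterion scores. The criteria, scores, and fixed weights are placeholders; the dissertation derives weights from the distance between criteria rather than fixing them by hand.

```python
# Order tests by a weighted combination of two normalized criteria.
def prioritize(tests, w_coverage=0.6, w_diversity=0.4):
    def normalize(values):
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

    cov = normalize([t["coverage"] for t in tests])
    div = normalize([t["diversity"] for t in tests])
    scored = zip(tests,
                 (w_coverage * c + w_diversity * d for c, d in zip(cov, div)))
    return [t for t, _ in sorted(scored, key=lambda p: p[1], reverse=True)]

suite = [
    {"name": "t1", "coverage": 120, "diversity": 3},
    {"name": "t2", "coverage": 80, "diversity": 9},
    {"name": "t3", "coverage": 150, "diversity": 1},
]
print([t["name"] for t in prioritize(suite)])
```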
Date: May 2018
Creator: Nurmuradov, Dmitriy
System: The UNT Digital Library

Understanding and Addressing Accessibility Barriers Faced by People with Visual Impairments on Block-Based Programming Environments

There is an increased use of block-based programming environments in K-12 education and computing outreach activities to introduce novices to programming and computational thinking skills. However, despite their appealing design that allows students to focus on concepts rather than syntax, block-based programming by design is inaccessible to people with visual impairments and people who cannot use the mouse. In addition to this inaccessibility, little is known about the instructional experiences of students with visual impairments on current block-based programming environments. This dissertation addresses this gap by (1) investigating the challenges that students with visual impairments face on current block-based programming environments and (2) exploring ways in which we can use the keyboard and the screen reader to create block-based code. Through formal survey and interview studies with teachers of students with visual impairments and students with visual impairments, we identify several challenges faced by students with visual impairments on block-based programming environments. Using the knowledge of these challenges and building on prior work, we explore how to leverage the keyboard and the screen reader to improve the accessibility of block-based programming environments through a prototype of an accessible block-based programming library. In this dissertation, our empirical evaluations demonstrate that people …
Date: December 2022
Creator: Mountapmbeme, Aboubakar
System: The UNT Digital Library
Joint Schemes for Physical Layer Security and Error Correction (open access)

The major challenges facing resource-constrained wireless devices are error resilience, security, and speed. Three joint schemes are presented in this research, which can be broadly divided into error-correction-based and cipher-based schemes. The error-correction-based ciphers take advantage of the properties of LDPC codes and the Nordstrom-Robinson code. A cipher-based cryptosystem is also presented in this research, whose complexity is reduced compared to conventional schemes. The security of the ciphers is analyzed against known-plaintext and chosen-plaintext attacks, and they are found to be secure. Randomization tests were also conducted on these schemes, and the results are presented. As a proof of concept, the schemes were implemented in software and hardware, showing a reduction in hardware usage compared to conventional schemes. As a result, joint schemes for error correction and security provide protection at the physical layer of wireless communication systems, a layer in the protocol stack where currently little or no security is implemented. In this physical layer security approach, the properties of powerful error-correcting codes are exploited to deliver reliability to the intended parties, high security against eavesdroppers, and efficiency in the communication system. The notion of a highly secure and reliable …
Date: August 2011
Creator: Adamo, Oluwayomi Bamidele
System: The UNT Digital Library
Trajectory Analytics (open access)

The numerous surveillance videos recorded by a single stationary wide-angle-view camera motivate the use of a moving point as the representation of each small object in a wide video scene. The sequence of positions of each moving point can be used to generate a trajectory containing both spatial and temporal information about the object's movement. In this study, we investigate how the relationship between two trajectories can be used to recognize multi-agent interactions. For this purpose, we present a simple set of qualitative atomic disjoint trajectory-segment relations which can be utilized to represent the relationships between two trajectories. Given a pair of adjacent concurrent trajectories, we segment the trajectory pair to get the ordered sequence of related trajectory segments. Each pair of corresponding trajectory segments is then assigned a token associated with the trajectory-segment relation, which leads to the generation of a string called a pairwise trajectory-segment relationship sequence. From a group of pairwise trajectory-segment relationship sequences, we utilize an unsupervised learning algorithm, particularly k-medians clustering, to detect interesting patterns that can be used to classify lower-level multi-agent activities. We evaluate the effectiveness of the proposed approach by comparing the activity classes predicted by our method to the actual classes from the …
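The relationship-sequence idea can be sketched as follows: assign each pair of concurrent trajectory segments a qualitative token based on how the inter-object distance changes, then concatenate the tokens into a string. The three-token vocabulary (approaching/receding/stable) is an illustrative assumption; the dissertation defines its own set of atomic relations.

```python
# Turn a pair of trajectories into a string of qualitative relation tokens.
import numpy as np

def relation_token(seg_a, seg_b, eps=0.1):
    """seg_a, seg_b: (k, 2) arrays of concurrent (x, y) positions."""
    d_start = np.linalg.norm(seg_a[0] - seg_b[0])
    d_end = np.linalg.norm(seg_a[-1] - seg_b[-1])
    if d_end < d_start - eps:
        return "A"                     # approaching
    if d_end > d_start + eps:
        return "R"                     # receding
    return "S"                         # stable distance

def relationship_sequence(traj_a, traj_b, k=5):
    """Segment two trajectories into windows of k points and emit tokens."""
    tokens = []
    for i in range(0, min(len(traj_a), len(traj_b)) - k + 1, k):
        tokens.append(relation_token(traj_a[i:i + k], traj_b[i:i + k]))
    return "".join(tokens)             # e.g. "AASSR"

a = np.cumsum(np.random.randn(30, 2), axis=0)   # toy random-walk tracks
b = np.cumsum(np.random.randn(30, 2), axis=0)
print(relationship_sequence(a, b))
```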
Date: May 2015
Creator: Santiteerakul, Wasana
System: The UNT Digital Library
The Procedural Generation of Interesting Sokoban Levels (open access)

As video games continue to become larger, more complex, and more costly to produce, research into methods that make game creation easier and faster becomes more valuable. One such research topic is procedural generation, which allows the computer to assist in the creation of content. This dissertation presents a new algorithm for the generation of Sokoban levels. Sokoban is a grid-based transport puzzle which is computationally interesting due to being PSPACE-complete. Beyond just generating levels, the question of whether or not the levels created by this algorithm are interesting to human players is explored. A study was carried out comparing player attention while playing hand-made levels versus procedurally generated levels; an auditory Stroop test was used to measure attention without disrupting play.
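The dissertation's generation algorithm is not reproduced here; as a toy illustration of the problem setting, this sketch lays out a walled room in standard Sokoban text notation (# wall, $ box, . goal, @ player), with solvability, the computationally hard part, left unchecked.

```python
# Naive random Sokoban layout; real generators must ensure solvability.
import random

def random_level(w=8, h=6, n_boxes=2):
    grid = [["#" if x in (0, w - 1) or y in (0, h - 1) else " "
             for x in range(w)] for y in range(h)]
    floor = [(x, y) for y in range(1, h - 1) for x in range(1, w - 1)]
    cells = random.sample(floor, 2 * n_boxes + 1)
    for x, y in cells[:n_boxes]:
        grid[y][x] = "$"               # box
    for x, y in cells[n_boxes:2 * n_boxes]:
        grid[y][x] = "."               # goal
    px, py = cells[-1]
    grid[py][px] = "@"                 # player
    return "\n".join("".join(row) for row in grid)

print(random_level())
```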
Date: May 2015
Creator: Taylor, Joshua
System: The UNT Digital Library
Scene Analysis Using Scale Invariant Feature Extraction and Probabilistic Modeling (open access)

Conventional pattern recognition systems have two components: feature analysis and pattern classification. For any object in an image, features can be considered the major characteristics of the object for either object recognition or object tracking purposes. Features extracted from a training image can be used to identify the object when attempting to locate it in a test image containing many other objects. To perform reliable scene analysis, it is important that the features extracted from the training image are detectable even under changes in image scale, noise, and illumination. Scale invariant features have wide applications in image processing, such as image classification, object recognition, and object tracking. In this thesis, color features and SIFT (scale invariant feature transform) are considered as scale invariant features. The classification, recognition, and tracking results were evaluated with a novel evaluation criterion and compared with existing methods. I also studied different types of scale invariant features for the purpose of solving scene analysis problems. I propose probabilistic models as the foundation for analyzing scene scenarios in images. In order to differentiate the content of images, I develop novel algorithms for the adaptive combination of multiple features extracted from images. I …
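A minimal sketch of SIFT-based matching with OpenCV follows: extract scale-invariant keypoints from a training image and match them against a test scene. The file names are placeholders, and the 0.75 ratio threshold is the commonly used Lowe heuristic rather than a value from the thesis.

```python
# SIFT keypoint extraction and ratio-test matching with OpenCV.
import cv2

train = cv2.imread("train_object.png", cv2.IMREAD_GRAYSCALE)  # placeholder
test = cv2.imread("test_scene.png", cv2.IMREAD_GRAYSCALE)     # placeholder

sift = cv2.SIFT_create()
kp1, desc1 = sift.detectAndCompute(train, None)
kp2, desc2 = sift.detectAndCompute(test, None)

# Ratio test: keep matches clearly better than the runner-up.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(desc1, desc2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} confident keypoint matches")
```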
Date: August 2011
Creator: Shen, Yao
System: The UNT Digital Library
Detection and Classification of Heart Sounds Using a Heart-Mobile Interface (open access)

An early detection of heart disease can save lives, caution individuals, and help determine the type of treatment to be given to patients. The first step in diagnosing heart disease is auscultation - listening to the heart sounds. The interpretation of heart sounds is subjective and requires professional skill to identify the abnormalities in these sounds. A medical practitioner uses a stethoscope to perform an initial screening by listening for irregular sounds from the patient's chest; later, echocardiography and electrocardiography tests are taken for further diagnosis. However, these tests are expensive and require specialized technicians to operate, so a simple and economical alternative is vital for monitoring in home care, rural hospitals, and urban clinics. This dissertation is focused on developing a patient-centered device for initial screening of heart sounds that is low cost and can be used by patients on themselves, who can later share the readings with their healthcare providers. An innovative mobile health service platform is created for analyzing and classifying heart sounds. Certain properties of heart sounds have to be evaluated to identify irregularities, such as the number of heart beats and gallops, intensity, frequency, and duration. Since …
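One of the listed properties, the number of heart beats, can be estimated by peak-picking the envelope of a phonocardiogram, as in this hedged SciPy sketch. The synthetic signal, sample rate, and threshold/spacing parameters are illustrative assumptions, not the platform's actual pipeline.

```python
# Estimate heart rate from a (synthetic) phonocardiogram envelope.
import numpy as np
from scipy.signal import find_peaks, hilbert

fs = 2000                               # sample rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)            # 10 s of toy audio
pcg = np.sin(2 * np.pi * 25 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.95)

envelope = np.abs(hilbert(pcg))         # amplitude envelope of the sound
peaks, _ = find_peaks(envelope, height=0.5, distance=int(0.4 * fs))

beats_per_min = len(peaks) * 60 / 10
print(f"estimated rate: {beats_per_min:.0f} bpm")
```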
Date: December 2016
Creator: Thiyagaraja, Shanti
System: The UNT Digital Library
Improving Memory Performance for Both High Performance Computing and Embedded/Edge Computing Systems (open access)

The CPU-memory bottleneck is a widely recognized problem. It is known that the majority of high performance computing (HPC) database systems are configured with large memories and dedicated to processing specific workloads like weather prediction, molecular dynamics simulations, etc. My research on optimal address mapping improves memory performance by increasing channel- and bank-level parallelism. In another research direction, I proposed and evaluated adaptive page migration techniques that obviate the need for offline analysis of an application to determine page migration strategies. Furthermore, I explored different migration strategies, such as reverse migration and sub-page migration, that I found to be beneficial depending on the application behavior. Ideally, page migration strategies redirect the demand memory traffic to faster memory to improve memory performance. In my third contribution, I developed and evaluated a memory-side accelerator that assists the main computational core in locating the non-zero elements of the sparse matrices typically used in scientific and machine learning workloads, on a low-power embedded system configuration. Thus my contributions narrow the speed gap by improving the latency and/or bandwidth between CPU and memory.
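To illustrate how address mapping creates channel- and bank-level parallelism, this sketch decodes a physical address into channel/bank/column/row fields, placing channel and bank bits just above the cache-line offset so that consecutive cache lines spread across channels and banks. The field widths are illustrative, not the dissertation's optimal mapping.

```python
# Decode a physical address under an assumed interleaved DRAM mapping.
def decode(addr, line_bits=6, ch_bits=2, bank_bits=4, col_bits=7):
    addr >>= line_bits                  # drop the 64 B cache-line offset
    channel = addr & ((1 << ch_bits) - 1); addr >>= ch_bits
    bank = addr & ((1 << bank_bits) - 1); addr >>= bank_bits
    column = addr & ((1 << col_bits) - 1); addr >>= col_bits
    return {"channel": channel, "bank": bank, "column": column, "row": addr}

# Consecutive 64 B lines land on different channels, then different banks.
for a in range(0, 5 * 64, 64):
    print(hex(a), decode(a))
```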
Date: December 2021
Creator: Adavally, Shashank
System: The UNT Digital Library

Online Testing of Context-Aware Android Applications

This dissertation presents novel approaches to testing context-aware applications, which suffer from a cost-prohibitive number of context and GUI events and event combinations. The contributions of this work include: (1) a real-world context events dataset from 82 Android users over a 30-day period; (2) Markov models, Closed Sequential Pattern Mining (CloSPAN), deep neural networks (long short-term memory (LSTM) and gated recurrent units (GRU)), and conditional random fields (CRF) applied to predict context patterns; (3) data-driven test case generation techniques that insert events at the beginning of each test case in a round-robin manner, iterate through multiple context events at the beginning of each test case in a round-robin manner, and interleave real-world context event sequences and GUI events; and (4) systematically interleaving context with a combinatorial-based approach. The results of our empirical studies indicate (1) CRF outperforms the other models, predicting context events with an F1 score of about 60% on our dataset, and (2) ISFreqOne, which iterates over context events at the beginning of each test case in a round-robin manner as well as interleaves real-world context event sequences and GUI events at an interval one achieves …
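The interleaving strategy from contribution (3) can be sketched as follows: context events are cycled round-robin into a GUI event sequence at a fixed interval (interval one shown). The event names are placeholders; the dissertation's techniques additionally draw context sequences from the real-world dataset.

```python
# Interleave context events into a GUI event sequence, round-robin.
from itertools import cycle

def interleave(gui_events, context_events, interval=1):
    ctx = cycle(context_events)          # round-robin over context events
    out = []
    for i, e in enumerate(gui_events):
        if i % interval == 0:
            out.append(next(ctx))
        out.append(e)
    return out

gui = ["tap_compose", "type_text", "tap_send"]
context = ["wifi_off", "rotate_landscape", "battery_low"]
print(interleave(gui, context))
# ['wifi_off', 'tap_compose', 'rotate_landscape', 'type_text', ...]
```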
Date: December 2021
Creator: Piparia, Shraddha
System: The UNT Digital Library
Machine-Learning-Enabled Cooperative Perception on Connected Autonomous Vehicles (open access)

The main research objective of this dissertation is to understand the sensing and communication challenges of achieving cooperative perception among autonomous vehicles and then, using the insights gained, guide the design of a suitable format for the data to be exchanged and of reliable and efficient data fusion algorithms on vehicles. By understanding what data are exchanged among autonomous vehicles and how, from a machine learning perspective, it is possible to realize precise cooperative perception on autonomous vehicles, enabling massive amounts of sensor information to be shared amongst vehicles. I first discuss trustworthy perception information sharing on connected and autonomous vehicles. I then discuss how to achieve effective cooperative perception on autonomous vehicles via exchanging feature maps among vehicles. In the final methodology part, I propose a set of mechanisms to improve the solution proposed before, i.e., reducing the amount of data transmitted in the network to achieve efficient cooperative perception. The effectiveness and efficiency of our mechanisms are analyzed and discussed.
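A hedged sketch of feature-map-level sharing follows: a vehicle fuses its own intermediate CNN feature map with one received from a peer (assumed already aligned to a common frame), and sparsifies what it transmits to save bandwidth. Elementwise max fusion and the sparsification threshold are illustrative choices, not necessarily the dissertation's exact mechanisms.

```python
# Fuse exchanged CNN feature maps and sparsify the transmitted payload.
import numpy as np

C, H, W = 64, 100, 100                  # channels and spatial grid (toy)
own_features = np.random.rand(C, H, W).astype(np.float32)
received = np.random.rand(C, H, W).astype(np.float32)  # from a peer vehicle

fused = np.maximum(own_features, received)   # per-cell strongest response

# Sparsifying before transmission reduces network load (efficiency theme):
mask = received > 0.9                        # keep only strong activations
sparse_payload = received * mask
print(f"transmitted fraction: {mask.mean():.2%}")
```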
Date: December 2021
Creator: Guo, Jingda
System: The UNT Digital Library