Latest content added for UNT Digital Library Search
https://digital2.library.unt.edu/search/?t=fulltext&fq=str_degree_department%3ADepartment+of+Computer+Science+and+Engineering&sort=creator
2024-01-27T21:46:08-06:00
UNT Libraries
This is a custom feed for searching UNT Digital Library Search

Bayesian Probabilistic Reasoning Applied to Mathematical Epidemiology for Predictive Spatiotemporal Analysis of Infectious Diseases
2008-05-05T14:02:08-05:00
https://digital.library.unt.edu/ark:/67531/metadc5302/
<p><a href="https://digital.library.unt.edu/ark:/67531/metadc5302/"><img alt="Bayesian Probabilistic Reasoning Applied to Mathematical Epidemiology for Predictive Spatiotemporal Analysis of Infectious Diseases" title="Bayesian Probabilistic Reasoning Applied to Mathematical Epidemiology for Predictive Spatiotemporal Analysis of Infectious Diseases" src="https://digital.library.unt.edu/ark:/67531/metadc5302/thumbnail/"/></a></p><p>Abstract: Probabilistic reasoning under uncertainty is well suited to the analysis of disease dynamics. The stochastic nature of disease progression is modeled by applying the principles of Bayesian learning. Bayesian learning predicts the disease progression, including prevalence and incidence, for a given geographic region and demographic composition. Public health resources, prioritized by the risk levels of the population, can efficiently minimize disease spread and curtail an epidemic as early as possible. A Bayesian network representing an outbreak of influenza and pneumonia in one geographic region is ported to a new region with a different demographic composition. Upon analysis for the new region, the corresponding prevalence of influenza and pneumonia among its different demographic subgroups is inferred. Bayesian reasoning coupled with a disease timeline is used to reverse engineer an influenza outbreak for a given geographic and demographic setting.
The temporal flow of the epidemic among the different sections of the population is analyzed to identify the corresponding risk levels. Compared with uniformly distributed vaccination, prioritizing the limited vaccination resources for the higher-risk groups results in relatively lower influenza prevalence. HIV incidence in Texas from 1989 to 2002 is analyzed using demographic-based epidemic curves. Dynamic Bayesian networks are integrated with probability distributions of HIV surveillance data, coupled with census population data, to estimate the proportion of HIV incidence among the different demographic subgroups. Demographic-based risk analysis reveals a varied spectrum of HIV risk among the different demographic subgroups. A methodology using hidden Markov models is introduced that enables investigation of the impact of social behavioral interactions on the incidence and prevalence of infectious diseases. The methodology is presented in the context of simulated disease outbreak data for influenza. Probabilistic reasoning analysis enhances the understanding of disease progression in order to identify the critical points of surveillance, control and prevention.</p>
Boosting for Learning From Imbalanced, Multiclass Data Sets
2014-11-08T11:56:31-06:00
https://digital.library.unt.edu/ark:/67531/metadc407775/
<p><a href="https://digital.library.unt.edu/ark:/67531/metadc407775/"><img alt="Boosting for Learning From Imbalanced, Multiclass Data Sets" title="Boosting for Learning From Imbalanced, Multiclass Data Sets" src="https://digital.library.unt.edu/ark:/67531/metadc407775/thumbnail/"/></a></p><p>In many real-world applications, it is common to have an uneven number of examples among multiple classes.
The data imbalance, however, usually complicates the learning process, especially for the minority classes, and results in deteriorated performance. Boosting methods have been proposed to handle the imbalance problem, but these methods need longer training times and require diversity among the classifiers of the ensemble to achieve improved performance. Additionally, extending the boosting method to handle multi-class data sets is not straightforward. Examples of applications that suffer from imbalanced multi-class data can be found in face recognition, where tens of classes exist, and in capsule endoscopy, which suffers from massive imbalance between the classes. This dissertation introduces RegBoost, a new boosting framework to address imbalanced, multi-class problems. This method applies a weighted stratified sampling technique and incorporates a regularization term that accommodates multi-class data sets and automatically determines the error bound of each base classifier. The regularization parameter penalizes the classifier when it misclassifies instances that were correctly classified in the previous iteration. The parameter additionally reduces the bias towards the majority classes. Experiments are conducted using 12 diverse data sets with moderate to high imbalance ratios. The results demonstrate superior performance of the proposed method compared to several state-of-the-art algorithms for imbalanced, multi-class classification problems. More importantly, the sensitivity improvement of the minority classes using RegBoost is accompanied by an improvement of the overall accuracy for all classes. With unpredictability regularization, a diverse group of classifiers is created and the maximum accuracy improvement reaches above 24%. Using stratified undersampling, RegBoost exhibits the best efficiency. The reduction in computational cost is significant, reaching above 50%.
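The penalty described above (a base classifier pays extra for misclassifying instances it got right in the previous iteration) can be illustrated with a schematic boosting-style weight update. This is a toy sketch under assumed names and multipliers, not the RegBoost algorithm itself, whose exact update rule is not given in this abstract.

```python
def reweight(weights, prev_correct, curr_correct, penalty=2.0):
    """Boosting-style update: upweight misclassified instances, with an extra
    penalty when an instance was correct last round but is wrong now
    (a schematic version of the regularization described in the abstract)."""
    new_w = []
    for w, was_ok, is_ok in zip(weights, prev_correct, curr_correct):
        if is_ok:
            new_w.append(w)                    # correct this round: weight kept
        elif was_ok:
            new_w.append(w * (1.0 + penalty))  # penalized: flipped correct -> wrong
        else:
            new_w.append(w * 1.5)              # ordinary misclassification boost
    total = sum(new_w)
    return [w / total for w in new_w]          # renormalize to a distribution

w = reweight([0.25, 0.25, 0.25, 0.25],
             prev_correct=[True, True, False, False],
             curr_correct=[True, False, True, False])
```

After the update, the instance that flipped from correct to incorrect carries the largest weight, steering the next base classifier toward it.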
As the volume of training data increases, the efficiency gain of the proposed method becomes more significant.</p>
Online Construction of Android Application Test Suites
2018-01-27T07:36:46-06:00
https://digital.library.unt.edu/ark:/67531/metadc1062844/
<p><a href="https://digital.library.unt.edu/ark:/67531/metadc1062844/"><img alt="Online Construction of Android Application Test Suites" title="Online Construction of Android Application Test Suites" src="https://digital.library.unt.edu/ark:/67531/metadc1062844/thumbnail/"/></a></p><p>Mobile applications play an important role in the dissemination of computing and information resources. They are often used in domains such as mobile banking, e-commerce, and health monitoring. Cost-effective testing techniques in these domains are critical. This dissertation contributes novel techniques for automatic construction of mobile application test suites. In particular, this work provides solutions that focus on the prohibitively large number of possible event sequences that must be sampled in GUI-based mobile applications. This work makes three major contributions: (1) an automated GUI testing tool, Autodroid, that implements a novel online approach to automatic construction of Android application test suites; (2) probabilistic and combinatorial-based algorithms that systematically sample the input space of Android applications to generate test suites with GUI/context events; and (3) empirical studies to evaluate the cost-effectiveness of our techniques on real-world Android applications. Our experiments show that our techniques achieve better code coverage and event coverage compared to random test generation. We demonstrate that our techniques are useful for automatic construction of Android application test suites in the absence of source code and preexisting abstract models of an Application Under Test (AUT).
The insights derived from our empirical studies provide guidance to researchers and practitioners involved in the development of automated GUI testing tools for Android applications.</p>
Joint Schemes for Physical Layer Security and Error Correction
2012-05-17T21:47:00-05:00
https://digital.library.unt.edu/ark:/67531/metadc84159/
<p><a href="https://digital.library.unt.edu/ark:/67531/metadc84159/"><img alt="Joint Schemes for Physical Layer Security and Error Correction" title="Joint Schemes for Physical Layer Security and Error Correction" src="https://digital.library.unt.edu/ark:/67531/metadc84159/thumbnail/"/></a></p><p>The major challenges facing resource-constrained wireless devices are error resilience, security and speed. Three joint schemes are presented in this research, which can be broadly divided into error-correction-based and cipher-based schemes. The error-correction-based ciphers take advantage of the properties of LDPC codes and the Nordstrom-Robinson code. A cipher-based cryptosystem is also presented in this research. The complexity of this scheme is reduced compared to conventional schemes. The security of the ciphers is analyzed against known-plaintext and chosen-plaintext attacks, and the ciphers are found to be secure. Randomization tests were also conducted on these schemes and the results are presented. As a proof of concept, the schemes were implemented in software and hardware, and the implementations show a reduction in hardware usage compared to conventional schemes. As a result, joint schemes for error correction and security provide security to the physical layer of wireless communication systems, a layer in the protocol stack where currently little or no security is implemented. In this physical layer security approach, the properties of powerful error correcting codes are exploited to deliver reliability to the intended parties, high security against eavesdroppers, and efficiency in the communication system.
The notion of a highly secure and reliable physical layer has the potential to significantly change how communication system designers and users think of the physical layer, since the error control codes employed in this work have the dual roles of providing both reliability and security.</p>
VLSI Architecture and FPGA Prototyping of a Secure Digital Camera for Biometric Application
2008-05-05T14:43:06-05:00
https://digital.library.unt.edu/ark:/67531/metadc5393/
<p><a href="https://digital.library.unt.edu/ark:/67531/metadc5393/"><img alt="VLSI Architecture and FPGA Prototyping of a Secure Digital Camera for Biometric Application" title="VLSI Architecture and FPGA Prototyping of a Secure Digital Camera for Biometric Application" src="https://digital.library.unt.edu/ark:/67531/metadc5393/thumbnail/"/></a></p><p>This thesis presents a secure digital camera (SDC) that inserts biometric data into images found in forms of identification such as the newly proposed electronic passport. However, putting biometric data in passports makes the data vulnerable to theft, raising privacy-related issues. An effective solution to combating unauthorized access such as skimming (obtaining data from the passport's owner who did not willingly submit the data) or eavesdropping (intercepting information as it moves from the chip to the reader) is the judicious use of watermarking and encryption at the source end of the biometric process, in hardware such as digital cameras or scanners. To address such issues, a novel approach and its architecture in the framework of a digital camera, conceptualized as an SDC, is presented. The SDC inserts biometric data into the passport image with the aid of watermarking and encryption processes. The VLSI (very large scale integration) architecture of the functional units of the SDC, such as the watermarking and encryption units, is presented.
The results of the hardware implementation of the Rijndael advanced encryption standard (AES) and a discrete cosine transform (DCT) based visible and invisible watermarking algorithm are presented. The prototype chip can carry out simultaneous encryption and watermarking, which to our knowledge is the first of its kind. The encryption unit has a throughput of 500 Mbit/s, and the visible and invisible watermarking units have maximum frequencies of 96.31 MHz and 256 MHz, respectively.</p>
Improving Memory Performance for Both High Performance Computing and Embedded/Edge Computing Systems
2022-01-08T15:18:03-06:00
https://digital.library.unt.edu/ark:/67531/metadc1873542/
<p><a href="https://digital.library.unt.edu/ark:/67531/metadc1873542/"><img alt="Improving Memory Performance for Both High Performance Computing and Embedded/Edge Computing Systems" title="Improving Memory Performance for Both High Performance Computing and Embedded/Edge Computing Systems" src="https://digital.library.unt.edu/ark:/67531/metadc1873542/thumbnail/"/></a></p><p>The CPU-memory bottleneck is a widely recognized problem. It is known that the majority of high performance computing (HPC) database systems are configured with large memories and dedicated to processing specific workloads such as weather prediction and molecular dynamics simulations. My research on optimal address mapping improves memory performance by increasing channel- and bank-level parallelism. In another research direction, I proposed and evaluated adaptive page migration techniques that obviate the need for offline analysis of an application to determine page migration strategies. Furthermore, I explored different migration strategies, such as reverse migration and sub-page migration, that I found to be beneficial depending on the application behavior. Ideally, page migration strategies redirect the demand memory traffic to faster memory to improve memory performance.
In my third contribution, I designed and evaluated a memory-side accelerator to assist the main computational core in locating the non-zero elements of the sparse matrices typically used in scientific and machine learning workloads, on a low-power embedded system configuration. Thus, my contributions narrow the speed gap by improving the latency and/or bandwidth between the CPU and memory.</p>
Qos Aware Service Oriented Architecture
2015-03-08T17:44:37-05:00
https://digital.library.unt.edu/ark:/67531/metadc500032/
<p><a href="https://digital.library.unt.edu/ark:/67531/metadc500032/"><img alt="Qos Aware Service Oriented Architecture" title="Qos Aware Service Oriented Architecture" src="https://digital.library.unt.edu/ark:/67531/metadc500032/thumbnail/"/></a></p><p>Service-oriented architecture enables web services to operate in a loosely-coupled setting and provides an environment for dynamic discovery and use of services over a network using standards such as WSDL, SOAP, and UDDI. Web services have both functional and non-functional characteristics. This thesis work proposes to add QoS descriptions (non-functional properties) to WSDL and compose various services to form a business process.
This composition of web services also considers QoS properties along with functional properties, and the composed service can again be published as a new web service and become part of any other composition using its Composed WSDL.</p>
Evaluating Appropriateness of Emg and Flex Sensors for Classifying Hand Gestures
2014-02-01T18:14:03-06:00
https://digital.library.unt.edu/ark:/67531/metadc271769/
<p><a href="https://digital.library.unt.edu/ark:/67531/metadc271769/"><img alt="Evaluating Appropriateness of Emg and Flex Sensors for Classifying Hand Gestures" title="Evaluating Appropriateness of Emg and Flex Sensors for Classifying Hand Gestures" src="https://digital.library.unt.edu/ark:/67531/metadc271769/thumbnail/"/></a></p><p>Hand and arm gestures are a great way to communicate when you don't want to be heard: quieter and often more reliable than whispering into a radio mike. In recent years, hand gesture identification has become a major active area of research due to its use in various applications. The objective of my work is to develop an integrated sensor system that will enable tactical squads and SWAT teams to communicate in the absence of a line of sight or in the presence of obstacles. The gesture set involved in this work comprises the standardized hand signals for close range engagement operations used by military and SWAT teams, broadly divided into finger movements and arm movements. The core components of the integrated sensor system are surface EMG sensors, flex sensors and accelerometers. Surface EMG is the electrical activity produced by muscle contractions and is measured by sensors directly attached to the skin. Bend sensors use a piezoresistive material to detect bending; the sensor output is determined by both the angle between the ends of the sensor and the flex radius. Accelerometers sense dynamic acceleration and inclination in three directions simultaneously.
EMG sensors are placed on the upper and lower forearm and assist in the classification of finger and wrist movements. Bend sensors are mounted on a glove that is worn on the hand; the sensors are located over the first knuckle of each finger and can determine whether the finger is bent. An accelerometer is attached to the glove at the base of the wrist and determines the speed and direction of the arm movement. A support vector machine (SVM) classification algorithm is used to classify the gestures.</p>
Hybrid Optimization Models for Depot Location-Allocation and Real-Time Routing of Emergency Deliveries
2021-05-26T21:29:14-05:00
https://digital.library.unt.edu/ark:/67531/metadc1808406/
<p><a href="https://digital.library.unt.edu/ark:/67531/metadc1808406/"><img alt="Hybrid Optimization Models for Depot Location-Allocation and Real-Time Routing of Emergency Deliveries" title="Hybrid Optimization Models for Depot Location-Allocation and Real-Time Routing of Emergency Deliveries" src="https://digital.library.unt.edu/ark:/67531/metadc1808406/thumbnail/"/></a></p><p>Prompt and efficient intervention is vital in reducing casualty figures during epidemic outbreaks, disasters, sudden civil strife or terrorist attacks. This can only be achieved if there is a fit-for-purpose and location-specific emergency response plan in place, incorporating geographical, time and vehicular capacity constraints. In this research, a comprehensive emergency response model for situations of uncertainty (in locations' demand and available resources), typically encountered in low-resource countries, is designed. It involves the development of algorithms for optimizing pre- and post-disaster activities.
The studies result in the development of four models: (1) an adaptation of a machine learning clustering algorithm for pre-positioning depots and emergency operation centers, which optimizes the placement of these depots such that the largest geographical area is covered and the maximum number of individuals is reached with minimal facility cost; (2) an optimization algorithm for routing relief distribution, using heterogeneous fleets of vehicles, with considerations for uncertainties in humanitarian supplies; (3) a genetic algorithm-based route improvement model; and (4) a model for integrating possible new locations into the routing network, in real time, using emergency severity ranking, with a high priority on the most vulnerable population. The clustering approach to solving the depot location-allocation problem produces better time complexity, and benchmarking the routing algorithm against existing approaches yields competitive outcomes.</p>
Traffic Forecasting Applications Using Crowdsourced Traffic Reports and Deep Learning
2020-06-15T19:38:58-05:00
https://digital.library.unt.edu/ark:/67531/metadc1703305/
<p><a href="https://digital.library.unt.edu/ark:/67531/metadc1703305/"><img alt="Traffic Forecasting Applications Using Crowdsourced Traffic Reports and Deep Learning" title="Traffic Forecasting Applications Using Crowdsourced Traffic Reports and Deep Learning" src="https://digital.library.unt.edu/ark:/67531/metadc1703305/thumbnail/"/></a></p><p>Intelligent transportation systems (ITS) are essential tools for traffic planning, analysis, and forecasting that can utilize the huge amount of traffic data available nowadays. In this work, we aggregated detailed traffic flow sensor data, Waze reports, OpenStreetMap (OSM) features, and weather data from the California Bay Area for 6 months. Using these data, we studied three novel ITS applications using convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
The first experiment is an analysis of the relation between roadway shapes and accident occurrence, where results show that the speed limit and number of lanes are significant predictors of major accidents on highways. The second experiment presents a novel method for forecasting congestion severity using crowdsourced data only (Waze, OSM, and weather), without the need for traffic sensor data. The third experiment studies the improvement of traffic flow forecasting using accidents, number of lanes, weather, and time-related features, where results show significant performance improvements when the additional features were used.</p>
New Frameworks for Secure Image Communication in the Internet of Things (IoT)
2016-08-31T22:41:47-05:00
https://digital.library.unt.edu/ark:/67531/metadc862721/
<p><a href="https://digital.library.unt.edu/ark:/67531/metadc862721/"><img alt="New Frameworks for Secure Image Communication in the Internet of Things (IoT)" title="New Frameworks for Secure Image Communication in the Internet of Things (IoT)" src="https://digital.library.unt.edu/ark:/67531/metadc862721/thumbnail/"/></a></p><p>The continuous expansion of technology, broadband connectivity and the wide range of new devices in the IoT cause serious concerns regarding privacy and security. In addition, a key challenge in the IoT is the storage and management of massive data streams. For example, there is always a demand for images of acceptable size with the highest possible quality to meet the rapidly increasing number of multimedia applications. The effort in this dissertation contributes to the resolution of concerns related to the security and compression functions in image communications in the Internet of Things (IoT), given the fast evolution of the IoT. This dissertation proposes frameworks for a secure digital camera in the IoT. The objectives of this dissertation are twofold.
On the one hand, the proposed framework architecture offers a double layer of protection, encryption and watermarking, that addresses all issues related to security, privacy, and digital rights management (DRM) by applying a hardware architecture of the state-of-the-art image compression technique Better Portable Graphics (BPG), which achieves a high compression ratio with small size. On the other hand, the proposed SBPG framework is integrated with the secure digital camera (SDC). Thus, the proposed SBPG framework integrated with the SDC is suitable for high performance imaging in the IoT, such as Intelligent Traffic Surveillance (ITS) and Telemedicine. Since power consumption has become a major concern in any portable application, a low-power design of SBPG is proposed to achieve an energy-efficient SBPG design. As the visual quality of the watermarked and compressed images improves with larger values of PSNR, the results show that the proposed SBPG substantially increases the quality of the watermarked compressed images. Higher PSNR values also show how robust the algorithm is to different types of attack. From the results obtained for the energy-efficient SBPG design, it can be observed that power consumption is substantially reduced, by up to 19%.</p>
Deep Learning Methods to Investigate Online Hate Speech and Counterhate Replies to Mitigate Hateful Content
2023-07-08T22:17:55-05:00
https://digital.library.unt.edu/ark:/67531/metadc2137556/
<p><a href="https://digital.library.unt.edu/ark:/67531/metadc2137556/"><img alt="Deep Learning Methods to Investigate Online Hate Speech and Counterhate Replies to Mitigate Hateful Content" title="Deep Learning Methods to Investigate Online Hate Speech and Counterhate Replies to Mitigate Hateful Content" src="https://digital.library.unt.edu/ark:/67531/metadc2137556/thumbnail/"/></a></p><p>Hateful content and offensive language are commonplace on social media platforms.
Many surveys show that high percentages of social media users experience online harassment. Previous efforts have been made to detect and remove online hate content automatically. However, removing users' content restricts free speech. A complementary strategy to address hateful content that does not interfere with free speech is to counter the hate with new content that diverts the discourse away from the hate. In this dissertation, we address the lack of previous work on counterhate arguments by analyzing and detecting them. First, we study the relationships between hateful tweets and replies. Specifically, we analyze their fine-grained relationships by indicating whether the reply counters the hate, provides a justification, attacks the author of the tweet, or adds additional hate. The most obvious finding is that most replies generally agree with the hateful tweets; only 20% of them counter the hate. Second, we focus on the hate directed toward individuals and detect authentic counterhate arguments from online articles. We propose a methodology that ensures the authenticity of the argument and its specificity to the individual of interest. We show that finding arguments in online articles is an efficient alternative to counterhate generation approaches, which may hallucinate unsupported arguments. Third, we investigate the replies to counterhate tweets beyond whether the reply agrees or disagrees with the counterhate tweet. We analyze the language of the counterhate tweet that leads to certain types of replies and predict which counterhate tweets may elicit more hate instead of stopping it. We find that counterhate tweets containing profanity elicit replies that agree with the counterhate tweet.
This dissertation presents several corpora, detailed corpus analyses, and deep learning-based approaches for the three tasks mentioned above.</p>
Comparison and Evaluation of Existing Analog Circuit Simulator using Sigma-Delta Modulator
2008-05-05T15:08:40-05:00
https://digital.library.unt.edu/ark:/67531/metadc5422/
<p><a href="https://digital.library.unt.edu/ark:/67531/metadc5422/"><img alt="Comparison and Evaluation of Existing Analog Circuit Simulator using Sigma-Delta Modulator" title="Comparison and Evaluation of Existing Analog Circuit Simulator using Sigma-Delta Modulator" src="https://digital.library.unt.edu/ark:/67531/metadc5422/thumbnail/"/></a></p><p>In the world of VLSI (very large scale integration) technology, there are many different types of circuit simulators that are used to design and predict circuit behavior before actual fabrication of the circuit. In this thesis, I compared and evaluated existing circuit simulators by considering standard benchmark circuits. The circuit simulators I evaluated and explored are Ngspice, Tclspice, Winspice (open source) and Spectre® (commercial). I also tested standard benchmarks using these circuit simulators and compared their outputs. The simulators are evaluated using design metrics in order to quantify their performance and identify efficient circuit simulators. In addition, I designed a sigma-delta modulator and its individual components using the analog behavioral language Verilog-A. Initially, I performed simulations of the individual components of the sigma-delta modulator and later of the whole system.
Finally, CMOS (complementary metal-oxide semiconductor) transistor-level circuits were designed for the differential amplifier, operational amplifier and comparator of the modulator.</p>
Toward Leveraging Artificial Intelligence to Support the Identification of Accessibility Challenges
2023-07-08T22:20:19-05:00
https://digital.library.unt.edu/ark:/67531/metadc2137559/
<p><a href="https://digital.library.unt.edu/ark:/67531/metadc2137559/"><img alt="Toward Leveraging Artificial Intelligence to Support the Identification of Accessibility Challenges" title="Toward Leveraging Artificial Intelligence to Support the Identification of Accessibility Challenges" src="https://digital.library.unt.edu/ark:/67531/metadc2137559/thumbnail/"/></a></p><p>The goal of this thesis is to support the automated identification of accessibility issues in user reviews or bug reports, to help technology professionals prioritize their handling and, thus, create more inclusive apps. In particular, we propose a model that takes accessibility user reviews or bug reports as input and learns their keyword-based features to make a classification decision, for a given review, on whether it is about accessibility or not. Our empirically driven study follows a mixture of qualitative and quantitative methods. We introduce models that can accurately identify accessibility reviews and bug reports and automate their detection, classifying app reviews and bug reports as accessibility-related or not, so that developers can easily detect accessibility issues with their products and, using the users' input, improve them into more accessible and inclusive apps. Our goal is to create a sustainable change by including a model in the developer's software maintenance pipeline and raising awareness of existing errors that hinder the accessibility of mobile apps, which is a pressing need.
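The keyword-based classification decision described above can be illustrated with a toy model. This is a hypothetical sketch: the keyword list and matching rule are invented for illustration, whereas the thesis's actual model learns its keyword-based features from labeled data.

```python
# Toy illustration of classifying a review as accessibility-related or not.
# The keyword list below is hypothetical, not taken from the thesis.
ACCESSIBILITY_KEYWORDS = {"screen reader", "voiceover", "contrast",
                          "font size", "captions", "accessibility"}

def is_accessibility_review(text, keywords=ACCESSIBILITY_KEYWORDS):
    """Flag a review as accessibility-related if it mentions any keyword."""
    lowered = text.lower()
    return any(kw in lowered for kw in keywords)

print(is_accessibility_review("The app crashes when I rotate the screen"))  # False
print(is_accessibility_review("VoiceOver skips the checkout button"))       # True
```

A learned model replaces the fixed keyword set with weights estimated from annotated reviews, but the decision interface is the same: review text in, accessibility-related flag out.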
Our findings from the Blackboard case study show that Blackboard and the course material are not easily accessible to deaf and hard-of-hearing students. Thus, deaf students found learning extremely stressful during the pandemic.</p>
I Cannot See You—The Perspectives of Deaf Students to Online Learning during COVID-19 Pandemic: Saudi Arabia Case Study
2022-01-14T16:33:20-06:00
https://digital.library.unt.edu/ark:/67531/metadc1877587/
<p><a href="https://digital.library.unt.edu/ark:/67531/metadc1877587/"><img alt="I Cannot See You—The Perspectives of Deaf Students to Online Learning during COVID-19 Pandemic: Saudi Arabia Case Study" title="I Cannot See You—The Perspectives of Deaf Students to Online Learning during COVID-19 Pandemic: Saudi Arabia Case Study" src="https://digital.library.unt.edu/ark:/67531/metadc1877587/thumbnail/"/></a></p><p>This article investigates the e-learning experiences of deaf students, focusing on the college of the Technical and Vocational Training Corporation (TVTC) in the Kingdom of Saudi Arabia (KSA). In particular, it studies the challenges and concerns faced by deaf students during the sudden shift to online learning. Results report problems with internet access, inadequate support, and inaccessibility of content from learning systems, among other issues.
The authors argue that institutions should consider a procedure for creating more accessible technology that is adaptable during the pandemic to serve individuals with diverse needs.</p>
Scalable Next Generation Blockchains for Large Scale Complex Cyber-Physical Systems and Their Embedded Systems in Smart Cities
2023-09-21T07:33:41-05:00
https://digital.library.unt.edu/ark:/67531/metadc2179314/
<p><a href="https://digital.library.unt.edu/ark:/67531/metadc2179314/"><img alt="Scalable Next Generation Blockchains for Large Scale Complex Cyber-Physical Systems and Their Embedded Systems in Smart Cities" title="Scalable Next Generation Blockchains for Large Scale Complex Cyber-Physical Systems and Their Embedded Systems in Smart Cities" src="https://digital.library.unt.edu/ark:/67531/metadc2179314/thumbnail/"/></a></p><p>The original FlexiChain and its descendants are a revolutionary distributed ledger technology (DLT) for cyber-physical systems (CPS) and their embedded systems (ES). FlexiChain, a DLT implementation, uses cryptography, distributed ledgers, peer-to-peer communications, scalable networks, and consensus. FlexiChain facilitates data structure agreements. This dissertation offers a Block Directed Acyclic Graph (BDAG) architecture that links blocks to their forerunners to speed up validation; these data blocks are securely linked. It also introduces Proof of Rapid Authentication, a novel consensus algorithm. This innovative method uses a distributed file to safely store a unique identifier (UID) based on node attributes, in order to verify two blocks faster. This study also addresses CPS hardware security. A system of interconnected, user-unique identifiers allows each block's history to be monitored, recording each transaction and the validators who checked the block to ensure trustworthiness and honesty. We constructed a digital version that stays in sync with the distributed ledger, as all nodes are linked by a NodeChain.
The ledger is distributed without compromising node autonomy. Moreover, a FlexiChain Layer 0 distributed ledger is also introduced that can connect and validate Layer 1 blockchains. This project produced a DAG-based blockchain integration platform with hardware security. The results illustrate a practical technique for creating a system tailored to diverse applications' needs. This research's design and execution demonstrated faster authentication, lower cost, reduced complexity, greater scalability, higher interoperability, and reduced power consumption.</p>FruitPAL: An IoT-Enabled Framework for Automatic Monitoring of Fruit Consumption in Smart Healthcare2024-01-27T21:46:08-06:00https://digital.library.unt.edu/ark:/67531/metadc2257710/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc2257710/"><img alt="FruitPAL: An IoT-Enabled Framework for Automatic Monitoring of Fruit Consumption in Smart Healthcare" title="FruitPAL: An IoT-Enabled Framework for Automatic Monitoring of Fruit Consumption in Smart Healthcare" src="https://digital.library.unt.edu/ark:/67531/metadc2257710/thumbnail/"/></a></p><p>This research proposes FruitPAL and FruitPAL 2.0, fully automatic devices that detect fruit consumption to reduce the risk of disease. Allergies to fruits can seriously impair the immune system. A novel device (FruitPAL) that detects fruits that can cause allergic reactions is proposed in this thesis. The device can detect fifteen types of fruit and alert the caregiver when an allergic reaction may have occurred. The YOLOv8 model is employed to enhance accuracy and response time in detecting hazards. The notification is transmitted to a mobile device through the cloud, as it is a commonly used medium. The proposed device detects fruit with an overall precision of 86%.
FruitPAL 2.0 is envisioned as a device that encourages people to consume fruit. Fruits contain a variety of essential nutrients that contribute to the general health of the human body. FruitPAL 2.0 analyzes the consumed fruit and determines its nutritional value. Its model has been trained on YOLOv5 v6.0, and it achieves an overall precision of 90% in detecting fruit.
The purpose of this study is to encourage fruit consumption except when it could cause illness. Even though fruit plays an important role in people's health, it can also pose risks. The proposed work not only alerts people to fruits that can cause allergies but also encourages people to consume fruits that are beneficial for their health.</p>Frameworks for Attribute-Based Access Control (ABAC) Policy Engineering2020-09-07T10:29:05-05:00https://digital.library.unt.edu/ark:/67531/metadc1707241/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc1707241/"><img alt="Frameworks for Attribute-Based Access Control (ABAC) Policy Engineering" title="Frameworks for Attribute-Based Access Control (ABAC) Policy Engineering" src="https://digital.library.unt.edu/ark:/67531/metadc1707241/thumbnail/"/></a></p><p>In this dissertation we propose semi-automated top-down policy engineering approaches for attribute-based access control (ABAC) development. Further, we propose a hybrid ABAC policy engineering approach to combine the benefits and address the shortcomings of both top-down and bottom-up approaches. In particular, we propose three frameworks: (i) ABAC attributes extraction, (ii) ABAC constraints extraction, and (iii) hybrid ABAC policy engineering. The attributes extraction framework comprises five modules that operate together to extract attribute values from natural language access control policies (NLACPs); map the extracted values to attribute keys; and assign each key-value pair to an appropriate entity. For the ABAC constraints extraction framework, we design a two-phase process to extract ABAC constraints from NLACPs. The process begins with the identification phase, which focuses on identifying the right boundary of constraint expressions. Next is the normalization phase, which aims at extracting the actual elements that pose a constraint. Finally, our hybrid ABAC policy engineering framework consists of five modules.
This framework combines top-down and bottom-up policy engineering techniques to overcome the shortcomings of both approaches and to generate policies that are more intuitive and relevant to actual organization policies. With this, we believe that our work takes essential steps towards a semi-automated ABAC policy development experience.</p>3GPP Long Term Evolution LTE Scheduling2015-01-27T06:52:49-06:00https://digital.library.unt.edu/ark:/67531/metadc490046/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc490046/"><img alt="3GPP Long Term Evolution LTE Scheduling" title="3GPP Long Term Evolution LTE Scheduling" src="https://digital.library.unt.edu/ark:/67531/metadc490046/thumbnail/"/></a></p><p>Future generation cellular networks are expected to deliver ubiquitous broadband access to an ever-increasing number of subscribers. Long Term Evolution (LTE) represents a significant milestone towards fourth-generation (4G) cellular networks. A key feature of LTE is the implementation of enhanced Radio Resource Management (RRM) mechanisms to improve system performance. The structure of LTE networks was simplified by reducing the number of core network nodes. Also, the design of the radio protocol architecture is quite unique. In order to achieve high data rates in LTE, the 3rd Generation Partnership Project (3GPP) selected Orthogonal Frequency Division Multiplexing (OFDM) as the appropriate scheme for the downlink. However, the proper scheme for the uplink is Single-Carrier Frequency Division Multiple Access (SC-FDMA) due to the peak-to-average power ratio (PAPR) constraint. LTE packet scheduling plays a primary role as part of RRM in improving the system’s data rate as well as supporting the various QoS requirements of mobile services. The major function of the LTE packet scheduler is to assign Physical Resource Blocks (PRBs) to mobile User Equipment (UE). In our work, we propose a packet scheduler algorithm.
The proposed scheduler algorithm acts based on the number of UEs attached to the eNodeB. To evaluate it, we considered two different scenarios based on the number of UEs. When the number of UEs is lower than the number of PRBs, the UEs with the highest Channel Quality Indicator (CQI) values are assigned PRBs. Otherwise, the scheduler assigns PRBs based on a given proportional fairness metric. The eNodeB’s throughput increased when the proposed algorithm was implemented.</p>Radio Resource Control Approaches for LTE-Advanced Femtocell Networks2018-09-26T18:16:59-05:00https://digital.library.unt.edu/ark:/67531/metadc1248385/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc1248385/"><img alt="Radio Resource Control Approaches for LTE-Advanced Femtocell Networks" title="Radio Resource Control Approaches for LTE-Advanced Femtocell Networks" src="https://digital.library.unt.edu/ark:/67531/metadc1248385/thumbnail/"/></a></p><p>The architecture of mobile networks has dramatically evolved in order to fulfill the growing demand for wireless services and data. The radio resources used by current mobile networks are limited, while user demands are substantially increasing. In the future, a tremendous number of Internet applications are expected to be served by mobile networks. Therefore, increasing the capacity of mobile networks has become a vital issue. Heterogeneous networks (HetNets) have been considered a promising paradigm for future mobile networks. Accordingly, the concept of the small cell has been introduced in order to increase the capacity of mobile networks. A femtocell network is a kind of small cell network. Femtocells are deployed within macrocell coverage; they cover small areas and operate with low transmission power while providing high capacity. Also, UEs can be offloaded from macrocells to femtocells, increasing capacity. However, this introduces different technical challenges.
Interference has become one of the key challenges for deploying femtocells within a macrocell's coverage. The undesirable impact of interference can degrade the performance of mobile networks. Therefore, radio resource management mechanisms are needed in order to address the key challenges of deploying femtocells. The objective of this work is to introduce radio resource control approaches that increase mobile networks' capacity and alleviate the undesirable impact of interference. In addition, the proposed radio resource control approaches ensure coexistence between macrocells and femtocells in an LTE-Advanced environment. Firstly, a novel mechanism is proposed in order to address the interference challenge. The proposed approach mitigates the impact of interference by controlling the assignment of radio sub-channels and dynamically adjusting the transmission power. Secondly, a dynamic strategy is proposed for the fractional frequency reuse (FFR) mechanism. In the FFR mechanism, the whole spectrum is divided into four fixed sub-channels, and each sub-channel is assigned to a different sub-area after splitting the macrocell coverage area into four sub-areas. The objective of the proposed scheme is to divide the spectrum dynamically based on the QoS indicators for each sub-area. Lastly, a novel packet scheduling scheme is proposed to improve the performance of femtocell networks.
The proposed scheduling strategy assigns radio resources based on two objectives: increasing the network capacity and achieving better fairness among attached UEs.</p>A Data-Driven Computational Framework to Assess the Risk of Epidemics at Global Mass Gatherings2019-06-09T21:09:49-05:00https://digital.library.unt.edu/ark:/67531/metadc1505145/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc1505145/"><img alt="A Data-Driven Computational Framework to Assess the Risk of Epidemics at Global Mass Gatherings" title="A Data-Driven Computational Framework to Assess the Risk of Epidemics at Global Mass Gatherings" src="https://digital.library.unt.edu/ark:/67531/metadc1505145/thumbnail/"/></a></p><p>This dissertation presents a data-driven computational epidemic framework to simulate disease epidemics at global mass gatherings. The annual Muslim pilgrimage to Makkah, Saudi Arabia, is used to demonstrate the simulation and analysis of various disease transmission scenarios throughout the different stages of the event, from the arrival to the departure of international participants. The proposed agent-based epidemic model efficiently captures the demographic, spatial, and temporal heterogeneity at each stage of the global event of Hajj. Experimental results indicate the substantial impact of the demographic and mobility patterns of the heterogeneous population of pilgrims on the progression of the disease spread in the different stages of Hajj. In addition, these simulations suggest that the differences in the spatial and temporal settings in each stage can significantly affect the dynamics of the disease. Finally, the epidemic simulations conducted at the different stages in this dissertation illustrate the impact of the differences between the duration of each stage in the event and the length of the infectious and latent periods.
This research contributes to a better understanding of epidemic modeling in the context of global mass gatherings to predict the risk of disease pandemics caused by associated international travel. The computational modeling and disease spread simulations in global mass gatherings provide public health authorities with powerful tools to assess the implications of these events at different scales and to evaluate the efficacy of control strategies to reduce their potential impacts.</p>Exploring Analog and Digital Design Using the Open-Source Electric VLSI Design System2016-06-28T16:28:55-05:00https://digital.library.unt.edu/ark:/67531/metadc849770/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc849770/"><img alt="Exploring Analog and Digital Design Using the Open-Source Electric VLSI Design System" title="Exploring Analog and Digital Design Using the Open-Source Electric VLSI Design System" src="https://digital.library.unt.edu/ark:/67531/metadc849770/thumbnail/"/></a></p><p>The design of VLSI electronic circuits can be achieved at many different abstraction levels, starting from system behavior down to the most detailed, physical layout level. As the number of transistors in VLSI circuits increases, the complexity of the design also increases, and it is now beyond human ability to manage. Hence CAD (Computer-Aided Design) or EDA (Electronic Design Automation) tools are involved in the design. EDA or CAD tools automate the design, verification, and testing of these VLSI circuits. In today’s market, there are many EDA tools available. However, they are very expensive and require high-performance platforms. One of the key challenges today is to select appropriate open-source CAD or EDA tools for academic purposes. This thesis provides a detailed examination of an open-source EDA tool called the Electric VLSI Design System.
Electric fulfills these requirements: it is an excellent and efficient CAD tool that allows students and teachers to implement ideas by modifying the source code. The primary objective of this thesis is to explain Electric's features and architecture and to provide various digital and analog designs implemented with this software for educational purposes. Since the choice of an EDA tool is based on the efficiency and functions it can provide, this thesis explains all the analysis and synthesis tools that Electric provides and how efficient they are. Hence, this thesis benefits students and teachers who choose Electric as their open-source EDA tool for educational purposes.</p>Towards a Unilateral Sensing System for Detecting Person-to-Person Contacts2020-06-16T05:56:01-05:00https://digital.library.unt.edu/ark:/67531/metadc1703441/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc1703441/"><img alt="Towards a Unilateral Sensing System for Detecting Person-to-Person Contacts" title="Towards a Unilateral Sensing System for Detecting Person-to-Person Contacts" src="https://digital.library.unt.edu/ark:/67531/metadc1703441/thumbnail/"/></a></p><p>The contact patterns among individuals can significantly affect the progress of an infectious outbreak within a population. Gathering data about these interaction and mixing patterns is essential for computational modeling of infectious diseases. Various self-report approaches have been designed in different studies to collect data about contact rates and patterns. Recent advances in sensing technology provide researchers with bilateral automated data collection devices that facilitate contact gathering, overcoming the disadvantages of previous approaches. In this study, a novel unilateral wearable sensing architecture is proposed that overcomes the limitations of bilateral sensing.
Our unilateral wearable sensing system gathers contact data using hybrid sensor arrays embedded in a wearable shirt. A smartphone application transfers the collected sensor data to the cloud, where a deep learning model estimates the number of human contacts and the results are stored in the cloud database. The deep learning model has been developed on hand-labeled data over multiple experiments; it has been tested and evaluated, and the results are reported in this study. Sensitivity analysis has been performed to choose the most suitable image resolution and format for the model to estimate contacts and to analyze the model's consumption of computing resources.</p>Real-time Rendering of Burning Objects in Video Games2015-03-08T17:44:37-05:00https://digital.library.unt.edu/ark:/67531/metadc500131/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc500131/"><img alt="Real-time Rendering of Burning Objects in Video Games" title="Real-time Rendering of Burning Objects in Video Games" src="https://digital.library.unt.edu/ark:/67531/metadc500131/thumbnail/"/></a></p><p>In recent years there has been growing interest in limitless realism in computer graphics applications. Among these, my foremost concentration falls on complex physical simulations and modeling with diverse applications for the gaming industry. Various simulations have been visually successful by replicating the details of physical processes. As a result, some were convincing enough to draw the user into believable virtual worlds without breaking any sense of presence. In this research, I focus on fire simulation and its deformation process applied to various virtual objects. In most game engines, model loading takes place at the beginning of the game or when the game is transitioning between levels. Game models are stored in large data structures.
Changing or adjusting a large data structure while the game is running may adversely affect the performance of the game, so developers may choose to avoid procedural simulations to save resources and avoid performance interruptions. I introduce a process to implement real-time model deformation while maintaining performance. It is a challenging task to achieve high-quality simulation while utilizing minimal resources to represent multiple events in a timely manner. Especially in video games, the simulation must be robust enough to sustain the engaged player's willing suspension of disbelief. I have implemented and tested my method on a relatively modest GPU using CUDA. My experiments conclude that this method gives a believable visual effect while using a small fraction of CPU and GPU resources.</p>An Integrated Architecture for Ad Hoc Grids2008-05-05T14:02:14-05:00https://digital.library.unt.edu/ark:/67531/metadc5300/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc5300/"><img alt="An Integrated Architecture for Ad Hoc Grids" title="An Integrated Architecture for Ad Hoc Grids" src="https://digital.library.unt.edu/ark:/67531/metadc5300/thumbnail/"/></a></p><p>Extensive research has been conducted by the grid community to enable large-scale collaborations in pre-configured environments. Grid collaborations can vary in scale and motivation, resulting in a coarse classification of grids: national grid, project grid, enterprise grid, and volunteer grid. Despite the differences in scope and scale, all the traditional grids in practice share some common assumptions. They support mutually collaborative communities, adopt centralized control for membership, and assume a well-defined, non-changing collaboration. To support grid applications that do not conform to these assumptions, we propose the concept of ad hoc grids.
In the context of this research, we propose a novel architecture for ad hoc grids that integrates a suite of component frameworks. Specifically, our architecture combines the community management framework, security framework, abstraction framework, quality of service framework, and reputation framework. The overarching objective of our integrated architecture is to support a variety of grid applications in a self-controlled fashion with the help of a self-organizing ad hoc community. We introduce mechanisms in our architecture that successfully isolate malicious elements from the community, inherently improving the quality of grid services and extracting deterministic quality assurances from the underlying infrastructure. We also emphasize the technology independence of our architecture, thereby offering the requisite platform for technology interoperability. The feasibility of the proposed architecture is verified with a high-quality ad hoc grid implementation. Additionally, we have analyzed the performance and behavior of ad hoc grids with respect to several control parameters.</p>Resource Efficient and Scalable Routing using Intelligent Mobile Agents2008-02-15T14:30:24-06:00https://digital.library.unt.edu/ark:/67531/metadc4240/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc4240/"><img alt="Resource Efficient and Scalable Routing using Intelligent Mobile Agents" title="Resource Efficient and Scalable Routing using Intelligent Mobile Agents" src="https://digital.library.unt.edu/ark:/67531/metadc4240/thumbnail/"/></a></p><p>Many contemporary routing algorithms use simple mechanisms such as flooding or broadcasting to disseminate the routing information available to them. Such routing algorithms cause significant network resource overhead due to the large number of messages generated at each host/router throughout the route update process. Many of these messages are wasteful since they do not contribute to the route discovery process.
Reducing the resource overhead may allow such algorithms to be deployed in a wide range of networks (wireless and ad hoc) that require a simple routing protocol due to the limited availability of resources (memory and bandwidth). Motivated by the need to reduce the resource overhead associated with routing algorithms, a new implementation of the distance vector routing algorithm using an agent-based paradigm, known as Agent-based Distance Vector Routing (ADVR), has been proposed. In ADVR, responsibility for route discovery and message passing shifts from the nodes to individual agents that traverse the network, coordinate with each other, and successively update the routing tables of the nodes they visit.</p>Resource Management in Wireless Networks2008-05-05T14:43:07-05:00https://digital.library.unt.edu/ark:/67531/metadc5392/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc5392/"><img alt="Resource Management in Wireless Networks" title="Resource Management in Wireless Networks" src="https://digital.library.unt.edu/ark:/67531/metadc5392/thumbnail/"/></a></p><p>A local call admission control (CAC) algorithm for third-generation wireless networks was designed and implemented, which allows for the simulation of network throughput for different spreading factors and various mobility scenarios. A global CAC algorithm is also implemented and used as a benchmark since it is inherently optimized; it yields the best possible performance but has intensive computational complexity. The optimized local CAC algorithm achieves performance similar to the global CAC algorithm at a fraction of the computational cost. Design of a dynamic channel assignment
algorithm for IEEE 802.11 wireless systems is also presented. Channels are assigned dynamically to minimize the interference generated by neighboring access points on a reference access point. Analysis of the dynamic channel assignment algorithm shows an improvement by a factor of 4 over the default setting of having all access points use the same channel, resulting in significantly higher network throughput.</p>Space and Spectrum Engineered High Frequency Components and Circuits2016-02-09T16:37:48-06:00https://digital.library.unt.edu/ark:/67531/metadc801923/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc801923/"><img alt="Space and Spectrum Engineered High Frequency Components and Circuits" title="Space and Spectrum Engineered High Frequency Components and Circuits" src="https://digital.library.unt.edu/ark:/67531/metadc801923/thumbnail/"/></a></p><p>With the increasing demand for wireless and portable devices, radio frequency front-end blocks are required to feature properties such as wide bandwidth, high frequency, multiple operating frequencies, low cost, and compact size. However, current radio frequency system blocks are designed by combining several individual frequency-band blocks into one functional block, which increases the cost and size of devices. To address these issues, it is important to develop novel approaches to further advance current design methodologies in both the space and spectrum domains. In recent years, the concept of artificial materials has been proposed and studied intensively in the RF/microwave, terahertz, and optical frequency ranges. An artificial material is a combination of conventional materials such as air, wood, metal, and plastic, and it can achieve material properties that have not been found in nature. Therefore, artificial materials (i.e., metamaterials) provide design freedom to control both the spectral performance and the geometrical structure of radio frequency front-end blocks and other high-frequency systems.
In this dissertation, several artificial materials are proposed and designed by different methods, and their applications to different high-frequency components and circuits are studied. First, the quasi-conformal mapping (QCM) method is applied to design plasmonic wave adapters and couplers working in the optical frequency range. Second, an inverse QCM method is proposed to implement flattened Luneburg lens antennas and parabolic antennas in the microwave range. Third, a dual-band compact directional coupler is realized by applying artificial transmission lines. In addition, a fully symmetrical coupler with an artificial lumped-element structure is also implemented. Finally, a tunable on-chip inductor, compact CMOS transmission lines, and metamaterial-based interconnects are proposed using artificial metal structures. All the proposed designs are simulated in full-wave 3D electromagnetic solvers, and the measurement results agree well with the simulation results. These novel artificial-material-based design methodologies pave the way toward next-generation high-frequency circuit, component, and system design.</p>Statistical Strategies for Efficient Signal Detection and Parameter Estimation in Wireless Sensor Networks2014-11-08T11:56:31-06:00https://digital.library.unt.edu/ark:/67531/metadc407740/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc407740/"><img alt="Statistical Strategies for Efficient Signal Detection and Parameter Estimation in Wireless Sensor Networks" title="Statistical Strategies for Efficient Signal Detection and Parameter Estimation in Wireless Sensor Networks" src="https://digital.library.unt.edu/ark:/67531/metadc407740/thumbnail/"/></a></p><p>This dissertation investigates data reduction strategies from a signal processing perspective in centralized detection and estimation applications.
First, it considers a deterministic source observed by a network of sensors and develops an analytical strategy for ranking sensor transmissions based on the magnitude of their test statistics. The benefit of the proposed strategy is that the decision to transmit or not to transmit observations to the fusion center can be made at the sensor level, resulting in significant savings in transmission costs. A target-tracking sensor network application is simulated to demonstrate the benefits of the proposed strategy over the unconstrained energy approach. Second, it considers the detection of random signals in noisy measurements and evaluates the performance of eigenvalue-based signal detectors. Due to their computational simplicity, robustness, and performance, these detectors have recently received a lot of attention. When the observed random signal is correlated, several researchers claim that the performance of eigenvalue-based detectors exceeds that of the classical energy detector. However, such claims fail to consider the fact that when the signal is correlated, the optimal detector is the estimator-correlator and not the energy detector.
In this dissertation, through theoretical analyses and Monte Carlo simulations, eigenvalue-based detectors are shown to be suboptimal when compared to the energy detector and the estimator-correlator.</p>Application of Adaptive Techniques in Regression Testing for Modern Software Development2019-08-29T10:25:12-05:00https://digital.library.unt.edu/ark:/67531/metadc1538762/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc1538762/"><img alt="Application of Adaptive Techniques in Regression Testing for Modern Software Development" title="Application of Adaptive Techniques in Regression Testing for Modern Software Development" src="https://digital.library.unt.edu/ark:/67531/metadc1538762/thumbnail/"/></a></p><p>In this dissertation we investigate the applicability of different adaptive techniques to improve the effectiveness and efficiency of regression testing. Initially, we introduce the concept of regression testing. We then perform a literature review of current practices and state-of-the-art regression testing techniques. Finally, we advance regression testing techniques by performing four empirical studies in which we use different types of information (e.g., user sessions, source code, code commits) to investigate the effectiveness of each software metric on fault detection capability in different software environments. In our first empirical study, we show the effectiveness of applying user session information to test case prioritization. In our next study, we apply the lessons of the previous study and implement a collaborative filtering recommender system for test case prioritization, which uses user sessions and change history information as input parameters and returns the risk score associated with each component. The results of this study show that our recommender system improves the effectiveness of test prioritization; the performance of our approach was particularly noteworthy under time constraints.
We then investigate the merits of multi-objective testing over single-objective techniques with a graph-based testing framework. The results of this study indicate that the graph-based technique reduces algorithm execution time considerably while being just as effective as the greedy algorithms in terms of fault detection rate. Finally, we apply the knowledge from the previous studies and implement a query answering framework for regression test selection. This framework is built on a graph database and uses fault history information and test diversity in an attempt to select the most effective set of test cases in terms of fault detection capability. Our empirical evaluation with four open-source programs shows that our approach can be effective and efficient, selecting a far smaller subset of tests compared to existing techniques.</p>Privacy Management for Online Social Networks2014-04-23T20:20:45-05:00https://digital.library.unt.edu/ark:/67531/metadc283816/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc283816/"><img alt="Privacy Management for Online Social Networks" title="Privacy Management for Online Social Networks" src="https://digital.library.unt.edu/ark:/67531/metadc283816/thumbnail/"/></a></p><p>One in seven people in the world uses online social networking for a variety of purposes -- to keep in touch with friends and family, to share special occasions, to broadcast announcements, and more. The majority of society has bought into this new era of communication technology, which allows everyone on the internet to share information with friends. Since social networking has rapidly become a main form of communication, holes in privacy have become apparent. It has come to the point that the whole concept of sharing information requires restructuring. No longer are online social networks simply technology available for a niche market; they are in use by all of society.
Thus it is important not to forget that a sense of privacy is inherent as an evolutionary by-product of social intelligence. In any context of society, privacy needs to be part of the system in order to help users protect themselves from others. This dissertation attempts to address the lack of privacy management in online social networks by designing models that understand the social science behind how we form social groups and share information with each other. Social relationship strength was modeled using activity patterns, vocabulary usage, and behavioral patterns. In addition, automatic configuration of default privacy settings was proposed to help prevent new users from leaking personal information. This dissertation aims to mobilize a new era of social networking that understands the social aspects of human networks and uses that knowledge to honor users' privacy.</p>Blockchain for AI: Smarter Contracts to Secure Artificial Intelligence Algorithms2023-09-21T07:51:45-05:00https://digital.library.unt.edu/ark:/67531/metadc2179338/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc2179338/"><img alt="Blockchain for AI: Smarter Contracts to Secure Artificial Intelligence Algorithms" title="Blockchain for AI: Smarter Contracts to Secure Artificial Intelligence Algorithms" src="https://digital.library.unt.edu/ark:/67531/metadc2179338/thumbnail/"/></a></p><p>In this dissertation, I investigate the existing smart contract limitations that restrict cognitive abilities. I use Taylor series expansion, polynomial equations, and fraction-based computations to overcome the limitations of calculations in smart contracts. To prove the hypothesis, I use these mathematical models to compute complex operations of naive Bayes, linear regression, decision trees, and neural network algorithms on Ethereum public test networks.
The smart contracts achieve 95% prediction accuracy compared to traditional programming language models, proving the soundness of the numerical derivations. Many non-real-time applications can use our solution for trusted and secure prediction services.</p>Sensing and Decoding Brain States for Predicting and Enhancing Human Behavior, Health, and Security2016-08-31T22:41:47-05:00https://digital.library.unt.edu/ark:/67531/metadc862723/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc862723/"><img alt="Sensing and Decoding Brain States for Predicting and Enhancing Human Behavior, Health, and Security" title="Sensing and Decoding Brain States for Predicting and Enhancing Human Behavior, Health, and Security" src="https://digital.library.unt.edu/ark:/67531/metadc862723/thumbnail/"/></a></p><p>The human brain acts as an intelligent sensor by helping in effective signal communication and execution of logical functions and instructions, thus coordinating all functions of the human body. More importantly, it shows the potential to combine prior knowledge with adaptive learning, thus ensuring constant improvement. These qualities help the brain to interact efficiently with both the body (brain-body) and the environment (brain-environment). This dissertation attempts to apply these brain-body-environment interactions (BBEI) to elevate human existence and enhance our day-to-day experiences. For instance, when one stepped out of the house in the past, one had to carry keys (for unlocking), money (for purchasing), and a phone (for communication). With the advent of smartphones, this scenario changed completely, and today it is often enough to carry just one's smartphone because all the above activities can be performed with a single device. In the future, with advanced research and progress in BBEI, one will be able to perform many activities by dictating them in one's mind without any physical involvement.
This dissertation aims to shift the paradigm of existing brain-computer interfaces from just 'control' to 'monitor, control, enhance, and restore' in three main areas: healthcare, transportation safety, and cryptography. In healthcare, measures were developed for understanding brain-body interactions by correlating cerebral autoregulation with brain signals. The variation in the estimated blood flow of the brain (obtained through EEG) was detected with evoked changes in blood pressure, thus enabling EEG metrics to be used as a first-hand screening tool for impaired cerebral autoregulation. To enhance road safety, distracted drivers' behavior in various multitasking scenarios while driving was identified by significant changes in the time-frequency spectrum of the EEG signals. A distraction metric was calculated to rank the severity of a distraction task that can be used as an intuitive measure for distraction in people, analogous to the Richter scale for earthquakes. In cryptography, brain-environment interactions were qualitatively and quantitatively modeled to obtain cancelable biometrics and cryptographic keys using brain signals. Two different datasets were used to analyze the key generation process, and it was observed that the neurokeys established for every subject-task combination were unique, consistent, and can be revoked and re-issued in case of a breach. This dissertation envisions a future where humans and technology are intuitively connected by a seamless flow of information through 'the most intelligent sensor', the brain.</p>Unique Channel Email System2016-03-04T16:14:01-06:00https://digital.library.unt.edu/ark:/67531/metadc804980/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc804980/"><img alt="Unique Channel Email System" title="Unique Channel Email System" src="https://digital.library.unt.edu/ark:/67531/metadc804980/thumbnail/"/></a></p><p>Email connects 85% of the world.
This paper explores the pattern of information overload encountered by the majority of email users and examines what steps key email providers are taking to combat the problem. Besides fighting spam, popular email providers offer very limited tools to reduce the amount of unwanted incoming email. Rather, there has been a trend to expand storage space and aid the organization of email. Storing email is very costly and harmful to the environment. Additionally, information overload can be detrimental to productivity. We propose a simple solution that results in a drastic reduction of unwanted mail, also known as graymail.</p>The Role of Intelligent Mobile Agents in Network Management and Routing2007-09-25T21:20:32-05:00https://digital.library.unt.edu/ark:/67531/metadc2736/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc2736/"><img alt="The Role of Intelligent Mobile Agents in Network Management and Routing" title="The Role of Intelligent Mobile Agents in Network Management and Routing" src="https://digital.library.unt.edu/ark:/67531/metadc2736/thumbnail/"/></a></p><p>In this research, the application of intelligent mobile agents to the management of distributed network environments is investigated. Intelligent mobile agents are programs which can move about network systems in a deterministic manner while carrying their execution state. These agents can be considered an application of distributed artificial intelligence where the (usually small) agent code is moved to the data and executed locally. The mobile agent paradigm offers potential advantages over many conventional mechanisms which move (often large) data to the code, thereby wasting available network bandwidth. The performance of agents in network routing and knowledge acquisition has been investigated and simulated.
A working mobile agent system has also been designed and implemented in JDK 1.2.</p>Extrapolating Subjectivity Research to Other Languages2014-02-01T18:14:03-06:00https://digital.library.unt.edu/ark:/67531/metadc271777/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc271777/"><img alt="Extrapolating Subjectivity Research to Other Languages" title="Extrapolating Subjectivity Research to Other Languages" src="https://digital.library.unt.edu/ark:/67531/metadc271777/thumbnail/"/></a></p><p>Socrates articulated it best, "Speak, so I may see you." Indeed, language represents an invisible probe into the mind. It is the medium through which we express our deepest thoughts, our aspirations, our views, our feelings, our inner reality. From the beginning of artificial intelligence, researchers have sought to impart human-like understanding to machines. As much of our language represents a form of self-expression, capturing thoughts, beliefs, evaluations, opinions, and emotions which are not available for scrutiny by an outside observer, research involving these aspects of natural language has crystallized under the name of subjectivity and sentiment analysis. While subjectivity classification labels text as either subjective or objective, sentiment classification further divides subjective text into positive, negative, or neutral. In this thesis, I investigate techniques for generating tools and resources for subjectivity analysis that do not rely on an existing natural language processing infrastructure in a given language. This constraint is motivated by the fact that the vast majority of human languages are resource-scarce from an electronic point of view: they lack basic tools such as part-of-speech taggers and parsers, as well as basic resources such as electronic text, annotated corpora, and lexica.
This severely limits the implementation of techniques on par with those developed for English; by applying methods that make lighter use of text-processing infrastructure, we are able to conduct multilingual subjectivity research in these languages as well. Since my aim is also to minimize the amount of manual work required to develop lexica or corpora in these languages, the proposed techniques employ a lever approach, where English often acts as the donor language (the fulcrum in a lever) and allows, with a relatively minimal amount of effort, the establishment of preliminary subjectivity research in a target language.
The module has been designed and simulated in Xilinx ISE 9.1i and ModelSim SE 6.3e using the Verilog hardware description language.</p>Multi-perspective, Multi-modal Image Registration and Fusion2013-03-04T14:02:27-06:00https://digital.library.unt.edu/ark:/67531/metadc149562/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc149562/"><img alt="Multi-perspective, Multi-modal Image Registration and Fusion" title="Multi-perspective, Multi-modal Image Registration and Fusion" src="https://digital.library.unt.edu/ark:/67531/metadc149562/thumbnail/"/></a></p><p>Multi-modal image fusion is an active research area with many civilian and military applications. Fusion is defined as the strategic combination of information collected by various sensors, at different locations or of different types, in order to obtain a better understanding of an observed scene or situation. Fusion of multi-modal images cannot be completed unless the modalities are spatially aligned. In this research, I consider two important problems: multi-modal, multi-perspective image registration, and decision-level fusion of multi-modal images, in particular LiDAR and visual imagery. Multi-modal image registration is a difficult task due to the different semantic interpretation of features extracted from each modality. This problem is decoupled into three sub-problems. The first step is the identification and extraction of common features. The second step is the determination of corresponding points. The third step consists of determining the registration transformation parameters. Traditional registration methods use low-level features such as lines and corners. Using these features requires an extensive optimization search in order to determine the corresponding points. Many methods use global positioning systems (GPS) and a calibrated camera in order to obtain an initial estimate of the camera parameters. The advantages of our work over the previous works are the following.
First, I used high-level features, which significantly reduce the search space for the optimization process. Second, the determination of corresponding points is modeled as an assignment problem between a small number of objects. Furthermore, fusing LiDAR and visual images is beneficial due to the different and rich characteristics of both modalities. LiDAR data contain 3D information, while images contain visual information. Developing a fusion technique that uses the characteristics of both modalities is very important. I establish a decision-level fusion technique using manifold models.</p>Brain Computer Interface (BCI) Applications: Privacy Threats and Countermeasures2017-07-12T03:17:08-05:00https://digital.library.unt.edu/ark:/67531/metadc984122/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc984122/"><img alt="Brain Computer Interface (BCI) Applications: Privacy Threats and Countermeasures" title="Brain Computer Interface (BCI) Applications: Privacy Threats and Countermeasures" src="https://digital.library.unt.edu/ark:/67531/metadc984122/thumbnail/"/></a></p><p>In recent years, brain computer interfaces (BCIs) have gained popularity in non-medical domains such as the gaming, entertainment, personal health, and marketing industries. A growing number of companies offer various inexpensive consumer-grade BCIs, and some of these companies have recently introduced the concept of BCI "App stores" in order to facilitate the expansion of BCI applications and provide software development kits (SDKs) for other developers to create new applications for their devices. These BCI applications have access to users' unique brainwave signals, which allows them to make inferences about users' thoughts and mental processes. Since there are no specific standards that govern the development of BCI applications, their users are at risk of privacy breaches.
In this work, we perform the first comprehensive analysis of BCI App stores, including software development kits (SDKs), application programming interfaces (APIs), and BCI applications, with respect to privacy issues. The goal is to understand how brainwave signals are handled by BCI applications and what threats to the privacy of users exist. Our findings show that most applications have unrestricted access to users' brainwave signals and can easily extract private information about their users without them even noticing. We discuss potential privacy threats posed by current practices used in BCI App stores and then describe some countermeasures that could be used to mitigate the privacy threats. We also develop a prototype that gives BCI app users the choice to dynamically restrict access to their brain signals.</p>Detecting Component Failures and Critical Components in Safety Critical Embedded Systems using Fault Tree Analysis2018-06-06T13:19:50-05:00https://digital.library.unt.edu/ark:/67531/metadc1157555/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc1157555/"><img alt="Detecting Component Failures and Critical Components in Safety Critical Embedded Systems using Fault Tree Analysis" title="Detecting Component Failures and Critical Components in Safety Critical Embedded Systems using Fault Tree Analysis" src="https://digital.library.unt.edu/ark:/67531/metadc1157555/thumbnail/"/></a></p><p>Component failures can result in catastrophic behaviors in safety-critical embedded systems, sometimes resulting in loss of life. Component failures can be treated as off-nominal behaviors (ONBs) with respect to the components and subsystems involved in an embedded system. A lot of research is being carried out to tackle the problem of ONBs. These approaches mainly focus on system states (i.e., the desired and undesired states of a system at a given point in time) to detect ONBs.
In this paper, an approach is discussed to detect component failures and critical components of an embedded system. The approach is based on fault tree analysis (FTA), applied to the requirements specification of embedded systems at design time to find out the relationship between individual component failures and overall system failure. FTA helps in determining both qualitative and quantitative relationships between component failures and system failure. Analyzing the system at design time helps in detecting component failures and critical components, and helps in devising strategies to mitigate component failures at design time and improve the overall safety and reliability of a system.</p>GlobeChain: An Interoperable Blockchain for Global Sharing of Healthcare Data - A COVID-19 Perspective2022-02-21T13:06:10-06:00https://digital.library.unt.edu/ark:/67531/metadc1913262/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc1913262/"><img alt="GlobeChain: An Interoperable Blockchain for Global Sharing of Healthcare Data - A COVID-19 Perspective" title="GlobeChain: An Interoperable Blockchain for Global Sharing of Healthcare Data - A COVID-19 Perspective" src="https://digital.library.unt.edu/ark:/67531/metadc1913262/thumbnail/"/></a></p><p>Article introducing a blockchain-based medical data-sharing framework (called GlobeChain) to overcome the technical challenges of handling outbreak records. The challenges that might arise from the proposed blockchain-based framework are also presented as future directions that bear on the proposal's effectiveness.
This is the accepted manuscript version of the article.</p>Modeling and Simulation of the Vector-Borne Dengue Disease and the Effects of Regional Variation of Temperature in the Disease Prevalence in Homogenous and Heterogeneous Human Populations2016-08-31T22:41:47-05:00https://digital.library.unt.edu/ark:/67531/metadc862802/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc862802/"><img alt="Modeling and Simulation of the Vector-Borne Dengue Disease and the Effects of Regional Variation of Temperature in the Disease Prevalence in Homogenous and Heterogeneous Human Populations" title="Modeling and Simulation of the Vector-Borne Dengue Disease and the Effects of Regional Variation of Temperature in the Disease Prevalence in Homogenous and Heterogeneous Human Populations" src="https://digital.library.unt.edu/ark:/67531/metadc862802/thumbnail/"/></a></p><p>The history of mitigation programs to contain vector-borne diseases is a story of successes and failures. Due to the complex interplay among multiple factors that determine disease dynamics, the general principles for timely and specific intervention for incidence reduction or eradication of life-threatening diseases have yet to be determined. This research discusses computational methods developed to assist in the understanding of complex relationships affecting vector-borne disease dynamics. A computational framework to assist public health practitioners with exploring the dynamics of vector-borne diseases, such as malaria and dengue, in homogeneous and heterogeneous populations has been conceived, designed, and implemented. The framework integrates a stochastic computational model of interactions to simulate horizontal disease transmission. The intent of the computational modeling has been the integration of stochasticity during simulation of the disease progression while reducing the number of interactions necessary to simulate a disease outbreak.
While reducing the number of interactions needed to simulate disease dynamics improves computational time, the realization of those interactions can remain computationally expensive. Multi-threading technology has been used to improve performance over the original computational model, and the multi-threading experimental results have been tested and reported. In addition to the contact model, the modeling of biological processes specific to the corresponding pathogen-carrier vector has been integrated to increase the specificity of the vector-borne disease model. Last, automation for requesting, retrieving, parsing, and storing specific weather data and geospatial information from federal agencies to study the differences between homogeneous and heterogeneous populations has been implemented.</p>Freeform Cursive Handwriting Recognition Using a Clustered Neural Network2016-03-04T16:14:01-06:00https://digital.library.unt.edu/ark:/67531/metadc804845/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc804845/"><img alt="Freeform Cursive Handwriting Recognition Using a Clustered Neural Network" title="Freeform Cursive Handwriting Recognition Using a Clustered Neural Network" src="https://digital.library.unt.edu/ark:/67531/metadc804845/thumbnail/"/></a></p><p>Optical character recognition (OCR) software has advanced greatly in recent years. Machine-printed text can be scanned and converted to searchable text with word accuracy rates around 98%. Reasonably neat hand-printed text can be recognized with about 85% word accuracy. However, cursive handwriting remains a challenge, with state-of-the-art performance still around 75%. Algorithms based on hidden Markov models have been only moderately successful, while recurrent neural networks have delivered the best results to date. This thesis explored the feasibility of using a special type of feedforward neural network to convert freeform cursive handwriting to searchable text.
The hidden nodes in this network were grouped into clusters, with each cluster being trained to recognize a unique character bigram. The network was trained on writing samples that were pre-segmented and annotated. Post-processing was facilitated in part by using the network to identify overlapping bigrams that were then linked together to form words and sentences. With dictionary-assisted post-processing, the network achieved word accuracy of 66.5% on a small, proprietary corpus. The contributions in this thesis are threefold: 1) the novel clustered architecture of the feedforward neural network, 2) the development of an expanded set of observers combining image masks, modifiers, and feature characterizations, and 3) the use of overlapping bigrams as the textual working unit to assist in context analysis and reconstruction.</p>Toward Supporting Fine-Grained, Structured, Meaningful and Engaging Feedback in Educational Applications2019-01-19T21:34:31-06:00https://digital.library.unt.edu/ark:/67531/metadc1404562/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc1404562/"><img alt="Toward Supporting Fine-Grained, Structured, Meaningful and Engaging Feedback in Educational Applications" title="Toward Supporting Fine-Grained, Structured, Meaningful and Engaging Feedback in Educational Applications" src="https://digital.library.unt.edu/ark:/67531/metadc1404562/thumbnail/"/></a></p><p>Recent advancements in machine learning have started to put their mark on educational technology. Technology is evolving fast and, as people adopt it, schools and universities must also keep up (nearly 70% of primary and secondary schools in the UK are now using tablets for various purposes). As these numbers are likely to follow the same increasing trend, it is imperative for schools to adapt and benefit from the advantages offered by technology: real-time processing of data, availability of different resources through connectivity, efficiency, and many others.
To this end, this work contributes to the growth of educational technology by developing several algorithms and models that are meant to ease several tasks for instructors, engage students in deep discussions, and ultimately increase their learning gains.
First, a novel, fine-grained knowledge representation is introduced that splits phrases into their constituent propositions, which are both meaningful and minimal. An automated algorithm for extracting these propositions is also introduced. Compared with other fine-grained representations, the extraction model does not require any human labor after it is trained, while the results show considerable improvement over two meaningful baselines.
Second, a proposition alignment model is created that relies on even finer-grained units of text while also outperforming several alternative systems. Third, a detailed machine-learning-based analysis of students' unrestricted natural language responses to questions asked in classrooms is made by leveraging the proposition extraction algorithm to make computational predictions of textual assessment. Two computational approaches are introduced that use and compare manually engineered machine learning features with word embeddings input into a neural network with two hidden layers. Both methods achieve notable improvements over two alternative approaches: a recent short answer grading system and DiSAN, a recent, pre-trained, lightweight neural network that obtained state-of-the-art performance on multiple NLP tasks and corpora.
Fourth, a clustering algorithm is introduced in order to bring structure to the feedback offered to instructors in classrooms. The algorithm organizes student responses based on three important aspects: propositional importance classifications, computational assessment of student understanding, and similarity metrics between student responses. Moreover, a dynamic cluster selection algorithm is designed to decide which groups of responses resulting from the cluster hierarchy are best. The algorithm achieves a performance that is 86.3% of the performance achieved by humans on the same task and dataset.
Fifth, a deep neural network is built to predict, for each cluster, an engagement response that is meant to help generate insightful classroom discussion. This is the first computational model to predict how engaging student responses will be in classroom discussion. Its performance reaches 86.8% of the performance obtained by humans on the same task and dataset. Moreover, I demonstrate the effectiveness of a dynamic algorithm that can self-improve with minimal help from the teachers, reducing its relative error by up to 32%.</p>A New Look at Retargetable Compilers2015-08-21T05:42:39-05:00https://digital.library.unt.edu/ark:/67531/metadc699988/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc699988/"><img alt="A New Look at Retargetable Compilers" title="A New Look at Retargetable Compilers" src="https://digital.library.unt.edu/ark:/67531/metadc699988/thumbnail/"/></a></p><p>Consumers demand new and innovative personal computing devices every two years, when their cellular phone service contracts are renewed. Yet, a two-year development cycle for the concurrent development of both hardware and software is nearly impossible. As more components and features are added to the devices, maintaining this two-year cycle with current tools will become commensurately harder. This dissertation delves into the feasibility of simplifying the development of such systems by employing heterogeneous systems on a chip in conjunction with a retargetable compiler such as the hybrid computer retargetable compiler (Hy-C). An example of a simple architecture description of sufficient detail for use with a retargetable compiler like Hy-C is provided. As a software engineer with 30 years of experience, I have witnessed numerous system failures. A plethora of software development paradigms and tools have been employed to prevent software errors, but none have been completely successful.
Much of the discussion centers on software development in the military contracting market, as that is my background. The dissertation reviews those tools, as well as some existing retargetable compilers, in an attempt to determine how those errors occurred and how a system like Hy-C could assist in reducing future software errors. In the end, a simple retargetable solution like Hy-C is shown to be powerful enough to provide a very capable product in a fast-growing market.</p>Content and Temporal Analysis of Communications to Predict Task Cohesion in Software Development Global Teams2017-07-12T03:17:08-05:00https://digital.library.unt.edu/ark:/67531/metadc984118/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc984118/"><img alt="Content and Temporal Analysis of Communications to Predict Task Cohesion in Software Development Global Teams" title="Content and Temporal Analysis of Communications to Predict Task Cohesion in Software Development Global Teams" src="https://digital.library.unt.edu/ark:/67531/metadc984118/thumbnail/"/></a></p><p>Virtual teams in industry are increasingly being used to develop software, create products, and accomplish tasks. However, analyzing those collaborations under same-time/different-place conditions is well-known to be difficult. In order to overcome some of these challenges, this research was concerned with the study of collaboration-based, content-based, and temporal measures and their ability to predict cohesion within global software development projects. Messages were collected from three software development projects that involved students from two different countries. The similarities and quantities of these interactions were computed and analyzed at individual and group levels. Results of interaction-based metrics showed that the collaboration variables most related to Task Cohesion were Linguistic Style Matching and Information Exchange.
The study also found that Information Exchange rate and Reply rate have a significant and positive correlation with Task Cohesion, a factor used to describe participants' engagement in the global software development process. This relation was also found at the group level. All these results suggest that metrics based on rate can be very useful for predicting cohesion in virtual groups. Similarly, content features based on communication categories were used to improve the identification of Task Cohesion levels. This model showed mixed results, since only Work similarity and Social rate were found to be correlated with Task Cohesion. This result can be explained by the fact that a group's cohesiveness is often associated with fairness and trust, and that these two factors are often achieved through increased social and work communication. Also, at the group level, all models were found to be correlated with Task Cohesion, specifically Similarity+Rate, which suggests that models that include social and work communication categories are also good predictors of team cohesiveness. Finally, temporal interaction similarity measures were calculated to assess their prediction capabilities in a global setting. Results showed a significant negative correlation between Pacing Rate and Task Cohesion, which suggests that frequent communication increases the cohesion between team members. The study also found a positive correlation between Coherence Similarity and Task Cohesion, which indicates the importance of establishing a rhythm within a team. In addition, the temporal models at individual and group levels were found to be good predictors of Task Cohesion, which indicates the existence of a strong effect of frequent and rhythmic communication on cohesion related to the task. The contributions in this dissertation are threefold.
1) Novel use of temporal measures to describe a team's rhythmic interactions, 2) development of new, quantifiable factors for analyzing different characteristics of a team's communications, and 3) identification of interesting factors for predicting Task Cohesion levels among global teams.</p>Investigating the Extractive Summarization of Literary Novels2012-10-02T16:18:49-05:00https://digital.library.unt.edu/ark:/67531/metadc103298/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc103298/"><img alt="Investigating the Extractive Summarization of Literary Novels" title="Investigating the Extractive Summarization of Literary Novels" src="https://digital.library.unt.edu/ark:/67531/metadc103298/thumbnail/"/></a></p><p>
Due to the vast amount of information we are faced with, summarization has become a critical necessity of everyday human life. Given that a large fraction of the electronic documents available online and elsewhere consist of short texts such as Web pages, news articles, scientific reports, and others, the focus of natural language processing techniques to date has been on the automation of methods targeting short documents. However, we are witnessing a change: an increasing number of books are becoming available in electronic format. This means that the need for language processing techniques able to handle very large documents such as books is becoming increasingly important. This thesis addresses the problem of summarization of novels, which are long and complex literary narratives. While there is a significant body of research that has been carried out on the task of automatic text summarization, most of this work has been concerned with the summarization of short documents, with a particular focus on news stories. However, novels are different in both length and genre, and consequently different summarization techniques are required.
This thesis attempts to close this gap by analyzing a new domain for summarization, and by building unsupervised and supervised systems that effectively take into account the properties of long documents, and outperform the traditional extractive summarization systems typically targeting the news genre.</p>Using Reinforcement Learning in Partial Order Plan Space2008-05-05T14:14:05-05:00https://digital.library.unt.edu/ark:/67531/metadc5232/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc5232/"><img alt="Using Reinforcement Learning in Partial Order Plan Space" title="Using Reinforcement Learning in Partial Order Plan Space" src="https://digital.library.unt.edu/ark:/67531/metadc5232/thumbnail/"/></a></p><p>Partial order planning is an important approach that solves planning problems without completely specifying the orderings between the actions in the plan. This property provides greater flexibility in executing plans, hence making partial order planners a preferred choice over other planning methodologies. However, in order to find partially ordered plans, partial order planners perform a search in plan space rather than in the space of world states, and an uninformed search in plan space leads to poor efficiency. In this thesis, I discuss applying a reinforcement learning method, the First-visit Monte Carlo method, to partial order planning in order to design agents which do not need any training data or heuristics but are still able to make informed decisions in plan space based on experience. Communicating effectively with the agent is crucial in reinforcement learning. 
I address how this task was accomplished in plan space and present the results of an evaluation on a blocks-world test bed.</p>Natural Language Interfaces to Databases2008-05-05T15:02:09-05:00https://digital.library.unt.edu/ark:/67531/metadc5474/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc5474/"><img alt="Natural Language Interfaces to Databases" title="Natural Language Interfaces to Databases" src="https://digital.library.unt.edu/ark:/67531/metadc5474/thumbnail/"/></a></p><p>Natural language interfaces to databases (NLIDB) are systems that aim to bridge the gap between the languages used by humans and computers, and automatically translate natural language sentences to database queries. This thesis proposes a novel approach to NLIDB, using graph-based models. The system starts by collecting as much information as possible from existing databases and sentences, and transforms this information into a knowledge base for the system. Given a new question, the system uses this knowledge to analyze and translate the sentence into its corresponding database query statement. The graph-based NLIDB system uses English as the natural language, a relational database model, and SQL as the formal query language. In experiments performed with natural language questions run against a large database containing information about U.S. 
geography, the system showed good performance compared to the state of the art in the field.</p>Measuring Vital Signs Using Smart Phones2011-05-04T13:11:57-05:00https://digital.library.unt.edu/ark:/67531/metadc33139/<p><a href="https://digital.library.unt.edu/ark:/67531/metadc33139/"><img alt="Measuring Vital Signs Using Smart Phones" title="Measuring Vital Signs Using Smart Phones" src="https://digital.library.unt.edu/ark:/67531/metadc33139/thumbnail/"/></a></p><p>Smart phones today have become increasingly popular with the general public for their diverse capabilities, such as navigation, social networking, and multimedia, to name a few. These phones are equipped with high-end processors, high-resolution cameras, built-in sensors such as an accelerometer, an orientation sensor, and a light sensor, and much more. According to a comScore survey, 25.3% of US adults use smart phones in their daily lives. Motivated by the capability of smart phones and their extensive usage, I focused on utilizing them for bio-medical applications. In this thesis, I present a new application for a smart phone to quantify vital signs such as heart rate, respiratory rate, and blood pressure with the help of its built-in sensors. Using the camera and a microphone, I have shown how the blood pressure and heart rate can be determined for a subject. People sometimes encounter minor incidents such as fainting, or fatal accidents such as a car crash, at unexpected times and places. It would be useful to have a device which can measure all vital signs in such an event. The second part of this thesis demonstrates a new mode of communication for next generation 9-1-1 calls. In this new architecture, the call-taker will be able to control the multimedia elements in the phone from a remote location. This would help the call-taker or first responder to have better control over the situation. Transmission of the vital signs measured using the smart phone can be a lifesaver in critical situations. 
In today's voice-oriented 9-1-1 calls, the dispatcher first collects critical information (e.g., location, call-back number) from the caller and assesses the situation. Meanwhile, the dispatchers constantly face a "60-second dilemma"; i.e., within 60 seconds, they need to make a complicated but important decision: whether to dispatch and, if so, what to dispatch. The dispatchers often feel that they lack sufficient information to make a confident dispatch decision. The remote media control described in this system will be able to facilitate information acquisition and decision-making in emergency situations within the 60-second response window of 9-1-1 calls, using new multimedia technologies.</p>