An Efficient Approach for Dengue Mitigation: A Computational Framework (open access)

Dengue mitigation is a major research area among scientists working toward effective management of the dengue epidemic. Effective dengue mitigation requires several other important components: accurate epidemic modeling, efficient epidemic prediction, and efficient resource allocation for controlling the spread of the disease. Past studies assumed a homogeneous response of the dengue epidemic to climate conditions across regions. The dengue epidemic, however, is both climate dependent and geographically dependent, and a global model is not sufficient to capture its local variations. We propose a novel method of epidemic modeling that accounts for local variation by using a micro ensemble of regressors for each region. Three regressors are used in constructing the ensemble: support vector regression, ordinary least squares regression, and k-nearest neighbors regression. The best-performing regressors are selected into the ensemble. The proposed ensemble determines the risk of dengue epidemic in each region in advance, and this risk is then used for risk-based resource allocation. The proposed resource allocation scheme is built on the genetic algorithm, with major modifications to its main components, …
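
The per-region micro ensemble can be pictured as follows: a minimal sketch, assuming scikit-learn, a feature matrix X of climate variables, and a target y of case counts for one region. The selection rule (cross-validated error, keep the best performers) is an illustrative stand-in for the dissertation's exact criterion.

```python
# A minimal sketch of the per-region micro-ensemble idea. X (climate
# features) and y (case counts) are hypothetical inputs for one region;
# the dissertation's exact selection rule may differ.
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

def build_micro_ensemble(X, y, keep=2):
    """Score the three candidate regressors and keep the best performers."""
    candidates = {
        "svr": SVR(kernel="rbf"),
        "ols": LinearRegression(),
        "knn": KNeighborsRegressor(n_neighbors=5),
    }
    scores = {name: cross_val_score(m, X, y, cv=5,
                                    scoring="neg_mean_squared_error").mean()
              for name, m in candidates.items()}
    # Higher (less negative) score is better; keep the top `keep` regressors.
    best = sorted(scores, key=scores.get, reverse=True)[:keep]
    return [candidates[name].fit(X, y) for name in best]

def predict_risk(ensemble, X_new):
    """Average the selected regressors' forecasts as the region's risk signal."""
    return np.mean([m.predict(X_new) for m in ensemble], axis=0)
```
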
Date: May 2019
Creator: Dinayadura, Nirosha
System: The UNT Digital Library
Data-Driven Decision-Making Framework for Large-Scale Dynamical Systems under Uncertainty (open access)

Managing large-scale dynamical systems (e.g., transportation systems, complex information systems, and power networks) in real time is very challenging considering their complicated system dynamics, intricate network interactions, large scale, and especially the existence of various uncertainties. To address this issue, intelligent techniques are needed that can quickly design decision-making strategies robust to uncertainties. This dissertation aims to conquer these challenges by exploring a data-driven decision-making framework, which leverages big-data techniques and scalable uncertainty evaluation approaches to quickly solve optimal control problems. In particular, the following techniques have been developed along this direction: 1) system modeling approaches that simplify the system analysis and design procedures for multiple applications; 2) effective simulation-based and analytical approaches to efficiently evaluate system performance and design control strategies under uncertainty; and 3) big-data techniques that allow some computations of control strategies to be completed offline. These techniques and tools for analysis, design, and control contribute to a wide range of applications, including air traffic flow management, complex information systems, and airborne networks.
Date: August 2016
Creator: Xie, Junfei
System: The UNT Digital Library
Methodical Evaluation of Processing-in-Memory Alternatives (open access)

In this work, I characterized a series of potential application kernels using a set of architectural and non-architectural metrics, and performed a comparison of four alternatives for processing-in-memory (PIM) cores: ARM cores, GPGPUs, a coarse-grained reconfigurable dataflow architecture (DF-PIM), and a domain-specific architecture using a SIMD PIM engine consisting of a series of multiply-accumulate circuits (MACs). For each PIM alternative, I investigated how performance and energy efficiency change with respect to a series of system parameters, such as memory bandwidth and latency, number of PIM cores, DVFS states, and cache architecture. In addition, I compared the PIM core choices for a subset of applications and discussed how the application characteristics correlate with the achieved performance and energy efficiency. Furthermore, I compared the PIM alternatives to a host-centric solution that uses a traditional server-class CPU core or PIM-like cores acting as host-side accelerators instead of being part of 3D-stacked memories. Such insights can expose the achievable performance limits and shortcomings of certain PIM designs and show sensitivity to a series of system parameters (available memory bandwidth, application latency and bandwidth sensitivity, etc.). In addition, identifying the common application characteristics for PIM kernels provides an opportunity to identify similar types of computation patterns in …
Date: May 2019
Creator: Scrbak, Marko
System: The UNT Digital Library
New Frameworks for Secure Image Communication in the Internet of Things (IoT) (open access)

The continuous expansion of technology, broadband connectivity, and the wide range of new devices in the IoT causes serious concerns regarding privacy and security. In addition, a key challenge in the IoT is the storage and management of massive data streams. For example, there is a constant demand for images of acceptable size with the highest possible quality to serve the rapidly increasing number of multimedia applications. Given the fast evolution of the IoT, the effort in this dissertation contributes to resolving concerns related to the security and compression functions of image communications in the Internet of Things (IoT). This dissertation proposes frameworks for a secure digital camera (SDC) in the IoT. The objectives of this dissertation are twofold. On the one hand, the proposed framework architecture offers a double layer of protection, encryption and watermarking, that addresses all issues related to security, privacy, and digital rights management (DRM) by applying a hardware architecture of the state-of-the-art image compression technique Better Portable Graphics (BPG), which achieves a high compression ratio with a small size. On the other hand, the proposed SBPG framework is integrated with the digital camera. Thus, the proposed framework of SBPG integrated with SDC is suitable …
Date: August 2016
Creator: Albalawi, Umar Abdalah S
System: The UNT Digital Library
Sensing and Decoding Brain States for Predicting and Enhancing Human Behavior, Health, and Security (open access)

The human brain acts as an intelligent sensor by helping in effective signal communication and execution of logical functions and instructions, thus coordinating all functions of the human body. More importantly, it shows the potential to combine prior knowledge with adaptive learning, thus ensuring constant improvement. These qualities help the brain interact efficiently with both the body (brain-body) and the environment (brain-environment). This dissertation attempts to apply brain-body-environment interactions (BBEI) to elevate human existence and enhance our day-to-day experiences. For instance, when one stepped out of the house in the past, one had to carry keys (for unlocking), money (for purchasing), and a phone (for communication). With the advent of smartphones, this scenario changed completely, and today it is often enough to carry just one's smartphone, because all the above activities can be performed with a single device. In the future, with advanced research and progress in BBEI, one will be able to perform many activities by dictating them in one's mind without any physical involvement. This dissertation aims to shift the paradigm of existing brain-computer interfaces from just ‘control' to ‘monitor, control, enhance, and restore' in three main areas: healthcare, transportation safety, and cryptography. …
Date: August 2016
Creator: Bajwa, Garima
System: The UNT Digital Library
Improving Software Quality through Syntax and Semantics Verification of Requirements Models (open access)

Software defects can frequently be traced to poorly specified requirements. Many software teams manage their requirements using tools such as checklists and databases, which lack a formal semantic mapping to system behavior. Such a mapping can be especially helpful for safety-critical systems. Another limitation of many requirements analysis methods is that much of the analysis must still be done manually. We propose techniques that automate portions of the requirements analysis process and clarify the syntax and semantics of requirements models using a variety of methods, including machine learning tools and our own tool, VeriCCM. The machine learning tools help us identify potential model elements and verify their correctness. VeriCCM, a formalized extension of the causal component model (CCM), uses formal methods to ensure that requirements are well formed, as well as providing the beginnings of a full formal semantics. We also explore the use of statecharts to identify potential abnormal behaviors from a given set of requirements. At each stage, we perform empirical studies to evaluate the effectiveness of our proposed approaches.
Date: December 2018
Creator: Gaither, Danielle
System: The UNT Digital Library

A Top-Down Policy Engineering Framework for Attribute-Based Access Control

The purpose of this study is to propose a top-down policy engineering framework for attribute-based access control (ABAC) that aims to automatically extract access control policies (ACPs) from requirements specification documents and then, using the extracted policies, build or update an ABAC model. We specify a procedure that consists of three main components: 1) ACP sentence identification, 2) policy element extraction, and 3) ABAC model creation and update. ACP sentence identification processes unrestricted natural language documents and identifies the sentences that carry ACP content. We propose and compare three methodologies from different disciplines, namely deep recurrent neural networks (RNN-based), the biological immune system (BIS-based), and a combination of multiple natural language processing techniques (PMI-based), in order to identify the proper methodology for extracting ACP sentences from irrelevant text. Our evaluation results improve on the state of the art by a margin of 5% F1-measure. To aid future research, we also introduce a new dataset that includes 5000 sentences from real-world policy documents. ABAC policy extraction extracts ACP elements such as subject, object, and action from the identified ACPs. We use semantic role labeling (SRL) and correctly identify ACP elements with an average F1 score of 75%, which bests the previous work by 15%. Furthermore, as SRL tools are often …
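
As a rough illustration of the ACP sentence identification step, here is a minimal TF-IDF baseline that separates policy-bearing sentences from irrelevant text; the dissertation's RNN-, BIS-, and PMI-based methods are far richer, and the sentences and labels below are invented.

```python
# A simplified stand-in for ACP sentence identification: a TF-IDF baseline
# classifier. Training sentences and labels are illustrative only.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

sentences = [
    "A nurse can view the medical records of her own patients.",  # ACP
    "The hospital was founded in 1952.",                          # not ACP
    "Only administrators may delete user accounts.",              # ACP
    "Lunch is served in the cafeteria at noon.",                  # not ACP
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)
print(clf.predict(["Managers can approve expense reports."]))
```
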
Date: May 2020
Creator: Narouei, Masoud
System: The UNT Digital Library
Privacy Preserving Machine Learning as a Service (open access)

Machine learning algorithms based on neural networks have achieved remarkable results and are being extensively used in different domains. However, these algorithms require access to raw data, which is often privacy sensitive. To address this issue, we develop new techniques for running deep neural networks over encrypted data, adapting the networks to the practical limitations of current homomorphic encryption schemes. We focus on training and classification with well-known neural networks and convolutional neural networks (CNNs). First, we design methods for approximating the activation functions commonly used in CNNs (i.e., ReLU, Sigmoid, and Tanh) with low-degree polynomials, which is essential for efficient homomorphic encryption schemes. Then, we train neural networks with the approximation polynomials instead of the original activation functions and analyze the performance of the models. Finally, we implement neural networks and convolutional neural networks over encrypted data and measure the performance of the models.
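
The first step can be illustrated in a few lines: fit a low-degree polynomial to an activation function over the interval where inputs are expected, since such encryption schemes support only additions and multiplications. The degree and interval below are illustrative choices, not the dissertation's.

```python
# A minimal sketch of approximating an activation function with a low-degree
# polynomial, as required when only additions and multiplications can be
# evaluated homomorphically. Degree and interval are assumptions.
import numpy as np

x = np.linspace(-4, 4, 1000)            # interval where inputs are expected
sigmoid = 1.0 / (1.0 + np.exp(-x))

coeffs = np.polyfit(x, sigmoid, deg=3)  # least-squares degree-3 fit
poly = np.poly1d(coeffs)

max_err = np.max(np.abs(poly(x) - sigmoid))
print(f"degree-3 approximation, max error on [-4, 4]: {max_err:.4f}")
# The network is then trained with poly(x) in place of sigmoid, so the same
# computation can later be evaluated over encrypted inputs.
```
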
Date: May 2020
Creator: Hesamifard, Ehsan
System: The UNT Digital Library
Traffic Forecasting Applications Using Crowdsourced Traffic Reports and Deep Learning (open access)

Intelligent transportation systems (ITS) are essential tools for traffic planning, analysis, and forecasting that can utilize the huge amount of traffic data available nowadays. In this work, we aggregated detailed traffic flow sensor data, Waze reports, OpenStreetMap (OSM) features, and weather data from the California Bay Area over six months. Using those data, we studied three novel ITS applications using convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The first experiment is an analysis of the relation between roadway shapes and accident occurrence, where results show that the speed limit and number of lanes are significant predictors of major accidents on highways. The second experiment presents a novel method for forecasting congestion severity using crowdsourced data only (Waze, OSM, and weather), without the need for traffic sensor data. The third experiment studies the improvement of traffic flow forecasting using accidents, number of lanes, weather, and time-related features, where results show significant performance improvements when the additional features were used.
Date: May 2020
Creator: Alammari, Ali
System: The UNT Digital Library

Multi-Source Large Scale Bike Demand Prediction

Current work on bike demand prediction mainly focuses on the cluster level and performs poorly at predicting the demand of a single station. In the first task, we introduce a context-based bike demand prediction model, which predicts bike demand per station by synergistically combining a spatio-temporal network with environmental contexts. Furthermore, people's movement information is an important factor influencing the bike demand of each station. To better understand people's movements, we need to analyze the relationships between different places. In the second task, we propose an origin-destination model to learn place representations from large-scale movement data. Then, based on people's movement information, we incorporate the place embeddings into our bike demand prediction model, which is built using multi-source large-scale datasets: New York Citi Bike data, New York taxi trip records, and New York POI data. Finally, as deep learning methods have been successfully applied to many fields, such as image recognition and natural language processing, we are inspired to bring deep learning methods to the bike demand prediction problem. In this task, we propose a deep spatial-temporal (DST) model, which contains three major components: spatial dependencies, temporal dependencies, …
Date: May 2020
Creator: Zhou, Yang
System: The UNT Digital Library

A Performance and Security Analysis of Elliptic Curve Cryptography Based Real-Time Media Encryption

Access: Use of this item is restricted to the UNT Community
This dissertation emphasizes the security aspects of real-time media. The problems of existing real-time media protections are identified in this research, and viable solutions are proposed. First, the security of real-time media depends on the Secure Real-time Transport Protocol (SRTP) mechanism. We identify drawbacks of existing SRTP systems, which use symmetric-key encryption schemes that can be exploited by attackers. Elliptic Curve Cryptography (ECC), an asymmetric-key cryptography scheme, is proposed to resolve these problems. Second, the ECC encryption scheme is based on elliptic curves. This dissertation explores the weaknesses of a widely used elliptic curve in terms of security and describes a more secure elliptic curve suitable for real-time media protection. Eighteen elliptic curves were tested in a real-time video transmission system, and fifteen elliptic curves were tested in a real-time audio transmission system. Based on the performance, the X9.62 standard 256-bit prime curve, NIST-recommended 256-bit prime curves, and Brainpool 256-bit prime curves were found to be suitable for real-time audio encryption. Likewise, the X9.62 standard 256-bit prime and 272-bit binary curves, and NIST-recommended 256-bit prime curves were found to be suitable for real-time video encryption. The weaknesses of NIST-recommended elliptic curves are discussed and a more secure new …
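
To make the asymmetric-key idea concrete, here is a minimal sketch of an ECDH key agreement on a 256-bit prime curve (NIST P-256 via the Python `cryptography` package), whose derived secret could key SRTP's media cipher; the dissertation's actual protocol integration may differ.

```python
# A hedged sketch of ECDH key agreement on NIST P-256; this is not the
# dissertation's exact protocol, only the general asymmetric-key mechanism.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each endpoint generates an ephemeral key pair on the curve.
alice = ec.generate_private_key(ec.SECP256R1())
bob = ec.generate_private_key(ec.SECP256R1())

# Exchanging only public keys, both sides derive the same shared secret ...
shared_ab = alice.exchange(ec.ECDH(), bob.public_key())
shared_ba = bob.exchange(ec.ECDH(), alice.public_key())
assert shared_ab == shared_ba

# ... which is stretched into a symmetric media key with HKDF.
srtp_key = HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"srtp master key").derive(shared_ab)
```
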
Date: December 2019
Creator: Sen, Nilanjan
System: The UNT Digital Library

Frameworks for Attribute-Based Access Control (ABAC) Policy Engineering

In this dissertation we propose semi-automated top-down policy engineering approaches for attribute-based access control (ABAC) development. Further, we propose a hybrid ABAC policy engineering approach to combine the benefits and address the shortcomings of both top-down and bottom-up approaches. In particular, we propose three frameworks: (i) ABAC attribute extraction, (ii) ABAC constraint extraction, and (iii) hybrid ABAC policy engineering. The attribute extraction framework comprises five modules that operate together to extract attribute values from natural language access control policies (NLACPs), map the extracted values to attribute keys, and assign each key-value pair to an appropriate entity. For the ABAC constraint extraction framework, we design a two-phase process to extract ABAC constraints from NLACPs. The process begins with the identification phase, which focuses on identifying the right boundary of constraint expressions. Next is the normalization phase, which aims at extracting the actual elements that pose a constraint. Finally, our hybrid ABAC policy engineering framework consists of five modules. This framework combines top-down and bottom-up policy engineering techniques to overcome the shortcomings of both approaches and to generate policies that are more intuitive and relevant to actual organization policies. With this, we believe that our work takes essential steps towards …
Date: August 2020
Creator: Alohaly, Manar
System: The UNT Digital Library
Kriging Methods to Exploit Spatial Correlations of EEG Signals for Fast and Accurate Seizure Detection in the IoMT (open access)

An epileptic seizure presents a formidable threat to the life of its sufferer, who can be left unconscious within seconds of its onset. With a mortality rate at least twice that of the general population, epilepsy is a true cause for concern which has gained ample attention from various research communities. About 800 million people in the world will have at least one seizure experience in their lifetime. Injuries sustained during a seizure crisis are one of the leading causes of death in epilepsy. These can be prevented by early detection of a seizure accompanied by a timely intervention mechanism. The research presented in this dissertation explores Kriging methods to exploit spatial correlations of electroencephalogram (EEG) signals from the brain for fast and accurate seizure detection in the Internet of Medical Things (IoMT) using edge computing paradigms, by modeling the brain as a three-dimensional spatial object, similar to a geographical panorama. This dissertation proposes basic, hierarchical, and distributed Kriging models, with a deep neural network (DNN) wrapper in some instances. Experimental results from the models are highly promising for real-time seizure detection, with excellent performance in seizure detection latency and training time, as well as accuracy, sensitivity, and specificity, which compare …
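
The core spatial idea can be sketched quickly: treat the electrodes as points in 3D and interpolate the field at an unsampled scalp location. The snippet below uses scikit-learn's GaussianProcessRegressor, whose RBF-kernel posterior mean coincides with simple kriging; the coordinates and amplitudes are synthetic stand-ins for EEG data, not the dissertation's models.

```python
# A minimal kriging-style interpolation sketch over synthetic 3D
# "electrode" positions; Gaussian process regression with an RBF kernel
# is the statistical twin of simple kriging.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
electrodes = rng.uniform(-1, 1, size=(20, 3))   # 20 sensors on a "scalp"
amplitudes = np.sin(electrodes[:, 0]) + 0.05 * rng.standard_normal(20)

kriger = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-3)
kriger.fit(electrodes, amplitudes)

# Predict the signal (with uncertainty) at a location with no sensor.
query = np.array([[0.2, -0.1, 0.4]])
mean, std = kriger.predict(query, return_std=True)
print(f"estimated amplitude {mean[0]:.3f} +/- {std[0]:.3f}")
```
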
Date: August 2020
Creator: Olokodana, Ibrahim Latunde
System: The UNT Digital Library
Online Construction of Android Application Test Suites (open access)

Mobile applications play an important role in the dissemination of computing and information resources. They are often used in domains such as mobile banking, e-commerce, and health monitoring. Cost-effective testing techniques in these domains are critical. This dissertation contributes novel techniques for automatic construction of mobile application test suites. In particular, this work provides solutions that focus on the prohibitively large number of possible event sequences that must be sampled in GUI-based mobile applications. This work makes three major contributions: (1) an automated GUI testing tool, Autodroid, that implements a novel online approach to automatic construction of Android application test suites; (2) probabilistic and combinatorial-based algorithms that systematically sample the input space of Android applications to generate test suites with GUI/context events; and (3) empirical studies to evaluate the cost-effectiveness of our techniques on real-world Android applications. Our experiments show that our techniques achieve better code coverage and event coverage compared to random test generation. We demonstrate that our techniques are useful for automatic construction of Android application test suites in the absence of source code and preexisting abstract models of an Application Under Test (AUT). The insights derived from our empirical studies provide guidance to researchers and practitioners involved …
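
A toy sketch of the probabilistic sampling idea follows: draw GUI events with probabilities biased toward events exercised least often, so the event space is covered more systematically than uniform random generation. Event names, weights, and the biasing rule are illustrative assumptions, not Autodroid's actual algorithm.

```python
# A toy probabilistic sampler for GUI event sequences; favors events
# executed least often so far. Event names are invented for illustration.
import random

def sample_test_case(events, exec_counts, length=10):
    """Draw an event sequence, biased toward rarely executed events."""
    sequence = []
    for _ in range(length):
        weights = [1.0 / (1 + exec_counts[e]) for e in events]
        event = random.choices(events, weights=weights, k=1)[0]
        exec_counts[event] += 1
        sequence.append(event)
    return sequence

events = ["tap_login", "tap_menu", "scroll_list", "rotate_screen", "back"]
counts = {e: 0 for e in events}
suite = [sample_test_case(events, counts) for _ in range(3)]
for i, tc in enumerate(suite):
    print(f"test {i}: {tc}")
```
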
Date: December 2017
Creator: Adamo, David T., Jr.
System: The UNT Digital Library
Extracting Useful Information from Social Media during Disaster Events (open access)

In recent years, social media platforms such as Twitter and Facebook have emerged as effective tools for broadcasting messages worldwide during disaster events. With millions of messages posted through these services during such events, it has become imperative to identify valuable information that can help emergency responders develop effective relief efforts and aid victims. Many studies have suggested that the role of social media during disasters is invaluable and can be incorporated into the emergency decision-making process. However, due to the "big data" nature of social media, it is very labor-intensive to employ human resources to sift through social media posts and categorize/classify them as useful information. Hence, there is a growing need for machine intelligence to automate the process of extracting useful information from social media data during disaster events. This dissertation addresses the following questions: In a social media stream of messages, what is the useful information to be extracted that can help emergency response organizations become more situationally aware during and following a disaster? What are the features (or patterns) that can contribute to automatically identifying messages that are useful during disasters? We explored a wide variety of features in conjunction with supervised learning algorithms …
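
As a minimal illustration of the supervised setup, the sketch below classifies messages as situationally useful or not from simple bag-of-words features; the tweets and labels are invented, and the dissertation's feature set is considerably richer.

```python
# A minimal supervised baseline for "useful vs. not useful" messages.
# Tweets and labels are invented; real features would be much richer.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

tweets = [
    "Bridge on 5th street collapsed, avoid the area",
    "Thoughts and prayers to everyone affected",
    "Shelter open at Lincoln High, water and cots available",
    "Can't believe this is happening, so scary",
]
useful = [1, 0, 1, 0]  # 1 = situational awareness content

vec = CountVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(tweets)
model = MultinomialNB().fit(X, useful)
print(model.predict(vec.transform(["Road closure reported on Main St"])))
```
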
Date: May 2017
Creator: Neppalli, Venkata Kishore
System: The UNT Digital Library
Hybrid Approaches in Test Suite Prioritization (open access)

The rapid advancement of web and mobile application technologies has recently posed numerous challenges to the software engineering community, including how to cost-effectively test applications that have complex event spaces. Many software testing techniques attempt to cost-effectively improve the quality of such software. This dissertation primarily focuses on hybrid test suite prioritization, in which two or more criteria are used to prioritize a test suite, as a single criterion is often insufficient. The dissertation consists of the following contributions: (1) a weighted test suite prioritization technique that employs the distance between criteria as a weighting factor; (2) a coarse-to-fine-grained test suite prioritization technique that uses a multilevel approach to increase the granularity of the criteria at each subsequent iteration; (3) the Caret-HM tool for Android user session-based testing, which allows testers to record, replay, and create heat maps from user interactions with Android applications via a web browser; and (4) Android user session-based test suite prioritization techniques that utilize heuristics developed from user sessions created by Caret-HM. Each chapter empirically evaluates the respective techniques. The proposed techniques generally show improved or equally good performance when compared to the baselines, depending on an …
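
A small sketch of contribution (1)'s weighted combination follows: rank tests by a weighted sum of normalized per-criterion scores. In the dissertation the weight is derived from the distance between criteria; here `w` is simply a fixed hypothetical parameter.

```python
# A toy weighted hybrid prioritization: order tests by a weighted sum of
# two normalized criteria. The weight `w` and scores are illustrative.
def prioritize(tests, scores_a, scores_b, w=0.6):
    """Rank test IDs by w * criterion A + (1 - w) * criterion B."""
    def norm(scores):
        hi = max(scores.values()) or 1.0
        return {t: s / hi for t, s in scores.items()}
    a, b = norm(scores_a), norm(scores_b)
    return sorted(tests, key=lambda t: w * a[t] + (1 - w) * b[t], reverse=True)

tests = ["t1", "t2", "t3"]
statement_cov = {"t1": 120, "t2": 300, "t3": 180}   # criterion A
event_cov = {"t1": 9, "t2": 2, "t3": 7}             # criterion B
print(prioritize(tests, statement_cov, event_cov))
```
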
Date: May 2018
Creator: Nurmuradov, Dmitriy
System: The UNT Digital Library

Understanding and Addressing Accessibility Barriers Faced by People with Visual Impairments on Block-Based Programming Environments

There is an increased use of block-based programming environments in K-12 education and computing outreach activities to introduce novices to programming and computational thinking skills. However, despite their appealing design that allows students to focus on concepts rather than syntax, block-based programming by design is inaccessible to people with visual impairments and people who cannot use the mouse. In addition to this inaccessibility, little is known about the instructional experiences of students with visual impairments on current block-based programming environments. This dissertation addresses this gap by (1) investigating the challenges that students with visual impairments face on current block-based programming environments and (2) exploring ways in which we can use the keyboard and the screen reader to create block-based code. Through formal survey and interview studies with teachers of students with visual impairments and students with visual impairments, we identify several challenges faced by students with visual impairments on block-based programming environments. Using the knowledge of these challenges and building on prior work, we explore how to leverage the keyboard and the screen reader to improve the accessibility of block-based programming environments through a prototype of an accessible block-based programming library. In this dissertation, our empirical evaluations demonstrate that people …
Date: December 2022
Creator: Mountapmbeme, Aboubakar
System: The UNT Digital Library
Joint Schemes for Physical Layer Security and Error Correction (open access)

The major challenges facing resource-constrained wireless devices are error resilience, security, and speed. Three joint schemes are presented in this research, which can be broadly divided into error-correction based and cipher based. The error-correction based ciphers take advantage of the properties of LDPC codes and the Nordstrom-Robinson code. A cipher-based cryptosystem is also presented in this research, whose complexity is reduced compared to conventional schemes. The security of the ciphers is analyzed against known-plaintext and chosen-plaintext attacks, and they are found to be secure. Randomization tests were also conducted on these schemes and the results are presented. As a proof of concept, the schemes were implemented in software and hardware, and these implementations show a reduction in hardware usage compared to conventional schemes. As a result, joint schemes for error correction and security provide security to the physical layer of wireless communication systems, a layer in the protocol stack where currently little or no security is implemented. In this physical-layer security approach, the properties of powerful error-correcting codes are exploited to deliver reliability to the intended parties, high security against eavesdroppers, and efficiency in the communication system. The notion of a highly secure and reliable …
Date: August 2011
Creator: Adamo, Oluwayomi Bamidele
System: The UNT Digital Library
Measuring Semantic Relatedness Using Salient Encyclopedic Concepts (open access)

While pragmatics, through its integration of situational awareness and real-world relevant knowledge, offers a high level of analysis that is suitable for real interpretation of natural dialogue, semantics, on the other hand, represents a lower yet more tractable and affordable linguistic level of analysis using current technologies. Generally, the understanding of semantic meaning in the literature has revolved around the famous quote "You shall know a word by the company it keeps." In this thesis we investigate the role of context constituents in decoding the semantic meaning of the surrounding context; specifically, we probe the role of salient concepts, defined as content-bearing expressions which afford encyclopedic definitions, as a suitable source of semantic clues for an unambiguous interpretation of context. Furthermore, we integrate this world knowledge in building a new and robust unsupervised semantic model and apply it to infer semantic relatedness between textual pairs, whether they are words, sentences, or paragraphs. Moreover, we explore the abstraction of semantics across languages and utilize our findings in building a novel multilingual semantic relatedness model exploiting information acquired from various languages. We demonstrate the effectiveness and the superiority of our monolingual and multilingual models through a comprehensive set of evaluations on specialized …
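
The concept-space representation can be sketched compactly, in the spirit of explicit semantic analysis: project a text onto a set of encyclopedic "concept" articles and compare two texts by cosine similarity in that space. The three mini-articles below are placeholders, and the dissertation's model is considerably more elaborate.

```python
# A compact concept-space sketch: texts are represented by their TF-IDF
# similarity to "concept" articles; the mini-articles are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

concepts = {
    "Astronomy": "study of stars planets galaxies and celestial objects",
    "Cooking": "preparing food with heat ingredients recipes and flavor",
    "Finance": "money markets investment banking stocks and interest rates",
}
vec = TfidfVectorizer().fit(concepts.values())
concept_matrix = vec.transform(concepts.values())

def concept_vector(text):
    # Project the text onto the concept space: one weight per concept.
    return cosine_similarity(vec.transform([text]), concept_matrix)

a = concept_vector("the telescope revealed a distant galaxy")
b = concept_vector("astronomers charted the orbits of the planets")
print(cosine_similarity(a, b)[0, 0])   # relatedness in concept space
```
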
Date: August 2011
Creator: Hassan, Samer
System: The UNT Digital Library
Trajectory Analytics (open access)

The numerous surveillance videos recorded by a single stationary wide-angle-view camera motivate the use of a moving point as the representation of each small object in a wide video scene. The sequence of positions of each moving point can be used to generate a trajectory containing both spatial and temporal information about the object's movement. In this study, we investigate how the relationship between two trajectories can be used to recognize multi-agent interactions. For this purpose, we present a simple set of qualitative atomic disjoint trajectory-segment relations which can be utilized to represent the relationships between two trajectories. Given a pair of adjacent concurrent trajectories, we segment the trajectory pair to obtain the ordered sequence of related trajectory segments. Each pair of corresponding trajectory segments is then assigned a token associated with the trajectory-segment relation, which leads to the generation of a string called a pairwise trajectory-segment relationship sequence. From a group of pairwise trajectory-segment relationship sequences, we utilize an unsupervised learning algorithm, particularly k-medians clustering, to detect interesting patterns that can be used to classify lower-level multi-agent activities. We evaluate the effectiveness of the proposed approach by comparing the activity classes predicted by our method to the actual classes from the …
Date: May 2015
Creator: Santiteerakul, Wasana
System: The UNT Digital Library
Video Analytics with Spatio-Temporal Characteristics of Activities (open access)

As video capturing devices become more ubiquitous from surveillance cameras to smart phones, the demand of automated video analysis is increasing as never before. One obstacle in this process is to efficiently locate where a human operator’s attention should be, and another is to determine the specific types of activities or actions without ambiguity. It is the special interest of this dissertation to locate spatial and temporal regions of interest in videos and to develop a better action representation for video-based activity analysis. This dissertation follows the scheme of “locating then recognizing” activities of interest in videos, i.e., locations of potentially interesting activities are estimated before performing in-depth analysis. Theoretical properties of regions of interest in videos are first exploited, based on which a unifying framework is proposed to locate both spatial and temporal regions of interest with the same settings of parameters. The approach estimates the distribution of motion based on 3D structure tensors, and locates regions of interest according to persistent occurrences of low probability. Two contributions are further made to better represent the actions. The first is to construct a unifying model of spatio-temporal relationships between reusable mid-level actions which bridge low-level pixels and high-level activities. Dense …
Date: May 2015
Creator: Cheng, Guangchun
System: The UNT Digital Library
Investigation on Segmentation, Recognition and 3D Reconstruction of Objects Based on LiDAR Data or MRI (open access)

Segmentation, recognition, and 3D reconstruction of objects have been cutting-edge research topics with many applications, ranging from environmental and medical to geographical applications as well as intelligent transportation. In this dissertation, I focus on the study of segmentation, recognition, and 3D reconstruction of objects using LiDAR data/MRI. The three main works are: (I) a feature extraction algorithm based on sparse LiDAR data. A novel method has been proposed for feature extraction from sparse LiDAR data, and the algorithm and related principles have been described. I have also tested and discussed the choices and roles of parameters. By directly using the correlation of neighboring points, the statistical distribution of normal vectors at each point has been effectively used to determine the category of the selected point. (II) Segmentation and 3D reconstruction of objects based on LiDAR/MRI. In the proposed method, the 3D LiDAR data are layered, different categories are segmented, and 3D canopy surfaces of individual tree crowns and clusters of trees are reconstructed from LiDAR point data based on a region active contour model. The proposed method allows 3D forest canopies to be delineated naturally from the contours of raw LiDAR point clouds. The proposed model is suitable not …
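
Work (I)'s normal-vector statistic can be sketched as follows: estimate a normal at each LiDAR point as the smallest-eigenvalue eigenvector of its neighborhood covariance, whose distribution then hints at the point's category. The point cloud and neighborhood size below are synthetic assumptions, not the dissertation's data or parameters.

```python
# A hedged sketch of per-point normal estimation from k nearest neighbors;
# the cloud is synthetic and the neighborhood size k is an assumption.
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=10):
    tree = cKDTree(points)
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        nbrs = points[idx] - points[idx].mean(axis=0)
        # Smallest-eigenvalue eigenvector of the local covariance = normal.
        eigvals, eigvecs = np.linalg.eigh(nbrs.T @ nbrs)
        normals[i] = eigvecs[:, 0]
    return normals

rng = np.random.default_rng(1)
cloud = rng.uniform(0, 10, size=(200, 3))
cloud[:, 2] *= 0.01          # flatten into a near-planar "ground" patch
print(estimate_normals(cloud)[:3])  # normals should be close to (0, 0, +/-1)
```
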
Date: May 2015
Creator: Tang, Shijun
System: The UNT Digital Library
The Procedural Generation of Interesting Sokoban Levels (open access)

As video games continue to become larger, more complex, and more costly to produce, research into methods to make game creation easier and faster becomes more valuable. One such research topic is procedural generation, which allows the computer to assist in the creation of content. This dissertation presents a new algorithm for the generation of Sokoban levels. Sokoban is a grid-based transport puzzle which is computationally interesting due to being PSPACE-complete. Beyond just generating levels, the question of whether or not the levels created by this algorithm are interesting to human players is explored. A study was carried out comparing player attention while playing hand-made levels versus their attention during procedurally generated levels. An auditory Stroop test was used to measure attention without disrupting play.
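
For flavor, here is a generate-and-test sketch: draw tiny random layouts and keep only those a breadth-first search over (player, boxes) states can solve. The dissertation's generator, and its notion of what makes a level interesting, goes well beyond this; the snippet only shows the kind of solvability check any such generator needs.

```python
# A generate-and-test sketch for tiny Sokoban layouts; not the
# dissertation's algorithm, only a solvability check via BFS.
from collections import deque
import random

def solvable(walls, boxes, goals, player, size=5):
    start = (player, frozenset(boxes))
    seen, queue = {start}, deque([start])
    while queue:
        (px, py), bs = queue.popleft()
        if bs == goals:
            return True
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = px + dx, py + dy
            if not (0 <= nx < size and 0 <= ny < size) or (nx, ny) in walls:
                continue
            nbs = bs
            if (nx, ny) in bs:                      # pushing a box
                bx, by = nx + dx, ny + dy
                if (not (0 <= bx < size and 0 <= by < size)
                        or (bx, by) in walls or (bx, by) in bs):
                    continue
                nbs = (bs - {(nx, ny)}) | {(bx, by)}
            state = ((nx, ny), nbs)
            if state not in seen:
                seen.add(state)
                queue.append(state)
    return False

random.seed(7)
cells = [(x, y) for x in range(5) for y in range(5)]
while True:
    player, box, goal = random.sample(cells, 3)
    if solvable(set(), {box}, frozenset({goal}), player):
        print("level: player", player, "box", box, "goal", goal)
        break
```
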
Date: May 2015
Creator: Taylor, Joshua
System: The UNT Digital Library
Scene Analysis Using Scale Invariant Feature Extraction and Probabilistic Modeling (open access)

Conventional pattern recognition systems have two components: feature analysis and pattern classification. For any object in an image, features can be considered the major characteristics of the object for either object recognition or object tracking purposes. Features extracted from a training image can be used to identify the object when attempting to locate it in a test image containing many other objects. To perform reliable scene analysis, it is important that the features extracted from the training image are detectable even under changes in image scale, noise, and illumination. Scale-invariant features have wide applications in image processing, such as image classification, object recognition, and object tracking. In this thesis, color features and SIFT (scale-invariant feature transform) features are considered scale-invariant features. The classification, recognition, and tracking results were evaluated with a novel evaluation criterion and compared with some existing methods. I also studied different types of scale-invariant features for the purpose of solving scene analysis problems. I propose probabilistic models as the foundation for analyzing scene scenarios of images. In order to differentiate the content of images, I develop novel algorithms for the adaptive combination of multiple features extracted from images. I …
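
A brief sketch of scale-invariant matching, assuming OpenCV (whose SIFT implementation ships in the main opencv-python package since version 4.4): extract descriptors from a training and a test image and keep matches that pass Lowe's ratio test. The file names are placeholders.

```python
# A hedged sketch of SIFT-based matching with OpenCV; file names are
# placeholders, and the thesis's evaluation pipeline is more involved.
import cv2

train = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(train, None)
kp2, des2 = sift.detectAndCompute(scene, None)

matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]      # Lowe's ratio test
print(f"{len(good)} reliable matches despite scale/illumination changes")
```
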
Date: August 2011
Creator: Shen, Yao
System: The UNT Digital Library