Methodical Evaluation of Processing-in-Memory Alternatives (open access)

In this work, I characterized a series of potential application kernels using a set of architectural and non-architectural metrics, and compared four alternatives for processing-in-memory (PIM) cores: ARM cores, GPGPUs, coarse-grained reconfigurable dataflow (DF-PIM), and a domain-specific architecture using a SIMD PIM engine consisting of a series of multiply-accumulate (MAC) circuits. For each PIM alternative I investigated how performance and energy efficiency change with respect to a series of system parameters, such as memory bandwidth and latency, number of PIM cores, DVFS states, and cache architecture. In addition, I compared the PIM core choices for a subset of applications and discussed how the application characteristics correlate with the achieved performance and energy efficiency. Furthermore, I compared the PIM alternatives to a host-centric solution that uses a traditional server-class CPU core or PIM-like cores acting as host-side accelerators instead of being part of 3D-stacked memories. Such insights can expose the achievable performance limits and shortcomings of certain PIM designs and show sensitivity to a series of system parameters (available memory bandwidth, application latency and bandwidth sensitivity, etc.). In addition, identifying the common application characteristics for PIM kernels provides an opportunity to identify similar types of computation patterns in …
Date: May 2019
Creator: Scrbak, Marko
System: The UNT Digital Library

A Top-Down Policy Engineering Framework for Attribute-Based Access Control

The purpose of this study is to propose a top-down policy engineering framework for attribute-based access control (ABAC) that aims to automatically extract access control policies (ACPs) from requirements specification documents and then, using the extracted policies, build or update an ABAC model. We specify a procedure that consists of three main components: 1) ACP sentence identification, 2) policy element extraction, and 3) ABAC model creation and update. ACP sentence identification processes unrestricted natural language documents and identifies the sentences that carry ACP content. We propose and compare three different methodologies from different disciplines, namely deep recurrent neural networks (RNN-based), biological immune systems (BIS-based), and a combination of multiple natural language processing techniques (PMI-based), in order to identify the proper methodology for extracting ACP sentences from irrelevant text. Our evaluation results improve on the state of the art by a margin of 5% F1-measure. To aid future research, we also introduce a new dataset that includes 5000 sentences from real-world policy documents. ABAC policy extraction extracts ACP elements such as subject, object, and action from the identified ACPs. We use semantic roles and correctly identify ACP elements with an average F1 score of 75%, which bests the previous work by 15%. Furthermore, as SRL tools are often …
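Since the results above are reported in F1-measure, a minimal reference sketch of how that score combines precision and recall (the counts in the example are illustrative only, not taken from the study):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall, computed from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: 75 correct extractions, 25 spurious, 25 missed
# gives precision = recall = F1 = 0.75.
score = f1_score(75, 25, 25)
```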
Date: May 2020
Creator: Narouei, Masoud
System: The UNT Digital Library
Privacy Preserving Machine Learning as a Service (open access)

Machine learning algorithms based on neural networks have achieved remarkable results and are being extensively used in different domains. However, these algorithms require access to raw data, which is often privacy-sensitive. To address this issue, we develop new techniques for running deep neural networks over encrypted data, adapting them to the practical limitations of current homomorphic encryption schemes. We focus on training and classification with neural networks and convolutional neural networks (CNNs). First, we design methods for approximating the activation functions commonly used in CNNs (i.e., ReLU, Sigmoid, and Tanh) with low-degree polynomials, which is essential for efficiency under current homomorphic encryption schemes. Then, we train neural networks with the approximation polynomials instead of the original activation functions and analyze the performance of the models. Finally, we implement neural networks and convolutional neural networks over encrypted data and measure the performance of the models.
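The enabling step here is that homomorphic schemes evaluate additions and multiplications efficiently, so each activation is replaced by a low-degree polynomial. A minimal sketch of least-squares polynomial approximation of the sigmoid, in pure Python (the interval, degree, and normal-equation solver are illustrative choices, not the dissertation's actual method):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations.

    Returns coefficients in increasing order of degree; a plain
    Gaussian-elimination solver keeps the sketch dependency-free.
    """
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                      # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        s = b[r] - sum(A[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = s / A[r][r]
    return coeffs

# Fit a degree-3 polynomial to sigmoid on [-8, 8] (interval and degree
# are assumptions made for this sketch).
xs = [i / 10.0 for i in range(-80, 81)]
coeffs = fit_poly(xs, [sigmoid(x) for x in xs], 3)

def approx(x):
    return sum(c * x ** k for k, c in enumerate(coeffs))

max_err = max(abs(approx(x) - sigmoid(x)) for x in xs)
```

A network would then be trained with `approx` in place of the activation, since the polynomial is what the encrypted evaluation can actually compute.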
Date: May 2020
Creator: Hesamifard, Ehsan
System: The UNT Digital Library
Traffic Forecasting Applications Using Crowdsourced Traffic Reports and Deep Learning (open access)

Intelligent transportation systems (ITS) are essential tools for traffic planning, analysis, and forecasting that can utilize the huge amount of traffic data available nowadays. In this work, we aggregated detailed traffic flow sensor data, Waze reports, OpenStreetMap (OSM) features, and weather data from the California Bay Area for six months. Using that data, we studied three novel ITS applications using convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The first experiment is an analysis of the relation between roadway shapes and accident occurrence, where results show that the speed limit and number of lanes are significant predictors of major accidents on highways. The second experiment presents a novel method for forecasting congestion severity using crowdsourced data only (Waze, OSM, and weather), without the need for traffic sensor data. The third experiment studies the improvement of traffic flow forecasting using accidents, number of lanes, weather, and time-related features, where results show significant performance improvements when the additional features were used.
Date: May 2020
Creator: Alammari, Ali
System: The UNT Digital Library

Multi-Source Large Scale Bike Demand Prediction

Current work on bike demand prediction mainly focuses on the cluster level and performs poorly at predicting the demand of a single station. In the first task, we introduce a context-based bike demand prediction model, which predicts bike demand per station by synergistically combining a spatio-temporal network with environmental context. Furthermore, people's movement is an important factor influencing the bike demand at each station; to better understand these movements, we need to analyze the relationships between different places. In the second task, we propose an origin-destination model to learn place representations from large-scale movement data. Based on this movement information, we incorporate the place embeddings into our bike demand prediction model, which is built using multi-source large-scale datasets: New York Citi Bike data, New York taxi trip records, and New York POI data. Finally, as deep learning methods have been successfully applied to fields such as image recognition and natural language processing, we incorporate them into the bike demand prediction problem. In this task, we propose a deep spatial-temporal (DST) model, which contains three major components: spatial dependencies, temporal dependencies, …
Date: May 2020
Creator: Zhou, Yang
System: The UNT Digital Library
Hybrid Approaches in Test Suite Prioritization (open access)

The rapid advancement of web and mobile application technologies has recently posed numerous challenges to the software engineering community, including how to cost-effectively test applications that have complex event spaces. Many software testing techniques attempt to cost-effectively improve the quality of such software. This dissertation primarily focuses on hybrid test suite prioritization, in which two or more criteria are used to prioritize a test suite, since a single criterion is often insufficient. The dissertation consists of the following contributions: (1) a weighted test suite prioritization technique that employs the distance between criteria as a weighting factor, (2) a coarse-to-fine-grained test suite prioritization technique that uses a multilevel approach to increase the granularity of the criteria at each subsequent iteration, (3) the Caret-HM tool for Android user-session-based testing that allows testers to record, replay, and create heat maps from user interactions with Android applications via a web browser, and (4) Android user-session-based test suite prioritization techniques that utilize heuristics developed from user sessions created by Caret-HM. Each chapter empirically evaluates the respective techniques. The proposed techniques generally show improved or equally good performance when compared to the baselines, depending on an …
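As a generic illustration of combining two or more criteria into one ordering, a weighted-sum prioritizer is sketched below. The criterion names, scores, and fixed weights are placeholder assumptions; the dissertation's distance-based weighting scheme is not reproduced here:

```python
def prioritize(tests, weights):
    """Order tests by a weighted sum of per-criterion scores.

    `tests` maps test name -> {criterion: score}; `weights` maps
    criterion -> weight. Higher combined score runs earlier.
    """
    def combined(name):
        return sum(weights[c] * s for c, s in tests[name].items())
    return sorted(tests, key=combined, reverse=True)

# Hypothetical per-test scores on two criteria.
tests = {
    "t1": {"coverage": 0.9, "fault_history": 0.1},
    "t2": {"coverage": 0.4, "fault_history": 0.8},
    "t3": {"coverage": 0.2, "fault_history": 0.2},
}
order = prioritize(tests, {"coverage": 0.5, "fault_history": 0.5})
```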
Date: May 2018
Creator: Nurmuradov, Dmitriy
System: The UNT Digital Library
Trajectory Analytics (open access)

The numerous surveillance videos recorded by a single stationary wide-angle camera motivate representing each small object in the wide video scene as a moving point. The sequence of positions of each moving point can be used to generate a trajectory containing both spatial and temporal information about the object's movement. In this study, we investigate how the relationship between two trajectories can be used to recognize multi-agent interactions. For this purpose, we present a simple set of qualitative, atomic, disjoint trajectory-segment relations that can represent the relationships between two trajectories. Given a pair of adjacent concurrent trajectories, we segment the pair to obtain an ordered sequence of related trajectory segments. Each pair of corresponding trajectory segments is then assigned a token associated with its trajectory-segment relation, which leads to the generation of a string called a pairwise trajectory-segment relationship sequence. From a group of such sequences, we utilize an unsupervised learning algorithm, specifically k-medians clustering, to detect interesting patterns that can be used to classify lower-level multi-agent activities. We evaluate the effectiveness of the proposed approach by comparing the activity classes predicted by our method to the actual classes from the …
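The tokenization step can be sketched as follows. The three-token relation set used here (approaching/receding/stable, based only on inter-agent distance) is an illustrative simplification; the dissertation's atomic relation set is richer:

```python
import math

EPS = 1e-6

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def relation_token(a0, a1, b0, b1):
    """Qualitative token for one pair of concurrent segments (a0->a1, b0->b1)."""
    d0, d1 = dist(a0, b0), dist(a1, b1)
    if d1 < d0 - EPS:
        return "A"   # approaching
    if d1 > d0 + EPS:
        return "R"   # receding
    return "S"       # stable distance

def relationship_sequence(traj_a, traj_b):
    """Token string for two equally sampled concurrent trajectories."""
    return "".join(
        relation_token(traj_a[i], traj_a[i + 1], traj_b[i], traj_b[i + 1])
        for i in range(min(len(traj_a), len(traj_b)) - 1)
    )

# Agent A moves right along y=0 while agent B moves left along y=1:
# they first approach, then recede.
ta = [(t, 0.0) for t in range(7)]
tb = [(6 - t, 1.0) for t in range(7)]
seq = relationship_sequence(ta, tb)
```

Strings like `seq` are then what the k-medians step clusters into interaction patterns.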
Date: May 2015
Creator: Santiteerakul, Wasana
System: The UNT Digital Library
The Procedural Generation of Interesting Sokoban Levels (open access)

As video games continue to become larger, more complex, and more costly to produce, research into methods to make game creation easier and faster becomes more valuable. One such research topic is procedural generation, which allows the computer to assist in the creation of content. This dissertation presents a new algorithm for the generation of Sokoban levels. Sokoban is a grid-based transport puzzle that is computationally interesting due to being PSPACE-complete. Beyond just generating levels, the question of whether or not the levels created by this algorithm are interesting to human players is explored. A study was carried out comparing player attention while playing hand-made levels versus attention during procedurally generated levels. An auditory Stroop test was used to measure attention without disrupting play.
Date: May 2015
Creator: Taylor, Joshua
System: The UNT Digital Library
Content and Temporal Analysis of Communications to Predict Task Cohesion in Software Development Global Teams (open access)

Virtual teams in industry are increasingly being used to develop software, create products, and accomplish tasks. However, analyzing those collaborations under same-time/different-place conditions is well known to be difficult. In order to overcome some of these challenges, this research studied collaboration-based, content-based, and temporal measures and their ability to predict cohesion within global software development projects. Messages were collected from three software development projects that involved students from two different countries. The similarities and quantities of these interactions were computed and analyzed at the individual and group levels. Results of interaction-based metrics showed that the collaboration variables most related to Task Cohesion were Linguistic Style Matching and Information Exchange. The study also found that Information Exchange rate and Reply rate have a significant and positive correlation with Task Cohesion, a factor used to describe participants' engagement in the global software development process. This relation was also found at the group level. All these results suggest that rate-based metrics can be very useful for predicting cohesion in virtual groups. Similarly, content features based on communication categories were used to improve the identification of Task Cohesion levels. This model showed mixed results, since only Work similarity and …
Date: May 2017
Creator: Castro Hernandez, Alberto
System: The UNT Digital Library
Metamodeling-Based Fast Optimization of Nanoscale AMS-SoCs (open access)

Modern consumer electronic systems are mostly based on analog and digital circuits and are designed as analog/mixed-signal systems on chip (AMS-SoCs). The integration of analog and digital circuits on the same die makes the system cost-effective. In AMS-SoCs, analog and mixed-signal portions have not traditionally received much attention due to their complexity. As fabrication technology advances, simulations of AMS-SoC circuits become more complex and take significant amounts of time. The time allocated for circuit design and optimization creates a need to reduce simulation time. The time constraints placed on designers are imposed by the ever-shortening time to market and the non-recurring cost of the chip. This dissertation proposes the use of a novel method, called metamodeling, together with intelligent optimization algorithms to reduce the design time. Metamodel-based ultra-fast design flows are proposed and investigated. Metamodel creation is a one-time process and relies on fast sampling through accurate parasitic-aware simulations. One target of this dissertation is to minimize the sample size while retaining the accuracy of the model. In order to achieve this goal, different statistical sampling techniques are explored and applied to various AMS-SoC circuits. Also, different metamodel functions are explored for their …
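One common statistical sampling technique of the kind explored for metamodel construction is Latin hypercube sampling, which guarantees one sample per stratum along every design-parameter axis. A minimal sketch (the unit cube, sample count, and seed are illustrative assumptions, not the dissertation's setup):

```python
import random

def latin_hypercube(n, dims, seed=0):
    """n samples in [0,1)^dims with exactly one sample per 1/n stratum per axis."""
    rng = random.Random(seed)
    columns = []
    for _ in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)                 # random stratum order per dimension
        columns.append([(s + rng.random()) / n for s in strata])
    return list(zip(*columns))              # one tuple per sample point

samples = latin_hypercube(10, 2)
```

Each sample would then be mapped from the unit cube onto the circuit's actual parameter ranges before running the parasitic-aware simulations.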
Date: May 2012
Creator: Garitselov, Oleg
System: The UNT Digital Library
Incremental Learning with Large Datasets (open access)

This dissertation focuses on a novel learning strategy based on geometric support vector machines to address the difficulties of processing immense data sets. Support vector machines find the hyperplane that maximizes the margin between two classes; since the decision boundary is represented by a few training samples, they are a favorable choice for incremental learning. The dissertation presents a novel method, Geometric Incremental Support Vector Machines (GISVMs), to address both efficiency and accuracy issues in handling massive data sets. In GISVM, the skin of a convex hull is defined, and an efficient method is designed to find the best skin approximation given the available examples. The set of extreme points is found by recursively searching along the direction defined by a pair of known extreme points. By identifying the skin of the convex hulls, incremental learning will employ only a much smaller number of samples with comparable or even better accuracy. When additional samples are provided, they are used together with the skin of the convex hull constructed from the previous dataset. This results in a small number of instances used in the incremental steps of the training process. Based on the experimental results with synthetic data sets, public benchmark data sets from …
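The recursive search described above (take two known extreme points, find the point farthest from the line they define, recurse on each side) resembles the classic quickhull recursion. A plain 2D sketch of that primitive follows; the dissertation's GISVM works with the SVM's geometry on high-dimensional data, so this is only an illustration of the recursion, not the skin-approximation algorithm itself:

```python
def cross(o, a, b):
    """2D cross product of vectors o->a and o->b (positive if b is left of o->a)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def _side(points, p, q):
    """Hull points strictly left of directed line p->q, found recursively."""
    pts = [r for r in points if cross(p, q, r) > 1e-12]
    if not pts:
        return []
    far = max(pts, key=lambda r: cross(p, q, r))   # farthest from line p->q
    return _side(pts, p, far) + [far] + _side(pts, far, q)

def quickhull(points):
    """Extreme points (convex hull) of a 2D point set, in boundary order."""
    pts = sorted(set(points))
    p, q = pts[0], pts[-1]                         # two guaranteed extreme points
    return [p] + _side(pts, p, q) + [q] + _side(pts, q, p)

hull = quickhull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
```

Interior points such as `(1, 1)` never appear in the result, which is exactly why restricting incremental training to hull ("skin") samples shrinks the working set.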
Date: May 2012
Creator: Giritharan, Balathasan
System: The UNT Digital Library
Physical-Layer Network Coding for MIMO Systems (open access)

Future wireless communication systems are required to meet growing demands for reliability, bandwidth capacity, and mobility. However, as impairments such as fading effects and thermal noise are present in the channel, the occurrence of errors is unavoidable. Motivated by this, the work in this dissertation attempts to improve system performance by exploiting schemes which statistically reduce the error rate and, in turn, boost the system throughput. The network can be studied using a simplified model, the two-way relay channel, where two parties exchange messages via the assistance of a relay in between. In such scenarios, this dissertation performs theoretical analysis of the system and derives closed-form and upper-bound expressions for the error probability. These theoretical measurements are potentially helpful references for practical system design. Additionally, several novel transmission methods, including block relaying and permutation modulations for physical-layer network coding, are proposed and discussed. Numerical simulation results are presented to support the validity of the conclusions.
Date: May 2011
Creator: Xu, Ning
System: The UNT Digital Library
Exploring Privacy in Location-based Services Using Cryptographic Protocols (open access)

Location-based services (LBS) are available on a variety of mobile platforms like cell phones, PDAs, etc., and an increasing number of users subscribe to and use these services. Two of the popular models of information flow in LBS are the client-server model and the peer-to-peer model, in both of which existing approaches do not always provide privacy for all parties concerned. In this work, I study the feasibility of applying cryptographic protocols to design privacy-preserving solutions for LBS from an experimental and theoretical standpoint. In the client-server model, I construct a two-phase framework for processing nearest neighbor queries using combinations of cryptographic protocols such as oblivious transfer and private information retrieval. In the peer-to-peer model, I present privacy-preserving solutions for processing group nearest neighbor queries in the semi-honest and dishonest adversarial models. I apply concepts from secure multi-party computation to realize the constructions and also leverage the capabilities of trusted computing technology, specifically TPM chips. The solution for the dishonest adversarial model is also of independent cryptographic interest. I prove the constructions secure under standard cryptographic assumptions and design experiments for testing the feasibility or practicability of the constructions and benchmark key operations. My experiments show that the proposed …
Date: May 2011
Creator: Vishwanathan, Roopa
System: The UNT Digital Library
An Extensible Computing Architecture Design for Connected Autonomous Vehicle System (open access)

Autonomous vehicles have made milestone strides within the past decade. Advances up the autonomy ladder have come in lock-step with advances in machine learning, namely deep-learning algorithms and huge, open training sets. And while advances in CPUs have slowed, GPUs have edged into the previous decade's TOP500 supercomputer territory. This new class of GPUs includes novel deep-learning hardware that has essentially side-stepped Moore's law, outpacing the doubling observation by a factor of ten. While GPUs have made record progress, networks do not follow Moore's law and are restricted by several bottlenecks, from protocol-based latency lower bounds to the very laws of physics. In a way, the bottlenecks that plague modern networks gave rise to Edge computing, a key component of the Connected Autonomous Vehicle system, as the need for low latency in some domains eclipsed the need for massive processing farms. The Connected Autonomous Vehicle ecosystem is one of the most complicated environments in all of computing. Not only is the hardware scaled all the way from 16- and 32-bit microcontrollers, to multi-CPU Edge nodes, and multi-GPU Cloud servers, but the networking also encompasses the gamut of modern communication transports. I propose a framework for negotiating, encapsulating and transferring data …
Date: May 2021
Creator: Hochstetler, Jacob Daniel
System: The UNT Digital Library
Hybrid Optimization Models for Depot Location-Allocation and Real-Time Routing of Emergency Deliveries (open access)

Prompt and efficient intervention is vital in reducing casualty figures during epidemic outbreaks, disasters, sudden civil strife, or terrorist attacks. This can only be achieved if there is a fit-for-purpose and location-specific emergency response plan in place, incorporating geographical, time, and vehicular capacity constraints. In this research, a comprehensive emergency response model for situations of uncertainty (in locations' demand and available resources), typically obtainable in low-resource countries, is designed. It involves the development of algorithms for optimizing pre- and post-disaster activities. The studies result in the development of four models: (1) an adaptation of a machine learning clustering algorithm for pre-positioning depots and emergency operation centers, which optimizes the placement of these depots such that the largest geographical area is covered and the maximum number of individuals reached, with minimal facility cost; (2) an optimization algorithm for routing relief distribution, using heterogeneous fleets of vehicles, with considerations for uncertainties in humanitarian supplies; (3) a genetic algorithm-based route improvement model; and (4) a model for integrating possible new locations into the routing network, in real time, using emergency severity ranking, with a high priority on the most vulnerable population. The clustering approach to solving the depot location-allocation problem produces a better time complexity, and the …
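The first model adapts a machine-learning clustering algorithm for depot placement. As a simplified stand-in (the dissertation's algorithm also handles coverage and facility-cost terms), a demand-weighted k-means pulls candidate depot sites toward high-demand locations:

```python
import random

def weighted_kmeans(points, demands, k, iters=50, seed=0):
    """Demand-weighted k-means: centers gravitate toward high-demand areas.

    `points` are (x, y) locations; `demands` are per-location weights.
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p, w in zip(points, demands):
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2 +
                                  (p[1] - centers[c][1]) ** 2)
            clusters[i].append((p, w))
        for i, cl in enumerate(clusters):
            if cl:                              # keep the old center if empty
                tw = sum(w for _, w in cl)
                centers[i] = (sum(p[0] * w for p, w in cl) / tw,
                              sum(p[1] * w for p, w in cl) / tw)
    return centers

# Two well-separated demand clusters; the two centers should land on them.
pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
depots = weighted_kmeans(pts, [1] * 6, k=2)
```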
Date: May 2021
Creator: Akwafuo, Sampson E
System: The UNT Digital Library
An Artificial Intelligence-Driven Model-Based Analysis of System Requirements for Exposing Off-Nominal Behaviors (open access)

With the advent of autonomous systems and deep learning systems, safety pertaining to these systems has become a major concern. The existing failure analysis techniques are not enough to thoroughly analyze safety in these systems. Moreover, because these systems are created to operate in various conditions, they are susceptible to unknown safety issues. Hence, we need mechanisms which can take into account the complexity of operational design domains, identify safety issues other than failures, and expose unknown safety issues. Moreover, existing safety analysis approaches require a lot of effort and time for analysis and do not consider machine learning (ML) safety. To address these limitations, in this dissertation we discuss an artificial-intelligence-driven, model-based methodology that aids in identifying unknown safety issues and analyzing ML safety. Our methodology consists of four major tasks: 1) automated model generation, 2) automated analysis of component state transition model specifications, 3) undesired states analysis, and 4) causal factor analysis. In our methodology, we identify unknown safety issues by finding undesired combinations of components' states and environmental entities' states, as well as the causes resulting in these undesired combinations. We refer to the behaviors that occur because of undesired combinations as off-nominal …
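The core of the undesired-states analysis, finding undesired combinations of component and environment states, can be sketched as a brute-force enumeration over the state spaces with a predicate. The components, states, and predicate below are hypothetical; the dissertation derives these from model specifications rather than an inline lambda:

```python
from itertools import product

def undesired_combinations(state_spaces, is_undesired):
    """Enumerate every combination of component/environment states and
    keep those the predicate flags as undesired."""
    names = sorted(state_spaces)
    out = []
    for combo in product(*(state_spaces[n] for n in names)):
        state = dict(zip(names, combo))
        if is_undesired(state):
            out.append(state)
    return out

# Hypothetical example: braking on an icy road while the perception
# sensor reports "clear" is flagged as an undesired combination.
spaces = {
    "brake": ["idle", "engaged"],
    "road": ["dry", "icy"],
    "sensor": ["clear", "obstacle"],
}
bad = undesired_combinations(
    spaces,
    lambda s: s["brake"] == "engaged" and s["road"] == "icy"
              and s["sensor"] == "clear",
)
```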
Date: May 2021
Creator: Madala, Kaushik
System: The UNT Digital Library
IoMT-Based Accurate Stress Monitoring for Smart Healthcare (open access)

This research proposes Stress-Lysis, iLog, and SaYoPillow to automatically detect and monitor a person's stress levels. To self-manage psychological stress in the framework of smart healthcare, a novel deep-learning-based system (Stress-Lysis) is proposed in this dissertation. The learning system is trained to monitor stress levels in a person through human body temperature, rate of motion, and sweat during physical activity. The proposed deep learning system has been trained with a total of 26,000 samples per dataset and demonstrates accuracy as high as 99.7%. The collected data are transmitted to and stored in the cloud, which can help in real-time monitoring of a person's stress levels, thereby reducing the risk of death and expensive treatments. The proposed system produces results with an overall accuracy of 98.3% to 99.7%, is simple to implement, and has a moderate cost. Chronic stress, uncontrolled or unmonitored food consumption, and obesity are intricately connected, even involving certain neurological adaptations. With iLog, we propose a system which can not only monitor food intake but also create awareness for the user of how much food is too much. iLog provides information on the emotional state of a person along …
Date: May 2021
Creator: Rachakonda, Laavanya
System: The UNT Digital Library
Epileptic Seizure Detection and Control in the Internet of Medical Things (IoMT) Framework (open access)

Epilepsy affects up to 1% of the world's population and approximately 2.5 million people in the United States. A considerable portion (30%) of epilepsy patients are refractory to antiepileptic drugs (AEDs), and surgery is not an effective option if the seizure focus lies in the eloquent cortex. To overcome the problems with existing solutions, a notable portion of biomedical research is focused on developing an implantable or wearable system for automated seizure detection and control. Seizure detection algorithms based on signal rejection algorithms (SRA), deep neural networks (DNN), and neighborhood component analysis (NCA) have been proposed in the IoMT framework. The algorithms proposed in this work have been validated with both scalp and intracranial electroencephalography (EEG, icEEG) and demonstrate high classification accuracy, sensitivity, and specificity. The occurrence of seizures can be controlled by direct drug injection into the epileptogenic zone, which enhances the efficacy of the AEDs. Piezoelectric and electromagnetic micropumps have been explored for use as the drug delivery unit, as they provide accurate drug flow and reduce power consumption. The reduction in power consumption resulting from the minimal circuitry employed by the drug delivery system makes it suitable for practical biomedical applications. …
Date: May 2020
Creator: Sayeed, Md Abu
System: The UNT Digital Library
An Investigation of Scale Factor in Deep Networks for Scene Recognition (open access)

Is there a significant difference in the design of deep networks for the tasks of classifying object-centric images and scenery images? How should networks be designed to extract the most representative features for scene recognition? To answer these questions, we design studies to examine the scales and richness of image features for scenery image recognition. Three methods are proposed that integrate the scale factor into deep networks and reveal fundamental network design strategies. In our first attempt to integrate scale factors into the deep network, we proposed a method that aggregates both the context and multi-scale object information of scene images by constructing a multi-scale pyramid. In our design, integration of object-centric multi-scale networks achieved a performance boost of 9.8%, and integration of object- and scene-centric models obtained an accuracy improvement of 5.9% compared with single scene-centric models. We also exploit bringing an attention scheme to the deep network and propose a Scale Attentive Network (SANet). The SANet streamlines the multi-scale scene recognition pipeline, learns comprehensive scene features at various scales and locations, addresses the inter-dependency among scales, and further assists feature re-calibration as well as the aggregation process. The proposed network achieved a Top-1 accuracy increase of 1.83% on …
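The multi-scale pyramid construction can be illustrated on a toy grayscale grid. The networks above operate on CNN feature maps, so the 2x2 average pooling here is only a minimal sketch of the repeated-downsampling idea, not the dissertation's pipeline:

```python
def downsample2x(img):
    """Average-pool a grayscale image (list of row lists) over 2x2 blocks."""
    h, w = len(img) // 2 * 2, len(img[0]) // 2 * 2
    return [[(img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]) / 4
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]

def pyramid(img, levels):
    """Multi-scale pyramid: the original image plus repeatedly halved copies."""
    out = [img]
    for _ in range(levels - 1):
        out.append(downsample2x(out[-1]))
    return out

# A 4x4 image yields a 3-level pyramid of sizes 4x4, 2x2, and 1x1.
levels = pyramid([[1.0] * 4 for _ in range(4)], 3)
```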
Date: May 2022
Creator: Qiao, Zhinan
System: The UNT Digital Library
New Computational Methods for Literature-Based Discovery (open access)

In this work, we leverage recent developments in computer science to address several of the challenges in current literature-based discovery (LBD) solutions. First, existing LBD solutions cannot use semantics or are too computationally complex. To solve these problems, we propose a generative model, OverlapLDA, based on topic modeling, which has been shown to be both effective and efficient in extracting semantics from a corpus. We also introduce an inference method for OverlapLDA. We conduct extensive experiments to show the effectiveness and efficiency of OverlapLDA in LBD. Second, we expand LBD to a more complex and realistic setting: there can be more than one concept connecting the input concepts, and the connectivity pattern between concepts can also be more complex than a chain. Current LBD solutions can hardly complete the LBD task in this new setting. We simplify the hypotheses as concept sets and propose LBDSetNet, based on graph neural networks, to solve this problem. We also introduce different training schemes based on self-supervised learning to train LBDSetNet without relying on comprehensively labeled hypotheses, which are extremely costly to obtain. Our comprehensive experiments show that LBDSetNet outperforms strong baselines on simple hypotheses and addresses complex hypotheses.
Date: May 2022
Creator: Ding, Juncheng
System: The UNT Digital Library
Space and Spectrum Engineered High Frequency Components and Circuits (open access)

With the increasing demand for wireless and portable devices, radio frequency front-end blocks are required to feature properties such as wide bandwidth, high frequency, multiple operating frequencies, low cost, and compact size. However, current radio frequency system blocks are designed by combining several individual frequency-band blocks into one functional block, which increases the cost and size of devices. To address these issues, it is important to develop novel approaches that further advance current design methodologies in both the space and spectrum domains. In recent years, the concept of artificial materials has been proposed and studied intensively in the RF/microwave, terahertz, and optical frequency ranges. An artificial material is a combination of conventional materials such as air, wood, metal, and plastic, and can achieve material properties that have not been found in nature. Therefore, artificial materials (i.e., metamaterials) provide design freedom to control both the spectral performance and geometrical structures of radio frequency front-end blocks and other high-frequency systems. In this dissertation, several artificial materials are proposed and designed by different methods, and their applications to different high-frequency components and circuits are studied. First, the quasi-conformal mapping (QCM) method is applied to design plasmonic wave-adapters and couplers …
Date: May 2015
Creator: Arigong, Bayaner
System: The UNT Digital Library
Extracting Possessions and Their Attributes (open access)

Possession is an asymmetric semantic relation between two entities, where one entity (the possessee) belongs to the other entity (the possessor). Automatically extracting possessions is useful for identifying skills, for recommender systems, and for natural language understanding. Possessions can be found in different communication modalities including text, images, videos, and audio. In this dissertation, I elaborate on the techniques I used to extract possessions. I begin with extracting possessions at the sentence level, including the type and temporal anchors. Then, I extract the duration of possession and co-possessions (when multiple possessors possess the same entity). Next, I extract possessions from an entire Wikipedia article, capturing the change of possessors over time. I extract possessions from social media, including both text and images. Finally, I also present dense annotations generating possession timelines. I present separate datasets, detailed corpus analysis, and machine learning models for each task described above.
Date: May 2020
Creator: Chinnappa, Dhivya Infant
System: The UNT Digital Library