Hopfield Networks as an Error Correcting Technique for Speech Recognition

Access: Use of this item is restricted to the UNT Community
I experimented with Hopfield networks in the context of a voice-based, query-answering system. Hopfield networks store and retrieve patterns; I used them to store queries represented as natural language sentences, and I evaluated the accuracy of the technique for error correction in a spoken question-answering dialog between a computer and a user. I show that the use of an auto-associative Hopfield network helps make the speech recognition system more fault tolerant. I also examined the available encoding schemes for converting a natural language sentence into a pattern of zeroes and ones that can be stored reliably in the Hopfield network, and I suggest scalable data representations that allow a large number of queries to be stored.
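
To make the storage-and-recall mechanism concrete, here is a minimal sketch of an auto-associative Hopfield memory in Python. The Hebbian outer-product training rule, the bipolar (+1/-1) encoding, and the toy 8-bit patterns are illustrative assumptions, not the encoding scheme evaluated in the thesis.

    import numpy as np

    def train_hopfield(patterns):
        """Build the weight matrix from bipolar (+1/-1) patterns with the Hebbian rule."""
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0)        # no self-connections
        return W / len(patterns)

    def recall(W, probe, steps=10):
        """Repeatedly update the probe until it settles into a stored attractor."""
        s = probe.copy()
        for _ in range(steps):
            s = np.sign(W @ s)
            s[s == 0] = 1             # break ties deterministically
        return s

    # Store two toy "query" patterns, then recover one from a corrupted copy.
    stored = np.array([[1, -1,  1, -1, 1, -1,  1, -1],
                       [1,  1, -1, -1, 1,  1, -1, -1]])
    W = train_hopfield(stored)
    noisy = stored[0].copy()
    noisy[0] *= -1                    # flip one bit to mimic a recognition error
    print(recall(W, noisy))           # settles back to stored[0]

Flipping one bit of a stored pattern and letting the network settle recovers the original, which is the fault-tolerant behaviour exploited for correcting recognition errors.
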
Date: May 2004
Creator: Bireddy, Chakradhar
System: The UNT Digital Library
Performance comparison of data distribution management strategies in large-scale distributed simulation. (open access)

Data distribution management (DDM) is a High Level Architecture/Run-time Infrastructure (HLA/RTI) service that manages the distribution of state updates and interaction information in large-scale distributed simulations. The key to efficient DDM is to limit and control the volume of data exchanged during the simulation, relaying data only to those hosts that require it. This thesis examines different DDM implementations and strategies, analyzing three DDM methods: fixed grid-based, dynamic grid-based, and region-based. It also considers the use of multi-resolution modeling with the various DDM strategies and analyzes the performance effects of aggregation/disaggregation under each strategy. Running numerous federation executions, I simulate four different scenarios on a cluster of workstations with a mini-RTI Kit framework and propose a set of benchmarks for comparing the DDM schemes. The goals of this work are to determine the most efficient model for applying each DDM scheme, discover the scalability limits of the various DDM methods, evaluate the effects of aggregation/disaggregation on performance and resource usage, and present accepted benchmarks for use in future research.
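
A minimal sketch of the fixed grid-based matching idea: the routing space is partitioned into equal cells, subscribers register the cells their interest regions overlap, and an update is relayed only to subscribers sharing a cell with it. The one-dimensional routing space, cell size, and federate names are illustrative assumptions, not the thesis's benchmark setup.

    from collections import defaultdict

    CELL_SIZE = 10.0    # width of each fixed grid cell in the routing space (assumed)

    def cells_for_region(lo, hi):
        """Indices of the fixed grid cells a one-dimensional region [lo, hi] overlaps."""
        return set(range(int(lo // CELL_SIZE), int(hi // CELL_SIZE) + 1))

    # Subscribers register interest regions; the matcher records the cells each touches.
    subscriptions = defaultdict(set)          # cell index -> subscriber ids
    for sub_id, (lo, hi) in {"fed_A": (0, 25), "fed_B": (40, 55)}.items():
        for c in cells_for_region(lo, hi):
            subscriptions[c].add(sub_id)

    def route_update(lo, hi):
        """Relay an update only to subscribers whose cells intersect its region."""
        targets = set()
        for c in cells_for_region(lo, hi):
            targets |= subscriptions[c]
        return targets

    print(route_update(20, 30))    # {'fed_A'}: fed_B never receives this update

The dynamic grid-based and region-based strategies perform this matching differently, which is what the thesis's benchmarks compare.
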
Date: May 2004
Creator: Dzermajko, Caron
System: The UNT Digital Library
Analysis of Web Services on J2EE Application Servers (open access)

The Internet has become a standard way of exchanging business data in B2B and B2C applications, and with this came the need to provide various services on the web instead of just static text and images. Web services are a new type of service offered via the web that aids in the creation of globally distributed applications. They are enhanced e-business applications that are easier to advertise and easier to discover on the Internet because of their flexibility and uniformity. In a real-life scenario, it is difficult to decide which J2EE application server to choose when deploying an enterprise web service. This thesis analyzes the various ways in which web services can be developed and deployed. Underlying protocols and crucial issues such as EAI (enterprise application integration), asynchronous messaging, and the registry tModel architecture are considered in this research. The thesis reports on what various J2EE application servers provide, based on a case study and on applications developed to test their functionality.
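
For context on the wire-level protocol involved, the sketch below posts a hand-written SOAP envelope to a service endpoint using only Python's standard library. The endpoint URL, operation name, and namespace are placeholders and assume some service is actually listening there; they are not applications studied in the thesis.

    import urllib.request

    # Placeholder endpoint and operation; assumes a SOAP service is deployed here.
    ENDPOINT = "http://localhost:8080/services/StockQuote"
    SOAP_BODY = """<?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <getQuote xmlns="urn:example:stockquote">
          <symbol>UNT</symbol>
        </getQuote>
      </soap:Body>
    </soap:Envelope>"""

    req = urllib.request.Request(
        ENDPOINT,
        data=SOAP_BODY.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "urn:example:stockquote#getQuote"},
    )
    with urllib.request.urlopen(req) as resp:     # blocks until the server replies
        print(resp.read().decode("utf-8"))
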
Date: May 2004
Creator: Gosu, Adarsh Kumar
System: The UNT Digital Library
Impact of actual interference on capacity and call admission control in a CDMA network. (open access)

An overwhelming number of models in the literature use average inter-cell interference for the calculation of capacity of a Code Division Multiple Access (CDMA) network. The advantage gained in terms of simplicity by using such models comes at the cost of rendering the exact location of a user within a cell irrelevant. We calculate the actual per-user interference and analyze the effect of user-distribution within a cell on the capacity of a CDMA network. We show that even though the capacity obtained using average interference is a good approximation to the capacity calculated using actual interference for a uniform user distribution, the deviation can be tremendously large for non-uniform user distributions. Call admission control (CAC) algorithms are responsible for efficient management of a network's resources while guaranteeing the quality of service and grade of service, i.e., accepting the maximum number of calls without affecting the quality of service of calls already present in the network. We design and implement global and local CAC algorithms, and through simulations compare their network throughput and blocking probabilities for varying mobility scenarios. We show that even though our global CAC is better at resource management, the lack of substantial gain in network throughput and …
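
A minimal sketch of the contrast drawn here, under a standard power-controlled path-loss model: each user's interference at a neighbouring base station scales as (distance to its own BS / distance to the neighbouring BS) raised to the path-loss exponent. The two-cell geometry, the path-loss exponent of 4, and the particular user distributions are illustrative assumptions, not the thesis's system model.

    import math
    import random

    GAMMA = 4.0    # path-loss exponent (assumed)
    D = 2.0        # serving BS at the origin, neighbouring BS at (D, 0)

    def actual_interference(users):
        """Total interference at the neighbouring BS from power-controlled users.

        Each user transmits just enough for its own BS to receive unit power, so the
        power leaking to the neighbouring BS scales as (r_own / r_neighbour) ** GAMMA.
        """
        total = 0.0
        for x, y in users:
            r_own = math.hypot(x, y)
            r_nbr = math.hypot(x - D, y)
            total += (r_own / r_nbr) ** GAMMA
        return total

    random.seed(1)
    uniform    = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(50)]
    edge_heavy = [(random.uniform(0.7, 1.0), random.uniform(-0.3, 0.3)) for _ in range(50)]

    print("uniform users (actual):   ", round(actual_interference(uniform), 2))
    print("edge-heavy users (actual):", round(actual_interference(edge_heavy), 2))
    # An average-interference model assigns the same value to both distributions,
    # so it can badly underestimate the interference from the edge-heavy case.
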
Date: May 2004
Creator: Parvez, Asad
System: The UNT Digital Library
Intelligent Memory Manager: Towards improving the locality behavior of allocation-intensive applications. (open access)

Dynamic memory management required by allocation-intensive (i.e., object-oriented and linked-data-structure) applications has led to a large number of research trends. Memory performance due to cache misses in these applications continues to lag in terms of execution cycles as the CPU-memory speed gap continues to grow. Sophisticated prefetching techniques, data relocation, and multithreaded architectures have tried to address memory latency. These techniques are not completely successful, since they require either extra hardware/software in the system or special properties in the applications. The software needed for prefetching and data relocation strategies, aimed at improving cache performance, pollutes the cache, so the technique itself becomes counter-productive. On the other hand, the extra hardware complexity needed in multithreaded architectures slows the CPU's clock, since "Simpler is Faster." This dissertation, directed at finding the cause of the poor locality behavior of allocation-intensive applications, studies allocators and their impact on the cache performance of these applications. Our study concludes that service functions in general, and memory management functions in particular, become entangled with the application's code and are the major cause of cache pollution. In this dissertation, we present a novel technique that transfers the allocation and de-allocation functions entirely to a separate processor residing in …
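
As a conceptual sketch of the separation being proposed, the Python threads below stand in for the application and a dedicated memory-management processor: the application only enqueues allocation and de-allocation requests, while all allocator bookkeeping happens in the separate worker, out of the application's own instruction stream. The queue interface and handle scheme are illustrative assumptions, not the dissertation's hardware design.

    import itertools
    import queue
    import threading

    requests  = queue.Queue()    # application -> memory manager
    responses = queue.Queue()    # memory manager -> application

    def memory_manager():
        """Worker that performs all allocator bookkeeping on the application's behalf."""
        heap, handles = {}, itertools.count()
        while True:
            op, arg = requests.get()
            if op == "alloc":
                h = next(handles)
                heap[h] = bytearray(arg)     # arg is the requested size in bytes
                responses.put(h)
            elif op == "free":
                heap.pop(arg, None)          # arg is a previously issued handle
            elif op == "stop":
                break

    worker = threading.Thread(target=memory_manager, daemon=True)
    worker.start()

    # Application side: allocation work is delegated rather than executed inline.
    requests.put(("alloc", 64))
    handle = responses.get()
    requests.put(("free", handle))
    requests.put(("stop", None))
    worker.join()
    print("allocation and free serviced by the separate manager; handle =", handle)
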
Date: May 2004
Creator: Rezaei, Mehran
System: The UNT Digital Library
A general purpose semantic parser using FrameNet and WordNet®. (open access)

Syntactic parsing is one of the best understood language processing applications. Since language and grammar have been formally defined, it is easy for computers to parse the syntactic structure of natural language text. Does meaning have structure as well? If it does, how can we analyze that structure? Previous systems rely on a one-to-one correspondence between syntactic rules and semantic rules, but such systems can only be applied to limited fragments of English. In this thesis, we propose a general-purpose shallow semantic parser that utilizes a semantic network (WordNet) and a frame dataset (FrameNet). The semantic relations recognized by the parser are based on how human beings represent knowledge of the world. Parsing semantic structure allows semantic units and constituents to be accessed and processed in a more meaningful way than syntactic parsing alone, moving the automation of understanding natural language text to a higher level.
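
As a small illustration of the kind of lexical-semantic resource such a parser draws on, the sketch below queries WordNet through NLTK's corpus interface. The word, the choice of its first sense, and the hypernym walk are arbitrary examples; this is not the parser described in the thesis, FrameNet lookups are omitted, and it assumes the WordNet corpus can be downloaded.

    import nltk
    nltk.download("wordnet", quiet=True)     # one-time corpus download
    from nltk.corpus import wordnet as wn

    # Take the first noun sense of "car" and walk its hypernym (is-a) chain,
    # the sort of world-knowledge relation a shallow semantic parser can exploit.
    sense = wn.synsets("car", pos=wn.NOUN)[0]
    print(sense.name(), "-", sense.definition())
    while sense.hypernyms():
        sense = sense.hypernyms()[0]
        print("  is-a", sense.name())
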
Date: May 2004
Creator: Shi, Lei
System: The UNT Digital Library
Mobile agent security through multi-agent cryptographic protocols. (open access)

An increasingly promising and widespread topic of research in distributed computing is the mobile agent paradigm: code travelling and performing computations on remote hosts in an autonomous manner. One of the biggest challenges faced by this new paradigm is security. The issue of protecting sensitive code and data carried by a mobile agent against tampering from a malicious host is particularly hard but important. Based on secure multi-party computation, a recent research direction shows the feasibility of a software-only solution to this problem, which some researchers had previously deemed impossible. The best result prior to this dissertation is a single-agent protocol that requires the participation of a trusted third party. Our research employs multi-agent protocols to eliminate the trusted third party, resulting in a protocol with minimal trust assumptions. This dissertation presents one of the first formal definitions of secure mobile agent computation, in which the privacy and integrity of the agent code and data, as well as the data provided by the host, are all protected. We present secure protocols for mobile agent computation against static, semi-honest or malicious adversaries, without relying on any third party or trusting any specific participant in the system. The security of …
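
One building block behind such multi-agent protocols is secret sharing, sketched below with simple additive shares modulo a public prime: a value is split among several agents so that no single host learns it, yet the agents can still reconstruct it (and combine shares) later. This is only a toy primitive for illustration, not the dissertation's protocol, and the modulus and values are arbitrary.

    import secrets

    P = 2**61 - 1    # public prime modulus (an arbitrary illustrative choice)

    def share(secret, n_agents):
        """Split a value into n additive shares mod P; any n-1 shares reveal nothing."""
        shares = [secrets.randbelow(P) for _ in range(n_agents - 1)]
        shares.append((secret - sum(shares)) % P)
        return shares

    def reconstruct(shares):
        return sum(shares) % P

    carried = share(123456, n_agents=3)      # each cooperating agent carries one share
    print(reconstruct(carried))              # 123456

    # Additive shares can be combined share-wise, so agents can compute a sum
    # without any single host ever seeing the individual inputs.
    a, b = share(20, 3), share(22, 3)
    print(reconstruct([(x + y) % P for x, y in zip(a, b)]))   # 42
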
Date: May 2004
Creator: Xu, Ke
System: The UNT Digital Library