
Generating Machine Code for High-Level Programming Languages (open access)

Generating Machine Code for High-Level Programming Languages

The purpose of this research was to investigate the generation of machine code from a high-level programming language. The following steps were undertaken: 1) Choose a high-level programming language as the source language and a computer as the target computer. 2) Examine all stages during the compilation of the high-level programming language and all data sets involved in the compilation. 3) Discover the mechanism for generating machine code, and for generating more efficient machine code, from the language. 4) Construct an algorithm for generating machine code for the target computer. The results suggest that a compiler is best implemented in a high-level programming language, and that the SCANNER and PARSER should be independent of target representations, if possible.
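A minimal sketch of the pipeline this abstract describes, in Python: the scanner and parser know nothing about the target machine, and only a separate code generator emits target instructions. All names and the three-instruction target are illustrative, not the thesis's actual design.

```python
# Sketch of a target-independent front end with a separate back end.
# All names and the made-up target machine are illustrative.

def scan(source):
    """Tokenize 'a = b + c' style input; no target knowledge here."""
    return source.replace("=", " = ").replace("+", " + ").split()

def parse(tokens):
    """Build a tiny AST for 'dest = x + y'; still target-independent."""
    dest, _, left, _, right = tokens
    return ("assign", dest, ("add", left, right))

def gen_code(ast):
    """Only this final stage knows the (made-up) target machine."""
    _, dest, (_, left, right) = ast
    return [f"LOAD  R1, {left}",
            f"ADD   R1, {right}",
            f"STORE R1, {dest}"]

for instr in gen_code(parse(scan("total = price + tax"))):
    print(instr)
```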
Date: December 1976
Creator: Chao, Chia-Huei
System: The UNT Digital Library
Computer Graphics Primitives and the Scan-Line Algorithm (open access)

Computer Graphics Primitives and the Scan-Line Algorithm

This paper presents the scan-line algorithm which has been implemented on the Lisp Machine. The scan-line algorithm resides beneath a library of primitive software routines which draw more fundamental objects: lines, triangles and rectangles. This routine, implemented in microcode, applies the A(BC)*D approach to word boundary alignments in order to create an extremely fast, efficient, and general purpose drawing primitive. The scan-line algorithm improves on previous methodologies by limiting the number of CPU intensive instructions and by minimizing the number of words referenced. This paper will describe how to draw scan-lines and the constraints imposed upon the scan-line algorithm by the Lisp Machine's hardware and software.
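A sketch of the word-boundary idea behind A(BC)*D, in Python rather than Lisp Machine microcode: a horizontal span touches a masked partial word at each edge and a run of whole words in between. The 32-bit word size and bit layout are assumptions for illustration.

```python
WORD = 32  # bits per framebuffer word (an assumption for this sketch)

def fill_span(fb, y, x0, x1, width_words):
    """Set pixels x0..x1 (inclusive) on row y of a word-packed bitmap.

    A:     masked partial word at the left edge
    (BC)*: interior words stored whole, no masking
    D:     masked partial word at the right edge
    """
    full = (1 << WORD) - 1
    row = y * width_words
    w0, b0 = divmod(x0, WORD)
    w1, b1 = divmod(x1, WORD)
    lmask = full ^ ((1 << b0) - 1)     # bits b0 .. WORD-1
    rmask = (1 << (b1 + 1)) - 1        # bits 0 .. b1
    if w0 == w1:                       # span fits in one word: A and D merge
        fb[row + w0] |= lmask & rmask
        return
    fb[row + w0] |= lmask              # A
    for w in range(w0 + 1, w1):        # (BC)*
        fb[row + w] = full
    fb[row + w1] |= rmask              # D

fb = [0] * 4                           # one row, four words wide
fill_span(fb, 0, 5, 70, 4)
print([f"{w:08x}" for w in fb])
```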
Date: December 1988
Creator: Myjak, Michael D. (Michael David)
System: The UNT Digital Library
Direct Online/Offline Digital Signature Schemes. (open access)

Direct Online/Offline Digital Signature Schemes.

Online/offline signature schemes are useful in many situations, and two such scenarios are considered in this dissertation: bursty server authentication and embedded device authentication. In this dissertation, new techniques for online/offline signing are introduced and applied in a variety of ways to create online/offline signature schemes, and five different online/offline signature schemes are proposed and proved secure under a variety of models and assumptions. Two of the five proposed schemes have the best offline or best online performance of any currently known technique, and are particularly well suited to the scenarios considered in this dissertation. To determine whether the proposed schemes provide the expected practical improvements, a series of experiments was conducted comparing the proposed schemes with each other and with other state-of-the-art schemes in this area, both on a desktop-class computer and under AVR Studio, a simulation platform for an 8-bit processor that is popular for embedded systems. Under AVR Studio, the proposed SGE scheme, using a typical key size for the embedded device authentication scenario, can complete the offline phase in about 24 seconds and then produce a signature (the online phase) in 15 milliseconds, which is the best offline performance of any known …
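For flavor, here is a toy of the classic hash-sign-switch construction (Shamir-Tauman) that underlies many online/offline schemes; it is not the dissertation's SGE scheme, and the group parameters are tiny toys, never usable in practice.

```python
# Toy hash-sign-switch (Shamir-Tauman), NOT the dissertation's SGE scheme.
# Offline: sign a chameleon hash of a dummy value with any ordinary scheme.
# Online: use the trapdoor to find a collision for the real message.
import random

p, q, g = 23, 11, 4            # tiny toy group (g has prime order q mod p)
x = random.randrange(1, q)     # chameleon trapdoor key
h = pow(g, x, p)               # public chameleon key

def cham_hash(m, r):
    return (pow(g, m, p) * pow(h, r, p)) % p

# --- offline phase: done before the message is known ---
m0, r0 = random.randrange(q), random.randrange(q)
digest = cham_hash(m0, r0)
offline_sig = f"SIGN({digest})"    # stand-in for a real signature

# --- online phase: one modular operation binds the real message ---
m = 7                              # the message, encoded as an integer
r = (r0 + (m0 - m) * pow(x, -1, q)) % q
assert cham_hash(m, r) == digest   # offline_sig still verifies for (m, r)
print("online token r =", r)
```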
Date: December 2008
Creator: Yu, Ping
System: The UNT Digital Library
An English and Arabic Character Printer (open access)

An English and Arabic Character Printer

This paper is presented in satisfaction of the requirement for two problems in lieu of thesis, which are required for the degree of Master of Science. The two problems are: (1) to provide an electrical interface between the M6800 microprocessor and the printer; and (2) to design an Arabic character set and to provide the logic required for its implementation. As it would be artificial and impractical to document these problems separately, a single document is provided here.
Date: December 1976
Creator: Abdel-Razzack, Malek G.
System: The UNT Digital Library
VISOR (Variable Interval Schedule Of Reinforcement) System Documentation (open access)

VISOR (Variable Interval Schedule Of Reinforcement) System Documentation

This program will be used in operant behavior research to monitor and record responses, and to trigger and record reinforcements, on a variable interval (VI) schedule of reinforcement. The original application of this program will be the servicing of several rat cages simultaneously. The response will be the pressing of a metal bar in the cage; the reinforcement will be the triggering of a feeding mechanism which dispenses a food pellet into the cage. The subsequent applications of this program are not limited, in that the actual response and reinforcement devices and the subject type are all treated indifferently by the program.
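A minimal sketch of such a control loop, with hypothetical poll_response() and dispense_pellet() hooks standing in for the bar switch and feeder; the timing policy is illustrative.

```python
# Hypothetical device hooks stand in for the bar switch and the feeder;
# the armed interval is drawn around a mean, as a VI schedule requires.
import random, time

def run_vi_schedule(mean_interval, poll_response, dispense_pellet, trials):
    responses = reinforcements = 0
    for _ in range(trials):
        # VI schedule: the armed interval varies around the mean.
        armed_at = time.monotonic() + random.expovariate(1 / mean_interval)
        # The first response after the interval elapses is reinforced,
        # then a new interval is drawn.
        while True:
            if poll_response():
                responses += 1
                if time.monotonic() >= armed_at:
                    dispense_pellet()
                    reinforcements += 1
                    break
            time.sleep(0.01)
    return responses, reinforcements

# Simulated devices: a "rat" pressing the bar about twice per second.
print(run_vi_schedule(0.5,
                      poll_response=lambda: random.random() < 0.02,
                      dispense_pellet=lambda: None,
                      trials=3))
```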
Date: December 1979
Creator: Long, Daniel Paul
System: The UNT Digital Library
Design and Implementation of a Parser for the DBase II Query Language (open access)

Design and Implementation of a Parser for the DBase II Query Language

In this paper the DBase II query language, part of an RDBMS for personal computers, is discussed. Other languages provided by large and sophisticated DBMSs are not discussed here. The reasons for selecting the DBase II query language for discussion are as follows: 1. It is a simple language that can be learned easily [TOWN 84, DINE 84]; within a short period, users can learn all of the facilities and manage the system very well. 2. It is a language suitable for interactive programming and execution, like BASIC. 3. It provides adequate facilities for a small database system and serves as an introductory guide to more sophisticated systems.
Date: December 1985
Creator: Chan, Kin Pong
System: The UNT Digital Library
Influence of Underlying Random Walk Types in Population Models on Resulting Social Network Types and Epidemiological Dynamics (open access)

Influence of Underlying Random Walk Types in Population Models on Resulting Social Network Types and Epidemiological Dynamics

Epidemiologists rely on human interaction networks for determining the states and dynamics of disease propagation in populations. However, such networks are empirical snapshots of the past. It would be of great benefit if human interaction networks could be statistically predicted and dynamically created while an epidemic is in progress. We develop an application framework for generating human interaction networks and running epidemiological processes on them, drawing on research on human mobility patterns and agent-based modeling. The interaction networks are dynamically constructed by incorporating different types of random walks and human rules of engagement. We explore the characteristics of the created networks and compare them with known theoretical and empirical graphs. The dependencies of epidemic dynamics and their outcomes on the patterns and parameters of human motion and motives are examined and presented through this research. This work specifically describes how the types and parameters of random walks define the properties of the generated graphs. We show that some configurations of the system of agents in random walk can produce network topologies with properties similar to small-world networks. Our goal is to find sets of mobility patterns that lead to empirical-like networks. The possibility of phase transitions in the graphs due to changes in the parameterization of agent …
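A minimal sketch of the network-generation idea: agents perform simple random walks on a grid, and an interaction edge is recorded whenever two agents share a cell. The walk type and engagement rule here are simplified placeholders for the configurations the dissertation studies.

```python
import random
from collections import defaultdict

def build_contact_network(n_agents=50, size=20, steps=500, seed=1):
    random.seed(seed)
    pos = {a: (random.randrange(size), random.randrange(size))
           for a in range(n_agents)}
    edges = set()
    for _ in range(steps):
        for a, (x, y) in pos.items():          # simple lattice random walk
            dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
            pos[a] = ((x + dx) % size, (y + dy) % size)
        cells = defaultdict(list)              # engagement rule:
        for a, cell in pos.items():            # co-location => interaction
            cells[cell].append(a)
        for group in cells.values():
            for i in range(len(group)):
                for j in range(i + 1, len(group)):
                    edges.add((group[i], group[j]))
    return edges

print(len(build_contact_network()), "interaction edges")
```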
Date: December 2016
Creator: Kolgushev, Oleg
System: The UNT Digital Library
Efficient Linked List Ranking Algorithms and Parentheses Matching as a New Strategy for Parallel Algorithm Design (open access)

Efficient Linked List Ranking Algorithms and Parentheses Matching as a New Strategy for Parallel Algorithm Design

The goal of a parallel algorithm is to solve a single problem using multiple processors working together and to do so in an efficient manner. In this regard, there is a need to categorize strategies in order to solve broad classes of problems with similar structures and requirements. In this dissertation, two parallel algorithm design strategies are considered: linked list ranking and parentheses matching.
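For orientation, a sequential simulation of pointer jumping, the textbook starting point for parallel list ranking; the dissertation's own algorithms are more refined, so treat this only as an illustration of the problem.

```python
def list_rank(succ):
    """succ[i] is the next node in the list, or i itself at the tail.
    Returns rank[i] = distance from node i to the tail."""
    n = len(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    nxt = succ[:]
    for _ in range(n.bit_length()):        # O(log n) jumping rounds
        # On a PRAM every node would perform this step simultaneously.
        rank = [rank[i] + rank[nxt[i]] for i in range(n)]
        nxt = [nxt[nxt[i]] for i in range(n)]
    return rank

# List 3 -> 0 -> 2 -> 1 (node 1 is the tail), as successor indices:
print(list_rank([2, 1, 1, 0]))             # [2, 0, 1, 3]
```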
Date: December 1993
Creator: Halverson, Ranette Hudson
System: The UNT Digital Library
Recognition of Face Images (open access)

Recognition of Face Images

The focus of this dissertation is a methodology that enables computer systems to classify different up-front images of human faces as belonging to one of the individuals to which the system has been exposed previously. The images can vary in size, location of the face, orientation, facial expressions, and overall illumination. The approach to the problem taken in this dissertation can be classified as analytic, as the shapes of individual features of human faces are examined separately, as opposed to holistic approaches to face recognition. The outline of the features is used to construct signature functions. These functions are then magnitude-, period-, and phase-normalized to form a translation-, size-, and rotation-invariant representation of the features. Vectors of a limited number of the Fourier decomposition coefficients of these functions are taken to form the feature vectors representing the features in the corresponding vector space. With this approach, no computation is necessary to enforce the translational, size, and rotational invariance at the stage of recognition, thus reducing the problem of recognition to the k-dimensional clustering problem. A recognizer is specified that can reliably classify the vectors of the feature space into object classes. The recognizer makes use of the following principle: …
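A minimal sketch of the normalization idea using NumPy: a centroid-distance signature of a feature outline is reduced to scaled Fourier magnitudes, so translation, size, and start point no longer matter at recognition time. The contour and coefficient count are illustrative.

```python
import numpy as np

def feature_vector(contour, k=8):
    """contour: (n, 2) outline points of one facial feature."""
    pts = np.asarray(contour, dtype=float)
    signature = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    coeffs = np.fft.fft(signature)          # Fourier decomposition
    # Magnitudes discard phase (start point of the outline); dividing by
    # |c0| discards scale; subtracting the centroid discarded translation.
    return np.abs(coeffs[1:k + 1]) / np.abs(coeffs[0])

square = ([(x, 0) for x in range(10)] + [(9, y) for y in range(10)] +
          [(x, 9) for x in range(9, -1, -1)] +
          [(0, y) for y in range(9, -1, -1)])
moved = [(2 * x + 5, 2 * y + 7) for x, y in square]   # doubled and shifted
print(np.allclose(feature_vector(square), feature_vector(moved)))  # True
```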
Date: December 1994
Creator: Pershits, Edward
System: The UNT Digital Library
A Highly Fault-Tolerant Distributed Database System with Replicated Data (open access)

A Highly Fault-Tolerant Distributed Database System with Replicated Data

Because of the high cost and impracticality of a high connectivity network, most recent research in transaction processing has focused on a distributed replicated database system. In such a system, multiple copies of a data item are created and stored at several sites in the network, so that the system is able to tolerate more crash and communication failures and attain higher data availability. However, the multiple copies also introduce a global inconsistency problem, especially in a partitioned network. In this dissertation a tree quorum algorithm is proposed to solve this problem, imposing a logical tree structure along with dynamic system reconfiguration on all the copies of each data item. The proposed algorithm can be viewed as a dynamic voting technique which, with the help of an appropriate concurrency control algorithm, exhibits the major advantages of quorum-based replica control algorithms and of the available copies algorithm, so that a single copy is read for a read operation and a quorum of copies is written for a write operation. In addition, read and write quorums are computed dynamically and independently. As a result, expensive read operations, like those that require several copies of a data item to be read in most …
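A sketch of the basic tree quorum recursion (try the root; if it is down, recurse into a majority of the subtrees), shown here without the dynamic reconfiguration the dissertation adds.

```python
def tree_quorum(node, alive):
    """Collect a quorum from the tree rooted at node, or None if
    impossible. node = (copy_id, [child subtrees])."""
    copy_id, children = node
    if alive(copy_id):
        return {copy_id}                    # the root alone suffices
    if not children:
        return None
    need = len(children) // 2 + 1           # a majority of the subtrees
    quorum, got = set(), 0
    for child in children:
        sub = tree_quorum(child, alive)
        if sub is not None:
            quorum |= sub
            got += 1
            if got == need:
                return quorum
    return None

tree = ("A", [("B", []), ("C", []), ("D", [])])
print(tree_quorum(tree, alive=lambda c: c != "A"))   # e.g. {'B', 'C'}
```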
Date: December 1994
Creator: Lin, Tsai S. (Tsai Shooumeei)
System: The UNT Digital Library
Efficient Algorithms and Framework for Bandwidth Allocation, Quality-of-Service Provisioning and Location Management in Mobile Wireless Computing (open access)

Efficient Algorithms and Framework for Bandwidth Allocation, Quality-of-Service Provisioning and Location Management in Mobile Wireless Computing

The fusion of computers and communications has promised to herald the age of the information superhighway over high-speed communication networks, where the ultimate goal is to enable a multitude of users, at any place, to access information from anywhere and at any time. This, in a nutshell, is the goal envisioned by Personal Communication Services (PCS) and Xerox's ubiquitous computing. In view of the remarkable growth in mobile communication users in the last few years, the radio frequency spectrum allocated by the FCC (Federal Communications Commission) to this service is still very limited, and the usable bandwidth is far less than the expected demand, particularly in view of the emergence of next-generation wireless multimedia applications like video-on-demand, WWW browsing, and traveler information systems. Proper management of the available spectrum is necessary not only to accommodate these high-bandwidth applications, but also to alleviate problems due to a sudden explosion of traffic in so-called hot cells. In this dissertation, we have developed simple load balancing techniques to cope with the problem of teletraffic overloads in one or more hot cells in the system. The objective is to ease the high channel demand in hot cells by …
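A minimal sketch of one channel-borrowing flavor of load balancing for a hot cell; the borrowing policy here is a simplified placeholder, not the dissertation's techniques.

```python
def serve_call(cells, neighbors, cell):
    """cells[c] = [used, allocated]; returns True if the call is carried."""
    used, alloc = cells[cell]
    if used < alloc:
        cells[cell][0] += 1
        return True
    # Hot cell: borrow from the neighbor with the most spare channels.
    donor = max(neighbors[cell], key=lambda n: cells[n][1] - cells[n][0])
    if cells[donor][1] > cells[donor][0]:
        cells[donor][1] -= 1               # the neighbor lends a channel
        cells[cell][1] += 1
        cells[cell][0] += 1
        return True
    return False                           # call blocked

cells = {"hot": [10, 10], "n1": [2, 10], "n2": [7, 10]}
print(serve_call(cells, {"hot": ["n1", "n2"]}, "hot"), cells)
```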
Date: December 1997
Creator: Sen, Sanjoy Kumar
System: The UNT Digital Library
Quantifying Design Principles in Reusable Software Components (open access)

Quantifying Design Principles in Reusable Software Components

Software reuse can occur in various places during the software development cycle. Reuse of existing source code is the most commonly practiced form of software reuse. One of the key requirements for software reuse is readability, thus the interest in the use of data abstraction, inheritance, modularity, and aspects of the visible portion of module specifications. This research analyzed the contents of software reuse libraries to answer the basic question of what makes a good reusable software component. The approach taken was to measure and analyze various software metrics as mapped to design characteristics. A related research question investigated the change in the design principles over time. This was measured by comparing sets of Ada reuse libraries categorized into two time periods. It was discovered that recently developed Ada reuse components scored better on readability than earlier developed components. A benefit of this research has been the development of a set of "design for reuse" guidelines. These guidelines address coding practices as well as design principles for an Ada implementation. C++ software reuse libraries were also analyzed to determine if design principles can be applied in a language independent fashion. This research used cyclomatic complexity metrics, software science metrics, and …
Date: December 1995
Creator: Moore, Freeman Leroy
System: The UNT Digital Library
Temporal Connectionist Expert Systems Using a Temporal Backpropagation Algorithm (open access)

Temporal Connectionist Expert Systems Using a Temporal Backpropagation Algorithm

Representing time has been considered a general problem for artificial intelligence research for many years. More recently, the question of representing time has become increasingly important in representing human decision-making processes through connectionist expert systems. Because most human behaviors unfold over time, any attempt to represent expert performance without considering its temporal nature can often lead to incorrect results. A temporal feedforward neural network model that can be applied to a number of neural network application areas, including connectionist expert systems, has been introduced. The neural network model has a multi-layer structure, i.e., the number of layers is not limited. Also, the model has the flexibility of defining output nodes in any layer; this is especially important for connectionist expert system applications. A temporal backpropagation algorithm which supports the model has been developed. The model, along with the temporal backpropagation algorithm, makes it practical to define a wide range of artificial neural network applications. Also, an approach that can be followed to decrease the memory space used by the weight matrix has been introduced. The algorithm was tested using a medical connectionist expert system to show how best to describe not only the disease but also the entire course of the disease. …
Date: December 1993
Creator: Civelek, Ferda N. (Ferda Nur)
System: The UNT Digital Library
Multiresolutional/Fractal Compression of Still and Moving Pictures (open access)

Multiresolutional/Fractal Compression of Still and Moving Pictures

The scope of the present dissertation is deep lossy compression of still and moving grayscale pictures that maintains their fidelity, with the specific goal of creating a working prototype of a software system for low-bandwidth transmission of still satellite imagery and weather briefings, with the best preservation of the features the end user considers important.
Date: December 1993
Creator: Kiselyov, Oleg E.
System: The UNT Digital Library
High Performance Architecture using Speculative Threads and Dynamic Memory Management Hardware (open access)

High Performance Architecture using Speculative Threads and Dynamic Memory Management Hardware

With the advances in very large scale integration (VLSI) technology, hundreds of billions of transistors can be packed into a single chip. With the increased hardware budget, how to take advantage of the available hardware resources becomes an important research area. Some researchers have shifted from the control-flow von Neumann architecture back to dataflow architecture in order to explore scalable architectures leading to multi-core systems with several hundred processing elements. In this dissertation, I address how the performance of modern processing systems can be improved while attempting to reduce hardware complexity and energy consumption. My research described here tackles both central processing unit (CPU) performance and memory subsystem performance. More specifically, I describe my research related to the design of an innovative decoupled multithreaded architecture that can be used in multi-core processor implementations. I also address how memory management functions can be off-loaded from processing pipelines to further improve system performance and eliminate cache pollution caused by runtime management functions.
Date: December 2007
Creator: Li, Wentong
System: The UNT Digital Library
Intelligent Memory Management Heuristics (open access)

Intelligent Memory Management Heuristics

Automatic memory management is crucial in the implementation of runtime systems even though it induces a significant computational overhead. In this thesis I explore the use of statistical properties of the directed graph describing the set of live data to decide between garbage collection and heap expansion, in a memory management algorithm that combines dynamic-array-represented heaps with a mark-and-sweep garbage collector to enhance its performance. The sampling method, which predicts the density and the distribution of useful data, is implemented as a partial marking algorithm. The algorithm randomly marks the nodes of the directed graph representing the live data at different depths, with a variable probability factor p. Using the information gathered by the partial marking algorithm in the current step and the knowledge gathered in the previous iterations, the proposed empirical formula predicts with reasonable accuracy the density of live nodes on the heap, in order to decide between garbage collection and heap expansion. The resulting heuristics are tested empirically and shown to improve overall execution performance significantly in the context of the Jinni Prolog compiler's runtime system.
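A minimal sketch of the decision heuristic: probe the live-data graph with a depth-limited random marking at probability p, scale the sample into a live-fraction estimate, and collect only when the estimate says tracing would pay off. The scaling formula and threshold are illustrative, not the thesis's empirical formula.

```python
import random

def estimate_live_fraction(roots, children, heap_size, p=0.3, max_depth=6):
    """Depth-limited random marking: each visited node is marked (and its
    children explored) with probability p, then the sample is scaled up."""
    marked = set()
    def mark(node, depth):
        if node in marked or depth > max_depth or random.random() > p:
            return
        marked.add(node)
        for c in children.get(node, ()):
            mark(c, depth + 1)
    for r in roots:
        mark(r, 0)
    return len(marked) / (p * heap_size)

def should_collect(roots, children, heap_size, live_threshold=0.6):
    # A mostly-live heap makes expansion cheaper than a full trace that
    # would reclaim little; collect only when the live estimate is low.
    return estimate_live_fraction(roots, children, heap_size) < live_threshold

children = {0: [1, 2], 1: [3], 2: [], 3: []}
print("collect" if should_collect([0], children, heap_size=100) else "expand")
```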
Date: December 2003
Creator: Panthulu, Pradeep
System: The UNT Digital Library
Refactoring FrameNet for Efficient Relational Queries (open access)

Refactoring FrameNet for Efficient Relational Queries

The FrameNet database is used in a variety of NLP research and applications such as word sense disambiguation, machine translation, information extraction, and question answering. The database is currently available in XML format. The XML database, though a convenient way of distributing the data in its entirety, is not practical for use unless converted to a more application-friendly form. In light of this, we have successfully converted the XML database to a relational MySQL™ database. This conversion reduced the required data storage to less than half. Most importantly, the new database enables fast, complex querying and facilitates use by applications and research. We show the steps taken to ensure relational integrity of the data during the refactoring process, and a simple demo application demonstrating ease of use.
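A minimal sketch of the XML-to-relational shredding step, using sqlite3 in place of MySQL and invented element and table names; the real FrameNet schema is much richer.

```python
import sqlite3
import xml.etree.ElementTree as ET

xml_doc = """<frames>
  <frame name="Motion"><lu name="move.v"/><lu name="go.v"/></frame>
  <frame name="Commerce"><lu name="buy.v"/></frame>
</frames>"""

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE frame   (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE lexunit (id INTEGER PRIMARY KEY, name TEXT,
                      frame_id INTEGER REFERENCES frame(id));
""")
for frame in ET.fromstring(xml_doc):
    cur = db.execute("INSERT INTO frame (name) VALUES (?)",
                     (frame.get("name"),))
    for lu in frame:                        # foreign key keeps integrity
        db.execute("INSERT INTO lexunit (name, frame_id) VALUES (?, ?)",
                   (lu.get("name"), cur.lastrowid))

# The payoff: what needed a walk over the whole XML tree is one join.
for row in db.execute("""SELECT f.name, COUNT(*) FROM frame f
                         JOIN lexunit l ON l.frame_id = f.id
                         GROUP BY f.name"""):
    print(row)
```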
Date: December 2003
Creator: Ahmad, Zeeshan Asim
System: The UNT Digital Library

Peptide-based hidden Markov model for peptide fingerprint mapping.

Access: Use of this item is restricted to the UNT Community
Peptide mass fingerprinting (PMF) was the first automated method for protein identification in proteomics, and it remains in common usage today because of its simplicity and the low equipment costs for generating fingerprints. However, one of the problems with PMF is its limited specificity and sensitivity in protein identification. Here I present a method that shows potential to significantly enhance the accuracy of peptide mass fingerprinting, using a machine learning approach based on a hidden Markov model (HMM). This method is applied to improve the differentiation of real protein matches from those that occur by chance. The system was trained using 300 examples of combined real and false-positive protein identification results, and 10-fold cross-validation was applied to assess model discrimination. The model can achieve 93% accuracy in distinguishing real protein identification results from false-positive matches. The receiver operating characteristic (ROC) curve area for the best model was 0.833.
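A sketch of the evaluation protocol only: 10-fold cross-validation over labeled identification results, with a trivial threshold classifier standing in for the HMM.

```python
import random

def ten_fold_accuracy(examples, train, predict, folds=10, seed=0):
    """examples: list of (score, label); train/predict are callables."""
    data = examples[:]
    random.Random(seed).shuffle(data)
    correct = 0
    for k in range(folds):
        test = data[k::folds]                    # fold k held out
        fit = train([e for i, e in enumerate(data) if i % folds != k])
        correct += sum(predict(fit, x) == y for x, y in test)
    return correct / len(data)

# Toy stand-in for the HMM: label a result "real" above a learned mean.
examples = ([(random.gauss(1.0, 0.3), 1) for _ in range(150)] +
            [(random.gauss(0.2, 0.3), 0) for _ in range(150)])
train = lambda s: sum(x for x, _ in s) / len(s)
predict = lambda mean, x: int(x > mean)
print(f"10-fold accuracy: {ten_fold_accuracy(examples, train, predict):.2f}")
```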
Date: December 2004
Creator: Yang, Dongmei
System: The UNT Digital Library

Voting Operating System (VOS)

Access: Use of this item is restricted to the UNT Community
The electronic voting machine (EVM) plays a very important role in countries where government officials are elected into office. Throughout the world, no operating system exists that tends to the specific requirements of the EVM. Existing EVM technology depends upon the various operating systems currently available, thus ignoring the basic needs of the system. The basic requirements are compromised in order to develop systems on the basis of an already available operating system, leaving considerable scope for error. It is necessary to know the specific details of the particular device for which the operating system is being developed. In this document, I evaluate existing EVMs and identify flaws and shortcomings. I propose a solution for a new operating system that meets the specific requirements of the EVM, calling it Voting Operating System (VOS, pronounced 'voice'). The identification technique can be simplified by using fingerprint technology, which determines the identity of a person based on two fingerprints. I also discuss the various parts of the operating system that have to be implemented to meet all the basic requirements of an EVM, including implementation of the memory manager, …
Date: December 2004
Creator: Venkatadusumelli, Kiran
System: The UNT Digital Library
Evaluating the Scalability of SDF Single-chip Multiprocessor Architecture Using Automatically Parallelizing Code (open access)

Evaluating the Scalability of SDF Single-chip Multiprocessor Architecture Using Automatically Parallelizing Code

Advances in integrated circuit technology continue to provide more and more transistors on a chip. Computer architects are faced with the challenge of finding the best way to translate these resources into high performance. The challenge in the design of the next generation of CPUs (central processing units) lies not in trying to use up the silicon area, but in finding smart ways to make use of the wealth of transistors now available. In addition, the next-generation architecture should offer high throughput, scalability, modularity, and low energy consumption, instead of being suitable for only one class of applications or users, or only emphasizing a faster clock rate. A program exhibits different types of parallelism: instruction-level parallelism (ILP), thread-level parallelism (TLP), or data-level parallelism (DLP). Likewise, architectures can be designed to exploit one or more of these types of parallelism. It is generally not possible to design architectures that can take advantage of all three types of parallelism without using very complex hardware structures and complex compiler optimizations. We present the state-of-the-art SDF (scheduled dataflow) architecture, which exploits as much TLP as the application supplies. We implement a SDF single-chip …
Date: December 2004
Creator: Zhang, Yuhua
System: The UNT Digital Library
Adaptive Planning and Prediction in Agent-Supported Distributed Collaboration. (open access)

Adaptive Planning and Prediction in Agent-Supported Distributed Collaboration.

Agents that act as user assistants will become invaluable as the number of information sources continues to proliferate. Such agents can support the work of users by learning to automate time-consuming tasks and by filtering information to manageable levels. Although considerable advances have been made in this area, it remains a fertile area for further development. One application of agents under careful scrutiny is the automated negotiation of conflicts between different users' needs and desires. Many techniques require explicit user models in order to function. This dissertation explores a technique for dynamically constructing user models and the impact of using them to anticipate the need for negotiation. Negotiation is reduced by including an advising aspect in the agent that can use this anticipation of conflict to adjust user behavior.
Date: December 2004
Creator: Hartness, Ken T. N.
System: The UNT Digital Library
Optimal Access Point Selection and Channel Assignment in IEEE 802.11 Networks (open access)

Optimal Access Point Selection and Channel Assignment in IEEE 802.11 Networks

Designing 802.11 wireless networks includes two major components: selection of access points (APs) in the demand areas and assignment of radio frequencies to each AP. Coverage and capacity are key issues when placing APs in a demand area. APs need to cover all users. A user is considered covered if the power received from its corresponding AP is greater than a given threshold. Moreover, from a capacity standpoint, APs need to provide a certain minimum bandwidth to users located in the coverage area. A major challenge in designing wireless networks is the frequency assignment problem. The 802.11 wireless LANs operate in the unlicensed ISM band, and all APs share the same frequency band. As a result, as 802.11 APs become widely deployed, they start to interfere with each other and degrade network throughput. In consequence, efficient assignment of channels becomes necessary to avoid and minimize interference. In this work, an optimal AP selection method was developed by balancing traffic load. An optimization problem was formulated that minimizes heavy congestion. As a result, APs in wireless LANs will have well-distributed traffic loads, which maximize the throughput of the network. The channel assignment algorithm was designed by minimizing channel interference between APs. The …
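A greedy sketch of the channel assignment step: each AP takes the channel least used by its already-assigned interfering neighbors. Channels 1/6/11 are the usual non-overlapping 2.4 GHz choices; the dissertation's optimization is more sophisticated than this heuristic.

```python
CHANNELS = (1, 6, 11)    # the usual non-overlapping 2.4 GHz channels

def assign_channels(aps, interferes):
    """interferes[a] = set of APs whose coverage overlaps AP a."""
    channel = {}
    # Most-constrained APs first tends to reduce conflicts.
    for ap in sorted(aps, key=lambda a: -len(interferes[a])):
        taken = [channel[n] for n in interferes[ap] if n in channel]
        channel[ap] = min(CHANNELS, key=taken.count)   # least-used channel
    return channel

interferes = {"A": {"B", "C"}, "B": {"A", "C"},
              "C": {"A", "B", "D"}, "D": {"C"}}
print(assign_channels(interferes, interferes))
```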
Date: December 2004
Creator: Park, Sangtae
System: The UNT Digital Library
Execution Time Analysis through Software Monitors (open access)

Execution Time Analysis through Software Monitors

The analysis of an executing program and the isolation of critical code has been a problem since the first program was written. This thesis examines the process of program analysis through the use of a software monitoring system. Since there is a trend toward structured languages, a subset of PL/I was developed to exhibit source statement monitoring and costing techniques. By filtering a PL/I program through a preprocessor which determines the cost of source statements and inserts monitoring code, a post-execution analysis of the program can be obtained. This analysis displays an estimated time cost for each source statement, the number of times the statement was executed, and the product of these values. Additionally, a bar graph is printed in order to quickly locate very active code.
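A minimal sketch of the preprocessor idea in Python terms rather than PL/I: every statement is prefixed with a counter update, and the post-execution report shows estimated cost, execution count, and their product. The per-statement costs here are invented.

```python
from collections import Counter

def instrument(lines):
    """Prefix every statement with a hit-counter update."""
    return "".join(f"hits[{i}] += 1; {line}\n" for i, line in enumerate(lines))

program = ["total = 0",
           "squares = [i * i for i in range(1000)]",
           "total = sum(squares)"]
hits = Counter()
exec(instrument(program), {"hits": hits})

cost = {0: 1, 1: 30, 2: 10}      # estimated per-execution cost (invented)
print(f"{'line':>4} {'cost':>5} {'count':>6} {'total':>7}")
for i, line in enumerate(program):
    print(f"{i:>4} {cost[i]:>5} {hits[i]:>6} {cost[i] * hits[i]:>7}")
```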
Date: December 1977
Creator: Whistler, Wayne C.
System: The UNT Digital Library

A Netcentric Scientific Research Repository

Access: Use of this item is restricted to the UNT Community
The Internet and networks in general have become essential tools for disseminating information. Search engines have become the predominant means of finding information on the Web and all other data repositories, including local resources. Domain scientists regularly acquire and analyze images generated by equipment such as microscopes and cameras, resulting in complex image files that need to be managed in a convenient manner. This type of integrated environment has been recently termed a netcentric scientific research repository. I developed a number of data manipulation tools that allow researchers to manage their information more effectively in a netcentric environment. The specific contributions are: (1) A unique interface for management of data including files and relational databases. A wrapper for relational databases was developed so that the data can be indexed and searched using traditional search engines. This approach allows data in databases to be searched with the same interface as other data. Furthermore, this approach makes it easier for scientists to work with their data if they are not familiar with SQL. (2) A Web services based architecture for integrating analysis operations into a repository. This technique allows the system to leverage the large number of existing tools by wrapping them …
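A minimal sketch of the database wrapper idea: each relational row is flattened into a plain-text document with a stable id, so a conventional full-text engine can index databases and files through one interface. Table and field names are invented.

```python
def rows_to_documents(table, columns, rows):
    """Render relational rows as indexable text documents with stable ids."""
    return [{"id": f"{table}/{i}",
             "text": " ".join(f"{c}: {v}" for c, v in zip(columns, row))}
            for i, row in enumerate(rows)]

docs = rows_to_documents(
    "specimens",
    ["sample", "instrument", "magnification"],
    [("S-001", "SEM", "5000x"), ("S-002", "TEM", "20000x")])

query = "sem"                  # the same keyword search used for files
print([d["id"] for d in docs if query in d["text"].lower()])
```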
Date: December 2006
Creator: Harrington, Brian
System: The UNT Digital Library