Rollback Reduction Techniques Through Load Balancing in Optimistic Parallel Discrete Event Simulation (open access)

Discrete event simulation is an important tool for modeling and analysis. Some simulation applications, such as telecommunication network performance, VLSI logic circuit design, and battlefield simulation, require an enormous amount of computing resources. One way to satisfy this demand for computing power is to decompose the simulation system into several logical processes (lp) and run them concurrently. In any parallel discrete event simulation (PDES) system, the events are ordered according to their time of occurrence. In order for the simulation to be correct, this ordering has to be preserved. There are three approaches to maintaining this ordering. In a conservative system, no lp executes an event unless it is certain that all events with earlier time-stamps have been executed. Such systems are prone to deadlock. In an optimistic system, on the other hand, the simulation progresses without regard to this ordering and saves the system state regularly. Whenever a causality violation is detected, the system rolls back to a state saved earlier and restarts processing after correcting the error. There is a third approach in which all the lps participate in the computation of a safe time window, and all events with time-stamps within this window are processed concurrently. In optimistic simulation systems, there is …
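
The optimistic approach sketched in the abstract can be made concrete in a few lines. The following Python sketch is illustrative only: the lp state, the one-checkpoint-per-event policy, and the (timestamp, action) event format are assumptions made here, anti-messages and global-virtual-time computation are omitted entirely, and it is not the load-balancing scheme studied in the thesis.

import copy

class OptimisticLP:
    """Minimal sketch of one logical process (lp) in an optimistic PDES.
    State is checkpointed after every event; a straggler event (one with a
    timestamp earlier than an event already processed) triggers a rollback."""

    def __init__(self):
        self.initial_state = {"clock": 0, "count": 0}   # illustrative lp state
        self.state = copy.deepcopy(self.initial_state)
        self.processed = []      # events already executed, in execution order
        self.checkpoints = []    # (timestamp, state saved after that event)

    def handle(self, event):
        ts, action = event
        redo = []
        if self.processed and ts < self.processed[-1][0]:
            redo = self.rollback(ts)             # causality violation detected
        self.state["clock"] = ts
        action(self.state)                       # execute the event
        self.checkpoints.append((ts, copy.deepcopy(self.state)))
        self.processed.append(event)
        for e in sorted(redo, key=lambda e: e[0]):
            self.handle(e)                       # re-execute rolled-back events

    def rollback(self, straggler_ts):
        # Restore the last state saved before the straggler's timestamp and
        # return the events that must be re-executed after the straggler.
        while self.checkpoints and self.checkpoints[-1][0] >= straggler_ts:
            self.checkpoints.pop()
        self.state = (copy.deepcopy(self.checkpoints[-1][1])
                      if self.checkpoints else copy.deepcopy(self.initial_state))
        redo = [e for e in self.processed if e[0] >= straggler_ts]
        self.processed = [e for e in self.processed if e[0] < straggler_ts]
        return redo

lp = OptimisticLP()
bump = lambda s: s.update(count=s["count"] + 1)   # a trivial event action
for ev in [(10, bump), (20, bump), (15, bump)]:   # 15 arrives late: rollback
    lp.handle(ev)
print(lp.state)                                   # {'clock': 20, 'count': 3}
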
Date: May 1996
Creator: Sarkar, Falguni
System: The UNT Digital Library
An Algorithm for the PLA Equivalence Problem (open access)

The Programmable Logic Array (PLA) has been widely used in the design of VLSI circuits and systems because of its regularity, flexibility, and simplicity. The equivalence problem is typically to verify that the final description of a circuit is functionally equivalent to its initial description. Verifying the functional equivalence of two descriptions is equivalent to proving their logical equivalence. This problem of pure logic is essential to circuit design. The most widely used technique for solving the problem is based on the Binary Decision Diagram (BDD), proposed by Bryant in 1986. Unfortunately, BDDs require too much time and space to represent moderately large circuits for equivalence testing. We design and implement a new algorithm, called the Cover-Merge Algorithm, for the equivalence problem based on a divide-and-conquer strategy using the concept of cover and a derivational method. We prove that the algorithm is sound and complete. Because of the NP-completeness of the problem, we emphasize simplifications to reduce the search space or to avoid redundant computations. Simplification techniques are incorporated into the algorithm as an essential part to speed up the derivation process. Two different sets of heuristics are developed for two opposite goals: one for the proof of equivalence …
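
The Cover-Merge Algorithm itself is not reproduced here. The Python sketch below only makes the underlying notions concrete under assumed encodings: a single-output PLA cover as a list of cubes over {0, 1, -}, and functional equivalence as agreement on every input vector. The exhaustive check is exponential in the number of inputs and usable only for tiny examples.

from itertools import product

def cube_covers(cube, assignment):
    """A cube is a string over '0', '1', '-' ('-' means the input is a don't-care)."""
    return all(c == '-' or c == a for c, a in zip(cube, assignment))

def evaluate(cover, assignment):
    """A single-output PLA cover: the OR (sum of products) of its cubes."""
    return any(cube_covers(cube, assignment) for cube in cover)

def equivalent(cover_a, cover_b, n_inputs):
    """Exhaustive equivalence check over all 2^n input vectors."""
    for bits in product("01", repeat=n_inputs):
        if evaluate(cover_a, bits) != evaluate(cover_b, bits):
            return False, "".join(bits)          # counterexample input vector
    return True, None

# f and g both realize (x1 AND NOT x2) OR (NOT x1 AND x2) on three inputs.
f = ["01-", "10-"]
g = ["10-", "01-"]
print(equivalent(f, g, 3))                       # (True, None)
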
Date: December 1995
Creator: Moon, Gyo Sik
System: The UNT Digital Library
Multiresolution Signal Cross-correlation (open access)

Signal correlation is a digital signal processing technique with a wide variety of applications, ranging from geophysical exploration to acoustic signal enhancement and beamforming. This dissertation considers the technique from an underwater acoustics perspective, but the algorithms illustrated here can be readily applied to other areas. Although beamforming techniques have been studied for the past fifty years, modern beamforming systems still have difficulty operating in noisy environments, especially in shallow water.
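
The dissertation's multiresolution algorithm is not reproduced here. The Python sketch below only illustrates the general coarse-to-fine idea behind multiresolution cross-correlation: estimate a time lag on decimated (block-averaged) signals, then refine the estimate at full resolution within a small search window. The decimation factor, window size, and use of plain block averaging are assumptions made for the example.

import numpy as np

def decimate(x, factor=2):
    """Crude low-pass and downsample: average non-overlapping blocks of samples."""
    n = len(x) - len(x) % factor
    return x[:n].reshape(-1, factor).mean(axis=1)

def best_lag(a, b):
    """Lag k maximizing sum_n a[n] * b[n - k] (how far a is delayed relative to b)."""
    xcorr = np.correlate(a, b, mode="full")
    return int(np.argmax(xcorr)) - (len(b) - 1)

def corr_at_lag(a, b, k):
    """Cross-correlation of a and b at a single lag k."""
    if k < 0:
        return corr_at_lag(b, a, -k)
    length = min(len(a) - k, len(b))
    return float(np.dot(a[k:k + length], b[:length])) if length > 0 else 0.0

def coarse_to_fine_lag(a, b, factor=4, window=8):
    """Coarse lag estimate on decimated signals, refined at full resolution."""
    coarse = best_lag(decimate(a, factor), decimate(b, factor)) * factor
    candidates = range(coarse - window, coarse + window + 1)
    return max(candidates, key=lambda k: corr_at_lag(a, b, k))

rng = np.random.default_rng(0)
b = rng.standard_normal(4000)                    # reference signal
a = np.concatenate([np.zeros(37), b])[:4000]     # same signal delayed by 37 samples
print(coarse_to_fine_lag(a, b))                  # 37, the injected delay
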
Date: December 1994
Creator: Novaes, Marcos (Marcos Nogueira)
System: The UNT Digital Library
A New Framework for Classification and Comparative Study of Congestion Control Schemes of ATM Networks (open access)

In our work, we have proposed a new framework for the classification and comparative study of ATM congestion control schemes. The aspects on which we have classified the algorithms are control-theoretic approach, action, and congestion notification. These three aspects of the classification present a coherent framework within which congestion control algorithms can be classified. Such a classification will also help in developing new algorithms.
Date: May 1996
Creator: Chandra, Umesh, 1971-
System: The UNT Digital Library
Speech Recognition Using a Synthesized Codebook (open access)

Speech sounds generated by a simple waveform synthesizer were used to create a vector quantization codebook for use in speech recognition. Recognition was tested over the TI-20 isolated-word database using a conventional DTW matching algorithm. Input speech was band-limited to 300-3300 Hz, then passed through the Scott Instruments Corp. Coretechs process, implemented on a VET3 speech terminal, to create the speech representation for matching. Synthesized sounds were processed in software by a VET3 signal processing emulation program. Emulation and recognition were performed on a DEC VAX 11/750. The experiments were organized in two series. A preliminary experiment, using no vector quantization, provided a baseline for comparison. The original codebook contained 109 vectors, all derived from 2-formant synthesized sounds. This codebook was decimated through the course of the first series of experiments, based on the number of times each vector was used in quantizing the training data for the previous experiment, in order to determine the smallest subset of vectors suitable for coding the speech database. The second series of experiments altered several test conditions in order to evaluate the applicability of the minimal synthesized codebook to conventional codebook training. The baseline recognition rate …
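
The TI-20 data, the Coretechs/VET3 front end, and the DTW matcher are not reproduced here. The Python sketch below only illustrates the two vector-quantization steps the abstract relies on: coding each feature frame by its nearest codebook vector, and decimating the codebook by usage counts. The frame dimension and the random stand-in data are assumptions.

import numpy as np

def quantize(frames, codebook):
    """Code each frame by the index of its nearest codeword (Euclidean distance)
    and count how often each codeword is used."""
    counts = np.zeros(len(codebook), dtype=int)
    indices = []
    for frame in frames:
        k = int(np.argmin(np.linalg.norm(codebook - frame, axis=1)))
        indices.append(k)
        counts[k] += 1
    return indices, counts

def decimate_codebook(codebook, counts, keep):
    """Keep only the `keep` most frequently used codewords (usage-based pruning)."""
    order = np.argsort(counts)[::-1][:keep]
    return codebook[np.sort(order)]

rng = np.random.default_rng(1)
codebook = rng.standard_normal((109, 12))        # 109 codewords; 12-dim frames assumed
training = rng.standard_normal((500, 12))        # random stand-in for speech frames
_, usage = quantize(training, codebook)
small = decimate_codebook(codebook, usage, keep=32)
print(small.shape)                               # (32, 12)
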
Date: August 1988
Creator: Smith, Lloyd A. (Lloyd Allen)
System: The UNT Digital Library
A Comparative Analysis of Guided vs. Query-Based Intelligent Tutoring Systems (ITS) Using a Class-Entity-Relationship-Attribute (CERA) Knowledge Base (open access)

One of the greatest problems facing researchers in the subfield of Artificial Intelligence known as Intelligent Tutoring Systems (ITS) is the selection of a knowledge base design that will facilitate modification of the knowledge base. The Class-Entity-Relationship-Attribute (CERA) design, proposed by R. P. Brazile, holds promise as a more generic knowledge base design framework upon which robust and efficient ITS can be built. This study has a twofold purpose. The first is to demonstrate that a CERA knowledge base can be constructed for an ITS on a subset of the domain of Cretaceous paleontology and function as the "expert module" of the ITS. The second is to test the validity of the ideas that students guided through a lesson learn more factual knowledge, while those who explore the knowledge base underlying the lesson through queries at their own pace are able to formulate their own integrative knowledge from what they learn in their explorations and spend more time on the system. This study concludes that a CERA-based system can be constructed as an effective teaching tool. However, while an ITS treatment provides statistically significant gains in achievement test scores, the type of treatment seems …
Date: August 1987
Creator: Hall, Douglas Lee
System: The UNT Digital Library
Linearly Ordered Concurrent Data Structures on Hypercubes (open access)

This thesis presents a simple method for the concurrent manipulation of linearly ordered data structures on hypercubes. The method is based on the existence of a pruned binomial search tree rooted at any arbitrary node of the binary hypercube. The tree spans any arbitrary sequence of n consecutive nodes containing the root, using a fan-out of at most ⌈log₂ n⌉ and a depth of ⌈log₂ n⌉ + 1. Search trees spanning non-overlapping processor lists are formed using only local information, and can be used concurrently without contention problems. Thus, they can be used for performing broadcast and merge operations simultaneously on sets with non-uniform sizes. Extensions to generalized and faulty hypercubes, and applications to image processing algorithms and m-way search, are discussed.
Date: August 1992
Creator: John, Ajita
System: The UNT Digital Library
A Real-Time Merging-Buffering Technique for MIDI Messages (open access)

A powerful and efficient algorithm has been designed to deal with the critical timing problem of MIDI messages. This algorithm converts note events, stored in a natural way, into MIDI messages dynamically. Only limited memory space (the buffer) is required for the conversion, and the size of the buffer is independent of the size of the original sequence of notes. The algorithm's real-time properties allow not only flexible real-time control of musical parameters, but also extension to interactive multimedia applications. A compositional environment called MusicSculptor has been implemented using this algorithm.
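
The MusicSculptor implementation is not reproduced here. The Python sketch below only illustrates the basic buffering idea the abstract describes: note events (start time, duration, pitch), assumed here to be stored in start-time order, are converted on the fly into time-ordered MIDI note-on/note-off messages, and only the pending note-offs are buffered, so the buffer size depends on the polyphony rather than on the length of the sequence.

import heapq

NOTE_ON, NOTE_OFF = 0x90, 0x80

def merge_note_events(notes, channel=0, velocity=64):
    """notes: iterable of (start_time, duration, pitch) in start-time order.
    Yields (time, [status, pitch, velocity]) MIDI messages in time order."""
    pending_off = []                              # min-heap of (off_time, pitch)
    for start, dur, pitch in notes:
        # Emit every note-off that falls at or before this note-on.
        while pending_off and pending_off[0][0] <= start:
            off_t, off_p = heapq.heappop(pending_off)
            yield off_t, [NOTE_OFF | channel, off_p, 0]
        yield start, [NOTE_ON | channel, pitch, velocity]
        heapq.heappush(pending_off, (start + dur, pitch))
    while pending_off:                            # flush the remaining note-offs
        off_t, off_p = heapq.heappop(pending_off)
        yield off_t, [NOTE_OFF | channel, off_p, 0]

# Two overlapping notes and a later one come out as correctly ordered messages.
for t, msg in merge_note_events([(0.0, 1.0, 60), (0.5, 1.0, 64), (2.0, 0.5, 67)]):
    print(f"{t:4.1f}  {[hex(b) for b in msg]}")
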
Date: December 1991
Creator: Chang, Kuo-Lung
System: The UNT Digital Library
Improving Digital Circuit Simulation: A Knowledge-Based Approach (open access)

This project focuses on a prototype system architecture which integrates features of an event-driven gate-level simulator with features of the multiple-expert-system architecture HEARSAY-II. Combining artificial intelligence and simulation techniques, a knowledge-based simulator was designed and constructed to model non-standard circuit behavior. Such non-standard behavior is amplified by advances in integrated circuit technology, and currently available digital circuit simulators cannot simulate it. Circuit designers' expertise on these behavioral phenomena is used in the expert system to guide the base simulator by manipulating its events to achieve the desired behavior.
Date: August 1989
Creator: Benavides, John A. (John Anthony)
System: The UNT Digital Library
The Applications of Regression Analysis in Auditing and Computer Systems (open access)

This thesis describes regression analysis and shows how it can be used in account auditing and in computer system performance analysis. The study first introduces regression analysis techniques and statistics. Then, the use of regression analysis in auditing to detect "out of line" accounts and to determine audit sample size is discussed. These applications led to the concept of using regression analysis to predict job completion times in a computer system. The feasibility of this application of regression analysis was tested by constructing a predictive model to estimate job completion times using a computer system simulator. The predictive model's performance for the various job streams simulated shows that job completion time prediction is a feasible application for regression analysis.
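
As a minimal illustration of the kind of model the thesis tests, the Python sketch below fits job completion time to job attributes by ordinary least squares and uses the fit for prediction. The predictor variables (CPU seconds requested, I/O operations) and all numbers are made up for the example; they are not data from the thesis.

import numpy as np

def fit_ols(X, y):
    """Ordinary least squares with an intercept: returns b such that y ≈ [1, X] @ b."""
    X1 = np.column_stack([np.ones(len(X)), X])
    b, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return b

def predict(b, x):
    return float(b[0] + np.dot(b[1:], x))

# Columns: CPU seconds requested, I/O operations; y: observed completion time (s).
X = np.array([[10, 200], [25, 800], [5, 100], [40, 1500], [15, 400]], dtype=float)
y = np.array([32.0, 95.0, 16.0, 170.0, 52.0])
b = fit_ols(X, y)
print("coefficients:", np.round(b, 3))
print("predicted time for a (20 s CPU, 600 I/O) job:", round(predict(b, [20, 600]), 1))
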
Date: May 1977
Creator: Hubbard, Larry D.
System: The UNT Digital Library
Computerized Analysis of Radiograph Images of Embedded Objects as Applied to Bone Location and Mineral Content Measurement (open access)

This investigation dealt with locating and measuring the x-ray absorption of radiographic images. The methods developed provide fast, accurate, minicomputer-controlled analysis of embedded objects. A PDP/8 computer system was interfaced with a Joyce Loebl 3CS Microdensitometer and a Leeds & Northrup Recorder. The proposed algorithms for bone location and data smoothing work on a twelve-bit minicomputer. Designs of a software control program and an operational procedure are presented. The smoothing filter made wedge and limb scans monotonic from minima to maxima, and it was tested for various convolution intervals. The ability to resmooth the same data in multiple passes was also tested. An interval size of fifteen works well in one pass.
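
The smoothing step is described above only as a convolution over an interval. The Python sketch below uses a plain centered moving average, an assumption standing in for the thesis's filter, to show how a fifteen-sample interval removes the local reversals that keep a noisy wedge scan from being monotonic.

import random

def smooth(scan, interval=15):
    """Centered moving-average smoothing of a 1-D densitometer scan.
    `interval` is the window size in samples; the ends use the samples available."""
    half = interval // 2
    out = []
    for i in range(len(scan)):
        window = scan[max(0, i - half):min(len(scan), i + half + 1)]
        out.append(sum(window) / len(window))
    return out

random.seed(0)
ramp = [i + random.uniform(-3, 3) for i in range(100)]   # noisy wedge-like ramp
smoothed = smooth(ramp, interval=15)
reversals = sum(1 for a, b in zip(smoothed, smoothed[1:]) if b < a)
print("local reversals after one pass:", reversals)
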
Date: August 1976
Creator: Buckner, Richard L.
System: The UNT Digital Library
A Tool for Measuring the Size, Structure and Complexity of Software (open access)

The problem addressed by this thesis is the need for a software measurement tool that enforces a uniform measurement algorithm across several programming languages. The introductory chapter discusses the concern for software measurement and provides background for the specific models and metrics that are studied. A multilingual software measurement tool is then introduced that analyzes programs written in Ada, C, Pascal, or PL/I and quantifies over thirty different program attributes. Metrics computed by the program include McCabe's measure of cyclomatic complexity and Halstead's software science metrics. Some results and conclusions of preliminary data analysis using the tool are also given. The appendices contain exhaustive counting algorithms for obtaining the metrics in each language.
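
The tool's exhaustive, per-language counting rules for Ada, C, Pascal, and PL/I are not reproduced here. The Python sketch below only shows the counting rule behind one of its metrics, McCabe's cyclomatic complexity, approximated by a crude keyword count over C-like source (decision points plus one). The pattern list is an illustrative subset, not the tool's algorithm.

import re

DECISION_PATTERNS = [r"\bif\b", r"\bfor\b", r"\bwhile\b", r"\bcase\b",
                     r"&&", r"\|\|", r"\?"]      # C-like decision points (subset)

def cyclomatic_complexity(source):
    """McCabe's V(G) approximated as (number of decision points) + 1 for a
    single-entry, single-exit routine."""
    decisions = sum(len(re.findall(p, source)) for p in DECISION_PATTERNS)
    return decisions + 1

example = """
int clamp(int x, int lo, int hi) {
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}
"""
print(cyclomatic_complexity(example))            # 3: two ifs plus one
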
Date: May 1984
Creator: Versaw, Larry
System: The UNT Digital Library
An Efficient Hybrid Heuristic and Probabilistic Model for the Gate Matrix Layout Problem in VLSI Design (open access)

In this thesis, the gate matrix layout problem in VLSI design is considered, where the goal is to minimize the number of tracks required to lay out a given circuit, and a taxonomy of approaches to its solution is presented. An efficient hybrid heuristic is also proposed for this combinatorial optimization problem, based on the combination of a probabilistic hill-climbing technique and a greedy method. This heuristic is tested experimentally against four existing algorithms. As test cases, five benchmark problems from the literature as well as randomly generated problem instances are considered. The experimental results show that the proposed hybrid algorithm, on average, performs better than the other heuristics in terms of the required computation time and/or the quality of solution. Due to the computation-intensive nature of the problem, an exact solution within reasonable time limits is impossible, so it is difficult to judge the effectiveness of any heuristic in terms of solution quality (the number of tracks required). A probabilistic model of the gate matrix layout problem that computes the expected number of tracks from the given input parameters is useful in this respect. Such a probabilistic model is proposed in this thesis, and its performance is …
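
The thesis's hybrid heuristic and its probabilistic model are not reproduced here. The Python sketch below only illustrates the probabilistic hill-climbing half of the idea on a toy formulation: gates are permuted, the track count is taken as the maximum number of nets whose leftmost-to-rightmost span covers a column, and a worsening swap is occasionally accepted to escape local minima. The acceptance probability, iteration count, and the small instance are assumptions.

import random

def tracks_needed(order, nets):
    """Track count for a gate ordering: the maximum number of nets whose span
    (leftmost to rightmost connected gate) covers any one column."""
    pos = {g: i for i, g in enumerate(order)}
    return max(sum(1 for net in nets
                   if min(pos[g] for g in net) <= col <= max(pos[g] for g in net))
               for col in range(len(order)))

def probabilistic_hill_climb(gates, nets, iters=2000, p_accept_worse=0.05, seed=0):
    """Swap two gates; keep the swap if it does not increase the track count,
    and occasionally keep a worse swap.  The greedy half of the thesis's hybrid
    heuristic is not modeled here."""
    rng = random.Random(seed)
    order = list(gates)
    cost = tracks_needed(order, nets)
    for _ in range(iters):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
        new_cost = tracks_needed(order, nets)
        if new_cost <= cost or rng.random() < p_accept_worse:
            cost = new_cost
        else:
            order[i], order[j] = order[j], order[i]   # undo the swap
    return order, tracks_needed(order, nets)

# Small made-up instance: 6 gates, 5 nets (each net lists the gates it connects).
gates = ["g1", "g2", "g3", "g4", "g5", "g6"]
nets = [{"g1", "g4"}, {"g2", "g3"}, {"g3", "g6"}, {"g1", "g2", "g5"}, {"g4", "g5"}]
print(probabilistic_hill_climb(gates, nets))
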
Date: August 1993
Creator: Bagchi, Tanuj
System: The UNT Digital Library
The Telecommunications Network Configuration Optimization Problem (open access)

The purpose of telecommunication network configuration optimization is to find the best homing relationship between tandems and switches so as to minimize interswitch traffic, or equivalently to maximize intraswitch traffic. Note that, since minimal interswitch traffic implies minimal IMT utilization, communication costs will also be minimal.
Date: August 1995
Creator: Azizoglu, Mustafa C.
System: The UNT Digital Library
An English and Arabic Character Printer (open access)

This paper is presented in satisfaction of the requirement for two problems in lieu of thesis for the degree of Master of Science. The two problems are: (1) to provide an electrical interface between the M6800 microprocessor and the printer; and (2) to design an Arabic character set and to provide the logic required for its implementation. As it would be artificial and impractical to document these problems separately, a single document is provided here.
Date: December 1976
Creator: Abdel-Razzack, Malek G.
System: The UNT Digital Library
Design and Implementation of a PDP-8 Computer Assembler Executing on the IBM 360/50 Computer (open access)

This problem is intended to be an introduction to the design of a software system which translates PDP-8 assembly language source into its machine-readable object code. This assembler runs on the IBM 360/50. It is assumed that the reader is familiar with the basic PDP-8 assembly language. For the description and use of this assembler, the reader is referred to the PAL-III SYMBOLIC ASSEMBLER PROGRAMMING MANUAL from DEC (order number DIGITAL 8-3-5, Digital Equipment Corporation: Maynard, Massachusetts, 1965). The second problem of the study concerns the design of a simulator for the PDP-8 computer.
Date: August 1977
Creator: Madani, Ali
System: The UNT Digital Library
ADA Tasking Facilities for Concurrent and Real-Time Programming (open access)

This paper describes the multitasking facilities of Ada for concurrent and real-time programming. Synchronization and process communication mechanisms are discussed in detail, and a new mechanism to solve the scheduling problem is developed. In the concurrent programming aspect, a comparison is made between Ada's rendezvous and Pascal's monitor concept. In the real-time programming aspect, the differences between Ada multitasking and the traditional "cyclic executive" approaches are contrasted and their associated costs and benefits analyzed.
Date: April 1984
Creator: Chang, Ming-Chu
System: The UNT Digital Library
Graphical Simulation of Sorting Methods (open access)

In this paper, five different sorting methods are discussed. Each method is analyzed in detail, pointing out its efficiency, weaknesses, strengths, and the types of applications for which it is appropriate. The methods are represented graphically using Turbo Pascal, with one pass of a method performed at a time. The methods discussed in this project are Bubble Sort, Quick Sort, Heap Sort, Shell Sort, and Double Selection Sort. The latter is a new method that I modified from the Selection Sort. Finally, comparisons between the sorting methods mentioned above are discussed.
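
The abstract describes Double Selection Sort only as a modification of Selection Sort. One common reading, sketched below in Python as an assumption rather than as the thesis's definition, selects both the minimum and the maximum of the unsorted region on every pass, shrinking it from both ends.

def double_selection_sort(a):
    """Selection-sort variant placing the smallest and largest remaining elements
    on each pass, so the unsorted region shrinks from both ends."""
    lo, hi = 0, len(a) - 1
    while lo < hi:
        i_min = i_max = lo
        for i in range(lo, hi + 1):
            if a[i] < a[i_min]:
                i_min = i
            if a[i] > a[i_max]:
                i_max = i
        a[lo], a[i_min] = a[i_min], a[lo]
        if i_max == lo:          # the maximum was just moved by the first swap
            i_max = i_min
        a[hi], a[i_max] = a[i_max], a[hi]
        lo, hi = lo + 1, hi - 1
    return a

print(double_selection_sort([5, 3, 8, 1, 9, 2, 7]))   # [1, 2, 3, 5, 7, 8, 9]
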
Date: August 1986
Creator: Saed, Mazen A.
System: The UNT Digital Library
The NTSU School of Music Practice Room Scheduling System (open access)

This is a report concerning the project I completed for my 590 (special problem) credit. The subject of this project was a system for interactive practice room scheduling by music students at NTSU. The system was created in the fall semester of 1982 as a class project for Software Development (CSCI 553) with Dr. Irby. The system was not completely finished, and I received permission from Dr. Irby to finish it and help implement its use in the Music department. I was able to observe three usages of the system: the Spring, Summer I, and Summer II semesters of 1983. This report details the problems encountered during each of these usages and the changes made to the system as a result. Results of a first-use survey, user documentation, and complete final code listings are also included.
Date: August 1983
Creator: Reed, Susan C.
System: The UNT Digital Library
Design and Implementation of a Parser for the DBase II Query Language (open access)

In this paper, the DBase II query language of an RDBMS for personal computers is discussed. Other languages provided by large and sophisticated DBMSs will not be discussed here. The reasons for selecting the DBase II query language for discussion are as follows: 1. It is a simple language that can be learned easily [TOWN 84, DINE 84]. Within a short period, users can learn all of the facilities and manage the system very well. 2. It is a language suitable for interactive programming and execution, like BASIC. 3. It provides adequate facilities for a small database system and serves as an introductory guide to more sophisticated systems.
Date: December 1985
Creator: Chan, Kin Pong
System: The UNT Digital Library
Development of a Text Formatter Under VAX/VMS Operating System (open access)

No matter how extensive the use of the computer becomes, the printed document is still the primary medium for the presentation of information, and will continue to be for some time. The use of computing facilities for the preparation and production of documents is becoming as prevalent as their use for numeric computation. Document preparation systems are now a standard facility at research institutions and have become quite common on many computer systems. A conventional document preparation system usually contains two parts: a text editor used to create, enter, update, and maintain the text and control words that comprise the document in its "input" form, and a text formatter used to process that input and produce the final document.
Date: March 1984
Creator: Chow, Perng
System: The UNT Digital Library
Quality-of-Service Provisioning and Resource Reservation Mechanisms for Mobile Wireless Networks (open access)

In this thesis, a framework for Quality of Service provisioning in next generation wireless access networks is proposed. The framework aims at providing a differentiated service treatment to real-time (delay-sensitive) and non-real-time (delay-tolerant) multimedia traffic flows at the link layer. Novel techniques such as bandwidth compaction, channel reservation, and channel degradation are proposed. Using these techniques, we develop a call admission control algorithm and a call control block as part of the QoS framework. The performance of the framework is captured through analytical modeling and simulation experiments. By analytical modeling, the average carried traffic and the worst case buffer requirements for real-time and non-real-time calls are estimated. Simulation results show a 21% improvement in call admission probability of real-time calls, and a 17% improvement for non-real-time calls, when bandwidth compaction is employed. The channel reservation technique shows a 12% improvement in call admission probability in comparison with another proposed scheme in the literature.
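
The thesis's admission control algorithm, channel reservation, and degradation policies are not reproduced here. The Python sketch below only illustrates the general shape of a link-layer admission decision that tries to compact (temporarily reduce) the bandwidth of delay-tolerant calls before rejecting a new call. The class names, capacities, and minimum-bandwidth values are assumptions made for the example.

class Call:
    def __init__(self, call_id, bw, realtime, min_bw=None):
        self.call_id = call_id
        self.bw = bw                        # currently allocated bandwidth (kbps)
        self.realtime = realtime            # delay-sensitive calls are never compacted
        self.min_bw = bw if min_bw is None else min_bw

class Link:
    def __init__(self, capacity):
        self.capacity = capacity
        self.calls = []

    def free(self):
        return self.capacity - sum(c.bw for c in self.calls)

    def admit(self, call):
        """Admit if capacity allows, possibly after compacting non-real-time calls
        toward their minimum bandwidth; otherwise reject."""
        needed = call.bw - self.free()
        if needed > 0:
            reclaimable = sum(c.bw - c.min_bw for c in self.calls if not c.realtime)
            if reclaimable < needed:
                return False                # even full compaction cannot make room
            for c in self.calls:            # shrink delay-tolerant calls just enough
                if needed <= 0:
                    break
                if not c.realtime:
                    give = min(c.bw - c.min_bw, needed)
                    c.bw -= give
                    needed -= give
        self.calls.append(call)
        return True

link = Link(capacity=1000)
link.admit(Call("data-1", bw=600, realtime=False, min_bw=300))
print(link.admit(Call("voice-1", bw=500, realtime=True)))    # True, after compaction
print([(c.call_id, c.bw) for c in link.calls])               # data-1 compacted to 500
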
Date: August 1998
Creator: Jayaram, Rajeev, 1971-
System: The UNT Digital Library
A Multi-Time Scale Learning Mechanism for Neuromimic Processing (open access)

Learning, representing, and reasoning about temporal relations, particularly causal relations, is a deep problem in artificial intelligence (AI). Learning such representations in the real world is complicated by the fact that phenomena are subject to multiple time-scale influences and may operate with a strange-attractor dynamic. This dissertation proposes a new computational learning mechanism, the adaptrode, which, used in a neuromimic processing architecture, may help to solve some of these problems. The adaptrode is shown to emulate the dynamics of real biological synapses and represents a significant departure from the classical weighted-input scheme of conventional artificial neural networks. Indeed, the adaptrode is shown, by analysis of the deep structure of real synapses, to have a strong structural correspondence with them in terms of multi-time-scale biophysical processes. Simulations of an adaptrode-based neuron and a small network of neurons are shown to have the same learning capabilities as invertebrate animals in classical conditioning. Classical conditioning is considered a fundamental learning task in animals. Furthermore, it is subject to temporal ordering constraints that fulfill the criteria of causal relations in natural systems. It may offer clues to the learning of causal relations and mechanisms for causal reasoning. The …
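
The adaptrode's own equations are not reproduced here, and the sketch below should not be read as them. It is only a generic Python illustration, under assumed rate constants, of what a multi-time-scale mechanism looks like computationally: a chain of traces in which the fastest follows the input and each slower trace follows (and floors the decay of) the one above it, so brief activity fades quickly unless it recurs often enough to build up the slower traces.

def multi_timescale_step(traces, signal, rates=(0.5, 0.05, 0.005)):
    """One update of a chain of coupled traces with fast, medium, and slow rates.
    Each trace moves toward its drive (the input, or the next-faster trace) and
    cannot decay below the next-slower trace."""
    new = list(traces)
    drive = signal
    for level, rate in enumerate(rates):
        floor = new[level + 1] if level + 1 < len(rates) else 0.0
        new[level] += rate * (drive - new[level])
        new[level] = max(new[level], floor)
        drive = new[level]
    return new

traces = [0.0, 0.0, 0.0]
for t in range(200):
    signal = 1.0 if t % 50 < 5 else 0.0          # brief periodic bursts of input
    traces = multi_timescale_step(traces, signal)
print([round(x, 3) for x in traces])             # fast >= medium >= slow; the slow trace keeps a residue of the bursts
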
Date: August 1994
Creator: Mobus, George E. (George Edward)
System: The UNT Digital Library
Using Extended Logic Programs to Formalize Commonsense Reasoning (open access)

In this dissertation, we investigate how commonsense reasoning can be formalized by using extended logic programs. We first use extended logic programs to formalize inheritance hierarchies with exceptions, adopting McCarthy's simple abnormality formalism to express uncertain knowledge. In our representation, not only can credulous reasoning be performed, but ambiguity-blocking and ambiguity-propagating inheritance in skeptical reasoning are also simulated. In response to the anomalous extension problem, we explore and discover that the intuition underlying commonsense reasoning is a kind of forward reasoning. The unidirectional nature of this reasoning is exploited by many reformulations of the Yale shooting problem to exclude the undesired conclusion. We then identify defeasible conclusions in our representation based on the syntax of extended logic programs. A similar idea is also applied to other formalizations of commonsense reasoning to achieve the same purpose.
Date: May 1992
Creator: Horng, Wen-Bing
System: The UNT Digital Library