Design and Implementation of a TRAC Processor for Fairchild F24 Computer (open access)

Design and Implementation of a TRAC Processor for Fairchild F24 Computer

TRAC is a text-processing language for use with a reactive typewriter. The thesis describes the design and implementation of a TRAC processor for the Fairchild F24 computer. Chapter I introduces some text processing concepts, the TRAC operations, and the implementation procedures. Chapter II examines the history and characteristics of the TRAC language. The next chapter specifies the TRAC syntax and primitive functions. Chapter IV covers the algorithms used by the processor. The last chapter discusses the design experience gained from programming the processor, examines the reactive action caused by the processor, and suggests adding external storage primitive functions for a future version of the processor.
Date: August 1974
Creator: Chi, Ping Ray
System: The UNT Digital Library
Macro Control Structures for Structured Programming in ALC (open access)

Macro Control Structures for Structured Programming in ALC

This thesis describes a set of computer program control structures which permits the application of certain structured programming techniques to IBM/360 assembly language (ALC). The control structures are implemented by programmer-defined instructions known as macros. A history of computer software is presented, providing background on the emergence of structured programming. A survey of the major concepts of structured programming follows, with special attention to control structures and their significance for structured programming. The macros developed in this study include DO, ENDDO, LEAVE, CASE, and ENDCASE. They provide a looping control structure, a loop-escape construct, and a selective control structure. Examples of usage are given.
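To make the idea concrete, the sketch below (in Python, since actual macro expansions are installation-specific) mimics how such a macro package might rewrite a DO/ENDDO pair and a LEAVE escape into the labels and branches the assembler ultimately sees. The macro names follow the thesis; the expansion details and the BNOT mnemonic are placeholders, not the thesis's actual output.

    # Hypothetical sketch: rewrite DO/LEAVE/ENDDO macros into
    # label-and-branch form. BNOT is a stand-in for the compare-and-
    # conditional-branch sequence a real ALC macro package would emit.
    def expand(lines):
        out, stack, n = [], [], 0
        for line in lines:
            fields = line.split()
            op = fields[0] if fields else ""
            if op == "DO":                       # open a loop: label + exit test
                n += 1
                stack.append(n)
                cond = line.split(None, 1)[1]
                out.append(f"LOOP{n}  EQU  *")
                out.append(f"        BNOT {cond},EXIT{n}")
            elif op == "LEAVE":                  # escape the innermost loop
                out.append(f"        B    EXIT{stack[-1]}")
            elif op == "ENDDO":                  # close loop: branch back, exit label
                m = stack.pop()
                out.append(f"        B    LOOP{m}")
                out.append(f"EXIT{m}  EQU  *")
            else:
                out.append(line)
        return out

    print("\n".join(expand(["DO R1,LT,R2", "        AR   R3,R1", "ENDDO"])))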
Date: December 1975
Creator: Connally, Kim G.
System: The UNT Digital Library
A Mechanism for Facilitating Temporal Reasoning in Discrete Event Simulation (open access)

A Mechanism for Facilitating Temporal Reasoning in Discrete Event Simulation

This research establishes the feasibility and potential utility of a software mechanism which employs artificial intelligence techniques to enhance the capabilities of standard discrete event simulators. As background, current methods of integrating artificial intelligence with simulation and relevant research are briefly reviewed.
Date: May 1992
Creator: Legge, Gaynor W.
System: The UNT Digital Library
Simulation of the IBM System/7 (open access)

Simulation of the IBM System/7

This thesis describes the simulation of the IBM SYSTEM/7. The research leading to this thesis involved the development of a PL/I computer program that runs on an IBM 360/50 computer and simulates the IBM SYSTEM/7. Various methods of simulation are examined, and guidelines for computer simulation of another computer are established. The SYSTEM/7 simulator (SIM/7) is the heart of this thesis. SIM/7 simulates the IBM SYSTEM/7 entirely in software, as opposed to an emulator, which combines hardware and software to perform the simulation. This thesis contains a general introduction to computer simulation, the reasons for simulation, a user's guide for SIM/7, and a definition of the SYSTEM/7 processor in the Vienna Definition Language.
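At its core, a pure-software simulator like SIM/7 is an interpreter: it fetches, decodes, and executes the target machine's instructions against a modeled register file and memory. The toy loop below sketches that structure; the four-opcode machine is invented for illustration and bears no relation to the actual SYSTEM/7 instruction set.

    # Toy fetch-decode-execute loop illustrating all-software simulation.
    # The opcodes are invented; a real simulator models the target's
    # full instruction set, I/O, and interrupt structure.
    def simulate(program):
        pc, acc, memory = 0, 0, [0] * 16
        while pc < len(program):
            op, arg = program[pc]        # fetch and decode
            pc += 1
            if op == "LOAD":             # acc <- memory[arg]
                acc = memory[arg]
            elif op == "ADDI":           # acc <- acc + immediate
                acc += arg
            elif op == "STORE":          # memory[arg] <- acc
                memory[arg] = acc
            elif op == "HALT":
                break
        return acc, memory

    acc, mem = simulate([("ADDI", 5), ("STORE", 0), ("ADDI", 2), ("HALT", 0)])
    print(acc, mem[0])                   # prints: 7 5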
Date: May 1977
Creator: Lewis, Ted C.
System: The UNT Digital Library
A Tool for Measuring the Size, Structure and Complexity of Software (open access)

A Tool for Measuring the Size, Structure and Complexity of Software

The problem addressed by this thesis is the need for a software measurement tool that enforces a uniform measurement algorithm across several programming languages. The introductory chapter discusses the concern for software measurement and provides background for the specific models and metrics that are studied. A multilingual software measurement tool is then introduced that analyzes programs written in Ada, C, Pascal, or PL/I and quantifies over thirty different program attributes. Metrics computed by the program include McCabe's measure of cyclomatic complexity and Halstead's software science metrics. Some results and conclusions of preliminary data analysis using the tool are also given. The appendices contain exhaustive counting algorithms for obtaining the metrics in each language.
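For a sense of what such a measurement involves: McCabe's cyclomatic complexity of a single routine is V(G) = E - N + 2 over its control-flow graph, which for structured code equals the number of decision predicates plus one. The fragment below counts decisions over a token stream; the per-language keyword sets are simplified assumptions, whereas the tool itself uses the exhaustive counting algorithms given in the appendices.

    # Simplified sketch of McCabe's metric: decisions + 1.
    # Keyword sets here are illustrative, not the tool's actual rules.
    DECISION_TOKENS = {
        "c":      {"if", "while", "for", "case", "&&", "||", "?"},
        "pascal": {"if", "while", "for", "case", "repeat", "and", "or"},
    }

    def cyclomatic_complexity(tokens, language):
        decisions = sum(1 for t in tokens if t in DECISION_TOKENS[language])
        return decisions + 1

    print(cyclomatic_complexity(["if", "x", "while", "y", "case"], "c"))  # 4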
Date: May 1984
Creator: Versaw, Larry
System: The UNT Digital Library
Development of a Text Formatter Under VAX/VMS Operating System (open access)

Development of a Text Formatter Under VAX/VMS Operating System

However extensive the use of the computer becomes, the printed document is still the primary medium for the presentation of information, and will continue to be for some time. The use of computing facilities for the preparation and production of documents is becoming as prevalent as their use for numeric computation. Commercial document preparation systems are now a standard facility at research institutions and have become quite common on many computer systems. A conventional document preparation system usually contains two parts: a text editor used to create, enter, update, and maintain the text and control words that comprise the document in its "input" form, and a text formatter used to process that input and produce the final document.
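The formatter half of such a system is essentially a filter driven by control words embedded in the text. The sketch below uses two RUNOFF-style commands (.ce to center a line, .br to force a break); these control words and the fill rule are illustrative assumptions, not the command set of the thesis's formatter.

    # Minimal formatter sketch: lines starting with "." are control
    # words, everything else is text. Only .ce and .br are shown, and
    # filling simply joins words (wrapping omitted for brevity).
    def format_text(lines, width=40):
        out, buf = [], []
        def flush():
            if buf:
                out.append(" ".join(buf))
                buf.clear()
        for line in lines:
            if line.startswith(".ce "):        # center the argument text
                flush()
                out.append(line[4:].strip().center(width))
            elif line.startswith(".br"):       # force a line break
                flush()
            else:                              # ordinary text to be filled
                buf.extend(line.split())
        flush()
        return out

    for l in format_text([".ce A Title", "some running", "text here.", ".br"]):
        print(l)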
Date: March 1984
Creator: Chow, Perng
System: The UNT Digital Library
An On-Line Macro Processor for the Motorola 6800 Microprocessor (open access)

An On-Line Macro Processor for the Motorola 6800 Microprocessor

The first chapter discusses the concept of macros: their definition, structure, usage, design goals, and related prior work. This thesis principally concerns my work on OLMP (an On-Line Macro Processor for the Motorola 6800 Microprocessor), a macro processor that interacts with the user. It takes Motorola assembler source code and macro definitions as its input; after the appropriate editing and expansions, it outputs the expanded assembler source statements. The functional objectives, the design for the implementation of OLMP, the basic macro format, and the construction of macro definitions are specified in Chapter Two. The software and hardware environment of OLMP is discussed in the third chapter. The six modules of OLMP form the core of the fourth chapter. Comments on future improvements and on linking OLMP with the Motorola 6800 assembler are the major concern of the final chapter.
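The heart of any such processor is substitution of actual arguments into a stored definition body. The bare-bones sketch below defines a macro whose body refers to parameters &1, &2, ... and expands a call by textual substitution; the definition format and the LDA/STA mnemonics are illustrative stand-ins, not OLMP's actual format.

    # Bare-bones macro definition and expansion by parameter
    # substitution. OLMP's real format and editing facilities are richer.
    macros = {}

    def define(name, body):
        macros[name] = body

    def expand(line):
        fields = line.split()
        if fields and fields[0] in macros:
            args = fields[1].split(",") if len(fields) > 1 else []
            out = []
            for body_line in macros[fields[0]]:
                for i, a in enumerate(args, 1):   # substitute &1, &2, ...
                    body_line = body_line.replace(f"&{i}", a)
                out.append(body_line)
            return out
        return [line]                             # not a macro call

    define("SWAP", ["LDA &1", "STA TMP", "LDA &2",
                    "STA &1", "LDA TMP", "STA &2"])
    for l in expand("SWAP X,Y"):
        print(l)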
Date: May 1980
Creator: Hsieh, Chang-Boe
System: The UNT Digital Library
Using Extended Logic Programs to Formalize Commonsense Reasoning (open access)

Using Extended Logic Programs to Formalize Commonsense Reasoning

In this dissertation, we investigate how commonsense reasoning can be formalized by using extended logic programs. We first use extended logic programs to formalize inheritance hierarchies with exceptions, adopting McCarthy's simple abnormality formalism to express uncertain knowledge. In our representation, not only can credulous reasoning be performed, but both the ambiguity-blocking and the ambiguity-propagating inheritance of skeptical reasoning can be simulated. In response to the anomalous extension problem, we find that the intuition underlying commonsense reasoning is a kind of forward reasoning, whose unidirectional nature is exploited by many reformulations of the Yale shooting problem to exclude the undesired conclusion. We then identify defeasible conclusions in our representation based on the syntax of extended logic programs. A similar idea is also applied to other formalizations of commonsense reasoning to the same end.
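McCarthy's simple abnormality formalism guards each default rule with the assumption that the case at hand is not abnormal. The canonical textbook example (not drawn from the dissertation itself) looks like this in logic-program notation, where "not" is negation as failure:

    flies(X) <- bird(X), not ab(X).    % default: birds fly
    ab(X)    <- penguin(X).            % penguins are exceptions
    bird(X)  <- penguin(X).
    bird(tweety).
    penguin(opus).

Here flies(tweety) is concluded because ab(tweety) cannot be derived, while the default is blocked for opus, whose abnormality is provable.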
Date: May 1992
Creator: Horng, Wen-Bing
System: The UNT Digital Library
A Comparative Analysis of Guided vs. Query-Based Intelligent Tutoring Systems (ITS) Using a Class-Entity-Relationship-Attribute (CERA) Knowledge Base (open access)

A Comparative Analysis of Guided vs. Query-Based Intelligent Tutoring Systems (ITS) Using a Class-Entity-Relationship-Attribute (CERA) Knowledge Base

One of the greatest problems facing researchers in the subfield of Artificial Intelligence known as Intelligent Tutoring Systems (ITS) is the selection of a knowledge base design that will facilitate modification of the knowledge base. The Class-Entity-Relationship-Attribute (CERA) design, proposed by R. P. Brazile, holds promise as a more generic knowledge base framework upon which robust and efficient ITS can be built. This study has a twofold purpose. The first is to demonstrate that a CERA knowledge base can be constructed for an ITS on a subset of the domain of Cretaceous paleontology and can function as the "expert module" of the ITS. The second is to test the idea that students guided through a lesson learn more factual knowledge, while those who explore the underlying knowledge base through queries at their own pace formulate their own integrative knowledge from what they find and spend more time on the system. This study concludes that a CERA-based system can be constructed as an effective teaching tool. However, while an ITS treatment provides statistically significant gains in achievement test scores, the type of treatment seems …
Date: August 1987
Creator: Hall, Douglas Lee
System: The UNT Digital Library
A Multi-Time Scale Learning Mechanism for Neuromimic Processing (open access)

A Multi-Time Scale Learning Mechanism for Neuromimic Processing

Learning, representing, and reasoning about temporal relations, particularly causal relations, is a deep problem in artificial intelligence (AI). Learning such representations in the real world is complicated by the fact that phenomena are subject to influences at multiple time scales and may operate with a strange-attractor dynamic. This dissertation proposes a new computational learning mechanism, the adaptrode, which, used in a neuromimic processing architecture, may help to solve some of these problems. The adaptrode is shown to emulate the dynamics of real biological synapses and represents a significant departure from the classical weighted-input scheme of conventional artificial neural networks. Indeed, analysis of the deep structure of real synapses shows that the adaptrode has a strong structural correspondence with them in terms of multi-time scale biophysical processes. Simulations of an adaptrode-based neuron and a small network of neurons are shown to have the same learning capabilities as invertebrate animals in classical conditioning. Classical conditioning is considered a fundamental learning task in animals. Furthermore, it is subject to temporal ordering constraints that fulfill the criteria of causal relations in natural systems. It may offer clues to the learning of causal relations and mechanisms for causal reasoning. The …
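The multi-time scale idea can be loosely caricatured as a cascade of traces, each pulled toward the one above it at a slower rate, so that a brief input burst leaves a short-lived fast trace but a long-lived slow one. The constants and update rule below are invented for illustration and are not the adaptrode's actual equations.

    # Loose caricature of a multi-time scale weight: w[0] tracks the
    # input quickly; each deeper trace follows the one above it more
    # slowly. Rates are invented for illustration.
    def step(w, signal, rates=(0.5, 0.05, 0.005)):
        w = list(w)
        w[0] += rates[0] * (signal - w[0])        # fast trace follows input
        for i in range(1, len(w)):                # slower traces follow faster ones
            w[i] += rates[i] * (w[i - 1] - w[i])
        return w

    w = [0.0, 0.0, 0.0]
    for t in range(100):
        w = step(w, 1.0 if t < 20 else 0.0)       # brief input burst
    print([round(x, 3) for x in w])               # fast trace decays; slow retains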
Date: August 1994
Creator: Mobus, George E. (George Edward)
System: The UNT Digital Library
An Algorithm for the PLA Equivalence Problem (open access)

An Algorithm for the PLA Equivalence Problem

The Programmable Logic Array (PLA) has been widely used in the design of VLSI circuits and systems because of its regularity, flexibility, and simplicity. The equivalence problem is typically to verify that the final description of a circuit is functionally equivalent to its initial description. Verifying the functional equivalence of two descriptions is equivalent to proving their logical equivalence. This problem of pure logic is essential to circuit design. The most widely used technique for solving the problem is based on the Binary Decision Diagram (BDD), proposed by Bryant in 1986. Unfortunately, BDDs require too much time and space to represent moderately large circuits for equivalence testing. We design and implement a new algorithm, the Cover-Merge Algorithm, for the equivalence problem; it is based on a divide-and-conquer strategy using the concept of a cover and a derivational method. We prove that the algorithm is sound and complete. Because of the NP-completeness of the problem, we emphasize simplifications that reduce the search space or avoid redundant computations. Simplification techniques are incorporated into the algorithm as an essential part of speeding up the derivation process. Two different sets of heuristics are developed for two opposite goals: one for the proof of equivalence …
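The problem itself is easy to state: two descriptions are equivalent iff they agree on every input assignment. The exhaustive check below makes that definition concrete, and its exponential cost in the number of variables is exactly why compact structures such as BDDs, and the thesis's cover-based derivation, are needed for realistic circuits.

    # Equivalence by exhaustive enumeration: correct but exponential
    # in the number of variables, which motivates BDDs and covers.
    from itertools import product

    def equivalent(f, g, n_vars):
        return all(f(*bits) == g(*bits)
                   for bits in product((0, 1), repeat=n_vars))

    # Two descriptions of the same function, x XOR y.
    f = lambda x, y: (x and not y) or (not x and y)
    g = lambda x, y: (x + y) % 2
    print(equivalent(f, g, 2))    # True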
Date: December 1995
Creator: Moon, Gyo Sik
System: The UNT Digital Library
Quantifying Design Principles in Reusable Software Components (open access)

Quantifying Design Principles in Reusable Software Components

Software reuse can occur at various points in the software development cycle. Reuse of existing source code is the most commonly practiced form of software reuse. One of the key requirements for software reuse is readability; hence the interest in data abstraction, inheritance, modularity, and aspects of the visible portion of module specifications. This research analyzed the contents of software reuse libraries to answer the basic question of what makes a good reusable software component. The approach taken was to measure and analyze various software metrics as mapped to design characteristics. A related research question investigated how the design principles have changed over time, measured by comparing sets of Ada reuse libraries categorized into two time periods. It was discovered that recently developed Ada reuse components scored better on readability than earlier components. A benefit of this research has been the development of a set of "design for reuse" guidelines, which address coding practices as well as design principles for an Ada implementation. C++ software reuse libraries were also analyzed to determine whether the design principles can be applied in a language-independent fashion. This research used cyclomatic complexity metrics, software science metrics, and …
Date: December 1995
Creator: Moore, Freeman Leroy
System: The UNT Digital Library
Practical Parallel Processing (open access)

Practical Parallel Processing

The physical limitations of uniprocessors and the real-time requirements of numerous practical applications have made parallel processing an essential technology in military, industrial, and scientific research. In this dissertation, we investigate parallelizations of three practical applications using three parallel machine models. Finitely inductive (FI) sequence processing is a pattern recognition technique used in many fields. We first propose four parallel FI algorithms on the EREW PRAM. The time complexity of parallel factoring and following by bucket packing is O(sk^2n/p), and these algorithms are optimal under some conditions. Parallel factoring and following by hashing requires O(sk^2n/p) time when uniform hash functions are used, log(p) ≤ kn/p, and pm ≈ n. Their speedup is proportional to the number of processors used. In these results, s is the number of levels, k is the size of the antecedents, n is the length of the input sequence, and p is the number of processors. We also describe algorithms for raster/vector conversion based on the scan model, which handle block-like connected components of arbitrary geometrical shape with multi-level nested doughnuts, for the IES (image exploitation system). Both the parallel raster-to-vector algorithm and the parallel vector-to-raster algorithm require …
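As a worked illustration of what the O(sk^2n/p) bound asserts: if the sequential work grows as s·k²·n, then dividing it evenly over p processors gives a running time of about s·k²·n/p, so the speedup T(1)/T(p) grows linearly in p, up to the stated conditions on p. The numbers below are invented purely to show the arithmetic.

    # Worked arithmetic for the stated bound, constant factors ignored.
    # Parameter values are invented for illustration.
    def parallel_time(s, k, n, p):
        return s * k * k * n / p

    s, k, n = 4, 8, 10**6
    t1 = parallel_time(s, k, n, 1)
    for p in (2, 8, 32):
        print(p, round(t1 / parallel_time(s, k, n, p), 1))   # speedup == p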
Date: August 1996
Creator: Zhang, Hua, 1954-
System: The UNT Digital Library
Efficient Algorithms and Framework for Bandwidth Allocation, Quality-of-Service Provisioning and Location Management in Mobile Wireless Computing (open access)

Efficient Algorithms and Framework for Bandwidth Allocation, Quality-of-Service Provisioning and Location Management in Mobile Wireless Computing

The fusion of computers and communications promises to herald the age of the information superhighway over high-speed communication networks, where the ultimate goal is to enable a multitude of users, at any place, to access information from anywhere and at any time. This, in a nutshell, is the goal envisioned by Personal Communication Services (PCS) and Xerox's ubiquitous computing. In view of the remarkable growth in mobile communication users in the last few years, the radio frequency spectrum allocated by the FCC (Federal Communications Commission) to this service is still very limited, and the usable bandwidth is far less than the expected demand, particularly in view of the emergence of next-generation wireless multimedia applications such as video-on-demand, WWW browsing, and traveler information systems. Proper management of the available spectrum is necessary not only to accommodate these high-bandwidth applications, but also to alleviate problems due to sudden explosions of traffic in so-called hot cells. In this dissertation, we have developed simple load-balancing techniques to cope with the problem of teletraffic overloads in one or more hot cells in the system. The objective is to ease the high channel demand in hot cells by …
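A generic version of such a technique is channel borrowing: a hot cell whose demand exceeds a threshold borrows capacity from its most lightly loaded neighbor. The sketch below is a textbook-style illustration of that idea, with invented thresholds and data, and is not the dissertation's specific algorithm.

    # Generic load-balancing sketch: hot cells borrow channels from
    # their least-loaded neighbor. Thresholds and data are invented.
    def balance(cells, neighbors, threshold=0.9):
        moves = []
        for cell, (used, total) in cells.items():
            while used / total > threshold:
                donor = min(neighbors[cell],
                            key=lambda c: cells[c][0] / cells[c][1])
                d_used, d_total = cells[donor]
                if d_used / d_total > threshold or d_total <= 1:
                    break                              # no neighbor can help
                cells[donor] = (d_used, d_total - 1)   # donor lends a channel
                total += 1
                cells[cell] = (used, total)
                moves.append((donor, cell))
        return moves

    cells = {"A": (19, 20), "B": (5, 20), "C": (8, 20)}
    neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
    print(balance(cells, neighbors), cells)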
Date: December 1997
Creator: Sen, Sanjoy Kumar
System: The UNT Digital Library