37 Matching Results


A Heuristic Procedure for Specifying Parameters in Neural Network Models for Shewhart X-bar Control Chart Applications (open access)

This study develops a heuristic procedure for specifying parameters for a neural network configuration (learning rate, momentum, and the number of neurons in a single hidden layer) in Shewhart X-bar control chart applications. Also, this study examines the replicability of the neural network solution when the neural network is retrained several times with different initial weights.
Date: December 1993
Creator: Nam, Kyungdoo T.
System: The UNT Digital Library
The Fixed v. Variable Sampling Interval Shewhart X-Bar Control Chart in the Presence of Positively Autocorrelated Data (open access)

This study uses simulation to examine differences between fixed sampling interval (FSI) and variable sampling interval (VSI) Shewhart X-bar control charts for processes that produce positively autocorrelated data. The influence of sample size (1 and 5), the autocorrelation parameter, the shift in process mean, and the length of time between samples is investigated by comparing the average time to signal (ATS) and the average number of samples to signal (ANSS) for an out-of-control condition under FSI and VSI Shewhart X-bar charts. These comparisons are conducted in two ways: with control chart limits pre-set at μ ± 3σ/√n and with limits computed from the sampling process. Proper interpretation of the Shewhart X-bar chart requires the assumption that observations are statistically independent; however, process data are often autocorrelated over time. Results of this study indicate that increasing the time between samples decreases the effect of positive autocorrelation between samples. Thus, with sufficient time between samples, the assumption of independence is essentially not violated. Samples of size 5 produce a faster signal than samples of size 1 with both the FSI and VSI Shewhart X-bar charts when positive autocorrelation is present. However, samples of size 5 require the same time when the data are independent, indicating that this effect is a …
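The chart mechanics described above can be sketched in a few lines. This is a minimal illustration of a fixed-sampling-interval (FSI) X-bar chart with limits at μ ± 3σ/√n under independent observations, estimating the average time to signal (ATS) by simulation; it is not the dissertation's simulation, and the autocorrelated case studied there would additionally require a time-series model for the process data. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

mu, sigma, n = 0.0, 1.0, 5             # in-control mean, process std dev, sample size
ucl = mu + 3 * sigma / np.sqrt(n)       # upper control limit, mu + 3*sigma/sqrt(n)
lcl = mu - 3 * sigma / np.sqrt(n)       # lower control limit

def time_to_signal(shift, interval=1.0, max_samples=100_000):
    """Draw X-bar values at fixed intervals until one falls outside the limits."""
    for t in range(1, max_samples + 1):
        xbar = rng.normal(mu + shift, sigma / np.sqrt(n))
        if not (lcl < xbar < ucl):
            return t * interval          # time of the out-of-control signal
    return float("inf")

# Average time to signal (ATS) over many replications; a shifted process
# signals far sooner, on average, than an in-control one.
ats_shifted = np.mean([time_to_signal(shift=1.5) for _ in range(2000)])
print(ats_shifted)
```

With a 1.5σ mean shift and samples of size 5, an out-of-control point is likely on each sample, so the estimated ATS is small; shrinking the shift toward zero drives the ATS up toward the in-control average run length.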
Date: May 1993
Creator: Harvey, Martha M. (Martha Mattern)
System: The UNT Digital Library
Robustness of Parametric and Nonparametric Tests When Distances between Points Change on an Ordinal Measurement Scale (open access)

The purpose of this research was to evaluate the effect on parametric and nonparametric tests using ordinal data when the distances between points changed on the measurement scale. The research examined the performance of Type I and Type II error rates using selected parametric and nonparametric tests.
Date: August 1994
Creator: Chen, Andrew H. (Andrew Hwa-Fen)
System: The UNT Digital Library
Robustness of the One-Sample Kolmogorov Test to Sampling from a Finite Discrete Population (open access)

One of the most useful and best-known goodness-of-fit tests is the Kolmogorov one-sample test. The assumptions for the Kolmogorov one-sample test are: 1. a random sample; 2. a continuous random variable; 3. F(x) is a completely specified hypothesized cumulative distribution function. The Kolmogorov one-sample test has a wide range of applications. Knowing the effect of using the test when an assumption is not met is of practical importance. The purpose of this research is to analyze the robustness of the Kolmogorov one-sample test to sampling from a finite discrete distribution. The standard tables for the Kolmogorov test are derived on the basis of sampling from a theoretical continuous distribution; as such, the theoretical distribution is infinite. The standard tables do not include a method or adjustment factor to estimate the effect on table values for statistical experiments where the sample stems from a finite discrete distribution without replacement. This research provides an extension of the Kolmogorov test for the case where the hypothesized distribution function is finite and discrete and the sampling distribution is based on sampling without replacement. An investigative study has been conducted to explore possible tendencies and relationships in the distribution of Dn when sampling with and without replacement …
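The statistic at the heart of the abstract above is Dn = sup_x |F_n(x) − F(x)|. The sketch below computes it under the standard continuous-F assumption on which the published tables rest; the finite-discrete, without-replacement setting studied in the dissertation changes the null distribution of Dn, which is exactly why those tables need adjustment. The function name and sample values are illustrative.

```python
import numpy as np

def kolmogorov_dn(sample, cdf):
    """One-sample Kolmogorov statistic Dn = sup_x |F_n(x) - F(x)|."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    F = cdf(x)
    # The supremum is attained just before or at an order statistic,
    # so it suffices to check the empirical CDF's jump points.
    d_plus = np.max(np.arange(1, n + 1) / n - F)
    d_minus = np.max(F - np.arange(0, n) / n)
    return max(d_plus, d_minus)

rng = np.random.default_rng(0)
u = rng.uniform(size=200)
dn = kolmogorov_dn(u, cdf=lambda x: x)   # test against Uniform(0, 1)
crit = 1.36 / np.sqrt(200)               # approximate large-n 5% critical value
print(dn, crit)
```

For a single observation at 0.5 tested against Uniform(0, 1), both one-sided deviations equal 0.5, so Dn = 0.5; such hand-checkable cases are a quick sanity test of the implementation.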
Date: December 1996
Creator: Tucker, Joanne M. (Joanne Morris)
System: The UNT Digital Library
Comparing the Powers of Several Proposed Tests for Testing the Equality of the Means of Two Populations When Some Data Are Missing (open access)

In comparing the means of two normally distributed populations with unknown variance, two tests very often used are the two-independent-sample t test and the paired-sample t test. There is a possible gain in the power of the significance test by using the paired-sample design instead of the two-independent-sample design.
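The power gain from pairing can be seen in a small Monte Carlo sketch. This is not one of the dissertation's proposed tests for missing data; it simply illustrates the baseline comparison the abstract describes, with hypothetical values for the correlation (rho), mean shift (delta), and sample size.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def power(paired, rho=0.6, delta=0.5, n=30, reps=1000, alpha=0.05):
    """Monte Carlo power to detect a mean shift delta under correlation rho."""
    cov = [[1.0, rho], [rho, 1.0]]
    hits = 0
    for _ in range(reps):
        xy = rng.multivariate_normal([0.0, delta], cov, size=n)
        x, y = xy[:, 0], xy[:, 1]
        if paired:
            p = stats.ttest_rel(x, y).pvalue       # uses within-pair differences
        else:
            p = stats.ttest_ind(x, y).pvalue       # ignores the pairing
        hits += p < alpha
    return hits / reps

p_paired = power(paired=True)
p_indep = power(paired=False)
print(p_paired, p_indep)
```

Because the paired test works with differences whose variance is 2(1 − rho) rather than 2, positive correlation shrinks the noise and raises the power; with rho = 0 the two designs are on roughly equal footing.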
Date: May 1994
Creator: Dunu, Emeka Samuel
System: The UNT Digital Library
Supply Chain Network Planning for Humanitarian Operations During Seasonal Disasters (open access)

To prevent loss of lives during seasonal disasters, relief agencies distribute critical supplies and provide lifesaving services to the affected populations. Despite agencies' efforts, frequently occurring disasters increase the cost of relief operations. The purpose of our study is to minimize the cost of relief operations, considering that such disasters cause random demand. To achieve this, we have formulated a series of models, which are distinct from the current studies in three ways. First, to the best of our knowledge, we are the first to capture both perishable and durable products together. Second, we have aggregated multiple products in a different way than current studies do. This unique aggregation requires less data than other types of aggregation. Finally, our models are compatible with the practical data generated by FEMA. Our models offer insights on the impacts of various parameters on optimum cost and order size. The analyses of demand correlation and information quality offer interesting insights; for instance, under certain cases, the quality of information does not influence cost. Our study has considered both risk-averse and risk-neutral approaches and provided insights. The insights obtained from our models are expected to help agencies reduce …
Date: May 2013
Creator: Ponnaiyan, Subramaniam
System: The UNT Digital Library
The Impact of Quality on Customer Behavioral Intentions Based on the Consumer Decision Making Process As Applied in E-commerce (open access)

Perceived quality in the context of e-commerce has been defined and examined in numerous studies, but, to date, there are no consistent definitions and measurement scales. Instruments that measure quality in e-commerce industries primarily focus on website quality or service quality during the transaction and delivery phases. Even though some scholars have proposed instruments from different perspectives, these scales do not fully evaluate the level of quality perceived by customers during the entire decision-making process. This dissertation provides five main contributions to the e-commerce, service quality, and decision science literature: (1) development of a comprehensive instrument to measure how online customers perceive the quality of the shopping channel, website, transaction, and recovery based on the customer decision-making process; (2) identification of the determinants of customer satisfaction and the key dimensions of customer behavioral intentions in e-commerce; (3) examination of the relationships among perceived quality, customer satisfaction, and loyalty intention using empirical data; (4) application of different statistical packages (LISREL and PLS-Graph) for data analysis and comparison of how these methods impact the results; and (5) examination of the moderating effects of control variables. A survey was designed and distributed to a total of 1126 college students in a …
Date: August 2012
Creator: Wen, Chao
System: The UNT Digital Library
Comparing Latent Dirichlet Allocation and Latent Semantic Analysis as Classifiers (open access)

In the Information Age, unstructured electronic text documents have proliferated. Processing these documents is a daunting task for humans, who have limited cognitive capacity for handling large volumes of often lengthy documents. To address this problem, computer algorithms for text data are being developed. Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) are two such algorithms that have received much attention individually in the text-data literature for topic extraction studies, but not for document classification or for comparison studies. Since classification is considered an important human function and has been studied in the areas of cognitive science and information science, this dissertation reports a research study comparing LDA, LSA, and humans as document classifiers. The research questions posed in this study are: R1: How accurate are LDA and LSA in classifying documents in a corpus of textual data over a known set of topics? R2: How accurate are humans in performing the same classification task? R3: How does LDA classification performance compare to LSA classification performance? To address these questions, a classification study involving human subjects was designed where humans were asked to generate and classify documents …
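Using LSA as a classifier, as the study above does, can be sketched on a toy corpus: build a term-document matrix, take a truncated SVD, and assign a new document to the label of its nearest training document in the latent space. The corpus, labels, and dimensionality here are invented for illustration; an LDA counterpart would instead compare documents by their inferred topic-probability vectors.

```python
import numpy as np

# Tiny illustrative corpus with two known topics.
docs = ["ball game team win", "team score ball",
        "bake oven bread", "bread flour oven bake"]
labels = ["sports", "sports", "cooking", "cooking"]

vocab = sorted({w for d in docs for w in d.split()})

def vec(doc):
    """Raw term-count vector over the shared vocabulary."""
    return np.array([doc.split().count(w) for w in vocab], dtype=float)

X = np.stack([vec(d) for d in docs])           # document-term matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                          # number of latent dimensions kept
Z = U[:, :k] * s[:k]                           # training documents in latent space

def classify(doc):
    z = vec(doc) @ Vt[:k].T                    # project query into latent space
    sims = Z @ z / (np.linalg.norm(Z, axis=1) * np.linalg.norm(z) + 1e-12)
    return labels[int(np.argmax(sims))]        # label of most similar document

print(classify("oven bread flour"))
```

Because the two topics here use disjoint vocabularies, the truncated SVD cleanly separates them and a cooking-flavored query lands next to the cooking documents; real corpora overlap heavily, which is where the comparison between LSA, LDA, and human judgment becomes interesting.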
Date: December 2011
Creator: Anaya, Leticia H.
System: The UNT Digital Library
Impact of Forecasting Method Selection and Information Sharing on Supply Chain Performance. (open access)

Effective supply chain management gains much attention from industry and academia because it helps firms across a supply chain to reduce cost and improve customer service level efficiently. Focusing on one of the key challenges of supply chains, namely, demand uncertainty, this dissertation extends the work of Zhao, Xie, and Leung to examine the effects of forecasting method selection coupled with information sharing on supply chain performance in a dynamic business environment. The results of this study showed that under various scenarios, advanced forecasting methods such as neural network and GARCH models play a more significant role when capacity tightness increases, and are more important to the retailers than to the supplier under certain circumstances in terms of supply chain costs. Thus, advanced forecasting models should be promoted in supply chain management. However, this study also demonstrated that forecasting methods not capable of modeling features of certain demand patterns significantly impact a supply chain's performance. That is, a forecasting method misspecified for the characteristics of the demand pattern usually results in higher supply chain costs. Thus, in practice, supply chain managers should be cognizant of the cost impact of selecting commonly used traditional forecasting methods, such as moving …
Date: December 2009
Creator: Pan, Youqin
System: The UNT Digital Library
Links among perceived service quality, patient satisfaction and behavioral intentions in the urgent care industry: Empirical evidence from college students. (open access)

Patient perceptions of health care quality are critical to a health care service provider's long-term success because of the significant influence perceptions have on customer satisfaction and, consequently, organizational financial performance. Patient satisfaction affects not only the outcome of the health care process, such as patient compliance with physician advice and treatment, but also patient retention and favorable word-of-mouth. Accordingly, it is a critical strategy for health care organizations to provide quality service and address patient satisfaction. The urgent care (UC) industry is an integral part of the health care system in the United States that has been experiencing rapid growth. UC provides a wide range of medical services for a large group of patients and now serves an increasing population. UC is becoming popular because of its convenient locations, extended hours, walk-in policy, short waiting times, and accessibility. A closer examination of the current health care research, however, indicates that there is a paucity of research on urgent care providers. Confronted with the emergence of the urgent care industry and the increasing demand for urgent care, it is necessary to understand how patients perceive urgent care providers and what influences patient satisfaction and retention. This dissertation addresses four …
Date: August 2009
Creator: Qin, Hong
System: The UNT Digital Library
Investigating the relationship between the business performance management framework and the Malcolm Baldrige National Quality Award framework. (open access)

The business performance management (BPM) framework helps an organization continuously adjust and successfully execute its strategies. BPM helps increase flexibility by providing managers with an early alert about changes and, as a result, allows faster response to such changes. The Malcolm Baldrige National Quality Award (MBNQA) framework provides a basis for self-assessment and a systems perspective for managing an organization's key processes for achieving business results. The MBNQA framework is a more comprehensive framework and encapsulates the underlying constructs in the BPM framework. The objectives of this dissertation are fourfold: (1) to validate the underlying relationships presented in the 2008 MBNQA framework, (2) to explore the MBNQA framework at the dimension level, and develop and test constructs measured at that level in a causal model, (3) to validate and create a common general framework for the business performance model by integrating the practitioner literature with basic theory including existing MBNQA theory, and (4) to integrate the BPM framework and the MBNQA framework into a new framework (BPM-MBNQA framework) that can guide organizations in their journey toward achieving and sustaining competitive and strategic advantages. The purpose of this study is to achieve these objectives by means of a combination of methodologies …
Date: August 2009
Creator: Hossain, Muhammad Muazzem
System: The UNT Digital Library
Reliable Prediction Intervals and Bayesian Estimation for Demand Rates of Slow-Moving Inventory (open access)

Application of multisource feedback (MSF) increased dramatically and became widespread globally in the past two decades, but there was little conceptual work regarding self-other agreement and few empirical studies investigated self-other agreement in other cultural settings. This study developed a new conceptual framework of self-other agreement and used three samples to illustrate how national culture affected self-other agreement. These three samples included 428 participants from China, 818 participants from the US, and 871 participants from globally dispersed teams (GDTs). An EQS procedure and a polynomial regression procedure were used to examine whether the covariance matrices were equal across samples and whether the relationships between self-other agreement and performance would be different across cultures, respectively. The results indicated MSF could be applied to China and GDTs, but the pattern of relationships between self-other agreement and performance was different across samples, suggesting that the results found in the U.S. sample were the exception rather than rule. Demographics also affected self-other agreement disparately across perspectives and cultures, indicating self-concept was susceptible to cultural influences. The proposed framework only received partial support but showed great promise to guide future studies. This study contributed to the literature by: (a) developing a new framework of self-other …
Date: August 2007
Creator: Lindsey, Matthew Douglas
System: The UNT Digital Library
Developing Criteria for Extracting Principal Components and Assessing Multiple Significance Tests in Knowledge Discovery Applications (open access)

With advances in computer technology, organizations are able to store large amounts of data in data warehouses. There are two fundamental issues researchers must address: the dimensionality of data and the interpretation of multiple statistical tests. The first issue addressed by this research is the determination of the number of components to retain in principal components analysis. This research establishes regression, asymptotic theory, and neural network approaches for estimating mean and 95th percentile eigenvalues for implementing Horn's parallel analysis procedure for retaining components. Certain methods perform better for specific combinations of sample size and numbers of variables. The adjusted normal order statistic estimator (ANOSE), an asymptotic procedure, performs the best overall. Future research is warranted on combining methods to increase accuracy. The second issue involves interpreting multiple statistical tests. This study uses simulation to show that Parker and Rothenberg's technique using a density function with a mixture of betas to model p-values is viable for p-values from central and non-central t distributions. The simulation study shows that final estimates obtained in the proposed mixture approach reliably estimate the true proportion of the distributions associated with the null and nonnull hypotheses. Modeling the density of p-values allows for better control of …
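Horn's parallel analysis, the retention rule the first half of the abstract builds estimators for, can be sketched directly with Monte Carlo: retain each component whose observed eigenvalue exceeds the 95th-percentile eigenvalue of random data of the same shape. The dissertation's contribution (regression, asymptotic, and neural network estimators such as ANOSE) replaces exactly this simulation step; the function name and example data below are illustrative.

```python
import numpy as np

def parallel_analysis(data, reps=500, percentile=95, seed=0):
    """Horn's parallel analysis: keep components whose correlation-matrix
    eigenvalues exceed the chosen percentile of eigenvalues from random data
    with the same number of rows and columns."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eig = np.empty((reps, p))
    for r in range(reps):
        noise = rng.standard_normal((n, p))
        rand_eig[r] = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
    threshold = np.percentile(rand_eig, percentile, axis=0)
    return int(np.sum(obs_eig > threshold))

# Two correlated blocks of three variables each: two components should survive.
rng = np.random.default_rng(42)
f1, f2 = rng.standard_normal((2, 300, 1))
data = np.hstack([f1 + 0.3 * rng.standard_normal((300, 3)),
                  f2 + 0.3 * rng.standard_normal((300, 3))])
print(parallel_analysis(data))
```

The simulation loop is the expensive part for large p, which motivates replacing the percentile eigenvalues with cheap regression or asymptotic estimates, as the research above does.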
Date: August 1999
Creator: Keeling, Kellie Bliss
System: The UNT Digital Library