Supply Chain Network Planning for Humanitarian Operations During Seasonal Disasters (open access)

To prevent loss of lives during seasonal disasters, relief agencies distribute critical supplies and provide lifesaving services to the affected populations. Despite agencies' efforts, frequently occurring disasters increase the cost of relief operations. The purpose of our study is to minimize the cost of relief operations, considering that such disasters cause random demand. To achieve this, we have formulated a series of models that are distinct from current studies in three ways. First, to the best of our knowledge, we are the first to capture both perishable and durable products together. Second, we aggregate multiple products differently than current studies do; this unique aggregation requires less data than other types of aggregation. Finally, our models are compatible with the practical data generated by FEMA. Our models offer insights into the impacts of various parameters on optimum cost and order size. Analyses of demand correlation and information quality offer interesting insights; for instance, in certain cases, the quality of information does not influence cost. Our study considers both risk-averse and risk-neutral approaches and provides insights. The insights obtained from our models are expected to help agencies reduce …
Date: May 2013
Creator: Ponnaiyan, Subramaniam
System: The UNT Digital Library
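
The abstract above does not give the model formulations. As a purely hypothetical illustration of how random demand drives an optimal order size for a single relief product, the sketch below solves the classic newsvendor trade-off; the cost and demand parameters are invented, not taken from the study.

# Hypothetical newsvendor sketch: choose an order quantity q that
# minimizes expected overstock plus shortage cost under random demand.
from scipy.stats import norm

c_over = 2.0              # assumed cost per unsold (overstocked) unit
c_under = 10.0            # assumed cost per unit of unmet demand
mu, sigma = 500.0, 100.0  # assumed demand distribution N(mu, sigma^2)

# Critical-fractile solution: F(q*) = c_under / (c_under + c_over)
fractile = c_under / (c_under + c_over)
q_star = norm.ppf(fractile, loc=mu, scale=sigma)
print(f"critical fractile = {fractile:.3f}, optimal order q* = {q_star:.1f}")
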
The Evaluation and Control of the Changes in Basic Statistics Encountered in Grouped Data (open access)

This dissertation describes the effect that the construction of frequency tables has on basic statistics computed from those frequency tables. It is directly applicable only to normally distributed data summarized by Sturges' Rule. The purpose of this research was to identify factors tending to bias sample statistics when data are summarized, and thus to allow researchers to avoid such bias. The methodology employed was a large-scale simulation in which 1000 replications of samples of size n = 2ᵏ⁻¹ for k = 2 to 12 were drawn from a normally distributed population with a mean of zero and a standard deviation of one. A FORTRAN IV source listing is included. The report concludes that researchers should avoid the use of statistics computed from frequency tables in cases where raw data are available. Where the use of such statistics is unavoidable, researchers can eliminate their bias by using the empirical correction factors provided in the paper. Further research is suggested to determine the effect of summarizing data drawn from various non-normal distributions.
Date: May 1979
Creator: Scott, James P.
System: The UNT Digital Library
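
The simulation design above is concrete enough to re-create in miniature. The following sketch (in Python rather than the original FORTRAN IV) draws normal samples, groups each one into a Sturges'-Rule frequency table, and measures how the grouped standard deviation deviates from the raw-data value; replication details are simplified.

# Grouping-bias simulation: statistics from a Sturges'-Rule frequency
# table vs. the same statistics computed from the raw data.
import numpy as np

rng = np.random.default_rng(0)
reps = 1000
for k in range(2, 13):                # sample sizes n = 2^(k-1), k = 2..12
    n = 2 ** (k - 1)
    bias = []
    for _ in range(reps):
        x = rng.normal(0.0, 1.0, size=n)
        n_bins = int(np.ceil(1 + np.log2(n)))    # Sturges' Rule
        counts, edges = np.histogram(x, bins=n_bins)
        mids = 0.5 * (edges[:-1] + edges[1:])    # class midpoints
        mean_g = np.average(mids, weights=counts)
        var_g = np.average((mids - mean_g) ** 2, weights=counts)
        bias.append(np.sqrt(var_g) - x.std())
    print(f"n = {n:5d}: mean bias of grouped s.d. = {np.mean(bias):+.4f}")
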
The Impact of Water Pollution Abatement Costs on Financing of Municipal Services in North Central Texas (open access)

The purpose of this study is to determine the effects of water pollution control on financing municipal water pollution control facilities in selected cities in North Central Texas. This objective is accomplished by addressing the following topics: (1) the cost to municipalities of meeting federally mandated water pollution control requirements, (2) the sources of funds for financing sewage treatment, and (3) the financial implications of employing these financing tools to satisfy water quality regulations. The study makes the following conclusions regarding the impact of water pollution control costs on municipalities in the North Central Texas Region: (1) The financing of the wastewater treatment requirements of the Water Pollution Control Act Amendments of 1972 will cause many municipalities to report operating deficits for their Water and Sewer Fund. (2) A federal grant program funded at the rate of 75 per cent of waste treatment needs will prevent operating deficits in the majority of cities in which 1990 waste treatment needs constitute 20 per cent or more of the expected Water and Sewer Fund capital structure. (3) A federal grant program funded at the average rate of 35 per cent of needs will benefit only a small number of cities. (4) The federal …
Date: May 1976
Creator: Rucks, Andrew C.
System: The UNT Digital Library
A Quantitative Approach to Medical Decision Making (open access)

The purpose of this study is to develop a technique by which a physician may use a predetermined data base to derive a preliminary diagnosis for a patient with a given set of symptoms. The technique will not yield an absolute diagnosis, but rather will point the way to a set of most likely diseases upon which the physician may concentrate his efforts. There will be no reliance upon a data base compiled from poorly kept medical records with non-standardized terminology. While this study produces a workable tool for the physician to use in the process of medical diagnosis, the ultimate responsibility for the patient's welfare must still rest with the physician.
Date: May 1975
Creator: Meredith, John W.
System: The UNT Digital Library
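
The abstract does not name the quantitative technique. One standard way to "point the way to a set of most likely diseases" is Bayes' rule under a conditional-independence assumption, sketched below with entirely invented diseases, symptoms, and probabilities.

# Hypothetical Bayes-rule sketch: rank diseases by posterior probability
# given observed symptoms, assuming symptoms independent given disease.
priors = {"disease_A": 0.05, "disease_B": 0.02, "disease_C": 0.93}
likelihood = {                       # P(symptom | disease), all invented
    "disease_A": {"fever": 0.90, "rash": 0.70},
    "disease_B": {"fever": 0.80, "rash": 0.10},
    "disease_C": {"fever": 0.05, "rash": 0.02},
}

def rank_diseases(symptoms):
    scores = {}
    for disease, prior in priors.items():
        p = prior
        for s in symptoms:
            p *= likelihood[disease].get(s, 0.01)  # default for unlisted symptoms
        scores[disease] = p
    total = sum(scores.values())
    # Normalize to posteriors and sort most likely first
    return sorted(((d, p / total) for d, p in scores.items()),
                  key=lambda pair: pair[1], reverse=True)

print(rank_diseases(["fever", "rash"]))
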
Comparing the Powers of Several Proposed Tests for Testing the Equality of the Means of Two Populations When Some Data Are Missing (open access)

In comparing the means of two normally distributed populations with unknown variance, two tests very often used are the two-independent-sample t test and the paired-sample t test. There is a possible gain in the power of the significance test from using the paired-sample design instead of the two-independent-sample design.
Date: May 1994
Creator: Dunu, Emeka Samuel
System: The UNT Digital Library
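
The power gain mentioned above is easy to demonstrate by simulation when paired observations are positively correlated. The sketch below uses assumed parameters (n = 20, correlation 0.7, mean shift 0.5) and compares the empirical power of the two tests.

# Empirical power of paired vs. two-independent-sample t tests on
# correlated bivariate normal data (all parameters assumed).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps, shift, rho = 20, 5000, 0.5, 0.7
cov = [[1.0, rho], [rho, 1.0]]
hits_paired = hits_indep = 0
for _ in range(reps):
    xy = rng.multivariate_normal([0.0, shift], cov, size=n)
    x, y = xy[:, 0], xy[:, 1]
    hits_paired += stats.ttest_rel(x, y).pvalue < 0.05
    hits_indep += stats.ttest_ind(x, y).pvalue < 0.05
print(f"power: paired = {hits_paired/reps:.3f}, independent = {hits_indep/reps:.3f}")
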
The Effect of Certain Modifications to Mathematical Programming Models for the Two-Group Classification Problem (open access)

This research examines certain modifications to mathematical programming models intended to improve their classificatory performance. These modifications involve the inclusion of second-order terms and secondary goals in the models. A Monte Carlo simulation study is conducted to investigate the performance of two standard parametric models and various mathematical programming models, including the MSD (minimize sum of deviations) model, the MIP (mixed integer programming) model, and the hybrid linear programming model.
Date: May 1994
Creator: Wanarat, Pradit
System: The UNT Digital Library
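
Of the models named above, the MSD formulation is the most compact to illustrate: choose discriminant weights w and a cutoff c so that group-1 scores fall on one side of the hyperplane x·w = c and group-2 scores on the other, minimizing the total deviations of boundary-violating points. The data, the unit separation gap, and the solver below are illustrative assumptions, not the study's design.

# MSD (minimize sum of deviations) linear program for two-group
# classification, solved with scipy's linprog on synthetic data.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
g1 = rng.normal([0.0, 0.0], 1.0, size=(25, 2))   # assumed group-1 sample
g2 = rng.normal([2.0, 2.0], 1.0, size=(25, 2))   # assumed group-2 sample
X = np.vstack([g1, g2])
n, p = X.shape
sign = np.r_[np.ones(len(g1)), -np.ones(len(g2))]  # +1 group 1, -1 group 2

# Variables [w_1..w_p, c, d_1..d_n]; minimize sum of deviations d >= 0.
# Group 1: x.w <= c - 1 + d_i;  group 2: x.w >= c + 1 - d_i.
A = np.zeros((n, p + 1 + n))
A[:, :p] = sign[:, None] * X
A[:, p] = -sign
A[np.arange(n), p + 1 + np.arange(n)] = -1.0
b = -np.ones(n)
cost = np.r_[np.zeros(p + 1), np.ones(n)]
bounds = [(None, None)] * (p + 1) + [(0, None)] * n
res = linprog(cost, A_ub=A, b_ub=b, bounds=bounds)
w, c = res.x[:p], res.x[p]
print("weights:", w, "cutoff:", c, "total deviation:", res.fun)
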
The Fixed v. Variable Sampling Interval Shewhart X-Bar Control Chart in the Presence of Positively Autocorrelated Data (open access)

This study uses simulation to examine differences between fixed sampling interval (FSI) and variable sampling interval (VSI) Shewhart X-bar control charts for processes that produce positively autocorrelated data. The influence of sample size (1 and 5), autocorrelation parameter, shift in process mean, and length of time between samples is investigated by comparing the average time to signal (ATS) and average number of samples to signal (ANSS) for FSI and VSI Shewhart X-bar charts. These comparisons are conducted in two ways: with control chart limits pre-set at ±3σ_x / √n and with limits computed from the sampling process. Proper interpretation of the Shewhart X-bar chart requires the assumption that observations are statistically independent; however, process data are often autocorrelated over time. Results of this study indicate that increasing the time between samples decreases the effect of positive autocorrelation between samples. Thus, with sufficient time between samples, the assumption of independence is essentially not violated. Samples of size 5 produce a faster signal than samples of size 1 with both the FSI and VSI Shewhart X-bar charts when positive autocorrelation is present. However, samples of size 5 require the same time when the data are independent, indicating that this effect is a …
Date: May 1993
Creator: Harvey, Martha M. (Martha Mattern)
System: The UNT Digital Library
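
The fixed-sampling-interval half of the comparison above can be sketched directly: generate AR(1) process data, sample n = 5 consecutive observations after each fixed idle interval, and record the average time to signal (ATS) after a sustained mean shift. The parameter values are assumed for illustration.

# ATS of a fixed-sampling-interval Shewhart X-bar chart when the
# underlying process is AR(1) with positive autocorrelation phi.
import numpy as np

rng = np.random.default_rng(3)
phi, n, idle = 0.5, 5, 10                 # AR(1) coefficient, sample size, interval
sigma_x = 1.0 / np.sqrt(1 - phi ** 2)     # stationary s.d. of the AR(1) process
limit = 3 * sigma_x / np.sqrt(n)          # pre-set +/-3 sigma_xbar limits
shift = 1.0 * sigma_x                     # assumed sustained shift in process mean

def time_to_signal():
    x, t = 0.0, 0
    while True:
        obs = []
        for _ in range(idle + n):         # idle periods, then n sampled points
            x = phi * x + rng.normal()
            obs.append(x + shift)
        t += idle + n
        if abs(np.mean(obs[-n:])) > limit:  # X-bar outside control limits
            return t

ats = np.mean([time_to_signal() for _ in range(2000)])
print(f"ATS with phi={phi}, n={n}, interval={idle}: {ats:.1f} time units")
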
The Development and Evaluation of a Forecasting System that Incorporates ARIMA Modeling with Autoregression and Exponential Smoothing (open access)

This research was designed to develop and evaluate an automated alternative to the Box-Jenkins method of forecasting. The study involved two major phases. The first phase was the formulation of an automated ARIMA method; the second was the combination of forecasts from the automated ARIMA with forecasts from two other automated methods, the Holt-Winters method and the Stepwise Autoregressive method. The development of the automated ARIMA, based on a decision criterion suggested by Akaike, borrows heavily from the work of Ang, Chua and Fatema. Seasonality and small-data-set handling were among the modifications made to the original method to make it suitable for use with a broad range of time series. Forecasts were combined by means of both the simple average and a weighted averaging scheme. Empirical and generated data were employed to perform the forecasting evaluation. The 111 sets of empirical data came from the M-Competition. The twenty-one sets of generated data arose from ARIMA models that Box, Tiao and Pack analyzed using the Box-Jenkins method. To compare the forecasting abilities of the Box-Jenkins and the automated ARIMA alone and in combination with the other two methods, two accuracy measures were used. These measures, which are free …
Date: May 1985
Creator: Simmons, Laurette Poulos
System: The UNT Digital Library
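
The combination step described above is simple to illustrate. The sketch below fits Holt-Winters exponential smoothing and an autoregression (standing in for the stepwise autoregressive method) to a synthetic seasonal series, then averages their forecasts; the study's automated ARIMA and its weighted averaging scheme are not reproduced here.

# Simple-average forecast combination of Holt-Winters and an AR model,
# in the spirit of the combination scheme described in the abstract.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(4)
t = np.arange(120)
y = 10 + 0.1 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 120)
train, test = y[:108], y[108:]

hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                          seasonal_periods=12).fit()
f_hw = hw.forecast(12)
ar = AutoReg(train, lags=13, trend="ct").fit()
f_ar = ar.predict(start=len(train), end=len(train) + 11)

f_combo = (f_hw + f_ar) / 2.0             # simple-average combination
for name, f in [("Holt-Winters", f_hw), ("AR", f_ar), ("combined", f_combo)]:
    print(f"{name:12s} MAE = {np.mean(np.abs(f - test)):.3f}")
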