The Generalization of the Logistic Discriminant Function Analysis and Mantel Score Test Procedures to Detection of Differential Testlet Functioning (open access)

Two procedures for detection of differential item functioning (DIF) for polytomous items were generalized to detection of differential testlet functioning (DTLF). The methods compared were the logistic discriminant function analysis procedure for uniform and non-uniform DTLF (LDFA-U and LDFA-N) and the Mantel score test procedure. Further analysis compared the results of DTLF analysis using the Mantel procedure with DIF analysis of individual testlet items using the Mantel-Haenszel (MH) procedure. Over 600 chi-squares were analyzed and compared for rejection of null hypotheses. Samples of 500, 1,000, and 2,000 were drawn by gender subgroups from the NELS:88 data set, which contains demographic and test data from over 25,000 eighth graders. Three types of testlets (totaling 29) from the NELS:88 test were analyzed for DTLF. The first type, the common passage testlet, followed the conventional testlet definition: items grouped together by a common reading passage, figure, or graph. The other two types were based upon common content and common process, as outlined in the NELS test specification.
Date: August 1994
Creator: Kinard, Mary E.
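For readers unfamiliar with the Mantel-Haenszel procedure referenced above, the sketch below shows one common formulation of the MH chi-square for a single dichotomous item, stratifying on a matching score. The function name and data layout are illustrative assumptions, not taken from the study; the Mantel score test for testlets follows the same logic with polytomous testlet scores in place of 0/1 item scores.

```python
import numpy as np

def mantel_haenszel_chi2(item, group, total):
    # All inputs are 1-D NumPy arrays. item: 0/1 scores;
    # group: 0 = reference, 1 = focal; total: matching criterion.
    A = E = V = 0.0
    for k in np.unique(total):
        s = total == k
        ref = s & (group == 0)
        n_r, n_f = ref.sum(), (s & (group == 1)).sum()
        T = n_r + n_f
        m1 = item[s].sum()              # correct answers in the stratum
        m0 = T - m1                     # incorrect answers in the stratum
        if T < 2 or m1 == 0 or m0 == 0:
            continue                    # stratum carries no information
        A += item[ref].sum()            # observed correct, reference group
        E += n_r * m1 / T               # expected count under no DIF
        V += n_r * n_f * m1 * m0 / (T**2 * (T - 1))
    return (abs(A - E) - 0.5) ** 2 / V  # continuity-corrected, 1 df
```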
Measurement Disturbance Effects on Rasch Fit Statistics and the Logit Residual Index (open access)

The effects of random guessing as a measurement disturbance on Rasch fit statistics (unweighted total, weighted total, and unweighted ability between) and the Logit Residual Index (LRI) were examined through simulated data sets of varying sample sizes, test lengths, and distribution types. Three test lengths (25, 50, and 100), three sample sizes (25, 50, and 100), two item difficulty distributions (normal and uniform), and three levels of guessing (no guessing (0%), 25%, and 50%) were used in the simulations, resulting in 54 experimental conditions. The mean logit person ability for each experiment was +1. Each experimental condition was simulated once in an effort to approximate what could happen on the single administration of a four option per item multiple choice test to a group of relatively high ability persons. Previous research has shown that varying item and person parameters have no effect on Rasch fit statistics. Consequently, these parameters were used in the present study to establish realistic test conditions, but were not interpreted as effect factors in determining the results of this study.
Date: August 1997
Creator: Mount, Robert E. (Robert Earl)
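As a minimal sketch of the kind of measurement disturbance simulated above, the snippet below generates Rasch item responses contaminated by random guessing on a four-option multiple-choice test. The specific mechanism (a fixed proportion of person-item encounters replaced by p = .25 guesses) is an assumption for illustration; the study's exact simulation design may differ.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def rasch_with_guessing(theta, b, guess_rate):
    # Rasch probability of success, with a guess_rate share of the
    # person-item encounters replaced by blind guessing (p = .25)
    p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
    p = np.where(rng.random(p.shape) < guess_rate, 0.25, p)
    return (rng.random(p.shape) < p).astype(int)

theta = rng.normal(1.0, 1.0, 100)   # mean person ability of +1 logit
b = rng.normal(0.0, 1.0, 25)        # 25 items, normal difficulties
responses = rasch_with_guessing(theta, b, guess_rate=0.25)
```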
Attenuation of the Squared Canonical Correlation Coefficient Under Varying Estimates of Score Reliability (open access)

Research pertaining to the distortion of the squared canonical correlation coefficient has traditionally been limited to the effects of sampling error and associated correction formulas. The purpose of this study was to compare the degree of attenuation of the squared canonical correlation coefficient under varying conditions of score reliability. Monte Carlo simulation methodology was used to fulfill the purpose of this study. Initially, data populations with various manipulated conditions were generated (N = 100,000). Subsequently, 500 random samples were drawn with replacement from each population, and the data were subjected to canonical correlation analyses. The canonical correlation results were then analyzed using descriptive statistics and an ANOVA design to determine under which condition(s) the squared canonical correlation coefficient was most attenuated when compared to population Rc2 values. This information was analyzed and used to determine what effect, if any, the different conditions considered in this study had on Rc2. The results from this Monte Carlo investigation clearly illustrated the importance of score reliability when interpreting study results. As evidenced by the outcomes presented, the more measurement error (lower reliability) present in the variables included in an analysis, the more attenuation experienced by the effect size(s) produced in the analysis, in this …
Date: August 2010
Creator: Wilson, Celia M.
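The attenuation effect reported above can be reproduced in a few lines. In the sketch below (the data-generating model, sample size, and reliability levels are illustrative assumptions), measurement error is added to standardized true scores so that each variable has a target reliability, and the first squared canonical correlation is computed; Rc2 drops as reliability falls.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def first_squared_canonical_corr(X, Y):
    # Largest eigenvalue of Rxx^-1 Rxy Ryy^-1 Ryx is the first Rc2
    p = X.shape[1]
    R = np.corrcoef(np.hstack([X, Y]), rowvar=False)
    Rxx, Ryy, Rxy = R[:p, :p], R[p:, p:], R[:p, p:]
    M = np.linalg.solve(Rxx, Rxy) @ np.linalg.solve(Ryy, Rxy.T)
    return np.max(np.linalg.eigvals(M).real)

# Two variable sets sharing one latent source of association
n = 100000
Z = rng.normal(size=(n, 1))
X0 = Z + rng.normal(size=(n, 2))
Y0 = Z + rng.normal(size=(n, 2))
X0 = (X0 - X0.mean(0)) / X0.std(0)   # standardize the true scores
Y0 = (Y0 - Y0.mean(0)) / Y0.std(0)

for rel in (1.0, 0.8, 0.6):          # target score reliability
    e = np.sqrt((1 - rel) / rel)     # error SD giving that reliability
    X = X0 + rng.normal(scale=e, size=X0.shape)
    Y = Y0 + rng.normal(scale=e, size=Y0.shape)
    print(rel, round(first_squared_canonical_corr(X, Y), 3))
```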
The Effectiveness of a Mediating Structure for Writing Analysis Level Test Items From Text Based Instruction (open access)

This study is concerned with the effect of placing text into a mediated structure form upon the generation of test items for analysis-level domain-referenced test construction. The item writing methodology used is the linguistic (operationally defined) item writing technology developed by Bormuth, Finn, Roid, Haladyna, and others. This item writing methodology is compared to (a) the intuitive method based on Bloom's definition of analysis-level test questions and (b) the intuitive method with keywords identified. A mediated structure was developed by coordinating or subordinating sentences in an essay by following five simple grammatical rules. Three test writers each composed a ten-item test using each of the three methodologies based on a common essay. Tests were administered to 102 Composition 1 community college students. Students were asked to read the essay and complete one test form. Test forms were randomly distributed by writer and method. Analysis of variance showed no significant differences among methods or among writers. Item analysis showed that no method of item writing resulted in items of consistent difficulty across test item writers. While the results of this study show no significant difference from the intuitive, traditional methods of item writing, analysis level test …
Date: August 1989
Creator: Brasel, Michael D. (Michael David)
The Analysis of the Accumulation of Type II Error in Multiple Comparisons for Specified Levels of Power to Violation of Normality with the Dunn-Bonferroni Procedure: a Monte Carlo Study (open access)

The study seeks to determine the degree of accumulation of Type II error rates, while violating the assumptions of normality, for different specified levels of power among sample means. The study employs a Monte Carlo simulation procedure with three different specified levels of power, methodologies, and population distributions. On the basis of the comparisons of actual and observed error rates, the following conclusions appear to be appropriate. 1. Under the strict criteria for evaluation of the hypotheses, Type II experimentwise error accumulates at such a rate that the probability of accepting at least one null hypothesis in a family of tests, when in theory all of the alternate hypotheses are true, is high, precluding valid tests at the beginning of the study. 2. The Dunn-Bonferroni procedure of setting the critical value based on the beta value per contrast did not significantly reduce the probability of committing a Type II error in a family of tests. 3. The use of an adequate sample size and orthogonal contrasts, or limiting the number of pairwise comparisons to the number of means, is the best method to control the accumulation of Type II errors. 4. The accumulation of Type II error is irrespective …
Date: August 1989
Creator: Powers-Prather, Bonnie Ann
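The accumulation described in conclusion 1 follows from elementary probability: with k independent contrasts, each carrying Type II risk beta, the chance of at least one Type II error is 1 - (1 - beta)^k. A tiny illustration (the numbers are examples, not the study's):

```python
def familywise_type2(beta_per_contrast, k):
    # P(at least one Type II error among k independent contrasts
    # when every alternative hypothesis is true)
    return 1 - (1 - beta_per_contrast) ** k

# Power .80 per contrast (beta = .20) across six contrasts:
print(familywise_type2(0.20, 6))        # ~.74 familywise Type II risk
# Dunn-Bonferroni-style splitting of beta across the contrasts:
print(familywise_type2(0.20 / 6, 6))    # ~.18, near the per-family target
```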
The Effects of the Ratio of Utilized Predictors to Original Predictors on the Shrinkage of Multiple Correlation Coefficients (open access)

This study dealt with the shrinkage of multiple correlation coefficients computed from sample data when these coefficients are compared to the multiple correlation coefficients for populations, and with the effect of the ratio of utilized predictors to original predictors on the shrinkage in R square. The study sought to provide a rationale for selecting a shrinkage formula when the correlations between the predictors and the criterion are known, and to determine which of the three shrinkage formulas (Browne, Darlington, or Wherry) yields the R square from sample data that is closest to the R square for the population data.
Date: August 1983
Creator: Petcharat, Prataung Parn
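Two of the three formulas compared above are easy to state. The sketch below gives Wherry's adjustment and Browne's cross-validity estimator as they are commonly presented; published versions of these formulas vary slightly across sources, and Darlington's variant is omitted, so treat this as an illustration rather than the study's exact computations.

```python
def wherry_adjusted_r2(r2, n, p):
    # Wherry's formula: estimates the population R-squared from a
    # sample R-squared with n cases and p predictors
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

def browne_r2(r2, n, p):
    # Browne's (1975) estimator of the population cross-validity,
    # computed from the Wherry-adjusted value
    rho2 = wherry_adjusted_r2(r2, n, p)
    return ((n - p - 3) * rho2**2 + rho2) / ((n - 2 * p - 2) * rho2 + p)

print(wherry_adjusted_r2(0.40, n=60, p=8))  # shrinkage grows with p/n
print(browne_r2(0.40, n=60, p=8))           # cross-validity shrinks further
```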
An Empirical Investigation of Marascuilo's Ú₀ Test with Unequal Sample Sizes and Small Samples (open access)

The study sought to determine the effect upon the Marascuilo Ú₀ statistic of violating the small sample assumption. The study employed a Monte Carlo simulation technique to vary sample size and the inequality of sample sizes within experiments to determine the effect of such conditions. Twenty-two simulations, with 1,200 trials each, were used. The following conclusion appeared to be appropriate: the Marascuilo Ú₀ statistic should not be used with small sample sizes, and it is recommended that the statistic be used only if sample sizes are larger than ten.
Date: August 1976
Creator: Milligan, Kenneth W.
Short-to-Medium Term Enrollment Projection Based on Cycle Regression Analysis (open access)

Short-to-medium term projections were made of student semester credit hour enrollments for North Texas State University and the Texas Public and Senior Colleges and Universities (as defined by the Coordinating Board, Texas College and University System). Undergraduate, Graduate, Doctorate, Total, Education, Liberal Arts, and Business enrollments were projected. Projections were made for the following time periods: Fall + Spring, Fall, Summer I + Summer II, and Summer I. A new regression analysis called "cycle regression," which employs nonlinear regression techniques to extract multifrequential phenomena from time-series data, was employed for the analysis of the enrollment data. The heuristic steps employed in cycle regression analysis are similar to those used in fitting polynomial models. A trend line and one or more sine waves (cycles) are simultaneously estimated using a partial F test. The process of adding cycle(s) to the model continues until no more significant terms can be estimated.
Date: August 1983
Creator: Chizari, Mohammad
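A minimal sketch of the trend-plus-cycle fitting and partial F test described above, using scipy's nonlinear least squares on an artificial enrollment-like series. The model form, starting values, and data are assumptions for illustration; the original cycle regression heuristic may differ in detail.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

rng = np.random.default_rng(seed=3)
t = np.arange(40, dtype=float)       # e.g., 40 semesters of credit hours
y = (5000 + 40 * t + 600 * np.sin(2 * np.pi * t / 8 + 1.0)
     + rng.normal(0, 150, t.size))

def trend(t, a, b):
    return a + b * t

def trend_cycle(t, a, b, amp, freq, phase):
    return a + b * t + amp * np.sin(2 * np.pi * freq * t + phase)

p1, _ = curve_fit(trend, t, y)
p2, _ = curve_fit(trend_cycle, t, y, p0=(5000, 40, 500, 1 / 8, 0))
rss1 = np.sum((y - trend(t, *p1)) ** 2)
rss2 = np.sum((y - trend_cycle(t, *p2)) ** 2)

# Partial F test: does the added cycle significantly reduce residual SS?
# If so, search the residuals for another cycle and repeat until no
# further term is significant.
df_num, df_den = 3, t.size - 5
F = ((rss1 - rss2) / df_num) / (rss2 / df_den)
print(F, f_dist.sf(F, df_num, df_den))
```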
Influence of Item Response Theory and Type of Judge on a Standard Set Using the Iterative Angoff Standard Setting Method (open access)

The purpose of this investigation was to determine the influence of item response theory and different types of judges on a standard. The iterative Angoff standard setting method was employed by all judges to determine a cut-off score for a public school district-wide criterion-referenced test. The analysis of variance of the effect of judge type and standard setting method on the central tendency of the standard revealed the existence of an ordinal interaction between judge type and method. Without any knowledge of p-values, one judge group set an unrealistic standard. A significant disordinal interaction was found concerning the effect of judge type and standard setting method on the variance of the standard. A positive covariance was detected between judges' minimum pass level estimates and empirical item information. With both p-values and b-values, judge groups had mean minimum pass levels that were positively correlated (ranging from .77 to .86), regardless of the type of information given to the judges. No differences in correlations were detected between different judge types or different methods. The generalizability coefficients and phi indices for 12 judges included in any method or judge type were acceptable (ranging from .77 to .99). The generalizability coefficient and phi index …
Date: August 1992
Creator: Hamberlin, Melanie Kidd
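For context, the core Angoff computation is simple: each judge estimates, for every item, the probability that a minimally competent examinee answers correctly (the minimum pass level, MPL), and the cut score is derived from the judges' summed estimates. The values below are hypothetical.

```python
import numpy as np

# Hypothetical MPLs after an iterative Angoff round: each judge's
# estimated probability that a minimally competent examinee answers
# each item correctly.
mpl = np.array([
    [0.6, 0.7, 0.5, 0.8],   # judge 1
    [0.5, 0.8, 0.6, 0.7],   # judge 2
    [0.7, 0.6, 0.6, 0.9],   # judge 3
])

judge_cut = mpl.sum(axis=1)     # each judge's implied raw cut score
cut_score = judge_cut.mean()    # panel standard: mean over judges
print(judge_cut, cut_score)     # spread across judges gauges consensus
```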
A Comparison of Three Item Selection Methods in Criterion-Referenced Tests (open access)

This study compared three methods of selecting the best discriminating test items and the resultant test reliability of mastery/nonmastery classifications. These three methods were (a) the agreement approach, (b) the phi coefficient approach, and (c) the random selection approach. Test responses from 1,836 students on a 50-item physical science test were used, from which 90 distinct data sets were generated for analysis. These 90 data sets contained 10 replications of the combination of three different sample sizes (75, 150, and 300) and three different numbers of test items (15, 25, and 35). The results of this study indicated that the agreement approach was an appropriate method to be used for selecting criterion-referenced test items at the classroom level, while the phi coefficient approach was an appropriate method to be used at the district and/or state levels. The random selection method did not have similar characteristics in selecting test items and produced the lowest reliabilities, when compared with the agreement and the phi coefficient approaches.
Date: August 1988
Creator: Lin, Hui-Fen
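The two non-random selection statistics compared above have compact formulations. The sketch below shows one common version of each: the agreement index (the proportion of examinees whose item outcome matches their mastery classification) and the phi coefficient for the 2x2 item-by-mastery table. Function names and the ranking step are illustrative assumptions.

```python
import numpy as np

def agreement_index(item, master):
    # item and master are 0/1 arrays; proportion of examinees whose
    # item outcome matches their mastery classification
    return np.mean(item == master)

def phi_coefficient(item, master):
    # Phi for the 2x2 item-by-mastery contingency table
    a = np.sum((item == 1) & (master == 1))
    b = np.sum((item == 1) & (master == 0))
    c = np.sum((item == 0) & (master == 1))
    d = np.sum((item == 0) & (master == 0))
    denom = np.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0

# Either statistic can rank items, e.g. keep the k largest:
# order = np.argsort([phi_coefficient(resp[:, j], master)
#                     for j in range(resp.shape[1])])[::-1][:k]
```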
Factors Influencing Difficult Special Education Referral Recommendations (open access)

The present study is concerned with selected factors that may strongly influence classroom teachers to refer young children for possible placement in special classes when the children are functioning near the borderline for placement on the basis of intelligence test scores. Particular attention was given to the contribution of student attributes (i.e., sex, ethnic background, socioeconomic status, and classroom behavior) and teacher attributes (i.e., age, sex, ethnic background and teaching experience) to the referral patterns of teachers. Also considered were the size of school enrollment, school locale, and interactions among student, teacher, and school variables. It was concluded that the teachers in the population studied responded to the case histories on the basis of certain selective biases. However, the relationship of these biases to referral decisions was less obvious and considerably more complex than has been suggested previously in the professional literature. At the same time, the presence of any bias in the referral process seemingly warrants careful consideration and points to the need for greater emphasis in pre-service and in-service training programs upon the objective evaluation of students as an integral part of educational planning.
Date: August 1975
Creator: Luckey, Robert E.
Principles for Formulating and Evaluating Instructional Claims (open access)

The problem with which this investigation is concerned is that of developing (a) the concept of instructional claim, and (b) credible principles for instructional claim formulation and evaluation. The belief that these constructions are capable of contributing to the advancement of curricular and instructional research and practice is grounded in three major features. The first feature is that of increased precision of basic concepts and increased coherence among them. The second feature is the deliberate connecting of instructional strategies and goal-states and the connecting of instructional configurations with curricular configurations. The third feature is the introduction of fundamental logical principles as evaluative criteria and the framing of instructional plans in such a way as to be subject to empirical tests under the principles of hypothesis testing that are considered credible in the empirical sciences.
Date: August 1978
Creator: McCray, Emajean
How Attitudes towards Statistics Courses and the Field of Statistics Predicts Statistics Anxiety among Undergraduate Social Science Majors: A Validation of the Statistical Anxiety Scale (open access)

The aim of this study was to validate an instrument that can be used by instructors or social scientists who are interested in evaluating statistics anxiety. The psychometric properties of the English version of the Statistical Anxiety Scale (SAS) were examined through a confirmatory factor analysis of scores from a sample of 323 undergraduate social science majors enrolled in colleges and universities in the United States. In previous studies, the psychometric properties of the Spanish and Italian versions of the SAS were validated; however, the English version of the SAS had never been assessed. Inconsistent with previous studies, scores on the English version of the SAS did not produce psychometrically acceptable values of validity. However, the results of this study suggested the potential value of a revised two-factor model SAS to measure statistics anxiety. Additionally, the Attitudes Towards Statistics (ATS) scale was used to examine the convergent and discriminant validities of the two-factor SAS. As expected, the correlation between the two factors of the SAS and the two factors of the ATS uncovered a moderately negative correlation between examination anxiety and attitudes towards the course. Additionally, the results of a structural regression model of attitudes towards statistics as a predictor …
Date: August 2017
Creator: Obryant, Monique J
An Empirical Comparison of Random Number Generators: Period, Structure, Correlation, Density, and Efficiency (open access)

Random number generators (RNGs) are widely used in conducting Monte Carlo simulation studies, which are important in the field of statistics for comparing power, mean differences, or distribution shapes between statistical approaches. Statistical results, however, may differ when different random number generators are used. Often older methods have been blindly used with no understanding of their limitations. Many random functions supplied with computers today have been found to be comparatively unsatisfactory. In this study, five multiplicative linear congruential generators (MLCGs) were chosen which are provided in the following statistical packages: RANDU (IBM), RNUN (IMSL), RANUNI (SAS), UNIFORM (SPSS), and RANDOM (BMDP). Using a personal computer (PC), an empirical investigation was performed using five criteria: period length before repeating random numbers, distribution shape, correlation between adjacent numbers, density of distributions, and normal approach of the random number generator (RNG) in a normal function. All RNG FORTRAN programs were rewritten in Pascal, which is a more efficient language for the PC. Sets of random numbers were generated using different starting values. A good RNG should have the following properties: a long enough period; a well-structured pattern in distribution; independence between random number sequences; random and uniform distribution; and a good normal approach in the normal …
Date: August 1995
Creator: Bang, Jung Woong
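One of the generators studied above, IBM's RANDU, is easy to reproduce and makes the point about structure vividly. RANDU is the MLCG x_{k+1} = 65539·x_k mod 2^31, and Marsaglia's well-known result is that every triplet of consecutive outputs falls on one of only 15 planes in the unit cube, because 9u_k - 6u_{k+1} + u_{k+2} is always an integer. The demonstration below is a sketch, not code from the study.

```python
def randu(seed, n):
    # RANDU: IBM's multiplicative linear congruential generator
    x, out = seed, []
    for _ in range(n):
        x = (65539 * x) % 2**31
        out.append(x / 2**31)          # scale to the unit interval
    return out

u = randu(seed=1, n=10000)             # seed must be odd
planes = {round(9*u[k] - 6*u[k+1] + u[k+2]) for k in range(len(u) - 2)}
print(sorted(planes))                  # at most 15 distinct integers
```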
A Comparison of Two Differential Item Functioning Detection Methods: Logistic Regression and an Analysis of Variance Approach Using Rasch Estimation (open access)

Differential item functioning (DIF) detection rates were examined for the logistic regression and analysis of variance (ANOVA) DIF detection methods. The methods were applied to simulated data sets of varying test length (20, 40, and 60 items) and sample size (200, 400, and 600 examinees) for both equal and unequal underlying ability between groups as well as for both fixed and varying item discrimination parameters. Each test contained 5% uniform DIF items, 5% non-uniform DIF items, and 5% combination DIF (simultaneous uniform and non-uniform DIF) items. The factors were completely crossed, and each experiment was replicated 100 times. For both methods and all DIF types, a test length of 20 was sufficient for satisfactory DIF detection. The detection rate increased significantly with sample size for each method. With the ANOVA DIF method and uniform DIF, there was a difference in detection rates between discrimination parameter types, which favored varying discrimination and decreased with increased sample size. The detection rate of non-uniform DIF using the ANOVA DIF method was higher with fixed discrimination parameters than with varying discrimination parameters when relative underlying ability was unequal. In the combination DIF case, there was a three-way interaction among the experimental factors discrimination type, …
Date: August 1995
Creator: Whitmore, Marjorie Lee Threet
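A minimal sketch of the logistic regression DIF tests compared above, in the spirit of Swaminathan and Rogers's formulation: the group coefficient tests uniform DIF and the score-by-group interaction tests non-uniform DIF. The function name and the use of statsmodels are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

def logistic_dif_pvalues(item, group, total):
    # item: 0/1 responses; group: 0/1 membership; total: matching score.
    # Columns: intercept, matching score, group, score-by-group.
    X = np.column_stack([np.ones(len(total)),
                         total, group, total * group])
    fit = sm.Logit(item, X).fit(disp=0)
    return fit.pvalues[2], fit.pvalues[3]  # (uniform, non-uniform DIF)
```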
A Comparison of Traditional Norming and Rasch Quick Norming Methods (open access)

The simplicity and ease of use of the Rasch procedure are decided advantages. The test user needs only two numbers: the frequency of persons who answered each item correctly and the Rasch-calibrated item difficulty, usually a part of an existing item bank. Norms can be computed quickly for any specific group of interest. In addition, once the selected items from the calibrated bank are normed, any test built from the item bank is automatically norm-referenced. Thus, it was concluded that the Rasch quick norm procedure is a meaningful alternative to traditional classical true score norming for test users who desire normative data.
Date: August 1993
Creator: Bush, Joan Spooner
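The quick norm idea above rests on the fact that, under the Rasch model, a raw score maps to a single ability once the item difficulties are calibrated. A sketch of that raw-score-to-measure conversion via Newton-Raphson follows; the item difficulties shown are hypothetical.

```python
import numpy as np

def rasch_ability_from_raw(raw, b, iters=20):
    # Solve sum_i P_i(theta) = raw for theta, given calibrated item
    # difficulties b (valid for raw scores 1 .. len(b) - 1)
    theta = np.log(raw / (len(b) - raw))    # logit of proportion correct
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(theta - b)))
        info = np.sum(p * (1 - p))          # test information at theta
        theta += (raw - p.sum()) / info     # Newton step
    return theta

b = np.array([-1.2, -0.5, 0.0, 0.4, 1.1])   # hypothetical calibrations
table = {r: rasch_ability_from_raw(r, b) for r in range(1, 5)}
# Norms follow by applying this raw-score-to-measure table to the
# norming group's observed raw-score frequency distribution.
```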
A Comparison of IRT and Rasch Procedures in a Mixed-Item Format Test (open access)

This study investigated the effects of test length (10, 20, and 30 items), scoring schema (proportion of dichotomous and polytomous scoring), and item analysis model (IRT and Rasch) on the ability estimates, test information levels, and optimization criteria of mixed item format tests. Polytomous item responses to 30 items for 1,000 examinees were simulated using the generalized partial-credit model and SAS software. Portions of the data were re-coded dichotomously over 11 structured proportions to create 33 sets of test responses, including mixed item format tests. MULTILOG software was used to calculate the examinee ability estimates, standard errors, item and test information, reliability, and fit indices. A comparison of IRT and Rasch item analysis procedures was made using SPSS software across ability estimates and standard errors of ability estimates using a 3 x 11 x 2 fixed factorial ANOVA. Effect sizes and power were reported for each procedure. Scheffé post hoc procedures were conducted on significant factors. Test information was analyzed and compared across the range of ability levels for all 66 design combinations. The results indicated that both test length and the proportion of items scored polytomously had a significant impact on the amount of test information produced by mixed item …
Date: August 2003
Creator: Kinsey, Tari L.
Bias and Precision of the Squared Canonical Correlation Coefficient under Nonnormal Data Conditions (open access)

This dissertation: (a) investigated the degree to which the squared canonical correlation coefficient is biased in multivariate nonnormal distributions and (b) identified formulae that adjust the squared canonical correlation coefficient (Rc2) such that it most closely approximates the true population effect under normal and nonnormal data conditions. Five conditions were manipulated in a fully-crossed design to determine the degree of bias associated with Rc2: distribution shape, variable sets, sample size to variable ratios, and within- and between-set correlations. Very few of the condition combinations produced acceptable amounts of bias in Rc2, but those that did were all found with first function results. The sample size to variable ratio (n:v) was determined to have the greatest impact on the bias associated with the Rc2 for the first, second, and third functions. The variable set condition also affected the accuracy of Rc2, but for the second and third functions only. The kurtosis levels of the marginal distributions (b2), and the between- and within-set correlations demonstrated little or no impact on the bias associated with Rc2. Therefore, it is recommended that researchers use n:v ratios of at least 10:1 in canonical analyses, although greater n:v ratios have the potential to produce even less bias. …
Date: August 2006
Creator: Leach, Lesley Ann Freeny
The Robustness of O'Brien's r Transformation to Non-Normality (open access)

A Monte Carlo simulation technique was employed in this study to determine if the r transformation, a test of homogeneity of variance, affords adequate protection against Type I error over a range of equal sample sizes and number of groups when samples are obtained from normal and non-normal distributions. Additionally, this study sought to determine if the r transformation is more robust than Bartlett's chi-square to deviations from normality. Four populations were generated representing normal, uniform, symmetric leptokurtic, and skewed leptokurtic distributions. For each sample size (6, 12, 24, 48), number of groups (3, 4, 5, 7), and population distribution condition, the r transformation and Bartlett's chi-square were calculated. This procedure was replicated 1,000 times; the actual significance level was determined and compared to the nominal significance level of .05. On the basis of the analysis of the generated data, the following conclusions are drawn. First, the r transformation is generally robust to violations of normality when the size of the samples tested is twelve or larger. Second, in the instances where a significant difference occurred between the actual and nominal significance levels, the r transformation produced (a) conservative Type I error rates if the kurtosis of the parent population …
Date: August 1985
Creator: Gordon, Carol J. (Carol Jean)
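For reference, O'Brien's transformation converts each observation so that the group means of the transformed values equal the group variances, letting an ordinary ANOVA test homogeneity of variance. The sketch below uses the common w = 0.5 version and estimates the empirical Type I error rate under a uniform (non-normal) parent with equal variances; the replication counts and group sizes are illustrative, and scipy.stats.bartlett gives the Bartlett comparison.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=4)

def obrien_r(groups, w=0.5):
    # Group means of r equal the group variances, so a one-way
    # ANOVA on r tests homogeneity of variance
    out = []
    for y in groups:
        n, m, s2 = len(y), y.mean(), y.var(ddof=1)
        r = (((w + n - 2) * n * (y - m) ** 2 - w * s2 * (n - 1))
             / ((n - 1) * (n - 2)))
        out.append(r)
    return out

reps, rej = 2000, 0
for _ in range(reps):
    groups = [rng.uniform(-1, 1, 12) for _ in range(4)]  # equal variances
    rej += stats.f_oneway(*obrien_r(groups)).pvalue < 0.05
print(rej / reps)  # empirical vs nominal .05; cf. stats.bartlett(*groups)
```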
Missing Data Treatments at the Second Level of Hierarchical Linear Models (open access)

The current study evaluated the performance of traditional versus modern missing data treatments (MDTs) in the estimation of fixed effects and variance components for data missing at the second level of a hierarchical linear model (HLM) across 24 different study conditions. Variables manipulated in the analysis included (a) number of Level-2 variables with missing data, (b) percentage of missing data, and (c) Level-2 sample size. Listwise deletion outperformed all other methods across all study conditions in the estimation of both fixed effects and variance components. The model-based procedures evaluated, EM and MI, outperformed the other traditional MDTs, mean and group mean substitution, in the estimation of the variance components, outperforming mean substitution in the estimation of the fixed effects as well. Group mean substitution performed well in the estimation of the fixed effects, but poorly in the estimation of the variance components. Data in the current study were modeled as missing completely at random (MCAR). Further research is suggested to compare the performance of model-based versus traditional MDTs, specifically listwise deletion, when data are missing at random (MAR), a condition that is more likely to occur in practical research settings.
Date: August 2011
Creator: St. Clair, Suzanne W.
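As a concrete anchor for the traditional treatments named above, the sketch below applies listwise deletion and mean substitution to a hypothetical Level-2 (group-level) file with values missing completely at random; the model-based treatments (EM, MI) would instead estimate or impute the missing values from the observed-data likelihood. All names and data are illustrative.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=5)

# Hypothetical Level-2 file: 50 schools, 20% MCAR missingness on one
# school-level predictor
schools = pd.DataFrame({"school": range(50),
                        "mean_ses": rng.normal(0, 1, 50),
                        "sector": rng.integers(0, 2, 50).astype(float)})
schools.loc[rng.choice(50, 10, replace=False), "sector"] = np.nan

listwise = schools.dropna()                    # listwise deletion
mean_sub = schools.fillna({"sector": schools["sector"].mean()})
```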
Parent Involvement and Science Achievement: A Latent Growth Curve Analysis (open access)

This study examined science achievement growth across elementary and middle school and parent school involvement using the Early Childhood Longitudinal Study – Kindergarten Class of 1998 – 1999 (ECLS-K). The ECLS-K is a nationally representative kindergarten cohort of students from public and private schools who attended full-day or half-day kindergarten class in 1998 – 1999. The present study’s sample (N = 8,070) was based on students that had a sampling weight available from the public-use data file. Students were assessed in science achievement at third, fifth, and eighth grades and parents of the students were surveyed at the same time points. Analyses using latent growth curve modeling with time invariant and varying covariates in an SEM framework revealed a positive relationship between science achievement and parent involvement at eighth grade. Furthermore, there were gender and racial/ethnic differences in parents’ school involvement as a predictor of science achievement. Findings indicated that students with lower initial science achievement scores had a faster rate of growth across time. The achievement gap between low and high achievers in earth, space and life sciences lessened from elementary to middle school. Parents’ involvement with school usually tapers off after elementary school, but due to parent school …
Date: August 2011
Creator: Johnson, Ursula Yvette
Stratified item selection and exposure control in unidimensional adaptive testing in the presence of two-dimensional data. (open access)

It is not uncommon to use unidimensional item response theory (IRT) models to estimate ability in multidimensional data. Therefore, it is important to understand the implications of summarizing multiple dimensions of ability into a single parameter estimate, especially if effects are confounded when applied to computerized adaptive testing (CAT). Previous studies have investigated the effects of different IRT models and ability estimators by manipulating the relationships between item and person parameters. However, in all cases, the maximum information criterion was used as the item selection method. Because maximum information is heavily influenced by the item discrimination parameter, investigating a-stratified item selection methods is tenable. The current Monte Carlo study compared the maximum information, a-stratification, and a-stratification with b blocking item selection methods, alone as well as in combination with the Sympson-Hetter exposure control strategy. The six testing conditions were crossed with three levels of interdimensional item difficulty correlations and four levels of interdimensional examinee ability correlations. Measures of fidelity, estimation bias, error, and item usage were used to evaluate the effectiveness of the methods. Results showed either stratified item selection strategy is warranted if the goal is to obtain precise estimates of ability when using unidimensional CAT in the presence of …
Date: August 2009
Creator: Kalinowski, Kevin E.
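A minimal sketch of a-stratified item selection as described above: the bank is sorted by discrimination, early strata expose only low-a items, and within the active stratum the unused item whose difficulty is nearest the current ability estimate is chosen. The function and parameter names are assumptions; b blocking and Sympson-Hetter control would be layered on top of this core step.

```python
import numpy as np

def a_stratified_select(bank_a, bank_b, used, theta, stratum_bounds):
    # bank_a, bank_b: item discrimination and difficulty arrays;
    # used: boolean mask of administered items; stratum_bounds: index
    # range (lo, hi) into the bank sorted by discrimination.
    lo, hi = stratum_bounds
    order = np.argsort(bank_a)             # items sorted by a-parameter
    stratum = order[lo:hi]                 # the currently active stratum
    candidates = [i for i in stratum if not used[i]]
    return min(candidates, key=lambda i: abs(bank_b[i] - theta))

# Early in the test, bounds such as (0, 100) of a 400-item bank expose
# low-a items, saving the most discriminating items for the end of the
# test when the ability estimate is most stable.
```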
An Investigation of the Effect of Violating the Assumption of Homogeneity of Regression Slopes in the Analysis of Covariance Model upon the F-Statistic (open access)

The study seeks to determine the effect upon the F-statistic of violating the assumption of homogeneity of regression slopes in the one-way, fixed-effects analysis of covariance model. The study employs a Monte Carlo simulation technique to vary the degree of heterogeneity of regression slopes with varied sample sizes within experiments to determine the effect of such conditions. One hundred and eighty-three simulations were used.
Date: August 1972
Creator: McClaran, Virgil Rutledge
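The simulation logic above can be sketched compactly: generate groups whose true regression slopes on the covariate differ (violating homogeneity), fit the usual homogeneous-slope ANCOVA, and tally how often the group effect is rejected when no true group effect exists. The slope values, sample sizes, and use of statsmodels below are illustrative assumptions, not the study's design.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=6)

def one_trial(slopes, n_per_group=20):
    # Groups with heterogeneous true slopes and no true group effect
    rows = []
    for g, slope in enumerate(slopes):
        x = rng.normal(0, 1, n_per_group)
        y = slope * x + rng.normal(0, 1, n_per_group)
        rows.append(pd.DataFrame({"g": g, "x": x, "y": y}))
    df = pd.concat(rows)
    fit = smf.ols("y ~ C(g) + x", data=df).fit()  # homogeneous-slope model
    return sm.stats.anova_lm(fit).loc["C(g)", "PR(>F)"]

pvals = [one_trial(slopes=(0.2, 0.6, 1.0)) for _ in range(500)]
print(np.mean(np.array(pvals) < 0.05))  # empirical alpha under violation
```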