58 Matching Results

How Attitudes towards Statistics Courses and the Field of Statistics Predicts Statistics Anxiety among Undergraduate Social Science Majors: A Validation of the Statistical Anxiety Scale (open access)

The aim of this study was to validate an instrument that can be used by instructors or social scientists who are interested in evaluating statistics anxiety. The psychometric properties of the English version of the Statistical Anxiety Scale (SAS) were examined through a confirmatory factor analysis of scores from a sample of 323 undergraduate social science majors enrolled in colleges and universities in the United States. In previous studies, the psychometric properties of the Spanish and Italian versions of the SAS were validated; however, the English version of the SAS had never been assessed. Inconsistent with previous studies, scores on the English version of the SAS did not produce psychometrically acceptable values of validity. However, the results of this study suggested the potential value of a revised two-factor SAS model to measure statistics anxiety. Additionally, the Attitudes Towards Statistics (ATS) scale was used to examine the convergent and discriminant validities of the two-factor SAS. As expected, the correlations between the two factors of the SAS and the two factors of the ATS revealed a moderately negative correlation between examination anxiety and attitudes towards the course. Additionally, the results of a structural regression model of attitudes towards statistics as a predictor …
Date: August 2017
Creator: Obryant, Monique J
System: The UNT Digital Library
Using Posterior Predictive Checking of Item Response Theory Models to Study Invariance Violations (open access)

The common practice for testing measurement invariance is to constrain parameters to be equal over groups, and then evaluate the model-data fit to reject or fail to reject the restrictive model. Posterior predictive checking (PPC) provides an alternative approach to evaluating model-data discrepancy. This paper explores the utility of PPC in estimating measurement invariance. The simulation results show that the posterior predictive p (PP p) values of item parameter estimates respond to various invariance violations, whereas the PP p values of item-fit index may fail to detect such violations. The current paper suggests comparing group estimates and restrictive model estimates with posterior predictive distributions in order to demonstrate the pattern of misfit graphically.
Date: May 2017
Creator: Xin, Xin
System: The UNT Digital Library
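The posterior predictive check this abstract describes can be sketched in a few lines. The following is a minimal illustration under assumed values (a one-parameter logistic model, a simple proportion-correct discrepancy, and simulated "posterior draws" of an item difficulty), not the author's implementation:

```python
import math
import random

random.seed(42)

def irf(theta, b):
    """1PL item response function: P(correct | theta, b)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical setup (illustrative values, not the study's data):
# abilities for 200 examinees and observed responses to one item.
thetas = [random.gauss(0.0, 1.0) for _ in range(200)]
y_obs = [random.random() < irf(t, 0.5) for t in thetas]

# Pretend these are posterior draws of the item's difficulty b.
b_draws = [random.gauss(0.5, 0.1) for _ in range(1000)]

def discrepancy(y):
    """Simple discrepancy measure: proportion correct."""
    return sum(y) / len(y)

d_obs = discrepancy(y_obs)

# PP p-value: share of replicated data sets whose discrepancy is at
# least as large as the observed one.
exceed = sum(
    discrepancy([random.random() < irf(t, b) for t in thetas]) >= d_obs
    for b in b_draws
)
pp_p = exceed / len(b_draws)
print(round(pp_p, 2))
```

A PP p-value near 0 or 1 flags systematic misfit; values in the middle of the range indicate the model reproduces the observed discrepancy well.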
Construct Validation and Measurement Invariance of the Athletic Coping Skills Inventory for Educational Settings (open access)

The present study examined the factor structure and measurement invariance of the revised version of the Athletic Coping Skills Inventory (ACSI-28), following adjustment of the wording of items such that they were appropriate for assessing coping skills in an educational setting. A sample of middle school students (n = 1,037) completed the revised inventory. An initial confirmatory factor analysis led to the hypothesis of a better fitting model with two items removed. Reliability of the subscales and the instrument as a whole was acceptable. Items were examined for sex invariance with differential item functioning (DIF) using item response theory, and five items were flagged for significant sex non-invariance. Following removal of these items, comparison of the mean differences between male and female coping scores revealed no significant difference between the two groups. Further examination of the generalizability of the coping construct and of the potential transfer of psychosocial skills between athletic and academic settings is warranted.
Date: May 2017
Creator: Sanguras, Laila Y., 1977-
System: The UNT Digital Library
An Empirical Investigation of Tukey's Honestly Significant Difference Test with Variance Heterogeneity and Unequal Sample Sizes, Utilizing Kramer's Procedure and the Harmonic Mean (open access)

This study sought to determine the effect upon Tukey's Honestly Significant Difference (HSD) statistic of concurrently violating the assumptions of homogeneity of variance and equal sample sizes. Two procedures for the unequal sample size problem were investigated: Kramer's form and the harmonic mean approach. The study employed a Monte Carlo simulation procedure that varied sample sizes under a heterogeneity of variance condition; four thousand experiments were generated, and the findings were based upon the empirically obtained significance levels. Five conclusions were reached. The first conclusion was that, for the conditions of this study, the Kramer form of the HSD statistic is not robust at the .05 or .01 nominal levels of significance. A second conclusion was that the harmonic mean form of the HSD statistic is likewise not robust at the .05 and .01 nominal levels of significance. A third, general conclusion drawn from all the findings was that the Kramer form of the HSD test is the preferred procedure under combined assumption violations of variance heterogeneity and unequal sample sizes. Two additional conclusions are based on related findings. The fourth conclusion was that for …
Date: May 1976
Creator: McKinney, William Lane
System: The UNT Digital Library
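The two unequal-n forms compared in this study differ only in how the standard error of a pairwise difference is computed. A minimal sketch, assuming illustrative values for the mean square error, group sizes, and a studentized-range critical value q taken from a table:

```python
import math

def kramer_cd(q, mse, n_i, n_j):
    """Tukey-Kramer critical difference for one pair (unequal n)."""
    return q * math.sqrt((mse / 2.0) * (1.0 / n_i + 1.0 / n_j))

def harmonic_mean_cd(q, mse, ns):
    """HSD critical difference using the harmonic mean of all group sizes."""
    n_h = len(ns) / sum(1.0 / n for n in ns)
    return q * math.sqrt(mse / n_h)

# Hypothetical example: 3 groups, MSE from the ANOVA, and a studentized
# range critical value q looked up from a table (value here is approximate).
q = 3.51          # roughly q(.05; k=3, df=27)
mse = 100.0
ns = [8, 10, 12]

print(round(kramer_cd(q, mse, ns[0], ns[2]), 2))
print(round(harmonic_mean_cd(q, mse, ns), 2))
```

The Kramer form yields a separate critical difference for each pair, while the harmonic-mean form applies one critical difference to every comparison.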
A Comparison of Two Criterion-Referenced Item-Selection Techniques Utilizing Simulated Data with Item Pools that Vary in Degrees of Item Difficulty (open access)

The problem of this study was to examine the equivalency of two different types of criterion-referenced item-selection techniques on simulated data as item pools varied in degrees of item difficulty. A pretest-posttest design was employed in which pass-fail scores were randomly generated for item pools of twenty-five items. From the item pools, the two techniques determined which items were to be used to make up twelve-item criterion-referenced tests. The twenty-five items also were rank ordered according to the discrimination power of the two techniques.
Date: May 1974
Creator: Davis, Robbie G.
System: The UNT Digital Library
Factors Influencing Difficult Special Education Referral Recommendations (open access)

The present study is concerned with selected factors that may strongly influence classroom teachers to refer young children for possible placement in special classes when the children are functioning near the borderline for placement on the basis of intelligence test scores. Particular attention was given to the contribution of student attributes (i.e., sex, ethnic background, socioeconomic status, and classroom behavior) and teacher attributes (i.e., age, sex, ethnic background, and teaching experience) to the referral patterns of teachers. Also considered were the size of school enrollment, school locale, and interactions among student, teacher, and school variables. It was concluded that the teachers in the population studied responded to the case histories on the basis of certain selective biases. However, the relationship of these biases to referral decisions was less obvious and considerably more complex than has been suggested previously in the professional literature. At the same time, the presence of any bias in the referral process seemingly warrants careful consideration and points to the need for greater emphasis in pre-service and in-service training programs upon the objective evaluation of students as an integral part of educational planning.
Date: August 1975
Creator: Luckey, Robert E.
System: The UNT Digital Library
Principles for Formulating and Evaluating Instructional Claims (open access)

The problem with which this investigation is concerned is that of developing (a) the concept of instructional claim, and (b) credible principles for instructional claim formulation and evaluation. The belief that these constructions are capable of contributing to the advancement of curricular and instructional research and practice is grounded in three major features. The first feature is that of increased precision of basic concepts and increased coherence among them. The second feature is the deliberate connecting of instructional strategies and goal-states and the connecting of instructional configurations with curricular configurations. The third feature is the introduction of fundamental logical principles as evaluative criteria and the framing of instructional plans in such a way as to be subject to empirical tests under the principles of hypothesis testing that are considered credible in the empirical sciences.
Date: August 1978
Creator: McCray, Emajean
System: The UNT Digital Library
Influence of Item Response Theory and Type of Judge on a Standard Set Using the Iterative Angoff Standard Setting Method (open access)

The purpose of this investigation was to determine the influence of item response theory and different types of judges on a standard. The iterative Angoff standard setting method was employed by all judges to determine a cut-off score for a public school district-wide criterion-referenced test. The analysis of variance of the effect of judge type and standard setting method on the central tendency of the standard revealed the existence of an ordinal interaction between judge type and method. Without any knowledge of p-values, one judge group set an unrealistic standard. A significant disordinal interaction was found concerning the effect of judge type and standard setting method on the variance of the standard. A positive covariance was detected between judges' minimum pass level estimates and empirical item information. With both p-values and b-values, judge groups had mean minimum pass levels that were positively correlated (ranging from .77 to .86), regardless of the type of information given to the judges. No differences in correlations were detected between different judge types or different methods. The generalizability coefficients and phi indices for 12 judges included in any method or judge type were acceptable (ranging from .77 to .99). The generalizability coefficient and phi index …
Date: August 1992
Creator: Hamberlin, Melanie Kidd
System: The UNT Digital Library
A Comparison of Three Correlational Procedures for Factor-Analyzing Dichotomously-Scored Item Response Data (open access)

In this study, an improved correlational procedure for factor-analyzing dichotomously-scored item response data is described and tested. The procedure involves (a) replacing the dichotomous input values with continuous probability values obtained through Rasch analysis; (b) calculating interitem product-moment correlations among the probabilities; and (c) subjecting the correlations to unweighted least-squares factor analysis. Two simulated data sets and an empirical data set (Kentucky Comprehensive Listening Test responses) were used to compare the new procedure with two more traditional techniques, using (a) phi and (b) tetrachoric correlations calculated directly from the dichotomous item-response values. The three methods were compared on three criterion measures: (a) maximum internal correlation; (b) product of the two largest factor loadings; and (c) proportion of variance accounted for. The Rasch-based procedure is recommended for subjecting dichotomous item response data to latent-variable analysis.
Date: May 1991
Creator: Fluke, Ricky
System: The UNT Digital Library
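Steps (a) and (b) of the procedure can be sketched as follows. This is an illustration with simulated abilities and assumed item difficulties, not the study's data; step (c), the unweighted least-squares factor analysis of the resulting correlation matrix, is left to a statistics package:

```python
import math
import random

random.seed(1)

def rasch_p(theta, b):
    """Step (a): replace a 0/1 response with the Rasch model probability."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def pearson_r(x, y):
    """Step (b): ordinary product-moment correlation between two items."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Hypothetical abilities for 100 examinees and difficulties for 3 items.
thetas = [random.gauss(0, 1) for _ in range(100)]
difficulties = [-0.5, 0.0, 0.8]

# Continuous probability matrix replacing the dichotomous responses.
probs = [[rasch_p(t, b) for b in difficulties] for t in thetas]

# Interitem correlation matrix; step (c) would feed this matrix to an
# unweighted least-squares factor analysis.
cols = list(zip(*probs))
r_matrix = [[pearson_r(ci, cj) for cj in cols] for ci in cols]
print([round(v, 3) for v in r_matrix[0]])
```

Because every probability is a monotone function of the same ability estimate, these interitem correlations are well behaved, which is the property the procedure exploits in place of phi or tetrachoric coefficients.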
A Monte Carlo Study of the Robustness and Power of Analysis of Covariance Using Rank Transformation to Violation of Normality with Restricted Score Ranges for Selected Group Sizes (open access)

The study seeks to determine the robustness and power of parametric analysis of covariance and analysis of covariance using rank transformation to violation of the assumption of normality. The study employs a Monte Carlo simulation procedure with varying conditions of population distribution, group size, equality of group size, scale length, regression slope, and Y-intercept. The procedure was performed on raw data and ranked data with untied ranks and tied ranks.
Date: December 1984
Creator: Wongla, Ruangdet
System: The UNT Digital Library
The Effectiveness of a Mediating Structure for Writing Analysis Level Test Items From Text Based Instruction (open access)

This study is concerned with the effect of placing text into a mediated structure form upon the generation of test items for analysis level domain referenced test construction. The item writing methodology used is the linguistic (operationally defined) item writing technology developed by Bormuth, Finn, Roid, Haladyna and others. This item writing methodology is compared to (1) the intuitive method based on Bloom's definition of analysis level test questions and (2) the intuitive method with keywords identified. A mediated structure was developed by coordinating or subordinating sentences in an essay by following five simple grammatical rules. Three test writers each composed a ten-item test using each of the three methodologies based on a common essay. Tests were administered to 102 Composition 1 community college students. Students were asked to read the essay and complete one test form. Test forms by writer and method were randomly delivered. Analysis of variance showed no significant differences among methods or writers. Item analysis showed that no method of item writing resulted in items of consistent difficulty across test item writers. While the results of this study show no significant difference from the intuitive, traditional methods of item writing, analysis level test …
Date: August 1989
Creator: Brasel, Michael D. (Michael David)
System: The UNT Digital Library
A Monte Carlo Analysis of Experimentwise and Comparisonwise Type I Error Rate of Six Specified Multiple Comparison Procedures When Applied to Small k's and Equal and Unequal Sample Sizes (open access)

The problem of this study was to determine the differences in experimentwise and comparisonwise Type I error rate among six multiple comparison procedures when applied to twenty-eight combinations of normally distributed data. These were the Least Significant Difference, the Fisher-protected Least Significant Difference, the Student Newman-Keuls Test, the Duncan Multiple Range Test, the Tukey Honestly Significant Difference, and the Scheffe Significant Difference. The Spjøtvoll-Stoline and Tukey-Kramer HSD modifications were used for unequal n conditions. A Monte Carlo simulation was used for twenty-eight combinations of k and n. The scores were normally distributed (µ=100; σ=10). Specified multiple comparison procedures were applied under two conditions: (a) all experiments and (b) experiments in which the F-ratio was significant (0.05). Error counts were maintained over 1000 repetitions. The FLSD held experimentwise Type I error rate to nominal alpha for the complete null hypothesis. The FLSD was more sensitive to sample mean differences than the HSD while protecting against experimentwise error. The unprotected LSD was the only procedure to yield comparisonwise Type I error rate at nominal alpha. The SNK and MRT error rates fell between the FLSD and HSD rates. The SSD error rate was the most conservative. Use of the harmonic mean of …
Date: December 1985
Creator: Yount, William R.
System: The UNT Digital Library
The Analysis of the Accumulation of Type II Error in Multiple Comparisons for Specified Levels of Power to Violation of Normality with the Dunn-Bonferroni Procedure: a Monte Carlo Study (open access)

The study seeks to determine the degree of accumulation of Type II error rates, while violating the assumptions of normality, for different specified levels of power among sample means. The study employs a Monte Carlo simulation procedure with three different specified levels of power, methodologies, and population distributions. On the basis of the comparisons of actual and observed error rates, the following conclusions appear to be appropriate.

1. Under the strict criteria for evaluation of the hypotheses, Type II experimentwise error accumulates to the point that the probability of accepting at least one null hypothesis in a family of tests, when in theory all of the alternate hypotheses are true, is high, precluding valid tests at the beginning of the study.

2. The Dunn-Bonferroni procedure of setting the critical value based on the beta value per contrast did not significantly reduce the probability of committing a Type II error in a family of tests.

3. The use of an adequate sample size and orthogonal contrasts, or limiting the number of pairwise comparisons to the number of means, is the best method to control for the accumulation of Type II errors.

4. The accumulation of Type II error is irrespective …
Date: August 1989
Creator: Powers-Prather, Bonnie Ann
System: The UNT Digital Library
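The accumulation described in conclusion 1 follows from elementary probability: across m contrasts each run at a given power, the chance of at least one Type II error grows quickly. A sketch with illustrative numbers, assuming independent contrasts for simplicity:

```python
def bonferroni_alpha(alpha_family, m):
    """Per-contrast significance level under the Dunn-Bonferroni split."""
    return alpha_family / m

def familywise_type2(power_per_test, m):
    """P(at least one Type II error) across m contrasts, each run at the
    stated per-test power, assuming the contrasts are independent."""
    return 1.0 - power_per_test ** m

# With 10 contrasts each at .80 power, the chance of missing at least
# one true effect is already high, which is the accumulation the study
# describes.
print(round(familywise_type2(0.80, 10), 2))
print(bonferroni_alpha(0.05, 10))
```

Splitting alpha across contrasts (the Dunn-Bonferroni step) lowers each test's power further, which is why the study finds it does little to reduce familywise Type II error.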
The Characteristics and Properties of the Threshold and Squared-Error Criterion-Referenced Agreement Indices (open access)

Educators who use criterion-referenced measurement to ascertain the current level of performance of an examinee in order that the examinee may be classified as either a master or a nonmaster need to know the accuracy and consistency of their decisions regarding assignment of mastery states. This study examined the sampling distribution characteristics of two reliability indices that use the squared-error agreement function: Livingston's k^2(X,Tx) and Brennan and Kane's M(C). The sampling distribution characteristics of five indices that use the threshold agreement function were also examined: Subkoviak's Pc, Huynh's p and k, and Swaminathan's p and k. These seven methods of calculating reliability were also compared under varying conditions of sample size, test length, and criterion or cutoff score. Computer-generated data provided randomly parallel test forms for N = 2000 cases. From this, 1000 samples were drawn, with replacement, and each of the seven reliability indices was calculated. Descriptive statistics were collected for each sample set and examined for distribution characteristics. In addition, the mean value for each index was compared to the population parameter value of consistent mastery/nonmastery classifications. The results indicated that the sampling distribution characteristics of all seven reliability indices approach normal characteristics with increased sample size. The …
Date: May 1988
Creator: Dutschke, Cynthia F. (Cynthia Fleming)
System: The UNT Digital Library
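The threshold agreement function underlying several of these indices is simple to state: p_o is the proportion of examinees classified the same way (master or nonmaster) on two parallel forms, and a kappa-type index corrects p_o for chance agreement. A sketch with hypothetical scores, not the study's simulated data:

```python
def threshold_agreement(form1, form2, cutoff):
    """p_o: proportion of examinees given the same mastery classification
    on two parallel forms (the threshold agreement function)."""
    same = sum((a >= cutoff) == (b >= cutoff) for a, b in zip(form1, form2))
    return same / len(form1)

def kappa(form1, form2, cutoff):
    """Chance-corrected agreement: k = (p_o - p_c) / (1 - p_c), where p_c
    is the agreement expected from the marginal mastery rates alone."""
    n = len(form1)
    p_o = threshold_agreement(form1, form2, cutoff)
    m1 = sum(a >= cutoff for a in form1) / n
    m2 = sum(b >= cutoff for b in form2) / n
    p_c = m1 * m2 + (1 - m1) * (1 - m2)
    return (p_o - p_c) / (1 - p_c)

# Hypothetical scores for 8 examinees on two randomly parallel forms.
form1 = [12, 15, 8, 20, 11, 17, 9, 14]
form2 = [13, 14, 9, 19, 10, 18, 12, 13]
print(threshold_agreement(form1, form2, cutoff=12))
print(round(kappa(form1, form2, cutoff=12), 3))
```

The squared-error indices the study also examines (Livingston's and Brennan and Kane's) instead weight each examinee by squared distance from the cutoff rather than by a 0/1 classification match.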
A Comparison of Three Methods of Detecting Test Item Bias (open access)

This study compared three methods of detecting test item bias, the chi-square approach, the transformed item difficulties approach, and the Linn-Harnish three-parameter item response approach which is the only Item Response Theory (IRT) method that can be utilized with minority samples relatively small in size. The items on two tests which measured writing and reading skills were examined for evidence of sex and ethnic bias. Eight sets of samples, four from each test, were randomly selected from the population (N=7287) of sixth, seventh, and eighth grade students enrolled in a large, urban school district in the southwestern United States. Each set of samples, male/female, White/Hispanic, White/Black, and White/White, contained 800 examinees in the majority group and 200 in the minority group. In an attempt to control differences in ability that may have existed between the various population groups, examinees with scores greater or less than two standard deviations from their group's mean were eliminated. Ethnic samples contained equal numbers of each sex. The White/White sets of samples were utilized to provide baseline bias estimates because the tests could not logically be biased against these groups. Bias indices were then calculated for each set of samples with each of the three …
Date: May 1985
Creator: Monaco, Linda Gokey
System: The UNT Digital Library
The Effects of the Ratio of Utilized Predictors to Original Predictors on the Shrinkage of Multiple Correlation Coefficients (open access)

This study dealt with the shrinkage observed when multiple correlation coefficients computed from sample data are compared to the multiple correlation coefficients for populations, and with the effect of the ratio of utilized predictors to original predictors on the shrinkage in R square. The study sought to provide the rationale for selection of the shrinkage formula when the correlations between the predictors and the criterion are known, and to determine which of the three shrinkage formulas (Browne, Darlington, or Wherry) will yield the R square from sample data that is closest to the R square for the population data.
Date: August 1983
Creator: Petcharat, Prataung Parn
System: The UNT Digital Library
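Of the three formulas compared, the one commonly attributed to Wherry has a simple closed form. A sketch with illustrative values; this is the widely cited version of the formula, and the exact variant the study used may differ in its denominator:

```python
def wherry_adjusted_r2(r2, n, k):
    """Shrinkage estimate commonly attributed to Wherry:
    adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1)."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

# Shrinkage grows as the number of predictors k approaches the sample
# size n, which is the ratio effect the study investigates.
print(round(wherry_adjusted_r2(0.50, 50, 5), 3))
print(round(wherry_adjusted_r2(0.50, 50, 20), 3))
```

With R^2 = .50 and n = 50, moving from 5 to 20 predictors drops the shrunken estimate sharply, illustrating why the utilized-to-original predictor ratio matters.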
Comparison of Methods for Computation and Cumulation of Effect Sizes in Meta-Analysis (open access)

This study examined the statistical consequences of employing various methods of computing and cumulating effect sizes in meta-analysis. Six methods of computing effect size, and three techniques for combining study outcomes, were compared. Effect size metrics were calculated with one-group and pooled standardizing denominators, corrected for bias and for unreliability of measurement, and weighted by sample size and by sample variance. Cumulating techniques employed as units of analysis the effect size, the study, and an average study effect. In order to determine whether outcomes might vary with the size of the meta-analysis, mean effect sizes were also compared for two smaller subsets of studies. An existing meta-analysis of 60 studies examining the effectiveness of computer-based instruction was used as a data base for this investigation. Recomputation of the original study data under the six different effect size formulas showed no significant difference among the metrics. Maintaining the independence of the data by using only one effect size per study, whether a single or averaged effect, produced a higher mean effect size than averaging all effect sizes together, although the difference did not reach statistical significance. The sampling distribution of effect size means approached that of the population of 60 studies …
Date: December 1987
Creator: Ronco, Sharron L. (Sharron Lee)
System: The UNT Digital Library
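The computation choices the study compares (pooled versus one-group standardizing denominators, plus the small-sample bias correction) can be sketched with hypothetical summary statistics:

```python
import math

def cohen_d_pooled(m1, m2, s1, s2, n1, n2):
    """Standardized mean difference with a pooled SD denominator."""
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

def glass_delta(m1, m2, s_control):
    """One-group denominator: standardize by the control group's SD."""
    return (m1 - m2) / s_control

def hedges_g(d, n1, n2):
    """Small-sample bias correction: g = d * (1 - 3 / (4*df - 1))."""
    df = n1 + n2 - 2
    return d * (1.0 - 3.0 / (4.0 * df - 1.0))

# Hypothetical treatment/control summary statistics.
d = cohen_d_pooled(105.0, 100.0, 10.0, 12.0, 30, 30)
print(round(d, 3))
print(round(glass_delta(105.0, 100.0, 12.0), 3))
print(round(hedges_g(d, 30, 30), 3))
```

The cumulation choices the study also examines (effect size, study, or average study effect as the unit of analysis) then determine how many such values enter the grand mean.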
A State-Wide Survey on the Utilization of Instructional Technology by Public School Districts in Texas (open access)

Effective utilization of instructional technology can provide a valuable method for the delivery of a school program, and enable a teacher to individualize according to student needs. Implementation of such a program is costly and requires careful planning and adequate staff development for school personnel. This study examined the degree of commitment by Texas school districts to the use of the latest technologies in their efforts to revolutionize education. Quantitative data were collected by using a survey that included five informational areas: (1) school district background, (2) funding for budget, (3) staff, (4) technology hardware, and (5) staff development. The study included 137 school districts representing the 5 University Interscholastic League (UIL) classifications (A through AAAAA). The survey was mailed to the school superintendents requesting that the persons most familiar with instructional technology be responsible for completing the questionnaires. Analysis of data examined the relationship between UIL classification and the amount of money expended on instructional technology. Correlation coefficients were determined between teachers receiving training in the use of technology and total personnel assigned to technology positions. Coefficients were calculated between a district providing a plan for technology and employment of a coordinator for instructional technology. Significance was established at the …
Date: May 1990
Creator: Hiett, Elmer D. (Elmer Donald)
System: The UNT Digital Library
An Empirical Investigation of Marascuilo's Ú₀ Test with Unequal Sample Sizes and Small Samples (open access)

The study seeks to determine the effect upon the Marascuilo Ú₀ statistic of violating the small sample assumption. The study employed a Monte Carlo simulation technique to vary the degree of sample size and unequal sample sizes within experiments to determine the effect of such conditions. Twenty-two simulations, with 1200 trials each, were used. The following conclusion appeared to be appropriate: the Marascuilo Ú₀ statistic should not be used with small sample sizes, and it is recommended that the statistic be used only if sample sizes are larger than ten.
Date: August 1976
Creator: Milligan, Kenneth W.
System: The UNT Digital Library
Short-to-Medium Term Enrollment Projection Based on Cycle Regression Analysis (open access)

Short-to-medium term projections were made of student semester credit hour enrollments for North Texas State University and the Texas Public and Senior Colleges and Universities (as defined by the Coordinating Board, Texas College and University System). Undergraduate, Graduate, Doctorate, Total, Education, Liberal Arts, and Business enrollments were projected. Fall + Spring, Fall, Summer I + Summer II, and Summer I were the time periods for which projections were made. A new regression analysis called "cycle regression," which employs nonlinear regression techniques to extract multifrequential phenomena from time-series data, was employed for the analysis of the enrollment data. The heuristic steps employed in cycle regression analysis are similar to those used in fitting polynomial models. A trend line and one or more sine waves (cycles) are simultaneously estimated using a partial F test. The process of adding cycle(s) to the model continues until no more significant terms can be estimated.
Date: August 1983
Creator: Chizari, Mohammad
System: The UNT Digital Library
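With the cycle frequency held fixed, the trend-plus-cycle model is linear in its parameters and can be fit by ordinary least squares; the study's procedure additionally estimates frequencies nonlinearly and tests each added cycle with a partial F test, which this sketch omits. The series below is illustrative, not the enrollment data:

```python
import math

def lstsq(X, y):
    """Tiny normal-equations solver (fine for a handful of parameters)."""
    k = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(k)]
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    return beta

# Hypothetical enrollment series: linear trend plus one annual cycle.
ts = list(range(24))
w = 2 * math.pi / 12          # fixed cycle length of 12 periods
y = [1000 + 5 * t + 50 * math.sin(w * t) for t in ts]

# Design matrix: intercept, trend, and a sine/cosine pair for one cycle.
X = [[1.0, t, math.sin(w * t), math.cos(w * t)] for t in ts]
beta = lstsq(X, y)
print([round(v, 4) for v in beta])
```

Least squares recovers the intercept, trend slope, and cycle amplitude from the noiseless series; in practice a new sine/cosine pair would be added and retained only if its partial F test is significant.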
A comparison of the Effects of Different Sizes of Ceiling Rules on the Estimates of Reliability of a Mathematics Achievement Test (open access)

This study compared the estimates of reliability made using one, two, three, four, five, and unlimited consecutive failures as ceiling rules in scoring a mathematics achievement test which is part of the Iowa Tests of Basic Skills (ITBS), Form 8. There were 700 students randomly selected from a population (N=2640) of students enrolled in the eight grades in a large urban school district in the southwestern United States. These 700 students were randomly divided into seven subgroups so that each subgroup had 100 students. The responses of all those students to three subtests of the mathematics achievement battery, which included mathematical concepts (44 items), problem solving (32 items), and computation (45 items), were analyzed to obtain the item difficulties and a total score for each student. The items in each subtest then were rearranged based on the item difficulties from the highest to the lowest value. In each subgroup, the methods using one, two, three, four, five, and unlimited consecutive failures as ceiling rules were applied to score the individual responses. The total score for each individual was the sum of the correct responses prior to the point described by the ceiling rule. The correct responses after the ceiling …
Date: May 1987
Creator: Somboon Suriyawongse
System: The UNT Digital Library
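The ceiling-rule scoring the abstract describes can be sketched directly: with items ordered from easiest to hardest, scoring stops at the first run of N consecutive failures, and the score is the number of correct responses up to that point. A minimal illustration with a made-up response string:

```python
def ceiling_score(responses, max_consecutive_failures):
    """Sum correct responses up to the point where the ceiling rule stops
    scoring: the first run of N consecutive failures, with items ordered
    from easiest to hardest as in the study."""
    score = 0
    run = 0
    for correct in responses:
        if correct:
            score += 1
            run = 0
        else:
            run += 1
            if run == max_consecutive_failures:
                break
    return score

# 1 = correct, 0 = incorrect, items ordered by difficulty.
responses = [1, 1, 0, 1, 0, 0, 1, 1, 0]
print(ceiling_score(responses, 2))          # stops at the first 0,0 run
print(ceiling_score(responses, 10**9))      # effectively unlimited
```

Stricter rules (small N) truncate more responses, which is what drives the differences in reliability estimates the study compares.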
A Comparison of Some Continuity Corrections for the Chi-Squared Test in 3 x 3, 3 x 4, and 3 x 5 Tables (open access)

This study was designed to determine whether chi-squared based tests for independence give reliable estimates (as compared to the exact values provided by Fisher's exact probabilities test) of the probability of a relationship between the variables in 3 x 3, 3 x 4, and 3 x 5 contingency tables when the sample size is 10, 20, or 30. In addition to the classical (uncorrected) chi-squared test, four methods for continuity correction were compared to Fisher's exact probabilities test. The four methods were Yates' correction, two corrections attributed to Cochran, and Mantel's correction. The study was modeled after a similar comparison conducted on 2 x 2 contingency tables and published by Michael Haber.
Date: May 1987
Creator: Mullen, Jerry D. (Jerry Davis)
System: The UNT Digital Library
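Yates' correction, the simplest of the four corrections compared, reduces each |O - E| by 0.5 before squaring. A sketch for an arbitrary two-way table with hypothetical counts; the other corrections the study compares (Cochran's and Mantel's) differ in how the adjustment is computed:

```python
def expected_counts(table):
    """Expected cell counts for a two-way table under independence."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    return [[r * c / n for c in col_totals] for r in row_totals]

def chi2(table, yates=False):
    """Pearson chi-squared statistic; with yates=True, each |O - E| is
    reduced by 0.5 (floored at zero) before squaring."""
    exp = expected_counts(table)
    total = 0.0
    for obs_row, exp_row in zip(table, exp):
        for o, e in zip(obs_row, exp_row):
            diff = abs(o - e)
            if yates:
                diff = max(diff - 0.5, 0.0)
            total += diff * diff / e
    return total

# Hypothetical sparse 3 x 3 table with n = 20, the kind of small-sample
# setting the study examines.
table = [[4, 1, 2],
         [2, 3, 1],
         [1, 2, 4]]
print(round(chi2(table), 3))
print(round(chi2(table, yates=True), 3))
```

The corrected statistic is never larger than the uncorrected one, so the correction trades Type I error control for power, which is the tension the study evaluates against Fisher's exact test.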
Willingness of Educators to Participate in a Descriptive Research Study as a Function of a Monetary Incentive (open access)

The problem considered involved assessing willingness of educators to participate in a study offering monetary incentives. Determination of willingness was implemented by sending educators a packet requesting return of a postcard to indicate willingness to participate. The purpose was twofold: to determine the effect of a monetary incentive upon willingness of educators to participate in a research study, and to analyze implications for mail questionnaire studies. A sample of 600 educators was chosen from directories of eleven public schools in north Texas. It included equal numbers of male and female teachers and male and female administrators. Subjects were assigned to one of twelve groups. No two from a school were assigned to different levels of the inducement variable.
Date: May 1984
Creator: Pittman, Doyle
System: The UNT Digital Library
Effect of Rater Training and Scale Type on Leniency and Halo Error in Student Ratings of Faculty (open access)

The purpose of this study was to determine if leniency and halo error in student ratings could be reduced by training the student raters and by using a Behaviorally Anchored Rating Scale (BARS) rather than a Likert scale. Two hypotheses were proposed. First, the ratings collected from the trained raters would contain less halo and leniency error than those collected from the untrained raters. Second, within the group of trained raters the BARS would contain less halo and leniency error than the Likert instrument.
Date: May 1987
Creator: Cook, Stuart S. (Stuart Sheldon)
System: The UNT Digital Library