Use of Forward Sensitivity Analysis Method to Improve Code Scaling, Applicability, and Uncertainty (CSAU) Methodology (open access)

Use of Forward Sensitivity Analysis Method to Improve Code Scaling, Applicability, and Uncertainty (CSAU) Methodology

The Code Scaling, Applicability, and Uncertainty (CSAU) methodology was developed in the late 1980s by the US NRC to systematically quantify reactor simulation uncertainty. Based on the CSAU methodology, Best Estimate Plus Uncertainty (BEPU) methods have been developed and widely used for new reactor designs and for power uprates of existing LWRs. In spite of these successes, several aspects of CSAU have been criticized as needing further improvement: (1) subjective judgement in the PIRT process; (2) high cost, due to heavy reliance on a large experimental database, many expert man-years of work, and very high computational overhead; (3) mixing of numerical errors with other uncertainties; (4) grid dependence and use of the same numerical grids for both scaled experiments and real plant applications; and (5) user effects. Although a large amount of effort has been devoted to improving the CSAU methodology, the above issues still exist. With the effort to develop next-generation safety analysis codes, new opportunities appear to take advantage of new numerical methods, better physical models, and modern uncertainty quantification methods. Forward sensitivity analysis (FSA) directly solves the PDEs for parameter sensitivities (defined as the derivative of the physical solution with respect to any constant parameter). When the parameter sensitivities are available in a new advanced system analysis code, CSAU could be significantly improved: …
Date: October 1, 2010
Creator: Zhao, Haihua; Mousseau, Vincent A. & Dinh, Nam T.
Object Type: Article
System: The UNT Digital Library
Use of the ARM Measurement of Spectral Zenith Radiance For Better Understanding Of 3D Cloud-Radiation Processes and Aerosol-Cloud Interaction (open access)

Use of the ARM Measurement of Spectral Zenith Radiance For Better Understanding Of 3D Cloud-Radiation Processes and Aerosol-Cloud Interaction

Our proposal focuses on cloud-radiation processes in a general 3D cloud situation, with particular emphasis on cloud optical depth and effective particle size. We also focus on zenith radiance measurements, both active and passive. The proposal has three main parts. Part One exploits the “solar-background” mode of ARM lidars to allow them to retrieve cloud optical depth not just for thin clouds but for all clouds. This also enables the study of aerosol-cloud interactions with a single instrument. Part Two exploits the large number of new wavelengths offered by ARM’s zenith-pointing ShortWave Spectrometer (SWS), especially during CLASIC, to develop better retrievals not only of cloud optical depth but also of cloud particle size. We also propose to take advantage of the SWS’ 1 Hz sampling to study the “twilight zone” around clouds where strong aerosol-cloud interactions take place. Part Three involves continuing our cloud optical depth and cloud fraction retrieval research with ARM’s 2NFOV instrument by, first, analyzing its data from the AMF-COPS/CLOWD deployment, and second, making our algorithms part of ARM’s operational data processing.
Date: October 19, 2010
Creator: Chiu, D. Jui-Yuan
Object Type: Report
System: The UNT Digital Library
Used fuel disposition research and development roadmap - FY10 status. (open access)

Used fuel disposition research and development roadmap - FY10 status.

Since 1987 the U.S. has focused research and development activities relevant to the disposal of commercial used nuclear fuel and U.S. Department of Energy (DOE) owned spent nuclear fuel and high-level waste on the proposed repository at Yucca Mountain, Nevada. At the same time, the U.S. successfully deployed a deep geologic disposal facility for defense-related transuranic waste in bedded salt at the Waste Isolation Pilot Plant. In 2009 the DOE established the Used Fuel Disposition Campaign (UFDC) within the Office of Nuclear Energy. The mission of the UFDC is to identify alternatives and conduct scientific research and technology development to enable storage, transportation, and disposal of used nuclear fuel and wastes generated by existing and future nuclear fuel cycles. The U.S. national laboratories have participated in these programs and have conducted research and development related to these issues to a limited extent. However, a comprehensive research and development (R&D) program investigating a variety of geologic media has not been a part of the U.S. waste management program since the mid-1980s. Such a comprehensive R&D program is being developed in the UFDC with a goal of meeting the UFDC Grand Challenge to provide a sound technical basis for absolute …
Date: October 1, 2010
Creator: Nutt, W. M. (Nuclear Engineering Division)
Object Type: Report
System: The UNT Digital Library
Validation of two ribosomal RNA removal methods for microbial metatranscriptomics (open access)

Validation of two ribosomal RNA removal methods for microbial metatranscriptomics

The predominance of rRNAs in the transcriptome is a major technical challenge in sequence-based analysis of cDNAs from microbial isolates and communities. Several approaches have been applied to deplete rRNAs from (meta)transcriptomes, but no systematic investigation of potential biases introduced by any of these approaches has been reported. Here we validated the effectiveness and fidelity of the two most commonly used approaches, subtractive hybridization and exonuclease digestion, as well as combinations of these treatments, on two synthetic five-microorganism metatranscriptomes using massively parallel sequencing. We found that the effectiveness of rRNA removal was a function of community composition and RNA integrity for these treatments. Subtractive hybridization alone introduced the least bias in relative transcript abundance, whereas exonuclease and in particular combined treatments greatly compromised mRNA abundance fidelity. Illumina sequencing itself also can compromise quantitative data analysis by introducing a G+C bias between runs.
Date: October 1, 2010
Creator: He, Shaomei; Wurtzel, Omri; Singh, Kanwar; Froula, Jeff L; Yilmaz, Suzan; Tringe, Susannah G et al.
Object Type: Article
System: The UNT Digital Library
Video Scanning Hartmann Optical Tester (VSHOT) Uncertainty Analysis: Preprint (open access)

Video Scanning Hartmann Optical Tester (VSHOT) Uncertainty Analysis: Preprint

This purely analytical work is based primarily on the geometric optics of the system and shows sensitivities to various design and operational parameters. We discuss sources of error with measuring devices, instrument calibrations, and operator measurements for a parabolic trough test. In this paper, we include both the random (precision) and systematic (bias) errors for VSHOT testing and their contributions to the uncertainty. The contributing factors that we considered in this study are target tilt, target face to laser output distance, instrument vertical offset, scanner tilt, distance between the tool and the test piece, camera calibration, and scanner calibration.
Date: October 1, 2010
Creator: Lewandowski, A. & Gray, A.
Object Type: Article
System: The UNT Digital Library
Visual Scanning Hartmann Optical Tester (VSHOT) Uncertainty Analysis (Milestone Report) (open access)

Visual Scanning Hartmann Optical Tester (VSHOT) Uncertainty Analysis (Milestone Report)

In 1997, an uncertainty analysis was conducted of the Video Scanning Hartmann Optical Tester (VSHOT). In 2010, we have completed a new analysis, based primarily on the geometric optics of the system, and it shows sensitivities to various design and operational parameters. We discuss sources of error with measuring devices, instrument calibrations, and operator measurements for a parabolic trough mirror panel test. These help to guide the operator in proper setup, and help end-users to understand the data they are provided. We include both the systematic (bias) and random (precision) errors for VSHOT testing and their contributions to the uncertainty. The contributing factors we considered in this study are: target tilt; target face to laser output distance; instrument vertical offset; laser output angle; distance between the tool and the test piece; camera calibration; and laser scanner. These contributing factors were applied to the calculated slope error, focal length, and test article tilt that are generated by the VSHOT data processing. Results show the estimated 2-sigma uncertainty in slope error for a parabolic trough line scan test to be +/-0.2 milliradians; uncertainty in the focal length is +/- 0.1 mm, and the uncertainty in test article tilt is +/- 0.04 milliradians.
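When the contributing error terms listed above are independent, their 1-sigma values are commonly combined by root-sum-of-squares and expanded to a 2-sigma uncertainty; a minimal sketch with hypothetical, illustrative numbers (not the actual VSHOT error budget):

```python
import math

def combined_uncertainty(sigmas, k=2):
    """Expanded k-sigma uncertainty from independent 1-sigma contributions,
    combined by root-sum-of-squares (RSS)."""
    return k * math.sqrt(sum(s * s for s in sigmas))

# hypothetical slope-error contributions in milliradians (illustrative only)
u_slope = combined_uncertainty([0.05, 0.03, 0.08])
```

This is only a sketch of the standard RSS combination; the report itself propagates each factor through the VSHOT data processing to slope error, focal length, and tilt.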
Date: October 1, 2010
Creator: Gray, A.; Lewandowski, A. & Wendelin, T.
Object Type: Report
System: The UNT Digital Library

Wind Energy Aerodynamics - Rotor, Wake, and Wind Plant

This presentation addresses the current state of wind energy technology in the United States.
Date: October 12, 2010
Creator: Schreck, S.
Object Type: Presentation
System: The UNT Digital Library
Wind Power Plant Short Circuit Current Contribution for Different Fault and Wind Turbine Topologies: Preprint (open access)

Wind Power Plant Short Circuit Current Contribution for Different Fault and Wind Turbine Topologies: Preprint

This paper presents simulation results for the short-circuit (SC) current contribution from different types of wind turbine generators (WTGs), obtained with transient and steady-state computer simulation software.
Date: October 1, 2010
Creator: Gevorgian, V. & Muljadi, E.
Object Type: Article
System: The UNT Digital Library

Working With the Federal Fleets

Presentation about federal fleet data, working with the federal government, and results from a survey of Clean Cities coordinators about their experiences with regulated fleets.
Date: October 25, 2010
Creator: Daley, R.
Object Type: Presentation
System: The UNT Digital Library
YALINA-booster Subcritical Assembly Pulsed-Neutron Experiments: Detector Dead Time and Spatial Corrections. (open access)

YALINA-booster Subcritical Assembly Pulsed-Neutron Experiments: Detector Dead Time and Spatial Corrections.

In almost every detector counting system, a minimal dead time is required to record two successive events as two separate pulses. Due to the random nature of neutron interactions in the subcritical assembly, there is always some probability that a true neutron event will not be recorded because it occurs too close to the preceding event. These losses may become rather severe for counting systems with high counting rates, and should be corrected before any utilization of the experimental data. This report examines the dead time effects for the pulsed neutron experiments of the YALINA-Booster subcritical assembly. The nonparalyzable model is utilized to correct the experimental data for dead time losses. Overall, the reactivity values are increased by 0.19$ and 0.32$ after the spatial corrections for the YALINA-Booster 36% and 21% configurations, respectively. The differences between the reactivities obtained with He-3 long or short detectors at the same detector channel diminish after the dead time corrections of the experimental data for the 36% YALINA-Booster configuration. In addition, better agreement between reactivities obtained from different experimental data sets is also observed after the dead time corrections for the 21% YALINA-Booster configuration.
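A minimal sketch of the nonparalyzable dead-time model mentioned in the abstract (the numbers are illustrative, not from the YALINA-Booster measurements): the true count rate n is recovered from the measured rate m and dead time tau via n = m / (1 - m*tau):

```python
def deadtime_correct(measured_rate, tau):
    """Nonparalyzable dead-time correction: n = m / (1 - m * tau).
    measured_rate: observed count rate (counts/s); tau: dead time (s)."""
    loss_fraction = measured_rate * tau   # fraction of real time the detector is dead
    if loss_fraction >= 1.0:
        raise ValueError("measured rate is inconsistent with the nonparalyzable model")
    return measured_rate / (1.0 - loss_fraction)

# illustrative numbers: 1e5 counts/s measured with a 1-microsecond dead time
true_rate = deadtime_correct(1.0e5, 1.0e-6)   # correction grows with count rate
```

The correction is small at low rates but, as the abstract notes, becomes severe at high counting rates, where m*tau approaches 1.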
Date: October 11, 2010
Creator: Cao, Y. & Gohar, Y. (Nuclear Engineering Division)
Object Type: Report
System: The UNT Digital Library
YALINA-booster Subcritical Assembly Pulsed-Neutron Experiments : Data Processing and Spatial Corrections. (open access)

YALINA-booster Subcritical Assembly Pulsed-Neutron Experiments : Data Processing and Spatial Corrections.

The YALINA-Booster experiments and analyses are part of the collaboration between Argonne National Laboratory of the USA and the Joint Institute for Power & Nuclear Research - SOSNY of Belarus for studying the physics of accelerator driven systems for nuclear energy applications using low enriched uranium. The YALINA-Booster subcritical assembly is utilized for studying the kinetics of accelerator driven systems with its high-intensity D-T or D-D pulsed neutron source. In particular, the pulsed neutron methods are used to determine the reactivity of the subcritical system. This report examines the pulsed-neutron experiments performed in the YALINA-Booster facility with different configurations for the subcritical assembly. The 1141 configuration with 90% U-235 fuel and the 1185 configuration with 36% or 21% U-235 fuel are examined. The Sjoestrand area-ratio method is utilized to determine the reactivities of the different configurations. The linear regression method is applied to obtain the prompt neutron decay constants from the pulsed-neutron experimental data. The reactivity values obtained from the experimental data are shown to be dependent on the detector locations inside the subcritical assembly and the types of detector used for the measurements. In this report, Bell's spatial correction factors are calculated based on a Monte Carlo model to …
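A hedged sketch of the Sjoestrand area-ratio method referenced in the abstract: the reactivity in dollars is the negative ratio of the prompt-neutron area to the delayed-neutron (background) area under the pulse response. The data below are synthetic, not YALINA measurements:

```python
import math

def sjostrand_reactivity(counts, background_level):
    """Sjoestrand area-ratio method: rho($) = -A_prompt / A_delayed.
    counts: counts per time bin over one pulse period;
    background_level: delayed-neutron background (counts per bin)."""
    a_delayed = background_level * len(counts)   # area under the delayed plateau
    a_prompt = sum(counts) - a_delayed           # prompt-decay area above it
    return -a_prompt / a_delayed

# synthetic pulse response: exponential prompt decay on a flat delayed background
prompt = [100.0 * math.exp(-0.5 * i) for i in range(20)]
counts = [10.0 + p for p in prompt]
rho = sjostrand_reactivity(counts, 10.0)   # negative for a subcritical system
```

In practice the delayed background is estimated from the tail of the measured pulse response, and, as the report discusses, the raw ratio is detector-position dependent, which is what the spatial correction factors address.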
Date: October 11, 2010
Creator: Cao, Y. & Gohar, Y. (Nuclear Engineering Division)
Object Type: Report
System: The UNT Digital Library