88 Matching Results


Accuracy of Projection Methods for the Incompressible Navier-Stokes Equations (open access)

Accuracy of Projection Methods for the Incompressible Navier-Stokes Equations

Numerous papers have appeared in the literature over the past thirty years discussing projection-type methods for solving the incompressible Navier-Stokes equations. A recurring difficulty encountered is the choice of boundary conditions for the intermediate or predicted velocity in order to obtain at least second order convergence. A further issue is the formula for the pressure correction at each timestep. A simple overview is presented here based on recently published results by Brown, Cortez and Minion [2].
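For orientation, the simplest projection scheme (Chorin's first-order method; a standard textbook form, not the specific second-order variants analyzed in [2]) advances one time step in three stages, and the two issues named above correspond to stages 1 and 2:

```latex
% Stage 1: predict an intermediate velocity u*, omitting the pressure gradient
% (the boundary condition imposed on u* here is the first issue noted above)
\frac{\mathbf{u}^{*}-\mathbf{u}^{n}}{\Delta t}
  = -\left(\mathbf{u}^{n}\cdot\nabla\right)\mathbf{u}^{n}
    + \nu\,\nabla^{2}\mathbf{u}^{*}
% Stage 2: solve a Poisson equation for the pressure update
% (the form of this correction is the second issue noted above)
\nabla^{2}p^{\,n+1} = \frac{1}{\Delta t}\,\nabla\cdot\mathbf{u}^{*}
% Stage 3: project u* onto a divergence-free field
\mathbf{u}^{n+1} = \mathbf{u}^{*}-\Delta t\,\nabla p^{\,n+1}
```

The boundary condition placed on the intermediate velocity in stage 1 and the formula used for the pressure update in stage 2 are precisely the choices that determine whether second-order convergence can be attained.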
Date: June 12, 2001
Creator: Brown, D L
System: The UNT Digital Library
The Advanced Photon Source injector test stand control system. (open access)

The Advanced Photon Source injector test stand control system.

None
Date: November 12, 2001
Creator: Maclean, J. F. & Arnold, N. D.
System: The UNT Digital Library
Analysis of sliding wear rate variation with nominal contact pressure. (open access)

Analysis of sliding wear rate variation with nominal contact pressure.

None
Date: November 12, 2001
Creator: Erck, R. A. & Ajayi, O. O.
System: The UNT Digital Library
Application of scaling properties of the Vlasov and the Fokker-Planck equations to improved macroparticle models (open access)

Application of scaling properties of the Vlasov and the Fokker-Planck equations to improved macroparticle models

Numerical simulations of cooling processes over minutes or hours of real time are usually carried out using direct solution of the Fokker-Planck equation. However, by using scaling rules derived from that equation, it is possible to use macroparticle representations of the beam distribution. Besides having applications for cooling alone, the macroparticle approach allows combining the cooling process with other dynamical processes which are represented by area-preserving maps. A time-scaling rule derived from the Vlasov equation can be used to adjust the time step of a map-based dynamics calculation to one more suitable for combining with a macroparticle Fokker-Planck calculation. The time scaling for the Vlasov equation is also useful for substantially more rapid calculations when a macroparticle model of a conservative multiparticle system requires a large number of macroparticles to faithfully produce the collective potential or when the model must simulate a long time period.
Date: July 12, 2001
Creator: MacLachlan, James A.
System: The UNT Digital Library
Assessment of the Relevance of Displacement Based Design Methods/Criteria to Nuclear Plant Structures (open access)

Assessment of the Relevance of Displacement Based Design Methods/Criteria to Nuclear Plant Structures

Revisions to the USNRC Regulatory Guides and Standard Review Plan Sections devoted to earthquake engineering practice are currently in progress. The intent is to reflect changes in engineering practice that have evolved in the twenty years that have passed since those criteria were originally published. Additionally, field observations of the effects of the Northridge (1994) and Kobe (1995) earthquakes have inspired some reassessment in the technical community about certain aspects of design practice. In particular, questions have arisen about the effectiveness of basing earthquake-resistant designs on resistance to seismic forces and then evaluating the tolerability of the expected displacements. Therefore, a research effort was undertaken to examine the implications for NRC's seismic practice of the move, in the earthquake engineering community, toward using expected displacement rather than force (or stress) as the basis for assessing design adequacy. The results of the NRC sponsored research on this subject are reported in this paper. A slow trend toward the utilization of displacement based methods for design was noted. However, there is a more rapid trend toward the use of displacement based methods for seismic evaluation of existing facilities. A document known as FEMA 273 has been developed and is being used as …
Date: August 12, 2001
Creator: Hofmayer, C.; Miller, C.; Wang, Y. & Costello, J.
System: The UNT Digital Library
BBU and Corkscrew Growth Predictions for the DARHT Second Axis Accelerator (open access)

BBU and Corkscrew Growth Predictions for the DARHT Second Axis Accelerator

The second axis accelerator of the Dual Axis Radiographic Hydrodynamic Test (DARHT-II) facility will produce a 2-kA, 20-MeV, 2-{micro}s output electron beam with a design goal of less than 1000 {pi} mm-mrad normalized transverse emittance. In order to meet this goal, both the beam breakup instability (BBU) and transverse ''corkscrew'' motion (due to chromatic phase advance) must be limited in growth. Using data from recent experimental measurements of the transverse impedance of actual DARHT-II accelerator cells by Briggs et al., they have used the LLNL BREAKUP code to predict BBU and corkscrew growth in DARHT-II. The results suggest that BBU growth should not seriously degrade the final achievable spot size at the x-ray converter, presuming the initial excitation level is of the order 100 microns or smaller. For control of corkscrew growth, a major concern is the number of ''tuning'' shots needed to utilize effectively the ''tuning-V'' algorithm. Presuming that the solenoid magnet alignment falls within spec, they believe that possibly as few as 50-100 shots will be necessary to set the dipole corrector magnet currents. They give some specific examples of tune determination for a hypothetical set of alignment errors.
Date: June 12, 2001
Creator: Chen, Y. J. & Fawley, W. M.
System: The UNT Digital Library
BBU and Corkscrew Growth Predictions for the DARHT Second Axis Accelerator (open access)

BBU and Corkscrew Growth Predictions for the DARHT Second Axis Accelerator

The second axis accelerator of the Dual Axis Radiographic Hydrodynamic Test (DARHT-II) facility will produce a 2-kA, 20-MeV, 2-{micro}s output electron beam with a design goal of less than 1000 {pi} mm-mrad normalized transverse emittance. In order to meet this goal, both the beam breakup instability (BBU) and transverse corkscrew motion (due to chromatic phase advance) must be limited in growth. Using data from recent experimental measurements of the transverse impedance of actual DARHT-II accelerator cells by Briggs et al. [2], they have used the LLNL BREAKUP code to predict BBU and corkscrew growth in DARHT-II. The results suggest that BBU growth should not seriously degrade the final achievable spot size at the x-ray converter, presuming the initial excitation level is of the order of 100 microns or smaller. For control of corkscrew growth, a major concern is the number of tuning shots needed to utilize effectively the tuning-V algorithm [3]. Presuming that the solenoid magnet alignment falls within spec, they believe that possibly as few as 50-100 shots will be necessary to set the dipole corrector magnet currents. They give some specific examples of tune determination for a hypothetical set of alignment errors.
Date: June 12, 2001
Creator: Chen, Y. J. & Fawley, W. M.
System: The UNT Digital Library
Beam-Beam Compensation in Tevatron: Status Report (open access)

Beam-Beam Compensation in Tevatron: Status Report

The project of beam-beam compensation (BBC) in the Tevatron using electron beams [1] has passed a successful first step in experimental studies. The first Tevatron electron lens (TEL) has been installed in the Tevatron, commissioned, and demonstrated the theoretically predicted shift of betatron frequencies of a high energy proton beam due to a high current, low energy electron beam. After the first series of studies in March-April 2001 (a total of 7 shifts), we achieved tune shifts of 980 GeV protons of about dQ=+0.007 with about 3 A of electron beam current, while the proton lifetime was in the range of 10 hours (about 24 hours at best). Future work will include diagnostics improvement, beam studies with antiprotons, and fabrication of the 2nd TEL.
Date: July 12, 2001
Creator: Shiltsev, Vladimir D.; Kuznetsov, G.; Solyak, N.; Wildman, D.; Zhang, X. L.; Alexahin, Yu. et al.
System: The UNT Digital Library
BTeV low-beta optics in the Tevatron (open access)

BTeV low-beta optics in the Tevatron

A low-{beta} insertion has been designed for the BTeV experiment to be installed in the Tevatron C0 straight section. With {+-}12 m for detector space, a {beta}* of 0.5 m can be achieved using 170 T/m magnets in the final focus triplets. A half-crossing angle of 240 {micro}r keeps the beams separated by 5{sigma} at the 2nd parasitic crossing, 39.5 m from the IP. There are two possible low-{beta} Tevatron Collider operating modes: CDF and D0 with collisions, but not C0; and C0 with collisions, but not B0 or D0.
Date: July 12, 2001
Creator: Johnstone, John A.
System: The UNT Digital Library
The CDF Online Silicon Vertex Tracker (open access)

The CDF Online Silicon Vertex Tracker

The Silicon Vertex Tracker (SVT) is the new trigger processor which reconstructs 2-D tracks with high speed and accuracy at the level 2 trigger of the CDFII experiment. SVT allows tagging events with secondary vertices and therefore enhances the CDFII B-physics capability. SVT has been fully assembled and operational since the beginning of Tevatron RunII in April 2001. In this paper we briefly review the SVT design and physics motivation and then describe its performance during the early phase of CDF RunII.
Date: December 12, 2001
Creator: Fiori, I. et al.
System: The UNT Digital Library
Challenges to the Fermilab linac and booster accelerators (open access)

Challenges to the Fermilab linac and booster accelerators

A report on the challenges confronting the Fermilab Linac and Booster accelerators is presented. Plans to face those challenges are discussed. Historically, the Linac/Booster system has served only as an injector for the relatively low repetition rate Main Ring synchrotron. With construction of an 8 GeV target station for the 5 Hz MiniBooNE neutrino beam and requirements for rapid multi-batch injection into the Main Injector for the NUMI/MINOS experiment, the demand for 8 GeV protons will increase more than an order of magnitude above recent high levels. To meet this challenge, enhanced ion source performance, better Booster orbit control, a beam loss collimation/localization system, and improved diagnostics are among the items being pursued. Booster beam loss reduction and control are key to the entire near future Fermilab high energy physics program.
Date: July 12, 2001
Creator: Webber, Robert C.
System: The UNT Digital Library
Computational Experience with the Reich-Moore Resolved-Resonance Equations in the AMPX Cross-Section Processing System (open access)

Computational Experience with the Reich-Moore Resolved-Resonance Equations in the AMPX Cross-Section Processing System

The Reich-Moore formulation is used extensively in many isotope/nuclide evaluations to represent neutron cross section data for the resolved-resonance region. The Reich-Moore equations require the evaluation of complex matrices (i.e., matrices with complex quantities) that are a function of the resonance energy and corresponding resonance parameters. Although the Reich-Moore equations are documented in the open literature, computational pitfalls may be encountered with the implementation of the Reich-Moore equations in a cross-section processing code. Based on experience, numerical instabilities in the form of nonphysical oscillations can occur in the calculated absorption, capture or elastic scattering cross sections. To illustrate possible numerical instabilities, the conventional Reich-Moore equations are presented, and the conditions that lead to numerical problems in the cross-section calculations are identified and demonstrated for {sup 28}Si and {sup 60}Ni. In an effort to circumvent the computational problems, detailed or revised Reich-Moore expressions have been developed to efficiently and accurately calculate cross sections for neutron-induced reactions in the resolved-resonance region. The revised equations can be used to avoid numerical problems associated with the implementation of the Reich-Moore formulation in a cross-section processing code. The revised Reich-Moore equations are also used to demonstrate the improved cross-section results (i.e., without numerical instabilities) for …
Date: February 12, 2001
Creator: Dunn, M. E.
System: The UNT Digital Library
Computational investigation of dissipation and reversibility of space-charge driven processes in beams (open access)

Computational investigation of dissipation and reversibility of space-charge driven processes in beams

Collisionless charged particle beams are presumed to equilibrate via the long-range potential from the space charge. The exact mechanism for this equilibration, along with the question of macroscopic reversibility, has been uncertain, however. A number of computational approaches based on particle-in-cell (PIC) methods are presented which can facilitate the resolution of these questions. One such technique is the self-consistent tracking of individual particle orbits through the nonlinear potential formed by nonuniform charge density distributions. This orbit-tracking model differs from the particle-core model in that the sampled particles are systematically chosen from the actual particles in a fully self-consistent simulation. The results of this analysis are presented for a number of representative cases, and the implications of the study on equilibration mechanism are discussed.
Date: July 12, 2001
Creator: Bohn, Courtlandt et al.
System: The UNT Digital Library
Correlation of Test Data from Some NIF Small Optical Components (open access)

Correlation of Test Data from Some NIF Small Optical Components

The NIF injection laser system requires over 8000 precision optical components. Two special requirements for such optics are wavefront and laser damage threshold. Wavefront gradient is an important specification on the NIF ILS optics. The gradient affects the spot size and, to second order, the contrast ratio of the laser beam. Wavefront errors are specified in terms of peak-to-valley, rms, and rms gradient, with filtering requirements. Typical values are lambda/8 PV, lambda/30 rms, and lambda/30/cm rms gradient, determined after filtering for spatial periods greater than 2 mm. One objective of this study is to determine whether commercial software supplied with common phase measuring interferometers can filter, perform the gradient analysis, and produce numbers comparable to those from CVOS, the LLNL wavefront analysis application. Laser survivability of optics is another important specification for the operational longevity of the laser system. Another objective of this study is to find alternate laser damage test facilities. The addition of non-NIF testing would allow coating suppliers to optimize their processes according to their test plans and NIF integrators to validate the coatings from their sub-tiered suppliers. The maximum levels required for anti-reflective, 45-degree high reflector, and polarizer coatings are 20, 30, and 5 J/cm{sup …
Date: June 12, 2001
Creator: Chow, R.; McBurney, M.; Eickelberg, W. K.; Williams, W. H. & Thomas, M. D.
System: The UNT Digital Library
Data Management Tools (open access)

Data Management Tools

What is data management (DM) and why is it important? As described in the ''Handbook of Data Management'' (Thuraisingham, 1998), data management is the process of understanding the data needs of an organization and making the data available to support the operations of the organization. The ultimate goal of data management is to provide the seamless access and fusion of massive amounts of data, information, and knowledge in a heterogeneous and real-time environment, and to support the functions and decision making processes of an organization. The important questions that need to be asked for proper data management are: who is going to be using the data, what types of data need to be stored, and how will this data be accessed? With these questions answered, the data management system (DMS) can then be created, or an existing system can be modified to meet the needs of the organization. The real importance of a data management system is to provide the end user with a consistent data set of known quality. The elements of a good data management system should include a system that: is modeled on how the data is collected and processed, is very well documented, has specifically defined …
Date: February 12, 2001
Creator: Ridley, M. & Stoker, C.
System: The UNT Digital Library
Decision support facility for the APS control system. (open access)

Decision support facility for the APS control system.

The Advanced Photon Source is now in its fifth year of routine beam production. The EPICS-based [1] control system has entered the phase in its life cycle where new control algorithms must be implemented under increasingly stringent operational and reliability requirements. The sheer volume of the control system ({approx}270,000 records, {approx}145 VME-based input-output controllers (IOCs), and {approx}7,000,000 lines of EPICS ASCII configuration code) presents a daunting challenge for code maintenance. The present work describes a relational database that provides an integrated view of the interacting components of the entire APS control system, including the IOC low-level logic, the physical wiring documentation, and high-level client applications. The database is extracted (booted) from the same operational CVS repository as that used to load the active IOCs. It provides site-wide decision support facilities to inspect and trace control flow and to identify client (e.g., user interface) programs involved at any selected point in the front-end logic. The relational database forms a basis for generalized documentation of global control logic and its connection with both the physical I/O and with external high-level applications.
Date: November 12, 2001
Creator: Dohan, D. A.
System: The UNT Digital Library
Design and prototype tests of a large-aperture 37-53 MHz ferrite-tuned booster synchrotron cavity (open access)

Design and prototype tests of a large-aperture 37-53 MHz ferrite-tuned booster synchrotron cavity

The Booster synchrotron at Fermilab employs eighteen 37-53 MHz ferrite-tuned double-gap coaxial radiofrequency cavities for acceleration of protons from 400 MeV to 8 GeV. The cavities have an aperture of 2.25 inches and operate at 55 kV per cavity. Future high duty factor operation of the Booster will be problematic due to unavoidable beam loss at the cavities resulting in excessive activation. The power amplifiers, high maintenance items, are mounted directly to the cavities in the tunnel. A proposed replacement for the Booster, the Proton Driver, will utilize the Booster radiofrequency cavities and requires not only a larger aperture, but also higher voltage. A research and development program is underway at Fermilab to modify the Booster cavities to provide a 5-inch aperture and a 20% voltage increase. A prototype has been constructed and high power tests have been completed. The cavity design and test results are presented.
Date: July 12, 2001
Creator: Champion, Mark S. et al.
System: The UNT Digital Library
Design Considerations of Fast Kicker Systems for High Intensity Proton Accelerators (open access)

Design Considerations of Fast Kicker Systems for High Intensity Proton Accelerators

In this paper, we discuss the specific issues related to the design of fast kicker systems for high intensity proton accelerators. Addressing these issues in the preliminary design stage can be critical, since the fast kicker systems affect the machine lattice structure and overall design parameters. Main topics include system architecture, design strategy, beam current coupling, grounding, end user cost vs. system cost, reliability, redundancy, and flexibility. Operating experience with the Alternating Gradient Synchrotron injection and extraction kicker systems at Brookhaven National Laboratory and their future upgrade is presented. Additionally, new conceptual designs of the extraction kicker for the Spallation Neutron Source at Oak Ridge and the Advanced Hydrotest Facility at Los Alamos are discussed.
Date: June 12, 2001
Creator: Zhang, W.; Sandberg, J.; Parson, W. M.; Walstrom, P.; Murray, M. M.; Cook, E. et al.
System: The UNT Digital Library
Designing Remote Monitoring Systems for Long Term Maintenance and Reliability (open access)

Designing Remote Monitoring Systems for Long Term Maintenance and Reliability

As part of the effort to modernize safeguards equipment, the IAEA is continuing to acquire and install equipment for upgrading obsolete surveillance systems with digital technology; and providing remote-monitoring capabilities where and when economically justified. Remote monitoring is expected to reduce inspection effort, particularly at storage facilities and reactor sites. Remote monitoring technology will not only involve surveillance, but will also include seals, sensors, and other unattended measurement equipment. The experience of Lawrence Livermore National Laboratory (LLNL) with the Argus Security System offers lessons for the design, deployment, and maintenance of remote monitoring systems. Argus is an integrated security system for protection of high-consequence U.S. Government assets, including nuclear materials. Argus provides secure transmission of sensor data, administrative data, and video information to support intrusion detection and access control functions. LLNL developed and deployed the Argus system on its own site in 1988. Since that time LLNL has installed, maintained, and upgraded Argus systems at several Department of Energy and Department of Defense sites in the U.S. and at the original LLNL site. Argus has provided high levels of reliability and integrity, and reduced overall life-cycle cost through incremental improvements to hardware and software. This philosophy permits expansion of functional …
Date: October 12, 2001
Creator: Davis, G E; Johnson, G L; Schrader, F D; Stone, M A & Wilson, E F
System: The UNT Digital Library
Determining benefits and costs of improved central air conditioner efficiencies (open access)

Determining benefits and costs of improved central air conditioner efficiencies

Economic impacts on individual consumers from possible revisions to U.S. residential-type central air conditioner energy-efficiency standards are examined using a life-cycle cost (LCC) analysis. LCC is the consumer's cost of purchasing and installing a central air conditioner and operating it over its lifetime. This approach makes it possible to evaluate the economic impacts on individual consumers from the revised standards. The methodology allows an examination of groups of the population which benefit or lose from suggested efficiency standards. The results show that the economic benefits to consumers due to modest increases in efficiency are significant. For an efficiency increase of 20% over the existing minimum standard (i.e., 12 SEER), 35% of households with central air conditioners experience significant LCC savings, with an average savings of $453, while 25% show significant LCC losses, with an average loss of $158, compared to a pre-standard LCC average of $5,170. The remainder of the population (40%) are largely unaffected.
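The LCC comparison described above can be sketched as follows; all prices, energy figures, and rates below are invented illustrative values, not figures from the study.

```python
def life_cycle_cost(installed_price, annual_kwh, elec_price, discount_rate, lifetime_years):
    """LCC = installed price + discounted cost of operating the unit over its life."""
    operating = sum(
        annual_kwh * elec_price / (1.0 + discount_rate) ** year
        for year in range(1, lifetime_years + 1)
    )
    return installed_price + operating

# Hypothetical baseline unit vs. a 20% more efficient (and more expensive) unit.
lcc_base = life_cycle_cost(2500.0, 3000.0, 0.08, 0.05, 15)
lcc_eff = life_cycle_cost(2900.0, 2400.0, 0.08, 0.05, 15)
savings = lcc_base - lcc_eff  # positive means the efficient unit wins over its lifetime
```

Whether a given household sees a saving or a loss depends on its usage and electricity price, which is why the study reports a distribution of winners and losers rather than a single number.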
Date: January 12, 2001
Creator: Rosenquist, G.; Levok, A.; Chan, P. & McMahon, J.
System: The UNT Digital Library
Developing a Standard Method of Test for Packaged, Solid-Desiccant Based Dehumidification Systems (open access)

Developing a Standard Method of Test for Packaged, Solid-Desiccant Based Dehumidification Systems

A draft Method of Test (MOT) has been proposed for packaged, air-to-air, desiccant-based dehumidifier systems that incorporate a thermally-regenerated desiccant material for dehumidification. This MOT is intended to function as the ''system'' testing and rating complement to the desiccant ''component'' (desiccant wheels and/or cassettes) MOT (ASHRAE 1998) and rating standard (ARI 1998) already adopted by industry. This draft standard applies to ''packaged systems'' that: Use desiccants for dehumidification of conditioned air for buildings; Use heated air for regeneration of the desiccant material; Include fans for moving process and regeneration air; May include other system components for filtering, pre-cooling, post-cooling, or heating conditioned air; and May include other components for humidification of conditioned air. The proposed draft applies to four different system operating modes depending on whether outdoor or indoor air is used for the process air and regeneration air streams. Only the ''ventilation'' mode, which uses outdoor air for both process and regeneration inlets, is evaluated in this paper. Performance of the dehumidification system is presented in terms that would be most familiar and useful to designers of building HVAC systems to facilitate integration of desiccant equipment with more conventional hardware. Parametric performance results from a modified, commercial desiccant dehumidifier …
Date: July 12, 2001
Creator: Sand, J.R.
System: The UNT Digital Library
Development and Application of a Strength and Damage Model for Rock under Dynamic Loading (open access)

Development and Application of a Strength and Damage Model for Rock under Dynamic Loading

Simulating the behavior of geologic materials under impact loading conditions requires the use of a constitutive model that includes the effects of bulking, yielding, damage, porous compaction and loading rate on the material response. This paper describes the development, implementation and calibration of a thermodynamically consistent constitutive model that incorporates these features. The paper also describes a computational study in which the model was used to perform numerical simulations of PILE DRIVER, a deeply-buried underground nuclear explosion detonated in granite at the Nevada Test Site. Particle velocity histories, peak velocity and peak displacement as a function of slant range obtained from the code simulations compare favorably with PILE DRIVER data. The simulated attenuation of peak velocity and peak displacement also agrees with the results from several other spherical wave experiments in granite.
Date: March 12, 2001
Creator: Antoun, T H; Lomov, I N & Glenn, L A
System: The UNT Digital Library
Development of cost-effective Nb3Sn conductors for the next generation hadron colliders (open access)

Development of cost-effective Nb3Sn conductors for the next generation hadron colliders

Significant progress has been made in demonstrating that reliable, efficient high field dipole magnets can be made with Nb{sub 3}Sn superconductors. A key factor in determining whether these magnets will be a cost-effective solution for the next generation hadron collider is the conductor cost. Consequently, DOE initiated a conductor development program to demonstrate that Nb{sub 3}Sn can be improved to reach a cost/performance value of $1.50/kA-m at 12T, 4.2K. The first phase of this program was initiated in Jan 2000, with the goal of improving the key properties of interest for accelerator dipole magnets--high critical current density and low magnetization. New world record critical current densities have been reported recently, and it appears that significant potential exists for further improvement. Although new techniques for compensating for magnetization effects have reduced the requirements somewhat, techniques for lowering the effective filament size while maintaining these high Jc values are a program priority. The next phase of this program is focused on reducing the conductor cost through substitution of lower cost raw materials and through process improvements. The cost drivers for materials and fabrication have been identified, and projects are being initiated to demonstrate cost reductions.
Date: April 12, 2001
Creator: Scanlan, R. M.; Dietderich, D. R. & Zeitlin, B. A.
System: The UNT Digital Library
Dynamic Statistical Profiling of Communication Activity in Distributed Applications (open access)

Dynamic Statistical Profiling of Communication Activity in Distributed Applications

A complete trace of communication activity for a terascale application is overwhelming in terms of overhead and storage. We propose a novel alternative that enables profiling of the application's communication activity using statistical message sampling during runtime. We have implemented an operational prototype and our evidence shows that this new technique can provide an accurate, low-overhead, tractable alternative for performance analysis of communication activity. Moreover, this alternative enables an assortment of runtime analysis techniques not previously available with post-mortem, trace-based systems. Our assessment of relative performance and coverage of different sampling and analysis methods shows that purely random selection is preferred over counter- and timer-based sampling. Experiments on several applications running up to 128 processors demonstrate the viability of this approach. In particular, on one application, statistical profiling results contradict conclusions based on evidence from tracing. The design of our prototype reveals that parsimonious modifications to the MPI runtime system could facilitate such techniques on production computing systems, and it suggests that this sampling technique could execute continuously for long-running applications.
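The random-selection idea can be sketched as below; the class, names, and sampling rate are illustrative inventions (the actual prototype hooks into the MPI runtime), not the paper's implementation.

```python
import random
from collections import defaultdict

class MessageProfiler:
    """Profile communication statistically by sampling a random subset of messages."""

    def __init__(self, sample_prob=0.05, seed=None):
        self.sample_prob = sample_prob        # purely random selection, as preferred above
        self.rng = random.Random(seed)
        self.msg_count = defaultdict(int)     # sampled message count per peer
        self.msg_bytes = defaultdict(int)     # sampled byte count per peer

    def record(self, peer_rank, nbytes):
        # Called from a thin wrapper around each send/receive; cheap when the
        # message is not selected, which is what keeps the overhead low.
        if self.rng.random() < self.sample_prob:
            self.msg_count[peer_rank] += 1
            self.msg_bytes[peer_rank] += nbytes

    def estimate(self, peer_rank):
        # Scale the sampled statistics back up to estimates of the true totals.
        scale = 1.0 / self.sample_prob
        return self.msg_count[peer_rank] * scale, self.msg_bytes[peer_rank] * scale

prof = MessageProfiler(sample_prob=0.1, seed=42)
for _ in range(10000):
    prof.record(peer_rank=1, nbytes=1024)
est_count, est_bytes = prof.estimate(1)  # statistical estimates of the true 10000 messages
```

Because only a fixed fraction of messages is ever examined, overhead and storage stay bounded no matter how long the application runs, which is what enables the continuous, runtime analysis described above.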
Date: October 12, 2001
Creator: Vetter, Jeffrey
System: The UNT Digital Library