Mechanisms of defect production and atomic mixing in high energy displacement cascades: A molecular dynamics study (open access)

Mechanisms of defect production and atomic mixing in high energy displacement cascades: A molecular dynamics study

We have performed molecular dynamics computer simulation studies of displacement cascades in Cu at low temperature. For 25 keV recoils we observe the splitting of a cascade into subcascades and show that cascades in Cu may lead to the formation of vacancy and interstitial dislocation loops. We discuss a new mechanism of defect production based on the observation of interstitial prismatic dislocation loop punching from cascades at 10 K. We also show that below the subcascade threshold, atomic mixing in the cascade is recoil-energy dependent and obtain a mixing efficiency that scales as the square root of the primary recoil energy. 44 refs., 12 figs.
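As a purely illustrative reading of the quoted scaling (the symbols here are ours, not taken from the paper): writing the mixing efficiency as $\eta$ and the primary recoil energy as $E_p$, the result below the subcascade threshold is

  $\eta(E_p) \propto E_p^{1/2}$,

so raising the recoil energy from 5 keV to 20 keV, for example, would be expected to double the mixing efficiency, since $\sqrt{20/5} = 2$.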
Date: June 5, 1991
Creator: Diaz de la Rubia, T. & Guinan, M.W.
System: The UNT Digital Library
Effects of doping on hybridization gapped materials (open access)

Effects of doping on hybridization gapped materials

Doping studies are presented on three materials exhibiting hybridization gaps: Ce{sub 3}Bi{sub 4}Pt{sub 3}, U{sub 3}Sb{sub 4}Pt{sub 3}, and CeRhSb. In the case of trivalent La, Y, or Lu substituting for Ce or U, there is a suppression of the low temperature gap and an increase in the electronic specific heat, {gamma}. In the case of tetravalent Th substitutions for U there is no change in {gamma} and in the case of tetravalent Zr substitution for Ce in CeRhSb, there is an enhanced semiconductor-like behavior in the electrical resistance. These results are discussed in the light of a simple model of hybridization gapped systems. 12 refs., 3 figs.
Date: June 5, 1991
Creator: Canfield, P. C.; Thompson, J. D.; Hundley, M. F.; Lacerda, A. & Fisk, Z.
System: The UNT Digital Library
Reactor Operations Management Plan (open access)

Reactor Operations Management Plan

The K-Reactor last operated in April 1988. At that time, K-Reactor was one of three operating reactors at the Savannah River Site (SRS). Following an incident in P-Reactor in August 1988, it was decided to discontinue SRS reactor operation and conduct an extensive program to upgrade operating practices and plant hardware prior to restart of any of the reactors. K-Reactor was the first of three reactors scheduled to resume production. At the present time, it is the only reactor with planned restart. WSRC assumed management of SRS on April 1, 1989. WSRC established the Safety Basis for Restart and a listing of the actions planned to satisfy the Safety Basis. In consultation with DOE, it was determined that proper management of the restart activities would require a single plan that integrated the numerous activities. The plan was entitled the Reactor Operations Management Plan and is referred to simply as the ROMP. The initial version of ROMP was produced in July of 1989. Subsequent modifications led to Revision 3, which was approved by DOE in May 1990. Other changes were made in a formal change process, resulting in the latest version, Revision 5, being issued in October 1990. The ROMP …
Date: December 5, 1991
Creator: Rice, P.D.
System: The UNT Digital Library
Some limitations of detailed balance for inverse reaction calculations in the astrophysical p-process (open access)

Some limitations of detailed balance for inverse reaction calculations in the astrophysical p-process

p-Process modeling of some rare but stable proton-rich nuclei requires knowledge of a variety of neutron, charged particle, and photonuclear reaction rates at temperatures of 2 to 3 {times} 10{sup 9} {degrees}K. Detailed balance is usually invoked to obtain the stellar photonuclear rates, in spite of a number of well-known constraints. In this work we attempt to calculate directly the stellar rates for ({gamma},n) and ({gamma},{alpha}) reactions on {sup 151}Eu. These are compared with stellar rates obtained from detailed balance, using the same input parameters for the stellar (n,{gamma}) and ({alpha},{gamma}) reactions on {sup 150}Eu and {sup 147}Pm, respectively. The two methods yielded somewhat different results, which will be discussed along with some sensitivity studies. 16 refs., 7 figs.
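For orientation, the reciprocity relation commonly used to convert a stellar capture rate into the corresponding photodisintegration rate can be written schematically (notation ours, not the authors') for the capture i + n {yields} f + {gamma} as

  $\lambda_{\gamma}(T) = \frac{(2J_i+1)(2J_n+1)}{(2J_f+1)}\,\frac{G_i(T)}{G_f(T)}\left(\frac{A_i A_n}{A_f}\right)^{3/2}\left(\frac{m_u k T}{2\pi\hbar^2}\right)^{3/2}\langle\sigma v\rangle\, e^{-Q/kT}$,

where the G's are temperature-dependent partition functions and Q is the capture Q-value; the well-known constraints mentioned above concern the assumptions (for example, thermal population of excited states) built into such a relation.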
Date: December 5, 1990
Creator: Gardner, D. G. & Gardner, M. A.
System: The UNT Digital Library
Time response of fast-gated microchannel plates used as x-ray detectors (open access)

Time response of fast-gated microchannel plates used as x-ray detectors

We report measurements of the time response of fast-gated, microchannel plate (MCP) detectors, using a <10 ps pulsewidth ultraviolet laser and an electronic sampling system to measure time resolutions to better than 25 ps. The results show that framing times of less than 100 ps are attainable with high gain. The data are compared to a Monte Carlo calculation, which shows good agreement. We also measured the relative sensitivity as a function of DC bias, and saturation effects for large signal inputs. In part B, we briefly describe an "electrical time-of-flight" technique, which we have used to measure the response time of a fast-gated microchannel plate (MCP). Thinner MCPs than previously used have been tested, and, as expected, show faster gating times and smaller electron multiplication. A preliminary design for an x-ray pinhole camera, using a thin MCP, is presented. 7 refs., 6 figs.
Date: November 5, 1990
Creator: Turner, R. E.; Bell, P.; Hanks, R.; Kilkenny, J. D.; Landen, N.; Power, G. et al.
System: The UNT Digital Library
Modal study of refractive effects on x-ray laser coherence (open access)

Modal study of refractive effects on x-ray laser coherence

The effect of smoothly varying transverse gain and refraction profiles on x-ray laser intensity and coherence is analyzed by modally expanding the electric field within the paraxial approximation. Comparison with a square transverse profile reveals that smooth-edged profiles lead to: (1) a greatly reduced number of guided modes, (2) the continued cancellation of local intensity from a loosely guided mode by resonant free modes, and (3) the absence of extraneous (or anomalous) free mode resonances. These generic spectral properties should enable a considerable simplification in analyzing and optimizing the coherence properties of laboratory soft x-ray lasers. 6 refs., 3 figs.
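For context, the modal expansion referred to above has the generic paraxial form (notation ours)

  $E(x,z) = \sum_n a_n\, u_n(x)\, e^{i\beta_n z} + \int dk\, a(k)\, u_k(x)\, e^{i\beta(k) z}$,

with a discrete sum over guided modes and a continuum integral over free modes; in an amplifying medium the propagation constants are complex, their imaginary parts giving the modal gain.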
Date: April 5, 1991
Creator: Amendt, P.; London, R.A. (Lawrence Livermore National Lab., CA (USA)) & Strauss, M. (Israel Atomic Energy Commission, Beersheba (Israel). Nuclear Research Center-Negev)
System: The UNT Digital Library
Star-disk collisions in active galactic nuclei and the origin of the broad line region (open access)

Star-disk collisions in active galactic nuclei and the origin of the broad line region

Stars of a cluster surrounding the central black hole in an AGN will collide with the accretion disk. For a central black hole of 10{sup 8} M{circle dot} and a cluster with 10{sup 7} {minus} 10{sup 8} stars within a parsec, one estimates that {approximately}10{sup 4} such collisions will occur per year. Collisions are hypersonic (Mach number M {much gt} 1). Some of the wake of the star -- the disk material shocked by its passage -- will follow it out of the disk. Such "star tails" with estimated masses {delta}m {approximately} 10{sup 25} {minus} 10{sup 27} g subsequently expand, cool and begin to recombine. We propose that -- when illuminated by the ionizing flux from the central source -- they are likely to be the origin of the observed broad emission lines.
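A rough consistency check on the quoted collision rate (our numbers, not the authors'): the Keplerian period at r {approximately} 1 pc around a 10{sup 8} M{circle dot} black hole is

  $P = 2\pi\sqrt{r^{3}/GM} \approx 1\times10^{4}\ \mathrm{yr}$,

and a cluster of 10{sup 7} {minus} 10{sup 8} stars, each crossing the disk twice per orbit, then gives of order $2N_{*}/P \sim 10^{3}$-$10^{4}$ collisions per year.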
Date: December 5, 1991
Creator: Zurek, W.H.; Colgate, S.A. (Los Alamos National Lab., NM (United States)) & Siemiginowska, A. (Harvard-Smithsonian Center for Astrophysics, Cambridge, MA (United States))
System: The UNT Digital Library
Practical path planning among movable obstacles (open access)

Practical path planning among movable obstacles

Path planning among movable obstacles is a practical problem that is in need of a solution. In this paper we present an efficient heuristic algorithm that uses a generate-and-test paradigm: a "good" candidate path is hypothesized by a global planner and subsequently verified by a local planner. In the process of formalizing the problem, we also present a technique for modeling object interactions through contact. Our algorithm has been tested on a variety of examples, and was able to generate solutions within 10 seconds. 5 figs., 27 refs.
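A minimal sketch of the generate-and-test loop described above (hypothetical interfaces, not the authors' code):

    def plan_among_movable_obstacles(start, goal, global_planner, local_planner,
                                     max_candidates=100):
        # A global planner hypothesizes "good" candidate paths; a local planner
        # tries to verify each one, e.g. by checking that blocking obstacles can
        # actually be moved out of the way.
        for _ in range(max_candidates):
            candidate = global_planner.propose(start, goal)
            if candidate is None:          # global planner has run out of hypotheses
                return None
            verified, detailed_path = local_planner.verify(candidate)
            if verified:
                return detailed_path
            global_planner.record_failure(candidate)   # bias future proposals away
        return None

The loop terminates either with a verified path or after a bounded number of hypotheses.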
Date: September 5, 1990
Creator: Chen, Pang C. & Hwang, Yong K.
System: The UNT Digital Library
From bit-strings (part way) to quaternions (open access)

From bit-strings (part way) to quaternions

We present work in progress on constructing rotations and boosts from bit strings, and a mapping of bit-strings onto integer quaternion coordinates.
Date: April 5, 1991
Creator: Noyes, H. Pierre
System: The UNT Digital Library
US nuclear weapons policy (open access)

US nuclear weapons policy

We are closing "chapter one" of the nuclear age. Whatever happens to the Soviet Union and to Europe, some of the major determinants of nuclear policy will not be what they have been for the last forty-five years. Part of the task for US nuclear weapons policy is to adapt its nuclear forces and the organizations managing them to the present, highly uncertain, but not urgently competitive situation between the US and the Soviet Union. Containment is no longer the appropriate watchword. Stabilization in the face of uncertainty, a more complicated and politically less readily communicable goal, may come closer. A second and more difficult part of the task is to deal with what may be the greatest potential source of danger to come out of the end of the cold war: the breakup of some of the cooperative institutions that managed the nuclear threat and were created by the cold war. These cooperative institutions, principally the North Atlantic Treaty Organization (NATO), the Warsaw Pact, and the US-Japan alliance, were not created specifically to manage the nuclear threat, but manage it they did. A third task for nuclear weapons policy is that of dealing with nuclear proliferation under modern conditions when …
Date: December 5, 1990
Creator: May, M.
System: The UNT Digital Library
The construction of a physical map for human chromosome 19 (open access)

The construction of a physical map for human chromosome 19

Unlike a genetic map which provides information on the relative position of genes or markers based upon the frequency of genetic recombination, a physical map provides a topographical picture of DNA, i.e. distances in base pairs between landmarks. The landmarks may be genes, gene markers, anonymous sequences, or cloned DNA fragments. Perhaps the most useful type of physical map is one that consists of an overlapping set of cloned DNA fragments (contigs) that span the chromosome. Once genes are assigned to this contig map, sequencing of the genomic DNA can be prioritized to complete the most interesting regions first. While, in practice, complete coverage of a complex genome in recombinant clones may not be possible to achieve, many gaps in a clone map may be closed by using multiple cloning vectors or uncloned large DNA fragments such as those separated by electrophoretic methods. Human chromosome 19 contains about 60 million base pairs of DNA and represents about 2% of the haploid genome. Our initial interest in chromosome 19 originated from the presence of three DNA repair genes which we localized to a region of this chromosome. Our approach to constructing a physical map of human chromosome 19 involves four steps: …
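The 2% figure follows directly from taking the haploid human genome as roughly 3 {times} 10{sup 9} base pairs (a standard estimate, not stated in the abstract):

  $\frac{6\times10^{7}\ \mathrm{bp}}{3\times10^{9}\ \mathrm{bp}} = 0.02 = 2\%$.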
Date: November 5, 1990
Creator: Carrano, A. V.; Alleman, J.; Amemiya, C.; Ashworth, L. K.; Aslanidis, C.; Branscomb, E. W. et al.
System: The UNT Digital Library
Theoretical/numerical investigation of induction cavity impedances for moderate to large gap widths (open access)

Theoretical/numerical investigation of induction cavity impedances for moderate to large gap widths

In order to understand the coupling of a charged particle beam to modes in induction cells with gap-width-to-beampipe-radius ratio w/b > 1, the variation of the transverse Z/Q for both axially symmetric and axially asymmetric dipole modes in this regime is investigated. It is found that the gross behavior of the axially symmetric modes when w/b > 1 is at least consistent with the approximate analysis of Briggs, although a thorough comparison has not been undertaken. The axially asymmetric modes are found to be unimportant until w/b approaches 2, and they generally exhibit lower values of Z{perpendicular}/Q than the axially symmetric modes.
Date: September 5, 1990
Creator: DeFord, J. F.
System: The UNT Digital Library
Relativistic klystrons for high-gradient accelerators (open access)

Relativistic klystrons for high-gradient accelerators

Experimental work is being performed by collaborators at LLNL, SLAC, and LBL to investigate relativistic klystrons as a possible rf power source for future high-gradient accelerators. We have learned how to overcome our previously reported problem of high power rf pulse shortening and have achieved peak rf power levels of 330 MW using an 11.4-GHz high-gain tube with multiple output structures. In these experiments the rf pulse is of the same duration as the beam current pulse. In addition, experiments have been performed on two short sections of a high-gradient accelerator using the rf power from a relativistic klystron. An average accelerating gradient of 84 MV/m has been achieved with 80 MW of rf power.
Date: September 5, 1990
Creator: Westenskow, G. A.; Aalberts, D. P.; Boyd, J. K.; Deis, G. A.; Houck, T. L.; Orzechowski, T. J. et al.
System: The UNT Digital Library
Completing fault models for abductive diagnosis (open access)

Completing fault models for abductive diagnosis

In logic-based diagnosis, the consistency-based method is used to determine the possible sets of faulty devices. If the fault models of the devices are incomplete or nondeterministic, then this method does not necessarily yield abductive explanations of system behavior. Such explanations give additional information about faulty behavior and can be used for prediction. Unfortunately, system descriptions for the consistency-based method are often not suitable for abductive diagnosis. Methods for completing the fault models for abductive diagnosis have been suggested informally by Poole and by Cox et al. Here we formalize these methods by introducing a standard form for system descriptions. The properties of these methods are determined in relation to consistency-based diagnosis and compared to other ideas for integrating consistency-based and abductive diagnosis.
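A toy illustration of the distinction (our example, not from the paper): a single inverter with input 1 whose output is observed to be 1. Each fault mode is modeled by the set of outputs it allows; the "unknown" mode plays the role of the completion discussed above.

    MODES = {
        "ok":         lambda inp: {1 - inp},   # correct behavior: output = NOT input
        "stuck-at-0": lambda inp: {0},
        "stuck-at-1": lambda inp: {1},
        "unknown":    lambda inp: {0, 1},      # completed "anything goes" fault mode
    }

    def diagnoses(inp, observed):
        consistent = [m for m, f in MODES.items() if observed in f(inp)]    # consistency-based
        abductive  = [m for m, f in MODES.items() if f(inp) == {observed}]  # entails observation
        return consistent, abductive

    print(diagnoses(1, 1))   # (['stuck-at-1', 'unknown'], ['stuck-at-1'])

Only "stuck-at-1" abductively explains the observation, while the weaker "unknown" mode is merely consistent with it.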
Date: November 5, 1992
Creator: Knill, E. (Los Alamos National Lab., NM (United States)); Cox, P.T. & Pietrzykowski, T. (Technical Univ., NS (Canada))
System: The UNT Digital Library
The effects of laser beam non-uniformities on x-ray conversion efficiency (open access)

The effects of laser beam non-uniformities on x-ray conversion efficiency

High gain Inertial Confinement Fusion (ICF) targets require a highly uniform drive. In the case of direct drive, the inherent non-uniformities in a high-power glass laser beam are large enough to prevent high compression of targets. In recent years two methods for smoothing the laser drive, Induced Spatial Incoherence (ISI) and Smoothing by Spectral Dispersion (SSD), have been proposed. Both methods break the original laser beam up into many beamlets that then interfere at the target to produce an illumination pattern with large instantaneous intensity variations over a wide range of spatial scales. This interference pattern dances around at the coherence time of the laser and averages out to produce a smooth beam on longer time scales. Indirect drive schemes shine the laser on high-Z material, usually gold, which converts the laser energy into x-rays. The x-rays are then used to drive the target. Non-uniformities in the laser beam can imprint themselves on the emitted x-rays and potentially cause problems, although the spatial transport of the x-rays to the target tends to smooth out these non-uniformities. As a result, ISI and SSD schemes are also being considered for indirect drive laser systems. We address this problem by modeling the effects …
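The averaging argument sketched above corresponds to the familiar statistical scaling (our notation): after a time t much longer than the coherence time t{sub c}, roughly t/t{sub c} independent speckle patterns have been superposed, so the residual intensity nonuniformity falls off as

  $\sigma_{\rm rms}(t) \sim \sigma_{0}\,\sqrt{t_{c}/t}$,

where {sigma}{sub 0} is the instantaneous nonuniformity.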
Date: November 5, 1990
Creator: Langer, S.H. & Estabrook, K.G.
System: The UNT Digital Library
Transformation as a Design Process and Runtime Architecture for High Integrity Software (open access)

Transformation as a Design Process and Runtime Architecture for High Integrity Software

We have discussed two aspects of creating high integrity software that greatly benefit from the availability of transformation technology, which in this case is manifested in the requirement for a sophisticated backtracking parser. First, because of the potential for correctly manipulating programs via small changes, an automated non-procedural transformation system can be a valuable tool for constructing high assurance software. Second, modeling the process of translating data into information as a (perhaps context-dependent) grammar leads to an efficient, compact implementation. From a practical perspective, the transformation process should begin in the domain language in which a problem is initially expressed. Thus, in order for a transformation system to be practical, it must be flexible with respect to domain-specific languages. We have argued that transformation applied to specifications results in a highly reliable system. We also attempted to briefly demonstrate that transformation technology applied to the runtime environment will result in a safe and secure system. We thus believe that sophisticated multi-lookahead backtracking parsing technology is central to demonstrating the existence of high integrity software (HIS).
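As a minimal illustration of backtracking parsing (ours, and far simpler than the multi-lookahead technology discussed above), consider a recursive-descent recognizer for the grammar S -> 'a' S 'b' | 'a' 'b':

    def parse_S(tokens, pos):
        # Alternative 1: 'a' S 'b'
        if pos < len(tokens) and tokens[pos] == 'a':
            sub = parse_S(tokens, pos + 1)
            if sub is not None and sub < len(tokens) and tokens[sub] == 'b':
                return sub + 1
        # Backtrack to the same position and try alternative 2: 'a' 'b'
        if pos + 1 < len(tokens) and tokens[pos] == 'a' and tokens[pos + 1] == 'b':
            return pos + 2
        return None   # both alternatives fail at this position

    def accepts(s):
        return parse_S(list(s), 0) == len(s)

    # accepts("aabb") -> True, accepts("ab") -> True, accepts("aab") -> False

When the first alternative fails, the parser simply retries from the same input position, which is the essential backtracking behavior.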
Date: April 5, 1999
Creator: Bespalko, Stephen J. & Winter, Victor L.
System: The UNT Digital Library
First measurement of the left-right cross section asymmetry in Z boson production at E{sub cm} = 91.5 GeV (open access)

First measurement of the left-right cross section asymmetry in Z boson production at E{sub cm} = 91.5 GeV

The left-right cross section asymmetry for Z boson production in e{sup +} e{sup {minus}} annihilation (A{sub LR}) is being measured at E{sub cm} = 91.5 GeV with the SLD detector at the SLAC Linear Collider (SLC) using a longitudinally polarized electron beam. The electron polarization is continually monitored with a Compton scattering polarimeter, and is typically 22%. At the current time, we have accumulated a sample of 4779 Z events. We find that A{sub LR} = 0.02 {plus_minus} 0.07 {plus_minus} 0.001, where the first error is statistical and the second is systematic. Using this very preliminary measurement, we determine the weak mixing angle defined at the Z boson pole to be sin{sup 2}{theta}{sub W} = 0.247 {plus_minus} 0.009.
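For reference, the standard tree-level relations behind this extraction (notation ours): the raw asymmetry A{sub m} = (N{sub L} {minus} N{sub R})/(N{sub L} + N{sub R}) satisfies A{sub m} = P{sub e} A{sub LR}, and

  $A_{LR} = A_{e} = \frac{2(1 - 4\sin^{2}\theta_{W})}{1 + (1 - 4\sin^{2}\theta_{W})^{2}}$,

so sin{sup 2}{theta}{sub W} {approximately} 0.247 corresponds to A{sub LR} {approximately} 0.02, consistent with the value quoted above.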
Date: August 5, 1992
Creator: Collaboration, SLD
System: The UNT Digital Library
Overview of crash and impact analysis at Lawrence Livermore National Laboratory (open access)

Overview of crash and impact analysis at Lawrence Livermore National Laboratory

This work provides a brief overview of past and ongoing efforts at Lawrence Livermore National Laboratory (LLNL) in the area of finite-element modeling of crash and impact problems. The process has been one of evolution in several respects. One aspect of the evolution has been the continual upgrading and refinement of the DYNA, NIKE, and TOPAZ family of finite-element codes. The major missions of these codes involve problems where the dominant factors are high-rate dynamics, quasi-statics, and heat transfer, respectively. However, analysis of a total event, whether it be a shipping container drop or an automobile/barrier collision, may require the use or coupling of two or more of these codes. Along with refinements in speed, contact capability, and element technology, material model complexity continues to evolve as more detail is demanded from the analyses. A more recent evolution has involved the mix of problems addressed at LLNL and the direction of the technology thrusts. A pronounced increase in collaborative efforts with the civilian and private sector has resulted in a mix of complex problems involving synergism between weapons applications (shipping container, earth penetrator, missile carrier, ship hull damage) and a broader base of problems such as vehicle impacts as discussed …
Date: August 5, 1993
Creator: Logan, R. W. & Tokarz, F. J.
System: The UNT Digital Library
NO{sub x} reduction in pressurized fluidized-bed combustion (open access)

NO{sub x} reduction in pressurized fluidized-bed combustion

Batch combustion experiments were performed in a small bubbling fluidized-bed reactor with the objective of establishing the cause of reduced NO{sub x} emissions from pressurized fluidized bed combustion (PFBC). All variables except for pressure were kept constant in the experiments: fuel batch size, for example, was the same in experiments performed at three pressure levels (0.2, 1 and 2 MPa). Two different types of experiments were conducted: one using air diluted with nitrogen (4.5% O{sub 2}) for the purpose of determining the conversion of fuel N to NO{sub x}, and the other with NO-doped diluted air (800 ppm NO, 4.5% O{sub 2}) for the purpose of determining the reduction of bulk-gas NO{sub x} by the burning fuel. A large excess of combustion air was used in all experiments so as to keep the bulk-gas composition relatively unchanged by combustion products. Six different fuels were studied: a bituminous coal, coke prepared from the same coal, three specialty cokes (one of which contained 10 wt % N) and graphite (0% N). The straight-air combustion experiments showed that the conversion of fuel-N to NO{sub x} dropped with increasing pressure (at constant fuel concentration in the bed). The NO-doped combustion experiments showed significantly increased NO{sub …
Date: November 5, 1992
Creator: Wallman, P. H.; Carlsson, R. C. J. & Leckner, B.
System: The UNT Digital Library
The Savannah River Technology Center environmental monitoring field test platform (open access)

The Savannah River Technology Center environmental monitoring field test platform

Nearly all industrial facilities have been responsible for introducing synthetic chemicals into the environment. The Savannah River Site is no exception. Several areas at the site have been contaminated by chlorinated volatile organic chemicals. Because of the persistence and refractory nature of these contaminants, a complete cleanup of the site will take many years. A major focus of the mission of the Environmental Sciences Section of the Savannah River Technology Center is to develop better, faster, and less expensive methods for characterizing, monitoring, and remediating the subsurface. These new methods can then be applied directly at the Savannah River Site and at other contaminated areas in the United States and throughout the world. The Environmental Sciences Section has hosted field testing of many different monitoring technologies over the past two years, primarily as a result of the Integrated Demonstration Program sponsored by the Department of Energy's Office of Technology Development. This paper provides an overview of some of the technologies that have been demonstrated at the site and briefly discusses the applicability of these techniques.
Date: March 5, 1993
Creator: Rossabi, J.
System: The UNT Digital Library
First principles theory of disordered alloys and alloy phase stability (open access)

First principles theory of disordered alloys and alloy phase stability

These lecture notes review the LDA-KKR-CPA method for treating the electronic structure and energetics of random alloys and the MF-CF and GPM theories of ordering and phase stability built on the LDA-KKR-CPA description of the disordered phase. Section 2 lays out the basic LDA-KKR-CPA theory of random alloys and some applications. Section 3 reviews the progress made in understanding specific ordering phenomena in binary solid solutions based on the MF-CF and GPM theories of ordering and phase stability. Examples are Fermi surface nesting, band filling, off-diagonal randomness, charge transfer, size difference or local strain fluctuations, and magnetic effects; in each case, an attempt is made to link the ordering to the underlying electronic structure of the disordered phase. Section 4 reviews calculations of the electronic structure of {beta}-phase Ni{sub c}Al{sub 1-c} alloys using a version of the LDA-KKR-CPA codes generalized to complex lattices.
Date: June 5, 1993
Creator: Stocks, G. M.; Nicholson, D. M. C. & Shelton, W. A.
System: The UNT Digital Library
Measurements of DT and DD neutron yields by neutron activation on TFTR (open access)

Measurements of DT and DD neutron yields by neutron activation on TFTR

A variety of elemental foils have been activated by neutron fluence from TFTR under conditions with the DT neutron yield per shot ranging from 10{sup 12} to over 10{sup 18}, and with the DT/(DD+DT) neutron ratio varying from 0.5% (from triton burnup) to unity. Linear response over this large dynamic range is obtained by reducing the mass of the foils and increasing the cooling time, all while achieving greatly improved counting statistics. Effects of background gamma-ray lines from foil-capsule-material contaminants, and the resulting lower limits on activation foil mass, have been determined. DT neutron yields from dosimetry standard reactions on aluminum, chromium, iron, nickel, zirconium, and indium are in agreement within the {plus_minus}9% (one-sigma) accuracy of the measurements; also agreeing are yields from silicon foils using the ACTL library cross-section, while the ENDF/B-V library has too low a cross-section. Preliminary results from a variety of other threshold reactions are presented. Use of the {sup 115}In(n,n{prime}){sup 115m}In reaction (0.42 times as sensitive to DT neutrons as to DD neutrons) in conjunction with pure-DT reactions allows a determination of the DT/(DD+DT) ratio in trace tritium or low-power tritium beam experiments.
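Schematically, the ratio determination works as follows (our notation): a pure-DT threshold reaction fixes Y{sub DT}, while the indium activity responds to both components as

  $A_{\rm In} \propto Y_{DD} + 0.42\,Y_{DT}$,

so Y{sub DD} follows by subtraction and the ratio Y{sub DT}/(Y{sub DD} + Y{sub DT}) can be formed.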
Date: May 5, 1994
Creator: Barnes, C. W.; Larson, A. R.; LeMunyan, G. & Loughlin, M. J.
System: The UNT Digital Library
Status of {alpha}{sub s} measurements (open access)

Status of {alpha}{sub s} measurements

I review the current determinations of {alpha}{sub s}. Attention is given to the theoretical uncertainties inherent in most determinations. All current determinations are consistent with an average of {alpha}{sub s}(M{sub Z}) = 0.119{plus_minus}0.005. Prospects for reduction of the errors in the future are discussed.
Date: May 5, 1993
Creator: Hinchliffe, I.
System: The UNT Digital Library
Radiative divertor modeling studies (open access)

Radiative divertor modeling studies

A two-dimensional fluid code called UEDGE is used to simulate the edge plasma in tokamak divertors and to evaluate methods for reducing the heat load on divertor plates by radiating some of the power before it reaches the plates. UEDGE is a fully-implicit code being developed jointly by us, D. A. Knoll and R. B. Campbell. For these studies, UEDGE uses a banded matrix solver and a fixed-fraction impurity model. Work is presently underway with Knoll and Campbell to include a memory-efficient iterative solver and a model of impurity transport. Simulations of the proposed TPX device show that a few percent nitrogen concentration in the scrape-off layer can radiate up to 80% of the divertor power, thus reducing the peak heat flux and electron temperature at the divertor plate to acceptable values. A comparison of the neutral gas distribution from UEDGE with results from the DEGAS Monte Carlo neutrals code confirms the validity of our fluid neutrals model.
Date: May 5, 1993
Creator: Rensink, M. E.; Allen, S. L.; Hill, D. N.; Kaiser, T. B. & Rognlien, T. D.
System: The UNT Digital Library