Time-resolved fast-neutron pinhole camera for studying thermonuclear plasmas (open access)

A fast-neutron pinhole camera with high detection efficiency and nanosecond time resolution has been developed and applied to the investigation of the spatial and temporal distributions of DD- and DT-neutrons produced by thermonuclear plasmas. The pinhole consists of a specially designed 1.15 m long copper collimator with an effective aperture of 1 mm diameter. Several different types of spatially resolving detectors have been used at the image plane: (1) a multi-element scintillation-photomultiplier system consisting of sixty-one individual detectors, used for time-resolved measurements, (2) a scintillation-fiber chamber coupled to a gated image-intensifier tube, used for direct photography of the neutron image, and (3) a propane bubble chamber used for time-integrated recording with a capability to distinguish DD- from DT-neutrons. Pulsed neutron sources with typical dimensions of 1 cm emitting of the order of 10{sup 12} neutrons over a time period of 10-100 nsec have been investigated. A spatial resolution of 1 mm and a time resolution of approximately 10 nsec were achieved in the investigations of dense plasma compression phenomena.
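
Only the 1 mm aperture comes from the abstract above; the minimal Python sketch below is an illustration (not from the report) of the usual thin-pinhole geometric-resolution estimate, with the source-to-pinhole and pinhole-to-detector distances assumed for the example.

    # Minimal sketch (not from the report): geometric resolution of a pinhole imager.
    d_aperture = 1.0e-3   # pinhole diameter [m], from the abstract
    L_object   = 0.5      # source-to-pinhole distance [m] (assumed)
    L_image    = 5.0      # pinhole-to-detector distance [m] (assumed)

    magnification = L_image / L_object
    # Blur of a point source, referred back to the object plane.
    resolution_object = d_aperture * (1.0 + 1.0 / magnification)

    print(f"magnification           : {magnification:.1f}x")
    print(f"object-plane resolution : {resolution_object * 1e3:.2f} mm")
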
Date: February 2, 1976
Creator: Bauer, R. W. & Weingart, R. C.
System: The UNT Digital Library
Electron energy and space charge calculations in reflex diodes (open access)

Previously reported Monte Carlo code calculations of the electron energy distributions and the consequent reflex triode characteristics will be presented for two different anode designs. In addition, a generalized formulation of Poisson's equation will be used to examine the virtual cathode side of a reflex diode. The familiar "resonance" solution for the reflex triode is again found, but with a different physical interpretation. In the former case the current diverges, but in the virtual cathode space the linear dimension diverges as one approaches the "resonance."
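
The generalized formulation itself is in the paper; as a rough illustration of the kind of space-charge calculation involved, the sketch below solves the ordinary one-dimensional Poisson equation across a planar gap for an assumed uniform electron charge density. All numerical values are assumed, not taken from the paper.

    # Rough illustration (assumed values, not the paper's generalized formulation):
    # finite-difference solution of d^2(phi)/dx^2 = -rho/eps0 between grounded plates.
    import numpy as np

    eps0 = 8.854e-12           # vacuum permittivity [F/m]
    gap  = 1.0e-2              # electrode spacing [m] (assumed)
    n    = 201                 # grid points
    x    = np.linspace(0.0, gap, n)
    dx   = x[1] - x[0]

    rho = np.full(n, -1.0e-3)  # assumed electron space-charge density [C/m^3]

    # Tridiagonal Laplacian on interior points, phi = 0 at both electrodes.
    A = (np.diag(np.full(n - 2, -2.0))
         + np.diag(np.ones(n - 3), 1)
         + np.diag(np.ones(n - 3), -1))
    phi = np.zeros(n)
    phi[1:-1] = np.linalg.solve(A, -rho[1:-1] / eps0 * dx**2)

    print(f"minimum potential (space-charge dip): {phi.min():.1f} V")
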
Date: May 2, 1977
Creator: Shearer, J. W.
System: The UNT Digital Library
Transaction oriented minicomputer allows flexible design of the controlled materials information system (open access)

The design of business data processing applications utilizing minicomputers requires special considerations throughout the development of the systems project. Minicomputer features, capabilities, and limitations must be closely examined prior to the implementation of the design phase. The design requirements of an inventory control minicomputer system currently being installed by the Data Processing Services Department of Lawrence Livermore Laboratory are presented.
Date: April 2, 1976
Creator: Jessen, T. D.
System: The UNT Digital Library
Development of high-creep-strength molybdenum and tungsten alloys by the internal nitriding process (open access)

Substantial increases in the high-temperature creep strength of Mo-Hf alloys can be obtained by internal nitriding. The creep resistance of internally nitrided Mo-1.86 wt % Hf is more than 100 times greater than that of other commercially available molybdenum-base alloys. The HfN precipitates appear to be stable over long times at temperatures near 1600 K. Internally nitrided Mo-Hf alloys appear to be good candidates for fabrication of components of space power systems where the ratio of high-temperature strength to weight is important. They are particularly good candidates for components that can be fabricated from the lower-strength unnitrided alloy and subsequently nitrided to provide high-temperature strength.
Date: October 2, 1986
Creator: Mitchell, J. B.
System: The UNT Digital Library
Issues in risk assessment and modifications of the NRC health effects models (open access)

A report, Health Effects Models for Nuclear Power Plant Accident Consequence Analysis, was published by the US Nuclear Regulatory Commission in 1985 and revised in 1989. These reports provided models for estimating health effects that would be expected to result from the radiation exposure received in a nuclear reactor accident. Separate models were given for early occurring effects, late somatic effects, and genetic effects; however, this paper addresses only late somatic effects, or the risk of cancer expected to occur in the lifetimes of exposed individuals. The 1989 revision was prepared prior to the publication of the BEIR V, 1988 UNSCEAR, and ICRP 60 reports. For this reason, an addendum was needed that would provide modified risk models that took into account these recent reports and, more generally, any new evidence that had appeared since the 1989 publication. Of special importance was consideration of updated analyses of the Japanese A-bomb survivor study data based on revised DS86 dosimetry. The process of preparing the addendum required thorough review and evaluation of the models used by the BEIR V, UNSCEAR, and ICRP committees, and also required thorough consideration of the various decisions that must be made in any risk assessment effort. This …
Date: July 2, 1992
Creator: Gilbert, E. S.
System: The UNT Digital Library
Development of Lower Energy Neutron Spectroscopy for Areal Density Measurement in Implosion Experiment at NIF and Omega (open access)

Areal density ({rho}R) is a fundamental parameter that characterizes the performance of an ICF implosion. For high areal densities ({rho}R > 0.1 g/cm{sup 2}), which will be realized in implosion experiments at NIF and LMJ, the target areal density exceeds the stopping range of charged particles and measurements with charged particle spectroscopy will be difficult. In this regime, an areal density measurement method using down-shifted neutron counting is a promising alternative. The probability of neutron scattering in the imploded plasma is proportional to the areal density of the plasma. The spectrum of neutrons scattered by a specific target nucleus has a characteristic low-energy cutoff. This enables separate, simultaneous measurements of fuel and pusher {rho}Rs. To apply this concept in implosion experiments, the detector should have an extremely large dynamic range. Sufficient signal output for low-energy neutrons is also required. A lithium-glass scintillation-fiber plate (LG-SCIFI) is a promising candidate for this application. In this paper we propose a novel technique based on down-shifted neutron measurements with a lithium-glass scintillation-fiber plate. The details of the instrumentation and background estimation with Monte Carlo calculations are reported.
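
The stated proportionality can be made concrete with a back-of-the-envelope estimate (not from the paper): the number of scattering centres per unit area along a neutron's path is {rho}R times Avogadro's number over the mean mass number, so the scattered fraction grows nearly linearly with areal density. The cross section used below is an assumed, illustrative value.

    # Back-of-the-envelope sketch (not from the paper): scattered fraction of
    # primary DT neutrons versus fuel areal density rho*R.
    import math

    N_A   = 6.022e23    # Avogadro's number [1/mol]
    A_bar = 2.5         # mean mass number of equimolar DT fuel [g/mol]
    sigma = 1.0e-24     # assumed average n-D / n-T scattering cross section [cm^2]

    for rhoR in (0.1, 0.3, 1.0):                 # areal density [g/cm^2]
        n_areal = rhoR * N_A / A_bar             # scatterers per cm^2 along the path
        f_scat  = 1.0 - math.exp(-n_areal * sigma)
        print(f"rhoR = {rhoR:4.1f} g/cm^2  ->  scattered fraction ~ {f_scat:.3f}")
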
Date: August 2, 2001
Creator: Isumi, N; Lerche, R A; Phillips, T W; Schmid, G J; Moran, M J & Sangster, T C
System: The UNT Digital Library
Practical Issues Associated with Mortar Projections in Large Deformation Contact/Impact Analysis (open access)

Several recent works have considered variants of the mortar-finite element method for numerical treatment of contact phenomena. The method has shown considerable promise for the spatial discretization of contact interactions, particularly for kinematically linear applications where one or both of the contacting surfaces are flat. Desirable features already demonstrated for the method in this specialized setting include passage of patch tests, preservation of convergence rates that would be obtained with a perfectly conforming mesh, and accurate resolution of contact stresses on interfaces. This paper concerns itself with the successful extension of these methods to encompass contact of geometrically noncoincident surfaces. The issue of patch test passage over curved interfaces will be discussed. It will be shown that a generalization of the mortar projection method is required to pass patch tests in this instance. Issues relating to the exact numerical integration of the mortar projection integrals will also be outlined, and a convergence study for a mortar tying application will be presented.
Date: May 2, 2002
Creator: Laursen, T. A.; Puso, M. A. & Heinstein, M. W.
System: The UNT Digital Library
Non-Equilibrium Zeldovich-Von Neumann-Doring Theory and Reactive Flow Modeling of Detonation (open access)

This paper discusses the Non-Equilibrium Zeldovich - von Neumann - Doring (NEZND) theory of self-sustaining detonation waves and the Ignition and Growth reactive flow model of shock initiation and detonation wave propagation in solid explosives. The NEZND theory identified the non-equilibrium excitation processes that precede and follow the exothermic decomposition of a large high explosive molecule into several small reaction product molecules. The thermal energy deposited by the leading shock wave must be distributed to the vibrational modes of the explosive molecule before chemical reactions can occur. The induction time for the onset of the initial endothermic reactions can be calculated using high pressure, high temperature transition state theory. Since the chemical energy is released well behind the leading shock front of a detonation wave, a physical mechanism is required for this chemical energy to reinforce the leading shock front and maintain its overall constant velocity. This mechanism is the amplification of pressure wavelets in the reaction zone by the process of de-excitation of the initially highly vibrationally excited reaction product molecules. This process leads to the development of the three-dimensional structure of detonation waves observed for all explosives. For practical predictions of shock initiation and detonation in hydrodynamic codes, …
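
As a loose illustration of the induction-time statement above (not the NEZND calculation itself), an Arrhenius-type estimate shows how strongly the induction time depends on the shocked-state temperature. The pre-exponential factor and activation energy below are assumed, illustrative values.

    # Loose illustration (assumed values, not the NEZND calculation): an
    # Arrhenius-type induction time t = (1/A) * exp(Ea / (R*T)).
    import math

    A_freq = 1.0e13        # assumed pre-exponential frequency factor [1/s]
    Ea     = 2.0e5         # assumed activation energy [J/mol]
    R_gas  = 8.314         # gas constant [J/(mol K)]

    for T in (900.0, 1200.0, 1500.0):    # shocked-state temperatures [K]
        t_ind = math.exp(Ea / (R_gas * T)) / A_freq
        print(f"T = {T:6.0f} K  ->  induction time ~ {t_ind:.2e} s")
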
Date: May 2, 2002
Creator: Tarver, C M; Forbes, J W & Urtiew, P A
System: The UNT Digital Library
Rotating Aperture Deuterium Gas Cell Development for High Brightness Neutron Production (open access)

Work is underway at LLNL to design and build a high-brightness mono-energetic source for fast neutron imaging. The approach being pursued will use a 7-MeV deuterium linac for producing high-energy neutrons via a D(d,n){sup 3}He reaction. To achieve a high-brightness neutron source, a windowless rotating aperture gas cell approach is being employed. Using a series of close-tolerance rotor and stator plates, a differential pumping assembly was designed and built that contains up to 3 atmospheres of deuterium gas in a 40-mm-long gas cell. Rarefaction of the gas due to beam-induced heating will be addressed by rapidly moving the gas across the beam channel in a cross flow tube. The design and fabrication process was guided by extensive 3D modeling of the hydrodynamic gas flow and structural dynamics of the assembly. Summaries of the modeling results, the fabrication of the rotating aperture system, and initial measurements of gas leakage are presented.
Date: May 2, 2005
Creator: Rusnak, B.; Hall, J. M. & Shen, S.
System: The UNT Digital Library
Reagentless Real-time Identification of Individual Microorganisms by Bio-Aerosol Mass Spectrometry (open access)

None
Date: March 2, 2004
Creator: Gard, E E
System: The UNT Digital Library
Atomic layer deposition of ZnO on ultra-low-density nanoporous silica aerogel monoliths (open access)

We report on atomic layer deposition of an {approx} 2-nm-thick ZnO layer on the inner surface of ultralow-density ({approx} 0.5% of the full density) nanoporous silica aerogel monoliths with an extremely large effective aspect ratio of {approx} 10{sup 5} (defined as the ratio of the monolith thickness to the average pore size). The resultant monoliths are formed by amorphous-SiO{sub 2}/wurtzite-ZnO nanoparticles which are randomly oriented and interconnected into an open-cell network with an apparent density of {approx} 3% and a surface area of {approx} 100 m{sup 2} g{sup -1}. Secondary ion mass spectrometry and high-resolution transmission electron microscopy imaging reveal excellent uniformity and crystallinity of ZnO coating. Oxygen K-edge and Zn L{sub 3}-edge soft x-ray absorption near-edge structure spectroscopy shows broadened O 2p- as well as Zn 4s-, 5s-, and 3d-projected densities of states in the conduction band.
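
The effective aspect ratio quoted above follows directly from the stated definition; with an assumed millimetre-scale monolith thickness and a roughly 10 nm average pore size (values not given in the abstract), the ratio indeed comes out near 10^5.

    # Illustrative arithmetic (thickness and pore size are assumed values):
    # effective aspect ratio = monolith thickness / average pore size.
    thickness = 1.0e-3    # assumed monolith thickness [m], ~1 mm
    pore_size = 1.0e-8    # assumed average pore size [m], ~10 nm

    print(f"effective aspect ratio ~ {thickness / pore_size:.0e}")   # ~1e5, as quoted
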
Date: September 2, 2004
Creator: Kucheyev, S O; Biener, J; Wang, Y M; Baumann, T F; Wu, K J; van Buuren, T et al.
System: The UNT Digital Library
Effective Communication and File-I/O Bandwidth Benchmarks (open access)

We describe the design and MPI implementation of two benchmarks created to characterize the balanced system performance of high-performance clusters and supercomputers: b_eff, which examines the parallel message-passing performance of a system, and b_eff_io, which characterizes the effective I/O bandwidth. Both benchmarks have two goals: (a) to get a detailed insight into the performance strengths and weaknesses of different parallel communication and I/O patterns, and, based on this, (b) to obtain a single bandwidth number that characterizes the average performance of the system, namely communication and I/O bandwidth. Both benchmarks use a time-driven approach and loop over a variety of communication and access patterns to characterize a system in an automated fashion. Results of the two benchmarks are given for several systems including IBM SPs, Cray T3E, NEC SX-5, and Hitachi SR8000. After a redesign of b_eff_io, I/O bandwidth results for several compute partition sizes are achieved in an appropriate time for rapid benchmarking.
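
A minimal mpi4py sketch in the spirit of the benchmarks described above (not the actual b_eff code): it times a ping-pong exchange over a range of message sizes on two ranks and condenses the results into a single average-bandwidth figure. It would be launched with something like mpirun -n 2 python pingpong.py (filename hypothetical).

    # Minimal sketch in the spirit of b_eff (not the actual benchmark code):
    # time ping-pong exchanges over several message sizes on two ranks and
    # reduce the results to a single average-bandwidth number.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    sizes = [2**k for k in range(10, 21)]        # 1 KiB ... 1 MiB messages
    reps  = 50
    bandwidths = []

    for n in sizes:
        buf = np.zeros(n, dtype=np.uint8)
        comm.Barrier()
        t0 = MPI.Wtime()
        for _ in range(reps):
            if rank == 0:
                comm.Send(buf, dest=1)
                comm.Recv(buf, source=1)
            elif rank == 1:
                comm.Recv(buf, source=0)
                comm.Send(buf, dest=0)
        elapsed = MPI.Wtime() - t0
        bandwidths.append(2.0 * reps * n / elapsed)   # bytes moved per second

    if rank == 0:
        print(f"average ping-pong bandwidth: {np.mean(bandwidths) / 1e6:.1f} MB/s")
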
Date: May 2, 2001
Creator: Koniges, A E & Rabenseifner, R
System: The UNT Digital Library
Characterization of an Effective Cleaning Procedure for Aluminum Alloys: Surface Enhanced Raman Spectroscopy and Zeta Potential Analysis (open access)

We have developed a cleaning procedure for aluminum alloys for effective minimization of surface-adsorbed sub-micron particles and non-volatile residue. The procedure consists of a phosphoric acid etch followed by an alkaline detergent wash. To better understand the mechanism whereby this procedure reduces surface contaminants, we characterized the aluminum surface as a function of cleaning step using Surface Enhanced Raman Spectroscopy (SERS). SERS indicates that phosphoric acid etching re-establishes a surface oxide of different characteristics, including deposition of phosphate and increased hydration, while the subsequent alkaline detergent wash appears to remove the phosphate and modify the new surface oxide, possibly leading to a more compact surface oxide. We also studied the zeta potential of <5 micron pure aluminum and aluminum alloy 6061-T6 particles to determine how surface electrostatics may be affected during the cleaning process. The particles show a decrease in the magnitude of their zeta potential in the presence of detergent, and this effect is most pronounced for particles that have been etched with phosphoric acid. This reduction in magnitude of the surface attractive potential is in agreement with our observation that the phosphoric acid etch followed by detergent wash results in a decrease in surface-adsorbed sub-micron particulates.
Date: June 2, 2004
Creator: Cherepy, N J; Shen, T H; Esposito, A P & Tillotson, T M
System: The UNT Digital Library
A summary of the results from LASS (Large Aperture Superconducting Solenoid) and the future of strange quark spectroscopy (open access)

A brief summary is presented of results pertinent to quark spectroscopy derived from high statistics data on K{sup {minus}}p interactions obtained with the LASS spectrometer at SLAC. The present status of strange meson spectroscopy is briefly reviewed, and the impact of the proposed KAON Factory on the future of the subject is considered. 36 refs., 24 figs.
Date: May 2, 1990
Creator: Aston, D.; Bienz, T.; Bird, T.; Dunwoodie, W.; Johnson, W.; Kunz, P. et al.
System: The UNT Digital Library
Proposed Mission Concept for the Astrophysical Plasma-dynamic Explorer (APEX): An EUV High-Resolution Spectroscopic SMEX (open access)

EUVE and the ROSAT WFC have left a tremendous legacy in astrophysics at EUV wavelengths. More recently, Chandra and XMM-Newton have demonstrated at X-ray wavelengths the power of high-resolution astronomical spectroscopy, which allows the identification of weak emission lines, the measurement of Doppler shifts and line profiles, and the detection of narrow absorption features. This leads to a complete understanding of the density, temperature, abundance, magnetic, and dynamic structure of astrophysical plasmas. However, the termination of the EUVE mission has left a gaping hole in spectral coverage at crucial EUV wavelengths ({approx}100-300 Angstroms), where hot (10{sup 5}-10{sup 8} K) plasmas radiate most strongly and produce critical spectral diagnostics. CHIPS will fill this hole only partially as it is optimized for diffuse emission and has only moderate resolution (R{approx}150). For discrete sources, we have successfully flown a follow-on instrument to the EUVE spectrometer (A{sub eff} {approx}1 cm{sup 2}, R {approx}400), the high-resolution spectrometer J-PEX (A{sub eff} {approx}3 cm{sup 2}, R {approx}3000). Here we build on the J-PEX prototype and present a strawman design for an orbiting spectroscopic observatory, APEX, a SMEX-class instrument containing a suite of 8 spectrometers that together achieve both high effective area (A{sub eff}>20 cm{sup 2}) and high …
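
The resolving powers quoted above translate directly into velocity resolution for Doppler measurements via delta_v = c / R. The short sketch below (not from the proposal) compares the instruments mentioned, using only the R values given in the abstract.

    # Simple illustration using only the R values quoted in the abstract:
    # velocity resolution of a spectrograph is roughly delta_v = c / R.
    c_km_s = 2.998e5    # speed of light [km/s]

    for name, R in (("CHIPS", 150), ("EUVE spectrometer", 400), ("J-PEX", 3000)):
        print(f"{name:18s} R = {R:5d}  ->  delta_v ~ {c_km_s / R:7.1f} km/s")
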
Date: July 2, 2004
Creator: Kowalski, M P
System: The UNT Digital Library
Computational Simulations of High Intensity X-Ray Matter Interaction (open access)

Free electron lasers have the promise of producing extremely high-intensity short pulses of coherent, monochromatic radiation in the 1-10 keV energy range. For example, the Linac Coherent Light Source at Stanford is being designed to produce an output intensity of 2 x 10{sup 14} W/cm{sup 2} in a 230 fs pulse. These sources will open the door to many novel research studies. However, the intense x-ray pulses may damage the optical components necessary for studying and controlling the output. At the full output intensity, the dose to optical components at normal incidence ranges from 1-10 eV/atom for low-Z materials (Z < 14) at photon energies of 1 keV. It is important to have an understanding of the effects of such high doses in order to specify the composition, placement, and orientation of optical components, such as mirrors and monochromators. Doses of 10 eV/atom are certainly unacceptable since they will lead to ablation of the surface of the optical components. However, it is not precisely known what the damage thresholds are for the materials being considered for optical components for x-ray free electron lasers. In this paper, we present analytic estimates and computational simulations of the effects of high-intensity x-ray pulses …
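
The quoted dose range can be reproduced to order of magnitude with a simple surface-dose estimate (not the paper's simulations): fluence times the mass absorption coefficient gives the energy deposited per gram in the first absorption length, which converts to eV per atom. The beryllium absorption coefficient below is an assumed, illustrative value; only the intensity, pulse length, and photon energy come from the abstract.

    # Order-of-magnitude sketch (not the paper's simulations): surface dose per
    # atom for a low-Z optic at normal incidence, one full-intensity pulse.
    intensity = 2.0e14     # W/cm^2, from the abstract
    pulse     = 230e-15    # s, from the abstract
    mu_rho    = 6.0e2      # cm^2/g, assumed photoabsorption of Be near 1 keV
    A_molar   = 9.012      # g/mol, beryllium
    N_A       = 6.022e23   # atoms/mol
    eV        = 1.602e-19  # J

    fluence   = intensity * pulse                  # J/cm^2 per pulse
    dose_J_g  = fluence * mu_rho                   # energy absorbed per gram at the surface
    dose_atom = dose_J_g * (A_molar / N_A) / eV    # eV per atom

    print(f"fluence       : {fluence:.0f} J/cm^2")
    print(f"dose per atom : {dose_atom:.1f} eV/atom")   # falls in the 1-10 eV/atom range quoted
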
Date: August 2, 2001
Creator: London, R. A.; Rionta, R.; Tatchyn, R. & Roessler, S.
System: The UNT Digital Library
A 3D Contact Smoothing Method (open access)

Smoothing of contact surfaces can be used to eliminate the chatter typically seen with node-on-facet contact and to give a better representation of the actual contact surface. The latter effect is well demonstrated for problems with interference fits. In this work we present two methods for the smoothing of contact surfaces for 3D finite element contact. In the first method, we employ Gregory patches to smooth the faceted surface in a node-on-facet implementation. In the second method, we employ a Bezier interpolation of the faceted surface in a mortar method implementation of contact. As is well known, node-on-facet approaches can exhibit locking due to the failure of the Babuska-Brezzi condition and in some instances fail the patch test. The mortar method implementation is stable and provides optimal convergence in the energy norm of the error. In this work we demonstrate the superiority of the smoothed versus the non-smoothed node-on-facet implementations. We also show where the node-on-facet method fails and some results from the smoothed mortar method implementation.
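
As a small illustration of the smooth interpolant the second method relies on (not the authors' implementation), the sketch below evaluates a bicubic Bezier patch from a 4x4 control net by repeated de Casteljau reduction; the control-point geometry is assumed for the example.

    # Minimal sketch (not the authors' contact implementation): evaluate a
    # bicubic Bezier patch at parametric coordinates (u, v).
    import numpy as np

    def de_casteljau(points, t):
        """One-dimensional de Casteljau reduction of a set of control points."""
        pts = np.array(points, dtype=float)
        while len(pts) > 1:
            pts = (1.0 - t) * pts[:-1] + t * pts[1:]
        return pts[0]

    def bezier_patch(ctrl, u, v):
        """ctrl: 4x4x3 array of control points; returns the surface point S(u, v)."""
        curves = [de_casteljau(row, u) for row in ctrl]   # reduce each row in u
        return de_casteljau(curves, v)                    # then reduce across rows in v

    # A gently curved 4x4 control net (assumed, illustrative geometry).
    ctrl = np.array([[[i, j, 0.1 * np.sin(i + j)] for j in range(4)]
                     for i in range(4)], dtype=float)
    print(bezier_patch(ctrl, 0.5, 0.5))   # smoothed surface point at the patch centre
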
Date: May 2, 2002
Creator: Puso, M. A. & Laursen, T. A.
System: The UNT Digital Library
Thermal Cook-off of an HMX Based Explosive: Pressure Gauge Experiments and Modeling (open access)

Safety issues related to thermal cook-off are important for handling and storing explosive devices. The violence of the event as a function of confinement is important for predicting collateral events. There are major issues that require an understanding of the following: (1) the transition to detonation of a pressure wave from a cook-off event, (2) changes in the sensitivity of HMX-based explosives with thermally induced phase transitions, and (3) the potential danger of neighboring explosive devices being affected by a cook-off reaction. Results from cook-off events of known size, confinement, and thermal history allow for developing and/or calibrating computer models for calculating events that are difficult to measure experimentally.
Date: April 2, 2002
Creator: Urtiew, P A; Forbes, J W; Tarver, C M; Garcia, F; Greenwood, D W & Vandersall, K S
System: The UNT Digital Library
Uranus in 2003: Zonal Winds, Banded Structure, and Discrete Features (open access)

None
Date: February 2, 2005
Creator: Hammel, H B; de Pater, I; Gibbard, S G; Lockwood, G W & Rages, K
System: The UNT Digital Library
Multiscale modeling of MEMS dynamics and failure (open access)

This work studies multiscale phenomena in silicon micro-resonators which comprise the mechanical components of next-generation Micro-Electro-Mechanical Systems (MEMS). Unlike their larger relatives, these sub-micron MEMS are not described well by conventional continuum models and finite elements; their behavior is determined appreciably by the interplay between physics at the Angstrom, nanometer, and micron scales. As device sizes are reduced below the micron scale, atomistic processes cause systematic deviations from the behavior predicted by conventional continuum elastic theory [1]. These processes cause anomalous surface effects in the resonator frequency and quality factor, even for single-crystal devices with clean surfaces, due to thermal fluctuations. The simulation of these atomistic effects is a challenging problem due to the large number of atoms involved and the fact that they are finite-temperature phenomena. Our simulations include up to two million atoms in the device itself, and hundreds of millions more are in the proximal regions of the substrate. A direct atomistic simulation of the motion of this many atoms would be prohibitive and inefficient. The micron-scale processes in the substrate are well described by finite elements, and an atomistic simulation is not required. On the other hand, atomistic …
Date: October 2, 2000
Creator: Rudd, R E
System: The UNT Digital Library
Creating Ensembles of Decision Trees Through Sampling (open access)

Recent work in classification indicates that significant improvements in accuracy can be obtained by growing an ensemble of classifiers and having them vote for the most popular class. This paper focuses on ensembles of decision trees that are created with a randomized procedure based on sampling. Randomization can be introduced by using random samples of the training data (as in bagging or arcing) and running a conventional tree-building algorithm, or by randomizing the induction algorithm itself. The objective of this paper is to describe our first experiences with a novel randomized tree induction method that uses a subset of samples at a node to determine the split. Our empirical results show that ensembles generated using this approach yield results that are competitive in accuracy and superior in computational cost.
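
A minimal scikit-learn sketch of the bagging variant named above (not the authors' node-level sampling method): bootstrap samples of the training data feed a conventional tree learner, and the ensemble votes. The synthetic dataset is used only for illustration.

    # Minimal sketch of the bagging variant named in the abstract, not the
    # authors' node-level sampling method.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    single = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
    bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                               bootstrap=True, random_state=0).fit(X_tr, y_tr)

    print(f"single tree accuracy : {single.score(X_te, y_te):.3f}")
    print(f"bagged ensemble      : {bagged.score(X_te, y_te):.3f}")
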
Date: February 2, 2001
Creator: Kamath, C & Cantu-Paz, E
System: The UNT Digital Library
NLO QCD Corrections to Hadronic Higgs Production with Heavy Quarks (open access)

The production of a Higgs boson in association with a pair of t{bar t} or b{bar b} quarks plays a very important role at both the Tevatron and the Large Hadron Collider. The theoretical prediction of the corresponding cross sections has been improved by including the complete next-to-leading order QCD corrections. After a brief description of the most relevant technical aspects of the calculation, we review the results obtained for both the Tevatron and the Large Hadron Collider.
Date: July 2, 2003
Creator: Dawson, S.; Jackson, C.; Orr, L.; Reina, L. & Wacheroth, D.
System: The UNT Digital Library
Massive Star Formation in a Gravitationally-Lensed H II Galaxy at z = 3.357 (open access)

The Lynx arc, with a redshift of 3.357, was discovered during spectroscopic follow-up of the z = 0.570 cluster RX J0848+4456 from the ROSAT Deep Cluster Survey. The arc is characterized by a very red R - K color and strong, narrow emission lines. Analysis of HST WFPC 2 imaging and Keck optical and infrared spectroscopy shows that the arc is an H II galaxy magnified by a factor of {approx} 10 by a complex cluster environment. The high intrinsic luminosity, the emission line spectrum, the absorption components seen in Ly{alpha} and C IV, and the restframe ultraviolet continuum are all consistent with a simple H II region model containing {approx} 10{sup 6} hot O stars. The best fit parameters for this model imply a very hot ionizing continuum (T{sub BB} {approx} 80, 000 K), high ionization parameter (log U {approx} -1), and low nebular metallicity (Z/Z{sub {circle_dot}} {approx} 0.05). The narrowness of the emission lines requires a low mass-to-light ratio for the ionizing stars, suggestive of an extremely low metallicity stellar cluster. The apparent overabundance of silicon in the nebula could indicate enrichment by past pair instability supernovae, requiring stars more massive than {approx}140M{sub {circle_dot}}.
Date: March 2, 2004
Creator: Villar-Martin, M.; Stern, D.; Hook, R. N.; Rosati, P.; Lombardi, M.; Humphrey, A. et al.
System: The UNT Digital Library
The K-selected Butcher-Oemler Effect (open access)

We investigate the Butcher-Oemler effect using samples of galaxies brighter than observed frame K* + 1.5 in 33 clusters at 0.1 {approx}< z {approx}< 0.9. We attempt to duplicate as closely as possible the methodology of Butcher & Oemler. Apart from selecting in the K-band, the most important difference is that we use a brightness limit fixed at 1.5 magnitudes below an observed frame K* rather than the nominal limit of rest frame M(V ) = -20 used by Butcher & Oemler. For an early type galaxy at z = 0.1 our sample cutoff is 0.2 magnitudes brighter than rest frame M(V ) = -20, while at z = 0.9 our cutoff is 0.9 magnitudes brighter. If the blue galaxies tend to be faint, then the difference in magnitude limits should result in our measuring lower blue fractions. A more minor difference from the Butcher & Oemler methodology is that the area covered by our galaxy samples has a radius of 0.5 or 0.7 Mpc at all redshifts rather than R{sub 30}, the radius containing 30% of the cluster population. In practice our field sizes are generally similar to those used by Butcher & Oemler. We find the fraction of …
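
A schematic numpy sketch (not the authors' pipeline) of a Butcher-Oemler-style blue fraction: select members brighter than the K* + 1.5 limit used above and count those bluer than the red sequence by some threshold. The colour threshold, the value of K*, and the toy catalogue are all assumed for illustration.

    # Schematic sketch (not the authors' pipeline): a Butcher-Oemler-style
    # blue fraction from a toy member catalogue.
    import numpy as np

    rng = np.random.default_rng(0)

    K_star   = 16.0              # assumed observed-frame K* for the cluster
    K_limit  = K_star + 1.5      # sample limit, as in the abstract
    red_seq  = 4.0               # assumed red-sequence colour (e.g. R - K)
    blue_cut = 0.5               # assumed "bluer than the ridge line by" threshold

    # Toy member catalogue: apparent K magnitudes and colours.
    K_mag  = rng.normal(K_star + 0.5, 1.0, 300)
    colour = red_seq - np.abs(rng.normal(0.0, 0.6, 300))

    bright = K_mag < K_limit
    blue   = colour < red_seq - blue_cut
    print(f"blue fraction f_b = {np.sum(bright & blue) / np.sum(bright):.2f}")
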
Date: March 2, 2004
Creator: Stanford, S. A.; De Propris, R.; Dickinson, M. & Eisenhardt, P. R.
System: The UNT Digital Library