Slow Nonradiative Decay for Rare Earths in KPb2Br5 and RbPb2Br5 (open access)

Slow Nonradiative Decay for Rare Earths in KPb2Br5 and RbPb2Br5

We report on spectroscopic investigations of Nd³⁺- and Tb³⁺-doped low-phonon-energy, moisture-resistant host crystals, KPb₂Br₅ and RbPb₂Br₅, and their potential to serve as new solid-state laser materials at new wavelengths, especially in the long-wavelength infrared region. This includes emission spectra, emission lifetime measurements, and Raman scattering spectra, as well as calculations of the multiphonon decay rate, radiative lifetimes, and quantum efficiencies for relevant (laser) transitions in these crystals.
Date: February 27, 2004
Creator: Rademaker, K.; Petermann, K.; Huber, G.; Krupke, W.; Page, R.; Payne, S. et al.
Object Type: Article
System: The UNT Digital Library
Surface Figure Metrology for CELT Primary Mirror Segments (open access)

Surface Figure Metrology for CELT Primary Mirror Segments

The University of California and California Institute of Technology are currently studying the feasibility of building a 30-m segmented ground-based optical telescope called the California Extremely Large Telescope (CELT). The early ideas for this telescope were first described by Nelson and Mast and more recently refined by Nelson. In parallel, concepts for the fabrication of the primary segments were proposed by Mast, Nelson, and Sommargren, where high-risk technologies were identified. One of these was the surface figure metrology needed for fabricating the aspheric mirror segments. This report addresses the advanced interferometry that will be needed to achieve 15 nm rms accuracy for mirror segments with aspheric departures as large as 35 mm peak-to-valley. For reasons of cost, size, measurement consistency, and ease of operation, we believe it is desirable to have a single interferometer that can be universally applied to each and every mirror segment. Such an instrument is described in this report.
Date: February 27, 2001
Creator: Sommargren, G; Phillion, D; Seppala, L & Lerner, S
Object Type: Report
System: The UNT Digital Library
Very Fine Aerosols from the World Trade Center Collapse Piles: Anaerobic Incineration (open access)

Very Fine Aerosols from the World Trade Center Collapse Piles: Anaerobic Incineration

By September 14, three days after the initial World Trade Center collapse, efforts at fire suppression and heavy rainfall had extinguished the immediate surface fires. From then until roughly mid-December, the collapse piles continuously emitted an acrid smoke and fume in the smoldering phase of the event. Knowledge of the sources, nature, and concentration of these aerosols is important for evaluation and alleviation of the health effects on workers and nearby residents. In this paper, we build on our earlier work to ascribe these aerosols to similar processes that occur in urban incinerators. The simultaneous presence of finely powdered (circa 5 µm) and highly basic (pH 11 to 12) cement dust and high levels of very fine (< 0.25 µm) sulfuric acid fumes helps explain observed health impacts. The unprecedented levels of several metals in the very fine mode can be tied to liberation of those metals that are both present in elevated concentrations in the debris and have depressed volatility temperatures caused by the presence of organic materials and chlorine.
Date: February 27, 2004
Creator: Cahill, Thomas A.; Cliff, Steven S.; Shackelford, James; Meier, Michael; Dunlap, Michael; Perry, Kevin D. et al.
Object Type: Article
System: The UNT Digital Library
Modeling of Thermal Convection of Liquid TNT for Cookoff (open access)

Modeling of Thermal Convection of Liquid TNT for Cookoff

The objective is to computationally model thermal convection of liquid TNT in a heated cylindrical container for what are called 'cookoff' experiments. Our goal is to capture the thermal convection coupled to the heat transfer in the surrounding container. We will present computational results that validate the functionality of the model, numerical strategy, and computer code for a model problem with Rayleigh number of O(10⁶). We solve the problem of thermal convection between two parallel plates in this turbulent flow regime and show that the three-dimensional computations are in excellent agreement with experiment.
Date: February 27, 2003
Creator: McCallen, R; Dunn, T; Nichols, A; Reaugh, J & McClelland, M
Object Type: Article
System: The UNT Digital Library
A Multiresolution Image Cache for Volume Rendering (open access)

A Multiresolution Image Cache for Volume Rendering

The authors discuss the techniques and implementation details of the shared-memory image caching system for volume visualization and iso-surface rendering. One of the goals of the system is to decouple image generation from image display. This is done by maintaining a set of impostors for interactive display while the production of the impostor imagery is performed by a set of parallel, background processes. The system introduces a caching basis that is free of the gap/overlap artifacts of earlier caching techniques. Instead of placing impostors at fixed, pre-defined positions in world space, the technique is to adaptively place impostors relative to the camera viewpoint. The positions translate with the camera but stay aligned to the data; i.e., the positions translate, but do not rotate, with the camera. The viewing transformation is factored into a translation transformation and a rotation transformation. The impostor imagery is generated using just the translation transformation, and visible impostors are displayed using just the rotation transformation. Displayed image quality is improved by increasing the number of impostors, and the frequency with which impostors are re-rendered is increased by decreasing the number of impostors.
Date: February 27, 2003
Creator: LaMar, E. & Pascucci, V.
Object Type: Article
System: The UNT Digital Library
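The camera-relative, data-aligned impostor placement described in the abstract above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the helper name `impostor_origins` and the uniform grid spacing are assumptions of ours. Snapping the camera position to a grid fixed in data space makes impostor positions translate with the camera without ever rotating with it.

```python
def impostor_origins(camera_pos, spacing, extent=1):
    """Place impostors on a data-aligned grid around the camera.

    camera_pos : (x, y, z) camera position in world space
    spacing    : grid cell size for impostor placement (assumed uniform)
    extent     : number of grid cells to cover on each side of the camera

    The grid cell containing the camera moves as the camera translates,
    but the returned positions remain axis-aligned to the data, mirroring
    the factoring of the view transform into a translation part (used to
    generate impostor imagery) and a rotation part (used for display).
    """
    cx, cy, cz = (round(c / spacing) for c in camera_pos)
    return [((cx + i) * spacing, (cy + j) * spacing, (cz + k) * spacing)
            for i in range(-extent, extent + 1)
            for j in range(-extent, extent + 1)
            for k in range(-extent, extent + 1)]
```

Rotating the camera in place leaves `impostor_origins` unchanged, so cached impostor imagery stays valid under rotation and only needs re-rendering after sufficient translation.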
The Moment Condensed History Algorithm for Monte Carlo Electron Transport Simulations (open access)

The Moment Condensed History Algorithm for Monte Carlo Electron Transport Simulations

We introduce a new Condensed History algorithm for the Monte Carlo simulation of electron transport. To obtain more accurate simulations, the new algorithm preserves the mean position and the variance in the mean position exactly for electrons that have traveled a given path length and are traveling in a given direction. This is accomplished by deriving the zeroth-, first-, and second-order spatial moments of the Spencer-Lewis equation and employing this information directly in the Condensed History process. Numerical calculations demonstrate the advantages of our method over standard Condensed History methods.
Date: February 27, 2001
Creator: Tolar, D R & Larsen, E W
Object Type: Article
System: The UNT Digital Library
Dislocation-Defect Interactions in Materials (open access)

Dislocation-Defect Interactions in Materials

In order to develop predictive models of the mechanical response of irradiated materials it is necessary to understand the fundamental physical processes controlling the deformation. This is particularly important near yielding where local defect interactions may dominate the behavior. Dislocation-defect interactions in copper containing various densities and distributions of stacking-fault tetrahedra and small dislocation loops were examined dynamically using the in-situ TEM straining technique. Defect annihilation mechanisms as well as the conditions required to produce defect-free channels are proposed. The experimental results are compared to atomistic simulations and continuum mechanics calculations of unit interactions. Based on these observations, an improved continuum model of the mechanical behavior of irradiated materials is presented.
Date: February 27, 2003
Creator: Robach, J. S.; Robertson, I. M.; Ahn, D. C.; Sofronis, P.; Wirth, B. D. & Arsenlis, T.
Object Type: Article
System: The UNT Digital Library
A Short Survey of Document Structure Similarity Algorithms (open access)

A Short Survey of Document Structure Similarity Algorithms

This paper provides a brief survey of document structural similarity algorithms, including the optimal Tree Edit Distance algorithm and various approximation algorithms. The approximation algorithms include the simple weighted tag similarity algorithm, Fourier transforms of the structure, and a new application of the shingle technique to structural similarity. We show three surprising results. First, the Fourier transform technique proves to be the least accurate of the approximation algorithms, while also being the slowest. Second, optimal Tree Edit Distance algorithms may not be the best technique for clustering pages from different sites. Third, the simplest approximation to structure may be the most effective and efficient mechanism for many applications.
Date: February 27, 2004
Creator: Buttler, D
Object Type: Article
System: The UNT Digital Library
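The shingle technique mentioned in the abstract above can be sketched as follows. This is a generic illustration of shingling applied to tag sequences, under our own assumptions (function names, shingle length k=4, and Jaccard similarity as the set comparison); the paper's exact formulation may differ.

```python
def tag_shingles(tags, k=4):
    """All contiguous k-grams ('shingles') of a document's tag sequence."""
    return {tuple(tags[i:i + k]) for i in range(len(tags) - k + 1)}

def structural_similarity(tags_a, tags_b, k=4):
    """Jaccard similarity of two shingle sets: 1.0 means identical structure."""
    a, b = tag_shingles(tags_a, k), tag_shingles(tags_b, k)
    if not a and not b:
        return 1.0  # two documents too short to shingle are treated as alike
    return len(a & b) / len(a | b)
```

Because shingling reduces each document to a set, comparing two documents costs a set intersection rather than the polynomial-time dynamic program required by Tree Edit Distance, which is one reason simple approximations can be attractive for clustering at scale.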
Statistical Scalability Analysis of Communication Operations in Distributed Applications (open access)

Statistical Scalability Analysis of Communication Operations in Distributed Applications

Current trends in high performance computing suggest that users will soon have widespread access to clusters of multiprocessors with hundreds, if not thousands, of processors. This unprecedented degree of parallelism will undoubtedly expose scalability limitations in existing applications, where scalability is the ability of a parallel algorithm on a parallel architecture to effectively utilize an increasing number of processors. Users will need precise and automated techniques for detecting the cause of limited scalability. This paper addresses this dilemma. First, we argue that users face numerous challenges in understanding application scalability: managing substantial amounts of experiment data, extracting useful trends from this data, and reconciling performance information with their application's design. Second, we propose a solution to automate this data analysis problem by applying fundamental statistical techniques to scalability experiment data. Finally, we evaluate our operational prototype on several applications, and show that statistical techniques offer an effective strategy for assessing application scalability. In particular, we find that non-parametric correlation of the number of tasks to the ratio of the time for individual communication operations to overall communication time provides a reliable measure for identifying communication operations that scale poorly.
Date: February 27, 2001
Creator: Vetter, J S & McCracken, M O
Object Type: Article
System: The UNT Digital Library
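The non-parametric correlation measure described in the abstract above can be illustrated with a small, self-contained sketch of Spearman rank correlation (the standard non-parametric choice; the paper does not name its exact statistic in this abstract, so treat the specifics here as assumptions). A communication operation whose share of total communication time grows monotonically with the task count yields a correlation near +1, flagging it as scaling poorly.

```python
def ranks(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical experiment data: task counts vs. the fraction of total
# communication time spent in one collective operation.
tasks = [2, 4, 8, 16, 32]
time_fraction = [0.05, 0.09, 0.18, 0.33, 0.52]
poorly_scaling = spearman(tasks, time_fraction)  # near +1.0
```

Being rank-based, the measure is insensitive to the (often non-linear) shape of the growth, which makes it robust across the heterogeneous timing data collected from scalability experiments.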
Beryllium Materials for National Ignition Facility Targets LDRD Final Report (open access)

Beryllium Materials for National Ignition Facility Targets LDRD Final Report

The National Ignition Facility (NIF) will require spherical ignition capsules approximately 2 mm in diameter with a 120- to 150-µm-thick ablator. Beryllium-based alloys are promising candidates for an ablator material due to their combination of low opacity and relatively high density (compared to polymer coatings). For optimum performance, the Be-coated capsules require a smooth surface finish, uniform thickness, microscopic homogeneity, and preferably high strength. The coatings must contain on the order of 1 at.% of a high-Z dopant (such as Cu) and permit the capsule to be filled with fuel, which will be a mixture of hydrogen isotopes. These demanding requirements can be met through a synthesis method with a focus on the control of microstructure. In our experiments, the sputter deposition process has been manipulated so as to decrease the grain size, thereby reducing roughness and improving homogeneity. The material properties of sputter-deposited coatings are sensitive to their microstructure and growth morphology. To meet the requirements for Be-coated capsules, the goal of this project was to optimize the microstructure and growth morphology through the control of deposition process parameters. Prior experimental studies of evaporation and sputter deposition revealed that the grain size of 99.8 at.% pure Be can …
Date: February 27, 2001
Creator: McEachern, R L
Object Type: Report
System: The UNT Digital Library
Small Animal Radionuclide Imaging With Focusing Gamma-Ray Optics (open access)

Small Animal Radionuclide Imaging With Focusing Gamma-Ray Optics

Significant effort currently is being devoted to the development of noninvasive imaging systems that allow in vivo assessment of biological and biomolecular interactions in mice and other small animals. While physiological function in small animals can be localized and imaged using conventional radionuclide imaging techniques such as single-photon emission tomography (SPECT) and positron emission tomography (PET), these techniques inherently are limited to spatial resolutions of 1-2 mm. For this reason, we are developing a small animal radionuclide imaging system (SARIS) using grazing incidence optics to focus gamma-rays emitted by ¹²⁵I and other radiopharmaceuticals. We have developed a prototype optic with sufficient accuracy and precision to focus the 27.5 keV photons from ¹²⁵I onto a high-resolution imaging detector. Experimental measurements from the prototype have demonstrated that the optic can focus X-rays from a microfocus X-ray tube to a spot having physical dimensions (approximately 1500 microns half-power diameter) consistent with those predicted by theory. Our theoretical and numerical analysis also indicates that an optic can be designed and built that ultimately can achieve 100 µm spatial resolution with sufficient efficiency to perform in vivo single photon emission imaging studies in small animals.
Date: February 27, 2004
Creator: Hill, R; Decker, T; Epstein, M; Ziock, K; Pivovaroff, M J; Craig, W W et al.
Object Type: Article
System: The UNT Digital Library
Using Graphs for Fast Error Term Approximation of Time-varying Datasets (open access)

Using Graphs for Fast Error Term Approximation of Time-varying Datasets

We present a method for the efficient computation and storage of approximations of error tables used for error estimation of a region between different time steps in time-varying datasets. The error between two time steps is defined as the distance between the data of these time steps. Error tables are used to look up the error between different time steps of a time-varying dataset, especially when run time error computation is expensive. However, even the generation of error tables itself can be expensive. For n time steps, the exact error look-up table (which stores the error values for all pairs of time steps in a matrix) has a memory complexity and pre-processing time complexity of O(n²), and O(1) for error retrieval. Our approximate error look-up table approach uses trees, where the leaf nodes represent original time steps, and interior nodes contain an average (or best-representative) of the children nodes. The error computed on an edge of a tree describes the distance between the two nodes on that edge. Evaluating the error between two different time steps requires traversing a path between the two leaf nodes, and accumulating the errors on the traversed edges. For n time steps, this scheme has …
Date: February 27, 2003
Creator: Nuber, C; LaMar, E C; Pascucci, V; Hamann, B & Joy, K I
Object Type: Article
System: The UNT Digital Library
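The tree-based error table sketched in the abstract above can be illustrated on scalar data. This is a toy version under our own assumptions: each time step is a single number, the "error" is an absolute difference, interior nodes store the average of their children, and node/function names are ours. The paper's nodes hold full (or best-representative) datasets, but the path-accumulation idea is the same: O(n) stored edge errors replace the O(n²) pairwise table.

```python
def build_tree(values):
    """Build a bottom-up tree over time steps; return the leaf nodes.

    Each node is a dict with its representative 'value', a 'parent' link,
    and 'edge', the error on the edge to its parent."""
    level = [{"value": v, "parent": None, "edge": 0.0} for v in values]
    leaves = list(level)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            a, b = level[i], level[i + 1]
            parent = {"value": (a["value"] + b["value"]) / 2,
                      "parent": None, "edge": 0.0}
            for child in (a, b):
                child["parent"] = parent
                child["edge"] = abs(child["value"] - parent["value"])
            nxt.append(parent)
        if len(level) % 2:          # odd node is promoted unchanged
            nxt.append(level[-1])
        level = nxt
    return leaves

def approx_error(leaves, i, j):
    """Sum the edge errors on the leaf-to-leaf path between time steps i, j."""
    path_i, node, acc = {}, leaves[i], 0.0
    while node is not None:         # record cumulative error from leaf i up
        path_i[id(node)] = acc
        acc += node["edge"]
        node = node["parent"]
    node, acc = leaves[j], 0.0
    while id(node) not in path_i:   # climb from leaf j to the common ancestor
        acc += node["edge"]
        node = node["parent"]
    return acc + path_i[id(node)]
```

With values `[0.0, 1.0, 2.0, 3.0]`, the approximate error between the first and last time steps accumulates four edges through the root and happens to equal the exact distance 3.0 in this linear example; in general the tree gives an approximation in exchange for the reduced storage.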
Hyper-resistivity Theory in a Cylindrical Plasma (open access)

Hyper-resistivity Theory in a Cylindrical Plasma

A model is presented for determining the hyper-resistivity coefficient that arises due to the presence of magnetic structures that appear in plasma configurations such as the reversed field pinch and spheromak. Emphasis is placed on modeling cases where magnetic islands pass from non-overlap to overlap regimes. Earlier works have shown that a diffusion-based model can give realistic transport scalings when magnetic islands are isolated, and this formalism is extended to apply to the hyper-resistivity problem. In this case electrons may be in either long or short mean-free-path regimes, and intuitive arguments are presented for how to extend previous theories to incorporate this feature in the presence of magnetic structures that pass from laminar to moderately chaotic regimes.
Date: February 27, 2001
Creator: Berk, H. L.; Fowler, T. K.; LoDestro, L. L. & Pearlstein, L. D.
Object Type: Report
System: The UNT Digital Library
Positron Annihilation Spectroscopy and Small Angle Neutron Scattering Characterization of the Effect of Mn on the Nanostructural Features formed in Irradiated Fe-Cu-Mn Alloys (open access)

Positron Annihilation Spectroscopy and Small Angle Neutron Scattering Characterization of the Effect of Mn on the Nanostructural Features formed in Irradiated Fe-Cu-Mn Alloys

The size, number density, and composition of the nanometer defects responsible for the hardening and embrittlement in irradiated Fe-0.9 wt.% Cu and Fe-0.9 wt.% Cu-1.0 wt.% Mn model reactor pressure vessel alloys were measured using small angle neutron scattering and positron annihilation spectroscopy. These alloys were irradiated at 290 °C to relatively low neutron fluences (E > 1 MeV, 6.0 × 10²⁰ to 4.0 × 10²¹ n/m²) in order to study the effect of manganese on the nucleation and growth of copper-rich precipitates and secondary defect features. Copper-rich precipitates were present in both alloys following irradiation. The Fe-Cu-Mn alloy had smaller precipitates and a larger number density of precipitates, suggesting Mn segregation at the iron matrix-precipitate interface, which reduces the interfacial energy and in turn the driving force for coarsening. Mn also retards the precipitation kinetics and inhibits large vacancy cluster formation, suggesting a strong Mn-vacancy interaction which reduces radiation enhanced diffusion.
Date: February 27, 2003
Creator: Glade, S C; Wirth, B D; Asoka-Kumar, P; Odette, G R; Sterne, P A & Howell, R H
Object Type: Article
System: The UNT Digital Library
Feature Selection in Scientific Applications (open access)

Feature Selection in Scientific Applications

Numerous applications of data mining to scientific data involve the induction of a classification model. In many cases, the collection of data is not performed with this task in mind, and therefore, the data might contain irrelevant or redundant features that negatively affect the accuracy of the induction algorithms. The size and dimensionality of typical scientific data make it difficult to use any available domain information to identify features that discriminate between the classes of interest. Similarly, exploratory data analysis techniques have limitations on the amount and dimensionality of the data that can be effectively processed. In this paper, we describe applications of efficient feature selection methods to data sets from astronomy, plasma physics, and remote sensing. We use variations of recently proposed filter methods as well as traditional wrapper approaches where practical. We discuss the importance of these applications, the general challenges of feature selection in scientific datasets, the strategies for success that were common among our diverse applications, and the lessons learned in solving these problems.
Date: February 27, 2004
Creator: Cantu-Paz, E; Newsam, S & Kamath, C
Object Type: Article
System: The UNT Digital Library
Characterization of Contaminant Transport by Gravity, Capillarity and Barometric Pumping in Heterogeneous Vadose Zones (open access)

Characterization of Contaminant Transport by Gravity, Capillarity and Barometric Pumping in Heterogeneous Vadose Zones

This final report summarizes the work and accomplishments of our three-year project. We have pursued the concept of a Vadose-Zone Observatory (VZO) to provide the field laboratory necessary for carrying out the experiments required to achieve the goals of this research. Our approach has been (1) to carry out plume release experiments at a VZO allowing the acquisition of several different kinds of raw data that (2) are analyzed and evaluated with the aid of highly detailed, diagnostic numerical models. The key feature of the VZO constructed at Lawrence Livermore National Laboratory (LLNL) is the variety of plume-tracking techniques that can be used at a single location. Electric resistance tomography (ERT) uses vertical arrays of electrodes across the vadose zone that can monitor electrical resistance changes in the soil as a plume moves downward to the water table. These resistance changes can be used to provide ''snapshots'' of the progress of the plume. Additionally, monitoring wells have been completed at multiple levels in the vicinity of a central infiltration site. Sensors emplaced at different levels include electrically conducting gypsum blocks for detecting saturation changes, thermistors for monitoring temperature changes and pressure transducers for observing barometric changes at different levels in …
Date: February 27, 2001
Creator: Carrigan, C R; Martins, S A; Ramirez, A L; Daily, W D; Hudson, G B; Ralsont, D et al.
Object Type: Report
System: The UNT Digital Library
Open and Closed Magnetic Confinement Systems: Is There a Fundamental Difference in Their Transport Properties? (open access)

Open and Closed Magnetic Confinement Systems: Is There a Fundamental Difference in Their Transport Properties?

The results of five decades of experimental investigations of open-ended and closed magnetic confinement geometries are examined to see if intrinsic topology-dependent differences in their cross-field transport can be discerned. The evidence strongly supports a picture in which closed systems (stellarators, tokamaks, reversed-field pinches, etc.) are in all cases studied to date characterized by some level of plasma turbulence, leading to substantial deviations from purely classical cross-field transport. This transport is often describable as a Bohm-like scaling with plasma temperature and magnetic field intensity. By contrast, open systems have in many significant examples been able to approach closely to classically predicted cross-field transport, including cases where the transport appeared to be more than five orders of magnitude slower than the Bohm-diffusion rate. To explain these differences the following tentative hypothesis is put forward: The differences arise from two sources: (1) differences in the instability driving terms arising from free-energy sources, such as current flow along the field lines, etc., and (2) differences in the nature of the boundary conditions for the various unstable waves that may be stimulated by these free energy sources within the plasma. By analogy with a laser, closed systems, with their flux tubes returning on themselves, …
Date: February 27, 2001
Creator: Post, R F
Object Type: Article
System: The UNT Digital Library
High Resolution Radionuclide Imaging Using Focusing Gamma-Ray Optics (open access)

High Resolution Radionuclide Imaging Using Focusing Gamma-Ray Optics

Significant effort is being devoted to the development of noninvasive imaging systems that allow in vivo assessment of biological and biomolecular interactions in mice and other small animals. Although single-photon emission tomography (SPECT) and positron emission tomography (PET) are well-matched to the study of physiological function in small animals, the spatial resolutions of 1-2 mm currently achievable with these techniques limit the types of research possible. For this reason, we are developing a small animal radionuclide imaging system using grazing incidence optics to focus the low-energy gamma-rays emitted by ¹²⁵I, ⁹⁵ᵐTc, ⁹⁶Tc, and ⁹⁹ᵐTc. We compare this approach to the more traditional use of absorptive collimation.
Date: February 27, 2004
Creator: Pivovaroff, Michael; Craig, William; Ziock, Klaus; Barber, William; Funk, Tobias; Hasegawa, Bruce et al.
Object Type: Article
System: The UNT Digital Library
Development of a High Resolution X-Ray Imaging Crystal Spectrometer for Measurement of Ion-Temperature and Rotation-Velocity Profiles in Fusion Energy Research Plasmas (open access)

Development of a High Resolution X-Ray Imaging Crystal Spectrometer for Measurement of Ion-Temperature and Rotation-Velocity Profiles in Fusion Energy Research Plasmas

A new imaging high resolution x-ray crystal spectrometer (XCS) has been developed to measure continuous profiles of ion temperature and rotation velocity in fusion plasmas. Following proof-of-principle tests on the Alcator C-Mod tokamak and the NSTX spherical tokamak, and successful testing of a new silicon, pixelated detector with a 1 MHz count-rate capability per pixel, an imaging XCS is being designed to measure full profiles of Ti and vφ on C-Mod. The imaging XCS design has also been adopted for ITER. Ion-temperature uncertainty and minimum measurable rotation velocity are calculated for the C-Mod spectrometer. The effects of x-ray and nuclear-radiation background on the measurement uncertainties are calculated to predict performance on ITER.
Date: February 27, 2008
Creator: Hill, K. W.; Broennimann, Ch; Eikenberry, E. F.; Ince-Cushman, A.; Lee, S. G.; Rice, J. E. et al.
Object Type: Report
System: The UNT Digital Library
Environment, Safety, and Health Self-Assessment Report, Fiscal Year 2008 (open access)

Environment, Safety, and Health Self-Assessment Report, Fiscal Year 2008

Lawrence Berkeley National Laboratory's Environment, Safety, and Health (ES&H) Self-Assessment Program ensures that Integrated Safety Management (ISM) is implemented institutionally and by all divisions. The Self-Assessment Program, managed by the Office of Contract Assurance (OCA), provides for an internal evaluation of all ES&H programs and systems at LBNL. The functions of the program are to ensure that work is conducted safely, and with minimal negative impact to workers, the public, and the environment. The Self-Assessment Program is also the mechanism used to institute continuous improvements to the Laboratory's ES&H programs. The program is described in LBNL/PUB 5344, Environment, Safety, and Health Self-Assessment Program and is composed of four distinct assessments: the Division Self-Assessment, the Management of Environment, Safety, and Health (MESH) review, ES&H Technical Assurance, and the Appendix B Self-Assessment. The Division Self-Assessment uses the five core functions and seven guiding principles of ISM as the basis of evaluation. Metrics are created to measure performance in fulfilling ISM core functions and guiding principles, as well as promoting compliance with applicable regulations. The five core functions of ISM are as follows: (1) Define the Scope of Work; (2) Identify and Analyze Hazards; (3) Control the Hazards; (4) Perform the Work; and …
Date: February 27, 2009
Creator: Chernowski, John
Object Type: Report
System: The UNT Digital Library
Aza-Cope Rearrangement of Propargyl Enammonium Cations Catalyzed by a Self-Assembled 'Nanozyme' (open access)

Aza-Cope Rearrangement of Propargyl Enammonium Cations Catalyzed by a Self-Assembled 'Nanozyme'

The tetrahedral [Ga₄L₆]¹²⁻ assembly (L = N,N-bis(2,3-dihydroxybenzoyl)-1,5-diaminonaphthalene) encapsulates a variety of cations, including propargyl enammonium cations capable of undergoing the aza-Cope rearrangement. For propargyl enammonium substrates that are encapsulated in the [Ga₄L₆]¹²⁻ assembly, rate accelerations of up to a factor of 184 are observed when compared to the background reaction. After rearrangement, the product iminium ion is released into solution and hydrolyzed, allowing for catalytic turnover. The activation parameters for the catalyzed and uncatalyzed reactions were determined, revealing that a lowered entropy of activation is responsible for the observed rate enhancements. The catalyzed reaction exhibits saturation kinetics; the rate data obey the Michaelis-Menten model of enzyme kinetics, and competitive inhibition using a non-reactive guest has been demonstrated.
Date: February 27, 2008
Creator: Hastings, Courntey J.; Fiedler, Dorothea; Bergman, Robert G. & Raymond, Kenneth N.
Object Type: Article
System: The UNT Digital Library
First Quarter Hanford Seismic Report for Fiscal Year 2001 (open access)

First Quarter Hanford Seismic Report for Fiscal Year 2001

Hanford Seismic Monitoring provides an uninterrupted collection of high-quality raw and processed seismic data from the Hanford Seismic Network (HSN) for the U.S. Department of Energy and its contractors. Hanford Seismic Monitoring also locates and identifies sources of seismic activity and monitors changes in the historical pattern of seismic activity at the Hanford Site. The data are compiled, archived, and published for use by the Hanford Site for waste management, Natural Phenomena Hazards assessments, and engineering design and construction. In addition, the seismic monitoring organization works with the Hanford Site Emergency Services Organization to provide assistance in the event of a significant earthquake on the Hanford Site. The HSN and the Eastern Washington Regional Network (EWRN) consist of 41 individual sensor sites and 15 radio relay sites maintained by the Hanford Seismic Monitoring staff. For the HSN, there were 477 triggers during the first quarter of fiscal year (FY) 2001 on the data acquisition system. Of these triggers, 176 were earthquakes. Forty-five earthquakes were located in the HSN area; 1 occurred in the Columbia River Basalt Group, 43 occurred in the pre-basalt sediments, and 1 occurred in the crystalline basement. Geographically, 44 earthquakes occurred in swarm areas, …
Date: February 27, 2001
Creator: Hartshorn, Donald C.; Reidel, Stephen P.; Rohay, Alan C. & Valenta, Michelle M.
Object Type: Report
System: The UNT Digital Library
FY08 LDRD Final Report LOCAL: Locality-Optimizing Caching Algorithms and Layouts (open access)

FY08 LDRD Final Report LOCAL: Locality-Optimizing Caching Algorithms and Layouts

This project investigated layout and compression techniques for large, unstructured simulation data to reduce bandwidth requirements and latency in simulation I/O and subsequent post-processing, e.g. data analysis and visualization. The main goal was to eliminate the data-transfer bottleneck - for example, from disk to memory and from central processing unit to graphics processing unit - through coherent data access and by trading underutilized compute power for effective bandwidth and storage. This was accomplished by (1) designing algorithms that both enforce and exploit compactness and locality in unstructured data, and (2) adapting offline computations to a novel stream processing framework that supports pipelining and low-latency sequential access to compressed data. This report summarizes the techniques developed and results achieved, and includes references to publications that elaborate on the technical details of these methods.
Date: February 27, 2009
Creator: Lindstrom, P
Object Type: Report
System: The UNT Digital Library
Photoelectron Spectroscopy under Ambient Pressure and Temperature Conditions (open access)

Photoelectron Spectroscopy under Ambient Pressure and Temperature Conditions

We describe the development and applications of novel instrumentation for photoemission spectroscopy of solid or liquid surfaces in the presence of gases under ambient conditions of pressure and temperature. The new instrument overcomes the strong scattering of electrons in gases by the use of an aperture close to the surface, followed by a differentially pumped electrostatic lens system. In addition to the scattering problem, experiments in the presence of condensed water or other liquids require the development of special sample holders to provide localized cooling. We discuss the first two generations of Ambient Pressure PhotoEmission Spectroscopy (APPES) instruments developed at synchrotron light sources (the ALS in Berkeley and BESSY in Berlin), with special focus on the Berkeley instruments. Applications to environmental science and catalytic chemistry research are illustrated with two examples.
Date: February 27, 2009
Creator: Ogletree, D. Frank; Bluhm, Hendrik; Hebenstreit, Eleonore B. & Salmeron, Miquel
Object Type: Article
System: The UNT Digital Library