Phase I Report: DARPA Exoskeleton Program (open access)

The Defense Advanced Research Projects Agency (DARPA) inaugurated a program addressing research and development for an Exoskeleton for Human Performance Augmentation in FY 2001. A team consisting of Oak Ridge National Laboratory (the prime contractor), AeroVironment, Inc., the Army Research Laboratory, the University of Minnesota, and the Virginia Polytechnic Institute has recently completed an 18-month Phase I effort in support of this DARPA program. The Phase I effort focused on the development and proof-of-concept demonstrations of key enabling technologies, laying the foundation for subsequently building and demonstrating a prototype exoskeleton. The overall approach was driven by the need to optimize energy efficiency while providing a system that augmented the operator in as transparent a manner as possible (non-impeding). These needs led to the evolution of two key distinguishing features of this team's approach. The first is the "no knee contact" concept. This concept depends on a unique Cartesian-based control scheme that uses force sensing at the foot and backpack attachments to allow the exoskeleton to closely follow the operator while avoiding the difficulty of connecting and sensing position at the knee. The second is an emphasis on energy efficiency, manifested by an energetics, power, actuation, and controls approach designed to enhance …
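
The force-following idea in the "no knee contact" concept can be illustrated with a toy admittance-control loop. The sketch below is a generic, hypothetical Python example, not the team's controller: the gains, deadband, and sensor signal are all assumed values, and a single Cartesian axis stands in for the full foot/backpack force-sensing scheme.

# Toy admittance control: command exoskeleton velocity from the
# interaction force measured at an attachment, so the machine
# follows the wearer. All constants are illustrative assumptions.

DT = 0.001        # control period [s] (assumed)
K_ADMIT = 0.02    # admittance gain [(m/s)/N] (illustrative)
DEADBAND = 2.0    # ignore forces below this [N] to avoid jitter

def velocity_command(f_meas: float) -> float:
    """Map a measured interaction force [N] to a velocity command [m/s]."""
    if abs(f_meas) < DEADBAND:
        return 0.0
    return K_ADMIT * f_meas   # move in the direction the wearer pushes

# Toy run: a constant 10 N push from the wearer for one second.
position = 0.0
for _ in range(1000):
    position += velocity_command(10.0) * DT
print(f"axis moved {position:.3f} m in 1 s under a 10 N push")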
Date: January 21, 2004
Creator: Jansen, J.F.
Object Type: Report
System: The UNT Digital Library
Lithographically-directed self-assembly of nanostructures (open access)

The combination of lithography and self-assembly provides a powerful means of organizing solution-synthesized nanostructures for a wide variety of applications. We have developed a fluidic assembly method that relies on the local pinning of a moving liquid contact line by lithographically produced topographic features to concentrate nanoparticles at those features. The final stages of the assembly process are controlled first by long-range immersion capillary forces and then by the short-range electrostatic and van der Waals interactions. We have successfully assembled nanoparticles from 50 nm to 2 nm in size using this technique and have also demonstrated the controlled positioning of more complex nanotetrapod structures. We have used this process to assemble Au nanoparticles into pre-patterned electrode structures and have performed preliminary electrical characterization of the devices so formed. The fluidic assembly method is capable of very high yield, in terms of positioning nanostructures at each lithographically-defined location, and of excellent specificity, with essentially no particle deposition between features.
Date: September 21, 2004
Creator: Liddle, J. Alexander; Cui, Yi & Alivisatos, Paul
Object Type: Article
System: The UNT Digital Library
Computing Path Tables for Quickest Multipaths In Computer Networks (open access)

We consider the transmission of a message from a source node to a terminal node in a network with n nodes and m links, where the message is divided into parts and each part is transmitted over a different path in a set of paths from the source node to the terminal node. Here each link is characterized by a bandwidth and a delay. The set of paths, together with the transmission rates used for the message, is referred to as a multipath. We present two algorithms that produce a minimum-end-to-end-message-delay multipath path table that, for every message length, specifies a multipath that will achieve the minimum end-to-end delay. The algorithms also generate a function that maps the minimum end-to-end message delay to the message length. The time complexities of the algorithms are O(n²((n²/log n) + m)·min(D_max, C_max)) and O(nm(C_max + n·min(D_max, C_max))) when the link delays and bandwidths are non-negative integers. Here D_max and C_max are, respectively, the maximum link delay and the maximum link bandwidth, both greater than zero.
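
To make the delay objective concrete, here is a small Python sketch under simplifying assumptions that are not from the paper: each path is reduced to a latency d and a bottleneck bandwidth c, parts are sent simultaneously, and a path can deliver c·(T − d) units of the message by time T. The minimum end-to-end delay T for a message of length L is then found by bisection.

# Illustrative model (not the paper's algorithm): minimum end-to-end
# delay for a message of length L split over fixed (delay, bandwidth)
# paths, assuming simultaneous transmission on all paths.

def min_multipath_delay(paths, L, tol=1e-9):
    """Smallest T with sum_i c_i * max(0, T - d_i) >= L, by bisection."""
    def deliverable(T):
        return sum(c * max(0.0, T - d) for d, c in paths)
    lo = min(d for d, _ in paths)            # nothing arrives before lo
    hi = lo + L / max(c for _, c in paths) + 1.0
    while deliverable(hi) < L:               # grow until feasible
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if deliverable(mid) >= L else (mid, hi)
    return hi

paths = [(2.0, 5.0), (5.0, 10.0)]            # (delay, bandwidth) pairs
print(min_multipath_delay(paths, 40.0))      # about 6.6667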
Date: December 21, 2004
Creator: Grimmell, W.C.
Object Type: Report
System: The UNT Digital Library
Community Land Model Version 3.0 (CLM3.0) Developer's Guide (open access)

This document describes the guidelines adopted for software development of the Community Land Model (CLM) and serves as a reference to the entire code base of the released version of the model. The version of the code described here is Version 3.0, which was released in the summer of 2004. This document, the Community Land Model Version 3.0 (CLM3.0) User's Guide (Vertenstein et al., 2004), the Technical Description of the Community Land Model (CLM) (Oleson et al., 2004), and the Community Land Model's Dynamic Global Vegetation Model (CLM-DGVM): Technical Description and User's Guide (Levis et al., 2004) provide the developer, user, or researcher with details of implementation, instructions for using the model, a scientific description of the model, and a scientific description of the Dynamic Global Vegetation Model integrated with CLM, respectively. The CLM is a single-column (snow-soil-vegetation) biogeophysical model of the land surface which can be run serially (on a laptop or personal computer) or in parallel (using distributed or shared memory processors or both) on both vector and scalar computer architectures. Written in Fortran 90, CLM can be run offline (i.e., run in isolation using stored atmospheric forcing data), coupled to an atmospheric model (e.g., the Community …
Date: December 21, 2004
Creator: Hoffman, FM
Object Type: Report
System: The UNT Digital Library
WIPP Subsidence Monument Leveling Survey - 2004 (open access)

Sections 2 through 7 of this report present the results of the 2004 leveling survey through the subsidence monuments at the WIPP site. Approximately 15 miles of leveling was completed through nine vertical control loops. The 2004 survey includes the determination of elevation on each of the 48 existing subsidence monuments and the WIPP baseline survey, and 14 of the National Geodetic Survey's (NGS) vertical control points. The field observations were completed during August through November of 2004 by personnel from the Washington TRU Solutions (WTS) Surveying Group, Mine Engineering Department. Additional rod personnel were provided by the Geotechnical Engineering Department. Digital leveling techniques were utilized to achieve better than Second Order Class II loop closures as outlined by the Federal Geodetic Control Subcommittee (FGCS). Because it is important to perform the subsidence survey in exactly the same manner each year, WIPP procedure WP 09-ES4001 details each step of the survey. Starting with the 2002 survey, this procedure has been used to perform the subsidence survey. Starting with the survey of the year 2001, Loop 1 and redundant survey connections among the various loops were removed from the survey and report. This resulted in a reduction of fieldwork with no loss …
Date: December 21, 2004
Creator: Westinghouse TRU Solutions LLC
Object Type: Report
System: The UNT Digital Library
Impacts of Stable Element Intake on C and I Dose Estimates - Implications for Proposed Yucca Mountain Repository (open access)

The purpose of this study was to evaluate the influence of the intake of stable isotopes of carbon and iodine on the committed doses due to the ingestion of ¹⁴C and ¹²⁹I. This was accomplished through the application of two different computational approaches. The first was based on the assumption that ground (drinking) water was the only source of intake of both ¹⁴C and ¹²⁹I and of stable carbon and stable iodine. For purposes of the second approach, the intake of ¹⁴C and ¹²⁹I was still assumed to be only that in the ground (drinking) water, but the intake of stable carbon and stable iodine was assumed to be that in the drinking water plus other components of the diet. The doses were estimated using either a conversion formula or the applicable dose coefficients in Federal Guidance Reports No. 11 and No. 13. Serving as input for the analyses was the estimated maximum concentration of ¹⁴C or ¹²⁹I that would be present in the ground water due to potential releases from the proposed Yucca Mountain high-level radioactive waste repository during the first 10,000 years after closure. The estimated concentrations of stable carbon and …
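
As a hedged illustration of the first approach's bookkeeping, the sketch below computes a committed dose from drinking-water intake alone, using the generic relation dose = concentration × annual water intake × ingestion dose coefficient. Every numeric value is a placeholder, not a value from this report or from Federal Guidance Reports No. 11 and No. 13.

# Committed dose from drinking water only (placeholder numbers).

C_WATER = 1.0e-1        # radionuclide concentration in water [Bq/L] (assumed)
INTAKE = 2.0 * 365.0    # annual water intake [L/yr] at 2 L/day (assumed)
DCF = 5.8e-10           # ingestion dose coefficient [Sv/Bq] (placeholder)

annual_intake_bq = C_WATER * INTAKE      # activity ingested per year [Bq]
committed_dose = annual_intake_bq * DCF  # committed dose [Sv] per year of intake
print(f"committed dose: {committed_dose:.2e} Sv per year of intake")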
Date: December 21, 2004
Creator: Moeller, Dade W.; Ryan, Michael T.; Sun, Lin-Shen C. & Cherry, Robert N., Jr.
Object Type: Report
System: The UNT Digital Library
Probability Density and CFAR Threshold Estimation for Hyperspectral Imaging (open access)

The work reported here shows the proof of principle (using a small data set) for a suite of algorithms designed to estimate the probability density function of hyperspectral background data and compute the appropriate Constant False Alarm Rate (CFAR) matched filter decision threshold for a chemical plume detector. Future work will provide a thorough demonstration of the algorithms and their performance with a large data set. The LASI (Large Aperture Search Initiative) Project involves instrumentation and image processing for hyperspectral images of chemical plumes in the atmosphere. The work reported here involves research and development on algorithms for reducing the false alarm rate in chemical plume detection and identification algorithms operating on hyperspectral image cubes. The chemical plume detection algorithms to date have used matched filters designed using generalized maximum likelihood ratio hypothesis testing algorithms [1, 2, 5, 6, 7, 12, 10, 11, 13]. One of the key challenges in hyperspectral imaging research is the high false alarm rate that often results from the plume detector [1, 2]. The overall goal of this work is to extend the classical matched filter detector to apply Constant False Alarm Rate (CFAR) methods to reduce the false alarm rate, or Probability of False …
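
The CFAR threshold idea admits a compact sketch: estimate the background score distribution empirically and place the decision threshold at the (1 − P_FA) quantile, so the false alarm rate stays constant by construction. This is a generic Python illustration, not the LASI algorithm suite; the Gaussian background is a stand-in for real matched-filter scores.

# Generic CFAR threshold from an empirical background distribution.
import numpy as np

rng = np.random.default_rng(0)
background_scores = rng.normal(0.0, 1.0, size=100_000)  # stand-in data

P_FA = 1e-3                                    # target false alarm rate
threshold = np.quantile(background_scores, 1.0 - P_FA)

# Pixels whose matched-filter score exceeds the threshold are declared
# detections; about P_FA of background pixels exceed it by construction.
empirical = np.mean(background_scores > threshold)
print(f"threshold = {threshold:.3f}, empirical P_FA = {empirical:.4f}")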
Date: September 21, 2004
Creator: Clark, G A
Object Type: Report
System: The UNT Digital Library
Quantitative Evaluation of Bio-Aerosol Mass Spectrometry for the Real-Time Detection of Individual Airborne Mycobacterium Tuberculosis H37Ra Particles (open access)

None
Date: May 21, 2004
Creator: Tobias, H; Schafer, M; Pitesky, M; Horn, J & Frank, M
Object Type: Article
System: The UNT Digital Library
Near-infrared Adaptive Optics Imaging of the Satellites and Individual Rings of Uranus from the W.M. Keck Observatory (open access)

None
Date: January 21, 2004
Creator: Gibbard, S G; de Pater, I & Hammel, H B
Object Type: Article
System: The UNT Digital Library
FUDGE: A Program for Performing Nuclear Data Testing and Sensitivity Studies (open access)

We have developed a program called FUDGE that allows one to modify data from LLNL's nuclear database. After modifying data, FUDGE can then be instructed to process the data into the formats used by LLNL's deterministic (ndf) and Monte Carlo (MCAPM) transport codes. This capability allows users to perform nuclear data sensitivity studies without modification of the transport modeling codes. FUDGE is designed to be user-friendly (object-oriented) and fast (the modification and processing typically take about a minute). It uses Python as a front-end, making it flexible and scriptable. Comparing, plotting, and printing of the data are also supported. An overview of FUDGE will be presented, as well as examples.
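
The abstract does not show FUDGE's API, so the sketch below only illustrates the general perturb-reprocess-compare pattern behind a data sensitivity study. All names and numbers are hypothetical; this is not FUDGE code.

# Generic data-perturbation sensitivity pattern (hypothetical names
# and data; not the FUDGE API).

def perturb(cross_section, scale):
    """Return a copy of a tabulated cross section scaled by `scale`."""
    return [(energy, value * scale) for energy, value in cross_section]

def observable(cross_section):
    """Stand-in for a transport calculation driven by the data."""
    return sum(value for _, value in cross_section)

sigma = [(1.0e3, 2.1), (1.0e4, 1.7), (1.0e5, 0.9)]  # (eV, barns), made up
base = observable(sigma)
pert = observable(perturb(sigma, 1.01))             # +1% perturbation
print(f"relative sensitivity ~ {(pert - base) / base / 0.01:.3f}")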
Date: September 21, 2004
Creator: Beck, B R
Object Type: Article
System: The UNT Digital Library
Neutrino Factory and Beta Beam Experiments and Development (open access)

The long-term prospects for fully exploring three-flavor mixing in the neutrino sector depend upon an ongoing and increased investment in the appropriate accelerator R&D. Two new concepts have been proposed that would revolutionize neutrino experiments, namely the Neutrino Factory and the Beta Beam facility. These new facilities would dramatically improve our ability to test the three-flavor mixing framework, measure CP violation in the lepton sector, perhaps determine the neutrino mass hierarchy, and, if necessary, probe extremely small values of the mixing angle θ₁₃. The stunning sensitivity that could be achieved with a Neutrino Factory is described, together with our present understanding of the corresponding sensitivity that might be achieved with a Beta Beam facility. In the Beta Beam case, additional study is required to better understand the optimum Beta Beam energy and the achievable sensitivity. Neither a Neutrino Factory nor a Beta Beam facility could be built without significant R&D. An impressive Neutrino Factory R&D effort has been ongoing in the U.S. and elsewhere over the last few years, and significant progress has been made towards optimizing the design, developing and testing the required accelerator components, and significantly reducing the cost. The recent progress is described here. There …
Date: September 21, 2004
Creator: Albright, C.; Berg, J. S.; Fernow, R.; Gallardo, J.; Kahn, S.; Kirk, H. et al.
Object Type: Report
System: The UNT Digital Library
An Integrated Universal Collapsar Gamma-ray Burst Model (open access)

Starting from two assumptions: (1) that gamma-ray bursts originate from stellar-death phenomena, or so-called "collapsars", and (2) that these bursts are quasi-universal, whereby the majority of the observed variation is due to our perspective of the jet, an integrated gamma-ray burst model is proposed. It is found that several of the key correlations in the data can be naturally explained with this simple picture, and another possible correlation is predicted.
Date: January 21, 2004
Creator: Salmonson, J D
Object Type: Article
System: The UNT Digital Library
Segregation of Uranium Metal from K Basin Sludge: Results from Vendor Testing (open access)

Under contract to Fluor Hanford, Pacific Northwest National Laboratory directed laboratory, bench-scale, and pilot-scale vendor testing to evaluate the use of commercial gravity mineral concentration technology to remove and concentrate uranium metal from Hanford K Basin sludge. Uranium metal in the sludge corrodes by reacting with water to generate heat and hydrogen gas, and may constrain shipment and disposal of the sludge to the Waste Isolation Pilot Plant as remote-handled transuranic waste. Separating uranium metal from the K Basin sludge is expected to be similar to some gold recovery operations. Consequently, the capabilities of commercial gravity mineral concentration technologies were assessed for their applicability to K Basin sludge streams. Overall, the vendor testing demonstrated the technical feasibility of using gravity concentration equipment to separate the K Basin sludge into a high-volume uranium metal-depleted stream and a low-volume uranium metal-rich stream. In test systems, more than 96% of the uranium metal surrogate was concentrated into 10 to 30% of the sludge mass (7 to 24% of the sludge volume). With more prototypical equipment and stream recycle, higher recoveries may be achieved.
Date: September 21, 2004
Creator: Schmidt, Andrew J.; Elmore, Monte R. & Delegard, Calvin H.
Object Type: Report
System: The UNT Digital Library
Laser- and Radar-based Mission Concepts for Suborbital and Spaceborne Monitoring of Seismic Surface Waves (open access)

The development of a suborbital or spaceborne system to monitor seismic waves poses an intriguing prospect for advancing the state of seismology. This capability would enable an unprecedented global mapping of the velocity structure of the earth's crust, understanding of earthquake rupture dynamics and wave propagation effects, and event source location, characterization, and discrimination that are critical for both fundamental earthquake research and nuclear non-proliferation applications. As part of an ongoing collaboration between LLNL and JPL, an advanced mission concept study assessed architectural considerations and operational and data delivery requirements, extending two prior studies, one by each organization: a radar-based satellite system (JPL) for earthquake hazard assessment and a feasibility study of space- or UAV-based laser seismometer systems (LLNL) for seismic event monitoring. Seismic wave measurement requirements include lower bounds on the detectability of specific seismic sources of interest and on wave amplitude accuracy for different levels of analysis, such as source characterization, discrimination, and tomography, with a 100 μm wave amplitude resolution for waves nominally traveling at 5 km/s, an upper frequency bound based on explosion and earthquake surface displacement spectra, and minimum horizontal resolution (1-5 km) and areal coverage, in general and for targeted observations. For a radar system, corresponding engineering and operational …
Date: September 21, 2004
Creator: Foxall, W; Schultz, C A & Tralli, D M
Object Type: Article
System: The UNT Digital Library
Matrix Product Variational Formulation for Lattice Gauge Theory (open access)

For Hamiltonian lattice gauge theory, we introduce the matrix product ansatz inspired by the density matrix renormalization group. In this method, the wavefunction of the target state is assumed to be a product of finite matrices. As a result, the energy becomes a simple function of the matrices, which can be evaluated using a computer. The minimum of the energy function corresponds to the vacuum state. We show that the S = 1/2 Heisenberg chain model is well described with the ansatz. The method is also applied to the two-dimensional S = 1/2 Heisenberg and U(1) plaquette chain models.
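
In standard matrix-product notation (a generic statement of the ansatz, not equations reproduced from the paper), the variational state and the energy function to be minimized can be written in LaTeX as:

% Matrix product ansatz for a chain of N sites with local states s_i;
% each A(s_i) is a finite D x D matrix and the trace closes the chain.
\begin{align}
  |\Psi\rangle &= \sum_{s_1,\dots,s_N}
    \mathrm{Tr}\!\left[A(s_1)\,A(s_2)\cdots A(s_N)\right]
    |s_1 s_2 \cdots s_N\rangle,\\
  E[A] &= \frac{\langle\Psi|H|\Psi\rangle}{\langle\Psi|\Psi\rangle}.
\end{align}

Minimizing E[A] over the matrix entries yields the variational approximation to the vacuum state.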
Date: September 21, 2004
Creator: SUGIHARA,T.
Object Type: Article
System: The UNT Digital Library
Feature Subset Selection, Class Separability, and Genetic Algorithms (open access)

The performance of classification algorithms in machine learning is affected by the features used to describe the labeled examples presented to the inducers. Therefore, the problem of feature subset selection has received considerable attention. Genetic approaches to this problem usually follow the wrapper approach: treat the inducer as a black box that is used to evaluate candidate feature subsets. These evaluations can take considerable time, making the traditional approach impractical for large data sets. This paper describes a hybrid of a simple genetic algorithm and a method based on class separability, applied to the selection of feature subsets for classification problems. The proposed hybrid was compared against each of its components and two other widely used feature selection wrappers. The objective of this paper is to determine whether the proposed hybrid presents advantages over the other methods in terms of accuracy or speed in this problem. The experiments used a Naive Bayes classifier and public-domain and artificial data sets. The experiments suggest that the hybrid usually finds compact feature subsets that give the most accurate results, while beating the execution time of the other wrappers.
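
The wrapper/filter hybrid idea can be sketched compactly: a bitstring GA searches over feature subsets, but fitness comes from a fast class-separability criterion (a Fisher-style variance ratio here) rather than from training the inducer on every candidate. The Python below is a toy under those assumptions, not the paper's exact hybrid; the penalty weight and GA settings are arbitrary.

# Toy GA for feature subset selection scored by class separability.
import random

random.seed(0)

def fisher_score(data, labels, subset):
    """Sum of between/within-class variance ratios over selected features."""
    score = 0.0
    for j, used in enumerate(subset):
        if not used:
            continue
        groups = {}
        for row, y in zip(data, labels):
            groups.setdefault(y, []).append(row[j])
        mean_all = sum(row[j] for row in data) / len(data)
        between = sum(len(g) * (sum(g) / len(g) - mean_all) ** 2
                      for g in groups.values())
        within = sum((x - sum(g) / len(g)) ** 2
                     for g in groups.values() for x in g) or 1e-12
        score += between / within
    return score - 0.5 * sum(subset)   # penalty favors compact subsets

def ga(data, labels, n_feat, pop=20, gens=30):
    popn = [[random.randint(0, 1) for _ in range(n_feat)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda s: -fisher_score(data, labels, s))
        parents = popn[:pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_feat)
            child = a[:cut] + b[cut:]
            child[random.randrange(n_feat)] ^= 1   # point mutation
            children.append(child)
        popn = parents + children
    return max(popn, key=lambda s: fisher_score(data, labels, s))

# Tiny synthetic set: feature 0 separates the classes, feature 1 is noise.
data = [[0.1, 5.0], [0.2, 4.8], [0.9, 5.1], [1.0, 4.9]]
labels = [0, 0, 1, 1]
print(ga(data, labels, n_feat=2))   # expected [1, 0]: keep only feature 0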
Date: January 21, 2004
Creator: Cantu-Paz, E
Object Type: Article
System: The UNT Digital Library
Open Midplane Dipole Design for LHC IR Upgrade (open access)

The proposed luminosity upgrade of the Large Hadron Collider (LHC), now under construction, will bring a large increase in the number of secondary particles from p-p collisions at the interaction point (IP). Energy deposition will be so large that the lifetime and quench performance of interaction region (IR) magnets may be significantly reduced if conventional designs are used. Moreover, the cryogenic capacity of the LHC will have to be significantly increased, as the energy deposition load on the IR magnets by itself will exhaust the present capacity. We propose an alternate open midplane dipole design concept for the dipole-first optics that mitigates these issues. The proposed design takes advantage of the fact that most of the energy is deposited in the midplane region. The coil midplane region is kept free of superconductor, support structure, and other material. Initial energy deposition calculations show that the increase in temperature remains within the quench tolerance of the superconducting coils. In addition, most of the energy is deposited in a relatively warm region where heat removal is economical. We present the basic concept and a preliminary design that includes several innovations.
Date: January 21, 2004
Creator: Gupta, R.; Anerella, M.; Harrison, M.; Schmalzle, J. & Mokhov, N.
Object Type: Article
System: The UNT Digital Library
Precision Micron Hole Drilling using a Frequency Doubled, Diode Pumped Solid State Laser (open access)

This work represents the second phase of a program to demonstrate precision laser drilling with a minimal heat affected zone (HAZ). The technique uses a diode-pumped solid-state laser (DPSSL) with two wavelengths and two modes of operation. The fundamental mode of the DPSSL at 1.06 microns is used to drill a hole with a diameter of a fraction of a millimeter in a millimeter-thick substrate quickly, but with low precision. This hole is then machined to precision dimensions using the second harmonic of the DPSSL at 532 nm with a trepanning technique. Both lasers operate in the ablative mode with peak powers at or above a gigawatt per square centimeter and pulse durations in the 80-100 ns range. Under these conditions, the thermal diffusion distance is of the order of a micron or less, and that fact, coupled with the ablative nature of the process, results in little or no HAZ. With no HAZ, there is no change in the crystalline structure surrounding the hole, and the strength of the substrate is maintained. Applications for these precision holes include cooling passages in turbine blades, ports for diesel injectors, suction holes for boundary layer …
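
The claim that the thermal diffusion distance stays near a micron can be sanity-checked with the standard estimate L ≈ √(4Dτ). The Python below uses an assumed, metal-like thermal diffusivity; the actual substrate properties are not given in the abstract.

# Sanity check of the thermal diffusion length L ~ sqrt(4 * D * tau).
from math import sqrt

D = 1.0e-5                   # thermal diffusivity [m^2/s] (assumed, metal-like)
for tau in (80e-9, 100e-9):  # pulse durations quoted in the abstract
    L = sqrt(4.0 * D * tau)  # diffusion length [m]
    print(f"tau = {tau * 1e9:.0f} ns -> L = {L * 1e6:.1f} um")

For these assumptions the result is roughly 1.8-2.0 μm, i.e., the same order as the micron-scale figure quoted above.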
Date: April 21, 2004
Creator: Friedman, H W & Pierce, E L
Object Type: Report
System: The UNT Digital Library
Rapid Degradation of Alkanethiol-Based Self-Assembled Monolayers on Gold in Ambient Laboratory Conditions (open access)

Self-assembled monolayers (SAMs) consisting of alkanethiols and similar sulfur-containing molecules on noble metal substrates are extensively used and explored in the scientific community for a variety of chemical and biological surface-functionalization applications. SAMs consisting of thiol- or disulfide-containing molecules adsorbed on gold are commonly used because of their ease of preparation and stability. However, the gold-thiolate bond is easily and rapidly oxidized under ambient conditions, adversely affecting SAM quality and structure. Here, the oxidation of dodecanethiol on gold is explored for various 12-hour exposures to ambient laboratory air and light. SAM samples are freshly prepared, air-exposed, and stored in small, capped vials. X-ray photoelectron spectroscopy (XPS) reveals nearly complete oxidation of the thiolate in air-exposed samples and a decrease in the carbon signal on the surface. Near-edge X-ray absorption fine structure spectroscopy (NEXAFS) at the carbon K-edge shows a loss of upright orientational order upon air exposure. By contrast, oxidation of the thiolate is minor when SAMs are stored in small 15 ml vials containing limited air. Thus, care must be taken to avoid SAM degradation by ensuring that alkanethiolates on gold have sufficient durability for each intended environment and application.
Date: July 21, 2004
Creator: Willey, T M; Vance, A L; van Buuren, T; Bostedt, C; Terminello, L J & Fadley, C S
Object Type: Article
System: The UNT Digital Library
Compensation for Bunch Emittance in a Magnetization and Space Charge Dominated Beam (open access)

In order to obtain sufficient cooling rates for Relativistic Heavy Ion Collider (RHIC) electron cooling, a bunched beam with high bunch charge, high repetition frequency, and high energy is required, and it is necessary to use a "magnetized" beam, i.e., an electron beam with non-negligible angular momentum. Applying a longitudinal solenoid field on the cathode can generate such a beam, which rotates around its longitudinal axis in a field-free region. This paper suggests how a magnetized beam can be accelerated and transported from an RF photocathode electron gun to the cooling section without significantly increasing its emittance. The evolution of longitudinal slices of the beam under a combination of space charge and magnetization is investigated using paraxial envelope equations and numerical simulations. We find that we must modify the traditional method of compensating for emittance, as used for normal non-magnetized beams with space charge, to account for magnetization. The results of computer simulations of successful compensation are presented. Alternatively, we show an electron bunch density distribution for which all slices propagate uniformly and which does not require emittance compensation.
Date: June 21, 2004
Creator: Chang, X.; Ben-Zvi, Ilan & Kewisch, J.
Object Type: Article
System: The UNT Digital Library
Proceedings of the 26th Seismic Research Review: Trends in Nuclear Explosion Monitoring (open access)

These proceedings contain papers prepared for the 26th Seismic Research Review: Trends in Nuclear Explosion Monitoring, held 21-23 September 2004 in Orlando, Florida. These papers represent the combined research related to ground-based nuclear explosion monitoring funded by the National Nuclear Security Administration (NNSA), Defense Threat Reduction Agency (DTRA), Air Force Research Laboratory (AFRL), US Army Space and Missile Defense Command, and other invited sponsors. The scientific objectives of the research are to improve the United States capability to detect, locate, and identify nuclear explosions. The purpose of the meeting is to provide the sponsoring agencies, as well as potential users, an opportunity to review research accomplished during the preceding year and to discuss areas of investigation for the coming year. For the researchers, it provides a forum for the exchange of scientific information toward achieving program goals, and an opportunity to discuss results and future plans. Paper topics include: seismic regionalization and calibration; detection and location of sources; wave propagation from source to receiver; the nature of seismic sources, including mining practices; hydroacoustic, infrasound, and radionuclide methods; on-site inspection; and data processing.
Date: September 21, 2004
Creator: Chavez, Francesca C.; Benson, Jody; Hanson, Stephanie; Mark, Carol & Wetovsky, Marvin A.
Object Type: Article
System: The UNT Digital Library
Modelling the Madden Julian Oscillation (open access)

The MJO has long been an aspect of the global climate that has provided a tough test for the climate modelling community. Since the 1980s there have been numerous studies of the simulation of the MJO in atmospheric general circulation models (GCMs), ranging from Hayashi and Golder (1986, 1988) and Lau and Lau (1986), through to more recent studies such as Wang and Schlesinger (1999) and Wu et al. (2002). Of course, attempts to reproduce the MJO in climate models have proceeded in parallel with developments in our understanding of what the MJO is and what drives it. In fact, many advances in understanding the MJO have come through modeling studies. In particular, failure of climate models to simulate various aspects of the MJO has prompted investigations into the mechanisms that are important to its initiation and maintenance, leading to improvements both in our understanding of, and ability to simulate, the MJO. The initial focus of this chapter will be on modeling the MJO during northern winter, when it is characterized as a predominantly eastward propagating mode and is most readily seen in observations. Aspects of the simulation of the MJO will be discussed in the context of its sensitivity …
Date: May 21, 2004
Creator: Slingo, J. M.; Inness, P. M. & Sperber, K. R.
Object Type: Book
System: The UNT Digital Library
Reducing Complexity in Parallel Algebraic Multigrid Preconditioners (open access)

Algebraic multigrid (AMG) is a very efficient iterative solver and preconditioner for large unstructured linear systems. Traditional coarsening schemes for AMG can, however, lead to computational complexity growth as problem size increases, resulting in increased memory use and execution time, and diminished scalability. Two new parallel AMG coarsening schemes are proposed that are based solely on enforcing a maximum independent set property, resulting in sparser coarse grids. The new coarsening techniques remedy memory and execution time complexity growth for various large three-dimensional (3D) problems. If used within AMG as a preconditioner for Krylov subspace methods, the resulting iterative methods tend to converge quickly. This paper discusses complexity issues that can arise in AMG, describes the new coarsening schemes, and examines the performance of the new preconditioners for various large 3D problems.
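
Coarse-point selection by an independent set has a compact illustration. The greedy maximal-independent-set sketch below operates on a plain adjacency-list graph; parallel AMG coarsening actually uses distributed, weighted variants driven by strength-of-connection measures, which this toy omits.

# Toy greedy maximal independent set: selected vertices become coarse
# points and their neighbors remain fine points.

def greedy_mis(adjacency):
    coarse, excluded = set(), set()
    for v in adjacency:                # fixed visit order for the toy
        if v in excluded:
            continue
        coarse.add(v)                  # v becomes a coarse point
        excluded.add(v)
        excluded.update(adjacency[v])  # neighbors stay fine
    return coarse

# 1D mesh 0-1-2-3-4-5: alternating coarse points, i.e. {0, 2, 4}.
graph = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}
print(sorted(greedy_mis(graph)))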
Date: September 21, 2004
Creator: De Sterck, H; Yang, U M & Heys, J
Object Type: Article
System: The UNT Digital Library
Comparison of Four Parallel Algorithms For Domain Decomposed Implicit Monte Carlo (open access)

We consider two existing asynchronous parallel algorithms for Implicit Monte Carlo (IMC) thermal radiation transport on spatially decomposed meshes. The two algorithms are from the production codes KULL from Lawrence Livermore National Laboratory and Milagro from Los Alamos National Laboratory. Both algorithms were considered and analyzed in an implementation of the KULL IMC package in ALEGRA, a Sandia National Laboratory high energy density physics code. Improvements were made to both algorithms. The improved Milagro algorithm performed the best by scaling nearly perfectly out to 244 processors.
Date: December 21, 2004
Creator: Brunner, T A; Urbatsch, T J; Evans, T M & Gentile, N A
Object Type: Article
System: The UNT Digital Library