Resource Type

Hydrogen Storage Properties of Magnesium Base Nanostructured Composite Materials (open access)

Hydrogen Storage Properties of Magnesium Base Nanostructured Composite Materials

In this work, nanostructured composite materials have been synthesized using the mechanical alloying process. The new materials produced have been investigated by X-ray diffraction (XRD), transmission electron microscopy (TEM), scanning electron microscopy (SEM) and energy dispersive spectroscopy (EDS) for their phase compositions, crystal structure, grain size, particle morphology and the distribution of the catalyst element. Hydrogen storage capacities and the hydriding-dehydriding kinetics of the new materials have been measured at different temperatures using a Sieverts apparatus. It is observed that mechanical alloying accelerates the hydrogenation kinetics of the magnesium-based materials at low temperature, but a high temperature must be provided to release the absorbed hydrogen from the hydrided magnesium-based materials. It is believed that the dehydriding temperature is largely controlled by the thermodynamic stability of magnesium hydride. Doping Mg-Ni nano/amorphous composite materials with lanthanum reduces the hydriding and dehydriding temperatures. Although the stability of MgH2 cannot easily be reduced by ball milling alone, the results suggest that the thermodynamic properties of Mg-Ni nano/amorphous composite materials can be altered by additives such as La or other effective elements. Further investigation toward understanding the mechanism of these additives is warranted.
Date: April 30, 2004
Creator: AU, M
System: The UNT Digital Library
Analysis of Homogeneous Charge Compression Ignition (HCCI) Engines for Cogeneration Applications (open access)

Analysis of Homogeneous Charge Compression Ignition (HCCI) Engines for Cogeneration Applications

This paper presents an evaluation of the applicability of Homogeneous Charge Compression Ignition Engines (HCCI) for small-scale cogeneration (less than 1 MWe) in comparison to five previously analyzed prime movers. The five comparator prime movers include stoichiometric spark-ignited (SI) engines, lean burn SI engines, diesel engines, microturbines and fuel cells. The investigated option, HCCI engines, is a relatively new type of engine that has some fundamental differences with respect to other prime movers. Here, the prime movers are compared by calculating electric and heating efficiency, fuel consumption, nitrogen oxide (NOx) emissions and capital and fuel cost. Two cases are analyzed. In Case 1, the cogeneration facility requires combined power and heating. In Case 2, the requirement is for power and chilling. The results show that the HCCI engines closely approach the very high fuel utilization efficiency of diesel engines without the high emissions of NOx and the expensive diesel fuel. HCCI engines offer a new alternative for cogeneration that provides a unique combination of low cost, high efficiency, low emissions and flexibility in operating temperatures that can be optimally tuned for cogeneration systems. HCCI engines are the most efficient technology that meets the oncoming 2007 CARB NOx standards for cogeneration …
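As background to the efficiency comparison summarized above, cogeneration prime movers are conventionally compared using an overall fuel utilization efficiency alongside the electric efficiency. The standard definitions below are added here for clarity and are not quoted from the paper; the symbols are generic:

\eta_{\mathrm{electric}} = \frac{W_{\mathrm{electric}}}{E_{\mathrm{fuel}}}, \qquad \eta_{\mathrm{total}} = \frac{W_{\mathrm{electric}} + Q_{\mathrm{useful}}}{E_{\mathrm{fuel}}},

where W_electric is the net electric output, Q_useful is the recovered heat delivered to the heating (or absorption chilling) load, and E_fuel is the fuel energy input, typically evaluated on a lower-heating-value basis.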
Date: April 30, 2004
Creator: Aceves, S; Martinez-Frias, J & Reistad, G
System: The UNT Digital Library
Production of e+e- Pairs Accompanied by Nuclear Dissociation in Ultra-peripheral Heavy Ion Collisions (open access)

Production of e+e- Pairs Accompanied by Nuclear Dissociation in Ultra-peripheral Heavy Ion Collisions

We present the first data on e{sup +}e{sup -} pair production accompanied by nuclear breakup in ultra-peripheral gold-gold collisions at a center of mass energy of 200 GeV per nucleon pair. The nuclear breakup requirement selects events at small impact parameters, where higher-order corrections to the pair production cross section should be enhanced. We compare the pair kinematic distributions with two calculations: one based on the equivalent photon approximation, and the other using lowest-order quantum electrodynamics (QED); the latter includes the photon virtuality. The cross section, pair mass, rapidity and angular distributions are in good agreement with both calculations. The pair transverse momentum, p{sub T}, spectrum agrees with the QED calculation, but not with the equivalent photon approach. We set limits on higher-order contributions to the cross section. The e{sup +} and e{sup -} p{sub T} spectra are similar, with no evidence for interference effects due to higher-order diagrams.
Date: April 7, 2004
Creator: Adams, J.; Adler, C.; Aggarwal, M. M.; Ahammed, Z.; Allgower, C.; Amonett, J. et al.
System: The UNT Digital Library
Centrality and pseudorapidity dependence of charged hadron production at intermediate p{sub T} in Au+Au collisions at {radical}s{sub NN} = 130 GeV (open access)

Centrality and pseudorapidity dependence of charged hadron production at intermediate p{sub T} in Au+Au collisions at {radical}s{sub NN} = 130 GeV

We present STAR measurements of charged hadron production as a function of centrality in Au + Au collisions at {radical}s{sub NN} = 130 GeV. The measurements cover a phase space region of 0.2 < p{sub T} < 6.0 GeV/c in transverse momentum and -1 < {eta} < 1 in pseudorapidity. Inclusive transverse momentum distributions of charged hadrons in the pseudorapidity region 0.5 < |{eta}| < 1 are reported and compared to our previously published results for |{eta}| < 0.5. No significant difference is seen for inclusive p{sub T} distributions of charged hadrons in these two pseudorapidity bins. We measured dN/d{eta} distributions and the truncated mean p{sub T} in the region p{sub T} > p{sub T}{sup cut}, and studied the results in the framework of participant and binary scaling. No clear evidence is observed for participant scaling of the charged hadron yield in the measured p{sub T} region. The relative importance of hard scattering processes is investigated through the binary scaling fraction of particle production.
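For readers unfamiliar with the quantity, the truncated mean transverse momentum referenced above is conventionally defined over the measured range above the cut. The following standard form is supplied for clarity; the integration limits are generic and the exact convention used in the paper may differ:

\langle p_T \rangle_{p_T > p_T^{\mathrm{cut}}} = \left. \int_{p_T^{\mathrm{cut}}}^{p_T^{\mathrm{max}}} p_T \, \frac{dN}{dp_T}\, dp_T \right/ \int_{p_T^{\mathrm{cut}}}^{p_T^{\mathrm{max}}} \frac{dN}{dp_T}\, dp_T .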
Date: April 15, 2004
Creator: Adams, J.; Aggarwal, M. M.; Ahammed, Z.; Amonett, J.; Anderson, B. D.; Arkhipkin, D. et al.
System: The UNT Digital Library
Centrality and pseudorapidity dependence of charged hadron production at intermediate p{sub t} in Au+Au collisions at {radical}s{sub NN} = 130 GeV (open access)

Centrality and pseudorapidity dependence of charged hadron production at intermediate p{sub t} in Au+Au collisions at {radical}s{sub NN} = 130 GeV

We present STAR measurements of charged hadron production as a function of centrality in Au + Au collisions at {radical}s{sub NN} = 130 GeV. The measurements cover a phase space region of 0.2 < p{sub T} < 6.0 GeV/c in transverse momentum and -1 < {eta} < 1 in pseudorapidity. Inclusive transverse momentum distributions of charged hadrons in the pseudorapidity region 0.5 < |{eta}| < 1 are reported and compared to our previously published results for |{eta}| < 0.5. No significant difference is seen for inclusive p{sub T} distributions of charged hadrons in these two pseudorapidity bins. We measured dN/d{eta} distributions and the truncated mean p{sub T} in the region p{sub T} > p{sub T}{sup cut}, and studied the results in the framework of participant and binary scaling. No clear evidence is observed for participant scaling of the charged hadron yield in the measured p{sub T} region. The relative importance of hard scattering processes is investigated through the binary scaling fraction of particle production.
Date: April 15, 2004
Creator: Adams, J.; Aggarwal, M.M.; Ahammed, Z.; Amonett, J.; Anderson, B. D.; Arkhipkin, D. et al.
System: The UNT Digital Library
Compact Optical Technique for Streak Camera Calibration (open access)

Compact Optical Technique for Streak Camera Calibration

The National Ignition Facility is under construction at the Lawrence Livermore National Laboratory for the U.S. Department of Energy Stockpile Stewardship Program. Optical streak cameras are an integral part of the experimental diagnostics instrumentation. To accurately reduce data from the streak cameras a temporal calibration is required. This article describes a technique for generating trains of precisely timed short-duration optical pulses that are suitable for temporal calibrations.
Date: April 2004
Creator: Allen, Curt; Davies, Terence; Janson, Frans; Justin, Ronald; Marshall, Bruce; Sweningsen, Oliver et al.
System: The UNT Digital Library
Measurement of total ion flux in vacuum Arc discharges (open access)

Measurement of total ion flux in vacuum Arc discharges

A vacuum arc ion source was modified allowing us to collect ions from arc plasma streaming through an anode mesh. The mesh had a geometric transmittance of 60 percent, which was taken into account as a correction factor. The ion current from twenty-two cathode materials was measured at an arc current of 100 A. The ion current normalized by the arc current was found to depend on the cathode material, with values in the range from 5 percent to 11 percent. The normalized ion current is generally greater for light elements than for heavy elements. The ion erosion rates were determined from values of ion current and ion charge states, which were previously measured in the same experimental system. The ion erosion rates range from 12-94 mu g/C.
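The erosion-rate determination mentioned above combines the measured normalized ion current with the previously measured mean ion charge state. A minimal sketch of that conversion is given below; the copper-like numbers in the example are illustrative assumptions, not values taken from this record.

# Convert a normalized ion current fraction and a mean ion charge state into a
# cathode ion erosion rate in micrograms per coulomb of arc charge.
ATOMIC_MASS_KG = 1.66053906660e-27      # kg per atomic mass unit
ELEMENTARY_CHARGE_C = 1.602176634e-19   # coulombs per elementary charge

def ion_erosion_rate_ug_per_C(ion_current_fraction, atomic_mass_amu, mean_charge_state):
    """Mass of cathode material carried away as ions per coulomb of arc charge."""
    mass_per_ion_kg = atomic_mass_amu * ATOMIC_MASS_KG
    charge_per_ion_C = mean_charge_state * ELEMENTARY_CHARGE_C
    rate_kg_per_C = ion_current_fraction * mass_per_ion_kg / charge_per_ion_C
    return rate_kg_per_C * 1e9          # kg/C -> micrograms/C

# Illustrative copper-like parameters (assumed, not measured here):
print(ion_erosion_rate_ug_per_C(0.08, 63.5, 2.0))   # approximately 26 micrograms/C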
Date: April 12, 2004
Creator: Anders, Andre; Oks, Efim M.; Yushkov, Georgy Yu. & Brown, Ian G.
System: The UNT Digital Library
Heat Wave: A Web-based Heat Stress Management Tool (open access)

Heat Wave: A Web-based Heat Stress Management Tool

None
Date: April 8, 2004
Creator: Anderson, R B; MacQueen, D H & Laguna, G W
System: The UNT Digital Library
Precipitation-Front Modeling: Issues Relating to Nucleation and Metastable Precipitation in the Planned Nuclear Waste Repository at Yucca Mountain, Nevada (open access)

Precipitation-Front Modeling: Issues Relating to Nucleation and Metastable Precipitation in the Planned Nuclear Waste Repository at Yucca Mountain, Nevada

The focus of the presentation is on certain aspects concerning the kinetics of heterogeneous reactions involving the dissolution and precipitation of unstable and metastable phases under conditions departing from thermodynamic equilibrium. These aspects are particularly relevant to transient thermal-hydrological-chemical (THC) processes that will occur as a result of the emplacement of radioactive waste within the Yucca Mountain Repository. Most important of these is a phenomenon commonly observed in altering soils, sediments and rocks, where less stable minerals precipitate in preference to those that are more stable, referred to as the Ostwald Rule of Stages, or the Ostwald Step Rule. W. Ostwald (1897) described the phenomenon characterizing his rule (as cited in Schmeltzer et al., 1998), thus: ''...in the course of transformation of an unstable (or metastable) state into a stable one the system does not go directly to the most stable conformation (corresponding to the modification with the lowest free energy) but prefers to reach intermediate stages (corresponding to other metastable modifications) having the closest free energy to the initial state''. This phenomenon is so widespread in natural geochemical systems, particularly under hydrothermal or low temperature conditions, that few geochemical parageneses involving the subcritical aqueous phase can be described without …
Date: April 1, 2004
Creator: Apps, J. A. & Sonnenthal, E. L.
System: The UNT Digital Library
Automatic registration of serial mammary gland sections (open access)

Automatic registration of serial mammary gland sections

We present two new methods for automatic registration of microscope images of consecutive tissue sections. They represent two possibilities for the first step in the 3-D reconstruction of histological structures from serially sectioned tissue blocks. The goal is to accurately align the sections in order to place every relevant shape contained in each image in front of its corresponding shape in the following section before detecting the structures of interest and rendering them in 3D. This is accomplished by finding the best rigid body transformation (translation and rotation) of the image being registered by maximizing a matching function based on the image content correlation. The first method makes use of the entire image information, whereas the second one uses only the information located at specific sites, as determined by the segmentation of the most relevant tissue structures. To reduce computing time, we use a multiresolution pyramidal approach that reaches the best registration transformation in increasing resolution steps. In each step, a subsampled version of the images is used. Both methods rely on a binary image which is a thresholded version of the Sobel gradients of the image (first method) or a set of boundaries manually or automatically obtained that define …
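A minimal sketch of the general scheme described above, a brute-force search for the rigid-body transformation (rotation and translation) that maximizes an image-correlation matching function, refined through a multiresolution pyramid, is given below. It is an illustration under simplifying assumptions (equal-sized grayscale images, exhaustive search over a small parameter grid), not the authors' implementation; the function and parameter names are hypothetical.

import numpy as np
from scipy import ndimage

def correlation(a, b):
    """Normalized correlation between two equal-sized grayscale images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def register_rigid(fixed, moving, levels=3, angle_span=10.0, shift_span=8):
    """Coarse-to-fine brute-force search for the rotation (degrees) and
    translation (full-resolution pixels) that maximize image correlation."""
    angle, row, col = 0.0, 0.0, 0.0
    for level in reversed(range(levels)):                 # coarsest level first
        scale = 2 ** level
        f = ndimage.zoom(fixed, 1.0 / scale, order=1)
        m = ndimage.zoom(moving, 1.0 / scale, order=1)
        best_score, best = -np.inf, (angle, row, col)
        for da in np.linspace(-angle_span / (level + 1), angle_span / (level + 1), 9):
            rotated = ndimage.rotate(m, angle + da, reshape=False, order=1)
            for dr in range(-shift_span, shift_span + 1):
                for dc in range(-shift_span, shift_span + 1):
                    cand = ndimage.shift(rotated, (row / scale + dr, col / scale + dc), order=1)
                    score = correlation(f, cand)
                    if score > best_score:
                        best_score = score
                        best = (angle + da, row + dr * scale, col + dc * scale)
        angle, row, col = best
    return angle, row, col   # apply at full resolution with ndimage.rotate and ndimage.shift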
Date: April 13, 2004
Creator: Arganda-Carreras, Ignacio; Fernandez-Gonzalez, Rodrigo & Ortiz-de-Solorzano, Carlos
System: The UNT Digital Library
Heavy Meson Production at a Low-Energy Photon Collider (open access)

Heavy Meson Production at a Low-Energy Photon Collider

A low-energy {gamma}{gamma} collider has been discussed in the context of a testbed for a {gamma}{gamma} interaction region at the Next Linear Collider (NLC). We consider the production of heavy mesons at such a testbed using Compton-backscattered photons and demonstrate that their production rivals or exceeds that at BELLE, BABAR or LEP, where they are produced indirectly via virtual {gamma}{gamma} luminosities.
Date: April 15, 2004
Creator: Asztalos, S
System: The UNT Digital Library
Efficient traffic grooming in SONET/WDM BLSR Networks (open access)

Efficient traffic grooming in SONET/WDM BLSR Networks

In this paper, we study traffic grooming in SONET/WDM BLSR networks under the uniform all-to-all traffic model with an objective to reduce total network costs (wavelength and electronic multiplexing costs), in particular, to minimize the number of ADMs while using the optimal number of wavelengths. We derive a new tighter lower bound for the number of wavelengths when the number of nodes is a multiple of 4. We show that this lower bound is achievable. All previous ADM lower bounds except perhaps that in were derived under the assumption that the magnitude of the traffic streams (r) is one unit (r = 1) with respect to the wavelength capacity granularity g. We then derive new, more general and tighter lower bounds for the number of ADMs subject to the constraint that the optimal number of wavelengths is used, and propose heuristic algorithms (a circle construction algorithm and a circle grooming algorithm) that try to minimize the number of ADMs while using the optimal number of wavelengths in BLSR networks. Both the bounds and algorithms are applicable to any value of r and for different wavelength granularities g. Performance evaluation shows that wherever applicable, our lower bounds are at least as good as existing bounds …
Date: April 2, 2004
Creator: Awwal, Abdul S.; Billah, Abdur R. B. & Wang, Bin
System: The UNT Digital Library
Recovery and Visualization of 3D Structure of Chromosomes from Tomographic Reconstruction Images (open access)

Recovery and Visualization of 3D Structure of Chromosomes from Tomographic Reconstruction Images

The objectives of this work include automatic recovery and visualization of a 3D chromosome structure from a sequence of 2D tomographic reconstruction images taken through the nucleus of a cell. Structure is very important for biologists as it affects chromosome functions, behavior of the cell and its state. Chromosome analysis is significant in the detection of diseases and in monitoring environmental gene mutations. The algorithm incorporates thresholding based on a histogram analysis with a polyline splitting algorithm, contour extraction via active contours, and detection of the 3D chromosome structure by establishing corresponding regions throughout the slices. Visualization using point cloud meshing generates a 3D surface. The 3D triangular mesh of the chromosomes provides surface detail and allows a user to interactively analyze chromosomes using visualization software.
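A schematic sketch of the slice-by-slice flow the abstract outlines, histogram-based thresholding of each tomographic slice, extraction of labeled regions, and linking of corresponding regions across consecutive slices before meshing, is given below. It uses scikit-image purely for illustration, simplifies the paper's polyline-splitting and active-contour steps, and is not the authors' code.

import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def segment_slice(image):
    """Threshold one slice (histogram-based) and return its labeled regions."""
    mask = image > threshold_otsu(image)
    return regionprops(label(mask))

def link_slices(slices, max_centroid_dist=10.0):
    """Greedily chain regions between consecutive slices by centroid distance,
    producing rough 3D groups that a point-cloud meshing step could surface."""
    chains, previous = [], []
    for z, image in enumerate(slices):
        current = []
        for region in segment_slice(image):
            c = np.array(region.centroid)
            best = None
            for chain in previous:
                d = np.linalg.norm(c - chain["centroid"])
                if d < max_centroid_dist and (best is None or d < best[0]):
                    best = (d, chain)
            chain = best[1] if best else {"points": [], "centroid": c}
            if best is None:
                chains.append(chain)
            chain["points"].append((z,) + tuple(region.centroid))
            chain["centroid"] = c
            current.append(chain)
        previous = current
    return chains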
Date: April 28, 2004
Creator: Babu, S; Liao, P; Shin, M C & Tsap, L V
System: The UNT Digital Library
What can we learn from neutrinoless double beta decay experiments? (open access)

What can we learn from neutrinoless double beta decay experiments?

We assess how well next generation neutrinoless double beta decay and normal neutrino beta decay experiments can answer four fundamental questions. 1) If neutrinoless double beta decay searches do not detect a signal, and if the spectrum is known to be inverted hierarchy, can we conclude that neutrinos are Dirac particles? 2) If neutrinoless double beta decay searches are negative and a next generation ordinary beta decay experiment detects the neutrino mass scale, can we conclude that neutrinos are Dirac particles? 3) If neutrinoless double beta decay is observed with a large neutrino mass element, what is the total mass in neutrinos? 4) If neutrinoless double beta decay is observed but next generation beta decay searches for a neutrino mass only set a mass upper limit, can we establish whether the mass hierarchy is normal or inverted? We base our answers on the expected performance of next generation neutrinoless double beta decay experiments and on simulations of the accuracy of calculations of nuclear matrix elements.
Date: April 8, 2004
Creator: Bahcall, John N.; Murayama, Hitoshi & Pena-Garay, Carlos
System: The UNT Digital Library
The CKM matrix and the unitarity triangle. Proceedings, workshop, Geneva, Switzerland, February 13-16, 2002 (open access)

The CKM matrix and the unitarity triangle. Proceedings, workshop, Geneva, Switzerland, February 13-16, 2002

This report contains the results of the Workshop on the CKM Unitarity Triangle that was held at CERN on 13-16 February 2002. There had been several Workshops on B physics that concentrated on studies at e{sup +}e{sup -} machines, at the Tevatron, or at LHC separately. Here we brought together experts of different fields, both theorists and experimentalists, to study the determination of the CKM matrix from all the available data of K, D, and B physics. The analysis of LEP data for B physics is reaching its end, and one of the goals of the Workshop was to underline the results that have been achieved at LEP, SLC, and CESR. Another goal was to prepare for the transfer of responsibility for averaging B physics properties, which has developed within the LEP community, to the present main actors of these studies, from the B factory and the Tevatron experiments. The optimal way to combine the various experimental and theoretical inputs and to fit for the apex of the Unitarity Triangle has been a contentious issue. A further goal of the Workshop was to bring together the proponents of different fitting strategies, and to compare their approaches when applied to the …
Date: April 2, 2004
Creator: Battaglia, M.; Buras, A. J.; Gambino, P. & Stocchi, A.
System: The UNT Digital Library
Stochastic algorithms for the analysis of numerical flame simulations (open access)

Stochastic algorithms for the analysis of numerical flame simulations

Recent progress in simulation methodologies and high-performance parallel computers have made it possible to perform detailed simulations of multidimensional reacting flow phenomena using comprehensive kinetics mechanisms. As simulations become larger and more complex, it becomes increasingly difficult to extract useful information from the numerical solution, particularly regarding the interactions of the chemical reaction and diffusion processes. In this paper we present a new diagnostic tool for analysis of numerical simulations of reacting flow. Our approach is based on recasting an Eulerian flow solution in a Lagrangian frame. Unlike a conventional Lagrangian view point that follows the evolution of a volume of the fluid, we instead follow specific chemical elements, e.g., carbon, nitrogen, etc., as they move through the system. From this perspective an ''atom'' is part of some molecule of a species that is transported through the domain by advection and diffusion. Reactions cause the atom to shift from one chemical host species to another and the subsequent transport of the atom is given by the movement of the new species. We represent these processes using a stochastic particle formulation that treats advection deterministically and models diffusion and chemistry as stochastic processes. In this paper, we discuss the …
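A minimal sketch of the particle formulation described above, with advection treated deterministically, diffusion as a Brownian increment, and the change of chemical host species as a random event, is shown below. The velocity field, diffusivity, and switching rate are illustrative placeholders, not values or code from the paper.

import numpy as np

rng = np.random.default_rng(0)

def advance_particles(x, velocity, D, switch_rate, species, n_species, dt):
    """One step for tracer 'atoms': deterministic advection, stochastic
    (Brownian) diffusion, and a random change of host species standing in
    for chemistry. All parameters are illustrative."""
    x = x + velocity(x) * dt                                       # advection
    x = x + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.shape)   # diffusion
    switches = rng.random(species.shape) < switch_rate * dt        # reaction
    new_hosts = rng.integers(0, n_species, species.shape)
    return x, np.where(switches, new_hosts, species)

# Example: 1000 tracer atoms in one dimension with a uniform drift.
x = np.zeros(1000)
species = np.zeros(1000, dtype=int)
for _ in range(100):
    x, species = advance_particles(x, lambda y: 0.5 * np.ones_like(y),
                                   D=1e-3, switch_rate=0.2,
                                   species=species, n_species=3, dt=0.01)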
Date: April 26, 2004
Creator: Bell, John B.; Day, Marcus S.; Grcar, Joseph F. & Lijewski, Michael J.
System: The UNT Digital Library
Target Diagnostic Technology Research & Development for the LLNL ICF and HED Program (open access)

Target Diagnostic Technology Research & Development for the LLNL ICF and HED Program

The National Ignition Facility is operational at LLNL. The ICF and HED programs at LLNL have formed diagnostic research and development groups to institute improvements outside the charter of core diagnostics. We will present data from instrumentation being developed. A major portion of our work is improvements to detectors and readout systems. We have efforts related to CCD device development. Work has been done in collaboration with the University of Arizona to back thin a large format CCD device. We have developed in collaboration with a commercial vendor a large format, compact CCD system. We have coupled large format CCD systems to our optical and x-ray streak cameras leading to improvements in resolution and dynamic range. We will discuss gate-width and uniformity improvements to MCP-based framing cameras. We will present data from single shot data link work and discuss technology aimed at improvements of dynamic range for high-speed transient measurements from remote locations.
Date: April 13, 2004
Creator: Bell, P; Landen, O; Weber, F; Lowry, M; Bennett, C; Kimbrough, J et al.
System: The UNT Digital Library
Review of Current FFAG Lattice Studies in North America (open access)

Review of Current FFAG Lattice Studies in North America

There has been a revival of interest in the use of fixed field alternating gradient accelerators (FFAGs) for many applications, including muon accelerators, high-intensity proton sources, and medical applications. The original FFAGs, and those recently built in Japan, have been based on a so-called scaling FFAG design, for which tunes are constant and the behavior in phase space is independent of energy with the exception of a scaling factor. Activity in the US and Canada has instead mostly focused on nonscaling designs, which, while having the large energy acceptance that characterizes an FFAG, do not obey the scaling relations of the scaling FFAG. Most of these designs have been based on magnets with a linear midplane field profile. A great deal of analysis, both theoretically and numerically, has occurred on these designs, and they are very well understood at this point. Some more recent work has occurred on designs with a nonlinear field profile. Since no non-scaling FFAG has ever been built, there is interest in building a small model which would accelerate electrons and demonstrate our understanding of non-scaling FFAG design.
Date: April 1, 2004
Creator: Berg, J. Scott; Palmer, Robert; Ruggiero, Alessandro; Trbojevic, Dejan; Keil, Eberhard; Johnstone, Carol et al.
System: The UNT Digital Library
Bounds on Elastic Constants for Random Polycrystals of Laminates (open access)

Bounds on Elastic Constants for Random Polycrystals of Laminates

A well-known result due to Hill provides an exact expression for the bulk modulus of any multicomponent elastic composite whenever the constituents are isotropic and the shear modulus is uniform throughout. Although no precise analog of Hill's result is available for the opposite case of uniform bulk modulus and varying shear modulus, it is shown here that some similar statements can be made for shear behavior of random polycrystals composed of laminates of isotropic materials. In particular, the Hashin-Shtrikman-type bounds of Peselnick, Meister, and Watt for random polycrystals composed of hexagonal (transversely isotropic) grains are applied to the problem of polycrystals of laminates. An exact product formula relating the Reuss estimate of bulk modulus and an effective shear modulus (of laminated grains composing the system) to products of the eigenvalues for quasi-compressional and quasi-uniaxial shear eigenvectors also plays an important role in the analysis of the overall shear behavior of the random polycrystal. When the bulk modulus is uniform in such a system, the equations are shown to reduce to a simple form that depends prominently on the uniaxial shear eigenvalue - as expected from physical arguments concerning the importance of uniaxial shear in these systems. One application of the …
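For context, the exact result of Hill referred to in the opening sentence is usually stated as follows when every constituent shares the same shear modulus G; this is the standard form from the general literature, included here for clarity rather than quoted from the paper. For constituent volume fractions f_i and bulk moduli K_i,

\frac{1}{K_{\mathrm{eff}} + \tfrac{4}{3}G} = \sum_i \frac{f_i}{K_i + \tfrac{4}{3}G},

so the effective bulk modulus K_eff is exact and independent of the microgeometry.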
Date: April 30, 2004
Creator: Berger, E. L.
System: The UNT Digital Library
Pore Fluid Effects on Shear Modulus for Sandstones with Soft Anisotropy (open access)

Pore Fluid Effects on Shear Modulus for Sandstones with Soft Anisotropy

A general analysis of poroelasticity for vertical transverse isotropy (VTI) shows that four eigenvectors are pure shear modes with no coupling to the pore-fluid mechanics. The remaining two eigenvectors are linear combinations of pure compression and uniaxial shear, both of which are coupled to the fluid mechanics. After reducing the problem to a 2x2 system, the analysis shows in a relatively elementary fashion how a poroelastic system with isotropic solid elastic frame, but with anisotropy introduced through the poroelastic coefficients, interacts with the mechanics of the pore fluid and produces shear dependence on fluid properties in the overall mechanical system. The analysis shows, for example, that this effect is always present (though sometimes small in magnitude) in the systems studied, and can be quite large (up to a definite maximum increase of 20 per cent) in some rocks--including Spirit River sandstone and Schuler-Cotton Valley sandstone.
Date: April 15, 2004
Creator: Berger, E. L.
System: The UNT Digital Library
Poroelastic fluid effects on shear for rocks with soft anisotropy (open access)

Poroelastic fluid effects on shear for rocks with soft anisotropy

A general analysis of poroelasticity for vertical transverse isotropy (VTI) shows that four eigenvectors are pure shear modes with no coupling to the pore-fluid mechanics. The remaining two eigenvectors are linear combinations of pure compression and uniaxial shear, both of which are coupled to the fluid mechanics. After reducing the problem to a 2 x 2 system, the analysis shows in a relatively elementary fashion how a poroelastic system with isotropic solid elastic frame, but with anisotropy introduced through the poroelastic coefficients, interacts with the mechanics of the pore fluid and produces shear dependence on fluid properties in the overall poroelastic system. The analysis shows for example that this effect is always present (though sometimes small in magnitude) in the systems studied, and can be quite large (on the order of 10 to 20%) for wave propagation studies in some real granites and sandstones, including Spirit River sandstone and Schuler-Cotton Valley sandstone. Some of the results quoted here are obtained by using a new product formula relating local bulk and uniaxial shear energy to the product of the two eigenvalues that are coupled to the fluid mechanics. This product formula was first derived in prior work, but is given a …
Date: April 13, 2004
Creator: Berger, E. L.
System: The UNT Digital Library
Growth of nanocrystalline MoO3 on Au(111) studied by in-situ STM (open access)

Growth of nanocrystalline MoO3 on Au(111) studied by in-situ STM

The growth of nanocrystalline MoO{sub 3} islands on Au(111) using physical vapor deposition of Mo has been studied by scanning tunneling microscopy (STM) and low energy electron diffraction (LEED). The growth conditions affect the shape and distribution of the MoO{sub 3} nanostructures, providing a means of preparing materials with different percentages of edge sites that may have different chemical and physical properties than atoms in the interior of the nanostructures. MoO{sub 3} islands were prepared by physical vapor deposition of Mo and subsequent oxidation by NO{sub 2} exposure at temperatures between 450 K and 600 K. They exhibit a crystalline structure with a c(4x2) periodicity relative to unreconstructed Au(111). While the atomic-scale structure is identical to that of MoO{sub 3} islands prepared by chemical vapor deposition, we demonstrate that the distribution of MoO{sub 3} islands on the Au(111) surface reflects the distribution of Mo clusters prior to oxidation although the growth of MoO{sub 3} involves long-range mass transport via volatile MoO{sub 3} precursor species. The island morphology is kinetically controlled at 450 K, whereas an equilibrium shape is approached at higher preparation temperatures or after prolonged annealing at the elevated temperature. Mo deposition at or above 525 K leads to the …
Date: April 22, 2004
Creator: Biener, M M; Biener, J; Schalek, R & Friend, C M
System: The UNT Digital Library
Little Supersymmetry and the Supersymmetric Little Hierarchy Problem (open access)

Little Supersymmetry and the Supersymmetric Little Hierarchy Problem

The current experimental lower bound on the Higgs mass significantly restricts the allowed parameter space in most realistic supersymmetric models, with the consequence that these models exhibit significant fine-tuning. We propose a solution to this `supersymmetric little hierarchy problem'. We consider scenarios where the stop masses are relatively heavy - in the 500 GeV to a TeV range. Radiative stability of the Higgs soft mass against quantum corrections from the top quark Yukawa coupling is achieved by imposing a global SU(3) symmetry on this interaction. This global symmetry is only approximate - it is not respected by the gauge interactions. A subgroup of the global symmetry is gauged by the familiar SU(2) of the Standard Model. The physical Higgs is significantly lighter than the other scalars because it is the pseudo-Goldstone boson associated with the breaking of this symmetry. Radiative corrections to the Higgs potential naturally lead to the right pattern of gauge and global symmetry breaking. We show that both the gauge and global symmetries can be embedded into a single SU(6) grand unifying group, thereby maintaining the prediction of gauge coupling unification. Among the firm predictions of this class of models are new states with the quantum numbers …
Date: April 22, 2004
Creator: Birkedal, Andreas; Chacko, Z. & Gaillard, Mary K.
System: The UNT Digital Library
Improved Pinhole-Apertured Point-Projection Backlighter Geometry (open access)

Improved Pinhole-Apertured Point-Projection Backlighter Geometry

Pinhole-apertured point-projection x-ray radiography is an important diagnostic technique for obtaining high resolution, high contrast, and large field-of-view images used to diagnose the hydrodynamic evolution of high energy density experiments. In this technique, a pinhole aperture is placed between a laser irradiated foil (x-ray source) and an imaging detector. In this letter, we present an improved backlighter geometry that utilizes a tilted pinhole for debris mitigation and a front-side illuminated backlighter foil for improved photon statistics.
Date: April 13, 2004
Creator: Blue, B.; Robey, H. F. & Hansen, J. F.
System: The UNT Digital Library