
Deletion of ultraconserved elements yields viable mice (open access)

Deletion of ultraconserved elements yields viable mice

Ultraconserved elements have been suggested to retain extended perfect sequence identity between the human, mouse, and rat genomes due to essential functional properties. To investigate the necessities of these elements in vivo, we removed four non-coding ultraconserved elements (ranging in length from 222 to 731 base pairs) from the mouse genome. To maximize the likelihood of observing a phenotype, we chose to delete elements that function as enhancers in a mouse transgenic assay and that are near genes that exhibit marked phenotypes both when completely inactivated in the mouse as well as when their expression is altered due to other genomic modifications. Remarkably, all four resulting lines of mice lacking these ultraconserved elements were viable and fertile, and failed to reveal any critical abnormalities when assayed for a variety of phenotypes including growth, longevity, pathology and metabolism. In addition, more targeted screens, informed by the abnormalities observed in mice where genes in proximity to the investigated elements had been altered, also failed to reveal notable abnormalities. These results, while not inclusive of all the possible phenotypic impact of the deleted sequences, indicate that extreme sequence constraint does not necessarily reflect crucial functions required for viability.
Date: July 15, 2007
Creator: Ahituv, Nadav; Zhu, Yiwen; Visel, Axel; Holt, Amy; Afzal, Veena; Pennacchio, Len A. et al.
System: The UNT Digital Library
Environmental Stewardship: How Semiconductor Suppliers Help to Meet Energy-Efficiency Regulations and Voluntary Specifications in China (open access)

Environmental Stewardship: How Semiconductor Suppliers Help to Meet Energy-Efficiency Regulations and Voluntary Specifications in China

Recognizing the role that semiconductor suppliers can play in meeting energy-efficiency regulations and voluntary specifications, this paper provides an overview of Chinese policies and implementing bodies; a discussion of current programs, their goals, and effectiveness; and possible steps that can be taken to meet these energy-efficiency requirements while also meeting products' high performance and cost goals.
Date: January 15, 2007
Creator: Aizhen, Li; Fanara, Andrew; Fridley, David; Merriman, Louise & Ju, Jeff
System: The UNT Digital Library
Feasibility Evaluation and Retrofit Plan for Cold Crucible Induction Melter Deployment in the Defense Waste Processing Facility at Savannah River Site - 8118 (open access)

Feasibility Evaluation and Retrofit Plan for Cold Crucible Induction Melter Deployment in the Defense Waste Processing Facility at Savannah River Site - 8118

Cold crucible induction melters (CCIM) have been proposed as an alternative technology for waste glass melting at the Defense Waste Processing Facility (DWPF) at Savannah River Site (SRS) as well as for other waste vitrification facilities. Proponents of this technology cite high temperature operation, high tolerance for noble metals and aluminum, high waste loading, high throughput capacity, and low equipment cost as the advantages over existing Joule Heated Melter (JHM) technology. This paper describes the CCIM technology and identifies technical challenges that must be addressed in order to implement CCIMs in the DWPF. The CCIM uses induction heating to maintain molten glass at high temperature. A water-cooled helical induction coil is connected to an AC current supply, typically operating at frequencies from 100 kHz to 5 MHz. The oscillating magnetic field generated by the oscillating current flow through the coil induces eddy currents in conductive materials within the coil. Those oscillating eddy currents, in turn, generate heat in the material. In the CCIM, the induction coil surrounds a 'Cold Crucible' which is formed by metal tubes, typically copper or stainless steel. The tubes are constructed such that the magnetic field does not couple with the crucible. Therefore, the field generated …
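
The induction-heating mechanism described above can be made concrete with a rough skin-depth estimate: the depth over which the induced eddy currents deposit their heat in the melt. The sketch below is illustrative only; the melt resistivity used is an assumed placeholder value, not a figure from the paper.

# Hypothetical back-of-envelope sketch (not from the paper): electromagnetic skin depth,
# the depth over which induced eddy currents deposit their heat in the melt.
# The resistivity value below is an assumed, order-of-magnitude placeholder.
import math

MU_0 = 4.0e-7 * math.pi  # vacuum permeability, H/m

def skin_depth(resistivity_ohm_m, frequency_hz, mu_r=1.0):
    """delta = sqrt(rho / (pi * f * mu0 * mu_r)), in metres."""
    return math.sqrt(resistivity_ohm_m / (math.pi * frequency_hz * MU_0 * mu_r))

# Assumed representative resistivity of molten waste glass (~0.05 ohm*m) at melt temperature.
for f in (1e5, 1e6, 5e6):  # spans the 100 kHz - 5 MHz range quoted above
    print(f"f = {f/1e3:7.0f} kHz: skin depth in melt ~ {skin_depth(0.05, f)*100:.1f} cm")
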
Date: November 15, 2007
Creator: Barnes, A.; Dan Iverson, D. & Brannen Adkins, B.
System: The UNT Digital Library
Detection of low energy single ion impacts in micron scale transistors at room temperature (open access)

Detection of low energy single ion impacts in micron scale transistors at room temperature

We report the detection of single ion impacts through monitoring of changes in the source-drain currents of field effect transistors (FET) at room temperature. Implant apertures are formed in the interlayer dielectrics and gate electrodes of planar, micro-scale FETs by electron beam assisted etching. FET currents increase due to the generation of positively charged defects in gate oxides when ions (121Sb12+, 14+, Xe6+; 50 to 70 keV) impinge into channel regions. Implant damage is repaired by rapid thermal annealing, enabling iterative cycles of device doping and electrical characterization for development of single atom devices and studies of dopant fluctuation effects.
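
As a toy illustration of the monitoring approach (not the authors' analysis code), the snippet below flags a persistent step increase in a simulated source-drain current trace by comparing sliding-window means; all magnitudes are made-up placeholders.

# Hypothetical illustration: detect the discrete upward step that a single ion impact
# would leave in a FET source-drain current trace. All numbers are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n, hit = 2000, 1200
current = 1.00 + 0.002 * rng.standard_normal(n)   # arbitrary baseline current (a.u.) + noise
current[hit:] += 0.01                              # persistent increase after the "impact"

def step_indices(trace, window=50, threshold=0.005):
    """Return sample indices where the mean of the next `window` points exceeds
    the mean of the previous `window` points by more than `threshold`."""
    hits = []
    for i in range(window, len(trace) - window):
        if trace[i:i + window].mean() - trace[i - window:i].mean() > threshold:
            hits.append(i)
    return hits

found = step_indices(current)
print(f"step detected near sample {found[0]} (true step at {hit})" if found else "no step found")
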
Date: October 15, 2007
Creator: Batra, A.; Weis, C. D.; Reijonen, J.; Persaud, A.; Schenkel, T.; Cabrini, S. et al.
System: The UNT Digital Library
Porphyrin-Based Photocatalytic Lithography (open access)

Porphyrin-Based Photocatalytic Lithography

Photocatalytic lithography is an emerging technique that couples light with coated mask materials in order to pattern surface chemistry. We excite porphyrins to create radical species that photocatalytically oxidize, and thereby pattern, chemistries in the local vicinity. The technique advantageously does not necessitate mass transport or specified substrates; it is fast and robust, and the wavelength of light does not limit the resolution of patterned features. We have patterned proteins and cells in order to demonstrate the utility of photocatalytic lithography in life science applications.
Date: October 15, 2007
Creator: Bearinger, J.; Stone, G.; Christian, A.; Dugan, L.; Hiddessen, A.; Wu, K. J. et al.
System: The UNT Digital Library
Optimal Heat Collection Element Shapes for Parabolic Trough Concentrators (open access)

Optimal Heat Collection Element Shapes for Parabolic Trough Concentrators

For nearly 150 years, the cross section of the heat collection tubes used at the focus of parabolic trough solar concentrators has been circular. This type of tube is obviously simple and easily fabricated, but it is not optimal. It is shown in this article that the optimal shape, assuming a perfect parabolic figure for the concentrating mirror, is instead oblong, and is approximately given by a pair of facing parabolic segments.
Date: November 15, 2007
Creator: Bennett, C
System: The UNT Digital Library
Search for a Reliable Storage Architecture for RHIC. (open access)

Search for a Reliable Storage Architecture for RHIC.

Software used to operate the Relativistic Heavy Ion Collider (RHIC) resides on one operational RAID storage system. This storage system is also used to store data that reflects the status and recent history of accelerator operations. Failure of this system interrupts the operation of the accelerator as backup systems are brought online. In order to increase the reliability of this critical control system component, the storage system architecture has been upgraded to use Storage Area Network (SAN) technology and to introduce redundant components and redundant storage paths. This paper describes the evolution of the storage system, the contributions to reliability that each additional feature has provided, further improvements that are being considered, and real-life experience with the current system.
Date: October 15, 2007
Creator: Binello, S.; Katz, R. A. & Morris, J. T.
System: The UNT Digital Library
Proceedings of RIKEN BNL Research Center Workshop Entitled - Domain Wall Fermions at Ten Years (Volume 84) (open access)

Proceedings of RIKEN BNL Research Center Workshop Entitled - Domain Wall Fermions at Ten Years (Volume 84)

The workshop was held to mark the 10th anniversary of the first numerical simulations of QCD using domain wall fermions initiated at BNL. It is very gratifying that, in the intervening decade, domain wall and overlap fermions have come into widespread use. It therefore seemed appropriate at this stage for some "communal introspection" of the progress that has been made, hurdles that need to be overcome, and physics that can and should be done with chiral fermions. The meeting was very well attended, drawing about 60 registered participants primarily from Europe, Japan and the US. It was quite remarkable that pioneers David Kaplan, Herbert Neuberger, Rajamani Narayanan, Yigal Shamir, Sinya Aoki, and Pavlos Vranas all attended the workshop. Comparisons between domain wall and overlap formulations, with their respective advantages and limitations, were discussed at length, and a broad physics program including pion and kaon physics, the epsilon regime, nucleon structure, and topology, among others, emerged. New machines and improved algorithms have played a key role in realizing realistic dynamical fermion lattice simulations (small quark mass, large volume, and so on), so much so, in fact, that measurements are now comparably costly. Consequently, ways to make the measurements more efficient were …
Date: March 15, 2007
Creator: Blum, T. & Soni, A.
System: The UNT Digital Library
Environmental Test Activity on the Flight Modules of the GLAST LAT Tracker (open access)

Environmental Test Activity on the Flight Modules of the GLAST LAT Tracker

The GLAST Large Area Telescope (LAT) is a gamma-ray telescope consisting of a silicon micro-strip detector tracker followed by a segmented CsI calorimeter and covered by a segmented scintillator anticoincidence system that will search for {gamma}-rays in the 20 MeV-300 GeV energy range. The results of the environmental tests performed on the flight modules (towers) of the Tracker are presented. The aim of the environmental tests is to verify the performance of the silicon detectors in the expected mission environment. The tower modules are subjected to dynamic tests that simulate the launch environment and thermal vacuum tests that reproduce the thermal gradients expected on orbit. The tower performance is continuously monitored during the whole test sequence. The environmental test activity, the results of the tests and the silicon tracker performance are presented.
Date: February 15, 2007
Creator: Brigida, M.; Caliandro, A.; Favuzzi, C.; Fusco, P.; Gargano, F.; Giglietto, N. et al.
System: The UNT Digital Library
Performance of the Integrated Tracker Towers of the GLAST Large Area Telescope (open access)

Performance of the Integrated Tracker Towers of the GLAST Large Area Telescope

The GLAST Large Area Telescope (LAT) is a high energy gamma ray observatory, mounted on a satellite that will be flown in 2007. The LAT tracker consists of an array of tower modules, equipped with planes of silicon strip detectors (SSDs) interleaved with tungsten converter layers. Photon detection is based on the pair conversion process; silicon strip detectors will reconstruct tracks of electrons and positrons. The instrument is currently being assembled. The first towers have already been tested and integrated at Stanford Linear Accelerator Center (SLAC). An overview of the integration stages of the main components of the tracker and a description of the pre-launch tests will be given. Experimental results on the performance of the tracker towers will also be discussed.
Date: February 15, 2007
Creator: Brigida, M.; Caliandro, A.; Favuzzi, C.; Fusco, P.; Gargano, F.; Giglietto, N. et al.
System: The UNT Digital Library
First Results From GLAST-LAT Integrated Towers Cosmic Ray Data Taking And Monte Carlo Comparison (open access)

First Results From GLAST-LAT Integrated Towers Cosmic Ray Data Taking And Monte Carlo Comparison

The GLAST Large Area Telescope (LAT) is a gamma ray telescope instrumented with silicon-strip detector planes and sheets of converter, followed by a calorimeter (CAL) and surrounded by an anticoincidence system (ACD). This instrument is sensitive to gamma rays in the energy range between 20 MeV and 300 GeV. At present, the first towers have been integrated and pre-launch data taking with cosmic ray muons is being performed. The results from the data analysis carried out during LAT integration will be discussed and a comparison with the predictions from the Monte Carlo simulation will be shown.
Date: February 15, 2007
Creator: Brigida, M.; Caliandro, A.; Favuzzi, C.; Fusco, P.; Gargano, F.; Giordano, F. et al.
System: The UNT Digital Library
Illuminating the 1/x Moment of Parton Distribution Functions (open access)

Illuminating the 1/x Moment of Parton Distribution Functions

The Weisberger relation, an exact statement of the parton model, elegantly relates a high-energy physics observable, the 1/x moment of parton distribution functions, to a nonperturbative low-energy observable: the dependence of the nucleon mass on the value of the quark mass or its corresponding quark condensate. We show that contemporary fits to nucleon structure functions fail to determine this 1/x moment; however, deeply virtual Compton scattering can be described in terms of a novel F1/x(t) form factor which illuminates this physics. An analysis of exclusive photon-induced processes in terms of the parton-nucleon scattering amplitude with Regge behavior reveals a failure of the high Q2 factorization of exclusive processes at low t in terms of the Generalized Parton Distribution Functions, which had previously been widely believed to hold. We emphasize the need for more data for the DVCS process at large t at future or upgraded facilities.
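
For readers unfamiliar with the quantity, a schematic definition of the 1/x moment referred to above, written for a single quark distribution q(x) (the flavor weighting that enters the Weisberger relation is not reproduced here):

\left\langle \tfrac{1}{x} \right\rangle_q \;=\; \int_0^1 \frac{dx}{x}\, q(x)

Because of the 1/x weight, the integral is dominated by the poorly constrained small-x region, which is why its convergence and value hinge on the Regge behavior of the parton-nucleon amplitude discussed above.
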
Date: October 15, 2007
Creator: Brodsky, Stanley J.; Llanes-Estrada, Felipe J. & Szczepaniak, Adam P.
System: The UNT Digital Library
Soft Error Vulnerability of Iterative Linear Algebra Methods (open access)

Soft Error Vulnerability of Iterative Linear Algebra Methods

Devices become increasingly vulnerable to soft errors as their feature sizes shrink. Previously, soft errors primarily caused problems for space and high-atmospheric computing applications. Modern architectures now use such small features, at such low voltages, that soft errors are becoming significant even at terrestrial altitudes. The soft error vulnerability of iterative linear algebra methods, which many scientific applications use, is a critical aspect of the overall application vulnerability. These methods are often considered invulnerable to many soft errors because they converge from an imprecise solution to a precise one. However, we show that iterative methods can be vulnerable to soft errors, with a high rate of silent data corruptions. We quantify this vulnerability, with algorithms generating up to 8.5% erroneous results when subjected to a single bit-flip. Further, we show that detecting soft errors in an iterative method depends on its detailed convergence properties and requires more complex mechanisms than simply checking the residual. Finally, we explore inexpensive techniques to tolerate soft errors in these methods.
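
To make the failure mode concrete, here is a small, self-contained sketch (not the paper's fault-injection framework): a single bit flip in the conjugate-gradient iterate leaves the recursively updated residual looking converged, while the true residual exposes the silent data corruption.

# Illustrative sketch only: a single bit flip in the CG iterate x is invisible to the
# recursively updated residual r, but the true residual b - A x reveals the corruption.
import struct
import numpy as np

def flip_bit(value, bit):
    # Flip one bit of an IEEE-754 double (bit 63 is the sign bit).
    (word,) = struct.unpack("<Q", struct.pack("<d", value))
    return struct.unpack("<d", struct.pack("<Q", word ^ (1 << bit)))[0]

rng = np.random.default_rng(1)
n = 100
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # well-conditioned symmetric positive definite matrix
b = rng.standard_normal(n)

x, r = np.zeros(n), b.copy()       # r is the recursively updated residual
p = r.copy()
for k in range(200):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new
    if k == 10:                    # inject a single sign-bit flip into one entry of x
        x[3] = flip_bit(x[3], 63)
    if np.linalg.norm(r) < 1e-10:  # the usual convergence check sees nothing wrong
        break

print("recursive residual:", np.linalg.norm(r))
print("true residual     :", np.linalg.norm(b - A @ x))
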
Date: December 15, 2007
Creator: Bronevetsky, G & de Supinski, B
System: The UNT Digital Library
First observations of beam losses due to bound-free pair production in a heavy-ion collider (open access)

First observations of beam losses due to bound-free pair production in a heavy-ion collider

We report the first observations of beam losses due to bound-free pair production at the interaction point of a heavy-ion collider. This process is expected to be a major luminosity limit for the Large Hadron Collider (LHC) when it operates with 208Pb82+ ions because the localized energy deposition by the lost ions may quench superconducting magnet coils. Measurements were performed at the Relativistic Heavy Ion Collider (RHIC) during operation with 100 GeV/nucleon 63Cu29+ ions. At RHIC, the rate, energy and magnetic field are low enough so that magnet quenching is not an issue. The hadronic showers produced when the single-electron ions struck the RHIC beam pipe were observed using an array of photodiodes. The measurement confirms the order of magnitude of the theoretical cross section previously calculated by others.
Date: June 15, 2007
Creator: Bruce, R.; Jowett, J.M.; Gilardoni, S.; Drees, A.; Fischer, W.; Tepikian, S. et al.
System: The UNT Digital Library
Water adsorption on O(2x2)/Ru(0001) from STM experiments and first-principles calculations (open access)

Water adsorption on O(2x2)/Ru(0001) from STM experiments and first-principles calculations

We present a combined theoretical and experimental study of water adsorption on Ru(0001) pre-covered with 0.25 monolayers (ML) of oxygen forming a (2 x 2) structure. Several structures were analyzed by means of Density Functional Theory calculations for which STM simulations were performed and compared with experimental data. Up to 0.25 monolayers the molecules bind to the exposed Ru atoms of the 2 x 2 unit cell via the lone pair orbitals. The molecular plane is almost parallel to the surface with its H atoms pointing towards the chemisorbed O atoms of the 2 x 2 unit cell forming hydrogen bonds. The existence of these additional hydrogen bonds increases the adsorption energy of the water molecule to approximately 616 meV, which is {approx}220 meV more stable than on the clean Ru(0001) surface with a similar configuration. The binding energy shows only a weak dependence on water coverage, with a shallow minimum for a row structure at 0.125 ML. This is consistent with the STM experiments that show a tendency of the molecules to form linear rows at intermediate coverage. Our calculations also suggest the possible formation of water dimers near 0.25 ML.
Date: October 15, 2007
Creator: Cabrera-Sanfelix, P.; Sanchez-Portal, D.; Mugarza, A.; Shimizu,T.K.; Salmeron, M. & Arnau, A.
System: The UNT Digital Library
Similarity-Guided Streamline Placement with Error Evaluation (open access)

Similarity-Guided Streamline Placement with Error Evaluation

Most streamline generation algorithms either provide a particular density of streamlines across the domain or explicitly detect features, such as critical points, and follow customized rules to emphasize those features. However, the former generally includes many redundant streamlines, and the latter requires Boolean decisions on which points are features (and may thus suffer from robustness problems for real-world data). We take a new approach to adaptive streamline placement for steady vector fields in 2D and 3D. We define a metric for local similarity among streamlines and use this metric to grow streamlines from a dense set of candidate seed points. The metric considers not only Euclidean distance, but also a simple statistical measure of shape and directional similarity. Without explicit feature detection, our method produces streamlines that naturally accentuate regions of geometric interest. In conjunction with this method, we also propose a quantitative error metric for evaluating a streamline representation based on how well it preserves the information from the original vector field. This error metric reconstructs a vector field from points on the streamline representation and computes a difference of the reconstruction from the original vector field.
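
A toy version of such a similarity measure is sketched below; the distance and direction terms and their weights are illustrative placeholders, not the authors' formulation.

# Hypothetical sketch: a local similarity score between two streamline segments that
# mixes Euclidean proximity with directional agreement. Segments are (n, 2) point arrays.
import numpy as np

def unit_tangents(points):
    """Unit tangent vectors along a polyline given as an (n, 2) array of points."""
    d = np.diff(points, axis=0)
    return d / np.linalg.norm(d, axis=1, keepdims=True)

def local_similarity(a, b, w_dist=1.0, w_dir=1.0):
    """Lower scores mean 'more similar': mean point-to-point distance plus mean
    directional disagreement (1 - cosine of the angle between tangents)."""
    dist_term = np.linalg.norm(a - b, axis=1).mean()
    dir_term = (1.0 - np.sum(unit_tangents(a) * unit_tangents(b), axis=1)).mean()
    return w_dist * dist_term + w_dir * dir_term

# Two nearby, nearly parallel segments score low; a crossing segment scores higher.
t = np.linspace(0.0, 1.0, 20)
s1 = np.column_stack([t, 0.00 * t])
s2 = np.column_stack([t, 0.05 + 0.0 * t])      # parallel, offset by 0.05
s3 = np.column_stack([t, 0.5 - t])             # crossing at an angle
print("parallel:", round(local_similarity(s1, s2), 3), " crossing:", round(local_similarity(s1, s3), 3))
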
Date: August 15, 2007
Creator: Chen, Y.; Cohen, J. D. & Krolik, J. H.
System: The UNT Digital Library
Electron Energy Distributions at Relativistic Shock Sites: Observational Constraints from the Cygnus A Hotspots (open access)

Electron Energy Distributions at Relativistic Shock Sites: Observational Constraints from the Cygnus A Hotspots

We report new detections of the hotspots in Cygnus A at 4.5 and 8.0 microns with the Spitzer Space Telescope. Together with detailed published radio observations and synchrotron self-Compton modeling of previous X-ray detections, we reconstruct the underlying electron energy spectra of the two brightest hotspots (A and D). The low-energy portions of the electron distributions have flat power-law slopes (s {approx} 1.5) up to the break energy, which corresponds almost exactly to the mass ratio between protons and electrons; we argue that these features are most likely intrinsic rather than due to absorption effects. Beyond the break, the electron spectra continue to higher energies with very steep slopes s > 3. Thus, there is no evidence for the 'canonical' s = 2 slope expected in 1st order Fermi-type shocks within the whole observable electron energy range. We discuss the significance of these observations and the insight offered into high-energy particle acceleration processes in mildly relativistic shocks.
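
For orientation, the electron energy distribution described above corresponds to a broken power law of the schematic form (a standard parameterization; the symbols are generic, not values quoted from the paper):

N(\gamma) \,\propto\,
\begin{cases}
\gamma^{-s_1}, & \gamma < \gamma_{\rm br} \\
\gamma^{-s_2}, & \gamma \ge \gamma_{\rm br}
\end{cases}
\qquad s_1 \approx 1.5, \quad s_2 > 3,

with the break Lorentz factor \gamma_{\rm br} of order the proton-to-electron mass ratio m_p/m_e \approx 1836, in contrast to the single slope s = 2 expected from first-order Fermi acceleration.
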
Date: October 15, 2007
Creator: Cheung, C.C.Teddy; Stawarz, L.; Harris, D.E. & Ostrowski, M.
System: The UNT Digital Library
Automated Science Processing for GLAST LAT Data (open access)

Automated Science Processing for GLAST LAT Data

Automated Science Processing (ASP) will be performed by the GLAST Large Area Telescope (LAT) Instrument Science Operations Center (ISOC) on data from the satellite as soon as the Level 1 data are available in the ground processing pipeline. ASP will consist of time-critical science analyses that will facilitate follow-up and multi-wavelength observations of transient sources. These analyses include refinement of gamma-ray burst (GRB) positions, timing, flux and spectral properties, off-line searches for untriggered GRBs and gamma-ray afterglows, longer time scale monitoring of a standard set of sources (AGNs, X-ray binaries), and searches for previously unknown flaring sources in the LAT band. We describe the design of ASP and its scientific products; and we show results of a prototype implementation, driven by the standard LAT data processing pipeline, as applied to simulated LAT and GBM data.
Date: October 15, 2007
Creator: Chiang, James; Carson, Jennifer & Focke, Warren
System: The UNT Digital Library
Using Sequencing to Improve Operational Efficiency and Reliability (open access)

Using Sequencing to Improve Operational Efficiency and Reliability

Operation of an accelerator requires the efficient and reproducible execution of many different types of procedures. Some procedures, like beam acceleration, magnet quench recovery, and species switching, can be quite complex. To improve accelerator reliability and efficiency, automated execution of procedures is required. Creation of a single robust sequencing application permits the streamlining of this process and offers many benefits in sequence creation, editing, and control. In this paper, we present key features of a sequencer application commissioned at the Collider-Accelerator Department of Brookhaven National Laboratory during the 2007 run. Included is a categorization of the different types of sequences in use, a discussion of the features considered desirable in a good sequencer, and a description of the tools created to aid in sequence construction and diagnosis. Finally, highlights from our operational experience are presented, with emphasis on Operations control of the sequencer, and the alignment of sequence construction with existing operational paradigms.
Date: October 15, 2007
Creator: D'Ottavio, T. & Niedziela, J.
System: The UNT Digital Library
Tracking Accelerator Settings. (open access)

Tracking Accelerator Settings.

Recording setting changes within an accelerator facility provides information that can be used to answer questions about when, why, and how changes were made to some accelerator system. This can be very useful during normal operations, but can also aid with security concerns and in detecting unusual software behavior. The Set History System (SHS) is a new client-server system developed at the Collider-Accelerator Department of Brookhaven National Laboratory to provide these capabilities. The SHS has been operational for over two years and currently stores about 100K settings per day into a commercial database management system. The SHS system consists of a server written in Java, client tools written in both Java and C++, and a web interface for querying the database of setting changes. The design of the SHS focuses on performance, portability, and a minimal impact on database resources. In this paper, we present an overview of the system design along with benchmark results showing the performance and reliability of the SHS over the last year.
Date: October 15, 2007
Creator: D'Ottavio, T.; Fu, W. & Ottavio, D. P.
System: The UNT Digital Library
On Direct Verification of Warped Hierarchy-and-Flavor Models (open access)

On Direct Verification of Warped Hierarchy-and-Flavor Models

We consider direct experimental verification of warped models, based on the Randall-Sundrum (RS) scenario, that explain gauge and flavor hierarchies, assuming that the gauge fields and fermions of the Standard Model (SM) propagate in the 5D bulk. Most studies have focused on the bosonic Kaluza Klein (KK) signatures and indicate that discovering gauge KK modes is likely possible, yet challenging, while graviton KK modes are unlikely to be accessible at the LHC, even with a luminosity upgrade. We show that direct evidence for bulk SM fermions, i.e. their KK modes, is likely also beyond the reach of a luminosity-upgraded LHC. Thus, neither the spin-2 KK graviton, the most distinct RS signal, nor the KK SM fermions, direct evidence for bulk flavor, seem to be within the reach of the LHC. We then consider hadron colliders with {radical}s = 21, 28, and 60 TeV. We find that discovering the first KK modes of SM fermions and the graviton typically requires the Next Hadron Collider (NHC) with {radical}s {approx} 60 TeV and O(1) ab{sup -1} of integrated luminosity. If the LHC yields hints of these warped models, establishing that Nature is described by them, or their 4D CFT duals, requires an NHC-class machine …
Date: October 15, 2007
Creator: Davoudiasl, Hooman; Rizzo, Thomas G. & Soni, Amarjit
System: The UNT Digital Library
Quantitative Computational Thermochemistry of Transition Metal Species (open access)

Quantitative Computational Thermochemistry of Transition Metal Species

This article discusses quantitative computational thermochemistry of transition metal species. The correlation consistent Composite Approach (ccCA), which has been shown to achieve chemical accuracy (±1 kcal mol⁻¹) for a large benchmark set of main group and s-block metal compounds, is used to compute enthalpies of formation for a set of 17 3d transition metal species.
Date: May 15, 2007
Creator: DeYonker, Nathan J.; Peterson, Kirk A.; Steyl, Gideon; Wilson, Angela K. & Cundari, Thomas R., 1964-
System: The UNT Digital Library
Measurement of Angle beta with Time-dependent CP Asymmetry in B0 to K+K-K0 Decays (open access)

Measurement of Angle beta with Time-dependent CP Asymmetry in B0 to K+K-K0 Decays

In the Standard Model (SM) of particle physics, the phase of the Cabibbo-Kobayashi-Maskawa (CKM) quark-mixing matrix [1, 2] is the only source of CP violation in the quark sector. Due to the interference between mixing and decay, this phase can be observed in measurements of time-dependent CP asymmetries of B{sup 0} mesons. In the SM, CP asymmetries in b {yields} s{bar s}s decays, such as B{sup 0} {yields} K{sup +}K{sup -}K{sup 0}, are expected to be nearly equal to those observed in tree-dominated b {yields} c{bar c}s decays [3]. However, because in the SM the former are dominated by loop amplitudes, new particles in those loops could potentially introduce new physics at the same order as the SM process. Within the SM, deviations from the expected CP asymmetries in B{sup 0} {yields} K{sup +}K{sup -}K{sup 0} decays depend on the Dalitz plot position, but are expected to be small and positive [4]. In particular, for the decay B{sup 0} {yields} {phi}K{sup 0} they are expected to be less than 4%. BABAR extracts the time-dependent CP-violation parameters by taking into account different amplitudes and phases across the B{sup 0} and {bar B}{sup 0} Dalitz plots, while Belle measures it separately for B{sup …
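
For reference, the time-dependent CP asymmetry being extracted has the standard form (generic notation, not parameters specific to this analysis):

A_{CP}(\Delta t) \;=\; \frac{\Gamma(\bar{B}^0(\Delta t) \to f) - \Gamma(B^0(\Delta t) \to f)}{\Gamma(\bar{B}^0(\Delta t) \to f) + \Gamma(B^0(\Delta t) \to f)} \;=\; S_f \sin(\Delta m_d\,\Delta t) - C_f \cos(\Delta m_d\,\Delta t),

where \Delta t is the difference of the two B-meson decay times and \Delta m_d the B^0 mixing frequency; in the Standard Model, S_f for the b \to s\bar{s}s modes discussed above is expected to lie close to \sin 2\beta, up to the CP eigenvalue of the final state and the small corrections noted in the abstract.
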
Date: February 15, 2007
Creator: Di Marco, Emanuele
System: The UNT Digital Library
Baryon Triality And Neutrino Masses From An Anomalous Flavor U(1) (open access)

Baryon Triality And Neutrino Masses From An Anomalous Flavor U(1)

We construct a concise U(1){sub X} Froggatt-Nielsen model in which baryon triality, a discrete gauge Z{sub 3}-symmetry, arises from U(1){sub X} breaking. The proton is thus stable, however, R-parity is violated. With the proper choice of U(1){sub X} charges we can obtain neutrino masses and mixings consistent with an explanation of the atmospheric and solar neutrino anomalies in terms of neutrino oscillations, with no right-handed neutrinos required. The only mass scale apart from M{sub grav} is m{sub soft}.
Date: August 15, 2007
Creator: Dreiner, Herbi K.; Luhn, Christoph; Murayama, Hitoshi & Thormeier,Marc
System: The UNT Digital Library