WorldWideScience

Sample records for k-space sampling density

  1. Sodium magnetic resonance imaging. Development of a 3D radial acquisition technique with optimized k-space sampling density and high SNR-efficiency

    International Nuclear Information System (INIS)

    Nagel, Armin Michael

    2009-01-01

    A 3D radial k-space acquisition technique with homogeneous distribution of the sampling density (DA-3D-RAD) is presented. This technique enables the short echo times (TE < 0.5 ms) that are necessary for 23Na-MRI and provides a high SNR-efficiency. The gradients of the DA-3D-RAD sequence are designed such that the average sampling density in each spherical shell of k-space is constant. The DA-3D-RAD sequence provides 34% more SNR than a conventional 3D radial sequence (3D-RAD) if T2*-decay is neglected. This SNR gain is enhanced if T2*-decay is present, so a 1.5- to 1.8-fold higher SNR is measured in brain tissue with the DA-3D-RAD sequence. Simulations and experimental measurements show that the DA-3D-RAD sequence yields a better resolution in the presence of T2*-decay and fewer image artefacts when B0-inhomogeneities exist. Using the developed sequence, T1-, T2*- and inversion-recovery 23Na-image contrasts were acquired for several organs and 23Na-relaxation times were measured (brain tissue: T1 = 29.0±0.3 ms; T2s* ≈ 4 ms; T2l* ≈ 31 ms; cerebrospinal fluid: T1 = 58.1±0.6 ms; T2* = 55±3 ms (B0 = 3 T)). T1- and T2*-relaxation times of cerebrospinal fluid are independent of the selected magnetic field strength (B0 = 3 T/7 T), whereas the relaxation times of brain tissue increase with field strength. Furthermore, 23Na-signals of oedemata were suppressed in patients and thus signals from different tissue compartments were selectively measured. (orig.)
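
    The density-adapted idea admits a compact numerical check. The sketch below is an illustration under assumed parameters, not the authors' gradient design: placing samples along each spoke with k ∝ t^(1/3) makes the average sample density per spherical shell constant (shell volume grows as k², so radial spacing must shrink as 1/k²), whereas conventional uniform spacing gives a density falling off as 1/k².

```python
import numpy as np

def radial_sample_radii(n_samples, k_max, density_adapted=False):
    """Radial positions of samples along one spoke.

    Conventional radial: uniform spacing (constant readout gradient).
    Density-adapted: k ~ t**(1/3), so that with a fixed number of spokes
    the average sample density per spherical k-space shell is constant.
    """
    t = np.arange(1, n_samples + 1) / n_samples
    return k_max * t ** (1.0 / 3.0) if density_adapted else k_max * t

k_max, n = 1.0, 100_000
edges = np.linspace(0.0, k_max, 21)                       # 20 spherical shells
shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)

conv, _ = np.histogram(radial_sample_radii(n, k_max), bins=edges)
da, _ = np.histogram(radial_sample_radii(n, k_max, True), bins=edges)
conv_density = conv / shell_vol   # falls off roughly as 1/k**2
da_density = da / shell_vol       # stays approximately flat
```

    Checking `da_density` confirms a near-constant per-shell density, while `conv_density` drops by orders of magnitude from the first shell to the last.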

  2. Two-dimensional T2 distribution mapping in rock core plugs with optimal k-space sampling.

    Science.gov (United States)

    Xiao, Dan; Balcom, Bruce J

    2012-07-01

    Spin-echo single point imaging has been employed for 1D T2 distribution mapping, but a simple extension to 2D is challenging since the acquisition time increases n-fold, where n is the number of pixels in the second dimension. Nevertheless, 2D T2 mapping in fluid-saturated rock core plugs is highly desirable because the bedding-plane structure in rocks often results in different pore properties within the sample. The acquisition time can be improved by undersampling k-space. The cylindrical shape of rock core plugs yields well-defined intensity distributions in k-space that may be efficiently determined by the new k-space sampling patterns developed in this work. These patterns acquire 22.2% and 11.7% of the k-space data points. Companion density images may be employed, in a keyhole-imaging sense, to improve image quality. T2-weighted images are fit to extract T2 distributions, pixel by pixel, employing an inverse Laplace transform. Images reconstructed with compressed sensing, with similar acceleration factors, are also presented. The results show that restricted k-space sampling, in this application, provides high-quality results. Copyright © 2012 Elsevier Inc. All rights reserved.
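
    The pixel-wise inverse Laplace step can be sketched numerically. Below is a minimal regularized non-negative fit of a T2 distribution from a multi-echo decay; the echo times, T2 grid, and Tikhonov regularization are illustrative assumptions, and the authors' actual solver and regularization may differ.

```python
import numpy as np
from scipy.optimize import nnls

def t2_distribution(te, signal, t2_grid, alpha=1e-2):
    """Pixel-wise T2 distribution via a regularized inverse Laplace
    transform: solve min ||K f - s||^2 + alpha ||f||^2 with f >= 0,
    where K[i, j] = exp(-te[i] / t2_grid[j])."""
    K = np.exp(-te[:, None] / t2_grid[None, :])
    # fold Tikhonov regularization into an augmented NNLS system
    A = np.vstack([K, np.sqrt(alpha) * np.eye(len(t2_grid))])
    b = np.concatenate([signal, np.zeros(len(t2_grid))])
    f, _ = nnls(A, b)
    return f

# synthetic single-T2 decay (T2 = 50 ms) sampled at 32 echoes
te = np.arange(1, 33) * 4e-3            # 4 ms echo spacing, in seconds
t2_grid = np.logspace(-3, 0, 64)        # 1 ms .. 1 s
sig = np.exp(-te / 50e-3)
f = t2_distribution(te, sig, t2_grid)
t2_peak = t2_grid[np.argmax(f)]         # should fall near 50 ms
```

    In the 2D mapping described above, this fit is simply repeated for every pixel of the T2-weighted image series.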

  3. Reduced aliasing artifacts using shaking projection k-space sampling trajectory

    Science.gov (United States)

    Zhu, Yan-Chun; Du, Jiang; Yang, Wen-Chao; Duan, Chai-Jie; Wang, Hao-Yu; Gao, Song; Bao, Shang-Lian

    2014-03-01

    Radial imaging techniques, such as projection-reconstruction (PR), are used in magnetic resonance imaging (MRI) for dynamic imaging, angiography, and short-T2 imaging. They are less sensitive to flow and motion artifacts, and support fast imaging with short echo times. However, aliasing and streaking artifacts are two main sources of degradation in radial imaging. For a fixed number of k-space projections, the data distributions along the radial and angular directions influence the level of aliasing and streaking artifacts. The conventional radial k-space sampling trajectory introduces an aliasing artifact at the first principal ring of the point spread function (PSF). In this paper, a shaking projection (SP) k-space sampling trajectory is proposed to reduce aliasing artifacts in MR images. The SP sampling trajectory shifts alternate projections along the k-space center, which separates the k-space data in the azimuthal direction. Simulations based on the conventional and SP sampling trajectories were compared with the same number of projections. A significant reduction of aliasing artifacts was observed using the SP sampling trajectory. The two trajectories were also compared at different sampling frequencies. An SP trajectory has the same aliasing character when using half the sampling frequency (or half the data) for reconstruction. SNR comparisons at different white-noise levels show that the two trajectories have the same SNR character. In conclusion, the SP trajectory can reduce the aliasing artifact without decreasing SNR and also provides a way to perform undersampled reconstruction. Furthermore, this method can be applied to three-dimensional (3D) hybrid or spherical radial k-space sampling for a more efficient reduction of aliasing artifacts.
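
    The shift of alternate projections can be illustrated with sample coordinates alone. The sketch below uses an assumed geometry, not the published gradient implementation: offsetting every other spoke by half the radial sample spacing places its samples between the rings formed by the unshifted spokes, which is what separates the data azimuthally.

```python
import numpy as np

def radial_trajectory(n_spokes, n_samples, k_max, shaking=False):
    """2D radial sample coordinates, returned as complex kx + i*ky.

    With shaking=True, every other spoke is shifted by half a radial
    sample spacing through the k-space center, so adjacent spokes no
    longer place their samples on the same rings (a sketch of the SP
    idea, not the authors' exact implementation)."""
    dk = 2 * k_max / (n_samples - 1)
    k = np.linspace(-k_max, k_max, n_samples)
    spokes = []
    for i in range(n_spokes):
        phi = np.pi * i / n_spokes
        r = k + (dk / 2 if (shaking and i % 2) else 0.0)
        spokes.append(r * np.exp(1j * phi))
    return np.array(spokes)

conv = radial_trajectory(64, 128, 0.5)
sp = radial_trajectory(64, 128, 0.5, shaking=True)
# conventional: adjacent spokes share the same sample radii (rings);
# shaking: odd spokes sit halfway between the rings of even spokes
conv_gap = np.abs(np.abs(conv[1]) - np.abs(conv[0])).min()
sp_gap = np.abs(np.abs(sp[1]) - np.abs(sp[0])).min()
```

    `conv_gap` is (numerically) zero while `sp_gap` equals half the radial sample spacing, which is the azimuthal-separation property the abstract describes.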

  4. Reduced aliasing artifacts using shaking projection k-space sampling trajectory

    International Nuclear Information System (INIS)

    Zhu Yan-Chun; Yang Wen-Chao; Wang Hao-Yu; Gao Song; Bao Shang-Lian; Du Jiang; Duan Chai-Jie

    2014-01-01

    Radial imaging techniques, such as projection-reconstruction (PR), are used in magnetic resonance imaging (MRI) for dynamic imaging, angiography, and short-T2 imaging. They are less sensitive to flow and motion artifacts, and support fast imaging with short echo times. However, aliasing and streaking artifacts are two main sources of degradation in radial imaging. For a fixed number of k-space projections, the data distributions along the radial and angular directions influence the level of aliasing and streaking artifacts. The conventional radial k-space sampling trajectory introduces an aliasing artifact at the first principal ring of the point spread function (PSF). In this paper, a shaking projection (SP) k-space sampling trajectory is proposed to reduce aliasing artifacts in MR images. The SP sampling trajectory shifts alternate projections along the k-space center, which separates the k-space data in the azimuthal direction. Simulations based on the conventional and SP sampling trajectories were compared with the same number of projections. A significant reduction of aliasing artifacts was observed using the SP sampling trajectory. The two trajectories were also compared at different sampling frequencies. An SP trajectory has the same aliasing character when using half the sampling frequency (or half the data) for reconstruction. SNR comparisons at different white-noise levels show that the two trajectories have the same SNR character. In conclusion, the SP trajectory can reduce the aliasing artifact without decreasing SNR and also provides a way to perform undersampled reconstruction. Furthermore, this method can be applied to three-dimensional (3D) hybrid or spherical radial k-space sampling for a more efficient reduction of aliasing artifacts.

  5. Correlation between k-space sampling pattern and MTF in compressed sensing MRSI.

    Science.gov (United States)

    Heikal, A A; Wachowicz, K; Fallone, B G

    2016-10-01

    To investigate the relationship between the k-space sampling patterns used for compressed sensing MR spectroscopic imaging (CS-MRSI) and the modulation transfer function (MTF) of the metabolite maps. This relationship may allow the desired frequency content of the metabolite maps to be quantitatively tailored when designing an undersampling pattern. Simulations of a phantom were used to calculate the MTF of Nyquist-sampled (NS) 32 × 32 MRSI, and of four-times-undersampled CS-MRSI reconstructions. The dependence of the CS-MTF on the k-space sampling pattern was evaluated for three sets of k-space sampling patterns generated using different probability distribution functions (PDFs). CS-MTFs were also evaluated for three more sets of patterns generated using a modified algorithm in which the sampling ratios are constrained to adhere to the PDFs. A strong visual correlation, as well as a high R², was found between the MTF of CS-MRSI and the product of the frequency-dependent sampling ratio and the NS 32 × 32 MTF. Also, PDF-constrained sampling patterns led to higher reproducibility of the CS-MTF and stronger correlations to the above-mentioned product. The relationship established in this work provides the user with a theoretical solution for the MTF of CS-MRSI that is both predictable and customizable to the user's needs.
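
    The frequency-dependent sampling ratio at the heart of the reported correlation is straightforward to compute for a given pattern. The sketch below uses an assumed polynomial PDF and a 4× budget on a 32 × 32 grid (the paper's PDFs and its constrained-sampling algorithm differ in detail); it draws a random pattern and tabulates the acquired fraction per spatial-frequency bin, the quantity that multiplies the Nyquist-sampled MTF in the reported relation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32
ky, kx = np.meshgrid(np.arange(N) - N // 2, np.arange(N) - N // 2,
                     indexing="ij")
r = np.hypot(kx, ky) / (N // 2)          # normalized spatial frequency

# assumed variable-density PDF, scaled to a 4x undersampling budget
pdf = (1 - np.clip(r, 0, 1)) ** 2
pdf *= (N * N / 4) / pdf.sum()
mask = rng.random((N, N)) < np.clip(pdf, 0, 1)

# frequency-dependent sampling ratio: acquired fraction per |k| bin
bins = np.linspace(0, 1.0, 9)
which = np.digitize(r.ravel(), bins) - 1
ratio = np.array([mask.ravel()[which == b].mean() for b in range(8)])

# the reported empirical relation: MTF_CS(f) ~ ratio(f) * MTF_NS(f)
mtf_ns = np.ones(8)                      # idealized Nyquist-sampled MTF
mtf_cs_pred = ratio * mtf_ns
```

    With a fully sampled centre and sparsely sampled edges, `ratio` decreases with frequency, so the predicted CS-MTF rolls off accordingly.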

  6. A radial sampling strategy for uniform k-space coverage with retrospective respiratory gating in 3D ultrashort-echo-time lung imaging.

    Science.gov (United States)

    Park, Jinil; Shin, Taehoon; Yoon, Soon Ho; Goo, Jin Mo; Park, Jang-Yeon

    2016-05-01

    The purpose of this work was to develop a 3D radial-sampling strategy that maintains uniform k-space sample density after retrospective respiratory gating, and to demonstrate its feasibility in free-breathing ultrashort-echo-time lung MRI. A multi-shot, interleaved 3D radial sampling function was designed by segmenting a single-shot trajectory of projection views such that each interleaf samples k-space in an incoherent fashion. An optimal segmentation factor for the interleaved acquisition was derived from an approximate model of respiratory patterns such that radial interleaves are evenly accepted during retrospective gating. The optimality of the proposed sampling scheme was tested by numerical simulations and phantom experiments using human respiratory waveforms. Retrospectively respiratory-gated, free-breathing lung MRI with the proposed sampling strategy was performed in healthy subjects. The simulation yielded the most uniform k-space sample density with the optimal segmentation factor, as evidenced by the smallest standard deviation of the number of neighboring samples as well as minimal side-lobe energy in the point spread function. The optimality of the proposed scheme was also confirmed by minimal image artifacts in phantom images. Human lung images showed that the proposed sampling scheme significantly reduced streak and ring artifacts compared with conventional retrospective respiratory gating, while suppressing motion-related blurring compared with full sampling without respiratory gating. In conclusion, the proposed 3D radial-sampling scheme can effectively suppress image artifacts due to non-uniform k-space sample density in retrospectively respiratory-gated lung MRI by uniformly distributing gated radial views across k-space. Copyright © 2016 John Wiley & Sons, Ltd.
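
    The effect of interleaving on the gated view distribution can be simulated directly. The toy model below is built entirely on assumptions for illustration (sinusoidal breathing, round-robin segmentation, and an angle-sweep single-shot ordering); it shows that segmenting the view ordering into interleaves leaves the accepted projection angles far more uniformly distributed than acquiring the same single-shot ordering sequentially.

```python
import numpy as np

def interleaved_order(n_views, n_interleaves):
    """Round-robin segmentation of a single-shot view ordering:
    interleaf j acquires views j, j+S, j+2S, ... back to back."""
    return np.concatenate([np.arange(j, n_views, n_interleaves)
                           for j in range(n_interleaves)])

n_views, S = 4000, 8
angles = np.pi * np.arange(n_views) / n_views   # single-shot angle sweep

t = np.arange(n_views) * 0.005                  # one view per 5 ms
resp = np.sin(2 * np.pi * t / 4.0)              # 4 s breathing period
gate = resp < np.quantile(resp, 0.4)            # keep end-expiration 40%

def angular_cv(view_ids):
    """Coefficient of variation of the angular histogram of kept views."""
    h, _ = np.histogram(angles[view_ids], bins=12, range=(0, np.pi))
    return h.std() / h.mean()

seq_cv = angular_cv(np.arange(n_views)[gate])              # sequential
intl_cv = angular_cv(interleaved_order(n_views, S)[gate])  # interleaved
```

    Sequential acquisition leaves the accepted views clustered into a few angular arcs, whereas the interleaved ordering spreads the same gating windows across the whole angular range; the paper's optimal segmentation factor pushes this uniformity further by matching the interleaf duration to the respiratory pattern.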

  7. Sodium magnetic resonance imaging. Development of a 3D radial acquisition technique with optimized k-space sampling density and high SNR-efficiency; Natrium-Magnetresonanztomographie. Entwicklung einer 3D radialen Messtechnik mit optimierter k-Raum-Abtastdichte und hoher SNR-Effizienz

    Energy Technology Data Exchange (ETDEWEB)

    Nagel, Armin Michael

    2009-04-01

    A 3D radial k-space acquisition technique with homogeneous distribution of the sampling density (DA-3D-RAD) is presented. This technique enables the short echo times (TE < 0.5 ms) that are necessary for 23Na-MRI and provides a high SNR-efficiency. The gradients of the DA-3D-RAD sequence are designed such that the average sampling density in each spherical shell of k-space is constant. The DA-3D-RAD sequence provides 34% more SNR than a conventional 3D radial sequence (3D-RAD) if T2*-decay is neglected. This SNR gain is enhanced if T2*-decay is present, so a 1.5 to 1.8 fold higher SNR is measured in brain tissue with the DA-3D-RAD sequence. Simulations and experimental measurements show that the DA-3D-RAD sequence yields a better resolution in the presence of T2*-decay and less image artefacts when B0-inhomogeneities exist. Using the developed sequence, T1-, T2*- and Inversion-Recovery 23Na-image contrasts were acquired for several organs and 23Na-relaxation times were measured (brain tissue: T1 = 29.0±0.3 ms; T2s* ≈ 4 ms; T2l* ≈ 31 ms; cerebrospinal fluid: T1 = 58.1±0.6 ms; T2* = 55±3 ms (B0 = 3 T)). T1- and T2*-relaxation times of cerebrospinal fluid are independent of the selected magnetic field strength (B0 = 3 T/7 T), whereas the relaxation times of brain tissue increase with field strength. Furthermore, 23Na-signals of oedemata were suppressed in patients and thus signals from different tissue compartments were selectively measured. (orig.)

  8. Improved abdominal MRI in non-breath-holding children using a radial k-space sampling technique

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Hyuk; Choi, Young Hun; Cheon, Jung Eun; Lee, So Mi; Cho, Hyun Hae; Kim, Woo Sun; Kim, In One [Seoul National University Children's Hospital, Department of Radiology, Seoul (Korea, Republic of); Shin, Su Mi [SMG-SNU Boramae Medical Center, Department of Radiology, Seoul (Korea, Republic of)

    2015-06-15

    Radial k-space sampling techniques have been shown to reduce motion artifacts in adult abdominal MRI. To compare a T2-weighted radial k-space sampling MRI pulse sequence (BLADE) with standard respiratory-triggered T2-weighted turbo spin echo (TSE) in pediatric abdominal imaging. Axial BLADE and respiratory-triggered TSE sequences were performed without fat suppression in 32 abdominal MR examinations in children. We retrospectively assessed overall image quality, the presence of respiratory, peristaltic and radial artifacts, and lesion conspicuity. We also evaluated the signal uniformity of each sequence. BLADE showed improved overall image quality (3.35 ± 0.85 vs. 2.59 ± 0.59, P < 0.001), reduced respiratory motion artifact (0.51 ± 0.56 vs. 1.89 ± 0.68, P < 0.001), and improved lesion conspicuity (3.54 ± 0.88 vs. 2.92 ± 0.77, P = 0.006) compared to the respiratory-triggered TSE sequence. The bowel motion artifact scores were similar for both sequences (1.65 ± 0.77 vs. 1.79 ± 0.74, P = 0.691). BLADE introduced a radial artifact that was not observed on the respiratory-triggered TSE images (1.10 ± 0.85 vs. 0, P < 0.001). BLADE was associated with diminished signal variation compared with respiratory-triggered TSE in the liver, spleen and air (P < 0.001). The radial k-space sampling technique improved image quality and reduced respiratory motion artifacts in young children compared with the conventional respiratory-triggered TSE sequence. (orig.)

  9. SU-F-J-158: Respiratory Motion Resolved, Self-Gated 4D-MRI Using Rotating Cartesian K-Space Sampling

    Energy Technology Data Exchange (ETDEWEB)

    Han, F; Zhou, Z; Yang, Y; Sheng, K; Hu, P [UCLA School of Medicine, Los Angeles, CA (United States)

    2016-06-15

    Purpose: Dynamic MRI has been used to quantify respiratory motion of abdominal organs in radiation treatment planning. Many existing 4D-MRI methods based on 2D acquisitions suffer from limited slice resolution and additional stitching artifacts when evaluated in 3D [1]. To address these issues, we developed a 4D-MRI (3D dynamic) technique with true 3D k-space encoding and respiratory motion self-gating. Methods: The 3D k-space was acquired using a Rotating Cartesian K-space (ROCK) pattern, in which the Cartesian grid was reordered in a quasi-spiral fashion with each spiral arm rotated by the golden angle [2]. Each quasi-spiral arm started with the k-space center line, which was used as a self-gating [3] signal for respiratory motion estimation. The acquired k-space data were then binned into 8 respiratory phases, and the golden angle ensures near-uniform k-space sampling in each phase. Finally, dynamic 3D images were reconstructed using the ESPIRiT technique [4]. 4D-MRI was performed on 6 healthy volunteers using the following parameters: bSSFP, fat saturation, TE/TR = 2 ms/4 ms, matrix size = 500 × 350 × 120, resolution = 1 × 1 × 1.2 mm, TA = 5 min, 8 respiratory phases. Supplemental 2D real-time images were acquired in 9 different planes. Dynamic locations of the diaphragm dome and left kidney were measured from both the 4D and 2D images. The same protocol was also performed on an MRI-compatible motion phantom where the motion was programmed with different amplitudes (10-30 mm) and frequencies (3-10/min). Results: High-resolution 4D-MRI was obtained successfully in 5 minutes. Quantitative motion measurements from 4D-MRI agree with the ones from 2D CINE (<5% error). The 4D images are free of the stitching artifacts, and their near-isotropic resolution facilitates 3D visualization and segmentation of abdominal organs such as the liver, kidney and pancreas. Conclusion: Our preliminary studies demonstrated a novel ROCK 4D-MRI technique with true 3D k-space encoding and respiratory motion self-gating.
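
    The golden-angle property that makes the respiratory binning work can be checked in a few lines. The sketch below uses illustrative timing assumptions (one quasi-spiral arm per 20 ms, a 4 s respiratory period split into 8 phase bins, and index-based phase assignment as a self-gating surrogate); arms rotated by the golden angle remain nearly uniformly distributed in angle within every respiratory bin.

```python
import numpy as np

golden = np.pi * (3.0 - np.sqrt(5.0))     # golden-angle increment (~111.25 deg)
n_arms = 4000
arm_angle = (np.arange(n_arms) * golden) % (2 * np.pi)

# self-gating surrogate: with one arm per 20 ms and a 4 s breathing
# period, 200 arms span one cycle, so 25 consecutive arms fall in each
# of the 8 respiratory phase bins
phase = (np.arange(n_arms) // 25) % 8

# angular uniformity of the arms landing in each respiratory bin
cv = []
for p in range(8):
    h, _ = np.histogram(arm_angle[phase == p], bins=8,
                        range=(0, 2 * np.pi))
    cv.append(h.std() / h.mean())
cv = np.array(cv)   # small values => near-uniform angular coverage
```

    Each bin receives the same number of arms, and the angular histogram within every bin stays close to flat, which is why any respiratory subset of a golden-angle acquisition still covers k-space near-uniformly.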

  10. Technical innovation in dynamic contrast-enhanced magnetic resonance imaging of musculoskeletal tumors: an MR angiographic sequence using a sparse k-space sampling strategy.

    Science.gov (United States)

    Fayad, Laura M; Mugera, Charles; Soldatos, Theodoros; Flammang, Aaron; del Grande, Filippo

    2013-07-01

    We demonstrate the clinical use of an MR angiography sequence performed with sparse k-space sampling (MRA) as a method for dynamic contrast-enhanced (DCE)-MRI, and apply it to the assessment of sarcomas for treatment response. Three subjects with sarcomas (2 with osteosarcoma, 1 with a high-grade soft-tissue sarcoma) underwent MRI after neoadjuvant therapy/prior to surgery, with conventional MRI (T1-weighted, fluid-sensitive, static post-contrast T1-weighted sequences) and DCE-MRI (MRA, time resolution = 7-10 s, TR/TE 2.4/0.9 ms, FOV 40 cm²). Images were reviewed by two observers in consensus who recorded image quality (1 = diagnostic, no significant artifacts, 2 = diagnostic, 75 % with good response, >75 % with poor response). DCE-MRI findings were concordant with histological response (arterial enhancement with poor response, no arterial enhancement with good response). Unlike conventional DCE-MRI sequences, an MRA sequence with sparse k-space sampling is easily integrated into a routine musculoskeletal tumor MRI protocol, with high diagnostic quality. In this preliminary work, tumor enhancement characteristics by DCE-MRI were used to assess treatment response.

  11. Technical innovation in dynamic contrast-enhanced magnetic resonance imaging of musculoskeletal tumors: an MR angiographic sequence using a sparse k-space sampling strategy

    International Nuclear Information System (INIS)

    Fayad, Laura M.; Mugera, Charles; Grande, Filippo del; Soldatos, Theodoros; Flammang, Aaron

    2013-01-01

    We demonstrate the clinical use of an MR angiography sequence performed with sparse k-space sampling (MRA) as a method for dynamic contrast-enhanced (DCE)-MRI, and apply it to the assessment of sarcomas for treatment response. Three subjects with sarcomas (2 with osteosarcoma, 1 with a high-grade soft-tissue sarcoma) underwent MRI after neoadjuvant therapy/prior to surgery, with conventional MRI (T1-weighted, fluid-sensitive, static post-contrast T1-weighted sequences) and DCE-MRI (MRA, time resolution = 7-10 s, TR/TE 2.4/0.9 ms, FOV 40 cm²). Images were reviewed by two observers in consensus who recorded image quality (1 = diagnostic, no significant artifacts, 2 = diagnostic, 75 % with good response, >75 % with poor response). DCE-MRI findings were concordant with histological response (arterial enhancement with poor response, no arterial enhancement with good response). Unlike conventional DCE-MRI sequences, an MRA sequence with sparse k-space sampling is easily integrated into a routine musculoskeletal tumor MRI protocol, with high diagnostic quality. In this preliminary work, tumor enhancement characteristics by DCE-MRI were used to assess treatment response. (orig.)

  12. k-space sampling optimization for ultrashort TE imaging of cortical bone: Applications in radiation therapy planning and MR-based PET attenuation correction

    International Nuclear Information System (INIS)

    Hu, Lingzhi; Traughber, Melanie; Su, Kuan-Hao; Pereira, Gisele C.; Grover, Anu; Traughber, Bryan; Muzic, Raymond F. Jr.

    2014-01-01

    Purpose: The ultrashort echo-time (UTE) sequence is a promising MR pulse sequence for imaging cortical bone, which is otherwise difficult to image using conventional MR sequences and also poses strong attenuation for photons in radiation therapy and PET imaging. The authors report here a systematic characterization of cortical bone signal decay and a scanning time optimization strategy for the UTE sequence through k-space undersampling, which can result in up to a 75% reduction in acquisition time. Using the undersampled UTE imaging sequence, the authors also attempted to quantitatively investigate the MR properties of cortical bone in healthy volunteers, thus demonstrating the feasibility of using such a technique for generating bone-enhanced images which can be used for radiation therapy planning and attenuation correction with PET/MR. Methods: An angularly undersampled, radially encoded UTE sequence was used for scanning the brains of healthy volunteers. Quantitative MR characterization of tissue properties, including water fraction and R2* = 1/T2*, was performed by analyzing the UTE images acquired at multiple echo times. The impact of different sampling rates was evaluated through systematic comparison of the MR image quality, bone-enhanced image quality, image noise, water fraction, and R2* of cortical bone. Results: A reduced angular sampling rate of the UTE trajectory achieves acquisition durations in proportion to the sampling rate and in as short as 25% of the time required for full sampling using a standard Cartesian acquisition, while preserving unique MR contrast within the skull at the cost of a minimal increase in noise level. The R2* of human skull was measured as 0.2-0.3 ms⁻¹ depending on the specific region, which is more than ten times greater than the R2* of soft tissue. The water fraction in human skull was measured to be 60%-80%, which is significantly less than the >90% water fraction in brain. High-quality, bone
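
    The R2* quantification described above reduces, per voxel, to a log-linear fit over echo times. A minimal noiseless sketch follows; the echo times are illustrative assumptions, and the skull R2* value is taken from the range reported above.

```python
import numpy as np

# multi-echo UTE magnitudes: S(TE) = S0 * exp(-R2s * TE)
te = np.array([0.05e-3, 0.5e-3, 1.0e-3, 2.0e-3, 3.0e-3])  # seconds
r2s_true = 250.0      # 1/s, i.e. 0.25 /ms, within the reported skull range
s0 = 100.0
sig = s0 * np.exp(-r2s_true * te)

# log-linear least-squares fit: log S = log S0 - R2s * TE
slope, intercept = np.polyfit(te, np.log(sig), 1)
r2s_fit = -slope
```

    With noisy in-vivo data, a weighted fit or nonlinear least squares is usually preferred, but the principle is the same.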

  13. Metabolite-cycled density-weighted concentric rings k-space trajectory (DW-CRT) enables high-resolution 1H magnetic resonance spectroscopic imaging at 3 Tesla.

    Science.gov (United States)

    Steel, Adam; Chiew, Mark; Jezzard, Peter; Voets, Natalie L; Plaha, Puneet; Thomas, Michael Albert; Stagg, Charlotte J; Emir, Uzay E

    2018-05-17

    Magnetic resonance spectroscopic imaging (MRSI) is a promising technique in both experimental and clinical settings. However, to date, MRSI has been hampered by prohibitively long acquisition times and artifacts caused by subject motion and hardware-related frequency drift. In the present study, we demonstrate that density-weighted concentric ring trajectory (DW-CRT) k-space sampling, in combination with semi-LASER excitation and metabolite cycling, enables high-resolution MRSI data to be rapidly acquired at 3 Tesla. Single-slice full-intensity MRSI data (short-TE semi-LASER, TE = 32 ms) were acquired from 6 healthy volunteers with an in-plane resolution of 5 × 5 mm in 13 min 30 sec using this approach. Using LCModel analysis, we found that the acquired spectra allowed for the mapping of total N-acetylaspartate (median Cramer-Rao Lower Bound [CRLB] = 3%), glutamate+glutamine (8%), and glutathione (13%). In addition, we demonstrate the potential clinical utility of this technique by optimizing the TE to detect 2-hydroxyglutarate (long-TE semi-LASER, TE = 110 ms), producing relevant high-resolution metabolite maps of a grade III IDH-mutant oligodendroglioma in a single patient. This study demonstrates the potential utility of MRSI in the clinical setting at 3 Tesla.
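
    Density weighting in a concentric-ring readout amounts to choosing ring radii from a target radial density. The sketch below is an assumption-laden illustration (a Hanning target density and inverse-CDF ring placement; the published trajectory's exact weighting and its 2D density treatment may differ): rings crowd near the k-space centre, building the apodization into the acquisition itself.

```python
import numpy as np

def ring_radii(n_rings, k_max):
    """Ring radii for a density-weighted concentric-ring trajectory:
    place rings so the radial ring density follows a Hanning window,
    via inverse transform sampling of the density's CDF."""
    k = np.linspace(0.0, k_max, 4096)
    w = 0.5 * (1.0 + np.cos(np.pi * k / k_max))   # Hanning, max at k = 0
    cdf = np.cumsum(w)
    cdf /= cdf[-1]
    targets = (np.arange(n_rings) + 0.5) / n_rings
    return np.interp(targets, cdf, k)

r = ring_radii(64, 1.0)
inner = np.diff(r)[:8].mean()    # spacing near the centre (dense)
outer = np.diff(r)[-8:].mean()   # spacing near the edge (sparse)
```

    The denser centre sampling weights low spatial frequencies more heavily, which is what gives density-weighted acquisitions their favourable point spread function without post-hoc filtering.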

  14. Free-breathing contrast-enhanced T1-weighted gradient-echo imaging with radial k-space sampling for paediatric abdominopelvic MRI

    Energy Technology Data Exchange (ETDEWEB)

    Chandarana, Hersh; Block, Kai T.; Winfeld, Matthew J.; Lala, Shailee V.; Mazori, Daniel; Giuffrida, Emalyn; Babb, James S.; Milla, Sarah S. [New York University Langone Medical Center, Department of Radiology, New York, NY (United States)

    2014-02-15

    To compare the image quality of contrast-enhanced abdominopelvic 3D fat-suppressed T1-weighted gradient-echo imaging with radial and conventional Cartesian k-space acquisition schemes in paediatric patients. Seventy-three consecutive paediatric patients were imaged at 1.5 T with sequential contrast-enhanced T1-weighted Cartesian (VIBE) and radial gradient-echo (GRE) acquisition schemes with matching parameters where possible. Cartesian VIBE was acquired as a breath-hold acquisition, or free-breathing in patients who could not suspend respiration, followed by free-breathing radial GRE in all patients. Two paediatric radiologists blinded to the acquisition schemes evaluated multiple parameters of image quality on a five-point scale, with higher scores indicating better quality. Lesion presence or absence, conspicuity and edge sharpness were also evaluated. Mixed-model analysis of variance was performed to compare radial GRE and Cartesian VIBE. Radial GRE had significantly (all P < 0.001) higher scores for overall image quality, hepatic edge sharpness, hepatic vessel clarity and respiratory motion robustness than Cartesian VIBE. More lesions were detected on radial GRE by both readers than on Cartesian VIBE, with significantly higher scores for lesion conspicuity and edge sharpness (all P < 0.001). Radial GRE has better image quality and lesion conspicuity than conventional Cartesian VIBE in paediatric patients undergoing contrast-enhanced abdominopelvic MRI. (orig.)

  15. Full k-space visualization of photoelectron diffraction

    International Nuclear Information System (INIS)

    Denlinger, J.D.; Rotenberg, E.; Kevan, S.D.; Tonner, B.P.

    1997-01-01

    The development of photoelectron holography has promoted the need for larger photoelectron diffraction data sets in order to improve the quality of real-space reconstructed images (by suppressing transformational artifacts and distortions). The two main experimental and theoretical approaches to holography, the transform of angular distribution patterns for a coarse selection of energies or the transform of energy-scanned profiles for several directions, represent two limits of k-space sampling. The high brightness of third-generation soft x-ray synchrotron sources provides the opportunity to rapidly measure large, high-density x-ray photoelectron diffraction (XPD) data sets with approximately uniform k-space sampling. In this abstract, the authors present such a photoelectron data set acquired for Cu 3p emission from Cu(001). Cu(001) is one of the most well-studied systems for understanding photoelectron diffraction structure and for testing photoelectron holography methods. Cu(001) was chosen for this study in part due to its relatively inert and unreconstructed clean surface, and it served to calibrate and fine-tune the operation of a new synchrotron beamline, electron spectrometer and sample goniometer. In addition to Cu, similar "volume" XPD data sets have been acquired for bulk and surface core-level emission from W(110), from reconstructed Si(100) and Si(111) surfaces, and from the adsorbate system c(2x2) Mn/Ni(100).

  16. Validation of highly accelerated real-time cardiac cine MRI with radial k-space sampling and compressed sensing in patients at 1.5T and 3T.

    Science.gov (United States)

    Haji-Valizadeh, Hassan; Rahsepar, Amir A; Collins, Jeremy D; Bassett, Elwin; Isakova, Tamara; Block, Tobias; Adluru, Ganesh; DiBella, Edward V R; Lee, Daniel C; Carr, James C; Kim, Daniel

    2018-05-01

    To validate an optimal 12-fold accelerated real-time cine MRI pulse sequence with radial k-space sampling and compressed sensing (CS) in patients at 1.5T and 3T. We used two strategies to reduce image artifacts arising from gradient delays and eddy currents in radial k-space sampling with balanced steady-state free precession readout. We validated this pulse sequence against a standard breath-hold cine sequence in two patient cohorts: a myocardial infarction group (n = 16) at 1.5T and a chronic kidney disease group (n = 18) at 3T. Two readers independently performed visual analysis of 68 cine sets in four categories (myocardial definition, temporal fidelity, artifact, noise) on a 5-point Likert scale (1 = nondiagnostic, 2 = poor, 3 = adequate or moderate, 4 = good, 5 = excellent). Another reader calculated left ventricular (LV) functional parameters, including ejection fraction. Compared with standard cine, real-time cine produced nonsignificantly different visually assessed scores, except in the following categories: 1) temporal fidelity scores were significantly lower (P = 0.013) for real-time cine at both field strengths, 2) artifact scores were significantly higher (P = 0.013) for real-time cine at both field strengths, and 3) noise scores were significantly higher (P = 0.013) for real-time cine at 1.5T. Standard and real-time cine pulse sequences produced LV functional parameters that were in good agreement (e.g., a small absolute mean difference in ejection fraction). The 12-fold accelerated real-time cine MRI pulse sequence using radial k-space sampling and CS produces good to excellent visual scores and relatively accurate LV functional parameters in patients at 1.5T and 3T. Magn Reson Med 79:2745-2751, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  17. MO-FG-CAMPUS-JeP2-01: 4D-MRI with 3D Radial Sampling and Self-Gating-Based K-Space Sorting: Image Quality Improvement by Slab-Selective Excitation

    Energy Technology Data Exchange (ETDEWEB)

    Deng, Z; Pang, J; Tuli, R; Fraass, B; Fan, Z [Cedars Sinai Medical Center, Los Angeles, CA (United States); Yang, W [Cedars-Sinai Medical Center, Los Angeles, CA (United States); Bi, X [Siemens Healthcare, Los Angeles, CA (United States); Hakimian, B [Cedars Sinai Medical Center, Los Angeles CA (United States); Li, D [Cedars Sinai Medical Center, Los Angeles, California (United States)

    2016-06-15

    Purpose: A recent 4D-MRI technique based on 3D radial sampling and self-gating-based k-space sorting has shown promising results in characterizing respiratory motion. However, due to continuous acquisition and potentially drastic k-space undersampling, the resultant images can suffer from low blood-to-tissue contrast and streaking artifacts. In this study, 3D radial sampling with slab-selective excitation (SS) was proposed in an attempt to enhance blood-to-tissue contrast by exploiting the in-flow effect and to suppress the excess signal from peripheral structures, particularly in the superior-inferior direction. The feasibility of improving image quality with this approach was investigated through a comparison with the previously developed non-selective excitation (NS) approach. Methods: The two excitation approaches, SS and NS, were compared in 5 cancer patients (1 lung, 1 liver, 2 pancreas and 1 esophagus) at 3 Tesla. Image artifact was assessed in all patients on a 4-point scale (0: poor; 3: excellent). Signal-to-noise ratio (SNR) of the blood vessel (aorta) at the center of the field of view and of its nearby tissue was measured in 3 of the 5 patients (1 liver, 2 pancreas), and the blood-to-tissue contrast-to-noise ratio (CNR) was then determined. Results: Compared with NS, the image quality of SS was visually improved, with overall higher scores in all patients (2.6±0.55 vs. 3.4±0.55). SS showed an approximately 2-fold increase of SNR in the blood (aorta: 16.39±1.95 vs. 32.19±7.93) and a slight increase in the surrounding tissue (liver/pancreas: 16.91±1.82 vs. 22.31±3.03). As a result, the blood-to-tissue CNR was dramatically higher with the SS method (1.20±1.20 vs. 9.87±6.67). Conclusion: The proposed 3D radial sampling with slab-selective excitation allows for reduced image artifact and improved blood SNR and blood-to-tissue CNR. The success of this technique could potentially benefit patients with cancerous tumors that have invaded the surrounding blood vessels where radiation
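
    As a side note, when blood and tissue SNR share a common noise reference, the blood-to-tissue CNR is simply the difference of the two SNRs. The quick check below uses the mean values reported above and the shared-noise assumption, and is consistent with the reported SS CNR.

```python
# SNR = mean_signal / sigma_noise; with a shared noise estimate,
# CNR = (S_blood - S_tissue) / sigma = SNR_blood - SNR_tissue.
snr_blood_ss = 32.19    # reported aorta SNR, slab-selective excitation
snr_tissue_ss = 22.31   # reported liver/pancreas SNR, slab-selective
cnr_ss = snr_blood_ss - snr_tissue_ss   # close to the reported 9.87
```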

  18. MO-FG-CAMPUS-JeP2-01: 4D-MRI with 3D Radial Sampling and Self-Gating-Based K-Space Sorting: Image Quality Improvement by Slab-Selective Excitation

    International Nuclear Information System (INIS)

    Deng, Z; Pang, J; Tuli, R; Fraass, B; Fan, Z; Yang, W; Bi, X; Hakimian, B; Li, D

    2016-01-01

    Purpose: A recent 4D MRI technique based on 3D radial sampling and self-gating-based k-space sorting has shown promising results in characterizing respiratory motion. However, due to continuous acquisition and potentially drastic k-space undersampling, the resultant images could suffer from low blood-to-tissue contrast and streaking artifacts. In this study, 3D radial sampling with slab-selective excitation (SS) was proposed in an attempt to enhance blood-to-tissue contrast by exploiting the in-flow effect and to suppress the excess signal from peripheral structures, particularly in the superior-inferior direction. The feasibility of improving image quality with this approach was investigated through a comparison with the previously developed non-selective excitation (NS) approach. Methods: The two excitation approaches, SS and NS, were compared in 5 cancer patients (1 lung, 1 liver, 2 pancreas, and 1 esophagus) at 3 Tesla. Image artifact was assessed in all patients on a 4-point scale (0: poor; 3: excellent). Signal-to-noise ratio (SNR) of the blood vessel (aorta) at the center of the field of view and of its nearby tissue was measured in 3 of the 5 patients (1 liver, 2 pancreas), and the blood-to-tissue contrast-to-noise ratio (CNR) was then determined. Results: Compared with NS, the image quality of SS was visually improved, with overall higher signal in all patients (2.6±0.55 vs. 3.4±0.55). SS showed an approximately 2-fold increase of SNR in the blood (aorta: 16.39±1.95 vs. 32.19±7.93) and a slight increase in the surrounding tissue (liver/pancreas: 16.91±1.82 vs. 22.31±3.03). As a result, the blood-to-tissue CNR was dramatically higher with the SS method (1.20±1.20 vs. 9.87±6.67). Conclusion: The proposed 3D radial sampling with slab-selective excitation allows for reduced image artifact and improved blood SNR and blood-to-tissue CNR. The success of this technique could potentially benefit patients with cancerous tumors that have invaded the surrounding blood vessels where radiation
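The SNR and CNR figures quoted in records like this one are typically computed from region-of-interest (ROI) statistics. A minimal sketch, with entirely hypothetical ROI intensities and noise level (chosen only to echo the magnitudes above, not taken from the study):

```python
import numpy as np

def snr(roi, noise_sigma):
    """Signal-to-noise ratio: mean ROI signal over noise standard deviation."""
    return float(np.mean(roi)) / noise_sigma

def cnr(roi_a, roi_b, noise_sigma):
    """Contrast-to-noise ratio between two tissues: difference of their SNRs."""
    return snr(roi_a, noise_sigma) - snr(roi_b, noise_sigma)

# Hypothetical ROI intensities (arbitrary units) and background noise std.
rng = np.random.default_rng(0)
blood = rng.normal(320.0, 5.0, 100)   # bright blood pool (in-flow enhanced)
tissue = rng.normal(220.0, 5.0, 100)  # surrounding parenchyma
sigma = 10.0

snr_blood = snr(blood, sigma)
snr_tissue = snr(tissue, sigma)
cnr_blood_tissue = cnr(blood, tissue, sigma)
```

A higher blood SNR at constant tissue SNR directly widens the CNR, which is the effect the slab-selective excitation targets.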

  19. Sampling low-density gypsy moth populations

    Science.gov (United States)

    William E. Wallner; Clive G. Jones; Joseph S. Elkinton; Bruce L. Parker

    1991-01-01

    The techniques and methodology for sampling gypsy moth, Lymantria dispar L., at low densities, less than 100 egg masses/ha (EM/ha), are compared. Forest managers have constraints of time and cost, and need a useful, simple predictable means to assist them in sampling gypsy moth populations. A comparison of various techniques coupled with results of...

  20. Direct sampling for stand density index

    Science.gov (United States)

    Mark J. Ducey; Harry T. Valentine

    2008-01-01

    A direct method of estimating stand density index in the field, without complex calculations, would be useful in a variety of silvicultural situations. We present just such a method. The approach uses an ordinary prism or other angle gauge, but it involves deliberately "pushing the point" or, in some cases, "pulling the point." This adjusts the...

  1. Estimating diurnal primate densities using distance sampling ...

    African Journals Online (AJOL)

    SARAH

    2016-03-31

    Mar 31, 2016 ... In the second session, we used 10 transect adjusted to transect (Grid 17 ... session transect was visited 20 times while at the second session transect ... probability, the density of the group and the group size of each species ...

  2. Image reconstruction in k-space from MR data encoded with ambiguous gradient fields.

    Science.gov (United States)

    Schultz, Gerrit; Gallichan, Daniel; Weber, Hans; Witschey, Walter R T; Honal, Matthias; Hennig, Jürgen; Zaitsev, Maxim

    2015-02-01

    In this work, the limits of image reconstruction in k-space are explored when non-bijective gradient fields are used for spatial encoding. The image space analogy between parallel imaging and imaging with non-bijective encoding fields is partially broken in k-space. As a consequence, it is hypothesized and proven that ambiguities can only be resolved partially in k-space, and not completely as is the case in image space. Image-space and k-space based reconstruction algorithms for multi-channel radiofrequency data acquisitions are programmed and tested using numerical simulations as well as in vivo measurement data. The hypothesis is verified based on an analysis of reconstructed images. It is found that non-bijective gradient fields have the effect that densely sampled autocalibration data, used for k-space reconstruction, provide less information than a separate scan of the receiver coil sensitivity maps, used for image space reconstruction. Consequently, in k-space only the undersampling artifact can be unfolded, whereas in image space, it is also possible to resolve aliasing that is caused by the non-bijectivity of the gradient fields. For standard imaging, reconstruction in image space and in k-space is nearly equivalent, whereas there is a fundamental difference with practical consequences for the selection of image reconstruction algorithms when non-bijective encoding fields are involved. © 2014 Wiley Periodicals, Inc.

  3. Recursion method in the k-space representation

    International Nuclear Information System (INIS)

    Anlage, S.M.; Smith, D.L.

    1986-01-01

    We show that by using a unitary transformation to k space and the special-k-point method for evaluating Brillouin-zone sums, the recursion method can be very effectively applied to translationally invariant systems. We use this approach to perform recursion calculations for realistic tight-binding Hamiltonians which describe diamond- and zinc-blende-structure semiconductors. Projected densities of states for these Hamiltonians have band gaps and internal van Hove singularities. We calculate coefficients for 63 recursion levels exactly and for about 200 recursion levels to a good approximation. Comparisons are made for materials with different magnitude band gaps (diamond, Si, α-Sn). Comparison is also made between materials with one (e.g., diamond) and two (e.g., GaAs) band gaps. The asymptotic behavior of the recursion coefficients is studied by Fourier analysis. Band gaps in the projected density of states dominate the asymptotic behavior. Perturbation analysis describes the asymptotic behavior rather well. Projected densities of states are calculated using a very simple termination scheme. These densities of states compare favorably with the results of Gilat-Raubenheimer integration

  4. Sampling density for the quantitative evaluation of air trapping

    International Nuclear Information System (INIS)

    Goris, Michael L.; Robinson, Terry E.

    2009-01-01

    Concerns have been expressed recently about the radiation burden on patient populations, especially children, undergoing serial radiological testing. To reduce the dose one can change the CT acquisition settings or decrease the sampling density. In this study we determined the minimum desirable sampling density to ascertain the degree of air trapping in children with cystic fibrosis. Ten children with cystic fibrosis in stable condition underwent a volumetric spiral CT scan. The degree of air trapping was determined by an automated algorithm for all slices in the volume, and then for 1/2, 1/4, and so on down to 1/128 of all slices, corresponding to a sampling density ranging from 100% to 1% of the total volume. The variation around the true value derived from 100% sampling was determined for all other sampling densities. The precision of the measurement remained stable down to a 10% sampling density, but decreased markedly below 3.4%. For a disease marker with the regional variability of air trapping in cystic fibrosis, regardless of observer variability, a sampling density below 10%, and even more so below 3.4%, apparently decreases the precision of the evaluation. (orig.)
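The subsampling experiment described above (scoring every slice, then every 2nd, 4th, ..., 128th slice) can be mimicked on synthetic data; this sketch assumes a hypothetical per-slice air-trapping score rather than real CT measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical per-slice air-trapping percentage for a 512-slice volume:
# a slow regional trend plus noise.
slices = 10.0 + 5.0 * np.sin(np.linspace(0, 3 * np.pi, 512)) + rng.normal(0, 1.0, 512)

true_mean = slices.mean()
errors = {}
for step in (1, 2, 4, 8, 16, 32, 64, 128):  # 100% down to <1% sampling density
    est = slices[::step].mean()              # mean from the retained slices only
    errors[step] = abs(est - true_mean) / true_mean  # relative deviation
```

Sparser sampling generally drifts further from the full-volume mean, which is the precision loss the study quantifies.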

  5. A new design and rationale for 3D orthogonally oversampled k-space trajectories.

    Science.gov (United States)

    Pipe, James G; Zwart, Nicholas R; Aboussouan, Eric A; Robison, Ryan K; Devaraj, Ajit; Johnson, Kenneth O

    2011-11-01

    A novel center-out 3D trajectory for sampling magnetic resonance data is presented. The trajectory set is based on a single Fermat spiral waveform, which is substantially undersampled in the center of k-space. Multiple trajectories are combined in a "stacked cone" configuration to give very uniform sampling throughout a "hub," which is very efficient in terms of gradient performance and uniform trajectory spacing. The Fermat looped, orthogonally encoded trajectories (FLORET) design produces less gradient-efficient trajectories near the poles, so multiple orthogonal hub designs are shown. These multihub designs oversample k-space twice with orthogonal trajectories, which gives unique properties but also doubles the minimum scan time for critical sampling of k-space. The trajectory is shown to be much more efficient than the conventional stack of cones trajectory, and has nearly the same signal-to-noise ratio efficiency (but twice the minimum scan time) as a stack of spirals trajectory. As a center-out trajectory, it provides a shorter minimum echo time than stack of spirals, and its spherical k-space coverage can dramatically reduce Gibbs ringing. Copyright © 2011 Wiley Periodicals, Inc.
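A Fermat spiral, the base waveform of the FLORET design, has a radius growing with the square root of the azimuthal angle, which is what undersamples the center of k-space relative to the periphery. An illustrative sketch (parameters are arbitrary, not the published gradient design):

```python
import numpy as np

def fermat_spiral(n_samples=256, k_max=1.0, turns=8):
    """Sample points along a Fermat spiral r = c*sqrt(theta), scaled to k_max."""
    theta = np.linspace(0, 2 * np.pi * turns, n_samples)
    c = k_max / np.sqrt(theta[-1])     # scale so the last sample reaches k_max
    r = c * np.sqrt(theta)
    return r * np.cos(theta), r * np.sin(theta)

kx, ky = fermat_spiral()
radius = np.hypot(kx, ky)              # monotonically grows from 0 to k_max
```

Because r ∝ √θ, equal angular steps cover ever-thinner radial shells, so sample density falls toward the k-space center, exactly the property the abstract describes.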

  6. The k-space origins of scattering in Bi2Sr2CaCu2O8+x.

    Science.gov (United States)

    Alldredge, Jacob W; Calleja, Eduardo M; Dai, Jixia; Eisaki, H; Uchida, S; McElroy, Kyle

    2013-08-21

    We demonstrate a general, computer automated procedure that inverts the reciprocal space scattering data (q-space) that are measured by spectroscopic imaging scanning tunnelling microscopy (SI-STM) in order to determine the momentum space (k-space) scattering structure. This allows a detailed examination of the k-space origins of the quasiparticle interference (QPI) pattern in Bi2Sr2CaCu2O8+x within the theoretical constraints of the joint density of states (JDOS). Our new method allows measurement of the differences between the positive and negative energy dispersions, the gap structure and an energy dependent scattering length scale. Furthermore, it resolves the transition between the dispersive QPI and the checkerboard ([Formula: see text] excitation). We have measured the k-space scattering structure over a wide range of doping (p ∼ 0.22-0.08), including regions where the octet model is not applicable. Our technique allows the complete mapping of the k-space scattering origins of the spatial excitations in Bi2Sr2CaCu2O8+x, which allows for better comparisons between SI-STM and other experimental probes of the band structure. By applying our new technique to such a heavily studied compound, we can validate our new general approach for determining the k-space scattering origins from SI-STM data.

  7. STEP: Self-supporting tailored k-space estimation for parallel imaging reconstruction.

    Science.gov (United States)

    Zhou, Zechen; Wang, Jinnan; Balu, Niranjan; Li, Rui; Yuan, Chun

    2016-02-01

    A new subspace-based iterative reconstruction method, termed Self-supporting Tailored k-space Estimation for Parallel imaging reconstruction (STEP), is presented and evaluated in comparison to the existing autocalibrating method SPIRiT and calibrationless method SAKE. In STEP, two tailored schemes including k-space partition and basis selection are proposed to promote spatially variant signal subspace and incorporated into a self-supporting structured low rank model to enforce properties of locality, sparsity, and rank deficiency, which can be formulated into a constrained optimization problem and solved by an iterative algorithm. Simulated and in vivo datasets were used to investigate the performance of STEP in terms of overall image quality and detail structure preservation. The advantage of STEP on image quality is demonstrated by retrospectively undersampled multichannel Cartesian data with various patterns. Compared with SPIRiT and SAKE, STEP can provide more accurate reconstruction images with less residual aliasing artifacts and reduced noise amplification in simulation and in vivo experiments. In addition, STEP has the capability of combining compressed sensing with arbitrary sampling trajectory. Using k-space partition and basis selection can further improve the performance of parallel imaging reconstruction with or without calibration signals. © 2015 Wiley Periodicals, Inc.

  8. Complete k-space visualization of x-ray photoelectron diffraction

    International Nuclear Information System (INIS)

    Denlinger, J.D.; Lawrence Berkeley Lab., CA; Rotenberg, E.; Lawrence Berkeley Lab., CA; Kevan, S.D.; Tonner, B.P.

    1996-01-01

    A highly detailed x-ray photoelectron diffraction data set has been acquired for crystalline Cu(001). The data set for bulk Cu 3p emission encompasses a large k-space volume (k = 3–10 Å⁻¹) with sufficient energy and angular sampling to monitor the continuous variation of diffraction intensities. The evolution of back-scattered intensity oscillations is visualized by energy and angular slices of this volume data set. Large diffraction data sets such as this will provide rigorous experimental tests of real-space reconstruction algorithms and multiple-scattering simulations

  9. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    Science.gov (United States)

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for a representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
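The simulation strategy described (stratified random sampling repeated 1,000 times against a known population) can be sketched as follows; the stratum sizes and density proportions below are made up for illustration and are not the NCSP figures:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical population: (stratum size, proportion with dense breasts).
strata = {"metropolitan": (600_000, 0.55),
          "urban":        (500_000, 0.50),
          "rural":        (240_000, 0.45)}
total = sum(n for n, _ in strata.values())
true_p = sum(n * p for n, p in strata.values()) / total

def stratified_estimate(sample_size=4000):
    """Proportional-allocation stratified estimate of the overall proportion."""
    est = 0.0
    for n, p in strata.values():
        m = round(sample_size * n / total)   # stratum sample size
        x = rng.binomial(m, p)               # positives drawn in this stratum
        est += (n / total) * (x / m)         # population-weighted stratum rate
    return est

estimates = [stratified_estimate() for _ in range(1000)]
spread = float(np.std(estimates))            # sampling variability of the design
```

Repeating the draw 1,000 times, as in the study, shows how tightly a 4,000-subject stratified sample pins down the population distribution.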

  10. A singular K-space model for fast reconstruction of magnetic resonance images from undersampled data.

    Science.gov (United States)

    Luo, Jianhua; Mou, Zhiying; Qin, Binjie; Li, Wanqing; Ogunbona, Philip; Robini, Marc C; Zhu, Yuemin

    2017-12-09

    Reconstructing magnetic resonance images from undersampled k-space data is a challenging problem. This paper introduces a novel method of image reconstruction from undersampled k-space data based on the concept of singularizing operators and a novel singular k-space model. Exploiting the sparsity of an image in k-space, the singular k-space model (SKM) is proposed in terms of the k-space functions of a singularizing operator. The singularizing operator is constructed by combining basic difference operators. An algorithm is developed to reliably estimate the model parameters from undersampled k-space data. The estimated parameters are then used to recover the missing k-space data through the model, subsequently achieving high-quality reconstruction of the image using the inverse Fourier transform. Experiments on physical phantom and real brain MR images have shown that the proposed SKM method consistently outperforms the popular total variation (TV) and the classical zero-filling (ZF) methods regardless of the undersampling rates, the noise levels, and the image structures. For the same objective quality of the reconstructed images, the proposed method requires much less k-space data than the TV method. The SKM method is an effective method for fast MRI reconstruction from undersampled k-space data. Graphical abstract: two real images and their sparsified counterparts obtained with the singularizing operator.
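The zero-filling (ZF) baseline that SKM is compared against is easy to sketch: unacquired k-space lines are simply set to zero before the inverse Fourier transform, which lowers signal and introduces aliasing. A minimal numpy illustration on a synthetic phantom (mask fraction and phantom are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0                  # simple square phantom
kspace = np.fft.fft2(image)

# Keep a random ~40% of phase-encode lines (rows), always retaining DC.
keep = rng.random(64) < 0.4
keep[0] = True
undersampled = np.where(keep[:, None], kspace, 0.0)

recon_zf = np.abs(np.fft.ifft2(undersampled))   # zero-filled reconstruction
err = np.linalg.norm(recon_zf - image) / np.linalg.norm(image)
```

Model-based methods such as SKM instead estimate the missing lines before the inverse transform, which is why they beat ZF at the same undersampling rate.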

  11. A support vector density-based importance sampling for reliability assessment

    International Nuclear Information System (INIS)

    Dai, Hongzhe; Zhang, Hao; Wang, Wei

    2012-01-01

    An importance sampling method based on the adaptive Markov chain simulation and support vector density estimation is developed in this paper for efficient structural reliability assessment. The methodology involves the generation of samples that can adaptively populate the important region by the adaptive Metropolis algorithm, and the construction of importance sampling density by support vector density. The use of the adaptive Metropolis algorithm may effectively improve the convergence and stability of the classical Markov chain simulation. The support vector density can approximate the sampling density with fewer samples in comparison to the conventional kernel density estimation. The proposed importance sampling method can effectively reduce the number of structural analysis required for achieving a given accuracy. Examples involving both numerical and practical structural problems are given to illustrate the application and efficiency of the proposed methodology.
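The core importance-sampling idea, shifting the sampling density toward the failure region and reweighting by the likelihood ratio, can be sketched on a toy reliability problem (a standard normal variable exceeding a threshold). This is the generic method only, not the paper's adaptive Metropolis/support-vector construction:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(4)
beta = 3.0                         # failure when a standard normal exceeds beta

# Importance density: unit-variance normal shifted to the design point x = beta.
n = 20_000
x = rng.normal(beta, 1.0, n)
# Likelihood ratio between nominal N(0,1) and importance N(beta,1) densities
# (the 1/sqrt(2*pi) normalizations cancel).
w = np.exp(-0.5 * x**2) / np.exp(-0.5 * (x - beta) ** 2)
pf_is = float(np.mean((x > beta) * w))

pf_exact = 0.5 * (1.0 - erf(beta / sqrt(2.0)))  # exact normal tail probability
```

Centering the sampler on the failure boundary makes roughly half the draws "hit" the failure domain, so far fewer structural analyses are needed than with crude Monte Carlo.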

  12. Density meter algorithm and system for estimating sampling/mixing uncertainty

    International Nuclear Information System (INIS)

    Shine, E.P.

    1986-01-01

    The Laboratories Department at the Savannah River Plant (SRP) has installed a six-place density meter with an automatic sampling device. This paper describes the statistical software developed to analyze the density of uranyl nitrate solutions using this automated system. The purpose of this software is twofold: to estimate the sampling/mixing and measurement uncertainties in the process and to provide a measurement control program for the density meter. Non-uniformities in density are analyzed both analytically and graphically. The mean density and its limit of error are estimated. Quality control standards are analyzed concurrently with process samples and used to control the density meter measurement error. The analyses are corrected for concentration due to evaporation of samples waiting to be analyzed. The results of this program have been successful in identifying sampling/mixing problems and controlling the quality of analyses
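Separating sampling/mixing variance from measurement variance, as this system does, is classically done with a one-way random-effects variance-component analysis of duplicate readings per sample. A sketch on synthetic data (all densities and variance magnitudes are hypothetical, not SRP values):

```python
import numpy as np

rng = np.random.default_rng(5)
n_samples, n_reps = 12, 3
sigma_sampling, sigma_meas = 0.004, 0.001   # g/mL, hypothetical components

true_density = 1.520                         # hypothetical solution density
sample_means = true_density + rng.normal(0, sigma_sampling, n_samples)
data = sample_means[:, None] + rng.normal(0, sigma_meas, (n_samples, n_reps))

# One-way ANOVA variance components.
grand = data.mean()
ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() \
    / (n_samples * (n_reps - 1))
ms_between = n_reps * ((data.mean(axis=1) - grand) ** 2).sum() / (n_samples - 1)

var_measurement = ms_within                              # meter repeatability
var_sampling = max((ms_between - ms_within) / n_reps, 0.0)  # sampling/mixing
```

The between-sample mean square in excess of the within-sample mean square is what flags a sampling/mixing (non-uniformity) problem rather than a meter problem.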

  14. Functional approximations to posterior densities: a neural network approach to efficient sampling

    NARCIS (Netherlands)

    L.F. Hoogerheide (Lennart); J.F. Kaashoek (Johan); H.K. van Dijk (Herman)

    2002-01-01

    textabstractThe performance of Monte Carlo integration methods like importance sampling or Markov Chain Monte Carlo procedures greatly depends on the choice of the importance or candidate density. Usually, such a density has to be "close" to the target density in order to yield numerically accurate

  15. Forward Modeling of Reduced Power Spectra from Three-dimensional k-space

    Science.gov (United States)

    von Papen, Michael; Saur, Joachim

    2015-06-01

    We present results from a numerical forward model to evaluate one-dimensional reduced power spectral densities (PSDs) from arbitrary energy distributions in k-space. In this model, we can separately calculate the diagonal elements of the spectral tensor for incompressible axisymmetric turbulence with vanishing helicity. Given a critically balanced turbulent cascade with k∥ ∼ k⊥^α and α < 1, we explore the implications on the reduced PSD as a function of frequency. The spectra are obtained under the assumption of Taylor's hypothesis. We further investigate the functional dependence of the spectral index κ on the field-to-flow angle θ between plasma flow and background magnetic field from MHD to electron kinetic scales. We show that critically balanced turbulence asymptotically develops toward θ-independent spectra with a slope corresponding to the perpendicular cascade. This occurs at a transition frequency f_2D(L, α, θ), which is analytically estimated and depends on outer scale L, critical balance exponent α, and field-to-flow angle θ. We discuss anisotropic damping terms acting on the k-space distribution of energy and their effects on the PSD. Further, we show that the spectral anisotropies κ(θ) as found by Horbury et al. and Chen et al. in the solar wind are in accordance with a damped critically balanced cascade of kinetic Alfvén waves. We also model power spectra obtained by von Papen et al. in Saturn's plasma sheet and find that the change of spectral indices inside 9 R_S can be explained by damping on electron scales.
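The forward-modeling step, reducing a 3D k-space energy distribution to a one-dimensional PSD by integrating over the perpendicular wavevector plane, can be checked on an isotropic power law, where the reduced slope is known analytically (P3D ∝ k^(-11/3) reduces to k^(-5/3)). A numerical sketch of that reduction only, not the authors' axisymmetric critically balanced model:

```python
import numpy as np

def reduced_psd(k_par, q=11.0 / 3.0, k_perp_max=1e4, n=400_000):
    """Reduced 1D PSD of an isotropic 3D power-law spectrum P3D ~ k**-q,
    obtained by integrating over the perpendicular wavevector plane."""
    k_perp = np.linspace(1e-6, k_perp_max, n)
    k = np.hypot(k_par, k_perp)
    integrand = 2.0 * np.pi * k_perp * k ** (-q)
    # Trapezoidal rule, written out for NumPy-version independence.
    return float(0.5 * np.dot(integrand[:-1] + integrand[1:], np.diff(k_perp)))

# For P3D ~ k^(-11/3) the reduced spectrum should fall as k^(-5/3).
slope = np.log(reduced_psd(2.0) / reduced_psd(1.0)) / np.log(2.0)
```

The same machinery, with an anisotropic (critically balanced, damped) energy distribution in place of the power law, yields the θ-dependent reduced spectra discussed in the abstract.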

  16. Sample sizes to control error estimates in determining soil bulk density in California forest soils

    Science.gov (United States)

    Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber

    2016-01-01

    Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observations (n), for predicting the soil bulk density with a...
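The standard sample-size calculation behind studies like this one, for estimating a mean to a target relative error at roughly 95% confidence, is n = (z·cv/E)². A sketch with a hypothetical coefficient of variation, not the study's fitted values:

```python
import math

def sample_size(cv, rel_error, z=1.96):
    """Samples needed so the mean's relative error is within rel_error
    at ~95% confidence: n = (z * cv / rel_error)**2, rounded up."""
    return math.ceil((z * cv / rel_error) ** 2)

# Hypothetical: bulk-density cv of 30%, target 10% relative error.
n_needed = sample_size(0.30, 0.10)
```

Halving the tolerable error roughly quadruples the required number of observations, which is why highly variable forest soils demand so many samples.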

  17. Reliability of different sampling densities for estimating and mapping lichen diversity in biomonitoring studies

    International Nuclear Information System (INIS)

    Ferretti, M.; Brambilla, E.; Brunialti, G.; Fornasier, F.; Mazzali, C.; Giordani, P.; Nimis, P.L.

    2004-01-01

    Sampling requirements related to lichen biomonitoring include optimal sampling density for obtaining precise and unbiased estimates of population parameters and maps of known reliability. Two available datasets on a sub-national scale in Italy were used to determine a cost-effective sampling density to be adopted in medium-to-large-scale biomonitoring studies. As expected, the relative error in the mean Lichen Biodiversity (Italian acronym: BL) values and the error associated with the interpolation of BL values for (unmeasured) grid cells increased as the sampling density decreased. However, the increase in size of the error was not linear and even a considerable reduction (up to 50%) in the original sampling effort led to a far smaller increase in errors in the mean estimates (<6%) and in mapping (<18%) as compared with the original sampling densities. A reduction in the sampling effort can result in considerable savings of resources, which can then be used for a more detailed investigation of potentially problematic areas. It is, however, necessary to decide the acceptable level of precision at the design stage of the investigation, so as to select the proper sampling density. - An acceptable level of precision must be decided before determining a sampling design

  18. Effect of density increase on self-absorption property of bulk samples

    International Nuclear Information System (INIS)

    Dao Anh Minh; Tran Duc Thiep

    1990-01-01

    The asymptotic behaviour, due to self-absorption, of the photon attenuation function in terms of material density has been considered for bulk samples. Some practical applications have also been presented. (author). 9 refs., 4 figs., 2 tabs

  19. Self-calibrated correlation imaging with k-space variant correlation functions.

    Science.gov (United States)

    Li, Yu; Edalati, Masoud; Du, Xingfu; Wang, Hui; Cao, Jie J

    2018-03-01

    Correlation imaging is a previously developed high-speed MRI framework that converts parallel imaging reconstruction into the estimate of correlation functions. The presented work aims to demonstrate this framework can provide a speed gain over parallel imaging by estimating k-space variant correlation functions. Because of Fourier encoding with gradients, outer k-space data contain higher spatial-frequency image components arising primarily from tissue boundaries. As a result of tissue-boundary sparsity in the human anatomy, neighboring k-space data correlation varies from the central to the outer k-space. By estimating k-space variant correlation functions with an iterative self-calibration method, correlation imaging can benefit from neighboring k-space data correlation associated with both coil sensitivity encoding and tissue-boundary sparsity, thereby providing a speed gain over parallel imaging that relies only on coil sensitivity encoding. This new approach is investigated in brain imaging and free-breathing neonatal cardiac imaging. Correlation imaging performs better than existing parallel imaging techniques in simulated brain imaging acceleration experiments. The higher speed enables real-time data acquisition for neonatal cardiac imaging in which physiological motion is fast and non-periodic. With k-space variant correlation functions, correlation imaging gives a higher speed than parallel imaging and offers the potential to image physiological motion in real-time. Magn Reson Med 79:1483-1494, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  20. Local activation time sampling density for atrial tachycardia contact mapping: how much is enough?

    Science.gov (United States)

    Williams, Steven E; Harrison, James L; Chubb, Henry; Whitaker, John; Kiedrowicz, Radek; Rinaldi, Christopher A; Cooklin, Michael; Wright, Matthew; Niederer, Steven; O'Neill, Mark D

    2018-02-01

    Local activation time (LAT) mapping forms the cornerstone of atrial tachycardia diagnosis. Although anatomic and positional accuracy of electroanatomic mapping (EAM) systems have been validated, the effect of electrode sampling density on LAT map reconstruction is not known. Here, we study the effect of chamber geometry and activation complexity on optimal LAT sampling density using a combined in silico and in vivo approach. In vivo, 21 atrial tachycardia maps were studied in three groups: (1) focal activation, (2) macro-re-entry, and (3) localized re-entry. In silico activation was simulated on a 4 × 4 cm atrial monolayer, sampled randomly at 0.25-10 points/cm2 and used to re-interpolate LAT maps. Activation patterns were studied in the geometrically simple porcine right atrium (RA) and complex human left atrium (LA). Activation complexity was introduced into the porcine RA by incomplete inter-caval linear ablation. In all cases, optimal sampling density was defined as the highest density resulting in minimal further error reduction in the re-interpolated maps. Optimal sampling densities for LA tachycardias were 0.67 ± 0.17 points/cm2 (focal activation), 1.05 ± 0.32 points/cm2 (macro-re-entry) and 1.23 ± 0.26 points/cm2 (localized re-entry), P = 0.0031. Increasing activation complexity was associated with increased optimal sampling density both in silico (focal activation 1.09 ± 0.14 points/cm2; re-entry 1.44 ± 0.49 points/cm2; spiral-wave 1.50 ± 0.34 points/cm2, P density (0.61 ± 0.22 points/cm2 vs. 1.0 ± 0.34 points/cm2, P = 0.0015). Optimal sampling densities can be identified to maximize diagnostic yield of LAT maps. Greater sampling density is required to correctly reveal complex activation and represent activation across complex geometries. Overall, the optimal sampling density for LAT map interpolation defined in this study was ∼1.0-1.5 points/cm2. Published on behalf of the European Society of
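The in silico part of this protocol, sampling LAT at a given point density and re-interpolating onto a dense grid to measure the error, can be sketched for a planar wave on a 4 × 4 cm sheet. The conduction velocity and the nearest-neighbour interpolator here are illustrative choices, not the study's:

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical 4 x 4 cm sheet with a planar activation wave: LAT = x / cv.
grid = np.linspace(0, 4, 41)
gx, gy = np.meshgrid(grid, grid, indexing="ij")
lat_true = gx / 0.5                        # assumed conduction velocity 0.5 cm/ms

def interp_error(points_per_cm2):
    """Mean absolute LAT error after re-interpolating from random samples."""
    n = max(int(points_per_cm2 * 16), 3)   # 16 cm^2 sheet
    px, py = rng.uniform(0, 4, n), rng.uniform(0, 4, n)
    lat_s = px / 0.5                        # sampled (noise-free) LAT values
    # Nearest-neighbour re-interpolation back onto the dense grid.
    d2 = (gx[..., None] - px) ** 2 + (gy[..., None] - py) ** 2
    lat_hat = lat_s[np.argmin(d2, axis=-1)]
    return float(np.mean(np.abs(lat_hat - lat_true)))

err_sparse = interp_error(0.25)
err_dense = interp_error(4.0)
```

Sweeping the density and finding where the error curve flattens reproduces the study's definition of the optimal sampling density.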

  1. Self-navigated 4D cartesian imaging of periodic motion in the body trunk using partial k-space compressed sensing.

    Science.gov (United States)

    Küstner, Thomas; Würslin, Christian; Schwartz, Martin; Martirosian, Petros; Gatidis, Sergios; Brendle, Cornelia; Seith, Ferdinand; Schick, Fritz; Schwenzer, Nina F; Yang, Bin; Schmidt, Holger

    2017-08-01

    To enable fast and flexible high-resolution four-dimensional (4D) MRI of periodic thoracic/abdominal motion for motion visualization or motion-corrected imaging. We proposed a Cartesian three-dimensional k-space sampling scheme that acquires a random combination of k-space lines in the ky/kz plane. A partial Fourier-like constraint compacts the sampling space to one half of k-space. The central k-space line is periodically acquired to allow an extraction of a self-navigated respiration signal used to populate a k-space of multiple breathing positions. The randomness of the acquisition (induced by periodic breathing pattern) yields a subsampled k-space that is reconstructed using compressed sensing. Local image evaluations (coefficient of variation and slope steepness through organs) reveal information about motion resolvability. Image quality is inspected by a blinded reading. Sequence and reconstruction method are made publicly available. The method is able to capture and reconstruct 4D images with high image quality and motion resolution within a short scan time of less than 2 min. These findings are supported by restricted-isometry-property analysis, local image evaluation, and blinded reading. The proposed method provides a clinically feasible setup to capture periodic respiratory motion with a fast acquisition protocol and can be extended by further surrogate signals to capture additional periodic motions. Retrospective parametrization allows for flexible tuning toward the targeted applications. Magn Reson Med 78:632-644, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  2. Assessment of soil sample quality used for density evaluations through computed tomography

    International Nuclear Information System (INIS)

    Pires, Luiz F.; Arthur, Robson C.J.; Bacchi, Osny O.S.

    2005-01-01

    There are several methods to measure soil bulk density (ρ_s), like the paraffin sealed clod (PS), the volumetric ring (VR), the computed tomography (CT), and the neutron-gamma surface gauge (SG). In order to evaluate, in a non-destructive way, the possible modifications in soil structure caused by sampling for the PS and VR methods of ρ_s evaluation, we proposed to use the gamma-ray CT method. A first-generation tomograph was used, having a ²⁴¹Am source and a 3 in x 3 in NaI(Tl) scintillation crystal detector coupled to a photomultiplier tube. Results confirm the effect of soil sampler devices on the structure of soil samples, and that the compaction caused during sampling causes significant alterations of soil bulk density. Through the use of CT it was possible to determine the level of compaction and to make a detailed analysis of the soil bulk density distribution within the soil sample. (author)

  3. Determination of the neutral oxygen atom density in a plasma reactor loaded with metal samples

    Science.gov (United States)

    Mozetic, Miran; Cvelbar, Uros

    2009-08-01

    The density of neutral oxygen atoms was determined during processing of metal samples in a plasma reactor. The reactor was a Pyrex tube with an inner diameter of 11 cm and a length of 30 cm. Plasma was created by an inductively coupled radiofrequency generator operating at a frequency of 27.12 MHz and an output power of up to 500 W. The O density was measured at the edge of the glass tube with a copper fiber-optic catalytic probe. The O atom density in the empty tube depended on pressure and was between 4 × 10²¹ and 7 × 10²¹ m⁻³. The maximum O density occurred at a pressure of about 150 Pa, while the dissociation fraction of O₂ molecules was maximal at the lowest pressure and decreased with increasing pressure; at about 300 Pa it dropped below 10%. The measurements were repeated in the chamber loaded with different metallic samples. In these cases, the density of oxygen atoms was lower than that in the empty chamber. The results were explained by a drain of O atoms caused by heterogeneous recombination on the samples.
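The dissociation fraction quoted above can be related to the measured O-atom density through the ideal gas law. A sketch under stated assumptions (gas temperature of 300 K and pure O/O₂ composition are assumptions of this illustration, not values given in the abstract):

```python
k_B = 1.380649e-23  # Boltzmann constant, J/K

def dissociation_fraction(n_O, pressure_pa, temp_k=300.0):
    """Fraction of O2 molecules dissociated, given the measured O-atom
    density and the total neutral pressure (ideal gas, O + O2 only)."""
    n_total = pressure_pa / (k_B * temp_k)   # all neutrals: O + O2
    n_O2 = n_total - n_O
    return (n_O / 2.0) / (n_O / 2.0 + n_O2)

# Peak measured O density (~7e21 m^-3) near the 150 Pa optimum
eta = dissociation_fraction(7.0e21, 150.0)
```

With these assumed values the fraction comes out at roughly 10%, consistent in order of magnitude with the trend described in the abstract.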

  4. Sampling Error in Relation to Cyst Nematode Population Density Estimation in Small Field Plots.

    Science.gov (United States)

    Župunski, Vesna; Jevtić, Radivoje; Jokić, Vesna Spasić; Župunski, Ljubica; Lalošević, Mirjana; Ćirić, Mihajlo; Ćurčić, Živko

    2017-06-01

    Cyst nematodes are serious plant-parasitic pests which can cause severe yield losses and extensive damage. Since there is still very little information about the error of population density estimation in small field plots, this study contributes to the broad issue of population density assessment. It was shown that there was no significant difference between cyst counts of five or seven bulk samples taken per 1-m² plot if the average cyst count per examined plot exceeds 75 cysts per 100 g of soil. Goodness of fit of the data to probability distributions, tested with the χ² test, confirmed a negative binomial distribution of cyst counts for 21 out of 23 plots. The recommended sampling precision of 17%, expressed through the coefficient of variation (cv), was achieved if 1-m² plots contaminated with more than 90 cysts per 100 g of soil were sampled with 10-core bulk samples taken in five repetitions. If plots were contaminated with less than 75 cysts per 100 g of soil, 10-core bulk samples taken in seven repetitions gave a cv higher than 23%. This study indicates that more attention should be paid to the estimation of sampling error in experimental field plots to ensure more reliable estimation of the population density of cyst nematodes.
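The precision measure used above is simply the coefficient of variation over repeated bulk samples. A minimal sketch with hypothetical counts (the numbers below are invented for illustration, not data from the study):

```python
import statistics

def coefficient_of_variation(counts):
    """cv = sample standard deviation / mean, the precision measure
    compared against the 17% recommendation in the study."""
    return statistics.stdev(counts) / statistics.mean(counts)

# Hypothetical cyst counts per 100 g soil from five 10-core bulk samples
counts = [95, 110, 88, 102, 99]
cv = coefficient_of_variation(counts)
```

Here the cv is about 8%, comfortably inside the 17% target; sparser infestations inflate the cv because the counts follow an overdispersed (negative binomial) distribution.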

  5. Closed-Form Representations of the Density Function and Integer Moments of the Sample Correlation Coefficient

    Directory of Open Access Journals (Sweden)

    Serge B. Provost

    2015-07-01

    This paper provides a simplified representation of the exact density function of R, the sample correlation coefficient. The odd and even moments of R are also obtained in closed forms. Being expressed in terms of generalized hypergeometric functions, the resulting representations are readily computable. Some numerical examples corroborate the validity of the results derived herein.
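For the special case of zero population correlation, the exact density of R reduces to an elementary closed form (the general-ρ representations treated in the paper involve generalized hypergeometric functions). A sketch of that null density together with a numerical normalization check:

```python
import math

def corr_density_null(r, n):
    """Exact density of the sample correlation coefficient R under
    bivariate normality with true rho = 0 and sample size n > 3:
    f(r) = (1 - r^2)^((n-4)/2) / B(1/2, (n-2)/2)."""
    beta = math.gamma(0.5) * math.gamma((n - 2) / 2) / math.gamma((n - 1) / 2)
    return (1.0 - r * r) ** ((n - 4) / 2) / beta

# Trapezoidal normalization check over (-1, 1) for n = 10
n = 10
grid = [i / 1000.0 for i in range(-999, 1000)]
vals = [corr_density_null(r, n) for r in grid]
area = sum((vals[i] + vals[i + 1]) / 2 * 0.001 for i in range(len(vals) - 1))
```

The density is symmetric and unimodal at r = 0, and the grid integral comes out at essentially 1, as it should for a probability density.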

  6. 3D-Laser-Scanning Technique Applied to Bulk Density Measurements of Apollo Lunar Samples

    Science.gov (United States)

    Macke, R. J.; Kent, J. J.; Kiefer, W. S.; Britt, D. T.

    2015-01-01

    In order to better interpret gravimetric data from orbiters such as GRAIL and LRO to understand the subsurface composition and structure of the lunar crust, it is important to have a reliable database of the density and porosity of lunar materials. To this end, we have been surveying these physical properties in both lunar meteorites and Apollo lunar samples. To measure porosity, both grain density and bulk density are required. For bulk density, our group has historically utilized sub-mm bead immersion techniques extensively, though several factors have made this technique problematic for our work with Apollo samples. Samples allocated for measurement are often smaller than optimal for the technique, leading to large error bars. Also, for some samples we were required to use pure alumina beads instead of our usual glass beads. The alumina beads were subject to undesirable static effects, producing unreliable results. Other investigators have tested the use of 3D laser scanners on meteorites for measuring bulk volumes. Early work, though promising, was plagued with difficulties including poor response on dark or reflective surfaces, difficulty reproducing sharp edges, and large processing time for producing shape models. Due to progress in technology, however, laser scanners have improved considerably in recent years. We tested this technique on 27 lunar samples in the Apollo collection using a scanner at NASA Johnson Space Center. We found it to be reliable and more precise than beads, with the added benefit that it involves no direct contact with the sample, enabling the study of particularly friable samples for which bead immersion is not possible.
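Once a laser scan has produced a closed triangulated shape model, the bulk volume follows from summing signed tetrahedra, and bulk density is just the measured mass over that volume. A minimal sketch using a unit cube as a stand-in shape model (the 2.9 g sample mass is a hypothetical value):

```python
def mesh_volume(vertices, faces):
    """Volume enclosed by a closed, consistently outward-oriented
    triangle mesh, via signed tetrahedra to the origin (divergence
    theorem) -- the quantity a laser-scan shape model provides."""
    vol = 0.0
    for i, j, k in faces:
        (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (
            vertices[i], vertices[j], vertices[k])
        vol += (ax * (by * cz - bz * cy)
                - ay * (bx * cz - bz * cx)
                + az * (bx * cy - by * cx)) / 6.0
    return abs(vol)

# Unit cube (1 cm^3) as a stand-in for a scanned sample shape model
verts = [(x, y, z) for x in (0.0, 1.0) for y in (0.0, 1.0) for z in (0.0, 1.0)]
tris = [(0, 1, 3), (0, 3, 2), (4, 6, 7), (4, 7, 5),
        (0, 4, 5), (0, 5, 1), (2, 3, 7), (2, 7, 6),
        (0, 2, 6), (0, 6, 4), (1, 5, 7), (1, 7, 3)]
vol_cm3 = mesh_volume(verts, tris)
bulk_density = 2.9 / vol_cm3   # hypothetical 2.9 g sample mass -> g/cm^3
```

Unlike bead immersion, nothing here touches the sample; the volume is entirely a property of the reconstructed mesh, which is why friable samples become measurable.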

  7. Real-time viscosity and mass density sensors requiring microliter sample volume based on nanomechanical resonators.

    Science.gov (United States)

    Bircher, Benjamin A; Duempelmann, Luc; Renggli, Kasper; Lang, Hans Peter; Gerber, Christoph; Bruns, Nico; Braun, Thomas

    2013-09-17

    A microcantilever based method for fluid viscosity and mass density measurements with high temporal resolution and microliter sample consumption is presented. Nanomechanical cantilever vibration is driven by photothermal excitation and detected by an optical beam deflection system using two laser beams of different wavelengths. The theoretical framework relating cantilever response to the viscosity and mass density of the surrounding fluid was extended to consider higher flexural modes vibrating at high Reynolds numbers. The performance of the developed sensor and extended theory was validated over a viscosity range of 1-20 mPa·s and a corresponding mass density range of 998-1176 kg/m³ using reference fluids. Separating sample plugs from the carrier fluid by a two-phase configuration in combination with a microfluidic flow cell, allowed samples of 5 μL to be sequentially measured under continuous flow, opening the method to fast and reliable screening applications. To demonstrate the study of dynamic processes, the viscosity and mass density changes occurring during the free radical polymerization of acrylamide were monitored and compared to published data. Shear-thinning was observed in the viscosity data at higher flexural modes, which vibrate at elevated frequencies. Rheokinetic models allowed the monomer-to-polymer conversion to be tracked in spite of the shear-thinning behavior, and could be applied to study the kinetics of unknown processes.

  8. On the asymptotic improvement of supervised learning by utilizing additional unlabeled samples - Normal mixture density case

    Science.gov (United States)

    Shahshahani, Behzad M.; Landgrebe, David A.

    1992-01-01

    The effect of additional unlabeled samples in improving the supervised learning process is studied in this paper. Three learning processes, supervised, unsupervised, and combined supervised-unsupervised, are compared by studying the asymptotic behavior of the estimates obtained under each process. Upper and lower bounds on the asymptotic covariance matrices are derived. It is shown that under a normal mixture density assumption for the probability density function of the feature space, the combined supervised-unsupervised learning is always superior to the supervised learning in achieving better estimates. Experimental results are provided to verify the theoretical concepts.
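The combined supervised-unsupervised setting can be sketched as an EM procedure on a normal mixture in which labeled points keep hard class responsibilities while unlabeled points contribute soft posteriors. A minimal 1-D two-class sketch with known unit variances (the data sizes, means, and fixed variances are illustrative assumptions, not the paper's experimental setup):

```python
import math, random

random.seed(1)

# Synthetic 1-D feature space: two unit-variance normal classes.
labeled = [(random.gauss(-2.0, 1.0), 0) for _ in range(20)] \
        + [(random.gauss(+2.0, 1.0), 1) for _ in range(20)]
unlabeled = [random.gauss(-2.0, 1.0) for _ in range(200)] \
          + [random.gauss(+2.0, 1.0) for _ in range(200)]

def pdf(x, mu):                      # N(mu, 1) density
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

mu = [-1.0, 1.0]                     # crude initial means
pi = [0.5, 0.5]                      # mixing proportions
xs = [x for x, _ in labeled] + unlabeled
for _ in range(50):
    # E-step: labels are known for labeled data, posteriors otherwise.
    resp = [[1.0 - y, float(y)] for _, y in labeled]
    for x in unlabeled:
        p = [pi[k] * pdf(x, mu[k]) for k in range(2)]
        s = p[0] + p[1]
        resp.append([p[0] / s, p[1] / s])
    # M-step: weighted mean/proportion updates over all samples.
    for k in range(2):
        w = sum(r[k] for r in resp)
        mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / w
        pi[k] = w / len(xs)
```

The unlabeled points sharpen the mean estimates beyond what the 40 labeled samples alone could give, which is the qualitative effect the asymptotic covariance bounds in the paper quantify.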

  9. A new Langmuir probe concept for rapid sampling of space plasma electron density

    International Nuclear Information System (INIS)

    Jacobsen, K S; Pedersen, A; Moen, J I; Bekkeng, T A

    2010-01-01

    In this paper we describe a new Langmuir probe concept that was invented for the in situ investigation of HF radar backscatter irregularities, with the capability to measure absolute electron density at a resolution sufficient to resolve the finest conceivable structure in an ionospheric plasma. The instrument consists of two or more fixed-bias cylindrical Langmuir probes whose radius is small compared to the Debye length. With this configuration, it is possible to acquire absolute electron density measurements independent of electron temperature and rocket/satellite potential. The system was flown on the ICI-2 sounding rocket to investigate the plasma irregularities which cause HF backscatter. It had a sampling rate of more than 5 kHz and successfully measured structures down to the scale of one electron gyro radius. The system can easily be adapted for any ionospheric rocket or satellite, and provides high-quality measurements of electron density at any desired resolution.
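The temperature independence comes from the orbital-motion-limited (OML) regime of thin cylindrical probes, where the square of the collected electron current grows linearly with bias, so the slope of I² versus V gives the density directly. A round-trip sketch under that standard OML relation, I = (n e A / π) √(2 e V / mₑ) (the probe geometry and density below are hypothetical, not the ICI-2 instrument values):

```python
import math

E = 1.602176634e-19      # elementary charge, C
M_E = 9.1093837015e-31   # electron mass, kg

def electron_density(slope_A2_per_V, probe_area_m2):
    """Infer n_e from the slope of I^2 vs. bias for an OML-regime
    cylindrical probe; independent of T_e and payload potential."""
    return (math.pi / (E * probe_area_m2)) * math.sqrt(
        M_E * slope_A2_per_V / (2.0 * E))

# Hypothetical cylinder: radius 0.25 mm, length 25 mm
A = 2.0 * math.pi * 0.25e-3 * 25e-3
n_true = 1.0e11                      # assumed density, m^-3

def oml_current(n, V):
    return n * E * A / math.pi * math.sqrt(2.0 * E * V / M_E)

# Two fixed-bias probes -> one I^2-vs-V slope -> recovered density
V1, V2 = 4.0, 10.0
slope = (oml_current(n_true, V2) ** 2 - oml_current(n_true, V1) ** 2) / (V2 - V1)
n_est = electron_density(slope, A)
```

Two (or more) simultaneously sampled fixed-bias needles give this slope at every sample instant, which is what enables the multi-kHz density sampling rate.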

  10. From echolocation clicks to animal density – acoustic sampling of harbour porpoises with static dataloggers

    DEFF Research Database (Denmark)

    Kyhn, Line Anker; Tougaard, Jakob; Thomas, L.

    2012-01-01

    Monitoring abundance and population trends of small odontocetes is notoriously difficult and labour intensive. There is a need to develop alternative methods to the traditional visual line transect surveys, especially for low density areas. Here, the prospect of obtaining robust density estimates for porpoises by passive acoustic monitoring (PAM) is demonstrated by combining rigorous application of methods adapted from distance sampling to PAM. Acoustic dataloggers (T-PODs) were deployed in an area where harbour porpoises concurrently were tracked visually. Probability of detection was estimated... This provides a method suitable for monitoring in areas with densities too low for visual surveys to be practically feasible, e.g. the endangered harbour porpoise population in the Baltic...

  11. A framework for inference about carnivore density from unstructured spatial sampling of scat using detector dogs

    Science.gov (United States)

    Thompson, Craig M.; Royle, J. Andrew; Garner, James D.

    2012-01-01

    Wildlife management often hinges upon an accurate assessment of population density. Although undeniably useful, many of the traditional approaches to density estimation such as visual counts, livetrapping, or mark–recapture suffer from a suite of methodological and analytical weaknesses. Rare, secretive, or highly mobile species exacerbate these problems through the reality of small sample sizes and movement on and off study sites. In response to these difficulties, there is growing interest in the use of non-invasive survey techniques, which provide the opportunity to collect larger samples with minimal increases in effort, as well as the application of analytical frameworks that are not reliant on large sample size arguments. One promising survey technique, the use of scat detecting dogs, offers a greatly enhanced probability of detection while at the same time generating new difficulties with respect to non-standard survey routes, variable search intensity, and the lack of a fixed survey point for characterizing non-detection. In order to account for these issues, we modified an existing spatially explicit, capture–recapture model for camera trap data to account for variable search intensity and the lack of fixed, georeferenced trap locations. We applied this modified model to a fisher (Martes pennanti) dataset from the Sierra National Forest, California, and compared the results (12.3 fishers/100 km²) to more traditional density estimates. We then evaluated model performance using simulations at 3 levels of population density. Simulation results indicated that estimates based on the posterior mode were relatively unbiased. We believe that this approach provides a flexible analytical framework for reconciling the inconsistencies between detector dog survey data and density estimation procedures.

  12. K-space trajectory mapping and its application for ultrashort Echo time imaging

    Czech Academy of Sciences Publication Activity Database

    Latta, P.; Starčuk jr., Zenon; Gruwel, M. L. H.; Weber, M.H.; Tomanek, B.

    2017-01-01

    Vol. 36, February (2017), pp. 68-76 ISSN 0730-725X R&D Projects: GA ČR(CZ) GA15-12607S Institutional support: RVO:68081731 Keywords: gradient imperfections * k-space deviation * trajectory estimation * ultrashort echo time Subject RIV: FS - Medical Facilities; Equipment OBOR OECD: Medical engineering Impact factor: 2.225, year: 2016

  13. Off-resonance artifacts correction with convolution in k-space (ORACLE).

    Science.gov (United States)

    Lin, Wei; Huang, Feng; Simonotto, Enrico; Duensing, George R; Reykowski, Arne

    2012-06-01

    Off-resonance artifacts hinder the wider applicability of echo-planar imaging and non-Cartesian MRI methods such as radial and spiral. In this work, a general and rapid method is proposed for off-resonance artifacts correction based on data convolution in k-space. The acquired k-space is divided into multiple segments based on their acquisition times. Off-resonance-induced artifact within each segment is removed by applying a convolution kernel, which is the Fourier transform of an off-resonance correcting spatial phase modulation term. The field map is determined from the inverse Fourier transform of a basis kernel, which is calibrated from data fitting in k-space. The technique was demonstrated in phantom and in vivo studies for radial, spiral and echo-planar imaging datasets. For radial acquisitions, the proposed method allows the self-calibration of the field map from the imaging data, when an alternating view-angle ordering scheme is used. An additional advantage for off-resonance artifacts correction based on data convolution in k-space is the reusability of convolution kernels to images acquired with the same sequence but different contrasts. Copyright © 2011 Wiley-Liss, Inc.
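The per-segment correction can be illustrated with a single-segment toy: a spatial phase modulation applied in image space is mathematically equivalent to convolving that segment's k-space data with the kernel's Fourier transform. The sketch below assumes a uniform field map and a hypothetical segment timing; the published method instead calibrates the kernel (and hence the field map) from the k-space data itself and handles many segments:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
img_true = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
df_hz = 50.0 * np.ones((n, n))   # assumed uniform off-resonance map (Hz)
t_seg = 5e-3                     # hypothetical acquisition time of segment

# Simulate off-resonance accrued during this segment: e^{-i 2 pi df t}
corrupted_k = np.fft.fft2(img_true * np.exp(-2j * np.pi * df_hz * t_seg))

# Correction: apply the conjugate spatial phase (equivalently, convolve
# the segment's k-space data with the Fourier transform of that phase)
img_corr = np.fft.ifft2(corrupted_k) * np.exp(2j * np.pi * df_hz * t_seg)
err = np.max(np.abs(img_corr - img_true))
```

Working in k-space makes the correction kernel reusable across images acquired with the same sequence, which is the efficiency advantage the abstract highlights.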

  14. Space density and clustering properties of a new sample of emission-line galaxies

    International Nuclear Information System (INIS)

    Wasilewski, A.J.

    1982-01-01

    A moderate-dispersion objective-prism survey for low-redshift emission-line galaxies has been carried out in an 825 sq. deg. region of sky with the Burrell Schmidt telescope of Case Western Reserve University. A 4° prism (300 Å/mm at Hβ) was used with the IIIa-J emulsion to show that a new sample of emission-line galaxies is available even in areas already searched with the excess uv-continuum technique. The new emission-line galaxies occur quite commonly in systems with peculiar morphology indicating gravitational interaction with a close companion or other disturbance. About 10 to 15% of the sample are Seyfert galaxies. It is suggested that tidal interactions involving matter infall play a significant role in the generation of an emission-line spectrum. The space density of the new galaxies is found to be similar to the space density of the Markarian galaxies. Like the Markarian sample, the galaxies in the present survey represent about 10% of all galaxies in the absolute magnitude range M_p = −16 to −22. The observations also indicate that current estimates of dwarf galaxy space densities may be too low. The clustering properties of the new galaxies have been investigated using two approaches: cluster contour maps and the spatial correlation function. These tests suggest that there is weak clustering and possibly superclustering within the sample itself and that the galaxies considered here are about as common in clusters of ordinary galaxies as in the field.

  15. Assessment of Different Sampling Methods for Measuring and Representing Macular Cone Density Using Flood-Illuminated Adaptive Optics.

    Science.gov (United States)

    Feng, Shu; Gale, Michael J; Fay, Jonathan D; Faridi, Ambar; Titus, Hope E; Garg, Anupam K; Michaels, Keith V; Erker, Laura R; Peters, Dawn; Smith, Travis B; Pennesi, Mark E

    2015-09-01

    To describe a standardized flood-illuminated adaptive optics (AO) imaging protocol suitable for the clinical setting and to assess sampling methods for measuring cone density. Cone density was calculated following three measurement protocols: 50 × 50-μm sampling window values every 0.5° along the horizontal and vertical meridians (fixed-interval method), the mean density of expanding 0.5°-wide arcuate areas in the nasal, temporal, superior, and inferior quadrants (arcuate mean method), and the peak cone density of a 50 × 50-μm sampling window within expanding arcuate areas near the meridian (peak density method). Repeated imaging was performed in nine subjects to determine intersession repeatability of cone density. Cone density montages could be created for 67 of the 74 subjects. Image quality was determined to be adequate for automated cone counting for 35 (52%) of the 67 subjects. We found that cone density varied with different sampling methods and regions tested. In the nasal and temporal quadrants, peak density most closely resembled histological data, whereas the arcuate mean and fixed-interval methods tended to underestimate the density compared with histological data. However, in the inferior and superior quadrants, arcuate mean and fixed-interval methods most closely matched histological data, whereas the peak density method overestimated cone density compared with histological data. Intersession repeatability testing showed that repeatability was greatest when sampling by arcuate mean and lowest when sampling by fixed interval. We show that different methods of sampling can significantly affect cone density measurements. Therefore, care must be taken when interpreting cone density results, even in a normal population.
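The contrast between the sampling protocols reduces to how per-window counts are summarized. A minimal sketch of converting counts in 50 × 50-μm windows to cones/mm² and taking the mean versus the peak over an arcuate area (the counts below are invented for illustration):

```python
def cone_densities(window_counts, window_um=50.0):
    """Convert cone counts in square sampling windows to cones/mm^2 and
    return (mean, peak) over the windows -- the two summary statistics
    contrasted by the arcuate-mean and peak-density protocols."""
    area_mm2 = (window_um / 1000.0) ** 2
    per_mm2 = [c / area_mm2 for c in window_counts]
    return sum(per_mm2) / len(per_mm2), max(per_mm2)

# Hypothetical counts from five 50x50-um windows in one arcuate area
mean_d, peak_d = cone_densities([40, 52, 47, 61, 55])
```

The peak statistic is by construction an upper envelope of the windowed densities, which is consistent with the abstract's finding that it can over- or under-shoot histology depending on the retinal quadrant.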

  16. Path integral methods for primordial density perturbations - sampling of constrained Gaussian random fields

    International Nuclear Information System (INIS)

    Bertschinger, E.

    1987-01-01

    Path integrals may be used to describe the statistical properties of a random field such as the primordial density perturbation field. In this framework the probability distribution is given for a Gaussian random field subjected to constraints such as the presence of a protovoid or supercluster at a specific location in the initial conditions. An algorithm has been constructed for generating samples of a constrained Gaussian random field on a lattice using Monte Carlo techniques. The method makes possible a systematic study of the density field around peaks or other constrained regions in the biased galaxy formation scenario, and it is effective for generating initial conditions for N-body simulations with rare objects in the computational volume. 21 references
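One common concrete realization of sampling a Gaussian random field subject to linear constraints is the Hoffman-Ribak style construction: draw an unconstrained realization, then add the mean-field response that moves it onto the constraint surface. A toy 1-D sketch with an assumed squared-exponential covariance and a single "peak amplitude" constraint at the box centre (this is an illustrative construction, not the paper's path-integral Monte Carlo algorithm):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 128
x = np.arange(n)

# Assumed stationary covariance with a correlation length of 8 cells
C = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 8.0) ** 2)

# Unconstrained Gaussian realization with covariance C
y = rng.multivariate_normal(np.zeros(n), C)

# Single linear constraint: field value 3.0 at the box centre
xi = np.zeros((1, n)); xi[0, n // 2] = 1.0
c = np.array([3.0])

# Constrained realization: add the covariance-weighted correction that
# enforces xi @ y_c == c while preserving the constrained statistics
corr = C @ xi.T @ np.linalg.solve(xi @ C @ xi.T, c - xi @ y)
y_c = y + corr.ravel()
```

The same machinery extends to many simultaneous constraints (peak heights, derivatives, mean overdensities in spheres), which is what makes such fields useful as N-body initial conditions containing rare objects.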

  17. High density FTA plates serve as efficient long-term sample storage for HLA genotyping.

    Science.gov (United States)

    Lange, V; Arndt, K; Schwarzelt, C; Boehme, I; Giani, A S; Schmidt, A H; Ehninger, G; Wassmuth, R

    2014-02-01

    Storage of dried blood spots (DBS) on high-density FTA(®) plates could constitute an appealing alternative to frozen storage. However, it remains controversial whether DBS are suitable for high-resolution sequencing of human leukocyte antigen (HLA) alleles. Therefore, we extracted DNA from DBS that had been stored for up to 4 years, using six different methods. We identified those extraction methods that recovered sufficient high-quality DNA for reliable high-resolution HLA sequencing. Further, we confirmed that frozen whole blood samples that had been stored for several years can be transferred to filter paper without compromising HLA genotyping upon extraction. Concluding, DNA derived from high-density FTA(®) plates is suitable for high-resolution HLA sequencing, provided that appropriate extraction protocols are employed. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. Increasing signal-to-noise ratio of swept-source optical coherence tomography by oversampling in k-space

    Science.gov (United States)

    Nagib, Karim; Mezgebo, Biniyam; Thakur, Rahul; Fernando, Namal; Kordi, Behzad; Sherif, Sherif

    2018-03-01

    Optical coherence tomography systems suffer from noise that could reduce the ability to interpret reconstructed images correctly. We describe a method to increase the signal-to-noise ratio of swept-source optical coherence tomography (SS-OCT) using oversampling in k-space. Due to this oversampling, information redundancy is introduced in the measured interferogram that can be used to reduce white noise in the reconstructed A-scan. We applied our novel scaled nonuniform discrete Fourier transform to oversampled SS-OCT interferograms to reconstruct images of a salamander egg. The peak signal-to-noise ratio (PSNR) between images reconstructed from interferograms sampled at 250 MS/s and 50 MS/s demonstrates that this oversampling increased the signal-to-noise ratio by 25.22 dB.
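The core intuition is that redundancy in oversampled data averages down white noise. A minimal sketch (pure white-noise samples and simple block averaging; the published method uses a scaled nonuniform DFT rather than decimation) showing the expected ~10·log₁₀(k) dB noise reduction for k-fold oversampling:

```python
import math, random

random.seed(7)
oversample, n = 5, 4000

# White-noise-only "interferogram" samples, oversampled 5x in k-space
noisy = [random.gauss(0.0, 1.0) for _ in range(n * oversample)]

# Exploit redundancy by block averaging adjacent oversampled points:
# white-noise standard deviation drops by sqrt(oversample)
avg = [sum(noisy[i * oversample:(i + 1) * oversample]) / oversample
       for i in range(n)]

def std(v):
    m = sum(v) / len(v)
    return math.sqrt(sum((x - m) ** 2 for x in v) / (len(v) - 1))

gain_db = 20.0 * math.log10(std(noisy) / std(avg))   # ~ 10*log10(5) dB
```

For 5x oversampling the expected gain is about 7 dB on white noise alone; the larger figure reported in the abstract also reflects how the reconstruction redistributes signal energy.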

  19. Study on the Effects of Sample Density on Gamma Spectrometry System Measurement Efficiency at Radiochemistry and Environment Laboratory

    International Nuclear Information System (INIS)

    Wo, Y.M.; Dainee Nor Fardzila Ahmad Tugi; Khairul Nizam Razali

    2015-01-01

    The effects of sample density on the measurement efficiency of the gamma spectrometry system were studied using four sets of multi-nuclide standard sources with densities between 0.3 and 1.4 g/ml. The study was conducted on seven 25% coaxial HPGe detector gamma spectrometry systems in the Radiochemistry and Environment Laboratory (RAS). Differences in efficiency as a function of gamma energy and measurement system were compared and discussed. Correction factors for self-absorption caused by differences in sample matrix density were estimated for the gamma systems. The correction factors are to be used in the quantification of radionuclide concentrations in service and research samples of various densities in RAS. (author)
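A common slab-geometry model for such self-absorption corrections scales the efficiency by the average photon transmission through the sample, (1 − e^(−μt))/(μt). The sketch below uses this textbook approximation with illustrative numbers (the mass attenuation coefficient, thickness, and densities are assumptions, not the laboratory's calibration values):

```python
import math

def self_absorption_factor(mu_m_cm2_g, density_g_ml, thickness_cm):
    """Average photon transmission through a slab-approximated sample:
    (1 - exp(-mu*t)) / (mu*t), with linear attenuation mu = mu_m * rho."""
    mut = mu_m_cm2_g * density_g_ml * thickness_cm
    return (1.0 - math.exp(-mut)) / mut

def density_correction(mu_m, rho_sample, rho_standard, t_cm):
    """Factor to rescale an efficiency calibrated at rho_standard when
    counting a sample of density rho_sample in the same geometry."""
    return (self_absorption_factor(mu_m, rho_standard, t_cm)
            / self_absorption_factor(mu_m, rho_sample, t_cm))

# Illustrative: calibration at 0.3 g/ml, sample at 1.4 g/ml, 4 cm fill
cf = density_correction(0.1, 1.4, 0.3, 4.0)
```

A denser matrix absorbs more of its own gamma emission, so the correction factor exceeds 1 and grows with the density mismatch and with decreasing gamma energy (larger μₘ).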

  20. Near-native protein loop sampling using nonparametric density estimation accommodating sparcity.

    Science.gov (United States)

    Joo, Hyun; Chavan, Archana G; Day, Ryan; Lennox, Kristin P; Sukhanov, Paul; Dahl, David B; Vannucci, Marina; Tsai, Jerry

    2011-10-01

    Unlike the core structural elements of a protein like regular secondary structure, template based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and were enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD and with a worst case of 3.66 Å were produced. For the canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD 7.0 Å), this sampling method produces a population of loop structures to around 3.66 Å for loops up to 17 residues. In a direct test of sampling to the Loopy algorithm, our method demonstrates the ability to sample nearer native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, in the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM for 90 loops from 45 TBM targets shows the general applicability of our sampling method in loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near native loop structure. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/.

  1. Near-native protein loop sampling using nonparametric density estimation accommodating sparcity.

    Directory of Open Access Journals (Sweden)

    Hyun Joo

    2011-10-01

    Unlike the core structural elements of a protein like regular secondary structure, template based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and were enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD and with a worst case of 3.66 Å were produced. For the canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD 7.0 Å), this sampling method produces a population of loop structures to around 3.66 Å for loops up to 17 residues. In a direct test of sampling to the Loopy algorithm, our method demonstrates the ability to sample nearer native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, in the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM for 90 loops from 45 TBM targets shows the general applicability of our sampling method in loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near native loop structure. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/.

  2. Near-Native Protein Loop Sampling Using Nonparametric Density Estimation Accommodating Sparcity

    Science.gov (United States)

    Day, Ryan; Lennox, Kristin P.; Sukhanov, Paul; Dahl, David B.; Vannucci, Marina; Tsai, Jerry

    2011-01-01

    Unlike the core structural elements of a protein like regular secondary structure, template based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and were enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD and with a worst case of 3.66 Å were produced. For the canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD 7.0 Å), this sampling method produces a population of loop structures to around 3.66 Å for loops up to 17 residues. In a direct test of sampling to the Loopy algorithm, our method demonstrates the ability to sample nearer native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, in the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM for 90 loops from 45 TBM targets shows the general applicability of our sampling method in loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near native loop structure. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/. PMID:22028638
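The sample-then-filter structure of the method (draw φ/ψ pairs from a per-position density, keep candidates whose ends close the gap) can be caricatured with a deliberately simplified toy: a fixed two-basin Gaussian mixture stands in for the learned DPM-HMM density, and a planar chain stands in for backbone geometry. Everything below (basin parameters, chain model, target distance) is invented for illustration:

```python
import math, random

random.seed(0)

# Toy stand-in for the position-specific phi/psi density: a two-basin
# Gaussian mixture (alpha-like and beta-like); the actual method learns
# a nonparametric DPM-HMM density from homologous template torsions.
BASINS = [((-63.0, -43.0), 15.0, 0.6),    # ((mean phi, mean psi), std, weight)
          ((-120.0, 130.0), 20.0, 0.4)]

def sample_dihedrals(n_res):
    out = []
    for _ in range(n_res):
        basin = BASINS[0] if random.random() < BASINS[0][2] else BASINS[1]
        (mphi, mpsi), s, _w = basin
        out.append((random.gauss(mphi, s), random.gauss(mpsi, s)))
    return out

def end_to_end(dihedrals, step=3.8):
    # Crude planar chain: the heading turns by (phi + psi) per residue.
    x = y = heading = 0.0
    for phi, psi in dihedrals:
        heading += math.radians(phi + psi)
        x += step * math.cos(heading)
        y += step * math.sin(heading)
    return math.hypot(x, y)

# Generate candidate 8-residue loops, then enrich with the filter.
target, tol = 10.0, 1.0
pool = [sample_dihedrals(8) for _ in range(5000)]
kept = [d for d in pool if abs(end_to_end(d) - target) < tol]
```

Because candidate generation is cheap, the expensive part of the real method is the density estimation; the end-to-end filter then discards the large fraction of samples whose termini cannot bridge the anchor residues.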

  3. Improving snow density estimation for mapping SWE with Lidar snow depth: assessment of uncertainty in modeled density and field sampling strategies in NASA SnowEx

    Science.gov (United States)

    Raleigh, M. S.; Smyth, E.; Small, E. E.

    2017-12-01

    The spatial distribution of snow water equivalent (SWE) is not sufficiently monitored with either remotely sensed or ground-based observations for water resources management. Recent applications of airborne Lidar have yielded basin-wide mapping of SWE when combined with a snow density model. However, in the absence of snow density observations, the uncertainty in these SWE maps is dominated by uncertainty in modeled snow density rather than in Lidar measurement of snow depth. Available observations tend to have a bias in physiographic regime (e.g., flat open areas) and are often insufficient in number to support testing of models across a range of conditions. Thus, there is a need for targeted sampling strategies and controlled model experiments to understand where and why different snow density models diverge. This will enable identification of robust model structures that represent dominant processes controlling snow densification, in support of basin-scale estimation of SWE with remotely-sensed snow depth datasets. The NASA SnowEx mission is a unique opportunity to evaluate sampling strategies of snow density and to quantify and reduce uncertainty in modeled snow density. In this presentation, we present initial field data analyses and modeling results over the Colorado SnowEx domain in the 2016-2017 winter campaign. We detail a framework for spatially mapping the uncertainty in snowpack density, as represented across multiple models. Leveraging the modular SUMMA model, we construct a series of physically-based models to assess systematically the importance of specific process representations to snow density estimates. We will show how models and snow pit observations characterize snow density variations with forest cover in the SnowEx domains. Finally, we will use the spatial maps of density uncertainty to evaluate the selected locations of snow pits, thereby assessing the adequacy of the sampling strategy for targeting uncertainty in modeled snow density.

  4. Deconvolution of the density of states of tip and sample through constant-current tunneling spectroscopy

    Directory of Open Access Journals (Sweden)

    Holger Pfeifer

    2011-09-01

    We introduce a scheme to obtain the deconvolved density of states (DOS) of the tip and sample from scanning tunneling spectra determined in the constant-current mode (z–V spectroscopy). The scheme is based on the validity of the Wentzel–Kramers–Brillouin (WKB) approximation and the trapezoidal approximation of the electron potential within the tunneling barrier. In a numerical treatment of z–V spectroscopy, we first analyze how the position and amplitude of characteristic DOS features change depending on parameters such as the energy position, width, barrier height, and the tip–sample separation. Then it is shown that the deconvolution scheme is capable of recovering the original DOS of tip and sample with an accuracy of better than 97% within the one-dimensional WKB approximation. Application of the deconvolution scheme to experimental data obtained on Nb(110) reveals a convergent behavior, providing separately the DOS of both sample and tip. In detail, however, there are systematic quantitative deviations between the DOS results based on z–V data and those based on I–V data. This points to an inconsistency between the assumed and the actual transmission probability function. Indeed, the experimentally determined differential barrier height still clearly deviates from that derived from the deconvolved DOS. Thus, the present progress in developing a reliable deconvolution scheme shifts the focus towards how to access the actual transmission probability function.
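The trapezoidal-barrier WKB transmission at the heart of such schemes is straightforward to evaluate numerically. A sketch with an illustrative sign convention and parameter values (work functions, gap, and bias below are assumptions for demonstration, not the paper's Nb(110) data):

```python
import math

HBAR = 1.054571817e-34   # J*s
M_E = 9.1093837015e-31   # kg
E_CH = 1.602176634e-19   # J per eV

def wkb_transmission(energy_ev, bias_v, phi_sample_ev, phi_tip_ev,
                     gap_m, n=1000):
    """One-dimensional WKB tunneling probability through a trapezoidal
    barrier: the height interpolates linearly from the sample to the
    tip work function, tilted by the bias (illustrative convention)."""
    kappa_int, dz = 0.0, gap_m / n
    for i in range(n):
        z = (i + 0.5) / n
        barrier_ev = ((1.0 - z) * phi_sample_ev
                      + z * (phi_tip_ev + bias_v) - energy_ev)
        if barrier_ev > 0.0:
            kappa_int += math.sqrt(2.0 * M_E * barrier_ev * E_CH) / HBAR * dz
    return math.exp(-2.0 * kappa_int)

# Symmetric 4.5 eV barrier, 0.5 nm gap, zero bias
T = wkb_transmission(0.0, 0.0, 4.5, 4.5, 0.5e-9)
```

The exponential sensitivity of T to the gap is exactly why z–V spectroscopy, which adjusts z to keep the current constant, probes the convolved tip and sample DOS through this transmission factor.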

  5. Terrestrial gamma radiation baseline mapping using ultra low density sampling methods

    International Nuclear Information System (INIS)

    Kleinschmidt, R.; Watson, D.

    2016-01-01

Baseline terrestrial gamma radiation maps are indispensable for providing basic reference information that may be used in assessing the impact of a radiation related incident, performing epidemiological studies, remediating land contaminated with radioactive materials, assessment of land use applications and resource prospectivity. For a large land mass, such as Queensland, Australia (over 1.7 million km²), it is prohibitively expensive and practically difficult to undertake detailed in-situ radiometric surveys of this scale. It is proposed that an existing, ultra-low density sampling program already undertaken for the purpose of a nationwide soil survey project be utilised to develop a baseline terrestrial gamma radiation map. Geoelement data derived from the National Geochemistry Survey of Australia (NGSA) was used to construct a baseline terrestrial gamma air kerma rate map, delineated by major drainage catchments, for Queensland. Three drainage catchments (sampled at the catchment outlet) spanning low, medium and high radioelement concentrations were selected for validation of the methodology using radiometric techniques including in-situ measurements and soil sampling for high resolution gamma spectrometry, and comparative non-radiometric analysis. A Queensland mean terrestrial air kerma rate, as calculated from the NGSA outlet sediment uranium, thorium and potassium concentrations, of 49 ± 69 nGy h⁻¹ (n = 311, 3σ 99% confidence level) is proposed as being suitable for use as a generic terrestrial air kerma rate background range. Validation results indicate that catchment outlet measurements are representative of the range of results obtained across the catchment and that the NGSA geoelement data is suitable for calculation and mapping of terrestrial air kerma rate. - Highlights: • A baseline terrestrial air kerma map of Queensland, Australia was developed using geochemical data from a major drainage catchment ultra-low density sampling program
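The concentration-to-dose-rate step behind such a map can be sketched with the widely quoted UNSCEAR conversion coefficients (about 13.08 nGy/h per % K, 5.67 nGy/h per ppm eU, and 2.49 nGy/h per ppm eTh, for absorbed dose rate in air 1 m above ground, which is numerically close to the air kerma rate at these photon energies). The coefficients the authors actually used may differ, and the sample values below are invented:

```python
# Hedged sketch: terrestrial gamma dose rate in air from soil radioelement
# concentrations, using widely quoted UNSCEAR conversion coefficients.
# The paper's own coefficients may differ; the sample values are made up.
UNSCEAR_NGY_PER_UNIT = {"K_pct": 13.08, "U_ppm": 5.67, "Th_ppm": 2.49}

def air_dose_rate_ngy_h(k_pct, u_ppm, th_ppm, coeff=UNSCEAR_NGY_PER_UNIT):
    """Absorbed dose rate in air 1 m above ground, in nGy/h
    (numerically close to the air kerma rate at these photon energies)."""
    return (coeff["K_pct"] * k_pct
            + coeff["U_ppm"] * u_ppm
            + coeff["Th_ppm"] * th_ppm)

# Hypothetical catchment-outlet sediment: 1.5 % K, 2 ppm eU, 8 ppm eTh.
d = air_dose_rate_ngy_h(1.5, 2.0, 8.0)
print(round(d, 2))  # 13.08*1.5 + 5.67*2.0 + 2.49*8.0 = 50.88 nGy/h
```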

  6. [Detection of herpes virus and human enterovirus in pathology samples using low-density arrays].

    Science.gov (United States)

    Del Carmen Martínez, Sofía; Gervás Ríos, Ruth; Franco Rodríguez, Yoana; González Velasco, Cristina; Cruz Sánchez, Miguel Ángel; Abad Hernández, María Del Mar

Despite the frequency of infections with the herpesviridae family, only eight subtypes affect humans (Herpes Simplex Virus types 1 and 2, Varicella Zoster Virus, Epstein-Barr Virus, Cytomegalovirus and Human Herpes Virus types 6, 7 and 8). Amongst enterovirus infections, the most important are Poliovirus, Coxsackievirus and Echovirus. Symptoms can vary from mild to severe and early diagnosis is of utmost importance. Nowadays, low-density arrays can detect different types of viruses in a single assay using DNA extracted from biological samples. We analyzed 70 samples of formalin-fixed and paraffin-embedded tissue, searching for viruses (HSV-1, HSV-2, VZV, CMV, EBV, HHV-6, HHV-7 and HHV-8, Poliovirus, Echovirus and Coxsackievirus) using the kit CLART ® ENTHERPEX. Out of the total of 70 samples, 29 were positive for viral infection (41.43%), and only 4 of them showed cytopathic effect (100% correlation between histology and the test). 47.6% of GVHD samples were positive for virus; 68.75% of IBD samples analyzed showed positivity for viral infection; in colitis with ulcers (neither GVHD nor IBD), the test was positive in 50% of the samples, and it was also positive in 50% of ischemic lesions. The high sensitivity of the technique makes it a useful tool for the pathologist in addition to conventional histology-based diagnosis, as a viral infection may affect treatment. Copyright © 2016 Sociedad Española de Anatomía Patológica. Published by Elsevier España, S.L.U. All rights reserved.

  7. Self-gated 4D multiphase, steady-state imaging with contrast enhancement (MUSIC) using rotating cartesian K-space (ROCK): Validation in children with congenital heart disease.

    Science.gov (United States)

    Han, Fei; Zhou, Ziwu; Han, Eric; Gao, Yu; Nguyen, Kim-Lien; Finn, J Paul; Hu, Peng

    2017-08-01

To develop and validate a cardiac-respiratory self-gating (SG) strategy for the recently proposed multiphase steady-state imaging with contrast enhancement (MUSIC) technique. The proposed SG strategy uses ROtating Cartesian K-space (ROCK) sampling, which allows for retrospective k-space binning based on motion surrogates derived from the k-space center line. The k-space bins are reconstructed using a compressed sensing algorithm. Ten pediatric patients underwent cardiac MRI for clinical reasons. The original MUSIC and 2D-CINE images were acquired as a part of the clinical protocol, followed by the ROCK-MUSIC acquisition, all under steady-state intravascular distribution of ferumoxytol. Subjective scores and image sharpness were used to compare the images of ROCK-MUSIC and original MUSIC. All scans were completed successfully without complications. The ROCK-MUSIC acquisition took 5 ± 1 min, compared to 8 ± 2 min for the original MUSIC. Image scores of ROCK-MUSIC were significantly better than original MUSIC at the ventricular outflow tracts (3.9 ± 0.3 vs. 3.3 ± 0.6, P < 0.05), with a trend toward superior scores for ROCK-MUSIC in the other anatomic locations. ROCK-MUSIC provided images of equal or superior image quality compared to original MUSIC, and this was achievable with 40% savings in scan time and without the need for a physiologic gating signal. Magn Reson Med 78:472-483, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
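The retrospective binning step can be sketched in a few lines: every acquired Cartesian ky line carries a time stamp, a motion surrogate is derived from repeated k-space-center readouts, and lines are sorted into motion-state bins that are then reconstructed separately. The sketch below uses a synthetic sinusoidal surrogate and amplitude binning; the bin count, line ordering, and surrogate extraction are illustrative assumptions, not the ROCK implementation:

```python
import numpy as np

# Minimal sketch of retrospective k-space binning: each acquired Cartesian
# ky line is time-stamped, a respiratory surrogate (here synthetic) stands in
# for one derived from k-space-center readouts, and lines are sorted into
# motion bins for separate reconstruction. All details are illustrative.
rng = np.random.default_rng(0)
n_lines = 2000
t = np.arange(n_lines) * 0.005                 # 5 ms per ky line
surrogate = np.sin(2 * np.pi * 0.25 * t)       # ~4 s respiratory cycle
ky = rng.integers(-128, 128, size=n_lines)     # pseudo-random ky ordering

n_bins = 4
# Amplitude binning: equal-count bins over the surrogate's range.
edges = np.quantile(surrogate, np.linspace(0, 1, n_bins + 1))
bin_idx = np.clip(np.searchsorted(edges, surrogate, side="right") - 1,
                  0, n_bins - 1)

bins = {b: ky[bin_idx == b] for b in range(n_bins)}
for b, lines in bins.items():
    print(b, len(lines), len(np.unique(lines)))  # ky coverage per motion state
```

In practice each bin is then handed to a compressed-sensing reconstruction, since no single bin contains a fully sampled Cartesian grid.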

  8. Characteristics of recycled and electron beam irradiated high density polyethylene samples

    International Nuclear Information System (INIS)

    Cardoso, Jessica R.; Gabriel, Leandro; Geraldo, Aurea B.C.; Moura, Eduardo

    2015-01-01

Polymer modification by irradiation is a well-known process in which degradation and cross-linking occur as concurrent events; the latter is desirable when an increase in mechanical properties is required. Interest in recycling and reusing polymeric materials is driven by the growing amount of plastics ending up in waste streams. Irradiation and recycling may therefore be combined to give a new use, through improved mechanical properties, to material that would otherwise be discarded. In this work, a High Density Polyethylene (HDPE) matrix was recycled five times from the original substrate. Electron beam irradiation at doses from 50 kGy to 200 kGy was applied to both original and recycled samples, and their mechanical properties and thermal characteristics were evaluated. The results of the applied process and the material characterization are discussed. (author)

  9. Characteristics of recycled and electron beam irradiated high density polyethylene samples

    Energy Technology Data Exchange (ETDEWEB)

    Cardoso, Jessica R.; Gabriel, Leandro; Geraldo, Aurea B.C.; Moura, Eduardo, E-mail: jrcardoso@ipen.br, E-mail: lgabriell@gmail.com, E-mail: ageraldo@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

Polymer modification by irradiation is a well-known process in which degradation and cross-linking occur as concurrent events; the latter is desirable when an increase in mechanical properties is required. Interest in recycling and reusing polymeric materials is driven by the growing amount of plastics ending up in waste streams. Irradiation and recycling may therefore be combined to give a new use, through improved mechanical properties, to material that would otherwise be discarded. In this work, a High Density Polyethylene (HDPE) matrix was recycled five times from the original substrate. Electron beam irradiation at doses from 50 kGy to 200 kGy was applied to both original and recycled samples, and their mechanical properties and thermal characteristics were evaluated. The results of the applied process and the material characterization are discussed. (author)

  10. The dependence of the counting efficiency of Marinelli beakers for environmental samples on the density of the samples

    International Nuclear Information System (INIS)

    Alfassi, Z.B.; Lavi, N.

    2005-01-01

The effect of the density of the radioactive material packed in a Marinelli beaker on the counting efficiency was studied. It was found that for all densities studied (0.4-1.7 g/cm³) the counting efficiency (ε) fits the linear log-log dependence on the photon energy (E) above 200 keV, i.e. obeying the equation ε = αE^β (α, β parameters). It was found that for each photon energy the counting efficiency is linearly dependent on the density (ρ) of the matrix: ε = a − bρ (a, b parameters). The parameters of the linear dependence are energy dependent (linear log-log dependence), leading to a final equation for the counting efficiency of a Marinelli beaker involving both the density of the matrix and the photon energy: ε = α₁E^β₁ − α₂E^β₂ρ.
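Both fitted forms reduce to ordinary linear regressions: the power law becomes linear in log-log coordinates, and the density dependence is linear at fixed energy. A minimal sketch with synthetic, noise-free data (all parameter values invented for illustration):

```python
import numpy as np

# Sketch of the two fits described above, on synthetic noise-free data
# with invented parameters: a power law eps = alpha * E**beta above
# 200 keV, and a linear density dependence eps = a - b*rho.
E = np.array([300., 500., 700., 900., 1100., 1300.])   # keV
alpha_true, beta_true = 2.0, -0.9
eps = alpha_true * E ** beta_true

# Power-law fit via linear regression in log-log space.
beta, log_alpha = np.polyfit(np.log(E), np.log(eps), 1)
alpha = np.exp(log_alpha)
print(alpha, beta)   # recovers ~2.0 and ~-0.9

# Linear density dependence at a fixed photon energy.
rho = np.array([0.4, 0.8, 1.2, 1.7])                   # g/cm^3
eps_rho = 0.05 - 0.01 * rho
neg_b, a = np.polyfit(rho, eps_rho, 1)
print(a, -neg_b)     # recovers a ~0.05 and b ~0.01
```

Fitting the energy dependence of the density-fit parameters a and b in the same log-log way yields the combined two-term expression quoted in the abstract.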

  11. Terrestrial gamma radiation baseline mapping using ultra low density sampling methods.

    Science.gov (United States)

    Kleinschmidt, R; Watson, D

    2016-01-01

Baseline terrestrial gamma radiation maps are indispensable for providing basic reference information that may be used in assessing the impact of a radiation related incident, performing epidemiological studies, remediating land contaminated with radioactive materials, assessment of land use applications and resource prospectivity. For a large land mass, such as Queensland, Australia (over 1.7 million km²), it is prohibitively expensive and practically difficult to undertake detailed in-situ radiometric surveys of this scale. It is proposed that an existing, ultra-low density sampling program already undertaken for the purpose of a nationwide soil survey project be utilised to develop a baseline terrestrial gamma radiation map. Geoelement data derived from the National Geochemistry Survey of Australia (NGSA) was used to construct a baseline terrestrial gamma air kerma rate map, delineated by major drainage catchments, for Queensland. Three drainage catchments (sampled at the catchment outlet) spanning low, medium and high radioelement concentrations were selected for validation of the methodology using radiometric techniques including in-situ measurements and soil sampling for high resolution gamma spectrometry, and comparative non-radiometric analysis. A Queensland mean terrestrial air kerma rate, as calculated from the NGSA outlet sediment uranium, thorium and potassium concentrations, of 49 ± 69 nGy h⁻¹ (n = 311, 3σ 99% confidence level) is proposed as being suitable for use as a generic terrestrial air kerma rate background range. Validation results indicate that catchment outlet measurements are representative of the range of results obtained across the catchment and that the NGSA geoelement data is suitable for calculation and mapping of terrestrial air kerma rate. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  12. Photoelectron diffraction k-space volumes of the c(2x2) Mn/Ni(100) structure

    Energy Technology Data Exchange (ETDEWEB)

Banerjee, S.; Denlinger, J.; Chen, X. [Univ. of Wisconsin, Milwaukee, WI (United States)] [and others]

    1997-04-01

Traditionally, x-ray photoelectron diffraction (XPD) studies have either been done by scanning the diffraction angle for fixed kinetic energy (ADPD), or scanning the kinetic energy at fixed exit angle (EDPD). Both of these methods collect subsets of the full diffraction pattern, or volume, which is the intensity of photoemission as a function of momentum direction and magnitude. With the high density available at the Spectromicroscopy Facility (BL 7.0) "ultraESCA" station, the authors are able to completely characterize the photoelectron diffraction patterns of surface structures, up to several hundred electron volts kinetic energy. This large diffraction "volume" can then be analyzed in many ways. The k-space volume contains as a subset the energy dependent photoelectron diffraction spectra along all emission angles. It also contains individual, hemispherical, diffraction patterns at specific kinetic energies. Other "cuts" through the data set are also possible, revealing new ways of viewing photoelectron diffraction data, and potentially new information about the surface structure being studied. In this article the authors report a brief summary of a structural study being done on the c(2x2) Mn/Ni(100) surface alloy. This system is interesting for both structural and magnetic reasons. Magnetically, the Mn/Ni(100) surface alloy exhibits parallel coupling of the Mn and Ni moments, which is opposite to the reported coupling for the bulk, disordered, alloy. Structurally, the Mn atoms are believed to lie well above the surface plane.

  13. Thermal property and density measurements of samples taken from drilling cores from potential geologic media

    International Nuclear Information System (INIS)

    Lagedrost, J.F.; Capps, W.

    1983-12-01

Density, steady-state conductivity, enthalpy, specific heat, heat capacity, thermal diffusivity and linear thermal expansion were measured on 59 materials from core drill samples of several geologic media, including rock salt, basalt, and other associated rocks from 7 potential sites for nuclear waste isolation. The measurements were conducted from or near to room temperature up to 500 °C, or to lower temperatures if limited by specimen cracking or fracturing. Ample documentation establishes the reliability of the property measurement methods and the accuracy of the results. Thermal expansions of salts reached 2.2 to 2.8 percent at 500 °C. Associated rocks were from 0.6 to 1.6 percent. Basalts were close to 0.3 percent at 500 °C. Specific heats of salts varied from 0.213 to 0.233 cal g⁻¹ °C⁻¹, and basalts averaged 0.239 cal g⁻¹ °C⁻¹. Thermal conductivities of salts at 50 °C were from 0.022 to 0.046 W cm⁻¹ °C⁻¹, and at 500 °C, from 0.012 to 0.027 W cm⁻¹ °C⁻¹. Basalt conductivities ranged from 0.020 to 0.022 W cm⁻¹ °C⁻¹ at 100 °C and 0.016 to 0.018 at 500 °C. There were no obvious conductivity trends relative to source location. Room temperature densities of salts were from 2.14 to 2.29 g cm⁻³, and basalts, from 2.83 to 2.90 g cm⁻³. The extreme friability of some materials made specimen fabrication difficult. 21 references, 17 figures, 28 tables

  14. Extensive Sampling of Forest Carbon using High Density Power Line Lidar

    Science.gov (United States)

    Hampton, H. M.; Chen, Q.; Dye, D. G.; Hungate, B. A.

    2013-12-01

    Estimating carbon sequestration and greenhouse gas emissions from forest management, natural processes, and disturbance is of growing interest for mitigating global warming. Ponderosa pine is common at mid-elevations throughout the western United States and is a dominant tree species in southwestern forests. Existing unmanaged "relict" sites and stand reconstructions of southwestern ponderosa pine forests from before European settlement (late 1800s) provide evidence of forests of larger trees of lower density and less vulnerability to severe fires than today's typical conditions of high densities of small trees that have resulted from a century of fire suppression. Forest treatments to improve forest health in the region include tree cutting focused on small-diameter trees (thinning), low-intensity prescribed burning, and monitoring rather than suppressing wildfires. Stimulated by several uncharacteristically-intense fires in the last decade, a collaborative process found strong stakeholder agreement to accelerate forest treatments to reduce fire risk and restore ecological conditions. Land use planning to ramp up management is underway and could benefit from quick and inexpensive techniques to inventory tree-level carbon because existing inventory data are not adequate to capture the range of forest structural conditions. Our approach overcomes these shortcomings by employing recent breakthroughs in estimating aboveground biomass from high resolution light detection and ranging (lidar) remote sensing. Lidar is an active remote sensing technique, analogous to radar, which measures the time required for a transmitted pulse of laser light to return to the sensor after reflection from a target. Lidar data can capture 3-dimensional forest structure with greater detail and broader spatial coverage than is feasible with conventional field measurements. 
We developed a novel methodology for extensive sampling and field validation of forest carbon, applicable to managed and

  15. Four dimensional magnetic resonance imaging with retrospective k-space reordering: A feasibility study

    International Nuclear Information System (INIS)

    Liu, Yilin; Yin, Fang-Fang; Cai, Jing; Chen, Nan-kuei; Chu, Mei-Lan

    2015-01-01

Purpose: Current four dimensional magnetic resonance imaging (4D-MRI) techniques lack sufficient temporal/spatial resolution and consistent tumor contrast. To overcome these limitations, this study presents the development and initial evaluation of a new strategy for 4D-MRI which is based on retrospective k-space reordering. Methods: We simulated a k-space reordered 4D-MRI on a 4D digital extended cardiac-torso (XCAT) human phantom. A 2D echo planar imaging MRI sequence [frame rate (F) = 0.448 Hz; image resolution (R) = 256 × 256; number of k-space segments (N_KS) = 4] with sequential image acquisition mode was assumed for the simulation. Image quality of the simulated "4D-MRI" acquired from the XCAT phantom was qualitatively evaluated, and tumor motion trajectories were compared to input signals. In particular, mean absolute amplitude differences (D) and cross correlation coefficients (CC) were calculated. Furthermore, to evaluate the data sufficient condition for the new 4D-MRI technique, a comprehensive simulation study was performed using 30 cancer patients' respiratory profiles to study the relationships between data completeness (C_p) and a number of impacting factors: the number of repeated scans (N_R), number of slices (N_S), number of respiratory phase bins (N_P), N_KS, F, R, and initial respiratory phase at image acquisition (P_0). As a proof-of-concept, we implemented the proposed k-space reordering 4D-MRI technique on a T2-weighted fast spin echo MR sequence and tested it on a healthy volunteer. Results: The simulated 4D-MRI acquired from the XCAT phantom matched closely to the original XCAT images. Tumor motion trajectories measured from the simulated 4D-MRI matched well with input signals (D = 0.83 and 0.83 mm, and CC = 0.998 and 0.992 in superior–inferior and anterior–posterior directions, respectively). The relationship between C_p and N_R was found best represented by an exponential function (C_p = 100(1 − e^(−0.18 N_R))), when N_S
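The fitted completeness relation quoted above is easy to use in planning: inverting C_p = 100(1 − e^(−0.18·N_R)) gives the number of repeated scans needed for a target completeness. A small sketch (the 0.18 rate constant is the abstract's fitted value; everything else is illustrative):

```python
import math

# The fitted completeness relation from the abstract,
#   C_p = 100 * (1 - exp(-0.18 * N_R)),
# and its inverse: repeated scans needed for a target completeness.
def completeness(n_r, k=0.18):
    return 100.0 * (1.0 - math.exp(-k * n_r))

def scans_needed(c_target_pct, k=0.18):
    return math.ceil(-math.log(1.0 - c_target_pct / 100.0) / k)

print(round(completeness(10), 1))   # ~83.5 % completeness after 10 repeats
print(scans_needed(95.0))           # repeats needed for 95 % completeness
```

The exponential form means completeness saturates: each additional repeat fills a fixed fraction (about 16.5 % here) of the still-missing k-space data.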

  16. Spatial pattern corrections and sample sizes for forest density estimates of historical tree surveys

    Science.gov (United States)

    Brice B. Hanberry; Shawn Fraver; Hong S. He; Jian Yang; Dan C. Dey; Brian J. Palik

    2011-01-01

    The U.S. General Land Office land surveys document trees present during European settlement. However, use of these surveys for calculating historical forest density and other derived metrics is limited by uncertainty about the performance of plotless density estimators under a range of conditions. Therefore, we tested two plotless density estimators, developed by...
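As an illustration of the class of estimators being tested, here is a hedged sketch of one classic plotless method, the point-centred quarter method with Pollard's estimator λ = 4(4n − 1)/(π Σ r²), where r is the distance from each of n sample points to the nearest tree in each of four quadrants. The abstract does not name its two estimators, so this stands in as a representative example, evaluated on simulated completely random tree locations:

```python
import numpy as np

# Hedged sketch of a classic plotless density estimator: the point-centred
# quarter method with Pollard's estimator lambda = 4*(4n - 1) / (pi * sum(r^2)).
# Trees are simulated under complete spatial randomness; this is a generic
# illustration, not one of the paper's two tested estimators.
rng = np.random.default_rng(3)
true_density = 200.0                      # trees per unit area
side = 5.0
n_trees = rng.poisson(true_density * side * side)
trees = rng.uniform(0, side, size=(n_trees, 2))

n_pts = 100
pts = rng.uniform(1, side - 1, size=(n_pts, 2))   # inner buffer limits edge effects
r2 = []
for p in pts:
    d = trees - p
    quad = (d[:, 0] >= 0).astype(int) * 2 + (d[:, 1] >= 0).astype(int)
    dist2 = (d ** 2).sum(axis=1)
    for q in range(4):
        r2.append(dist2[quad == q].min())   # squared distance to nearest tree per quadrant

lam = 4 * (4 * n_pts - 1) / (np.pi * np.sum(r2))
print(round(lam, 1))   # should land near the true density of 200
```

Under clustered or regular (non-random) spatial patterns such estimators become biased, which is exactly the correction problem the record above addresses.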

  17. Density determination in Pino Radiata (D.Don) samples using 59.5 keV gamma radiation attenuation

    International Nuclear Information System (INIS)

    Dinator, Maria I.; Morales, Jose R.; Aliaga, Nelson; Karsulovic, Jose T.; Sanchez, Jaime; Leon, Adolfo

    1996-01-01

A non-destructive method to determine the density of wood samples is presented. The photon mass attenuation coefficient in samples of Pino radiata (D.Don) was measured at 59.5 keV with a radioactive source of Am-241. The value of 0.192 ± 0.002 cm²/g was obtained with a gamma spectroscopy system and later used in the determination of the mass density of sixteen samples of the same species. Comparison of these results with those of the gravimetric method through a linear regression showed a slope of 1.001 and a correlation factor of 0.94. (author)
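The density estimate follows from the Beer–Lambert attenuation law: with the mass attenuation coefficient μ_m known, measuring transmitted versus incident counts through a known thickness x gives ρ = ln(I₀/I)/(μ_m·x). A sketch, where μ_m = 0.192 cm²/g is the abstract's measured value and the count numbers are hypothetical:

```python
import math

# Sketch of the attenuation-based density estimate: Beer-Lambert gives
#   I = I0 * exp(-mu_m * rho * x)  =>  rho = ln(I0/I) / (mu_m * x).
# mu_m is the abstract's measured value; the measurement below is invented.
MU_M = 0.192   # cm^2/g at 59.5 keV (Am-241)

def density_g_cm3(i0, i, x_cm, mu_m=MU_M):
    """Density from incident counts i0, transmitted counts i, thickness x."""
    return math.log(i0 / i) / (mu_m * x_cm)

# Hypothetical measurement: a 2 cm thick sample attenuates 10000 -> 8200 counts.
rho = density_g_cm3(10000, 8200, 2.0)
print(round(rho, 3))  # ~0.517 g/cm^3
```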

18. A method to determine density in wood samples using attenuation of 59.5 keV gamma radiation

    International Nuclear Information System (INIS)

    Dinator, M.I.; Morales, J.R.; Aliaga, N.; Karsulovic, J.T.; Sanchez, J.; Leon, L.A.

    1996-01-01

A nondestructive method to determine the density of wood samples is presented. The photon mass attenuation coefficient in samples of Pino Radiata was measured at 59.5 keV with a radioactive source of Am-241. The value of 0.192 ± 0.002 cm²/g was obtained with a gamma spectroscopy system and later used in the determination of the mass density of sixteen samples of the same species. Comparison of these results with those of the gravimetric method through a linear regression showed a slope of 1.001 and a correlation factor of 0.94. (author)

  19. Vitality of oligozoospermic semen samples is improved by both swim-up and density gradient centrifugation before cryopreservation.

    Science.gov (United States)

    Counsel, Madeleine; Bellinge, Rhys; Burton, Peter

    2004-05-01

To ascertain whether washing sperm from oligozoospermic and normozoospermic samples before cryopreservation improves post-thaw vitality. Normozoospermic (n = 18) and oligozoospermic (n = 16) samples were divided into three aliquots. The first aliquot remained untreated and the second and third aliquots were subjected to the swim-up and discontinuous density gradient sperm washing techniques respectively. Vitality staining was performed, samples mixed with cryopreservation media and frozen. Spermatozoa were thawed, stained, and vitality quantified and expressed as the percentage of live spermatozoa present. Post-thaw vitality in untreated aliquots from normozoospermic samples (24.9% +/- 2.3; mean +/- SEM) was significantly higher (unpaired t-tests; P < 0.05) than in untreated aliquots from oligozoospermic samples. Post-thaw vitality was significantly higher after swim-up in normozoospermic samples (35.6% +/- 2.1; P < 0.05), and after washing in oligozoospermic samples (22.4% +/- 1.0; P < 0.05), relative to the corresponding untreated aliquots. Post-thaw vitality in cryopreserved oligozoospermic samples was improved by both the swim-up and density gradient centrifugation washing techniques prior to freezing.

  20. Mammographic breast density as a risk factor for breast cancer: awareness in a recently screened clinical sample.

    Science.gov (United States)

    O'Neill, Suzanne C; Leventhal, Kara Grace; Scarles, Marie; Evans, Chalanda N; Makariou, Erini; Pien, Edward; Willey, Shawna

    2014-01-01

Breast density is an established, independent risk factor for breast cancer. Despite this, density has not been included in standard risk models or routinely disclosed to patients. However, this is changing in the face of legal mandates and advocacy efforts. Little information exists regarding women's awareness of density as a risk factor, their personal risk, and risk management options. We assessed awareness of density as a risk factor and whether sociodemographic variables, breast cancer risk factors, and perceived breast cancer risk were associated with awareness in 344 women with a recent screening mammogram at a tertiary care center. Overall, 62% of women had heard about density as a risk factor and 33% had spoken to a provider about breast density. Of the sample, 18% reported that their provider indicated that they had high breast density. Awareness of density as a risk factor was greater among White women and those with other breast cancer risk factors. Our results suggest that although a growing number of women are aware of breast density as a risk factor, this awareness varies. Growing mandates for disclosure suggest the need for patient education interventions for women at increased risk for the disease and to ensure all women are equally aware of their risks. Copyright © 2014 Jacobs Institute of Women's Health. Published by Elsevier Inc. All rights reserved.

  1. Fast MR image reconstruction for partially parallel imaging with arbitrary k-space trajectories.

    Science.gov (United States)

    Ye, Xiaojing; Chen, Yunmei; Lin, Wei; Huang, Feng

    2011-03-01

    Both acquisition and reconstruction speed are crucial for magnetic resonance (MR) imaging in clinical applications. In this paper, we present a fast reconstruction algorithm for SENSE in partially parallel MR imaging with arbitrary k-space trajectories. The proposed method is a combination of variable splitting, the classical penalty technique and the optimal gradient method. Variable splitting and the penalty technique reformulate the SENSE model with sparsity regularization as an unconstrained minimization problem, which can be solved by alternating two simple minimizations: One is the total variation and wavelet based denoising that can be quickly solved by several recent numerical methods, whereas the other one involves a linear inversion which is solved by the optimal first order gradient method in our algorithm to significantly improve the performance. Comparisons with several recent parallel imaging algorithms indicate that the proposed method significantly improves the computation efficiency and achieves state-of-the-art reconstruction quality.
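The variable-splitting-plus-penalty idea can be illustrated on a toy sparse-recovery problem: introduce an auxiliary variable z coupled to x by a quadratic penalty, then alternate a soft-thresholding "denoising" step in z with a linear "inversion" step in x. The sketch below is a generic illustration of the splitting scheme on an L1-regularized least-squares problem (all sizes and weights are invented), not the paper's SENSE algorithm with total-variation and wavelet regularization:

```python
import numpy as np

# Toy illustration of variable splitting + quadratic penalty for
#   min_x 0.5*||A x - b||^2 + lam*||x||_1
# Introduce z ~ x, penalise (mu/2)*||x - z||^2, then alternate:
#   z-step: soft-thresholding (the "denoising" subproblem)
#   x-step: a linear solve (the "inversion" subproblem).
rng = np.random.default_rng(1)
n, m = 64, 32
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, -2.0, 1.5]
b = A @ x_true                       # noiseless undersampled measurements

lam, mu = 0.01, 5.0                  # sparsity weight, penalty weight
AtA, Atb = A.T @ A, A.T @ b
lhs = AtA + mu * np.eye(n)           # x-step system matrix (fixed across iterations)
x = np.zeros(n)
for _ in range(500):
    z = np.sign(x) * np.maximum(np.abs(x) - lam / mu, 0.0)   # z-step
    x = np.linalg.solve(lhs, Atb + mu * z)                   # x-step
print(np.round(x[[5, 20, 40]], 2))   # close to the true spikes 1.0, -2.0, 1.5
```

In the paper's setting the x-step is the expensive part (it involves the coil sensitivities and the k-space sampling operator), which is why it is handled with an optimal first-order gradient method rather than a direct solve.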

  2. Space charge profiles in low density polyethylene samples containing a permittivity/conductivity gradient

    DEFF Research Database (Denmark)

    Bambery, K.R.; Fleming, R.J.; Holbøll, Joachim

    2001-01-01

.5×10⁷ V m⁻¹. Current density was also measured as a function of temperature and field. Space charge due exclusively to the temperature gradient was detected, with density of order 0.01 C m⁻³. The activation energy associated with the transport of electrons through the bulk was calculated as 0.09 e...

  3. A framework for inference about carnivore density from unstructured spatial sampling of scat using detector dogs

    Science.gov (United States)

    Craig M. Thompson; J. Andrew Royle; James D. Garner

    2012-01-01

    Wildlife management often hinges upon an accurate assessment of population density. Although undeniably useful, many of the traditional approaches to density estimation such as visual counts, livetrapping, or mark–recapture suffer from a suite of methodological and analytical weaknesses. Rare, secretive, or highly mobile species exacerbate these problems through the...

  4. Optimization of Region-of-Interest Sampling Strategies for Hepatic MRI Proton Density Fat Fraction Quantification

    Science.gov (United States)

    Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z.; Schlein, Alexandra N.; Hooker, Jonathan C.; Dehkordy, Soudabeh Fazeli; Hamilton, Gavin; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.

    2017-01-01

BACKGROUND Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. PURPOSE To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. STUDY TYPE Retrospective secondary analysis of prospectively acquired clinical research data. POPULATION A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. FIELD STRENGTH/SEQUENCE Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. ASSESSMENT An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. STATISTICAL TESTING Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland–Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland–Altman analyses. RESULTS The study population's mean whole-liver PDFF was 10.1±8.9% (range: 1.1–44.1%). Although there was no significant difference in average segmental (P=0.452) or lobar (P=0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995, and 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5%. Strategies with fewer ROIs could still reach ICC >0.995, but only 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. DATA CONCLUSION Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of

  5. Optimization of region-of-interest sampling strategies for hepatic MRI proton density fat fraction quantification.

    Science.gov (United States)

    Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z; Schlein, Alexandra N; Hooker, Jonathan C; Fazeli Dehkordy, Soudabeh; Hamilton, Gavin; Reeder, Scott B; Loomba, Rohit; Sirlin, Claude B

    2018-04-01

Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. Retrospective secondary analysis of prospectively acquired clinical research data. A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland-Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland-Altman analyses. The study population's mean whole-liver PDFF was 10.1 ± 8.9% (range: 1.1-44.1%). Although there was no significant difference in average segmental (P = 0.452) or lobar (P = 0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995, and 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5%. Strategies with fewer ROIs could still reach ICC >0.995, but only 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:988-994. © 2017 International Society for Magnetic Resonance
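The subset-agreement analysis lends itself to a small simulation: generate synthetic segmental PDFF values for a cohort, then compute Bland-Altman 95% limits of agreement between each candidate ROI-subset mean and the nine-ROI mean. The data below are synthetic and the per-segment scatter is an assumption; only the cohort size and the 1.5-percentage-point LOA criterion are taken from the abstract:

```python
import numpy as np
from itertools import combinations

# Sketch of the subset-agreement analysis on synthetic data: simulate
# per-segment PDFF, then check each 4-ROI subset's Bland-Altman 95% limits
# of agreement against the nine-ROI mean. Scatter model is an assumption.
rng = np.random.default_rng(7)
n_pat, n_seg = 391, 9
subject_pdff = rng.gamma(2.0, 5.0, size=(n_pat, 1))            # whole-liver PDFF, %
seg_pdff = subject_pdff + rng.normal(0, 1.0, (n_pat, n_seg))   # segmental scatter

nine = seg_pdff.mean(axis=1)
subsets = list(combinations(range(n_seg), 4))   # all 126 four-ROI strategies
ok = 0
for s in subsets:
    diff = seg_pdff[:, list(s)].mean(axis=1) - nine
    loa_half_width = 1.96 * diff.std(ddof=1)    # Bland-Altman 95% LOA half-width
    if loa_half_width < 1.5:                    # criterion from the abstract
        ok += 1
print(len(subsets), ok)   # how many 4-ROI strategies meet the LOA criterion
```

With the mild, spatially uniform scatter assumed here every subset passes; the clinical data are harder because segmental PDFF differs systematically between the left and right lobes, which is why the recommended strategies place ROIs in both.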

  6. Time-resolved 3D pulmonary perfusion MRI: comparison of different k-space acquisition strategies at 1.5 and 3 T.

    Science.gov (United States)

    Attenberger, Ulrike I; Ingrisch, Michael; Dietrich, Olaf; Herrmann, Karin; Nikolaou, Konstantin; Reiser, Maximilian F; Schönberg, Stefan O; Fink, Christian

    2009-09-01

Time-resolved pulmonary perfusion MRI requires both high temporal and spatial resolution, which can be achieved by using several nonconventional k-space acquisition techniques. The aim of this study is to compare the image quality of time-resolved 3D pulmonary perfusion MRI with different k-space acquisition techniques in healthy volunteers at 1.5 and 3 T. Ten healthy volunteers underwent contrast-enhanced time-resolved 3D pulmonary MRI at 1.5 and 3 T using the following k-space acquisition techniques: (a) generalized autocalibrating partial parallel acquisition (GRAPPA) with an internal acquisition of reference lines (IRS), (b) GRAPPA with a single "external" acquisition of reference lines (ERS) before the measurement, and (c) a combination of GRAPPA with an internal acquisition of reference lines and view sharing (VS). The spatial resolution was kept constant at both field strengths to exclusively evaluate the influences of the temporal resolution achieved with the different k-space sampling techniques on image quality. The temporal resolutions were 2.11 seconds IRS, 1.31 seconds ERS, and 1.07 seconds VS at 1.5 T, and 2.04 seconds IRS, 1.30 seconds ERS, and 1.19 seconds VS at 3 T. Image quality was rated by 2 independent radiologists with regard to signal intensity, perfusion homogeneity, artifacts (eg, wrap around, noise), and visualization of pulmonary vessels using a 3-point scale (1 = nondiagnostic, 2 = moderate, 3 = good). Furthermore, the signal-to-noise ratio in the lungs was assessed. At 1.5 T the lowest image quality (sum score: 154) was observed for the ERS technique and the highest quality for the VS technique (sum score: 201). In contrast, at 3 T images acquired with VS were hampered by strong artifacts and image quality was rated significantly inferior (sum score: 137) compared with IRS (sum score: 180) and ERS (sum score: 174). Comparing 1.5 and 3 T, in particular the overall rating of the IRS technique (sum score: 180) was very similar at both field

  7. Influence of sample preparation on the transformation of low-density to high-density amorphous ice: An explanation based on the potential energy landscape

    Science.gov (United States)

    Giovambattista, Nicolas; Starr, Francis W.; Poole, Peter H.

    2017-07-01

    Experiments and computer simulations of the transformations of amorphous ices display different behaviors depending on sample preparation methods and on the rates of change of temperature and pressure to which samples are subjected. In addition to these factors, simulation results also depend strongly on the chosen water model. Using computer simulations of the ST2 water model, we study how the sharpness of the compression-induced transition from low-density amorphous ice (LDA) to high-density amorphous ice (HDA) is influenced by the preparation of LDA. By studying LDA samples prepared using widely different procedures, we find that the sharpness of the LDA-to-HDA transformation is correlated with the depth of the initial LDA sample in the potential energy landscape (PEL), as characterized by the inherent structure energy. Our results show that the complex phenomenology of the amorphous ices reported in experiments and computer simulations can be understood and predicted in a unified way from knowledge of the PEL of the system.

  8. Topological variability and sex differences in fingerprint ridge density in a sample of the Sudanese population.

    Science.gov (United States)

    Ahmed, Altayeb Abdalla; Osman, Samah

    2016-08-01

    Fingerprints are important biometric variables that show manifold utilities in human biology, human morphology, anthropology, and genetics. Their role in forensics as a legally admissible tool of identification is well recognized and is based on their stability following full development, individualistic characteristics, easy classification of their patterns, and uniqueness. Nevertheless, fingerprint ridge density and its variability have not been previously studied in the Sudanese population. Hence, this study was conducted to analyze the topological variability in epidermal ridge density and to assess the possibility of its application in determining the sex of Sudanese Arabs. The data used for this study were prints of all 10 fingers of 200 Sudanese Arab individuals (100 men and 100 women) aged between 18 and 28 years. Fingerprint ridge density was assessed for three different areas (radial, ulnar and proximal) for all 10 fingers of each subject. Significant variability was found between the areas. Prints recovered from crime scenes can be useful to determine the sex of Sudanese individuals based on fingerprint ridge density; furthermore, ridge density can be considered a morphological trait for individual variation in forensic anthropology. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  9. Prediction of flexible/rigid regions from protein sequences using k-spaced amino acid pairs

    Directory of Open Access Journals (Sweden)

    Ruan Jishou

    2007-04-01

    Full Text Available Abstract Background Traditionally, it is believed that the native structure of a protein corresponds to a global minimum of its free energy. However, with the growing number of known tertiary (3D) protein structures, researchers have discovered that some proteins can alter their structures in response to a change in their surroundings or with the help of other proteins or ligands. Such structural shifts play a crucial role with respect to the protein function. To this end, we propose a machine learning method for the prediction of the flexible/rigid regions of proteins (referred to as FlexRP); the method is based on a novel sequence representation and feature selection. Knowledge of the flexible/rigid regions may provide insights into the protein folding process and 3D structure prediction. Results The flexible/rigid regions were defined based on a dataset that includes protein sequences with multiple experimental structures and that was previously used to study the structural conservation of proteins. Sequences drawn from this dataset were represented based on feature sets proposed in prior research, such as PSI-BLAST profiles, composition vector and binary sequence encoding, and a newly proposed representation based on frequencies of k-spaced amino acid pairs. These representations were processed by feature selection to reduce the dimensionality. Several machine learning methods for the prediction of flexible/rigid regions and two recently proposed methods for the prediction of conformational changes and unstructured regions were compared with the proposed method. The FlexRP method, which applies Logistic Regression and collocation-based representation with 95 features, obtained 79.5% accuracy. The two runner-up methods, which apply the same sequence representation and Support Vector Machines (SVM) and Naïve Bayes classifiers, obtained 79.2% and 78.4% accuracy, respectively. The remaining considered methods are

  10. Estimating black bear density in New Mexico using noninvasive genetic sampling coupled with spatially explicit capture-recapture methods

    Science.gov (United States)

    Gould, Matthew J.; Cain, James W.; Roemer, Gary W.; Gould, William R.

    2016-01-01

    During the 2004–2005 to 2015–2016 hunting seasons, the New Mexico Department of Game and Fish (NMDGF) estimated black bear (Ursus americanus) abundance across the state by coupling density estimates with the distribution of primary habitat generated by Costello et al. (2001). These estimates have been used to set harvest limits. For example, a density of 17 bears/100 km2 for the Sangre de Cristo and Sacramento Mountains and 13.2 bears/100 km2 for the Sandia Mountains were used to set harvest levels. The advancement and widespread acceptance of noninvasive sampling and mark-recapture methods prompted the NMDGF to collaborate with the New Mexico Cooperative Fish and Wildlife Research Unit and New Mexico State University to update their density estimates for black bear populations in select mountain ranges across the state. We established 5 study areas in 3 mountain ranges: the northern (NSC; sampled in 2012) and southern Sangre de Cristo Mountains (SSC; sampled in 2013), the Sandia Mountains (Sandias; sampled in 2014), and the northern (NSacs) and southern Sacramento Mountains (SSacs; both sampled in 2014). We collected hair samples from black bears using two concurrent noninvasive sampling methods, hair traps and bear rubs. We used a gender marker and a suite of microsatellite loci to determine the individual identification of hair samples that were suitable for genetic analysis. We used these data to generate mark-recapture encounter histories for each bear and estimated density in a spatially explicit capture-recapture (SECR) framework. We constructed a suite of SECR candidate models using sex, elevation, land cover type, and time to model heterogeneity in detection probability and the spatial scale over which detection probability declines. We used Akaike's Information Criterion corrected for small sample size (AICc) to rank and select the most supported model, from which we estimated density. We set 554 hair traps, 117 bear rubs and collected 4,083 hair
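The AICc model-ranking step described above follows a standard formula. A minimal sketch of that ranking step; the model names, log-likelihoods, parameter counts, and sample size below are hypothetical illustrations, not values from the study:

```python
def aicc(log_likelihood: float, k: int, n: int) -> float:
    """Akaike's Information Criterion corrected for small sample size:
    AICc = 2k - 2*lnL + 2k(k+1)/(n - k - 1)."""
    return 2 * k - 2 * log_likelihood + (2 * k * (k + 1)) / (n - k - 1)

# Hypothetical SECR candidate models fit to the same n = 61 detected bears.
models = {
    "sex + elevation": aicc(-412.3, k=5, n=61),
    "null model": aicc(-420.9, k=3, n=61),
}
best = min(models, key=models.get)  # lowest AICc is the most supported model
```

The small-sample correction term shrinks toward zero as n grows, so AICc converges to ordinary AIC for large datasets.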

  11. Mesquite seed density in fecal samples of Raramuri Criollo vs. Angus x Hereford cows grazing Chihuahuan Desert Rangeland

    Science.gov (United States)

    This study was part of a larger project investigating breed-related differences in feeding habits of Raramuri Criollo (RC) versus Angus x Hereford (AH) cows. Seed densities in fecal samples collected in July and August 2015 were analyzed to compare presumed mesquite bean consumption of RC and AH cow...

  12. Effects of social organization, trap arrangement and density, sampling scale, and population density on bias in population size estimation using some common mark-recapture estimators.

    Directory of Open Access Journals (Sweden)

    Manan Gupta

    Full Text Available Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. 
Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates.
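The capture-history simulation idea can be sketched in miniature. The snippet below uses a simple two-occasion Chapman (bias-corrected Lincoln-Petersen) estimator rather than the POPAN or Robust Design models of the study, and the population size, capture probability, and group size are illustrative assumptions; whole groups being caught together stands in for fission-fusion sociality:

```python
import random

def chapman(n1: int, n2: int, m: int) -> float:
    """Chapman's bias-corrected Lincoln-Petersen estimate of population
    size from two capture occasions (n1, n2 caught; m caught on both)."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

def two_occasion_estimate(pop=500, group_size=5, p=0.3, seed=1):
    """Simulate two capture occasions in which whole groups are captured
    together, mimicking social species; groups are independent units."""
    rng = random.Random(seed)
    groups = [range(i, min(i + group_size, pop)) for i in range(0, pop, group_size)]
    def occasion():
        caught = set()
        for g in groups:
            if rng.random() < p:  # the whole group enters the trap together
                caught.update(g)
        return caught
    s1, s2 = occasion(), occasion()
    return chapman(len(s1), len(s2), len(s1 & s2))

estimate = two_occasion_estimate()
```

Re-running the simulation across many seeds and comparing the estimates with the true `pop` is the basic pattern for measuring bias, which the study does at much larger scale with more realistic estimators.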

  13. The determination of the bulk density of irradiated samples using a mercury pyknometer

    International Nuclear Information System (INIS)

    Keep, R.H.; Perks, J.M.

    1980-05-01

    A method for determining the bulk density of fragmented UO2 specimens in the mass range 1 to 10 g by mercury pyknometry has been developed. The factor limiting the accuracy of the technique in this application is the consistency with which the pyknometer can be filled with mercury; this is dependent on the vacuum obtained in the pyknometer prior to filling. It has been found that this method can be used to determine the density of fragments of UO2 in the mass range specified to an accuracy of better than ±0.2% (1σ). (author)
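Mercury pyknometry recovers bulk density from the mercury mass displaced by the sample. A minimal sketch of the arithmetic, in which the mercury density constant and the weighing scheme are assumptions of this sketch, not details taken from the paper:

```python
RHO_HG = 13.5336  # g/ml, density of mercury near 20 °C (assumed value)

def bulk_density(m_sample: float, m_hg_alone: float,
                 m_hg_with_sample: float) -> float:
    """Bulk density of a fragment: its volume equals the mercury mass it
    displaces (fill mass without vs. with the sample) over mercury's density."""
    displaced = m_hg_alone - m_hg_with_sample  # grams of mercury displaced
    return m_sample / (displaced / RHO_HG)
```

A 5 g fragment displacing about 6.44 g of mercury yields roughly 10.5 g/ml; the paper's ±0.2% accuracy hinges on how reproducibly the pyknometer fills under vacuum, which this arithmetic takes as given.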

  14. The impact of residential density on vehicle usage and fuel consumption: Evidence from national samples

    DEFF Research Database (Denmark)

    Kim, Jinwon; Brownstone, David

    2013-01-01

    This paper investigates the impact of residential density on household vehicle usage and fuel consumption. We estimate a simultaneous equations system to account for the potential residential self-selection problem. While most previous studies focus on a specific region, this paper uses national...

  15. Macular pigment optical density in the elderly: findings in a large biracial Midsouth population sample

    NARCIS (Netherlands)

    Iannaccone, Alessandro; Mura, Marco; Gallaher, Kevin T.; Johnson, Elizabeth J.; Todd, William Andrew; Kenyon, Emily; Harris, Tarsha L.; Harris, Tamara; Satterfield, Suzanne; Johnson, Karen C.; Kritchevsky, Stephen B.

    2007-01-01

    PURPOSE: To report the macular pigment optical density (MPOD) findings at 0.5 degrees of eccentricity from the fovea in elderly subjects participating in ARMA, a study of aging and age-related maculopathy (ARM) ancillary to the Health, Aging, and Body Composition (Health ABC) Study. METHODS: MPOD

  16. Direct sampling during multiple sediment density flows reveals dynamic sediment transport and depositional environment in Monterey submarine canyon

    Science.gov (United States)

    Maier, K. L.; Gales, J. A.; Paull, C. K.; Gwiazda, R.; Rosenberger, K. J.; McGann, M.; Lundsten, E. M.; Anderson, K.; Talling, P.; Xu, J.; Parsons, D. R.; Barry, J.; Simmons, S.; Clare, M. A.; Carvajal, C.; Wolfson-Schwehr, M.; Sumner, E.; Cartigny, M.

    2017-12-01

    Sediment density flows were directly sampled with a coupled sediment trap-ADCP-instrument mooring array to evaluate the character and frequency of turbidity current events through Monterey Canyon, offshore California. This novel experiment aimed to provide links between globally significant sediment density flow processes and their resulting deposits. Eight to ten Anderson sediment traps were repeatedly deployed at 10 to 300 meters above the seafloor on six moorings anchored at 290 to 1850 meters water depth in the Monterey Canyon axial channel during 6-month deployments (October 2015 - April 2017). Anderson sediment traps include a funnel and intervalometer (discs released at set time intervals) above a meter-long tube, which preserves fine-scale stratigraphy and chronology. Photographs, multi-sensor logs, CT scans, and grain size analyses reveal layers from multiple sediment density flow events that carried sediment ranging from fine sand to granules. More sediment accumulation from sediment density flows, and from between flows, occurred in the upper canyon (300-800 m water depth) compared to the lower canyon (1300-1850 m water depth). Sediment accumulated in the traps during sediment density flows is sandy and becomes finer down-canyon. In the lower canyon, where sediments directly sampled from density flows are clearly distinguished within the trap tubes, sands have sharp basal contacts, normal grading, and muddy tops that exhibit late-stage pulses. In at least two of the sediment density flows, the simultaneous low velocity and high backscatter measured by the ADCPs suggest that the trap only captured the collapsing end of a sediment density flow event.
In the upper canyon, accumulation between sediment density flow events is twice as fast compared to the lower canyon; it is characterized by sub-cm-scale layers in muddy sediment that appear to have accumulated with daily to sub-daily frequency, likely related to known internal tidal dynamics also measured

  17. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    Science.gov (United States)

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and greater height threshold were required to obtain accurate corn LAI estimation compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
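Point-density experiments of this kind rest on randomly thinning the original cloud to lower nominal densities and re-filtering returns by a height threshold. A sketch under assumed values: only the 7.32 points/m2 figure comes from the abstract, while the plot area, coordinates, target density, and threshold are illustrative:

```python
import random

def thin_cloud(points, original_density, target_density, seed=42):
    """Randomly retain points so the nominal density (points/m^2) drops
    from original_density to target_density."""
    keep = target_density / original_density
    rng = random.Random(seed)
    return [p for p in points if rng.random() < keep]

def above_threshold(points, height_threshold):
    """Keep only returns above a height threshold (z in metres),
    separating canopy returns from ground returns."""
    return [p for p in points if p[2] > height_threshold]

# 7320 synthetic (x, y, z) returns over a nominal 1000 m^2 plot (7.32 pts/m^2);
# every third return is a ground return at z = 0.
cloud = [(i * 0.1, i * 0.05, 1.5 if i % 3 else 0.0) for i in range(7320)]
thinned = thin_cloud(cloud, 7.32, 1.0)   # thin to ~1 point/m^2
canopy = above_threshold(thinned, 0.3)   # drop ground returns
```

Metrics such as mean canopy height would then be recomputed from `canopy` at each density level to see how accuracy degrades.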

  18. Density-viscosity product of small-volume ionic liquid samples using quartz crystal impedance analysis.

    Science.gov (United States)

    McHale, Glen; Hardacre, Chris; Ge, Rile; Doy, Nicola; Allen, Ray W K; MacInnes, Jordan M; Bown, Mark R; Newton, Michael I

    2008-08-01

    Quartz crystal impedance analysis has been developed as a technique to assess whether room-temperature ionic liquids are Newtonian fluids and as a small-volume method for determining the values of their viscosity-density product, ρη. Changes in the impedance spectrum of a 5-MHz fundamental frequency quartz crystal induced by a water-miscible room-temperature ionic liquid, 1-butyl-3-methylimidazolium trifluoromethylsulfonate ([C4mim][OTf]), were measured. From coupled frequency shift and bandwidth changes as the concentration was varied from 0 to 100% ionic liquid, it was determined that this liquid provided a Newtonian response. A second, water-immiscible ionic liquid, 1-butyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide ([C4mim][NTf2]), with concentration varied using methanol, was tested and also found to provide a Newtonian response. In both cases, the values of the square root of the viscosity-density product deduced from the small-volume quartz crystal technique were consistent with those measured using a viscometer and density meter. The third harmonic of the crystal was found to provide the closest agreement between the two measurement methods; the pure ionic liquids had the largest difference, of approximately 10%. In addition, 18 pure ionic liquids were tested, and for 11 of these, good-quality frequency shift and bandwidth data were obtained; all of these had a Newtonian response. The frequency shift of the third harmonic was found to vary linearly with the square root of the viscosity-density product of the pure ionic liquids up to a value of √(ρη) ≈ 18 kg m⁻² s⁻¹/², but with a slope 10% smaller than that predicted by the Kanazawa and Gordon equation. It is envisaged that the quartz crystal technique could be used in a high-throughput microfluidic system for characterizing ionic liquids.
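The Kanazawa and Gordon equation mentioned above relates the QCM frequency shift to √(ρη). A sketch of the prediction it makes; the quartz material constants are standard textbook values, and the √n scaling used for the harmonic is an assumption of this sketch rather than a detail from the paper:

```python
import math

MU_Q = 2.947e10   # Pa, shear modulus of AT-cut quartz (textbook value)
RHO_Q = 2648.0    # kg/m^3, density of quartz (textbook value)

def kanazawa_shift(f0: float, rho_eta: float, n: int = 1) -> float:
    """Kanazawa-Gordon frequency shift (Hz, negative) of a QCM in contact
    with a Newtonian liquid whose density-viscosity product is rho_eta
    (kg^2 m^-4 s^-1); n is the harmonic number (sqrt(n) scaling assumed)."""
    return -math.sqrt(n) * f0 ** 1.5 * math.sqrt(rho_eta / (math.pi * MU_Q * RHO_Q))

# An ionic liquid at the reported linearity limit sqrt(rho*eta) ~ 18 kg m^-2 s^-1/2,
# observed on the third harmonic of a 5-MHz crystal:
shift = kanazawa_shift(5e6, 18.0 ** 2, n=3)
```

The square-root dependence is why the paper reports √(ρη) rather than ρη itself: the measured shift scales linearly in that quantity.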

  19. Molecular dynamics equation designed for realizing arbitrary density: Application to sampling method utilizing the Tsallis generalized distribution

    International Nuclear Information System (INIS)

    Fukuda, Ikuo; Nakamura, Haruki

    2010-01-01

    Several molecular dynamics techniques applying the Tsallis generalized distribution are presented. We have developed a deterministic dynamics to generate an arbitrary smooth density function ρ. It creates a measure-preserving flow with respect to the measure ρdω and realizes the density ρ under the assumption of ergodicity. It can thus be used to investigate physical systems that obey such a distribution density. Using this technique, the Tsallis distribution density based on a full energy function form, along with the Tsallis index q ≥ 1, can be created. Because the effective support of the Tsallis distribution in phase space is broad compared with that of the conventional Boltzmann-Gibbs (BG) distribution, and because the corresponding energy-surface deformation does not change energy minimum points, the dynamics enhances physical state sampling, in particular for a rugged energy surface spanned by a complicated system. Another feature of the Tsallis distribution is that it provides a higher degree of nonlinearity than the BG distribution in the deterministic dynamics equation, which is very useful for effectively attaining the ergodicity of the dynamical system constructed according to the scheme. Combining such methods with the reconstruction technique of the BG distribution, we can obtain information consistent with the BG ensemble and create the corresponding free energy surface. We demonstrate several sampling results obtained from systems typical of benchmark tests in MD and from biomolecular systems.
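The broader effective support of the Tsallis distribution for q > 1, which the abstract credits for the enhanced sampling, is easy to see numerically. A sketch of the q-exponential statistical weight; β and the barrier energy below are illustrative values, not parameters from the paper:

```python
import math

def tsallis_weight(energy: float, q: float, beta: float = 1.0) -> float:
    """Tsallis (q-exponential) statistical weight; reduces to the
    Boltzmann-Gibbs factor exp(-beta*E) in the limit q -> 1."""
    if q == 1.0:
        return math.exp(-beta * energy)
    base = 1.0 - (1.0 - q) * beta * energy
    # For q > 1 the weight decays only as a power law in the energy, so
    # high-energy (barrier) regions retain far more weight than under BG.
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

barrier = 10.0  # a high-energy barrier, in units of 1/beta
bg = tsallis_weight(barrier, 1.0)   # exponentially suppressed
ts = tsallis_weight(barrier, 1.2)   # power-law suppressed: much larger
```

At this barrier the q = 1.2 weight exceeds the Boltzmann-Gibbs weight by roughly two orders of magnitude, which is the mechanism behind crossing rugged energy surfaces more readily.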

  20. Apparent density measurement by mercury pycnometry. Improved accuracy. Simplification of handling for possible application to irradiated samples

    International Nuclear Information System (INIS)

    Marlet, Bernard

    1978-12-01

    The accuracy of the apparent density measurement on massive samples of any geometrical shape has been improved and the method simplified. A standard deviation of ±1 to 5×10⁻³ g·ml⁻¹, according to the size and surface state of the sample, was obtained by the use of a flat ground stopper on a mercury pycnometer which fills itself under vacuum. This method saves considerable time and has been adapted to work in shielded cells for the measurement of radioactive materials, especially sintered uranium dioxide leaving the pile. The different parameters are analysed and criticized. [fr]

  1. Efficacy of passive hair-traps for the genetic sampling of a low-density badger population

    Directory of Open Access Journals (Sweden)

    Alessandro Balestrieri

    2011-02-01

    Full Text Available

    A hair-trapping survey was carried out in the western River Po plain (NW Italy). We aimed to test whether barbed-wire hair snares in combination with DNA profiling might represent an effective tool to study a low-density badger population. Traps were placed above the entrances of twelve badger setts between 15 February and 30 April 2010. Trapping effort was expressed as the number of trap-nights required to pluck a hair sample, and the trend in the number of genotyped individuals over time was analysed by regression analysis. Forty-three hair samples were collected, with an overall trapping effort of 54.8 trap-nights per hair sample. Twenty-eight samples yielded reliable genotypes, allowing the identification of nine individual badgers. The length of the storage period (1-3 months) before DNA extraction did not seem to affect genotyping success. According to the regression model, the trapping effort allowed sampling of 75% of the overall population. Our results suggest that the efficacy of passive devices is affected by population density.

  2. A power-driven increment borer for sampling high-density tropical wood

    Czech Academy of Sciences Publication Activity Database

    Krottenthaler, S.; Pitsch, P.; Helle, G.; Locosselli, G. M.; Ceccantini, G.; Altman, Jan; Svoboda, M.; Doležal, Jiří; Schleser, G.; Anhuf, D.

    2015-01-01

    Roč. 36, November (2015), s. 40-44 ISSN 1125-7865 R&D Projects: GA ČR GAP504/12/1952; GA ČR(CZ) GA14-12262S Institutional support: RVO:67985939 Keywords : tropical dendrochronology * tree sampling methods * increment cores Subject RIV: EF - Botanics Impact factor: 2.107, year: 2015

  3. Effect of sample geometry on bulk relative density of hot-mix asphalt mixes

    CSIR Research Space (South Africa)

    Anochie-Boateng, Joseph

    2011-09-01

    Full Text Available with different number of cut/cored surfaces. Significant variations in voids were observed in the HMA core and beam samples from the same compacted slabs. The objective of this paper is to present the findings of the effect of specimen geometry and cut surfaces...

  4. Sampling strategies for subsampled segmented EPI PRF thermometry in MR guided high intensity focused ultrasound

    Science.gov (United States)

    Odéen, Henrik; Todd, Nick; Diakite, Mahamadou; Minalga, Emilee; Payne, Allison; Parker, Dennis L.

    2014-01-01

    Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes
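A variable-density pattern like the ones compared above can be generated by letting the sampling probability fall off with distance from the k-space centre. A minimal sketch for a 2D (ky-kz) Cartesian mask; the acceleration factor, density power, and grid size are illustrative, and this is not the authors' exact scheme:

```python
import math
import random

def vd_mask(ny, nz, accel=4.0, power=2.0, seed=0):
    """2D variable-density subsampling mask for the ky-kz phase-encode plane.
    Sampling probability decays with normalized distance from the k-space
    centre, scaled so that on average 1/accel of the points are acquired."""
    cy, cz = (ny - 1) / 2.0, (nz - 1) / 2.0
    rmax = math.hypot(cy, cz)
    weight = [[(1.0 - math.hypot(y - cy, z - cz) / rmax) ** power
               for z in range(nz)] for y in range(ny)]
    mean_w = sum(map(sum, weight)) / (ny * nz)
    scale = (1.0 / accel) / mean_w  # calibrate the average sampling rate
    rng = random.Random(seed)
    return [[rng.random() < min(1.0, weight[y][z] * scale)
             for z in range(nz)] for y in range(ny)]

mask = vd_mask(64, 64, accel=4.0)  # ~25% of phase encodes acquired
```

Raising `power` concentrates more of the acquisition budget near the densely sampled k-space centre, mirroring the zero-, one-, and two-dimensional density variation the study compares; the retained samples would then feed a constrained reconstruction such as the TCR algorithm named in the abstract.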

  5. Investigation of type and density of bio-aerosols in air samples from educational hospital wards of Kerman city, 2014

    Directory of Open Access Journals (Sweden)

    Mohammad Malakootian

    2016-10-01

    Full Text Available Background: Bio-aerosols in the air of hospital wards have an important role in the development of infections. It is important to make quantitative and qualitative estimations of microorganisms in the air of these wards as an index for environmental hygiene applicable to different hospital wards. The aim of the study was to investigate the diversity and density of bio-aerosols in the educational hospitals of Kerman city. Methods: This study applied a descriptive cross-sectional methodology in the second half of 2014 in the educational hospitals of Kerman city, with bed capacities of over 300. As many as 200 samples were collected from the air in different wards of each hospital using the standard method of the National Institute for Occupational Safety and Health. Following collection, samples were placed in an incubator for 48 hours, bio-aerosols were then identified, and the resulting data were reported as colonies/m3. Results: Results indicated that the maximum and minimum degrees of bacterial density were observed in the operating rooms and in the intensive care unit (ICU) of Shafa hospital. Furthermore, comparison showed that the operating room at Afzalipour hospital had the lowest level of fungal contamination, while the ICU at Bahonar hospital had the highest level of fungal contamination. The fungi Aspergillus and Penicillium, along with the bacteria staphylococci and Acinetobacter, had the greatest frequencies. The means of bacterial and fungal density were not equal across the studied hospitals, and a statistically significant difference was observed between means of bacterial and fungal density (P ≤ 0.001). Conclusion: Bacterial and fungal densities were greater than the values proposed by the American Conference of Governmental Industrial Hygienists in 73.3% of the wards of the educational hospitals of Kerman city sampled in this study. Therefore, it is suggested that implementation of some necessary measures for continuous monitoring, promotion of

  6. Gamma-ray attenuation technique for determining density and water content of wood samples

    International Nuclear Information System (INIS)

    Ferraz, E.S.B.; Aguiar, O.

    1985-01-01

    The theoretical aspects of the application of the Beer-Lambert law are discussed, with emphasis on the maximum theoretical error expected. A series of measurements of moisture content within the range of 8 g/cm³ to 30 g/cm³ were made on samples of Pinus oocarpa by gamma-ray (241Am) attenuation methods and by the conventional gravimetric method. The relative deviations (experimental errors) found in the determinations made by these two methods are compared with the calculated theoretical errors, showing the viability of the gamma-ray method. [pt]
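The Beer-Lambert law underlying the technique can be inverted directly for density. A minimal sketch; the mass attenuation coefficient is an assumed round figure for light organic material near the 241Am photon energy (~60 keV), not a value taken from the paper:

```python
import math

MU_MASS = 0.20  # cm^2/g, assumed mass attenuation coefficient at ~60 keV

def density_from_attenuation(i0: float, i: float, thickness_cm: float,
                             mu: float = MU_MASS) -> float:
    """Invert Beer-Lambert, I = I0 * exp(-mu * rho * x), for the density rho
    from incident (i0) and transmitted (i) count rates through thickness x."""
    return math.log(i0 / i) / (mu * thickness_cm)

# Round trip: a 4 cm sample of density 0.5 g/cm^3 attenuates the beam
# to exp(-0.4) of its incident intensity.
transmitted = 1000.0 * math.exp(-MU_MASS * 0.5 * 4.0)
rho = density_from_attenuation(1000.0, transmitted, 4.0)
```

Because counting statistics enter through the logarithm, the relative error in `rho` grows when the transmitted intensity approaches either the incident intensity (thin, light samples) or zero (thick, dense ones), which is the trade-off behind the paper's theoretical error analysis.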

  7. Measurement of density distribution of a cracking catalyst in experimental riser with a sampling procedure for gamma ray tomography

    International Nuclear Information System (INIS)

    Dantas, C.C.; Melo, S.B.; Oliveira, E.F.; Simoes, F.P.M.; Santos, M.G. dos; Santos, V.A. dos

    2008-01-01

    By scanning a riser, the number of gamma-ray trajectories and the beam width involve temporal, spatial and density resolutions as closely correlated parameters. Therefore, evaluation of these parameters and quantification of their interaction are required in the imaging process. By measuring the density distribution of the catalyst from the FCC (fluid catalytic cracking) process in an experimental riser with a single-beam tomographic system, density resolution is evaluated and correlated with spatial resolution. The beam width Δs inside the riser is measured and a criterion for determining spatial resolution is proposed. Experiments are carried out to demonstrate the resolution effects of three Δs values: 3.30 × 10⁻³, 6.20 × 10⁻³ and 12.00 × 10⁻³ m. The gamma beam profile is modeled and a sampling rate according to the Nyquist criterion is analyzed. The ratios of Δs to the internal riser radius R (4.3%, 8.1% and 15.6%) are correlated to counting time in the sampling procedure. Results are discussed by comparison with values from the literature.

  8. Critical current density improvements in MgB2 superconducting bulk samples by K2CO3 additions  

    DEFF Research Database (Denmark)

    Grivel, J.-C.

    2018-01-01

    MgB2 bulk samples with potassium carbonate doping were made by means of a reaction of elemental Mg and B powders mixed with various amounts of K2CO3. The Tc of the superconducting phase as well as its a-axis parameter were decreased as a result of carbon doping. Potassium escaped the samples during the reaction. The critical current density of MgB2 was improved both in self field and under applied magnetic field for T ≤ 30 K, with optimum results for 1 mol% K2CO3 addition. The normalized flux pinning force (f(b)) shows that the flux pinning mechanism at low field is similar for all samples, following...

  9. Sex differences in fingerprint ridge density in a Turkish young adult population: a sample of Baskent University.

    Science.gov (United States)

    Oktem, Hale; Kurkcuoglu, Ayla; Pelin, Ismail Can; Yazici, Ayse Canan; Aktaş, Gulnihal; Altunay, Fikret

    2015-05-01

    Fingerprints are considered to be one of the most reliable methods of identification, and identification of an individual plays a vital part in any medico-legal investigation. Dermatoglyphics is a branch of science that studies epidermal ridges and ridge patterns. Epidermal ridges are polygenic characteristics that form in utero at 10-18 weeks and are considered fully developed by the sixth month of fetal growth. Fingerprints are permanent morphological characteristics, and criminal detection based on fingerprints rests on the principle that no two people can have identical fingerprints. Sex determination from fingerprints has been examined in different populations. In this study, we aimed to study fingerprint ridge density in a Turkish population sample of Baskent University students. Fingerprints were obtained from 118 women and 88 men, a total of 206 students aged between 17 and 28 years, by means of a simple inking method. Fingerprints from all fingers of the right and left hands were collected, in three different areas of each. The ridges were counted diagonally on squares measuring 5 mm × 5 mm in the radial, ulnar and inferior areas. The fingerprint ridge density in the radial, ulnar and inferior areas and between sexes was compared statistically using the Mann-Whitney U test and the Friedman test. The ridge density was significantly greater in women in every region studied and in all fingers when compared to men. The fingerprint ridge density in the ulnar and radial areas was significantly greater than in the inferior area. Fingerprint ridge density can be used in medico-legal examination for sex identification. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  10. Quantitative analysis of low-density SNP data for parentage assignment and estimation of family contributions to pooled samples.

    Science.gov (United States)

    Henshall, John M; Dierens, Leanne; Sellars, Melony J

    2014-09-02

    While much attention has focused on the development of high-density single nucleotide polymorphism (SNP) assays, the costs of developing and running low-density assays have fallen dramatically. This makes it feasible to develop and apply SNP assays for agricultural species beyond the major livestock species. Although low-cost low-density assays may not have the accuracy of the high-density assays widely used in human and livestock species, we show that when combined with statistical analysis approaches that use quantitative instead of discrete genotypes, their utility may be improved. The data used in this study are from a 63-SNP marker Sequenom® iPLEX Platinum panel for the Black Tiger shrimp, for which high-density SNP assays are not currently available. For quantitative genotypes that could be estimated, in 5% of cases the most likely genotype for an individual at a SNP had a probability of less than 0.99. Matrix formulations of maximum likelihood equations for parentage assignment were developed for the quantitative genotypes and also for discrete genotypes perturbed by an assumed error term. Assignment rates that were based on maximum likelihood with quantitative genotypes were similar to those based on maximum likelihood with perturbed genotypes but, for more than 50% of cases, the two methods resulted in individuals being assigned to different families. Treating genotypes as quantitative values allows the same analysis framework to be used for pooled samples of DNA from multiple individuals. Resulting correlations between allele frequency estimates from pooled DNA and individual samples were consistently greater than 0.90, and as high as 0.97 for some pools. Estimates of family contributions to the pools based on quantitative genotypes in pooled DNA had a correlation of 0.85 with estimates of contributions from DNA-derived pedigree. Even with low numbers of SNPs of variable quality, parentage testing and family assignment from pooled samples are
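
    The matrix maximum-likelihood formulation is not given in the abstract; as a rough illustration of the core idea, treating genotypes as probability vectors and scoring a candidate parent pair by Mendelian transmission at each biallelic SNP, a minimal sketch might look like this (the function name and the single-locus transmission model are illustrative assumptions, not the authors' code):

```python
import numpy as np

# Mendelian transmission: P(transmit allele A | parent genotype g) for g = 0, 1, 2
TRANSMIT = np.array([0.0, 0.5, 1.0])

def offspring_loglik(off_probs, dam_probs, sire_probs):
    """Log-likelihood that an offspring descends from a given dam/sire pair.

    Each argument is an (n_snps, 3) array of quantitative genotype
    probabilities P(g = 0, 1, 2 copies of allele A) at each biallelic SNP.
    """
    # Probability each parent transmits allele A, averaged over its genotype distribution
    p_dam = dam_probs @ TRANSMIT
    p_sire = sire_probs @ TRANSMIT
    # Offspring genotype distribution implied by the candidate parent pair
    p0 = (1.0 - p_dam) * (1.0 - p_sire)
    p2 = p_dam * p_sire
    p1 = 1.0 - p0 - p2
    pred = np.stack([p0, p1, p2], axis=1)
    # Marginalize over the offspring's own genotype uncertainty
    per_snp = (off_probs * pred).sum(axis=1)
    return float(np.log(np.clip(per_snp, 1e-12, None)).sum())
```

    Assignment would then score every candidate pair this way and keep the maximum; the clip guards against a single impossible genotype driving the sum to minus infinity, playing the role of the assumed error term mentioned in the abstract.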

  11. A k-space method for acoustic propagation using coupled first-order equations in three dimensions.

    Science.gov (United States)

    Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C

    2009-09-01

    A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
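
    The spectral evaluation of spatial derivatives described in the abstract reduces, for one field component, to multiplying the Fourier transform by ik along each axis. A minimal single-axis sketch (not the authors' full coupled first-order solver) could be:

```python
import numpy as np

def spectral_derivative(f, axis, length):
    """Differentiate a periodic field along one axis by multiplying its
    Fourier transform by i*k, the spectral derivative rule used by
    k-space methods (sketched here for a single axis)."""
    n = f.shape[axis]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)  # angular wavenumbers
    shape = [1] * f.ndim
    shape[axis] = n
    F = np.fft.fft(f, axis=axis)
    return np.real(np.fft.ifft(1j * k.reshape(shape) * F, axis=axis))
```

    On a periodic grid this is exact for band-limited fields, which is the accuracy advantage over finite differences; in three dimensions the same multiplication inside a 3D FFT is what creates the all-to-all communication bottleneck the abstract discusses.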

  12. Experimental MR-guided cryotherapy of the brain with almost real-time imaging by radial k-space scanning

    International Nuclear Information System (INIS)

    Tacke, J.; Schorn, R.; Glowinski, A.; Grosskortenhaus, S.; Adam, G.; Guenther, R.W.; Rasche, V.

    1999-01-01

    Purpose: To test radial k-space scanning by MR fluoroscopy to guide and control MR-guided interstitial cryotherapy of the healthy pig brain. Methods: After MR tomographic planning of the approach, an MR-compatible experimental cryotherapy probe of 2.7 mm diameter was introduced through a 5 mm burr hole into the right frontal brain of five healthy pigs. The freeze-thaw cycles were imaged using a T1-weighted gradient echo sequence with radial k-space scanning in coronal, sagittal, and axial directions. Results: The high temporal resolution of the chosen sequence permits continuous depiction of the freezing process with good image quality and high contrast between ice and unfrozen brain parenchyma. Because of the interactive design of the sequence, the slice plane could be chosen as desired during the measurement. The ice formation was sharply demarcated, spherically configured, and free of signal. Its maximum diameter was 13 mm. Conclusions: With the novel, interactively controllable gradient echo sequence with radial k-space scanning, guidance of the intervention under fluoroscopic conditions with the advantages of MRT is possible. MR-guided cryotherapy allows minimally invasive, precisely dosable focal tissue ablation. (orig.) [de

  13. Ag2S/CdS/TiO2 Nanotube Array Films with High Photocurrent Density by Spotting Sample Method

    OpenAIRE

    Sun, Hong; Zhao, Peini; Zhang, Fanjun; Liu, Yuliang; Hao, Jingcheng

    2015-01-01

    Ag2S/CdS/TiO2 hybrid nanotube array films (Ag2S/CdS/TNTs) were prepared by selectively depositing a narrow-gap semiconductor—Ag2S (0.9 eV) quantum dots (QDs)—in the local domain of the CdS/TiO2 nanotube array films by spotting sample method (SSM). The improvement of sunlight absorption ability and photocurrent density of titanium dioxide (TiO2) nanotube array films (TNTs) which were obtained by anodic oxidation method was realized because of modifying semiconductor QDs. The CdS/TNTs, Ag2S/TNT...

  14. THE ALFALFA H α SURVEY. I. PROJECT DESCRIPTION AND THE LOCAL STAR FORMATION RATE DENSITY FROM THE FALL SAMPLE

    International Nuclear Information System (INIS)

    Sistine, Angela Van; Salzer, John J.; Janowiecki, Steven; Sugden, Arthur; Giovanelli, Riccardo; Haynes, Martha P.; Jaskot, Anne E.; Wilcots, Eric M.

    2016-01-01

    The ALFALFA H α survey utilizes a large sample of H i-selected galaxies from the ALFALFA survey to study star formation (SF) in the local universe. ALFALFA H α contains 1555 galaxies with distances between ∼20 and ∼100 Mpc. We have obtained continuum-subtracted narrowband H α images and broadband R images for each galaxy, creating one of the largest homogeneous sets of H α images ever assembled. Our procedures were designed to minimize the uncertainties related to the calculation of the local SF rate density (SFRD). The galaxy sample we constructed is as close to volume-limited as possible, is a robust statistical sample, and spans a wide range of galaxy environments. In this paper, we discuss the properties of our Fall sample of 565 galaxies, our procedure for deriving individual galaxy SF rates, and our method for calculating the local SFRD. We present a preliminary value of log(SFRD[M⊙ yr−1 Mpc−3]) = −1.747 ± 0.018 (random) ±0.05 (systematic) based on the 565 galaxies in our Fall sub-sample. Compared to the weighted average of SFRD values around z ≈ 2, our local value indicates a drop in the global SFRD of a factor of 10.2 over that lookback time.

  15. THE ALFALFA H α SURVEY. I. PROJECT DESCRIPTION AND THE LOCAL STAR FORMATION RATE DENSITY FROM THE FALL SAMPLE

    Energy Technology Data Exchange (ETDEWEB)

    Sistine, Angela Van [Department of Physics, University of Wisconsin-Milwaukee, Milwaukee, WI 53211 (United States); Salzer, John J.; Janowiecki, Steven [Department of Astronomy, Indiana University, Bloomington, IN 47405 (United States); Sugden, Arthur [Department of Endocrinology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA 02115 (United States); Giovanelli, Riccardo; Haynes, Martha P. [Center for Astrophysics and Planetary Science, Cornell University, Ithaca, NY 14853 (United States); Jaskot, Anne E. [Department of Astronomy, Smith College, Northampton, MA 01063 (United States); Wilcots, Eric M. [Department of Astronomy, University of Wisconsin-Madison, Madison, WI 53706 (United States)

    2016-06-10

    The ALFALFA H α survey utilizes a large sample of H i-selected galaxies from the ALFALFA survey to study star formation (SF) in the local universe. ALFALFA H α contains 1555 galaxies with distances between ∼20 and ∼100 Mpc. We have obtained continuum-subtracted narrowband H α images and broadband R images for each galaxy, creating one of the largest homogeneous sets of H α images ever assembled. Our procedures were designed to minimize the uncertainties related to the calculation of the local SF rate density (SFRD). The galaxy sample we constructed is as close to volume-limited as possible, is a robust statistical sample, and spans a wide range of galaxy environments. In this paper, we discuss the properties of our Fall sample of 565 galaxies, our procedure for deriving individual galaxy SF rates, and our method for calculating the local SFRD. We present a preliminary value of log(SFRD[M⊙ yr−1 Mpc−3]) = −1.747 ± 0.018 (random) ±0.05 (systematic) based on the 565 galaxies in our Fall sub-sample. Compared to the weighted average of SFRD values around z ≈ 2, our local value indicates a drop in the global SFRD of a factor of 10.2 over that lookback time.

  16. Gemini NIFS survey of feeding and feedback processes in nearby active galaxies - II. The sample and surface mass density profiles

    Science.gov (United States)

    Riffel, R. A.; Storchi-Bergmann, T.; Riffel, R.; Davies, R.; Bianchin, M.; Diniz, M. R.; Schönell, A. J.; Burtscher, L.; Crenshaw, M.; Fischer, T. C.; Dahmer-Hahn, L. G.; Dametto, N. Z.; Rosario, D.

    2018-02-01

    We present and characterize a sample of 20 nearby Seyfert galaxies selected for having BAT 14-195 keV luminosities L_X ≥ 10^41.5 erg s^-1, redshift z ≤ 0.015, being accessible for observations with the Gemini Near-Infrared Field Spectrograph (NIFS) and showing extended [O III]λ5007 emission. Our goal is to study Active Galactic Nucleus (AGN) feeding and feedback processes from near-infrared integral-field spectra, which include both ionized (H II) and hot molecular (H2) emission. This sample is complemented by another nine Seyfert galaxies previously observed with NIFS. We show that the host galaxy properties (absolute magnitudes MB, MH, central stellar velocity dispersion and axial ratio) show a similar distribution to those of the 69 BAT AGN. For the 20 galaxies already observed, we present surface mass density (Σ) profiles for H II and H2 in their inner ∼500 pc, showing that H II emission presents a steeper radial gradient than H2. This can be attributed to the different excitation mechanisms: ionization by AGN radiation for H II and heating by X-rays for H2. The mean surface mass densities are in the range (0.2 ≤ Σ_H II ≤ 35.9) M⊙ pc^-2, and (0.2 ≤ Σ_H2 ≤ 13.9) × 10^-3 M⊙ pc^-2, while the ratios between the H II and H2 masses range between ∼200 and 8000. The sample presented here will be used in future papers to map AGN gas excitation and kinematics, providing a census of the mass inflow and outflow rates and power as well as their relation with the AGN luminosity.

  17. Non-Cartesian MRI scan time reduction through sparse sampling

    NARCIS (Netherlands)

    Wajer, F.T.A.W.

    2001-01-01

    Non-Cartesian MRI Scan-Time Reduction through Sparse Sampling Magnetic resonance imaging (MRI) signals are measured in the Fourier domain, also called k-space. Samples of the MRI signal can not be taken at will, but lie along k-space trajectories determined by the magnetic field gradients. MRI

  18. Amorphous and liquid samples structure and density measurements at high pressure - high temperature using diffraction and imaging techniques

    Science.gov (United States)

    Guignot, N.; King, A.; Clark, A. N.; Perrillat, J. P.; Boulard, E.; Morard, G.; Deslandes, J. P.; Itié, J. P.; Ritter, X.; Sanchez-Valle, C.

    2016-12-01

    Determination of the density and structure of liquids such as iron alloys, silicates and carbonates is a key to understand deep Earth structure and dynamics. X-ray diffraction provided by large synchrotron facilities gives excellent results as long as the signal scattered from the sample can be isolated from its environment. Different techniques already exist; we present here the implementation and the first results given by the combined angle- and energy-dispersive structural analysis and refinement (CAESAR) technique introduced by Wang et al. in 2004, that has never been used in this context. It has several advantages in the study of liquids: 1/ the standard energy-dispersive technique (EDX), fast and compatible with large multi-anvil presses frames, is used for fast analysis free of signal pollution from the sample environment 2/ some limitations of the EDX technique (homogeneity of the sample, low resolution) are irrelevant in the case of liquid signals, others (wrong intensities, escape peaks artifacts, background subtraction) are solved by the CAESAR technique 3/ high Q data (up to 15 A-1 and more) can be obtained in a few hours (usually less than 2). We present here the facilities available on the PSICHE beamline (SOLEIL synchrotron, France) and a few results obtained using a Paris-Edinburgh (PE) press and a 1200 tons load capacity multi-anvil press with a (100) DIA compression module. X-ray microtomography, used in conjunction with a PE press featuring rotating anvils (RotoPEc, Philippe et al., 2013) is also very effective, by simply measuring the 3D volume of glass or liquid spheres at HPHT, thus providing density. This can be done in conjunction with the CAESAR technique and we illustrate this point. Finally, absorption profiles can be obtained via imaging techniques, providing another independent way to measure the density of these materials. References Y. Wang et al., A new technique for angle-dispersive powder diffraction using an energy

  19. Ag2S/CdS/TiO2 Nanotube Array Films with High Photocurrent Density by Spotting Sample Method.

    Science.gov (United States)

    Sun, Hong; Zhao, Peini; Zhang, Fanjun; Liu, Yuliang; Hao, Jingcheng

    2015-12-01

    Ag2S/CdS/TiO2 hybrid nanotube array films (Ag2S/CdS/TNTs) were prepared by selectively depositing a narrow-gap semiconductor, Ag2S (0.9 eV) quantum dots (QDs), in the local domain of the CdS/TiO2 nanotube array films by the spotting sample method (SSM). Modifying the films with semiconductor QDs improved the sunlight absorption ability and photocurrent density of the titanium dioxide (TiO2) nanotube array films (TNTs), which were obtained by an anodic oxidation method. For comparison, CdS/TNTs, Ag2S/TNTs, and Ag2S/CdS/TNTs were also synthesized by uniformly depositing the QDs into the TNTs via the successive ionic layer adsorption and reaction (SILAR) method. X-ray powder diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), and X-ray photoelectron spectroscopy (XPS) results demonstrated that the Ag2S/CdS/TNTs prepared by SSM and the other films were successfully prepared. In comparison with the four films of TNTs, CdS/TNTs, Ag2S/TNTs, and Ag2S/CdS/TNTs by SILAR, the Ag2S/CdS/TNTs prepared by SSM showed much better absorption capability and the highest photocurrent density in the UV-vis range (320~800 nm). The number of local deposition cycles has a great influence on the photoelectric properties. The photocurrent density of Ag2S/CdS/TNTs prepared by SSM with the optimum six deposition cycles was about 37 times that of unmodified TNTs, demonstrating great prospective applications in solar energy utilization fields.

  20. Collective estimation of multiple bivariate density functions with application to angular-sampling-based protein loop modeling

    KAUST Repository

    Maadooliat, Mehdi

    2015-10-21

    This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.
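
    The abstract's spline-on-a-triangulation estimator is not reproduced here; a much simpler way to see why the circular nature of angle pairs matters is a product von Mises kernel density on the torus, which is automatically smooth across the ±π boundary (a hypothetical stand-in for illustration, not the authors' method):

```python
import numpy as np

def torus_kde(samples, grid, kappa=8.0):
    """Bivariate kernel density estimate on the torus [-pi, pi)^2 using a
    product of von Mises kernels, so the estimate is smooth across the
    +/-pi boundary, the same wrap-around continuity the spline method enforces.

    samples : (n, 2) array of angle pairs in radians
    grid    : (m, 2) array of evaluation points in radians
    """
    norm = 1.0 / (2.0 * np.pi * np.i0(kappa)) ** 2  # squared von Mises normalizer
    d = grid[:, None, :] - samples[None, :, :]      # pairwise angular differences
    kern = np.exp(kappa * np.cos(d)).prod(axis=2)   # product kernel over both angles
    return norm * kern.mean(axis=1)

# a sample near +pi still raises the density just across the -pi boundary
vals = torus_kde(np.array([[3.0, 3.0]]), np.array([[-3.1, -3.1], [0.0, 0.0]]))
```

    A Euclidean kernel would see the two points near opposite boundaries as far apart; the cosine in the kernel is what encodes the angular wrap-around.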

  1. Collective estimation of multiple bivariate density functions with application to angular-sampling-based protein loop modeling

    KAUST Repository

    Maadooliat, Mehdi; Zhou, Lan; Najibi, Seyed Morteza; Gao, Xin; Huang, Jianhua Z.

    2015-01-01

    This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.

  2. Energy-filtered real- and k-space secondary and energy-loss electron imaging with Dual Emission Electron spectro-Microscope: Cs/Mo(110)

    Energy Technology Data Exchange (ETDEWEB)

    Grzelakowski, Krzysztof P., E-mail: k.grzelakowski@opticon-nanotechnology.com

    2016-05-15

    Since its introduction the importance of complementary k∥-space (LEED) and real space (LEEM) information in the investigation of surface science phenomena has been widely demonstrated over the last five decades. In this paper we report the application of a novel kind of electron spectromicroscope Dual Emission Electron spectroMicroscope (DEEM) with two independent electron optical channels for reciprocal and real space quasi-simultaneous imaging in investigation of a Cs covered Mo(110) single crystal by using the 800 eV electron beam from an “in-lens” electron gun system developed for the sample illumination. With the DEEM spectromicroscope it is possible to observe dynamic, irreversible processes at surfaces in the energy-filtered real space and in the corresponding energy-filtered k∥-space quasi-simultaneously in two independent imaging columns. The novel concept of the high energy electron beam sample illumination in the cathode lens based microscopes allows chemically selective imaging and analysis under laboratory conditions. - Highlights: • A novel concept of the electron sample illumination with “in-lens” e⁻ gun is realized. • Quasi-simultaneous energy selective observation of the real- and k-space in EELS mode. • Observation of the energy filtered Auger electron diffraction at Cs atoms on Mo(110). • Energy-loss, Auger and secondary electron momentum microscopy is realized.

  3. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat

  4. Evaluation of Reduced Power Spectra from Three-Dimensional k-Space

    Science.gov (United States)

    Saur, J.; von Papen, M.

    2014-12-01

    We present a new tool to evaluate one-dimensional reduced power spectral densities (PSD) from arbitrary energy distributions in k-space. This enables us to calculate the power spectra as they are measured in the spacecraft frame for any given measurement geometry, assuming Taylor's frozen-in approximation. It is possible to separately calculate the diagonal elements of the spectral tensor and also to insert additional, non-turbulent energy in k-space (e.g. mirror mode waves). Given a critically balanced turbulent cascade with k∥ ∼ k⊥^α, we explore the implications on the spectral form of the PSD and the functional dependence of the spectral index κ on the field-to-flow angle θ between plasma flow and background magnetic field. We show that critically balanced turbulence develops a θ-independent cascade with the spectral slope of the perpendicular cascade κ(θ = 90°). This happens at frequencies f > f_max, where f_max(L, α, θ) is a function of the outer scale L, the critical balance exponent α and the field-to-flow angle θ. We also discuss potential damping terms acting on the k-space distribution of energy and their effect on the PSD. Further, we show that the functional dependence κ(θ) as found by Horbury et al. (2008) and Chen et al. (2010) can be explained with a damped critically balanced turbulence model.
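
    Reducing a 3D k-space energy distribution to the 1D PSD measured in the spacecraft frame amounts, for a given flow direction, to integrating the energy density over the two perpendicular wavevector components. A minimal sketch assuming a uniform k-grid (a simple Riemann-sum stand-in for the authors' tool):

```python
import numpy as np

def reduced_psd(energy_3d, dky, dkz):
    """Reduce a 3D spectral energy density E(kx, ky, kz), sampled on a
    uniform k-grid, to the 1D reduced PSD P(kx) = integral of E over
    (ky, kz), i.e. the spectrum seen along the flow direction under
    Taylor's frozen-in hypothesis (plain Riemann sum)."""
    return energy_3d.sum(axis=(1, 2)) * dky * dkz
```

    For an isotropic Gaussian distribution E = exp(-k²) the reduction has the closed form P(kx) = π exp(-kx²), which makes a convenient correctness check.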

  5. Reduction of respiratory ghosting motion artifacts in conventional two-dimensional multi-slice Cartesian turbo spin-echo: which k-space filling order is the best?

    Science.gov (United States)

    Inoue, Yuuji; Yoneyama, Masami; Nakamura, Masanobu; Takemura, Atsushi

    2018-06-01

    The two-dimensional Cartesian turbo spin-echo (TSE) sequence is widely used in routine clinical studies, but it is sensitive to respiratory motion. We investigated the k-space orders in Cartesian TSE that can effectively reduce motion artifacts. The purpose of this study was to demonstrate the relationship between k-space order and degree of motion artifacts using a moving phantom. We compared the degree of motion artifacts between linear and asymmetric k-space orders. The actual spacing of ghost artifacts in the asymmetric order was doubled compared with that in the linear order in the free-breathing situation. The asymmetric order clearly showed less sensitivity to incomplete breath-hold at the latter half of the imaging period. Because of the actual number of partitions of the k-space and the temporal filling order, the asymmetric k-space order of Cartesian TSE was superior to the linear k-space order for reduction of ghosting motion artifacts.
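
    The ghost spacing the study measures follows from a general Fourier fact: a periodic modulation of the phase-encode lines with period p convolves the image with spikes N/p pixels apart. A small 1D sketch of that mechanism (not the actual TSE sequence or phantom) is:

```python
import numpy as np

# Periodic amplitude modulation of phase-encode lines (e.g. respiration sampled
# by a regular k-space filling order) convolves the image with a spike train:
# ghosts appear every N/p pixels for a modulation of period p.
N, p = 64, 8
obj = np.zeros(N)
obj[0] = 1.0                                       # point object
kspace = np.fft.fft(obj)
modulation = 1.0 + 0.2 * np.cos(2.0 * np.pi * np.arange(N) / p)
img = np.abs(np.fft.ifft(kspace * modulation))
ghosts = np.flatnonzero(img > 0.05)                # main peak plus ghost replicas
```

    Changing the temporal filling order changes the effective modulation period seen across k-space, which is why the asymmetric order shifts the ghost spacing relative to the linear order.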

  6. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    Science.gov (United States)

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.

  7. Effect of sample moisture and bulk density on performance of the 241Am-Be source based prompt gamma rays neutron activation analysis setup. A Monte Carlo study

    International Nuclear Information System (INIS)

    Almisned, Ghada

    2010-01-01

    Monte Carlo simulations were carried out to study the dependence of gamma-ray yield on bulk density and moisture content for five different lengths of Portland cement samples in a thermal neutron capture based prompt gamma ray neutron activation analysis (PGNAA) setup with source-inside-moderator geometry using a 241Am-Be neutron source. In this study, yields of the 1.94 and 6.42 MeV prompt gamma rays from calcium in the five Portland cement samples were calculated as a function of sample bulk density and moisture content. The study showed a strong dependence of the 1.94 and 6.42 MeV gamma-ray yield upon the sample bulk density but a weaker dependence upon sample moisture content. For an order of magnitude increase in the sample bulk density, an order of magnitude increase in the gamma-ray yield was observed, i.e., a one-to-one correspondence. For the dependence upon moisture content, an order of magnitude increase in the moisture content of the sample resulted in only a 16-17% increase in the yield of the 1.94 and 6.42 MeV gamma rays from calcium. (author)

  8. Easy measurement of diffusion coefficients of EGFP-tagged plasma membrane proteins using k-space Image Correlation Spectroscopy

    DEFF Research Database (Denmark)

    Christensen, Eva Arnspang; Koffman, Jennifer Skaarup; Marlar, Saw

    2014-01-01

    Lateral diffusion and compartmentalization of plasma membrane proteins are tightly regulated in cells and thus, studying these processes will reveal new insights to plasma membrane protein function and regulation. Recently, k-Space Image Correlation Spectroscopy (kICS) was developed to enable...... routine measurements of diffusion coefficients directly from images of fluorescently tagged plasma membrane proteins, that avoided systematic biases introduced by probe photophysics. Although the theoretical basis for the analysis is complex, the method can be implemented by nonexperts using a freely...... to the correlation function yields the diffusion coefficient. This paper provides a step-by-step guide to the image analysis and measurement of diffusion coefficients via kICS. First, a high frame rate image sequence of a fluorescently labeled plasma membrane protein is acquired using a fluorescence microscope. Then...
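
    In kICS the diffusion coefficient is read off from the decay of the k-space time-correlation function, which for free diffusion goes as exp(-D k² τ). A toy fit on noise-free synthetic correlations (D_true and the grids below are arbitrary assumed values, not measured data, and the full kICS pipeline starting from an image sequence is not shown):

```python
import numpy as np

def fit_diffusion(k2, tau, corr):
    """Recover D from a kICS-style correlation function
    phi(k, tau) = exp(-D * k^2 * tau) by least squares on
    log(phi) against k^2 * tau (slope through the origin)."""
    x = np.outer(k2, tau).ravel()
    y = np.log(corr).ravel()
    return -(x @ y) / (x @ x)

# noise-free synthetic correlations for an assumed D (illustrative values)
D_true = 0.25                      # um^2/s, assumed
k2 = np.linspace(0.5, 5.0, 10)     # squared spatial frequencies
tau = np.linspace(0.1, 1.0, 8)     # time lags, s
corr = np.exp(-D_true * np.outer(k2, tau))
D_est = fit_diffusion(k2, tau, corr)
```

    Fitting in k-space rather than image space is what makes the estimate insensitive to probe photophysics, since blinking and bleaching factor out of the spatial frequency dependence.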

  9. Fixed-Precision Sequential Sampling Plans for Estimating Alfalfa Caterpillar, Colias lesbia, Egg Density in Alfalfa, Medicago sativa, Fields in Córdoba, Argentina

    Science.gov (United States)

    Serra, Gerardo V.; Porta, Norma C. La; Avalos, Susana; Mazzuferi, Vilma

    2013-01-01

    The alfalfa caterpillar, Colias lesbia (Fabricius) (Lepidoptera: Pieridae), is a major pest of alfalfa, Medicago sativa L. (Fabales: Fabaceae), crops in Argentina. Its management is based mainly on chemical control of larvae whenever the larvae exceed the action threshold. To develop and validate fixed-precision sequential sampling plans, an intensive sampling programme for C. lesbia eggs was carried out in two alfalfa plots located in the Province of Córdoba, Argentina, from 1999 to 2002. Using Resampling for Validation of Sampling Plans software, 12 additional independent data sets were used to validate the sequential sampling plan with precision levels of 0.10 and 0.25 (SE/mean), respectively. For a range of mean densities of 0.10 to 8.35 eggs/sample, an average sample size of only 27 and 26 sample units was required to achieve a desired precision level of 0.25 for the sampling plans of Green and Kuno, respectively. As the precision level was increased to 0.10, average sample size increased to 161 and 157 sample units for the sampling plans of Green and Kuno, respectively. We recommend using Green's sequential sampling plan because it is less sensitive to changes in egg density. These sampling plans are a valuable tool for researchers to study population dynamics and to evaluate integrated pest management strategies. PMID:23909840
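
    Green's plan stops sampling once the cumulative count crosses a stop line derived from Taylor's power law s² = a mᵇ and the target precision C = SE/mean. A minimal sketch with hypothetical Taylor coefficients (a and b below are illustrative, not the values fitted for C. lesbia):

```python
def required_n(m, a, b, C):
    """Fixed sample size giving precision C = SE/mean at mean density m,
    from Taylor's power law s^2 = a * m^b (solve C^2 = a * m^(b-2) / n)."""
    return a * m ** (b - 2.0) / C ** 2

def green_stop_line(n, a, b, C):
    """Green's stop line: cumulative count T_n at which sampling can stop
    after n sample units while achieving precision C."""
    return (C ** 2 / a) ** (1.0 / (b - 2.0)) * n ** ((b - 1.0) / (b - 2.0))
```

    The two functions express the same precision condition: with a = 2, b = 1.5 and C = 0.25, a mean of 1 egg per sample needs 32 units, and the stop line evaluated at n = 32 is exactly 32 eggs. Relaxing C from 0.10 to 0.25 shrinks n sharply, mirroring the drop from about 160 to about 27 sample units reported in the abstract.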

  10. Breast Density Awareness and Knowledge, and Intentions for Breast Cancer Screening in a Diverse Sample of Women Age Eligible for Mammography.

    Science.gov (United States)

    Santiago-Rivas, Marimer; Benjamin, Shayna; Andrews, Janna Z; Jandorf, Lina

    2017-08-14

    The objectives of this study were to assess breast density knowledge and awareness, and to identify information associated with the intention to complete routine and supplemental screening for breast cancer in a diverse sample of women age eligible for mammography. We quantitatively assessed (by self-report) breast density awareness and knowledge (N = 264) in black (47.7%), Latina (35.2%), and white (17%) women recruited online and in the community. Most participants reported having heard about breast density (69.2%); less than one third knew their own breast density status (30.4%). Knowing their own breast density, believing that women should be notified of their breast density in their mammogram report, and feeling informed if provided this information are associated with the likelihood of completing a mammogram. Intention to complete a mammogram, and knowledge of the impact of breast density on mammogram accuracy, are associated with the likelihood of completing supplemental ultrasound tests of the breast. These findings inform practitioners and policy makers about the information and communication factors that influence breast cancer screening concerns and decisions, and should help practitioners better identify women who may not have been exposed to breast density messages.

  11. Effects of Spatial Distribution of Trees on Density Estimation by Nearest Individual Sampling Method: Case Studies in Zagros Wild Pistachio Woodlands and Simulated Stands

    Directory of Open Access Journals (Sweden)

    Y. Erfanifard

    2014-06-01

    Full Text Available Distance methods and their density estimators may yield biased measurements unless the studied stand of trees has a random spatial pattern. This study assessed the effect of the spatial arrangement of wild pistachio trees on density estimation by the nearest individual sampling method in Zagros woodlands, Iran, and applied a correction factor based on the spatial pattern of the trees. A 45 ha clumped stand of wild pistachio trees was selected in Zagros woodlands, and two stands with similar density and area, one random and one dispersed, were simulated. Distances from the nearest individual and neighbour at 40 sample points in a 100 × 100 m grid were measured in the three stands. The results showed that the nearest individual method with the Batcheler estimator did not estimate density correctly in all stands. However, when the correction factor based on the spatial pattern of the trees was applied, the measured density did not differ significantly from the true density of the stands. This study showed that accounting for the spatial arrangement of trees can improve the results of the nearest individual method with the Batcheler estimator in density measurement.
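The nearest-individual idea can be illustrated with the basic distance estimator D = n / (π · Σ rᵢ²), scaled by a pattern-dependent correction factor. This is a simplified stand-in for illustration, not Batcheler's exact estimator (which uses additional nearest-neighbour terms):

```python
import math

def nearest_individual_density(distances, correction=1.0):
    """Basic nearest-individual density estimate D = n / (pi * sum(r_i**2)),
    multiplied by a spatial-pattern correction factor. A simplified sketch,
    not Batcheler's exact formula."""
    n = len(distances)
    return correction * n / (math.pi * sum(r * r for r in distances))
```

With `correction=1.0` the estimator implicitly assumes complete spatial randomness; for a clumped stand like the wild pistachio plot, the factor would be fitted from the observed pattern.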

  12. Gamma-ray yield dependence on bulk density and moisture content of a sample of a PGNAA setup. A Monte Carlo study

    International Nuclear Information System (INIS)

    Nagadi, M.M.; Naqvi, A.A.

    2007-01-01

    Monte Carlo calculations were carried out to study the dependence of γ-ray yield on the bulk density and moisture content of a sample in a thermal-neutron capture-based prompt gamma neutron activation analysis (PGNAA) setup. The results of the study showed a strong dependence of the γ-ray yield upon the sample bulk density. An order of magnitude increase in yield of 1.94 and 6.42 MeV prompt γ-rays from calcium in a Portland cement sample was observed for a corresponding order of magnitude increase in the sample bulk density. In contrast, the γ-ray yield has a weak dependence on sample moisture content: an increase of only 20% in yield of the 1.94 and 6.42 MeV prompt γ-rays from calcium in the Portland cement sample was observed for an order of magnitude increase in the moisture content of the sample. A similar effect of moisture content was observed on the yield of 1.167 MeV prompt γ-rays from chlorine contaminants in Portland cement samples. For an order of magnitude increase in the moisture content of the sample, a 7 to 12% increase in the yield of the 1.167 MeV chlorine γ-ray was observed for Portland cement samples containing 1 to 5 wt.% chlorine contaminants. This study has shown that the effects of sample moisture content on prompt γ-ray yield from constituents of a Portland cement sample are insignificant in a thermal-neutron capture-based PGNAA setup. (author)

  13. Computational Identification of Protein Pupylation Sites by Using Profile-Based Composition of k-Spaced Amino Acid Pairs.

    Directory of Open Access Journals (Sweden)

    Md Mehedi Hasan

    Full Text Available Prokaryotic proteins are regulated by pupylation, a type of post-translational modification that contributes to cellular function in bacterial organisms. In the pupylation process, the prokaryotic ubiquitin-like protein (Pup) tagging is functionally analogous to ubiquitination in order to tag target proteins for proteasomal degradation. To date, several experimental methods have been developed to identify pupylated proteins and their pupylation sites, but these experimental methods are generally laborious and costly. Therefore, computational methods that can accurately predict potential pupylation sites based on protein sequence information are highly desirable. In this paper, a novel predictor termed pbPUP has been developed for accurate prediction of pupylation sites. In particular, a sophisticated sequence encoding scheme [i.e. the profile-based composition of k-spaced amino acid pairs (pbCKSAAP)] is used to represent the sequence patterns and evolutionary information of the sequence fragments surrounding pupylation sites. Then, a Support Vector Machine (SVM) classifier is trained using the pbCKSAAP encoding scheme. The final pbPUP predictor achieves an AUC value of 0.849 in 10-fold cross-validation tests and outperforms other existing predictors on a comprehensive independent test dataset. The proposed method is anticipated to be a helpful computational resource for the prediction of pupylation sites. The web server and curated datasets in this study are freely available at http://protein.cau.edu.cn/pbPUP/.
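The plain (non-profile) CKSAAP encoding underlying pbCKSAAP can be sketched as follows: for each spacing k, record the normalized count of every ordered residue pair whose members are separated by k intervening positions. pbPUP additionally weights pairs with PSSM-derived profile information, which is omitted in this sketch:

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def cksaap(sequence, k_max=3):
    """Composition of k-spaced amino acid pairs: for each spacing k in
    0..k_max, the normalized count of each of the 400 ordered residue
    pairs separated by k intervening positions (k = 0 means adjacent).
    The profile-based weighting used by pbCKSAAP is omitted here."""
    pairs = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]
    features = {}
    for k in range(k_max + 1):
        window_count = len(sequence) - k - 1
        counts = dict.fromkeys(pairs, 0)
        for i in range(window_count):
            pair = sequence[i] + sequence[i + k + 1]
            if pair in counts:
                counts[pair] += 1
        for p in pairs:
            features["%s.gap%d" % (p, k)] = (
                counts[p] / window_count if window_count > 0 else 0.0
            )
    return features
```

Each sequence fragment thus maps to a fixed-length vector of (k_max + 1) × 400 features, which is what the SVM consumes.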

  14. Prediction of citrullination sites by incorporating k-spaced amino acid pairs into Chou's general pseudo amino acid composition.

    Science.gov (United States)

    Ju, Zhe; Wang, Shi-Yun

    2018-04-22

    As one of the most important and common protein post-translational modifications, citrullination plays a key role in regulating various biological processes and is associated with several human diseases. The accurate identification of citrullination sites is crucial for elucidating the underlying molecular mechanisms of citrullination and designing drugs for related human diseases. In this study, a novel bioinformatics tool named CKSAAP_CitrSite is developed for the prediction of citrullination sites. With the assistance of a support vector machine algorithm, the highlight of CKSAAP_CitrSite is to adopt the composition of k-spaced amino acid pairs surrounding a query site as input. As illustrated by 10-fold cross-validation, CKSAAP_CitrSite achieves a satisfactory performance with a sensitivity of 77.59%, a specificity of 95.26%, an accuracy of 89.37% and a Matthews correlation coefficient of 0.7566, much better than that of the existing prediction method. Feature analysis shows that space-containing pairs in the N-terminal region may play an important role in the prediction of citrullination sites, and that arginines close to the N-terminus tend to be citrullinated. The conclusions derived from this study could offer useful information for elucidating the molecular mechanisms of citrullination and related experimental validations. A user-friendly web-server for CKSAAP_CitrSite is available at 123.206.31.171/CKSAAP_CitrSite/. Copyright © 2017. Published by Elsevier B.V.
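The performance measures reported above all follow from the binary confusion matrix; a small sketch of their standard definitions:

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, accuracy and Matthews correlation
    coefficient from a binary confusion matrix (tp/fp/tn/fn counts)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return sensitivity, specificity, accuracy, mcc
```

Unlike accuracy, the MCC stays informative on imbalanced site/non-site datasets, which is why predictors such as CKSAAP_CitrSite report it alongside sensitivity and specificity.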

  15. Higher Dietary Energy Density is Associated with Stunting but not Overweight and Obesity in a Sample of Urban Malaysian Children.

    Science.gov (United States)

    Shariff, Zalilah Mohd; Lin, Khor Geok; Sariman, Sarina; Siew, Chin Yit; Yusof, Barakatun Nisak Mohd; Mun, Chan Yoke; Lee, Huang Soo; Mohamad, Maznorila

    2016-01-01

    Although diets with high energy density are associated with increased risk of overweight and obesity, it is not known whether such diets are associated with undernutrition. This study assessed the relationship between dietary energy density (ED) and nutritional status of 745 urban 1- to 10-year-old children. Dietary intakes were obtained using food recall and record for two days. Dietary energy density was based on food and caloric beverages. Higher dietary ED was associated with lower intakes of carbohydrate, sugar, vitamins C and D, and calcium but higher fat, fiber, iron, and folate intakes. While intakes of fruits and milk/dairy products decreased, meat, fish, and legume intakes increased with higher dietary ED. Stunting, but not other growth problems, was associated with higher dietary ED. Future studies should confirm the cause-and-effect relationship between higher dietary ED and stunting.
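Dietary energy density is total energy divided by the total weight of food consumed, here following the food-plus-caloric-beverage convention described above. A minimal sketch; the item values are illustrative, not from the study:

```python
def dietary_energy_density(items):
    """Energy density (kcal/g) over food and caloric beverages only.
    Each item is (kcal, grams, is_noncaloric_beverage); non-caloric
    beverages are excluded. Item values are illustrative."""
    kept = [(kcal, grams) for kcal, grams, skip in items if not skip]
    total_kcal = sum(kcal for kcal, _ in kept)
    total_grams = sum(grams for _, grams in kept)
    return total_kcal / total_grams
```

For example, 200 kcal of food weighing 100 g plus a 250 g non-caloric beverage yields an ED of 2.0 kcal/g, since the beverage is excluded from both numerator and denominator.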

  16. Tobacco outlet density and converted versus native non-daily cigarette use in a national US sample.

    Science.gov (United States)

    Kirchner, Thomas R; Anesetti-Rothermel, Andrew; Bennett, Morgane; Gao, Hong; Carlos, Heather; Scheuermann, Taneisha S; Reitzel, Lorraine R; Ahluwalia, Jasjit S

    2017-01-01

    Investigate whether non-daily smokers' (NDS) cigarette price and purchase preferences, recent cessation attempts, and current intentions to quit are associated with the density of the retail cigarette product landscape surrounding their residential address. Cross-sectional assessment of N=904 converted NDS (CNDS), who previously smoked every day, and N=297 native NDS (NNDS), who only ever smoked non-daily, drawn from a national panel. Kernel density estimation was used to generate a nationwide probability surface of tobacco outlets linked to participants' residential ZIP code. Hierarchically nested log-linear models were compared to evaluate associations between outlet density, non-daily use patterns, price sensitivity and quit intentions. Overall, NDS in ZIP codes with greater outlet density were less likely than NDS in ZIP codes with lower outlet density to hold 6-month quit intentions when they also reported that price affected their use patterns (G2=66.1), their purchase locations (G2=85.2), the amount they smoked (G2=43.9), the prices they paid (G2=59.3), and their cigarette brand choice. This paper provides initial evidence that the point-of-sale cigarette environment may be differentially associated with the maintenance of CNDS versus NNDS patterns. Future research should investigate how tobacco control efforts can be optimised to both promote cessation and curb the rising tide of non-daily smoking in the USA. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
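The outlet surface described above rests on kernel density estimation; a toy 2D Gaussian KDE evaluated at a single location can be written directly from the definition. The coordinates and bandwidth below are hypothetical, unrelated to the study's actual geoprocessing:

```python
import math

def outlet_density(points, query, bandwidth=1.0):
    """Toy 2D Gaussian kernel density estimate at `query`, given a list
    of (x, y) outlet coordinates. Bandwidth and coordinates here are
    hypothetical stand-ins for the study's nationwide surface."""
    qx, qy = query
    norm = 1.0 / (len(points) * 2.0 * math.pi * bandwidth ** 2)
    return norm * sum(
        math.exp(-((qx - x) ** 2 + (qy - y) ** 2) / (2.0 * bandwidth ** 2))
        for x, y in points
    )
```

The estimate is highest near clusters of outlets and decays smoothly with distance, which is what lets a point surface be summarized per residential ZIP code.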

  17. A suspended-particle rosette multi-sampler for discrete biogeochemical sampling in low-particle-density waters

    Energy Technology Data Exchange (ETDEWEB)

    Breier, J. A.; Rauch, C. G.; McCartney, K.; Toner, B. M.; Fakra, S. C.; White, S. N.; German, C. R.

    2010-06-22

    To enable detailed investigations of early stage hydrothermal plume formation and abiotic and biotic plume processes we developed a new oceanographic tool. The Suspended Particulate Rosette sampling system has been designed to collect geochemical and microbial samples from the rising portion of deep-sea hydrothermal plumes. It can be deployed on a remotely operated vehicle for sampling rising plumes, on a wire-deployed water rosette for spatially discrete sampling of non-buoyant hydrothermal plumes, or on a fixed mooring in a hydrothermal vent field for time series sampling. It has performed successfully during both its first mooring deployment at the East Pacific Rise and its first remotely-operated vehicle deployments along the Mid-Atlantic Ridge. It is currently capable of rapidly filtering 24 discrete large-water-volume samples (30-100 L per sample) for suspended particles during a single deployment (e.g. >90 L per sample at 4-7 L per minute through 1 µm pore diameter polycarbonate filters). The Suspended Particulate Rosette sampler has been designed with a long-term goal of seafloor observatory deployments, where it can be used to collect samples in response to tectonic or other events. It is compatible with in situ optical sensors, such as laser Raman or visible reflectance spectroscopy systems, enabling in situ particle analysis immediately after sample collection and before the particles alter or degrade.

  18. Choice of sample size for high transport critical current density in a granular superconductor: percolation versus self-field effects

    International Nuclear Information System (INIS)

    Mulet, R.; Diaz, O.; Altshuler, E.

    1997-01-01

    The percolative character of the current paths and the self-field effects were considered to estimate optimal sample dimensions for the transport current of a granular superconductor by means of a Monte Carlo algorithm and critical-state model calculations. We showed that, under certain conditions, self-field effects are negligible and the Jc dependence on sample dimensions is determined by the percolative character of the current. Optimal dimensions are demonstrated to be a function of the fraction of superconducting phase in the sample. (author)
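The percolative picture can be illustrated with a toy site-percolation Monte Carlo: occupy grid sites with superconducting fraction p and ask how often a current path spans the sample. This is a sketch only; the paper's actual algorithm and critical-state calculations are more involved:

```python
import random

def spans(grid):
    """True if occupied sites connect the left edge to the right edge
    through 4-neighbour steps (a percolating current path)."""
    size = len(grid)
    stack = [(i, 0) for i in range(size) if grid[i][0]]
    seen = set(stack)
    while stack:
        i, j = stack.pop()
        if j == size - 1:
            return True
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < size and 0 <= nj < size and grid[ni][nj] \
                    and (ni, nj) not in seen:
                seen.add((ni, nj))
                stack.append((ni, nj))
    return False

def spanning_probability(p, size=20, trials=200, seed=1):
    """Monte Carlo estimate of the chance that a random grid with
    superconducting site fraction p carries a spanning current path."""
    rng = random.Random(seed)
    hits = sum(
        spans([[rng.random() < p for _ in range(size)] for _ in range(size)])
        for _ in range(trials)
    )
    return hits / trials
```

Sweeping p shows the sharp onset of conduction near the percolation threshold, the effect that makes optimal sample dimensions depend on the superconducting phase fraction.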

  19. Characterization of the effect of sample quality on high density oligonucleotide microarray data using progressively degraded rat liver RNA

    Directory of Open Access Journals (Sweden)

    Rosenzweig Barry A

    2007-09-01

    Full Text Available Abstract Background The interpretability of microarray data can be affected by sample quality. To systematically explore how RNA quality affects microarray assay performance, a set of rat liver RNA samples with a progressive change in RNA integrity was generated by thawing frozen tissue or by ex vivo incubation of fresh tissue over a time course. Results Incubation of tissue at 37°C for several hours had little effect on RNA integrity, but did induce changes in the transcript levels of stress response genes and immune cell markers. In contrast, thawing of tissue led to a rapid loss of RNA integrity. Probe sets identified as most sensitive to RNA degradation tended to be located more than 1000 nucleotides upstream of their transcription termini, similar to the positioning of control probe sets used to assess sample quality on Affymetrix GeneChip® arrays. Samples with RNA integrity numbers less than or equal to 7 showed a significant increase in false positives relative to undegraded liver RNA and a reduction in the detection of true positives among probe sets most sensitive to sample integrity for in silico modeled changes of 1.5-, 2-, and 4-fold. Conclusion Although moderate levels of RNA degradation are tolerated by microarrays with 3'-biased probe selection designs, in this study we identify a threshold beyond which decreased specificity and sensitivity can be observed that closely correlates with average target length. These results highlight the value of annotating microarray data with metrics that capture important aspects of sample quality.

  20. Potential, velocity, and density fields from redshift-distance samples: Application - Cosmography within 6000 kilometers per second

    International Nuclear Information System (INIS)

    Bertschinger, E.; Dekel, A.; Faber, S.M.; Dressler, A.; Burstein, D.

    1990-01-01

    A potential flow reconstruction algorithm has been applied to the real universe to reconstruct the three-dimensional potential, velocity, and mass density fields smoothed on large scales. The results are shown as maps of these fields, revealing the three-dimensional structure within 6000 km/s distance from the Local Group. The dominant structure is an extended deep potential well in the Hydra-Centaurus region, stretching across the Galactic plane toward Pavo, broadly confirming the Great Attractor (GA) model of Lynden-Bell et al. (1988). The Local Supercluster appears to be an extended ridge on the near flank of the GA, proceeding through the Virgo Southern Extension to the Virgo and Ursa Major clusters. The Virgo cluster and the Local Group are both falling toward the bottom of the GA potential well with peculiar velocities of 658 + or - 121 km/s and 565 + or - 125 km/s, respectively. 65 refs

  2. Heavy metal accumulation related to population density in road dust samples taken from urban sites under different land uses

    NARCIS (Netherlands)

    Trujillo-González, Juan Manuel; Torres-Mora, Marco Aurelio; Keesstra, Saskia; Brevik, Eric C.; Jiménez-Ballesta, Raimundo

    2016-01-01

    Soil pollution is a key component of the land degradation process, but little is known about the impact of soil pollution on human health in the urban environment. The heavy metals Pb, Zn, Cu, Cr, Cd and Ni were analyzed by acid digestion (method EPA 3050B) and a total of 15 dust samples were

  3. Agrilus auroguttatus exit hole distributions on Quercus agrifolia boles and a sampling method to estimate their density on individual trees

    Science.gov (United States)

    Laurel J. Haavik; Tom W. Coleman; Mary Louise Flint; Robert C. Venette; Steven J. Seybold

    2012-01-01

    In recent decades, invasive phloem and wood borers have become important pests in North America. To aid tree sampling and survey efforts for the newly introduced goldspotted oak borer, Agrilus auroguttatus Schaeffer (Coleoptera: Buprestidae), we examined spatial patterns of exit holes on the boles (trunks) of 58 coast live oak, Quercus...

  4. A Centrifugal Microfluidic Platform That Separates Whole Blood Samples into Multiple Removable Fractions Due to Several Discrete but Continuous Density Gradient Sections

    Science.gov (United States)

    Moen, Scott T.; Hatcher, Christopher L.; Singh, Anup K.

    2016-01-01

    We present a miniaturized centrifugal platform that uses density centrifugation for separation and analysis of biological components in small volume samples (~5 μL). We demonstrate the ability to enrich leukocytes for on-disk visualization via microscopy, as well as recovery of viable cells from each of the gradient partitions. In addition, we simplified the traditional Modified Wright-Giemsa staining by decreasing the time, volume, and expertise involved in the procedure. From a whole blood sample, we were able to extract 95.15% of leukocytes while excluding 99.8% of red blood cells. This platform has great potential in both medical diagnostics and research applications as it offers a simpler, automated, and inexpensive method for biological sample separation, analysis, and downstream culturing. PMID:27054764

  5. THE BOSS EMISSION-LINE LENS SURVEY. II. INVESTIGATING MASS-DENSITY PROFILE EVOLUTION IN THE SLACS+BELLS STRONG GRAVITATIONAL LENS SAMPLE

    Energy Technology Data Exchange (ETDEWEB)

    Bolton, Adam S.; Brownstein, Joel R.; Shu Yiping; Arneson, Ryan A. [Department of Physics and Astronomy, University of Utah, 115 South 1400 East, Salt Lake City, UT 84112 (United States); Kochanek, Christopher S. [Department of Astronomy and Center for Cosmology and Astroparticle Physics, Ohio State University, Columbus, OH 43210 (United States); Schlegel, David J. [Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Eisenstein, Daniel J. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, MS 20, Cambridge, MA 02138 (United States); Wake, David A. [Department of Astronomy, Yale University, New Haven, CT 06520 (United States); Connolly, Natalia [Department of Physics, Hamilton College, Clinton, NY 13323 (United States); Maraston, Claudia [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom); Weaver, Benjamin A., E-mail: bolton@astro.utah.edu [Center for Cosmology and Particle Physics, New York University, New York, NY 10003 (United States)

    2012-09-20

    We present an analysis of the evolution of the central mass-density profile of massive elliptical galaxies from the SLACS and BELLS strong gravitational lens samples over the redshift interval z ≈ 0.1-0.6, based on the combination of strong-lensing aperture mass and stellar velocity-dispersion constraints. We find a significant trend toward steeper mass profiles (parameterized by the power-law density model ρ ∝ r^(-γ)) at later cosmic times, with magnitude d⟨γ⟩/dz = -0.60 ± 0.15. We show that the combined lens-galaxy sample is consistent with a non-evolving distribution of stellar velocity dispersions. Considering possible additional dependence of ⟨γ⟩ on lens-galaxy stellar mass, effective radius, and Sersic index, we find marginal evidence for shallower mass profiles at higher masses and larger sizes, but with a significance that is subdominant to the redshift dependence. Using the results of published Monte Carlo simulations of spectroscopic lens surveys, we verify that our mass-profile evolution result cannot be explained by lensing selection biases as a function of redshift. Interpreted as a true evolutionary signal, our result suggests that major dry mergers involving off-axis trajectories play a significant role in the evolution of the average mass-density structure of massive early-type galaxies over the past 6 Gyr. We also consider an alternative non-evolutionary hypothesis based on variations in the strong-lensing measurement aperture with redshift, which would imply the detection of an 'inflection zone' marking the transition between the baryon-dominated and dark-matter halo-dominated regions of the lens galaxies. Further observations of the combined SLACS+BELLS sample can constrain this picture more precisely, and enable a more detailed investigation of the multivariate dependences of galaxy mass structure across cosmic time.

  6. Patterns of lymph node sampling and the impact of lymph node density in favorable histology Wilms tumor: An analysis of the national cancer database.

    Science.gov (United States)

    Saltzman, A F; Carrasco, A; Amini, A; Aldrink, J H; Dasgupta, R; Gow, K W; Glick, R D; Ehrlich, P F; Cost, N G

    2018-04-01

    There is controversy about the role of lymph node (LN) sampling or dissection in the management of favorable histology (FH) Wilms tumor (WT), specifically how it is performed and how it may impact survival. The objective of this study was to analyze factors affecting LN sampling patterns and the impact of LN yield and density (number of positive LNs/LNs examined) on overall survival (OS) in patients with advanced-stage favorable histology Wilms tumor (FHWT). The National Cancer Database (NCDB) was queried for patients with FHWT during 2004-2013. Demographic, clinical and OS data were abstracted for those who underwent surgical resection. Poisson regression was performed to analyze how factors influenced LN yield. Patients with positive LNs had LN density calculated and were further analyzed. A total of 2340 patients met criteria, with a median age at diagnosis of 3 years (range 0-78 years). The median number of LNs examined was three (range 0-87). Lymph node yield was affected by age, race, insurance, tumor size, laterality, advanced stage, LN positivity, and institutional volume. A total of 390 (16.6%) patients had LN-positive disease. Median LN density for these LN-positive patients was 0.38 (range 0.02-1) (Summary Figure). Estimated 5-year OS was significantly improved for those with LN density ≤0.38 vs. >0.38 (94% vs. 84.6%, P = 0.012). In this population, on multivariate analysis, age and LN density were significant predictors of OS. It is difficult to compile large numbers of cases in rare diseases like WT, and fortunately a large administrative database such as the NCDB can serve as a great resource. However, administrative data come with inherent limitations such as missing data and inability to account for a variety of factors that may influence LN yield and/or OS (specimen designation, pathologist experience, surgeon experience/volume, institutional Children's Oncology Group (COG) association, etc.). In this specific disease, the American Joint Committee
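Lymph node density as used above is simply positive nodes over nodes examined, dichotomized at the cohort median of 0.38; a minimal sketch:

```python
def ln_density(positive, examined):
    """Lymph node density: positive nodes / nodes examined."""
    if examined <= 0:
        raise ValueError("at least one node must be examined")
    return positive / examined

def ln_risk_group(positive, examined, cutoff=0.38):
    """Dichotomize at the median density of 0.38 reported above."""
    return "low" if ln_density(positive, examined) <= cutoff else "high"
```

Because the denominator is the sampling yield, the same number of positive nodes maps to a lower (better-prognosis) density when more nodes are examined, one reason yield patterns matter.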

  7. MRI-determined liver proton density fat fraction, with MRS validation: Comparison of regions of interest sampling methods in patients with type 2 diabetes.

    Science.gov (United States)

    Vu, Kim-Nhien; Gilbert, Guillaume; Chalut, Marianne; Chagnon, Miguel; Chartrand, Gabriel; Tang, An

    2016-05-01

    To assess the agreement between published magnetic resonance imaging (MRI)-based regions of interest (ROI) sampling methods using liver mean proton density fat fraction (PDFF) as the reference standard. This retrospective, internal review board-approved study was conducted in 35 patients with type 2 diabetes. Liver PDFF was measured by magnetic resonance spectroscopy (MRS) using a stimulated-echo acquisition mode sequence and MRI using a multiecho spoiled gradient-recalled echo sequence at 3.0T. ROI sampling methods reported in the literature were reproduced and liver mean PDFF obtained by whole-liver segmentation was used as the reference standard. Intraclass correlation coefficients (ICCs), Bland-Altman analysis, repeated-measures analysis of variance (ANOVA), and paired t-tests were performed. The ICC between MRS and MRI-PDFF was 0.916. Bland-Altman analysis showed excellent intermethod agreement with a bias of -1.5 ± 2.8%. The repeated-measures ANOVA found no systematic variation of PDFF among the nine liver segments. The correlation between liver mean PDFF and ROI sampling methods was very good to excellent (0.873 to 0.975). Paired t-tests revealed significant differences for sampling methods that exclusively or predominantly sampled the right lobe. Significant correlations with mean PDFF were found for sampling methods that included a higher number of segments, a total area equal to or larger than 5 cm², or that sampled both lobes (P = 0.001, 0.023, and 0.002, respectively). MRI-PDFF quantification methods should sample each liver segment in both lobes and include a total surface area equal to or larger than 5 cm² to provide a close estimate of the liver mean PDFF. © 2015 Wiley Periodicals, Inc.
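The Bland-Altman agreement quoted above (a bias with ±1.96 SD limits of agreement) can be computed directly from the paired measurements:

```python
import math

def bland_altman(method_a, method_b):
    """Bias (mean difference) and 95% limits of agreement
    (bias -/+ 1.96 * SD of the differences) for paired measurements,
    e.g. MRS versus MRI PDFF values for the same patients."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A bias of -1.5% with an SD of 2.8%, as reported, would give limits of agreement of roughly -7.0% to +4.0%.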

  8. Density dependence and climate effects in Rocky Mountain elk: an application of regression with instrumental variables for population time series with sampling error.

    Science.gov (United States)

    Creel, Scott; Creel, Michael

    2009-11-01

    1. Sampling error in annual estimates of population size creates two widely recognized problems for the analysis of population growth. First, if sampling error is mistakenly treated as process error, one obtains inflated estimates of the variation in true population trajectories (Staples, Taper & Dennis 2004). Second, treating sampling error as process error is thought to overestimate the importance of density dependence in population growth (Viljugrein et al. 2005; Dennis et al. 2006). 2. In ecology, state-space models are used to account for sampling error when estimating the effects of density and other variables on population growth (Staples et al. 2004; Dennis et al. 2006). In econometrics, regression with instrumental variables is a well-established method that addresses the problem of correlation between regressors and the error term, but requires fewer assumptions than state-space models (Davidson & MacKinnon 1993; Cameron & Trivedi 2005). 3. We used instrumental variables to account for sampling error and fit a generalized linear model to 472 annual observations of population size for 35 Elk Management Units in Montana, from 1928 to 2004. We compared this model with state-space models fit with the likelihood function of Dennis et al. (2006). We discuss the general advantages and disadvantages of each method. Briefly, regression with instrumental variables is valid with fewer distributional assumptions, but state-space models are more efficient when their distributional assumptions are met. 4. Both methods found that population growth was negatively related to population density and winter snow accumulation. Summer rainfall and wolf (Canis lupus) presence had much weaker effects on elk (Cervus elaphus) dynamics [though limitation by wolves is strong in some elk populations with well-established wolf populations (Creel et al. 2007; Creel & Christianson 2008)]. 5. Coupled with predictions for Montana from global and regional climate models, our results
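Regression with instrumental variables replaces the error-prone regressor with its projection on the instrument, so that sampling error no longer correlates with the regression error term. A minimal two-stage least squares sketch for one regressor and one instrument; the data in the test are illustrative, not the Montana elk series:

```python
def ols(x, y):
    """Simple-regression slope and intercept of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def iv_2sls(y, x, z):
    """Two-stage least squares: stage 1 projects the error-prone
    regressor x on the instrument z; stage 2 regresses y on the
    fitted values, yielding the IV slope and intercept."""
    slope1, intercept1 = ols(z, x)
    x_hat = [intercept1 + slope1 * zi for zi in z]
    return ols(x_hat, y)
```

In the elk application, a lagged population estimate can serve as the instrument for the current (noisy) estimate, since it predicts true density but not the current year's sampling error.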

  9. Critical current density in MgB2 bulk samples after co-doping with nano-SiC and poly zinc acrylate complexes

    International Nuclear Information System (INIS)

    Zhang, Z.; Suo, H.; Ma, L.; Zhang, T.; Liu, M.; Zhou, M.

    2011-01-01

    SiC and poly zinc acrylate complexes co-doped MgB2 bulk has been synthesized. Co-doping causes higher carbon substitution and introduces second-phase particles, and further increases the Jc value of MgB2 bulk beyond that of SiC doping alone. The co-doped MgB2 bulk samples were synthesized using an in situ reaction process. The additives are 8 wt.% SiC nano powders and 10 wt.% [(CH2CHCOO)2Zn]n poly zinc acrylate complexes (PZA). A systematic study was performed on samples doped with SiC or PZA and samples co-doped with both. The effects of doping and co-doping on phase formation, microstructure, and the variation of lattice parameters were studied. The amount of substituted carbon, the critical temperature (Tc) and the critical current density (Jc) were determined. The calculated lattice parameters show a decrease of the a-axis, while no obvious change was detected for the c-axis parameter in co-doped samples, indicating that carbon was substituted for boron in MgB2. The amount of substituted carbon in the co-doped sample is enhanced compared to that of both singly doped samples. The co-doped samples exhibit the highest Jc values, reaching 3.3 × 10^4 A/cm^2 at 5 K and 7 T. Co-doping with SiC and an organic compound is thus shown to be an effective way to further improve the superconducting properties of MgB2.

  10. Influence of parasite density and sample storage time on the reliability of Entamoeba histolytica-specific PCR from formalin-fixed and paraffin-embedded tissues.

    Science.gov (United States)

    Frickmann, Hagen; Tenner-Racz, Klara; Eggert, Petra; Schwarz, Norbert G; Poppert, Sven; Tannich, Egbert; Hagen, Ralf M

    2013-12-01

    We report on the reliability of polymerase chain reaction (PCR) for the detection of Entamoeba histolytica from formalin-fixed, paraffin-embedded tissue in comparison with microscopy and have determined predictors that may influence PCR results. E. histolytica-specific and Entamoeba dispar-specific real-time PCR and microscopy from adjacent histologic sections were performed using a collection of formalin-fixed, paraffin-embedded tissue specimens obtained from patients with invasive amebiasis. Specimens had been collected during the previous 4 decades. Association of sample age, parasite density, and reliability of PCR was analyzed. E. histolytica PCR was positive in 20 of 34 biopsies (58.8%); 2 of these 20 were microscopically negative for amebae in neighboring tissue sections. PCR was negative in 9 samples with visible amebae in neighboring sections and in 5 samples without visible parasites in neighboring sections. PCR was negative in all specimens that were older than 3 decades. Low parasite counts and sample ages older than 20 years were predictors for false-negative PCR results. All samples were negative for E. dispar DNA. PCR is suitable for the detection of E. histolytica in formalin-fixed, paraffin-embedded tissue samples that are younger than 2 decades and that contain intermediate to high parasite numbers. Negative results in older samples were due to progressive degradation of DNA over time as indicated by control PCRs targeting the human 18S rRNA gene. Moreover, our findings support previous suggestions that only E. histolytica but not E. dispar is responsible for invasive amebiasis.

  11. Do the venous blood samples replicate malaria parasite densities found in capillary blood? A field study performed in naturally-infected asymptomatic children in Cameroon.

    Science.gov (United States)

    Sandeu, Maurice M; Bayibéki, Albert N; Tchioffo, Majoline T; Abate, Luc; Gimonneau, Geoffrey; Awono-Ambéné, Parfait H; Nsango, Sandrine E; Diallo, Diadier; Berry, Antoine; Texier, Gaétan; Morlais, Isabelle

    2017-08-17

The evaluation of new drug- or vaccine-based approaches for malaria control is based on direct membrane feeding assays (DMFAs), in which gametocyte-infected blood samples are offered to mosquitoes through an artificial feeder system. Gametocyte donors are identified by the microscopic detection and quantification of malaria blood stages on blood films prepared using either capillary or venous blood. However, parasites are known to sequester in the microvasculature, and this phenomenon may alter accurate detection of parasites in blood films. The blood source may therefore impact the success of mosquito feeding experiments, and investigations are needed for the implementation of DMFAs under natural conditions. Thick blood smears were prepared from blood obtained from asymptomatic children attending primary schools in the vicinity of Mfou (Cameroon) over four transmission seasons. Parasite densities were determined microscopically from capillary and venous blood for 137 naturally-infected gametocyte carriers. The effect of the blood source on gametocyte and asexual stage densities was then assessed by fitting cumulative link mixed models (CLMM). DMFAs were performed to compare the infectiousness to mosquitoes of gametocytes from the different blood sources. Prevalence of Plasmodium falciparum asexual stages among asymptomatic children aged from 4 to 15 years was 51.8% (2116/4087). The overall prevalence of P. falciparum gametocyte carriage was 8.9% and varied from one school to another. No difference in the densities of gametocyte and asexual stages was found between capillary and venous blood. Attempts to perform DMFAs with capillary blood failed. Plasmodium falciparum malaria parasite densities do not differ between capillary and venous blood in asymptomatic subjects for either gametocyte or trophozoite stages. This finding suggests that the blood source should not interfere with transmission efficiency in DMFAs.

  12. A Bone Sample Containing a Bone Graft Substitute Analyzed by Correlating Density Information Obtained by X-ray Micro Tomography with Compositional Information Obtained by Raman Microscopy

    Directory of Open Access Journals (Sweden)

    Johann Charwat-Pessler

    2015-06-01

Full Text Available The ability of bone graft substitutes to promote new bone formation has been increasingly used in the medical field to repair skeletal defects or to replace missing bone in a broad range of applications in dentistry and orthopedics. A common way to assess such materials is via micro computed tomography (µ-CT), through the density information provided by the absorption of X-rays. Information on the chemical composition of a material can be obtained via Raman spectroscopy. By investigating a bone sample from miniature pigs containing the bone graft substitute Bio Oss®, we aimed to assess to what extent the density information gained by µ-CT imaging matches the chemical information provided by Raman spectroscopic imaging. Raman images and Raman correlation maps of the investigated sample were used to generate a Raman-based segmented image by means of an agglomerative, hierarchical cluster analysis. The resulting segments, showing chemically related areas, were subsequently compared with the µ-CT image by means of a one-way ANOVA. We found that, to a certain extent, typical gray-level values (and the related histograms) in the µ-CT image can be reliably related to specific segments within the image resulting from the cluster analysis.
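The processing described in this record — agglomerative clustering of Raman spectra into chemically related segments, then a one-way ANOVA against the co-registered µ-CT gray values — can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' actual pipeline; the spectra, gray values, and cluster count are invented for the example.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Toy stand-ins: per-pixel Raman spectra from 3 chemically distinct
# phases, plus the co-registered micro-CT gray value of each pixel.
centers = rng.standard_normal((3, 50))            # 3 phase spectra, 50 bands
labels_true = rng.integers(0, 3, 300)             # 300 pixels
spectra = centers[labels_true] + 0.1 * rng.standard_normal((300, 50))
gray = np.array([100.0, 160.0, 220.0])[labels_true] + 5 * rng.standard_normal(300)

# Agglomerative (Ward) clustering of the spectra -> chemical segments.
Z = linkage(spectra, method="ward")
segments = fcluster(Z, t=3, criterion="maxclust")

# One-way ANOVA: do mean CT gray levels differ between the segments?
groups = [gray[segments == s] for s in np.unique(segments)]
F, p = f_oneway(*groups)
```

With well-separated phases, as here, the ANOVA rejects equality of the segment-wise gray-level means, mirroring the paper's finding that µ-CT gray levels relate to spectroscopic segments.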

  13. Detailed deposition density maps constructed by large-scale soil sampling for gamma-ray emitting radioactive nuclides from the Fukushima Dai-ichi Nuclear Power Plant accident.

    Science.gov (United States)

    Saito, Kimiaki; Tanihata, Isao; Fujiwara, Mamoru; Saito, Takashi; Shimoura, Susumu; Otsuka, Takaharu; Onda, Yuichi; Hoshi, Masaharu; Ikeuchi, Yoshihiro; Takahashi, Fumiaki; Kinouchi, Nobuyuki; Saegusa, Jun; Seki, Akiyuki; Takemiya, Hiroshi; Shibata, Tokushi

    2015-01-01

Soil deposition density maps of gamma-ray emitting radioactive nuclides from the Fukushima Dai-ichi Nuclear Power Plant (NPP) accident were constructed on the basis of results from large-scale soil sampling. In total, 10,915 soil samples were collected at 2168 locations. Gamma rays emitted from the samples were measured by Ge detectors and analyzed using a reliable unified method. The determined radioactivity was corrected to that of June 14, 2011 by considering the intrinsic decay constant of each nuclide. Finally, the deposition maps were created for (134)Cs, (137)Cs, (131)I, (129m)Te and (110m)Ag. The (134)Cs/(137)Cs radioactivity ratio was almost constant at 0.91 regardless of the soil sampling location. The (131)I/(137)Cs and (129m)Te/(137)Cs radioactivity ratios were relatively high in the regions south of the Fukushima NPP site. Effective doses for 50 y after the accident were evaluated for external and inhalation exposures due to the observed radioactive nuclides. The radiation doses from radioactive cesium were found to be much higher than those from the other radioactive nuclides. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
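The decay correction to a common reference date follows the standard exponential law, A_ref = A_meas · exp(λ·Δt) with λ = ln 2 / T½. A minimal sketch (the half-lives are standard nuclear data, not values from the record, and the function name is illustrative):

```python
import math

# Half-lives in days of the nuclides mapped in the study
# (standard nuclear data, not taken from the abstract itself).
HALF_LIFE_DAYS = {
    "Cs-134": 754.3,    # ~2.065 y
    "Cs-137": 11018.3,  # ~30.17 y
    "I-131": 8.02,
    "Te-129m": 33.6,
    "Ag-110m": 249.8,
}

def decay_correct(activity_bq, nuclide, elapsed_days):
    """Back-correct a measured activity to the reference date:
    A_ref = A_meas * exp(+lambda * t), with lambda = ln(2) / T_half."""
    lam = math.log(2.0) / HALF_LIFE_DAYS[nuclide]
    return activity_bq * math.exp(lam * elapsed_days)

# An I-131 activity measured ten half-lives (80.2 days) after the
# reference date must be scaled up by 2**10 = 1024.
corrected = decay_correct(1.0, "I-131", 10 * 8.02)
```

The short-lived nuclides ((131)I in particular) require large correction factors, which is why the measurement-to-reference interval dominates their uncertainty.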

  14. Energy Preserved Sampling for Compressed Sensing MRI

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2014-01-01

Full Text Available The sampling patterns, cost functions, and reconstruction algorithms play important roles in optimizing compressed sensing magnetic resonance imaging (CS-MRI). Simple random sampling patterns do not take into account the energy distribution in k-space and result in suboptimal reconstruction of MR images. Therefore, a variety of variable density (VD) based sampling patterns have been developed. To improve on these, we propose a novel energy preserving sampling (ePRESS) method. In addition, we improve the cost function by introducing phase correction and a region-of-support matrix, and we propose an iterative thresholding algorithm (ITA) to solve the improved cost function. We evaluate the proposed ePRESS sampling method, improved cost function, and ITA reconstruction algorithm on a 2D digital phantom and 2D in vivo MR brains of healthy volunteers. These assessments demonstrate that the proposed ePRESS method performs better than VD, POWER, and BKO; the improved cost function achieves better reconstruction quality than the conventional cost function; and the ITA is faster than SISTA and is competitive with FISTA in terms of computation time.
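As a concrete illustration of reconstruction from undersampled k-space by iterative thresholding, the sketch below runs plain ISTA (iterative soft-thresholding) on a 1-D signal that is sparse in the image domain. It is a generic stand-in for the paper's ITA, whose exact update rule, sampling pattern, and cost function are not given in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128

# Ground truth: sparse in the image domain, so soft-thresholding acts
# directly on the image estimate (no extra sparsifying transform).
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.uniform(1.0, 2.0, 5)

# Random undersampling mask keeping ~40% of k-space samples.
mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, 51, replace=False)] = True

def A(x):   # undersampled orthonormal Fourier operator
    return np.fft.fft(x, norm="ortho")[mask]

def At(k):  # its adjoint: zero-fill missing samples, inverse FFT
    full = np.zeros(n, dtype=complex)
    full[mask] = k
    return np.fft.ifft(full, norm="ortho")

y = A(x_true)  # simulated k-space measurements

# ISTA: gradient step on ||Ax - y||^2 / 2, then complex soft-thresholding.
lam = 0.01
x = np.zeros(n, dtype=complex)
for _ in range(300):
    x = x + At(y - A(x))
    mag = np.abs(x)
    x = x * np.maximum(1.0 - lam / np.maximum(mag, 1e-12), 0.0)

err = np.linalg.norm(np.abs(x) - x_true) / np.linalg.norm(x_true)
```

Because the rows of the undersampled orthonormal Fourier operator form part of a unitary matrix, a unit gradient step size is stable, and the 5-sparse signal is recovered from 51 of 128 samples to within the small soft-thresholding bias.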

  15. The Influence of Sampling Density on Bayesian Age-Depth Models and Paleoclimatic Reconstructions - Lessons Learned from Lake Titicaca - Bolivia/Peru

    Science.gov (United States)

    Salenbien, W.; Baker, P. A.; Fritz, S. C.; Guedron, S.

    2014-12-01

Lake Titicaca is one of the most important archives of paleoclimate in tropical South America, and prior studies have elucidated patterns of climate variation at varied temporal scales over the past 0.5 Ma. Yet slow sediment accumulation rates in the main deeper basin of the lake have precluded analysis of the lake's most recent history at high resolution. To obtain a paleoclimate record of the last few millennia at multi-decadal resolution, we obtained five short cores, ranging from 139 to 181 cm in length, from the shallower Wiñaymarka sub-basin of Lake Titicaca, where sedimentation rates are higher than in the lake's main basin. Selected cores have been analyzed for their geochemical signature by scanning XRF, diatom stratigraphy, sedimentology, and 14C age dating. A total of 72 samples were 14C-dated using a Gas Ion Source automated high-throughput method for carbonate samples (mainly Littoridina sp. and Taphius montanus gastropod shells) at NOSAMS (Woods Hole Oceanographic Institution) with an analytical precision better than 2%. The method has lower analytical precision than traditional AMS radiocarbon dating, but its lower cost enables analysis of a larger number of samples, and the error associated with the lower precision is relatively small for younger samples (< ~8,000 years). A 172-cm-long core was divided into centimeter-long sections, and 47 14C dates were obtained from 1-cm intervals, averaging one date every 3-4 cm. The other cores were radiocarbon dated with a sparser sampling density that focused on visual unconformities and shell beds. The high-resolution radiocarbon analysis reveals complex sedimentation patterns in visually continuous sections, with abundant indicators of bioturbated or reworked sediments and periods of very rapid sediment accumulation. These features are not evident with the sparser sampling strategy but have significant implications for reconstructing past lake levels and paleoclimatic history.

  16. Long T2 suppression in native lung 3-D imaging using k-space reordered inversion recovery dual-echo ultrashort echo time MRI.

    Science.gov (United States)

    Gai, Neville D; Malayeri, Ashkan A; Bluemke, David A

    2017-08-01

Long T2 species can interfere with visualization of short T2 tissues. For example, visualization of lung parenchyma can be hindered by breathing artifacts, primarily from fat in the chest wall. The purpose of this work was to design and evaluate a scheme for long T2 species suppression in lung parenchyma imaging using 3-D inversion recovery dual-echo ultrashort echo time (IR-DUTE) imaging with a k-space reordering scheme for artifact suppression. A hyperbolic secant (HS) inversion pulse was evaluated for different tissues (T1/T2). Bloch simulations were performed with the inversion pulse followed by segmented UTE acquisition. The point spread function (PSF) was simulated for a standard interleaved acquisition order and a modulo-2 forward-reverse acquisition order. Phantom and in vivo images (eight volunteers) were acquired with both acquisition orders. Contrast-to-noise ratio (CNR) was evaluated in in vivo images before and after introduction of the long T2 suppression scheme. The PSF as well as phantom and in vivo images demonstrated reduction in artifacts arising from k-space modulation when the reordering scheme was used. CNR measured between lung and fat and between lung and muscle increased from -114 and -148.5 to +12.5 and +2.8, respectively, after use of the IR-DUTE sequence. A paired t test between the CNRs obtained from UTE and IR-DUTE showed a significant positive change (p < 0.05 for lung-fat CNR and p = 0.03 for lung-muscle CNR). Full 3-D lung parenchyma imaging with improved positive contrast between lung and other long T2 tissue types can be achieved robustly in a clinically feasible time using IR-DUTE with image subtraction when segmented radial acquisition with k-space reordering is employed.
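The link between acquisition order and artifacts can be illustrated by mapping a signal-decay curve onto a k-space weighting and computing the resulting point spread function (PSF). The sketch below compares a sequential segment order (periodic, discontinuous weighting, hence coherent ghost sidelobes) with an interleaved order (smooth staircase weighting, hence mainly blur). It is a generic demonstration of the principle, with an invented decay curve, not a reproduction of the paper's modulo-2 forward-reverse radial scheme:

```python
import numpy as np

n, nseg = 256, 8
L = n // nseg                         # k-space lines per segment (shot)
s = np.exp(-np.arange(L) / 10.0)      # hypothetical signal decay within a shot

# Sequential order: shot j fills lines j*L .. (j+1)*L-1, so the decay
# restarts at every segment boundary -> periodic sawtooth weighting.
w_seq = np.tile(s, nseg)

# Interleaved order: shot j fills lines j, j+nseg, j+2*nseg, ..., so
# line k carries weight s[k // nseg] -> smooth monotone staircase.
w_int = np.repeat(s, nseg)

def ghost_level(w):
    """Largest PSF sidelobe well away from the main lobe (pixel >= 16)."""
    psf = np.abs(np.fft.ifft(w))
    return psf[16 : n - 16].max()

g_seq, g_int = ghost_level(w_seq), ghost_level(w_int)
```

The periodic weighting concentrates its error into discrete ghost peaks at multiples of (matrix size)/(segment length), while the smooth weighting only broadens the main lobe; this is the kind of modulation artifact the paper's reordering suppresses.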

  17. Is multidetector CT-based bone mineral density and quantitative bone microstructure assessment at the spine still feasible using ultra-low tube current and sparse sampling?

    Energy Technology Data Exchange (ETDEWEB)

    Mei, Kai; Kopp, Felix K.; Schwaiger, Benedikt J.; Gersing, Alexandra S.; Sauter, Andreas; Muenzel, Daniela; Rummeny, Ernst J. [Klinikum rechts der Isar, Technische Universitaet Muenchen, Department of Diagnostic and Interventional Radiology, Munich (Germany); Bippus, Rolf [Research Laboratories, Philips GmbH Innovative Technologies, Hamburg (Germany); Koehler, Thomas [Research Laboratories, Philips GmbH Innovative Technologies, Hamburg (Germany); Technische Universitaet Muenchen, TUM Institute for Advanced Studies, Garching (Germany); Fehringer, Andreas [Technische Universitaet Muenchen, Lehrstuhl fuer Biomedizinische Physik, Garching (Germany); Pfeiffer, Franz [Klinikum rechts der Isar, Technische Universitaet Muenchen, Department of Diagnostic and Interventional Radiology, Munich (Germany); Technische Universitaet Muenchen, TUM Institute for Advanced Studies, Garching (Germany); Technische Universitaet Muenchen, Lehrstuhl fuer Biomedizinische Physik, Garching (Germany); Kirschke, Jan S. [Klinikum rechts der Isar, Technische Universitaet Muenchen, Section of Diagnostic and Interventional Neuroradiology, Munich (Germany); Noel, Peter B. [Klinikum rechts der Isar, Technische Universitaet Muenchen, Department of Diagnostic and Interventional Radiology, Munich (Germany); Technische Universitaet Muenchen, Lehrstuhl fuer Biomedizinische Physik, Garching (Germany); Baum, Thomas [Klinikum rechts der Isar, Technische Universitaet Muenchen, Department of Diagnostic and Interventional Radiology, Munich (Germany); Klinikum rechts der Isar, Technische Universitaet Muenchen, Section of Diagnostic and Interventional Neuroradiology, Munich (Germany)

    2017-12-15

Osteoporosis diagnosis using multidetector CT (MDCT) is limited by relatively high radiation exposure. We investigated the effect of simulated ultra-low-dose protocols on in-vivo bone mineral density (BMD) and quantitative trabecular bone assessment. Institutional review board approval was obtained. Twelve subjects with osteoporotic vertebral fractures and 12 age- and gender-matched controls undergoing routine thoracic and abdominal MDCT were included (average effective dose: 10 mSv). Ultra-low radiation examinations were achieved by simulating lower tube currents and sparse sampling at 50%, 25% and 10% of the original dose. BMD and trabecular bone parameters were extracted in T10-L5. Except for BMD measurements in sparse sampling data, absolute values of all parameters derived from ultra-low-dose data were significantly different from those derived from original dose images (p<0.05). BMD, apparent bone fraction and trabecular thickness were still consistently lower in subjects with than in those without fractures (p<0.05). In ultra-low-dose scans, BMD and microstructure parameters were able to differentiate subjects with and without vertebral fractures, suggesting that osteoporosis diagnosis is feasible. However, absolute values differed from the original values. BMD from sparse sampling appeared to be more robust. This dose dependency of parameters should be considered for future clinical use. (orig.)

  18. Is multidetector CT-based bone mineral density and quantitative bone microstructure assessment at the spine still feasible using ultra-low tube current and sparse sampling?

    International Nuclear Information System (INIS)

    Mei, Kai; Kopp, Felix K.; Schwaiger, Benedikt J.; Gersing, Alexandra S.; Sauter, Andreas; Muenzel, Daniela; Rummeny, Ernst J.; Bippus, Rolf; Koehler, Thomas; Fehringer, Andreas; Pfeiffer, Franz; Kirschke, Jan S.; Noel, Peter B.; Baum, Thomas

    2017-01-01

Osteoporosis diagnosis using multidetector CT (MDCT) is limited by relatively high radiation exposure. We investigated the effect of simulated ultra-low-dose protocols on in-vivo bone mineral density (BMD) and quantitative trabecular bone assessment. Institutional review board approval was obtained. Twelve subjects with osteoporotic vertebral fractures and 12 age- and gender-matched controls undergoing routine thoracic and abdominal MDCT were included (average effective dose: 10 mSv). Ultra-low radiation examinations were achieved by simulating lower tube currents and sparse sampling at 50%, 25% and 10% of the original dose. BMD and trabecular bone parameters were extracted in T10-L5. Except for BMD measurements in sparse sampling data, absolute values of all parameters derived from ultra-low-dose data were significantly different from those derived from original dose images (p<0.05). BMD, apparent bone fraction and trabecular thickness were still consistently lower in subjects with than in those without fractures (p<0.05). In ultra-low-dose scans, BMD and microstructure parameters were able to differentiate subjects with and without vertebral fractures, suggesting that osteoporosis diagnosis is feasible. However, absolute values differed from the original values. BMD from sparse sampling appeared to be more robust. This dose dependency of parameters should be considered for future clinical use. (orig.)

  19. Controlling T2 blurring in 3D RARE arterial spin labeling acquisition through optimal combination of variable flip angles and k-space filtering.

    Science.gov (United States)

    Zhao, Li; Chang, Ching-Di; Alsop, David C

    2018-02-09

To improve the SNR efficiency and reduce the T2 blurring of 3D rapid acquisition with relaxation enhancement stack-of-spiral arterial spin labeling imaging by using variable refocusing flip angles and k-space filtering. An algorithm for determining the optimal combination of variable flip angles and filtering correction is proposed. The flip angles are designed using extended phase graph physical simulations in an analytical and global optimization framework, with an optional constraint on deposited power. Optimal designs for correcting to Hann and Fermi window functions were compared with conventional constant-amplitude and variable-flip-angle-only designs on 6 volunteers. With the Fermi window correction, the proposed optimal designs provided 39.8% and 27.3% higher SNR (P < 0.05) than the constant-amplitude and variable-flip-angle-only designs, respectively. Even when power deposition was limited to 50% of that of the constant-amplitude design, the proposed method still provided superior SNR (P < 0.05). The optimal variable flip angles can be derived as the output of an optimization problem. The combined design of variable flip angles and k-space filtering provided superior SNR to designs primarily emphasizing either approach singly. © 2018 International Society for Magnetic Resonance in Medicine.

  20. A comparison of LBGs, DRGs, and BzK galaxies: their contribution to the stellar mass density in the GOODS-MUSIC sample

    Science.gov (United States)

    Grazian, A.; Salimbeni, S.; Pentericci, L.; Fontana, A.; Nonino, M.; Vanzella, E.; Cristiani, S.; de Santis, C.; Gallozzi, S.; Giallongo, E.; Santini, P.

    2007-04-01

Context: The classification scheme for high redshift galaxies is complex at the present time, with simple colour-selection criteria (i.e. EROs, IEROs, LBGs, DRGs, BzKs) resulting in ill-defined properties for the stellar mass and star formation rate of these distant galaxies. Aims: The goal of this work is to investigate the properties of different classes of high-z galaxies, focusing in particular on the stellar masses of LBGs, DRGs, and BzKs, in order to derive their contribution to the total mass budget of the distant Universe. Methods: We used the GOODS-MUSIC catalog, containing ~3000 Ks-selected (~10 000 z-selected) galaxies with multi-wavelength coverage extending from the U band to the Spitzer 8 μm band, with spectroscopic or accurate photometric redshifts. We selected samples of BM/BX/LBGs, DRGs, and BzK galaxies to discuss the overlap and the limitations of these criteria, which can be overridden by a selection criterion based on physical parameters. We then measured the stellar masses of these galaxies and computed the stellar mass density (SMD) for the different samples up to redshift ≃ 4. Results: We show that the BzK-PE criterion is not optimal for selecting early type galaxies at the faint end. On the other hand, BzK-SF is highly contaminated by passively evolving galaxies at red z-Ks colours. We find that LBGs and DRGs contribute almost equally to the global SMD at z ≥ 2 and, in general, that star-forming galaxies form a substantial fraction of the universal SMD. Passively evolving galaxies show a strong negative density evolution from redshift 2 to 3, indicating that we are witnessing the epoch of mass assembly of such objects. Finally, we have indications that by pushing the selection to deeper magnitudes, the contribution of less massive DRGs could overtake that of LBGs. Deeper surveys, like the HUDF, are required to confirm this suggestion.

  1. View-sharing in keyhole imaging: Partially compressed central k-space acquisition in time-resolved MRA at 3.0 T

    Energy Technology Data Exchange (ETDEWEB)

    Hadizadeh, Dariusch R., E-mail: Dariusch.Hadizadeh@ukb.uni-bonn.de [University of Bonn, Department of Radiology, Sigmund-Freud-Strasse 25, 53127 Bonn (Germany); Gieseke, Juergen [University of Bonn, Department of Radiology, Sigmund-Freud-Strasse 25, 53127 Bonn (Germany); Philips Healthcare, Best (Netherlands); Beck, Gabriele; Geerts, Liesbeth [Philips Healthcare, Best (Netherlands); Kukuk, Guido M. [University of Bonn, Department of Radiology, Sigmund-Freud-Strasse 25, 53127 Bonn (Germany); Bostroem, Azize [Department of Neurosurgery, Sigmund-Freud-Strasse 25, 53127 Bonn, Deutschland (Germany); Urbach, Horst; Schild, Hans H.; Willinek, Winfried A. [University of Bonn, Department of Radiology, Sigmund-Freud-Strasse 25, 53127 Bonn (Germany)

    2011-11-15

Introduction: Time-resolved contrast-enhanced magnetic resonance (MR) angiography (CEMRA) of the intracranial vasculature has proved its clinical value for the evaluation of cerebral vascular disease in cases where both flow hemodynamics and morphology are important. The purpose of this study was to evaluate a combination of view-sharing with keyhole imaging to increase the spatial and temporal resolution of time-resolved CEMRA at 3.0 T. Methods: Alternating view-sharing was combined with randomly segmented k-space ordering, keyhole imaging, partial Fourier and parallel imaging (4DkvsMRA). 4DkvsMRA was evaluated using varying compression factors (80-100), resulting in spatial resolutions ranging from (1.1 x 1.1 x 1.4) to (0.96 x 0.96 x 0.95) mm³ and temporal resolutions ranging from 586 down to 288 ms per dynamic scan, in three protocols in 10 healthy volunteers and seven patients (17 subjects). DSA correlation was available in four patients with cerebral arteriovenous malformations (cAVMs) and one patient with cerebral telangiectasia. Results: 4DkvsMRA was successfully performed in all subjects and showed clear depiction of arterial and venous phases with diagnostic image quality. At the maximum view-sharing compression factor (=100), a 'flickering' artefact was observed. Conclusion: View-sharing in keyhole imaging allows for increased spatial and temporal resolution in time-resolved MRA.

  2. Comprehensive quantification of signal-to-noise ratio and g-factor for image-based and k-space-based parallel imaging reconstructions.

    Science.gov (United States)

    Robson, Philip M; Grant, Aaron K; Madhuranthakam, Ananth J; Lattanzi, Riccardo; Sodickson, Daniel K; McKenzie, Charles A

    2008-10-01

    Parallel imaging reconstructions result in spatially varying noise amplification characterized by the g-factor, precluding conventional measurements of noise from the final image. A simple Monte Carlo based method is proposed for all linear image reconstruction algorithms, which allows measurement of signal-to-noise ratio and g-factor and is demonstrated for SENSE and GRAPPA reconstructions for accelerated acquisitions that have not previously been amenable to such assessment. Only a simple "prescan" measurement of noise amplitude and correlation in the phased-array receiver, and a single accelerated image acquisition are required, allowing robust assessment of signal-to-noise ratio and g-factor. The "pseudo multiple replica" method has been rigorously validated in phantoms and in vivo, showing excellent agreement with true multiple replica and analytical methods. This method is universally applicable to the parallel imaging reconstruction techniques used in clinical applications and will allow pixel-by-pixel image noise measurements for all parallel imaging strategies, allowing quantitative comparison between arbitrary k-space trajectories, image reconstruction, or noise conditioning techniques. (c) 2008 Wiley-Liss, Inc.
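The "pseudo multiple replica" idea — feed many synthetic, correlated noise realizations through the linear reconstruction and take the pixelwise standard deviation — can be sketched with a toy linear operator. The operator W, coil covariance, and sizes below are invented stand-ins, not the SENSE/GRAPPA machinery of the paper; the point is that the Monte Carlo noise map matches the analytic propagation W Σ Wᵀ:

```python
import numpy as np

rng = np.random.default_rng(1)
nc, nk, npix = 4, 32, 16        # coils, k-space samples, image pixels (toy sizes)

# Arbitrary linear reconstruction operator (a stand-in for any linear
# parallel-imaging reconstruction such as SENSE or GRAPPA).
W = rng.standard_normal((npix, nc * nk)) / np.sqrt(nc * nk)

# "Prescan" receiver noise covariance: correlated across coils.
M = rng.standard_normal((nc, nc))
psi = M @ M.T + nc * np.eye(nc)     # symmetric positive definite
L = np.linalg.cholesky(psi)

# Analytic pixel noise: sqrt(diag(W Sigma W^T)) with Sigma = kron(psi, I),
# since samples are independent and coils share covariance psi.
Sigma = np.kron(psi, np.eye(nk))
std_analytic = np.sqrt(np.diag(W @ Sigma @ W.T))

# Pseudo multiple replica: reconstruct many synthetic noise-only scans.
n_rep = 20000
noise = rng.standard_normal((n_rep, nc, nk))          # unit white noise
noise = np.einsum("ij,rjk->rik", L, noise)            # impose coil covariance
recons = noise.reshape(n_rep, nc * nk) @ W.T          # run the reconstruction
std_mc = recons.std(axis=0)                           # pixelwise noise map
```

Dividing such a noise map for an accelerated reconstruction by the map for the unaccelerated case (and by √R) gives the pixelwise g-factor described in the record.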

  3. View-sharing in keyhole imaging: Partially compressed central k-space acquisition in time-resolved MRA at 3.0 T

    International Nuclear Information System (INIS)

    Hadizadeh, Dariusch R.; Gieseke, Juergen; Beck, Gabriele; Geerts, Liesbeth; Kukuk, Guido M.; Bostroem, Azize; Urbach, Horst; Schild, Hans H.; Willinek, Winfried A.

    2011-01-01

Introduction: Time-resolved contrast-enhanced magnetic resonance (MR) angiography (CEMRA) of the intracranial vasculature has proved its clinical value for the evaluation of cerebral vascular disease in cases where both flow hemodynamics and morphology are important. The purpose of this study was to evaluate a combination of view-sharing with keyhole imaging to increase the spatial and temporal resolution of time-resolved CEMRA at 3.0 T. Methods: Alternating view-sharing was combined with randomly segmented k-space ordering, keyhole imaging, partial Fourier and parallel imaging (4DkvsMRA). 4DkvsMRA was evaluated using varying compression factors (80-100), resulting in spatial resolutions ranging from (1.1 x 1.1 x 1.4) to (0.96 x 0.96 x 0.95) mm³ and temporal resolutions ranging from 586 down to 288 ms per dynamic scan, in three protocols in 10 healthy volunteers and seven patients (17 subjects). DSA correlation was available in four patients with cerebral arteriovenous malformations (cAVMs) and one patient with cerebral telangiectasia. Results: 4DkvsMRA was successfully performed in all subjects and showed clear depiction of arterial and venous phases with diagnostic image quality. At the maximum view-sharing compression factor (=100), a 'flickering' artefact was observed. Conclusion: View-sharing in keyhole imaging allows for increased spatial and temporal resolution in time-resolved MRA.

  4. Doppler reflectometry for the investigation of poloidally propagating density perturbations

    International Nuclear Information System (INIS)

    Hirsch, M.; Baldzuhn, J.; Kurzan, B.; Holzhauer, E.

    1999-01-01

A modification of microwave reflectometry is discussed in which the direction of observation is tilted with respect to the normal of the reflecting surface. The experiment resembles a scattering measurement, with a finite resolution in k-space, but retains the radial localization of reflectometry. The observed poloidal wavenumber is selected through Bragg's condition via the tilt angle, and the resolution in k-space is determined by the antenna pattern. From the Doppler shift of the reflected wave, the poloidal propagation velocity of density perturbations is obtained. The diagnostic capabilities of Doppler reflectometry are investigated using full wave code calculations. The method offers the possibility to observe changes in the poloidal propagation velocity of density perturbations and their radial shear with a temporal resolution of about 10 μs. (authors)
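The wavenumber selection and velocity measurement described in this record follow two simple relations: the Bragg condition k⊥ = 2 k0 sin θ set by the tilt angle, and the Doppler shift 2π f_D = k⊥ v⊥. A minimal sketch — the 60 GHz probing frequency, 10° tilt, and 500 kHz shift are example values, not taken from the text, and the vacuum wavenumber is assumed:

```python
import math

c = 299_792_458.0               # speed of light, m/s

def probed_wavenumber(freq_hz, tilt_deg):
    """Bragg selection: k_perp = 2 * k0 * sin(theta) for a probing beam
    tilted by theta from normal incidence (vacuum k0 assumed)."""
    k0 = 2.0 * math.pi * freq_hz / c
    return 2.0 * k0 * math.sin(math.radians(tilt_deg))

def doppler_velocity(f_doppler_hz, k_perp):
    """Perpendicular velocity from the Doppler shift: 2*pi*f_D = k_perp * v."""
    return 2.0 * math.pi * f_doppler_hz / k_perp

# Example: a 60 GHz channel tilted by 10 degrees probes k_perp ~ 4.4 cm^-1;
# a 500 kHz Doppler shift then corresponds to v_perp of a few km/s.
k = probed_wavenumber(60e9, 10.0)
v = doppler_velocity(500e3, k)
```

Scanning the tilt angle thus scans the probed k⊥, which is how the diagnostic builds up the perpendicular wavenumber dependence of the turbulence.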

  5. The relation between fine root density and proximity of stems in closed Douglas-fir plantations on homogen[e]ous sandy soils: implications for sampling design

    NARCIS (Netherlands)

    Olsthoorn, A.F.M.; Klap, J.M.; Oude Voshaar, J.H.

    1999-01-01

    Studies have been carried out in two fully stocked, fast growing Douglas-fir plantations of the Dutch ACIFORN project in three consecutive years, to obtain information on fine root densities (Olsthoorn 1991). For the present paper, data collected in early summer 1987 were used to study the relation

  6. Final report on EUROMET key comparison EUROMET.M.D-K2 (EUROMET 627) "Comparison of density determinations of liquid samples"

    Science.gov (United States)

    Bettin, Horst; Heinonen, Martti; Gosset, André; Zelenka, Zoltán; Lorefice, Salvatore; Hellerud, Kristen; Durlik, Hanna; Jordaan, Werner; Field, Ireen

    2016-01-01

The results of the key comparison EUROMET 627 (EUROMET.M.D-K2) are presented. This project covered the density measurements of three liquids: dodecane, water and an oil of high viscosity, measured at 15 °C, 20 °C and 40 °C. Seven European metrology laboratories and the South African laboratory CSIR-NML (now: NMISA) measured the densities at atmospheric pressure by hydrostatic weighing of solid density standards between 04 October 2001 and 18 December 2001. The stability and homogeneity of the liquids were investigated by the pilot laboratory PTB. The results generally show good agreement among the participants. Only for the simple Mohr-Westphal balances do the uncertainties seem to be underestimated by the laboratories. Furthermore, the measurement of the high-viscosity oil was difficult for some laboratories. Nevertheless, the five laboratories PTB/DE, BNM/FR (now: LNE/FR), OMH/HU (now: MKEH/HU), IMGC/IT (now: INRIM/IT) and GUM/PL agree with each other for stated uncertainties of 0.05 kg/m3 or less. This satisfies the current needs of customers who wish to calibrate or check liquid density measuring instruments such as oscillation-type density meters. No reference values were calculated, since the subsequent CCM key comparison CCM.D-K2 had a different scope and the EUROMET 627 comparison was soon superseded by the EURAMET 1019 (EURAMET.M.D-K2) comparison. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
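Hydrostatic weighing determines the liquid density from the apparent mass loss of a solid density standard of known volume (Archimedes' principle). A minimal sketch, ignoring the air-buoyancy and thermal-expansion corrections a real comparison would apply; the sinker mass and volume are invented example values:

```python
def liquid_density(m_air_kg, m_in_liquid_kg, sinker_volume_m3):
    """rho_L = (m_air - m_in_liquid) / V: the apparent mass loss of the
    immersed sinker equals the mass of the liquid it displaces."""
    return (m_air_kg - m_in_liquid_kg) / sinker_volume_m3

# Example: a 0.233 kg sinker of volume 1e-4 m^3, weighed in water at
# 20 degC, loses 0.0998207 kg of apparent mass -> rho ~ 998.207 kg/m^3.
rho = liquid_density(0.233, 0.233 - 0.0998207, 1e-4)
```

Reaching the 0.05 kg/m³ level reported by the best participants requires the sinker volume itself to be known to a few parts in 10⁵, which is why calibrated solid density standards are central to the comparison.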

  7. Prediction of mucin-type O-glycosylation sites in mammalian proteins using the composition of k-spaced amino acid pairs

    Directory of Open Access Journals (Sweden)

    Sheng Zhi-Ya

    2008-02-01

Full Text Available Abstract Background As one of the most common protein post-translational modifications, glycosylation is involved in a variety of important biological processes. Computational identification of glycosylation sites in protein sequences becomes increasingly important in the post-genomic era. A new encoding scheme was employed to improve the prediction of mucin-type O-glycosylation sites in mammalian proteins. Results A new protein bioinformatics tool, CKSAAP_OGlySite, was developed to predict mucin-type O-glycosylation serine/threonine (S/T) sites in mammalian proteins. Using the composition of k-spaced amino acid pairs (CKSAAP) based encoding scheme, the proposed method was trained and tested on a new and stringent O-glycosylation dataset with the assistance of a Support Vector Machine (SVM). When the ratio of O-glycosylation to non-glycosylation sites in the training datasets was set to 1:1, 10-fold cross-validation tests showed that the proposed method yielded a high accuracy of 83.1% and 81.4% in predicting O-glycosylated S and T sites, respectively. Based on the same datasets, CKSAAP_OGlySite resulted in a higher accuracy than the conventional binary encoding based method (about +5.0%). When trained and tested on 1:5 datasets, the CKSAAP encoding showed a more significant improvement than the binary encoding. We also merged the training datasets of S and T sites and integrated the prediction of S and T sites into one single predictor (i.e. the S+T predictor). Either in 1:1 or 1:5 datasets, the performance of this S+T predictor was always slightly better than that of predictors where S and T sites were independently predicted, suggesting that the molecular recognition of O-glycosylated S/T sites may be similar and that the increase of the S+T predictor's accuracy may be a result of the expanded training datasets. Moreover, CKSAAP_OGlySite was also shown to have better performance when benchmarked against two existing predictors. Conclusion Because of CKSAAP
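The CKSAAP encoding counts, for each gap size k, how often each of the 400 ordered amino-acid pairs occurs with exactly k residues between its members, normalised per window. A minimal sketch of the feature extraction — the window string and k_max are illustrative, and the published tool may order or normalise its features differently:

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
PAIRS = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]  # 400 ordered pairs

def cksaap(window, k_max=5):
    """Composition of k-spaced amino-acid pairs: for each gap k in 0..k_max,
    count residue pairs (x, y) separated by exactly k intervening residues
    and normalise by the number of such pair positions in the window."""
    features = []
    for k in range(k_max + 1):
        counts = dict.fromkeys(PAIRS, 0)
        n_pairs = len(window) - k - 1
        for i in range(n_pairs):
            pair = window[i] + window[i + k + 1]
            if pair in counts:            # skip non-standard residues
                counts[pair] += 1
        features.extend(counts[p] / n_pairs for p in PAIRS)
    return features

# Toy window; real windows flank a candidate S/T site. With k_max=2 this
# yields 3 * 400 = 1200 features.
feats = cksaap("AATSAPTS", k_max=2)
```

In "AATSAPTS" the zero-gap pair "TS" occurs twice among the seven adjacent pairs, so its k=0 feature is 2/7; the SVM in the record consumes such vectors directly.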

  8. Respiratory motion-resolved, self-gated 4D-MRI using Rotating Cartesian K-space (ROCK): Initial clinical experience on an MRI-guided radiotherapy system.

    Science.gov (United States)

    Han, Fei; Zhou, Ziwu; Du, Dongsu; Gao, Yu; Rashid, Shams; Cao, Minsong; Shaverdian, Narek; Hegde, John V; Steinberg, Michael; Lee, Percy; Raldow, Ann; Low, Daniel A; Sheng, Ke; Yang, Yingli; Hu, Peng

    2018-06-01

    To optimize and evaluate the respiratory motion-resolved, self-gated 4D-MRI using Rotating Cartesian K-space (ROCK-4D-MRI) method in a 0.35 T MRI-guided radiotherapy (MRgRT) system. The study included seven patients with abdominal tumors treated on the MRgRT system. ROCK-4D-MRI and 2D-CINE were performed immediately after one of the treatment fractions. Motion quantification based on 4D-MRI was compared with that based on 2D-CINE. The image quality of 4D-MRI was evaluated against 4D-CT. The gross tumor volumes (GTV) were defined on individual respiratory phases of both 4D-MRI and 4D-CT and compared for their variability over the respiratory cycle. The motion measurements based on 4D-MRI matched well with 2D-CINE, with differences of 1.04 ± 0.52 mm in the superior-inferior and 0.54 ± 0.21 mm in the anterior-posterior directions. The image quality scores of 4D-MRI were significantly higher than those of 4D-CT, with better tumor contrast (3.29 ± 0.76 vs. 1.86 ± 0.90) and fewer motion artifacts (3.57 ± 0.53 vs. 2.29 ± 0.95). The GTVs were more consistent in 4D-MRI than in 4D-CT, with significantly smaller GTV variability (9.31 ± 4.58% vs. 34.27 ± 23.33%). Our study demonstrated the clinical feasibility of using ROCK-4D-MRI to acquire high-quality, respiratory motion-resolved 4D-MRI in a low-field MRgRT system. The 4D-MRI images could provide accurate dynamic information for radiotherapy treatment planning. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Solidity of viscous liquids. IV. Density fluctuations

    DEFF Research Database (Denmark)

    Dyre, J. C.

    2006-01-01

    This paper is the fourth in a series exploring the physical consequences of the solidity of highly viscous liquids. It is argued that the two basic characteristics of a flow event (a jump between two energy minima in configuration space) are the local density change and the sum of all particle...... displacements. Based on this it is proposed that density fluctuations are described by a time-dependent Ginzburg-Landau equation with rates in k space of the form C+Dk^2 with D>>C a^2 where a is the average intermolecular distance. The inequality expresses a long-wavelength dominance of the dynamics which...... with Debye behavior at low frequencies and an omega^{−1/2} decay of the loss at high frequencies. Finally, a general formalism for the description of viscous liquid dynamics, which supplements the density dynamics by including stress fields, a potential energy field, and molecular orientational fields...

  10. Monitoring Pb in Aqueous Samples by Using Low Density Solvent on Air-Assisted Dispersive Liquid-Liquid Microextraction Coupled with UV-Vis Spectrophotometry.

    Science.gov (United States)

    Nejad, Mina Ghasemi; Faraji, Hakim; Moghimi, Ali

    2017-04-01

    In this study, air-assisted dispersive liquid-liquid microextraction (AA-DLLME) combined with UV-Vis spectrophotometry was developed for the pre-concentration, microextraction and determination of lead in aqueous samples. Optimization of the independent variables was carried out by chemometric methods in three steps. According to the screening and optimization study, 86 μL of 1-undecanol (extracting solvent), 12 pumping cycles of the syringe, pH 2.0, 0.0% salt and 0.1% DDTP (chelating agent) were chosen as the optimum conditions for the microextraction and determination of lead. Under the optimized conditions, R = 0.9994 and the linear range was 0.01-100 µg mL⁻¹. The LOD and LOQ were 3.4 and 11.6 ng mL⁻¹, respectively. The method was applied to the analysis of real water samples, such as tap, mineral, river and waste water.
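    The abstract reports an LOD and LOQ but not how they were derived. A common convention (an assumption here, not stated in the record) is LOD = 3.3·s/m and LOQ = 10·s/m, where m is the calibration slope and s an estimate of blank noise; the calibration data below are invented for illustration:

```python
import numpy as np

# hypothetical calibration data (absorbance vs. Pb concentration, ug/mL)
conc = np.array([0.01, 0.1, 1.0, 10.0, 50.0, 100.0])
absorbance = np.array([0.002, 0.011, 0.105, 1.04, 5.18, 10.4])

slope, intercept = np.polyfit(conc, absorbance, 1)
residuals = absorbance - (slope * conc + intercept)
s_blank = residuals.std(ddof=2)  # residual std. dev. as a blank-noise proxy

lod = 3.3 * s_blank / slope   # limit of detection
loq = 10.0 * s_blank / slope  # limit of quantification
print(f"LOD = {lod:.3g}, LOQ = {loq:.3g} (same units as conc)")
```

    Note the fixed 10/3.3 ratio between LOQ and LOD under this convention, consistent in spirit with the record's 11.6 vs. 3.4 ng mL⁻¹.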

  11. Consistency of genetic inheritance mode and heritability patterns of triglyceride vs. high density lipoprotein cholesterol ratio in two Taiwanese family samples

    Directory of Open Access Journals (Sweden)

    Yang Chi-Yu

    2003-04-01

    Full Text Available Abstract Background The triglyceride/HDL cholesterol ratio (TG/HDL-C) is considered a risk factor for cardiovascular events. Genetic components are important in controlling its variation in western countries, but the mode of inheritance and family aggregation patterns are still unknown in Asian-Pacific countries. This study, based on families recruited from the community and from hospital, aimed to investigate the mode of inheritance, heritability and shared environmental factors controlling TG/HDL-C. Results Two populations were sampled, one from community-based families (n = 988; 894 parent-offspring and 453 sibling pairs) and the other from hospital-based families (n = 1313; 76 parent-offspring and 52 sibling pairs). The hospital-based families had a higher mean age than the community-based families (54.7 vs. 34.0). Logarithmically transformed TG/HDL-C values, adjusted for age, gender and body mass index, were used for the genetic analyses. Significant parent-offspring and sibling correlations were found in both samples. The parent-offspring correlation coefficient was higher in the hospital-based families than in the community-based families. Genetic heritability was higher in the community-based families (0.338 ± 0.114, p = 0.002), but the common shared environmental factor was higher in the hospital-based families (0.203 ± 0.042, p Conclusion Variations of TG/HDL-C within the normal range are likely influenced by multiple factors, including environmental and genetic components. Greater genetic influence was demonstrated in the younger community-based families than in the older hospital-based families.

  12. Road density

    Data.gov (United States)

    U.S. Environmental Protection Agency — Road density is generally highly correlated with amount of developed land cover. High road densities usually indicate high levels of ecological disturbance. More...

  13. Analyzing the Effects of Climate Factors on Soybean Protein, Oil Contents, and Composition by Extensive and High-Density Sampling in China.

    Science.gov (United States)

    Song, Wenwen; Yang, Ruping; Wu, Tingting; Wu, Cunxiang; Sun, Shi; Zhang, Shouwei; Jiang, Bingjun; Tian, Shiyan; Liu, Xiaobing; Han, Tianfu

    2016-05-25

    From 2010 to 2013, 763 soybean samples were collected from an extensive area of China. The correlations between seed compositions and climate data were analyzed. The contents of crude protein and water-soluble protein, the total amount of protein plus oil, and most of the amino acids were positively correlated with accumulated temperature ≥15 °C (AT15) and mean daily temperature (MDT), but were negatively correlated with hours of sunshine (HS) and diurnal temperature range (DTR). The correlations of crude oil and most fatty acids with climate factors were opposite to those of crude protein. Crude oil content had a quadratic regression relationship with MDT, and a positive correlation between oil content and MDT was found when the daily temperature was … soybean protein and oil contents. The study illustrated the effects of climate factors on soybean protein and oil contents and proposed agronomic practices for improving soybean quality in different regions of China. The results provide a foundation for the regionalization of high-quality soybean production in China and similar regions of the world.

  14. Commissioning and modification of the low temperature scanning polarization microscope (TTSPM) and imaging of the local magnetic flux density distribution in superconducting niobium samples

    International Nuclear Information System (INIS)

    Gruenzweig, Matthias Sebastian Peter

    2014-01-01

    The dissertation is separated into two parts, which are presented in the following. Part I concerns the commissioning and modification of the "low-temperature scanning polarization microscope" designed in a previous dissertation by Stefan Guenon [1]. A scanning polarization microscope has certain advantages over conventional polarization microscopes. With a scanning polarization microscope it is easy to achieve a high illumination intensity, which is important for a high signal-to-noise ratio. Moreover, the confocal design of the scanning polarization microscope improves the resolution by a factor of 1.4. Normally, it is not necessary to post-process the images by the differential frame method to eliminate contrast of non-magnetic origin. In contrast to conventional polarization microscopes, the low-temperature scanning polarization microscope can image electronic transport properties via beam-induced voltage variation, in addition to the magneto-optical effects. In this dissertation, the performance of the scanning polarization microscope was demonstrated at room temperature as well as at low temperatures. The investigation of the polar Kerr effect was carried out with a BaFe12O19 test sample, whereas the measurements of the longitudinal Kerr effect were carried out with an in-plane magnetized acceleration sensor. Furthermore, an independent room-temperature setup for out-of-plane measurements in magnetic fields up to 1 Tesla was designed and implemented within the framework of a diploma thesis supervised by the author of this dissertation. Using this setup, experimental results were obtained on the interlayer exchange coupling between iron-terbium alloys (Fe1-xTbx) and cobalt-platinum multilayers (|Co/Pt|n). Indeed, it has been

  15. Densidade global de solos medida com anel volumétrico e por cachimbagem de terra fina seca ao ar Bulk density of soil samples measured in the field and through volume measurement of sieved soil

    Directory of Open Access Journals (Sweden)

    Bernardo Van Raij

    1989-01-01

    Full Text Available Em laboratórios de rotina de fertilidade do solo, a medida de quantidade de terra para análise é feita em volume, mediante utensílios chamados "cachimbos", que permitem medir volumes de terra. Admite-se que essas medidas reflitam a quantidade de terra existente em volume de solo similar em condições de campo. Essa hipótese foi avaliada neste trabalho, por doze amostras dos horizontes A e B de seis perfis de solos. A densidade em condições de campo foi avaliada por anel volumétrico e, no laboratório, por meio de cachimbos de diversos tamanhos. A cachimbagem revelou-se bastante precisa. Os valores de densidade global calculada variaram de 0,63 a 1,46g/cm³ para medidas de campo e de 0,91 a 1,33g/cm³ para medidas com cachimbos. Portanto, a medida de laboratório subestimou valores altos de densidade e deu resultados mais elevados para valores de campo mais baixos.In soil testing laboratories, soil samples for chemical analysis are usually measured by volume, using appropriate measuring spoons. It is tacitly assumed that such measurements would reflect amounts of soil existing in the same volume under field conditions. This hypothesis was tested, using 12 soil samples of the A and B horizons of six soil profiles. Bulk density in the field was evaluated through a cylindrical metal sampler of 50cm³ and in the laboratory using spoons of different sizes. Measurements of soil volumes by spoons were quite precise. Values of bulk density varied between 0.63 and 1.46g/cm³ for field measurements and between 0.91 and 1.33g/cm³ for laboratory measurements with spoons. Thus, laboratory measurements overestimated lower values of bulk densities and underestimated the higher ones.

  16. Lung density

    DEFF Research Database (Denmark)

    Garnett, E S; Webber, C E; Coates, G

    1977-01-01

    The density of a defined volume of the human lung can be measured in vivo by a new noninvasive technique. A beam of gamma-rays is directed at the lung and, by measuring the scattered gamma-rays, lung density is calculated. The density in the lower lobe of the right lung in normal man during quiet...... breathing in the sitting position ranged from 0.25 to 0.37 g.cm-3. Subnormal values were found in patients with emphsema. In patients with pulmonary congestion and edema, lung density values ranged from 0.33 to 0.93 g.cm-3. The lung density measurement correlated well with the findings in chest radiographs...... but the lung density values were more sensitive indices. This was particularly evident in serial observations of individual patients....

  17. Tamanho amostral para a estimativa da densidade básica em um clone híbrido de Eucalyptus sp. Sample size for estimating basic density in a clone of Eucalyptus sp. hybrid.

    Directory of Open Access Journals (Sweden)

    Franciane Andrade de PÁDUA

    2015-06-01

    Full Text Available As diversas formas de se amostrar a madeira para o estudo de suas propriedades levam em consideração a acurácia, o tempo e o custo de processamento e coleta do material. No entanto, a forma e intensidade da amostragem considerada pode não captar corretamente a variabilidade dessas propriedades ou até mesmo negligenciá-la. O objetivo deste trabalho foi estimar o número de árvores necessárias para a estimativa da densidade básica média da árvore em um clone de híbrido de Eucalyptus urophylla x Eucalyptus grandis considerando diferentes formas de amostragem e classes de diâmetro. Foram utilizadas 50 árvores de um clone do hibrido, aos 5,6 anos. As árvores foram distribuídas em três classes de diâmetro e amostradas na forma de discos, a partir de três propostas: tradicional (0%, 25%, 50%,75% e 100% da altura comercial Hc; alternativa (2%, 10%, 30% e 70% Hc e de metro em metro a partir do DAP. Não houve diferença entre o número de árvores requeridas para a estimativa da densidade do clone por forma de amostragem, admitindo-se um erro de 5% e intervalo de confiança de 95%. A amostragem alternativa foi a mais eficiente considerando a intensidade da amostragem no tronco e o coeficiente de variação. A classificação diamétrica resultou em um número maior de árvores para estimar a densidade média, em função da maior variação da propriedade dentro de classes do que dentro do método de amostragem. There are several methods of collecting wood samples for the study of their properties, which consider the accuracy, time and cost of collecting and processing the material. However, often the variation pattern of ownership in the tree is neglected. Depending on the shape and size of the sample in the study the variability of the properties of the wood cannot be properly captured. The aim of this study was to estimate the number of trees needed to estimate the average basic density of the tree in a Eucalyptus urophylla x
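    The English summary above estimates how many trees are needed to determine mean basic density within a 5% allowable error at 95% confidence. The formula is not given in the abstract; a standard large-sample choice (an assumption here) is n = (z·CV/E)², with the coefficient of variation CV and allowable error E both expressed as percentages of the mean:

```python
import math
from statistics import NormalDist

def sample_size(cv_percent, allowable_error_percent, confidence=0.95):
    """Trees needed so the sample mean falls within the allowable error.

    Uses the large-sample approximation n = (z * CV / E)^2, where CV and
    E are both percentages of the mean and z is the two-sided normal
    quantile for the chosen confidence level.
    """
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return math.ceil((z * cv_percent / allowable_error_percent) ** 2)

# e.g. a hypothetical 10% coefficient of variation in basic density:
print(sample_size(10, 5))  # -> 16
```

    As the record notes, a larger within-class coefficient of variation directly inflates the required number of trees, since n grows with CV².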

  18. Espectrofotometria de longo caminho óptico em espectrofotômetro de duplo-feixe convencional: uma alternativa simples para investigações de amostras com densidade óptica muito baixa Long optical path length spectrophotometry in conventional double-beam spectrophotometers: a simple alternative for investigating samples of very low optical density

    Directory of Open Access Journals (Sweden)

    André Luiz Galo

    2009-01-01

    Full Text Available We describe the design and tests of a set-up mounted in a conventional double-beam spectrophotometer that allows the determination of the optical density of samples confined in a long liquid-core waveguide (LCW) capillary. Very long optical path lengths can be achieved with a capillary cell, allowing measurements of samples with very low optical densities. The device uses a custom optical concentrator optically coupled to the LCW (Teflon® AF). Optical density measurements, carried out using an LCW of ~ 45 cm, were in accordance with the Beer-Lambert law. Thus, it was possible to analyze quantitatively samples at concentrations 45-fold lower than those regularly used in spectrophotometric measurements.
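    The 45-fold gain follows directly from the Beer-Lambert law, A = ε·l·c: at a fixed minimum measurable absorbance, the detectable concentration scales as 1/l. A sketch with hypothetical values (the molar absorptivity and absorbance floor below are assumptions for illustration):

```python
def absorbance(epsilon, path_cm, conc):
    """Beer-Lambert law: A = epsilon * l * c."""
    return epsilon * path_cm * conc

epsilon = 5.0e4   # hypothetical molar absorptivity, L mol^-1 cm^-1
a_target = 0.1    # smallest reliably measurable absorbance (assumed)

c_cuvette = a_target / (epsilon * 1.0)    # standard 1 cm cuvette
c_lcw = a_target / (epsilon * 45.0)       # 45 cm liquid-core waveguide

print(c_cuvette / c_lcw)  # ratio of detectable concentrations, ~45
```

    The ratio depends only on the two path lengths, which is why the waveguide length, not the detector, sets the sensitivity gain.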

  19. Learning Grasp Affordance Densities

    DEFF Research Database (Denmark)

    Detry, Renaud; Kraft, Dirk; Kroemer, Oliver

    2011-01-01

    and relies on kernel density estimation to provide a continuous model. Grasp densities are learned and refined from exploration, by letting a robot “play” with an object in a sequence of graspand-drop actions: The robot uses visual cues to generate a set of grasp hypotheses; it then executes......We address the issue of learning and representing object grasp affordance models. We model grasp affordances with continuous probability density functions (grasp densities) which link object-relative grasp poses to their success probability. The underlying function representation is nonparametric...... these and records their outcomes. When a satisfactory number of grasp data is available, an importance-sampling algorithm turns these into a grasp density. We evaluate our method in a largely autonomous learning experiment run on three objects of distinct shapes. The experiment shows how learning increases success...

  20. Low Bone Density

    Science.gov (United States)

    Low bone density is when your bone density ... people with normal bone density. Detecting Low Bone Density: A bone density test will determine whether you ...

  1. Sampling on Quasicrystals

    OpenAIRE

    Grepstad, Sigrid

    2011-01-01

    We prove that quasicrystals are universal sets of stable sampling in any dimension. Necessary and sufficient density conditions for stable sampling and interpolation sets in one dimension are studied in detail.

  2. Experimental MR-guided cryotherapy of the brain with almost real-time imaging by radial k-space scanning; Experimentelle MR-gesteuerte Kryotherapie des Gehirns mit nahezu Echtzeitdarstellung durch radiale k-Raum-Abtastung

    Energy Technology Data Exchange (ETDEWEB)

    Tacke, J.; Schorn, R.; Glowinski, A.; Grosskortenhaus, S.; Adam, G.; Guenther, R.W. [Technische Hochschule Aachen (Germany). Klinik fuer Radiologische Diagnostik; Speetzen, R.; Rau, G. [Helmholtz-Institut fuer Biomedizinische Technik, Aachen (Germany); Rasche, V. [Philips GmbH Forschungslaboratorium, Hamburg (Germany)

    1999-02-01

    Purpose: To test radial k-space scanning by MR fluoroscopy to guide and control MR-guided interstitial cryotherapy of the healthy pig brain. Methods: After MR tomographic planning of the approach, an MR-compatible experimental cryotherapy probe of 2.7 mm diameter was introduced through a 5 mm burr hole into the right frontal brain of five healthy pigs. The freeze-thaw cycles were imaged using a T1-weighted gradient echo sequence with radial k-space scanning in coronal, sagittal, and axial directions. Results: The high temporal resolution of the chosen sequence permits a continuous representation of the freezing process with good image quality and high contrast between ice and unfrozen brain parenchyma. Because of the interactive conception of the sequence the layer plane could be chosen as desired during the measurement. Ice formation was sharply demarcated, spherically configurated, and was free of signals. Its maximum diameter was 13 mm. Conclusions: With use of the novel, interactively controllable gradient echo sequence with radial k-space scanning, guidance of the intervention under fluoroscopic conditions with the advantages of MRT is possible. MR-guided cryotherapy allows a minimally-invasive, precisely dosable focal tissue ablation. (orig.) [Deutsch] Ziel: Erprobung der radialen k-Raum-Abtastung bei der MR-Fluoroskopie zur Steuerung und Kontrolle MR-gesteuerter interstitieller Kryotherapie des gesunden Schweinegehirns. Methoden: Nach MR-tomographischer Planung des Zugangsweges wurde eine MR-kompatible experimentelle Kryotherapiesonde von 2,7 mm Durchmesser ueber ein 5 mm Bohrloch in das rechte Frontalhirn von fuenf gesunden Schweinen eingebracht. Die Frier-/Tauzyklen wurden anhand einer T1-gewichteten Gradientenechosequenz mit radialer k-Raum-Abtastung in koronarer, sagittaler und axialer Schichtfuehrung dargestellt.
Ergebnisse: Die hohe zeitliche Aufloesung der gewaehlten Sequenz erlaubte eine kontinuierliche Darstellung des Friervorgangs bei
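    The radial k-space scanning used above acquires every readout along a spoke through the centre of k-space, which is what gives it the motion robustness exploited for fluoroscopy. A minimal sketch of a uniform 2D radial trajectory (the sequence's actual spoke ordering and interleaving are not specified in the abstract, so this layout is an illustrative assumption):

```python
import numpy as np

def radial_trajectory(n_spokes, n_samples, k_max=0.5):
    """2D radial k-space trajectory: full-diameter spokes through the
    centre, with uniform angular spacing over 180 degrees."""
    angles = np.arange(n_spokes) * np.pi / n_spokes
    radii = np.linspace(-k_max, k_max, n_samples)
    kx = np.outer(np.cos(angles), radii)
    ky = np.outer(np.sin(angles), radii)
    return kx, ky  # each of shape (n_spokes, n_samples)

kx, ky = radial_trajectory(n_spokes=64, n_samples=128)
# every spoke passes through the k-space centre (DC), so the low
# spatial frequencies are refreshed on every readout
radius = np.hypot(kx, ky)
print(radius.max())
```

    Because the centre is resampled continuously, any temporal subset of spokes can be reconstructed into an image, which is the basis of the near-real-time display described in the record.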

  3. Level densities

    International Nuclear Information System (INIS)

    Ignatyuk, A.V.

    1998-01-01

    For any application of the statistical theory of nuclear reactions it is very important to obtain the parameters of the level density description from reliable experimental data. The cumulative numbers of low-lying levels and the average spacings between neutron resonances are usually used as such data. The level density parameters fitted to such data are compiled in the RIPL Starter File for the three models most frequently used in practical calculations: i) For the Gilbert-Cameron model the parameters of the Beijing group, based on rather recent compilations of the neutron resonance and low-lying level densities and included in the beijing-gc.dat file, are chosen as recommended. As alternative versions the parameters provided by other groups are given in the files jaeri-gc.dat, bombay-gc.dat and obninsk-gc.dat. Additionally, the iljinov-gc.dat and mengoni-gc.dat files include sets of level density parameters that take into account the damping of shell effects at high energies. ii) For the back-shifted Fermi gas model the beijing-bs.dat file is selected as the recommended one. Alternative parameters of the Obninsk group are given in the obninsk-bs.dat file and those of Bombay in bombay-bs.dat. iii) For the generalized superfluid model the Obninsk group parameters included in the obninsk-bcs.dat file are chosen as recommended, and the beijing-bcs.dat file is included as an alternative set of parameters. iv) For the microscopic approach to level densities the files are: obninsk-micro.for, the FORTRAN 77 source for the microscopic statistical level density code developed in Obninsk by Ignatyuk and coworkers; moller-levels.gz, the Moeller single-particle level and ground-state deformation database; and moller-levels.for, the retrieval code for the Moeller single-particle level scheme. (author)
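    Both the Gilbert-Cameron and back-shifted Fermi-gas parameterizations referenced above build on the Bethe level-density formula. A sketch of the back-shifted Fermi-gas form with illustrative, not fitted, parameter values (the compiled files hold the actual per-nucleus parameters):

```python
import math

def bsfg_level_density(E, a, delta, sigma):
    """Back-shifted Fermi-gas (Bethe) level density [1/MeV]:

        rho(E) = exp(2*sqrt(a*U)) / (12*sqrt(2)*sigma*a**0.25*U**1.25),

    with effective excitation energy U = E - delta (the back-shift).
    Parameter values used below are illustrative only.
    """
    U = E - delta
    if U <= 0:
        raise ValueError("excitation energy must exceed the back-shift")
    return math.exp(2 * math.sqrt(a * U)) / (
        12 * math.sqrt(2) * sigma * a**0.25 * U**1.25)

# hypothetical parameters: a = 12 MeV^-1, back-shift 0.5 MeV, spin cutoff 4
for E in (2.0, 5.0, 8.0):
    print(f"E = {E} MeV: rho = {bsfg_level_density(E, 12.0, 0.5, 4.0):.3g} /MeV")
```

    The near-exponential growth of rho with excitation energy is what makes the fitted parameter a so sensitive to the neutron-resonance spacing data mentioned in the record.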

  4. THE SL2S GALAXY-SCALE LENS SAMPLE. IV. THE DEPENDENCE OF THE TOTAL MASS DENSITY PROFILE OF EARLY-TYPE GALAXIES ON REDSHIFT, STELLAR MASS, AND SIZE

    International Nuclear Information System (INIS)

    Sonnenfeld, Alessandro; Treu, Tommaso; Suyu, Sherry H.; Gavazzi, Raphaël; Marshall, Philip J.; Auger, Matthew W.; Nipoti, Carlo

    2013-01-01

    We present optical and near-infrared spectroscopy obtained at Keck, Very Large Telescope, and Gemini for a sample of 36 secure strong gravitational lens systems and 17 candidates identified as part of the Strong Lensing Legacy Survey. The deflectors are massive early-type galaxies in the redshift range z_d = 0.2-0.8, while the lensed sources are at z_s = 1-3.5. We combine these data with photometric and lensing measurements presented in the companion paper III and with lenses from the Sloan Lens Advanced Camera for Surveys and Lenses Structure and Dynamics surveys to investigate the cosmic evolution of the internal structure of massive early-type galaxies over half the age of the universe. We study the dependence of the slope of the total mass density profile, γ' (ρ(r) ∝ r^(-γ')), on stellar mass, size, and redshift. We find that two parameters are sufficient to determine γ' with less than 6% residual scatter. At fixed redshift, γ' depends solely on the surface stellar mass density, ∂γ'/∂Σ_* = 0.38 ± 0.07, i.e., galaxies with denser stars also have steeper slopes. At fixed M_* and R_eff, γ' depends on redshift, in the sense that galaxies at a lower redshift have steeper slopes (∂γ'/∂z = -0.31 ± 0.10). However, the mean redshift evolution of γ' for an individual galaxy is consistent with zero (dγ'/dz = -0.10 ± 0.12). This result is obtained by combining our measured dependencies of γ' on z, M_*, and R_eff with the evolution of the R_eff-M_* relation taken from the literature, and is broadly consistent with current models of the formation and evolution of massive early-type galaxies. Detailed quantitative comparisons of our results with theory will provide qualitatively new information on the detailed physical processes at work
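    For intuition about the slope γ': a pure power-law density ρ(r) ∝ r^(-γ') with γ' < 3 encloses mass M(<r) ∝ r^(3-γ'), so a steeper slope puts a larger fraction of the total mass at small radii. A small numeric check (the slope values are illustrative, not taken from the paper's fits):

```python
def enclosed_mass_ratio(gamma, r1, r2):
    """For a power-law density rho(r) ~ r**(-gamma) with gamma < 3,
    M(<r) ~ r**(3 - gamma); return the mass fraction M(<r1)/M(<r2)."""
    return (r1 / r2) ** (3.0 - gamma)

# an isothermal-like slope gamma' = 2 vs. a steeper gamma' = 2.3:
for gamma in (2.0, 2.3):
    print(gamma, enclosed_mass_ratio(gamma, r1=1.0, r2=10.0))
```

    The steeper profile encloses roughly twice the mass fraction inside one tenth of the outer radius, which is why γ' is a sensitive probe of the inner structure of these galaxies.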

  5. Contributions of an adiabatic initial inversion pulse and K-space Re-ordered by inversion-time at each slice position (KRISP) to control of CSF artifacts and visualization of the brain in FLAIR magnetic resonance imaging

    International Nuclear Information System (INIS)

    Curati, Walter L.; Oatridge, Angela; Herlihy, Amy H.; Hajnal, Joseph V.; Puri, Basant K.; Bydder, Graeme M.

    2001-01-01

    AIM: The aim of this study was to compare the performance of three fluid attenuated inversion recovery (FLAIR) pulse sequences for control of cerebrospinal fluid (CSF) and blood flow artifacts in imaging of the brain. The first of these sequences had an initial sinc inversion pulse which was followed by conventional k-space mapping. The second had an initial sinc inversion pulse followed by k-space re-ordered by inversion time at each slice position (KRISP) and the third had an adiabatic initial inversion pulse followed by KRISP. MATERIALS AND METHODS: Ten patients with established disease were studied with all three pulse sequences. Seven were also studied with the adiabatic KRISP sequence after contrast enhancement. Their images were evaluated for patient motion artifact, CSF and blood flow artifact as well as conspicuity of the cortex, meninges, ventricular system, brainstem and cerebellum. The conspicuity of lesions and the degree of enhancement were also evaluated. RESULTS: Both the sinc and adiabatic KRISP FLAIR sequences showed better control of CSF and blood flow artifacts than the conventional FLAIR sequence. In addition the adiabatic KRISP FLAIR sequence showed better control of CSF artifact at the inferior aspect of the posterior fossa. The lesion conspicuity was similar for each of the FLAIR sequences as was the degree of contrast enhancement to that shown with a T1-weighted spin echo sequence. CONCLUSION: The KRISP FLAIR sequence controls high signal artifacts from CSF flow and blood flow and the adiabatic pulse controls high signal artifacts due to inadequate inversion of the CSF magnetization at the periphery of the head transmitter coil. The KRISP FLAIR sequence also improves cortical and meningeal definition as a result of an edge enhancement effect. The effects are synergistic and can be usefully combined in a single pulse sequence. Curati, W.L. et al. (2001)

  6. Density determination in Pino Radiata (D.Don) samples using 59.5 keV gamma radiation attenuation; Determinacion de densidad en muestras de Pino Radiata (D. Don) mediante atenuacion de radiacion gamma de 59.5 KeV

    Energy Technology Data Exchange (ETDEWEB)

    Dinator, Maria I; Morales, Jose R; Aliaga, Nelson [Chile Univ., Santiago (Chile). Dept. de Fisica; Karsulovic, Jose T; Sanchez, Jaime; Leon, Adolfo [Chile Univ., Santiago (Chile). Dept. de Tecnologia de la Madera

    1997-12-31

    A non-destructive method to determine the density of wood samples is presented. The photon mass attenuation coefficient of samples of Pino radiata (D.Don) was measured at 59.5 keV with a radioactive Am-241 source. The value of 0.192 ± 0.002 cm²/g was obtained with a gamma spectroscopy system and later used in the determination of the mass density of sixteen samples of the same species. Comparison of these results with those of the gravimetric method through a linear regression showed a slope of 1.001 and a correlation factor of 0.94. (author). 3 refs., 4 figs.
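    Given the measured mass attenuation coefficient, density follows from the attenuation law I = I0·exp(-μm·ρ·t), inverted for ρ. The sketch below uses the record's μm = 0.192 cm²/g at 59.5 keV; the sample thickness and count rates are hypothetical:

```python
import math

def wood_density(I0, I, mu_mass=0.192, thickness_cm=2.0):
    """Density from gamma-ray attenuation, I = I0 * exp(-mu_m * rho * t):

        rho = ln(I0 / I) / (mu_m * t)

    mu_mass: mass attenuation coefficient at 59.5 keV (cm^2/g, from the
    record); thickness_cm: hypothetical sample thickness along the beam.
    """
    return math.log(I0 / I) / (mu_mass * thickness_cm)

# hypothetical count rates with and without the sample in the beam:
print(f"{wood_density(I0=10000, I=8200):.3f} g/cm^3")
```

    In practice I0 and I would be dead-time- and background-corrected photopeak count rates from the gamma spectroscopy system.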

  7. THE SL2S GALAXY-SCALE LENS SAMPLE. IV. THE DEPENDENCE OF THE TOTAL MASS DENSITY PROFILE OF EARLY-TYPE GALAXIES ON REDSHIFT, STELLAR MASS, AND SIZE

    Energy Technology Data Exchange (ETDEWEB)

    Sonnenfeld, Alessandro; Treu, Tommaso; Suyu, Sherry H. [Physics Department, University of California, Santa Barbara, CA 93106 (United States); Gavazzi, Raphaël [Institut d' Astrophysique de Paris, UMR7095 CNRS-Université Pierre et Marie Curie, 98bis bd Arago, F-75014 Paris (France); Marshall, Philip J. [Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, 452 Lomita Mall, Stanford, CA 94305 (United States); Auger, Matthew W. [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Nipoti, Carlo, E-mail: sonnen@physics.ucsb.edu [Astronomy Department, University of Bologna, via Ranzani 1, I-40127 Bologna (Italy)

    2013-11-10

    We present optical and near-infrared spectroscopy obtained at Keck, Very Large Telescope, and Gemini for a sample of 36 secure strong gravitational lens systems and 17 candidates identified as part of the Strong Lensing Legacy Survey. The deflectors are massive early-type galaxies in the redshift range z_d = 0.2-0.8, while the lensed sources are at z_s = 1-3.5. We combine these data with photometric and lensing measurements presented in the companion paper III and with lenses from the Sloan Lens Advanced Camera for Surveys and Lenses Structure and Dynamics surveys to investigate the cosmic evolution of the internal structure of massive early-type galaxies over half the age of the universe. We study the dependence of the slope of the total mass density profile, γ' (ρ(r) ∝ r^(-γ')), on stellar mass, size, and redshift. We find that two parameters are sufficient to determine γ' with less than 6% residual scatter. At fixed redshift, γ' depends solely on the surface stellar mass density, ∂γ'/∂Σ_* = 0.38 ± 0.07, i.e., galaxies with denser stars also have steeper slopes. At fixed M_* and R_eff, γ' depends on redshift, in the sense that galaxies at a lower redshift have steeper slopes (∂γ'/∂z = -0.31 ± 0.10). However, the mean redshift evolution of γ' for an individual galaxy is consistent with zero (dγ'/dz = -0.10 ± 0.12). This result is obtained by combining our measured dependencies of γ' on z, M_*, and R_eff with the evolution of the R_eff-M_* relation taken from the literature, and is broadly consistent with current models of the formation and evolution of massive early-type galaxies. Detailed quantitative comparisons of our results with theory will provide qualitatively new information on the detailed physical processes at work.

  8. Renormings of C(K) spaces

    Czech Academy of Sciences Publication Activity Database

    Smith, Richard James; Troyanski, S.

    2010-01-01

    Roč. 104, č. 2 (2010), s. 375-412 ISSN 1578-7303 R&D Projects: GA ČR GA201/07/0394 Institutional research plan: CEZ:AV0Z10190503 Keywords : uniformly rotund norms * Frechet * Gateaux Subject RIV: BA - General Mathematics Impact factor: 0.400, year: 2010 http://www.springerlink.com/content/430027876375w58x/

  9. Variable Kernel Density Estimation

    OpenAIRE

    Terrell, George R.; Scott, David W.

    1992-01-01

    We investigate some of the possibilities for improvement of univariate and multivariate kernel density estimates by varying the window over the domain of estimation, pointwise and globally. Two general approaches are to vary the window width by the point of estimation and by point of the sample observation. The first possibility is shown to be of little efficacy in one variable. In particular, nearest-neighbor estimators in all versions perform poorly in one and two dimensions, but begin to b...
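    The two approaches contrasted above are often called "balloon" estimators (window varies with the point of estimation) and "sample-point" estimators (window varies with the observation). A 1D sketch of the sample-point variant with Gaussian kernels and k-nearest-neighbour bandwidths; the bandwidth rule and constants are illustrative assumptions, not the paper's prescriptions:

```python
import numpy as np

def sample_point_kde(x_grid, data, k=5, h0=1.0):
    """Variable-kernel density estimate: each observation gets its own
    Gaussian bandwidth proportional to its k-th nearest-neighbour
    distance (the window varies by sample point, not by the point of
    estimation)."""
    data = np.asarray(data, dtype=float)
    dists = np.abs(data[:, None] - data[None, :])
    knn = np.sort(dists, axis=1)[:, k]        # k-th NN distance per point
    h = h0 * np.maximum(knn, 1e-12)           # per-observation bandwidths
    u = (x_grid[:, None] - data[None, :]) / h[None, :]
    kernels = np.exp(-0.5 * u**2) / (np.sqrt(2 * np.pi) * h[None, :])
    return kernels.mean(axis=1)               # average of per-point kernels

rng = np.random.default_rng(0)
data = rng.normal(size=200)
grid = np.linspace(-4, 4, 81)
density = sample_point_kde(grid, data)
# the estimate integrates to ~1 over a wide enough grid
print((density * (grid[1] - grid[0])).sum())
```

    Because each kernel integrates to one regardless of its bandwidth, the sample-point estimator remains a proper density, which is not true of the balloon variant.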

  10. Data release for intermediate-density hydrogeochemical and stream sediment sampling in the Vallecito Creek Special Study Area, Colorado, including concentrations of uranium and forty-six additional elements

    International Nuclear Information System (INIS)

    Warren, R.G.

    1981-04-01

    A sediment sample and two water samples were collected at each location, about a kilometer apart, from small tributary streams within the area. One of the two water samples collected at each location was filtered in the field and the other was not. Both samples were acidified to a pH of < 1; field data and uranium concentrations are listed first for the filtered sample (sample type = 07), followed by the unfiltered sample (sample type = 27), for each location in Appendix I-A. Uranium concentrations are higher in unfiltered samples than in filtered samples for most locations. Measured uranium concentrations in control standards analyzed with the water samples are listed in Appendix II. All sediments were air dried, and the fraction finer than 100 mesh was separated and analyzed for uranium and forty-six additional elements. Field data and analytical results for each sediment sample are listed in Appendix I-B. Analytical procedures for both water and sediment samples are briefly described in Appendix III. Most bedrock units within the sampled area are of Precambrian age. Three Precambrian units are known or potential hosts for uranium deposits: the Trimble granite is associated with the recently discovered Florida Mountain vein deposit, the Uncompahgre formation hosts a vein-type occurrence in Elk Park near the contact with the Irving formation, and the Vallecito conglomerate has received some attention as a possible host for a quartz pebble conglomerate deposit. Nearly all sediment samples collected downslope from exposures of Trimble granite (geologic unit symbol "T" in Appendix I) contain unusually high uranium concentrations. High uranium concentrations in sediment also occur for an individual sample location that has a geologic setting similar to the Elk Park occurrence and for a sample associated with the Vallecito conglomerate

  11. Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm

    Science.gov (United States)

    Elahi, Sana; kaleem, Muhammad; Omer, Hammad

    2018-01-01

Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of the images from a very limited number of samples in k-space. This significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct the MR images from the under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced by the undersampling in k-space. This paper introduces an improved iterative algorithm based on a p-thresholding technique for CS-MRI image reconstruction. The use of a p-thresholding function promotes sparsity in the image, which is a key factor for CS-based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA, and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover the fully sampled image from the under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding and hard thresholding techniques at different reduction factors.
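The iterative thresholding loop described above can be sketched compactly. Below is a minimal numpy illustration of ISTA-style reconstruction on a toy 1-D problem, with a generalized p-thresholding shrinkage (p = 1 reduces to ordinary soft thresholding). The shrinkage form, sampling mask, and parameters here are illustrative assumptions, not the paper's exact operator or data.

```python
import numpy as np

def p_threshold(x, lam, p=1.0):
    """Generalized p-thresholding: shrink |x| by lam * |x|**(p-1).
    For p = 1 this reduces to the classic soft-thresholding operator.
    (One of several p-thresholding forms used in the literature.)"""
    mag = np.abs(x)
    safe = np.where(mag > 0, mag, 1.0)          # avoid 0**(p-1) issues
    shrink = lam * safe ** (p - 1.0)
    return np.sign(x) * np.maximum(mag - shrink, 0.0)

def ista(y, mask, lam=0.02, p=1.0, n_iter=300):
    """Recover a sparse signal from undersampled k-space samples
    y = mask * FFT(x) by gradient steps plus p-thresholding."""
    x = np.zeros(mask.size)
    for _ in range(n_iter):
        # data-consistency gradient (unitary FFT => unit step size)
        resid = mask * np.fft.fft(x, norm="ortho") - y
        grad = np.real(np.fft.ifft(mask * resid, norm="ortho"))
        x = p_threshold(x - grad, lam, p)
    return x

# toy example: a 3-sparse signal observed at ~40% of k-space
rng = np.random.default_rng(0)
n = 128
x_true = np.zeros(n)
x_true[[10, 50, 90]] = [1.0, -0.8, 0.5]
mask = (rng.random(n) < 0.4).astype(float)
mask[0] = 1.0                                   # always keep the DC sample
y = mask * np.fft.fft(x_true, norm="ortho")
x_rec = ista(y, mask)
```

The same loop applies to 2-D images by replacing the FFTs with `np.fft.fft2`/`np.fft.ifft2` and using a sparsifying transform (e.g. wavelets) when the image itself is not sparse.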

  12. Comparison of density estimators. [Estimation of probability density functions

    Energy Technology Data Exchange (ETDEWEB)

    Kao, S.; Monahan, J.F.

    1977-09-01

Recent work in the field of probability density estimation has included the introduction of some new methods, such as the polynomial and spline methods and the nearest neighbor method, and the study of asymptotic properties in depth. This earlier work is summarized here. In addition, the computational complexity of the various algorithms is analyzed and some simulation results are presented. The object is to compare the performance of the various methods in small samples, to assess their sensitivity to changes in their parameters, and to attempt to discover at what point a sample is so small that density estimation is no longer worthwhile. (RWR)
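One of the estimators the comparison covers, the nearest-neighbor method, can be sketched in a few lines. The 1-D form f̂(x) = k / (2 n r_k(x)), where r_k(x) is the distance to the k-th nearest sample point, and the choice k = 50 are standard textbook choices, not necessarily the exact variants studied in the report.

```python
import numpy as np

def knn_density(x_eval, sample, k=50):
    """1-D k-nearest-neighbor density estimate:
    f_hat(x) = k / (2 * n * r_k(x)), with r_k(x) the distance
    from x to its k-th nearest sample point."""
    sample = np.asarray(sample, dtype=float)
    x_eval = np.atleast_1d(np.asarray(x_eval, dtype=float))
    n = sample.size
    dists = np.abs(x_eval[:, None] - sample[None, :])
    r_k = np.sort(dists, axis=1)[:, k - 1]
    return k / (2.0 * n * r_k)

# estimate a standard normal density at the mode and in the tail
rng = np.random.default_rng(1)
data = rng.normal(size=2000)
est = knn_density([0.0, 3.0], data, k=50)
# est[0] should be near 1/sqrt(2*pi) ~ 0.40; est[1] near 0.004
```

The estimate's sensitivity to k mirrors the parameter-sensitivity question the abstract raises: small k gives a noisy estimate, large k oversmooths.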

  13. Boat sampling

    International Nuclear Information System (INIS)

    Citanovic, M.; Bezlaj, H.

    1994-01-01

This presentation describes the essential boat sampling activities: on-site boat sampling process optimization and qualification; boat sampling of base material (beltline region); boat sampling of weld material (weld No. 4); and problems associated with weld crown variation, RPV shell inner-radius tolerance, local corrosion pitting and water clarity. The equipment used for boat sampling is also described. 7 pictures

  14. Graph sampling

    OpenAIRE

    Zhang, L.-C.; Patone, M.

    2017-01-01

    We synthesise the existing theory of graph sampling. We propose a formal definition of sampling in finite graphs, and provide a classification of potential graph parameters. We develop a general approach of Horvitz–Thompson estimation to T-stage snowball sampling, and present various reformulations of some common network sampling methods in the literature in terms of the outlined graph sampling theory.
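The Horvitz–Thompson estimator that the authors generalize to T-stage snowball sampling has a simple core: weight each sampled value by the inverse of its inclusion probability. The sketch below uses made-up values and Poisson (independent Bernoulli) sampling; the graph-specific inclusion-probability calculations from the paper are not reproduced here.

```python
import random

def horvitz_thompson_total(values, incl_probs):
    """Horvitz-Thompson estimator of a population total: each sampled
    unit's value is weighted by 1/pi_i, its inclusion probability."""
    return sum(y / p for y, p in zip(values, incl_probs))

# worked example with hypothetical inclusion probabilities
total_hat = horvitz_thompson_total([4.0, 10.0, 6.0], [0.2, 0.5, 0.3])
# 4/0.2 + 10/0.5 + 6/0.3 = 60.0

# unbiasedness check under Poisson sampling: the average estimate
# over many draws approaches the true population total
random.seed(0)
pop = [3.0, 7.0, 2.0, 8.0]          # true total = 20
probs = [0.5, 0.5, 0.25, 0.75]
trials = 20000
acc = 0.0
for _ in range(trials):
    drawn = [(y, p) for y, p in zip(pop, probs) if random.random() < p]
    acc += horvitz_thompson_total([y for y, _ in drawn],
                                  [p for _, p in drawn])
mean_est = acc / trials              # close to 20
```

In the graph setting, the difficulty the paper addresses is computing those inclusion probabilities for units reached through T waves of snowballing; the weighting step itself is unchanged.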

  15. Size scaling effects on the particle density fluctuations in confined plasmas

    International Nuclear Information System (INIS)

    Vazquez, Federico; Markus, Ferenc

    2009-01-01

    In this paper, memory and nonlocal effects on fluctuating mass diffusion are addressed in the context of fusion plasmas. Nonlocal effects are included by considering a diffusivity coefficient depending on the size of the container in the transverse direction to the applied magnetic field. It is obtained by resorting to the general formulation of the extended version of irreversible thermodynamics in terms of the higher order dissipative fluxes. The developed model describes two different types of the particle density time correlation function. Both have been observed in tokamak and nontokamak devices. These two kinds of time correlation function characterize the wave and the diffusive transport mechanisms of particle density perturbations. A transition between them is found, which is controlled by the size of the container. A phase diagram in the (L,2π/k) space describes the relation between the dynamics of particle density fluctuations and the size L of the system together with the oscillating mode k of the correlation function.

  16. Density measurements of small amounts of high-density solids by a floatation method

    International Nuclear Information System (INIS)

    Akabori, Mitsuo; Shiba, Koreyuki

    1984-09-01

A floatation method for determining the density of small amounts of high-density solids is described. The use of a float combined with an appropriate floatation liquid allows the density of high-density substances to be measured in small amounts. Using a sample of 0.1 g in weight, a floatation liquid of 3.0 g cm⁻³ in density and a float of 1.5 g cm⁻³ in apparent density, sample densities of 5, 10 and 20 g cm⁻³ are determined to accuracies better than ±0.002, ±0.01 and ±0.05 g cm⁻³, respectively, corresponding to about ±1 × 10⁻⁵ cm³ in volume. By means of appropriate degassing treatments, the densities of (Th,U)O₂ pellets of ∼0.1 g in weight and ∼9.55 g cm⁻³ in density were determined with an accuracy better than ±0.05%. (author)

  17. Obesity and Regional Immigrant Density.

    Science.gov (United States)

    Emerson, Scott D; Carbert, Nicole S

    2017-11-24

Canada has an increasingly large immigrant population. Areas of higher immigrant density may relate to immigrants' health through reduced acculturation to Western foods, greater access to cultural foods, and/or promotion of salubrious values/practices. It is unclear, however, whether an association exists between Canada-wide regional immigrant density and obesity among immigrants. Thus, we examined whether regional immigrant density was related to obesity among immigrants. Adult immigrant respondents (n = 15,595) to a national population-level health survey were merged with region-level immigrant density data. Multi-level logistic regression was used to model the odds of obesity associated with increased immigrant density. The prevalence of obesity in the analytic sample was 16%. Increasing regional immigrant density was associated with lower odds of obesity among minority immigrants and long-term white immigrants. Immigrant density at the region level in Canada may be an important contextual factor to consider when examining obesity among immigrants.

  18. Balanced sampling

    NARCIS (Netherlands)

    Brus, D.J.

    2015-01-01

    In balanced sampling a linear relation between the soil property of interest and one or more covariates with known means is exploited in selecting the sampling locations. Recent developments make this sampling design attractive for statistical soil surveys. This paper introduces balanced sampling

  19. Ensemble Sampling

    OpenAIRE

    Lu, Xiuyuan; Van Roy, Benjamin

    2017-01-01

    Thompson sampling has emerged as an effective heuristic for a broad range of online decision problems. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks. Ensemble sampling dramatically expands on the range of applica...
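A minimal sketch of the idea for a Bernoulli bandit: maintain an ensemble of perturbed value estimates, sample one member uniformly each round, and play its greedy arm, so that drawing a member approximates drawing a model from the posterior. The online "double-or-nothing" bootstrap used to keep members diverse is a common simple choice, an assumption here rather than the paper's exact perturbation scheme for neural networks.

```python
import random

def ensemble_sampling_bandit(true_means, n_models=10, horizon=2000, seed=0):
    """Ensemble sampling for a Bernoulli bandit: an ensemble of
    independently perturbed mean estimates stands in for an exact
    posterior; each round one member is sampled and acted on greedily."""
    rng = random.Random(seed)
    k = len(true_means)
    # per-member [pseudo-successes, pseudo-pulls]; random priors
    # give the ensemble its initial diversity
    stats = [[[rng.random(), 1.0] for _ in range(k)]
             for _ in range(n_models)]
    pulls = [0] * k
    for _ in range(horizon):
        member = stats[rng.randrange(n_models)]   # "sample a model"
        arm = max(range(k), key=lambda a: member[a][0] / member[a][1])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        pulls[arm] += 1
        # double-or-nothing bootstrap: each member counts the new
        # observation with weight 0 or 2, preserving spread
        for m in stats:
            w = 2.0 if rng.random() < 0.5 else 0.0
            m[arm][0] += w * reward
            m[arm][1] += w
    return pulls

pulls = ensemble_sampling_bandit([0.2, 0.5, 0.8])
# the highest-mean arm should collect the bulk of the pulls
```

Exact Thompson sampling would instead sample arm means from per-arm Beta posteriors; the ensemble replaces that posterior draw with a uniform draw over a fixed set of models, which is what keeps the method tractable for complex model classes.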

  20. Gleeble Testing of Tungsten Samples

    Science.gov (United States)

    2013-02-01

temperature on an Instron load frame with a 222.41 kN (50 kip) load cell. The samples were compressed at the same strain rate as on the Gleeble... [flattened table fragment — columns: ID, % RE, Initial Density (cm³), Density after Compression (cm³), % Change in Density, Test Temperature; row NT1: 0, 18.08, 18.27, 1.06, 1000; row NT3: 0...] 4.1 Nano-Tungsten: The results for the compression of the nano-tungsten samples are shown in tables 2 and 3 and figure 5. During testing, sample NT1

  1. A morphometric study of antral G-cell density in a sample of adult general population: comparison of three different methods and correlation with patient demography, helicobacter pylori infection, histomorphology and circulating gastrin levels

    DEFF Research Database (Denmark)

    Petersson, Fredrik; Borch, Kurt; Rehfeld, Jens F

    2008-01-01

    whether these methods are intercorrelated and the relation of these methods to plasma gastrin concentrations, demography, the occurrence of H. pylori infection and chronic gastritis. Gastric antral mucosal biopsy sections from 273 adults (188 with and 85 without H pylori infection) from a general...... population sample were examined immunohistochemically for G-cells using cell counting, stereology (point counting) and computerized image analysis. Gastritis was scored according to the updated Sydney system. Basal plasma gastrin concentrations were measured by radioimmunoassay. The three methods for G...

  2. Laboratory Density Functionals

    OpenAIRE

    Giraud, B. G.

    2007-01-01

    We compare several definitions of the density of a self-bound system, such as a nucleus, in relation with its center-of-mass zero-point motion. A trivial deconvolution relates the internal density to the density defined in the laboratory frame. This result is useful for the practical definition of density functionals.

  3. Laser sampling

    International Nuclear Information System (INIS)

    Gorbatenko, A A; Revina, E I

    2015-01-01

    The review is devoted to the major advances in laser sampling. The advantages and drawbacks of the technique are considered. Specific features of combinations of laser sampling with various instrumental analytical methods, primarily inductively coupled plasma mass spectrometry, are discussed. Examples of practical implementation of hybrid methods involving laser sampling as well as corresponding analytical characteristics are presented. The bibliography includes 78 references

  4. Photoionization and High Density Gas

    Science.gov (United States)

    Kallman, T.; Bautista, M.; White, Nicholas E. (Technical Monitor)

    2002-01-01

We present results of calculations using the XSTAR version 2 computer code. This code is loosely based on the XSTAR v.1 code which has been available for public use for some time. However, it represents an improvement and update in several major respects, including atomic data, code structure, user interface, and an improved physical description of ionization/excitation. In particular, it is now applicable to high-density situations in which significant excited atomic level populations are likely to occur. We describe the computational techniques and assumptions, and present sample runs with particular emphasis on high-density situations.

  5. Density Distributions of Cyclotrimethylenetrinitramines (RDX)

    International Nuclear Information System (INIS)

    Hoffman, D M

    2002-01-01

As part of the US Army Foreign Comparative Testing (FCT) program, the density distributions of six samples of class 1 RDX were measured using the density gradient technique. This technique was used in an attempt to distinguish RDX crystallized by a French manufacturer (designated insensitive RDX, or IRDX) from RDX manufactured at Holston Army Ammunition Plant (HAAP), the current source of RDX for the Department of Defense (DoD). Two samples from different lots of French IRDX had an average density of 1.7958 ± 0.0008 g/cc. The theoretical density of a perfect RDX crystal is 1.806 g/cc. This yields 99.43% of the theoretical maximum density (TMD). For two HAAP RDX lots the average density was 1.786 ± 0.002 g/cc, only 98.89% TMD. Several other techniques were used for preliminary characterization of one lot of French IRDX and two lots of HAAP RDX. Light scattering, SEM and polarized optical microscopy (POM) showed that SNPE and Holston RDX had the appropriate particle size distribution for Class 1 RDX. High performance liquid chromatography showed quantities of HMX in HAAP RDX. French IRDX also showed a 1.1 °C higher melting point compared to HAAP RDX in differential scanning calorimetry (DSC), consistent with no melting-point depression due to the HMX contaminant. A second part of the program involved characterization of Holston RDX recrystallized using the French process. After reprocessing, the average density of the Holston RDX increased to 1.7907 g/cc. Apparently HMX in RDX can act as a nucleating agent in the French RDX recrystallization process. The French IRDX contained no HMX, which is assumed to account for its higher density and narrower density distribution. Reprocessing of RDX from Holston improved the average density compared to the original Holston RDX, but the resulting HIRDX was not as dense as the original French IRDX.
Recrystallized Holston IRDX crystals were much larger (3–500 µm or more) than either the original class 1 HAAP RDX or French

  6. Densities of carbon foils

    International Nuclear Information System (INIS)

    Stoner, J.O. Jr.

    1991-01-01

The densities of arc-evaporated carbon target foils have been measured by several methods. The density depends upon the method used to measure it; for the same surface density, values obtained by different measurement techniques may differ by fifty percent or more. The most reliable density measurements are by flotation, yielding a density of 2.01±0.03 g cm⁻³, and by interferometric step height with the surface density known from auxiliary measurements, yielding a density of 2.61±0.4 g cm⁻³. The difference between these density values may be due in part to the compressive stresses that carbon films have while still on their substrates, uncertainties in the optical calibration of surface densities of carbon foils, and systematic errors in step-height measurements. Mechanical thickness measurements by micrometer caliper are unreliable due to the nonplanarity of these foils. (orig.)

  7. Clinical Feasibility of Free-Breathing Dynamic T1-Weighted Imaging With Gadoxetic Acid-Enhanced Liver Magnetic Resonance Imaging Using a Combination of Variable Density Sampling and Compressed Sensing.

    Science.gov (United States)

    Yoon, Jeong Hee; Yu, Mi Hye; Chang, Won; Park, Jin-Young; Nickel, Marcel Dominik; Son, Yohan; Kiefer, Berthold; Lee, Jeong Min

    2017-10-01

The purpose of the study was to investigate the clinical feasibility of free-breathing dynamic T1-weighted imaging (T1WI) using Cartesian sampling, compressed sensing, and iterative reconstruction in gadoxetic acid-enhanced liver magnetic resonance imaging (MRI). This retrospective study was approved by our institutional review board, and the requirement for informed consent was waived. A total of 51 patients at high risk of breath-holding failure underwent dynamic T1WI in a free-breathing manner using volumetric interpolated breath-hold (BH) examination with compressed sensing reconstruction (CS-VIBE) and hard gating. Timing, motion artifacts, and image quality were evaluated by 4 radiologists on a 4-point scale. For patients with low image quality scores, (XD) reconstruction was additionally performed and reviewed in the same manner. In addition, in 68.6% (35/51) of patients who had previously undergone liver MRI, image quality and motion artifacts on dynamic phases using CS-VIBE were compared with previous BH-T1WIs. In all patients, adequate arterial-phase timing was obtained at least once. Overall image quality of free-breathing T1WI was 3.30 ± 0.59 on precontrast and 2.68 ± 0.70, 2.93 ± 0.65, and 3.30 ± 0.49 on early arterial, late arterial, and portal venous phases, respectively. In 13 patients with lower than average image quality, XD-reconstructed CS-VIBE significantly reduced motion artifacts (P XD reconstruction showed fewer motion artifacts and better image quality on precontrast, arterial, and portal venous phases (P < 0.0001-0.013). Volumetric interpolated breath-hold examination with compressed sensing has the potential to provide consistent, motion-corrected free-breathing dynamic T1WI for liver MRI in patients at high risk of breath-holding failure.

  8. Soil sampling

    International Nuclear Information System (INIS)

    Fortunati, G.U.; Banfi, C.; Pasturenzi, M.

    1994-01-01

    This study attempts to survey the problems associated with techniques and strategies of soil sampling. Keeping in mind the well defined objectives of a sampling campaign, the aim was to highlight the most important aspect of representativeness of samples as a function of the available resources. Particular emphasis was given to the techniques and particularly to a description of the many types of samplers which are in use. The procedures and techniques employed during the investigations following the Seveso accident are described. (orig.)

  9. Density heterogeneity of the cratonic lithosphere

    DEFF Research Database (Denmark)

    Cherepanova, Yulia; Artemieva, Irina

    2015-01-01

    Using free-board modeling, we examine a vertically-averaged mantle density beneath the Archean-Proterozoic Siberian craton in the layer from the Moho down to base of the chemical boundary layer (CBL). Two models are tested: in Model 1 the base of the CBL coincides with the LAB, whereas in Model 2...... the base of the CBL is at a 180 km depth. The uncertainty of density model is density structure of the Siberian lithospheric mantle with a strong...... correlation between mantle density variations and the tectonic setting. Three types of cratonic mantle are recognized from mantle density anomalies. 'Pristine' cratonic regions not sampled by kimberlites have the strongest depletion with density deficit of 1.8-3.0% (and SPT density of 3.29-3.33 t/m3...

  10. Language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik

    1998-01-01

    This article has two aims: [1] to present a revised version of the sampling method that was originally proposed in 1993 by Rijkhoff, Bakker, Hengeveld and Kahrel, and [2] to discuss a number of other approaches to language sampling in the light of our own method. We will also demonstrate how our...... sampling method is used with different genetic classifications (Voegelin & Voegelin 1977, Ruhlen 1987, Grimes ed. 1997) and argue that —on the whole— our sampling technique compares favourably with other methods, especially in the case of exploratory research....

  11. Future Road Density

    Data.gov (United States)

    U.S. Environmental Protection Agency — Road density is generally highly correlated with amount of developed land cover. High road densities usually indicate high levels of ecological disturbance. More...

  12. Achieving maximum baryon densities

    International Nuclear Information System (INIS)

    Gyulassy, M.

    1984-01-01

In continuing work on nuclear stopping power in the energy range E_lab ≈ 10 GeV/nucleon, calculations were made of the energy and baryon densities that could be achieved in uranium-uranium collisions. Results are shown. The energy density reached could exceed 2 GeV/fm³ and baryon densities could reach as high as ten times the normal nuclear density

  13. Crowding and Density

    Science.gov (United States)

    Design and Environment, 1972

    1972-01-01

Three-part report pinpointing problems and uncovering solutions for the dual concepts of density (ratio of people to space) and crowding (psychological response to density). Section one, "A Primer on Crowding," reviews new psychological and social findings; section two, "Density in the Suburbs," shows conflict between status quo and increased…

  14. Sample preparation

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

Sample preparation prior to HPLC analysis is certainly one of the most important steps to consider in trace or ultratrace analysis. For many years scientists have tried to simplify the sample preparation process. It is rarely possible to inject a neat liquid sample, and seldom is preparation no more complex than dissolution of the sample in a given solvent. Dissolution alone can remove insoluble materials, which is especially helpful with samples in complex matrices, provided other interactions do not affect extraction: a large number of components will not dissolve and are therefore eliminated by a simple filtration step. In most cases, however, sample preparation is not as simple as dissolving the component of interest. At times enrichment is necessary; that is, the component of interest is present in a very large volume or mass of material and must be concentrated in some manner so that a small volume of the concentrated or enriched sample can be injected into the HPLC. 88 refs

  15. Sampling Development

    Science.gov (United States)

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…

  16. Environmental sampling

    International Nuclear Information System (INIS)

    Puckett, J.M.

    1998-01-01

    Environmental Sampling (ES) is a technology option that can have application in transparency in nuclear nonproliferation. The basic process is to take a sample from the environment, e.g., soil, water, vegetation, or dust and debris from a surface, and through very careful sample preparation and analysis, determine the types, elemental concentration, and isotopic composition of actinides in the sample. The sample is prepared and the analysis performed in a clean chemistry laboratory (CCL). This ES capability is part of the IAEA Strengthened Safeguards System. Such a Laboratory is planned to be built by JAERI at Tokai and will give Japan an intrinsic ES capability. This paper presents options for the use of ES as a transparency measure for nuclear nonproliferation

  17. Statistical theory of electron densities

    International Nuclear Information System (INIS)

    Pratt, L.R.; Hoffman, G.G.; Harris, R.A.

    1988-01-01

An optimized Thomas–Fermi theory is proposed which retains the simplicity of the original theory and is a suitable reference theory for Monte Carlo density functional treatments of condensed materials. The key ingredient of the optimized theory is a neighborhood-sampled potential which contains effects of the inhomogeneities in the one-electron potential. In contrast to the traditional Thomas–Fermi approach, the optimized theory predicts a finite electron density in the vicinity of a nucleus. Consideration of the example of an ideal electron gas subject to a central Coulomb field indicates that implementation of the approach is straightforward. The optimized theory is found to fail completely when a classically forbidden region is approached. However, these circumstances are not of primary interest for calculations of interatomic forces. It is shown how the energy functional of the density may be constructed by integration of a generalized Hellmann–Feynman relation. This generalized Hellmann–Feynman relation proves to be equivalent to the variational principle of density functional quantum mechanics, and, therefore, the present density theory can be viewed as a variational consequence of the constructed energy functional
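The traditional local relation that the optimized theory modifies can be made concrete. In Hartree atomic units, for an electron gas in the bare Coulomb potential V(r) = −Z/r, the plain Thomas–Fermi density is n(r) = [2(μ − V(r))]^{3/2} / (3π²). The sketch below is an illustration of that traditional expression only (not the neighborhood-sampled potential of the abstract); it shows both behaviors the abstract contrasts: divergence as r → 0 and breakdown in the classically forbidden region μ < V(r).

```python
import math

def tf_density(r, Z=1.0, mu=0.0):
    """Traditional local Thomas-Fermi density (Hartree atomic units)
    for the bare Coulomb potential V(r) = -Z/r:
        n(r) = [2(mu - V(r))]^{3/2} / (3 pi^2).
    Returns 0 in the classically forbidden region mu < V(r), where
    the plain expression is undefined."""
    kinetic = mu + Z / r                 # mu - V(r)
    if kinetic <= 0.0:
        return 0.0
    return (2.0 * kinetic) ** 1.5 / (3.0 * math.pi ** 2)

# the traditional density grows like r**-1.5 toward the nucleus,
# which is the divergence the optimized theory avoids
n_half = tf_density(0.5, Z=1.0)          # (2*2)**1.5 / (3*pi**2)
```

For a bound system (μ < 0) the classical turning point sits at r = Z/|μ|; beyond it the expression vanishes, mirroring the failure the abstract reports for classically forbidden regions.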

  18. Spherical sampling

    CERN Document Server

    Freeden, Willi; Schreiner, Michael

    2018-01-01

This book presents, in a consistent and unified overview, results and developments in the field of today's spherical sampling, particularly as arising in the mathematical geosciences. Although the book often refers to original contributions, the authors have made them accessible to (graduate) students and scientists not only from mathematics but also from the geosciences and geoengineering. Building a library of topics in spherical sampling theory, it shows how advances in this theory lead to new discoveries in mathematical, geodetic and geophysical branches as well as other scientific fields such as neuro-medicine. A must-read for everybody working in the area of spherical sampling.

  19. Probability densities and Lévy densities

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler

For positive Lévy processes (i.e. subordinators) formulae are derived that express the probability density or the distribution function in terms of power series in time t. The applicability of the results to finance and to turbulence is briefly indicated.

  20. Fluidic sampling

    International Nuclear Information System (INIS)

    Houck, E.D.

    1992-01-01

This paper covers the development of the fluidic sampler and its testing in a fluidic transfer system. The major findings of this paper are as follows. Fluidic jet samplers can dependably produce unbiased samples of acceptable volume. The fluidic transfer system with a fluidic sampler in-line will transfer water to a net lift of 37.2–39.9 feet at an average rate of 0.02–0.05 gpm (77–192 cc/min). The fluidic sample system circulation rate compares very favorably with the normal 0.016–0.026 gpm (60–100 cc/min) circulation rate that is commonly produced for this lift and solution with the jet-assisted airlift sample system normally used at ICPP. The volume of the sample taken with a fluidic sampler depends on the motive pressure to the fluidic sampler, the sample bottle size, and the fluidic sampler jet characteristics. The fluidic sampler should be supplied with fluid at a motive pressure of 140–150 percent of the peak vacuum-producing motive pressure for the jet in the sampler. Fluidic transfer systems should be operated by emptying a full pumping chamber to nearly empty or empty during the pumping cycle; this maximizes the solution transfer rate

  1. Spatial analysis of NDVI readings with difference sampling density

    Science.gov (United States)

    Advanced remote sensing technologies provide research an innovative way of collecting spatial data for use in precision agriculture. Sensor information and spatial analysis together allow for a complete understanding of the spatial complexity of a field and its crop. The objective of the study was...

  2. Procedure for Uranium-Molybdenum Density Measurements and Porosity Determination

    Energy Technology Data Exchange (ETDEWEB)

    Prabhakaran, Ramprashad [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Devaraj, Arun [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Joshi, Vineet V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lavender, Curt A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-08-13

    The purpose of this document is to provide guidelines for preparing uranium-molybdenum (U-Mo) specimens, performing density measurements, and computing sample porosity. Typical specimens (solids) will be sheared to small rectangular foils, disks, or pieces of metal. A mass balance, solid density determination kit, and a liquid of known density will be used to determine the density of U-Mo specimens using the Archimedes principle. A standard test weight of known density would be used to verify proper operation of the system. By measuring the density of a U-Mo sample, it is possible to determine its porosity.
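The Archimedes-principle computation the procedure describes reduces to a few lines: the buoyant mass loss equals the mass of displaced liquid, which gives the specimen volume and hence its density, and the density deficit relative to theoretical density gives the porosity. The specimen masses, liquid density, and theoretical U-Mo density below are hypothetical round numbers for illustration only.

```python
def archimedes_density(mass_air, mass_liquid, rho_liquid):
    """Density by the Archimedes principle: the apparent mass loss
    in the liquid equals the mass of liquid displaced, so
    volume = (m_air - m_liquid) / rho_liquid."""
    volume = (mass_air - mass_liquid) / rho_liquid
    return mass_air / volume

def porosity_percent(measured_density, theoretical_density):
    """Total porosity inferred from the density deficit relative
    to the fully dense (theoretical) material."""
    return 100.0 * (1.0 - measured_density / theoretical_density)

# hypothetical specimen: 1.5000 g in air, 1.4094 g suspended in
# water (rho = 0.9970 g/cm^3 near 25 C); the theoretical density
# of 17.0 g/cm^3 is an assumed value for illustration
rho = archimedes_density(1.5000, 1.4094, 0.9970)   # ~16.5 g/cm^3
por = porosity_percent(rho, 17.0)                  # ~2.9 %
```

In practice the liquid density would be taken from a temperature-corrected table and the balance verified against the standard test weight mentioned in the procedure before any specimen is measured.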

  3. Why Density Dependent Propulsion?

    Science.gov (United States)

    Robertson, Glen A.

    2011-01-01

In 2004 Khoury and Weltman produced a density-dependent cosmology theory they call the Chameleon because, by its nature, it is hidden within known physics. The Chameleon theory has implications for dark matter/energy and universe-acceleration properties, which imply a new force mechanism with ties to the far and local density environment. In this paper, the Chameleon Density Model is discussed in terms of propulsion toward new propellant-less engineering methods.

  4. Density limits in Tokamaks

    International Nuclear Information System (INIS)

    Tendler, M.

    1984-06-01

    The energy loss from a tokamak plasma due to neutral hydrogen radiation and recycling is of great importance for the energy balance at the periphery. It is shown that the requirement for thermal equilibrium implies a constraint on the maximum attainable edge density. The relation to other density limits is discussed. The average plasma density is shown to be a strong function of the refuelling deposition profile. (author)

  5. Nuclear Level Densities

    International Nuclear Information System (INIS)

    Grimes, S.M.

    2005-01-01

    Recent research in the area of nuclear level densities is reviewed. The current interest in nuclear astrophysics and in structure of nuclei off of the line of stability has led to the development of radioactive beam facilities with larger machines currently being planned. Nuclear level densities for the systems used to produce the radioactive beams influence substantially the production rates of these beams. The modification of level-density parameters near the drip lines would also affect nucleosynthesis rates and abundances

  6. Measurement of true density

    International Nuclear Information System (INIS)

    Carr-Brion, K.G.; Keen, E.F.

    1982-01-01

    System for determining the true density of a fluent mixture such as a liquid slurry, containing entrained gas, such as air comprises a restriction in pipe through which at least a part of the mixture is passed. Density measuring means such as gamma-ray detectors and source measure the apparent density of the mixture before and after its passage through the restriction. Solid-state pressure measuring devices are arranged to measure the pressure in the mixture before and after its passage through the restriction. Calculating means, such as a programmed microprocessor, determine the true density from these measurements using relationships given in the description. (author)
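The "relationships given in the description" are not reproduced in this abstract, but a plausible reconstruction follows from two explicitly assumed simplifications: the entrained gas compresses isothermally (Boyle's law) and contributes negligible mass. Then ρ_app = (1 − α)ρ_true with α₁P₁ = α₂P₂ for the gas volume fractions at the two measurement points, which solves to ρ_true = (ρ₁P₁ − ρ₂P₂)/(P₁ − P₂). This is an illustrative derivation, not necessarily the exact relation used by the described system.

```python
def true_density(rho1, p1, rho2, p2):
    """True (gas-free) density of a gas-laden slurry from apparent
    density and absolute pressure measured before and after the
    restriction. Assumes isothermal ideal-gas compression of the
    entrained gas and negligible gas mass (illustrative model):
        rho_true = (rho1*p1 - rho2*p2) / (p1 - p2)."""
    return (rho1 * p1 - rho2 * p2) / (p1 - p2)

# consistency check: a slurry of true density 2.5 g/cm^3 carrying
# 5% gas by volume at 100 kPa compresses to 2.5% gas at 200 kPa,
# so the apparent densities are 2.375 and 2.4375 g/cm^3
rho_t = true_density(2.375, 100.0, 2.4375, 200.0)   # recovers 2.5
```

The worked numbers show why two measurement points are needed: a single apparent density cannot separate the gas fraction from the true density, but the pressure-driven change between the two readings can.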

  7. Electron densities in planetary nebulae

    International Nuclear Information System (INIS)

    Stanghellini, L.; Kaler, J.B.

    1989-01-01

Electron densities for 146 planetary nebulae have been obtained by analyzing a large sample of forbidden lines, interpolating theoretical curves obtained from solutions of the five-level atom using up-to-date collision strengths and transition probabilities. Electron temperatures were derived from forbidden N II and/or forbidden O III lines or were estimated from the He II 4686 A line strengths. The forbidden O II densities are generally lower than those from forbidden Cl III by an average factor of 0.65. For data sets in which forbidden O II and forbidden S II were observed in common, the forbidden O II values drop to 0.84 times those of forbidden S II, implying that the outermost parts of the nebulae might have elevated densities. The forbidden Cl III and forbidden Ar IV densities show the best correlation, especially where they have been obtained from common data sets. The data give results within 30 percent of one another, assuming homogeneous nebulae. 106 refs

  8. Recycling of WEEE by magnetic density separation

    NARCIS (Netherlands)

    Hu, B.; Giacometti, L.; Di Maio, F.; Rem, P.C.

    2011-01-01

    The paper introduces a new recycling method of WEEE: Magnetic Density Separation. By using this technology, both grade and recovery rate of recycled products are over 90%. Good separations are not only observed in relatively big WEEE samples, but also in samples with smaller sizes or electrical

  9. Sampling methods

    International Nuclear Information System (INIS)

    Loughran, R.J.; Wallbrink, P.J.; Walling, D.E.; Appleby, P.G.

    2002-01-01

    Methods for the collection of soil samples to determine levels of 137Cs and other fallout radionuclides, such as excess 210Pb and 7Be, will depend on the aims of the project, site and soil characteristics, analytical capacity, the total number of samples that can be analysed and the sample mass required. The latter two will depend partly on detector type and capabilities. A variety of field methods have been developed for different field conditions and circumstances over the past twenty years, many of them inherited or adapted from soil science and sedimentology. The use of 137Cs in erosion studies has been widely developed, while the application of fallout 210Pb and 7Be is still developing. Although it is possible to measure these nuclides simultaneously, it is common for experiments to be designed around the use of 137Cs alone. Caesium studies typically involve comparison of the inventories found at eroded or sedimentation sites with that of a 'reference' site. An accurate characterization of the depth distribution of these fallout nuclides is often required in order to apply and/or calibrate the conversion models. However, depending on the tracer involved, the depth distribution, and thus the sampling resolution required to define it, differs. For example, a depth resolution of 1 cm is often adequate when using 137Cs. However, fallout 210Pb and 7Be commonly have very strong surface maxima that decrease exponentially with depth, and fine depth increments are required at or close to the soil surface. Consequently, different depth-incremental sampling methods are required for different fallout radionuclides. Geomorphic investigations also frequently require determination of the depth distribution of fallout nuclides on slopes and at depositional sites, as well as their total inventories.

  10. On density forecast evaluation

    NARCIS (Netherlands)

    Diks, C.

    2008-01-01

    Traditionally, probability integral transforms (PITs) have been popular means for evaluating density forecasts. For an ideal density forecast, the PITs should be uniformly distributed on the unit interval and independent. However, this is only a necessary condition, and not a sufficient one, as
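The PIT idea described above is simple to sketch. Assuming one-step-ahead forecast CDFs F_t and observations y_t (the function and variable names below are illustrative, not from the paper), the transforms z_t = F_t(y_t) should look uniform on [0, 1] for an ideal forecast, and a Kolmogorov-Smirnov distance from uniformity is one elementary necessary check:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of a normal density forecast, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def pits(observations, forecast_cdfs):
    """Probability integral transforms z_t = F_t(y_t), one forecast CDF
    per observation."""
    return [F(y) for y, F in zip(observations, forecast_cdfs)]

def ks_uniform_statistic(z):
    """Kolmogorov-Smirnov distance of the PITs from the U(0,1) CDF.
    Uniformity is a necessary, not sufficient, condition for an ideal
    forecast -- independence must be checked separately, as the abstract notes."""
    z = sorted(z)
    n = len(z)
    return max(max(z[i] - i / n, (i + 1) / n - z[i]) for i in range(n))
```

A large statistic (relative to the usual KS critical values) flags a mis-specified forecast density even before any independence test is run.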

  11. Analysing designed experiments in distance sampling

    Science.gov (United States)

    Stephen T. Buckland; Robin E. Russell; Brett G. Dickson; Victoria A. Saab; Donal N. Gorman; William M. Block

    2009-01-01

    Distance sampling is a survey technique for estimating the abundance or density of wild animal populations. Detection probabilities of animals inherently differ by species, age class, habitats, or sex. By incorporating the change in an observer's ability to detect a particular class of animals as a function of distance, distance sampling leads to density estimates...
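As an illustration of the idea, a line-transect density estimate with a half-normal detection function can be sketched as follows. The estimator D = n / (2 L w P), with P the average detection probability inside the truncated strip of half-width w, is standard in distance sampling; the closed-form scale estimate below ignores truncation (valid when w is large relative to sigma), a simplification chosen here for brevity and not taken from the paper:

```python
import math

def halfnormal_sigma(distances):
    """MLE of the half-normal detection-function scale g(x) = exp(-x^2/(2 sigma^2)),
    ignoring truncation at the strip edge (assumed w >> sigma)."""
    return math.sqrt(sum(x * x for x in distances) / len(distances))

def line_transect_density(distances, total_length, width):
    """Animals per unit area from perpendicular detection distances recorded
    along line transects of combined length `total_length`, truncated at `width`."""
    n = len(distances)
    sigma = halfnormal_sigma(distances)
    # average detection probability in [0, width]:
    # P = (1/w) * integral_0^w exp(-x^2/(2 sigma^2)) dx
    p = (math.erf(width / (sigma * math.sqrt(2.0)))
         * sigma * math.sqrt(math.pi / 2.0)) / width
    return n / (2.0 * total_length * width * p)
```

Because P is at most 1, the estimate is always at least the naive strip count n / (2 L w); the correction grows as detectability falls off with distance.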

  12. A SURVEY OF CORONAL CAVITY DENSITY PROFILES

    International Nuclear Information System (INIS)

    Fuller, J.; Gibson, S. E.

    2009-01-01

    Coronal cavities are common features of the solar corona that appear as darkened regions at the base of coronal helmet streamers in coronagraph images. Their darkened appearance indicates that they are regions of lowered density embedded within the comparatively higher density helmet streamer. Despite interfering projection effects of the surrounding helmet streamer (which we refer to as the cavity rim), Fuller et al. have shown that under certain conditions it is possible to use a Van de Hulst inversion of white-light polarized brightness (pB) data to calculate the electron density of both the cavity and cavity rim plasma. In this article, we apply minor modifications to the methods of Fuller et al. in order to improve the accuracy and versatility of the inversion process, and use the new methods to calculate density profiles for both the cavity and cavity rim in 24 cavity systems. We also examine trends in cavity morphology and how departures from the model geometry affect our density calculations. The density calculations reveal that in all 24 cases the cavity plasma has a flatter density profile than the plasma of the cavity rim, meaning that the cavity has a larger density depletion at low altitudes than it does at high altitudes. We find that the mean cavity density is over four times greater than that of a coronal hole at an altitude of 1.2 R_sun, and that every cavity in the sample is over twice as dense as a coronal hole at this altitude. Furthermore, we find that different cavity systems near solar maximum span a greater range in density at 1.2 R_sun than do cavity systems near solar minimum, with a slight trend toward higher densities for systems nearer to solar maximum. Finally, we found no significant correlation of cavity density properties with cavity height (indeed, cavities show remarkably similar density depletions), except for the two smallest cavities, which show significantly greater depletion.

  13. Current density tensors

    Science.gov (United States)

    Lazzeretti, Paolo

    2018-04-01

    It is shown that nonsymmetric second-rank current density tensors, related to the current densities induced by magnetic fields and nuclear magnetic dipole moments, are fundamental properties of a molecule. Together with magnetizability, nuclear magnetic shielding, and nuclear spin-spin coupling, they completely characterize its response to magnetic perturbations. Gauge invariance, resolution into isotropic, deviatoric, and antisymmetric parts, and contributions of current density tensors to magnetic properties are discussed. The components of the second-rank tensor properties are rationalized via relationships explicitly connecting them to the direction of the induced current density vectors and to the components of the current density tensors. The contribution of the deviatoric part to the average value of magnetizability, nuclear shielding, and nuclear spin-spin coupling, uniquely determined by the antisymmetric part of current density tensors, vanishes identically. The physical meaning of isotropic and anisotropic invariants of current density tensors has been investigated, and the connection between anisotropy magnitude and electron delocalization has been discussed.

  14. Impact ionization in GaAs: A screened exchange density-functional approach

    International Nuclear Information System (INIS)

    Picozzi, S.; Asahi, R.; Geller, C.B.; Continenza, A.; Freeman, A.J.

    2001-01-01

    Results are presented of a fully ab initio calculation of impact ionization rates in GaAs within the density functional theory framework, using a screened-exchange formalism and the highly precise all-electron full-potential linearized augmented plane wave method. The calculated impact ionization rates show a marked orientation dependence in k space, indicating the strong restrictions imposed by the conservation of energy and momentum. This anisotropy diminishes as the impacting electron energy increases. A Keldysh-type fit performed on the energy-dependent rate shows a rather soft edge and a threshold energy greater than the direct band gap. The consistency with available Monte Carlo and empirical-pseudopotential calculations shows the reliability of our approach and paves the way to ab initio calculations of pair-production rates in new and more complex materials.

  15. correlation between maximum dry density and cohesion of ...

    African Journals Online (AJOL)

    HOD

    investigation on sandy soils to determine the correlation between relative density and compaction test parameter. Using twenty soil samples, they were able to develop correlations between relative density, coefficient of uniformity and maximum dry density. Khafaji [5] using standard proctor compaction method carried out an ...

  16. Intrinsic-density functionals

    International Nuclear Information System (INIS)

    Engel, J.

    2007-01-01

    The Hohenberg-Kohn theorem and Kohn-Sham procedure are extended to functionals of the localized intrinsic density of a self-bound system such as a nucleus. After defining the intrinsic-density functional, we modify the usual Kohn-Sham procedure slightly to evaluate the mean-field approximation to the functional, and carefully describe the construction of the leading corrections for a system of fermions in one dimension with a spin-degeneracy equal to the number of particles N. Despite the fact that the corrections are complicated and nonlocal, we are able to construct a local Skyrme-like intrinsic-density functional that, while different from the exact functional, shares with it a minimum value equal to the exact ground-state energy at the exact ground-state intrinsic density, to next-to-leading order in 1/N. We briefly discuss implications for real Skyrme functionals

  17. Density functional theory

    International Nuclear Information System (INIS)

    Das, M.P.

    1984-07-01

    The state of the art of the density functional formalism (DFT) is reviewed. The theory is quantum statistical in nature; its simplest version is the well-known Thomas-Fermi theory. The DFT is a powerful formalism in which one can treat the effect of interactions in inhomogeneous systems. After some introductory material, the DFT is outlined from the two basic theorems, and various generalizations of the theorems appropriate to several physical situations are pointed out. Next, various approximations to the density functionals are presented and some practical schemes, discussed; the approximations include an electron gas of almost constant density and an electron gas of slowly varying density. Then applications of DFT in various diverse areas of physics (atomic systems, plasmas, liquids, nuclear matter) are mentioned, and its strengths and weaknesses are pointed out. In conclusion, more recent developments of DFT are indicated

  18. Low Density Supersonic Decelerators

    Data.gov (United States)

    National Aeronautics and Space Administration — The Low-Density Supersonic Decelerator project will demonstrate the use of inflatable structures and advanced parachutes that operate at supersonic speeds to more...

  19. density functional theory approach

    Indian Academy of Sciences (India)

    YOGESH ERANDE

    2017-07-27

    Jul 27, 2017 ... a key role in all optical switching devices, since their optical properties can be .... optimized in the gas phase using Density Functional Theory (DFT).39 The ...... The Mediation of Electrostatic Effects by Solvents J. Am. Chem.

  20. Bone mineral density test

    Science.gov (United States)

    BMD test; Bone density test; Bone densitometry; DEXA scan; DXA; Dual-energy x-ray absorptiometry; p-DEXA; Osteoporosis - BMD ... need to undress. This scan is the best test to predict your risk of fractures, especially of ...

  1. Density scaling for multiplets

    International Nuclear Information System (INIS)

    Nagy, A

    2011-01-01

    Generalized Kohn-Sham equations are presented for lowest-lying multiplets. The way of treating non-integer particle numbers is coupled with an earlier method of the author. The fundamental quantity of the theory is the subspace density. The Kohn-Sham equations are similar to the conventional Kohn-Sham equations. The difference is that the subspace density is used instead of the density and the Kohn-Sham potential is different for different subspaces. The exchange-correlation functional is studied using density scaling. It is shown that there exists a value of the scaling factor ζ for which the correlation energy disappears. Generalized OPM and Krieger-Li-Iafrate (KLI) methods incorporating correlation are presented. The ζKLI method, being as simple as the original KLI method, is proposed for multiplets.

  2. Fission level densities

    International Nuclear Information System (INIS)

    Maslov, V.M.

    1998-01-01

    Fission level densities (or fissioning nucleus level densities at fission saddle deformations) are required for statistical model calculations of actinide fission cross sections. The Back-shifted Fermi-Gas Model, the Constant Temperature Model and the Generalized Superfluid Model (GSM) are widely used for the description of level densities at stable deformations. These models provide approximately identical level density descriptions at excitations close to the neutron binding energy. It is at low excitation energies that they are discrepant, while this energy region is crucial for fission cross section calculations. A drawback of the back-shifted Fermi-gas model and the traditional constant temperature model approaches is that it is difficult to include pair correlations, collective effects and shell effects in a consistent way. Pair, shell and collective properties of the nucleus do not reduce simply to a renormalization of the level density parameter a, but influence the energy dependence of level densities. These effects turn out to be important because they seem to depend upon the deformation at either the equilibrium or the saddle point. They are easily introduced within the GSM approach. Fission barriers are another key ingredient in fission cross section calculations. Fission level density and barrier parameters are strongly interdependent. This is the reason for including fission barrier parameters along with the fission level densities in the Starter File. The recommended file is maslov.dat - fission barrier parameters. The recent version of actinide fission barrier data obtained in Obninsk (obninsk.dat) should only be considered as a guide for the selection of initial parameters. These data are included in the Starter File, together with the fission barrier parameters recommended by CNDC (beijing.dat), for completeness. (author)

  3. Density-wave oscillations

    International Nuclear Information System (INIS)

    Belblidia, L.A.; Bratianu, C.

    1979-01-01

    Boiling flow in a steam generator, a water-cooled reactor, and other multiphase processes can be subject to instabilities. The most predominant instabilities appear to be the so-called density-wave oscillations. They can cause difficulties for three main reasons: they may induce burnout, they may cause mechanical vibrations of components, and they create system control problems. A comprehensive review is presented of experimental and theoretical studies concerning density-wave oscillations. (author)

  4. Density of liquid Ytterbium

    International Nuclear Information System (INIS)

    Stankus, S.V.; Basin, A.S.

    1983-01-01

    Results are presented for measurements of the density of metallic ytterbium in the liquid state and at the liquid-solid phase transition. Based on the numerical data obtained, the coefficient of thermal expansion β of the liquid and the density discontinuity on melting Δρ_m are calculated. The magnitudes of β and Δρ_m for the heavy lanthanides are compared.

  5. Negative Ion Density Fronts

    International Nuclear Information System (INIS)

    Igor Kaganovich

    2000-01-01

    Negative ions tend to stratify in electronegative plasmas with hot electrons (electron temperature Te much larger than ion temperature Ti, Te >> Ti). The boundary separating a plasma containing negative ions from a plasma without negative ions is usually thin, so that the negative ion density falls rapidly to zero, forming a negative ion density front. We review theoretical, experimental and numerical results giving the spatio-temporal evolution of negative ion density fronts during plasma ignition, the steady state, and extinction (afterglow). During plasma ignition, negative ion fronts result from the breaking of smooth plasma density profiles during nonlinear convection. In a steady-state plasma, the fronts are boundary layers, with steepening of ion density profiles likewise due to nonlinear convection. During plasma extinction, however, the ion fronts are of a completely different nature. Negative ions diffuse freely in the plasma core (no convection), whereas the negative ion front propagates towards the chamber walls with a nearly constant velocity. The concept of fronts turns out to be very effective in the analysis of plasma density profile evolution in strongly non-isothermal plasmas.

  6. Plano amostral para cálculo de densidade larvária de Aedes aegypti e Aedes albopictus no Estado de São Paulo, Brasil Sampling design for larval density computation of Aedes aegypti and Aedes albopictus in the State of S. Paulo, Brazil

    Directory of Open Access Journals (Sweden)

    Maria Cecília G.P. Alves

    1991-08-01

    Full Text Available The Yellow Fever and Dengue Vector Control Program, developed by the Superintendency for the Control of Endemic Diseases of the State of S. Paulo, Brazil, calls for surveys to assess the larval density of Aedes aegypti and Aedes albopictus in buildings of municipalities with domiciliary infestation. The sampling plan applied since October 1987 in the municipalities of the Presidente Prudente region is described. Infestation is monitored using the Breteau Index. In the infested municipalities, samples of buildings are drawn monthly and independently to estimate the index. The sample is stratified, and the elementary units are selected by two-stage cluster sampling: blocks and buildings. The sample size was defined by estimating the intracluster correlation coefficient and the relative variance per element from surveys carried out earlier in municipalities of the São José do Rio Preto Regional Service. The plan proposes that the sample sizes be updated periodically according to the values obtained for the Breteau Index estimator and its variance in previous months.

  7. Acoustic levitation methods for density measurements

    Science.gov (United States)

    Trinh, E. H.; Hsu, C. J.

    1986-01-01

    The capability of ultrasonic levitators operating in air to perform density measurements has been demonstrated. The remote determination of the density of ordinary liquids as well as of low-density solid metals can be carried out using levitated samples with sizes on the order of a few millimeters, at a frequency of 20 kHz. Two basic methods may be used. The first is derived from a previously known technique developed for acoustic levitation in liquid media, and is based on the static equilibrium position of levitated samples in the Earth's gravitational field. The second approach relies on the dynamic interaction between a levitated sample and the acoustic field. The first technique appears more accurate (1 percent uncertainty), but the latter method is directly applicable to a near gravity-free environment such as that found in space.

  8. High throughput nonparametric probability density estimation.

    Science.gov (United States)

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample-size-invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists underfitting and overfitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.

  9. A New Cryogenic Sample Manipulator For SRC's Scienta 2002 System

    International Nuclear Information System (INIS)

    Gundelach, Chad T.; Fisher, Mike V.; Hoechst, Hartmut

    2004-01-01

    We discuss the first bench tests of a sample manipulator which was recently designed at SRC for the Scienta 2002 User system. The manipulator concept utilizes the 10 deg. angular window of the Scienta in the horizontal plane (angle dispersion) by rotating the sample normal around the vertical axis while angular scans along the vertical axis (energy dispersion) are continuous within ±30 deg. relative to the electron lens by rotating the sample around the horizontal axis. With this concept it is possible to precisely map the entire two-dimensional k-space of a crystal by means of stitching together 10 deg. wide stripes centered +15 deg. to -50 deg. relative to the sample normal. Three degrees of translational freedom allow positioning the sample surface at the focal point of the analyzer. Two degrees of rotational freedom are available at this position for manipulating the sample. Samples are mounted to a standard holder and transferred to the manipulator via a load-lock system attached to a prep chamber. The manipulator is configured with a cryogenic cold head, an electrical heater, and a temperature sensor permitting continuous closed-loop operation for 20-380 K

  10. Assessment of wood density of seven clones of Eucalyptus grandis ...

    African Journals Online (AJOL)

    With the objective of evaluating the correlation of wood basic density with age in seven Eucalyptus grandis clones planted in Brazil, five trees in each clone were sampled at the ages of 0.5, 1.5, 2.5, 3.5, 4.5 and 7.5 years. The analysis of these samples showed that the intraclonal variation of the basic density (except for 0.5, ...

  11. VARIATION OF PATHOGEN DENSITIES IN URBAN STORMWATER RUNOFF WITH LAND USE

    Science.gov (United States)

    Stormwater runoff samples were collected from outfalls draining small municipal separate storm sewer systems. The samples were collected from three land use areas (high-density residential, low-density residential, and landscaped commercial). The concentrations of organisms in ...

  12. Method of measuring density of gas in a vessel

    International Nuclear Information System (INIS)

    Shono, Kosuke.

    1981-01-01

    Purpose: To accurately measure the density of a gas in a vessel even during a loss-of-coolant accident in a BWR type reactor. Method: When at least one of the pressure or the temperature of gas in a vessel exceeds the usable range of a gas density measuring instrument due to a loss-of-coolant accident, the gas in the vessel is sampled, and the pressure or the temperature of the sampled gas is measured under conditions matched to the usable range of the gas density measuring instrument. Hydrogen and oxygen gas densities exceeding the usable range of the gas density measuring instrument are calculated from the measured values by the following formulae: C'_O = P'_T·C_O/P_T and C'_H = C''_H·C'_O/C''_O, where C'_O, P'_T and C'_H represent the oxygen density, the total pressure and the hydrogen density of the gas in the vessel after the respective gas density measuring instruments exceed their usable ranges; C_O and P_T represent the oxygen density and the total pressure of the gas in the vessel before the instruments exceeded their usable ranges; and C''_H and C''_O represent the hydrogen and oxygen densities of the sampled gas. (Kamimura, M.)
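Reading the transcribed relations with primes marking post-accident (out-of-range) values, which is an interpretation of the garbled subscript markup rather than something the source confirms, the two corrections are simple ratios and can be sketched directly:

```python
def oxygen_density(p_total_now, c_oxygen_before, p_total_before):
    """C'_O = P'_T * C_O / P_T: scale the last in-range oxygen density by the
    ratio of total pressures (implicitly an ideal-gas, fixed-composition
    assumption)."""
    return p_total_now * c_oxygen_before / p_total_before

def hydrogen_density(c_h_sampled, c_o_now, c_o_sampled):
    """C'_H = C''_H * C'_O / C''_O: infer the in-vessel hydrogen density from
    the hydrogen/oxygen ratio measured in the sampled gas."""
    return c_h_sampled * c_o_now / c_o_sampled
```

The sampled-gas measurement thus only needs to get the hydrogen-to-oxygen ratio right; absolute calibration is carried by the pre-accident in-vessel readings.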

  13. CRISS power spectral density

    International Nuclear Information System (INIS)

    Vaeth, W.

    1979-04-01

    The correlation of signal components at different frequencies, such as higher harmonics, cannot be detected by a normal power spectral density measurement, since this technique correlates only components at the same frequency. This paper describes a special method for measuring the correlation of two signal components at different frequencies: the CRISS power spectral density. From this new function in frequency analysis, the correlation of two components can be determined quantitatively, whether they stem from one signal or from two different signals. The principle of the method, suitable for the higher harmonics of a signal as well as for any other frequency combination, is shown for the digital frequency analysis technique. Two examples of CRISS power spectral densities demonstrate the operation of the new method. (orig.) [de
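The abstract does not give the estimator itself, but the basic idea of correlating components at two different frequencies can be sketched by segmenting a signal, taking one DFT coefficient per frequency per segment, and correlating the magnitudes across segments. This is a simplified stand-in for the cross-frequency idea, not the author's CRISS definition:

```python
import cmath
import math

def dft_bin(segment, k):
    """Single normalized DFT coefficient X_k of one segment."""
    n = len(segment)
    return sum(x * cmath.exp(-2j * math.pi * k * t / n)
               for t, x in enumerate(segment)) / n

def cross_frequency_correlation(signal, seg_len, k1, k2):
    """Pearson correlation of the magnitudes of two different frequency
    components, estimated over successive non-overlapping segments."""
    segs = [signal[i:i + seg_len]
            for i in range(0, len(signal) - seg_len + 1, seg_len)]
    a = [abs(dft_bin(s, k1)) for s in segs]
    b = [abs(dft_bin(s, k2)) for s in segs]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da > 0 and db > 0 else 0.0
```

A fundamental and a harmonic whose amplitudes rise and fall together yield a correlation near 1, exactly the situation an ordinary power spectral density cannot distinguish from independent components.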

  14. High density dispersion fuel

    International Nuclear Information System (INIS)

    Hofman, G.L.

    1996-01-01

    A fuel development campaign that results in an aluminum plate-type fuel of unlimited LEU burnup capability with a uranium loading of 9 g/cm³ of meat should be considered an unqualified success. The current worldwide approved and accepted highest loading is 4.8 g/cm³ with U₃Si₂ as fuel. High-density uranium compounds offer no real density advantage over U₃Si₂ and have less desirable fabrication and performance characteristics as well. Of the higher-density compounds, U₃Si has approximately a 30% higher uranium density, but the density of the U₆X compounds would yield the factor of 1.5 needed to achieve a 9 g/cm³ uranium loading. Unfortunately, irradiation tests proved these peritectic compounds have poor swelling behavior. It is for this reason that the authors are turning to uranium alloys. The reason pure uranium was not seriously considered as a dispersion fuel is mainly its high rate of growth and swelling at low temperatures. This problem was solved, at least for relatively low burnup applications in non-dispersion fuel elements, with small additions of Si, Fe, and Al. This so-called adjusted uranium has nearly the same density as pure α-uranium, and it seems prudent to reconsider this alloy as a dispersant. Further modifications of uranium metal to achieve higher burnup swelling stability involve stabilization of the cubic γ phase at low temperatures where normally the α phase exists. Several low neutron capture cross section elements such as Zr, Nb, Ti and Mo accomplish this in various degrees. The challenge is to produce a suitable form of fuel powder and develop a plate fabrication procedure, as well as obtain high burnup capability through irradiation testing.

  15. Gap and density theorems

    CERN Document Server

    Levinson, N

    1940-01-01

    A typical gap theorem of the type discussed in the book deals with a set of exponential functions {e^(iλ_n x)} on an interval of the real line and explores the conditions under which this set generates the entire L² space on this interval. A typical gap theorem deals with functions f on the real line such that many Fourier coefficients of f vanish. The main goal of this book is to investigate relations between density and gap theorems and to study various cases where these theorems hold. The author also shows that density- and gap-type theorems are related to various properties.

  16. Nuclear level density

    International Nuclear Information System (INIS)

    Cardoso Junior, J.L.

    1982-10-01

    Experimental data show that the number of nuclear states increases rapidly with increasing excitation energy. The properties of highly excited nuclei are important for many nuclear reactions, mainly those that proceed via compound-nucleus processes. In this case, it is sufficient to know the statistical properties of the nuclear levels, the first of which is the nuclear level density function. Several theoretical models which describe the level density are presented. Statistical-mechanics and quantum-mechanics formalisms, as well as semi-empirical results, are analysed and discussed. (Author) [pt

  17. Polarizable Density Embedding

    DEFF Research Database (Denmark)

    Olsen, Jógvan Magnus Haugaard; Steinmann, Casper; Ruud, Kenneth

    2015-01-01

    We present a new QM/QM/MM-based model for calculating molecular properties and excited states of solute-solvent systems. We denote this new approach the polarizable density embedding (PDE) model; it represents an extension of our previously developed polarizable embedding (PE) strategy. The PDE model is a focused computational approach in which a core region of the system studied is represented by a quantum-chemical method, whereas the environment is divided into two other regions: an inner and an outer region. Molecules belonging to the inner region are described by their exact densities...

  18. Holographic magnetisation density waves

    Energy Technology Data Exchange (ETDEWEB)

    Donos, Aristomenis [Centre for Particle Theory and Department of Mathematical Sciences, Durham University,Stockton Road, Durham, DH1 3LE (United Kingdom); Pantelidou, Christiana [Departament de Fisica Quantica i Astrofisica & Institut de Ciencies del Cosmos (ICC),Universitat de Barcelona,Marti i Franques 1, 08028 Barcelona (Spain)

    2016-10-10

    We numerically construct asymptotically AdS black brane solutions of D=4 Einstein theory coupled to a scalar and two U(1) gauge fields. The solutions are holographically dual to d=3 CFTs in a constant external magnetic field along one of the U(1)'s. Below a critical temperature the system's magnetisation density becomes inhomogeneous, leading to the spontaneous formation of current density waves. We find that the transition can be of second order and that the solutions which minimise the free energy locally in the parameter space of solutions have the averaged stress tensor of a perfect fluid.

  19. K-space polarimetry of bullseye plasmon antennas.

    Science.gov (United States)

    Osorio, Clara I; Mohtashami, Abbas; Koenderink, A Femius

    2015-04-30

    Surface plasmon resonators can drastically redistribute incident light over different output wave vectors and polarizations. Examples include sub-diffraction-sized nanoapertures in metal films that beam light and nanoparticle antennas that enable efficient conversion of photons between spatial modes or helicity channels. We present a polarimetric Fourier microscope as a new experimental tool to completely characterize the angle-dependent, polarization-resolved scattering of single nanostructures. Polarimetry allows determining the full Stokes parameters from just six Fourier images. The degree of polarization and the polarization ellipse are measured for each scattering direction collected by a high-NA objective. We showcase the method on plasmonic bullseye antennas in a metal film, which are known to beam light efficiently. We find rich results for the polarization state of the beamed light, including complete conversion of the input polarization from linear to circular and from one helicity to another. In addition to uncovering new physics for plasmonic groove antennas, the described technique promises to have a large impact in nanophotonics, in particular for the investigation of a broad range of phenomena ranging from photon spin Hall effects and polarization-to-orbital-angular-momentum transfer to the design of plasmon antennas.
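
    The six-image Stokes reconstruction mentioned in this abstract can be sketched as follows; the analyzer ordering, array values, and function names are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    def stokes_from_six(i_h, i_v, i_d, i_a, i_r, i_l):
        """Stokes parameters per pixel from six polarization-analyzed Fourier
        images: horizontal, vertical, diagonal (+45 deg), antidiagonal (-45 deg),
        right- and left-circular analyzer settings."""
        s0 = i_h + i_v            # total intensity
        s1 = i_h - i_v            # horizontal/vertical linear component
        s2 = i_d - i_a            # +/-45 deg linear component
        s3 = i_r - i_l            # circular component
        return s0, s1, s2, s3

    def degree_of_polarization(s0, s1, s2, s3):
        return np.sqrt(s1 ** 2 + s2 ** 2 + s3 ** 2) / s0

    # Fully right-circular light: equal intensity in every linear analyzer,
    # all of it in the right-circular channel.
    half = np.full((2, 2), 0.5)
    s0, s1, s2, s3 = stokes_from_six(half, half, half, half,
                                     np.ones((2, 2)), np.zeros((2, 2)))
    # s3/s0 == 1 everywhere; degree of polarization == 1
    ```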

  20. Path-integral computation of superfluid densities

    International Nuclear Information System (INIS)

    Pollock, E.L.; Ceperley, D.M.

    1987-01-01

    The normal and superfluid densities are defined by the response of a liquid to sample boundary motion. The free-energy change due to uniform boundary motion can be calculated by path-integral methods from the distribution of the winding number of the paths around a periodic cell. This provides a conceptually and computationally simple way of calculating the superfluid density for any Bose system. The linear-response formulation relates the superfluid density to the momentum-density correlation function, which has a short-ranged part related to the normal density and, in the case of a superfluid, a long-ranged part whose strength is proportional to the superfluid density. These facts are discussed in the context of path-integral computations and demonstrated for liquid ⁴He along the saturated vapor-pressure curve. Below the experimental superfluid transition temperature the computed superfluid fractions agree with the experimental values to within the statistical uncertainties of a few percent in the computations. The computed transition is broadened by finite-sample-size effects.
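
    The winding-number estimator referred to above is commonly written, in three dimensions, as ρs/ρ = m⟨W²⟩L²/(3ℏ²βN). A minimal sketch of that formula, evaluated on toy winding numbers rather than real ⁴He path-integral data, might look like:

    ```python
    import numpy as np

    def superfluid_fraction(windings, m, L, beta, N, hbar=1.0):
        """Winding-number estimator in three dimensions:
        rho_s/rho = m <W^2> L^2 / (3 hbar^2 beta N), where W is the vector
        winding number of the particle paths around the periodic cell."""
        w2 = np.mean(np.sum(np.asarray(windings, dtype=float) ** 2, axis=1))
        return m * w2 * L ** 2 / (3.0 * hbar ** 2 * beta * N)

    # Toy winding-number samples for 4 configurations (not real 4He data):
    toy_windings = [[1, 0, 0], [-1, 0, 0], [0, 0, 0], [0, 0, 0]]
    frac = superfluid_fraction(toy_windings, m=1.0, L=2.0, beta=1.0, N=4)
    # frac == 1/6 for these toy numbers
    ```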

  1. TU-EF-BRA-01: NMR and Proton Density MRI of the 1D Patient

    International Nuclear Information System (INIS)

    Wolbarst, A.

    2015-01-01

    NMR, and Proton Density MRI of the 1D Patient - Anthony Wolbarst Net Voxel Magnetization, m(x,t). T1-MRI; The MRI Device - Lisa Lemen ‘Classical’ NMR; FID Imaging in 1D via k-Space - Nathan Yanasak Spin-Echo; S-E/Spin Warp in a 2D Slice - Ronald Price Magnetic resonance imaging not only reveals the structural, anatomic details of the body, as does CT, but also it can provide information on the physiological status and pathologies of its tissues, like nuclear medicine. It can display high-quality slice and 3D images of organs and vessels viewed from any perspective, with resolution better than 1 mm. MRI is perhaps most extraordinary and notable for the plethora of ways in which it can create unique forms of image contrast, reflective of fundamentally different biophysical phenomena. As with ultrasound, there is no risk from ionizing radiation to the patient or staff, since no X-rays or radioactive nuclei are involved. Instead, MRI harnesses magnetic fields and radio waves to probe the stable nuclei of the ordinary hydrogen atoms (isolated protons) occurring in water and lipid molecules within and around cells. MRI consists, in essence, of creating spatial maps of the electromagnetic environments around these hydrogen nuclei. Spatial variations in the proton milieus can be related to clinical differences in the biochemical and physiological properties and conditions of the associated tissues. Imaging of proton density (PD), and of the tissue proton spin relaxation times known as T1 and T2, all can reveal important clinical information, but they do so with approaches so dissimilar from one another that each is chosen for only certain clinical situations. T1 and T2 in a voxel are determined by different aspects of the rotations and other motions of the water and lipid molecules involved, as constrained by the local biophysical surroundings within and between its cells – and they, in turn, depend on the type of tissue and its state of health. Three other common

  2. TU-EF-BRA-01: NMR and Proton Density MRI of the 1D Patient

    Energy Technology Data Exchange (ETDEWEB)

    Wolbarst, A. [Univ Kentucky (United States)

    2015-06-15

    NMR, and Proton Density MRI of the 1D Patient - Anthony Wolbarst Net Voxel Magnetization, m(x,t). T1-MRI; The MRI Device - Lisa Lemen ‘Classical’ NMR; FID Imaging in 1D via k-Space - Nathan Yanasak Spin-Echo; S-E/Spin Warp in a 2D Slice - Ronald Price Magnetic resonance imaging not only reveals the structural, anatomic details of the body, as does CT, but also it can provide information on the physiological status and pathologies of its tissues, like nuclear medicine. It can display high-quality slice and 3D images of organs and vessels viewed from any perspective, with resolution better than 1 mm. MRI is perhaps most extraordinary and notable for the plethora of ways in which it can create unique forms of image contrast, reflective of fundamentally different biophysical phenomena. As with ultrasound, there is no risk from ionizing radiation to the patient or staff, since no X-rays or radioactive nuclei are involved. Instead, MRI harnesses magnetic fields and radio waves to probe the stable nuclei of the ordinary hydrogen atoms (isolated protons) occurring in water and lipid molecules within and around cells. MRI consists, in essence, of creating spatial maps of the electromagnetic environments around these hydrogen nuclei. Spatial variations in the proton milieus can be related to clinical differences in the biochemical and physiological properties and conditions of the associated tissues. Imaging of proton density (PD), and of the tissue proton spin relaxation times known as T1 and T2, all can reveal important clinical information, but they do so with approaches so dissimilar from one another that each is chosen for only certain clinical situations. T1 and T2 in a voxel are determined by different aspects of the rotations and other motions of the water and lipid molecules involved, as constrained by the local biophysical surroundings within and between its cells – and they, in turn, depend on the type of tissue and its state of health. Three other common

  3. A Tryst With Density

    Indian Academy of Sciences (India)

    best known for developing the density functional theory (DFT). This is an extremely ... lem that has become famous in popular culture is that of the planet Tatooine. Fans of ... the Schrödinger equation (or, if relativistic effects are important, the Dirac .... it supplies a moral justification for one's subsequent endeavours along ...

  4. Density in Liquids.

    Science.gov (United States)

    Nesin, Gert; Barrow, Lloyd H.

    1984-01-01

    Describes a fourth-grade unit on density which introduces a concept useful in the study of chemistry and procedures appropriate to the chemistry laboratory. The hands-on activities, which use simple equipment and household substances, are at the level of thinking Piaget describes as concrete operational. (BC)

  5. Destiny from density

    OpenAIRE

    Seewaldt, Victoria L.

    2012-01-01

    The identification of a signalling protein that regulates the accumulation of fat and connective tissue in breasts may help to explain why high mammographic density is linked to breast-cancer risk and may provide a marker for predicting this risk.

  6. Polarizable Density Embedding

    DEFF Research Database (Denmark)

    Reinholdt, Peter; Kongsted, Jacob; Olsen, Jógvan Magnus Haugaard

    2017-01-01

    We analyze the performance of the polarizable density embedding (PDE) model-a new multiscale computational approach designed for prediction and rationalization of general molecular properties of large and complex systems. We showcase how the PDE model very effectively handles the use of large...

  7. Gamma irradiation effects in low density polyethylene

    International Nuclear Information System (INIS)

    Ono, Lilian S.; Scagliusi, Sandra R.; Cardoso, Elisabeth E.L.; Lugao, Ademar B.

    2011-01-01

    Low density polyethylene (LDPE) is obtained from the polymerization of ethylene gas and is one of the most widely commercialized polymers owing to its versatility and low cost. It is a semi-crystalline polymer, generally inert at room temperature, that can withstand temperatures in the 80 °C - 100 °C range without changes to its physical-chemical properties. LDPE has more resistance when compared to its counterpart, High Density Polyethylene (HDPE). LDPE's most common applications include the manufacture of laboratory ware, general containers, pipes, plastic bags, etc. Gamma radiation is applied to polymers in order to modify their mechanical and physical-chemical features according to the intended use. This work studies the interaction of gamma (γ) radiation with low density polyethylene to evaluate changes in its physical-chemical properties. Polymer samples were exposed to doses of 5, 10, 15, 20 and 30 kGy at room temperature. The samples were characterized by thermal analysis, melt flow index, infrared spectroscopy and swelling tests. (author)

  8. Apparatus for measurement of tree core density

    International Nuclear Information System (INIS)

    Blincow, D.W.

    1975-01-01

    Apparatus is described for direct measurement of the density of a core sample from a tree. A radiation source and detector with a receptacle for the core therebetween, an integrator unit for the detector output, and an indicating meter driven by the integrator unit are described

  9. Thyroid Stimulating Hormone and Bone Mineral Density

    DEFF Research Database (Denmark)

    van Vliet, Nicolien A; Noordam, Raymond; van Klinken, Jan B

    2018-01-01

    With population aging, prevalence of low bone mineral density (BMD) and associated fracture risk are increased. To determine whether low circulating thyroid stimulating hormone (TSH) levels within the normal range are causally related to BMD, we conducted a two-sample Mendelian randomization (MR...

  10. Quantal density functional theory

    CERN Document Server

    Sahni, Viraht

    2016-01-01

    This book deals with quantal density functional theory (QDFT) which is a time-dependent local effective potential theory of the electronic structure of matter. The treated time-independent QDFT constitutes a special case. In the 2nd edition, the theory is extended to include the presence of external magnetostatic fields. The theory is a description of matter based on the ‘quantal Newtonian’ first and second laws which is in terms of “classical” fields that pervade all space, and their quantal sources. The fields, which are explicitly defined, are separately representative of electron correlations due to the Pauli exclusion principle, Coulomb repulsion, correlation-kinetic, correlation-current-density, and correlation-magnetic effects. The book further describes Schrödinger theory from the new physical perspective of fields and quantal sources. It also describes traditional Hohenberg-Kohn-Sham DFT, and explains via QDFT the physics underlying the various energy functionals and functional derivatives o...

  11. Discrete density of states

    International Nuclear Information System (INIS)

    Aydin, Alhun; Sisman, Altug

    2016-01-01

    By considering the quantum-mechanically minimum allowable energy interval, we exactly count number of states (NOS) and introduce discrete density of states (DOS) concept for a particle in a box for various dimensions. Expressions for bounded and unbounded continua are analytically recovered from discrete ones. Even though substantial fluctuations prevail in discrete DOS, they're almost completely flattened out after summation or integration operation. It's seen that relative errors of analytical expressions of bounded/unbounded continua rapidly decrease for high NOS values (weak confinement or high energy conditions), while the proposed analytical expressions based on Weyl's conjecture always preserve their lower error characteristic. - Highlights: • Discrete density of states considering minimum energy difference is proposed. • Analytical DOS and NOS formulas based on Weyl conjecture are given. • Discrete DOS and NOS functions are examined for various dimensions. • Relative errors of analytical formulas are much better than the conventional ones.
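
    The comparison of exact state counting (NOS) with a continuum Weyl-type estimate can be illustrated for a particle in a 3D box. This sketch counts states below an energy cutoff and compares with the leading octant-of-a-sphere term only; the function names and cutoffs are assumptions, and the authors' Weyl-conjecture formulas include further correction terms beyond this leading one.

    ```python
    import math

    def exact_nos(r2):
        """Exactly count particle-in-a-3D-box states with
        nx^2 + ny^2 + nz^2 <= r2 (nx, ny, nz positive integers)."""
        rmax = math.isqrt(r2)
        count = 0
        for nx in range(1, rmax + 1):
            for ny in range(1, rmax + 1):
                rem = r2 - nx * nx - ny * ny
                if rem >= 1:
                    count += math.isqrt(rem)  # number of allowed nz values
        return count

    def weyl_nos(r2):
        """Leading continuum estimate: one octant of a sphere of radius r."""
        return math.pi * r2 ** 1.5 / 6.0

    # The continuum estimate's relative error shrinks as the cutoff grows,
    # consistent with the weak-confinement / high-energy limit noted above.
    err_low = abs(exact_nos(100) - weyl_nos(100)) / weyl_nos(100)
    err_high = abs(exact_nos(900) - weyl_nos(900)) / weyl_nos(900)
    ```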

  12. Discrete density of states

    Energy Technology Data Exchange (ETDEWEB)

    Aydin, Alhun; Sisman, Altug, E-mail: sismanal@itu.edu.tr

    2016-03-22

    By considering the quantum-mechanically minimum allowable energy interval, we exactly count number of states (NOS) and introduce discrete density of states (DOS) concept for a particle in a box for various dimensions. Expressions for bounded and unbounded continua are analytically recovered from discrete ones. Even though substantial fluctuations prevail in discrete DOS, they're almost completely flattened out after summation or integration operation. It's seen that relative errors of analytical expressions of bounded/unbounded continua rapidly decrease for high NOS values (weak confinement or high energy conditions), while the proposed analytical expressions based on Weyl's conjecture always preserve their lower error characteristic. - Highlights: • Discrete density of states considering minimum energy difference is proposed. • Analytical DOS and NOS formulas based on Weyl conjecture are given. • Discrete DOS and NOS functions are examined for various dimensions. • Relative errors of analytical formulas are much better than the conventional ones.

  13. Density dependent effective interactions

    International Nuclear Information System (INIS)

    Dortmans, P.J.; Amos, K.

    1994-01-01

    An effective nucleon-nucleon interaction is defined by an optimal fit to selected on- and half-off-the-energy-shell t- and g-matrices determined by solutions of the Lippmann-Schwinger and Brueckner-Bethe-Goldstone equations with the Paris nucleon-nucleon interaction as input. As such, it is seen to better reproduce the interaction on which it is based than other commonly used density dependent effective interactions. The new (medium-modified) effective interaction, when folded with appropriate density matrices, has been used to define proton-¹²C and proton-¹⁶O optical potentials. With them, elastic scattering data are well fit and the medium effects identifiable. 23 refs., 8 figs

  14. ON THE ORIGIN OF THE HIGH COLUMN DENSITY TURNOVER IN THE H I COLUMN DENSITY DISTRIBUTION

    International Nuclear Information System (INIS)

    Erkal, Denis; Gnedin, Nickolay Y.; Kravtsov, Andrey V.

    2012-01-01

    We study the high column density regime of the H I column density distribution function and argue that there are two distinct features: a turnover at N_HI ≈ 10²¹ cm⁻², which is present at both z = 0 and z ≈ 3, and a lack of systems above N_HI ≈ 10²² cm⁻² at z = 0. Using observations of the column density distribution, we argue that the H I-H₂ transition does not cause the turnover at N_HI ≈ 10²¹ cm⁻² but can plausibly explain the turnover at N_HI ≳ 10²² cm⁻². We compute the H I column density distribution of individual galaxies in the THINGS sample and show that the turnover column density depends only weakly on metallicity. Furthermore, we show that the column density distribution of galaxies, corrected for inclination, is insensitive to the resolution of the H I map or to averaging in radial shells. Our results indicate that the similarity of H I column density distributions at z = 3 and 0 is due to the similarity of the maximum H I surface densities of high-z and low-z disks, set presumably by universal processes that shape the properties of the gaseous disks of galaxies. Using fully cosmological simulations, we explore other candidate physical mechanisms that could produce a turnover in the column density distribution. We show that while turbulence within giant molecular clouds cannot affect the damped Lyα column density distribution, stellar feedback can affect it significantly if the feedback is sufficiently effective in removing gas from the central 2-3 kpc of high-redshift galaxies. Finally, we argue that it is meaningful to compare column densities averaged over ∼kpc scales with those estimated from quasar spectra that probe sub-pc scales, due to the steep power spectrum of H I column density fluctuations observed in nearby galaxies.

  15. Density oscillations within hadrons

    International Nuclear Information System (INIS)

    Arnold, R.; Barshay, S.

    1976-01-01

    In models of extended hadrons, in which small bits of matter carrying charge and effective mass exist confined within a medium, oscillations in the matter density may occur. A way of investigating this possibility experimentally in high-energy hadron-hadron elastic diffraction scattering is suggested, and the effect is illustrated by examining some existing data which might be relevant to the question [fr

  16. Toward a Redefinition of Density

    Science.gov (United States)

    Rapoport, Amos

    1975-01-01

    This paper suggests that in addition to the recent work indicating that crowding is a subjective phenomenon, an adequate definition of density must also include a subjective component since density is a complex phenomenon in itself. Included is a discussion of both physical density and perceived density. (Author/MA)

  17. Density measures and additive property

    OpenAIRE

    Kunisada, Ryoichi

    2015-01-01

    We deal with finitely additive measures defined on all subsets of natural numbers which extend the asymptotic density (density measures). We consider a class of density measures which are constructed from free ultrafilters on natural numbers and study a certain additivity property of such density measures.

  18. Local density measurement of additive manufactured copper parts by instrumented indentation

    Science.gov (United States)

    Santo, Loredana; Quadrini, Fabrizio; Bellisario, Denise; Tedde, Giovanni Matteo; Zarcone, Mariano; Di Domenico, Gildo; D'Angelo, Pierpaolo; Corona, Diego

    2018-05-01

    Instrumented flat indentation has been used to evaluate the local density of additive manufactured (AM) copper samples with different relative densities. Indentations were made using tungsten carbide (WC) flat pins of 1 mm diameter. Pure copper powders were used in a selective laser melting (SLM) machine to produce the test samples. By changing process parameters, the relative density of the samples was varied from 63% to 71%. Indentation tests were performed on the xy surface of the AM samples. To correlate indentation test results with sample density, the indentation pressure at fixed displacement was selected as the metric. Results show that instrumented indentation is a valid technique for measuring the density distribution along the geometry of an SLM part; in fact, a linear trend between indentation pressure and sample density was found for the selected density range.
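
    A linear pressure-density trend of the kind reported suggests a simple calibration. The numbers below are made up for illustration and are not the paper's measurements.

    ```python
    import numpy as np

    # Hypothetical calibration pairs: indentation pressure (MPa) at fixed
    # displacement vs. relative density (%). Values are illustrative only.
    pressure = np.array([120.0, 150.0, 180.0, 210.0])
    density = np.array([63.0, 66.0, 69.0, 71.0])

    # Least-squares line: density = a * pressure + b
    a, b = np.polyfit(pressure, density, 1)

    def density_from_pressure(p):
        """Estimate local relative density from an indentation pressure reading."""
        return a * p + b
    ```

    Once calibrated on a few samples of known density, the line can map indentation pressures measured across a part into a local density distribution.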

  19. Modulation Based on Probability Density Functions

    Science.gov (United States)

    Williams, Glenn L.

    2009-01-01

    A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
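
    The histogram-as-PDF idea can be demonstrated on a plain sinusoid: sampling at least a half cycle and sorting samples by frequency of occurrence yields the familiar arcsine-shaped amplitude distribution, peaked at the extremes. A minimal sketch (the bin count and sampling grid are arbitrary choices):

    ```python
    import numpy as np

    def amplitude_histogram(samples, bins=10):
        """Sort waveform samples by frequency of occurrence into a normalized
        histogram approximating the amplitude PDF over the sampled interval."""
        counts, edges = np.histogram(samples, bins=bins, range=(-1.0, 1.0),
                                     density=True)
        return counts, edges

    # One full cycle of a unit-amplitude sinusoid, densely sampled.
    t = np.linspace(0.0, 2.0 * np.pi, 10001)
    counts, edges = amplitude_histogram(np.sin(t))
    # The arcsine-shaped PDF puts most probability near the extremes, so the
    # outermost bins dominate the central ones.
    ```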

  20. Comet coma sample return instrument

    Science.gov (United States)

    Albee, A. L.; Brownlee, Don E.; Burnett, Donald S.; Tsou, Peter; Uesugi, K. T.

    1994-01-01

    The sample collection technology and instrument concept for the Sample of Comet Coma Earth Return Mission (SOCCER) are described. The scientific goals of this Flyby Sample Return are to return coma dust and volatile samples from a known comet source, which will permit accurate elemental and isotopic measurements for thousands of individual solid particles and volatiles, detailed analysis of the dust structure, morphology, and mineralogy of the intact samples, and identification of the biogenic elements or compounds in the solid and volatile samples. With these intact samples, morphologic, petrographic, and phase structural features can be determined. Information on dust particle size, shape, and density can be ascertained by analyzing penetration holes and tracks in the capture medium. Time and spatial data of dust capture will provide understanding of the flux dynamics of the coma and the jets. Additional information will include the identification of cosmic ray tracks in the cometary grains, which can provide a particle's process history and perhaps even the age of the comet. The measurements will be made with the same equipment used for studying micrometeorites for decades past; hence, the results can be directly compared without extrapolation or modification. The data will provide a powerful and direct technique for comparing the cometary samples with all known types of meteorites and interplanetary dust. This sample collection system will provide the first sample return from a specifically identified primitive body and will allow, for the first time, a direct method of matching meteoritic materials captured on Earth with known parent bodies.

  1. High density grids

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, Aina E.; Baxter, Elizabeth L.

    2018-01-16

    An X-ray data collection grid device is provided that includes a magnetic base that is compatible with robotic sample mounting systems used at synchrotron beamlines, a grid element fixedly attached to the magnetic base, where the grid element includes at least one sealable sample window disposed through a planar synchrotron-compatible material, where the planar synchrotron-compatible material includes at least one automated X-ray positioning and fluid handling robot fiducial mark.

  2. Density Distribution Sunflower Plots

    Directory of Open Access Journals (Sweden)

    William D. Dupont

    2003-01-01

    Full Text Available Density distribution sunflower plots are used to display high-density bivariate data. They are useful for data where a conventional scatter plot is difficult to read due to overstriking of the plot symbol. The x-y plane is subdivided into a lattice of regular hexagonal bins of width w specified by the user. The user also specifies the values of l, d, and k that affect the plot as follows. Individual observations are plotted when there are fewer than l observations per bin, as in a conventional scatter plot. Each bin with from l to d observations contains a light sunflower. Other bins contain a dark sunflower. In a light sunflower each petal represents one observation. In a dark sunflower, each petal represents k observations. (A dark sunflower with p petals represents between pk - k/2 and pk + k/2 observations.) The user can control the sizes and colors of the sunflowers. By selecting appropriate colors and sizes for the light and dark sunflowers, plots can be obtained that give both the overall sense of the data density distribution and the number of data points in any given region. The use of this graphic is illustrated with data from the Framingham Heart Study. A documented Stata program, called sunflower, is available to draw these graphs. It can be downloaded from the Statistical Software Components archive at http://ideas.repec.org/c/boc/bocode/s430201.html . (Journal of Statistical Software 2003; 8(3): 1-5.) Posted at http://www.jstatsoft.org/index.php?vol=8 .
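
    The bin-classification rule described (individual points below l, light sunflowers from l to d, dark sunflowers above d with one petal per k observations) can be sketched for a single bin. The hexagonal geometry and drawing are omitted, and the rounding convention is an assumption consistent with the stated petal-count range.

    ```python
    def classify_bin(n, l, d, k):
        """Classify one lattice bin holding n observations: raw points below l,
        a light sunflower (one petal per observation) from l to d, and a dark
        sunflower (one petal per k observations, rounded to the nearest petal)
        above d."""
        if n < l:
            return ("points", n)
        if n <= d:
            return ("light", n)
        return ("dark", round(n / k))

    # With l=3, d=12, k=5: a bin of 2 plots individual points, a bin of 7 gets
    # a 7-petal light sunflower, and a bin of 23 gets a 5-petal dark sunflower
    # (5 petals stand for between 22.5 and 27.5 observations).
    ```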

  3. Bulk density calculations from prompt gamma ray yield

    International Nuclear Information System (INIS)

    Naqvi, A.A.; Nagadi, M.M.; Al-Amoudi, O.S.B.; Maslehuddin, M.

    2006-01-01

    Full text: The gamma ray yield from a Prompt Gamma ray Neutron Activation Analysis (PGNAA) setup is a linear function of element concentration and neutron flux in the sample when the bulk density is constant. If the sample bulk density varies as well, then the element concentration and the neutron flux have a nonlinear correlation with the gamma ray yield [1]. The nonlinearity of the gamma ray yield measured from samples relative to a standard can be used to estimate the bulk density of the samples. In this study the prompt gamma ray yields from Blast Furnace Slag, Fly Ash, Silica Fume and Superpozz cement samples were measured as a function of their calcium and silicon concentrations using the KFUPM accelerator-based PGNAA setup [2]. Owing to the different bulk densities of the blended cement samples, the measured gamma ray yields correlate nonlinearly with the calcium and silicon concentrations of the samples. The nonlinearity in the yield was observed to increase with gamma ray energy and element concentration. The bulk densities of the cement samples were calculated from the ratio of the gamma ray yield from blended cement to that from a Portland cement standard. The calculated bulk densities are in good agreement with published data. The results of this study will be presented

  4. Air shower density spectrum

    International Nuclear Information System (INIS)

    Porter, M.R.; Foster, J.M.; Hodson, A.L.; Hazen, W.E.; Hendel, A.Z.; Bull, R.M.

    1982-01-01

    Measurements of the differential local density spectrum have been made using a 1 m² discharge chamber mounted in the Leeds discharge chamber array. The results are fitted to a power law of the form h(δ)dδ = kδ^(−ν)dδ, with ν = 2.47 ± 0.04, k = 0.21 s⁻¹ for 7 m⁻² < δ < 200 m⁻², and ν = 2.90 ± 0.22, k = 2.18 s⁻¹ for δ > 200 m⁻². Details of the measurement techniques are given, with particular reference to the treatment of closely spaced discharges. A comparison of these results with previous experiments using different techniques is made
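
    For a differential spectrum of the fitted power-law form h(δ) = kδ^(−ν), the rate of showers with local density above a threshold δ₀ follows by direct integration: R(>δ₀) = kδ₀^(1−ν)/(ν−1) for ν > 1. A small sketch using the fitted low-density-regime parameters (illustrative use only, valid within that regime):

    ```python
    def integrated_rate(k, nu, delta0):
        """Rate of showers with local density above delta0, obtained by
        integrating h(delta) = k * delta**(-nu) from delta0 upward
        (convergent for nu > 1)."""
        return k * delta0 ** (1.0 - nu) / (nu - 1.0)

    # Using the fitted low-density-regime parameters from the abstract:
    rate_above_50 = integrated_rate(0.21, 2.47, 50.0)
    ```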

  5. Measurement of loose powder density

    International Nuclear Information System (INIS)

    Akhtar, S.; Ali, A.; Haider, A.; Farooque, M.

    2011-01-01

    Powder metallurgy is a conventional technique for making engineering articles from powders. The main objective is to produce final products with the highest possible uniform density, which depends on the initial loose powder characteristics. Producing, handling, characterizing and compacting materials in loose powder form are part of the manufacturing processes. The density of loose metallic or ceramic powder is an important parameter for die design: it is required for calculating the exact mass of powder to fill the die cavity so as to produce the intended green density of the powder compact. To fulfil this requirement of powder metallurgical processing, a loose powder density meter conforming to ASTM standards was designed and fabricated. The density of free-flowing metallic powders can be determined using a Hall flowmeter funnel and a density cup of 25 cm³ volume. The densities of metal powders such as cobalt, manganese, spherical bronze and pure iron were measured, and results are obtained with 99.9% accuracy. (author)

  6. Gluon density in nuclei

    International Nuclear Information System (INIS)

    Ayala, A.L.

    1996-01-01

    In this talk we present our detailed study (theory and numbers) of the shadowing corrections to the gluon structure functions for nuclei. Starting from the rather controversial information on the nucleon structure function arising from the recent HERA data, we develop the Glauber approach for the gluon density in a nucleus based on the Mueller formula and estimate the value of the shadowing corrections in this case. We then calculate the first corrections to the Glauber approach and show that these corrections are large. Based on this practical observation, we suggest a new evolution equation which takes the shadowing corrections into account, and solve it. We hope to convince you that the new evolution equation gives a good theoretical tool for treating the shadowing corrections to the gluon density in a nucleus and is therefore able to provide theoretically reliable initial conditions for the time evolution of the nucleus-nucleus cascade. The initial conditions should be fixed both theoretically and phenomenologically before attacking such complicated problems as the mixture of hard and soft processes in nucleus-nucleus interactions at high energy, or a theoretically reliable approach to hadron and/or parton cascades for high energy nucleus-nucleus interactions. 35 refs., 24 figs., 1 tab

  7. Lead accumulation in the roadside soils from heavy density motor ...

    African Journals Online (AJOL)

    The levels of lead pollution in the roadside soils of the heavy density motor ways of Eastern Ethiopia, in particular; Modjo, Bishoftu and Adama towns were studied. Soil samples were collected from a total of 22 sampling sites while the control samples were obtained from places about 1 km away from the main roads of each ...

  8. Testing an excited-state energy density functional and the associated potential with the ionization potential theorem

    International Nuclear Information System (INIS)

    Hemanadhan, M; Shamim, Md; Harbola, Manoj K

    2014-01-01

    The modified local spin density (MLSD) functional and the related local potential for excited states is tested by employing the ionization potential theorem. The exchange functional for an excited state is constructed by splitting k-space. Since its functional derivative cannot be obtained easily, the corresponding exchange potential is given by an analogy to its ground-state counterpart. Further, to calculate the highest occupied orbital energy ε_max accurately, the potential is corrected for its asymptotic behaviour by employing the van Leeuwen and Baerends (LB) correction to it. The ε_max so obtained is then compared with the ΔSCF ionization energy calculated using the MLSD functional with self-interaction correction for the orbitals involved in the transition. It is shown that the two match quite accurately. The match becomes even better by tuning the LB correction with respect to a parameter in it. (paper)

  9. Anomalous evolution of Ar metastable density with electron density in high density Ar discharge

    International Nuclear Information System (INIS)

    Park, Min; Chang, Hong-Young; You, Shin-Jae; Kim, Jung-Hyung; Shin, Yong-Hyeon

    2011-01-01

    Recently, an anomalous evolution of argon metastable density with plasma discharge power (electron density) was reported [A. M. Daltrini, S. A. Moshkalev, T. J. Morgan, R. B. Piejak, and W. G. Graham, Appl. Phys. Lett. 92, 061504 (2008)]. Although the importance of the metastable atom and its density has been reported widely in the literature, the basic physics behind the anomalous evolution of the metastable density has not yet been clearly understood. In this study, we investigated a simple global model to elucidate the underlying physics of the anomalous evolution of argon metastable density with electron density. On the basis of the proposed simple model, we reproduced the anomalous evolution of the metastable density and disclosed the detailed physics behind the anomalous result. Drastic changes in the dominant mechanisms for the population and depopulation of Ar metastable atoms with electron density, which take place even in the relatively low electron density regime, are the clue to understanding the result.

  10. Relative density: the key to stocking assessment in regional analysis—a forest survey viewpoint.

    Science.gov (United States)

    Colin D. MacLean

    1979-01-01

    Relative density is a measure of tree crowding compared to a reference level such as normal density. This stand attribute, when compared to management standards, indicates adequacy of stocking. The Pacific Coast Forest Survey Unit assesses the relative density of each stand sampled by summing the individual density contributions of each tree tallied, thus quantifying...

  11. Density-Functional formalism

    International Nuclear Information System (INIS)

    Szasz, L.; Berrios-Pagan, I.; McGinn, G.

    1975-01-01

    A new Density-Functional formula is constructed for atoms. The kinetic energy of the electron is divided into two parts: the kinetic self-energy and the orthogonalization energy. Calculations were made for the total energies of neutral atoms, positive ions and for the He isoelectronic series. For neutral atoms the results match the Hartree-Fock energies within 1% for atoms with N < 36; for N > 36 the results generally match the HF energies within 0.1%. For positive ions the results are fair; for the molecular applications a simplified model is developed in which the kinetic energy consists of the Weizsaecker term plus the Fermi energy reduced by a continuous function. (orig.) [de

  12. Density and Specific Gravity Metrics in Biomass Research

    Science.gov (United States)

    Micheal C. Wiemann; G. Bruce Williamson

    2012-01-01

    Following the 2010 publication of Measuring Wood Specific Gravity… Correctly in the American Journal of Botany, readers contacted us to inquire about application of wood density and specific gravity to biomass research. Here we recommend methods for sample collection, volume measurement, and determination of wood density and specific gravity for...

  13. Mineralogy and geochemistry of density-separated Greek lignite fractions

    NARCIS (Netherlands)

    Iordanidis, A.; Doesburg, van J.D.J.

    2006-01-01

    In this study, lignite samples were collected from the Ptolemais region, northern Greece, homogenized, crushed to less than 1 mm, and separated into three density fractions using heavy media. The mineralogical investigation of the density fractions showed a predominance of pyrite in the light

  14. Density functional theory

    International Nuclear Information System (INIS)

    Freyss, M.

    2015-01-01

    This chapter gives an introduction to first-principles electronic structure calculations based on the density functional theory (DFT). Electronic structure calculations have a crucial importance in the multi-scale modelling scheme of materials: not only do they enable one to accurately determine physical and chemical properties of materials, they also provide data for the adjustment of parameters (or potentials) in higher-scale methods such as classical molecular dynamics, kinetic Monte Carlo, cluster dynamics, etc. Most of the properties of a solid depend on the behaviour of its electrons, and in order to model or predict them it is necessary to have an accurate method to compute the electronic structure. DFT is based on quantum theory and does not make use of any adjustable or empirical parameter: the only input data are the atomic number of the constituent atoms and some initial structural information. The complicated many-body problem of interacting electrons is replaced by an equivalent single-electron problem, in which each electron moves in an effective potential. DFT has been successfully applied to the determination of structural or dynamical properties (lattice structure, charge density, magnetisation, phonon spectra, etc.) of a wide variety of solids. Its efficiency was acknowledged by the attribution of the Nobel Prize in Chemistry in 1998 to one of its authors, Walter Kohn. Particular attention is given in this chapter to the ability of DFT to model the physical properties of nuclear materials such as actinide compounds. The specificities of the 5f electrons of actinides will be presented, i.e., their varying degree of localisation around the nuclei and their correlations. The limitations of DFT in treating the strong 5f correlations are one of the main issues for DFT modelling of nuclear fuels. Various methods that exist to better treat strongly correlated materials will finally be presented. (author)

  15. Hormonal Determinants of Mammographic Density

    National Research Council Canada - National Science Library

    Simpson, Jennifer K; Modugno, Francemary; Weissfeld, Joel L; Kuller, Lewis; Vogel, Victor; Constantino, Joseph P

    2005-01-01

    .... However, not all women on HRT will experience an increase in breast density. We propose a novel hypothesis to explain in part the individual variability in breast density seen among women on HRT...

  16. Density limit in ASDEX discharges with peaked density profiles

    International Nuclear Information System (INIS)

    Staebler, A.; Niedermeyer, H.; Loch, R.; Mertens, V.; Mueller, E.R.; Soeldner, F.X.; Wagner, F.

    1989-01-01

    Results concerning the density limit in OH and NI-heated ASDEX discharges with the usually observed broad density profiles have been reported earlier: in ohmic discharges with high q_a (q-cylindrical is used throughout this paper) the Murakami parameter (n_e R/B_t) is a good scaling parameter. At the high densities edge cooling is observed, causing the plasma to shrink until an m=2 instability terminates the discharge. When approaching q_a = 2 the density limit is no longer proportional to I_p; a minimum exists in n_e,max(q_a) at q_a ≈ 2.15. With NI-heating the density limit increases less than proportionally to the heating power; the behaviour during the pre-disruptive phase is rather similar to that of OH discharges. There are specific operating regimes on ASDEX leading to discharges with strongly peaked density profiles: the improved ohmic confinement regime, counter neutral injection, and multipellet injection. These regimes are characterized by enhanced energy and particle confinement. The operational limit in density for these discharges is therefore of great interest, bearing in mind furthermore that high central densities are favourable for achieving high fusion yields. In addition, further insight into the mechanisms of the density limit observed in tokamaks may be obtained by comparing plasmas with rather different density profiles at their maximum attainable densities. 7 refs., 2 figs

  17. Optimization of multiply acquired magnetic flux density Bz using ICNE-Multiecho train in MREIT

    International Nuclear Information System (INIS)

    Nam, Hyun Soo; Kwon, Oh In

    2010-01-01

    The aim of magnetic resonance electrical impedance tomography (MREIT) is to visualize the electrical properties, conductivity or current density, of an object by injection of current. Recently, the injected current nonlinear encoding (ICNE) method, which prolongs the data acquisition window, has proved advantageous for measuring the magnetic flux density data, Bz, for MREIT with a high signal-to-noise ratio (SNR). However, the ICNE method results in undesirable side artifacts, such as blurring, chemical shift and phase artifacts, due to the long data acquisition under an inhomogeneous static field. In this paper, we apply the ICNE method to a gradient and spin echo (GRASE) multi-echo train pulse sequence in order to provide multiple k-space lines during a single RF pulse period. We analyze the SNR of the measured multiple Bz data using the proposed ICNE-Multiecho MR pulse sequence. By determining a weighting factor for the Bz data in each of the echoes, an optimized inversion formula for the magnetic flux density data is proposed for the ICNE-Multiecho MR sequence. Using the ICNE-Multiecho method, the quality of the measured magnetic flux density is considerably increased by injecting a long current through the echo train length and by optimizing the voxel-by-voxel noise level of the Bz value. Agarose-gel phantom experiments have demonstrated fewer artifacts and a better SNR with the ICNE-Multiecho method. Experimenting with the brain of an anesthetized dog, we collected valuable echoes by taking into account the noise level of each of the echoes and determined Bz data by computing optimized weighting factors for the multiply acquired magnetic flux density data.

  18. High-density polymorphisms analysis of 23 candidate genes for association with bone mineral density.

    Science.gov (United States)

    Giroux, Sylvie; Elfassihi, Latifa; Clément, Valérie; Bussières, Johanne; Bureau, Alexandre; Cole, David E C; Rousseau, François

    2010-11-01

    Osteoporosis is a bone disease characterized by low bone mineral density (BMD), a highly heritable and polygenic trait. Women are more prone than men to develop osteoporosis due to a lower peak bone mass and accelerated bone loss at menopause. Peak bone mass has been convincingly shown to be due to genetic factors with heritability up to 80%. Menopausal bone loss has been shown to have around 38% to 49% heritability depending on the site studied. To have more statistical power to detect small genetic effects we focused on premenopausal women. We studied 23 candidate genes, some involved in calcium and vitamin-D regulation and others because estrogens strongly induced their gene expression in mice where it was correlated with humerus trabecular bone density. High-density polymorphisms were selected to cover the entire gene variability and 231 polymorphisms were genotyped in a first sample of 709 premenopausal women. Positive associations were retested in a second, independent, sample of 673 premenopausal women. Ten polymorphisms remained associated with BMD in the combined samples and one was further associated in a large sample of postmenopausal women (1401 women). This associated polymorphism was located in the gene CSF3R (granulocyte colony stimulating factor receptor) that had never been associated with BMD before. The results reported in this study suggest a role for CSF3R in the determination of bone density in women. Copyright © 2010 Elsevier Inc. All rights reserved.

  19. The car parking problem at high densities

    Science.gov (United States)

    Burgos, E.; Bonadeo, H.

    1989-04-01

    The radial distribution functions of random 1-D systems of sequential hard rods have been studied in the range of very high densities. It is found that as the number of samples rejected before completion increases, anomalies in the pairwise distribution functions arise. These are discussed using analytical solutions for systems of three rods and numerical simulations with twelve rods. The probabilities of different spatial orderings with respect to the sequential order are examined.
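
    The sequential parking process studied above can be reproduced with a few lines of rejection sampling. The line length and attempt budget below are arbitrary illustrative choices:

```python
import random

def park_rods(length=50.0, rod=1.0, attempts=100_000, seed=1):
    """Random sequential adsorption of unit hard rods on [0, length]:
    propose uniform left-end positions and keep those that do not
    overlap any previously parked rod."""
    rng = random.Random(seed)
    rods = []  # left ends of accepted rods
    for _ in range(attempts):
        x = rng.uniform(0.0, length - rod)
        if all(abs(x - y) >= rod for y in rods):
            rods.append(x)
    return sorted(rods)

rods = park_rods()
coverage = len(rods) * 1.0 / 50.0
print(f"parked {len(rods)} rods, coverage = {coverage:.3f}")
```

    For long lines filled to jamming, the coverage approaches Rényi's parking constant, about 0.7476; histogramming the pairwise distances of the accepted rods gives the radial distribution function whose high-density behaviour the paper examines.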

  20. Determination of the density of active uranium

    Energy Technology Data Exchange (ETDEWEB)

    Piercy, G R

    1958-03-15

    A procedure was found to measure the density of irradiated uranium to an accuracy of 0.06% by measuring the weight of the sample in air and in n-octyl alcohol. The measurements were made using a gramatic balance that was readily adapted for remote control in the 'cave'. Since the n-octyl alcohol was inside the balance for all measurements, the complete apparatus was mobile. (author)

  1. Determination of the density of active uranium

    International Nuclear Information System (INIS)

    Piercy, G.R.

    1958-03-01

    A procedure was found to measure the density of irradiated uranium to an accuracy of 0.06% by measuring the weight of the sample in air and in n-octyl alcohol. The measurements were made using a gramatic balance that was readily adapted for remote control in the 'cave'. Since the n-octyl alcohol was inside the balance for all measurements, the complete apparatus was mobile. (author)

  2. Smoothing densities under shape constraints

    OpenAIRE

    Davies, Paul Laurie; Meise, Monika

    2009-01-01

    In Davies and Kovac (2004) the taut string method was proposed for calculating a density which is consistent with the data and has the minimum number of peaks. The main disadvantage of the taut string density is that it is piecewise constant. In this paper a procedure is presented which gives a smoother density by minimizing the total variation of a derivative of the density subject to the number, positions and heights of the local extreme values obtained from the taut string density. ...

  3. High density hydrogen research

    International Nuclear Information System (INIS)

    Hawke, R.S.

    1977-01-01

    The interest in the properties of very dense hydrogen is prompted by its abundance in Saturn and Jupiter and its importance in laser fusion studies. Furthermore, it has been proposed that the metallic form of hydrogen may be a superconductor at relatively high temperatures and/or exist in a metastable phase at ambient pressure. For ten years or more, laboratories have been developing the techniques to study hydrogen in the megabar region (1 megabar = 100 GPa). Three major approaches to study dense hydrogen experimentally have been used: static presses, shockwave compression, and magnetic compression. Static techniques have crossed the megabar threshold in stiff materials but have not yet been convincingly successful in very compressible hydrogen. Single and double shockwave techniques have improved the precision of the pressure, volume, temperature Equation of State (EOS) of molecular hydrogen (deuterium) up to near 1 Mbar. Multiple shockwave and magnetic techniques have compressed hydrogen to several megabars and densities in the range of the metallic phase. The net result is that hydrogen becomes conducting at a pressure between 2 and 4 megabars. Hence, the possibility of making a significant amount of hydrogen into a metal in a static press remains a formidable challenge. The success of such experiments will hopefully answer the questions about hydrogen's metallic vs. conducting molecular phase, superconductivity, and metastability. 4 figures, 15 references

  4. Estimates of high absolute densities and emergence rates of demersal zooplankton from the Agatti Atoll, laccadives

    Digital Repository Service at National Institute of Oceanography (India)

    Madhupratap, M.; Achuthankutty, C.T.; Nair, S.R.S.

    Direct sampling of the sandy substratum of the Agatti Lagoon with a corer showed the presence of very high densities of epibenthic forms. On average, densities were about 25 times higher than previously estimated with emergence traps. About 80...

  5. CORRELATION BETWEEN GROUP LOCAL DENSITY AND GROUP LUMINOSITY

    Energy Technology Data Exchange (ETDEWEB)

    Deng Xinfa [School of Science, Nanchang University, Jiangxi 330031 (China); Yu Guisheng [Department of Natural Science, Nanchang Teachers College, Jiangxi 330103 (China)

    2012-11-10

    In this study, we investigate the correlation between group local number density and total luminosity of groups. In four volume-limited group catalogs, we can conclude that groups with high luminosity exist preferentially in high-density regions, while groups with low luminosity are located preferentially in low-density regions, and that in a volume-limited group sample with absolute magnitude limit M_r = -18, the correlation between group local number density and total luminosity of groups is the weakest. These results basically are consistent with the environmental dependence of galaxy luminosity.

  6. Computing thermal Wigner densities with the phase integration method

    International Nuclear Information System (INIS)

    Beutier, J.; Borgis, D.; Vuilleumier, R.; Bonella, S.

    2014-01-01

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems

  7. Computing thermal Wigner densities with the phase integration method.

    Science.gov (United States)

    Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.
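
    For benchmarking samplers of the thermal Wigner density, the one-dimensional harmonic oscillator is the standard reference case, since its thermal Wigner function is known in closed form:

```latex
% Thermal Wigner density of a harmonic oscillator (mass m, frequency \omega)
% at inverse temperature \beta: a Gaussian in both position and momentum.
W_\beta(q,p) = \frac{\tanh(\beta\hbar\omega/2)}{\pi\hbar}\,
\exp\!\left[-\frac{2\tanh(\beta\hbar\omega/2)}{\hbar\omega}
\left(\frac{p^2}{2m}+\frac{1}{2}m\omega^2 q^2\right)\right]
```

    In the high-temperature limit tanh(βħω/2) → βħω/2 this reduces to the classical Boltzmann distribution; at any temperature it remains positive, so anharmonic models are needed to exercise the negative-part diagnostics mentioned above.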

  8. Density limit experiments on FTU

    International Nuclear Information System (INIS)

    Pucella, G.; Tudisco, O.; Apicella, M.L.; Apruzzese, G.; Artaserse, G.; Belli, F.; Boncagni, L.; Botrugno, A.; Buratti, P.; Calabrò, G.; Castaldo, C.; Cianfarani, C.; Cocilovo, V.; Dimatteo, L.; Esposito, B.; Frigione, D.; Gabellieri, L.; Giovannozzi, E.; Bin, W.; Granucci, G.

    2013-01-01

    One of the main problems in tokamak fusion devices concerns the capability to operate at a high plasma density, which is observed to be limited by the appearance of catastrophic events causing loss of plasma confinement. The commonly used empirical scaling law for the density limit is the Greenwald limit, predicting that the maximum achievable line-averaged density along a central chord depends only on the average plasma current density. However, the Greenwald density limit has been exceeded in tokamak experiments in the case of peaked density profiles, indicating that the edge density is the real parameter responsible for the density limit. Recently, it has been shown on the Frascati Tokamak Upgrade (FTU) that the Greenwald density limit is exceeded in gas-fuelled discharges with a high value of the edge safety factor. In order to understand this behaviour, dedicated density limit experiments were performed on FTU, in which the high density domain was explored in a wide range of values of plasma current (I_p = 500–900 kA) and toroidal magnetic field (B_T = 4–8 T). These experiments confirm the edge nature of the density limit, as a Greenwald-like scaling holds for the maximum achievable line-averaged density along a peripheral chord passing at r/a ≃ 4/5. On the other hand, the maximum achievable line-averaged density along a central chord does not depend on the average plasma current density and essentially depends on the toroidal magnetic field only. This behaviour is explained in terms of density profile peaking in the high density domain, with a peaking factor at the disruption depending on the edge safety factor. The possibility that the MARFE (multifaceted asymmetric radiation from the edge) phenomenon is the cause of the peaking has been considered, with the MARFE believed to form a channel for the penetration of the neutral particles into deeper layers of the plasma. Finally, the magnetohydrodynamic (MHD) analysis has shown that also the central line
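
    The Greenwald scaling invoked above is simply n_GW [10^20 m^-3] = I_p [MA] / (π a² [m²]). A minimal sketch, taking a ≈ 0.30 m as a nominal FTU minor radius (an assumption for illustration):

```python
import math

def greenwald_limit(plasma_current_MA, minor_radius_m):
    """Greenwald density limit n_GW = I_p / (pi * a^2), in 10^20 m^-3."""
    return plasma_current_MA / (math.pi * minor_radius_m ** 2)

# Span of plasma currents explored in the experiments above (0.5-0.9 MA).
for i_p in (0.5, 0.9):
    print(f"I_p = {i_p} MA -> n_GW = {greenwald_limit(i_p, 0.30):.2f} x 10^20 m^-3")
```

    The point of the experiments is that this limit constrains an edge (peripheral-chord) density: with peaked profiles, the central line-averaged density can exceed n_GW.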

  9. Modern survey sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

    Exposure to SamplingAbstract Introduction Concepts of Population, Sample, and SamplingInitial RamificationsAbstract Introduction Sampling Design, Sampling SchemeRandom Numbers and Their Uses in Simple RandomSampling (SRS)Drawing Simple Random Samples with and withoutReplacementEstimation of Mean, Total, Ratio of Totals/Means:Variance and Variance EstimationDetermination of Sample SizesA.2 Appendix to Chapter 2 A.More on Equal Probability Sampling A.Horvitz-Thompson EstimatorA.SufficiencyA.LikelihoodA.Non-Existence Theorem More Intricacies Abstract Introduction Unequal Probability Sampling StrategiesPPS Sampling Exploring Improved WaysAbstract Introduction Stratified Sampling Cluster SamplingMulti-Stage SamplingMulti-Phase Sampling: Ratio and RegressionEstimationviiviii ContentsControlled SamplingModeling Introduction Super-Population ModelingPrediction Approach Model-Assisted Approach Bayesian Methods Spatial SmoothingSampling on Successive Occasions: Panel Rotation Non-Response and Not-at-Homes Weighting Adj...

  10. Analyzing forensic evidence based on density with magnetic levitation.

    Science.gov (United States)

    Lockett, Matthew R; Mirica, Katherine A; Mace, Charles R; Blackledge, Robert D; Whitesides, George M

    2013-01-01

    This paper describes a method for determining the density of contact trace objects with magnetic levitation (MagLev). MagLev measurements accurately determine the density (±0.0002 g/cm³) of a diamagnetic object and are compatible with objects that are nonuniform in shape and size. The MagLev device (composed of two permanent magnets with like poles facing) and the method described provide a means of accurately determining the density of trace objects. This method is inexpensive, rapid, and verifiable and provides numerical values, independent of the specific apparatus or analyst, that correspond to the absolute density of the sample and may be entered into a searchable database. We discuss the feasibility of MagLev as a possible means of characterizing forensic-related evidence and demonstrate the ability of MagLev to (i) determine the density of samples of glitter and gunpowder, (ii) separate glitter particles of different densities, and (iii) determine the density of a glitter sample that was removed from a complex sample matrix. © 2012 American Academy of Forensic Sciences.
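
    In practice, the levitation height in a MagLev device varies approximately linearly with sample density between the two magnets, so an unknown can be read off a calibration line built from density standards. The standard heights and densities below are hypothetical values for illustration:

```python
# Density from MagLev levitation height via linear calibration against
# beads of known density (all numbers below are hypothetical).

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

heights = [5.0, 15.0, 25.0, 35.0]      # levitation heights (mm)
densities = [1.20, 1.10, 1.00, 0.90]   # known standard densities (g/cm^3)
a, b = fit_line(heights, densities)

def density_from_height(h_mm):
    """Interpolate the density of an unknown from its levitation height."""
    return a * h_mm + b

print(f"unknown levitating at 20.0 mm -> {density_from_height(20.0):.3f} g/cm^3")
```

    Because the readout is a height rather than an instrument-specific signal, the calibrated value can be entered directly into the kind of searchable density database the authors propose.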

  11. Resolvability of regional density structure

    Science.gov (United States)

    Plonka, A.; Fichtner, A.

    2016-12-01

    Lateral density variations are the source of mass transport in the Earth at all scales, acting as drivers of convective motion. However, the density structure of the Earth remains largely unknown, since classic seismic observables and gravity provide only weak constraints with strong trade-offs. Current density models are therefore often based on velocity scaling, making strong assumptions on the origin of structural heterogeneities, which may not necessarily be correct. Our goal is to assess whether 3D density structure may be resolvable with emerging full-waveform inversion techniques. We have previously quantified the impact of regional-scale crustal density structure on seismic waveforms, with the conclusion that reasonably sized density variations within the crust can leave a strong imprint on both travel times and amplitudes; while this can produce significant biases in velocity and Q estimates, seismic waveform inversion for density may become feasible. In this study we perform principal component analyses of sensitivity kernels for P velocity, S velocity, and density. This is intended to establish the extent to which these kernels are linearly independent, i.e. the extent to which the different parameters may be constrained independently. Since the density imprint we observe is not exclusively linked to travel times and amplitudes of specific phases, we consider waveform differences between complete seismograms. We test the method using a known smooth model of the crust and seismograms with clear Love and Rayleigh waves, showing that, as expected, the first principal kernel maximizes sensitivity to SH and SV velocity structure, respectively, and that the leakage between S velocity, P velocity and density parameter spaces is minimal in the chosen setup. Next, we apply the method to data from 81 events around the Iberian Peninsula, registered in total by 492 stations. The objective is to find a principal kernel which would maximize the sensitivity to density

  12. Systematic sampling with errors in sample locations

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton

    2010-01-01

    Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance, dampening of zitterbewegung...
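
    The variance inflation described above is easy to demonstrate numerically in one dimension: estimate an integral by systematic sampling with a random start, then jitter each sample point and compare estimator variances. This is a toy sketch, not the paper's point-process machinery:

```python
import math
import random

def systematic_estimate(f, n, jitter, rng):
    """Systematic-sampling estimate of integral_0^1 f(x) dx with n points,
    each independently displaced by a uniform placement error of
    half-width `jitter` (wrapped back into [0, 1))."""
    h = 1.0 / n
    start = rng.uniform(0.0, h)
    return h * sum(f((start + k * h + rng.uniform(-jitter, jitter)) % 1.0)
                   for k in range(n))

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

rng = random.Random(0)
f = lambda x: math.sin(2 * math.pi * x) ** 2   # smooth integrand, true integral 1/2
exact = variance([systematic_estimate(f, 20, 0.00, rng) for _ in range(2000)])
noisy = variance([systematic_estimate(f, 20, 0.02, rng) for _ in range(2000)])
print(f"variance with exact grid: {exact:.2e}; with placement errors: {noisy:.2e}")
```

    For this periodic integrand the exactly periodic grid has essentially zero variance, while even small placement errors inflate it by orders of magnitude, which is the effect the paper quantifies.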

  13. Super liquid density target designs

    International Nuclear Information System (INIS)

    Pan, Y.L.; Bailey, D.S.

    1976-01-01

    The success of laser fusion depends on obtaining near isentropic compression of fuel to very high densities and igniting this fuel. To date, the results of laser fusion experiments have been based mainly on the exploding pusher implosion of fusion capsules consisting of thin glass microballoons (wall thickness of less than 1 micron) filled with low density DT gas (initial density of a few mg/cc). Maximum DT densities of a few tenths of g/cc and temperatures of a few keV have been achieved in these experiments. We will discuss the results of LASNEX target design calculations for targets which: (a) can compress fuel to much higher densities using the capabilities of existing Nd-glass systems at LLL; (b) allow experimental measurement of the peak fuel density achieved

  14. High Power Density Motors

    Science.gov (United States)

    Kascak, Daniel J.

    2004-01-01

    With the growing concerns of global warming, the need for pollution-free vehicles is ever increasing. Pollution-free flight is one of NASA's goals for the 21st Century. One method of approaching that goal is hydrogen-fueled aircraft that use fuel cells or turbo-generators to develop electric power that can drive electric motors that turn the aircraft's propulsive fans or propellers. Hydrogen fuel would likely be carried as a liquid, stored in tanks at its boiling point of 20.5 K (-422.5 F). Conventional electric motors, however, are far too heavy (for a given horsepower) to use on aircraft. Fortunately, the liquid hydrogen fuel can provide essentially free refrigeration that can be used to cool the windings of motors before the hydrogen is used for fuel. Either High Temperature Superconductors (HTS) or high-purity metals such as copper or aluminum may be used in the motor windings. Superconductors have essentially zero electrical resistance to steady current. The electrical resistance of high-purity aluminum or copper near liquid hydrogen temperature can be 1/100th or less of the room-temperature resistance. These conductors could provide higher motor efficiency than normal room-temperature motors achieve. But much more importantly, these conductors can carry ten to a hundred times more current than copper conductors do in normal motors operating at room temperature. This is a consequence of the low electrical resistance and of good heat transfer coefficients in boiling LH2. Thus the conductors can produce higher magnetic field strengths and consequently higher motor torque and power. Designs, analysis and actual cryogenic motor tests show that such cryogenic motors could produce three or more times as much power per unit weight as turbine engines can, whereas conventional motors produce only 1/5 as much power per weight as turbine engines. This summer, work has been done with Litz wire to maximize the current density. The current is limited by the amount of heat it

  15. The determination of bulk (apparent) density of plant fibres by density method

    International Nuclear Information System (INIS)

    Sharifah Hanisah Syed Abd Aziz; Raja Jamal Raja hedar; Zahid Abdullah

    2004-01-01

    The absolute density of plant fibres excludes all pores and lumen and is therefore a measure of the solid matter of the fibres. On the other hand, the bulk density, which is being discussed here, includes all the solid matter and the pores of the fibres. In this work, the apparent density of the fibre was measured by using the Archimedes principle, which involves the immersion of a known weight of fibre into a solvent of lower density than the fibre. Toluene, with a density of about 860 kg/m³, was chosen as the solvent. A tuft of fibre was weighed and the weight recorded as W_fa. The fibre was then immersed in toluene, which wetted the fibre, and made to rest on the weighing pan submerged in the solvent; the weight of the immersed fibre was recorded as W_fs. The apparent density was then calculated using the equation. All the measurements were taken at room temperature. The fibre samples were not oven dried prior to measurement. (Author)
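
    With weights W_fa (in air) and W_fs (submerged), Archimedes' principle gives the fibre volume from the buoyancy W_fa - W_fs, so the apparent density is rho = W_fa * rho_solvent / (W_fa - W_fs). A small sketch with hypothetical weighings:

```python
def apparent_density(w_air, w_submerged, solvent_density=860.0):
    """Apparent (bulk) density from Archimedes' principle, in the units of
    solvent_density; the two weights only enter as a dimensionless ratio."""
    return w_air * solvent_density / (w_air - w_submerged)

# Hypothetical fibre weighings: 0.500 g in air (W_fa), 0.156 g in toluene (W_fs).
print(f"apparent density = {apparent_density(0.500, 0.156):.0f} kg/m^3")
```

    Note that the method requires the solvent to be less dense than the fibre and to wet it fully, as the toluene does here.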

  16. Breast density in multiethnic women presenting for screening mammography.

    Science.gov (United States)

    Oppong, Bridget A; Dash, Chiranjeev; O'Neill, Suzanne; Li, Yinan; Makambi, Kepher; Pien, Edward; Makariou, Erini; Coleman, Tesha; Adams-Campbell, Lucile L

    2018-05-01

    Data on ethnic variations in breast density are limited and often not inclusive of underrepresented minorities. As breast density is associated with elevated breast cancer risk, investigating racial and ethnic differences may elucidate the observed differences in breast cancer risk among different populations. We reviewed breast density from initial screening of women from the Capital Breast Care Center and Georgetown University Hospital from 2010 to 2014. Patient demographics including race, age at screening, education, menopausal status, and body mass index were abstracted. We recorded the BI-RADS density categories: (1) "fatty," (2) "scattered fibroglandular densities," (3) "heterogeneously dense," and (4) "extremely dense." Multivariable unconditional logistic regression was used to identify predictors of breast density. Density categorization was recorded for 2146 women over the 5-year period, comprising Blacks (n = 940), Hispanics (n = 893), and Whites (n = 314). Analysis of subject characteristics by breast density showed that the high-density category was observed in younger, Hispanic, nulliparous, premenopausal, and nonobese women (t-test or chi-square test, P-values < 0.05). Being Hispanic, premenopausal, and nonobese was predictive of high density on logistic regression. In this analysis of density distribution in a diverse sample, Hispanic women have the highest breast density, followed by Blacks and Whites. Unique in our findings is that women who identify as Hispanic have the highest breast density and lower rates of obesity. Further investigation of the impact of obesity on breast density, especially in the understudied Hispanic group, is needed. © 2017 Wiley Periodicals, Inc.
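
    The chi-square comparisons reported above can be illustrated on a single 2x2 table; the counts below are hypothetical, not data from the study:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: rows = premenopausal/postmenopausal,
# columns = high density (BI-RADS 3-4) / low density (BI-RADS 1-2).
stat = chi_square_2x2(120, 80, 60, 140)
print(f"chi2 = {stat:.2f}; significant at the 0.05 level (df=1): {stat > 3.841}")
```

    The 3.841 cutoff is the 0.95 quantile of the chi-square distribution with one degree of freedom, matching the P < 0.05 criterion used in such analyses.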

  17. Density functionals from deep learning

    OpenAIRE

    McMahon, Jeffrey M.

    2016-01-01

    Density-functional theory is a formally exact description of a many-body quantum system in terms of its density; in practice, however, approximations to the universal density functional are required. In this work, a model based on deep learning is developed to approximate this functional. Deep learning allows computational models that are capable of naturally discovering intricate structure in large and/or high-dimensional data sets, with multiple levels of abstraction. As no assumptions are ...

  18. Transition densities with electron scattering

    International Nuclear Information System (INIS)

    Heisenberg, J.

    1985-01-01

    This paper reviews ground state and transition charge densities in nuclei via electron scattering. Using electrons as a spectroscopic tool in nuclear physics, these transition densities can be determined with high precision, including in the nuclear interior. The densities generally call for a microscopic interpretation in terms of contributions from individual nucleons. The results for single particle transitions confirm the picture of particle-phonon coupling. (Auth.)

  19. Sets with Prescribed Arithmetic Densities

    Czech Academy of Sciences Publication Activity Database

    Luca, F.; Pomerance, C.; Porubský, Štefan

    2008-01-01

    Roč. 3, č. 2 (2008), s. 67-80 ISSN 1336-913X R&D Projects: GA ČR GA201/07/0191 Institutional research plan: CEZ:AV0Z10300504 Keywords : generalized arithmetic density * generalized asymptotic density * generalized logarithmic density * arithmetical semigroup * weighted arithmetic mean * ratio set * R-dense set * Axiom A * delta-regularly varying function Subject RIV: BA - General Mathematics

  20. Signal sampling circuit

    NARCIS (Netherlands)

    Louwsma, S.M.; Vertregt, Maarten

    2011-01-01

    A sampling circuit for sampling a signal is disclosed. The sampling circuit comprises a plurality of sampling channels adapted to sample the signal in time-multiplexed fashion, each sampling channel comprising a respective track-and-hold circuit connected to a respective analogue to digital

  1. Signal sampling circuit

    NARCIS (Netherlands)

    Louwsma, S.M.; Vertregt, Maarten

    2010-01-01

    A sampling circuit for sampling a signal is disclosed. The sampling circuit comprises a plurality of sampling channels adapted to sample the signal in time-multiplexed fashion, each sampling channel comprising a respective track-and-hold circuit connected to a respective analogue to digital

  2. Histogram Estimators of Bivariate Densities

    National Research Council Canada - National Science Library

    Husemann, Joyce A

    1986-01-01

    One-dimensional fixed-interval histogram estimators of univariate probability density functions are less efficient than the analogous variable-interval estimators which are constructed from intervals...

  3. Chronic subdural hematoma fluid and its computerized tomographic density

    International Nuclear Information System (INIS)

    Masuzawa, Hideaki; Sato, Jinichi; Kamitani, Hiroshi; Yamashita, Midori

    1983-01-01

    Laboratory and in vivo CT analyses were performed on 19 chronic subdural hematomas and five subdural hygromas. In the 25 hematoma samples, red blood cells (RBC), hematocrit, and hemoglobin (Hgb) varied greatly, though these values correlated well with the CT densities. Plasma protein content was fairly constant, averaging 7.1±0.8 g/dl. There were four hematoma samples with RBC below 20×10^4/μl or Hgb below 2.0 g/dl; their CT values ranged between 18 and 23 H.U., which was considered close to the in vivo serum-level CT density. The five hygroma fluids showed no RBC and very little protein content (less than 0.4 g/dl), with CT density ranging between -2 and 13 H.U. The edge effect of the skull was studied experimentally using a phantom skull filled with water, which revealed a remarkable overshoot of the CT values within ten pixels of the inner wall of the skull. Visual observation of the original CT pictures revealed four low density hematomas and seven mixed density ones. When compared to the density of the ventricular cavity, all of the low density hematomas and the supernatant part of the mixed density ones were clearly higher in density. All five hygromas appeared CSF-dense or lower. In conclusion, because of the edge effect of the skull, thin subdural fluids could not be diagnosed by CT alone. Thick subdural fluids could be differentiated as either hematoma or hygroma by their CT densities: subdural hematomas had in vivo CT densities of at least serum level, or approximately 20 H.U., while subdural hygromas had densities close to CSF. These characteristics were best appreciated by visual observation of the CT scan films. (J.P.N.)

  4. Importing low-density ideas to high-density revitalisation

    DEFF Research Database (Denmark)

    Arnholtz, Jens; Ibsen, Christian Lyhne; Ibsen, Flemming

    2016-01-01

    Why did union officials from a high-union-density country like Denmark choose to import an organising strategy from low-density countries such as the US and the UK? Drawing on in-depth interviews with key union officials and internal documents, the authors of this article argue two key points. Fi...

  5. Novel density-based and hierarchical density-based clustering algorithms for uncertain data.

    Science.gov (United States)

    Zhang, Xianchao; Liu, Han; Zhang, Xiaotong

    2017-09-01

    Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive threshold have not been addressed well in the previous density-based algorithm FDBSCAN and hierarchical density-based algorithm FOPTICS. In this paper, we firstly propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN from the following aspects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, direct reachability probability, thus reducing the complexity and solving the issue of nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify the algorithm PDBSCAN to an improved version (PDBSCANi), by using a better cluster assignment strategy to ensure that every object will be assigned to the most appropriate cluster, thus solving the issue of nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulties for clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of the datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over the existing

  6. Comparison of low density and high density pedicle screw instrumentation in Lenke 1 adolescent idiopathic scoliosis.

    Science.gov (United States)

    Shen, Mingkui; Jiang, Honghui; Luo, Ming; Wang, Wengang; Li, Ning; Wang, Lulu; Xia, Lei

    2017-08-02

    The correlation between implant density and deformity correction has not yet been precisely established in adolescent idiopathic scoliosis (AIS). The aim of this study was to evaluate low density (LD) and high density (HD) pedicle screw instrumentation in terms of clinical, radiological and Scoliosis Research Society (SRS)-22 outcomes in Lenke 1 AIS. We retrospectively reviewed 62 consecutive Lenke 1 AIS patients who underwent posterior spinal arthrodesis using all-pedicle-screw instrumentation with a minimum follow-up of 24 months. Implant density was defined as the number of screws per spinal level fused. Patients were divided into two groups according to the average implant density for the entire study: the LD group (n = 28) had fewer than 1.61 screws per level, while the HD group (n = 34) had more than 1.61 screws per level. Radiographs were analysed preoperatively, postoperatively and at final follow-up. Perioperative and SRS-22 outcomes were also assessed, with independent-sample t tests used for comparisons between the two groups. The two groups showed no significant differences in correction of the main thoracic curve and thoracic kyphosis, blood transfusion, hospital stay, or SRS-22 scores. Compared with the HD group, the LD group had a shorter operating time (278.4 vs. 331.0 min, p = 0.004), less blood loss (823.6 vs. 1010.9 ml, p = 0.048), and fewer pedicle screws needed (15.1 vs. 19.6). Both low density and high density pedicle screw instrumentation achieved satisfactory deformity correction in Lenke 1 AIS patients; however, operating time and blood loss were reduced, and implant costs decreased, with the use of low screw density constructs.

  7. Fertilization increases paddy soil organic carbon density*

    Science.gov (United States)

    Wang, Shao-xian; Liang, Xin-qiang; Luo, Qi-xiang; Fan, Fang; Chen, Ying-xu; Li, Zu-zhang; Sun, Huo-xi; Dai, Tian-fang; Wan, Jun-nan; Li, Xiao-jun

    2012-01-01

    Field experiments provide an opportunity to study the effects of fertilization on soil organic carbon (SOC) sequestration. We sampled soils from a long-term (25 years) paddy experiment in subtropical China. The experiment included eight treatments: (1) check, (2) PK, (3) NP, (4) NK, (5) NPK, (6) 7F:3M (N, P, K inorganic fertilizers+30% organic N), (7) 5F:5M (N, P, K inorganic fertilizers+50% organic N), (8) 3F:7M (N, P, K inorganic fertilizers+70% organic N). Fertilization increased SOC content in the plow layers compared to the non-fertilized check treatment. The SOC density in the top 100 cm of soil ranged from 73.12 to 91.36 Mg/ha. The SOC densities of all fertilizer treatments were greater than that of the check. Those treatments that combined inorganic fertilizers and organic amendments had greater SOC densities than those receiving only inorganic fertilizers. The SOC density was closely correlated to the sum of the soil carbon converted from organic amendments and rice residues. Carbon sequestration in paddy soils could be achieved by balanced and combined fertilization. Fertilization combining both inorganic fertilizers and organic amendments is an effective sustainable practice to sequestrate SOC. PMID:22467369

  8. Fertilization increases paddy soil organic carbon density.

    Science.gov (United States)

    Wang, Shao-xian; Liang, Xin-qiang; Luo, Qi-xiang; Fan, Fang; Chen, Ying-xu; Li, Zu-zhang; Sun, Huo-xi; Dai, Tian-fang; Wan, Jun-nan; Li, Xiao-jun

    2012-04-01

    Field experiments provide an opportunity to study the effects of fertilization on soil organic carbon (SOC) sequestration. We sampled soils from a long-term (25 years) paddy experiment in subtropical China. The experiment included eight treatments: (1) check, (2) PK, (3) NP, (4) NK, (5) NPK, (6) 7F:3M (N, P, K inorganic fertilizers+30% organic N), (7) 5F:5M (N, P, K inorganic fertilizers+50% organic N), (8) 3F:7M (N, P, K inorganic fertilizers+70% organic N). Fertilization increased SOC content in the plow layers compared to the non-fertilized check treatment. The SOC density in the top 100 cm of soil ranged from 73.12 to 91.36 Mg/ha. The SOC densities of all fertilizer treatments were greater than that of the check. Those treatments that combined inorganic fertilizers and organic amendments had greater SOC densities than those receiving only inorganic fertilizers. The SOC density was closely correlated to the sum of the soil carbon converted from organic amendments and rice residues. Carbon sequestration in paddy soils could be achieved by balanced and combined fertilization. Fertilization combining both inorganic fertilizers and organic amendments is an effective sustainable practice to sequestrate SOC.

  9. Mammography density estimation with automated volumetric breast density measurement

    International Nuclear Information System (INIS)

    Ko, Su Yeon; Kim, Eun Kyung; Kim, Min Jung; Moon, Hee Jung

    2014-01-01

    To compare automated volumetric breast density measurement (VBDM) with radiologists' evaluations based on the Breast Imaging Reporting and Data System (BI-RADS), and to identify factors associated with technical failure of VBDM. In this study, 1129 women aged 19-82 years who underwent mammography from December 2011 to January 2012 were included. Breast density evaluations by radiologists based on BI-RADS and by VBDM (Volpara version 1.5.1) were compared. The agreement in interpreting breast density between radiologists and VBDM was determined based on four density grades (D1, D2, D3, and D4) and a binary classification of fatty (D1-2) vs. dense (D3-4) breast using kappa statistics. The association between technical failure of VBDM and patient age, total breast volume, fibroglandular tissue volume, history of partial mastectomy, frequency of masses > 3 cm, and breast density was analyzed. The agreement between breast density evaluations by radiologists and VBDM was fair (k value = 0.26) when the four density grades (D1/D2/D3/D4) were used and moderate (k value = 0.47) for the binary classification (D1-2/D3-4). Twenty-seven women (2.4%) showed failure of VBDM. Small total breast volume, history of partial mastectomy, and high breast density were significantly associated with technical failure of VBDM (p = 0.001 to 0.015). There is fair to moderate agreement in breast density evaluation between radiologists and VBDM. Technical failure of VBDM may be related to small total breast volume, a history of partial mastectomy, and high breast density.
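    The kappa agreement statistic reported above can be illustrated with a minimal hand-rolled implementation of unweighted Cohen's kappa; the rating data below are hypothetical, not from the study:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Unweighted Cohen's kappa between two raters' categorical labels:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(r1) == len(r2)
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[label] * c2[label] for label in set(r1) | set(r2)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical binary density calls: radiologist vs. automated VBDM
radiologist = ["fatty", "fatty", "dense", "dense", "dense", "fatty"]
vbdm        = ["fatty", "dense", "dense", "dense", "fatty", "fatty"]
print(round(cohens_kappa(radiologist, vbdm), 2))  # -> 0.33
```

    Kappa corrects raw percent agreement for the agreement expected by chance, which is why it is preferred for comparing raters whose category frequencies differ.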

  10. Log sampling methods and software for stand and landscape analyses.

    Science.gov (United States)

    Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe methods for efficient, accurate sampling of logs at landscape and stand scales to estimate density, total length, cover, volume, and weight. Our methods focus on optimizing the sampling effort by choosing an appropriate sampling method and transect length for specific forest conditions and objectives. Sampling methods include the line-intersect method and...

  11. Density-based similarity measures for content based search

    Energy Technology Data Exchange (ETDEWEB)

    Hush, Don R [Los Alamos National Laboratory; Porter, Reid B [Los Alamos National Laboratory; Ruggiero, Christy E [Los Alamos National Laboratory

    2009-01-01

    We consider the query-by-multiple-example problem, where the goal is to identify database samples whose content is similar to a collection of query samples. To assess similarity we use a relative content density, which quantifies the relative concentration of the query distribution to the database distribution. If the database distribution is a mixture of the query distribution and a background distribution, then it can be shown that database samples whose relative content density is greater than a particular threshold ρ are more likely to have been generated by the query distribution than the background distribution. We describe an algorithm for predicting samples with relative content density greater than ρ that is computationally efficient and possesses strong performance guarantees. We also show empirical results for applications in computer network monitoring and image segmentation.

  12. Are most samples of animals systematically biased? Consistent individual trait differences bias samples despite random sampling.

    Science.gov (United States)

    Biro, Peter A

    2013-02-01

    Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, 'hidden' trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability, independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem, of course, depends on the magnitude of bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two-times more likely to be sampled than slower-growing fish. This indicates substantial and systematic bias with respect to an important life history trait (growth rate). If correlations between behavioral, physiological and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), and affect our inferences about population structure and abundance. I conclude with a discussion on ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.
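    The trait-correlated sampling bias described above can be illustrated with a toy simulation (all numbers hypothetical): if catchability is assumed proportional to growth rate, the trapped sample's mean growth rate ends up above the true population mean even though the trapping itself is "random".

```python
import random

random.seed(1)

# Population with growth rates in three equal groups, mirroring the
# slow/intermediate/fast seeding in the study (values hypothetical).
population = [0.5] * 1000 + [1.0] * 1000 + [1.5] * 1000

def trap_sample(pop, n):
    """Sample with catchability proportional to growth rate: a
    hypothetical behavioural assumption that bolder, fast-growing
    fish enter traps more readily (sampling with replacement)."""
    return random.choices(pop, weights=pop, k=n)

sample = trap_sample(population, 300)
pop_mean = sum(population) / len(population)   # exactly 1.0
sample_mean = sum(sample) / len(sample)        # biased upward, ~1.17
print(round(pop_mean, 2), round(sample_mean, 2))
```

    The upward shift appears despite no explicit size selection in the sampling code, which is precisely the "hidden" bias the abstract describes.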

  13. The Density of Sustainable Settlements

    DEFF Research Database (Denmark)

    Lauring, Michael; Silva, Victor; Jensen, Ole B.

    2010-01-01

    This paper is the initial result of a cross-disciplinary attempt to encircle an answer to the question of optimal densities of sustainable settlements. Urban density is an important component in the framework of sustainable development and influences not only the character and design of cities...

  14. Ant-inspired density estimation via random walks.

    Science.gov (United States)

    Musco, Cameron; Su, Hsin-Hao; Lynch, Nancy A

    2017-10-03

    Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.
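    The encounter-rate idea can be sketched with a toy simulation of random walkers on a torus grid (grid size, agent count, and step count are arbitrary choices, not from the paper): the average number of other agents sharing a cell, per agent-step, approximates the local density.

```python
import random

random.seed(0)

W, H = 20, 20      # torus grid with C = 400 cells
K = 40             # agents; true density K / C = 0.10
STEPS = 2000

agents = [(random.randrange(W), random.randrange(H)) for _ in range(K)]

encounters = 0
for _ in range(STEPS):
    # each coordinate steps -1, 0, or +1 (a lazy random walk on the torus)
    agents = [((x + random.choice((-1, 0, 1))) % W,
               (y + random.choice((-1, 0, 1))) % H) for x, y in agents]
    occupancy = {}
    for pos in agents:
        occupancy[pos] = occupancy.get(pos, 0) + 1
    # each agent counts the other agents currently sharing its cell
    encounters += sum(c * (c - 1) for c in occupancy.values())

# mean encounters per agent-step estimates (K - 1) / C = 0.0975
density_estimate = encounters / (K * STEPS)
print(round(density_estimate, 3))
```

    As the paper notes, collisions of nearby walkers are not independent across steps, yet the time-averaged encounter rate still converges close to what independent sampling of grid cells would give.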

  15. Level density of 57Co

    International Nuclear Information System (INIS)

    Mishra, V.; Boukharouba, N.; Brient, C.E.; Grimes, S.M.; Pedroni, R.S.

    1994-01-01

    Levels in 57 Co have been studied in the region of resolved levels (E 57 Fe(p,n) 57 Co neutron spectrum with resolution ΔE∼5 keV. Seventeen previously unknown levels are located. Level density parameters in the continuum region are deduced from thick target measurements of the same reaction and additional level density information is deduced from Ericson fluctuation studies of the reaction 56 Fe(p,n) 56 Co. A set of level density parameters is found which describes the level density of 57 Co at energies up to 14 MeV. Efforts to obtain level density information from the 56 Fe(d,n) 57 Co reaction were unsuccessful, but estimates of the fraction of the deuteron absorption cross section corresponding to compound nucleus formation are obtained

  16. The density of cement phases

    International Nuclear Information System (INIS)

    Balonis, M.; Glasser, F.P.

    2009-01-01

    The densities of principal crystalline phases occurring in Portland cement are critically assessed and tabulated, in some cases with addition of new data. A reliable and self-consistent density set for crystalline phases was obtained by calculating densities from crystallographic data and unit cell contents. Independent laboratory work was undertaken to synthesize major AFm and AFt cement phases, determine their unit cell parameters and compare the results with those recorded in the literature. Parameters were refined from powder diffraction patterns using CELREF 2 software. A density value is presented for each phase, showing literature sources, in some cases describing limitations on the data, and the weighting attached to numerical values where an averaging process was used for accepted data. The consequences of water packing for density changes in AFm and AFt structures are briefly discussed.

  17. Infrared thermography for wood density estimation

    Science.gov (United States)

    López, Gamaliel; Basterra, Luis-Alfonso; Acuña, Luis

    2018-03-01

    Infrared thermography (IRT) has become a commonly used technique for the non-destructive inspection and evaluation of wood structures. Based on the radiation emitted by all objects, it enables remote visualization of surface temperature without contact, using a thermographic device. The process of transforming radiant energy into temperature depends on many parameters, and interpreting the results is usually complicated; however, recent work has analyzed the operation of IRT and expanded its applications. This work analyzes how wood density affects the thermodynamic behavior of timber as observed by IRT. The cooling of various wood samples was registered, and a statistical procedure that enables quantitative estimation of timber density was designed. This procedure represents a new method to physically characterize this material.

  18. On the Sampling

    OpenAIRE

    Güleda Doğan

    2017-01-01

    This editorial is on statistical sampling, one of the two most important reasons for editorial rejection from our journal Turkish Librarianship. It briefly summarises the stages of quantitative research, where sampling fits among them, the importance of sampling for a study, deciding on sample size, and sampling methods.

  19. Information sampling behavior with explicit sampling costs

    Science.gov (United States)

    Juni, Mordechai Z.; Gureckis, Todd M.; Maloney, Laurence T.

    2015-01-01

    The decision to gather information should take into account both the value of information and its accrual costs in time, energy and money. Here we explore how people balance the monetary costs and benefits of gathering additional information in a perceptual-motor estimation task. Participants were rewarded for touching a hidden circular target on a touch-screen display. The target’s center coincided with the mean of a circular Gaussian distribution from which participants could sample repeatedly. Each “cue” — sampled one at a time — was plotted as a dot on the display. Participants had to repeatedly decide, after sampling each cue, whether to stop sampling and attempt to touch the hidden target or continue sampling. Each additional cue increased the participants’ probability of successfully touching the hidden target but reduced their potential reward. Two experimental conditions differed in the initial reward associated with touching the hidden target and the fixed cost per cue. For each condition we computed the optimal number of cues that participants should sample, before taking action, to maximize expected gain. Contrary to recent claims that people gather less information than they objectively should before taking action, we found that participants over-sampled in one experimental condition, and did not significantly under- or over-sample in the other. Additionally, while the ideal observer model ignores the current sample dispersion, we found that participants used it to decide whether to stop sampling and take action or continue sampling, a possible consequence of imperfect learning of the underlying population dispersion across trials. PMID:27429991
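    The optimal-stopping computation described above can be sketched under an idealized model that is not necessarily the paper's exact task: the mean of n cues from a circular Gaussian misses the target center by an error shrinking as 1/sqrt(n), while the reward shrinks linearly with each cue. All parameter values below are hypothetical.

```python
import math

def expected_gain(n, reward0=5.0, cost=0.1, sigma=1.0, radius=0.5):
    """Expected gain after sampling n cues, in an idealized model:
    the mean of n cues from a circular Gaussian (std sigma) is itself
    a 2-D Gaussian with std sigma / sqrt(n), so the probability of
    landing within `radius` of the true center is
    1 - exp(-n * radius**2 / (2 * sigma**2)); the reward decreases by
    `cost` per cue.  All parameter values are hypothetical."""
    p_hit = 1.0 - math.exp(-n * radius ** 2 / (2 * sigma ** 2))
    return max(reward0 - cost * n, 0.0) * p_hit

# the optimum balances a higher hit probability against a smaller reward
best_n = max(range(1, 51), key=expected_gain)
print(best_n, round(expected_gain(best_n), 3))
```

    Comparing each participant's actual stopping point against such an argmax is, in spirit, how over- and under-sampling relative to the ideal observer can be quantified.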

  20. Distance sampling methods and applications

    CERN Document Server

    Buckland, S T; Marques, T A; Oedekoven, C S

    2015-01-01

    In this book, the authors cover the basic methods and advances within distance sampling that are most valuable to practitioners and in ecology more broadly. This is the fourth book dedicated to distance sampling. In the decade since the last book published, there have been a number of new developments. The intervening years have also shown which advances are of most use. This self-contained book covers topics from the previous publications, while also including recent developments in method, software and application. Distance sampling refers to a suite of methods, including line and point transect sampling, in which animal density or abundance is estimated from a sample of distances to detected individuals. The book illustrates these methods through case studies; data sets and computer code are supplied to readers through the book’s accompanying website.  Some of the case studies use the software Distance, while others use R code. The book is in three parts.  The first part addresses basic methods, the ...

  1. How Sample Size Affects a Sampling Distribution

    Science.gov (United States)

    Mulekar, Madhuri S.; Siegel, Murray H.

    2009-01-01

    If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…
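    The dependence of the standard error on sample size can be demonstrated empirically with a short simulation (population parameters hypothetical): tripling sqrt(n) cuts the spread of the sampling distribution of the mean roughly threefold.

```python
import random
import statistics

random.seed(42)

POP_MEAN, POP_SD = 50.0, 10.0

def empirical_se(n, reps=4000):
    """Standard deviation of the sampling distribution of the mean,
    estimated by drawing `reps` samples of size n from the population."""
    means = [statistics.fmean(random.gauss(POP_MEAN, POP_SD) for _ in range(n))
             for _ in range(reps)]
    return statistics.stdev(means)

se5, se45 = empirical_se(5), empirical_se(45)
# theory: se = POP_SD / sqrt(n), so going from n=5 to n=45 cuts se ~3x
print(round(se5, 2), round(se45, 2), round(se5 / se45, 2))
```

    The same simulation, plotted as histograms of the means, also shows the central limit theorem at work: both sampling distributions look Gaussian even for modest n.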

  2. Particle density determination of pellets and briquettes

    Energy Technology Data Exchange (ETDEWEB)

    Rabier, Fabienne; Temmerman, Michaeel [Centre wallon de Recherches agronomiques, Departement de Genie rural, CRA-W, Chaussee de Namur, 146, B 5030 Gembloux (Belgium); Boehm, Thorsten; Hartmann, Hans [Technologie und Foerderzentrum fuer Nachwachsende Rohstoffe, TFZ, Schulgasse 18, D 94315 Straubing (Germany); Daugbjerg Jensen, Peter [Forest and Landscape, The Royal Veterinary and Agricultural University, Rolighedsvej 23, DK 1958 Frederiksberg C (Denmark); Rathbauer, Josef [Bundesanstalt fuer Landtechnik, BLT, Rottenhauer Strasse,1 A 3250 Wieselburg (Austria); Carrasco, Juan; Fernandez, Miguel [Centro de investigaciones Energeticas, Medioambientales y Tecnologicas, CIEMAT, Avenida Complutense, 22 E 28040 Madrid (Spain)

    2006-11-15

    Several methods and procedures for determining the particle density of pellets and briquettes were tested and evaluated. Round-robin trials were organized involving five European laboratories, which measured the particle densities of 15 pellet and five briquette types. The tests included stereometric methods, methods based on liquid displacement (hydrostatic and buoyancy) applying different procedures, and one method based on solid displacement. For both pellets and briquettes, it became clear that methods based on either liquid or solid displacement (the latter tested only on pellet samples) give improved reproducibility compared to stereometric methods. For both pellets and briquettes, the variability of measurements depends strongly on the fuel type itself. For briquettes, the three methods based on liquid displacement led to similar results, and coating the samples with paraffin did not improve repeatability or reproducibility. Determinations with pellets proved most reliable when the buoyancy method was applied using a wetting agent to reduce surface tension, without sample coating. This method gave the best values for repeatability and reproducibility, so fewer replications are required to reach a given accuracy level. For wood pellets, the method based on solid displacement gave better repeatability values; however, this instrument was tested at only one laboratory. (author)

  3. Sampling Criterion for EMC Near Field Measurements

    DEFF Research Database (Denmark)

    Franek, Ondrej; Sørensen, Morten; Ebert, Hans

    2012-01-01

    An alternative, quasi-empirical sampling criterion for EMC near field measurements intended for close coupling investigations is proposed. The criterion is based on the maximum error caused by sub-optimal sampling of near fields in the vicinity of an elementary dipole, which is suggested as a worst-case representative of a signal trace on a typical printed circuit board. It has been found that the sampling density derived in this way is in fact very similar to that given by the antenna near field sampling theorem, if an error of less than 1 dB is required. The principal advantage of the proposed formulation is its...

  4. Density Estimation in Several Populations With Uncertain Population Membership

    KAUST Repository

    Ma, Yanyuan

    2011-09-01

    We devise methods to estimate probability density functions of several populations using observations with uncertain population membership, meaning from which population an observation comes is unknown. The probability of an observation being sampled from any given population can be calculated. We develop general estimation procedures and bandwidth selection methods for our setting. We establish large-sample properties and study finite-sample performance using simulation studies. We illustrate our methods with data from a nutrition study.
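    One natural sketch of density estimation with uncertain membership, not necessarily the estimator developed in the paper, is a kernel density estimate in which each observation's kernel is weighted by its probability of belonging to the population of interest:

```python
import math

def weighted_kde(x, data, probs, bandwidth=0.5):
    """Gaussian-kernel density estimate for one population when each
    observation belongs to it only with probability probs[i]: weight
    each kernel by that probability and normalise by their sum.
    (Bandwidth and weighting are illustrative choices, not the
    estimator developed in the paper.)"""
    norm_const = bandwidth * math.sqrt(2.0 * math.pi)
    weight_sum = sum(probs)
    return sum(p * math.exp(-0.5 * ((x - xi) / bandwidth) ** 2) / norm_const
               for xi, p in zip(data, probs)) / weight_sum

data = [1.0, 1.2, 0.8, 3.0, 3.1]       # observations from a mixed sample
probs = [0.9, 0.8, 0.95, 0.1, 0.05]    # P(observation is from population 1)

# the estimated density for population 1 concentrates near x = 1, not x = 3
print(round(weighted_kde(1.0, data, probs), 3),
      round(weighted_kde(3.0, data, probs), 3))
```

    Because the weights sum to one after normalisation, the estimate still integrates to one, so it remains a proper density for the population of interest.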

  5. Radioactivity in environmental samples

    International Nuclear Information System (INIS)

    Fornaro, Laura

    2001-01-01

    The objective of this practical work is to familiarize the student with radioactivity measurements in environmental samples. The chosen samples were a natural potassium salt, a uranium or thorium salt, and a sample of drinking water.

  6. DNA Sampling Hook

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The DNA Sampling Hook is a significant improvement on a method of obtaining a tissue sample from a live fish in situ from an aquatic environment. A tissue sample...

  7. Iowa Geologic Sampling Points

    Data.gov (United States)

    Iowa State University GIS Support and Research Facility — Point locations of geologic samples/files in the IGS repository. Types of samples include well cuttings, outcrop samples, cores, drillers logs, measured sections,...

  8. Density dependent hadron field theory

    International Nuclear Information System (INIS)

    Fuchs, C.; Lenske, H.; Wolter, H.H.

    1995-01-01

    A fully covariant approach to a density dependent hadron field theory is presented. The relation between in-medium NN interactions and field-theoretical meson-nucleon vertices is discussed. The medium dependence of nuclear interactions is described by a functional dependence of the meson-nucleon vertices on the baryon field operators. As a consequence, the Euler-Lagrange equations lead to baryon rearrangement self-energies which are not obtained when only a parametric dependence of the vertices on the density is assumed. It is shown that the approach is energy-momentum conserving and thermodynamically consistent. Solutions of the field equations are studied in the mean-field approximation. Descriptions of the medium dependence in terms of the baryon scalar and vector density are investigated. Applications to infinite nuclear matter and finite nuclei are discussed. Density dependent coupling constants obtained from Dirac-Brueckner calculations with the Bonn NN potentials are used. Results from Hartree calculations for energy spectra, binding energies, and charge density distributions of ¹⁶O, ⁴⁰,⁴⁸Ca, and ²⁰⁸Pb are presented. Comparisons to data strongly support the importance of rearrangement in a relativistic density dependent field theory. Most striking is the simultaneous improvement of charge radii, charge densities, and binding energies. The results indicate the appearance of a new "Coester line" in the nuclear matter equation of state.

  9. Measuring single-cell density.

    Science.gov (United States)

    Grover, William H; Bryan, Andrea K; Diez-Silva, Monica; Suresh, Subra; Higgins, John M; Manalis, Scott R

    2011-07-05

    We have used a microfluidic mass sensor to measure the density of single living cells. By weighing each cell in two fluids of different densities, our technique measures the single-cell mass, volume, and density of approximately 500 cells per hour with a density precision of 0.001 g mL⁻¹. We observe that the intrinsic cell-to-cell variation in density is nearly 100-fold smaller than the mass or volume variation. As a result, we can measure changes in cell density indicative of cellular processes that would be otherwise undetectable by mass or volume measurements. Here, we demonstrate this with four examples: identifying Plasmodium falciparum malaria-infected erythrocytes in a culture, distinguishing transfused blood cells from a patient's own blood, identifying irreversibly sickled cells in a sickle cell patient, and identifying leukemia cells in the early stages of responding to a drug treatment. These demonstrations suggest that the ability to measure single-cell density will provide valuable insights into cell state for a wide range of biological processes.
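
    The two-fluid weighing principle lends itself to a short worked example: the buoyant mass measured in a fluid of density ρ_f is m_b = m − ρ_f·V, so measurements in two fluids determine mass, volume, and density. A hedged sketch of that algebra only (the numbers and names are illustrative, not from the study):

```python
def cell_density(mb1, rho1, mb2, rho2):
    """Recover single-cell mass, volume, and density from buoyant masses
    measured in two fluids (sketch of the two-fluid principle:
    m_b = m - rho_fluid * V)."""
    volume = (mb1 - mb2) / (rho2 - rho1)      # V from the difference of buoyant masses
    mass = mb1 + rho1 * volume                # back-substitute into m_b = m - rho*V
    return mass, volume, mass / volume

# hypothetical cell: m = 1.10 ng, V = 1.00 pL  ->  density 1.10 g/mL
m, V = 1.10, 1.00
mb1 = m - 1.00 * V    # buoyant mass in a fluid of density 1.00 g/mL
mb2 = m - 1.05 * V    # buoyant mass in a denser fluid, 1.05 g/mL
print(cell_density(mb1, 1.00, mb2, 1.05))   # recovers mass, volume, density
```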

  10. Attractor comparisons based on density

    International Nuclear Information System (INIS)

    Carroll, T. L.

    2015-01-01

    Recognizing a chaotic attractor can be seen as a problem in pattern recognition. Some feature vector must be extracted from the attractor and used to compare to other attractors. The field of machine learning has many methods for extracting feature vectors, including clustering methods, decision trees, support vector machines, and many others. In this work, feature vectors are created by representing the attractor as a density in phase space and creating polynomials based on this density. Density is useful in itself because it is a one dimensional function of phase space position, but representing an attractor as a density is also a way to reduce the size of a large data set before analyzing it with graph theory methods, which can be computationally intensive. The density computation in this paper is also fast to execute. In this paper, as a demonstration of the usefulness of density, the density is used directly to construct phase space polynomials for comparing attractors. Comparisons between attractors could be useful for tracking changes in an experiment when the underlying equations are too complicated for vector field modeling
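
    The idea of comparing attractors through a phase-space density can be sketched as follows: bin a trajectory into a normalized occupation histogram and form low-order polynomial moments of that density as a feature vector. This two-dimensional sketch is illustrative only and does not reproduce the paper's polynomial construction:

```python
import numpy as np

def density_features(traj, bins=20, degree=2):
    """Represent a 2D attractor as a normalized occupation density on a
    phase-space grid, then form low-order polynomial moments of that
    density as a feature vector (a sketch of the general idea)."""
    hist, edges = np.histogramdd(traj, bins=bins)
    density = hist / hist.sum()                       # occupation probability per cell
    centers = [0.5 * (e[:-1] + e[1:]) for e in edges]
    gx, gy = np.meshgrid(*centers, indexing="ij")
    feats = []
    for px in range(degree + 1):
        for py in range(degree + 1 - px):             # moments of total order <= degree
            feats.append((density * gx**px * gy**py).sum())
    return np.array(feats)

# compare two noisy circular "attractors" of different radius
t = np.linspace(0, 20 * np.pi, 5000)
a = np.c_[np.cos(t), np.sin(t)]
b = np.c_[2 * np.cos(t), 2 * np.sin(t)]
fa, fb = density_features(a), density_features(b)
print(np.linalg.norm(fa - fb))   # nonzero: the moments separate the two attractors
```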

  11. Energy vs. density on paths toward more exact density functionals.

    Science.gov (United States)

    Kepp, Kasper P

    2018-03-14

    Recently, the progression toward more exact density functional theory has been questioned, implying a need for more formal ways to systematically measure progress, i.e. a "path". Here I use the Hohenberg-Kohn theorems and the definition of normality by Burke et al. to define a path toward exactness and "straying" from the "path" by separating errors in ρ and E[ρ]. A consistent path toward exactness involves minimizing both errors. Second, a suitably diverse test set of trial densities ρ' can be used to estimate the significance of errors in ρ without knowing the exact densities which are often inaccessible. To illustrate this, the systems previously studied by Medvedev et al., the first ionization energies of atoms with Z = 1 to 10, the ionization energy of water, and the bond dissociation energies of five diatomic molecules were investigated using CCSD(T)/aug-cc-pV5Z as benchmark at chemical accuracy. Four functionals of distinct designs was used: B3LYP, PBE, M06, and S-VWN. For atomic cations regardless of charge and compactness up to Z = 10, the energy effects of the different ρ are energy-wise insignificant. An interesting oscillating behavior in the density sensitivity is observed vs. Z, explained by orbital occupation effects. Finally, it is shown that even large "normal" problems such as the Co-C bond energy of cobalamins can use simpler (e.g. PBE) trial densities to drastically speed up computation by loss of a few kJ mol -1 in accuracy. The proposed method of using a test set of trial densities to estimate the sensitivity and significance of density errors of functionals may be useful for testing and designing new balanced functionals with more systematic improvement of densities and energies.

  12. Network and adaptive sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

    Combining the two statistical techniques of network sampling and adaptive sampling, this book illustrates the advantages of using them in tandem to effectively capture sparsely located elements in unknown pockets. It shows how network sampling is a reliable guide in capturing inaccessible entities through linked auxiliaries. The text also explores how adaptive sampling is strengthened in information content through subsidiary sampling with devices to mitigate unmanageable expanding sample sizes. Empirical data illustrates the applicability of both methods.

  13. Exploiting sparsity and low-rank structure for the recovery of multi-slice breast MRIs with reduced sampling error.

    Science.gov (United States)

    Yin, X X; Ng, B W-H; Ramamohanarao, K; Baghai-Wadji, A; Abbott, D

    2012-09-01

    It has been shown that magnetic resonance images (MRIs) with a sparse representation in a transformed domain, e.g. spatial finite differences (FD) or the discrete cosine transform (DCT), can be restored from undersampled k-space by applying current compressive sampling theory. This paper presents a model-based method for the restoration of MRIs. A reduced-order model, in which the full system response is projected onto a subspace of lower dimensionality, is used to accelerate image reconstruction by reducing the size of the linear system involved. The singular value threshold (SVT) technique is applied as a denoising scheme to reduce and select the model order of the inverse Fourier transform image, and to restore multi-slice breast MRIs that have been compressively sampled in k-space. The MRIs restored with SVT denoising show reduced sampling errors compared to direct restoration via spatial FD or DCT. Compressive sampling is a technique for finding sparse solutions to underdetermined linear systems; the sparsity implicit in MRIs is exploited to reconstruct images from significantly undersampled k-space. The challenge, however, is that the random undersampling produces incoherent artifacts, adding noise-like interference to the sparsely represented image, and the recovery algorithms in the literature are not capable of fully removing these artifacts. A denoising procedure is therefore needed to improve the quality of image recovery. This paper applies a singular value threshold algorithm to reduce the model order of the image basis functions, which further improves the reconstruction by removing noise artifacts. The principle of the denoising scheme is to reconstruct the sparse MRI matrices optimally with a lower rank by selecting a smaller number of dominant singular values.
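
    The lower-rank reconstruction described above amounts to a singular value thresholding step. A minimal generic sketch of SVT, standing in for rather than reproducing the paper's scheme:

```python
import numpy as np

def svt(matrix, tau):
    """Singular value thresholding: soft-threshold the singular values by
    tau, zeroing the small ones, to obtain a lower-rank denoised matrix
    (a generic SVT step, not the paper's full pipeline)."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)          # shrink and select dominant singular values
    return u @ np.diag(s_thr) @ vt

# toy demo: a rank-2 "image" corrupted by noise, then denoised by SVT
rng = np.random.default_rng(1)
low_rank = rng.standard_normal((64, 2)) @ rng.standard_normal((2, 64))
noisy = low_rank + 0.1 * rng.standard_normal((64, 64))
denoised = svt(noisy, tau=2.0)
err_before = np.linalg.norm(noisy - low_rank)
err_after = np.linalg.norm(denoised - low_rank)
print(err_before, err_after)   # error shrinks after the SVT step
```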

  14. Density limit in JT-60

    International Nuclear Information System (INIS)

    Kamada, Yutaka; Hosogane, Nobuyuki; Hirayama, Toshio; Tsunematsu, Toshihide

    1990-05-01

    This report studies mainly the density limit for a series of gas- and pellet-fuelled limiter discharges in JT-60. With pellet injection into high-current/low-q (q(a) = 2.3–2.5) discharges, the Murakami factor reaches up to 10–13 × 10¹⁹ m⁻² T⁻¹. These values are about a factor of 1.5–2.0 higher than those for usual gas-fuelled discharges. The pellet injected discharges have high central density, whereas the electron density in the outer region (a/2 abs and n_e²(r = 50 cm) × Z_eff(r = 50 cm). (author)

  15. Charge density waves in solids

    CERN Document Server

    Gor'kov, LP

    2012-01-01

    The latest addition to this series covers a field which is commonly referred to as charge density wave dynamics. The most thoroughly investigated materials are inorganic linear chain compounds with highly anisotropic electronic properties. The volume opens with an examination of their structural properties and the essential features which allow charge density waves to develop. The behaviour of the charge density waves, where interesting phenomena are observed, is treated both from a theoretical and an experimental standpoint. The role of impurities in statics and dynamics is considered and an

  16. Magnetothermopower in unconventional density waves

    International Nuclear Information System (INIS)

    Dora, B.; Maki, K.; Vanyolos, A.; Virosztek, A.

    2003-10-01

    After a brief introduction to unconventional density waves (i.e. unconventional charge density wave (UCDW) and unconventional spin density wave (USDW)), we discuss the magnetotransport of the low temperature phase (LTP) of α-(BEDT-TTF)₂KHg(SCN)₄. Recently we have proposed that the low temperature phase in α-(BEDT-TTF)₂KHg(SCN)₄ should be UCDW. Here we show that UCDW describes very consistently the magnetothermopower of α-(BEDT-TTF)₂KHg(SCN)₄ observed by Choi et al. (author)

  17. Stellar Disk Truncations: HI Density and Dynamics

    Science.gov (United States)

    Trujillo, Ignacio; Bakos, Judit

    2010-06-01

    Using The HI Nearby Galaxy Survey (THINGS) 21-cm observations of a sample of nearby (nearly face-on) galaxies, we explore whether the stellar disk truncation phenomenon produces any signature either in the HI gas density and/or in the gas dynamics. Recent cosmological simulations suggest that the origin of the break in the surface brightness distribution is the appearance of a warp at the truncation position. This warp should produce a flaring of the gas distribution, increasing the velocity dispersion of the HI component beyond the break. We do not find, however, any evidence of this increase in the gas velocity dispersion profile.

  18. Evolución de los usos en relación con los parámetros morfológicos de densidad y compacidad (Una muestra en Puente de Vallecas, Madrid / Use evolution in relation to morphological parameters of density and compactness. (A sample in Puente de Vallecas

    Directory of Open Access Journals (Sweden)

    Fernando Miguel García Martín

    2012-09-01

    Full Text Available Among the social, economic, environmental, and spatial variables that influence the activities taking place in cities, the spatial ones, those relating to urban form, are among the least studied. This paper analyses the influence of two morphological parameters, density and compactness, on the evolution of uses in the Puente de Vallecas district of Madrid, assessing the suitability of the different building types used in the formation of this peripheral area of the city during the twentieth century by comparing their current situation and evolution. Keywords: urban morphology, urban uses, density, diversity, periphery, urban flexibility.

  19. FOREWORD: Special issue on density

    Science.gov (United States)

    Fujii, Kenichi

    2004-04-01

    This special issue on density was undertaken to provide readers with an overview of the present state of the density standards for solids, liquids and gases, as well as the technologies developed for measuring density. This issue also includes topics on the refractive index of gases and on techniques used for calibrating hydrometers so that almost all areas concerned with density standards are covered in four review articles and seven original articles, most of which describe current research being conducted at national metrology institutes (NMIs). A review article was invited from the Ruhr-Universität Bochum to highlight research on the magnetic suspension densimeters. In metrology, the determinations of the volume of a weight and the density of air are of primary importance in establishing a mass standard because the effect of the buoyancy force of air acting on the weight must be known accurately to determine the mass of the weight. A density standard has therefore been developed at many NMIs with a close relation to the mass standard. Hydrostatic weighing is widely used to measure the volume of a solid. The most conventional hydrostatic weighing method uses water as a primary density standard for measuring the volume of a solid. A brief history of the determination of the density of water is therefore given in a review article, as well as a recommended value for the density of water with a specified isotopic abundance. The most modern technique for hydrostatic weighing uses a solid density standard instead of water. For this purpose, optical interferometers for measuring the diameters of silicon spheres have been developed to convert the length standard into the volume standard with a small uncertainty. A review article is therefore dedicated to describing the state-of-the-art optical interferometers developed for silicon spheres. Relative combined standard uncertainties of several parts in 10⁸ have been achieved today for measuring the volume and density of

  20. Measurement of neoclassically predicted edge current density at ASDEX Upgrade

    Science.gov (United States)

    Dunne, M. G.; McCarthy, P. J.; Wolfrum, E.; Fischer, R.; Giannone, L.; Burckhart, A.; the ASDEX Upgrade Team

    2012-12-01

    Experimental confirmation of neoclassically predicted edge current density in an ELMy H-mode plasma is presented. Current density analysis using the CLISTE equilibrium code is outlined and the rationale for accuracy of the reconstructions is explained. Sample profiles and time traces from analysis of data at ASDEX Upgrade are presented. A high time resolution is possible due to the use of an ELM-synchronization technique. Additionally, the flux-surface-averaged current density is calculated using a neoclassical approach. Results from these two separate methods are then compared and are found to validate the theoretical formula. Finally, several discharges are compared as part of a fuelling study, showing that the size and width of the edge current density peak at the low-field side can be explained by the electron density and temperature drives and their respective collisionality modifications.

  1. Measurement of neoclassically predicted edge current density at ASDEX Upgrade

    International Nuclear Information System (INIS)

    Dunne, M.G.; McCarthy, P.J.; Wolfrum, E.; Fischer, R.; Giannone, L.; Burckhart, A.

    2012-01-01

    Experimental confirmation of neoclassically predicted edge current density in an ELMy H-mode plasma is presented. Current density analysis using the CLISTE equilibrium code is outlined and the rationale for accuracy of the reconstructions is explained. Sample profiles and time traces from analysis of data at ASDEX Upgrade are presented. A high time resolution is possible due to the use of an ELM-synchronization technique. Additionally, the flux-surface-averaged current density is calculated using a neoclassical approach. Results from these two separate methods are then compared and are found to validate the theoretical formula. Finally, several discharges are compared as part of a fuelling study, showing that the size and width of the edge current density peak at the low-field side can be explained by the electron density and temperature drives and their respective collisionality modifications. (paper)

  2. SYNTHESIS, CHARACTERIZATION AND DENSITY FUNCTIONAL ...

    African Journals Online (AJOL)

    Preferred Customer

    We synthesized a number of aniline derivatives containing acyl groups to compare their barriers of rotation around ... KEY WORDS: Monoacyl aniline, Synthesis, Density functional theory, Rotation barrier. INTRODUCTION. Developments in ...

  3. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    Science.gov (United States)

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional “saturation” trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias. In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this

  4. Sampling procedures and tables

    International Nuclear Information System (INIS)

    Franzkowski, R.

    1980-01-01

    Characteristics, defects, defectives - Sampling by attributes and by variables - Sample versus population - Frequency distributions for the number of defectives or the number of defects in the sample - Operating characteristic curve, producer's risk, consumer's risk - Acceptable quality level AQL - Average outgoing quality AOQ - Standard ISO 2859 - Fundamentals of sampling by variables for fraction defective. (RW)

  5. Experimental Evidence for Static Charge Density Waves in Iron Oxypnictides

    KAUST Repository

    Martinelli, A.; Manfrinetti, P.; Provino, A.; Genovese, Alessandro; Caglieris, F.; Lamura, G.; Ritter, C.; Putti, M.

    2017-01-01

    In this Letter we report high-resolution synchrotron x-ray powder diffraction and transmission electron microscope analysis of Mn-substituted LaFeAsO samples, demonstrating that a static incommensurate modulated structure develops across the low-temperature orthorhombic phase, whose modulation wave vector depends on the Mn content. The incommensurate structural distortion is likely originating from a charge-density-wave instability, a periodic modulation of the density of conduction electrons associated with a modulation of the atomic positions. Our results add a new component in the physics of Fe-based superconductors, indicating that the density wave ordering is charge driven.

  6. Experimental Evidence for Static Charge Density Waves in Iron Oxypnictides

    KAUST Repository

    Martinelli, A.

    2017-02-01

    In this Letter we report high-resolution synchrotron x-ray powder diffraction and transmission electron microscope analysis of Mn-substituted LaFeAsO samples, demonstrating that a static incommensurate modulated structure develops across the low-temperature orthorhombic phase, whose modulation wave vector depends on the Mn content. The incommensurate structural distortion is likely originating from a charge-density-wave instability, a periodic modulation of the density of conduction electrons associated with a modulation of the atomic positions. Our results add a new component in the physics of Fe-based superconductors, indicating that the density wave ordering is charge driven.

  7. Vibronic coupling density and related concepts

    International Nuclear Information System (INIS)

    Sato, Tohru; Uejima, Motoyuki; Iwahara, Naoya; Haruta, Naoki; Shizu, Katsuyuki; Tanaka, Kazuyoshi

    2013-01-01

    Vibronic coupling density is derived from a general point of view as a one-electron property density. Related concepts as well as their applications are presented. Linear and nonlinear vibronic coupling density and related concepts, orbital vibronic coupling density, reduced vibronic coupling density, atomic vibronic coupling constant, and effective vibronic coupling density, illustrate the origin of vibronic couplings and enable us to design novel functional molecules or to elucidate chemical reactions. Transition dipole moment density is defined as an example of the one-electron property density. Vibronic coupling density and transition dipole moment density open a way to design light-emitting molecules with high efficiency.

  8. Effective sample labeling

    International Nuclear Information System (INIS)

    Rieger, J.T.; Bryce, R.W.

    1990-01-01

    Ground-water samples collected for hazardous-waste and radiological monitoring have come under strict regulatory and quality assurance requirements as a result of laws such as the Resource Conservation and Recovery Act. To comply with these laws, the labeling system used to identify environmental samples had to be upgraded to ensure proper handling and to protect collection personnel from exposure to sample contaminants and sample preservatives. The sample label now used at the Pacific Northwest Laboratory is a complete sample document. In the event other paperwork on a labeled sample were lost, the necessary information could be found on the label

  9. Ultrasonic level, temperature, and density sensor

    International Nuclear Information System (INIS)

    Rogers, S.C.; Miller, G.N.

    1982-01-01

    A sensor has been developed to measure simultaneously the level, temperature, and density of the fluid in which it is immersed. The sensor is a thin, rectangular stainless steel ribbon which acts as a waveguide and is housed in a perforated tube. The waveguide is coupled to a section of magnetostrictive magnetic-coil transducers. These transducers are excited in an alternating sequence to interrogate the sensor with both torsional ultrasonic waves, utilizing the Wiedemann effect, and extensional ultrasonic waves, using the Joule effect. The measured torsional wave transit time is a function of the density, level, and temperature of the fluid surrounding the waveguide. The measured extensional wave transit time is a function of the temperature of the waveguide only. The sensor is divided into zones by the introduction of reflecting surfaces at measured intervals along its length. Consequently, the transit times from each reflecting surface can be analyzed to yield a temperature profile and a density profile along the length of the sensor. Improvements in acoustic wave dampener and pressure seal designs enhance the compatibility of the probe with high-temperature, high-radiation, water-steam environments and increase the likelihood of survival in such environments. Utilization of a microcomputer to automate data sampling and processing has resulted in improved resolution of the sensor

  10. Early-type galaxy core phase densities

    International Nuclear Information System (INIS)

    Carlberg, R. G.; Hartwick, F. D. A.

    2014-01-01

    Early-type galaxies have projected central density brightness profile logarithmic slopes, γ', ranging from about 0 to 1. We show that γ' is strongly correlated, r = 0.83, with the coarse grain phase density of the galaxy core, Q₀ ≡ ρ/σ³. The luminosity-γ' correlation is much weaker, r = –0.51. Q₀ also serves to separate the distribution of steep core profiles, γ' > 0.5, from shallow profiles, γ' < 0.3, although there are many galaxies of intermediate slope, at intermediate Q₀, in a volume-limited sample. The transition phase density separating the two profile types is approximately 0.003 M☉ pc⁻³ km⁻³ s³, which is also where the relation between Q₀ and core mass shows a change in slope, the rotation rate of the central part of the galaxy increases, and the ratio of the black hole to core mass increases. These relations are considered relative to the globular cluster inspiral core buildup and binary black hole core scouring mechanisms for core creation and evolution. Mass-enhanced globular cluster inspiral models have quantitative predictions that are supported by data, but no single model yet completely explains the correlations.
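
    The phase-density criterion quoted above is easy to state in code; a small sketch using the abstract's units, with hypothetical example values:

```python
def core_phase_density(rho, sigma):
    """Coarse-grained core phase density Q0 = rho / sigma**3, with rho in
    M_sun pc^-3 and sigma in km s^-1; the abstract's transition value in
    these units is ~0.003 (example inputs below are hypothetical)."""
    return rho / sigma**3

# a dense, low-dispersion core vs a diffuse, high-dispersion one
steep = core_phase_density(rho=50.0, sigma=20.0)    # 50/8000 = 0.00625 > 0.003
shallow = core_phase_density(rho=10.0, sigma=30.0)  # 10/27000 ~ 3.7e-4 < 0.003
print(steep > 0.003, shallow < 0.003)
```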

  11. ARPES study of the evolution of band structure and charge density wave properties in RTe₃ (R = Y, La, Ce, Sm, Gd, Tb, and Dy)

    Energy Technology Data Exchange (ETDEWEB)

    Hussain, Zahid; Brouet, Veronique; Yang, Wanli; Zhou, Xingjiang; Hussain, Zahid; Moore, R.G.; He, R.; Lu, D. H.; Shen, Z.X.; Laverock, J.; Dugdale, S.B.; Ru, N.; Fisher, R.

    2008-01-16

    We present a detailed angle-resolved photoemission spectroscopy (ARPES) investigation of the RTe₃ family, which sets this system as an ideal "textbook" example for the formation of a nesting driven charge density wave (CDW). This family indeed exhibits the full range of phenomena that can be associated with CDW instabilities, from the opening of large gaps on the best nested parts of the Fermi surface (up to 0.4 eV), to the existence of residual metallic pockets. ARPES is the best suited technique to characterize these features, thanks to its unique ability to resolve the electronic structure in k space. An additional advantage of RTe₃ is that the band structure can be very accurately described by a simple two-dimensional tight-binding (TB) model, which allows one to understand and easily reproduce many characteristics of the CDW. In this paper, we first establish the main features of the electronic structure by comparing our ARPES measurements with the linear muffin-tin orbital band calculations. We use this to define the validity and limits of the TB model. We then present a complete description of the CDW properties and of their strong evolution as a function of R. Using simple models, we are able to reproduce perfectly the evolution of gaps in k space, the evolution of the CDW wave vector with R, and the shape of the residual metallic pockets. Finally, we give an estimation of the CDW interaction parameters and find that the change in the electronic density of states n(E_F), due to lattice expansion when different R ions are inserted, has the correct order of magnitude to explain the evolution of the CDW properties.

  12. Dislocation density and graphitization of diamond crystals

    International Nuclear Information System (INIS)

    Pantea, C.; Voronin, G.A.; Zerda, T.W.; Gubicza, J.; Ungar, T.

    2002-01-01

    Two sets of diamond specimens compressed at 2 GPa at temperatures varying between 1060 K and 1760 K were prepared; one in which graphitization was promoted by the presence of water and another in which graphitization of diamond was practically absent. X-ray diffraction peak profiles of both sets were analyzed for the microstructure by using the modified Williamson-Hall method and by fitting the Fourier coefficients of the measured profiles by theoretical functions for crystallite size and lattice strain. The procedures determined mean size and size distribution of crystallites as well as the density and the character of the dislocations. The same experimental conditions resulted in different microstructures for the two sets of samples. They were explained in terms of hydrostatic conditions present in the graphitized samples

  13. Correlation and spectral density measurements by LDA

    International Nuclear Information System (INIS)

    Pfeifer, H.J.

    1986-01-01

    The present paper is intended to give a review of the state-of-the-art in correlation and spectral density measurements by means of laser Doppler anemometry. As will be shown in detail, the most important difference in performing this type of study is the fact that laser anemometry relies on the presence of particles in the flow serving as flow velocity indicators. This means that, except in heavily seeded flows, the instantaneous velocity can only be sampled at random instants. This calls for new algorithms to calculate estimates of both correlation functions and power spectra. Various possibilities to handle the problem of random sampling have been developed in the past. They are explained from the theoretical point of view and the experimental aspects are detailed as far as they are different from conventional applications of laser anemometry
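
    One standard way to estimate correlations from randomly sampled laser Doppler data is the slotting technique, in which products of sample pairs are accumulated into lag bins of finite width. The following is a sketch of that general approach only; the paper does not specify its algorithms, so the scheme, names, and parameters here are assumptions:

```python
import numpy as np

def slotted_autocorrelation(t, u, max_lag, slot_width):
    """Estimate the autocorrelation of a randomly sampled signal by
    'slotting' products u(t_i)*u(t_j) into lag bins (a standard trick for
    laser Doppler data; a sketch, not the paper's exact scheme)."""
    u = u - u.mean()                       # work with velocity fluctuations
    n_slots = int(max_lag / slot_width)
    sums = np.zeros(n_slots)
    counts = np.zeros(n_slots)
    for i in range(len(t)):
        lags = t[i:] - t[i]                # non-negative lags from sample i
        idx = (lags / slot_width).astype(int)
        ok = idx < n_slots                 # keep pairs within the lag range
        np.add.at(sums, idx[ok], u[i] * u[i:][ok])
        np.add.at(counts, idx[ok], 1)
    r = sums / np.maximum(counts, 1)       # average product per slot
    return r / r[0]                        # normalize so R(0) = 1

# random sample times of a slowly varying signal plus noise
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 100, 2000))
u = np.sin(0.5 * t) + 0.1 * rng.standard_normal(t.size)
r = slotted_autocorrelation(t, u, max_lag=10.0, slot_width=0.25)
print(r[0])   # 1.0 by construction
```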

  14. Density of American black bears in New Mexico

    Science.gov (United States)

    Gould, Matthew J.; Cain, James W.; Roemer, Gary W.; Gould, William R.; Liley, Stewart

    2018-01-01

    Considering advances in noninvasive genetic sampling and spatially explicit capture–recapture (SECR) models, the New Mexico Department of Game and Fish sought to update their density estimates for American black bear (Ursus americanus) populations in New Mexico, USA, to aid in setting sustainable harvest limits. We estimated black bear density in the Sangre de Cristo, Sandia, and Sacramento Mountains, New Mexico, 2012–2014. We collected hair samples from black bears using hair traps and bear rubs and used a sex marker and a suite of microsatellite loci to individually genotype hair samples. We then estimated density in a SECR framework using sex, elevation, land cover type, and time to model heterogeneity in detection probability and the spatial scale over which detection probability declines. We sampled the populations using 554 hair traps and 117 bear rubs and collected 4,083 hair samples. We identified 725 (367 male, 358 female) individuals. Our density estimates varied from 16.5 bears/100 km² (95% CI = 11.6–23.5) in the southern Sacramento Mountains to 25.7 bears/100 km² (95% CI = 13.2–50.1) in the Sandia Mountains. Overall, detection probability at the activity center (g0) was low across all study areas and ranged from 0.00001 to 0.02. The low values of g0 were primarily a result of half of all hair samples for which genotypes were attempted failing to produce a complete genotype. We speculate that the low success we had genotyping hair samples was due to exceedingly high levels of ultraviolet (UV) radiation that degraded the DNA in the hair. Despite sampling difficulties, we were able to produce density estimates with levels of precision comparable to those estimated for black bears elsewhere in the United States.

  15. Enhanced conformational sampling using enveloping distribution sampling.

    Science.gov (United States)

    Lin, Zhixiong; van Gunsteren, Wilfred F

    2013-10-14

    To lessen the problem of insufficient conformational sampling in biomolecular simulations is still a major challenge in computational biochemistry. In this article, an application of the method of enveloping distribution sampling (EDS) is proposed that addresses this challenge and its sampling efficiency is demonstrated in simulations of a hexa-β-peptide whose conformational equilibrium encompasses two different helical folds, i.e., a right-handed 2.7(10∕12)-helix and a left-handed 3(14)-helix, separated by a high energy barrier. Standard MD simulations of this peptide using the GROMOS 53A6 force field did not reach convergence of the free enthalpy difference between the two helices even after 500 ns of simulation time. The use of soft-core non-bonded interactions in the centre of the peptide did enhance the number of transitions between the helices, but at the same time led to neglect of relevant helical configurations. In the simulations of a two-state EDS reference Hamiltonian that envelops both the physical peptide and the soft-core peptide, sampling of the conformational space of the physical peptide ensures that physically relevant conformations can be visited, and sampling of the conformational space of the soft-core peptide helps to enhance the transitions between the two helices. The EDS simulations sampled many more transitions between the two helices and showed much faster convergence of the relative free enthalpy of the two helices compared with the standard MD simulations with only a slightly larger computational effort to determine optimized EDS parameters. Combined with various methods to smoothen the potential energy surface, the proposed EDS application will be a powerful technique to enhance the sampling efficiency in biomolecular simulations.

  16. Imaginary time density-density correlations for two-dimensional electron gases at high density

    Energy Technology Data Exchange (ETDEWEB)

    Motta, M.; Galli, D. E. [Dipartimento di Fisica, Università degli Studi di Milano, Via Celoria 16, 20133 Milano (Italy); Moroni, S. [IOM-CNR DEMOCRITOS National Simulation Center and SISSA, Via Bonomea 265, 34136 Trieste (Italy); Vitali, E. [Department of Physics, College of William and Mary, Williamsburg, Virginia 23187-8795 (United States)

    2015-10-28

    We evaluate imaginary time density-density correlation functions for two-dimensional homogeneous electron gases of up to 42 particles in the continuum using the phaseless auxiliary field quantum Monte Carlo method. We use periodic boundary conditions and up to 300 plane waves as basis set elements. We show that such methodology, once equipped with suitable numerical stabilization techniques necessary to deal with exponentials, products, and inversions of large matrices, gives access to the calculation of imaginary time correlation functions for medium-sized systems. We discuss the numerical stabilization techniques and the computational complexity of the methodology and we present the limitations related to the size of the systems on a quantitative basis. We perform the inverse Laplace transform of the obtained density-density correlation functions, assessing the ability of the phaseless auxiliary field quantum Monte Carlo method to evaluate dynamical properties of medium-sized homogeneous fermion systems.
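The inverse Laplace transform mentioned above is a notoriously ill-posed step. A minimal sketch of one standard stabilization, Tikhonov-regularized non-negative least squares on a discretized Laplace kernel, is shown below; it is a generic illustration of why such inversions are delicate, not the machinery used in the paper:

```python
import numpy as np
from scipy.optimize import nnls

def invert_laplace(tau, F, s_grid, alpha=1e-2):
    """Recover non-negative weights A(s) from F(tau) = sum_s A(s) exp(-s tau).

    Ill-posed inversions need both the positivity constraint (NNLS) and a
    Tikhonov smoothing term (strength alpha) to suppress noise amplification.
    """
    K = np.exp(-np.outer(tau, s_grid))                       # Laplace kernel matrix
    A_aug = np.vstack([K, np.sqrt(alpha) * np.eye(len(s_grid))])
    b_aug = np.concatenate([F, np.zeros(len(s_grid))])
    weights, _ = nnls(A_aug, b_aug)
    return weights
```

Feeding it a clean two-exponential decay recovers a spectrum whose total weight and mean decay rate match the input, though the regularization broadens the individual peaks.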

  17. Density determination of nail polishes and paint chips using magnetic levitation

    Science.gov (United States)

    Huang, Peggy P.

    Trace evidence is often small, easily overlooked, and difficult to analyze. This study describes a nondestructive method to separate and accurately determine the density of trace evidence samples, specifically nail polishes and paint chips, using magnetic levitation (MagLev). By determining the levitation height of each sample in the MagLev device, the density of the sample is back-extrapolated from a linear regression line of standard density beads. The results show that MagLev distinguishes among eight clear nail polishes, including samples from the same manufacturer; separates select colored nail polishes from the same manufacturer; can determine the density range of household paint chips; and shows limited levitation for unknown paint chips. MagLev provides a simple, affordable, and nondestructive means of determining density. The addition of co-solutes to the paramagnetic solution to expand the density range may result in greater discriminatory power and separation and lead to further applications of this technique.
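The back-extrapolation from a calibration line of standard density beads can be sketched as a one-variable linear regression. The bead heights and densities below are made-up illustrative values; the only assumption carried over from the abstract is that levitation height varies linearly with density:

```python
import numpy as np

def density_from_height(bead_heights, bead_densities, sample_height):
    """Back-extrapolate a sample's density from its levitation height.

    Fits a straight line (density vs. height) through calibration beads of
    known density, then evaluates it at the sample's measured height.
    """
    slope, intercept = np.polyfit(bead_heights, bead_densities, 1)
    return slope * sample_height + intercept
```

With four beads spaced along the device, a sample levitating between two beads gets a density interpolated between theirs.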

  18. Optimization of multiply acquired magnetic flux density B{sub z} using ICNE-Multiecho train in MREIT

    Energy Technology Data Exchange (ETDEWEB)

    Nam, Hyun Soo; Kwon, Oh In [Department of Mathematics, Konkuk University, Seoul (Korea, Republic of)

    2010-05-07

    The aim of magnetic resonance electrical impedance tomography (MREIT) is to visualize the electrical properties, conductivity or current density, of an object by injecting current. The prolonged data acquisition time of the injected current nonlinear encoding (ICNE) method has recently proven advantageous for measuring magnetic flux density data, B{sub z}, for MREIT in terms of the signal-to-noise ratio (SNR). However, the ICNE method results in undesirable side artifacts, such as blurring, chemical shift and phase artifacts, due to the long data acquisition under an inhomogeneous static field. In this paper, we apply the ICNE method to a gradient and spin echo (GRASE) multi-echo train pulse sequence in order to acquire multiple k-space lines during a single RF pulse period. We analyze the SNR of the measured multiple B{sub z} data using the proposed ICNE-Multiecho MR pulse sequence. By determining a weighting factor for the B{sub z} data in each of the echoes, an optimized inversion formula for the magnetic flux density data is proposed for the ICNE-Multiecho MR sequence. Using the ICNE-Multiecho method, the quality of the measured magnetic flux density is considerably increased by the injection of a long current through the echo train length and by optimization of the voxel-by-voxel noise level of the B{sub z} value. Agarose-gel phantom experiments demonstrated fewer artifacts and a better SNR with the ICNE-Multiecho method. Experimenting with the brain of an anesthetized dog, we collected valuable echoes by taking into account the noise level of each echo and determined B{sub z} data by computing optimized weighting factors for the multiply acquired magnetic flux density data.
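Weighting multiple echo acquisitions by their per-echo noise levels reduces, in the simplest case, to inverse-variance (minimum-variance) averaging. The sketch below illustrates that generic idea voxel-wise; it is not the paper's exact inversion formula, and the function name is our own:

```python
import numpy as np

def combine_echoes(bz_echoes, noise_sd):
    """Combine multiple Bz estimates voxel-wise with inverse-variance weights.

    bz_echoes : array of shape (n_echoes, ...), one Bz map per echo.
    noise_sd  : per-echo noise standard deviations.
    Weights proportional to 1/sigma^2 minimize the variance of the average.
    """
    w = 1.0 / np.asarray(noise_sd, dtype=float) ** 2
    w = w / w.sum()                                   # normalise weights
    shape = (-1,) + (1,) * (np.ndim(bz_echoes) - 1)   # broadcast over voxels
    return np.sum(np.asarray(bz_echoes) * w.reshape(shape), axis=0)
```

An unbiased input (each echo measuring the same underlying Bz plus noise) is returned unchanged on average, with noisier echoes contributing less.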

  19. Electron density profile in multilayer systems

    International Nuclear Information System (INIS)

    Toekesi, K.

    2004-01-01

    Complete text of publication follows. Electron energy loss spectroscopy (EELS) has been used extensively to study multilayer systems in which the thicknesses of the layers are in the nanometer range. These studies have received considerable attention because of their technological interest, for example in nanotechnology. On the most fundamental level, their importance derives from the basic physics involved. One key quantity of interest is the response of a many-body system to an external perturbation: how the solid-solid or solid-vacuum interface acts on and modifies the excitations in the solid and in the vicinity of the interfaces. In this work, as a starting point for such investigations, we calculated the electron density profile for multilayer systems. Our approach employs time-dependent density functional theory (TDDFT), that is, the solution of a time-dependent Schroedinger equation in which the potential and forces are determined self-consistently from the dynamics governed by the Schroedinger equation. We treat the problem in TDDFT at the level of the local-density approximation (LDA). Later, the comparison of experimentally obtained loss functions with the theory based on our TDDFT calculations can provide a deeper understanding of surface physics. We performed the calculations for half-infinite samples characterized by r{sub s} = 1.642 and r{sub s} = 1.997. We also performed the calculations for double-layer systems, with the substrate characterized by r{sub s} = 1.997 and the coverage by r{sub s} = 1.642. Fig. 1 shows the obtained electron density profile in the LDA approximation. Because of the sharp cutoff of electronic wave vectors at the Fermi surface, the densities in the interior exhibit slowly decaying Friedel oscillations. To highlight the Friedel oscillations we enlarged the electron density profile in Fig. 1a and Fig. 1b. The work was supported by the Hungarian Scientific Research Fund: OTKA No. T038016 and the grant 'Bolyai' from the Hungarian Academy of Sciences.

  20. Sampling in practice

    DEFF Research Database (Denmark)

    Esbensen, Kim Harry; Petersen, Lars

    2005-01-01

    A basic knowledge of the Theory of Sampling (TOS) and a set of only eight sampling unit operations is all the practical sampler needs to ensure representativeness of samples extracted from all kinds of lots: production batches, truckloads, barrels, sub-division in the laboratory, sampling in nature and in the field (environmental sampling, forestry, geology, biology), from raw materials or manufacturing processes, etc. We can here give only a brief introduction to the Fundamental Sampling Principle (FSP) and these eight Sampling Unit Operations (SUOs). Always respecting FSP and invoking only the necessary SUOs (dependent on the practical situation) is the only prerequisite needed for eliminating all sampling bias and simultaneously minimizing sampling variance, and it is in addition a sure guarantee for making the final analytical results trustworthy. No reliable conclusions can be made unless...

  1. The redshift number density evolution of Mg II absorption systems

    International Nuclear Information System (INIS)

    Chen Zhi-Fu

    2013-01-01

    We make use of the recent large sample of 17 042 Mg II absorption systems from Quider et al. to analyze the evolution of the redshift number density. Regardless of the strength of the absorption line, we find that the evolution of the redshift number density can be clearly distinguished into three different phases. In the intermediate redshift epoch (0.6 ≲ z ≲ 1.6), the evolution of the redshift number density is consistent with the non-evolution curve, however, the non-evolution curve over-predicts the values of the redshift number density in the early (z ≲ 0.6) and late (z ≳ 1.6) epochs. Based on the invariant cross-section of the absorber, the lack of evolution in the redshift number density compared to the non-evolution curve implies the galaxy number density does not evolve during the middle epoch. The flat evolution of the redshift number density tends to correspond to a shallow evolution in the galaxy merger rate during the late epoch, and the steep decrease of the redshift number density might be ascribed to the small mass of halos during the early epoch.
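For reference, the non-evolution curve against which the measured redshift number density is compared takes, for absorbers of constant comoving number density and cross-section in a flat ΛCDM cosmology, the standard form dN/dz ∝ (1+z)² / E(z) with E(z) = sqrt(Ωm(1+z)³ + ΩΛ). A sketch, with the overall normalization n₀σc/H₀ left as a single free parameter:

```python
import numpy as np

def dN_dz_no_evolution(z, norm=1.0, Om=0.3, OL=0.7):
    """Non-evolution redshift number density dN/dz for absorbers with
    constant comoving number density and cross-section (flat LCDM).

    norm stands for the combination n0 * sigma * c / H0.
    """
    E = np.sqrt(Om * (1.0 + z) ** 3 + OL)   # dimensionless Hubble rate
    return norm * (1.0 + z) ** 2 / E
```

Because E(z) grows roughly as (1+z)^{3/2} at high redshift, the curve flattens there, which is why a measured decline at z ≳ 1.6 signals genuine evolution.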

  2. Stochastic transport models for mixing in variable-density turbulence

    Science.gov (United States)

    Bakosi, J.; Ristorcelli, J. R.

    2011-11-01

    In variable-density (VD) turbulent mixing, where very-different-density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit-sum of mass fractions, bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.

  3. A comprehensive tool for measuring mammographic density changes over time.

    Science.gov (United States)

    Eriksson, Mikael; Li, Jingmei; Leifland, Karin; Czene, Kamila; Hall, Per

    2018-06-01

    Mammographic density is a marker of breast cancer risk and diagnostic accuracy. Density change over time is a strong proxy for response to endocrine treatment and potentially a stronger predictor of breast cancer incidence. We developed STRATUS to analyse digital and analogue images and enable automated measurements of density changes over time. Raw and processed images from the same mammogram were randomly sampled from 41,353 healthy women. Measurements from raw images (using the FDA-approved software iCAD) were used as templates for STRATUS to measure density on processed images through machine learning. A similar two-step design was used to train density measures in analogue images. Relative risks of breast cancer were estimated in three unique datasets. An alignment protocol was developed using images from 11,409 women to reduce non-biological variability in density change. The protocol was evaluated in 55,073 women having two regular mammography screens. Differences and variances in densities were compared before and after image alignment. The average relative risk of breast cancer in the three datasets was 1.6 [95% confidence interval (CI) 1.3-1.8] per standard deviation of percent mammographic density. The discrimination was AUC 0.62 (CI 0.60-0.64). The type of image did not significantly influence the risk associations. Alignment decreased the non-biological variability in density change and re-estimated the yearly overall percent density decrease from 1.5 to 0.9%. The risk association of the density measures was not influenced by mammogram type. The alignment protocol reduced the non-biological variability between images over time. STRATUS has the potential to become a useful tool for epidemiological studies and clinical follow-up.

  4. Influence of lifestyle factors on mammographic density in postmenopausal women.

    Directory of Open Access Journals (Sweden)

    Judith S Brand

    Full Text Available BACKGROUND: Mammographic density is a strong risk factor for breast cancer. Apart from hormone replacement therapy (HRT), little is known about lifestyle factors that influence breast density. METHODS: We examined the effect of smoking, alcohol and physical activity on mammographic density in a population-based sample of postmenopausal women without breast cancer. Lifestyle factors were assessed by a questionnaire, and percentage and area measures of mammographic density were measured using computer-assisted software. General linear models were used to assess the association between lifestyle factors and mammographic density, and effect modification by body mass index (BMI) and HRT was studied. RESULTS: Overall, alcohol intake was positively associated with percent mammographic density (P trend = 0.07). This association was modified by HRT use (P interaction = 0.06): increasing alcohol intake was associated with increasing percent density in current HRT users (P trend = 0.01) but not in non-current users (P trend = 0.82). A similar interaction between alcohol and HRT was found for the absolute dense area, with a positive association being present in current HRT users only (P interaction = 0.04). No differences in mammographic density were observed across categories of smoking and physical activity, neither overall nor in analyses stratified by BMI and HRT use. CONCLUSIONS: Increasing alcohol intake is associated with an increase in mammographic density, whereas smoking and physical activity do not seem to influence density. The observed interaction between alcohol and HRT may pose an opportunity for HRT users to lower their mammographic density and breast cancer risk.

  5. Sampling of ore

    International Nuclear Information System (INIS)

    Boehme, R.C.; Nicholas, B.L.

    1987-01-01

    This invention relates to a method of and an apparatus for ore sampling. The method includes the steps of periodically removing a sample of the output material of a sorting machine, weighing each sample so that each is of the same weight, measuring a characteristic such as the radioactivity, magnetivity or the like of each sample, subjecting at least an equal portion of each sample to chemical analysis to determine the mineral content of the sample, and comparing the characteristic measurement with the mineral content of the chemically analysed portion to determine the characteristic/mineral ratio of the sample. The apparatus includes an ore sample collector, a deflector for deflecting a sample of ore particles from the output of an ore sorter into the collector, and means for moving the deflector, at predetermined time intervals and for predetermined time periods, from a first position in which it is clear of the particle path from the sorter to a second position in which it is in the particle path, so as to deflect the sample particles into the collector. The apparatus conveniently includes an ore crusher for comminuting the sample particles, a sample hopper, means for weighing the hopper, a detector in the hopper for measuring a characteristic such as radioactivity, magnetivity or the like of the particles in the hopper, a discharge outlet from the hopper, and means for feeding the particles from the collector to the crusher and then to the hopper.

  6. Examining the occupancy–density relationship for a low-density carnivore

    Science.gov (United States)

    Linden, Daniel W.; Fuller, Angela K.; Royle, J. Andrew; Hare, Matthew P.

    2017-01-01

    The challenges associated with monitoring low-density carnivores across large landscapes have limited the ability to implement and evaluate conservation and management strategies for such species. Non-invasive sampling techniques and advanced statistical approaches have alleviated some of these challenges and can even allow for spatially explicit estimates of density, one of the most valuable wildlife monitoring tools. For some species, individual identification comes at no cost when unique attributes (e.g. pelage patterns) can be discerned with remote cameras, while other species require viable genetic material and expensive laboratory processing for individual assignment. Prohibitive costs may still force monitoring efforts to use species distribution or occupancy as a surrogate for density, which may not be appropriate under many conditions. Here, we used a large-scale monitoring study of fisher Pekania pennanti to evaluate the effectiveness of occupancy as an approximation to density, particularly for informing harvest management decisions. We combined remote cameras with baited hair snares during 2013–2015 to sample across a 70 096-km2 region of western New York, USA. We fit occupancy and Royle–Nichols models to species detection–non-detection data collected by cameras, and spatial capture–recapture (SCR) models to individual encounter data obtained by genotyped hair samples. Variation in the state variables within 15-km2 grid cells was modelled as a function of landscape attributes known to influence fisher distribution. We found a close relationship between grid cell estimates of fisher state variables from the models using detection–non-detection data and those from the SCR model, likely due to informative spatial covariates across a large landscape extent and a grid cell resolution that worked well with the movement ecology of the species. Fisher occupancy and density were both positively associated with the proportion of coniferous

  7. Genetic Sample Inventory

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database archives genetic tissue samples from marine mammals collected primarily from the U.S. east coast. The collection includes samples from field programs,...

  8. Superposition Enhanced Nested Sampling

    Directory of Open Access Journals (Sweden)

    Stefano Martiniani

    2014-08-01

    Full Text Available The theoretical analysis of many problems in physics, astronomy, and applied mathematics requires an efficient numerical exploration of multimodal parameter spaces that exhibit broken ergodicity. Monte Carlo methods are widely used to deal with these classes of problems, but such simulations suffer from a ubiquitous sampling problem: The probability of sampling a particular state is proportional to its entropic weight. Devising an algorithm capable of sampling efficiently the full phase space is a long-standing problem. Here, we report a new hybrid method for the exploration of multimodal parameter spaces exhibiting broken ergodicity. Superposition enhanced nested sampling combines the strengths of global optimization with the unbiased or athermal sampling of nested sampling, greatly enhancing its efficiency with no additional parameters. We report extensive tests of this new approach for atomic clusters that are known to have energy landscapes for which conventional sampling schemes suffer from broken ergodicity. We also introduce a novel parallelization algorithm for nested sampling.
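The core of plain nested sampling, the baseline that the superposition enhancement above builds on, can be sketched in a few lines. This is an illustrative toy only: the rejection step below for drawing from the likelihood-constrained prior is viable solely for low-dimensional problems, and real codes replace it with slice or ellipsoidal sampling:

```python
import numpy as np

def nested_sampling(loglike, prior_draw, n_live=50, n_iter=400, seed=0):
    """Minimal nested-sampling estimate of the log-evidence log Z."""
    rng = np.random.default_rng(seed)
    points = [prior_draw(rng) for _ in range(n_live)]
    logL = np.array([loglike(x) for x in points])
    logZ = -np.inf
    for i in range(n_iter):
        worst = int(np.argmin(logL))
        # prior-mass shell between X_i and X_{i+1}, with X_i = exp(-i/n_live)
        logw = -i / n_live + np.log(1.0 - np.exp(-1.0 / n_live))
        logZ = np.logaddexp(logZ, logL[worst] + logw)
        while True:  # rejection draw from the constrained prior (toy problems only)
            x = prior_draw(rng)
            lx = loglike(x)
            if lx > logL[worst]:
                points[worst] = x
                logL[worst] = lx
                break
    # spread the remaining live points over the leftover prior mass
    logZ = np.logaddexp(logZ, np.log(np.exp(logL).sum()) - n_iter / n_live - np.log(n_live))
    return logZ
```

For a unit Gaussian likelihood under a uniform prior on [-10, 10], the true evidence is 1/20, which the sketch recovers to within its statistical scatter.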

  9. Chorionic villus sampling

    Science.gov (United States)

    Chorionic villus sampling (CVS) is a test some pregnant women have ...

  10. Genetic Sample Inventory - NRDA

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database archives genetic tissue samples from marine mammals collected in the North-Central Gulf of Mexico from 2010-2015. The collection includes samples from...

  11. Current interruption by density depression

    International Nuclear Information System (INIS)

    Wagner, J.S.; Tajima, T.; Akasofu, S.I.

    1985-04-01

    Using a one-dimensional electrostatic particle code, we examine processes associated with current interruption in a collisionless plasma when a density depression is present along the current channel. Current interruption due to double layers was suggested by Alfven and Carlqvist (1967) as a cause of solar flares. At a local density depression, plasma instabilities caused by an electron current flow are accentuated, leading to current disruption. Our simulation study covers a wide enough range of parameters that, under appropriate conditions, both the Alfven and Carlqvist (1967) regime and the Smith and Priest (1972) regime take place. In the latter regime the density depression decays into a stationary structure (''ion-acoustic layer'') which spawns a series of ion-acoustic ''solitons'' and ion phase space holes travelling upstream. A large inductance of the current circuit tends to enhance the plasma instabilities.

  12. Sleep spindle density in narcolepsy

    DEFF Research Database (Denmark)

    Christensen, Julie Anja Engelhard; Nikolic, Miki; Hvidtfelt, Mathias

    2017-01-01

    BACKGROUND: Patients with narcolepsy type 1 (NT1) show alterations in sleep stage transitions, rapid-eye-movement (REM) and non-REM sleep due to the loss of hypocretinergic signaling. However, the sleep microstructure has not yet been evaluated in these patients. We aimed to evaluate whether the sleep spindle (SS) density is altered in patients with NT1 compared to controls and patients with narcolepsy type 2 (NT2). METHODS: All-night polysomnographic recordings from 28 NT1 patients, 19 NT2 patients, 20 controls (C) with narcolepsy-like symptoms but with normal cerebrospinal fluid hypocretin levels and multiple sleep latency tests, and 18 healthy controls (HC) were included. Unspecified, slow, and fast SS were automatically detected, and SS densities were defined as number per minute and were computed across sleep stages and sleep cycles. The between-cycle trends of SS densities in N2...

  13. High Energy Density Laboratory Astrophysics

    CERN Document Server

    Lebedev, Sergey V

    2007-01-01

    During the past decade, research teams around the world have developed astrophysics-relevant research utilizing high energy-density facilities such as intense lasers and z-pinches. Every two years, at the International conference on High Energy Density Laboratory Astrophysics, scientists interested in this emerging field discuss the progress in topics covering: - Stellar evolution, stellar envelopes, opacities, radiation transport - Planetary Interiors, high-pressure EOS, dense plasma atomic physics - Supernovae, gamma-ray bursts, exploding systems, strong shocks, turbulent mixing - Supernova remnants, shock processing, radiative shocks - Astrophysical jets, high-Mach-number flows, magnetized radiative jets, magnetic reconnection - Compact object accretion disks, x-ray photoionized plasmas - Ultrastrong fields, particle acceleration, collisionless shocks. These proceedings cover many of the invited and contributed papers presented at the 6th International Conference on High Energy Density Laboratory Astrophys...

  14. Energy vs. density on paths toward exact density functionals

    DEFF Research Database (Denmark)

    Kepp, Kasper Planeta

    2018-01-01

    Recently, the progression toward more exact density functional theory has been questioned, implying a need for more formal ways to systematically measure progress, i.e. a “path”. Here I use the Hohenberg-Kohn theorems and the definition of normality by Burke et al. to define a path toward exactness...

  15. Density dependence of the nuclear energy-density functional

    Science.gov (United States)

    Papakonstantinou, Panagiota; Park, Tae-Sun; Lim, Yeunhwan; Hyun, Chang Ho

    2018-01-01

    Background: The explicit density dependence in the coupling coefficients entering the nonrelativistic nuclear energy-density functional (EDF) is understood to encode effects of three-nucleon forces and dynamical correlations. The necessity for the density-dependent coupling coefficients to assume the form of a preferably small fractional power of the density ρ is empirical and the power is often chosen arbitrarily. Consequently, precision-oriented parametrizations risk overfitting in the regime of saturation, and extrapolations in dilute or dense matter may lose predictive power. Purpose: Beginning with the observation that the Fermi momentum k{sub F}, i.e., the cube root of the density, is a key variable in the description of Fermi systems, we first wish to examine if a power hierarchy in a k{sub F} expansion can be inferred from the properties of homogeneous matter in a domain of densities which is relevant for nuclear structure and neutron stars. For subsequent applications we want to determine a functional that is of good quality but not overtrained. Method: For the EDF, we systematically fit polynomial and other functions of ρ^(1/3) to existing microscopic, variational calculations of the energy of symmetric and pure neutron matter (pseudodata) and analyze the behavior of the fits. We select a form and a set of parameters which we found robust, and examine the parameters' naturalness and the quality of resulting extrapolations. Results: A statistical analysis confirms that low-order terms such as ρ^(1/3) and ρ^(2/3) are the most relevant ones in the nuclear EDF beyond lowest order. It also hints at a different power hierarchy for symmetric vs. pure neutron matter, supporting the need for more than one density-dependent term in nonrelativistic EDFs. The functional we propose easily accommodates known or adopted properties of nuclear matter near saturation. More importantly, upon extrapolation to dilute or asymmetric matter, it reproduces a range of existing microscopic
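The fitting procedure described, polynomials in ρ^(1/3) adjusted to the energy per particle of homogeneous matter, is a linear least-squares problem in the expansion coefficients. The sketch below uses synthetic pseudodata with made-up coefficients standing in for the microscopic calculations; only the form of the expansion follows the abstract:

```python
import numpy as np

def fit_kf_expansion(rho, e_per_a, powers=(2, 3, 4)):
    """Least-squares fit of E/A(rho) = sum_j c_j * rho**(j/3).

    j = 2 corresponds to the free Fermi-gas kinetic term ~ rho^(2/3);
    higher powers model interactions and correlations.
    """
    X = np.column_stack([rho ** (j / 3.0) for j in powers])
    coeffs, *_ = np.linalg.lstsq(X, e_per_a, rcond=None)
    return dict(zip(powers, coeffs))
```

Because the model is linear in the coefficients, noiseless pseudodata generated from the same powers is recovered essentially exactly, which makes the fit a convenient testbed for studying which powers the data actually constrain.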

  16. Test sample handling apparatus

    International Nuclear Information System (INIS)

    1981-01-01

    A test sample handling apparatus using automatic scintillation counting for gamma detection, for use in such fields as radioimmunoassay, is described. The apparatus automatically and continuously counts large numbers of samples rapidly and efficiently by the simultaneous counting of two samples. By means of sequential ordering of non-sequential counting data, it is possible to obtain precisely ordered data while utilizing sample carrier holders having a minimum length. (U.K.)

  17. Laboratory Sampling Guide

    Science.gov (United States)

    2012-05-11

    environment, and by ingestion of foodstuffs that have incorporated C-14 by photosynthesis. Like tritium, C-14 is a very low energy beta emitter and is... bacterial growth and to minimize development of solids in the sample. • Properly identify each sample container with name, SSN, and collection start and... sampling in the same cardboard carton. The sample may be kept cool or frozen during collection to control odor and bacterial growth. • Once

  18. High speed network sampling

    OpenAIRE

    Rindalsholt, Ole Arild

    2005-01-01

    Master's thesis in network and system administration. Classical sampling methods play an important role in the current practice of Internet measurement. With today's high speed networks, routers cannot manage to generate complete Netflow data for every packet; they have to perform restricted sampling. This thesis summarizes some of the most important sampling schemes and their applications before diving into an analysis of the effect of sampling Netflow records.

  19. High follicle density does not decrease sweat gland density in Huacaya alpacas.

    Science.gov (United States)

    Moore, K E; Maloney, S K; Blache, D

    2015-01-01

    When exposed to high ambient temperatures, mammals lose heat evaporatively either by sweating from glands in the skin or by respiratory panting. Like other camelids, alpacas are thought to evaporate more water by sweating than by panting, despite a thick fleece, unlike sheep, which mostly pant in response to heat stress. Alpacas were brought to Australia to develop an alternative fibre industry to sheep wool. In Australia, alpacas can be exposed to ambient temperatures higher than in their native South America. As a young industry there is a great deal of variation in the quality and quantity of the fleece produced in the national flock, and there is selection pressure towards animals with finer and denser fleeces. Because the fibre from secondary follicles is finer than that from primary follicles, selecting for finer fibres might alter the ratio of primary and secondary follicles. In turn, the selection might alter sweat gland density, because the sweat glands are associated with the primary follicles. Skin biopsy and fibre samples were obtained from the mid-section of 33 Huacaya alpacas and the skin sections were processed into horizontal sections at the sebaceous gland level. Total, primary, and secondary follicle densities and the number of sweat gland ducts were quantified. Fibre samples from each alpaca were further analysed for mean fibre diameter. The finer-fibred animals had a higher total follicle density, so alpacas with high follicle density should not be limited in their potential sweating ability. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Mars Sample Handling Functionality

    Science.gov (United States)

    Meyer, M. A.; Mattingly, R. L.

    2018-04-01

    The final leg of a Mars Sample Return campaign would be an entity that we have referred to as Mars Returned Sample Handling (MRSH.) This talk will address our current view of the functional requirements on MRSH, focused on the Sample Receiving Facility (SRF).

  1. IAEA Sampling Plan

    Energy Technology Data Exchange (ETDEWEB)

    Geist, William H. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-15

    The objectives for this presentation are to describe the method that the IAEA uses to determine a sampling plan for nuclear material measurements; describe the terms detection probability and significant quantity; list the three nuclear materials measurement types; describe the sampling method applied to an item facility; and describe multiple method sampling.
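The item-sampling logic the presentation describes is commonly formalized with a textbook approximation: to detect at least one defective item with probability DP when M of N items in a stratum are defective, the inspector needs roughly n = N(1 − (1 − DP)^(1/M)) samples. A minimal sketch of that approximation (the numbers below are illustrative, not actual IAEA parameters):

```python
import math

def sample_size(N, M, detection_prob):
    """Approximate number of items to sample from a population of N so that
    at least one of M assumed defective items is found with the requested
    probability (standard approximation to the hypergeometric plan)."""
    return math.ceil(N * (1.0 - (1.0 - detection_prob) ** (1.0 / M)))

# Illustrative numbers only: 100 items, 10 assumed defectives,
# 95% detection probability.
n = sample_size(N=100, M=10, detection_prob=0.95)
print(n)
```

Note how the plan shrinks as the assumed number of defectives grows: diverting a significant quantity across many items forces more of them to be falsified, so fewer samples suffice to catch one.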

  2. Developing Water Sampling Standards

    Science.gov (United States)

    Environmental Science and Technology, 1974

    1974-01-01

    Participants in the D-19 symposium on aquatic sampling and measurement for water pollution assessment were informed that determining the extent of waste water stream pollution is not a cut and dry procedure. Topics discussed include field sampling, representative sampling from storm sewers, suggested sampler features and application of improved…

  3. Simulating QCD at finite density

    CERN Document Server

    de Forcrand, Philippe

    2009-01-01

    In this review, I recall the nature and the inevitability of the "sign problem" which plagues attempts to simulate lattice QCD at finite baryon density. I present the main approaches used to circumvent the sign problem at small chemical potential. I sketch how one can predict analytically the severity of the sign problem, as well as the numerically accessible range of baryon densities. I review progress towards the determination of the pseudo-critical temperature T_c(mu), and towards the identification of a possible QCD critical point. Some promising advances with non-standard approaches are reviewed.

  4. Momentum density maps for molecules

    International Nuclear Information System (INIS)

    Cook, J.P.D.; Brion, C.E.

    1982-01-01

    Momentum-space and position-space molecular orbital density functions computed from LCAO-MO-SCF wavefunctions are used to rationalize the shapes of some momentum distributions measured by binary (e,2e) spectroscopy. A set of simple rules is presented which enable one to sketch the momentum density function and the momentum distribution from a knowledge of the position-space wavefunction and the properties and effects of the Fourier Transform and the spherical average. Selected molecular orbitals of H 2 , N 2 and CO 2 are used to illustrate this work

  5. Flashing coupled density wave oscillation

    International Nuclear Information System (INIS)

    Jiang Shengyao; Wu Xinxin; Zhang Youjie

    1997-07-01

The experiment was performed on the test loop (HRTL-5), which simulates the geometry and system design of the 5 MW reactor. The phenomena and mechanisms of different kinds of two-phase flow instabilities, namely geyser instability, flashing instability and flashing coupled density wave instability, are described. The flashing coupled density wave instability, in particular, has not previously been well studied; it is analyzed here using a one-dimensional non-thermo-equilibrium two-phase flow drift model computer code. Calculations are in good agreement with the experimental results. (5 refs., 5 figs., 1 tab.)

  6. High-density multicore fibers

    DEFF Research Database (Denmark)

    Takenaga, K.; Matsuo, S.; Saitoh, K.

    2016-01-01

High-density single-mode multicore fibers were designed and fabricated. A heterogeneous 30-core fiber realized a low crosstalk of −55 dB. A quasi-single-mode homogeneous 31-core fiber attained the highest core count as a single-mode multicore fiber.

  7. High density operation in pulsator

    International Nuclear Information System (INIS)

    Klueber, O.; Cannici, B.; Engelhardt, W.; Gernhardt, J.; Glock, E.; Karger, F.; Lisitano, G.; Mayer, H.M.; Meisel, D.; Morandi, P.

    1976-03-01

This report summarizes the results of experiments at high electron densities (>10¹⁴ cm⁻³) which have been achieved by pulsed gas inflow during the discharge. At these densities a regime is established which is characterized by β_p > 1, n_i ≈ n_e, T_i ≈ T_e and τ_E ∝ n_e. Thus the toroidal magnetic field contributes considerably to the plasma confinement and the ions constitute almost half of the plasma pressure. Furthermore, the confinement is appreciably improved and the plasma becomes impermeable to hot neutrals. (orig.)

  8. Generalized sampling in Julia

    DEFF Research Database (Denmark)

    Jacobsen, Christian Robert Dahl; Nielsen, Morten; Rasmussen, Morten Grud

    2017-01-01

Generalized sampling is a numerically stable framework for obtaining reconstructions of signals in different bases and frames from their samples. For example, one can use wavelet bases for reconstruction given frequency measurements. In this paper, we will introduce a carefully documented toolbox for performing generalized sampling in Julia. Julia is a new language for technical computing with focus on performance, which is ideally suited to handle the large size problems often encountered in generalized sampling. The toolbox provides specialized solutions for the setup of Fourier bases and wavelets. The performance of the toolbox is compared to existing implementations of generalized sampling in MATLAB.
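The toolbox itself is in Julia; as a language-neutral illustration of the underlying idea (reconstructing in one basis from samples taken with respect to another), the following numpy sketch recovers the Haar-wavelet coefficients of a discrete signal from oversampled Fourier measurements by least squares. The discrete setup and dimensions are assumptions for illustration, not the toolbox's API:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64        # signal length
m = 128       # number of (oversampled) Fourier samples

def haar_matrix(n):
    """Orthonormal discrete Haar basis, built recursively; rows are basis vectors."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                  # averaging rows
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])    # detail rows
    w = np.vstack([top, bot])
    return w / np.linalg.norm(w, axis=1, keepdims=True)

W = haar_matrix(n)

# Sampling system: m frequency measurements of a length-n signal.
F = np.exp(-2j * np.pi * np.outer(np.arange(m), np.arange(n)) / m) / np.sqrt(m)

# A signal with only 8 active Haar coefficients, measured in the Fourier domain.
signal = W.T @ np.concatenate([rng.normal(size=8), np.zeros(n - 8)])
b = F @ signal

# Generalized sampling: least-squares recovery of Haar coefficients.
A = F @ W.T                  # maps Haar coefficients to Fourier samples
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
recon = W.T @ coeffs.real
print(np.max(np.abs(recon - signal)))   # reconstruction error
```

Because the oversampled system has orthonormal columns here, the least-squares solve recovers the signal essentially exactly; the stability of this solve under oversampling is the point of the generalized sampling framework.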

  9. Density Changes in the Optimized CSSX Solvent System

    Energy Technology Data Exchange (ETDEWEB)

    Lee, D.D.

    2002-11-25

    Density increases in caustic-side solvent extraction (CSSX) solvent have been observed in separate experimental programs performed by different groups of researchers. Such changes indicate a change in chemical composition. Increased density adversely affects separation of solvent from denser aqueous solutions present in the CSSX process. Identification and control of factors affecting solvent density are essential for design and operation of the centrifugal contactors. The goals of this research were to identify the factors affecting solvent density (composition) and to develop correlations between easily measured solvent properties (density and viscosity) and the chemical composition of the solvent, which will permit real-time determination and adjustment of the solvent composition. In evaporation experiments, virgin solvent was subjected to evaporation under quiescent conditions at 25, 35, and 45 C with continuously flowing dry air passing over the surface of the solvent. Density and viscosity were measured periodically, and chemical analysis was performed on the solvent samples. Chemical interaction tests were completed to determine if any chemical reaction takes place over extended contact time that changes the composition and/or physical properties. Solvent and simulant, solvent and strip solution, and solvent and wash solution were contacted continuously in agitated flasks. They were periodically sampled and the density measured (viscosity was also measured on some samples) and then submitted to the Chemical Sciences Division of Oak Ridge National Laboratory for analysis by nuclear magnetic resonance (NMR) spectrometry and high-performance liquid chromatography (HPLC) using the virgin solvent as the baseline. Chemical interaction tests showed that solvent densities and viscosities did not change appreciably during contact with simulant, strip, or wash solution. No effects on density and viscosity and no chemical changes in the solvent were noted within

  10. Thermal Stress Effect on Density Changes of Hemp Hurds Composites

    Science.gov (United States)

    Schwarzova, Ivana; Cigasova, Julia; Stevulova, Nadezda

    2016-12-01

    The aim of this article is to study the behavior of prepared biocomposites based on hemp hurds as a filling agent in composite system. In addition to the filler and water, an alternative binder, called MgO-cement was used. For this objective were prepared three types of samples; samples based on untreated hemp hurds as a referential material and samples based on chemically (with NaOH solution) and physically (by ultrasonic procedure) treated hemp hurds. The thermal stress effect on bulk density changes of hemp hurds composites was monitored. Gradual increase in temperature led to composites density reduction of 30-40 %. This process is connected with mass loss of the adsorbed moisture and physically bound water and also with degradation of organic compounds present in hemp hurds aggregates such as pectin, hemicelluloses and cellulose. Therefore the changes in the chemical composition of treated hemp hurds in comparison to original sample and its thermal decomposition were also studied.

  11. Creating Great Neighborhoods: Density in Your Community

    Science.gov (United States)

    This report highlights nine community-led efforts to create vibrant neighborhoods through density, discusses the connections between smart growth and density, and introduces design principles to ensure that density becomes a community asset.

  12. The Lunar Sample Compendium

    Science.gov (United States)

    Meyer, Charles

    2009-01-01

    The Lunar Sample Compendium is a succinct summary of the data obtained from 40 years of study of Apollo and Luna samples of the Moon. Basic petrographic, chemical and age information is compiled, sample-by-sample, in the form of an advanced catalog in order to provide a basic description of each sample. The LSC can be found online using Google. The initial allocation of lunar samples was done sparingly, because it was realized that scientific techniques would improve over the years and new questions would be formulated. The LSC is important because it enables scientists to select samples within the context of the work that has already been done and facilitates better review of proposed allocations. It also provides back up material for public displays, captures information found only in abstracts, grey literature and curatorial databases and serves as a ready access to the now-vast scientific literature.

  13. Image Sampling with Quasicrystals

    Directory of Open Access Journals (Sweden)

    Mark Grundland

    2009-07-01

    Full Text Available We investigate the use of quasicrystals in image sampling. Quasicrystals produce space-filling, non-periodic point sets that are uniformly discrete and relatively dense, thereby ensuring the sample sites are evenly spread out throughout the sampled image. Their self-similar structure can be attractive for creating sampling patterns endowed with a decorative symmetry. We present a brief general overview of the algebraic theory of cut-and-project quasicrystals based on the geometry of the golden ratio. To assess the practical utility of quasicrystal sampling, we evaluate the visual effects of a variety of non-adaptive image sampling strategies on photorealistic image reconstruction and non-photorealistic image rendering used in multiresolution image representations. For computer visualization of point sets used in image sampling, we introduce a mosaic rendering technique.
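The cut-and-project construction based on the golden ratio can be illustrated in one dimension with the Fibonacci chain: the points floor(nφ) form a non-periodic set that is nevertheless uniformly discrete and relatively dense, with gaps of only two sizes whose ratio of occurrence approaches φ. A small sketch (1D only; image sampling would use a 2D analogue of this point set):

```python
import math

phi = (1 + math.sqrt(5)) / 2   # golden ratio

# Fibonacci-chain quasicrystal points: floor(n*phi) for n = 1, 2, ...
points = [math.floor(n * phi) for n in range(1, 2001)]
gaps = [b - a for a, b in zip(points, points[1:])]

print(set(gaps))                         # only two gap sizes: {1, 2}
print(gaps.count(2) / gaps.count(1))     # approaches the golden ratio
```

The two gap sizes demonstrate uniform discreteness (no two sites too close) and relative density (no large holes), exactly the properties the abstract cites as desirable for evenly spread sample sites.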

  14. Urine sample collection protocols for bioassay samples

    Energy Technology Data Exchange (ETDEWEB)

    MacLellan, J.A.; McFadden, K.M.

    1992-11-01

In vitro radiobioassay analyses are used to measure the amount of radioactive material excreted by personnel exposed to the potential intake of radioactive material. The analytical results are then used with various metabolic models to estimate the amount of radioactive material in the subject's body and the original intake of radioactive material. Proper application of these metabolic models requires knowledge of the excretion period. It is normal practice to design the bioassay program based on a 24-hour excretion sample. The Hanford bioassay program simulates a total 24-hour urine excretion sample with urine collection periods lasting from one-half hour before retiring to one-half hour after rising on two consecutive days. Urine passed during the specified periods is collected in three 1-L bottles. Because the daily excretion volume given in Publication 23 of the International Commission on Radiological Protection (ICRP 1975, p. 354) for Reference Man is 1.4 L, it was proposed to use only two 1-L bottles as a cost-saving measure. This raised the broader question of what should be the design capacity of a 24-hour urine sample kit.

  16. Rapid density-measurement system with vibrating-tube densimeter

    International Nuclear Information System (INIS)

    Kayukawa, Yohei; Hasumoto, Masaya; Watanabe, Koichi

    2003-01-01

Concerning an increasing demand for environmentally friendly refrigerants including hydrocarbons, thermodynamic properties of such new refrigerants, especially densities, are essential information for refrigeration engineering. A rapid density-measurement system with a vibrating-tube densimeter was developed in the present study with an aim to supply large numbers of high-quality PVT property data in a short period. The present system needs only a few minutes to obtain a single datum, and requires less than 20 cm³ of sample fluid. PVT properties in the entire fluid phase, vapor pressures, and saturated-liquid densities for pure fluids are available. Liquid densities, bubble-point pressures and saturated-liquid densities for mixtures can be obtained. The measurement range is from 240 to 380 K for temperature and up to 7 MPa for pressure. By employing a new calibration function, density can be precisely obtained even at lower densities. The densimeter is calibrated with pure water and iso-octane, which is one of the density-standard fluids, and the measurement uncertainty was evaluated to be 0.1 kg m⁻³ or 0.024%, whichever is greater, in density; 0.26 kPa or 0.022%, whichever is greater, in pressure; and 3 mK in temperature. The performance of the present measurement system was examined by measuring thermodynamic properties for refrigerant R134a. The experimental results were compared with an available equation of state and confirmed to agree with it within ±0.05% for liquid densities and within ±0.5% in pressure for the gas phase.
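Vibrating-tube densimeters conventionally relate density to the tube's oscillation period through ρ = Aτ² + B, with the constants fixed by two reference fluids; the calibration with water and iso-octane described above fits this scheme. A hedged sketch of the two-point form (the period values are made-up illustrative numbers, and the paper's actual calibration function is stated to be more refined at low densities):

```python
def calibrate(tau1, rho1, tau2, rho2):
    """Solve rho = A*tau**2 + B from two reference (period, density) pairs."""
    A = (rho1 - rho2) / (tau1**2 - tau2**2)
    B = rho1 - A * tau1**2
    return A, B

def density(tau, A, B):
    return A * tau**2 + B

# Approximate reference densities near 25 degC (kg/m^3); the oscillation
# periods (ms) are purely illustrative, not instrument data.
rho_water, tau_water = 997.0, 2.650
rho_isooctane, tau_isooctane = 687.8, 2.510

A, B = calibrate(tau_water, rho_water, tau_isooctane, rho_isooctane)
print(density(2.600, A, B))   # density inferred for an unknown sample fluid
```

The same two-point idea generalizes to the multi-parameter calibration functions actually used on instruments, which add temperature- and pressure-dependent terms.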

  17. Level densities in nuclear physics

    International Nuclear Information System (INIS)

    Beckerman, M.

    1978-01-01

In the independent-particle model nucleons move independently in a central potential. There is a well-defined set of single-particle orbitals, each nucleon occupies one of these orbitals subject to Fermi statistics, and the total energy of the nucleus is equal to the sum of the energies of the individual nucleons. The basic question is the range of validity of this Fermi gas description and, in particular, the roles of the residual interactions and collective modes. A detailed examination of experimental level densities in light-mass systems is given to provide some insight into these questions. Level densities over the first 10 MeV or so in excitation energy, as deduced from neutron and proton resonance data and from spectra of low-lying bound levels, are discussed. To exhibit some of the salient features of these data, comparisons to independent-particle (shell) model calculations are presented. Shell structure is predicted to manifest itself through discontinuities in the single-particle level density at the Fermi energy and through variations in the occupancy of the valence orbitals. These predictions are examined through combinatorial calculations performed with the Grover [Phys. Rev. 157, 832 (1967); 185, 1303 (1969)] odometer method. Before the discussion of the experimental results, statistical mechanical level densities for spherical nuclei are reviewed. After consideration of deformed nuclei, the conclusions resulting from this work are drawn. 7 figures, 3 tables
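The Fermi gas description discussed here leads to the standard Bethe level-density formula ρ(E) ≈ (√π / (12 a^{1/4} E^{5/4})) · exp(2√(aE)), where a is the level-density parameter. A small sketch of how steeply the level density grows with excitation energy (the parameter value is an illustrative assumption, not fitted to any nucleus):

```python
import math

def fermi_gas_level_density(E, a):
    """Bethe Fermi-gas level density (levels per MeV) at excitation
    energy E (MeV) with level-density parameter a (MeV^-1)."""
    prefactor = math.sqrt(math.pi) / (12.0 * a**0.25 * E**1.25)
    return prefactor * math.exp(2.0 * math.sqrt(a * E))

# Illustrative: a light nucleus with a ~ 3 MeV^-1.
for E in (2.0, 5.0, 10.0):
    print(E, fermi_gas_level_density(E, a=3.0))
```

The exponential factor dominates the power-law prefactor, which is why level counts climb by orders of magnitude over the first 10 MeV of excitation energy discussed in the abstract.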

  18. Solar corona electron density distribution

    International Nuclear Information System (INIS)

    Esposito, P.B.; Edenhofer, P.; Lueneburg, E.

    1980-01-01

Three and one-half months of single-frequency (f₀ = 2.2 × 10⁹ Hz) time delay data (earth-to-spacecraft and return signal travel time) were acquired from the Helios 2 spacecraft around the time of its solar occultation (May 16, 1976). Following the determination of the spacecraft trajectory, the excess time delay due to the integrated effect of free electrons along the signal's ray path could be separated and modeled. An average solar corona, equatorial, electron density profile during solar minimum was deduced from time delay measurements acquired within 5-60 solar radii (R_S) of the sun. As a point of reference, at 10 R_S from the sun we find an average electron density of 4500 el cm⁻³. However, there appears to be an asymmetry in the electron density as the ray path moved from the west (pre-occultation) to east (post-occultation) solar limb. This may be related to the fact that during entry into occultation the heliographic latitude of the ray path (at closest approach to the sun) was about 6°, whereas during exit it became −7°. The Helios electron density model is compared with similar models deduced from a variety of different experimental techniques. Within 5-20 R_S of the sun the models separate according to solar minimum or maximum conditions; however, anomalies are evident
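The quoted reference value (4500 el cm⁻³ at 10 R_S) can anchor a toy radial profile. Real coronal models combine several power laws with steeper terms near the sun; the single inverse-square term below is purely an illustrative assumption:

```python
def electron_density(r_solar_radii, n_ref=4500.0, r_ref=10.0, power=2.0):
    """Toy coronal electron density (el/cm^3) with a single power-law
    falloff, anchored to the abstract's ~4500 el/cm^3 at 10 solar radii.
    The inverse-square exponent is an illustrative assumption."""
    return n_ref * (r_ref / r_solar_radii) ** power

# Densities over the 5-60 solar-radii range covered by the measurements.
for r in (5.0, 10.0, 20.0, 60.0):
    print(r, electron_density(r))
```

A single exponent cannot reproduce the east/west asymmetry or the near-sun steepening the abstract mentions; fitted models typically sum an r⁻² solar-wind term with r⁻⁶-type terms dominating close to the sun.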

  19. High density matter at RHIC

    Indian Academy of Sciences (India)

    QCD predicts a phase transition between hadronic matter and a quark-gluon plasma at high energy density. The relativistic heavy ion collider (RHIC) at Brookhaven National Laboratory is a new facility dedicated to the experimental study of matter under extreme conditions. Already the first round of experimental results at ...

  20. Density-dependent selection revisited

    Indian Academy of Sciences (India)

    Unknown

    is a more useful way of looking at density-dependent selection, and then go on ... these models was that the condition for maintenance of ... In a way, their formulation may be viewed as ... different than competition among species, and typical.

  1. Modern charge-density analysis

    CERN Document Server

    Gatti, Carlo

    2012-01-01

    Focusing on developments from the past 10-15 years, this volume presents an objective overview of the research in charge density analysis. The most promising methodologies are included, in addition to powerful interpretative tools and a survey of important areas of research.

  2. Optimization of Barron density estimates

    Czech Academy of Sciences Publication Activity Database

    Vajda, Igor; van der Meulen, E. C.

    2001-01-01

    Roč. 47, č. 5 (2001), s. 1867-1883 ISSN 0018-9448 R&D Projects: GA ČR GA102/99/1137 Grant - others:Copernicus(XE) 579 Institutional research plan: AV0Z1075907 Keywords : Barron estimator * chi-square criterion * density estimation Subject RIV: BD - Theory of Information Impact factor: 2.077, year: 2001

  3. High current density ion source

    International Nuclear Information System (INIS)

    King, H.J.

    1977-01-01

    A high-current-density ion source with high total current is achieved by individually directing the beamlets from an electron bombardment ion source through screen and accelerator electrodes. The openings in these screen and accelerator electrodes are oriented and positioned to direct the individual beamlets substantially toward a focus point. 3 figures, 1 table

  4. The density limit in Tokamaks

    International Nuclear Information System (INIS)

    Alladio, F.

    1985-01-01

    A short summary of the present status of experimental observations, theoretical ideas and understanding of the density limit in tokamaks is presented. It is the result of the discussion that was held on this topic at the 4th European Tokamak Workshop in Copenhagen (December 4th to 6th, 1985). 610 refs

  5. Density estimation from local structure

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2009-11-01

    Full Text Available Mixture Model (GMM) density function of the data and the log-likelihood scores are compared to the scores of a GMM trained with the expectation maximization (EM) algorithm on 5 real-world classification datasets (from the UCI collection). They show...

  6. Dual model for parton densities

    International Nuclear Information System (INIS)

    El Hassouni, A.; Napoly, O.

    1981-01-01

We derive power-counting rules for quark densities near x=1 and x=0 from parton interpretations of one-particle inclusive dual amplitudes. Using these rules, we give explicit expressions for quark distributions (including charm) inside hadrons. We can then show the compatibility between fragmentation and recombination descriptions of low-p⊥ processes

  7. Micro Coriolis Gas Density Sensor

    NARCIS (Netherlands)

    Sparreboom, Wouter; Ratering, Gijs; Kruijswijk, Wim; van der Wouden, E.J.; Groenesteijn, Jarno; Lötters, Joost Conrad

    2017-01-01

    In this paper we report on gas density measurements using a micro Coriolis sensor. The technology to fabricate the sensor is based on surface channel technology. The measurement tube is freely suspended and has a wall thickness of only 1 micron. This renders the sensor extremely sensitive to changes

  8. Method of measuring surface density

    International Nuclear Information System (INIS)

    Gregor, J.

    1982-01-01

A method is described of measuring surface density or thickness, preferably of coating layers, using radiation emitted by a suitable radionuclide, e.g., ²⁴¹Am. The radiation impinges on the measured material, e.g., a copper foil, and in dependence on its surface density or thickness part of the flux of impinging radiation is reflected and part penetrates through the material. The radiation which has penetrated through the material excites, in a replaceable adjustable backing, characteristic radiation of an energy close to that of the impinging radiation (within ±30 keV). Part of the flux of the characteristic radiation spreads back to the detector and penetrates through the material, in which, in dependence on the surface density or thickness of the coating layer, it is partly absorbed. The flux of the penetrated characteristic radiation impinging on the face of the detector is a function of surface density or thickness. Only that part of the energy spectrum is evaluated which corresponds to the energy of the characteristic radiation. (B.S.)
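The measurement principle builds on exponential attenuation: intensity falls as I = I₀·exp(−μ_m·ρ_s), where μ_m is the mass attenuation coefficient and ρ_s the surface density, so ρ_s can be recovered from an intensity ratio. A sketch of that inversion (the coefficient and count rates are made-up numbers, not data for ²⁴¹Am on copper, and the patented method adds the backscattered characteristic-radiation step on top of this relation):

```python
import math

def surface_density(I, I0, mu_mass):
    """Invert Beer-Lambert attenuation I = I0*exp(-mu_mass*rho_s) for the
    surface density rho_s (g/cm^2), given the mass attenuation
    coefficient mu_mass (cm^2/g)."""
    return math.log(I0 / I) / mu_mass

mu = 0.20                          # cm^2/g, assumed coefficient
rho_s_true = 1.5                   # g/cm^2, "true" coating surface density
I0 = 10000.0                       # unattenuated count rate
I = I0 * math.exp(-mu * rho_s_true)   # simulated attenuated count rate

print(surface_density(I, I0, mu))  # recovers the surface density
```

Because the exponent involves the product μ_m·ρ_s, surface density and thickness are interchangeable readouts once the material density is known, which is why the abstract treats them together.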

  9. Information Density and Syntactic Repetition

    Science.gov (United States)

    Temperley, David; Gildea, Daniel

    2015-01-01

    In noun phrase (NP) coordinate constructions (e.g., NP and NP), there is a strong tendency for the syntactic structure of the second conjunct to match that of the first; the second conjunct in such constructions is therefore low in syntactic information. The theory of uniform information density predicts that low-information syntactic…

  10. Gamma-ray self-attenuation corrections in environmental samples

    International Nuclear Information System (INIS)

    Robu, E.; Giovani, C.

    2009-01-01

Gamma-spectrometry is a commonly used technique in environmental radioactivity monitoring. Frequently the bulk samples to be measured differ in composition and density from the reference sample used for efficiency calibration. Correction factors should be applied in these cases for activity measurement. Linear attenuation coefficients and self-absorption correction factors have been evaluated for soil, grass and liquid sources with different densities and geometries. (authors)
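For a homogeneous slab sample counted from one side, a common first-order self-attenuation correction is x/(1 − e^(−x)) with x = μt (linear attenuation coefficient times thickness); it tends to 1 for thin or weakly attenuating samples. A sketch under that slab assumption (the coefficients are illustrative, not the paper's evaluated values):

```python
import math

def self_attenuation_factor(mu_lin, thickness):
    """Ratio of true to apparent activity for a homogeneous slab sample,
    x/(1 - exp(-x)) with x = mu_lin * thickness. First-order slab
    geometry only; exact geometry factors differ."""
    x = mu_lin * thickness
    if x == 0.0:
        return 1.0
    return x / (1.0 - math.exp(-x))

# A denser matrix attenuates its own gammas more, so it needs a larger
# correction than a light matrix of the same thickness (values assumed).
print(self_attenuation_factor(0.12, 4.0))   # soil-like sample
print(self_attenuation_factor(0.03, 4.0))   # light grass-like matrix
```

In practice the correction is applied energy by energy, since μ depends strongly on photon energy as well as on the matrix composition and density the abstract highlights.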

  11. The topology of the Coulomb potential density. A comparison with the electron density, the virial energy density, and the Ehrenfest force density.

    Science.gov (United States)

    Ferreira, Lizé-Mari; Eaby, Alan; Dillen, Jan

    2017-12-15

The topology of the Coulomb potential density has been studied within the context of the theory of Atoms in Molecules and has been compared with the topologies of the electron density, the virial energy density and the Ehrenfest force density. The Coulomb potential density is found to be mainly structurally homeomorphic with the electron density. The Coulomb potential density reproduces the non-nuclear attractor which is observed experimentally in the molecular graph of the electron density of a Mg dimer, thus, for the first time ever providing an alternative and energetic foundation for the existence of this critical point. © 2017 Wiley Periodicals, Inc.

  12. Urinary density measurement and analysis methods in neonatal unit care

    Directory of Open Access Journals (Sweden)

    Maria Vera Lúcia Moreira Leitão Cardoso

    2013-09-01

Full Text Available The objective was to assess urine collection methods through cotton in contact with genitalia and urinary collector to measure urinary density in newborns. This is a quantitative intervention study carried out in a neonatal unit of Fortaleza-CE, Brazil, in 2010. The sample consisted of 61 newborns randomly chosen to compose the study group. Most neonates were full term (31/50.8%) and male (33/54%). Data on urinary density measurement through the methods of cotton and collector presented statistically significant differences (p<0.05). The analysis of interquartile ranges between subgroups resulted in statistical differences between urinary collector/reagent strip (1005) and cotton/reagent strip (1010); however, there was no difference between urinary collector/refractometer (1008) and cotton/refractometer. Therefore, further research should be conducted with larger samples using the methods investigated in this study and, whenever possible, comparing urine density values to laboratory tests.

  13. Merger of waste in kaolin panels medium density

    International Nuclear Information System (INIS)

    Bezerra, A.F.C.; Santana, L.N.L.; Neves, G.A.

    2011-01-01

Medium-density panels are molded under pressure and temperature and have physical and mechanical properties similar to those of solid wood. Their composition involves fibers of eucalyptus and pine, but other residues such as kaolin waste can be incorporated. The objective was to manufacture medium-density panels incorporating kaolin waste and compare their physical, chemical and mechanical properties with those of commercial panels. The residue was subjected to the following characterization tests: X-ray diffraction, chemical analysis, differential thermal analysis, thermal gravimetric analysis and size analysis. The samples were prepared by pressing and evaluated for flexural strength, perpendicular tensile strength, water absorption, swelling in thickness, density and moisture content. According to the results, we conclude that samples containing the residue had lower levels of swelling, tensile and flexural strength, and higher levels of absorption. (author)

  14. Ambit determination method in estimating rice plant population density

    Directory of Open Access Journals (Sweden)

    Abu Bakar, B.,

    2017-11-01

Full Text Available Rice plant population density is a key indicator in determining the crop setting and fertilizer application rate. It is therefore essential that the population density is monitored to ensure that a correct crop management decision is taken. The conventional method of determining plant population is by manually counting the total number of rice plant tillers in a 25 cm x 25 cm square frame. Sampling is done by randomly choosing several different locations within a plot to perform tiller counting. This sampling method is time consuming, labour intensive and costly. An alternative fast estimating method was developed to overcome this issue. The method relies on measuring the outer circumference or ambit of the contained rice plants in a 25 cm x 25 cm square frame to determine the number of tillers within that square frame. Data samples of rice variety MR219 were collected from rice plots in the Muda granary area, Sungai Limau Dalam, Kedah. The data were taken at 50 days and 70 days after seeding (DAS). A total of 100 data samples were collected for each sampling day. A good correlation was obtained for both 50 DAS and 70 DAS. The model was then verified by taking 100 samples with the latching strap for 50 DAS and 70 DAS. As a result, this technique can be used as a fast, economical and practical alternative to manual tiller counting. The technique can potentially be used in the development of an electronic sensing system to estimate paddy plant population density.
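The ambit method hinges on a fitted relation between measured clump circumference and tiller count. A sketch of such a calibration with ordinary least squares on synthetic data (the linear coefficients and noise model below are assumptions for illustration, not the paper's fitted values):

```python
import random

random.seed(1)

# Synthetic calibration data: tiller counts and the resulting outer
# circumference ("ambit", cm) of the clump inside the 25 cm x 25 cm frame.
tillers = [random.randint(10, 60) for _ in range(100)]
ambit = [8.0 + 0.9 * t + random.gauss(0.0, 2.0) for t in tillers]

# Ordinary least squares fit of tillers ~ a*ambit + b, done by hand.
n = len(ambit)
mean_x = sum(ambit) / n
mean_y = sum(tillers) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(ambit, tillers)) / \
    sum((x - mean_x) ** 2 for x in ambit)
b = mean_y - a * mean_x

def estimate_tillers(circumference_cm):
    """Field estimate of tiller count from one circumference measurement."""
    return a * circumference_cm + b

print(a, b)
print(estimate_tillers(40.0))
```

In the field, the verification step the abstract describes (100 latching-strap samples per sampling day) is what establishes the correlation such a regression needs.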

  15. Automated Proposition Density Analysis for Discourse in Aphasia

    Science.gov (United States)

    Fromm, Davida; Greenhouse, Joel; Hou, Kaiyue; Russell, G. Austin; Cai, Xizhen; Forbes, Margaret; Holland, Audrey; MacWhinney, Brian

    2016-01-01

    Purpose: This study evaluates how proposition density can differentiate between persons with aphasia (PWA) and individuals in a control group, as well as among subtypes of aphasia, on the basis of procedural discourse and personal narratives collected from large samples of participants. Method: Participants were 195 PWA and 168 individuals in a…

  16. Cumulative sum quality control for calibrated breast density measurements

    International Nuclear Information System (INIS)

    Heine, John J.; Cao Ke; Beam, Craig

    2009-01-01

    Purpose: Breast density is a significant breast cancer risk factor. Although various methods are used to estimate breast density, there is no standard measurement for this important factor. The authors are developing a breast density standardization method for use in full field digital mammography (FFDM). The approach calibrates for interpatient acquisition technique differences. The calibration produces a normalized breast density pixel value scale. The method relies on first generating a baseline (BL) calibration dataset, which required extensive phantom imaging. Standardizing prospective mammograms with calibration data generated in the past could introduce unanticipated error in the standardized output if the calibration dataset is no longer valid. Methods: Sample points from the BL calibration dataset were imaged approximately biweekly over an extended timeframe. These serial samples were used to evaluate the BL dataset reproducibility and quantify the serial calibration accuracy. The cumulative sum (Cusum) quality control method was used to evaluate the serial sampling. Results: There is considerable drift in the serial sample points from the BL calibration dataset that is x-ray beam dependent. Systematic deviation from the BL dataset caused significant calibration errors. This system drift was not captured with routine system quality control measures. Cusum analysis indicated that the drift is a sign of system wear and eventual x-ray tube failure. Conclusions: The BL calibration dataset must be monitored and periodically updated, when necessary, to account for sustained system variations to maintain the calibration accuracy.
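The tabular (one-sided) Cusum scheme used for this kind of monitoring accumulates deviations beyond a slack value k and signals when the running sum crosses a decision limit h, which is what makes it sensitive to exactly the slow, sustained drift described above, where single-point quality checks fail. A minimal sketch with made-up readings (the target, k, and h are illustrative, not the study's calibration parameters):

```python
def cusum(samples, target, k, h):
    """One-sided tabular Cusum: accumulate positive deviations beyond the
    slack k and return the index of the first sample whose cumulative
    sum exceeds the decision limit h (None if never exceeded)."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - target) - k)
        if s > h:
            return i
    return None

# Stable baseline readings, then a slow upward drift of the kind the
# abstract attributes to x-ray tube wear (values are illustrative).
readings = [100.0, 100.2, 99.8, 100.1, 99.9] + \
           [100.0 + 0.3 * i for i in range(1, 11)]
print(cusum(readings, target=100.0, k=0.25, h=1.0))  # index where drift is flagged
```

Each individual drifting reading stays close to target, so a simple threshold on single samples would miss the change; the Cusum flags it because the small excesses accumulate.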

  17. Cumulative sum quality control for calibrated breast density measurements

    Energy Technology Data Exchange (ETDEWEB)

    Heine, John J.; Cao Ke; Beam, Craig [Cancer Prevention and Control Division, Moffitt Cancer Center, 12902 Magnolia Drive, Tampa, Florida 33612 (United States); Division of Epidemiology and Biostatistics, School of Public Health, University of Illinois at Chicago, 1603 W. Taylor St., Chicago, Illinois 60612 (United States)

    2009-12-15

    Purpose: Breast density is a significant breast cancer risk factor. Although various methods are used to estimate breast density, there is no standard measurement for this important factor. The authors are developing a breast density standardization method for use in full field digital mammography (FFDM). The approach calibrates for interpatient acquisition technique differences. The calibration produces a normalized breast density pixel value scale. The method relies on first generating a baseline (BL) calibration dataset, which required extensive phantom imaging. Standardizing prospective mammograms with calibration data generated in the past could introduce unanticipated error in the standardized output if the calibration dataset is no longer valid. Methods: Sample points from the BL calibration dataset were imaged approximately biweekly over an extended timeframe. These serial samples were used to evaluate the BL dataset reproducibility and quantify the serial calibration accuracy. The cumulative sum (Cusum) quality control method was used to evaluate the serial sampling. Results: There is considerable drift in the serial sample points from the BL calibration dataset that is x-ray beam dependent. Systematic deviation from the BL dataset caused significant calibration errors. This system drift was not captured with routine system quality control measures. Cusum analysis indicated that the drift is a sign of system wear and eventual x-ray tube failure. Conclusions: The BL calibration dataset must be monitored and periodically updated, when necessary, to account for sustained system variations to maintain the calibration accuracy.

  18. An expression relating breaking stress and density of trabecular bone

    DEFF Research Database (Denmark)

    Rajapakse, C.S.; Thomsen, J.S.; Ortiz, J.S.E.

    2004-01-01

    Bone mineral density (BMD) is the principal diagnostic tool used in clinical settings to diagnose and monitor osteoporosis. Experimental studies on ex vivo bone samples from multiple skeletal locations have been used to propose that their breaking stress bears a power-law relationship to volumetric...

  19. Estimation of larval density of Liriomyza sativae Blanchard (Diptera ...

    African Journals Online (AJOL)

    This study was conducted to develop sequential sampling plans to estimate larval density of Liriomyza sativae Blanchard (Diptera: Agromyzidae) at three precision levels in cucumber greenhouse. The within- greenhouse spatial patterns of larvae were aggregated. The slopes and intercepts of both Iwao's patchiness ...

  20. Estimating forest canopy bulk density using six indirect methods

    Science.gov (United States)

    Robert E. Keane; Elizabeth D. Reinhardt; Joe Scott; Kathy Gray; James Reardon

    2005-01-01

    Canopy bulk density (CBD) is an important crown characteristic needed to predict crown fire spread, yet it is difficult to measure in the field. Presented here is a comprehensive research effort to evaluate six indirect sampling techniques for estimating CBD. As reference data, detailed crown fuel biomass measurements were taken on each tree within fixed-area plots...

  1. Urinary and Anthropometrical Indices of Bone Density in Healthy ...

    African Journals Online (AJOL)

Measurements on the x-ray of the 2nd metacarpal of the right hand and a 2 h fasting urine sample were used in a cross sectional study to assess urinary indices of bone density (bone mass, percentage cortical area, PCA) in 94 healthy Nigerian adults aged between 19 and 72 years. Body mass index (BMI) was also estimated.

  2. Chemical bonding and charge density distribution analysis of ...

    Indian Academy of Sciences (India)

    tice and the electron density distributions in the unit cell of the samples were investigated. Structural ... titanium and oxygen ions and predominant ionic nature between barium and oxygen ions. Average grain sizes ... trations (at <1%) is responsible for the formation of .... indicated by dots and calculated powder patterns are.

  3. Genetic control of wood density and bark thickness, and their ...

    African Journals Online (AJOL)

    Tree diameter under and over bark at breast height (dbh), wood density and bark thickness were assessed on samples from control-pollinated families of Eucalyptus grandis, E. urophylla, E. grandis × E. urophylla and E. urophylla × E. grandis. The material was planted in field trials in the coastal Zululand region of South ...

  4. Sample Results From Tank 48H Samples HTF-48-14-158, -159, -169, and -170

    Energy Technology Data Exchange (ETDEWEB)

    Peters, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hang, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-04-28

Savannah River National Laboratory (SRNL) analyzed samples from Tank 48H in support of determining the cause for the unusually high dose rates at the sampling points for this tank. A set of two samples was taken from the quiescent tank, and two additional samples were taken after the contents of the tank were mixed. The results of the analyses of all the samples show that the contents of the tank have changed very little since the analysis of the previous sample in 2012. The solids are almost exclusively composed of tetraphenylborate (TPB) salts, and there is no indication of acceleration in the TPB decomposition. The filtrate composition shows a moderate increase in salt concentration and density, which is attributable to the addition of NaOH for the purposes of corrosion control. An older modeling simulation of the TPB degradation was updated, and the supernate results from a 2012 sample were run in the model. This result was compared to the recent 2014 sample results reported in this document. The model indicates there is no change in the TPB degradation from 2012 to 2014. SRNL measured the buoyancy of the TPB solids in Tank 48H simulant solutions. It was determined that a solution of density 1.279 g/mL (~6.5M sodium) was capable of indefinitely suspending the TPB solids evenly throughout the solution. A solution of density 1.296 g/mL (~7M sodium) caused a significant fraction of the solids to float on the solution surface. As the experiments could not include the effect of additional buoyancy elements such as benzene or hydrogen generation, the buoyancy measurements provide an upper bound estimate of the density in Tank 48H required to float the solids.

  5. Sampling informative/complex a priori probability distributions using Gibbs sampling assisted by sequential simulation

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Mosegaard, Klaus; Cordua, Knud Skou

    2010-01-01

    Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis algorithm can be used to sample the solutions to non-linear inverse problems. In principle these methods allow incorporation of arbitrarily complex a priori information, but current methods allow only relatively simple...... this algorithm with the Metropolis algorithm to obtain an efficient method for sampling posterior probability densities for nonlinear inverse problems....
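
A minimal random-walk Metropolis sampler illustrates the algorithm family this record discusses; the Gaussian proposal, step size, and target density below are generic illustrative choices, not the sequential-simulation machinery of the paper.

```python
import math
import random

# Random-walk Metropolis: propose a local move, accept with probability
# min(1, density(proposal) / density(current)). Works with an unnormalized
# log-density, which is all a posterior sampler ever needs.

def metropolis(log_density, x0, n_steps, step=0.5, seed=1):
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        # Accept/reject in log space to avoid underflow.
        if math.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal
        chain.append(x)
    return chain

# Sample a standard normal "posterior" starting far from the mode,
# then estimate the mean after discarding burn-in.
chain = metropolis(lambda x: -0.5 * x * x, x0=3.0, n_steps=20000)
mean = sum(chain[5000:]) / len(chain[5000:])
```

Replacing the Gaussian proposal with draws conditioned on complex prior information is exactly the kind of extension the abstract describes.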

  6. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria

  7. Statistical sampling strategies

    International Nuclear Information System (INIS)

    Andres, T.H.

    1987-01-01

    Systems assessment codes use mathematical models to simulate natural and engineered systems. Probabilistic systems assessment codes carry out multiple simulations to reveal the uncertainty in values of output variables due to uncertainty in the values of the model parameters. In this paper, methods are described for sampling sets of parameter values to be used in a probabilistic systems assessment code. Three Monte Carlo parameter selection methods are discussed: simple random sampling, Latin hypercube sampling, and sampling using two-level orthogonal arrays. Three post-selection transformations are also described: truncation, importance transformation, and discretization. Advantages and disadvantages of each method are summarized
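
Of the three selection methods listed, Latin hypercube sampling is easily sketched: stratify each parameter's range into n equal-probability intervals, draw one point per interval, and pair the strata at random across dimensions. The helper below is a generic illustration on the unit cube, not code from the assessment codes described.

```python
import random

# Latin hypercube sampling on [0, 1)^d: each dimension gets exactly one
# point in each of the n equal-width strata, and the strata are shuffled
# independently per dimension so rows pair strata at random.

def latin_hypercube(n_samples, n_dims, seed=42):
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i in range(n_samples):
            # One uniform draw inside stratum strata[i], width 1/n_samples.
            samples[i][d] = (strata[i] + rng.random()) / n_samples
    return samples

pts = latin_hypercube(10, 3)
```

Mapping each coordinate through an inverse CDF then yields stratified draws from any marginal distribution, which is how such samples feed a probabilistic assessment code.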

  8. Statistical distribution sampling

    Science.gov (United States)

    Johnson, E. S.

    1975-01-01

    Determining the distribution of statistics by sampling was investigated. Characteristic functions, the quadratic regression problem, and the differential equations for the characteristic functions are analyzed.

  9. An x-ray backlit Talbot-Lau deflectometer for high-energy-density electron density diagnostics

    Science.gov (United States)

    Valdivia, M. P.; Stutman, D.; Stoeckl, C.; Theobald, W.; Mileham, C.; Begishev, I. A.; Bromage, J.; Regan, S. P.

    2016-02-01

X-ray phase-contrast techniques can measure electron density gradients in high-energy-density plasmas through refraction induced phase shifts. An 8 keV Talbot-Lau interferometer consisting of free standing ultrathin gratings was deployed at an ultra-short, high-intensity laser system using K-shell emission from a 1-30 J, 8 ps laser pulse focused on thin Cu foil targets. Grating survival was demonstrated for 30 J, 8 ps laser pulses. The first x-ray deflectometry images obtained under laser backlighting showed up to 25% image contrast and thus enabled detection of electron areal density gradients with a maximum value of 8.1 ± 0.5 × 10²³ cm⁻³ in a low-Z millimeter sized sample. An electron density profile was obtained from refraction measurements with an error of x-ray source-size, similar to conventional radiography.

  10. Putting Priors in Mixture Density Mercer Kernels

    Science.gov (United States)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using predefined kernels. These data adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.

  11. Double trouble at high density:

    DEFF Research Database (Denmark)

    Gergs, André; Palmqvist, Annemette; Preuss, Thomas G

    2014-01-01

    Population size is often regulated by negative feedback between population density and individual fitness. At high population densities, animals run into double trouble: they might concurrently suffer from overexploitation of resources and also from negative interference among individuals...... regardless of resource availability, referred to as crowding. Animals are able to adapt to resource shortages by exhibiting a repertoire of life history and physiological plasticities. In addition to resource-related plasticity, crowding might lead to reduced fitness, with consequences for individual life...... history. We explored how different mechanisms behind resource-related plasticity and crowding-related fitness act independently or together, using the water flea Daphnia magna as a case study. For testing hypotheses related to mechanisms of plasticity and crowding stress across different biological levels...

  12. Generalized Expression for Polarization Density

    International Nuclear Information System (INIS)

    Wang, Lu; Hahm, T.S.

    2009-01-01

    A general polarization density which consists of classical and neoclassical parts is systematically derived via modern gyrokinetics and bounce-kinetics by employing a phase-space Lagrangian Lie-transform perturbation method. The origins of polarization density are further elucidated. Extending the work on neoclassical polarization for long wavelength compared to ion banana width [M. N. Rosenbluth and F. L. Hinton, Phys. Rev. Lett. 80, 724 (1998)], an analytical formula for the generalized neoclassical polarization including both finite-banana-width (FBW) and finite-Larmor-radius (FLR) effects for arbitrary radial wavelength in comparison to banana width and gyroradius is derived. In additional to the contribution from trapped particles, the contribution of passing particles to the neoclassical polarization is also explicitly calculated. Our analytic expression agrees very well with the previous numerical results for a wide range of radial wavelength.

  13. Asymptotic density and effective negligibility

    Science.gov (United States)

    Astor, Eric P.

    In this thesis, we join the study of asymptotic computability, a project attempting to capture the idea that an algorithm might work correctly in all but a vanishing fraction of cases. In collaboration with Hirschfeldt and Jockusch, broadening the original investigation of Jockusch and Schupp, we introduce dense computation, the weakest notion of asymptotic computability (requiring only that the correct answer is produced on a set of density 1), and effective dense computation, where every computation halts with either the correct answer or (on a set of density 0) a symbol denoting uncertainty. A few results make more precise the relationship between these notions and work already done with Jockusch and Schupp's original definitions of coarse and generic computability. For all four types of asymptotic computation, including generic computation, we demonstrate that non-trivial upper cones have measure 0, building on recent work of Hirschfeldt, Jockusch, Kuyper, and Schupp in which they establish this for coarse computation. Their result transfers to yield a minimal pair for relative coarse computation; we generalize their method and extract a similar result for relative dense computation (and thus for its corresponding reducibility). However, all of these notions of near-computation treat a set as negligible iff it has asymptotic density 0. Noting that this definition is not computably invariant, this produces some failures of intuition and a break with standard expectations in computability theory. For instance, as shown by Hamkins and Miasnikov, the halting problem is (in some formulations) effectively densely computable, even in polynomial time---yet this result appears fragile, as indicated by Rybalov. In independent work, we respond to this by strengthening the approach of Jockusch and Schupp to avoid such phenomena; specifically, we introduce a new notion of intrinsic asymptotic density, invariant under computable permutation, with rich relations to both

  14. High density energy storage capacitor

    International Nuclear Information System (INIS)

    Whitham, K.; Howland, M.M.; Hutzler, J.R.

    1979-01-01

The Nova laser system will use 130 MJ of capacitive energy storage and have a peak power capability of 250,000 MW. This capacitor bank is a significant portion of the laser cost and requires a large portion of the physical facilities. In order to reduce the cost and volume required by the bank, the Laser Fusion Program funded contracts with three energy storage capacitor producers: Aerovox, G.E., and Maxwell Laboratories, to develop higher energy density, lower cost energy storage capacitors. This paper describes the designs which resulted from the Aerovox development contract, and specifically addresses the design and initial life testing of a 12.5 kJ, 22 kV capacitor with a density of 4.2 J/in³ and a projected cost in the range of 5 cents per joule

  15. Urban heat island effect on cicada densities in metropolitan Seoul

    Directory of Open Access Journals (Sweden)

    Hoa Q. Nguyen

    2018-01-01

Background Urban heat island (UHI) effect, the ubiquitous consequence of urbanization, is considered to play a major role in population expansion of numerous insects. Cryptotympana atrata and Hyalessa fuscata are the most abundant cicada species in the Korean Peninsula, where their population densities are higher in urban than in rural areas. We predicted a positive relationship between the UHI intensities and population densities of these two cicada species in metropolitan Seoul. Methods To test this prediction, enumeration surveys of cicada exuviae densities were conducted in 36 localities located within and in the vicinity of metropolitan Seoul. Samples were collected in two consecutive periods from July to August 2015. The abundance of each species was estimated by two resource-weighted densities, one based on the total geographic area, and the other on the total number of trees. Multiple linear regression analyses were performed to identify factors critical for the prevalence of cicada species in the urban habitat. Results C. atrata and H. fuscata were major constituents of cicada species composition collected across all localities. Minimum temperature and sampling period were significant factors contributing to the variation in densities of both species, whereas other environmental factors related to urbanization were not significant. More cicada exuviae were collected in the second rather than in the first samplings, which matched the phenological pattern of cicadas in metropolitan Seoul. Cicada population densities increased measurably with the increase in temperature. Age of residential complex also exhibited a significantly positive correlation to H. fuscata densities, but not to C. atrata densities. Discussion Effects of temperature on cicada densities have been discerned from other environmental factors, as cicada densities increased measurably in tandem with elevated temperature. Several mechanisms may contribute to the abundance of

  16. Density operators in quantum mechanics

    International Nuclear Information System (INIS)

    Burzynski, A.

    1979-01-01

A brief discussion and resume of density operator formalism as it occurs in modern physics (in quantum optics, quantum statistical physics, quantum theory of radiation) is presented. In particular we emphasize the projection operator method, the application of spectral theorems, and the superoperator formalism in operator Hilbert spaces (Hilbert-Schmidt type). The paper includes an appendix on direct sums and direct products of spaces and operators, and on problems of reducibility for operator classes using projection operators. (author)

  17. On the kinetic energy density

    International Nuclear Information System (INIS)

    Lombard, R.J.; Mas, D.; Moszkowski, S.A.

    1991-01-01

We discuss two expressions for the kinetic energy density which differ by an integration by parts. Using the Wigner transform we show that the arithmetic mean of these two terms is closely analogous to the classical value. Harmonic oscillator wavefunctions are used to illustrate the radial dependence of these expressions. We study the differences they induce through effective mass terms when performing self-consistent calculations. (author)

  18. Neutronic density perturbation by probes

    International Nuclear Information System (INIS)

    Vigon, M. A.; Diez, L.

    1956-01-01

The introduction of neutron-absorbing materials into diffusing media produces local disturbances of the neutronic density. The disturbance depends especially on the nature and size of the absorbent. Approximate equations relating the disturbance to the distance from the absorbent in the case of thin disks have been derived. The experimental verification has been carried out in two special cases. In both cases the experimental results are in agreement with the values calculated from these equations. (Author)

  19. Ion density in ionizing beams

    International Nuclear Information System (INIS)

    Knuyt, G.K.; Callebaut, D.K.

    1978-01-01

The equations defining the ion density in a non-quasineutral plasma (chasma) are derived for a number of particular cases from the general results obtained in paper 1. Explicit calculations are made for a fairly general class of boundaries: all tri-axial ellipsoids, including cylinders with elliptic cross-section and the plane parallel case. The results are very simple. When the ion production and the beam intensity are constant, the steady state ion space charge is also constant in space; it varies over less than 10% for the various geometries, it may exceed the beam density largely for comparatively high pressures (usually still less than about 10⁻³ Torr), it is tabulated for a number of interesting cases, and moreover it can be calculated precisely and easily by some simple formulae for which approximations are also elaborated. The total potential is U = −ax² − by² − cz², with a, b and c constants which can be calculated immediately from the space charge density and the geometry; the largest coefficient varies at most over a factor of four for the various geometries; it is tabulated for a number of interesting cases. (author)
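
The quadratic potential quoted above is consistent with a spatially constant space charge; as a quick check (standard electrostatics, not reproduced from the paper), applying Poisson's equation to the stated potential gives

```latex
\nabla^{2}U \;=\; \nabla^{2}\!\left(-ax^{2}-by^{2}-cz^{2}\right)
\;=\; -2\,(a+b+c) \;=\; -\frac{\rho}{\varepsilon_{0}}
\quad\Longrightarrow\quad
\rho \;=\; 2\,\varepsilon_{0}\,(a+b+c),
```

so constant coefficients a, b, c correspond exactly to a constant ion space charge, and the coefficients follow immediately from the charge density and the geometry, as the abstract states.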

  20. Density functional theory of nuclei

    International Nuclear Information System (INIS)

    Terasaki, Jun

    2008-01-01

The density functional theory of nuclei has come to draw the attention of scientists in the field of nuclear structure because the theory is expected to provide reliable numerical data over a wide range of the nuclear chart. This article is organized to present an overview of the theory to people engaged in the theory of other fields as well as to those working in nuclear physics experiments. At first, the outline of the density functional theory widely used for electronic systems (condensed matter, atoms, and molecules) was described, starting from the Kohn-Sham equation derived on the variational principle. Then the theory used in the field of nuclear physics was presented: the Hartree-Fock and Hartree-Fock-Bogolyubov approximations using the Skyrme interaction were explained. Comparisons of calculated and experimental binding energies and ground state mean square charge radii of some magic number nuclei were shown. The similarities and dissimilarities between the two streams were summarized. Finally, the activities of the international project Universal Nuclear Energy Density Functional (UNEDF), started recently and led by US scientists, were reported. This project is programmed for five years. One of the applications of the project is the calculation of the neutron capture cross sections of nuclei on the r-process, which are absolutely necessary for nucleosynthesis research. (S. Funahashi)

  1. Big Data, Small Sample.

    Science.gov (United States)

    Gerlovina, Inna; van der Laan, Mark J; Hubbard, Alan

    2017-05-20

Multiple comparisons and small sample size, common characteristics of many types of "Big Data" including those that are produced by genomic studies, present specific challenges that affect reliability of inference. Use of multiple testing procedures necessitates calculation of very small tail probabilities of a test statistic distribution. Results based on large deviation theory provide a formal condition that is necessary to guarantee error rate control given practical sample sizes, linking the number of tests and the sample size; this condition, however, is rarely satisfied. Using methods that are based on Edgeworth expansions (relying especially on the work of Peter Hall), we explore the impact of departures of sampling distributions from typical assumptions on actual error rates. Our investigation illustrates how far the actual error rates can be from the declared nominal levels, suggesting potentially widespread problems with error rate control, specifically excessive false positives. This is an important factor that contributes to the "reproducibility crisis". We also review some other commonly used methods (such as permutation and methods based on finite sampling inequalities) in their application to multiple testing/small sample data. We point out that Edgeworth expansions, providing higher order approximations to the sampling distribution, offer a promising direction for data analysis that could improve reliability of studies relying on large numbers of comparisons with modest sample sizes.
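
As a point of comparison with the asymptotic approximations discussed above, a two-sample permutation test estimates the tail probability directly by resampling rather than relying on a distributional assumption; the function and data below are a generic illustrative sketch, not the authors' procedure.

```python
import random

# Two-sample permutation test on the absolute difference of means:
# shuffle the pooled data, re-split into groups of the original sizes,
# and count how often the permuted statistic is at least as extreme
# as the observed one.

def permutation_pvalue(x, y, n_perm=10000, seed=7):
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        xs, ys = pooled[:len(x)], pooled[len(x):]
        diff = abs(sum(xs) / len(xs) - sum(ys) / len(ys))
        if diff >= observed:
            hits += 1
    # Add-one correction keeps the estimate away from an impossible zero.
    return (hits + 1) / (n_perm + 1)

# Illustrative data: no shift vs a large shift between groups.
p_null = permutation_pvalue([1.1, 0.9, 1.0, 1.2], [1.0, 1.1, 0.8, 1.3])
p_shift = permutation_pvalue([1.1, 0.9, 1.0, 1.2], [5.0, 5.2, 4.9, 5.1])
```

With only four observations per group the permutation distribution is coarse, which is exactly the small-sample regime where nominal asymptotic levels can mislead.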

  2. Sampling system and method

    Science.gov (United States)

    Decker, David L.; Lyles, Brad F.; Purcell, Richard G.; Hershey, Ronald Lee

    2013-04-16

    The present disclosure provides an apparatus and method for coupling conduit segments together. A first pump obtains a sample and transmits it through a first conduit to a reservoir accessible by a second pump. The second pump further conducts the sample from the reservoir through a second conduit.

  3. Simple street tree sampling

    Science.gov (United States)

    David J. Nowak; Jeffrey T. Walton; James Baldwin; Jerry. Bond

    2015-01-01

    Information on street trees is critical for management of this important resource. Sampling of street tree populations provides an efficient means to obtain street tree population information. Long-term repeat measures of street tree samples supply additional information on street tree changes and can be used to report damages from catastrophic events. Analyses of...

  4. Sampling or gambling

    Energy Technology Data Exchange (ETDEWEB)

    Gy, P.M.

    1981-12-01

Sampling can be compared to no other technique. A mechanical sampler must above all be selected according to its aptitude for suppressing or reducing all components of the sampling error. Sampling is said to be correct when it gives all elements making up the batch of matter submitted to sampling a uniform probability of being selected. A sampler must be correctly designed, built, installed, operated and maintained. When the conditions of sampling correctness are not strictly respected, the sampling error can no longer be controlled and can, unknown to the user, be unacceptably large: the sample is no longer representative. The implementation of an incorrect sampler is a form of gambling, and this paper intends to show that at this game the user is nearly always the loser in the long run. The users' and the manufacturers' interests may diverge, and the standards which should safeguard the users' interests very often fail to do so by tolerating or even recommending incorrect techniques such as the implementation of too-narrow cutters traveling too fast through the stream to be sampled.

  5. Sample pretretment in microsystems

    DEFF Research Database (Denmark)

    Perch-Nielsen, Ivan R.

    2003-01-01

When a sample, e.g. from a patient, is processed using conventional methods, the sample must be transported to the laboratory where it is analyzed, after which the results are sent back. By integrating the separate steps of the analysis in a micro total analysis system (μTAS), results can be obtained faster and better, preferably with all the processes from sample to signal moved to the bedside of the patient. Of course there is still much to learn and study in the process of miniaturization. DNA analysis is one process subject to integration. There are roughly three steps in a DNA analysis: Sample preparation → DNA amplification → DNA analysis. The overall goal of the project is integration of as many as possible of these steps. This thesis covers mainly pretreatment in a microchip. Some methods for sample pretreatment have been tested. Most conventional is fluorescence activated cell sorting...

  6. Biological sample collector

    Science.gov (United States)

    Murphy, Gloria A [French Camp, CA

    2010-09-07

A biological sample collector is adapted to collect several biological samples in a plurality of filter wells. A biological sample collector may comprise a manifold plate for mounting a filter plate thereon, the filter plate having a plurality of filter wells therein; a hollow slider for engaging and positioning a tube that slides therethrough; and a slide case within which the hollow slider travels to allow the tube to be aligned with a selected filter well of the plurality of filter wells, wherein when the tube is aligned with the selected filter well, the tube is pushed through the hollow slider and into the selected filter well to sealingly engage the selected filter well and to allow the tube to deposit a biological sample onto a filter in the bottom of the selected filter well. The biological sample collector may be portable.

  7. Dual chiral density wave in quark matter

    International Nuclear Information System (INIS)

    Tatsumi, Toshitaka

    2002-01-01

    We prove that quark matter is unstable for forming a dual chiral density wave above a critical density, within the Nambu-Jona-Lasinio model. Presence of a dual chiral density wave leads to a uniform ferromagnetism in quark matter. A similarity with the spin density wave theory in electron gas and the pion condensation theory is also pointed out. (author)

  8. Density functionals in the laboratory frame

    International Nuclear Information System (INIS)

    Giraud, B. G.

    2008-01-01

    We compare several definitions of the density of a self-bound system, such as a nucleus, in relation with its center-of-mass zero-point motion. A trivial deconvolution relates the internal density to the density defined in the laboratory frame. This result is useful for the practical definition of density functionals

  9. On VC-density over indiscernible sequences

    OpenAIRE

    Guingona, Vincent; Hill, Cameron Donnay

    2011-01-01

    In this paper, we study VC-density over indiscernible sequences (denoted VC_ind-density). We answer an open question in [1], showing that VC_ind-density is always integer valued. We also show that VC_ind-density and dp-rank coincide in the natural way.

  10. Research of mechanism of density lock

    International Nuclear Information System (INIS)

    Wang Shengfei; Yan Changqi; Gu Haifeng

    2010-01-01

The mechanism of the density lock was analyzed according to its working conditions. The results showed that stratification with no disturbance satisfies the working conditions of the density lock; fluids on either side of the stratification do not mix when connected to each other; and the density lock can be opened automatically by controlling the pressure balance at the stratification. When a disturbance exists, the stratification may be broken and mass may be transferred by convection. The stability of the stratification can be enhanced by placing a special structure in the density lock to ensure its normal operation. Finally, the minimum heat loss in the density lock was also analyzed. (authors)

  11. Density of photonic states in cholesteric liquid crystals

    Science.gov (United States)

    Dolganov, P. V.

    2015-04-01

The density of photonic states ρ(ω), group velocity v_g and phase velocity v_ph of light, and the dispersion relation between wave vector k and frequency ω(k) were determined in a cholesteric photonic crystal. A highly sensitive method (measurement of the rotation of the plane of polarization of light) was used to determine ρ(ω) in samples of different quality. In high-quality samples a drastic increase in ρ(ω) near the boundaries of the stop band and oscillations related to Pendellösung beatings are observed. In low-quality samples photonic properties are strongly modified. The maximal value of ρ(ω) is substantially smaller, and the density of photonic states increases near the selective reflection band without oscillations in ρ(ω). Peculiarities of ρ(ω), v_g, and ω(k) are discussed. Comparison of the experimental results with theory was performed.
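For reference, the three quantities measured here are linked by textbook dispersion relations (standard definitions, not specific to this paper):

```latex
\rho(\omega) = \frac{\mathrm{d}k}{\mathrm{d}\omega} = \frac{1}{v_g},
\qquad
v_g = \frac{\mathrm{d}\omega}{\mathrm{d}k},
\qquad
v_{\mathrm{ph}} = \frac{\omega}{k}
```

Since ρ(ω) is the reciprocal of the group velocity, a suppressed maximum in ρ(ω) near the band edge corresponds directly to a higher group velocity there.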

  12. Triglycerides, total cholesterol, high density lipoprotein cholesterol and low density lipoprotein cholesterol in rats exposed to premium motor spirit fumes.

    Science.gov (United States)

    Aberare, Ogbevire L; Okuonghae, Patrick; Mukoro, Nathaniel; Dirisu, John O; Osazuwa, Favour; Odigie, Elvis; Omoregie, Richard

    2011-06-01

Deliberate and regular exposure to premium motor spirit fumes is common and could be a risk factor for liver disease in those who are occupationally exposed. A possible association between premium motor spirit fumes and plasma levels of triglycerides, total cholesterol, high density lipoprotein cholesterol and low density lipoprotein cholesterol using a rodent model could provide new insights into the pathology of diseases where cellular dysfunction is an established risk factor. The aim of this study was to evaluate the possible effect of premium motor spirit fumes on lipids and lipoproteins in workers occupationally exposed to premium motor spirit fumes, using a rodent model. Twenty-five Wistar albino rats (of both sexes) were used for this study between the 4th of August and 7th of September, 2010. The rats were divided into five groups of five rats each. Group 1 rats were not exposed to premium motor spirit fumes (control group), group 2 rats were exposed for 1 hour daily, group 3 for 3 hours daily, group 4 for 5 hours daily and group 5 for 7 hours daily. The experiment lasted for a period of 4 weeks. Blood samples obtained from all the groups after 4 weeks of exposure were used for the estimation of plasma levels of triglycerides, total cholesterol, high density lipoprotein cholesterol and low density lipoprotein cholesterol. Results showed a significant increase in mean plasma total cholesterol and low density lipoprotein levels (P<0.05). The mean triglyceride level and total body weight were significantly lower (P<0.05) in the exposed groups when compared with the unexposed. The plasma level of high density lipoprotein, the ratio of low density lipoprotein to high density lipoprotein and the ratio of total cholesterol to high density lipoprotein did not differ significantly in exposed subjects when compared with the control group. These results show that frequent exposure to petrol fumes may be highly deleterious to liver cells.

  13. PFP Wastewater Sampling Facility

    International Nuclear Information System (INIS)

    Hirzel, D.R.

    1995-01-01

This test report documents the results obtained while conducting operational testing of the sampling equipment in the 225-WC building, the PFP Wastewater Sampling Facility. The Wastewater Sampling Facility houses equipment to sample and monitor the PFP's liquid effluents before the stream is discharged to the 200 Area Treated Effluent Disposal Facility (TEDF). The majority of the streams are not radioactive and are discharges from the PFP heating, ventilation, and air conditioning (HVAC) systems. The streams that might be contaminated are processed through the Low Level Waste Treatment Facility (LLWTF) before discharging to TEDF. The sampling equipment consists of two flow-proportional composite samplers, an ultrasonic flowmeter, pH and conductivity monitors, a chart recorder, and associated relays and current isolators that interconnect the equipment to allow proper operation. Data signals from the monitors are received in the 234-5Z Shift Office, which contains a chart recorder and alarm annunciator panel. The data signals are also duplicated and sent to the TEDF control room through the Local Control Unit (LCU). Performing the OTP has verified the operability of the PFP wastewater sampling system. This Operability Test Report documents the acceptance of the sampling system for use.

  14. Contributions to sampling statistics

    CERN Document Server

    Conti, Pier; Ranalli, Maria

    2014-01-01

    This book contains a selection of the papers presented at the ITACOSM 2013 Conference, held in Milan in June 2013. ITACOSM is the bi-annual meeting of the Survey Sampling Group S2G of the Italian Statistical Society, intended as an international  forum of scientific discussion on the developments of theory and application of survey sampling methodologies and applications in human and natural sciences. The book gathers research papers carefully selected from both invited and contributed sessions of the conference. The whole book appears to be a relevant contribution to various key aspects of sampling methodology and techniques; it deals with some hot topics in sampling theory, such as calibration, quantile-regression and multiple frame surveys, and with innovative methodologies in important topics of both sampling theory and applications. Contributions cut across current sampling methodologies such as interval estimation for complex samples, randomized responses, bootstrap, weighting, modeling, imputati...

  15. Waste classification sampling plan

    International Nuclear Information System (INIS)

    Landsman, S.D.

    1998-01-01

The purpose of this sampling plan is to explain the method used to collect and analyze data necessary to verify and/or determine the radionuclide content of the B-Cell decontamination and decommissioning waste stream so that the correct waste classification for the waste stream can be made, and to collect samples for studies of decontamination methods that could be used to remove fixed contamination present on the waste. The scope of this plan is to establish the technical basis for collecting samples and compiling quantitative data on the radioactive constituents present in waste generated during deactivation activities in B-Cell. Sampling and radioisotopic analysis will be performed on the fixed layers of contamination present on structural material and internal surfaces of process piping and tanks. In addition, dose rate measurements on existing waste material will be performed to determine the fraction of dose rate attributable to both removable and fixed contamination. Samples will also be collected to support studies of decontamination methods that are effective in removing the fixed contamination present on the waste. Sampling performed under this plan will meet criteria established in BNF-2596, Data Quality Objectives for the B-Cell Waste Stream Classification Sampling, J. M. Barnett, May 1998

  16. On incomplete sampling under birth-death models and connections to the sampling-based coalescent.

    Science.gov (United States)

    Stadler, Tanja

    2009-11-07

The constant rate birth-death process is used as a stochastic model for many biological systems, for example phylogenies or disease transmission. As the biological data are usually not fully available, it is crucial to understand the effect of incomplete sampling. In this paper, we analyze the constant rate birth-death process with incomplete sampling. We derive the density of the bifurcation events for trees on n leaves which evolved under this birth-death-sampling process. This density is used for calculating prior distributions in Bayesian inference programs and for efficiently simulating trees. We show that the birth-death-sampling process can be interpreted as a birth-death process with reduced rates and complete sampling. This shows that joint inference of birth rate, death rate and sampling probability is not possible. The birth-death-sampling process is compared to the sampling-based population genetics model, the coalescent. It is shown that despite many similarities between these two models, the distribution of bifurcation times remains different even in the case of very large population sizes. We illustrate these findings on a hepatitis C virus dataset from Egypt. We show that the transmission time estimates are significantly different; the widely used Gamma statistic even changes its sign from negative to positive when switching from the coalescent to the birth-death process.
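The reduced-rate interpretation can be written out explicitly. Under the standard statement of this equivalence (our rendering of the notation; verify against the paper), a birth-death process with birth rate λ, death rate μ and sampling probability ρ yields the same reconstructed-tree distribution as a completely sampled process with

```latex
\lambda' = \rho\,\lambda, \qquad \mu' = \mu - \lambda(1 - \rho), \qquad \rho' = 1
```

so the triple (λ, μ, ρ) cannot be jointly identified from the reconstructed tree alone, which is the non-identifiability result summarized in the abstract.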

  17. New adaptive sampling method in particle image velocimetry

    International Nuclear Information System (INIS)

    Yu, Kaikai; Xu, Jinglei; Tang, Lan; Mo, Jianwei

    2015-01-01

    This study proposes a new adaptive method to enable the number of interrogation windows and their positions in a particle image velocimetry (PIV) image interrogation algorithm to become self-adapted according to the seeding density. The proposed method can relax the constraint of uniform sampling rate and uniform window size commonly adopted in the traditional PIV algorithm. In addition, the positions of the sampling points are redistributed on the basis of the spring force generated by the sampling points. The advantages include control of the number of interrogation windows according to the local seeding density and smoother distribution of sampling points. The reliability of the adaptive sampling method is illustrated by processing synthetic and experimental images. The synthetic example attests to the advantages of the sampling method. Compared with that of the uniform interrogation technique in the experimental application, the spatial resolution is locally enhanced when using the proposed sampling method. (technical design note)
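The spring-force redistribution idea can be illustrated in one dimension. The sketch below is our own illustrative simplification (function name, relaxation scheme, and density profile are assumptions, not the authors' 2D PIV algorithm): sample points relax under springs whose rest lengths are inversely proportional to a prescribed seeding density, so points crowd where the density is high.

```python
import numpy as np

def relax_points(x, density, n_iter=500, step=0.2):
    """Redistribute sorted sample positions so that spacing becomes roughly
    inversely proportional to the local seeding density (1D spring model)."""
    x = np.sort(np.asarray(x, dtype=float))
    for _ in range(n_iter):
        gaps = np.diff(x)                       # current spring lengths
        mid = 0.5 * (x[:-1] + x[1:])            # spring midpoints
        rest = 1.0 / density(mid)               # desired local spacing
        rest *= gaps.sum() / rest.sum()         # normalise to the total span
        force = gaps - rest                     # stretched springs pull ends in
        # each interior point feels its left and right spring; endpoints fixed
        x[1:-1] += step * (force[1:] - force[:-1])
    return x

# Seeding density rising linearly to the right: spacing shrinks there.
x0 = np.linspace(0.0, 1.0, 21)
x = relax_points(x0, density=lambda s: 1.0 + 4.0 * s)
gaps = np.diff(x)
print(gaps[0], gaps[-1])  # left gap wider than right gap
```

The explicit relaxation converges because the update is a damped spring chain with fixed endpoints; the actual method distributes 2D interrogation windows the same way, with window size also tied to local density.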

  18. Microplastic sampling in the Mediterranean Sea

    DEFF Research Database (Denmark)

    Biginagwa, Fares; Sosthenes, Bahati; Syberg, Kristian

The extent of microplastic pollution in the southwestern Mediterranean Sea is not yet known, although the northwestern part has been studied previously. Plastic samples were collected at 7 transects during a 10-day expedition from Sicily (Italy) to Malaga (Spain) in September 2014. A 330 µm mesh...... manta trawl was used for surface water sampling. Physical and chemical characterization of plastic particles was performed with regard to size (1-5 mm), shape (fragment, line, thin film, foam and pellets), color (transparent, white, black and colored), density and chemical composition according......

  19. Branch xylem density variations across the Amazon Basin

    Directory of Open Access Journals (Sweden)

    S. Patiño

    2009-04-01

Xylem density is a physical property of wood that varies between individuals, species and environments. It reflects the physiological strategies of trees that lead to growth, survival and reproduction. Measurements of branch xylem density, ρx, were made for 1653 trees representing 598 species, sampled from 87 sites across the Amazon basin. Measured values ranged from 218 kg m−3 for a Cordia sagotii (Boraginaceae) from Mountagne de Tortue, French Guiana to 1130 kg m−3 for an Aiouea sp. (Lauraceae) from Caxiuana, Central Pará, Brazil. Analysis of variance showed significant differences in average ρx across regions and sampled plots as well as significant differences between families, genera and species. A partitioning of the total variance in the dataset showed that species identity (family, genera and species) accounted for 33%, with environment (geographic location and plot) accounting for an additional 26%; the remaining "residual" variance accounted for 41% of the total variance. Variations in plot means were, however, not accountable solely by differences in species composition, because xylem density of the most widely distributed species in our dataset varied systematically from plot to plot. Thus, as well as having a genetic component, branch xylem density is a plastic trait that, for any given species, varies according to where the tree is growing in a predictable manner. Within the analysed taxa, exceptions to this general rule seem to be pioneer species belonging for example to the Urticaceae, whose branch xylem density is more constrained than most species sampled in this study. These patterns of variation of branch xylem density across Amazonia suggest a large functional diversity amongst Amazonian trees which is not well understood.

  20. Branch xylem density variations across the Amazon Basin

    Science.gov (United States)

    Patiño, S.; Lloyd, J.; Paiva, R.; Baker, T. R.; Quesada, C. A.; Mercado, L. M.; Schmerler, J.; Schwarz, M.; Santos, A. J. B.; Aguilar, A.; Czimczik, C. I.; Gallo, J.; Horna, V.; Hoyos, E. J.; Jimenez, E. M.; Palomino, W.; Peacock, J.; Peña-Cruz, A.; Sarmiento, C.; Sota, A.; Turriago, J. D.; Villanueva, B.; Vitzthum, P.; Alvarez, E.; Arroyo, L.; Baraloto, C.; Bonal, D.; Chave, J.; Costa, A. C. L.; Herrera, R.; Higuchi, N.; Killeen, T.; Leal, E.; Luizão, F.; Meir, P.; Monteagudo, A.; Neil, D.; Núñez-Vargas, P.; Peñuela, M. C.; Pitman, N.; Priante Filho, N.; Prieto, A.; Panfil, S. N.; Rudas, A.; Salomão, R.; Silva, N.; Silveira, M.; Soares Dealmeida, S.; Torres-Lezama, A.; Vásquez-Martínez, R.; Vieira, I.; Malhi, Y.; Phillips, O. L.

    2009-04-01

    Xylem density is a physical property of wood that varies between individuals, species and environments. It reflects the physiological strategies of trees that lead to growth, survival and reproduction. Measurements of branch xylem density, ρx, were made for 1653 trees representing 598 species, sampled from 87 sites across the Amazon basin. Measured values ranged from 218 kg m-3 for a Cordia sagotii (Boraginaceae) from Mountagne de Tortue, French Guiana to 1130 kg m-3 for an Aiouea sp. (Lauraceae) from Caxiuana, Central Pará, Brazil. Analysis of variance showed significant differences in average ρx across regions and sampled plots as well as significant differences between families, genera and species. A partitioning of the total variance in the dataset showed that species identity (family, genera and species) accounted for 33% with environment (geographic location and plot) accounting for an additional 26%; the remaining "residual" variance accounted for 41% of the total variance. Variations in plot means, were, however, not only accountable by differences in species composition because xylem density of the most widely distributed species in our dataset varied systematically from plot to plot. Thus, as well as having a genetic component, branch xylem density is a plastic trait that, for any given species, varies according to where the tree is growing in a predictable manner. Within the analysed taxa, exceptions to this general rule seem to be pioneer species belonging for example to the Urticaceae whose branch xylem density is more constrained than most species sampled in this study. These patterns of variation of branch xylem density across Amazonia suggest a large functional diversity amongst Amazonian trees which is not well understood.

  1. Roles of pinning strength and density in vortex melting

    International Nuclear Information System (INIS)

    Obaidat, I M; Khawaja, U Al; Benkraouda, M

    2008-01-01

We have investigated the role of pinning strength and density on the equilibrium vortex-lattice to vortex-liquid phase transition under several applied magnetic fields. This study was conducted using a series of molecular dynamics simulations on several samples with different strengths and densities of pinning sites which are arranged in periodic square arrays. We have found a single solid-liquid vortex transition when the vortex filling factor n > 1. We have found that, for fixed pinning densities and strengths, the melting temperature T_m decreases almost linearly with increasing magnetic field. Our results provide direct numerical evidence for the significant role of both the strength and density of pinning centers on the position of the melting line. We have found that the vortex-lattice to vortex-liquid melting line shifts up as the pinning strength or the pinning density is increased. The effect on the melting line was found to be more pronounced at small values of strength and density of pinning sites.

  2. Temperature Dependence Viscosity and Density of Different Biodiesel Blends

    Directory of Open Access Journals (Sweden)

    Vojtěch Kumbár

    2015-01-01

The main goal of this paper is to assess the effect of rapeseed oil methyl ester (RME) concentration in diesel fuel on its viscosity and density behaviour. The density and dynamic viscosity were observed at various mixing ratios of RME and diesel fuel. All measurements were performed at a constant temperature of 40 °C. An increasing ratio of RME in diesel fuel was reflected in increased density and dynamic viscosity of the blend. In the case of pure RME, pure diesel fuel, and a blend of both (B30), the temperature dependence of dynamic viscosity and density was examined. The temperature range in the experiment was −10 °C to 80 °C. Considerable temperature dependence of dynamic viscosity and density was found and demonstrated for all three samples. This finding is in accordance with theoretical assumptions and reference data. Mathematical models were developed and tested. The temperature dependence of dynamic viscosity was modeled using a 3rd-degree polynomial. Correlation coefficients R of −0.796, −0.948, and −0.974 between measured and calculated values were found. The temperature dependence of density was modeled using a 2nd-degree polynomial. Correlation coefficients R of −0.994, −0.979, and −0.976 between measured and calculated values were acquired. The proposed models can be used for flow behaviour prediction of RME, diesel fuel, and their blends.
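The polynomial modeling described here amounts to an ordinary least-squares fit. A minimal sketch in Python, with synthetic data standing in for the measured viscosities (the coefficients, units, and noise level are illustrative assumptions, not the paper's data):

```python
import numpy as np

# Synthetic dynamic-viscosity data (mPa·s) over the study's −10..80 °C range.
temp = np.linspace(-10.0, 80.0, 19)
true_visc = 9.5 - 0.21 * temp + 2.4e-3 * temp**2 - 1.1e-5 * temp**3
rng = np.random.default_rng(0)
visc = true_visc + rng.normal(0.0, 0.05, temp.size)

# 3rd-degree polynomial for viscosity (a 2nd-degree fit would be used for
# density), mirroring the model orders reported in the study.
coef = np.polyfit(temp, visc, deg=3)       # highest-order coefficient first
fitted = np.polyval(coef, temp)

# Correlation between measured and fitted values quantifies goodness of fit.
r = np.corrcoef(visc, fitted)[0, 1]
print(round(r, 3))
```

With realistic measurement noise the measured-vs-fitted correlation is close to 1; the evaluated polynomial can then predict viscosity at any temperature inside the fitted range.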

  3. Sample Return Robot

    Data.gov (United States)

    National Aeronautics and Space Administration — This Challenge requires demonstration of an autonomous robotic system to locate and collect a set of specific sample types from a large planetary analog area and...

  4. Ecotoxicology statistical sampling

    International Nuclear Information System (INIS)

    Saona, G.

    2012-01-01

This presentation introduces general concepts in ecotoxicological sampling design, such as the distribution of organic or inorganic contaminants, microbiological contamination, and the determination of sampling positions in ecosystem ecotoxicological bioassays.

  5. Mini MAX - Medicaid Sample

    Data.gov (United States)

    U.S. Department of Health & Human Services — To facilitate wider use of MAX, CMS contracted with Mathematica to convene a technical expert panel (TEP) and determine the feasibility of creating a sample file for...

  6. Operational air sampling report

    International Nuclear Information System (INIS)

    Lyons, C.L.

    1994-03-01

Nevada Test Site vertical shaft and tunnel events generate beta/gamma fission products. The REECo air sampling program is designed to measure these radionuclides at the various facilities supporting these events. The current testing moratorium and the closure of the Decontamination Facility have decreased the scope of the program significantly. Of the 118 air samples collected in the only active tunnel complex, only one showed any airborne fission products. Tritiated water vapor concentrations were very similar to previously reported levels. The 206 air samples collected at the Area-6 decontamination bays and laundry were again well below any Derived Air Concentration calculation standard. Laboratory analyses of these samples were negative for any airborne fission products.

  7. Collecting Samples for Testing

    Science.gov (United States)


  8. Roadway sampling evaluation.

    Science.gov (United States)

    2014-09-01

The Florida Department of Transportation (FDOT) has traditionally required that all sampling and testing of asphalt mixtures be at the Contractor's production facility. With recent staffing cuts, as well as budget reductions, FDOT has been cons...

  9. Soil Gas Sampling

    Science.gov (United States)

    Field Branches Quality System and Technical Procedures: This document describes general and specific procedures, methods and considerations to be used and observed when collecting soil gas samples for field screening or laboratory analysis.

  10. Soil Sampling Operating Procedure

    Science.gov (United States)

    EPA Region 4 Science and Ecosystem Support Division (SESD) document that describes general and specific procedures, methods, and considerations when collecting soil samples for field screening or laboratory analysis.

  11. Minimal nuclear energy density functional

    Science.gov (United States)

    Bulgac, Aurel; Forbes, Michael McNeil; Jin, Shi; Perez, Rodrigo Navarro; Schunck, Nicolas

    2018-04-01

We present a minimal nuclear energy density functional (NEDF) called "SeaLL1" that has the smallest number of possible phenomenological parameters to date. SeaLL1 is defined by seven significant phenomenological parameters, each related to a specific nuclear property. It describes the nuclear masses of even-even nuclei with a mean energy error of 0.97 MeV and a standard deviation of 1.46 MeV, two-neutron and two-proton separation energies with rms errors of 0.69 MeV and 0.59 MeV respectively, and the charge radii of 345 even-even nuclei with a mean error ε_r = 0.022 fm and a standard deviation σ_r = 0.025 fm. SeaLL1 incorporates constraints on the equation of state (EoS) of pure neutron matter from quantum Monte Carlo calculations with chiral effective field theory two-body (NN) interactions at the next-to-next-to-next-to-leading order (N3LO) level and three-body (NNN) interactions at the next-to-next-to-leading order (N2LO) level. Two of the seven parameters are related to the saturation density and the energy per particle of homogeneous symmetric nuclear matter, one is related to the nuclear surface tension, two are related to the symmetry energy and its density dependence, one is related to the strength of the spin-orbit interaction, and one is the coupling constant of the pairing interaction. We identify additional phenomenological parameters that have little effect on ground-state properties but can be used to fine-tune features such as the Thomas-Reiche-Kuhn sum rule, the excitation energy of the giant dipole and Gamow-Teller resonances, the static dipole electric polarizability, and the neutron skin thickness.

  12. Statistical sampling plans

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    In auditing and in inspection, one selects a number of items by some set of procedures and performs measurements which are compared with the operator's values. This session considers the problem of how to select the samples to be measured, and what kinds of measurements to make. In the inspection situation, the ultimate aim is to independently verify the operator's material balance. The effectiveness of the sample plan in achieving this objective is briefly considered. The discussion focuses on the model plant

  13. Two phase sampling

    CERN Document Server

    Ahmad, Zahoor; Hanif, Muhammad

    2013-01-01

    The development of estimators of population parameters based on two-phase sampling schemes has seen a dramatic increase in the past decade. Various authors have developed estimators of population using either one or two auxiliary variables. The present volume is a comprehensive collection of estimators available in single and two phase sampling. The book covers estimators which utilize information on single, two and multiple auxiliary variables of both quantitative and qualitative nature. Th...

  14. Leptin and bone mineral density

    DEFF Research Database (Denmark)

    Morberg, Cathrine M.; Tetens, Inge; Black, Eva

    2003-01-01

Leptin has been suggested to decrease bone mineral density (BMD). This observational analysis explored the relationship between serum leptin and BMD in 327 nonobese men (controls) (body mass index 26.1 +/- 3.7 kg/m^2, age 49.9 +/- 6.0 yr) and 285 juvenile obese men (body mass index 35.9 +/- 5.9 kg...... males, but it also stresses the fact that the strong covariation between the examined variables is a shortcoming of the cross-sectional design....

  15. Bounded Densities and Their Derivatives

    DEFF Research Database (Denmark)

    Kozine, Igor; Krymsky, V.

    2009-01-01

    This paper describes how one can compute interval-valued statistical measures given limited information about the underlying distribution. The particular focus is on a bounded derivative of a probability density function and its combination with other available statistical evidence for computing ...... quantities of interest. To be able to utilise the evidence about the derivative it is suggested to adapt the ‘conventional’ problem statement to variational calculus and the way to do so is demonstrated. A number of examples are given throughout the paper....

  16. Equilibrium problems for Raney densities

    Science.gov (United States)

    Forrester, Peter J.; Liu, Dang-Zheng; Zinn-Justin, Paul

    2015-07-01

The Raney numbers are a class of combinatorial numbers generalising the Fuss-Catalan numbers. They are indexed by a pair of positive real numbers (p, r) with p > 1 and 0 < r ≤ p. We identify the equilibrium problem for these densities, and similarly use both methods to identify the equilibrium problem for (p, r) = (θ/q + 1, 1/q), θ > 0 and q \in Z+. The Wiener-Hopf method is used to extend the latter to parameters (p, r) = (θ/q + 1, m + 1/q) for m a non-negative integer, and also to identify the equilibrium problem for a family of densities with moments given by certain binomial coefficients.
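For orientation, the Raney numbers have the standard closed form R_{p,r}(n) = r/(np + r) · C(np + r, n) for integer parameters (this is the usual combinatorial definition, not a formula quoted from the paper). A short check that it reduces to the Catalan numbers at (p, r) = (2, 1) and the Fuss-Catalan numbers at (p, r) = (3, 1):

```python
from math import comb

def raney(p, r, n):
    """Raney number R_{p,r}(n) = r/(n*p + r) * C(n*p + r, n), integer p, r >= 1.

    The ratio is always an integer, so exact integer division is safe."""
    return r * comb(n * p + r, n) // (n * p + r)

print([raney(2, 1, n) for n in range(6)])  # → [1, 1, 2, 5, 14, 42] (Catalan)
print([raney(3, 1, n) for n in range(5)])  # → [1, 1, 3, 12, 55] (Fuss-Catalan)
```

The densities studied in the paper are the probability measures whose n-th moments are these numbers, with (p, r) allowed to range over real values.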

  17. High density fuel storage rack

    International Nuclear Information System (INIS)

    Zezza, L.J.

    1980-01-01

High storage density for spent nuclear fuel assemblies in a pool is achieved by positioning fuel storage cells of high thermal neutron absorption materials in an upright configuration in a rack. The rack holds the cells at the required pitch. Each cell carries an internal fuel assembly support, and most cells are vertically movable in the rack so that they rest on the pool bottom. Pool water circulation through the cells and around the fuel assemblies is permitted by circulation openings at the top and bottom of the cells, above and below the fuel assemblies.

  18. Origin of cosmological density fluctuations

    International Nuclear Information System (INIS)

    Carr, B.J.

    1984-11-01

The density fluctuations required to explain the large-scale cosmological structure may have arisen spontaneously as a result of a phase transition in the early Universe. There are several ways in which such fluctuations may have been produced, and they could have a variety of spectra, so one should not necessarily expect all features of the large-scale structure to derive from a simple power law spectrum. Some features may even result from astrophysical amplification mechanisms rather than gravitational instability. 128 references

  19. Uranium tailings sampling manual

    International Nuclear Information System (INIS)

    Feenstra, S.; Reades, D.W.; Cherry, J.A.; Chambers, D.B.; Case, G.G.; Ibbotson, B.G.

    1985-01-01

The purpose of this manual is to describe the requisite sampling procedures for the application of uniform high-quality standards to detailed geotechnical, hydrogeological, geochemical and air quality measurements at Canadian uranium tailings disposal sites. The selection and implementation of applicable sampling procedures for such measurements at uranium tailings disposal sites are complicated by two primary factors. Firstly, the physical and chemical nature of uranium mine tailings and effluent is considerably different from natural soil materials and natural waters. Consequently, many conventional methods for the collection and analysis of natural soils and waters are not directly applicable to tailings. Secondly, there is a wide range in the physical and chemical nature of uranium tailings. The composition of the ore, the milling process, the nature of tailings deposition, and effluent treatment vary considerably and are highly site-specific. Therefore, the definition and implementation of sampling programs for uranium tailings disposal sites require considerable evaluation, and often innovation, to ensure that appropriate sampling and analysis methods are used which provide the flexibility to take into account site-specific considerations. The following chapters describe the objective and scope of a sampling program, preliminary data collection, and the procedures for sampling of tailings solids, surface water and seepage, tailings pore-water, and wind-blown dust and radon.

  20. Reactor water sampling device

    International Nuclear Information System (INIS)

    Sakamaki, Kazuo.

    1992-01-01

    The present invention concerns a reactor water sampling device for sampling reactor water in an in-core monitor (neutron measuring tube) housing in a BWR type reactor. The upper end portion of a drain pipe of the reactor water sampling device is attached detachably to an in-core monitor flange. A push-up rod is inserted in the drain pipe vertically movably. A sampling vessel and a vacuum pump are connected to the lower end of the drain pipe. A vacuum pump is operated to depressurize the inside of the device and move the push-up rod upwardly. Reactor water in the in-core monitor housing flows between the drain pipe and the push-up rod and flows into the sampling vessel. With such a constitution, reactor water in the in-core monitor housing can be sampled rapidly with neither opening the lid of the reactor pressure vessel nor being in contact with air. Accordingly, operator's exposure dose can be reduced. (I.N.)