WorldWideScience

Sample records for dimensional acquisition method

  1. Quantification of Artifact Reduction With Real-Time Cine Four-Dimensional Computed Tomography Acquisition Methods

    International Nuclear Information System (INIS)

    Langner, Ulrich W.; Keall, Paul J.

    2010-01-01

    Purpose: To quantify the magnitude and frequency of artifacts in simulated four-dimensional computed tomography (4D CT) images using three real-time acquisition methods (direction-dependent displacement acquisition, simultaneous displacement and phase acquisition, and simultaneous displacement and velocity acquisition) and to compare these methods with commonly used retrospective phase sorting. Methods and Materials: Image acquisition for the four 4D CT methods was simulated with different displacement and velocity tolerances for spheres with radii of 0.5 cm, 1.5 cm, and 2.5 cm, using 58 patient-measured tumors and respiratory motion traces. The magnitude and frequency of artifacts, CT doses, and acquisition times were computed for each method. Results: The mean artifact magnitude was 50% smaller for the three real-time methods than for retrospective phase sorting. The dose was ∼50% lower, but the acquisition time was 20% to 100% longer for the real-time methods than for retrospective phase sorting. Conclusions: Real-time acquisition methods can reduce the frequency and magnitude of artifacts in 4D CT images, as well as the imaging dose, but they increase the image acquisition time. The results suggest that direction-dependent displacement acquisition is the preferred real-time 4D CT acquisition method, because on average, the lowest dose is delivered to the patient and the acquisition time is the shortest for the resulting number and magnitude of artifacts.

  2. High temporal resolution magnetic resonance imaging: development of a parallel three dimensional acquisition method for functional neuroimaging

    International Nuclear Information System (INIS)

    Rabrait, C.

    2007-11-01

    Echo Planar Imaging is widely used to perform data acquisition in functional neuroimaging. This sequence allows the acquisition of a set of about 30 slices, covering the whole brain, at a spatial resolution ranging from 2 to 4 mm and a temporal resolution ranging from 1 to 2 s. It is thus well adapted to the mapping of activated brain areas but does not allow precise study of brain dynamics. Moreover, temporal interpolation is needed in order to correct for inter-slice delays, and 2-dimensional acquisition is subject to vascular inflow artifacts. To improve the estimation of the hemodynamic response functions associated with activation, this thesis aimed at developing a 3-dimensional high temporal resolution acquisition method. To do so, Echo Volume Imaging was combined with reduced field-of-view acquisition and parallel imaging. Indeed, E.V.I. allows the acquisition of a whole volume in Fourier space following a single excitation, but it requires very long echo trains. Parallel imaging and field-of-view reduction are used to reduce the echo train durations by a factor of 4, which allows the acquisition of a 3-dimensional brain volume, with limited susceptibility-induced distortions and signal losses, in 200 ms. All imaging parameters have been optimized in order to reduce echo train durations and to maximize S.N.R., so that cerebral activation can be detected with a high level of confidence. Robust detection of brain activation was demonstrated with both visual and auditory paradigms. High temporal resolution hemodynamic response functions could be estimated through selective averaging of the responses to the different trials of the stimulation. To further improve S.N.R., the matrix inversions required in parallel reconstruction were regularized, and the impact of the level of regularization on activation detection was investigated. Finally, potential applications of parallel E.V.I. such as the study of non-stationary effects in the B.O.L.D. response
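    The thesis mentions that the matrix inversions required in parallel reconstruction were regularized. As a hedged illustration (not the thesis's actual reconstruction code), the sketch below shows Tikhonov-regularized, SENSE-style unfolding of one group of aliased pixels; the coil count, fold factor, and regularization weight are toy assumptions.

      # Hedged sketch of Tikhonov-regularized parallel-imaging unfolding for one group of
      # aliased pixels; all sizes and the regularization weight lam are toy assumptions.
      import numpy as np

      def regularized_unfold(y, S, lam):
          """Minimize ||S x - y||^2 + lam ||x||^2 (S: coil sensitivities of the folded pixels)."""
          A = S.conj().T @ S + lam * np.eye(S.shape[1])
          return np.linalg.solve(A, S.conj().T @ y)

      rng = np.random.default_rng(0)
      S = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))  # 8 coils, fold factor 4
      x_true = np.array([1.0, 0.2, 0.0, 0.5])
      y = S @ x_true + 0.05 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
      print(np.round(regularized_unfold(y, S, lam=0.1), 3))

    Increasing lam trades residual aliasing against noise amplification, which is the balance the thesis investigates with respect to activation detection.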

  3. TH-E-17A-07: Improved Cine Four-Dimensional Computed Tomography (4D CT) Acquisition and Processing Method

    International Nuclear Information System (INIS)

    Castillo, S; Castillo, R; Castillo, E; Pan, T; Ibbott, G; Balter, P; Hobbs, B; Dai, J; Guerrero, T

    2014-01-01

    Purpose: Artifacts arising from the 4D CT acquisition and post-processing methods add systematic uncertainty to the treatment planning process. We propose an alternate cine 4D CT acquisition and post-processing method to consistently reduce artifacts, and explore patient parameters indicative of image quality. Methods: In an IRB-approved protocol, 18 patients with primary thoracic malignancies received a standard cine 4D CT acquisition followed by an oversampling 4D CT that doubled the number of images acquired. A second cohort of 10 patients received the clinical 4D CT plus 3 oversampling scans for intra-fraction reproducibility. The clinical acquisitions were processed by the standard phase sorting method. The oversampling acquisitions were processed using Dijkstra's algorithm to optimize an artifact metric over the available image data. Image quality was evaluated with a one-way mixed ANOVA model using a correlation-based artifact metric calculated from the final 4D CT image sets. Spearman correlations and a linear mixed model tested the association between breathing parameters, patient characteristics, and image quality. Results: The oversampling 4D CT scans reduced artifact presence significantly, by 27% and 28% for the first and second cohorts, respectively. From cohort 2, the inter-replicate deviation for the oversampling method was within approximately 13% of the cross-scan average at the 0.05 significance level. Artifact presence for both the clinical and oversampling methods was significantly correlated with breathing period (ρ=0.407, p-value<0.032 clinical; ρ=0.296, p-value<0.041 oversampling). Artifact presence in the oversampling method was significantly correlated with the amount of data acquired (ρ=-0.335, p-value<0.02), indicating decreased artifact presence with increased breathing cycles per scan location. Conclusion: The 4D CT oversampling acquisition with optimized sorting reduced artifact presence significantly and reproducibly compared to the standard phase sorting method.
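    The abstract states that the oversampled acquisitions were sorted with Dijkstra's algorithm by optimizing a correlation-based artifact metric over the available images. The sketch below is a hedged illustration of one way such a selection can be cast as a shortest-path problem (one node per candidate image per couch position, edge weight equal to one minus the correlation between neighbouring selections); the graph construction and metric are illustrative assumptions, not the authors' published formulation.

      # Hedged sketch: choose one candidate image per couch position so that adjacent
      # selections agree as well as possible, via Dijkstra's algorithm on a layered graph.
      # The edge weight (1 - correlation of neighbouring images) is an assumed stand-in
      # for the paper's correlation-based artifact metric.
      import heapq
      import numpy as np

      def select_images(candidates):
          """candidates[i]: list of 2-D arrays (oversampled images) at couch position i.
          Returns one selected image index per couch position."""
          n = len(candidates)
          start, goal = (-1, 0), (n, 0)          # virtual source and sink layers

          def edge_cost(img_a, img_b):
              c = np.corrcoef(img_a.ravel(), img_b.ravel())[0, 1]
              return 1.0 - c                     # low cost when neighbouring images agree

          def neighbours(node):
              i, j = node
              if i == -1:
                  return [((0, k), 0.0) for k in range(len(candidates[0]))]
              if i == n:
                  return []
              if i == n - 1:
                  return [(goal, 0.0)]
              return [((i + 1, k), edge_cost(candidates[i][j], candidates[i + 1][k]))
                      for k in range(len(candidates[i + 1]))]

          dist, prev, heap = {start: 0.0}, {}, [(0.0, start)]
          while heap:
              d, node = heapq.heappop(heap)
              if node == goal:
                  break
              if d > dist.get(node, float("inf")):
                  continue                       # stale heap entry
              for nxt, w in neighbours(node):
                  nd = d + w
                  if nd < dist.get(nxt, float("inf")):
                      dist[nxt], prev[nxt] = nd, node
                      heapq.heappush(heap, (nd, nxt))

          path, node = [], goal                  # walk back through the predecessors
          while node in prev:
              node = prev[node]
              if node != start:
                  path.append(node[1])
          return list(reversed(path))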

  4. Non-contrast-enhanced hepatic MR angiography: Do two-dimensional parallel imaging and short tau inversion recovery methods shorten acquisition time without image quality deterioration?

    Energy Technology Data Exchange (ETDEWEB)

    Shimada, Kotaro, E-mail: kotaro@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Isoda, Hiroyoshi, E-mail: sayuki@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Okada, Tomohisa, E-mail: tomokada@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Kamae, Toshikazu, E-mail: toshi13@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Arizono, Shigeki, E-mail: arizono@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Hirokawa, Yuusuke, E-mail: yuusuke@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Shibata, Toshiya, E-mail: ksj@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan); Togashi, Kaori, E-mail: ktogashi@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University, Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto 606-8507 (Japan)

    2011-01-15

    Objective: To study whether shortening the acquisition time for selective hepatic artery visualization is feasible without image quality deterioration by adopting two-dimensional (2D) parallel imaging (PI) and short tau inversion recovery (STIR) methods. Materials and methods: Twenty-four healthy volunteers were enrolled. 3D true steady-state free-precession imaging with a time spatial labeling inversion pulse was conducted using 1D or 2D-PI and fat suppression by chemical shift selective (CHESS) or STIR methods. Three groups of different scan conditions were assigned and compared: group A (1D-PI factor 2 and CHESS), group B (2D-PI factor 2 x 2 and CHESS), and group C (2D-PI factor 2 x 2 and STIR). The artery-to-liver contrast was quantified, and the quality of artery visualization and overall image quality were scored. Results: The mean scan time was 9.5 ± 1.0 min (mean ± standard deviation), 5.9 ± 0.8 min, and 5.8 ± 0.5 min in groups A, B, and C, respectively, and was significantly shorter in groups B and C than in group A (P < 0.01). The artery-to-liver contrast was significantly better in group C than in groups A and B (P < 0.01). The scores for artery visualization and overall image quality were worse in group B than in groups A and C. The differences were statistically significant (P < 0.05) regarding the arterial branches of segments 4 and 8. Between group A and group C, which had similar scores, there were no statistically significant differences. Conclusion: Shortening the acquisition time for selective hepatic artery visualization was feasible without deterioration of the image quality by the combination of 2D-PI and STIR methods. It will facilitate using non-contrast-enhanced MRA in clinical practice.

  5. Non-contrast-enhanced hepatic MR angiography: Do two-dimensional parallel imaging and short tau inversion recovery methods shorten acquisition time without image quality deterioration?

    International Nuclear Information System (INIS)

    Shimada, Kotaro; Isoda, Hiroyoshi; Okada, Tomohisa; Kamae, Toshikazu; Arizono, Shigeki; Hirokawa, Yuusuke; Shibata, Toshiya; Togashi, Kaori

    2011-01-01

    Objective: To study whether shortening the acquisition time for selective hepatic artery visualization is feasible without image quality deterioration by adopting two-dimensional (2D) parallel imaging (PI) and short tau inversion recovery (STIR) methods. Materials and methods: Twenty-four healthy volunteers were enrolled. 3D true steady-state free-precession imaging with a time spatial labeling inversion pulse was conducted using 1D or 2D-PI and fat suppression by chemical shift selective (CHESS) or STIR methods. Three groups of different scan conditions were assigned and compared: group A (1D-PI factor 2 and CHESS), group B (2D-PI factor 2 x 2 and CHESS), and group C (2D-PI factor 2 x 2 and STIR). The artery-to-liver contrast was quantified, and the quality of artery visualization and overall image quality were scored. Results: The mean scan time was 9.5 ± 1.0 min (mean ± standard deviation), 5.9 ± 0.8 min, and 5.8 ± 0.5 min in groups A, B, and C, respectively, and was significantly shorter in groups B and C than in group A (P < 0.01). The artery-to-liver contrast was significantly better in group C than in groups A and B (P < 0.01). The scores for artery visualization and overall image quality were worse in group B than in groups A and C. The differences were statistically significant (P < 0.05) regarding the arterial branches of segments 4 and 8. Between group A and group C, which had similar scores, there were no statistically significant differences. Conclusion: Shortening the acquisition time for selective hepatic artery visualization was feasible without deterioration of the image quality by the combination of 2D-PI and STIR methods. It will facilitate using non-contrast-enhanced MRA in clinical practice.

  6. High temporal resolution magnetic resonance imaging: development of a parallel three dimensional acquisition method for functional neuroimaging; Imagerie par resonance magnetique a haute resolution temporelle: developpement d'une methode d'acquisition parallele tridimensionnelle pour l'imagerie fonctionnelle cerebrale

    Energy Technology Data Exchange (ETDEWEB)

    Rabrait, C

    2007-11-15

    Echo Planar Imaging is widely used to perform data acquisition in functional neuroimaging. This sequence allows the acquisition of a set of about 30 slices, covering the whole brain, at a spatial resolution ranging from 2 to 4 mm and a temporal resolution ranging from 1 to 2 s. It is thus well adapted to the mapping of activated brain areas but does not allow precise study of brain dynamics. Moreover, temporal interpolation is needed in order to correct for inter-slice delays, and 2-dimensional acquisition is subject to vascular inflow artifacts. To improve the estimation of the hemodynamic response functions associated with activation, this thesis aimed at developing a 3-dimensional high temporal resolution acquisition method. To do so, Echo Volume Imaging was combined with reduced field-of-view acquisition and parallel imaging. Indeed, E.V.I. allows the acquisition of a whole volume in Fourier space following a single excitation, but it requires very long echo trains. Parallel imaging and field-of-view reduction are used to reduce the echo train durations by a factor of 4, which allows the acquisition of a 3-dimensional brain volume, with limited susceptibility-induced distortions and signal losses, in 200 ms. All imaging parameters have been optimized in order to reduce echo train durations and to maximize S.N.R., so that cerebral activation can be detected with a high level of confidence. Robust detection of brain activation was demonstrated with both visual and auditory paradigms. High temporal resolution hemodynamic response functions could be estimated through selective averaging of the responses to the different trials of the stimulation. To further improve S.N.R., the matrix inversions required in parallel reconstruction were regularized, and the impact of the level of regularization on activation detection was investigated. Finally, potential applications of parallel E.V.I. such as the study of non-stationary effects in the B.O.L.D. response

  7. 48 CFR 7.402 - Acquisition methods.

    Science.gov (United States)

    2010-10-01

    ... ACQUISITION PLANNING Equipment Lease or Purchase 7.402 Acquisition methods. (a) Purchase method. (1) Generally, the purchase method is appropriate if the equipment will be used beyond the point in time when cumulative leasing costs exceed the purchase costs. (2) Agencies should not rule out the purchase method of...

  8. A seamless acquisition digital storage oscilloscope with three-dimensional waveform display

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Kuojun, E-mail: kuojunyang@gmail.com; Guo, Lianping [School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu (China); School of Electrical and Electronic Engineering, Nanyang Technological University (Singapore); Tian, Shulin; Zeng, Hao [School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu (China); Qiu, Lei [School of Electrical and Electronic Engineering, Nanyang Technological University (Singapore)

    2014-04-15

    In a traditional digital storage oscilloscope (DSO), sampled data need to be processed after each acquisition. During data processing, the acquisition is stopped and the oscilloscope is blind to the input signal; this duration is called dead time. With the rapid development of modern electronic systems, the effect of infrequent events becomes significant. To capture these occasional events in a shorter time, the dead time in a traditional DSO, which causes loss of the measured signal, needs to be reduced or even eliminated. In this paper, a seamless acquisition oscilloscope without dead time is proposed. In this oscilloscope, a three-dimensional waveform mapping (TWM) technique, which converts sampled data into the displayed waveform, is proposed. With this technique, not only is the processing speed improved, but the occurrence probability of the waveform is also displayed through different brightness levels; thus, a three-dimensional waveform is shown to the user. To further reduce processing time, a parallel TWM, which processes several sampled points simultaneously, and a dual-port random-access-memory-based pipelining technique, which can process one sampled point per clock period, are proposed. Furthermore, two DDR3 (Double-Data-Rate Three Synchronous Dynamic Random Access Memory) devices are used to store sampled data alternately, so that acquisition can continue during data processing. The dead time of the DSO is therefore eliminated. In addition, a double-pulse test method is adopted to test the waveform capturing rate (WCR) of the oscilloscope, and a combined-pulse test method is employed to evaluate the oscilloscope's capture ability comprehensively. The experimental results show that the WCR of the designed oscilloscope is 6,250,000 wfms/s (waveforms per second), the highest value among all existing oscilloscopes. The testing results also prove that there is no dead time in our oscilloscope, thus realizing seamless acquisition.
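    As a hedged, software-only illustration of the three-dimensional waveform mapping idea (the real design runs in FPGA hardware with parallel and pipelined updates), the sketch below accumulates many acquired segments into a (display column, amplitude bin) hit map whose counts can be rendered as pixel brightness; array sizes and names are illustrative.

      # Hedged software sketch of three-dimensional waveform mapping (TWM): many trigger
      # segments are accumulated into a (amplitude-bin, column) hit map, and the hit count
      # of each cell is later rendered as brightness. This shows only the mapping idea.
      import numpy as np

      def twm_accumulate(segments, n_columns=1000, n_levels=256):
          """segments: (n_segments, samples_per_segment) array of 8-bit sample codes."""
          hit_map = np.zeros((n_levels, n_columns), dtype=np.uint32)
          for seg in segments:
              # map sample index -> display column, sample value -> amplitude bin
              cols = (np.arange(seg.size) * n_columns) // seg.size
              np.add.at(hit_map, (seg.astype(np.intp), cols), 1)
          return hit_map  # brightness = e.g. log-scaled hit count per pixel

      # toy data: 500 noisy sine segments plus one rare glitch segment
      t = np.linspace(0, 2 * np.pi, 2000)
      segs = np.clip(127 + 100 * np.sin(t) + 5 * np.random.randn(500, t.size), 0, 255).astype(np.uint8)
      segs[250, 900:950] = 10                      # infrequent event buried in the record
      brightness = twm_accumulate(segs)
      print(brightness.shape, brightness.max())

    Rare events such as the injected glitch appear as faint traces against the bright, frequently hit waveform, which is the visual effect the probability-weighted display aims at.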

  9. A seamless acquisition digital storage oscilloscope with three-dimensional waveform display

    Science.gov (United States)

    Yang, Kuojun; Tian, Shulin; Zeng, Hao; Qiu, Lei; Guo, Lianping

    2014-04-01

    In a traditional digital storage oscilloscope (DSO), sampled data need to be processed after each acquisition. During data processing, the acquisition is stopped and the oscilloscope is blind to the input signal; this duration is called dead time. With the rapid development of modern electronic systems, the effect of infrequent events becomes significant. To capture these occasional events in a shorter time, the dead time in a traditional DSO, which causes loss of the measured signal, needs to be reduced or even eliminated. In this paper, a seamless acquisition oscilloscope without dead time is proposed. In this oscilloscope, a three-dimensional waveform mapping (TWM) technique, which converts sampled data into the displayed waveform, is proposed. With this technique, not only is the processing speed improved, but the occurrence probability of the waveform is also displayed through different brightness levels; thus, a three-dimensional waveform is shown to the user. To further reduce processing time, a parallel TWM, which processes several sampled points simultaneously, and a dual-port random-access-memory-based pipelining technique, which can process one sampled point per clock period, are proposed. Furthermore, two DDR3 (Double-Data-Rate Three Synchronous Dynamic Random Access Memory) devices are used to store sampled data alternately, so that acquisition can continue during data processing. The dead time of the DSO is therefore eliminated. In addition, a double-pulse test method is adopted to test the waveform capturing rate (WCR) of the oscilloscope, and a combined-pulse test method is employed to evaluate the oscilloscope's capture ability comprehensively. The experimental results show that the WCR of the designed oscilloscope is 6,250,000 wfms/s (waveforms per second), the highest value among all existing oscilloscopes. The testing results also prove that there is no dead time in our oscilloscope, thus realizing seamless acquisition.

  10. A seamless acquisition digital storage oscilloscope with three-dimensional waveform display

    International Nuclear Information System (INIS)

    Yang, Kuojun; Guo, Lianping; Tian, Shulin; Zeng, Hao; Qiu, Lei

    2014-01-01

    In a traditional digital storage oscilloscope (DSO), sampled data need to be processed after each acquisition. During data processing, the acquisition is stopped and the oscilloscope is blind to the input signal; this duration is called dead time. With the rapid development of modern electronic systems, the effect of infrequent events becomes significant. To capture these occasional events in a shorter time, the dead time in a traditional DSO, which causes loss of the measured signal, needs to be reduced or even eliminated. In this paper, a seamless acquisition oscilloscope without dead time is proposed. In this oscilloscope, a three-dimensional waveform mapping (TWM) technique, which converts sampled data into the displayed waveform, is proposed. With this technique, not only is the processing speed improved, but the occurrence probability of the waveform is also displayed through different brightness levels; thus, a three-dimensional waveform is shown to the user. To further reduce processing time, a parallel TWM, which processes several sampled points simultaneously, and a dual-port random-access-memory-based pipelining technique, which can process one sampled point per clock period, are proposed. Furthermore, two DDR3 (Double-Data-Rate Three Synchronous Dynamic Random Access Memory) devices are used to store sampled data alternately, so that acquisition can continue during data processing. The dead time of the DSO is therefore eliminated. In addition, a double-pulse test method is adopted to test the waveform capturing rate (WCR) of the oscilloscope, and a combined-pulse test method is employed to evaluate the oscilloscope's capture ability comprehensively. The experimental results show that the WCR of the designed oscilloscope is 6,250,000 wfms/s (waveforms per second), the highest value among all existing oscilloscopes. The testing results also prove that there is no dead time in our oscilloscope, thus realizing seamless acquisition.

  11. Single cell proteomics in biomedicine: High-dimensional data acquisition, visualization, and analysis.

    Science.gov (United States)

    Su, Yapeng; Shi, Qihui; Wei, Wei

    2017-02-01

    New insights into cellular heterogeneity over the last decade have spurred the development of a variety of single-cell omics tools at a lightning pace. The resultant high-dimensional single-cell data generated by these tools require new theoretical approaches and analytical algorithms for effective visualization and interpretation. In this review, we briefly survey the state-of-the-art single-cell proteomic tools with a particular focus on data acquisition and quantification, followed by an elaboration of a number of statistical and computational approaches developed to date for dissecting the high-dimensional single-cell data. The underlying assumptions, unique features, and limitations of the analytical methods, together with the biological questions they seek to answer, will be discussed. Particular attention will be given to those information theoretical approaches that are anchored in a set of first principles of physics and can yield detailed (and often surprising) predictions. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Three-dimensional image acquisition and reconstruction system on a mobile device based on computer-generated integral imaging.

    Science.gov (United States)

    Erdenebat, Munkh-Uchral; Kim, Byeong-Jun; Piao, Yan-Ling; Park, Seo-Yeon; Kwon, Ki-Chul; Piao, Mei-Lan; Yoo, Kwan-Hee; Kim, Nam

    2017-10-01

    A mobile three-dimensional image acquisition and reconstruction system using a computer-generated integral imaging technique is proposed. A depth camera connected to the mobile device acquires the color and depth data of a real object simultaneously, and an elemental image array is generated based on the original three-dimensional information for the object, with lens array specifications input into the mobile device. The three-dimensional visualization of the real object is reconstructed on the mobile display through optical or digital reconstruction methods. The proposed system is implemented successfully and the experimental results certify that the system is an effective and interesting method of displaying real three-dimensional content on a mobile device.

  13. Three-dimensional ultrasonic imaging of concrete elements using different SAFT data acquisition and processing schemes

    International Nuclear Information System (INIS)

    Schickert, Martin

    2015-01-01

    Ultrasonic testing systems using transducer arrays and SAFT (Synthetic Aperture Focusing Technique) reconstruction allow imaging of the internal structure of concrete elements. With one-sided access, three-dimensional representations of the concrete volume can be reconstructed in relatively great detail, permitting detection and localization of objects such as construction elements, built-in components, and flaws. Different SAFT data acquisition and processing schemes can be utilized, which differ in terms of the measuring and computational effort and the reconstruction result. In this contribution, two methods are compared with respect to their principle of operation and their imaging characteristics. The first method is the conventional single-channel SAFT algorithm, implemented using a virtual transducer that is moved within a transducer array by electronic switching. The second method is the Combinational SAFT algorithm (C-SAFT), also named Sampling Phased Array (SPA) or Full Matrix Capture/Total Focusing Method (FMC/TFM), which is realized using a combination of virtual transducers within a transducer array. Five variants of these two methods are compared by means of measurements obtained at test specimens containing objects typical of concrete elements. The automated SAFT imaging system FLEXUS is used for the measurements; it includes a three-axis scanner with a 1.0 m × 0.8 m scan range and an electronically switched ultrasonic array consisting of 48 transducers in 16 groups. On the basis of two-dimensional and three-dimensional reconstructed images, qualitative and some quantitative results for image resolution, signal-to-noise ratio, measurement time, and computational effort are discussed with a view to the application characteristics of the SAFT variants
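    The Combinational SAFT / FMC-TFM scheme named above is, at its core, a delay-and-sum reconstruction over every transmit-receive pair. The sketch below is a minimal, idealized version (constant sound speed, point-like elements on a line, no directivity, attenuation, or envelope processing) and is not the FLEXUS implementation.

      # Minimal delay-and-sum sketch of the Total Focusing Method (TFM) on Full Matrix
      # Capture data. Assumes a constant sound speed and a linear array on the surface.
      import numpy as np

      def tfm_image(fmc, elem_x, fs, c, xs, zs):
          """fmc[tx, rx, t]: A-scans for every transmit/receive pair (numpy array).
          elem_x: element x-positions (m); fs: sampling rate (Hz); c: sound speed (m/s);
          xs, zs: image grid coordinates (m)."""
          n_elem, _, n_t = fmc.shape
          image = np.zeros((zs.size, xs.size))
          X, Z = np.meshgrid(xs, zs)                       # pixel coordinates
          # distance from every element to every pixel
          d = np.sqrt((X[None, :, :] - elem_x[:, None, None]) ** 2 + Z[None, :, :] ** 2)
          for tx in range(n_elem):
              for rx in range(n_elem):
                  tof = (d[tx] + d[rx]) / c                # round-trip time of flight
                  idx = np.clip((tof * fs).astype(int), 0, n_t - 1)
                  image += fmc[tx, rx, idx]                # delay-and-sum accumulation
          return image

    Restricting the double loop to tx == rx essentially corresponds to the conventional pulse-echo, single-channel SAFT case, which is the first of the two methods compared above.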

  14. Clinical study of Whole Heart MRCA. Evaluation of acquisition method

    International Nuclear Information System (INIS)

    Iwai, Mitsuhiro; Tateishi, Toshiki; Takeda, Soji; Hayashi, Ryuji

    2006-01-01

    In CT coronary angiography (CTCA), image reconstruction is performed at the optimal phase using either the relative or the absolute delay method. In Whole Heart MRCA, the conventional method is to set the signal acquisition window only to the mid-diastolic phase, when the right coronary artery is at rest, once per cardiac beat. However, at high heart rates, or with arrhythmia, unstable respiration, etc., image acquisition can take a long time and good image quality may not be maintained. We use not only mid-diastolic acquisition (conventional method) but also systolic-phase acquisition (systolic phase method) and a shortened acquisition (total acquisition shortening (TASH) method). The aim of this study was to examine the depiction ability and image quality of 585 Whole Heart MRCA examinations acquired with the three methods described above. Seventy percent of all examinations yielded good-quality images in mid-diastole. The remaining images, obtained with the systolic-phase method (at high heart rates) and the TASH method (unstable respiration, etc.), were better than those obtained with the conventional method. The sensitivity and specificity for coronary stenosis (75%) were 90% and 96% for the TASH method, 96% and 97% for the systolic-phase method, and 87% and 96% for the conventional method, respectively. These findings show that no significant differences in the depiction of coronary stenosis were apparent among the three methods. It was concluded that for Whole Heart MRCA it is necessary to set a suitable signal acquisition window and to change the acquisition method depending on the heart rate or the state of breathing. (author)

  15. L-C Measurement Acquisition Method for Aerospace Systems

    Science.gov (United States)

    Woodard, Stanley E.; Taylor, B. Douglas; Shams, Qamar A.; Fox, Robert L.

    2003-01-01

    This paper describes a measurement acquisition method for aerospace systems that eliminates the need for sensors to have a physical connection to a power source (i.e., no lead wires) or to data acquisition equipment. Furthermore, the method does not require the sensors to be in proximity to any form of acquisition hardware. Multiple sensors can be interrogated using this method. The sensors consist of a capacitor, C(p), whose capacitance changes with changes to a physical property, p, electrically connected to an inductor, L. The method uses an antenna to broadcast electromagnetic energy that electrically excites one or more inductive-capacitive sensors via Faraday induction. This method facilitates measurements that were not previously possible because there was no practical means of providing power and data acquisition electrical connections to a sensor. Unlike traditional sensors, which measure only a single physical property, the manner in which the sensing element is interrogated allows simultaneous measurement of at least two unrelated physical properties (e.g., displacement rate and fluid level) by using each constituent of the L-C element. The keys to using the method for aerospace applications are to increase the distance between the L-C elements and the interrogating antenna, to make all key components non-obtrusive, and to develop sensing elements that can easily be implemented. Techniques that have resulted in increased distance between the antenna and the sensor will be presented. Fluid-level measurements and pressure measurements using the acquisition method are demonstrated in the paper.
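    The inductive-capacitive sensing principle can be summarized with the standard resonance relation from circuit theory (textbook physics, not a formula quoted from the paper): the antenna sweeps frequency, the sensor responds most strongly at its resonant frequency, and the measured property is recovered from the capacitance calibration.

      \[
        f_r = \frac{1}{2\pi\sqrt{L\,C(p)}}
        \qquad\Longrightarrow\qquad
        C(p) = \frac{1}{(2\pi f_r)^{2}\,L},
      \]

    so a shift in the detected resonant frequency is converted into a capacitance value and then, through the sensor's calibration curve C(p), into the physical quantity of interest (for example fluid level or pressure). Interrogating the inductive and capacitive constituents of the same element is what allows two unrelated properties to be measured with a single sensor, as described above.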

  16. Estimate of pulse-sequence data acquisition system for multi-dimensional measurement

    International Nuclear Information System (INIS)

    Kitamura, Yasunori; Sakae, Takeji; Nohtomi, Akihiro; Matoba, Masaru; Matsumoto, Yuzuru.

    1996-01-01

    A pulse-sequence data acquisition system has been newly designed and evaluated for the measurement of one- or multi-dimensional pulse trains coming from radiation detectors. In this system, in order to realize pulse-sequence data acquisition, the arrival time of each pulse is recorded in the memory of a personal computer (PC). For multi-dimensional data acquisition with several input channels, each arrival-time datum is tagged with a 'flag' which indicates the input channel of the arriving pulse. Counting losses due to the processing time of the PC are expected to be reduced by using a First-In-First-Out (FIFO) memory unit. In order to verify this system, a computer simulation was performed. Various sets of random pulse trains with different mean pulse rates (1-600 kcps) were generated using a Monte Carlo simulation technique. These pulse trains were then processed by another code which simulates the newly designed data acquisition system including a FIFO memory unit; the memory size was assumed to be 0-100 words. The pulse trains recorded on the PC with the various FIFO memory sizes were then examined. The results of the simulation show that the system with a 3-word FIFO memory unit works successfully up to a pulse rate of 10 kcps without any severe counting losses. (author)
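    The described Monte Carlo estimate can be reproduced in a few lines: pulses with exponentially distributed inter-arrival times feed a FIFO of limited depth, the PC removes one event per fixed processing time, and arrivals that find the FIFO full are counted as losses. The sketch below follows that description; the PC processing time of 50 microseconds is an assumed value, not one given in the abstract.

      # Hedged re-creation of the described simulation: exponential inter-arrival pulse
      # trains, a FIFO of limited depth, and a PC that needs a fixed time per event.
      # The processing time t_proc is an assumption; rates and depths follow the abstract.
      import random

      def loss_fraction(rate_cps, t_proc, fifo_depth, n_pulses=200_000, seed=1):
          rng = random.Random(seed)
          t, busy_until, queued, lost = 0.0, 0.0, 0, 0
          for _ in range(n_pulses):
              t += rng.expovariate(rate_cps)           # next pulse arrival time (s)
              while queued and busy_until <= t:        # PC has finished: pull from the FIFO
                  busy_until += t_proc
                  queued -= 1
              if busy_until <= t:                      # PC idle: record the event immediately
                  busy_until = t + t_proc
              elif queued < fifo_depth:                # PC busy: buffer the event in the FIFO
                  queued += 1
              else:                                    # FIFO full: the event is lost
                  lost += 1
          return lost / n_pulses

      for rate in (1e3, 1e4, 1e5, 6e5):                # 1-600 kcps, as in the abstract
          print(f"{rate / 1e3:6.0f} kcps  loss = {loss_fraction(rate, 50e-6, fifo_depth=3):.4f}")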

  17. Estimate of pulse-sequence data acquisition system for multi-dimensional measurement

    Energy Technology Data Exchange (ETDEWEB)

    Kitamura, Yasunori; Sakae, Takeji; Nohtomi, Akihiro; Matoba, Masaru [Kyushu Univ., Fukuoka (Japan). Faculty of Engineering; Matsumoto, Yuzuru

    1996-07-01

    A pulse-sequence data acquisition system has been newly designed and evaluated for the measurement of one- or multi-dimensional pulse trains coming from radiation detectors. In this system, in order to realize pulse-sequence data acquisition, the arrival time of each pulse is recorded in the memory of a personal computer (PC). For multi-dimensional data acquisition with several input channels, each arrival-time datum is tagged with a 'flag' which indicates the input channel of the arriving pulse. Counting losses due to the processing time of the PC are expected to be reduced by using a First-In-First-Out (FIFO) memory unit. In order to verify this system, a computer simulation was performed. Various sets of random pulse trains with different mean pulse rates (1-600 kcps) were generated using a Monte Carlo simulation technique. These pulse trains were then processed by another code which simulates the newly designed data acquisition system including a FIFO memory unit; the memory size was assumed to be 0-100 words. The pulse trains recorded on the PC with the various FIFO memory sizes were then examined. The results of the simulation show that the system with a 3-word FIFO memory unit works successfully up to a pulse rate of 10 kcps without any severe counting losses. (author)

  18. An evaluation on CT image acquisition method for medical VR applications

    Science.gov (United States)

    Jang, Seong-wook; Ko, Junho; Yoo, Yon-sik; Kim, Yoonsang

    2017-02-01

    Medical virtual reality (VR) applications aimed at minimizing re-operations are being studied to improve surgical efficiency and reduce operation errors. The CT image acquisition method used for three-dimensional (3D) modeling is important for medical VR applications, because a realistic model of the actual human organ is required. However, research on medical VR applications has focused on 3D modeling techniques and the use of 3D models, and research on a CT image acquisition method that takes 3D modeling into account has not been reported. The conventional CT image acquisition method scans a limited area around the lesion, once or twice, for the physician's diagnosis. Medical VR applications, however, require CT images that cover patients' various postures and a wider area than the lesion. A wider area is required because dyskinesia diagnosis of the shoulder, pelvis, and leg involves comparing the bilateral sides, and various postures are required because they affect the musculoskeletal system differently. Therefore, in this paper, we perform a comparative experiment on CT images acquired with different image areas (unilateral/bilateral) and patient postures (neutral/abducted). CT images were acquired from 10 patients for the experiments, and the acquired CT images were evaluated based on the length per pixel and the morphological deviation. Finally, by comparing the experimental results, we evaluate the CT image acquisition method for medical VR applications.

  19. A method for improved 4D-computed tomography data acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Kupper, Martin; Sprengel, Wolfgang [Technische Univ. Graz (Austria). Inst. fuer Materialphysik; Winkler, Peter; Zurl, Brigitte [Medizinische Univ. Graz (Austria). Comprehensive Cancer Center

    2017-05-01

    In four-dimensional time-dependent computed tomography (4D-CT) of the lungs, irregularities in breathing movements can cause errors in data acquisition, or even data loss. We present a method based on sending a synthetic, regular breathing signal to the CT instead of the real signal, which ensures 4D-CT data sets without data loss. Subsequent correction of the signal based on the real breathing curve enables an accurate reconstruction of the size and movement of the target volume. This makes it possible to plan radiation treatment based on the obtained data. The method was tested with dynamic thorax phantom measurements using synthetic and real breathing patterns.
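    A hedged sketch of the core idea follows: a perfectly regular surrogate built from the patient's mean breathing period is what the scanner sees, while the real trace is recorded in parallel, and each acquisition time stamp is afterwards re-labelled with the amplitude actually observed on the real curve. The signal shape, period, and sampling values are illustrative assumptions, not the published implementation.

      # Hedged sketch: the CT is fed a regular surrogate; acquired time stamps are later
      # re-labelled with the recorded real breathing amplitude. All values are illustrative.
      import numpy as np

      def synthetic_surrogate(times, mean_period, amplitude=1.0):
          """Regular breathing-like signal sent to the CT in place of the real signal."""
          return amplitude * np.cos(2 * np.pi * times / mean_period) ** 2

      def relabel_with_real_trace(acq_times, real_times, real_signal):
          """After the scan, map each acquisition time stamp to the real breathing amplitude."""
          return np.interp(acq_times, real_times, real_signal)

      # toy real trace: irregular breathing (drifting period and baseline)
      real_times = np.arange(0.0, 60.0, 0.02)
      real_signal = (np.cos(2 * np.pi * real_times / (4.0 + 0.5 * np.sin(0.1 * real_times))) ** 2
                     + 0.05 * real_times / 60.0)
      fed_to_ct = synthetic_surrogate(real_times, mean_period=4.0)   # regular signal for the CT
      acq_times = np.linspace(0.0, 60.0, 600)                        # image time stamps
      true_amplitude = relabel_with_real_trace(acq_times, real_times, real_signal)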

  20. TSOM Method for Nanoelectronics Dimensional Metrology

    International Nuclear Information System (INIS)

    Attota, Ravikiran

    2011-01-01

    Through-focus scanning optical microscopy (TSOM) is a relatively new method that transforms conventional optical microscopes into truly three-dimensional metrology tools for nanoscale to microscale dimensional analysis. TSOM achieves this by acquiring and analyzing a set of optical images collected at various focus positions going through focus (from above-focus to under-focus). The measurement resolution is comparable to what is possible with typical light scatterometry, scanning electron microscopy (SEM), and atomic force microscopy (AFM). The TSOM method is able to identify nanometer-scale differences between two nano/microscale targets, including the type and the magnitude of the difference, using a conventional optical microscope with visible-wavelength illumination. Numerous industries could benefit from the TSOM method, such as the semiconductor industry, MEMS, NEMS, biotechnology, nanomanufacturing, data storage, and photonics. The method is relatively simple and inexpensive, has a high throughput, provides nanoscale sensitivity for 3D measurements, and could enable significant savings and yield improvements in nanometrology and nanomanufacturing. Potential applications are demonstrated using experiments and simulations.
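    As a hedged illustration of how a TSOM image is assembled from the through-focus stack (a generic construction; the profile choice and normalization are assumptions rather than the published procedure):

      # Hedged sketch of TSOM image construction: a through-focus stack of optical images
      # is reduced to one intensity profile per focus position, the profiles are stacked
      # into a 2-D (focus x lateral position) "TSOM image", and two targets are compared
      # through the difference of their TSOM images.
      import numpy as np

      def tsom_image(stack, row=None):
          """stack: (n_focus, ny, nx) through-focus images of one target."""
          row = stack.shape[1] // 2 if row is None else row
          profiles = stack[:, row, :]                                  # one cross-section per focus step
          profiles = profiles / profiles.mean(axis=1, keepdims=True)   # simple normalization (assumed)
          return profiles                                              # shape (n_focus, nx)

      def differential_tsom(stack_a, stack_b):
          """Magnitude of the difference highlights nanoscale differences between targets."""
          diff = tsom_image(stack_a) - tsom_image(stack_b)
          return np.abs(diff), float(np.abs(diff).mean())              # map and a scalar difference metric

    The shape of the differential map is what carries information about the type of difference between the two targets, while its overall magnitude tracks the size of the difference.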

  1. Three-dimensional image signals: processing methods

    Science.gov (United States)

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

    Over the years, extensive studies have been carried out to apply coherent optics methods to real-time processing, communications, and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature survey of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured with an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms." These are holograms that can be stored on a computer and transmitted over conventional networks. We present some research methods for processing "digital holograms" for Internet transmission, together with results.
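    The "digital hologram" capture mentioned at the end relies on phase-shift interferometry; the classical four-step algorithm below is the textbook version of that technique, shown for illustration rather than as the authors' exact procedure.

      # Textbook four-step phase-shifting interferometry: a complex-valued "digital hologram"
      # is recovered from four camera frames taken with reference-phase shifts of
      # 0, pi/2, pi and 3*pi/2.
      import numpy as np

      def four_step_hologram(i0, i90, i180, i270):
          """Return the complex object field (up to a constant) from four interferograms."""
          phase = np.arctan2(i270 - i90, i0 - i180)              # wrapped object phase
          amplitude = 0.5 * np.sqrt((i270 - i90) ** 2 + (i0 - i180) ** 2)
          return amplitude * np.exp(1j * phase)                  # compact complex hologram

    The recovered complex field is the quantity that can be stored on a computer, compressed, transmitted over a network, and numerically propagated for reconstruction, which matches the use of digital holograms described above.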

  2. VME data acquisition system. Interactive software for the acquisition, display and storage of one or two dimensional spectra

    International Nuclear Information System (INIS)

    Petremann, E.

    1989-01-01

    The development and construction of a complete data acquisition system for nuclear physics applications are described. The system is based on the VME bus and a 16/32-bit microprocessor. The data acquisition system enables the acquisition of spectra involving one or two parameters and the simultaneous storage of events on magnetic tape. The analysis and description of the data acquisition software, of the display of experimental spectra, and of their storage on magnetic media are given. Pascal and Assembler are used. Interface cards for the VME standard and for the electronic equipment were also developed. [fr]

  3. Bulk density estimation using a 3-dimensional image acquisition and analysis system

    Directory of Open Access Journals (Sweden)

    Heyduk Adam

    2016-01-01

    The paper presents a concept for dynamic bulk density estimation of a particulate matter stream using a 3-D image analysis system and a conveyor belt scale. The image acquisition method should be adapted to the type of scale. The paper presents some laboratory results of static bulk density measurements using the MS Kinect time-of-flight camera and OpenCV/Matlab software. Measurements were made for several different size classes.
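    A hedged sketch of the estimation itself (the calibration constants and names are illustrative, not the paper's): the depth camera yields a height map of the material above the belt, its integral over the pixel footprint gives a volume, and the belt-scale mass over the same stretch divided by that volume is the bulk density.

      # Hedged sketch of the bulk-density idea: height map -> volume, belt-scale mass / volume.
      # The empty-belt depth calibration and pixel size below are illustrative values.
      import numpy as np

      def bulk_density(depth_map_m, empty_belt_depth_m, pixel_area_m2, mass_on_belt_kg):
          height = np.clip(empty_belt_depth_m - depth_map_m, 0.0, None)   # material height per pixel
          volume_m3 = float(np.sum(height) * pixel_area_m2)
          return mass_on_belt_kg / volume_m3                              # kg per cubic metre

      # toy example: a 0.1 m high pile covering 1 m^2 and weighing 150 kg -> 1500 kg/m^3
      depth = np.full((1000, 1000), 1.0 - 0.1)      # camera-to-surface distance (m)
      print(bulk_density(depth, empty_belt_depth_m=1.0, pixel_area_m2=1e-6, mass_on_belt_kg=150.0))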

  4. Epic Dimensions: a Comparative Analysis of 3d Acquisition Methods

    Science.gov (United States)

    Graham, C. A.; Akoglu, K. G.; Lassen, A. W.; Simon, S.

    2017-08-01

    When it comes to capturing the geometry of a cultural heritage artifact, there is certainly no dearth of possible acquisition techniques. As technology has rapidly developed, the availability of intuitive 3D generating tools has increased exponentially and made it possible even for non-specialists to create many models quickly. Though the by-products of these different acquisition methods may be incongruent in terms of quality, these discrepancies are not problematic, as there are many applications of 3D models, each with their own set of requirements. Comparisons of high-resolution 3D models of an iconic Babylonian tablet, captured via the four different close-range technologies discussed in this paper, assess which methods of 3D digitization best suit specific intended purposes related to research, conservation, and education. Taking into consideration repeatability, time and resource implications, qualitative and quantitative potential, and ease of use, this paper presents a study of the strengths and weaknesses of structured light scanning, triangulation laser scanning, photometric stereo, and close-range photogrammetry in the context of interactive investigation, condition monitoring, engagement, and dissemination.

  5. Three-dimensional digital tomosynthesis iterative reconstruction, artifact reduction and alternative acquisition geometry

    CERN Document Server

    Levakhina, Yulia

    2014-01-01

    Yulia Levakhina gives an introduction to the major challenges of image reconstruction in Digital Tomosynthesis (DT), particularly to the connection of the reconstruction problem with the incompleteness of the DT dataset. The author discusses the factors which cause the formation of limited angle artifacts and proposes how to account for them in order to improve image quality and axial resolution of modern DT. The addressed methods include a weighted non-linear back projection scheme for algebraic reconstruction and a novel dual-axis acquisition geometry. All discussed algorithms and methods are supplemented by detailed illustrations, hints for practical implementation, pseudo-code, simulation results and real patient case examples.

  6. 4D seismic data acquisition method during coal mining

    International Nuclear Information System (INIS)

    Du, Wen-Feng; Peng, Su-Ping

    2014-01-01

    In order to observe changes in the overburden media caused by mining, we take the fully mechanized working face of the BLT coal mine in the Shendong mining district as an example to develop a 4D seismic data acquisition methodology for use during coal mining. The 4D seismic data acquisition collects 3D seismic data four times in different periods (before mining, during the mining process, and after mining) to observe the changes of the overburden layer during coal mining. The seismic data in the research area demonstrate that before coal mining the seismic waves are stronger in energy, higher in frequency, and have more continuous reflectors. All this is reversed after coal mining: because the overburden layer has been mined, the seismic energy and frequency decrease and the reflections show more discontinuities. Comparing the records collected over newly mined areas with records acquired in the same survey, with the same geometry, after a long settling time following mining clearly shows that the seismic reflections regain stronger amplitudes and become more continuous, because the media have recovered through compaction of the overburden layer. Through 4D seismic acquisition, the original background state of the coal layers can be derived from the first records, and the structural changes of the layers can then be monitored through the records of the mining action and of the compaction after mining. This method lays the foundation for further research into the variation principles of the overburden layer under modern coal-mining conditions. (paper)

  7. New diffusion imaging method with a single acquisition sequence

    International Nuclear Information System (INIS)

    Melki, Ph.S.; Bittoun, J.; Lefevre, J.E.

    1987-01-01

    The apparent diffusion coefficient (ADC) is related to the molecular diffusion coefficient and to physiologic information: microcirculation in the capillary network, incoherent slow flow, and restricted diffusion. The authors present a new MR imaging sequence that yields computed ADC images in a single 9-minute acquisition on a 1.5-T imager (GE Signa). Compared to the previous method, this sequence is at least two times faster and thus can be used as a routine examination to supplement T1-, T2-, and density-weighted images. The method was assessed by measurement of molecular diffusion in liquids, and the first clinical images obtained in neurologic diseases demonstrate its efficiency for clinical investigation. The possibility of separately imaging diffusion and perfusion is supported by an algorithm
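    For context, ADC maps are conventionally derived from the mono-exponential diffusion signal model (a textbook relation, not the specific single-acquisition scheme of this work):

      \[
        S(b) = S_0\, e^{-b\,\mathrm{ADC}}
        \qquad\Longrightarrow\qquad
        \mathrm{ADC} = \frac{\ln\!\left( S(b_1)/S(b_2) \right)}{b_2 - b_1},
      \]

    where b is the diffusion-weighting factor of the gradient scheme; evaluating this ratio voxel by voxel yields a computed ADC image.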

  8. Application of the Maximum Entropy Method to Risk Analysis of Mergers and Acquisitions

    Science.gov (United States)

    Xie, Jigang; Song, Wenyun

    The maximum entropy (ME) method can be used to analyze the risk of mergers and acquisitions when only pre-acquisition information is available. A practical example of the risk analysis of Chinese listed firms' mergers and acquisitions is provided to demonstrate the feasibility and practicality of the method.
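    For orientation, the maximum entropy principle invoked by the method can be stated in its generic form (standard information theory, not the paper's specific risk model): among all distributions consistent with the known pre-acquisition constraints, choose the one with maximal entropy.

      \[
        \max_{p}\; H(p) = -\sum_{i} p_i \ln p_i
        \quad\text{subject to}\quad
        \sum_i p_i = 1, \qquad \sum_i p_i\, g_k(x_i) = \mu_k \;\; (k = 1,\dots,K),
      \]
      \[
        p_i = \frac{1}{Z(\lambda)} \exp\!\Big( -\sum_{k=1}^{K} \lambda_k\, g_k(x_i) \Big),
      \]

    where the multipliers λ_k are fixed by the constraint values μ_k estimated from the pre-acquisition information; risk measures of the contemplated merger or acquisition can then be computed from the resulting least-biased distribution.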

  9. System and method for acquisition management of subject position information

    Science.gov (United States)

    Carrender, Curt

    2005-12-13

    A system and method for acquisition management of subject position information that utilizes radio frequency identification (RF ID) to store position information in position tags. Tag programmers receive position information from external positioning systems, such as the Global Positioning System (GPS), from manual inputs, such as keypads, or other tag programmers. The tag programmers program each position tag with the received position information. Both the tag programmers and the position tags can be portable or fixed. Implementations include portable tag programmers and fixed position tags for subject position guidance, and portable tag programmers for collection sample labeling. Other implementations include fixed tag programmers and portable position tags for subject route recordation. Position tags can contain other associated information such as destination address of an affixed subject for subject routing.

  10. System and method for acquisition management of subject position information

    Energy Technology Data Exchange (ETDEWEB)

    Carrender, Curt [Morgan Hill, CA

    2007-01-23

    A system and method for acquisition management of subject position information that utilizes radio frequency identification (RF ID) to store position information in position tags. Tag programmers receive position information from external positioning systems, such as the Global Positioning System (GPS), from manual inputs, such as keypads, or other tag programmers. The tag programmers program each position tag with the received position information. Both the tag programmers and the position tags can be portable or fixed. Implementations include portable tag programmers and fixed position tags for subject position guidance, and portable tag programmers for collection sample labeling. Other implementations include fixed tag programmers and portable position tags for subject route recordation. Position tags can contain other associated information such as destination address of an affixed subject for subject routing.

  11. Three-dimensional display techniques: description and critique of methods

    International Nuclear Information System (INIS)

    Budinger, T.F.

    1982-01-01

    The recent advances in noninvasive medical imaging of the three-dimensional spatial distribution of radionuclides, X-ray attenuation coefficients, and nuclear magnetic resonance parameters necessitate the development of a general method for displaying these data. The objective of this paper is to give a systematic description and comparison of known methods for displaying three-dimensional data. The discussion of display methods is divided into two major categories: 1) computer-graphics methods, which use a two-dimensional display screen; and 2) optical methods (such as holography, stereopsis and vari-focal systems)

  12. 309 Enhancing the Acquisition Methods of School Library ...

    African Journals Online (AJOL)

    User

    2010-10-17

    Sometimes, school libraries receive cash donations for the acquisition of library materials. ... To keep financial records or the book budget. (v). To keep records .... management systems to knowledge-based systems provides a.

  13. Utility of three-dimensional method for diagnosing meniscal lesions

    International Nuclear Information System (INIS)

    Ohshima, Suguru; Nomura, Kazutoshi; Hirano, Mako; Hashimoto, Noburo; Fukumoto, Tetsuya; Katahira, Kazuhiro

    1998-01-01

    MRI of the knee is a useful method for diagnosing meniscal tears. Although the spin echo method is usually used for diagnosing meniscal tears, we examined the utility of thin-slice scanning with the three-dimensional method. We reviewed 70 menisci for which arthroscopic findings were confirmed. In this series, sensitivity was 90.9% for medial meniscal injuries and 68.8% for lateral meniscal injuries. There were 3 meniscal tears that we could not detect on preoperative MRI. We could find tears in two of these cases when the same MRI was re-evaluated. In conclusion, the three-dimensional method gives the same diagnostic rate as the spin echo method. The scan time of the three-dimensional method is 3 minutes, whereas that of the spin echo method is 17 minutes. Thin-slice scanning with the three-dimensional method is therefore useful for screening meniscal injuries before arthroscopy. (author)

  14. An Evaluation of the Acquisition Streamlining Methods at the Fleet and Industrial Supply Center Pearl Harbor Hawaii

    National Research Council Canada - National Science Library

    Henry, Mark

    1999-01-01

    ...) Pearl Harbor's implementation of acquisition streamlining initiatives and recommends viable methods of streamlining the acquisition process at FISC Pearl Harbor and other Naval Supply Systems Command...

  15. Method of dimensionality reduction in contact mechanics and friction

    CERN Document Server

    Popov, Valentin L

    2015-01-01

    This book describes for the first time, in complete form, a simulation method for the fast calculation of contact properties and friction between rough surfaces. In contrast to existing simulation methods, the method of dimensionality reduction (MDR) is based on the exact mapping of various types of three-dimensional contact problems onto contacts with one-dimensional foundations. Within the confines of MDR, not only are three-dimensional systems reduced to one-dimensional ones, but the resulting degrees of freedom are also independent of one another. Therefore, MDR results in an enormous reduction of the development time for the numerical implementation of contact problems as well as of the direct computation time, and can ultimately assume a similar role in tribology as FEM has in structural mechanics or CFD methods in hydrodynamics. Furthermore, it substantially simplifies analytical calculation and presents a sort of “pocket book edition” of the entirety of contact mechanics. Measurements of the rheology of bodies in...
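    The two core mapping rules of the MDR, as stated in the MDR literature (quoted here from general knowledge of the method, not transcribed from this book), are that a three-dimensional axisymmetric profile f(r) is replaced by a one-dimensional profile g(x), and the elastic half-space by a bed of independent springs:

      \[
        g(x) = |x| \int_0^{|x|} \frac{f'(r)}{\sqrt{x^2 - r^2}}\, \mathrm{d}r,
        \qquad
        \Delta k_z = E^{*}\,\Delta x, \quad \Delta k_x = G^{*}\,\Delta x,
      \]

    where E* and G* are the effective normal and tangential moduli and Δx is the spring spacing; indenting the one-dimensional profile into this foundation reproduces the relations between normal force, indentation depth, and contact radius of the original three-dimensional problem.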

  16. Self-calibrated multiple-echo acquisition with radial trajectories using the conjugate gradient method (SMART-CG).

    Science.gov (United States)

    Jung, Youngkyoo; Samsonov, Alexey A; Bydder, Mark; Block, Walter F

    2011-04-01

    To remove phase inconsistencies between multiple echoes, an algorithm using a radial acquisition to provide inherent phase and magnitude information for self-correction was developed. The information also supports parallel imaging for multiple-coil acquisitions. Without a separate field map acquisition, a phase estimate was generated from each echo in the multiple-echo train. When a multiple-channel coil is used, the magnitude and phase estimates from each echo provide in vivo coil sensitivities. An algorithm based on the conjugate gradient method uses these estimates to remove phase inconsistencies between echoes and, in the case of multiple-coil acquisition, simultaneously provides parallel imaging benefits. The algorithm is demonstrated on single-channel, multiple-channel, and undersampled data. Substantial image quality improvements were demonstrated. Signal dropouts were completely removed and undersampling artifacts were well suppressed. The suggested algorithm is able to remove phase cancellation and undersampling artifacts simultaneously and to improve the image quality of multi-echo radial imaging, an important technique for fast three-dimensional MRI data acquisition. Copyright © 2011 Wiley-Liss, Inc.

  17. New method for solving three-dimensional Schroedinger equation

    International Nuclear Information System (INIS)

    Melezhik, V.S.

    1990-01-01

    The method derived recently for solving a multidimensional scattering problem is applied to a three-dimensional Schroedinger equation. Compared with direct three-dimensional finite-element and finite-difference calculations, this approach gives sufficiently accurate upper and lower approximations to the helium-atom binding energy, which demonstrates its efficiency. 15 refs.; 1 fig.; 2 tabs

  18. A Quantitative Three-Dimensional Image Analysis Tool for Maximal Acquisition of Spatial Heterogeneity Data.

    Science.gov (United States)

    Allenby, Mark C; Misener, Ruth; Panoskaltsis, Nicki; Mantalaris, Athanasios

    2017-02-01

    Three-dimensional (3D) imaging techniques provide spatial insight into environmental and cellular interactions and are implemented in various fields, including tissue engineering, but have been restricted by limited quantification tools that misrepresent or underutilize the cellular phenomena captured. This study develops image postprocessing algorithms pairing complex Euclidean metrics with Monte Carlo simulations to quantitatively assess cell and microenvironment spatial distributions while utilizing, for the first time, the entire 3D image captured. Although current methods only analyze a central fraction of presented confocal microscopy images, the proposed algorithms can utilize 210% more cells to calculate 3D spatial distributions that can span a 23-fold longer distance. These algorithms seek to leverage the high sample cost of 3D tissue imaging techniques by extracting maximal quantitative data throughout the captured image.
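    As a hedged sketch of the kind of analysis described (pairing a Euclidean spatial metric with a Monte Carlo baseline), the code below compares nearest-neighbour distances of segmented cell centroids against uniformly random placements in the same imaged volume; the uniform null model and all names are illustrative assumptions, not the published algorithm.

      # Hedged sketch: Euclidean nearest-neighbour distances of cell centroids versus a
      # Monte Carlo baseline of uniformly random points in the same imaged volume.
      import numpy as np

      def nearest_neighbour_distances(points):
          d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
          np.fill_diagonal(d, np.inf)
          return d.min(axis=1)

      def monte_carlo_baseline(n_cells, volume_dims, n_trials=200, seed=0):
          rng = np.random.default_rng(seed)
          means = [nearest_neighbour_distances(rng.uniform(0, 1, (n_cells, 3)) * volume_dims).mean()
                   for _ in range(n_trials)]
          return np.mean(means), np.std(means)

      # toy usage: 300 cells in a 500 x 500 x 100 micrometre imaged volume
      rng = np.random.default_rng(1)
      dims = np.array([500.0, 500.0, 100.0])
      cells = rng.uniform(0, 1, (300, 3)) * dims
      observed = nearest_neighbour_distances(cells).mean()
      expected, spread = monte_carlo_baseline(300, dims)
      print(observed, expected, spread)     # observed well below expected suggests clustering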

  19. Direct Linear Transformation Method for Three-Dimensional Cinematography

    Science.gov (United States)

    Shapiro, Robert

    1978-01-01

    The ability of Direct Linear Transformation Method for three-dimensional cinematography to locate points in space was shown to meet the accuracy requirements associated with research on human movement. (JD)
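    For reference, the standard 11-parameter DLT relations linking object-space coordinates (X, Y, Z) to digitized image coordinates (u, v) are (textbook form of the method, not reproduced from the paper):

      \[
        u = \frac{L_1 X + L_2 Y + L_3 Z + L_4}{L_9 X + L_{10} Y + L_{11} Z + 1},
        \qquad
        v = \frac{L_5 X + L_6 Y + L_7 Z + L_8}{L_9 X + L_{10} Y + L_{11} Z + 1}.
      \]

    Each camera's coefficients L_1 through L_11 are obtained from a calibration frame of control points with known coordinates; with two or more calibrated views, the space coordinates of a digitized landmark follow from a least-squares solution of the pooled equations.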

  20. A Study on Watt-hour Meter Data Acquisition Method Based on RFID Technology

    Science.gov (United States)

    Chen, Xiangqun; Huang, Rui; Shen, Liman; Chen, Hao; Xiong, Dezhi; Xiao, Xiangqi; Liu, Mouhai; Xu, Renheng

    2018-03-01

    Traditional watt-hour meter data acquisition is limited by distance and occlusion, so a watt-hour meter data acquisition method based on RFID technology is proposed in this paper. In detail, an RFID electronic tag is embedded in the watt-hour meter to identify the meter and record electric energy information, enabling RFID-based wireless data acquisition for watt-hour meters. In this way, overall lifecycle management of the watt-hour meter is realized.

  1. Iterative Two- and One-Dimensional Methods for Three-Dimensional Neutron Diffusion Calculations

    International Nuclear Information System (INIS)

    Lee, Hyun Chul; Lee, Deokjung; Downar, Thomas J.

    2005-01-01

    Two methods are proposed for solving the three-dimensional neutron diffusion equation by iterating between solutions of the two-dimensional (2-D) radial and one-dimensional (1-D) axial solutions. In the first method, the 2-D/1-D equations are coupled using a current correction factor (CCF) with the average fluxes of the lower and upper planes and the axial net currents at the plane interfaces. In the second method, an analytic expression for the axial net currents at the interface of the planes is used for planar coupling. A comparison of the new methods is made with two previously proposed methods, which use interface net currents and partial currents for planar coupling. A Fourier convergence analysis of the four methods was performed, and results indicate that the two new methods have at least three advantages over the previous methods. First, the new methods are unconditionally stable, whereas the net current method diverges for small axial mesh size. Second, the new methods provide better convergence performance than the other methods in the range of practical mesh sizes. Third, the spectral radii of the new methods asymptotically approach zero as the mesh size increases, while the spectral radius of the partial current method approaches a nonzero value as the mesh size increases. Of the two new methods proposed here, the analytic method provides a smaller spectral radius than the CCF method, but the CCF method has several advantages over the analytic method in practical applications
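
    The coupling described above can be stated generically. The sketch below gives the standard 2-D/1-D splitting of the steady-state diffusion equation, in which the plane-averaged radial equation carries an axial-leakage term built from the interface currents J_z; the CCF and analytic-current schemes of the paper are two different ways of approximating those interface currents. This is a generic textbook-style statement, not the authors' exact derivation.

```latex
% Generic 2-D/1-D splitting of  -\nabla\cdot D\nabla\phi + \Sigma_r\phi = \frac{1}{k}F\phi .
% Averaging over axial plane k (thickness \Delta z_k) gives the radial 2-D equation below;
% the 1-D axial equations per node are obtained analogously with a radial-leakage term.
-\nabla_{xy}\cdot D_k \nabla_{xy}\,\bar\phi_k(x,y)
  + \Sigma_{r,k}\,\bar\phi_k(x,y)
  + \frac{J_z(x,y,z_{k+1/2}) - J_z(x,y,z_{k-1/2})}{\Delta z_k}
  = \frac{1}{k_\mathrm{eff}}\, F_k\,\bar\phi_k(x,y)
```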

  2. Study on highly efficient seismic data acquisition and processing methods based on sparsity constraint

    Science.gov (United States)

    Wang, H.; Chen, S.; Tao, C.; Qiu, L.

    2017-12-01

    High-density, high-fold and wide-azimuth seismic data acquisition methods are widely used to deal with increasingly sophisticated exploration targets, but acquisition periods and costs keep growing. We study highly efficient seismic data acquisition and processing methods based on sparse representation theory (compressed sensing) and obtain several innovative results. The theoretical principles of highly efficient acquisition and processing are studied first: we formulate sparse representation theory based on the wave equation, then study highly efficient seismic sampling methods and present an optimized piecewise-random sampling method based on sparsity prior information, and finally develop a reconstruction strategy with a sparsity constraint, including a two-step recovery approach combining a sparsity-promoting method with the hyperbolic Radon transform. These three aspects constitute the theory of highly efficient seismic data acquisition. The corresponding implementation strategies are then studied. First, we propose a method for designing the highly efficient acquisition network with the help of the optimized piecewise-random sampling method. Second, we propose two types of highly efficient seismic data acquisition methods based on (1) single sources and (2) blended (simultaneous) sources. Third, the reconstruction procedures corresponding to these two acquisition types are proposed to obtain the seismic data on a regular acquisition network. The impact of blended shooting on the imaging result is also discussed. Finally, we perform numerical tests based on the Marmousi model. The results show: (1) the theoretical framework of highly efficient seismic data acquisition and processing
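
    A minimal numerical illustration of the two ingredients named above, piecewise-random (jittered) sampling and sparsity-constrained reconstruction, is sketched below in one dimension, using Fourier-domain soft thresholding in place of the paper's wave-equation-based representation and hyperbolic Radon step. Function names and parameters are invented for the example.

```python
import numpy as np

def jittered_mask(n, keep_every=4, seed=None):
    """Piecewise-random sampling: keep one sample per block of `keep_every`,
    at a random position inside each block (this bounds the largest gap)."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(n, dtype=bool)
    for start in range(0, n, keep_every):
        block = np.arange(start, min(start + keep_every, n))
        mask[rng.choice(block)] = True
    return mask

def ista_fourier(y_zero_filled, mask, lam=0.5, n_iter=200):
    """Recover a signal assumed sparse in the Fourier domain from the samples
    selected by `mask`, via iterative soft thresholding (ISTA)."""
    x = np.zeros_like(y_zero_filled)
    for _ in range(n_iter):
        x = x + mask * (y_zero_filled - x)            # data-consistency gradient step
        X = np.fft.fft(x)
        shrink = np.maximum(1.0 - lam / np.maximum(np.abs(X), 1e-12), 0.0)
        x = np.real(np.fft.ifft(X * shrink))          # complex soft threshold
    return x

# usage: y_zero_filled = mask * full_trace  (missing samples set to zero)
```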

  3. Three-dimensional space charge calculation method

    International Nuclear Information System (INIS)

    Lysenko, W.P.; Wadlinger, E.A.

    1981-01-01

    A method is presented for calculating space-charge forces suitable for use in a particle tracing code. Poisson's equation is solved in three dimensions with boundary conditions specified on an arbitrary surface by using a weighted residual method. Using a discrete particle distribution as our source input, examples are shown of off-axis, bunched beams of noncircular cross section in radio-frequency quadrupole (RFQ) and drift-tube linac geometries

  4. Variational iteration method for one dimensional nonlinear thermoelasticity

    International Nuclear Information System (INIS)

    Sweilam, N.H.; Khader, M.M.

    2007-01-01

    This paper applies the variational iteration method to solve the Cauchy problem arising in one-dimensional nonlinear thermoelasticity. The advantage of this method is that it avoids the calculation of the Adomian polynomials required by the Adomian decomposition method. The numerical results of this method are compared with the exact solution of an artificial model to show the efficiency of the method. The approximate solutions show that the variational iteration method is a powerful mathematical tool for solving nonlinear problems
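
    For orientation, the generic correction functional of the variational iteration method, for an equation written as Lu + Nu = g, is recalled below; the specific operators and Lagrange multiplier for the one-dimensional thermoelastic Cauchy problem are derived in the paper and are not reproduced here.

```latex
% He's variational iteration method: the (n+1)-th approximation is obtained from the n-th via
u_{n+1}(x,t) \;=\; u_n(x,t) \;+\; \int_0^{t}\lambda(\tau)\,
   \bigl[\,L\,u_n(x,\tau) + N\,\tilde u_n(x,\tau) - g(x,\tau)\,\bigr]\,\mathrm{d}\tau ,
% where \tilde u_n denotes a restricted variation (\delta\tilde u_n = 0) and the Lagrange
% multiplier \lambda is identified from the stationarity condition of this functional.
```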

  5. Hypertext Glosses for Foreign Language Reading Comprehension and Vocabulary Acquisition: Effects of Assessment Methods

    Science.gov (United States)

    Chen, I-Jung

    2016-01-01

    This study compared how three different gloss modes affected college students' L2 reading comprehension and vocabulary acquisition. The study also compared how results on comprehension and vocabulary acquisition may differ depending on the four assessment methods used. A between-subjects design was employed with three groups of Mandarin-speaking…

  6. Acquisition and understanding of process knowledge using problem solving methods

    CERN Document Server

    Gómez-Pérez, JM

    2010-01-01

    The development of knowledge-based systems is usually approached through the combined skills of knowledge engineers (KEs) and subject matter experts (SMEs). One of the most critical steps in this activity aims at transferring knowledge from SMEs to formal, machine-readable representations, which allow systems to reason with such knowledge. However, this is a costly and error prone task. Alleviating the knowledge acquisition bottleneck requires enabling SMEs with the means to produce the desired knowledge representations without the help of KEs. This is especially difficult in the case of compl

  7. A method for real-time three-dimensional vector velocity imaging

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Nikolov, Svetoslav

    2003-01-01

    The paper presents an approach for real-time three-dimensional vector flow imaging. Synthetic aperture data acquisition is used, and the data are beamformed along the flow direction to yield signals usable for flow estimation. The signals are cross-correlated to determine the shift in position...... are done using 16 × 16 = 256 elements at a time, and the received signals from the same elements are sampled. Access to the individual elements is done through 16-to-1 multiplexing, so that only a 256-channel transmit and receive system is needed. The method has been investigated using Field II...

  8. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    Science.gov (United States)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.

  9. A multi-dimensional sampling method for locating small scatterers

    International Nuclear Information System (INIS)

    Song, Rencheng; Zhong, Yu; Chen, Xudong

    2012-01-01

    A multiple signal classification (MUSIC)-like multi-dimensional sampling method (MDSM) is introduced to locate small three-dimensional scatterers using electromagnetic waves. The indicator is built with the most stable part of signal subspace of the multi-static response matrix on a set of combinatorial sampling nodes inside the domain of interest. It has two main advantages compared to the conventional MUSIC methods. First, the MDSM is more robust against noise. Second, it can work with a single incidence even for multi-scatterers. Numerical simulations are presented to show the good performance of the proposed method. (paper)
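
    A bare-bones version of a MUSIC-type indicator built from the multi-static response matrix is sketched below; it uses the full noise subspace rather than the most stable part of the signal subspace evaluated on combinatorial sampling nodes that defines the paper's MDSM, so it is only meant to fix ideas. The Green's-function callback and all names are placeholders.

```python
import numpy as np

def music_indicator(K, greens, n_scatterers):
    """Return an indicator function r -> value that peaks at scatterer
    locations.  K is the multi-static response matrix, `greens(r)` the
    array response (Green's function) vector for a trial location r."""
    U, _, _ = np.linalg.svd(K)
    noise_basis = U[:, n_scatterers:]                   # noise subspace of K

    def indicator(r):
        g = greens(r)
        g = g / np.linalg.norm(g)
        proj = noise_basis.conj().T @ g                 # component in the noise subspace
        return 1.0 / (np.linalg.norm(proj) + 1e-12)     # large when g is (nearly) orthogonal to it

    return indicator
```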

  10. Comparison of pulsed three-dimensional CEST acquisition schemes at 7 tesla : steady state versus pseudosteady state

    NARCIS (Netherlands)

    Khlebnikov, Vitaly; Geades, Nicolas; Klomp, DWJ; Hoogduin, Hans; Gowland, Penny; Mougin, Olivier

    PURPOSE: To compare two pulsed, volumetric chemical exchange saturation transfer (CEST) acquisition schemes: steady state (SS) and pseudosteady state (PS) for the same brain coverage, spatial/spectral resolution and scan time. METHODS: Both schemes were optimized for maximum sensitivity to amide

  11. Irregular grid methods for pricing high-dimensional American options

    NARCIS (Netherlands)

    Berridge, S.J.

    2004-01-01

    This thesis proposes and studies numerical methods for pricing high-dimensional American options; important examples being basket options, Bermudan swaptions and real options. Four new methods are presented and analysed, both in terms of their application to various test problems, and in terms of

  12. Three dimensional dose distribution comparison of simple and complex acquisition trajectories in dedicated breast CT

    Energy Technology Data Exchange (ETDEWEB)

    Shah, Jainil P., E-mail: jainil.shah@duke.edu [Department of Biomedical Engineering, Duke University, Durham, North Carolina 27705 and Multi Modality Imaging Lab, Duke University Medical Center, Durham, North Carolina 27710 (United States); Mann, Steve D. [Medical Physics Graduate Program, Duke University Medical Center, Durham, North Carolina 27705 and Multi Modality Imaging Lab, Duke University Medical Center, Durham, North Carolina 27710 (United States); McKinley, Randolph L. [ZumaTek, Inc., Research Triangle Park, North Carolina 27709 (United States); Tornai, Martin P. [Department of Biomedical Engineering, Duke University, Durham, North Carolina 27705 (United States); Medical Physics Graduate Program, Duke University Medical Center, Durham, North Carolina 27705 (United States); Multi Modality Imaging Lab, Duke University Medical Center, Durham, North Carolina 27710 (United States)

    2015-08-15

    Purpose: A novel breast CT system capable of arbitrary 3D trajectories has been developed to address cone beam sampling insufficiency as well as to image further into the patient’s chest wall. The purpose of this study was to characterize any trajectory-related differences in 3D x-ray dose distribution in a pendant target when imaged with different orbits. Methods: Two acquisition trajectories were evaluated: circular azimuthal (no-tilt) and sinusoidal (saddle) orbit with ±15° tilts around a pendant breast, using Monte Carlo simulations as well as physical measurements. Simulations were performed with tungsten (W) filtration of a W-anode source; the simulated source flux was normalized to the measured exposure of a W-anode source. A water-filled cylindrical phantom was divided into 1 cm³ voxels, and the cumulative energy deposited was tracked in each voxel. Energy deposited per voxel was converted to dose, yielding the 3D distributed dose volumes. Additionally, three cylindrical phantoms of different diameters (10, 12.5, and 15 cm) and an anthropomorphic breast phantom, initially filled with water (mimicking pure fibroglandular tissue) and then with a 75% methanol-25% water mixture (mimicking 50–50 fibroglandular-adipose tissues), were used to simulate the pendant breast geometry and scanned on the physical system. Ionization chamber calibrated radiochromic film was used to determine the dose delivered in a 2D plane through the center of the volume for a fully 3D CT scan using the different orbits. Results: Measured experimental results for the same exposure indicated that the mean dose measured throughout the central slice for different diameters ranged from 3.93 to 5.28 mGy, with the lowest average dose measured on the largest cylinder with water mimicking a homogeneously fibroglandular breast. These results align well with the cylinder phantom Monte Carlo studies which also showed a marginal difference in dose delivered by a saddle trajectory in the

  13. Three dimensional dose distribution comparison of simple and complex acquisition trajectories in dedicated breast CT

    International Nuclear Information System (INIS)

    Shah, Jainil P.; Mann, Steve D.; McKinley, Randolph L.; Tornai, Martin P.

    2015-01-01

    Purpose: A novel breast CT system capable of arbitrary 3D trajectories has been developed to address cone beam sampling insufficiency as well as to image further into the patient’s chest wall. The purpose of this study was to characterize any trajectory-related differences in 3D x-ray dose distribution in a pendant target when imaged with different orbits. Methods: Two acquisition trajectories were evaluated: circular azimuthal (no-tilt) and sinusoidal (saddle) orbit with ±15° tilts around a pendant breast, using Monte Carlo simulations as well as physical measurements. Simulations were performed with tungsten (W) filtration of a W-anode source; the simulated source flux was normalized to the measured exposure of a W-anode source. A water-filled cylindrical phantom was divided into 1 cm³ voxels, and the cumulative energy deposited was tracked in each voxel. Energy deposited per voxel was converted to dose, yielding the 3D distributed dose volumes. Additionally, three cylindrical phantoms of different diameters (10, 12.5, and 15 cm) and an anthropomorphic breast phantom, initially filled with water (mimicking pure fibroglandular tissue) and then with a 75% methanol-25% water mixture (mimicking 50–50 fibroglandular-adipose tissues), were used to simulate the pendant breast geometry and scanned on the physical system. Ionization chamber calibrated radiochromic film was used to determine the dose delivered in a 2D plane through the center of the volume for a fully 3D CT scan using the different orbits. Results: Measured experimental results for the same exposure indicated that the mean dose measured throughout the central slice for different diameters ranged from 3.93 to 5.28 mGy, with the lowest average dose measured on the largest cylinder with water mimicking a homogeneously fibroglandular breast. These results align well with the cylinder phantom Monte Carlo studies which also showed a marginal difference in dose delivered by a saddle trajectory in the

  14. A DETERMINISTIC METHOD FOR TRANSIENT, THREE-DIMENSIONAL NEUTRON TRANSPORT

    International Nuclear Information System (INIS)

    S. GOLUOGLU, C. BENTLEY, R. DEMEGLIO, M. DUNN, K. NORTON, R. PEVEY, I. SUSLOV AND H.L. DODDS

    1998-01-01

    A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable of the neutron flux is known as the improved quasi-static (IQS) method. The position-, energy-, and angle-dependent neutron flux is computed deterministically by using the three-dimensional discrete ordinates code TORT. This paper briefly describes the methodology and selected results. The code developed at the University of Tennessee based on this methodology is called TDTORT. TDTORT can be used to model transients involving voided and/or strongly absorbing regions that require transport theory for accuracy. This code can also be used to model either small high-leakage systems, such as space reactors, or asymmetric control rod movements. TDTORT can model step, ramp, step followed by another step, and step followed by ramp type perturbations. Columnwise rod movement can also be modeled. A special case of columnwise rod movement in a three-dimensional model of a boiling water reactor (BWR) with simple adiabatic feedback is also included. TDTORT is verified through several transient one-dimensional, two-dimensional, and three-dimensional benchmark problems. The results show that the transport methodology and corresponding code developed in this work have sufficient accuracy and speed for computing the dynamic behavior of complex multidimensional neutronic systems
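
    The improved quasi-static scheme mentioned above rests on factorizing the angular flux into a rapidly varying amplitude and a slowly varying shape; the amplitude then obeys point-kinetics-like equations with delayed-neutron precursors, while the shape is recomputed less frequently by the transport solver. The generic factorization below is a standard statement of the IQS idea, not TDTORT's exact discretization.

```latex
\phi(\mathbf{r},E,\boldsymbol{\Omega},t) \;=\; A(t)\,\psi(\mathbf{r},E,\boldsymbol{\Omega},t),
\qquad
\frac{\mathrm{d}A}{\mathrm{d}t} \;=\; \frac{\rho(t)-\beta_\mathrm{eff}}{\Lambda}\,A(t) \;+\; \sum_i \lambda_i C_i(t),
\qquad
\frac{\mathrm{d}C_i}{\mathrm{d}t} \;=\; \frac{\beta_i}{\Lambda}\,A(t) \;-\; \lambda_i C_i(t).
```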

  15. A method of image improvement in three-dimensional imaging

    International Nuclear Information System (INIS)

    Suto, Yasuzo; Huang, Tewen; Furuhata, Kentaro; Uchino, Masafumi.

    1988-01-01

    In general, image interpolation is required when the surface configurations of structures such as bones and organs are three-dimensionally constructed from multi-slice CT images. Image interpolation is a processing method whereby an artificial image is inserted between two adjacent slices to make the spatial resolution apparently equal to the slice resolution. Such interpolation makes it possible to increase the quality of the constructed three-dimensional image. In our newly developed algorithm, the current and adjacent slice images are converted to distance images, and the interpolated image is generated from these two distance images. As a result, three-dimensional images of better quality than with the previous method have been constructed. (author)
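
    The distance-image idea described above is essentially shape-based interpolation; a minimal sketch (assuming binary masks of the structure on two adjacent slices) converts each slice to a signed distance map, blends the maps linearly, and thresholds at zero. Details of the published algorithm may differ.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance in pixels: positive inside the structure, negative outside."""
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

def interpolate_slice(mask_a, mask_b, frac=0.5):
    """Insert an artificial slice between two adjacent binary slices by linearly
    blending their signed distance images and thresholding at zero."""
    d = (1.0 - frac) * signed_distance(mask_a) + frac * signed_distance(mask_b)
    return d >= 0.0
```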

  16. Computational methods for three-dimensional microscopy reconstruction

    CERN Document Server

    Frank, Joachim

    2014-01-01

    Approaches to the recovery of three-dimensional information on a biological object, which are often formulated or implemented initially in an intuitive way, are concisely described here based on physical models of the object and the image-formation process. Both three-dimensional electron microscopy and X-ray tomography can be captured in the same mathematical framework, leading to closely-related computational approaches, but the methodologies differ in detail and hence pose different challenges. The editors of this volume, Gabor T. Herman and Joachim Frank, are experts in the respective methodologies and present research at the forefront of biological imaging and structural biology.   Computational Methods for Three-Dimensional Microscopy Reconstruction will serve as a useful resource for scholars interested in the development of computational methods for structural biology and cell biology, particularly in the area of 3D imaging and modeling.

  17. Fast multiview three-dimensional reconstruction method using cost volume filtering

    Science.gov (United States)

    Lee, Seung Joo; Park, Min Ki; Jang, In Yeop; Lee, Kwan H.

    2014-03-01

    As the number of customers who want to record three-dimensional (3-D) information using a mobile electronic device increases, it becomes more and more important to develop a method which quickly reconstructs a 3-D model from multiview images. A fast multiview-based 3-D reconstruction method is presented, which is suitable for the mobile environment by constructing a cost volume of the 3-D height field. This method consists of two steps: the construction of a reliable base surface and the recovery of shape details. In each step, the cost volume is constructed using photoconsistency and then it is filtered according to the multiscale. The multiscale-based cost volume filtering allows the 3-D reconstruction to maintain the overall shape and to preserve the shape details. We demonstrate the strength of the proposed method in terms of computation time, accuracy, and unconstrained acquisition environment.

  18. A method of verifying period signals based on a data acquisition card

    International Nuclear Information System (INIS)

    Zeng Shaoli

    2005-01-01

    This paper introduces a method for verifying the index voltage of a Period Signal Generator using a data acquisition card, with an error of less than 0.5%. A corresponding Win32 program, which uses a self-developed VxD driver for direct I/O control of the data acquisition card and multithreading to obtain the best time-scale precision, has been developed on the Windows platform. The program collects index voltage data in real time and automatically measures the period. (authors)

  19. The stress analysis method for three-dimensional composite materials

    Science.gov (United States)

    Nagai, Kanehiro; Yokoyama, Atsushi; Maekawa, Zen'ichiro; Hamada, Hiroyuki

    1994-05-01

    This study proposes a stress analysis method for three-dimensionally fiber-reinforced composite materials. In this method, the rule of mixtures for composites is successfully applied to 3-D space in which the material properties change three-dimensionally. The fundamental formulas for Young's modulus, shear modulus, and Poisson's ratio are derived. We also discuss a strength estimation and an optimum material design technique for 3-D composite materials. The analysis is executed for a triaxial orthogonally woven fabric, and the results are compared with experimental data in order to verify the accuracy of the method. The present methodology can be easily understood with basic material mechanics and elementary mathematics, so a computer program implementing the theory can be written without difficulty. Furthermore, the method can be applied to various types of 3-D composites because of its general-purpose characteristics.
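
    For orientation, the classical unidirectional rule-of-mixtures relations that such 3-D formulations generalize are recalled below (f = fiber, m = matrix, V = volume fraction); the paper's spatially varying 3-D versions follow the same idea but are not reproduced here.

```latex
E_1 = V_f E_f + V_m E_m ,\qquad
\frac{1}{E_2} = \frac{V_f}{E_f} + \frac{V_m}{E_m} ,\qquad
\frac{1}{G_{12}} = \frac{V_f}{G_f} + \frac{V_m}{G_m} ,\qquad
\nu_{12} = V_f \nu_f + V_m \nu_m .
```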

  20. Acquisition of Psychomotor Skills in Dentistry: An Experimental Teaching Method.

    Science.gov (United States)

    Vann, William F., Jr.; And Others

    1981-01-01

    A traditional method of teaching psychomotor skills in a preclinical restorative dentistry laboratory course was compared with an experimental method. The experimental group was taught using a guided systematic approach that relied on detailed checklists and exhaustive faculty feedback. (Author/MLW)

  1. The Validity of Dimensional Regularization Method on Fractal Spacetime

    Directory of Open Access Journals (Sweden)

    Yong Tao

    2013-01-01

    Svozil developed a regularization method for quantum field theory on fractal spacetime (1987). Such a method can be applied to the low-order perturbative renormalization of quantum electrodynamics but will depend on a conjectural integral formula on non-integer-dimensional topological spaces. The main purpose of this paper is to construct a fractal measure so as to guarantee the validity of the conjectural integral formula.

  2. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    Directory of Open Access Journals (Sweden)

    Yaser Afshar

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rates. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10¹⁰ pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  3. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    Science.gov (United States)

    Afshar, Yaser; Sbalzarini, Ivo F

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rates. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10¹⁰ pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  4. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    Science.gov (United States)

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rates. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10¹⁰ pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144
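
    The decomposition-plus-communication pattern described in these records can be sketched with mpi4py: each rank owns one z-slab of the volume, exchanges one-voxel halo slices with its neighbours, and then runs a local segmentation step. The threshold-and-label step below is only a stand-in for the distributed Discrete Region Competition algorithm, and all sizes and names are illustrative.

```python
# Run with, e.g.:  mpiexec -n 4 python slab_segmentation_sketch.py
from mpi4py import MPI
import numpy as np
from scipy import ndimage

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nz, ny, nx = 64, 256, 256                               # local slab (without halos)
slab = np.random.default_rng(rank).random((nz, ny, nx)).astype(np.float32)

halo_lo = np.zeros((1, ny, nx), np.float32)             # zero padding at volume ends
halo_hi = np.zeros((1, ny, nx), np.float32)
if rank - 1 >= 0:                                       # exchange with lower neighbour
    halo_lo = comm.sendrecv(slab[:1], dest=rank - 1, source=rank - 1)
if rank + 1 < size:                                     # exchange with upper neighbour
    halo_hi = comm.sendrecv(slab[-1:], dest=rank + 1, source=rank + 1)

padded = np.concatenate([halo_lo, slab, halo_hi], axis=0)

# Local segmentation stand-in; stitching labels across ranks is the hard part
labels, n_regions = ndimage.label(padded > 0.99)
print(f"rank {rank}: {n_regions} local regions")
```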

  5. High-speed fan-beam reconstruction using direct two-dimensional Fourier transform method

    International Nuclear Information System (INIS)

    Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.

    1984-01-01

    Since the first development of X-ray computed tomography (CT), various efforts have been made to obtain high-quality, high-speed images. However, high-resolution CT and ultra-high-speed CT applicable to the heart are still desired. The X-ray beam scanning method has already been changed from the parallel-beam to the fan-beam system in order to greatly shorten the scanning time, and the direct filtered back projection (DFBP) method has been employed as a reconstruction method that directly processes fan-beam projection data. Although the two-dimensional Fourier transform (TFT) method, which is significantly faster than the FBP method, has been proposed, it has not been sufficiently examined for fan-beam projection data. Thus, the ITFT method was investigated, which first executes a rebinning algorithm to convert the fan-beam projection data to parallel-beam projection data and thereafter applies the two-dimensional Fourier transform. Although high speed is expected from this method, the reconstructed images might be degraded by the rebinning algorithm. Therefore, the effect of the rebinning interpolation error on the reconstructed images has been analyzed theoretically, and it is shown, by numerical and visual evaluation based on simulated and actual data, that spline interpolation allows the acquisition of high-quality images with smaller errors. Computation time was reduced to 1/15 for a 512 image matrix and to 1/30 for a doubled matrix. (Wakatsuki, Y.)
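
    The rebinning step named above can be sketched as follows: a fan-beam ray at source angle β and fan angle γ coincides with a parallel-beam ray at view angle θ = β + γ and detector coordinate s = D·sin γ (D the source-to-isocentre distance), so the parallel sinogram is obtained by spline interpolation of the fan-beam sinogram at the corresponding (β, γ). This is an illustrative sketch of the rebinning only; the subsequent 2-D Fourier reconstruction is not shown, and all names are assumptions.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def rebin_fan_to_parallel(fan_sino, betas, gammas, thetas, s_vals, D):
    """Rebin p_fan(beta, gamma) onto a parallel-beam grid p_par(theta, s)
    using theta = beta + gamma and s = D*sin(gamma), with spline interpolation.
    `betas` and `gammas` must be ascending; a full 2*pi source orbit is assumed."""
    spline = RectBivariateSpline(betas, gammas, fan_sino)
    theta_grid, s_grid = np.meshgrid(thetas, s_vals, indexing="ij")
    gamma_q = np.arcsin(np.clip(s_grid / D, -1.0, 1.0))
    beta_q = np.mod(theta_grid - gamma_q, 2.0 * np.pi)        # wrap into the acquired range
    return spline.ev(beta_q, gamma_q)                          # p_par sampled on (theta, s)
```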

  6. Toward an Improved Method of HSI Evaluation in Defense Acquisition

    National Research Council Canada - National Science Library

    Simpson, Matthew

    2006-01-01

    Each of the domains of HSI is of itself a discipline with vast amounts of research, analytic techniques, educational programs, and methods for evaluating the effectiveness of the system with respect...

  7. A simple three dimensional wide-angle beam propagation method

    Science.gov (United States)

    Ma, Changbao; van Keuren, Edward

    2006-05-01

    The development of three dimensional (3-D) waveguide structures for chip scale planar lightwave circuits (PLCs) is hampered by the lack of effective 3-D wide-angle (WA) beam propagation methods (BPMs). We present a simple 3-D wide-angle beam propagation method (WA-BPM) using Hoekstra’s scheme along with a new 3-D wave equation splitting method. The applicability, accuracy and effectiveness of our method are demonstrated by applying it to simulations of wide-angle beam propagation and comparing them with analytical solutions.

  8. The comparative method of language acquisition research: a Mayan case study.

    Science.gov (United States)

    Pye, Clifton; Pfeiler, Barbara

    2014-03-01

    This article demonstrates how the Comparative Method can be applied to cross-linguistic research on language acquisition. The Comparative Method provides a systematic procedure for organizing and interpreting acquisition data from different languages. The Comparative Method controls for cross-linguistic differences at all levels of the grammar and is especially useful in drawing attention to variation in contexts of use across languages. This article uses the Comparative Method to analyze the acquisition of verb suffixes in two Mayan languages: K'iche' and Yucatec. Mayan status suffixes simultaneously mark distinctions in verb transitivity, verb class, mood, and clause position. Two-year-old children acquiring K'iche' and Yucatec Maya accurately produce the status suffixes on verbs, in marked distinction to the verbal prefixes for aspect and agreement. We find evidence that the contexts of use for the suffixes differentially promote the children's production of cognate status suffixes in K'iche' and Yucatec.

  9. Nonlinear dimensionality reduction methods for synthetic biology biobricks' visualization.

    Science.gov (United States)

    Yang, Jiaoyun; Wang, Haipeng; Ding, Huitong; An, Ning; Alterovitz, Gil

    2017-01-19

    Visualizing data by dimensionality reduction is an important strategy in bioinformatics, which can help to discover hidden data properties and detect data quality issues, e.g. data noise, inappropriately labeled data, etc. As crowdsourcing-based synthetic biology databases face similar data quality issues, we propose to visualize biobricks to tackle them. However, existing dimensionality reduction methods cannot be directly applied to biobrick datasets. Hence, we use normalized edit distance to enhance dimensionality reduction methods, including Isomap and Laplacian Eigenmaps. By extracting biobricks from the synthetic biology database Registry of Standard Biological Parts, six combinations of various types of biobricks are tested. The visualization graphs illustrate discriminated biobricks and inappropriately labeled biobricks. The clustering algorithm K-means is adopted to quantify the reduction results. The average clustering accuracies for Isomap and Laplacian Eigenmaps are 0.857 and 0.844, respectively. In addition, Laplacian Eigenmaps is 5 times faster than Isomap, and its visualization graph is more concentrated, making biobricks easier to discriminate. By combining normalized edit distance with Isomap and Laplacian Eigenmaps, synthetic biology biobricks are successfully visualized in two-dimensional space. Various types of biobricks can be discriminated and inappropriately labeled biobricks can be identified, which helps to assess the quality of crowdsourcing-based synthetic biology databases and to guide biobrick selection.
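
    The distance-based embedding described above can be approximated in a few lines: compute pairwise normalized edit distances between sequences, turn them into heat-kernel affinities, and solve the Laplacian Eigenmaps generalized eigenproblem. This is a self-contained sketch (the paper additionally uses Isomap and k-NN graph construction); function names and the kernel width are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def normalized_edit_distance(a, b):
    """Levenshtein distance divided by the length of the longer sequence."""
    m, n = len(a), len(b)
    row = np.arange(n + 1, dtype=float)
    for i in range(1, m + 1):
        prev, row[0] = row[0], float(i)
        for j in range(1, n + 1):
            cur = min(row[j] + 1.0,                       # deletion
                      row[j - 1] + 1.0,                   # insertion
                      prev + (a[i - 1] != b[j - 1]))      # substitution
            prev, row[j] = row[j], cur
    return row[n] / max(m, n, 1)

def laplacian_eigenmaps(seqs, n_components=2, sigma=0.3):
    """Embed sequences (e.g. biobrick strings) from pairwise normalized
    edit distances via Laplacian Eigenmaps."""
    n = len(seqs)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = normalized_edit_distance(seqs[i], seqs[j])
    W = np.exp(-(D ** 2) / (2.0 * sigma ** 2))            # heat-kernel affinities
    np.fill_diagonal(W, 0.0)
    Dg = np.diag(W.sum(axis=1))                           # degree matrix
    L = Dg - W                                            # graph Laplacian
    vals, vecs = eigh(L, Dg)                              # generalized eigenproblem
    return vecs[:, 1:1 + n_components]                    # skip the trivial eigenvector
```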

  10. Community of Inquiry Method and Language Skills Acquisition: Empirical Evidence

    Science.gov (United States)

    Preece, Abdul Shakhour Duncan

    2015-01-01

    The study investigates the effectiveness of community of inquiry method in preparing students to develop listening and speaking skills in a sample of junior secondary school students in Borno state, Nigeria. A sample of 100 students in standard classes was drawn in one secondary school in Maiduguri metropolis through stratified random sampling…

  11. Methods for acquisition, storage, and evaluation of leguminous tree germplasm

    Energy Technology Data Exchange (ETDEWEB)

    Felker, P.

    1980-01-01

    Simple methods for establishing, maintaining, and planting of a small scale tree legume (Prosopis) germplasm collection by one or two people are described. Suggestions are included for: developing an understanding of the worldwide distribution of genus; becoming acquainted with basic and applied scientists working on the taxa; devising seed cleaning, fumigation, cataloging, and storage techniques; requesting seed from international seed collections; collecting seed from native populations; and for field designs for planting the germplasm collection.

  12. Indoor integrated navigation and synchronous data acquisition method for Android smartphone

    Science.gov (United States)

    Hu, Chunsheng; Wei, Wenjian; Qin, Shiqiao; Wang, Xingshu; Habib, Ayman; Wang, Ruisheng

    2015-08-01

    Smartphones are widely used at present, and most have cameras and various sensors, such as a gyroscope, accelerometer and magnetometer. Indoor navigation based on the smartphone is therefore important and valuable. According to the features of the smartphone and of indoor navigation, a new indoor integrated navigation method is proposed, which uses the smartphone's MEMS (Micro-Electro-Mechanical Systems) IMU (Inertial Measurement Unit), camera and magnetometer. The proposed navigation method mainly involves data acquisition, camera calibration, image measurement, IMU calibration, initial alignment, strapdown integration, zero-velocity update and integrated navigation. Synchronous data acquisition from the sensors (gyroscope, accelerometer and magnetometer) and the camera is the basis of indoor navigation on the smartphone. A camera data acquisition method is introduced, which uses the Android camera class to record images and their timestamps. Two kinds of sensor data acquisition methods are introduced and compared. The first method records sensor data and time with the Android SensorManager. The second method implements the open, close, data receiving and saving functions in C and calls the sensor functions from Java through the JNI interface. Data acquisition software is developed with the JDK (Java Development Kit), Android ADT (Android Development Tools) and NDK (Native Development Kit); the software can record camera data, sensor data and time simultaneously. Data acquisition experiments were performed with the developed software on a Samsung Note 2 smartphone. The experimental results show that the first method is convenient but sometimes loses sensor data, whereas the second method has much better real-time performance and far less data loss. A checkerboard image is recorded, and the corner points of the checkerboard are detected with the Harris method. The sensor data of gyroscope, accelerometer and magnetometer have

  13. Wave field restoration using three-dimensional Fourier filtering method.

    Science.gov (United States)

    Kawasaki, T; Takai, Y; Ikuta, T; Shimizu, R

    2001-11-01

    A wave field restoration method in transmission electron microscopy (TEM) was mathematically derived based on a three-dimensional (3D) image formation theory. Wave field restoration using this method together with spherical aberration correction was experimentally confirmed in through-focus images of amorphous tungsten thin film, and the resolution of the reconstructed phase image was successfully improved from the Scherzer resolution limit to the information limit. In an application of this method to a crystalline sample, the surface structure of Au(110) was observed in a profile-imaging mode. The processed phase image showed quantitatively the atomic relaxation of the topmost layer.

  14. New method for solving three-dimensional Schroedinger equation

    International Nuclear Information System (INIS)

    Melezhik, V.S.

    1992-01-01

    A new method is developed for solving the multidimensional Schroedinger equation without the variable separation. To solve the Schroedinger equation in a multidimensional coordinate space X, a difference grid Ω i (i=1,2,...,N) for some of variables, Ω, from X={R,Ω} is introduced and the initial partial-differential equation is reduced to a system of N differential-difference equations in terms of one of the variables R. The arising multi-channel scattering (or eigenvalue) problem is solved by the algorithm based on a continuous analog of the Newton method. The approach has been successfully tested for several two-dimensional problems (scattering on a nonspherical potential well and 'dipole' scatterer, a hydrogen atom in a homogenous magnetic field) and for a three-dimensional problem of the helium-atom bound states. (author)

  15. Performance analysis of three-dimensional ridge acquisition from live finger and palm surface scans

    Science.gov (United States)

    Fatehpuria, Abhishika; Lau, Daniel L.; Yalla, Veeraganesh; Hassebrook, Laurence G.

    2007-04-01

    Fingerprints are one of the most commonly used and relied-upon biometric technologies. However, the captured fingerprint image is often far from ideal due to imperfect acquisition techniques that can be slow and cumbersome to use and fail to provide complete fingerprint information. Most of the difficulties arise from the contact of the fingerprint surface with the sensor platen. To overcome these difficulties, we have been developing a noncontact scanning system that acquires a 3-D scan of a finger with sufficiently high resolution, which is then converted into a 2-D rolled-equivalent image. In this paper, we describe quantitative measures for evaluating scanner performance. Specifically, we use image software components developed by the National Institute of Standards and Technology to derive our performance metrics. Out of the eleven identified metrics, three were found to be most suitable for evaluating scanner performance. A comparison is also made between 2D fingerprint images obtained by traditional means and the 2D images obtained after unrolling the 3D scans, and the quality of the acquired scans is quantified using the metrics.

  16. On two flexible methods of 2-dimensional regression analysis

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2012-01-01

    Vol. 18, No. 4 (2012), pp. 154-164 ISSN 1803-9782 Grant - others: GA ČR (CZ) GAP209/10/2045 Institutional support: RVO:67985556 Keywords: regression analysis * Gordon surface * prediction error * projection pursuit Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/SI/volf-on two flexible methods of 2-dimensional regression analysis.pdf

  17. Continuum methods of physical modeling continuum mechanics, dimensional analysis, turbulence

    CERN Document Server

    Hutter, Kolumban

    2004-01-01

    The book unifies classical continuum mechanics and turbulence modeling, i.e. the same fundamental concepts are used to derive model equations for material behaviour and turbulence closure and complements these with methods of dimensional analysis. The intention is to equip the reader with the ability to understand the complex nonlinear modeling in material behaviour and turbulence closure as well as to derive or invent his own models. Examples are mostly taken from environmental physics and geophysics.

  18. High dimensional model representation method for fuzzy structural dynamics

    Science.gov (United States)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher order variable correlations are weak, thereby permitting the input-output relationship behavior to be captured by the low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
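
    The "scales polynomially rather than exponentially" property comes from keeping only low-order terms; a first-order cut-HDMR surrogate, for instance, needs one sweep per input variable around a reference (cut) point. The sketch below builds such a surrogate for a generic function and is only illustrative: the paper couples HDMR with fuzzy α-cuts and finite element solves, which are not reproduced here.

```python
import numpy as np

def cut_hdmr_first_order(f, x_ref, grids):
    """First-order cut-HDMR surrogate:  f(x) ~ f0 + sum_i f_i(x_i), where
    f_i(x_i) = f(x_ref with component i set to x_i) - f0.
    `grids[i]` are ascending sample values for variable i; the number of
    evaluations of f grows linearly with the number of variables."""
    x_ref = np.asarray(x_ref, dtype=float)
    f0 = f(x_ref)
    tables = []
    for i, g in enumerate(grids):
        vals = []
        for xi in g:
            x = x_ref.copy()
            x[i] = xi
            vals.append(f(x) - f0)
        tables.append((np.asarray(g, float), np.asarray(vals, float)))

    def surrogate(x):
        x = np.asarray(x, dtype=float)
        return f0 + sum(np.interp(x[i], g, vals) for i, (g, vals) in enumerate(tables))

    return surrogate
```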

  19. Analysis of Filter-Bank-Based Methods for Fast Serial Acquisition of BOC-Modulated Signals

    Directory of Open Access Journals (Sweden)

    Elena Simona Lohan

    2007-09-01

    Binary-offset-carrier (BOC) signals, selected for Galileo and modernized GPS systems, pose significant challenges for code acquisition due to the ambiguities (deep fades) which are present in the envelope of the correlation function (CF). This is different from BPSK-modulated CDMA signals, where the main correlation lobe spans a 2-chip interval, without any ambiguities or deep fades. To deal with the ambiguities due to BOC modulation, one solution is to use lower steps when scanning the code phases (i.e., lower than the traditional step of 0.5 chips used for BPSK-modulated CDMA signals). Lowering the time-bin steps entails an increase in the number of timing hypotheses and, thus, in the acquisition times. An alternative solution is to transform the ambiguous CF into an “unambiguous” CF via adequate filtering of the signal. A generalized class of frequency-based unambiguous acquisition methods is proposed here, namely the filter-bank-based (FBB) approaches. The detailed theoretical analysis of FBB methods is given for serial-search single-dwell acquisition in single-path static channels, and a comparison is made with other ambiguous and unambiguous BOC acquisition methods existing in the literature.
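
    The ambiguity being discussed is easy to reproduce numerically: the autocorrelation envelope of a BOC(1,1)-modulated code shows a narrow main peak flanked by side peaks and deep fades, whereas a BPSK code gives a single 2-chip-wide lobe. The sketch below is a minimal numpy illustration; the code length, oversampling factor, and function names are arbitrary choices, and it does not implement the filter-bank-based methods of the paper.

```python
import numpy as np

def correlation_envelope(waveform, samples_per_chip, max_lag_chips=1.5):
    """Normalized circular autocorrelation envelope over +/- max_lag_chips."""
    n_lag = int(max_lag_chips * samples_per_chip)
    lags = np.arange(-n_lag, n_lag + 1)
    c = np.array([np.dot(waveform, np.roll(waveform, k)) for k in lags], dtype=float)
    return lags / samples_per_chip, np.abs(c) / np.abs(c).max()

rng = np.random.default_rng(0)
chips = rng.choice([-1.0, 1.0], size=1023)                 # pseudo-random spreading code
spc = 20                                                   # samples per chip
bpsk = np.repeat(chips, spc)                               # rectangular chip pulses
subcarrier = np.where(np.arange(spc) < spc // 2, 1.0, -1.0)
boc = np.repeat(chips, spc) * np.tile(subcarrier, chips.size)   # BOC(1,1) waveform

lags, env_bpsk = correlation_envelope(bpsk, spc)           # single wide main lobe
_, env_boc = correlation_envelope(boc, spc)                # narrow peak + side peaks / fades
```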

  20. Three-dimensional protein structure prediction: Methods and computational strategies.

    Science.gov (United States)

    Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C

    2014-10-12

    A long standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only a sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as a solution to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided in four main classes: (a) first principle methods without database information; (b) first principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal with this work is to review the methods and computational strategies that are currently used in 3-D protein prediction. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Two-Dimensional Impact Reconstruction Method for Rail Defect Inspection

    Directory of Open Access Journals (Sweden)

    Jie Zhao

    2014-01-01

    The safety of train operation is seriously threatened by rail defects, so it is of great significance to inspect rail defects dynamically while the train is operating. This paper presents a two-dimensional impact reconstruction method to realize on-line inspection of rail defects. The proposed method utilizes preprocessing to convert the time-domain vertical vibration signals acquired by a wireless sensor network into spatial signals. A modern time-frequency analysis method is improved to reconstruct the obtained multisensor information. Then, an image fusion processing technique based on spectrum thresholding and node color labeling is proposed to reduce the noise and blank out the periodic impact signals caused by rail joints and locomotive running gear. This method can convert the aperiodic impact signals caused by rail defects into partially periodic impact signals and locate the rail defects. An application shows that the two-dimensional impact reconstruction method clearly displays the impacts caused by rail defects and is an effective on-line rail defect inspection method.

  2. Development of two dimensional electrophoresis method using single chain DNA

    International Nuclear Information System (INIS)

    Ikeda, Junichi; Hidaka, So

    1998-01-01

    By combining a separation method based on molecular weight with a method that distinguishes single-base differences, the aim was to develop a two-dimensional electrophoresis method for single-stranded DNA labeled with a radioisotope (RI). Using differences in the electrophoretic patterns of parent and variant strands, isolation of the root module implantation control gene was investigated. First, a single-strand conformation polymorphism (SSCP) method using a concentration-gradient gel was investigated. As a result, it was found that the separation between double-stranded and single-stranded DNA increased, but the separation between the two single strands did not. Next, a combination of unmodified acrylamide electrophoresis and denaturing gradient gel electrophoresis (DGGE) was examined. The hybrid DNA developed by two-dimensional electrophoresis was arranged along two lines, but among them no band of DNA denatured by a high concentration of urea could be found. Therefore, no satisfactory result was obtained in this fiscal year's experiments, and it was concluded that the differences could not be detected with the method used. (G.K.)

  3. One New Method to Generate 3-Dimensional Virtual Mannequin

    Science.gov (United States)

    Xiu-jin, Shi; Zhi-jun, Wang; Jia-jin, Le

    The personal virtual mannequin is very important in the electronic made-to-measure (eMTM) system. We present a new, simple method to generate a personal virtual mannequin. First, the characteristic information of the customer's body is obtained from two photos. Second, human body part templates corresponding to the customer are selected from a template library. Third, these templates are modified and assembled according to certain rules to generate a personalized 3-dimensional human body, and the virtual mannequin is thereby realized. Experimental results show that the method is easy and feasible.

  4. The transmission probability method in one-dimensional cylindrical geometry

    International Nuclear Information System (INIS)

    Rubin, I.E.

    1983-01-01

    The collision probability method, widely used for solving neutron transport problems in a reactor cell, is reliable for simple cells with a small number of zones. Increasing the number of zones, and also taking into account the anisotropy of scattering, greatly increases the scope of the calculations. In order to reduce the computation time, the transmission probability method is suggested for flux calculation in one-dimensional cylindrical geometry taking into account the scattering anisotropy. The efficiency of the suggested method is verified using one-group calculations for cylindrical cells. The transmission probability method allows the angular and spatial dependences of the neutron distributions to be represented completely without increasing the scope of the calculations. The method is especially effective in solving multigroup problems

  5. Enhanced spectral resolution by high-dimensional NMR using the filter diagonalization method and "hidden" dimensions.

    Science.gov (United States)

    Meng, Xi; Nguyen, Bao D; Ridge, Clark; Shaka, A J

    2009-01-01

    High-dimensional (HD) NMR spectra have poorer digital resolution than low-dimensional (LD) spectra, for a fixed amount of experiment time. This has led to "reduced-dimensionality" strategies, in which several LD projections of the HD NMR spectrum are acquired, each with higher digital resolution; an approximate HD spectrum is then inferred by some means. We propose a strategy that moves in the opposite direction, by adding more time dimensions to increase the information content of the data set, even if only a very sparse time grid is used in each dimension. The full HD time-domain data can be analyzed by the filter diagonalization method (FDM), yielding very narrow resonances along all of the frequency axes, even those with sparse sampling. Integrating over the added dimensions of HD FDM NMR spectra reconstitutes LD spectra with enhanced resolution, often more quickly than direct acquisition of the LD spectrum with a larger number of grid points in each of the fewer dimensions. If the extra-dimensions do not appear in the final spectrum, and are used solely to boost information content, we propose the moniker hidden-dimension NMR. This work shows that HD peaks have unmistakable frequency signatures that can be detected as single HD objects by an appropriate algorithm, even though their patterns would be tricky for a human operator to visualize or recognize, and even if digital resolution in an HD FT spectrum is very coarse compared with natural line widths.

  6. TH-CD-207A-06: Optimizing Four-Dimensional Digital Tomosynthesis Acquisition Based On Respiratory Guidance

    International Nuclear Information System (INIS)

    Kim, D; Kang, S; Kim, T; Kim, K; Cho, M; Shin, D; Noh, Y; Suh, T; Lee, S; Kim, S

    2016-01-01

    Purpose: Patient-breathing-related sorting of projections in 4D digital tomosynthesis (DTS) can suffer from severe artifacts due to the non-uniform angular distribution of projections and non-coplanar reconstructed images for each phase. In this study, we propose a method for optimally acquiring projection images in 4D DTS. Methods: In this method, every pair of projections at x-ray tube gantry angles symmetrical with respect to the center of the range of gantry rotation is obtained at the same respiration amplitude. This process is challenging but becomes feasible with visual biofeedback using a patient-specific respiration guide wave of sinusoidal shape (i.e., smooth and symmetrical enough). Depending on scan parameters such as the number of acquisition points per cycle, total scan angle, and projections per acquisition amplitude, the acquisition sequence is pre-determined. A simulation study was performed as a feasibility test. To mimic the actual situation closely, a group of volunteers was recruited and breathing data were acquired both with and without biofeedback. X-ray projections of a humanoid phantom were then virtually performed following (1) the breathing data from volunteers without guidance, (2) the breathing data with guidance, and (3) the planned breathing data (i.e., the ideal situation). Images from all three scenarios were compared. Results: Scenario #2 showed significant artifact reduction compared to #1 while showing minimal increase relative to the ideal situation (i.e., scenario #3). We verified the performance of the method with regard to the degree of inaccuracy during respiratory guiding. Also, the scan-angle-dependent differences in the DTS images were reduced with the proposed method compared with the established breathing-related sorting method. Conclusion: The proposed 4D DTS method makes it possible to improve the accuracy of image guidance within and between fractions with a relatively low imaging dose. This research was supported

  7. TH-CD-207A-06: Optimizing Four-Dimensional Digital Tomosynthesis Acquisition Based On Respiratory Guidance

    Energy Technology Data Exchange (ETDEWEB)

    Kim, D; Kang, S; Kim, T; Kim, K; Cho, M; Shin, D; Noh, Y; Suh, T [Department of Biomedical Engineering and Research Institute of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of); Lee, S [Department of Radiological Science, College of Medical Science, Konyang University, Daejeon (Korea, Republic of); Kim, S [Department of Radiation Oncology, School of Medicine, Virginia Commonwealth University, Richmond, VA (United States)

    2016-06-15

    Purpose: Patient-breathing-related sorting of projections in 4D digital tomosynthesis (DTS) can suffer from severe artifacts due to the non-uniform angular distribution of projections and non-coplanar reconstructed images for each phase. In this study, we propose a method for optimally acquiring projection images in 4D DTS. Methods: In this method, every pair of projections at x-ray tube gantry angles symmetrical with respect to the center of the range of gantry rotation is obtained at the same respiration amplitude. This process is challenging but becomes feasible with visual biofeedback using a patient-specific respiration guide wave of sinusoidal shape (i.e., smooth and symmetrical enough). Depending on scan parameters such as the number of acquisition points per cycle, total scan angle, and projections per acquisition amplitude, the acquisition sequence is pre-determined. A simulation study was performed as a feasibility test. To mimic the actual situation closely, a group of volunteers was recruited and breathing data were acquired both with and without biofeedback. X-ray projections of a humanoid phantom were then virtually performed following (1) the breathing data from volunteers without guidance, (2) the breathing data with guidance, and (3) the planned breathing data (i.e., the ideal situation). Images from all three scenarios were compared. Results: Scenario #2 showed significant artifact reduction compared to #1 while showing minimal increase relative to the ideal situation (i.e., scenario #3). We verified the performance of the method with regard to the degree of inaccuracy during respiratory guiding. Also, the scan-angle-dependent differences in the DTS images were reduced with the proposed method compared with the established breathing-related sorting method. Conclusion: The proposed 4D DTS method makes it possible to improve the accuracy of image guidance within and between fractions with a relatively low imaging dose. This research was supported

  8. Perception of Teachers and Administrators on the Teaching Methods That Influence the Acquisition of Generic Skills

    Science.gov (United States)

    Audu, R.; Bin Kamin, Yusri; Bin Musta'amal, Aede Hatib; Bin Saud, Muhammad Sukri; Hamid, Mohd. Zolkifli Abd.

    2014-01-01

    This study is designed to identify the most significant teaching methods that influence the acquisition of generic skills by mechanical engineering trades students at the technical college level. A descriptive survey research design was utilized in carrying out the study. One hundred and ninety (190) respondents comprising mechanical engineering…

  9. L2 Vocabulary Acquisition in Children: Effects of Learning Method and Cognate Status

    Science.gov (United States)

    Tonzar, Claudio; Lotto, Lorella; Job, Remo

    2009-01-01

    In this study we investigated the effects of two learning methods (picture- or word-mediated learning) and of word status (cognates vs. noncognates) on the vocabulary acquisition of two foreign languages: English and German. We examined children from fourth and eighth grades in a school setting. After a learning phase during which L2 words were…

  10. Method and device for fast code acquisition in spread spectrum receivers

    NARCIS (Netherlands)

    Coenen, A.J.R.M.

    1993-01-01

    Abstract of NL 9101155 (A) Method for code acquisition in a satellite receiver. The biphase-modulated high-frequency carrier transmitted by a satellite is converted via a fixed local oscillator frequency down to the baseband, whereafter the baseband signal is fed via a bandpass filter, which has an

  11. Radiation-hardened fast acquisition/weak signal tracking system and method

    Science.gov (United States)

    Winternitz, Luke (Inventor); Boegner, Gregory J. (Inventor); Sirotzky, Steve (Inventor)

    2009-01-01

    A global positioning system (GPS) receiver and method of acquiring and tracking GPS signals comprises an antenna adapted to receive GPS signals; an analog radio frequency device operatively connected to the antenna and adapted to convert the GPS signals from an analog format to a digital format; a plurality of GPS signal tracking correlators operatively connected to the analog RF device; a GPS signal acquisition component operatively connected to the analog RF device and the plurality of GPS signal tracking correlators, wherein the GPS signal acquisition component is adapted to calculate a maximum vector on a databit correlation grid; and a microprocessor operatively connected to the plurality of GPS signal tracking correlators and the GPS signal acquisition component, wherein the microprocessor is adapted to compare the maximum vector with a predetermined correlation threshold to allow the GPS signal to be fully acquired and tracked.
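
    To make the "maximum vector on a correlation grid" step concrete, the sketch below is a generic numpy illustration of GPS-style acquisition, not the patented receiver design: it searches a code-phase × Doppler grid with FFT-based circular correlation, takes the grid maximum, and compares it with a detection threshold. The random ±1 sequence stands in for a real C/A code, and the threshold rule is an assumption.

      import numpy as np

      rng = np.random.default_rng(0)
      fs, n = 1.023e6, 1023                      # sample rate and code length (about 1 ms of code)
      code = rng.choice([-1.0, 1.0], size=n)     # stand-in for a real PRN code
      t = np.arange(n) / fs

      # simulated received signal: delayed code, small Doppler shift, complex noise
      true_shift, true_dopp = 250, 1500.0        # samples, Hz
      rx = np.roll(code, true_shift) * np.exp(2j * np.pi * true_dopp * t)
      rx += 0.7 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

      # correlation grid: one FFT-based circular correlation per Doppler bin
      dopp_bins = np.arange(-5000.0, 5001.0, 500.0)
      grid = np.empty((dopp_bins.size, n))
      code_fft = np.conj(np.fft.fft(code))
      for i, fd in enumerate(dopp_bins):
          wiped = rx * np.exp(-2j * np.pi * fd * t)          # remove candidate Doppler
          grid[i] = np.abs(np.fft.ifft(np.fft.fft(wiped) * code_fft))

      peak = grid.max()
      i_dopp, i_shift = np.unravel_index(grid.argmax(), grid.shape)
      threshold = grid.mean() + 5.0 * grid.std()             # crude detection threshold
      print(f"peak={peak:.1f}, threshold={threshold:.1f}, "
            f"Doppler={dopp_bins[i_dopp]:+.0f} Hz, code shift={i_shift} samples, "
            f"acquired={peak > threshold}")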

  12. Dimensional analysis and qualitative methods in problem solving: II

    International Nuclear Information System (INIS)

    Pescetti, D

    2009-01-01

    We show that the underlying mathematical structure of dimensional analysis (DA), in the qualitative methods in problem-solving context, is the algebra of the affine spaces. In particular, we show that the qualitative problem-solving procedure based on the parallel decomposition of a problem into simple special cases yields the new original mathematical concepts of special points and special representations of affine spaces. A qualitative problem-solving algorithm piloted by the mathematics of DA is illustrated by a set of examples.
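
    As a concrete reminder of the kind of qualitative, dimension-driven reasoning the paper formalises, here is a standard dimensional-analysis example (the simple pendulum), written as a short LaTeX fragment; it is a textbook illustration, not an example taken from the paper.

      \[
        T = f(L, g, m), \qquad [T]=\mathrm{s},\quad [L]=\mathrm{m},\quad
        [g]=\mathrm{m\,s^{-2}},\quad [m]=\mathrm{kg}.
      \]
      Requiring $[L^{a} g^{b} m^{c}] = \mathrm{s}$ gives $a + b = 0$, $-2b = 1$, $c = 0$, so
      \[
        T \propto \sqrt{L/g},
      \]
      i.e.\ dimensional analysis alone fixes the result up to a dimensionless factor
      ($2\pi$ for small oscillations).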

  13. Tailoring four-dimensional cone-beam CT acquisition settings for fiducial marker-based image guidance in radiation therapy.

    Science.gov (United States)

    Jin, Peng; van Wieringen, Niek; Hulshof, Maarten C C M; Bel, Arjan; Alderliesten, Tanja

    2018-04-01

    Use of four-dimensional cone-beam CT (4D-CBCT) and fiducial markers for image guidance during radiation therapy (RT) of mobile tumors is challenging due to the trade-off among image quality, imaging dose, and scanning time. This study aimed to investigate different 4D-CBCT acquisition settings for good visibility of fiducial markers in 4D-CBCT. Using these 4D-CBCTs, the feasibility of marker-based 4D registration for RT setup verification and manual respiration-induced motion quantification was investigated. For this, we applied a dynamic phantom with three different breathing motion amplitudes and included two patients with implanted markers. Irrespective of the motion amplitude, for a medium field of view (FOV), marker visibility was improved by reducing the imaging dose per projection and increasing the number of projection images; however, the scanning time was 4 to 8 min. For a small FOV, the total imaging dose and the scanning time were reduced (62.5% of the dose using a medium FOV, 2.5 min) without losing marker visibility. However, the body contour could be missing for a small FOV, which is not preferred in RT. The marker-based 4D setup verification was feasible for both the phantom and patient data. Moreover, manual marker motion quantification can achieve a high accuracy with a mean error of [Formula: see text].

  14. Four-Dimensional Data Assimilation Using the Adjoint Method

    Science.gov (United States)

    Bao, Jian-Wen

    The calculus of variations is used to confirm that variational four-dimensional data assimilation (FDDA) using the adjoint method can be implemented when the numerical model equations have a finite number of first-order discontinuous points. These points represent the on/off switches associated with physical processes, for which the Jacobian matrix of the model equation does not exist. Numerical evidence suggests that, in some situations when the adjoint method is used for FDDA, the temperature field retrieved using horizontal wind data is numerically not unique. A physical interpretation of this type of non-uniqueness of the retrieval is proposed in terms of energetics. The adjoint equations of a numerical model can also be used for model-parameter estimation. A general computational procedure is developed to determine the size and distribution of any internal model parameter. The procedure is then applied to a one-dimensional shallow -fluid model in the context of analysis-nudging FDDA: the weighting coefficients used by the Newtonian nudging technique are determined. The sensitivity of these nudging coefficients to the optimal objectives and constraints is investigated. Experiments of FDDA using the adjoint method are conducted using the dry version of the hydrostatic Penn State/NCAR mesoscale model (MM4) and its adjoint. The minimization procedure converges and the initialization experiment is successful. Temperature-retrieval experiments involving an assimilation of the horizontal wind are also carried out using the adjoint of MM4.
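
    To make the adjoint machinery concrete, the sketch below (a toy illustration, not the MM4 adjoint) computes the gradient of a 4D-Var-style cost function with respect to the initial state of a small linear model by integrating the adjoint recursion backwards, and checks one component against a finite-difference estimate. The model matrix, observation operator, and synthetic data are all assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      n, steps = 4, 20
      M = np.eye(n) + 0.05 * rng.standard_normal((n, n))      # linear forward model x_{k+1} = M x_k
      H = np.eye(n)                                           # observe the full state
      y = [rng.standard_normal(n) for _ in range(steps + 1)]  # synthetic observations

      def cost(x0):
          """J(x0) = 0.5 * sum_k |H x_k - y_k|^2 along the forward trajectory."""
          x, traj = x0.copy(), [x0.copy()]
          for _ in range(steps):
              x = M @ x
              traj.append(x.copy())
          J = sum(0.5 * np.sum((H @ xk - y[k]) ** 2) for k, xk in enumerate(traj))
          return J, traj

      def adjoint_gradient(x0):
          """Backward sweep: lam_k = M^T lam_{k+1} + H^T (H x_k - y_k); dJ/dx0 = lam_0."""
          _, traj = cost(x0)
          lam = H.T @ (H @ traj[steps] - y[steps])
          for k in range(steps - 1, -1, -1):
              lam = M.T @ lam + H.T @ (H @ traj[k] - y[k])
          return lam

      x0 = rng.standard_normal(n)
      g_adj = adjoint_gradient(x0)

      eps = 1e-6                                              # finite-difference check of one component
      e0 = np.zeros(n)
      e0[0] = 1.0
      g_fd = (cost(x0 + eps * e0)[0] - cost(x0 - eps * e0)[0]) / (2 * eps)
      print("adjoint grad[0] =", g_adj[0], "  finite difference =", g_fd)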

  15. New method of 2-dimensional metrology using mask contouring

    Science.gov (United States)

    Matsuoka, Ryoichi; Yamagata, Yoshikazu; Sugiyama, Akiyuki; Toyoda, Yasutaka

    2008-10-01

    We have developed a new method for accurately profiling and measuring mask shapes using a Mask CD-SEM. The method is intended to realize the high accuracy, stability, and reproducibility of the Mask CD-SEM by adopting an edge detection algorithm, the key technology used in CD-SEM for high-accuracy CD measurement. Compared with a conventional image-processing method for contour profiling, this edge detection method can create profiles with much higher accuracy, comparable with that of CD-SEM for semiconductor device CD measurement. By utilizing such high-precision contour profiles, the method realizes two-dimensional metrology for refined patterns that had been difficult to measure conventionally. In this report, we introduce the algorithm in general, the experimental results, and the application in practice. As the design rules for semiconductor devices have continued to shrink, aggressive OPC (Optical Proximity Correction) has become indispensable in RET (Resolution Enhancement Technology). From the viewpoint of DFM (Design for Manufacturability), the dramatic increase in data processing cost for advanced MDP (Mask Data Preparation) and the surge in mask making cost have become major concerns for device manufacturers. In other words, quality demands are becoming more stringent because of the enormous growth in data volume as patterns on photomasks become more refined. As a result, a massive number of simulated errors occur in mask inspection, which lengthens the mask production and inspection period and increases cost and delivery time. In a sense, there is a trade-off between high-accuracy RET and mask production cost, and it has a significant impact on the semiconductor market centered on the mask business. To cope with this problem, we propose a DFM solution using two-dimensional metrology for refined patterns.

  16. Lexical and semantic representations of L2 cognate and noncognate words acquisition in children : evidence from two learning methods

    OpenAIRE

    Comesaña, Montserrat; Soares, Ana Paula; Sánchez-Casas, Rosa; Lima, Cátia

    2012-01-01

    How bilinguals represent words in two languages and which mechanisms are responsible for second language acquisition are important questions in the bilingual and vocabulary acquisition literature. This study aims to analyze the effect of two learning methods (picture-based vs. word-based method) and two types of words (cognates and noncognates) in early stages of children’s L2 acquisition. Forty-eight native speakers of European Portuguese, all sixth graders (mean age= 10.87 years; SD= 0....

  17. Comparison of an alternative and existing binning methods to reduce the acquisition duration of 4D PET/CT

    International Nuclear Information System (INIS)

    Didierlaurent, David; Ribes, Sophie; Caselles, Olivier; Jaudet, Cyril; Dierickx, Lawrence O.; Zerdoud, Slimane; Brillouet, Severine; Weits, Kathleen; Batatia, Hadj; Courbon, Frédéric

    2014-01-01

    Purpose: Respiratory motion is a source of artifacts that reduce image quality in PET. Four-dimensional (4D) PET/CT is one approach to overcome this problem. Existing techniques for limiting the effects of respiratory motion are based on prospective phase binning, which requires a long acquisition duration (15–25 min). This duration is uncomfortable for patients and limits the clinical exploitation of 4D PET/CT. In this work, the authors evaluated an existing method and an alternative retrospective binning method to reduce the acquisition duration of 4D PET/CT. Methods: The authors studied an existing mixed-amplitude binning (MAB) method and an alternative binning method by mixed phases (MPhB). Before implementing MPhB, they analyzed the regularity of the breathing patterns in patients. They studied the breathing signal drift and the missing CT slices that could make implementing MAB challenging. They compared the performance of MAB and MPhB with current binning methods for measuring the maximum uptake, internal volume, and maximal range of tumor motion. Results: MPhB can be implemented depending on an optimal phase (on average, the exhalation peak phase −4.1% of the entire breathing cycle duration). The signal drift of patients was on average 35% relative to the breathing amplitude. Even after correcting this drift, MAB was feasible in 4D CT for only 64% of patients. No significant differences appeared between the different binning methods in measuring the maximum uptake, internal volume, and maximal range of tumor motion. The authors also determined the inaccuracies of MAB and MPhB in measuring the maximum amplitude of tumor motion with three bins (less than 3 mm for movement inferior to 12 mm, up to 6.4 mm for a 21 mm movement). Conclusions: The authors proposed an alternative mixed-phase binning method that halves the acquisition duration of 4D PET/CT. Mixed-amplitude binning was challenging because of signal drift and missing CT slices. They showed that more
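
    As an illustration of the kind of retrospective sorting being compared, the sketch below assigns list-mode event timestamps to respiratory phase bins from a recorded breathing trace (phase measured peak-to-peak). It is a generic phase-binning example under assumed signals, not the authors' mixed-amplitude or mixed-phase implementation.

      import numpy as np
      from scipy.signal import find_peaks

      rng = np.random.default_rng(2)
      fs, duration = 25.0, 120.0                      # breathing trace: 25 Hz for 2 minutes
      t = np.arange(0.0, duration, 1.0 / fs)
      breathing = np.sin(2 * np.pi * t / 4.0) + 0.05 * rng.standard_normal(t.size)

      # phase = 0 at each inhalation peak, increasing linearly to 1 at the next peak
      peaks, _ = find_peaks(breathing, distance=int(2.0 * fs))
      peak_times = t[peaks]

      # fake list-mode event timestamps falling between the first and last detected peak
      event_times = rng.uniform(peak_times[0], peak_times[-1] - 1e-6, size=200_000)
      cycle_idx = np.searchsorted(peak_times, event_times, side="right") - 1
      cycle_start = peak_times[cycle_idx]
      cycle_len = peak_times[cycle_idx + 1] - cycle_start
      phase = (event_times - cycle_start) / cycle_len            # in [0, 1)

      n_bins = 8
      bin_idx = np.minimum((phase * n_bins).astype(int), n_bins - 1)
      print("events per phase bin:", np.bincount(bin_idx, minlength=n_bins))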

  18. On a novel matrix method for three-dimensional photoelasticity

    International Nuclear Information System (INIS)

    Theocaris, P.S.; Gdoutos, E.E.

    1978-01-01

    A non-destructive method for the photoelastic determination of three-dimensional stress distributions, based on the Mueller and Jones calculi, is developed. The differential equations satisfied by the Stokes and Jones vectors, when a polarized light beam passes through a photoelastic model, presenting rotation of the secondary principal stress directions, are established in matrix form. The Peano-Baker method is used for the solution of these differential equations in a matrix series form, establishing the elements of the Mueller and Jones matrices of the photoelastic model. These matrices are experimentally determined by using different wavelengths in conjunction with Jones' 'equivalence theorem'. The Neumann equations are immediately deduced from the above-mentioned differential equations. (orig.) [de

  19. New method of three-dimensional reconstruction from two-dimensional MR data sets

    International Nuclear Information System (INIS)

    Wrazidlo, W.; Schneider, S.; Brambs, H.J.; Richter, G.M.; Kauffmann, G.W.; Geiger, B.; Fischer, C.

    1989-01-01

    In medical diagnosis and therapy, cross-sectional images are obtained by means of US, CT, or MR imaging. The authors propose a new solution to the problem of constructing a shape over a set of cross-sectional contours from two-dimensional (2D) MR data sets. The authors' method reduces the problem of constructing a shape over the cross sections to that of constructing a sequence of partial shapes, each of them connecting two cross sections lying on adjacent planes. The solution makes use of the Delaunay triangulation, which is isomorphic in that specific situation. The authors compute this Delaunay triangulation. Shape reconstruction is then achieved section by section by pruning the Delaunay triangulations

  20. Modeling of three-dimensional diffusible resistors with the one-dimensional tube multiplexing method

    International Nuclear Information System (INIS)

    Gillet, Jean-Numa; Degorce, Jean-Yves; Meunier, Michel

    2009-01-01

    Electronic-behavior modeling of three-dimensional (3D) p+-π-p+ and n+-ν-n+ semiconducting diffusible devices with highly accurate resistances for the design of analog resistors, which are compatible with CMOS (complementary metal-oxide-semiconductor) technologies, is performed in three dimensions with the fast tube multiplexing method (TMM). The current–voltage (I–V) curve of a silicon device is usually computed with traditional device simulators of technology computer-aided design (TCAD) based on the finite-element method (FEM). However, for the design of 3D p+-π-p+ and n+-ν-n+ diffusible resistors, they show a high computational cost, and convergence may fail with fully non-separable 3D dopant concentration profiles as observed in many diffusible resistors resulting from laser trimming. These problems are avoided with the proposed TMM, which divides the 3D resistor into one-dimensional (1D) thin tubes with longitudinal axes following the main orientation of the average electrical field in the tubes. The I–V curve is rapidly obtained for a device with a realistic 3D dopant profile, since a system of three first-order ordinary differential equations has to be solved for each 1D multiplexed tube with the TMM instead of three second-order partial differential equations in the traditional TCADs. Simulations with the TMM are successfully compared to experimental results from silicon-based 3D resistors fabricated by laser-induced dopant diffusion in the gaps of MOSFETs (metal-oxide-semiconductor field-effect transistors) without an initial gate. Using thin tubes with shapes other than parallelepipeds, such as ring segments with toroidal lateral surfaces, the TMM can be generalized to electronic devices with other types of 3D diffusible microstructures

  1. Exact rebinning methods for three-dimensional PET.

    Science.gov (United States)

    Liu, X; Defrise, M; Michel, C; Sibomana, M; Comtat, C; Kinahan, P; Townsend, D

    1999-08-01

    The high computational cost of data processing in volume PET imaging is still hindering the routine application of this successful technique, especially in the case of dynamic studies. This paper describes two new algorithms based on an exact rebinning equation, which can be applied to accelerate the processing of three-dimensional (3-D) PET data. The first algorithm, FOREPROJ, is a fast-forward projection algorithm that allows calculation of the 3-D attenuation correction factors (ACF's) directly from a two-dimensional (2-D) transmission scan, without first reconstructing the attenuation map and then performing a 3-D forward projection. The use of FOREPROJ speeds up the estimation of the 3-D ACF's by more than a factor five. The second algorithm, FOREX, is a rebinning algorithm that is also more than five times faster, compared to the standard reprojection algorithm (3DRP) and does not suffer from the image distortions generated by the even faster approximate Fourier rebinning (FORE) method at large axial apertures. However, FOREX is probably not required by most existing scanners, as the axial apertures are not large enough to show improvements over FORE with clinical data. Both algorithms have been implemented and applied to data simulated for a scanner with a large axial aperture (30 degrees), and also to data acquired with the ECAT HR and the ECAT HR+ scanners. Results demonstrate the excellent accuracy achieved by these algorithms and the important speedup when the sinogram sizes are powers of two.

  2. Dimensionality Reduction Methods: Comparative Analysis of methods PCA, PPCA and KPCA

    Directory of Open Access Journals (Sweden)

    Jorge Arroyo-Hernández

    2016-01-01

    The dimensionality reduction methods are algorithms that map a data set into subspaces derived from the original space, of fewer dimensions, allowing a description of the data at a lower cost. Because of their importance, they are widely used in machine learning processes. This article presents a comparative analysis of the PCA, PPCA and KPCA dimensionality reduction methods. A reconstruction experiment on worm-shape data was performed using structures of landmarks located on the body contour, with each method using different numbers of principal components. The results showed that all methods can be seen as alternative processes. Nevertheless, thanks to the potential for analysis in the feature space and the method presented for computing its pre-image, KPCA offers a better method for recognition processes and pattern extraction.
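
    A minimal scikit-learn sketch of the kind of comparison described above: reconstruct noisy two-dimensional data with linear PCA and with kernel PCA (whose reconstruction requires an approximate pre-image map). The toy data set and kernel parameters are illustrative assumptions, not the article's worm-shape landmarks.

      import numpy as np
      from sklearn.datasets import make_circles
      from sklearn.decomposition import PCA, KernelPCA

      X, _ = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

      pca = PCA(n_components=1).fit(X)
      X_pca = pca.inverse_transform(pca.transform(X))

      kpca = KernelPCA(n_components=1, kernel="rbf", gamma=10.0,
                       fit_inverse_transform=True).fit(X)     # learns an approximate pre-image map
      X_kpca = kpca.inverse_transform(kpca.transform(X))

      def rms_error(a, b):
          return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

      print("PCA  reconstruction RMSE:", rms_error(X, X_pca))
      print("KPCA reconstruction RMSE:", rms_error(X, X_kpca))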

  3. Method for signal conditioning and data acquisition system, based on variable amplification and feedback technique

    Energy Technology Data Exchange (ETDEWEB)

    Conti, Livio, E-mail: livio.conti@uninettunouniversity.net [Facoltà di Ingegneria, Università Telematica Internazionale Uninettuno, Corso Vittorio Emanuele II 39, 00186 Rome, Italy INFN Sezione Roma Tor Vergata, Via della Ricerca Scientifica 1, 00133 Rome (Italy); Sgrigna, Vittorio [Dipartimento di Matematica e Fisica, Università Roma Tre, 84 Via della Vasca Navale, I-00146 Rome (Italy); Zilpimiani, David [National Institute of Geophysics, Georgian Academy of Sciences, 1 M. Alexidze St., 009 Tbilisi, Georgia (United States); Assante, Dario [Facoltà di Ingegneria, Università Telematica Internazionale Uninettuno, Corso Vittorio Emanuele II 39, 00186 Rome, Italy INFN Sezione Roma Tor Vergata, Via della Ricerca Scientifica 1, 00133 Rome (Italy)

    2014-08-21

    An original method of signal conditioning and adaptive amplification is proposed for data acquisition systems of analog signals, conceived to obtain a high-resolution spectrum of any input signal. The procedure is based on a feedback scheme for the signal amplification, with the aim of maximizing the dynamic range and resolution of the data acquisition system. The paper describes the signal conditioning, digitization, and data processing procedures applied to an a priori unknown signal in order to extract its amplitude and frequency content for applications in different environments: on the ground, in space, or in the laboratory. An electronic board implementing the conditioning module has also been constructed and is described. The paper also discusses the main fields of application and the advantages of the method with respect to those known today.
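
    A toy sketch of the feedback idea: choose, from a block of acquired samples, the largest programmable gain that keeps the next block within the ADC range with some headroom. The gain set, headroom factor, and signal model are assumptions for illustration, not values from the paper.

      import numpy as np

      GAINS = (1, 2, 4, 8, 16, 32)        # hypothetical programmable-gain settings
      FULL_SCALE = 1.0                    # ADC input range (normalised)
      HEADROOM = 0.8                      # aim to use at most 80 % of full scale

      def next_gain(block, current_gain):
          """Feedback rule: estimate the peak of the un-amplified signal from the
          current block and choose the largest gain that keeps it below HEADROOM."""
          peak_in = np.max(np.abs(block)) / current_gain
          for g in reversed(GAINS):
              if peak_in * g <= HEADROOM * FULL_SCALE:
                  return g
          return GAINS[0]

      rng = np.random.default_rng(3)
      gain = 1
      for i in range(5):
          amplitude = rng.uniform(0.01, 0.5)                  # slowly varying source amplitude
          block = gain * amplitude * np.sin(np.linspace(0, 20, 1000))
          gain = next_gain(block, gain)
          print(f"block {i}: input peak ~{amplitude:.3f} -> next gain {gain}")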

  4. Method for signal conditioning and data acquisition system, based on variable amplification and feedback technique

    International Nuclear Information System (INIS)

    Conti, Livio; Sgrigna, Vittorio; Zilpimiani, David; Assante, Dario

    2014-01-01

    An original method of signal conditioning and adaptive amplification is proposed for data acquisition systems of analog signals, conceived to obtain a high-resolution spectrum of any input signal. The procedure is based on a feedback scheme for the signal amplification, with the aim of maximizing the dynamic range and resolution of the data acquisition system. The paper describes the signal conditioning, digitization, and data processing procedures applied to an a priori unknown signal in order to extract its amplitude and frequency content for applications in different environments: on the ground, in space, or in the laboratory. An electronic board implementing the conditioning module has also been constructed and is described. The paper also discusses the main fields of application and the advantages of the method with respect to those known today

  5. A new method of three-dimensional computer assisted reconstruction of the developing biliary tract.

    Science.gov (United States)

    Prudhomme, M; Gaubert-Cristol, R; Jaeger, M; De Reffye, P; Godlewski, G

    1999-01-01

    A three-dimensional (3-D) computer assisted reconstruction of the biliary tract was performed in human and rat embryos at Carnegie stage 23 to describe and compare the biliary structures and to point out the anatomic relations between the structures of the hepatic pedicle. Light micrograph images from consecutive serial sagittal sections (diameter 7 mm) of one human and 16 rat embryos were directly digitalized with a CCD camera. The serial views were aligned automatically by software. The data were analysed following segmentation and thresholding, allowing automatic reconstruction. The main bile ducts ascended in the mesoderm of the hepatoduodenal ligament. The extrahepatic bile ducts: common bile duct (CD), cystic duct and gallbladder in the human, formed a compound system which could not be shown so clearly in histologic sections. The hepato-pancreatic ampulla was studied as visualised through the duodenum. The course of the CD was like a chicane. The gallbladder diameter and length were similar to those of the CD. Computer-assisted reconstruction permitted easy acquisition of the data by direct examination of the sections through the microscope. This method showed the relationships between the different structures of the hepatic pedicle and allowed estimation of the volume of the bile duct. These findings were not obvious in two-dimensional (2-D) views from histologic sections. Each embryonic stage could be rebuilt in 3-D, which could introduce the time as a fourth dimension, fundamental for the study of organogenesis.

  6. New methods for rapid data acquisition of contaminated land cover after NPP accident

    International Nuclear Information System (INIS)

    Hulka, J.; Cespirova, I.

    2008-01-01

    The aim of the research project is the analysis of modern, rapid and reliable data acquisition methods for agricultural countermeasures, feed-stuff restrictions and clean-up of large contaminated areas after an NPP accident. The acquisition of reliable agricultural data, especially based on satellite technology, and the analysis of landscape contamination (based on computer codes vs. in situ measurements, airborne and/or terrestrial mapping of contamination) are discussed. (authors)

  7. New methods for rapid data acquisition of contaminated land cover after NPP accident

    International Nuclear Information System (INIS)

    Hulka, J.; Cespirova, I.

    2009-01-01

    The aim of the research project is the analysis of modern, rapid and reliable data acquisition methods for agricultural countermeasures, feed-stuff restrictions and clean-up of large contaminated areas after an NPP accident. The acquisition of reliable agricultural data, especially based on satellite technology, and the analysis of landscape contamination (based on computer codes vs. in situ measurements, airborne and/or terrestrial mapping of contamination) are discussed. (authors)

  8. A synchronization method for wireless acquisition systems, application to brain computer interfaces.

    Science.gov (United States)

    Foerster, M; Bonnet, S; van Langhenhove, A; Porcherot, J; Charvet, G

    2013-01-01

    A synchronization method for wireless acquisition systems has been developed and implemented on a wireless ECoG recording implant and on a wireless EEG recording helmet. The presented algorithm and hardware implementation allow the precise synchronization of several data streams from several sensor nodes for applications where timing is critical like in event-related potential (ERP) studies. The proposed method has been successfully applied to obtain visual evoked potentials and compared with a reference biosignal amplifier. The control over the exact sampling frequency allows reducing synchronization errors that will otherwise accumulate during a recording. The method is scalable to several sensor nodes communicating with a shared base station.

  9. One-dimensional transient radiative transfer by lattice Boltzmann method.

    Science.gov (United States)

    Zhang, Yong; Yi, Hongliang; Tan, Heping

    2013-10-21

    The lattice Boltzmann method (LBM) is extended to solve transient radiative transfer in a one-dimensional slab containing scattering media subjected to a collimated short laser irradiation. By using a fully implicit backward differencing scheme to discretize the transient term in the radiative transfer equation, a new type of lattice structure is devised. The accuracy and computational efficiency of this algorithm are examined first. Afterwards, the effects of the medium properties, such as the extinction coefficient, the scattering albedo and the anisotropy factor, and of the shape of the laser pulse on the time-resolved signals of transmittance and reflectance are investigated. Results of the present method are found to compare very well with data from the literature. For an oblique incidence, the LBM results in this paper are compared with those of a Monte Carlo method generated by ourselves. In addition, transient radiative transfer in a two-layer inhomogeneous medium subjected to a short square pulse irradiation is investigated. Finally, the LBM is further extended to study transient radiative transfer in a homogeneous medium with a refractive index discontinuity irradiated by a short pulse laser. Several trends in the time-resolved signals, different from those for a refractive index of 1 (i.e., a refractive-index-matched boundary), are observed and analysed.

  10. Fast Estimation Method of Space-Time Two-Dimensional Positioning Parameters Based on Hadamard Product

    Directory of Open Access Journals (Sweden)

    Haiwen Li

    2018-01-01

    The estimation speed of positioning parameters determines the effectiveness of a positioning system. The time of arrival (TOA) and direction of arrival (DOA) parameters can be estimated by the space-time two-dimensional multiple signal classification (2D-MUSIC) algorithm for an array antenna. However, this algorithm needs much time to complete the two-dimensional pseudo-spectral peak search, which makes it difficult to apply in practice. To solve this problem, a fast estimation method for space-time two-dimensional positioning parameters based on the Hadamard product is proposed for orthogonal frequency division multiplexing (OFDM) systems, and the Cramer-Rao bound (CRB) is also presented. First, according to the channel frequency domain response vector of each array element, the channel frequency domain estimation vector is constructed in a Hadamard product form containing the location information. Then, the autocorrelation matrix of the channel response vector for the extended array element in the frequency domain and the noise subspace are calculated successively. Finally, by combining a closed-form solution with parameter pairing, fast joint estimation of the time delay and arrival direction is accomplished. The theoretical analysis and simulation results show that the proposed algorithm significantly reduces the computational complexity and achieves an estimation accuracy that is not only better than the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm and the 2D matrix pencil (MP) algorithm but also close to that of the 2D-MUSIC algorithm. Moreover, the proposed algorithm also has a certain adaptability to multipath environments and effectively improves the ability to rapidly acquire location parameters.
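
    For readers unfamiliar with the subspace machinery behind 2D-MUSIC, the sketch below runs the standard one-dimensional MUSIC DOA estimator on a simulated uniform linear array; it illustrates only the noise-subspace pseudospectrum, not the space-time (TOA/DOA) extension or the Hadamard-product acceleration proposed in the paper. Array geometry, SNR, and source angles are assumptions.

      import numpy as np
      from scipy.signal import find_peaks

      rng = np.random.default_rng(4)
      M, d, snapshots = 8, 0.5, 200                 # sensors, spacing in wavelengths, snapshots
      true_doas = np.deg2rad([-20.0, 35.0])

      def steering(theta):
          """Steering vectors of a uniform linear array for angles theta (radians)."""
          return np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

      A = steering(true_doas)                                        # M x K
      S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
      noise = 0.1 * (rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots)))
      X = A @ S + noise

      R = X @ X.conj().T / snapshots                                 # sample covariance
      _, eigvec = np.linalg.eigh(R)                                  # eigenvalues in ascending order
      En = eigvec[:, : M - 2]                                        # noise subspace (K = 2 sources)

      grid = np.deg2rad(np.linspace(-90.0, 90.0, 721))
      p_music = 1.0 / np.sum(np.abs(En.conj().T @ steering(grid)) ** 2, axis=0)

      peaks, _ = find_peaks(p_music)
      best = peaks[np.argsort(p_music[peaks])[-2:]]
      print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[best])))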

  11. Fast acquisition of multidimensional NMR spectra of solids and mesophases using alternative sampling methods.

    Science.gov (United States)

    Lesot, Philippe; Kazimierczuk, Krzysztof; Trébosc, Julien; Amoureux, Jean-Paul; Lafon, Olivier

    2015-11-01

    Unique information about the atom-level structure and dynamics of solids and mesophases can be obtained by the use of multidimensional nuclear magnetic resonance (NMR) experiments. Nevertheless, the acquisition of these experiments often requires long acquisition times. We review here alternative sampling methods, which have been proposed to circumvent this issue in the case of solids and mesophases. Compared to the spectra of solutions, those of solids and mesophases present some specificities because they usually display lower signal-to-noise ratios, non-Lorentzian line shapes, lower spectral resolutions and wider spectral widths. We highlight herein the advantages and limitations of these alternative sampling methods. A first route to accelerate the acquisition time of multidimensional NMR spectra consists in the use of sparse sampling schemes, such as truncated, radial or random sampling ones. These sparsely sampled datasets are generally processed by reconstruction methods differing from the Discrete Fourier Transform (DFT). A host of non-DFT methods have been applied for solids and mesophases, including the G-matrix Fourier transform, the linear least-square procedures, the covariance transform, the maximum entropy and the compressed sensing. A second class of alternative sampling consists in departing from the Jeener paradigm for multidimensional NMR experiments. These non-Jeener methods include Hadamard spectroscopy as well as spatial or orientational encoding of the evolution frequencies. The increasing number of high field NMR magnets and the development of techniques to enhance NMR sensitivity will contribute to widen the use of these alternative sampling methods for the study of solids and mesophases in the coming years. Copyright © 2015 John Wiley & Sons, Ltd.

  12. Matrix method for two-dimensional waveguide mode solution

    Science.gov (United States)

    Sun, Baoguang; Cai, Congzhong; Venkatesh, Balajee Seshasayee

    2018-05-01

    In this paper, we show that the transfer matrix theory of multilayer optics can be used to solve the modes of any two-dimensional (2D) waveguide for their effective indices and field distributions. A 2D waveguide, even composed of numerous layers, is essentially a multilayer stack and the transmission through the stack can be analysed using the transfer matrix theory. The result is a transfer matrix with four complex value elements, namely A, B, C and D. The effective index of a guided mode satisfies two conditions: (1) evanescent waves exist simultaneously in the first (cladding) layer and last (substrate) layer, and (2) the complex element D vanishes. For a given mode, the field distribution in the waveguide is the result of a 'folded' plane wave. In each layer, there is only propagation and absorption; at each boundary, only reflection and refraction occur, which can be calculated according to the Fresnel equations. As examples, we show that this method can be used to solve modes supported by the multilayer step-index dielectric waveguide, slot waveguide, gradient-index waveguide and various plasmonic waveguides. The results indicate the transfer matrix method is effective for 2D waveguide mode solution in general.
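
    As a minimal illustration of solving for effective indices, the sketch below handles the simplest case covered by the paper, a three-layer step-index slab in TE polarisation, using the textbook dispersion relation rather than the general multilayer transfer matrix (which becomes necessary once many layers are present). The refractive indices, thickness, and wavelength are illustrative assumptions.

      import numpy as np
      from scipy.optimize import brentq

      def te_mode_indices(n_clad, n_film, n_sub, thickness_um, wavelength_um):
          """Effective indices of guided TE modes of a three-layer step-index slab."""
          k0 = 2 * np.pi / wavelength_um
          n_lo = max(n_clad, n_sub)

          def dispersion(n_eff, m):
              beta = n_eff * k0
              kappa = np.sqrt((n_film * k0) ** 2 - beta ** 2)     # transverse wavenumber in the film
              gamma_c = np.sqrt(beta ** 2 - (n_clad * k0) ** 2)   # decay rate in the cladding
              gamma_s = np.sqrt(beta ** 2 - (n_sub * k0) ** 2)    # decay rate in the substrate
              return (kappa * thickness_um - m * np.pi
                      - np.arctan(gamma_c / kappa) - np.arctan(gamma_s / kappa))

          modes, m = [], 0
          while True:
              lo, hi = n_lo + 1e-9, n_film - 1e-9
              if dispersion(lo, m) * dispersion(hi, m) > 0:       # no further guided modes
                  break
              modes.append(brentq(dispersion, lo, hi, args=(m,)))
              m += 1
          return modes

      # illustrative values: a high-index film on silica with air cladding
      print(te_mode_indices(n_clad=1.0, n_film=2.0, n_sub=1.45,
                            thickness_um=0.5, wavelength_um=1.55))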

  13. Models, methods and software for distributed knowledge acquisition for the automated construction of integrated expert systems knowledge bases

    International Nuclear Information System (INIS)

    Dejneko, A.O.

    2011-01-01

    Based on an analysis of existing models, methods and means of acquiring knowledge, a base method of automated knowledge acquisition has been chosen. On the base of this method, a new approach to integrate information acquired from knowledge sources of different typologies has been proposed, and the concept of a distributed knowledge acquisition with the aim of computerized formation of the most complete and consistent models of problem areas has been introduced. An original algorithm for distributed knowledge acquisition from databases, based on the construction of binary decision trees has been developed [ru

  14. PWR core safety analysis with 3-dimensional methods

    International Nuclear Information System (INIS)

    Gensler, A.; Kühnel, K.; Kuch, S.

    2015-01-01

    Highlights: • An overview of AREVA's safety analysis codes and their coupling is provided. • The validation base and licensing applications of these codes are summarized. • Coupled codes and methods provide improved margins and non-conservative results. • Examples for REA and inadvertent opening of the pressurizer safety valve are given. - Abstract: The main focus of safety analysis is to demonstrate the required safety level of the reactor core. Because of the demanding requirements, the quality of the safety analysis strongly affects the confidence in the operational safety of a reactor. To ensure the highest quality, it is essential that the methodology consists of appropriate analysis tools, an extensive validation base, and, last but not least, highly educated engineers applying the methodology. The sophisticated 3-dimensional core models applied by AREVA ensure that all physical effects relevant for safety are treated and that the results are reliable and conservative. Presently AREVA employs SCIENCE, CASMO/NEMO and CASCADE-3D for pressurized water reactors. These codes are currently being consolidated into the next-generation 3D code system ARCADIA®. AREVA continuously extends the validation base, including measurement campaigns in test facilities and comparisons of the predictions with steady-state and transient data measured in plants during many years of operation. Thus, the core models provide reliable and comprehensive results for a wide range of applications. For the application of these powerful tools, AREVA takes advantage of its interdisciplinary know-how and international teamwork. Experienced engineers with different technical backgrounds work together to ensure an appropriate interpretation of the calculation results and uncertainty analysis, while continuously maintaining and enhancing the quality of the analysis methodologies. In this paper, an overview of AREVA's broad application experience as well as the broad validation

  15. New alternative methods of analyzing human behavior in cued target acquisition.

    Science.gov (United States)

    Maltz, Masha; Shinar, David

    2003-01-01

    Target acquisition tasks in natural environments are often augmented by cuing systems that advise human observers during the decision process. With present technological limitations, cuing systems are imperfect, so the question arises whether cuing aids should be implemented under all conditions. We examined target acquisition performance under different levels of task complexity and cuing system reliability. We introduce here two new methods to help define observer behavior trends in cued target acquisition: a quantitative measure of observer search behavior in a temporal sense and a measure of the extent of observer reliance on the cue. We found that observer reliance on the cue correlated with task difficulty and the perceived reliability of the cue. Cuing was generally helpful in complex tasks, whereas cuing reduced performance in easy tasks. Consequently, cuing systems should be implemented only when the task is difficult enough to warrant the intrusion of a cue into the task. Actual or potential applications of this research include the design and implementation of imperfect automated aids dealing with augmented reality.

  16. Study of mixed programming gamma spectrum acquisition method based on MSP430F4618

    International Nuclear Information System (INIS)

    Li Yuezhong; Tang Bin; Zhang Zhongliang; Xie Xiaolin

    2012-01-01

    In order to reduce the measurement dead time of the hand-held gamma spectrometer and to achieve a low-voltage, low-power design, the spectrum acquisition circuit is built around the ultra-low-power microcontroller MSP430F4618, with its external signal conditioning circuit, anti-coincidence circuit interface, and on-chip sample-and-hold and A/D converter. C and assembly language programming are used together: the sample-and-hold, A/D conversion, and spectrum acquisition routines are written in assembly language, while system monitoring and task scheduling are written in C. The hand-held gamma spectrometer is powered by just two No. 5 rechargeable batteries through a high-efficiency DC-DC circuit. A prototype gamma spectrometer was developed with this method; testing shows that the spectrum acquisition time is shortened by a factor of 2 to 3, i.e., the measurement dead time is reduced, and that the operating current of the whole instrument does not exceed 150 mA. With two 2400 mAh No. 5 rechargeable batteries, the instrument can work continuously for more than 10 hours, which meets the application requirements. (authors)

  17. Three-dimensional discrete element method simulation of core disking

    Science.gov (United States)

    Wu, Shunchuan; Wu, Haoyan; Kemeny, John

    2018-04-01

    The phenomenon of core disking is commonly seen in deep drilling of highly stressed regions in the Earth's crust. Given its close relationship with the in situ stress state, the presence and features of core disking can be used to interpret the stresses when traditional in situ stress measuring techniques are not available. The core disking process was simulated in this paper using the three-dimensional discrete element method software PFC3D (particle flow code). In particular, PFC3D is used to examine the evolution of fracture initiation, propagation and coalescence associated with core disking under various stress states. In this paper, four unresolved problems concerning core disking are investigated with a series of numerical simulations. These simulations also provide some verification of existing results by other researchers: (1) Core disking occurs when the maximum principal stress is about 6.5 times the tensile strength. (2) For most stress situations, core disking occurs from the outer surface, except for the thrust faulting stress regime, where the fractures were found to initiate from the inner part. (3) The anisotropy of the two horizontal principal stresses has an effect on the core disking morphology. (4) The thickness of core disk has a positive relationship with radial stress and a negative relationship with axial stresses.

  18. Theories to support method development in comprehensive two-dimensional liquid chromatography - A review

    NARCIS (Netherlands)

    Bedani, F.; Schoenmakers, P.J.; Janssen, H.-G.

    2012-01-01

    On-line comprehensive two-dimensional liquid chromatography techniques promise to resolve samples that current one-dimensional liquid chromatography methods cannot adequately deal with. To make full use of the potential of two-dimensional liquid chromatography, optimization is required. Optimization

  19. Method and system for manipulating a digital representation of a three-dimensional object

    DEFF Research Database (Denmark)

    2010-01-01

    A method of manipulating a three-dimensional virtual building block model by means of two-dimensional cursor movements, the virtual building block model including a plurality of virtual building blocks each including a number of connection elements for connecting the virtual building block...... with another virtual building block according to a set of connection rules, the method comprising positioning by means of cursor movements in a computer display area representing a two-dimensional projection of said model, a two-dimensional projection of a first virtual building block to be connected...... to the structure, resulting in a two-dimensional position; determining, from the two-dimensional position, a number of three-dimensional candidate positions of the first virtual building block in the three-dimensional coordinate system; selecting one of said candidate positions based on the connection rules...

  20. Optimization of three-dimensional triple IR fast spoiled gradient recalled acquisition in the steady state (FSPGR) to decrease vascular artifact at 3.0 Tesla

    International Nuclear Information System (INIS)

    Fujiwara, Yasuhiro; Fukuya, Yuko; Yamaguchi, Isao; Matsuda, Tsuyoshi; Ishimori, Yoshiyuki; Yamada, Kazuhiro; Kimura, Hirohiko; Miyati, Tosiaki

    2006-01-01

    The purpose of this study was to decrease vascular artifacts caused by the in-flow effect in three-dimensional inversion recovery prepared fast spoiled gradient recalled acquisition in the steady state (3D IR FSPGR) at 3.0 Tesla. We developed 3D triple IR (3IR) FSPGR and examined the signal characteristics of the new sequence. We optimized the scan parameters based on simulation, phantom, and in-vivo studies. As a result, the optimized parameters (first TI = 600 ms, third TI = 500 ms) reduced the vessel signal by more than 40% while preserving gray-white matter contrast. The artifact reduction was also confirmed by visual inspection of in-vivo images acquired under this condition. Thus, 3D 3IR FSPGR is a useful sequence for the acquisition of T1-weighted images at 3.0 Tesla. (author)

  1. The Use of Statistical Methods in Dimensional Process Control

    National Research Council Canada - National Science Library

    Krajcsik, Stephen

    1985-01-01

    ... erection. To achieve this high degree of unit accuracy, we have begun a pilot dimensional control program that has set the guidelines for systematically monitoring each stage of the production process prior to erection...

  2. Generalized similarity method in unsteady two-dimensional MHD ...

    African Journals Online (AJOL)


    International Journal of Engineering, Science and Technology, Vol. 1, No. 1, 2009. [Abstract fragment: the paper applies a generalized similarity method to the unsteady two-dimensional MHD laminar boundary layer of an incompressible fluid, with the Blasius solution used as the reference for the stationary boundary layer on the plate.]

  3. The Topology Optimization of Three-dimensional Cooling Fins by the Internal Element Connectivity Parameterization Method

    International Nuclear Information System (INIS)

    Yoo, Sung Min; Kim, Yoon Young

    2007-01-01

    This work is concerned with the topology optimization of three-dimensional cooling fins or heat sinks. Motivated by the earlier success of the Internal Element Connectivity Parameterization (I-ECP) method in two-dimensional problems, the extension of I-ECP to three-dimensional problems is carried out. The main effort was to maintain the numerically trouble-free characteristics of I-ECP for full three-dimensional problems; a serious numerical problem appearing in thermal topology optimization is erroneous temperature undershooting. The effectiveness of the present implementation was checked through the design optimization of three-dimensional fins

  4. Acquisition and processing method for human sensorial, sensitive, motory and phonatory circuits reaction times

    International Nuclear Information System (INIS)

    Doche, Claude

    1972-01-01

    This work describes a storage and acquisition device, and a processing method, for human sensorial, sensitive, motory and phonatory reaction times. The circuits considered are those formed by the visual, auditory and sensory receptor organs and the motory or phonatory effector organs. The anatomo-physiological localization of these circuits allows us to assess the capabilities of the central nervous system from different angles. The experimental population is made up of normal and pathological individuals (individuals with tumoral or vascular, localized or diffuse cerebral lesions, or parkinsonian individuals). The parameter processing method is based on multivariate analysis and allows us to position each individual with respect to a normal individual and to assess the weight of each circuit in this positioning. The clinical results give this method prognostic and therapeutic interest. It nevertheless seems premature to speak of its diagnostic value. (author) [fr

  5. Method for coupling two-dimensional to three-dimensional discrete ordinates calculations

    International Nuclear Information System (INIS)

    Thompson, J.L.; Emmett, M.B.; Rhoades, W.A.; Dodds, H.L. Jr.

    1985-01-01

    A three-dimensional (3-D) discrete ordinates transport code, TORT, has been developed at the Oak Ridge National Laboratory for radiation penetration studies. It is not feasible to solve some 3-D penetration problems with TORT, such as a building located a large distance from a point source, because (a) the discretized 3-D problem is simply too big to fit on the computer or (b) the computing time (and corresponding cost) is prohibitive. Fortunately, such problems can be solved with a hybrid approach by coupling a two-dimensional (2-D) description of the point source, which is assumed to be azimuthally symmetric, to a 3-D description of the building, the region of interest. The purpose of this paper is to describe this hybrid methodology along with its implementation and evaluation in the DOTTOR (Discrete Ordinates to Three-dimensional Oak Ridge Transport) code

  6. A high count rate one-dimensional position sensitive detector and a data acquisition system for time resolved X-ray scattering studies

    International Nuclear Information System (INIS)

    Pernot, P.

    1982-01-01

    A curved multiwire proportional drift chamber has been built as a general-purpose instrument for X-ray scattering and X-ray diffraction experiments with synchrotron radiation. This parallax-free one-dimensional linear position-sensitive detector has a parallel readout with double-hit logic. The data acquisition system, installed as a part of the D11 camera at LURE-DCI, is designed to perform time-slicing and cyclic experiments; it has been used with either the fast multiwire chamber or a standard position-sensitive detector with delay-line readout [fr

  7. NMR and pattern recognition methods in metabolomics: From data acquisition to biomarker discovery: A review

    International Nuclear Information System (INIS)

    Smolinska, Agnieszka; Blanchet, Lionel; Buydens, Lutgarde M.C.; Wijmenga, Sybren S.

    2012-01-01

    Highlights: ► Procedures for acquisition of different biofluids by NMR. ► Recent developments in metabolic profiling of different biofluids by NMR are presented. ► The crucial steps involved in data preprocessing and multivariate chemometric analysis are reviewed. ► Emphasis is given on recent findings on Multiple Sclerosis via NMR and pattern recognition methods. - Abstract: Metabolomics is the discipline where endogenous and exogenous metabolites are assessed, identified and quantified in different biological samples. Metabolites are crucial components of biological system and highly informative about its functional state, due to their closeness to functional endpoints and to the organism's phenotypes. Nuclear Magnetic Resonance (NMR) spectroscopy, next to Mass Spectrometry (MS), is one of the main metabolomics analytical platforms. The technological developments in the field of NMR spectroscopy have enabled the identification and quantitative measurement of the many metabolites in a single sample of biofluids in a non-targeted and non-destructive manner. Combination of NMR spectra of biofluids and pattern recognition methods has driven forward the application of metabolomics in the field of biomarker discovery. The importance of metabolomics in diagnostics, e.g. in identifying biomarkers or defining pathological status, has been growing exponentially as evidenced by the number of published papers. In this review, we describe the developments in data acquisition and multivariate analysis of NMR-based metabolomics data, with particular emphasis on the metabolomics of Cerebrospinal Fluid (CSF) and biomarker discovery in Multiple Sclerosis (MScl).

  8. A simple encoding method for Sigma-Delta ADC based biopotential acquisition systems.

    Science.gov (United States)

    Guerrero, Federico N; Spinelli, Enrique M

    2017-10-01

    Sigma-Delta analogue-to-digital converters allow acquiring the full dynamic range of biomedical signals at the electrodes, resulting in less complex hardware and increased measurement robustness. However, the increased data size per sample (typically 24 bits) demands the transmission of extremely large volumes of data across the isolation barrier, thus increasing power consumption on the patient side. This problem is accentuated when a large number of channels is used, as in current 128-256-electrode biopotential acquisition systems, which usually opt for an optical fibre link to the computer. An analogous problem occurs for simpler low-power acquisition platforms that transmit data through a wireless link to a computing platform. In this paper, a low-complexity encoding method is presented to decrease the sample data size without losses, while preserving the full DC-coupled signal. The method achieved an average compression ratio of 2.3, evaluated over an ECG and EMG signal bank acquired with equipment based on Sigma-Delta converters. It demands a very low processing load: a C language implementation is presented that achieved an average execution time of 110 clock cycles on an 8-bit microcontroller.
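
    The abstract does not specify the encoding scheme, so the sketch below shows one generic way such lossless compression of slowly varying 24-bit samples can work (first-order differences, zig-zag mapping, LEB128-style variable-length bytes); it is an assumption-laden illustration, not the authors' encoder, and the synthetic signal and printed ratio are illustrative only.

      import numpy as np

      def zigzag(v):
          return (v << 1) ^ (v >> 63)          # map signed to unsigned (valid for |v| < 2**63)

      def unzigzag(u):
          return (u >> 1) ^ -(u & 1)

      def encode(samples):
          """Toy lossless coder: first-order differences, zig-zag mapping, then
          LEB128-style variable-length bytes (7 payload bits per byte)."""
          out, prev = bytearray(), 0
          for s in samples:
              u = zigzag(int(s) - prev)
              prev = int(s)
              while True:
                  byte = u & 0x7F
                  u >>= 7
                  out.append(byte | (0x80 if u else 0x00))
                  if not u:
                      break
          return bytes(out)

      def decode(data):
          samples, prev, u, shift = [], 0, 0, 0
          for byte in data:
              u |= (byte & 0x7F) << shift
              shift += 7
              if not byte & 0x80:
                  prev += unzigzag(u)
                  samples.append(prev)
                  u, shift = 0, 0
          return samples

      rng = np.random.default_rng(5)
      t = np.arange(0, 10, 1 / 500)                                   # 10 s at 500 Hz
      ecg_like = (2e5 * np.sin(2 * np.pi * 1.2 * t)                   # slow baseline wander
                  + 5e2 * rng.standard_normal(t.size)).astype(np.int64)
      packed = encode(ecg_like)
      assert decode(packed) == [int(x) for x in ecg_like]             # lossless round trip
      print("compression ratio vs 3 bytes/sample:", 3 * ecg_like.size / len(packed))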

  9. Usefulness of a breath-holding acquisition method in PET/CT for pulmonary lesions

    International Nuclear Information System (INIS)

    Yamaguchi, Toshiaki; Ueda, Osamu; Hara, Hideyuki; Sakai, Hiroto; Kida, Tohru; Suzuki, Kayo; Adachi, Shuji; Ishii, Kazunari

    2009-01-01

    The objective of this study was to evaluate the usefulness of a breath-holding (BH) 18F-2-fluoro-2-deoxy-D-glucose positron emission tomography (18F-FDG PET) technique for PET/computed tomography (CT) scanning of pulmonary lesions near the diaphragm, where image quality is influenced by respiratory motion. In a basic study, simulated breath-holding PET (sBH-PET) data were acquired by repeating image acquisition eight times with the phantom fixed, at 15 s/bed. Free-breathing PET (FB-PET) was simulated by acquiring data while moving the phantom, at 120 s/bed (sFB-PET). Images with total acquisition times of 15 s, 30 s, 45 s, 60 s, and 120 s were generated for sBH-PET. Receiver-operating characteristic (ROC) analyses and tests of the statistical significance of differences between sFB-PET and sBH-PET images were performed. A total of 22 pulmonary lesions in 21 patients (12 men and 9 women, mean age 61.3±10.6 years; 10 benign lesions in 9 patients and 12 malignant lesions in 12 patients) were examined by FB-PET and BH-PET. For the evaluation of these two acquisition methods, displacement of the lesion between CT and PET was treated as a translation, and the statistical significance of differences in the maximum standardized uptake value (SUVmax) of the lesion was assessed using the paired t test. In the basic study, sBH-PET images with acquisition times of 45 s, 60 s, and 120 s had significantly higher diagnostic accuracy than the 120-s sFB-PET images. The SUVmax of the lesions in the BH-PET images was significantly higher than that in the FB-PET images (benign: 2.40±0.86 vs. 2.20±0.85, P=0.005; malignant: 4.84±2.16 vs. 3.75±2.11, P=0.001). BH-PET provides images with better diagnostic accuracy, avoids image degradation owing to respiratory motion, and yields more accurate attenuation correction. This method is very useful for overcoming the problem of respiratory motion. (author)

  10. Transport Methods Conquering the Seven-Dimensional Mountain

    International Nuclear Information System (INIS)

    Graziani, F; Olson, G

    2003-01-01

    In a wide variety of applications, a significant fraction of the momentum and energy present in a physical problem is carried by the transport of particles. Depending on the circumstances, the types of particles might involve some or all of photons, neutrinos, charged particles, or neutrons. In application areas that use transport, the computational time is usually dominated by the transport calculation. Therefore, there is a potential for great synergy; progress in transport algorithms could help quicken the time to solution for many applications. The complexity, and hence expense, involved in solving the transport problem can be understood by realizing that the general solution to the Boltzmann transport equation is seven dimensional: 3 spatial coordinates, 2 angles, 1 time, and 1 for speed or energy. Low-order approximations to the transport equation are frequently used due in part to physical justification but many times simply because a solution to the full transport problem is too computationally expensive. An example is the diffusion equation, which effectively drops the two angles in phase space by assuming that a linear representation in angle is adequate. Another approximation is the grey approximation, which drops the energy variable by averaging over it. If the grey approximation is applied to the diffusion equation, the expense of solving what amounts to the simplest possible description of transport is roughly equal to the cost of implicit computational fluid dynamics. It is clear therefore, that for those application areas needing some form of transport, fast, accurate and robust transport algorithms can lead to an increase in overall code performance and a decrease in time to solution. The seven-dimensional nature of transport means that factors of 100 or 1000 improvement in computer speed or memory are quickly absorbed in slightly higher resolution in space, angle, and energy. Therefore, the biggest advances in the last few years and in the next

  11. Single-acquisition method for simultaneous determination of extrinsic gamma-camera sensitivity and spatial resolution

    Energy Technology Data Exchange (ETDEWEB)

    Santos, J.A.M. [Servico de Fisica Medica, Instituto Portugues de Oncologia Francisco Gentil do Porto, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200-072 Porto (Portugal)], E-mail: a.miranda@portugalmail.pt; Sarmento, S. [Servico de Fisica Medica, Instituto Portugues de Oncologia Francisco Gentil do Porto, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200-072 Porto (Portugal); Alves, P.; Torres, M.C. [Departamento de Fisica da Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto (Portugal); Bastos, A.L. [Servico de Medicina Nuclear, Instituto Portugues de Oncologia Francisco Gentil do Porto, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200-072 Porto (Portugal); Ponte, F. [Servico de Fisica Medica, Instituto Portugues de Oncologia Francisco Gentil do Porto, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200-072 Porto (Portugal)

    2008-01-15

    A new method for measuring simultaneously both the extrinsic sensitivity and spatial resolution of a gamma-camera in a single planar acquisition was implemented. A dual-purpose phantom (SR phantom; sensitivity/resolution) was developed, tested and the results compared with other conventional methods used for separate determination of these two important image quality parameters. The SR phantom yielded reproducible and accurate results, allowing an immediate visual inspection of the spatial resolution as well as the quantitative determination of the contrast for six different spatial frequencies. It also proved to be useful in the estimation of the modulation transfer function (MTF) of the image formation collimator/detector system at six different frequencies and can be used to estimate the spatial resolution as function of the direction relative to the digital matrix of the detector.

  12. Single-acquisition method for simultaneous determination of extrinsic gamma-camera sensitivity and spatial resolution

    International Nuclear Information System (INIS)

    Santos, J.A.M.; Sarmento, S.; Alves, P.; Torres, M.C.; Bastos, A.L.; Ponte, F.

    2008-01-01

    A new method for simultaneously measuring both the extrinsic sensitivity and the spatial resolution of a gamma-camera in a single planar acquisition was implemented. A dual-purpose phantom (SR phantom; sensitivity/resolution) was developed and tested, and the results were compared with other conventional methods used for separate determination of these two important image quality parameters. The SR phantom yielded reproducible and accurate results, allowing an immediate visual inspection of the spatial resolution as well as the quantitative determination of the contrast for six different spatial frequencies. It also proved to be useful in the estimation of the modulation transfer function (MTF) of the collimator/detector image formation system at six different frequencies and can be used to estimate the spatial resolution as a function of the direction relative to the digital matrix of the detector.

  13. Motion-blurred star acquisition method of the star tracker under high dynamic conditions.

    Science.gov (United States)

    Sun, Ting; Xing, Fei; You, Zheng; Wei, Minsong

    2013-08-26

    The star tracker is one of the most promising attitude measurement devices used in spacecraft due to its extremely high accuracy. However, high dynamic performance is still one of its constraints. Smearing appears, making it more difficult to distinguish the energy-dispersed star point from the noise. An effective star acquisition approach for motion-blurred star images is proposed in this work. A correlation filter and a mathematical morphology algorithm are combined to enhance the signal energy and estimate the slowly varying background noise. The star point can be separated from most types of noise in this manner, making extraction and recognition easier. Partial image differentiation is then utilized to obtain the motion parameters from only one image of the star tracker based on the above process. Considering the motion model, a reference window is adopted to perform centroid determination. Star acquisition results of real on-orbit star images and laboratory validation experiments demonstrate that the method described in this work is effective and that the dynamic performance of the star tracker can be improved, with more stars identified and the position accuracy of the star points preserved.
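
    A minimal sketch of two ingredients named above (background suppression by grey-scale morphology and a windowed, intensity-weighted centroid) is given below. It is a generic illustration, not the authors' pipeline; the window size, structuring element, and synthetic image are assumptions.

```python
import numpy as np
from scipy import ndimage

def extract_star_centroid(image, window_center, half_size=8, opening_size=15):
    """Illustrative sketch (not the paper's exact pipeline): estimate the slowly
    varying background with a grey-scale morphological opening, subtract it, and
    compute an intensity-weighted centroid inside a reference window placed on
    the predicted (motion-compensated) star position."""
    background = ndimage.grey_opening(image, size=(opening_size, opening_size))
    residual = np.clip(image - background, 0, None)

    r0, c0 = window_center
    r_lo, r_hi = max(r0 - half_size, 0), min(r0 + half_size + 1, image.shape[0])
    c_lo, c_hi = max(c0 - half_size, 0), min(c0 + half_size + 1, image.shape[1])
    win = residual[r_lo:r_hi, c_lo:c_hi]
    if win.sum() == 0:
        return None                       # no star energy found in the window
    rows, cols = np.mgrid[r_lo:r_hi, c_lo:c_hi]
    return (np.sum(rows * win) / win.sum(), np.sum(cols * win) / win.sum())

# Hypothetical usage on a synthetic smeared star
img = np.random.poisson(5.0, (128, 128)).astype(float)
img[60:64, 60:75] += 50.0                 # an elongated (motion-blurred) star streak
print(extract_star_centroid(img, window_center=(62, 67)))
```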

  14. MR imaging of ore for heap bioleaching studies using pure phase encode acquisition methods

    Science.gov (United States)

    Fagan, Marijke A.; Sederman, Andrew J.; Johns, Michael L.

    2012-03-01

    Various MRI techniques were considered with respect to imaging of aqueous flow fields in low grade copper ore. Spin echo frequency encoded techniques were shown to produce unacceptable image distortions, which led to pure phase encoded techniques being considered. Single point imaging multiple point acquisition (SPI-MPA) and spin echo single point imaging (SESPI) techniques were applied. By direct comparison with X-ray tomographic images, both techniques were found to be able to produce distortion-free images of the ore packings at 2 T. The signal to noise ratios (SNRs) of the SESPI images were found to be superior to SPI-MPA for equal total acquisition times; this was explained based on NMR relaxation measurements. SESPI was also found to produce suitable images for a range of particle sizes, whereas the SPI-MPA SNR deteriorated markedly as particle size was reduced. Comparisons on a 4.7 T magnet showed significant signal loss from the SPI-MPA images, the effect of which was accentuated in the case of unsaturated flowing systems. Hence it was concluded that SESPI was the most robust imaging method for the study of copper ore heap leaching hydrology.

  15. Three-dimensional seismic survey planning based on the newest data acquisition design technique; Saishin no data shutoku design ni motozuku sanjigen jishin tansa keikaku

    Energy Technology Data Exchange (ETDEWEB)

    Minehara, M; Nakagami, K; Tanaka, H [Japan National Oil Corp., Tokyo (Japan). Technology Research Center

    1996-10-01

    The theory of parameter setting for data acquisition is reviewed, mainly with respect to the source and receiver geometry. This paper also introduces an example of survey planning for a three-dimensional land seismic exploration currently in progress. For the design of data acquisition, fundamental parameters are first determined on the basis of the characteristics of reflection records in the given district, and the survey layout is then determined. In this study, information from modeling based on the existing interpretation of geologic structures is also used, so that it is reflected in the survey specifications. A land three-dimensional seismic survey was designed. The ground surface of the survey area consists of rice fields and hilly terrain. The target was a nose-shaped structure at a depth of about 2,500 m. A survey area of 4 km × 5 km was set. Records from the shallow layers could not be obtained where near offsets were not ensured, so quality control of the near-offset distribution was important for resolving the required shallow structure. In this survey, source points could be secured more readily than initially expected, which resulted in sufficient near-offset coverage. 2 refs., 2 figs.

  16. Parametric study on single shot peening by dimensional analysis method incorporated with finite element method

    Science.gov (United States)

    Wu, Xian-Qian; Wang, Xi; Wei, Yan-Peng; Song, Hong-Wei; Huang, Chen-Guang

    2012-06-01

    Shot peening is a widely used surface treatment method that generates compressive residual stress near the surface of metallic materials to increase fatigue life and resistance to corrosion fatigue, cracking, etc. Compressive residual stress and the dent profile are important factors for evaluating the effectiveness of the shot peening process. In this paper, the influence of dimensionless parameters on the maximum compressive residual stress and the maximum depth of the dent was investigated. Firstly, dimensionless relations of the processing parameters that affect the maximum compressive residual stress and the maximum depth of the dent were deduced by the dimensional analysis method. Secondly, the influence of each dimensionless parameter on the dimensionless variables was investigated by the finite element method. Furthermore, related empirical formulas were given for each dimensionless parameter based on the simulation results. Finally, the simulation results were compared with the empirical formulas and found to be in good agreement, showing that the paper provides a useful approach for analyzing the influence of each individual parameter.
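
    For readers unfamiliar with the dimensional-analysis step, the LaTeX sketch below shows the kind of dimensionless grouping Buckingham's Pi theorem produces for a single shot impact. The specific groups are illustrative assumptions, not necessarily the relations derived in the paper.

```latex
% Illustrative only (not the paper's exact groups): for a spherical shot of diameter d,
% density \rho_s and velocity v impacting a target with yield stress \sigma_y and
% density \rho_t, Buckingham's Pi theorem suggests relations of the form
\frac{\sigma_{r}^{\max}}{\sigma_y}
  = f_1\!\left(\frac{\rho_s v^2}{\sigma_y},\ \frac{\rho_s}{\rho_t}\right),
\qquad
\frac{h^{\max}}{d}
  = f_2\!\left(\frac{\rho_s v^2}{\sigma_y},\ \frac{\rho_s}{\rho_t}\right),
% where \rho_s v^2/\sigma_y is the familiar damage (Johnson) number used in impact problems.
```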

  17. New exact solutions of the (2 + 1)-dimensional breaking soliton system via an extended mapping method

    International Nuclear Information System (INIS)

    Ma Songhua; Fang Jianping; Zheng Chunlong

    2009-01-01

    By means of an extended mapping method and a variable separation method, a series of solitary wave solutions, periodic wave solutions and variable separation solutions to the (2 + 1)-dimensional breaking soliton system is derived.

  18. NMR and pattern recognition methods in metabolomics: From data acquisition to biomarker discovery: A review

    Energy Technology Data Exchange (ETDEWEB)

    Smolinska, Agnieszka, E-mail: A.Smolinska@science.ru.nl [Institute for Molecules and Materials, Radboud University Nijmegen, Nijmegen (Netherlands); Blanchet, Lionel [Institute for Molecules and Materials, Radboud University Nijmegen, Nijmegen (Netherlands); Department of Biochemistry, Nijmegen Centre for Molecular Life Sciences, Radboud University Nijmegen Medical Centre, Nijmegen (Netherlands); Buydens, Lutgarde M.C.; Wijmenga, Sybren S. [Institute for Molecules and Materials, Radboud University Nijmegen, Nijmegen (Netherlands)

    2012-10-31

    Highlights: ► Procedures for acquisition of different biofluids by NMR. ► Recent developments in metabolic profiling of different biofluids by NMR are presented. ► The crucial steps involved in data preprocessing and multivariate chemometric analysis are reviewed. ► Emphasis is given to recent findings on Multiple Sclerosis via NMR and pattern recognition methods. - Abstract: Metabolomics is the discipline where endogenous and exogenous metabolites are assessed, identified and quantified in different biological samples. Metabolites are crucial components of a biological system and highly informative about its functional state, due to their closeness to functional endpoints and to the organism's phenotypes. Nuclear Magnetic Resonance (NMR) spectroscopy, next to Mass Spectrometry (MS), is one of the main metabolomics analytical platforms. The technological developments in the field of NMR spectroscopy have enabled the identification and quantitative measurement of many metabolites in a single sample of biofluid in a non-targeted and non-destructive manner. The combination of NMR spectra of biofluids and pattern recognition methods has driven forward the application of metabolomics in the field of biomarker discovery. The importance of metabolomics in diagnostics, e.g. in identifying biomarkers or defining pathological status, has been growing exponentially, as evidenced by the number of published papers. In this review, we describe the developments in data acquisition and multivariate analysis of NMR-based metabolomics data, with particular emphasis on the metabolomics of Cerebrospinal Fluid (CSF) and biomarker discovery in Multiple Sclerosis (MScl).

  19. Design and Implementation of Data Acquisition System Based on Digital Filtering Method for the Electrical Capacitance Tomography

    Directory of Open Access Journals (Sweden)

    LI Yang

    2017-02-01

    Full Text Available Aiming at the problem of high-frequency noise interference in the ECT data acquisition system, and on the basis of an analysis of the ECT system's data acquisition and control principles, we designed an FIR low-pass digital filter based on an improved distributed arithmetic (DA) algorithm, combining FPGA technology with digital filtering principles. The sampling frequency of the filter is 1.5 MHz, the pass-band cutoff frequency is 20 MHz, and the design method is the window-function method. We used the FDATool toolbox in Matlab to extract and quantize the filter coefficients, and Quartus to run the simulation. Experimental results showed that the FIR digital filter can filter out the high-frequency signal in the data acquisition system. Compared with the traditional DA algorithm, it has the advantages of low resource consumption and high acquisition speed, among other characteristics.
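
    The sketch below illustrates window-method FIR design and coefficient quantization of the kind described, using SciPy; the tap count, cutoff, and word length are placeholders (the abstract's cutoff value appears garbled), and the fixed-point model is a simplification of what an FPGA implementation would do.

```python
import numpy as np
from scipy import signal

# Illustrative window-method FIR low-pass design and coefficient quantisation,
# in the spirit of the abstract (all values below are placeholders, not the paper's).
fs = 1.5e6          # sampling frequency [Hz]
fc = 50e3           # pass-band cutoff [Hz] (placeholder)
numtaps = 63

taps = signal.firwin(numtaps, cutoff=fc, fs=fs, window="hamming")

# Quantise coefficients to 16-bit signed integers, as one would before loading
# them into FPGA multiply-accumulate (or distributed-arithmetic) hardware.
q = 15
taps_fixed = np.round(taps * 2**q).astype(np.int16)

def fir_filter_fixed(x, coeffs, q):
    """Filter a signal with the quantised coefficients and rescale the result."""
    y = np.convolve(x, coeffs.astype(np.int64), mode="same")
    return y / 2**q

t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 10e3 * t) + 0.3 * np.sin(2 * np.pi * 400e3 * t)  # signal + HF noise
y = fir_filter_fixed(x, taps_fixed, q)
print(np.abs(y).max())
```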

  20. Application of Exp-function method for (2 + 1)-dimensional nonlinear evolution equations

    International Nuclear Information System (INIS)

    Bekir, Ahmet; Boz, Ahmet

    2009-01-01

    In this paper, the Exp-function method is used to construct solitary and soliton solutions of (2 + 1)-dimensional nonlinear evolution equations. (2 + 1)-dimensional breaking soliton (Calogero) equation, modified Zakharov-Kuznetsov and Konopelchenko-Dubrovsky equations are chosen to illustrate the effectiveness of the method. The method is straightforward and concise, and its applications are promising. The Exp-function method presents a wider applicability for handling nonlinear wave equations.
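
    For reference, the Exp-function method seeks travelling-wave solutions as a ratio of finite exponential sums. The LaTeX block below gives the generic ansatz used throughout this literature; it is background, not an excerpt from the paper.

```latex
% Standard Exp-function ansatz (generic form): a travelling wave u(x,y,t)=u(\eta),
% with \eta = kx + ly + \omega t, is sought as a ratio of finite exponential sums,
u(\eta) \;=\;
\frac{\sum_{n=-c}^{d} a_n \, e^{n\eta}}{\sum_{m=-p}^{q} b_m \, e^{m\eta}},
% where c, d, p, q are fixed by balancing the highest-order linear and nonlinear
% terms, and the coefficients a_n, b_m follow from the resulting algebraic system.
```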

  1. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    OpenAIRE

    Zekić-Sušac, Marijana; Pfeifer, Sanja; Šarlija, Nataša

    2014-01-01

    Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART ...

  2. Solution of the two-dimensional space-time reactor kinetics equation by a locally one-dimensional method

    International Nuclear Information System (INIS)

    Chen, G.S.; Christenson, J.M.

    1985-01-01

    In this paper, the authors present some initial results from an investigation of the application of a locally one-dimensional (LOD) finite difference method to the solution of the two-dimensional, two-group reactor kinetics equations. Although the LOD method is relatively well known, it apparently has not been previously applied to the space-time kinetics equations. In this investigation, the LOD results were benchmarked against similar computational results (using the same computing environment, the same programming structure, and the same sample problems) obtained by the TWIGL program. For all of the problems considered, the LOD method provided accurate results in one-half to one-eighth of the time required by the TWIGL program.
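
    The LOD idea itself is easy to demonstrate on a scalar model problem: each time step is split into an implicit one-dimensional sweep in x followed by one in y, so only tridiagonal systems are solved. The sketch below applies this to a plain 2D diffusion equation; the two-group kinetics case of the record adds group coupling, sources, and feedback that are omitted here.

```python
import numpy as np
from scipy.linalg import solve_banded

def lod_step(u, dt, dx, D=1.0):
    """One locally one-dimensional (LOD) time step for u_t = D (u_xx + u_yy) on a
    square grid with zero boundary values: an implicit sweep in x, then one in y."""
    n = u.shape[0]
    r = D * dt / dx**2
    # Banded (tridiagonal) form of (I - dt*D*d^2/dx^2) for solve_banded
    ab = np.zeros((3, n))
    ab[0, 1:] = -r
    ab[1, :] = 1 + 2 * r
    ab[2, :-1] = -r

    half = np.empty_like(u)
    for j in range(n):                      # implicit sweep along x (axis 0)
        half[:, j] = solve_banded((1, 1), ab, u[:, j])
    new = np.empty_like(u)
    for i in range(n):                      # implicit sweep along y (axis 1)
        new[i, :] = solve_banded((1, 1), ab, half[i, :])
    return new

# Hypothetical usage: diffuse an initial hot spot
n, dx, dt = 64, 1.0 / 64, 1e-4
u = np.zeros((n, n)); u[n // 2, n // 2] = 1.0
for _ in range(100):
    u = lod_step(u, dt, dx)
print(u.max())
```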

  3. An evaluation of inexpensive methods for root image acquisition when using rhizotrons.

    Science.gov (United States)

    Mohamed, Awaz; Monnier, Yogan; Mao, Zhun; Lobet, Guillaume; Maeght, Jean-Luc; Ramel, Merlin; Stokes, Alexia

    2017-01-01

    Belowground processes play an essential role in ecosystem nutrient cycling and the global carbon budget cycle. Quantifying fine root growth is crucial to the understanding of ecosystem structure and function and in predicting how ecosystems respond to climate variability. A better understanding of root system growth is necessary, but choosing the best method of observation is complex, especially in the natural soil environment. Here, we compare five methods of root image acquisition using inexpensive technology that is currently available on the market: flatbed scanner, handheld scanner, manual tracing, a smartphone application scanner and a time-lapse camera. Using the five methods, root elongation rate (RER) was measured for three months, on roots of hybrid walnut ( Juglans nigra  ×  Juglans regia L.) in rhizotrons installed in agroforests. When all methods were compared together, there were no significant differences in relative cumulative root length. However, the time-lapse camera and the manual tracing method significantly overestimated the relative mean diameter of roots compared to the three scanning methods. The smartphone scanning application was found to perform best overall when considering image quality and ease of use in the field. The automatic time-lapse camera was useful for measuring RER over several months without any human intervention. Our results show that inexpensive scanning and automated methods provide correct measurements of root elongation and length (but not diameter when using the time-lapse camera). These methods are capable of detecting fine roots to a diameter of 0.1 mm and can therefore be selected by the user depending on the data required.

  4. Bone surface enhancement in ultrasound images using a new Doppler-based acquisition/processing method

    Science.gov (United States)

    Yang, Xu; Tang, Songyuan; Tasciotti, Ennio; Righetti, Raffaella

    2018-01-01

    Ultrasound (US) imaging has long been considered as a potential aid in orthopedic surgeries. US technologies are safe, portable and do not use radiations. This would make them a desirable tool for real-time assessment of fractures and to monitor fracture healing. However, image quality of US imaging methods in bone applications is limited by speckle, attenuation, shadow, multiple reflections and other imaging artifacts. While bone surfaces typically appear in US images as somewhat ‘brighter’ than soft tissue, they are often not easily distinguishable from the surrounding tissue. Therefore, US imaging methods aimed at segmenting bone surfaces need enhancement in image contrast prior to segmentation to improve the quality of the detected bone surface. In this paper, we present a novel acquisition/processing technique for bone surface enhancement in US images. Inspired by elastography and Doppler imaging methods, this technique takes advantage of the difference between the mechanical and acoustic properties of bones and those of soft tissues to make the bone surface more easily distinguishable in US images. The objective of this technique is to facilitate US-based bone segmentation methods and improve the accuracy of their outcomes. The newly proposed technique is tested both in in vitro and in vivo experiments. The results of these preliminary experiments suggest that the use of the proposed technique has the potential to significantly enhance the detectability of bone surfaces in noisy ultrasound images.

  5. Bone surface enhancement in ultrasound images using a new Doppler-based acquisition/processing method.

    Science.gov (United States)

    Yang, Xu; Tang, Songyuan; Tasciotti, Ennio; Righetti, Raffaella

    2018-01-17

    Ultrasound (US) imaging has long been considered as a potential aid in orthopedic surgeries. US technologies are safe, portable and do not use radiations. This would make them a desirable tool for real-time assessment of fractures and to monitor fracture healing. However, image quality of US imaging methods in bone applications is limited by speckle, attenuation, shadow, multiple reflections and other imaging artifacts. While bone surfaces typically appear in US images as somewhat 'brighter' than soft tissue, they are often not easily distinguishable from the surrounding tissue. Therefore, US imaging methods aimed at segmenting bone surfaces need enhancement in image contrast prior to segmentation to improve the quality of the detected bone surface. In this paper, we present a novel acquisition/processing technique for bone surface enhancement in US images. Inspired by elastography and Doppler imaging methods, this technique takes advantage of the difference between the mechanical and acoustic properties of bones and those of soft tissues to make the bone surface more easily distinguishable in US images. The objective of this technique is to facilitate US-based bone segmentation methods and improve the accuracy of their outcomes. The newly proposed technique is tested both in in vitro and in vivo experiments. The results of these preliminary experiments suggest that the use of the proposed technique has the potential to significantly enhance the detectability of bone surfaces in noisy ultrasound images.

  6. Prospective Foreign Language Teachers' Preference of Teaching Methods for the Language Acquisition Course in Turkish Higher Education

    Science.gov (United States)

    GüvendIr, Emre

    2013-01-01

    Considering the significance of taking student preferences into account while organizing teaching practices, the current study explores which teaching method prospective foreign language teachers mostly prefer their teacher to use in the language acquisition course. A teaching methods evaluation form that includes six commonly used teaching…

  7. Methods for preparation of three-dimensional bodies

    Science.gov (United States)

    Mulligan, Anthony C.; Rigali, Mark J.; Sutaria, Manish P.; Artz, Gregory J.; Gafner, Felix H.; Vaidyanathan, K. Ranji

    2004-09-28

    Processes for mechanically fabricating two and three-dimensional fibrous monolith composites include preparing a fibrous monolith filament from a core composition of a first powder material and a boundary material of a second powder material. The filament includes a first portion of the core composition surrounded by a second portion of the boundary composition. One or more filaments are extruded through a mechanically-controlled deposition nozzle onto a working surface to create a fibrous monolith composite object. The objects may be formed directly from computer models and have complex geometries.

  8. Incremental Knowledge Acquisition for WSD: A Rough Set and IL based Method

    Directory of Open Access Journals (Sweden)

    Xu Huang

    2015-07-01

    Full Text Available Word sense disambiguation (WSD) is one of the trickier tasks in natural language processing (NLP), as it needs to take into full account all the complexities of language. Because WSD involves discovering semantic structures in unstructured text, automatic knowledge acquisition for word senses is profoundly difficult. To acquire knowledge about Chinese multi-sense verbs, we introduce an incremental machine learning method which combines the rough set method with instance-based learning. First, the context of a multi-sense verb is extracted into a table; its sense is annotated by a skilled human and stored in the same table. In this way a decision table is formed, and rules can then be extracted within the framework of attribute-value reduction in rough set theory. Instances not entailed by any rule are treated as outliers. When new instances are added to the decision table, only the newly added instances and the outliers need to be learned further, and incremental learning is thus achieved. Experiments show that the size of the decision table can be reduced dramatically by this method without a decline in performance.

  9. One-dimensional calculation of flow branching using the method of characteristics

    International Nuclear Information System (INIS)

    Meier, R.W.; Gido, R.G.

    1978-05-01

    In one-dimensional flow systems, the flow often branches, such as at a tee or manifold. This study develops a formulation for calculating the flow through branch points with the one-dimensional method-of-characteristics equations. The resultant equations were verified by comparison with experimental measurements.

  10. One-dimensional treatment of polyatomic crystals by the Laplace transform method

    International Nuclear Information System (INIS)

    Rosato, A.; Santana, P.H.A.

    1976-01-01

    The one-dimensional periodic potential problem is solved using the Laplace transform method, and a condensed expression for the E versus k relation and the effective mass of one electron in a polyatomic structure is determined. Applications related to the effect of the asymmetry of the potential on the one-dimensional band structure are discussed.

  11. Moderator feedback effects in two-dimensional nodal methods for pressurized water reactor analysis

    International Nuclear Information System (INIS)

    Downar, T.J.

    1987-01-01

    A method was developed for incorporating moderator feedback effects in two-dimensional nodal codes used for pressurized water reactor (PWR) neutronic analysis. Equations for the assembly-average quality and density are developed in terms of the assembly power calculated in two dimensions. The method is validated with a Westinghouse PWR using the Electric Power Research Institute code SIMULATE-E. Results show that a several-percent improvement is achieved in the two-dimensional power distribution prediction compared to methods without moderator feedback.

  12. Improved non-dimensional dynamic influence function method based on two-domain method for vibration analysis of membranes

    Directory of Open Access Journals (Sweden)

    SW Kang

    2015-02-01

    Full Text Available This article introduces an improved non-dimensional dynamic influence function method using a sub-domain method for efficiently extracting the eigenvalues and mode shapes of concave membranes with arbitrary shapes. The non-dimensional dynamic influence function (NDIF) method, which was developed by the authors in 1999, gives highly accurate eigenvalues for membranes, plates, and acoustic cavities compared with the finite element method. However, it requires the inefficient procedure of calculating the singularity of a system matrix over the frequency range of interest in order to extract eigenvalues and mode shapes. To overcome this inefficiency, this article proposes a practical approach that recasts the system matrix equation of the concave membrane of interest into the form of an algebraic eigenvalue problem. Several case studies show that the proposed method has good convergence characteristics and yields very accurate eigenvalues compared with an exact method and the finite element method (ANSYS).

  13. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    Energy Technology Data Exchange (ETDEWEB)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn [School of Information Science and Technology, ShanghaiTech University, Shanghai 200031 (China); Lin, Guang, E-mail: guanglin@purdue.edu [Department of Mathematics & School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States)

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  14. Comparison of beam position calculation methods for application in digital acquisition systems

    Science.gov (United States)

    Reiter, A.; Singh, R.

    2018-05-01

    Different approaches to the data analysis of beam position monitors in hadron accelerators are compared adopting the perspective of an analog-to-digital converter in a sampling acquisition system. Special emphasis is given to position uncertainty and robustness against bias and interference that may be encountered in an accelerator environment. In a time-domain analysis of data in the presence of statistical noise, the position calculation based on the difference-over-sum method with algorithms like signal integral or power can be interpreted as a least-squares analysis of a corresponding fit function. This link to the least-squares method is exploited in the evaluation of analysis properties and in the calculation of position uncertainty. In an analytical model and experimental evaluations the positions derived from a straight line fit or equivalently the standard deviation are found to be the most robust and to offer the least variance. The measured position uncertainty is consistent with the model prediction in our experiment, and the results of tune measurements improve significantly.
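
    A minimal sketch of the position estimators being compared is given below: the difference-over-sum formula applied to two different signal measures (sum of samples and standard deviation), plus the equivalent straight-line-fit form. The sensitivity constant and the synthetic electrode signals are placeholders, not values from the record.

```python
import numpy as np

def dos_position(sig_r, sig_l, measure, k=10.0):
    """Difference-over-sum position from two baseline-subtracted electrode signals.
    `measure` maps a sample vector to a scalar (e.g. np.sum or np.std);
    k is the monitor position sensitivity in mm (placeholder value)."""
    sr, sl = measure(sig_r), measure(sig_l)
    return k * (sr - sl) / (sr + sl)

def slope_position(sig_r, sig_l, k=10.0):
    """Equivalent formulation via a straight-line fit r_i = a * l_i through the
    sample pairs; the position follows as x = k (a - 1)/(a + 1)."""
    a = np.dot(sig_r, sig_l) / np.dot(sig_l, sig_l)   # least-squares slope, zero intercept
    return k * (a - 1.0) / (a + 1.0)

# Hypothetical bunch signals with noise; the right electrode sees 20% more signal
t = np.linspace(0, 1, 200)
pulse = np.exp(-((t - 0.5) / 0.05) ** 2)
rng = np.random.default_rng(0)
sig_r = 1.2 * pulse + 0.01 * rng.standard_normal(t.size)
sig_l = 1.0 * pulse + 0.01 * rng.standard_normal(t.size)

print(dos_position(sig_r, sig_l, measure=np.sum))       # "integral" signal measure
print(dos_position(sig_r, sig_l, measure=np.std))       # "standard deviation" measure
print(slope_position(sig_r, sig_l))                     # straight-line fit
```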

  15. No-gold-standard evaluation of image-acquisition methods using patient data.

    Science.gov (United States)

    Jha, Abhinav K; Frey, Eric

    2017-02-11

    Several new and improved modalities, scanners, and protocols, together referred to as image-acquisition methods (IAMs), are being developed to provide reliable quantitative imaging. Objective evaluation of these IAMs on the clinically relevant quantitative tasks is highly desirable. Such evaluation is most reliable and clinically decisive when performed with patient data, but that requires the availability of a gold standard, which is often rare. While no-gold-standard (NGS) techniques have been developed to clinically evaluate quantitative imaging methods, these techniques require that each of the patients be scanned using all the IAMs, which is expensive, time consuming, and could lead to increased radiation dose. A more clinically practical scenario is where different set of patients are scanned using different IAMs. We have developed an NGS technique that uses patient data where different patient sets are imaged using different IAMs to compare the different IAMs. The technique posits a linear relationship, characterized by a slope, bias, and noise standard-deviation term, between the true and measured quantitative values. Under the assumption that the true quantitative values have been sampled from a unimodal distribution, a maximum-likelihood procedure was developed that estimates these linear relationship parameters for the different IAMs. Figures of merit can be estimated using these linear relationship parameters to evaluate the IAMs on the basis of accuracy, precision, and overall reliability. The proposed technique has several potential applications such as in protocol optimization, quantifying difference in system performance, and system harmonization using patient data.

  16. Data acquisition systems for uses of multi-counter time analyzer and one-dimensional PSD pulse height analyzer to neutron scattering measurements

    International Nuclear Information System (INIS)

    Ono, Masayoshi; Tasaki, Seiji; Okamoto, Sunao

    1989-01-01

    A data acquisition system incorporating various modern electronic devices was designed and tested for practical neutron time-of-flight (TOF) measurements with multiple counters. The system is principally composed of TOF logic units (loadable up to 128 units) with a control unit and a conventional micro-computer. The TOF logic unit (main memory, 2048 ch, 24 bits/ch) demonstrates about 1.7 times higher efficiency in neutron counting rate per channel than a conventional TOF logic unit. In addition, some data-access functions of the TOF logic unit were applied to a position-sensitive analyzer for a one-dimensional neutron PSD used in small-angle scattering. The analyzer was tested with a pulse generator and showed good linearity. (author)

  17. Food acquisition methods and correlates of food insecurity in adults on probation in Rhode Island

    Science.gov (United States)

    Stopka, Thomas J.; Beckwith, Curt G.

    2018-01-01

    Background Individuals under community corrections supervision may be at increased risk for food insecurity because they face challenges similar to other marginalized populations, such as people experiencing housing instability or substance users. The prevalence of food insecurity and its correlates have not been studied in the community corrections population. Methods We conducted a cross-sectional study in 2016, surveying 304 probationers in Rhode Island to estimate the prevalence of food insecurity, identify food acquisition methods, and determine characteristics of groups most at-risk for food insecurity. We used chi-square and Fisher’s exact tests to assess differences in sociodemographics and eating and food acquisition patterns, GIS to examine geospatial differences, and ordinal logistic regression to identify independent correlates across the four levels of food security. Results Nearly three-quarters (70.4%) of the participants experienced food insecurity, with almost half (48.0%) having very low food security. This is substantially higher than the general population within the state of Rhode Island, which reported a prevalence of 12.8% food insecurity with 6.1% very low food security in 2016. Participants with very low food security most often acquired lunch foods from convenience stores (and less likely from grocery stores) compared to the other three levels of food security. Participants did not differ significantly with regards to places for food acquisition related to breakfast or dinner meals based upon food security status. In adjusted models, being homeless (AOR 2.34, 95% CI: 1.31, 4.18) and depressed (AOR 3.12, 95% CI: 1.98, 4.91) were independently associated with a greater odds of being in a food insecure group. Compared to having help with meals none of the time, participants who reported having meal help all of the time (AOR 0.28, 95% CI: 0.12, 0.64), most of the time (AOR 0.31, 95% CI: 0.15, 0.61), and some of the time (AOR 0.54, 95% CI: 0.29, 0.98) had a lower odds of being in a food insecure group.

  18. Effective method for construction of low-dimensional models for heat transfer process

    Energy Technology Data Exchange (ETDEWEB)

    Blinov, D.G.; Prokopov, V.G.; Sherenkovskii, Y.V.; Fialko, N.M.; Yurchuk, V.L. [National Academy of Sciences of Ukraine, Kiev (Ukraine). Inst. of Engineering Thermophysics

    2004-12-01

    A low-dimensional model based on the method of proper orthogonal decomposition (POD) and the method of polyargumental systems (MPS) for thermal conductivity problems with a strongly localized source of heat has been presented. The key aspect of these methods is that they make it possible to avoid a weak point of other projection methods, namely the a priori choice of basis functions. This enables us to use the MPS method and the POD method as convenient means to construct low-dimensional models of heat and mass transfer problems. (Author)
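
    The POD half of the approach is commonly computed from a snapshot matrix via the singular value decomposition. The sketch below shows that step only, with an energy-based truncation; the snapshot data and tolerance are assumptions, and the MPS coupling described in the record is not represented.

```python
import numpy as np

def pod_modes(snapshots, energy=0.999):
    """Proper orthogonal decomposition of a snapshot matrix (columns = temperature
    fields at different instants). Returns the dominant spatial modes and their
    time coefficients; a minimal sketch of the POD step only."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)   # centre the snapshots
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1               # smallest rank capturing `energy`
    modes = U[:, :r]
    coeffs = np.diag(s[:r]) @ Vt[:r, :]
    return modes, coeffs

# Hypothetical usage: a localized heat source wandering along a 1-D rod
x = np.linspace(0, 1, 200)
snaps = np.column_stack([np.exp(-((x - c) ** 2) / 0.01) for c in np.linspace(0.3, 0.7, 40)])
modes, coeffs = pod_modes(snaps)
print(modes.shape, coeffs.shape)   # low-dimensional representation of 40 snapshots
```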

  19. Three-dimensional compound comparison methods and their application in drug discovery.

    Science.gov (United States)

    Shin, Woong-Hee; Zhu, Xiaolei; Bures, Mark Gregory; Kihara, Daisuke

    2015-07-16

    Virtual screening has been widely used in the drug discovery process. Ligand-based virtual screening (LBVS) methods compare a library of compounds with a known active ligand. Two notable advantages of LBVS methods are that they do not require structural information of a target receptor and that they are faster than structure-based methods. LBVS methods can be classified based on the complexity of ligand structure information utilized: one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D). Unlike 1D and 2D methods, 3D methods can have enhanced performance since they treat the conformational flexibility of compounds. In this paper, a number of 3D methods will be reviewed. In addition, four representative 3D methods were benchmarked to understand their performance in virtual screening. Specifically, we tested overall performance in key aspects including the ability to find dissimilar active compounds, and computational speed.

  20. Three-Dimensional Compound Comparison Methods and Their Application in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Woong-Hee Shin

    2015-07-01

    Full Text Available Virtual screening has been widely used in the drug discovery process. Ligand-based virtual screening (LBVS) methods compare a library of compounds with a known active ligand. Two notable advantages of LBVS methods are that they do not require structural information of a target receptor and that they are faster than structure-based methods. LBVS methods can be classified based on the complexity of ligand structure information utilized: one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D). Unlike 1D and 2D methods, 3D methods can have enhanced performance since they treat the conformational flexibility of compounds. In this paper, a number of 3D methods will be reviewed. In addition, four representative 3D methods were benchmarked to understand their performance in virtual screening. Specifically, we tested overall performance in key aspects including the ability to find dissimilar active compounds, and computational speed.

  1. Research on a Rotating Machinery Fault Prognosis Method Using Three-Dimensional Spatial Representations

    Directory of Open Access Journals (Sweden)

    Xiaoni Dong

    2016-01-01

    Full Text Available Process models and parameters are two critical steps for fault prognosis in the operation of rotating machinery. Due to the requirement for a short and rapid response, it is important to study robust sensor data representation schemes. However, the conventional holospectrum defined by one-dimensional or two-dimensional methods does not sufficiently present this information in both the frequency and time domains. To supply a complete holospectrum model, a new three-dimensional spatial representation method is proposed. This method integrates improved three-dimensional (3D) holospectra and 3D filtered orbits, leading to the integration of radial and axial vibration features in one bearing section. The results from simulation and experimental analysis on a complex compressor show that the proposed method can present the real operational status and clearly reveal early faults, thus demonstrating great potential for condition-based maintenance prediction in industrial machinery.

  2. Similarity measurement method of high-dimensional data based on normalized net lattice subspace

    Institute of Scientific and Technical Information of China (English)

    Li Wenfa; Wang Gongming; Li Ke; Huang Su

    2017-01-01

    The performance of conventional similarity measurement methods is seriously affected by the curse of dimensionality of high-dimensional data. The reason is that differences in sparse and noisy dimensions account for a large proportion of the similarity value, making any two results appear dissimilar. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used, and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two to three orders of magnitude higher than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which is suitable for similarity analysis after dimensionality reduction.
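
    A toy version of the interval idea is sketched below: each dimension is binned into equal intervals and only components falling in the same or an adjacent bin contribute, with the result kept in [0, 1]. The per-component similarity formula is an assumption for illustration, not the authors' exact definition.

```python
import numpy as np

def lattice_similarity(a, b, mins, maxs, n_intervals=10):
    """Sketch of the normalized net-lattice idea as described in the abstract
    (not the authors' exact formula): each dimension's range is split into
    intervals, and only components falling in the same or an adjacent interval
    contribute to the similarity, which stays in [0, 1]."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    mins, maxs = np.asarray(mins, float), np.asarray(maxs, float)
    width = (maxs - mins) / n_intervals
    ia = np.clip(((a - mins) / width).astype(int), 0, n_intervals - 1)
    ib = np.clip(((b - mins) / width).astype(int), 0, n_intervals - 1)
    usable = np.abs(ia - ib) <= 1                    # same or adjacent interval only
    if not usable.any():
        return 0.0
    # per-dimension similarity on the usable components, normalised by two interval widths
    per_dim = 1.0 - np.abs(a[usable] - b[usable]) / (2.0 * width[usable])
    return float(np.clip(per_dim, 0.0, 1.0).mean())

# Hypothetical usage on 100-dimensional points in the unit hypercube
rng = np.random.default_rng(1)
lo, hi = np.zeros(100), np.ones(100)
p, q = rng.random(100), rng.random(100)
print(lattice_similarity(p, q, lo, hi))
```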

  3. A computational method for the solution of one-dimensional ...

    Indian Academy of Sciences (India)

    embedding parameter p ∈ [0, 1], which is considered as a 'small parameter'. Considerable research work has recently been conducted in applying this method to a class of linear and nonlinear equations. This method was further developed and improved by He, and applied to nonlinear oscillators with discontinuities [1], ...

  4. The "SAFARI" Method of Collection Study and Cooperative Acquisition for a Multi-Library Cooperative. A Manual of Procedures.

    Science.gov (United States)

    Sinclair, Dorothy

    This document examines the importance and difficulties in resource sharing and acquisition by libraries and introduces the procedures of the Site Appraisal for Area Resources Inventory (SAFARI) system as a method of comparative evaluation of subject collections among a group of libraries. Resource, or collection, sharing offers specific…

  5. Food acquisition methods and correlates of food insecurity in adults on probation in Rhode Island.

    Science.gov (United States)

    Dong, Kimberly R; Tang, Alice M; Stopka, Thomas J; Beckwith, Curt G; Must, Aviva

    2018-01-01

    Individuals under community corrections supervision may be at increased risk for food insecurity because they face challenges similar to other marginalized populations, such as people experiencing housing instability or substance users. The prevalence of food insecurity and its correlates have not been studied in the community corrections population. We conducted a cross-sectional study in 2016, surveying 304 probationers in Rhode Island to estimate the prevalence of food insecurity, identify food acquisition methods, and determine characteristics of groups most at-risk for food insecurity. We used chi-square and Fisher's exact tests to assess differences in sociodemographics and eating and food acquisition patterns, GIS to examine geospatial differences, and ordinal logistic regression to identify independent correlates across the four levels of food security. Nearly three-quarters (70.4%) of the participants experienced food insecurity, with almost half (48.0%) having very low food security. This is substantially higher than the general population within the state of Rhode Island, which reported a prevalence of 12.8% food insecurity with 6.1% very low food security in 2016. Participants with very low food security most often acquired lunch foods from convenience stores (and less likely from grocery stores) compared to the other three levels of food security. Participants did not differ significantly with regards to places for food acquisition related to breakfast or dinner meals based upon food security status. In adjusted models, being homeless (AOR 2.34, 95% CI: 1.31, 4.18) and depressed (AOR 3.12, 95% CI: 1.98, 4.91) were independently associated with a greater odds of being in a food insecure group. Compared to having help with meals none of the time, participants who reported having meal help all of the time (AOR 0.28, 95% CI: 0.12, 0.64), most of the time (AOR 0.31, 95% CI: 0.15, 0.61), and some of the time (AOR 0.54, 95% CI: 0.29, 0.98) had a lower odds of being in a food insecure group.

  6. Three dimensional iterative beam propagation method for optical waveguide devices

    Science.gov (United States)

    Ma, Changbao; Van Keuren, Edward

    2006-10-01

    The finite difference beam propagation method (FD-BPM) is an effective model for simulating a wide range of optical waveguide structures. The classical FD-BPMs are based on the Crank-Nicholson scheme, and in tridiagonal form can be solved using the Thomas method. We present a different type of algorithm for 3-D structures. In this algorithm, the wave equation is formulated into a large sparse matrix equation which can be solved using iterative methods. The simulation window shifting scheme and threshold technique introduced in our earlier work are utilized to overcome the convergence problem of iterative methods for large sparse matrix equations and wide-angle simulations. This method enables us to develop higher-order 3-D wide-angle (WA-) BPMs based on Pade approximant operators and the multistep method, which are commonly used in WA-BPMs for 2-D structures. Simulations using the new methods will be compared to analytical results to confirm their effectiveness and applicability.
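
    As background for the contrast drawn above, the Thomas algorithm that solves the tridiagonal systems of a classical Crank-Nicholson FD-BPM step is sketched below; the 3-D iterative scheme of the record replaces this direct solve with an iterative sparse solver.

```python
import numpy as np

def thomas(lower, diag, upper, rhs):
    """Thomas algorithm for a tridiagonal system A x = rhs, with `lower` and
    `upper` holding the sub- and super-diagonals (length n-1) and `diag` the
    main diagonal (length n)."""
    n = len(diag)
    a = np.array(lower, float); b = np.array(diag, float)
    c = np.array(upper, float); d = np.array(rhs, float)
    for i in range(1, n):                     # forward elimination
        m = a[i - 1] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = np.empty(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

# Quick check against a dense solve on a small made-up system
n = 6
lower, diag, upper = -np.ones(n - 1), 4 * np.ones(n), -np.ones(n - 1)
rhs = np.arange(1.0, n + 1)
A = np.diag(diag) + np.diag(lower, -1) + np.diag(upper, 1)
print(np.allclose(thomas(lower, diag, upper, rhs), np.linalg.solve(A, rhs)))
```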

  7. A three-dimensional correlation method for registration of medical images in radiology

    International Nuclear Information System (INIS)

    Georgiou, Michalakis; Sfakianakis, George N.; Nagel, Joachim H.

    1998-01-01

    The availability of methods to register multi-modality images in order to 'fuse' them to correlate their information is increasingly becoming an important requirement for various diagnostic and therapeutic procedures. A variety of image registration methods have been developed but they remain limited to specific clinical applications. Assuming rigid body transformation, two images can be registered if their differences are calculated in terms of translation, rotation and scaling. This paper describes the development and testing of a new correlation-based approach for three-dimensional image registration. First, the scaling factors introduced by the imaging devices are calculated and compensated for. Then, the two images become translation invariant by computing their three-dimensional Fourier magnitude spectra. Subsequently, spherical coordinate transformation is performed and then the three-dimensional rotation is computed using a novel approach referred to as 'polar shells'. The method of polar shells maps the three angles of rotation into one rotation and two translations of a two-dimensional function and then proceeds to calculate them using appropriate transformations based on the Fourier invariance properties. A basic assumption in the method is that the three-dimensional rotation is constrained to one large and two relatively small angles. This assumption is generally satisfied in normal clinical settings. The new three-dimensional image registration method was tested with simulations using computer generated phantom data as well as actual clinical data. Performance analysis and accuracy evaluation of the method using computer simulations yielded errors in the sub-pixel range. (authors)

  8. Three-dimensional volumetric MRI with isotropic resolution: improved speed of acquisition, spatial resolution and assessment of lesion conspicuity in patients with recurrent soft tissue sarcoma

    Energy Technology Data Exchange (ETDEWEB)

    Ahlawat, Shivani [The Johns Hopkins Medical Institutions, The Russell H. Morgan Department of Radiology and Radiological Science, Baltimore, MD (United States); Morris, Carol [The Johns Hopkins Medical Institutions, Department of Orthopedic Surgery, Baltimore, MD (United States); The Johns Hopkins Medical Institutions, Department of Oncology, Baltimore, MD (United States); Fayad, Laura M. [The Johns Hopkins Medical Institutions, The Russell H. Morgan Department of Radiology and Radiological Science, Baltimore, MD (United States); The Johns Hopkins Medical Institutions, Department of Orthopedic Surgery, Baltimore, MD (United States); The Johns Hopkins Medical Institutions, Department of Oncology, Baltimore, MD (United States)

    2016-05-15

    To assess the acquisition speed, lesion conspicuity, and inter-observer agreement associated with volumetric T{sub 1}-weighted MR sequences with isotropic resolution for detecting recurrent soft-tissue sarcoma (STS). Fifteen subjects with histologically proven recurrent STS underwent MRI, including axial and coronal T{sub 1}-weighted spin echo (T{sub 1}-WSE) (5-mm slice thickness) and coronal 3D volumetric T{sub 1}-weighted (fat-suppressed, volume-interpolated, breath-hold examination; repetition time/echo time, 3.7/1.4 ms; flip angle, 9.5°; 1-mm slice thickness) sequences before and after intravenous contrast administration. Subtraction imaging and multiplanar reformations (MPRs) were performed. Acquisition times for T{sub 1}-WSE in two planes and 3D sequences were reported. Two radiologists reviewed images for quality (>50 % artifacts, 25-50 % artifacts, <25 % artifacts, and no substantial artifacts), lesion conspicuity, contrast-to-noise ratio (CNR{sub muscle}), recurrence size, and recurrence-to-joint distance. Descriptive and intraclass correlation (ICC) statistics are given. Mean acquisition times were significantly less for 3D imaging compared with 2-plane T{sub 1}-WSE (183.6 vs 342.6 s; P = 0.012). Image quality was rated as having no substantial artifacts in 13/15 and <25 % artifacts in 2/15. Lesion conspicuity was significantly improved for subtracted versus unsubtracted images (CNR{sub muscle}, 100 ± 138 vs 181 ± 199; P = 0.05). Mean recurrent lesion size was 2.5 cm (range, 0.7-5.7 cm), and measurements on 3D sequences offered excellent interobserver agreement (ICC, 0.98 for lesion size and 0.96 for recurrence-to-joint distance with MPR views). Three-dimensional volumetric sequences offer faster acquisition times, higher spatial resolution, and MPR capability compared with 2D T{sub 1}-WSE for postcontrast imaging. Subtraction imaging provides higher lesion conspicuity for detecting recurrent STS in skeletal muscle, with excellent interobserver agreement.

  9. A Web text acquisition method based on Delphi

    Institute of Scientific and Technical Information of China (English)

    刘建培

    2016-01-01

    A Delphi-based method for Web text acquisition is proposed. The method obtains the source file of a Web page (.html file), analyzes its structural information, handles its control characters, and extracts the text information by parsing and filtering the source file's formatting. Punctuation marks are then used to preprocess the text into chapters, paragraphs and sentences, and the text is converted into a sequence of sentences. This allows users to quickly locate the content they need, keeps them away from phishing sites, malicious advertising, fraudulent information and the distractions encountered while browsing Web pages, and improves their Internet experience.

  10. Methods and devices for fabricating three-dimensional nanoscale structures

    Science.gov (United States)

    Rogers, John A.; Jeon, Seokwoo; Park, Jangung

    2010-04-27

    The present invention provides methods and devices for fabricating 3D structures and patterns of 3D structures on substrate surfaces, including symmetrical and asymmetrical patterns of 3D structures. Methods of the present invention provide a means of fabricating 3D structures having accurately selected physical dimensions, including lateral and vertical dimensions ranging from 10s of nanometers to 1000s of nanometers. In one aspect, methods are provided using a mask element comprising a conformable, elastomeric phase mask capable of establishing conformal contact with a radiation sensitive material undergoing photoprocessing. In another aspect, the temporal and/or spatial coherence of the electromagnetic radiation used for photoprocessing is selected to fabricate complex structures having nanoscale features that do not extend entirely through the thickness of the structure fabricated.

  11. Linear finite element method for one-dimensional diffusion problems

    Energy Technology Data Exchange (ETDEWEB)

    Brandao, Michele A.; Dominguez, Dany S.; Iglesias, Susana M., E-mail: micheleabrandao@gmail.com, E-mail: dany@labbi.uesc.br, E-mail: smiglesias@uesc.br [Universidade Estadual de Santa Cruz (LCC/DCET/UESC), Ilheus, BA (Brazil). Departamento de Ciencias Exatas e Tecnologicas. Laboratorio de Computacao Cientifica

    2011-07-01

    We describe in this paper the fundamentals of the Linear Finite Element Method (LFEM) applied to one-speed diffusion problems in slab geometry. We present the mathematical formulation for solving eigenvalue and fixed-source problems. First, we discretize the calculation domain using a finite set of elements. At this point, we obtain the spatial balance equations for the zero-order and first-order spatial moments inside each element. Then, we introduce linear auxiliary equations to approximate the neutron flux and current inside each element and construct a numerical scheme to obtain the solution. We present numerical results for typical fixed-source model problems to illustrate the method's accuracy for coarse-mesh calculations in homogeneous and heterogeneous domains. We also compare the accuracy and computational performance of the LFEM formulation with the conventional Finite Difference Method (FDM). (author)
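
    A minimal linear-finite-element sketch for the one-speed, fixed-source slab problem is given below. It assembles standard linear-element stiffness, absorption, and load contributions with zero-flux boundaries; it illustrates the ingredients rather than the authors' zero/first-moment balance formulation, and all parameter values are placeholders.

```python
import numpy as np

def fem1d_diffusion(L=10.0, n_el=50, D=1.0, sigma_a=0.1, S=1.0):
    """Linear finite elements for the one-speed, fixed-source slab diffusion
    equation  -D u'' + sigma_a u = S  with u = 0 at both boundaries."""
    n = n_el + 1
    h = L / n_el
    A = np.zeros((n, n)); b = np.zeros(n)
    ke = D / h * np.array([[1.0, -1.0], [-1.0, 1.0]])          # element diffusion (stiffness)
    me = sigma_a * h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])  # element absorption (mass)
    fe = S * h / 2.0 * np.array([1.0, 1.0])                    # element source (load)
    for e in range(n_el):                                      # assemble element contributions
        idx = [e, e + 1]
        A[np.ix_(idx, idx)] += ke + me
        b[idx] += fe
    A[0, :] = 0; A[0, 0] = 1; b[0] = 0                         # zero-flux boundary values
    A[-1, :] = 0; A[-1, -1] = 1; b[-1] = 0
    return np.linspace(0, L, n), np.linalg.solve(A, b)

x, u = fem1d_diffusion()
print(u.max())   # approaches S/sigma_a deep inside the slab
```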

  12. The Chimera Method of Simulation for Unsteady Three-Dimensional Viscous Flow

    Science.gov (United States)

    Meakin, Robert L.

    1996-01-01

    The Chimera overset grid method is reviewed and discussed in the context of a method of solution and analysis of unsteady three-dimensional viscous flows. The state of maturity of the various pieces of support software required to use the approach is discussed. A variety of recent applications of the method is presented. Current limitations of the approach are defined.

  13. Peculiarities of cyclotron magnetic system calculation with the finite difference method using two-dimensional approximation

    International Nuclear Information System (INIS)

    Shtromberger, N.L.

    1989-01-01

    The legitimacy of applying two-dimensional approximations to the design of a cyclotron magnetic system is discussed. In all the calculations the finite difference method is used, and the linearization method, followed by the conjugate gradient method, is used to solve the set of finite-difference equations. 3 refs.; 5 figs.

  14. Three-dimensional wake field analysis by boundary element method

    International Nuclear Information System (INIS)

    Miyata, K.

    1987-01-01

    A computer code HERTPIA was developed for the calculation of electromagnetic wake fields excited by charged particles travelling through arbitrarily shaped accelerating cavities. This code solves transient wave problems for a Hertz vector. The numerical analysis is based on the boundary element method. This program is validated by comparing its results with analytical solutions in a pill-box cavity

  15. TreePM Method for Two-Dimensional Cosmological Simulations ...

    Indian Academy of Sciences (India)

    We discuss the integration of the equations of motion that we use in the 2d TreePM code in section 7. … spaced values of r in order to keep interpolation errors in control. … hence we cannot use the usual leap-frog method. We recast the …

  16. Improved algorithm for three-dimensional inverse method

    Science.gov (United States)

    Qiu, Xuwen

    An inverse method, which works for full 3D viscous applications in turbomachinery aerodynamic design, is developed. The method takes pressure loading and thickness distribution as inputs and computes the 3D-blade geometry. The core of the inverse method consists of two closely related steps, which are integrated into a time-marching procedure of a Navier-Stokes solver. First, the pressure loading condition is enforced while flow is allowed to cross the blade surfaces. A permeable blade boundary condition is developed here in order to be consistent with the propagation characteristics of the transient Navier-Stokes equations. In the second step, the blade geometry is adjusted so that the flow-tangency condition is satisfied for the new blade. A Non-Uniform Rational B-Spline (NURBS) model is used to represent the span-wise camber curves. The flow-tangency condition is then transformed into a general linear least squares fitting problem, which is solved by a robust Singular Value Decomposition (SVD) scheme. This blade geometry generation scheme allows the designer to have direct control over the smoothness of the calculated blade, and thus ensures the numerical stability during the iteration process. Numerical experiments show that this method is very accurate, efficient and robust. In target-shooting tests, the program was able to converge to the target blade accurately from a different initial blade. The speed of an inverse run is only about 15% slower than its analysis counterpart, which means a complete 3D viscous inverse design can be done in a matter of hours. The method is also proved to work well with the presence of clearance between the blade and the housing, a key factor to be considered in aerodynamic design. The method is first developed for blades without splitters, and is then extended to provide the capability of analyzing and designing machines with splitters. This gives designers an integrated environment where the aerodynamic design of both full
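
    The robust least-squares step mentioned above can be illustrated generically with a truncated-SVD pseudo-inverse, which discards small singular values so that nearly rank-deficient fitting problems stay stable. The sketch below is a generic analogue of that step, not the blade-design code; the matrix and data are placeholders.

```python
import numpy as np

def svd_least_squares(A, b, rcond=1e-10):
    """Solve min ||A x - b|| with a truncated-SVD pseudo-inverse; singular values
    below rcond * s_max are discarded, which keeps the fit stable when the
    system is nearly rank-deficient."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rcond * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

# Hypothetical usage: fit spline-like coefficients x from an overdetermined,
# ill-conditioned linear system (placeholder data, not blade geometry)
rng = np.random.default_rng(2)
A = rng.standard_normal((200, 12))
A[:, -1] = A[:, 0] + 1e-12 * rng.standard_normal(200)   # nearly dependent column
x_true = rng.standard_normal(12)
b = A @ x_true + 0.01 * rng.standard_normal(200)
x = svd_least_squares(A, b)
print(np.linalg.norm(A @ x - b))
```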

  17. A numerical method for two-dimensional anisotropic transport problem in cylindrical geometry

    International Nuclear Information System (INIS)

    Du Mingsheng; Feng Tiekai; Fu Lianxiang; Cao Changshu; Liu Yulan

    1988-01-01

    The authors deal with the triangular-mesh discontinuous finite element method for solving the time-dependent anisotropic neutron transport problem in two-dimensional cylindrical geometry. An a priori estimate of the numerical solution is given, and stability is proved. The authors have computed a two-dimensional anisotropic neutron transport problem and a Tungsten-Carbide critical assembly problem using the numerical method. In comparison with the DSN method and with the experimental results obtained by others both at home and abroad, the method is satisfactory.

  18. Solution of (3+1)-Dimensional Nonlinear Cubic Schrodinger Equation by Differential Transform Method

    Directory of Open Access Journals (Sweden)

    Hassan A. Zedan

    2012-01-01

    Full Text Available A four-dimensional differential transform method has been introduced and fundamental theorems have been defined for the first time. Moreover, as an application of the four-dimensional differential transform, exact solutions of a nonlinear system of partial differential equations have been investigated. The results of the present method compare very well with the analytical solution of the system. The differential transform method can easily be applied to linear or nonlinear problems and reduces the size of the computational work. With this method, exact solutions may be obtained without any cumbersome work, and it is a useful tool for analytical and numerical solutions.

  19. Enhanced spectral resolution by high-dimensional NMR using the filter diagonalization method and “hidden” dimensions

    Science.gov (United States)

    Meng, Xi; Nguyen, Bao D.; Ridge, Clark; Shaka, A. J.

    2009-01-01

    High-dimensional (HD) NMR spectra have poorer digital resolution than low-dimensional (LD) spectra, for a fixed amount of experiment time. This has led to “reduced-dimensionality” strategies, in which several LD projections of the HD NMR spectrum are acquired, each with higher digital resolution; an approximate HD spectrum is then inferred by some means. We propose a strategy that moves in the opposite direction, by adding more time dimensions to increase the information content of the data set, even if only a very sparse time grid is used in each dimension. The full HD time-domain data can be analyzed by the Filter Diagonalization Method (FDM), yielding very narrow resonances along all of the frequency axes, even those with sparse sampling. Integrating over the added dimensions of HD FDM NMR spectra reconstitutes LD spectra with enhanced resolution, often more quickly than direct acquisition of the LD spectrum with a larger number of grid points in each of the fewer dimensions. If the extra dimensions do not appear in the final spectrum, and are used solely to boost information content, we propose the moniker hidden-dimension NMR. This work shows that HD peaks have unmistakable frequency signatures that can be detected as single HD objects by an appropriate algorithm, even though their patterns would be tricky for a human operator to visualize or recognize, and even if digital resolution in an HD FT spectrum is very coarse compared with natural line widths. PMID:18926747

  20. Two-dimensional isostatic meshes in the finite element method

    OpenAIRE

    Martínez Marín, Rubén; Samartín, Avelino

    2002-01-01

    In a Finite Element (FE) analysis of elastic solids several items are usually considered, namely, type and shape of the elements, number of nodes per element, node positions, FE mesh, and total number of degrees of freedom (dof), among others. In this paper a method to improve a given FE mesh used for a particular analysis is described. For the improvement criterion different objective functions have been chosen (total potential energy and average quadratic error) and the number of nodes and dof's...

  1. Dimensional analysis and self-similarity methods for engineers and scientists

    CERN Document Server

    Zohuri, Bahman

    2015-01-01

    This ground-breaking reference provides an overview of key concepts in dimensional analysis, and then pushes well beyond traditional applications in fluid mechanics to demonstrate how powerful this tool can be in solving complex problems across many diverse fields. Of particular interest is the book's coverage of dimensional analysis and self-similarity methods in nuclear and energy engineering. Numerous practical examples of dimensional problems are presented throughout, allowing readers to link the book's theoretical explanations and step-by-step mathematical solutions to practical impleme

  2. Three-dimensional space-charge calculation method

    International Nuclear Information System (INIS)

    Lysenko, W.P.; Wadlinger, E.A.

    1980-09-01

    A method is presented for calculating space-charge forces on individual particles in a particle tracing simulation code. Poisson's equation is solved in three dimensions with boundary conditions specified on an arbitrary surface. When the boundary condition is defined by an impressed radio-frequency field, the external electric fields as well as the space-charge fields are determined. A least squares fitting procedure is used to calculate the coefficients of expansion functions, which need not be orthogonal nor individually satisfy the boundary condition
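
    A minimal sketch of the expansion-plus-least-squares idea follows: harmonic basis functions, each of which satisfies Laplace's equation but not the boundary condition individually (and which are not orthogonal), are fitted by least squares to prescribed potential values on a boundary. The geometry, the basis and the boundary profile are assumptions for illustration, and the space-charge source term of the full Poisson problem is omitted.

```python
import numpy as np

# Fit the coefficients of non-orthogonal expansion functions by least squares
# so that the resulting potential matches assumed boundary values on a unit
# circle.  Each basis function is harmonic, so the fitted sum solves Laplace's
# equation everywhere inside the boundary.

theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
boundary_xy = np.column_stack([np.cos(theta), np.sin(theta)])
v_boundary = np.cos(3 * theta) + 0.5 * np.sin(theta)      # assumed potential

def harmonic_basis(xy, n_max=8):
    r = np.hypot(xy[:, 0], xy[:, 1])
    phi = np.arctan2(xy[:, 1], xy[:, 0])
    cols = [np.ones(len(xy))]
    for n in range(1, n_max + 1):
        cols += [r**n * np.cos(n * phi), r**n * np.sin(n * phi)]
    return np.column_stack(cols)

A = harmonic_basis(boundary_xy)
coeffs, *_ = np.linalg.lstsq(A, v_boundary, rcond=None)

# The same coefficients now give the potential anywhere inside the boundary.
interior = np.array([[0.3, 0.2], [0.0, 0.0], [-0.5, 0.4]])
print("potential at interior points:", harmonic_basis(interior) @ coeffs)
```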

  3. A Diminution Method of Large Multi-dimensional Data Retrievals

    Directory of Open Access Journals (Sweden)

    Nushwan Yousif Baithoon

    2010-01-01

    The intention of this work is to introduce a method of compressing data at the transmitter (source) and expanding it at the receiver (destination). The amount of data compression is directly related to data dimensionality; hence, for example, an N by N RGB image file is considered to be an M-D image data file, with M = 3. Also, the amount of scatter in an M-D file, hence the covariance matrix, is calculated, along with the average value of each dimension, to represent the signature or code for each individual data set to be sent by the source. At the destination, random sets can test a particular received signature so that only one set is acceptable, thus giving the corresponding intended set to be received. Sound results are obtained depending on the constraints being implemented. These constraints are user tolerant in so far as how well tuned or rapid the information is to be processed for data retrieval. The proposed method is well suited to application areas where both source and destination communicate using the same sets of data files at each end. Such a technique is also feasible given the availability of fast microprocessors and frame-grabbers.
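
    A small sketch of the signature idea described above, with illustrative data and a hypothetical matching tolerance: the per-dimension means and the covariance matrix of an M-D data set serve as its compact signature, and the receiver identifies which of its locally stored data sets the signature belongs to.

```python
import numpy as np

# Summarise an M-dimensional data set (here an RGB image, M = 3) by its
# per-dimension means and covariance matrix, and let the receiver identify
# which locally stored data set the transmitted signature refers to.
# The library contents and the tolerance are illustrative.

def signature(data_md):
    """data_md: (n_samples, M) array -> (means, covariance) signature."""
    return data_md.mean(axis=0), np.cov(data_md, rowvar=False)

def match(sig, candidates, tol=1e-6):
    """Return the index of the candidate whose signature matches sig."""
    mean_s, cov_s = sig
    best, best_err = None, np.inf
    for i, cand in enumerate(candidates):
        mean_c, cov_c = signature(cand)
        err = np.linalg.norm(mean_s - mean_c) + np.linalg.norm(cov_s - cov_c)
        if err < best_err:
            best, best_err = i, err
    return best if best_err < tol else None

rng = np.random.default_rng(0)
library = [rng.integers(0, 256, size=(64 * 64, 3)).astype(float)
           for _ in range(5)]

# "Transmit" only the signature of library[2]; the receiver recovers the index.
sent = signature(library[2])
print("matched data set:", match(sent, library))
```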

  4. Calculation of two-dimensional thermal transients by the finite element method

    International Nuclear Information System (INIS)

    Fontoura Rodrigues, J.L.A. da; Barcellos, C.S. de

    1981-01-01

    Linear heat conduction through anisotropic and/or heterogeneous matter, in either two-dimensional fields with any kind of geometry or three-dimensional fields with axial symmetry, is analysed. The method accepts only time-independent boundary conditions and allows internal heat generation. The solution is obtained by modal analysis employing the finite element method under the Galerkin formulation. (Author) [pt

  5. Method and apparatus for two-dimensional spectroscopy

    Science.gov (United States)

    DeCamp, Matthew F.; Tokmakoff, Andrei

    2010-10-12

    Preferred embodiments of the invention provide for methods and systems of 2D spectroscopy using ultrafast, first light and second light beams and a CCD array detector. A cylindrically-focused second light beam interrogates a target that is optically interactive with a frequency-dispersed excitation (first light) pulse, whereupon the second light beam is frequency-dispersed at right angle orientation to its line of focus, so that the horizontal dimension encodes the spatial location of the second light pulse and the first light frequency, while the vertical dimension encodes the second light frequency. Differential spectra of the first and second light pulses result in a 2D frequency-frequency surface equivalent to double-resonance spectroscopy. Because the first light frequency is spatially encoded in the sample, an entire surface can be acquired in a single interaction of the first and second light pulses.

  6. Pict'Earth: A new Method of Virtual Globe Data Acquisition

    Science.gov (United States)

    Johnson, J.; Long, S.; Riallant, D.; Hronusov, V.

    2007-12-01

    Georeferenced aerial imagery facilitates and enhances Earth science investigations. The realized value of imagery as a tool is measured by the spatial, temporal and radiometric resolution of the imagery. Currently, there is a need for a system that facilitates the rapid acquisition and distribution of high-resolution aerial earth images of localized areas. The Pict'Earth group has developed an apparatus and software algorithms which facilitate such tasks. Hardware includes a small radio-controlled model airplane (RC UAV); light smartphones with high-resolution cameras (Nokia NSeries devices); and a GPS connected to the smartphone via the Bluetooth protocol, or a GPS-equipped phone. Software includes Python code which controls the functions of the smartphone and GPS to acquire data in-flight; online Virtual Globe applications including Google Earth, AJAX/Web2.0 technologies and services; and APIs and libraries for developers, all of which are based on open XML-based GIS data standards. This new process for acquisition and distribution of high-resolution aerial earth images includes the following stages: perform a survey over the area of interest (AOI) with the RC UAV (mobile live processing). In real time, our software collects images from the smartphone camera and positional data (latitude, longitude, altitude and heading) from the GPS. The software then calculates the earth footprint (geoprint) of each image and creates KML files which incorporate the georeferenced images and tracks of the UAV. Optionally, it is possible to send the data in-flight via SMS/MMS (text and multimedia messages), or via cellular internet networks using FTP. In post-processing the images are filtered, transformed, and assembled into an orthorectified image mosaic. The final mosaic is then cut into tiles and uploaded as a user-ready product to web servers in KML format for use in Virtual Globes and other GIS applications. The obtained images and resultant data have high spatial resolution, can be updated in

  7. Three dimensional wavefield modeling using the pseudospectral method; Pseudospectral ho ni yoru sanjigen hadoba modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sato, T; Matsuoka, T [Japan Petroleum Exploration Corp., Tokyo (Japan); Saeki, T [Japan National Oil Corp., Tokyo (Japan). Technology Research Center

    1997-05-27

    Discussed in this report is a wavefield simulation in the 3-dimensional seismic survey. As exploration targets become deeper and more complicated in structure, surveys are increasingly conducted in three dimensions. There are several modelling methods for numerical calculation of 3-dimensional wavefields, such as the difference method, the pseudospectral method, and the like, all of which demand an exorbitantly large memory and long calculation time, and are costly. Such methods have of late become feasible, however, thanks to the advent of the parallel computer. As compared with the difference method, the pseudospectral method requires a smaller computer memory and shorter computation time, and is more flexible in accepting models. It outputs full-waveform results just like the difference method, and does not cause numerical dispersion of the wavefield. As the computation platform, the parallel computer nCUBE-2S is used. The domain of interest is divided among the processors, and each processor takes care only of its share, so that the parallel computation as a whole achieves a very high speed. By the use of the pseudospectral method, a 3-dimensional simulation is completed within a tolerable computation time. 7 refs., 3 figs., 1 tab.

  8. a Method for Simultaneous Aerial and Terrestrial Geodata Acquisition for Corridor Mapping

    Science.gov (United States)

    Molina, P.; Blázquez, M.; Sastre, J.; Colomina, I.

    2015-08-01

    In this paper, we present mapKITE, a new mobile, simultaneous terrestrial and aerial geodata collection and post-processing method. On one side, the method combines a terrestrial mobile mapping system (TMMS) with an unmanned aerial mapping one, both equipped with remote sensing payloads (at least, a nadir-looking visible-band camera in the UA), by means of which aerial and terrestrial geodata are acquired simultaneously. This tandem geodata acquisition system is based on a terrestrial vehicle (TV) and an unmanned aircraft (UA) linked by a 'virtual tether', that is, a mechanism based on the real-time supply of UA waypoints by the TV. By means of the TV-to-UA tether, the UA follows the TV, keeping a specific relative TV-to-UA spatial configuration that enables the simultaneous operation of both systems to obtain highly redundant and complementary geodata. On the other side, mapKITE presents a novel concept for geodata post-processing favoured by the rich geometrical aspects derived from the mapKITE tandem simultaneous operation. The approach followed for sensor orientation and calibration of the aerial images captured by the UA inherits the principles of Integrated Sensor Orientation (ISO) and adds the pointing-and-scaling photogrammetric measurement of a distinctive element observed in every UA image, which is a coded target mounted on the roof of the TV. By means of the TV navigation system, the orientation of the TV coded target is determined and used in the post-processing UA image orientation approach as a Kinematic Ground Control Point (KGCP). The geometric strength of a mapKITE ISO network is therefore high, as it combines the traditional tie-point image measurements, static ground control points, kinematic aerial control and the new point-and-scale measurements of the KGCPs. With such a geometry, reliable system and sensor orientation and calibration are feasible, as is an eventual further reduction of the number of traditional ground control points. The different

  9. A finite-dimensional reduction method for slightly supercritical elliptic problems

    Directory of Open Access Journals (Sweden)

    Riccardo Molle

    2004-01-01

    We describe a finite-dimensional reduction method to find solutions for a class of slightly supercritical elliptic problems. A suitable truncation argument allows us to work in the usual Sobolev space even in the presence of supercritical nonlinearities: we modify the supercritical term in such a way as to obtain subcritical approximating problems; for these problems, the finite-dimensional reduction can be obtained by applying the methods already developed in the subcritical case; finally, we show that, if the truncation is realized at a sufficiently large level, then the solutions of the approximating problems, given by these methods, also solve the supercritical problems when the parameter is small enough.

  10. Calculation of two-dimensional thermal transients by the method of finite elements

    International Nuclear Information System (INIS)

    Fontoura Rodrigues, J.L.A. da.

    1980-08-01

    The unsteady linear heat conduction analysis through anisotropic and/or heterogeneous matter, in either two-dimensional fields with any kind of geometry or three-dimensional fields with axial symmetry, is presented. The boundary conditions and the internal heat generation are supposed time-independent. The solution is obtained by modal analysis employing the finite element method under the Galerkin formulation. Optionally, it can be used with a reduced resolution method called the Stoker Economizing Method, which allows a decrease in the program processing costs. (Author) [pt

  11. Comparing 3-dimensional virtual methods for reconstruction in craniomaxillofacial surgery.

    Science.gov (United States)

    Benazzi, Stefano; Senck, Sascha

    2011-04-01

    In the present project, the virtual reconstruction of digitally osteotomized zygomatic bones was simulated using different methods. A total of 15 skulls were scanned using computed tomography, and a virtual osteotomy of the left zygomatic bone was performed. Next, virtual reconstructions of the missing part using mirror imaging (with and without best-fit registration) and thin plate spline interpolation functions were compared with the original left zygomatic bone. In general, reconstructions using thin plate spline warping showed better results than the mirroring approaches. Nevertheless, when dealing with skulls characterized by a low degree of asymmetry, mirror imaging and subsequent registration can be considered a valid and easy solution for zygomatic bone reconstruction. The mirroring tool is one of the possible alternatives in reconstruction, but it might not always be the optimal solution (i.e., when the hemifaces are asymmetrical). In the present pilot study, we have verified that best-fit registration of the mirrored unaffected hemiface and thin plate spline warping achieved better results in terms of fitting accuracy, overcoming the evident limits of the mirroring approach. Copyright © 2011 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
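
    The following sketch illustrates the "mirror imaging with best-fit registration" step on synthetic landmark points (not CT data): the intact side is reflected across an approximate midsagittal plane and then rigidly aligned with a Kabsch/Procrustes fit. The thin plate spline warping discussed in the record is not included.

```python
import numpy as np

# Mirror the unaffected side across the (approximate) midsagittal plane, then
# rigidly align the mirrored copy to the defect region with a Kabsch fit.
# The point sets are synthetic stand-ins for surface landmarks on CT data.

def kabsch(P, Q):
    """Rigid rotation R and translation t minimising ||R P + t - Q||."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

rng = np.random.default_rng(1)
right_side = rng.normal(size=(100, 3)) + [30.0, 0.0, 0.0]    # intact side
mirrored = right_side * np.array([-1.0, 1.0, 1.0])           # reflect x -> -x

# Pretend the "missing" left side is a slightly rotated/translated version of
# the true mirror image (i.e. a mildly asymmetric skull).
angle = np.deg2rad(3.0)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
left_side = mirrored @ Rz.T + np.array([1.0, -0.5, 0.3])

R, t = kabsch(mirrored, left_side)
aligned = mirrored @ R.T + t
print("RMS fit error:",
      np.sqrt(((aligned - left_side) ** 2).sum(axis=1).mean()))
```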

  12. Development of three-dimensional ENRICHED FREE MESH METHOD and its application to crack analysis

    International Nuclear Information System (INIS)

    Suzuki, Hayato; Matsubara, Hitoshi; Ezawa, Yoshitaka; Yagawa, Genki

    2010-01-01

    In this paper, we describe a method for highly accurate three-dimensional analysis of a crack in a large-scale structure. The Enriched Free Mesh Method (EFMM) is a method for improving the accuracy of the Free Mesh Method (FMM), which is a kind of meshless method. First, we developed an algorithm for the three-dimensional EFMM. An elastic problem was analyzed using the EFMM, and we found that its accuracy compares favorably with the FMM and that the number of CG iterations is smaller. Next, we developed a method for calculating the stress intensity factor employing the EFMM. A structure with a crack was analyzed using the EFMM, and the stress intensity factor was calculated by the developed method. The analysis results were in very good agreement with the reference solution. It was shown that the proposed method is very effective in the analysis of a crack in a large-scale structure. (author)

  13. The analysis of RPV fast neutron flux calculation for PWR with three-dimensional SN method

    International Nuclear Information System (INIS)

    Yang Shouhai; Chen Yixue; Wang Weijin; Shi Shengchun; Lu Daogang

    2011-01-01

    The discrete ordinates (SN) method is one of the most widely used methods for reactor pressure vessel (RPV) design. With the rapid growth of computer CPU speed and memory capacity and the maturing of three-dimensional discrete-ordinates methods, the 3-D SN method is now mature enough to be used in the engineering design of nuclear facilities. This work was done specifically for a PWR model: starting from the results of a 3-D core neutron transport calculation, the 3-D RPV fast neutron flux distribution obtained by the 3-D SN method was compared with the distributions obtained by 1-D and 2-D SN methods and by the 3-D Monte Carlo (MC) method. In this paper, the application of the three-dimensional SN method to calculating the RPV fast neutron flux distribution for a pressurized water reactor (PWR) is presented and discussed. (authors)

  14. Analysis of acquisition patterns : A theoretical and empirical evaluation of alternative methods

    NARCIS (Netherlands)

    Paas, LJ; Molenaar, IW

    The order in which consumers acquire nonconsumable products, such as durable and financial products, provides key information for marketing activities, for example, cross-sell lead generation. This paper advocates the desirable features of nonparametric scaling for analyzing acquisition patterns. We

  15. Analysis of acquisition patterns: A theoretical and empirical evaluation of alternative methods

    NARCIS (Netherlands)

    Paas, L.J.; Molenaar, I.W.

    2005-01-01

    The order in which consumers acquire nonconsumable products, such as durable and financial products, provides key information for marketing activities, for example, cross-sell lead generation. This paper advocates the desirable features of nonparametric scaling for analyzing acquisition patterns. We

  16. The dimension split element-free Galerkin method for three-dimensional potential problems

    Science.gov (United States)

    Meng, Z. J.; Cheng, H.; Ma, L. D.; Cheng, Y. M.

    2018-02-01

    This paper presents the dimension split element-free Galerkin (DSEFG) method for three-dimensional potential problems, and the corresponding formulae are obtained. The main idea of the DSEFG method is that a three-dimensional potential problem can be transformed into a series of two-dimensional problems. For these two-dimensional problems, the improved moving least-squares (IMLS) approximation is applied to construct the shape function, which uses an orthogonal function system with a weight function as the basis functions. The Galerkin weak form is applied to obtain a discretized system equation, and the penalty method is employed to impose the essential boundary condition. The finite difference method is selected in the splitting direction. For the purposes of demonstration, some selected numerical examples are solved using the DSEFG method. The convergence study and error analysis of the DSEFG method are presented. The numerical examples show that the DSEFG method has greater computational precision and computational efficiency than the IEFG method.

  17. Fourier method for three-dimensional partial differential equations in periodic geometry. Application: HELIAC

    International Nuclear Information System (INIS)

    Shestakov, A.I.; Mirin, A.A.

    1984-01-01

    A numerical method based on Fourier expansions and finite differences is presented. The method is demonstrated by solving a scalar, three-dimensional elliptic equation arising in MFE research, but has applicability to a wider class of problems. The scheme solves equations whose solutions are expected to be periodic in one or more of the independent variables
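
    A reduced, two-dimensional sketch of the same idea follows, assuming a Poisson model problem that is periodic in one direction and Dirichlet-bounded in the other: a Fourier transform is applied along the periodic direction, and a small finite-difference system is solved for every mode. The full method in the record is three-dimensional; this keeps only the structure.

```python
import numpy as np

# Solve u_xx + u_yy = f on [0,1] x [0,2*pi) with u = 0 at x = 0,1 and
# periodicity in y: FFT along y, then one tridiagonal-sized finite-difference
# solve in x per Fourier mode.  A manufactured solution checks the result.

nx, ny = 65, 64
x = np.linspace(0.0, 1.0, nx)
y = np.linspace(0.0, 2.0 * np.pi, ny, endpoint=False)
h = x[1] - x[0]
X, Y = np.meshgrid(x, y, indexing="ij")

u_exact = np.sin(np.pi * X) * np.cos(Y)
f = -(np.pi**2 + 1.0) * np.sin(np.pi * X) * np.cos(Y)

f_hat = np.fft.fft(f, axis=1)                    # Fourier transform along y
k = 2.0 * np.pi * np.fft.fftfreq(ny, d=2.0 * np.pi / ny)
u_hat = np.zeros_like(f_hat)

# For each Fourier mode, solve the 1-D finite-difference problem in x.
main = -2.0 / h**2 * np.ones(nx - 2)
off = 1.0 / h**2 * np.ones(nx - 3)
for j in range(ny):
    A = np.diag(main - k[j]**2) + np.diag(off, 1) + np.diag(off, -1)
    u_hat[1:-1, j] = np.linalg.solve(A, f_hat[1:-1, j])

u = np.real(np.fft.ifft(u_hat, axis=1))
print("max error vs. exact solution:", np.abs(u - u_exact).max())
```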

  18. Computational Methods for Inviscid and Viscous Two-and-Three-Dimensional Flow Fields.

    Science.gov (United States)

    1975-01-01

    Difference Equations Over a Network, Watson Sci. Comput. Lab. Report, 1949. 173- Isaacson, E. and Keller, H. B., Analysis of Numerical Methods...element method has given a new impulse to the old mathematical theory of multivariate interpolation. We first study the one-dimensional case, which

  19. Validation of a Novel 3-Dimensional Sonographic Method for Assessing Gastric Accommodation in Healthy Adults

    NARCIS (Netherlands)

    Buisman, Wijnand J; van Herwaarden-Lindeboom, MYA; Mauritz, Femke A; El Ouamari, Mourad; Hausken, Trygve; Olafsdottir, Edda J; van der Zee, David C; Gilja, Odd Helge

    OBJECTIVES: A novel automated 3-dimensional (3D) sonographic method has been developed for measuring gastric volumes. This study aimed to validate and assess the reliability of this novel 3D sonographic method compared to the reference standard in 3D gastric sonography: freehand magneto-based 3D

  20. A finite element method for calculating the 3-dimensional magnetic fields of cyclotron

    International Nuclear Information System (INIS)

    Zhao Xiaofeng

    1986-01-01

    A series of formulas of the finite element method (scalar potential) for calculating the three-dimensional magnetic field of the main magnet of a sector-focused cyclotron, and the method of realizing the periodic boundary conditions in the code, are given

  1. Homotopy decomposition method for solving one-dimensional time-fractional diffusion equation

    Science.gov (United States)

    Abuasad, Salah; Hashim, Ishak

    2018-04-01

    In this paper, we present the homotopy decomposition method with a modified definition of the beta fractional derivative, for the first time, to find the exact solution of the one-dimensional time-fractional diffusion equation. In this method, the solution takes the form of a convergent series with easily computable terms. The exact solution obtained by the proposed method is compared with the exact solution obtained by using the fractional variational homotopy perturbation iteration method via a modified Riemann-Liouville derivative.

  2. Patient-adapted reconstruction and acquisition dynamic imaging method (PARADIGM) for MRI

    International Nuclear Information System (INIS)

    Aggarwal, Nitin; Bresler, Yoram

    2008-01-01

    Dynamic magnetic resonance imaging (MRI) is a challenging problem because the MR data acquisition is often not fast enough to meet the combined spatial and temporal Nyquist sampling rate requirements. Current approaches to this problem include hardware-based acceleration of the acquisition, and model-based image reconstruction techniques. In this paper we propose an alternative approach, called PARADIGM, which adapts both the acquisition and reconstruction to the spatio-temporal characteristics of the imaged object. The approach is based on time-sequential sampling theory, addressing the problem of acquiring a spatio-temporal signal under the constraint that only a limited amount of data can be acquired at a time instant. PARADIGM identifies a model class for the particular imaged object using a scout MR scan or auxiliary data. This object-adapted model is then used to optimize MR data acquisition, such that the imaging constraints are met, acquisition speed requirements are minimized, essentially perfect reconstruction of any object in the model class is guaranteed, and the inverse problem of reconstructing the dynamic object has a condition number of one. We describe spatio-temporal object models for various dynamic imaging applications including cardiac imaging. We present the theory underlying PARADIGM and analyze its performance theoretically and numerically. We also propose a practical MR imaging scheme for 2D dynamic cardiac imaging based on the theory. For this application, PARADIGM is predicted to provide a 10–25 × acceleration compared to the optimal non-adaptive scheme. Finally we present generalized optimality criteria and extend the scheme to dynamic imaging with three spatial dimensions

  3. A new analytical method to solve the heat equation for a multi-dimensional composite slab

    International Nuclear Information System (INIS)

    Lu, X; Tervola, P; Viljanen, M

    2005-01-01

    A novel analytical approach has been developed for heat conduction in a multi-dimensional composite slab subject to time-dependent boundary changes of the first kind. Boundary temperatures are represented as Fourier series. Taking advantage of the periodic properties of the boundary changes, the analytical solution is obtained and expressed explicitly. Nearly all published works necessitate searching for associated eigenvalues in solving such a problem, even for a one-dimensional composite slab. In this paper, the proposed method involves no iterative computation such as numerically searching for eigenvalues and no residue evaluation. The adopted method is simple and represents an extension of the novel analytical approach derived for the one-dimensional composite slab. Moreover, the method of 'separation of variables' employed in this paper is new. The mathematical formula for the solutions is concise and straightforward. The physical parameters are clearly shown in the formula. Further comparison with numerical calculations is presented

  4. Biomedical applications of two- and three-dimensional deterministic radiation transport methods

    International Nuclear Information System (INIS)

    Nigg, D.W.

    1992-01-01

    Multidimensional deterministic radiation transport methods are routinely used in support of the Boron Neutron Capture Therapy (BNCT) Program at the Idaho National Engineering Laboratory (INEL). Typical applications of two-dimensional discrete-ordinates methods include neutron filter design, as well as phantom dosimetry. The epithermal-neutron filter for BNCT that is currently available at the Brookhaven Medical Research Reactor (BMRR) was designed using such methods. Good agreement between calculated and measured neutron fluxes was observed for this filter. Three-dimensional discrete-ordinates calculations are used routinely for dose-distribution calculations in three-dimensional phantoms placed in the BMRR beam, as well as for treatment planning verification for live canine subjects. Again, good agreement between calculated and measured neutron fluxes and dose levels is obtained

  5. A GPU-based calculation using the three-dimensional FDTD method for electromagnetic field analysis.

    Science.gov (United States)

    Nagaoka, Tomoaki; Watanabe, Soichi

    2010-01-01

    Numerical simulations with a numerical human model using the finite-difference time domain (FDTD) method have recently been performed frequently in a number of fields in biomedical engineering. However, the FDTD calculation runs too slowly. We focus, therefore, on general purpose programming on the graphics processing unit (GPGPU). The three-dimensional FDTD method was implemented on the GPU using the Compute Unified Device Architecture (CUDA). In this study, we used the NVIDIA Tesla C1060 as a GPGPU board. The performance of the GPU is evaluated in comparison with the performance of a conventional CPU and a vector supercomputer. The results indicate that three-dimensional FDTD calculations using a GPU can significantly reduce the run time in comparison with a conventional CPU, even for a native GPU implementation of the three-dimensional FDTD method, while the GPU/CPU speed ratio varies with the calculation domain and the thread block size.
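
    For orientation, the following one-dimensional NumPy sketch shows the leapfrog stencil that a GPU implementation parallelises in three dimensions; it is not the authors' CUDA code, and the grid size, source and Courant number are arbitrary choices.

```python
import numpy as np

# Illustrative 1-D FDTD leapfrog update (vacuum, normalised units).  A GPU
# implementation parallelises the analogous 3-D curl updates over threads;
# this sketch only shows the structure of the stencil.

nx, nt = 400, 600
ez = np.zeros(nx)          # electric field
hy = np.zeros(nx - 1)      # magnetic field, staggered half a cell
courant = 0.5              # c*dt/dx, must be <= 1 for stability

for n in range(nt):
    hy += courant * (ez[1:] - ez[:-1])                 # update H from curl E
    ez[1:-1] += courant * (hy[1:] - hy[:-1])           # update E from curl H
    ez[nx // 4] += np.exp(-((n - 60) / 20.0) ** 2)     # soft Gaussian source

print("peak |Ez| after", nt, "steps:", np.abs(ez).max())
```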

  6. A three-dimensional correlation method for registration of medical images in radiology

    Energy Technology Data Exchange (ETDEWEB)

    Georgiou, Michalakis; Sfakianakis, George N [Department of Radiology, University of Miami, Jackson Memorial Hospital, Miami, FL 33136 (United States); Nagel, Joachim H [Institute of Biomedical Engineering, University of Stuttgart, Stuttgart 70174 (Germany)

    1999-12-31

    The availability of methods to register multi-modality images in order to 'fuse' them and correlate their information is increasingly becoming an important requirement for various diagnostic and therapeutic procedures. A variety of image registration methods have been developed, but they remain limited to specific clinical applications. Assuming a rigid body transformation, two images can be registered if their differences are calculated in terms of translation, rotation and scaling. This paper describes the development and testing of a new correlation-based approach for three-dimensional image registration. First, the scaling factors introduced by the imaging devices are calculated and compensated for. Then, the two images become translation invariant by computing their three-dimensional Fourier magnitude spectra. Subsequently, a spherical coordinate transformation is performed and the three-dimensional rotation is computed using a novel approach referred to as 'Polar Shells'. The method of polar shells maps the three angles of rotation into one rotation and two translations of a two-dimensional function and then proceeds to calculate them using appropriate transformations based on the Fourier invariance properties. A basic assumption in the method is that the three-dimensional rotation is constrained to one large and two relatively small angles. This assumption is generally satisfied in normal clinical settings. The new three-dimensional image registration method was tested with simulations using computer-generated phantom data as well as actual clinical data. Performance analysis and accuracy evaluation of the method using computer simulations yielded errors in the sub-pixel range. (authors) 6 refs., 3 figs.

  7. A two-dimensional adaptive numerical grids generation method and its realization

    International Nuclear Information System (INIS)

    Xu Tao; Shui Hongshou

    1998-12-01

    A two-dimensional adaptive numerical grid generation method and its particular realization are discussed. This method is effective and easy to realize if the control functions are given continuously, and the grids for some regions are shown for this case. For computational fluid dynamics, because the control values of the adaptive grids (the numerical solution) are given in discrete form, these values need to be interpolated to obtain continuous control functions. These interpolation techniques are discussed, and some efficient adaptive grids are given. A two-dimensional fluid dynamics example is also given

  8. Approximate solutions for the two-dimensional integral transport equation. The critically mixed methods of resolution

    International Nuclear Information System (INIS)

    Sanchez, Richard.

    1980-11-01

    This work is divided into two parts: the first part (note CEA-N-2165) deals with the solution of complex two-dimensional transport problems; the second treats the critically mixed methods of resolution. These methods are applied to one-dimensional geometries with highly anisotropic scattering. In order to simplify the set of integral equations provided by the integral transport equation, the integro-differential equation is used to obtain relations that reduce the number of integral equations to be solved; a general mathematical and numerical study is presented [fr

  9. Development of three-dimensional individual bubble-velocity measurement method by bubble tracking

    International Nuclear Information System (INIS)

    Kanai, Taizo; Furuya, Masahiro; Arai, Takahiro; Shirakawa, Kenetsu; Nishi, Yoshihisa

    2012-01-01

    A gas-liquid two-phase flow in a large-diameter pipe exhibits a three-dimensional flow structure. A Wire-Mesh Sensor (WMS) consists of a pair of parallel wire layers located at the cross section of a pipe. The two wire layers cross at 90° with a small gap, and each intersection acts as an electrode. The WMS allows the measurement of the instantaneous two-dimensional void-fraction distribution over the cross section of a pipe, based on differences in the local instantaneous conductivity of the two-phase flow. Furthermore, the WMS can acquire a phasic velocity on the basis of the time lag of void signals between two sets of WMS. Previously, the acquired phasic velocity was one-dimensional, with time-averaged distributions. The authors propose a method to estimate three-dimensional bubble velocities individually from WMS data. The bubble velocity is determined by a tracing method. In this tracing method, each bubble is separated from the WMS signal, and the volume and center coordinates of the bubble are acquired. Two bubbles with similar volumes at the two WMS are considered to be the same bubble, and the bubble velocity is estimated from the displacement of the center coordinates of the two bubbles. The validity of this method is verified with a swirl flow. The proposed method can successfully visualize the swirl flow structure, and its results agree with those of cross-correlation analysis. (author)
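
    A schematic version of the pairing step, with made-up bubble lists and an assumed sensor spacing: bubbles segmented from the two wire-mesh sensors are matched by similar volume, and each matched pair yields a velocity from its centre displacement divided by the transit time.

```python
import numpy as np

# Pair bubbles from the upstream and downstream wire-mesh sensors by similar
# volume, then compute each pair's 3-D velocity from the displacement of its
# centre divided by the transit time.  All numbers are illustrative.

axial_gap = 0.037   # m, distance between the two WMS planes (assumed)

# Each bubble: (volume [mm^3], centre x [mm], centre y [mm], detection time [s])
upstream = [(120.0,  3.0, 1.0, 0.100), (45.0, -6.0, 4.0, 0.104)]
downstream = [(118.0, 4.5, 0.2, 0.131), (46.0, -5.1, 5.2, 0.141)]

def match_bubbles(up, down, vol_tol=0.1):
    """Pair bubbles whose relative volume difference is below vol_tol."""
    pairs, used = [], set()
    for vu, xu, yu, tu in up:
        best, best_dv = None, vol_tol
        for j, (vd, xd, yd, td) in enumerate(down):
            dv = abs(vd - vu) / vu
            if j not in used and dv < best_dv and td > tu:
                best, best_dv = j, dv
        if best is not None:
            used.add(best)
            pairs.append(((vu, xu, yu, tu), down[best]))
    return pairs

for (vu, xu, yu, tu), (vd, xd, yd, td) in match_bubbles(upstream, downstream):
    dt = td - tu
    velocity = np.array([(xd - xu) * 1e-3, (yd - yu) * 1e-3, axial_gap]) / dt
    print(f"bubble ~{vu:.0f} mm^3: velocity = {velocity.round(3)} m/s")
```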

  10. Research on remote sensing image pixel attribute data acquisition method in AutoCAD

    Science.gov (United States)

    Liu, Xiaoyang; Sun, Guangtong; Liu, Jun; Liu, Hui

    2013-07-01

    Remote sensing images have been widely used in AutoCAD, but AutoCAD lacks functions for remote sensing image processing. In this paper, ObjectARX is used as the secondary development tool, combined with the Image Engine SDK, to realize remote sensing image pixel attribute data acquisition in AutoCAD, which provides critical technical support for remote sensing image processing algorithms in the AutoCAD environment.

  11. Cost-Benefit Comparison: A Method for Evaluation Proposed Changes to Defense Acquisition Procedures

    Science.gov (United States)

    1990-09-01

    Department of Civil Engineering, Florida University, Gainesville FL, Summer 1986 (AD-A170752). Horngren, Charles T. and George Foster. Cost Accounting: A...Acquisition Regulation (FAR) system, the Department of Labor (DOL), the Cost Accounting Standards Board (CASB), and the General Services...decision. In management and in managerial accounting, this type of study is known as cost-benefit analysis. A cost-benefit analysis is the most important

  12. NOTE: A method for controlling image acquisition in electronic portal imaging devices

    Science.gov (United States)

    Glendinning, A. G.; Hunt, S. G.; Bonnett, D. E.

    2001-02-01

    Certain types of camera-based electronic portal imaging devices (EPIDs) which initiate image acquisition based on sensing a change in video level have been observed to trigger unreliably at the beginning of dynamic multileaf collimation sequences. A simple, novel means of controlling image acquisition with an Elekta linear accelerator (Elekta Oncology Systems, Crawley, UK) is proposed which is based on illumination of a photodetector (ORP-12, Silonex Inc., Plattsburgh, NY, USA) by the electron gun of the accelerator. By incorporating a simple trigger circuit it is possible to derive a beam on/off status signal which changes at least 100 ms before any dose is measured by the accelerator. The status signal does not return to the beam-off state until all dose has been delivered and is suitable for accelerator pulse repetition frequencies of 50-400 Hz. The status signal is thus a reliable means of indicating the initiation and termination of radiation exposure, and hence of controlling image acquisition for such EPIDs in this application.

  13. An efficient heuristic method for active feature acquisition and its application to protein-protein interaction prediction

    Directory of Open Access Journals (Sweden)

    Thahir Mohamed

    2012-11-01

    Background: Machine learning approaches for classification learn the pattern of the feature space of different classes, or learn a boundary that separates the feature space into different classes. The features of the data instances are usually available, and it is only the class labels of the instances that are unavailable. For example, to classify text documents into different topic categories, the words in the documents are features and they are readily available, whereas the topic is what is predicted. However, in some domains obtaining features may be resource-intensive, so not all features may be available. An example is protein-protein interaction prediction, where not only are the labels ('interacting' or 'non-interacting') unavailable, but so are some of the features. It may be possible to obtain at least some of the missing features by carrying out a few experiments as permitted by the available resources. If only a few experiments can be carried out to acquire missing features, which proteins should be studied and which features of those proteins should be determined? From the perspective of machine learning for PPI prediction, it would be desirable to acquire those features which, when used in training the classifier, improve the accuracy of the classifier the most. That is, the utility of the feature acquisition is measured in terms of how much the acquired features contribute to improving the accuracy of the classifier. Active feature acquisition (AFA) is a strategy to preselect such instance-feature combinations (i.e., protein and experiment combinations) for maximum utility. The goal of AFA is the creation of an optimal training set that would result in the best classifier, not the determination of the best classification model itself. Results: We present a heuristic method for active feature acquisition to calculate the utility of acquiring a missing feature. This heuristic takes into account the change in

  14. Correlation based method for comparing and reconstructing quasi-identical two-dimensional structures

    International Nuclear Information System (INIS)

    Mejia-Barbosa, Y.

    2000-03-01

    We show a method for comparing and reconstructing two similar amplitude-only structures, which are composed of the same number of identical apertures. The structures are two-dimensional and differ only in the location of one of the apertures. The method is based on a subtraction algorithm, which involves the auto-correlation and cross-correlation functions of the compared structures. Experimental results illustrate the feasibility of the method. (author)
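
    One possible reading of the correlation-based comparison, sketched on synthetic binary aperture masks: the autocorrelation of the reference structure and its cross-correlation with the second structure are computed with FFTs and subtracted, and a zero difference map would indicate identical structures. This is an illustration of the correlation machinery only, not the record's exact algorithm.

```python
import numpy as np

# Build two binary aperture masks that differ only in the position of one
# aperture, compute the reference autocorrelation and the cross-correlation
# via FFTs, and inspect their difference.

def put_aperture(mask, cx, cy, r=3):
    y, x = np.ogrid[:mask.shape[0], :mask.shape[1]]
    mask[(x - cx) ** 2 + (y - cy) ** 2 <= r ** 2] = 1.0

A = np.zeros((128, 128))
B = np.zeros((128, 128))
for cx, cy in [(30, 30), (90, 40), (60, 90)]:    # apertures common to both
    put_aperture(A, cx, cy)
    put_aperture(B, cx, cy)
put_aperture(A, 40, 70)          # aperture in its original place ...
put_aperture(B, 52, 74)          # ... displaced in the second structure

def corr(f, g):
    """Circular cross-correlation of two real 2-D arrays via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(f) * np.conj(np.fft.fft2(g))))

difference = corr(A, A) - corr(A, B)
print("structures identical?", np.allclose(difference, 0.0, atol=1e-8))
print("max |difference| of correlation maps:", np.abs(difference).max())
```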

  15. Semi-implicit method for three-dimensional compressible MHD simulation

    International Nuclear Information System (INIS)

    Harned, D.S.; Kerner, W.

    1984-03-01

    A semi-implicit method for solving the full compressible MHD equations in three dimensions is presented. The method is unconditionally stable with respect to the fast compressional modes. The time step is instead limited by the slower shear Alfven motion. The computing time required for one time step is essentially the same as for explicit methods. Linear stability limits are derived and verified by three-dimensional tests on linear waves in slab geometry. (orig.)

  16. Development of calculation method for one-dimensional kinetic analysis in fission reactors, including feedback effects

    International Nuclear Information System (INIS)

    Paixao, S.B.; Marzo, M.A.S.; Alvim, A.C.M.

    1986-01-01

    The calculation method used in the WIGLE code is studied. Because no detailed account of this noteworthy solution is available, an attempt has been made to expound the method minutely. The developed method has been applied to the solution of the one-dimensional, two-group diffusion equations in slab geometry (axial analysis), including non-boiling heat transfer and accounting for feedback. A steady-state program (CITER-1D), written in FORTRAN 4, has been implemented, providing excellent results and confirming the quality of the developed work. (Author) [pt

  17. A study on three dimensional layout design by the simulated annealing method

    International Nuclear Information System (INIS)

    Jang, Seung Ho

    2008-01-01

    Modern engineered products are becoming increasingly complicated and most consumers prefer compact designs. Layout design plays an important role in many engineered products. The objective of this study is to suggest a method to apply the simulated annealing method to the arbitrarily shaped three-dimensional component layout design problem. The suggested method not only optimizes the packing density but also satisfies constraint conditions among the components. The algorithm and its implementation as suggested in this paper are extendable to other research objectives
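
    A skeleton of a simulated-annealing layout loop follows for a simplified stand-in problem: axis-aligned boxes instead of arbitrarily shaped 3D components, with an objective that combines the enclosing volume and an overlap penalty. The cooling schedule, move size and penalty weight are illustrative choices, not the settings of the study.

```python
import numpy as np

# Simulated-annealing skeleton for a toy 3-D layout problem: pack axis-aligned
# boxes into as small an enclosing volume as possible while penalising
# pairwise overlaps (a crude constraint-satisfaction surrogate).

rng = np.random.default_rng(0)
sizes = rng.uniform(1.0, 3.0, size=(8, 3))          # box edge lengths
pos = rng.uniform(0.0, 10.0, size=(8, 3))           # box min-corner positions

def overlap(p, s):
    """Total pairwise overlap volume of axis-aligned boxes."""
    total = 0.0
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            d = np.minimum(p[i] + s[i], p[j] + s[j]) - np.maximum(p[i], p[j])
            if np.all(d > 0):
                total += np.prod(d)
    return total

def cost(p, s, penalty=50.0):
    bbox = np.prod((p + s).max(axis=0) - p.min(axis=0))   # enclosing volume
    return bbox + penalty * overlap(p, s)

T, cooling = 5.0, 0.999
best = cost(pos, sizes)
for step in range(5000):
    trial = pos.copy()
    k = rng.integers(len(pos))
    trial[k] += rng.normal(scale=0.3, size=3)             # random move
    c_new, c_old = cost(trial, sizes), cost(pos, sizes)
    if c_new < c_old or rng.random() < np.exp((c_old - c_new) / T):
        pos = trial
        best = min(best, c_new)
    T *= cooling
print("best cost found:", round(best, 2))
```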

  18. Development of three-dimensional transport code by the double finite element method

    International Nuclear Information System (INIS)

    Fujimura, Toichiro

    1985-01-01

    Development of a three-dimensional neutron transport code by the double finite element method is described. Both the Galerkin and variational methods are adopted to solve the problem, and their characteristics are compared. Computational results of the collocation method, developed as a technique for the variational one, are illustrated in comparison with those of an Sn code. (author)

  19. A novel three-dimensional mesh deformation method based on sphere relaxation

    International Nuclear Information System (INIS)

    Zhou, Xuan; Li, Shuixiang

    2015-01-01

    In our previous work (2013) [19], we developed a disk-relaxation-based mesh deformation method for two-dimensional meshes. In this paper, the idea of the disk relaxation is extended to the sphere relaxation for three-dimensional meshes with large deformations. We develop a node-based pre-displacement procedure to apply initial movements on nodes according to their layer indices. Afterwards, the nodes are moved locally by the improved sphere relaxation algorithm to transfer boundary deformations and increase the mesh quality. A three-dimensional mesh smoothing method is also adopted to prevent the occurrence of negative element volumes and further improve the mesh quality. Numerical applications in three dimensions, including wing rotation, a bending beam and a morphing aircraft, are carried out. The results demonstrate that the sphere relaxation based approach generates the deformed mesh with high quality, especially regarding complex boundaries and large deformations

  20. A novel three-dimensional mesh deformation method based on sphere relaxation

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Xuan [Department of Mechanics & Engineering Science, College of Engineering, Peking University, Beijing, 100871 (China); Institute of Applied Physics and Computational Mathematics, Beijing, 100094 (China); Li, Shuixiang, E-mail: lsx@pku.edu.cn [Department of Mechanics & Engineering Science, College of Engineering, Peking University, Beijing, 100871 (China)

    2015-10-01

    In our previous work (2013) [19], we developed a disk-relaxation-based mesh deformation method for two-dimensional meshes. In this paper, the idea of the disk relaxation is extended to the sphere relaxation for three-dimensional meshes with large deformations. We develop a node-based pre-displacement procedure to apply initial movements on nodes according to their layer indices. Afterwards, the nodes are moved locally by the improved sphere relaxation algorithm to transfer boundary deformations and increase the mesh quality. A three-dimensional mesh smoothing method is also adopted to prevent the occurrence of negative element volumes and further improve the mesh quality. Numerical applications in three dimensions, including wing rotation, a bending beam and a morphing aircraft, are carried out. The results demonstrate that the sphere relaxation based approach generates the deformed mesh with high quality, especially regarding complex boundaries and large deformations.

  1. Pseudo three-dimensional modeling of particle-fuel packing using distinct element method

    International Nuclear Information System (INIS)

    Yuki, Daisuke; Takata, Takashi; Yamaguchi, Akira

    2007-01-01

    Vibration-based packing of sphere-pac fuel is a key technology in nuclear fuel manufacturing. In the production process of sphere-pac fuel, a Mixed Oxide (MOX) fuel is formed into spheres and packed into a cladding tube by applying a vibration force. In the present study, we have developed a numerical simulation method to investigate the behavior of the particles in a vibrated tube using the Distinct Element Method (DEM). In general, the DEM requires a significant computational cost. Therefore we propose a new approach in which a small particle can move through the space between three larger particles even in a two-dimensional simulation. We take an equivalent three-dimensional effect into account in the equations of motion. It is thus named pseudo three-dimensional modeling. (author)

  2. Finite element method for radiation heat transfer in multi-dimensional graded index medium

    International Nuclear Information System (INIS)

    Liu, L.H.; Zhang, L.; Tan, H.P.

    2006-01-01

    In a graded index medium, a ray follows a curved path determined by Fermat's principle, and curved ray tracing is very difficult and complex. To avoid the complicated and time-consuming computation of curved ray trajectories, a finite element method based on the discrete ordinates equation is developed to solve the radiative transfer problem in a multi-dimensional semitransparent graded index medium. Two particular test problems of radiative transfer are taken as examples to verify this finite element method. The predicted dimensionless net radiative heat fluxes are determined by the proposed method and compared with the results obtained by the finite volume method. The results show that the finite element method presented in this paper has good accuracy in solving the multi-dimensional radiative transfer problem in a semitransparent graded index medium

  3. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang

    2017-09-27

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  4. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.

    Science.gov (United States)

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-09-21

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  5. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
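
    In the spirit of the comparison described in these records, the sketch below contrasts the log-determinant obtained from the plain sample covariance with a Ledoit-Wolf shrinkage estimate on simulated data. These are two generic estimators chosen for illustration and are not necessarily among the eight methods evaluated in the paper.

```python
import numpy as np
from sklearn.covariance import EmpiricalCovariance, LedoitWolf

# Estimate the log-determinant of a covariance matrix when the dimension is
# close to the sample size, using the sample covariance and a Ledoit-Wolf
# shrinkage estimator, and compare both to the known truth.

rng = np.random.default_rng(0)
p, n = 50, 80                                    # dimension near sample size
true_cov = np.diag(rng.uniform(0.5, 2.0, size=p))
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

true_logdet = np.linalg.slogdet(true_cov)[1]
sample_logdet = np.linalg.slogdet(EmpiricalCovariance().fit(X).covariance_)[1]
lw_logdet = np.linalg.slogdet(LedoitWolf().fit(X).covariance_)[1]

print("true log-determinant:      ", round(true_logdet, 2))
print("sample covariance estimate:", round(sample_logdet, 2))
print("Ledoit-Wolf estimate:      ", round(lw_logdet, 2))
```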

  6. A Two-Dimensional Solar Tracking Stationary Guidance Method Based on Feature-Based Time Series

    Directory of Open Access Journals (Sweden)

    Keke Zhang

    2018-01-01

    The amount of energy a satellite acquires has a direct impact on its operational capacities. For practical high-functional-density microsatellites, the solar tracking guidance design of the solar panels plays an extremely important role. To address the stationary tracking problems of a new system that uses panels mounted on a two-dimensional turntable to acquire energy to the greatest extent, a two-dimensional solar tracking stationary guidance method based on feature-based time series is proposed under the constraint of limited satellite attitude coupling control capability. By analyzing the solar vector variation characteristics within an orbit period and the solar vector changes over the whole life cycle, the method establishes a two-dimensional solar tracking guidance model based on feature-based time series to realize automatic switching of the feature-based time series and stationary guidance under different β angles and maximum angular velocity control, applicable to near-earth orbits of all orbital inclinations. The method was employed to design a two-dimensional solar tracking stationary guidance system, and a mathematical simulation of the guidance performance was carried out under diverse conditions representative of in-orbit application. The simulation results show that the solar tracking accuracy of the two-dimensional stationary guidance reaches 10° or better under the integrated constraints, which meets engineering application requirements.

  7. Real time alpha value measurement with Feynman-α method utilizing time series data acquisition on low enriched uranium system

    International Nuclear Information System (INIS)

    Tonoike, Kotaro; Yamamoto, Toshihiro; Watanabe, Shoichi; Miyoshi, Yoshinori

    2003-01-01

    As part of the development of a subcriticality monitoring system, a system with a time series data acquisition function for detector signals and a real-time evaluation function for the alpha value based on the Feynman-alpha method was established, with which the kinetic parameter (alpha value) was measured on the STACY heterogeneous core. Hashimoto's difference filter was implemented in the system, which enables measurement at a critical condition. The measurement results of the new system agreed with those of the pulsed neutron method. (author)
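
    A minimal sketch of the variance-to-mean evaluation on a synthetic pulse train: counts are binned into gates of width T, Y(T) = variance/mean - 1 is formed, and the standard point-kinetics expression is fitted to extract alpha. The cluster model, rates and gate widths are assumptions for illustration, and the Hashimoto difference filter is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

# Feynman-alpha (variance-to-mean) evaluation on a synthetic correlated pulse
# train: fit Y(T) = Y_inf * (1 - (1 - exp(-a*T)) / (a*T)) to extract alpha.

rng = np.random.default_rng(0)
alpha_true = 250.0                                     # 1/s, assumed value
parents = np.cumsum(rng.exponential(1.0e-3, size=20_000))
pulses = np.sort(np.concatenate([
    t0 + rng.exponential(1.0 / alpha_true, size=rng.poisson(2.0))
    for t0 in parents]))

def feynman_y(timestamps, gate_width):
    edges = np.arange(timestamps[0], timestamps[-1], gate_width)
    counts, _ = np.histogram(timestamps, bins=edges)
    return counts.var(ddof=1) / counts.mean() - 1.0

gates = np.logspace(-4, -2, 20)                        # gate widths T [s]
y_values = np.array([feynman_y(pulses, T) for T in gates])

def model(T, y_inf, alpha):
    return y_inf * (1.0 - (1.0 - np.exp(-alpha * T)) / (alpha * T))

(y_inf, alpha), _ = curve_fit(model, gates, y_values, p0=[1.0, 300.0])
print(f"fitted alpha = {alpha:.0f} 1/s (true value {alpha_true:.0f} 1/s)")
```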

  8. A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification

    Directory of Open Access Journals (Sweden)

    Yongjun Piao

    2015-01-01

    Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods such as bagging, boosting, and random forest have been devised and have received considerable attention in the past. However, data dimensionality increases rapidly day by day. Such a trend poses various challenges as these methods are not suitable for direct application to high-dimensional datasets. In this paper, we propose an ensemble method for classification of high-dimensional data, with each classifier constructed from a different set of features determined by partitioning of redundant features. In our method, the redundancy of features is considered in dividing the original feature space. Then, each generated feature subset is used to train a support vector machine, and the results of the classifiers are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms other methods.
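
    A rough sketch of the partition-train-vote idea follows, using plain feature correlation as a stand-in for the redundancy measure of the paper, scikit-learn SVMs as the base classifiers, and an arbitrary synthetic data set and number of subsets.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Split the feature space into disjoint subsets so that strongly correlated
# (redundant) features are spread across subsets, train one SVM per subset,
# and classify by majority vote (ties go to class 0).

X, y = make_classification(n_samples=400, n_features=60, n_informative=15,
                           n_redundant=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

n_subsets = 4
# Rank features by how strongly they correlate with the rest, then deal them
# round-robin so each subset receives a mix of redundant and unique features.
corr = np.abs(np.corrcoef(X_tr, rowvar=False))
order = np.argsort(-corr.sum(axis=0))
subsets = [order[i::n_subsets] for i in range(n_subsets)]

models = [SVC(kernel="rbf", gamma="scale").fit(X_tr[:, idx], y_tr)
          for idx in subsets]
votes = np.stack([m.predict(X_te[:, idx]) for m, idx in zip(models, subsets)])
majority = (votes.mean(axis=0) > 0.5).astype(int)

print("ensemble accuracy:", (majority == y_te).mean())
```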

  9. A two-dimensional, semi-analytic expansion method for nodal calculations

    International Nuclear Information System (INIS)

    Palmtag, S.P.

    1995-08-01

    Most modern nodal methods used today are based upon the transverse integration procedure, in which the multi-dimensional flux shape is integrated over the transverse directions in order to produce a set of coupled one-dimensional flux shapes. The one-dimensional flux shapes are then solved either analytically or by representing the flux shape by a finite polynomial expansion. While these methods have been verified for most light-water reactor applications, they have been found to have difficulty predicting the large thermal flux gradients near the interfaces of highly enriched MOX fuel assemblies. A new method is presented here in which the neutron flux is represented by a non-separable, two-dimensional, semi-analytic flux expansion. The main features of this method are (1) the leakage terms from the node are modeled explicitly and therefore the transverse integration procedure is not used, (2) the corner-point flux values for each node are directly edited from the solution method, and a corner-point interpolation is not needed in the flux reconstruction, (3) the thermal flux expansion contains hyperbolic terms representing analytic solutions to the thermal flux diffusion equation, and (4) the thermal flux expansion contains a thermal-to-fast flux ratio term which reduces the number of polynomial expansion functions needed to represent the thermal flux. This new nodal method has been incorporated into the computer code COLOR2G and has been used to solve a two-dimensional, two-group colorset problem containing uranium and highly enriched MOX fuel assemblies. The results from this calculation are compared to the results found using a code based on the traditional transverse integration procedure

  10. 3-Dimensional and Interactive Istanbul University Virtual Laboratory Based on Active Learning Methods

    Science.gov (United States)

    Ince, Elif; Kirbaslar, Fatma Gulay; Yolcu, Ergun; Aslan, Ayse Esra; Kayacan, Zeynep Cigdem; Alkan Olsson, Johanna; Akbasli, Ayse Ceylan; Aytekin, Mesut; Bauer, Thomas; Charalambis, Dimitris; Gunes, Zeliha Ozsoy; Kandemir, Ceyhan; Sari, Umit; Turkoglu, Suleyman; Yaman, Yavuz; Yolcu, Ozgu

    2014-01-01

    The purpose of this study is to develop a 3-dimensional, interactive, multi-user and multi-admin IUVIRLAB featuring active learning methods and techniques for university students, to introduce the Virtual Laboratory of Istanbul University, and to show the effects of IUVIRLAB on students' attitudes toward communication skills and IUVIRLAB. Although there…

  11. Newton-sor iterative method for solving the two-dimensional porous ...

    African Journals Online (AJOL)

    In this paper, we consider the application of the Newton-SOR iterative method in obtaining the approximate solution of the two-dimensional porous medium equation (2D PME). The nonlinear finite difference approximation equation to the 2D PME is derived by using the implicit finite difference scheme. The developed ...

  12. ANALYSIS OF IMPACT ON COMPOSITE STRUCTURES WITH THE METHOD OF DIMENSIONALITY REDUCTION

    Directory of Open Access Journals (Sweden)

    Valentin L. Popov

    2015-04-01

    Full Text Available In the present paper, we discuss the impact of rigid profiles on continua with non-local criteria for plastic yield. For the important case of media whose hardness is inversely proportional to the indentation radius, we suggest a rigorous treatment based on the method of dimensionality reduction (MDR and study the example of indentation by a conical profile.

  13. A greedy method for reconstructing polycrystals from three-dimensional X-ray diffraction data

    DEFF Research Database (Denmark)

    Kulshreshth, Arun Kumar; Alpers, Andreas; Herman, Gabor T.

    2009-01-01

    An iterative search method is proposed for obtaining orientation maps inside polycrystals from three-dimensional X-ray diffraction (3DXRD) data. In each step, detector pixel intensities are calculated by a forward model based on the current estimate of the orientation map. The pixel at which...

  14. Multisymplectic Structure-Preserving in Simple Finite Element Method in High Dimensional Case

    Institute of Scientific and Technical Information of China (English)

    BAI Yong-Qiang; LIU Zhen; PEI Ming; ZHENG Zhu-Jun

    2003-01-01

    In this paper, we study a finite element scheme for some semi-linear elliptic boundary value problems in high-dimensional space. With a uniform mesh, we find that the numerical scheme derived from the finite element method can preserve the multisymplectic structure.

  15. Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method

    International Nuclear Information System (INIS)

    Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin

    2015-01-01

    The powerful nondestructive characteristics of computed tomography (CT) are attracting more and more research into its use for dimensional metrology, where it offers a practical alternative to common measurement methods. However, inaccuracy and uncertainty severely limit the further utilization of CT for dimensional metrology; they arise from many factors, among which the beam hardening (BH) effect plays a vital role. This paper mainly focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a penalty term is added to the cost function, enabling more accurate measurement results to be obtained by a simple global threshold method. The proposed method is efficient, and especially suited to the case where there is a large difference in gray value between material and background. Different spheres with known diameters are used to verify the accuracy of the dimensional measurements. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is of general feasibility. (paper)
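
    The optimization loop implied by this record can be sketched as follows. The exponential correction form, the entropy weight, and the penalty below are illustrative guesses rather than the published model, and the sinogram is a synthetic placeholder generated from a phantom.

```python
# Hedged sketch of gray-entropy-driven beam-hardening correction; the exact
# correction model and penalty of the paper are not reproduced, and the
# sinogram is synthetic placeholder data.
import numpy as np
from scipy.optimize import minimize
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

theta = np.linspace(0.0, 180.0, 90, endpoint=False)
phantom = shepp_logan_phantom()[::4, ::4]                # small test object
sino = radon(phantom, theta=theta)
sino = sino / sino.max()                                 # normalized projections

def gray_entropy(volume, bins=256):
    """Shannon entropy of the gray-value histogram of the reconstruction."""
    hist, _ = np.histogram(volume, bins=bins, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return -np.sum(p * np.log(p))

def cost(params, lam=0.1):
    a, b = params
    corrected = (np.exp(a * sino) - 1.0) / a + b * sino  # assumed exponential model
    recon = iradon(corrected, theta=theta, filter_name="ramp")
    # Penalty keeps the corrected data close to the raw data (consistency/contrast).
    return gray_entropy(recon) + lam * np.mean((corrected - sino) ** 2)

res = minimize(cost, x0=[0.5, 1.0], method="Nelder-Mead", options={"maxiter": 60})
print("estimated correction parameters (a, b):", res.x)
```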

  16. Two dimensional PMMA nanofluidic device fabricated by hot embossing and oxygen plasma assisted thermal bonding methods

    Science.gov (United States)

    Yin, Zhifu; Sun, Lei; Zou, Helin; Cheng, E.

    2015-05-01

    A method for obtaining a low-cost, high-replication-precision two-dimensional (2D) nanofluidic device from a polymethyl methacrylate (PMMA) sheet is proposed. To improve the replication precision of the 2D PMMA nanochannels during the hot embossing process, the deformation of the PMMA sheet was analyzed by a numerical simulation method. The constants of the generalized Maxwell model used in the numerical simulation were calculated from experimental compressive creep curves based on a previously established fitting formula. With optimized process parameters, 176 nm-wide and 180 nm-deep nanochannels were successfully replicated into the PMMA sheet with a replication precision of 98.2%. To thermally bond the 2D PMMA nanochannels with high bonding strength and low dimensional loss, the parameters of the oxygen plasma treatment and thermal bonding process were optimized. In order to measure the dimensional loss of the 2D nanochannels after thermal bonding, a dimensional-loss evaluation method based on nanoindentation experiments was proposed. According to this method, the total dimensional loss of the 2D nanochannels was 6 nm in width and 21 nm in depth. The tensile bonding strength of the 2D PMMA nanofluidic device was 0.57 MPa. The fluorescence images demonstrate that there was no blocking or leakage over the entire length of the microchannels and nanochannels.

  17. Geotechnical applications of a two-dimensional elastodynamic displacement discontinuity method

    CSIR Research Space (South Africa)

    Siebrits, E

    1993-12-01

    Full Text Available A general two-dimensional elastodynamic displacement discontinuity method is used to model a variety of application problems. The plane strain problems are: the elastodynamic motions induced on a cavity by shear slip on a nearby crack; the dynamic...

  18. Variational Homotopy Perturbation Method for Solving Higher Dimensional Initial Boundary Value Problems

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Noor

    2008-01-01

    Full Text Available We suggest and analyze a technique combining the variational iteration method and the homotopy perturbation method. This method is called the variational homotopy perturbation method (VHPM). We use this method for solving higher dimensional initial boundary value problems with variable coefficients. The developed algorithm is quite efficient and is practically well suited for use in these problems. The proposed scheme finds the solution without any discretization, transformation, or restrictive assumptions and avoids round-off errors. Several examples are given to check the reliability and efficiency of the proposed technique.

  19. A new Riccati equation rational expansion method and its application to (2 + 1)-dimensional Burgers equation

    International Nuclear Information System (INIS)

    Wang Qi; Chen Yong; Zhang Hongqing

    2005-01-01

    In this paper, we present a new Riccati equation rational expansion method to uniformly construct a series of exact solutions for nonlinear evolution equations. Compared with most existing tanh methods and other sophisticated methods, the proposed method not only recovers some known solutions but also finds some new and general solutions. The solutions obtained in this paper include rational triangular periodic wave solutions, rational solitary wave solutions, and rational wave solutions. The efficiency of the method is demonstrated on the (2 + 1)-dimensional Burgers equation.

  20. Analysis of Elastic-Plastic J Integrals for 3-Dimensional Cracks Using Finite Element Alternating Method

    International Nuclear Information System (INIS)

    Park, Jai Hak

    2009-01-01

    SGBEM(Symmetric Galerkin Boundary Element Method)-FEM alternating method has been proposed by Nikishkov, Park and Atluri. In the proposed method, arbitrarily shaped three-dimensional crack problems can be solved by alternating between the crack solution in an infinite body and the finite element solution without a crack. In the previous study, the SGBEM-FEM alternating method was extended further in order to solve elastic-plastic crack problems and to obtain elastic-plastic stress fields. For the elastic-plastic analysis the algorithm developed by Nikishkov et al. is used after modification. In the algorithm, the initial stress method is used to obtain elastic-plastic stress and strain fields. In this paper, elastic-plastic J integrals for three-dimensional cracks are obtained using the method. For that purpose, accurate values of displacement gradients and stresses are necessary on an integration path. In order to improve the accuracy of stress near crack surfaces, coordinate transformation and partitioning of integration domain are used. The coordinate transformation produces a transformation Jacobian, which cancels the singularity of the integrand. Using the developed program, simple three-dimensional crack problems are solved and elastic and elastic-plastic J integrals are obtained. The obtained J integrals are compared with the values obtained using a handbook solution. It is noted that J integrals obtained from the alternating method are close to the values from the handbook

  1. Casimir effect in a d-dimensional flat spacetime and the cut-off method

    International Nuclear Information System (INIS)

    Svaiter, N.F.; Svaiter, B.F.

    1989-01-01

    The Casimir effect in a D-dimensional spacetime produced by a Hermitian massless scalar field in the presence of a pair of perfectly reflecting parallel flat plates is discussed. The exponential cut-off regularization method is employed. The regularized vacuum energy and the Casimir energy of this field are evaluated, and a detailed analysis of the divergent terms in the regularized vacuum energy is carried out. The two-dimensional version of the Casimir effect is discussed by means of the same cut-off method. A comparison between the above method and the zeta function regularization procedure is presented in a way that unifies the two methods in the present case. (author) [pt

  2. Assessment of temporal resolution of multi-detector row computed tomography in helical acquisition mode using the impulse method.

    Science.gov (United States)

    Ichikawa, Katsuhiro; Hara, Takanori; Urikura, Atsushi; Takata, Tadanori; Ohashi, Kazuya

    2015-06-01

    The purpose of this study was to propose a method for assessing the temporal resolution (TR) of multi-detector row computed tomography (CT) (MDCT) in the helical acquisition mode using temporal impulse signals generated by a metal ball passing through the acquisition plane. An 11-mm diameter metal ball was shot along the central axis at approximately 5 m/s during a helical acquisition, and the temporal sensitivity profile (TSP) was measured from the streak image intensities in the reconstructed helical CT images. To assess the validity, we compared the measured and theoretical TSPs for the 4-channel modes of two MDCT systems. A 64-channel MDCT system was used to compare TSPs and image quality of a motion phantom for the pitch factors P of 0.6, 0.8, 1.0 and 1.2 with a rotation time R of 0.5 s, and for two R/P combinations of 0.5/1.2 and 0.33/0.8. Moreover, the temporal transfer functions (TFs) were calculated from the obtained TSPs. The measured and theoretical TSPs showed perfect agreement. The TSP narrowed with an increase in the pitch factor. The image sharpness of the 0.33/0.8 combination was inferior to that of the 0.5/1.2 combination, despite their almost identical full width at tenth maximum values. The temporal TFs quantitatively confirmed these differences. The TSP results demonstrated that the TR in the helical acquisition mode significantly depended on the pitch factor as well as the rotation time, and the pitch factor and reconstruction algorithm affected the TSP shape. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
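
    The post-processing step described here (deriving width metrics and a temporal transfer function from a measured temporal sensitivity profile) can be sketched as below. The TSP samples and the sampling interval are synthetic placeholders, not measured streak-intensity data.

```python
# Sketch of the TSP post-processing only: width metrics (FWHM, FWTM) and a
# temporal transfer function from a temporal sensitivity profile.
import numpy as np

dt = 0.01                                           # s, assumed sampling interval
t = np.arange(-1.0, 1.0, dt)
tsp = np.clip(1.0 - np.abs(t) / 0.25, 0.0, None)    # placeholder triangular TSP
tsp = tsp / (tsp.sum() * dt)                        # normalize to unit area

def full_width(profile, t, level):
    """Full width of the profile at `level` times its maximum."""
    above = t[profile >= level * profile.max()]
    return above.max() - above.min()

print("FWHM = %.3f s" % full_width(tsp, t, 0.5))
print("FWTM = %.3f s" % full_width(tsp, t, 0.1))

# Temporal transfer function: normalized magnitude of the TSP's Fourier transform.
freqs = np.fft.rfftfreq(len(tsp), d=dt)
ttf = np.abs(np.fft.rfft(tsp))
ttf /= ttf[0]
print("TTF at 1 Hz = %.3f" % np.interp(1.0, freqs, ttf))
```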

  3. Low cost method for manufacturing a data acquisition system with USB connectivity

    Science.gov (United States)

    Niculescu, V.; Dobre, R. A.; Popovici, E.

    2016-06-01

    In the process of designing and manufacturing an electronic system, the digital oscilloscope plays an essential role, but it also represents one of the most expensive pieces of equipment on a typical workbench. In order to make electronic design more accessible to students and hobbyists, an affordable data acquisition system was conceived. The paper extensively presents the development and testing of a low-cost, medium-speed data acquisition system which can be used in a wide range of electronic measurement and debugging applications, while also assuring great portability due to its small physical dimensions. Each hardware functional block is thoroughly described, highlighting the challenges that occurred as well as the solutions adopted to overcome them. The entire system was successfully manufactured using high-quality components to assure increased reliability, and high-frequency PCB materials and techniques were preferred. The values measured from test signals were compared to the ones obtained using a digital oscilloscope available on the market, and differences of less than 1% were observed.

  4. Moving Least Squares Method for a One-Dimensional Parabolic Inverse Problem

    Directory of Open Access Journals (Sweden)

    Baiyu Wang

    2014-01-01

    Full Text Available This paper investigates the numerical solution of a class of one-dimensional inverse parabolic problems using the moving least squares approximation; the inverse problem is the determination of an unknown source term depending on time. The collocation method is used for solving the equation; some numerical experiments are presented and discussed to illustrate the stability and high efficiency of the method.

  5. Image-Based Compression Method of Three-Dimensional Range Data with Texture

    OpenAIRE

    Chen, Xia; Bell, Tyler; Zhang, Song

    2017-01-01

    Recently, high speed and high accuracy three-dimensional (3D) scanning techniques and commercially available 3D scanning devices have made real-time 3D shape measurement and reconstruction possible. The conventional mesh representation of 3D geometry, however, results in large file sizes, causing difficulties for its storage and transmission. Methods for compressing scanned 3D data therefore become desired. This paper proposes a novel compression method which stores 3D range data within the c...

  6. Direct Linear System Identification Method for Multistory Three-dimensional Building Structure with General Eccentricity

    OpenAIRE

    Shintani, Kenichirou; Yoshitomi, Shinta; Takewaki, Izuru

    2017-01-01

    A method of physical parameter system identification (SI) is proposed here for three-dimensional (3D) building structures with in-plane rigid floors in which the stiffness and damping coefficients of each structural frame in the 3D building structure are identified from the measured floor horizontal accelerations. A batch processing least-squares estimation method for many discrete time domain measured data is proposed for the direct identification of the stiffness and damping coefficients of...

  7. Asymptotic iteration method solutions to the d-dimensional Schroedinger equation with position-dependent mass

    International Nuclear Information System (INIS)

    Yasuk, F.; Tekin, S.; Boztosun, I.

    2010-01-01

    In this study, the exact solutions of the d-dimensional Schroedinger equation with a position-dependent mass m(r) = 1/(1 + ζ²r²) are presented for a free particle, V(r) = 0, by using the method of point canonical transformations. The energy eigenvalues and corresponding wavefunctions for the effective potential, which turns out to be a generalized Pöschl-Teller potential, are obtained within the framework of the asymptotic iteration method.

  8. One-Dimensional Finite Elements An Introduction to the FE Method

    CERN Document Server

    Öchsner, Andreas

    2013-01-01

    This textbook presents finite element methods using exclusively one-dimensional elements. The aim is to present the complex methodology in an easily understandable but mathematically correct fashion. The approach of one-dimensional elements enables the reader to focus on understanding the principles of basic and advanced mechanical problems. The reader easily understands the assumptions and limitations of mechanical modeling as well as the underlying physics without struggling with complex mathematics. But although the description is easy, it remains scientifically correct. The approach using only one-dimensional elements covers not only standard problems but also allows for advanced topics like plasticity or the mechanics of composite materials. Many examples illustrate the concepts, and problems at the end of every chapter help the reader become familiar with the topics.

  9. Systems and methods that generate height map models for efficient three dimensional reconstruction from depth information

    Science.gov (United States)

    Frahm, Jan-Michael; Pollefeys, Marc Andre Leon; Gallup, David Robert

    2015-12-08

    Methods of generating a three dimensional representation of an object in a reference plane from a depth map including distances from a reference point to pixels in an image of the object taken from a reference point. Weights are assigned to respective voxels in a three dimensional grid along rays extending from the reference point through the pixels in the image based on the distances in the depth map from the reference point to the respective pixels, and a height map including an array of height values in the reference plane is formed based on the assigned weights. An n-layer height map may be constructed by generating a probabilistic occupancy grid for the voxels and forming an n-dimensional height map comprising an array of layer height values in the reference plane based on the probabilistic occupancy grid.

  10. An axial calculation method for accurate two-dimensional PWR core simulation

    International Nuclear Information System (INIS)

    Grimm, P.

    1985-02-01

    An axial calculation method, which improves the agreement of the multiplication factors determined by two- and three-dimensional PWR neutronic calculations, is presented. The axial buckling is determined at each time point so as to reproduce the increase of the leakage due to the flattening of the axial power distribution and the effect of the axial variation of the group constants of the fuel on the reactivity is taken into account. The results of a test example show that the differences of k-eff and cycle length between two- and three-dimensional calculations, which are unsatisfactorily large if a constant buckling is used, become negligible if the results of the axial calculation are used in the two-dimensional core simulation. (Auth.)

  11. A high-speed computerized tomography image reconstruction using direct two-dimensional Fourier transform method

    International Nuclear Information System (INIS)

    Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.

    1983-01-01

    The necessity of developing real-time computerized tomography (CT) aimed at the dynamic observation of organs such as the heart has lately been advocated. Its realization requires image reconstruction that is markedly faster than in present CT systems. Although various reconstruction methods have been proposed so far, the method practically employed at present is only the filtered backprojection (FBP) method, which gives high-quality image reconstruction but takes much computing time. In the past, the two-dimensional Fourier transform (TFT) method was regarded as unsuitable for practical use because the image quality obtained was not good, even though the method is promising for high-speed reconstruction because of its lower computing time. However, since it was revealed that the image quality of the TFT method depends greatly on the interpolation accuracy in two-dimensional Fourier space, the authors have developed a high-speed calculation algorithm that can obtain high-quality images by exploiting the relationship between image quality and the interpolation method. In this algorithm, the number of radial data sampling points in Fourier space is increased by a factor of 2^β, and linear or spline interpolation is used. Comparison of this method with the present FBP method led to the conclusion that the image quality is almost the same for practical image matrices, the computing time of the TFT method is about 1/10 that of the FBP method, and the memory requirement is also reduced by about 20%. (Wakatsuki, Y.)
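
    The central-slice idea behind the direct two-dimensional Fourier method can be sketched as follows: each projection is 1D-Fourier-transformed, the resulting radial lines are interpolated onto a Cartesian grid in 2D Fourier space, and an inverse 2D FFT gives the image. The simple linear gridding and the synthetic parallel-beam sinogram below are assumptions for illustration, not the authors' oversampled algorithm.

```python
# Minimal direct-Fourier (central-slice) reconstruction sketch for a synthetic
# parallel-beam sinogram; simple linear gridding stands in for the oversampled
# radial interpolation described in the record.
import numpy as np
from scipy.interpolate import griddata
from skimage.data import shepp_logan_phantom
from skimage.transform import radon

image = shepp_logan_phantom()[::4, ::4]            # small test object
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(image, theta=theta)                   # shape (n_detector, n_angles)

n = sino.shape[0]
# 1D FFT of each projection = one radial line of the object's 2D Fourier transform.
proj_fft = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(sino, axes=0), axis=0), axes=0)

r = np.arange(n) - n // 2
ang = np.deg2rad(theta)
u = np.outer(r, np.cos(ang)).ravel()               # polar sample coordinates
v = np.outer(r, np.sin(ang)).ravel()
uu, vv = np.meshgrid(r, r, indexing="ij")

# Interpolate real and imaginary parts from the polar samples to a Cartesian grid.
Fr = griddata((u, v), proj_fft.real.ravel(), (uu, vv), method="linear", fill_value=0.0)
Fi = griddata((u, v), proj_fft.imag.ravel(), (uu, vv), method="linear", fill_value=0.0)
F = Fr + 1j * Fi

recon = np.real(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(F))))
print("reconstructed image shape:", recon.shape, "max value: %.3f" % recon.max())
```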

  12. Modified Splitting FDTD Methods for Two-Dimensional Maxwell’s Equations

    Directory of Open Access Journals (Sweden)

    Liping Gao

    2017-01-01

    Full Text Available In this paper, we develop a new method to reduce the error in the splitting finite-difference method of Maxwell’s equations. By this method two modified splitting FDTD methods (MS-FDTDI, MS-FDTDII for the two-dimensional Maxwell equations are proposed. It is shown that the two methods are second-order accurate in time and space and unconditionally stable by Fourier methods. By energy method, it is proved that MS-FDTDI is second-order convergent. By deriving the numerical dispersion (ND relations, we prove rigorously that MS-FDTDI has less ND errors than the ADI-FDTD method and the ND errors of ADI-FDTD are less than those of MS-FDTDII. Numerical experiments for computing ND errors and simulating a wave guide problem and a scattering problem are carried out and the efficiency of the MS-FDTDI and MS-FDTDII methods is confirmed.

  13. Two-dimensional differential transform method for solving linear and non-linear Schroedinger equations

    International Nuclear Information System (INIS)

    Ravi Kanth, A.S.V.; Aruna, K.

    2009-01-01

    In this paper, we propose a reliable algorithm to develop exact and approximate solutions for the linear and nonlinear Schroedinger equations. The approach rests mainly on the two-dimensional differential transform method, which is one of the approximate methods. The method can easily be applied to many linear and nonlinear problems and is capable of reducing the size of the computational work. Exact solutions can also be achieved from the known forms of the series solutions. Several illustrative examples are given to demonstrate the effectiveness of the present method.

  14. Development of a three dimensional circulation model based on fractional step method

    Directory of Open Access Journals (Sweden)

    Mazen Abualtayef

    2010-03-01

    Full Text Available A numerical model was developed for simulating three-dimensional multilayer hydrodynamics and thermodynamics in domains with irregular bottom topography. The model was designed for examining the interactions between flow and topography. It was based on the three-dimensional Navier-Stokes equations and was solved using the fractional step method, which combines the finite difference method in the horizontal plane and the finite element method in the vertical plane. The numerical techniques are described, and the model test and application are presented. In the application of the model to the northern part of the Ariake Sea, the hydrodynamic and thermodynamic results were predicted. The numerically predicted amplitudes and phase angles were in good agreement with the field observations.

  15. Multi-GPU accelerated three-dimensional FDTD method for electromagnetic simulation.

    Science.gov (United States)

    Nagaoka, Tomoaki; Watanabe, Soichi

    2011-01-01

    Numerical simulation with a numerical human model using the finite-difference time domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and realize large-scale computing with the numerical human model, we adapted a three-dimensional FDTD code to a multi-GPU environment using the Compute Unified Device Architecture (CUDA). In this study, we used NVIDIA Tesla C2070 boards as GPGPUs. The performance of the multi-GPU setup was evaluated in comparison with that of a single GPU and a vector supercomputer. The calculation speed with four GPUs was approximately 3.5 times faster than with a single GPU, and slightly (approx. 1.3 times) slower than with the supercomputer. The calculation speed of the three-dimensional FDTD method using GPUs can thus be improved significantly by increasing the number of GPUs.

  16. Comparison of preconditioned generalized conjugate gradient methods to two-dimensional neutron and photon transport equation

    International Nuclear Information System (INIS)

    Chen, G.S.

    1997-01-01

    In this paper, we apply and compare preconditioned generalized conjugate gradient methods for solving the linear systems that arise from the two-dimensional neutron and photon transport equation. Several subroutines are developed on the basis of preconditioned generalized conjugate gradient methods for the time-independent, two-dimensional neutron and photon transport equation of transport theory. The following generalized conjugate gradient methods are used: TFQMR (transpose-free quasi-minimal residual algorithm), CGS (conjugate gradient squared algorithm), Bi-CGSTAB (bi-conjugate gradient stabilized algorithm), and QMRCGSTAB (quasi-minimal residual variant of the bi-conjugate gradient stabilized algorithm). These subroutines are connected to the computer program DORT. Several problems are tested on a personal computer with an Intel Pentium CPU. (author)
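
    Two of the Krylov solvers named here are available in SciPy and can be compared on a generic sparse system, as sketched below; the 2D finite-difference Laplacian merely stands in for the discretized transport operator, and the ILU preconditioner is an assumption (TFQMR is also available as scipy.sparse.linalg.tfqmr in recent SciPy versions).

```python
# Hedged sketch: ILU-preconditioned Bi-CGSTAB and CGS on a generic sparse system
# (a 2D Laplacian used as a stand-in for the transport operator of the record).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 64
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsc()   # 2D Laplacian
b = np.ones(A.shape[0])

ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)            # ILU preconditioner

for name, solver in [("Bi-CGSTAB", spla.bicgstab), ("CGS", spla.cgs)]:
    iters = []
    x, info = solver(A, b, M=M, callback=lambda xk: iters.append(1))
    res = np.linalg.norm(b - A @ x)
    print("%-9s iterations=%3d  residual=%.2e" % (name, len(iters), res))
```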

  17. Improved data acquisition methods for uninterrupted signal monitoring and ultra-fast plasma diagnostics in LHD

    International Nuclear Information System (INIS)

    Nakanishi, Hideya; Imazu, Setsuo; Ohsuna, Masaki

    2012-01-01

    To deal with the endless data streams acquired in LHD steady-state experiments, the LHD data acquisition system was designed with a simple concept that divides a long pulse into a consecutive series of 10-s “subshots”. The latest digitizers based on high-speed PCI-Express technology, however, output nonstop data streams of gigabytes per second, for which 10-s subshots would be extremely long; these digitizers need shorter subshot intervals of less than 10 s. In contrast, steady-state fusion plants need uninterrupted monitoring of the environment and device soundness, and they adopt longer subshot lengths of either 10 min or 1 day. To cope with both uninterrupted monitoring and ultra-fast diagnostics, the ability to vary the subshot length according to the type of operation is required. In this study, a design modification that enables variable subshot lengths was implemented and its practical effectiveness in LHD was verified. (author)

  18. Three-dimensional forward modeling of DC resistivity using the aggregation-based algebraic multigrid method

    Science.gov (United States)

    Chen, Hui; Deng, Ju-Zhi; Yin, Min; Yin, Chang-Chun; Tang, Wen-Wu

    2017-03-01

    To speed up three-dimensional (3D) DC resistivity modeling, we present a new multigrid method, the aggregation-based algebraic multigrid method (AGMG). We first discretize the differential equation of the secondary potential field with mixed boundary conditions by using a seven-point finite-difference method to obtain a large sparse system of linear equations. Then, we introduce the theory behind the pairwise aggregation algorithms for AGMG and use the conjugate-gradient method with the V-cycle AGMG preconditioner (AGMG-CG) to solve the linear equations. We use typical geoelectrical models to test the proposed AGMG-CG method and compare the results with analytical solutions and the 3DDCXH algorithm for 3D DC modeling (3DDCXH). In addition, we apply the AGMG-CG method to different grid sizes and geoelectrical models and compare it to different iterative methods, such as ILU-BICGSTAB, ILU-GCR, and SSOR-CG. The AGMG-CG method yields nearly linearly decreasing errors, whereas the number of iterations increases slowly with increasing grid size. The AGMG-CG method is precise and converges fast, and thus can improve the computational efficiency in forward modeling of three-dimensional DC resistivity.
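
    For readers who want to experiment with algebraic-multigrid-preconditioned CG in Python, the third-party pyamg package provides a comparable preconditioner (smoothed aggregation rather than the pairwise aggregation of AGMG); the sketch below uses a 2D Poisson matrix as a placeholder for the discretized DC resistivity operator.

```python
# Sketch of AMG-preconditioned CG with pyamg (smoothed aggregation, related to
# but not identical to the pairwise-aggregation AGMG of the record).
import numpy as np
import pyamg                          # third-party package: pip install pyamg
from scipy.sparse.linalg import cg

A = pyamg.gallery.poisson((200, 200), format="csr")   # placeholder operator
b = np.random.rand(A.shape[0])

ml = pyamg.smoothed_aggregation_solver(A)              # build the AMG hierarchy
M = ml.aspreconditioner(cycle="V")                     # V-cycle preconditioner

residuals = []
x, info = cg(A, b, M=M,
             callback=lambda xk: residuals.append(np.linalg.norm(b - A @ xk)))
print("info =", info, "| iterations =", len(residuals),
      "| final residual = %.2e" % residuals[-1])
```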

  19. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    Directory of Open Access Journals (Sweden)

    Zekić-Sušac Marijana

    2014-09-01

    Full Text Available Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and post-processing stages. However, such a reduction usually provides less information and yields lower model accuracy. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing the entrepreneurial intentions of students with machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART classification trees, support vector machines, and k-nearest neighbour, on the same dataset, in order to compare their efficiency in terms of classification accuracy. The performance of each method was compared on ten subsamples in a 10-fold cross-validation procedure, and the sensitivity and specificity of each model were computed. Results: The artificial neural network model based on a multilayer perceptron yielded a higher classification rate than the models produced by the other methods. The pairwise t-test showed a statistically significant difference between the artificial neural network and the k-nearest neighbour model, while the differences among the other methods were not statistically significant. Conclusions: The tested machine learning methods are able to learn fast and achieve high classification accuracy. However, further improvement can be achieved by testing a few additional methodological refinements of the machine learning methods.
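
    The comparison design described here (the same data, several classifiers, 10-fold cross-validation) can be sketched with scikit-learn as below; the synthetic dataset and the hyperparameters are placeholders, not the study's survey data.

```python
# Illustrative sketch of the comparison design, not the authors' dataset:
# four classifiers evaluated with 10-fold cross-validation on the same data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=300, n_informative=30,
                           random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

models = {
    "MLP (neural network)": MLPClassifier(max_iter=2000, random_state=0),
    "CART tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(kernel="rbf"),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in models.items():
    scores = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=cv)
    print("%-22s accuracy = %.3f +/- %.3f" % (name, scores.mean(), scores.std()))
```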

  20. Upscaling permeability for three-dimensional fractured porous rocks with the multiple boundary method

    Science.gov (United States)

    Chen, Tao; Clauser, Christoph; Marquart, Gabriele; Willbrand, Karen; Hiller, Thomas

    2018-02-01

    Upscaling permeability of grid blocks is crucial for groundwater models. A novel upscaling method for three-dimensional fractured porous rocks is presented. The objective of the study was to compare this method with the commonly used Oda upscaling method and the volume averaging method. First, the multiple boundary method and its computational framework were defined for three-dimensional stochastic fracture networks. Then, the different upscaling methods were compared for a set of rotated fractures, for tortuous fractures, and for two discrete fracture networks. The results computed by the multiple boundary method are comparable with those of the other two methods and fit best the analytical solution for a set of rotated fractures. The errors in flow rate of the equivalent fracture model decrease when using the multiple boundary method. Furthermore, the errors of the equivalent fracture models increase from well-connected fracture networks to poorly connected ones. Finally, the diagonal components of the equivalent permeability tensors tend to follow a normal or log-normal distribution for the well-connected fracture network model with infinite fracture size. By contrast, they exhibit a power-law distribution for the poorly connected fracture network with multiple scale fractures. The study demonstrates the accuracy and the flexibility of the multiple boundary upscaling concept. This makes it attractive for being incorporated into any existing flow-based upscaling procedures, which helps in reducing the uncertainty of groundwater models.

  1. The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction.

    Directory of Open Access Journals (Sweden)

    Ross S Williamson

    2015-04-01

    Full Text Available Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID, uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
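
    The likelihood-based view described in this record can be illustrated with a minimal Poisson LNP fit: a single linear filter is estimated by maximizing the Poisson log-likelihood under an exponential nonlinearity. The simulated stimulus, the exponential nonlinearity, and the optimizer choice are assumptions for illustration; this is not the MID code of the paper.

```python
# Minimal LNP maximum-likelihood sketch: estimate a single stimulus filter by
# maximizing the Poisson log-likelihood with an exponential nonlinearity.
# Stimulus and spikes are simulated; this is not the authors' MID estimator.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_samples, dim = 5000, 20
X = rng.standard_normal((n_samples, dim))             # stimulus matrix
w_true = rng.standard_normal(dim)
y = rng.poisson(np.exp(0.3 * (X @ w_true) - 1.0))     # simulated spike counts

Xb = np.hstack([np.ones((n_samples, 1)), X])          # add a bias column

def neg_log_likelihood(p):
    """Poisson negative log-likelihood (up to a constant) for rate exp(Xb @ p)."""
    eta = Xb @ p
    return np.sum(np.exp(eta)) - np.sum(y * eta)

res = minimize(neg_log_likelihood, x0=np.zeros(dim + 1), method="L-BFGS-B")
w_hat = res.x[1:]
corr = np.corrcoef(w_hat, w_true)[0, 1]
print("correlation between estimated and true filter direction: %.3f" % corr)
```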

  2. Advanced numerical methods for three dimensional two-phase flow calculations

    Energy Technology Data Exchange (ETDEWEB)

    Toumi, I. [Laboratoire d'Etudes Thermiques des Reacteurs, Gif sur Yvette (France)]; Caruge, D. [Institut de Protection et de Surete Nucleaire, Fontenay aux Roses (France)]

    1997-07-01

    This paper is devoted to new numerical methods developed for both one- and three-dimensional two-phase flow calculations. These methods are finite volume numerical methods and are based on the use of approximate Riemann solver concepts to define convective fluxes versus mean cell quantities. The first part of the paper presents the numerical method for a one-dimensional hyperbolic two-fluid model including differential terms such as added mass and interface pressure. This numerical solution scheme makes use of the Riemann problem solution to define backward and forward differencing to approximate spatial derivatives. The construction of this approximate Riemann solver uses an extension of Roe's method that has been successfully used to solve gas dynamic equations. Since the two-fluid model is hyperbolic, this numerical method seems very efficient for the numerical solution of two-phase flow problems. The scheme was applied both to shock tube problems and to standard tests for two-fluid computer codes. The second part describes the numerical method in the three-dimensional case. The authors also discuss some improvements performed to obtain a fully implicit solution method that provides fast-running steady-state calculations. Such a scheme is now implemented in a thermal-hydraulic computer code devoted to 3-D steady-state and transient computations. Some results obtained for Pressurised Water Reactors concerning upper plenum calculations and a steady-state flow in the core with rod bow effect evaluation are presented. In practice these new numerical methods have proved to be stable on non-staggered grids and capable of generating accurate non-oscillating solutions for two-phase flow calculations.

  3. Advanced numerical methods for three dimensional two-phase flow calculations

    International Nuclear Information System (INIS)

    Toumi, I.; Caruge, D.

    1997-01-01

    This paper is devoted to new numerical methods developed for both one- and three-dimensional two-phase flow calculations. These methods are finite volume numerical methods and are based on the use of approximate Riemann solver concepts to define convective fluxes versus mean cell quantities. The first part of the paper presents the numerical method for a one-dimensional hyperbolic two-fluid model including differential terms such as added mass and interface pressure. This numerical solution scheme makes use of the Riemann problem solution to define backward and forward differencing to approximate spatial derivatives. The construction of this approximate Riemann solver uses an extension of Roe's method that has been successfully used to solve gas dynamic equations. Since the two-fluid model is hyperbolic, this numerical method seems very efficient for the numerical solution of two-phase flow problems. The scheme was applied both to shock tube problems and to standard tests for two-fluid computer codes. The second part describes the numerical method in the three-dimensional case. The authors also discuss some improvements performed to obtain a fully implicit solution method that provides fast-running steady-state calculations. Such a scheme is now implemented in a thermal-hydraulic computer code devoted to 3-D steady-state and transient computations. Some results obtained for Pressurised Water Reactors concerning upper plenum calculations and a steady-state flow in the core with rod bow effect evaluation are presented. In practice these new numerical methods have proved to be stable on non-staggered grids and capable of generating accurate non-oscillating solutions for two-phase flow calculations.

  4. Two-Dimensional Space-Time Dependent Multi-group Diffusion Equation with SLOR Method

    International Nuclear Information System (INIS)

    Yulianti, Y.; Su'ud, Z.; Waris, A.; Khotimah, S. N.

    2010-01-01

    Research on two-dimensional space-time diffusion equations with the SLOR (Successive-Line Over-Relaxation) method has been carried out. The SLOR method was chosen because it is an iterative method that does not require the whole coefficient matrix to be constructed. The research is divided into two cases, a homogeneous case and a heterogeneous case. In the homogeneous case, a step reactivity was inserted; in the heterogeneous case, step and ramp reactivities were inserted. In general, the simulation results are in agreement, although there are differences at some points.
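
    A single line-relaxation sweep of the kind used in SLOR can be sketched on a generic 2D Poisson-type problem as below: each grid line is solved as a tridiagonal system and then over-relaxed. The test problem, boundary conditions, and relaxation factor are assumptions; this is not the multigroup diffusion solver of the record.

```python
# Sketch of SLOR (line successive over-relaxation) on a generic 2D Poisson-type
# problem with zero Dirichlet boundaries; not the multigroup diffusion code of
# the record. Each line update is a tridiagonal solve followed by over-relaxation.
import numpy as np
from scipy.linalg import solve_banded

n, h, omega = 64, 1.0 / 65, 1.6
phi = np.zeros((n, n))
rhs = np.ones((n, n)) * h * h                # source term times h^2

band = np.zeros((3, n))                      # tridiagonal (-1, 4, -1) line operator
band[0, 1:] = -1.0                           # super-diagonal
band[1, :] = 4.0                             # main diagonal
band[2, :-1] = -1.0                          # sub-diagonal

def slor_sweep(phi):
    new = phi.copy()
    for j in range(n):                       # sweep column lines left to right
        left = new[:, j - 1] if j > 0 else 0.0
        right = phi[:, j + 1] if j < n - 1 else 0.0
        line = solve_banded((1, 1), band, rhs[:, j] + left + right)
        new[:, j] = phi[:, j] + omega * (line - phi[:, j])
    return new

for it in range(2000):
    phi_new = slor_sweep(phi)
    converged = np.max(np.abs(phi_new - phi)) < 1e-8
    phi = phi_new
    if converged:
        break
print("SLOR stopped after", it + 1, "sweeps; peak value = %.5f" % phi.max())
```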

  5. Application of the Green's function method for 2- and 3-dimensional steady transonic flows

    Science.gov (United States)

    Tseng, K.

    1984-01-01

    A time-domain Green's function method for the nonlinear, time-dependent, three-dimensional aerodynamic potential equation is presented. Green's theorem is used to transform the partial differential equation into an integro-differential-delay equation. Finite-element and finite-difference methods are employed for the spatial and time discretizations to approximate the integral equation by a system of differential-delay equations. The solution is obtained by solving this nonlinear simultaneous system of equations in time. This paper discusses the application of the method to the Transonic Small Disturbance Equation, and numerical results for lifting and nonlifting airfoils and wings in steady flows are presented.

  6. Two-dimensional parasitic capacitance extraction for integrated circuit with dual discrete geometric methods

    International Nuclear Information System (INIS)

    Ren Dan; Ren Zhuoxiang; Qu Hui; Xu Xiaoyu

    2015-01-01

    Capacitance extraction is one of the key issues in integrated circuits and is also a typical electrostatic problem. The dual discrete geometric method (DGM) is investigated to provide the corresponding solutions on two-dimensional unstructured meshes. Its energy-complementary characteristic and the quick field-energy computation based on it are emphasized. A contrastive analysis between the dual finite element methods and the dual DGMs is presented both from theoretical derivation and through case studies. The DGM, taking the scalar potential as the unknown on dual interlocked meshes, with its simple form and good accuracy, is expected to become one of the mainstream methods in the associated areas. (paper)

  7. Three-Dimensional Phase Field Simulations of Hysteresis and Butterfly Loops by the Finite Volume Method

    International Nuclear Information System (INIS)

    Xi Li-Ying; Chen Huan-Ming; Zheng Fu; Gao Hua; Tong Yang; Ma Zhi

    2015-01-01

    Three-dimensional simulations of ferroelectric hysteresis and butterfly loops are carried out by solving the time-dependent Ginzburg–Landau equations with a finite volume method. The influence of external mechanical loading with a tensile strain and a compressive strain on the hysteresis and butterfly loops is studied numerically. Unlike the traditional finite element and finite difference methods, the finite volume method is applicable to simulating the ferroelectric phase transitions and properties of ferroelectric materials even for more realistic and physical problems. (paper)

  8. An improved method for computer generation of three-dimensional digital holography

    International Nuclear Information System (INIS)

    Hu, Yanlei; Chen, Yuhang; Li, Jiawen; Huang, Wenhao; Chu, Jiaru; Ma, Jianqiang

    2013-01-01

    A novel method is proposed for designing optimized three-dimensional computer-generated holograms (CGHs). A series of spherical wave factors are introduced into the conventional optimal rotation angle (ORA) algorithm to achieve a varying amount of defocus along the optical axis, and the distraction terms are minimized during the iterative process. Both numerical simulation and experimental reconstructions are presented to demonstrate that this method is able to yield excellent multilayer patterns with high uniformity and signal-to-noise ratio (SNR). This method is significant for applications in laser 3D printing and multilayer data recording. (paper)

  9. A new method for three-dimensional laparoscopic ultrasound model reconstruction

    DEFF Research Database (Denmark)

    Fristrup, C W; Pless, T; Durup, J

    2004-01-01

    BACKGROUND: Laparoscopic ultrasound is an important modality in the staging of gastrointestinal tumors. Correct staging depends on good spatial understanding of the regional tumor infiltration. Three-dimensional (3D) models may facilitate the evaluation of tumor infiltration. The aim of the study...... accuracy of the new method was tested ex vivo, and the clinical feasibility was tested on a small series of patients. RESULTS: Both electromagnetic tracked reconstructions and the new 3D method gave good volumetric information with no significant difference. Clinical use of the new 3D method showed...

  10. The discrete cones method for two-dimensional neutron transport calculations

    International Nuclear Information System (INIS)

    Watanabe, Y.; Maynard, C.W.

    1986-01-01

    A novel method, the discrete cones method (DC/sub N/), is proposed as an alternative to the discrete ordinates method (S/sub N/) for solutions of the two-dimensional neutron transport equation. The new method utilizes a new concept, discrete cones, which are made by partitioning the unit spherical surface covered by the direction vectors of the particles. In this method, particles in a cone are simultaneously traced instead of those in discrete directions, so that an anomaly of the S/sub N/ method, the ray effect, can be eliminated. The DC/sub N/ method has been formulated for X-Y geometry and a program has been created by modifying the standard S/sub N/ program TWOTRAN-II. Our sample calculations demonstrate a strong mitigation of the ray effects without a computing cost penalty.

  11. The discrete cones methods for two-dimensional neutral particle transport problems with voids

    International Nuclear Information System (INIS)

    Watanabe, Y.; Maynard, C.W.

    1983-01-01

    One of the most widely applied deterministic methods for time-independent, two-dimensional neutron transport calculations is the discrete ordinates method (DSN). The DSN solution, however, fails to be accurate in a void due to the ray effect. In order to circumvent this drawback, the authors have been developing a novel approximation: the discrete cones method (DCN), in which a group of particles in a cone are simultaneously traced instead of particles in discrete directions as in the DSN method. Programs which apply the DSN method in non-vacuum regions and the DCN method in voids have been written for transport calculations in X-Y coordinates. The solutions for test problems demonstrate mitigation of the ray effect in voids without losing the computational efficiency of the DSN method.

  12. Method for the manufacture of a thin-layer battery stack on a three-dimensional substrate

    NARCIS (Netherlands)

    2008-01-01

    The invention relates to a method for the manufacture of a thin-layer battery stack on a three-dimensional substrate. The invention further relates to a thin-layer battery stack on a three-dimensional substrate obtainable by such a method. Moreover, the invention relates to a device comprising such

  13. Development of the method for the dimensional measurement of the HANARO nuclear fuel

    International Nuclear Information System (INIS)

    Kim, Tae Yeon; Lee, K. S.; Park, D. G.; Choo, Y. S.; Ahn, S. B.

    1998-06-01

    The dimensions of nuclear fuel change in the reactor because of neutron exposure in high-pressure water. If the deformation is too large, severe problems for the safety of the fuel and the reactor arise. Therefore, accurate dimensional data on the diameter and length of the nuclear fuel are very important for fuel design and for the assessment of nuclear safety. The diameter of a dummy HANARO fuel rod, which was not filled with real fuel material, was measured in the hot cell, and the lengths of the HANARO fuel assembly and rod were also measured. A dimensional measuring method for the HANARO fuel was thus developed. The test results show that the method is good enough to distinguish a change in volume with a statistical uncertainty of 0.6 %. (author). 2 refs., 7 tabs., 20 figs

  14. Moment-based method for computing the two-dimensional discrete Hartley transform

    Science.gov (United States)

    Dong, Zhifang; Wu, Jiasong; Shu, Huazhong

    2009-10-01

    In this paper, we present a fast algorithm for computing the two-dimensional (2-D) discrete Hartley transform (DHT). By using kernel transform and Taylor expansion, the 2-D DHT is approximated by a linear sum of 2-D geometric moments. This enables us to use the fast algorithms developed for computing the 2-D moments to efficiently calculate the 2-D DHT. The proposed method achieves a simple computational structure and is suitable to deal with any sequence lengths.
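
    For reference, the transform being computed can be written directly; the brute-force implementation below (checked against an FFT identity) defines the 2-D DHT but does not reproduce the moment-based fast algorithm of the record.

```python
# Reference (direct) implementation of the 2-D discrete Hartley transform,
# cross-checked against the FFT identity DHT = Re(FFT) - Im(FFT).
import numpy as np

def dht2(f):
    """Direct 2-D DHT: H(u,v) = sum_x sum_y f(x,y) * cas(2*pi*(u*x/M + v*y/N)),
    where cas(x) = cos(x) + sin(x)."""
    M, N = f.shape
    x = np.arange(M)[:, None]
    y = np.arange(N)[None, :]
    H = np.empty((M, N), dtype=float)
    for u in range(M):
        for v in range(N):
            arg = 2.0 * np.pi * (u * x / M + v * y / N)
            H[u, v] = np.sum(f * (np.cos(arg) + np.sin(arg)))
    return H

f = np.random.rand(8, 8)
H = dht2(f)
F = np.fft.fft2(f)
print("max deviation from FFT-based DHT:", np.max(np.abs(H - (F.real - F.imag))))
```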

  15. Optimal Layout Design using the Element Connectivity Parameterization Method: Application to Three Dimensional Geometrical Nonlinear Structures

    DEFF Research Database (Denmark)

    Yoon, Gil Ho; Joung, Young Soo; Kim, Yoon Young

    2005-01-01

    The topology design optimization of “three-dimensional geometrically-nonlinear” continuum structures is still a difficult problem not only because of its problem size but also the occurrence of unstable continuum finite elements during the design optimization. To overcome this difficulty, the ele......) stiffness matrix of continuum finite elements. Therefore, any finite element code, including commercial codes, can be readily used for the ECP implementation. The key ideas and characteristics of these methods will be presented in this paper....

  16. Soliton solutions of the two-dimensional KdV-Burgers equation by homotopy perturbation method

    International Nuclear Information System (INIS)

    Molabahrami, A.; Khani, F.; Hamedi-Nezhad, S.

    2007-01-01

    In this Letter, He's homotopy perturbation method (HPM) was applied to find soliton solutions of the two-dimensional Korteweg-de Vries-Burgers equation (tdKdVB) for given initial conditions. Numerical solutions of the equation were obtained. The obtained solutions show remarkable accuracy in comparison with the exact solutions. The results reveal that the HPM is very effective and simple.

  17. Can We Train Machine Learning Methods to Outperform the High-dimensional Propensity Score Algorithm?

    Science.gov (United States)

    Karim, Mohammad Ehsanul; Pang, Menglan; Platt, Robert W

    2018-03-01

    The use of retrospective health care claims datasets is frequently criticized for the lack of complete information on potential confounders. Utilizing patient's health status-related information from claims datasets as surrogates or proxies for mismeasured and unobserved confounders, the high-dimensional propensity score algorithm enables us to reduce bias. Using a previously published cohort study of postmyocardial infarction statin use (1998-2012), we compare the performance of the algorithm with a number of popular machine learning approaches for confounder selection in high-dimensional covariate spaces: random forest, least absolute shrinkage and selection operator, and elastic net. Our results suggest that, when the data analysis is done with epidemiologic principles in mind, machine learning methods perform as well as the high-dimensional propensity score algorithm. Using a plasmode framework that mimicked the empirical data, we also showed that a hybrid of machine learning and high-dimensional propensity score algorithms generally perform slightly better than both in terms of mean squared error, when a bias-based analysis is used.
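
    The kind of machine-learning alternative compared here can be illustrated with a small simulated example: an L1-penalized logistic regression selects proxy covariates for the propensity score, which is then used for inverse-probability weighting. The simulated data, penalty strength, and estimator below are illustrative assumptions, not the hdPS algorithm or the study's cohort.

```python
# Hedged sketch of a lasso-type propensity-score workflow on simulated data;
# not the high-dimensional propensity score algorithm itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, p = 2000, 200
X = rng.standard_normal((n, p))                        # claims-based proxy covariates
treat = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, :5].sum(axis=1))))
outcome = 2.0 * treat + X[:, :5].sum(axis=1) + rng.standard_normal(n)

ps_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
ps = ps_model.fit(X, treat).predict_proba(X)[:, 1]

weights = treat / ps + (1 - treat) / (1 - ps)          # IPTW weights
ate = (np.average(outcome[treat == 1], weights=weights[treat == 1])
       - np.average(outcome[treat == 0], weights=weights[treat == 0]))
print("selected covariates:", int(np.sum(ps_model.coef_ != 0)),
      "| weighted treatment-effect estimate: %.2f" % ate)
```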

  18. A METHOD OF THE MINIMIZING OF THE TOTAL ACQUISITIONS COST WITH THE INCREASING VARIABLE DEMAND

    Directory of Open Access Journals (Sweden)

    ELEONORA IONELA FOCȘAN

    2015-12-01

    Full Text Available Over time, mankind has tried to find different ways to reduce costs. This subject, which we face ever more often nowadays, has been studied in detail without arriving at a model that is both general and efficient for cost reduction. Cost reduction brings a number of benefits to an entity, the most important being increased revenue and hence profit, increased productivity, a higher level of services and products offered to clients, and, last but not least, mitigation of the risk of economic deficit. Therefore, each entity searches for different ways to obtain the most benefits, so that the company can succeed in a competitive market. This article supports companies by presenting a new way of minimizing the total cost of acquisitions: it formulates some hypotheses about increasing variable demand, proves them, and develops formulas for reducing costs. The hypotheses presented in the model described below can be fully exploited to obtain new models for reducing the total cost, according to the purchasing modes of the entities that adopt it.

  19. Acquisition and deconvolution of seismic signals by different methods to perform direct ground-force measurements

    Science.gov (United States)

    Poletto, Flavio; Schleifer, Andrea; Zgauc, Franco; Meneghini, Fabio; Petronio, Lorenzo

    2016-12-01

    We present the results of a novel borehole-seismic experiment in which we used different types of onshore transient-impulsive and non-impulsive surface sources together with direct ground-force recordings. The ground-force signals were obtained by baseplate load cells located beneath the sources, and by buried soil-stress sensors installed in the very shallow subsurface together with accelerometers. The aim was to characterize the sources' emission by their complex impedance, a function of the near-field vibrations and soil stress components, and above all to obtain appropriate deconvolution operators to remove the signature of the sources from the far-field seismic signals. The data analysis shows the differences among the reference measurements utilized to deconvolve the source signature. As downgoing waves, we process the signals of vertical seismic profiles (VSP) recorded in the far-field approximation by an array of permanent geophones cemented at shallow-to-medium depth outside the casing of an instrumented well. We obtain a significant improvement in the waveform of the radiated seismic-vibrator signals deconvolved by ground force, similar to that of the seismograms generated by the impulsive sources, and demonstrate that the results obtained with the different sources present low values of the repeatability norm. The comparison shows the potential of the direct ground-force measurement approach to effectively remove the far-field source signature in onshore VSP data and to increase the performance of permanent acquisition installations for time-lapse applications.

  20. Integration of relational and textual biomedical sources. A pilot experiment using a semi-automated method for logical schema acquisition.

    Science.gov (United States)

    García-Remesal, M; Maojo, V; Billhardt, H; Crespo, J

    2010-01-01

    Bringing together structured and text-based sources is an exciting challenge for biomedical informaticians, since most relevant biomedical sources belong to one of these categories. In this paper we evaluate the feasibility of integrating relational and text-based biomedical sources using: i) an original logical schema acquisition method for textual databases developed by the authors, and ii) OntoFusion, a system originally designed by the authors for the integration of relational sources. We conducted an integration experiment involving a test set of seven differently structured sources covering the domain of genetic diseases. We used our logical schema acquisition method to generate schemas for all textual sources. The sources were integrated using the methods and tools provided by OntoFusion. The integration was validated using a test set of 500 queries. A panel of experts answered a questionnaire to evaluate i) the quality of the extracted schemas, ii) the query processing performance of the integrated set of sources, and iii) the relevance of the retrieved results. The results of the survey show that our method extracts coherent and representative logical schemas. Experts' feedback on the performance of the integrated system and the relevance of the retrieved results was also positive. Regarding the validation of the integration, the system successfully provided correct results for all queries in the test set. The results of the experiment suggest that text-based sources including a logical schema can be regarded as equivalent to structured databases. Using our method, previous research and existing tools designed for the integration of structured databases can be reused - possibly subject to minor modifications - to integrate differently structured sources.

  1. Brooks–Corey Modeling by One-Dimensional Vertical Infiltration Method

    Directory of Open Access Journals (Sweden)

    Xuguang Xing

    2018-05-01

    Full Text Available The laboratory methods used for soil water retention curve (SWRC) construction and parameter estimation are time-consuming. A vertical infiltration method was therefore proposed to estimate the parameters α and n and to further construct the SWRC. In the present study, relationships describing the cumulative infiltration and the infiltration rate as functions of the depth of the wetting front were established, and simplified expressions for estimating the α and n parameters were proposed. One-dimensional vertical infiltration experiments on four soils were conducted to verify whether the proposed method accurately estimates α and n. The fitted values of α and n, obtained from the RETC software, were consistent with the calculated values obtained from the infiltration method. The comparison between the measured SWRCs obtained from the centrifuge method and the calculated SWRCs based on the infiltration method yielded small values of root mean square error (RMSE), mean absolute percentage error (MAPE), and mean absolute error. SWMS_2D-based simulations of cumulative infiltration, based on the calculated α and n, remained consistent with the measured values, with small RMSE and MAPE values. The experiments verified the proposed one-dimensional vertical infiltration method, which has applications in field hydraulic parameter estimation.
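
    The final fitting and error-metric step can be sketched as follows for a Brooks-Corey-type retention curve, Se = (αh)^(-n) for αh > 1 and Se = 1 otherwise; the retention-model form, the synthetic data points, and the initial guesses are assumptions, not the paper's measured soils or derived infiltration formulas.

```python
# Hedged sketch: fit a Brooks-Corey-type retention curve to placeholder data
# and report the RMSE and MAPE metrics mentioned in the record.
import numpy as np
from scipy.optimize import curve_fit

def brooks_corey(h, alpha, n):
    """Effective saturation Se(h) = (alpha*h)^(-n) for alpha*h > 1, else 1."""
    return np.where(alpha * h > 1.0, (alpha * h) ** (-n), 1.0)

h = np.array([5., 10., 30., 60., 100., 300., 600., 1000.])       # suction head, cm
se_meas = np.array([1.0, 0.95, 0.70, 0.52, 0.41, 0.25, 0.19, 0.15])

popt, _ = curve_fit(brooks_corey, h, se_meas, p0=[0.05, 0.5],
                    bounds=([1e-4, 0.01], [1.0, 5.0]))
se_fit = brooks_corey(h, *popt)

rmse = np.sqrt(np.mean((se_fit - se_meas) ** 2))
mape = 100.0 * np.mean(np.abs((se_fit - se_meas) / se_meas))
print("alpha = %.4f 1/cm, n = %.3f" % tuple(popt))
print("RMSE = %.4f, MAPE = %.1f%%" % (rmse, mape))
```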

  2. Advanced numerical methods for three dimensional two-phase flow calculations in PWR

    International Nuclear Information System (INIS)

    Toumi, I.; Gallo, D.; Royer, E.

    1997-01-01

    This paper is devoted to new numerical methods developed for three dimensional two-phase flow calculations. These methods are finite volume numerical methods. They are based on an extension of Roe's approximate Riemann solver to define convective fluxes versus mean cell quantities. To go forward in time, a linearized conservative implicit integrating step is used, together with a Newton iterative method. We also present here some improvements performed to obtain a fully implicit solution method that provides fast running steady state calculations. This kind of numerical method, which is widely used for fluid dynamic calculations, is proved to be very efficient for the numerical solution to two-phase flow problems. This numerical method has been implemented for the three dimensional thermal-hydraulic code FLICA-4 which is mainly dedicated to core thermal-hydraulic transient and steady-state analysis. Hereafter, we will also find some results obtained for the EPR reactor running in a steady-state at 60% of nominal power with 3 pumps out of 4, and a thermal-hydraulic core analysis for a 1300 MW PWR at low flow steam-line-break conditions. (author)

  3. Measurement of cardiac ventricular volumes using multidetector row computed tomography: comparison of two- and three-dimensional methods

    International Nuclear Information System (INIS)

    Montaudon, M.; Laffon, E.; Berger, P.; Corneloup, O.; Latrabe, V.; Laurent, F.

    2006-01-01

    This study compared a three-dimensional volumetric threshold-based method to a two-dimensional Simpson's-rule-based short-axis multiplanar method for measuring right (RV) and left ventricular (LV) volumes, stroke volumes, and ejection fraction using electrocardiography-gated multidetector computed tomography (MDCT) data sets. End-diastolic volume (EDV) and end-systolic volume (ESV) of RV and LV were measured independently and blindly by two observers from contrast-enhanced MDCT images using commercial software in 18 patients. For RV and LV, the three-dimensionally calculated EDV and ESV values were smaller than those provided by the two-dimensional short-axis method (10%, 5%, 15% and 26% differences, respectively). Agreement between the two methods was found for LV (EDV/ESV: r=0.974/0.910, ICC=0.905/0.890) but not for RV (r=0.882/0.930, ICC=0.663/0.544). Measurement errors were significant only for EDV of LV using the two-dimensional method. Similar reproducibility was found for LV measurements, but the three-dimensional method provided greater reproducibility for RV measurements than the two-dimensional method. The threshold-based three-dimensional method provides reproducible cardiac ventricular volume measurements, comparable to those obtained using the short-axis Simpson-based method. (orig.)
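    The two-dimensional reference technique in this record is a disc-summation (Simpson-based) estimate; a minimal hedged sketch of that idea and of the derived stroke volume and ejection fraction follows, with purely hypothetical slice areas and spacing.

```python
# Hedged sketch of the disc-summation (Simpson-based) volume estimate and the
# derived functional indices; slice areas and spacing are hypothetical values.
def ventricular_volume(slice_areas_mm2, slice_thickness_mm):
    """Sum of (area x thickness) over contiguous short-axis slices, in mL."""
    return sum(a * slice_thickness_mm for a in slice_areas_mm2) / 1000.0  # mm^3 -> mL

edv = ventricular_volume([900, 1400, 1600, 1500, 1100, 600], 8.0)  # end-diastole
esv = ventricular_volume([500, 800, 950, 900, 600, 300], 8.0)      # end-systole
stroke_volume = edv - esv
ejection_fraction = 100.0 * stroke_volume / edv
print(f"EDV={edv:.1f} mL  ESV={esv:.1f} mL  SV={stroke_volume:.1f} mL  EF={ejection_fraction:.1f}%")
```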

  4. Modulating functions-based method for parameters and source estimation in one-dimensional partial differential equations

    KAUST Repository

    Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem

    2016-01-01

    In this paper, a modulating functions-based method is proposed for estimating space–time-dependent unknowns in one-dimensional partial differential equations. The proposed method simplifies the problem into a system of algebraic equations linear

  5. Simulation of Thermal Stratification in BWR Suppression Pools with One Dimensional Modeling Method

    Energy Technology Data Exchange (ETDEWEB)

    Haihua Zhao; Ling Zou; Hongbin Zhang

    2014-01-01

    The suppression pool in a boiling water reactor (BWR) plant not only is the major heat sink within the containment system, but also provides the major emergency cooling water for the reactor core. In several accident scenarios, such as a loss-of-coolant accident and extended station blackout, thermal stratification tends to form in the pool after the initial rapid venting stage. Accurately predicting the pool stratification phenomenon is important because it affects the peak containment pressure; the pool temperature distribution also affects the NPSHa (available net positive suction head) and therefore the performance of the Emergency Core Cooling System and Reactor Core Isolation Cooling System pumps that draw cooling water back to the core. Current safety analysis codes use zero dimensional (0-D) lumped parameter models to calculate the energy and mass balance in the pool; therefore, they have large uncertainties in the prediction of scenarios in which stratification and mixing are important. While three-dimensional (3-D) computational fluid dynamics (CFD) methods can be used to analyze realistic 3-D configurations, these methods normally require very fine grid resolution to resolve thin substructures such as jets and wall boundaries, resulting in a long simulation time. For mixing in stably stratified large enclosures, the BMIX++ code (Berkeley mechanistic MIXing code in C++) has been developed to implement a highly efficient analysis method for stratification where the ambient fluid volume is represented by one-dimensional (1-D) transient partial differential equations and substructures (such as free or wall jets) are modeled with 1-D integral models. This allows very large reductions in computational effort compared to multi-dimensional CFD modeling. One heat-up experiment performed at the Finland POOLEX facility, which was designed to study phenomena relevant to Nordic design BWR suppression pool including thermal stratification and mixing, is used for

  6. Sensor assembly method using silicon interposer with trenches for three-dimensional binocular range sensors

    Science.gov (United States)

    Nakajima, Kazuhiro; Yamamoto, Yuji; Arima, Yutaka

    2018-04-01

    To easily assemble a three-dimensional binocular range sensor, we devised an alignment method for two image sensors using a silicon interposer with trenches. The trenches were formed using deep reactive ion etching (RIE) equipment. We produced a three-dimensional (3D) range sensor using the method and experimentally confirmed that sufficient alignment accuracy was realized. It was confirmed that the alignment accuracy of the two image sensors when using the proposed method is more than twice that of the alignment assembly method on a conventional board. In addition, as a result of evaluating the deterioration of the detection performance caused by the alignment accuracy, it was confirmed that the vertical deviation between the corresponding pixels in the two image sensors is substantially proportional to the decrease in detection performance. Therefore, we confirmed that the proposed method can realize more than twice the detection performance of the conventional method. Through these evaluations, the effectiveness of the 3D binocular range sensor aligned by the silicon interposer with the trenches was confirmed.

  7. Kernel based methods for accelerated failure time model with ultra-high dimensional data

    Directory of Open Access Journals (Sweden)

    Jiang Feng

    2010-12-01

    Full Text Available Abstract Background Most genomic data have ultra-high dimensions with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalty have been extensively studied in survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high (m > 10,000) dimensional data is time-consuming and not computationally efficient. In current microarray analysis, what people really do is select a couple of thousands (or hundreds) of genes using univariate analysis or statistical tests, and then apply a LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and lead us to miss biologically important genes. Results The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel-based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and the dual problem with a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited. Conclusions Our proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes with ultra-high dimensional genomic data. We have demonstrated the performance of our methods with both simulation and real data. In these limited computational studies, the proposed method performed very well.
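    The computational point of the record, that the dual problem involves only an n × n kernel matrix even when m is huge, can be illustrated with plain kernel ridge regression. The sketch below is generic textbook code under a Gaussian-kernel assumption, not the authors' adaptive kernel AFT procedure, and the data are random placeholders.

```python
# A minimal sketch of plain kernel ridge regression in its dual form, which
# illustrates why only an n x n matrix is needed when n << m (genes). This is
# generic textbook code, not the authors' adaptive kernel AFT procedure.
import numpy as np

def gaussian_kernel(X, Z, gamma=1e-3):
    """K[i, j] = exp(-gamma * ||x_i - z_j||^2)."""
    d2 = (X ** 2).sum(1)[:, None] + (Z ** 2).sum(1)[None, :] - 2.0 * X @ Z.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def fit_dual(X, y, lam=1.0, gamma=1e-3):
    """Dual coefficients alpha = (K + lam*I)^-1 y; K is only n x n."""
    K = gaussian_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def predict(X_train, alpha, X_new, gamma=1e-3):
    return gaussian_kernel(X_new, X_train, gamma) @ alpha

# Hypothetical data: n = 50 samples, m = 10000 "genes".
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10000))
y = rng.standard_normal(50)            # e.g. log survival times in an AFT model
alpha = fit_dual(X, y)
print(predict(X, alpha, X[:3]))
```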

  8. Comparison of preconditioned generalized conjugate gradient methods to two-dimensional neutron and photon transport equation

    International Nuclear Information System (INIS)

    Chen, G.S.; Yang, D.Y.

    1998-01-01

    In this paper we apply and compare preconditioned generalized conjugate gradient methods for solving the linear systems that arise from the two-dimensional neutron and photon transport equation. Several subroutines are developed on the basis of preconditioned generalized conjugate gradient methods for the time-independent, two-dimensional neutron and photon transport equation of transport theory. The following generalized conjugate gradient methods are used: TFQMR (transpose-free quasi-minimal residual algorithm), CGS (conjugate gradient squared algorithm), Bi-CGSTAB (bi-conjugate gradient stabilized algorithm) and QMRCGSTAB (quasi-minimal residual variant of the bi-conjugate gradient stabilized algorithm). These subroutines are connected to the computer program DORT. Several problems are tested on a personal computer with an Intel Pentium CPU. The generalized conjugate gradient methods were chosen because they have better residual (equivalent to error) control procedures during the computation and better convergence rates. The pointwise incomplete LU factorization (ILU), modified pointwise incomplete LU factorization (MILU), block incomplete factorization (BILU) and modified blockwise incomplete LU factorization (MBILU) are the preconditioning techniques used in the test problems. Among the Bi-CGSTAB, CGS, TFQMR and QMRCGSTAB methods, we find that either CGS or Bi-CGSTAB combined with the MBILU preconditioner is the most efficient algorithm for these test problems. The numerical flux solutions obtained with the preconditioned CGS and Bi-CGSTAB methods agree with those obtained on a Cray computer by either the point successive relaxation method or the line successive relaxation method combined with Gaussian elimination.
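    For readers who want to experiment with the same family of solvers, the hedged sketch below runs an ILU-preconditioned Bi-CGSTAB iteration with SciPy on a hypothetical five-point Laplacian standing in for the transport system matrix; it only illustrates the solver/preconditioner pairing and is not the DORT-linked subroutines described above.

```python
# Illustrative sketch only: an ILU-preconditioned Bi-CGSTAB solve with SciPy.
# The 2D five-point Laplacian below is a hypothetical stand-in for the
# transport system matrix.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 64                                        # grid points per dimension
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()   # five-point Laplacian, (n*n) x (n*n)
b = np.ones(A.shape[0])

ilu = spla.spilu(A, drop_tol=1e-4)                   # incomplete LU factorization
M = spla.LinearOperator(A.shape, matvec=ilu.solve)   # preconditioner M ~ A^-1

x, info = spla.bicgstab(A, b, M=M)
print("converged" if info == 0 else f"info={info}",
      "residual:", np.linalg.norm(b - A @ x))
```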

  9. Covariance Method of the Tunneling Radiation from High Dimensional Rotating Black Holes

    Science.gov (United States)

    Li, Hui-Ling; Han, Yi-Wen; Chen, Shuai-Ru; Ding, Cong

    2018-04-01

    In this paper, the Angheben-Nadalini-Vanzo-Zerbini (ANVZ) covariance method is used to study the tunneling radiation from the Kerr-Gödel black hole and the Myers-Perry black hole with two independent angular momenta. By solving the Hamilton-Jacobi equation and separating the variables, the radial equation of motion of a tunneling particle is obtained. Using the near-horizon approximation and the proper spatial distance, we calculate the tunneling rate and the Hawking radiation temperature. Thus, the ANVZ covariance method is extended to the study of tunneling radiation from higher-dimensional black holes.
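    The schematic relations below are the ones generally used in Hamilton-Jacobi tunneling calculations: the emission rate is governed by the imaginary part of the radial action, and matching it to a Boltzmann factor defines the Hawking temperature. They are quoted only as background and are not the paper's specific Kerr-Gödel or Myers-Perry expressions.

```latex
% Schematic relations underlying Hamilton-Jacobi tunneling calculations in
% general (not this paper's specific Kerr-Godel / Myers-Perry results):
\Gamma \;\sim\; \exp\!\left(-2\,\mathrm{Im}\,I\right)
\;\equiv\; \exp\!\left(-\frac{E}{T_{H}}\right)
\quad\Longrightarrow\quad
T_{H} \;=\; \frac{E}{2\,\mathrm{Im}\,I}.
```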

  10. Numerical method for solving the three-dimensional time-dependent neutron diffusion equation

    International Nuclear Information System (INIS)

    Khaled, S.M.; Szatmary, Z.

    2005-01-01

    A numerical time-implicit method has been developed for solving the coupled three-dimensional time-dependent multi-group neutron diffusion and delayed neutron precursor equations. The numerical stability of the implicit computation scheme and the convergence of the associated iterative processes have been evaluated. The computational scheme requires the solution of large linear systems at each time step. For this purpose, the point over-relaxation Gauss-Seidel method was chosen. A new scheme was introduced instead of the usual source iteration scheme. (author)
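    A generic point successive over-relaxation (SOR) sweep of the kind referred to above is sketched below; the small test system is hypothetical and the code is not taken from the work itself.

```python
# Generic point successive over-relaxation (SOR) iteration; the 3x3 system is
# a hypothetical example, not reactor data.
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Solve A x = b with point over-relaxation Gauss-Seidel (SOR)."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            # already-updated values to the left, previous-sweep values to the right
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [1.0, 2.0, 3.0]
print(sor_solve(A, b))
```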

  11. Modification of equivalent charge method for the Roben three-dimensional problem in electrostatics

    International Nuclear Information System (INIS)

    Barsukov, A.B.; Surenskij, A.V.

    1989-01-01

    An approach to solving the Roben problem for calculating the potential of the intermediate electrode of an accelerating structure with HFQ focusing is considered. The solution is constructed on the basis of a variational formulation of the equivalent charge method, in which the electrostatic problem is reduced to equations for the root-mean-square residuals on the system's conductors. The presented technique permits efficient solution of three-dimensional electrostatic problems for geometrically rather complicated electrode systems. The processing time is comparable with that of integral equation methods. 5 refs.; 2 figs
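    In the same spirit, a generic charge-simulation fit can be sketched as a least-squares problem: fictitious point charges inside a conductor are chosen so that the boundary potential matches a prescribed value. The geometry and counts below are hypothetical, and this is not the paper's variational formulation.

```python
# Generic charge-simulation sketch in the spirit of the equivalent charge
# method (not the paper's formulation): fictitious point charges inside a
# conductor are fitted by least squares to reproduce the boundary potential.
import numpy as np

rng = np.random.default_rng(5)

def unit_vectors(n):
    v = rng.standard_normal((n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

V0 = 1.0                                  # prescribed electrode potential
charge_pts = 0.5 * unit_vectors(30)       # fictitious charges inside a unit sphere
boundary_pts = unit_vectors(300)          # collocation points on the electrode surface

# Coefficient matrix of 1/(4*pi*r) contributions (eps0 folded into q for brevity).
R = np.linalg.norm(boundary_pts[:, None, :] - charge_pts[None, :, :], axis=2)
A = 1.0 / (4.0 * np.pi * R)

q, *_ = np.linalg.lstsq(A, np.full(len(boundary_pts), V0), rcond=None)

# Check the residual potential error on an independent set of surface points.
test_pts = unit_vectors(200)
R_test = np.linalg.norm(test_pts[:, None, :] - charge_pts[None, :, :], axis=2)
V_test = (1.0 / (4.0 * np.pi * R_test)) @ q
print("max |V - V0| on test points:", np.max(np.abs(V_test - V0)))
```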

  12. Simulation of three-dimensional, time-dependent, incompressible flows by a finite element method

    International Nuclear Information System (INIS)

    Chan, S.T.; Gresho, P.M.; Lee, R.L.; Upson, C.D.

    1981-01-01

    A finite element model has been developed for simulating the dynamics of problems encountered in atmospheric pollution and safety assessment studies. The model is based on solving the set of three-dimensional, time-dependent, conservation equations governing incompressible flows. Spatial discretization is performed via a modified Galerkin finite element method, and time integration is carried out via the forward Euler method (pressure is computed implicitly, however). Several cost-effective techniques (including subcycling, mass lumping, and reduced Gauss-Legendre quadrature) which have been implemented are discussed. Numerical results are presented to demonstrate the applicability of the model

  13. A method for three-dimensional structural analysis of reinforced concrete containment

    International Nuclear Information System (INIS)

    Kulak, R.F.; Fiala, C.

    1989-01-01

    A finite element method designed to assist reactor safety analysts in the three-dimensional numerical simulation of reinforced concrete containments to normal and off-normal mechanical loadings is presented. The development of a lined reinforced concrete plate element is described in detail, and the implementation of an empirical transverse shear failure criteria is discussed. The method is applied to the analysis of a 1/6th scale reinforced concrete containment model subjected to static internal pressurization. 11 refs., 14 figs., 1 tab

  14. A method for three-dimensional quantitative observation of the microstructure of biological samples

    Science.gov (United States)

    Wang, Pengfei; Chen, Dieyan; Ma, Wanyun; Wu, Hongxin; Ji, Liang; Sun, Jialin; Lv, Danyu; Zhang, Lu; Li, Ying; Tian, Ning; Zheng, Jinggao; Zhao, Fengying

    2009-07-01

    Contemporary biology has developed into the era of cell biology and molecular biology, and researchers now try to study the mechanisms of all kinds of biological phenomena at the microscopic level. An accurate description of the microstructure of biological samples is an exigent need in many biomedical experiments. This paper introduces a method for 3-dimensional quantitative observation of the microstructure of vital biological samples based on two-photon laser scanning microscopy (TPLSM). TPLSM is a novel kind of fluorescence microscopy, which excels in its low optical damage, high resolution, deep penetration depth and suitability for 3-dimensional (3D) imaging. Fluorescently stained samples were observed by TPLSM, and their original shapes were afterward obtained through 3D image reconstruction. The spatial distribution of all objects in the samples, as well as their volumes, could be derived by image segmentation and mathematical calculation. Thus the 3-dimensionally and quantitatively depicted microstructure of the samples was finally derived. We applied this method to quantitative analysis of the spatial distribution of chromosomes in meiotic mouse oocytes at metaphase, with good results.
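    Once a 3D stack has been segmented into a binary mask, per-object volumes and positions follow from connected-component labeling and voxel counting; the sketch below illustrates only that step, on an invented mask and an assumed voxel size, and is not the authors' pipeline.

```python
# Minimal illustration (not the authors' pipeline): per-object volumes and
# centroids from a segmented 3D binary mask via connected-component labeling.
import numpy as np
from scipy import ndimage

# Hypothetical segmented stack: two small "chromosome-like" blobs in a volume.
mask = np.zeros((40, 64, 64), dtype=bool)
mask[10:15, 20:26, 20:26] = True
mask[25:32, 40:45, 30:36] = True

voxel_volume_um3 = 0.2 * 0.1 * 0.1           # assumed z, y, x sampling in microns

labels, n_objects = ndimage.label(mask)      # connected-component labeling
sizes = ndimage.sum(mask, labels, index=range(1, n_objects + 1))
centroids = ndimage.center_of_mass(mask, labels, index=range(1, n_objects + 1))

for k, (nvox, c) in enumerate(zip(sizes, centroids), start=1):
    print(f"object {k}: volume = {nvox * voxel_volume_um3:.3f} um^3, centroid (z,y,x) = {c}")
```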

  15. Direct-coupled-ray method for design-oriented three-dimensional transport analysis

    International Nuclear Information System (INIS)

    Bucholz, J.A.; Poncelet, C.G.

    1977-01-01

    A fast three-dimensional design-oriented transport method has been developed for the solution of both neutron and gamma transport problems. It combines a nodal approach with analytic integral transport to achieve relative speed and accuracy. An analytic solution is obtained for the angular flux in each of the 14 directions defined by the six faces and eight corners of a cubic mesh block. The scheme used to accommodate high-order anisotropic scattering is based on the formulation of ray-to-ray scattering probabilities in an integral sense. A variable mesh approximation has also been introduced to provide greater flexibility. The details of a direct-coupled-ray (DCR) → P1 conversion technique have been developed but not yet implemented. The DCR method, as implemented in the TRANS3 code, has been used in a number of liquid-metal fast breeder reactor shielding applications. These included a one-dimensional deep penetration configuration and one-, two-, and three-dimensional representations of the lower axial shield of the Clinch River Breeder Reactor. Comparisons with ANISN and DOT-III solutions indicated good to excellent agreement in most situations.

  16. Solution of two-dimensional equations of neutron transport in 4P0-approximation of spherical harmonics method

    International Nuclear Information System (INIS)

    Polivanskij, V.P.

    1989-01-01

    A method to solve the two-dimensional neutron transport equations using the 4P0 approximation is presented. Previously, such an approach was used efficiently for the solution of one-dimensional problems. Now an attempt is made to apply the approach to the solution of two-dimensional problems. The solution algorithm is given, as well as results of test neutron-physics calculations. A considerable improvement as compared with the diffusion approximation is shown. 11 refs

  17. Coupled DQ-FE methods for two dimensional transient heat transfer analysis of functionally graded material

    Energy Technology Data Exchange (ETDEWEB)

    Golbahar Haghighi, M.R.; Eghtesad, M. [Department of Mechanical Engineering, School of Engineering, Shiraz University, Shiraz 71348-51154 (Iran, Islamic Republic of); Malekzadeh, P. [Department of Mechanical Engineering, School of Engineering, Persian Gulf University, Boushehr 75169-13798 (Iran, Islamic Republic of)], E-mail: malekzadeh@pgu.ac.ir

    2008-05-15

    In this paper, a mixed finite element (FE) and differential quadrature (DQ) method as a simple, accurate and computationally efficient numerical tool for two dimensional transient heat transfer analysis of functionally graded materials (FGMs) is developed. The method benefits from the high accuracy, fast convergence behavior and low computational efforts of the DQ in conjunction with the advantages of the FE method in general geometry, loading and systematic boundary treatment. Also, the boundary conditions at the top and bottom surfaces of the domain can be implemented more precisely and in strong form. The temporal derivatives are discretized using an incremental DQ method (IDQM), whose numerical stability is not sensitive to time step size. The effects of non-uniform convective-radiative conditions on the boundaries are investigated. The accuracy of the proposed method is demonstrated by comparing its results with those available in the literature. It is shown that using few grid points, highly accurate results can be obtained.

  18. A Dual-Channel Acquisition Method Based on Extended Replica Folding Algorithm for Long Pseudo-Noise Code in Inter-Satellite Links.

    Science.gov (United States)

    Zhao, Hongbo; Chen, Yuying; Feng, Wenquan; Zhuang, Chen

    2018-05-25

    Inter-satellite links are an important component of the new generation of satellite navigation systems, characterized by low signal-to-noise ratio (SNR), complex electromagnetic interference and the short time slot of each satellite, which brings difficulties to the acquisition stage. The inter-satellite link in both Global Positioning System (GPS) and BeiDou Navigation Satellite System (BDS) adopt the long code spread spectrum system. However, long code acquisition is a difficult and time-consuming task due to the long code period. Traditional folding methods such as extended replica folding acquisition search technique (XFAST) and direct average are largely restricted because of code Doppler and additional SNR loss caused by replica folding. The dual folding method (DF-XFAST) and dual-channel method have been proposed to achieve long code acquisition in low SNR and high dynamic situations, respectively, but the former is easily affected by code Doppler and the latter is not fast enough. Considering the environment of inter-satellite links and the problems of existing algorithms, this paper proposes a new long code acquisition algorithm named dual-channel acquisition method based on the extended replica folding algorithm (DC-XFAST). This method employs dual channels for verification. Each channel contains an incoming signal block. Local code samples are folded and zero-padded to the length of the incoming signal block. After a circular FFT operation, the correlation results contain two peaks of the same magnitude and specified relative position. The detection process is eased through finding the two largest values. The verification takes all the full and partial peaks into account. Numerical results reveal that the DC-XFAST method can improve acquisition performance while acquisition speed is guaranteed. The method has a significantly higher acquisition probability than folding methods XFAST and DF-XFAST. Moreover, with the advantage of higher detection
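    A rough sketch of the folding and circular-FFT correlation step described here is given below; the code length, block length, delay and noise level are toy values, a single channel is shown, and this is not the authors' implementation.

```python
# Rough numpy sketch of folding + circular-FFT correlation (single channel);
# code, block lengths, delay and SNR are hypothetical toy values.
import numpy as np

rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=8192)          # long PN code (toy length)
block_len = 2048                                   # incoming signal block length
fold = len(code) // block_len                      # folding factor

# Fold the local code by summing consecutive segments; zero-pad when
# fold * block_len does not already equal the code length (not needed here).
local_folded = code.reshape(fold, block_len).sum(axis=0)

# Incoming block: a delayed slice of the code plus noise.
delay = 517
incoming = np.roll(code, -delay)[:block_len] + 0.5 * rng.standard_normal(block_len)

# Circular correlation via FFT; peaks mark candidate code phases.
corr = np.abs(np.fft.ifft(np.fft.fft(incoming) * np.conj(np.fft.fft(local_folded))))
top_two = np.argsort(corr)[-2:]                    # detection eased by taking the two largest values
print("indices of the two largest correlation values:", sorted(top_two))
```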

  19. A Dual-Channel Acquisition Method Based on Extended Replica Folding Algorithm for Long Pseudo-Noise Code in Inter-Satellite Links

    Directory of Open Access Journals (Sweden)

    Hongbo Zhao

    2018-05-01

    Full Text Available Inter-satellite links are an important component of the new generation of satellite navigation systems, characterized by low signal-to-noise ratio (SNR), complex electromagnetic interference and the short time slot of each satellite, which brings difficulties to the acquisition stage. The inter-satellite link in both Global Positioning System (GPS) and BeiDou Navigation Satellite System (BDS) adopt the long code spread spectrum system. However, long code acquisition is a difficult and time-consuming task due to the long code period. Traditional folding methods such as extended replica folding acquisition search technique (XFAST) and direct average are largely restricted because of code Doppler and additional SNR loss caused by replica folding. The dual folding method (DF-XFAST) and dual-channel method have been proposed to achieve long code acquisition in low SNR and high dynamic situations, respectively, but the former is easily affected by code Doppler and the latter is not fast enough. Considering the environment of inter-satellite links and the problems of existing algorithms, this paper proposes a new long code acquisition algorithm named dual-channel acquisition method based on the extended replica folding algorithm (DC-XFAST). This method employs dual channels for verification. Each channel contains an incoming signal block. Local code samples are folded and zero-padded to the length of the incoming signal block. After a circular FFT operation, the correlation results contain two peaks of the same magnitude and specified relative position. The detection process is eased through finding the two largest values. The verification takes all the full and partial peaks into account. Numerical results reveal that the DC-XFAST method can improve acquisition performance while acquisition speed is guaranteed. The method has a significantly higher acquisition probability than folding methods XFAST and DF-XFAST. Moreover, with the advantage of higher

  20. PARALLEL ALGORITHM FOR THREE-DIMENSIONAL STOKES FLOW SIMULATION USING BOUNDARY ELEMENT METHOD

    Directory of Open Access Journals (Sweden)

    D. G. Pribytok

    2016-01-01

    Full Text Available A parallel computing technique for modeling three-dimensional viscous flow (Stokes flow) using the direct boundary element method is presented. The problem is solved in three phases: sampling and construction of the system of linear algebraic equations (SLAE), its solution, and finding the velocity of the liquid at predetermined points. For construction of the system and finding the velocity, parallel algorithms using the CUDA graphics-card programming technology have been developed and implemented. To solve the system of linear algebraic equations, existing software libraries are used. A comparison of the time consumption of the three main algorithms is performed for the example of calculating viscous fluid motion in a three-dimensional cavity.

  1. Three-Dimensional Computed Tomography as a Method for Finding Die Attach Voids in Diodes

    Science.gov (United States)

    Brahm, E. N.; Rolin, T. D.

    2010-01-01

    NASA analyzes electrical, electronic, and electromechanical (EEE) parts used in space vehicles to understand failure modes of these components. The diode is an EEE part critical to NASA missions that can fail due to excessive voiding in the die attach. Metallography, one established method for studying the die attach, is a time-intensive, destructive, and equivocal process whereby mechanical grinding of the diodes is performed to reveal voiding in the die attach. Problems such as die attach pull-out tend to complicate results and can lead to erroneous conclusions. The objective of this study is to determine if three-dimensional computed tomography (3DCT), a nondestructive technique, is a viable alternative to metallography for detecting die attach voiding. The die attach voiding in two-dimensional planes created from 3DCT scans was compared to several physical cross sections of the same diode to determine if the 3DCT scan accurately recreates die attach volumetric variability.

  2. Three-dimensional static and dynamic reactor calculations by the nodal expansion method

    International Nuclear Information System (INIS)

    Christensen, B.

    1985-05-01

    This report reviews various methods for the calculation of the neutron flux and power distribution in a nuclear reactor. The nodal expansion method (NEM) is described in particular detail. The nodal expansion method solves the diffusion equation. In this method the reactor core is divided into nodes, typically 10 to 20 cm in each direction, and the average flux in each node is calculated. To obtain the coupling between the nodes, the local flux inside each node is expressed by a polynomial expansion. The expansion is one-dimensional, so three such expansions are used inside each node. To calculate the expansion coefficients it is necessary that the polynomial expansion be a solution to the one-dimensional diffusion equation. When the one-dimensional diffusion equation is established, a transverse-leakage term occurs, and this term is expanded in the same polynomials. The resulting equation system, with the expansion coefficients as the unknowns, is solved with a weighted residual technique. The nodal expansion method is built into a computer program (also called NEM), which is divided into two parts, one for steady-state calculations and one for dynamic calculations. It is possible to take advantage of the symmetry properties of the reactor core. The program is very flexible with regard to the number of energy groups, the node size, the flux expansion order and the transverse leakage expansion order. The boundary of the core is described by albedos. The program and its input are described. The program is tested on a number of examples, ranging from small theoretical cases up to realistic reactor cores. Many calculations are done on the well-known IAEA benchmark case. The calculations have tested the accuracy and the computing time for various node sizes and polynomial expansions. In the dynamic examples, various strategies for varying the time-step length have been tested. (author)

  3. A Generic multi-dimensional feature extraction method using multiobjective genetic programming.

    Science.gov (United States)

    Zhang, Yang; Rockett, Peter I

    2009-01-01

    In this paper, we present a generic feature extraction method for pattern classification using multiobjective genetic programming. This not only evolves the (near-)optimal set of mappings from a pattern space to a multi-dimensional decision space, but also simultaneously optimizes the dimensionality of that decision space. The presented framework evolves vector-to-vector feature extractors that maximize class separability. We demonstrate the efficacy of our approach by making statistically-founded comparisons with a wide variety of established classifier paradigms over a range of datasets and find that for most of the pairwise comparisons, our evolutionary method delivers statistically smaller misclassification errors. At very worst, our method displays no statistical difference in a few pairwise comparisons with established classifier/dataset combinations; crucially, none of the misclassification results produced by our method is worse than any comparator classifier. Although principally focused on feature extraction, feature selection is also performed as an implicit side effect; we show that both feature extraction and selection are important to the success of our technique. The presented method has the practical consequence of obviating the need to exhaustively evaluate a large family of conventional classifiers when faced with a new pattern recognition problem in order to attain a good classification accuracy.

  4. Nodal methods with non linear feedback for the three dimensional resolution of the diffusion's multigroup equations

    International Nuclear Information System (INIS)

    Ferri, A.A.

    1986-01-01

    Nodal methods applied in order to calculate the power distribution in a nuclear reactor core are presented. These methods have received special attention because they yield accurate results in short computing times. Present nodal schemes contain several unknowns per node and per group. In the methods presented here, non-linear feedback of the coupling coefficients has been applied to reduce this number to only one unknown per node and per group. The resulting algorithm is a 7-point formula, and the iterative process has proved stable in the response matrix scheme. The intranodal flux shape is determined by partial integration of the diffusion equations over two of the coordinates, leading to a set of three coupled one-dimensional equations. These can be solved by using a polynomial approximation or by integration (analytic solution). The transverse net leakage is responsible for the coupling between the spatial directions, and two alternative methods are presented to evaluate its shape: direct parabolic approximation and local model expansion. Numerical results, which include the IAEA two-dimensional benchmark problem, illustrate the efficiency of the developed methods. (M.E.L.) [es

  5. Novel iterative reconstruction method with optimal dose usage for partially redundant CT-acquisition

    Science.gov (United States)

    Bruder, H.; Raupach, R.; Sunnegardh, J.; Allmendinger, T.; Klotz, E.; Stierstorfer, K.; Flohr, T.

    2015-11-01

    In CT imaging, a variety of applications exist which are strongly SNR limited. However, in some cases redundant data of the same body region provide additional quanta. Examples: in dual energy CT, the spatial resolution has to be compromised to provide good SNR for material decomposition. However, the respective spectral dataset of the same body region provides additional quanta which might be utilized to improve SNR of each spectral component. Perfusion CT is a high dose application, and dose reduction is highly desirable. However, a meaningful evaluation of perfusion parameters might be impaired by noisy time frames. On the other hand, the SNR of the average of all time frames is extremely high. In redundant CT acquisitions, multiple image datasets can be reconstructed and averaged to composite image data. These composite image data, however, might be compromised with respect to contrast resolution and/or spatial resolution and/or temporal resolution. These observations bring us to the idea of transferring high SNR of composite image data to low SNR ‘source’ image data, while maintaining their resolution. It has been shown that the noise characteristics of CT image data can be improved by iterative reconstruction (Popescu et al 2012 Book of Abstracts, 2nd CT Meeting (Salt Lake City, UT) p 148). In case of data dependent Gaussian noise it can be modelled with image-based iterative reconstruction at least in an approximate manner (Bruder et al 2011 Proc. SPIE 7961 79610J). We present a generalized update equation in image space, consisting of a linear combination of the previous update, a correction term which is constrained by the source image data, and a regularization prior, which is initialized by the composite image data. This iterative reconstruction approach we call bimodal reconstruction (BMR). Based on simulation data it is shown that BMR can improve low contrast detectability, substantially reduces the noise power and has the potential to recover
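    The flavor of such an image-space update can be conveyed with a toy iteration that blends a pull toward the noisy source image with a pull toward the high-SNR composite image; the weights and images below are hypothetical and the loop is only schematic, not the published BMR update equation.

```python
# Schematic toy iteration (not the published BMR update): blend a data-fidelity
# pull toward the low-SNR source image with a regularization pull toward the
# high-SNR composite image; weights and images are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
truth = np.zeros((64, 64)); truth[20:44, 20:44] = 1.0
source = truth + 0.8 * rng.standard_normal(truth.shape)      # low-SNR "source" image
composite = truth + 0.05 * rng.standard_normal(truth.shape)  # high-SNR composite image

u = composite.copy()               # initialize with the composite prior
beta_data, beta_prior = 0.3, 0.1   # hypothetical weights
for _ in range(50):
    data_term = source - u         # correction constrained by the source data
    prior_term = composite - u     # regularization toward the composite image
    u = u + beta_data * data_term + beta_prior * prior_term

print("RMSE vs truth: source %.3f, result %.3f"
      % (np.sqrt(np.mean((source - truth) ** 2)), np.sqrt(np.mean((u - truth) ** 2))))
```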

  6. Novel iterative reconstruction method with optimal dose usage for partially redundant CT-acquisition

    International Nuclear Information System (INIS)

    Bruder, H; Raupach, R; Sunnegardh, J; Allmendinger, T; Klotz, E; Stierstorfer, K; Flohr, T

    2015-01-01

    In CT imaging, a variety of applications exist which are strongly SNR limited. However, in some cases redundant data of the same body region provide additional quanta. Examples: in dual energy CT, the spatial resolution has to be compromised to provide good SNR for material decomposition. However, the respective spectral dataset of the same body region provides additional quanta which might be utilized to improve SNR of each spectral component. Perfusion CT is a high dose application, and dose reduction is highly desirable. However, a meaningful evaluation of perfusion parameters might be impaired by noisy time frames. On the other hand, the SNR of the average of all time frames is extremely high. In redundant CT acquisitions, multiple image datasets can be reconstructed and averaged to composite image data. These composite image data, however, might be compromised with respect to contrast resolution and/or spatial resolution and/or temporal resolution. These observations bring us to the idea of transferring high SNR of composite image data to low SNR ‘source’ image data, while maintaining their resolution. It has been shown that the noise characteristics of CT image data can be improved by iterative reconstruction (Popescu et al 2012 Book of Abstracts, 2nd CT Meeting (Salt Lake City, UT) p 148). In case of data dependent Gaussian noise it can be modelled with image-based iterative reconstruction at least in an approximate manner (Bruder et al 2011 Proc. SPIE 7961 79610J). We present a generalized update equation in image space, consisting of a linear combination of the previous update, a correction term which is constrained by the source image data, and a regularization prior, which is initialized by the composite image data. This iterative reconstruction approach we call bimodal reconstruction (BMR). Based on simulation data it is shown that BMR can improve low contrast detectability, substantially reduces the noise power and has the potential to recover spatial

  7. Methods of Hematoxylin and Eosin Image Information Acquisition and Optimization in Confocal Microscopy.

    Science.gov (United States)

    Yoon, Woong Bae; Kim, Hyunjin; Kim, Kwang Gi; Choi, Yongdoo; Chang, Hee Jin; Sohn, Dae Kyung

    2016-07-01

    We produced hematoxylin and eosin (H&E) staining-like color images by using confocal laser scanning microscopy (CLSM), which can obtain the same or more information in comparison to conventional tissue staining. We improved images by using several image converting techniques, including morphological methods, color space conversion methods, and segmentation methods. An image obtained after image processing showed coloring very similar to that in images produced by H&E staining, and it is advantageous to conduct analysis through fluorescent dye imaging and microscopy rather than analysis based on single microscopic imaging. The colors used in CLSM are different from those seen in H&E staining, which is the method most widely used for pathologic diagnosis and is familiar to pathologists. Computer technology can facilitate the conversion of images by CLSM to be very similar to H&E staining images. We believe that the technique used in this study has great potential for application in clinical tissue analysis.

  8. Knowledge acquisition in ecological poduct design: the effects of computer-mediated communication and elicitation method

    OpenAIRE

    Sauer, J.; Schramme, S.; Rüttinger, B.

    2000-01-01

    This article presents a study that examines multiple effects of using different means of computer-mediated communication and knowledge elicitation methods during a product design process. The experimental task involved a typical scenario in product design, in which a knowledge engineer consults two experts to generate knowledge about a design issue. Employing a 3x2 between-subjects design, three conference types (face-to-face, computer, multimedia) and two knowledge elicitation methods (struc...

  9. A demonstration of adjoint methods for multi-dimensional remote sensing of the atmosphere and surface

    International Nuclear Information System (INIS)

    Martin, William G.K.; Hasekamp, Otto P.

    2018-01-01

    Highlights: • We demonstrate adjoint methods for atmospheric remote sensing in a two-dimensional setting. • Searchlight functions are used to handle the singularity of measurement response functions. • Adjoint methods require two radiative transfer calculations to evaluate the measurement misfit function and its derivatives with respect to all unknown parameters. • Synthetic retrieval studies show the scalability of adjoint methods to problems with thousands of measurements and unknown parameters. • Adjoint methods and the searchlight function technique are generalizable to 3D remote sensing. - Abstract: In previous work, we derived the adjoint method as a computationally efficient path to three-dimensional (3D) retrievals of clouds and aerosols. In this paper we will demonstrate the use of adjoint methods for retrieving two-dimensional (2D) fields of cloud extinction. The demonstration uses a new 2D radiative transfer solver (FSDOM). This radiation code was augmented with adjoint methods to allow efficient derivative calculations needed to retrieve cloud and surface properties from multi-angle reflectance measurements. The code was then used in three synthetic retrieval studies. Our retrieval algorithm adjusts the cloud extinction field and surface albedo to minimize the measurement misfit function with a gradient-based, quasi-Newton approach. At each step we compute the value of the misfit function and its gradient with two calls to the solver FSDOM. First we solve the forward radiative transfer equation to compute the residual misfit with measurements, and second we solve the adjoint radiative transfer equation to compute the gradient of the misfit function with respect to all unknowns. The synthetic retrieval studies verify that adjoint methods are scalable to retrieval problems with many measurements and unknowns. We can retrieve the vertically-integrated optical depth of moderately thick clouds as a function of the horizontal coordinate. It is also
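    The optimization pattern described here, one forward solve for the misfit and one adjoint solve for the gradient per evaluation, can be sketched with a quasi-Newton driver; in the toy below a linear model G stands in for the radiative transfer solver (FSDOM) and the forward/adjoint routines are hypothetical placeholders.

```python
# Sketch of the optimization pattern: each evaluation costs one "forward"
# solve (misfit) and one "adjoint" solve (gradient w.r.t. all unknowns).
# G and d are hypothetical stand-ins for the solver and the measurements.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n_meas, n_unknowns = 200, 1000                # many measurements, many unknowns
G = rng.standard_normal((n_meas, n_unknowns)) / np.sqrt(n_unknowns)
x_true = rng.standard_normal(n_unknowns)
d = G @ x_true + 0.01 * rng.standard_normal(n_meas)

def forward_solve(x):                         # placeholder for the forward RT solve
    return G @ x

def adjoint_solve(residual):                  # placeholder for the adjoint RT solve
    return G.T @ residual

def misfit_and_gradient(x):
    r = forward_solve(x) - d                  # call 1: forward model
    return 0.5 * r @ r, adjoint_solve(r)      # call 2: gradient for all unknowns

result = minimize(misfit_and_gradient, np.zeros(n_unknowns), jac=True, method="L-BFGS-B")
print("final misfit:", result.fun)
```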

  10. Application of Nondimensional Dynamic Influence Function Method for Eigenmode Analysis of Two-Dimensional Acoustic Cavities

    Directory of Open Access Journals (Sweden)

    S. W. Kang

    2014-04-01

    Full Text Available This paper establishes an improved NDIF method for the eigenvalue extraction of two-dimensional acoustic cavities with arbitrary shapes. The NDIF method, which was introduced by the authors in 1999, gives highly accurate eigenvalues despite employing a small number of nodes. However, it needs the inefficient procedure of calculating the singularity of a system matrix in the frequency range of interest for extracting eigenvalues and mode shapes. The paper proposes a practical approach for overcoming the inefficient procedure by making the final system matrix equation of the NDIF method into a form of algebraic eigenvalue problem. The solution quality of the proposed method is investigated by obtaining the eigenvalues and mode shapes of a circular, a rectangular, and an arbitrarily shaped cavity.

  11. Three-dimensional photoacoustic tomography based on graphics-processing-unit-accelerated finite element method.

    Science.gov (United States)

    Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying

    2013-12-01

    Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using a graphic-processing-unit parallel frame named the "compute unified device architecture." A series of simulation experiments is carried out to test the accuracy and accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced by a factor of 38.9 with a GTX 580 graphics card using the improved method.

  12. Development of an automated extraction method for liver tumors in three dimensional multiphase multislice CT images

    International Nuclear Information System (INIS)

    Nakagawa, Junya; Shimizu, Akinobu; Kobatake, Hidefumi

    2004-01-01

    This paper proposes a tumor detection method using four-phase three-dimensional (3D) CT images of the liver, i.e. non-contrast, early, portal, and late phase images. The method extracts liver regions from the four phase images and enhances tumors in the liver using a 3D adaptive convergence index filter. It then detects local maximum points and extracts tumor candidates by a region growing method. Subsequently, several features of the candidates are measured and each candidate is classified as a true tumor or normal tissue based on Mahalanobis distances. The above processes, except liver region extraction, are applied to the four phase images independently, and the four resultant images are integrated into one. We applied the proposed method to 3D abdominal CT images of ten patients obtained with a multi-detector row CT scanner and confirmed that the tumor detection rate was 100% without false positives, which is quite promising. (author)

  13. A two pressure-velocity approach for immersed boundary methods in three dimensional incompressible flows

    International Nuclear Information System (INIS)

    Sabir, O; Ahmad, Norhafizan; Nukman, Y; Tuan Ya, T M Y S

    2013-01-01

    This paper describes an innovative method for computing fluid-solid interaction using immersed boundary methods with two-stage pressure-velocity corrections. The algorithm calculates the interactions between incompressible viscous flows and a solid shape in a three-dimensional domain. The fractional step method is used to solve the Navier-Stokes equations with finite difference schemes. Most IBMs are concerned with the exchange of momentum between the Eulerian variables (fluid) and the Lagrangian nodes (solid). To address that concern, a new algorithm to correct the pressure and the velocity using the Simplified Marker and Cell method is added. This scheme is applied on a staggered grid to simulate the flow past a circular cylinder and to study the effect of the new stage on computational cost. To evaluate the accuracy of the computations, the results are compared with those of previous software. The paper confirms the capacity of the new algorithm for accurate and robust simulation of fluid-solid interaction with respect to the pressure field.

  14. Transmission probability method for solving neutron transport equation in three-dimensional triangular-z geometry

    Energy Technology Data Exchange (ETDEWEB)

    Liu Guoming [Department of Nuclear Engineering, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China)], E-mail: gmliusy@gmail.com; Wu Hongchun; Cao Liangzhi [Department of Nuclear Engineering, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China)

    2008-09-15

    This paper presents a transmission probability method (TPM) to solve the neutron transport equation in three-dimensional triangular-z geometry. The source within the mesh is assumed to be spatially uniform and isotropic. At the mesh surface, the constant and the simplified P1 approximations are invoked for the anisotropic angular flux distribution. Based on this model, a code, TPMTDT, has been developed. It was verified by three 3D Takeda benchmark problems, of which the first two are in XYZ geometry and the last one is in hexagonal-z geometry, and by an unstructured geometry problem. The results of the present method agree well with those of the Monte Carlo method and the spherical harmonics (PN) method.

  15. TMCC: a transient three-dimensional neutron transport code by the direct simulation method - 222

    International Nuclear Information System (INIS)

    Shen, H.; Li, Z.; Wang, K.; Yu, G.

    2010-01-01

    A direct simulation method (DSM) is applied to solve transient three-dimensional neutron transport problems. DSM is based on the Monte Carlo method and can be considered an application of the Monte Carlo method to this specific type of problem. In this work, the transient neutronics problem is solved by simulating the dynamic behavior of neutrons and delayed-neutron precursors during the transient process. DSM avoids various approximations that are necessary in other methods, so it is precise and flexible with respect to geometric configurations, material compositions and energy spectra. In this paper, the theory of DSM is introduced first, and the numerical results obtained with the new transient analysis code, named TMCC (Transient Monte Carlo Code), are presented. (authors)

  16. Application of an engineering inviscid-boundary layer method to slender three-dimensional vehicle forebodies

    Science.gov (United States)

    Riley, Christopher J.

    1993-01-01

    An engineering inviscid-boundary layer method has been modified for application to slender three-dimensional (3-D) forebodies which are characteristic of transatmospheric vehicles. An improved shock description in the nose region has been added to the inviscid technique which allows the calculation of a wider range of body geometries. The modified engineering method is applied to the perfect gas solution over a slender 3-D configuration at angle of attack. The method predicts surface pressures and laminar heating rates on the windward side of the vehicle that compare favorably with numerical solutions of the thin-layer Navier-Stokes equations. These improvements extend the 3-D capabilities of the engineering method and significantly increase its design applications.

  17. Finite element method with quadratic quadrilateral unit for solving two dimensional incompressible N-S equation

    International Nuclear Information System (INIS)

    Tao Ganqiang; Yu Qing; Xiao Xiao

    2011-01-01

    Viscous incompressible fluid flow is important in numerous engineering mechanics problems. Because of the strong nonlinearity and the incompressibility constraint of the Navier-Stokes equations, they are very difficult to solve numerically. Based on the characteristics of the Navier-Stokes equations, a fourth-order governing equation for the two-dimensional incompressible Navier-Stokes equations is first set up. The formulation resolves the problem of dealing with the vorticity boundary condition and automatically satisfies the incompressibility condition. Then, a finite element equation for the Navier-Stokes equations is proposed using an 8-node quadratic quadrilateral element, in which the shape function is quadratic (and hence nonlinear). Based on this, a finite element program for the 8-node quadratic quadrilateral element is developed. Lastly, numerical experiments demonstrate the accuracy and dependability of the method and show that it has good application prospects in computational fluid mechanics. (authors)

  18. A Monte Carlo Green's function method for three-dimensional neutron transport

    International Nuclear Information System (INIS)

    Gamino, R.G.; Brown, F.B.; Mendelson, M.R.

    1992-01-01

    This paper describes a Monte Carlo transport kernel capability, which has recently been incorporated into the RACER continuous-energy Monte Carlo code. The kernels represent a Green's function method for neutron transport from a fixed-source volume out to a particular volume of interest. This is a very powerful transport technique. Also, since the kernels are evaluated numerically by Monte Carlo, the problem geometry can be arbitrarily complex, yet exact. This method is intended for problems where an ex-core neutron response must be determined for a variety of reactor conditions. Two examples are ex-core neutron detector response and vessel critical weld fast flux. The response is expressed in terms of neutron transport kernels weighted by a core fission source distribution. In these types of calculations, the response must be computed for hundreds of source distributions, but the kernels only need to be calculated once. The advance described in this paper is that the kernels are generated with a highly accurate three-dimensional Monte Carlo transport calculation instead of an approximate method such as line-of-sight attenuation theory or a synthesized three-dimensional discrete ordinates solution.
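    The kernel-weighting step has a simple structure: once the kernels are tabulated, the response for each of many source distributions is just a weighted sum. The toy sketch below uses random placeholder kernels and sources to show that structure; it is not the RACER capability itself.

```python
# Toy illustration of kernel weighting: with the Monte Carlo kernels K
# precomputed once (random placeholders here), the detector response for each
# of many fission-source distributions is a weighted sum.
import numpy as np

rng = np.random.default_rng(4)
n_source_regions, n_detectors, n_states = 500, 4, 300

K = rng.random((n_detectors, n_source_regions))   # kernels: computed once
S = rng.random((n_source_regions, n_states))      # hundreds of source distributions
S /= S.sum(axis=0)                                # normalize each source distribution

responses = K @ S                                 # response of each detector for each state
print(responses.shape)                            # (4, 300)
```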

  19. The ADO-nodal method for solving two-dimensional discrete ordinates transport problems

    International Nuclear Information System (INIS)

    Barichello, L.B.; Picoloto, C.B.; Cunha, R.D. da

    2017-01-01

    Highlights: • Two-dimensional discrete ordinates neutron transport. • Analytical Discrete Ordinates (ADO) nodal method. • Heterogeneous media fixed source problems. • Local solutions. - Abstract: In this work, recent results on the solution of fixed-source two-dimensional transport problems, in Cartesian geometry, are reported. Homogeneous and heterogeneous media problems are considered in order to incorporate the idea of an arbitrary number of domain divisions into regions (nodes) when applying the ADO method, a method with analytical features, to those problems. The ADO-nodal formulation is developed, for each node, following previous work devoted to heterogeneous media problems. Here, however, the numerical procedure is extended to a higher number of domain divisions. Such an extension leads, in some cases, to the use of an iterative method for solving the general linear system that defines the arbitrary constants of the general solution. In addition to solving heterogeneous media configurations other than those reported in previous works, the present approach allows comparisons with results provided by other methodologies using refined meshes. Numerical results indicate the ADO solution may achieve a prescribed accuracy using coarser meshes than other schemes.

  20. A new digitized reverse correction method for hypoid gears based on a one-dimensional probe

    Science.gov (United States)

    Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo

    2017-12-01

    In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on the measurement data from a one-dimensional probe. The minimization of tooth surface geometrical deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles and the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretical designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction method of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of the method of numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed by the organic combination of numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are proved through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.

  1. Registration and three-dimensional reconstruction of autoradiographic images by the disparity analysis method

    International Nuclear Information System (INIS)

    Zhao, Weizhao; Ginsberg, M.; Young, T.Y.

    1993-01-01

    Quantitative autoradiography is a powerful radio-isotopic-imaging method for neuroscientists to study local cerebral blood flow and glucose-metabolic rate at rest, in response to physiologic activation of the visual, auditory, somatosensory, and motor systems, and in pathologic conditions. Most autoradiographic studies analyze glucose utilization and blood flow in two-dimensional (2-D) coronal sections. With modern digital computer and image-processing techniques, a large number of closely spaced coronal sections can be stacked appropriately to form a three-dimensional (3-d) image. 3-D autoradiography allows investigators to observe cerebral sections and surfaces from any viewing angle. A fundamental problem in 3-D reconstruction is the alignment (registration) of the coronal sections. A new alignment method based on disparity analysis is presented which can overcome many of the difficulties encountered by previous methods. The disparity analysis method can deal with asymmetric, damaged, or tilted coronal sections under the same general framework, and it can be used to match coronal sections of different sizes and shapes. Experimental results on alignment and 3-D reconstruction are presented

  2. The interpolation method based on endpoint coordinate for CT three-dimensional image

    International Nuclear Information System (INIS)

    Suto, Yasuzo; Ueno, Shigeru.

    1997-01-01

    Image interpolation is frequently used to improve the resolution in the slice direction so that it approaches the in-plane spatial resolution. As a result, improved quality of reconstructed three-dimensional images can be attained with this technique. Linear interpolation is a well-known and widely used method. The distance-image method, which is a non-linear interpolation technique, is also used to convert CT-value images to distance images. This paper describes a newly developed method that makes use of end-point coordinates: CT-value images are initially converted to binary images by thresholding, and sequences of pixels with value 1 are then arranged in the vertical or horizontal direction. A sequence of pixels with value 1 is defined as a line segment, which has a starting point and an end point. For each pair of adjacent line segments, another line segment is composed by spatial interpolation of the start and end points. Binary slice images are constructed from the composed line segments. Three-dimensional images were reconstructed from clinical X-ray CT images using three different interpolation methods, and their quality and processing speed were evaluated and compared. (author)
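    A simplified sketch of the endpoint idea follows: given matched runs of 1-valued pixels on two adjacent slices, an intermediate run is formed by interpolating their start and end coordinates. Run matching itself is omitted and the runs below are hypothetical.

```python
# Simplified sketch of endpoint interpolation between matched 1-valued runs
# on two adjacent binary slices; run matching is omitted, runs are hypothetical.
import numpy as np

def interpolate_run(run_a, run_b, t=0.5):
    """Interpolate (start, end) pixel indices of two matched runs."""
    (s1, e1), (s2, e2) = run_a, run_b
    return (round((1 - t) * s1 + t * s2), round((1 - t) * e1 + t * e2))

def run_to_row(run, width):
    row = np.zeros(width, dtype=np.uint8)
    s, e = run
    row[s:e + 1] = 1
    return row

upper_run, lower_run = (10, 25), (14, 31)      # runs on two adjacent binary slices
mid_run = interpolate_run(upper_run, lower_run)
print(mid_run, run_to_row(mid_run, 40))
```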

  3. Two-dimensional semi-analytic nodal method for multigroup pin power reconstruction

    International Nuclear Information System (INIS)

    Seung Gyou, Baek; Han Gyu, Joo; Un Chul, Lee

    2007-01-01

    A pin power reconstruction method applicable to multigroup problems involving square fuel assemblies is presented. The method is based on a two-dimensional semi-analytic nodal solution which consists of eight exponential terms and 13 polynomial terms. The 13 polynomial terms represent the particular solution obtained under the condition of a 2-dimensional 13 term source expansion. In order to achieve better approximation of the source distribution, the least square fitting method is employed. The 8 exponential terms represent a part of the analytically obtained homogeneous solution and the 8 coefficients are determined by imposing constraints on the 4 surface average currents and 4 corner point fluxes. The surface average currents determined from a transverse-integrated nodal solution are used directly whereas the corner point fluxes are determined during the course of the reconstruction by employing an iterative scheme that would realize the corner point balance condition. The outgoing current based corner point flux determination scheme is newly introduced. The accuracy of the proposed method is demonstrated with the L336C5 benchmark problem. (authors)

  4. Solving (2 + 1)-dimensional sine-Poisson equation by a modified variable separated ordinary differential equation method

    International Nuclear Information System (INIS)

    Ka-Lin, Su; Yuan-Xi, Xie

    2010-01-01

    By introducing a more general auxiliary ordinary differential equation (ODE), a modified variable separated ordinary differential equation method is presented for solving the (2 + 1)-dimensional sine-Poisson equation. As a result, many explicit and exact solutions of the (2 + 1)-dimensional sine-Poisson equation are derived in a simple manner by this technique. (general)

  5. Bibliography of papers, reports, and presentations related to point-sample dimensional measurement methods for machined part evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, J.M. [Sandia National Labs., Livermore, CA (United States). Integrated Manufacturing Systems

    1996-04-01

    The Dimensional Inspection Techniques Specification (DITS) Project is an ongoing effort to produce tools and guidelines for optimum sampling and data analysis of machined parts, when measured using point-sample methods of dimensional metrology. This report is a compilation of results of a literature survey, conducted in support of the DITS. Over 160 citations are included, with author abstracts where available.

  6. New Traveling Wave Solutions of the Higher Dimensional Nonlinear Partial Differential Equation by the Exp-Function Method

    Directory of Open Access Journals (Sweden)

    Hasibun Naher

    2012-01-01

    We construct new analytical solutions of the (3+1)-dimensional modified KdV-Zakharov-Kuznetsev equation by the Exp-function method. Plentiful exact traveling wave solutions with arbitrary parameters are effectively obtained by the method. The obtained results show that the Exp-function method is an effective and straightforward mathematical tool for searching for analytical solutions with arbitrary parameters of higher-dimensional nonlinear partial differential equations.

  7. In Vitro Evaluation of Dimensional Stability of Alginate Impressions after Disinfection by Spray and Immersion Methods

    Directory of Open Access Journals (Sweden)

    Fahimeh Hamedi Rad

    2010-12-01

    Background and aims. The most common method for disinfecting alginate impressions is spraying them with disinfecting agents, but some studies have shown that these impressions can also be immersed. The aim of this study was to evaluate the dimensional stability of alginate impressions following disinfection by spray and immersion methods. Materials and methods. Four common disinfecting agents (Sodium Hypochlorite, Micro 10, Glutaraldehyde and Deconex) were selected and the impressions (n=108) were divided into four groups (n=24) and eight subgroups (n=12) for disinfection with any of the four above-mentioned agents by spray or immersion methods. The control group (n=12) was not disinfected. The impressions were then poured with type III dental stone in a standard manner. The results were analyzed by descriptive methods (mean and standard deviation), t-test, two-way analysis of variance (ANOVA) and Duncan test, using SPSS 14.0 software for Windows. Results. The mean changes in length and height were significant between the various groups and disinfecting methods. Regarding length, the greatest and the smallest changes were related to Deconex and Micro 10 in the immersion method, respectively. Regarding height, the greatest and the smallest changes were related to Glutaraldehyde and Deconex in the immersion method, respectively. Conclusion. Disinfecting alginate impressions with Sodium Hypochlorite, Deconex or Glutaraldehyde by the immersion method is not recommended; it is better to disinfect alginate impressions by spraying with Micro 10, Sodium Hypochlorite or Glutaraldehyde, or by immersion in Micro 10.

  8. Bandgap optimization of two-dimensional photonic crystals using semidefinite programming and subspace methods

    International Nuclear Information System (INIS)

    Men, H.; Nguyen, N.C.; Freund, R.M.; Parrilo, P.A.; Peraire, J.

    2010-01-01

    In this paper, we consider the optimal design of photonic crystal structures for two-dimensional square lattices. The mathematical formulation of the bandgap optimization problem leads to an infinite-dimensional Hermitian eigenvalue optimization problem parametrized by the dielectric material and the wave vector. To make the problem tractable, the original eigenvalue problem is discretized using the finite element method into a series of finite-dimensional eigenvalue problems for multiple values of the wave vector parameter. The resulting optimization problem is large-scale and non-convex, with low regularity and non-differentiable objective. By restricting to appropriate eigenspaces, we reduce the large-scale non-convex optimization problem via reparametrization to a sequence of small-scale convex semidefinite programs (SDPs) for which modern SDP solvers can be efficiently applied. Numerical results are presented for both transverse magnetic (TM) and transverse electric (TE) polarizations at several frequency bands. The optimized structures exhibit patterns which go far beyond typical physical intuition on periodic media design.

  9. Approximate solutions of the two-dimensional integral transport equation by collision probability methods

    International Nuclear Information System (INIS)

    Sanchez, Richard

    1977-01-01

    A set of approximate solutions for the isotropic two-dimensional neutron transport problem has been developed using the Interface Current formalism. The method has been applied to regular lattices of rectangular cells containing a fuel pin, cladding and water, or homogenized structural material. The cells are divided into homogeneous zones. A zone-wise flux expansion is used to formulate a direct collision probability problem within a cell. The coupling of the cells is achieved by making additional assumptions on the currents entering and leaving the interfaces. Two codes have been written: the first uses a cylindrical cell model and one or three terms for the flux expansion; the second uses a two-dimensional flux representation and performs a truly two-dimensional calculation inside each cell. In both codes one or three terms can be used to make a space-independent expansion of the angular fluxes entering and leaving each side of the cell. The accuracies and computing times achieved with the different approximations are illustrated by numerical studies on two benchmark problems.

  10. Solution of D dimensional Dirac equation for hyperbolic tangent potential using NU method and its application in material properties

    Energy Technology Data Exchange (ETDEWEB)

    Suparmi, A., E-mail: soeparmi@staff.uns.ac.id; Cari, C., E-mail: cari@staff.uns.ac.id; Pratiwi, B. N., E-mail: namakubetanurpratiwi@gmail.com [Physics Department, Faculty of Mathematics and Science, Sebelas Maret University, Jl. Ir. Sutami 36A Kentingan Surakarta 57126 (Indonesia); Deta, U. A. [Physics Department, Faculty of Science and Mathematics Education and Teacher Training, Surabaya State University, Surabaya (Indonesia)

    2016-02-08

    The analytical solution of the D-dimensional Dirac equation for the hyperbolic tangent potential is investigated using the Nikiforov-Uvarov method. In the case of spin symmetry, the D-dimensional Dirac equation reduces to the D-dimensional Schrodinger equation. The D-dimensional relativistic energy spectra are obtained from the D-dimensional relativistic energy eigenvalue equation by using MATLAB software. The corresponding D-dimensional radial wave functions are formulated in the form of generalized Jacobi polynomials. The thermodynamic properties of materials are generated from the non-relativistic energy eigenvalues in the classical limit. In the non-relativistic limit, the relativistic energy equation reduces to the non-relativistic energy equation. The thermal quantities of the system, the partition function and the specific heat, are expressed in terms of the error function and the imaginary error function, which are numerically calculated using MATLAB software.

  11. A New Multielement Method for LA-ICP-MS Data Acquisition from Glacier Ice Cores.

    Science.gov (United States)

    Spaulding, Nicole E; Sneed, Sharon B; Handley, Michael J; Bohleber, Pascal; Kurbatov, Andrei V; Pearce, Nicholas J; Erhardt, Tobias; Mayewski, Paul A

    2017-11-21

    To answer pressing new research questions about the rate and timing of abrupt climate transitions, a robust system for ultrahigh-resolution sampling of glacier ice is needed. Here, we present a multielement method of LA-ICP-MS analysis wherein an array of chemical elements is simultaneously measured from the same ablation area. Although multielement techniques are commonplace for high-concentration materials, prior to the development of this method, all LA-ICP-MS analyses of glacier ice involved a single element per ablation pass or spot. This new method, developed using the LA-ICP-MS system at the W. M. Keck Laser Ice Facility at the University of Maine Climate Change Institute, has already been used to shed light on our flawed understanding of natural levels of Pb in Earth's atmosphere.

  12. Indoor Map Acquisition System Using Global Scan Matching Method and Laser Range Scan Data

    Science.gov (United States)

    Hisanaga, Satoshi; Kase, Takaaki

    Simultaneous localization and mapping (SLAM) is the latest technique for constructing indoor maps. In indoor environments, a localization method using the features of the walls as landmarks has been studied in the past. That approach has a drawback: it cannot localize in spaces surrounded by featureless walls or walls on which similar features are repeated. To overcome this drawback, we developed an accurate localization method that does not rely on the features of the walls. We exploited the fact that the walls in a building are aligned along only two orthogonal directions. By considering a specific wall to be a reference wall, the location of the robot is expressed using the distance between the robot and the reference wall. We built a robot in order to evaluate the mapping accuracy of our method and carried out an experiment to map a corridor (40 m long) that contained featureless parts. The map obtained had a margin of error of less than 2%.

  13. Development of knowledge acquisition methods for knowledge base construction for autonomous plants

    Energy Technology Data Exchange (ETDEWEB)

    Yoshikawa, S. [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center; Sasajima, M.; Kitamura, Y.; Ikeda, M.; Mizoguchi, R.

    1993-03-01

    In order to enhance the safety and reliability of nuclear plant operation, it is strongly desired to construct a diagnostic knowledge base without omissions, contradictions, or description inconsistencies. An advanced method, the 'Knowledge Compiler', has been studied to acquire diagnostic knowledge, mainly based on qualitative reasoning techniques, without accumulating heuristics through interviews. Two methods have been developed to suppress the ambiguity observed when the qualitative reasoning mechanism is applied to the heat transport systems of nuclear power plants. In the first method, qualitative values are allocated to the system variables along the causality direction, avoiding contradictions among plural variables in each qualitative constraint describing knowledge of deviation propagation, heat balance, or energy conservation. In the second method, all the qualitative information is represented as a set of simultaneous qualitative equations, and an appropriate subset is selected so that the qualitative solutions of the unknowns in this subset can be derived independently of the remaining part. A contrary method is applied to the selected subset to derive local solutions. Then the problem size is reduced by substituting the solutions of the subset, in a recursive manner. In the previous report on this research project, complete computer software was constructed based on these methods and applied to a two-loop heat transport system of a nuclear power plant. The detailed results are discussed in this report. In addition, an integrated configuration of a diagnostic knowledge generation system for nuclear power plants is proposed, based upon the results and new findings obtained through the research activities so far, and the future work needed to overcome the remaining problems is also identified. (author)

  14. Laser induced ultrasonic phased array using full matrix capture data acquisition and total focusing method.

    Science.gov (United States)

    Stratoudaki, Theodosia; Clark, Matt; Wilcox, Paul D

    2016-09-19

    Laser ultrasonics is a technique where lasers are employed to generate and detect ultrasound. A data collection method (full matrix capture) and a post processing imaging algorithm, the total focusing method, both developed for ultrasonic arrays, are modified and used in order to enhance the capabilities of laser ultrasonics for nondestructive testing by improving defect detectability and increasing spatial resolution. In this way, a laser induced ultrasonic phased array is synthesized. A model is developed and compared with experimental results from aluminum samples with side drilled holes and slots at depths of 5 - 20 mm from the surface.
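    The total focusing method itself is a generic delay-and-sum algorithm over full-matrix-capture data; a minimal sketch is given below, assuming a linear array of surface elements at z = 0 and a single wave speed (the laser-specific generation and detection geometry and the authors' model are not reproduced, and all names are illustrative):

```python
import numpy as np

def tfm_image(fmc, t, elem_x, grid_x, grid_z, c):
    """Delay-and-sum total focusing method on full-matrix-capture data.

    fmc    : array (n_el, n_el, n_t) of time traces for every tx/rx element pair
    t      : array (n_t,) of sample times [s]
    elem_x : array (n_el,) of element x-positions [m], elements located at z = 0
    c      : wave speed [m/s]
    Returns an image of shape (len(grid_z), len(grid_x)).
    """
    dt = t[1] - t[0]
    n_el = len(elem_x)
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            d = np.hypot(elem_x - x, z)          # element-to-focal-point distances
            for itx in range(n_el):
                tof = (d[itx] + d) / c           # transmitter -> point -> receiver
                idx = np.clip(np.round((tof - t[0]) / dt).astype(int), 0, len(t) - 1)
                img[iz, ix] += np.sum(fmc[itx, np.arange(n_el), idx])
    return np.abs(img)

# Usage sketch (array sizes and the wave speed are placeholders):
# img = tfm_image(fmc, t, elem_x,
#                 np.linspace(-0.01, 0.01, 64), np.linspace(0.001, 0.03, 64), 6320.0)
```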

  15. Potential Use of Agile Methods in Selected DoD Acquisitions: Requirements Development and Management

    Science.gov (United States)

    2014-04-01

    Kanban is a technique for managing workflow originating from the lean engineering methods pioneered by Toyota [Reinertsen 2009].

  16. Two numerical methods for the solution of two-dimensional eddy current problems

    International Nuclear Information System (INIS)

    Biddlecombe, C.S.

    1978-07-01

    A general method for the solution of eddy current problems in two dimensions (one component of current density and two of magnetic field) is reported. After examining analytical methods, two numerical methods are presented. Both solve the two-dimensional, low-frequency limit of Maxwell's equations for transient eddy currents in conducting material, which may be permeable, in the presence of other non-conducting permeable material. Both solutions are expressed in terms of the magnetic vector potential. The first is an integral equation method, using zero-order elements in the discretisation of the unknown source regions. The other is a differential equation method, using a first-order finite element mesh and the Galerkin weighted residual procedure. The resulting equations are solved as initial-value problems. Results from programs based on each method are presented, showing the power and limitations of the methods and the range of problems that can be solved. The methods are compared and recommendations are made for choosing between them. Suggestions are made for improving both methods, involving boundary integral techniques. (author)

  17. MRI definition of target volumes using fuzzy logic method for three-dimensional conformal radiation therapy

    International Nuclear Information System (INIS)

    Caudrelier, Jean-Michel; Vial, Stephane; Gibon, David; Kulik, Carine; Fournier, Charles; Castelain, Bernard; Coche-Dequeant, Bernard; Rousseau, Jean

    2003-01-01

    Purpose: Three-dimensional (3D) volume determination is one of the most important problems in conformal radiation therapy. Techniques of volume determination from tomographic medical imaging are usually based on two-dimensional (2D) contour definition with the result dependent on the segmentation method used, as well as on the user's manual procedure. The goal of this work is to describe and evaluate a new method that reduces the inaccuracies generally observed in the 2D contour definition and 3D volume reconstruction process. Methods and Materials: This new method has been developed by integrating the fuzziness in the 3D volume definition. It first defines semiautomatically a minimal 2D contour on each slice that definitely contains the volume and a maximal 2D contour that definitely does not contain the volume. The fuzziness region in between is processed using possibility functions in possibility theory. A volume of voxels, including the membership degree to the target volume, is then created on each slice axis, taking into account the slice position and slice profile. A resulting fuzzy volume is obtained after data fusion between multiorientation slices. Different studies have been designed to evaluate and compare this new method of target volume reconstruction and a classical reconstruction method. First, target definition accuracy and robustness were studied on phantom targets. Second, intra- and interobserver variations were studied on radiosurgery clinical cases. Results: The absolute volume errors are less than or equal to 1.5% for phantom volumes calculated by the fuzzy logic method, whereas the values obtained with the classical method are much larger than the actual volumes (absolute volume errors up to 72%). With increasing MRI slice thickness (1 mm to 8 mm), the phantom volumes calculated by the classical method are increasing exponentially with a maximum absolute error up to 300%. In contrast, the absolute volume errors are less than 12% for phantom

  18. An Integral Method and Its Application to Some Three-Dimensional Boundary-Layer Flows,

    Science.gov (United States)

    1979-07-18

    Naval Surface Weapons Center, Research and Technology Department, 18 July 1979. Approved for public release; distribution unlimited.

  19. Three-dimensional analysis of eddy current with the finite element method

    International Nuclear Information System (INIS)

    Takano, Ichiro; Suzuki, Yasuo

    1977-05-01

    The finite element method is applied to the three-dimensional analysis of eddy currents induced in a large Tokamak device (JT-60). Two techniques for studying the eddy currents are presented: the ordinary vector potential and the modified vector potential. The latter was originally developed to reduce the dimension of the global matrix. A theoretical treatment of these two techniques is given. The skin effect for alternating current flowing in a circular loop of rectangular cross section is examined as an example of the modified vector potential technique, and the result is compared with the analytical one. This technique is useful in the analysis of eddy current problems. (auth.)

  20. Three-dimensional Reconstruction Method Study Based on Interferometric Circular SAR

    Directory of Open Access Journals (Sweden)

    Hou Liying

    2016-10-01

    Circular Synthetic Aperture Radar (CSAR) can acquire a target's scattering information in all directions through a 360° observation, but a single-track CSAR cannot efficiently obtain height scattering information for a strongly directive scatterer. In this study, we examine the three-dimensional circular SAR interferometry theory for a typical target and validate the theory in a darkroom experiment. We present a 3D reconstruction of an actual metal tank model by interferometric CSAR for the first time, verify the validity of the method, and demonstrate the important potential applications of combining 3D reconstruction with omnidirectional observation.

  1. New method for thickness determination and microscopic imaging of graphene-like two-dimensional materials

    International Nuclear Information System (INIS)

    Qin Xudong; Chen Yonghai; Liu Yu; Zhu Laipan; Li Yuan; Wu Qing; Huang Wei

    2016-01-01

    We employed microscopic reflectance difference spectroscopy (micro-RDS) to determine the layer number and to microscopically image the surface topography of graphene and MoS₂ samples. The contrast image shows the efficiency and reliability of this new clipping technique. As a low-cost, quantifiable, non-contact and non-destructive method, it does not depend on the characteristic signal of a particular material and can be applied to arbitrary substrates. It is therefore a perfect candidate for characterizing the thickness of graphene-like two-dimensional materials. (paper)

  2. Measurement Uncertainty Evaluation in Dimensional X-ray Computed Tomography Using the Bootstrap Method

    DEFF Research Database (Denmark)

    Hiller, Jochen; Genta, Gianfranco; Barbato, Giulio

    2014-01-01

    measurement processes, e.g., with tactile systems, also due to factors related to systematic errors, mainly caused by specific CT image characteristics. In this paper we propose a simulation-based framework for measurement uncertainty evaluation in dimensional CT using the bootstrap method. In a case study...... the problem concerning measurement uncertainties was addressed with bootstrap and successfully applied to ball-bar CT measurements. Results obtained enabled extension to more complex shapes such as actual industrial components as we show by tests on a hollow cylinder workpiece....
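    In general terms, a bootstrap evaluation resamples repeated measurement results with replacement and uses the spread of the resampled statistic as an uncertainty estimate. A minimal sketch with hypothetical repeated CT diameter values follows; the paper's specific simulation-based framework and ball-bar data are not reproduced:

```python
import numpy as np

def bootstrap_uncertainty(values, n_boot=10000, seed=0):
    """Bootstrap estimate of the standard uncertainty and a 95 % interval
    for the mean of repeated dimensional measurements."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    means = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(values, size=values.size, replace=True)
        means[b] = resample.mean()
    return means.std(ddof=1), np.percentile(means, [2.5, 97.5])

# Hypothetical repeated CT diameter measurements of a sphere [mm]:
d = [24.9981, 24.9978, 24.9985, 24.9990, 24.9976, 24.9983]
u, ci = bootstrap_uncertainty(d)
print(f"standard uncertainty of the mean ~ {u:.4f} mm, 95 % interval {ci}")
```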

  3. The hydrogen tunneling splitting in malonaldehyde: A full-dimensional time-independent quantum mechanical method

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Feng; Ren, Yinghui; Bian, Wensheng, E-mail: bian@iccas.ac.cn [Beijing National Laboratory for Molecular Sciences, Institute of Chemistry, Chinese Academy of Sciences, Beijing 100190 (China); University of Chinese Academy of Sciences, Beijing 100049 (China)

    2016-08-21

    The accurate time-independent quantum dynamics calculations on the ground-state tunneling splitting of malonaldehyde in full dimensionality are reported for the first time. This is achieved with an efficient method developed by us. In our method, the basis functions are customized for the hydrogen transfer process which has the effect of greatly reducing the size of the final Hamiltonian matrix, and the Lanczos method and parallel strategy are used to further overcome the memory and central processing unit time bottlenecks. The obtained ground-state tunneling splitting of 24.5 cm⁻¹ is in excellent agreement with the benchmark value of 23.8 cm⁻¹ computed with the full-dimensional, multi-configurational time-dependent Hartree approach on the same potential energy surface, and we estimate that our reported value has an uncertainty of less than 0.5 cm⁻¹. Moreover, the role of various vibrational modes strongly coupled to the hydrogen transfer process is revealed.

  4. Analysis of fracture surface of CFRP material by three-dimensional reconstruction methods

    International Nuclear Information System (INIS)

    Lobo, Raquel M.; Andrade, Arnaldo H.P.

    2009-01-01

    Fracture surfaces of CFRP (Carbon Fiber Reinforced Polymer) materials used in the nuclear fuel cycle present elevated roughness, mainly due to the fracture mode known as pull-out, which leaves protruding pieces of carbon fiber after debonding between fiber and matrix. Fractographic analysis based on two-dimensional images is deficient because it does not consider the vertical resolution, which is as important as the horizontal resolution. In this case, knowledge of the height distribution produced during fracture can lead to the calculation of the energies involved in the process, which would allow a better understanding of the fracture mechanisms of the composite material. An important solution for characterizing materials whose surfaces present high roughness due to height variations is to reconstruct these fracture surfaces three-dimensionally. In this work, the 3D reconstruction was done by two different methods: variable-focus reconstruction, using a stack of images obtained by optical microscopy (OM), and parallax reconstruction, carried out with images acquired by scanning electron microscopy (SEM). Both methods produce an elevation map of the reconstructed image that determines the height of the surface pixel by pixel. The results obtained by the reconstruction methods for the CFRP surfaces were compared with those for other materials, such as aluminum and copper, which present ductile fracture surfaces with lower roughness. (author)

  5. Development of a particle method of characteristics (PMOC) for one-dimensional shock waves

    Science.gov (United States)

    Hwang, Y.-H.

    2018-03-01

    In the present study, a particle method of characteristics is put forward to simulate the evolution of one-dimensional shock waves in barotropic gaseous, closed-conduit, open-channel, and two-phase flows. All these flow phenomena can be described with the same set of governing equations. The proposed scheme is established based on the characteristic equations and formulated by assigning the computational particles to move along the characteristic curves. Both the right- and left-running characteristics are traced and represented by their associated computational particles. It inherits the computational merits of the conventional method of characteristics (MOC) and the moving particle method, but without their individual deficiencies. In addition, special particles with dual states, deduced from the enforcement of the Rankine-Hugoniot relation, are deliberately imposed to emulate the shock structure. Numerical tests are carried out by solving some benchmark problems, and the computational results are compared with available analytical solutions. From the derivation procedure and the obtained computational results, it is concluded that the proposed PMOC will be a useful tool for replicating one-dimensional shock waves.

  6. A new NMIS characteristic signature acquisition method based on time-domain fission correlation spectrum

    International Nuclear Information System (INIS)

    Wei Biao; Feng Peng; Yang Fan; Ren Yong

    2014-01-01

    To deal with the disadvantages of the homogeneous signature of the nuclear material identification system (NMIS) and the limited methods for extracting the characteristic parameters of nuclear materials, an enhanced method combining Time-of-Flight (TOF) and Pulse Shape Discrimination (PSD) was introduced into the traditional characteristic parameter extraction and recognition system of the NMIS. With the help of the PSD, the γ signal and the neutron signal can be discriminated. Further, based on the differences in the neutron-γ flight times at detectors in various positions, a new time-domain signature reflecting the position information of unknown nuclear material was investigated. The simulation results show that the algorithm is feasible and helps to identify the relative position of unknown nuclear material. (authors)

  7. Finite element method for one-dimensional rill erosion simulation on a curved slope

    Directory of Open Access Journals (Sweden)

    Lijuan Yan

    2015-03-01

    Rill erosion models are important to hillslope soil erosion prediction and to land use planning. The development and use of rill erosion models have attracted increasing attention. The purpose of this research was to develop mathematical models with computer simulation procedures to simulate and predict rill erosion. The finite element method is known as an efficient tool in many applications other than rill soil erosion. In this study, the hydrodynamic and sediment continuity model equations for a rill erosion system were solved by the Galerkin finite element method with Visual C++ procedures. The simulated results are compared with spatially and temporally measured data for rill erosion processes under different conditions. The results indicate that the one-dimensional linear finite element method produced excellent predictions of rill erosion processes. Therefore, this study supplies a tool for further development of a dynamic soil erosion prediction model.
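    The following is a minimal, generic illustration of one-dimensional linear Galerkin finite element assembly, applied to a simple steady model problem rather than to the paper's coupled hydrodynamic and sediment-continuity equations; the model problem and all names are illustrative assumptions:

```python
import numpy as np

def galerkin_1d(n_elem, length, source, left_bc, right_bc):
    """Minimal 1-D Galerkin finite element solver for -u''(x) = source(x)
    with linear elements and Dirichlet boundary conditions."""
    n_nodes = n_elem + 1
    x = np.linspace(0.0, length, n_nodes)
    h = length / n_elem
    k = np.zeros((n_nodes, n_nodes))
    f = np.zeros(n_nodes)
    for e in range(n_elem):                                   # element-by-element assembly
        ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # linear-element stiffness
        xm = 0.5 * (x[e] + x[e + 1])
        fe = source(xm) * h / 2.0 * np.array([1.0, 1.0])        # midpoint quadrature load
        dofs = [e, e + 1]
        k[np.ix_(dofs, dofs)] += ke
        f[dofs] += fe
    for node, value in ((0, left_bc), (n_nodes - 1, right_bc)):  # Dirichlet conditions
        k[node, :] = 0.0
        k[node, node] = 1.0
        f[node] = value
    return x, np.linalg.solve(k, f)

# Example: -u'' = 1 on [0, 1] with u(0) = u(1) = 0, exact solution u(x) = x(1 - x)/2.
x, u = galerkin_1d(20, 1.0, lambda x: 1.0, 0.0, 0.0)
```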

  8. Resolution of the neutron transport equation by a three-dimensional least square method

    International Nuclear Information System (INIS)

    Varin, Elisabeth

    2001-01-01

    Knowledge of the space and time distribution of neutrons with a certain energy or speed allows the exploitation and control of a nuclear reactor and the assessment of the irradiation dose around an irradiated nuclear fuel storage site. The neutron density is described by a transport equation. The objective of this research thesis is to develop software for the resolution of this stationary equation in a three-dimensional Cartesian domain by means of a deterministic method. After a presentation of the transport equation, the author gives an overview of the different deterministic resolution approaches, identifies their benefits and drawbacks, and discusses the choice of the Ressel method. The least-squares method is precisely described and then applied. Numerical benchmarks are reported for validation purposes.

  9. Numerical solution of multi group-Two dimensional- Adjoint equation with finite element method

    International Nuclear Information System (INIS)

    Poursalehi, N.; Khalafi, H.; Shahriari, M.; Minoochehr

    2008-01-01

    The adjoint equation is used for perturbation theory in nuclear reactor design. For the numerical solution of the adjoint equation, two methods are usually applied: finite element and finite difference procedures. The finite element procedure is usually chosen for solving the adjoint equation because it is more usable for a variety of geometries. In this article, the Galerkin finite element method is discussed. This method is applied to the numerical solution of the multigroup, multi-region, two-dimensional (X, Y) adjoint equation. A typical reactor geometry is partitioned with triangular meshes and the boundary condition for the adjoint flux is taken as zero. Finally, for a case with defined parameters, a finite element code was applied and the results were compared with the CITATION code.

  10. A new method of solution for one-dimensional quasi-neutral bounded plasmas

    Science.gov (United States)

    Kamran, M.; Kuhn, S.

    2010-08-01

    A new method is proposed for calculating the potential distribution Φ(z) in a one-dimensional quasi-neutral bounded plasma; Φ(z) is assumed to satisfy a quasi-neutrality condition (plasma equation) of the form n_i{Φ(z)} = n_e(Φ), where the electron density n_e is a given function of Φ and the ion density n_i is expressed in terms of trajectory integrals of the ion kinetic equation. While previous methods relied on formally solving a global integral equation (Riemann, Phys. Plasmas, vol. 13, 2006, paper no. 013503; Kos et al., Phys. Plasmas, vol. 16, 2009, paper no. 093503), the present method is characterized by piecewise analytic solution of the plasma equation in reasonably small intervals of z. As a first concrete application, Φ(z) is found analytically through order z⁴ near the center of a collisionless Tonks-Langmuir discharge with a cold-ion source.

  11. Methods of measurement signal acquisition from the rotational flow meter for frequency analysis

    Directory of Open Access Journals (Sweden)

    Świsulski Dariusz

    2017-01-01

    One of the simplest and most commonly used instruments for measuring the flow of homogeneous substances is the rotational flow meter. The main part of such a device is a rotor (vane or screw) rotating at a speed that is a function of the fluid or gas flow rate. A pulse signal with a frequency proportional to the speed of the rotor is obtained at the sensor output. For measurements in dynamic conditions, the variable interval between pulses prevents direct analysis of the measurement signal. Therefore, the authors of the article developed a method in which the measured value at each moment designated by a timing generator is determined from the last inter-pulse interval preceding that moment. For larger changes of the measured value at a predetermined time, the value can be determined by extrapolation of the two adjacent inter-pulse intervals, assuming a linear change in the flow. The proposed methods enable analyses that require constant spacing between measurements, allowing the dynamics of changes in the test flow to be examined, e.g., using a Fourier transform. To present the advantages of these methods, simulations of flow measurement were carried out with a DRH-1140 rotor flow meter from the company Kobold.
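    A rough sketch of the first approach (value taken from the last complete inter-pulse interval at each clock tick) is given below; the meter factor, sample times and function names are illustrative assumptions, and the linear-extrapolation variant and DRH-1140 specifics are not reproduced:

```python
import numpy as np

def resample_pulse_train(pulse_times, k, t_end, f_clock):
    """Convert irregular flow-meter pulse times into an evenly sampled flow signal.

    pulse_times : sorted array of pulse arrival times [s]
    k           : meter factor, flow = k / (inter-pulse interval)
    f_clock     : output sampling frequency of the timing generator [Hz]
    At each clock tick the flow is taken from the last complete inter-pulse
    interval preceding that tick.
    """
    t_uniform = np.arange(0.0, t_end, 1.0 / f_clock)
    intervals = np.diff(pulse_times)
    flow = np.full_like(t_uniform, np.nan)
    for i, t in enumerate(t_uniform):
        j = np.searchsorted(pulse_times, t) - 1   # index of the last pulse before t
        if j >= 1:
            flow[i] = k / intervals[j - 1]        # interval ending at that pulse
    return t_uniform, flow

# The uniformly spaced signal can then be analysed with a Fourier transform,
# e.g. spectrum = np.fft.rfft(flow[~np.isnan(flow)]) once the initial gaps are dropped.
```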

  12. Multiple Attribute Group Decision-Making Methods Based on Trapezoidal Fuzzy Two-Dimensional Linguistic Partitioned Bonferroni Mean Aggregation Operators.

    Science.gov (United States)

    Yin, Kedong; Yang, Benshuo; Li, Xuemei

    2018-01-24

    In this paper, we investigate multiple attribute group decision making (MAGDM) problems in which decision makers represent their evaluations of alternatives by trapezoidal fuzzy two-dimensional uncertain linguistic variables. To begin with, we introduce the definition, properties, expectation, and operational laws of trapezoidal fuzzy two-dimensional linguistic information. Then, to improve the accuracy of decision making in cases where there are interrelationships among the attributes, we analyze the partition Bonferroni mean (PBM) operator in the trapezoidal fuzzy two-dimensional variable environment and develop two operators: the trapezoidal fuzzy two-dimensional linguistic partitioned Bonferroni mean (TF2DLPBM) aggregation operator and the trapezoidal fuzzy two-dimensional linguistic weighted partitioned Bonferroni mean (TF2DLWPBM) aggregation operator. Furthermore, we develop a novel method to solve MAGDM problems based on the TF2DLWPBM aggregation operator. Finally, a practical example is presented to illustrate the effectiveness of this method and to analyze the impact of different parameters on the results of decision making.

  13. A demonstration of adjoint methods for multi-dimensional remote sensing of the atmosphere and surface

    Science.gov (United States)

    Martin, William G. K.; Hasekamp, Otto P.

    2018-01-01

    In previous work, we derived the adjoint method as a computationally efficient path to three-dimensional (3D) retrievals of clouds and aerosols. In this paper we will demonstrate the use of adjoint methods for retrieving two-dimensional (2D) fields of cloud extinction. The demonstration uses a new 2D radiative transfer solver (FSDOM). This radiation code was augmented with adjoint methods to allow efficient derivative calculations needed to retrieve cloud and surface properties from multi-angle reflectance measurements. The code was then used in three synthetic retrieval studies. Our retrieval algorithm adjusts the cloud extinction field and surface albedo to minimize the measurement misfit function with a gradient-based, quasi-Newton approach. At each step we compute the value of the misfit function and its gradient with two calls to the solver FSDOM. First we solve the forward radiative transfer equation to compute the residual misfit with measurements, and second we solve the adjoint radiative transfer equation to compute the gradient of the misfit function with respect to all unknowns. The synthetic retrieval studies verify that adjoint methods are scalable to retrieval problems with many measurements and unknowns. We can retrieve the vertically-integrated optical depth of moderately thick clouds as a function of the horizontal coordinate. It is also possible to retrieve the vertical profile of clouds that are separated by clear regions. The vertical profile retrievals improve for smaller cloud fractions. This leads to the conclusion that cloud edges actually increase the amount of information that is available for retrieving the vertical profile of clouds. However, to exploit this information one must retrieve the horizontally heterogeneous cloud properties with a 2D (or 3D) model. This prototype shows that adjoint methods can efficiently compute the gradient of the misfit function. This work paves the way for the application of similar methods to 3D remote
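    The general pattern (one forward solve for the residual and one adjoint solve for the gradient per evaluation, fed to a quasi-Newton optimizer) can be sketched as follows; since the FSDOM solver is not reproduced here, `forward` and `adjoint` are placeholder callables and the non-negativity bound is an illustrative assumption:

```python
import numpy as np
from scipy.optimize import minimize

def retrieve(x0, forward, adjoint, y_obs):
    """Generic adjoint-based retrieval loop.

    forward(x) -> simulated measurements for state vector x (e.g. extinction + albedo)
    adjoint(x, residual) -> gradient of 0.5 * ||residual||^2 with respect to x
    Each misfit evaluation costs one forward solve plus one adjoint solve.
    """
    def misfit_and_grad(x):
        residual = forward(x) - y_obs
        return 0.5 * np.dot(residual, residual), adjoint(x, residual)

    result = minimize(misfit_and_grad, x0, jac=True, method="L-BFGS-B",
                      bounds=[(0.0, None)] * len(x0))   # keep the unknowns non-negative
    return result.x
```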

  14. Application of Symmetry Adapted Function Method for Three-Dimensional Reconstruction of Octahedral Biological Macromolecules

    Directory of Open Access Journals (Sweden)

    Songjun Zeng

    2010-01-01

    A method for the three-dimensional (3D) reconstruction of macromolecule assemblies, namely the octahedral symmetry adapted functions (OSAF) method, is introduced in this paper, and a series of formulations for reconstruction by the OSAF method are derived. To verify the feasibility and advantages of the method, two octahedrally symmetric macromolecules, the heat shock protein DegP24 and red-cell L ferritin, were used as examples for reconstruction by the OSAF method. The simulation was designed as follows: 2000 randomly oriented projections of single particles with predefined Euler angles and centers of origin were generated, and then different levels of noise, i.e., signal-to-noise ratios (S/N) of 0.1, 0.5, and 0.8, were added. The structures reconstructed by the OSAF method were in good agreement with the standard models, and the relative errors of the reconstructed structures with respect to the standard structures were very small even for high noise levels. These facts show that the OSAF method is a feasible and efficient approach to reconstructing the structures of macromolecules and has the ability to suppress the influence of noise.

  15. A Novel Medical Freehand Sketch 3D Model Retrieval Method by Dimensionality Reduction and Feature Vector Transformation

    Directory of Open Access Journals (Sweden)

    Zhang Jing

    2016-01-01

    To assist physicians in quickly finding a required 3D model from a mass of medical models, we propose a novel retrieval method, called DRFVT, which combines the characteristics of dimensionality reduction (DR) and feature vector transformation (FVT). The DR method reduces the dimensionality of the feature vector; only the top M low-frequency Discrete Fourier Transform coefficients are retained. The FVT method transforms the original feature vector and generates a new feature vector to solve the problem of noise sensitivity. The experimental results demonstrate that the DRFVT method achieves more effective and efficient retrieval results than other proposed methods.
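    A minimal sketch of the DR step only (keeping the M lowest-frequency DFT coefficients of a shape descriptor and comparing models in that reduced space); the FVT step and the actual descriptor definition are not reproduced, and all names and the default M are illustrative:

```python
import numpy as np

def reduce_feature_vector(feature_vec, m):
    """Dimensionality reduction: keep only the M lowest-frequency DFT
    coefficients of a 1-D shape descriptor."""
    coeffs = np.fft.fft(np.asarray(feature_vec, dtype=float))
    return coeffs[:m]                      # complex low-frequency coefficients

def retrieval_distance(query_vec, model_vec, m=16):
    """Compare two descriptors in the reduced spectral domain."""
    return np.linalg.norm(reduce_feature_vector(query_vec, m) -
                          reduce_feature_vector(model_vec, m))
```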

  16. Development of spectral history methods for pin-by-pin core analysis method using three-dimensional direct response matrix

    International Nuclear Information System (INIS)

    Mitsuyasu, T.; Ishii, K.; Hino, T.; Aoyama, M.

    2009-01-01

    Spectral history methods for a pin-by-pin core analysis method using the three-dimensional direct response matrix have been developed. The direct response matrix is formalized by four sub-response matrices in order to respond to the core eigenvalue k and thus can be recomposed at each outer iteration in the core analysis. For core analysis, it is necessary to take into account the burn-up effect related to spectral history. One of the methods is to evaluate the nodal burn-up spectrum obtained using the outgoing neutron current. The other is to correct the fuel rod neutron production rates obtained by the pin-by-pin correction. These spectral history methods were tested in a heterogeneous system. The test results show that the neutron multiplication factor error can be reduced by half during burn-up, and the nodal neutron production rate errors can be reduced by 30% or more. The root-mean-square differences between the relative fuel rod neutron production rate distributions can be reduced to within 1.1% error. This means that these methods can accurately reflect the effects of intra- and inter-assembly heterogeneities during burn-up and can be used for core analysis. Core analysis with the DRM method was carried out for an ABWR quarter core and it was found that both the thermal power and coolant-flow distributions converged smoothly. (authors)

  17. Evaluation of MRI acquisition workflow with lean six sigma method: case study of liver and knee examinations.

    Science.gov (United States)

    Roth, Christopher J; Boll, Daniel T; Wall, Lisa K; Merkle, Elmar M

    2010-08-01

    The purpose of this investigation was to assess workflow for medical imaging studies, specifically comparing liver and knee MRI examinations by use of the Lean Six Sigma methodologic framework. The hypothesis tested was that the Lean Six Sigma framework can be used to quantify MRI workflow and to identify sources of inefficiency to target for sequence and protocol improvement. Audio-video interleave streams representing individual acquisitions were obtained with graphic user interface screen capture software in the examinations of 10 outpatients undergoing MRI of the liver and 10 outpatients undergoing MRI of the knee. With Lean Six Sigma methods, the audio-video streams were dissected into value-added time (true image data acquisition periods), business value-added time (time spent that provides no direct patient benefit but is requisite in the current system), and non-value-added time (scanner inactivity while awaiting manual input). For overall MRI table time, value-added time was 43.5% (range, 39.7-48.3%) of the time for liver examinations and 89.9% (range, 87.4-93.6%) for knee examinations. Business value-added time was 16.3% of the table time for the liver and 4.3% of the table time for the knee examinations. Non-value-added time was 40.2% of the overall table time for the liver and 5.8% for the knee examinations. Liver MRI examinations consume statistically significantly more non-value-added and business value-added times than do knee examinations, primarily because of respiratory command management and contrast administration. Workflow analyses and accepted inefficiency reduction frameworks can be applied with use of a graphic user interface screen capture program.

  18. Knowledge acquisition from natural language for expert systems based on classification problem-solving methods

    Science.gov (United States)

    Gomez, Fernando

    1989-01-01

    It is shown how certain kinds of domain independent expert systems based on classification problem-solving methods can be constructed directly from natural language descriptions by a human expert. The expert knowledge is not translated into production rules. Rather, it is mapped into conceptual structures which are integrated into long-term memory (LTM). The resulting system is one in which problem-solving, retrieval and memory organization are integrated processes. In other words, the same algorithm and knowledge representation structures are shared by these processes. As a result of this, the system can answer questions, solve problems or reorganize LTM.

  19. Experimental study on two-dimensional film flow with local measurement methods

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jin-Hwa, E-mail: evo03@snu.ac.kr [Nuclear Thermal-Hydraulic Engineering Laboratory, Seoul National University, Gwanak 599, Gwanak-ro, Gwanak-gu, Seoul 151-742 (Korea, Republic of); Korea Atomic Energy Research Institute, 989-111, Daedeok-daero, Yuseong-gu, Daejeon 305-600 (Korea, Republic of); Cho, Hyoung-Kyu [Nuclear Thermal-Hydraulic Engineering Laboratory, Seoul National University, Gwanak 599, Gwanak-ro, Gwanak-gu, Seoul 151-742 (Korea, Republic of); Kim, Seok [Korea Atomic Energy Research Institute, 989-111, Daedeok-daero, Yuseong-gu, Daejeon 305-600 (Korea, Republic of); Euh, Dong-Jin, E-mail: djeuh@kaeri.re.kr [Korea Atomic Energy Research Institute, 989-111, Daedeok-daero, Yuseong-gu, Daejeon 305-600 (Korea, Republic of); Park, Goon-Cherl [Nuclear Thermal-Hydraulic Engineering Laboratory, Seoul National University, Gwanak 599, Gwanak-ro, Gwanak-gu, Seoul 151-742 (Korea, Republic of)

    2015-12-01

    Highlights: • An experimental study on the two-dimensional film flow with lateral air injection was performed. • The ultrasonic thickness gauge was used to measure the local liquid film thickness. • The depth-averaged PIV (Particle Image Velocimetry) method was applied to measure the local liquid film velocity. • The uncertainty of the depth-averaged PIV was quantified with a validation experiment. • Characteristics of two-dimensional film flow were classified following the four different flow patterns. - Abstract: In an accident condition of a nuclear reactor, multidimensional two-phase flows may occur in the reactor vessel downcomer and reactor core. Therefore, those have been regarded as important issues for an advanced thermal-hydraulic safety analysis. In particular, the multi-dimensional two-phase flow in the upper downcomer during the reflood phase of large break loss of coolant accident appears with an interaction between a downward liquid and a transverse gas flow, which determines the bypass flow rate of the emergency core coolant and subsequently, the reflood coolant flow rate. At present, some thermal-hydraulic analysis codes incorporate multidimensional modules for the nuclear reactor safety analysis. However, their prediction capability for the two-phase cross flow in the upper downcomer has not been validated sufficiently against experimental data based on local measurements. For this reason, an experimental study was carried out for the two-phase cross flow to clarify the hydraulic phenomenon and provide local measurement data for the validation of the computational tools. The experiment was performed in a 1/10 scale unfolded downcomer of Advanced Power Reactor 1400 (APR1400). Pitot tubes, a depth-averaged PIV method and ultrasonic thickness gauge were applied for local measurement of the air velocity, the liquid film velocity and the liquid film thickness, respectively. The uncertainty of the depth-averaged PIV method for the averaged

  20. Experimental study on two-dimensional film flow with local measurement methods

    International Nuclear Information System (INIS)

    Yang, Jin-Hwa; Cho, Hyoung-Kyu; Kim, Seok; Euh, Dong-Jin; Park, Goon-Cherl

    2015-01-01

    Highlights: • An experimental study on the two-dimensional film flow with lateral air injection was performed. • The ultrasonic thickness gauge was used to measure the local liquid film thickness. • The depth-averaged PIV (Particle Image Velocimetry) method was applied to measure the local liquid film velocity. • The uncertainty of the depth-averaged PIV was quantified with a validation experiment. • Characteristics of two-dimensional film flow were classified following the four different flow patterns. - Abstract: In an accident condition of a nuclear reactor, multidimensional two-phase flows may occur in the reactor vessel downcomer and reactor core. Therefore, those have been regarded as important issues for an advanced thermal-hydraulic safety analysis. In particular, the multi-dimensional two-phase flow in the upper downcomer during the reflood phase of large break loss of coolant accident appears with an interaction between a downward liquid and a transverse gas flow, which determines the bypass flow rate of the emergency core coolant and subsequently, the reflood coolant flow rate. At present, some thermal-hydraulic analysis codes incorporate multidimensional modules for the nuclear reactor safety analysis. However, their prediction capability for the two-phase cross flow in the upper downcomer has not been validated sufficiently against experimental data based on local measurements. For this reason, an experimental study was carried out for the two-phase cross flow to clarify the hydraulic phenomenon and provide local measurement data for the validation of the computational tools. The experiment was performed in a 1/10 scale unfolded downcomer of Advanced Power Reactor 1400 (APR1400). Pitot tubes, a depth-averaged PIV method and ultrasonic thickness gauge were applied for local measurement of the air velocity, the liquid film velocity and the liquid film thickness, respectively. The uncertainty of the depth-averaged PIV method for the averaged

  1. Generating Lie Point Symmetry Groups of (2+1)-Dimensional Broer-Kaup Equation via a Simple Direct Method

    International Nuclear Information System (INIS)

    Ma Hongcai

    2005-01-01

    Using the (2+1)-dimensional Broer-Kaup equation as a simple example, a new direct method is developed to find symmetry groups and symmetry algebras, and then exact solutions, of nonlinear mathematical physics equations.

  2. Automatic registration method for multisensor datasets adopted for dimensional measurements on cutting tools

    International Nuclear Information System (INIS)

    Shaw, L; Mehari, F; Weckenmann, A; Ettl, S; Häusler, G

    2013-01-01

    Multisensor systems with optical 3D sensors are frequently employed to capture complete surface information by measuring workpieces from different views. During coarse and fine registration the resulting datasets are afterward transformed into one common coordinate system. Automatic fine registration methods are well established in dimensional metrology, whereas there is a deficit in automatic coarse registration methods. The advantage of a fully automatic registration procedure is twofold: it enables a fast and contact-free alignment and further a flexible application to datasets of any kind of optical 3D sensor. In this paper, an algorithm adapted for a robust automatic coarse registration is presented. The method was originally developed for the field of object reconstruction or localization. It is based on a segmentation of planes in the datasets to calculate the transformation parameters. The rotation is defined by the normals of three corresponding segmented planes of two overlapping datasets, while the translation is calculated via the intersection point of the segmented planes. First results have shown that the translation is strongly shape dependent: 3D data of objects with non-orthogonal planar flanks cannot be registered with the current method. In the novel supplement for the algorithm, the translation is additionally calculated via the distance between centroids of corresponding segmented planes, which results in more than one option for the transformation. A newly introduced measure considering the distance between the datasets after coarse registration evaluates the best possible transformation. Results of the robust automatic registration method are presented on the example of datasets taken from a cutting tool with a fringe-projection system and a focus-variation system. The successful application in dimensional metrology is proven with evaluations of shape parameters based on the registered datasets of a calibrated workpiece. (paper)
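    A rough sketch of the underlying geometry, assuming three corresponding segmented planes have already been identified in each dataset and are represented by their unit normals and patch centroids (the plane segmentation, correspondence search, and the evaluation measure described in the paper are omitted, and all names are illustrative):

```python
import numpy as np

def rotation_from_normals(normals_a, normals_b):
    """Least-squares rotation R with R @ n_a ~ n_b for corresponding unit
    plane normals (one normal per row of each 3x3 array), via SVD (Kabsch)."""
    h = normals_a.T @ normals_b
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against reflections
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T

def coarse_register(normals_a, centroids_a, normals_b, centroids_b):
    """Coarse registration from three segmented planes per dataset:
    rotation from the plane normals, translation from the plane centroids.
    Valid only when the corresponding planes cover comparable patches."""
    r = rotation_from_normals(normals_a, normals_b)
    t = centroids_b.mean(axis=0) - r @ centroids_a.mean(axis=0)
    return r, t   # maps dataset A into dataset B: p_b ~ r @ p_a + t
```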

  3. METHOD FOR OPTIMAL RESOLUTION OF MULTI-AIRCRAFT CONFLICTS IN THREE-DIMENSIONAL SPACE

    Directory of Open Access Journals (Sweden)

    Denys Vasyliev

    2017-03-01

    Purpose: The risk of critical proximities between several aircraft and the appearance of multi-aircraft conflicts increase under current conditions of high air traffic dynamics and density. A pressing problem is the development of methods for optimal multi-aircraft conflict resolution that provide the synthesis of conflict-free trajectories in three-dimensional space. Methods: A method for the optimal resolution of multi-aircraft conflicts using heading, speed and altitude change maneuvers has been developed. The optimality criteria are flight regularity, flight economy and the complexity of maneuvering. The method provides the sequential synthesis of a Pareto-optimal set of combinations of conflict-free flight trajectories using multi-objective dynamic programming, and the selection of the optimal combination using a convolution of the optimality criteria. Within the described method the following are defined: the procedure for determining the combinations of conflict-free aircraft states that define the combinations of Pareto-optimal trajectories, and the limitations on the discretization of the conflict resolution process for ensuring the absence of unobservable separation violations. Results: The analysis of the proposed method is performed using computer simulation, the results of which show that the synthesized combination of conflict-free trajectories ensures multi-aircraft conflict avoidance and complies with the defined optimality criteria. Discussion: The proposed method can be used for the development of new automated air traffic control systems, airborne collision avoidance systems, and intelligent air traffic control simulators, and for research activities.

  4. Physical and chemical parameters acquisition in situ, in deep clay. Development of sampling and testing methods

    International Nuclear Information System (INIS)

    Lajudie, A.; Coulon, H.; Geneste, P.

    1991-01-01

    Knowledge of deep formations for radioactive waste disposal requires field tests or bench-scale experiments on samples of the site material. In the case of clay massifs, the taking of cores and the sampling of these cores are particularly difficult. The most suitable materials and techniques were selected from a study of clay coring and conservation methods. These were used for a series of core samples taken at Mol in Belgium. Subsequently, permeability measurements were carried out in the laboratory on samples from vertical drillings and compared with in situ measurements. The latter were made in horizontal drillings from the shaft excavation of the underground facility HADES at Mol. There is good overall agreement between the results of the two types of measurements. 25 figs.; 4 tabs.; 12 refs.; 16 photos

  5. New exact solutions of (2 + 1)-dimensional Gardner equation via the new sine-Gordon equation expansion method

    International Nuclear Information System (INIS)

    Chen Yong; Yan Zhenya

    2005-01-01

    In this paper, the (2 + 1)-dimensional Gardner equation is investigated using a sine-Gordon equation expansion method, which was presented via a generalized sine-Gordon reduction equation and a new transformation. As a consequence, it is shown that the method is powerful for obtaining many types of new doubly periodic solutions of the (2 + 1)-dimensional Gardner equation. In particular, solitary wave solutions are also given as simple limits of the doubly periodic solutions.

  6. A Feature Subset Selection Method Based On High-Dimensional Mutual Information

    Directory of Open Access Journals (Sweden)

    Chee Keong Kwoh

    2011-04-01

    Feature selection is an important step in building accurate classifiers and provides a better understanding of the data sets. In this paper, we propose a feature subset selection method based on high-dimensional mutual information. We also propose to use the entropy of the class attribute as a criterion to determine the appropriate subset of features when building classifiers. We prove that if the mutual information between a feature set X and the class attribute Y equals the entropy of Y, then X is a Markov blanket of Y. We show that in some cases it is infeasible to approximate the high-dimensional mutual information with algebraic combinations of pairwise mutual information in any form. In addition, exhaustive searches over all combinations of features are a prerequisite for finding the optimal feature subsets for classifying these kinds of data sets. We show that our approach outperforms existing filter feature subset selection methods for most of the 24 selected benchmark data sets.
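    The entropy-based stopping criterion I(X_S; Y) = H(Y) can be sketched for discrete data as follows; the greedy forward loop below is only an illustration of the criterion, not the exhaustive search the paper argues is needed for optimality, and all function names are illustrative:

```python
import numpy as np
from collections import Counter

def entropy_rows(data):
    """Shannon entropy (bits) of the joint distribution of the rows of a 2-D
    array of discrete values (one row per sample)."""
    counts = np.array(list(Counter(map(tuple, data)).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_information(x_subset, y):
    """I(X_S; Y) = H(X_S) + H(Y) - H(X_S, Y) for discrete data."""
    y2 = y.reshape(-1, 1)
    return entropy_rows(x_subset) + entropy_rows(y2) - entropy_rows(np.hstack([x_subset, y2]))

def greedy_select(x, y, tol=1e-9):
    """Add the feature that most increases I(X_S; Y); stop once it reaches
    H(Y), i.e. once X_S behaves as a Markov blanket of Y."""
    n_features = x.shape[1]
    selected, target = [], entropy_rows(y.reshape(-1, 1))
    while len(selected) < n_features:
        gains = [(mutual_information(x[:, selected + [j]], y), j)
                 for j in range(n_features) if j not in selected]
        best_mi, best_j = max(gains)
        selected.append(best_j)
        if best_mi >= target - tol:
            break
    return selected
```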

  7. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    Science.gov (United States)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  8. An Improved Ensemble Learning Method for Classifying High-Dimensional and Imbalanced Biomedicine Data.

    Science.gov (United States)

    Yu, Hualong; Ni, Jun

    2014-01-01

    Training classifiers on skewed data is a technically challenging task, and it becomes more difficult when the data are also high-dimensional. Skewed data of this kind often appear in the biomedical field. In this study, we address the problem by combining the asymmetric bagging ensemble classifier (asBagging) presented in previous work with an improved random subspace (RS) generation strategy called feature subspace (FSS). Specifically, FSS is a novel method to promote the balance between accuracy and diversity of the base classifiers in asBagging. In view of the strong generalization capability of the support vector machine (SVM), we adopt it as the base classifier. Extensive experiments on four benchmark biomedicine data sets indicate that the proposed ensemble learning method outperforms many baseline approaches in terms of accuracy, F-measure, G-mean, and AUC, and it can therefore be regarded as an effective and efficient tool for dealing with high-dimensional and imbalanced biomedical data.
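
    A minimal sketch of this style of ensemble, under assumptions not taken from the paper: undersampled majority-class bags, a random feature subspace per base learner, and an RBF-kernel SVM, all trained on synthetic imbalanced data. The paper's FSS subspace strategy is not reproduced; a plain random subspace stands in here.

```python
# Minimal sketch of asymmetric bagging with random feature subspaces and SVM
# base classifiers. This is a generic illustration, not the paper's exact
# FSS strategy; data and parameter choices are assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fit_asymmetric_bagging(X, y, n_estimators=15, subspace_frac=0.5):
    """Each round: keep all minority samples (label 1), undersample the
    majority class (label 0) to the same size, draw a random feature
    subspace, and fit an SVM."""
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    n_feat = max(1, int(subspace_frac * X.shape[1]))
    ensemble = []
    for _ in range(n_estimators):
        maj_sample = rng.choice(majority, size=len(minority), replace=False)
        rows = np.concatenate([minority, maj_sample])
        cols = rng.choice(X.shape[1], size=n_feat, replace=False)
        clf = SVC(kernel="rbf", gamma="scale").fit(X[np.ix_(rows, cols)], y[rows])
        ensemble.append((cols, clf))
    return ensemble

def predict_vote(ensemble, X):
    votes = np.array([clf.predict(X[:, cols]) for cols, clf in ensemble])
    return (votes.mean(axis=0) >= 0.5).astype(int)

# Synthetic imbalanced, high-dimensional toy data (illustrative only).
X = rng.normal(size=(300, 50))
y = (rng.random(300) < 0.1).astype(int)          # ~10% minority class
X[y == 1, :5] += 1.5                              # weak signal in 5 features
model = fit_asymmetric_bagging(X, y)
print(predict_vote(model, X[:10]))
```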

  9. Hydrazine-hydrothermal method to synthesize three-dimensional chalcogenide framework for photocatalytic hydrogen generation

    International Nuclear Information System (INIS)

    Liu Yi; Kanhere, Pushkar D.; Wong, Chui Ling; Tian Yuefeng; Feng Yuhua; Boey, Freddy; Wu, Tom; Chen Hongyu; White, Tim J.; Chen Zhong; Zhang Qichun

    2010-01-01

    A novel chalcogenide, [Mn2Sb2S5(N2H4)3] (1), has been synthesized by the hydrazine-hydrothermal method. An X-ray crystallography study reveals that the new compound 1 crystallizes in space group P1-bar (no. 2) of the triclinic system. The structure features an open, neutral, three-dimensional framework in which two-dimensional mesh-like inorganic layers are bridged by intra- and inter-layer hydrazine ligands. Both the Mn1 and Mn2 sites adopt distorted octahedral coordination, while the Sb1 and Sb2 sites exhibit two different coordination geometries: the Sb1 site is coordinated with three S atoms to form a SbS3 trigonal-pyramidal geometry, and the Sb2 site adopts a SbS4 trigonal-bipyramidal coordination geometry. The compound has an optical band gap of approximately 2.09 eV, deduced from the diffuse reflectance spectrum, and displays photocatalytic behavior under visible-light irradiation. Magnetic susceptibility measurements show that compound 1 obeys the Curie-Weiss law in the range of 50-300 K. -- Graphical abstract: A novel chalcogenide, [Mn2Sb2S5(N2H4)3] (1), synthesized by the hydrazine-hydrothermal method, has a band gap of approximately 2.09 eV and displays photocatalytic behavior under visible-light irradiation.

  10. Physical properties of root cementum: Part I. A new method for 3-dimensional evaluation.

    Science.gov (United States)

    Malek, S; Darendeliler, M A; Swain, M V

    2001-08-01

    Cementum is a nonuniform connective tissue that covers the roots of human teeth. Investigation of the physical properties of cementum may help in understanding or evaluating any possible connection to root resorption. A variety of engineering tests are available to investigate these properties. However, the thickness of the cementum layer varies, and this limits the applicability of these techniques in determining the physical properties of cementum. Hardness testing with Knoop and Vickers indentations overcame some of these limitations, but it prevented retrieval and retesting of the sample, and testing was therefore restricted to one area or section of the tooth. Another limiting factor of the existing techniques was the risk of artifacts related to the embedding material, such as acrylic. A new method for investigating the physical properties of human premolar cementum was developed to obtain a 3-dimensional map of these properties with the Ultra Micro Indentation System (UMIS-2000; Commonwealth Scientific and Industrial Research Organization, Campbell, Australia). The UMIS-2000 is a nano-indentation instrument for investigating the properties of the near-surface region of materials. Premolars were harvested from orthodontic patients requiring extractions and then mounted on a newly designed surveyor that allowed sample retrieval and 3-dimensional rotation. This novel method enabled the quantitative testing of root-surface cementum on all four root surfaces, extending from the apex to the cementoenamel junction, at 60 different sites.

  11. A Mixed-Methods Approach to Investigating First- and Second-Language Incidental Vocabulary Acquisition through the Reading of Fiction

    Science.gov (United States)

    Reynolds, Barry Lee

    2015-01-01

    Adult English-L1 (n = 20) and English-L2 (n = 32) experimental groups were given a novel containing nonce words to read within two weeks to investigate whether the reading of fiction can induce a state of incidental vocabulary acquisition. After reading, an unexpected meaning recall translation assessment measuring acquisition of 49 target nonce…

  12. Assessing the accuracy and reliability of ultrasonographic three-dimensional parathyroid volume measurement in a patient with secondary hyperparathyroidism: a comparison with the two-dimensional conventional method

    Directory of Open Access Journals (Sweden)

    Sung-Hye You

    2017-01-01

    Full Text Available Purpose: The purpose of this study was to investigate the accuracy and reliability of the semi-automated ultrasonographic volume measurement tool, virtual organ computer-aided analysis (VOCAL), for measuring the volume of parathyroid glands. Methods: Volume measurements for 40 parathyroid glands were performed in patients with secondary hyperparathyroidism caused by chronic renal failure. The volume of the parathyroid glands was measured twice by experienced radiologists by two-dimensional (2D) and three-dimensional (3D) methods using conventional sonograms and the VOCAL with 30° angle increments before parathyroidectomy. The specimen volume was also measured postoperatively. Intraclass correlation coefficients (ICCs) and the absolute percentage error were used for estimating the reproducibility and accuracy of the two different methods. Results: The ICC value between two measurements of the 2D method and the 3D method was 0.956 and 0.999, respectively. The mean absolute percentage error of the 2D method and the 3D VOCAL technique was 29.56% and 5.78%, respectively. For accuracy and reliability, the plots of the 3D method showed a more compact distribution than those of the 2D method on the Bland-Altman graph. Conclusion: The rotational VOCAL method for measuring the parathyroid gland is more accurate and reliable than the conventional 2D measurement. This VOCAL method could be used as a more reliable follow-up imaging modality in a patient with hyperparathyroidism.
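
    A small computational illustration of the two agreement statistics used in this record, with made-up measurement values; the exact ICC variant (two-way, absolute agreement, single measurement) is an assumption for the sketch, not a detail stated in the abstract.

```python
# Illustration of the agreement statistics used in the record above:
# a two-way ICC (Shrout & Fleiss ICC(2,1), an assumed variant) for repeated
# volume measurements, and the mean absolute percentage error against
# specimen volumes. All numbers are made up.
import numpy as np

def icc_2_1(Y):
    """Two-way random effects, absolute agreement, single measurement."""
    n, k = Y.shape
    grand = Y.mean()
    row_means, col_means = Y.mean(axis=1), Y.mean(axis=0)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)          # subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)          # measurements
    resid = Y - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))                # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two repeated 3D volume measurements (mL) for five glands, plus the
# postoperative specimen volumes used as the reference.
measurements = np.array([[0.52, 0.55], [1.10, 1.08], [0.80, 0.83],
                         [0.33, 0.31], [1.95, 1.99]])
specimen = np.array([0.54, 1.12, 0.81, 0.30, 2.02])

print("ICC(2,1):", round(icc_2_1(measurements), 3))
mape = np.mean(np.abs(measurements.mean(axis=1) - specimen) / specimen) * 100
print("Mean absolute percentage error (%):", round(mape, 2))
```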

  13. Estimation Methods for Infinite-Dimensional Systems Applied to the Hemodynamic Response in the Brain

    KAUST Repository

    Belkhatir, Zehor

    2018-05-01

    Infinite-Dimensional Systems (IDSs) which have been made possible by recent advances in mathematical and computational tools can be used to model complex real phenomena. However, due to physical, economic, or stringent non-invasive constraints on real systems, the underlying characteristics for mathematical models in general (and IDSs in particular) are often missing or subject to uncertainty. Therefore, developing efficient estimation techniques to extract missing pieces of information from available measurements is essential. The human brain is an example of IDSs with severe constraints on information collection from controlled experiments and invasive sensors. Investigating the intriguing modeling potential of the brain is, in fact, the main motivation for this work. Here, we will characterize the hemodynamic behavior of the brain using functional magnetic resonance imaging data. In this regard, we propose efficient estimation methods for two classes of IDSs, namely Partial Differential Equations (PDEs) and Fractional Differential Equations (FDEs). This work is divided into two parts. The first part addresses the joint estimation problem of the state, parameters, and input for a coupled second-order hyperbolic PDE and an infinite-dimensional ordinary differential equation using sampled-in-space measurements. Two estimation techniques are proposed: a Kalman-based algorithm that relies on a reduced finite-dimensional model of the IDS, and an infinite-dimensional adaptive estimator whose convergence proof is based on the Lyapunov approach. We study and discuss the identifiability of the unknown variables for both cases. The second part contributes to the development of estimation methods for FDEs where major challenges arise in estimating fractional differentiation orders and non-smooth pointwise inputs. First, we propose a fractional high-order sliding mode observer to jointly estimate the pseudo-state and input of commensurate FDEs. Second, we propose a

  14. Relation between the national handbook of recommended methods for water data acquisition and ASTM standards

    Science.gov (United States)

    Glysson, G. Douglas; Skinner, John V.

    1991-01-01

    In the late 1950s, intense demands for water and growing concerns about declines in the quality of water generated the need for more water-resources data. About thirty Federal agencies, hundreds of State, county and local agencies, and many private organizations had been collecting water data. However, because of differences in procedures and equipment, many of the data bases were incompatible. In 1964, as a step toward establishing more uniformity, the Bureau of the Budget (now the Office of Management and Budget, OMB) issued 'Circular A-67', which presented guidelines for collecting water data and also served as a catalyst for creating the Office of Water Data Coordination (OWDC) within the U.S. Geological Survey. This paper discusses past, present, and future aspects of the relation between methods in the National Handbook and standards published by ASTM (American Society for Testing and Materials) Committee D-19 on Water and its Subcommittee D-19.07 on Sediment, Geomorphology, and Open Channel Flow. The discussion also covers historical aspects of standards-development work jointly conducted by OWDC and ASTM.

  15. Nonstop lossless data acquisition and storing method for plasma motion images

    International Nuclear Information System (INIS)

    Nakanishi, Hideya; Ohsuna, Masaki; Kojima, Mamoru; Nonomura, Miki; Nagayama, Yoshio; Kawahata, Kazuo; Imazu, Setsuo; Okumura, Haruhiko

    2007-01-01

    Plasma diagnostic data analysis often requires the original raw data as they are, in other words, at the same frame rate and resolution as the CCD camera sensor. As a non-interlaced VGA camera typically generates a video stream of over 70 MB/s, usual frame-grabber cards apply a lossy compression encoder, such as MPEG-1/-2 or MPEG-4, to drastically lessen the bit rate. In this study, a new approach has been successfully achieved that makes it possible to acquire and store such a wideband video stream without any quality reduction. Simultaneously, real-time video streaming is possible at the original frame rate. To minimise the exclusive access time of each data store, a directory structure is adopted that holds every frame as a separate file instead of one long consecutive file. The popular 'zip' archive method improves the portability of the data files; however, JPEG-LS image compression is applied inside it, replacing the intrinsic deflate/inflate algorithm, which performs poorly on image data. (author)
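
    For context, a minimal sketch of per-frame lossless storage inside a zip container. The record describes JPEG-LS as the internal codec; since a JPEG-LS encoder is not in the Python standard library, PNG (also lossless) stands in here purely for illustration, and the shot/frame naming scheme is an assumption.

```python
# Minimal sketch of per-frame, lossless storage in a zip container.
# PNG replaces JPEG-LS only because the latter is not in the standard
# library; shot/frame naming is an assumption, not the system's layout.
import io
import zipfile
import numpy as np
from PIL import Image

def store_shot(path, frames):
    """Store each frame as its own losslessly-compressed entry, so a single
    frame can later be read without touching the rest of the stream."""
    with zipfile.ZipFile(path, "w", compression=zipfile.ZIP_STORED) as zf:
        for i, frame in enumerate(frames):
            buf = io.BytesIO()
            Image.fromarray(frame).save(buf, format="PNG")   # lossless stand-in
            zf.writestr(f"shot/frame{i:06d}.png", buf.getvalue())

def load_frame(path, index):
    with zipfile.ZipFile(path) as zf:
        with zf.open(f"shot/frame{index:06d}.png") as f:
            return np.asarray(Image.open(f))

frames = [np.random.randint(0, 256, (480, 640), dtype=np.uint8) for _ in range(5)]
store_shot("shot0001.zip", frames)
assert np.array_equal(load_frame("shot0001.zip", 3), frames[3])   # bit-exact
```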

  16. A three-dimensional polarization domain retrieval method from electron diffraction data

    International Nuclear Information System (INIS)

    Pennington, Robert S.; Koch, Christoph T.

    2015-01-01

    We present an algorithm for retrieving three-dimensional domains of picometer-scale shifts in atomic positions from electron diffraction data, and apply it to simulations of ferroelectric polarization in BaTiO3. Our algorithm successfully and correctly retrieves polarization domains in which the Ti atom positions differ by less than 3 pm (0.4% of the unit cell diagonal distance) with 5 and 10 nm depth resolution along the beam direction, and we also retrieve unit cell strain, corresponding to tetragonal-to-cubic unit cell distortions, for 10 nm domains. Experimental applicability is also discussed. - Highlights: • We show a retrieval method for ferroelectric polarization from TEM diffraction data. • Simulated strain and polarization variations along the beam direction are retrieved. • This method can be used for 3D strain and polarization mapping without specimen tilt

  17. Numerical method for three dimensional steady-state two-phase flow calculations

    International Nuclear Information System (INIS)

    Raymond, P.; Toumi, I.

    1992-01-01

    This paper presents the numerical scheme which was developed for the FLICA-4 computer code to calculate three-dimensional steady-state two-phase flows. This computer code is devoted to steady-state and transient thermal-hydraulics analysis of nuclear reactor cores. The first section briefly describes the FLICA-4 flow modelling. Then, in order to introduce the numerical method for steady-state computations, some details are given about the implicit numerical scheme based upon an approximate Riemann solver which was developed for the calculation of flow transients. The third section deals with the numerical method for steady-state computations, which is derived from this previous general scheme and its optimization. We give some numerical results for steady-state calculations and comparisons of the required CPU time and memory for various meshes and linear-system solvers

  18. Energy method for multi-dimensional balance laws with non-local dissipation

    KAUST Repository

    Duan, Renjun

    2010-06-01

    In this paper, we are concerned with a class of multi-dimensional balance laws with a non-local dissipative source which arise as simplified models for the hydrodynamics of radiating gases. First, we introduce the energy method in the setting of smooth perturbations and study the stability of constant states. Precisely, we use Fourier space analysis to quantify the energy dissipation rate and recover the optimal time-decay estimates for perturbed solutions via an interpolation inequality in Fourier space. As an application, the developed energy method is used to prove stability of smooth planar waves in all dimensions n ≥ 2, and also to show existence and stability of time-periodic solutions in the presence of a time-periodic source. Optimal rates of convergence of solutions towards the planar waves or time-periodic states are also shown for initial perturbations in L1. © 2009 Elsevier Masson SAS.

  19. A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    International Nuclear Information System (INIS)

    Liu Jizhi; Chen Xingbi

    2009-01-01

    A new quasi-three-dimensional (quasi-3D) numeric simulation method for a high-voltage level-shifting circuit structure is proposed. The performances of the 3D structure are analyzed by combining some 2D device structures; the 2D devices are in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases with advantages such as saving computing time, making no demands on the high-end computer terminals, and being easy to operate. (semiconductor integrated circuits)

  20. A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    Energy Technology Data Exchange (ETDEWEB)

    Liu Jizhi; Chen Xingbi, E-mail: jzhliu@uestc.edu.c [State Key Laboratory of Electronic Thin Films and Integrated Devices, University of Electronic Science and Technology of China, Chengdu 610054 (China)

    2009-12-15

    A new quasi-three-dimensional (quasi-3D) numeric simulation method for a high-voltage level-shifting circuit structure is proposed. The performances of the 3D structure are analyzed by combining some 2D device structures; the 2D devices are in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases with advantages such as saving computing time, making no demands on the high-end computer terminals, and being easy to operate. (semiconductor integrated circuits)

  1. Viscosity of confined two-dimensional Yukawa liquids: A nonequilibrium method

    International Nuclear Information System (INIS)

    Landmann, S.; Kählert, H.; Thomsen, H.; Bonitz, M.

    2015-01-01

    We present a nonequilibrium method that allows one to determine the viscosity of two-dimensional dust clusters in an isotropic confinement. By applying a tangential external force to the outer parts of the cluster (e.g., with lasers), a sheared velocity profile is created. The decay of the angular velocity towards the center of the confinement potential is determined by a balance between internal (viscosity) and external friction (neutral gas damping). The viscosity can then be calculated from a fit of the measured velocity profile to a solution of the Navier-Stokes equation. Langevin dynamics simulations are used to demonstrate the feasibility of the method. We find good agreement of the measured viscosity with previous results for macroscopic Yukawa plasmas

  2. Application of synthesis methods to two-dimensional fast reactor transient study

    International Nuclear Information System (INIS)

    Izutsu, Sadayuki; Hirakawa, Naohiro

    1978-01-01

    Space-time synthesis and time synthesis codes were developed and applied to the space-dependent kinetics benchmark problem of a two-dimensional fast reactor model, and it was found that both methods are accurate and economical for fast reactor kinetics studies. A comparison between space-time synthesis and time synthesis was made. Also, for space-time synthesis, the influence of the number of trial functions on the error and on the computing time, and the effect of degeneration of the expansion coefficients, are investigated. The matrix factorization method is applied to the inversion of the matrix equation derived from the synthesis equation, and it is indicated that, by the use of this scheme, the space-dependent kinetics problem of a fast reactor can be solved efficiently by space-time synthesis. (auth.)

  3. Method for the determination of the three-dimensional structure of ultrashort relativistic electron bunches

    Energy Technology Data Exchange (ETDEWEB)

    Geloni, Gianluca; Ilinski, Petr; Saldin, Evgeni; Schneidmiller, Evgeni; Yurkov, Mikhail

    2009-05-15

    We describe a novel technique to characterize ultrashort electron bunches in X-ray Free-Electron Lasers. Namely, we propose to use coherent Optical Transition Radiation to measure three-dimensional (3D) electron density distributions. Our method relies on the combination of two known diagnostic setups, an Optical Replica Synthesizer (ORS) and an Optical Transition Radiation (OTR) imager. Electron bunches are modulated at optical wavelengths in the ORS setup. When these electron bunches pass through a metal foil target, coherent radiation pulses with tens of MW of power are generated. It is thereafter possible to exploit the advantages of coherent imaging techniques, such as direct imaging, diffractive imaging, Fourier holography and their combinations. The proposed method opens up the possibility of real-time, wavelength-limited, single-shot 3D imaging of an ultrashort electron bunch. (orig.)

  4. Three-Dimensional Navier-Stokes Calculations Using the Modified Space-Time CESE Method

    Science.gov (United States)

    Chang, Chau-lyan

    2007-01-01

    The space-time conservation element solution element (CESE) method is modified to address the robustness issues of high-aspect-ratio, viscous, near-wall meshes. In this new approach, the dependent variable gradients are evaluated using element edges and the corresponding neighboring solution elements while keeping the original flux integration procedure intact. As such, the excellent flux conservation property is retained and the new edge-based gradients evaluation significantly improves the robustness for high-aspect ratio meshes frequently encountered in three-dimensional, Navier-Stokes calculations. The order of accuracy of the proposed method is demonstrated for oblique acoustic wave propagation, shock-wave interaction, and hypersonic flows over a blunt body. The confirmed second-order convergence along with the enhanced robustness in handling hypersonic blunt body flow calculations makes the proposed approach a very competitive CFD framework for 3D Navier-Stokes simulations.

  5. Energy method for multi-dimensional balance laws with non-local dissipation

    KAUST Repository

    Duan, Renjun; Fellner, Klemens; Zhu, Changjiang

    2010-01-01

    In this paper, we are concerned with a class of multi-dimensional balance laws with a non-local dissipative source which arise as simplified models for the hydrodynamics of radiating gases. First, we introduce the energy method in the setting of smooth perturbations and study the stability of constant states. Precisely, we use Fourier space analysis to quantify the energy dissipation rate and recover the optimal time-decay estimates for perturbed solutions via an interpolation inequality in Fourier space. As an application, the developed energy method is used to prove stability of smooth planar waves in all dimensions n ≥ 2, and also to show existence and stability of time-periodic solutions in the presence of a time-periodic source. Optimal rates of convergence of solutions towards the planar waves or time-periodic states are also shown for initial perturbations in L1. © 2009 Elsevier Masson SAS.

  6. Three-dimensional multiple reciprocity boundary element method for one-group neutron diffusion eigenvalue computations

    International Nuclear Information System (INIS)

    Itagaki, Masafumi; Sahashi, Naoki.

    1996-01-01

    The multiple reciprocity method (MRM) in conjunction with the boundary element method has been employed to solve one-group eigenvalue problems described by the three-dimensional (3-D) neutron diffusion equation. The domain integral related to the fission source is transformed into a series of boundary-only integrals, with the aid of the higher order fundamental solutions based on the spherical and the modified spherical Bessel functions. Since each degree of the higher order fundamental solutions in the 3-D cases has a singularity of order (1/r), the above series of boundary integrals requires additional terms which do not appear in the 2-D MRM formulation. The critical eigenvalue itself can be also described using only boundary integrals. Test calculations show that Wielandt's spectral shift technique guarantees rapid and stable convergence of 3-D MRM computations. (author)

  7. Three-dimensional viscous-inviscid coupling method for wind turbine computations

    DEFF Research Database (Denmark)

    Ramos García, Néstor; Sørensen, Jens Nørkær; Shen, Wen Zhong

    2016-01-01

    In this paper, a computational model for predicting the aerodynamic behavior of wind turbine wakes and blades subjected to unsteady motions and viscous effects is presented. The model is based on a three-dimensional panel method using a surface distribution of quadrilateral sources and doublets, which is coupled to a viscous boundary layer solver. Unlike Navier-Stokes codes that need to solve the entire flow domain, the panel method solves the flow around a complex geometry by distributing singularity elements on the body surface, obtaining a faster solution and making this type of code suitable for the design of wind turbines. A free-wake model has been employed to simulate the wake behind a wind turbine by using vortex filaments that carry the vorticity shed by the trailing edge of the blades. Viscous and rotational effects inside the boundary layer are taken into account via ...

  8. A mixed method Poisson solver for three-dimensional self-gravitating astrophysical fluid dynamical systems

    Science.gov (United States)

    Duncan, Comer; Jones, Jim

    1993-01-01

    A key ingredient in the simulation of self-gravitating astrophysical fluid dynamical systems is the gravitational potential and its gradient. This paper focuses on the development of a mixed method multigrid solver of the Poisson equation formulated so that both the potential and the Cartesian components of its gradient are self-consistently and accurately generated. The method achieves this goal by formulating the problem as a system of four equations for the gravitational potential and the three Cartesian components of the gradient and solves them using a distributed relaxation technique combined with conventional full multigrid V-cycles. The method is described, some tests are presented, and the accuracy of the method is assessed. We also describe how the method has been incorporated into our three-dimensional hydrodynamics code and give an example of an application to the collision of two stars. We end with some remarks about the future developments of the method and some of the applications in which it will be used in astrophysics.

  9. Multi-dimensional Analysis Method of Hydrogen Combustion in the Containment of a Nuclear Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jongtae; Hong, Seongwan [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Gun Hong [Kyungwon E and C Co., Seongnam (Korea, Republic of)

    2014-05-15

    The most severe case is the occurrence of a detonation, which induces a pressure load on the containment wall a few times greater than that of a deflagration flame. The occurrence of a containment-wide global detonation is prohibited by national regulation. The compartments located in the flow path, such as the steam generator compartments, the annular compartment, and the dome region, are likely to contain highly concentrated hydrogen. If the hydrogen concentration in every compartment is found to be far below the detonation criterion during an accident progression, the occurrence of a detonative explosion in a compartment can be excluded. If it is not, it is necessary to evaluate the characteristics of flame acceleration in the containment. The possibility of a flame transition from deflagration to detonation (DDT) can be evaluated from the calculated hydrogen distribution in a compartment by using the sigma-lambda criteria. However, this method can give a very conservative result because the geometric characteristics of a real compartment are not well considered. In order to evaluate the containment integrity against the threat of a hydrogen explosion, it is necessary to establish an integrated evaluation system that includes both lumped-parameter and detailed analysis methods. In this study, a method for the multi-dimensional analysis of hydrogen combustion is proposed to mechanistically evaluate the flame acceleration characteristics, including geometric effects. The geometry of the containment is modeled three-dimensionally using a CAD tool. To resolve the propagating flame front, an adaptive mesh refinement method is coupled with a combustion analysis solver.
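
    A hedged sketch of the sigma-lambda screening idea mentioned in the record above. The threshold values used (a critical expansion ratio of roughly 3.75 for room-temperature hydrogen-air mixtures and the L > 7λ rule for DDT) are commonly quoted screening values, stated here only for illustration; the paper's exact criteria and correlations are not reproduced.

```python
# Hedged sketch of sigma-lambda screening for flame acceleration and DDT.
# Threshold values are commonly quoted screening numbers, used only for
# illustration; they are assumptions, not the paper's criteria.

def flame_acceleration_possible(sigma, sigma_critical=3.75):
    """sigma: expansion ratio (unburned/burned gas density) of the mixture."""
    return sigma > sigma_critical

def ddt_possible(characteristic_size_m, detonation_cell_size_m):
    """DDT screening: compartment size must exceed ~7 detonation cell widths."""
    return characteristic_size_m > 7.0 * detonation_cell_size_m

# Illustrative values for one compartment (not from the paper).
sigma = 4.2      # expansion ratio from the local H2/air/steam mixture
L = 6.0          # characteristic compartment size [m]
lam = 0.5        # detonation cell size for the local mixture [m]
if flame_acceleration_possible(sigma) and ddt_possible(L, lam):
    print("Compartment cannot be screened out; detailed CFD analysis is needed")
```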

  10. THE NEW THREE-DIMENSIONAL VISUALIZATION METHOD OF HERITAGE SITES BY LIDAR DATA

    Directory of Open Access Journals (Sweden)

    N. Fujii

    2012-07-01

    Full Text Available We introduce a new visualization method for three-dimensional data acquired by laser scanning from a helicopter that expresses the detailed landscape with the "Red Relief Image Map" (RRIM) and a 3D viewer. The RRIM and 3D-viewer method effectively represents 3D topographic information through a two-dimensional medium alone, without any additional devices or stereopsis ability on the part of the audience, and shows an appropriate form for every feature in the site. The chapters present what laser scanning from a helicopter is and show some examples of mounded tombs visualized with the RRIM and the 3D viewer. Because this visualization technique includes detailed topographic information and geographic coordinates, it can be linked directly to CAD and GIS systems; the LiDAR data can therefore easily produce a contour line, a cross-section, and a bird's-eye view at any place, as well as measure the height of trees. This is different from other 3D topographic images with a shadow effect: vegetation on the site is no longer an obstacle to obtaining detailed topographic information. In Japan, this method is therefore useful for huge mounded tombs thickly covered with trees, especially "Ryo-bo" (imperial tombs), which are administered by the Imperial Household Agency and which common people cannot enter. A cluster of small mounded tombs extending over a vast area, called "Gunshufun", is also shown effectively, locating each mounded tomb. This method is suitable for understanding the structure of sites in any wide-spread archaeological field. Moreover, in heritage management it is important that these data provide precise information on the land surface so that the present situation of the heritage can be understood. Detailed topographic information from LiDAR, the Red Relief Image Map, and the 3D viewer will open a new gate for the management of cultural heritage sites in the future.
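
    A simplified sketch, not the authors' exact RRIM formulation: the published technique combines topographic slope (mapped to red saturation) with openness (mapped to brightness). The sketch below approximates openness with local relief against a smoothed DEM, an assumption made only to keep the example short.

```python
# Simplified red-relief-style rendering of a DEM: slope drives the red
# saturation, and local relief (a stand-in for openness, an assumption of
# this sketch) drives brightness. Not the authors' exact RRIM algorithm.
import numpy as np
from scipy.ndimage import uniform_filter

def red_relief(dem, cell_size=1.0, relief_window=15):
    dzdy, dzdx = np.gradient(dem, cell_size)
    slope = np.arctan(np.hypot(dzdx, dzdy))                  # radians
    relief = dem - uniform_filter(dem, size=relief_window)   # ridge/valley proxy
    sat = slope / slope.max() if slope.max() > 0 else slope  # 0..1 redness
    val = np.clip(0.5 + relief / (2 * np.abs(relief).max() + 1e-9), 0, 1)
    rgb = np.empty(dem.shape + (3,))
    rgb[..., 0] = val                                         # red stays bright
    rgb[..., 1] = val * (1 - sat)                             # green/blue fall
    rgb[..., 2] = val * (1 - sat)                             # with steeper slope
    return rgb

# Synthetic DEM: a mound on a gentle plane (illustrative only).
y, x = np.mgrid[0:200, 0:200]
dem = 0.02 * x + 8.0 * np.exp(-((x - 100) ** 2 + (y - 100) ** 2) / 800.0)
image = red_relief(dem)
print(image.shape, image.min(), image.max())
```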

  11. Rapid Automatic Lighting Control of a Mixed Light Source for Image Acquisition using Derivative Optimum Search Methods

    Directory of Open Access Journals (Sweden)

    Kim HyungTae

    2015-01-01

    Full Text Available Automatic lighting (auto-lighting) is a function that maximizes the image quality of a vision inspection system by adjusting the light intensity and color. In most inspection systems, a single-color light source is used, and an equal-step search is employed to determine the maximum image quality. However, when a mixed light source is used, the number of iterations becomes large, and therefore a rapid search method must be applied to reduce it. Derivative optimum search methods follow the tangential direction of a function and are usually faster than other methods. In this study, multi-dimensional forms of derivative optimum search methods are applied to obtain the maximum image quality with a mixed light source. The auto-lighting algorithms were derived from the steepest-descent and conjugate-gradient methods, which take an N-dimensional input of driving voltages and a single output of image quality. Experiments in which the proposed algorithm was applied to semiconductor patterns showed that a reduced number of iterations is required to determine the locally maximized image quality.
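
    A minimal sketch of the idea, with a synthetic image-quality function standing in for the real camera feedback; the quadratic surrogate, the finite-difference gradient, and the step size are assumptions of this sketch, not the paper's formulation.

```python
# Gradient-based search for the driving voltages of a mixed light source.
# The image-quality function below is a synthetic stand-in for camera
# feedback; step size and finite-difference gradient are illustrative choices.
import numpy as np

def image_quality(voltages):
    """Synthetic sharpness/contrast score, peaked at an unknown optimum."""
    optimum = np.array([2.1, 3.4, 1.7])          # hypothetical best R/G/B drive
    return 1.0 - np.sum((voltages - optimum) ** 2)

def numerical_gradient(f, v, h=1e-3):
    grad = np.zeros_like(v)
    for i in range(len(v)):
        dv = np.zeros_like(v)
        dv[i] = h
        grad[i] = (f(v + dv) - f(v - dv)) / (2 * h)
    return grad

def steepest_ascent(f, v0, step=0.2, tol=1e-6, max_iter=200):
    v = np.asarray(v0, dtype=float)
    for it in range(max_iter):
        g = numerical_gradient(f, v)
        if np.linalg.norm(g) < tol:
            break
        v = v + step * g                          # follow the tangential direction
    return v, it

voltages, iterations = steepest_ascent(image_quality, [0.0, 0.0, 0.0])
print(np.round(voltages, 3), "found in", iterations, "iterations")
```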

  12. Assessing the accuracy and reliability of ultrasonographic three-dimensional parathyroid volume measurement in a patient with secondary hyperparathyroidism: a comparison with the two-dimensional conventional method

    Energy Technology Data Exchange (ETDEWEB)

    You, Sung Hye; Son, Gyu Ri; Lee, Nam Joon [Dept. of Radiology, Korea University Anam Hospital, Seoul (Korea, Republic of); Suh, Sangil; Ryoo, In Seon; Seol, Hae Young [Dept. of Radiology, Korea University Guro Hospital, Seoul (Korea, Republic of); Lee, Young Hen; Seo, Hyung Suk [Dept. of Radiology, Korea University Ansan Hospital, Ansan (Korea, Republic of)

    2017-01-15

    The purpose of this study was to investigate the accuracy and reliability of the semi-automated ultrasonographic volume measurement tool, virtual organ computer-aided analysis (VOCAL), for measuring the volume of parathyroid glands. Volume measurements for 40 parathyroid glands were performed in patients with secondary hyperparathyroidism caused by chronic renal failure. The volume of the parathyroid glands was measured twice by experienced radiologists by two-dimensional (2D) and three-dimensional (3D) methods using conventional sonograms and the VOCAL with 30° angle increments before parathyroidectomy. The specimen volume was also measured postoperatively. Intraclass correlation coefficients (ICCs) and the absolute percentage error were used for estimating the reproducibility and accuracy of the two different methods. The ICC value between two measurements of the 2D method and the 3D method was 0.956 and 0.999, respectively. The mean absolute percentage error of the 2D method and the 3D VOCAL technique was 29.56% and 5.78%, respectively. For accuracy and reliability, the plots of the 3D method showed a more compact distribution than those of the 2D method on the Bland-Altman graph. The rotational VOCAL method for measuring the parathyroid gland is more accurate and reliable than the conventional 2D measurement. This VOCAL method could be used as a more reliable follow-up imaging modality in a patient with hyperparathyroidism.

  13. Assessing the accuracy and reliability of ultrasonographic three-dimensional parathyroid volume measurement in a patient with secondary hyperparathyroidism: a comparison with the two-dimensional conventional method

    International Nuclear Information System (INIS)

    You, Sung Hye; Son, Gyu Ri; Lee, Nam Joon; Suh, Sangil; Ryoo, In Seon; Seol, Hae Young; Lee, Young Hen; Seo, Hyung Suk

    2017-01-01

    The purpose of this study was to investigate the accuracy and reliability of the semi-automated ultrasonographic volume measurement tool, virtual organ computer-aided analysis (VOCAL), for measuring the volume of parathyroid glands. Volume measurements for 40 parathyroid glands were performed in patients with secondary hyperparathyroidism caused by chronic renal failure. The volume of the parathyroid glands was measured twice by experienced radiologists by two-dimensional (2D) and three-dimensional (3D) methods using conventional sonograms and the VOCAL with 30° angle increments before parathyroidectomy. The specimen volume was also measured postoperatively. Intraclass correlation coefficients (ICCs) and the absolute percentage error were used for estimating the reproducibility and accuracy of the two different methods. The ICC value between two measurements of the 2D method and the 3D method was 0.956 and 0.999, respectively. The mean absolute percentage error of the 2D method and the 3D VOCAL technique was 29.56% and 5.78%, respectively. For accuracy and reliability, the plots of the 3D method showed a more compact distribution than those of the 2D method on the Bland-Altman graph. The rotational VOCAL method for measuring the parathyroid gland is more accurate and reliable than the conventional 2D measurement. This VOCAL method could be used as a more reliable follow-up imaging modality in a patient with hyperparathyroidism

  14. A method for measuring three-dimensional mandibular kinematics in vivo using single-plane fluoroscopy

    Science.gov (United States)

    Chen, C-C; Lin, C-C; Chen, Y-J; Hong, S-W; Lu, T-W

    2013-01-01

    Objectives Accurate measurement of the three-dimensional (3D) motion of the mandible in vivo is essential for relevant clinical applications. Existing techniques are either of limited accuracy or require the use of transoral devices that interfere with jaw movements. This study aimed to develop further an existing method for measuring 3D, in vivo mandibular kinematics using single-plane fluoroscopy; to determine the accuracy of the method; and to demonstrate its clinical applicability via measurements on a healthy subject during opening/closing and chewing movements. Methods The proposed method was based on the registration of single-plane fluoroscopy images and 3D low-radiation cone beam CT data. It was validated using roentgen single-plane photogrammetric analysis at static positions and during opening/closing and chewing movements. Results The method was found to have measurement errors of 0.1 ± 0.9 mm for all translations and 0.2° ± 0.6° for all rotations in static conditions, and of 1.0 ± 1.4 mm for all translations and 0.2° ± 0.7° for all rotations in dynamic conditions. Conclusions The proposed method is considered an accurate method for quantifying the 3D mandibular motion in vivo. Without relying on transoral devices, the method has advantages over existing methods, especially in the assessment of patients with missing or unstable teeth, making it useful for the research and clinical assessment of the temporomandibular joint and chewing function. PMID:22842637

  15. A new extended elliptic equation rational expansion method and its application to (2 + 1)-dimensional Burgers equation

    International Nuclear Information System (INIS)

    Wang Baodong; Song Lina; Zhang Hongqing

    2007-01-01

    In this paper, we present a new elliptic equation rational expansion method to uniformly construct a series of exact solutions for nonlinear partial differential equations. As an application of the method, we choose the (2 + 1)-dimensional Burgers equation to illustrate the method and successfully obtain some new and more general solutions

  16. A meshless local radial basis function method for two-dimensional incompressible Navier-Stokes equations

    KAUST Repository

    Wang, Zhiheng

    2014-12-10

    A meshless local radial basis function method is developed for the two-dimensional incompressible Navier-Stokes equations. The distributed nodes used to store the variables are obtained following the philosophy of an unstructured mesh, which results in two main advantages of the method. One is that generating unstructured nodes in the computational domain is quite simple, without much concern about mesh quality; the other is that the localization of the obtained collocations for the discretization of the equations is performed conveniently with the supporting nodes. The algebraic system is solved by a semi-implicit pseudo-time method, in which the convective and source terms are marched explicitly by the Runge-Kutta method and the diffusive terms are solved implicitly. The proposed method is validated on several benchmark problems, including natural convection in a square cavity, the lid-driven cavity flow, and natural convection in a square cavity containing a circular cylinder, and very good agreement with existing results is obtained.
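
    A small sketch of the local radial-basis-function idea, under assumptions not taken from the paper (a multiquadric basis, a five-node stencil, no polynomial augmentation, and a Laplacian test on a known field); it only illustrates how derivative weights are obtained from a handful of supporting nodes around a collocation point.

```python
# Local RBF collocation sketch: derive Laplacian weights at a node from its
# supporting nodes using a multiquadric basis, then test on u = x^2 + y^2
# (whose Laplacian is exactly 4). Basis, shape parameter, and stencil are
# assumptions for illustration, not the paper's scheme.
import numpy as np

c = 0.5                                                # multiquadric shape parameter
phi = lambda r: np.sqrt(r ** 2 + c ** 2)
lap_phi = lambda r: (r ** 2 + 2 * c ** 2) / (r ** 2 + c ** 2) ** 1.5   # 2-D Laplacian

def rbf_laplacian_weights(center, support):
    """Solve A w = b, where A is the RBF interpolation matrix on the support
    nodes and b holds the Laplacian of each basis function at the center."""
    r = np.linalg.norm(support[:, None, :] - support[None, :, :], axis=2)
    A = phi(r)
    b = lap_phi(np.linalg.norm(support - center, axis=1))
    return np.linalg.solve(A, b)

center = np.array([0.3, 0.4])
support = center + 0.05 * np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]])
w = rbf_laplacian_weights(center, support)

u = lambda p: p[:, 0] ** 2 + p[:, 1] ** 2               # test field, Laplacian = 4
print("approximate Laplacian:", float(w @ u(support)), "(exact value: 4)")
```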

  17. Beam simulation of synchrotron radiation equipment. New method responsive to three dimensional magnetic field

    International Nuclear Information System (INIS)

    Tanaka, Hirofumi

    1999-01-01

    A new numerical analysis method was developed that is capable of precisely modeling the complex three-dimensional magnetic field of a superconducting wiggler and of long-term beam simulation without destroying the properties of the Hamiltonian dynamical system. Using this method, a fundamental design of a compact synchrotron radiation equipment with a hexagonal column shape was developed. Its main parameters are an energy of 1 GeV, a circumference of 36 m, a stored current of 300 mA, and an emittance of 184 nm rad. To cover the X-ray and vacuum-UV regions, a superconducting wiggler with a magnetic field strength of 7 T and an undulator were placed in straight sections. Whether this type of synchrotron radiation equipment can be realized depends on whether the stable region of the circulating beam, with the superconducting wiggler excited, is wider than the required region. The circulating stable region was derived with three orbit-analysis methods, including the one developed here. As a result, although the shape of the stable region differed among the methods, a considerably larger stable region than required was obtained in the tracking results of all three methods. That is, it was shown that the designed compact equipment can accumulate electron beams stably. (G.K.)

  18. Impact response analysis of cask for spent fuel by dimensional analysis and mode superposition method

    International Nuclear Information System (INIS)

    Kim, Y. J.; Kim, W. T.; Lee, Y. S.

    2006-01-01

    Full text: Due to the potential for accidents, the transportation safety of radioactive material has become extremely important. The most important means of ensuring safety in the transportation of radioactive material is the integrity of the cask. The cask for spent fuel generally consists of a cask body and two impact limiters, attached at the upper and lower ends of the cask body. The cask must satisfy general requirements and test requirements for normal transport conditions and hypothetical accident conditions in accordance with IAEA regulations. Among the test requirements for hypothetical accident conditions, the 9 m drop test, in which the cask is dropped from a height of 9 m onto an unyielding surface so as to produce maximum damage, is a very important requirement because it can affect the structural soundness of the cask. So far, the impact response for the 9 m drop test has been obtained by the finite element method with a complex computational procedure. In this study, empirical equations for the impact forces in the 9 m drop test are formulated by dimensional analysis, and the characteristics of the material used for the impact limiters are then analysed using these empirical equations. The dynamic impact response of the cask body is also analysed using the mode superposition method, and the analysis method is proposed. The results are validated by comparison with previous experimental results and finite element analysis results. The present method is simpler than the finite element method and can be used to predict the impact response of the cask

  19. A modified sparse reconstruction method for three-dimensional synthetic aperture radar image

    Science.gov (United States)

    Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin

    2018-03-01

    There is increasing interest in three-dimensional Synthetic Aperture Radar (3-D SAR) imaging from observed sparse scattering data. However, existing 3-D sparse imaging methods require large computing times and storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. Firstly, the 3-D sparse reconstruction problem is transformed into a series of 2-D slice reconstruction problems by range compression. Then the slices are reconstructed by the modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses a hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm, and the Newton direction instead of the steepest-descent direction, which speeds up the convergence of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with the existing 3-D sparse imaging method, performs better in reconstruction quality and reconstruction time.
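
    For orientation, a minimal sketch of the baseline SL0 projection-gradient loop that the record above modifies. The paper's hyperbolic-tangent surrogate and Newton direction are not reproduced here; the Gaussian surrogate, step size, and sigma schedule below are standard illustrative choices, and the measurement setup is synthetic.

```python
# Baseline SL0 (smoothed-l0) sketch for y = A x with sparse x. The record's
# modifications (tanh surrogate of the l0 norm, Newton search direction) are
# not reproduced; this is the standard Gaussian-surrogate variant with
# gradient steps and projection back onto {x : A x = y}.
import numpy as np

def sl0(A, y, sigma_min=1e-3, sigma_decay=0.5, inner_iters=3, mu=2.0):
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                          # minimum-energy starting point
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            delta = x * np.exp(-x ** 2 / (2 * sigma ** 2))   # grad of smoothed l0
            x = x - mu * delta
            x = x - A_pinv @ (A @ x - y)    # project back onto the constraint
        sigma *= sigma_decay
    return x

rng = np.random.default_rng(1)
n, m, k = 100, 40, 5                        # signal length, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k) * 3
y = A @ x_true
x_hat = sl0(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```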

  20. The method to make the three dimensional anatomical pattern of hepatic vessels by stereo angiography

    International Nuclear Information System (INIS)

    Mutou, Haruomi; Kobayashi, Seiichiro; Yamada, Akiyoshi; Takasaki, Takeshi; Isobe, Yoshinori; Tanaka, Seiichi; Saeki, Shin; Yoshida, Masanori

    1986-01-01

    For the past few years, there have been big advances in hepatic surgery. Now small resections, such as segmentectomy or subsegmentectomy, are performed routinely. Based on this trend, hepatic surgeons request more detailed and more stereographic findings of the hepatic vessels from hepatic angiography. In particular, the three-dimensional combined anatomical pattern of the hepatic artery, portal vein, and hepatic vein is strongly needed. We have been working on three-dimensional computer graphics of the hepatic vessels for the past few years, using a personal computer, a digitizer with a clear screen, commercially available 3D software, and our own program. We use three groups of angiographic films, that is, the hepatic artery, the portal vein, and the hepatic vein with the IVC, which were taken by stereoangiography. The depth of each point of the vessels is calculated in the way described in Fig. 3. Using these points, the 3D software '3DPROGATS' can make the anatomical pattern of the combined hepatic vessels on a TV display. We can then also perform rotation, heading, bank, zooming, and hidden-line elimination freely on this picture. Out of necessity as hepatic surgeons, we made a simple system for 3D computer graphics of hepatic vessels. At present the image is somewhat rough, but clinically it is relatively effective. In this report we explain our method and show the anatomical pattern of the hepatic vessels in a case of hepatoma. (author)

  1. Novel Method of Detecting Movement of the Interference Fringes Using One-Dimensional PSD

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2015-06-01

    Full Text Available In this paper, a method of using a one-dimensional position-sensitive detector (PSD) in place of a charge-coupled device (CCD) to measure the movement of interference fringes is presented, and its feasibility is demonstrated with an experimental setup based on the principle of centroid detection. Firstly, the centroid position of the interference fringes in a fiber Mach-Zehnder (M-Z) interferometer is solved in theory, showing that it offers higher resolution and sensitivity. According to the physical characteristics and principles of the PSD, a simulation of the interference fringes' phase difference in fiber M-Z interferometers and of the PSD output is carried out. Comparing the simulation results with the relationship between phase differences and centroid positions in fiber M-Z interferometers leads to the conclusion that the output of the interference fringes measured by the PSD is still the centroid position. Based on extensive measurements, the best resolution achieved by the system is 5.15, 625 μm. Finally, the detection system is evaluated through a setup error analysis and an ultra-narrow-band filter structure. The filter structure is configured with a one-dimensional photonic crystal containing positive- and negative-refraction material, which can eliminate background light in the PSD detection experiment. This detection system has a simple structure, good stability, and high precision, and easily performs remote measurements, which makes it potentially useful in tests of small deformations of materials, refractivity measurements of optical media, and optical wavefront detection.
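
    For context, a small sketch of how a one-dimensional PSD reports the position of a light spot (here, the fringe centroid) from its two electrode currents. The formula is the standard lateral-effect PSD relation; the active length and current values are made up, not taken from the paper.

```python
# Standard 1-D lateral-effect PSD position readout: the light spot's centroid
# along the active length L follows from the two electrode photocurrents.
# Values are illustrative; the paper's setup details are not reproduced.

def psd_position(i1, i2, active_length_mm):
    """Centroid position (mm) measured from the detector centre."""
    return 0.5 * active_length_mm * (i2 - i1) / (i1 + i2)

L_mm = 12.0                       # hypothetical PSD active length
samples = [(4.8e-6, 5.2e-6),      # photocurrent pairs (A) as the fringe drifts
           (4.5e-6, 5.5e-6),
           (4.0e-6, 6.0e-6)]
positions = [psd_position(i1, i2, L_mm) for i1, i2 in samples]
shifts = [b - a for a, b in zip(positions, positions[1:])]
print(positions)                  # centroid positions in mm
print(shifts)                     # fringe movement between samples
```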

  2. Efficient analysis of three dimensional EUV mask induced imaging artifacts using the waveguide decomposition method

    Science.gov (United States)

    Shao, Feng; Evanschitzky, Peter; Fühner, Tim; Erdmann, Andreas

    2009-10-01

    This paper employs the Waveguide decomposition method as an efficient rigorous electromagnetic field (EMF) solver to investigate three dimensional mask-induced imaging artifacts in EUV lithography. The major mask diffraction induced imaging artifacts are first identified by applying the Zernike analysis of the mask nearfield spectrum of 2D lines/spaces. Three dimensional mask features like 22nm semidense/dense contacts/posts, isolated elbows and line-ends are then investigated in terms of lithographic results. After that, the 3D mask-induced imaging artifacts such as feature orientation dependent best focus shift, process window asymmetries, and other aberration-like phenomena are explored for the studied mask features. The simulation results can help lithographers to understand the reasons of EUV-specific imaging artifacts and to devise illumination and feature dependent strategies for their compensation in the optical proximity correction (OPC) for EUV masks. At last, an efficient approach using the Zernike analysis together with the Waveguide decomposition technique is proposed to characterize the impact of mask properties for the future OPC process.

  3. Dimensional accuracy optimization of the micro-plastic injection molding process using the Taguchi design method

    Directory of Open Access Journals (Sweden)

    Chil-Chyuan KUO

    2015-06-01

    Full Text Available Plastic injection molding is an important field in the manufacturing industry because many plastic products are produced by injection molding. However, the time and cost required to produce a precision mold are the most troublesome problems limiting its application at the development stage of a new product in the precision machinery industry. This study presents an approach to manufacturing a hard mold with microfeatures for micro-plastic injection molding. The study also employs the Taguchi design method to investigate the effect of injection parameters on the dimensional accuracy of a Fresnel lens during plastic injection molding. It was found that the dominant factor affecting the microgroove depth of the Fresnel lens is the packing pressure. The optimum processing parameters are a packing pressure of 80 MPa, a melt temperature of 240 °C, a mold temperature of 90 °C, and an injection speed of 50 m/s. Through the confirmation test, the dimensional accuracy of the Fresnel lens can be controlled within ±3 µm using the optimum levels of the process parameters. The research results have industrial application value because electro-optical industries can significantly reduce the development cycle time of a new optical element. DOI: http://dx.doi.org/10.5755/j01.ms.21.2.5864
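
    A small illustration of a Taguchi-style factor analysis of the kind referred to above, under assumptions not taken from the paper: an L9-type orthogonal array over three factors at three levels, a larger-the-better signal-to-noise ratio (one common Taguchi choice), and made-up replicate depth measurements.

```python
# Taguchi-style factor analysis sketch: an L9-like orthogonal array over three
# factors at three levels, larger-the-better S/N ratios, and made-up replicate
# measurements of microgroove depth. None of the numbers come from the paper.
import numpy as np

# Factor levels per run: (packing pressure, melt temp, mold temp), level 0-2.
l9 = [(0, 0, 0), (0, 1, 1), (0, 2, 2),
      (1, 0, 1), (1, 1, 2), (1, 2, 0),
      (2, 0, 2), (2, 1, 0), (2, 2, 1)]

# Two hypothetical replicate depth measurements (micrometres) per run.
depths = np.array([[21.0, 21.4], [22.1, 22.3], [22.8, 23.0],
                   [22.5, 22.2], [23.4, 23.6], [23.1, 22.9],
                   [23.0, 23.3], [23.8, 24.0], [24.1, 24.3]])

def sn_larger_better(y):
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

sn = np.array([sn_larger_better(run) for run in depths])

# Mean S/N per level of each factor: the factor with the widest spread
# (delta) dominates the response.
for f, name in enumerate(["packing pressure", "melt temperature", "mold temperature"]):
    level_means = [sn[[i for i, run in enumerate(l9) if run[f] == lvl]].mean()
                   for lvl in range(3)]
    print(name, [round(m, 3) for m in level_means],
          "delta =", round(max(level_means) - min(level_means), 3))
```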

  4. Three-dimensional vision enhances task performance independently of the surgical method.

    Science.gov (United States)

    Wagner, O J; Hagen, M; Kurmann, A; Horgan, S; Candinas, D; Vorburger, S A

    2012-10-01

    Within the next few years, the medical industry will launch increasingly affordable three-dimensional (3D) vision systems for the operating room (OR). This study aimed to evaluate the effect of two-dimensional (2D) and 3D visualization on surgical skills and task performance. In this study, 34 individuals with varying laparoscopic experience (18 inexperienced individuals) performed three tasks to test spatial relationships, grasping and positioning, dexterity, precision, and hand-eye and hand-hand coordination. Each task was performed in 3D using binocular vision for open performance, the Viking 3Di Vision System for laparoscopic performance, and the DaVinci robotic system. The same tasks were repeated in 2D using an eye patch for monocular vision, conventional laparoscopy, and the DaVinci robotic system. Loss of 3D vision significantly increased the perceived difficulty of a task and the time required to perform it, independently of the approach (P robot than with laparoscopy (P = 0.005). In every case, 3D robotic performance was superior to conventional laparoscopy (2D) (P < 0.001-0.015). The more complex the task, the more 3D vision accelerates task completion compared with 2D vision. The gain in task performance is independent of the surgical method.

  5. A comparison of primary two- and three-dimensional methods to review CT colonography

    International Nuclear Information System (INIS)

    Gelder, Rogier E. van; Florie, Jasper; Nio, C. Yung; Jager, Steven W. de; Lameris, Johan S.; Stoker, Jaap; Jensch, Sebastiaan; Vos, Frans M.; Venema, Henk W.; Bartelsman, Joep F.; Reitsma, Johannes B.; Bossuyt, Patrick M.M.

    2007-01-01

    The aim of our study was to compare primary three-dimensional (3D) and primary two-dimensional (2D) review methods for CT colonography with regard to polyp detection and perceptive errors. CT colonography studies of 77 patients were read twice by three reviewers, first with a primary 3D method and then with a primary 2D method. Mean numbers of true and false positives, patient sensitivity and specificity and perceptive errors were calculated with colonoscopy as a reference standard. A perceptive error was made if a polyp was not detected by all reviewers. Mean sensitivity for large (≥10 mm) polyps for primary 3D and 2D review was 81% (14.7/18) and 70% (12.7/18), respectively (p-values ≥0.25). Mean numbers of large false positives for primary 3D and 2D were 8.3 and 5.3, respectively. With primary 3D and 2D review 1 and 6 perceptive errors, respectively, were made in 18 large polyps (p = 0.06). For medium-sized (6-9 mm) polyps these values were for primary 3D and 2D, respectively: mean sensitivity: 67% (11.3/17) and 61% (10.3/17; p-values ≥0.45), number of false positives: 33.3 and 15.6, and perceptive errors: 4 and 6 (p = 0.53). No significant differences were found in the detection of large and medium-sized polyps between primary 3D and 2D review. (orig.)

  6. Seismic response of three-dimensional rockfill dams using the Indirect Boundary Element Method

    International Nuclear Information System (INIS)

    Sanchez-Sesma, Francisco J; Arellano-Guzman, Mauricio; Perez-Gavilan, Juan J; Suarez, Martha; Marengo-Mogollon, Humberto; Chaillat, Stephanie; Jaramillo, Juan Diego; Gomez, Juan; Iturraran-Viveros, Ursula; Rodriguez-Castellanos, Alejandro

    2010-01-01

    The Indirect Boundary Element Method (IBEM) is used to compute the seismic response of a three-dimensional rockfill dam model. The IBEM is based on a single layer integral representation of elastic fields in terms of the full-space Green function, or fundamental solution of the equations of dynamic elasticity, and the associated force densities along the boundaries. The method has been applied to simulate the ground motion in several configurations of surface geology. Moreover, the IBEM has been used as benchmark to test other procedures. We compute the seismic response of a three-dimensional rockfill dam model placed within a canyon that constitutes an irregularity on the surface of an elastic half-space. The rockfill is also assumed elastic with hysteretic damping to account for energy dissipation. Various types of incident waves are considered to analyze the physical characteristics of the response: symmetries, amplifications, impulse response and the like. Computations are performed in the frequency domain and lead to time response using Fourier analysis. In the present implementation a symmetrical model is used to test symmetries. The boundaries of each region are discretized into boundary elements whose size depends on the shortest wavelength, typically, six boundary segments per wavelength. Usually, the seismic response of rockfill dams is simulated using either finite elements (FEM) or finite differences (FDM). In most applications, commercial tools that combine features of these methods are used to assess the seismic response of the system for a given motion at the base of model. However, in order to consider realistic excitation of seismic waves with different incidence angles and azimuth we explore the IBEM.

  7. A Three-Dimensional Target Depth-Resolution Method with a Single-Vector Sensor.

    Science.gov (United States)

    Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin

    2018-04-12

    This paper mainly studies and verifies the target number category-resolution method in multi-target cases and the target depth-resolution method of aerial targets. Firstly, target depth resolution is performed by using the sign distribution of the reactive component of the vertical complex acoustic intensity; the target category and the number resolution in multi-target cases is realized with a combination of the bearing-time recording information; and the corresponding simulation verification is carried out. The algorithm proposed in this paper can distinguish between the single-target multi-line spectrum case and the multi-target multi-line spectrum case. This paper presents an improved azimuth-estimation method for multi-target cases, which makes the estimation results more accurate. Using the Monte Carlo simulation, the feasibility of the proposed target number and category-resolution algorithm in multi-target cases is verified. In addition, by studying the field characteristics of the aerial and surface targets, the simulation results verify that there is only amplitude difference between the aerial target field and the surface target field under the same environmental parameters, and an aerial target can be treated as a special case of a surface target; the aerial target category resolution can then be realized based on the sign distribution of the reactive component of the vertical acoustic intensity so as to realize three-dimensional target depth resolution. By processing data from a sea experiment, the feasibility of the proposed aerial target three-dimensional depth-resolution algorithm is verified.
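
    The depth cue used here is the sign of the reactive (imaginary) part of the vertical complex acoustic intensity, I_z(f) = P(f) V_z*(f), formed from the pressure and vertical-velocity channels of the vector sensor. A minimal sketch of that quantity, with synthetic signals standing in for real sensor data, is given below; it illustrates the definition only, not the authors' full resolution algorithm.

        import numpy as np

        def reactive_vertical_intensity_sign(p, vz, fs, f_line):
            """Sign of the reactive part of the vertical complex acoustic intensity
            at a given line frequency. p and vz are pressure and vertical-velocity
            time series sampled at fs (Hz)."""
            n = len(p)
            P = np.fft.rfft(p * np.hanning(n))
            Vz = np.fft.rfft(vz * np.hanning(n))
            freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            k = np.argmin(np.abs(freqs - f_line))
            I_z = P[k] * np.conj(Vz[k])        # complex vertical intensity (cross-spectrum)
            return np.sign(I_z.imag)           # the reactive component's sign carries the depth cue

        # Synthetic example: a 100 Hz line with a 30-degree phase lag between p and vz
        fs, f0 = 4000.0, 100.0
        t = np.arange(0, 2.0, 1.0 / fs)
        p  = np.cos(2 * np.pi * f0 * t)
        vz = np.cos(2 * np.pi * f0 * t - np.pi / 6)
        print(reactive_vertical_intensity_sign(p, vz, fs, f0))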

  8. Development of 2-D/1-D fusion method for three-dimensional whole-core heterogeneous neutron transport calculations

    International Nuclear Information System (INIS)

    Lee, Gil Soo

    2006-02-01

    To describe the power distribution and multiplication factor of a reactor core accurately, it is necessary to perform calculations based on the neutron transport equation, considering heterogeneous geometry and scattering angles. Such calculations are computationally very heavy and were nearly impossible with the computers of earlier days. Because of this limitation of computing power, the traditional approach to reactor core design consists of a heterogeneous transport calculation at the fuel-assembly level and a whole-core nodal diffusion calculation with assembly-homogenized properties obtained from the fuel-assembly transport calculation. This approach is efficient in computation time, but it gives less accurate results for highly heterogeneous problems. As whole-core heterogeneous transport calculation became more feasible owing to the rapid growth of computing power during the last several years, interest in two- and three-dimensional whole-core heterogeneous transport calculations by deterministic methods has increased. For two-dimensional calculation, there have been several successful approaches using the even-parity transport equation with triangular meshes, the SN method with refined rectangular meshes, the method of characteristics (MOC) with unstructured meshes, and so on. The work in this thesis originally started from two-dimensional whole-core heterogeneous transport calculation using the MOC. After this was achieved, efforts turned to three-dimensional whole-core heterogeneous transport calculation using the MOC. Since direct extension of the MOC to three dimensions requires too much computing power, an indirect approach to the three-dimensional calculation was considered. Thus, a 2D/1D fusion method for three-dimensional heterogeneous transport calculation was developed and successfully implemented in a computer code. The 2D/1D fusion method is a synergistic combination of the MOC for the radial 2-D calculation and SN-like methods for the axial 1-D calculation.

  9. Temporal resolution measurement of 128-slice dual source and 320-row area detector computed tomography scanners in helical acquisition mode using the impulse method.

    Science.gov (United States)

    Hara, Takanori; Urikura, Atsushi; Ichikawa, Katsuhiro; Hoshino, Takashi; Nishimaru, Eiji; Niwa, Shinji

    2016-04-01

    To analyse the temporal resolution (TR) of modern computed tomography (CT) scanners using the impulse method, and assess the actual maximum TR at respective helical acquisition modes. To assess the actual TR of helical acquisition modes of a 128-slice dual source CT (DSCT) scanner and a 320-row area detector CT (ADCT) scanner, we assessed the TRs of various acquisition combinations of a pitch factor (P) and gantry rotation time (R). The TR of the helical acquisition modes for the 128-slice DSCT scanner continuously improved with a shorter gantry rotation time and greater pitch factor. However, for the 320-row ADCT scanner, with a pitch factor of >1.0 the TR was approximately one half of the gantry rotation time. The maximum TR values of single- and dual-source helical acquisition modes for the 128-slice DSCT scanner were 0.138 s (R/P=0.285/1.5) and 0.074 s (R/P=0.285/3.2), and the maximum TR values of the 64×0.5- and 160×0.5-mm detector configurations of the helical acquisition modes for the 320-row ADCT scanner were 0.120 s (R/P=0.275/1.375) and 0.195 s (R/P=0.3/0.6), respectively. Because the TR of a CT scanner is not accurately depicted in the specifications of the individual scanner, appropriate acquisition conditions should be determined based on the actual TR measurement. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
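
    As a rough cross-check, the textbook rules of thumb TR of roughly R/2 for single-source half-scan reconstruction and roughly R/4 for dual-source acquisition can be compared with the measured maxima quoted above; the short sketch below does that arithmetic. These approximations are general rules of thumb, not part of the impulse-method measurement itself.

        # Nominal half-scan approximations vs. the measured DSCT maxima quoted in the abstract.
        # R is the gantry rotation time in seconds; R/2 (single source) and R/4 (dual source)
        # are rules of thumb and are not part of the impulse method.

        cases = {
            "DSCT single-source (R=0.285 s)": (0.285 / 2, 0.138),
            "DSCT dual-source   (R=0.285 s)": (0.285 / 4, 0.074),
        }
        for mode, (nominal, measured_tr) in cases.items():
            print(f"{mode}: nominal about {nominal:.3f} s, measured {measured_tr:.3f} s")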

  10. Improved non-dimensional dynamic influence function method for vibration analysis of arbitrarily shaped plates with clamped edges

    Directory of Open Access Journals (Sweden)

    Sang-Wook Kang

    2016-03-01

    A new formulation for the non-dimensional dynamic influence function method, which was developed by the authors, is proposed to efficiently extract eigenvalues and mode shapes of clamped plates with arbitrary shapes. Compared with the finite element and boundary element methods, the non-dimensional dynamic influence function method yields highly accurate solutions in eigenvalue analysis problems of plates and membranes including acoustic cavities. However, the non-dimensional dynamic influence function method requires the uneconomic procedure of calculating the singularity of a system matrix in the frequency range of interest for extracting eigenvalues because it produces a non-algebraic eigenvalue problem. This article describes a new approach that reduces the problem of free vibrations of clamped plates to an algebraic eigenvalue problem, the solution of which is straightforward. The validity and efficiency of the proposed method are illustrated through several numerical examples.

  11. Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data.

    Science.gov (United States)

    Becker, Natalia; Toedt, Grischa; Lichter, Peter; Benner, Axel

    2011-05-09

    Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods with a wide range of scientific applications, the SVM does not include automatic feature selection and therefore a number of feature selection procedures have been developed. Regularisation approaches extend SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which, in comparison to a fixed grid search, finds a global optimal solution more rapidly and precisely. Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to a change of the model complexity than methods using single penalties. Our simulation study showed that Elastic SCAD SVM outperformed LASSO (L1) and SCAD SVMs. Moreover, Elastic SCAD SVM provided sparser classifiers in terms of median number of features selected than Elastic Net SVM and often predicted better than Elastic Net in terms of misclassification error. Finally, we applied the penalization methods described above on four publicly available breast cancer data sets. Elastic SCAD SVM was the only method providing robust classifiers in sparse and non-sparse situations. The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data. We were first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions on the optimization of tuning parameters. The penalized SVM
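
    For reference, combining the standard SCAD penalty (Fan and Li's smoothly clipped absolute deviation) with a ridge term gives the Elastic SCAD penalty described above. The sketch below writes out that penalty for a single coefficient; the tuning parameters lam1, lam2 and a are placeholders, and the piecewise form follows the textbook SCAD definition rather than any code released with the paper.

        def scad_penalty(beta, lam, a=3.7):
            """Standard SCAD penalty for a single coefficient (Fan & Li, 2001)."""
            b = abs(beta)
            if b <= lam:
                return lam * b
            if b <= a * lam:
                return (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1))
            return lam**2 * (a + 1) / 2

        def elastic_scad_penalty(beta, lam1, lam2, a=3.7):
            """Elastic SCAD: SCAD part plus a ridge (L2) part, as described above."""
            return scad_penalty(beta, lam1, a) + lam2 * beta**2

        # Example: penalty values for a few coefficient magnitudes (illustrative parameters)
        for beta in (0.1, 0.5, 2.0):
            print(beta, round(elastic_scad_penalty(beta, lam1=0.3, lam2=0.1), 4))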

  12. A method of adjusting SUV for injection-acquisition time differences in {sup 18}F-FDG PET Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Laffon, Eric [Hopital du Haut Leveque, CHU de Bordeaux, Pessac (France); Centre de Recherche Cardio-Thoracique, Bordeaux (France); Hopital du Haut-Leveque, Service de Medecine Nucleaire, Pessac (France); Clermont, Henri de [Hopital du Haut Leveque, CHU de Bordeaux, Pessac (France); Marthan, Roger [Hopital du Haut Leveque, CHU de Bordeaux, Pessac (France); Centre de Recherche Cardio-Thoracique, Bordeaux (France)

    2011-11-15

    A time normalisation method of tumour SUVs in {sup 18}F-FDG PET imaging is proposed that has been verified in lung cancer patients. A two-compartment model analysis showed that, when SUV is not corrected for {sup 18}F physical decay (SUV{sub uncorr}), its value is within 5% of its peak value (t = 79 min) between 55 and 110 min after injection, in each individual patient. In 10 patients, each with 1 or more malignant lesions (n = 15), two PET acquisitions were performed within this time delay, and the maximal SUV of each lesion, both corrected and uncorrected, was assessed. No significant difference was found between the two uncorrected SUVs, whereas there was a significant difference between the two corrected ones: mean differences were 0.04 {+-} 0.22 and 3.24 {+-} 0.75 g.ml{sup -1}, respectively (95% confidence intervals). Therefore, a simple normalisation of decay-corrected SUV for time differences after injection is proposed: SUV{sub N} = 1.66*SUV{sub uncorr}, where the factor 1.66 arises from decay correction at t = 79 min. When {sup 18}F-FDG PET imaging is performed within the range 55-110 min after injection, a simple SUV normalisation for time differences after injection has been verified in patients with lung cancer, with a {+-}2.5% relative measurement uncertainty. (orig.)
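
    The factor 1.66 is simply the 18F decay-correction factor evaluated at the 79-minute reference point; a quick check, assuming the usual 18F half-life of about 109.8 min (which gives roughly 1.65, close to the quoted factor of 1.66), is sketched below.

        import math

        F18_HALF_LIFE_MIN = 109.8  # approximate 18F half-life; assumed, not stated in the abstract

        def decay_correction_factor(t_min, half_life=F18_HALF_LIFE_MIN):
            """Factor relating an uncorrected SUV at time t to its decay-corrected value."""
            return 2.0 ** (t_min / half_life)

        def normalised_suv(suv_uncorrected, t_ref_min=79.0):
            """SUV_N = decay-correction factor at the reference time * SUV_uncorrected."""
            return decay_correction_factor(t_ref_min) * suv_uncorrected

        print(round(decay_correction_factor(79.0), 3))   # about 1.65, close to the quoted 1.66
        print(round(normalised_suv(3.0), 2))             # example uncorrected SUV of 3.0 (illustrative)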

  13. METHOD OF DIMENSIONALITY REDUCTION IN CONTACT MECHANICS AND FRICTION: A USERS HANDBOOK. I. AXIALLY-SYMMETRIC CONTACTS

    Directory of Open Access Journals (Sweden)

    Valentin L. Popov

    2014-04-01

    The Method of Dimensionality Reduction (MDR) is a method of calculation and simulation of contacts of elastic and viscoelastic bodies. It consists essentially of two simple steps: (a) substitution of the three-dimensional continuum by a uniquely defined one-dimensional linearly elastic or viscoelastic foundation (Winkler foundation) and (b) transformation of the three-dimensional profile of the contacting bodies by means of the MDR-transformation. As soon as these two steps are completed, the contact problem can be considered to be solved. For axially symmetric contacts, only a small calculation by hand is required which does not exceed elementary calculus and will not be a barrier for any practically-oriented engineer. Alternatively, the MDR can be implemented numerically, which is almost trivial due to the independence of the foundation elements. In spite of their simplicity, all the results are exact. The present paper is a short practical guide to the MDR.
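
    As a concrete illustration of the two MDR steps, the sketch below treats the classic parabolic (Hertz-type) indenter: the 3D profile f(r) = r^2/(2R) is MDR-transformed to the 1D profile g(x) = x^2/R, and the Winkler foundation with stiffness E* dx per element reproduces the Hertzian force-indentation relation F = (4/3) E* sqrt(R) d^(3/2). This is a textbook MDR example, not code taken from the handbook itself.

        import numpy as np

        def mdr_normal_force(d, R, E_star, n=20001, x_max_factor=2.0):
            """Normal force for a parabolic indenter via the Method of Dimensionality
            Reduction: 1D transformed profile g(x) = x**2 / R pressed into a Winkler
            foundation with stiffness E* * dx per element."""
            a_guess = np.sqrt(R * d)                       # expected contact radius
            x = np.linspace(-x_max_factor * a_guess, x_max_factor * a_guess, n)
            dx = x[1] - x[0]
            g = x**2 / R                                   # MDR-transformed profile
            u = np.maximum(d - g, 0.0)                     # 1D spring compression (contact region only)
            return E_star * np.sum(u) * dx                 # sum of independent spring forces

        d, R, E_star = 1e-4, 0.01, 1e9                     # 0.1 mm indentation, 10 mm radius (illustrative)
        F_mdr = mdr_normal_force(d, R, E_star)
        F_hertz = (4.0 / 3.0) * E_star * np.sqrt(R) * d**1.5
        print(F_mdr, F_hertz)                              # the two values agree closely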

  14. A Galerkin Boundary Element Method for two-dimensional nonlinear magnetostatics

    Science.gov (United States)

    Brovont, Aaron D.

    The Boundary Element Method (BEM) is a numerical technique for solving partial differential equations that is used broadly among the engineering disciplines. The main advantage of this method is that one needs only to mesh the boundary of a solution domain. A key drawback is the myriad of integrals that must be evaluated to populate the full system matrix. To this day these integrals have been evaluated using numerical quadrature. In this research, a Galerkin formulation of the BEM is derived and implemented to solve two-dimensional magnetostatic problems with a focus on accurate, rapid computation. To this end, exact, closed-form solutions have been derived for all the integrals comprising the system matrix as well as those required to compute fields in post-processing; the need for numerical integration has been eliminated. It is shown that calculation of the system matrix elements using analytical solutions is 15-20 times faster than with numerical integration of similar accuracy. Furthermore, through the example analysis of a c-core inductor, it is demonstrated that the present BEM formulation is a competitive alternative to the Finite Element Method (FEM) for linear magnetostatic analysis. Finally, the BEM formulation is extended to analyze nonlinear magnetostatic problems via the Dual Reciprocity Method (DRBEM). It is shown that a coarse, meshless analysis using the DRBEM is able to achieve RMS error of 3-6% compared to a commercial FEM package in lightly saturated conditions.

  15. View-invariant gait recognition method by three-dimensional convolutional neural network

    Science.gov (United States)

    Xing, Weiwei; Li, Ying; Zhang, Shunli

    2018-01-01

    Gait, as an important biometric feature, can identify a human at a long distance. View change is one of the most challenging factors for gait recognition. To address cross-view issues in gait recognition, we propose a view-invariant gait recognition method based on a three-dimensional (3-D) convolutional neural network. First, the 3-D convolutional neural network (3DCNN) is introduced to learn view-invariant features, capturing spatial information and temporal information simultaneously on normalized silhouette sequences. Second, a network training method based on cross-domain transfer learning is proposed to solve the problem of the limited gait training samples. We choose C3D as the basic model, pretrained on Sports-1M, and then fine-tune the C3D model to adapt it to gait recognition. In the recognition stage, we use the fine-tuned model to extract gait features and use the Euclidean distance to measure the similarity of gait sequences. Extensive experiments are carried out on the CASIA-B dataset, and the experimental results demonstrate that our method outperforms many other methods.
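
    The recognition stage reduces to nearest-neighbour matching of the fine-tuned C3D features under the Euclidean distance; a minimal sketch of that matching step is given below, with random vectors standing in for the features and extract_gait_features named only as a hypothetical placeholder for the fine-tuned 3DCNN.

        import numpy as np

        def euclidean_match(probe_feature, gallery_features, gallery_labels):
            """Return the gallery label whose feature vector is closest (L2) to the probe."""
            dists = np.linalg.norm(gallery_features - probe_feature, axis=1)
            return gallery_labels[int(np.argmin(dists))]

        # Illustrative use with random vectors standing in for fine-tuned C3D features;
        # extract_gait_features(silhouette_sequence) would be the hypothetical extractor.
        rng = np.random.default_rng(0)
        gallery = rng.normal(size=(5, 4096))            # 5 enrolled subjects, assumed 4096-d features
        labels = np.array(["s1", "s2", "s3", "s4", "s5"])
        probe = gallery[2] + 0.01 * rng.normal(size=4096)
        print(euclidean_match(probe, gallery, labels))  # -> "s3"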

  16. Numerical simulation of two-dimensional flows over a circular cylinder using the immersed boundary method

    International Nuclear Information System (INIS)

    Lima E Silva, A.L.F.; Silveira-Neto, A.; Damasceno, J.J.R.

    2003-01-01

    In this work, a virtual boundary method is applied to the numerical simulation of a uniform flow over a cylinder. The force source term, added to the two-dimensional Navier-Stokes equations, guarantees the imposition of the no-slip boundary condition over the body-fluid interface. These equations are discretized, using the finite differences method. The immersed boundary is represented with a finite number of Lagrangian points, distributed over the solid-fluid interface. A Cartesian grid is used to solve the fluid flow equations. The key idea is to propose a method to calculate the interfacial force without ad hoc constants that should usually be adjusted for the type of flow and the type of the numerical method, when this kind of model is used. In the present work, this force is calculated using the Navier-Stokes equations applied to the Lagrangian points and then distributed over the Eulerian grid. The main advantage of this approach is that it enables calculation of this force field, even if the interface is moving or deforming. It is unnecessary to locate the Eulerian grid points near this immersed boundary. The lift and drag coefficients and the Strouhal number, calculated for an immersed cylinder, are compared with previous experimental and numerical results, for different Reynolds numbers
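
    A core ingredient of any immersed-boundary formulation is spreading the Lagrangian interface force onto the Eulerian grid through a regularized delta function. The sketch below shows that generic spreading step using Peskin's cosine kernel; it illustrates the mechanism only and is not the specific force-evaluation scheme proposed in the paper.

        import numpy as np

        def delta_h(r, h):
            """Peskin's cosine approximation to the Dirac delta (support 2h)."""
            r = np.abs(r)
            return np.where(r < 2 * h, (1.0 / (4 * h)) * (1 + np.cos(np.pi * r / (2 * h))), 0.0)

        def spread_force(Fx, X, Y, xg, yg, ds):
            """Spread Lagrangian force values Fx (at points X, Y, spaced ds along the interface)
            onto the Eulerian grid defined by the 1-D coordinate arrays xg, yg."""
            h = xg[1] - xg[0]
            fx = np.zeros((len(yg), len(xg)))
            for Fk, Xk, Yk in zip(Fx, X, Y):
                wx = delta_h(xg - Xk, h)          # 1-D kernel in x
                wy = delta_h(yg - Yk, h)          # 1-D kernel in y
                fx += Fk * np.outer(wy, wx) * ds  # tensor-product kernel times arc length
            return fx

        # Illustrative use: Lagrangian points on a circle of radius 0.25 inside a unit box
        theta = np.linspace(0, 2 * np.pi, 32, endpoint=False)
        X, Y = 0.5 + 0.25 * np.cos(theta), 0.5 + 0.25 * np.sin(theta)
        xg = yg = np.linspace(0, 1, 64)
        f = spread_force(np.ones_like(X), X, Y, xg, yg, ds=2 * np.pi * 0.25 / 32)
        print(f.sum() * (xg[1] - xg[0])**2)  # approximately the total applied force (sum of Fk * ds)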

  17. Guided Autotransplantation of Teeth: A Novel Method Using Virtually Planned 3-dimensional Templates.

    Science.gov (United States)

    Strbac, Georg D; Schnappauf, Albrecht; Giannis, Katharina; Bertl, Michael H; Moritz, Andreas; Ulm, Christian

    2016-12-01

    The aim of this study was to introduce an innovative method for autotransplantation of teeth using 3-dimensional (3D) surgical templates for guided osteotomy preparation and donor tooth placement. This report describes autotransplantation of immature premolars as treatment of an 11-year-old boy having suffered severe trauma with avulsion of permanent maxillary incisors. This approach uses modified methods from guided implant surgery by superimposition of Digital Imaging and Communications in Medicine files and 3D data sets of the jaws in order to predesign 3D printed templates with the aid of a fully digital workflow. The intervention in this complex case could successfully be accomplished by performing preplanned virtual transplantations with guided osteotomies to prevent bone loss and ensure accurate donor teeth placement in new recipient sites. Functional and esthetic restoration could be achieved by modifying methods used in guided implant surgery and prosthodontic rehabilitation. The 1-year follow-up showed vital natural teeth with physiological clinical and radiologic parameters. This innovative approach uses the latest diagnostic methods and techniques of guided implant surgery, enabling the planning and production of 3D printed surgical templates. These accurate virtually predesigned surgical templates could facilitate autotransplantation in the future by full implementation of recommended guidelines, ensuring an atraumatic surgical protocol. Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  18. Methods for the solution of the two-dimensional radiation-transfer equation

    International Nuclear Information System (INIS)

    Weaver, R.; Mihalas, D.; Olson, G.

    1982-01-01

    We use the variable Eddington factor (VEF) approximation to solve the time-dependent two-dimensional radiation transfer equation. The transfer equation and its moments are derived for an inertial frame of reference in cylindrical geometry. Using the VEF tensor to close the moment equations, we manipulate them into a combined moment equation that results in an energy equation, which is automatically flux limited. There are two separable facets in this method of solution. First, given the variable Eddington tensor, we discuss the efficient solution of the combined moment matrix equation. The second facet of the problem is the calculation of the variable Eddington tensor. Several options for this calculation, as well as physical limitations on the use of locally-calculated Eddington factors, are discussed
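
    The closure at the heart of the approach replaces the radiation pressure tensor by the product of a variable Eddington tensor and the energy density; schematically, in a generic frequency-integrated form that is not necessarily the exact notation of the paper,

        P_{ij} = f_{ij}\,E ,\qquad
        \frac{\partial E}{\partial t} + \nabla\cdot\mathbf{F} = S_E ,\qquad
        \frac{1}{c^{2}}\frac{\partial \mathbf{F}}{\partial t} + \nabla\cdot\bigl(\mathsf{f}\,E\bigr) = -\frac{\chi}{c}\,\mathbf{F},

    and eliminating the flux between the two moment equations gives the combined, automatically flux-limited energy equation referred to above.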

  19. Method to planarize three-dimensional structures to enable conformal electrodes

    Science.gov (United States)

    Nikolic, Rebecca J; Conway, Adam M; Graff, Robert T; Reinhardt, Catherine; Voss, Lars F; Shao, Qinghui

    2012-11-20

    Methods for fabricating three-dimensional PIN structures having conformal electrodes are provided, as well as the structures themselves. The structures include a first layer and an array of pillars with cavity regions between the pillars. A first end of each pillar is in contact with the first layer. A segment is formed on the second end of each pillar. The cavity regions are filled with a fill material, which may be a functional material such as a neutron-sensitive material. The fill material covers each segment. A portion of the fill material is etched back to produce an exposed portion of the segment. A first electrode is deposited onto the fill material and each exposed segment, thereby forming a conductive layer that provides a common contact to each exposed segment. A second electrode is deposited onto the first layer.

  20. Cardiac dimensional analysis by use of biplane cineradiography: description and validation of method.

    Science.gov (United States)

    Lipscomb, K

    1980-01-01

    Biplane cineradiography is a potentially powerful tool for precise measurement of intracardiac dimensions. The most systematic approach to these measurements is the creation of a three-dimensional coordinate system within the x-ray field. Using this system, interpoint distances, such as between radiopaque clips or coronary artery bifurcations, can be calculated by use of the Pythagorean theorem. Alternatively, calibration factors can be calculated in order to determine the absolute dimensions of a structure, such as a ventricle or coronary artery. However, cineradiography has two problems that have precluded widespread use of the system. These problems are pincushion distortion and variable image magnification. In this paper, methodology to quantitate and compensate for these variables is presented. The method uses radiopaque beads permanently mounted in the x-ray field. The positions of the bead images on the x-ray film determine the compensation factors. Using this system, measurements are made with a standard deviation of approximately 1% of the true value.
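
    Once the three-dimensional coordinate system is established and the compensation factors applied, the interpoint distance itself is elementary three-dimensional Pythagoras; a minimal sketch with hypothetical marker coordinates in a calibrated frame:

        import math

        def interpoint_distance(p1, p2):
            """Euclidean distance between two points in the calibrated 3D x-ray frame."""
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

        # e.g. two radiopaque markers at reconstructed coordinates in mm (values are illustrative)
        print(round(interpoint_distance((10.0, 22.5, 4.0), (13.0, 18.5, 4.0)), 2))  # 5.0 mm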

  1. A three-dimensional cell-based smoothed finite element method for elasto-plasticity

    International Nuclear Information System (INIS)

    Lee, Kye Hyung; Im, Se Yong; Lim, Jae Hyuk; Sohn, Dong Woo

    2015-01-01

    This work is concerned with a three-dimensional cell-based smoothed finite element method for application to elastic-plastic analysis. The formulation of smoothed finite elements is extended to cover elastic-plastic deformations beyond the classical linear theory of elasticity, which has been the major application domain of smoothed finite elements. The finite strain deformations are treated with the aid of the formulation based on the hyperelastic constitutive equation. The volumetric locking originating from the nearly incompressible behavior of elastic-plastic deformations is remedied by relaxing the volumetric strain through the mean value. The comparison with the conventional finite elements demonstrates the effectiveness and accuracy of the present approach.

  2. Three-Dimensional Dynamic Topology Optimization with Frequency Constraints Using Composite Exponential Function and ICM Method

    Directory of Open Access Journals (Sweden)

    Hongling Ye

    2015-01-01

    The dynamic topology optimization of three-dimensional continuum structures subject to frequency constraints is investigated using Independent Continuous Mapping (ICM) design variable fields. The composite exponential function (CEF) is selected as the filter function that recognizes the design variables and implements the changing process of the design variables from “discrete” to “continuous” and back to “discrete.” Explicit formulations of the frequency constraints are given based on the filter functions and a first-order Taylor series expansion, and an improved optimization model is formulated using the CEF and the explicit frequency constraints. A dual sequential quadratic programming (DSQP) algorithm is used to solve the optimization model. The program is developed on the platform of MSC Patran & Nastran. Finally, numerical examples are given to demonstrate the validity and applicability of the proposed method.

  3. A three-dimensional cell-based smoothed finite element method for elasto-plasticity

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kye Hyung; Im, Se Yong [KAIST, Daejeon (Korea, Republic of); Lim, Jae Hyuk [KARI, Daejeon (Korea, Republic of); Sohn, Dong Woo [Korea Maritime and Ocean University, Busan (Korea, Republic of)

    2015-02-15

    This work is concerned with a three-dimensional cell-based smoothed finite element method for application to elastic-plastic analysis. The formulation of smoothed finite elements is extended to cover elastic-plastic deformations beyond the classical linear theory of elasticity, which has been the major application domain of smoothed finite elements. The finite strain deformations are treated with the aid of the formulation based on the hyperelastic constitutive equation. The volumetric locking originating from the nearly incompressible behavior of elastic-plastic deformations is remedied by relaxing the volumetric strain through the mean value. The comparison with the conventional finite elements demonstrates the effectiveness and accuracy of the present approach.

  4. A solution of two-dimensional magnetohydrodynamic flow using the finite volume method

    Directory of Open Access Journals (Sweden)

    Naceur Sonia

    2014-01-01

    This paper presents two-dimensional numerical modeling of the coupled electromagnetic-hydrodynamic phenomena in a conduction MHD pump using the finite volume method. Magnetohydrodynamic problems are interdisciplinary and coupled, since the effect of the velocity field appears in the magnetic transport equations, and the interaction between the electric current and the magnetic field appears in the momentum transport equations. The solution of the Maxwell and Navier-Stokes equations is obtained by introducing the magnetic vector potential A, the vorticity ζ, and the stream function ψ. The flux density, the electromagnetic force, and the velocity are presented graphically. The simulation results agree with those obtained with the Ansys Workbench Fluent software.
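
    For reference, the stream-function and vorticity variables of this A-ζ-ψ formulation have their standard two-dimensional definitions (standard relations reproduced here for the reader, not equations quoted from the paper):

        u = \frac{\partial \psi}{\partial y},\qquad
        v = -\frac{\partial \psi}{\partial x},\qquad
        \zeta = \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y},\qquad
        \mathbf{B} = \nabla \times \mathbf{A}.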

  5. FINITE VOLUME METHOD FOR SOLVING THREE-DIMENSIONAL ELECTRIC FIELD DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Paţiuc V.I.

    2011-04-01

    The paper examines a new approach to the finite volume method, used to calculate the electric field in a spatially homogeneous three-dimensional environment. A Dirichlet problem is formulated, with the computational grid built on a partition of space known as the Delaunay triangulation together with the corresponding Voronoi cells. A numerical algorithm is proposed for calculating the potential and the electric field strength in the space around a cylinder placed in air. An algorithm and software were developed for the case in which a potential is assigned on the inner surface of the cylinder and zero potential is assigned on the outer surface and the bottom of the cylinder. Results are presented for the computed distributions of the potential and the electric field strength.

  6. A Fibonacci collocation method for solving a class of Fredholm–Volterra integral equations in two-dimensional spaces

    Directory of Open Access Journals (Sweden)

    Farshid Mirzaee

    2014-06-01

    In this paper, we present a numerical method for solving two-dimensional Fredholm–Volterra integral equations (F-VIE). The method reduces the solution of these integral equations to the solution of a linear system of algebraic equations. The existence and uniqueness of the solution and the error analysis of the proposed method are discussed. The method is computationally very simple and attractive. Finally, numerical examples illustrate the efficiency and accuracy of the method.

  7. Recent Progress in GW-based Methods for Excited-State Calculations of Reduced Dimensional Systems

    Science.gov (United States)

    da Jornada, Felipe H.

    2015-03-01

    Ab initio calculations of excited-state phenomena within the GW and GW-Bethe-Salpeter equation (GW-BSE) approaches allow one to accurately study the electronic and optical properties of various materials, including systems with reduced dimensionality. However, several challenges arise when dealing with complicated nanostructures where the electronic screening is strongly spatially and directionally dependent. In this talk, we discuss some recent developments to address these issues. First, we turn to the slow convergence of quasiparticle energies and exciton binding energies with respect to k-point sampling. This is very effectively dealt with using a new hybrid sampling scheme, which results in savings of several orders of magnitude in computation time. A new ab initio method is also developed to incorporate substrate screening into GW and GW-BSE calculations. These two methods have been applied to mono- and few-layer MoSe2, and yielded strong environmental dependent behaviors in good agreement with experiment. Other issues that arise in confined systems and materials with reduced dimensionality, such as the effect of the Tamm-Dancoff approximation to GW-BSE, and the calculation of non-radiative exciton lifetime, are also addressed. These developments have been efficiently implemented and successfully applied to real systems in an ab initio framework using the BerkeleyGW package. I would like to acknowledge collaborations with Diana Y. Qiu, Steven G. Louie, Meiyue Shao, Chao Yang, and the experimental groups of M. Crommie and F. Wang. This work was supported by Department of Energy under Contract No. DE-AC02-05CH11231 and by National Science Foundation under Grant No. DMR10-1006184.

  8. The analysis of carbohydrates in milk powder by a new "heart-cutting" two-dimensional liquid chromatography method.

    Science.gov (United States)

    Ma, Jing; Hou, Xiaofang; Zhang, Bing; Wang, Yunan; He, Langchong

    2014-03-01

    In this study, a new "heart-cutting" two-dimensional liquid chromatography method for the simultaneous determination of carbohydrate contents in milk powder was presented. In this two dimensional liquid chromatography system, a Venusil XBP-C4 analysis column was used in the first dimension ((1)D) as a pre-separation column, and a ZORBAX carbohydrates analysis column was used in the second dimension ((2)D) as the final-analysis column. The whole process was completed in less than 35 min without a particular sample preparation procedure. The capability of the new two dimensional HPLC method was demonstrated in the determination of carbohydrates in various brands of milk powder samples. A conventional one dimensional chromatography method was also proposed. The two proposed methods were both validated in terms of linearity, limits of detection, accuracy and precision. The comparison between the results obtained with the two methods showed that the new and completely automated two dimensional liquid chromatography method is more suitable for milk powder samples because of the online cleanup effect involved. Crown Copyright © 2013. Published by Elsevier B.V. All rights reserved.

  9. Development of Data Acquisition System for nuclear thermal hydraulic out-of-pile facility using the graphical programming methods

    Energy Technology Data Exchange (ETDEWEB)

    Bouaichaoui, Youcef; Berrahal, Abderezak; Halbaoui, Khaled [Birine Nuclear Research Center/CRNB/COMENA/ALGERIA, BO 180, Ain Oussera, 17200, Djelfa (Algeria)

    2015-07-01

    This paper describes the design of a data acquisition (DAQ) system connected to a PC and the development of a feedback control system that maintains the coolant temperature of the process at a desired set point using a digital controller based on a graphical programming language. The paper provides details about the data acquisition unit, shows the implementation of the controller, and presents test results. (authors)

  10. A simple method for in vivo measurement of implant rod three-dimensional geometry during scoliosis surgery.

    Science.gov (United States)

    Salmingo, Remel A; Tadano, Shigeru; Fujisaki, Kazuhiro; Abe, Yuichiro; Ito, Manabu

    2012-05-01

    Scoliosis is defined as a spinal pathology characterized as a three-dimensional deformity of the spine combined with vertebral rotation. Treatment for severe scoliosis is achieved when the scoliotic spine is surgically corrected and fixed using implanted rods and screws. Several studies performed biomechanical modeling and corrective forces measurements of scoliosis correction. These studies were able to predict the clinical outcome and measured the corrective forces acting on screws, however, they were not able to measure the intraoperative three-dimensional geometry of the spinal rod. In effect, the results of biomechanical modeling might not be so realistic and the corrective forces during the surgical correction procedure were intra-operatively difficult to measure. Projective geometry has been shown to be successful in the reconstruction of a three-dimensional structure using a series of images obtained from different views. In this study, we propose a new method to measure the three-dimensional geometry of an implant rod using two cameras. The reconstruction method requires only a few parameters, the included angle θ between the two cameras, the actual length of the rod in mm, and the location of points for curve fitting. The implant rod utilized in spine surgery was used to evaluate the accuracy of the current method. The three-dimensional geometry of the rod was measured from the image obtained by a scanner and compared to the proposed method using two cameras. The mean error in the reconstruction measurements ranged from 0.32 to 0.45 mm. The method presented here demonstrated the possibility of intra-operatively measuring the three-dimensional geometry of spinal rod. The proposed method could be used in surgical procedures to better understand the biomechanics of scoliosis correction through real-time measurement of three-dimensional implant rod geometry in vivo.
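
    The paper's reconstruction is parameterized by the included angle between the two cameras, the known rod length, and fitted curve points; as a generic illustration of how a 3D point is recovered from two calibrated views, the textbook linear (DLT) triangulation step is sketched below. The camera geometry and all numbers are assumed for the example and are not the authors' set-up.

        import numpy as np

        def triangulate_point(P1, P2, uv1, uv2):
            """Generic two-view linear (DLT) triangulation of one 3D point from its image
            coordinates uv1, uv2 and the 3x4 camera projection matrices P1, P2.
            A textbook stereo step, not the authors' theta/rod-length parameterization."""
            A = np.vstack([
                uv1[0] * P1[2] - P1[0],
                uv1[1] * P1[2] - P1[1],
                uv2[0] * P2[2] - P2[0],
                uv2[1] * P2[2] - P2[1],
            ])
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]
            return X[:3] / X[3]

        # Synthetic check: two cameras with an included angle of 30 degrees (assumed geometry)
        theta = np.radians(30.0)
        R = np.array([[np.cos(theta), 0, np.sin(theta)],
                      [0, 1, 0],
                      [-np.sin(theta), 0, np.cos(theta)]])
        K = np.diag([1000.0, 1000.0, 1.0])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, np.array([[-100.0], [0.0], [0.0]])])
        X_true = np.array([10.0, 20.0, 500.0, 1.0])
        uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
        uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
        print(np.round(triangulate_point(P1, P2, uv1, uv2), 3))   # recovers [10. 20. 500.]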

  11. Mobility and increased risk of HIV acquisition in South Africa: a mixed-method systematic review protocol.

    Science.gov (United States)

    Dzomba, Armstrong; Govender, Kaymarlin; Mashamba-Thompson, Tivani P; Tanser, Frank

    2018-02-27

    In South Africa (home of the largest HIV epidemic globally), there are high levels of mobility. While studies produced in the recent past provide useful perspectives to the mobility-HIV risk linkage, systematic analyses are needed for in-depth understanding of the complex dynamics between mobility and HIV risk. We plan to undertake an evidence-based review of existing literature connecting mobility and increased risky sexual behavior as well as risk of HIV acquisition in South Africa. We will conduct a mixed-method systematic review of peer-reviewed studies published between 2000 and 2015. In particular, we will search for relevant South African studies from the following databases: MEDLINE, EMBASE, Web of Science, and J-STOR databases. Studies explicitly examining HIV and labor migration will be eligible for inclusion, while non-empirical work and other studies on key vulnerable populations such as commercial sex workers (CSW) and men who have sex with men (MSM) will be excluded. The proposed mixed-method systematic review will employ a three-phase sequential approach [i.e., (i) identifying relevant studies through data extraction (validated by use of Distiller-SR data management software), (ii) qualitative synthesis, and (iii) quantitative synthesis including meta-analysis data]. Recurrent ideas and conclusions from syntheses will be compiled into key themes and further processed into categories and sub-themes constituting the primary and secondary outcomes of this study. Synthesis of main findings from different studies examining the subject issue here may uncover important research gaps in this literature, laying a strong foundation for research and development of sustainable localized migrant-specific HIV prevention strategies in South Africa. Our protocol was registered with PROSPERO under registration number: CRD 42017055580. ( https://www.crd.york.ac.uk/PROSPERO/display_record.asp?ID=CRD42017055580 ).

  12. Ultrahigh-dimensional variable selection method for whole-genome gene-gene interaction analysis

    Directory of Open Access Journals (Sweden)

    Ueki Masao

    2012-05-01

    Background: Genome-wide gene-gene interaction analysis using single nucleotide polymorphisms (SNPs) is an attractive way to identify genetic components that confer susceptibility to human complex diseases. Individual hypothesis testing for SNP-SNP pairs, as in a common genome-wide association study (GWAS), involves difficulty in setting an overall p-value due to the complicated correlation structure, namely, the multiple testing problem that causes unacceptable false negative results. A larger number of SNP-SNP pairs than the sample size, the so-called large p small n problem, precludes simultaneous analysis using multiple regression. A method that overcomes the above issues is thus needed. Results: We adopt an up-to-date method for ultrahigh-dimensional variable selection, termed sure independence screening (SIS), for appropriate handling of the enormous number of SNP-SNP interactions by including them as predictor variables in logistic regression. We propose a ranking strategy using promising dummy coding methods and a subsequent variable selection procedure in the SIS method suitably modified for gene-gene interaction analysis. We also implemented the procedures in a software program, EPISIS, using cost-effective GPGPU (general-purpose computing on graphics processing units) technology. EPISIS can complete an exhaustive search for SNP-SNP interactions in a standard GWAS dataset within several hours. The proposed method works successfully in simulation experiments and in application to real WTCCC (Wellcome Trust Case–control Consortium) data. Conclusions: Based on the machine-learning principle, the proposed method gives a powerful and flexible genome-wide search for various patterns of gene-gene interaction.
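
    The screening idea is simple to state: every SNP-SNP interaction term is scored by its marginal association with the phenotype, and only the top-ranked terms are carried forward into the joint (penalized) regression. The sketch below illustrates that sure-independence-screening step on toy genotype data; it mirrors the principle only, not the EPISIS implementation or its dummy-coding schemes.

        import numpy as np
        from itertools import combinations

        def sis_top_interactions(G, y, d):
            """Rank all SNP-SNP interaction terms by marginal correlation with the
            phenotype y and keep the d highest-ranked pairs (sure independence screening)."""
            scores = {}
            yc = (y - y.mean()) / y.std()
            for i, j in combinations(range(G.shape[1]), 2):
                x = G[:, i] * G[:, j]                      # simple product coding (illustrative)
                if x.std() == 0:
                    continue
                xc = (x - x.mean()) / x.std()
                scores[(i, j)] = abs(np.mean(xc * yc))     # marginal utility of the pair
            return sorted(scores, key=scores.get, reverse=True)[:d]

        # Toy data: 200 samples, 30 SNPs coded 0/1/2, phenotype driven by the (3, 7) interaction
        rng = np.random.default_rng(1)
        G = rng.integers(0, 3, size=(200, 30)).astype(float)
        y = (G[:, 3] * G[:, 7] + rng.normal(0, 1, 200) > 2).astype(float)
        print(sis_top_interactions(G, y, d=5))             # the pair (3, 7) should rank near the top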

  13. Boundary element methods applied to two-dimensional neutron diffusion problems

    International Nuclear Information System (INIS)

    Itagaki, Masafumi

    1985-01-01

    The Boundary element method (BEM) has been applied to two-dimensional neutron diffusion problems. The boundary integral equation and its discretized form have been derived. Some numerical techniques have been developed, which can be applied to critical and fixed-source problems including multi-region ones. Two types of test programs have been developed according to whether the 'zero-determinant search' or the 'source iteration' technique is adopted for criticality search. Both programs require only the fluxes and currents on boundaries as the unknown variables. The former allows a reduction in computing time and memory in comparison with the finite element method (FEM). The latter is not always efficient in terms of computing time due to the domain integral related to the inhomogeneous source term; however, this domain integral can be replaced by the equivalent boundary integral for a region with a non-multiplying medium or with a uniform source, resulting in a significant reduction in computing time. The BEM, as well as the FEM, is well suited for solving irregular geometrical problems for which the finite difference method (FDM) is unsuited. The BEM also solves problems with infinite domains, which cannot be solved by the ordinary FEM and FDM. Some simple test calculations are made to compare the BEM with the FEM and FDM, and discussions are made concerning the relative merits of the BEM and problems requiring future solution. (author)

  14. Streamline integration as a method for two-dimensional elliptic grid generation

    Energy Technology Data Exchange (ETDEWEB)

    Wiesenberger, M., E-mail: Matthias.Wiesenberger@uibk.ac.at [Institute for Ion Physics and Applied Physics, Universität Innsbruck, A-6020 Innsbruck (Austria); Held, M. [Institute for Ion Physics and Applied Physics, Universität Innsbruck, A-6020 Innsbruck (Austria); Einkemmer, L. [Numerical Analysis group, Universität Innsbruck, A-6020 Innsbruck (Austria)

    2017-07-01

    We propose a new numerical algorithm to construct a structured numerical elliptic grid of a doubly connected domain. Our method is applicable to domains with boundaries defined by two contour lines of a two-dimensional function. Furthermore, we can adapt any analytically given boundary aligned structured grid, which specifically includes polar and Cartesian grids. The resulting coordinate lines are orthogonal to the boundary. Grid points as well as the elements of the Jacobian matrix can be computed efficiently and up to machine precision. In the simplest case we construct conformal grids, yet with the help of weight functions and monitor metrics we can control the distribution of cells across the domain. Our algorithm is parallelizable and easy to implement with elementary numerical methods. We assess the quality of grids by considering both the distribution of cell sizes and the accuracy of the solution to elliptic problems. Among the tested grids these key properties are best fulfilled by the grid constructed with the monitor metric approach. Highlights: • Construct structured, elliptic numerical grids with elementary numerical methods. • Align coordinate lines with or make them orthogonal to the domain boundary. • Compute grid points and metric elements up to machine precision. • Control cell distribution by adaption functions or monitor metrics.

  15. Two-dimensional fluid-hammer analysis by the method of nearcharacteristics

    International Nuclear Information System (INIS)

    Shin, Y.W.; Kot, C.A.

    1975-05-01

    A numerical technique based on the method of nearcharacteristics is considered for solving propagation of fluid-hammer waves in a two-dimensional geometry. The solution is constructed by relating flow conditions by compatibility equations along lines called nearcharacteristics. Three choices are considered in the numerical scheme that are accurate within an error of the order of magnitude of the time step. Since the nearcharacteristics lie in the coordinate planes, the technique provides an efficient method requiring only simple interpolations in the initial plane. On the other hand, the nearcharacteristics fall outside the characteristics cone. Thus the solution procedure directly refers to conditions outside the true domain of dependence. The effect of this is studied through numerical calculation of a simple example problem and comparison with results obtained by a bicharacteristic method. Comparison is also made with existing analytical solutions and experiments. Furthermore, the three solution schemes considered are examined for numerical stability by the von Neumann test. Two of the schemes were found to be unstable; the third yielded a stability criterion equivalent to that of the bicharacteristic formulation. The stability-analysis results were confirmed by numerical experimentation. (auth)

  16. Stabilized Discretization in Spline Element Method for Solution of Two-Dimensional Navier-Stokes Problems

    Directory of Open Access Journals (Sweden)

    Neng Wan

    2014-01-01

    To address the poor geometric adaptability of the spline element method, a geometric precision spline method, which uses rational Bezier patches to represent the solution domain, is proposed for the two-dimensional viscous incompressible Navier-Stokes equations. Besides fewer unknowns, higher accuracy, and computational efficiency, it possesses such advantages as the accurate isogeometric representation of the object boundary and the unity of geometry and analysis modeling. The selection of the B-spline basis functions and the grid definition is studied, and a stable discretization satisfying the inf-sup condition is proposed. The degree of the spline functions approximating the velocity field is one order higher than that approximating the pressure field, and these functions are defined on a once-refined grid. The Dirichlet boundary conditions are imposed in weak form through the Nitsche variational principle, owing to the lack of interpolation properties of the B-spline functions. Finally, the validity of the proposed method is verified with some examples.

  17. Development of design method of thick rubber bearings for three-dimensional base isolation

    International Nuclear Information System (INIS)

    Yabana, Shuichi; Matuda, Akihiro

    2000-01-01

    Thick rubber bearings as 3-dimensional base isolators have been developed to reduce both horizontal and vertical seismic loads especially for equipment in Fast Breeder Reactors. In this report, a design method of thick rubber bearings is presented. To consider nonlinearity of vertical stiffness affected by vertical stress in the design of thick rubber bearings, Lindley's evaluation method of vertical stiffness is modified as an explicit form of vertical stress. We confirm that the presented method is efficient for design of the thick rubber bearings from comparing between test results and predicted values. Furthermore, rubber bearing tests are conducted with 1/3 scale models to evaluate mechanical properties of thick rubber bearings including ultimate limits. In the tests, horizontal and vertical characteristics of 1/3 scale model are compared with those of 1/6 scale model to discuss scale effect of test specimen. Ultimate limits such as failure shear strain of thick rubber bearings are obtained under various loading conditions. From the test results, we confirm that full scale thick rubber bearing to satisfy requirements is feasible. (author)

  18. A healing method of tympanic membrane perforations using three-dimensional porous chitosan scaffolds.

    Science.gov (United States)

    Kim, Jangho; Kim, Seung Won; Choi, Seong Jun; Lim, Ki Taek; Lee, Jong Bin; Seonwoo, Hoon; Choung, Pill-Hoon; Park, Keehyun; Cho, Chong-Su; Choung, Yun-Hoon; Chung, Jong Hoon

    2011-11-01

    Both surgical tympanoplasty and paper patch grafts are frequently performed to heal tympanic membrane (TM) perforation or chronic otitis media, despite their many disadvantages. In this study, we report a new healing method for TM perforation using three-dimensional (3D) porous chitosan scaffolds (3D chitosan scaffolds) as an alternative to surgical treatment or paper patch grafts. Various 3D chitosan scaffolds were prepared, and the structural characteristics, mechanical properties, in vitro biocompatibility, and healing effects of the 3D chitosan scaffolds as an artificial TM in in vivo animal studies were investigated. A 3D chitosan scaffold of 5 wt.% chitosan concentration showed good proliferation of TM cells in an in vitro study, as well as suitable structural characteristics and mechanical properties, as compared with either 1% or 3% chitosan. In in vivo animal studies, TM cells were able to migrate through the pores and over the surfaces of the 3D chitosan scaffolds, thus leading to more effective TM regeneration than the paper patch technique. Histological observations demonstrated that the TM regenerated with the 3D chitosan scaffold consisted of three (epidermal, connective tissue, and mucosal) layers and was thicker than normal TMs. The 3D chitosan scaffold technique may be an optimal healing method used in lieu of surgical tympanoplasty in certain cases to heal perforated TMs.

  19. Parallelized Three-Dimensional Resistivity Inversion Using Finite Elements And Adjoint State Methods

    Science.gov (United States)

    Schaa, Ralf; Gross, Lutz; Du Plessis, Jaco

    2015-04-01

    The resistivity method is one of the oldest geophysical exploration methods, which employs one pair of electrodes to inject current into the ground and one or more pairs of electrodes to measure the electrical potential difference. The potential difference is a non-linear function of the subsurface resistivity distribution described by an elliptic partial differential equation (PDE) of the Poisson type. Inversion of measured potentials solves for the subsurface resistivity represented by PDE coefficients. With increasing advances in multichannel resistivity acquisition systems (systems with more than 60 channels and full waveform recording are now emerging), inversion software require efficient storage and solver algorithms. We developed the finite element solver Escript, which provides a user-friendly programming environment in Python to solve large-scale PDE-based problems (see https://launchpad.net/escript-finley). Using finite elements, highly irregular shaped geology and topography can readily be taken into account. For the 3D resistivity problem, we have implemented the secondary potential approach, where the PDE is decomposed into a primary potential caused by the source current and the secondary potential caused by changes in subsurface resistivity. The primary potential is calculated analytically, and the boundary value problem for the secondary potential is solved using nodal finite elements. This approach removes the singularity caused by the source currents and provides more accurate 3D resistivity models. To solve the inversion problem we apply a 'first optimize then discretize' approach using the quasi-Newton scheme in form of the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method (see Gross & Kemp 2013). The evaluation of the cost function requires the solution of the secondary potential PDE for each source current and the solution of the corresponding adjoint-state PDE for the cost function gradients with respect to the subsurface
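
    The secondary-potential splitting mentioned above is the standard way to remove the source singularity: the conductivity is written as a homogeneous background plus a perturbation, the primary potential of the background is known analytically, and only the smooth secondary part is solved numerically. Schematically, in a generic form whose notation is not necessarily that of the authors,

        \nabla\cdot\bigl(\sigma\,\nabla u\bigr) = -I\,\delta(\mathbf{x}-\mathbf{x}_s),\qquad
        u = u_p + u_s,\qquad
        \nabla\cdot\bigl(\sigma_0\,\nabla u_p\bigr) = -I\,\delta(\mathbf{x}-\mathbf{x}_s),

    so that the secondary potential satisfies the singularity-free equation

        \nabla\cdot\bigl(\sigma\,\nabla u_s\bigr) = -\nabla\cdot\bigl((\sigma-\sigma_0)\,\nabla u_p\bigr).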

  20. A simplified method for rapid quantification of intracellular nucleoside triphosphates by one-dimensional thin-layer chromatography

    DEFF Research Database (Denmark)

    Jendresen, Christian Bille; Kilstrup, Mogens; Martinussen, Jan

    2011-01-01

    -pyrophosphate (PRPP), and inorganic pyrophosphate (PPi) in cell extracts. The method uses one-dimensional thin-layer chromatography (TLC) and radiolabeled biological samples. Nucleotides are resolved at the level of ionic charge in an optimized acidic ammonium formate and chloride solvent, permitting...... quantification of NTPs. The method is significantly simpler and faster than both current two-dimensional methods and high-performance liquid chromatography (HPLC)-based procedures, allowing a higher throughput while common sources of inaccuracies and technical problems are avoided. For determination of PPi...

  1. Three-dimensional display by computer graphics method of hepatocellular carcinoma using the hepatic arteriogram

    International Nuclear Information System (INIS)

    Itsubo, Mariko; Kameda, Haruo; Suzuki, Naoki; Okamura, Tetsuo

    1989-01-01

    A method for three-dimensional display of hepatocellular carcinoma from conventional hepatic arteriograms by a computer graphics technique was newly developed and applied clinically. Three-dimensional models were reconstructed from the contour lines of tumors demonstrated as hypervascular lesions by hepatic arteriography. Although its use is limited to angiographic images in which tumors are demonstrated as hypervascular nodules, this method of three-dimensional display was no worse in accuracy than that using computed tomographic images. With this method, the character of the tumor expressed by its vascularity was demonstrated clearly, and in addition the volume of the tumor was easily calculated. When the tumor underwent necrotic changes, demonstrated as an avascular lesion by hepatic arteriography and usually accompanied by a reduction in size after conservative treatment such as transcatheter arterial embolization therapy, this three-dimensional display was able to demonstrate such changes clearly. This preliminary study demonstrates the feasibility and clinical usefulness of three-dimensional display of hepatocellular carcinoma from hepatic arteriograms by the computer graphics method. (author)

  2. Noninvasive computerized scanning method for the correlation between the facial soft and hard tissues for an integrated three-dimensional anthropometry and cephalometry.

    Science.gov (United States)

    Galantucci, Luigi Maria; Percoco, Gianluca; Lavecchia, Fulvio; Di Gioia, Eliana

    2013-05-01

    The article describes a new methodology to scan and integrate facial soft tissue surface with dental hard tissue models in a three-dimensional (3D) virtual environment, for a novel diagnostic approach.The facial and the dental scans can be acquired using any optical scanning systems: the models are then aligned and integrated to obtain a full virtual navigable representation of the head of the patient. In this article, we report in detail and further implemented a method for integrating 3D digital cast models into a 3D facial image, to visualize the anatomic position of the dentition. This system uses several 3D technologies to scan and digitize, integrating them with traditional dentistry records. The acquisitions were mainly performed using photogrammetric scanners, suitable for clinics or hospitals, able to obtain high mesh resolution and optimal surface texture for the photorealistic rendering of the face. To increase the quality and the resolution of the photogrammetric scanning of the dental elements, the authors propose a new technique to enhance the texture of the dental surface. Three examples of the application of the proposed procedure are reported in this article, using first laser scanning and photogrammetry and then only photogrammetry. Using cheek retractors, it is possible to scan directly a great number of dental elements. The final results are good navigable 3D models that integrate facial soft tissue and dental hard tissues. The method is characterized by the complete absence of ionizing radiation, portability and simplicity, fast acquisition, easy alignment of the 3D models, and wide angle of view of the scanner. This method is completely noninvasive and can be repeated any time the physician needs new clinical records. The 3D virtual model is a precise representation both of the soft and the hard tissue scanned, and it is possible to make any dimensional measure directly in the virtual space, for a full integrated 3D anthropometry and

  3. [Significance of three-dimensional reconstruction as a method of preoperative planning of laparoscopic radiofrequency ablation].

    Science.gov (United States)

    Zhang, W W; Wang, H G; Shi, X J; Chen, M Y; Lu, S C

    2016-09-01

    To discuss the significance of three-dimensional reconstruction as a method of preoperative planning of laparoscopic radiofrequency ablation(LRFA). Thirty-two cases of LRFA admitted from January 2014 to December 2015 in Department of Hepatobiliary Surgery, Chinese People's Liberation Army General Hospital were analyzed(3D-LRFA group). Three-dimensional(3D) reconstruction were taken as a method of preoperative planning in 3D-LRFA group.Other 64 LRFA cases were paired over the same period without three-dimensional reconstruction before the operation (LRFA group). Hepatobiliary system contrast enhanced CT scan of 3D-RFA patients were taken by multi-slice spiral computed tomography(MSCT), and the DICOM data were processed by IQQA(®)-Liver and IQQA(®)-guide to make 3D reconstruction.Using 3D reconstruction model, diameter and scope of tumor were measured, suitable size (length and radiofrequency length) and number of RFA electrode were chosen, scope and effect of radiofrequency were simulated, reasonable needle track(s) was planed, position and angle of laparoscopic ultrasound (LUS) probe was designed and LUS image was simulated.Data of operation and recovery were collected and analyzed. Data between two sets of measurement data were compared with t test or rank sum test, and count data with χ(2) test or Fisher exact probability test.Tumor recurrence rate was analyzed with the Kaplan-Meier survival curve and Log-rank (Mantel-Cox) test. Compared with LRFA group ((216.8±66.2) minutes, (389.1±183.4) s), 3D-LRFA group ((173.3±59.4) minutes, (242.2±90.8) s) has shorter operation time(t=-3.138, P=0.002) and shorter mean puncture time(t=-2.340, P=0.021). There was no significant difference of blood loss(P=0.170), ablation rate (P=0.871) and incidence of complications(P=1.000). Compared with LRFA group ((6.3±3.9)days, (330±102)U/L, (167±64)ng/L), 3D-LRFA group ((4.3±3.1) days, (285±102) U/L, (139±43) ng/L) had shorter post-operative stay(t=-2.527, P=0.016), less

  4. A vector/parallel method for a three-dimensional transport model coupled with bio-chemical terms

    NARCIS (Netherlands)

    B.P. Sommeijer (Ben); J. Kok (Jan)

    1995-01-01

    A so-called fractional step method is considered for the time integration of a three-dimensional transport-chemical model in shallow seas. In this method, the transport part and the chemical part are treated separately by appropriate integration techniques. This separation is motivated

  5. Partition functions in even dimensional AdS via quasinormal mode methods

    International Nuclear Information System (INIS)

    Keeler, Cynthia; Ng, Gim Seng

    2014-01-01

    In this note, we calculate the one-loop determinant for a massive scalar (with conformal dimension Δ) in even-dimensional AdS_{d+1} space, using the quasinormal mode method developed in http://dx.doi.org/10.1088/0264-9381/27/12/125001 by Denef, Hartnoll, and Sachdev. Working first in two dimensions on the related Euclidean hyperbolic plane H^2, we find a series of zero modes for negative real values of Δ whose presence indicates a series of poles in the one-loop partition function Z(Δ) in the complex Δ plane; these poles contribute temperature-independent terms to the thermal AdS partition function computed in http://dx.doi.org/10.1088/0264-9381/27/12/125001. Our results match those in a series of papers by Camporesi and Higuchi, as well as Gopakumar et al. http://dx.doi.org/10.1007/JHEP11(2011)010 and Banerjee et al. http://dx.doi.org/10.1007/JHEP03(2011)147. We additionally examine the meaning of these zero modes, finding that they Wick-rotate to quasinormal modes of the AdS_2 black hole. They are also interpretable as matrix elements of the discrete series representations of SO(2,1) in the space of smooth functions on S^1. We generalize our results to general even-dimensional AdS_{2n}, again finding a series of zero modes which are related to discrete series representations of SO(2n,1), the motion group of H^{2n}.
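
    As a hedged aside (the notation below is an assumption, not quoted from this record), the Denef-Hartnoll-Sachdev quasinormal-mode method referenced above fixes the one-loop partition function, viewed as a meromorphic function of Δ, by the locations and degeneracies of its zero modes, up to an entire piece determined by matching the large-Δ heat-kernel expansion; schematically:

```latex
% Schematic statement of the quasinormal-mode (DHS) argument used above.
% \Delta_\star runs over the values of \Delta at which zero modes appear,
% d_{\Delta_\star} is their degeneracy, and \mathrm{Pol}(\Delta) is a polynomial
% fixed by the large-\Delta (heat-kernel) behaviour.
\frac{1}{Z(\Delta)} \;=\; e^{\mathrm{Pol}(\Delta)}
  \prod_{\Delta_\star} \bigl(\Delta - \Delta_\star\bigr)^{d_{\Delta_\star}}
```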

  6. Three-dimensional characterization of ODS ferritic steel using FIB-SEM serial sectioning method.

    Science.gov (United States)

    Endo, T; Sugino, Y; Ohono, N; Ukai, S; Miyazaki, N; Wang, Y; Ohnuki, S

    2014-11-01

    Considerable attention has been paid to electron tomography as a way to determine the three-dimensional (3D) structure of materials [1]. One of these techniques, focused ion beam/scanning electron microscopy (FIB-SEM) imaging, has the advantages of high resolution (10 nm), large-area observation (μm order) and simultaneous energy dispersive x-ray microanalysis (EDS)/electron backscatter diffraction (EBSD) analysis. In this study, three-dimensional EBSD analysis of cold-worked ODS ferritic steel was conducted using FIB-SEM equipment, with the aim of analyzing the resulting microstructure. Zone annealing tests were conducted on the ferritic steel [2,3], which was produced through mechanical alloying and hot extrusion. After zone annealing, specimens were mechanically polished with #400∼4000 emery paper, 1 µm diamond paste and alumina colloidal silica. Serial sectioning and 3D electron backscattering diffraction (3D-EBSD) analysis were carried out. A micro-pillar (30 x 30 x 15 µm) was prepared. EBSD measurements were carried out on each layer after serial sectioning, with a step size and milling depth of 80 nm over 30 slices. After EBSD analysis, the series of cross-sectional images were aligned according to arbitrarily specified areas and then stacked up to form a volume. Consequently, we obtained 3D-IPF maps for the ODS ferritic steel. In this specimen, the {111} and {001} grains are layered alternately, and the volume fractions of both orientations are similar. The aspect ratio increases with specimen depth. 3D-EBSD mapping is useful for the analysis of bulk material, since it yields a wealth of microstructural information, such as the shape, volume and orientation of the crystals and the grain boundaries.

  7. A two-dimensional, finite-element method for calculating TF coil response to out-of-plane Lorentz forces

    International Nuclear Information System (INIS)

    Witt, R.J.

    1989-01-01

    Toroidal field (TF) coils in fusion systems are routinely operated at very high magnetic fields. While obtaining the response of the coil to in-plane loads is relatively straightforward, the same is not true for the out-of-plane loads. Previous treatments of the out-of-plane problem have involved large, three-dimensional finite element idealizations. A new treatment of the out-of-plane problem is presented here; the model is two-dimensional in nature, and consumes far less CPU-time than three-dimensional methods. The approach assumes there exists a region of torsional deformation in the inboard leg and a bending region in the outboard leg. It also assumes the outboard part of the coil is attached to a torque frame/cylinder, which experiences primarily torsional deformation. Three-dimensional transition regions exist between the inboard and outboard legs and between the outboard leg and the torque frame. By considering several idealized problems of cylindrical shells subjected to moment distributions, it is shown that the size of these three-dimensional regions is quite small, and that the interaction between the torsional and bending regions can be treated in an equivalent two-dimensional fashion. Equivalent stiffnesses are derived to model penetration into and twist along the cylinders. These stiffnesses are then used in a special substructuring analysis to couple the three regions together. Results from the new method are compared to results from a 3D continuum model. (orig.)

  8. Unified Theoretical Frame of a Joint Transmitter-Receiver Reduced Dimensional STAP Method for an Airborne MIMO Radar

    Directory of Open Access Journals (Sweden)

    Guo Yiduo

    2016-10-01

    The unified theoretical frame of a joint transmitter-receiver reduced-dimensional Space-Time Adaptive Processing (STAP) method is studied for an airborne Multiple-Input Multiple-Output (MIMO) radar. First, based on the diverse characteristics of the transmitted waveform of the airborne MIMO radar, a uniform theoretical frame structure for reduced-dimensional joint adaptive STAP is constructed. Based on it, three reduced-dimensional STAP fixed structures are established. Finally, three reduced-rank STAP algorithms, which are suitable for a MIMO system, are presented corresponding to the three reduced-dimensional STAP fixed structures. The simulations indicate that the joint adaptive algorithms have preferable clutter suppression and anti-interference performance.

  9. Reconstruction 3-dimensional image from 2-dimensional image of status optical coherence tomography (OCT) for analysis of changes in retinal thickness

    Energy Technology Data Exchange (ETDEWEB)

    Arinilhaq,; Widita, Rena [Department of Physics, Nuclear Physics and Biophysics Research Group, Institut Teknologi Bandung (Indonesia)

    2014-09-30

    Optical Coherence Tomography (OCT) is often used in medical image acquisition for diagnosing retinal changes because it is easy to use and low in price. Unfortunately, this type of examination produces only a two-dimensional retinal image at the point of acquisition. Therefore, this study developed a method that combines and reconstructs 2-dimensional retinal images into a three-dimensional image to display the macular volume accurately. The system is built with three main stages: data acquisition, data extraction and 3-dimensional reconstruction. In the data acquisition step, Optical Coherence Tomography produced six *.jpg images for each patient, which were further extracted with MATLAB 2010a software into six one-dimensional arrays. The six arrays are combined into a 3-dimensional matrix using a kriging interpolation method with SURFER9, resulting in 3-dimensional graphics of the macula. Finally, the system provides three-dimensional color graphs based on the data distribution of a normal macula. The reconstruction system which has been designed produces three-dimensional images with a size of 481 × 481 × h (retinal thickness) pixels.
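
    A minimal sketch of the reconstruction step described above, under stated assumptions: thickness samples from a few radial line scans are treated as scattered points and interpolated onto a regular grid. The record's pipeline uses kriging via SURFER9; here scipy.interpolate.griddata is used as a stand-in for that interpolation step, and the scan geometry and toy thickness profile are invented for illustration.

```python
# Minimal sketch (not the SURFER9/kriging pipeline of the record): interpolate a
# macular thickness surface from six radial OCT line scans. scipy's griddata stands
# in for the kriging step; sizes and the toy profile are illustrative assumptions.
import numpy as np
from scipy.interpolate import griddata

n_scans, n_samples, half_width = 6, 481, 240.0

angles = np.linspace(0.0, np.pi, n_scans, endpoint=False)
r = np.linspace(-half_width, half_width, n_samples)
points, values = [], []
for theta in angles:
    thickness = 250.0 + 60.0 * np.exp(-(r / 80.0) ** 2)   # toy thickness profile (µm)
    points.append(np.column_stack([r * np.cos(theta), r * np.sin(theta)]))
    values.append(thickness)
points, values = np.vstack(points), np.concatenate(values)

# Interpolate the scattered samples onto a regular 481 x 481 grid.
gx, gy = np.meshgrid(np.linspace(-half_width, half_width, n_samples),
                     np.linspace(-half_width, half_width, n_samples))
surface = griddata(points, values, (gx, gy), method="cubic")
print(surface.shape, float(np.nanmax(surface)))
```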

  10. Lexical and semantic representations in the acquisition of L2 cognate and non-cognate words: evidence from two learning methods in children.

    Science.gov (United States)

    Comesaña, Montserrat; Soares, Ana Paula; Sánchez-Casas, Rosa; Lima, Cátia

    2012-08-01

    How bilinguals represent words in two languages and which mechanisms are responsible for second language acquisition are important questions in the bilingual and vocabulary acquisition literature. This study aims to analyse the effect of two learning methods (picture- vs. word-based method) and two types of words (cognates and non-cognates) in the early stages of children's L2 acquisition. Forty-eight native speakers of European Portuguese, all sixth graders (mean age = 10.87 years; SD = 0.85), participated in the study. None of them had prior knowledge of Basque (the L2 in this study). After a learning phase in which L2 words were learned either by a picture- or a word-based method, children were tested in a backward-word translation recognition task at two times (immediately vs. one week later). Results showed that the participants made more errors when rejecting semantically related than semantically unrelated words as correct translations (semantic interference effect). The magnitude of this effect was higher in the delayed test condition regardless of the learning method. Moreover, the overall performance of participants taught with the word-based method was better than that of participants taught with the picture-based method. The results are discussed in relation to the most relevant models of bilingual lexical processing.

  11. Application of a method for comparing one-dimensional and two-dimensional models of a ground-water flow system

    International Nuclear Information System (INIS)

    Naymik, T.G.

    1978-01-01

    To evaluate the inability of a one-dimensional ground-water model to interact continuously with surrounding hydraulic head gradients, simulations using one-dimensional and two-dimensional ground-water flow models were compared. This approach used two types of models: flow-conserving one- and two-dimensional models, and one-dimensional and two-dimensional models designed to yield two-dimensional solutions. The hydraulic conductivities of controlling features were varied, and model comparison was based on the travel times of marker particles. The solutions within each of the two model types compare reasonably well, but a three-dimensional solution is required to quantify the comparison.

  12. Analysis of one-dimensional nonequilibrium two-phase flow using control volume method

    International Nuclear Information System (INIS)

    Minato, Akihiko; Naitoh, Masanori

    1987-01-01

    A one-dimensional numerical analysis model was developed for the prediction of rapid flow transient behavior involving boiling. This model was based on six conservation equations for the time-averaged parameters of gas and liquid behavior. These equations were solved using a control volume method with explicit time integration. This model did not use a staggered mesh scheme, which is commonly used in two-phase flow analysis. Because the void fraction and the velocity of each phase were defined at the same location in the present model, effects of the void fraction on the phase velocity calculation were treated directly without interpolation. Though a non-staggered mesh scheme is liable to cause numerical instability through a zigzag pressure field, stability was achieved by employing the Godunov method. In order to verify the present analytical model, Edwards' pipe blowdown and Zaloudek's initially subcooled critical two-phase flow experiments were analyzed. Stable solutions were obtained for rarefaction wave propagation with boiling and for transient two-phase flow behavior in a broken pipe by using this model. (author)
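
    A minimal sketch of the stabilizing idea mentioned above, not of the authors' six-equation two-phase model: on a collocated (non-staggered) grid, evaluating the interface flux from the upwind state (the Godunov flux for linear advection) keeps the solution free of the odd-even "zigzag" decoupling that naive central differencing produces.

```python
# First-order Godunov (upwind-flux) update for linear advection on a collocated,
# periodic grid. Purely illustrative; parameters are assumptions.
import numpy as np

nx, L, a, cfl = 200, 1.0, 1.0, 0.8             # cells, domain length, wave speed, CFL
dx = L / nx
dt = cfl * dx / abs(a)
x = (np.arange(nx) + 0.5) * dx
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)  # square pulse initial condition

for _ in range(100):
    u_left = np.roll(u, 1)                      # state to the left of each interface
    flux = a * np.where(a >= 0.0, u_left, u)    # Godunov flux at interface i-1/2
    u = u - dt / dx * (np.roll(flux, -1) - flux)

print(u.min(), u.max())                         # stays in [0, 1]: no zigzag oscillations
```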

  13. Three-dimensional numerical study on the mechanism of anisotropic MCCI by improved MPS method

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xin, E-mail: lixin@fuji.waseda.jp; Yamaji, Akifumi

    2017-04-01

    Highlights: • 3-D simulation of a MCCI test was presented with an improved moving particle method. • The influence of thermally stable silica aggregates on MCCI has been investigated. • The mechanisms for isotropic/anisotropic ablation have been clarified mechanistically. - Abstract: In two-dimensional (2-D) molten corium-concrete interaction (MCCI) experiments with prototypic corium and siliceous concrete, the more pronounced lateral concrete erosion compared with that in the axial direction, namely anisotropic ablation, has been a research interest. However, knowledge of the mechanism behind this anisotropic ablation behavior, which is important for severe accident analysis and management, is still limited. In this paper, a 3-D simulation of the 2-D MCCI experiment VULCANO VB-U7 has been carried out with an improved Moving Particle Semi-implicit (MPS) method. Heat conduction, phase change, and corium viscosity models have been developed and incorporated into the MPS code MPS-SW-MAIN-Ver.2.0 for the current study. The influence of thermally stable silica aggregates has been investigated by setting up different simulation cases for analysis. The simulation results suggested reasonable models and assumptions to be considered in order to achieve the best estimation of MCCI with prototypic oxidic corium and siliceous concrete. The simulation results also indicated that silica aggregates can contribute to anisotropic ablation. The mechanisms for the anisotropic ablation pattern in siliceous concrete as well as the isotropic ablation pattern in limestone-rich concrete have been clarified from a mechanistic perspective.

  14. Three-dimensional reconstruction volume: a novel method for volume measurement in kidney cancer.

    Science.gov (United States)

    Durso, Timothy A; Carnell, Jonathan; Turk, Thomas T; Gupta, Gopal N

    2014-06-01

    The role of volumetric estimation is becoming increasingly important in the staging, management, and prognostication of benign and cancerous conditions of the kidney. We evaluated the use of three-dimensional reconstruction volume (3DV) in determining renal parenchymal volumes (RPV) and renal tumor volumes (RTV). We compared 3DV with the currently available methods of volume assessment and determined its interuser reliability. RPV and RTV were assessed in 28 patients who underwent robot-assisted laparoscopic partial nephrectomy for kidney cancer. Patients with a preoperative creatinine level of kidney pre- and postsurgery overestimated 3D reconstruction volumes by 15% to 102% and 12% to 101%, respectively. In addition, volumes obtained from 3DV displayed high interuser reliability regardless of experience. 3DV provides a highly reliable way of assessing kidney volumes. Given that 3DV takes into account visible anatomy, the differences observed using previously published methods can be attributed to the failure of geometry to accurately approximate kidney or tumor shape. 3DV provides a more accurate, reproducible, and clinically useful tool for urologists looking to improve patient care using analysis related to volume.

  15. A parallel algorithm for the two-dimensional time fractional diffusion equation with implicit difference method.

    Science.gov (United States)

    Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie

    2014-01-01

    It is very time consuming to solve fractional differential equations. The computational complexity of the two-dimensional time fractional diffusion equation (2D-TFDE) solved with an iterative implicit finite difference method is O(M_x M_y N^2). In this paper, we present a parallel algorithm for the 2D-TFDE and give an in-depth discussion of this algorithm. A task distribution model and a data layout with a virtual boundary are designed for this parallel algorithm. The experimental results show that the parallel algorithm compares well with the exact solution. The parallel algorithm on a single Intel Xeon X5540 CPU runs 3.16-4.17 times faster than the serial algorithm on a single CPU core. The parallel efficiency of 81 processes is up to 88.24% compared with 9 processes on a distributed memory cluster system. We expect parallel computing to become a basic tool for computationally intensive fractional applications in the near future.
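
    The N^2 factor in the complexity quoted above comes from the memory term of the time-fractional derivative: every new time step revisits the whole history. A minimal serial sketch of that memory term for a scalar test problem, using the standard L1 discretization of the Caputo derivative (this illustrates the cost structure only, not the paper's parallel 2D solver; all parameter values are assumed):

```python
# L1 scheme for the scalar time-fractional test problem
#   D_t^alpha u(t) = -lam * u(t),  u(0) = 1,  0 < alpha < 1  (Caputo derivative).
# The history sum makes each step cost O(n), hence O(N^2) over N steps.
import math

alpha, lam, dt, N = 0.8, 1.0, 0.01, 500
c = dt ** (-alpha) / math.gamma(2.0 - alpha)
b = [(j + 1) ** (1.0 - alpha) - j ** (1.0 - alpha) for j in range(N)]  # L1 weights

u = [1.0]                                     # u[0]: initial condition
for n in range(1, N + 1):
    # Memory term: weighted differences over the entire solution history.
    hist = sum(b[j] * (u[n - j] - u[n - j - 1]) for j in range(1, n))
    u.append(c * (u[n - 1] - hist) / (c + lam))

print(u[-1])   # decays more slowly than exp(-lam*t) because of the fractional memory
```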

  16. Combining Capability Assessment and Value Engineering: a New Two-dimensional Method for Software Process Improvement

    Directory of Open Access Journals (Sweden)

    Pasi Ojala

    2008-02-01

    During the last decades software process improvement (SPI) has been recognized as a usable possibility to increase the quality of software development. Implemented SPI investments have often indicated increased process capabilities as well. Recently more attention has been focused on the costs of SPI as well as on the cost-effectiveness and productivity of software development, although the roots of economics-driven software engineering originate from the very early days of software engineering research. This research combines Value Engineering and capability assessment into a usable new method in order to respond better to the challenges that cost-effectiveness and productivity have brought to software companies. This is done in part by defining the concepts of value, worth and cost, and in part by defining the Value Engineering process and the different enhancements it has to offer to software assessment. The practical industrial cases show that the proposed two-dimensional method works in practice and is useful to the assessed companies.

  17. Application of finite-element method to three-dimensional nuclear reactor analysis

    International Nuclear Information System (INIS)

    Cheung, K.Y.

    1985-01-01

    The application of the finite element method to solve a realistic one- or two-energy-group, multiregion, three-dimensional static neutron diffusion problem is studied. Linear, quadratic, and cubic serendipity box-shape elements are used. The resulting sets of simultaneous algebraic equations with thousands of unknowns are solved by the conjugate gradient method, without forming the large coefficient matrix explicitly. This avoids the complicated data management schemes needed to store such a large coefficient matrix. Three finite-element computer programs, FEM-LINEAR, FEM-QUADRATIC and FEM-CUBIC, were developed, using the linear, quadratic, and cubic box-shape elements respectively. They are self-contained, using simple nodal labeling schemes, without the need for separate finite element mesh generating routines. The efficiency and accuracy of these computer programs are then compared among themselves and with other computer codes. The cubic element model is not recommended for practical usage because it gives almost identical results to the quadratic model but requires considerably longer computation time. The linear model is less accurate than the quadratic model, but it requires much shorter computation time. For a large 3-D problem, the linear model is to be preferred since it gives acceptable accuracy. The quadratic model may be used if improved accuracy is desired.
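
    A minimal sketch of the "conjugate gradients without forming the coefficient matrix" idea described above: only the action of the operator on a vector is coded, and the solver is given that action through a LinearOperator. A 3D seven-point Laplacian with zero Dirichlet boundaries stands in for the assembled finite-element diffusion operator; the grid size and source are assumptions.

```python
# Matrix-free conjugate gradients: the coefficient matrix is never stored, only its
# matrix-vector product. A 3D 7-point Laplacian stands in for the FEM operator.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 20                                       # 20^3 unknowns
shape = (n, n, n)

def apply_A(x):
    """Action of the 3D 7-point Laplacian with zero Dirichlet boundary values."""
    u = np.zeros((n + 2, n + 2, n + 2))
    u[1:-1, 1:-1, 1:-1] = x.reshape(shape)
    y = 6.0 * u[1:-1, 1:-1, 1:-1]
    y -= u[:-2, 1:-1, 1:-1] + u[2:, 1:-1, 1:-1]
    y -= u[1:-1, :-2, 1:-1] + u[1:-1, 2:, 1:-1]
    y -= u[1:-1, 1:-1, :-2] + u[1:-1, 1:-1, 2:]
    return y.ravel()

A = LinearOperator((n**3, n**3), matvec=apply_A)
b = np.ones(n**3)                            # uniform source term
x, info = cg(A, b, maxiter=5000)
print(info, np.linalg.norm(apply_A(x) - b))  # info == 0 and a small residual
```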

  18. An efficient data structure for three-dimensional vertex based finite volume method

    Science.gov (United States)

    Akkurt, Semih; Sahin, Mehmet

    2017-11-01

    A vertex-based three-dimensional finite volume algorithm has been developed using an edge-based data structure. The mesh data structure of the given algorithm is similar to those that exist in the literature. However, the data structures are redesigned and simplified in order to fit the requirements of the vertex-based finite volume method. In order to increase cache efficiency, the data access patterns for the vertex-based finite volume method are investigated, and these data are packed/allocated in a way that they are close to each other in memory. The present data structure is not limited to tetrahedra; arbitrary polyhedra are also supported in the mesh without any additional effort. Furthermore, the present data structure also supports adaptive refinement and coarsening. For the implicit and parallel implementation of the FVM algorithm, the PETSc and MPI libraries are employed. The performance and accuracy of the present algorithm are tested for classical benchmark problems by comparing the CPU time with that of open-source algorithms.
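
    A toy illustration of the edge-based storage idea described above (not the authors' PETSc/MPI implementation): geometry enters only through per-edge coefficients attached to unique vertex pairs, and a conservative residual is accumulated in a single sweep over the edge list, so arbitrary polyhedral cells need no special treatment.

```python
# Toy edge-based data structure for a vertex-centred finite-volume residual sweep.
# Edge connectivity and coefficients below are made up for illustration.
import numpy as np

n_vertices = 5
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4], [4, 0], [1, 3]])  # vertex pairs
edge_coeff = np.array([0.5, 1.0, 0.7, 0.3, 0.9, 0.4])               # per-edge weights

u = np.array([1.0, 2.0, 0.5, 3.0, 1.5])       # unknown stored at the vertices
residual = np.zeros(n_vertices)

# One sweep over edges: compute a two-point flux once, scatter it to both end
# vertices with opposite signs, so conservation holds by construction.
for (i, j), c in zip(edges, edge_coeff):
    flux = c * (u[j] - u[i])                  # simple diffusive two-point flux
    residual[i] += flux
    residual[j] -= flux

print(residual, residual.sum())               # the sum telescopes to ~0
```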

  19. Examination and Improvement of Accuracy of Three-Dimensional Elastic Crack Solutions Obtained Using Finite Element Alternating Method

    International Nuclear Information System (INIS)

    Park, Jai Hak; Nikishkov, G. P.

    2010-01-01

    An SGBEM (symmetric Galerkin boundary element method)-FEM alternating method has been proposed by Nikishkov, Park and Atluri. This method can be used to obtain mixed-mode stress intensity factors for planar and nonplanar three-dimensional cracks having an arbitrary shape. For field applications, however, it is necessary to verify the accuracy and consistency of this method. Therefore, in this study, we investigate the effects of several factors on the accuracy of the stress intensity factors obtained using the above mentioned alternating method. The obtained stress intensity factors are compared with the known values provided in handbooks, especially in the case of internal and external circumferential semi-elliptical surface cracks. The results show that the SGBEM-FEM alternating method yields accurate stress intensity factors for three-dimensional cracks, including internal and external circumferential surface cracks and that the method can be used as a robust crack analysis tool for solving field problems

  20. A Dimensionality Reduction-Based Multi-Step Clustering Method for Robust Vessel Trajectory Analysis

    Directory of Open Access Journals (Sweden)

    Huanhuan Li

    2017-08-01

    The Shipboard Automatic Identification System (AIS) is crucial for navigation safety and maritime surveillance; data mining and pattern analysis of AIS information have attracted considerable attention in terms of both basic research and practical applications. Clustering of spatio-temporal AIS trajectories can be used to identify abnormal patterns and mine customary route data for transportation safety. Thus, the capacities of navigation safety and maritime traffic monitoring could be enhanced correspondingly. However, trajectory clustering is often sensitive to undesirable outliers and is essentially more complex compared with traditional point clustering. To overcome this limitation, a multi-step trajectory clustering method is proposed in this paper for robust AIS trajectory clustering. In particular, Dynamic Time Warping (DTW), a similarity measurement method, is introduced in the first step to measure the distances between different trajectories. The calculated distances, inversely proportional to the similarities, constitute a distance matrix in the second step. Furthermore, as a widely used dimensionality reduction method, Principal Component Analysis (PCA) is exploited to decompose the obtained distance matrix. In particular, the top k principal components with above 95% cumulative contribution rate are extracted by PCA, and the number of centers k is chosen. The k centers are found by the improved automatic center selection algorithm. In the last step, the improved center clustering algorithm with k clusters is applied to the distance matrix to achieve the final AIS trajectory clustering results. In order to improve the accuracy of the proposed multi-step clustering algorithm, an automatic algorithm for choosing the k clusters is developed according to the similarity distance. Numerous experiments on realistic AIS trajectory datasets in the bridge area waterway and Mississippi River have been implemented to compare our

  1. A Dimensionality Reduction-Based Multi-Step Clustering Method for Robust Vessel Trajectory Analysis.

    Science.gov (United States)

    Li, Huanhuan; Liu, Jingxian; Liu, Ryan Wen; Xiong, Naixue; Wu, Kefeng; Kim, Tai-Hoon

    2017-08-04

    The Shipboard Automatic Identification System (AIS) is crucial for navigation safety and maritime surveillance; data mining and pattern analysis of AIS information have attracted considerable attention in terms of both basic research and practical applications. Clustering of spatio-temporal AIS trajectories can be used to identify abnormal patterns and mine customary route data for transportation safety. Thus, the capacities of navigation safety and maritime traffic monitoring could be enhanced correspondingly. However, trajectory clustering is often sensitive to undesirable outliers and is essentially more complex compared with traditional point clustering. To overcome this limitation, a multi-step trajectory clustering method is proposed in this paper for robust AIS trajectory clustering. In particular, Dynamic Time Warping (DTW), a similarity measurement method, is introduced in the first step to measure the distances between different trajectories. The calculated distances, inversely proportional to the similarities, constitute a distance matrix in the second step. Furthermore, as a widely used dimensionality reduction method, Principal Component Analysis (PCA) is exploited to decompose the obtained distance matrix. In particular, the top k principal components with above 95% cumulative contribution rate are extracted by PCA, and the number of centers k is chosen. The k centers are found by the improved automatic center selection algorithm. In the last step, the improved center clustering algorithm with k clusters is implemented on the distance matrix to achieve the final AIS trajectory clustering results. In order to improve the accuracy of the proposed multi-step clustering algorithm, an automatic algorithm for choosing the k clusters is developed according to the similarity distance. Numerous experiments on realistic AIS trajectory datasets in the bridge area waterway and Mississippi River have been implemented to compare our proposed method with
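
    A compact sketch of the pipeline described above, under stated assumptions: pairwise DTW distances, PCA of the distance matrix keeping components up to 95% cumulative variance, then clustering. The authors' automatic choice of k and improved centre-selection algorithm are not reproduced; scikit-learn's KMeans stands in for the final clustering step, and the toy trajectories are invented.

```python
# Sketch: DTW distance matrix -> PCA of the matrix -> k-means-style clustering.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def dtw(a, b):
    """Classical O(len(a)*len(b)) dynamic time warping distance between 2D tracks."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(0)
# Two toy "routes": straight tracks with opposite headings, plus noise.
trajectories = [np.column_stack([np.linspace(0, 1, 30), s * np.linspace(0, 1, 30)])
                + 0.02 * rng.standard_normal((30, 2))
                for s in [1.0] * 5 + [-1.0] * 5]

n = len(trajectories)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw(trajectories[i], trajectories[j])

# Keep principal components up to 95% cumulative variance, then cluster with k = 2
# (the paper chooses k and the centres automatically).
features = PCA(n_components=0.95, svd_solver="full").fit_transform(dist)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)                                  # the two headings separate cleanly
```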

  2. Optimized negative dimensional integration method (NDIM) and multiloop Feynman diagram calculation

    International Nuclear Information System (INIS)

    Gonzalez, Ivan; Schmidt, Ivan

    2007-01-01

    We present an improved form of the integration technique known as NDIM (negative dimensional integration method), which is a powerful tool for the analytical evaluation of Feynman diagrams. Using this technique we study a φ^3 + φ^4 theory in D = 4 - 2ε dimensions, considering generic topologies of L loops and E independent external momenta, and where the propagator powers are arbitrary. The method transforms the Schwinger parametric integral associated with the diagram into a multiple series expansion, whose main characteristic is that the argument contains several Kronecker deltas which appear naturally in the application of the method, and which we call the diagram presolution. The optimization we present here consists of a procedure that minimizes the series multiplicity, through appropriate factorizations in the multinomials that appear in the parametric integral, and which maximizes the number of Kronecker deltas that are generated in the process. The solutions are presented in terms of generalized hypergeometric functions, obtained once the Kronecker deltas have been used in the series. Although the technique is general, we apply it to cases in which there are 2 or 3 different energy scales (masses or kinematic variables associated with the external momenta), obtaining solutions in terms of a finite sum of generalized hypergeometric series of 1 and 2 variables respectively, each of them expressible as ratios between the different energy scales that characterize the topology. The main result is a method capable of solving Feynman integrals, expressing the solutions as hypergeometric series of multiplicity (n-1), where n is the number of energy scales present in the diagram
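
    For orientation only (a generic identity, not a result taken from this record): the Kronecker deltas mentioned above arise when a power of a sum of Schwinger parameters is expanded multinomially, which turns the parametric integral into a constrained multiple series.

```latex
% Multinomial expansion behind the NDIM "presolution": the constraint on the
% summation indices appears as a Kronecker delta that later collapses sums.
\Bigl(\sum_{i=1}^{k} x_i\Bigr)^{\!n}
  \;=\; \sum_{n_1,\dots,n_k \ge 0} n!\,
        \delta_{n,\;n_1+\dots+n_k}\,
        \prod_{i=1}^{k} \frac{x_i^{\,n_i}}{n_i!}
```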

  3. Estimation of center line and diameter of brain blood vessel using three-dimensional blood vessel matching method with head three-dimensional CTA image

    International Nuclear Information System (INIS)

    Maekawa, Masashi; Shinohara, Toshihiro; Nakayama, Masato; Nakasako, Noboru

    2010-01-01

    To support and automate the diagnosis of brain blood vessel disease, a novel method to obtain the center line and the diameter of a blood vessel from a three-dimensional head computed tomographic angiography (CTA) image is proposed. Although line thinning processing with distance transform or gray-level information is generally used to obtain the blood vessel center line, this approach is not inherently designed to obtain the center line and tends to yield extra lines depending on the CTA image. In this study, the center line of the blood vessel is obtained by tracing the vessel. The blood vessel is traced by sequentially estimating the center point and direction of the blood vessel. The center point and direction of the blood vessel are estimated by taking the correlation between the blood vessel and a solid model of the blood vessel that is designed with noise influence in mind. In addition, the vessel diameter is estimated by correlating the blood vessel with a blood vessel model whose diameter is variable. The validity of the proposed method is confirmed by experimentally applying it to an actual three-dimensional head CTA image. (author)

  4. Data acquisition

    International Nuclear Information System (INIS)

    Clout, P.N.

    1982-01-01

    Data acquisition systems are discussed for molecular biology experiments using synchrotron radiation sources. The data acquisition system requirements are considered. The components of the solution are described including hardwired solutions and computer-based solutions. Finally, the considerations for the choice of the computer-based solution are outlined. (U.K.)

  5. The usefulness and the problems of attenuation correction using simultaneous transmission and emission data acquisition method. Studies on normal volunteers and phantom

    International Nuclear Information System (INIS)

    Kijima, Tetsuji; Kumita, Shin-ichiro; Mizumura, Sunao; Cho, Keiichi; Ishihara, Makiko; Toba, Masahiro; Kumazaki, Tatsuo; Takahashi, Munehiro.

    1997-01-01

    Attenuation correction using a simultaneous transmission data (TCT) and emission data (ECT) acquisition method was applied to 201Tl myocardial SPECT in ten normal adults and a phantom in order to validate the efficacy of attenuation correction using this method. The normal adult studies demonstrated improved 201Tl accumulation in the septal wall and the posterior wall of the left ventricle and relatively decreased activities in the lateral wall with attenuation correction (p 201Tl uptake organs such as the liver and the stomach pushed up the activities in the septal wall and the posterior wall. Cardiac dynamic phantom studies showed that the partial volume effect due to cardiac motion contributed to under-correction of the apex, which might be overcome using gated SPECT. Although simultaneous TCT and ECT acquisition was considered an advantageous method for attenuation correction, mis-correction of specific myocardial segments should be taken into account in the assessment of attenuation-corrected images. (author)

  6. Hydrogeophysical exploration of three-dimensional salinity anomalies with the time-domain electromagnetic method (TDEM)

    DEFF Research Database (Denmark)

    Bauer-Gottwein, Peter; Gondwe, Bibi Ruth Neuman; Christiansen, Lars

    2010-01-01

    Delta is presented. Evaporative salt enrichment causes a strong salinity anomaly under the island. We show that the TDEM field data cannot be interpreted in terms of standard one-dimensional layered-earth TDEM models, because of the strongly three-dimensional nature of the salinity anomaly. Three...

  7. Ontology-based configuration of problem-solving methods and generation of knowledge-acquisition tools: application of PROTEGE-II to protocol-based decision support.

    Science.gov (United States)

    Tu, S W; Eriksson, H; Gennari, J H; Shahar, Y; Musen, M A

    1995-06-01

    PROTEGE-II is a suite of tools and a methodology for building knowledge-based systems and domain-specific knowledge-acquisition tools. In this paper, we show how PROTEGE-II can be applied to the task of providing protocol-based decision support in the domain of treating HIV-infected patients. To apply PROTEGE-II, (1) we construct a decomposable problem-solving method called episodic skeletal-plan refinement, (2) we build an application ontology that consists of the terms and relations in the domain, and of method-specific distinctions not already captured in the domain terms, and (3) we specify mapping relations that link terms from the application ontology to the domain-independent terms used in the problem-solving method. From the application ontology, we automatically generate a domain-specific knowledge-acquisition tool that is custom-tailored for the application. The knowledge-acquisition tool is used for the creation and maintenance of domain knowledge used by the problem-solving method. The general goal of the PROTEGE-II approach is to produce systems and components that are reusable and easily maintained. This is the rationale for constructing ontologies and problem-solving methods that can be composed from a set of smaller-grained methods and mechanisms. This is also why we tightly couple the knowledge-acquisition tools to the application ontology that specifies the domain terms used in the problem-solving systems. Although our evaluation is still preliminary, for the application task of providing protocol-based decision support, we show that these goals of reusability and easy maintenance can be achieved. We discuss design decisions and the tradeoffs that have to be made in the development of the system.

  8. Three-dimensional registration methods for multi-modal magnetic resonance neuroimages

    International Nuclear Information System (INIS)

    Triantafyllou, C.

    2001-08-01

    In this thesis, image alignment techniques are developed and evaluated for applications in neuroimaging. In particular, the problem of combining cross-sequence MRI (Magnetic Resonance Imaging) intra-subject scans is considered. The challenge in this case is to find topographically uniform mappings in order to register (find a mapping between) low resolution echo-planar images and their high resolution structural counterparts. Such an approach enables us to effectually fuse, in a clinically useful way, information across scans. This dissertation devises a new framework by which this may be achieved, involving appropriate optimisation of the required mapping functions, which turn out to be non-linear and high-dimensional in nature. Novel ways to constrain and regularise these functions to enhance the computational speed of the process and the accuracy of the solution are also studied. The algorithms, whose characteristics are demonstrated for this specific application should be fully generalisable to other medical imaging modalities and potentially, other areas of image processing. To begin with, some existing registration methods are reviewed, followed by the introduction of an automated global 3-D registration method. Its performance is investigated on extracted cortical and ventricular surfaces by utilising the principles of the chamfer matching approach. Evaluations on synthetic and real data-sets, are performed to show that removal of global image differences is possible in principle, although the true accuracy of the method depends on the type of geometrical distortions present. These results also reveal that this class of algorithm is unable to solve more localised variations and higher order magnetic field distortions between the images. These facts motivate the development of a high-dimensional 3-D registration method capable of effecting a one-to-one correspondence by capturing the localised differences. This method was seen to account not only for

  9. Reduction of vascular artifact on T1-weighted images of the brain by using three-dimensional double IR fast spoiled gradient echo recalled acquisition in the steady state (FSPGR) at 3.0 Tesla

    International Nuclear Information System (INIS)

    Fujiwara, Yasuhiro; Yamaguchi, Isao; Ookoshi, Yusuke; Ootani, Yuriko; Matsuda, Tsuyoshi; Ishimori, Yoshiyuki; Hayashi, Hiroyuki; Miyati, Tosiaki; Kimura, Hirohiko

    2007-01-01

    The purpose of this study was to decrease vascular artifacts caused by the in-flow effect in three-dimensional inversion recovery prepared fast spoiled gradient recalled acquisition in the steady state (3D IR FSPGR) at 3.0 Tesla. We developed 3D double IR FSPGR and investigated the signal characteristics of the new sequence. The 3D double IR FSPGR sequence uses two inversion pulses, the first for obtaining tissue contrast and the second for nulling the vascular signal, which is applied during the first IR period at the neck region. We optimized the scan parameters based on both phantom and in vivo studies. As a result, the optimized parameters (1st TI=700 ms, 2nd TI=400 ms) produced much less vessel signal than conventional 3D IR FSPGR over a wide imaging range, while preserving the signal-to-noise ratio (SNR) and gray/white matter contrast. Moreover, the decreased artifact was also confirmed by visual inspection of the images obtained in vivo using those parameters. Thus, 3D double IR FSPGR is a useful sequence for the acquisition of T1-weighted images at 3.0 Tesla. (author)
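
    For context on why a second inversion pulse with its own TI can null the inflowing blood signal, the standard inversion-recovery expression for the longitudinal magnetization (assuming an ideal 180° pulse and full recovery between repetitions; this is textbook background, not the authors' optimization) is:

```latex
% Longitudinal magnetization after an ideal inversion pulse; the signal is nulled
% when TI = T1 * ln 2 for the tissue (here, inflowing blood) to be suppressed.
M_z(\mathrm{TI}) \;=\; M_0\bigl(1 - 2\,e^{-\mathrm{TI}/T_1}\bigr),
\qquad
M_z = 0 \;\Longleftrightarrow\; \mathrm{TI} = T_1 \ln 2
```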

  10. A hybrid method for quasi-three-dimensional slope stability analysis in a municipal solid waste landfill

    International Nuclear Information System (INIS)

    Yu, L.; Batlle, F.

    2011-01-01

    Highlights: • A quasi-three-dimensional slope stability analysis method was proposed. • The proposed method is a good engineering tool for 3D slope stability analysis. • The factor of safety from 3D analysis is higher than from 2D analysis. • 3D analysis results are more sensitive to cohesion than 2D analysis. - Abstract: Limited space for accommodating the ever increasing mounds of municipal solid waste (MSW) demands that the capacity of MSW landfills be maximized by building landfills to greater heights with steeper slopes. This situation has raised concerns regarding the stability of high MSW landfills. A hybrid method for quasi-three-dimensional slope stability analysis based on finite element stress analysis was applied in a case study at a MSW landfill in north-east Spain. Potential slides can be assumed to be located within the waste mass due to the lack of weak foundation soils and geosynthetic membranes at the landfill base. The only triggering factors of deep-seated slope failure are the higher leachate level and the relatively high and steep slope at the front. The valley-shaped geometry and layered construction procedure at the site make three-dimensional slope stability analyses necessary for this landfill. In the finite element stress analysis, variations of the leachate level during construction and continuous settlement of the landfill were taken into account. The 'equivalent' three-dimensional factor of safety (FoS) was computed from the individual results of the two-dimensional analyses for a series of evenly spaced cross sections within the potential sliding body. Results indicate that the hybrid method for quasi-three-dimensional slope stability analysis adopted in this paper is capable of roughly locating the spatial position of the potential sliding mass. This easy-to-use method can serve as an engineering tool in the preliminary estimate of the FoS as well as the approximate position and extent of the potential sliding mass. The result that

  11. A method of integration of atomistic simulations and continuum mechanics by collecting of dynamical systems with dimensional reduction

    International Nuclear Information System (INIS)

    Kaczmarek, J.

    2002-01-01

    Elementary processes responsible for phenomena in materials are frequently related to scales close to the atomic one. Therefore atomistic simulations are important for materials science. On the other hand, continuum mechanics is widely applied in the mechanics of materials. It seems inevitable that both methods will gradually be integrated. A multiscale method for the integration of these approaches, called a collection of dynamical systems with dimensional reduction, is introduced in this work. The dimensional reduction procedure realizes the transition between models at various scales, from an elementary dynamical system (EDS) to a reduced dynamical system (RDS). Mappings which transform variables and forces, a skeletal dynamical system (SDS) and a set of approximation and identification methods are the main components of this procedure. The skeletal dynamical system is a set of dynamical systems parameterized by some constants and has variables related to the dimensionally reduced model. These constants are identified with the aid of solutions of the elementary dynamical system. As a result, we obtain a dimensionally reduced dynamical system which describes phenomena in an averaged way in comparison with the EDS. The concept of integrating atomistic simulations with continuum mechanics consists in using a dynamical system describing the evolution of atoms as the elementary dynamical system. Then, we introduce a continuum skeletal dynamical system within the dimensional reduction procedure. In order to construct such a system we have to modify the continuum mechanics formulation to some degree. Namely, we formalize the scale of averaging for the continuum theory and, as a result, consider a continuum with finite-dimensional fields only. Then, realization of the dimensional reduction is possible. A numerical example of the realization of the dimensional reduction procedure is shown. We consider a one-dimensional chain of atoms interacting via the Lennard-Jones potential. The evolution of this system is described by an elementary
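
    A minimal sketch of the kind of elementary dynamical system (EDS) mentioned at the end of the abstract: a one-dimensional chain of atoms with nearest-neighbour Lennard-Jones interactions, integrated by velocity Verlet. Reduced units, nearest-neighbour truncation and the chosen parameters are simplifying assumptions, not the author's full formulation.

```python
# Toy EDS: 1D Lennard-Jones chain (nearest neighbours only) with velocity Verlet.
import numpy as np

n, dt, steps = 16, 1e-3, 2000
x = np.arange(n) * 2.0 ** (1.0 / 6.0)         # start near the LJ minimum spacing
v = np.zeros(n)
x[0] -= 0.05                                  # small initial perturbation

def forces(x):
    r = np.diff(x)                            # nearest-neighbour separations
    f_pair = 24.0 * (2.0 / r**13 - 1.0 / r**7)   # -dU/dr for U(r) = 4(r^-12 - r^-6)
    f = np.zeros_like(x)
    f[:-1] -= f_pair                          # reaction on the left atom of each pair
    f[1:] += f_pair                           # action on the right atom of each pair
    return f

f = forces(x)
for _ in range(steps):                        # velocity Verlet
    v += 0.5 * dt * f
    x += dt * v
    f = forces(x)
    v += 0.5 * dt * f

energy = 0.5 * np.sum(v**2) + np.sum(4.0 * (np.diff(x)**-12 - np.diff(x)**-6))
print(energy)                                 # approximately conserved over the run
```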

  12. A proposal of a three-dimensional CT measurement method of maxillofacial structure

    International Nuclear Information System (INIS)

    Tanaka, Ray; Hayashi, Takafumi

    2007-01-01

    Three-dimensional CT measurement is used in practice in order to grasp the pathological condition in diseases such as temporomandibular joint disorder, maxillofacial anomaly, jaw deformity, or fracture, which cause morphologic changes of the maxillofacial bones. For the 3D measurement, a unique system in which volume-rendered 3D images are viewed with simultaneous reference to axial images combined with coronal and sagittal multi-planar reconstruction (MPR) images (we call this the MPR referential method) is employed in order to define the measurement points. Our purpose in this report is to indicate the usefulness of this unique method by comparing it with the common way of defining the measurement points on 3D reconstruction images only, without consulting MPR images. Clinical CT data obtained from a male patient with skeletal malocclusion were used. Contiguous axial images were reconstructed at 4 times magnification, with a reconstruction interval of 0.5 mm, focused on the temporomandibular joint region on his left side. After these images were converted to Digital Imaging and Communications in Medicine (DICOM) format and sent to a personal computer (PC), a 3D reconstruction image was created using a free 3D DICOM medical image viewer. The coordinates of 3 measurement points (the lateral and medial pole of the mandibular condyle, and the left foramen ovale) were defined with MPR images (MPR coordinates) as reference coordinates, and then the coordinates that were defined on the 3D reconstruction image only, without consulting the MPR images (3D coordinates), were compared to the MPR coordinates. Three examiners were engaged independently 10 times for every measurement point. In our results, there was no exact correspondence between the 3D coordinates and the MPR coordinates, and the distribution of the 3D coordinates varied for every measurement point and every observer. We deemed that the "MPR referential method" is useful to assess the location of the target point of anatomical

  13. A safe and efficient method to retrieve mesenchymal stem cells from three-dimensional fibrin gels.

    Science.gov (United States)

    Carrion, Bita; Janson, Isaac A; Kong, Yen P; Putnam, Andrew J

    2014-03-01

    Mesenchymal stem cells (MSCs) display multipotent characteristics that make them ideal for potential therapeutic applications. MSCs are typically cultured as monolayers on tissue culture plastic, but there is increasing evidence suggesting that they may lose their multipotency over time in vitro and eventually cease to retain any resemblance to in vivo resident MSCs. Three-dimensional (3D) culture systems that more closely recapitulate the physiological environment of MSCs and other cell types are increasingly explored for their capacity to support and maintain the cell phenotypes. In much of our own work, we have utilized fibrin, a natural protein-based material that serves as the provisional extracellular matrix during wound healing. Fibrin has proven to be useful in numerous tissue engineering applications and has been used clinically as a hemostatic material. Its rapid self-assembly driven by thrombin-mediated alteration of fibrinogen makes fibrin an attractive 3D substrate, in which cells can adhere, spread, proliferate, and undergo complex morphogenetic programs. However, there is a significant need for simple cost-effective methods to safely retrieve cells encapsulated within fibrin hydrogels to perform additional analyses or use the cells for therapy. Here, we present a safe and efficient protocol for the isolation of MSCs from 3D fibrin gels. The key ingredient of our successful extraction method is nattokinase, a serine protease of the subtilisin family that has a strong fibrinolytic activity. Our data show that MSCs recovered from 3D fibrin gels using nattokinase are not only viable but also retain their proliferative and multilineage potentials. Demonstrated for MSCs, this method can be readily adapted to retrieve any other cell type from 3D fibrin gel constructs for various applications, including expansion, bioassays, and in vivo implantation.

  14. A three-dimensional strain measurement method in elastic transparent materials using tomographic particle image velocimetry.

    Directory of Open Access Journals (Sweden)

    Azuma Takahashi

    The mechanical interaction between blood vessels and medical devices can induce strains in these vessels. Measuring and understanding these strains is necessary to identify the causes of vascular complications. This study develops a method to measure the three-dimensional (3D) distribution of strain using tomographic particle image velocimetry (Tomo-PIV) and compares the measurement accuracy with the gauge strain in tensile tests. The test system for measuring the 3D strain distribution consists of two cameras, a laser, a universal testing machine, an acrylic chamber with a glycerol-water solution for matching the refractive index to that of the silicone, and dumbbell-shaped specimens mixed with fluorescent tracer particles. 3D images of the particles were reconstructed from 2D images using a multiplicative algebraic reconstruction technique (MART) and motion tracking enhancement. Distributions of the 3D displacements were calculated using digital volume correlation. To evaluate the accuracy of the measurement method in terms of particle density and interrogation voxel size, the gauge strain and one of the two cameras for Tomo-PIV were used as a video-extensometer in the tensile test. The results show that the optimal particle density and interrogation voxel size are 0.014 particles per pixel and 40 × 40 × 40 voxels with a 75% overlap. The maximum measurement error was maintained at less than 2.5% in the 4-mm-wide region of the specimen. We successfully developed a method to experimentally measure the 3D strain distribution in an elastic silicone material using Tomo-PIV and fluorescent particles. To the best of our knowledge, this is the first report that applies Tomo-PIV to investigate 3D strain measurements in elastic materials with large deformation and validates the measurement accuracy.
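
    A minimal sketch of the digital volume correlation step only (the Tomo-PIV/MART reconstruction itself is not reproduced): the integer displacement of an interrogation subvolume is read off the peak of an FFT-based cross-correlation. The synthetic particle field, subvolume size and imposed shift are assumptions, and subvoxel refinement is omitted.

```python
# One digital-volume-correlation step on a synthetic 40x40x40 particle subvolume.
import numpy as np

rng = np.random.default_rng(1)
box, shift = 40, (3, -2, 1)                       # subvolume edge (voxels), true shift

vol0 = np.zeros((box, box, box))
idx = rng.integers(5, box - 5, size=(60, 3))      # 60 random "particle" voxels
vol0[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
vol1 = np.roll(vol0, shift, axis=(0, 1, 2))       # "deformed" volume: a rigid shift here

# Cross-correlation via FFT; the peak location gives the integer displacement.
corr = np.fft.ifftn(np.fft.fftn(vol0).conj() * np.fft.fftn(vol1)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
disp = [((p + box // 2) % box) - box // 2 for p in peak]   # wrap to signed shifts
print(disp)                                        # recovers [3, -2, 1]
```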

  15. A coupled Eulerian/Lagrangian method for the solution of three-dimensional vortical flows

    Science.gov (United States)

    Felici, Helene Marie

    1992-01-01

    A coupled Eulerian/Lagrangian method is presented for the reduction of numerical diffusion observed in solutions of three-dimensional rotational flows using standard Eulerian finite-volume time-marching procedures. A Lagrangian particle tracking method using particle markers is added to the Eulerian time-marching procedure and provides a correction of the Eulerian solution. In turn, the Eulerian solutions is used to integrate the Lagrangian state-vector along the particles trajectories. The Lagrangian correction technique does not require any a-priori information on the structure or position of the vortical regions. While the Eulerian solution ensures the conservation of mass and sets the pressure field, the particle markers, used as 'accuracy boosters,' take advantage of the accurate convection description of the Lagrangian solution and enhance the vorticity and entropy capturing capabilities of standard Eulerian finite-volume methods. The combined solution procedures is tested in several applications. The convection of a Lamb vortex in a straight channel is used as an unsteady compressible flow preservation test case. The other test cases concern steady incompressible flow calculations and include the preservation of turbulent inlet velocity profile, the swirling flow in a pipe, and the constant stagnation pressure flow and secondary flow calculations in bends. The last application deals with the external flow past a wing with emphasis on the trailing vortex solution. The improvement due to the addition of the Lagrangian correction technique is measured by comparison with analytical solutions when available or with Eulerian solutions on finer grids. The use of the combined Eulerian/Lagrangian scheme results in substantially lower grid resolution requirements than the standard Eulerian scheme for a given solution accuracy.

  16. Cellular phone-based image acquisition and quantitative ratiometric method for detecting cocaine and benzoylecgonine for biological and forensic applications.

    Science.gov (United States)

    Cadle, Brian A; Rasmus, Kristin C; Varela, Juan A; Leverich, Leah S; O'Neill, Casey E; Bachtell, Ryan K; Cooper, Donald C

    2010-01-01

    Here we describe the first report of using low-cost cellular or web-based digital cameras to image and quantify standardized rapid immunoassay strips as a new point-of-care diagnostic and forensics tool with health applications. Quantitative ratiometric pixel density analysis (QRPDA) is an automated method requiring end-users to utilize inexpensive (∼ $1 USD/each) immunotest strips, a commonly available web or mobile phone camera or scanner, and internet or cellular service. A model is described whereby a central computer server and freely available IMAGEJ image analysis software records and analyzes the incoming image data with time-stamp and geo-tag information and performs the QRPDA using custom JAVA based macros (http://www.neurocloud.org). To demonstrate QRPDA we developed a standardized method using rapid immunotest strips directed against cocaine and its major metabolite, benzoylecgonine. Images from standardized samples were acquired using several devices, including a mobile phone camera, web cam, and scanner. We performed image analysis of three brands of commercially available dye-conjugated anti-cocaine/benzoylecgonine (COC/BE) antibody test strips in response to three different series of cocaine concentrations ranging from 0.1 to 300 ng/ml and BE concentrations ranging from 0.003 to 0.1 ng/ml. This data was then used to create standard curves to allow quantification of COC/BE in biological samples. Across all devices, QRPDA quantification of COC and BE proved to be a sensitive, economical, and faster alternative to more costly methods, such as gas chromatography-mass spectrometry, tandem mass spectrometry, or high pressure liquid chromatography. The limit of detection was determined to be between 0.1 and 5 ng/ml. To simulate conditions in the field, QRPDA was found to be robust under a variety of image acquisition and testing conditions that varied temperature, lighting, resolution, magnification and concentrations of biological fluid in a sample. To
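
    A minimal sketch of the ratiometric idea behind QRPDA (the published workflow runs as ImageJ JAVA macros on a central server, per the abstract): average the background-subtracted darkness of a test-line band and of a control-line band and report their ratio, which is then read against a standard curve. The synthetic strip image and ROI positions are assumptions.

```python
# Ratiometric band-density readout on a synthetic lateral-flow strip image.
import numpy as np

h, w = 120, 400
strip = np.full((h, w), 230.0)                    # bright background (8-bit scale)
strip[:, 150:170] -= 120.0                        # control line (always dark)
strip[:, 250:270] -= 45.0                         # test line (darkness tracks analyte)
strip += np.random.default_rng(2).normal(0.0, 3.0, strip.shape)

def band_density(img, x0, x1, background=230.0):
    """Mean darkness of a vertical band relative to the background level."""
    return float(np.clip(background - img[:, x0:x1], 0.0, None).mean())

control = band_density(strip, 150, 170)
test = band_density(strip, 250, 270)
ratio = test / control                            # ratiometric readout
print(round(control, 1), round(test, 1), round(ratio, 3))
# 'ratio' would then be mapped to a COC/BE concentration via a standard curve.
```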

  17. Cellular Phone-Based Image Acquisition and Quantitative Ratiometric Method for Detecting Cocaine and Benzoylecgonine for Biological and Forensic Applications

    Directory of Open Access Journals (Sweden)

    Brian A. Cadle

    2010-01-01

    Here we describe the first report of using low-cost cellular or web-based digital cameras to image and quantify standardized rapid immunoassay strips as a new point-of-care diagnostic and forensics tool with health applications. Quantitative ratiometric pixel density analysis (QRPDA) is an automated method requiring end-users to utilize inexpensive (~$1 USD each) immunotest strips, a commonly available web or mobile phone camera or scanner, and internet or cellular service. A model is described whereby a central computer server and freely available IMAGEJ image analysis software records and analyzes the incoming image data with time-stamp and geo-tag information and performs the QRPDA using custom JAVA based macros (http://www.neurocloud.org). To demonstrate QRPDA we developed a standardized method using rapid immunotest strips directed against cocaine and its major metabolite, benzoylecgonine. Images from standardized samples were acquired using several devices, including a mobile phone camera, web cam, and scanner. We performed image analysis of three brands of commercially available dye-conjugated anti-cocaine/benzoylecgonine (COC/BE) antibody test strips in response to three different series of cocaine concentrations ranging from 0.1 to 300 ng/ml and BE concentrations ranging from 0.003 to 0.1 ng/ml. This data was then used to create standard curves to allow quantification of COC/BE in biological samples. Across all devices, QRPDA quantification of COC and BE proved to be a sensitive, economical, and faster alternative to more costly methods, such as gas chromatography-mass spectrometry, tandem mass spectrometry, or high pressure liquid chromatography. The limit of detection was determined to be between 0.1 and 5 ng/ml. To simulate conditions in the field, QRPDA was found to be robust under a variety of image acquisition and testing conditions that varied temperature, lighting, resolution, magnification and concentrations of biological fluid

  18. Extended Jacobi Elliptic Function Rational Expansion Method and Its Application to (2+1)-Dimensional Stochastic Dispersive Long Wave System

    International Nuclear Information System (INIS)

    Song Lina; Zhang Hongqing

    2007-01-01

    In this work, by means of a generalized method and symbolic computation, we extend the Jacobi elliptic function rational expansion method to uniformly construct a series of stochastic wave solutions for stochastic evolution equations. To illustrate the effectiveness of our method, we take the (2+1)-dimensional stochastic dispersive long wave system as an example. We not only have obtained some known solutions, but also have constructed some new rational formal stochastic Jacobi elliptic function solutions.

  19. Use of exact albedo conditions in numerical methods for one-dimensional one-speed discrete ordinates eigenvalue problems

    International Nuclear Information System (INIS)

    Abreu, M.P. de

    1994-01-01

    The use of exact albedo boundary conditions in numerical methods applied to one-dimensional, one-speed discrete ordinates (S_N) eigenvalue problems for nuclear reactor global calculations is described. An albedo operator that treats the reflector region around a nuclear reactor core implicitly and exactly was derived. To illustrate the method's efficiency and accuracy, the conventional linear diamond method with the albedo option was used to solve typical model problems. (author)
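
    For reference, an albedo boundary condition of the kind described above relates the angular flux re-entering the core at the core-reflector interface to the flux leaving it; a generic one-dimensional slab form for the discrete ordinates directions (not the paper's exact operator) is:

```latex
% Generic albedo condition at the core-reflector interface x = x_b of a slab:
% the flux re-entering the core in direction -\mu_m is a fraction \alpha of the
% outgoing flux in the mirrored direction +\mu_m.
\psi(x_b, -\mu_m) \;=\; \alpha\,\psi(x_b, \mu_m),
\qquad \mu_m > 0,\quad m = 1,\dots,N/2
```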

  20. Program design of data acquisition in Windows

    International Nuclear Information System (INIS)

    Cai Jianxin; Yan Huawen

    2004-01-01

    Several methods for designing data acquisition programs based on Microsoft Windows are introduced, and their respective advantages and disadvantages are analyzed in detail. The data acquisition modes applicable to each method are also pointed out. This should make it more convenient for programmers to develop data acquisition systems. (authors)

  1. Two-dimensional transient thermal analysis of a fuel rod by finite volume method

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Rhayanne Yalle Negreiros; Silva, Mário Augusto Bezerra da; Lira, Carlos Alberto de Oliveira, E-mail: ryncosta@gmail.com, E-mail: mabs500@gmail.com, E-mail: cabol@ufpe.br [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear

    2017-07-01

    One of the greatest concerns when studying a nuclear reactor is to guarantee safe temperature limits throughout the system at all times. The preservation of the core structure, together with the confinement of radioactive material in a controlled system, is the main focus during the operation of a reactor. The purpose of this paper is to present the temperature distribution for a nominal channel of the AP1000 reactor developed by Westinghouse Co. during steady-state and transient operations. In the analysis, the system was subjected to normal operating conditions and then to blockages of the coolant flow. The time necessary to reach a new safe stationary state (when it was possible) is presented. The methodology applied in this analysis is based on a two-dimensional model solved with the Finite Volume Method (FVM). A steady solution is obtained and compared with an analytical solution that disregards axial heat transport, in order to determine its relevance. The results show the importance of considering axial heat transport in this type of study. A transient analysis shows the behavior of the system when subjected to coolant blockage at the channel's entrance. Three blockages were simulated (10%, 20% and 30%), and the results show that, for a nominal channel, the system can still be considered safe (there is no bubble formation up to that point). (author)
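
    The abstract does not include the authors' model, so the following is only a structural sketch of an explicit finite-volume update for two-dimensional transient heat conduction on a uniform grid; the grid, material properties, heat source and the crude Dirichlet "coolant" boundary are illustrative assumptions, not the AP1000 fuel-rod/channel model of the paper.

```python
# Minimal explicit finite-volume update for 2D transient heat conduction.
# All numbers are illustrative placeholders, not the paper's data.
import numpy as np

nx, ny = 50, 50
dx = dy = 2.0e-4                      # cell size [m]
k, rho, cp = 3.0, 10500.0, 300.0      # conductivity, density, heat capacity (illustrative)
q = 3.0e8                             # volumetric heat source [W/m^3] (illustrative)
alpha = k / (rho * cp)
dt = 0.2 * dx**2 / alpha              # below the 2D explicit stability limit dx^2/(4*alpha)

T = np.full((nx, ny), 600.0)          # initial temperature field [K]
T_cool = 580.0                        # coolant (Dirichlet) boundary temperature

n_steps = 5000
for step in range(n_steps):
    Tn = T.copy()
    # Interior balance: sum of face conductive fluxes plus the source term.
    lap = ((Tn[2:, 1:-1] - 2 * Tn[1:-1, 1:-1] + Tn[:-2, 1:-1]) / dx**2 +
           (Tn[1:-1, 2:] - 2 * Tn[1:-1, 1:-1] + Tn[1:-1, :-2]) / dy**2)
    T[1:-1, 1:-1] = Tn[1:-1, 1:-1] + dt * (alpha * lap + q / (rho * cp))
    # Boundaries held at the coolant temperature (crude stand-in for convection).
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = T_cool

print(f"peak temperature after {n_steps * dt:.1f} s: {T.max():.1f} K")
```

    A blockage study like the one described would then be approximated by degrading the boundary treatment on part of the channel wall and tracking how long the peak temperature takes to settle.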

  2. The study of three-dimensional net method of deformation observation

    International Nuclear Information System (INIS)

    Jia Jinyun

    1996-12-01

    Owing to the influence of many factors, deformations inevitably occur in the buildings and equipment of nuclear power stations, hydroelectric power stations and similar facilities during service. In particular, fracturing, the sliding of dangerous slope rocks and foundation displacement in the area of a nuclear power station site can all interfere with regular operation and even endanger safety, while the traditional trigonometric control net cannot provide deformation observations of sufficiently high precision. Therefore, topographically balanced vertical deviations are applied: the slope distance is decomposed into horizontal and vertical components, and the precise vertical component is used as an additional constraint on the deflection in order to enhance the precision of the observation posts. At the same time, an element model is selected and a high-precision three-dimensional deformation monitoring net is set up, with astro-geodetic deflections of the vertical used to correct the observed values. In this way the height error relative to the Earth is accounted for, and the plane coordinates are defined on a plane parallel to a reference plane of the ellipsoid. This method can satisfy the deformation observation requirements of projects such as nuclear power stations. (8 figs., 5 tabs.)

  3. CubeAid - an interactive method of quickly analyzing 3-dimensional gamma-ray data sets

    Energy Technology Data Exchange (ETDEWEB)

    Kuehner, J A; Waddington, J C; Prevost, D [McMaster Univ., Hamilton, ON (Canada)

    1992-08-01

    With the advent of highly efficient gamma detector arrays capable of producing significant 4- and 5-fold data, a new challenge will be to develop appropriate data analysis techniques. One approach is to exploit the relatively fast analysis that is possible with three-dimensional (3D) arrays of sorted higher-fold data, as can be done using the CubeAid software running on a personal computer (PC). This paper describes some of the capabilities of CubeAid. The main idea is to construct and use a 3D array (a cube) of triple data with dimensions suited to the capability of a PC using VGA mode or higher. So far (as of the time of the conference), the authors had used a cube of edge size 640, typically with 2 or 3 keV per channel. In order to make data extraction fast, and to reduce disk space, a symmetrized 1/2 cube was used, the depth dimension having been compressed. In making this cube, sorting was first done into a symmetrized 1/6 cube from tape to a VAX hard disk. 2 figs.
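
    The core bookkeeping behind a symmetrized triples cube can be sketched as follows; this is not the CubeAid code, and the channel width, cube size and gating helper are illustrative assumptions. Each triple coincidence is stored once with its energies in sorted order (i ≤ j ≤ k), which is what makes the "1/6 cube" compact, and a double gate then reads the third-gamma spectrum from the three index orderings.

```python
# Sketch of filling and gating a symmetrized gamma-gamma-gamma cube
# (illustrative only; the paper's cube had edge size 640).
import numpy as np

N_CHAN = 256                  # channels per axis (kept small for the sketch)
KEV_PER_CHAN = 2.0
cube = np.zeros((N_CHAN, N_CHAN, N_CHAN), dtype=np.uint16)   # ~33 MB at 256^3

def add_triple(cube, e1_kev, e2_kev, e3_kev):
    """Increment the symmetrized cube for one triple coincidence (i <= j <= k)."""
    chans = sorted(int(e / KEV_PER_CHAN) for e in (e1_kev, e2_kev, e3_kev))
    if chans[-1] < cube.shape[0]:
        i, j, k = chans
        cube[i, j, k] += 1

def gated_spectrum(cube, gate1_chan, gate2_chan):
    """1D spectrum of the third gamma ray, double-gated on two channels."""
    i, j = sorted((gate1_chan, gate2_chan))
    spec = np.zeros(cube.shape[0])
    spec[j:]  += cube[i, j, j:]    # third channel above both gates
    spec[i:j] += cube[i, i:j, j]   # third channel between the gates
    spec[:i]  += cube[:i, i, j]    # third channel below both gates
    return spec

add_triple(cube, 200.0, 350.0, 511.0)
spec = gated_spectrum(cube, 100, 175)   # gate on the 200 and 350 keV channels
print(int(spec[255]))                   # -> 1: the 511 keV partner shows up in the gate
```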

  4. Diatom Valve Three-Dimensional Representation: A New Imaging Method Based on Combined Microscopies

    Science.gov (United States)

    Ferrara, Maria Antonietta; De Tommasi, Edoardo; Coppola, Giuseppe; De Stefano, Luca; Rea, Ilaria; Dardano, Principia

    2016-01-01

    The frustule of diatoms, unicellular microalgae, shows very interesting photonic features, generally related to its complicated and quasi-periodic micro- and nano-structure. In order to simulate light propagation inside and through this natural structure, it is important to develop three-dimensional (3D) models for synthetic replica with high spatial resolution. In this paper, we present a new method that generates images of microscopic diatoms with high definition, by merging scanning electron microscopy and digital holography microscopy or atomic force microscopy data. Starting from two digital images, both acquired separately with standard characterization procedures, a high spatial resolution (Δz = λ/20, Δx = Δy ≅ 100 nm, at least) 3D model of the object has been generated. The two sets of data have then been processed by matrix formalism, using an original mathematical algorithm implemented in commercially available software. The developed methodology could also be of broad interest in the design and fabrication of micro-opto-electro-mechanical systems. PMID:27690008

  5. Fabrication of three-dimensional ordered nanodot array structures by a thermal dewetting method

    International Nuclear Information System (INIS)

    Li Zhenxing; Yoshino, Masahiko; Yamanaka, Akinori

    2012-01-01

    A new fabrication method for three-dimensional nanodot arrays with low cost and high throughput is developed in this paper. In this process, firstly a 2D nanodot array is fabricated by a combination of top-down and bottom-up approaches. A nanoplastic forming technique is utilized as the top-down approach to fabricate a groove grid pattern on an Au layer deposited on a substrate, and self-organization by thermal dewetting is employed as the bottom-up approach. On the first-layer nanodot array, SiO₂ is deposited as a spacer layer. Au is then deposited on the spacer layer and thermal dewetting is conducted to fabricate a second-layer nanodot array. The effective parameters influencing dot formation on the second layer, including Au layer thickness and SiO₂ layer thickness, are studied. It is demonstrated that a 3D nanodot array of good vertical alignment is obtained by repeating the SiO₂ deposition, Au deposition and thermal dewetting. The mechanism of the dot agglomeration process is studied based on geometrical models. The effects of the spacer layer thickness and Au layer thickness on the morphology and alignment of the second-layer dots are discussed. (paper)

  6. The Albedo method for tri-dimensional calculations of fast reactors, with application to PEC

    International Nuclear Information System (INIS)

    Bianchini, G.; Loizzo, P.

    1983-01-01

    The PEC core simulator computer code, now being defined at ENEA, is a relatively simple and inexpensive calculational model used by the reactor operator to derive the core life and the power and sodium flow of the individual subassemblies. The diffusion module of this code will be based on the neutronic design code Citation. The theoretical foundations and the procedures used to reduce the three-dimensional diffusion computing time are outlined here, based on the following approximations: 1) the reactor zones far from the core are replaced by boundary conditions (albedo method), and suitable logarithmic flux derivatives are defined; 2) the fuel elements are represented by hexagonal meshes, and appropriate normalization factors are defined. With respect to the standard design procedures, the CPU time is reduced from 90 minutes to 2 minutes (IBM 4341/2). The errors amount to a few mk on the multiplication factor and to a few percent on the power distribution. Approximations (1) and (2) are equally important with respect to the time reduction

  7. New Multigrid Method Including Elimination Algorithm Based on High-Order Vector Finite Elements in Three Dimensional Magnetostatic Field Analysis

    Science.gov (United States)

    Hano, Mitsuo; Hotta, Masashi

    A new multigrid method based on high-order vector finite elements is proposed in this paper. Low-level discretizations in this method are obtained by using low-order vector finite elements on the same mesh. The Gauss-Seidel method is used as a smoother, and the linear equation at the lowest level is solved by the ICCG method. However, it is often found that the multigrid solutions do not converge to the ICCG solutions. An elimination algorithm for the constant term, using a null space of the coefficient matrix, is therefore also described. For three-dimensional magnetostatic field analysis, the convergence time and number of iterations of this multigrid method are compared with those of the conventional ICCG method.
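
    To make the smoother/coarse-solve structure concrete, here is a generic two-grid V-cycle for a 1D Poisson model problem, with Gauss-Seidel pre- and post-smoothing and a direct coarse solve standing in for the ICCG lowest-level solve. It is only a structural sketch under those substitutions; it does not use high-order vector (edge) finite elements and is not the authors' method.

```python
# Generic two-grid V-cycle sketch: Gauss-Seidel smoothing, Galerkin coarse
# operator, direct coarse solve (stand-in for ICCG).  Model problem: 1D Poisson.
import numpy as np

def gauss_seidel(A, b, x, sweeps=3):
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

def two_grid(A, b, x, P):
    """One V-cycle: pre-smooth, coarse-grid correction, post-smooth."""
    x = gauss_seidel(A, b, x)
    r = b - A @ x
    A_c = P.T @ A @ P                      # Galerkin coarse operator
    e_c = np.linalg.solve(A_c, P.T @ r)    # direct solve on the coarse level
    x = x + P @ e_c
    return gauss_seidel(A, b, x)

# 1D Poisson matrix and a linear-interpolation prolongation operator.
n = 31
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
nc = (n - 1) // 2
P = np.zeros((n, nc))
for j in range(nc):
    P[2*j:2*j+3, j] = [0.5, 1.0, 0.5]

b = np.ones(n)
x = np.zeros(n)
for cycle in range(10):
    x = two_grid(A, b, x, P)
print("residual norm:", np.linalg.norm(b - A @ x))
```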

  8. A transformed rational function method and exact solutions to the 3+1 dimensional Jimbo-Miwa equation

    International Nuclear Information System (INIS)

    Ma Wenxiu; Lee, J.-H.

    2009-01-01

    A direct approach to exact solutions of nonlinear partial differential equations is proposed, based on rational function transformations. The new method provides a more systematic and convenient handling of the solution process for nonlinear equations, unifying the tanh-function type methods, the homogeneous balance method, the exp-function method, the mapping method, and the F-expansion type methods. Its key point is to search for rational solutions to variable-coefficient ordinary differential equations transformed from given partial differential equations. As an application, the construction of exact solutions to the 3+1 dimensional Jimbo-Miwa equation is treated, together with a Baecklund transformation.

  9. Fast multi-dimensional NMR by minimal sampling

    Science.gov (United States)

    Kupče, Ēriks; Freeman, Ray

    2008-03-01

    A new scheme is proposed for very fast acquisition of three-dimensional NMR spectra based on minimal sampling, instead of the customary step-wise exploration of all of evolution space. The method relies on prior experiments to determine accurate values for the evolving frequencies and intensities from the two-dimensional 'first planes' recorded by setting t1 = 0 or t2 = 0. With this prior knowledge, the entire three-dimensional spectrum can be reconstructed by an additional measurement of the response at a single location (t1∗,t2∗) where t1∗ and t2∗ are fixed values of the evolution times. A key feature is the ability to resolve problems of overlap in the acquisition dimension. Applied to a small protein, agitoxin, the three-dimensional HNCO spectrum is obtained 35 times faster than systematic Cartesian sampling of the evolution domain. The extension to multi-dimensional spectroscopy is outlined.
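
    The reconstruction idea can be summarized as follows; the notation is ours, not the authors'. The two first planes (t2 = 0 and t1 = 0) supply the evolution frequencies and intensities, and the single extra measurement at a fixed point (t1*, t2*) decides how the F1 and F2 frequencies pair up, because each candidate pairing predicts a different value for that one response.

```latex
% Illustrative form of the minimal-sampling constraint: the measured point
% must match the superposition over cross-peaks k with amplitudes A_k and
% frequencies \omega_1^{(k)}, \omega_2^{(k)} taken from the first planes,
\[
  S(t_1^{*}, t_2^{*}) \;=\; \sum_k A_k \,
  e^{\,i\omega_1^{(k)} t_1^{*}}\, e^{\,i\omega_2^{(k)} t_2^{*}} ,
\]
% and only the correct frequency assignment reproduces the measured response.
```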

  10. Shroud leakage flow models and a multi-dimensional coupling CFD (computational fluid dynamics) method for shrouded turbines

    International Nuclear Information System (INIS)

    Zou, Zhengping; Liu, Jingyuan; Zhang, Weihao; Wang, Peng

    2016-01-01

    Multi-dimensional coupling simulation is an effective approach for evaluating the flow and aero-thermal performance of shrouded turbines, as it balances simulation accuracy and computing cost effectively. In this paper, 1D leakage models are proposed based on classical jet theories and dynamics equations, which can be used to evaluate most of the main features of shroud leakage flow, including the mass flow rate, radial and circumferential momentum, temperature and the jet width. The 1D models are then expanded to 2D distributions on the interface by using a multi-dimensional scaling method. Based on these models and the multi-dimensional scaling, a multi-dimensional coupling simulation method for shrouded turbines is developed, in which boundary sources and sinks are imposed on the interface between the shroud and the main flow passage. To verify the precision, simulations at the design point and off-design points of a 1.5-stage turbine are conducted. It is shown that the models and methods give predictions of sufficient accuracy for most of the flow field features and will contribute to a deeper understanding and better design methods for shrouded axial turbines, which are important devices in energy engineering. - Highlights: • Free and wall attached jet theories are used to model the leakage flow in shrouds. • Leakage flow rate is modeled by virtual labyrinth number and residual-energy factor. • A scaling method is applied to 1D model to obtain 2D distributions on interfaces. • A multi-dimensional coupling CFD method for shrouded turbines is proposed. • The proposed coupling method can give accurate predictions with low computing cost.

  11. Probabilistic numerical methods for high-dimensional stochastic control and valuation problems on electricity markets

    International Nuclear Information System (INIS)

    Langrene, Nicolas

    2014-01-01

    This thesis deals with the numerical solution of general stochastic control problems, with notable applications for electricity markets. We first propose a structural model for the price of electricity, allowing for price spikes well above the marginal fuel price under strained market conditions. This model makes it possible to price and partially hedge electricity derivatives, using fuel forwards as hedging instruments. Then, we propose an algorithm, which combines Monte-Carlo simulations with local basis regressions, to solve general optimal switching problems. A comprehensive rate of convergence of the method is provided. Moreover, we manage to make the algorithm parsimonious in memory (and hence suitable for high dimensional problems) by generalizing to this framework a memory reduction method that avoids the storage of the sample paths. We illustrate this on the problem of investments in new power plants (our structural power price model allowing the new plants to impact the price of electricity). Finally, we study more general stochastic control problems (the control can be continuous and impact the drift and volatility of the state process), the solutions of which belong to the class of fully nonlinear Hamilton-Jacobi-Bellman equations, and can be handled via constrained Backward Stochastic Differential Equations, for which we develop a backward algorithm based on control randomization and parametric optimizations. A rate of convergence between the constrained BSDE and its discrete version is provided, as well as an estimate of the optimal control. This algorithm is then applied to the problem of super replication of options under uncertain volatilities (and correlations). (author)
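
    For readers unfamiliar with regression Monte-Carlo, the sketch below shows the simplest member of the optimal stopping/switching family (an American put priced Longstaff-Schwartz style with a global polynomial regression). It is only an assumed stand-in: the thesis works with local basis regressions, general switching problems and a memory-reduction scheme, and all dynamics, payoff and parameters here are illustrative.

```python
# Generic regression Monte-Carlo (Longstaff-Schwartz style) for an American put.
# Illustrative parameters; global polynomial regression instead of local bases.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 20_000, 50
s0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
dt = T / n_steps

# Simulate geometric Brownian motion paths.
z = rng.standard_normal((n_paths, n_steps))
S = s0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
S = np.hstack([np.full((n_paths, 1), s0), S])

payoff = lambda s: np.maximum(K - s, 0.0)
cash = payoff(S[:, -1])                       # value if never exercised early

for t in range(n_steps - 1, 0, -1):
    cash *= np.exp(-r * dt)                   # discount continuation values one step
    itm = payoff(S[:, t]) > 0                 # regress only on in-the-money paths
    X = S[itm, t]
    coeffs = np.polyfit(X, cash[itm], deg=3)  # continuation value ~ polynomial in S_t
    continuation = np.polyval(coeffs, X)
    exercise = payoff(X) > continuation
    idx = np.where(itm)[0][exercise]
    cash[idx] = payoff(S[idx, t])             # exercise now on those paths

price = np.exp(-r * dt) * cash.mean()
print(f"estimated American put price: {price:.3f}")
```

    The memory-reduction idea mentioned in the abstract would avoid storing the full path array S by re-simulating it backwards from stored random seeds.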

  12. Principles of the Kenzan Method for Robotic Cell Spheroid-Based Three-Dimensional Bioprinting.

    Science.gov (United States)

    Moldovan, Nicanor I; Hibino, Narutoshi; Nakayama, Koichi

    2017-06-01

    Bioprinting is a technology with the prospect to change the way many diseases are treated, by replacing the damaged tissues with live de novo created biosimilar constructs. However, after more than a decade of incubation and many proofs of concept, the field is still in its infancy. The current stagnation is the consequence of its early success: the first bioprinters, and most of those that followed, were modified versions of the three-dimensional printers used in additive manufacturing, redesigned for layer-by-layer dispersion of biomaterials. In all variants (inkjet, microextrusion, or laser assisted), this approach is material ("scaffold") dependent and energy intensive, making it hardly compatible with some of the intended biological applications. Instead, the future of bioprinting may benefit from the use of gentler scaffold-free bioassembling methods. A substantial body of evidence has accumulated, indicating this is possible by use of preformed cell spheroids, which have been assembled in cartilage, bone, and cardiac muscle-like constructs. However, a commercial instrument capable to directly and precisely "print" spheroids has not been available until the invention of the microneedles-based ("Kenzan") spheroid assembling and the launching in Japan of a bioprinter based on this method. This robotic platform laces spheroids into predesigned contiguous structures with micron-level precision, using stainless steel microneedles ("kenzans") as temporary support. These constructs are further cultivated until the spheroids fuse into cellular aggregates and synthesize their own extracellular matrix, thus attaining the needed structural organization and robustness. This novel technology opens wide opportunities for bioengineering of tissues and organs.

  13. Multiple and sequential data acquisition method: an improved method for fragmentation and detection of cross-linked peptides on a hybrid linear trap quadrupole Orbitrap Velos mass spectrometer.

    Science.gov (United States)

    Rudashevskaya, Elena L; Breitwieser, Florian P; Huber, Marie L; Colinge, Jacques; Müller, André C; Bennett, Keiryn L

    2013-02-05

    The identification and validation of cross-linked peptides by mass spectrometry remains a daunting challenge for protein-protein cross-linking approaches when investigating protein interactions. This includes the fragmentation of cross-linked peptides in the mass spectrometer per se and, following database searching, the matching of the molecular masses of the fragment ions to the correct cross-linked peptides. The hybrid linear trap quadrupole (LTQ) Orbitrap Velos combines the speed of the tandem mass spectrometry (MS/MS) duty cycle with high mass accuracy, and these features were utilized in the current study to substantially improve the confidence in the identification of cross-linked peptides. An MS/MS method termed multiple and sequential data acquisition method (MSDAM) was developed. Preliminary optimization of the MS/MS settings was performed with a synthetic peptide (TP1) cross-linked with bis[sulfosuccinimidyl] suberate (BS3). On the basis of these results, MSDAM was created and assessed on the BS3-cross-linked bovine serum albumin (BSA) homodimer. MSDAM applies a series of multiple sequential fragmentation events with a range of different normalized collision energies (NCE) to the same precursor ion. The combination of a series of NCE enabled a considerable improvement in the quality of the fragmentation spectra for cross-linked peptides, and ultimately aided in the identification of the sequences of the cross-linked peptides. Concurrently, MSDAM provides confirmatory evidence from the formation of reporter ion fragments, which reduces the false positive rate of incorrectly assigned cross-linked peptides.

  14. Three-dimensional assemblies of graphene prepared by a novel chemical reduction-induced self-assembly method

    KAUST Repository

    Zhang, Lianbin; Chen, Guoying; Hedhili, Mohamed N.; Zhang, Hongnan; Wang, Peng

    2012-01-01

    In this study, three-dimensional (3D) graphene assemblies are prepared from graphene oxide (GO) by a facile in situ reduction-assembly method, using a novel, low-cost, and environment-friendly reducing medium which is a combination of oxalic acid

  15. A new method for information retrieval in two-dimensional grating-based X-ray phase contrast imaging

    International Nuclear Information System (INIS)

    Wang Zhi-Li; Gao Kun; Chen Jian; Ge Xin; Tian Yang-Chao; Wu Zi-Yu; Zhu Pei-Ping

    2012-01-01

    Grating-based X-ray phase contrast imaging has been demonstrated to be an extremely powerful phase-sensitive imaging technique. By using two-dimensional (2D) gratings, the observable contrast is extended to two refraction directions. Recently, we have developed a novel reverse-projection (RP) method, which is capable of retrieving the object information efficiently with one-dimensional (1D) grating-based phase contrast imaging. In this contribution, we present its extension to the 2D grating-based X-ray phase contrast imaging, named the two-dimensional reverse-projection (2D-RP) method, for information retrieval. The method takes into account the nonlinear contributions of two refraction directions and allows the retrieval of the absorption, the horizontal and the vertical refraction images. The obtained information can be used for the reconstruction of the three-dimensional phase gradient field, and for an improved phase map retrieval and reconstruction. Numerical experiments are carried out, and the results confirm the validity of the 2D-RP method

  16. A HIGH ORDER SOLUTION OF THREE DIMENSIONAL TIME DEPENDENT NONLINEAR CONVECTIVE-DIFFUSIVE PROBLEM USING MODIFIED VARIATIONAL ITERATION METHOD

    Directory of Open Access Journals (Sweden)

    Pratibha Joshi

    2014-12-01

    Full Text Available In this paper, we have achieved a high-order solution of a three-dimensional nonlinear diffusive-convective problem using the modified variational iteration method. The efficiency of this approach is demonstrated by solving two examples. All computational work has been performed in MATHEMATICA.
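
    For context, the standard variational iteration correction functional has the form below; the paper's modified scheme refines how the nonlinear term and initial approximation are handled, but the basic iteration is of this type (the notation here is the conventional one, not necessarily the paper's).

```latex
% Standard variational iteration correction functional for L u + N u = g:
% \lambda is the Lagrange multiplier and \tilde{u}_n the restricted variation.
\[
  u_{n+1}(x,y,z,t) \;=\; u_n(x,y,z,t)
  + \int_0^{t} \lambda(\tau)\,
    \Big[\, L\,u_n(x,y,z,\tau) + N\,\tilde{u}_n(x,y,z,\tau) - g(x,y,z,\tau) \Big]\, d\tau .
\]
```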

  17. Numerical Simulation of the Dynamical Conductivity of One-Dimensional Disordered Systems by MacKinnon’s Method

    Science.gov (United States)

    Saso, Tetsuro; Kim, C. I.; Kasuya, Tadao

    1983-06-01

    A report is given on a computer simulation of the dynamical conductivity σ(ω) of one-dimensional disordered systems with up to 10⁶ sites by MacKinnon's method. A comparison is made with the asymptotically exact solution valid for weak disorder by Berezinskii.

  18. System and method for three-dimensional image reconstruction using an absolute orientation sensor

    KAUST Repository

    Giancola, Silvio; Ghanem, Bernard; Schneider, Jens; Wonka, Peter

    2018-01-01

    A three-dimensional image reconstruction system includes an image capture device, an inertial measurement unit (IMU), and an image processor. The image capture device captures image data. The inertial measurement unit (IMU) is affixed to the image

  19. Neutron radiography imaging with 2-dimensional photon counting method and its problems

    International Nuclear Information System (INIS)

    Ikeda, Y.; Kobayashi, H.; Niwa, T.; Kataoka, T.

    1988-01-01

    An ultra-sensitive neutron imaging system has been devised with a two-dimensional photon counting camera (ARGUS 100). The imaging system is composed of a two-dimensional single photon counting tube and a low-background vidicon, followed by an image processing unit and frame memories. By using this imaging system, electronic neutron radiography (NTV) has become possible at neutron fluxes of less than 3 × 10⁴ n/cm²·s. (author)

  20. Dural attachment of intracranial meningiomas: evaluation with contrast-enhanced three-dimensional fast imaging with steady-state acquisition (FIESTA) at 3 T

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, Junkoh; Takahashi, Mayu; Aoyama, Yuichi; Soejima, Yoshiteru; Saito, Takeshi; Akiba, Daisuke; Nishizawa, Shigeru [University of Occupational and Environmental Health, Department of Neurosurgery, Kitakyusyu (Japan); Kakeda, Shingo; Korogi, Yukunori [University of Occupational and Environmental Health, Department of Radiology, Kitakyusyu (Japan)

    2011-06-15

    The purpose of this study was to evaluate the role of contrast-enhanced fast imaging with steady-state acquisition (CE-FIESTA) for assessing whether dural attachment in intracranial meningiomas is adhesive or not by correlation with intraoperative findings. Fourteen consecutive patients who were candidates for surgical treatment of meningiomas were prospectively analyzed with preoperative magnetic resonance imaging, including CE-FIESTA at 3 T. First, two neuroradiologists assessed several characteristics of the attachment of the meningioma to the dura mater or skull base on CE-FIESTA images. Second, the surgical findings of adhesion at the dural attachment of meningiomas were evaluated by two neurosurgeons. Finally, the CE-FIESTA findings were correlated with the surgical findings by one neurosurgeon and one neuroradiologist by consensus. CE-FIESTA clearly depicted a hypointense marginal line at the attachment site of the meningioma. When CE-FIESTA revealed smooth marginal lines or hyperintense zones along the marginal lines, tumors were detached easily from the dura mater. On the contrary, when CE-FIESTA showed an irregularity, such as partial disruption of the marginal lines, vessels, or bony hyperostosis, the tumors tended to adhere firmly to the dura mater, which was found to contain small vessels and fine fibrous tissues. There seems to be an excellent correlation between the characteristics of dural attachment of meningiomas on CE-FIESTA images and intraoperative findings. Therefore, for operative planning, CE-FIESTA may provide useful information regarding the adhesiveness of dural attachment. (orig.)