WorldWideScience

Sample records for intensity based b-spline

  1. B-spline tight frame based force matching method

    Science.gov (United States)

    Yang, Jianbin; Zhu, Guanhua; Tong, Dudu; Lu, Lanyuan; Shen, Zuowei

    2018-06-01

    In molecular dynamics simulations, compared with popular all-atom force field approaches, coarse-grained (CG) methods are frequently used for the rapid investigation of long time- and length-scale processes in many important biological and soft matter studies. The typical task in coarse-graining is to derive interaction force functions between different CG site types in terms of their distance, bond angle or dihedral angle. In this paper, an ℓ1-regularized least squares model is applied to form the force functions, which makes additional use of the B-spline wavelet frame transform in order to preserve the important features of the force functions. The B-spline tight frame system has a simple explicit expression, which is useful for representing our force functions. Moreover, the redundancy of the system offers more resilience to the effects of noise and is useful in the case of lossy data. Numerical results for molecular systems involving pairwise non-bonded, as well as three- and four-body bonded, interactions are obtained to demonstrate the effectiveness of our approach.
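
    The B-spline machinery underlying this and the following records rests on the basis functions themselves, which can be evaluated with the Cox-de Boor recursion. A minimal pure-Python sketch (the knot vector and evaluation point are illustrative choices, not taken from the paper):

```python
def bspline_basis(i, p, t, knots):
    """Value of the i-th degree-p B-spline basis function at t,
    computed with the Cox-de Boor recursion."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d = knots[i + p] - knots[i]
    if d > 0:
        left = (t - knots[i]) / d * bspline_basis(i, p - 1, t, knots)
    d = knots[i + p + 1] - knots[i + 1]
    if d > 0:
        right = (knots[i + p + 1] - t) / d * bspline_basis(i + 1, p - 1, t, knots)
    return left + right

# Cubic (p = 3) basis on a uniform knot vector; inside the valid
# parameter range the nonzero basis functions form a partition of unity.
knots = [0, 1, 2, 3, 4, 5, 6, 7]
vals = [bspline_basis(i, 3, 3.5, knots) for i in range(4)]
```

The partition-of-unity and local-support properties visible here are what make B-spline (and tight frame) representations of force functions both stable and sparse.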

  2. Accurate B-spline-based 3-D interpolation scheme for digital volume correlation

    Science.gov (United States)

    Ren, Maodong; Liang, Jin; Wei, Bin

    2016-12-01

    An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the influence factors of the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth, filter) in the Fourier domain. A law is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, given that each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software is developed and a series of validation experiments is carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
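
    The prefiltering idea behind such B-spline recursive filters can be shown in 1-D: cubic B-spline interpolation first solves a tridiagonal (1/6, 4/6, 1/6) system for coefficients so that the spline passes through the samples exactly. A self-contained sketch (the mirror boundary handling and the Thomas solver are illustrative choices, not the paper's optimized filter):

```python
import math

def cubic_prefilter(samples):
    """Solve the tridiagonal system (1/6, 4/6, 1/6) c = f so that the
    cubic B-spline with coefficients c interpolates the samples exactly.
    Mirror boundaries fold c[-1] = c[1] and c[n] = c[n-2]."""
    n = len(samples)
    sub = [1 / 6.0] * n             # sub-diagonal
    dia = [4 / 6.0] * n             # main diagonal
    sup = [1 / 6.0] * n             # super-diagonal
    sup[0] = 2 / 6.0                # row 0 after folding c[-1] = c[1]
    sub[-1] = 2 / 6.0               # last row after folding c[n] = c[n-2]
    d = list(samples)
    for i in range(1, n):           # Thomas algorithm: forward sweep
        m = sub[i] / dia[i - 1]
        dia[i] -= m * sup[i - 1]
        d[i] -= m * d[i - 1]
    c = [0.0] * n
    c[-1] = d[-1] / dia[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        c[i] = (d[i] - sup[i] * c[i + 1]) / dia[i]
    return c

def bspline3(t):
    """Cardinal cubic B-spline kernel."""
    t = abs(t)
    if t < 1:
        return (4 - 6 * t * t + 3 * t ** 3) / 6.0
    if t < 2:
        return (2 - t) ** 3 / 6.0
    return 0.0

def cubic_eval(coef, x):
    """Evaluate the spline at x, mirroring indices at the boundaries."""
    n = len(coef)
    j = int(math.floor(x))
    s = 0.0
    for i in range(j - 1, j + 3):
        k = -i if i < 0 else (2 * (n - 1) - i if i >= n else i)
        s += coef[k] * bspline3(x - i)
    return s

# Prefiltering makes the spline reproduce the samples at voxel centers.
samples = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0]
coef = cubic_prefilter(samples)
```

Skipping the prefilter (i.e., using the samples directly as coefficients) is precisely what introduces the systematic interpolation bias the paper sets out to remove.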

  3. A chord error conforming tool path B-spline fitting method for NC machining based on energy minimization and LSPIA

    OpenAIRE

    He, Shanshan; Ou, Daojiang; Yan, Changya; Lee, Chen-Han

    2015-01-01

    Piecewise linear (G01-based) tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortcomings such as numerical...

  4. Prostate multimodality image registration based on B-splines and quadrature local energy.

    Science.gov (United States)

    Mitra, Jhimli; Martí, Robert; Oliver, Arnau; Lladó, Xavier; Ghose, Soumya; Vilanova, Joan C; Meriaudeau, Fabrice

    2012-05-01

    Needle biopsy of the prostate is guided by Transrectal Ultrasound (TRUS) imaging. TRUS images do not provide proper spatial localization of malignant tissues due to the poor sensitivity of TRUS for visualizing early malignancy. Magnetic Resonance Imaging (MRI) has been shown to be sensitive for the detection of early stage malignancy, and therefore a novel 2D deformable registration method that overlays pre-biopsy MRI onto TRUS images has been proposed. The registration method involves B-spline deformations with Normalized Mutual Information (NMI) as the similarity measure, computed from texture images obtained from the amplitude responses of directional quadrature filter pairs. Registration accuracy of the proposed method is evaluated by computing the Dice Similarity Coefficient (DSC) and 95% Hausdorff Distance (HD) values for 20 patients' prostate mid-gland slices, and the Target Registration Error (TRE) for the 18 patients in whom homologous structures are visible in both the TRUS and transformed MR images. The proposed method and B-splines using NMI computed from intensities provide average TRE values of 2.64 ± 1.37 mm and 4.43 ± 2.77 mm, respectively. Our method shows a statistically significant improvement in TRE when compared with B-splines using NMI computed from intensities (Student's t-test, p = 0.02). The proposed method shows a 1.18-times improvement over thin-plate spline registration, which has an average TRE of 3.11 ± 2.18 mm. The mean DSC and mean 95% HD values obtained with the proposed method of B-splines with NMI computed from texture are 0.943 ± 0.039 and 4.75 ± 2.40 mm, respectively. The texture energy computed from the quadrature filter pairs provides better registration accuracy for multimodal images than raw intensities. The low TRE values of the proposed registration method support its feasibility for use during TRUS-guided biopsy.
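
    The similarity measure driving the registration, normalized mutual information, is computed from a joint intensity histogram. A minimal sketch (the bin count and the flattened-image interface are illustrative assumptions, and the texture-image front end of the paper is omitted):

```python
import math
from collections import Counter

def nmi(a, b, bins=8):
    """Normalized mutual information NMI = (H(A) + H(B)) / H(A,B),
    estimated from a joint histogram of two equal-length intensity lists."""
    def bin_of(v, lo, hi):
        if hi == lo:
            return 0
        return min(int((v - lo) / (hi - lo) * bins), bins - 1)
    lo_a, hi_a = min(a), max(a)
    lo_b, hi_b = min(b), max(b)
    joint = Counter((bin_of(x, lo_a, hi_a), bin_of(y, lo_b, hi_b))
                    for x, y in zip(a, b))
    n = float(len(a))
    p_ab = {k: v / n for k, v in joint.items()}
    p_a, p_b = Counter(), Counter()
    for (i, j), p in p_ab.items():
        p_a[i] += p
        p_b[j] += p
    def entropy(ps):
        return -sum(p * math.log(p) for p in ps if p > 0)
    return (entropy(p_a.values()) + entropy(p_b.values())) / entropy(p_ab.values())

# A perfectly registered pair (identical images) attains the maximum NMI of 2;
# a scrambled counterpart scores strictly between 1 and 2.
img = [0, 0, 1, 1, 2, 2, 3, 3]
score_aligned = nmi(img, img)
score_scrambled = nmi(img, [3, 1, 0, 2, 2, 0, 1, 3])
```

An optimizer deforming the B-spline grid would seek the transform that maximizes this score between the fixed and warped moving image.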

  5. Optimization and parallelization of B-spline based orbital evaluations in QMC on multi/many-core shared memory processors

    OpenAIRE

    Mathuriya, Amrita; Luo, Ye; Benali, Anouar; Shulenburger, Luke; Kim, Jeongnim

    2016-01-01

    B-spline based orbital representations are widely used in Quantum Monte Carlo (QMC) simulations of solids, historically taking as much as 50% of the total run time. Random accesses to a large four-dimensional array make it challenging to efficiently utilize caches and wide vector units of modern CPUs. We present node-level optimizations of B-spline evaluations on multi/many-core shared memory processors. To increase SIMD efficiency and bandwidth utilization, we first apply data layout transfo...

  6. A graph-based method for fitting planar B-spline curves with intersections

    Directory of Open Access Journals (Sweden)

    Pengbo Bo

    2016-01-01

    Full Text Available The problem of fitting B-spline curves to planar point clouds is studied in this paper. A novel method is proposed to deal with the most challenging case, where multiple intersecting curves or curves with self-intersection are necessary for shape representation. A method based on Delaunay triangulation of the data points is developed to identify connected components; it is also capable of removing outliers. A skeleton representation is utilized to capture the topological structure, which is then used to create a weighted graph for deciding the merging of curve segments. Different from existing approaches, which use local shape information near intersections, our method considers the shape characteristics of curve segments in a larger scope and is thus capable of giving more satisfactory results. By fitting each group of data points with a B-spline curve, we solve the problem of curve structure reconstruction from point clouds, as well as the vectorization of simple line drawing images through line reconstruction.

  7. Efficient GPU-based texture interpolation using uniform B-splines

    NARCIS (Netherlands)

    Ruijters, D.; Haar Romenij, ter B.M.; Suetens, P.

    2008-01-01

    This article presents uniform B-spline interpolation, completely contained on the graphics processing unit (GPU). This implies that the CPU does not need to compute any lookup tables or B-spline basis functions. The cubic interpolation can be decomposed into several linear interpolations [Sigg and
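
    The decomposition this article builds on can be shown in 1-D: the four cubic B-spline weights collapse into two linear interpolations at shifted fetch positions, so a GPU can reuse its hardware linear filtering. A pure-Python sketch of the arithmetic (array indexing stands in for texture fetches; names are illustrative):

```python
import math

def cubic_weights(u):
    """The four cubic B-spline weights for fractional position u in [0, 1)."""
    w0 = (1 - u) ** 3 / 6.0
    w1 = (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0
    w2 = (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0
    w3 = u ** 3 / 6.0
    return w0, w1, w2, w3

def lerp(c, p):
    """Linear interpolation in array c at real position p (the operation
    GPU texture hardware provides for free)."""
    k = int(math.floor(p))
    f = p - k
    return (1 - f) * c[k] + f * c[k + 1]

def cubic_direct(c, x):
    """Cubic B-spline value from four weighted nearest-neighbour reads."""
    j = int(math.floor(x))
    w0, w1, w2, w3 = cubic_weights(x - j)
    return w0 * c[j - 1] + w1 * c[j] + w2 * c[j + 1] + w3 * c[j + 2]

def cubic_two_lerps(c, x):
    """The same value from only two linear reads at shifted positions."""
    j = int(math.floor(x))
    w0, w1, w2, w3 = cubic_weights(x - j)
    g0, g1 = w0 + w1, w2 + w3
    p0 = (j - 1) + w1 / g0          # lands between j-1 and j
    p1 = (j + 1) + w3 / g1          # lands between j+1 and j+2
    return g0 * lerp(c, p0) + g1 * lerp(c, p1)

c = [0.0, 1.0, 0.5, 2.0, 1.5, 3.0, 2.5]
```

In 3-D the saving compounds: 64 nearest-neighbour fetches per sample become 8 trilinear fetches, which is what makes the GPU implementation fast.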

  8. Regional Densification of a Global VTEC Model Based on B-Spline Representations

    Science.gov (United States)

    Erdogan, Eren; Schmidt, Michael; Dettmering, Denise; Goss, Andreas; Seitz, Florian; Börger, Klaus; Brandert, Sylvia; Görres, Barbara; Kersten, Wilhelm F.; Bothmer, Volker; Hinrichs, Johannes; Mrotzek, Niclas

    2017-04-01

    The project OPTIMAP is a joint initiative of the Bundeswehr GeoInformation Centre (BGIC), the German Space Situational Awareness Centre (GSSAC), the German Geodetic Research Institute of the Technical University Munich (DGFI-TUM) and the Institute for Astrophysics at the University of Göttingen (IAG). The main goal of the project is the development of an operational tool for ionospheric mapping and prediction (OPTIMAP). Two key features of the project are the combination of different satellite observation techniques (GNSS, satellite altimetry, radio occultations and DORIS) and the regional densification as a remedy against problems encountered with the inhomogeneous data distribution. Since the data from space-geoscientific missions which can be used for modeling ionospheric parameters, such as the Vertical Total Electron Content (VTEC) or the electron density, are distributed rather unevenly over the globe at different altitudes, appropriate modeling approaches have to be developed to handle this inhomogeneity. Our approach is based on a two-level strategy. To be more specific, in the first level we compute a global VTEC model with a moderate regional and spectral resolution, which is complemented in the second level by a regional model in a densification area. The latter is a region characterized by a dense data distribution, chosen to obtain a VTEC product of high spatial and spectral resolution. Additionally, the global representation serves as a background model for the regional one to avoid edge effects at the boundaries of the densification area. The presented approach, based on a global and a regional model part, i.e. including a regional densification, is called the Two-Level VTEC Model (TLVM). The global VTEC model part is based on a series expansion in terms of polynomial B-splines in latitude direction and trigonometric B-splines in longitude direction.
The additional regional model part is set up by a series expansion in terms of polynomial B-splines for

  9. Comparison Between Polynomial, Euler Beta-Function and Expo-Rational B-Spline Bases

    Science.gov (United States)

    Kristoffersen, Arnt R.; Dechevsky, Lubomir T.; Lakså, Arne; Bang, Børre

    2011-12-01

    Euler Beta-function B-splines (BFBS) are the practically most important instance of generalized expo-rational B-splines (GERBS) which are not true expo-rational B-splines (ERBS). BFBS do not enjoy the full range of the superproperties of ERBS but, while ERBS are special functions computable by very rapidly converging yet approximate numerical quadrature algorithms, BFBS are explicitly computable piecewise polynomials (for integer multiplicities), similar to classical Schoenberg B-splines. In the present communication we define, compute and visualize for the first time all possible BFBS of degree up to 3 which provide Hermite interpolation in three consecutive knots of multiplicity up to 3, i.e., the function is interpolated together with its derivatives of order up to 2. We compare the BFBS obtained for different degrees and multiplicities among themselves and versus the classical Schoenberg polynomial B-splines and the true ERBS for the considered knots. The results of the graphical comparison are discussed from an analytical point of view. For the numerical computation and visualization of the new B-splines we have used Maple 12.

  10. A chord error conforming tool path B-spline fitting method for NC machining based on energy minimization and LSPIA

    Directory of Open Access Journals (Sweden)

    Shanshan He

    2015-10-01

    Full Text Available Piecewise linear (G01-based) tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortcomings such as numerical instability, lack of a chord error constraint, and lack of assurance of a usable result. Progressive and Iterative Approximation for Least Squares (LSPIA) is an efficient method for data fitting that solves the numerical instability problem. However, it does not consider chord errors and needs more work to ensure ironclad results for commercial applications. In this paper, we use the LSPIA method incorporating an energy term (ELSPIA) to avoid the numerical instability, and lower chord errors by using a stretching energy term. We implement several algorithmic improvements, including (1) an improved technique for initial control point determination over the Dominant Point Method, (2) an algorithm that updates foot point parameters as needed, (3) analysis of the degrees of freedom of control points to insert new control points only when needed, and (4) chord error refinement using a similar ELSPIA method with the above enhancements. The proposed approach can generate a shape-preserving B-spline curve. Experiments with data analysis and machining tests are presented for verification of quality and efficiency. Comparisons with other known solutions are included to evaluate the worthiness of the proposed solution.
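
    The core LSPIA iteration, stripped of the paper's energy term and chord-error machinery, corrects each coefficient by a normalized blend of the current residuals and never solves a linear system. A minimal sketch with uniform cubic B-splines (the data, basis placement and iteration count are illustrative):

```python
def bspline3(t):
    """Cardinal cubic B-spline kernel."""
    t = abs(t)
    if t < 1:
        return (4 - 6 * t * t + 3 * t ** 3) / 6.0
    if t < 2:
        return (2 - t) ** 3 / 6.0
    return 0.0

def lspia(ts, ys, n, iters=800):
    """LSPIA: c_i <- c_i + (sum_j B_i(t_j) r_j) / (sum_j B_i(t_j)),
    where r_j is the current residual at the j-th data point."""
    A = [[bspline3(t - i) for i in range(n)] for t in ts]
    c = [0.0] * n
    col = [sum(row[i] for row in A) for i in range(n)]
    for _ in range(iters):
        r = [y - sum(row[i] * c[i] for i in range(n))
             for row, y in zip(A, ys)]
        for i in range(n):
            if col[i] > 0:
                c[i] += sum(row[i] * rj for row, rj in zip(A, r)) / col[i]
    return c, A

# Fit samples of a straight line; cubic B-splines reproduce it exactly,
# so the fitting residual should shrink toward zero.
ts = [1 + 0.25 * k for k in range(21)]          # parameters in [1, 6]
ys = [2 * t - 1 for t in ts]
c, A = lspia(ts, ys, n=8)
resid = max(abs(y - sum(row[i] * c[i] for i in range(8)))
            for row, y in zip(A, ys))
```

Because each update touches only the few data points inside a basis function's support, the iteration stays cheap and numerically stable even for many control points, which is the property the paper exploits.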

  11. A meshless scheme for partial differential equations based on multiquadric trigonometric B-spline quasi-interpolation

    International Nuclear Information System (INIS)

    Gao Wen-Wu; Wang Zhi-Gang

    2014-01-01

    Based on the multiquadric trigonometric B-spline quasi-interpolant, this paper proposes a meshless scheme for some partial differential equations whose solutions are periodic with respect to the spatial variable. This scheme takes into account the periodicity of the analytic solution by using derivatives of a periodic quasi-interpolant (the multiquadric trigonometric B-spline quasi-interpolant) to approximate the spatial derivatives of the equations. Thus, it overcomes the difficulties of previous schemes based on quasi-interpolation (requiring some additional boundary conditions and yielding unwanted high-order discontinuous points at the boundaries of the spatial domain). Moreover, the scheme also overcomes a difficulty of meshless collocation methods (namely, yielding a notoriously ill-conditioned linear system of equations for a large number of collocation points). The numerical examples presented at the end of the paper show that the scheme provides excellent approximations to the analytic solutions. (general)

  12. The four-dimensional non-uniform rational B-splines-based cardiac-torso phantom and its application in medical imaging research

    International Nuclear Information System (INIS)

    Li Chongguo; Wu Dake; Lang Jinyi

    2008-01-01

    Simulation is playing an increasingly important role in medical imaging research. The four-dimensional non-uniform rational B-splines-based cardiac-torso (4D NCAT) phantom is a new tool for medical imaging research: when combined with accurate models of the imaging process, it can provide a wealth of realistic imaging data from subjects of various anatomies. 4D NCAT phantoms have been widely used in medical research on modalities such as SPECT, PET and CT. 4D NCAT phantoms have also been used in inverse planning systems for intensity modulated radiation therapy. (authors)

  13. Complex wavenumber Fourier analysis of the B-spline based finite element method

    Czech Academy of Sciences Publication Activity Database

    Kolman, Radek; Plešek, Jiří; Okrouhlík, Miloslav

    2014-01-01

    Roč. 51, č. 2 (2014), s. 348-359 ISSN 0165-2125 R&D Projects: GA ČR(CZ) GAP101/11/0288; GA ČR(CZ) GAP101/12/2315; GA ČR GPP101/10/P376; GA ČR GA101/09/1630 Institutional support: RVO:61388998 Keywords : elastic wave propagation * dispersion errors * B-spline * finite element method * isogeometric analysis Subject RIV: JR - Other Machinery Impact factor: 1.513, year: 2014 http://www.sciencedirect.com/science/article/pii/S0165212513001479

  14. B-spline based finite element method in one-dimensional discontinuous elastic wave propagation

    Czech Academy of Sciences Publication Activity Database

    Kolman, Radek; Okrouhlík, Miloslav; Berezovski, A.; Gabriel, Dušan; Kopačka, Ján; Plešek, Jiří

    2017-01-01

    Roč. 46, June (2017), s. 382-395 ISSN 0307-904X R&D Projects: GA ČR(CZ) GAP101/12/2315; GA MŠk(CZ) EF15_003/0000493 Grant - others:AV ČR(CZ) DAAD-16-12; AV ČR(CZ) ETA-15-03 Program:Bilaterální spolupráce; Bilaterální spolupráce Institutional support: RVO:61388998 Keywords : discontinuous elastic wave propagation * B-spline finite element method * isogeometric analysis * implicit and explicit time integration * dispersion * spurious oscillations Subject RIV: BI - Acoustics OBOR OECD: Acoustics Impact factor: 2.350, year: 2016 http://www.sciencedirect.com/science/article/pii/S0307904X17300835

  15. Automatic and accurate reconstruction of distal humerus contours through B-Spline fitting based on control polygon deformation.

    Science.gov (United States)

    Mostafavi, Kamal; Tutunea-Fatan, O Remus; Bordatchev, Evgueni V; Johnson, James A

    2014-12-01

    The strong advent of computer-assisted technologies experienced by modern orthopedic surgery prompts the expansion of computationally efficient techniques built on the broad base of readily available computer-aided engineering tools. However, one of the common challenges faced during the current developmental phase continues to be the lack of reliable frameworks allowing a fast and precise conversion of the anatomical information acquired through computed tomography to a format acceptable to computer-aided engineering software. To address this, this study proposes an integrated and automatic framework capable of extracting and then postprocessing the original imaging data into a common planar and closed B-Spline representation. The core of the developed platform relies on the approximation of the discrete computed tomography data by means of an original two-step B-Spline fitting technique based on successive deformations of the control polygon. In addition to its rapidity and robustness, the developed fitting technique was validated to produce accurate representations that do not deviate by more than 0.2 mm with respect to alternate representations of the bone geometry obtained through different contact-based data acquisition or data processing methods. © IMechE 2014.

  16. Spatial and temporal interpolation of satellite-based aerosol optical depth measurements over North America using B-splines

    Science.gov (United States)

    Pfister, Nicolas; O'Neill, Norman T.; Aube, Martin; Nguyen, Minh-Nghia; Bechamp-Laganiere, Xavier; Besnier, Albert; Corriveau, Louis; Gasse, Geremie; Levert, Etienne; Plante, Danick

    2005-08-01

    Satellite-based measurements of aerosol optical depth (AOD) over land are obtained from an inversion procedure applied to dense dark vegetation pixels of remotely sensed images. The limited number of pixels over which the inversion procedure can be applied leaves many areas with little or no AOD data. Moreover, satellite coverage by sensors such as MODIS yields only daily images of a given region, with four sequential overpasses required to straddle mid-latitude North America. Ground-based AOD data from AERONET sun photometers are available on a more continuous basis, but only at approximately fifty locations throughout North America. The object of this work is to produce a complete and coherent mapping of AOD over North America with a spatial resolution of 0.1 degree and a frequency of three hours by interpolating MODIS satellite-based data together with available AERONET ground-based measurements. Before being interpolated, the MODIS AOD data extracted from different passes are synchronized to the mapping time using analyzed wind fields from the Global Multiscale Model (Meteorological Service of Canada). This approach amounts to a trajectory type of simplified atmospheric dynamics correction method. The spatial interpolation is performed using a weighted least squares method applied to bicubic B-spline functions defined on a rectangular grid. The least squares method enables one to weight the data according to the measurement errors, while the B-spline properties of local support and C2 continuity offer a good approximation of AOD behaviour viewed as a function of time and space.

  17. Evaluation of accuracy of B-spline transformation-based deformable image registration with different parameter settings for thoracic images.

    Science.gov (United States)

    Kanai, Takayuki; Kadoya, Noriyuki; Ito, Kengo; Onozato, Yusuke; Cho, Sang Yong; Kishi, Kazuma; Dobashi, Suguru; Umezawa, Rei; Matsushita, Haruo; Takeda, Ken; Jingu, Keiichi

    2014-11-01

    Deformable image registration (DIR) is a fundamental technique for adaptive radiotherapy and image-guided radiotherapy. However, further improvement of DIR is still needed. We evaluated the accuracy of B-spline transformation-based DIR implemented in elastix. This registration package is largely based on the Insight Segmentation and Registration Toolkit (ITK), and several new functions were implemented to achieve high DIR accuracy. The purpose of this study was to clarify whether the new functions implemented in elastix are useful for improving DIR accuracy. Thoracic 4D computed tomography images of ten patients with esophageal or lung cancer were studied. Datasets for these patients were provided by DIR-lab (dir-lab.com) and included a coordinate list of anatomical landmarks that had been manually identified. DIR between peak-inhale and peak-exhale images was performed with four types of parameter settings. The first represents the original ITK (Parameter 1). The second employs the new functions of elastix (Parameter 2), and the third was created to verify whether the new functions improve DIR accuracy while keeping the computational time unchanged (Parameter 3). The last partially employs a new function (Parameter 4). Registration errors for these parameter settings were calculated using the manually determined landmark pairs. 3D registration errors with standard deviation over all cases were 1.78 (1.57), 1.28 (1.10), 1.44 (1.09) and 1.36 (1.35) mm for Parameters 1, 2, 3 and 4, respectively, indicating that the new functions are useful for improving DIR accuracy, even while maintaining the computational time, and that this B-spline-based DIR could be used clinically to achieve high-accuracy adaptive radiotherapy. © The Author 2014. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.

  18. Evaluation of accuracy of B-spline transformation-based deformable image registration with different parameter settings for thoracic images

    International Nuclear Information System (INIS)

    Kanai, Takayuki; Kadoya, Noriyuki; Ito, Kengo

    2014-01-01

    Deformable image registration (DIR) is a fundamental technique for adaptive radiotherapy and image-guided radiotherapy. However, further improvement of DIR is still needed. We evaluated the accuracy of B-spline transformation-based DIR implemented in elastix. This registration package is largely based on the Insight Segmentation and Registration Toolkit (ITK), and several new functions were implemented to achieve high DIR accuracy. The purpose of this study was to clarify whether the new functions implemented in elastix are useful for improving DIR accuracy. Thoracic 4D computed tomography images of ten patients with esophageal or lung cancer were studied. Datasets for these patients were provided by DIR-lab (dir-lab.com) and included a coordinate list of anatomical landmarks that had been manually identified. DIR between peak-inhale and peak-exhale images was performed with four types of parameter settings. The first represents the original ITK (Parameter 1). The second employs the new functions of elastix (Parameter 2), and the third was created to verify whether the new functions improve DIR accuracy while keeping the computational time unchanged (Parameter 3). The last partially employs a new function (Parameter 4). Registration errors for these parameter settings were calculated using the manually determined landmark pairs. 3D registration errors with standard deviation over all cases were 1.78 (1.57), 1.28 (1.10), 1.44 (1.09) and 1.36 (1.35) mm for Parameters 1, 2, 3 and 4, respectively, indicating that the new functions are useful for improving DIR accuracy, even while maintaining the computational time, and that this B-spline-based DIR could be used clinically to achieve high-accuracy adaptive radiotherapy. (author)

  19. Recursive B-spline approximation using the Kalman filter

    Directory of Open Access Journals (Sweden)

    Jens Jauch

    2017-02-01

    Full Text Available This paper proposes a novel recursive B-spline approximation (RBA) algorithm which approximates an unbounded number of data points with a B-spline function and achieves lower computational effort compared with previous algorithms. Conventional recursive algorithms based on the Kalman filter (KF) restrict the approximation to a bounded and predefined interval. Conversely, the RBA includes a novel shift operation that makes it possible to shift the estimated B-spline coefficients in the state vector of a KF. This allows the interval in which the B-spline function approximates data points to be adapted at run-time.
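
    The estimation step underlying such algorithms can be sketched as a recursive least-squares (Kalman) measurement update on the coefficient vector, where each observation row holds the B-spline basis values at the sample location. The paper's shift operation is omitted here; the unit measurement noise (R = 1), the diffuse prior, and the toy data are illustrative assumptions:

```python
def bspline3(t):
    """Cardinal cubic B-spline kernel."""
    t = abs(t)
    if t < 1:
        return (4 - 6 * t * t + 3 * t ** 3) / 6.0
    if t < 2:
        return (2 - t) ** 3 / 6.0
    return 0.0

def kf_update(c, P, h, y):
    """One Kalman measurement update (R = 1, no process noise) of the
    B-spline coefficients c with covariance P, for a scalar measurement y
    whose observation row h contains the basis values at the sample."""
    n = len(c)
    Ph = [sum(P[i][k] * h[k] for k in range(n)) for i in range(n)]
    s = 1.0 + sum(h[i] * Ph[i] for i in range(n))   # innovation variance
    K = [v / s for v in Ph]                          # Kalman gain
    e = y - sum(h[i] * c[i] for i in range(n))       # innovation
    c2 = [c[i] + K[i] * e for i in range(n)]
    P2 = [[P[i][k] - K[i] * Ph[k] for k in range(n)] for i in range(n)]
    return c2, P2

# Stream noiseless samples of the line y = 2t + 1; with a diffuse prior
# the estimate approaches the exact representing coefficients.
n = 6
c = [0.0] * n
P = [[1e6 if i == k else 0.0 for k in range(n)] for i in range(n)]
for _ in range(3):                                   # a few passes over the data
    for j in range(13):
        t = 1 + 0.25 * j                             # t in [1, 4]
        h = [bspline3(t - i) for i in range(n)]
        c, P = kf_update(c, P, h, 2 * t + 1)
pred = sum(bspline3(2.5 - i) * c[i] for i in range(n))
```

The RBA's contribution is precisely what this sketch lacks: a way to slide the active coefficient window so the state vector stays small while the data interval grows without bound.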

  20. Nonlinear Analysis for the Crack Control of SMA Smart Concrete Beam Based on a Bidirectional B-Spline QR Method

    Directory of Open Access Journals (Sweden)

    Yan Li

    2018-01-01

    Full Text Available A bidirectional B-spline QR method (BB-sQRM) for the study of crack control in reinforced concrete (RC) beams embedded with shape memory alloy (SMA) wires is presented. In the proposed method, the discretization is performed with a set of spline nodes in the two directions of the plane model, and structural displacement fields are constructed by the linear combination of the products of cubic B-spline interpolation functions. To derive the elastoplastic stiffness equation of the RC beam, an explicit form is utilized to express the elastoplastic constitutive law of concrete materials. The proposed model is compared with an ANSYS model in several numerical examples. The results not only show that the solutions given by the BB-sQRM are very close to those given by the finite element method (FEM) but also prove the high efficiency and low computational cost of the BB-sQRM. Meanwhile, five parameters, namely the depth-span ratio, thickness of the concrete cover, reinforcement ratio, prestrain, and eccentricity of the SMA wires, are investigated to determine their effects on crack control. The results show that the depth-span ratio of the RC beam and the prestrain and eccentricity of the SMA wires have a significant influence on the crack control performance.

  1. B-spline Collocation with Domain Decomposition Method

    International Nuclear Information System (INIS)

    Hidayat, M I P; Parman, S; Ariwahjoedi, B

    2013-01-01

    A global B-spline collocation method has been previously developed and successfully implemented by the present authors for solving elliptic partial differential equations in arbitrary complex domains. However, the global B-spline approximation, which is simply reduced to a Bezier approximation of any degree p with C0 continuity, has led to the use of B-spline bases of high order in order to achieve high accuracy. The need for B-spline bases of high order in the global method would be more prominent in domains of large dimension. The increased number of collocation points may also lead to an ill-conditioning problem. In this study, overlapping domain decomposition with the multiplicative Schwarz algorithm is combined with the global method. Our objective is two-fold: to improve the accuracy with the combination technique, and to investigate the influence of the combination technique on the B-spline basis orders required to obtain a given accuracy. It was shown that the combination method produced higher accuracy with B-spline bases of much lower order than needed in the implementation of the initial method. Hence, the approximation stability of the B-spline collocation method was also increased.

  2. Tomographic reconstruction with B-splines surfaces

    International Nuclear Information System (INIS)

    Oliveira, Eric F.; Dantas, Carlos C.; Melo, Silvio B.; Mota, Icaro V.; Lira, Mailson

    2011-01-01

    Algebraic reconstruction techniques, when applied to a limited number of data, usually suffer from noise caused by the correction process or by inconsistencies in the data arising from the stochastic process of radioactive emission and from equipment oscillation. Post-processing of the reconstructed image with filters can be done to mitigate the noise. In general, these processes also attenuate the discontinuities present at edges that distinguish objects or artifacts, causing excessive blurring in the reconstructed image. This paper proposes built-in noise reduction that at the same time ensures an adequate smoothness level in the reconstructed surface, representing the unknowns as linear combinations of elements of a piecewise polynomial basis, i.e. a B-spline basis. For that, the algebraic technique ART is modified to accommodate first-, second- and third-degree bases, ensuring C0, C1 and C2 smoothness levels, respectively. For comparison, three methodologies are applied: ART, ART post-processed with regular B-spline filters (ART*) and the proposed method with the built-in B-spline filter (BsART). Simulations with input data produced from common mathematical phantoms were conducted. For the phantoms used, the BsART method consistently presented the smallest errors among the three methods. This study has shown the superiority of embedding the filter in ART compared to the post-filtered ART. (author)
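
    The ART backbone that the paper modifies is the Kaczmarz iteration: the estimate is cyclically projected onto the hyperplane defined by each measurement row. A minimal sketch on a tiny consistent system (the B-spline change of basis proposed in the paper is omitted; the matrix and sweep count are illustrative):

```python
def art(A, y, sweeps=200, relax=1.0):
    """Kaczmarz/ART: cyclically correct x by projecting onto the
    hyperplane of each measurement row; for a consistent system the
    iterates converge to a solution."""
    x = [0.0] * len(A[0])
    for _ in range(sweeps):
        for row, yi in zip(A, y):
            norm2 = sum(v * v for v in row)
            if norm2 == 0:
                continue
            r = relax * (yi - sum(v * xi for v, xi in zip(row, x))) / norm2
            x = [xi + r * v for v, xi in zip(row, x)]
    return x

# Tiny consistent system with known solution x = (1, 2, 3).
A = [[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
y = [4.0, 10.0, 8.0]
x = art(A, y)
```

In the BsART variant, the unknowns x would be B-spline coefficients rather than raw pixel values, so each projection implicitly enforces the chosen smoothness level.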

  3. Gradient-based optimization with B-splines on sparse grids for solving forward-dynamics simulations of three-dimensional, continuum-mechanical musculoskeletal system models.

    Science.gov (United States)

    Valentin, J; Sprenger, M; Pflüger, D; Röhrle, O

    2018-05-01

    Investigating the interplay between muscular activity and motion is the basis for improving our understanding of healthy or diseased musculoskeletal systems. To analyze musculoskeletal systems, computational models are used. Despite some severe modeling assumptions, almost all existing musculoskeletal system simulations appeal to multibody simulation frameworks. Although continuum-mechanical musculoskeletal system models can compensate for some of these limitations, they are essentially not considered because of their computational complexity and cost. The proposed framework is the first activation-driven musculoskeletal system model in which the exerted skeletal muscle forces are computed using 3-dimensional, continuum-mechanical skeletal muscle models and in which muscle activations are determined based on a constraint optimization problem. Numerical feasibility is achieved by computing sparse grid surrogates with hierarchical B-splines, and adaptive sparse grid refinement further reduces the computational effort. The choice of B-splines allows the use of all existing gradient-based optimization techniques without further numerical approximation. This paper demonstrates that the resulting surrogates have low relative errors (less than 0.76%) and can be used within forward simulations that are subject to constraint optimization. To demonstrate this, we set up several different test scenarios in which an upper limb model consisting of the elbow joint, the biceps and triceps brachii, and an external load is subjected to different optimization criteria. Even though this novel method has only been demonstrated for a 2-muscle system, it can easily be extended to musculoskeletal systems with 3 or more muscles. Copyright © 2018 John Wiley & Sons, Ltd.

  4. A new perspective for quintic B-spline based Crank-Nicolson-differential quadrature method algorithm for numerical solutions of the nonlinear Schrödinger equation

    Science.gov (United States)

    Başhan, Ali; Uçar, Yusuf; Murat Yağmurlu, N.; Esen, Alaattin

    2018-01-01

    In the present paper, a Crank-Nicolson differential quadrature method (CN-DQM) built on quintic B-splines is applied to obtain numerical solutions of the nonlinear Schrödinger (NLS) equation. For this purpose, the Schrödinger equation is first converted into coupled real-valued differential equations, which are then discretized using the forward difference formula and the Crank-Nicolson method. After that, the Rubin and Graves linearization technique is applied, and the differential quadrature method is used to obtain an algebraic equation system. To test the efficiency of the newly applied method, the error norms L2 and L∞, as well as the two lowest invariants, I1 and I2, are computed, together with the relative changes in those invariants. Finally, the numerical results are compared with those available in the literature for similar parameters. This comparison clearly indicates that CN-DQM is an effective and efficient numerical scheme that can be applied to a wide range of nonlinear equations.

  5. SEM Image Processing Based on Third-Order B-spline Function

    Institute of Scientific and Technical Information of China (English)

    张健

    2011-01-01

    SEM images, because of their distinctive role in practical metrology, must be denoised while their edges are preserved, and those edges must then be extracted and located accurately. This paper therefore combines an edge-preserving partial differential denoising method with the widely used multi-scale wavelet edge detection, both built on the third-order B-spline function as the core operator, to process SEM images used for line-width testing. The algorithm achieves good denoising while maintaining edge features and yields clear edge detection results for SEM images.

  6. Color management with a hammer: the B-spline fitter

    Science.gov (United States)

    Bell, Ian E.; Liu, Bonny H. P.

    2003-01-01

    To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.
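The filtering role of a least-squares B-spline fit described above can be sketched in one dimension. This is an illustrative stand-in for the authors' 3D color fitter, not their implementation: the test signal, noise level, and knot placement are assumptions.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Noisy samples of a smooth device response curve (a 1-D stand-in for 3-D color data).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.05, size=x.size)

# A modest number of interior knots keeps the model smooth and filters the noise.
k = 3                                          # cubic B-spline
interior = np.linspace(0.1, 0.9, 8)
t = np.concatenate(([x[0]] * (k + 1), interior, [x[-1]] * (k + 1)))

# Linear least squares in the spline coefficients: the fit acts as a noise filter
# with guaranteed smoothness, and the resulting model is cheap to evaluate/invert.
spl = make_lsq_spline(x, y, t, k=k)
residual = float(np.sqrt(np.mean((spl(x) - y) ** 2)))
```

The fitted `spl` is a well-behaved callable whose derivatives are also available (`spl.derivative()`), which is what makes consistent model inversion practical.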

  7. A novel knot selection method for the error-bounded B-spline curve fitting of sampling points in the measuring process

    International Nuclear Information System (INIS)

    Liang, Fusheng; Zhao, Ji; Ji, Shijun; Zhang, Bing; Fan, Cheng

    2017-01-01

    The B-spline curve has been widely used in the reconstruction of measurement data. Error-bounded reconstruction of sampling points can be achieved by knot addition method (KAM) based B-spline curve fitting. In KAM, the selection pattern of the initial knot vector determines the number of knots ultimately required. This paper provides a novel initial knot selection method to condense the knot vector required for error-bounded B-spline curve fitting. The initial knots are determined by the distribution of features, namely the chord length (arc length) and bending degree (curvature), contained in the discrete sampling points. First, the sampling points are fitted into an approximate B-spline curve Gs with a dense uniform knot vector to describe their features. The feature integral of Gs is built as a monotone increasing function in analytic form. The initial knots are then selected at constant increments of the feature integral. After that, an iterative knot insertion (IKI) process starting from the initial knots improves the fitting precision, yielding the ultimate knot vector for the error-bounded B-spline curve fitting. Lastly, two simulations and a measurement experiment are provided, and the results indicate that the proposed knot selection method can reduce the number of knots required. (paper)
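The feature-integral idea, placing initial knots at constant increments of a cumulative arc-length-plus-curvature measure, can be sketched discretely in NumPy. This is an illustrative reconstruction, not the authors' code: the test curve, the curvature weight (2.0), and the discrete bending measure are assumptions.

```python
import numpy as np

# Sampling points along a plane curve (a flattened half-ellipse as a test case).
t = np.linspace(0.0, np.pi, 400)
pts = np.column_stack([np.cos(t), 0.3 * np.sin(t)])

# Chord lengths between consecutive samples, and the exterior angle between
# consecutive chords as a discrete bending-degree (curvature) measure.
v = np.diff(pts, axis=0)
seg = np.linalg.norm(v, axis=1)
cosang = np.einsum('ij,ij->i', v[:-1], v[1:]) / (seg[:-1] * seg[1:])
ang = np.arccos(np.clip(cosang, -1.0, 1.0))

# Feature integral: arc length plus a weighted curvature term (weight is a guess).
feature = seg.copy()
feature[1:] += 2.0 * ang
F = np.concatenate([[0.0], np.cumsum(feature)])   # monotone increasing by construction

# Select initial knots at constant increments of the feature integral, so knots
# concentrate where the curve bends sharply.
n_knots = 8
targets = np.linspace(0.0, F[-1], n_knots)
knot_idx = np.searchsorted(F, targets)
knot_idx[-1] = len(pts) - 1
```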

  8. Implementation of exterior complex scaling in B-splines to solve atomic and molecular collision problems

    International Nuclear Information System (INIS)

    McCurdy, C William; Martín, Fernando

    2004-01-01

    B-spline methods are now well established as widely applicable tools for the evaluation of atomic and molecular continuum states. The mathematical technique of exterior complex scaling has been shown, in a variety of other implementations, to be a powerful method with which to solve atomic and molecular scattering problems, because it allows the correct imposition of continuum boundary conditions without their explicit analytic application. In this paper, an implementation of exterior complex scaling in B-splines is described that can bring the well-developed technology of B-splines to bear on new problems, including multiple ionization and breakup problems, in a straightforward way. The approach is demonstrated for examples involving the continuum motion of nuclei in diatomic molecules as well as electronic continua. For problems involving electrons, a method based on Poisson's equation is presented for computing two-electron integrals over B-splines under exterior complex scaling

  9. Comparative Performance of Complex-Valued B-Spline and Polynomial Models Applied to Iterative Frequency-Domain Decision Feedback Equalization of Hammerstein Channels.

    Science.gov (United States)

    Chen, Sheng; Hong, Xia; Khalaf, Emad F; Alsaadi, Fuad E; Harris, Chris J

    2017-12-01

    The complex-valued (CV) B-spline neural network approach offers a highly effective means of identifying and inverting practical Hammerstein systems. Compared with its conventional CV polynomial-based counterpart, a CV B-spline neural network has superior performance in identifying and inverting CV Hammerstein systems while imposing similar complexity. This paper reviews the optimality of the CV B-spline neural network approach. The advantages of the B-spline neural network over polynomial-based modeling are extensively discussed, and the effectiveness of the CV neural-network-based approach is demonstrated in a real-world application. More specifically, we evaluate the comparative performance of the CV B-spline and polynomial-based approaches for the nonlinear iterative frequency-domain decision feedback equalization (NIFDDFE) of single-carrier Hammerstein channels. Our results confirm the superior performance of the CV B-spline-based NIFDDFE over its CV polynomial-based counterpart.

  10. Exponential B-splines and the partition of unity property

    DEFF Research Database (Denmark)

    Christensen, Ole; Massopust, Peter

    2012-01-01

    We provide an explicit formula for a large class of exponential B-splines. Also, we characterize the cases where the integer-translates of an exponential B-spline form a partition of unity up to a multiplicative constant. As an application of this result we construct explicitly given pairs of dual...

  11. B-splines and Faddeev equations

    International Nuclear Information System (INIS)

    Huizing, A.J.

    1990-01-01

    Two numerical methods for solving the three-body equations describing relativistic pion-deuteron scattering have been investigated. For separable two-body interactions these equations form a set of coupled one-dimensional integral equations. They are plagued by singularities which occur in the kernel of the integral equations as well as in the solution. The methods to solve these equations differ in how they treat the singularities. First the Fuda-Stuivenberg method is discussed. The basic idea of this method is a one-time iteration of the set of integral equations to treat the logarithmic singularities. In the second method, the spline method, the unknown solution is approximated by splines. Cubic splines have been used with cubic B-splines as a basis. If the solution is approximated by a linear combination of basis functions, an integral equation can be transformed into a set of linear equations for the expansion coefficients, which is solved by standard means. Splines are determined by points called knots; a proper choice of splines to approximate the solution amounts to a proper choice of the knots. The solution of the three-body scattering equations has a square-root behaviour at a certain point. Hence it was investigated how the knots should be chosen to approximate the square-root function by cubic B-splines in an optimal way. Before applying this method to solve the three-body equations describing pion-deuteron scattering numerically, an analytically solvable example was constructed with a singularity structure of both kernel and solution comparable to that of the three-body equations. The accuracy of the numerical solution was determined to a large extent by the accuracy of the approximation of the square-root part. The results for a pion laboratory energy of 47.4 MeV agree very well with those from literature. In a complete calculation for 47.7 MeV the spline method turned out to be a factor of a thousand faster than the Fuda-Stuivenberg method.

  12. Image edges detection through B-Spline filters

    International Nuclear Information System (INIS)

    Mastropiero, D.G.

    1997-01-01

    B-Spline signal processing was used to detect the edges of a digital image. This technique is based upon processing the image in the Spline transform domain, instead of doing so in the space domain (classical processing). The transformation to the Spline transform domain means finding out the real coefficients that makes it possible to interpolate the grey levels of the original image, with a B-Spline polynomial. There exist basically two methods of carrying out this interpolation, which produces the existence of two different Spline transforms: an exact interpolation of the grey values (direct Spline transform), and an approximated interpolation (smoothing Spline transform). The latter results in a higher smoothness of the gray distribution function defined by the Spline transform coefficients, and is carried out with the aim of obtaining an edge detection algorithm which higher immunity to noise. Finally the transformed image was processed in order to detect the edges of the original image (the gradient method was used), and the results of the three methods (classical, direct Spline transform and smoothing Spline transform) were compared. The results were that, as expected, the smoothing Spline transform technique produced a detection algorithm more immune to external noise. On the other hand the direct Spline transform technique, emphasizes more the edges, even more than the classical method. As far as the consuming time is concerned, the classical method is clearly the fastest one, and may be applied whenever the presence of noise is not important, and whenever edges with high detail are not required in the final image. (author). 9 refs., 17 figs., 1 tab

  13. Numerical Solution of the Blasius Viscous Flow Problem by Quartic B-Spline Method

    Directory of Open Access Journals (Sweden)

    Hossein Aminikhah

    2016-01-01

    Full Text Available A numerical method is proposed to study the laminar boundary layer about a flat plate in a uniform stream of fluid. The presented method is based on quartic B-spline approximations that minimize the L2-norm of the error. Theoretical considerations are discussed. The computed results are compared with other numerical results to show the efficiency of the proposed approach.

  14. Evaluation of optimization methods for nonrigid medical image registration using mutual information and B-splines

    NARCIS (Netherlands)

    Klein, S.; Staring, M.; Pluim, J.P.W.

    2007-01-01

    A popular technique for nonrigid registration of medical images is based on the maximization of their mutual information, in combination with a deformation field parameterized by cubic B-splines. The coordinate mapping that relates the two images is found using an iterative optimization procedure.
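The similarity measure being maximized here, mutual information, can be sketched with a joint-histogram estimate. This illustrates only the metric, not the paper's B-spline deformation or its optimizers; the bin count and test images are assumptions.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of the mutual information (in nats) between two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of a
    py = pxy.sum(axis=0, keepdims=True)      # marginal of b
    nz = pxy > 0                             # avoid log(0) on empty bins
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(2)
fixed = rng.random((64, 64))
aligned = fixed + rng.normal(scale=0.05, size=fixed.shape)   # nearly registered
shifted = np.roll(aligned, 8, axis=1)                        # misregistered

mi_aligned = mutual_information(fixed, aligned)
mi_shifted = mutual_information(fixed, shifted)
```

An optimizer of the kind evaluated in the paper perturbs the B-spline control points of the deformation and keeps steps that increase this quantity.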

  15. Micropolar Fluids Using B-spline Divergence Conforming Spaces

    KAUST Repository

    Sarmiento, Adel

    2014-06-06

    We discretized the two-dimensional linear momentum, microrotation, energy and mass conservation equations from micropolar fluid theory with the finite element method, creating divergence-conforming spaces based on B-spline basis functions to obtain pointwise divergence-free solutions [8]. Weak boundary conditions were imposed using Nitsche's method for tangential conditions, while normal conditions were imposed strongly. With exact mass conservation provided by the divergence-free formulation, we focused on evaluating the differences between micropolar and conventional fluids, to show the advantages of the micropolar fluid model in capturing the features of complex fluids. A square and an arc heat-driven cavity were solved as test cases. The model parameters and the Rayleigh number were varied for a better understanding of the system. The divergence-free formulation was used to guarantee an accurate solution of the flow. The formulation was implemented using the framework PetIGA as a basis, exploiting its parallel structures to achieve high scalability. The results of the square heat-driven cavity test case are in good agreement with those reported earlier.

  16. Counterexamples to the B-spline Conjecture for Gabor Frames

    DEFF Research Database (Denmark)

    Lemvig, Jakob; Nielsen, Kamilla Haahr

    2016-01-01

    The frame set conjecture for B-splines Bn, n≥2, states that the frame set is the maximal set that avoids the known obstructions. We show that any hyperbola of the form ab=r, where r is a rational number smaller than one and a and b denote the sampling and modulation rates, respectively, contains infinitely many points that do not belong to the frame set of Bn.

  17. B-Spline Active Contour with Handling of Topology Changes for Fast Video Segmentation

    Directory of Open Access Journals (Sweden)

    Frederic Precioso

    2002-06-01

    Full Text Available This paper deals with video segmentation for MPEG-4 and MPEG-7 applications. Region-based active contours are a powerful technique for segmentation. However, most of these methods are implemented using level sets. Although level-set methods provide accurate segmentation, they suffer from a large computational cost. We propose a regular B-spline parametric method to provide fast and accurate segmentation. Our B-spline interpolation is based on a fixed number of points 2^j, depending on the level of detail desired. Through this spatial multiresolution approach, the computational cost of the segmentation is reduced. We also introduce a length penalty, which improves both smoothness and accuracy. Finally, we show experiments on real video sequences.

  18. Choosing the Optimal Number of B-spline Control Points (Part 1: Methodology and Approximation of Curves)

    Science.gov (United States)

    Harmening, Corinna; Neuner, Hans

    2016-09-01

    Due to the establishment of terrestrial laser scanner, the analysis strategies in engineering geodesy change from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error-procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method which is based on the structural risk minimization of the statistical learning theory. Unlike the Akaike and the Bayesian Information Criteria this method doesn't use the number of parameters as complexity measure of the approximating functions but their Vapnik-Chervonenkis-dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in their target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
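The information-criterion approach to choosing the number of control points can be sketched for a 1-D B-spline fit. This is a hypothetical example, not the authors' procedure: the test signal, the noise level, and the AIC variant n·log(RSS/n) + 2k are assumptions, and SciPy's `make_lsq_spline` stands in for the geodetic adjustment.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Noisy observations of a curve, as from a scanned profile.
rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 300)
y = np.sin(4 * np.pi * x) + rng.normal(scale=0.1, size=x.size)

def fit_aic(n_interior, k=3):
    """Least-squares cubic B-spline fit with n_interior uniform knots, scored by AIC."""
    t = np.concatenate(([x[0]] * (k + 1),
                        np.linspace(x[0], x[-1], n_interior + 2)[1:-1],
                        [x[-1]] * (k + 1)))
    spl = make_lsq_spline(x, y, t, k=k)
    rss = float(np.sum((spl(x) - y) ** 2))
    n_params = n_interior + k + 1            # number of control points
    return x.size * np.log(rss / x.size) + 2 * n_params

candidates = list(range(2, 30))
aic = [fit_aic(m) for m in candidates]
best = candidates[int(np.argmin(aic))]       # AIC-optimal number of interior knots
```

BIC replaces the penalty `2 * n_params` with `np.log(x.size) * n_params`; the structural-risk-minimization alternative in the paper penalizes via the VC dimension instead of the parameter count.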

  19. Numerical simulation of reaction-diffusion systems by modified cubic B-spline differential quadrature method

    International Nuclear Information System (INIS)

    Mittal, R.C.; Rohila, Rajni

    2016-01-01

    In this paper, we have applied a modified cubic-B-spline-based differential quadrature method to obtain numerical solutions of one-dimensional reaction-diffusion systems such as the linear reaction-diffusion system, the Brusselator system, the isothermal system and the Gray-Scott system. The models represented by these systems have important applications in different areas of science and engineering. The most striking and interesting part of the work is the solution patterns obtained for the Gray-Scott model, reminiscent of those often seen in nature. We have used cubic B-spline functions for space discretization to obtain a system of ordinary differential equations, which is solved by the highly stable SSP-RK43 method to get the solution at the knots. The computed results are very accurate and shown to be better than those available in the literature. The method is simple to apply and gives solutions with less computational effort.

  20. Curvelet-domain multiple matching method combined with cubic B-spline function

    Science.gov (United States)

    Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming

    2018-05-01

    Because the large number of surface-related multiples in marine data can seriously degrade the results of data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination method was proposed based on data-driven theory. However, the elimination effect was unsatisfactory due to amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, select a small number of unknowns as the basis points of the matching coefficient; second, apply the cubic B-spline function to these basis points to reconstruct the matching array; third, build the constraint-solving equation based on the relationships among the predicted multiples, the matching coefficients, and the actual data; finally, use the BFGS algorithm to iterate and realize the fast-solving sparse constraint of the multiple matching algorithm. Moreover, a soft-threshold method is used to improve performance. With the cubic B-spline function, the differences between the predicted multiples and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1-norm constraint. Applications to synthetic and field data both validate the practicability and validity of the method.
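The key cost saving above comes from representing a dense matching-coefficient array by a few cubic B-spline basis points. A minimal sketch, with SciPy's `make_interp_spline` standing in for the reconstruction step; the smooth target profile and the number of basis points (9) are assumptions.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Dense "true" matching-coefficient profile along a trace (assumed smooth).
n = 256
s = np.linspace(0.0, 1.0, n)
true_coeff = 1.0 + 0.3 * np.sin(2 * np.pi * s)

# Keep only a few basis points as unknowns, as in the method above; in the full
# algorithm these values would be estimated by BFGS, not sampled from the truth.
basis_s = np.linspace(0.0, 1.0, 9)
basis_vals = 1.0 + 0.3 * np.sin(2 * np.pi * basis_s)

# Cubic B-spline reconstruction of the dense matching array from 9 unknowns.
dense = make_interp_spline(basis_s, basis_vals, k=3)(s)
max_err = float(np.max(np.abs(dense - true_coeff)))
```

Optimizing 9 unknowns instead of 256 is what shortens the BFGS iterations while the spline keeps the reconstructed coefficients smooth.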

  1. Enhanced spatio-temporal alignment of plantar pressure image sequences using B-splines.

    Science.gov (United States)

    Oliveira, Francisco P M; Tavares, João Manuel R S

    2013-03-01

    This article presents an enhanced methodology to align plantar pressure image sequences simultaneously in time and space. The temporal alignment of the sequences is accomplished using B-splines in the time modeling, and the spatial alignment can be attained using several geometric transformation models. The methodology was tested on a dataset of 156 real plantar pressure image sequences (3 sequences for each foot of the 26 subjects) acquired using a common commercial plate during barefoot walking. In the alignment of image sequences that were synthetically deformed both in time and space, outstanding accuracy was achieved with the cubic B-splines, significantly better than with the alternative models. When aligning real image sequences with unknown transformations involved, the alignment based on cubic B-splines also achieved significantly better results than our previous methodology. The effect of the temporal alignment on the dynamic center of pressure (COP) displacement was also assessed by computing the intraclass correlation coefficients (ICC) before and after the temporal alignment of the three image sequence trials of each foot of the associated subject at six time instants. The results showed that, generally, the ICCs related to the medio-lateral COP displacement were greater when the sequences were temporally aligned than the ICCs of the original sequences. Based on the experimental findings, one can conclude that cubic B-splines are a remarkable solution for the temporal alignment of plantar pressure image sequences and that the temporal alignment can increase the consistency of the COP displacement across related plantar pressure image sequences.

  2. Vibration Analysis of Rectangular Plates with One or More Guided Edges via Bicubic B-Spline Method

    Directory of Open Access Journals (Sweden)

    W.J. Si

    2005-01-01

    Full Text Available A simple and accurate method is proposed for the vibration analysis of rectangular plates with one or more guided edges, in which bicubic B-spline interpolation, in combination with a new type of basis cubic B-spline function, is used to approximate the plate deflection. This type of basis cubic B-spline function can satisfy simply supported, clamped, free, and guided edge conditions with easy numerical manipulation. The frequency characteristic equation is formulated from classical thin plate theory by applying Hamilton's principle. The present solutions are verified against analytical ones. Fast convergence, high accuracy and computational efficiency are demonstrated by the comparisons. Frequency parameters for 13 cases of rectangular plates with at least one guided edge, which are tractable only by approximate or numerical methods, are presented. These results are new in the literature.

  3. Hybrid B-Spline Collocation Method for Solving the Generalized Burgers-Fisher and Burgers-Huxley Equations

    Directory of Open Access Journals (Sweden)

    Imtiaz Wasim

    2018-01-01

    Full Text Available In this study, we introduce a new numerical technique for solving the nonlinear generalized Burgers-Fisher and Burgers-Huxley equations using a hybrid B-spline collocation method. The technique is based on the usual finite difference scheme and the Crank-Nicolson method, which are used to discretize the time derivative and spatial derivatives, respectively. Furthermore, a hybrid B-spline function is utilized as the interpolating function in the spatial dimension. The scheme is shown to be unconditionally stable using the von Neumann (Fourier) method. Several test problems are considered to check the accuracy of the proposed scheme. The numerical results are in good agreement with known exact solutions and with existing schemes in the literature.

  4. PetIGA-MF: a multi-field high-performance toolbox for structure-preserving B-splines spaces

    KAUST Repository

    Sarmiento, Adel; Côrtes, A.M.A.; Garcia, D.A.; Dalcin, Lisandro; Collier, N.; Calo, V.M.

    2016-01-01

    We describe a high-performance solution framework for isogeometric discrete differential forms based on B-splines: PetIGA-MF. Built on top of PetIGA, an open-source library we have built and developed over the last decade, PetIGA-MF is a general

  5. MRI non-uniformity correction through interleaved bias estimation and B-spline deformation with a template.

    Science.gov (United States)

    Fletcher, E; Carmichael, O; Decarli, C

    2012-01-01

    We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer's disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions.

  6. MRI Non-Uniformity Correction Through Interleaved Bias Estimation and B-Spline Deformation with a Template*

    Science.gov (United States)

    Fletcher, E.; Carmichael, O.; DeCarli, C.

    2013-01-01

    We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer’s disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions. PMID:23365843

  7. Data assimilation using Bayesian filters and B-spline geological models

    KAUST Repository

    Duan, Lian

    2011-04-01

    This paper proposes a new approach to problems of data assimilation, also known as history matching, of oilfield production data by adjustment of the location and sharpness of patterns of geological facies. Traditionally, this problem has been addressed using gradient based approaches with a level set parameterization of the geology. Gradient-based methods are robust, but computationally demanding with real-world reservoir problems and insufficient for reservoir management uncertainty assessment. Recently, the ensemble filter approach has been used to tackle this problem because of its high efficiency from the standpoint of implementation, computational cost, and performance. Incorporation of level set parameterization in this approach could further deal with the lack of differentiability with respect to facies type, but its practical implementation is based on some assumptions that are not easily satisfied in real problems. In this work, we propose to describe the geometry of the permeability field using B-spline curves. This transforms history matching of the discrete facies type to the estimation of continuous B-spline control points. As filtering scheme, we use the ensemble square-root filter (EnSRF). The efficacy of the EnSRF with the B-spline parameterization is investigated through three numerical experiments, in which the reservoir contains a curved channel, a disconnected channel or a 2-dimensional closed feature. It is found that the application of the proposed method to the problem of adjusting facies edges to match production data is relatively straightforward and provides statistical estimates of the distribution of geological facies and of the state of the reservoir.

  8. Data assimilation using Bayesian filters and B-spline geological models

    International Nuclear Information System (INIS)

    Duan Lian; Farmer, Chris; Hoteit, Ibrahim; Luo Xiaodong; Moroz, Irene

    2011-01-01

    This paper proposes a new approach to problems of data assimilation, also known as history matching, of oilfield production data by adjustment of the location and sharpness of patterns of geological facies. Traditionally, this problem has been addressed using gradient based approaches with a level set parameterization of the geology. Gradient-based methods are robust, but computationally demanding with real-world reservoir problems and insufficient for reservoir management uncertainty assessment. Recently, the ensemble filter approach has been used to tackle this problem because of its high efficiency from the standpoint of implementation, computational cost, and performance. Incorporation of level set parameterization in this approach could further deal with the lack of differentiability with respect to facies type, but its practical implementation is based on some assumptions that are not easily satisfied in real problems. In this work, we propose to describe the geometry of the permeability field using B-spline curves. This transforms history matching of the discrete facies type to the estimation of continuous B-spline control points. As filtering scheme, we use the ensemble square-root filter (EnSRF). The efficacy of the EnSRF with the B-spline parameterization is investigated through three numerical experiments, in which the reservoir contains a curved channel, a disconnected channel or a 2-dimensional closed feature. It is found that the application of the proposed method to the problem of adjusting facies edges to match production data is relatively straightforward and provides statistical estimates of the distribution of geological facies and of the state of the reservoir.

  9. Automatic Shape Control of Triangular B-Splines of Arbitrary Topology

    Institute of Scientific and Technical Information of China (English)

    Ying He; Xian-Feng Gu; Hong Qin

    2006-01-01

    Triangular B-splines are powerful and flexible in modeling a broader class of geometric objects defined over arbitrary, non-rectangular domains. Despite their great potential and advantages in theory, practical techniques and computational tools for triangular B-splines are less developed. This is mainly because users have to handle a large number of irregularly distributed control points over an arbitrary triangulation. In this paper, an automatic and efficient method is proposed to generate visually pleasing, high-quality triangular B-splines of arbitrary topology. The experimental results on several real datasets show that triangular B-splines are powerful and effective in both theory and practice.

  10. On developing B-spline registration algorithms for multi-core processors

    International Nuclear Information System (INIS)

    Shackleford, J A; Kandasamy, N; Sharp, G C

    2010-01-01

    Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.

  11. Analytic regularization of uniform cubic B-spline deformation fields.

    Science.gov (United States)

    Shackleford, James A; Yang, Qi; Lourenço, Ana M; Shusharina, Nadya; Kandasamy, Nagarajan; Sharp, Gregory C

    2012-01-01

    Image registration is inherently ill-posed and lacks a unique solution. In the context of medical applications, it is desirable to avoid solutions that describe physically unsound deformations within the patient anatomy. Among the accepted methods of regularizing non-rigid image registration to provide solutions applicable to medical practice is penalizing the thin-plate bending energy. In this paper, we develop an exact, analytic method for computing the bending energy of a three-dimensional B-spline deformation field as a quadratic matrix operation on the spline coefficient values. Results presented on ten thoracic case studies indicate the analytic solution is between 61 and 1371 times faster than a numerical central-differencing solution.
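The key computation in this record, bending energy as a quadratic form in the spline coefficients, can be sketched in one dimension. The knot vector, grid resolution, and trapezoidal quadrature below are illustrative assumptions, not the paper's 3-D implementation:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.interpolate import BSpline

# 1-D analogue of the paper's idea: the bending energy of a spline
# f(x) = sum_i c_i B_i(x) equals c^T H c, where
# H_ij = integral of B_i''(x) B_j''(x) dx (a Gram matrix).
k = 3                                                         # cubic B-splines
t = np.concatenate(([0.0]*k, np.linspace(0, 1, 8), [1.0]*k))  # clamped knots
n = len(t) - k - 1                                            # basis size
x = np.linspace(0, 1, 2001)                                   # quadrature grid

# Second derivative of each basis function, sampled on the grid
d2 = np.empty((n, x.size))
for i in range(n):
    c = np.zeros(n)
    c[i] = 1.0
    d2[i] = BSpline(t, c, k)(x, nu=2)

H = trapezoid(d2[:, None, :] * d2[None, :, :], x, axis=2)     # Gram matrix

rng = np.random.default_rng(0)
c = rng.standard_normal(n)

e_quad = c @ H @ c                              # quadratic-form bending energy
f2 = BSpline(t, c, k)(x, nu=2)
e_num = trapezoid(f2**2, x)                     # direct numerical integration
print(abs(e_quad - e_num) / e_num)              # agrees to rounding error
```

Precomputing H once makes the penalty (and its gradient, 2Hc) cheap to evaluate inside an optimization loop, which is the kind of saving the paper reports over repeated numerical differencing.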

  12. A spectral/B-spline method for the Navier-Stokes equations in unbounded domains

    International Nuclear Information System (INIS)

    Dufresne, L.; Dumas, G.

    2003-01-01

    The numerical method presented in this paper aims at solving the incompressible Navier-Stokes equations in unbounded domains. The problem is formulated in cylindrical coordinates and the method is based on a Galerkin approximation scheme that makes use of vector expansions that exactly satisfy the continuity constraint. More specifically, the divergence-free basis vector functions are constructed with Fourier expansions in the θ and z directions while mapped B-splines are used in the semi-infinite radial direction. Special care has been taken to account for the particular analytical behaviors at both end points r=0 and r→∞. A modal reduction algorithm has also been implemented in the azimuthal direction, allowing for a relaxation of the CFL constraint on the timestep size and a possibly significant reduction of the number of DOF. The time marching is carried out using a mixed quasi-third order scheme. Besides the advantages of a divergence-free formulation and a quasi-spectral convergence, the local character of the B-splines allows for a great flexibility in node positioning while keeping narrow bandwidth matrices. Numerical tests show that the present method compares advantageously with other similar methodologies using purely global expansions

  13. Non-stationary hydrologic frequency analysis using B-spline quantile regression

    Science.gov (United States)

    Nasri, B.; Bouezmarni, T.; St-Hilaire, A.; Ouarda, T. B. M. J.

    2017-11-01

    Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide the basic information on planning, design and management of hydraulic and water resources systems under the assumption of stationarity. However, with increasing evidence of climate change, the assumption of stationarity, which is a prerequisite for traditional frequency analysis, may no longer hold, and the results of conventional analyses become questionable. In this study, we consider a framework for frequency analysis of extremes based on B-spline quantile regression, which allows modeling data in the presence of non-stationarity and/or dependence on covariates, with linear and non-linear dependence. A Markov chain Monte Carlo (MCMC) algorithm was used to estimate quantiles and their posterior distributions. A coefficient of determination and the Bayesian information criterion (BIC) for quantile regression are used to select the best model, i.e. for each quantile we choose the degree and number of knots of the adequate B-spline quantile regression model. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in the variable of interest and to estimate the quantiles in this case. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharges with high annual non-exceedance probabilities.
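A B-spline quantile regression fit can be posed as a linear program on the pinball (check) loss. The sketch below uses synthetic data with an assumed degree and knot layout (the paper selects these per quantile with BIC, and estimates posteriors with MCMC rather than an LP):

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import linprog

# Synthetic data: a sine trend plus Gaussian noise
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

# Cubic B-spline basis with a fixed, assumed knot layout
k = 3
inner = np.linspace(0, 1, 7)[1:-1]                    # 5 interior knots
t = np.concatenate(([0.0]*(k+1), inner, [1.0]*(k+1)))
X = BSpline.design_matrix(x, t, k).toarray()          # (n, p) design matrix
n, p = X.shape

def fit_quantile(tau):
    """Minimize the pinball loss sum_i rho_tau(y_i - X_i b) as an LP.

    Variables are [b (free), u (>=0), v (>=0)] with X b + u - v = y,
    so u and v are the positive/negative parts of the residuals.
    """
    c = np.concatenate([np.zeros(p), tau*np.ones(n), (1-tau)*np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)]*p + [(0, None)]*(2*n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

q10 = X @ fit_quantile(0.1)                           # 0.1-quantile curve
q90 = X @ fit_quantile(0.9)                           # 0.9-quantile curve
print(np.mean(y <= q90), np.mean(y <= q10))           # roughly 0.9 and 0.1
```

The empirical coverage of each fitted curve should sit near its nominal level, which is a quick sanity check before moving to model selection.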

  14. ESTIMATION OF GENETIC PARAMETERS IN TROPICARNE CATTLE WITH RANDOM REGRESSION MODELS USING B-SPLINES

    Directory of Open Access Journals (Sweden)

    Joel Domínguez Viveros

    2015-04-01

    Full Text Available The objectives were to estimate variance components and direct (h2) and maternal (m2) heritability of growth in Tropicarne cattle, based on a random regression model using B-splines to model the random effects. Information from 12 890 monthly weighings of 1787 calves, from birth to 24 months of age, was analyzed. The pedigree included 2504 animals. The random effects model included genetic and permanent environmental effects (direct and maternal) of cubic order, and residuals. The fixed effects included contemporary groups (year–season of weighing), sex and the covariate age of the cow (linear and quadratic). The B-splines were defined on four knots across the growth period analyzed. Analyses were performed with the software Wombat. The phenotypic and residual variances presented similar behavior: from 7 to 12 months of age they showed a negative trend, from birth to 6 months and from 13 to 18 months a positive trend, and after 19 months they remained constant. The m2 estimates were low and near zero, with an average of 0.06 in an interval of 0.04 to 0.11; the h2 estimates were also close to zero, with an average of 0.10 in an interval of 0.03 to 0.23.

  15. B-Spline Approximations of the Gaussian, their Gabor Frame Properties, and Approximately Dual Frames

    DEFF Research Database (Denmark)

    Christensen, Ole; Kim, Hong Oh; Kim, Rae Young

    2017-01-01

    We prove that Gabor systems generated by certain scaled B-splines can be considered as perturbations of the Gabor systems generated by the Gaussian, with a deviation within an arbitrary small tolerance whenever the order N of the B-spline is sufficiently large. As a consequence we show that for a...

  16. B-spline solution of a singularly perturbed boundary value problem arising in biology

    International Nuclear Information System (INIS)

    Lin Bin; Li Kaitai; Cheng Zhengxing

    2009-01-01

    We use B-spline functions to develop a numerical method for solving a singularly perturbed boundary value problem arising in biology. We use the B-spline collocation method, which leads to a tridiagonal linear system. The accuracy of the proposed method is demonstrated on test problems. The numerical results are found to be in good agreement with the exact solution.
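The collocation idea in this record can be illustrated on a toy boundary value problem with a known exact solution (the paper's singularly perturbed problem is harder). The knot layout and the use of Greville abscissae as collocation sites are assumptions of this sketch:

```python
import numpy as np
from scipy.interpolate import BSpline

# Toy BVP: y'' = y, y(0) = 0, y(1) = 1; exact solution sinh(x)/sinh(1).
k = 3
breaks = np.linspace(0, 1, 21)                          # uniform breakpoints
t = np.concatenate(([0.0]*k, breaks, [1.0]*k))          # clamped knot vector
n = len(t) - k - 1                                      # basis size

def basis_rows(pts, nu):
    """Rows [B_0^(nu)(p), ..., B_{n-1}^(nu)(p)] for each point p."""
    rows = np.empty((len(pts), n))
    for i in range(n):
        c = np.zeros(n)
        c[i] = 1.0
        rows[:, i] = BSpline(t, c, k)(pts, nu=nu)
    return rows

# Greville abscissae: one natural collocation site per basis function
g = np.array([t[i+1:i+k+1].mean() for i in range(n)])

A = np.zeros((n, n))
b = np.zeros(n)
A[0] = basis_rows(g[:1], 0)[0]                          # y(0) = 0
A[-1] = basis_rows(g[-1:], 0)[0]                        # y(1) = 1
b[-1] = 1.0
A[1:-1] = basis_rows(g[1:-1], 2) - basis_rows(g[1:-1], 0)   # y'' - y = 0 inside
coef = np.linalg.solve(A, b)

xx = np.linspace(0, 1, 101)
err = np.max(np.abs(BSpline(t, coef, k)(xx) - np.sinh(xx) / np.sinh(1)))
print(err)                                              # small collocation error
```

With cubic splines the collocation matrix is banded; the tridiagonal system mentioned in the abstract arises from the specific basis and collocation points chosen there.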

  17. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    Science.gov (United States)

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  18. Nonrigid registration of dynamic medical imaging data using nD + t B-splines and a groupwise optimization approach.

    Science.gov (United States)

    Metz, C T; Klein, S; Schaap, M; van Walsum, T; Niessen, W J

    2011-04-01

    A registration method for motion estimation in dynamic medical imaging data is proposed. Registration is performed directly on the dynamic image, thus avoiding a bias towards a specifically chosen reference time point. Both spatial and temporal smoothness of the transformations are taken into account. Optionally, cyclic motion can be imposed, which can be useful for visualization (viewing the segmentation sequentially) or model building purposes. The method is based on a 3D (2D+time) or 4D (3D+time) free-form B-spline deformation model, a similarity metric that minimizes the intensity variances over time and constrained optimization using a stochastic gradient descent method with adaptive step size estimation. The method was quantitatively compared with existing registration techniques on synthetic data and 3D+t computed tomography data of the lungs. This showed subvoxel accuracy while delivering smooth transformations, and high consistency of the registration results. Furthermore, the accuracy of semi-automatic derivation of left ventricular volume curves from 3D+t computed tomography angiography data of the heart was evaluated. On average, the deviation from the curves derived from the manual annotations was approximately 3%. The potential of the method for other imaging modalities was shown on 2D+t ultrasound and 2D+t magnetic resonance images. The software is publicly available as an extension to the registration package elastix. Copyright © 2010 Elsevier B.V. All rights reserved.

  19. Spectral reproduction of tristimulus values by means of B-spline descriptions: evaluation of the colour error

    OpenAIRE

    Pizarro Bondia, Carlos; Arasa Marti, Jose; de Lasarte, Marta; Pujol Ramo, Jaume; Arjona Carbonell, Mª Montserrat; Vilaseca Ricart, Meritxell

    2008-01-01

    The main motivation of this work is the search for a single mathematical expression that allows spectral distributions to be reproduced in a general way. To this end, second-order rotational B-spline polynomials are considered as the base mathematical expression for this reproduction. The fundamental objective of this work is therefore the determination of the coefficients of the B-spline polynomials that allow spectral distributions to be reproduced, as well as the evaluation of the accura...

  20. Adaptive B-spline volume representation of measured BRDF data for photorealistic rendering

    Directory of Open Access Journals (Sweden)

    Hyungjun Park

    2015-01-01

    Full Text Available Measured bidirectional reflectance distribution function (BRDF) data have been used to represent the complex interaction between light and surface materials for photorealistic rendering. However, their massive size makes them hard to adopt in practical rendering applications. In this paper, we propose an adaptive method for B-spline volume representation of measured BRDF data. It performs approximate B-spline volume lofting, which decomposes the problem into three sub-problems of multiple B-spline curve fitting along the u-, v-, and w-parametric directions. In particular, it makes efficient use of knots in the multiple B-spline curve fitting and thereby accomplishes adaptive knot placement along each parametric direction of the resulting B-spline volume. The proposed method is quite useful for achieving efficient data reduction while smoothing out noise and preserving the overall features of the BRDF data. By applying the B-spline volume models of real materials for rendering, we show that they are effective in preserving the features of material appearance and are suitable for representing BRDF data.
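As a 1-D analogue of adaptive knot placement in curve fitting, SciPy's FITPACK-based smoothing spline also inserts knots adaptively until a residual target s is met; this only illustrates the concept, not the paper's lofting algorithm:

```python
import numpy as np
from scipy.interpolate import BSpline, splrep

# A curve with a kink at x = 0.5 needs more knots there than elsewhere.
rng = np.random.default_rng(4)
x = np.linspace(0, 1, 300)
y = np.where(x < 0.5, x, 1.0 - x) + rng.normal(0, 0.005, x.size)

# FITPACK adds knots until the residual sum of squares drops below s
s = 2 * x.size * 0.005**2                 # roughly twice the noise energy
t, c, k = splrep(x, y, k=3, s=s)

fit = BSpline(t, c, k)(x)
print(len(t), x.size)                     # far fewer knots than data points
```

Lowering s forces more knots (tighter fit); raising it yields stronger data reduction, which is the same compression/fidelity trade-off the record describes for BRDF volumes.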

  1. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting.

    Directory of Open Access Journals (Sweden)

    Van Than Dung

    Full Text Available B-spline functions are widely used in many industrial applications such as computer graphic representations, computer-aided design, computer-aided manufacturing, computer numerical control, etc. Recently, there has been demand, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points in the sampled data. The most challenging task in these cases is to identify the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve with B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data are split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both location and continuity level, by employing a non-linear least squares technique. The B-spline function is therefore obtained by solving an ordinary least squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and by deterministic parametric functions. This paper also discusses benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fit any type of curve, ranging from smooth to discontinuous ones. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
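The final stage described above, solving an ordinary least squares problem once the knots are fixed, can be sketched directly with SciPy (uniform knots here stand in for the method's optimized ones):

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Noisy samples of a smooth test curve
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 400)
y = np.exp(-x) * np.sin(6 * np.pi * x) + rng.normal(0, 0.01, x.size)

# Fixed cubic knot vector; the paper would place these knots adaptively
k = 3
inner = np.linspace(0, 1, 22)[1:-1]                 # 20 interior knots (assumed)
t = np.concatenate(([0.0]*(k+1), inner, [1.0]*(k+1)))
spl = make_lsq_spline(x, y, t, k=k)                 # ordinary least squares fit

rms = np.sqrt(np.mean((spl(x) - y)**2))
print(rms)                                          # close to the noise level
```

With knots fixed, the problem is linear in the coefficients, so the fit is a single linear solve; all of the method's nonlinearity lives in the knot-placement step.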

  2. Time-dependent B-spline R-matrix approach to double ionization of atoms by XUV laser pulses

    Energy Technology Data Exchange (ETDEWEB)

    Guan Xiaoxu; Zatsarinny, Oleg; Bartschat, Klaus [Department of Physics and Astronomy, Drake University, Des Moines, Iowa 50311 (United States); Noble, Clifford J [Computational Science and Engineering Department, Daresbury Laboratory, Warrington WA4 4AD (United Kingdom); Schneider, Barry I, E-mail: xiaoxu.guan@drake.ed, E-mail: klaus.bartschat@drake.ed, E-mail: bschneid@nsf.go [Physics Division, National Science Foundation, Arlington, Virgina 22230 (United States)

    2009-11-01

    We present an ab initio and non-perturbative time-dependent approach to the problem of double ionization of a general atom driven by intense XUV laser pulses. After using a highly flexible B-spline R-matrix method to generate field-free Hamiltonian and electric dipole matrices, the initial state is propagated in time using an efficient Arnoldi-Lanczos scheme. Example results for momentum and energy distributions of the two outgoing electrons in two-color pump-probe processes of He are presented.

  3. A time-dependent B-spline R-matrix approach to double ionization of atoms by XUV laser pulses

    Energy Technology Data Exchange (ETDEWEB)

    Guan Xiaoxu; Zatsarinny, O; Noble, C J; Bartschat, K [Department of Physics and Astronomy, Drake University, Des Moines, IA 50311 (United States); Schneider, B I [Physics Division, National Science Foundation, Arlington, Virgina 22230 (United States)], E-mail: xiaoxu.guan@drake.edu, E-mail: oleg.zatsarinny@drake.edu, E-mail: cjn@maxnet.co.nz, E-mail: klaus.bartschat@drake.edu, E-mail: bschneid@nsf.gov

    2009-07-14

    We present an ab initio and non-perturbative time-dependent approach to the problem of double ionization of a general atom driven by intense XUV laser pulses. After using a highly flexible B-spline R-matrix method to generate field-free Hamiltonian and electric dipole matrices, the initial state is propagated in time using an efficient Arnoldi-Lanczos scheme. Test calculations for double ionization of He by a single laser pulse yield good agreement with benchmark results obtained with other methods. The method is then applied to two-colour pump-probe processes, for which momentum and energy distributions of the two outgoing electrons are presented.

  4. Numerical solution of system of boundary value problems using B-spline with free parameter

    Science.gov (United States)

    Gupta, Yogesh

    2017-01-01

    This paper deals with a B-spline method for the solution of a system of boundary value problems. Differential equations are useful in various fields of science and engineering. Some interesting real-life problems involve more than one unknown function, resulting in systems of simultaneous differential equations. Such systems have been applied to many problems in mathematics, physics, engineering, etc. In the present paper, B-spline and B-spline-with-free-parameter methods for the solution of a linear system of second-order boundary value problems are presented. The methods utilize the values of the cubic B-spline and its derivatives at nodal points, together with the equations of the given system and the boundary conditions, resulting in a linear matrix equation.

  5. B-Spline potential function for maximum a-posteriori image reconstruction in fluorescence microscopy

    Directory of Open Access Journals (Sweden)

    Shilpa Dilipkumar

    2015-03-01

    Full Text Available An iterative image reconstruction technique employing a B-spline potential function in a Bayesian framework is proposed for fluorescence microscopy images. B-splines are piecewise polynomials with smooth transitions and compact support, and are the shortest polynomial splines. Incorporation of the B-spline potential function in the maximum-a-posteriori reconstruction technique resulted in improved contrast, enhanced resolution and substantial background reduction. The proposed technique is validated on simulated data as well as on images acquired from fluorescence microscopes (widefield, confocal laser scanning fluorescence and super-resolution 4Pi microscopy). A comparative study of the proposed technique with the state-of-the-art maximum likelihood (ML) technique and the maximum-a-posteriori (MAP) technique with a quadratic potential function shows its superiority over the others. The B-spline MAP technique can find applications in several imaging modalities of fluorescence microscopy like selective plane illumination microscopy, localization microscopy and STED.

  6. PetIGA-MF: a multi-field high-performance toolbox for structure-preserving B-splines spaces

    KAUST Repository

    Sarmiento, Adel

    2016-10-01

    We describe a high-performance solution framework for isogeometric discrete differential forms based on B-splines: PetIGA-MF. Built on top of PetIGA, an open-source library we have built and developed over the last decade, PetIGA-MF is a general multi-field discretization tool. To test the capabilities of our implementation, we solve different viscous flow problems such as Darcy, Stokes, Brinkman, and Navier-Stokes equations. Several convergence benchmarks based on manufactured solutions are presented assuring optimal convergence rates of the approximations, showing the accuracy and robustness of our solver.

  7. Multivariate Hermite interpolation on scattered point sets using tensor-product expo-rational B-splines

    Science.gov (United States)

    Dechevsky, Lubomir T.; Bang, Børre; Lakså, Arne; Zanaty, Peter

    2011-12-01

    At the Seventh International Conference on Mathematical Methods for Curves and Surfaces, Tønsberg, Norway, in 2008, several new constructions for Hermite interpolation on scattered point sets in domains in Rn, n ∈ N, combined with a smooth convex partition of unity for several general types of partitions of these domains, were proposed in [1]. All of these constructions were based on a new type of B-splines proposed by some of the authors several years earlier: expo-rational B-splines (ERBS) [3]. In the present communication we provide more details about one of these constructions: the one for the most general class of domain partitions considered. This construction is based on the use of two separate families of basis functions: one which has all the necessary Hermite interpolation properties, and another which has the necessary properties of a smooth convex partition of unity. The constructions of both of these bases are well known; the new part of the construction is the combined use of these bases to derive a new basis which enjoys all of the above-said interpolation and partition-of-unity properties simultaneously. In [1] the emphasis was put on the use of radial basis functions in the definitions of the two initial bases in the construction; here we put the main emphasis on the case where these bases consist of tensor-product B-splines. This selection provides two useful advantages: (A) it is easier to compute higher-order derivatives while working in Cartesian coordinates; (B) it becomes clear that this construction is a far-reaching extension of tensor-product constructions. We provide 3-dimensional visualization of the resulting bivariate bases, using tensor-product ERBS. In the main tensor-product variant, we also consider replacement of ERBS with simpler generalized ERBS (GERBS) [2], namely, their simplified polynomial modifications: the Euler Beta-function B-splines (BFBS). One advantage of using BFBS instead of ERBS

  8. An isogeometric boundary element method for electromagnetic scattering with compatible B-spline discretizations

    Science.gov (United States)

    Simpson, R. N.; Liu, Z.; Vázquez, R.; Evans, J. A.

    2018-06-01

    We outline the construction of compatible B-splines on 3D surfaces that satisfy the continuity requirements for electromagnetic scattering analysis with the boundary element method (method of moments). Our approach makes use of Non-Uniform Rational B-splines to represent model geometry and compatible B-splines to approximate the surface current, and adopts the isogeometric concept in which the basis for analysis is taken directly from CAD (geometry) data. The approach allows for high-order approximations and crucially provides a direct link with CAD data structures that allows for efficient design workflows. After outlining the construction of div- and curl-conforming B-splines defined over 3D surfaces we describe their use with the electric and magnetic field integral equations using a Galerkin formulation. We use Bézier extraction to accelerate the computation of NURBS and B-spline terms and employ H-matrices to provide accelerated computations and memory reduction for the dense matrices that result from the boundary integral discretization. The method is verified using the well known Mie scattering problem posed over a perfectly electrically conducting sphere and the classic NASA almond problem. Finally, we demonstrate the ability of the approach to handle models with complex geometry directly from CAD without mesh generation.

  9. An Investigation into Conversion from Non-Uniform Rational B-Spline Boundary Representation Geometry to Constructive Solid Geometry

    Science.gov (United States)

    2015-12-01

    ARL-SR-0347 ● DEC 2015 ● US Army Research Laboratory. Report on an investigation into conversion from Non-Uniform Rational B-Spline boundary representation geometry to constructive solid geometry.

  10. Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo

    Science.gov (United States)

    Krogel, Jaron T.; Reboredo, Fernando A.

    2018-01-01

    Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this work, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% are possible for select transition metal oxide systems. For production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.

  11. A spectral/B-spline method for the Navier-Stokes equations in unbounded domains

    CERN Document Server

    Dufresne, L

    2003-01-01

    The numerical method presented in this paper aims at solving the incompressible Navier-Stokes equations in unbounded domains. The problem is formulated in cylindrical coordinates and the method is based on a Galerkin approximation scheme that makes use of vector expansions that exactly satisfy the continuity constraint. More specifically, the divergence-free basis vector functions are constructed with Fourier expansions in the θ and z directions while mapped B-splines are used in the semi-infinite radial direction. Special care has been taken to account for the particular analytical behaviors at both end points r=0 and r→∞. A modal reduction algorithm has also been implemented in the azimuthal direction, allowing for a relaxation of the CFL constraint on the timestep size and a possibly significant reduction of the number of DOF. The time marching is carried out using a mixed quasi-third order scheme. Besides the advantages of a divergence-free formulation and a quasi-spectral convergence, the lo...

  12. Automatic lung lobe segmentation of COPD patients using iterative B-spline fitting

    Science.gov (United States)

    Shamonin, D. P.; Staring, M.; Bakker, M. E.; Xiao, C.; Stolk, J.; Reiber, J. H. C.; Stoel, B. C.

    2012-02-01

    We present an automatic lung lobe segmentation algorithm for COPD patients. The method enhances fissures, removes unlikely fissure candidates, after which a B-spline is fitted iteratively through the remaining candidate objects. The iterative fitting approach circumvents the need to classify each object as being part of the fissure or being noise, and allows the fissure to be detected in multiple disconnected parts. This property is beneficial for good performance in patient data, containing incomplete and disease-affected fissures. The proposed algorithm is tested on 22 COPD patients, resulting in accurate lobe-based densitometry, and a median overlap of the fissure (defined 3 voxels wide) with an expert ground truth of 0.65, 0.54 and 0.44 for the three main fissures. This compares to complete lobe overlaps of 0.99, 0.98, 0.98, 0.97 and 0.87 for the five main lobes, showing promise for lobe segmentation on data of patients with moderate to severe COPD.

  13. Approximation and geometric modeling with simplex B-splines associated with irregular triangles

    NARCIS (Netherlands)

    Auerbach, S.; Gmelig Meyling, R.H.J.; Neamtu, M.; Neamtu, M.; Schaeben, H.

    1991-01-01

    Bivariate quadratic simplicial B-splines defined by their corresponding set of knots derived from a (suboptimal) constrained Delaunay triangulation of the domain are employed to obtain a C1-smooth surface. The generation of triangle vertices is adjusted to the areal distribution of the data in the

  14. Least square fitting of low resolution gamma ray spectra with cubic B-spline basis functions

    International Nuclear Information System (INIS)

    Zhu Menghua; Liu Lianggang; Qi Dongxu; You Zhong; Xu Aoao

    2009-01-01

    In this paper, a least squares fitting method with cubic B-spline basis functions is derived to reduce the influence of statistical fluctuations in gamma ray spectra. The derived procedure is simple and automatic. The results show that this method achieves a sufficient reduction of statistical fluctuations and performs better than the convolution method. (authors)
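A minimal sketch of the record's approach on synthetic data: ordinary least squares in a cubic B-spline basis smooths a Poisson-noisy "spectrum". The peak shapes, channel count, and knot spacing below are assumptions:

```python
import numpy as np
from scipy.interpolate import BSpline

# Synthetic low-resolution "spectrum": two Gaussian peaks on a background,
# with Poisson counting noise standing in for statistical fluctuations.
rng = np.random.default_rng(3)
ch = np.linspace(0, 1, 512)                              # channel axis
true = (200 * np.exp(-0.5 * ((ch - 0.3) / 0.03)**2)
        + 120 * np.exp(-0.5 * ((ch - 0.7) / 0.05)**2) + 50)
counts = rng.poisson(true).astype(float)

# Least squares fit in a cubic B-spline basis (knot count is an assumption)
k = 3
inner = np.linspace(0, 1, 40)[1:-1]
t = np.concatenate(([0.0]*(k+1), inner, [1.0]*(k+1)))
B = BSpline.design_matrix(ch, t, k).toarray()
coef, *_ = np.linalg.lstsq(B, counts, rcond=None)
smooth = B @ coef

raw_rms = np.sqrt(np.mean((counts - true)**2))
fit_rms = np.sqrt(np.mean((smooth - true)**2))
print(fit_rms < raw_rms)                                 # fit suppresses noise
```

The knot spacing controls the trade-off the abstract alludes to: knots much wider than the peak width blur the peaks, while knots much narrower than the noise scale reintroduce the fluctuations.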

  15. A finite strain Eulerian formulation for compressible and nearly incompressible hyperelasticity using high-order B-spline finite elements

    KAUST Repository

    Duddu, Ravindra

    2011-10-05

    We present a numerical formulation aimed at modeling the nonlinear response of elastic materials using large deformation continuum mechanics in three dimensions. This finite element formulation is based on the Eulerian description of motion and the transport of the deformation gradient. When modeling a nearly incompressible solid, the transport of the deformation gradient is decomposed into its isochoric part and the Jacobian determinant as independent fields. A homogeneous isotropic hyperelastic solid is assumed and B-splines-based finite elements are used for the spatial discretization. A variational multiscale residual-based approach is employed to stabilize the transport equations. The performance of the scheme is explored for both compressible and nearly incompressible applications. The numerical results are in good agreement with theory illustrating the viability of the computational scheme. © 2011 John Wiley & Sons, Ltd.

  16. PEMODELAN B-SPLINE DAN MARS PADA NILAI UJIAN MASUK TERHADAP IPK MAHASISWA JURUSAN DISAIN KOMUNIKASI VISUAL UK. PETRA SURABAYA

    Directory of Open Access Journals (Sweden)

    I Nyoman Budiantara

    2006-01-01

    Full Text Available Regression analysis is constructed for capturing the influences of independent variables on dependent ones, by looking at the relationship between those variables. This task of approximating the mean function can be done in essentially two ways. The quite often used parametric approach is to assume that the mean curve has some prespecified functional form. Alternatively, the nonparametric approach, i.e., without reference to a specific form, is used when there is no information on the form of the regression function (Haerdle, 1990). The nonparametric approach therefore has more flexibility than the parametric one. The aim of this research is to find the best-fit model that captures the relationship between the admission test score and the GPA. The data were taken from the Department of Design Communication and Visual, Petra Christian University, Surabaya, for the year 1999. Both approaches were used here. In the parametric approach, we use simple linear, quadratic and cubic regression; in the nonparametric one, we use B-splines and Multivariate Adaptive Regression Splines (MARS). Overall, the best model was chosen based on the maximum coefficient of determination; for MARS, however, the best model was chosen based on the GCV, minimum MSE and maximum coefficient of determination. Abstract in Bahasa Indonesia (translated): Regression analysis is used to see the influence of independent variables on the dependent variable, by first examining the pattern of the relationship between those variables. This can be done through two approaches. The most common and frequently used is the parametric approach, which assumes that the form of the model is already determined. If there is no information at all about the form of the regression function, the nonparametric approach is used (Haerdle, 1990). Because this approach does not depend on the assumption of a particular curve form, it offers greater flexibility.
    The aim of this research

  17. Investigation of confined hydrogen atom in spherical cavity, using B-splines basis set

    Directory of Open Access Journals (Sweden)

    M Barezi

    2011-03-01

    Full Text Available Studying confined quantum systems (CQS) is very important in nanotechnology. One of the basic CQS is a hydrogen atom confined in a spherical cavity. In this article, eigenenergies and eigenfunctions of the hydrogen atom in a spherical cavity are calculated using the linear variational method. B-splines are used as basis functions, which can easily construct trial wave functions with appropriate boundary conditions. The main characteristics of B-splines are their high localization and flexibility. Besides, these functions are numerically stable and can handle a large volume of calculation with good accuracy. The energy levels as a function of cavity radius are analyzed. To check the validity and efficiency of the proposed method, extensive convergence tests of the eigenenergies at different cavity sizes have been carried out.

  18. A scalable block-preconditioning strategy for divergence-conforming B-spline discretizations of the Stokes problem

    KAUST Repository

    Cortes, Adriano Mauricio; Dalcin, Lisandro; Sarmiento, Adel; Collier, N.; Calo, Victor M.

    2016-01-01

    The recently introduced divergence-conforming B-spline discretizations allow the construction of smooth discrete velocity-pressure pairs for viscous incompressible flows that are at the same time inf-sup stable and pointwise divergence

  19. A cubic B-spline Galerkin approach for the numerical simulation of the GEW equation

    Directory of Open Access Journals (Sweden)

    S. Battal Gazi Karakoç

    2016-02-01

    Full Text Available The generalized equal width (GEW) wave equation is solved numerically by using a lumped Galerkin approach with cubic B-spline functions. The proposed numerical scheme is tested by applying two test problems: a single solitary wave and the interaction of two solitary waves. In order to determine the performance of the algorithm, the error norms L2 and L∞ and the invariants I1, I2 and I3 are calculated. For the linear stability analysis of the numerical algorithm, the von Neumann approach is used. The obtained findings show that the presented numerical scheme is preferable to some recent numerical methods.
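
    For reference, the discrete error norms quoted in abstracts like this one are straightforward to compute. A minimal sketch on a uniform grid (the grid and test functions are arbitrary, chosen only for the check):

```python
import numpy as np

def error_norms(u_num, u_exact, dx):
    """Discrete L2 and L-infinity error norms on a uniform grid of spacing dx."""
    err = u_num - u_exact
    return np.sqrt(dx * np.sum(err ** 2)), np.max(np.abs(err))

# Check on a pair of grid functions that differ by a constant 1e-3.
x = np.linspace(0.0, 2.0 * np.pi, 200)
dx = x[1] - x[0]
l2, linf = error_norms(np.sin(x) + 1e-3, np.sin(x), dx)
```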

  20. RANCANG BANGUN PROGRAM PENGEDITAN KURVA B-SPLINE MULTIRESOLUSI BERBASIS WAVELETS

    Directory of Open Access Journals (Sweden)

    Nanik Suciati

    2002-07-01

    Full Text Available This research constructs a multiresolution representation, based on wavelets, for cubic B-spline curves that interpolate their endpoints. The multiresolution representation is used to support several types of curve editing: smoothing the curve at a continuously adjustable resolution level to remove unwanted detail; editing the overall shape of the curve while preserving its details; changing the details of the curve without affecting its overall shape; and editing one particular part of the curve through direct manipulation of its control points. To test the ability of the multiresolution representation to support these four types of curve manipulation, a curve-editing program was implemented in Visual C++ on a 133 MHz Pentium with 16 MB of memory, running Windows 95, using the Microsoft Developer Studio 97 environment and the Microsoft Foundation Class library. Tests of the program show that the multiresolution representation supports the editing types mentioned above very well. It requires no extra storage beyond that used for the control points. In tests with hundreds of control points, the algorithm ran fast enough to support interactive use. Keywords: B-spline, wavelets, multiresolution

  1. Fuzzy B-spline optimization for urban slum three-dimensional reconstruction using ENVISAT satellite data

    International Nuclear Information System (INIS)

    Marghany, Maged

    2014-01-01

    A critical challenge in urban areas is slums. In fact, they are considered a source of crime and disease due to poor-quality housing, unsanitary conditions, poor infrastructure and insecure occupancy. The poor in dense urban slums are the most vulnerable to infection due to (i) inadequate and restricted access to safe drinking water and sufficient quantities of water for personal hygiene; (ii) the lack of removal and treatment of excreta; and (iii) the lack of removal of solid waste. This study aims to investigate the capability of ENVISAT ASAR satellite and Google Earth data for three-dimensional (3-D) slum urban reconstruction in developing countries such as Egypt. The main objective of this work is to apply a 3-D automatic detection algorithm to urban slums in ENVISAT ASAR and Google Earth images acquired over Cairo, Egypt, using a fuzzy B-spline algorithm. The results show that the fuzzy algorithm is the best indicator for chaotic urban slums, as it can discriminate them from their surrounding environment. The combination of fuzzy logic and B-splines was then used to reconstruct the urban slums in 3-D. The results show that urban slums, the road network, and infrastructure are perfectly discriminated. It can therefore be concluded that the fuzzy algorithm is an appropriate algorithm for automatic detection of chaotic urban slums in ENVISAT ASAR and Google Earth data

  2. Fuzzy B-spline optimization for urban slum three-dimensional reconstruction using ENVISAT satellite data

    Science.gov (United States)

    Marghany, Maged

    2014-06-01

    A critical challenge in urban areas is slums. In fact, they are considered a source of crime and disease due to poor-quality housing, unsanitary conditions, poor infrastructure and insecure occupancy. The poor in dense urban slums are the most vulnerable to infection due to (i) inadequate and restricted access to safe drinking water and sufficient quantities of water for personal hygiene; (ii) the lack of removal and treatment of excreta; and (iii) the lack of removal of solid waste. This study aims to investigate the capability of ENVISAT ASAR satellite and Google Earth data for three-dimensional (3-D) slum urban reconstruction in developing countries such as Egypt. The main objective of this work is to apply a 3-D automatic detection algorithm to urban slums in ENVISAT ASAR and Google Earth images acquired over Cairo, Egypt, using a fuzzy B-spline algorithm. The results show that the fuzzy algorithm is the best indicator for chaotic urban slums, as it can discriminate them from their surrounding environment. The combination of fuzzy logic and B-splines was then used to reconstruct the urban slums in 3-D. The results show that urban slums, the road network, and infrastructure are perfectly discriminated. It can therefore be concluded that the fuzzy algorithm is an appropriate algorithm for automatic detection of chaotic urban slums in ENVISAT ASAR and Google Earth data.

  3. Correlation studies for B-spline modeled F2 Chapman parameters obtained from FORMOSAT-3/COSMIC data

    Directory of Open Access Journals (Sweden)

    M. Limberger

    2014-12-01

    Full Text Available The determination of ionospheric key quantities such as the maximum electron density of the F2 layer NmF2, the corresponding F2 peak height hmF2 and the F2 scale height HF2 are of high relevance in 4-D ionosphere modeling to provide information on the vertical structure of the electron density (Ne. The Ne distribution with respect to height can, for instance, be modeled by the commonly accepted F2 Chapman layer. An adequate and observation driven description of the vertical Ne variation can be obtained from electron density profiles (EDPs derived by ionospheric radio occultation measurements between GPS and low Earth orbiter (LEO satellites. For these purposes, the six FORMOSAT-3/COSMIC (F3/C satellites provide an excellent opportunity to collect EDPs that cover most of the ionospheric region, in particular the F2 layer. For the contents of this paper, F3/C EDPs have been exploited to determine NmF2, hmF2 and HF2 within a regional modeling approach. As mathematical base functions, endpoint-interpolating polynomial B-splines are considered to model the key parameters with respect to longitude, latitude and time. The description of deterministic processes and the verification of this modeling approach have been published previously in Limberger et al. (2013, whereas this paper should be considered as an extension dealing with related correlation studies, a topic to which less attention has been paid in the literature. Relations between the B-spline series coefficients regarding specific key parameters as well as dependencies between the three F2 Chapman key parameters are in the main focus. Dependencies are interpreted from the post-derived correlation matrices as a result of (1 a simulated scenario without data gaps by taking dense, homogenously distributed profiles into account and (2 two real data scenarios on 1 July 2008 and 1 July 2012 including sparsely, inhomogeneously distributed F3/C EDPs. Moderate correlations between hmF2 and HF2 as

  4. Finite nucleus Dirac mean field theory and random phase approximation using finite B splines

    International Nuclear Information System (INIS)

    McNeil, J.A.; Furnstahl, R.J.; Rost, E.; Shepard, J.R.; Department of Physics, University of Maryland, College Park, Maryland 20742; Department of Physics, University of Colorado, Boulder, Colorado 80309)

    1989-01-01

    We calculate the finite nucleus Dirac mean field spectrum in a Galerkin approach using finite basis splines. We review the method and present results for the relativistic σ-ω model for the closed-shell nuclei ¹⁶O and ⁴⁰Ca. We study the convergence of the method as a function of the size of the basis and the closure properties of the spectrum using an energy-weighted dipole sum rule. We apply the method to the Dirac random-phase-approximation response and present results for the isoscalar 1⁻ and 3⁻ longitudinal form factors of ¹⁶O and ⁴⁰Ca. We also use a B-spline spectral representation of the positive-energy projector to evaluate partial energy-weighted sum rules and compare with nonrelativistic sum rule results

  5. Series-nonuniform rational B-spline signal feedback: From chaos to any embedded periodic orbit or target point

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Chenxi, E-mail: cxshao@ustc.edu.cn; Xue, Yong; Fang, Fang; Bai, Fangzhou [Department of Computer Science and Technology, University of Science and Technology of China, Hefei 230027 (China); Yin, Peifeng [Department of Computer Science and Engineering, Pennsylvania State University, State College, Pennsylvania 16801 (United States); Wang, Binghong [Department of Modern Physics, University of Science and Technology of China, Hefei 230026 (China)

    2015-07-15

    The self-controlling feedback control method requires an external periodic oscillator with special design, which is technically challenging. This paper proposes a chaos control method based on time series non-uniform rational B-splines (SNURBS for short) signal feedback. It first builds the chaos phase diagram or chaotic attractor with the sampled chaotic time series and any target orbit can then be explicitly chosen according to the actual demand. Second, we use the discrete timing sequence selected from the specific target orbit to build the corresponding external SNURBS chaos periodic signal, whose difference from the system current output is used as the feedback control signal. Finally, by properly adjusting the feedback weight, we can quickly lead the system to an expected status. We demonstrate both the effectiveness and efficiency of our method by applying it to two classic chaotic systems, i.e., the Van der Pol oscillator and the Lorenz chaotic system. Further, our experimental results show that compared with delayed feedback control, our method takes less time to obtain the target point or periodic orbit (from the starting point) and that its parameters can be fine-tuned more easily.

  6. Application of SCM with Bayesian B-Spline to Spatio-Temporal Analysis of Hypertension in China.

    Science.gov (United States)

    Ye, Zirong; Xu, Li; Zhou, Zi; Wu, Yafei; Fang, Ya

    2018-01-02

    Most previous research on the disparities of hypertension risk has neither simultaneously explored the spatio-temporal disparities nor considered the spatial information contained in the samples, thus the estimated results may be unreliable. Our study was based on the China Health and Nutrition Survey (CHNS), including residents over 12 years old in seven provinces from 1991 to 2011. Bayesian B-spline was used in the extended shared component model (SCM) for fitting temporal-related variation to explore spatio-temporal distribution in the odds ratio (OR) of hypertension, reveal gender variation, and explore latent risk factors. Our results revealed that the prevalence of hypertension increased from 14.09% in 1991 to 32.37% in 2011, with men experiencing a more obvious change than women. From a spatial perspective, a standardized prevalence ratio (SPR) remaining at a high level was found in Henan and Shandong for both men and women. Meanwhile, before 1997, the temporal distribution of hypertension risk for both men and women remained low. After that, notably since 2004, the OR of hypertension in each province increased to a relatively high level, especially in Northern China. Notably, the OR of hypertension in Shandong and Jiangsu, which was over 1.2, continuously stood out after 2004 for males, while that in Shandong and Guangxi was relatively high for females. The findings suggested that obvious spatial-temporal patterns for hypertension exist in the regions under research and this pattern was quite different between men and women.

  7. Series-nonuniform rational B-spline signal feedback: From chaos to any embedded periodic orbit or target point.

    Science.gov (United States)

    Shao, Chenxi; Xue, Yong; Fang, Fang; Bai, Fangzhou; Yin, Peifeng; Wang, Binghong

    2015-07-01

    The self-controlling feedback control method requires an external periodic oscillator with special design, which is technically challenging. This paper proposes a chaos control method based on time series non-uniform rational B-splines (SNURBS for short) signal feedback. It first builds the chaos phase diagram or chaotic attractor with the sampled chaotic time series and any target orbit can then be explicitly chosen according to the actual demand. Second, we use the discrete timing sequence selected from the specific target orbit to build the corresponding external SNURBS chaos periodic signal, whose difference from the system current output is used as the feedback control signal. Finally, by properly adjusting the feedback weight, we can quickly lead the system to an expected status. We demonstrate both the effectiveness and efficiency of our method by applying it to two classic chaotic systems, i.e., the Van der Pol oscillator and the Lorenz chaotic system. Further, our experimental results show that compared with delayed feedback control, our method takes less time to obtain the target point or periodic orbit (from the starting point) and that its parameters can be fine-tuned more easily.

  8. Improving mouse controlling and movement for people with Parkinson's disease and involuntary tremor using adaptive path smoothing technique via B-spline.

    Science.gov (United States)

    Hashem, Seyed Yashar Bani; Zin, Nor Azan Mat; Yatim, Noor Faezah Mohd; Ibrahim, Norlinah Mohamed

    2014-01-01

    Many input devices are available for interacting with computers, but the computer mouse is still the most popular device for interaction. People who suffer from involuntary tremor have difficulty using the mouse in the normal way. The target participants of this research were individuals who suffer from Parkinson's disease. Tremor in the limbs makes accurate mouse movements difficult or impossible without assistive technologies. This study explores a new assistive technique, adaptive path smoothing via B-spline (APSS), to enhance mouse control based on the user's tremor level and type. APSS uses mean filtering and B-splines to provide a smoothed mouse trajectory. Seven participants who have unwanted tremor evaluated APSS. Results show that APSS is very promising and greatly increases their control of the computer mouse. Results of the user acceptance test also show that users perceived APSS as easy to use. They also believe it to be a useful tool and intend to use it once it is available. Future studies could explore the possibility of integrating APSS with an assistive pointing technique, such as the Bubble cursor or the Sticky target technique, to provide an all-in-one solution for motor-disabled users.
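
    The smoothing pipeline described, mean filtering followed by B-spline fitting of the cursor path, can be sketched as follows. The window size, smoothing factor, and test trajectory are illustrative assumptions, not parameters from the paper:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_path(xs, ys, window=5, s=1.0):
    """Mean-filter a 2-D cursor trajectory, then fit a parametric smoothing
    cubic B-spline and resample it (an APSS-style sketch)."""
    kernel = np.ones(window) / window
    xf = np.convolve(xs, kernel, mode="valid")
    yf = np.convolve(ys, kernel, mode="valid")
    tck, _ = splprep([xf, yf], s=s)                     # smoothing B-spline fit
    xs_s, ys_s = splev(np.linspace(0.0, 1.0, len(xs)), tck)
    return np.asarray(xs_s), np.asarray(ys_s)

# Jittery drag along the line y = x/2: smoothing should reduce the deviation.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)
xs = 100.0 * t + rng.normal(scale=2.0, size=t.size)
ys = 50.0 * t + rng.normal(scale=2.0, size=t.size)
xs_s, ys_s = smooth_path(xs, ys)
```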

  9. An Adaptive B-Spline Method for Low-order Image Reconstruction Problems - Final Report - 09/24/1997 - 09/24/2000

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xin; Miller, Eric L.; Rappaport, Carey; Silevich, Michael

    2000-04-11

    A common problem in signal processing is to estimate the structure of an object from noisy measurements linearly related to the desired image. These problems are broadly known as inverse problems. A key feature which complicates the solution to such problems is their ill-posedness. That is, small perturbations in the data arising e.g. from noise can and do lead to severe, non-physical artifacts in the recovered image. The process of stabilizing these problems is known as regularization, of which Tikhonov regularization is one of the most common. While this approach leads to a simple linear least squares problem to solve for generating the reconstruction, it has the unfortunate side effect of producing smooth images, thereby obscuring important features such as edges. Therefore, over the past decade there has been much work in the development of edge-preserving regularizers. This technique leads to image estimates in which the important features are retained, but computationally they require the solution of a nonlinear least squares problem, a daunting task in many practical multi-dimensional applications. In this thesis we explore low-order models for reducing the complexity of the reconstruction process. Specifically, B-splines are used to approximate the object. If a 'proper' collection of B-splines is chosen such that the object can be efficiently represented using a few basis functions, the dimensionality of the underlying problem will be significantly decreased. Consequently, an optimum distribution of splines needs to be determined. Here, an adaptive refining and pruning algorithm is developed to solve the problem. The refining part is based on curvature information, in which the intuition is that a relatively dense set of fine-scale basis elements should cluster near regions of high curvature, while a sparse collection of basis vectors is required to adequately represent the object over spatially smooth areas. The pruning part is a greedy

  10. FC LSEI WNNLS, Least-Square Fitting Algorithms Using B Splines

    International Nuclear Information System (INIS)

    Hanson, R.J.; Haskell, K.H.

    1989-01-01

    1 - Description of problem or function: FC allows a user to fit discrete data, in a weighted least-squares sense, using piece-wise polynomial functions represented by B-Splines on a given set of knots. In addition to the least-squares fitting of the data, equality, inequality, and periodic constraints at a discrete, user-specified set of points can be imposed on the fitted curve or its derivatives. The subprograms LSEI and WNNLS solve the linearly-constrained least-squares problem. LSEI solves the class of problem with general inequality constraints, and, if requested, obtains a covariance matrix of the solution parameters. WNNLS solves the class of problem with non-negativity constraints. It is anticipated that most users will find LSEI suitable for their needs; however, users with inequalities that are single bounds on variables may wish to use WNNLS. 2 - Method of solution: The discrete data are fit by a linear combination of piece-wise polynomial curves which leads to a linear least-squares system of algebraic equations. Additional information is expressed as a discrete set of linear inequality and equality constraints on the fitted curve which leads to a linearly-constrained least-squares system of algebraic equations. The solution of this system is the main computational problem solved
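
    The unconstrained core of such a fit, a linear least-squares problem in a B-spline basis, can be sketched as below. The knot layout and data are illustrative, and the constraint handling of LSEI/WNNLS is not reproduced; each design-matrix column is one B-spline basis function evaluated at the data sites:

```python
import numpy as np
from scipy.interpolate import BSpline

# Clamped knot vector for cubic (k=3) B-splines on [0, 1].
k = 3
t = np.r_[[0.0] * k, np.linspace(0.0, 1.0, 8), [1.0] * k]
n_basis = len(t) - k - 1

x = np.linspace(0.0, 1.0, 60)
y = np.cos(3.0 * x) + 0.01 * np.sin(40.0 * x)   # smooth trend plus a rough component

# Design matrix: column j is the j-th B-spline basis function evaluated at x.
A = np.column_stack([
    BSpline(t, np.eye(n_basis)[j], k)(x) for j in range(n_basis)
])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # weighted LS would scale rows
fit = A @ coef
```

    Equality or inequality constraints on the curve or its derivatives add linear rows in the same coefficients, which is exactly the linearly-constrained system LSEI/WNNLS solve.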

  11. The estimation of time-varying risks in asset pricing modelling using B-Spline method

    Science.gov (United States)

    Nurjannah; Solimun; Rinaldo, Adji

    2017-12-01

    Asset pricing modelling has been extensively studied in the past few decades to explore the risk-return relationship. The asset pricing literature typically assumed a static risk-return relationship. However, several studies found anomalies in asset pricing modelling which captured the presence of risk instability, and dynamic models have been proposed to offer a better fit. The main problem highlighted in the dynamic model literature is that the set of conditioning information is unobservable and therefore some assumptions have to be made; hence, the estimation requires additional assumptions about the dynamics of risk. To overcome this problem, nonparametric estimators can be used as an alternative for estimating risk. The flexibility of the nonparametric setting avoids the problem of misspecification derived from selecting a functional form. This paper investigates the estimation of time-varying asset pricing models using B-splines, one nonparametric approach. The advantages of the spline method are its computational speed and simplicity, as well as the clarity of controlling curvature directly. Three popular asset pricing models are investigated: the CAPM (Capital Asset Pricing Model), the Fama-French 3-factor model and the Carhart 4-factor model. The results suggest that the estimated risks are time-varying and not stable over time, which confirms the risk instability anomaly. The result is more pronounced in Carhart's 4-factor model.
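
    The basic construction, expanding a time-varying beta in a B-spline basis and estimating the coefficients by least squares, can be sketched for a one-factor (CAPM-style) model. All data below are simulated and the knot layout is an illustrative assumption:

```python
import numpy as np
from scipy.interpolate import BSpline

# Model: r_asset(t) = beta(t) * r_market(t) + eps, with beta(t) a cubic B-spline.
rng = np.random.default_rng(2)
n = 500
time = np.linspace(0.0, 1.0, n)
r_m = rng.normal(scale=0.02, size=n)                   # market excess returns
beta_true = 0.8 + 0.6 * np.sin(2.0 * np.pi * time)     # slowly varying risk
r_a = beta_true * r_m + rng.normal(scale=0.001, size=n)

k = 3
t_knots = np.r_[[0.0] * (k + 1), np.linspace(0.2, 0.8, 4), [1.0] * (k + 1)]
n_basis = len(t_knots) - k - 1
basis = np.column_stack([
    BSpline(t_knots, np.eye(n_basis)[j], k)(time) for j in range(n_basis)
])
# Regressors: each basis function multiplied by the market return, so the
# fitted coefficients trace out beta(t) = sum_j c_j B_j(t).
X = basis * r_m[:, None]
coef, *_ = np.linalg.lstsq(X, r_a, rcond=None)
beta_hat = basis @ coef
```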

  12. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations

    International Nuclear Information System (INIS)

    Hardy, David J.; Schulten, Klaus; Wolff, Matthew A.; Skeel, Robert D.; Xia, Jianlin

    2016-01-01

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle–mesh Ewald method falls short.
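
    The interpolation ingredient, representing a smoothly split long-range kernel on a coarse grid with splines, can be sketched in one dimension. The quadratic softening below is a common choice used here only for illustration; the paper's B-spline construction and preprocessing step are not reproduced:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

a = 1.0  # splitting distance

def long_range(r):
    """Smoothed long-range part of 1/r: quadratic softening inside r < a,
    exactly 1/r outside (a C^1 kernel split)."""
    r = np.asarray(r, dtype=float)
    out = np.empty_like(r)
    near = r < a
    out[near] = (3.0 - (r[near] / a) ** 2) / (2.0 * a)
    out[~near] = 1.0 / r[~near]
    return out

# Because the split removes the singularity, a coarse grid plus a cubic spline
# reproduces the long-range part accurately everywhere, including r -> 0.
grid = np.linspace(0.0, 8.0, 33)                # grid spacing 0.25
spline = make_interp_spline(grid, long_range(grid), k=3)

r = np.linspace(0.0, 8.0, 1000)
max_err = np.max(np.abs(spline(r) - long_range(r)))
```

    The short-range remainder 1/r minus this tail is singular but compactly supported, so it is summed directly; only the smooth part is pushed to the grid hierarchy.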

  13. A baseline correction algorithm for Raman spectroscopy by adaptive knots B-spline

    International Nuclear Information System (INIS)

    Wang, Xin; Fan, Xian-guang; Xu, Ying-jie; Wang, Xiu-fen; He, Hao; Zuo, Yong

    2015-01-01

    The Raman spectroscopy technique is a powerful and non-invasive technique for molecular fingerprint detection which has been widely used in many areas, such as food safety, drug safety, and environmental testing. But Raman signals can be easily corrupted by a fluorescent background, therefore we presented a baseline correction algorithm to suppress the fluorescent background in this paper. In this algorithm, the background of the Raman signal was suppressed by fitting a curve called a baseline using a cyclic approximation method. Instead of the traditional polynomial fitting, we used the B-spline as the fitting algorithm due to its advantages of low-order and smoothness, which can avoid under-fitting and over-fitting effectively. In addition, we also presented an automatic adaptive knot generation method to replace traditional uniform knots. This algorithm can obtain the desired performance for most Raman spectra with varying baselines without any user input or preprocessing step. In the simulation, three kinds of fluorescent background lines were introduced to test the effectiveness of the proposed method. We showed that two real Raman spectra (parathion-methyl and colza oil) can be detected and their baselines were also corrected by the proposed method. (paper)
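
    A simplified version of the idea, iteratively fitting a stiff B-spline baseline and clipping the signal down to the fit so that Raman peaks stop pulling the baseline upward, can be sketched as follows. The fixed uniform knots and the synthetic signal are illustrative; the paper's adaptive knot generation is not reproduced:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def spline_baseline(x, y, n_knots=5, n_iter=30):
    """Iterative baseline estimate: fit a stiff cubic B-spline, clip the
    signal to the fit (progressively ignoring peaks), and refit."""
    knots = np.linspace(x[0], x[-1], n_knots + 2)[1:-1]  # interior knots only
    work = y.copy()
    for _ in range(n_iter):
        fit = LSQUnivariateSpline(x, work, t=knots, k=3)(x)
        work = np.minimum(work, fit)
    return fit

# Synthetic Raman-like signal: smooth fluorescent background plus two sharp peaks.
x = np.linspace(0.0, 1.0, 400)
background = 2.0 + x + 0.5 * x ** 2
peaks = 5.0 * np.exp(-((x - 0.3) / 0.01) ** 2) + 3.0 * np.exp(-((x - 0.7) / 0.015) ** 2)
baseline_hat = spline_baseline(x, background + peaks)
corrected = background + peaks - baseline_hat
```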

  14. 2-Dimensional B-Spline Algorithms with Applications to Ray Tracing in Media of Spatially-Varying Refractive Index

    Science.gov (United States)

    2007-08-01

    In the approach, photon trajectories are computed using a solution of the Eikonal equation (ray-tracing methods) rather than linear trajectories, within a multi-layer biological tissue model, with the radiative transport solution coupled into heat transfer and damage models. Subject terms: B-splines, ray-tracing, Eikonal equation.

  15. Investigation of electron and hydrogenic-donor states confined in a permeable spherical box using B-splines

    Directory of Open Access Journals (Sweden)

    T Nikbakht

    2012-12-01

    Full Text Available   Effects of quantum size and potential shape on the spectra of an electron and a hydrogenic-donor at the center of a permeable spherical cavity have been calculated, using linear variational method. B-splines have been used as basis functions. By extensive convergence tests and comparing with other results given in the literature, the validity and efficiency of the method were confirmed.

  16. Alignment of large image series using cubic B-splines tessellation: application to transmission electron microscopy data.

    Science.gov (United States)

    Dauguet, Julien; Bock, Davi; Reid, R Clay; Warfield, Simon K

    2007-01-01

    3D reconstruction from serial 2D microscopy images depends on non-linear alignment of serial sections. For some structures, such as the neuronal circuitry of the brain, very large images at very high resolution are necessary to permit reconstruction. These very large images prevent the direct use of classical registration methods. We propose in this work a method to deal with the non-linear alignment of arbitrarily large 2D images using the finite support properties of cubic B-splines. After initial affine alignment, each large image is split into a grid of smaller overlapping sub-images, which are individually registered using cubic B-splines transformations. Inside the overlapping regions between neighboring sub-images, the coefficients of the knots controlling the B-splines deformations are blended, to create a virtual large grid of knots for the whole image. The sub-images are resampled individually, using the new coefficients, and assembled together into a final large aligned image. We evaluated the method on a series of large transmission electron microscopy images and our results indicate significant improvements compared to both manual and affine alignment.

  17. Galerkin method for unsplit 3-D Dirac equation using atomically/kinetically balanced B-spline basis

    International Nuclear Information System (INIS)

    Fillion-Gourdeau, F.; Lorin, E.; Bandrauk, A.D.

    2016-01-01

    A Galerkin method is developed to solve the time-dependent Dirac equation in prolate spheroidal coordinates for an electron–molecular two-center system. The initial state is evaluated from a variational principle using a kinetic/atomic balanced basis, which allows for an efficient and accurate determination of the Dirac spectrum and eigenfunctions. B-spline basis functions are used to obtain high accuracy. This numerical method is used to compute the energy spectrum of the two-center problem and then the evolution of eigenstate wavefunctions in an external electromagnetic field.

  18. An efficient approach to numerical study of the coupled-BBM system with B-spline collocation method

    Directory of Open Access Journals (Sweden)

    Khalid Ali

    2016-11-01

    Full Text Available In the present paper, a numerical method is proposed for the numerical solution of a coupled-BBM system with appropriate initial and boundary conditions by using a collocation method with cubic trigonometric B-splines on uniform mesh points. The method is shown to be unconditionally stable using the von Neumann technique. To test accuracy, the error norms L2 and L∞ are computed. Furthermore, interactions of two and three solitary waves are used to discuss the behavior of the solitary waves after the interaction. These results show that the technique introduced here is easy to apply. The nonlinear term is linearized.

  19. Dynamic metabolic flux analysis using B-splines to study the effects of temperature shift on CHO cell metabolism

    Directory of Open Access Journals (Sweden)

    Verónica S. Martínez

    2015-12-01

    Full Text Available Metabolic flux analysis (MFA) is widely used to estimate intracellular fluxes. Conventional MFA, however, is limited to continuous cultures and the mid-exponential growth phase of batch cultures. Dynamic MFA (DMFA) has emerged to characterize time-resolved metabolic fluxes for the entire culture period. Here, the linear DMFA approach was extended using B-spline fitting (B-DMFA) to estimate mass-balanced fluxes. Smoother fits were achieved using a reduced number of knots and parameters. Additionally, computation time was greatly reduced using a new heuristic algorithm for knot placement. B-DMFA revealed that Chinese hamster ovary cells shifted from 37 °C to 32 °C maintained a constant IgG volume-specific productivity, whereas the productivity for the controls peaked during mid-exponential growth phase and declined afterward. The observed 42% increase in product titer at 32 °C was explained by prolonged cell growth with high cell viability, a larger cell volume and a more stable volume-specific productivity. Keywords: Dynamic, Metabolism, Flux analysis, CHO cells, Temperature shift, B-spline curve fitting
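The B-spline fitting step underlying B-DMFA can be sketched with a few-knot least-squares spline: rates (and hence fluxes) follow from the spline's derivative. The data, knot positions, and profile below are invented for illustration; this is not the authors' pipeline:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Illustrative sketch: fit a smooth cubic B-spline with few interior knots to
# noisy metabolite-concentration measurements; specific rates come from the
# spline derivative.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)                # culture time (arbitrary units)
conc = 2.0 * t + 0.5 * np.sin(t)              # assumed "true" concentration profile
data = conc + rng.normal(0.0, 0.05, t.size)   # measurements with noise

knots = [2.5, 5.0, 7.5]                       # few interior knots -> smooth fit
spline = LSQUnivariateSpline(t, data, knots, k=3)
rate = spline.derivative()                    # d(conc)/dt, i.e. specific rate

print(float(spline(5.0)), float(rate(5.0)))
```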

  20. A scalable block-preconditioning strategy for divergence-conforming B-spline discretizations of the Stokes problem

    KAUST Repository

    Cortes, Adriano Mauricio

    2016-10-01

    The recently introduced divergence-conforming B-spline discretizations allow the construction of smooth discrete velocity-pressure pairs for viscous incompressible flows that are at the same time inf-sup stable and pointwise divergence-free. When applied to the discretized Stokes problem, these spaces generate a symmetric and indefinite saddle-point linear system. The iterative method of choice to solve such systems is the Generalized Minimum Residual Method. This method lacks robustness, however, and one remedy is to use preconditioners. For linear systems of saddle-point type, a large family of preconditioners can be obtained by using a block factorization of the system. In this paper, we show how the nesting of "black-box" solvers and preconditioners can be put together in a block triangular strategy to build a scalable block preconditioner for the Stokes system discretized by divergence-conforming B-splines. Besides the well-known cavity flow problem, we used as benchmarks flows defined on complex geometries: an eccentric annulus and a hollow torus with an eccentric annular cross-section.

  1. The modeling of quadratic B-splines surfaces for the tomographic reconstruction in the FCC- type-riser

    International Nuclear Information System (INIS)

    Vasconcelos, Geovane Vitor; Dantas, Carlos Costa; Melo, Silvio de Barros; Pires, Renan Ferraz

    2009-01-01

    The 3D tomography reconstruction has been a profitable alternative in the analysis of the FCC-type riser (Fluid Catalytic Cracking), for appropriately keeping track of the sectional catalyst concentration distribution in the process of oil refining. The method of tomography reconstruction proposed by M. Azzi and colleagues (1991) uses a relatively small number of trajectories (from 3 to 5) and projections (from 5 to 7) of gamma rays, a desirable feature in industrial process tomography. Compared to more popular methods, such as FBP (Filtered Back Projection), which demands a much higher number of gamma-ray projections, the method by Azzi et al. is more appropriate for the industrial process, where the physical limitations and the cost of the process require more economical arrangements. The use of few projections and trajectories facilitates the diagnosis of the dynamical flow process. This article proposes an improvement in the basis functions introduced by Azzi et al., through the use of quadratic B-spline functions. The use of B-spline functions makes possible a smoother surface reconstruction of the density distribution, since the functions are continuous and smooth. This work describes how the modeling can be done. (author)
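The quadratic B-spline basis used in this record can be evaluated with the standard Cox-de Boor recursion (a generic sketch with an arbitrary open knot vector, not the paper's reconstruction basis):

```python
# Cox-de Boor recursion for B-spline basis functions; evaluated here for the
# quadratic (degree-2) case on an open uniform knot vector (illustrative).
def bspline_basis(i, p, u, knots):
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    den = knots[i + p] - knots[i]
    if den > 0.0:
        left = (u - knots[i]) / den * bspline_basis(i, p - 1, u, knots)
    den = knots[i + p + 1] - knots[i + 1]
    if den > 0.0:
        right = (knots[i + p + 1] - u) / den * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

knots = [0, 0, 0, 1, 2, 3, 3, 3]   # open knot vector, degree 2, 5 basis functions
u = 1.5
vals = [bspline_basis(i, 2, u, knots) for i in range(5)]
print(vals, sum(vals))             # the nonzero values sum to 1 (partition of unity)
```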

  2. Near real-time estimation of ionosphere vertical total electron content from GNSS satellites using B-splines in a Kalman filter

    Science.gov (United States)

    Erdogan, Eren; Schmidt, Michael; Seitz, Florian; Durmaz, Murat

    2017-02-01

    Although the number of terrestrial global navigation satellite system (GNSS) receivers supported by the International GNSS Service (IGS) is rapidly growing, the worldwide rather inhomogeneously distributed observation sites do not allow the generation of high-resolution global ionosphere products. Conversely, with the regionally enormous increase in highly precise GNSS data, the demands on (near) real-time ionosphere products, necessary in many applications such as navigation, are growing very fast. Consequently, many analysis centers accepted the responsibility of generating such products. In this regard, the primary objective of our work is to develop a near real-time processing framework for the estimation of the vertical total electron content (VTEC) of the ionosphere using proper models that are capable of a global representation adapted to the real data distribution. The global VTEC representation developed in this work is based on a series expansion in terms of compactly supported B-spline functions, which allow for an appropriate handling of the heterogeneous data distribution, including data gaps. The corresponding series coefficients and additional parameters such as differential code biases of the GNSS satellites and receivers constitute the set of unknown parameters. The Kalman filter (KF), as a popular recursive estimator, allows processing of the data immediately after acquisition and paves the way of sequential (near) real-time estimation of the unknown parameters. To exploit the advantages of the chosen data representation and the estimation procedure, the B-spline model is incorporated into the KF under the consideration of necessary constraints. Based on a preprocessing strategy, the developed approach utilizes hourly batches of GPS and GLONASS observations provided by the IGS data centers with a latency of 1 h in its current realization. 
Two methods for validation of the results are performed, namely the self-consistency analysis and a comparison
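The estimation idea in this record, B-spline series coefficients updated sequentially by a Kalman filter, can be sketched in miniature. Everything below is an assumption for illustration: random-walk coefficient dynamics, a dense observation matrix standing in for B-spline basis values at the pierce points, and invented noise levels:

```python
import numpy as np

# Minimal linear Kalman-filter step for coefficient estimation (sketch).
def kf_step(x, P, A, z, Q, R):
    x_pred = x                        # random-walk prediction
    P_pred = P + Q
    S = A @ P_pred @ A.T + R          # innovation covariance
    K = P_pred @ A.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - A @ x_pred)
    P_new = (np.eye(len(x)) - K @ A) @ P_pred
    return x_new, P_new

rng = np.random.default_rng(3)
n_coef, n_obs = 4, 20
truth = np.array([5.0, -2.0, 1.0, 3.0])          # "true" series coefficients
A = rng.random((n_obs, n_coef))                  # basis values at observation sites
z = A @ truth + rng.normal(0.0, 0.01, n_obs)     # noisy VTEC-like observations
x, P = np.zeros(n_coef), np.eye(n_coef) * 100.0  # vague prior
x, P = kf_step(x, P, A, z, Q=np.eye(n_coef) * 1e-4, R=np.eye(n_obs) * 1e-4)
print(np.round(x, 2))                            # close to the true coefficients
```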

  3. Numerical solutions of magnetohydrodynamic stability of axisymmetric toroidal plasmas using cubic B-spline finite element method

    International Nuclear Information System (INIS)

    Cheng, C.Z.

    1988-12-01

    A nonvariational ideal MHD stability code (NOVA) has been developed. In a general flux coordinate (ψ, θ, ζ) system with an arbitrary Jacobian, the NOVA code employs Fourier expansions in the generalized poloidal angle θ and generalized toroidal angle ζ directions, and cubic B-spline finite elements in the radial ψ direction. Extensive comparisons with variational ideal MHD codes show that the NOVA code converges faster and gives more accurate results. An extended version of NOVA was developed to integrate non-Hermitian eigenmode equations due to energetic particles. The set of non-Hermitian integro-differential eigenmode equations is numerically solved by the NOVA-K code. We have studied the problems of the stabilization of ideal MHD internal kink modes by hot particle pressure and the excitation of "fishbone" internal kink modes by resonance with the energetic particle magnetic drift frequency. Comparisons with analytical solutions show that the values of the critical β_h from the analytical theory can be an order of magnitude different from those computed by the NOVA-K code. 24 refs., 11 figs., 1 tab

  4. Performance evaluation of block-diagonal preconditioners for the divergence-conforming B-spline discretization of the Stokes system

    KAUST Repository

    Côrtes, A.M.A.

    2015-02-20

    The recently introduced divergence-conforming B-spline discretizations allow the construction of smooth discrete velocity–pressure pairs for viscous incompressible flows that are at the same time inf-sup stable and pointwise divergence-free. When applied to discretized Stokes equations, these spaces generate a symmetric and indefinite saddle-point linear system. Krylov subspace methods are usually the most efficient procedures to solve such systems. One of such methods, for symmetric systems, is the Minimum Residual Method (MINRES). However, the efficiency and robustness of Krylov subspace methods is closely tied to appropriate preconditioning strategies. For the discrete Stokes system, in particular, block-diagonal strategies provide efficient preconditioners. In this article, we compare the performance of block-diagonal preconditioners for several block choices. We verify how the eigenvalue clustering promoted by the preconditioning strategies affects MINRES convergence. We also compare the number of iterations and wall-clock timings. We conclude that among the building blocks we tested, the strategy with relaxed inner conjugate gradients preconditioned with incomplete Cholesky provided the best results.
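The block-diagonal preconditioning idea evaluated in this record can be sketched on a toy saddle-point system. The matrices below are random stand-ins, not a divergence-conforming discretization, and the "ideal" Schur-complement block is formed explicitly only because the system is tiny:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, minres

# Toy Stokes-like saddle-point system K = [[A, B^T], [B, 0]] (illustrative).
n, m = 8, 3
rng = np.random.default_rng(1)
A = sp.diags(np.arange(1.0, n + 1))               # SPD "velocity" block
B = sp.csr_matrix(rng.standard_normal((m, n)))    # "divergence" block
K = sp.bmat([[A, B.T], [B, None]], format="csr")  # symmetric indefinite

# Ideal block-diagonal preconditioner diag(A, S), S = B A^{-1} B^T (Schur complement).
Ainv = sp.diags(1.0 / np.arange(1.0, n + 1))
Sinv = np.linalg.inv((B @ Ainv @ B.T).toarray())

def apply_prec(v):
    y = np.empty_like(v)
    y[:n] = Ainv @ v[:n]
    y[n:] = Sinv @ v[n:]
    return y

M = LinearOperator((n + m, n + m), matvec=apply_prec)
b = np.ones(n + m)
x, info = minres(K, b, M=M)        # MINRES converges in very few iterations here
print(info, np.linalg.norm(K @ x - b))
```

With the exact Schur complement the preconditioned spectrum clusters into a handful of values, which is why MINRES terminates almost immediately; practical strategies replace both blocks by cheap approximations, as the record discusses.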

  5. A quantitative evaluation of pleural effusion on computed tomography scans using B-spline and local clustering level set.

    Science.gov (United States)

    Song, Lei; Gao, Jungang; Wang, Sheng; Hu, Huasi; Guo, Youmin

    2017-01-01

    Estimation of the pleural effusion's volume is an important clinical issue. The existing methods cannot assess it accurately when there is a large volume of liquid in the pleural cavity and/or the patient has some other disease (e.g. pneumonia). In order to help solve this issue, the objective of this study is to develop and test a novel algorithm using the B-spline and local clustering level set methods jointly, namely BLL. The BLL algorithm was applied to a dataset involving 27 pleural effusions detected on chest CT examination of 18 adult patients with the presence of free pleural effusion. Study results showed that the average volumes of pleural effusion computed using the BLL algorithm and assessed manually by the physicians were 586±339 ml and 604±352 ml, respectively. For the same patient, the volume of the pleural effusion, segmented semi-automatically, was 101.8%±4.6% of that segmented manually. Dice similarity was found to be 0.917±0.031. The study demonstrated the feasibility of applying the new BLL algorithm to accurately measure the volume of pleural effusion.
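The two evaluation quantities reported in this record, segmented volume and Dice similarity, are standard and can be sketched directly from binary masks (the masks and voxel size below are invented for illustration):

```python
import numpy as np

# Dice similarity between two binary segmentations, and volume in ml from a
# binary mask and the voxel size (illustrative).
def dice(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def volume_ml(mask, voxel_mm3):
    return mask.sum() * voxel_mm3 / 1000.0        # mm^3 -> ml

auto = np.zeros((10, 10, 10), dtype=bool)
auto[2:8, 2:8, 2:8] = True                        # "algorithm" segmentation
manual = np.zeros_like(auto)
manual[3:8, 2:8, 2:8] = True                      # "manual" segmentation
print(dice(auto, manual), volume_ml(auto, voxel_mm3=8.0))
```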

  6. Performance evaluation of block-diagonal preconditioners for the divergence-conforming B-spline discretization of the Stokes system

    KAUST Repository

    Côrtes, A.M.A.; Coutinho, A.L.G.A.; Dalcin, L.; Calo, Victor M.

    2015-01-01

    The recently introduced divergence-conforming B-spline discretizations allow the construction of smooth discrete velocity–pressure pairs for viscous incompressible flows that are at the same time inf-sup stable and pointwise divergence-free. When applied to discretized Stokes equations, these spaces generate a symmetric and indefinite saddle-point linear system. Krylov subspace methods are usually the most efficient procedures to solve such systems. One of such methods, for symmetric systems, is the Minimum Residual Method (MINRES). However, the efficiency and robustness of Krylov subspace methods is closely tied to appropriate preconditioning strategies. For the discrete Stokes system, in particular, block-diagonal strategies provide efficient preconditioners. In this article, we compare the performance of block-diagonal preconditioners for several block choices. We verify how the eigenvalue clustering promoted by the preconditioning strategies affects MINRES convergence. We also compare the number of iterations and wall-clock timings. We conclude that among the building blocks we tested, the strategy with relaxed inner conjugate gradients preconditioned with incomplete Cholesky provided the best results.

  7. Series-NonUniform Rational B-Spline (S-NURBS) model: a geometrical interpolation framework for chaotic data.

    Science.gov (United States)

    Shao, Chenxi; Liu, Qingqing; Wang, Tingting; Yin, Peifeng; Wang, Binghong

    2013-09-01

    Time series are widely exploited to study the innate character of complex chaotic systems. Existing chaotic models are weak in modeling accuracy because they adopt either an error-minimization strategy or an acceptable-error criterion to end the modeling process. Interpolation, by contrast, can be very useful for solving differential equations with a small modeling error, but it is difficult to handle arbitrary-dimensional series. In this paper, geometric theory is used to reduce the modeling error, and a high-precision framework called the Series-NonUniform Rational B-Spline (S-NURBS) model is developed to deal with arbitrary-dimensional series. The capability of the interpolation framework is proved in the validation part. Besides, we verify its reliability by interpolating the Musa dataset. The main improvement of the proposed framework is that the interpolation error can be reduced by properly adjusting the weight series step by step as more information is given. These experiments also demonstrate that studying a physical system from a geometric perspective is feasible.

  8. Numerical solutions of magnetohydrodynamic stability of axisymmetric toroidal plasmas using cubic B-spline finite element method

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, C.Z.

    1988-12-01

    A nonvariational ideal MHD stability code (NOVA) has been developed. In a general flux coordinate (ψ, θ, ζ) system with an arbitrary Jacobian, the NOVA code employs Fourier expansions in the generalized poloidal angle θ and generalized toroidal angle ζ directions, and cubic B-spline finite elements in the radial ψ direction. Extensive comparisons with variational ideal MHD codes show that the NOVA code converges faster and gives more accurate results. An extended version of NOVA was developed to integrate non-Hermitian eigenmode equations due to energetic particles. The set of non-Hermitian integro-differential eigenmode equations is numerically solved by the NOVA-K code. We have studied the problems of the stabilization of ideal MHD internal kink modes by hot particle pressure and the excitation of "fishbone" internal kink modes by resonance with the energetic particle magnetic drift frequency. Comparisons with analytical solutions show that the values of the critical β_h from the analytical theory can be an order of magnitude different from those computed by the NOVA-K code. 24 refs., 11 figs., 1 tab.

  10. Projection of curves on B-spline surfaces using quadratic reparameterization

    KAUST Repository

    Yang, Yijun

    2010-09-01

    Curves on surfaces play an important role in computer aided geometric design. In this paper, we present a hyperbola approximation method based on the quadratic reparameterization of Bézier surfaces, which generates reasonable low degree curves lying completely on the surfaces by using iso-parameter curves of the reparameterized surfaces. The Hausdorff distance between the projected curve and the original curve is controlled under the user-specified distance tolerance. The projected curve is T-G¹ continuous, where T is the user-specified angle tolerance. Examples are given to show the performance of our algorithm. © 2010 Elsevier Inc. All rights reserved.
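The Hausdorff distance used in this record to bound the gap between projected and original curves can be sketched on sampled point sets (the curves below are invented; for dense samplings this discrete distance approximates the continuous one):

```python
import numpy as np

# Symmetric discrete Hausdorff distance between two sampled curves (sketch).
def hausdorff(P, Q):
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

t = np.linspace(0.0, 1.0, 200)
curve = np.column_stack([t, t ** 2])            # original curve samples
approx = np.column_stack([t, t ** 2 + 0.01])    # "projected" curve, offset by 0.01
d = hausdorff(curve, approx)
print(d)   # equals the 0.01 vertical offset here
```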

  11. Pre-evaluation and interactive editing of B-spline and GERBS curves and surfaces

    Science.gov (United States)

    Lakså, Arne

    2017-12-01

    Interactive computer-based geometry editing is very useful for designers and artists. Our goal has been to develop tools for geometry editing that increase the capacity for creative design. When we edit geometry interactively, we want to see the change happen gradually and smoothly on the screen. Pre-evaluation is a tool for increasing the speed of the graphics when performing interactive affine operations on control points and control surfaces. It then becomes possible to add details to surfaces and to change shape in a smooth and continuous way. We use pre-evaluation on basis functions, on blending functions and on local surfaces. Pre-evaluation can be made hierarchically and is thus useful for local refinements. Sampling and plotting of curves, surfaces and volumes can today be handled by the GPU, and it is therefore important to have a structured organization and updating system to make interactive editing as smooth and user-friendly as possible. In the following, we show a structure for pre-evaluation and an optimal organisation of the computation, and we show the effect of implementing both of these techniques.
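A simplified reading of the pre-evaluation idea: sample the B-spline basis once into a matrix, so that every interactive edit of the control points only costs a matrix-vector product. The knot vector, control polygon, and sample count below are assumptions for illustration:

```python
import numpy as np
from scipy.interpolate import BSpline

# Pre-evaluate the cubic B-spline basis at screen-resolution parameters.
knots = np.array([0., 0., 0., 0., 1., 2., 3., 3., 3., 3.])
degree = 3
u = np.linspace(0.0, 3.0, 200)

basis = BSpline.design_matrix(u, knots, degree).toarray()   # done once

ctrl = np.array([[0., 0.], [1., 2.], [2., 3.], [3., 3.], [4., 1.], [5., 0.]])
curve = basis @ ctrl            # initial curve for display
ctrl[2] += [0.0, 0.5]           # user drags one control point
curve = basis @ ctrl            # cheap redraw: no basis re-evaluation needed
print(curve.shape)
```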

  12. An innovation on high-grade CNC machines tools for B-spline curve method of high-speed interpolation arithmetic

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

    2017-04-01

    A novel high-speed B-spline curve interpolation algorithm for high-grade CNC machine tools is introduced. In existing high-grade CNC systems, handling the data points of the curve is cumbersome and the control precision is limited. To address these problems, the proposed method was simulated with specific examples in MATLAB 7.0. The results show that the interpolation error is significantly reduced, the control precision is markedly improved, and the real-time requirements of high-speed, high-accuracy interpolation are satisfied.

  13. A hybrid biomechanical intensity based deformable image registration of lung 4DCT

    International Nuclear Information System (INIS)

    Samavati, Navid; Velec, Michael; Brock, Kristy

    2015-01-01

    Deformable image registration (DIR) has been extensively studied over the past two decades due to its essential role in many image-guided interventions (IGI). IGI demands a highly accurate registration that maintains its accuracy across the entire region of interest. This work evaluates the improvement in accuracy and consistency obtained by refining the results of Morfeus, a biomechanical model-based DIR algorithm. A hybrid DIR algorithm is proposed, consisting of the biomechanical model-based DIR algorithm followed by a refinement step based on a B-spline intensity-based algorithm. Inhale and exhale reconstructions of four-dimensional computed tomography (4DCT) lung images from 31 patients were initially registered using the biomechanical DIR by modeling the contact surface between the lungs and the chest cavity. The resulting deformations were then refined using the intensity-based algorithm to reduce any residual uncertainties. Important parameters in the intensity-based algorithm, including grid spacing, number of pyramids, and regularization coefficient, were optimized on 10 randomly chosen patients (out of 31). Target registration error (TRE) was calculated by measuring the Euclidean distance of common anatomical points on both images after registration. For each patient a minimum of 30 points/lung was used. Grid spacing of 8 mm, 5 levels of grid pyramids, and a regularization coefficient of 3.0 were found to provide optimal results on the 10 randomly chosen patients. Over the entire patient population (n = 31), the hybrid method resulted in mean ± SD (90th percentile) TRE of 1.5 ± 1.4 (2.9) mm, compared to 3.1 ± 1.9 (5.6) mm using biomechanical DIR and 2.6 ± 2.5 (6.1) mm using intensity-based DIR alone. The proposed hybrid biomechanical intensity-based algorithm is a promising DIR technique which could be used in various IGI procedures. The current investigation shows the efficacy of this approach for the registration of 4DCT images of the lungs with an average accuracy of 1.5 mm.
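The TRE metric used throughout this record is simply the Euclidean distance between corresponding landmarks after registration; a sketch with invented coordinates:

```python
import numpy as np

# Target registration error (TRE): per-landmark Euclidean distance, in mm.
def tre(fixed_pts, mapped_pts):
    return np.linalg.norm(fixed_pts - mapped_pts, axis=1)

fixed = np.array([[10.0, 20.0, 30.0], [40.0, 50.0, 60.0]])   # landmarks on image A
mapped = np.array([[10.0, 21.0, 30.0], [42.0, 50.0, 60.0]])  # mapped from image B
errors = tre(fixed, mapped)
print(errors.mean(), errors.max())   # 1.5 2.0
```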

  14. New light on the Kr-(4p55s2) Feshbach resonances: high-resolution electron scattering experiments and B-spline R-matrix calculations

    International Nuclear Information System (INIS)

    Hoffmann, T H; Ruf, M-W; Hotop, H; Zatsarinny, O; Bartschat, K; Allan, M

    2010-01-01

    In a joint experimental and theoretical effort, we carried out a detailed study of electron scattering from Kr atoms in the energy range of the low-lying Kr⁻(4p⁵5s²) Feshbach resonances. Absolute angle-differential cross sections for elastic electron scattering were measured over the energy range 9.3-10.3 eV with an energy width of about 13 meV at scattering angles between 10° and 180°. Using several sets of elastic scattering phase shifts, a detailed analysis of the sharp Kr⁻(4p⁵5s² ²P_3/2) resonance was carried out, resulting in a resonance width of Γ_3/2 = 3.6(2) meV. By direct comparison with the position of the Ar⁻(3p⁵4s² ²P_3/2) resonance, the energy for the Kr⁻(4p⁵5s² ²P_3/2) resonance was determined as E_3/2 = 9.489(3) eV. A Fano-type fit of the higher-lying Kr⁻(4p⁵5s² ²P_1/2) resonance yielded the resonance parameters Γ_1/2 = 33(5) meV and E_1/2 = 10.126(4) eV. In order to obtain additional insights, B-spline R-matrix calculations were performed for both the elastic and the inelastic cross sections above the threshold for 4p⁵5s excitation. They provide the total and angle-differential cross sections for excitation of long-lived and short-lived levels of the 4p⁵5s configuration in Kr and branching ratios for the decay of the Kr⁻(4p⁵5s² ²P_1/2) resonance into the three available exit channels. The results are compared with selected experimental data.

  15. Wavelet based free-form deformations for nonrigid registration

    Science.gov (United States)

    Sun, Wei; Niessen, Wiro J.; Klein, Stefan

    2014-03-01

    In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang [1]. This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems [2], but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformations is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.

  16. Steganography based on pixel intensity value decomposition

    Science.gov (United States)

    Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.

    2014-05-01

    This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in 1st Least Significant Bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., 2nd LSB to 8th Most Significant Bit (MSB), the proposed scheme has more capacity than Natural numbers based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect in terms of quality on pixel value when compared to most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
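One of the decompositions evaluated in this record, the Fibonacci scheme, can be sketched via the greedy (Zeckendorf) expansion, which writes each pixel value as a sum of non-consecutive Fibonacci numbers and thus yields more "virtual bit-planes" than the 8 binary ones. This is an illustration of the evaluated baseline, not the paper's proposed 16-plane scheme:

```python
# Greedy Zeckendorf decomposition of a pixel value into Fibonacci "bit-planes".
FIBS = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]   # covers 0..255

def fib_planes(value):
    bits = []
    for f in reversed(FIBS):
        if f <= value:
            bits.append(1)
            value -= f
        else:
            bits.append(0)
    return bits[::-1]          # least-significant Fibonacci plane first

def from_planes(bits):
    return sum(f for f, b in zip(FIBS, bits) if b)

v = 200
bits = fib_planes(v)
print(bits, from_planes(bits) == v)   # lossless round-trip
```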

  17. SU-E-J-95: Towards Optimum Boundary Conditions for Biomechanical Model Based Deformable Registration Using Intensity Based Image Matching for Prostate Correlative Pathology.

    Science.gov (United States)

    Samavati, N; McGrath, D M; Lee, J; van der Kwast, T; Jewett, M; Ménard, C; Pluim, J P W; Brock, K K

    2012-06-01

    Deformable registration of histology to MRI is an essential tool to validate in vivo prostate cancer imaging. However, direct registration of histology to in vivo MR is prone to error due to geometric differences between the tissue sections and the in vivo imaging planes. To increase the accuracy of registration, an ex vivo high resolution MRI is acquired to compensate for the direct registration difficulties. A novel intensity-based deformable registration algorithm based on local variation in image intensities is proposed to register the histology to ex vivo MRI of prostatectomy specimens. Four sets of ex vivo MR and whole mount pathology images from four patients were used to investigate and validate the technique. In addition, 9 synthetically deformed ex vivo MR images were used. The standard deviation in local windows within the images was calculated to generate intermediate images based on both MR and histology. The intermediate images were registered using the Drop package (Munich, Germany). To further increase the accuracy, a final refinement of the registration was performed using Drop with a finer B-spline grid. The registration parameters were tuned to achieve a visually acceptable registration. Magnitude of Differences (MOD) and Angular Error (AE) were used to validate the synthetic data, and the Target Registration Error (TRE) of manually indicated landmarks was used for the clinical data. MOD of 0.6 mm and AE of 8.3 degrees showed the efficacy of using intermediate images, compared to 0.8 mm and 10.0 degrees achieved with Drop without the intermediate images. The average mean±std TRE among the four patients was 1.0±0.6 mm using the proposed method compared to 1.6±1.1 mm using Elastix (Utrecht, The Netherlands). An intensity-based deformable registration algorithm which uses intermediate images was evaluated on prostatectomy specimens and synthetically deformed clinical data, indicating improvement in overall accuracy and robustness. OICR, Terry Fox

  18. Estimating nonrigid motion from inconsistent intensity with robust shape features

    International Nuclear Information System (INIS)

    Liu, Wenyang; Ruan, Dan

    2013-01-01

    Purpose: To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. Methods: Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. Results: To establish the working principle, realistic 2D and 3D images were subject to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and multimodality demons (MAE = 10.07 mm, SD = 4.03 mm). Applying the proposed method on a real MR image sequence also provided

  19. Estimation of the term structure of interest rates in Colombia by means of the cubic B-spline function method.

    Directory of Open Access Journals (Sweden)

    Diego Mauricio Vasquez E.

    2010-05-01

    Full Text Available This paper presents the description and the results of estimating the term structure of interest rates in Colombia using the cubic B-spline function method. In addition, the results obtained with this methodology are compared with those reported by Arango, Melo and Vásquez (2002) for the Nelson-Siegel method and for the method of the Bolsa de Valores de Colombia (Colombian Stock Exchange). The performance of the cubic B-spline estimation method is observed to be similar to that of Nelson-Siegel, and both methods outperform that of the Bolsa de Valores de Colombia.

  20. TU-G-BRA-05: Predicting Volume Change of the Tumor and Critical Structures Throughout Radiation Therapy by CT-CBCT Registration with Local Intensity Correction

    Energy Technology Data Exchange (ETDEWEB)

    Park, S; Robinson, A; Kiess, A; Quon, H; Wong, J; Lee, J [Johns Hopkins University, Baltimore, MD (United States); Plishker, W [IGI Technologies Inc., College Park, MD (United States); Shekhar, R [IGI Technologies Inc., College Park, MD (United States); Children’s National Medical Center, Washington, D.C. (United States)

    2015-06-15

    Purpose: The purpose of this study is to develop an accurate and effective technique to predict and monitor volume changes of the tumor and organs at risk (OARs) from daily cone-beam CTs (CBCTs). Methods: While CBCT is typically used to minimize the patient setup error, its poor image quality impedes accurate monitoring of daily anatomical changes in radiotherapy. Reconstruction artifacts in CBCT often cause undesirable errors in registration-based contour propagation from the planning CT, a conventional way to estimate anatomical changes. To improve the registration and segmentation accuracy, we developed a new deformable image registration (DIR) that iteratively corrects CBCT intensities using slice-based histogram matching during the registration process. Three popular DIR algorithms (hierarchical B-spline, demons, optical flow) augmented by the intensity correction were implemented on a graphics processing unit for efficient computation, and their performances were evaluated on six head and neck (HN) cancer cases. Four trained scientists manually contoured nodal gross tumor volume (GTV) on the planning CT and every other fraction CBCTs for each case, to which the propagated GTV contours by DIR were compared. The performance was also compared with commercial software, VelocityAI (Varian Medical Systems Inc.). Results: Manual contouring showed significant variations, [-76, +141]% from the mean of all four sets of contours. The volume differences (mean±std in cc) between the average manual segmentation and the four automatic segmentations are 3.70±2.30 (B-spline), 1.25±1.78 (demons), 0.93±1.14 (optical flow), and 4.39±3.86 (VelocityAI). In comparison to the average volume of the manual segmentations, the proposed approach significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the conventional mutual-information-based method (VelocityAI). Conclusion: The proposed CT-CBCT registration with local CBCT intensity correction
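The slice-based intensity correction in this record rests on histogram matching; a generic CDF-matching sketch (the synthetic CBCT/CT slices and the constant 900-unit offset are invented to make the effect easy to verify, and this is not the authors' exact implementation):

```python
import numpy as np

# Generic histogram matching: remap source intensities so their CDF matches
# the reference slice's CDF (sketch of the intensity-correction idea).
def match_histogram(src, ref):
    s_vals, s_idx, s_cnt = np.unique(src.ravel(), return_inverse=True,
                                     return_counts=True)
    r_vals, r_cnt = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size        # source CDF at each unique value
    r_cdf = np.cumsum(r_cnt) / ref.size        # reference CDF
    mapped = np.interp(s_cdf, r_cdf, r_vals)   # CDF matching
    return mapped[s_idx].reshape(src.shape)

rng = np.random.default_rng(2)
cbct = rng.integers(0, 100, (64, 64)).astype(float)   # low, shifted intensities
ct = cbct + 900.0                                     # same anatomy, offset scale
corrected = match_histogram(cbct, ct)
print(corrected.mean() - ct.mean())   # ~0 after matching
```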

  1. Utilization of a hybrid finite-element based registration method to quantify heterogeneous tumor response for adaptive treatment for lung cancer patients

    Science.gov (United States)

    Sharifi, Hoda; Zhang, Hong; Bagher-Ebadian, Hassan; Lu, Wei; Ajlouni, Munther I.; Jin, Jian-Yue; Kong, Feng-Ming (Spring); Chetty, Indrin J.; Zhong, Hualiang


    2018-03-01

    Tumor response to radiation treatment (RT) can be evaluated from changes in metabolic activity between two positron emission tomography (PET) images. Activity changes at individual voxels in pre-treatment PET images (PET1), however, cannot be derived until their associated PET-CT (CT1) images are appropriately registered to during-treatment PET-CT (CT2) images. This study aimed to investigate the feasibility of using deformable image registration (DIR) techniques to quantify radiation-induced metabolic changes on PET images. Five patients with non-small-cell lung cancer (NSCLC) treated with adaptive radiotherapy were considered. PET-CTs were acquired two weeks before RT and 18 fractions after the start of RT. DIR was performed from CT1 to CT2 using B-Spline and diffeomorphic Demons algorithms. The resultant displacements in the tumor region were then corrected using a hybrid finite element method (FEM). Bitmap masks generated from gross tumor volumes (GTVs) in PET1 were deformed using the four different displacement vector fields (DVFs). The conservation of total lesion glycolysis (TLG) in GTVs was used as a criterion to evaluate the quality of these registrations. The deformed masks were united to form a large mask which was then partitioned into multiple layers from center to border. The averages of SUV changes over all the layers were 1.0  ±  1.3, 1.0  ±  1.2, 0.8  ±  1.3, 1.1  ±  1.5 for the B-Spline, B-Spline  +  FEM, Demons and Demons  +  FEM algorithms, respectively. TLG changes before and after mapping using B-Spline, Demons, hybrid-B-Spline, and hybrid-Demons registrations were 20.2%, 28.3%, 8.7%, and 2.2% on average, respectively. Compared to image intensity-based DIR algorithms, the hybrid FEM modeling technique is better in preserving TLG and could be useful for evaluation of tumor response for patients with regressing tumors.
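
    The TLG conservation criterion used above reduces to a simple computation: TLG is the mean SUV inside the lesion times the lesion volume, and the percent change across a registration scores the mapping (a minimal sketch; array shapes and units are assumptions):

```python
import numpy as np

def total_lesion_glycolysis(suv, mask, voxel_volume_cc):
    """TLG = mean SUV inside the lesion x lesion volume (SUV.cc)."""
    return suv[mask].mean() * mask.sum() * voxel_volume_cc

def tlg_change_percent(suv_fixed, mask_fixed, suv_moving, mask_deformed, voxel_cc):
    """Percent TLG change across a registration; smaller |change| indicates
    better TLG conservation, the criterion used in the paper."""
    tlg1 = total_lesion_glycolysis(suv_fixed, mask_fixed, voxel_cc)
    tlg2 = total_lesion_glycolysis(suv_moving, mask_deformed, voxel_cc)
    return 100.0 * (tlg2 - tlg1) / tlg1
```
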

  2. OHBM 2017: Practical intensity based meta-analysis

    OpenAIRE

    Maumet, Camille

    2017-01-01

    "Practical intensity-based meta-analysis" slides from my talk in the OHBM 2017 educational talk on Neuroimaging meta-analysis. http://www.humanbrainmapping.org/files/2017/ED Courses/Neuroimaging Meta-Analysis.pdf

  3. Evaluation of interpolation effects on upsampling and accuracy of cost functions-based optimized automatic image registration.

    Science.gov (United States)

    Mahmoudzadeh, Amir Pasha; Kashou, Nasser H

    2013-01-01

    Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed Sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal to noise, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histogram were used for qualitative assessment of the method.
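
    The down/upsample-and-score protocol above can be sketched with SciPy, whose `order` parameter selects the spline interpolation order (0 = nearest neighbour, 1 = trilinear, 3 = cubic B-spline); this mirrors the evaluation idea, not the paper's implementation:

```python
import numpy as np
from scipy import ndimage

def evaluate_interpolation(hr, factor=2, order=3):
    """Downsample then upsample a volume and score the reconstruction
    against the original with MSE and PSNR."""
    lr = ndimage.zoom(hr, 1.0 / factor, order=order, mode="nearest")
    rec = ndimage.zoom(lr, [s / l for s, l in zip(hr.shape, lr.shape)],
                       order=order, mode="nearest")
    mse = float(np.mean((hr - rec) ** 2))
    psnr = float(10 * np.log10(hr.max() ** 2 / mse)) if mse > 0 else float("inf")
    return mse, psnr
```

    On smooth data, higher spline orders should reconstruct more faithfully than nearest-neighbour interpolation, which is the effect the paper quantifies.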

  4. Evaluation of Interpolation Effects on Upsampling and Accuracy of Cost Functions-Based Optimized Automatic Image Registration

    Directory of Open Access Journals (Sweden)

    Amir Pasha Mahmoudzadeh

    2013-01-01

    Full Text Available Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed Sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal to noise, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histogram were used for qualitative assessment of the method.

  5. Measures of Competitive Intensity – Analysis Based on Literature Review

    Directory of Open Access Journals (Sweden)

    Dariusz Kwieciński

    2017-03-01

    Full Text Available Purpose: To systematize the existing approaches and tools used for measuring competitive intensity. Methodology: Systematic literature review along with critical literature review. Findings: Identification of two main approaches to measuring competition intensity: the first pertains to research based on experts' opinions and involves the use of questionnaires (primary sources), while the second is based on structural variables used with a variety of indexes (secondary sources). In addition, variables applied for the purpose of measuring the intensity of competition are divided into structural and behavioural. Research implications: Research implications are two-fold. Firstly, a distinction is made between various types of existing approaches to measuring competitive intensity. Secondly, research is carried out, inter alia, with regard to the actual object of certain measures, as opposed to their object stemming from commonly accepted definitions. Practical implications: The issue of measuring competition intensity occupies a prominent place in the discussion on the effectiveness of inter-organizational relationships. The findings outlined in this paper may help managers to develop/adopt the right approach supporting their strategic decisions. Originality: The paper provides a complex review of the existing methods and measures of competitive intensity. It systematizes recent knowledge about competitive intensity measurements.

  6. Measuring energy efficiency: Is energy intensity a good evidence base?

    International Nuclear Information System (INIS)

    Proskuryakova, L.; Kovalev, A.

    2015-01-01

    Highlights: • Energy intensity measure reflects consumption, not energy efficiency. • Thermodynamic indicators should describe energy efficiency at all levels. • These indicators should have no reference to economic or financial parameters. • A set of energy efficiency indicators should satisfy several basic principles. • There are trade-offs between energy efficiency, power and costs. - Abstract: There is a widespread assumption in energy statistics and econometrics that energy intensity and energy efficiency are equivalent measures of energy performance of economies. The paper points to the discrepancy between the engineering concept of energy efficiency and energy intensity as it is understood in macroeconomic statistics. This twofold discrepancy concerns both definitions (the engineering concept of energy efficiency is thermodynamic, whereas energy intensity includes economic measures) and use. With regard to the latter, the authors conclude that energy intensity can only provide indirect and delayed evidence of the technological and engineering energy efficiency of energy conversion processes, which entails shortcomings for management and policymaking. Therefore, we suggest that subsectoral, sectoral and other levels of energy intensity no longer be treated as aggregates of lower-level energy efficiency. It is suggested that the insufficiency of energy intensity indicators can be compensated for by the introduction of thermodynamic indicators describing energy efficiency at the physical, technological, enterprise, sub-sector, sectoral and national levels without references to any economic or financial parameters. Structured statistical data on thermodynamic efficiency is offered as a better option for identifying breakthrough technologies and technological bottlenecks that constrain efficiency advancements. It is also suggested that macro-level thermodynamic indicators should be based on the thermodynamic first law efficiency and the energy
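
    The discrepancy the paper highlights can be made concrete with two toy indicators: energy intensity can fall simply because GDP grows, while the thermodynamic (first-law) efficiency of the underlying conversion is unchanged (all numbers hypothetical):

```python
def energy_intensity(energy_pj, gdp_billion_usd):
    """Macroeconomic indicator: energy use per unit of GDP (PJ per G$)."""
    return energy_pj / gdp_billion_usd

def first_law_efficiency(useful_energy_out, energy_in):
    """Thermodynamic indicator: useful energy delivered / energy consumed."""
    return useful_energy_out / energy_in

# Same plant, two years apart: no technical change, yet intensity "improves".
eta_y1 = first_law_efficiency(36.0, 100.0)   # 0.36
eta_y2 = first_law_efficiency(36.0, 100.0)   # 0.36, unchanged
ei_y1 = energy_intensity(100.0, 50.0)        # 2.0 PJ/G$
ei_y2 = energy_intensity(100.0, 60.0)        # lower, only because GDP grew
```
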

  7. Optimization of the Upper Surface of Hypersonic Vehicle Based on CFD Analysis

    Science.gov (United States)

    Gao, T. Y.; Cui, K.; Hu, S. C.; Wang, X. P.; Yang, G. W.

    2011-09-01

    For hypersonic vehicles, the aerodynamic performance requirements become more demanding. It is therefore important to optimize the shape of the hypersonic vehicle to meet the project demands, and shape optimization is a key technology for improving vehicle performance. Based on an existing vehicle, the upper surface of a simplified hypersonic vehicle was optimized to obtain a shape that suits the project demand. At the cruising condition, the upper surface was parameterized with the B-Spline curve method, applying an incremental parametric method and local mesh reconstruction. The whole flow field was calculated and the aerodynamic performance of the craft was obtained by computational fluid dynamics (CFD). The vehicle shape was then optimized to achieve the maximum lift-drag ratio at attack angles of 3°, 4° and 5°. The results will provide a reference for practical design.
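
    B-spline curve parameterization of the kind used above evaluates the profile from a small set of control points via de Boor's algorithm; the sketch below is a generic implementation (knot vector and 2D control points are hypothetical, the paper's surface data is not published):

```python
import numpy as np

def find_span(t, knots, degree):
    """Locate the knot span index with knots[span] <= t < knots[span+1]."""
    n = len(knots) - degree - 2          # highest valid span index
    return int(np.clip(np.searchsorted(knots, t, side="right") - 1, degree, n))

def de_boor(span, t, knots, ctrl, degree):
    """Evaluate a B-spline curve at parameter t by de Boor's algorithm."""
    d = [np.asarray(ctrl[j + span - degree], dtype=float)
         for j in range(degree + 1)]
    for r in range(1, degree + 1):
        for j in range(degree, r - 1, -1):
            lo = knots[j + span - degree]
            hi = knots[j + 1 + span - r]
            alpha = (t - lo) / (hi - lo)
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[degree]
```

    With a clamped knot vector the curve interpolates the first and last control points, which is why the parameterization is convenient for fixing leading- and trailing-edge positions while optimizing the interior shape.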

  8. TLS FIELD DATA BASED INTENSITY CORRECTION FOR FOREST ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    J. Heinzel

    2016-06-01

    Full Text Available Terrestrial laser scanning (TLS) is increasingly used for forestry applications. Besides the three dimensional point coordinates, the 'intensity' of the reflected signal plays an important role in forestry and vegetation studies. The benefit of the signal intensity is caused by the wavelength of the laser that is within the near infrared (NIR) for most scanners. The NIR is highly indicative for various vegetation characteristics. However, the intensity as recorded by most terrestrial scanners is distorted by both external and scanner-specific factors. Since details about system-internal alteration of the signal are often unknown to the user, model-driven approaches are impractical. On the other hand, existing data-driven calibration procedures require laborious acquisition of separate reference datasets or areas of homogeneous reflection characteristics from the field data. In order to fill this gap, the present study introduces an approach to correct unwanted intensity variations directly from the point cloud of the field data. The focus is on the variation over range and sensor-specific distortions. Instead of an absolute calibration of the values, a relative correction within the dataset is sufficient for most forestry applications. Finally, a method similar to time series detrending is presented, with the only precondition being a relatively even distribution of forest objects and materials over range. Our test data covers 50 terrestrial scans captured with a FARO Focus 3D S120 scanner using a laser wavelength of 905 nm. Practical tests demonstrate that our correction method removes range and scanner based alterations of the intensity.
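
    A detrending-style relative correction in the spirit of the paper can be sketched as a polynomial fit of log-intensity against log-range over the whole cloud, divided out afterwards (polynomial degree and normalisation are our choices, not the authors'; the method assumes forest materials are roughly evenly distributed over range):

```python
import numpy as np

def detrend_intensity(ranges, intensities, deg=2):
    """Relative range-correction of TLS intensities by detrending:
    fit the global intensity-vs-range trend in log-log space and
    divide it out, renormalising to the global mean intensity."""
    coeff = np.polyfit(np.log(ranges), np.log(intensities), deg)
    trend = np.exp(np.polyval(coeff, np.log(ranges)))
    return intensities / trend * np.mean(intensities)
```
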

  9. Intensity Based Seismic Hazard Map of Republic of Macedonia

    Science.gov (United States)

    Dojcinovski, Dragi; Dimiskovska, Biserka; Stojmanovska, Marta

    2016-04-01

    The territory of the Republic of Macedonia and the border terrains are among the most seismically active parts of the Balkan Peninsula belonging to the Mediterranean-Trans-Asian seismic belt. The seismological data on the R. Macedonia from the past 16 centuries point to the occurrence of very strong catastrophic earthquakes. The hypocenters of the occurred earthquakes are located above the Mohorovicic discontinuity, most frequently at a depth of 10-20 km. Accurate short-term prognosis of earthquake occurrence, i.e., simultaneous prognosis of the time, place and intensity of their occurrence, is still not possible. The present methods of seismic zoning have advanced to such an extent that, with great probability, they enable efficient protection against earthquake effects. The seismic hazard maps of the Republic of Macedonia are the result of analysis and synthesis of data from seismological, seismotectonic and other corresponding investigations necessary for definition of the expected level of seismic hazard for certain time periods. These should be amended, from time to time, with new data and scientific knowledge. The elaboration of this map does not completely solve all issues related to earthquakes, but it provides basic empirical data necessary for updating the existing regulations for construction of engineering structures in seismically active areas, which are governed by legal regulations and technical norms of which the seismic hazard map is a constituent part. The map has been elaborated based on complex seismological and geophysical investigations of the considered area and synthesis of the results from these investigations. There were two phases of elaboration of the map. In the first phase, the map of focal zones characterized by maximum magnitudes of possible earthquakes was elaborated. In the second phase, the intensities of expected earthquakes were computed according to the MCS scale. The map is prognostic, i.e., it provides assessment of the

  10. Deadline based scheduling for data-intensive applications in clouds

    Institute of Scientific and Technical Information of China (English)

    Fu Xiong; Cang Yeliang; Zhu Lipeng; Hu Bin; Deng Song; Wang Dong

    2016-01-01

    Cloud computing emerges as a new computing pattern that can provide elastic services for users around the world. It offers good opportunities to solve large-scale scientific problems with less effort. Application deployment remains an important issue in clouds. Appropriate scheduling mechanisms can shorten the total completion time of an application and therefore improve the quality of service (QoS) for cloud users. Unlike current scheduling algorithms, which mostly focus on single task allocation, we propose a deadline-based scheduling approach for data-intensive applications in clouds. It does not simply treat the total completion time of an application as the sum of all its subtasks' completion times. Not only is the computation capacity of the virtual machine (VM) considered, but the communication delay and data access latencies are also taken into account. Simulations show that our proposed approach has a decided advantage over the two other algorithms.
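
    A minimal deadline-aware placement in the same spirit can be sketched as earliest-deadline-first over VMs whose finish-time estimate includes both compute time and data-access latency (the cost model, field names and numbers are hypothetical, not the paper's algorithm):

```python
def schedule(tasks, vms):
    """Greedy deadline-aware placement: visit tasks in deadline order (EDF)
    and give each to the VM that finishes it earliest, where the finish
    time counts compute time (work/speed) plus data transfer (data/bw).

    tasks: list of (name, work, data_mb, deadline) tuples.
    vms:   list of dicts with 'speed' (work/s), 'bw' (MB/s), 'free_at' (s).
    Returns (plan, missed), each a list of (name, finish_time)."""
    plan, missed = [], []
    for name, work, data, deadline in sorted(tasks, key=lambda t: t[3]):
        best = min(vms, key=lambda v: v["free_at"] + work / v["speed"]
                                      + data / v["bw"])
        finish = best["free_at"] + work / best["speed"] + data / best["bw"]
        best["free_at"] = finish
        (plan if finish <= deadline else missed).append((name, finish))
    return plan, missed
```
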

  11. Colour based fire detection method with temporal intensity variation filtration

    Science.gov (United States)

    Trambitckii, K.; Anding, K.; Musalimov, V.; Linß, G.

    2015-02-01

    The development of video and computing technologies and of computer vision makes automatic fire detection from video information possible. Within this project, different algorithms were implemented to find a more efficient way of detecting fire. This article describes a colour-based fire detection algorithm. Colour information alone, however, is not enough to detect fire properly, mainly because the scene may contain many objects with colours similar to fire. The temporal intensity variation of pixels is used to separate such objects from the fire; these variations are averaged over a series of several frames. The algorithm performs robustly and was implemented as a computer program using the OpenCV library.
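
    The colour rule plus temporal-variation filter can be sketched in NumPy as follows (the RGB thresholds and the variation threshold are illustrative assumptions, not the paper's values):

```python
import numpy as np

def fire_mask(frames):
    """Flag fire pixels in a (T, H, W, 3) uint8 RGB frame sequence.

    A pixel is kept only if it matches a red-dominant colour rule in the
    last frame AND its intensity varies strongly over the series, so that
    static fire-coloured objects are filtered out."""
    f = frames.astype(float)
    r, g, b = f[-1, ..., 0], f[-1, ..., 1], f[-1, ..., 2]
    colour = (r > 180) & (r >= g) & (g > b)   # hypothetical RGB rule
    intensity = f.mean(axis=3)                # (T, H, W) per-pixel intensity
    variation = intensity.std(axis=0)         # temporal std per pixel
    return colour & (variation > 10.0)        # hypothetical threshold
```
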

  12. Colour based fire detection method with temporal intensity variation filtration

    International Nuclear Information System (INIS)

    Trambitckii, K; Musalimov, V; Anding, K; Linß, G

    2015-01-01

    The development of video and computing technologies and of computer vision makes automatic fire detection from video information possible. Within this project, different algorithms were implemented to find a more efficient way of detecting fire. This article describes a colour-based fire detection algorithm. Colour information alone, however, is not enough to detect fire properly, mainly because the scene may contain many objects with colours similar to fire. The temporal intensity variation of pixels is used to separate such objects from the fire; these variations are averaged over a series of several frames. The algorithm performs robustly and was implemented as a computer program using the OpenCV library.

  13. Tissue-Based MRI Intensity Standardization: Application to Multicentric Datasets

    Directory of Open Access Journals (Sweden)

    Nicolas Robitaille

    2012-01-01

    Full Text Available Intensity standardization in MRI aims at correcting scanner-dependent intensity variations. Existing simple and robust techniques aim at matching the input image histogram onto a standard, while we think that standardization should aim at matching spatially corresponding tissue intensities. In this study, we present a novel automatic technique, called STI for STandardization of Intensities, which not only shares the simplicity and robustness of histogram-matching techniques, but also incorporates tissue spatial intensity information. STI uses joint intensity histograms to determine intensity correspondence in each tissue between the input and standard images. We compared STI to an existing histogram-matching technique on two multicentric datasets, Pilot E-ADNI and ADNI, by measuring the intensity error with respect to the standard image after performing nonlinear registration. The Pilot E-ADNI dataset consisted of 3 subjects each scanned in 7 different sites. The ADNI dataset consisted of 795 subjects scanned in more than 50 different sites. STI was superior to the histogram-matching technique, showing significantly better intensity matching for the brain white matter with respect to the standard image.
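
    The joint-histogram idea can be sketched as a lookup table built from a registered input/standard pair: for each input-intensity bin, take the standard intensity where the joint histogram peaks (a simplified reading of the STI concept applied to a whole image rather than per tissue; bin count is an assumption):

```python
import numpy as np

def joint_histogram_mapping(inp, std, bins=64):
    """Derive an intensity LUT from the joint histogram of a registered
    input/standard image pair and apply it to the input."""
    h, x_edges, y_edges = np.histogram2d(inp.ravel(), std.ravel(), bins=bins)
    y_centres = 0.5 * (y_edges[:-1] + y_edges[1:])
    lut = y_centres[np.argmax(h, axis=1)]              # per input-bin target
    idx = np.clip(np.digitize(inp, x_edges[1:-1]), 0, bins - 1)
    return lut[idx]
```

    In STI this correspondence is established per tissue class, which is what distinguishes it from plain histogram matching.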

  14. Intensive Intervention Practice Guide: School-Based Functional Analysis

    Science.gov (United States)

    Pennington, Brittany; Pokorski, Elizabeth A.; Kumm, Skip; Sterrett, Brittany I.

    2017-01-01

    The National Center for Leadership in Intensive Intervention (NCLII), a consortium funded by the Office of Special Education Programs (OSEP), prepares special education leaders to become experts in research on intensive intervention for students with disabilities who have persistent and severe academic (e.g., reading and math) and behavioral…

  15. elastix: a toolbox for intensity-based medical image registration.

    Science.gov (United States)

    Klein, Stefan; Staring, Marius; Murphy, Keelin; Viergever, Max A; Pluim, Josien P W

    2010-01-01

    Medical image registration is an important task in medical image processing. It refers to the process of aligning data sets, possibly from different modalities (e.g., magnetic resonance and computed tomography), different time points (e.g., follow-up scans), and/or different subjects (in case of population studies). A large number of methods for image registration are described in the literature. Unfortunately, there is not one method that works for all applications. We have therefore developed elastix, a publicly available computer program for intensity-based medical image registration. The software consists of a collection of algorithms that are commonly used to solve medical image registration problems. The modular design of elastix allows the user to quickly configure, test, and compare different registration methods for a specific application. The command-line interface enables automated processing of large numbers of data sets, by means of scripting. The usage of elastix for comparing different registration methods is illustrated with three example experiments, in which individual components of the registration method are varied.

  16. Damage Detection for Historical Architectures Based on Tls Intensity Data

    Science.gov (United States)

    Li, Q.; Cheng, X.

    2018-04-01

    TLS (Terrestrial Laser Scanner) has long been preferred in the cultural heritage field for 3D documentation of historical sites thanks to its ability to acquire the geometric information without any physical contact. Besides the geometric information, most TLS systems also record the intensity information, which is considered as an important measurement of the spectral property of the scanned surface. Recent studies have shown the potential of using intensity for damage detection. However, the original intensity is affected by scanning geometry such as range and incidence angle and other factors, thus making the results less accurate. Therefore, in this paper, we present a method to detect certain damage areas using the corrected intensity data. Firstly, two data-driven models have been developed to correct the range and incidence angle effect. Then the corrected intensity is used to generate 2D intensity images for classification. After the damage areas are detected, they are re-projected to the 3D point cloud for better visual representation and further investigation. The experiment results indicate the feasibility and validity of the corrected intensity for damage detection.
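
    The paper fits data-driven models; the textbook analytic baseline for the same two effects divides out the 1/R² range falloff and the Lambertian cosine term (a sketch of the common approximation, not the authors' fitted models):

```python
import numpy as np

def correct_intensity(i_raw, range_m, incidence_rad, ref_range=10.0):
    """Analytic range/incidence-angle correction for TLS intensity:
    I_corr = I_raw * (R / R_ref)^2 / cos(theta)."""
    return i_raw * (range_m / ref_range) ** 2 / np.cos(incidence_rad)
```
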

  17. LIGHT INTENSITY INFLUENCE ON STRONTIUM TITANATE BASED PHOTO-ELECTROCHEMICAL CELLS

    Directory of Open Access Journals (Sweden)

    D. Hertkorn

    2017-07-01

    Full Text Available The influence of light intensity on photo-electrochemical cells (PECs consisting of an n-type strontium titanate (SrTiO₃ photoanode and nickel cathode in potassium hydroxide electrolyte is studied. The band levels of an electrolyte-metal-semiconductor-electrolyte system are presented and the effect of different light intensities on the energy levels is investigated. Photocurrent density, quantum efficiency, and open circuit potential measurements are performed on the processed PECs under different light intensities (375 nm. It is demonstrated that a threshold value of the light intensity has to be reached in order to obtain positive photo activity and that beyond this value the performance remains nearly constant.

  18. Ultrasound-based guidance of intensity-modulated radiation therapy

    International Nuclear Information System (INIS)

    Fung, Albert Y.C.; Ayyangar, Komanduri M.; Djajaputra, David; Nehru, Ramasamy M.; Enke, Charles A.

    2006-01-01

    In ultrasound-guided intensity-modulated radiation therapy (IMRT) of prostate cancer, ultrasound imaging ascertains the anatomical position of patients during x-ray therapy delivery. The ultrasound transducers are made of piezoelectric ceramics. The same crystal is used for both ultrasound production and reception. Three-dimensional (3D) ultrasound devices capture and correlate series of 2-dimensional (2D) B-mode images. The transducers are often arranged in a convex array for focusing. Lower frequency reaches greater depth, but results in low resolution. For a clear image, gel is usually applied between the probe and the skin contact surface. For prostate positioning, axial and sagittal scans are performed, and the volume contours from computed tomography (CT) planning are superimposed on the ultrasound images obtained before radiation delivery at the linear accelerator. The planning volumes are then overlaid on the ultrasound images and adjusted until they match. The computer automatically deduces the offset necessary to move the patient so that the treatment area is in the correct location. The couch is translated as needed. The currently available commercial equipment can attain a positional accuracy of 1-2 mm. Commercial manufacturer designs differ in the detection of probe coordinates relative to the isocenter. Some use a position-sensing robotic arm, while others have infrared light-emitting diodes or pattern-recognition software with charge-coupled device cameras. Commissioning includes testing of image quality and positional accuracy. Ultrasound is mainly used in prostate positioning. Data for 7825 daily fractions of 234 prostate patients indicated average 3D inter-fractional displacement of about 7.8 mm. There was no perceivable trend of shift over time. Scatter plots showed slight prevalence toward superior-posterior directions. Uncertainties of ultrasound guidance included tissue inhomogeneities, speckle noise, probe pressure, and inter

  19. Study of helium and beryllium atoms with strong and short laser field; Etude des atomes d'helium et de beryllium en champ laser intense et bref

    Energy Technology Data Exchange (ETDEWEB)

    Laulan, St

    2004-09-01

    We present a theoretical study of the interaction between a two-active-electron atom and an intense (10^14 to 10^15 W/cm^2) and ultrashort (from a few 10^-15 to a few 10^-18 s) laser field. In the first part, we describe the current experimental techniques able to produce coherent radiation of high power in the UV-XUV regime with femtosecond time duration. A theoretical model of a laser pulse with such characteristics is defined. Then, we develop a numerical approach based on B-spline functions to describe the atomic structure of the two-active-electron system. A spectral non-perturbative method is proposed to solve the time-dependent Schroedinger equation. We focus our attention on the description of the atomic double continuum states. Finally, we present results on the double ionization of helium and beryllium atoms with intense and short laser fields. In particular, we present total cross section calculations and ejected-electron energy distributions in the double continuum after one- and two-photon absorption. (author)

  20. Evidence based exercise - clinical benefits of high intensity interval training.

    Science.gov (United States)

    Shiraev, Tim; Barclay, Gabriella

    2012-12-01

    Aerobic exercise has a marked impact on cardiovascular disease risk. Benefits include improved serum lipid profiles, blood pressure and inflammatory markers as well as reduced risk of stroke, acute coronary syndrome and overall cardiovascular mortality. Most exercise programs prescribed for fat reduction involve continuous, moderate aerobic exercise, as per Australian Heart Foundation clinical guidelines. This article describes the benefits of exercise for patients with cardiovascular and metabolic disease and details the numerous benefits of high intensity interval training (HIIT) in particular. Aerobic exercise has numerous benefits for high-risk populations and such benefits, especially weight loss, are amplified with HIIT. High intensity interval training involves repeatedly exercising at a high intensity for 30 seconds to several minutes, separated by 1-5 minutes of recovery (either no or low intensity exercise). HIIT is associated with increased patient compliance and improved cardiovascular and metabolic outcomes and is suitable for implementation in both healthy and 'at risk' populations. Importantly, as some types of exercise are contraindicated in certain patient populations and HIIT is a complex concept for those unfamiliar with exercise, some patients may require specific assessment or instruction before commencing a HIIT program.

  1. Electron beam based transversal profile measurements of intense ion beams

    International Nuclear Information System (INIS)

    El Moussati, Said

    2014-01-01

    A non-invasive diagnostic method for the experimental determination of the transverse profile of an intense ion beam has been developed and investigated, theoretically as well as experimentally, within the framework of the present work. The method is based on the deflection of electrons passing through the electromagnetic field of an ion beam. To achieve this, an electron beam with a specifically prepared transversal profile is employed. This distinguishes the method from similar ones which use thin electron beams for scanning the electromagnetic field [Roy et al. 2005; Blockland10]. The diagnostic method presented in this work will subsequently be called 'Electron-Beam-Imaging' (EBI). First of all, the influence of the electromagnetic field of the ion beam on the electrons was analyzed theoretically. It was found that the magnetic field causes only a shift of the electrons along the ion beam axis, while the electric field causes a shift only in the plane transverse to the ion beam. Moreover, in the non-relativistic case the magnetic force is significantly smaller than the Coulomb force, so the electrons merely experience a shift due to the magnetic field and continue to move parallel to their initial trajectory. Under the influence of the electric field, the electrons move away from the ion beam axis, and the resulting trajectory shows a specific angle with respect to the original direction. This deflection angle depends practically only on the electric field of the ion beam; the magnetic field has therefore been neglected when analysing the experimental data. The theoretical model provides a relationship between the deflection angle of the electrons and the charge distribution in the cross section of the ion beam. The model, however, can only be applied for small deflection angles. This implies a relationship between the line-charge density of the ion beam and the initial kinetic energy of the electrons. Numerical investigations have been carried out to clarify the
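
    For an electron passing outside a long line charge, the impulse approximation gives tan(theta) = e·lambda / (4·eps0·E_kin), independent of the impact parameter; this is a simplified form of the relation the thesis exploits (non-relativistic, small angles only; inside the beam the enclosed charge matters, and all numbers below are hypothetical):

```python
import numpy as np

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def deflection_angle(line_charge, e_kin_ev):
    """Deflection of a probe electron passing a line charge (C/m).
    tan(theta) = e*lambda / (4*eps0*E_kin); with E_kin given in eV the
    elementary charges cancel, leaving lambda / (4*eps0*E_eV)."""
    return float(np.arctan(line_charge / (4.0 * EPS0 * e_kin_ev)))

def line_charge_from_angle(theta, e_kin_ev):
    """Invert the relation to estimate the beam's line-charge density."""
    return float(np.tan(theta) * 4.0 * EPS0 * e_kin_ev)
```
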

  2. Position Detection Based on Intensities of Reflected Infrared Light

    DEFF Research Database (Denmark)

    Christensen, Henrik Vie

    measurements of reflected light intensities, and includes easy calibration. The method for reconstructing 3D positions has been implemented in a prototype of a “non-Touch Screen” for a computer, so that the user can control a cursor in three dimensions by moving his/her hand in front of the computer screen. The 2D position reconstruction method is implemented in a prototype of a human-machine interface (HMI) for an electrically powered wheelchair, such that the wheelchair user can control the movement of the wheelchair by head movements. Both the “non-Touch Screen” prototype and the wheelchair HMI have been tested

  3. SOA based intensive support system for space radiation data

    International Nuclear Information System (INIS)

    Goranova, M.; Shishedjiev, B.; Genova, S.; Semkova, J.

    2013-01-01

    Modern data-intensive science involves heterogeneous, structured data sets in sophisticated data formats. Scientists need access to distributed computing and data sources, support for remote access to expensive, multinational specialized instruments, and effective software for data analysis, querying, access and visualization. The interaction between computer science and the natural sciences and engineering becomes essential for the automation of data manipulation. The key solution uses Service-Oriented Architecture (SOA) in the field of science and Grid computing. The goal of this paper is managing the scientific data received by the Lyulin-5 particle telescope used in the MATROSHKA-R experiment performed at the International Space Station (ISS). The dynamics of radiation characteristics and their dependency on time and orbital parameters have been established. The experiment supports accurate estimation of the impact of space radiation on human health in long-duration manned missions

  4. Optical nonclassicality test based on third-order intensity correlations

    Science.gov (United States)

    Rigovacca, L.; Kolthammer, W. S.; Di Franco, C.; Kim, M. S.

    2018-03-01

    We develop a nonclassicality criterion for the interference of three delayed, but otherwise identical, light fields in a three-mode Bell interferometer. We do so by comparing the prediction of quantum mechanics with those of a classical framework in which independent sources emit electric fields with random phases. In particular, we evaluate third-order correlations among output intensities as a function of the delays, and show how the presence of a correlation revival for small delays cannot be explained by the classical model of light. The observation of a revival is thus a nonclassicality signature, which can be achieved only by sources with a photon-number statistics that is highly sub-Poissonian. Our analysis provides strong evidence for the nonclassicality of the experiment discussed by Menssen et al. [Phys. Rev. Lett. 118, 153603 (2017), 10.1103/PhysRevLett.118.153603], and shows how a collective "triad" phase affects the interference of any three or more light fields, irrespective of their quantum or classical character.

  5. An intense neutron generator based on a proton accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Bartholomew, G A; Milton, J C.D.; Vogt, E W

    1964-07-01

    A study has been made of the demand for a neutron facility with a thermal flux of {>=} 10{sup 16} n cm{sup -2} sec{sup -1} and of possible methods of producing such fluxes with existing or presently developing technology. Experimental projects proposed by neutron users requiring high fluxes call for neutrons of all energies from thermal to 100 MeV with both continuous-wave and pulsed output. Consideration of the heat generated in the source per useful neutron liberated shows that the (p,xn) reaction with 400-1000 MeV bombarding energies and heavy element targets (e.g. bismuth, lead) is capable of greater specific source strength than other possible methods realizable within the time scale. A preliminary parameter optimization, carried through for the accelerator currently promising the greatest economy (the separated orbit cyclotron or S.O.C.), reveals that a facility delivering a proton beam of about 65 mA at about 1 BeV would satisfy the flux requirement with a neutron cost significantly more favourable than that projected for a high flux reactor. It is suggested that a proton storage ring providing post-acceleration pulsing of the proton beam should be developed for the facility. With this elaboration, and by taking advantage of the intrinsic microscopic pulse structure provided by the radio frequency duty cycle, a very versatile source may be devised, capable of producing multiple beams of continuous and pulsed neutrons with a wide range of energies and pulse widths. The source promises to be of great value for high flux irradiations and as a pilot facility for advanced reactor technology. The proposed proton accelerator also constitutes a meson source capable of producing beams of {pi} and {mu} mesons and of neutrinos orders of magnitude more intense than those of any accelerator presently in use. These beams, which can be produced simultaneously with the neutron beams, open vast areas of new research in fundamental nuclear structure, elementary particle physics

  6. An intense neutron generator based on a proton accelerator

    International Nuclear Information System (INIS)

    Bartholomew, G.A.; Milton, J.C.D.; Vogt, E.W.

    1964-01-01

    A study has been made of the demand for a neutron facility with a thermal flux of ≥ 10^16 n cm^-2 sec^-1 and of possible methods of producing such fluxes with existing or presently developing technology. Experimental projects proposed by neutron users requiring high fluxes call for neutrons of all energies from thermal to 100 MeV with both continuous-wave and pulsed output. Consideration of the heat generated in the source per useful neutron liberated shows that the (p,xn) reaction with 400-1000 MeV bombarding energies and heavy element targets (e.g. bismuth, lead) is capable of greater specific source strength than other possible methods realizable within the time scale. A preliminary parameter optimization, carried through for the accelerator currently promising the greatest economy (the separated orbit cyclotron or S.O.C.), reveals that a facility delivering a proton beam of about 65 mA at about 1 BeV would satisfy the flux requirement with a neutron cost significantly more favourable than that projected for a high flux reactor. It is suggested that a proton storage ring providing post-acceleration pulsing of the proton beam should be developed for the facility. With this elaboration, and by taking advantage of the intrinsic microscopic pulse structure provided by the radio frequency duty cycle, a very versatile source may be devised, capable of producing multiple beams of continuous and pulsed neutrons with a wide range of energies and pulse widths. The source promises to be of great value for high flux irradiations and as a pilot facility for advanced reactor technology. The proposed proton accelerator also constitutes a meson source capable of producing beams of π and μ mesons and of neutrinos orders of magnitude more intense than those of any accelerator presently in use. These beams, which can be produced simultaneously with the neutron beams, open vast areas of new research in fundamental nuclear structure, elementary particle physics, and perhaps also in

  7. Electronic Health Record for Intensive Care based on Usual Windows Based Software.

    Science.gov (United States)

    Reper, Arnaud; Reper, Pascal

    2015-08-01

    In Intensive Care Units, the amount of data to be processed for patient care, the turnover of the patients, and the necessity for reliability and for review processes indicate the use of Patient Data Management Systems (PDMS) and electronic health records (EHR). To respond to the needs of an Intensive Care Unit and not be locked into proprietary software, we developed an EHR based on usual software and components. The software was designed as a client-server architecture running on the Windows operating system and powered by the Access database system. The client software was developed using the Visual Basic interface library. The application offers the users the following functions: capture of medical notes, observations and treatments, nursing charts with administration of medications, scoring systems for classification, and possibilities to encode medical activities for billing processes. Since its deployment in September 2004, the EHR has been used in the care of more than five thousand patients with the expected software reliability, and it has facilitated data management and review processes. Communications with other medical software were not developed from the start and are realized through a communication engine with basic functionalities. Further upgrades of the system will include multi-platform support, use of a typed language with static analysis, and a configurable interface. The developed system, based on usual software components, was able to respond to the medical needs of the local ICU environment. The use of Windows for development allowed us to customize the software to the preexisting organization and contributed to the acceptability of the whole system.

  8. Arc-based smoothing of ion beam intensity on targets

    International Nuclear Information System (INIS)

    Friedman, Alex

    2012-01-01

    By manipulating a set of ion beams upstream of a target, it is possible to arrange for a smoother deposition pattern, so as to achieve more uniform illumination of the target. A uniform energy deposition pattern is important for applications including ion-beam-driven high energy density physics and heavy-ion beam-driven inertial fusion energy (“heavy-ion fusion”). Here, we consider an approach to such smoothing that is based on rapidly “wobbling” each of the beams back and forth along a short arc-shaped path, via oscillating fields applied upstream of the final pulse compression. In this technique, uniformity is achieved in the time-averaged sense; this is sufficient provided the beam oscillation timescale is short relative to the hydrodynamic timescale of the target implosion. This work builds on two earlier concepts: elliptical beams applied to a distributed-radiator target [D. A. Callahan and M. Tabak, Phys. Plasmas 7, 2083 (2000)] and beams that are wobbled so as to trace a number of full rotations around a circular or elliptical path [R. C. Arnold et al., Nucl. Instrum. Methods 199, 557 (1982)]. Here, we describe the arc-based smoothing approach and compare it to results obtainable using an elliptical-beam prescription. In particular, we assess the potential of these approaches for minimization of azimuthal asymmetry, for the case of a ring of beams arranged on a cone. It is found that, for small numbers of beams on the ring, the arc-based smoothing approach offers superior uniformity. In contrast with the full-rotation approach, arc-based smoothing remains usable when the geometry precludes wobbling the beams around a full circle, e.g., for the X-target [E. Henestroza, B. G. Logan, and L. J. Perkins, Phys. Plasmas 18, 032702 (2011)] and some classes of distributed-radiator targets.

  9. EME assessments using telstra's mobile base station field intensity plotter

    International Nuclear Information System (INIS)

    Wood, M.; Hurren, S.; Vinnal, E.; Armstrong, M.

    2001-01-01

    Public interest in the potential health issues arising from mobile phone base stations has highlighted the importance of having accessible and easy to understand information on electromagnetic energy (EME) emission levels. A range of groups including residents, community groups, businesses, schools, site owners and local governments are often interested in knowing what EME levels a particular base station is capable of producing and how these compare to safety standards regulated by the Australian Communications Authority (ACA). Performing the complex mathematical calculations to predict these levels requires significant expertise, is time consuming and relatively costly. A new software tool, born out of Telstra's EME Research and Development Program, is set to revolutionise EME assessments by facilitating the provision of this information in a more timely, standardised and cost effective manner. Telstra has taken the decision to commercialise the software, which is a world first, because of expressions of interest from other carriers, EME assessment specialists, government agencies and regulatory organisations both in Australia and overseas. Significantly, the software should improve the flow of easy to understand and accurate information on emission levels from base stations. Copyright (2001) Australasian Radiation Protection Society Inc

  10. Streaming support for data intensive cloud-based sequence analysis.

    Science.gov (United States)

    Issa, Shadi A; Kienzler, Romeo; El-Kalioby, Mohamed; Tonellato, Peter J; Wall, Dennis; Bruggmann, Rémy; Abouelhoda, Mohamed

    2013-01-01

    Cloud computing provides a promising solution to the genomics data deluge problem resulting from the advent of next-generation sequencing (NGS) technology. Based on the concepts of "resources-on-demand" and "pay-as-you-go", scientists with no or limited infrastructure can have access to scalable and cost-effective computational resources. However, the large size of NGS data causes a significant data transfer latency from the client's site to the cloud, which presents a bottleneck for using cloud computing services. In this paper, we provide a streaming-based scheme to overcome this problem, where the NGS data is processed while being transferred to the cloud. Our scheme targets the wide class of NGS data analysis tasks, where the NGS sequences can be processed independently from one another. We also provide the elastream package that supports the use of this scheme with individual analysis programs or with workflow systems. Experiments presented in this paper show that our solution mitigates the effect of data transfer latency and saves both time and cost of computation.
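
    The idea of overlapping transfer and computation described above can be sketched with a generic producer-consumer pipeline; the chunking, the queue, and the `transfer`/`process_stream` names are illustrative placeholders, not part of the elastream package.

```python
# Overlap "upload" and processing: chunks are processed as they arrive,
# instead of waiting for the whole dataset to land (illustrative sketch).
import queue
import threading

def transfer(chunks, buf):
    """Producer: simulates chunks of NGS reads arriving at the cloud."""
    for chunk in chunks:
        buf.put(chunk)
    buf.put(None)  # sentinel: transfer finished

def process_stream(buf, results):
    """Consumer: processes each chunk independently, as the scheme assumes."""
    while True:
        chunk = buf.get()
        if chunk is None:
            break
        results.append(sum(len(read) for read in chunk))  # toy per-chunk work

chunks = [["ACGT", "GGTA"], ["TTTT"], ["ACGTACGT"]]
buf, results = queue.Queue(maxsize=2), []
t = threading.Thread(target=transfer, args=(chunks, buf))
t.start()
process_stream(buf, results)
t.join()
# results holds one value per chunk, computed while later chunks were arriving
```

    The bounded queue is what models the transfer channel: processing starts as soon as the first chunk is available, which is the source of the latency saving the paper reports.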

  11. Streaming Support for Data Intensive Cloud-Based Sequence Analysis

    Directory of Open Access Journals (Sweden)

    Shadi A. Issa

    2013-01-01

    Full Text Available Cloud computing provides a promising solution to the genomics data deluge problem resulting from the advent of next-generation sequencing (NGS) technology. Based on the concepts of “resources-on-demand” and “pay-as-you-go”, scientists with no or limited infrastructure can have access to scalable and cost-effective computational resources. However, the large size of NGS data causes a significant data transfer latency from the client’s site to the cloud, which presents a bottleneck for using cloud computing services. In this paper, we provide a streaming-based scheme to overcome this problem, where the NGS data is processed while being transferred to the cloud. Our scheme targets the wide class of NGS data analysis tasks, where the NGS sequences can be processed independently from one another. We also provide the elastream package that supports the use of this scheme with individual analysis programs or with workflow systems. Experiments presented in this paper show that our solution mitigates the effect of data transfer latency and saves both time and cost of computation.

  12. Streaming Support for Data Intensive Cloud-Based Sequence Analysis

    Science.gov (United States)

    Issa, Shadi A.; Kienzler, Romeo; El-Kalioby, Mohamed; Tonellato, Peter J.; Wall, Dennis; Bruggmann, Rémy; Abouelhoda, Mohamed

    2013-01-01

    Cloud computing provides a promising solution to the genomics data deluge problem resulting from the advent of next-generation sequencing (NGS) technology. Based on the concepts of “resources-on-demand” and “pay-as-you-go”, scientists with no or limited infrastructure can have access to scalable and cost-effective computational resources. However, the large size of NGS data causes a significant data transfer latency from the client's site to the cloud, which presents a bottleneck for using cloud computing services. In this paper, we provide a streaming-based scheme to overcome this problem, where the NGS data is processed while being transferred to the cloud. Our scheme targets the wide class of NGS data analysis tasks, where the NGS sequences can be processed independently from one another. We also provide the elastream package that supports the use of this scheme with individual analysis programs or with workflow systems. Experiments presented in this paper show that our solution mitigates the effect of data transfer latency and saves both time and cost of computation. PMID:23710461

  13. A study of an intensive home-based treatment program and its ...

    African Journals Online (AJOL)

    Adele

    Key words: Intensive home based program, health services, admission, outpatient, inpatient, aged care. Received: ... vate agencies were sometimes also used and funded by APS or more .... result of other structural changes in the system.

  14. Intensity-based hierarchical elastic registration using approximating splines.

    Science.gov (United States)

    Serifovic-Trbalic, Amira; Demirovic, Damir; Cattin, Philippe C

    2014-01-01

    We introduce a new hierarchical approach for elastic medical image registration using approximating splines. In order to obtain the dense deformation field, we employ Gaussian elastic body splines (GEBS) that incorporate anisotropic landmark errors and rotation information. Since the GEBS approach is based on a physical model in the form of analytical solutions of the Navier equation, it can cope very well with both the local and global deformations present in the images by varying the standard deviation of the Gaussian forces. The proposed GEBS approximating model is integrated into the elastic hierarchical image registration framework, which decomposes a nonrigid registration problem into numerous local rigid transformations. The approximating GEBS registration scheme incorporates anisotropic landmark errors as well as rotation information. The anisotropic landmark localization uncertainties can be estimated directly from the image data, in which case they represent the minimal stochastic localization error, i.e., the Cramér-Rao bound. The rotation information of each landmark obtained from the hierarchical procedure is encoded as an additional angular landmark, doubling the number of landmarks in the GEBS model. The modified hierarchical registration using the approximating GEBS model is applied to register 161 image pairs from a digital mammogram database. The obtained results are very encouraging: in terms of the mean-square error, the proposed approach significantly improved all registrations relative to approximating TPS with rotation information. On artificially deformed breast images, the newly proposed method performed better than the state-of-the-art registration algorithm introduced by Rueckert et al. (IEEE Trans Med Imaging 18:712-721, 1999). The average error per breast tissue pixel was less than 2.23 pixels, compared to 2.46 pixels for Rueckert's method.
The proposed hierarchical elastic image registration approach incorporates the GEBS

  15. Realization of spin-dependent splitting with arbitrary intensity patterns based on all-dielectric metasurfaces

    Energy Technology Data Exchange (ETDEWEB)

    Ke, Yougang; Liu, Yachao; He, Yongli; Zhou, Junxiao; Luo, Hailu, E-mail: hailuluo@hnu.edu.cn; Wen, Shuangchun [Laboratory for Spin Photonics, School of Physics and Electronics, Hunan University, Changsha 410082 (China)

    2015-07-27

    We report the realization of spin-dependent splitting with arbitrary intensity patterns based on all-dielectric metasurfaces. Compared with plasmonic metasurfaces, the all-dielectric metasurface exhibits much higher transmission and conversion efficiencies, which makes it possible to achieve spin-dependent splitting with arbitrary intensity patterns. Our findings suggest a way to generate and manipulate spin photons, and thereby offer the possibility of developing spin-based nanophotonic applications.

  16. Micropolar Fluids Using B-spline Divergence Conforming Spaces

    KAUST Repository

    Sarmiento, Adel; Garcia, Daniel; Dalcin, Lisandro; Collier, Nathan; Calo, Victor M.

    2014-01-01

    The divergence-free formulation was used to guarantee an accurate solution of the flow. This formulation was implemented using the PetIGA framework as a basis, using its parallel structures to achieve high scalability. The results of the heat-driven square cavity test case are in good agreement with those reported earlier.

  17. Sparse B-spline polynomial descriptors for human activity recognition

    NARCIS (Netherlands)

    Oikonomopoulos, Antonios; Pantic, Maja; Patras, Ioannis

    2009-01-01

    The extraction and quantization of local image and video descriptors for the subsequent creation of visual codebooks is a technique that has proved very effective for image and video retrieval applications. In this paper we build on this concept and propose a new set of visual descriptors that

  18. The primipara respons based on individual personality type to the intensity of delivery pain

    Directory of Open Access Journals (Sweden)

    Gita N Sari

    2016-01-01

    Full Text Available The delivery period is one of the periods that can cause stress to the mother and the fetus. It is a natural, common phenomenon that some women subjectively experience as a painful process causing simultaneous anxiety and pain. Psychological research has shown that pain is not only connected to the physical response; the culture that teaches and nurtures us also plays an important role in coping with pain. These two factors shape a different personality for each individual. The objective of this study is to determine the primipara response, based on individual personality type, to the intensity of delivery pain. The study used an analytical method with a cross-sectional survey approach. The data were collected prospectively from interviews and questionnaires at the same time to find out the correlation between individual personality type and the intensity of delivery pain, applying inclusion and exclusion criteria, in the period February 1st 2009 to April 30th 2009. The chi-square and Spearman rank tests showed a significant correlation between individual personality type and the intensity of delivery pain (X2 = 8.571; p = 0.014). There is a negative correlation between the extrovert personality type and the intensity of delivery pain (rs = -0.730; p < 0.001), and a positive correlation between the introvert personality type and the intensity of delivery pain (rs = 0.726; p < 0.001). The Mann-Whitney test showed a significant difference between the extrovert and introvert personality types with respect to the intensity of delivery pain (Z M-W = 3.050; p = 0.002). The chi-square test also showed a significant correlation between knowledge, based on individual personality type, and the intensity of delivery pain (X2 = 4.418; p = 0.036). The conclusions of this study are that the more extroverted the personality type, the less intense the delivery pain, and the more introverted the personality type, the more intense the delivery pain.
The

  19. Evaluation of finite difference and FFT-based solutions of the transport of intensity equation.

    Science.gov (United States)

    Zhang, Hongbo; Zhou, Wen-Jing; Liu, Ying; Leber, Donald; Banerjee, Partha; Basunia, Mahmudunnabi; Poon, Ting-Chung

    2018-01-01

    A finite difference method is proposed for solving the transport of intensity equation. Simulation results show that although slower than fast Fourier transform (FFT)-based methods, finite difference methods are able to reconstruct the phase with better accuracy due to relaxed assumptions for solving the transport of intensity equation relative to FFT methods. Finite difference methods are also more flexible than FFT methods in dealing with different boundary conditions.
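
    In the simplest setting (uniform intensity, one dimension, zero boundary phase), the finite-difference approach reduces the transport of intensity equation to a tridiagonal Poisson solve; a minimal sketch under these assumptions, not the paper's 2-D implementation, with illustrative parameter names.

```python
# 1-D finite-difference sketch of the transport-of-intensity equation (TIE).
# For uniform intensity I0 the TIE reduces to a Poisson equation,
#   phi'' = -(k / I0) * dI/dz,
# solved here on a uniform grid with zero (Dirichlet) boundary phase using
# the Thomas algorithm for the resulting tridiagonal system.

def solve_tie_1d(didz, k, i0, dx):
    """Recover interior phase samples from dI/dz samples (list of floats)."""
    n = len(didz)
    d = [-(k / i0) * v * dx * dx for v in didz]   # scaled right-hand side
    # System: phi[i-1] - 2*phi[i] + phi[i+1] = d[i], phi = 0 at both ends.
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = 1.0 / -2.0
    dp[0] = d[0] / -2.0
    for i in range(1, n):                          # forward elimination
        denom = -2.0 - cp[i - 1]
        cp[i] = 1.0 / denom
        dp[i] = (d[i] - dp[i - 1]) / denom
    phi = [0.0] * n
    phi[-1] = dp[-1]
    for i in range(n - 2, -1, -1):                 # back substitution
        phi[i] = dp[i] - cp[i] * phi[i + 1]
    return phi
```

    Because the second-order stencil is exact for quadratics, a constant dI/dz recovers a parabolic phase profile exactly, which makes the sketch easy to check.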

  20. Innovation Intensity and Adoption at the Base of the Pyramid Market: A Study of Household Appliances

    Directory of Open Access Journals (Sweden)

    Vitor Koki da Costa Nogami

    2015-08-01

    Full Text Available The paper analyzes innovation intensity and adoption characteristics in the base-of-the-pyramid market. Innovation intensity is classified as radical or incremental, while innovation adoption is classified as early or tardy. As an empirical approach, a survey was conducted. Data analysis is based on non-parametric statistics. The results indicate that base-of-the-pyramid consumers are characterized by adopting incremental innovations tardily, as pointed out by the literature. Furthermore, it was also observed that women have greater decision-making power in the families of this segment.

  1. A usability evaluation of a SNOMED CT based compositional interface terminology for intensive care

    NARCIS (Netherlands)

    Bakhshi-Raiez, F.; de Keizer, N. F.; Cornet, R.; Dorrepaal, M.; Dongelmans, D.; Jaspers, M. W. M.

    2012-01-01

    Objective: To evaluate the usability of a large compositional interface terminology based on SNOMED CT and the terminology application for registration of the reasons for intensive care admission in a Patient Data Management System. Design: Observational study with user-based usability evaluations

  2. Characteristic Analysis Light Intensity Sensor Based On Plastic Optical Fiber At Various Configuration

    Science.gov (United States)

    Arifin, A.; Lusiana; Yunus, Muhammad; Dewang, Syamsir

    2018-03-01

    This research discusses a light intensity sensor based on plastic optical fiber. The sensor is made of plastic optical fiber of two types, namely with cladding and without cladding. The plastic optical fiber used is a multi-mode step-index type made of polymethyl methacrylate (PMMA). An infrared LED emits light into the plastic optical fiber, and the light is subsequently received by a phototransistor to be converted to an electric voltage. The sensor configuration is made with three models: a straight configuration, a U configuration and a gamma configuration, each with cladding and without cladding. The measured light source is a 30 W high-power LED with a light intensity of 0 to 10 Klux. The measured light intensity affects the propagation of light inside the optical fiber sensor: the greater the intensity of the measured light, the greater the output voltage read on the computer. The results showed that the best optical fiber sensor characteristics were obtained in the U configuration. Sensors with the U configuration without cladding had the best sensitivity and resolution, 0.0307 volts/Klux and 0.0326 Klux respectively. The advantages of this plastic-optical-fiber-based light intensity instrument are a simple, easy-to-build operating system, low cost, and high sensitivity and resolution.
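
    The reported sensitivity is the slope of the voltage-versus-intensity line; it can be estimated with an ordinary least-squares line fit. A minimal sketch on synthetic calibration data — the slope, offset, and sample points below are illustrative, not the measured values.

```python
# Least-squares line fit to estimate sensor sensitivity (slope, V/Klux)
# from intensity/voltage calibration pairs (synthetic data, for illustration).

def fit_line(xs, ys):
    """Return (slope, intercept) of the ordinary least-squares line."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Synthetic calibration: 0..10 Klux, 0.03 V/Klux slope plus a 0.1 V offset
lux = [0, 2, 4, 6, 8, 10]
volts = [0.10 + 0.03 * x for x in lux]
sens, offset = fit_line(lux, volts)
```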

  3. Development and Testing of a Decision Making Based Method to Adjust Automatically the Harrowing Intensity

    Science.gov (United States)

    Rueda-Ayala, Victor; Weis, Martin; Keller, Martina; Andújar, Dionisio; Gerhards, Roland

    2013-01-01

    Harrowing is often used to reduce weed competition, generally using a constant intensity across a whole field. The efficacy of weed harrowing in wheat and barley can be optimized if site-specific conditions of soil, weed infestation and crop growth stage are taken into account. This study aimed to develop and test an algorithm to automatically adjust the harrowing intensity by varying the tine angle and number of passes. The field variability of crop leaf cover, weed density and soil density was acquired with geo-referenced sensors to investigate the harrowing selectivity and crop recovery. Crop leaf cover and weed density were assessed using bispectral cameras through differential image analysis. The draught force of the soil, opposite to the direction of travel, was measured with an electronic load cell sensor connected to a rigid tine mounted in front of the harrow. Optimal harrowing intensity levels were derived in previously implemented experiments, based on the weed control efficacy and yield gain. The assessments of crop leaf cover, weed density and soil density were combined via rules with the aforementioned optimal intensities in a linguistic fuzzy inference system (LFIS). The system was evaluated in two field experiments that compared constant intensities with variable intensities inferred by the system. A higher weed density reduction could be achieved when the harrowing intensity was not kept constant along the cultivated plot. Varying the intensity tended to reduce the crop leaf cover, though slightly improving crop yield. A real-time intensity adjustment with this system is achievable if the cameras are attached at the front and at the rear or sides of the harrow. PMID:23669712

  4. Development and Testing of a Decision Making Based Method to Adjust Automatically the Harrowing Intensity

    Directory of Open Access Journals (Sweden)

    Roland Gerhards

    2013-05-01

    Full Text Available Harrowing is often used to reduce weed competition, generally using a constant intensity across a whole field. The efficacy of weed harrowing in wheat and barley can be optimized if site-specific conditions of soil, weed infestation and crop growth stage are taken into account. This study aimed to develop and test an algorithm to automatically adjust the harrowing intensity by varying the tine angle and number of passes. The field variability of crop leaf cover, weed density and soil density was acquired with geo-referenced sensors to investigate the harrowing selectivity and crop recovery. Crop leaf cover and weed density were assessed using bispectral cameras through differential image analysis. The draught force of the soil, opposite to the direction of travel, was measured with an electronic load cell sensor connected to a rigid tine mounted in front of the harrow. Optimal harrowing intensity levels were derived in previously implemented experiments, based on the weed control efficacy and yield gain. The assessments of crop leaf cover, weed density and soil density were combined via rules with the aforementioned optimal intensities in a linguistic fuzzy inference system (LFIS). The system was evaluated in two field experiments that compared constant intensities with variable intensities inferred by the system. A higher weed density reduction could be achieved when the harrowing intensity was not kept constant along the cultivated plot. Varying the intensity tended to reduce the crop leaf cover, though slightly improving crop yield. A real-time intensity adjustment with this system is achievable if the cameras are attached at the front and at the rear or sides of the harrow.
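
    The rule-based combination of sensor readings into an intensity setting can be sketched with a minimal Mamdani-style fuzzy system; the membership functions, the two rules, and the output levels below are hypothetical stand-ins, not the LFIS of the paper.

```python
# Minimal fuzzy-rule sketch: map normalized weed density and crop leaf
# cover (both in 0..1) to a harrowing intensity (0..1). Memberships and
# rules are hypothetical, for illustration only.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def low(x):  return tri(x, -0.5, 0.0, 0.5)
def high(x): return tri(x, 0.5, 1.0, 1.5)

def harrow_intensity(weed, cover):
    # Rule 1: high weed density AND low crop cover -> aggressive (1.0)
    # Rule 2: low weed density OR high crop cover  -> gentle (0.2)
    w1 = min(high(weed), low(cover))
    w2 = max(low(weed), high(cover))
    if w1 + w2 == 0.0:
        return 0.5  # no rule fires: fall back to a medium setting
    # Weighted-average defuzzification over the rule outputs
    return (w1 * 1.0 + w2 * 0.2) / (w1 + w2)
```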

  5. Intense heavy ion beam-induced temperature effects in carbon-based stripper foils

    International Nuclear Information System (INIS)

    Kupka, K.; Tomut, M.; Simon, P.; Hubert, C.; Romanenko, A.; Lommel, B.; Trautmann, C.

    2015-01-01

    At the future FAIR facility, reliably working solid carbon stripper foils are desired for providing intermediate charge states to SIS18. With the expected high beam intensities, the foils experience enhanced degradation and limited lifetime due to severe radiation damage, stress waves, and thermal effects. This work presents systematic measurements of the temperature of different carbon-based stripper foils (amorphous, diamond-like, and carbon-nanotube based) exposed to 4.8 MeV/u U, Bi, and Au beams of different pulse intensities. Thermal and spectroscopic analyses were performed by means of infrared thermography and Fourier transform infrared spectroscopy. The resulting temperature depends on the foil thickness and strongly increases with increasing pulse intensity and repetition rate. (author)

  6. Spline based iterative phase retrieval algorithm for X-ray differential phase contrast radiography.

    Science.gov (United States)

    Nilchian, Masih; Wang, Zhentian; Thuering, Thomas; Unser, Michael; Stampanoni, Marco

    2015-04-20

    Differential phase contrast imaging using a grating interferometer is a promising alternative to conventional X-ray radiographic methods. It provides the absorption, differential phase and scattering information of the underlying sample simultaneously. Phase retrieval from the differential phase signal is an essential problem for quantitative analysis in medical imaging. In this paper, we formalize phase retrieval as a regularized inverse problem, and propose a novel discretization scheme for the derivative operator based on B-spline calculus. The inverse problem is then solved by a constrained regularized weighted-norm algorithm (CRWN) which exploits the properties of B-splines and ensures a fast implementation. The method is evaluated with a tomographic dataset and differential phase contrast mammography data. We demonstrate that the proposed method is able to produce phase images with enhanced and higher soft tissue contrast compared to the conventional absorption-based approach, which can potentially provide useful information to mammographic investigations.
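
    The B-spline calculus underlying such a derivative discretization rests on the identity that differentiating a uniform B-spline of degree n yields a difference of two degree-(n-1) B-splines. A minimal sketch of that general identity (uniform integer knots), not the paper's full CRWN algorithm:

```python
# Uniform B-spline derivative identity used in B-spline calculus:
#   d/dx B_n(x) = B_{n-1}(x) - B_{n-1}(x - 1),
# so the derivative operator acts on spline coefficients as a finite
# difference. Uniform integer knots; support of B_n is [0, n+1].

def bspline(n, x):
    """Uniform B-spline of degree n via the Cox-de Boor recursion."""
    if n == 0:
        return 1.0 if 0.0 <= x < 1.0 else 0.0
    return (x / n) * bspline(n - 1, x) + ((n + 1 - x) / n) * bspline(n - 1, x - 1)

def dbspline(n, x):
    """Derivative of the degree-n uniform B-spline via the identity above."""
    return bspline(n - 1, x) - bspline(n - 1, x - 1)
```

    The identity can be verified numerically against a central difference of the spline itself, and the shifted splines still sum to one (partition of unity).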

  7. Intensity-based readout of resonant-waveguide grating biosensors: Systems and nanostructures

    Science.gov (United States)

    Paulsen, Moritz; Jahns, Sabrina; Gerken, Martina

    2017-09-01

    Resonant waveguide gratings (RWG) - also called photonic crystal slabs (PCS) - have been established as reliable optical transducers for label-free biochemical assays as well as for cell-based assays. Current readout systems are based on mechanical scanning and spectrometric measurements with system sizes suitable for laboratory equipment. Here, we review recent progress in compact intensity-based readout systems for point-of-care (POC) applications. We briefly introduce PCSs as sensitive optical transducers and introduce different approaches for intensity-based readout systems. Photometric measurements have been realized with a simple combination of a light source and a photodetector. Recently a 96-channel, intensity-based readout system for both biochemical interaction analyses as well as cellular assays was presented employing the intensity change of a near cut-off mode. As an alternative for multiparametric detection, a camera system for imaging detection has been implemented. A portable, camera-based system of size 13 cm × 4.9 cm × 3.5 cm with six detection areas on an RWG surface area of 11 mm × 7 mm has been demonstrated for the parallel detection of six protein binding kinetics. The signal-to-noise ratio of this system corresponds to a limit of detection of 168 M (24 ng/ml). To further improve the signal-to-noise ratio advanced nanostructure designs are investigated for RWGs. Here, results on multiperiodic and deterministic aperiodic nanostructures are presented. These advanced nanostructures allow for the design of the number and wavelengths of the RWG resonances. In the context of intensity-based readout systems they are particularly interesting for the realization of multi-LED systems. These recent trends suggest that compact point-of-care systems employing disposable test chips with RWG functional areas may reach market in the near future.

  8. Intensive care nurses' perceptions of simulation-based team training for building patient safety in intensive care: a descriptive qualitative study.

    Science.gov (United States)

    Ballangrud, Randi; Hall-Lord, Marie Louise; Persenius, Mona; Hedelin, Birgitta

    2014-08-01

To describe intensive care nurses' perceptions of simulation-based team training for building patient safety in intensive care. Failures in team processes are found to be contributory factors to incidents in an intensive care environment. Simulation-based training is recommended as a method to make health-care personnel aware of the importance of team working and to improve their competencies. The study uses a qualitative descriptive design. Individual qualitative interviews were conducted with 18 intensive care nurses from May to December 2009, all of whom had attended a simulation-based team training programme. The interviews were analysed by qualitative content analysis. One main category emerged to illuminate the intensive care nurses' perceptions: "training increases awareness of clinical practice and acknowledges the importance of structured work in teams". Three generic categories were found: "realistic training contributes to safe care", "reflection and openness motivates learning" and "finding a common understanding of team performance". Simulation-based team training makes intensive care nurses more prepared to care for severely ill patients. Team training creates a common understanding of how to work in teams with regard to patient safety. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. AN AERIAL-IMAGE DENSE MATCHING APPROACH BASED ON OPTICAL FLOW FIELD

    Directory of Open Access Journals (Sweden)

    W. Yuan

    2016-06-01

Full Text Available Dense matching plays an important role in many fields, such as DEM (digital elevation model) production, robot navigation and 3D environment reconstruction. Traditional approaches may meet the demand for accuracy, but their computation time and output density are hardly acceptable. Focusing on matching efficiency and the feasibility of matching complex terrain surfaces, an aerial-image dense matching method based on the optical flow field is proposed in this paper. First, highly accurate and uniformly distributed control points are extracted using a feature-based matching method. The optical flow at these control points is then calculated so as to determine the similar region between the two images. Second, the optical flow field is interpolated using multi-level B-spline interpolation in the similar region, accomplishing pixel-by-pixel coarse matching. Finally, the coarse matching results are refined based on a combined constraint, which identifies corresponding points between the images. The experimental results show that our method achieves per-pixel dense matching with sub-pixel accuracy, fully meeting the dense-matching requirements of three-dimensional reconstruction and automatic DSM generation. Comparison experiments demonstrated that our approach's matching efficiency is higher than that of semi-global matching (SGM) and patch-based multi-view stereo matching (PMVS), which verifies the feasibility and effectiveness of the algorithm.
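The coarse-matching step above, interpolating a sparse control-point flow field into a dense per-pixel field with B-splines, can be sketched as follows; scipy's `SmoothBivariateSpline` stands in for the paper's multi-level B-spline, and all data here are synthetic.

```python
# Sketch: fit a bicubic B-spline surface to sparse optical-flow samples at
# feature-matched control points, then evaluate it densely (one value per
# pixel). Synthetic control points; not the authors' implementation.
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.default_rng(0)
# control points: (x, y) positions and the horizontal flow component u there
x = rng.uniform(0.0, 100.0, 50)
y = rng.uniform(0.0, 100.0, 50)
u = 0.05 * x + 0.02 * y                 # a smooth synthetic flow field

spline = SmoothBivariateSpline(x, y, u, kx=3, ky=3)
xg = np.arange(0, 101, 1.0)
yg = np.arange(0, 101, 1.0)
u_dense = spline(xg, yg)                # dense flow field on the pixel grid
```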

  10. A scintillating fibre-based profiler for low intensity ion beams

    Energy Technology Data Exchange (ETDEWEB)

    Finocchiaro, P. [Istituto Nazionale di Fisica Nucleare, Catania (Italy); Amato, A. [Istituto Nazionale di Fisica Nucleare, Catania (Italy); Ciavola, G. [Istituto Nazionale di Fisica Nucleare, Catania (Italy); Cuttone, G. [Istituto Nazionale di Fisica Nucleare, Catania (Italy); Gu, M. [Istituto Nazionale di Fisica Nucleare, Catania (Italy); Raia, G. [Istituto Nazionale di Fisica Nucleare, Catania (Italy); Rovelli, A. [Istituto Nazionale di Fisica Nucleare, Catania (Italy)

    1997-01-11

    In the framework of the EXCYT radioactive ion beam facility, now under development at LNS Catania, we have developed a new beam profile monitor based on a scintillating fibre and a photodetector. Its sensitivity allows the detection of single beam particles in pulse mode, thus representing a useful tool for diagnostics of low and very low intensity beams. (orig.).

  11. A scintillating fibre-based profiler for low intensity ion beams

    International Nuclear Information System (INIS)

    Finocchiaro, P.; Amato, A.; Ciavola, G.; Cuttone, G.; Gu, M.; Raia, G.; Rovelli, A.

    1997-01-01

    In the framework of the EXCYT radioactive ion beam facility, now under development at LNS Catania, we have developed a new beam profile monitor based on a scintillating fibre and a photodetector. Its sensitivity allows the detection of single beam particles in pulse mode, thus representing a useful tool for diagnostics of low and very low intensity beams. (orig.)

  12. Perceptions of Personal Well-Being among Youth Accessing Residential or Intensive Home-Based Treatment

    Science.gov (United States)

    Preyde, Michele; Watkins, Hanna; Ashbourne, Graham; Lazure, Kelly; Carter, Jeff; Penney, Randy; White, Sara; Frensch, Karen; Cameron, Gary

    2013-01-01

    The outcomes of youth accessing residential treatment or intensive home-based treatment are varied. Understanding youth's perceptions of their well-being may inform service. The purpose of this report was to explore perceptions of youth's mental health, life satisfaction, and outlook for the future. Youth reported ongoing struggles with mental…

  13. Effects of an intensive data-based decision making intervention on teacher efficacy

    NARCIS (Netherlands)

    van der Scheer, Emmelien; Visscher, Arend J.

    2016-01-01

    Research into the effects of interventions on teacher efficacy is scarce. In this study, the long-term effects of an intensive data-based decision making intervention on teacher efficacy of mainly grade 4 teachers were investigated by means of a delayed treatment control group design (62 teachers).

  14. Modelling and assessment of urban flood hazards based on rainfall intensity-duration-frequency curves reformation

    OpenAIRE

    Ghazavi, Reza; Moafi Rabori, Ali; Ahadnejad Reveshty, Mohsen

    2016-01-01

Estimating the design storm from rainfall intensity–duration–frequency (IDF) curves is an important step in the hydrologic planning of urban areas. The main aim of this study was to estimate rainfall intensities of the Zanjan city watershed based on the overall relationship of rainfall IDF curves and an appropriate model of hourly rainfall estimation (Sherman method, Ghahreman and Abkhezr method). Hydrologic and hydraulic impacts of rainfall IDF curve changes on flood properties were evaluated via Stormw...
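The Sherman-type IDF relation mentioned above expresses rainfall intensity as a power-law decay in storm duration; a minimal sketch with invented coefficients (not fitted to Zanjan data):

```python
# Sherman-form IDF curve: i = a / (t + b)**c, with t in minutes and i in
# mm/h. The coefficients a, b, c below are illustrative placeholders.
def sherman_intensity(t_minutes, a=800.0, b=10.0, c=0.75):
    """Rainfall intensity (mm/h) for a storm of duration t_minutes."""
    return a / (t_minutes + b) ** c

i_short = sherman_intensity(10)    # short storms are more intense
i_long = sherman_intensity(120)
```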

  15. Evaluation of Intensive Construction Land Use in the Emerging City Based on PSR-Entropy model

    Science.gov (United States)

    Jia, Yuanyuan; Lei, Guangyu

    2018-01-01

A comprehensive understanding of emerging-city land utilization and the evaluation of intensive land use in an emerging city provide a comprehensive and reliable technical basis for planning and management. Based on land-use data for Hancheng from 2008 to 2016 and a PSR-Entropy model of the land-use evaluation system, the entropy method was used to determine the index weights, and a comprehensive index method was introduced to evaluate the degree of land use. The results show that the comprehensive evaluation index of intensive land use in Hancheng increased from 2008 to 2015, but land use has not yet reached the intensive-use standard, and considerable potential for further improvement remains.
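The entropy weighting step used in such PSR-Entropy evaluations can be sketched as follows; the indicator matrix and its values are made up for illustration.

```python
# Entropy weight method sketch: normalize an indicator matrix, compute each
# indicator's information entropy, and derive weights from the redundancy
# (1 - entropy). Rows are years, columns are indicators; data are invented.
import numpy as np

def entropy_weights(X):
    """X: (years x indicators) matrix of positive indicator values."""
    P = X / X.sum(axis=0)                   # share of each year per indicator
    n = X.shape[0]
    k = 1.0 / np.log(n)
    # entropy per indicator (0 * log 0 treated as 0)
    E = -k * np.where(P > 0, P * np.log(np.where(P > 0, P, 1.0)), 0.0).sum(axis=0)
    d = 1.0 - E                             # redundancy / divergence
    return d / d.sum()                      # weights sum to 1

X = np.array([[0.6, 120.0, 3.2],
              [0.7, 150.0, 3.1],
              [0.9, 200.0, 3.3]])
w = entropy_weights(X)
composite = (X / X.max(axis=0)) @ w         # simple composite index per year
```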

16. Optical fibre sensor based on the intensity switch of a linear laser with two Bragg gratings

    International Nuclear Information System (INIS)

    Basurto P, M.A.; Kuzin, E.A.; Archundia B, C.; Marroquin, E.; May A, M.; Cerecedo N, H.H.; Sanchez M, J.J.; Tentori S, D.; Marquez B, I.; Shliagin, M.; Miridonov, S.

    2000-01-01

In this work we propose a new configuration for an optical fiber temperature sensor based on a linear Er-doped fiber laser. The laser cavity consists of an Er-doped fiber and two identical Bragg gratings at the fiber ends (acting as reflectors). Temperature changes are detected by measuring, through one of the gratings, the intensity variations at the system's output. When the temperature of one of the Bragg gratings is modified, a wavelength shift of its spectral reflectivity is observed, and hence the laser emission intensity of the system changes. We present experimental results of the intensity switch observed when the temperature difference between the gratings detunes their spectral reflectances. Making use of this effect, it is possible to develop limit comparators that bound the temperature range of the object under supervision. This can be performed with high sensitivity using a very simple interrogation procedure. (Author)

  17. Differential standard deviation of log-scale intensity based optical coherence tomography angiography.

    Science.gov (United States)

    Shi, Weisong; Gao, Wanrong; Chen, Chaoliang; Yang, Victor X D

    2017-12-01

In this paper, a differential standard deviation of log-scale intensity (DSDLI) based optical coherence tomography angiography (OCTA) is presented for calculating microvascular images of human skin. The DSDLI algorithm calculates the variance in difference images of two consecutive log-scale intensity based structural images from the same position along the depth direction to contrast blood flow. The en face microvascular images were then generated by calculating the standard deviation of the differential log-scale intensities within a specific depth range, resulting in an improvement in spatial resolution and SNR in microvascular images compared to speckle variance OCT and the power intensity differential method. The performance of DSDLI was verified by both phantom and in vivo experiments. In the in vivo experiments, a self-adaptive sub-pixel image registration algorithm was performed to remove bulk motion noise, where a 2D Fourier transform was utilized to generate new images with spatial interval equal to half of the distance between two pixels in both the fast-scanning and depth directions. The SNRs of signals of flowing particles are improved by 7.3 dB and 6.8 dB on average in the phantom and in vivo experiments, respectively, while the average spatial resolution of images of in vivo blood vessels is increased by 21%. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
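The DSDLI contrast described above can be sketched on synthetic log-intensity B-scans; this illustrates the idea (difference of consecutive log-scale frames, then the standard deviation over a depth range), not the authors' implementation.

```python
# DSDLI-style en face flow contrast on synthetic data: a "vessel" column
# fluctuates strongly between repeated B-scans, static tissue does not.
import numpy as np

def dsdli_enface(log_bscans, z0, z1):
    """log_bscans: (repeats, depth, lateral) log-intensity B-scans."""
    diffs = np.diff(log_bscans, axis=0)          # consecutive-frame differences
    # en face value per lateral position: std of the differential log
    # intensities within the chosen depth range
    return diffs[:, z0:z1, :].std(axis=(0, 1))

rng = np.random.default_rng(1)
frames = rng.normal(0.0, 0.01, size=(4, 64, 32)) + 1.0     # static tissue
frames[:, 20:28, 10] += rng.normal(0.0, 0.5, size=(4, 8))  # a "vessel" column
enface = dsdli_enface(frames, 16, 32)
```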

  18. USGS "Did You Feel It?" internet-based macroseismic intensity maps

    Science.gov (United States)

    Wald, D.J.; Quitoriano, V.; Worden, B.; Hopper, M.; Dewey, J.W.

    2011-01-01

The U.S. Geological Survey (USGS) "Did You Feel It?" (DYFI) system is an automated approach for rapidly collecting macroseismic intensity data from Internet users' shaking and damage reports and generating intensity maps immediately following earthquakes; it has been operating for over a decade (1999-2011). DYFI-based intensity maps made rapidly available through the DYFI system fundamentally depart from more traditional maps made available in the past. The maps are made more quickly, provide more complete coverage and higher resolution, provide for citizen input and interaction, and allow data collection at rates and quantities never before considered. These aspects of Internet data collection, in turn, allow for data analyses, graphics, and ways to communicate with the public, opportunities not possible with traditional data-collection approaches. Yet web-based contributions also pose considerable challenges, as discussed herein. After a decade of operational experience with the DYFI system and users, we document refinements to the processing and algorithmic procedures since DYFI was first conceived. We also describe a number of automatic post-processing tools, operations, applications, and research directions, all of which utilize the extensive DYFI intensity datasets now gathered in near-real time. DYFI can be found online at the website http://earthquake.usgs.gov/dyfi/. © 2011 by the Istituto Nazionale di Geofisica e Vulcanologia.

  19. Re-visiting Trichuris trichiura intensity thresholds based on anemia during pregnancy.

    Directory of Open Access Journals (Sweden)

    Theresa W Gyorkos

Full Text Available The intensity categories, or thresholds, currently used for Trichuris trichiura (i.e. epg intensities of 1-999 (light); 1,000-9,999 epg (moderate); and ≥ 10,000 epg (heavy)) were developed in the 1980s, when there were little epidemiological data available on dose-response relationships. This study was undertaken to determine a threshold for T. trichiura-associated anemia in pregnant women and to describe the implications of this threshold in terms of the need for primary prevention and chemotherapeutic interventions. In Iquitos, Peru, 935 pregnant women were tested for T. trichiura infection in their second trimester of pregnancy; were given daily iron supplements throughout their pregnancy; and had their blood hemoglobin levels measured in their third trimester of pregnancy. Women in the highest two T. trichiura intensity quintiles (601-1632 epg and ≥ 1633 epg) had significantly lower mean hemoglobin concentrations than the lowest quintile (0-24 epg). They also had a statistically significantly higher risk of anemia, with adjusted odds ratios of 1.67 (95% CI: 1.02, 2.62) and 1.73 (95% CI: 1.09, 2.74), respectively. This analysis provides support for categorizing a T. trichiura infection ≥ 1,000 epg as 'moderate', as currently defined by the World Health Organization. Because this 'moderate' level of T. trichiura infection was found to be a significant risk factor for anemia in pregnant women, the intensity of Trichuris infection deemed to cause or aggravate anemia should no longer be restricted to the 'heavy' intensity category. It should now include both 'heavy' and 'moderate' intensities of Trichuris infection. Evidence-based deworming strategies targeting pregnant women or populations where anemia is of concern should be updated accordingly.

  20. Re-visiting Trichuris trichiura intensity thresholds based on anemia during pregnancy.

    Science.gov (United States)

    Gyorkos, Theresa W; Gilbert, Nicolas L; Larocque, Renée; Casapía, Martín; Montresor, Antonio

    2012-01-01

The intensity categories, or thresholds, currently used for Trichuris trichiura (i.e. epg intensities of 1-999 (light); 1,000-9,999 epg (moderate), and ≥ 10,000 epg (heavy)) were developed in the 1980s, when there were little epidemiological data available on dose-response relationships. This study was undertaken to determine a threshold for T. trichiura-associated anemia in pregnant women and to describe the implications of this threshold in terms of the need for primary prevention and chemotherapeutic interventions. In Iquitos, Peru, 935 pregnant women were tested for T. trichiura infection in their second trimester of pregnancy; were given daily iron supplements throughout their pregnancy; and had their blood hemoglobin levels measured in their third trimester of pregnancy. Women in the highest two T. trichiura intensity quintiles (601-1632 epg and ≥ 1633 epg) had significantly lower mean hemoglobin concentrations than the lowest quintile (0-24 epg). They also had a statistically significantly higher risk of anemia, with adjusted odds ratios of 1.67 (95% CI: 1.02, 2.62) and 1.73 (95% CI: 1.09, 2.74), respectively. This analysis provides support for categorizing a T. trichiura infection ≥ 1,000 epg as 'moderate', as currently defined by the World Health Organization. Because this 'moderate' level of T. trichiura infection was found to be a significant risk factor for anemia in pregnant women, the intensity of Trichuris infection deemed to cause or aggravate anemia should no longer be restricted to the 'heavy' intensity category. It should now include both 'heavy' and 'moderate' intensities of Trichuris infection. Evidence-based deworming strategies targeting pregnant women or populations where anemia is of concern should be updated accordingly.
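A crude (unadjusted) odds ratio with a Woolf-type 95% confidence interval, of the kind reported above, can be computed from a 2×2 table; the counts below are invented for illustration, not the study's data.

```python
# Odds ratio and Woolf (log-based) 95% CI from a 2x2 exposure/outcome table.
# Counts are hypothetical; the paper reports covariate-adjusted ORs.
import math

def odds_ratio_ci(a, b, c, d):
    """a,b: anemic/non-anemic exposed; c,d: anemic/non-anemic unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * 1.96 * se) for s in (-1, 1))
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(40, 150, 25, 160)
```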

  1. Respiration rate detection based on intensity modulation using plastic optical fiber

    Science.gov (United States)

    Anwar, Zawawi Mohd; Ziran Nurul Sufia, Nor; Hadi, Manap

    2017-11-01

This paper presents the implementation of respiration rate measurement via a simple intensity-based optical fiber sensor. The breathing rate is measured based on the light intensity variation due to longitudinal gap changes between two separated fibers. In order to monitor the breathing rate continuously, the output from the photodetector conditioning circuit is connected to a low-cost Arduino kit. At the sensing point, two optical fiber cables are positioned in series with a small gap and fitted inside a transparent plastic tube. To ensure smooth movement of the fiber during inhale and exhale processes, as well as to maintain the gap of the fiber during idle conditions, the fiber is attached firmly to a stretchable bandage. This study shows that this simple fiber arrangement can be applied to detect respiration activity, which might be critical for patient monitoring.
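A minimal sketch of turning such a photodetector signal into a respiration rate, counting rising threshold crossings over a time window; the waveform, sample rate, and threshold are all assumed rather than taken from the paper.

```python
# Hypothetical breath counting: a synthetic sinusoidal "intensity" signal
# stands in for the ADC samples; rising crossings of a mid-level threshold
# per minute give the respiration rate.
import math

fs = 50.0                                   # sample rate (Hz), assumed
duration = 60.0                             # one minute of data
n = int(fs * duration)
breaths_per_min = 15
signal = [math.sin(2 * math.pi * breaths_per_min / 60 * t / fs) for t in range(n)]

threshold = 0.0
crossings = sum(
    1 for i in range(1, n)
    if signal[i - 1] < threshold <= signal[i]
)
rate = crossings * 60.0 / duration          # breaths per minute
```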

  2. A novel vision-based mold monitoring system in an environment of intense vibration

    International Nuclear Information System (INIS)

    Hu, Fen; He, Zaixing; Zhao, Xinyue; Zhang, Shuyou

    2017-01-01

Mold monitoring, especially machine-vision-based mold monitoring, is increasingly widely used in the modern manufacturing industry, but existing systems cannot meet the detection speed and accuracy requirements because they must operate in environments that exhibit intense vibration during production. To ensure that the system runs accurately and efficiently, we propose a new descriptor that combines the geometric-relationship-based global context feature and the local scale-invariant feature transform for the image registration step of the mold monitoring system. The experimental results on four types of molds showed that the detection accuracy of the mold monitoring system is improved in an environment with intense vibration. (paper)

  3. A novel vision-based mold monitoring system in an environment of intense vibration

    Science.gov (United States)

    Hu, Fen; He, Zaixing; Zhao, Xinyue; Zhang, Shuyou

    2017-10-01

Mold monitoring, especially machine-vision-based mold monitoring, is increasingly widely used in the modern manufacturing industry, but existing systems cannot meet the detection speed and accuracy requirements because they must operate in environments that exhibit intense vibration during production. To ensure that the system runs accurately and efficiently, we propose a new descriptor that combines the geometric-relationship-based global context feature and the local scale-invariant feature transform for the image registration step of the mold monitoring system. The experimental results on four types of molds showed that the detection accuracy of the mold monitoring system is improved in an environment with intense vibration.

  4. Respiration rate detection based on intensity modulation using plastic optical fiber

    Directory of Open Access Journals (Sweden)

    Mohd Anwar Zawawi

    2017-01-01

Full Text Available This paper presents the implementation of respiration rate measurement via a simple intensity-based optical fiber sensor. The breathing rate is measured based on the light intensity variation due to longitudinal gap changes between two separated fibers. In order to monitor the breathing rate continuously, the output from the photodetector conditioning circuit is connected to a low-cost Arduino kit. At the sensing point, two optical fiber cables are positioned in series with a small gap and fitted inside a transparent plastic tube. To ensure smooth movement of the fiber during inhale and exhale processes, as well as to maintain the gap of the fiber during idle conditions, the fiber is attached firmly to a stretchable bandage. This study shows that this simple fiber arrangement can be applied to detect respiration activity, which might be critical for patient monitoring.

  5. SPES: A new cyclotron-based facility for research and applications with high-intensity beams

    Science.gov (United States)

    Maggiore, M.; Campo, D.; Antonini, P.; Lombardi, A.; Manzolaro, M.; Andrighetto, A.; Monetti, A.; Scarpa, D.; Esposito, J.; Silvestrin, L.

    2017-06-01

In 2016, Laboratori Nazionali di Legnaro (Italy) started the commissioning of a new accelerator facility based on a high-power cyclotron able to deliver proton beams of up to 70 MeV energy and 700 μA current. Such a machine is the core of the Selective Production of Exotic Species (SPES) project, whose main goal is to provide exotic beams for nuclear physics and astrophysics research and to deliver high-intensity proton beams for medical applications and neutron generation.

  6. A tesselation-based model for intensity estimation and laser plasma interactions calculations in three dimensions

    Science.gov (United States)

    Colaïtis, A.; Chapman, T.; Strozzi, D.; Divol, L.; Michel, P.

    2018-03-01

A three-dimensional laser propagation model for computation of laser-plasma interactions is presented. It is focused on indirect drive geometries in inertial confinement fusion and formulated for use at large temporal and spatial scales. A modified tesselation-based estimator and a relaxation scheme are used to estimate the intensity distribution in plasma from geometrical optics rays. Comparisons with reference solutions show that this approach is well-suited to reproduce realistic 3D intensity field distributions of beams smoothed by phase plates. It is shown that the method requires a reduced number of rays compared to traditional rigid-scale intensity estimation. Using this field estimator, we have implemented laser refraction, inverse-bremsstrahlung absorption, and steady-state crossed-beam energy transfer with a linear kinetic model in the numerical code Vampire. Probe beam amplification and laser spot shapes are compared with experimental results and pf3d paraxial simulations. These results are promising for the efficient and accurate computation of laser intensity distributions in hohlraums, which is of importance for determining the capsule implosion shape and risks of laser-plasma instabilities such as hot electron generation and backscatter in multi-beam configurations.

  7. Propagation based phase retrieval of simulated intensity measurements using artificial neural networks

    Science.gov (United States)

    Kemp, Z. D. C.

    2018-04-01

    Determining the phase of a wave from intensity measurements has many applications in fields such as electron microscopy, visible light optics, and medical imaging. Propagation based phase retrieval, where the phase is obtained from defocused images, has shown significant promise. There are, however, limitations in the accuracy of the retrieved phase arising from such methods. Sources of error include shot noise, image misalignment, and diffraction artifacts. We explore the use of artificial neural networks (ANNs) to improve the accuracy of propagation based phase retrieval algorithms applied to simulated intensity measurements. We employ a phase retrieval algorithm based on the transport-of-intensity equation to obtain the phase from simulated micrographs of procedurally generated specimens. We then train an ANN with pairs of retrieved and exact phases, and use the trained ANN to process a test set of retrieved phase maps. The total error in the phase is significantly reduced using this method. We also discuss a variety of potential extensions to this work.
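For a uniformly illuminated wave, the transport-of-intensity equation reduces to a Poisson equation that can be inverted with FFTs; the sketch below runs this inversion on a synthetic phase with illustrative parameters, and is not the paper's simulation pipeline.

```python
# TIE sketch for uniform intensity I0 = 1:  k * dI/dz = -laplacian(phase).
# We synthesize dI/dz from a known phase, then invert the Laplacian in
# Fourier space to recover the phase (up to its mean).
import numpy as np

n, dx, wavelength = 128, 1e-6, 500e-9      # grid, pixel size, wavelength
k = 2 * np.pi / wavelength

yy, xx = np.mgrid[:n, :n]
phase_true = np.exp(-((xx - n/2)**2 + (yy - n/2)**2) / (2 * 15.0**2))

fx = np.fft.fftfreq(n, dx)
kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
k2 = kx**2 + ky**2
lap_phase = np.fft.ifft2(-k2 * np.fft.fft2(phase_true)).real
dI_dz = -lap_phase / k                     # the TIE with I0 = 1

# inversion: phase = invLaplacian(-k * dI/dz), dividing by -k2 in Fourier space
k2[0, 0] = 1.0                             # avoid divide-by-zero at DC
phase_rec = np.fft.ifft2(np.fft.fft2(-k * dI_dz) / (-k2)).real
phase_rec -= phase_rec.mean()              # phase is recovered up to a constant
```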

  8. A CMOS Luminescence Intensity and Lifetime Dual Sensor Based on Multicycle Charge Modulation.

    Science.gov (United States)

    Fu, Guoqing; Sonkusale, Sameer R

    2018-06-01

Luminescence plays an important role in many scientific and industrial applications. This paper proposes a novel complementary metal-oxide-semiconductor (CMOS) sensor chip that can realize both luminescence intensity and lifetime sensing. To enable high sensitivity, we propose a parasitic-insensitive multicycle charge modulation scheme for low-light lifetime extraction, benefiting from simplicity, accuracy, and compatibility with deeply scaled CMOS processes. The designed in-pixel capacitive transimpedance amplifier (CTIA) based structure is able to capture the weak luminescence-induced voltage signal by accumulating photon-generated charges in 25 discrete gated 10-ms time windows with 10-μs pulsewidth. A pinned photodiode on chip with 1.04 pA dark current is utilized for luminescence detection. The proposed CTIA-based circuitry can achieve 2.1-mV/(nW/cm²) responsivity and 4.38-nW/cm² resolution at 630 nm wavelength for intensity measurement and 45-ns resolution for lifetime measurement. The sensor chip is employed for measuring time constants and luminescence lifetimes of an InGaN-based white light-emitting diode at different wavelengths. In addition, we demonstrate accurate measurement of the lifetime of an oxygen-sensitive chromophore, with sensitivities to oxygen concentration of 7.5%/ppm and 6%/ppm in the intensity and lifetime domains, respectively. This CMOS-enabled oxygen sensor was then employed to test water quality from different sources (tap water, lakes, and rivers).
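Gated charge integration permits a simple two-gate ("rapid lifetime determination") estimate of the decay constant; the sketch below uses assumed gate timings and lifetime, and illustrates the principle rather than the chip's actual processing.

```python
# Two-gate lifetime estimation: integrate an exponential decay exp(-t/tau)
# in two equal-width gates; the log of the charge ratio gives tau. Gate
# timings and the 40 us lifetime are assumed example values.
import math

def two_gate_lifetime(q1, q2, gate_start1, gate_start2):
    """tau from charges q1, q2 collected in two equal-width gates."""
    return (gate_start2 - gate_start1) / math.log(q1 / q2)

tau_true = 40e-6                  # chromophore lifetime (assumed)
w, t1, t2 = 20e-6, 10e-6, 50e-6   # gate width and gate start times

def gate_charge(t0):              # integral of exp(-t/tau_true) over [t0, t0+w]
    return tau_true * (math.exp(-t0 / tau_true) - math.exp(-(t0 + w) / tau_true))

tau_est = two_gate_lifetime(gate_charge(t1), gate_charge(t2), t1, t2)
```

For an ideal single-exponential decay the ratio of equal-width gate charges depends only on the gate spacing, so the estimate is exact.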

  9. Low-cost vibration sensor based on dual fiber Bragg gratings and light intensity measurement.

    Science.gov (United States)

    Gao, Xueqing; Wang, Yongjiao; Yuan, Bo; Yuan, Yinquan; Dai, Yawen; Xu, Gang

    2013-09-20

A vibration monitoring system based on light intensity measurement has been constructed; the designed accelerometer is based on a steel cantilever frame and dual fiber Bragg gratings (FBGs). Using numerical simulations of the dual FBGs, the dependence of the main-lobe overlap area on the difference of the initial central wavelengths is obtained, and the optimal choice of the initial value and the vibration amplitude of the central-wavelength difference of the two FBGs is suggested. Vibration monitoring experiments were carried out, and the measured data agree with the simulated results.
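The dual-FBG principle, output power falling as the two reflection spectra detune, can be sketched with Gaussian stand-ins for the grating spectra; the bandwidth and wavelengths below are assumed, not taken from the paper.

```python
# Model each FBG reflection spectrum as a Gaussian and compute the spectral
# overlap (light surviving both reflections) as the gratings detune.
import numpy as np

lam = np.linspace(1549.0, 1551.0, 2001)      # wavelength grid, nm
sigma = 0.2 / 2.355                          # ~0.2 nm FWHM gratings (assumed)

def reflect(center_nm):
    """Gaussian stand-in for an FBG reflection spectrum."""
    return np.exp(-0.5 * ((lam - center_nm) / sigma) ** 2)

def overlap(delta_nm):
    """Relative output power when the gratings are detuned by delta_nm."""
    product = reflect(1550.0) * reflect(1550.0 + delta_nm)
    return product.sum() * (lam[1] - lam[0])  # simple numerical integral

matched = overlap(0.0)
detuned = overlap(0.4)                        # detuned beyond the bandwidth
```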

  10. Multi-energy CT based on a prior rank, intensity and sparsity model (PRISM)

    International Nuclear Information System (INIS)

    Gao, Hao; Osher, Stanley; Yu, Hengyong; Wang, Ge

    2011-01-01

    We propose a compressive sensing approach for multi-energy computed tomography (CT), namely the prior rank, intensity and sparsity model (PRISM). To further compress the multi-energy image for allowing the reconstruction with fewer CT data and less radiation dose, the PRISM models a multi-energy image as the superposition of a low-rank matrix and a sparse matrix (with row dimension in space and column dimension in energy), where the low-rank matrix corresponds to the stationary background over energy that has a low matrix rank, and the sparse matrix represents the rest of distinct spectral features that are often sparse. Distinct from previous methods, the PRISM utilizes the generalized rank, e.g., the matrix rank of tight-frame transform of a multi-energy image, which offers a way to characterize the multi-level and multi-filtered image coherence across the energy spectrum. Besides, the energy-dependent intensity information can be incorporated into the PRISM in terms of the spectral curves for base materials, with which the restoration of the multi-energy image becomes the reconstruction of the energy-independent material composition matrix. In other words, the PRISM utilizes prior knowledge on the generalized rank and sparsity of a multi-energy image, and intensity/spectral characteristics of base materials. Furthermore, we develop an accurate and fast split Bregman method for the PRISM and demonstrate the superior performance of the PRISM relative to several competing methods in simulations. (papers)
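The low-rank plus sparse split at the heart of PRISM can be illustrated with a few alternating proximal steps; this is an RPCA-style iteration on synthetic data, not the authors' split Bregman solver, and it omits the tight-frame transform and spectral-curve constraints.

```python
# Decompose a synthetic (space x energy) matrix into a low-rank background
# plus a sparse spectral-feature term via alternating singular value
# thresholding and soft thresholding. All data and thresholds are invented.
import numpy as np

rng = np.random.default_rng(0)
space, energies = 200, 8
background = np.outer(rng.random(space), rng.random(energies))   # rank-1 part
features = np.zeros((space, energies))
features[rng.choice(space, 10, replace=False), 3] = 1.0          # sparse part
M = background + features

def svt(X, tau):
    """Singular value thresholding: proximal step for the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

L = np.zeros_like(M)
S = np.zeros_like(M)
for _ in range(50):
    L = svt(M - S, 0.5)                                          # low-rank update
    S = np.sign(M - L) * np.maximum(np.abs(M - L) - 0.05, 0.0)   # sparse update
```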

  11. 3D web based learning of medical equipment employed in intensive care units.

    Science.gov (United States)

    Cetin, Aydın

    2012-02-01

In this paper, both synchronous and asynchronous web-based learning of 3D models of medical equipment used in the hospital intensive care unit is described, delivered over the Moodle course management system. The 3D medical equipment models were designed with 3ds Max 2008, converted to ASE format, and given interactivity displayed with Viewpoint-Enliven. The 3D models are embedded in a web page in HTML format with dynamic interactivity (rotating, panning and zooming by dragging a mouse over the image), and descriptive information is attached to each 3D model in XML format. A 15-hour pilot course was given to the technicians responsible for intensive care units at the Medical Devices Repairing and Maintenance Center (TABOM) of the Turkish High Specialized Hospital.

  12. An industrial radiography exposure device based on measurement of transmitted gamma-ray intensity

    International Nuclear Information System (INIS)

    Polee, C; Chankow, N; Srisatit, S; Thong-Aram, D

    2015-01-01

In film radiography, underexposure and overexposure may happen, particularly when information about the specimen's material and hollowness is lacking. This paper describes a method and a device for determining the exposure in industrial gamma-ray radiography based on a quick measurement of the transmitted gamma-ray intensity with a small detector. Application software was developed for an Android mobile phone to remotely control the device and to display counting data via Bluetooth communication. Prior to film exposure, the device is placed behind a specimen to measure the transmitted intensity, which is inversely proportional to the required exposure. Unlike with the conventional exposure curve, correction factors for source decay, source-to-film distance, specimen thickness and kind of material are not needed. The developed technique and device make the radiographic process economic, convenient and more reliable. (paper)
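Since the required exposure is inversely proportional to the transmitted intensity, a single calibration constant relates the detector's count rate to the exposure time; the sketch below uses an invented calibration value for illustration.

```python
# Exposure time from transmitted count rate: one calibration constant
# (counts corresponding to the target film density, found once with a
# reference exposure) covers all materials and thicknesses. The value
# 1.2e6 is a hypothetical placeholder.
def exposure_time(count_rate_cps, calibration_counts=1.2e6):
    """Seconds of exposure needed for the target film density."""
    return calibration_counts / count_rate_cps

t_thin = exposure_time(8000.0)    # high transmission -> short exposure
t_thick = exposure_time(500.0)    # low transmission -> long exposure
```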

13. An Industrial Radiography Exposure Device Based on Measurement of Transmitted Gamma-Ray Intensity

    International Nuclear Information System (INIS)

    Polee, C.; Chankow, N.; Srisatit, S.; Thong-Aram, D.

    2014-01-01

In film radiography, underexposure and overexposure may happen, particularly when knowledge of the specimen's material and hollowness is lacking. This paper describes a method and a device for determining the exposure in industrial gamma-ray radiography based on a quick measurement of the transmitted gamma-ray intensity with a D3372 Hamamatsu small GM tube. Application software was developed for an Android mobile phone to remotely control the device and to display the counting data via Bluetooth. Prior to placing the film, the device is placed behind the specimen to be radiographed to determine the exposure time from the transmitted intensity, making the determination independent of source activity, source-to-film distance, specimen thickness and kind of material. The developed technique and device make the radiographic process economic, convenient and more reliable.

  14. Whole vertebral bone segmentation method with a statistical intensity-shape model based approach

    Science.gov (United States)

    Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer

    2011-03-01

An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focused on constructing and utilizing 4 different statistical intensity-shape combined models for the cervical, upper / lower thoracic and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as pre-processing to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge of both the intensities and the shapes of the objects. After PCA analysis of such shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed by fitting this parametric model to the target image via maximum a posteriori estimation, combined with the geodesic active contour method. In experiments with 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbo-sacral). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777 and 0.939 mm for the cervical, upper thoracic, lower thoracic and lumbar spines, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed a fair performance for cervical, thoracic and lumbar vertebrae.
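The statistical model step, expressing a vertebra as the training mean plus a linear combination of principal component vectors, can be sketched as follows; random vectors stand in for the real shape-intensity descriptors, and the MAP fitting and geodesic active contours are omitted.

```python
# PCA shape-intensity model sketch: learn mean and top modes from training
# vectors, then express a new sample as mean + components^T b.
import numpy as np

rng = np.random.default_rng(0)
n_train, dim, n_modes = 20, 300, 5
training = rng.normal(size=(n_train, dim))     # stand-in descriptors

mean = training.mean(axis=0)
X = training - mean
# principal component vectors via SVD of the centered data matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)
components = Vt[:n_modes]                      # top modes of variation

def reconstruct(sample):
    """Project a sample into the model and back. A MAP fit would also
    regularize the coefficients b by the per-mode variances."""
    b = components @ (sample - mean)           # model parameters
    return mean + components.T @ b

approx = reconstruct(training[0])
```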

  15. Distributions of emissions intensity for individual beef cattle reared on pasture-based production systems.

    Science.gov (United States)

    McAuliffe, G A; Takahashi, T; Orr, R J; Harris, P; Lee, M R F

    2018-01-10

    Life Cycle Assessment (LCA) of livestock production systems is often based on inventory data for farms typical of a study region. As information on individual animals is often unavailable, livestock data may already be aggregated at the time of inventory analysis, both across individual animals and across seasons. Even though various computational tools exist to consider the effect of genetic and seasonal variabilities in livestock-originated emissions intensity, the degree to which these methods can address the bias suffered by representative animal approaches is not well understood. Using detailed on-farm data collected on the North Wyke Farm Platform (NWFP) in Devon, UK, this paper proposes a novel approach of life cycle impact assessment that complements the existing LCA methodology. Field data, such as forage quality and animal performance, were measured at high spatial and temporal resolutions and directly transferred into LCA processes. This approach has enabled derivation of emissions intensity for each individual animal and, by extension, its intra-farm distribution, providing a step towards reducing the uncertainty related to agricultural production inherent in LCA studies for food. Depending on pasture management strategies, the total emissions intensity estimated by the proposed method was higher than the equivalent value recalculated using a representative animal approach by 0.9-1.7 kg CO2-eq/kg liveweight gain, or up to 10% of system-wide emissions. This finding suggests that emissions intensity values derived by the latter technique may be underestimated due to insufficient consideration given to poorly performing animals, whose emissions become exponentially greater as average daily gain decreases. Strategies to mitigate life-cycle environmental impacts of pasture-based beef production systems are also discussed.
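
    The aggregation bias discussed above follows from the convexity of emissions intensity in liveweight gain: averaging per-animal ratios weights poor performers more heavily than the ratio of herd averages does. An illustrative sketch with hypothetical numbers, not the NWFP data:

```python
import numpy as np

# Hypothetical annual liveweight gains (kg) and emissions (kg CO2-eq)
gain = np.array([120.0, 200.0, 250.0, 300.0, 330.0])
emissions = np.array([2900.0, 3000.0, 3050.0, 3100.0, 3150.0])

per_animal = emissions / gain                    # intensity of each animal
representative = emissions.mean() / gain.mean()  # "representative animal" view

# The poorly performing animal (120 kg gain) dominates the mean intensity,
# so the mean of ratios exceeds the ratio of means.
print(per_animal.round(2), round(representative, 2))
print(per_animal.mean() > representative)  # -> True
```

The gap between `per_animal.mean()` and `representative` is exactly the kind of underestimation the abstract attributes to representative animal approaches.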

  16. Experimental study of intense radiation in the terahertz region based on a cylindrical surface wave resonator

    International Nuclear Information System (INIS)

    Gong, Shaoyan; Ogura, Kazuo; Yambe, Kiyoyuki; Nomizu, Shintaro; Shirai, Akihiro; Yamazaki, Kosuke; Kawamura, Jun; Miura, Takuro; Takanashi, Sho; San, Min Thu

    2015-01-01

    Periodic corrugations structured on a cylindrical conductor support cylindrical surface waves (CSWs), which are reflected at the corrugation ends and form a CSW resonator. In this paper, intense radiation in the terahertz region based on the CSW resonator is reported. CSW resonators with upper cutoff frequencies in the IEEE G-band (110–300 GHz) are excited by a coaxially injected annular beam in a weakly relativistic regime below 100 kV. It is shown that there exists an oscillation starting energy for the CSW resonator. Above the starting energy, very intense terahertz radiation on the order of kW is obtained. Operation frequencies in the ranges 166–173 GHz and 182–200 GHz are obtained using two types of CSW resonator with different corrugation amplitudes. The electromagnetic properties of the CSW resonator can be controlled by the artificial structure and may play an important role in high-intensity terahertz generation and applications.

  17. Simulation of novel intensity modulated cascaded coated LPFG sensor based on PMTP

    Science.gov (United States)

    Feng, Wenbin; Gu, Zhengtian; Lin, Qiang; Sang, Jiangang

    2017-12-01

    This paper presents a novel intensity-modulated cascaded long-period fiber grating (CLPFG) sensor formed by cascading two identical coated long-period fiber gratings (LPFGs) operating at the phase-matching turning point (PMTP). The sensor combines the high sensitivity of an LPFG operating at the PMTP with the narrow bandwidth of the interference attenuation bands of a CLPFG, so a higher response to small changes in the surrounding refractive index (SRI) can be obtained by intensity modulation. Based on coupled-mode theory, the grating parameters at the PMTP of a middle odd-order cladding mode of a single LPFG are calculated. Two such LPFGs are then cascaded into a CLPFG, and the optical transmission spectrum of the CLPFG is calculated by the transfer matrix method. A resonant wavelength of the interference attenuation band whose intensity has the highest response to the SRI is selected from the CLPFG's spectrum and set as the operating wavelength of the sensor. Furthermore, the simulation results show that the SRI resolution of this CLPFG can reach 1.97 × 10^-9 by optimizing the film optical parameters, which is about three orders of magnitude higher than coated dual-peak LPFG and cascaded LPFG sensors. It is noteworthy that the sensor is also sensitive to the refractive index of the coating, so it is expected to be applicable to detection of gas, pH value, humidity and so on in the future.
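
    The transfer matrix calculation for a cascaded grating can be sketched with the standard two-mode (core/cladding) codirectional coupling matrices; the coupling strength, length and inter-grating phase below are illustrative, not the paper's optimized PMTP parameters:

```python
import numpy as np

def lpfg_matrix(kappa, sigma, length):
    """Standard two-mode codirectional coupling matrix of a uniform LPFG
    acting on (core, cladding) mode amplitudes; sigma is the detuning."""
    gamma = np.hypot(kappa, sigma)
    c, s = np.cos(gamma * length), np.sin(gamma * length)
    return np.array([[c + 1j * sigma / gamma * s, 1j * kappa / gamma * s],
                     [1j * kappa / gamma * s, c - 1j * sigma / gamma * s]])

def cascaded_transmission(kappa, sigma, length, dphi):
    """Core-mode power transmission of two identical LPFGs separated by a
    lossless section giving relative core/cladding phase dphi."""
    F = lpfg_matrix(kappa, sigma, length)
    P = np.diag([1.0, np.exp(1j * dphi)])
    a_out = (F @ P @ F) @ np.array([1.0, 0.0])
    return abs(a_out[0]) ** 2

# Two 3 dB gratings (kappa*L = pi/4): in phase they couple all power to the
# cladding; a pi relative phase returns it to the core.
L = 0.03
kappa = np.pi / 4 / L
print(round(cascaded_transmission(kappa, 0.0, L, 0.0), 3))    # -> 0.0
print(round(cascaded_transmission(kappa, 0.0, L, np.pi), 3))  # -> 1.0
```

Sweeping `sigma` (i.e. wavelength) traces out the interference attenuation bands whose narrow width the sensor exploits.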

  18. Simultaneous measurement of refractive index and temperature based on intensity demodulation using matching grating method.

    Science.gov (United States)

    Qi, Liang; Zhao, Chun-Liu; Kang, Juan; Jin, Yongxing; Wang, Jianfeng; Ye, Manping; Jin, Shangzhong

    2013-07-01

    A sensor for simultaneous measurement of solution refractive index (SRI) and temperature, with an intensity-demodulation system based on the matching grating method, is demonstrated. A long-period grating written in a photonic crystal fiber (LPG-PCF) provides temperature-stable, wavelength-dependent optical intensity transmission. The reflective peaks of two fiber Bragg gratings (FBGs), one of which is etched and therefore sensitive to both SRI and temperature, while the other (FBG2) is sensitive only to temperature, were located in the same linear range of the LPG-PCF's transmission spectrum. An FBG identical to FBG2 was chosen as the matching FBG. When the environment (SRI and temperature) changes, the wavelength shifts of the FBGs are translated effectively into reflection intensity changes. By monitoring the output light of the matching and non-matching paths, the SRI and temperature were deduced by a signal processing unit. Experimental results show that the simultaneous refractive index and temperature measurement system works well. The proposed sensor system is compact and suitable for in situ applications at low cost.

  19. Effects of alcohols on fluorescence intensity and color of a discharged-obelin-based biomarker.

    Science.gov (United States)

    Alieva, Roza R; Belogurova, Nadezhda V; Petrova, Alena S; Kudryasheva, Nadezhda S

    2014-05-01

    Photoproteins are responsible for the bioluminescence of marine coelenterates; bioluminescent and fluorescent biomarkers based on photoproteins are useful for monitoring calcium-dependent processes in medical investigations. Here, we present an analysis of the intensity and color of light-induced fluorescence of Ca(2+)-discharged photoprotein obelin in the presence of alcohols (ethanol and glycerol). Complex obelin spectra obtained at different concentrations of the alcohols at 350- and 280-nm excitation (corresponding to the polypeptide-bound coelenteramide and tryptophan absorption regions) were deconvoluted into Gaussian components; the fluorescence intensity and the contributions of the components to the experimental spectra were analyzed. Five Gaussian components were found in different spectral regions: ultraviolet (tryptophan emission), blue-green (coelenteramide emission) and red (hypothetical indole-coelenteramide exciplex emission). Inhibition coefficients and contributions of the components to the experimental fluorescence spectra showed that the presence of alcohols increased the contributions of the ultraviolet, violet and red components but decreased the contributions of components in the blue-green region. The effects were related to (1) changes in proton transfer efficiency in the fluorescent S*1 state of coelenteramide in the obelin active center and (2) formation of an indole-coelenteramide exciplex at 280-nm photoexcitation. The data show that the variation of fluorescence color and intensity in the presence of alcohols and the dependence of emission spectra on excitation wavelength should be considered when applying discharged obelin as a fluorescence biomarker.
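
    Once band centers and widths are fixed, the deconvolution into Gaussian components and the extraction of their contributions reduce to a linear least-squares fit. A sketch on synthetic data; the band positions and widths are hypothetical stand-ins, not the obelin values:

```python
import numpy as np

wl = np.linspace(300.0, 700.0, 400)  # wavelength grid (nm)

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Hypothetical band centers/widths (nm) standing in for the spectral components
components = [(340.0, 20.0), (410.0, 25.0), (490.0, 30.0), (560.0, 35.0)]
basis = np.stack([gauss(wl, m, s) for m, s in components], axis=1)

# Synthetic "experimental" spectrum with known contributions plus noise
true_c = np.array([0.2, 0.5, 1.0, 0.15])
spectrum = basis @ true_c + 0.01 * np.random.default_rng(0).normal(size=wl.size)

# Component contributions recovered by linear least squares
c, *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
print(np.round(c, 2))
```

When the centers and widths are themselves unknown, as in the study, a nonlinear fit over all Gaussian parameters replaces the linear solve, but the contribution analysis is the same.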

  20. Determination of Geometrical REVs Based on Volumetric Fracture Intensity and Statistical Tests

    Directory of Open Access Journals (Sweden)

    Ying Liu

    2018-05-01

    Full Text Available This paper presents a method to estimate a representative element volume (REV) of a fractured rock mass based on the volumetric fracture intensity P32 and statistical tests. A 150 m × 80 m × 50 m 3D fracture network model was generated based on field data collected at the Maji dam site by using the rectangular window sampling method. The volumetric fracture intensity P32 of each cube was calculated by varying the cube location in the generated 3D fracture network model and varying the cube side length from 1 to 20 m, and the distribution of the P32 values was described. The size effect and spatial effect of the fractured rock mass were studied; the P32 values from the same cube sizes and different locations were significantly different, and the fluctuation in P32 values clearly decreases as the cube side length increases. A new method that comprehensively considers the anisotropy of rock masses, simplicity of calculation and differences between methods was proposed to estimate the geometrical REV size. The geometrical REV size of the fractured rock mass was determined based on the volumetric fracture intensity P32 and two statistical test methods, namely the likelihood ratio test and the Wald–Wolfowitz runs test. The results of the two statistical tests were substantially different; critical cube sizes of 13 m and 12 m were estimated by the Wald–Wolfowitz runs test and the likelihood ratio test, respectively. Because the different test methods emphasize different considerations and impact factors, the larger cube size accepted by both tests, 13 m, was selected as the geometrical REV size of the fractured rock mass at the Maji dam site in China.
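
    The Wald–Wolfowitz runs test used in the REV decision checks whether successive values fluctuate randomly about their median. A minimal implementation of the large-sample z statistic (the example sequences are illustrative, not the paper's P32 data):

```python
import math
from statistics import median

def runs_test(values):
    """Wald-Wolfowitz runs test for randomness of a sequence,
    dichotomized about its median (ties at the median dropped).
    Returns (number of runs, large-sample z statistic)."""
    med = median(values)
    signs = [v > med for v in values if v != med]
    n1 = sum(signs)
    n2 = len(signs) - n1
    runs = 1 + sum(a != b for a, b in zip(signs, signs[1:]))
    n = n1 + n2
    mu = 2.0 * n1 * n2 / n + 1.0
    var = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n * n * (n - 1))
    return runs, (runs - mu) / math.sqrt(var)

# An alternating sequence has many runs (large positive z);
# a sorted one has only two runs (negative z).
print(runs_test([1, 9, 2, 8, 3, 7, 4, 6]))
print(runs_test([1, 2, 3, 4, 6, 7, 8, 9]))
```

In the REV context the sequence would be the P32 values of cubes of a given side length; a cube size is accepted when the null hypothesis of randomness is not rejected.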

  1. Data to calculate emissions intensity for individual beef cattle reared on pasture-based production systems

    Directory of Open Access Journals (Sweden)

    G.A. McAuliffe

    2018-04-01

    Full Text Available With increasing concern about environmental burdens originating from livestock production, the importance of farming system evaluation has never been greater. In order to form a basis for trade-off analysis of pasture-based cattle production systems, liveweight data from 90 Charolais × Hereford-Friesian calves were collected at a high temporal resolution at the North Wyke Farm Platform (NWFP) in Devon, UK. These data were then applied to the Intergovernmental Panel on Climate Change (IPCC) modelling framework to estimate on-farm methane emissions under three different pasture management strategies, completing a foreground dataset required to calculate the emissions intensity of individual beef cattle.
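
    The IPCC Tier 2 enteric fermentation calculation at the core of such a framework converts gross energy intake to a methane emission factor via the methane conversion factor Ym and the energy content of methane (55.65 MJ/kg CH4). A sketch; the intake value is hypothetical:

```python
def enteric_methane_kg_per_year(gross_energy_mj_per_day, ym_percent=6.5):
    """IPCC Tier 2 enteric methane emission factor:
    EF = GE * (Ym/100) * 365 / 55.65, with 55.65 MJ per kg CH4
    and Ym the fraction of gross energy converted to methane."""
    return gross_energy_mj_per_day * ym_percent / 100.0 * 365.0 / 55.65

# A growing animal ingesting 180 MJ/day of gross energy:
print(round(enteric_methane_kg_per_year(180.0), 1))  # -> 76.7
```

In the per-animal approach, GE is derived from each animal's measured intake and performance rather than from herd averages.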

  2. AN EVOLUTIONARY ALGORITHM FOR FAST INTENSITY BASED IMAGE MATCHING BETWEEN OPTICAL AND SAR SATELLITE IMAGERY

    Directory of Open Access Journals (Sweden)

    P. Fischer

    2018-04-01

    Full Text Available This paper presents a hybrid evolutionary algorithm for fast intensity-based matching between satellite imagery from SAR and very high-resolution (VHR) optical sensor systems. The precise and accurate co-registration of image time series and images of different sensors is a key task in multi-sensor image processing scenarios. The necessary preprocessing step of image matching and tie-point detection is divided into a search problem and a similarity measurement. Within this paper we evaluate the use of an evolutionary search strategy for establishing the spatial correspondence between satellite imagery of optical and radar sensors. The aim of the proposed algorithm is to decrease the computational costs during the search process by formulating the search as an optimization problem. Based upon the canonical evolutionary algorithm, the proposed algorithm is adapted for SAR/optical imagery intensity-based matching. Extensions such as hybridization (e.g. local search) are used to lower the number of objective function calls and refine the result. The algorithm significantly decreases the computational costs whilst finding the optimal solution in a reliable way.
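
    The evolutionary search over the transformation space can be sketched for the simplest case, an integer translation, with image correlation as the objective; the synthetic images and the (mu + lambda) parameters below are illustrative only, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic reference and a copy shifted by the (unknown) offset (5, -7)
y, x = np.mgrid[0:64, 0:64]
ref = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 100.0)
mov = np.roll(ref, (5, -7), axis=(0, 1))

def fitness(shift):
    """Correlation after undoing the candidate shift (the objective function)."""
    return float((ref * np.roll(mov, (-shift[0], -shift[1]), axis=(0, 1))).sum())

# (mu + lambda) evolutionary search over integer shifts: mutate, select the best
pop = [tuple(rng.integers(-15, 16, size=2)) for _ in range(10)]
for _ in range(30):
    children = [tuple(np.clip(np.array(p) + rng.integers(-2, 3, size=2), -15, 15))
                for p in pop for _ in range(3)]
    pop = sorted(set(pop + children), key=fitness, reverse=True)[:10]

best = pop[0]
print(best, round(fitness(best) / fitness((5, -7)), 3))
```

The full method searches a richer transformation space and replaces plain correlation with a SAR/optical-appropriate similarity measure, but the select-and-mutate loop is the same.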

  3. Fluorescence intensity and lifetime-based cyanide sensitive probes for physiological safeguard

    International Nuclear Information System (INIS)

    Badugu, Ramachandram; Lakowicz, Joseph R.; Geddes, Chris D.

    2004-01-01

    We characterize six new fluorescent probes that show both intensity and lifetime changes in the presence of free uncomplexed aqueous cyanide, allowing for fluorescence-based cyanide sensing up to physiological safeguard levels. Cyanide binding converts the probes to the anionic R-B⁻(CN)₃ form, a new cyanide binding mechanism which we have recently reported. The presence of an electron-deficient quaternary heterocyclic nitrogen nucleus, and the electron-rich cyanide-bound form, provides for the intensity changes observed. We have determined the dissociation constants of the probes to be in the range ∼15-84 μM³. In addition we have synthesized control compounds which do not contain the boronic acid moiety, allowing for a rationale of the cyanide responses between the probe isomers to be made. The lifetimes of the cyanide-bound probes are significantly shorter than those of the free R-B(OH)₂ probe forms, providing for the opportunity of lifetime-based cyanide sensing up to physiologically lethal levels. Finally, while fluorescent probes containing the boronic acid moiety have earned a well-deserved reputation for monosaccharide sensing, we show that strong bases such as CN⁻ and OH⁻ preferentially bind as compared to glucose, enabling the potential use of these probes for cyanide safeguard and determination in physiological fluids, especially given that physiologies do not experience any notable changes in pH.

  4. Two-year outcome of team-based intensive case management for patients with schizophrenia.

    Science.gov (United States)

    Aberg-Wistedt, A; Cressell, T; Lidberg, Y; Liljenberg, B; Osby, U

    1995-12-01

    Two-year outcomes of patients with schizophrenic disorders who were assigned to an intensive, team-based case management program and patients who received standard psychiatric services were assessed. The case management model featured increased staff contact time with patients, rehabilitation plans based on patients' expressed needs, and patients' attendance at team meetings where their rehabilitation plan was discussed. Forty patients were randomly assigned to either the case management group or the control group that received standard services. Patients' use of emergency and inpatient services, their quality of life, the size of their social networks, and their relatives' burden of care were assessed at assignment to the study groups and at two-year follow-up. Patients in the case management group had significantly fewer emergency visits compared with the two years before the study, and their relatives reported significantly reduced burden of care associated with relationships with psychiatric services over the two-year period. The size of patients' social networks increased for the case management group and decreased for the control group. A team-based intensive case management model is an effective intervention in the rehabilitation of patients with chronic schizophrenia.

  5. Quantification of Parkinson Tremor Intensity Based On EMG Signal Analysis Using Fast Orthogonal Search Algorithm

    Directory of Open Access Journals (Sweden)

    H. Rezghian Moghadam

    2018-06-01

    Full Text Available Tremor is one of the common symptoms of Parkinson's disease. Patients suffering from Parkinson's disease have difficulty controlling their movements owing to tremor. The intensity of the disease can be determined by specifying the range of intensity values of involuntary tremor in Parkinson patients. The level of disease in patients is rated on an empirical scale of 0-5. In the early stages of Parkinson's disease, resting tremor can be very mild and intermittent, so diagnosing the level of disease is difficult but important, since the disease has only medication therapy. The aim of this study is to quantify the intensity of tremor by analysis of the electromyogram signal. The solution proposed in this paper is to employ a polynomial function model to estimate the Unified Parkinson's Disease Rating Scale (UPDRS) value. The Fast Orthogonal Search (FOS) algorithm, which is based on identification of orthogonal basis functions, was utilized for model identification. Some linear and nonlinear features extracted from the wrist surface electromyogram signal were considered as the input of the model identified by FOS, and the model output was the UPDRS value. In this research, the proposed model was designed based on two different structures, called the single structure and the parallel structure. The efficiency of the designed models with the different structures was evaluated. The evaluation results using the K-fold cross-validation approach showed that the proposed model with a parallel structure could determine the tremor severity of Parkinson's disease with an accuracy of 99.25% ±0.41, a sensitivity of 97.17% ±1.9 and a specificity of 99.72% ±0.18.

  6. A tunable, linac-based, intense, broad-band THz source for pump-probe experiments

    Energy Technology Data Exchange (ETDEWEB)

    Schmerge, J.; Adolphsen, C.; Corbett, J.; Dolgashev, V.; Durr, H.; Fazio, M.; Fisher, A.; Frisch, J.; Gaffney, K.; Guehr, M.; Hastings, J.; Hettel, B.; Hoffmann, M.; Hogan, M.; Holtkamp, N.; Huang, X.; Huang, Z.; Kirchmann, P.; LaRue, J.; Limborg, C.; Lindenberg, A.; Loos, H.; Maxwell, T.; Nilsson, A.; Raubenheimer, T.; Reis, D.; Ross, M.; Shen, Z.-X.; Stupakov, G.; Tantawi, S.; Tian, K.; Wu, Z.; Xiang, D.; Yakimenko, V. [all: SLAC National Accelerator Lab., Menlo Park, CA (United States)]

    2015-02-02

    We propose an intense THz source with tunable frequency and bandwidth that can directly interact with the degrees of freedom that determine the properties of materials, providing a new tool for controlling and directing these ultrafast processes as well as aiding the synthesis of new materials with new functional properties. This THz source will broadly impact our understanding of dynamical processes in matter at the atomic scale and in real time. Established optical pumping schemes using femtosecond visible-frequency laser pulses for excitation are extended into the THz frequency regime, enabling resonant excitation of bonds in correlated solid-state materials (phonon pumping), driving of low-energy electronic excitations, triggering of surface chemistry reactions, and all-optical biasing of a material with ultrashort electric or magnetic fields. A linac-based THz source can supply stand-alone experiments with peak intensities two orders of magnitude stronger than existing laser-based sources, but when coupled with atomic-scale-sensitive femtosecond x-ray probes it opens a new frontier in ultrafast science with broad applications to correlated materials, interfacial and liquid-phase chemistry, and materials in extreme conditions.

  7. Intensity liquid level sensor based on multimode interference and fiber Bragg grating

    International Nuclear Information System (INIS)

    Oliveira, Ricardo; Aristilde, Stenio; Osório, Jonas H; Cordeiro, Cristiano M B; Franco, Marcos A R; Bilro, Lúcia; Nogueira, Rogério N

    2016-01-01

    In this paper an intensity liquid level sensor based on a single-mode-no-core-single-mode (SMS) fiber structure together with a Bragg grating inscribed in the latter single-mode fiber is proposed. As the no-core fiber is sensitive to the external refractive index, the SMS spectral response shifts according to the length of no-core fiber immersed in the liquid. By positioning the FBG central wavelength in the spectral region of the SMS edge filter, it is possible to measure the liquid level using the reflected FBG peak power through an intensity-based approach. The sensor is also self-referenced using the peak power of another FBG placed before and far from the sensing part. A temperature error analysis was also carried out, revealing that the sensor can operate in environments where temperature changes are minimal. The possibility of using a second setup that makes the whole device temperature insensitive is also discussed. (paper)

  8. [Activity-based costing methodology to manage resources in intensive care units].

    Science.gov (United States)

    Alvear V, Sandra; Canteros G, Jorge; Jara M, Juan; Rodríguez C, Patricia

    2013-11-01

    An accurate estimation of resource use by individual patients is crucial in hospital management. We aimed to measure the financial costs of health care actions in the intensive care units of two public regional hospitals in Chile, through a prospective follow-up of 716 patients admitted during 2011. The financial costs of health care activities were calculated using the Activity-Based Costing methodology. The main activities recorded were procedures and treatments, monitoring, response to patient needs, patient maintenance and coordination. Activity-based costs, including human resources and assorted indirect costs, correspond to 81-88% of costs per disease in one hospital and 69-80% in the other. The costs associated with procedures and treatments are the most significant, approximately $100,000 (Chilean pesos) per day of hospitalization. The second most significant cost corresponds to coordination activities, which fluctuates between $86,000 and $122,000 (Chilean pesos). There are significant differences in resource use between the two hospitals studied; therefore, cost estimation methodologies should be incorporated in the management of these clinical services.

  9. Accelerator-based intense neutron source for materials R&D

    International Nuclear Information System (INIS)

    Jameson, R.A.

    1990-01-01

    Accelerator-based neutron sources for R&D of materials in nuclear energy systems, including fusion reactors, can provide sufficient neutron flux, flux-volume, fluence and other attractive features for many aspects of materials research. The neutron spectrum produced by the D-Li reaction has been judged useful for many basic materials research problems and a satisfactory approximation to that of the fusion process. The technology of high-intensity linear accelerators can readily be applied to provide the deuteron beam for the neutron source. Earlier applications included the Los Alamos Meson Physics Facility and the Fusion Materials Irradiation Test facility prototype. The key features of today's advanced accelerator technology are presented to illustrate the present state of the art in terms of improved understanding of basic physical principles and engineering technique, and to show how these advances can be applied to present demands in a timely manner. These features include producing an intense beam current with the high quality required to minimize beam losses along the accelerator and transport system, which could otherwise cause maintenance difficulties, by controlling the beam emittance through proper choice of the operating frequency, balancing the forces acting on the beam, and realization in practical hardware. A most interesting aspect for materials researchers is the increased flexibility and the opportunities for experimental configurations that a modern accelerator-based source could add to the set of available tools. 8 refs., 5 figs

  10. Change detection for synthetic aperture radar images based on pattern and intensity distinctiveness analysis

    Science.gov (United States)

    Wang, Xiao; Gao, Feng; Dong, Junyu; Qi, Qiang

    2018-04-01

    Synthetic aperture radar (SAR) imaging is independent of atmospheric conditions, making SAR the ideal image source for change detection. Existing methods directly analyze all regions in the speckle-noise-contaminated difference image, so their performance is easily affected by small noisy regions. In this paper, we propose a novel saliency-guided change detection framework based on pattern and intensity distinctiveness analysis. The saliency analysis step removes small noisy regions and therefore makes the proposed method more robust to speckle noise. In the proposed method, the log-ratio operator is first utilized to obtain a difference image (DI). Then, a saliency detection method based on pattern and intensity distinctiveness analysis is utilized to obtain changed-region candidates. Finally, principal component analysis and k-means clustering are employed to analyze the pixels in the changed-region candidates. The final change map is obtained by classifying these pixels into the changed or unchanged class. Experimental results on two real SAR image datasets demonstrate the effectiveness of the proposed method.
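
    The pipeline (log-ratio DI, PCA features, two-class k-means) can be sketched end to end on synthetic speckled data. The saliency step is omitted here for brevity, and all sizes, distributions and the neighborhood-based PCA features are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two co-registered speckled intensity images; a block changes between them
im1 = rng.gamma(shape=4.0, scale=25.0, size=(60, 60))
im2 = im1 * rng.gamma(shape=50.0, scale=0.02, size=(60, 60))  # speckle only
im2[20:35, 20:35] *= 4.0                                      # changed region

# 1. Log-ratio operator suppresses multiplicative speckle
di = np.abs(np.log(im2) - np.log(im1))

# 2. PCA of 5x5 neighborhood vectors around each pixel
pad = np.pad(di, 2, mode="reflect")
nbhd = np.stack([pad[i:i + 60, j:j + 60].ravel()
                 for i in range(5) for j in range(5)], axis=1)
nbhd -= nbhd.mean(axis=0)
_, _, vt = np.linalg.svd(nbhd, full_matrices=False)
feats = nbhd @ vt[:3].T                      # first three principal components

# 3. Two-class k-means: changed vs. unchanged pixels
centers = feats[[np.argmin(feats[:, 0]), np.argmax(feats[:, 0])]]
for _ in range(20):
    labels = np.argmin(((feats[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.stack([feats[labels == k].mean(axis=0) for k in (0, 1)])

# The cluster with the larger mean log-ratio is the changed class
changed = int(np.argmax([di.ravel()[labels == k].mean() for k in (0, 1)]))
change_map = (labels == changed).reshape(60, 60)
print(change_map[27, 27], change_map[5, 5])  # inside / outside the changed block
```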

  11. Intensity-based Bayesian framework for image reconstruction from sparse projection data

    International Nuclear Information System (INIS)

    Rashed, E.A.; Kudo, Hiroyuki

    2009-01-01

    This paper presents a Bayesian framework for iterative image reconstruction from projection data measured over a limited number of views. The classical Nyquist sampling rule yields the minimum number of projection views required for accurate reconstruction. However, in many medical and industrial imaging applications the projection data are undersampled. Classical analytical reconstruction methods such as filtered backprojection (FBP) are not a good choice in such cases because data undersampling in the angular range introduces aliasing and streak artifacts that degrade lesion detectability. In this paper, we propose a Bayesian framework for maximum likelihood-expectation maximization (ML-EM)-based iterative reconstruction methods that incorporates a priori knowledge obtained from expected intensity information. The proposed framework is based on the fact that, in tomographic imaging, it is often possible to predict a set of intensity values of the reconstructed object with relatively high accuracy. The image reconstruction cost function is modified to include the ℓ1-norm distance to the a priori known information. The proposed method has the potential to regularize the solution and reduce artifacts without missing lesions that cannot be predicted from the a priori information. Numerical studies showed a significant improvement in image quality and lesion detectability under conditions of highly undersampled projection data. (author)
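
    The baseline ML-EM update that such a framework builds on can be sketched on a toy system; the ℓ1 intensity prior itself is omitted here, so this is the unregularized algorithm the paper modifies, not the proposed method:

```python
import numpy as np

def ml_em(A, y, n_iter=200):
    """Classical ML-EM for emission tomography, y ~ Poisson(A x):
    multiplicative update x <- x * A^T(y / (A x)) / (A^T 1)."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])   # sensitivity (normalization) term
    for _ in range(n_iter):
        x = x * (A.T @ (y / (A @ x))) / sens
    return x

# Toy 2-pixel, 3-ray system matrix
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 5.0])
y = A @ x_true                         # noiseless projections
print(np.round(ml_em(A, y), 3))        # -> [2. 5.]
```

The paper's modification adds a penalty pulling `x` toward the expected intensity values, which stabilizes this update when the projection views are sparse.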

  12. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy.

    Science.gov (United States)

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-07

    Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet, yet this is not the optimal use of MC in this problem. In fact, some beamlets have very small intensities after solving the plan optimization problem; for those beamlets, fewer particles may be used in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained by solving the plan optimization problem in the previous iteration. We modified a GPU-based MC dose engine to allow simultaneous computation of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom; agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization scheme in one lung IMRT case. It was found that the conventional scheme required 10^6 particles from each beamlet to achieve an optimization result within 3% difference in fluence map and 1% difference in dose from the ground truth. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10^5 particles per beamlet. Correspondingly, the computation
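
    The adaptive sampling idea (more particles for beamlets that carry more intensity) can be sketched as a budget-allocation rule; the floor value and the beamlet weights are hypothetical, not the paper's settings:

```python
import numpy as np

def allocate_particles(intensities, total, floor=1000):
    """Distribute a particle budget across beamlets in proportion to their
    current optimized intensities, with a per-beamlet floor so low-weight
    beamlets still receive a (cheap) dose estimate in the next iteration."""
    w = np.asarray(intensities, dtype=float)
    w = w / w.sum()
    return np.maximum(np.rint(w * total).astype(int), floor)

# Hypothetical beamlet intensities after one optimization pass
print(allocate_particles([0.0, 0.2, 1.5, 8.3], 10**6))
```

Iterating this allocation with re-optimization concentrates the Monte Carlo effort on the beamlets that actually shape the delivered dose.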

  13. Forecasting overhaul or replacement intervals based on estimated system failure intensity

    Science.gov (United States)

    Gannon, James M.

    1994-12-01

    System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process with a Weibull (power-law) intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLEs) for the system rate of occurrence of failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and distribution of the net present value and internal rate of return of alternative cash flows, based on the distributions of the cost inputs and the confidence intervals of the MLEs.
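
    For the power-law (Weibull) intensity λ(t) = (β/η)(t/η)^(β−1), the integral over a usage interval has a closed form, so expected annual failures are simple to tabulate. A sketch; the parameter values are hypothetical MLEs, not from the paper:

```python
def expected_failures(t1, t2, beta, eta):
    """Expected failures of an NHPP with power-law intensity
    lambda(t) = (beta/eta) * (t/eta)**(beta - 1) over [t1, t2];
    the integral evaluates to (t2/eta)**beta - (t1/eta)**beta."""
    return (t2 / eta) ** beta - (t1 / eta) ** beta

beta, eta = 2.0, 1000.0   # hypothetical MLEs; beta > 1 means deterioration
for year in (1, 2, 3):    # 500 operating hours per year
    print(year, expected_failures((year - 1) * 500.0, year * 500.0, beta, eta))
```

Multiplying each year's expected failure count by a cost of failure gives the expected maintenance cost stream used for the overhaul-versus-replace decision.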

  14. Influence of Signal Intensity Non-Uniformity on Brain Volumetry Using an Atlas-Based Method

    International Nuclear Information System (INIS)

    Takao, Hidemasa; Kunimatsu, Akira; Mori, Harushi

    2012-01-01

    Many studies have reported pre-processing effects for brain volumetry; however, no study has investigated whether non-parametric non-uniform intensity normalization (N3) correction processing results in reduced system dependency when using an atlas-based method. To address this shortcoming, the present study assessed whether N3 correction processing provides reduced system dependency in atlas-based volumetry. Contiguous sagittal T1-weighted images of the brain were obtained from 21 healthy participants, by using five magnetic resonance protocols. After image preprocessing using the Statistical Parametric Mapping 5 software, we measured the structural volume of the segmented images with the WFU-PickAtlas software. We applied six different bias-correction levels (Regularization 10, Regularization 0.0001, Regularization 0, Regularization 10 with N3, Regularization 0.0001 with N3, and Regularization 0 with N3) to each set of images. The structural volume change ratio (%) was defined as the change ratio (%) = (100 × [measured volume - mean volume of five magnetic resonance protocols] / mean volume of five magnetic resonance protocols) for each bias-correction level. A low change ratio was synonymous with lower system dependency. The results showed that the images with the N3 correction had a lower change ratio compared with those without the N3 correction. The present study is the first atlas-based volumetry study to show that the precision of atlas-based volumetry improves when using N3-corrected images. Therefore, correction for signal intensity non-uniformity is strongly advised for multi-scanner or multi-site imaging trials.

  15. Influence of signal intensity non-uniformity on brain volumetry using an atlas-based method.

    Science.gov (United States)

    Goto, Masami; Abe, Osamu; Miyati, Tosiaki; Kabasawa, Hiroyuki; Takao, Hidemasa; Hayashi, Naoto; Kurosu, Tomomi; Iwatsubo, Takeshi; Yamashita, Fumio; Matsuda, Hiroshi; Mori, Harushi; Kunimatsu, Akira; Aoki, Shigeki; Ino, Kenji; Yano, Keiichi; Ohtomo, Kuni

    2012-01-01

    Many studies have reported pre-processing effects for brain volumetry; however, no study has investigated whether non-parametric non-uniform intensity normalization (N3) correction processing results in reduced system dependency when using an atlas-based method. To address this shortcoming, the present study assessed whether N3 correction processing provides reduced system dependency in atlas-based volumetry. Contiguous sagittal T1-weighted images of the brain were obtained from 21 healthy participants, by using five magnetic resonance protocols. After image preprocessing using the Statistical Parametric Mapping 5 software, we measured the structural volume of the segmented images with the WFU-PickAtlas software. We applied six different bias-correction levels (Regularization 10, Regularization 0.0001, Regularization 0, Regularization 10 with N3, Regularization 0.0001 with N3, and Regularization 0 with N3) to each set of images. The structural volume change ratio (%) was defined as the change ratio (%) = (100 × [measured volume - mean volume of five magnetic resonance protocols] / mean volume of five magnetic resonance protocols) for each bias-correction level. A low change ratio was synonymous with lower system dependency. The results showed that the images with the N3 correction had a lower change ratio compared with those without the N3 correction. The present study is the first atlas-based volumetry study to show that the precision of atlas-based volumetry improves when using N3-corrected images. Therefore, correction for signal intensity non-uniformity is strongly advised for multi-scanner or multi-site imaging trials.
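    The change ratio defined above is simple to compute; a minimal sketch with hypothetical structural volumes (mL) from five protocols:

```python
def change_ratio(measured, protocol_volumes):
    """Structural volume change ratio (%) relative to the mean over the MR protocols:
    100 * (measured - mean) / mean."""
    mean = sum(protocol_volumes) / len(protocol_volumes)
    return 100.0 * (measured - mean) / mean

# Hypothetical volumes of one structure measured under five MR protocols:
vols = [612.0, 605.0, 598.0, 620.0, 615.0]
ratios = [change_ratio(v, vols) for v in vols]
# A smaller spread of these ratios indicates lower system (protocol) dependency.
```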

  16. Influence of Signal Intensity Non-Uniformity on Brain Volumetry Using an Atlas-Based Method

    Energy Technology Data Exchange (ETDEWEB)

    Takao, Hidemasa; Kunimatsu, Akira; Mori, Harushi [University of Tokyo Hospital, Tokyo (Japan); and others

    2012-07-15

    Many studies have reported pre-processing effects for brain volumetry; however, no study has investigated whether non-parametric non-uniform intensity normalization (N3) correction processing results in reduced system dependency when using an atlas-based method. To address this shortcoming, the present study assessed whether N3 correction processing provides reduced system dependency in atlas-based volumetry. Contiguous sagittal T1-weighted images of the brain were obtained from 21 healthy participants, by using five magnetic resonance protocols. After image preprocessing using the Statistical Parametric Mapping 5 software, we measured the structural volume of the segmented images with the WFU-PickAtlas software. We applied six different bias-correction levels (Regularization 10, Regularization 0.0001, Regularization 0, Regularization 10 with N3, Regularization 0.0001 with N3, and Regularization 0 with N3) to each set of images. The structural volume change ratio (%) was defined as the change ratio (%) = (100 × [measured volume - mean volume of five magnetic resonance protocols] / mean volume of five magnetic resonance protocols) for each bias-correction level. A low change ratio was synonymous with lower system dependency. The results showed that the images with the N3 correction had a lower change ratio compared with those without the N3 correction. The present study is the first atlas-based volumetry study to show that the precision of atlas-based volumetry improves when using N3-corrected images. Therefore, correction for signal intensity non-uniformity is strongly advised for multi-scanner or multi-site imaging trials.

  17. Outcomes in a Community-Based Intensive Cardiac Rehabilitation Program: Comparison with Hospital-Based and Academic Programs.

    Science.gov (United States)

    Katzenberg, Charles; Silva, Edna; Young, M Jean; Gilles, Greg

    2018-04-13

    The purpose of this study was to test the hypothesis that a community-based intensive cardiac rehabilitation program could produce positive changes in risk factor profile and outcomes in an at-risk population. Participants seeking either primary or secondary coronary artery disease prevention voluntarily enrolled in the 12-week intensive cardiac rehabilitation program. Data were obtained at baseline and 6-12 months after completion of the program. A total of 142 individuals, mean age 69 years, completed the Heart Series between 2012 and 2016. Follow-up data were available in 105 participants (74%). Participants showed statistically significant improvements in mean weight (165 to 162 lbs, P = .0005), body mass index (26 to 25 kg/m², P = .001), systolic blood pressure (126 to 122 mm Hg, P = .01), diastolic blood pressure (73 to 70 mm Hg, P = .0005), total cholesterol (175 to 168 mg/dL, P = .03), low-density lipoprotein cholesterol (LDL-C) (100 to 93 mg/dL, P = .005), LDL-C/high-density lipoprotein cholesterol (HDL-C) ratio (1.8 to 1.6, P = .005), and cholesterol/HDL-C ratio (3.2 to 3.0, P = .003). Changes in HDL-C, triglycerides, and fasting blood glucose did not reach statistical significance, but all trended in favorable directions. Adverse cardiovascular disease outcomes were rare (one stent placement, no deaths). A total of 105 participants completed our 12-week community-based intensive cardiac rehabilitation program and showed significant positive changes in several measures of cardiac risk, with only 1 adverse event. These results compare favorably with those of hospital-based and academic institutional programs. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. First experiences with model based iterative reconstructions influence on quantitative plaque volume and intensity measurements in coronary computed tomography angiography

    DEFF Research Database (Denmark)

    Precht, Helle; Kitslaar, Pieter H.; Broersen, Alexander

    2017-01-01

    Purpose: Investigate the influence of adaptive statistical iterative reconstruction (ASIR) and the model-based IR (Veo) reconstruction algorithm in coronary computed tomography angiography (CCTA) images on quantitative measurements in coronary arteries for plaque volumes and intensities. Methods...

  19. Suppression of Repeat-Intensive False Targets Based on Temporal Pulse Diversity

    Directory of Open Access Journals (Sweden)

    Gang Lu

    2013-01-01

    Full Text Available This paper considers the problem of suppressing the repeat-intensive false targets produced by a deception electronic attack (EA) system equipped with a Digital Radio Frequency Memory (DRFM) device. Different from a conventional repeat jammer, this type of jamming intensively retransmits the intercepted signal stored in the DRFM to the victim radar within a very short time-delay interval relative to a radar pulse width. A multipeak matched-filtering output is then produced rather than only the expected true target. An electronic protection (EP) algorithm based on the space time block code (STBC) is proposed to suppress the adverse effects of this jammer. By transmitting a pulse sequence generated from the STBC in succession, followed by a cancellation process applied to the received signal, this algorithm performs successfully in a single-antenna system, provided that the target models are nonfluctuating or slowly fluctuating and the pulse repetition frequency (PRF) is comparatively high. The performance in white and correlated Gaussian disturbance is evaluated by means of Monte Carlo simulations.

  20. Blood component therapy in anesthesia and intensive care: Adoption of evidence based approaches

    Directory of Open Access Journals (Sweden)

    Sukhminder Jit Singh Bajwa

    2014-01-01

    Full Text Available Transfusion of blood and its components has undergone technological advancement, and its use is increasing both perioperatively and in the Intensive Care Unit. The separation of blood into its various components has made it very economical, as blood donated from a single donor can be utilized for many recipients at the same time. However, the transfusion of blood and its components does carry the inherent risk of various transfusion reactions as well as transmission of infections. The indications for transfusion should be strictly adhered to in order to prevent nonjudicious use. Health care personnel involved in transfusion should be well aware of the implications of a mismatched transfusion and should be able to provide treatment if such mishaps do occur. A health care professional should carefully weigh the benefits of blood transfusion against the risks involved before subjecting patients to transfusion. This manuscript aims to comprehensively review the current evidence-based approaches to blood and component transfusion that are being followed in anesthesiology and intensive care practice.

  1. Greenhouse Gas Emission Intensities for the Livestock Sector in Indonesia, Based on the National Specific Data

    Directory of Open Access Journals (Sweden)

    Eska Nugrahaeningtyas

    2018-06-01

    Full Text Available The aims of this study were to calculate greenhouse gas (GHG) emissions and to identify the trends of GHG emission intensity, based on meat production from the livestock sector in Indonesia, which had not been done before. The total emissions from the livestock sector from 2000 to 2015 in Indonesia were calculated using the 2006 Intergovernmental Panel on Climate Change Guideline (2006 IPCC GL) with Tier 1 and Tier 2 methods, using its default values and country-specific data found in the grey literature. During 2000 to 2015, the change from the Tier 1 to Tier 2 method resulted in an approximately 7.39% emission decrease from enteric fermentation and a 4.24% increase from manure management, which resulted in a 4.98% decrease in the total emissions. The share of emissions from manure management increased by about 9% and 6% using Tier 1 and Tier 2, respectively. In contrast to the total emissions, the overall emission intensity in Indonesia decreased (up to 60.77% for swine), showing that livestock productivity in Indonesia has become more efficient. In order to meet the meat demand with less GHG emissions, chicken farming is one option to be developed. The increased emissions and share from manure management indicate that manure management systems need particular attention, especially for beef cattle and swine.
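    Emission intensity here is total GHG emissions divided by meat production, which is why intensity can fall even while total emissions rise. A minimal sketch with hypothetical figures (not the study's data):

```python
def emission_intensity(total_emissions_kg_co2e, meat_production_kg):
    """GHG emission intensity: kg CO2-eq emitted per kg of meat produced."""
    return total_emissions_kg_co2e / meat_production_kg

# Hypothetical figures for one livestock category in two years:
ei_2000 = emission_intensity(4.0e9, 1.0e8)             # 40 kg CO2-eq per kg meat
ei_2015 = emission_intensity(5.0e9, 2.5e8)             # 20 kg CO2-eq per kg meat
decrease_pct = 100.0 * (ei_2000 - ei_2015) / ei_2000   # intensity halves despite higher total emissions
```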

  2. Solidification of Al-Sn-Cu Based Immiscible Alloys under Intense Shearing

    Science.gov (United States)

    Kotadia, H. R.; Doernberg, E.; Patel, J. B.; Fan, Z.; Schmid-Fetzer, R.

    2009-09-01

    The growing importance of Al-Sn based alloys as materials for engineering applications necessitates the development of uniform microstructures with improved performance. Guided by the recently thermodynamically assessed Al-Sn-Cu system, two model immiscible alloys, Al-45Sn-10Cu and Al-20Sn-10Cu, were selected to investigate the effects of intensive melt shearing provided by the novel melt conditioning by advanced shear technology (MCAST) unit on the uniform dispersion of the soft Sn phase in a hard Al matrix. Our experimental results have confirmed that intensive melt shearing is an effective way to achieve fine and uniform dispersion of the soft phase without macro-demixing, and that such dispersed microstructure can be further refined in alloys with precipitation of the primary Al phase prior to the demixing reaction. In addition, it was found that melt shearing at 200 rpm and 60 seconds will be adequate to produce fine and uniform dispersion of the Sn phase, and that higher shearing speed and prolonged shearing time can only achieve minor further refinement.

  3. Reconstruction of structural damage based on reflection intensity spectra of fiber Bragg gratings

    International Nuclear Information System (INIS)

    Huang, Guojun; Wei, Changben; Chen, Shiyuan; Yang, Guowei

    2014-01-01

    We present an approach for structural damage reconstruction based on the reflection intensity spectra of fiber Bragg gratings (FBGs). Our approach incorporates the finite element method, transfer matrix (T-matrix), and genetic algorithm to solve the inverse photo-elastic problem of damage reconstruction, i.e. to identify the location, size, and shape of a defect. By introducing a parameterized characterization of the damage information, the inverse photo-elastic problem is reduced to an optimization problem, and a relevant computational scheme was developed. The scheme iteratively searches for the solution to the corresponding direct photo-elastic problem until the simulated and measured (or target) reflection intensity spectra of the FBGs near the defect coincide within a prescribed error. Proof-of-concept validations of our approach were performed numerically and experimentally using both holed and cracked plate samples as typical cases of plane-stress problems. The damage identifiability was simulated by changing the deployment of the FBG sensors, including the total number of sensors and their distance to the defect. Both the numerical and experimental results demonstrate that our approach is effective and promising. It provides us with a photo-elastic method for developing a remote, automatic damage-imaging technique that substantially improves damage identification for structural health monitoring. (paper)

  4. Single-intensity-recording optical encryption technique based on phase retrieval algorithm and QR code

    Science.gov (United States)

    Wang, Zhi-peng; Zhang, Shuai; Liu, Hong-zhao; Qin, Yi

    2014-12-01

    Based on a phase retrieval algorithm and the QR code, a new optical encryption technology that only needs to record one intensity distribution is proposed. In this encryption process, first, the QR code is generated from the information to be encrypted; then the generated QR code is placed in the input plane of a 4-f system to undergo double random phase encryption. Because only one intensity distribution in the output plane is recorded as the ciphertext, the encryption process is greatly simplified. In the decryption process, the corresponding QR code is retrieved using a phase retrieval algorithm. A priori information about the QR code is used as a support constraint in the input plane, which helps solve the stagnation problem. The original information can be recovered without distortion by scanning the QR code. The encryption process can be implemented either optically or digitally, and the decryption process uses a digital method. In addition, the security of the proposed optical encryption technology is analyzed. Theoretical analysis and computer simulations show that this optical encryption system is invulnerable to various attacks and suitable for harsh transmission conditions.

  5. Reoptimization of Intensity Modulated Proton Therapy Plans Based on Linear Energy Transfer

    Energy Technology Data Exchange (ETDEWEB)

    Unkelbach, Jan, E-mail: junkelbach@mgh.harvard.edu [Department of Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts (United States); Botas, Pablo [Department of Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts (United States); Faculty of Physics, Ruprecht-Karls-Universität Heidelberg, Heidelberg (Germany); Giantsoudi, Drosoula; Gorissen, Bram L.; Paganetti, Harald [Department of Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts (United States)

    2016-12-01

    Purpose: We describe a treatment plan optimization method for intensity modulated proton therapy (IMPT) that avoids high values of linear energy transfer (LET) in critical structures located within or near the target volume while limiting degradation of the best possible physical dose distribution. Methods and Materials: To allow fast optimization based on dose and LET, a GPU-based Monte Carlo code was extended to provide dose-averaged LET in addition to dose for all pencil beams. After optimizing an initial IMPT plan based on physical dose, a prioritized optimization scheme is used to modify the LET distribution while constraining the physical dose objectives to values close to the initial plan. The LET optimization step is performed based on objective functions evaluated for the product of LET and physical dose (LET×D). To first approximation, LET×D represents a measure of the additional biological dose that is caused by high LET. Results: The method is effective for treatments where serial critical structures with maximum dose constraints are located within or near the target. We report on 5 patients with intracranial tumors (high-grade meningiomas, base-of-skull chordomas, ependymomas) in whom the target volume overlaps with the brainstem and optic structures. In all cases, high LET×D in critical structures could be avoided while minimally compromising physical dose planning objectives. Conclusion: LET-based reoptimization of IMPT plans represents a pragmatic approach to bridge the gap between purely physical dose-based and relative biological effectiveness (RBE)-based planning. The method makes IMPT treatments safer by mitigating a potentially increased risk of side effects resulting from elevated RBE of proton beams near the end of range.

  6. Micromachined diffraction based optical microphones and intensity probes with electrostatic force feedback

    Science.gov (United States)

    Bicen, Baris

    Measuring acoustic pressure gradients is critical in many applications such as directional microphones for hearing aids and sound intensity probes. This measurement is especially challenging with decreasing microphone size, which reduces the sensitivity due to small spacing between the pressure ports. Novel, micromachined biomimetic microphone diaphragms are shown to provide high sensitivity to pressure gradients on one side of the diaphragm with low thermal mechanical noise. These structures have a dominant mode shape with see-saw like motion in the audio band, responding to pressure gradients as well as spurious higher order modes sensitive to pressure. In this dissertation, integration of a diffraction based optical detection method with these novel diaphragm structures to implement a low noise optical pressure gradient microphone is described and experimental characterization results are presented, showing 36 dBA noise level with 1mm port spacing, nearly an order of magnitude better than the current gradient microphones. The optical detection scheme also provides electrostatic actuation capability from both sides of the diaphragm separately which can be used for active force feedback. A 4-port electromechanical equivalent circuit model of this microphone with optical readout is developed to predict the overall response of the device to different acoustic and electrostatic excitations. The model includes the damping due to complex motion of air around the microphone diaphragm, and it calculates the detected optical signal on each side of the diaphragm as a combination of two separate dominant vibration modes. This equivalent circuit model is verified by experiments and used to predict the microphone response with different force feedback schemes. Single sided force feedback is used for active damping to improve the linearity and the frequency response of the microphone. 
Furthermore, it is shown that using two-sided force feedback one can significantly suppress

  7. Mediterranean intense desert dust outbreaks and their vertical structure based on remote sensing data

    Directory of Open Access Journals (Sweden)

    A. Gkikas

    2016-07-01

    Full Text Available The main aim of the present study is to describe the vertical structure of the intense Mediterranean dust outbreaks, based on the use of satellite and surface-based retrievals/measurements. Strong and extreme desert dust (DD) episodes are identified at 1° × 1° spatial resolution, over the period March 2000–February 2013, through the implementation of an updated objective and dynamic algorithm. According to the algorithm, strong DD episodes occurring at a specific place correspond to cases in which the daily aerosol optical depth at 550 nm (AOD550nm) exceeds or equals the long-term mean AOD550nm (Mean) plus two standard deviations (SD), while also remaining smaller than Mean + 4 × SD. Extreme DD episodes correspond to cases in which the daily AOD550nm value equals or exceeds Mean + 4 × SD. For the identification of DD episodes, additional optical properties (Ångström exponent, fine fraction, effective radius and aerosol index) derived by the MODIS-Terra & Aqua (also AOD retrievals), OMI-Aura and EP-TOMS databases are used as inputs. According to the algorithm using MODIS-Terra data, over the period March 2000–February 2013, strong DD episodes occur more frequently (up to 9.9 episodes year⁻¹) over the western Mediterranean, while the corresponding frequencies for the extreme ones are smaller (up to 3.3 episodes year⁻¹, central Mediterranean Sea). In contrast to their frequency, dust episodes are more intense (AODs up to 4.1) over the central and eastern Mediterranean Sea, off the northern African coasts. Slightly lower frequencies and higher intensities are found when the satellite algorithm operates based on MODIS-Aqua retrievals, for the period 2003–2012. The consistency of the algorithm is successfully tested through the application of an alternative methodology for the determination of DD episodes, which produced similar features of the episodes' frequency and intensity, with just slightly higher
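    The episode thresholds described above reduce to two simple cutoffs per grid cell. A minimal sketch (the climatological mean and SD values are hypothetical):

```python
def classify_dd_episode(aod, mean, sd):
    """Classify a daily AOD550 value following the algorithm's thresholds:
    'strong'  if Mean + 2*SD <= AOD < Mean + 4*SD,
    'extreme' if AOD >= Mean + 4*SD,
    None otherwise."""
    if aod >= mean + 4.0 * sd:
        return "extreme"
    if aod >= mean + 2.0 * sd:
        return "strong"
    return None

# Hypothetical long-term climatology for one 1° x 1° grid cell:
mean_aod, sd_aod = 0.25, 0.10   # strong threshold at 0.45, extreme threshold at 0.65
labels = [classify_dd_episode(a, mean_aod, sd_aod) for a in (0.30, 0.50, 0.70)]
```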

  8. Intensity-based fibre-optic sensing system using contrast modulation of subcarrier interference pattern

    Science.gov (United States)

    Adamovsky, G.; Sherer, T. N.; Maitland, D. J.

    1989-01-01

    A novel technique to compensate for unwanted intensity losses in a fiber-optic sensing system is described. The technique involves a continuous sinusoidal modulation of the light source intensity at radio frequencies and an intensity sensor placed in an unbalanced interferometer. The system shows high sensitivity and stability.

  9. Quantitative phase microscopy for cellular dynamics based on transport of intensity equation.

    Science.gov (United States)

    Li, Ying; Di, Jianglei; Ma, Chaojie; Zhang, Jiwei; Zhong, Jinzhan; Wang, Kaiqiang; Xi, Teli; Zhao, Jianlin

    2018-01-08

    We demonstrate a simple method for quantitative phase imaging of tiny transparent objects such as living cells based on the transport of intensity equation. The experiments are performed using an inverted bright field microscope upgraded with a flipping imaging module, which makes it possible to simultaneously create two laterally separated images with unequal defocus distances. This add-on module does not include any lenses or gratings and is cost-effective and easy to align. The validity of this method is confirmed by the measurement of a microlens array and human osteoblastic cells in culture, indicating its potential in the applications of dynamically measuring living cells and other transparent specimens in a quantitative, non-invasive and label-free manner.
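    Under a near-uniform in-focus intensity I0, the transport of intensity equation reduces to a Poisson equation, ∇²φ = −(k/I0) ∂I/∂z, which is commonly solved with FFTs. The sketch below illustrates that textbook scheme with a symmetric under/over-focus pair; it is not the authors' exact pipeline, and all parameters are hypothetical:

```python
import numpy as np

def tie_phase(i_under, i_over, dz, wavelength, pixel, i0):
    """Recover phase from two defocused intensity images via the TIE, assuming
    near-uniform in-focus intensity i0:
        laplacian(phi) = -(k / i0) * dI/dz,   dI/dz ~ (i_over - i_under) / (2*dz),
    solved with an FFT-based inverse Laplacian (zero-mean phase)."""
    k = 2.0 * np.pi / wavelength
    didz = (i_over - i_under) / (2.0 * dz)
    rhs = -(k / i0) * didz
    ny, nx = rhs.shape
    kx, ky = np.meshgrid(2.0 * np.pi * np.fft.fftfreq(nx, d=pixel),
                         2.0 * np.pi * np.fft.fftfreq(ny, d=pixel))
    lap = -(kx ** 2 + ky ** 2)
    lap[0, 0] = 1.0                     # avoid division by zero at DC
    phi_hat = np.fft.fft2(rhs) / lap
    phi_hat[0, 0] = 0.0                 # fix the undetermined constant offset
    return np.real(np.fft.ifft2(phi_hat))
```

In practice the finite-difference estimate of ∂I/∂z trades noise sensitivity (small dz) against nonlinearity error (large dz), which is why defocus-distance choice matters.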

  10. Equal intensity double plasmon resonance of bimetallic quasi-nanocomposites based on sandwich geometry

    Energy Technology Data Exchange (ETDEWEB)

    Chakravadhanula, V S K; Elbahri, M; Schuermann, U; Takele, H; Greve, H; Zaporojtchenko, V; Faupel, F [Chair for Multicomponent Materials, Technical Faculty of the CAU Kiel, Kaiserstrasse 2, D-24143 Kiel (Germany)], E-mail: ff@tf.uni-kiel.de

    2008-06-04

    We report a strategy to achieve a material showing equal intensity double plasmon resonance (EIDPR) based on sandwich geometry. We studied the interaction between localized plasmon resonances associated with different metal clusters (Au/Ag) on Teflon AF (TAF) in sandwich geometry. Engineering the EIDPR was done by tailoring the amount of Au/Ag and changing the TAF thickness. The samples were investigated by transmission electron microscopy (TEM) and UV-visible spectroscopy. Interestingly, and in agreement with the dipole-surface interaction, the critical barrier thickness for an optimum EIDPR was observed at 3.3 nm. The results clearly show a plasmon sequence effect and visualize the role of plasmon decay.

  11. Hybrid iterative phase retrieval algorithm based on fusion of intensity information in three defocused planes.

    Science.gov (United States)

    Zeng, Fa; Tan, Qiaofeng; Yan, Yingbai; Jin, Guofan

    2007-10-01

    Phase retrieval is of broad interest because of its wide applications in many domains, such as adaptive optics, laser beam quality assessment, and precise measurement of optical surfaces. Here a hybrid iterative phase retrieval algorithm is proposed, based on fusion of the intensity information in three defocused planes. First, the conjugate gradient algorithm is applied to obtain a coarse solution of the phase distribution in the input plane; then the iterative angular spectrum method is applied in succession for a better retrieval result. This algorithm is still applicable even when the exact shape and size of the aperture in the input plane are unknown. Moreover, this algorithm always exhibits good convergence, i.e., the retrieved results are insensitive to the chosen positions of the three defocused planes and to the initial guess of the complex amplitude in the input plane, which has been proved by both simulations and further experiments.
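    The iterative angular spectrum stage cycles between planes: propagate the current field estimate to a defocused plane, replace its amplitude with the measured one, and propagate onward. A sketch of those two core operations (illustrative parameters, not the paper's setup):

```python
import numpy as np

def angular_spectrum(field, dz, wavelength, pixel):
    """Propagate a complex field over distance dz with the angular spectrum method;
    evanescent components are suppressed."""
    ny, nx = field.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, d=pixel), np.fft.fftfreq(ny, d=pixel))
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2.0 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    h = np.where(arg > 0, np.exp(1j * kz * dz), 0.0)   # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * h)

def enforce_intensity(field, measured_intensity):
    """Projection step at a measured plane: keep the current phase estimate,
    replace the amplitude with the measured one."""
    return np.sqrt(measured_intensity) * np.exp(1j * np.angle(field))
```

One iteration visits all three defocused planes in turn; repeating until the simulated and measured intensities agree yields the retrieved phase.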

  12. A research plan based on high intensity proton accelerator Neutron Science Research Center

    International Nuclear Information System (INIS)

    Mizumoto, Motoharu

    1997-01-01

    A plan called the Neutron Science Research Center (NSRC) has been proposed in JAERI. The center is a complex composed of research facilities based on a proton linac with an energy of 1.5 GeV and an average current of 10 mA. The research facilities will consist of a Thermal/Cold Neutron Facility, Neutron Irradiation Facility, Neutron Physics Facility, OMEGA/Nuclear Energy Facility, Spallation RI Beam Facility, Meson/Muon Facility and Medium Energy Experiment Facility, where the high intensity proton beam and secondary particle beams such as neutron, pion, muon and unstable radio isotope (RI) beams generated from the proton beam will be utilized for innovative research in the fields of nuclear engineering and basic sciences. (author)

  13. A research plan based on high intensity proton accelerator Neutron Science Research Center

    Energy Technology Data Exchange (ETDEWEB)

    Mizumoto, Motoharu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-03-01

    A plan called the Neutron Science Research Center (NSRC) has been proposed in JAERI. The center is a complex composed of research facilities based on a proton linac with an energy of 1.5 GeV and an average current of 10 mA. The research facilities will consist of a Thermal/Cold Neutron Facility, Neutron Irradiation Facility, Neutron Physics Facility, OMEGA/Nuclear Energy Facility, Spallation RI Beam Facility, Meson/Muon Facility and Medium Energy Experiment Facility, where the high intensity proton beam and secondary particle beams such as neutron, pion, muon and unstable radio isotope (RI) beams generated from the proton beam will be utilized for innovative research in the fields of nuclear engineering and basic sciences. (author)

  14. Intensity Variation Normalization for Finger Vein Recognition Using Guided Filter Based Singe Scale Retinex.

    Science.gov (United States)

    Xie, Shan Juan; Lu, Yu; Yoon, Sook; Yang, Jucheng; Park, Dong Sun

    2015-07-14

    Finger vein recognition has been considered one of the most promising biometrics for personal authentication. However, the capacities and percentages of finger tissues (e.g., bone, muscle, ligament, water, fat, etc.) vary person by person. This usually causes poor quality of finger vein images, therefore degrading the performance of finger vein recognition systems (FVRSs). In this paper, the intrinsic factors of finger tissue causing poor quality of finger vein images are analyzed, and an intensity variation (IV) normalization method using guided filter based single scale retinex (GFSSR) is proposed for finger vein image enhancement. The experimental results on two public datasets demonstrate the effectiveness of the proposed method in enhancing the image quality and finger vein recognition accuracy.
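    Single scale retinex removes slowly varying illumination by subtracting the log of a smoothed illumination estimate from the log image. The paper's GFSSR uses a guided filter for that smoothing; the sketch below substitutes a separable Gaussian as a simpler stand-in, so it illustrates the retinex step rather than the authors' exact filter:

```python
import numpy as np

def single_scale_retinex(image, sigma=2.0):
    """Simplified single scale retinex: log(image) minus log of a smoothed
    illumination estimate (Gaussian smoothing stands in for the guided filter)."""
    radius = int(3 * sigma)
    xs = np.arange(-radius, radius + 1)
    kernel = np.exp(-xs ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()                    # normalized 1-D Gaussian
    img = image.astype(float) + 1.0           # offset to avoid log(0)
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    illumination = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, rows)
    return np.log(img) - np.log(illumination)
```

A guided filter is preferred in GFSSR because, unlike a Gaussian, it smooths illumination while preserving the vein edges that the recognizer depends on.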

  15. Intensity Variation Normalization for Finger Vein Recognition Using Guided Filter Based Singe Scale Retinex

    Directory of Open Access Journals (Sweden)

    Shan Juan Xie

    2015-07-01

    Full Text Available Finger vein recognition has been considered one of the most promising biometrics for personal authentication. However, the capacities and percentages of finger tissues (e.g., bone, muscle, ligament, water, fat, etc.) vary person by person. This usually causes poor quality of finger vein images, therefore degrading the performance of finger vein recognition systems (FVRSs). In this paper, the intrinsic factors of finger tissue causing poor quality of finger vein images are analyzed, and an intensity variation (IV) normalization method using guided filter based single scale retinex (GFSSR) is proposed for finger vein image enhancement. The experimental results on two public datasets demonstrate the effectiveness of the proposed method in enhancing the image quality and finger vein recognition accuracy.

  16. Numerical tilting compensation in microscopy based on wavefront sensing using transport of intensity equation method

    Science.gov (United States)

    Hu, Junbao; Meng, Xin; Wei, Qi; Kong, Yan; Jiang, Zhilong; Xue, Liang; Liu, Fei; Liu, Cheng; Wang, Shouyu

    2018-03-01

    Wide-field microscopy is commonly used for sample observation in biological research and medical diagnosis. However, the tilting error induced by the oblique placement of the image recorder or the sample, as well as the inclination of the optical path, often deteriorates the imaging quality. In order to eliminate tilting in microscopy, a numerical tilting compensation technique based on wavefront sensing using the transport of intensity equation method is proposed in this paper. Both the provided numerical simulations and practical experiments prove that the proposed technique not only accurately determines the tilting angle with a simple setup and procedure, but also compensates for the tilting error to improve imaging quality even in cases of large tilt. Considering its simple system and operation, as well as its image quality improvement capability, it is believed the proposed method can be applied for tilting compensation in optical microscopy.
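
    The tilt-estimation step can be illustrated independently of the TIE solver: once a phase map has been recovered, a tilt shows up as a linear phase ramp, so estimating the tilt angle reduces to a least-squares plane fit. A hedged Python sketch; the wavelength and pixel pitch are arbitrary example values, not taken from the paper.

```python
import numpy as np

def estimate_tilt(phase, wavelength=632.8e-9, pixel=5e-6):
    """Fit phase ~ a*x + b*y + c by least squares and convert the ramp
    slopes (rad/pixel) to tilt angles about the two axes."""
    ny, nx = phase.shape
    y, x = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(phase.size)])
    a, b, _ = np.linalg.lstsq(A, phase.ravel(), rcond=None)[0]
    # phase slope -> optical path slope -> geometric tilt angle
    tilt_x = np.arctan(a * wavelength / (2 * np.pi * pixel))
    tilt_y = np.arctan(b * wavelength / (2 * np.pi * pixel))
    return tilt_x, tilt_y
```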

  17. Extreme flood event analysis in Indonesia based on rainfall intensity and recharge capacity

    Science.gov (United States)

    Narulita, Ida; Ningrum, Widya

    2018-02-01

    Indonesia is very vulnerable to flood disaster because it has high rainfall events throughout the year. Flood is categorized as the most important hazard because it causes social, economic and human losses. The purpose of this study is to analyze extreme flood events based on satellite rainfall datasets to understand the rainfall characteristics (rainfall intensity, rainfall pattern, etc.) preceding flood disasters in areas of monsoonal, equatorial and local rainfall types. Recharge capacity is analyzed using land cover and soil distribution. The data used in this study are the CHIRPS satellite rainfall dataset at 0.05° spatial and daily temporal resolution, the GSMaP satellite rainfall dataset operated by JAXA at 1-hour temporal and 0.1° spatial resolution, and land use and soil distribution maps for recharge capacity analysis. The rainfall characteristics before flooding and the recharge capacity analysis are expected to provide important information for flood mitigation in Indonesia.

  18. Simple method based on intensity measurements for characterization of aberrations from micro-optical components.

    Science.gov (United States)

    Perrin, Stephane; Baranski, Maciej; Froehly, Luc; Albero, Jorge; Passilly, Nicolas; Gorecki, Christophe

    2015-11-01

    We report a simple method, based on intensity measurements, for the characterization of the wavefront and aberrations produced by micro-optical focusing elements. This method employs the setup presented earlier in [Opt. Express 22, 13202 (2014)] for measurements of the 3D point spread function, on which a basic phase-retrieval algorithm is applied. This combination allows for retrieval of the wavefront generated by the micro-optical element and, in addition, quantification of the optical aberrations through the wavefront decomposition with Zernike polynomials. The optical setup requires only an in-motion imaging system. The technique, adapted for the optimization of micro-optical component fabrication, is demonstrated by characterizing a planoconvex microlens.
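
    The wavefront-decomposition step can be sketched as a least-squares fit of low-order Zernike terms to the retrieved wavefront. This is a generic illustration, not the authors' code; the basis is limited to piston, tilts, defocus and primary astigmatism for brevity.

```python
import numpy as np

def zernike_fit(wavefront):
    """Least-squares coefficients of a few low-order Zernike polynomials
    (piston, x/y tilt, defocus, two astigmatisms) on the unit disk."""
    n = wavefront.shape[0]
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r2 = x**2 + y**2
    mask = r2 <= 1.0
    basis = [np.ones_like(x),             # piston
             2 * x, 2 * y,                # tilts
             np.sqrt(3) * (2 * r2 - 1),   # defocus
             np.sqrt(6) * (x**2 - y**2),  # astigmatism 0 deg
             np.sqrt(6) * (2 * x * y)]    # astigmatism 45 deg
    A = np.column_stack([b[mask] for b in basis])
    coeffs, *_ = np.linalg.lstsq(A, wavefront[mask], rcond=None)
    return coeffs
```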

  19. A High-precision Motion Compensation Method for SAR Based on Image Intensity Optimization

    Directory of Open Access Journals (Sweden)

    Hu Ke-bin

    2015-02-01

    Full Text Available Owing to the platform instability and precision limitations of motion sensors, motion errors negatively affect the quality of synthetic aperture radar (SAR images. The autofocus Back Projection (BP algorithm based on the optimization of image sharpness compensates for motion errors through phase error estimation. This method can attain relatively good performance, while assuming the same phase error for all pixels, i.e., it ignores the spatial variance of motion errors. To overcome this drawback, a high-precision motion error compensation method is presented in this study. In the proposed method, the Antenna Phase Centers (APC are estimated via optimization using the criterion of maximum image intensity. Then, the estimated APCs are applied for BP imaging. Because the APC estimation equals the range history estimation for each pixel, high-precision phase compensation for every pixel can be achieved. Point-target simulations and processing of experimental data validate the effectiveness of the proposed method.

  20. Colonic polyp detection method from 3D abdominal CT images based on local intensity analysis

    International Nuclear Information System (INIS)

    Oda, M.; Nakada, Y.; Kitasaka, T.; Mori, K.; Suenaga, Y.; Takayama, T.; Takabatake, H.; Mori, M.; Natori, H.; Nawano, S.

    2007-01-01

    This paper presents a detection method of colonic polyps from 3D abdominal CT images based on local intensity analysis. Recently, virtual colonoscopy (VC) has widely received attention as a new colon diagnostic method. VC is considered as a less-invasive inspection method which reduces patient load. However, since the colon has many haustra and its shape is long and convoluted, a physician has to change the viewpoint and the viewing direction of the virtual camera of VC many times while diagnosis. Additionally, there is a risk to overlook lesions existing in blinded areas caused by haustra. This paper proposes an automated colonic polyp detection method from 3D abdominal CT images. Colonic polyps are located on the colonic wall. Their CT values are higher than those of colonic lumen regions and lower than those of fecal materials tagged by an X-ray opaque contrast agent. CT values inside polyps which exist outside the tagged fecal materials tend to gradually increase from outward to inward (blob-like structure). CT values inside polyps that exist inside the tagged fecal materials tend to gradually decrease from outward to inward (inv-blob-like structure). We employ the blob and the inv-blob structure enhancement filters based on the eigenvalues of the Hessian matrix to detect polyps using intensity characteristic of polyps. Connected components with low output values of the enhancement filter are eliminated in false positive reduction process. Small connected components are also eliminated. We applied the proposed method to 44 cases of abdominal CT images. Sensitivity for polyps of 6 mm or larger was 80% with 4.7 false positives per case. (orig.)
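
    The blob-enhancement idea can be sketched in 2D: build the Gaussian-smoothed Hessian, take its eigenvalues, and respond where both are strongly negative (a bright blob). This is a simplified stand-in for the 3D filters used in the paper; the scale parameter is illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blob_filter(img, sigma=2.0):
    """Bright-blob enhancement from Hessian eigenvalues (2D sketch):
    high response where both eigenvalues are negative."""
    img = img.astype(float)
    Hxx = gaussian_filter(img, sigma, order=(0, 2))   # d2/dx2
    Hyy = gaussian_filter(img, sigma, order=(2, 0))   # d2/dy2
    Hxy = gaussian_filter(img, sigma, order=(1, 1))
    # closed-form eigenvalues of [[Hxx, Hxy], [Hxy, Hyy]]
    tmp = np.sqrt(((Hxx - Hyy) / 2) ** 2 + Hxy ** 2)
    l1 = (Hxx + Hyy) / 2 + tmp
    l2 = (Hxx + Hyy) / 2 - tmp
    prod = l1 * l2
    return np.where((l1 < 0) & (l2 < 0), np.sqrt(np.maximum(prod, 0.0)), 0.0)
```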

  1. Drawing for Traffic Marking Using Bidirectional Gradient-Based Detection with MMS LIDAR Intensity

    Science.gov (United States)

    Takahashi, G.; Takeda, H.; Nakamura, K.

    2016-06-01

    Recently, the development of autonomous cars has been accelerating through the integration of highly advanced artificial intelligence, which increases demand for digital maps with high accuracy. In particular, traffic markings must be precisely digitized, since automatic driving relies on them for position detection. To draw traffic markings, we benefit from Mobile Mapping Systems (MMS) equipped with high-density Laser imaging Detection and Ranging (LiDAR) scanners, which efficiently produce large amounts of data with XYZ coordinates along with reflectance intensity. Digitizing these data, on the other hand, has conventionally depended on human operation, which thus suffers from human error, subjectivity, and low reproducibility. We have tackled this problem by means of automatic extraction of traffic markings, which partially succeeded in drawing several traffic markings (G. Takahashi et al., 2014). The key idea of that method was extracting lines using the Hough transform, strategically focused on changes in local reflection intensity along scan lines. However, it failed to extract traffic markings properly in densely marked areas, especially when local changing points are close to each other. In this paper, we propose a bidirectional gradient-based detection method in which local changing points are labelled as belonging to a plus or minus group. Given that each label corresponds to a boundary between traffic markings and background, we can identify traffic markings explicitly, meaning traffic lines are differentiated correctly by the proposed method. This automated, highly accurate and operator-independent method based on a bidirectional gradient algorithm can successfully extract traffic lines composed of complex shapes such as crosswalks, minimizing cost while producing highly accurate results.
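
    The labelling idea can be sketched for a single scan line: rising intensity transitions get a plus label, falling ones a minus label, and a plus followed by the next minus brackets a candidate marking. A toy Python version; the threshold value is an assumption.

```python
import numpy as np

def mark_segments(intensity, thresh=30.0):
    """Label rising (+) and falling (-) intensity transitions along one
    scan line and pair them into candidate marking intervals."""
    d = np.diff(intensity.astype(float))
    plus = np.flatnonzero(d > thresh)        # background -> marking
    minus = np.flatnonzero(d < -thresh)      # marking -> background
    segments = []
    for p in plus:
        later = minus[minus > p]
        if later.size:
            # inclusive index range of the high-intensity run
            segments.append((int(p) + 1, int(later[0])))
    return segments
```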

  2. [Low-intensity, evidence-based cognitive-behavioural therapy of a patient with Crohn's disease].

    Science.gov (United States)

    Antal-Uram, Dóra; Harsányi, László; Perczel-Forintos, Dóra

    2018-03-01

    Inflammatory bowel disease (Crohn's disease and ulcerative colitis) is a chronic, long-term condition that causes chronic inflammation in the digestive tract, and shows increasing incidence and prevalence worldwide. Changes in disease activity over time affect psychological distress, which increases the risk of exacerbations. Beside somatic symptoms (such as abdominal pain, diarrhoea and weight loss), psychiatric comorbidity (in particular major depression, anxiety, social phobia) is common in patients with Crohn's disease. This case study illustrates the management and stabilization of a 21-year-old adult male patient with active Crohn's disease and severe psychiatric comorbidity. The patient was diagnosed with avoidant personality disorder and disruptive mood dysregulation disorder based on the results of psychodiagnostics (SCID-II structured clinical interview, MMPI personality inventory and disease-specific clinical questionnaires such as the Beck Depression Inventory, Beck Hopelessness Scale, Social Cognition Questionnaire, Anger Expression Scale, Cognitive Emotion Regulation Questionnaire and Rosenberg Self-Esteem Scale). The main aims of psychotherapy were to increase adherence to pharmacotherapy, promote psychosocial functioning, improve well-being and enhance adaptive coping strategies. Low-intensity cognitive-behavioural psychotherapy was used, which included psychoeducation, motivational interviewing, behavioural activation, a patient diary, cognitive restructuring, problem-solving training, and family consulting. Twenty-five sessions were held weekly in outpatient form, with 3 sessions of crisis intervention after surgery at the hospital. The efficacy of the treatment was measured by self-reported questionnaires at baseline and at two follow-up sessions, which corroborated a very significant decrease in the severity of depression and hopelessness, while emotional regulation and self-esteem became more adaptive. The remission of the above

  3. DRAWING FOR TRAFFIC MARKING USING BIDIRECTIONAL GRADIENT-BASED DETECTION WITH MMS LIDAR INTENSITY

    Directory of Open Access Journals (Sweden)

    G. Takahashi

    2016-06-01

    Full Text Available Recently, the development of autonomous cars has been accelerating through the integration of highly advanced artificial intelligence, which increases demand for digital maps with high accuracy. In particular, traffic markings must be precisely digitized, since automatic driving relies on them for position detection. To draw traffic markings, we benefit from Mobile Mapping Systems (MMS) equipped with high-density Laser imaging Detection and Ranging (LiDAR) scanners, which efficiently produce large amounts of data with XYZ coordinates along with reflectance intensity. Digitizing these data, on the other hand, has conventionally depended on human operation, which thus suffers from human error, subjectivity, and low reproducibility. We have tackled this problem by means of automatic extraction of traffic markings, which partially succeeded in drawing several traffic markings (G. Takahashi et al., 2014). The key idea of that method was extracting lines using the Hough transform, strategically focused on changes in local reflection intensity along scan lines. However, it failed to extract traffic markings properly in densely marked areas, especially when local changing points are close to each other. In this paper, we propose a bidirectional gradient-based detection method in which local changing points are labelled as belonging to a plus or minus group. Given that each label corresponds to a boundary between traffic markings and background, we can identify traffic markings explicitly, meaning traffic lines are differentiated correctly by the proposed method. This automated, highly accurate and operator-independent method based on a bidirectional gradient algorithm can successfully extract traffic lines composed of complex shapes such as crosswalks, minimizing cost while producing highly accurate results.

  4. Intense charge transfer surface based on graphene and thymine-Hg(II)-thymine base pairs for detection of Hg(2+).

    Science.gov (United States)

    Li, Jiao; Lu, Liping; Kang, Tianfang; Cheng, Shuiyuan

    2016-03-15

    In this article, we developed an electrochemiluminescence (ECL) sensor with a high-intensity charge transfer interface for Hg(2+) detection based on Hg(II)-induced DNA hybridization. The sensor was fabricated by the following simple method. First, graphene oxide (GO) was electrochemically reduced onto a glassy carbon electrode through cyclic voltammetry. Then, amino-labeled double-stranded (ds)DNA was assembled on the electrode surface using 1-pyrenebutyric acid N-hydroxysuccinimide as a linker between GO and DNA. The other terminal of the dsDNA, which was labeled with biotin, was linked to CdSe quantum dots via biotin-avidin interactions. Reduced graphene oxide has excellent electrical conductivity, and dsDNA with T-Hg(II)-T base pairs exhibits more facile charge transfer; both accelerate electron transfer and improve the sensitivity of the sensor. The increased ECL signals were logarithmically linear with the concentration of Hg(II) when Hg(2+) was present in the detection solution. The linear range of the sensor was 10^-11 to 10^-8 mol/L (R=0.9819) with a detection limit of 10^-11 mol/L. This biosensor exhibited satisfactory results when used to detect Hg(II) in real water samples. A biosensor with such a high-intensity charge transfer interface is a promising avenue toward increasingly sensitive detection methods. Copyright © 2015 Elsevier B.V. All rights reserved.
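
    The reported log-linear response suggests a routine calibration workflow: fit the signal against log10(concentration), then invert the line to read off unknown concentrations. The numbers below are invented for illustration only; they are not the paper's data.

```python
import numpy as np

# illustrative calibration points: signal assumed linear in log10(concentration)
conc = np.array([1e-11, 1e-10, 1e-9, 1e-8])          # mol/L
signal = np.array([120.0, 245.0, 370.0, 500.0])      # ECL intensity, a.u. (invented)

slope, intercept = np.polyfit(np.log10(conc), signal, 1)

def concentration_from_signal(s):
    """Invert the fitted calibration line to estimate concentration (mol/L)."""
    return 10 ** ((s - intercept) / slope)
```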

  5. A Model-Based Approach for Joint Analysis of Pain Intensity and Opioid Consumption in Postoperative Pain

    DEFF Research Database (Denmark)

    Juul, Rasmus V; Knøsgaard, Katrine R; Olesen, Anne E

    2016-01-01

    Joint analysis of pain intensity and opioid consumption is encouraged in trials of postoperative pain. However, previous approaches have not appropriately addressed the complexity of their interrelation in time. In this study, we applied a non-linear mixed effects model to simultaneously study pain intensity and opioid consumption in a 4-h postoperative period for 44 patients undergoing percutaneous kidney stone surgery. Analysis was based on 748 Numerical Rating Scale (NRS) scores of pain intensity and 51 observed morphine and oxycodone dosing events. A joint model was developed to describe the recurrent pattern of four key phases determining the development of pain intensity and opioid consumption in time; (A) Distribution of pain intensity scores, which followed a truncated Poisson distribution with time-dependent mean score ranging from 0.93 to 2.45; (B) Probability of transition to threshold...
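
    The truncated Poisson component (A) can be written down directly: a Poisson distribution renormalized over the 0-10 NRS range. A small sketch, not the authors' implementation.

```python
import math

def truncated_poisson_pmf(k, lam, upper=10):
    """P(X = k) for a Poisson(lam) truncated to {0, ..., upper} (the NRS range)."""
    if not 0 <= k <= upper:
        return 0.0
    # unnormalized Poisson weights over the admissible support
    weights = [math.exp(-lam) * lam**i / math.factorial(i) for i in range(upper + 1)]
    return weights[k] / sum(weights)
```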

  6. Method for screening prevention and control measures and technologies based on groundwater pollution intensity assessment

    Energy Technology Data Exchange (ETDEWEB)

    Li, Juan, E-mail: lijuan@craes.org.cn [College of Water Sciences, Beijing Normal University, Beijing 100875 (China); Chinese Research Academy of Environmental Sciences, Beijing 100012 (China); State Environmental Protection Key Laboratory of Simulation and Control of Groundwater Pollution, Beijing, 100012 (China); Yang, Yang [College of Environment, Beijing Normal University, Beijing 100875 (China); Chinese Research Academy of Environmental Sciences, Beijing 100012 (China); State Environmental Protection Key Laboratory of Simulation and Control of Groundwater Pollution, Beijing, 100012 (China); Huan, Huan; Li, Mingxiao [Chinese Research Academy of Environmental Sciences, Beijing 100012 (China); State Environmental Protection Key Laboratory of Simulation and Control of Groundwater Pollution, Beijing, 100012 (China); Xi, Beidou, E-mail: xibd413@yeah.net [Chinese Research Academy of Environmental Sciences, Beijing 100012 (China); State Environmental Protection Key Laboratory of Simulation and Control of Groundwater Pollution, Beijing, 100012 (China); Lanzhou Jiaotong University, Lanzhou 730070 (China); Lv, Ningqing [Chinese Research Academy of Environmental Sciences, Beijing 100012 (China); State Environmental Protection Key Laboratory of Simulation and Control of Groundwater Pollution, Beijing, 100012 (China); Wu, Yi [Guizhou Academy of Environmental Science and Designing, Guizhou 550000 (China); Xie, Yiwen, E-mail: qin3201@126.com [School of Chemical and Environmental Engineering, Dongguan University of Technology, Dongguan, 523808 (China); Li, Xiang; Yang, Jinjin [Chinese Research Academy of Environmental Sciences, Beijing 100012 (China); State Environmental Protection Key Laboratory of Simulation and Control of Groundwater Pollution, Beijing, 100012 (China)

    2016-05-01

    Highlights: • An index-based methodology to assess the groundwater pollution intensity (GPI). • GPI assessment includes PSH assessment and GIV assessment. • Measures to prevent and control groundwater pollution based on GPI assessment. • An index-based methodology for prevention and control technologies (PCT) screening. • PCT screening based on GPI assessment results and the TOPSIS method.

  7. Method for screening prevention and control measures and technologies based on groundwater pollution intensity assessment

    International Nuclear Information System (INIS)

    Li, Juan; Yang, Yang; Huan, Huan; Li, Mingxiao; Xi, Beidou; Lv, Ningqing; Wu, Yi; Xie, Yiwen; Li, Xiang; Yang, Jinjin

    2016-01-01

    Highlights: • An index-based methodology to assess the groundwater pollution intensity (GPI). • GPI assessment includes PSH assessment and GIV assessment. • Measures to prevent and control groundwater pollution based on GPI assessment. • An index-based methodology for prevention and control technologies (PCT) screening. • PCT screening based on GPI assessment results and the TOPSIS method.

  8. SU-E-J-112: Intensity-Based Pulmonary Image Registration: An Evaluation Study

    Energy Technology Data Exchange (ETDEWEB)

    Yang, F; Meyer, J; Sandison, G [Department of Radiation Oncology, University of Washington Medical Center, Seattle, WA (United States)

    2015-06-15

    Purpose: Accurate alignment of thoracic CT images is essential for dose tracking and for safely implementing adaptive radiotherapy in lung cancer. At the same time it is challenging given the highly elastic nature of lung tissue deformations. The objective of this study was to assess the performance of three state-of-the-art intensity-based algorithms in terms of their ability to register thoracic CT images subject to affine, barrel, and sinusoid transformations. Methods: The intensity similarity measures of the evaluated algorithms were sum-of-squared difference (SSD), local mutual information (LMI), and residual complexity (RC). Five thoracic CT scans obtained from the EMPIRE10 challenge database were included and served as reference images. Each CT dataset was distorted by realistic affine, barrel, and sinusoid transformations. Registration performance of the three algorithms was evaluated for each distortion type in terms of intensity root mean square error (IRMSE) between the reference and registered images in the lung regions. Results: For affine distortions, the three algorithms differed significantly in registration of thoracic images both visually and nominally in terms of IRMSE, with a mean of 0.011 for SSD, 0.039 for RC, and 0.026 for LMI (p<0.01; Kruskal-Wallis test). For barrel distortion, the three algorithms showed nominally no significant difference in terms of IRMSE, with a mean of 0.026 for SSD, 0.086 for RC, and 0.054 for LMI (p=0.16). A significant difference was seen for sinusoid-distorted thoracic CT data, with mean lung IRMSE of 0.039 for SSD, 0.092 for RC, and 0.035 for LMI (p=0.02). Conclusion: Pulmonary deformations may vary to a large extent in nature in a daily clinical setting due to factors ranging from anatomical variations to respiratory motion to image quality. It can be appreciated from the results of the present study that the suitability of a particular algorithm for pulmonary image registration is deformation-dependent.
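
    The evaluation metric is straightforward to reproduce: intensity RMSE restricted to the lung mask. A minimal sketch; the array names are placeholders.

```python
import numpy as np

def irmse(reference, registered, mask):
    """Intensity root mean square error between two images inside a region mask."""
    diff = reference[mask].astype(float) - registered[mask].astype(float)
    return np.sqrt(np.mean(diff ** 2))
```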

  9. Temporal and subjective work demands in office-based patient care: an exploration of the dimensions of physician work intensity.

    Science.gov (United States)

    Jacobson, C Jeff; Bolon, Shannon; Elder, Nancy; Schroer, Brian; Matthews, Gerald; Szaflarski, Jerzy P; Raphaelson, Marc; Horner, Ronnie D

    2011-01-01

    Physician work intensity (WI) during office-based patient care affects quality of care and patient safety, as well as physician job satisfaction and reimbursement. Existing brief work intensity measures have been used in physician studies, but their validity in clinical settings has not been established. Objectives: Document and describe subjective and temporal WI dimensions for physicians in office-based clinical settings, and examine these in relation to the measurement procedures and dimensions of the SWAT and NASA-TLX intensity measures. Design: A focused ethnographic study using interviews and direct observations. Participants: Five family physicians, five general internists, five neurologists, and four surgeons. Methods: Through interviews, each physician was asked to describe low- and high-intensity work responsibilities, patients, and events. To document time and task allotments, physicians were observed during a routine workday. Notes and transcripts were analyzed using the editing method, in which categories are obtained from the data. Results: WI factors identified by physicians matched the dimensions assessed by standard generic instruments of work intensity. Physicians also reported WI factors outside the direct patient encounter. Across specialties, physician time spent in direct contact with patients averaged 61% for office-based services. Conclusions: Brief work intensity measures such as the SWAT and NASA-TLX can be used to assess WI in the office-based clinical setting. However, because these measures define the physician work "task" in terms of effort in the presence of the patient (ie, intraservice time), substantial physician effort dedicated to pre- and postservice activities is not captured.

  10. Intensive group-based CBT for child social phobia: a pilot study.

    Science.gov (United States)

    Donovan, Caroline L; Cobham, Vanessa; Waters, Allison M; Occhipinti, Stefano

    2015-05-01

    Although CBT has proven efficacious in the treatment of child social phobia (SP), most children do not present for treatment and child SP may be less responsive to treatment than other anxiety disorders. Intensive, group-based, SP-specific CBT may improve the efficacy of, and access to, treatment for child SP. The aim of this study was to provide a preliminary examination of such a program. Forty Australian children aged 7-12 years (15 male and 25 female) were allocated into treatment and waitlist groups. Clinical interviews to determine diagnostic status were conducted prior to treatment, following treatment and at 6-month follow-up. Parent and child questionnaire measures of child anxiety symptoms, internalizing symptoms, depression, social skills, social competence, and parental social anxiety were administered at the same time points. Treatment was delivered in 4 separate 3-hour sessions conducted over 3 consecutive weekends. At postassessment, 52.4% of children in the treatment group and 15.8% of children in the waitlist group were free of their SP diagnosis. At postassessment, compared to waitlist children, treatment group children demonstrated a greater drop in clinical severity, a greater increase in overall functioning, and held fewer clinical diagnoses. Treatment group children also reported a greater reduction in SP symptoms compared to waitlist children, and treatment group parents reported a greater reduction in child internalizing and anxiety symptoms, a greater increase in child social competence, and a greater decrease in parental SP symptoms, compared to parents of children in the waitlist group. By 6-month follow-up, 76.9% of the treatment group were free of their SP diagnosis and gains on all other measures were maintained. The results of this study are encouraging, and suggest that brief, intensive, group CBT for children with social anxiety is beneficial for many youngsters. Copyright © 2014. Published by Elsevier Ltd.

  11. Comparison of online and offline based merging methods for high resolution rainfall intensities

    Science.gov (United States)

    Shehu, Bora; Haberlandt, Uwe

    2016-04-01

    Accurate rainfall intensities with high spatial and temporal resolution are crucial for urban flow prediction. Commonly, raw or bias-corrected radar fields are used for forecasting, while different merging products are employed for simulation. The merging products are proven to be adequate for rainfall intensity estimation; however, their application in forecasting is limited as they were developed for offline mode. This study aims at adapting and refining the offline merging techniques for online implementation, and at comparing the performance of these methods for high resolution rainfall data. Radar bias correction based on mean fields and quantile mapping are analyzed individually and also implemented in conditional merging. Special attention is given to the impact of different spatial and temporal filters on the predictive skill of all methods. Raw radar data and kriging interpolation of station data are considered as references to check the benefit of the merged products. The methods are applied to several extreme events in the period 2006-2012 caused by different meteorological conditions, and their performance is evaluated by split sampling. The study area lies within the 112 km radius of the Hannover radar in Lower Saxony, Germany, and the data set consists of 80 recording stations at 5-min time steps. The results of this study reveal how the performance of the methods is affected by the adjustment of the radar data, the choice of merging method and the selected event. Merging techniques can be used to improve the performance of online rainfall estimation, which opens the way to applying merging products in forecasting.
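
    Two of the adjustment steps can be sketched generically: a mean-field bias factor that matches total radar to total gauge rainfall, and a simple empirical quantile mapping that transfers the gauge distribution onto radar values. This is a textbook-style illustration, not the study's exact procedure.

```python
import numpy as np

def mean_field_bias(radar_at_gauges, gauge_obs):
    """Single multiplicative factor matching total radar to total gauge rainfall."""
    return np.sum(gauge_obs) / np.sum(radar_at_gauges)

def quantile_map(radar_field, radar_at_gauges, gauge_obs):
    """Map each radar value through the gauge empirical distribution by
    matching sorted quantiles (simple empirical quantile mapping)."""
    src = np.sort(radar_at_gauges)   # radar quantiles at gauge locations
    dst = np.sort(gauge_obs)         # corresponding gauge quantiles
    return np.interp(radar_field, src, dst)
```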

  12. The intensity detection of single-photon detectors based on photon counting probability density statistics

    International Nuclear Information System (INIS)

    Zhang Zijing; Song Jie; Zhao Yuan; Wu Long

    2017-01-01

    Single-photon detectors possess ultra-high sensitivity, but they cannot directly respond to signal intensity. Conventional methods adopt sampling gates of fixed width and count the number of triggered sampling gates, which yields a photon counting probability from which the echo signal intensity can be estimated. In this paper, we not only count the number of triggered sampling gates, but also record the triggered time positions of the photon counting pulses. The photon counting probability density distribution is obtained through statistics over a series of triggered time positions. Then the Minimum Variance Unbiased Estimation (MVUE) method is used to estimate the echo signal intensity. Compared with conventional methods, this method improves the estimation accuracy of the echo signal intensity because more detection information is acquired. Finally, a proof-of-principle laboratory system is established. The estimation accuracy of the echo signal intensity is discussed, and a high-accuracy intensity image is acquired under low-light-level conditions. (paper)
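
    The conventional baseline the paper improves upon is easy to state: with Poisson photon arrivals, the probability that a gate triggers is 1 − exp(−λ), so the observed trigger fraction can be inverted for the mean photon number per gate. A sketch of that baseline estimator (the paper's MVUE over time-position statistics is more elaborate):

```python
import math

def mean_photon_number(triggered, total_gates):
    """Estimate mean photons per gate from the fraction of triggered gates,
    assuming Poisson arrivals: p_trigger = 1 - exp(-lambda)."""
    p = triggered / total_gates
    if p >= 1.0:
        raise ValueError("detector saturated: every gate triggered")
    return -math.log(1.0 - p)
```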

  13. An Ensemble Approach to Knowledge-Based Intensity-Modulated Radiation Therapy Planning

    Directory of Open Access Journals (Sweden)

    Jiahan Zhang

    2018-03-01

    Full Text Available Knowledge-based planning (KBP) utilizes experienced planners' knowledge embedded in prior plans to estimate the optimal achievable dose volume histogram (DVH) of new cases. In the regression-based KBP framework, previously planned patients' anatomical features and DVHs are extracted, and prior knowledge is summarized as the regression coefficients that transform features to organ-at-risk DVH predictions. In our study, we find that different regression methods work better in different settings. To improve the robustness of KBP models, we propose an ensemble method that combines the strengths of various linear regression models, including stepwise, lasso, elastic net, and ridge regression. In the ensemble approach, we first obtain individual model prediction metadata using in-training-set leave-one-out cross validation. A constrained optimization is subsequently performed to decide the individual model weights. The metadata is also used to filter out impactful training set outliers. We evaluate our method on a fresh set of retrospectively retrieved anonymized prostate intensity-modulated radiation therapy (IMRT) cases and head and neck IMRT cases. The proposed approach is more robust against small training set size, wrongly labeled cases, and dosimetrically inferior plans, compared with the individual models. In summary, we believe the improved robustness makes the proposed method more suitable for clinical settings than the individual models.
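
    The weight-selection step can be sketched as a small constrained least-squares problem: given each model's leave-one-out predictions, find non-negative weights summing to one that minimize the combined squared error. Function and variable names are illustrative, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

def ensemble_weights(loo_preds, y):
    """Weights (>= 0, summing to 1) minimizing squared error of the
    combined leave-one-out predictions; loo_preds is (n_samples, n_models)."""
    n_models = loo_preds.shape[1]

    def loss(w):
        return np.sum((loo_preds @ w - y) ** 2)

    cons = {"type": "eq", "fun": lambda w: np.sum(w) - 1.0}
    res = minimize(loss, np.full(n_models, 1.0 / n_models),
                   bounds=[(0.0, 1.0)] * n_models, constraints=cons)
    return res.x
```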

  14. A New Displacement-based Approach to Calculate Stress Intensity Factors With the Boundary Element Method

    Directory of Open Access Journals (Sweden)

    Marco Gonzalez

    Full Text Available The analysis of cracked brittle mechanical components considering linear elastic fracture mechanics is usually reduced to the evaluation of stress intensity factors (SIFs). The SIF calculation can be carried out experimentally, theoretically or numerically. Each methodology has its own advantages, but the use of numerical methods has become very popular. Several schemes for numerical SIF calculation have been developed, the J-integral method being one of the most widely used because of its energy-like formulation. Additionally, some variations of the J-integral method, such as displacement-based methods, are also becoming popular due to their simplicity. In this work, a simple displacement-based scheme is proposed to calculate SIFs, and its performance is compared with contour integrals. These schemes are all implemented with the Boundary Element Method (BEM) in order to exploit its advantages in crack growth modelling. Some simple examples are solved with the BEM and the calculated SIF values are compared against available solutions, showing good agreement between the different schemes.
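
    A one-point displacement-correlation formula of the kind such schemes build on: from the Williams near-tip expansion, the opening displacement u_y of one crack face at a distance r behind the tip gives K_I directly. A hedged sketch (plane strain by default; this is the generic textbook relation, not the paper's exact extrapolation scheme):

```python
import math

def sif_from_displacement(u_y, r, E, nu, plane_strain=True):
    """Mode-I SIF from the opening displacement u_y of one crack face at
    distance r behind the tip: u_y = (kappa+1) K_I sqrt(r/2pi) / (2 mu)."""
    mu = E / (2.0 * (1.0 + nu))                                   # shear modulus
    kappa = 3.0 - 4.0 * nu if plane_strain else (3.0 - nu) / (1.0 + nu)
    return 2.0 * mu * u_y / ((kappa + 1.0) * math.sqrt(r / (2.0 * math.pi)))
```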

  15. A usability evaluation of a SNOMED CT based compositional interface terminology for intensive care.

    Science.gov (United States)

    Bakhshi-Raiez, F; de Keizer, N F; Cornet, R; Dorrepaal, M; Dongelmans, D; Jaspers, M W M

    2012-05-01

    To evaluate the usability of a large compositional interface terminology based on SNOMED CT, and of the terminology application for registration of the reasons for intensive care admission in a Patient Data Management System. Observational study with user-based usability evaluations before and 3 months after the system was implemented and routinely used. Usability was defined by five aspects: effectiveness, efficiency, learnability, overall user satisfaction, and experienced usability problems. Qualitative (the Think-Aloud user testing method) and quantitative (the System Usability Scale questionnaire and Time-on-Task analyses) methods were used to examine these usability aspects. The results of the evaluation study revealed that the usability of the interface terminology fell short (SUS scores of 47.2 and 37.5 out of 100 before and after implementation, respectively). The qualitative measurements revealed a high number (n=35) of distinct usability problems, leading to ineffective and inefficient registration of reasons for admission. The effectiveness and efficiency of the system did not change over time. About 14% (n=5) of the revealed usability problems were related to the terminology content based on SNOMED CT, while the remaining 86% (n=30) were related to the terminology application. The problems related to the terminology content were more severe than the problems related to the terminology application. This study provides a detailed insight into how clinicians interact with a controlled compositional terminology through a terminology application. The extensiveness, the complexity of the hierarchy, and the language usage of an interface terminology are defining for its usability. Carefully crafted domain-specific subsets and a well-designed terminology application are needed to facilitate the use of a complex compositional interface terminology based on SNOMED CT. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  16. The Urban Intensive Land-use Evaluation in Xi’an, Based on Fuzzy Comprehensive Evaluation

    Science.gov (United States)

    Shi, Ru; Kang, Zhiyuan

    2018-01-01

    Intensive land-use is the basis of urban “stock optimization”, and scientific, reasonable evaluation is an important part of intensive land utilization. In this paper, through a survey of Xi’an urban land-use conditions, we construct a suitable evaluation index system for Xi’an’s intensive land-use by combining the Analytic Hierarchy Process (AHP) with Fuzzy Comprehensive Evaluation (FCE). Through analysis of the factors influencing intensive land utilization, we provide a reference for the city’s future development direction.
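    The AHP-plus-FCE combination described above can be sketched numerically: AHP supplies a weight vector for the indicator groups, and FCE composes it with a membership matrix to grade the land-use level. The weights, membership values, and grade labels below are illustrative placeholders, not figures from the study.

```python
import numpy as np

# Hypothetical AHP-derived weights for four indicator groups
# (e.g. land-use structure, intensity, efficiency, sustainability).
weights = np.array([0.35, 0.30, 0.20, 0.15])

# Membership matrix R: each row gives one indicator group's degree of
# membership in the evaluation grades (high / medium-high / medium / low).
R = np.array([
    [0.4, 0.3, 0.2, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.3, 0.3, 0.3, 0.1],
    [0.1, 0.4, 0.3, 0.2],
])

# Weighted-average fuzzy composition B = w . R, then normalization;
# the grade with maximal membership is the overall evaluation result.
B = weights @ R
B /= B.sum()
grade = ["high", "medium-high", "medium", "low"][int(np.argmax(B))]
print(B, grade)
```

    The max-min composition operator is a common alternative to the weighted average used here; the weighted average retains information from every indicator group rather than only the dominant one.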

  17. Ultrasound image based visual servoing for moving target ablation by high intensity focused ultrasound.

    Science.gov (United States)

    Seo, Joonho; Koizumi, Norihiro; Mitsuishi, Mamoru; Sugita, Naohiko

    2017-12-01

    Although high intensity focused ultrasound (HIFU) is a promising technology for tumor treatment, a moving abdominal target is still a challenge in current HIFU systems. In particular, respiratory-induced organ motion can reduce the treatment efficiency and negatively influence the treatment result. In this research, we present: (1) a methodology for integration of ultrasound (US) image based visual servoing in a HIFU system; and (2) the experimental results obtained using the developed system. In the visual servoing system, target motion is monitored by biplane US imaging and tracked in real time (40 Hz) by registration with a preoperative 3D model. The distance between the target and the current HIFU focal position is calculated in every US frame and a three-axis robot physically compensates for differences. Because simultaneous HIFU irradiation disturbs US target imaging, a sophisticated interlacing strategy was constructed. In the experiments, respiratory-induced organ motion was simulated in a water tank with a linear actuator and kidney-shaped phantom model. Motion compensation with HIFU irradiation was applied to the moving phantom model. Based on the experimental results, visual servoing exhibited a motion compensation accuracy of 1.7 mm (RMS) on average. Moreover, the integrated system could make a spherical HIFU-ablated lesion in the desired position of the respiratory-moving phantom model. We have demonstrated the feasibility of our US image based visual servoing technique in a HIFU system for moving target treatment. © 2016 The Authors The International Journal of Medical Robotics and Computer Assisted Surgery Published by John Wiley & Sons Ltd.
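    The compensation loop described above (track the target in the US frames, compute its offset from the current focal position, and command the three-axis robot every cycle) can be sketched as a simple 40 Hz proportional servo. The gain, motion amplitude, and phantom trajectory are illustrative assumptions, not values from the paper.

```python
import numpy as np

def compensate(target_pos, focal_pos, gain=0.8):
    """One 40 Hz servo step: move the HIFU focus toward the tracked target.

    target_pos: 3-D target centroid from biplane US registration (mm).
    focal_pos:  current HIFU focal position in the same frame (mm).
    Returns the displacement command for the three-axis robot
    (gain < 1 damps overshoot; the value is illustrative)."""
    error = np.asarray(target_pos) - np.asarray(focal_pos)
    return gain * error

# Simulated respiratory motion along one axis: x(t) = A*sin(2*pi*f*t)
dt, A, f = 1.0 / 40.0, 10.0, 0.25          # 40 Hz loop, 10 mm, 0.25 Hz
focal = np.zeros(3)
for k in range(400):                        # 10 s of tracking
    target = np.array([A * np.sin(2 * np.pi * f * k * dt), 0.0, 0.0])
    focal += compensate(target, focal)
residual = np.linalg.norm(target - focal)
print(f"final tracking error: {residual:.2f} mm")
```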

  18. Postoperative Irradiation of Gynecologic Malignancies: Improving Treatment Delivery Using Aperture-Based Intensity-Modulated Radiotherapy

    International Nuclear Information System (INIS)

    Nadeau, Sylvain; Bouchard, Myriam; Germain, Isabelle; Raymond, Paul-Emile; Beaulieu, Frederic; Beaulieu, Luc; Roy, Rene; Gingras, Luc

    2007-01-01

    Purpose: To evaluate dosimetric and treatment delivery advantages of aperture-based intensity-modulated radiotherapy (AB-IMRT) for the treatment of patients receiving whole pelvic radiotherapy for gynecologic malignancies. Methods and Materials: Nineteen patients undergoing pelvic radiotherapy after resection of endometrial cancers were selected. A 45-Gy dose was prescribed to the target volume delineated on a planning CT scan. An in-house inverse planning system, Ballista, was used to develop a treatment plan using aperture-based multileaf collimator segments. This approach was compared with conventional four-field, enlarged four-field, and static beamlet-based IMRT (BB-IMRT) techniques in terms of target coverage, dose-volume histogram statistics for surrounding normal tissues, and numbers of segments and monitor units (MU). Results: Three quarters (76.4%) of the planning target volume received the prescription dose with conventional four-field plans. With adequate target coverage, the Ballista plans significantly reduced the volume of bowel and bladder irradiated at the prescribed dose (p < 0.001), whereas the two approaches provided equivalent results for the rectum (p = 0.5). On the other hand, AB-IMRT and BB-IMRT plans showed only small differences in dose-volume histogram statistics of unknown clinical impact, whereas Ballista plan delivery required on average 73% and 59% fewer segments and MU, respectively. Conclusion: With respect to conventional techniques, AB-IMRT for the treatment of gynecologic malignancies provides dosimetric advantages similar to those with BB-IMRT but with clear treatment delivery improvements.

  19. Intensive monitoring of new drugs based on first prescription signals from pharmacists : a pilot study

    NARCIS (Netherlands)

    Van Grootheest, AC; Groote, JK; de Jong-van den Berg, LTW

    Background Intensive monitoring can be a valuable tool in the early detection of adverse drug reactions, especially of new drugs. Aim of this pilot study was to investigate the practical possibilities of a system of intensive monitoring, using the pharmacy computer system to detect the first

  20. The promise and peril of intensive-site-based ecological research: insights from the Hubbard Brook ecosystem study

    Science.gov (United States)

    Timothy J. Fahey; Pamela H. Templer; Bruce T. Anderson; John J. Battles; John L. Campbell; Charles T. Driscoll; Anthony R. Fusco; Mark B. Green; Karim-Aly S. Kassam; Nicholas L. Rodenhouse; Lindsey Rustad; Paul G. Schaberg; Matthew A. Vadeboncoeur

    2015-01-01

    Ecological research is increasingly concentrated at particular locations or sites. This trend reflects a variety of advantages of intensive, site-based research, but also raises important questions about the nature of such spatially delimited research: how well does site-based research represent broader areas, and does it constrain scientific discovery? We provide an...

  1. Advanced hemodynamic monitoring in intensive care medicine : A German web-based survey study.

    Science.gov (United States)

    Saugel, B; Reese, P C; Wagner, J Y; Buerke, M; Huber, W; Kluge, S; Prondzinsky, R

    2018-04-01

    Advanced hemodynamic monitoring is recommended in patients with complex circulatory shock. To evaluate the current attitudes and beliefs among German intensivists, regarding advanced hemodynamic monitoring, the actual hemodynamic management in clinical practice, and the barriers to using it. Web-based survey among members of the German Society of Medical Intensive Care and Emergency Medicine. Of 284 respondents, 249 (87%) agreed that further hemodynamic assessment is needed to determine the type of circulatory shock if no clear clinical diagnosis can be made. In all, 281 (99%) agreed that echocardiography is helpful for this purpose (transpulmonary thermodilution: 225 [79%]; pulmonary artery catheterization: 126 [45%]). More than 70% of respondents agreed that blood flow variables (cardiac output, stroke volume) should be measured in patients with hemodynamic instability. The parameters most respondents agreed should be assessed in a patient with hemodynamic instability were mean arterial pressure, cardiac output, and serum lactate. Echocardiography is available in 99% of ICUs (transpulmonary thermodilution: 91%; pulmonary artery catheter: 63%). The respondents stated that, in clinical practice, invasive arterial pressure measurements and serum lactate measurements are performed in more than 90% of patients with hemodynamic instability (cardiac output monitoring in about 50%; transpulmonary thermodilution in about 40%). The respondents did not feel strong barriers to the use of advanced hemodynamic monitoring in clinical practice. This survey study shows that German intensivists deem advanced hemodynamic assessment necessary for the differential diagnosis of circulatory shock and to guide therapy with fluids, vasopressors, and inotropes in ICU patients.

  2. LASER POINTER DETECTION BASED ON INTENSITY PROFILE ANALYSIS FOR APPLICATION IN TELECONSULTATION

    Directory of Open Access Journals (Sweden)

    NAIREEN IMTIAZ

    2017-08-01

    Telemedicine is the application of electronic communication to deliver medical care remotely. An important aspect of telemedicine is teleconsultation, which involves obtaining the professional opinion of a healthcare provider. One of the ways to improve teleconsultation is to equip the remote specialist with control of a laser pointer located in the consultation area, providing a means of gesturing. As such, accurate detection of the laser spot is crucial in such systems, as they rely on visual feedback, which enables the specialist at a remote site to control and point the laser at the active location using a standard mouse. The main issue in laser spot detection in a natural environment is distinguishing a laser point image from other bright regions and glare due to camera saturation. This problem remains unsolved without extensive computing and use of hardware filters. In this paper a hybrid algorithm is described which is designed to work in a natural indoor environment while limiting computation. This algorithm combines thresholding and blob evaluation methods with a novel image intensity profile comparison method based on linear regression. A comparison of the algorithm with existing approaches has been carried out. The developed algorithm shows higher accuracy and faster execution time, making it an ideal candidate for real-time detection applications.
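    The hybrid idea (thresholding and blob evaluation followed by an intensity-profile test based on linear regression) might be sketched as follows: a genuine laser spot is sharply peaked around its centroid, while saturated glare is locally flat, so the slope of a least-squares line fitted to intensity versus radius separates the two. All thresholds and window sizes are illustrative assumptions, not values from the paper.

```python
import numpy as np

def detect_laser_spot(gray, thresh=220, min_area=3, max_area=400, slope_tol=-1.0):
    """Thresholding + blob-size screening + radial-profile regression."""
    h, w = gray.shape
    ys, xs = np.nonzero(gray >= thresh)          # thresholding
    if not (min_area <= len(xs) <= max_area):    # blob evaluation
        return None
    cy, cx = ys.mean(), xs.mean()
    y0, x0 = int(round(cy)), int(round(cx))
    win = 3  # small window: glare stays saturated here, a real spot decays
    yw, xw = np.mgrid[max(0, y0 - win):min(h, y0 + win + 1),
                      max(0, x0 - win):min(w, x0 + win + 1)]
    r = np.hypot(yw - cy, xw - cx).ravel()
    v = gray[yw, xw].ravel().astype(float)
    slope = np.polyfit(r, v, 1)[0]  # least-squares line: intensity vs radius
    return (cx, cy) if slope < slope_tol else None

# Synthetic check: a Gaussian spot is accepted, a flat glare patch is not.
yy, xx = np.mgrid[0:50, 0:50]
spot = 255 * np.exp(-((yy - 25) ** 2 + (xx - 25) ** 2) / 8.0)
glare = np.zeros((50, 50)); glare[20:30, 20:30] = 255.0
print(detect_laser_spot(spot), detect_laser_spot(glare))
```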

  3. Enhanced Decision Support Systems in Intensive Care Unit Based on Intuitionistic Fuzzy Sets

    Directory of Open Access Journals (Sweden)

    Hanen Jemal

    2017-01-01

    In medical diagnosis and decision-making, situations shrouded in uncertainty and ambiguity are often encountered. In this regard, intuitionistic fuzzy sets (IFS) stand as a potent technique for handling the uncertainty associated with real healthcare decision-making situations. To this end, we developed a prototype model for detecting a patient's risk degree in the Intensive Care Unit (ICU). Based on intuitionistic fuzzy sets and dubbed the Medical Intuitionistic Fuzzy Expert Decision Support System (MIFEDSS), the work has its origins in the Modified Early Warning Score (MEWS) standard. The effectiveness of the proposed prototype was validated through a real case study at the Polyclinic ESSALEMA in Sfax, Tunisia. This paper provides practical initial results concerning the system as carried out in real-life situations. Indeed, the results show that the MIFEDSS displays a strong capability for handling ICU-related uncertainty issues. The performance of the prototype was compared with the MEWS standard, which showed that the IFS application performs markedly better in accuracy than the expert MEWS score, with higher degrees of sensitivity and specificity being recorded.
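    The intuitionistic-fuzzy machinery behind such a system can be illustrated with a toy scoring function: each assessment carries a membership degree μ, a non-membership degree ν, and a hesitancy margin π = 1 − μ − ν, and the score s = μ − ν is a common ranking function. The thresholds and labels below are illustrative, not the MIFEDSS rules.

```python
def ifs_risk(mu, nu):
    """Toy intuitionistic-fuzzy assessment of one vital-sign reading.

    mu = degree of membership ("patient is at risk"),
    nu = degree of non-membership, with mu + nu <= 1.
    pi = 1 - mu - nu is the hesitancy (uncertainty) margin."""
    assert 0 <= mu <= 1 and 0 <= nu <= 1 and mu + nu <= 1
    pi = 1 - mu - nu
    s = mu - nu                       # score function for ranking
    label = "high risk" if s > 0.3 else "monitor" if s > -0.3 else "low risk"
    return s, pi, label

print(ifs_risk(0.7, 0.2))   # strongly at risk, little hesitation
```

    Unlike an ordinary fuzzy membership, the hesitancy term π lets the system express "we are unsure" explicitly, which is the property the abstract credits for the improved sensitivity and specificity.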

  4. CREATING OF BARCODES FOR FACIAL IMAGES BASED ON INTENSITY GRADIENTS

    Directory of Open Access Journals (Sweden)

    G. A. Kukharev

    2014-05-01

    The paper provides an analysis of existing approaches to generating barcodes and a description of the system structure for generating barcodes from facial images. A method for generating standard-type linear barcodes from facial images is proposed. This method is based on differences of intensity gradients, which represent the image in the form of initial features. These features are then averaged into a limited number of intervals, the results are quantized into decimal digits from 0 to 9, and a table conversion into the standard barcode is performed. Testing was conducted on the Face94 database and a database of composite faces of different ages. It showed that the proposed method ensures the stability of generated barcodes under changes of scale, pose and mirroring of facial images, as well as changes of facial expressions and shadows on faces from local lighting. The proposed solutions are computationally low-cost and do not require specialized image processing software for generating facial barcodes in real-time systems.
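    The pipeline (gradient features, interval averaging, quantization to decimal digits) can be sketched as follows. The interval count of 13 (as an EAN-13 code would require) and the min-max quantization rule are assumptions for illustration, not the paper's exact choices.

```python
import numpy as np

def face_to_digits(gray, n_intervals=13):
    """Facial image -> decimal digits for a standard linear barcode."""
    grad = np.abs(np.diff(gray.astype(float), axis=1))   # intensity gradients
    col_energy = grad.mean(axis=0)                       # per-column feature
    chunks = np.array_split(col_energy, n_intervals)     # averaging intervals
    feats = np.array([c.mean() for c in chunks])
    lo, hi = feats.min(), feats.max()
    # Min-max quantization of each interval mean into a digit 0..9.
    digits = np.round(9 * (feats - lo) / (hi - lo + 1e-12)).astype(int)
    return digits

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64))   # stand-in for a face image
print(face_to_digits(face))
```

    The final step in the paper, table conversion of the digit string into barcode stripes, is the standard encoding of the chosen symbology and is omitted here.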

  5. High intensity intermittent games-based activity and adolescents' cognition: moderating effect of physical fitness.

    Science.gov (United States)

    Cooper, Simon B; Dring, Karah J; Morris, John G; Sunderland, Caroline; Bandelow, Stephan; Nevill, Mary E

    2018-05-08

    An acute bout of exercise elicits a beneficial effect on subsequent cognitive function in adolescents. The effect of games-based activity, an ecologically valid and attractive exercise model for young people, remains unknown; as does the moderating effect of fitness on the acute exercise-cognition relationship. Therefore, the aim of the present study was to examine the effect of games-based activity on subsequent cognition in adolescents, and the moderating effect of fitness on this relationship. Following ethical approval, 39 adolescents (12.3 ± 0.7 years) completed an exercise and resting trial in a counterbalanced, randomised crossover design. During familiarisation, participants completed a multi-stage fitness test to predict VO2 peak. The exercise trial consisted of 60-min games-based activity (basketball), during which heart rate was 158 ± 11 beats·min⁻¹. A battery of cognitive function tests (Stroop test, Sternberg paradigm, trail making and d2 tests) were completed 30-min before, immediately following and 45-min following the basketball. Response times on the complex level of the Stroop test were enhanced both immediately (p = 0.021) and 45-min (p = 0.035) post-exercise, and response times on the five item level of the Sternberg paradigm were enhanced immediately post-exercise (p = 0.023). There were no effects on the time taken to complete the trail making test or any outcome of the d2 test. In particular, response times were enhanced in the fitter adolescents 45-min post-exercise on both levels of the Stroop test (simple, p = 0.005; complex, p = 0.040) and on the three item level of the Sternberg paradigm immediately (p = 0.017) and 45-min (p = 0.008) post-exercise. Games-based activity enhanced executive function and working memory scanning speed in adolescents, an effect particularly evident in fitter adolescents, whilst the high intensity intermittent nature of games-based activity may be too demanding for

  6. A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system.

    Science.gov (United States)

    Ma, Jiasen; Beltran, Chris; Seum Wan Chan Tseung, Hok; Herman, Michael G

    2014-12-01

    Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. For relatively large and complex three-field head and neck cases, i.e., >100,000 spots with a target volume of ∼1000 cm³ and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. A MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45,000 dollars. The fast calculation and
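    The optimization step can be sketched independently of the MC engine: given a dose-influence matrix D (voxels × spots), nonnegative spot weights w are sought that minimize ||Dw − d_target||². Projected gradient descent below stands in for the authors' modified least-squares method, on toy-sized data.

```python
import numpy as np

def optimize_spot_weights(D, d_target, iters=2000, step=None):
    """Nonnegative least-squares spot-weight optimization.

    D:        (voxels x spots) dose-influence matrix from the MC engine.
    d_target: prescribed dose per voxel.
    Projected gradient descent with step 1/L, L = spectral norm squared."""
    w = np.zeros(D.shape[1])
    if step is None:
        step = 1.0 / np.linalg.norm(D, 2) ** 2
    for _ in range(iters):
        grad = D.T @ (D @ w - d_target)
        w = np.maximum(0.0, w - step * grad)   # project onto w >= 0
    return w

rng = np.random.default_rng(1)
D = rng.random((40, 8))                 # 40 voxels, 8 proton spots (toy)
w_true = rng.random(8)
d_target = D @ w_true                   # an exactly achievable prescription
w = optimize_spot_weights(D, d_target)
print(np.abs(D @ w - d_target).max())
```

    In a real plan the objective also carries per-structure DVH penalties and the matrix has ~10⁵ columns, which is what motivates the multi-GPU implementation described in the abstract.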

  7. A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Jiasen, E-mail: ma.jiasen@mayo.edu; Beltran, Chris; Seum Wan Chan Tseung, Hok; Herman, Michael G. [Department of Radiation Oncology, Division of Medical Physics, Mayo Clinic, 200 First Street Southwest, Rochester, Minnesota 55905 (United States)

    2014-12-15

    Purpose: Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. Methods: An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. Results: For relatively large and complex three-field head and neck cases, i.e., >100 000 spots with a target volume of ∼1000 cm³ and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. Conclusions: A MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45,000 dollars.

  8. Intense transient electric field sensor based on the electro-optic effect of LiNbO{sub 3}

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Qing, E-mail: yangqing@cqu.edu.cn; Sun, Shangpeng; Han, Rui; Sima, Wenxia; Liu, Tong [State Key Laboratory of Power Transmission Equipment & System Security and New Technology, Chongqing University, Shapingba District, Chongqing, 400044 (China)

    2015-10-15

    Intense transient electric field measurements are widely applied in various research areas. An optical intense E-field sensor for time-domain measurements, based on the electro-optic effect of lithium niobate, has been studied in detail. Principles and key issues in the design of the sensor are presented. The sensor is insulated, small in size (65 mm × 15 mm × 15 mm), and suitable for high-intensity (<801 kV/m) electric field measurements over a wide frequency band (10 Hz–10 MHz). The input/output characteristics of the sensor were obtained and the sensor calibrated. Finally, an application using this sensor in testing laboratory lightning impulses and in measuring transient electric fields during switch-on of a disconnector confirmed that the sensor is expected to find widespread use in transient intense electric field measurement applications.
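    The electro-optic sensing principle can be illustrated with the standard quadrature-biased Pockels transfer function, I(E) = (I0/2)(1 + sin(πE/E_π)), which is nearly linear for fields well below the half-wave field E_π. The E_π value below is a made-up placeholder, not the sensor's calibration.

```python
import numpy as np

# Small-signal model of a LiNbO3 electro-optic (Pockels) field sensor
# biased at quadrature. E_pi (the half-wave field) is illustrative only;
# it is chosen large enough that the stated <801 kV/m range stays on the
# monotonic part of the sine response.
I0, E_pi = 1.0, 5.0e6                       # W, V/m (assumed values)
E = np.linspace(-8.0e5, 8.0e5, 5)           # fields within the sensor range
I = 0.5 * I0 * (1 + np.sin(np.pi * E / E_pi))

# Linearization check at 10 kV/m (well inside the small-signal regime):
small = 1.0e4
lin = 0.5 * I0 * (1 + np.pi * small / E_pi)
exact = 0.5 * I0 * (1 + np.sin(np.pi * small / E_pi))
print(f"linearization error at 10 kV/m: {abs(lin - exact):.2e} W")
```

    The calibration step mentioned in the abstract amounts to measuring this input/output curve so that a recorded intensity can be inverted back to a field strength.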

  9. Tsunami Source Identification on the 1867 Tsunami Event Based on the Impact Intensity

    Science.gov (United States)

    Wu, T. R.

    2014-12-01

    The 1867 Keelung tsunami event has drawn significant attention from people in Taiwan, not only because the location was very close to the 3 nuclear power plants, which are only about 20 km away from Taipei city, but also because of the ambiguity of the tsunami sources. This event is unique in many aspects. First, it was documented in many literatures, in many languages, with similar descriptions. Second, the tsunami deposit was discovered recently. Based on the literatures, an earthquake, 7-meter tsunami height, volcanic smoke, and oceanic smoke were observed. Previous studies concluded that this tsunami was generated by an earthquake with a magnitude around Mw7.0 along the Shanchiao Fault. However, numerical results showed that even a Mw 8.0 earthquake was not able to generate a 7-meter tsunami. Considering the steep bathymetry and intense volcanic activities along the Keelung coast, one reasonable hypothesis is that different types of tsunami sources existed, such as a submarine landslide or volcanic eruption. In order to confirm this scenario, last year we proposed the Tsunami Reverse Tracing Method (TRTM) to find the possible locations of the tsunami sources. This method helped us rule out the impossible far-field tsunami sources. However, the near-field sources still remain unclear. This year, we further developed a new method named 'Impact Intensity Analysis' (IIA). In the IIA method, the study area is divided into a sequence of tsunami sources, and numerical simulations of each source are conducted with the COMCOT (Cornell Multi-grid Coupled Tsunami Model) tsunami model. After that, the resulting wave height from each source to the study site is collected and plotted. This method successfully helped us to identify the impact factor from the near-field potential sources. The IIA result (Fig. 1) shows that the 1867 tsunami event was a multi-source event. A mild tsunami was triggered by a Mw7.0 earthquake, and then followed by the submarine

  10. Experiments based on blue intensity for reconstructing North Pacific temperatures along the Gulf of Alaska

    Science.gov (United States)

    Wilson, Rob; D'Arrigo, Rosanne; Andreu-Hayles, Laia; Oelkers, Rose; Wiles, Greg; Anchukaitis, Kevin; Davi, Nicole

    2017-08-01

    Ring-width (RW) records from the Gulf of Alaska (GOA) have yielded a valuable long-term perspective for North Pacific changes on decadal to longer timescales in prior studies but contain a broad winter to late summer seasonal climate response. Similar to the highly climate-sensitive maximum latewood density (MXD) proxy, the blue intensity (BI) parameter has recently been shown to correlate well with year-to-year warm-season temperatures for a number of sites at northern latitudes. Since BI records are much less labour intensive and expensive to generate than MXD, such data hold great potential value for future tree-ring studies in the GOA and other regions in mid- to high latitudes. Here we explore the potential for improving tree-ring-based reconstructions using combinations of RW- and BI-related parameters (latewood BI and delta BI) from an experimental subset of samples at eight mountain hemlock (Tsuga mertensiana) sites along the GOA. This is the first study for the hemlock genus using BI data. We find that using either inverted latewood BI (LWBinv) or delta BI (DB) can improve the amount of explained temperature variance by > 10 % compared to RW alone, although the optimal target season shrinks to June-September, which may have implications for studying ocean-atmosphere variability in the region. One challenge in building these BI records is that resin extraction did not remove colour differences between the heartwood and sapwood; thus, long term trend biases, expressed as relatively warm temperatures in the 18th century, were noted when using the LWBinv data. Using DB appeared to overcome these trend biases, resulting in a reconstruction expressing 18th-19th century temperatures ca. 0.5 °C cooler than the 20th-21st centuries. This cool period agrees well with previous dendroclimatic studies and the glacial advance record in the region. Continuing BI measurement in the GOA region must focus on sampling and measuring more trees per site (> 20) and compiling

  11. Hospitals and organizational models based on the intensity of treatment: the internist's point of view

    Directory of Open Access Journals (Sweden)

    Giuseppe Chesi

    2012-01-01

    Introduction: The type of patients being treated in our hospitals has changed significantly. Today’s patients are much older with more complicated, polypathological problems. As a result, hospital organization and management structures must also change, particularly in Internal Medicine. A widely discussed approach, organization according to “intensity of treatment,” could be an appropriate solution from an organizational viewpoint that would also satisfy these new demands. Materials and methods: With the aid of a questionnaire sent to internists working in the hospitals of Italy’s Emilia-Romagna region and the review of the relevant medical literature, we defined structural, organizational, technological, managerial, and staffing characteristics to better determine and classify this model. We analyzed questionnaire responses of 31 internists heading operative units in their hospitals, a relatively homogeneous subgroup with experience in organizing and managing healthcare as well as its clinical aspects. Results: Analysis of these questionnaires revealed important points concerning the model: (1) an accurate identification of the medical care on which to base the model; (2) a well-defined strategy for differentiated allocation of staff to structural and technological areas depending on the level of medical care provided in the area; (3) an accurate definition of the types and features of patients targeted by each level of medical care; (4) an early exchange (starting from the patient’s arrival in the Emergency Department) of information and medical knowledge among Emergency Department physicians and those present during the initial stages of hospitalization; (5) a precise definition of responsibilities in the different areas, operative and collaborative stages among different physicians and medical staff, and the different disciplines involved in the process. Conclusions: Among the physicians responsible for managing complex areas of Internal Medicine in Emilia

  12. Experiments based on blue intensity for reconstructing North Pacific temperatures along the Gulf of Alaska

    Directory of Open Access Journals (Sweden)

    R. Wilson

    2017-08-01

    Ring-width (RW) records from the Gulf of Alaska (GOA) have yielded a valuable long-term perspective for North Pacific changes on decadal to longer timescales in prior studies but contain a broad winter to late summer seasonal climate response. Similar to the highly climate-sensitive maximum latewood density (MXD) proxy, the blue intensity (BI) parameter has recently been shown to correlate well with year-to-year warm-season temperatures for a number of sites at northern latitudes. Since BI records are much less labour intensive and expensive to generate than MXD, such data hold great potential value for future tree-ring studies in the GOA and other regions in mid- to high latitudes. Here we explore the potential for improving tree-ring-based reconstructions using combinations of RW- and BI-related parameters (latewood BI and delta BI) from an experimental subset of samples at eight mountain hemlock (Tsuga mertensiana) sites along the GOA. This is the first study for the hemlock genus using BI data. We find that using either inverted latewood BI (LWBinv) or delta BI (DB) can improve the amount of explained temperature variance by > 10 % compared to RW alone, although the optimal target season shrinks to June–September, which may have implications for studying ocean–atmosphere variability in the region. One challenge in building these BI records is that resin extraction did not remove colour differences between the heartwood and sapwood; thus, long term trend biases, expressed as relatively warm temperatures in the 18th century, were noted when using the LWBinv data. Using DB appeared to overcome these trend biases, resulting in a reconstruction expressing 18th–19th century temperatures ca. 0.5 °C cooler than the 20th–21st centuries. This cool period agrees well with previous dendroclimatic studies and the glacial advance record in the region. Continuing BI measurement in the GOA region must focus on sampling and measuring more trees per

  13. Crossfit-based high-intensity power training improves maximal aerobic fitness and body composition.

    Science.gov (United States)

    Smith, Michael M; Sommer, Allan J; Starkoff, Brooke E; Devor, Steven T

    2013-11-01

    The purpose of this study was to examine the effects of a crossfit-based high-intensity power training (HIPT) program on aerobic fitness and body composition. Healthy subjects of both genders (23 men, 20 women) spanning all levels of aerobic fitness and body composition completed 10 weeks of HIPT consisting of lifts such as the squat, deadlift, clean, snatch, and overhead press performed as quickly as possible. Additionally, this crossfit-based HIPT program included skill work for the improvement of traditional Olympic lifts and selected gymnastic exercises. Body fat percentage was estimated using whole-body plethysmography, and maximal aerobic capacity (VO2max) was measured by analyzing expired gasses during a Bruce protocol maximal graded treadmill test. These variables were measured again after 10 weeks of training and compared for significant changes using a paired t-test. Results showed significant (p < 0.05) improvements of VO2max in men (43.10 ± 1.40 to 48.96 ± 1.42 ml·kg⁻¹·min⁻¹) and women (35.98 ± 1.60 to 40.22 ± 1.62 ml·kg⁻¹·min⁻¹) and decreased body fat percentage in men (22.2 ± 1.3 to 18.0 ± 1.3) and women (26.6 ± 2.0 to 23.2 ± 2.0). These improvements were significant across all levels of initial fitness. Significant correlations between absolute oxygen consumption and oxygen consumption relative to body weight were found in both men (r = 0.83, p < 0.001) and women (r = 0.94, p < 0.001), indicating that HIPT improved VO2max scaled to body weight independent of changes to body composition. Our data show that HIPT significantly improves VO2max and body composition in subjects of both genders across all levels of fitness.
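    The pre/post comparison above rests on a paired t-test. Computed from first principles it looks like this, on synthetic data loosely shaped like the reported men's VO2max values (not the study's raw data):

```python
import numpy as np

# Paired t-test: t = mean(d) / (sd(d) / sqrt(n)) on the per-subject
# differences d. All numbers here are simulated for illustration.
rng = np.random.default_rng(7)
pre = rng.normal(43.1, 1.4, size=23)        # toy "before" VO2max values
post = pre + rng.normal(5.9, 1.0, size=23)  # toy "after": mean gain ~5.9
d = post - pre
t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
print(f"t({len(d) - 1}) = {t:.2f}")         # compare against t(22) critical
```

    With 22 degrees of freedom the two-sided 5% critical value is about 2.07, so any |t| above that threshold corresponds to p < 0.05 as reported.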

  14. A fast color image enhancement algorithm based on Max Intensity Channel

    Science.gov (United States)

    Sun, Wei; Han, Long; Guo, Baolong; Jia, Wenyan; Sun, Mingui

    2014-03-01

    In this paper, we extend image enhancement techniques based on the retinex theory imitating human visual perception of scenes containing high illumination variations. This extension achieves simultaneous dynamic range modification, color consistency, and lightness rendition without multi-scale Gaussian filtering, which has a certain halo effect. The reflection component is analyzed based on the illumination and reflection imaging model. A new prior named Max Intensity Channel (MIC) is implemented assuming that the reflections of some points in the scene are very high in at least one color channel. Using this prior, the illumination of the scene is obtained directly by performing a gray-scale closing operation and a fast cross-bilateral filtering on the MIC of the input color image. Consequently, the reflection component of each RGB color channel can be determined from the illumination and reflection imaging model. The proposed algorithm estimates the illumination component, which is relatively smooth and maintains the edge details in different regions. A satisfactory color rendition is achieved for a class of images that do not satisfy the gray-world assumption implicit to the theoretical foundation of the retinex. Experiments are carried out to compare the new method with several spatial and transform domain methods. Our results indicate that the new method is superior in enhancement applications, improves computation speed, and performs better for images with high illumination variations than other methods. Further comparisons of images from the National Aeronautics and Space Administration and a wearable camera, eButton, have shown a high performance of the new method with better color restoration and preservation of image details.
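    The described pipeline (take the per-pixel maximum over color channels as the MIC prior, estimate illumination with a gray-scale closing plus smoothing, then divide each channel by that estimate) can be sketched in plain numpy. A box blur stands in for the paper's fast cross-bilateral filter, and all kernel sizes are illustrative.

```python
import numpy as np

def grey_closing(a, k=3):
    """Gray-scale closing (dilation then erosion) with a k x k square
    structuring element, written with plain numpy padding tricks."""
    def filt(x, op):
        p = k // 2
        xp = np.pad(x, p, mode="edge")
        windows = [xp[i:i + x.shape[0], j:j + x.shape[1]]
                   for i in range(k) for j in range(k)]
        return op(np.stack(windows), axis=0)
    return filt(filt(a, np.max), np.min)

def box_blur(a, k=5):
    """Simple box blur standing in for the paper's fast cross-bilateral
    filter (which additionally preserves edges)."""
    p = k // 2
    xp = np.pad(a, p, mode="edge")
    windows = [xp[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(k) for j in range(k)]
    return np.mean(np.stack(windows), axis=0)

def enhance_mic(rgb, eps=1e-6):
    """MIC-based enhancement sketch: the per-pixel maximum over the RGB
    channels approximates the illumination after closing and smoothing;
    dividing each channel by it recovers the reflection component."""
    mic = rgb.max(axis=2)                 # Max Intensity Channel prior
    illum = box_blur(grey_closing(mic))   # smooth illumination estimate
    return np.clip(rgb / (illum[..., None] + eps), 0.0, 1.0)

# Uniformly dim input: the estimated illumination equals the input level,
# so the recovered reflection is pushed toward full brightness.
flat = np.full((16, 16, 3), 0.25)
out = enhance_mic(flat)
print(out.shape, round(float(out.mean()), 4))
```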

  15. Ototoxicity After Intensity-Modulated Radiation Therapy and Cisplatin-Based Chemotherapy in Children With Medulloblastoma

    International Nuclear Information System (INIS)

    Paulino, Arnold C.; Lobo, Mark; Teh, Bin S.; Okcu, M. Fatih; South, Michael; Butler, E. Brian; Su, Jack; Chintagumpala, Murali

    2010-01-01

    Purpose: To report the incidence of Pediatric Oncology Group (POG) Grade 3 or 4 ototoxicity in a cohort of patients treated with craniospinal irradiation (CSI) followed by posterior fossa (PF) and/or tumor bed (TB) boost using intensity-modulated radiation therapy (IMRT). Methods and Materials: From 1998 to 2006, 44 patients with medulloblastoma were treated with CSI followed by IMRT to the PF and/or TB and cisplatin-based chemotherapy. Patients with standard-risk disease were treated with 18 to 23.4 Gy CSI followed by either a (1) PF boost to 36 Gy and TB boost to 54 to 55.8 Gy or (2) TB boost to 55.8 Gy. Patients with high-risk disease received 36 to 39.6 Gy CSI followed by a (1) PF boost to 54 to 55.8 Gy, (2) PF boost to 45 Gy and TB boost to 55.8 Gy, or (3) TB boost to 55.8 Gy. Median audiogram follow-up was 41 months (range, 11-92.4 months). Results: POG Grade 0, 1, 2, 3, and 4 ototoxicity was found in 29, 32, 11, 13, and 3 ears, respectively, with POG Grade 3 or 4 accounting for 18.2% of cases. There was a statistically significant difference in mean cochlear radiation dose (Dmean) according to degree of ototoxicity, with cochlear Dmean increasing with severity of hearing loss (p = 0.027). Conclusions: Severe ototoxicity was seen in 18.2% of ears in children treated with IMRT boost and cisplatin-based chemotherapy. Increasing dose to the cochlea was associated with increasing severity of hearing loss.

  16. Zoning of the Russian Federation territory based on forest management and forest use intensity

    Directory of Open Access Journals (Sweden)

    A. A. Martynyuk

    2016-02-01

    Issues of forest management intensification have long been important to all aspects of Russian forest sector development. Sufficient research has been done in silviculture, forest planning and forest economics to address forest management intensification targets, and systems of national forest management and forest economics zoning have been developed that account for the specifics of timber processing and forest area infrastructure. Despite considerable experience in sustainable forest management, intensification has so far been pursued through the development of new woodlands without proper consideration of forest regeneration and sustainable forest management operations. This has resulted in forest resource depletion and the unfavorable substitution of coniferous forests with less valuable softwood ones over considerable territories (especially those accessible for transport). The situation is complicated because the degree of forest ecosystem change is higher in territories with high potential productivity. These ongoing changes, combined with the present forest management system, have produced a situation in which development of new woodlands is impossible without heavy investment in road construction, while road construction is unfeasible given the distances to timber processing facilities. Meanwhile, changes in forest legislation, the availability of forest lease holding, and promising post-logging forest regeneration technologies create new opportunities to increase timber volumes through procedures that practically exclude development of virgin woodlands. With regard to the above, the Russian territory was zoned on the basis of key factors that define forest management and forest use intensification, namely forest ecosystem potential productivity and area transport accessibility. Based on available data and a GIS analysis approach (taking into consideration the value of various factors), the Russian Federation forest resources have been

  17. rFRET: A comprehensive, Matlab-based program for analyzing intensity-based ratiometric microscopic FRET experiments.

    Science.gov (United States)

    Nagy, Peter; Szabó, Ágnes; Váradi, Tímea; Kovács, Tamás; Batta, Gyula; Szöllősi, János

    2016-04-01

    Fluorescence or Förster resonance energy transfer (FRET) remains one of the most widely used methods for assessing protein clustering and conformation. Although it is a method with solid physical foundations, many applications of FRET fall short of providing quantitative results due to inappropriate calibration and controls. This shortcoming is especially valid for microscopy where currently available tools have limited or no capability at all to display parameter distributions or to perform gating. Since users of multiparameter flow cytometry usually apply these tools, the absence of these features in applications developed for microscopic FRET analysis is a significant limitation. Therefore, we developed a graphical user interface-controlled Matlab application for the evaluation of ratiometric, intensity-based microscopic FRET measurements. The program can calculate all the necessary overspill and spectroscopic correction factors and the FRET efficiency and it displays the results on histograms and dot plots. Gating on plots and mask images can be used to limit the calculation to certain parts of the image. It is an important feature of the program that the calculated parameters can be determined by regression methods, maximum likelihood estimation (MLE) and from summed intensities in addition to pixel-by-pixel evaluation. The confidence interval of calculated parameters can be estimated using parameter simulations if the approximate average number of detected photons is known. The program is not only user-friendly, but it provides rich output, it gives the user freedom to choose from different calculation modes and it gives insight into the reliability and distribution of the calculated parameters. © 2016 International Society for Advancement of Cytometry.
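The pixel-by-pixel, spillover-corrected efficiency calculation can be illustrated with the standard three-channel (donor, transfer, acceptor) scheme. The exact correction factors and symbols used by rFRET may differ; the images and factors below are synthetic:

```python
import numpy as np

def fret_efficiency(i_dd, i_da, i_aa, s1, s2, alpha):
    """Spillover-corrected sensitized emission, then pixel-by-pixel efficiency.
    s1/s2 are donor bleed-through and acceptor cross-excitation factors;
    alpha relates quenched donor signal to sensitized emission."""
    fc = i_da - s1 * i_dd - s2 * i_aa          # corrected sensitized emission
    with np.errstate(divide="ignore", invalid="ignore"):
        e = fc / (fc + alpha * i_dd)
    return np.clip(np.nan_to_num(e), 0.0, 1.0)

# Synthetic 2x2 "images" built so the true efficiency is exactly 1/3
# (with alpha = 1, E = fc / (fc + i_dd) and fc_true = i_dd / 2):
i_dd = np.array([[100.0, 80.0], [60.0, 40.0]])          # donor channel
i_aa = np.full((2, 2), 200.0)                           # acceptor channel
fc_true = np.array([[50.0, 40.0], [30.0, 20.0]])        # sensitized emission
i_da = 0.3 * i_dd + 0.1 * i_aa + fc_true                # transfer channel
eff = fret_efficiency(i_dd, i_da, i_aa, s1=0.3, s2=0.1, alpha=1.0)
```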

  18. Real time quantitative phase microscopy based on single-shot transport of intensity equation (ssTIE) method

    Science.gov (United States)

    Yu, Wei; Tian, Xiaolin; He, Xiaoliang; Song, Xiaojun; Xue, Liang; Liu, Cheng; Wang, Shouyu

    2016-08-01

    Microscopy based on the transport of intensity equation provides quantitative phase distributions, which opens another perspective for cellular observation. However, it requires multi-focal image capture, and mechanical or electrical scanning limits its real-time capability in sample detection. Here, to break through this restriction, real-time quantitative phase microscopy based on a single-shot transport of intensity equation method is proposed. A programmed phase mask is designed to realize simultaneous multi-focal image recording without any scanning; thus, phase distributions can be quantitatively retrieved in real time. It is believed the proposed method can potentially be applied in various biological and medical applications, especially live cell imaging.
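A minimal Fourier-space TIE solver gives the flavor of the phase-retrieval step once two defocused intensity images are available (the single-shot optics, i.e. the programmed phase mask, are outside this sketch). A uniform in-focus intensity is assumed, and the sign/normalization conventions are one common choice:

```python
import numpy as np

def tie_phase(i_minus, i_plus, dz, wavelength, pixel, i0=1.0, reg=1e-9):
    """Recover phase from two defocused intensity images via the TIE,
    inverting the Laplacian in Fourier space with Tikhonov regularization."""
    k = 2.0 * np.pi / wavelength
    didz = (i_plus - i_minus) / (2.0 * dz)     # axial intensity derivative
    fy = np.fft.fftfreq(didz.shape[0], d=pixel)
    fx = np.fft.fftfreq(didz.shape[1], d=pixel)
    f2 = fx[None, :] ** 2 + fy[:, None] ** 2
    # TIE with uniform intensity: laplacian(phi) = -(k / i0) * dI/dz.
    phi_hat = (k / i0) * np.fft.fft2(didz) / (4.0 * np.pi ** 2 * f2 + reg)
    return np.real(np.fft.ifft2(phi_hat))

# Sanity check: identical defocus images (no intensity transport) -> flat phase.
phase = tie_phase(np.ones((64, 64)), np.ones((64, 64)),
                  dz=1e-6, wavelength=632.8e-9, pixel=1e-6)
```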

  19. Human Capital Intensity in Technology-Based Firms Located in Portugal: Do Foreign Multinationals Make a Difference?

    OpenAIRE

    Ana Teresa Tavares; Aurora A. C. Teixeira

    2005-01-01

    This paper contributes to the scarce empirical literature on the impact of foreign ownership on human capital intensity. New evidence is provided, based on a comprehensive, large-scale survey of technology-based firms located in Portugal. Using two alternative measures of human capital (one based on skills, another on education), the key findings are that: (1) foreign ownership directly (and significantly) impacts firms' general human capital (education); (2) foreign ownership indirectly (...

  20. Efficiency of analytical and sampling-based uncertainty propagation in intensity-modulated proton therapy

    Science.gov (United States)

    Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.

    2017-07-01

    The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, and dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ (3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity.
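The closed-form propagation at the heart of APM can be illustrated in one dimension: the expectation of a Gaussian pencil beam under a Gaussian setup error is again a Gaussian, with the two variances simply added. A toy sketch with a Monte Carlo cross-check (not the paper's implementation):

```python
import numpy as np

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def expected_dose(x, mu0, sigma_beam, sigma_setup):
    # Convolving two Gaussians adds their variances, giving the mean dose
    # under setup uncertainty in closed form -- no scenario sampling needed.
    return gauss(x, mu0, np.sqrt(sigma_beam ** 2 + sigma_setup ** 2))

# Monte Carlo cross-check on a 1-D lateral dose profile (units: mm).
rng = np.random.default_rng(1)
x = np.linspace(-20.0, 20.0, 201)
shifts = rng.normal(0.0, 2.0, size=20000)           # setup errors, sigma = 2 mm
mc = np.mean([gauss(x, s, 5.0) for s in shifts], axis=0)
analytic = expected_dose(x, 0.0, 5.0, 2.0)
```

The widened analytic profile matches the sampled mean to within Monte Carlo noise, which is the accuracy-versus-speed argument the abstract makes at full scale.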

  1. Intense heavy ion beam-induced effects in carbon-based stripper foils

    Energy Technology Data Exchange (ETDEWEB)

    Kupka, Katharina

    2016-08-15

    Amorphous carbon or carbon-based stripper foils are commonly applied in accelerator technology for electron stripping of ions. At the planned facility for antiproton and ion research (FAIR) at the Helmholtzzentrum fuer Schwerionenforschung (GSI), Darmstadt, thin carbon stripper foils provide an option for directly delivering ions of intermediate charge states to the heavy ion synchrotron, SIS 18, in order to mitigate space charge limitations during high-intensity operation. When high end-energies are desired in the synchrotron, a second stripping process by a thicker carbon foil provides ions of higher charge states for injection into the SIS18. High beam intensities and a pulsed beam structure as foreseen at FAIR pose new challenges to the stripper foils, which experience enhanced degradation by radiation damage, thermal effects, and stress waves. In order to ensure reliable accelerator operation, radiation-hard stripper foils are required. This thesis aims at a better understanding of the processes leading to degradation of carbon-based thin foils. Special focus is placed on ion-beam induced structure and physical property changes and on the influence of different beam parameters. Irradiation experiments were performed at the M3-beamline of the universal linear accelerator (UNILAC) at GSI, using swift heavy ion beams with different pulse lengths and repetition rates. Tested carbon foils were standard amorphous carbon stripper foils produced by the GSI target laboratory, as well as commercial amorphous and diamond-like carbon foils and buckypaper foils. Microstructural changes were investigated with various methods such as optical microscopy, scanning electron microscopy (SEM), profilometry and chromatic aberration measurements. For the investigation of structural changes X-ray photoelectron spectroscopy (XPS), Raman spectroscopy, high resolution transmission electron microscopy (HRTEM), in-situ Fourier-transform infrared spectroscopy (FTIR) and small angle X

  2. Intense heavy ion beam-induced effects in carbon-based stripper foils

    International Nuclear Information System (INIS)

    Kupka, Katharina

    2016-08-01

    Amorphous carbon or carbon-based stripper foils are commonly applied in accelerator technology for electron stripping of ions. At the planned facility for antiproton and ion research (FAIR) at the Helmholtzzentrum fuer Schwerionenforschung (GSI), Darmstadt, thin carbon stripper foils provide an option for directly delivering ions of intermediate charge states to the heavy ion synchrotron, SIS 18, in order to mitigate space charge limitations during high-intensity operation. When high end-energies are desired in the synchrotron, a second stripping process by a thicker carbon foil provides ions of higher charge states for injection into the SIS18. High beam intensities and a pulsed beam structure as foreseen at FAIR pose new challenges to the stripper foils, which experience enhanced degradation by radiation damage, thermal effects, and stress waves. In order to ensure reliable accelerator operation, radiation-hard stripper foils are required. This thesis aims at a better understanding of the processes leading to degradation of carbon-based thin foils. Special focus is placed on ion-beam induced structure and physical property changes and on the influence of different beam parameters. Irradiation experiments were performed at the M3-beamline of the universal linear accelerator (UNILAC) at GSI, using swift heavy ion beams with different pulse lengths and repetition rates. Tested carbon foils were standard amorphous carbon stripper foils produced by the GSI target laboratory, as well as commercial amorphous and diamond-like carbon foils and buckypaper foils. Microstructural changes were investigated with various methods such as optical microscopy, scanning electron microscopy (SEM), profilometry and chromatic aberration measurements. For the investigation of structural changes X-ray photoelectron spectroscopy (XPS), Raman spectroscopy, high resolution transmission electron microscopy (HRTEM), in-situ Fourier-transform infrared spectroscopy (FTIR) and small angle X

  3. Intensity-based segmentation and visualization of cells in 3D microscopic images using the GPU

    Science.gov (United States)

    Kang, Mi-Sun; Lee, Jeong-Eom; Jeon, Woong-ki; Choi, Heung-Kook; Kim, Myoung-Hee

    2013-02-01

    3D microscopy images contain enormous amounts of data, rendering 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). To work around this, many users crop a region of interest (ROI) of the input image to a small size. Although this reduces cost and time, there are drawbacks at the image processing level: the selected ROI strongly depends on the user, and original image information is lost. To mitigate these problems, we developed a 3D microscopy image processing tool on a graphics processing unit (GPU). Our tool provides efficient and varied automatic thresholding methods to achieve intensity-based segmentation of 3D microscopy images. Users can select the algorithm to be applied. Further, the image processing tool provides visualization of segmented volume data and allows setting the scale, translation, etc. using a keyboard and mouse. However, the rapidly visualized 3D objects still need to be analyzed to obtain information useful to biologists. To analyze 3D microscopic images, we need quantitative data from the images. Therefore, we label the segmented 3D objects within all 3D microscopic images and obtain quantitative information on each labeled object. This information can be used as a classification feature. A user can select the object to be analyzed; our tool displays the selected object in a new window, so that more details of the object can be observed. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times under matched specifications and configurations.
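The segmentation pipeline sketched in the abstract (automatic intensity threshold, then labeling of 3-D objects) can be illustrated on the CPU with Otsu's method plus connected-component labeling; the paper's GPU kernels are not reproduced here:

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(volume, bins=256):
    """Otsu's method: pick the threshold maximizing between-class variance.
    One of the automatic thresholds such a tool would offer."""
    hist, edges = np.histogram(volume, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                              # background class weight
    w1 = 1.0 - w0                                  # foreground class weight
    m0 = np.cumsum(p * centers) / np.where(w0 > 0, w0, 1)
    m1 = (np.sum(p * centers) - np.cumsum(p * centers)) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (m0 - m1) ** 2
    return centers[np.argmax(between)]

# Synthetic 3-D "microscopy" volume: two bright blobs on a dark background.
vol = np.zeros((20, 20, 20))
vol[3:7, 3:7, 3:7] = 1.0
vol[12:16, 12:16, 12:16] = 1.0
t = otsu_threshold(vol)
labels, n_objects = ndimage.label(vol > t)         # per-object labels for stats
```

Per-object quantities (volume, centroid, bounding box) then follow from the label image, which is what feeds the classification features mentioned above.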

  4. Graph-based geometric-iconic guide-wire tracking.

    Science.gov (United States)

    Honnorat, Nicolas; Vaillant, Régis; Paragios, Nikos

    2011-01-01

    In this paper we introduce a novel hybrid graph-based approach for guide-wire tracking. The image support is captured by steerable filters and improved through tensor voting. Then, a graphical model is considered that represents guide-wire extraction/tracking through a B-spline control-point model. Points with strong geometric interest (landmarks) are automatically detected and anchored to this representation. Tracking is then performed through discrete MRFs that optimize the spatio-temporal positions of the control points while establishing temporal landmark correspondences. Promising results demonstrate the potential of our method.
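The B-spline control-point model underlying the tracker can be sketched with SciPy: a handful of control points defines a smooth open curve standing in for the guide-wire. The MRF optimization of control-point positions and landmark correspondences is not shown:

```python
import numpy as np
from scipy.interpolate import BSpline

# A guide-wire-like open curve represented by a few 2-D control points.
ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [2.5, 2.5], [4.0, 1.0], [5.0, 0.0]])
k = 3                                              # cubic B-spline

# Clamped knot vector (degree+1 repeats at each end) so the curve starts at
# the first control point and ends at the last one.
knots = np.concatenate([np.zeros(k),
                        np.linspace(0.0, 1.0, len(ctrl) - k + 1),
                        np.ones(k)])
curve = BSpline(knots, ctrl, k)
pts = curve(np.linspace(0.0, 1.0, 100))            # densely sampled wire shape
```

In a tracker, the discrete optimizer would move `ctrl` between frames and re-evaluate `curve`; the spline keeps the wire smooth regardless of where the control points land.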

  5. Analysis of velocity planning interpolation algorithm based on NURBS curve

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

    2017-04-01

    To reduce the interpolation time and maximum interpolation error caused by velocity planning in NURBS (Non-Uniform Rational B-Spline) interpolation, this paper proposes a velocity planning interpolation algorithm based on the NURBS curve. First, a second-order Taylor expansion is applied to the parameter of the rational NURBS curve representation. Then, velocity planning is combined with NURBS curve interpolation. Finally, simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems.
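The Taylor-expansion parameter update used by such interpolators can be sketched on a toy parametric curve. One common second-order form is u_{k+1} = u_k + v·Ts/|C'(u)| − (v·Ts)²·(C'·C'')/(2|C'|⁴); numerical derivatives stand in here for the analytic NURBS derivatives a real interpolator would use:

```python
import numpy as np

def taylor_step(c, u, v, ts, h=1e-5):
    """One second-order Taylor interpolation step along parametric curve c(u),
    advancing the parameter so the tool moves ~v*ts in arc length per cycle."""
    d1 = (c(u + h) - c(u - h)) / (2.0 * h)              # C'(u), central diff
    d2 = (c(u + h) - 2.0 * c(u) + c(u - h)) / h ** 2    # C''(u)
    n1 = np.linalg.norm(d1)
    du = v * ts / n1 - (v * ts) ** 2 * np.dot(d1, d2) / (2.0 * n1 ** 4)
    return u + du

# Toy curve: a circle of radius 10 mm, so arc length per step is easy to check.
circle = lambda u: np.array([10.0 * np.cos(2 * np.pi * u),
                             10.0 * np.sin(2 * np.pi * u)])
u, v, ts = 0.2, 50.0, 0.001            # feedrate 50 mm/s, 1 ms interpolation cycle
u_next = taylor_step(circle, u, v, ts)
```

On the circle |C'| = 20π, so the parameter advance times 20π should equal the commanded arc length v·Ts = 0.05 mm per cycle.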

  6. Electrophysiological Correlates of Changes in Reaction Time Based on Stimulus Intensity

    Science.gov (United States)

    Lakhani, Bimal; Vette, Albert H.; Mansfield, Avril; Miyasike-daSilva, Veronica; McIlroy, William E.

    2012-01-01

    Background Although reaction time is commonly used as an indicator of central nervous system integrity, little is currently understood about the mechanisms that determine processing time. In the current study, we are interested in determining the differences in electrophysiological events associated with significant changes in reaction time that could be elicited by changes in stimulus intensity. The primary objective is to assess the effect of increasing stimulus intensity on the latency and amplitude of afferent inputs to the somatosensory cortex, and their relation to reaction time. Methods Median nerve stimulation was applied to the non-dominant hand of 12 healthy young adults at two different stimulus intensities (HIGH & LOW). Participants were asked to either press a button as fast as possible with their dominant hand or remain quiet following the stimulus. Electroencephalography was used to measure somatosensory evoked potentials (SEPs) and event related potentials (ERPs). Electromyography from the flexor digitorum superficialis of the button-pressing hand was used to assess reaction time. Response time was the time of button press. Results Reaction time and response time were significantly shorter following the HIGH intensity stimulus compared to the LOW intensity stimulus. There were no differences in SEP (N20 & P24) peak latencies and peak-to-peak amplitude for the two stimulus intensities. ERPs, locked to response time, demonstrated a significantly larger pre-movement negativity to positivity following the HIGH intensity stimulus over the Cz electrode. Discussion This work demonstrates that rapid reaction times are not attributable to the latency of afferent processing from the stimulated site to the somatosensory cortex, and that latency reductions occur further along the sensorimotor transformation pathway. 
Evidence from ERPs indicates that frontal planning areas such as the supplementary motor area may play a role in transforming the elevated sensory

  7. An agent based architecture for high-risk neonate management at neonatal intensive care unit.

    Science.gov (United States)

    Malak, Jaleh Shoshtarian; Safdari, Reza; Zeraati, Hojjat; Nayeri, Fatemeh Sadat; Mohammadzadeh, Niloofar; Farajollah, Seide Sedighe Seied

    2018-01-01

    In recent years, the use of new tools and technologies has decreased the neonatal mortality rate. Despite the positive effect of using these technologies, decisions are complex and uncertain in critical conditions, when the neonate is preterm or has a low birth weight or malformations. There is a need to automate the high-risk neonate management process by creating real-time and more precise decision support tools. The aim was to create a collaborative and real-time environment to manage neonates with critical conditions at the NICU (Neonatal Intensive Care Unit) and to overcome the weaknesses of high-risk neonate management by applying a multi-agent-based analysis and design methodology as a new solution for NICU management. This study was basic research in medical informatics method development, carried out in 2017. The requirement analysis was done by reviewing articles on NICU decision support systems. PubMed, Science Direct, and IEEE databases were searched. Only English articles published after 1990 were included; a needs assessment was also done by reviewing the extracted features and current processes in the NICU environment where the research was conducted. We analyzed the requirements and identified the main system roles (agents) and interactions through a comparative study of existing NICU decision support systems. The Universal Multi Agent Platform (UMAP) was applied to implement a prototype of our multi-agent-based high-risk neonate management architecture. Local environment agents interacted inside a container, and each container interacted with external resources, including other NICU systems and consultation centers. In the NICU container, the main identified agents were reception, monitoring, NICU registry, and outcome prediction, which interacted with human agents including nurses and physicians. Managing patients at NICU units requires online data collection, real-time collaboration, and management of many components. 
Multi-agent systems are applied as

  8. Emergy-based comparative analysis of energy intensity in different industrial systems.

    Science.gov (United States)

    Liu, Zhe; Geng, Yong; Wang, Hui; Sun, Lu; Ma, Zhixiao; Tian, Xu; Yu, Xiaoman

    2015-12-01

    With rapid economic development, China's energy consumption has become the second largest in the world, after only the USA. Usually, measuring energy consumption intensity or efficiency applies a heat unit, joules per unit of gross domestic production (GDP) or coal equivalent per GDP. However, this measuring approach is oriented only by the conversion coefficient of heat combustion, which does not match the real value of the materials during their formation in the ecological system. This study applied emergy analysis to evaluate energy consumption intensity to fill this gap. Emergy analysis is considered a bridge between the ecological system and the economic system, which can evaluate the contribution of ecological products and services as well as the load placed on environmental systems. In this study, an emergy indicator for measuring the energy consumption intensity of primary energy was proposed. Industrial production is assumed to be the main contributor to energy consumption compared to the primary and tertiary industries. Therefore, this study validated the method by investigating two industrial case studies, Dalian Economic Development Area (DEDA) and Fuzhou economic and technological area (FETA), to compare their energy consumption intensity between different kinds of industrial systems and investigate the reasons behind the differences. The results show that primary energy consumption (PEC) of DEDA was much higher than that of FETA from 2006 to 2010, and its primary energy consumption ratio (PECR) relative to total emergy involvement declined dramatically from 2006 to 2010. At the same time, the nonrenewable energy share of PEC in DEDA was also much higher than that in FETA. The reason was that the industrial structure of DEDA was mainly formed by heavy industries such as the petrochemical industry, manufacturing industries, and high energy-intensive industries, whereas FETA was formed by electronic business, the food industry, and light industries. Although
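The PECR indicator described above reduces to simple emergy bookkeeping: the emergy of primary energy inputs divided by total emergy involvement. A minimal sketch with illustrative figures (not DEDA/FETA data):

```python
# Hypothetical emergy ledger for an industrial park, in solar emjoules (sej).
flows = {
    "coal":             4.0e20,   # nonrenewable primary energy
    "electricity":      2.5e20,
    "renewable_inputs": 0.5e20,
    "materials":        6.0e20,
    "labor_services":   2.0e20,
}

# Primary energy consumption (PEC) and its ratio to total emergy involvement.
primary_energy = flows["coal"] + flows["electricity"] + flows["renewable_inputs"]
total_emergy = sum(flows.values())
pecr = primary_energy / total_emergy    # primary energy consumption ratio
```

Comparing `pecr` across parks (or years) is the kind of comparison the study performs between DEDA and FETA.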

  9. Influence of an Intensive, Field-Based Life Science Course on Preservice Teachers' Self-Efficacy for Environmental Science Teaching

    Science.gov (United States)

    Trauth-Nare, Amy

    2015-01-01

    Personal and professional experiences influence teachers' perceptions of their ability to implement environmental science curricula and to positively impact students' learning. The purpose of this study was twofold: to determine what influence, if any, an intensive field-based life science course and service learning had on preservice teachers'…

  10. Web-Based Treatment Program Using Intensive Therapeutic Contact for Patients With Eating Disorders : Before-After Study

    NARCIS (Netherlands)

    ter Huurne, E.D.; Postel, Marloes Gerda; de Haan, H.A.; Drossaert, Constance H.C.; Jong, C.A.J.

    2013-01-01

    Background: Although eating disorders are common in the Netherlands, only a few patients are treated by mental health care professionals. To reach and treat more patients with eating disorders, Tactus Addiction Treatment developed a web-based treatment program with asynchronous and intensive

  11. Design of a Thermoacoustic Sensor for Low Intensity Ultrasound Measurements Based on an Artificial Neural Network.

    Science.gov (United States)

    Xing, Jida; Chen, Jie

    2015-06-23

    In therapeutic ultrasound applications, accurate ultrasound output intensities are crucial because the physiological effects of therapeutic ultrasound are very sensitive to the intensity and duration of these applications. Although radiation force balance is a benchmark technique for measuring ultrasound intensity and power, it is costly, difficult to operate, and compromised by noise vibration. To overcome these limitations, the development of a low-cost, easy to operate, and vibration-resistant alternative device is necessary for rapid ultrasound intensity measurement. Therefore, we proposed and validated a novel two-layer thermoacoustic sensor using an artificial neural network technique to accurately measure low ultrasound intensities between 30 and 120 mW/cm². The first layer of the sensor design is a cylindrical absorber made of plexiglass, followed by a second layer composed of polyurethane rubber with a high attenuation coefficient to absorb extra ultrasound energy. The sensor determined ultrasound intensities according to a temperature elevation induced by heat converted from incident acoustic energy. Compared with our previous one-layer sensor design, the new two-layer sensor enhanced the ultrasound absorption efficiency to provide more rapid and reliable measurements. Using a three-dimensional model in the K-wave toolbox, our simulation of the ultrasound propagation process demonstrated that the two-layer design is more efficient than the single layer design. We also integrated an artificial neural network algorithm to compensate for the large measurement offset. After obtaining multiple parameters of the sensor characteristics through calibration, the artificial neural network is built to correct temperature drifts and increase the reliability of our thermoacoustic measurements through about ten seconds of iterative training. The performance of the artificial neural network method was validated through a series of experiments. Compared to our previous
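As a minimal stand-in for the sensor's neural-network calibration, the sketch below trains a tiny one-hidden-layer network to map temperature rise back to ultrasound intensity. The calibration data, sensor curve, and network size are all hypothetical; the paper's actual network and correction scheme are more involved:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration data: temperature rise (K) after a fixed sonication
# vs. intensity (mW/cm^2), with a mild nonlinearity and noise standing in for
# the real sensor physics.
intensity = rng.uniform(30.0, 120.0, size=256)
temp_rise = 0.01 * intensity + 2e-5 * intensity ** 2 + rng.normal(0, 0.01, 256)

# Normalize, then fit a 1-8-1 tanh network by full-batch gradient descent.
x = (temp_rise - temp_rise.mean()) / temp_rise.std()
y = (intensity - intensity.mean()) / intensity.std()
w1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
w2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(x[:, None] @ w1 + b1)            # hidden activations
    pred = (h @ w2 + b2).ravel()
    err = pred - y                               # dLoss/dpred (up to 1/n)
    g2 = h.T @ err[:, None] / len(x)
    gb2 = err.mean()
    gh = err[:, None] @ w2.T * (1 - h ** 2)      # backprop through tanh
    g1 = x[:, None].T @ gh / len(x)
    gb1 = gh.mean(axis=0)
    w2 -= lr * g2; b2 -= lr * gb2; w1 -= lr * g1; b1 -= lr * gb1

# De-normalized training error, in mW/cm^2.
rmse = np.sqrt(np.mean(((pred * intensity.std() + intensity.mean()) - intensity) ** 2))
```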

  12. Temperature-dependent of Nonlinear Optical Conductance of Graphene-based Systems in High-intensity Terahertz Field

    Institute of Scientific and Technical Information of China (English)

    Jing Lv; Rui-yang Yuan; Hui Yan

    2014-01-01

    For multi-photon processes with linear dispersion in a high-intensity terahertz (THz) field, we have systematically investigated the temperature-dependent nonlinear optical response of graphene-based systems, including single-layer graphene, graphene superlattices and gapped graphene. In the intrinsic single-layer graphene system, we demonstrate that, at low temperature, the third- and fifth-order nonlinear optical conductivities are respectively five and ten orders of magnitude larger than the universal conductivity for high-intensity, low-frequency THz waves. In the graphene superlattice and gapped graphene systems, the optical responses are enhanced because of the anisotropic massless and massive Dirac fermions.

  14. Analysis of Protein by Spectrophotometric and Computer Colour Based Intensity Method from Stem of Pea (Pisum sativum at Different Stages

    Directory of Open Access Journals (Sweden)

    Afsheen Mushtaque Shah

    2010-12-01

    In this study, proteins from pea plants were analyzed at three different growth stages of the stem using the spectrophotometric (i.e., Lowry and Bradford) quantitative methods and a computer colour-intensity-based method. Though spectrophotometric methods are regarded as classical methods, we report an alternative computer-based method which gave comparable results. Computer software was developed for protein analysis, which is an easier, time- and cost-saving method compared to the classical methods.
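The colour-intensity approach amounts to calibrating mean pixel intensity against known protein standards and reading unknowns off the fitted curve, much as absorbance is used in Lowry/Bradford assays. A sketch with illustrative numbers (not the study's data):

```python
import numpy as np

# Calibration: mean pixel intensity of stained standards vs. known protein
# concentration (values are illustrative).
standards_conc = np.array([0.0, 0.5, 1.0, 1.5, 2.0])                # mg/ml
standards_intensity = np.array([10.0, 55.0, 98.0, 140.0, 185.0])    # mean pixel value

# Linear calibration curve: concentration as a function of intensity.
slope, intercept = np.polyfit(standards_intensity, standards_conc, 1)

# Estimate an unknown sample from its measured mean intensity.
unknown_intensity = 120.0
estimated_conc = slope * unknown_intensity + intercept
```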

  15. Neural stem cell sparing by linac based intensity modulated stereotactic radiotherapy in intracranial tumors

    International Nuclear Information System (INIS)

    Oehler, Julia; Brachwitz, Tim; Wendt, Thomas G; Banz, Nico; Walther, Mario; Wiezorek, Tilo

    2013-01-01

    Neurocognitive decline observed after radiotherapy (RT) for brain tumors in long-term survivors is attributed to radiation exposure of the hippocampus and the subventricular zone (SVZ). The potential for sparing both structures by optimized intensity modulated stereotactic radiotherapy (IMSRT) is investigated. Brain tumors were irradiated by stereotactic 3D conformal RT or IMSRT using an m3 collimator, optimized for the PTV and for sparing of the conventional OARs (lens, retina, optic nerve, chiasm, cochlea, brain stem and the medulla oblongata). Retrospectively, both hippocampi and the SVZ were added to the list of OARs, and their dose volume histograms were compared to those from two newly generated IMSRT plans using 7 or 14 beamlets (IMSRT-7, IMSRT-14) dedicated to optimized additional sparing of these structures. Conventional OAR constraints were kept constant. The impact of plan complexity and planning target volume (PTV) topography on sparing of both hippocampi and the SVZ, the conformity index (CI), the homogeneity index (HI) and quality of coverage (QoC) was analyzed. Limits of agreement were used to compare sparing of stem cell niches with either IMSRT-7 or IMSRT-14. The influence of treatment technique related to the topography ratio between PTV and OARs, realized in groups A-D, was assessed by a mixed model. In 47 patients CI (p ≤ 0.003) and HI (p < 0.001) improved with IMSRT-7 and IMSRT-14, while QoC remained stable (p ≥ 0.50), indicating no compromise in radiotherapy. 90% of normal brain was exposed to a significantly higher dose using IMSRT. IMSRT-7 plans resulted in significantly lower biologically effective doses at all four neural stem cell structures, while contralateral neural stem cells were better spared than ipsilateral. A further increase in the number of beamlets (IMSRT-14) did not improve sparing significantly, so IMSRT-7 and IMSRT-14 can be used interchangeably. Patients with tumors contacting neither the subventricular zone nor the cortex benefit

  16. Role-based support for intensive care nursing : A designer's perspective

    NARCIS (Netherlands)

    Melles, M.

    2011-01-01

    Design goals and design directions are formulated for the (digital) support of non-technical nursing tasks and skills in the intensive care unit (ICU), such as organizing work, evaluating care, coping with stress and dealing with poor team dynamics. A conceptual framework for ICU nursing was

  17. Memristor based computation-in-memory architecture for data-intensive applications

    NARCIS (Netherlands)

    Hamdioui, S.; Xie, L.; Nguyen, H.A.D.; Taouil, M.; Bertels, K.; Corporaal, H.; Jiao, H.; Catthoor, F.; Wouters, D.; Eike, L.; Van Lunteren, J.

    2015-01-01

    One of the most critical challenges for today's and future data-intensive and big-data problems is data storage and analysis. This paper first highlights some challenges of the newborn Big Data paradigm and shows that the increase of the data size has already surpassed the capabilities of today's

  18. Improving Intensity-Based Lung CT Registration Accuracy Utilizing Vascular Information

    Directory of Open Access Journals (Sweden)

    Kunlin Cao

    2012-01-01

    Full Text Available Accurate pulmonary image registration is a challenging problem when the lungs undergo large deformations. In this work, we present a nonrigid volumetric registration algorithm to track lung motion between a pair of intrasubject CT images acquired at different inflation levels and introduce a new vesselness similarity cost that improves intensity-only registration. Volumetric CT datasets from six human subjects were used in this study. The performance of four intensity-only registration algorithms was compared with and without adding the vesselness similarity cost function. Matching accuracy was evaluated using landmarks, the vessel tree, and fissure planes. The Jacobian determinant of the transformation was used to reveal the deformation pattern of local parenchymal tissue. The average matching error for intensity-only registration methods was on the order of 1 mm at landmarks and 1.5 mm on fissure planes. After adding the vesselness-preserving cost function, the landmark and fissure positioning errors decreased by approximately 25% and 30%, respectively. The vesselness cost function effectively helped improve the registration accuracy in regions near the thoracic cage and near the diaphragm for all the intensity-only registration algorithms tested, and also helped produce more consistent and more reliable patterns of regional tissue deformation.
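The Jacobian determinant mentioned above is a standard way to read local expansion or compression out of a registration transform. A minimal NumPy sketch, assuming the transform is given as a voxel-space displacement field `u` (the function name and the synthetic field are illustrative, not from the paper):

```python
import numpy as np

def jacobian_determinant(u):
    """Jacobian determinant of the transform x -> x + u(x).

    u: displacement field of shape (3, nz, ny, nx), in voxel units.
    Returns a (nz, ny, nx) map; values > 1 mean local expansion,
    values < 1 mean local compression of the tissue.
    """
    # grads[i, j] = d u_i / d x_j, computed by finite differences
    grads = np.array([np.gradient(u[i]) for i in range(3)])
    jac = grads + np.eye(3)[..., None, None, None]  # add identity: d(x + u)/dx
    # move the 3x3 matrix axes to the end so np.linalg.det can batch over voxels
    return np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))

# identity transform: determinant is 1 everywhere
u0 = np.zeros((3, 8, 8, 8))
print(np.allclose(jacobian_determinant(u0), 1.0))  # True
```

A uniform 10% stretch along each axis gives a determinant of 1.1³ at every voxel, which is a quick sanity check for any implementation of this map.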

  19. Design of a Thermoacoustic Sensor for Low Intensity Ultrasound Measurements Based on an Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Jida Xing

    2015-06-01

    Full Text Available In therapeutic ultrasound applications, accurate ultrasound output intensities are crucial because the physiological effects of therapeutic ultrasound are very sensitive to the intensity and duration of these applications. Although radiation force balance is a benchmark technique for measuring ultrasound intensity and power, it is costly, difficult to operate, and compromised by noise vibration. To overcome these limitations, the development of a low-cost, easy-to-operate, and vibration-resistant alternative device is necessary for rapid ultrasound intensity measurement. Therefore, we proposed and validated a novel two-layer thermoacoustic sensor using an artificial neural network technique to accurately measure low ultrasound intensities between 30 and 120 mW/cm2. The first layer of the sensor design is a cylindrical absorber made of plexiglass, followed by a second layer composed of polyurethane rubber with a high attenuation coefficient to absorb extra ultrasound energy. The sensor determines ultrasound intensities from the temperature elevation induced by heat converted from incident acoustic energy. Compared with our previous one-layer sensor design, the new two-layer sensor enhances the ultrasound absorption efficiency to provide more rapid and reliable measurements. Using a three-dimensional model in the k-Wave toolbox, our simulation of the ultrasound propagation process demonstrated that the two-layer design is more efficient than the single-layer design. We also integrated an artificial neural network algorithm to compensate for the large measurement offset. After obtaining multiple parameters of the sensor characteristics through calibration, the artificial neural network is built to correct temperature drifts and increase the reliability of our thermoacoustic measurements through about ten seconds of iterative training. The performance of the artificial neural network method was validated through a series of experiments. 
Compared

  20. Intensity changes in future extreme precipitation: A statistical event-based approach.

    Science.gov (United States)

    Manola, Iris; van den Hurk, Bart; de Moel, Hans; Aerts, Jeroen

    2017-04-01

    Short-lived precipitation extremes are often responsible for hazards in urban and rural environments, with economic and environmental consequences. Precipitation intensity is expected to increase by about 7% per degree of warming, according to the Clausius-Clapeyron (CC) relation. However, observations often show a much stronger increase in sub-daily values. In particular, the scaling of hourly summer precipitation from radar observations with dew point temperature (the Pi-Td relation) for the Netherlands suggests that for moderate to warm days the intensification of precipitation can be higher than 21% per degree of warming, that is, 3 times the expected CC relation. The rate of change depends on the initial precipitation intensity: low percentiles increase at a rate below CC, medium percentiles at 2CC, and moderate-high and high percentiles at 3CC. This non-linear statistical Pi-Td relation is suggested for use as a delta-transformation to project how a historic extreme precipitation event would intensify under future, warmer conditions. Here, the Pi-Td relation is applied to a selected historic extreme precipitation event to 'up-scale' its intensity to warmer conditions. Additionally, the selected historic event is simulated in the high-resolution, convection-permitting weather model Harmonie. The initial and boundary conditions are altered to represent future conditions. The comparison between the statistical and the numerical method of projecting the historic event to future conditions showed comparable intensity changes, which, depending on the initial percentile intensity, range from below CC to a 3CC rate of change per degree of warming. The model tends to overestimate the future intensities for the low and the very high percentiles, and the clouds are somewhat displaced due to small wind and convection changes. The total spatial cloud coverage in the model remains, as also in the statistical
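The delta-transformation idea can be sketched as a percentile-dependent CC scaling of an observed event. The percentile breakpoints and rates below (1CC below the median, 2CC for medium, 3CC for high percentiles) are illustrative stand-ins, not the fitted Pi-Td relation, which the abstract only describes qualitatively:

```python
import numpy as np

CC = 0.07  # Clausius-Clapeyron rate: fractional intensity increase per kelvin

def cc_rate(percentile):
    """Scaling rate as a multiple of CC, depending on the intensity percentile.

    Breakpoints (50, 90) are assumptions for this sketch.
    """
    if percentile < 50:
        return 1.0 * CC
    elif percentile < 90:
        return 2.0 * CC
    return 3.0 * CC

def delta_transform(precip, d_temp):
    """Scale each wet value by (1 + rate)**d_temp according to its percentile."""
    precip = np.asarray(precip, dtype=float)
    wet = precip > 0
    # empirical percentile of each wet value within the event
    pct = 100.0 * (np.argsort(np.argsort(precip[wet])) + 0.5) / wet.sum()
    rates = np.array([cc_rate(p) for p in pct])
    out = precip.copy()
    out[wet] = precip[wet] * (1.0 + rates) ** d_temp
    return out

event = np.array([0.0, 0.5, 2.0, 8.0, 30.0])  # hourly intensities, mm/h
print(delta_transform(event, d_temp=2.0))     # the highest values grow fastest
```

Dry pixels stay dry, and the strongest intensities are amplified at up to 3CC per degree, reproducing the qualitative behaviour described above.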

  1. Magnetic Resonance-Based Treatment Planning for Prostate Intensity-Modulated Radiotherapy: Creation of Digitally Reconstructed Radiographs

    International Nuclear Information System (INIS)

    Chen, Lili; Nguyen, Thai-Binh; Jones, Elan; Chen Zuoqun; Luo Wei; Wang Lu; Price, Robert A.; Pollack, Alan; Ma, C.-M. Charlie

    2007-01-01

    Purpose: To develop a technique to create magnetic resonance (MR)-based digitally reconstructed radiographs (DRR) for initial patient setup for routine clinical applications of MR-based treatment planning for prostate intensity-modulated radiotherapy. Methods and Materials: Twenty prostate cancer patients' computed tomography (CT) and MR images were used for the study. Computed tomography and MR images were fused. The pelvic bony structures, including femoral heads, pubic rami, ischium, and ischial tuberosity, that are relevant for routine clinical patient setup were manually contoured on axial MR images. The contoured bony structures were then assigned a bulk density of 2.0 g/cm3. The MR-based DRRs were generated. The accuracy of the MR-based DRRs was quantitatively evaluated by comparing MR-based DRRs with CT-based DRRs for these patients. For each patient, eight measuring points on both coronal and sagittal DRRs were used for quantitative evaluation. Results: The maximum difference in the mean values of these measurement points was 1.3 ± 1.6 mm, and the maximum difference in absolute positions was within 3 mm for the 20 patients investigated. Conclusions: Magnetic resonance-based DRRs are comparable to CT-based DRRs for prostate intensity-modulated radiotherapy and can be used for patient treatment setup when MR-based treatment planning is applied clinically.
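The bulk-density step can be illustrated in a few lines: voxels inside the contoured bone are assigned 2.0 g/cm3, everything else is treated as water-equivalent, and a DRR is a ray sum through the resulting density volume. The volume, "bone" mask, and parallel-beam geometry below are synthetic simplifications (clinical DRRs use divergent-beam ray casting):

```python
import numpy as np

# Synthetic stand-ins: a 64^3 volume and a block-shaped "contoured bone" mask.
vol_shape = (64, 64, 64)
bone = np.zeros(vol_shape, dtype=bool)
bone[20:44, 28:36, 28:36] = True          # placeholder for a contoured bony structure

# Bulk-density override: 2.0 g/cm^3 inside bone, water-equivalent elsewhere.
density = np.where(bone, 2.0, 1.0)
voxel_cm = 0.2                            # assumed 2 mm isotropic voxels

# Parallel-beam DRR: integrate density along the first axis (one ray per pixel).
drr = (density * voxel_cm).sum(axis=0)
print(drr.shape, drr.max())               # bone rays accumulate a higher ray sum
```

Pixels whose rays pass through the "bone" block end up brighter than the water-only background, which is all a setup DRR needs from the bulk-density assignment.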

  2. The state of development of an intense resonance electron-ion accelerator based on Doppler effect

    International Nuclear Information System (INIS)

    Egorov, A.M.; Ivanov, B.I.; Butenko, V.I.; Ognivenko, V.V.; Onishchenko, I.N.; Prishchepov, V.P.

    1996-01-01

    An intense ion accelerator has been proposed and now is being developed in which accelerating and focusing electric fields in a slow wave structure are excited by an intense electron beam using the anomalous and the normal Doppler effects. The results of theoretical studies and computer simulations show the advantage of this acceleration method that will make it possible to obtain acceleration rates of the order of 10 - 100 MeV/m, and ion beam energies and currents of the order of 10-100 MeV, 1-10 A. The project and technical documentation of an experimental accelerating installation were worked out. Currently, the 5 MeV accelerator-injector URAL-5 is in operation; preliminary experiments on a small installation have been carried out; experimental investigations of an accelerating RF resonator model (in 1/2 scaling) are being performed; the accelerating test installation is being manufactured. (author). 1 tab. 12 fig., 6 refs

  3. Spectral intensities in cubic systems. I. Progressions based upon parity vibrational modes

    International Nuclear Information System (INIS)

    Acevedo, R.; Vasquez, S.O.; Meruane, T.; Poblete, V.; Pozo, J.

    1998-01-01

    The well-resolved emission and absorption spectra of centrosymmetric coordination compounds of the transition metal ions have been widely used to provide the experimental data against which to test theoretical models of vibronic intensities. With reference to the ²Eg → ⁴A₂g luminescence transition at a perfect octahedral site in Cs₂SiF₆, more than one hundred vibronic lines are observed, with line widths of a few wavenumbers, spread over some 3000 cm⁻¹. This paper reports a thorough examination of both the electronic and vibrational factors which influence the observed vibronic intensities of the various assigned and identified lines in the spectra of the MnF₆²⁻ complex ion in the Cs₂SiF₆ cubic lattice. The origin and nature of higher-order vibronic interactions are analysed on the basis of a symmetrized vibronic crystal field-ligand polarization model. (Author)

  4. Intensity-demodulated torsion sensor based on thin-core polarization-maintaining fiber.

    Science.gov (United States)

    Kang, Xuexue; Zhang, Weigang; Zhang, Yanxin; Yang, Jiang; Chen, Lei; Kong, Lingxin; Zhang, Yunshan; Yu, Lin; Yan, Tieyi; Geng, Pengcheng

    2018-05-01

    An intensity-demodulated torsion sensor is designed and realized, which consists of a polarization ring as the sensing part and a section of thin-core polarization-maintaining fiber as the demodulation part. An intensity map with a sinusoidal change can be obtained at some specific wavelengths, and the experimental results agree well with the theoretical analysis. The maximum sensitivity is about 0.29 dB/deg at the wavelength of 1584.6 nm, and the minimum sensitivity is about 0.10 dB/deg at the wavelength of 1510.2 nm. Meanwhile, the temperature characteristic is measured in the experiment. More broadly, the proposed structure can be used in an integrated smart device for loose-screw detection in aeronautic and astronautic equipment.

  5. The state of development of an intense resonance electron-ion accelerator based on Doppler effect

    Energy Technology Data Exchange (ETDEWEB)

    Egorov, A M; Ivanov, B I; Butenko, V I; Ognivenko, V V; Onishchenko, I N; Prishchepov, V P [Kharkov Inst. of Physics and Technology (Ukraine)

    1997-12-31

    An intense ion accelerator has been proposed and now is being developed in which accelerating and focusing electric fields in a slow wave structure are excited by an intense electron beam using the anomalous and the normal Doppler effects. The results of theoretical studies and computer simulations show the advantage of this acceleration method that will make it possible to obtain acceleration rates of the order of 10 - 100 MeV/m, and ion beam energies and currents of the order of 10-100 MeV, 1-10 A. The project and technical documentation of an experimental accelerating installation were worked out. Currently, the 5 MeV accelerator-injector URAL-5 is in operation; preliminary experiments on a small installation have been carried out; experimental investigations of an accelerating RF resonator model (in 1/2 scaling) are being performed; the accelerating test installation is being manufactured. (author). 1 tab. 12 fig., 6 refs.

  6. Dual channel rank-based intensity weighting for quantitative co-localization of microscopy images

    LENUS (Irish Health Repository)

    Singan, Vasanth R

    2011-10-21

    Background: Accurate quantitative co-localization is a key parameter in the context of understanding the spatial co-ordination of molecules and therefore their function in cells. Existing co-localization algorithms consider either the presence of co-occurring pixels or correlations of intensity in regions of interest. Depending on the image source and the algorithm selected, the co-localization coefficients determined can be highly variable, and often inaccurate. Furthermore, the choice of whether co-occurrence or correlation is the best approach for quantifying co-localization remains controversial. Results: We have developed a novel algorithm to quantify co-localization that improves on and addresses the major shortcomings of existing co-localization measures. This algorithm uses a non-parametric ranking of pixel intensities in each channel, and the difference in ranks of co-localizing pixel positions in the two channels is used to weight the coefficient. This weighting is applied to co-occurring pixels, thereby efficiently combining both co-occurrence and correlation. Tests with synthetic data sets show that the algorithm is sensitive to both co-occurrence and correlation at varying levels of intensity. Analysis of biological data sets demonstrates that this new algorithm offers high sensitivity, and that it is capable of detecting subtle changes in co-localization, exemplified by studies on a well-characterized cargo protein that moves through the secretory pathway of cells. Conclusions: This algorithm provides a novel way to efficiently combine co-occurrence and correlation components in biological images, thereby generating an accurate measure of co-localization. This approach of rank weighting of intensities also eliminates the need for manual thresholding of the image, which is often a cause of error in co-localization quantification. We envisage that this tool will facilitate the quantitative analysis of a wide range of biological data sets
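The rank-weighting idea can be sketched as follows. The exact coefficient of the published algorithm is not given in the abstract, so the weighting below (weight 1 for equal ranks, falling linearly with rank difference, averaged over co-occurring pixels) is only an illustrative assumption:

```python
import numpy as np

def _ranks(x):
    """Ranks 0..n-1 by double argsort (assumes no ties; fine for this sketch)."""
    order = np.argsort(x)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(x.size)
    return ranks

def rank_weighted_colocalization(ch1, ch2):
    """Toy rank-difference weighting of co-occurring pixels.

    Co-occurring pixels get weight 1 when their intensity ranks agree in the
    two channels, falling linearly toward 0 as the ranks diverge; the score
    is the mean weight. The published algorithm's exact coefficient may
    differ -- this shows only the rank-weighting idea from the abstract.
    """
    a, b = ch1.ravel().astype(float), ch2.ravel().astype(float)
    co = (a > 0) & (b > 0)            # co-occurring (above-background) pixels
    if not co.any():
        return 0.0
    ra, rb = _ranks(a[co]), _ranks(b[co])
    weights = 1.0 - np.abs(ra - rb) / co.sum()
    return float(weights.mean())

rng = np.random.default_rng(0)
img = rng.random((32, 32))
print(rank_weighted_colocalization(img, img))                   # 1.0: identical channels
print(rank_weighted_colocalization(img, rng.random((32, 32))))  # lower: independent channels
```

Because the score is built from ranks of the pixels present in both channels, no manual intensity threshold is needed, which mirrors the advantage claimed in the conclusions.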

  7. Parameter study for polymer solar modules based on various cell lengths and light intensities

    Energy Technology Data Exchange (ETDEWEB)

    Slooff, L.H.; Burgers, A.R.; Bende, E.E.; Kroon, J.M. [ECN Solar Energy, P.O. Box 1, 1755 ZG Petten (Netherlands); Veenstra, S.C. [ECN Solar Energy, Solliance, High Tech Campus 5, P63, 5656AE Eindhoven (Netherlands)

    2013-10-15

    Polymer solar cells may be applied in portable electronic devices, where the light intensity and spectral distribution of the illuminating source can be very different from outdoor applications. As the power output of solar cells depends on temperature, light intensity and spectrum, the design of the module must be optimized for the specific illumination conditions of the different applications. The interconnection area between cells in a module must be as narrow as possible to maximize the active area, also called the geometrical fill factor, of the module. Laser scribing has the potential to realize this. The optimal width of the interconnection zone depends both on technological limitations, e.g. laser scribe width and the minimal distance between scribes, and on electrical limitations such as resistive losses. The latter depend on the current generated in the cell and thus also on illumination intensity. Besides that, the type of junction, i.e. a single or tandem junction, will also influence the optimal geometry. In this paper a calculation model is presented that can be used for electrical modeling of polymer cells and modules in order to optimize the performance for the specific illumination conditions.
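The width trade-off described above can be illustrated with a standard first-order loss model for monolithic thin-film modules (not the paper's own model): the geometrical fill factor w/(w+d) improves with cell width w, while the fractional resistive loss in the electrode grows roughly as J·R_sheet·w²/(3·V_mpp). All parameter values below are illustrative assumptions:

```python
import numpy as np

def relative_output(w_cm, j_ma_cm2, d_cm=0.03, r_sheet=60.0, v_mpp=0.55):
    """Relative module output vs cell width, first-order model.

    w_cm: cell width; d_cm: dead interconnect width (laser scribe zone);
    r_sheet: electrode sheet resistance (ohm/sq); v_mpp: cell voltage at
    the maximum power point. All values are made-up but plausible.
    """
    j = j_ma_cm2 * 1e-3                               # photocurrent density, A/cm^2
    geom_ff = w_cm / (w_cm + d_cm)                    # active-area fraction
    res_loss = j * r_sheet * w_cm ** 2 / (3.0 * v_mpp)  # fractional I^2*R loss
    return geom_ff * (1.0 - res_loss)

widths = np.linspace(0.05, 2.0, 400)
for j in (1.0, 10.0):  # roughly indoor vs full-sun current densities
    best = widths[np.argmax([relative_output(w, j) for w in widths])]
    print(f"J = {j:4.1f} mA/cm^2 -> optimal cell width ~ {best:.2f} cm")
```

The sweep reproduces the qualitative point of the paper: at low (indoor) intensity the resistive penalty is small, so wider cells with a higher geometrical fill factor are optimal, whereas at high intensity the optimum shifts to narrower cells.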

  8. Comparing the costs and benefits of floating rice-based and intensive rice-based farming systems in the Mekong delta

    OpenAIRE

    Van Kien Nguyen; Oc Van Vo; Duc Ngoc Huynh

    2015-01-01

    This paper compares the financial costs and benefits of floating rice-based and intensive rice farming systems using data from focus group discussions and a household survey in four locations in the Mekong Delta. We argue that the net financial benefit per 1000 m2 of integrated floating rice-based farming systems is greater than that of the intensive rice farming system. The floating rice-leeks system shows the highest total net benefit (VND 24.8 mil./1000 m2), followed by f...

  9. Building patient safety in intensive care nursing : Patient safety culture, team performance and simulation-based training

    OpenAIRE

    Ballangrud, Randi

    2013-01-01

    Aim: The overall aim of the thesis was to investigate patient safety culture, team performance and the use of simulation-based team training for building patient safety in intensive care nursing. Methods: Quantitative and qualitative methods were used. In Study I, 220 RNs from ten ICUs responded to a patient safety culture questionnaire analysed with statistics. Studies II-IV were based on an evaluation of a simulation-based team training programme. Studies II-III included 53 RNs from seven I...

  10. A Technique for Estimating Intensity of Emotional Expressions and Speaking Styles in Speech Based on Multiple-Regression HSMM

    Science.gov (United States)

    Nose, Takashi; Kobayashi, Takao

    In this paper, we propose a technique for estimating the degree or intensity of emotional expressions and speaking styles appearing in speech. The key idea is based on a style control technique for speech synthesis using a multiple regression hidden semi-Markov model (MRHSMM), and the proposed technique can be viewed as the inverse of the style control. In the proposed technique, the acoustic features of spectrum, power, fundamental frequency, and duration are simultaneously modeled using the MRHSMM. We derive an algorithm for estimating explanatory variables of the MRHSMM, each of which represents the degree or intensity of emotional expressions and speaking styles appearing in acoustic features of speech, based on a maximum likelihood criterion. We show experimental results to demonstrate the ability of the proposed technique using two types of speech data, simulated emotional speech and spontaneous speech with different speaking styles. It is found that the estimated values have correlation with human perception.
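The "inverse style control" step can be sketched for the simplest case: each state's mean vector is linear in the style vector, mu_i = H_i·s + b_i, so with diagonal-covariance Gaussian outputs the maximum-likelihood estimate of s reduces to weighted least squares over the visited states. The matrices below are synthetic stand-ins for trained MRHSMM parameters:

```python
import numpy as np

# Synthetic stand-ins for a trained model: 6 states, 4-dim acoustic
# features, a 2-dim style vector ("intensity of expression").
rng = np.random.default_rng(1)
n_states, obs_dim, style_dim = 6, 4, 2
H = rng.normal(size=(n_states, obs_dim, style_dim))   # regression matrices
b = rng.normal(size=(n_states, obs_dim))              # bias mean vectors
var = np.full((n_states, obs_dim), 0.2)               # diagonal covariances

s_true = np.array([0.8, -0.3])                        # style intensity to recover
obs = np.einsum('iod,d->io', H, s_true) + b           # noiseless observations (demo)

# ML estimate: (sum_i H_i^T V_i^-1 H_i) s = sum_i H_i^T V_i^-1 (o_i - b_i)
A = np.einsum('iod,io,ioe->de', H, 1.0 / var, H)
rhs = np.einsum('iod,io,io->d', H, 1.0 / var, obs - b)
s_hat = np.linalg.solve(A, rhs)
print(s_hat)  # recovers s_true exactly in this noiseless demo
```

The real algorithm additionally handles state alignment and durations via the HSMM; this sketch isolates only the linear-Gaussian estimation of the explanatory variables.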

  11. Prototype fiber Bragg Gratings (FBG) sensor based on intensity modulation of the laser diode low frequency vibrations measurement

    Science.gov (United States)

    Setiono, Andi; Ula, Rini Khamimatul; Hanto, Dwi; Widiyatmoko, Bambang; Purnamaningsih, Retno Wigajatri

    2016-02-01

    In general, a fiber Bragg grating (FBG) sensor works by observing the characteristics of its spectral response to detect the desired parameter. In this research, we studied the intensity response characteristic of an FBG to detect dynamic strain. Experimental results show that the reflected intensity has a linear relationship with dynamic strain. Based on these characteristics, we developed the FBG sensor to detect low-frequency vibration. The sensor is designed by attaching the FBG to a bronze cantilever with dimensions of 85×3×0.5 mm. Measurement results showed that the sensor was able to detect vibrations in the frequency range of 7-10 Hz over a temperature range of 25-45 ˚C. The measured frequency range is within the frequency range of digging activity; therefore, this vibration sensor can be applied in detection systems for oil pipeline vandalism.

  12. Evidence-based review: Quality of life following head and neck intensity-modulated radiotherapy

    International Nuclear Information System (INIS)

    Scott-Brown, Martin; Miah, Aisha; Harrington, Kevin; Nutting, Chris

    2010-01-01

    Inverse-planned intensity-modulated radiotherapy (IMRT) can minimize the dose to normal structures and can therefore reduce long-term radiotherapy-related morbidity and may improve patients' long-term quality of life. Despite overwhelming evidence that IMRT can reduce late functional deficits in patients with head and neck cancer treated with radiotherapy, a review of the published literature produced conflicting results with regard to quality-of-life outcomes. Following a critical appraisal of the literature, reasons for the discrepant outcomes are proposed.

  13. Acid-base regulation in intensively farmed air-breathing fish

    DEFF Research Database (Denmark)

    Bayley, Mark; Damsgaard, Christian; Thomsen, Mikkel

    Hypercapnia in slow-moving, organically loaded tropical waters is a natural occurrence, with several records of pCO2 at 60 mm Hg. Despite this, studies on South American air-breathing fish have revealed a low capacity for extracellular pH (pHe) regulation. The two underlying reasons proposed are: 1) an osmorespiratory compromise, with reduced branchial surface area and reduced branchial ventilation; 2) low ion concentrations in the very soft Amazon waters, limiting the capacity for branchial pH regulation. The Mekong delta region houses extremely intensive aquaculture of a large number of air-breathing species...

  14. Analytical method for determining colour intensities based on Cherenkov radiation colour quenching

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez-Gomez, C; Lopez-Gonzalez, J deD; Ferro-Garcia, M A [Univ. of Granada, Granada (Spain). Faculty of Sciences, Dept. of Inorganic Chemistry, Radiochemistry Section; Consejo Superior de Investigaciones Cientificas, Granada (Spain). Dept. of Chemical Research Coordinated Centre]

    1983-01-01

    A study was made of the determination of colour intensities using, as a non-monochromatic luminous source, the Cherenkov emission produced in the walls of a glass capillary, which acts as a luminous source itself inside the coloured solution to be evaluated. The reproducibility of this method has been compared with a spectrophotometric assay; the relative errors of both analytical methods have been calculated for different concentrations of Congo red solution in the range of minimal error, according to Ringbom's criterion. The sensitivity of this analytical method has been studied for the two β-emitters employed: ⁹⁰Sr/⁹⁰Y and ²⁰⁴Tl.

  15. Serial clustering of extratropical cyclones and relationship with NAO and jet intensity based on the IMILAST cyclone database

    Science.gov (United States)

    Ulbrich, Sven; Pinto, Joaquim G.; Economou, Theodoros; Stephenson, David B.; Karremann, Melanie K.; Shaffrey, Len C.

    2017-04-01

    Cyclone families are a frequent synoptic weather feature in the Euro-Atlantic area, particularly during wintertime. Given appropriate large-scale conditions, such series (clusters) of storms may cause large socio-economic impacts and cumulative losses. Recent studies analyzing reanalysis data using single cyclone tracking methods have shown that serial clustering of cyclones occurs on both flanks and in downstream regions of the North Atlantic storm track. Based on winter (DJF) cyclone counts from the IMILAST cyclone database, we explore the representation of serial clustering in the ERA-Interim period and its relationship with the NAO phase and jet intensity. With this aim, clustering is estimated by the dispersion of winter (DJF) cyclone passages for each grid point over the Euro-Atlantic area. Results indicate that clustering over the Eastern North Atlantic and Western Europe can be identified for all methods, although the exact location and the dispersion magnitude may vary. The relationship between clustering and (i) the NAO phase and (ii) jet intensity over the North Atlantic is statistically evaluated. Results show that the NAO index and the jet intensity both contribute strongly to clustering, even though some spread is found between methods. We conclude that the general features of clustering of extratropical cyclones over the North Atlantic and Western Europe are robust to the choice of tracking method. The same is true for the influence of the NAO and jet intensity on cyclone dispersion.
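Dispersion of seasonal cyclone counts is commonly summarized by the index psi = var/mean − 1, where psi > 0 (overdispersion relative to a Poisson process) indicates serial clustering and psi < 0 indicates regularity. A minimal sketch with synthetic counts (the data and threshold values are illustrative, not from the IMILAST database):

```python
import numpy as np

def dispersion(counts):
    """Dispersion index psi = var/mean - 1 of seasonal cyclone counts.

    psi ~ 0 for Poisson (random) occurrence, psi > 0 for clustered storms.
    """
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean() - 1.0

# synthetic winter (DJF) cyclone passages at one grid point
poisson_like = np.random.default_rng(2).poisson(lam=6.0, size=500)
clustered = np.array([2, 1, 12, 0, 14, 1, 13, 2, 11, 0])  # quiet vs stormy winters

print(f"Poisson-like counts: psi = {dispersion(poisson_like):+.2f}")  # near 0
print(f"Clustered counts:    psi = {dispersion(clustered):+.2f}")     # clearly > 0
```

Computing psi per grid point over the Euro-Atlantic area, as in the study above, turns this scalar into a clustering map whose pattern can then be related to the NAO index and jet intensity.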

  16. Numerical evaluation of droplet sizing based on the ratio of fluorescent and scattered light intensities (LIF/Mie technique)

    International Nuclear Information System (INIS)

    Charalampous, Georgios; Hardalupas, Yannis

    2011-01-01

    The dependence of the fluorescent and scattered light intensities from spherical droplets on droplet diameter was evaluated using Mie theory. The emphasis is on the evaluation of droplet sizing based on the ratio of laser-induced fluorescence and scattered light intensities (LIF/Mie technique). A parametric study is presented, which includes the effects of scattering angle, the real part of the refractive index and the dye concentration in the liquid (which determines the imaginary part of the refractive index). The assumption underlying accurate sizing measurements, that the fluorescent and scattered light intensities are proportional to the volume and surface area of the droplets respectively, is not generally valid. More accurate sizing measurements can be performed with minimal dye concentration in the liquid and by collecting light at a scattering angle of 60 deg. rather than the commonly used angle of 90 deg. Unfavorable to the sizing accuracy are oscillations of the scattered light intensity with droplet diameter, which are pronounced in the sidescatter direction (90 deg.) and for droplets with refractive indices around 1.4.
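The idealized model being tested above is that fluorescence scales with droplet volume (∝ d³) and scattering with surface area (∝ d²), so their ratio is proportional to d and a single calibration droplet fixes the constant. A sketch of that idealized inversion with made-up intensities (the paper's point is precisely that this proportionality holds only approximately):

```python
# Idealized LIF/Mie sizing: I_LIF ~ d^3 (volume), I_Mie ~ d^2 (surface),
# so d = k * I_LIF / I_Mie with k fixed by one reference droplet.
def diameter_from_ratio(i_lif, i_mie, k_cal):
    return k_cal * i_lif / i_mie

# calibration on a reference droplet of known diameter (assumed values)
d_ref, i_lif_ref, i_mie_ref = 50.0, 2.5e5, 4.0e4   # um, counts, counts
k = d_ref * i_mie_ref / i_lif_ref

print(diameter_from_ratio(2.5e5, 4.0e4, k))    # -> 50.0 um (the calibration point)
print(diameter_from_ratio(4.32e5, 5.76e4, k))  # -> 60.0 um (intensities scaled as d^3, d^2)
```

In reality the Mie oscillations described above make I_Mie deviate from the smooth d² law, which is why the parametric study recommends low dye concentration and a 60 deg. collection angle.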

  17. Mis-segmentation in voxel-based morphometry due to a signal intensity change in the putamen.

    Science.gov (United States)

    Goto, Masami; Abe, Osamu; Miyati, Tosiaki; Aoki, Shigeki; Gomi, Tsutomu; Takeda, Tohoru

    2017-12-01

    The aim of this study was to demonstrate an association between changes in the signal intensity of the putamen on three-dimensional T1-weighted magnetic resonance images (3D-T1WI) and mis-segmentation, using the voxel-based morphometry (VBM) 8 toolbox. The sagittal 3D-T1WIs of 22 healthy volunteers were obtained for VBM analysis using a 1.5-T MR scanner. We prepared five levels of 3D-T1WI signal intensity (baseline, same level, background level, low level, and high level) in regions of interest containing the putamen. Groups of smoothed, spatially normalized tissue images were compared to the baseline group using a paired t test; the baseline was compared to each of the other four levels. In all comparisons, significant volume changes were observed around and outside the area that included the signal intensity change. The present study demonstrated an association between a change in the signal intensity of the putamen on 3D-T1WI and changed volumes in the segmented tissue images.

  18. Physiotherapy in the intensive care unit: an evidence-based, expert driven, practical statement and rehabilitation recommendations

    Science.gov (United States)

    Sommers, Juultje; Engelbert, Raoul HH; Dettling-Ihnenfeldt, Daniela; Gosselink, Rik; Spronk, Peter E; Nollet, Frans; van der Schaaf, Marike

    2015-01-01

    Objective: To develop evidence-based recommendations for effective and safe diagnostic assessment and intervention strategies for the physiotherapy treatment of patients in intensive care units. Methods: We used the EBRO method, as recommended by the ‘Dutch Evidence Based Guideline Development Platform’ to develop an ‘evidence statement for physiotherapy in the intensive care unit’. This method consists of the identification of clinically relevant questions, followed by a systematic literature search, and summary of the evidence with final recommendations being moderated by feedback from experts. Results: Three relevant clinical domains were identified by experts: criteria to initiate treatment; measures to assess patients; evidence for effectiveness of treatments. In a systematic literature search, 129 relevant studies were identified and assessed for methodological quality and classified according to the level of evidence. The final evidence statement consisted of recommendations on eight absolute and four relative contra-indications to mobilization; a core set of nine specific instruments to assess impairments and activity restrictions; and six passive and four active effective interventions, with advice on (a) physiological measures to observe during treatment (with stopping criteria) and (b) what to record after the treatment. Conclusions: These recommendations form a protocol for treating people in an intensive care unit, based on best available evidence in mid-2014. PMID:25681407

  19. Feasibility of geometric-intensity-based semi-automated delineation of the tentorium cerebelli from MRI scans.

    Science.gov (United States)

    Penumetcha, Neeraja; Kabadi, Suraj; Jedynak, Bruno; Walcutt, Charles; Gado, Mokhtar H; Wang, Lei; Ratnanather, J Tilak

    2011-04-01

    This paper describes a feasibility study of a method for delineating the tentorium cerebelli in magnetic resonance imaging (MRI) brain scans. The tentorium cerebelli is a thin sheet of dura mater covering the cerebellum and separating it from the posterior part of the temporal lobe and the occipital lobe of the cerebral hemispheres. Cortical structures such as the parahippocampal gyrus can be indistinguishable from the tentorium in magnetization-prepared rapid gradient echo and T1-weighted MRI scans. Similar intensities in these neighboring regions make it difficult to perform accurate cortical analysis in neuroimaging studies of schizophrenia and Alzheimer's disease. A semi-automated, geometric, intensity-based procedure for delineating the tentorium from a whole-brain scan is described. Initial and final curves are traced within the tentorium. A cost function, based on intensity and Euclidean distance, is computed between the two curves using the Fast Marching method. The initial curve is then evolved to the final curve based on the gradient of the computed costs, generating a series of intermediate curves. These curves are then used to generate a triangulated surface of the tentorium. For 3 scans, surfaces were found to be within 2 voxels of hand segmentations. Copyright © 2009 by the American Society of Neuroimaging.
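The accumulated-cost computation can be approximated on a discrete grid: as a stand-in for the Fast Marching method, the sketch below builds the same kind of minimal intensity-based cost map with Dijkstra's algorithm (Fast Marching solves the continuous analogue of this shortest-path problem). The image and cost definition are synthetic:

```python
import heapq
import numpy as np

def cost_map(local_cost, seeds):
    """Minimal accumulated cost from any seed pixel to every pixel (Dijkstra).

    local_cost: per-pixel step cost (low where the structure is bright);
    seeds: list of (row, col) pixels on the initial curve.
    """
    h, w = local_cost.shape
    dist = np.full((h, w), np.inf)
    pq = [(0.0, s) for s in seeds]
    for _, (r, c) in pq:
        dist[r, c] = 0.0
    heapq.heapify(pq)
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                nd = d + local_cost[rr, cc]  # intensity-based step cost
                if nd < dist[rr, cc]:
                    dist[rr, cc] = nd
                    heapq.heappush(pq, (nd, (rr, cc)))
    return dist

# a bright sheet on a dark background: cheap paths follow the sheet
img = np.ones((16, 16))
img[8, :] = 10.0
cost = 1.0 / img                 # low cost where intensity is high
d = cost_map(cost, seeds=[(8, 0)])
print(d[8, 15], d[0, 15])        # cheap along the sheet, expensive off it
```

An evolving curve that follows the gradient of `d` from a final contour back toward the seeds will hug the bright sheet, which is the behaviour the tentorium delineation relies on.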

  20. Globally Increased Crop Growth and Cropping Intensity from the Long-Term Satellite-Based Observations

    Science.gov (United States)

    Chen, Bin

    2018-04-01

    Understanding the spatiotemporal change trend of global crop growth and multiple cropping systems under climate change scenarios is a critical requirement for supporting the food security issue that maintains the function of human society. Many studies have predicted the effects of climate change on crop production using a combination of field studies and models, but there has been limited evidence relating decadal-scale climate change to global crop growth and the spatiotemporal distribution of multiple cropping systems. Using long-term satellite-derived Normalized Difference Vegetation Index (NDVI) and observed climate data from 1982 to 2012, we investigated the crop growth trend, the spatiotemporal pattern trend of agricultural cropping intensity, and their potential correlations with climate change drivers at a global scale. Results show that 82.97 % of global cropland maximum NDVI exhibits an increasing trend while 17.03 % shows a decreasing trend over the past three decades. The spatial distribution of multiple cropping systems is observed to expand from lower latitudes to higher latitudes, and increased cropping intensity is also witnessed globally. In terms of regional major crop zones, results show that all nine selected zones have an obvious upward trend of crop maximum NDVI (p < 0.001), and as for climatic drivers, the gradual temperature and precipitation changes have had a measurable impact on the crop growth trend.

  1. GLOBALLY INCREASED CROP GROWTH AND CROPPING INTENSITY FROM THE LONG-TERM SATELLITE-BASED OBSERVATIONS

    Directory of Open Access Journals (Sweden)

    B. Chen

    2018-04-01

    Full Text Available Understanding the spatiotemporal change trend of global crop growth and multiple cropping systems under climate change scenarios is a critical requirement for supporting the food security that maintains the function of human society. Many studies have predicted the effects of climate change on crop production using a combination of field studies and models, but there has been limited evidence relating decadal-scale climate change to global crop growth and the spatiotemporal distribution of multiple cropping systems. Using long-term satellite-derived Normalized Difference Vegetation Index (NDVI) and observed climate data from 1982 to 2012, we investigated the crop growth trend, the spatiotemporal pattern of agricultural cropping intensity, and their potential correlations with climate change drivers at a global scale. Results show that 82.97% of global cropland maximum NDVI exhibits an increasing trend while 17.03% shows a decreasing trend over the past three decades. The spatial distribution of multiple cropping systems is observed to expand from lower latitudes to higher latitudes, and increased cropping intensity is also witnessed globally. In terms of regional major crop zones, results show that all nine selected zones have an obvious upward trend of crop maximum NDVI (p < 0.001), and as for climatic drivers, the gradual temperature and precipitation changes have had a measurable impact on the crop growth trend.

  2. Feature and Intensity Based Medical Image Registration Using Particle Swarm Optimization.

    Science.gov (United States)

    Abdel-Basset, Mohamed; Fakhry, Ahmed E; El-Henawy, Ibrahim; Qiu, Tie; Sangaiah, Arun Kumar

    2017-11-03

    Image registration is an important aspect of medical image analysis and finds use in a variety of medical applications. Examples include diagnosis, pre-/post-surgery guidance, and comparing/merging/integrating images from multiple modalities such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT). Whether registering images across modalities for a single patient or registering across patients for a single modality, registration is an effective way to combine information from different images into a normalized frame of reference. Registered datasets can be used to provide information relating to the structure, function, and pathology of the organ or individual being imaged. In this paper, a hybrid approach for medical image registration is developed. It employs a modified Mutual Information (MI) measure as the similarity metric and the Particle Swarm Optimization (PSO) method. Computation of mutual information is modified using a weighted linear combination of image intensity and image gradient vector flow (GVF) intensity. In this manner, statistical as well as spatial image information is included in the image registration process. Maximization of the modified mutual information is carried out using Particle Swarm Optimization, which is easy to implement and requires few tuning parameters. The developed approach has been tested and verified successfully on a number of medical image data sets that include images with missing parts, noise contamination, and/or different modalities (CT, MRI). The registration results indicate that the proposed model is accurate and effective, and show the positive contribution of including both statistical and spatial image information in the developed approach.
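
    The statistical half of the modified similarity metric can be sketched as a plain mutual-information computation over a joint intensity histogram; the paper's weighted combination would add an analogous term computed on GVF intensities. The bin count and test images here are illustrative assumptions:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two images from their joint intensity histogram.

    This is the baseline similarity that the registration maximizes; the
    modified metric described above would combine this with a second MI term
    computed on GVF intensity maps, via a weighted linear combination.
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()                    # joint probability p(x, y)
    px = p.sum(axis=1, keepdims=True)          # marginal p(x)
    py = p.sum(axis=0, keepdims=True)          # marginal p(y)
    nz = p > 0                                 # avoid log(0)
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

    A perfectly aligned pair (an image against itself) scores higher than an unrelated pair, which is what drives the PSO search toward alignment.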

  3. Spectral intensities in cubic systems. I. Progressions based upon parity vibrational modes

    Energy Technology Data Exchange (ETDEWEB)

    Acevedo, R.; Vasquez, S.O. [Department of Basic Chemistry, Faculty of Physical and Mathematical Sciences, University of Chile. Tupper 2069, Casilla 2777, Santiago, Chile (Chile); Meruane, T. [Department of Chemistry, Universidad Metropolitana de Ciencias de la Educacion. Av. J.P. Alessandri 774, Casilla 147, C. Santiago, Chile (Chile); Poblete, V. [Department of Nuclear Materials, Lo Aguirre, Comision Chilena de Energia Nuclear. Amunategui 95, Casilla 188-D, Santiago, Chile (Chile); Pozo, J. [Facultad de Ciencias de la Ingenieria. Universidad Diego Portales. Casilla 298-V, Santiago, Chile (Chile)

    1998-12-01

    The well-resolved emission and absorption spectra of centrosymmetric coordination compounds of the transition metal ions have been used widely to provide the experimental data against which to test theoretical models of vibronic intensities. With reference to the {sup 2} E{sub g} {yields} {sup 4} A{sub 2g} luminescence transition, at a perfect octahedral site in Cs{sub 2}SiF{sub 6}, more than one hundred vibronic lines are observed, with line widths of a few wavenumbers, spread over some 3000 cm{sup -1}. This paper reports a thorough examination of both the electronic and vibrational factors which influence the observed vibronic intensities of the various assigned and identified lines in the spectra of the MnF{sub 6} {sup 2-} complex ion in the Cs{sub 2}SiF{sub 6} cubic lattice. The origin and nature of higher order vibronic interactions are analysed on the basis of a symmetrized vibronic crystal field-ligand polarization model. (Author)

  4. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... For more help with what counts as aerobic activity, watch this video: Windows Media Player, ... The table below lists examples of activities classified as moderate-intensity or vigorous-intensity based upon the ...

  5. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... for a breath. Absolute Intensity The amount of energy used by the body per minute of activity. ... or vigorous-intensity based upon the amount of energy used by the body while doing the activity. ...

  6. Effect of Early Intensive Care on Recovery From Whiplash-Associated Disorders: Results of a Population-Based Cohort Study.

    Science.gov (United States)

    Skillgate, Eva; Côté, Pierre; Cassidy, J David; Boyle, Eleanor; Carroll, Linda; Holm, Lena W

    2016-05-01

    To determine whether the results from previous research suggesting that early intensive health care delays recovery from whiplash-associated disorders (WADs) were confounded by expectations of recovery, and whether the association between early health care intensity and time to recovery varies across patterns of health care. Population-based inception cohort. All adults (≥18y) injured in motor vehicle collisions who received treatment from a regulated health professional or reported their injuries to the single provincially administered motor vehicle insurer. Participants with WAD (N=5204). Self-reported visits to physicians, chiropractors, physiotherapists, massage therapists, and other professionals during the first 42 days postcollision were used to define health care intensity. Not applicable. Self-perceived recovery. Individuals with high health care utilization had slower recovery, independent of expectation of recovery and other confounders. Compared with individuals who reported low utilization of physician services, recovery was slower for those with high health care utilization, regardless of the type of profession. For instance, those with high physician utilization (hazard rate ratio [HRR]=.56; 95% confidence interval [CI], .42-.75), physician and high physiotherapy utilization (HRR=.68; 95% CI, .61-.77), physician and high chiropractor utilization (HRR=.74; 95% CI, .64-.85), and physician and high massage therapy utilization (HRR=.78; 95% CI, .68-.90) had significantly slower recovery. Our study adds to the existing evidence that early intensive care is associated with slower recovery from WAD, independent of expectation of recovery. The results have policy implications and suggest that the optimal management of WADs focuses on reassurance and education instead of intensive care. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  7. First experiences with model based iterative reconstructions influence on quantitative plaque volume and intensity measurements in coronary computed tomography angiography

    International Nuclear Information System (INIS)

    Precht, H.; Kitslaar, P.H.; Broersen, A.; Gerke, O.; Dijkstra, J.; Thygesen, J.; Egstrup, K.; Lambrechtsen, J.

    2017-01-01

    Purpose: Investigate the influence of adaptive statistical iterative reconstruction (ASIR) and the model-based IR (Veo) reconstruction algorithm in coronary computed tomography angiography (CCTA) images on quantitative measurements in coronary arteries for plaque volumes and intensities. Methods: Three patients each had three independent dose-reduced CCTAs performed and reconstructed with 30% ASIR (CTDIvol 6.7 mGy), 60% ASIR (CTDIvol 4.3 mGy) and Veo (CTDIvol 1.9 mGy). Coronary plaque analysis was performed for each CCTA, measuring volumes, plaque burden and intensities. Results: Plaque volume and plaque burden show a decreasing tendency from ASIR to Veo: median plaque volume is 314 mm³ and 337 mm³ for ASIR versus 252 mm³ for Veo, and plaque burden is 42% and 44% for ASIR versus 39% for Veo. The lumen and vessel volumes decrease slightly from 30% ASIR to 60% ASIR, from 498 mm³ to 391 mm³ for lumen volume and from 939 mm³ to 830 mm³ for vessel volume. The intensities did not change overall between the different reconstructions for either lumen or plaque. Conclusion: We found a tendency of decreasing plaque volumes and plaque burden but no change in intensities with the use of low-dose Veo CCTA (1.9 mGy) compared to dose-reduced ASIR CCTA (6.7 mGy & 4.3 mGy), although more studies are warranted. - Highlights: • Veo decreases plaque volumes and plaque burden using low-dose CCTA. • Moving from ASIR 30% and ASIR 60% to Veo did not appear to influence the plaque intensities. • Studies with larger sample sizes are needed to investigate the effect on plaque.

  8. Polyphenolic composition and antioxidant capacity of legume based swards are affected by light intensity in a Mediterranean agroforestry system.

    Science.gov (United States)

    Re, Giovanni Antonio; Piluzza, Giovanna; Sanna, Federico; Molinu, Maria Giovanna; Sulas, Leonardo

    2018-06-01

    In Mediterranean grazed woodlands, microclimate changes induced by trees influence the growth and development of the understory, but very little is known about its polyphenolic composition in relation to light intensity. We investigated the bioactive compounds and antioxidant capacity of different legume-based swards and variations due to full sunlight and partial shade. The research was carried out in a cork oak agrosilvopastoral system in Sardinia. The highest values of DPPH reached 7 mmol TEAC 100 g⁻¹ DW, total phenolics 67.1 g GAE kg⁻¹ DW and total flavonoids 7.5 g CE kg⁻¹ DW. Compared to full sunlight, partial shade reduced DPPH values by 29 and 42%, and the total phenolic content by 23 and 53%, in the 100% legume mixture and semi-natural pasture. Twelve phenolic compounds were detected: chlorogenic acid in the 80% legume mixture (partial shade) and verbascoside in the pure sward of bladder clover (full sunlight) were the most abundant. Light intensity significantly affected antioxidant capacity, composition and levels of phenolic compounds. Our results provide new insights into the effects of light intensity on plant secondary metabolites from legume-based swards, underlining the important functions provided by agroforestry systems. This article is protected by copyright. All rights reserved.

  9. Simulation Based Exploration of Critical Zone Dynamics in Intensively Managed Landscapes

    Science.gov (United States)

    Kumar, P.

    2017-12-01

    The advent of high-resolution measurements of topographic and (vertical) vegetation features using aerial LiDAR is enabling us to resolve micro-scale (~1 m) landscape structural characteristics over large areas. Availability of hyperspectral measurements is further augmenting these LiDAR data by enabling the biogeochemical characterization of vegetation and soils at unprecedented spatial resolutions (~1-10 m). Such data have opened up novel opportunities for modeling Critical Zone processes and exploring questions that were not possible before. We show how an integrated 3-D model at 1 m grid resolution can enable us to resolve micro-topographic and ecological dynamics and their control on hydrologic and biogeochemical processes over large areas. We address the computational challenge of such detailed modeling by exploiting hybrid CPU and GPU computing technologies. We show results of moisture, biogeochemical, and vegetation dynamics from studies in the Critical Zone Observatory for Intensively Managed Landscapes (IMLCZO) in the Midwestern United States.

  10. Influence of temperature and light intensity on Ru(II) complex based organic-inorganic device

    International Nuclear Information System (INIS)

    Asubay, Sezai; Durap, Feyyaz; Aydemir, Murat; Baysal, Akin; Ocak, Yusuf Selim; Tombak, Ahmet

    2016-01-01

    An organic-inorganic junction was fabricated by forming a [Ru(Cy₂PNHCH₂-C₄H₃O)(η⁶-p-cymene)Cl₂] complex thin film on n-Si using the spin coating technique and evaporating Au metal on the film. It was seen that the structure had perfect rectification properties. Current-voltage (I-V) measurements were carried out in the dark and under various illumination conditions (between 50 and 100 mW/cm²) over the temperature range from 303 to 380 K. The structure showed unusual temperature- and light-sensing behaviors under both forward and reverse bias. It was seen that the current in both forward and reverse bias increased with increasing light intensity and temperature.

  11. Automated choroid segmentation based on gradual intensity distance in HD-OCT images.

    Science.gov (United States)

    Chen, Qiang; Fan, Wen; Niu, Sijie; Shi, Jiajia; Shen, Honglie; Yuan, Songtao

    2015-04-06

    The choroid is an important structure of the eye and plays a vital role in the pathology of retinal diseases. This paper presents an automated choroid segmentation method for high-definition optical coherence tomography (HD-OCT) images, including Bruch's membrane (BM) segmentation and choroidal-scleral interface (CSI) segmentation. An improved retinal nerve fiber layer (RNFL) complex removal algorithm is presented to segment BM by considering the structure characteristics of retinal layers. By analyzing the characteristics of CSI boundaries, we present a novel algorithm to generate a gradual intensity distance image. Then an improved 2-D graph search method with curve smoothness constraints is used to obtain the CSI segmentation. Experimental results with 212 HD-OCT images from 110 eyes in 66 patients demonstrate that the proposed method can achieve high segmentation accuracy. The mean choroid thickness difference and overlap ratio between our proposed method and outlines drawn by experts were 6.72 µm and 85.04%, respectively.
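
    The 2-D graph search with a smoothness constraint can be sketched as a dynamic program that picks one boundary row per column while bounding the row jump between adjacent columns; max_jump is a hypothetical smoothness parameter, and the real method operates on the gradual intensity distance image rather than a generic cost map:

```python
import numpy as np

def trace_boundary(cost, max_jump=1):
    """Trace a minimal-cost left-to-right boundary through a 2-D cost image.

    One row is chosen per column; the row change between neighboring columns
    is limited to max_jump, which plays the role of the curve smoothness
    constraint described in the abstract.
    """
    h, w = cost.shape
    acc = cost.astype(float).copy()          # accumulated cost
    back = np.zeros((h, w), dtype=int)       # backpointers to previous column
    for c in range(1, w):
        for r in range(h):
            lo, hi = max(0, r - max_jump), min(h, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    # Backtrack from the cheapest endpoint in the last column.
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(w - 1, 0, -1):
        path.append(int(back[path[-1], c]))
    return path[::-1]                        # boundary row per column
```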

  12. Prescribed differences in exercise intensity based on the TCAR test over sandy ground and grass.

    Directory of Open Access Journals (Sweden)

    Juliano Fernandes da Silva

    2010-01-01

    Full Text Available The intensity of training might be influenced by exercise mode and type of terrain. Thus, the objectives of this study were (a) to compare the physiological indices determined in the TCAR test carried out on natural grass (NG) and sandy ground (SG), and (b) to analyze heart rate (HR) and blood lactate responses during constant exercise on SG and NG. Ten soccer players (15.11 ± 1.1 years, 168 ± 4.0 cm, 60 ± 4.0 kg) were submitted to the TCAR test to determine peak velocity (PV) and the intensity corresponding to 80.4% PV (V80.4) on NG and SG. The second evaluation consisted of two constant load tests (CLT) (80.4% PV) on NG and SG with a duration of 27 min. The paired Student t-test was used to compare the tests carried out on NG and SG. ANOVA (two-way), complemented by the Tukey test, was used to compare lactate concentrations [La] at 9, 18 and 27 min between the two types of terrain. A p value <0.05 was adopted. PV and V80.4 (15.3±1.0 and 12.3±0.6 km/h) were significantly higher on grass than on sand (14.3±1.0 and 11.5±0.4 km/h). Lactate concentration during the CLT [La V80.4] was significantly higher on sand (4.1±0.9 mmol/L) than on grass (2.8±0.7 mmol/L). In the CLT, no significant difference in mean HR was observed between the two terrains, whereas there was a difference in [La]. In conclusion, the type of terrain interferes with indicators associated with aerobic power and capacity obtained by the TCAR test.
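
    Since the prescribed intensity is simply 80.4% of the peak velocity from the TCAR test, its computation is a one-liner; the helper name v804 is ours, but the values it returns match the V80.4 figures reported above:

```python
def v804(peak_velocity_kmh):
    """Training intensity prescribed from the TCAR test: 80.4% of peak velocity,
    rounded to one decimal as reported in the abstract."""
    return round(0.804 * peak_velocity_kmh, 1)
```

    With the peak velocities from the study, v804(15.3) gives the grass value of 12.3 km/h and v804(14.3) the sand value of 11.5 km/h.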

  13. A modified GHG intensity indicator: Toward a sustainable global economy based on a carbon border tax and emissions trading

    International Nuclear Information System (INIS)

    Farrahi Moghaddam, Reza; Farrahi Moghaddam, Fereydoun; Cheriet, Mohamed

    2013-01-01

    It will be difficult to gain the agreement of all the actors on any proposal for climate change management if universality and fairness are not considered. In this work, a universal measure of emissions to be applied at the international level is proposed, based on a modification of the Greenhouse Gas Intensity (GHG-INT) measure. It is hoped that the generality and low administrative cost of this measure, which we call the Modified Greenhouse Gas Intensity measure (MGHG-INT), will eliminate any need to classify nations. The core of the MGHG-INT is what we call the IHDI-adjusted Gross Domestic Product (IHDIGDP), based on the Inequality-adjusted Human Development Index (IHDI). The IHDIGDP makes it possible to propose universal measures, such as the MGHG-INT. We also propose a carbon border tax applicable at national borders, based on the MGHG-INT and IHDIGDP. This carbon tax is supported by a proposed global Emissions Trading System (ETS). The proposed carbon tax is analyzed in a short-term scenario, where it is shown that it can result in a significant reduction in global emissions while keeping the economy growing at a positive rate. In addition to annual GHG emissions, cumulative GHG emissions over two decades are considered, with almost the same results. - Highlights: ► An IHDI-adjusted GDP (IHDIGDP) is introduced to account universally for the activities of nations. ► A modified GHG emission intensity (MGHG-INT) is introduced based on the IHDIGDP. ► Based on green and red scenarios, admissible emissions and the RED percentage are introduced. ► The RED percentage is used to define a border carbon tax (BCT) and an emissions trading system. ► The MGHG-INT can provide universal control of emissions while allowing high economic growth.

  14. The Influence of Activity-Based Funding on Treatment Intensity and Length of Stay of Geriatric Rehabilitation Patients.

    Science.gov (United States)

    Bouwstra, Hylco; Wattel, Lizette M; de Groot, Aafke J; Smalbrugge, Martin; Hertogh, Cees M

    2017-06-01

    Little is known about the impact of activity-based funding (ABF) on treatment intensity and length of stay (LOS) of inpatient geriatric patients. In January 2014, ABF was implemented in The Netherlands with the aim to increase treatment intensity and shorten LOS in geriatric rehabilitation (GR). To describe the influence of ABF on treatment intensity and LOS of inpatient GR patients before and after ABF was implemented. Population-based, retrospective cohort study. Thirty nursing homes providing inpatient GR across The Netherlands. Digital medical records of patients who had received inpatient GR in Dutch nursing homes across The Netherlands were studied between January 1, 2013 and March 14, 2016. We calculated the mean treatment intensity in hours per week and median LOS in days in 3 cohorts according to the year of admittance. In addition, a historical representative cohort of GR patients who were admitted in 2007 was studied that represented the situation before the ABF reform was announced (eg, funding with a fixed price per day). In 2013, the funding with a fixed price per day was still in use but with compulsory ABF registration. In 2014 and 2015, the ABF was fully implemented. Statistical differences in treatment intensity and LOS were calculated between patients admitted in 2007 and 2013, 2013 and 2014, and 2013 and 2015. Statistical significance was set at a P value of <.02 (Bonferroni correction P = .05/3). Discharge destinations of patients discharged from March 1, 2015 to January 1, 2016 could be obtained and compared with 2007. The treatment intensity and LOS of 16,823 GR patients could be obtained and compared with the historical cohort from 2007 (n = 2950). Patients who were admitted in the year 2013 received higher treatment intensities and had the same median LOS compared with 2007. After the implementation of ABF in January 2014, the mean treatment intensity increased significantly by 37% (3.8 hours/week in 2013, 4.7

  15. Comparison of Effects of Soy Oil, Olive Oil, Mct-Lct Based Nutrition Solutions in Parenterally Fed Intensive Care Patients

    Directory of Open Access Journals (Sweden)

    Nurşen Gürsoy

    2012-08-01

    Full Text Available Objective: In this study, we aimed to compare the changes in biochemical parameters and the efficacy of nutrition by using parenteral nutrition solutions with different lipid content in critically ill patients. Material and Method: Forty-five intensive care patients were randomized into three groups to receive either soy bean based (Group 1), olive oil based (Group 2), or MCT/LCT based (Group 3) nutrition solutions. The daily calorie requirement was calculated using the Schofield equation. The levels of albumin, total protein, AST, ALT, LDH, GGT, ALP, glucose, triglyceride, cholesterol, LDL, HDL, aPTT, PT, INR, CRP, transferrin and prealbumin were measured on days 1, 7 and 14. Results: There was no statistically significant difference between groups according to glucose, liver function tests, triglyceride, cholesterol, LDL, HDL, aPTT, PT, or INR levels. CRP and prealbumin were similar in within-group and between-group comparisons. In groups II and III, CRP levels decreased while prealbumin levels were increasing. Conclusion: In conclusion, no difference was found when comparing the biochemical parameters and efficacy of nutrition in ICU patients fed with soy oil, olive oil or MCT/LCT based parenteral nutrition solutions. (Journal of the Turkish Society of Intensive Care 2012; 10: 52-8)

  16. A reduced-form intensity-based model under fuzzy environments

    Science.gov (United States)

    Wu, Liang; Zhuang, Yaming

    2015-05-01

    External shocks and internal contagion are important sources of default events. However, the shocks to a company and the contagion effects on it are not directly observed, so we cannot obtain the accurate size of the shocks. The information available to investors about the default process therefore exhibits a certain fuzziness. Hence, there is a practical need to study problems such as derivative pricing and default probability using both randomness and fuzziness. Yet the idea of fuzzifying credit risk models is little exploited, especially in reduced-form models. This paper proposes a new default intensity model with fuzziness, derives a fuzzy default probability and default loss rate, and applies them to the pricing of defaultable debt and credit derivatives. Finally, a simulation analysis verifies the rationality of the model. Using fuzzy numbers and random analysis, one can incorporate more sources of uncertainty in the default process and investors' subjective judgment of the financial markets at various levels of fuzzy reliability, so as to broaden the scope of possible credit spreads.
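
    Under the common simplifying assumptions of a constant default intensity described by a triangular fuzzy number (the paper's model may differ in both respects), the fuzzy default probability can be sketched via alpha-cuts: since P(default by t) = 1 − exp(−λt) is increasing in λ, each alpha-cut of λ maps directly to an interval for the default probability.

```python
import math

def lam_cut(lam_lo, lam_mid, lam_hi, alpha):
    """Alpha-cut of a triangular fuzzy default intensity (lo, mid, hi)."""
    return (lam_lo + alpha * (lam_mid - lam_lo),
            lam_hi - alpha * (lam_hi - lam_mid))

def default_prob_cut(lam_lo, lam_mid, lam_hi, t, alpha):
    """Interval of default probabilities at membership level alpha.

    Because 1 - exp(-lam*t) is monotone increasing in lam, the endpoints of
    the alpha-cut of lam map directly to the endpoints of the probability
    interval (a standard result for monotone functions of fuzzy numbers).
    """
    lo, hi = lam_cut(lam_lo, lam_mid, lam_hi, alpha)
    return (1 - math.exp(-lo * t), 1 - math.exp(-hi * t))
```

    At alpha = 1 the interval collapses to the crisp value 1 − exp(−λ_mid·t), and lower alpha levels give progressively wider, nested intervals, which is what widens the range of admissible credit spreads.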

  17. TDM interrogation of intensity-modulated USFBGs network based on multichannel lasers.

    Science.gov (United States)

    Rohollahnejad, Jalal; Xia, Li; Cheng, Rui; Ran, Yanli; Rahubadde, Udaya; Zhou, Jiaao; Zhu, Lin

    2017-01-23

    We report a large-scale multi-channel fiber sensing network, where ultra-short FBGs (USFBGs) instead of conventional narrow-band ultra-weak FBGs are used as the sensors. In the time division multiplexing scheme of the network, each grating response is resolved as three adjacent discrete peaks. The central wavelengths of the USFBGs are tracked with differential detection, which is achieved by calculating the peak-to-peak ratio of the two maximum peaks. Compared with previous large-scale hybrid multiplexing sensing networks (e.g., WDM/TDM), which typically have relatively low interrogation speed and very high complexity, the proposed system can achieve interrogation of all channel sensors through very fast and simple intensity measurements with a broad dynamic range. A proof-of-concept experiment with twenty USFBGs, at two wavelength channels, was performed and fast static strain measurements were demonstrated, with a high average sensitivity of ~0.54 dB/µε and a wide dynamic range of over ~3000 µε. The channel-to-channel switching time was 10 ms and the total network interrogation time was 50 ms.

  18. Illumination robust face recognition using spatial adaptive shadow compensation based on face intensity prior

    Science.gov (United States)

    Hsieh, Cheng-Ta; Huang, Kae-Horng; Lee, Chang-Hsing; Han, Chin-Chuan; Fan, Kuo-Chin

    2017-12-01

    Robust face recognition under illumination variations is an important and challenging task in a face recognition system, particularly for face recognition in the wild. In this paper, a face image preprocessing approach, called spatial adaptive shadow compensation (SASC), is proposed to eliminate shadows in the face image due to different lighting directions. First, spatial adaptive histogram equalization (SAHE), which uses a face intensity prior model, is proposed to enhance the contrast of each local face region without generating visible noise in smooth face areas. Adaptive shadow compensation (ASC), which performs shadow compensation in each local image block, is then used to produce a well-compensated face image appropriate for face feature extraction and recognition. Finally, null-space linear discriminant analysis (NLDA) is employed to extract discriminant features from SASC-compensated images. Experiments performed on the Yale B, extended Yale B, and CMU PIE face databases have shown that the proposed SASC always yields the best face recognition accuracy. That is, SASC is more robust to face recognition under illumination variations than other shadow compensation approaches.
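
    A much-simplified stand-in for the spatially adaptive equalization step is per-block histogram equalization; the actual SAHE additionally weights the equalization by a face intensity prior to avoid amplifying noise in smooth areas, which this sketch omits. Block count and 8-bit input are assumptions:

```python
import numpy as np

def block_hist_eq(img, blocks=4):
    """Equalize the histogram of each local block of an 8-bit image.

    Illustrates the 'enhance contrast of each local face region' idea from
    the abstract in its crudest form. Assumes image dimensions divisible by
    `blocks`; the prior-based noise suppression of SAHE is not modeled.
    """
    img = img.astype(np.uint8)
    out = np.empty_like(img)
    h, w = img.shape
    bh, bw = h // blocks, w // blocks
    for i in range(blocks):
        for j in range(blocks):
            tile = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist = np.bincount(tile.ravel(), minlength=256)
            cdf = hist.cumsum()
            # Map intensities through the normalized cumulative histogram.
            lut = np.round(255 * cdf / cdf[-1]).astype(np.uint8)
            out[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw] = lut[tile]
    return out
```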

  19. Sleep Characteristics of the Staff Working in a Pediatric Intensive Care Unit Based on a Survey

    Directory of Open Access Journals (Sweden)

    Yolanda Puerta

    2017-12-01

    Full Text Available The objective is to evaluate the sleep characteristics of the staff working in a pediatric intensive care unit (PICU). They were asked to complete an anonymous survey concerning the characteristics and quality of their sleep, as well as the impact of sleep disturbances on their work and social life, assessed by the Functional Outcomes of Sleep Questionnaire (FOSQ-10). The response rate was 84.6% (85% females): 17% were doctors, 57% nurses, 23% nursing assistants, and 3% porters. 83.8% of them worked fixed shifts and 16.2% did 24-h shifts. 39.8% of workers considered that they had a good sleep quality and 39.8% considered it to be poor or bad. The score was good in 18.2% of the staff and bad in 81.8%. Night shift workers showed significantly worse sleep quality on both the objective and subjective evaluation. There was a weak concordance (kappa 0.267; p = 0.004) between the perceived quality of sleep and the FOSQ-10 evaluation. Sleep disorders affected their emotional state (30.2% of workers) and relationships or social life (22.6%). In conclusion, this study finds that a high percentage of health professionals from the PICU suffer from sleep disorders that affect their personal and social life. This negative impact is significantly higher in night shift workers. Many health workers are not aware of their bad sleep quality.

  20. Sleep Characteristics of the Staff Working in a Pediatric Intensive Care Unit Based on a Survey.

    Science.gov (United States)

    Puerta, Yolanda; García, Mirian; Heras, Elena; López-Herce, Jesús; Fernández, Sarah N; Mencía, Santiago

    2017-01-01

    The objective is to evaluate the sleep characteristics of the staff working in a pediatric intensive care unit (PICU). They were asked to complete an anonymous survey concerning the characteristics and quality of their sleep, as well as the impact of sleep disturbances on their work and social life, assessed by the Functional Outcomes of Sleep Questionnaire (FOSQ-10). The response rate was 84.6% (85% females): 17% were doctors, 57% nurses, 23% nursing assistants, and 3% porters. 83.8% of them worked fixed shifts and 16.2% did 24-h shifts. 39.8% of workers considered that they had a good sleep quality and 39.8% considered it to be poor or bad. The score was good in 18.2% of the staff and bad in 81.8%. Night shift workers showed significantly worse sleep quality on both the objective and subjective evaluation. There was a weak concordance (kappa 0.267; p = 0.004) between the perceived quality of sleep and the FOSQ-10 evaluation. Sleep disorders affected their emotional state (30.2% of workers) and relationships or social life (22.6%). In conclusion, this study finds that a high percentage of health professionals from the PICU suffer from sleep disorders that affect their personal and social life. This negative impact is significantly higher in night shift workers. Many health workers are not aware of their bad sleep quality.

  1. Windowless microfluidic platform based on capillary burst valves for high intensity x-ray measurements

    International Nuclear Information System (INIS)

    Vig, Asger Laurberg; Enevoldsen, Nikolaj; Thilsted, Anil Haraksingh; Eriksen, Johan; Kristensen, Anders; Haldrup, Kristoffer; Feidenhans'l, Robert; Nielsen, Martin Meedom

    2009-01-01

    We propose and describe a microfluidic system for high intensity x-ray measurements. The required open access to a microfluidic channel is provided by an out-of-plane capillary burst valve (CBV). The functionality of the out-of-plane CBV is characterized with respect to the diameter of the windowless access hole, ranging from 10 to 130 μm. Maximum driving pressures from 22 to 280 mbar, corresponding to refresh rates of the exposed sample from 300 Hz to 54 kHz, are demonstrated. The microfluidic system is tested at beamline ID09b at the ESRF synchrotron radiation facility in Grenoble, and x-ray scattering measurements are shown to be feasible, requiring only very limited amounts of sample, <1 ml/h of measurement without recapturing of sample. With small adjustments of the present chip design, scattering angles up to 30° can be achieved without shadowing effects, and integration of on-chip mixing and spectroscopy appears straightforward.
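
    The reported burst pressures are consistent with a simple Young-Laplace estimate for a circular opening, ΔP ≈ 4γ/d, assuming the surface tension of water (γ ≈ 0.072 N/m) and |cos θ| ≈ 1; this back-of-the-envelope formula is an assumption of this note, not taken from the paper:

```python
def burst_pressure_mbar(diameter_m, gamma=0.072):
    """Young-Laplace estimate of the CBV burst pressure for a circular hole.

    dP ~ 4*gamma/d, with gamma the liquid surface tension in N/m (water
    assumed) and the contact-angle factor taken as 1. Returns mbar.
    """
    return 4 * gamma / diameter_m / 100.0  # Pa -> mbar (1 mbar = 100 Pa)
```

    For the two extreme hole diameters above this gives roughly 288 mbar at 10 μm and 22 mbar at 130 μm, close to the 280 and 22 mbar maximum driving pressures reported.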

  2. Au-Graphene Hybrid Plasmonic Nanostructure Sensor Based on Intensity Shift

    Science.gov (United States)

    Alharbi, Raed; Irannejad, Mehrdad; Yavuz, Mustafa

    2017-01-01

    Integrating plasmonic materials, like gold, with a two-dimensional material (e.g., graphene) enhances the light-material interaction and, hence, the plasmonic properties of the metallic nanostructure. A localized surface plasmon resonance sensor is an effective platform for biomarker detection. Such sensors offer better bulk surface (local) sensitivity than a regular surface plasmon resonance (SPR) sensor; however, they suffer from a lower figure of merit compared to that of propagating surface plasmon resonance sensors. In this work, a multilayer graphene film decorated with Au nanostructures was proposed as a liquid sensor. The results showed a significant improvement in the figure of merit compared with other reported localized surface plasmon resonance sensors. A maximum figure of merit of 240 and an intensity sensitivity of 55 RIU−1 (refractive index unit) at a refractive index change of 0.001 were achieved, which indicates the capability of the proposed sensor to detect small changes in the concentration of liquids at the ng/mL level, which is essential in early-stage cancer detection. PMID:28106850

  4. Designing a Method for AN Automatic Earthquake Intensities Calculation System Based on Data Mining and On-Line Polls

    Science.gov (United States)

    Liendo Sanchez, A. K.; Rojas, R.

    2013-05-01

    Seismic intensities can be calculated using the Modified Mercalli Intensity (MMI) scale or the European Macroseismic Scale (EMS-98), among others, which are based on a series of qualitative aspects related to a group of subjective factors describing human perception, effects on nature or objects, and structural damage due to the occurrence of an earthquake. On-line polls allow experts to get an overview of the consequences of an earthquake without traveling to the affected locations. However, this can be laborious if the polls are not properly automated. Since the answers given to these polls are subjective, and a number of them have already been classified for past earthquakes, data mining techniques can be used to automate this process and to obtain preliminary results based on the on-line polls. To achieve this goal, a predictive model has been built using a classifier based on a supervised learning technique, the decision tree algorithm, and a group of polls based on the MMI and EMS-98 scales. The model summarizes the most important questions of the poll and recursively divides the instance space corresponding to each question (nodes), with each node splitting the space according to the possible answers. The implementation was done with Weka, a collection of machine learning algorithms for data mining tasks, using the J48 algorithm, an implementation of the C4.5 algorithm for decision tree models. In this way, it was possible to obtain a preliminary model able to identify up to 4 different seismic intensities with 73% of polls correctly classified. The error obtained is rather high; therefore, we will update the on-line poll in order to improve the results, based on just one scale, for instance the MMI. Furthermore, the integration of this automatic seismic intensity methodology, with a low error probability, with a basic georeferencing system will allow the generation of preliminary isoseismal maps
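    As a rough illustration of the classification step described in this record, the sketch below trains a decision tree on synthetic poll answers. The record reports Weka's J48 (C4.5); scikit-learn's CART-based `DecisionTreeClassifier` is used here as an analogous stand-in, and the features, labels, and the rule linking them are invented placeholders, not the study's data.

    ```python
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score
    import numpy as np

    rng = np.random.default_rng(0)
    # Rows: poll responses encoded as ordinal answers (0-3) to six hypothetical
    # questions; label: an MMI class in {2, ..., 5}. Synthetic rule: intensity
    # tracks the answer to the first question ("strength of shaking").
    X = rng.integers(0, 4, size=(400, 6))
    y = 2 + X[:, 0]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"held-out accuracy: {acc:.2f}")
    ```

    On real poll data the tree would split on many questions and the held-out accuracy would be far below the near-perfect score this toy rule permits.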

  5. Atmospheric stabilization of CO2 emissions: Near-term reductions and absolute versus intensity-based targets

    International Nuclear Information System (INIS)

    Timilsina, Govinda R.

    2008-01-01

    This study analyzes CO2 emissions reduction targets for various countries and geopolitical regions by the year 2030 to stabilize atmospheric concentrations of CO2 at the 450 ppm level (550 ppm including non-CO2 greenhouse gases). It also determines the CO2 intensity cuts that would be required in those countries and regions if the emission reductions were to be achieved through intensity-based targets without curtailing their expected economic growth. Considering that stabilization of CO2 concentrations at 450 ppm requires the global trend of CO2 emissions to be reversed before 2030, this study develops two scenarios: reversing the global CO2 trend in (i) 2020 and (ii) 2025. The study shows that global CO2 emissions would be limited to 42 percent above the 1990 level in 2030 if the increasing trend of global CO2 emissions were reversed by 2020. If reversing the trend were delayed by 5 years, global CO2 emissions in 2030 would be 52 percent higher than the 1990 level. The study also finds that to achieve these targets while maintaining expected economic growth, the global average CO2 intensity would require a 68 percent drop from the 1990 level, or a 60 percent drop from the 2004 level, by 2030.

  6. Determination of strength exercise intensities based on the load-power-velocity relationship.

    Science.gov (United States)

    Jandačka, Daniel; Beremlijski, Petr

    2011-06-01

    The velocity of movement and the applied load affect the production of mechanical power output and, subsequently, the extent of the adaptation stimulus in strength exercises. We do not know of any function describing the relationship between power, velocity and load in the bench press exercise. The objective of the study is to find a function modeling the relationship between relative velocity, relative load and mechanical power output for the bench press exercise, and to determine intensity zones of the exercise for specifically focused strength training of soccer players. Fifteen highly trained soccer players at the start of a competition period were studied. The subjects performed bench presses with loads of 0, 10, 30, 50, 70 and 90% of the predetermined one-repetition maximum at the maximum possible speed of movement. The mean measured power and velocity for each load (kg) were used to develop a multiple linear regression function describing the quadratic relationship between the ratio of power (W) to maximum power (W) and the ratios of load (kg) to one-repetition maximum (kg) and of velocity (m·s−1) to maximal velocity (m·s−1). The quadratic function of two variables that modeled the relationship explained 74% of the measured values in the acceleration phase and 75% of the measured values over the entire extent of the positive-power movement in the lift. The optimal load for reaching maximum power output, suitable for dynamic-effort strength training, was 40% of the one-repetition maximum, while the optimal mean velocity was 75% of maximal velocity. Moreover, four zones were determined on the basis of the regression function: maximum power, maximum velocity, velocity-power and strength-power.
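    A minimal sketch of the kind of quadratic regression this record describes, on synthetic data: relative power is fitted as a full quadratic in relative load and relative velocity, and the fitted surface is scanned for the power-maximizing load. The load-velocity model and all numbers are assumptions for illustration, not the study's measurements (the synthetic optimum is deliberately placed near the reported 40% of 1RM).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    load = rng.uniform(0.0, 0.7, 200)                    # load as a fraction of 1RM
    vel = 1.0 - 1.25 * load + rng.normal(0, 0.02, 200)   # assumed load-velocity model
    power = load * vel                                    # relative power

    # Full quadratic design matrix in (load, vel)
    A = np.column_stack([np.ones_like(load), load, vel,
                         load**2, vel**2, load * vel])
    coef, *_ = np.linalg.lstsq(A, power, rcond=None)

    # Scan the fitted surface along the noise-free load-velocity curve
    grid = np.linspace(0.0, 0.7, 71)
    v_grid = 1.0 - 1.25 * grid
    pred = np.column_stack([np.ones_like(grid), grid, v_grid,
                            grid**2, v_grid**2, grid * v_grid]) @ coef
    best_load = grid[np.argmax(pred)]
    print(f"predicted power-maximizing load: {best_load:.2f} of 1RM")
    ```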

  7. [Invasive candidiasis in non-neutropenic adults : Guideline-based management in the intensive care unit].

    Science.gov (United States)

    Glöckner, A; Cornely, O A

    2013-12-01

    Invasive Candida infections represent a diagnostic and therapeutic challenge for clinicians, particularly in the intensive care unit (ICU). Despite substantial advances in antifungal agents and treatment strategies, invasive candidiasis remains associated with a high mortality. Recent guideline recommendations on the management of invasive candidiasis by the European Society of Clinical Microbiology and Infectious Diseases (ESCMID) from 2012, the German Speaking Mycological Society and the Paul Ehrlich Society for Chemotherapy (DMykG/PEG) from 2011 and the Infectious Diseases Society of America (IDSA) from 2009 provide valuable guidance for diagnostic procedures and treatment of these infections, but need to be interpreted in the light of the individual situation of the patient and the local epidemiology of fungal pathogens. The following recommendations for the management of candidemia are common to all three guidelines. Any positive blood culture for Candida indicates disseminated infection or deep organ infection and requires antifungal therapy. Treatment should be initiated as soon as possible. Removal or changing of central venous catheters or other foreign material in the bloodstream is recommended whenever possible. Ophthalmological examination for exclusion of endophthalmitis and follow-up blood cultures during therapy are also recommended. The duration of therapy should be 14 days after clearance of blood cultures and resolution of symptoms. Consideration of surgical options and prolonged antifungal treatment (weeks to months) are required when there is organ involvement. During the last decade, several new antifungal agents were introduced into clinical practice. These innovative drugs showed convincing efficacy and favorable safety in randomized clinical trials. Consequently, they were integrated into recent therapeutic guidelines, often replacing former standard drugs as first-line options. Echinocandins have emerged as the generally preferred primary treatment in

  8. Prefrontal cortex based sex differences in tinnitus perception: same tinnitus intensity, same tinnitus distress, different mood.

    Directory of Open Access Journals (Sweden)

    Sven Vanneste

    BACKGROUND: Tinnitus refers to an auditory phantom sensation. It is estimated that for 2% of the population this auditory phantom percept severely affects the quality of life, due to tinnitus-related distress. Although overall distress levels do not differ between the sexes in tinnitus, females are more influenced by distress than males. Typically, pain, sleep problems, and depression are perceived as significantly more severe by female tinnitus patients. Studies on gender differences in emotional regulation indicate that females with high depressive symptoms show greater attention to emotion, and use fewer anti-rumination emotional repair strategies than males. METHODOLOGY: The objective of this study was to verify whether the activity and connectivity of the resting brain differ between male and female tinnitus patients, using resting-state EEG. CONCLUSIONS: Females had a higher mean score on the BDI-II than male tinnitus patients. Female tinnitus patients differ from male tinnitus patients in the orbitofrontal cortex (OFC), extending to the frontopolar cortex, in beta1 and beta2. The OFC is important for the emotional processing of sounds. Increased functional alpha connectivity is found between the OFC, insula, subgenual anterior cingulate (sgACC), parahippocampal (PHC) areas and the auditory cortex in females. Our data suggest increased functional connectivity that binds tinnitus-related auditory cortex activity to auditory emotion-related areas via the PHC-sgACC connections, resulting in a more depressive state even though tinnitus intensity and tinnitus-related distress are not different from men. Comparing male tinnitus patients to a control group of males, significant differences could be found for beta3 in the posterior cingulate cortex (PCC). The PCC might be related to cognitive and memory-related aspects of the tinnitus percept. Our results suggest that sex influences in tinnitus research cannot be ignored and should be taken into account in functional

  9. Method for screening prevention and control measures and technologies based on groundwater pollution intensity assessment.

    Science.gov (United States)

    Li, Juan; Yang, Yang; Huan, Huan; Li, Mingxiao; Xi, Beidou; Lv, Ningqing; Wu, Yi; Xie, Yiwen; Li, Xiang; Yang, Jinjin

    2016-05-01

    This paper presents a system for determining the evaluation and gradation indices of groundwater pollution intensity (GPI). Considering the characteristics of the vadose zone and pollution sources, the system decides which anti-seepage measures should be implemented at a contaminated site. Pollution source hazards (PSH) and groundwater intrinsic vulnerability (GIV) are graded by the revised Nemerow Pollution Index and an improved DRTAS model, respectively. GPI is evaluated and graded by a double-sided multi-factor coupling model constructed by the matrix method. Contaminated sites are categorized as prior, ordinary, or common sites. From the GPI results, we develop guiding principles for preventing and removing pollution sources, for procedural interruption and remediation, and for end treatment and monitoring. Thus, we can select appropriate prevention and control technologies (PCT). To screen the technological schemes and optimize the traditional analytic hierarchy process (AHP), we adopt the technique for order preference by similarity to ideal solution (TOPSIS). Our GPI approach and PCT screening are applied to three types of pollution sites: the refuse dump of a rare earth mine development project (a potential pollution source), a chromium slag dump, and a landfill (existing pollution sources). These three sites are identified as ordinary, prior, and ordinary sites, respectively. The anti-seepage materials at the refuse dump should perform as effectively as a 1.5-m-thick clay bed. The chromium slag dump should preferentially be treated by soil flushing and in situ chemical remediation. The landfill should be treated by natural attenuation technology. The proposed PCT screening approach was compared with results from conventional screening methods at the three sites and proved feasible and effective. The proposed method can provide technical support for the monitoring and management of groundwater pollution in China.
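    The TOPSIS ranking step used here for screening technology schemes can be sketched as follows. The decision matrix, criteria, and weights below are invented placeholders for illustration, not the paper's data.

    ```python
    import numpy as np

    def topsis(matrix, weights, benefit):
        """Rank alternatives: rows = alternatives, cols = criteria.
        benefit[j] is True if larger values of criterion j are better."""
        M = matrix / np.linalg.norm(matrix, axis=0)   # vector-normalize each column
        M = M * weights                                # apply criterion weights
        ideal = np.where(benefit, M.max(axis=0), M.min(axis=0))
        anti = np.where(benefit, M.min(axis=0), M.max(axis=0))
        d_pos = np.linalg.norm(M - ideal, axis=1)      # distance to ideal solution
        d_neg = np.linalg.norm(M - anti, axis=1)       # distance to anti-ideal
        return d_neg / (d_pos + d_neg)                 # relative closeness in (0, 1)

    # Three hypothetical remediation schemes scored on cost (lower is better),
    # removal efficiency and technical maturity (higher is better).
    scores = topsis(np.array([[3.0, 0.8, 0.7],
                              [5.0, 0.9, 0.9],
                              [2.0, 0.5, 0.6]]),
                    weights=np.array([0.4, 0.4, 0.2]),
                    benefit=np.array([False, True, True]))
    best = int(np.argmax(scores))
    print(f"closeness scores: {scores.round(3)}, best scheme: {best}")
    ```

    The scheme with the largest relative closeness to the ideal solution is selected; the paper applies this ranking after an AHP-derived weighting step.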

  10. Preliminary Study of the Effect of Low-Intensity Home-Based Physical Therapy in Chronic Stroke Patients

    Directory of Open Access Journals (Sweden)

    Jau-Hong Lin

    2004-01-01

    This study was a preliminary examination of the effect of low-intensity home-based physical therapy on the performance of activities of daily living (ADL) and motor function in patients more than 1 year after stroke. Twenty patients were recruited from a community stroke register in Nan-Tou County, Taiwan, to a randomized, crossover trial comparing intervention by a physical therapist immediately after entry into the trial (Group I) or after a delay of 10 weeks (Group II). The intervention consisted of home-based physical therapy once a week for 10 weeks. The Barthel Index (BI) and the Stroke Rehabilitation Assessment of Movement (STREAM) were used as standard measures of ADL and motor function. At the first follow-up assessment at 11 weeks, Group I showed greater improvement in lower limb motor function than Group II. At the second follow-up assessment at 22 weeks, Group II showed improvement while Group I had declined. At 22 weeks, upper limb motor function, mobility, and ADL performance in Group II had improved slightly more than in Group I, but the between-group differences were not significant. It appears that low-intensity home-based physical therapy can improve lower limb motor function in chronic stroke survivors. Further studies will be needed to confirm these findings.

  11. All-fiber intensity bend sensor based on photonic crystal fiber with asymmetric air-hole structure

    Science.gov (United States)

    Budnicki, Dawid; Szostkiewicz, Lukasz; Szymanski, Michal O.; Ostrowski, Lukasz; Holdynski, Zbigniew; Lipinski, Stanislaw; Murawski, Michal; Wojcik, Grzegorz; Makara, Mariusz; Poturaj, Krzysztof; Mergo, Pawel; Napierala, Marek; Nasilowski, Tomasz

    2017-10-01

    Monitoring the geometry of a moving element is a crucial task, for example in robotics. Robots equipped with a fiber bend sensor integrated in their arms can be a promising solution for medicine and physiotherapy, and also for applications in computer games. We report an all-fiber intensity bend sensor based on a microstructured multicore optical fiber, which allows measurement of both the bending radius and the bending orientation. The reported solution has a special air-hole structure which makes the sensor sensitive only to bending. Our solution is an intensity-based sensor, which measures the power transmitted along the fiber as influenced by bending. The sensor is based on a multicore fiber with a special air-hole structure that allows detection of the bending orientation over a range of 360°. Each core in the multicore fiber is sensitive to bending in a specified direction. The principle behind the sensor's operation is to differentiate the confinement loss of the fundamental mode propagating in each core. From the received power differences, one can determine not only the bend direction but also its amplitude. The multicore fiber is designed to utilize the most common light sources, operating at 1.55 μm, thus ensuring high stability of operation. The sensitivity of the proposed solution equals 29.4 dB/cm, and the accuracy of the bend direction at the fiber end point is up to 5 degrees for a 15 cm fiber length. Such sensitivity allows end-point detection with millimeter precision.
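    A minimal sketch of how bend direction might be recovered from per-core transmitted power in such a multicore sensor, assuming a simple intensity-weighted vector-sum model and a hypothetical four-core layout; the real fiber's calibrated response would differ.

    ```python
    import math

    def bend_direction(losses_db, core_angles_deg):
        """Estimate bend azimuth (deg) as the loss-weighted vector sum of core
        directions; the core with the largest excess loss dominates."""
        x = sum(l * math.cos(math.radians(a)) for l, a in zip(losses_db, core_angles_deg))
        y = sum(l * math.sin(math.radians(a)) for l, a in zip(losses_db, core_angles_deg))
        return math.degrees(math.atan2(y, x)) % 360

    # Four cores at 0/90/180/270 deg; a bend toward ~45 deg raises the excess
    # loss in the 0 and 90 deg cores equally (made-up loss values).
    angle = bend_direction([3.0, 3.0, 0.5, 0.5], [0, 90, 180, 270])
    print(f"estimated bend azimuth: {angle:.1f} deg")
    ```

    The overall loss magnitude, not used here, would carry the bend-amplitude information in the same spirit.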

  12. The integration of DVH-based planning aspects into a convex intensity modulated radiation therapy optimization framework

    Energy Technology Data Exchange (ETDEWEB)

    Kratt, Karin [Faculty of Mathematics, Technical University of Kaiserslautern, Kaiserslautern (Germany); Scherrer, Alexander [Department of Optimization, Fraunhofer Institute for Industrial Mathematics (ITWM), Kaiserslautern (Germany)], E-mail: alexander.scherrer@itwm.fraunhofer.de

    2009-06-21

    The formulation of intensity modulated radiation therapy (IMRT) planning aspects frequently uses the dose-volume histogram (DVH), whereas plan computations often happen in the more desirable convex IMRT optimization framework. Inspired by a recent publication of Zinchenko et al (2008 Phys. Med. Biol. 53 3231-50), this work addresses the integration of DVH-based planning aspects into this framework from a general point of view. It first provides the basic mathematical requirements on the evaluation functions in order to support such an incorporation. Then it introduces the condition number as a description for how precisely DVH-based planning aspects can be reformulated in terms of evaluation functions. Exemplary numerical studies for the generalized equivalent uniform dose and a physical constraint function show the influence of function parameter values and DVH approximation on the condition number. The work concludes by formulating the aspects that should be taken into account for an appropriate integration of DVH-based planning aspects. (note)

  14. The Impacts of Theme-Based Language Instruction: A Case Study of an Advanced Chinese Intensive Program

    Directory of Open Access Journals (Sweden)

    Song Jiang

    2017-06-01

    Theme-based language teaching under Content-Based Instruction (CBI) is a pedagogical approach that emphasizes learning professional content along with language skills. This paper reports a case study on the impacts of a theme-based advanced Chinese intensive program in a university setting. It begins with a review of CBI and its theme-based approach and then discusses the program design, curriculum development, and instructional practice of the program. The impacts of the theme-based approach are examined based on pre- and post-proficiency test results, learners' self-reported surveys on the themes and topics, and the reading strategies covered in the program. Qualitative analysis of learners' self-reflections and program evaluations is also presented. Based on the evidence collected, this paper argues that the theme-based model has positive impacts on improving language proficiency, preparing for academic and professional language use, cultivating strategic language learners, and revitalizing Chinese teaching at the superior level.

  15. Impact of intense x-ray pulses on a NaI(Tl)-based gamma camera

    Science.gov (United States)

    Koppert, W. J. C.; van der Velden, S.; Steenbergen, J. H. L.; de Jong, H. W. A. M.

    2018-03-01

    In SPECT/CT systems, x-ray and γ-ray imaging is performed sequentially. Simultaneous acquisition may have advantages, for instance in interventional settings; however, this may expose the gamma camera to relatively high x-ray doses and deteriorate its functioning. We studied the NaI(Tl) response to x-ray pulses with a photodiode, a PMT and a gamma camera, respectively. First, we exposed a NaI(Tl)-photodiode assembly to x-ray pulses to investigate potential crystal afterglow. Next, we exposed a NaI(Tl)-PMT assembly to 10 ms LED pulses (mimicking x-ray pulses) and measured the response to flashing LED probe pulses (mimicking γ-pulses). We then exposed the assembly to x-ray pulses, with detector entrance doses of up to 9 nGy/pulse, and analysed the response for γ-pulse variations. Finally, we studied the response of a Siemens Diacam gamma camera to γ-rays while exposed to x-ray pulses. X-ray exposure of the crystal, read out with a photodiode, revealed a 15% afterglow fraction after 3 ms. The NaI(Tl)-PMT assembly showed disturbances up to 10 ms after 10 ms LED exposure. After x-ray exposure, however, responses showed elevated baselines with a 60 ms decay time. For both x-ray and LED exposure, and after baseline subtraction, probe-pulse analysis revealed disturbed pulse-height measurements shortly after exposure. X-ray exposure of the Diacam corroborated the elementary experiments. Up to 50 ms after an x-ray pulse, no events are registered, followed by apparent energy elevations up to 100 ms after exposure. Limiting the dose to 0.02 nGy/pulse prevents detrimental effects. Conventional gamma cameras exhibit substantial dead time and mis-registration of photon energies up to 100 ms after intense x-ray pulses, due to PMT limitations and to afterglow in the crystal. Using PMTs with modified circuitry, we show that deteriorative afterglow effects can be reduced without noticeable effects on PMT performance, up to x-ray pulse doses of 1 nGy.

  16. Mathematical model for biomolecular quantification using surface-enhanced Raman spectroscopy based signal intensity distributions

    DEFF Research Database (Denmark)

    Palla, Mirko; Bosco, Filippo Giacomo; Yang, Jaeyoung

    2015-01-01

    This paper presents the development of a novel statistical method for quantifying trace amounts of biomolecules by surface-enhanced Raman spectroscopy (SERS) using a rigorous, single molecule (SM) theory based mathematical derivation. Our quantification framework could be generalized for planar...

  17. The Rainfall Intensity Effects on 1–13 GHz UWB-Based 5G System for Outdoor Applications

    Directory of Open Access Journals (Sweden)

    Joko Suryana

    2017-01-01

    This paper reports a research contribution on tropical outdoor channel characterization in the 1–13 GHz band for 5G systems. This 1–13 GHz ultra-wideband (UWB) channel characterization is formulated with rain intensity, from 20 mm/h to 200 mm/h, as the most important variable. Tropical rain causes pulse broadening and distorts the transmitted symbols, so the probability of symbol errors increases. In this research, the bit error rate (BER) performance evaluation is done using both matched-filter and correlator-based receivers. Under no-rain conditions, a BER of 10−6 is attained at a signal-to-noise ratio (SNR) of 5 dB, but at a rainfall intensity of 200 mm/h, the BER falls to 10−2 for the matched-filter and 5×10−2 for the correlator-based receiver. To improve the BER performance, an adaptive nonlinear phase equalizer is proposed which adopts multiple allpass biquad infinite impulse response (IIR) filters combined with a low-order finite impulse response (FIR) filter to mitigate the phase nonlinearity and the differential attenuation of the magnitude response due to the antenna and tropical outdoor UWB channel effects. Our simulation results show that the proposed equalizer works successfully with a BER of 10−6 at the rain rate that is exceeded for 0.01% of the time (R0.01), i.e., 99.99% availability. In addition, at a rainfall rate of 120 mm/h, the proposed nonlinear phase equalizer can give a 9 dB signal improvement.
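    The allpass biquad IIR sections at the heart of the proposed equalizer shape phase while leaving magnitude untouched. The sketch below verifies this defining property for one section with an illustrative (assumed) pole location; it is not the paper's filter design.

    ```python
    import numpy as np
    from scipy.signal import freqz

    r, theta = 0.7, np.pi / 4                    # assumed pole radius and angle
    a = [1.0, -2 * r * np.cos(theta), r ** 2]    # denominator coefficients
    b = a[::-1]                                  # mirrored numerator -> allpass

    w, H = freqz(b, a, worN=512)

    # Defining allpass property: unit magnitude at every frequency;
    # only the phase response varies with w.
    print(f"max |H| deviation from 1: {np.max(np.abs(np.abs(H) - 1.0)):.2e}")
    ```

    Cascading several such sections, each with its own pole location, gives the degrees of freedom needed to approximate an arbitrary phase correction.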

  18. Laboratory and Field-Based Evaluation of Short-Term Effort with Maximal Intensity in Individuals with Intellectual Disabilities

    Directory of Open Access Journals (Sweden)

    Lencse-Mucha Judit

    2015-12-01

    Results of previous studies have not clearly indicated which tests should be used to assess short-term efforts of people with intellectual disabilities. Thus, the aim of the present study was to evaluate laboratory and field-based tests of short-term effort with maximal intensity in subjects with intellectual disabilities. Twenty-four people with intellectual disability, who trained in soccer, participated in this study. The 30 s Wingate test and, additionally, an 8 s test with maximum intensity were performed on a bicycle ergometer. The fatigue index, maximal and mean power, and relative maximal and relative mean power were measured. Overall, nine field-based tests were conducted: 5, 10 and 20 m sprints, a 20 m shuttle run, a seated medicine ball throw, a bent arm hang test, a standing broad jump, sit-ups and a hand grip test. The reliability of the 30 s and 8 s Wingate tests for subjects with intellectual disability was confirmed. Significant correlation at a moderate level (r > 0.4) was observed for mean power between the 30 s and 8 s tests on the bicycle ergometer. Moreover, significant correlations were indicated between the results of the laboratory tests and field tests such as the 20 m sprint, the 20 m shuttle run, the standing long jump and the medicine ball throw; the strongest correlation was with the medicine ball throw. The 30 s Wingate test is a reliable test for assessing maximal effort in subjects with intellectual disability. The results of this research confirmed that the 8 s test on a bicycle ergometer had a moderate correlation with the 30 s Wingate test in this population; thus, this comparison needs further investigation to examine whether the 8 s test can serve as an alternative to the 30 s Wingate test. The non-laboratory tests could be used to indirectly assess performance in short-term efforts with maximal intensity.

  19. Reliable detection of fluence anomalies in EPID-based IMRT pretreatment quality assurance using pixel intensity deviations

    International Nuclear Information System (INIS)

    Gordon, J. J.; Gardner, J. K.; Wang, S.; Siebers, J. V.

    2012-01-01

    Purpose: This work uses repeat images of intensity modulated radiation therapy (IMRT) fields to quantify the fluence anomalies (i.e., delivery errors) that can be reliably detected in electronic portal images used for IMRT pretreatment quality assurance. Methods: Repeat images of 11 clinical IMRT fields are acquired on a Varian Trilogy linear accelerator at energies of 6 MV and 18 MV. Acquired images are corrected for output variations and registered to minimize the impact of linear accelerator and electronic portal imaging device (EPID) positioning deviations. Detection studies are performed in which rectangular anomalies of various sizes are inserted into the images. The performance of detection strategies based on pixel intensity deviations (PIDs) and gamma indices is evaluated using receiver operating characteristic analysis. Results: Residual differences between registered images are due to interfraction positional deviations of jaws and multileaf collimator leaves, plus imager noise. Positional deviations produce large intensity differences that degrade anomaly detection. Gradient effects are suppressed in PIDs using gradient scaling, and background noise is suppressed using median filtering. In the majority of images, PID-based detection strategies can reliably detect fluence anomalies of ≥5% in ∼1 mm² areas and ≥2% in ∼20 mm² areas. Conclusions: The ability to detect small dose differences (≤2%) depends strongly on the level of background noise, which in turn depends on the accuracy of image registration, the quality of the reference image, and the field properties. The longer-term aim of this work is to develop accurate and reliable methods of detecting IMRT delivery errors and variations. The ability to resolve small anomalies will allow the accuracy of advanced treatment techniques, such as image-guided, adaptive, and arc therapies, to be quantified.
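    A minimal sketch of PID-style anomaly detection between a reference and a repeat image, combining the gradient scaling and median filtering described above with a simple threshold. The images, the gradient-scaling form, and all parameter values are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def detect_anomalies(reference, measured, threshold=0.02):
        """Flag pixels whose normalized intensity deviation exceeds a threshold,
        after down-weighting high-gradient regions and median filtering."""
        diff = (measured - reference) / reference.max()        # normalized PID map
        grad = np.hypot(*np.gradient(reference))
        scaled = diff / (1.0 + grad / (grad.mean() + 1e-12))   # suppress edge effects
        return median_filter(np.abs(scaled), size=3) > threshold

    # Flat synthetic field with an inserted 5% rectangular anomaly
    ref = np.ones((64, 64))
    meas = ref.copy()
    meas[20:30, 20:30] *= 1.05
    mask = detect_anomalies(ref, meas)
    print(f"flagged pixels: {int(mask.sum())}")
    ```

    On real EPID images, the registration and output-correction steps described in the record would precede this comparison, and the threshold would be set from the ROC analysis.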

  1. Experimental and numerical analysis for high intensity swirl based ultra-low emission flameless combustor operating with liquid fuels

    KAUST Repository

    Vanteru, Mahendra Reddy; Katoch, Amit; Roberts, William L.; Kumar, Sudarshan

    2014-01-01

    Flameless combustion offers many advantages over conventional combustion, particularly uniform temperature distribution and lower emissions. In this paper, a new strategy is proposed and adopted to scale up a burner operating in flameless combustion mode from a heat release density of 5.4 to 21 MW/m³ (thermal input 21.5-84.7 kW) with kerosene fuel. A swirl flow based configuration was adopted for air injection, and a pressure swirl type nozzle with an SMD of 35-37 μm was used to inject the fuel. Initially, flameless combustion was stabilized for a thermal input of 21.5 kW (heat release density Q̇''' = 5.37 MW/m³). Attempts were made to scale this combustor to higher intensities, i.e. 10.2, 16.3 and 21.1 MW/m³. However, an increase in fuel flow rate led to incomplete combustion and accumulation of unburned fuel in the combustor. Two major difficulties were identified as possible reasons for unsustainable flameless combustion at the higher intensities: (i) a constant spray cone angle and SMD increases the droplet number density, and (ii) the reactants dilution ratio (R-dil) decreased with increased thermal input. To solve these issues, a modified combustor configuration, aided by numerical computations, was adopted, providing a chamfer near the outlet to increase the R-dil. Detailed experimental investigations showed that flameless combustion mode was achieved at high intensities, with an evenly distributed reaction zone and temperature in the combustor at all heat intensities. The emissions of CO, NOx and HC for all heat intensities (Φ = 1-0.6) varied between 11-41, 6-19 and 0-9 ppm, respectively. These emissions are well within the range of emissions from other flameless combustion systems reported in the literature. The acoustic emission levels were also observed to be reduced by 8-9 dB at all conditions. (C) 2014 The Combustion Institute. Published by Elsevier Inc. All rights reserved.
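
    Heat release density is simply thermal input divided by combustor volume, so the abstract's figures imply a fixed combustor volume of roughly 21.5 kW / 5.37 MW/m³ ≈ 0.004 m³ (an inference for illustration, not a value stated in the abstract):

```python
def heat_release_density(thermal_input_kw, volume_m3):
    """Return volumetric heat release density in MW/m^3."""
    return thermal_input_kw / 1000.0 / volume_m3

# Combustor volume inferred from the reported low-end operating point
volume = 21.5 / 5370.0  # kW divided by kW/m^3 gives m^3

low = heat_release_density(21.5, volume)   # matches the reported 5.37 MW/m^3
high = heat_release_density(84.7, volume)  # matches the reported ~21 MW/m^3
```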

  3. TU-EF-304-07: Monte Carlo-Based Inverse Treatment Plan Optimization for Intensity Modulated Proton Therapy

    International Nuclear Information System (INIS)

    Li, Y; Tian, Z; Jiang, S; Jia, X; Song, T; Wu, Z; Liu, Y

    2015-01-01

    Purpose: Intensity-modulated proton therapy (IMPT) is increasingly used in proton therapy. For IMPT optimization, Monte Carlo (MC) is desired for spot dose calculations because of its high accuracy, especially in cases with a high level of heterogeneity. It is also preferred in biological optimization problems due to its capability of computing quantities related to biological effects. However, MC simulation is typically too slow to be used for this purpose. Although GPU-based MC engines have become available, the achieved efficiency is still not ideal. The purpose of this work is to develop a new optimization scheme to include GPU-based MC in IMPT. Methods: A conventional approach using MC in IMPT simply calls the MC dose engine repeatedly for each spot's dose calculation. However, this is not optimal, because of unnecessary computations on spots that turn out to have very small weights after solving the optimization problem. GPU-memory writing conflicts occurring at small beam sizes also reduce computational efficiency. To solve these problems, we developed a new framework that iteratively performs MC dose calculations and plan optimizations. At each dose calculation step, the particles were sampled from different spots altogether with a Metropolis algorithm, such that the particle number is proportional to the latest optimized spot intensity. Simultaneously transporting particles from multiple spots also mitigated the memory writing conflict problem. Results: We validated the proposed MC-based optimization scheme in one prostate case. The total computation time of our method was ∼5–6 min on one NVIDIA GPU card, including both spot dose calculation and plan optimization, whereas a conventional method naively using the same GPU-based MC engine was ∼3 times slower. Conclusion: A fast GPU-based MC dose calculation method along with a novel optimization workflow has been developed. Its high efficiency makes it attractive for clinical
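
    The key sampling idea in the Methods section is that each spot's particle count should track its latest optimized intensity, so near-zero-weight spots cost nothing. The paper samples spots jointly with a Metropolis algorithm; the sketch below substitutes plain inverse-CDF weighted sampling, which yields the same proportionality, and the spot weights are hypothetical.

```python
import random

def sample_spot_counts(weights, n_particles, seed=0):
    """Distribute a particle budget across spots in proportion to the
    latest optimized spot weights (simplified stand-in for the paper's
    Metropolis sampling over all spots)."""
    rng = random.Random(seed)
    total = sum(weights)
    counts = [0] * len(weights)
    for _ in range(n_particles):
        # inverse-CDF draw over the unnormalized spot weights
        u = rng.random() * total
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if u <= acc:
                counts[i] += 1
                break
    return counts

weights = [5.0, 1.0, 0.0, 4.0]  # hypothetical optimized spot intensities
counts = sample_spot_counts(weights, 10000)
# a zero-weight spot receives no particles; the rest scale with weight
```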

  5. Pulsed plasma sources for the production of intense ion beams based on catalytic resonance ionization

    International Nuclear Information System (INIS)

    Knyazev, B.A.; Mel'nikov, P.I.; Bluhm, H.

    1994-01-01

    In this paper we describe a technique to produce planar and volumetric ion sources of nearly every element. This technique is based on a generalization of the LIBORS process (Laser Ionization Based On Resonant Saturation), which, because of its similarity to chemical catalytic reactions, has been called CATRION (CATalytic Resonance IONization). A vapor containing the desired atomic species is doped with a suitable element possessing resonance transitions that can be pumped to saturation with a laser. Seed electrons are heated by superelastic collisions with the excited atoms and by stimulated bremsstrahlung absorption. It is this heated electron component which then, by collisional processes, ionizes the desired atomic species and multiplies. 41 refs.; 4 figs.; 3 tabs

  6. Memristor-based ternary content addressable memory (mTCAM) for data-intensive computing

    International Nuclear Information System (INIS)

    Zheng, Le; Shin, Sangho; Steve Kang, Sung-Mo

    2014-01-01

    A memristor-based ternary content addressable memory (mTCAM) is presented. Each mTCAM cell, consisting of five transistors and two memristors to store and search for ternary data, offers nonvolatility and higher storage density than conventional CMOS-based TCAMs. Each memristor in the cell can be programmed individually such that high impedance is always present between searchlines to reduce static energy consumption. A unique two-step write scheme offers reliable and energy-efficient write operations. The search voltage is designed to ensure optimum sensing margins in the presence of variations in memristor devices. Simulations of the proposed mTCAM demonstrate functionality in write and search modes, as well as a search delay of 2 ns and a search energy of 0.99 fJ/bit/search for a word width of 128 bits. (paper)
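
    A ternary CAM cell stores 0, 1, or X (don't care), and a word matches when every non-X stored bit equals the corresponding search bit. A software model of that search semantics, independent of the memristor circuit details, might look like:

```python
def tcam_match(stored, key):
    """Ternary match: stored word over {'0','1','X'}; 'X' (don't care)
    matches either key bit. Returns True on a whole-word match."""
    return all(s == 'X' or s == k for s, k in zip(stored, key))

def tcam_search(table, key):
    """Return indices of all matching words. Hardware reports these in
    parallel via matchlines; software simply scans the table."""
    return [i for i, word in enumerate(table) if tcam_match(word, key)]

table = ["10X1", "1XX1", "0000"]
hits = tcam_search(table, "1011")  # the first two words match
```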

  7. Automation of the Work intensively based on Knowledge, a Challenge for the New Technologies

    Directory of Open Access Journals (Sweden)

    Vasile MAZILESCU

    2011-06-01

    Knowledge Management, or knowledge-based management (noted and used throughout this paper as KM), is defined as a collaborative practice by which organizations deliberately and intelligibly create, organize, distribute and analyze their own knowledge, in terms of resources, documents and people's skills. It is widely regarded as an internal tool for increasing the operational efficiency of any organization, and has the potential to revolutionize the intelligent interaction between humans and (intelligent) agents based on ever more advanced technology. Semantic Technologies (STs) are distributed software technologies that make meaning more explicit, principally so that it can be understood by computers. STs will dramatically impact enterprise architecture and the engineering of new system and infrastructure capabilities. They are tools that represent meanings, associations, theories, and know-how about the uses of things separately from data and knowledge, using reasoning algorithms. Time restrictions are not excessive in usual STs as distributed applications, but critical time reasoning problems may occur in cases of faulty operation and overloading. At present, the reasoning depth developed for such systems is still poor. This work presents research results on incorporating appropriate semantic foundations into future technologies that can automate knowledge-based work.

  8. Improved BER based on intensity noise alleviation using developed detection technique for incoherent SAC-OCDMA systems

    Science.gov (United States)

    Al-Khafaji, Hamza M. R.; Aljunid, S. A.; Fadhil, Hilal A.

    2012-06-01

    The major drawback of incoherent spectral-amplitude coding optical code-division multiple-access (SAC-OCDMA) systems is their inherent intensity noise, which originates from the incoherence of the broadband light sources. In this paper, we propose an improved detection technique, named modified-AND subtraction detection, for incoherent SAC-OCDMA systems. This detection technique is based on decreasing the received signal strength during the decoding process by dividing the spectrum of the utilized code sequence. The proposed technique is capable of mitigating the intensity noise effect, as well as suppressing the multiple-access interference impact. Based on the modified quadratic congruence (MQC) code, the analytical results reveal that modified-AND detection offers the best bit-error rate (BER) performance and enables the MQC code to support transmission rates up to 1.25 Gb/s, compared with conventional AND detection. Furthermore, we verified through a simulation experiment that the proposed technique enhances system performance.

  9. Binomial probability distribution model-based protein identification algorithm for tandem mass spectrometry utilizing peak intensity information.

    Science.gov (United States)

    Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu

    2013-01-04

    Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have already been proposed, e.g. Sequest, OMSSA, X!Tandem and Mascot, they are mainly based on statistical models that consider only peak matches between experimental and theoretical spectra, not peak intensity information. Moreover, different algorithms give different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and thus enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/.
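
    The general idea behind binomial peak-match scoring is to ask how unlikely the observed number of matched peaks would be by chance. A minimal sketch follows; this is not ProVerB's actual scoring function, and the per-peak match probability and counts are hypothetical.

```python
from math import comb, log10

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of at least k peak
    matches if each of n theoretical peaks matches at random with
    probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def match_score(k, n, p=0.05):
    """-log10 of the binomial tail probability: larger scores mean the
    observed matches are less plausibly random."""
    return -log10(binom_sf(k, n, p))

score_random = match_score(2, 40)   # ~chance-level matching: low score
score_real = match_score(15, 40)    # many matched peaks: high score
```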

  10. Use of case-based reasoning to enhance intensive management of patients on insulin pump therapy.

    Science.gov (United States)

    Schwartz, Frank L; Shubrook, Jay H; Marling, Cynthia R

    2008-07-01

    This study was conducted to develop case-based decision support software to improve glucose control in patients with type 1 diabetes mellitus (T1DM) on insulin pump therapy. While the benefits of good glucose control are well known, achieving and maintaining good glucose control remains a difficult task. Case-based decision support software may assist by recalling past problems in glucose control and their associated therapeutic adjustments. Twenty patients with T1DM on insulin pumps were enrolled in a 6-week study. Subjects performed self-glucose monitoring and provided daily logs via the Internet, tracking insulin dosages, work, sleep, exercise, meals, stress, illness, menstrual cycles, infusion set changes, pump problems, hypoglycemic episodes, and other events. Subjects wore a continuous glucose monitoring system at weeks 1, 3, and 6. Clinical data were interpreted by physicians, who explained the relationship between life events and observed glucose patterns as well as treatment rationales to knowledge engineers. Knowledge engineers built a prototypical system that contained cases of problems in glucose control together with their associated solutions. Twelve patients completed the study. Fifty cases of clinical problems and solutions were developed and stored in a case base. The prototypical system detected 12 distinct types of clinical problems. It displayed the stored problems that are most similar to the problems detected, and offered learned solutions as decision support to the physician. This software can screen large volumes of clinical data and glucose levels from patients with T1DM, identify clinical problems, and offer solutions. It has potential application in managing all forms of diabetes.
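
    Case-based decision support of this kind hinges on retrieving the stored problem most similar to the current one and reusing its solution. A minimal retrieval sketch, using Euclidean distance over hypothetical numeric features (the study's actual cases and similarity measure are richer):

```python
def retrieve(case_base, query, k=1):
    """Return the k stored cases most similar to the query, using
    negative Euclidean distance over numeric feature vectors as the
    similarity measure (an illustrative choice, not the study's own)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(case_base, key=lambda c: dist(c["features"], query))
    return ranked[:k]

# Hypothetical cases: features = (overnight glucose trend mg/dL, dinner
# carbs g, evening exercise min); solution = stored therapy adjustment
case_base = [
    {"features": (40, 60, 0), "solution": "raise overnight basal rate"},
    {"features": (-35, 80, 45), "solution": "reduce basal after exercise"},
    {"features": (5, 120, 0), "solution": "increase dinner bolus ratio"},
]
best = retrieve(case_base, (-30, 70, 40))[0]  # nearest stored problem
```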

  11. explICU: A web-based visualization and predictive modeling toolkit for mortality in intensive care patients.

    Science.gov (United States)

    Chen, Robert; Kumar, Vikas; Fitch, Natalie; Jagadish, Jitesh; Lifan Zhang; Dunn, William; Duen Horng Chau

    2015-01-01

    Preventing mortality in intensive care units (ICUs) has been a top priority in American hospitals. Predictive modeling has been shown to be effective in prediction of mortality based upon data from patients' past medical histories from electronic health records (EHRs). Furthermore, visualization of timeline events is imperative in the ICU setting in order to quickly identify trends in patient histories that may lead to mortality. With the increasing adoption of EHRs, a wealth of medical data is becoming increasingly available for secondary uses such as data exploration and predictive modeling. While data exploration and predictive modeling are useful for finding risk factors in ICU patients, the process is time consuming and requires a high level of computer programming ability. We propose explICU, a web service that hosts EHR data, displays timelines of patient events based upon user-specified preferences, performs predictive modeling in the back end, and displays results to the user via intuitive, interactive visualizations.
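
    The back-end predictive modeling the abstract describes typically reduces to scoring a patient's EHR-derived features with a fitted model. A minimal sketch of one common choice, a logistic model (the features, weights, and bias here are hypothetical, purely for illustration; explICU's actual models are not specified in the abstract):

```python
from math import exp

def mortality_risk(features, weights, bias):
    """Logistic model of the kind such a toolkit could fit server-side:
    risk = sigmoid(w . x + b), mapping a linear score to a probability."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + exp(-z))

# hypothetical standardized features: (age, lactate, min systolic BP)
risk = mortality_risk((1.2, 2.0, -1.5), weights=(0.4, 0.9, -0.7), bias=-2.0)
```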

  12. Impact of high-intensity pulsed electric fields on bioactive compounds in Mediterranean plant-based foods.

    Science.gov (United States)

    Elez-Martínez, Pedro; Soliva-Fortuny, Robert; Martín-Belloso, Olga

    2009-05-01

    Novel non-thermal processing technologies such as high-intensity pulsed electric field (HIPEF) treatments may be applied to pasteurize plant-based liquid foods as an alternative to conventional heat treatments. In recent years, there has been increasing interest in HIPEF as a way of preserving and extending the shelf-life of liquid products without the quality damage caused by heat treatments. However, less attention has been paid to the effects of HIPEF on minor constituents of these products, namely bioactive compounds. This review is a state-of-the-art update on the effects of HIPEF treatments on health-related compounds in plant-based foods of the Mediterranean diet, such as fruit juices and Spanish gazpacho. The relevance of HIPEF processing parameters to retaining plant-based bioactive compounds is discussed.

  13. The organic contamination level based on the total soil mass is not a proper index of the soil contamination intensity

    Science.gov (United States)

    Hung, H.-W.; Daniel, Sheng G.; Lin, T.-F.; Su, Y.; Chiou, C.T.

    2009-01-01

    Concentrations of organic contaminants in common productive soils based on the total soil mass give a misleading account of actual contamination effects. This is attributed to the fact that productive soils are essentially water-saturated, with the result that the soil uptake of organic compounds occurs principally by partition into the soil organic matter (SOM). This report illustrates that the soil contamination intensity of a compound is governed by the concentration in the SOM (Com) rather than by the concentration in the whole soil (Cs). Supporting data consist of the measured levels and toxicities of many pesticides in soils of widely differing SOM contents and the related levels in in-situ crops that defy explanation by the Cs values. This SOM-based index is needed for evaluating the contamination effects of food crops grown in different soils and for establishing a dependable priority ranking for the intended remediation of numerous contaminated sites.
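
    The distinction between Cs and Com can be made concrete: if, as the report argues, essentially all of the sorbed compound resides in the organic-matter fraction, the SOM-based concentration is the whole-soil level divided by the SOM mass fraction. A minimal sketch under that assumption (the concentrations below are hypothetical):

```python
def som_based_concentration(cs_mg_per_kg, f_om):
    """Convert a whole-soil concentration (Cs, mg contaminant per kg
    soil) to a concentration in soil organic matter (Com, mg per kg
    SOM), assuming the sorbed compound partitions entirely into the
    SOM fraction f_om (kg SOM per kg soil)."""
    return cs_mg_per_kg / f_om

# The same whole-soil level is far more intense in an SOM-poor soil:
com_rich = som_based_concentration(2.0, 0.05)  # 5% SOM soil
com_poor = som_based_concentration(2.0, 0.01)  # 1% SOM soil
```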

  14. A directory for neonatal intensive care: potential for facilitating network-based research in neonatology.

    Science.gov (United States)

    Ariagno, Ronald L; Lee, Henry C; Stevenson, David K; Benjamin, Daniel K; Smith, P Brian; Escobedo, Marilyn B; Bhatt, Dilip R

    2018-03-15

    Directories of contact information have evolved over time from thick paperback tomes, such as the "Yellow Pages", to electronic forms that are searchable and have other functionalities. In our clinical specialty, the development of a professional directory helped to promote collaboration in clinical care, education, and quality improvement. However, there are opportunities to increase the utility of the directory by taking advantage of modern web-based tools and by expanding its use to fill a gap in the area of collaborative research.

  15. Comparison of electroluminescence intensity and photocurrent of polymer based solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Hoyer, Ulrich; Swonke, Thomas; Auer, Richard [Bayerisches Zentrum fuer Angewandte Energieforschung e.V., Erlangen (Germany); Pinna, Luigi; Brabec, Christoph J. [Bayerisches Zentrum fuer Angewandte Energieforschung e.V., Erlangen (Germany); I-MEET, University Erlangen (Germany); Stubhan, Tobias; Li, Ning [I-MEET, University Erlangen (Germany)

    2011-11-15

    The reciprocity theorem for solar cells predicts a linear relation between electroluminescence emission and photovoltaic quantum efficiency, and an exponential dependence of the electroluminescence signal on the applied voltage. Both dependencies are experimentally verified for polymer based solar cells in this paper. Furthermore, it is shown that electroluminescence imaging of organic solar cells has the potential to visualize the photocurrent distribution significantly faster than standard laser beam induced current mapping (LBIC) techniques. (Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
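
    Both verified dependencies fall out of the Rau-type reciprocity relation, φ_EL = EQE · φ_bb · (exp(V/V_t) − 1): the emission is linear in EQE and exponential in voltage. A numerical sketch under that assumed form (the EQE and voltage values are illustrative, not the paper's data):

```python
from math import exp

VT_300K = 0.02585  # thermal voltage kT/q at ~300 K, in volts

def el_signal(eqe, phi_bb, voltage):
    """Reciprocity-relation EL signal: proportional to the photovoltaic
    EQE and exponential in the applied voltage."""
    return eqe * phi_bb * (exp(voltage / VT_300K) - 1.0)

base = el_signal(0.3, 1.0, 0.60)
linear_ratio = el_signal(0.6, 1.0, 0.60) / base  # doubling EQE -> 2x EL
exp_ratio = el_signal(0.3, 1.0, 0.66) / base     # +60 mV -> roughly 10x EL
```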

  16. Intense high-frequency gyrotron-based microwave beams for material processing

    Energy Technology Data Exchange (ETDEWEB)

    Hardek, T.W.; Cooke, W.D.; Katz, J.D.; Perry, W.L.; Rees, D.E.

    1997-03-01

    Microwave processing of materials has traditionally utilized frequencies in the 0.915 and 2.45 GHz regions. Microwave power sources are readily available at these frequencies but the relatively long wavelengths can present challenges in uniformly heating materials. An additional difficulty is the poor coupling of ceramic based materials to the microwave energy. Los Alamos National Laboratory scientists, working in conjunction with the National Center for Manufacturing Sciences (NCMS), have assembled a high-frequency demonstration processing facility utilizing gyrotron based RF sources. The facility is primarily intended to demonstrate the unique features available at frequencies as high as 84 GHz. The authors can readily provide quasi-optical, 37 GHz beams at continuous wave (CW) power levels in the 10 kW range. They have also provided beams at 84 GHz at 10 kW CW power levels. They are presently preparing a facility to demonstrate the sintering of ceramics at 30 GHz. This paper presents an overview of the present demonstration processing facility and describes some of the features they have available now and will have available in the near future.

  17. Microprocessor based beam intensity and efficiency display system for the Fermilab accelerator

    International Nuclear Information System (INIS)

    Biwer, R.

    1979-01-01

    The Main Accelerator display system for the Fermilab accelerator gathers charge data and displays it, including the processed transfer efficiencies of each of the accelerators. To accomplish this, strategically located charge converters monitor the circulating internal beam of each of the Fermilab accelerators. Their outputs are processed via an asynchronously triggered, multiplexed analog-to-digital converter. The data is converted into a digital byte containing an address code and data, and then stored in two 16-bit memories. One memory outputs the interleaved data as a data pulse train, while the other interfaces directly to a local host computer for further analysis. The microprocessor based display unit synchronizes displayed data during normal operation as well as in special storage modes. The display unit outputs data to the front panel in the form of a numeric value and also makes digital-to-analog conversions of displayed data for external peripheral devices. 5 refs
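
    The transfer efficiency the system displays is the fraction of circulating charge that survives from one machine into the next, derived from the charge-converter readings. A minimal sketch with hypothetical readings:

```python
def transfer_efficiency(upstream_charge, downstream_charge):
    """Transfer efficiency between two accelerator stages, as the
    percentage of upstream circulating charge delivered downstream."""
    if upstream_charge <= 0:
        raise ValueError("upstream charge must be positive")
    return 100.0 * downstream_charge / upstream_charge

# hypothetical charge-converter readings (arbitrary charge units)
booster, main_ring = 1.00e13, 9.2e12
eff = transfer_efficiency(booster, main_ring)  # percent transferred
```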

  18. Accelerator-based intense neutron source for materials R and D

    International Nuclear Information System (INIS)

    Jameson, R.A.

    1990-01-01

    Accelerator-based neutron sources for R and D of materials in nuclear energy systems, including fusion reactors, can provide sufficient neutron flux, flux-volume, fluence and other attractive features for many aspects of materials research. The neutron spectrum produced from the D-Li reaction has been judged useful for many basic materials research problems, and satisfactory as an approximation of the fusion process. A most interesting aspect for materials researchers is the increased flexibility and opportunities for experimental configurations that a modern accelerator-based source could add to the set of available tools. First, of course, is a high flux of neutrons. Four other tools are described: 1. The output energy of the deuteron beam can be varied to provide energy selectivity for the materials researcher. The energy would typically be varied in discrete steps; the number of steps can be adjusted depending on actual needs and costs. 2. The materials sample target chamber could be irradiated by more than one beam, from different angles. This would provide many possibilities for tailoring the flux distribution. 3. Advanced techniques in magnetic optics systems allow the density distribution of the deuteron beam at the target to be tailored. Controlled distributions from Gaussian to uniform to hollow can be provided. This affords further control of the distribution in the target chamber. 4. The accelerator and associated beam transport elements are all essentially electronic systems and, therefore, can be controlled and modulated on a time cycle basis. Therefore, all of the above tools could be varied in possibly complex patterns under computer control; this may open further experimental approaches for studying various rate-dependent effects. These considerations will be described in the context of the Energy Selective Neutron Irradiation Test (ESNIT) facility which is conceived at JAERI. (author)

  19. How can China reach its CO2 intensity reduction targets by 2020? A regional allocation based on equity and development

    International Nuclear Information System (INIS)

    Yi Wenjing; Zou Lele; Guo Jie; Wang Kai; Wei Yiming

    2011-01-01

    In late 2009, the Chinese government committed to cut its carbon dioxide emissions per unit of gross domestic product (GDP) by 40% to 45% of 2005 levels by 2020. This has raised the issue of how to allocate the CO2 reduction target regionally to meet the national reduction target. To meet this objective, the following aspects may be taken into consideration: equity principles, 'common but differentiated responsibilities'; intensity reduction target fulfillment; and economic differences and reduction potential among provinces. This paper selects per capita GDP, accumulated fossil fuel related CO2 emissions, and energy consumption per unit of industrial added value as indicators for emission reduction capacity, responsibility and potential, respectively. Based on these three indicators, a comprehensive index is developed and an intensity allocation model constructed. As decision makers may have different preferences when allocating the reduction burden, we assign different weights to the indicators and analyze the results using cluster analysis. The following aspects may also be considered, together with the national regional development strategy, to determine how to share the burden: the reduction potential of various regions; the implementation potential of the plans; and promotion of a highly efficient low carbon economic development model. - Research highlights: We compiled a comprehensive index using per capita GDP, accumulated fossil fuel related CO2 emissions and energy consumption per unit of industrial added value as indicators for emission reduction capacity, responsibility and potential, respectively. The national CO2 intensity reduction target is allocated according to the different index values of the provinces. Equity principles were taken into account when allocating the target.
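
    The comprehensive index combines the three indicators under decision-maker-chosen weights. A minimal sketch of that weighted combination, assuming each indicator has already been normalized to [0, 1]; the equal weights and province values below are illustrative, not the paper's:

```python
def composite_index(capacity, responsibility, potential,
                    weights=(1 / 3, 1 / 3, 1 / 3)):
    """Comprehensive index over the paper's three indicators: emission
    reduction capacity (per capita GDP), responsibility (accumulated
    CO2 emissions), and potential (energy use per unit industrial
    added value), each normalized to [0, 1]. The weights encode the
    decision maker's preference."""
    w1, w2, w3 = weights
    return w1 * capacity + w2 * responsibility + w3 * potential

# hypothetical normalized indicator values for two provinces
index_a = composite_index(0.9, 0.8, 0.3)  # wealthy, high-emission province
index_b = composite_index(0.2, 0.3, 0.9)  # poorer, energy-intensive province
# a higher index implies a larger share of the national intensity cut
```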

  20. Neonatal intensive care nursing curriculum challenges based on context, input, process, and product evaluation model: A qualitative study

    Directory of Open Access Journals (Sweden)

    Mansoureh Ashghali-Farahani

    2018-01-01

    Background: Weaknesses in curriculum development in nursing education result in a lack of professional skills in graduates. This study was conducted with master's students in nursing to evaluate challenges of the neonatal intensive care nursing curriculum based on the context, input, process, and product (CIPP) evaluation model. Materials and Methods: This study was conducted with a qualitative approach, following the CIPP evaluation model, from May 2014 to April 2015. The research community included neonatal intensive care nursing master's students, graduates, faculty members, neonatologists, nurses working in the neonatal intensive care unit (NICU), and mothers of infants hospitalized in such wards. Purposeful sampling was applied. Results: The data analysis showed two main categories, “inappropriate infrastructure” and “unknown duties,” which influenced the context formation of the NICU master's curriculum. The input was formed by five categories: “biomedical approach,” “incomprehensive curriculum,” “lack of professional NICU nursing mentors,” “inappropriate admission process of NICU students,” and “lack of NICU skill labs.” Three categories were extracted in the process: “more emphasis on theoretical education,” “the overlap of credits with each other and the inconsistency among the mentors,” and “ineffective assessment.” Finally, five categories were extracted in the product: “preferring routine work instead of professional work,” “tendency to leave the job,” “clinical incompetency of graduates,” “the conflict between graduates' and nursing staff's expectations,” and “dissatisfaction of graduates.” Conclusions: Some changes are needed in the NICU master's curriculum, informed by nursing experts' comments and their evaluation of the program's consequences.

  1. Increasing milk solids production across lactation through genetic selection and intensive pasture-based feed system.

    Science.gov (United States)

    Coleman, J; Pierce, K M; Berry, D P; Brennan, A; Horan, B

    2010-09-01

    The objective of the study was to quantify the effect of genetic improvement using the Irish total merit index, the Economic Breeding Index (EBI), on overall performance and lactation profiles for milk, milk solids, body weight (BW), and body condition score (BCS) within 2 pasture-based systems of milk production likely to be used in the future, following abolition of the European Union's milk quota system. Three genotypes of Holstein-Friesian dairy cattle were established from within the Moorepark dairy research herd: LowNA, indicative of animals with North American origin and average or lower genetic merit at the time of the study; HighNA, North American Holstein-Friesians of high genetic merit; and HighNZ, New Zealand Holstein-Friesians of high genetic merit. Animals from within each genotype were randomly allocated to 1 of 2 possible pasture-based feeding systems (FS): 1) The Moorepark pasture (MP) system (2.64 cows/ha and 344 kg of concentrate supplement per cow per lactation) and 2) a high output per hectare (HC) system (2.85 cows/ha and 1,056 kg of concentrate supplement per cow per lactation). Pasture was allocated to achieve similar postgrazing residual sward heights for both treatments. A total of 126, 128, and 140 spring-calving dairy cows were used during the years 2006, 2007, and 2008, respectively. Each group had an individual farmlet of 17 paddocks and all groups were managed similarly throughout the study. The effects of genotype, FS, and the interaction between genotype and FS on milk production, BW, and BCS across lactation were studied using mixed models with factorial arrangements of genotype and FS accounting for the repeated cow records across years. No significant genotype by FS interaction was observed for any of the variables measured. Results show that milk solids production of the national average dairy cow can be increased across lactation through increased EBI. 
High EBI genotypes (HighNA and HighNZ) produced more milk solids per cow and

  2. On the treatment of primary extinction in diffraction theories based on intensity coupling

    International Nuclear Information System (INIS)

    Schneider, J.R.; Goncalves, O.D.; Graf, H.A.

    1988-01-01

    Czochralski-grown silicon crystals of approximately 10 cm diameter and 1 cm thickness have been annealed at 1470 K in order to create a homogeneous defect structure, which is a basic condition for any statistical treatment of extinction. Absolute values of the integrated reflecting power of the 220, 440 and 660 reflections have been measured with 0.0392 Å γ-radiation in symmetrical Laue geometry for sample thicknesses between 1 and 3 cm. The amount of extinction in the experimental data varies between γ≅0.95 and γ≅0.05. Darwin's extinction theory has been used to describe the thickness dependence of the data sets. Despite some shortcomings of the model, it is shown that the assumption of a physically unrealistic Lorentzian mosaic distribution models the effect of primary extinction in an extinction theory based on the energy-transfer model. The sharp central part of the Lorentzian distribution produces a reduction of the effective sample thickness due to primary extinction, whereas the wings of the distribution dominate the correction for secondary extinction in the remaining part of the sample. A more flexible mosaic distribution function is proposed, which should be useful in cases of severe extinction. (orig.)

  3. Intensity-based hierarchical clustering in CT-scans: application to interactive segmentation in cardiology

    Science.gov (United States)

    Hadida, Jonathan; Desrosiers, Christian; Duong, Luc

    2011-03-01

    The segmentation of anatomical structures in Computed Tomography Angiography (CTA) is a pre-operative task useful in image-guided surgery. Even though very robust and precise methods have been developed to help achieve a reliable segmentation (level sets, active contours, etc.), it remains very time-consuming both in terms of manual interactions and in terms of computation time. The goal of this study is to present a fast method to find coarse anatomical structures in CTA with few parameters, based on hierarchical clustering. The algorithm is organized as follows: first, a fast non-parametric histogram clustering method is proposed to compute a piecewise constant mask. A second step then indexes all the space-connected regions in the piecewise constant mask. Finally, a hierarchical clustering is achieved to build a graph representing the connections between the various regions in the piecewise constant mask. This step builds up structural knowledge about the image. Several interactive features for segmentation are presented, for instance association or disassociation of anatomical structures. A comparison with the Mean-Shift algorithm is presented.
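
The pipeline described in this abstract (histogram clustering, region indexing, region-connection graph) can be sketched as follows. This is our own minimal reconstruction, not the authors' code: quantile binning stands in for their non-parametric histogram clustering, and 4-connectivity is assumed.

```python
import numpy as np
from scipy import ndimage

def piecewise_constant_mask(image, n_levels=4):
    # Quantile binning as a simple stand-in for the paper's
    # non-parametric histogram clustering.
    edges = np.quantile(image, np.linspace(0, 1, n_levels + 1))
    return np.digitize(image, edges[1:-1])

def index_regions(mask):
    # Give every space-connected region of every intensity class
    # a unique positive label.
    labels = np.zeros(mask.shape, dtype=int)
    next_label = 0
    for level in np.unique(mask):
        lab, n = ndimage.label(mask == level)
        labels[lab > 0] = lab[lab > 0] + next_label
        next_label += n
    return labels

def adjacency_graph(labels):
    # Edges between 4-connected neighbouring regions: this is the
    # structural knowledge used for interactive merge/split.
    edges = set()
    for axis in (0, 1):
        a = labels.take(range(labels.shape[axis] - 1), axis=axis)
        b = labels.take(range(1, labels.shape[axis]), axis=axis)
        touching = a != b
        edges.update(zip(a[touching], b[touching]))
    return edges

img = np.random.default_rng(0).random((64, 64))
mask = piecewise_constant_mask(img)
labels = index_regions(mask)
graph = adjacency_graph(labels)
```

The resulting edge set is the kind of structural knowledge a user could exploit to associate or disassociate anatomical structures interactively.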

  4. Sound recovery via intensity variations of speckle pattern pixels selected with variance-based method

    Science.gov (United States)

    Zhu, Ge; Yao, Xu-Ri; Qiu, Peng; Mahmood, Waqas; Yu, Wen-Kai; Sun, Zhi-Bin; Zhai, Guang-Jie; Zhao, Qing

    2018-02-01

    In general, sound waves cause vibration of the objects encountered in their traveling path. If a laser beam illuminates the rough surface of such an object, it will be scattered into a speckle pattern that vibrates with these sound waves. Here, an efficient variance-based method is proposed to recover the sound information from speckle patterns captured by a high-speed camera. This method allows us to select the proper pixels, namely those with large variances of the gray-value variations over time, from a small region of the speckle patterns. The gray-value variations of these pixels are summed together according to a simple model to recover the sound with a high signal-to-noise ratio. Meanwhile, our method significantly simplifies the computation compared with the traditional digital-image-correlation technique. The effectiveness of the proposed method has been verified on a variety of objects. The experimental results illustrate that the proposed method is robust to the quality of the speckle patterns and requires more than an order of magnitude less time to process the same number of speckle patterns. In our experiment, a sound signal 1.876 s in duration was recovered from various objects with a time consumption of only 5.38 s.
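
A minimal sketch of the variance-based selection step as we read it from this abstract; the top-fraction cutoff and the sign-alignment trick below are our assumptions, not the authors' model, and the demo data are synthetic.

```python
import numpy as np

def recover_sound(frames, top_fraction=0.05):
    """frames: (T, H, W) stack of speckle images from a high-speed camera."""
    stack = frames.reshape(frames.shape[0], -1).astype(float)
    variance = stack.var(axis=0)
    n_sel = max(1, int(top_fraction * variance.size))
    selected = np.argsort(variance)[-n_sel:]      # largest-variance pixels
    traces = stack[:, selected]
    traces -= traces.mean(axis=0)                 # gray-value variation over time
    # Align signs so anti-phase pixels do not cancel when summed (assumption).
    ref = traces[:, -1]
    signs = np.sign(np.sum(traces * ref[:, None], axis=0))
    return (traces * signs).sum(axis=1)

# Synthetic demo: a 1 kHz tone modulating a corner of a noisy speckle stack.
t = np.arange(2000) / 20000.0                     # 20 kHz frame rate
tone = np.sin(2 * np.pi * 1000 * t)
rng = np.random.default_rng(0)
frames = rng.normal(size=(2000, 16, 16)) * 0.1
frames[:, :4, :4] += tone[:, None, None]          # "vibrating" pixels
signal = recover_sound(frames)
corr = np.corrcoef(signal, tone)[0, 1]
```

Summing a handful of high-variance pixels is what keeps the cost far below full digital image correlation, which must register every frame against a reference.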

  5. A High Intensity Interval Training (HIIT)-Based Running Plan Improves Athletic Performance by Improving Muscle Power.

    Science.gov (United States)

    García-Pinillos, Felipe; Cámara-Pérez, Jose C; Soto-Hermoso, Víctor M; Latorre-Román, Pedro Á

    2017-01-01

    García-Pinillos, F, Cámara-Pérez, JC, Soto-Hermoso, VM, and Latorre-Román, PÁ. A High Intensity Interval Training (HIIT)-based running plan improves athletic performance by improving muscle power. J Strength Cond Res 31(1): 146-153, 2017-This study aimed to examine the effect of a 5-week high-intensity intermittent training (HIIT)-based running plan on athletic performance and to compare the physiological and neuromuscular responses during a sprint-distance triathlon before and after the HIIT period. Thirteen triathletes were matched into 2 groups: the experimental group (EG) and the control group (CG). The CG was asked to maintain their normal training routines, whereas the EG maintained only their swimming and cycling routines and modified their running routine. Participants completed a sprint-distance triathlon before (pretest) and after (posttest) the intervention period. In both pretest and posttest, the participants performed 4 jumping tests: before the race (baseline), postswim, postcycling, and postrun. Additionally, heart rate was monitored (HRmean), whereas rate of perceived exertion (RPE) and blood lactate accumulation (BLa) were registered after the race. No significant differences (p ≥ 0.05) between groups were found before HIIT intervention (at pretest). Significant group-by-training interactions were found in vertical jumping ability and athletic performance: the EG improved jumping performance (∼6-9%, p ≤ 0.05, effect size (ES) > 0.7), swimming performance (p = 0.013, ES = 0.438), and running time (p = 0.001, ES = 0.667) during the competition, whereas the CG remained unchanged (p ≥ 0.05). In conclusion, this HIIT-based running plan combined with the high training volumes of these triathletes in swimming and cycling improved athletic performance during a sprint-distance triathlon. This improvement may be due to improved neuromuscular characteristics that were transferred into improved muscle power and work economy.
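
The effect sizes quoted in this record (and in several neighboring ones) follow the Cohen's d convention: a mean difference scaled by a pooled standard deviation. A small sketch of that arithmetic, using made-up pre/post jump heights rather than the study's data:

```python
import numpy as np

def cohens_d(x, y):
    # Cohen's d with the pooled sample standard deviation (Bessel-corrected).
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) +
                  (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(y) - np.mean(x)) / np.sqrt(pooled_var)

# Hypothetical countermovement-jump heights (cm), pre vs. post intervention.
pre = np.array([30.1, 32.4, 28.9, 31.0, 29.5, 33.2])
post = np.array([32.8, 34.9, 31.2, 33.5, 31.9, 35.6])
d = cohens_d(pre, post)
```

By the usual rule of thumb, d around 0.2 is small, 0.5 medium, and 0.8 or more large, which is why the ES > 0.7 values above are read as substantial improvements.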

  6. [Cost of intensive care in a German hospital: cost-unit accounting based on the InEK matrix].

    Science.gov (United States)

    Martin, J; Neurohr, C; Bauer, M; Weiss, M; Schleppers, A

    2008-05-01

    The aim of this study was to determine the actual cost per intensive care unit (ICU) day in Germany based on routine data from an electronic patient data management system as well as analysis of cost-driving factors. A differentiation between days with and without mechanical ventilation was performed. On the ICU of a German focused-care hospital (896 beds, 12 anesthesiology ICU beds), cost per treatment day was calculated with or without mechanical ventilation from the perspective of the hospital. Costs were derived retrospectively with respect to the period between January and October 2006 by cost-unit accounting based on routine data collected from the ICU patients. Patients with a length of stay of at least 2 days on the ICU were included. Demographic, clinical and economical data were analyzed for patient characterization. Data of 407 patients (217 male and 190 female) were included in the analysis, of which 159 patients (100 male, 59 female) were completely or partially mechanically ventilated. The mean simplified acute physiology (SAPS) II score at the onset of ICU stay was 28.2. Average cost per ICU day was 1,265 EUR and costs for ICU days with and without mechanical ventilation amounted to 1,426 EUR and 1,145 EUR, respectively. Personnel costs (50%) showed the largest cost share followed by drugs plus medicinal products (18%) and infrastructure (16%). For the first time, a cost analysis of intensive care in Germany was performed with routine data based on the matrix of the institute for reimbursement in hospitals (InEK). The results revealed a higher resource use on the ICU than previously expected. The large share of personnel costs on the ICU was evident but is comparable to other medical departments in the hospital. The need for mechanical ventilation increases the daily costs of resources by approximately 25%.

  7. Application of the measurement-based Monte Carlo method in nasopharyngeal cancer patients for intensity modulated radiation therapy

    International Nuclear Information System (INIS)

    Yeh, C.Y.; Lee, C.C.; Chao, T.C.; Lin, M.H.; Lai, P.A.; Liu, F.H.; Tung, C.J.

    2014-01-01

    This study aims to utilize a measurement-based Monte Carlo (MBMC) method to evaluate the accuracy of dose distributions calculated using the Eclipse radiotherapy treatment planning system (TPS) based on the anisotropic analytical algorithm. Dose distributions were calculated for the nasopharyngeal carcinoma (NPC) patients treated with the intensity modulated radiotherapy (IMRT). Ten NPC IMRT plans were evaluated by comparing their dose distributions with those obtained from the in-house MBMC programs for the same CT images and beam geometry. To reconstruct the fluence distribution of the IMRT field, an efficiency map was obtained by dividing the energy fluence of the intensity modulated field by that of the open field, both acquired from an aS1000 electronic portal imaging device. The integrated image of the non-gated mode was used to acquire the full dose distribution delivered during the IMRT treatment. This efficiency map redistributed the particle weightings of the open field phase-space file for IMRT applications. Dose differences were observed in the tumor and air cavity boundary. The mean difference between MBMC and TPS in terms of the planning target volume coverage was 0.6% (range: 0.0–2.3%). The mean difference for the conformity index was 0.01 (range: 0.0–0.01). In conclusion, the MBMC method serves as an independent IMRT dose verification tool in a clinical setting. - Highlights: ► The patient-based Monte Carlo method serves as a reference standard to verify IMRT doses. ► 3D Dose distributions for NPC patients have been verified by the Monte Carlo method. ► Doses predicted by the Monte Carlo method matched closely with those by the TPS. ► The Monte Carlo method predicted a higher mean dose to the middle ears than the TPS. ► Critical organ doses should be confirmed to avoid overdose to normal organs

  8. Framework for Explaining the Formation of Knowledge Intensive Entrepreneurial Born Global Firm: Entrepreneurial, Strategic and Network Based Constituents

    Directory of Open Access Journals (Sweden)

    Vytaute Dlugoborskyte

    2017-01-01

    Full Text Available The nature of the knowledge based entrepreneurship relates to its essential reliance on research and development, deployment and maximization of research and development returns via technology development, and its commercialization via venturing. The paper aims to provide the empirically grounded framework for the analysis of the key determinants leading to the formation of R&D intensive entrepreneurial born global firm with a special focus on entrepreneurial firm and network theories. The unit of analysis chosen is the firm, while the focus is set on the firm behavior and strategic choices rather the business conditions per se. The paper aims to propose the definition of a born global firm as a specific form of entrepreneurial firm that forms while combining entrepreneurial, strategy and network constituents in a specific globally oriented constitution. Method of analysis applied is a multiple case study that was applied in order to build evidence on the interplay of strategy, networks and entrepreneurial constituents in the formation of knowledge intensive entrepreneurial born global firm. The small catching up country perspective adds on dynamics of the constituents as the framework and competitive conditions rapidly change in an uncertain direction.

  9. Preliminary considerations of an intense slow positron facility based on a 78Kr loop in the high flux isotopes reactor

    International Nuclear Information System (INIS)

    Hulett, L.D. Jr.; Donohue, D.L.; Peretz, F.J.; Montgomery, B.H.; Hayter, J.B.

    1990-01-01

    Suggestions have been made to the National Steering Committee for the Advanced Neutron Source (ANS) by Mills that provisions be made to install a high-intensity slow positron facility, based on a 78Kr loop, that would be available to the general community of scientists interested in this field. The flux of thermal neutrons calculated for the ANS is 10^15 sec^-1 m^-2, which Mills has estimated will produce a 5 mm beam of slow positrons having a current of about 1 × 10^12 sec^-1. The intensity of such a beam will be at least 3 orders of magnitude greater than those presently available. The construction of the ANS is not anticipated to be complete until the year 2000. In order to properly plan the design of the ANS, strong consideration is being given to a proof-of-principle experiment, using the presently available High Flux Isotope Reactor (HFIR), to test the 78Kr loop technique. The positron current from the HFIR facility is expected to be about 1 × 10^10 sec^-1, which is 2 orders of magnitude greater than any other available. If the experiment succeeds, a very valuable facility will be established, and important information will be generated on how the ANS should be designed. 3 refs., 1 fig

  10. Effects of volume-based overload plyometric training on maximal-intensity exercise adaptations in young basketball players.

    Science.gov (United States)

    Asadi, Abbas; Ramirez-Campillo, Rodrigo; Meylan, Cesar; Nakamura, Fabio Y; Cañas-Jamett, Rodrigo; Izquierdo, Mikel

    2017-12-01

    The aim of the present study was to compare maximal-intensity exercise adaptations in young basketball players (who were strong individuals at baseline) participating in regular basketball training versus regular plus a volume-based plyometric training program in the pre-season period. Young basketball players were recruited and assigned either to a plyometric with regular basketball training group (experimental group [EG]; N.=8), or a basketball training only group (control group [CG]; N.=8). The athletes in EG performed periodized (i.e., from 117 to 183 jumps per session) plyometric training for eight weeks. Before and after the intervention, players were assessed in vertical and broad jump, change of direction, maximal strength and a 60-meter sprint test. No significant improvements were found in the CG, while the EG improved vertical jump (effect size [ES] 2.8), broad jump (ES=2.4), agility T test (ES=2.2), Illinois agility test (ES=1.4), maximal strength (ES=1.8), and 60-m sprint (ES=1.6) (P < 0.05). In conclusion, plyometric training in addition to regular basketball practice can lead to meaningful improvements in maximal-intensity exercise adaptations among young basketball players during the pre-season.

  11. A simple and sensitive method for L-cysteine detection based on the fluorescence intensity increment of quantum dots

    International Nuclear Information System (INIS)

    Huang Shan; Xiao Qi; Li Ran; Guan Hongliang; Liu Jing; Liu Xiaorong; He Zhike; Liu Yi

    2009-01-01

    In this contribution, a simple and sensitive method for L-cysteine detection was established based on the increment of the fluorescence intensity of mercaptoacetic acid-capped CdSe/ZnS quantum dots (QDs) in aqueous solution. Meanwhile, the fluorescence characteristics and the optimal conditions were investigated in detail. Under the optimized conditions, the linear range of QDs fluorescence intensity versus the concentration of L-cysteine was 10-800 nmol L-1, with a correlation coefficient (R) of 0.9969 and a limit of detection (3σ blank) of 3.8 nmol L-1. The relative standard deviation (R.S.D.) for 0.5 μmol L-1 L-cysteine was 1.1% (n = 5). There was no interference from coexisting foreign substances, including common ions, carbohydrates, nucleotide acids and 19 other amino acids. The proposed method possesses the advantages of simplicity, rapidity and sensitivity. Synthetic amino acid samples, a medicine sample and human urine samples were analyzed by the method, and the results were satisfactory.
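
The calibration arithmetic reported above (linear fit, correlation coefficient, 3σ-blank detection limit) can be reproduced numerically. The data below are synthetic stand-ins for illustration, not the paper's measurements, and the assumed blank noise level is made up.

```python
import numpy as np

# Hypothetical calibration points: concentration (nmol/L) vs. fluorescence increment.
conc = np.array([10, 50, 100, 200, 400, 600, 800], dtype=float)
intensity = 0.02 * conc + 5.0 + np.random.default_rng(1).normal(0, 0.1, conc.size)

# Linear calibration: intensity = slope * concentration + intercept.
slope, intercept = np.polyfit(conc, intensity, 1)
r = np.corrcoef(conc, intensity)[0, 1]            # correlation coefficient R

# Detection limit as 3x the blank standard deviation over the slope.
sigma_blank = 0.1                                 # std of repeated blanks (assumed)
lod = 3 * sigma_blank / slope                     # in nmol/L
```

With these made-up numbers the fit slope is near 0.02 intensity units per nmol/L, so the 3σ rule gives a detection limit of roughly 15 nmol/L, the same order as the 3.8 nmol/L the authors report for their real calibration.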

  12. An Intensive, Simulation-Based Communication Course for Pediatric Critical Care Medicine Fellows.

    Science.gov (United States)

    Johnson, Erin M; Hamilton, Melinda F; Watson, R Scott; Claxton, Rene; Barnett, Michael; Thompson, Ann E; Arnold, Robert

    2017-08-01

    Effective communication among providers, families, and patients is essential in critical care but is often inadequate in the PICU. To address the lack of communication education pediatric critical care medicine fellows receive, the Children's Hospital of Pittsburgh PICU developed a simulation-based communication course, the Pediatric Critical Care Communication course, hypothesizing that pediatric critical care medicine trainees have limited prior training in communication and will have increased confidence in their communication skills after participating in the course. Pediatric Critical Care Communication is a 3-day course taken once during fellowship, featuring didactic sessions and interactive simulation scenarios with actors portraying family members; it is held in off-site conference space as part of a pediatric critical care medicine educational curriculum. Prior to and after the course, fellows complete an anonymous survey asking about 1) prior instruction in communication, 2) preparedness for difficult conversations, 3) attitudes about end-of-life care, and 4) course satisfaction. We compared pre- and postcourse surveys using a paired Student t test. Most of the 38 fellows who participated over 4 years had no prior communication training in conducting a care conference (70%), providing bad news (57%), or discussing end-of-life options (75%). Across all four iterations of the course, fellows after the course reported increased confidence across many topics of communication, including giving bad news, conducting a family conference, eliciting both a family's emotional reaction to their child's illness and their concerns at the end of a child's life, discussing a child's code status, and discussing religious issues.
Specifically, fellows in 2014 reported significant increases in self-perceived preparedness to provide empathic communication to families regarding many aspects of discussing critical care, end

  13. Stochastic conditional intensity processes

    DEFF Research Database (Denmark)

    Bauwens, Luc; Hautsch, Nikolaus

    2006-01-01

    In this article, we introduce the so-called stochastic conditional intensity (SCI) model by extending Russell's (1999) autoregressive conditional intensity (ACI) model by a latent common dynamic factor that jointly drives the individual intensity components. We show by simulations that the proposed model allows for a wide range of (cross-)autocorrelation structures in multivariate point processes. The model is estimated by simulated maximum likelihood (SML) using the efficient importance sampling (EIS) technique. By modeling price intensities based on NYSE trading, we provide significant evidence for a joint latent factor and show that its inclusion allows for an improved and more parsimonious specification of the multivariate intensity process.

  14. Roll-to-roll-compatible, flexible, transparent electrodes based on self-nanoembedded Cu nanowires using intense pulsed light irradiation

    Science.gov (United States)

    Zhong, Zhaoyang; Woo, Kyoohee; Kim, Inhyuk; Hwang, Hyewon; Kwon, Sin; Choi, Young-Man; Lee, Youngu; Lee, Taik-Min; Kim, Kwangyoung; Moon, Jooho

    2016-04-01

    Copper nanowire (Cu NW)-based flexible transparent conductive electrodes (FTCEs) have been investigated in detail for use in various applications such as flexible touch screens, organic photovoltaics and organic light-emitting diodes. In this study, hexadecylamine (HDA) adsorbed onto the surface of NWs is changed into polyvinylpyrrolidone (PVP) via a ligand exchange process; the high-molecular-weight PVP enables high dispersion stability. Intense pulsed light (IPL) irradiation is used to remove organic species present on the surface of the NWs and to form direct connections between the NWs rapidly without any atmospheric control. NWs are self-nanoembedded into a plastic substrate after IPL irradiation, which results in a smooth surface, strong NW/substrate adhesion, excellent mechanical flexibility and enhanced oxidation stability. Moreover, Cu NW FTCEs with high uniformities are successfully fabricated on a large area (150 mm × 200 mm) via successive IPL irradiation that is synchronized with the motion of the sample stage. This study demonstrates the possibility of roll-to-roll-based, large-scale production of low-cost, high-performance Cu NW-based FTCEs.

  15. In Silico Testing of an Artificial-Intelligence-Based Artificial Pancreas Designed for Use in the Intensive Care Unit Setting.

    Science.gov (United States)

    DeJournett, Leon; DeJournett, Jeremy

    2016-11-01

    Effective glucose control in the intensive care unit (ICU) setting has the potential to decrease morbidity and mortality rates, which should in turn lead to decreased health care expenditures. Current ICU-based glucose controllers are mathematically derived, and tend to be based on proportional integral derivative (PID) or model predictive control (MPC). Artificial intelligence (AI)-based closed loop glucose controllers may have the ability to achieve control that improves on the results achieved by either PID or MPC controllers. We conducted an in silico analysis of an AI-based glucose controller designed for use in the ICU setting. This controller was tested using a mathematical model of the ICU patient's glucose-insulin system. A total of 126 000 unique 5-day simulations were carried out, resulting in 107 million glucose values for analysis. For the 7 control ranges tested, with a sensor error of ±10%, the following average results were achieved: (1) time in control range, 94.2%, (2) time in range 70-140 mg/dl, 97.8%, (3) time in hyperglycemic range (>140 mg/dl), 2.1%, and (4) time in hypoglycemic range. These results support the feasibility of an AI-based artificial pancreas system for use in the ICU setting. © 2016 Diabetes Technology Society.

  16. Influence of an Intensive, Field-Based Life Science Course on Preservice Teachers' Self-Efficacy for Environmental Science Teaching

    Science.gov (United States)

    Trauth-Nare, Amy

    2015-08-01

    Personal and professional experiences influence teachers' perceptions of their ability to implement environmental science curricula and to positively impact students' learning. The purpose of this study was twofold: to determine what influence, if any, an intensive field-based life science course and service learning had on preservice teachers' self-efficacy for teaching about the environment and to determine which aspects of the combined field-based course/service learning preservice teachers perceived as effective for enhancing their self-efficacy. Data were collected from class documents and written teaching reflections of 38 middle-level preservice teachers. Some participants ( n = 18) also completed the Environmental Education Efficacy Belief Instrument at the beginning and end of the semester. Both qualitative and quantitative data analyses indicated a significant increase in PSTs' personal efficacies for environmental teaching, t(17) = 4.50, p = .000, d = 1.30, 95 % CI (.33, .90), but not outcome expectancy, t(17) = 1.15, p = .268, d = .220, 95 % CI (-.06, .20). Preservice teachers reported three aspects of the course as important for enhancing their self-efficacies: learning about ecological concepts through place-based issues, service learning with K-5 students and EE curriculum development. Data from this study extend prior work by indicating that practical experiences with students were not the sole factor in shaping PSTs' self-efficacy; learning ecological concepts and theories in field-based activities grounded in the local landscape also influenced PSTs' self-efficacy.

  17. Novel design concepts for generating intense accelerator based beams of mono-energetic fast neutrons

    International Nuclear Information System (INIS)

    Franklyn, C.B.; Govender, K.; Guzek, J.; Beer, A. de; Tapper, U.A.S.

    2001-01-01

    Full text: Successful application of neutron techniques in research, medicine and industry depends on the availability of suitable neutron sources. This is particularly important for techniques that require mono-energetic fast neutrons with a well defined energy spread. There are a limited number of nuclear reactions available for neutron production and often the reaction yield is low, particularly for the thin targets required for the production of mono-energetic neutron beams. Moreover, the desired target materials are often in a gaseous form, as in the reactions D(d,n)3He and T(d,n)4He, requiring innovative design of targets with sufficient target pressure and particle beam handling capability. Additional requirements, particularly important in industrial applications and for research institutions with limited funds, are cost effectiveness as well as small size, coupled with reliable and continuous operation of the system. Neutron sources based on high-power, compact radio-frequency quadrupole (RFQ) linacs can satisfy these criteria if used with a suitable target system. This paper discusses the characteristics of a deuteron RFQ linear accelerator system coupled to a high pressure, differentially pumped deuterium target. Such a source provides in excess of 10^10 mono-energetic neutrons per second with minimal slow neutron and gamma-ray contamination, and is utilised for a variety of applications in the field of mineral identification and materials diagnostics. There is also the possibility of utilising a proposed enhanced system for isotope production. The RFQ linear accelerator consists of: 1) a 25 keV deuterium ion source injector; 2) two close-coupled RFQ resonators, each powered by an rf amplifier supplying up to 300 kW of peak power at 425 MHz; 3) a high energy beam transport system consisting of a beam line, a toroid for beam current monitoring, two steering magnets and a quadrupole triplet for beam focusing.
Basic technical specifications of the RFQ linac

  18. CLASSIFICATION OF SEVERAL SKIN CANCER TYPES BASED ON AUTOFLUORESCENCE INTENSITY OF VISIBLE LIGHT TO NEAR INFRARED RATIO

    Directory of Open Access Journals (Sweden)

    Aryo Tedjo

    2009-12-01

    Full Text Available Skin cancer is a malignant growth on the skin caused by many factors. The most common skin cancers are Basal Cell Cancer (BCC) and Squamous Cell Cancer (SCC). This research uses discriminant analysis to classify skin cancer tissues based on a set of independent variables. The independent variables are combinations of excitation light source (LED lamp), filter, and sensor used to measure the autofluorescence intensity (IAF) ratio of visible light to near infrared (VIS/NIR) of paraffin-embedded tissue biopsies from BCC, SCC, and Lipoma. From the discriminant analysis, it is known that the discriminant function is determined by 4 (four) independent variables, i.e., Blue LED-Red Filter, Blue LED-Yellow Filter, UV LED-Blue Filter, and UV LED-Yellow Filter. The accuracy of the discriminant analysis in classifying the three skin cancer tissues is 100%.

  19. Extraction of a long-pulsed intense electron beam from a pulsed plasma based on hollow cathode discharge

    International Nuclear Information System (INIS)

    Uramoto, Johshin.

    1977-05-01

    An intense electron beam (up to 1.0 kV, 0.8 kA, 0.8 cm diameter) is extracted along a uniform magnetic field with a long decay time (up to 2 msec) from a pulsed high-density plasma source which is produced with a fast rise time (< 100 μsec) by a secondary discharge based on a dc hollow cathode discharge. Through a back stream of ions from the beam-extracting anode region, where a neutral gas is fed, the space charge limit of the electron beam is reduced such that the beam current is determined by the initially injected electron flux and concentrated in a central aperture of the extracting anode. Moreover, the beam pulse width is much extended by the neutral gas feed into the anode space. (auth.)

  20. Graphics processing unit accelerated intensity-based optical coherence tomography angiography using differential frames with real-time motion correction.

    Science.gov (United States)

    Watanabe, Yuuki; Takahashi, Yuhei; Numazawa, Hiroshi

    2014-02-01

    We demonstrate intensity-based optical coherence tomography (OCT) angiography using the squared difference of two sequential frames with bulk-tissue-motion (BTM) correction. This motion correction was performed by minimization of the sum of the pixel values using axial- and lateral-pixel-shifted structural OCT images. We extract the BTM-corrected image from a total of 25 calculated OCT angiographic images. Image processing was accelerated by a graphics processing unit (GPU) with many stream processors to optimize the parallel processing procedure. The GPU processing rate was faster than that of a line scan camera (46.9 kHz). Our OCT system provides the means of displaying structural OCT images and BTM-corrected OCT angiographic images in real time.
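The frame-differencing and bulk-tissue-motion correction idea can be sketched in NumPy; this is a toy CPU version assuming integer pixel shifts and a single frame pair, not the authors' GPU implementation:

```python
import numpy as np

def angiogram(frame1, frame2, max_shift=2):
    """Squared difference of two sequential OCT frames with a simple BTM
    correction: frame2 is tested at every axial/lateral integer shift within
    +/-max_shift, and the shift minimizing the summed squared difference
    (i.e. the best structural alignment) is kept."""
    best = None
    for dz in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(frame2, (dz, dx), axis=(0, 1))
            diff = (frame1 - shifted) ** 2
            score = diff.sum()
            if best is None or score < best[0]:
                best = (score, diff)
    return best[1]

# Static structure plus one decorrelating (flow) pixel: after BTM
# correction, only the flow signal survives in the angiogram.
rng = np.random.default_rng(1)
structure = rng.random((32, 32))
f1 = structure.copy()
f2 = np.roll(structure, (1, 0), axis=(0, 1))  # bulk axial motion of 1 px
f2[16, 16] += 0.5                             # intensity change from flow
angio = angiogram(f1, f2)
```

Once the bulk shift is compensated, the residual squared difference is confined to the flow pixel, which is the contrast mechanism the paper exploits.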

  1. Optimal Threshold Determination for Discriminating Driving Anger Intensity Based on EEG Wavelet Features and ROC Curve Analysis

    Directory of Open Access Journals (Sweden)

    Ping Wan

    2016-08-01

Full Text Available Driving anger, called “road rage”, has become increasingly common nowadays, affecting road safety. A few studies have focused on how to identify driving anger; however, there is still a gap in driving anger grading, especially in real traffic environments, which would be beneficial for taking intervening measures matched to the anger intensity. This study proposes a method for discriminating driving anger states of different intensity based on Electroencephalogram (EEG) spectral features. First, thirty drivers were recruited to conduct on-road experiments on a busy route in Wuhan, China, where anger could be induced by various road events, e.g., vehicles weaving/cutting in line, jaywalking/cyclists crossing, traffic congestion and waiting at red lights, since drivers received extra pay if they completed the experiment ahead of a baseline time. Subsequently, significance analysis was used to select the relative energy spectrum of the β band (β%) and the relative energy spectrum of the θ band (θ%) for discriminating the different driving anger states. Finally, according to receiver operating characteristic (ROC) curve analysis, the optimal thresholds (best cut-off points) of β% and θ% for identifying the no-anger state (i.e., neutral) were determined to be 0.2183 ≤ θ% < 1 and 0 < β% < 0.2586; the low anger state is 0.1539 ≤ θ% < 0.2183 and 0.2586 ≤ β% < 0.3269; the moderate anger state is 0.1216 ≤ θ% < 0.1539 and 0.3269 ≤ β% < 0.3674; the high anger state is 0 < θ% < 0.1216 and 0.3674 ≤ β% < 1. Moreover, the verification results indicate that the overall accuracy (Acc) of the optimal β% thresholds for discriminating the four driving anger states is 80.21%, versus 75.20% for θ%. The results can provide a theoretical foundation for developing driving anger detection or warning devices based on the relevant optimal thresholds.
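The reported cut-off points map directly onto a simple grading function. This sketch uses only the β% thresholds quoted in the abstract (the θ% thresholds define the complementary ranges); the function name is our own:

```python
def anger_state(beta_rel):
    """Map relative beta-band spectral energy (beta%) to an anger level
    using the optimal ROC cut-off points reported in the study."""
    if beta_rel < 0.2586:
        return "neutral"
    elif beta_rel < 0.3269:
        return "low"
    elif beta_rel < 0.3674:
        return "moderate"
    else:
        return "high"
```

For example, a driver with β% = 0.30 would be graded as being in the low anger state.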

  2. PCR-based verification of positive rapid diagnostic tests for intestinal protozoa infections with variable test band intensity.

    Science.gov (United States)

    Becker, Sören L; Müller, Ivan; Mertens, Pascal; Herrmann, Mathias; Zondie, Leyli; Beyleveld, Lindsey; Gerber, Markus; du Randt, Rosa; Pühse, Uwe; Walter, Cheryl; Utzinger, Jürg

    2017-10-01

    Stool-based rapid diagnostic tests (RDTs) for pathogenic intestinal protozoa (e.g. Cryptosporidium spp. and Giardia intestinalis) allow for prompt diagnosis and treatment in resource-constrained settings. Such RDTs can improve individual patient management and facilitate population-based screening programmes in areas without microbiological laboratories for confirmatory testing. However, RDTs are difficult to interpret in case of 'trace' results with faint test band intensities and little is known about whether such ambiguous results might indicate 'true' infections. In a longitudinal study conducted in poor neighbourhoods of Port Elizabeth, South Africa, a total of 1428 stool samples from two cohorts of schoolchildren were examined on the spot for Cryptosporidium spp. and G. intestinalis using an RDT (Crypto/Giardia DuoStrip; Coris BioConcept). Overall, 121 samples were positive for G. intestinalis and the RDT suggested presence of cryptosporidiosis in 22 samples. After a storage period of 9-10 months in cohort 1 and 2-3 months in cohort 2, samples were subjected to multiplex PCR (BD Max™ Enteric Parasite Panel, Becton Dickinson). Ninety-three percent (112/121) of RDT-positive samples for G. intestinalis were confirmed by PCR, with a correlation between RDT test band intensity and quantitative pathogen load present in the sample. For Cryptosporidium spp., all positive RDTs had faintly visible lines and these were negative on PCR. The performance of the BD Max™ PCR was nearly identical in both cohorts, despite the prolonged storage at disrupted cold chain conditions in cohort 1. The Crypto/Giardia DuoStrip warrants further validation in communities with a high incidence of diarrhoea. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. On Gabor frames generated by sign-changing windows and B-splines

    DEFF Research Database (Denmark)

    Christensen, Ole; Kim, Hong Oh; Kim, Rae Young

    2015-01-01

For a class of compactly supported windows we characterize the frame property for a Gabor system {E_mb T_na g}_{m,n∈Z}, for translation parameters a belonging to a certain range depending on the support size. We show that the obstructions to the frame property are located on a countable number of "curves"...

  4. Data assimilation using Bayesian filters and B-spline geological models

    KAUST Repository

    Duan, Lian; Farmer, Chris; Hoteit, Ibrahim; Luo, Xiaodong; Moroz, Irene

    2011-01-01

    This paper proposes a new approach to problems of data assimilation, also known as history matching, of oilfield production data by adjustment of the location and sharpness of patterns of geological facies. Traditionally, this problem has been

  5. B-spline goal-oriented error estimators for geometrically nonlinear rods

    Science.gov (United States)

    2011-04-01

The errors resulting from the linear, quadratic and nonlinear (with trigonometric functions sine and cosine) output functionals q2–q4 are assessed in all the tests considered, for polynomial degrees p = 1, 2.

  6. Energy expenditure and EPOC between water-based high-intensity interval training and moderate-intensity continuous training sessions in healthy women.

    Science.gov (United States)

    Schaun, Gustavo Zaccaria; Pinto, Stephanie Santana; Praia, Aline Borges de Carvalho; Alberton, Cristine Lima

    2018-02-05

The present study compared the energy expenditure (EE) during and after two water aerobics protocols, high-intensity interval training (HIIT) and moderate continuous training (CONT). A crossover randomized design was employed comprising 11 healthy young women. HIIT consisted of eight 20 s bouts at 130% of the cadence associated with the maximal oxygen consumption (measured in the aquatic environment) with 10 s passive rest. CONT corresponded to 30 min at a heart rate equivalent to 90-95% of the second ventilatory threshold. EE was measured during and 30 min before and after the protocols, and excess post-exercise oxygen consumption (EPOC) was calculated. Total EE during the session was higher in CONT (227.62 ± 31.69 kcal) compared to HIIT (39.91 ± 4.24 kcal), while EE per minute was greater in HIIT (9.98 ± 1.06 kcal) than in CONT (7.58 ± 1.07 kcal). Post-exercise EE (64.48 ± 3.50 vs. 63.65 ± 10.39 kcal) and EPOC (22.53 ± 4.98 vs. 22.10 ± 8.00 kcal) were not different between HIIT and CONT, respectively. Additionally, oxygen uptake had already returned to baseline fifteen minutes post-exercise. These results suggest that a water aerobics CONT session results in post-exercise EE and EPOC comparable to those of HIIT, despite the latter's supramaximal nature. Still, CONT results in higher total EE.

  7. Convex reformulation of biologically-based multi-criteria intensity-modulated radiation therapy optimization including fractionation effects.

    Science.gov (United States)

    Hoffmann, Aswin L; den Hertog, Dick; Siem, Alex Y D; Kaanders, Johannes H A M; Huizenga, Henk

    2008-11-21

    Finding fluence maps for intensity-modulated radiation therapy (IMRT) can be formulated as a multi-criteria optimization problem for which Pareto optimal treatment plans exist. To account for the dose-per-fraction effect of fractionated IMRT, it is desirable to exploit radiobiological treatment plan evaluation criteria based on the linear-quadratic (LQ) cell survival model as a means to balance the radiation benefits and risks in terms of biologic response. Unfortunately, the LQ-model-based radiobiological criteria are nonconvex functions, which make the optimization problem hard to solve. We apply the framework proposed by Romeijn et al (2004 Phys. Med. Biol. 49 1991-2013) to find transformations of LQ-model-based radiobiological functions and establish conditions under which transformed functions result in equivalent convex criteria that do not change the set of Pareto optimal treatment plans. The functions analysed are: the LQ-Poisson-based model for tumour control probability (TCP) with and without inter-patient heterogeneity in radiation sensitivity, the LQ-Poisson-based relative seriality s-model for normal tissue complication probability (NTCP), the equivalent uniform dose (EUD) under the LQ-Poisson model and the fractionation-corrected Probit-based model for NTCP according to Lyman, Kutcher and Burman. These functions differ from those analysed before in that they cannot be decomposed into elementary EUD or generalized-EUD functions. In addition, we show that applying increasing and concave transformations to the convexified functions is beneficial for the piecewise approximation of the Pareto efficient frontier.
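For reference, a common formulation of the LQ-Poisson TCP criterion analysed above (generic symbols, not necessarily the paper's notation): with voxel $i$ receiving dose $d_i$ per fraction over $n$ fractions, $N_i$ clonogens per voxel, and LQ parameters $\alpha$, $\beta$,

```latex
% LQ cell survival per voxel, and Poisson tumour control probability
S_i = \exp\!\bigl(-n\,(\alpha d_i + \beta d_i^2)\bigr), \qquad
\mathrm{TCP} = \prod_i \exp\!\bigl(-N_i\, S_i\bigr)
```

The quadratic $\beta d_i^2$ term is what makes TCP nonconvex in the fluence variables, motivating the convexifying transformations studied in the paper.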

  8. TLS-Based Feature Extraction and 3D Modeling for Arch Structures

    Directory of Open Access Journals (Sweden)

    Xiangyang Xu

    2017-01-01

Full Text Available Terrestrial laser scanning (TLS) technology is one of the most efficient and accurate tools for 3D measurement, revealing surface-based characteristics of objects with the aid of computer vision and programming. It therefore plays an increasingly important role in deformation monitoring and analysis. Automatic data extraction and efficient, accurate modeling from scattered point clouds are challenging issues in TLS data processing. This paper presents a data extraction method that considers the partial and statistical distribution of the scanned point clouds, called the window-neighborhood method. Based on the extracted point clouds, 3D modeling of the boundary of an arched structure was carried out. The ideal modeling strategy should be fast, accurate, and of low complexity when applied to large amounts of data. The paper discusses the fitting accuracy in four cases: whole curve versus segmentation, and polynomial versus B-spline. A similar number of parameters was set for the polynomial and B-spline fits, because the number of unknown parameters is essential for the fitting accuracy. The uncertainties of the scanned raw point clouds and of the modeling are discussed. This process is considered a prerequisite step for 3D deformation analysis with TLS.
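The polynomial-versus-B-spline comparison can be sketched with SciPy; the synthetic arch profile, noise level and parameter counts below are assumptions for illustration, not the paper's data:

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Hypothetical boundary profile of an arch extracted from TLS point clouds.
x = np.linspace(0.0, np.pi, 200)
y_true = np.sin(x)                             # idealized arch shape
rng = np.random.default_rng(2)
y = y_true + rng.normal(0.0, 0.005, x.size)    # scanning noise

# Cubic B-spline (smoothing spline) versus a polynomial with a comparable
# number of free parameters, mirroring the paper's comparison.
tck = splrep(x, y, k=3, s=x.size * 0.005**2)   # smoothing matched to noise
y_spline = splev(x, tck)

coeffs = np.polyfit(x, y, deg=7)
y_poly = np.polyval(coeffs, x)

rmse_spline = np.sqrt(np.mean((y_spline - y_true) ** 2))
rmse_poly = np.sqrt(np.mean((y_poly - y_true) ** 2))
```

Matching the number of free parameters (spline coefficients versus polynomial degree) is what makes the accuracy comparison fair, as the abstract notes.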

  9. Adaptive local surface refinement based on LR NURBS and its application to contact

    Science.gov (United States)

    Zimmermann, Christopher; Sauer, Roger A.

    2017-12-01

    A novel adaptive local surface refinement technique based on Locally Refined Non-Uniform Rational B-Splines (LR NURBS) is presented. LR NURBS can model complex geometries exactly and are the rational extension of LR B-splines. The local representation of the parameter space overcomes the drawback of non-existent local refinement in standard NURBS-based isogeometric analysis. For a convenient embedding into general finite element codes, the Bézier extraction operator for LR NURBS is formulated. An automatic remeshing technique is presented that allows adaptive local refinement and coarsening of LR NURBS. In this work, LR NURBS are applied to contact computations of 3D solids and membranes. For solids, LR NURBS-enriched finite elements are used to discretize the contact surfaces with LR NURBS finite elements, while the rest of the body is discretized by linear Lagrange finite elements. For membranes, the entire surface is discretized by LR NURBS. Various numerical examples are shown, and they demonstrate the benefit of using LR NURBS: Compared to uniform refinement, LR NURBS can achieve high accuracy at lower computational cost.

  10. Performance evaluation of an algorithm for fast optimization of beam weights in anatomy-based intensity modulated radiotherapy

    International Nuclear Information System (INIS)

    Ranganathan, Vaitheeswaran; Sathiya Narayanan, V.K.; Bhangle, Janhavi R.; Gupta, Kamlesh K.; Basu, Sumit; Maiya, Vikram; Joseph, Jolly; Nirhali, Amit

    2010-01-01

This study aims to evaluate the performance of a new algorithm for optimization of beam weights in anatomy-based intensity modulated radiotherapy (IMRT). The algorithm uses a numerical technique called Gaussian-Elimination that derives the optimum beam weights in an exact or non-iterative way. The distinct feature of the algorithm is that it takes only a fraction of a second to optimize the beam weights, irrespective of the complexity of the given case. The algorithm has been implemented using MATLAB with a Graphical User Interface (GUI) option for convenient specification of dose constraints and penalties to different structures. We have tested the numerical and clinical capabilities of the proposed algorithm in several patient cases in comparison with KonRad inverse planning system. The comparative analysis shows that the algorithm can generate anatomy-based IMRT plans with about 50% reduction in number of MUs and 60% reduction in number of apertures, while producing dose distribution comparable to that of beamlet-based IMRT plans. Hence, it is clearly evident from the study that the proposed algorithm can be effectively used for clinical applications. (author)
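The exact, non-iterative character of such a method can be illustrated with a tiny weighted least-squares system solved by Gaussian elimination. The dose matrix, prescriptions and penalties below are invented for illustration; the paper's actual MATLAB formulation is more elaborate:

```python
import numpy as np

# D: dose per unit beam weight from each of 3 beams to 4 constraint points
# (3 target points, 1 normal-tissue point); p: prescribed doses; W: penalties.
D = np.array([[1.0, 0.2, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.3, 1.0],
              [0.4, 0.4, 0.4]])
p = np.array([60.0, 60.0, 60.0, 20.0])
W = np.diag([1.0, 1.0, 1.0, 0.5])

# Weighted least-squares normal equations (D^T W D) w = D^T W p,
# solved exactly in one step by Gaussian elimination (LAPACK gesv).
A = D.T @ W @ D
b = D.T @ W @ p
w = np.linalg.solve(A, b)
dose = D @ w   # resulting dose at the constraint points
```

Because the normal equations are solved directly rather than iteratively, the run time is independent of how conflicting the constraints are, which is the speed property the abstract highlights.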

  11. Evaluating team-based inter-professional advanced life support training in intensive care-a prospective observational study.

    Science.gov (United States)

    Brewster, D J; Barrett, J A; Gherardin, E; O'Neill, J A; Sage, D; Hanlon, G

    2017-01-01

Recent focus on national standards within Australian hospitals has prompted a focus on the training of our staff in advanced life support (ALS). Research in critical care nursing has questioned the traditional annual certification of ALS competence as the best method of delivering this training. Simulation and team-based training may provide better ALS education to intensive care unit (ICU) staff. Our new inter-professional team-based advanced life support program involved ICU staff in a large private metropolitan ICU. A prospective observational study using three standardised questionnaires and two multiple choice questionnaire assessments was conducted. Ninety-nine staff demonstrated a 17.8% (95% confidence interval 4.2-31, P = 0.01) increase in overall ICU nursing attendance at training sessions. Questionnaire response rates were 93 (94%), 99 (100%) and 60 (61%) respectively; 51 (52%) staff returned all three. Criteria were assessed by scores from 0 to 10. Nurses reported improved satisfaction with the education program, confidence and role understanding (9.4 versus 7.1; 7.9 versus 8.2; 7.4 versus 7.8; 8.1, P = 0.04). The new program cost approximately an extra $16,500 in nursing salaries. We concluded that team-based, inter-professional ALS training produced statistically significant improvements in nursing attendance, satisfaction with ALS education, confidence and role understanding compared to traditional ALS training.

  12. Automatic recognition of seismic intensity based on RS and GIS: a case study in Wenchuan Ms8.0 earthquake of China.

    Science.gov (United States)

    Zhang, Qiuwen; Zhang, Yan; Yang, Xiaohong; Su, Bin

    2014-01-01

In recent years, earthquakes have frequently occurred all over the world, which caused huge casualties and economic losses. It is very necessary and urgent to obtain the seismic intensity map timely so as to master the distribution of the disaster and provide supports for quick earthquake relief. Compared with traditional methods of drawing seismic intensity map, which require many investigations in the field of earthquake area or are too dependent on the empirical formulas, spatial information technologies such as Remote Sensing (RS) and Geographical Information System (GIS) can provide a fast and economical way to automatically recognize the seismic intensity. With the integrated application of RS and GIS, this paper proposes a RS/GIS-based approach for automatic recognition of seismic intensity, in which RS is used to retrieve and extract the information on damages caused by earthquake, and GIS is applied to manage and display the data of seismic intensity. The case study in Wenchuan Ms8.0 earthquake in China shows that the information on seismic intensity can be automatically extracted from remotely sensed images as quickly as possible after earthquake occurrence, and the Digital Intensity Model (DIM) can be used to visually query and display the distribution of seismic intensity.

  13. Automatic Recognition of Seismic Intensity Based on RS and GIS: A Case Study in Wenchuan Ms8.0 Earthquake of China

    Directory of Open Access Journals (Sweden)

    Qiuwen Zhang

    2014-01-01

Full Text Available In recent years, earthquakes have frequently occurred all over the world, which caused huge casualties and economic losses. It is very necessary and urgent to obtain the seismic intensity map timely so as to master the distribution of the disaster and provide supports for quick earthquake relief. Compared with traditional methods of drawing seismic intensity map, which require many investigations in the field of earthquake area or are too dependent on the empirical formulas, spatial information technologies such as Remote Sensing (RS) and Geographical Information System (GIS) can provide a fast and economical way to automatically recognize the seismic intensity. With the integrated application of RS and GIS, this paper proposes a RS/GIS-based approach for automatic recognition of seismic intensity, in which RS is used to retrieve and extract the information on damages caused by earthquake, and GIS is applied to manage and display the data of seismic intensity. The case study in Wenchuan Ms8.0 earthquake in China shows that the information on seismic intensity can be automatically extracted from remotely sensed images as quickly as possible after earthquake occurrence, and the Digital Intensity Model (DIM) can be used to visually query and display the distribution of seismic intensity.

  14. Simultaneous tuning of electric field intensity and structural properties of ZnO: Graphene nanostructures for FOSPR based nicotine sensor.

    Science.gov (United States)

    Tabassum, Rana; Gupta, Banshi D

    2017-05-15

We report the theoretical and experimental realization of an SPR-based fiber optic nicotine sensor having coatings of silver and graphene-doped ZnO nanostructure on the unclad core of the optical fiber. The volume fraction (f) of graphene in ZnO was optimized using simulation of the electric field intensity. Four types of graphene-doped ZnO nanostructures, viz. nanocomposites, nanoflowers, nanotubes and nanofibers, were prepared using the optimized value of f. The morphology, photoluminescence (PL) spectra and UV-vis spectra of these nanostructures were studied. The peak PL intensity was found to be highest for ZnO: graphene nanofibers. The optimized value of f in the ZnO: graphene nanofiber was reconfirmed using UV-vis spectroscopy. Experiments were performed on the fiber optic probe fabricated with the Ag/ZnO: graphene layer and optimized parameters for in-situ detection of nicotine. The interaction of nicotine with the ZnO: graphene nanostructure alters its dielectric function, which is manifested as a shift in the resonance wavelength. From the sensing signal, the performance parameters were measured, including sensitivity, limit of detection (LOD), limit of quantification (LOQ), stability, repeatability and selectivity. A real sample prepared from cigarette tobacco leaves and analyzed using the fabricated sensor demonstrates its suitability for practical applications. The achieved values of LOD and LOQ are unrivalled in comparison to those reported previously. The sensor possesses additional advantages such as immunity to electromagnetic interference, low cost, and the capability of online monitoring and remote sensing. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Recruiting intensity

    OpenAIRE

    R. Jason Faberman

    2014-01-01

    To hire new workers, employers use a variety of recruiting methods in addition to posting a vacancy announcement. The intensity with which employers use these alternative methods can vary widely with a firm’s performance and with the business cycle. In fact, persistently low recruiting intensity helps to explain the sluggish pace of US job growth following the Great Recession.

  16. Prospective demonstration of brain plasticity after intensive abacus-based mental calculation training: An fMRI study

    International Nuclear Information System (INIS)

    Chen, C.L.; Wu, T.H.; Cheng, M.C.; Huang, Y.H.; Sheu, C.Y.; Hsieh, J.C.; Lee, J.S.

    2006-01-01

Abacus-based mental calculation is a skill unique to Chinese culture. Abacus experts can perform complex computations mentally with exceptionally fast speed and high accuracy. However, the neural bases of this computation processing are not yet clearly known. This study used a BOLD contrast 3T fMRI system to explore the differences in brain activation between abacus experts and non-expert subjects. All the acquired data were analyzed using SPM99 software. The results revealed different ways of performing calculations between the two groups. The experts tended to adopt an efficient visuospatial/visuomotor strategy (bilateral parietal/frontal network) to process and retrieve all the intermediate and final results on the virtual abacus during calculation. By contrast, coordination of several networks (verbal, visuospatial processing and executive function) was required in the normal group to carry out arithmetic operations. Furthermore, more involvement of visuomotor imagery processing (right dorsal premotor area) for imagining bead manipulation, and low-level use of the executive function (frontal-subcortical area) for launching the relatively time-consuming, sequentially organized process, was noted in the abacus expert group compared with the non-expert group. We suggest that these findings may explain why abacus experts can exhibit exceptional computational skills compared to non-experts after intensive training.

  17. MRI to X-ray mammography intensity-based registration with simultaneous optimisation of pose and biomechanical transformation parameters.

    Science.gov (United States)

    Mertzanidou, Thomy; Hipwell, John; Johnsen, Stian; Han, Lianghao; Eiben, Bjoern; Taylor, Zeike; Ourselin, Sebastien; Huisman, Henkjan; Mann, Ritse; Bick, Ulrich; Karssemeijer, Nico; Hawkes, David

    2014-05-01

Determining corresponding regions between an MRI and an X-ray mammogram is a clinically useful task that is challenging for radiologists due to the large deformation that the breast undergoes between the two image acquisitions. In this work we propose an intensity-based image registration framework, where the biomechanical transformation model parameters and the rigid-body transformation parameters are optimised simultaneously. Patient-specific biomechanical modelling of the breast derived from diagnostic, prone MRI has been previously used for this task. However, the high computational time associated with breast compression simulation using commercial packages, did not allow the optimisation of both pose and FEM parameters in the same framework. We use a fast explicit Finite Element (FE) solver that runs on a graphics card, enabling the FEM-based transformation model to be fully integrated into the optimisation scheme. The transformation model has seven degrees of freedom, which include parameters for both the initial rigid-body pose of the breast prior to mammographic compression, and those of the biomechanical model. The framework was tested on ten clinical cases and the results were compared against an affine transformation model, previously proposed for the same task. The mean registration error was 11.6 ± 3.8 mm for the CC and 11 ± 5.4 mm for the MLO view registrations, indicating that this could be a useful clinical tool. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
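The core idea of jointly optimizing pose and deformation parameters against an intensity similarity measure can be sketched in one dimension. The Gaussian "profile", the translation t and the compression factor s below are toy assumptions standing in for the rigid pose and FEM parameters, not the paper's model:

```python
import numpy as np
from scipy.optimize import minimize

# A 1-D "profile" undergoes an unknown translation t (pose) and
# compression s (deformation); both are recovered in ONE joint optimization
# of a sum-of-squared-differences intensity measure.
x = np.linspace(-3, 3, 301)
t_true, s_true = 0.4, 0.8
fixed = np.exp(-((x - t_true) / s_true) ** 2)   # target after pose + deformation

def ssd(params):
    t, s = params
    warped = np.exp(-((x - t) / s) ** 2)        # analytic warp of the model
    return np.sum((warped - fixed) ** 2)

res = minimize(ssd, x0=[0.0, 1.0], method="Nelder-Mead")
t_est, s_est = res.x
```

Optimizing both parameter groups in one objective, rather than alternating between them, is what the fast GPU FE solver makes affordable in the paper.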

  18. Simulation of the Impact of New Aircraft- and Satellite-based Ocean Surface Wind Measurements on Estimates of Hurricane Intensity

    Science.gov (United States)

    Uhlhorn, Eric; Atlas, Robert; Black, Peter; Buckley, Courtney; Chen, Shuyi; El-Nimri, Salem; Hood, Robbie; Johnson, James; Jones, Linwood; Miller, Timothy

    2009-01-01

The Hurricane Imaging Radiometer (HIRAD) is a new airborne microwave remote sensor currently under development to enhance real-time hurricane ocean surface wind observations. HIRAD builds on the capabilities of the Stepped Frequency Microwave Radiometer (SFMR), which now operates on NOAA P-3, G-4, and AFRC C-130 aircraft. Unlike the SFMR, which measures wind speed and rain rate along the ground track directly beneath the aircraft, HIRAD will provide images of the surface wind and rain field over a wide swath (approximately 3 times the aircraft altitude). To demonstrate potential improvement in the measurement of peak hurricane winds, we present a set of Observing System Simulation Experiments (OSSEs) in which measurements from the new instrument as well as those from existing platforms (air, surface, and space-based) are simulated from the output of a high-resolution (approximately 1.7 km) numerical model. Simulated retrieval errors due to both instrument noise as well as model function accuracy are considered over the expected range of incidence angles, wind speeds and rain rates. Based on numerous simulated flight patterns and data source combinations, statistics are developed to describe relationships between the observed and true (from the model's perspective) peak wind speed. These results have implications for improving the estimation of hurricane intensity (as defined by the peak sustained wind anywhere in the storm), which may often go un-observed due to sampling limitations.

  19. Multiple-algorithm parallel fusion of infrared polarization and intensity images based on algorithmic complementarity and synergy

    Science.gov (United States)

    Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng

    2018-01-01

Diverse image fusion methods perform differently; each has advantages and disadvantages compared with the others. One notion is that the advantages of different fusion methods can be effectively combined. A multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is proposed. First, in view of the characteristics of the different algorithms and the difference-features among images, an index vector-based feature similarity is proposed to define the degree of complementarity and synergy. This proposed index vector is a reliable evidence indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected. Then, the different degrees of the various features and the infrared intensity images are used as the initial weights for the nonnegative matrix factorization (NMF). This avoids the randomness of the NMF initialization parameters. Finally, the fused images of the different algorithms are integrated using the NMF because of its excellent data-fusing performance on independent features. Experimental results demonstrate that the visual effect and objective evaluation index of the fused images obtained using the proposed method are better than those obtained using traditional methods. The proposed method retains all the advantages that the individual fusion algorithms have.
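The deterministic-initialization trick for NMF can be sketched with scikit-learn, which accepts user-supplied starting factors via init='custom'. The images, the stand-in intensity image and the initial weights below are synthetic illustrations, not the paper's data or exact formulation:

```python
import numpy as np
from sklearn.decomposition import NMF

# Two pre-fused images (flattened to columns) are factorized with a
# deterministic initialization built from an intensity image and initial
# feature weights, instead of sklearn's random init.
rng = np.random.default_rng(3)
img_a = rng.random((16, 16))          # fused result of algorithm A
img_b = rng.random((16, 16))          # fused result of algorithm B
V = np.column_stack([img_a.ravel(), img_b.ravel()])   # 256 x 2 data matrix

intensity = 0.5 * (img_a + img_b)     # stand-in for the IR intensity image
W0 = intensity.ravel()[:, None]       # 256 x 1 initial basis (rank 1)
H0 = np.array([[0.6, 0.4]])           # 1 x 2 initial feature weights

model = NMF(n_components=1, init="custom", max_iter=500)
W = model.fit_transform(V, W=W0, H=H0)                # uses W0, H0 as start
fused = (W @ model.components_).mean(axis=1).reshape(16, 16)
```

Seeding W and H from image-derived quantities removes the run-to-run variability of a random NMF start, which is the reproducibility point the abstract makes.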

  20. First experiments with a liquid-lithium based high-intensity 25-keV neutron source

    International Nuclear Information System (INIS)

    Paul, M.

    2014-01-01

A high-intensity neutron source based on a Liquid-Lithium Target (LiLiT) and the ⁷Li(p,n) reaction was developed at SARAF (Soreq Applied Research Accelerator Facility, Israel) and is used for nuclear astrophysics experiments. The setup was commissioned with a 1.3 mA proton beam at 1.91 MeV, producing a neutron yield of ~2×10¹⁰ n/s, more than one order of magnitude larger than conventional ⁷Li(p,n)-based neutron sources, and peaked at ~25 keV. The LiLiT device consists of a high-velocity (> 4 m/s) vertical jet of liquid lithium (~200 °C) whose free surface is bombarded by the proton beam. The lithium jet acts both as the neutron-producing target and as a power beam dump. The target dissipates a peak areal power density of 2.5 kW/cm² and a peak volume density of 0.5 MW/cm³ with no change of temperature or vacuum regime in the vacuum chamber. Preliminary results of Maxwellian-averaged cross section measurements for stable isotopes of Zr and Ce, performed by activation in the neutron flux of LiLiT, and nuclear-astrophysics experiments in planning will be described. (author)

  1. Prospective demonstration of brain plasticity after intensive abacus-based mental calculation training: An fMRI study

    Science.gov (United States)

    Chen, C. L.; Wu, T. H.; Cheng, M. C.; Huang, Y. H.; Sheu, C. Y.; Hsieh, J. C.; Lee, J. S.

    2006-12-01

Abacus-based mental calculation is a skill unique to Chinese culture. Abacus experts can perform complex computations mentally with exceptionally fast speed and high accuracy. However, the neural bases of this computation processing are not yet clearly known. This study used a BOLD contrast 3T fMRI system to explore the differences in brain activation between abacus experts and non-expert subjects. All the acquired data were analyzed using SPM99 software. The results revealed different ways of performing calculations between the two groups. The experts tended to adopt an efficient visuospatial/visuomotor strategy (bilateral parietal/frontal network) to process and retrieve all the intermediate and final results on the virtual abacus during calculation. By contrast, coordination of several networks (verbal, visuospatial processing and executive function) was required in the normal group to carry out arithmetic operations. Furthermore, more involvement of visuomotor imagery processing (right dorsal premotor area) for imagining bead manipulation, and low-level use of the executive function (frontal-subcortical area) for launching the relatively time-consuming, sequentially organized process, was noted in the abacus expert group compared with the non-expert group. We suggest that these findings may explain why abacus experts can exhibit exceptional computational skills compared to non-experts after intensive training.

  2. SU-E-T-175: Clinical Evaluations of Monte Carlo-Based Inverse Treatment Plan Optimization for Intensity Modulated Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Chi, Y; Li, Y; Tian, Z; Gu, X; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States)

    2015-06-15

    Purpose: Pencil-beam or superposition-convolution type dose calculation algorithms are routinely used in inverse plan optimization for intensity modulated radiation therapy (IMRT). However, due to their limited accuracy in some challenging cases, e.g. lung, the resulting dose may lose its optimality after being recomputed using an accurate algorithm, e.g. Monte Carlo (MC). The objective of this study was to evaluate the feasibility and advantages of a new method to include MC in the treatment planning process. Methods: We developed a scheme to iteratively perform MC-based beamlet dose calculations and plan optimization. In the MC stage, a GPU-based dose engine was used, and the number of particles sampled from a beamlet was proportional to its optimized fluence from the previous step. We tested this scheme in four lung cancer IMRT cases. For each case, the original plan dose, the plan dose re-computed by MC, and the dose optimized by our scheme were obtained. Clinically relevant dosimetric quantities in these three plans were compared. Results: Although the original plan achieved satisfactory PTV dose coverage, after re-computing doses using the MC method, the PTV D95% was found to be reduced by 4.60%–6.67%. After re-optimizing these cases with our scheme, the PTV coverage was improved to the same level as in the original plan, while the doses to critical OARs were maintained at clinically acceptable levels. Regarding computation time, it took on average 144 sec per case using only one GPU card, including both MC-based beamlet dose calculation and treatment plan optimization. Conclusion: The achieved dosimetric gains and high computational efficiency indicate the feasibility and advantages of the proposed MC-based IMRT optimization method. Comprehensive validations in more patient cases are in progress.
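
The iterative MC/optimization scheme described above can be sketched in miniature: a toy dose-influence matrix stands in for the GPU MC engine, per-beamlet particle numbers (and hence statistical noise) follow the current fluence, and a projected-gradient least-squares step re-optimizes. All matrices, counts, and the prescription are hypothetical, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dose-influence matrix: dose per unit fluence from 10 beamlets to 50
# voxels (hypothetical numbers; a real system gets these from MC transport).
n_vox, n_beam = 50, 10
D_true = np.abs(rng.normal(1.0, 0.3, size=(n_vox, n_beam)))
d_rx = np.full(n_vox, 2.0)                      # prescribed voxel doses (Gy)

def mc_beamlet_dose(fluence, n_total=1e5):
    """Mimic an MC beamlet dose calculation: statistical noise shrinks as
    more particles are allotted to beamlets with larger fluence."""
    n_part = n_total * (fluence + 1e-3) / np.sum(fluence + 1e-3)
    noise = rng.standard_normal(D_true.shape) / np.sqrt(n_part)
    return D_true * (1.0 + 0.05 * noise)

fluence = np.ones(n_beam)
obj0 = float(np.linalg.norm(D_true @ fluence - d_rx))

for _ in range(5):                              # MC dose calc <-> optimization
    D = mc_beamlet_dose(fluence)
    step = 1.0 / np.linalg.norm(D, 2) ** 2      # safe gradient step size
    f = fluence.copy()
    for _ in range(200):                        # projected gradient, f >= 0
        f = np.maximum(f - step * (D.T @ (D @ f - d_rx)), 0.0)
    fluence = f

obj_final = float(np.linalg.norm(D_true @ fluence - d_rx))
assert obj_final < obj0 and fluence.min() >= 0.0
```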

  3. Intensity-Modulated Radiotherapy Results in Significant Decrease in Clinical Toxicities Compared With Conventional Wedge-Based Breast Radiotherapy

    International Nuclear Information System (INIS)

    Harsolia, Asif; Kestin, Larry; Grills, Inga; Wallace, Michelle; Jolly, Shruti; Jones, Cortney; Lala, Moinaktar; Martinez, Alvaro; Schell, Scott; Vicini, Frank A.

    2007-01-01

    Purpose: We have previously demonstrated that intensity-modulated radiotherapy (IMRT) with a static multileaf collimator process results in a more homogenous dose distribution compared with conventional wedge-based whole breast irradiation (WBI). In the present analysis, we reviewed the acute and chronic toxicity of this IMRT approach compared with conventional wedge-based treatment. Methods and Materials: A total of 172 patients with Stage 0-IIB breast cancer were treated with lumpectomy followed by WBI. All patients underwent treatment planning computed tomography and received WBI (median dose, 45 Gy) followed by a boost to 61 Gy. Of the 172 patients, 93 (54%) were treated with IMRT, and the 79 patients (46%) treated with wedge-based RT in a consecutive fashion immediately before this cohort served as the control group. The median follow-up was 4.7 years. Results: A significant reduction in acute Grade 2 or worse dermatitis, edema, and hyperpigmentation was seen with IMRT compared with wedges. A trend was found toward reduced acute Grade 3 or greater dermatitis (6% vs. 1%, p = 0.09) in favor of IMRT. Chronic Grade 2 or worse breast edema was significantly reduced with IMRT compared with conventional wedges. No difference was found in cosmesis scores between the two groups. In patients with larger breasts (≥1,600 cm³, n = 64), IMRT resulted in reduced acute (Grade 2 or greater) breast edema (0% vs. 36%, p < 0.001) and hyperpigmentation (3% vs. 41%, p < 0.001) and chronic (Grade 2 or greater) long-term edema (3% vs. 30%, p = 0.007). Conclusion: The use of IMRT in the treatment of the whole breast results in a significant decrease in acute dermatitis, edema, and hyperpigmentation and a reduction in the development of chronic breast edema compared with conventional wedge-based RT.

  4. Linear accelerator-based intensity-modulated total marrow irradiation technique for treatment of hematologic malignancies: a dosimetric feasibility study.

    Science.gov (United States)

    Yeginer, Mete; Roeske, John C; Radosevich, James A; Aydogan, Bulent

    2011-03-15

    To investigate the dosimetric feasibility of linear accelerator-based intensity-modulated total marrow irradiation (IM-TMI) in patients with hematologic malignancies. Linear accelerator-based IM-TMI treatment planning was performed for 9 patients using the Eclipse treatment planning system. The planning target volume (PTV) consisted of all the bones in the body from the head to the mid-femur, except for the forearms and hands. Organs at risk (OAR) to be spared included the lungs, heart, liver, kidneys, brain, eyes, oral cavity, and bowel, and were contoured by a physician on the axial computed tomography images. The three-isocenter technique previously developed by our group was used for treatment planning. We developed and used a common dose-volume objective method to reduce the planning time and planner subjectivity in the treatment planning process. Coverage of 95% of the PTV with 99% of the prescribed dose of 12 Gy was achieved for all nine patients. The average dose reduction in OAR ranged from 19% for the lungs to 68% for the lenses. The common dose-volume objective method decreased the planning time by an average of 35% and reduced inter- and intraplanner subjectivity. The results from the present study suggest that the linear accelerator-based IM-TMI technique is clinically feasible. We have demonstrated that linear accelerator-based IM-TMI plans with good PTV coverage and improved OAR sparing can be obtained within a clinically reasonable time using the common dose-volume objective method proposed in the present study. Copyright © 2011. Published by Elsevier Inc.

  5. Lateral Penumbra Modelling Based Leaf End Shape Optimization for Multileaf Collimator in Radiotherapy

    Directory of Open Access Journals (Sweden)

    Dong Zhou

    2016-01-01

    The lateral penumbra of a multileaf collimator plays an important role in radiotherapy treatment planning. Growing evidence has revealed that, for a single-focused multileaf collimator, the lateral penumbra width is leaf-position dependent and largely attributable to the leaf end shape. In our study, an analytical method for modelling the leaf-end-induced lateral penumbra is formulated using Tangent Secant Theory. Compared with Monte Carlo simulation and a ray tracing algorithm, our model serves well for cost-efficient penumbra evaluation. Leaf ends represented in the parametric forms of circular arc, elliptical arc, Bézier curve, and B-spline are implemented. With a biobjective function of penumbra mean and variance introduced, a genetic algorithm is used to approximate the Pareto frontier. Results show that for a circular-arc leaf end the objective function is convex, and convergence to the optimal solution is guaranteed using a gradient-based iterative method. It is found that an optimal leaf end in the shape of a Bézier curve achieves minimal standard deviation, while with a B-spline the minimum penumbra mean is obtained. For treatment modalities in clinical application, the optimized leaf ends are in close agreement with actual shapes. Taken together, the method we propose can provide insight into the leaf end shape design of multileaf collimators.
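
As a small illustration of the parametric leaf-end representations named above, a Bézier curve can be evaluated with the de Casteljau algorithm (repeated linear interpolation of the control polygon). The control points here are hypothetical, not the paper's optimized shapes:

```python
import numpy as np

def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at parameter t by repeated linear
    interpolation of the control polygon (de Casteljau's algorithm)."""
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# Hypothetical leaf-end profile: a cubic Bezier running from the leaf's
# top edge to its bottom edge (units arbitrary).
ctrl = [(0.0, 4.0), (1.5, 4.0), (1.5, -4.0), (0.0, -4.0)]

curve = np.array([de_casteljau(ctrl, t) for t in np.linspace(0.0, 1.0, 101)])

# The curve interpolates its first and last control points
assert np.allclose(curve[0], ctrl[0]) and np.allclose(curve[-1], ctrl[-1])
```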

  8. CFD based draft tube hydraulic design optimization

    International Nuclear Information System (INIS)

    McNabb, J; Murry, N; Mullins, B F; Devals, C; Kyriacou, S A

    2014-01-01

    The draft tube design of a hydraulic turbine, particularly in low to medium head applications, plays an important role in determining the efficiency and power characteristics of the overall machine, since an important proportion of the available energy, leaving the runner in kinetic form, needs to be recovered by the draft tube into static head. For large units, these efficiency and power characteristics can equate to large sums of money when considering the anticipated selling price of the energy produced over the machine's life-cycle. This same draft tube design is also a key factor in determining the overall civil costs of the powerhouse, primarily in excavation and concreting, which can amount to similar orders of magnitude as the price of the energy produced. Therefore, there is a need to find the optimum compromise between these two conflicting requirements. In this paper, an elaborate approach for dealing with this optimization problem is described. First, the draft tube's detailed geometry is defined as a function of a comprehensive set of design parameters (about 20, a subset of which is allowed to vary during the optimization process), which are then used in a non-uniform rational B-spline (NURBS) based geometric modeller to fully define the wetted surface geometry. Since the performance of the draft tube is largely governed by 3D viscous effects, such as boundary layer separation from the walls and swirling flow characteristics, which in turn govern the portion of the available kinetic energy that will be converted into pressure, a full 3D meshing and Navier-Stokes analysis is performed for each design. What makes this even more challenging is the fact that the inlet velocity distribution to the draft tube is governed by the runner at each of the various operating conditions that are of interest for the exploitation of the powerhouse. In order to determine these inlet conditions, a combined steady-state runner and an initial draft tube analysis
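
The NURBS geometric modeller mentioned above builds on B-spline basis functions. A minimal Cox–de Boor evaluation (nonrational, uniform knots, purely illustrative) is sketched below; the partition-of-unity check exploits the fact that cubic basis functions sum to one on the interior of the knot span:

```python
import numpy as np

def bspline_basis(i, p, knots, x):
    """Cox-de Boor recursion for the i-th B-spline basis function of
    degree p over the given knot vector, evaluated at x."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = ((x - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, knots, x))
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, knots, x))
    return left + right

# Uniform knot vector 0..7 gives 4 cubic basis functions; on [3, 4] they
# form a partition of unity.
knots = np.arange(8.0)
x = 3.5
total = sum(bspline_basis(i, 3, knots, x) for i in range(4))
assert abs(total - 1.0) < 1e-12
```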

  10. SU-D-207A-01: Female Pelvic Synthetic CT Generation Based On Joint Shape and Intensity Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Liu, L; Jolly, S; Cao, Y; Vineberg, K; Fessler, J; Balter, J [University Michigan, Ann Arbor, MI (United States)

    2016-06-15

    Purpose: To develop a method for generating female pelvic synthetic CT (MRCT) images from a single MR scan and to evaluate its utility in radiotherapy. Methods: Under IRB approval, an imaging sequence (T1-VIBE-Dixon) was acquired for 10 patients. This sequence yields 3 useful image volumes of different contrast (“in-phase” T1-weighted, fat, and water). A previously published pelvic bone shape model was used to generate a rough bone mask for each patient. A modified fuzzy c-means classification was performed on the multispectral MR data, with a regularization term that utilizes the prior knowledge provided by the bone mask and addresses the intensity overlap between different tissue types. A weighted sum of classification probabilities with attenuation values yielded MRCT volumes. The mean absolute error (MAE) between MRCT and real CT in various regions was calculated following deformable alignment (Velocity). Intensity-modulated treatment plans based on the actual CT and the MRCT were made and compared. Results: The average/standard deviation of MAE across the 10 patients was 10.1/6.7 HU for muscle, 6.7/4.6 HU for fat, 136.9/53.5 HU for bony tissues under 850 HU (97% of total bone volume), 188.9/119.3 HU for bony tissues above 850 HU, and 17.3/13.3 HU for intrapelvic soft tissues. Calculated doses were comparable for plans generated on CT and calculated using MRCT densities or vice versa, with differences in PTV D99% (mean/σ) of (–0.1/0.2 Gy) and (0.3/0.2 Gy), and PTV D0.5cc of (–0.3/0.2 Gy) and (–0.4/1.7 Gy). OAR differences were similarly small for comparable structures, with differences in bowel V50Gy of (–0.3/0.2%) and (0.0/0.2%), femur V30Gy of (0.7/1.2%) and (0.2/1.2%), sacrum V20Gy of (0.0/0.1%) and (–0.1/1.1%), and mean pelvic V20Gy of (0.0/0.1%) and (0.6/1.8%). Conclusion: MRCT based on a single imaging sequence in the female pelvis is feasible, with acceptably small variations in attenuation estimates and calculated doses to the target and critical organs. Work
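
The "weighted sum of classification probabilities with attenuation values" step can be sketched as follows. The class attenuation values and membership probabilities below are invented for illustration; they are not the study's calibrated numbers:

```python
import numpy as np

# Hypothetical bulk attenuation values (HU) for three tissue classes
class_hu = np.array([-100.0, 40.0, 700.0])    # fat, soft tissue, bone

# Fuzzy c-means style membership probabilities per voxel (rows sum to 1)
probs = np.array([
    [0.9, 0.1, 0.0],    # mostly fat
    [0.1, 0.8, 0.1],    # mostly soft tissue
    [0.0, 0.2, 0.8],    # mostly bone
])

# Synthetic CT value per voxel: probability-weighted sum of class attenuations
mrct = probs @ class_hu
# e.g. the first voxel: 0.9*(-100) + 0.1*40 = -86 HU
```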

  11. Feasibility of combined operation and perioperative intensity-modulated brachytherapy of advanced/recurrent malignancies involving the skull base

    Energy Technology Data Exchange (ETDEWEB)

    Strege, R.J.; Eichmann, T.; Mehdorn, H.M. [University Hospital Schleswig-Holstein, Kiel (Germany). Dept. of Neurosurgery; Kovacs, G.; Niehoff, P. [University Hospital Schleswig-Holstein, Kiel (Germany). Interdisciplinary Brachytherapy Center; Maune, S. [University Hospital Schleswig-Holstein, Kiel (Germany). Dept. of Otolaryngology; Holland, D. [University Hospital Schleswig-Holstein, Kiel (Germany). Dept. of Ophthalmology

    2005-02-01

    Purpose: To assess the technical feasibility and toxicity of combined operation and perioperative intensity-modulated fractionated interstitial brachytherapy (IMBT) in advanced-stage malignancies involving the skull base, with the goal of preserving the patients' sight. Patients and Methods: This series consisted of 18 consecutive cases: ten patients with paranasal sinus carcinomas, five with sarcomas, two with primitive neuroectodermal tumors (PNETs), and one with parotid gland carcinoma. After subtotal surgical resection in most cases (R1-R2, carried out so that the patients' sight was preserved), two to twelve (mean, five) afterloading plastic tubes were placed into the tumor bed. IMBT was performed with an iridium-192 stepping source in pulsed-dose-rate/high-dose-rate (PDR/HDR) afterloading technique. The total IMBT dose, ranging from 10 to 30 Gy, was administered in a fractionated manner (3-5 Gy/day, 5 days/week). Results: Perioperative fractionated IMBT was performed in 15 of 18 patients and was well tolerated. Complications that partially prevented or delayed IMBT in some cases included cerebrospinal fluid leakage (twice), meningitis (twice), frontal brain syndrome (twice), afterloading tube displacement (twice), seizure (once), and general morbidity (once). No surgery- or radiation-induced injuries to the cranial nerves or eyes occurred. Median survival times were 33 months after diagnosis and 16 months after combined operation and IMBT. Conclusion: Perioperative fractionated IMBT after extensive but vision-preserving tumor resection seems to be a safe and well-tolerated treatment of advanced/recurrent malignancies involving the skull base. These preliminary results suggest that combined operation and perioperative fractionated IMBT is a palliative therapeutic option in the management of fatal malignancies involving the base of the skull, a strategy which leaves the patients' visual acuity intact. (orig.)

  12. Improving the output voltage waveform of an intense electron-beam accelerator based on helical type Blumlein pulse forming line

    Directory of Open Access Journals (Sweden)

    Xin-Bing Cheng

    2010-07-01

    The Blumlein pulse forming line (BPFL), consisting of an inner coaxial pulse forming line (PFL) and an outer coaxial PFL, is widely used in the field of pulsed power, especially for intense electron-beam accelerators (IEBA). The output voltage waveform determines the quality and characteristics of the output beam current of the IEBA. Compared with a conventional BPFL, an IEBA based on a helical-type BPFL can increase the duration of the output voltage within the same geometrical volume. However, for the helical-type BPFL, the voltage waveform on a matched load may be distorted, which degrades the electron-beam quality. In this paper, an IEBA based on a helical-type BPFL is studied theoretically. Based on the telegrapher equations of the BPFL, a formula for the output voltage of the IEBA is obtained when the transition section between the middle cylinder of the BPFL and the load is taken into account. From the theoretical analysis, it is found that the wave impedance and transit time of the transition section considerably influence the main pulse voltage waveform at the load: a step is formed in front of the main pulse, and a sharp spike is also formed at its end. In order to obtain a well-shaped square waveform at the load and to improve the electron-beam quality of such an accelerator, the wave impedance of the transition section should equal that of the inner PFL of the helical-type BPFL, and the transit time of the transition section should be designed to be as short as possible. Experiments performed on an IEBA with a helical-type BPFL show reasonable agreement with the theoretical analysis.
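
The matching condition above (transition-section impedance equal to the inner-PFL impedance) can be illustrated with the standard voltage reflection coefficient at a transmission-line interface; the impedance values below are hypothetical, not the accelerator's actual parameters:

```python
def reflection_coefficient(z_load, z_line):
    """Voltage reflection coefficient at a line/load interface:
    gamma = (Z_load - Z_line) / (Z_load + Z_line)."""
    return (z_load - z_line) / (z_load + z_line)

# Matched transition (hypothetical 20-ohm inner PFL into 20-ohm transition):
# no reflection, so no step or spike distorts the main pulse.
assert reflection_coefficient(20.0, 20.0) == 0.0

# A mismatched transition (e.g. 30 ohm into a 20-ohm line) reflects part of
# the wave, producing the step/spike distortion described in the abstract.
gamma = reflection_coefficient(30.0, 20.0)    # 0.2, i.e. 20% of the voltage
```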

  13. IMPLEMENTATION AND EVALUATION OF A MOBILE MAPPING SYSTEM BASED ON INTEGRATED RANGE AND INTENSITY IMAGES FOR TRAFFIC SIGNS LOCALIZATION

    Directory of Open Access Journals (Sweden)

    M. Shahbazi

    2012-07-01

    Recent advances in positioning techniques have made it possible to develop Mobile Mapping Systems (MMS) for the detection and 3D localization of various objects from a moving platform. At the same time, automatic traffic sign recognition from an equipped mobile platform has recently become a challenging issue for both intelligent transportation and municipal database collection. However, several problems are inherent in all recognition methods that rely completely on passive chromatic or grayscale images. This paper presents the implementation and evaluation of an operational MMS. Distinct from others, the developed MMS comprises a range camera based on Photonic Mixer Device (PMD) technology and a standard 2D digital camera. The system benefits from algorithms that detect, recognize, and localize traffic signs by fusing shape, color, and object information from both range and intensity images. In the calibration stage, a self-calibration method based on integrated bundle adjustment, via a joint setup with the digital camera, is applied for PMD camera calibration. As a result, improvements of 83% in the RMS of the range error and 72% in the RMS of the coordinate residuals for the PMD camera, over those achieved with basic calibration, are realized in independent accuracy assessments. Furthermore, conventional photogrammetric techniques based on controlled network adjustment are utilized for platform calibration. Likewise, the well-known Extended Kalman Filter (EKF) is applied to integrate the navigation sensors, namely GPS and INS. The overall acquisition system, along with the proposed techniques, achieves 90% true-positive recognition and an average 3D positioning accuracy of 12 centimetres.
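
The GPS/INS integration via Kalman filtering can be sketched with a minimal linear filter; the EKF used in such systems follows the same predict/update cycle with Jacobians of the nonlinear motion and measurement models. The state model, noise levels, and measurements below are synthetic:

```python
import numpy as np

# Minimal 1-D constant-velocity Kalman filter (state: position, velocity).
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition
H = np.array([[1.0, 0.0]])               # we measure position only (GPS fix)
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.5]])                    # measurement noise covariance

x = np.array([0.0, 0.0])                 # initial state estimate
P = np.eye(2)                            # initial state covariance

for z in [1.0, 2.1, 2.9, 4.2]:           # synthetic position fixes
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

# The filter tracks the position and infers a positive velocity of ~1/step
assert abs(x[0] - 4.2) < 1.0 and x[1] > 0.5
```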

  14. Sound intensity

    DEFF Research Database (Denmark)

    Crocker, Malcolm J.; Jacobsen, Finn

    1998-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement, from the fundamental theoretical background to practical applications of the measurement technique.

  15. Sound Intensity

    DEFF Research Database (Denmark)

    Crocker, M.J.; Jacobsen, Finn

    1997-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement, from the fundamental theoretical background to practical applications of the measurement technique.

  16. Theoretical and practical model for implementing intensity modulated radiotherapy (IMRT) based on openness in head and neck tumors

    International Nuclear Information System (INIS)

    Napoles Morales, Misleidy; Yanes Lopez, Yaima; Ascension, Yudith; Alfonso La Guardia, Rodolfo; Calderon, Carlos

    2009-01-01

    Certain requirements have been internationally recommended for the transition from three-dimensional conformal radiotherapy (3D-CRT) to intensity modulated radiation therapy (IMRT). These requirements concern the physical, dosimetric, and treatment-quality aspects of clinical practice. Prior to the implementation of IMRT, preclinical exercises were carried out in which treatment planning techniques were applied to real patient images, validating the rationale for the transition from both the dosimetric and radiobiological points of view. The comparison was based on a group of patients eligible for IMRT who were actually treated with 3D-CRT. IMRT plans were designed for the same patients, virtually simulating IMRT treatment. The prescribed dose and fractionation were the same in both techniques, so that the radiobiology could be compared. The results show the rationale for IMRT in terms of reducing complications and the possibility of dose escalation in the PTV. Dose-volume histograms (DVH) obtained from the dosimetric calculations were used for the radiobiological evaluation of the treatment plans; the software 'Albireo Target', version 4.0.1.2008, was used to calculate the equivalent uniform dose (EUD) for the tumor and organs at risk (OAR), the tumor control probability (TCP), and the normal tissue complication probability (NTCP). The results obtained with the IMRT plans were more favorable than with 3D-CRT, especially in terms of EUD for organs at risk and NTCP. These results provide the definitive basis for the implementation of IMRT in our environment. (Author)
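
The EUD mentioned above is commonly computed as a generalized (power) mean of the dose distribution, EUD = (Σᵢ vᵢ dᵢᵃ)^(1/a). A small sketch follows; the DVH bins and the parameter a are illustrative, and this is not necessarily the exact model implemented in 'Albireo Target':

```python
import numpy as np

def eud(doses, volumes, a):
    """Generalized equivalent uniform dose: EUD = (sum v_i * d_i^a)^(1/a),
    where v_i are fractional volumes normalized to sum to 1."""
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()
    return float(np.sum(v * np.asarray(doses, dtype=float) ** a) ** (1.0 / a))

# Illustrative DVH bins (doses in Gy, fractional volumes)
doses = [60.0, 65.0, 70.0]
vols = [0.2, 0.3, 0.5]

# For a uniform dose, EUD equals that dose regardless of a
assert abs(eud([66.0, 66.0, 66.0], [1, 1, 1], -10) - 66.0) < 1e-9

# For tumors a is strongly negative, so cold spots dominate:
# the EUD falls below the mean dose whenever the dose is non-uniform.
assert eud(doses, vols, -10) < float(np.dot(vols, doses))
```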

  17. Barriers to implementing evidence-based practice in a private intensive care unit in the Eastern Cape

    Directory of Open Access Journals (Sweden)

    Portia Janine Jordan

    2016-11-01

    Background. Evidence-based practices (EBPs) have been promoted to enhance the delivery of patient care, reduce cost, increase patient and family satisfaction, and contribute to professional development. Individual and organisational barriers can hamper the implementation of EBP, which can be detrimental to healthcare delivery. Objective. To determine the individual and organisational barriers to EBP implementation among nurses in a private intensive care unit (ICU). Methods. A quantitative research design was used to collect data from nurses in a private ICU in the Eastern Cape Province, South Africa. A structured questionnaire (Cronbach’s alpha: 0.72) was administered to 70 respondents, with a response rate of 93%. Results. Barriers at the individual level were identified, including lack of familiarity with EBP, individual perceptions that underpin clinical decision-making, lack of access to information required for EBP, inadequate sources for accessing evidence, inability to synthesise the available literature, and resistance to change. Barriers related to organisational support, change, and operations were also identified. Conclusion. Although the findings were similar to those of other studies, this study showed that nurses younger than 40 years of age were more familiar with the concepts of EBP. Physicians were perceived as not being very supportive of EBP implementation. In order to enhance healthcare delivery in ICUs, nurse managers need to take cognisance of the individual and organisational barriers that might hamper the implementation of EBP.

  18. Introducing the Comprehensive Unit-based Safety Program for mechanically ventilated patients in Saudi Arabian Intensive Care Units

    Directory of Open Access Journals (Sweden)

    Raymond M Khan

    2017-01-01

    Over the past decade, there have been major improvements in the care of mechanically ventilated patients (MVPs). Earlier initiatives used the concept of ventilator care bundles (sets of interventions), with a primary focus on reducing ventilator-associated pneumonia. However, recent evidence has led to a more comprehensive approach: the ABCDE bundle (Awakening and Breathing trial Coordination, Delirium management and Early mobilization). The Comprehensive Unit-based Safety Program (CUSP) approach was developed by patient safety researchers at the Johns Hopkins Hospital and is supported by the Agency for Healthcare Research and Quality to improve local safety cultures and to learn from defects using a validated structured framework. In August 2015, 17 intensive care units (ICUs; a total of 271 beds) in eight hospitals in the Kingdom of Saudi Arabia joined the CUSP for MVPs (CUSP 4 MVP) project, which was conducted in 235 ICUs in 169 US hospitals and led by the Johns Hopkins Armstrong Institute for Patient Safety and Quality. The CUSP 4 MVP project will set the stage for cooperation between multiple hospitals and thus strives to create a countrywide plan for the management of all MVPs in Saudi Arabia.

  19. Differences in net global warming potential and greenhouse gas intensity between major rice-based cropping systems in China.

    Science.gov (United States)

    Xiong, Zhengqin; Liu, Yinglie; Wu, Zhen; Zhang, Xiaolin; Liu, Pingli; Huang, Taiqing

    2015-12-02

    Double rice (DR) and upland crop-single rice (UR) systems are the major rice-based cropping systems in China, yet differences in net global warming potential (NGWP) and greenhouse gas intensity (GHGI) between the two systems are poorly documented. Accordingly, a 3-year field experiment was conducted to simultaneously measure methane (CH4) and nitrous oxide (N2O) emissions and changes in soil organic carbon (SOC) in oil rape-rice-rice and wheat-rice (representing DR and UR, respectively) systems with straw incorporation (0, 3 and 6 t/ha) during the rice-growing seasons. Compared with the UR system, the annual CH4, N2O, grain yield and NGWP were significantly increased in the DR system, though little effect on SOC sequestration or GHGI was observed without straw incorporation. Straw incorporation increased CH4 emission and SOC sequestration but had no significant effect on N2O emission in both systems. Averaged over the three study years, straw incorporation had no significant effect on NGWP and GHGI in the UR system, whereas these parameters were greatly increased in the DR system, i.e., by 108% (3 t/ha) and 180% (6 t/ha) for NGWP and 103% (3 t/ha) and 168% (6 t/ha) for GHGI.
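
The NGWP and GHGI compared above are typically assembled from the measured fluxes: CH4 and N2O emissions are converted to CO2 equivalents with 100-year global warming potential factors, SOC sequestration is credited as a CO2 offset, and GHGI divides NGWP by grain yield. A minimal sketch, with all flux and yield numbers invented for illustration (they are not the study's measurements):

```python
# 100-year GWP factors (IPCC AR4 values; the study's exact factors may differ)
GWP_CH4, GWP_N2O = 25.0, 298.0

def ngwp(ch4_kg_ha, n2o_kg_ha, d_soc_kg_c_ha):
    """Net GWP (kg CO2-eq/ha): CH4 + N2O emissions minus the CO2
    equivalent of soil organic carbon sequestered (C -> CO2 is 44/12)."""
    co2_from_soc = d_soc_kg_c_ha * 44.0 / 12.0
    return GWP_CH4 * ch4_kg_ha + GWP_N2O * n2o_kg_ha - co2_from_soc

def ghgi(ngwp_kg_co2eq_ha, yield_t_ha):
    """Greenhouse gas intensity: kg CO2-eq per tonne of grain."""
    return ngwp_kg_co2eq_ha / yield_t_ha

# Illustrative annual fluxes for one cropping system
total = ngwp(ch4_kg_ha=200.0, n2o_kg_ha=2.0, d_soc_kg_c_ha=300.0)
# 25*200 + 298*2 - 300*44/12 = 5000 + 596 - 1100 = 4496 kg CO2-eq/ha
assert abs(total - 4496.0) < 1e-9
```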

  1. An intensive primary-literature-based teaching program directly benefits undergraduate science majors and facilitates their transition to doctoral programs.

    Science.gov (United States)

    Kozeracki, Carol A; Carey, Michael F; Colicelli, John; Levis-Fitzgerald, Marc; Grossel, Martha

    2006-01-01

    UCLA's Howard Hughes Undergraduate Research Program (HHURP), a collaboration between the College of Letters and Science and the School of Medicine, trains a group of highly motivated undergraduates through mentored research enhanced by a rigorous seminar course. The course is centered on the presentation and critical analysis of scientific journal articles as well as the students' own research. This article describes the components and objectives of the HHURP and discusses the results of three program assessments: annual student evaluations, interviews with UCLA professors who served as research advisors for HHURP scholars, and a survey of program alumni. Students indicate that the program increased their ability to read and present primary scientific research and to present their own research and enhanced their research experience at UCLA. After graduating, they find their involvement in the HHURP helped them in securing admission to the graduate program of their choice and provided them with an advantage over their peers in the interactive seminars that are the foundation of graduate education. On the basis of the assessment of the program from 1998-1999 to 2004-2005, we conclude that an intensive literature-based training program increases student confidence and scientific literacy during their undergraduate years and facilitates their transition to postgraduate study.

  2. Efficiency of histidine rich protein II-based rapid diagnostic tests for monitoring malaria transmission intensities in an endemic area

    Science.gov (United States)

    Modupe, Dokunmu Titilope; Iyabo, Olasehinde Grace; Oladoke, Oladejo David; Oladeji, Olanrewaju; Abisola, Akinbobola; Ufuoma, Adjekukor Cynthia; Faith, Yakubu Omolara; Humphrey, Adebayo Abiodun

    2018-04-01

    In recent years there has been a global decrease in the prevalence of malaria due to scaling up of control measures, hence global control efforts now target elimination and eradication of the disease. However, a major problem associated with elimination is the asymptomatic reservoir of infection, especially in endemic areas. This study aims to determine the efficiency of histidine rich protein II (HRP-2) based rapid diagnostic tests (RDT) for monitoring transmission intensities in an endemic community in Nigeria during the pre-elimination stage. Plasmodium falciparum asymptomatic malaria infection in healthy individuals and symptomatic cases were detected using HRP-2. RDT negative tests were re-checked by microscopy and by primer specific PCR amplification of merozoite surface protein 2 (msp-2) for asexual parasites and Pfs25 gene for gametocytes in selected samples to detect low level parasitemia undetectable by microscopy. The mean age of the study population (n=280) was 6.12 years [95% CI 5.16 - 7.08, range 0.5 - 55]; parasite prevalence was 44.6% and 36.3% by microscopy and RDT respectively (p = 0.056). The parasite prevalence of 61.5% in children aged >2 - 10 years was significantly higher than the 3.7% rate in adults >18 years (p malaria in endemic areas.

  3. Development of a method of robust rain gauge network optimization based on intensity-duration-frequency results

    Directory of Open Access Journals (Sweden)

    A. Chebbi

    2013-10-01

    Full Text Available Based on rainfall intensity-duration-frequency (IDF) curves, fitted at several locations of a given area, a robust optimization approach is proposed to identify the best locations to install new rain gauges. The advantage of robust optimization is that the resulting design solutions yield networks which behave acceptably under hydrological variability. Robust optimization can overcome the problem of selecting representative rainfall events when building the optimization process. This paper reports an original approach based on Montana IDF model parameters. The latter are assumed to be geostatistical variables, and their spatial interdependence is taken into account through the adoption of cross-variograms in the kriging process. The problem of optimally locating a fixed number of new monitoring stations based on an existing rain gauge network is addressed. The objective function is based on the mean spatial kriging variance and the rainfall variogram structure, using a variance-reduction method. Hydrological variability was taken into account by considering and implementing several return periods to define the robust objective function. Variance minimization is performed using a simulated annealing algorithm. In addition, knowledge of the time horizon is needed for the computation of the robust objective function. A short- and a long-term horizon were studied, and optimal networks were identified for each. The method developed is applied to northern Tunisia (area = 21 000 km²). Data inputs for the variogram analysis were IDF curves provided by the hydrological bureau and available for 14 tipping-bucket-type rain gauges. The recording period was from 1962 to 2001, depending on the station. The study concerns a hypothetical network augmentation based on the network configuration in 1973, which is a very significant year in Tunisia because there was an exceptional regional flood event in March 1973. This network consisted of 13 stations and did not meet World
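
    The record above pairs a kriging-variance objective with simulated annealing to pick new gauge sites. A minimal stand-in sketch follows; the `spread` objective, grid, and all parameter values are illustrative placeholders for the paper's robust mean-kriging-variance criterion, not the authors' code.

    ```python
    import math
    import random

    def simulated_annealing(candidates, k, objective, t0=1.0, cooling=0.995,
                            steps=5000, seed=0):
        """Choose k station sites from `candidates` minimizing `objective`."""
        rng = random.Random(seed)
        current = rng.sample(candidates, k)
        best, best_cost = list(current), objective(current)
        cost, t = best_cost, t0
        for _ in range(steps):
            # propose: swap one selected site for an unselected candidate
            trial = list(current)
            trial[rng.randrange(k)] = rng.choice(
                [c for c in candidates if c not in current])
            trial_cost = objective(trial)
            # accept improvements always, worse moves with Boltzmann probability
            if trial_cost < cost or rng.random() < math.exp((cost - trial_cost) / t):
                current, cost = trial, trial_cost
                if cost < best_cost:
                    best, best_cost = list(current), cost
            t *= cooling  # geometric cooling schedule
        return best, best_cost

    # toy objective: spread stations apart (negative minimum pairwise distance)
    def spread(sites):
        return -min(math.dist(a, b) for i, a in enumerate(sites) for b in sites[i + 1:])

    grid = [(x, y) for x in range(5) for y in range(5)]
    sites, cost = simulated_annealing(grid, 3, spread)
    ```

    In the paper the objective would instead be the robust (multi-return-period) mean spatial kriging variance evaluated on the augmented network.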

  4. Satellite and ground-based remote sensing of aerosols during intense haze event of October 2013 over lahore, Pakistan

    Science.gov (United States)

    Tariq, Salman; Zia, ul-Haq; Ali, Muhammad

    2016-02-01

    Due to increase in population and economic development, the mega-cities are facing increased haze events which are causing important effects on the regional environment and climate. In order to understand these effects, we require an in-depth knowledge of optical and physical properties of aerosols in intense haze conditions. In this paper an effort has been made to analyze the microphysical and optical properties of aerosols during intense haze event over mega-city of Lahore by using remote sensing data obtained from satellites (Terra/Aqua Moderate-resolution Imaging Spectroradiometer (MODIS) and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO)) and ground based instrument (AErosol RObotic NETwork (AERONET)) during 6-14 October 2013. The instantaneous highest value of Aerosol Optical Depth (AOD) is observed to be 3.70 on 9 October 2013 followed by 3.12 on 8 October 2013. The primary cause of such high values is large scale crop residue burning and urban-industrial emissions in the study region. AERONET observations show daily mean AOD of 2.36 which is eight times higher than the observed values on normal day. The observed fine mode volume concentration is more than 1.5 times greater than the coarse mode volume concentration on the high aerosol burden day. We also find high values (~0.95) of Single Scattering Albedo (SSA) on 9 October 2013. Scatter-plot between AOD (500 nm) and Angstrom exponent (440-870 nm) reveals that biomass burning/urban-industrial aerosols are the dominant aerosol type on the heavy aerosol loading day over Lahore. MODIS fire activity image suggests that the areas in the southeast of Lahore across the border with India are dominated by biomass burning activities. A Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model backward trajectory showed that the winds at 1000 m above the ground are responsible for transport from southeast region of biomass burning to Lahore. CALIPSO derived sub-types of

  5. Validation of Cut-Points for Evaluating the Intensity of Physical Activity with Accelerometry-Based Mean Amplitude Deviation (MAD).

    Directory of Open Access Journals (Sweden)

    Henri Vähä-Ypyä

    Full Text Available Our recent study of three accelerometer brands in various ambulatory activities showed that the mean amplitude deviation (MAD) of the resultant acceleration signal performed best in separating different intensity levels and provided excellent agreement between the three devices. The objective of this study was to derive a regression model that estimates oxygen consumption (VO2) from MAD values and to validate the MAD-based cut-points for light, moderate and vigorous locomotion against VO2 within a wide range of speeds. 29 participants performed a pace-conducted non-stop test on a 200 m long indoor track. The initial speed was 0.6 m/s and it was increased by 0.4 m/s every 2.5 minutes until volitional exhaustion. The participants could freely decide whether they preferred to walk or run. During the test they carried a hip-mounted tri-axial accelerometer and a mobile metabolic analyzer. The MAD was calculated from the raw acceleration data and compared to directly measured incident VO2. The cut-point between light and moderate activity was set to 3.0 metabolic equivalents (MET; 1 MET = 3.5 ml · kg-1 · min-1) and that between moderate and vigorous activity to 6.0 MET, as per standard use. The MAD and VO2 showed a very strong association. Within individuals, r values ranged from 0.927 to 0.991, with a mean r = 0.969. The optimal MAD cut-point for 3.0 MET was 91 mg (milligravity) and 414 mg for 6.0 MET. The present study showed that the MAD is a valid method in terms of VO2 within a wide range of ambulatory activities from slow walking to fast running. Being a device-independent trait, the MAD facilitates directly comparable, accurate results on the intensity of physical activity with all accelerometers providing tri-axial raw data.
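
    The MAD metric itself is simple to compute from raw tri-axial samples: the mean absolute deviation of the resultant acceleration around its epoch mean. A sketch using the cut-points reported above (91 mg for 3.0 MET, 414 mg for 6.0 MET); the epoch data and function names are illustrative.

    ```python
    import math

    def mad(ax, ay, az):
        """Mean amplitude deviation of one epoch: mean |r_i - r_mean|,
        where r_i is the vector magnitude of sample i (in g units)."""
        r = [math.sqrt(x * x + y * y + z * z) for x, y, z in zip(ax, ay, az)]
        r_mean = sum(r) / len(r)
        return sum(abs(v - r_mean) for v in r) / len(r)

    def intensity_class(mad_mg):
        """Classify with the cut-points reported in the record above."""
        if mad_mg < 91:
            return "light"
        if mad_mg < 414:
            return "moderate"
        return "vigorous"

    # toy 4-sample epoch in g units; 1 g = 1000 mg
    ax, ay, az = [0.0, 0.2, -0.2, 0.1], [1.0, 1.1, 0.9, 1.0], [0.0, 0.05, -0.05, 0.0]
    print(intensity_class(mad(ax, ay, az) * 1000))  # → light
    ```

    A real implementation would compute this over fixed epochs (e.g. 6 s windows) of the raw signal before applying the cut-points.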

  6. Topology optimization based on spline-based meshfree method using topological derivatives

    International Nuclear Information System (INIS)

    Hur, Junyoung; Youn, Sung-Kie; Kang, Pilseong

    2017-01-01

    The spline-based meshfree method (SBMFM) originates from isogeometric analysis (IGA), which integrates design and analysis through non-uniform rational B-spline (NURBS) basis functions. SBMFM utilizes the trimming technique of CAD systems by representing the domain using NURBS curves. In this work, an explicit boundary topology optimization using SBMFM is presented with an effective boundary update scheme. There have been similar works on this subject; however, unlike previous works, where a semi-analytic method for calculating design sensitivities is employed, the design update here is done using topological derivatives. In this research, the topological derivative is used to derive the sensitivity of boundary curves and to create new holes. Based on the values of the topological derivatives, the shape of the boundary curves is updated. Also, topological change is achieved by insertion and removal of inner holes. The presented approach is validated through several compliance minimization problems.

  7. Topology optimization based on spline-based meshfree method using topological derivatives

    Energy Technology Data Exchange (ETDEWEB)

    Hur, Junyoung; Youn, Sung-Kie [KAIST, Daejeon (Korea, Republic of); Kang, Pilseong [Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of)

    2017-05-15

    The spline-based meshfree method (SBMFM) originates from isogeometric analysis (IGA), which integrates design and analysis through non-uniform rational B-spline (NURBS) basis functions. SBMFM utilizes the trimming technique of CAD systems by representing the domain using NURBS curves. In this work, an explicit boundary topology optimization using SBMFM is presented with an effective boundary update scheme. There have been similar works on this subject; however, unlike previous works, where a semi-analytic method for calculating design sensitivities is employed, the design update here is done using topological derivatives. In this research, the topological derivative is used to derive the sensitivity of boundary curves and to create new holes. Based on the values of the topological derivatives, the shape of the boundary curves is updated. Also, topological change is achieved by insertion and removal of inner holes. The presented approach is validated through several compliance minimization problems.

  8. Image-guided, intensity-modulated radiation therapy (IG-IMRT) for skull base chordoma and chondrosarcoma: preliminary outcomes.

    Science.gov (United States)

    Sahgal, Arjun; Chan, Michael W; Atenafu, Eshetu G; Masson-Cote, Laurence; Bahl, Gaurav; Yu, Eugene; Millar, Barbara-Ann; Chung, Caroline; Catton, Charles; O'Sullivan, Brian; Irish, Jonathan C; Gilbert, Ralph; Zadeh, Gelareh; Cusimano, Michael; Gentili, Fred; Laperriere, Normand J

    2015-06-01

    We report our preliminary outcomes following high-dose image-guided intensity modulated radiotherapy (IG-IMRT) for skull base chordoma and chondrosarcoma. Forty-two consecutive IG-IMRT patients, with either skull base chordoma (n = 24) or chondrosarcoma (n = 18) treated between August 2001 and December 2012 were reviewed. The median follow-up was 36 months (range, 3-90 mo) in the chordoma cohort, and 67 months (range, 15-125) in the chondrosarcoma cohort. Initial surgery included biopsy (7% of patients), subtotal resection (57% of patients), and gross total resection (36% of patients). The median IG-IMRT total doses in the chondrosarcoma and chordoma cohorts were 70 Gy and 76 Gy, respectively, delivered with 2 Gy/fraction. For the chordoma and chondrosarcoma cohorts, the 5-year overall survival and local control rates were 85.6% and 65.3%, and 87.8% and 88.1%, respectively. In total, 10 patients progressed locally: 8 were chordoma patients and 2 chondrosarcoma patients. Both chondrosarcoma failures were in higher-grade tumors (grades 2 and 3). None of the 8 patients with grade 1 chondrosarcoma failed, with a median follow-up of 77 months (range, 34-125). There were 8 radiation-induced late effects-the most significant was a radiation-induced secondary malignancy occurring 6.7 years following IG-IMRT. Gross total resection and age were predictors of local control in the chordoma and chondrosarcoma patients, respectively. We report favorable survival, local control and adverse event rates following high dose IG-IMRT. Further follow-up is needed to confirm long-term efficacy. © The Author(s) 2014. Published by Oxford University Press on behalf of the Society for Neuro-Oncology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. Consumer-Based Physical Activity Monitor as a Practical Way to Measure Walking Intensity During Inpatient Stroke Rehabilitation.

    Science.gov (United States)

    Klassen, Tara D; Semrau, Jennifer A; Dukelow, Sean P; Bayley, Mark T; Hill, Michael D; Eng, Janice J

    2017-09-01

    Identifying practical ways to accurately measure exercise intensity and dose in clinical environments is essential to advancing stroke rehabilitation. This is especially relevant in monitoring walking activity during inpatient rehabilitation where recovery is greatest. This study evaluated the accuracy of a readily available consumer-based physical activity monitor during daily inpatient stroke rehabilitation physical therapy sessions. Twenty-one individuals admitted to inpatient rehabilitation were monitored for a total of 471 one-hour physical therapy sessions which consisted of walking and nonwalking therapeutic activities. Participants wore a consumer-based physical activity monitor (Fitbit One) and the gold standard for assessing step count (StepWatch Activity Monitor) during physical therapy sessions. Linear mixed modeling was used to assess the relationship of the step count of the Fitbit to the StepWatch Activity Monitor. Device accuracy is reported as the percent error of the Fitbit compared with the StepWatch Activity Monitor. A strong relationship (slope=0.99; 95% confidence interval, 0.97-1.01) was found between the number of steps captured by the Fitbit One and the StepWatch Activity Monitor. The Fitbit One had a mean error of 10.9% (5.3) for participants with walking velocities 0.8 m/s. This study provides preliminary evidence that the Fitbit One, when positioned on the nonparetic ankle, can accurately measure walking steps early after stroke during inpatient rehabilitation physical therapy sessions. URL: https://www.clinicaltrials.gov. Unique identifier: NCT01915368. © 2017 American Heart Association, Inc.

  10. Full Monte Carlo-Based Biologic Treatment Plan Optimization System for Intensity Modulated Carbon Ion Therapy on Graphics Processing Unit.

    Science.gov (United States)

    Qin, Nan; Shen, Chenyang; Tsai, Min-Yu; Pinto, Marco; Tian, Zhen; Dedes, Georgios; Pompos, Arnold; Jiang, Steve B; Parodi, Katia; Jia, Xun

    2018-01-01

    One of the major benefits of carbon ion therapy is enhanced biological effectiveness at the Bragg peak region. For intensity modulated carbon ion therapy (IMCT), it is desirable to use Monte Carlo (MC) methods to compute the properties of each pencil beam spot for treatment planning, because of their accuracy in modeling physics processes and estimating biological effects. We previously developed goCMC, a graphics processing unit (GPU)-oriented MC engine for carbon ion therapy. The purpose of the present study was to build a biological treatment plan optimization system using goCMC. The repair-misrepair-fixation model was implemented to compute the spatial distribution of linear-quadratic model parameters for each spot. A treatment plan optimization module was developed to minimize the difference between the prescribed and actual biological effect. We used a gradient-based algorithm to solve the optimization problem. The system was embedded in the Varian Eclipse treatment planning system under a client-server architecture to achieve a user-friendly planning environment. We tested the system with a 1-dimensional homogeneous water case and three 3-dimensional patient cases. Our system generated treatment plans with biological spread-out Bragg peaks covering the targeted regions and sparing critical structures. Using 4 NVidia GTX 1080 GPUs, the total computation time, including spot simulation, optimization, and final dose calculation, was 0.6 hour for the prostate case (8282 spots), 0.2 hour for the pancreas case (3795 spots), and 0.3 hour for the brain case (6724 spots). The computation time was dominated by MC spot simulation. We built a biological treatment plan optimization system for IMCT that performs simulations using a fast MC engine, goCMC. To the best of our knowledge, this is the first time that full MC-based IMCT inverse planning has been achieved in a clinically viable time frame. Copyright © 2017 Elsevier Inc. All rights reserved.
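
    The gradient-based step described above (minimize the mismatch between prescribed and delivered effect over non-negative spot weights) can be sketched with a linear stand-in for the effect model; the real system uses the nonlinear repair-misrepair-fixation model, and the matrix, learning rate, and function names here are illustrative.

    ```python
    def optimize_weights(effect, prescribed, w0, lr=0.01, iters=2000):
        """Projected gradient descent on sum_v (E[v]·w - prescribed[v])^2
        with the non-negativity constraint w_s >= 0.
        `effect` is a voxels-by-spots matrix of effect per unit weight."""
        n_vox, n_spot = len(effect), len(w0)
        w = list(w0)
        for _ in range(iters):
            # residual per voxel for current weights
            res = [sum(effect[v][s] * w[s] for s in range(n_spot)) - prescribed[v]
                   for v in range(n_vox)]
            # full gradient, then project onto w >= 0
            grad = [2.0 * sum(res[v] * effect[v][s] for v in range(n_vox))
                    for s in range(n_spot)]
            w = [max(0.0, w[s] - lr * grad[s]) for s in range(n_spot)]
        return w

    # toy 2-voxel, 2-spot system with a diagonal effect matrix
    E = [[1.0, 0.0], [0.0, 1.0]]
    w = optimize_weights(E, [2.0, 3.0], [0.0, 0.0])  # → w ≈ [2.0, 3.0]
    ```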

  11. Comparative analysis of volumetric-modulated arc therapy and intensity-modulated radiotherapy for base of tongue cancer

    International Nuclear Information System (INIS)

    Nithya, L.; Arulraj, Kumar; Rathinamuthu, Sasikumar; Pandey, Manish Bhushan; Nambi Raj, N. Arunai

    2014-01-01

    The aim of this study was to compare the various dosimetric parameters of dynamic multileaf collimator (MLC) intensity modulated radiation therapy (IMRT) plans with volumetric modulated arc therapy (VMAT) plans for base of tongue cases. All plans were done in Monaco planning system for Elekta synergy linear accelerator with 80 MLC. IMRT plans were planned with nine stationary beams, and VMAT plans were done for 360° arc with single arc or dual arc. The dose to the planning target volumes (PTV) for 70, 63, and 56 Gy was compared. The dose to 95, 98, and 50% volume of PTV were analyzed. The homogeneity index (HI) and the conformity index (CI) of the PTV 70 were also analyzed. IMRT and VMAT plan showed similar dose coverage, HI, and CI. Maximum dose and dose to 1-cc volume of spinal cord, planning risk volume (PRV) cord, and brain stem were compared. IMRT plan and VMAT plan showed similar results except for the 1 cc of PRV cord that received slightly higher dose in VMAT plan. Mean dose and dose to 50% volume of right and left parotid glands were analyzed. VMAT plan gave better sparing of parotid glands than IMRT. In normal tissue dose analyses VMAT was better than IMRT. The number of monitor units (MU) required for delivering the good quality of the plan and the time required to deliver the plan for IMRT and VMAT were compared. The number of MUs for VMAT was higher than that of IMRT plans. However, the delivery time was reduced by a factor of two for VMAT compared with IMRT. VMAT plans yielded good quality of the plan compared with IMRT, resulting in reduced treatment time and improved efficiency for base of tongue cases. (author)

  12. Comparative analysis of volumetric-modulated arc therapy and intensity-modulated radiotherapy for base of tongue cancer

    Directory of Open Access Journals (Sweden)

    L Nithya

    2014-01-01

    Full Text Available The aim of this study was to compare the various dosimetric parameters of dynamic multileaf collimator (MLC) intensity modulated radiation therapy (IMRT) plans with volumetric modulated arc therapy (VMAT) plans for base of tongue cases. All plans were done in Monaco planning system for Elekta synergy linear accelerator with 80 MLC. IMRT plans were planned with nine stationary beams, and VMAT plans were done for 360° arc with single arc or dual arc. The dose to the planning target volumes (PTV) for 70, 63, and 56 Gy was compared. The dose to 95, 98, and 50% volume of PTV were analyzed. The homogeneity index (HI) and the conformity index (CI) of the PTV 70 were also analyzed. IMRT and VMAT plan showed similar dose coverage, HI, and CI. Maximum dose and dose to 1-cc volume of spinal cord, planning risk volume (PRV) cord, and brain stem were compared. IMRT plan and VMAT plan showed similar results except for the 1 cc of PRV cord that received slightly higher dose in VMAT plan. Mean dose and dose to 50% volume of right and left parotid glands were analyzed. VMAT plan gave better sparing of parotid glands than IMRT. In normal tissue dose analyses VMAT was better than IMRT. The number of monitor units (MU) required for delivering the good quality of the plan and the time required to deliver the plan for IMRT and VMAT were compared. The number of MUs for VMAT was higher than that of IMRT plans. However, the delivery time was reduced by a factor of two for VMAT compared with IMRT. VMAT plans yielded good quality of the plan compared with IMRT, resulting in reduced treatment time and improved efficiency for base of tongue cases.

  13. Standardised simulation-based emergency and intensive care nursing curriculum to improve nursing students' performance during simulated resuscitation: A quasi-experimental study.

    Science.gov (United States)

    Chen, Jie; Yang, Jian; Hu, Fen; Yu, Si-Hong; Yang, Bing-Xiang; Liu, Qian; Zhu, Xiao-Ping

    2018-03-14

    Simulation-based curriculum has been demonstrated as crucial to nursing education in the development of students' critical thinking and complex clinical skills during a resuscitation simulation. Few studies have comprehensively examined the effectiveness of a standardised simulation-based emergency and intensive care nursing curriculum on the performance of students in a resuscitation simulation. To evaluate the impact of a standardised simulation-based emergency and intensive care nursing curriculum on nursing students' response time in a resuscitation simulation. Two-group, non-randomised quasi-experimental design. A simulation centre in a Chinese University School of Nursing. Third-year nursing students (N = 39) in the Emergency and Intensive Care course were divided into a control group (CG, n = 20) and an experimental group (EG, n = 19). The experimental group participated in a standardised high-technology, simulation-based emergency and intensive care nursing curriculum. The standardised simulation-based curriculum for third-year nursing students consists of three modules: disaster response, emergency care, and intensive care, which include clinical priorities (e.g. triage), basic resuscitation skills, airway/breathing management, circulation management and team work with eighteen lecture hours, six skill-practice hours and twelve simulation hours. The control group took part in the traditional curriculum. This course included the same three modules with thirty-four lecture hours and two skill-practice hours (trauma). Perceived benefits included decreased median (interquartile ranges, IQR) seconds to start compressions [CG 32 (25-75) vs. EG 20 (18-38); p  0.05] and defibrillation [CG 222 (194-254) vs. EG 221 (214-248); p > 0.05] at the beginning of the course. A simulation-based emergency and intensive care nursing curriculum was created and well received by third-year nursing students and associated with decreased response time in a

  14. A hybrid algorithm for instant optimization of beam weights in anatomy-based intensity modulated radiotherapy: a performance evaluation study

    International Nuclear Information System (INIS)

    Vaitheeswaran, Ranganathan; Sathiya Narayanan, V.K.; Bhangle, Janhavi R.; Nirhali, Amit; Kumar, Namita; Basu, Sumit; Maiya, Vikram

    2011-01-01

    The study aims to introduce a hybrid optimization algorithm for anatomy-based intensity modulated radiotherapy (AB-IMRT). Our proposal is that by integrating an exact optimization algorithm with a heuristic optimization algorithm, the advantages of both the algorithms can be combined, which will lead to an efficient global optimizer solving the problem at a very fast rate. Our hybrid approach combines Gaussian elimination algorithm (exact optimizer) with fast simulated annealing algorithm (a heuristic global optimizer) for the optimization of beam weights in AB-IMRT. The algorithm has been implemented using MATLAB software. The optimization efficiency of the hybrid algorithm is clarified by (i) analysis of the numerical characteristics of the algorithm and (ii) analysis of the clinical capabilities of the algorithm. The numerical and clinical characteristics of the hybrid algorithm are compared with Gaussian elimination method (GEM) and fast simulated annealing (FSA). The numerical characteristics include convergence, consistency, number of iterations and overall optimization speed, which were analyzed for the respective cases of 8 patients. The clinical capabilities of the hybrid algorithm are demonstrated in cases of (a) prostate and (b) brain. The analyses reveal that (i) the convergence speed of the hybrid algorithm is approximately three times higher than that of FSA algorithm (ii) the convergence (percentage reduction in the cost function) in hybrid algorithm is about 20% improved as compared to that in GEM algorithm (iii) the hybrid algorithm is capable of producing relatively better treatment plans in terms of Conformity Index (CI) (∼ 2% - 5% improvement) and Homogeneity Index (HI) (∼ 4% - 10% improvement) as compared to GEM and FSA algorithms (iv) the sparing of organs at risk in hybrid algorithm-based plans is better than that in GEM-based plans and comparable to that in FSA-based plans; and (v) the beam weights resulting from the hybrid algorithm are
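
    The "exact optimizer" half of the hybrid scheme above is Gaussian elimination. A self-contained sketch of that step on a toy beam-weight system follows; the 2×2 matrix is a placeholder for the actual dose-influence system, and the function name is illustrative (the heuristic FSA half is not shown).

    ```python
    def gauss_solve(a, b):
        """Solve A x = b by Gaussian elimination with partial pivoting."""
        n = len(a)
        m = [row[:] + [bi] for row, bi in zip(a, b)]  # augmented matrix [A | b]
        for col in range(n):
            # partial pivoting: bring the largest remaining entry to the diagonal
            pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
            m[col], m[pivot] = m[pivot], m[col]
            for r in range(col + 1, n):
                f = m[r][col] / m[col][col]
                for c in range(col, n + 1):
                    m[r][c] -= f * m[col][c]
        # back substitution
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
        return x

    # toy 2-beam system: 2w1 + w2 = 5, w1 + 3w2 = 10
    w = gauss_solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])  # → [1.0, 3.0]
    ```

    In the hybrid algorithm this exact solve would be interleaved with fast simulated annealing moves so that the global search benefits from exact local solutions.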

  15. Demonstration of a high-intensity neutron source based on a liquid-lithium target for Accelerator based Boron Neutron Capture Therapy.

    Science.gov (United States)

    Halfon, S; Arenshtam, A; Kijel, D; Paul, M; Weissman, L; Berkovits, D; Eliyahu, I; Feinberg, G; Kreisel, A; Mardor, I; Shimel, G; Shor, A; Silverman, I; Tessler, M

    2015-12-01

    A free-surface liquid-lithium jet target is operating routinely at the Soreq Applied Research Accelerator Facility (SARAF), bombarded with a ~1.91 MeV, ~1.2 mA continuous-wave narrow proton beam. The experiments demonstrate the capability of the liquid-lithium target (LiLiT) to constitute an intense source of epithermal neutrons for accelerator-based Boron Neutron Capture Therapy (BNCT). The target dissipates extremely high ion-beam power densities (>3 kW/cm², >0.5 MW/cm³) for long periods of time, while maintaining stable conditions and localized residual activity. LiLiT generates ~3×10¹⁰ n/s, which is more than one order of magnitude larger than conventional ⁷Li(p,n)-based near-threshold neutron sources. A shield and moderator assembly for BNCT, with LiLiT irradiated with protons at 1.91 MeV, was designed based on Monte Carlo (MCNP) simulations of BNCT doses produced in a phantom. According to these simulations, a ~15 mA near-threshold proton current will deliver the therapeutic doses in a ~1 h treatment duration. According to our present results, such high-current beams can be dissipated in a liquid-lithium target, hence the target design is readily applicable for accelerator-based BNCT. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Accuracy improvement of the H-drive air-levitating wafer inspection stage based on error analysis and compensation

    Science.gov (United States)

    Zhang, Fan; Liu, Pinkuan

    2018-04-01

    In order to improve the inspection precision of the H-drive air-bearing stage for wafer inspection, in this paper the geometric error of the stage is analyzed and compensated. The relationship between the positioning errors and error sources are initially modeled, and seven error components are identified that are closely related to the inspection accuracy. The most effective factor that affects the geometric error is identified by error sensitivity analysis. Then, the Spearman rank correlation method is applied to find the correlation between different error components, aiming at guiding the accuracy design and error compensation of the stage. Finally, different compensation methods, including the three-error curve interpolation method, the polynomial interpolation method, the Chebyshev polynomial interpolation method, and the B-spline interpolation method, are employed within the full range of the stage, and their results are compared. Simulation and experiment show that the B-spline interpolation method based on the error model has better compensation results. In addition, the research result is valuable for promoting wafer inspection accuracy and will greatly benefit the semiconductor industry.
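
    Of the compensation schemes compared above, B-spline interpolation of the measured error table performed best. The core evaluation step can be sketched with De Boor's algorithm, the standard recurrence for evaluating a B-spline from its knot vector and control points; the knot/control values below are a toy uniform cubic example, not the stage's calibration data.

    ```python
    def de_boor(k, x, t, c, p):
        """Evaluate a degree-p B-spline with knot vector t and control
        points c at parameter x, where k is the knot span: t[k] <= x < t[k+1]."""
        d = [c[j + k - p] for j in range(p + 1)]
        for r in range(1, p + 1):
            for j in range(p, r - 1, -1):
                alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
                d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
        return d[p]

    # uniform cubic example: knots 0..9, six control points
    t = [float(i) for i in range(10)]
    c = [float(j) for j in range(6)]
    val = de_boor(4, 4.5, t, c, 3)  # cubic B-splines reproduce linear data: → 2.5
    ```

    For error compensation, `c` would be fitted so the spline matches the measured positioning errors, and the evaluated value would be subtracted from each commanded position.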

  17. Grammar-Based Multi-Frontal Solver for One Dimensional Isogeometric Analysis with Multiple Right-Hand-Sides

    KAUST Repository

    Kuźnik, Krzysztof

    2013-06-01

    This paper introduces a grammar-based model for developing a multi-thread multi-frontal parallel direct solver for the one-dimensional isogeometric finite element method. The model includes the integration of B-splines for construction of the element local matrices and the multi-frontal solver algorithm. The integration and the solver algorithm are partitioned into basic indivisible tasks, namely the grammar productions, that can be executed sequentially. The partial order of execution of the basic tasks is analyzed to provide the scheduling for the concurrent execution of the integration and the multi-frontal solver algorithm. This graph grammar analysis allows for optimal concurrent execution of all tasks. The model has been implemented and tested on an NVIDIA CUDA GPU, delivering logarithmic execution time for linear, quadratic, cubic and higher-order B-splines. Thus, the CUDA implementation delivers the optimal performance predicted by our graph grammar analysis. We utilize the solver for multiple right-hand sides related to the solution of non-stationary or inverse problems.

  18. Data-driven identification of intensity normalization region based on longitudinal coherency of 18F-FDG metabolism in the healthy brain.

    Science.gov (United States)

    Zhang, Huiwei; Wu, Ping; Ziegler, Sibylle I; Guan, Yihui; Wang, Yuetao; Ge, Jingjie; Schwaiger, Markus; Huang, Sung-Cheng; Zuo, Chuantao; Förster, Stefan; Shi, Kuangyu

    2017-02-01

    analysis strategies (subject-based and age-cohort averaging). In addition, the proposed new intensity normalization method using the paracentral lobule generates significantly higher differentiation from the age-associated changes than other intensity normalization methods. Proper intensity normalization can enhance the longitudinal coherency of normal brain glucose metabolism. The paracentral lobule followed by the cerebellar tonsil are shown to be the two most stable intensity normalization regions concerning age-dependent brain metabolism. This may provide the potential to better differentiate disease-related changes from age-related changes in brain metabolism, which is of relevance in the diagnosis of neurodegenerative disorders. Copyright © 2016 Elsevier Inc. All rights reserved.
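Reference-region intensity normalization of the kind evaluated above can be sketched as follows. The toy image and mask are hypothetical; real PET pipelines operate on 3-D volumes with anatomically defined reference regions such as a paracentral-lobule VOI.

```python
import numpy as np

def normalize_by_reference(image, reference_mask):
    """Divide an image by the mean uptake inside a reference-region mask,
    so the reference region averages to 1 after normalization."""
    return image / image[reference_mask].mean()

# Toy uptake values with a hypothetical two-voxel reference region
image = np.array([4.0, 6.0, 2.0, 2.0])
reference_mask = np.array([True, True, False, False])
normalized = normalize_by_reference(image, reference_mask)
```

Choosing a reference region whose metabolism is stable with age (as the study argues for the paracentral lobule) is what makes the normalized values longitudinally comparable.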

  19. Repeated intravenous administration of gadobutrol does not lead to increased signal intensity on unenhanced T1-weighted images - a voxel-based whole brain analysis

    Energy Technology Data Exchange (ETDEWEB)

    Langner, Soenke; Kromrey, Marie-Luise [University Medicine Greifswald, Institute of Diagnostic Radiology and Neuroradiology, Greifswald (Germany); Kuehn, Jens-Peter [University Medicine Greifswald, Institute of Diagnostic Radiology and Neuroradiology, Greifswald (Germany); University Hospital, Carl Gustav Carus University Dresden, Institute for Radiology, Dresden (Germany); Grothe, Matthias [University Medicine Greifswald, Department of Neurology, Greifswald (Germany); Domin, Martin [University Medicine Greifswald, Functional Imaging Unit, Institute of Diagnostic Radiology and Neuroradiology, Greifswald (Germany)

    2017-09-15

    To identify a possible association between repeated intravenous administration of gadobutrol and increased signal intensity in the grey and white matter using voxel-based whole-brain analysis. In this retrospective single-centre study, 217 patients with a clinically isolated syndrome underwent baseline brain magnetic resonance imaging and at least one annual follow-up examination with intravenous administration of 0.1 mmol/kg body weight of gadobutrol. Using the ''Diffeomorphic Anatomical Registration using Exponentiated Lie algebra'' (DARTEL) normalisation process, tissue templates for grey matter (GM), white matter (WM), and cerebrospinal fluid (CSF) were calculated, as were GM-CSF and WM-CSF ratios. Voxel-based whole-brain analysis was used to calculate the signal intensity for each voxel in each data set. Paired t-test was applied to test differences to baseline MRI for significance. Voxel-based whole-brain analysis demonstrated no significant changes in signal intensity of grey and white matter after up to five gadobutrol administrations. There was no significant change in GM-CSF and grey WM-CSF ratios. Voxel-based whole-brain analysis did not demonstrate increased signal intensity of GM and WM on unenhanced T1-weighted images after repeated gadobutrol administration. The molecular structure of gadolinium-based contrast agent preparations may be an essential factor causing SI increase on unenhanced T1-weighted images. (orig.)

  20. Effectiveness of classroom based crew resource management training in the intensive care unit: study design of a controlled trial

    Science.gov (United States)

    2011-01-01

    Background Crew resource management (CRM) has the potential to enhance patient safety in intensive care units (ICU) by improving the use of non-technical skills. However, CRM evaluation studies in health care are inconclusive with regard to the effect of this training on behaviour and organizational outcomes, due to weak study designs and the scarce use of direct observations. Therefore, the aim of this study is to determine the effectiveness and cost-effectiveness of CRM training on attitude, behaviour and organization after one year, using a multi-method approach and matched control units. The purpose of the present article is to describe the study protocol and the underlying choices of this evaluation study of CRM in the ICU in detail. Methods/Design Six ICUs participated in a paired controlled trial, with one pre-test and two post test measurements (respectively three months and one year after the training). Three ICUs were trained and compared to matched control ICUs. The 2-day classroom-based training was delivered to multidisciplinary groups. Typical CRM topics on the individual, team and organizational level were discussed, such as situational awareness, leadership and communication. All levels of Kirkpatrick's evaluation framework (reaction, learning, behaviour and organisation) were assessed using questionnaires, direct observations, interviews and routine ICU administration data. Discussion It is expected that the CRM training acts as a generic intervention that stimulates specific interventions. Besides effectiveness and cost-effectiveness, the assessment of the barriers and facilitators will provide insight in the implementation process of CRM. Trial registration Netherlands Trial Register (NTR): NTR1976 PMID:22073981

  1. Study on the response of unsaturated soil slope based on the effects of rainfall intensity and slope angle

    Science.gov (United States)

    Ismail, Mohd Ashraf Mohamad; Hamzah, Nur Hasliza

    2017-07-01

    Rainfall has been considered the major cause of slope failure. The mechanisms leading to slope failure include the infiltration process, surface runoff, volumetric water content and pore-water pressure of the soil. This paper describes a study in which simulated rainfall events were applied to a 2-dimensional soil column to study the response of unsaturated soil behaviour at different slope angles. The 2-dimensional soil column is used to demonstrate the mechanism of slope failure. The unsaturated soils were tested at four different slope angles (15°, 25°, 35° and 45°) and subjected to three different rainfall intensities (maximum, mean and minimum). The following key results were obtained: (1) The stability of unsaturated soil decreases as rainwater infiltrates into it. Soil that is initially unsaturated approaches saturation as rainwater seeps in, and the infiltrating water reduces the matric suction in the soil. Matric suction controls soil shear strength: a reduction in matric suction decreases the effective normal stress, which in turn diminishes the available shear strength to a point where equilibrium can no longer be sustained in the slope. (2) The infiltration rate of rainwater decreases while surface runoff increases as the soil approaches saturation; this causes soil erosion and leads to slope failure. (3) The steepness of the slope is not the major factor on its own but also contributes to slope failure. On steep slopes, rainwater falling on the soil surface becomes surface runoff within a short time compared with the water that infiltrates into the soil, whereas on gentle slopes surface runoff moves slowly, which increases the amount of water that infiltrates into the soil.

  2. Effects of high-intensity static magnetic fields on a root-based bioreactor system for space applications

    Science.gov (United States)

    Villani, Maria Elena; Massa, Silvia; Lopresto, Vanni; Pinto, Rosanna; Salzano, Anna Maria; Scaloni, Andrea; Benvenuto, Eugenio; Desiderio, Angiola

    2017-11-01

    Static magnetic fields created by superconducting magnets have been proposed as an effective solution to protect spacecraft and planetary stations from cosmic radiation. This shield can deflect high-energy particles that exert injurious effects on living organisms, including plants. In fact, plant systems are becoming increasingly interesting for space adaptation studies, being useful not only as a food source but also as a sink of bioactive molecules in future bioregenerative life-support systems (BLSS). However, the application of protective magnetic shields would generate, inside space habitats, residual magnetic fields of the order of a few hundred millitesla, whose effect on plant systems is poorly known. To simulate the exposure conditions of these residual magnetic fields in a shielded environment, devices generating high-intensity static magnetic fields (SMF) were comparatively evaluated in blind exposure experiments (250 mT, 500 mT and sham, i.e. no SMF). The effects of these SMFs were assayed on tomato cultures (hairy roots) previously engineered to produce anthocyanins, known for their anti-oxidant properties and possibly useful in the setting of BLSS. Hairy roots exposed for periods ranging from 24 h to 11 days were morphometrically analyzed to measure their growth, and corresponding molecular changes were assessed by a differential proteomic approach. After disclosure of the blind exposure protocol, a stringent statistical elaboration revealed the absence of significant differences in the soluble proteome, perfectly matching the phenotypic results. This experimental evidence demonstrates that the identified plant system tolerates exposure to these magnetic fields well.
The results described here reinforce the notion of using this plant organ culture as a tool in ground-based experiments simulating space and planetary environments, with a view to using tomato 'hairy root' cultures as bioreactors of ready-to-use bioactive molecules during future long-term space missions.

  3. Catheter-based high-intensity ultrasound for epicardial ablation of the left ventricle: device design and in vivo feasibility

    Science.gov (United States)

    Salgaonkar, Vasant A.; Nazer, Babak; Jones, Peter D.; Tanaka, Yasuaki; Martin, Alastair; Ng, Bennett; Duggirala, Srikant; Diederich, Chris J.; Gerstenfeld, Edward P.

    2015-03-01

    The development and in vivo testing of a high-intensity ultrasound thermal ablation catheter for epicardial ablation of the left ventricle (LV) is presented. Scar tissue can occur in the mid-myocardial and epicardial space in patients with nonischemic cardiomyopathy and lead to ventricular tachycardia. Current ablation technology uses radiofrequency energy, which is limited epicardially by the presence of coronary vessels, phrenic nerves, and fat. Ultrasound energy can be precisely directed to deliver targeted deep epicardial ablation while sparing intervening epicardial nerve and vessels. The proof-of-concept ultrasound applicators were designed for sub-xyphoid access to the pericardial space through a steerable 14-Fr sheath. The catheter consists of two rectangular planar transducers, for therapy (6.4 MHz) and imaging (5 MHz), mounted at the tip of a 3.5-mm flexible nylon catheter coupled and encapsulated within a custom-shaped balloon for cooling. Thermal lesions were created in the LV in a swine (n = 10) model in vivo. The ultrasound applicator was positioned fluoroscopically. Its orientation and contact with the LV were verified using A-mode imaging and a radio-opaque marker. Ablations employed 60-s exposures at 15 - 30 W (electrical power). Histology indicated thermal coagulation and ablative lesions penetrating 8 - 12 mm into the left ventricle on lateral and anterior walls and along the left anterior descending artery. The transducer design enabled successful sparing from the epicardial surface to 2 - 4 mm of intervening ventricle tissue and epicardial fat. The feasibility of targeted epicardial ablation with catheter-based ultrasound was demonstrated.

  4. Effectiveness of classroom based crew resource management training in the intensive care unit: study design of a controlled trial

    Directory of Open Access Journals (Sweden)

    Kemper Peter F

    2011-11-01

    Full Text Available Abstract Background Crew resource management (CRM) has the potential to enhance patient safety in intensive care units (ICU) by improving the use of non-technical skills. However, CRM evaluation studies in health care are inconclusive with regard to the effect of this training on behaviour and organizational outcomes, due to weak study designs and the scarce use of direct observations. Therefore, the aim of this study is to determine the effectiveness and cost-effectiveness of CRM training on attitude, behaviour and organization after one year, using a multi-method approach and matched control units. The purpose of the present article is to describe the study protocol and the underlying choices of this evaluation study of CRM in the ICU in detail. Methods/Design Six ICUs participated in a paired controlled trial, with one pre-test and two post test measurements (respectively three months and one year after the training). Three ICUs were trained and compared to matched control ICUs. The 2-day classroom-based training was delivered to multidisciplinary groups. Typical CRM topics on the individual, team and organizational level were discussed, such as situational awareness, leadership and communication. All levels of Kirkpatrick's evaluation framework (reaction, learning, behaviour and organisation) were assessed using questionnaires, direct observations, interviews and routine ICU administration data. Discussion It is expected that the CRM training acts as a generic intervention that stimulates specific interventions. Besides effectiveness and cost-effectiveness, the assessment of the barriers and facilitators will provide insight in the implementation process of CRM. Trial registration Netherlands Trial Register (NTR): NTR1976

  5. Scattered Data Processing Approach Based on Optical Facial Motion Capture

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2013-01-01

    Full Text Available In recent years, animation reconstruction of facial expressions has become a popular research field in computer science, and motion capture-based facial expression reconstruction is now emerging in this field. Based on the facial motion data obtained using a passive optical motion capture system, we propose a scattered data processing approach, which aims to solve the common problems of missing data and noise. To recover missing data, given the nonlinear relationships among neighbors of the current missing marker, we propose an improved version of a previous method, where we use the motion of three muscles rather than one to recover the missing data. To reduce the noise, we initially apply preprocessing to eliminate impulsive noise, before our proposed third-order quasi-uniform B-spline-based fitting method is used to reduce the remaining noise. Our experiments showed that the principles that underlie this method are simple and straightforward, and it delivered acceptable precision during reconstruction.
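A smoothing cubic B-spline fit of the kind used here for noise reduction can be sketched with SciPy. The signal, noise level and smoothing factor `s` are illustrative assumptions, not the authors' marker data or their quasi-uniform formulation.

```python
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
clean = np.sin(2.0 * np.pi * t)          # stand-in for a marker trajectory
noisy = clean + rng.normal(scale=0.1, size=t.size)

# Cubic smoothing B-spline; s ~ m * sigma^2 trades fidelity against smoothness
tck = splrep(t, noisy, k=3, s=t.size * 0.1**2)
smoothed = splev(t, tck)
```

Increasing `s` yields a smoother (stiffer) curve; `s = 0` reproduces an interpolating spline that follows the noise exactly.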

  6. Evaluation of an automated knowledge-based textual summarization system for longitudinal clinical data, in the intensive care domain.

    Science.gov (United States)

    Goldstein, Ayelet; Shahar, Yuval; Orenbuch, Efrat; Cohen, Matan J

    2017-10-01

    To examine the feasibility of the automated creation of meaningful free-text summaries of longitudinal clinical records, using a new general methodology that we had recently developed; and to assess the potential benefits to the clinical decision-making process of using such a method to generate draft letters that can be further manually enhanced by clinicians. We had previously developed a system, CliniText (CTXT), for automated summarization in free text of longitudinal medical records, using a clinical knowledge base. In the current study, we created an Intensive Care Unit (ICU) clinical knowledge base, assisted by two ICU clinical experts in an academic tertiary hospital. The CTXT system generated free-text summary letters from the data of 31 different patients, which were compared to the respective original physician-composed discharge letters. The main evaluation measures were (1) relative completeness, quantifying the data items missed by one of the letters but included by the other, and their importance; (2) quality parameters, such as readability; (3) functional performance, assessed by the time needed, by three clinicians reading each of the summaries, to answer five key questions, based on the discharge letter (e.g., "What are the patient's current respiratory requirements?"), and by the correctness of the clinicians' answers. Completeness: In 13/31 (42%) of the letters the number of important items missed in the CTXT-generated letter was actually less than or equal to the number of important items missed by the MD-composed letter. In each of the MD-composed letters, at least two important items that were mentioned by the CTXT system were missed (a mean of 7.2±5.74). In addition, the standard deviation in the number of missed items in the MD letters (STD=15.4) was much higher than the standard deviation in the CTXT-generated letters (STD=5.3). 
Quality: The MD-composed letters obtained a significantly better grade in three out of four measured parameters

  7. Model-based, semiquantitative and time intensity curve shape analysis of dynamic contrast-enhanced MRI: a comparison in patients undergoing antiangiogenic treatment for recurrent glioma

    NARCIS (Netherlands)

    Lavini, Cristina; Verhoeff, Joost J. C.; Majoie, Charles B.; Stalpers, Lukas J. A.; Richel, Dick J.; Maas, Mario

    2011-01-01

    To compare time intensity curve (TIC)-shape analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data with model-based analysis and semiquantitative analysis in patients with high-grade glioma treated with the antiangiogenic drug bevacizumab. Fifteen patients had a pretreatment

  8. Giant Ants and Walking Plants: Using Science Fiction to Teach a Writing-Intensive, Lab-Based Biology Class for Nonmajors

    Science.gov (United States)

    Firooznia, Fardad

    2006-01-01

    This writing-intensive, lab-based, nonmajor biology course explores scientific inquiry and biological concepts through specific topics illustrated or inaccurately depicted in works of science fiction. The laboratory emphasizes the scientific method and introduces several techniques used in biological research related to the works we study.…

  9. Evidence and consensus based guideline for the management of delirium, analgesia, and sedation in intensive care medicine. Revision 2015 (DAS-Guideline 2015 – short version)

    Directory of Open Access Journals (Sweden)

    DAS-Taskforce 2015

    2015-11-01

    Full Text Available In 2010, under the guidance of the DGAI (German Society of Anaesthesiology and Intensive Care Medicine) and DIVI (German Interdisciplinary Association for Intensive Care and Emergency Medicine), twelve German medical societies published the “Evidence- and Consensus-based Guidelines on the Management of Analgesia, Sedation and Delirium in Intensive Care”. Since then, several new studies and publications have considerably increased the body of evidence, including the new recommendations from the American College of Critical Care Medicine (ACCM) in conjunction with the Society of Critical Care Medicine (SCCM) and the American Society of Health-System Pharmacists (ASHP) from 2013. For this update, a major restructuring and extension of the guidelines were needed in order to cover new aspects of treatment, such as sleep and anxiety management. The literature was systematically searched and evaluated using the criteria of the Oxford Center of Evidence Based Medicine. The body of evidence used to formulate these recommendations was reviewed and approved by representatives of 17 national societies. Three grades of recommendation were used as follows: Grade “A” (strong recommendation), Grade “B” (recommendation) and Grade “0” (open recommendation). The result is a comprehensive, interdisciplinary, evidence and consensus-based set of level 3 guidelines. This publication was designed for all ICU professionals, and takes into account all critically ill patient populations. It represents a guide to symptom-oriented prevention, diagnosis, and treatment of delirium, anxiety, stress, and protocol-based analgesia, sedation, and sleep-management in intensive care medicine.

  10. Effects of Computer-Based Practice on the Acquisition and Maintenance of Basic Academic Skills for Children with Moderate to Intensive Educational Needs

    Science.gov (United States)

    Everhart, Julie M.; Alber-Morgan, Sheila R.; Park, Ju Hee

    2011-01-01

    This study investigated the effects of computer-based practice on the acquisition and maintenance of basic academic skills for two children with moderate to intensive disabilities. The special education teacher created individualized computer games that enabled the participants to independently practice academic skills that corresponded with their…

  11. Widespread pain - do pain intensity and care-seeking influence sickness absence? - A population-based cohort study

    DEFF Research Database (Denmark)

    Mose, Søren; Christiansen, David Høyrup; Jensen, Jens Christian

    2016-01-01

    BACKGROUND: Both musculoskeletal pain-intensity in relation to a specific location (e.g. lower back or shoulder) and pain in multiple body regions have been shown to be associated with impaired function and sickness absence, but the impact of pain intensity on the association between widespread pain and sickness absence has not been studied. Additionally it is unknown whether care-seeking in general practice due to musculoskeletal disorders has a positive or negative impact on future absenteeism. The purpose of this study was to examine the influence of pain intensity on the association between number of musculoskeletal pain sites and sickness absence, and to analyze the impact on absenteeism from care-seeking in general practice due to musculoskeletal disorders. METHODS: 3745 Danish adults registered with eight General Practitioners (GPs) in one primary medical center reported location...

  12. High Signal Intensity in the Dentate Nucleus and Globus Pallidus on Unenhanced T1-Weighted MR Images: Comparison between Gadobutrol and Linear Gadolinium-Based Contrast Agents.

    Science.gov (United States)

    Moser, F G; Watterson, C T; Weiss, S; Austin, M; Mirocha, J; Prasad, R; Wang, J

    2018-02-01

    In view of recent observations that gadolinium deposits in brain tissue after intravenous injection, the aim of this study was to compare signal changes in the globus pallidus and dentate nucleus on unenhanced T1-weighted MR images in patients receiving serial doses of gadobutrol, a macrocyclic gadolinium-based contrast agent, with those seen in patients receiving linear gadolinium-based contrast agents. This was a retrospective analysis of on-site patients with brain tumors. Fifty-nine patients received only gadobutrol, and 60 patients received only linear gadolinium-based contrast agents. Linear gadolinium-based contrast agents included gadoversetamide, gadobenate dimeglumine, and gadodiamide. T1 signal intensity in the globus pallidus, dentate nucleus, and pons was measured on the precontrast portions of patients' first and seventh brain MRIs. Ratios of signal intensity comparing the globus pallidus with the pons (globus pallidus/pons) and dentate nucleus with the pons (dentate nucleus/pons) were calculated. Changes in the above signal intensity ratios were compared within the gadobutrol and linear agent groups, as well as between groups. The dentate nucleus/pons signal ratio increased in the linear gadolinium-based contrast agent group (t = 4.215, P linear gadolinium-based contrast agent group (t = 2.931, P linear gadolinium-based contrast agents. © 2018 by American Journal of Neuroradiology.
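The ratio-and-paired-comparison analysis described above can be sketched as follows; the signal intensities and the simulated between-exam increase are invented for illustration, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical unenhanced T1 signal intensities (arbitrary units), 5 patients
dentate_exam1 = np.array([112.0, 118.0, 121.0, 109.0, 115.0])
pons_exam1 = np.array([100.0, 104.0, 103.0, 99.0, 101.0])

# Normalize the dentate signal by the pons, the study's reference structure
ratio_exam1 = dentate_exam1 / pons_exam1
# Simulated dentate/pons ratios at the 7th exam, with a small per-patient increase
ratio_exam7 = ratio_exam1 * np.array([1.04, 1.06, 1.03, 1.05, 1.04])

# Paired t-test on the within-patient change in signal-intensity ratio
t_stat, p_value = stats.ttest_rel(ratio_exam7, ratio_exam1)
```

Normalizing by the pons removes scanner- and session-level scaling, so the paired test isolates the within-patient change across examinations.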

  13. A Study on the Model of Detecting the Variation of Geomagnetic Intensity Based on an Adapted Motion Strategy

    Directory of Open Access Journals (Sweden)

    Hong Li

    2017-12-01

    Full Text Available By simulating the geomagnetic fields and analyzing the variation of intensities, this paper presents a model for calculating the objective function of an Autonomous Underwater Vehicle (AUV) geomagnetic navigation task. By investigating biologically inspired strategies, the AUV successfully reaches the destination during geomagnetic navigation without using an a priori geomagnetic map. Similar to the pattern of a flatworm, the proposed algorithm relies on a motion pattern to trigger a local searching strategy by detecting the real-time geomagnetic intensity. An adapted strategy is then implemented, which is biased towards the specific target. The results show the reliability and effectiveness of the proposed algorithm.
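A taxis-style local search of the kind described (keep the current heading while the mismatch to a target intensity shrinks, otherwise turn) might be sketched as below. The toy intensity field, step size, tolerance and seed are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def flatworm_search(intensity, start, target, step=0.05, max_iter=500, tol=0.05, seed=0):
    """Taxis-style search: keep the current heading while the mismatch to the
    target intensity shrinks, otherwise pick a new random heading."""
    rng = np.random.default_rng(seed)
    pos = np.asarray(start, dtype=float)
    heading = rng.uniform(0.0, 2.0 * np.pi)
    best = abs(intensity(pos) - target)
    for _ in range(max_iter):
        trial = pos + step * np.array([np.cos(heading), np.sin(heading)])
        err = abs(intensity(trial) - target)
        if err < best:
            pos, best = trial, err                    # improving: keep heading
        else:
            heading = rng.uniform(0.0, 2.0 * np.pi)   # stalled: turn randomly
        if best < tol:
            break
    return pos, best

# Toy radial "geomagnetic intensity": the target contour is the unit circle
intensity = lambda p: float(np.hypot(p[0], p[1]))
pos, err = flatworm_search(intensity, start=(2.0, 2.0), target=1.0)
```

Because only improving moves are accepted, the mismatch decreases monotonically, which is the essential property of such map-free local search strategies.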

  14. [The pregnant employee in anaesthesia and intensive care - An evidence-based approach to designing adequate workplaces].

    Science.gov (United States)

    Röher, Katharina; Göpfert, Matthias S

    2015-07-01

    In the light of a rising percentage of women among employees in anaesthesia and intensive care designing adequate workplaces for pregnant employees plays an increasingly important role. Here it is necessary to align the varied interests of the pregnant employee, fellow employees and the employer, where the legal requirements of the Maternity Protection Act ("Mutterschutzgesetz") form the statutory framework. This review describes how adequate workplaces for pregnant employees in anaesthesia and intensive care can be established considering the scientific evidence on the subject. © Georg Thieme Verlag Stuttgart · New York.

  15. Variational boundary conditions based on the Nitsche method for fitted and unfitted isogeometric discretizations of the mechanically coupled Cahn-Hilliard equation

    Science.gov (United States)

    Zhao, Ying; Schillinger, Dominik; Xu, Bai-Xiang

    2017-07-01

    The primal variational formulation of the fourth-order Cahn-Hilliard equation requires C1-continuous finite element discretizations, e.g., in the context of isogeometric analysis. In this paper, we explore the variational imposition of essential boundary conditions that arise from the thermodynamic derivation of the Cahn-Hilliard equation in primal variables. Our formulation is based on the symmetric variant of Nitsche's method, does not introduce additional degrees of freedom and is shown to be variationally consistent. In contrast to strong enforcement, the new boundary condition formulation can be naturally applied to any mapped isogeometric parametrization of any polynomial degree. In addition, it preserves full accuracy, including higher-order rates of convergence, which we illustrate for boundary-fitted discretizations of several benchmark tests in one, two and three dimensions. Unfitted Cartesian B-spline meshes constitute an effective alternative to boundary-fitted isogeometric parametrizations for constructing C1-continuous discretizations, in particular for complex geometries. We combine our variational boundary condition formulation with unfitted Cartesian B-spline meshes and the finite cell method to simulate chemical phase segregation in a composite electrode. This example, involving coupling of chemical fields with mechanical stresses on complex domains and coupling of different materials across complex interfaces, demonstrates the flexibility of variational boundary conditions in the context of higher-order unfitted isogeometric discretizations.
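For orientation, the symmetric Nitsche variant referred to above takes the following form for a model Poisson problem with Dirichlet data g on Γ_D (the mechanically coupled Cahn-Hilliard system of the paper adds further terms); β is a penalty parameter and h the local mesh size:

```latex
\int_\Omega \nabla u_h \cdot \nabla v_h \, d\Omega
- \int_{\Gamma_D} (\partial_n u_h)\, v_h \, d\Gamma
- \int_{\Gamma_D} (\partial_n v_h)\,(u_h - g)\, d\Gamma
+ \frac{\beta}{h} \int_{\Gamma_D} (u_h - g)\, v_h \, d\Gamma
= \int_\Omega f\, v_h \, d\Omega
```

The second and third boundary terms make the formulation consistent and symmetric, and the penalty term enforces u_h ≈ g weakly without introducing additional degrees of freedom, which is why the method carries over to unfitted B-spline discretizations.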

  16. Sprinkling experiments to simulate high and intense rainfall for process based investigations - a comparison of two methods

    Science.gov (United States)

    Müller, C.; Seeger, M.; Schneider, R.; Johst, M.; Casper, M.

    2009-04-01

    Land use and land management changes affect runoff and erosion dynamics, so measures within this scope are often directed towards the mitigation of natural hazards such as floods and landslides. However, the effects of these changes (e.g. in soil physics after reforestation or a less intensive agriculture) are i) detectable only many years later or ii) hardly observable with conventional methods. Therefore, sprinkling experiments are frequently used for process based investigations of near-surface hydrological response as well as rill and interrill erosion. In this study, two different sprinkling systems have been applied under different land use and at different scales to elucidate and quantify dominant processes of runoff generation, as well as to relate them to the detachment and transport of solids. The studies took place in the micro-scale Zemmer basin and at Frankelbach in Germany. At the Zemmer basin the sprinkling experiments were performed on agricultural land, while the experiments at Frankelbach were performed at reforested sites. The experiments were carried out i) with a small mobile rainfall simulator of high rainfall intensities (40 mm h-1) and ii) with a larger one covering a slope segment and simulating high rainfall amounts (120 mm in 3 days). Both methods show basically comparable results. On the agricultural sites clear differences could be observed between different soil management types: in contrast to the conventionally tilled soils, deep loosened soils (in combination with conservative tillage) do not produce overland flow, but tend to transfer more water by interflow processes, retaining large amounts in the subsoil. For the forested sites, runoff shows a high variability as determined by both the larger and the smaller rainfall simulations. This variability is due rather to the different forest and soil types than to methodologically different settings of the sprinkling systems.
Both rainfall simulation systems characterized the runoff behavior in a

  17. Multidisciplinary team-based approach for comprehensive preoperative pulmonary rehabilitation including intensive nutritional support for lung cancer patients.

    Directory of Open Access Journals (Sweden)

    Hiroaki Harada

    Full Text Available BACKGROUND: To decrease the risk of postoperative complication, improving general and pulmonary conditioning preoperatively should be considered essential for patients scheduled to undergo lung surgery. OBJECTIVE: The aim of this study is to develop a short-term beneficial program of preoperative pulmonary rehabilitation for lung cancer patients. METHODS: From June 2009, comprehensive preoperative pulmonary rehabilitation (CHPR) including intensive nutritional support was performed prospectively using a multidisciplinary team-based approach. Postoperative complication rate and the transitions of pulmonary function in CHPR were compared with historical data of conventional preoperative pulmonary rehabilitation (CVPR) conducted since June 2006. The study population was limited to patients who underwent standard lobectomy. RESULTS: Postoperative complication rate in the CVPR (n = 29) and CHPR (n = 21) were 48.3% and 28.6% (p = 0.2428), respectively. Those in patients with Charlson Comorbidity Index scores ≥2 were 68.8% (n = 16) and 27.3% (n = 11), respectively (p = 0.0341) and those in patients with preoperative risk score in Estimation of Physiologic Ability and Surgical Stress scores >0.3 were 57.9% (n = 19) and 21.4% (n = 14), respectively (p = 0.0362). Vital capacities of pre- and post intervention before surgery in the CHPR group were 2.63±0.65 L and 2.75±0.63 L (p = 0.0043), respectively; however, their transition in the CVPR group was not statistically significant (p = 0.6815). Forced expiratory volumes in one second of pre- and post intervention before surgery in the CHPR group were 1.73±0.46 L and 1.87±0.46 L (p = 0.0012), respectively; however, their transition in the CVPR group was not statistically significant (p = 0.6424). CONCLUSIONS: CHPR appeared to be a beneficial and effective short-term preoperative rehabilitation protocol, especially in patients with poor preoperative conditions.

  18. Multidisciplinary team-based approach for comprehensive preoperative pulmonary rehabilitation including intensive nutritional support for lung cancer patients.

    Science.gov (United States)

    Harada, Hiroaki; Yamashita, Yoshinori; Misumi, Keizo; Tsubokawa, Norifumi; Nakao, Junichi; Matsutani, Junko; Yamasaki, Miyako; Ohkawachi, Tomomi; Taniyama, Kiyomi

    2013-01-01

    To decrease the risk of postoperative complication, improving general and pulmonary conditioning preoperatively should be considered essential for patients scheduled to undergo lung surgery. The aim of this study is to develop a short-term beneficial program of preoperative pulmonary rehabilitation for lung cancer patients. From June 2009, comprehensive preoperative pulmonary rehabilitation (CHPR) including intensive nutritional support was performed prospectively using a multidisciplinary team-based approach. Postoperative complication rate and the transitions of pulmonary function in CHPR were compared with historical data of conventional preoperative pulmonary rehabilitation (CVPR) conducted since June 2006. The study population was limited to patients who underwent standard lobectomy. Postoperative complication rate in the CVPR (n = 29) and CHPR (n = 21) were 48.3% and 28.6% (p = 0.2428), respectively. Those in patients with Charlson Comorbidity Index scores ≥2 were 68.8% (n = 16) and 27.3% (n = 11), respectively (p = 0.0341) and those in patients with preoperative risk score in Estimation of Physiologic Ability and Surgical Stress scores >0.3 were 57.9% (n = 19) and 21.4% (n = 14), respectively (p = 0.0362). Vital capacities of pre- and post intervention before surgery in the CHPR group were 2.63±0.65 L and 2.75±0.63 L (p = 0.0043), respectively; however, their transition in the CVPR group was not statistically significant (p = 0.6815). Forced expiratory volumes in one second of pre- and post intervention before surgery in the CHPR group were 1.73±0.46 L and 1.87±0.46 L (p = 0.0012), respectively; however, their transition in the CVPR group was not statistically significant (p = 0.6424). CHPR appeared to be a beneficial and effective short-term preoperative rehabilitation protocol, especially in patients with poor preoperative conditions.
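    The group comparisons above reduce to 2×2 contingency tables (e.g. 14/29 complications under CVPR vs. 6/21 under CHPR). As an illustrative sketch only (not the authors' analysis code), the kind of p-value reported can be reproduced with a two-sided Fisher exact test, written here from scratch on top of the standard library:

    ```python
    from math import comb

    def fisher_exact_two_sided(a, b, c, d):
        """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]].

        Sums the hypergeometric probabilities of every table with the same
        margins that is at least as improbable as the observed one.
        """
        n1 = a + b                     # row 1 total (group 1 size)
        m = a + c                      # column 1 total (total events)
        N = a + b + c + d              # grand total

        def pmf(x):
            # probability of x events in group 1, margins held fixed
            return comb(m, x) * comb(N - m, n1 - x) / comb(N, n1)

        p_obs = pmf(a)
        lo = max(0, n1 - (N - m))      # smallest feasible count in group 1
        hi = min(m, n1)                # largest feasible count in group 1
        return sum(pmf(x) for x in range(lo, hi + 1)
                   if pmf(x) <= p_obs * (1 + 1e-9))

    # CVPR: 14 complications / 29 patients; CHPR: 6 / 21 (48.3% vs 28.6%)
    p = fisher_exact_two_sided(14, 15, 6, 15)
    ```

    With these counts the difference is not significant at the 5% level, consistent with the reported p = 0.2428 (which may have been computed with a different test in the original study).
    
    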

  19. Intensive mobilities

    DEFF Research Database (Denmark)

    Vannini, Phillip; Bissell, David; Jensen, Ole B.

    This paper explores the intensities of long distance commuting journeys as a way of exploring how bodily sensibilities are being changed by the mobilities that they undertake. The context of this paper is that many people are travelling further to work than ever before owing to a variety of factors which relate to transport, housing and employment. Yet we argue that the experiential dimensions of long distance mobilities have not received the attention that they deserve within geographical research on mobilities. This paper combines ideas from mobilities research and contemporary social theory with fieldwork conducted in Canada, Denmark and Australia to develop our understanding of the experiential politics of long distance workers. Rather than focusing on the extensive dimensions of mobilities that are implicated in patterns and trends, our paper turns to the intensive dimensions of this experience...

  20. Intensity-Stabilized Fast-Scanned Direct Absorption Spectroscopy Instrumentation Based on a Distributed Feedback Laser with Detection Sensitivity down to 4 × 10−6

    Directory of Open Access Journals (Sweden)

    Gang Zhao

    2016-09-01

    Full Text Available A novel, intensity-stabilized, fast-scanned, direct absorption spectroscopy (IS-FS-DAS) instrumentation, based on a distributed feedback (DFB) diode laser, is developed. A fiber-coupled polarization rotator and a fiber-coupled polarizer are used to stabilize the intensity of the laser, which significantly reduces its relative intensity noise (RIN). The influence of white noise is reduced by fast scanning over the spectral feature (at 1 kHz), followed by averaging. By combining these two noise-reducing techniques, it is demonstrated that direct absorption spectroscopy (DAS) can be swiftly performed down to a limit of detection (LOD) (1σ) of 4 × 10−6, which opens up a number of new applications.
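    The averaging step exploits a generic property of white noise: averaging N independent scans suppresses it by roughly 1/√N. A minimal simulation (hypothetical scan length and noise level, not the instrument's acquisition code) illustrates the effect:

    ```python
    import random
    import statistics

    def residual_noise(n_scans, n_points=200, noise_sigma=1.0, seed=0):
        """Average n_scans noisy baseline scans and return the residual
        standard deviation; white noise shrinks roughly as 1/sqrt(n_scans)."""
        rng = random.Random(seed)
        avg = [0.0] * n_points
        for _ in range(n_scans):
            for i in range(n_points):
                # each scan contributes an independent Gaussian noise sample
                avg[i] += rng.gauss(0.0, noise_sigma) / n_scans
        return statistics.stdev(avg)

    single = residual_noise(1)       # one scan: stdev near noise_sigma
    many = residual_noise(1000)      # 1000 averages: roughly sigma / 31.6
    ```

    At a 1 kHz scan rate, a thousand such averages take only one second, which is why the technique remains "fast" despite the averaging.
    
    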

  1. Accurate thermometry based on the red and green fluorescence intensity ratio in NaYF4: Yb, Er nanocrystals for bioapplication.

    Science.gov (United States)

    Liu, Lixin; Qin, Feng; Lv, Tianquan; Zhang, Zhiguo; Cao, Wenwu

    2016-10-15

    A biological temperature measurement method based on the fluorescence intensity ratio (FIR) was developed to reduce uncertainty. The upconversion luminescence of NaYF4:Yb, Er nanocrystals was studied as a function of temperature around the physiologically relevant range of 300-330 K. We found that both the green-green FIR and the red-green FIR (I660/I540) varied linearly as temperature increased. The thermometric uncertainties using the two FIRs were discussed and were determined to be almost constant at 0.6 and 0.09 K for green-green and red-green, respectively. The lower thermometric uncertainty comes from the high signal-to-noise ratio of the measured FIRs, owing to their comparable fluorescence intensities.
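    With a linear FIR-temperature relationship, thermometry of this kind reduces to inverting a calibration FIR = a + b·T and propagating the FIR measurement noise as σ_T = σ_FIR / |b|. The coefficients below are hypothetical, chosen only to show the arithmetic; they are not the paper's calibration data:

    ```python
    def fir_to_temperature(fir, a, b):
        """Invert the linear calibration FIR = a + b*T to recover T (K)."""
        return (fir - a) / b

    def temperature_uncertainty(sigma_fir, b):
        """Propagate FIR noise through the linear model: sigma_T = sigma_FIR/|b|."""
        return sigma_fir / abs(b)

    # hypothetical calibration: FIR rises from 1.00 at 300 K to 1.30 at 330 K,
    # i.e. slope b = 0.01 per kelvin
    b = 0.01
    a = 1.00 - 300 * b
    T = fir_to_temperature(1.15, a, b)          # midpoint of the range
    sigma_T = temperature_uncertainty(0.0009, b)
    ```

    The propagation rule makes the trade-off explicit: a steeper calibration slope or a lower-noise ratio measurement directly tightens the temperature uncertainty, which is why the red-green FIR (with its comparable, hence high-SNR, intensities) outperformed the green-green one.
    
    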

  2. Effects of Mass Media Campaign Exposure Intensity and Durability on Quit Attempts in a Population-Based Cohort Study

    Science.gov (United States)

    Wakefield, M. A.; Spittal, M. J.; Yong, H-H.; Durkin, S. J.; Borland, R.

    2011-01-01

    Objective: To assess the extent to which intensity and timing of televised anti-smoking advertising emphasizing the serious harms of smoking influences quit attempts. Methods: Using advertising gross rating points (GRPs), we estimated exposure to tobacco control and nicotine replacement therapy (NRT) advertising in the 3, 4-6, 7-9 and 10-12 months…

  3. Muscle and intensity based hamstring exercise classification in elite female track and field athletes: implications for exercise selection during rehabilitation

    Science.gov (United States)

    Tsaklis, Panagiotis; Malliaropoulos, Nikos; Mendiguchia, Jurdan; Korakakis, Vasileios; Tsapralis, Kyriakos; Pyne, Debasish; Malliaras, Peter

    2015-01-01

    Background Hamstring injuries are common in many sports, including track and field. Strains occur in different parts of the hamstring muscle but very little is known about whether common hamstring loading exercises specifically load different hamstring components. The purpose of this study was to investigate muscle activation of different components of the hamstring muscle during common hamstring loading ex