WorldWideScience

Sample records for model-based therapeutic correction

  1. Physiologically Based Pharmacokinetic Modeling of Therapeutic Proteins.

    Science.gov (United States)

    Wong, Harvey; Chow, Timothy W

    2017-09-01

    Biologics or therapeutic proteins are becoming increasingly important as treatments for disease. The most common class of biologics is monoclonal antibodies (mAbs). Recently, there has been an increase in the use of physiologically based pharmacokinetic (PBPK) modeling in pharmaceutical drug development. We review PBPK models for therapeutic proteins with an emphasis on mAbs. Due to their size and similarity to endogenous antibodies, there are distinct differences between PBPK models for small molecules and mAbs. The high-level organization of a typical mAb PBPK model consists of a whole-body PBPK model with organ compartments interconnected by both blood and lymph flows. The whole-body PBPK model is coupled with tissue-level submodels used to describe key mechanisms governing mAb disposition, including tissue efflux via the lymphatic system, elimination by catabolism, protection from catabolism through binding to the neonatal Fc receptor (FcRn), and nonlinear binding to specific pharmacological targets of interest. The use of PBPK modeling in the development of therapeutic proteins is still in its infancy. Further application of PBPK modeling for therapeutic proteins will help to define its developing role in drug discovery and development. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
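
    The compartment structure described above reduces, in its simplest form, to coupled mass-balance ODEs. The sketch below is a deliberately minimal caricature (plasma plus one interstitial compartment linked by lymph flow, linear catabolism with FcRn protection folded into the clearance constant, and saturable target-mediated elimination); every name and value is invented for illustration and taken from no published mAb model.

    ```python
    # Minimal two-compartment caricature of mAb disposition (illustrative only).
    from scipy.integrate import solve_ivp

    V_p, V_t = 3.0, 6.0     # plasma / interstitial volumes (L)
    L_flow = 0.12           # lymph flow (L/h)
    sigma = 0.95            # vascular reflection coefficient
    CL_cat = 0.01           # linear catabolic clearance (L/h), FcRn protection folded in
    Vmax, Km = 0.05, 0.1    # saturable target-mediated elimination

    def rhs(t, y):
        C_p, C_t = y
        J = L_flow * ((1 - sigma) * C_p - C_t)   # extravasation minus lymph return
        dC_p = (-J - CL_cat * C_p - Vmax * C_p / (Km + C_p)) / V_p
        dC_t = J / V_t
        return [dC_p, dC_t]

    sol = solve_ivp(rhs, (0, 500), [10.0 / V_p, 0.0])  # 10 mg IV bolus into plasma
    print(sol.y[0, -1])     # plasma concentration at t = 500 h
    ```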

  2. Integrating School-Based and Therapeutic Conflict Management Models at School.

    Science.gov (United States)

    D'Oosterlinck, Franky; Broekaert, Eric

    2003-01-01

    Explores the possibility of integrating school-based and therapeutic conflict management models, comparing two management models: a school-based conflict management program, "Teaching Students To Be Peacemakers"; and a therapeutic conflict management program, "Life Space Crisis Intervention." The paper concludes that integration might be possible…

  3. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    Science.gov (United States)

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed analyte concentration to the observed urinary creatinine concentration (UCR). This ratio-based method is flawed because it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors, like age, gender, and race/ethnicity, that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of the ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group in the numerator of the ratio (for example, males), these ratios were higher for the model-based method. When estimated UCRs were lower for the group in the numerator of the ratio (for example, NHW), these ratios were higher for the ratio-based method. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
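
    As a minimal illustration of the two approaches, the sketch below generates synthetic data in which UCR depends on sex as well as hydration, then contrasts the ratio-based sex difference with a regression that includes UCR as a covariate; all variable names and effect sizes are invented.

    ```python
    # Ratio-based vs. model-based creatinine correction on synthetic data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1000
    male = rng.integers(0, 2, n)
    log_ucr = 0.2 * male + rng.normal(0, 0.3, n)         # UCR depends on sex, not only hydration
    log_analyte = 0.5 * log_ucr + rng.normal(0, 0.4, n)  # analyte tracks urine dilution only

    # Ratio-based: divide by UCR (a subtraction on the log scale), then compare sexes.
    ratio = log_analyte - log_ucr
    print("ratio-based sex difference:", ratio[male == 1].mean() - ratio[male == 0].mean())

    # Model-based: UCR enters the regression as a covariate alongside sex.
    X = sm.add_constant(np.column_stack([male, log_ucr]))
    print("model-based sex effect:", sm.OLS(log_analyte, X).fit().params[1])
    ```

    Here the ratio-based contrast reports a spurious sex difference induced purely by the sex dependence of UCR, while the regression coefficient for sex stays near zero.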

  4. Integrating school-based and therapeutic conflict management models at schools.

    Science.gov (United States)

    D'Oosterlinck, Franky; Broekaert, Eric

    2003-08-01

    Including children with emotional and behavioral needs in mainstream school systems leads to growing concern about the increasing number of violent and nonviolent conflicts. Schools must adapt to this evolution and adopt a more therapeutic dimension. This paper explores the possibility of integrating school-based and therapeutic conflict management models and compares two management models: a school-based conflict management program, Teaching Students To Be Peacemakers; and a therapeutic conflict management program, Life Space Crisis Intervention. The authors conclude that integration might be possible, but depends on establishing a positive school atmosphere, the central position of the teacher, and collaborative and social learning for pupils. Further implementation of integrated conflict management models can be considered but must be underpinned by appropriate scientific research.

  5. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    Science.gov (United States)

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross-scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and was used to calculate X-ray scattering signals in both the forward direction and the cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments designed to emulate the image acquisition of a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
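
    The iterative framework can be illustrated with a toy fixed-point loop in which a known scatter operator is repeatedly applied to the current estimate and subtracted from the measurement. The Gaussian-blur scatter surrogate below is only a stand-in for the paper's analytically derived forward/cross-scatter physics model; all data are synthetic.

    ```python
    # Toy iterative scatter correction with a blur-based scatter surrogate.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(1)
    primary = rng.uniform(0.5, 1.0, (64, 64))                # unknown scatter-free projection
    measured = primary + 0.2 * gaussian_filter(primary, 8)   # measured = primary + scatter

    def scatter_model(p):
        return 0.2 * gaussian_filter(p, 8)                   # assumed-known scatter operator

    estimate = measured.copy()
    for _ in range(5):                                       # fixed-point iteration
        estimate = measured - scatter_model(estimate)

    print("max error:", np.abs(estimate - primary).max())
    ```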

  6. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    The inverse problem of using the information in historical data to estimate model errors is one of the frontier research topics in the sciences. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. On this basis, a new EM-based approach to estimating model errors is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In effect, it combines statistics and dynamics to a certain extent.
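
    As a parametric stand-in for the EM idea (true evolutionary modeling evolves symbolic model structures by genetic programming), the sketch below treats Lorenz-63 with a periodic forcing as "reality" and evolves the amplitude and frequency of a candidate error term with a simple (mu + lambda) random search; all settings are invented.

    ```python
    # Evolutionary-search toy for model error estimation on Lorenz-63.
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, s, a=0.0, w=1.0):
        x, y, z = s
        return [10*(y - x), x*(28 - z) - y, x*y - (8/3)*z + a*np.sin(w*t)]

    t_eval = np.linspace(0, 2, 200)
    truth = solve_ivp(lorenz, (0, 2), [1, 1, 1], t_eval=t_eval, args=(2.0, 3.0)).y

    def cost(p):
        a, w = p
        sim = solve_ivp(lorenz, (0, 2), [1, 1, 1], t_eval=t_eval, args=(a, w)).y
        return np.mean((sim - truth) ** 2)

    rng = np.random.default_rng(2)
    pop = rng.uniform([0, 0], [4, 6], (20, 2))             # population of (a, w) candidates
    for gen in range(30):
        pop = pop[np.argsort([cost(p) for p in pop])][:5]  # select the 5 fittest
        children = pop.repeat(3, axis=0) + rng.normal(0, 0.2, (15, 2))
        pop = np.vstack([pop, children])                   # mu + lambda replacement

    print("recovered (a, w):", pop[0])                     # should move toward (2, 3)
    ```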

  7. Model-based therapeutic correction of hypothalamic-pituitary-adrenal axis dysfunction.

    Directory of Open Access Journals (Sweden)

    Amos Ben-Zvi

    2009-01-01

    The hypothalamic-pituitary-adrenal (HPA) axis is a major system maintaining body homeostasis by regulating the neuroendocrine and sympathetic nervous systems as well as modulating immune function. Recent work has shown that the complex dynamics of this system accommodate several stable steady states, one of which corresponds to the hypocortisol state observed in patients with chronic fatigue syndrome (CFS). At present these dynamics are not formally considered in the development of treatment strategies. Here we use model-based predictive control (MPC) methodology to estimate robust treatment courses for displacing the HPA axis from an abnormal hypocortisol steady state back to a healthy cortisol level. This approach was applied to a recent model of HPA axis dynamics incorporating glucocorticoid receptor kinetics. A candidate treatment that displays robust properties in the face of significant biological variability and measurement uncertainty requires that cortisol be further suppressed for a short period until adrenocorticotropic hormone levels exceed 30% of baseline. Treatment may then be discontinued, and the HPA axis will naturally progress to a stable attractor defined by normal hormone levels. Suppression of biologically available cortisol may be achieved through the use of binding proteins such as CBG and certain metabolizing enzymes, thus offering possible avenues for deployment in a clinical setting. Treatment strategies can therefore be designed that maximally exploit system dynamics to provide a robust response to treatment and ensure a positive outcome over a wide range of conditions. Perhaps most importantly, a treatment course involving further reduction in cortisol, even if transient, is quite counterintuitive and challenges the conventional strategy of supplementing cortisol levels, an approach based on steady-state reasoning.
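
    The treatment logic — suppress cortisol until ACTH rises past a threshold above baseline, then withdraw treatment — can be sketched on a toy ODE system. Note that the caricature below is a simple monostable negative-feedback loop, NOT the published bistable HPA/glucocorticoid-receptor model in which this schedule moves the state between attractors; every equation and constant here is invented.

    ```python
    # Treatment-course logic on a toy negative-feedback ODE (not the paper's model):
    # suppress cortisol until ACTH exceeds 130% of baseline, then release.
    from scipy.integrate import solve_ivp

    A0 = C0 = 0.6823                      # untreated steady state (root of C**3 + C = 1)

    def treated(t, s):                    # cortisol synthesis blocked
        A, C = s
        return [1 / (1 + C**2) - A, -C]

    def untreated(t, s):                  # treatment withdrawn
        A, C = s
        return [1 / (1 + C**2) - A, A - C]

    stop = lambda t, s: s[0] - 1.3 * A0   # stop once ACTH passes 130% of baseline
    stop.terminal, stop.direction = True, 1

    p1 = solve_ivp(treated, (0, 50), [A0, C0], events=stop, max_step=0.1)
    p2 = solve_ivp(untreated, (p1.t[-1], p1.t[-1] + 50), p1.y[:, -1], max_step=0.1)
    print("treatment stopped at t =", p1.t[-1])
    print("ACTH, cortisol after release:", p2.y[:, -1])
    ```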

  8. A network of helping: Generalized reciprocity and cooperative behavior in response to peer and staff affirmations and corrections among therapeutic community residents.

    Science.gov (United States)

    Doogan, Nathan J; Warren, Keith

    2017-01-01

    Clinical theory in therapeutic communities (TCs) for substance abuse treatment emphasizes the importance of peer interactions in bringing about change. This implies that residents will respond in a more prosocial manner to peer versus staff intervention and that residents will interact in such a way as to maintain cooperation. The data consist of electronic records of peer and staff affirmations and corrections at four corrections-based therapeutic community units. We treat the data as a directed social network of affirmations. We sampled 100 resident days from each unit (n = 400) and used a generalized linear mixed effects network time series model to analyze the predictors of sending and receiving affirmations and corrections. The model allowed us to control for characteristics of individuals as well as network-related dependencies. Residents show generalized reciprocity following peer affirmations, but not following staff affirmations. Residents did not respond to peer corrections by increasing affirmations, but responded to staff corrections by decreasing affirmations. Residents directly reciprocated peer affirmations. Residents were more likely to affirm a peer whom they had recently corrected. Residents were homophilous with respect to race, age and program entry time. This analysis demonstrates that TC residents react more prosocially to behavioral intervention by peers than by staff. Further, the community exhibits generalized and direct reciprocity, mechanisms known to foster cooperation in groups. Multiple forms of homophily influence resident interactions. These findings validate TC clinical theory while suggesting paths to improved outcomes.

  9. An Improved Physics-Based Model for Topographic Correction of Landsat TM Images

    Directory of Open Access Journals (Sweden)

    Ainong Li

    2015-05-01

    Optical remotely sensed images in mountainous areas are subject to radiometric distortions induced by topographic effects, which need to be corrected before quantitative applications. Based on the Li model and the Sandmeier model, this paper proposed an improved physics-based model for the topographic correction of Landsat Thematic Mapper (TM) images. The model employed Normalized Difference Vegetation Index (NDVI) thresholds to approximately divide land targets into eleven groups, due to NDVI’s lower sensitivity to topography and its significant role in indicating land cover type. Within each group of terrestrial targets, corresponding MODIS BRDF (Bidirectional Reflectance Distribution Function) products were used to account for the land surface’s BRDF effect, and topographic effects were corrected without the Lambertian assumption. The methodology was tested with two TM scenes of severely rugged mountain areas acquired under different sun elevation angles. Results demonstrated that the reflectance of sun-averted slopes was evidently enhanced, and the overall quality of the images was improved, with the topographic effect being effectively suppressed. Correlation coefficients between Near Infra-Red band reflectance and the illumination condition were reduced almost to zero, and coefficients of variation also showed some reduction. In comparison with the other two physics-based models (the Sandmeier model and the Li model), the proposed model showed favorable results on the two tested Landsat scenes. With the almost half-century accumulation of Landsat data and the successive launch and operation of Landsat 8, the improved model in this paper can be potentially helpful for the topographic correction of Landsat and Landsat-like data.
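
    For reference, the illumination condition and the classic semi-empirical C-correction that models of this family improve on can be written in a few lines; the sketch below uses synthetic slope, aspect, and reflectance values throughout.

    ```python
    # Illumination condition cos(i) and a C-correction on synthetic terrain.
    import numpy as np

    rng = np.random.default_rng(3)
    slope  = np.deg2rad(rng.uniform(0, 40, 10000))   # terrain slope
    aspect = np.deg2rad(rng.uniform(0, 360, 10000))  # terrain aspect
    sz, sa = np.deg2rad(35.0), np.deg2rad(150.0)     # solar zenith / azimuth

    # cosine of the local solar incidence angle
    cos_i = np.cos(sz) * np.cos(slope) + np.sin(sz) * np.sin(slope) * np.cos(sa - aspect)

    rho = 0.3 * cos_i / np.cos(sz) + rng.normal(0, 0.01, cos_i.size)  # synthetic NIR band

    b, a = np.polyfit(cos_i, rho, 1)                 # rho = a + b * cos_i
    c = a / b                                        # the empirical C parameter
    rho_corr = rho * (np.cos(sz) + c) / (cos_i + c)  # C-correction

    print("corr(cos_i, rho) before/after:",
          np.corrcoef(cos_i, rho)[0, 1], np.corrcoef(cos_i, rho_corr)[0, 1])
    ```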

  10. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these, and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study......
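
    A generic smooth-transition error-correction form of the kind this model class covers can be written as follows (the notation is generic and not necessarily the authors' exact specification):

    ```latex
    \Delta X_t = \alpha\, g\!\left(\beta' X_{t-1};\, \gamma, c\right)\beta' X_{t-1}
                 + \sum_{i=1}^{k-1} \Gamma_i\, \Delta X_{t-i} + \mu + \varepsilon_t,
    \qquad
    g(s;\gamma,c) = \frac{1}{1 + e^{-\gamma (s - c)}} ,
    ```

    so the loading on the error-correction term varies smoothly with the disequilibrium itself.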

  11. Model-based bootstrapping when correcting for measurement error with application to logistic regression.

    Science.gov (United States)

    Buonaccorsi, John P; Romeo, Giovanni; Thoresen, Magne

    2018-03-01

    When fitting regression models, measurement error in any of the predictors typically leads to biased coefficients and incorrect inferences. A plethora of methods have been proposed to correct for this. Obtaining standard errors and confidence intervals using the corrected estimators can be challenging and, in addition, there is concern about remaining bias in the corrected estimators. The bootstrap, which is one option to address these problems, has received limited attention in this context. It has usually been employed by simply resampling observations, which, while suitable in some situations, is not always formally justified. In addition, the simple bootstrap does not allow for estimating bias in nonlinear models, including logistic regression. Model-based bootstrapping, which can potentially estimate bias in addition to being robust to the original sampling design and to whether the measurement error variance is constant, has received limited attention. However, it faces challenges that are not present in handling regression models with no measurement error. This article develops new methods for model-based bootstrapping when correcting for measurement error in logistic regression with replicate measures. The methodology is illustrated using two examples, and a series of simulations are carried out to assess and compare the simple and model-based bootstrap methods, as well as other standard methods. While not always perfect, the model-based approaches offer some distinct improvements over the other methods. © 2017, The International Biometric Society.
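
    The flavor of the approach can be sketched as follows: estimate the measurement error variance from the replicates, correct the fit (here by simple regression calibration rather than the paper's estimator), and then bootstrap by regenerating both outcomes and error-prone replicates from the fitted model. All settings below are invented.

    ```python
    # Regression calibration + model-based bootstrap (generic illustration).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n, beta = 2000, 1.0
    x = rng.normal(0, 1, n)
    w = x[:, None] + rng.normal(0, 0.8, (n, 2))        # two error-prone replicates
    y = rng.binomial(1, 1 / (1 + np.exp(-beta * x)))

    def rc_fit(w, y):
        """Regression-calibration slope plus fitted variance components."""
        wbar = w.mean(1)
        s2u = np.mean(np.var(w, axis=1, ddof=1))       # measurement error variance
        s2x = wbar.var(ddof=1) - s2u / w.shape[1]      # true-covariate variance
        xhat = wbar.mean() + s2x / (s2x + s2u / w.shape[1]) * (wbar - wbar.mean())
        b = sm.Logit(y, sm.add_constant(xhat)).fit(disp=0).params[1]
        return b, wbar.mean(), s2x, s2u

    b, mu, s2x, s2u = rc_fit(w, y)
    boot = []
    for _ in range(200):                               # model-based bootstrap
        xs = rng.normal(mu, np.sqrt(s2x), n)           # pseudo-true covariates
        ws = xs[:, None] + rng.normal(0, np.sqrt(s2u), (n, 2))
        ys = rng.binomial(1, 1 / (1 + np.exp(-b * xs)))
        boot.append(rc_fit(ws, ys)[0])
    print("RC estimate:", b, "model-based bootstrap SE:", np.std(boot))
    ```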

  12. CD-SEM real time bias correction using reference metrology based modeling

    Science.gov (United States)

    Ukraintsev, V.; Banke, W.; Zagorodnev, G.; Archie, C.; Rana, N.; Pavlovsky, V.; Smirnov, V.; Briginas, I.; Katnani, A.; Vaid, A.

    2018-03-01

    Accuracy of patterning impacts yield, IC performance and technology time to market. Accuracy of patterning relies on optical proximity correction (OPC) models built using CD-SEM inputs and on intra-die critical dimension (CD) control based on CD-SEM. Sub-nanometer measurement uncertainty (MU) of CD-SEM is required for current technologies. The reported design- and process-related bias variation of CD-SEM is in the range of several nanometers. Reference metrology and numerical modeling are used to correct SEM; both methods are too slow to be used for real-time bias correction. We report on real-time CD-SEM bias correction using empirical models based on reference metrology (RM) data. A significant amount of currently untapped information (sidewall angle, corner rounding, etc.) is obtainable from SEM waveforms. Using additional RM information provided for a specific technology (design rules, materials, processes), CD extraction algorithms can be pre-built and then used in real time for accurate CD extraction from regular CD-SEM images. The art and challenge of SEM modeling lie in finding a robust correlation between SEM waveform features and the bias of CD-SEM, as well as in minimizing the RM inputs needed to create a model that is accurate within the design and process space. The new approach was applied to improve the CD-SEM accuracy of 45 nm GATE and 32 nm MET1 OPC 1D models. In both cases the MU of the state-of-the-art CD-SEM was improved by 3x and reduced to the nanometer level. A similar approach can be applied to 2D (end of line, contours, etc.) and 3D (sidewall angle, corner rounding, etc.) cases.
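
    At its core, such an empirical model is a regression from waveform-derived features to the RM-measured bias, built offline and applied per measurement. The features, coefficients, and data below are synthetic stand-ins, not values from the paper.

    ```python
    # Empirical CD-SEM bias model: offline fit against RM, applied in "real time".
    import numpy as np

    rng = np.random.default_rng(5)
    n = 300
    feats = rng.normal(0, 1, (n, 3))                 # e.g. edge slope, peak width, contrast
    bias = 1.5 + feats @ np.array([0.8, -0.4, 0.2]) + rng.normal(0, 0.1, n)  # nm, from RM

    X = np.column_stack([np.ones(n), feats])
    coef, *_ = np.linalg.lstsq(X, bias, rcond=None)  # fit once, offline, against RM

    def corrected_cd(cd_sem, waveform_feats):
        """Apply the pre-built bias model to a new measurement."""
        return cd_sem + coef[0] + waveform_feats @ coef[1:]

    print(corrected_cd(45.0, rng.normal(0, 1, 3)))
    ```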

  13. Sandmeier model based topographic correction to lunar spectral profiler (SP) data from KAGUYA satellite.

    Science.gov (United States)

    Chen, Sheng-Bo; Wang, Jing-Ran; Guo, Peng-Ju; Wang, Ming-Chang

    2014-09-01

    The Moon may be considered a frontier base for deep space exploration. Spectral analysis is one of the key techniques for determining the rock and mineral compositions of the lunar surface. However, the lunar topographic relief is more pronounced than that of the Earth, so it is necessary to apply a topographic correction to lunar spectral data before they are used to retrieve compositions. In the present paper, a lunar Sandmeier model is proposed that considers the radiance effects of both macro-scale and ambient topographic relief, and a reflectance correction model is derived from it. Spectral Profiler (SP) data from the KAGUYA satellite over the Sinus Iridum quadrangle were taken as an example, and digital elevation data from the Lunar Orbiter Laser Altimeter were used to calculate the slope, aspect, incidence and emergence angles, and terrain-viewing factor for the topographic correction. The lunar surface reflectance from the SP data was then corrected by the proposed model after the direct component of irradiance on a horizontal surface was derived. As a result, the high spectral reflectance of sun-facing slopes is decreased and the low spectral reflectance of slopes facing away from the sun is compensated. The statistical histogram of reflectance-corrected pixel numbers presents a Gaussian distribution. Therefore, the model is robust for correcting the lunar topographic effect and estimating lunar surface reflectance.

  14. New component-based normalization method to correct PET system models

    International Nuclear Information System (INIS)

    Kinouchi, Shoko; Miyoshi, Yuji; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji; Nishikido, Fumihiko; Tashima, Hideaki

    2011-01-01

    Normalization correction is necessary to obtain high-quality reconstructed images in positron emission tomography (PET). There are two basic types of normalization methods: the direct method and component-based methods. The former suffers from the problem that a huge number of counts in the blank scan data is required. The latter methods have therefore been proposed to obtain normalization coefficients with high statistical accuracy from a small number of counts in the blank scan data. In iterative image reconstruction methods, on the other hand, the quality of the obtained reconstructed images depends on the accuracy of the system modeling. Therefore, the normalization weighting approach, in which normalization coefficients are applied directly to the system matrix instead of to a sinogram, has been proposed. In this paper, we propose a new component-based normalization method to correct system model accuracy. In the proposed method, two components are defined and calculated iteratively in such a way as to minimize the errors of system modeling. To compare the proposed method and the direct method, we applied both methods to our small OpenPET prototype system. We achieved acceptable statistical accuracy of the normalization coefficients while reducing the number of counts in the blank scan data to one-fortieth of that required by the direct method. (author)
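
    One classic component-based scheme, shown below as a rough sketch only (not the authors' two-component method), factors the blank-scan coincidences as counts_ij ≈ eps_i·eps_j and recovers crystal efficiencies by a fan-sum fixed-point iteration; the detector size and count level are invented.

    ```python
    # Fan-sum sketch of component-based normalization on a toy blank scan.
    import numpy as np

    rng = np.random.default_rng(6)
    n = 32
    eps_true = rng.uniform(0.8, 1.2, n)                # crystal efficiencies
    counts = rng.poisson(200 * np.outer(eps_true, eps_true)).astype(float)
    np.fill_diagonal(counts, 0)                        # no self-coincidences

    eps = np.ones(n)
    for _ in range(50):                                # fixed-point iteration
        eps = counts.sum(axis=1) / (eps.sum() - eps)   # fan_i ~ eps_i * (sum(eps) - eps_i)
        eps *= n / eps.sum()                           # efficiencies known only up to scale
    print("max relative error:",
          np.abs(eps / eps.mean() - eps_true / eps_true.mean()).max())
    ```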

  15. Physiologically-based PK/PD modelling of therapeutic macromolecules.

    Science.gov (United States)

    Thygesen, Peter; Macheras, Panos; Van Peer, Achiel

    2009-12-01

    Therapeutic proteins are a diverse class of drugs consisting of naturally occurring or modified proteins, and due to their size and physico-chemical properties, they can pose challenges for pharmacokinetic and pharmacodynamic studies. Physiologically-based pharmacokinetic (PBPK) modelling has been effective for early in silico prediction of the pharmacokinetic properties of new drugs. The aim of the present workshop was to discuss the feasibility of PBPK modelling of macromolecules. The classical PBPK approach was discussed with a presentation of the successful example of PBPK modelling of cyclosporine A. The PBPK model included transport of cyclosporine across cell membranes, affinity to plasma proteins, and active membrane transporters to describe drug transport between physiological compartments. For macromolecules, complex PBPK modelling of permeability-limited and/or target-mediated distribution was discussed. It was generally agreed that PBPK modelling is feasible and desirable. The role of the lymphatic system should be considered when absorption after extravascular administration is modelled. Target-mediated drug disposition was regarded as an important feature for the generation of PK models. Complex PK models may not be necessary when a limited number of organs are affected. More mechanistic PK/PD models will be relevant when adverse events/toxicity are included in the PK/PD modelling.

  16. Beam-hardening correction in CT based on basis image and TV model

    International Nuclear Information System (INIS)

    Li Qingliang; Yan Bin; Li Lei; Sun Hongsheng; Zhang Feng

    2012-01-01

    In X-ray computed tomography, beam hardening leads to artifacts and reduces image quality. This paper analyzes how beam hardening influences the original projections and, accordingly, puts forward a new beam-hardening correction method based on basis images and a TV model. Firstly, according to the physical characteristics of beam hardening, a preliminary correction model with adjustable parameters is set up. Secondly, using different parameters, the original projections are processed with the correction model. Thirdly, the projections are reconstructed to obtain a series of basis images. Finally, a linear combination of the basis images gives the final reconstructed image. Here, with the total variation of the final reconstructed image as the cost function, the linear combination coefficients for the basis images are determined by an iterative method. To verify the effectiveness of the proposed method, experiments were carried out on a real phantom and an industrial part. The results show that the algorithm significantly suppresses cupping and streak artifacts in the CT image. (authors)
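
    The combination step can be sketched as a small constrained optimization: choose weights for precomputed basis images that minimize the total variation of the weighted sum. The toy below builds a piecewise-constant phantom with a cupping-like artifact of varying sign across the basis images; everything, including the artifact model, is synthetic.

    ```python
    # Choosing basis-image combination weights by minimizing total variation.
    import numpy as np
    from scipy.optimize import minimize

    def tv(img):
        return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

    rng = np.random.default_rng(7)
    clean = np.zeros((32, 32))
    clean[8:24, 8:24] = 1.0                                          # piecewise-constant phantom
    cup = 0.4 * ((np.indices((32, 32)) - 16.0) ** 2).sum(0) / 16**2  # cupping-like artifact
    basis = [clean + s * cup + rng.normal(0, 0.01, clean.shape) for s in (-1.0, 0.0, 1.0)]

    def cost(w):
        return tv(sum(wk * bk for wk, bk in zip(w, basis)))

    cons = {"type": "eq", "fun": lambda w: w.sum() - 1.0}            # weights sum to one
    res = minimize(cost, x0=np.ones(3) / 3, constraints=cons)
    print("weights:", res.x)   # any combination cancelling the cupping term wins
    ```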

  17. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, considering a computationally time consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit...... of the model correction factor method, is that in simpler form not using gradient information on the original limit state function or only using this information once, a drastic reduction of the number of limit state evaluations is obtained together with good approximations of the reliability. Methods......

  18. Significance of Bias Correction in Drought Frequency and Scenario Analysis Based on Climate Models

    Science.gov (United States)

    Aryal, Y.; Zhu, J.

    2015-12-01

    Assessment of future drought characteristics is difficult as climate models usually have bias in simulating precipitation frequency and intensity. To overcome this limitation, output from climate models needs to be bias corrected based on the specific purpose of the application. In this study, we examine the significance of bias correction in the context of drought frequency and scenario analysis using output from climate models. In particular, we investigate the performance of three widely used bias correction techniques: (1) monthly bias correction (MBC), (2) nested bias correction (NBC), and (3) equidistant quantile mapping (EQM). The effect of bias correction on future scenarios of drought frequency is also analyzed. The characteristics of drought are investigated in terms of frequency and severity in nine representative locations in different climatic regions across the United States using regional climate model (RCM) output from the North American Regional Climate Change Assessment Program (NARCCAP). The Standardized Precipitation Index (SPI) is used as the means to compare and forecast drought characteristics at different timescales. Systematic biases in the RCM precipitation output are corrected against the National Centers for Environmental Prediction (NCEP) North American Regional Reanalysis (NARR) data. The results demonstrate that bias correction significantly decreases the RCM errors in reproducing drought frequency derived from the NARR data. Preserving the mean and standard deviation is essential for climate models in drought frequency analysis. RCM biases have both regional and timescale dependence. Different timescales of input precipitation in the bias corrections show similar results. Drought frequency obtained from the RCM future (2040-2070) scenarios is compared with that from the historical simulations. The changes in drought characteristics occur in all climatic regions. The relative changes in drought frequency in future scenario in relation to
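
    As a sketch of one of the three techniques, the snippet below implements the equidistant quantile-mapping idea (in the sense usually attributed to Li et al. 2010) with empirical quantiles on synthetic gamma-distributed precipitation; the distributions and sample sizes are invented.

    ```python
    # Equidistant quantile mapping (EQM) on synthetic precipitation.
    import numpy as np

    rng = np.random.default_rng(8)
    obs      = rng.gamma(2.0, 2.0, 5000)        # reference ("NARR-like") precipitation
    mod_hist = rng.gamma(2.0, 2.6, 5000)        # biased RCM, historical run
    mod_fut  = rng.gamma(2.4, 2.6, 5000)        # biased RCM, future run

    q = np.linspace(0.001, 0.999, 999)
    def inv_cdf(x):                             # empirical quantile function on grid q
        return np.quantile(x, q)

    def ecdf(sample, x):                        # empirical CDF evaluated at x
        return np.clip(np.searchsorted(np.sort(sample), x) / len(sample), 0.001, 0.999)

    tau = ecdf(mod_fut, mod_fut)                # each future value's own quantile
    corrected = mod_fut + np.interp(tau, q, inv_cdf(obs)) - np.interp(tau, q, inv_cdf(mod_hist))

    print("means  obs / raw future / corrected:",
          obs.mean(), mod_fut.mean(), corrected.mean())
    ```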

  19. A generic whole body physiologically based pharmacokinetic model for therapeutic proteins in PK-Sim.

    Science.gov (United States)

    Niederalt, Christoph; Kuepfer, Lars; Solodenko, Juri; Eissing, Thomas; Siegmund, Hans-Ulrich; Block, Michael; Willmann, Stefan; Lippert, Jörg

    2018-04-01

    Proteins are an increasingly important class of drugs used as therapeutic as well as diagnostic agents. A generic physiologically based pharmacokinetic (PBPK) model was developed in order to represent, at the whole-body level, the fundamental mechanisms driving the distribution and clearance of large molecules like therapeutic proteins. The model was built as an extension of the PK-Sim model for small molecules, incorporating (i) the two-pore formalism for drug extravasation from blood plasma to interstitial space, (ii) lymph flow, (iii) endosomal clearance, and (iv) protection from endosomal clearance by neonatal Fc receptor (FcRn) mediated recycling, which is especially relevant for antibodies. For model development and evaluation, PK data were used for compounds with a wide range of solute radii. The model supports the integration of knowledge gained during all development phases of therapeutic proteins, enables translation from pre-clinical species to human and allows predictions of tissue concentration profiles which are of relevance for the analysis of on-target pharmacodynamic effects as well as off-target toxicity. The current implementation of the model replaces the generic protein PBPK model available in PK-Sim since version 4.2 and becomes part of the Open Systems Pharmacology Suite.

  20. Rank-based Tests of the Cointegrating Rank in Semiparametric Error Correction Models

    NARCIS (Netherlands)

    Hallin, M.; van den Akker, R.; Werker, B.J.M.

    2012-01-01

    This paper introduces rank-based tests for the cointegrating rank in an Error Correction Model with i.i.d. elliptical innovations. The tests are asymptotically distribution-free, and their validity does not depend on the actual distribution of the innovations. This result holds despite the

  1. Gene therapy for carcinoma of the breast: Therapeutic genetic correction strategies

    International Nuclear Information System (INIS)

    Obermiller, Patrice S; Tait, David L; Holt, Jeffrey T

    2000-01-01

    Gene therapy is a therapeutic approach that is designed to correct specific molecular defects that contribute to the cause or progression of cancer. Genes that are mutated or deleted in cancers include the cancer susceptibility genes p53 and BRCA1. Because mutational inactivation of gene function is specific to tumor cells in these settings, cancer gene correction strategies may provide an opportunity for selective targeting without significant toxicity for normal nontumor cells. Both p53 and BRCA1 appear to inhibit cancer cells that lack mutations in these genes, suggesting that the so-called gene correction strategies may have broader potential than initially believed. Increasing knowledge of cancer genetics has identified these and other genes as potential targets for gene replacement therapy. Initial patient trials of p53 and BRCA1 gene therapy have provided some indications of potential efficacy, but have also identified areas of basic and clinical research that are needed before these approaches may be widely used in patient care.

  2. Case study of atmospheric correction on CCD data of HJ-1 satellite based on 6S model

    International Nuclear Information System (INIS)

    Xue, Xiaojuan; Meng, Qingyan; Xie, Yong; Sun, Zhangli; Wang, Chang; Zhao, Hang

    2014-01-01

    In this study, the atmospheric radiative transfer model 6S was used to simulate the radiative transfer process along the surface-atmosphere-sensor path. An algorithm based on a look-up table (LUT) generated by the 6S model was used to correct HJ-1 CCD images pixel by pixel. Then, the effect of atmospheric correction on CCD data of the HJ-1 satellite was analyzed in terms of the spectral curves and evaluated against the measured reflectance acquired during the HJ-1B satellite overpass; finally, the normalized difference vegetation index (NDVI) before and after atmospheric correction was compared. The results showed: (1) Atmospheric correction of HJ-1 CCD data can reduce the "increase" effect of the atmosphere. (2) Apparent reflectance values are higher than the surface reflectance corrected by the 6S model in bands 1-3, but lower in the near-infrared band; the corrected surface reflectance values agree well with the measured reflectance values. (3) The NDVI increases significantly after atmospheric correction, which indicates that atmospheric correction can highlight the vegetation information.
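
    Once the LUT supplies per-pixel coefficients, the standard 6S inversion from top-of-atmosphere radiance to surface reflectance is a one-liner; the coefficient values below are invented, and in practice (xa, xb, xc) are interpolated from the LUT for each pixel's geometry and atmospheric state.

    ```python
    # 6S-style inversion from TOA radiance to surface reflectance.
    import numpy as np

    def surface_reflectance(L, xa, xb, xc):
        y = xa * L - xb
        return y / (1.0 + xc * y)

    L = np.array([55.0, 70.0, 90.0])   # measured TOA radiances for three pixels
    print(surface_reflectance(L, xa=0.0026, xb=0.11, xc=0.16))  # LUT-interpolated in practice
    ```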

  3. Bias-correction in vector autoregressive models

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard

    2014-01-01

    We analyze the properties of various methods for bias-correcting parameter estimates in both stationary and non-stationary vector autoregressive models. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study......, we show that when the model is stationary this simple bias formula compares very favorably to bootstrap bias-correction, both in terms of bias and mean squared error. In non-stationary models, the analytical bias formula performs noticeably worse than bootstrapping. Both methods yield a notable...... improvement over ordinary least squares. We pay special attention to the risk of pushing an otherwise stationary model into the non-stationary region of the parameter space when correcting for bias. Finally, we consider a recently proposed reduced-bias weighted least squares estimator, and we find...

  4. ANALYSIS AND CORRECTION OF SYSTEMATIC HEIGHT MODEL ERRORS

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-06-01

    The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, influenced by the satellite camera, the system calibration and attitude registration. As standard these days, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, e.g. caused by a small base length, such an image orientation does not lead to the possible accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and only a 4 Hz attitude recording, which may not be satisfactory. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images, like detailed satellite orientation information, are not always available. There is a tendency towards systematic deformation in a Pléiades tri-stereo combination with a small base length; the small base length enlarges small systematic errors in object space. But also in some other satellite stereo combinations systematic height model errors have been detected. The largest influence is the unsatisfactory leveling of the height models, but low-frequency height deformations can also be seen. In theory, a tilt of the DHM can be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition, a model deformation at GCP locations may lead to suboptimal DHM leveling. Supported by reference height models, better accuracy has been reached. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS
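
    The leveling step described above amounts to fitting and removing a low-order trend in the height differences against a reference model; the sketch below removes a plane (shift plus x/y tilt) from a synthetic tilted DHM, as one might do with SRTM or AW3D30 as reference. The surfaces and tilt values are invented.

    ```python
    # DHM leveling: fit and subtract a plane in the differences to a reference DSM.
    import numpy as np

    rng = np.random.default_rng(9)
    ny, nx = 200, 200
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    ref = 50 * np.sin(xx / 40.0) + 30 * np.cos(yy / 25.0)                     # reference DSM
    dhm = ref + 0.02 * xx - 0.015 * yy + 3.0 + rng.normal(0, 0.5, ref.shape)  # tilted DHM

    d = (dhm - ref).ravel()
    A = np.column_stack([np.ones(d.size), xx.ravel(), yy.ravel()])
    coef, *_ = np.linalg.lstsq(A, d, rcond=None)                  # shift + x/y tilt
    leveled = dhm - (A @ coef).reshape(ny, nx)

    print("RMS before/after leveling:",
          np.sqrt(((dhm - ref) ** 2).mean()), np.sqrt(((leveled - ref) ** 2).mean()))
    ```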

  5. Dendritic cell-based vaccination in cancer: therapeutic implications emerging from murine models

    Directory of Open Access Journals (Sweden)

    Soledad Mac Keon

    2015-05-01

    Dendritic cells (DCs) play a pivotal role in the orchestration of immune responses, and are thus key targets in cancer vaccine design. Since the 2010 FDA approval of the first DC-based cancer vaccine (Sipuleucel-T), there has been a surge of interest in exploiting these cells as a therapeutic option for the treatment of tumors of diverse origin. In spite of the encouraging results obtained in the clinic, many elements of DC-based vaccination strategies need to be optimized. In this context, the use of experimental cancer models can help direct efforts towards an effective vaccine design. This paper reviews recent findings in murine models regarding the antitumoral mechanisms of DC-based vaccination, covering issues related to antigen sources, the use of adjuvants and maturing agents, and the role of DC subsets and their interaction in the initiation of antitumoral immune responses. The summary of such diverse aspects will highlight advantages and drawbacks in the use of murine models, and contribute to the design of successful DC-based translational approaches for cancer treatment.

  6. Real-Time Corrected Traffic Correlation Model for Traffic Flow Forecasting

    Directory of Open Access Journals (Sweden)

    Hua-pu Lu

    2015-01-01

    This paper focuses on the problems of short-term traffic flow forecasting. The main goal is to put forward a traffic correlation model and a real-time correction algorithm for traffic flow forecasting. The traffic correlation model is established based on the temporal-spatial-historical correlation characteristics of traffic big data. In order to simplify the traffic correlation model, this paper presents a correction-coefficient optimization algorithm. Considering the multistate characteristic of traffic big data, a dynamic part is added to the traffic correlation model. A real-time correction algorithm based on a fuzzy neural network is presented to overcome the nonlinear mapping problems. A case study based on a real-world road network in Beijing, China, is implemented to test the efficiency and applicability of the proposed modeling methods.

  7. Treatment Model in Children with Speech Disorders and Its Therapeutic Efficiency

    Directory of Open Access Journals (Sweden)

    Barberena, Luciana

    2014-05-01

    Introduction: Speech articulation disorders affect the intelligibility of speech. Studies on therapeutic models show the effectiveness of communication treatment. Objective: To analyze the progress achieved by treatment with the ABAB—Withdrawal and Multiple Probes Model in children with different degrees of phonological disorders. Methods: The diagnosis of speech articulation disorder was determined by speech and hearing evaluation and complementary tests. The subjects of this research were eight children, with an average age of 5:5 (years:months). The children were distributed into four groups according to the degree of their phonological disorders, based on the percentage of correct consonants, as follows: severe, moderate to severe, mild to moderate, and mild. The phonological treatment applied was the ABAB—Withdrawal and Multiple Probes Model. The development of the therapy by generalization was observed through the comparison between two analyses: contrastive and distinctive features at the moments of evaluation and reevaluation. Results: The following types of generalization were found: to items not used in the treatment (other words), to another position in the word, within a sound class, to other classes of sounds, and to another syllable structure. Conclusion: The different types of generalization studied showed the expansion of production and the proper use of therapy-trained targets in other contexts or untrained environments. Therefore, the analysis of the generalizations proved to be an important criterion to measure therapeutic efficacy.

  8. Improvement of the physically-based groundwater model simulations through complementary correction of its errors

    Directory of Open Access Journals (Sweden)

    Jorge Mauricio Reyes Alcalde

    2017-04-01

    Physically-based groundwater models (PBM), such as MODFLOW, are used as groundwater resource evaluation tools on the assumption that the produced differences (residuals or errors) are white noise. In fact, however, these numerical simulations usually show not only random errors but also systematic errors. For this work, a numerical procedure has been developed to deal with PBM systematic errors, studying their structure in order to model their behavior and correct the results by external and complementary means, through a framework called the Complementary Correction Model (CCM). The application of CCM to a PBM shows a decrease in local biases, a better distribution of errors, and reductions in their temporal and spatial correlations, with a 73% reduction in global RMSN over the original PBM. This methodology seems an interesting way to update a PBM while avoiding the work and costs of interfering with its internal structure.
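
    A minimal version of the complementary-correction idea is sketched below: fit a simple external error model (here an AR(1) on head residuals, merely a stand-in for the paper's CCM) and add its one-step forecast to the unmodified PBM output. All series are synthetic.

    ```python
    # Complementary correction of a physically-based model via an AR(1) residual model.
    import numpy as np

    rng = np.random.default_rng(10)
    t = np.arange(500)
    observed = 100 + 2 * np.sin(t / 30.0) + rng.normal(0, 0.1, t.size)  # "true" heads
    pbm = observed - (1.0 + 1.2 * np.sin(t / 30.0))                     # PBM with systematic error

    res = observed - pbm                                   # training residuals
    rc = res - res.mean()
    phi = np.dot(rc[1:], rc[:-1]) / np.dot(rc[:-1], rc[:-1])  # AR(1) coefficient
    pred_res = res.mean() + phi * rc[:-1]                  # one-step residual forecast
    corrected = pbm[1:] + pred_res                         # PBM + complementary correction

    print("RMSE PBM vs corrected:",
          np.sqrt(((observed[1:] - pbm[1:]) ** 2).mean()),
          np.sqrt(((observed[1:] - corrected) ** 2).mean()))
    ```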

  9. Correction tool for Active Shape Model based lumbar muscle segmentation.

    Science.gov (United States)

    Valenzuela, Waldo; Ferguson, Stephen J; Ignasiak, Dominika; Diserens, Gaelle; Vermathen, Peter; Boesch, Chris; Reyes, Mauricio

    2015-08-01

    In the clinical environment, accuracy and speed of the image segmentation process play a key role in the analysis of pathological regions. Despite advances in anatomic image segmentation, time-effective correction tools are commonly needed to improve segmentation results. Therefore, these tools must provide faster corrections with a low number of interactions, and a user-independent solution. In this work we present a new interactive method for correcting image segmentations. Given an initial segmentation and the original image, our tool provides a 2D/3D environment that enables 3D shape correction through simple 2D interactions. Our scheme is based on direct manipulation of free-form deformation adapted to a 2D environment. This approach enables an intuitive and natural correction of 3D segmentation results. The developed method has been implemented into a software tool and has been evaluated for the task of lumbar muscle segmentation from Magnetic Resonance Images. Experimental results show that full segmentation correction could be performed within an average correction time of 6±4 minutes and an average of 68±37 interactions, while maintaining the quality of the final segmentation result within an average Dice coefficient of 0.92±0.03.

  10. A Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images

    Science.gov (United States)

    Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei

    2018-04-01

    Topographic correction of surface reflectance in rugged terrain areas is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of the slope surface from high-quality satellite images such as Landsat 8 OLI. However, as more and more image data become available from a variety of sensors, we sometimes cannot obtain the accurate sensor calibration parameters and atmospheric conditions that are needed in a physics-based topographic correction model. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images without accurate calibration parameters. Based on this model we can obtain topographically corrected surface reflectance from DN data, and we tested and verified the model with image data from the Chinese satellites HJ and GF. The results show that the correlation factor was reduced by almost 85 % for the near-infrared bands and the overall classification accuracy increased by 14 % after correction for HJ. The reflectance difference between slopes facing toward and away from the sun was also reduced after correction.

  11. Corrective interpersonal experience in psychodrama group therapy: a comprehensive process analysis of significant therapeutic events.

    Science.gov (United States)

    McVea, Charmaine S; Gow, Kathryn; Lowe, Roger

    2011-07-01

    This study investigated the process of resolving painful emotional experience during psychodrama group therapy, by examining significant therapeutic events within seven psychodrama enactments. A comprehensive process analysis of four resolved and three not-resolved cases identified five meta-processes which were linked to in-session resolution. One was a readiness to engage in the therapeutic process, which was influenced by client characteristics and the client's experience of the group; and four were therapeutic events: (1) re-experiencing with insight; (2) activating resourcefulness; (3) social atom repair with emotional release; and (4) integration. A corrective interpersonal experience (social atom repair) healed the sense of fragmentation and interpersonal disconnection associated with unresolved emotional pain, and emotional release was therapeutically helpful when located within the enactment of this new role relationship. Protagonists who experienced resolution reported important improvements in interpersonal functioning and sense of self which they attributed to this experience.

  12. Exemplar-based human action pose correction.

    Science.gov (United States)

    Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen

    2014-07-01

    The launch of Xbox Kinect has built a very successful computer vision product and made a big impact on the gaming industry. This sheds light on a wide variety of potential applications related to action recognition. The accurate estimation of human poses from the depth image is universally a critical step. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method to learn to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporating pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over contemporary approaches, including what is delivered by the current Kinect system. Our experiments on facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems.
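
    A bare-bones version of the exemplar idea: store (estimated pose, correction offset) pairs from training data, then correct a new estimate with the mean offset of its nearest exemplars. The sketch below uses a tiny synthetic pose space and a smooth systematic bias; it illustrates the lookup-and-correct pattern only, not the paper's learned inhomogeneous model.

    ```python
    # Exemplar lookup-and-correct on synthetic poses.
    import numpy as np

    rng = np.random.default_rng(11)
    n, d = 4000, 6                                     # exemplars; 2 joints x 3D
    true_pose = rng.normal(0, 1, (n, d))
    est_pose = true_pose + 0.3 * np.tanh(true_pose)    # smooth systematic bias
    offsets = true_pose - est_pose                     # stored corrections

    def correct(query, k=10):
        idx = np.argsort(np.linalg.norm(est_pose - query, axis=1))[:k]
        return query + offsets[idx].mean(axis=0)       # mean offset of nearest exemplars

    q_true = rng.normal(0, 1, d)
    q_est = q_true + 0.3 * np.tanh(q_true)
    print("error before:", np.linalg.norm(q_est - q_true),
          "after:", np.linalg.norm(correct(q_est) - q_true))
    ```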

  13. A sun-crown-sensor model and adapted C-correction logic for topographic correction of high resolution forest imagery

    Science.gov (United States)

    Fan, Yuanchao; Koukal, Tatjana; Weisberg, Peter J.

    2014-10-01

    Canopy shadowing mediated by topography is an important source of radiometric distortion on remote sensing images of rugged terrain. Topographic correction based on the sun-canopy-sensor (SCS) model improves significantly over that based on the sun-terrain-sensor (STS) model for surfaces with high forest canopy cover, because the SCS model considers and preserves the geotropic nature of trees. The SCS model accounts for sub-pixel canopy shadowing effects and normalizes the sunlit canopy area within a pixel. However, it does not account for mutual shadowing between neighboring pixels. Pixel-to-pixel shadowing is especially apparent for fine resolution satellite images in which individual tree crowns are resolved. This paper proposes a new topographic correction model: the sun-crown-sensor (SCnS) model based on high-resolution satellite imagery (IKONOS) and a high-precision LiDAR digital elevation model. An improvement on the C-correction logic with a radiance partitioning method to address the effects of diffuse irradiance is also introduced (SCnS + C). In addition, we incorporate a weighting variable, based on pixel shadow fraction, on the direct and diffuse radiance portions to enhance the retrieval of at-sensor radiance and reflectance of highly shadowed tree pixels and form another variety of SCnS model (SCnS + W). Model evaluation with IKONOS test data showed that the new SCnS model outperformed the STS and SCS models in quantifying the correlation between terrain-regulated illumination factor and at-sensor radiance. Our adapted C-correction logic based on the sun-crown-sensor geometry and radiance partitioning better represented the general additive effects of diffuse radiation than C parameters derived from the STS or SCS models. The weighting factor Wt also significantly enhanced correction results by reducing within-class standard deviation and balancing the mean pixel radiance between sunlit and shaded slopes. We analyzed these improvements with model

  14. The robust corrective action priority-an improved approach for selecting competing corrective actions in FMEA based on principle of robust design

    Science.gov (United States)

    Sutrisno, Agung; Gunawan, Indra; Vanany, Iwan

    2017-11-01

    In spite of being an integral part of risk-based quality improvement efforts, studies improving the quality of selection of corrective action priority using the FMEA technique are still limited in the literature, and none considers robustness and risk in selecting competing improvement initiatives. This study proposes a theoretical model for selecting among competing corrective actions by considering their robustness and risk. We incorporate the principle of robust design in computing the preference score among corrective action candidates. Along with the cost and benefit of competing corrective actions, we also incorporate their risk and robustness. An example is provided to demonstrate the applicability of the proposed model.
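
    One way to cash out such a preference score, shown below purely as an invented illustration rather than the paper's model, is to weight each action's benefit/cost ratio by a Taguchi-style larger-the-better signal-to-noise ratio computed over noise scenarios, so that consistently effective actions rank higher.

    ```python
    # Robustness-weighted ranking of corrective actions (invented scoring scheme).
    import numpy as np

    actions = ["redesign fixture", "add inspection", "operator training"]
    benefit = np.array([8.0, 5.0, 4.0])          # expected risk reduction
    cost    = np.array([4.0, 2.0, 1.0])
    # effectiveness of each action replicated under different noise conditions:
    y = np.array([[0.9, 0.8, 0.85],
                  [0.7, 0.4, 0.9],
                  [0.6, 0.55, 0.6]])

    sn = -10 * np.log10(np.mean(1.0 / y**2, axis=1))   # larger-the-better S/N ratio
    score = (benefit / cost) * (sn - sn.min() + 1)     # robustness-weighted preference
    for a, s in sorted(zip(actions, score), key=lambda p: -p[1]):
        print(f"{a}: {s:.2f}")
    ```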

  15. Enabling full-field physics-based optical proximity correction via dynamic model generation

    Science.gov (United States)

    Lam, Michael; Clifford, Chris; Raghunathan, Ananthan; Fenger, Germain; Adam, Kostas

    2017-07-01

    As extreme ultraviolet lithography comes closer to reality for high-volume production, its peculiar modeling challenges related to both interfield and intrafield effects have necessitated building an optical proximity correction (OPC) infrastructure that operates with field position dependency. Previous state-of-the-art approaches to modeling field dependency used piecewise constant models where static input models are assigned to specific x/y-positions within the field. OPC and simulation could assign the proper static model based on simulation-level placement. However, in the realm of 7 and 5 nm feature sizes, small discontinuities in OPC from piecewise constant model changes can cause unacceptable levels of edge placement error. The introduction of dynamic model generation (DMG) can be shown to effectively avoid these dislocations by providing unique mask and optical models per simulation region, allowing a near continuum of models through the field. DMG allows unique models for electromagnetic field, apodization, aberrations, etc. to vary through the entire field and provides a capability to precisely and accurately model systematic field signatures.

  16. Adeno-Associated Virus-Mediated Correction of a Canine Model of Glycogen Storage Disease Type Ia

    Science.gov (United States)

    Weinstein, David A.; Correia, Catherine E.; Conlon, Thomas; Specht, Andrew; Verstegen, John; Onclin-Verstegen, Karine; Campbell-Thompson, Martha; Dhaliwal, Gurmeet; Mirian, Layla; Cossette, Holly; Falk, Darin J.; Germain, Sean; Clement, Nathalie; Porvasnik, Stacy; Fiske, Laurie; Struck, Maggie; Ramirez, Harvey E.; Jordan, Juan; Andrutis, Karl; Chou, Janice Y.; Byrne, Barry J.

    2010-01-01

    Glycogen storage disease type Ia (GSDIa; von Gierke disease; MIM 232200) is caused by a deficiency in glucose-6-phosphatase-α. Patients with GSDIa are unable to maintain glucose homeostasis and suffer from severe hypoglycemia, hepatomegaly, hyperlipidemia, hyperuricemia, and lactic acidosis. The canine model of GSDIa is naturally occurring and recapitulates almost all aspects of the human form of disease. We investigated the potential of recombinant adeno-associated virus (rAAV) vector-based therapy to treat the canine model of GSDIa. After delivery of a therapeutic rAAV2/8 vector to a 1-day-old GSDIa dog, improvement was noted as early as 2 weeks posttreatment. Correction was transient, however, and by 2 months posttreatment the rAAV2/8-treated dog could no longer sustain normal blood glucose levels after 1 hr of fasting. The same animal was then dosed with a therapeutic rAAV2/1 vector delivered via the portal vein. Two months after rAAV2/1 dosing, both blood glucose and lactate levels were normal at 4 hr postfasting. With more prolonged fasting, the dog still maintained near-normal glucose concentrations, but lactate levels were elevated by 9 hr, indicating that partial correction was achieved. Dietary glucose supplementation was discontinued starting 1 month after rAAV2/1 delivery and the dog continues to thrive with minimal laboratory abnormalities at 23 months of age (18 months after rAAV2/1 treatment). These results demonstrate that delivery of rAAV vectors can mediate significant correction of the GSDIa phenotype and that gene transfer may be a promising alternative therapy for this disease and other genetic diseases of the liver. PMID:20163245

  17. Model-Based Illumination Correction for Face Images in Uncontrolled Scenarios

    NARCIS (Netherlands)

    Boom, B.J.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2009-01-01

    Face Recognition under uncontrolled illumination conditions is partly an unsolved problem. Several illumination correction methods have been proposed, but these are usually tested on illumination conditions created in a laboratory. Our focus is more on uncontrolled conditions. We use the Phong model

  18. A therapeutic gain model for brachytherapy

    International Nuclear Information System (INIS)

    Wigg, D.R.

    2003-01-01

    When treating with continuous irradiation, the potential therapeutic gain or loss depends on several treatment, normal tissue and tumour variables. There are similarities between the equations defining tissue effects with fractionated treatment and with brachytherapy. The former are sensitive to dose per fraction (and to incomplete repair for short intervals between treatments) and the latter are sensitive to dose rate and continuous repair factors. Because of these similarities, for typical tumours and normal tissues, dose per fraction and dose rate generally work in similar directions. As the dose per fraction or dose rate increases, the therapeutic gain falls. With continuous irradiation, dose-rate effects are determined by beta cell kill and hence by the absolute value of beta. Minimal sensitivity occurs at very low and very high dose rates. The magnitude of cell kill also depends on the continuous repair factor (g), which is a function of the treatment time and the repair half time (in hours) of the tissues (repair half time T1/2 = ln(2)/h, where h is the repair constant). An interactive optimising model has been written to predict the therapeutic gain or loss as the parameter values are varied. This model includes the tumour and normal tissue alpha/beta parameters (Gy) (or individual alpha and beta values), their repair half times, dose rates and overall treatment time. The model is based on the Linear-Quadratic equation and the Total Effect (TE) method of Thames and Hendry, although the Extrapolated Response Dose (ERD) method of Barendsen produces the same results. The model is written so that the gain or loss may be seen when treatment is always to normal tissue tolerance doses. The magnitude of the therapeutic loss as the dose rate increases, and its sensitivity to changes in normal tissue and tumour parameter values, is clearly demonstrated.
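
    For reference, the standard continuous-irradiation linear-quadratic relation that such a model builds on can be written as below, with total dose D delivered over treatment time T at a constant dose rate and repair constant h (this is the generic textbook form, not necessarily the exact equations of the model described):

    ```latex
    E = \alpha D + \beta\, g\, D^{2},
    \qquad
    g = \frac{2\left(hT - 1 + e^{-hT}\right)}{(hT)^{2}},
    \qquad
    T_{1/2} = \frac{\ln 2}{h} .
    ```

    As T grows (lower dose rate), g falls toward zero and the quadratic (beta) component of cell kill is progressively repaired away, which is why the dose-rate sensitivity is governed by beta.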

  19. Carbon nanotubes (CNTs) based advanced dermal therapeutics: current trends and future potential.

    Science.gov (United States)

    Kuche, Kaushik; Maheshwari, Rahul; Tambe, Vishakha; Mak, Kit-Kay; Jogi, Hardi; Raval, Nidhi; Pichika, Mallikarjuna Rao; Kumar Tekade, Rakesh

    2018-05-17

    The search for effective and non-invasive delivery modules to transport therapeutic molecules across skin has led to the discovery of a number of nanocarriers (viz. liposomes, ethosomes, dendrimers, etc.) in the last few decades. However, the available literature suggests that these delivery modules face several issues including poor stability, low encapsulation efficiency, and scale-up hurdles. Recently, carbon nanotubes (CNTs) emerged as a versatile tool to deliver therapeutics across skin. Superior stability, high loading capacity, well-developed synthesis protocols, as well as ease of scale-up are some of the reasons for the growing interest in CNTs. CNTs have a unique physical architecture and a large surface area with unique surface chemistry that can be tailored for varied biomedical applications. CNTs have thus been largely engaged in the development of transdermal systems such as tuneable hydrogels, programmable nonporous membranes, electroresponsive skin modalities, protein channel mimetic platforms, reverse iontophoresis, microneedles, and dermal buckypapers. In addition, CNTs were also employed in the development of RNA interference (RNAi) based therapeutics for correcting defective dermal genes. This review expounds the state-of-the-art synthesis methodologies, skin penetration mechanisms, drug liberation profiles, loading potential, characterization techniques, and transdermal applications, along with a summary of the patent/regulatory status and future scope of CNT-based skin therapeutics.

  20. Non-model-based correction of respiratory motion using beat-to-beat 3D spiral fat-selective imaging.

    Science.gov (United States)

    Keegan, Jennifer; Gatehouse, Peter D; Yang, Guang-Zhong; Firmin, David N

    2007-09-01

    To demonstrate the feasibility of retrospective beat-to-beat correction of respiratory motion, without the need for a respiratory motion model. A high-resolution three-dimensional (3D) spiral black-blood scan of the right coronary artery (RCA) of six healthy volunteers was acquired over 160 cardiac cycles without respiratory gating. One spiral interleaf was acquired per cardiac cycle, prior to each of which a complete low-resolution fat-selective 3D spiral dataset was acquired. The respiratory motion (3D translation) on each cardiac cycle was determined by cross-correlating a region of interest (ROI) in the fat around the artery in the low-resolution datasets with that on a reference end-expiratory dataset. The measured translations were used to correct the raw data of the high-resolution spiral interleaves. Beat-to-beat correction provided consistently good results, with the image quality being better than that obtained with a fixed superior-inferior tracking factor of 0.6 and better than (N = 5) or equal to (N = 1) that achieved using a subject-specific retrospective 3D translation motion model. Non-model-based correction of respiratory motion using 3D spiral fat-selective imaging is feasible, and in this small group of volunteers produced better-quality images than a subject-specific retrospective 3D translation motion model. (c) 2007 Wiley-Liss, Inc.
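    The two core operations of the record's correction, estimating a translation by cross-correlation against an end-expiratory reference and removing it from the raw data as a k-space phase ramp (Fourier shift theorem), can be sketched in one dimension. This is an editorial illustration under simplifying assumptions (1D, integer shifts, Cartesian FFT rather than spiral sampling):

```python
import numpy as np

def estimate_shift_1d(ref, img):
    """Estimate an integer translation by locating the cross-correlation peak."""
    corr = np.correlate(img, ref, mode="full")
    return np.argmax(corr) - (len(ref) - 1)

def phase_correct_kspace(kdata, kx, shift):
    """Fourier shift theorem: translating an object by 'shift' multiplies its
    k-space data by exp(-2j*pi*kx*shift); undoing motion applies the inverse."""
    return kdata * np.exp(2j * np.pi * kx * shift)

# Toy example: a 1D 'fat' profile displaced by respiratory motion
n = 128
x = np.arange(n)
ref = np.exp(-0.5 * ((x - 60) / 4.0) ** 2)        # end-expiratory reference
moved = np.roll(ref, 5)                            # beat with 5-pixel displacement

shift = estimate_shift_1d(ref, moved)              # -> 5
kx = np.fft.fftfreq(n)                             # normalized k-space coordinates
kdata = np.fft.fft(moved)
corrected = np.fft.ifft(phase_correct_kspace(kdata, kx, shift)).real
print("estimated shift:", shift, " max residual:", np.abs(corrected - ref).max())
```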

  1. Paper-pen peer-correction versus wiki-based peer-correction

    Directory of Open Access Journals (Sweden)

    Froldova Vladimira

    2016-01-01

    Full Text Available This study reports on the comparison of the students’ achievement and their attitudes towards the use of paper-pen peer-correction and wiki-based peer-correction within English language lessons and CLIL Social Science lessons at the higher secondary school in Prague. Questionnaires and semi-structured interviews were utilized to gather information. The data suggests that students made considerable use of wikis and showed higher degrees of motivation in wiki-based peer-correction during English language lessons than in CLIL Social Science lessons. In both cases wikis not only contributed to developing students’ writing skills, but also helped students recognize the importance of collaboration.

  2. Bias-Correction in Vector Autoregressive Models: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Tom Engsted

    2014-03-01

    Full Text Available We analyze the properties of various methods for bias-correcting parameter estimates in both stationary and non-stationary vector autoregressive models. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that when the model is stationary this simple bias formula compares very favorably to bootstrap bias-correction, both in terms of bias and mean squared error. In non-stationary models, the analytical bias formula performs noticeably worse than bootstrapping. Both methods yield a notable improvement over ordinary least squares. We pay special attention to the risk of pushing an otherwise stationary model into the non-stationary region of the parameter space when correcting for bias. Finally, we consider a recently proposed reduced-bias weighted least squares estimator, and we find that it compares very favorably in non-stationary models.
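    The flavour of the comparison can be reproduced in miniature for the univariate AR(1) case, where the classical first-order approximation E[rho_hat] ~ rho - (1 + 3*rho)/T plays the role of the analytical bias formula (the paper itself treats full vector autoregressions; this sketch and its parameter values are editorial):

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_ar1(y):
    """OLS slope of y_t on y_{t-1} (demeaned): the AR(1) estimate."""
    y = y - y.mean()
    return (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])

def simulate_ar1(rho, T):
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + rng.standard_normal()
    return y

def analytical_correction(rho_hat, T):
    """First-order analytical bias correction: E[rho_hat] ~ rho - (1+3*rho)/T."""
    return rho_hat + (1.0 + 3.0 * rho_hat) / T

def bootstrap_correction(y, rho_hat, B=199):
    """Residual-bootstrap estimate of the bias, subtracted from rho_hat."""
    res = y[1:] - rho_hat * y[:-1]
    boots = []
    for _ in range(B):
        e = rng.choice(res, size=len(y) - 1)
        yb = np.zeros(len(y))
        for t in range(1, len(y)):
            yb[t] = rho_hat * yb[t - 1] + e[t - 1]
        boots.append(ols_ar1(yb))
    return rho_hat - (np.mean(boots) - rho_hat)

rho, T = 0.9, 50
est = np.array([ols_ar1(simulate_ar1(rho, T)) for _ in range(500)])
print("mean OLS estimate:       ", est.mean())
print("mean analytical corrected:", analytical_correction(est, T).mean())
y = simulate_ar1(rho, T)
print("bootstrap (one sample):   ", bootstrap_correction(y, ols_ar1(y)))
```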

  3. Mass corrections to Green functions in instanton vacuum model

    International Nuclear Information System (INIS)

    Esaibegyan, S.V.; Tamaryan, S.N.

    1987-01-01

    The first nonvanishing mass corrections to the effective Green functions are calculated in the model of instanton-based vacuum consisting of a superposition of instanton-antiinstanton fluctuations. The meson current correlators are calculated with account of these corrections; the mass spectrum of pseudoscalar octet as well as the value of the kaon axial constant are found. 7 refs

  4. Correction of TRMM 3B42V7 Based on Linear Regression Models over China

    Directory of Open Access Journals (Sweden)

    Shaohua Liu

    2016-01-01

    Full Text Available Precipitation data of high temporal-spatial resolution are necessary for hydrological simulation and water resource management, and remotely sensed precipitation products (RSPPs) play a key role in providing them, especially in sparsely gauged regions. TRMM 3B42V7 data (hereafter, TRMM precipitation) is an RSPP that outperforms other RSPPs, yet its utilization is still limited by inaccuracy and low spatial resolution at the regional scale. In this paper, linear regression models (LRMs) were constructed to correct and downscale TRMM precipitation based on gauge precipitation at 2257 stations over China from 1998 to 2013. The corrected TRMM precipitation was then validated against gauge precipitation at 839 of the 2257 stations in 2014, at station and grid scales. The results show that both the monthly and the annual LRMs clearly improve the accuracy of the corrected TRMM precipitation, with acceptable error, and the monthly LRM performs slightly better than the annual LRM in mideastern China. Although the LRMs improve the corrected TRMM precipitation in Northwest China and the Tibetan Plateau, its error there remains significant due to the large deviation between TRMM precipitation and the low-density gauge precipitation.
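    A minimal sketch of the correction step, fitting one gauge-versus-TRMM linear model per calendar month and applying it to new satellite estimates, is given below; the data are synthetic and all coefficients are illustrative, not those estimated in the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic example: monthly TRMM estimates and co-located gauge observations
months = np.tile(np.arange(12), 20)                         # 20 stations x 12 months
trmm = rng.gamma(2.0, 40.0, size=months.size)               # satellite precipitation (mm)
gauge = 0.8 * trmm + 10.0 + rng.normal(0, 8, months.size)   # 'truth' with noise

# Fit one linear model per calendar month: gauge = a_m + b_m * trmm
coeffs = {}
for m in range(12):
    sel = months == m
    b, a = np.polyfit(trmm[sel], gauge[sel], deg=1)          # slope first
    coeffs[m] = (a, b)

def correct(trmm_value, month):
    """Apply the month-specific linear correction to a TRMM estimate."""
    a, b = coeffs[month]
    return a + b * trmm_value

print("July coefficients (a, b):", np.round(coeffs[6], 2))
print("corrected 100 mm July value:", round(correct(100.0, 6), 1))
```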

  5. Wall correction model for wind tunnels with open test section

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær; Shen, Wen Zhong; Mikkelsen, Robert Flemming

    2006-01-01

    In the paper we present a correction model for wall interference on rotors of wind turbines or propellers in wind tunnels. The model, which is based on a one-dimensional momentum approach, is validated against results from CFD computations using a generalized actuator disc principle. In the model...... good agreement with the CFD computations, demonstrating that one-dimensional momentum theory is a reliable way of predicting corrections for wall interference in wind tunnels with closed as well as open cross sections....

  6. Structurally Based Therapeutic Evaluation: A Therapeutic and Practical Approach to Teaching Medicinal Chemistry.

    Science.gov (United States)

    Alsharif, Naser Z.; And Others

    1997-01-01

    Explains structurally based therapeutic evaluation of drugs, which uses seven therapeutic criteria in translating chemical and structural knowledge into therapeutic decision making in pharmaceutical care. In a Creighton University (Nebraska) medicinal chemistry course, students apply the approach to solve patient-related therapeutic problems in…

  7. Therapeutic Enactment: Integrating Individual and Group Counseling Models for Change

    Science.gov (United States)

    Westwood, Marvin J.; Keats, Patrice A.; Wilensky, Patricia

    2003-01-01

    The purpose of this article is to introduce the reader to a group-based therapy model known as therapeutic enactment. A description of this multimodal change model is provided by outlining the relevant background information, key concepts related to specific change processes, and the differences in this model compared to earlier psychodrama…

  8. On the importance of appropriate precipitation gauge catch correction for hydrological modelling at mid to high latitudes

    Science.gov (United States)

    Stisen, S.; Højberg, A. L.; Troldborg, L.; Refsgaard, J. C.; Christensen, B. S. B.; Olsen, M.; Henriksen, H. J.

    2012-11-01

    Precipitation gauge catch correction is often given very little attention in hydrological modelling compared to model parameter calibration. This is critical because significant precipitation biases often make the calibration exercise pointless, especially when supposedly physically-based models are in play. This study addresses the general importance of appropriate precipitation catch correction through a detailed modelling exercise. An existing precipitation gauge catch correction method addressing solid and liquid precipitation is applied, both as national mean monthly correction factors based on a historic 30 yr record and as gridded daily correction factors based on local daily observations of wind speed and temperature. The two methods, named the historic mean monthly (HMM) and the time-space variable (TSV) correction, resulted in different winter precipitation rates for the period 1990-2010. The resulting precipitation datasets were evaluated through the comprehensive Danish National Water Resources model (DK-Model), revealing major differences in both model performance and optimised model parameter sets. Simulated stream discharge is improved significantly when introducing the TSV correction, whereas the simulated hydraulic heads and multi-annual water balances performed similarly due to recalibration adjusting model parameters to compensate for input biases. The resulting optimised model parameters are much more physically plausible for the model based on the TSV correction of precipitation. A proxy-basin test where calibrated DK-Model parameters were transferred to another region without site specific calibration showed better performance for parameter values based on the TSV correction. Similarly, the performances of the TSV correction method were superior when considering two single years with a much dryer and a much wetter winter, respectively, as compared to the winters in the calibration period (differential split-sample tests). We conclude that TSV
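    The TSV idea, one multiplicative catch-correction factor per day and location driven by wind speed and temperature, can be sketched as follows. The functional form and coefficients below are placeholders for illustration only, not the operational Danish correction:

```python
def catch_correction_factor(wind_speed, temperature):
    """Illustrative catch-correction factor: gauges under-catch more in wind,
    and much more for solid precipitation. Coefficients are placeholders,
    not the values used in the study."""
    if temperature <= 0.0:          # snow: strong wind-induced undercatch
        return 1.0 + 0.20 * wind_speed
    elif temperature < 2.0:         # mixed phase
        return 1.0 + 0.10 * wind_speed
    else:                           # rain: modest undercatch
        return 1.0 + 0.02 * wind_speed

def correct_precipitation(p_measured, wind_speed, temperature):
    """TSV-style correction: one factor per day and grid cell."""
    return p_measured * catch_correction_factor(wind_speed, temperature)

# Daily example: 5 mm measured at 6 m/s wind and -2 degrees C
print(correct_precipitation(5.0, 6.0, -2.0))    # -> 11.0 mm after correction
```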

  9. On the importance of appropriate precipitation gauge catch correction for hydrological modelling at mid to high latitudes

    Directory of Open Access Journals (Sweden)

    S. Stisen

    2012-11-01

    Full Text Available Precipitation gauge catch correction is often given very little attention in hydrological modelling compared to model parameter calibration. This is critical because significant precipitation biases often make the calibration exercise pointless, especially when supposedly physically-based models are in play. This study addresses the general importance of appropriate precipitation catch correction through a detailed modelling exercise. An existing precipitation gauge catch correction method addressing solid and liquid precipitation is applied, both as national mean monthly correction factors based on a historic 30 yr record and as gridded daily correction factors based on local daily observations of wind speed and temperature. The two methods, named the historic mean monthly (HMM and the time–space variable (TSV correction, resulted in different winter precipitation rates for the period 1990–2010. The resulting precipitation datasets were evaluated through the comprehensive Danish National Water Resources model (DK-Model, revealing major differences in both model performance and optimised model parameter sets. Simulated stream discharge is improved significantly when introducing the TSV correction, whereas the simulated hydraulic heads and multi-annual water balances performed similarly due to recalibration adjusting model parameters to compensate for input biases. The resulting optimised model parameters are much more physically plausible for the model based on the TSV correction of precipitation. A proxy-basin test where calibrated DK-Model parameters were transferred to another region without site specific calibration showed better performance for parameter values based on the TSV correction. Similarly, the performances of the TSV correction method were superior when considering two single years with a much dryer and a much wetter winter, respectively, as compared to the winters in the calibration period (differential split-sample tests

  10. Lipid correction model of carbon stable isotopes for a cosmopolitan predator, spiny dogfish Squalus acanthias.

    Science.gov (United States)

    Reum, J C P

    2011-12-01

    Three lipid correction models were evaluated for liver and white dorsal muscle from Squalus acanthias. For muscle, all three models performed well, based on the Akaike Information Criterion value corrected for small sample sizes (AICc), and predicted similar lipid corrections to δ13C that were up to 2.8 ‰ higher than those predicted using previously published models based on multispecies data. For liver, which possessed higher bulk C:N values compared to that of white muscle, all three models performed poorly and lipid-corrected δ13C values were best approximated by simply adding 5.74 ‰ to bulk δ13C values. © 2011 The Author. Journal of Fish Biology © 2011 The Fisheries Society of the British Isles.
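    The record's two outcomes translate into a short sketch: a C:N-based linear lipid correction for muscle (the default coefficients below follow a widely used published form and stand in for the study's fitted, species-specific values) and the constant +5.74 ‰ shift for liver taken from the abstract:

```python
def lipid_correct_muscle(d13c_bulk, cn_ratio, slope=0.99, intercept=-3.32):
    """C:N-based linear lipid correction for white muscle. The default
    coefficients follow a widely used published multispecies form and are
    placeholders; the study fitted species-specific parameters."""
    return d13c_bulk + intercept + slope * cn_ratio

def lipid_correct_liver(d13c_bulk):
    """Liver: C:N-based models performed poorly; the study's recommendation
    reduces to a constant additive shift of +5.74 per mil."""
    return d13c_bulk + 5.74

print(lipid_correct_muscle(-18.0, 3.5))   # muscle sample with C:N = 3.5
print(lipid_correct_liver(-22.0))
```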

  11. On-line core monitoring system based on buckling corrected modified one group model

    International Nuclear Information System (INIS)

    Freire, Fernando S.

    2011-01-01

    Nuclear power reactors require core monitoring during plant operation. To provide safe, clean and reliable power, core conditions must be evaluated continuously. Currently, the reactor core monitoring process is carried out by nuclear code systems that, together with data from plant instrumentation such as thermocouples, ex-core detectors and fixed or movable in-core detectors, can readily predict and monitor a variety of plant conditions. Typically, standard nodal methods lie at the heart of such nuclear monitoring code systems. However, standard nodal methods require long computer running times compared with standard coarse-mesh finite-difference schemes. Unfortunately, classic finite-difference models require a fine-mesh representation of the reactor core. To overcome this limitation, the classic modified one-group model can be used to account for the main neutronic behaviour of the core. In this model a coarse-mesh core representation can be evaluated with a crude treatment of thermal neutron leakage. In this work, an improvement to the classic modified one-group model based on a buckling thermal correction was used to obtain a fast, accurate and reliable core monitoring methodology for future applications, providing a powerful tool for the core monitoring process. (author)
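    The quantity at the heart of a modified one-group scheme is k_eff = k_inf / (1 + M^2 B^2), where M^2 is the migration area and B^2 the buckling; a buckling correction adjusts B^2 to mimic a finer treatment of thermal-neutron leakage. The sketch below is an editorial illustration with placeholder core data, not the monitoring system itself:

```python
import numpy as np

def k_eff_modified_one_group(k_inf, migration_area, buckling2):
    """Modified one-group formula: k_eff = k_inf / (1 + M^2 * B^2),
    where M^2 (cm^2) lumps fast and thermal leakage."""
    return k_inf / (1.0 + migration_area * buckling2)

def geometric_buckling_cylinder(radius, height):
    """Geometric buckling of a bare finite cylinder (extrapolated dimensions)."""
    return (2.405 / radius) ** 2 + (np.pi / height) ** 2

# Placeholder core: k_inf = 1.25, M^2 = 60 cm^2, R = 160 cm, H = 370 cm
b2 = geometric_buckling_cylinder(160.0, 370.0)
print("B^2 =", b2)
print("k_eff =", k_eff_modified_one_group(1.25, 60.0, b2))

# A 'buckling corrected' scheme replaces b2 by an adjusted value that mimics
# the thermal-leakage treatment of finer-mesh nodal solutions, e.g.:
b2_corrected = 1.05 * b2            # illustrative correction factor only
print("corrected k_eff =", k_eff_modified_one_group(1.25, 60.0, b2_corrected))
```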

  12. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...... symmetric non-linear error correction considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes....

  13. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...... symmetric non-linear error correction are considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes....

  14. Immunogenicity of therapeutic proteins: the use of animal models.

    Science.gov (United States)

    Brinks, Vera; Jiskoot, Wim; Schellekens, Huub

    2011-10-01

    Immunogenicity of therapeutic proteins lowers patient well-being and drastically increases therapeutic costs. Preventing immunogenicity is an important issue to consider when developing novel therapeutic proteins and applying them in the clinic. Animal models are increasingly used to study immunogenicity of therapeutic proteins. They are employed as predictive tools to assess different aspects of immunogenicity during drug development and have become vital in studying the mechanisms underlying immunogenicity of therapeutic proteins. However, the use of animal models needs critical evaluation. Because of species differences, predictive value of such models is limited, and mechanistic studies can be restricted. This review addresses the suitability of animal models for immunogenicity prediction and summarizes the insights in immunogenicity that they have given so far.

  15. Non-stationary Bias Correction of Monthly CMIP5 Temperature Projections over China using a Residual-based Bagging Tree Model

    Science.gov (United States)

    Yang, T.; Lee, C.

    2017-12-01

    Understanding the biases in General Circulation Models (GCMs) is crucial for projecting future climate changes. Currently, most bias correction methodologies suffer from the assumption that model bias is stationary. This paper provides a non-stationary bias correction model, termed the Residual-based Bagging Tree (RBT) model, to reduce simulation biases and to quantify the contributions of single models. Specifically, the proposed model estimates the residuals between individual models and observations, and takes the differences between observations and the ensemble mean into consideration during the model training process. A case study is conducted for 10 major river basins in Mainland China during different seasons. Results show that the proposed model is capable of providing accurate and stable predictions while incorporating the non-stationarities into the modeling framework. Significant reductions in both bias and root mean squared error are achieved with the proposed RBT model, especially for the central and western parts of China. The proposed RBT model has consistently better performance in reducing biases when compared to the raw ensemble mean, the ensemble mean with simple additive bias correction, and the single best model for different seasons. Furthermore, the contribution of each single GCM in reducing the overall bias is quantified. The single model importance varies between 3.1% and 7.2%. For different future scenarios (RCP 2.6, RCP 4.5, and RCP 8.5), the results from the RBT model suggest temperature increases of 1.44 °C, 2.59 °C, and 4.71 °C by the end of the century, respectively, when compared to the average temperature during 1970-1999.
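    The RBT idea, learning the residual between observations and the ensemble mean from the individual model outputs with bagged regression trees, can be sketched with scikit-learn on synthetic data (everything below, including the bias structure, is an editorial stand-in for the paper's setup):

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)

# Synthetic monthly temperatures: 5 GCMs with model-specific, state-dependent biases
n, n_models = 600, 5
truth = 10 + 8 * np.sin(np.linspace(0, 24 * np.pi, n))          # observations
gcms = np.column_stack([
    truth + rng.normal(b, 1.0, n) + 0.1 * b * truth              # non-stationary bias
    for b in range(1, n_models + 1)
])
ens_mean = gcms.mean(axis=1)
residual = truth - ens_mean              # target: what the ensemble mean misses

# Bagged regression trees learn the residual from the individual model outputs
rbt = BaggingRegressor(DecisionTreeRegressor(max_depth=6), n_estimators=100,
                       random_state=0).fit(gcms, residual)
corrected = ens_mean + rbt.predict(gcms)

print("RMSE raw ensemble mean:", np.sqrt(np.mean((truth - ens_mean) ** 2)))
print("RMSE RBT-corrected:    ", np.sqrt(np.mean((truth - corrected) ** 2)))
```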

  16. Infinite-degree-corrected stochastic block model

    DEFF Research Database (Denmark)

    Herlau, Tue; Schmidt, Mikkel Nørgaard; Mørup, Morten

    2014-01-01

    In stochastic block models, which are among the most prominent statistical models for cluster analysis of complex networks, clusters are defined as groups of nodes with statistically similar link probabilities within and between groups. A recent extension by Karrer and Newman [Karrer and Newman...... corrected stochastic block model as a nonparametric Bayesian model, incorporating a parameter to control the amount of degree correction that can then be inferred from data. Additionally, our formulation yields principled ways of inferring the number of groups as well as predicting missing links...

  17. Male emotional intimacy: how therapeutic men's groups can enhance couples therapy.

    Science.gov (United States)

    Garfield, Robert

    2010-03-01

    Men's difficulty with emotional intimacy is a problem that therapists regularly encounter in working with heterosexual couples in therapy. The first part of this article describes historical and cultural factors that contribute to this dilemma in men's marriages and same-sex friendships. Therapeutic men's groups can provide a corrective experience for men, helping them to develop emotional intimacy skills while augmenting their work in couples therapy. A model for such groups is presented, including guidelines for referral, screening, and collaboration with other therapists. Our therapeutic approach encourages relationship-based learning through direct emotional expression and supportive feedback. We emphasize the development of friendship skills, core attributes of friendship (connection, communication, commitment, and cooperation) that contribute to emotional intimacy in men's relationships. Case examples are included to illustrate how this model works in clinical practice, as well as specific suggestions for further study that could lead to a more evidence-based practice.

  18. Use of Core Correctional Practice and Inmate Preparedness for Release.

    Science.gov (United States)

    Haas, Stephen M; Spence, Douglas H

    2017-10-01

    Core correctional practices (CCP) are an evidence-based approach that can improve the quality of the prison environment and enhance prisoner outcomes. CCP focus on increasing the effectiveness of treatment interventions as well as the therapeutic potential of relationships between prisoners and correctional staff. This study utilizes a new survey-based measurement tool to assess inmate perceptions of the quality of service delivery and level of adherence to CCP. It then examines the relationship between perceptions of CCP and prisoners' preparedness for release using both bivariate and multivariate analyses. The results show that perceptions of CCP are positively correlated with readiness for release and are the most powerful predictor of readiness for release in the multivariate models. Implications for the future operationalization of CCP and its role in prisoner reentry are discussed.

  19. Bias-correction in vector autoregressive models: A simulation study

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard

    We analyze and compare the properties of various methods for bias-correcting parameter estimates in vector autoregressions. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that this simple...... and easy-to-use analytical bias formula compares very favorably to the more standard but also more computer intensive bootstrap bias-correction method, both in terms of bias and mean squared error. Both methods yield a notable improvement over both OLS and a recently proposed WLS estimator. We also...... of pushing an otherwise stationary model into the non-stationary region of the parameter space during the process of correcting for bias....

  20. Efficient color correction method for smartphone camera-based health monitoring application.

    Science.gov (United States)

    Duc Dang; Chae Ho Cho; Daeik Kim; Oh Seok Kwon; Jo Woon Chong

    2017-07-01

    Smartphone health monitoring applications have recently attracted attention due to the rapid development of smartphone hardware and software performance. However, the color characteristics of images captured by different smartphone models are dissimilar to each other, and this difference may yield inconsistent health monitoring results when such applications monitor physiological information using the embedded smartphone cameras. In this paper, we investigate the differences in color properties of the captured images from different smartphone models and apply a color correction method to adjust dissimilar color values obtained from different smartphone cameras. Experimental results show that the color-corrected images obtained using the correction method have much smaller color intensity errors compared to the images without correction. These results can be applied to enhance the consistency of smartphone camera-based health monitoring applications by reducing color intensity errors among the images obtained from different smartphones.
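    A common way to implement such a correction, assumed here since the abstract does not spell out the algorithm, is to estimate a 3x3 matrix by least squares that maps one camera's RGB readings of reference patches onto another's:

```python
import numpy as np

rng = np.random.default_rng(7)

# RGB values of reference color patches as seen by two phone cameras
patches_target = rng.uniform(0, 1, size=(24, 3))            # reference camera
true_m = np.array([[0.9, 0.05, 0.0],
                   [0.02, 1.1, 0.03],
                   [0.0, 0.04, 0.85]])
patches_source = patches_target @ true_m.T + rng.normal(0, 0.01, (24, 3))

# Least-squares 3x3 correction matrix mapping source readings to target readings
M, *_ = np.linalg.lstsq(patches_source, patches_target, rcond=None)

def color_correct(rgb):
    """Map a source-camera RGB triple into the target camera's color space."""
    return np.clip(rgb @ M, 0, 1)

print("max corrected patch error:",
      np.abs(patches_source @ M - patches_target).max())
```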

  1. Color correction with blind image restoration based on multiple images using a low-rank model

    Science.gov (United States)

    Li, Dong; Xie, Xudong; Lam, Kin-Man

    2014-03-01

    We present a method that can handle the color correction of multiple photographs with blind image restoration simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Due to the fact that the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks, including image denoising, image deblurring, and gray-scale image colorizing, can be performed simultaneously. Experiments have verified that our method can achieve consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.

  2. Factors associated with therapeutic inertia in hypertension: validation of a predictive model.

    Science.gov (United States)

    Redón, Josep; Coca, Antonio; Lázaro, Pablo; Aguilar, Ma Dolores; Cabañas, Mercedes; Gil, Natividad; Sánchez-Zamorano, Miguel Angel; Aranda, Pedro

    2010-08-01

    To study factors associated with therapeutic inertia in treating hypertension and to develop a predictive model to estimate the probability of therapeutic inertia in a given medical consultation, based on variables related to the consultation, patient, physician, clinical characteristics, and level of care. National, multicentre, observational, cross-sectional study in primary care and specialist (hospital) physicians who each completed a questionnaire on therapeutic inertia, provided professional data and collected clinical data on four patients. Therapeutic inertia was defined as a consultation in which treatment change was indicated (i.e., SBP ≥ 140 or DBP ≥ 90 mmHg in all patients; SBP ≥ 130 or DBP ≥ 80 in patients with diabetes or stroke), but did not occur. A predictive model was constructed and validated according to the factors associated with therapeutic inertia. Data were collected on 2595 patients and 13,792 visits. Therapeutic inertia occurred in 7546 (75%) of the 10,041 consultations in which treatment change was indicated. Factors associated with therapeutic inertia were primary care setting, male sex, older age, SBP and/or DBP values close to normal, treatment with more than one antihypertensive drug, treatment with an angiotensin II receptor blocker (ARB), and more than six visits/year. Physician characteristics did not weigh heavily in the association. The predictive model was valid internally and externally, with acceptable calibration, discrimination and reproducibility, and explained one-third of the variability in therapeutic inertia. Although therapeutic inertia is frequent in the management of hypertension, the factors explaining it are not completely clear. Whereas some aspects of the consultations were associated with therapeutic inertia, physician characteristics were not a decisive factor.
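    The study's operational definition translates directly into a small decision rule (thresholds taken from the abstract; the function names are editorial):

```python
def treatment_change_indicated(sbp, dbp, diabetes_or_stroke=False):
    """Treatment change is indicated at SBP >= 140 or DBP >= 90 mmHg
    (>= 130 / >= 80 for patients with diabetes or stroke)."""
    s_lim, d_lim = (130, 80) if diabetes_or_stroke else (140, 90)
    return sbp >= s_lim or dbp >= d_lim

def therapeutic_inertia(sbp, dbp, treatment_changed, diabetes_or_stroke=False):
    """A consultation shows therapeutic inertia when a change was indicated
    but did not occur."""
    return treatment_change_indicated(sbp, dbp, diabetes_or_stroke) and not treatment_changed

print(therapeutic_inertia(138, 84, treatment_changed=False, diabetes_or_stroke=True))  # True
```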

  3. A correction on coastal heads for groundwater flow models.

    Science.gov (United States)

    Lu, Chunhui; Werner, Adrian D; Simmons, Craig T; Luo, Jian

    2015-01-01

    We introduce a simple correction to coastal heads for constant-density groundwater flow models that contain a coastal boundary, based on previous analytical solutions for interface flow. The results demonstrate that accurate discharge to the sea in confined aquifers can be obtained by direct application of Darcy's law (for constant-density flow) if the coastal heads are corrected to ((α + 1)/α)hs − B/(2α), in which hs is the mean sea level above the aquifer base, B is the aquifer thickness, and α is the density factor. For unconfined aquifers, the coastal head should be assigned the value hs√((1 + α)/α). The accuracy of using these corrections is demonstrated by consistency between constant-density Darcy's solution and variable-density flow numerical simulations. The errors introduced by adopting two previous approaches (i.e., no correction and using the equivalent fresh water head at the middle position of the aquifer to represent the hydraulic head at the coastal boundary) are evaluated. Sensitivity analysis shows that errors in discharge to the sea could be larger than 100% for typical coastal aquifer parameter ranges. The location of observation wells relative to the toe is a key factor controlling the estimation error, as it determines the relative aquifer length of constant-density flow relative to variable-density flow. The coastal head correction method introduced in this study facilitates the rapid and accurate estimation of the fresh water flux from a given hydraulic head measurement and allows for an improved representation of the coastal boundary condition in regional constant-density groundwater flow models. © 2014, National Ground Water Association.
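    The corrections transcribe directly into code. The sketch below assumes the usual sharp-interface definition of the density factor, alpha = rho_f/(rho_s - rho_f), roughly 40 for seawater; this definition is an editorial assumption consistent with the interface-flow literature:

```python
def alpha_density_factor(rho_f=1000.0, rho_s=1025.0):
    """Density factor in sharp-interface solutions: alpha = rho_f/(rho_s - rho_f)."""
    return rho_f / (rho_s - rho_f)

def corrected_head_confined(h_s, B, alpha):
    """Confined aquifer: h_c = ((alpha + 1)/alpha) * h_s - B/(2*alpha)."""
    return (alpha + 1.0) / alpha * h_s - B / (2.0 * alpha)

def corrected_head_unconfined(h_s, alpha):
    """Unconfined aquifer: h_c = h_s * sqrt((1 + alpha)/alpha)."""
    return h_s * ((1.0 + alpha) / alpha) ** 0.5

a = alpha_density_factor()                   # ~40 for typical seawater
print("confined:  ", corrected_head_confined(h_s=20.0, B=30.0, alpha=a))
print("unconfined:", corrected_head_unconfined(h_s=20.0, alpha=a))
```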

  4. Actuator Disc Model Using a Modified Rhie-Chow/SIMPLE Pressure Correction Algorithm

    DEFF Research Database (Denmark)

    Rethore, Pierre-Elouan; Sørensen, Niels

    2008-01-01

    An actuator disc model for the flow solver EllipSys (2D&3D) is proposed. It is based on a correction of the Rhie-Chow algorithm for using discrete body forces in a collocated-variable finite volume CFD code. It is compared with three cases where an analytical solution is known.

  5. A model-based correction for outcome reporting bias in meta-analysis.

    Science.gov (United States)

    Copas, John; Dwan, Kerry; Kirkham, Jamie; Williamson, Paula

    2014-04-01

    It is often suspected (or known) that outcomes published in medical trials are selectively reported. A systematic review for a particular outcome of interest can only include studies where that outcome was reported and so may omit, for example, a study that has considered several outcome measures but only reports those giving significant results. Using the methodology of the Outcome Reporting Bias (ORB) in Trials study of Kirkham and others (2010, The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews, British Medical Journal 340, c365), we suggest a likelihood-based model for estimating the effect of ORB on confidence intervals and p-values in meta-analysis. Correcting for bias has the effect of moving estimated treatment effects toward the null and hence more cautious assessments of significance. The bias can be very substantial, sometimes sufficient to completely overturn previous claims of significance. We re-analyze two contrasting examples, and derive a simple fixed effects approximation that can be used to give an initial estimate of the effect of ORB in practice.

  6. Centre-of-mass corrections for the harmonic S+V potential model

    International Nuclear Information System (INIS)

    Palladino, B.E.; Ferreira, P.L.

    1986-01-01

    Center-of-mass corrections to the mass spectrum and static properties of low-lying S-wave baryons are discussed in the context of a relativistic, independent quark model, based on a Dirac equation, with an equally mixed scalar and vector confining potential of harmonic type. A more satisfactory fitting of the parameters involved is obtained, as compared with previous treatments in which CM corrections were neglected. (Author) [pt

  7. Holographic p-wave superconductor models with Weyl corrections

    Directory of Open Access Journals (Sweden)

    Lu Zhang

    2015-04-01

    Full Text Available We study the effect of the Weyl corrections on the holographic p-wave dual models in the backgrounds of AdS soliton and AdS black hole via a Maxwell complex vector field model by using the numerical and analytical methods. We find that, in the soliton background, the Weyl corrections do not influence the properties of the holographic p-wave insulator/superconductor phase transition, which is different from that of the Yang–Mills theory. However, in the black hole background, we observe that similarly to the Weyl correction effects in the Yang–Mills theory, the higher Weyl corrections make it easier for the p-wave metal/superconductor phase transition to be triggered, which shows that these two p-wave models with Weyl corrections share some similar features for the condensation of the vector operator.

  8. Therapeutic Community in a California Prison: Treatment Outcomes after 5 Years

    Science.gov (United States)

    Zhang, Sheldon X.; Roberts, Robert E. L.; McCollister, Kathryn E.

    2011-01-01

    Therapeutic communities have become increasingly popular among correctional agencies with drug-involved offenders. This quasi-experimental study followed a group of inmates who participated in a prison-based therapeutic community in a California state prison, with a comparison group of matched offenders, for more than 5 years after their initial…

  9. Neural Network Based Real-time Correction of Transducer Dynamic Errors

    Science.gov (United States)

    Roj, J.

    2013-12-01

    In order to carry out real-time dynamic error correction of transducers described by a linear differential equation, a novel recurrent neural network was developed. The network structure is based on solving this equation with respect to the input quantity when using the state variables. It is shown that such a real-time correction can be carried out using simple linear perceptrons. Due to the use of a neural technique, knowledge of the dynamic parameters of the transducer is not necessary. Theoretical considerations are illustrated by the results of simulation studies performed for the modeled second order transducer. The most important properties of the neural dynamic error correction, when emphasizing the fundamental advantages and disadvantages, are discussed.
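    Independently of the neural implementation, the operation being learned is the inversion of the transducer's differential equation: for a unit-gain second-order transducer y'' + 2*zeta*omega0*y' + omega0^2*y = omega0^2*x, the input follows as x = (y'' + 2*zeta*omega0*y' + omega0^2*y)/omega0^2. The sketch below does this with finite differences on simulated data; the record's point is precisely that the trained perceptrons avoid forming derivatives explicitly and do not require knowledge of the transducer parameters (all values here are editorial):

```python
import numpy as np

def reconstruct_input(y, dt, zeta, omega0):
    """Invert a unit-gain second-order transducer model:
    x = (y'' + 2*zeta*omega0*y' + omega0**2 * y) / omega0**2,
    with derivatives taken by central finite differences."""
    dy = np.gradient(y, dt)
    d2y = np.gradient(dy, dt)
    return (d2y + 2.0 * zeta * omega0 * dy + omega0**2 * y) / omega0**2

# Simulate the transducer response to a step input, then undo its dynamics
zeta, omega0, dt = 0.4, 50.0, 1e-4
t = np.arange(0, 0.5, dt)
x = np.ones_like(t)                       # true input: unit step
y = np.zeros_like(t); dy = 0.0
for i in range(1, len(t)):                # explicit Euler integration of the ODE
    d2y = omega0**2 * (x[i-1] - y[i-1]) - 2 * zeta * omega0 * dy
    dy += d2y * dt
    y[i] = y[i-1] + dy * dt

x_hat = reconstruct_input(y, dt, zeta, omega0)
print("max error after initial transient:", np.abs(x_hat[100:] - x[100:]).max())
```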

  10. Bias-correction of CORDEX-MENA projections using the Distribution Based Scaling method

    Science.gov (United States)

    Bosshard, Thomas; Yang, Wei; Sjökvist, Elin; Arheimer, Berit; Graham, L. Phil

    2014-05-01

    Within the Regional Initiative for the Assessment of the Impact of Climate Change on Water Resources and Socio-Economic Vulnerability in the Arab Region (RICCAR) led by UN ESCWA, CORDEX RCM projections for the Middle East and North Africa (MENA) domain are used to drive hydrological impact models. Bias-correction of newly available CORDEX-MENA projections is a central part of this project. In this study, the distribution based scaling (DBS) method has been applied to 6 regional climate model projections driven by 2 RCP emission scenarios. The DBS method uses a quantile mapping approach and features a conditional temperature correction dependent on the wet/dry state in the climate model data. The CORDEX-MENA domain is particularly challenging for bias-correction as it spans very diverse climates showing pronounced dry and wet seasons. Results show that the regional climate models simulate temperatures that are too low and often have a displaced rainfall band compared to WATCH ERA-Interim forcing data in the reference period 1979-2008. DBS is able to correct the temperature biases as well as some aspects of the precipitation biases. Special focus is given to the analysis of the influence of the dry-frequency bias (i.e. climate models simulating too few rain days) on the bias-corrected projections and on the modification of the climate change signal by the DBS method.
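    DBS is a quantile-mapping method. A simplified empirical stand-in (DBS proper fits parametric distributions and conditions the temperature correction on the wet/dry state) looks like this:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: replace each future model value by the
    observed value at the same quantile of the historical distributions.
    (A simplified stand-in for the parametric DBS method.)"""
    quantiles = np.interp(model_future,
                          np.sort(model_hist),
                          np.linspace(0, 1, len(model_hist)))
    return np.quantile(obs_hist, quantiles)

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 3.0, 5000)             # observed daily precipitation (mm)
mod = rng.gamma(1.5, 2.0, 5000)             # biased model climate, same period
fut = rng.gamma(1.5, 2.4, 5000)             # model projection (slightly wetter)

corrected = quantile_map(mod, obs, fut)
print("raw future mean:      ", fut.mean())
print("corrected future mean:", corrected.mean())
```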

  11. A New High-Precision Correction Method of Temperature Distribution in Model Stellar Atmospheres

    Directory of Open Access Journals (Sweden)

    Sapar A.

    2013-06-01

    Full Text Available The main features of the temperature correction methods suggested and used in the modelling of plane-parallel stellar atmospheres are discussed, and the main features of the new method are described. Derivation of the formulae for a version of the Unsöld-Lucy method, used by us in the SMART (Stellar Model Atmospheres and Radiative Transport) software for modelling stellar atmospheres, is presented. The method corrects the model temperature distribution by minimizing deviations of the flux from its accepted constant value and by requiring the absence of a flux gradient, meaning that local source and sink terms of radiation must be equal. The final relative flux constancy obtainable by the method with the SMART code turned out to be of the order of 0.5 %. Some of the rapidly converging iteration steps can be useful before starting the high-precision model correction. Corrections of both the flux value and its gradient, as in the Unsöld-Lucy method, are unavoidably needed to obtain high-precision flux constancy. A new temperature correction method to obtain high-precision flux constancy for plane-parallel LTE model stellar atmospheres is proposed and studied. The non-linear optimization is carried out by least squares, in which the Levenberg-Marquardt correction method and thereafter an additional correction by a Broyden iteration loop were applied. Small finite differences of temperature (δT/T = 10⁻³) are used in the computations. A single Jacobian step appears to be mostly sufficient to get flux constancy of the order of 10⁻² %. The dual numbers and their generalization, the dual complex numbers (the duplex numbers), automatically yield the derivatives in the nilpotent part of the dual numbers. A version of the SMART software is in the stage of refactoring to dual and duplex numbers, which makes it possible to get rid of the finite differences, as an additional source of lowering precision of the
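    The dual-number mechanism the record mentions is easy to demonstrate: with eps^2 = 0, evaluating a function at a + 1*eps leaves the exact derivative in the nilpotent part. A minimal editorial sketch:

```python
class Dual:
    """Dual number a + b*eps with eps**2 = 0: the nilpotent part carries
    the derivative automatically (the idea the record applies in SMART)."""
    def __init__(self, re, eps=0.0):
        self.re, self.eps = re, eps
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.re + o.re, self.eps + o.eps)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule falls out of (a + b*eps)(c + d*eps) with eps**2 = 0
        return Dual(self.re * o.re, self.re * o.eps + self.eps * o.re)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1          # f'(x) = 6x + 2

y = f(Dual(2.0, 1.0))                      # seed the nilpotent part with 1
print(y.re, y.eps)                         # 17.0 and the exact derivative 14.0
```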

  12. Short-term wind power combined forecasting based on error forecast correction

    International Nuclear Information System (INIS)

    Liang, Zhengtang; Liang, Jun; Wang, Chengfu; Dong, Xiaoming; Miao, Xiaofeng

    2016-01-01

    Highlights: • The correlation relationships of short-term wind power forecast errors are studied. • The correlation analysis method of the multi-step forecast errors is proposed. • A strategy selecting the input variables for the error forecast models is proposed. • Several novel combined models based on error forecast correction are proposed. • The combined models have improved the short-term wind power forecasting accuracy. - Abstract: With the increasing contribution of wind power to electric power grids, accurate forecasting of short-term wind power has become particularly valuable for wind farm operators, utility operators and customers. The aim of this study is to investigate the interdependence structure of errors in short-term wind power forecasting that is crucial for building error forecast models with regression learning algorithms to correct predictions and improve final forecasting accuracy. In this paper, several novel short-term wind power combined forecasting models based on error forecast correction are proposed in the one-step ahead, continuous and discontinuous multi-step ahead forecasting modes. First, the correlation relationships of forecast errors of the autoregressive model, the persistence method and the support vector machine model in various forecasting modes have been investigated to determine whether the error forecast models can be established by regression learning algorithms. Second, according to the results of the correlation analysis, the range of input variables is defined and an efficient strategy for selecting the input variables for the error forecast models is proposed. Finally, several combined forecasting models are proposed, in which the error forecast models are based on support vector machine/extreme learning machine, and correct the short-term wind power forecast values. The data collected from a wind farm in Hebei Province, China, are selected as a case study to demonstrate the effectiveness of the proposed
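    The combined-model principle, a base forecaster plus a regression model trained to predict the base forecaster's error from recent errors, can be sketched as follows; persistence stands in for the base model and a support vector regressor for the error forecast model, with synthetic data and editorial parameter choices throughout:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)

# Synthetic wind power series with autocorrelated 'weather' structure
n = 1200
w = np.cumsum(rng.normal(0, 0.1, n))
power = 50 + 20 * np.sin(w) + rng.normal(0, 1, n)

# Base forecaster: persistence (forecast = last observed value)
base = power[:-1]
errors = power[1:] - base                       # one-step-ahead forecast errors

# Error-forecast model: predict the next error from the three previous errors
lag = 3
X = np.column_stack([errors[i:len(errors) - lag + i] for i in range(lag)])
y = errors[lag:]
split = 900
svr = SVR(C=10.0).fit(X[:split], y[:split])

combined = base[lag:][split:] + svr.predict(X[split:])   # corrected forecast
target = power[1 + lag:][split:]
raw_rmse = np.sqrt(np.mean((target - base[lag:][split:]) ** 2))
cor_rmse = np.sqrt(np.mean((target - combined) ** 2))
print(f"persistence RMSE {raw_rmse:.2f} -> corrected RMSE {cor_rmse:.2f}")
```

    Whether the correction helps depends, as the record's correlation analysis emphasizes, on how much exploitable structure the error series actually contains.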

  13. Preclinical models used for immunogenicity prediction of therapeutic proteins.

    Science.gov (United States)

    Brinks, Vera; Weinbuch, Daniel; Baker, Matthew; Dean, Yann; Stas, Philippe; Kostense, Stefan; Rup, Bonita; Jiskoot, Wim

    2013-07-01

    All therapeutic proteins are potentially immunogenic. Antibodies formed against these drugs can decrease efficacy, leading to drastically increased therapeutic costs and in rare cases to serious and sometimes life threatening side-effects. Many efforts are therefore undertaken to develop therapeutic proteins with minimal immunogenicity. For this, immunogenicity prediction of candidate drugs during early drug development is essential. Several in silico, in vitro and in vivo models are used to predict immunogenicity of drug leads, to modify potentially immunogenic properties and to continue development of drug candidates with expected low immunogenicity. Despite the extensive use of these predictive models, their actual predictive value varies. Important reasons for this uncertainty are the limited/insufficient knowledge on the immune mechanisms underlying immunogenicity of therapeutic proteins, the fact that different predictive models explore different components of the immune system and the lack of an integrated clinical validation. In this review, we discuss the predictive models in use, summarize aspects of immunogenicity that these models predict and explore the merits and the limitations of each of the models.

  14. Center-of-mass corrections in the S+V potential model

    International Nuclear Information System (INIS)

    Palladino, B.E.

    1987-02-01

    Center-of-mass corrections to the mass spectrum and static properties of low-lying S-wave baryons and mesons are discussed in the context of a relativistic, independent quark model, based on a Dirac equation, with equally mixed scalar (S) and vector (V) confining potential. (author) [pt

  15. Huntington disease: Experimental models and therapeutic perspectives

    International Nuclear Information System (INIS)

    Serrano Sanchez, Teresa; Blanco Lezcano, Lisette; Garcia Minet, Rocio; Alberti Amador, Esteban; Diaz Armesto, Ivan and others

    2011-01-01

    Huntington's disease (HD) is a degenerative dysfunction of hereditary origin. To date there is no effective treatment for the disease, which over 15 or 20 years advances slowly and inexorably toward total disability or death. This paper reviews the clinical and morphological characteristics of Huntington's disease as well as the experimental models most commonly used to study it, taking as its source the articles indexed in the Medline database published in the last 20 years. Advantages and disadvantages of all experimental models used to reproduce the disease, as well as their perspectives for therapeutic assays, are also considered. Among the toxic models, those induced by neurotoxins such as quinolinic acid appear to be the most appropriate for reproducing the neuropathological characteristics of the disease, while genetic models contribute more evidence to the knowledge of the disease etiology. Numerous treatments ameliorate clinical manifestations, but none of them has been able to stop or diminish the deficits derived from neuronal loss. At present it is possible to reproduce, at least partially, the characteristics of the disease in experimental animals, which allows therapy evaluation in HD. From the treatment viewpoint, the most promising approach seems to be transplantation of non-neuronal cells, taking into account ethical issues and feasibility. On the other hand, the new technology of RNA interference emerges as a potential therapeutic tool for treatment of HD and for answering basic questions on the development of the disease.

  16. Diffusion coefficient adaptive correction in Lagrangian puff model

    International Nuclear Information System (INIS)

    Tan Wenji; Wang Dezhong; Ma Yuanwei; Ji Zhilong

    2014-01-01

    Lagrangian puff models are widely used in decision support systems for nuclear emergency management. The diffusion coefficient is one of the key parameters of such models. An adaptive method is proposed in this paper to correct the diffusion coefficient in a Lagrangian puff model, with the aim of improving the accuracy of the calculated nuclide concentration distribution. The method uses detected concentration data, meteorological data and source release data to estimate the actual diffusion coefficient by the least squares method. The diffusion coefficient adaptive correction method was evaluated with the Kincaid data in the Model Validation Kit (MVK) and compared with the traditional Pasquill-Gifford (P-G) diffusion scheme. The results indicate that the diffusion coefficient adaptive correction method can improve the accuracy of the Lagrangian puff model. (authors)
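    The adaptive step, estimating the dispersion parameter by least squares from detected concentrations and a known source term, can be sketched for a single ground-level Gaussian puff (the isotropic single-puff form and all numbers below are editorial simplifications):

```python
import numpy as np
from scipy.optimize import curve_fit

def puff_concentration(r, q, sigma):
    """Concentration of a single Gaussian puff of activity q at radial
    distance r from its centre (isotropic sigma, simplified form)."""
    return q / ((2 * np.pi) ** 1.5 * sigma**3) * np.exp(-r**2 / (2 * sigma**2))

rng = np.random.default_rng(11)
q_true, sigma_true = 1.0e12, 800.0                 # Bq, metres
r = np.linspace(200, 3000, 15)                     # detector distances
measured = puff_concentration(r, q_true, sigma_true) * rng.normal(1, 0.1, r.size)

# The source term q is known from release data; estimate sigma by least squares
sigma_fit, _ = curve_fit(lambda rr, s: puff_concentration(rr, q_true, s),
                         r, measured, p0=[500.0])
print("estimated sigma:", float(sigma_fit[0]))     # adaptive diffusion parameter
```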

  17. A Novel Flood Forecasting Method Based on Initial State Variable Correction

    Directory of Open Access Journals (Sweden)

    Kuang Li

    2017-12-01

    Full Text Available The influence of initial state variables on the accuracy of flood forecasts produced by conceptual hydrological models is analyzed in this paper, and a novel flood forecasting method based on correction of initial state variables is proposed. The new method is abbreviated as ISVC (Initial State Variable Correction). The ISVC takes the residual between the measured and forecasted flows during the initial period of the flood event as the objective function, and it uses a particle swarm optimization algorithm to correct the initial state variables, which are then used to drive the flood forecasting model. The historical flood events of 11 watersheds in south China are forecasted and verified, and important issues concerning the ISVC application are then discussed. The study results show that the ISVC is effective and applicable in flood forecasting tasks. It can significantly improve the flood forecasting accuracy in most cases.
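    A toy version of ISVC is shown below, with a one-parameter linear reservoir standing in for the conceptual hydrological model and SciPy's differential evolution standing in for the particle swarm optimizer (the model, parameter values and window length are all editorial):

```python
import numpy as np
from scipy.optimize import differential_evolution

def linear_reservoir(storage0, rain, k=0.2):
    """Toy conceptual model: S_{t+1} = S_t + P_t - k*S_t, flow Q_t = k*S_t."""
    s, flows = storage0, []
    for p in rain:
        q = k * s
        flows.append(q)
        s = s + p - q
    return np.array(flows)

rng = np.random.default_rng(2)
rain = rng.gamma(1.2, 4.0, 30)
true_s0 = 85.0
observed = linear_reservoir(true_s0, rain) * rng.normal(1, 0.03, 30)

# ISVC idea: choose the initial state minimizing residuals over the first
# few time steps (differential evolution stands in for the PSO here)
def objective(s0):
    sim = linear_reservoir(s0[0], rain)
    return np.sum((sim[:8] - observed[:8]) ** 2)   # initial-period residual

result = differential_evolution(objective, bounds=[(0.0, 300.0)], seed=0)
print("recovered initial storage:", round(result.x[0], 1))   # ~85
```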

  18. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data

    Science.gov (United States)

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-01-01

    With the advances in remote sensing technology, satellite-based rainfall estimates are gaining traction in the field of hydrology, particularly in rainfall-runoff modeling. Since the estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration’s (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution are aggregated to daily values to match in-situ observations for the period 2003–2010. Study objectives are to assess bias of the satellite estimates, to identify optimum window size for application of bias correction and to test effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW’s) of 3, 5, 7, 9, …, 31 days with the aim to assess error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed existence of bias in the CMORPH rainfall. It is found that the 7-day SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach.

  19. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data.

    Science.gov (United States)

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-06-15

    With the advances in remote sensing technology, satellite-based rainfall estimates are gaining traction in the field of hydrology, particularly in rainfall-runoff modeling. Since the estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution are aggregated to daily values to match in-situ observations for the period 2003-2010. Study objectives are to assess bias of the satellite estimates, to identify optimum window size for application of bias correction and to test effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW's) of 3, 5, 7, 9, …, 31 days with the aim to assess error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed existence of bias in the CMORPH rainfall. It is found that the 7-day SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach.

  20. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data

    Directory of Open Access Journals (Sweden)

    Haris Akram Bhatti

    2016-06-01

    Full Text Available With the advances in remote sensing technology, satellite-based rainfall estimates are gaining traction in the field of hydrology, particularly in rainfall-runoff modeling. Since the estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration’s (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution are aggregated to daily values to match in-situ observations for the period 2003–2010. Study objectives are to assess bias of the satellite estimates, to identify optimum window size for application of bias correction and to test effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW’s) of 3, 5, 7, 9, …, 31 days with the aim to assess error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed existence of bias in the CMORPH rainfall. It is found that the 7-day SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach.

  1. Recent Advances in Stem Cell-Based Therapeutics for Stroke

    OpenAIRE

    Napoli, Eleonora; Borlongan, Cesar V.

    2016-01-01

    Regenerative medicine for central nervous system disorders, including stroke, has challenged the non-regenerative capacity of the brain. Among the many treatment strategies tailored towards repairing the injured brain, stem cell-based therapeutics have been demonstrated as safe and effective in animal models of stroke, and are being tested in limited clinical trials. We address here key lab-to-clinic translational research that relate to efficacy, safety, and mechanism of action underlying st...

  2. A Zebrafish Heart Failure Model for Assessing Therapeutic Agents.

    Science.gov (United States)

    Zhu, Xiao-Yu; Wu, Si-Qi; Guo, Sheng-Ya; Yang, Hua; Xia, Bo; Li, Ping; Li, Chun-Qi

    2018-03-20

    Heart failure is a leading cause of death, and the development of effective and safe therapeutic agents for heart failure has proven challenging. In this study, taking advantage of larval zebrafish, we developed a zebrafish heart failure model for drug screening and efficacy assessment. Zebrafish at 2 dpf (days postfertilization) were treated with verapamil at a concentration of 200 μM for 30 min, conditions determined to be optimal for model development. Tested drugs were administered into zebrafish either by direct soaking or by circulation microinjection. After treatment, zebrafish were randomly selected and subjected either to visual observation and image acquisition or to video recording under a Zebralab Blood Flow System. The therapeutic effects of drugs on zebrafish heart failure were quantified by calculating the efficiency of heart dilatation, venous congestion, cardiac output, and blood flow dynamics. All 8 human heart failure therapeutic drugs (LCZ696, digoxin, irbesartan, metoprolol, qiliqiangxin capsule, enalapril, shenmai injection, and hydrochlorothiazide) showed significant preventive and therapeutic effects on zebrafish heart failure. The zebrafish heart failure model developed and validated in this study could be used for in vivo heart failure studies and for rapid screening and efficacy assessment of preventive and therapeutic drugs.

  3. Likelihood-based inference for cointegration with nonlinear error-correction

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders Christian

    2010-01-01

    We consider a class of nonlinear vector error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters, and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study......

  4. Magnetic Resonance-based Motion Correction for Quantitative PET in Simultaneous PET-MR Imaging.

    Science.gov (United States)

    Rakvongthai, Yothin; El Fakhri, Georges

    2017-07-01

    Motion degrades image quality and quantitation of PET images, and is an obstacle to quantitative PET imaging. Simultaneous PET-MR offers a tool that can be used for correcting the motion in PET images by using anatomic information from MR imaging acquired concurrently. Motion correction can be performed by transforming a set of reconstructed PET images into the same frame or by incorporating the transformation into the system model and reconstructing the motion-corrected image. Several phantom and patient studies have validated that MR-based motion correction strategies have great promise for quantitative PET imaging in simultaneous PET-MR. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Nano-based theranostics for chronic obstructive lung diseases: challenges and therapeutic potential.

    Science.gov (United States)

    Vij, Neeraj

    2011-09-01

    The major challenges in the delivery and therapeutic efficacy of nano-delivery systems in chronic obstructive airway conditions are airway defense, severe inflammation and mucous hypersecretion. Chronic airway inflammation and mucous hypersecretion are hallmarks of chronic obstructive airway diseases, including asthma, COPD (chronic obstructive pulmonary disease) and CF (cystic fibrosis). Distinct etiologies drive inflammation and mucous hypersecretion in these diseases, which are further induced by infection or components of cigarette smoke. Controlling chronic inflammation is at the root of treatments such as corticosteroids, antibiotics or other available drugs, which pose the challenge of sustained delivery of drugs to target cells or tissues. In spite of the wide application of nano-based drug delivery systems, very few are tested to date. Targeted nanoparticle-mediated sustained drug delivery is required to control inflammatory cell chemotaxis, fibrosis, protease-mediated chronic emphysema and/or chronic lung obstruction in COPD. Moreover, targeted epithelial delivery is indispensable for correcting the underlying defects in CF and targeted inflammatory cell delivery for controlling other chronic inflammatory lung diseases. We propose that the design and development of nano-based targeted theranostic vehicles with therapeutic, imaging and airway-defense penetrating capability, will be invaluable for treating chronic obstructive lung diseases. This paper discusses a novel nano-theranostic strategy that we are currently evaluating to treat the underlying cause of CF and COPD lung disease.

  6. Long-range correlation in synchronization and syncopation tapping: a linear phase correction model.

    Directory of Open Access Journals (Sweden)

    Didier Delignières

    Full Text Available We propose in this paper a model accounting for the increase in long-range correlations observed in asynchrony series in syncopation tapping, as compared with synchronization tapping. Our model is an extension of the linear phase correction model for synchronization tapping. We suppose that the timekeeper represents a fractal source in the system, and that a process of estimation of the half-period of the metronome, obeying random-walk dynamics, combines with the linear phase correction process. Comparing experimental and simulated series, we show that our model accounts for the experimentally observed pattern of serial dependence. This model completes previous modeling solutions proposed for self-paced and synchronization tapping, providing a unifying framework for event-based timing.
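
    A minimal simulation of the combined process described above can be sketched as follows. For simplicity the timekeeper is white noise here, whereas the paper's model uses a fractal (1/f) source; the gain and noise values are illustrative, not the authors' fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n_taps, period = 1024, 500.0       # number of taps and metronome period (ms)
alpha = 0.25                       # phase-correction gain (illustrative)

asyn = np.zeros(n_taps)            # asynchronies to the intended off-beat target
for n in range(n_taps - 1):
    drift = rng.normal(0.0, 2.0)          # increment of the random-walk
                                          # half-period estimate
    timekeeper = rng.normal(period, 10.0) # noisy internal interval; the full
                                          # model uses a fractal source here
    # linear phase correction of a fraction alpha of the current asynchrony,
    # perturbed by timekeeper noise and the half-period estimation drift
    asyn[n + 1] = (1 - alpha) * asyn[n] + (timekeeper - period) + drift

print(asyn.std())
```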

  7. Classical Electron Model with QED Corrections

    OpenAIRE

    Lenk, Ron

    2010-01-01

    In this article we build a metric for a classical general relativistic electron model with QED corrections. We calculate the stress-energy tensor for the radiative corrections to the Coulomb potential in both the near-field and far-field approximations. We solve the three field equations in both cases by using a perturbative expansion to first order in alpha (the fine-structure constant) while insisting that the usual (+, +, -, -) structure of the stress-energy tensor is maintained. The resul...

  8. Using modeling to develop and evaluate a corrective action system

    International Nuclear Information System (INIS)

    Rodgers, L.

    1995-01-01

    At a former trucking facility in EPA Region 4, a corrective action system was installed to remediate groundwater and soil contaminated with gasoline and fuel oil products released from several underground storage tanks (USTs). Groundwater modeling was used to develop the corrective action plan and later used with soil vapor modeling to evaluate the systems effectiveness. Groundwater modeling was used to determine the effects of a groundwater recovery system on the water table at the site. Information gathered during the assessment phase was used to develop a three dimensional depiction of the subsurface at the site. Different groundwater recovery schemes were then modeled to determine the most effective method for recovering contaminated groundwater. Based on the modeling and calculations, a corrective action system combining soil vapor extraction (SVE) and groundwater recovery was designed. The system included seven recovery wells, to extract both soil vapor and groundwater, and a groundwater treatment system. Operation and maintenance of the system included monthly system sampling and inspections and quarterly groundwater sampling. After one year of operation the effectiveness of the system was evaluated. A subsurface soil gas model was used to evaluate the effects of the SVE system on the site contamination as well as its effects on the water table and groundwater recovery operations. Groundwater modeling was used in evaluating the effectiveness of the groundwater recovery system. Plume migration and capture were modeled to insure that the groundwater recovery system at the site was effectively capturing the contaminant plume. The two models were then combined to determine the effects of the two systems, acting together, on the remediation process

  9. Correcting Biases in a lower resolution global circulation model with data assimilation

    Science.gov (United States)

    Canter, Martin; Barth, Alexander

    2016-04-01

    With this work, we aim at developing a new method of bias correction using data assimilation. This method is based on the stochastic forcing of a model to correct bias. First, through a preliminary run, we estimate the bias of the model and its possible sources. Then, we establish a forcing term which is directly added inside the model's equations. We create an ensemble of runs and consider the forcing term as a control variable during the assimilation of observations. We then use this analysed forcing term to correct the bias of the model. Since the forcing is added inside the model, it acts as a source term, unlike external forcings such as wind. This procedure has been developed and successfully tested with a twin experiment on a Lorenz 95 model. It is currently being applied and tested on the sea ice ocean NEMO LIM model, which is used in the PredAntar project. NEMO LIM is a global and low resolution (2 degrees) coupled model (hydrodynamic model and sea ice model) with long time steps allowing simulations over several decades. Due to its low resolution, the model is subject to bias in areas where strong currents are present. We aim at correcting this bias by using perturbed current fields from higher resolution models and randomly generated perturbations. The random perturbations need to be constrained in order to respect the physical properties of the ocean, and not create unwanted phenomena. To construct those random perturbations, we first create a random field with the Diva tool (Data-Interpolating Variational Analysis). Using a cost function, this tool penalizes abrupt variations in the field, while using a custom correlation length. It also decouples disconnected areas based on topography. Then, we filter the field to smooth it and remove small-scale variations. We use this field as a random stream function, and take its derivatives to get zonal and meridional velocity fields. We also constrain the stream function along the coasts in order not to have
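
    The stream-function construction described above can be sketched numerically. A Gaussian filter is used below as a crude stand-in for the Diva analysis and its correlation-length penalty; grid sizes and amplitudes are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
ny, nx, dx = 100, 120, 2.0e5       # grid size and spacing (m), illustrative

# Random field smoothed to remove small-scale variations (stand-in for Diva).
psi = gaussian_filter(rng.normal(size=(ny, nx)), sigma=8) * 1.0e4

# Constrain the stream function along the boundaries so no flow crosses them,
# mimicking the coastal constraint mentioned above.
psi[0, :] = psi[-1, :] = psi[:, 0] = psi[:, -1] = 0.0

# Non-divergent velocity perturbations: u = -dpsi/dy, v = dpsi/dx, so the
# perturbation respects volume conservation by construction.
u = -np.gradient(psi, dx, axis=0)
v = np.gradient(psi, dx, axis=1)
print(u.std(), v.std())
```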

  10. Effect of an Ergonomics-Based Educational Intervention Based on Transtheoretical Model in Adopting Correct Body Posture Among Operating Room Nurses.

    Science.gov (United States)

    Moazzami, Zeinab; Dehdari, Tahere; Taghdisi, Mohammad Hosein; Soltanian, Alireza

    2015-11-03

    One of the preventive strategies for chronic low back pain among operating room nurses is instruction in proper body mechanics and postural behavior, for which the use of the Transtheoretical Model (TTM) has been recommended. Eighty-two nurses who were in the contemplation and preparation stages for adopting correct body posture were randomly selected (control group = 40, intervention group = 42). TTM variables and body posture were measured at baseline and again 1 and 6 months after the intervention. A four-week ergonomics educational intervention based on TTM variables was designed and conducted for the nurses in the intervention group. Following the intervention, a higher proportion of nurses in the intervention group moved into the action stage (p < 0.05). The TTM provides a suitable framework for developing stage-based ergonomics interventions for postural behavior.

  11. A Mathematical Model of the Effect of Immunogenicity on Therapeutic Protein Pharmacokinetics

    OpenAIRE

    Chen, Xiaoying; Hickling, Timothy; Kraynov, Eugenia; Kuang, Bing; Parng, Chuenlei; Vicini, Paolo

    2013-01-01

    A mathematical pharmacokinetic/anti-drug-antibody (PK/ADA) model was constructed for quantitatively assessing immunogenicity for therapeutic proteins. The model is inspired by traditional pharmacokinetic/pharmacodynamic (PK/PD) models, and is based on the observed impact of ADA on protein drug clearance. The hypothesis for this work is that altered drug PK contains information about the extent and timing of ADA generation. By fitting drug PK profiles while accounting for ADA-mediated drug cle...
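
    A minimal numerical sketch of the idea, not the published PK/ADA model: one-compartment drug kinetics whose clearance gains an ADA-proportional term once an ADA response emerges after a lag. All parameter names and values are illustrative.

```python
import numpy as np

dt, t_end = 0.05, 60.0                      # days
t = np.arange(0.0, t_end, dt)
kel = 0.10                                  # baseline elimination (1/day)
k_ada_clear = 0.05                          # extra clearance per ADA unit
ada_onset, k_growth, ada_max = 14.0, 0.4, 10.0

drug = np.empty_like(t); drug[0] = 100.0    # arbitrary concentration units
ada = np.zeros_like(t)
for i in range(len(t) - 1):
    seed = 0.1 if t[i] > ada_onset else 0.0           # ADA appears after a lag
    d_ada = k_growth * ada[i] * (1 - ada[i] / ada_max) + seed
    d_drug = -(kel + k_ada_clear * ada[i]) * drug[i]  # ADA accelerates clearance
    ada[i + 1] = ada[i] + dt * d_ada
    drug[i + 1] = drug[i] + dt * d_drug

# The faster-than-expected decline of `drug` after `ada_onset` is the PK
# signature from which the extent and timing of ADA generation is inferred.
print(drug[int(ada_onset / dt)], drug[-1])
```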

  12. On the gluonic correction to lepton-pair decays in a relativistic quarkonium model

    International Nuclear Information System (INIS)

    Ito, Hitoshi

    1987-01-01

    The gluonic correction to the leptonic decay of the heavy vector meson is investigated by using perturbation theory to order α_s. The on-mass-shell approximation is assumed for the constituent quarks so as to ensure the gauge independence of the correction. The decay rates in the model based on the Bethe-Salpeter equation are also shown, in which the gluonic correction with a high-momentum cutoff is calculated for the off-shell quarks. It is shown that the static approximation to the correction factor (1 − 16α_s/3π) is not adequate and that the gluonic correction does not suppress but rather enhances the decay rates of the ground states for the c anti-c and b anti-b systems. (author)

  13. Discovery and design of carbohydrate-based therapeutics.

    Science.gov (United States)

    Cipolla, Laura; Araújo, Ana C; Bini, Davide; Gabrielli, Luca; Russo, Laura; Shaikh, Nasrin

    2010-08-01

    Until now, the importance of carbohydrates has been understated in comparison with the two other major classes of biopolymers, oligonucleotides and proteins. Recent advances in glycobiology and glycochemistry have generated strong interest in the study of this enormous family of biomolecules. Carbohydrates have been shown to be implicated in recognition processes, such as cell-cell adhesion, cell-extracellular matrix adhesion and cell-intruder recognition phenomena. In addition, carbohydrates are recognized as differentiation markers and as antigenic determinants. Due to their relevant biological role, carbohydrates are promising candidates for drug design and disease treatment. However, the growing number of human disorders known as congenital disorders of glycosylation, identified as resulting from abnormalities in glycan structures and protein glycosylation, strongly indicates that rapid development of glycobiology, glycochemistry and glycomedicine is highly desirable. The article gives an overview of the different approaches that have been used to date for the design of carbohydrate-based therapeutics; this includes the use of native synthetic carbohydrates, the use of carbohydrate mimics designed on the basis of their native counterparts, the use of carbohydrates as scaffolds and, finally, the design of glyco-fused therapeutics, one of the most recent approaches. The review covers mainly literature that has appeared since 2000, except for a few papers cited for historical reasons. The reader will gain an overview of the current strategies applied to the design of carbohydrate-based therapeutics; in particular, the advantages/disadvantages of different approaches are highlighted. The topic is presented in a general, basic manner and will hopefully be a useful resource for all readers who are not familiar with it. In addition, in order to stress the potential of carbohydrates, several examples of carbohydrate-based marketed therapeutics are given.

  14. Correction for Measurement Error from Genotyping-by-Sequencing in Genomic Variance and Genomic Prediction Models

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Janss, Luc; Jensen, Just

    … sample). The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but are subject to measurement error due to the low sequencing depth per individual. Due to technical reasons … In the current work we show how the correction for measurement error in GBSeq can also be applied in whole-genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction … for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data …

  15. Design of Service Net based Correctness Verification Approach for Multimedia Conferencing Service Orchestration

    Directory of Open Access Journals (Sweden)

    Cheng Bo

    2012-02-01

    Full Text Available Multimedia conferencing is increasingly becoming a very important and popular application over the Internet. The complexity of asynchronous communications and the need to handle large numbers of dynamically concurrent processes make it difficult to achieve sufficient correctness guarantees, so supporting effective verification methods for multimedia conferencing service orchestration is an extremely difficult and challenging problem. In this paper, we first present the Business Process Execution Language (BPEL)-based conferencing service orchestration, and mainly focus on the service-net-based correctness verification approach for multimedia conferencing service orchestration, which automatically translates the BPEL-based service orchestration into a corresponding Petri net model with the Petri Net Markup Language (PNML); we also present the BPEL service net reduction rules and the correctness verification algorithms for multimedia conferencing service orchestration. We perform the correctness analysis and verification using service net properties such as safeness, reachability and deadlocks, and provide an automated support tool for the formal analysis and soundness verification of multimedia conferencing service orchestration scenarios. Finally, we give a comparison and evaluation.
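
    The kind of analysis performed on the derived Petri net can be illustrated with a generic reachability and deadlock check; this is not the paper's reduction-rule algorithm, and the toy net below is invented for illustration.

```python
from collections import deque

def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Consume input tokens, produce output tokens, drop empty places."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] = m.get(p, 0) - n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return {p: n for p, n in m.items() if n > 0}

def explore(initial, transitions):
    """Breadth-first enumeration of reachable markings; markings with no
    enabled transition are reported as deadlocks."""
    seen, deadlocks = set(), []
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        key = frozenset(m.items())
        if key in seen:
            continue
        seen.add(key)
        succ = [fire(m, pre, post) for pre, post in transitions if enabled(m, pre)]
        if not succ:
            deadlocks.append(m)
        queue.extend(succ)
    return seen, deadlocks

# Toy orchestration fragment: a participant is invited, joins, then leaves.
net = [({"invited": 1}, {"joined": 1}),
       ({"joined": 1}, {"left": 1})]
markings, deadlocks = explore({"invited": 1}, net)
print(len(markings), deadlocks)   # 3 reachable markings; {'left': 1} is terminal
```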

  16. Correction of β-thalassemia mutant by base editor in human embryos

    Directory of Open Access Journals (Sweden)

    Puping Liang

    2017-09-01

    Full Text Available β-Thalassemia is a global health issue, caused by mutations in the HBB gene. Among these mutations, the HBB −28 (A>G) mutation is one of the three most common mutations in China and Southeast Asia patients with β-thalassemia. Correcting this mutation in human embryos may prevent the disease being passed on to future generations and cure anemia. Here we report the first study using the base editor (BE) system to correct a disease mutation in human embryos. First, we produced a 293T cell line with an exogenous HBB −28 (A>G) mutant fragment for gRNA and targeting-efficiency evaluation. Then we collected primary skin fibroblast cells from a β-thalassemia patient with the HBB −28 (A>G) homozygous mutation. Data showed that the base editor could precisely correct the HBB −28 (A>G) mutation in the patient's primary cells. To model homozygous-mutation disease embryos, we constructed nuclear transfer embryos by fusing the lymphocyte or skin fibroblast cells with enucleated in vitro matured (IVM) oocytes. Notably, the gene correction efficiency was over 23.0% in these embryos by base editor. Although these embryos were still mosaic, the percentage of repaired blastomeres was over 20.0%. In addition, we found that base editor variants with narrowed deamination windows could promote G-to-A conversion at the HBB −28 site precisely in human embryos. Collectively, this study demonstrated the feasibility of curing genetic disease in human somatic cells and embryos by the base editor system.

  17. Learning versus correct models: influence of model type on the learning of a free-weight squat lift.

    Science.gov (United States)

    McCullagh, P; Meyer, K N

    1997-03-01

    It has been assumed that demonstrating the correct movement is the best way to impart task-relevant information. However, empirical verification with simple laboratory skills has shown that using a learning model (showing an individual in the process of acquiring the skill to be learned) may accelerate skill acquisition and increase retention more than using a correct model. The purpose of the present study was to compare the effectiveness of viewing correct versus learning models on the acquisition of a sport skill (free-weight squat lift). Forty female participants were assigned to four learning conditions: physical practice receiving feedback, learning model with model feedback, correct model with model feedback, and learning model without model feedback. Results indicated that viewing either a correct or learning model was equally effective in learning correct form in the squat lift.

  18. A Blast Wave Model With Viscous Corrections

    International Nuclear Information System (INIS)

    Yang, Z; Fries, R J

    2017-01-01

    Hadronic observables in the final stage of heavy ion collision can be described well by fluid dynamics or blast wave parameterizations. We improve existing blast wave models by adding shear viscous corrections to the particle distributions in the Navier-Stokes approximation. The specific shear viscosity η/s of a hadron gas at the freeze-out temperature is a new parameter in this model. We extract the blast wave parameters with viscous corrections from experimental data which leads to constraints on the specific shear viscosity at kinetic freeze-out. Preliminary results show η/s is rather small. (paper)

  19. A Blast Wave Model With Viscous Corrections

    Science.gov (United States)

    Yang, Z.; Fries, R. J.

    2017-04-01

    Hadronic observables in the final stage of heavy ion collision can be described well by fluid dynamics or blast wave parameterizations. We improve existing blast wave models by adding shear viscous corrections to the particle distributions in the Navier-Stokes approximation. The specific shear viscosity η/s of a hadron gas at the freeze-out temperature is a new parameter in this model. We extract the blast wave parameters with viscous corrections from experimental data which leads to constraints on the specific shear viscosity at kinetic freeze-out. Preliminary results show η/s is rather small.

  20. Heat transfer corrected isothermal model for devolatilization of thermally-thick biomass particles

    DEFF Research Database (Denmark)

    Luo, Hao; Wu, Hao; Lin, Weigang

    The isothermal model used in current computational fluid dynamics (CFD) models neglects internal heat transfer during biomass devolatilization. This assumption is not reasonable for thermally-thick particles. To solve this issue, a heat-transfer-corrected isothermal model is introduced. In this model …, two heat-transfer correction coefficients, HT-correction of heat transfer and HR-correction of reaction, are defined to cover the effects of internal heat transfer. A series of single-particle biomass devolatilization cases have been modeled to validate this model; the results show that the devolatilization behaviors … of both thermally-thick and thermally-thin particles are predicted reasonably by the heat-transfer-corrected model, while the isothermal model overestimates the devolatilization rate and heating rate for thermally-thick particles. This model probably has better performance than the isothermal model when it is coupled …

  1. Fiducial marker-based correction for involuntary motion in weight-bearing C-arm CT scanning of knees. Part I. Numerical model-based optimization.

    Science.gov (United States)

    Choi, Jang-Hwan; Fahrig, Rebecca; Keil, Andreas; Besier, Thor F; Pal, Saikat; McWalter, Emily J; Beaupré, Gary S; Maier, Andreas

    2013-09-01

    Human subjects in standing positions are apt to show much more involuntary motion than in supine positions. The authors aimed to simulate a complicated realistic lower body movement using the four-dimensional (4D) digital extended cardiac-torso (XCAT) phantom. The authors also investigated fiducial marker-based motion compensation methods in two-dimensional (2D) and three-dimensional (3D) space. The level of involuntary movement-induced artifacts and image quality improvement were investigated after applying each method. An optical tracking system with eight cameras and seven retroreflective markers enabled us to track involuntary motion of the lower body of nine healthy subjects holding a squat position at 60° of flexion. The XCAT-based knee model was developed using the 4D XCAT phantom and the optical tracking data acquired at 120 Hz. The authors divided the lower body in the XCAT into six parts and applied unique affine transforms to each so that the motion (6 degrees of freedom) could be synchronized with the optical markers' location at each time frame. The control points of the XCAT were tessellated into triangles and 248 projection images were created based on intersections of each ray and monochromatic absorption. The tracking data sets with the largest motion (Subject 2) and the smallest motion (Subject 5) among the nine data sets were used to animate the XCAT knee model. The authors defined eight skin control points well distributed around the knees as pseudo-fiducial markers which functioned as a reference in motion correction. Motion compensation was done in the following ways: (1) simple projection shifting in 2D, (2) deformable projection warping in 2D, and (3) rigid body warping in 3D. Graphics hardware accelerated filtered backprojection was implemented and combined with the three correction methods in order to speed up the simulation process. Correction fidelity was evaluated as a function of number of markers used (4-12) and marker distribution

  2. Applications of lipid based formulation technologies in the delivery of biotechnology-based therapeutics.

    Science.gov (United States)

    du Plessis, Lissinda H; Marais, Etienne B; Mohammed, Faruq; Kotzé, Awie F

    2014-01-01

    In the last decades several new biotechnology-based therapeutics have been developed due to progress in genetic engineering. A growing challenge facing pharmaceutical scientists is formulating these compounds into oral dosage forms with adequate bioavailability. An increasingly popular approach to formulating biotechnology-based therapeutics is the use of lipid-based formulation technologies. This review highlights the importance of lipid-based drug delivery systems in the formulation of oral biotechnology-based therapeutics including peptides, proteins, DNA, siRNA and vaccines. The different production procedures used to achieve high encapsulation efficiencies of the bioactives are discussed, as well as the factors influencing the choice of excipient. Lipid-based colloidal drug delivery systems including liposomes and solid lipid nanoparticles are reviewed with a focus on recent advances and updates. We further describe microemulsions and self-emulsifying drug delivery systems and recent findings on bioactive delivery. We conclude the review with a few examples of novel lipid-based formulation technologies.

  3. Impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling

    Science.gov (United States)

    Chen, Jie; Li, Chao; Brissette, François P.; Chen, Hua; Wang, Mingna; Essou, Gilles R. C.

    2018-05-01

    Bias correction is usually implemented prior to using climate model outputs for impact studies. However, bias correction methods that are commonly used treat climate variables independently and often ignore inter-variable dependencies. The effects of ignoring such dependencies on impact studies need to be investigated. This study aims to assess the impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling. To this end, a joint bias correction (JBC) method which corrects the joint distribution of two variables as a whole is compared with an independent bias correction (IBC) method; this is considered in terms of correcting simulations of precipitation and temperature from 26 climate models for hydrological modeling over 12 watersheds located in various climate regimes. The results show that the simulated precipitation and temperature are considerably biased not only in the individual distributions, but also in their correlations, which in turn result in biased hydrological simulations. In addition to reducing the biases of the individual characteristics of precipitation and temperature, the JBC method can also reduce the bias in precipitation-temperature (P-T) correlations. In terms of hydrological modeling, the JBC method performs significantly better than the IBC method for 11 out of the 12 watersheds over the calibration period. For the validation period, the advantages of the JBC method are greatly reduced as the performance becomes dependent on the watershed, GCM and hydrological metric considered. For arid/tropical and snowfall-rainfall-mixed watersheds, JBC performs better than IBC. For snowfall- or rainfall-dominated watersheds, however, the two methods behave similarly, with IBC performing somewhat better than JBC. Overall, the results emphasize the advantages of correcting the P-T correlation when using climate model-simulated precipitation and temperature to assess the impact of climate change on watershed
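
    The contrast between independent and joint correction can be sketched as follows. Quantile mapping stands in for the IBC step, and a rank reshuffle (Schaake-shuffle-like) stands in for the joint step; the published JBC corrects the joint distribution itself, so this only restores the rank dependence. The synthetic data are invented for illustration.

```python
import numpy as np

def quantile_map(sim, obs):
    """Independent bias correction: match the empirical quantiles of the
    simulation to those of the observations."""
    ranks = np.argsort(np.argsort(sim)) / (len(sim) - 1.0)
    return np.quantile(obs, ranks)

rng = np.random.default_rng(3)
n = 2000
obs_t = rng.normal(10.0, 5.0, n)                            # observed temperature
obs_p = np.exp(1.0 - 0.05 * obs_t + rng.normal(0, 0.3, n))  # correlated precip
sim_t = rng.normal(13.0, 4.0, n)                            # biased model output
sim_p = np.exp(0.5 + 0.01 * sim_t + rng.normal(0, 0.5, n))  # wrong P-T relation

# IBC: marginals are corrected, but the simulated P-T dependence remains.
ibc_t, ibc_p = quantile_map(sim_t, obs_t), quantile_map(sim_p, obs_p)

# Crude joint step: impose the observed P-T rank structure on the corrected
# series by reordering corrected P against corrected T.
order_t = np.argsort(ibc_t)
p_rank_given_t = np.argsort(np.argsort(obs_p[np.argsort(obs_t)]))
jbc_p = np.empty(n)
jbc_p[order_t] = np.sort(ibc_p)[p_rank_given_t]

print("observed corr:", np.corrcoef(obs_t, obs_p)[0, 1])
print("IBC corr:     ", np.corrcoef(ibc_t, ibc_p)[0, 1])
print("joint corr:   ", np.corrcoef(ibc_t, jbc_p)[0, 1])
```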

  4. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples

    Science.gov (United States)

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-01

    Calibration transfer is essential for practical applications of near-infrared (NIR) spectroscopy because the spectra may be measured on different instruments and the difference between the instruments must be corrected. Most calibration transfer methods require standard samples to construct the transfer model from the spectra of the samples measured on two instruments, named the master and slave instruments, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. This fact makes the coefficients of the linear models constructed from the spectra measured on different instruments similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary for the method, it may be more useful in practice.
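
    The coefficient-transfer idea can be sketched with a ridge penalty that keeps the slave coefficients close to the master profile; this is a hedged stand-in for the constrained optimization used in LMC, not the published algorithm. All names and the toy data are illustrative.

```python
import numpy as np

def transfer_coefficients(b_master, X_slave, y_slave, lam=1.0):
    """Pull the master regression coefficients toward the slave instrument
    using only a handful of slave-measured spectra.

    Solves min_b ||y_slave - X_slave b||^2 + lam * ||b - b_master||^2,
    i.e. a ridge penalty anchoring the slave model to the master profile.
    """
    k = X_slave.shape[1]
    A = X_slave.T @ X_slave + lam * np.eye(k)
    return np.linalg.solve(A, X_slave.T @ y_slave + lam * b_master)

# Toy demonstration: slave spectra are an affine distortion of the master's.
rng = np.random.default_rng(4)
n, k = 50, 20
X_master = rng.normal(size=(n, k))
b_true = rng.normal(size=k)
y = X_master @ b_true
b_master = np.linalg.lstsq(X_master, y, rcond=None)[0]

X_slave = 1.05 * X_master[:8] + 0.02          # only 8 slave spectra available
b_slave = transfer_coefficients(b_master, X_slave, y[:8], lam=0.5)
print(np.linalg.norm(X_slave @ b_slave - y[:8]))   # small residual
```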

  5. Beam-Based Nonlinear Optics Corrections in Colliders

    CERN Document Server

    Pilat, Fulvia Caterina; Malitsky, Nikolay; Ptitsyn, Vadim

    2005-01-01

    A method has been developed to measure and operationally correct the nonlinear effects of the final focusing magnets in colliders; it gives access to the effects of multipole errors by applying closed orbit bumps and analyzing the resulting tune and orbit shifts. This technique has been tested and used during 3 years of RHIC (the Relativistic Heavy Ion Collider at BNL) operations. I will discuss here the theoretical basis of the method, the experimental set-up, the correction results, the present understanding of the machine model, and the potential and limitations of the method itself as compared with other nonlinear correction techniques.

  6. [Identification of novel therapeutically effective antibiotics using silkworm infection model].

    Science.gov (United States)

    Hamamoto, Hiroshi; Urai, Makoto; Paudel, Atmika; Horie, Ryo; Murakami, Kazuhisa; Sekimizu, Kazuhisa

    2012-01-01

    Most antibiotics identified by in vitro screening for antibacterial activity have properties inappropriate for use as medicines, owing to their toxicity and pharmacodynamics in animal bodies. Thus, evaluation of the therapeutic effects of these samples using animal models is essential even at the crude stage. Mammals are not suitable for therapeutic evaluation of large numbers of samples due to high costs and ethical issues. We propose the use of silkworms (Bombyx mori) as model animals for screening therapeutically effective antibiotics. Silkworms are infected by various pathogenic bacteria and are effectively treated by clinically used antibiotics with similar ED(50) values. Furthermore, drug metabolism pathways, such as the cytochrome P450 and conjugation systems, are similar between silkworms and mammals. Silkworms have many advantages over other infection models, such as their 1) low cost, 2) few associated ethical problems, 3) adequate body size for easy handling, and 4) easy separation of organs and hemolymph. These features of the silkworm allow efficient screening of therapeutically effective antibiotics. In this review, we discuss the advantages of the silkworm model in the early stages of drug development and the screening results of some antibiotics using the silkworm infection model.

  7. Loop Corrections to Standard Model fields in inflation

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xingang [Institute for Theory and Computation, Harvard-Smithsonian Center for Astrophysics,60 Garden Street, Cambridge, MA 02138 (United States); Department of Physics, The University of Texas at Dallas,800 W Campbell Rd, Richardson, TX 75080 (United States); Wang, Yi [Department of Physics, The Hong Kong University of Science and Technology,Clear Water Bay, Kowloon, Hong Kong (China); Xianyu, Zhong-Zhi [Center of Mathematical Sciences and Applications, Harvard University,20 Garden Street, Cambridge, MA 02138 (United States)

    2016-08-08

    We calculate 1-loop corrections to the Schwinger-Keldysh propagators of Standard-Model-like fields of spin-0, 1/2, and 1, with all renormalizable interactions during inflation. We pay special attention to the late-time divergences of loop corrections, and show that the divergences can be resummed into finite results in the late-time limit using dynamical renormalization group method. This is our first step toward studying both the Standard Model and new physics in the primordial universe.

  8. Ground-Based Correction of Remote-Sensing Spectral Imagery

    Science.gov (United States)

    Alder-Golden, Steven M.; Rochford, Peter; Matthew, Michael; Berk, Alexander

    2007-01-01

    Software has been developed for an improved method of correcting for atmospheric optical effects (primarily, effects of aerosols and water vapor) in spectral images of the surface of the Earth acquired by airborne and spaceborne remote-sensing instruments. In this method, the variables needed for the corrections are extracted from the readings of a radiometer located on the ground in the vicinity of the scene of interest. The software includes algorithms that analyze measurement data acquired from a shadow-band radiometer. These algorithms are based on a prior radiation transport software model, called MODTRAN, that has been developed through several versions up to what are now known as MODTRAN4 and MODTRAN5. These components have been integrated with a user-friendly Interactive Data Language (IDL) front end and an advanced version of MODTRAN4. Software tools for handling general data formats, performing a Langley-type calibration, and generating an output file of retrieved atmospheric parameters for use in another atmospheric-correction computer program known as FLAASH have also been incorporated into the present software. Concomitantly with the software described thus far, a version of FLAASH has been developed that utilizes the retrieved atmospheric parameters to process spectral image data.
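
    The Langley-type calibration mentioned above rests on the Beer-Lambert law: for a stable clear sky, the log of the radiometer signal is linear in airmass, ln V = ln V0 − m·τ, so extrapolating to zero airmass gives the calibration constant V0 and the slope gives the optical depth τ. The sketch below is illustrative and is not the described software; numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
airmass = np.linspace(1.5, 5.0, 30)      # airmass values over a clear morning
tau_true, v0_true = 0.25, 1.38           # "unknown" optical depth and constant
volts = (v0_true * np.exp(-tau_true * airmass)
         * np.exp(rng.normal(0, 0.01, airmass.size)))  # noisy measurements

# Langley plot: linear fit of ln(V) against airmass recovers tau and V0.
slope, intercept = np.polyfit(airmass, np.log(volts), 1)
print(f"tau = {-slope:.3f}, V0 = {np.exp(intercept):.3f}")
```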

  9. Spectral-ratio radon background correction method in airborne γ-ray spectrometry based on compton scattering deduction

    International Nuclear Information System (INIS)

    Gu Yi; Xiong Shengqing; Zhou Jianxin; Fan Zhengguo; Ge Liangquan

    2014-01-01

    γ-rays released by radon daughters have a severe impact on airborne γ-ray spectrometry. The spectral-ratio method is one of the best mathematical methods for radon background deduction in airborne γ-ray spectrometry. In this paper, an advanced spectral-ratio method is proposed which deducts the Compton-scattered rays by the fast Fourier transform rather than by stripping ratios. The relationship between survey height and the correction coefficient of the advanced spectral-ratio radon background correction method was studied, the corresponding mathematical model was established, and a ground saturation model calibration technique for the correction coefficient was proposed. The applicability and correction efficiency of the advanced spectral-ratio method are improved, and its application cost is reduced. Furthermore, it avoids the loss of physical meaning and the possible errors caused by the matrix computation and the spectrum-shape-based mathematical fitting applied in traditional correction coefficients. (authors)

  10. Investigation of turbulence models with compressibility corrections for hypersonic boundary flows

    Directory of Open Access Journals (Sweden)

    Han Tang

    2015-12-01

    Full Text Available The applications of pressure work, pressure-dilatation, and dilatation-dissipation (Sarkar, Zeman, and Wilcox) models to hypersonic boundary flows are investigated. Flat plate boundary layer flows at Mach numbers of 5-11 and shock wave/boundary layer interactions at compression corners are simulated numerically. For the flat plate boundary layer flows, the original turbulence models overestimate the heat flux at Mach numbers up to 10, and compressibility corrections applied to the turbulence models lead to a decrease in friction coefficients and heating rates. The pressure work and pressure-dilatation models yield the better results. Among the three dilatation-dissipation models, the Sarkar and Wilcox corrections present larger deviations from the experimental measurements, while the Zeman correction can achieve acceptable results. For hypersonic compression corner flows, due to the evident increase of the turbulence Mach number in the separation zone, compressibility corrections enlarge the separation areas and thus cannot improve the accuracy of the calculated results. It is unreasonable for compressibility corrections to take effect in the separation zone. The density-corrected model by Catris and Aupoix is suitable for shock wave/boundary layer interaction flows; it can improve the simulation accuracy of the peak heating and has little influence on the separation zone.

  11. MATLAB based beam orbit correction system of HLS storage ring

    International Nuclear Information System (INIS)

    Ding Shichuan; Liu Gongfa; Xuan Ke; Li Weimin; Wang Lin; Wang Jigang; Li Chuan; Bao Xun; Guo Weiqun

    2006-01-01

    The distortion of the closed orbit usually causes side effects harmful to a synchrotron radiation source such as HLS, so it is necessary to correct it. In this paper, the correction principle, development procedure, and testing of the MATLAB-based beam orbit correction system of the HLS storage ring are described. The correction system consists of the beam orbit measurement system, the corrector magnet system, and the control system; the beam orbit correction code, written in MATLAB, runs on the operator interface. The beam orbit data are first analyzed and calculated, and the orbit is then corrected by changing corrector strengths via the control system. Testing shows that the maximum closed-orbit distortion is reduced from 4.468 mm before correction to 0.299 mm after correction, and the standard deviation from 2.986 mm to 0.087 mm. The correction system thus reaches its design goal. (authors)
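
    The core computation in such a system is typically a response-matrix inversion. The sketch below shows a generic SVD-based orbit correction step (in Python rather than the MATLAB of the actual system); the response matrix and kick pattern are random stand-ins, not HLS data.

```python
import numpy as np

def orbit_correction(response, orbit, n_sv=None):
    """Compute corrector strength changes from a measured closed orbit.

    response : (n_bpm, n_corrector) orbit response matrix
    orbit    : measured closed-orbit distortion at the BPMs
    n_sv     : number of singular values kept (regularization)
    Returns corrector increments minimizing ||orbit + response @ dtheta||.
    """
    u, s, vt = np.linalg.svd(response, full_matrices=False)
    if n_sv is not None:
        u, s, vt = u[:, :n_sv], s[:n_sv], vt[:n_sv]
    return -vt.T @ ((u.T @ orbit) / s)

# Toy check: with a known kick pattern, the correction cancels it exactly.
rng = np.random.default_rng(6)
R = rng.normal(size=(32, 16))          # 32 BPMs, 16 correctors
kick = rng.normal(size=16)
print(np.allclose(orbit_correction(R, R @ kick), -kick))  # True
```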

  12. Numerical model and analysis of an energy-based system using microwaves for vision correction

    Science.gov (United States)

    Pertaub, Radha; Ryan, Thomas P.

    2009-02-01

    A treatment system was developed utilizing a microwave-based procedure capable of treating myopia and offering a less invasive alternative to laser vision correction without cutting the eye. Microwave thermal treatment elevates the temperature of the paracentral stroma of the cornea to create a predictable refractive change while preserving the epithelium and deeper structures of the eye. A pattern of shrinkage outside of the optical zone may be sufficient to flatten the central cornea. A numerical model was set up to investigate both the electromagnetic field and the resultant transient temperature distribution. A finite element model of the eye was created and the axisymmetric distribution of temperature calculated to characterize the combination of controlled power deposition combined with surface cooling to spare the epithelium, yet shrink the cornea, in a circularly symmetric fashion. The model variables included microwave power levels and pulse width, cooling timing, dielectric material and thickness, and electrode configuration and gap. Results showed that power is totally contained within the cornea and no significant temperature rise was found outside the anterior cornea, due to the near-field design of the applicator and limited thermal conduction with the short on-time. Target isothermal regions were plotted as a result of common energy parameters along with a variety of electrode shapes and sizes, which were compared. Dose plots showed the relationship between energy and target isothermic regions.

  13. Direct cointegration testing in error-correction models

    NARCIS (Netherlands)

    F.R. Kleibergen (Frank); H.K. van Dijk (Herman)

    1994-01-01

    An error correction model is specified having only exactly identified parameters, some of which reflect a possible departure from a cointegration model. Wald, likelihood ratio, and Lagrange multiplier statistics are derived to test for the significance of these parameters. The

  14. Optimal Model-Based Fault Estimation and Correction for Particle Accelerators and Industrial Plants Using Combined Support Vector Machines and First Principles Models

    International Nuclear Information System (INIS)

    2010-01-01

    Timely estimation of deviations from optimal performance in complex systems and the ability to identify corrective measures in response to the estimated parameter deviations has been the subject of extensive research over the past four decades. The implications in terms of lost revenue from costly industrial processes, operation of large-scale public works projects and the volume of the published literature on this topic clearly indicates the significance of the problem. Applications range from manufacturing industries (integrated circuits, automotive, etc.), to large-scale chemical plants, pharmaceutical production, power distribution grids, and avionics. In this project we investigated a new framework for building parsimonious models that are suited for diagnosis and fault estimation of complex technical systems. We used Support Vector Machines (SVMs) to model potentially time-varying parameters of a First-Principles (FP) description of the process. The combined SVM and FP model was built (i.e. model parameters were trained) using constrained optimization techniques. We used the trained models to estimate faults affecting simulated beam lifetime. In the case where a large number of process inputs are required for model-based fault estimation, the proposed framework performs an optimal nonlinear principal component analysis of the large-scale input space, and creates a lower dimension feature space in which fault estimation results can be effectively presented to the operation personnel. To fulfill the main technical objectives of the Phase I research, our Phase I efforts have focused on: (1) SVM Training in a Combined Model Structure - We developed the software for the constrained training of the SVMs in a combined model structure, and successfully modeled the parameters of a first-principles model for beam lifetime with support vectors. (2) Higher-order Fidelity of the Combined Model - We used constrained training to ensure that the output of the SVM (i.e. the

  15. Optimal Model-Based Fault Estimation and Correction for Particle Accelerators and Industrial Plants Using Combined Support Vector Machines and First Principles Models

    Energy Technology Data Exchange (ETDEWEB)

    Sayyar-Rodsari, Bijan; Schweiger, Carl; /SLAC /Pavilion Technologies, Inc., Austin, TX

    2010-08-25

    Timely estimation of deviations from optimal performance in complex systems and the ability to identify corrective measures in response to the estimated parameter deviations has been the subject of extensive research over the past four decades. The implications in terms of lost revenue from costly industrial processes, operation of large-scale public works projects and the volume of the published literature on this topic clearly indicates the significance of the problem. Applications range from manufacturing industries (integrated circuits, automotive, etc.), to large-scale chemical plants, pharmaceutical production, power distribution grids, and avionics. In this project we investigated a new framework for building parsimonious models that are suited for diagnosis and fault estimation of complex technical systems. We used Support Vector Machines (SVMs) to model potentially time-varying parameters of a First-Principles (FP) description of the process. The combined SVM & FP model was built (i.e. model parameters were trained) using constrained optimization techniques. We used the trained models to estimate faults affecting simulated beam lifetime. In the case where a large number of process inputs are required for model-based fault estimation, the proposed framework performs an optimal nonlinear principal component analysis of the large-scale input space, and creates a lower dimension feature space in which fault estimation results can be effectively presented to the operation personnel. To fulfill the main technical objectives of the Phase I research, our Phase I efforts have focused on: (1) SVM Training in a Combined Model Structure - We developed the software for the constrained training of the SVMs in a combined model structure, and successfully modeled the parameters of a first-principles model for beam lifetime with support vectors. (2) Higher-order Fidelity of the Combined Model - We used constrained training to ensure that the output of the SVM (i.e. the
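
    The combined structure described in these two records can be sketched as follows: a first-principles relation with a parameter that drifts with operating conditions, and an SVM that learns that parameter. The toy decay law, names, and data are invented for illustration and are not the SLAC beam-lifetime model; scikit-learn is assumed available.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)
conditions = rng.uniform(0, 1, size=(300, 3))     # e.g. pressure, tune, fill
k_true = 0.5 + 0.3 * np.sin(2 * np.pi * conditions[:, 0])  # drifting parameter
current = rng.uniform(10, 100, size=300)
lifetime = 1.0 / (k_true * current)               # toy first-principles relation

# Invert the FP relation for k on training data, then fit the SVM to it.
k_observed = 1.0 / (lifetime * current)
svm = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(conditions, k_observed)

# Fault estimation: a persistent gap between the SVM-predicted and the
# FP-inverted parameter flags a deviation from optimal performance.
k_pred = svm.predict(conditions[:5])
print(np.abs(k_pred - k_observed[:5]))            # small residuals = no fault
```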

  16. SU-F-T-143: Implementation of a Correction-Based Output Model for a Compact Passively Scattered Proton Therapy System

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, S; Ahmad, S; Chen, Y; Ferreira, C; Islam, M; Lau, A; Jin, H [University of Oklahoma Health Sciences Center, Oklahoma City, OK (United States); Keeling, V [Carti, Inc., Little Rock, AR (United States)

    2016-06-15

    Purpose: To commission and investigate the accuracy of an output (cGy/MU) prediction model for a compact passively scattered proton therapy system. Methods: A previously published output prediction model (Sahoo et al, Med Phys, 35, 5088-5097, 2008) was commissioned for our Mevion S250 proton therapy system. This is a correction-based model that multiplies correction factors (d/MU_wnc = ROF × SOBPF × RSF × SOBPOCF × OCR × FSF × ISF). These factors account for changes in output due to options (12 large, 5 deep, and 7 small), modulation width M, range R, off-center, off-axis, field size, and off-isocenter. In this study, the model was modified to ROF × SOBPF × RSF × OCR × FSF × ISF-OCF × GACF by merging SOBPOCF and ISF for simplicity and introducing a gantry angle correction factor (GACF). To commission the model, over 1,000 output data points were taken at the time of system commissioning. The output was predicted by interpolation (1D for SOBPF, FSF, and GACF; 2D for RSF and OCR) with an inverse-square calculation (ISF-OCF). The outputs of 273 combinations of R and M covering a total of 24 options were measured to test the model. To minimize fluence perturbation, scattered dose from the range compensator and patient was not considered. The percent differences between the predicted (P) and measured (M) outputs were calculated to test the prediction accuracy ([P − M]/M × 100%). Results: A GACF was required because of up to 3.5% output variation dependence on the gantry angle. A 2D interpolation was required for OCR because the dose distribution was not radially symmetric, especially for the deep options. The average percent difference was −0.03 ± 0.98% (mean ± SD) and the differences of all the measurements fell within ±3%. Conclusion: It is concluded that the model can be used clinically for the compact passively scattered proton therapy system. However, great care should be taken when the field size is less than 5×5 cm², where a direct output measurement is required due to substantial
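
    The structure of such a correction-factor model is easy to sketch: the predicted output is a product of tabulated factors, each interpolated from commissioning data. The tables below are fabricated placeholders, not Mevion S250 commissioning values, and the function signature is illustrative.

```python
import numpy as np

# Placeholder commissioning tables (one option shown; a real system would
# hold one set per option).
sobp_width = np.array([2.0, 4.0, 6.0, 8.0, 10.0])      # modulation width M (cm)
sobpf_tab = np.array([1.10, 1.05, 1.00, 0.96, 0.93])   # SOBP factor
gantry = np.array([0.0, 90.0, 180.0, 270.0, 360.0])    # gantry angle (deg)
gacf_tab = np.array([1.000, 1.015, 1.035, 1.015, 1.000])  # gantry angle factor

def predicted_output(rof, m_width, angle, rsf, ocr, fsf, isf_ocf):
    """d/MU as a product of correction factors, with 1D interpolation for
    SOBPF and GACF (RSF and OCR would use 2D interpolation in practice)."""
    sobpf = np.interp(m_width, sobp_width, sobpf_tab)
    gacf = np.interp(angle, gantry, gacf_tab)
    return rof * sobpf * rsf * ocr * fsf * isf_ocf * gacf

print(predicted_output(1.0, 5.0, 180.0, rsf=0.98, ocr=1.0, fsf=0.99, isf_ocf=1.01))
```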

  17. Corrected Four-Sphere Head Model for EEG Signals.

    Science.gov (United States)

    Næss, Solveig; Chintaluri, Chaitanya; Ness, Torbjørn V; Dale, Anders M; Einevoll, Gaute T; Wójcik, Daniel K

    2017-01-01

    The EEG signal is generated by electrical brain cell activity, often described in terms of current dipoles. By applying EEG forward models we can compute the contribution from such dipoles to the electrical potential recorded by EEG electrodes. Forward models are key both for generating understanding and intuition about the neural origin of EEG signals as well as inverse modeling, i.e., the estimation of the underlying dipole sources from recorded EEG signals. Different models of varying complexity and biological detail are used in the field. One such analytical model is the four-sphere model which assumes a four-layered spherical head where the layers represent brain tissue, cerebrospinal fluid (CSF), skull, and scalp, respectively. While conceptually clear, the mathematical expression for the electric potentials in the four-sphere model is cumbersome, and we observed that the formulas presented in the literature contain errors. Here, we derive and present the correct analytical formulas with a detailed derivation. A useful application of the analytical four-sphere model is that it can serve as ground truth to test the accuracy of numerical schemes such as the Finite Element Method (FEM). We performed FEM simulations of the four-sphere head model and showed that they were consistent with the corrected analytical formulas. For future reference we provide scripts for computing EEG potentials with the four-sphere model, both by means of the correct analytical formulas and numerical FEM simulations.

  18. Corrected Four-Sphere Head Model for EEG Signals

    Directory of Open Access Journals (Sweden)

    Solveig Næss

    2017-10-01

    Full Text Available The EEG signal is generated by electrical brain cell activity, often described in terms of current dipoles. By applying EEG forward models we can compute the contribution from such dipoles to the electrical potential recorded by EEG electrodes. Forward models are key both for generating understanding and intuition about the neural origin of EEG signals as well as inverse modeling, i.e., the estimation of the underlying dipole sources from recorded EEG signals. Different models of varying complexity and biological detail are used in the field. One such analytical model is the four-sphere model which assumes a four-layered spherical head where the layers represent brain tissue, cerebrospinal fluid (CSF), skull, and scalp, respectively. While conceptually clear, the mathematical expression for the electric potentials in the four-sphere model is cumbersome, and we observed that the formulas presented in the literature contain errors. Here, we derive and present the correct analytical formulas with a detailed derivation. A useful application of the analytical four-sphere model is that it can serve as ground truth to test the accuracy of numerical schemes such as the Finite Element Method (FEM). We performed FEM simulations of the four-sphere head model and showed that they were consistent with the corrected analytical formulas. For future reference we provide scripts for computing EEG potentials with the four-sphere model, both by means of the correct analytical formulas and numerical FEM simulations.

  19. Simplified correction of g-value measurements

    DEFF Research Database (Denmark)

    Duer, Karsten

    1998-01-01

    been carried out using a detailed physical model based on ISO9050 and prEN410 but using polarized data for non-normal incidence. This model is only valid for plane, clear glazings and therefor not suited for corrections of measurements performed on complex glazings. To investigate a more general...... correction procedure the results from the measurements on the Interpane DGU have been corrected using the principle outlined in (Rosenfeld, 1996). This correction procedure is more general as corrections can be carried out without a correct physical model of the investigated glazing. On the other hand...... the way this “general” correction procedure is used is not always in accordance to the physical conditions....

  20. Large-scale hydrological model river storage and discharge correction using a satellite altimetry-based discharge product

    Science.gov (United States)

    Emery, Charlotte Marie; Paris, Adrien; Biancamaria, Sylvain; Boone, Aaron; Calmant, Stéphane; Garambois, Pierre-André; Santos da Silva, Joecila

    2018-04-01

    Land surface models (LSMs) are widely used to study the continental part of the water cycle. However, even though their accuracy is increasing, inherent model uncertainties cannot be avoided. In the meantime, remotely sensed observations of continental water cycle variables such as soil moisture and lake and river elevations are becoming more frequent and accurate. Therefore, these two types of information can be combined, using data assimilation techniques, to reduce a model's uncertainties in its state variables and/or its input parameters. The objective of this study is to present a data assimilation platform that assimilates into the large-scale ISBA-CTRIP LSM a point-scale river discharge product, derived from ENVISAT nadir altimeter water elevation measurements and rating curves, over the whole Amazon basin. To deal with the scale difference between the model and the observations, the study also presents an initial development of a localization treatment that limits the impact of observations to areas close to the observation and in the same hydrological network. The assimilation platform is based on the ensemble Kalman filter and can correct either the CTRIP river water storage or the discharge. Root mean square error (RMSE) compared to gauge discharges is globally reduced by up to 21%, and at Óbidos, near the outlet, RMSE is reduced by up to 52% compared to the ENVISAT-based discharge. Finally, it is shown that localization improves results along the main tributaries.
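
    The analysis step of such a platform follows the standard stochastic ensemble Kalman filter update; the sketch below is generic, with shapes and the toy demo invented for illustration, and localization (restricting the update to nearby, hydrologically connected reaches, as discussed above) would taper the gain.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_std, H):
    """Stochastic ensemble Kalman filter analysis step.

    ensemble : (n_state, n_members) forecast ensemble (e.g. river storage)
    obs      : (n_obs,) altimetry-derived discharge observations
    H        : (n_obs, n_state) observation operator
    """
    n_obs, n_mem = len(obs), ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    Y = H @ X                                             # observed anomalies
    R = obs_err_std**2 * np.eye(n_obs)
    K = (X @ Y.T / (n_mem - 1)) @ np.linalg.inv(Y @ Y.T / (n_mem - 1) + R)
    rng = np.random.default_rng(8)
    perturbed = obs[:, None] + rng.normal(0, obs_err_std, (n_obs, n_mem))
    return ensemble + K @ (perturbed - H @ ensemble)

# Toy demo: 4 reaches, 20 members, one observed reach (the outlet).
rng0 = np.random.default_rng(0)
ens = 100.0 + rng0.normal(0, 10.0, (4, 20))
H = np.array([[0.0, 0.0, 0.0, 1.0]])
analysis = enkf_update(ens, np.array([95.0]), 2.0, H)
print(analysis.mean(axis=1))   # outlet pulled toward the observation
```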

  1. Integrated model-based retargeting and optical proximity correction

    Science.gov (United States)

    Agarwal, Kanak B.; Banerjee, Shayak

    2011-04-01

    Conventional resolution enhancement techniques (RET) are becoming increasingly inadequate at addressing the challenges of subwavelength lithography. In particular, features show high sensitivity to process variation in low-k1 lithography. Process-variation-aware RETs such as process-window OPC (PWOPC) are becoming increasingly important to guarantee high lithographic yield, but such techniques suffer from high runtime impact. An alternative to PWOPC is to perform retargeting, a rule-assisted modification of target layout shapes to improve their process window. However, rule-based retargeting is not a scalable technique, since rules cannot cover the entire search space of two-dimensional shape configurations, especially with technology scaling. In this paper, we propose to integrate the processes of retargeting and optical proximity correction (OPC). We utilize the normalized image log slope (NILS) metric, which is available at no extra computational cost during OPC. We use NILS to guide dynamic target modification between iterations of OPC. We utilize the NILS tagging capabilities of Calibre TCL scripting to identify fragments with low NILS. We then perform NILS binning to assign different magnitudes of retargeting to different NILS bins. NILS is determined both for width, to identify regions of pinching, and for space, to locate regions of potential bridging. We develop an integrated flow for 1x metal lines (M1) which exhibits fewer lithographic hotspots compared to a flow with just OPC and no retargeting. We also observe cases where hotspots that existed in the rule-based retargeting flow are fixed using our methodology. Finally, we demonstrate that such a retargeting methodology does not significantly alter design properties by electrically simulating a latch layout before and after retargeting. We observe less than 1% impact on latch Clk-Q and D-Q delays post-retargeting, which makes this methodology an attractive one for use in improving shape process windows.
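
    The NILS-binning step can be sketched as a simple lookup: fragments whose NILS falls in lower bins receive a larger target bias, widening low-NILS widths (pinching risk) and shrinking toward low-NILS spaces (bridging risk). The thresholds and bias magnitudes below are invented for illustration; in practice they would come from process-window characterization, and the tagging itself would be done in Calibre TCL.

```python
# (lower NILS bound, upper NILS bound, bias magnitude in nm) - illustrative
NILS_BINS = [(0.0, 1.2, 4.0),
             (1.2, 1.6, 2.0),
             (1.6, 2.0, 1.0)]   # NILS >= 2.0: no retargeting

def retarget_bias(nils, is_width):
    """Return the target edge bias for a fragment: positive biases widen a
    width fragment; negative biases pull back from a space fragment."""
    for lo, hi, bias in NILS_BINS:
        if lo <= nils < hi:
            return bias if is_width else -bias
    return 0.0

fragments = [{"nils": 1.05, "is_width": True},    # likely pinching site
             {"nils": 1.40, "is_width": False},   # potential bridging site
             {"nils": 2.30, "is_width": True}]    # robust, left untouched
print([retarget_bias(f["nils"], f["is_width"]) for f in fragments])
# [4.0, -2.0, 0.0]
```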

  2. Validation of model-based brain shift correction in neurosurgery via intraoperative magnetic resonance imaging: preliminary results

    Science.gov (United States)

    Luo, Ma; Frisken, Sarah F.; Weis, Jared A.; Clements, Logan W.; Unadkat, Prashin; Thompson, Reid C.; Golby, Alexandra J.; Miga, Michael I.

    2017-03-01

    The quality of brain tumor resection surgery is dependent on the spatial agreement between the preoperative image and the intraoperative anatomy. However, brain shift compromises this alignment. Currently, the clinical standard for monitoring brain shift is intraoperative magnetic resonance (iMR). While iMR provides a better understanding of brain shift, its cost and encumbrance are a consideration for medical centers. Hence, we are developing a model-based method that can be a complementary technology to address brain shift in standard resections, with resource-intensive cases as referrals for iMR facilities. Our strategy constructs a deformation 'atlas' containing potential deformation solutions derived from a biomechanical model that account for variables such as cerebrospinal fluid drainage and mannitol effects. Volumetric deformation is estimated with an inverse approach that determines the optimal combinatory 'atlas' solution that best matches the measured surface deformation. Accordingly, the preoperative image is updated based on the computed deformation field. This study is the latest development to validate our methodology against iMR. Briefly, preoperative and intraoperative MR images of 2 patients were acquired. Homologous surface points were selected on preoperative and intraoperative scans as measurements of surface deformation and used to drive the inverse problem. To assess the model accuracy, subsurface shift of targets between preoperative and intraoperative states was measured and compared to the model prediction. Considering subsurface shift above 3 mm, the proposed strategy provides an average shift correction of 59% across the 2 cases. While further improvements in both the model and the ability to validate with iMR are desired, the results reported are encouraging.
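
    The inverse step can be sketched as a constrained least-squares fit: each atlas column holds one model deformation solution sampled at the measured surface points, and non-negative weights select the combination that best matches the measurements. This is a hedged stand-in, with synthetic data; the actual framework may impose further constraints (e.g. weights summing to one).

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(9)
n_points, n_solutions = 30, 8
atlas = rng.normal(size=(n_points, n_solutions))   # model displacements at
                                                   # the homologous points
w_true = np.array([0.6, 0.0, 0.3, 0.0, 0.1, 0.0, 0.0, 0.0])
measured = atlas @ w_true + rng.normal(0, 0.01, n_points)  # surface shifts

# Non-negative least squares recovers the combinatory weights, which would
# then be applied to the full-volume atlas solutions to update the image.
weights, residual = nnls(atlas, measured)
print(np.round(weights, 2), round(residual, 3))
```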

  3. Hydraulic correction method (HCM) to enhance the efficiency of SRTM DEM in flood modeling

    Science.gov (United States)

    Chen, Huili; Liang, Qiuhua; Liu, Yong; Xie, Shuguang

    2018-04-01

    The Digital Elevation Model (DEM) is one of the most important factors determining the simulation accuracy of hydraulic models. However, currently available global topographic data suffer from limitations for 2-D hydraulic modeling, mainly due to vegetation bias, random errors and insufficient spatial resolution. A hydraulic correction method (HCM) for the SRTM DEM is proposed in this study to improve modeling accuracy. Firstly, we employ the global vegetation-corrected DEM (i.e. Bare-Earth DEM), developed from the SRTM DEM using vegetation height and the SRTM vegetation signal, to remove vegetation bias. Then, a newly released DEM that removes both vegetation bias and random errors (i.e. Multi-Error Removed DEM) is employed to overcome the limitation of height errors. Lastly, an approach to correct the Multi-Error Removed DEM is presented to account for the insufficient spatial resolution, ensuring flow connectivity of the river networks. The approach involves: (a) extracting river networks from the Multi-Error Removed DEM using an automated algorithm in ArcGIS; (b) correcting the location and layout of the extracted streams with the aid of the Google Earth platform and remote sensing imagery; and (c) removing the positive biases of raised segments in the river networks based on bed slope, to generate the hydraulically corrected DEM. The proposed HCM utilizes easily available data and tools to improve the flow connectivity of river networks without manual adjustment. To demonstrate the advantages of HCM, an extreme flood event in the Huifa River Basin (China) is simulated on the original DEM, the Bare-Earth DEM, the Multi-Error Removed DEM, and the hydraulically corrected DEM using an integrated hydrologic-hydraulic model. A comparative analysis is subsequently performed to assess the simulation accuracy and performance of the four DEMs, and favorable results have been obtained on the corrected DEM.
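
    Step (c) must lower raised stream cells without touching already-consistent ones. One simple variant of this, assuming elevations are ordered upstream to downstream, is to enforce non-increasing elevation by carving:

```python
import numpy as np

# Toy bed-elevation profile along an extracted stream, ordered upstream to
# downstream; the raised cells at 11.8 and 12.1 would block the flow.
profile = np.array([12.0, 11.5, 11.8, 12.1, 10.9, 11.2, 10.5])

# Carving: enforce non-increasing elevation downstream, which removes the
# positive biases while leaving already-consistent cells untouched.
carved = np.minimum.accumulate(profile)
print(carved)   # [12.  11.5 11.5 11.5 10.9 10.9 10.5]
```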

  4. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
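
    The core effect is easy to reproduce for a single qubit: n coherent over-rotations compose into one rotation of angle nε, so the failure probability initially grows quadratically, while a Pauli-twirled version of the same error accumulates only linearly. A toy numerical comparison (a single-qubit illustration, not the paper's repetition-code analysis):

```python
import numpy as np

eps = 0.02                      # coherent over-rotation angle per cycle
cycles = np.arange(1, 201)

# Coherent error: n applications of Rx(eps) compose to Rx(n*eps), so the
# failure probability is sin^2(n*eps/2) -- quadratic growth at first.
p_coherent = np.sin(cycles * eps / 2.0) ** 2

# Pauli approximation: an independent flip with probability sin^2(eps/2)
# each cycle; the probability of an odd number of flips grows ~linearly.
p_flip = np.sin(eps / 2.0) ** 2
p_pauli = 0.5 * (1.0 - (1.0 - 2.0 * p_flip) ** cycles)

print(p_coherent[-1], p_pauli[-1])   # coherent failure far exceeds the Pauli estimate
```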

  5. Can molecular dynamics simulations help in discriminating correct from erroneous protein 3D models?

    Directory of Open Access Journals (Sweden)

    Gibrat Jean-François

    2008-01-01

    Full Text Available Abstract. Background: Recent approaches for predicting the three-dimensional (3D) structure of proteins, such as de novo or fold recognition methods, mostly rely on simplified energy potential functions and a reduced representation of the polypeptide chain. These simplifications facilitate the exploration of the protein conformational space but do not permit the subtle relationship between the amino acid sequence and its native structure to be captured entirely. It has been proposed that physics-based energy functions, together with techniques for sampling the conformational space, e.g., Monte Carlo or molecular dynamics (MD) simulations, are better suited to modelling proteins at higher resolutions than the former type of methods. In this study we monitor different protein structural properties along MD trajectories to discriminate correct from erroneous models. These models are based on the sequence-structure alignments provided by our fold recognition method, FROST. We define correct models as being built from alignments of sequences with structures similar to their native structures, and erroneous models from alignments of sequences with structures unrelated to their native structures. Results: For three test sequences whose native structures belong to the all-α, all-β and αβ classes, we built a set of models intended to cover the whole spectrum: from a perfect model, i.e., the native structure, to a very poor model, i.e., a random alignment of the test sequence with a structure belonging to another structural class, including several intermediate models based on fold recognition alignments. We submitted these models to 11 ns of MD simulations at three different temperatures. We monitored along the corresponding trajectories the mean of the Root-Mean-Square deviations (RMSd) with respect to the initial conformation, the RMSd fluctuations, the number of conformation clusters, the evolution of

  6. On the Correctness of Real-Time Modular Computer Systems Modeling with Stopwatch Automata Networks

    Directory of Open Access Journals (Sweden)

    Alevtina B. Glonina

    2018-01-01

    Full Text Available In this paper, we consider a schedulability analysis problem for real-time modular computer systems (RT MCS). A system configuration is called schedulable if all the jobs finish within their deadlines. The authors propose a stopwatch automata-based general model of RT MCS operation. A model instance for a given RT MCS configuration is a network of stopwatch automata (NSA), and it can be built automatically using the general model. A system operation trace, which is necessary for checking the schedulability criterion, can be obtained from the corresponding NSA trace. The paper substantiates the correctness of the proposed approach. A set of correctness requirements for the models of system components and for the whole system model were derived from RT MCS specifications. The authors proved that if all models of system components satisfy the corresponding requirements, the whole system model built according to the proposed approach satisfies its correctness requirements and is deterministic (i.e. for a given configuration the trace generated by the corresponding model run is uniquely determined). The model determinism implies that any model run can be used for schedulability analysis. This fact is crucial for the approach's efficiency, as the number of possible model runs grows exponentially with the number of jobs in a system. Correctness requirements for the models of system components can be checked automatically by a verifier using the observer automata approach. The authors proved, using the UPPAAL verifier, that all the developed models of system components satisfy the corresponding requirements. User-defined models of system components can also be used for system modeling if they satisfy the requirements.

  7. Inflation via logarithmic entropy-corrected holographic dark energy model

    Energy Technology Data Exchange (ETDEWEB)

    Darabi, F.; Felegary, F. [Azarbaijan Shahid Madani University, Department of Physics, Tabriz (Iran, Islamic Republic of); Setare, M.R. [University of Kurdistan, Department of Science, Bijar (Iran, Islamic Republic of)

    2016-12-15

    We study inflation in terms of the logarithmic entropy-corrected holographic dark energy (LECHDE) model with future event horizon, particle horizon, and Hubble horizon cut-offs, and we compare the results with those obtained in the study of inflation by the holographic dark energy (HDE) model. In comparison, the primordial scalar power spectrum in the LECHDE model becomes redder than the spectrum in the HDE model. Moreover, consistency with the observational data in the LECHDE model of inflation constrains the reheating temperature and the Hubble parameter through one parameter of holographic dark energy and two new parameters of the logarithmic corrections. (orig.)

  8. Inflation via logarithmic entropy-corrected holographic dark energy model

    International Nuclear Information System (INIS)

    Darabi, F.; Felegary, F.; Setare, M.R.

    2016-01-01

    We study inflation in terms of the logarithmic entropy-corrected holographic dark energy (LECHDE) model with future event horizon, particle horizon, and Hubble horizon cut-offs, and we compare the results with those obtained in the study of inflation by the holographic dark energy (HDE) model. In comparison, the primordial scalar power spectrum in the LECHDE model becomes redder than the spectrum in the HDE model. Moreover, consistency with the observational data in the LECHDE model of inflation constrains the reheating temperature and the Hubble parameter through one parameter of holographic dark energy and two new parameters of the logarithmic corrections. (orig.)

  9. Statistical Downscaling and Bias Correction of Climate Model Outputs for Climate Change Impact Assessment in the U.S. Northeast

    Science.gov (United States)

    Ahmed, Kazi Farzan; Wang, Guiling; Silander, John; Wilson, Adam M.; Allen, Jenica M.; Horton, Radley; Anyah, Richard

    2013-01-01

    Statistical downscaling can be used to efficiently downscale a large number of General Circulation Model (GCM) outputs to a fine temporal and spatial scale. To facilitate regional impact assessments, this study statistically downscales (to 1/8° spatial resolution) and corrects the bias of daily maximum and minimum temperature and daily precipitation data from six GCMs and four Regional Climate Models (RCMs) for the northeast United States (US) using the Statistical Downscaling and Bias Correction (SDBC) approach. Based on these downscaled data from multiple models, five extreme indices were analyzed for the future climate to quantify future changes of climate extremes. For a subset of models and indices, results based on raw and bias-corrected model outputs for the present-day climate were compared with observations, which demonstrated that bias correction is important not only for GCM outputs but also for RCM outputs. For the future climate, bias correction led to a higher level of agreement among the models in predicting the magnitude and capturing the spatial pattern of the extreme climate indices. We found that the incorporation of dynamical downscaling as an intermediate step does not lead to considerable differences in the results of statistical downscaling for the study domain.
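
    The quantile mapping step at the heart of such bias correction replaces each model value by the observed value at the same empirical quantile. A generic sketch on synthetic data (the SDBC implementation details differ):

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Empirical quantile mapping: each future model value is replaced by the
    observed value at the same quantile of the historical distributions."""
    q = np.linspace(0.0, 1.0, 101)
    mod_q = np.quantile(model_hist, q)
    obs_q = np.quantile(obs_hist, q)
    # np.interp clamps outside the historical range (a common simple choice).
    return np.interp(model_fut, mod_q, obs_q)

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 3.0, 5000)   # "observed" daily precipitation
mod = rng.gamma(2.0, 4.0, 5000)   # biased model, historical period
fut = rng.gamma(2.2, 4.0, 5000)   # model, future period
print(quantile_map(mod, obs, fut).mean())
```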

  10. Wall Correction Model for Wind Tunnels with Open Test Section

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær; Shen, Wen Zhong; Mikkelsen, Robert Flemming

    2004-01-01

    The corrections from the model are in very good agreement with the CFD computations, demonstrating that one-dimensional momentum theory is a reliable way of predicting corrections for wall interference in wind tunnels with closed as well as open cross sections. Keywords: Wind tunnel correction, momentum theory...

  11. A concentration correction scheme for Lagrangian particle model and its application in street canyon air dispersion modelling

    Energy Technology Data Exchange (ETDEWEB)

    Jiyang Xia [Shanghai Jiao Tong University, Shanghai (China). Department of Engineering Mechanics; Leung, D.Y.C. [The University of Hong Kong (Hong Kong). Department of Mechanical Engineering

    2001-07-01

    Pollutant dispersion in street canyons with various configurations was simulated by discharging a large number of particles into the computational domain after developing a time-dependent wind field. The trajectories of the released particles were predicted using a Lagrangian particle model developed in an earlier study. A concentration correction scheme, based on the concept of 'visibility', was adopted for the Lagrangian particle model to correct the calculated pollutant concentration field in street canyons. The corrected concentrations compared favourably with those from wind tunnel experiments, and a linear relationship between the computed concentrations and the wind tunnel data was found. The developed model was then applied to four simulations to test the suitability of the correction scheme and to study pollutant distribution in street canyons with different configurations. For the cases with obstacles present in the computational domain, the correction scheme gives more reasonable results than the one without it. Different flow regimes are observed in the street canyons, depending on the building configurations. A counter-clockwise rotating vortex may appear in a two-building case with wind flowing from left to right, causing lower pollutant concentration at the leeward side of the upstream building and higher concentration at the windward side of the downstream building. On the other hand, a stable clockwise rotating vortex is formed in the street canyon with multiple identical buildings, resulting in poor natural ventilation in the street canyon. Moreover, particles emitted in the downstream canyon formed by buildings with large height-to-width ratios will be transported to upstream canyons. (author)

  12. Evaluation of Sinus/Edge-Corrected Zero-Echo-Time-Based Attenuation Correction in Brain PET/MRI.

    Science.gov (United States)

    Yang, Jaewon; Wiesinger, Florian; Kaushik, Sandeep; Shanbhag, Dattesh; Hope, Thomas A; Larson, Peder E Z; Seo, Youngho

    2017-11-01

    In brain PET/MRI, the major challenge of zero-echo-time (ZTE)-based attenuation correction (ZTAC) is the misclassification of air/tissue/bone mixtures or their boundaries. Our study aimed to evaluate a sinus/edge-corrected (SEC) ZTAC (ZTAC-SEC), relative to an uncorrected (UC) ZTAC (ZTAC-UC) and a CT atlas-based attenuation correction (ATAC). Methods: Whole-body 18F-FDG PET/MRI scans were obtained for 12 patients after PET/CT scans. Only data acquired at a bed station that included the head were used for this study. Using PET data from PET/MRI, we applied ZTAC-UC, ZTAC-SEC, ATAC, and the reference CT-based attenuation correction (CTAC) for PET attenuation correction. For ZTAC-UC, the bias-corrected and normalized ZTE was converted to pseudo-CT with air (-1,000 HU for ZTE < 0.2), soft tissue (42 HU for ZTE > 0.75), and bone (-2,000 × [ZTE - 1] + 42 HU for 0.2 ≤ ZTE ≤ 0.75). Afterward, in the pseudo-CT, sinus/edges were automatically estimated as a binary mask through morphologic processing and edge detection. In the binary mask, the overestimated values were rescaled below 42 HU for ZTAC-SEC. For ATAC, the atlas deformed to the MR in-phase image was segmented into air, inner air, soft tissue, and continuous bone. For the quantitative evaluation, PET mean uptake values were measured in twenty 1-mL volumes of interest distributed throughout brain tissues. The PET uptake was compared using a paired t test. An error histogram was used to show the distribution of voxel-based PET uptake differences. Results: ZTAC-SEC achieved overall PET quantification accuracy (0.2% ± 2.4%, P = 0.23) similar to CTAC, in comparison with ZTAC-UC (5.6% ± 3.5%, P < 0.05) and ATAC. In conclusion, ZTAC-SEC achieved accurate PET quantification in brain PET/MRI, comparable to the accuracy achieved by CTAC, particularly in the cerebellum. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
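
    The ZTE-to-pseudo-CT conversion is a piecewise mapping from normalized ZTE intensity to Hounsfield units. The sketch below implements the rule quoted above; the 0.2/0.75 break points should be treated as assumptions of this sketch rather than verified values.

```python
import numpy as np

def zte_to_pseudo_ct(zte):
    """Piecewise ZTE-intensity-to-HU mapping of the kind quoted above; the
    0.2 / 0.75 break points are assumptions of this sketch."""
    zte = np.asarray(zte, dtype=float)
    hu = np.full_like(zte, 42.0)                             # soft tissue
    hu = np.where(zte < 0.2, -1000.0, hu)                    # air
    bone = (zte >= 0.2) & (zte <= 0.75)
    hu = np.where(bone, -2000.0 * (zte - 1.0) + 42.0, hu)    # bone ramp
    return hu

print(zte_to_pseudo_ct([0.1, 0.5, 0.9]))   # [-1000.  1042.    42.]
```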

  13. Mathematical models for therapeutic approaches to control HIV disease transmission

    CERN Document Server

    Roy, Priti Kumar

    2015-01-01

    The book discusses different therapeutic approaches based on different mathematical models to control HIV/AIDS disease transmission. It uses clinical data, collected from different cited sources, to formulate deterministic as well as stochastic mathematical models of HIV/AIDS. It provides complementary approaches, from deterministic and stochastic points of view, to optimal control strategy with perfect drug adherence, and also tries to examine the same issue from different angles with various mathematical models and computer simulations. The book presents essential methods and techniques for students who are interested in designing epidemiological models of HIV/AIDS. It also guides research scientists working in the periphery of mathematical modeling, helping them to explore hypothetical methods by examining their consequences in the form of a mathematical model and making scientific predictions. The model equations, mathematical analysis and several numerical simulations that are...

  14. Structure-based sampling and self-correcting machine learning for accurate calculations of potential energy surfaces and vibrational levels

    Science.gov (United States)

    Dral, Pavlo O.; Owens, Alec; Yurchenko, Sergei N.; Thiel, Walter

    2017-06-01

    We present an efficient approach for generating highly accurate molecular potential energy surfaces (PESs) using self-correcting, kernel ridge regression (KRR) based machine learning (ML). We introduce structure-based sampling to automatically assign nuclear configurations from a pre-defined grid to the training and prediction sets, respectively. Accurate high-level ab initio energies are required only for the points in the training set, while the energies for the remaining points are provided by the ML model with negligible computational cost. The proposed sampling procedure is shown to be superior to random sampling and also eliminates the need for training several ML models. Self-correcting machine learning has been implemented such that each additional layer corrects errors from the previous layer. The performance of our approach is demonstrated in a case study on a published high-level ab initio PES of methyl chloride with 44 819 points. The ML model is trained on sets of different sizes and then used to predict the energies for tens of thousands of nuclear configurations within seconds. The resulting datasets are utilized in variational calculations of the vibrational energy levels of CH3Cl. By using both structure-based sampling and self-correction, the size of the training set can be kept small (e.g., 10% of the points) without any significant loss of accuracy. In ab initio rovibrational spectroscopy, it is thus possible to reduce the number of computationally costly electronic structure calculations through structure-based sampling and self-correcting KRR-based machine learning by up to 90%.
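
    The regression step itself is standard kernel ridge regression; the value of the approach lies in how training points are sampled and how successive layers correct residuals. A minimal sketch on a toy 1-D Morse-like potential, with an evenly spaced subset standing in for the paper's structure-based sampling:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Toy 1-D "PES": a Morse-like curve standing in for a molecular surface.
x = np.linspace(0.8, 3.0, 2000)[:, None]                  # "geometries"
energy = (1.0 - np.exp(-1.5 * (x.ravel() - 1.2))) ** 2    # "ab initio" energies

# Train on a small structured subset (an even grid here, standing in for the
# paper's structure-based sampling) and predict the rest at negligible cost.
train = np.arange(0, 2000, 10)                            # 10% of the points
model = KernelRidge(kernel="rbf", alpha=1e-8, gamma=10.0)
model.fit(x[train], energy[train])

residual = np.abs(model.predict(x) - energy)
print(residual.max())   # a second "self-correcting" layer could fit this residual
```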

  15. Topological quantum error correction in the Kitaev honeycomb model

    Science.gov (United States)

    Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.

    2017-08-01

    The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.

  16. Simulation-based artifact correction (SBAC) for metrological computed tomography

    Science.gov (United States)

    Maier, Joscha; Leinweber, Carsten; Sawall, Stefan; Stoschus, Henning; Ballach, Frederic; Müller, Tobias; Hammer, Michael; Christoph, Ralf; Kachelrieß, Marc

    2017-06-01

    Computed tomography (CT) is a valuable tool for the metrological assessment of industrial components. However, the application of CT to the investigation of highly attenuating objects or multi-material components is often restricted by the presence of CT artifacts caused by beam hardening, x-ray scatter, off-focal radiation, partial volume effects or the cone-beam reconstruction itself. In order to overcome this limitation, this paper proposes an approach to calculate a correction term that compensates for the contribution of artifacts and thus enables an appropriate assessment of these components using CT. To do so, we make use of computer simulations of the CT measurement process. Based on an appropriate model of the object, e.g. an initial reconstruction or a CAD model, two simulations are carried out. One simulation considers all physical effects that cause artifacts, using dedicated analytic methods as well as Monte Carlo-based models. The other one represents an ideal CT measurement, i.e. a measurement in parallel beam geometry with a monochromatic, point-like x-ray source and no x-ray scattering. Thus, the difference between these simulations is an estimate of the present artifacts and can be used to correct the acquired projection data or the corresponding CT reconstruction, respectively. The performance of the proposed approach is evaluated using simulated as well as measured data of single and multi-material components. Our approach yields CT reconstructions that are nearly free of artifacts and thereby clearly outperforms commonly used artifact reduction algorithms in terms of image quality. A comparison against tactile reference measurements demonstrates the ability of the proposed approach to increase the accuracy of the metrological assessment significantly.
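
    In array form, the correction term is simply the difference between the full-physics simulation and the ideal simulation of the same object model, subtracted from the measured data. A sketch on synthetic stand-in data:

```python
import numpy as np

rng = np.random.default_rng(2)
ideal = rng.random((360, 512))           # stand-in for the ideal simulation
artifact = 0.1 * rng.random((360, 512))  # stand-in for beam hardening, scatter, ...
sim_physics = ideal + artifact           # full-physics simulation of the model
measured = ideal + artifact + 0.01 * rng.normal(size=(360, 512))  # noisy scan

# The difference of the two simulations estimates the artifact contribution,
# which is subtracted from the measured projections before reconstruction.
corrected = measured - (sim_physics - ideal)
print(np.abs(corrected - ideal).max())   # residual on the order of the noise
```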

  17. Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence

    Energy Technology Data Exchange (ETDEWEB)

    Pastawski, Fernando; Yoshida, Beni [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics,California Institute of Technology,1200 E. California Blvd., Pasadena CA 91125 (United States); Harlow, Daniel [Princeton Center for Theoretical Science, Princeton University,400 Jadwin Hall, Princeton NJ 08540 (United States); Preskill, John [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics,California Institute of Technology,1200 E. California Blvd., Pasadena CA 91125 (United States)

    2015-06-23

    We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be identified as logical and physical degrees of freedom respectively. These models capture key features of entanglement in the AdS/CFT correspondence; in particular, the Ryu-Takayanagi formula and the negativity of tripartite information are obeyed exactly in many cases. That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of boundary operators from bulk operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed in http://dx.doi.org/10.1007/JHEP04(2015)163.

  18. The Simulation and Correction to the Brain Deformation Based on the Linear Elastic Model in IGS

    Institute of Scientific and Technical Information of China (English)

    MU Xiao-lan; SONG Zhi-jian

    2004-01-01

    Brain deformation is a vital factor affecting the precision of image-guided surgery (IGS), and simulating and correcting this deformation has recently become a research hotspot. The research organizations that first addressed brain deformation with physical models include the Image Processing and Analysis department of Yale University and the Biomedical Modeling Lab of Vanderbilt University. The former uses a linear elastic model; the latter uses a consolidation model. The linear elastic model only needs to be driven by the surface displacement of the exposed brain cortex, which is more convenient to measure in the clinic.

  19. ITER Side Correction Coil Quench model and analysis

    Science.gov (United States)

    Nicollet, S.; Bessette, D.; Ciazynski, D.; Duchateau, J. L.; Gauthier, F.; Lacroix, B.

    2016-12-01

    Previous thermohydraulic studies performed for the ITER TF, CS and PF magnet systems have provided important information on the detection and consequences of a quench as a function of the initial conditions (deposited energy, heated length). Even though the temperature margin of the Correction Coils is high, their behavior during a quench should also be studied, since a quench is likely to be triggered by potential anomalies in joints, a ground fault on the instrumentation wires, etc. A model has been developed with the SuperMagnet Code (Bagnasco et al., 2010) for a Side Correction Coil (SCC2) with four pancakes cooled in parallel, each of them represented by a Thea module (with the proper Cable-in-Conduit Conductor characteristics). All the other coils of the PF cooling loop, hydraulically connected in parallel (top/bottom correction coils and six Poloidal Field Coils), are modeled by Flower modules with equivalent hydraulic properties. The model and the analysis results are presented for five quench initiation cases with/without fast discharge: two quenches initiated by a heat input to the innermost turn of one pancake (cases 1 and 2), two quenches initiated at the innermost turns of four pancakes (cases 3 and 4), and a fifth case in which the quench is initiated at the middle turn of one pancake. The impact on the cooling circuit, e.g. the exceedance of the opening pressure of the quench relief valves, is detailed for an undetected quench (i.e. no discharge of the magnet). Particular attention is also paid to a possible secondary quench detection system based on measured thermohydraulic signals (pressure, temperature and/or helium mass flow rate). The maximum cable temperature achieved in case of a fast current discharge (primary detection by voltage) is compared to the design hot spot criterion of 150 K, which includes the contribution of helium and jacket.

  20. Therapeutic Sleep for Traumatic Brain Injury

    Science.gov (United States)

    2017-06-01

    AWARD NUMBER: W81XWH-16-1-0166. TITLE: Therapeutic Sleep for Traumatic Brain Injury. PRINCIPAL INVESTIGATOR: Ravi Allada. REPORT DATE: June 2017. REPORT TYPE: Annual. DATES COVERED: 1 June 2016 - 31 May 2017. The proposal will test the hypothesis that correcting sleep disorders can have a therapeutic effect on traumatic brain injury (TBI). The majority of TBI

  1. Spherical Nucleic Acids as Intracellular Agents for Nucleic Acid Based Therapeutics

    Science.gov (United States)

    Hao, Liangliang

    Recent functional discoveries on the noncoding sequences of the human genome and transcriptome could lead to revolutionary treatment modalities, because noncoding RNAs (ncRNAs) can be applied as therapeutic agents to manipulate disease-causing genes. To date, few nucleic acid-based therapeutics have been translated into the clinic due to challenges in delivering the oligonucleotide agents in an effective, cell-specific, and non-toxic fashion. Unmodified oligonucleotide agents are destroyed rapidly in biological fluids by enzymatic degradation and have difficulty crossing the plasma membrane without the aid of transfection reagents, which often cause inflammatory, cytotoxic, or immunogenic side effects. Spherical nucleic acids (SNAs), nanoparticles consisting of densely organized and highly oriented oligonucleotides, pose one possible solution to circumventing these problems in both the antisense and RNA interference (RNAi) pathways. The unique three-dimensional architecture of SNAs protects the bioactive oligonucleotides from unspecific degradation during delivery and supports their targeting of class A scavenger receptors and endocytosis via a lipid-raft-dependent, caveolae-mediated pathway. Owing to their unique structure, SNAs are able to cross cell membranes and regulate target gene expression as a single entity, without triggering the cellular innate immune response. Herein, my thesis has focused on understanding the interactions between SNAs and cellular components and on developing SNA-based nanostructures with improved therapeutic capabilities. Specifically, I developed a novel SNA-based nanoscale agent for the delivery of therapeutic oligonucleotides to manipulate microRNAs (miRNAs), the endogenous post-transcriptional gene regulators. I investigated the role of SNAs involving miRNAs in anti-cancer and anti-inflammation responses in cells and in in vivo murine disease models via systemic injection. Furthermore, I explored using different strategies to construct

  2. Advanced Corrections for InSAR Using GPS and Numerical Weather Models

    Science.gov (United States)

    Cossu, F.; Foster, J. H.; Amelung, F.; Varugu, B. K.; Businger, S.; Cherubini, T.

    2017-12-01

    We present results from an investigation into the application of numerical weather models for generating tropospheric correction fields for Interferometric Synthetic Aperture Radar (InSAR). We apply the technique to data acquired from a UAVSAR campaign as well as from the CosmoSkyMed satellites. The complex spatial and temporal changes in the atmospheric propagation delay of the radar signal remain the single biggest factor limiting InSAR's potential for hazard monitoring and mitigation. A new generation of InSAR systems is being built and launched, and optimizing the science and hazard applications of these systems requires advanced methodologies to mitigate tropospheric noise. We use the Weather Research and Forecasting (WRF) model to generate a 900 m spatial resolution atmospheric model covering the Big Island of Hawaii, and an even higher, 300 m resolution grid over the Mauna Loa and Kilauea volcanoes. By comparing a range of approaches, from the simplest, using reanalyses based on typically available meteorological observations, through to the "kitchen-sink" approach of assimilating all relevant data sets into our custom analyses, we examine the impact of the additional data sets on the atmospheric models and their effectiveness in correcting InSAR data. We focus particularly on the assimilation of information from the more than 60 GPS sites on the island. We ingest zenith tropospheric delay estimates from these sites directly into the WRF analyses, and also perform double-difference tomography using the phase residuals from the GPS processing to robustly incorporate heterogeneous information from the GPS data into the atmospheric models. We assess our performance through comparisons of our atmospheric models with external observations not ingested into the model, and through the effectiveness of the derived phase screens in reducing InSAR variance. Comparison of the InSAR data, our atmospheric analyses, and assessments of the active local and mesoscale
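
    Once a weather model (or GPS) supplies a zenith tropospheric delay map, converting it to an InSAR phase screen is straightforward. A sketch using the simplest 1/cos(incidence) mapping function and illustrative X-band values:

```python
import numpy as np

wavelength = 0.031                 # m, X-band (CosmoSkyMed-like), illustrative
incidence = np.deg2rad(35.0)       # radar incidence angle

# Zenith tropospheric delay map (m) from a weather model or GPS; toy values.
ztd = np.random.default_rng(3).normal(2.4, 0.01, (100, 100))

slant_delay = ztd / np.cos(incidence)                   # simplest mapping function
phase_screen = 4.0 * np.pi / wavelength * slant_delay   # two-way path, radians

# An interferometric correction uses the difference of the screens at the two
# acquisition dates, so any constant delay cancels.
print(phase_screen.std())
```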

  3. Melatonin-Based Therapeutics for Neuroprotection in Stroke

    Directory of Open Access Journals (Sweden)

    Cesar V. Borlongan

    2013-04-01

    Full Text Available The present review paper supports the approach to deliver melatonin and to target melatonin receptors for neuroprotection in stroke. We discuss laboratory evidence demonstrating neuroprotective effects of exogenous melatonin treatment and transplantation of melatonin-secreting cells in stroke. In addition, we describe a novel mechanism of action underlying the therapeutic benefits of stem cell therapy in stroke, implicating the role of melatonin receptors. As we envision the clinical entry of melatonin-based therapeutics, we discuss translational experiments that warrant consideration to reveal an optimal melatonin treatment strategy that is safe and effective for human application.

  4. Model Consistent Pseudo-Observations of Precipitation and Their Use for Bias Correcting Regional Climate Models

    Directory of Open Access Journals (Sweden)

    Peter Berg

    2015-01-01

    Full Text Available Lack of suitable observational data makes bias correction of high space and time resolution regional climate models (RCM) problematic. We present a method to construct pseudo-observational precipitation data by merging a large-scale-constrained RCM reanalysis downscaling simulation with coarse time and space resolution observations. The large-scale constraint synchronizes the inner domain solution to the driving reanalysis model, such that the simulated weather is similar to observations on a monthly time scale. Biases for each single month are corrected against the corresponding month of the observational data, and applied at the finer temporal resolution of the RCM. A low-pass filter is applied to the correction factors to retain the small spatial scale information of the RCM. The method is applied to a 12.5 km RCM simulation and proven successful in producing a reliable pseudo-observational data set. Furthermore, the constructed data set is applied as reference in a quantile mapping bias correction, and is proven skillful in retaining small scale information of the RCM, while still correcting the large scale spatial bias. The proposed method allows bias correction of high resolution model simulations without changing the fine scale spatial features, i.e., retaining the very information required by many impact models.
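
    The core of the method is a monthly multiplicative correction whose spatial low-pass filtering preserves the RCM's fine-scale structure. A toy sketch, with a Gaussian filter standing in for the paper's particular low-pass choice:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)
rcm = rng.gamma(2.0, 2.0, size=(31, 80, 80))   # daily RCM precipitation, one month
obs_monthly = 110.0 * np.ones((80, 80))        # coarse observed monthly total

# Monthly multiplicative correction factor per grid cell ...
factor = obs_monthly / rcm.sum(axis=0)
# ... low-pass filtered so only the large-scale bias is removed and the
# RCM's fine-scale spatial structure is retained.
factor_smooth = gaussian_filter(factor, sigma=10.0)

corrected = rcm * factor_smooth[None, :, :]    # applied at daily resolution
print(corrected.sum(axis=0).mean())            # close to the observed total
```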

  5. Manufacturing of Human Extracellular Vesicle-Based Therapeutics for Clinical Use

    Directory of Open Access Journals (Sweden)

    Mario Gimona

    2017-06-01

    Full Text Available Extracellular vesicles (EVs derived from stem and progenitor cells may have therapeutic effects comparable to their parental cells and are considered promising agents for the treatment of a variety of diseases. To this end, strategies must be designed to successfully translate EV research and to develop safe and efficacious therapies, whilst taking into account the applicable regulations. Here, we discuss the requirements for manufacturing, safety, and efficacy testing of EVs along their path from the laboratory to the patient. Development of EV-therapeutics is influenced by the source cell types and the target diseases. In this article, we express our view based on our experience in manufacturing biological therapeutics for routine use or clinical testing, and focus on strategies for advancing mesenchymal stromal cell (MSC-derived EV-based therapies. We also discuss the rationale for testing MSC-EVs in selected diseases with an unmet clinical need such as critical size bone defects, epidermolysis bullosa and spinal cord injury. While the scientific community, pharmaceutical companies and clinicians are at the point of entering into clinical trials for testing the therapeutic potential of various EV-based products, the identification of the mode of action underlying the suggested potency in each therapeutic approach remains a major challenge to the translational path.

  6. Effects of image distortion correction on voxel-based morphometry

    International Nuclear Information System (INIS)

    Goto, Masami; Abe, Osamu; Kabasawa, Hiroyuki

    2012-01-01

    We aimed to show that correcting image distortion significantly affects brain volumetry using voxel-based morphometry (VBM) and to assess whether the processing of distortion correction reduces system dependency. We obtained contiguous sagittal T1-weighted images of the brain from 22 healthy participants using 1.5- and 3-tesla magnetic resonance (MR) scanners, preprocessed images using Statistical Parametric Mapping 5, and tested the relation between distortion correction and brain volume using VBM. Local brain volume significantly increased or decreased on corrected images compared with uncorrected images. In addition, the method used to correct image distortion for gradient nonlinearity produced fewer volumetric errors from MR system variation. This is the first VBM study to show more precise volumetry using VBM with corrected images. These results indicate that multi-scanner or multi-site imaging trials require correction for distortion induced by gradient nonlinearity. (author)

  7. A reduced-order, single-bubble cavitation model with applications to therapeutic ultrasound.

    Science.gov (United States)

    Kreider, Wayne; Crum, Lawrence A; Bailey, Michael R; Sapozhnikov, Oleg A

    2011-11-01

    Cavitation often occurs in therapeutic applications of medical ultrasound such as shock-wave lithotripsy (SWL) and high-intensity focused ultrasound (HIFU). Because cavitation bubbles can affect an intended treatment, it is important to understand the dynamics of bubbles in this context. The relevant context includes very high acoustic pressures and frequencies as well as elevated temperatures. Relative to much of the prior research on cavitation and bubble dynamics, such conditions are unique. To address the relevant physics, a reduced-order model of a single, spherical bubble is proposed that incorporates phase change at the liquid-gas interface as well as heat and mass transport in both phases. Based on the energy lost during the inertial collapse and rebound of a millimeter-sized bubble, experimental observations were used to tune and test model predictions. In addition, benchmarks from the published literature were used to assess various aspects of model performance. Benchmark comparisons demonstrate that the model captures the basic physics of phase change and diffusive transport, while it is quantitatively sensitive to specific model assumptions and implementation details. Given its performance and numerical stability, the model can be used to explore bubble behaviors across a broad parameter space relevant to therapeutic ultrasound.
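
    As a baseline for what such reduced-order models extend, the classic Rayleigh-Plesset equation for a spherical bubble can be integrated directly; the phase-change and heat/mass-transport terms of the paper's model are omitted here, and the drive parameters are purely illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

rho, mu, S = 998.0, 1.0e-3, 0.072        # water: density, viscosity, surface tension
p0, R0, kappa = 101325.0, 1.0e-6, 1.4    # ambient pressure, rest radius, polytropic index
pg0 = p0 + 2.0 * S / R0                  # gas pressure at equilibrium

def p_acoustic(t):
    # 1 MHz, 150 kPa drive (illustrative values only)
    return -150.0e3 * np.sin(2.0e6 * np.pi * t)

def rayleigh_plesset(t, y):
    R, Rdot = y
    pg = pg0 * (R0 / R) ** (3.0 * kappa)             # polytropic gas law
    p_wall = pg - 2.0 * S / R - 4.0 * mu * Rdot / R  # pressure at the bubble wall
    Rddot = (p_wall - p0 - p_acoustic(t)) / (rho * R) - 1.5 * Rdot ** 2 / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0.0, 5.0e-6), [R0, 0.0],
                rtol=1e-8, atol=1e-12, max_step=1.0e-9)
print(sol.y[0].max() / R0)               # peak radius relative to the rest radius
```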

  8. The Innsbruck/ESO sky models and telluric correction tools

    Directory of Open Access Journals (Sweden)

    Kimeswenger S.

    2015-01-01

    While ground-based astronomical observatories just have to correct for the line-of-sight integral of atmospheric effects, Čerenkov telescopes use the atmosphere as the primary detector. The measured radiation originates at lower altitudes and does not pass through the entire atmosphere. Thus, a decent knowledge of the profile of the atmosphere at any time is required. The latter cannot be achieved by photometric measurements of stellar sources. We show here the capabilities of our sky background model and data reduction tools for ground-based optical/infrared telescopes. Furthermore, we discuss the feasibility of monitoring the atmosphere above any observing site, and thus the possible application of the method to Čerenkov telescopes.

  9. Friction correction for model ship resistance and propulsion tests in ice at NRC's OCRE-RC

    Directory of Open Access Journals (Sweden)

    Michael Lau

    2018-05-01

    Full Text Available This paper documents the result of a preliminary analysis on the influence of hull-ice friction coefficient on model resistance and power predictions and their correlation to full-scale measurements. The study is based on previous model-scale/full-scale correlations performed on the National Research Council - Ocean, Coastal, and River Engineering Research Center's (NRC/OCRE-RC model test data. There are two objectives for the current study: (1 to validate NRC/OCRE-RC's modeling standards in regarding to its practice of specifying a CFC (Correlation Friction Coefficient of 0.05 for all its ship models; and (2 to develop a correction methodology for its resistance and propulsion predictions when the model is prepared with an ice friction coefficient slightly deviated from the CFC of 0.05. The mean CFC of 0.056 and 0.050 for perfect correlation as computed from the resistance and power analysis, respectively, have justified NRC/OCRE-RC's selection of 0.05 for the CFC of all its models. Furthermore, a procedure for minor friction corrections is developed. Keywords: Model test, Ice resistance, Power, Friction correction, Correlation friction coefficient

  10. Structuring polymers for delivery of DNA-based therapeutics: updated insights.

    Science.gov (United States)

    Gupta, Madhu; Tiwari, Shailja; Vyas, Suresh

    2012-01-01

    Gene therapy offers greater opportunities for treating numerous incurable diseases from genetic disorders, infections, and cancer. However, development of appropriate delivery systems could be one of the most important factors to overcome numerous biological barriers for delivery of various therapeutic molecules. A number of nonviral polymer-mediated vectors have been developed for DNA delivery and offer the potential to surmount the associated problems of their viral counterpart. To address the concerns associated with safety issues, a wide range of polymeric vectors are available and have been utilized successfully to deliver their therapeutics in vivo. Today's research is mainly focused on the various natural or synthetic polymer-based delivery carriers that protect the DNA molecule from degradation, which offer specific targeting to the desired cells after systemic administration, have transfection efficiencies equivalent to virus-mediated gene delivery, and have long-term gene expression through sustained-release mechanisms. This review explores an updated overview of different nonviral polymeric delivery system for delivery of DNA-based therapeutics. These polymeric carriers have been evaluated in vitro and in vivo and are being utilized in various stages of clinical evaluation. Continued research and understanding of the principles of polymer-based gene delivery systems will enable us to develop new and efficient delivery systems for the delivery of DNA-based therapeutics to achieve the goal of efficacious and specific gene therapy for a vast array of clinical disorders as the therapeutic solutions of tomorrow.

  11. Segmentation-based retrospective shading correction in fluorescence microscopy E. coli images for quantitative analysis

    Science.gov (United States)

    Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.

    2009-10-01

    Due to the inherent imperfections in the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, usually referred to as intensity inhomogeneity, intensity nonuniformity, shading or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. coli) images is proposed based on segmentation results. Segmentation and shading correction are coupled together, so we iteratively correct the shading effects based on the segmentation result and refine the segmentation by segmenting the image after shading correction. A fluorescence microscopy E. coli image can be segmented (based on its intensity values) into two classes: the background and the cells, where the intensity variation within each class is close to zero if there is no shading. We therefore make use of this characteristic to correct the shading in each iteration. Shading is mathematically modeled as a multiplicative component and an additive noise component. The additive component is removed by a denoising process, and the multiplicative component is estimated using a fast algorithm to minimize the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E. coli images. It works well not only for visual inspection, but also for numerical evaluation. Our proposed method should be useful for further quantitative analysis, especially for the comparison of protein expression values.
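
    The coupling of segmentation and shading estimation can be sketched with a much simpler stand-in for the paper's algorithm: alternately segment by threshold, build a piecewise-constant ideal image from class means, and divide out a smoothed multiplicative field.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_shading(img, n_iter=5):
    """Alternate 2-class segmentation with multiplicative shading estimation."""
    img = img.astype(float)
    for _ in range(n_iter):
        cells = img > img.mean()                 # crude intensity segmentation
        ideal = np.where(cells, img[cells].mean(), img[~cells].mean())
        shading = gaussian_filter(img / ideal, sigma=25.0)  # keep smooth field only
        img = img / shading
    return img

rng = np.random.default_rng(5)
truth = np.where(rng.random((128, 128)) > 0.9, 2.0, 1.0)  # cells on background
bias = 1.0 + 0.5 * np.mgrid[0:128, 0:128][1] / 127.0      # smooth shading field
corrected = correct_shading(truth * bias)

bg = truth == 1.0
print((truth * bias)[bg].std(), corrected[bg].std())      # background flattens
```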

  12. The ρ - ω mass difference in a relativistic potential model with pion corrections

    International Nuclear Information System (INIS)

    Palladino, B.E.; Ferreira, P.L.

    1988-01-01

    The problem of the ρ - ω mass difference is studied in the framework of the relativistic, harmonic, S+V independent quark model, implemented with center-of-mass, one-gluon-exchange and pion-cloud corrections stemming from the requirement of chiral symmetry in the (u,d) SU(2) flavour sector of the model. The pionic self-energy corrections with different intermediate energy states are instrumental in the analysis of the problem, which requires an appropriate parametrization of the mesonic sector, different from that previously used to calculate the mass spectrum of the S-wave baryons. The right ρ - ω mass splitting is found, together with a satisfactory value for the mass of the pion, calculated as a bound state of a quark-antiquark pair. An analogous discussion based on the cloudy-bag model is also presented. (author)

  13. Library based x-ray scatter correction for dedicated cone beam breast CT

    International Nuclear Information System (INIS)

    Shi, Linxi; Zhu, Lei; Vedantham, Srinivasan; Karellas, Andrew

    2016-01-01

    Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models of different sizes using the GEANT4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with a homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from the simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of the general population. Spatial nonuniformity (SNU) and contrast-to-signal-deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors' method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, sufficiently guarantees improvements in SNU and CDR. For the 15 clinical datasets, the authors' method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal views.
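
    The runtime correction reduces to a lookup and a subtraction. A sketch with a hypothetical library keyed by breast diameter (toy scatter values, and with the spatial-translation step omitted):

```python
import numpy as np

# Hypothetical precomputed Monte Carlo scatter library: breast diameter (cm)
# -> scatter distribution in projection space (toy values).
library = {d: np.full((256, 256), 0.02 + 0.004 * d) for d in (8, 10, 12, 14, 16)}

def correct_projection(proj, breast_diameter_cm):
    """Select the nearest library entry and subtract it from the measured
    projection (the spatial-alignment step is omitted here)."""
    sizes = np.array(sorted(library))
    nearest = sizes[np.argmin(np.abs(sizes - breast_diameter_cm))]
    return np.clip(proj - library[nearest], 0.0, None)

proj = np.random.default_rng(6).random((256, 256)) + 0.06
print(correct_projection(proj, 11.2).min())
```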

  14. Library based x-ray scatter correction for dedicated cone beam breast CT

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Linxi; Zhu, Lei, E-mail: leizhu@gatech.edu [Nuclear and Radiological Engineering and Medical Physics Programs, The George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Vedantham, Srinivasan; Karellas, Andrew [Department of Radiology, University of Massachusetts Medical School, Worcester, Massachusetts 01655 (United States)

    2016-08-15

    Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models of different sizes using the GEANT4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with a homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from the simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of the general population. Spatial nonuniformity (SNU) and contrast-to-signal-deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors' method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, sufficiently guarantees improvements in SNU and CDR. For the 15 clinical datasets, the authors' method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal views.

  15. Prior-based artifact correction (PBAC) in computed tomography

    International Nuclear Information System (INIS)

    Heußer, Thorsten; Brehm, Marcus; Ritschl, Ludwig; Sawall, Stefan; Kachelrieß, Marc

    2014-01-01

    Purpose: Image quality in computed tomography (CT) often suffers from artifacts which may reduce the diagnostic value of the image. In many cases, these artifacts result from missing or corrupt regions in the projection data, e.g., in the case of metal, truncation, and limited angle artifacts. The authors propose a generalized correction method for different kinds of artifacts resulting from missing or corrupt data by making use of available prior knowledge to perform data completion. Methods: The proposed prior-based artifact correction (PBAC) method requires prior knowledge in the form of a planning CT of the same patient or a CT scan of a different patient showing the same body region. In both cases, the prior image is registered to the patient image using a deformable transformation. The registered prior is forward projected, and data completion of the patient projections is performed using smooth sinogram inpainting. The obtained projection data are used to reconstruct the corrected image. Results: The authors investigate metal and truncation artifacts in patient data sets acquired with a clinical CT, and limited angle artifacts in an anthropomorphic head phantom data set acquired with a gantry-based flat detector CT device. In all cases, the corrected images obtained by PBAC are nearly artifact-free. Compared to conventional correction methods, PBAC achieves better artifact suppression while preserving the patient-specific anatomy at the same time. Further, the authors show that prominent anatomical details in the prior image seem to have only minor impact on the correction result. Conclusions: The results show that PBAC has the potential to effectively correct for metal, truncation, and limited angle artifacts if adequate prior data are available. Since the proposed method makes use of a generalized algorithm, PBAC may also be applicable to other artifacts resulting from missing or corrupt data.

  16. A metapopulation model for the spread of MRSA in correctional facilities

    Directory of Open Access Journals (Sweden)

    Marc Beauparlant

    2016-10-01

    Full Text Available The spread of methicillin-resistant strains of Staphylococcus aureus (MRSA in health-care settings has become increasingly difficult to control and has since been able to spread in the general community. The prevalence of MRSA within the general public has caused outbreaks in groups of people in close quarters such as military barracks, gyms, daycare centres and correctional facilities. Correctional facilities are of particular importance for spreading MRSA, as inmates are often in close proximity and have limited access to hygienic products and clean clothing. Although these conditions are ideal for spreading MRSA, a recent study has suggested that recurrent epidemics are caused by the influx of colonized or infected individuals into the correctional facility. In this paper, we further investigate the effects of community dynamics on the spread of MRSA within the correctional facility and determine whether recidivism has a significant effect on disease dynamics. Using a simplified hotspot model ignoring disease dynamics within the correctional facility, as well as two metapopulation models, we demonstrate that outbreaks in correctional facilities can be driven by community dynamics even when spread between inmates is restricted. We also show that disease dynamics within the correctional facility and their effect on the outlying community may be ignored due to the smaller size of the incarcerated population. This will allow construction of simpler models that consider the effects of many MRSA hotspots interacting with the general community. It is suspected that the cumulative effects of hotspots for MRSA would have a stronger feedback effect in other community settings. Keywords: methicillin-resistant Staphylococcus aureus, hotspots, mathematical model, metapopulation model, Latin Hypercube Sampling
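
    A minimal two-patch sketch of the kind of metapopulation coupling described here: colonization dynamics in the community and in the facility, linked by incarceration and release flows, with facility transmission deliberately set below the decolonization rate so that any sustained facility prevalence must be community-driven. Parameters are illustrative, not the paper's fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta_c, beta_f = 0.08, 0.02    # transmission rates (facility spread restricted)
gamma = 0.05                   # decolonization rate
m_in, m_out = 0.002, 0.02      # incarceration and release coupling rates

def rhs(t, y):
    c, f = y                   # colonized fractions: community, facility
    dc = beta_c * c * (1.0 - c) - gamma * c + m_out * (f - c)
    df = beta_f * f * (1.0 - f) - gamma * f + m_in * (c - f)
    return [dc, df]

sol = solve_ivp(rhs, (0.0, 2000.0), [0.05, 0.0])
print(sol.y[:, -1])            # facility prevalence sustained purely by influx
```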

  17. HDR Pathological Image Enhancement Based on Improved Bias Field Correction and Guided Image Filter

    Directory of Open Access Journals (Sweden)

    Qingjiao Sun

    2016-01-01

    Full Text Available Pathological image enhancement is a significant topic in the field of pathological image processing. This paper proposes a high dynamic range (HDR) pathological image enhancement method based on improved bias field correction and the guided image filter (GIF). Firstly, preprocessing including stain normalization and wavelet denoising is performed on the Haematoxylin and Eosin (H and E) stained pathological image. Then, an improved bias field correction model is developed to enhance the influence of light on the high-frequency part of the image and to correct the intensity inhomogeneity and detail discontinuity of the image. Next, the HDR pathological image is generated by a least-squares method using the low dynamic range (LDR) image and the H and E channel images. Finally, the fine enhanced image is acquired after a detail enhancement process. Experiments with 140 pathological images demonstrate the performance advantages of our proposed method compared with related work.

  18. Chromatic aberrations correction for imaging spectrometer based on acousto-optic tunable filter with two transducers.

    Science.gov (United States)

    Zhao, Huijie; Wang, Ziye; Jia, Guorui; Zhang, Ying; Xu, Zefu

    2017-10-02

    The acousto-optic tunable filter (AOTF) with wide wavelength range and high spectral resolution has a long crystal and two transducers. A longer crystal length leads to a bigger chromatic focal shift, and the double-transducer arrangement induces an angular mutation in the diffracted beam, which increase the difficulty of longitudinal and lateral chromatic aberration correction, respectively. In this study, the two chromatic aberrations are analyzed quantitatively based on an AOTF optical model, and a novel catadioptric dual-path configuration is proposed to correct both chromatic aberrations. The test results demonstrate the effectiveness of the optical configuration for this type of AOTF-based imaging spectrometer.

  19. Fuzzy clustering-based segmented attenuation correction in whole-body PET

    CERN Document Server

    Zaidi, H; Boudraa, A; Slosman, DO

    2001-01-01

    Segmentation-based attenuation correction is now a widely accepted technique for reducing the noise contribution of measured attenuation correction. In this paper, we present a new method for segmenting transmission images in positron emission tomography. This reduces the noise in the correction maps while still correcting for the differing attenuation coefficients of specific tissues. Based on the fuzzy C-means (FCM) algorithm, the method segments the PET transmission images into a given number of clusters to extract specific areas of differing attenuation such as air, the lungs and soft tissue, preceded by a median filtering procedure. The reconstructed transmission image voxels are thereby segmented into populations of uniform attenuation based on the human anatomy. The clustering procedure starts with an over-specified number of clusters, followed by a merging process to group clusters with similar properties and remove undesired substructures using anatomical knowledge. The method is unsupervised, adaptive and a...
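    A minimal fuzzy C-means sketch on synthetic 1-D transmission-image intensities illustrates the clustering step; the published method additionally applies median pre-filtering and an anatomy-driven cluster-merging stage, which are omitted here. The intensity peaks and cluster count are illustrative.

```python
import numpy as np

def fcm(x, n_clusters, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy C-means on 1-D data; returns centers and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                    # memberships of each voxel sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = um @ x / um.sum(axis=1)         # fuzzy-weighted centroids
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))               # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u

# synthetic "transmission image" intensities: air, lung, soft-tissue peaks
rng = np.random.default_rng(1)
voxels = np.concatenate([rng.normal(mu, 0.004, 2000)
                         for mu in (0.002, 0.030, 0.096)])
centers, u = fcm(voxels, n_clusters=3)
labels = u.argmax(axis=0)                 # defuzzified (hard) segmentation
print("cluster centers (~attenuation coefficients, 1/cm):", np.sort(centers))
```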

  20. Bartlett correction in the stable AR(1) model with intercept and trend

    NARCIS (Netherlands)

    van Giersbergen, N.P.A.

    2004-01-01

    The Bartlett correction is derived for testing hypotheses about the autoregressive parameter ρ in the stable: (i) AR(1) model; (ii) AR(1) model with intercept; (iii) AR(1) model with intercept and linear trend. The correction is found explicitly as a function of ρ. In the models with deterministic

  1. Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors

    OpenAIRE

    Francois-Éric Racicot; Raymond Théoret; Alain Coen

    2006-01-01

    In this paper, we propose a new empirical version of the Fama and French model, based on the Hausman (1978) specification test and aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models for measurement errors. Removing measurement errors is important at many levels, such as information disclosure, corporate governance and the protection of investors.

  2. Addressing the mischaracterization of extreme rainfall in regional climate model simulations - A synoptic pattern based bias correction approach

    Science.gov (United States)

    Li, Jingwan; Sharma, Ashish; Evans, Jason; Johnson, Fiona

    2018-01-01

    Addressing systematic biases in regional climate model simulations of extreme rainfall is a necessary first step before assessing changes in future rainfall extremes. Commonly used bias correction methods are designed to match the statistics of the overall simulated rainfall with observations. This assumes that a change in the mix of different types of extreme rainfall events (i.e. convective and non-convective) in a warmer climate is of little relevance to the estimation of overall change, an assumption that is not supported by empirical or physical evidence. This study proposes an alternative approach that accounts for the potential change of alternate rainfall types, characterized here by synoptic weather patterns (SPs) classified using self-organizing maps. The objective of this study is to evaluate the added influence of SPs on the bias correction, which is achieved by comparing the corrected distribution of future extreme rainfall with that obtained using conventional quantile mapping. A comprehensive synthetic experiment is first defined to investigate the conditions under which the additional information of SPs makes a significant difference to the bias correction. Using over 600,000 synthetic cases, statistically significant differences are found in 46% of cases. This is followed by a case study over the Sydney region using a high-resolution run of the Weather Research and Forecasting (WRF) regional climate model, which indicates a small change in the proportions of the SPs and a statistically significant change in extreme rainfall over the region, although the differences between the changes obtained from the two bias correction methods are not statistically significant.
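    Conventional quantile mapping, the baseline the study compares against, can be sketched in a few lines; the SP-conditioned variant would apply the same mapping separately within each synoptic-pattern class. The data here are synthetic.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut, n_q=101):
    """Map future model values through the historical model->obs quantile relation."""
    q = np.linspace(0, 1, n_q)
    mq = np.quantile(model_hist, q)      # model climatology quantiles
    oq = np.quantile(obs_hist, q)        # observed climatology quantiles
    return np.interp(model_fut, mq, oq)  # read each value off the transfer function

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 8.0, 5000)          # "observed" daily rainfall (synthetic)
mod = rng.gamma(2.0, 6.0, 5000)          # biased historical model rainfall
fut = rng.gamma(2.2, 6.5, 5000)          # future model rainfall
corrected = quantile_map(mod, obs, fut)
print("bias-corrected 99th percentile:",
      round(float(np.quantile(corrected, 0.99)), 1))
```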

  3. Standard Model-like corrections to Dilatonic Dynamics

    DEFF Research Database (Denmark)

    Antipin, Oleg; Krog, Jens; Mølgaard, Esben

    2013-01-01

    the same non-abelian global symmetries as a technicolor-like theory with matter in a complex representation of the gauge group. We then embed the electroweak gauge group within the global flavor structure and also add ordinary quark-like states to mimic the effects of the top. We find that the standard model-like induced corrections modify the original phase diagram and the details of the dilatonic spectrum. In particular, we show that the corrected theory exhibits near-conformal behavior for a smaller range of flavors and colors. For this range of values, however, our results suggest that near...

  4. A model measuring therapeutic inertia and the associated factors among diabetes patients: A nationwide population-based study in Taiwan.

    Science.gov (United States)

    Huang, Li-Ying; Shau, Wen-Yi; Yeh, Hseng-Long; Chen, Tsung-Tai; Hsieh, Jun Yi; Su, Syi; Lai, Mei-Shu

    2015-01-01

    This article presents an analysis conducted on the patterns related to therapeutic inertia with the aim of uncovering how variables at the patient level and the healthcare provider level influence the intensification of therapy when it is clinically indicated. A cohort study was conducted on 899,135 HbA1c results from 168,876 adult diabetes patients with poorly controlled HbA1c levels. HbA1c results were used to identify variations in the prescription of hypoglycemic drugs. Logistic regression and hierarchical linear models (HLMs) were used to determine how differences among healthcare providers and patient characteristics influence therapeutic inertia. We estimated that 38.5% of the patients in this study were subject to therapeutic inertia. The odds ratio of cardiologists choosing to intensify therapy was 0.708 times that of endocrinologists. Furthermore, patients in medical centers were shown to be 1.077 times more likely to be prescribed intensified treatment than patients in primary clinics. The HLMs presented results similar to those of the logistic model. Overall, we determined that 88.92% of the variation in the application of intensified treatment was at the within-physician level. Reducing therapeutic inertia will likely require educational initiatives aimed at ensuring adherence to clinical practice guidelines in the care of diabetes patients. © 2014, The American College of Clinical Pharmacology.

  5. A Technique for Real-Time Ionospheric Ranging Error Correction Based On Radar Dual-Frequency Detection

    Science.gov (United States)

    Lyu, Jiang-Tao; Zhou, Chen

    2017-12-01

    Ionospheric refraction is one of the principal error sources limiting the accuracy of radar systems for space target detection. High-accuracy measurement of the ionospheric electron density along the propagation path of the radar wave is the most important procedure for ionospheric refraction correction. Traditionally, ionospheric models and ionospheric detection instruments, such as ionosondes or GPS receivers, are employed to obtain the electron density. However, neither method is capable of satisfying the correction-accuracy requirements of advanced space target radar systems. In this study, we propose a novel technique for ionospheric refraction correction based on radar dual-frequency detection. Radar target range measurements at two adjacent frequencies are utilized to calculate the electron density integral exactly along the propagation path of the radar wave, which yields an accurate ionospheric range correction. The implementation of radar dual-frequency detection is validated with a P-band radar located in midlatitude China. The experimental results show that this novel technique is more accurate than traditional ionospheric model correction. The technique proposed in this study is very promising for high-accuracy radar detection and tracking of objects in geospace.
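    The dual-frequency idea follows from the standard first-order ionospheric group delay, which adds about 40.3*TEC/f^2 metres to the measured range; two measurements at adjacent frequencies therefore determine both the TEC and the corrected range. The sketch below uses synthetic numbers, not the paper's P-band data.

```python
K = 40.3  # standard first-order ionosphere constant (SI units)

def dual_freq_correct(r1, r2, f1, f2):
    """Solve r_i = R + K*TEC/f_i^2 for the true range R and slant TEC."""
    tec = (r1 - r2) / (K * (1 / f1**2 - 1 / f2**2))
    return r1 - K * tec / f1**2, tec

# synthetic example: two adjacent P-band frequencies, known "truth"
f1, f2 = 430e6, 440e6                     # Hz
R_true, tec_true = 1.0e6, 5e17            # m, electrons/m^2
r1 = R_true + K * tec_true / f1**2        # measured ranges with iono delay
r2 = R_true + K * tec_true / f2**2
R_est, tec_est = dual_freq_correct(r1, r2, f1, f2)
print(f"range error after correction: {R_est - R_true:.2e} m, TEC: {tec_est:.2e}")
```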

  6. Threshold corrections and gauge symmetry in twisted superstring models

    International Nuclear Information System (INIS)

    Pierce, D.M.

    1994-01-01

    Threshold corrections to the running of gauge couplings are calculated for superstring models with free complex world-sheet fermions. For two N=1 SU(2)×U(1)⁵ models, the threshold corrections lead to a small increase in the unification scale. Examples are given to illustrate how a given particle spectrum can be described by models with different boundary conditions on the internal fermions. We also discuss how complex twisted fermions can enhance the symmetry group of an N=4 SU(3)×U(1)×U(1) model to the gauge group SU(3)×SU(2)×U(1). It is then shown how a mixing angle analogous to the Weinberg angle depends on the boundary conditions of the internal fermions

  7. Corrected Statistical Energy Analysis Model for Car Interior Noise

    Directory of Open Access Journals (Sweden)

    A. Putra

    2015-01-01

    Full Text Available Statistical energy analysis (SEA) is a well-known method for analyzing the flow of acoustic and vibration energy in a complex structure. In an acoustic space where significant absorptive materials are present, the direct field component from the sound source dominates the total sound field rather than the reverberant field, whereas the latter is the basis on which the conventional SEA model is constructed. Such an environment is found in a car interior, and a corrected SEA model is therefore proposed here for this situation. The model is developed by eliminating the direct field component from the total sound field and considering only the power after the first reflection. A test car cabin was divided into two subsystems and, using a loudspeaker as a sound source, the power injection method in SEA was employed to obtain the corrected coupling loss factor and the damping loss factor from the corrected SEA model. These parameters were then used to predict the sound pressure level in the interior cabin using the injected input power from the engine. The results show satisfactory agreement with the directly measured SPL.
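    For two subsystems, the power injection method reduces to a small linear solve: inject a known power into each subsystem in turn, measure the subsystem energies, and invert the power-balance equations for the damping and coupling loss factors. The energies below are made-up stand-ins for measured values.

```python
import numpy as np

omega = 2 * np.pi * 1000.0          # band centre frequency (rad/s)

# E[i, j]: measured energy (J) of subsystem i with unit power injected into j
E = np.array([[1.6e-2, 2.4e-3],
              [2.0e-3, 1.2e-2]])
P = np.eye(2)                       # 1 W injected into one subsystem at a time

# power balance per experiment: P = omega * C @ E, with
# C = [[eta1 + eta12, -eta21], [-eta12, eta2 + eta21]]
C = P @ np.linalg.inv(E) / omega
eta12, eta21 = -C[1, 0], -C[0, 1]                 # coupling loss factors
eta1, eta2 = C[0, 0] - eta12, C[1, 1] - eta21     # damping loss factors
print(f"DLFs: {eta1:.2e}, {eta2:.2e}  CLFs: {eta12:.2e}, {eta21:.2e}")
```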

  8. Agonist anti-GITR antibody significantly enhances the therapeutic efficacy of Listeria monocytogenes-based immunotherapy.

    Science.gov (United States)

    Shrimali, Rajeev; Ahmad, Shamim; Berrong, Zuzana; Okoev, Grigori; Matevosyan, Adelaida; Razavi, Ghazaleh Shoja E; Petit, Robert; Gupta, Seema; Mkrtichyan, Mikayel; Khleif, Samir N

    2017-08-15

    We previously demonstrated that in addition to generating an antigen-specific immune response, Listeria monocytogenes (Lm)-based immunotherapy significantly reduces the ratio of regulatory T cells (Tregs)/CD4+ and myeloid-derived suppressor cells (MDSCs) in the tumor microenvironment. Since Lm-based immunotherapy is able to inhibit the immune suppressive environment, we hypothesized that combining this treatment with an agonist antibody to a co-stimulatory receptor that would further boost the effector arm of immunity will result in significant improvement of the anti-tumor efficacy of treatment. Here we tested the immune and therapeutic efficacy of Listeria-based immunotherapy combined with an agonist antibody to glucocorticoid-induced tumor necrosis factor receptor-related protein (GITR) in the TC-1 mouse tumor model. We evaluated the potency of the combination on tumor growth and survival of treated animals and profiled the tumor microenvironment for effector and suppressor cell populations. We demonstrate that the combination of Listeria-based immunotherapy with an agonist antibody to GITR synergizes to improve the immune and therapeutic efficacy of treatment in a mouse tumor model. We show that this combinational treatment leads to significant inhibition of tumor growth, prolongs survival and leads to complete regression of established tumors in 60% of treated animals. We determined that this therapeutic benefit of combinational treatment is due to a significant increase in tumor-infiltrating effector CD4+ and CD8+ T cells along with a decrease of inhibitory cells. To our knowledge, this is the first study that exploits Lm-based immunotherapy combined with an agonist anti-GITR antibody as a potent treatment strategy that simultaneously targets both the effector and suppressor arms of the immune system, leading to significantly improved anti-tumor efficacy. We believe that our findings depicted in this manuscript provide a promising and translatable strategy that can enhance the overall

  9. Chitosan-based delivery systems for protein therapeutics and antigens

    NARCIS (Netherlands)

    Amidi, M.; Mastrobattista, E.; Jiskoot, W.; Hennink, W.E.

    Therapeutic peptides/proteins and protein-based antigens are chemically and structurally labile compounds, which are almost exclusively administered by parenteral injections. Recently, non-invasive mucosal routes have attracted interest for administration of these biotherapeutics. Chitosan-based

  10. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    Science.gov (United States)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction-based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  11. Using an experimental model for the study of therapeutic touch.

    Science.gov (United States)

    dos Santos, Daniella Soares; Marta, Ilda Estéfani Ribeiro; Cárnio, Evelin Capellari; de Quadros, Andreza Urba; Cunha, Thiago Mattar; de Carvalho, Emilia Campos

    2013-02-01

    To verify whether the Paw Edema Model can be used in investigations of the effects of Therapeutic Touch on inflammation by measuring the variables pain, edema and neutrophil migration. This is a pilot experimental study involving ten male mice of the same genetic strain, divided into an experimental and a control group and submitted to chemical induction of local inflammation in the right back paw. The experimental group received a daily administration of Therapeutic Touch for 15 minutes over three days. The data showed statistically significant differences in the nociceptive threshold and in the paw circumference of the animals from the experimental group on the second day of the experiment. The experimental model involving animals can contribute to the study of the effects of Therapeutic Touch on inflammation; adjustments are suggested in the treatment duration, number of sessions and experiment duration.

  12. Correction of Flow Curves and Constitutive Modelling of a Ti-6Al-4V Alloy

    Directory of Open Access Journals (Sweden)

    Ming Hu

    2018-04-01

    Full Text Available Isothermal uniaxial compressions of a Ti-6Al-4V alloy were carried out in the temperature range of 800–1050 °C and the strain rate range of 0.001–1 s⁻¹. The effects of friction between the specimen and the anvils, as well as the temperature rise caused by high-strain-rate deformation, were considered, and the flow curves were corrected accordingly. Constitutive models were discussed based on the corrected flow curves. The correlation coefficient and average absolute relative error for the strain-compensated Arrhenius-type constitutive model are 0.986 and 9.168%, respectively, while the values for a modified Johnson-Cook constitutive model are 0.924 and 22.673%, respectively. Therefore, the strain-compensated Arrhenius-type constitutive model has a better prediction capability than the modified Johnson-Cook constitutive model.
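    The sinh-type Arrhenius relation referred to above can be sketched as follows; in the strain-compensated form the constants become polynomials in strain, whereas here they are held fixed, and their values are placeholders rather than the fitted Ti-6Al-4V parameters.

```python
import numpy as np

R = 8.314                                   # gas constant, J/(mol K)
alpha, n, Q, lnA = 0.01, 4.5, 6.5e5, 55.0   # placeholder constants (alpha in 1/MPa)

def flow_stress(strain_rate, T_celsius):
    """sigma = (1/alpha)*asinh((Z/A)^(1/n)), Z = strain_rate*exp(Q/(R*T))."""
    T = T_celsius + 273.15
    Z = strain_rate * np.exp(Q / (R * T))   # Zener-Hollomon parameter
    return np.arcsinh((Z / np.exp(lnA)) ** (1.0 / n)) / alpha  # MPa

for T in (800, 900, 1000):
    stresses = [round(float(flow_stress(sr, T)), 1) for sr in (0.001, 0.1, 1.0)]
    print(f"{T} degC, strain rates 0.001/0.1/1 s^-1 -> {stresses} MPa")
```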

  13. Minimal and non-minimal standard models: Universality of radiative corrections

    International Nuclear Information System (INIS)

    Passarino, G.

    1991-01-01

    The possibility of describing electroweak processes by means of models with a non-minimal Higgs sector is analyzed. The renormalization procedure which leads to a set of fitting equations for the bare parameters of the Lagrangian is first reviewed for the minimal standard model. A solution of the fitting equations is obtained which correctly includes large higher-order corrections. Predictions for physical observables, notably the W boson mass and the Z⁰ partial widths, are discussed in detail. Finally, the extension to non-minimal models is described under the assumption that new physics will appear only inside the vector boson self-energies, and the concept of universality of radiative corrections is introduced, showing that to a large extent they are insensitive to the details of the enlarged Higgs sector. Consequences for the bounds on the top quark mass are also discussed. (orig.)

  14. Corrective Action Decision Document/Corrective Action Plan for Corrective Action Unit 97: Yucca Flat/Climax Mine Nevada National Security Site, Nevada, Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Farnham, Irene [Navarro, Las Vegas, NV (United States)]

    2017-08-01

    This corrective action decision document (CADD)/corrective action plan (CAP) has been prepared for Corrective Action Unit (CAU) 97, Yucca Flat/Climax Mine, Nevada National Security Site (NNSS), Nevada. The Yucca Flat/Climax Mine CAU is located in the northeastern portion of the NNSS and comprises 720 corrective action sites. A total of 747 underground nuclear detonations took place within this CAU between 1957 and 1992 and resulted in the release of radionuclides (RNs) in the subsurface in the vicinity of the test cavities. The CADD portion describes the Yucca Flat/Climax Mine CAU data-collection and modeling activities completed during the corrective action investigation (CAI) stage, presents the corrective action objectives, and describes the actions recommended to meet the objectives. The CAP portion describes the corrective action implementation plan. The CAP presents CAU regulatory boundary objectives and initial use-restriction boundaries identified and negotiated by DOE and the Nevada Division of Environmental Protection (NDEP). The CAP also presents the model evaluation process designed to build confidence that the groundwater flow and contaminant transport modeling results can be used for the regulatory decisions required for CAU closure. The UGTA strategy assumes that active remediation of subsurface RN contamination is not feasible with current technology. As a result, the corrective action is based on a combination of characterization and modeling studies, monitoring, and institutional controls. The strategy is implemented through a four-stage approach that comprises the following: (1) corrective action investigation plan (CAIP), (2) CAI, (3) CADD/CAP, and (4) closure report (CR) stages.

  15. Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models

    Science.gov (United States)

    Raykov, Tenko

    2005-01-01

    A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between average of resample conventional noncentrality parameter estimates and their sample counterpart. The…

  16. Prediction of a Therapeutic Dose for Buagafuran, a Potent Anxiolytic Agent by Physiologically Based Pharmacokinetic/Pharmacodynamic Modeling Starting from Pharmacokinetics in Rats and Human

    Directory of Open Access Journals (Sweden)

    Fen Yang

    2017-10-01

    Full Text Available Physiologically based pharmacokinetic (PBPK)/pharmacodynamic (PD) models can contribute to animal-to-human extrapolation and therapeutic dose prediction. Buagafuran is a novel anxiolytic agent, and phase I clinical trials of buagafuran have been completed. In this paper, a potentially effective dose of 30 mg t.i.d. in humans was estimated for buagafuran, based on the human brain concentration predicted by PBPK/PD modeling. The software GastroPlus™ was used to build the PBPK/PD model for buagafuran in the rat, which related the brain tissue concentrations of buagafuran to the times the animals entered the open arms in the elevated plus-maze pharmacological model. Buagafuran concentrations in human plasma were fitted and brain tissue concentrations were predicted using a human PBPK model in which the predicted plasma profiles were in good agreement with observations. The results provide supportive data for the rational use of buagafuran in the clinic.
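    The paper's model was built in the commercial GastroPlus platform; as a conceptual stand-in, this open sketch couples a one-compartment plasma model to a brain compartment with first-order exchange and simulates a 30 mg t.i.d. regimen. All rate constants and the unit distribution volume are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

ke, kpb, kbp = 0.4, 0.8, 0.5       # 1/h: elimination, plasma->brain, brain->plasma
dose = 30.0                        # mg per administration (V = 1 L assumed)

def rhs(t, y):
    Cp, Cb = y
    return [-(ke + kpb) * Cp + kbp * Cb,    # plasma compartment
            kpb * Cp - kbp * Cb]            # brain compartment

y, brain_peak = np.array([0.0, 0.0]), 0.0
for _ in range(3):                 # three 8-hour intervals (30 mg t.i.d.)
    y[0] += dose                   # treat each oral dose as an instantaneous bolus
    sol = solve_ivp(rhs, (0.0, 8.0), y, max_step=0.05)
    brain_peak = max(brain_peak, sol.y[1].max())
    y = sol.y[:, -1]
print(f"illustrative peak brain concentration: {brain_peak:.1f} mg/L")
```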

  17. Toxin-Based Therapeutic Approaches

    Science.gov (United States)

    Shapira, Assaf; Benhar, Itai

    2010-01-01

    Protein toxins confer a defense against predation/grazing or a superior pathogenic competence upon the producing organism. Such toxins have been perfected through evolution in poisonous animals/plants and pathogenic bacteria. Over the past five decades, considerable effort has been invested in studying their mechanism of action, the way they contribute to pathogenicity, and the development of antidotes that neutralize their action. In parallel, many research groups have turned to exploring the pharmaceutical potential of such toxins when used to efficiently impair essential cellular processes and/or damage the integrity of their target cells. The following review summarizes major advances in the field of toxin-based therapeutics and offers a comprehensive description of the mode of action of each applied toxin. PMID:22069564

  18. An Overview on the Role of α-Synuclein in Experimental Models of Parkinson's Disease from Pathogenesis to Therapeutics.

    Science.gov (United States)

    Javed, Hayate; Kamal, Mohammad Amjad; Ojha, Shreesh

    2016-01-01

    Parkinson's disease (PD) is a devastating and progressive movement disorder characterized by symptoms of muscle rigidity, tremor, postural instability and slow physical movements. Biochemically, PD is characterized by a lack of dopamine production and action due to the loss of dopaminergic neurons, and neuropathologically by the presence of intracytoplasmic inclusions known as Lewy bodies, which mainly consist of the presynaptic neuronal protein α-synuclein (α-syn). It is believed that alteration in α-syn homeostasis leads to increased accumulation and aggregation of α-syn in Lewy bodies. Given the important role of α-syn from pathogenesis to therapeutics, recent research has focused on deciphering its critical role at an advanced level. Being a major protein in Lewy bodies with a key role in the pathogenesis of PD, several model systems including immortalized cell lines (SH-SY5Y), primary neuronal cultures, yeast (Saccharomyces cerevisiae), Drosophila (fruit flies), nematodes (Caenorhabditis elegans) and rodents are employed to understand PD pathogenesis and treatment. To study the etiopathogenesis and develop novel therapeutic targets for α-syn aggregation, the majority of investigators rely on toxin (rotenone, 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine, 6-hydroxydopamine, paraquat)-induced animal models of PD as a tool for basic research, whereas cell- and tissue-based models are mostly utilized to elucidate the mechanistic and molecular pathways underlying α-syn-induced toxicity and therapeutic approaches in PD. Gene-modified mouse models based on α-syn expression are fascinating for modeling familial PD, and toxin-induced models provide a suitable approach for sporadic PD. The purpose of this review is to provide a summary and a critical review of the involvement of α-syn in various in vitro and in vivo models of PD based on the use of neurotoxins as well as genetic modifications.

  19. MR-based attenuation correction for cardiac FDG PET on a hybrid PET/MRI scanner: comparison with standard CT attenuation correction

    Energy Technology Data Exchange (ETDEWEB)

    Vontobel, Jan; Liga, Riccardo; Possner, Mathias; Clerc, Olivier F.; Mikulicic, Fran; Veit-Haibach, Patrick; Voert, Edwin E.G.W. ter; Fuchs, Tobias A.; Stehli, Julia; Pazhenkottil, Aju P.; Benz, Dominik C.; Graeni, Christoph; Gaemperli, Oliver; Herzog, Bernhard; Buechel, Ronny R.; Kaufmann, Philipp A. [University Hospital Zurich, Department of Nuclear Medicine, Zurich (Switzerland)

    2015-09-15

    The aim of this study was to evaluate the feasibility of attenuation correction (AC) for cardiac ¹⁸F-labelled fluorodeoxyglucose (FDG) positron emission tomography (PET) using MR-based attenuation maps. We included 23 patients with no known cardiac history undergoing whole-body FDG PET/CT imaging for oncological indications on a PET/CT scanner using time-of-flight (TOF) and subsequent whole-body PET/MR imaging on an investigational hybrid PET/MRI scanner. Data sets from PET/MRI (with and without TOF) were reconstructed using MR AC and semi-quantitative segmental (20-segment model) myocardial tracer uptake (per cent of maximum) and compared to PET/CT which was reconstructed using CT AC and served as standard of reference. Excellent correlations were found for regional uptake values between PET/CT and PET/MRI with TOF (n = 460 segments in 23 patients; r = 0.913; p < 0.0001) with narrow Bland-Altman limits of agreement (-8.5 to +12.6 %). Correlation coefficients were slightly lower between PET/CT and PET/MRI without TOF (n = 460 segments in 23 patients; r = 0.851; p < 0.0001) with broader Bland-Altman limits of agreement (-12.5 to +15.0 %). PET/MRI with and without TOF showed minimal underestimation of tracer uptake (-2.08 and -1.29 %, respectively), compared to PET/CT. Relative myocardial FDG uptake obtained from MR-based attenuation corrected FDG PET is highly comparable to standard CT-based attenuation corrected FDG PET, suggesting interchangeability of both AC techniques. (orig.)

  20. Tracer kinetic modelling of receptor data with mathematical metabolite correction

    International Nuclear Information System (INIS)

    Burger, C.; Buck, A.

    1996-01-01

    Quantitation of metabolic processes with dynamic positron emission tomography (PET) and tracer kinetic modelling relies on the time course of authentic ligand in plasma, i.e. the input curve. The determination of the latter often requires the measurement of labelled metabolites, a laborious procedure. In this study we examined the possibility of mathematical metabolite correction, which might obviate the need for actual metabolite measurements. Mathematical metabolite correction was implemented by estimating the input curve together with kinetic tissue parameters. The general feasibility of the approach was evaluated in a Monte Carlo simulation using a two-tissue compartment model. The method was then applied to a series of five human carbon-11 iomazenil PET studies. The measured cerebral tissue time-activity curves were fitted with a single-tissue compartment model. For mathematical metabolite correction, the input curve following the peak was approximated by a sum of three decaying exponentials, the amplitudes and characteristic half-times of which were then estimated by the fitting routine. In the simulation study the parameters used to generate synthetic tissue time-activity curves (K₁–k₄) were refitted with reasonable identifiability when using mathematical metabolite correction. Absolute quantitation of distribution volumes was found to be possible provided that the metabolite and the kinetic models are adequate. If the kinetic model is oversimplified, the linearity of the correlation between true and estimated distribution volumes is still maintained, although the linear regression becomes dependent on the input curve. These simulation results were confirmed when applying mathematical metabolite correction to the ¹¹C iomazenil studies. Estimates of the distribution volume calculated with a measured input curve were linearly related to the estimates calculated using mathematical metabolite correction, with correlation coefficients >0.990. (orig./MG)
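    The key approximation described above (post-peak input curve as a sum of three decaying exponentials) is easy to sketch; here the amplitudes and half-times are recovered from synthetic noisy data with scipy's curve_fit, whereas in the paper they are estimated jointly with the kinetic tissue parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, a1, a2, a3, h1, h2, h3):
    """Sum of three decaying exponentials parameterized by half-times."""
    lams = np.log(2) / np.array([h1, h2, h3])
    return (a1 * np.exp(-lams[0] * t) + a2 * np.exp(-lams[1] * t)
            + a3 * np.exp(-lams[2] * t))

t = np.linspace(0, 60, 120)                  # minutes after the plasma peak
true_params = (50, 30, 10, 1.0, 8.0, 40.0)   # amplitudes and half-times
rng = np.random.default_rng(0)
y = tri_exp(t, *true_params) + rng.normal(0, 0.5, t.size)  # noisy "input curve"

popt, _ = curve_fit(tri_exp, t, y, p0=(40, 20, 5, 0.5, 5, 30), maxfev=20000)
print("fitted half-times (min):", np.round(popt[3:], 2))
```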

  1. Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept

    Directory of Open Access Journals (Sweden)

    Ahmed Elsaadany

    2014-01-01

    Full Text Available Improvement in terminal accuracy is an important objective for future artillery projectiles, and it is often associated with range extension. Various concepts and modifications, such as the course correction fuze, have been proposed to correct the range and drift of artillery projectiles. Course correction fuze concepts could provide an attractive and cost-effective solution for improving munitions accuracy. In this paper, trajectory correction is obtained using two kinds of course correction modules: one devoted to range correction (the drag ring brake) and one devoted to drift correction (the canard-based correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects of the projectile aerodynamic parameters on deflection. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable of range correction; deploying the drag brake early in the trajectory results in a large range correction, and the deployment time can be predefined depending on the required range correction. The canard-based correction fuze, on the other hand, is found to have a greater effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate with the roll motion.

  2. Accuracy improvement capability of advanced projectile based on course correction fuze concept.

    Science.gov (United States)

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles, and it is often associated with range extension. Various concepts and modifications, such as the course correction fuze, have been proposed to correct the range and drift of artillery projectiles. Course correction fuze concepts could provide an attractive and cost-effective solution for improving munitions accuracy. In this paper, trajectory correction is obtained using two kinds of course correction modules: one devoted to range correction (the drag ring brake) and one devoted to drift correction (the canard-based correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects of the projectile aerodynamic parameters on deflection. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable of range correction; deploying the drag brake early in the trajectory results in a large range correction, and the deployment time can be predefined depending on the required range correction. The canard-based correction fuze, on the other hand, is found to have a greater effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate with the roll motion.

  3. Logarithmic corrections to scaling in the XY2-model

    International Nuclear Information System (INIS)

    Kenna, R.; Irving, A.C.

    1995-01-01

    We study the distribution of partition function zeroes for the XY-model in two dimensions. In particular we find the scaling behaviour of the end of the distribution of zeroes in the complex external magnetic field plane in the thermodynamic limit (the Yang-Lee edge) and the form for the density of these zeroes. Assuming that finite-size scaling holds, we show that there have to exist logarithmic corrections to the leading scaling behaviour of thermodynamic quantities in this model. These logarithmic corrections are also manifest in the finite-size scaling formulae and we identify them numerically. The method presented here can be used to check the compatibility of scaling behaviour of odd and even thermodynamic functions in other models too. ((orig.))

  4. Metabolic correction for attention deficit/hyperactivity disorder: A biochemical-physiological therapeutic approach

    Directory of Open Access Journals (Sweden)

    Mikirova NA

    2013-01-01

    Full Text Available Objective: This investigation was undertaken to determine the reference values of specific biochemical markers that have been associated with behavior typical of ADHD in a group of patients before and after metabolic correction. Background: Attention deficit hyperactivity disorder (ADHD) affects approximately two million American children, and this condition has grown to become the most commonly diagnosed behavioral disorder of childhood. According to the National Institute of Mental Health (NIMH), the cause of the condition, once called hyperkinesis, is not known. The cause of ADHD is generally acknowledged to be multifactorial, involving both biological and environmental influences. Molecular, genetic, and pharmacological studies suggest the involvement of neurotransmitter systems in the pathogenesis of ADHD. Polymorphic variants in several genes involved in the regulation of dopamine have been identified, and alterations in related neurotransmitter pathways are reported to be associated with the disease. Nutritional deficiencies, including deficiencies in fatty acids (EPA, DHA), the amino acid methionine, and the trace minerals zinc and selenium, have been shown to influence neuronal function and produce defects in neuronal plasticity, as well as impact behavior in children with attention deficit hyperactivity disorder. Materials/Methods: This study was based on data extracted from our patient history database covering a period of over ten years. We performed laboratory tests in 116 patients 2.7-25 years old with a diagnosis of ADHD. Sixty-six percent (66%) of the patients were male. Patients were followed from 3 months to 3 years. We compared the distributions of fatty acids, essential metals, and the levels of metabolic stress factors with established reference ranges before and after interventions. In addition, we analyzed the association between toxic metal concentrations and the levels of essential metals. Results: This study was based

  5. Carotid wall volume quantification from magnetic resonance images using deformable model fitting and learning-based correction of systematic errors

    International Nuclear Information System (INIS)

    Hameeteman, K; Niessen, W J; Klein, S; Van 't Klooster, R; Selwaness, M; Van der Lugt, A; Witteman, J C M

    2013-01-01

    We present a method for carotid vessel wall volume quantification from magnetic resonance imaging (MRI). The method combines lumen and outer wall segmentation based on deformable model fitting with a learning-based segmentation correction step. After selecting two initialization points, the vessel wall volume in a region around the bifurcation is automatically determined. The method was trained on eight datasets (16 carotids) from a population-based study in the elderly for which one observer manually annotated both the lumen and outer wall. An evaluation was carried out on a separate set of 19 datasets (38 carotids) from the same study for which two observers made annotations. Wall volume and normalized wall index measurements resulting from the manual annotations were compared to the automatic measurements. Our experiments show that the automatic method performs comparably to the manual measurements. All image data and annotations used in this study together with the measurements are made available through the website http://ergocar.bigr.nl. (paper)

  6. Vascular input function correction of inflow enhancement for improved pharmacokinetic modeling of liver DCE-MRI.

    Science.gov (United States)

    Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B

    2018-06-01

    To propose a simple method to correct vascular input function (VIF) due to inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve accuracy of VIF estimation and pharmacokinetic fitting. In animal study results, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived by uncorrected VIFs showed no significant changes. The proposed correction method improves accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  7. Hydrological modeling as an evaluation tool of EURO-CORDEX climate projections and bias correction methods

    Science.gov (United States)

    Hakala, Kirsti; Addor, Nans; Seibert, Jan

    2017-04-01

    Streamflow stemming from Switzerland's mountainous landscape will be influenced by climate change, which will pose significant challenges to the water management and policy sector. In climate change impact research, the determination of future streamflow is impeded by different sources of uncertainty, which propagate through the model chain. In this research, we explicitly consider the following sources of uncertainty: (1) climate models, (2) downscaling of the climate projections to the catchment scale, (3) bias correction method and (4) parameterization of the hydrological model. We utilize climate projections at 0.11° (approximately 12.5 km) resolution from the EURO-CORDEX project, which are the most recent climate projections for the European domain. EURO-CORDEX comprises regional climate model (RCM) simulations, which have been downscaled from global climate models (GCMs) from the CMIP5 archive using both dynamical and statistical techniques. Uncertainties are explored by applying a modeling chain involving 14 GCM-RCMs to ten Swiss catchments. We utilize the rainfall-runoff model HBV Light, which has been widely used in operational hydrological forecasting. The Lindström measure, a combination of model efficiency and volume error, was used as the objective function to calibrate HBV Light. The ten best parameter sets are then obtained by calibrating with the genetic algorithm and Powell optimization (GAP) method. The GAP optimization method is based on the evolution of parameter sets, which works by selecting and recombining high-performing parameter sets with each other. Once HBV Light is calibrated, we perform a quantitative comparison of the influence of biases inherited from climate model simulations with the biases stemming from the hydrological model. The evaluation is conducted over two time periods: (i) 1980-2009, to characterize the simulation realism under the current climate, and (ii) 2070-2099, to identify the magnitude of the projected change of

  8. Corrective Action Decision Document/Corrective Action Plan for Corrective Action Unit 447: Project Shoal Area, Subsurface, Nevada, Rev. No.: 3 with Errata Sheet

    Energy Technology Data Exchange (ETDEWEB)

    Tim Echelard

    2006-03-01

    , the U.S. Department of Energy (DOE) determined that the degree of uncertainty in transport predictions for PSA remained unacceptably large. As a result, a second CAIP was developed by DOE and approved by the Nevada Division of Environmental Protection (NDEP) in December 1998 (DOE/NV, 1998a). This plan prescribed a rigorous analysis of uncertainty in the Shoal model and quantification of methods of reducing uncertainty through data collection. This analysis is termed a Data Decision Analysis (Pohll et al., 1999a) and formed the basis for a second major characterization effort at PSA (Pohll et al., 1999b). The details for this second field effort are presented in an Addendum to the CAIP, which was approved by NDEP in April 1999 (DOE/NV, 1999a). Four additional characterization wells were drilled at PSA during summer and fall of 1999; details of the drilling and well installation are in IT Corporation (2000), with testing reported in Mihevc et al. (2000). A key component of the second field program was a tracer test between two of the new wells (Carroll et al., 2000; Reimus et al., 2003). Based on the potential exposure pathways, two corrective action objectives were identified for CAU 447: Prevent or mitigate exposure to groundwater contaminants of concern at concentrations exceeding regulatory maximum contaminant levels or risk-based levels; and Reduce the risk to human health and the environment to the extent practicable. Based on the review of existing data, the results of the modeling, future use, and current operations at PSA, the following alternatives have been developed for consideration at CAU 447: Alternative 1--No Further Action; Alternative 2--Proof-of-Concept and Monitoring with Institutional Controls; and Alternative 3--Contaminant Control. The corrective action alternatives were evaluated based on the approach outlined in the "Focused Evaluation of Selected Remedial Alternatives for the Underground Test Area" (DOE/NV, 1998b). Each

  9. Constraint based modeling of metabolism allows finding metabolic cancer hallmarks and identifying personalized therapeutic windows.

    Science.gov (United States)

    Bordel, Sergio

    2018-04-13

    In order to choose optimal personalized anticancer treatments, transcriptomic data should be analyzed within the frame of biological networks. The best-known human biological network (in terms of the interactions between its different components) is metabolism. Cancer cells have long been known to have specific metabolic features, and currently there is growing interest in characterizing new cancer-specific metabolic hallmarks. This article presents a method to find personalized therapeutic windows using RNA-seq data and genome-scale metabolic models. The method is implemented in the Python library pyTARG. Our predictions showed that the most selectively anticancer single metabolic reactions (affecting 27 out of 34 considered cancer cell lines and only 1 out of 6 healthy mesenchymal stem cell lines) are those involved in cholesterol biosynthesis. Excluding cholesterol biosynthesis, all the considered cell lines can be selectively affected by targeting different combinations (from 1 to 5 reactions) of only 18 metabolic reactions, which suggests that a small subset of drugs or siRNAs combined in patient-specific manners could be at the core of metabolism-based personalized treatments.
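    The kind of reaction-knockout screen described can be sketched with the cobrapy library and its small bundled E. coli "textbook" model, as stand-ins for the human genome-scale models and the pyTARG calls used in the paper; the knocked-out reaction is chosen only for illustration.

```python
from cobra.io import load_model   # pip install cobra (recent versions)

model = load_model("textbook")    # small bundled E. coli demo model
baseline = model.optimize().objective_value     # maximal growth by FBA

with model:                       # changes inside the block revert on exit
    model.reactions.get_by_id("PGI").knock_out()
    knocked = model.optimize().objective_value

print(f"growth: {baseline:.3f} -> {knocked:.3f} after PGI knockout")
```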

  10. NLO QCD Corrections to Drell-Yan in TeV-scale Gravity Models

    International Nuclear Information System (INIS)

    Mathews, Prakash; Ravindran, V.

    2006-01-01

    In TeV scale gravity models, we present the NLO-QCD corrections for the double differential cross sections in the scattering angle for dilepton production at hadron colliders. The quantitative impact of QCD corrections for extra dimension searches at LHC and Tevatron are investigated for both ADD and RS models through K-factors. We also show how the inclusion of QCD corrections to NLO stabilises the cross section with respect to renormalisation and factorisation scale variations

  11. Design Considerations in Therapeutic Exergaming

    OpenAIRE

    Doyle, Julie; Kelly, Daniel; Caulfield, B.

    2011-01-01

    In this paper we discuss the importance of feedback in therapeutic exergaming. It is widely believed that exergaming benefits the patient by encouraging adherence and boosting the patient's confidence in correct execution, and feedback is essential to achieving both. However, feedback, and in particular visual feedback, may also have potential negative effects on the quality of the exercise. We describe in this paper a prototype single-sensor therapeutic exergame that we have develope...

  12. Repeat-aware modeling and correction of short read errors.

    Science.gov (United States)

    Yang, Xiao; Aluru, Srinivas; Dorman, Karin S

    2011-02-15

    High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep-coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications, including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of k-mers in reads and validating those with frequencies exceeding a threshold. In the case of genomes with high repeat content, an erroneous k-mer may be frequently observed if it has few nucleotide differences from valid k-mers with multiple occurrences in the genome. Error detection and correction have mostly been applied to genomes with low repeat content, and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of k-mers from their observed frequencies by analyzing the misread relationships among observed k-mers. We also propose a method to estimate the threshold useful for validating k-mers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under the GNU GPL3 license and the Boost Software V1.0 license at http://aluru-sun.ece.iastate.edu/doku.php?id=redeem. We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors
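    The counting-and-threshold baseline that the paper builds on can be sketched in a few lines; the published method goes much further, modelling misread relationships and repeat-induced multiplicities.

```python
from collections import Counter

def kmer_counts(reads, k):
    """Count every length-k substring across all reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

reads = ["ACGTACGTGG", "ACGTACGTGA", "ACGTACGAGG"]   # toy reads, one error
counts = kmer_counts(reads, k=5)
threshold = 2                        # k-mers seen fewer times are suspect
suspect = sorted(km for km, c in counts.items() if c < threshold)
print("k-mers flagged as possible errors:", suspect)
```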

  13. Modular correction method of bending elastic modulus based on sliding behavior of contact point

    International Nuclear Information System (INIS)

    Ma, Zhichao; Zhao, Hongwei; Zhang, Qixun; Liu, Changyi

    2015-01-01

    During the three-point bending test, sliding of the contact point between the specimen and the supports was observed; this sliding was verified to affect the measurements of both deflection and span length, which directly enter the calculation of the bending elastic modulus. Based on the Hertz formula for elastic contact deformation and a theoretical treatment of the sliding behavior of the contact point, a theoretical model was established to precisely describe the deflection and span length as functions of the bending load. Moreover, a modular correction method for the bending elastic modulus was proposed. Through comparison of the corrected elastic modulus of three materials (H63 copper–zinc alloy, AZ31B magnesium alloy and 2026 aluminum alloy) with the standard modulus obtained from standard uniaxial tensile tests, the general feasibility of the proposed correction method was verified. The ratio of corrected to raw elastic modulus also showed a monotonically decreasing trend as the raw elastic modulus of the material increased. (technical note)

  14. Correction of Measured Taxicab Exhaust Emission Data Based on the CMEM Model

    Science.gov (United States)

    Li, Q.; Jia, T.

    2017-09-01

    Carbon dioxide emissions from urban road traffic come mainly from automobile exhaust, but the carbon dioxide emissions recorded by measurement instruments are unreliable due to time-delay error. In order to improve the reliability of the data, we propose a method to correct measured vehicle carbon dioxide emissions based on the CMEM model. First, a synthetic time series of carbon dioxide emissions is simulated with the CMEM model and GPS velocity data. Then, taking the simulated data as the control group, the time-delay error of the measured carbon dioxide emissions is estimated by asynchronous correlation analysis, and outliers are automatically identified and corrected using the principle of the dynamic time warping (DTW) algorithm. Taking taxi trajectory data from Wuhan as an example, the results show that (1) the correlation coefficient between the measured data and the control group data can be improved from 0.52 to 0.59 by mitigating the systematic time-delay error, and (2) by adjusting the outliers, which account for 4.73% of the total data, the correlation coefficient rises to 0.63, which suggests a strong correlation. The construction of low-carbon traffic has become a focus of local government; in response to calls for energy saving and emission reduction, the distribution of carbon emissions from motor vehicle exhaust was studied, and the corrected data can be used for further air quality analysis.
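    The delay-estimation step can be sketched as a lag search that maximizes the correlation between the simulated and measured series; the signals below are synthetic stand-ins for CMEM output and instrument data.

```python
import numpy as np

rng = np.random.default_rng(3)
sim = rng.random(600)                          # CMEM-simulated CO2 series (1 Hz)
true_delay = 7                                 # instrument delay in samples
meas = np.concatenate([np.zeros(true_delay), sim[:-true_delay]])
meas = meas + rng.normal(0, 0.05, meas.size)   # delayed, noisy measurement

def best_lag(a, b, max_lag=30):
    """Lag of b relative to a that maximizes the Pearson correlation."""
    lags = list(range(-max_lag, max_lag + 1))
    corrs = [np.corrcoef(a[max(0, -l):len(a) - max(0, l)],
                         b[max(0, l):len(b) - max(0, -l)])[0, 1] for l in lags]
    return lags[int(np.argmax(corrs))]

print("estimated delay (samples):", best_lag(sim, meas))
```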

  15. Theoretical determination of gamma spectrometry systems efficiency based on probability functions. Application to self-attenuation correction factors

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, Manuel, E-mail: manuel.barrera@uca.es [Escuela Superior de Ingeniería, University of Cadiz, Avda, Universidad de Cadiz 10, 11519 Puerto Real, Cadiz (Spain); Suarez-Llorens, Alfonso [Facultad de Ciencias, University of Cadiz, Avda, Rep. Saharaui s/n, 11510 Puerto Real, Cadiz (Spain); Casas-Ruiz, Melquiades; Alonso, José J.; Vidal, Juan [CEIMAR, University of Cadiz, Avda, Rep. Saharaui s/n, 11510 Puerto Real, Cádiz (Spain)

    2017-05-11

    A generic theoretical methodology for calculating the efficiency of gamma spectrometry systems is introduced in this work. The procedure is valid for any type of source and detector and can be applied to determine the full-energy-peak and total efficiency of any source-detector system. The methodology is based on the idea of an underlying probability of detection, which describes the physical model for the detection of the gamma radiation in the particular situation studied. This probability depends explicitly on the direction of the gamma radiation, and this dependence allows the development of more realistic and complex models than the traditional models based on point-source integration. The probability function employed in practice must reproduce the relevant characteristics of the detection process occurring in the particular situation studied. Once the probability is defined, the efficiency calculations can in general be performed using numerical methods; Monte Carlo integration is especially useful when complex probability functions are used. The methodology can be used for the direct determination of the efficiency and also for the calculation of corrections that require this determination, as is the case for coincidence-summing, geometric or self-attenuation corrections. In particular, we have applied the procedure to obtain some of the classical self-attenuation correction factors usually employed to correct for sample attenuation in cylindrical-geometry sources. The methodology clarifies the theoretical basis and approximations associated with each factor by making explicit the probability that is generally hidden and implicit in each model. It has been shown that most of these self-attenuation correction factors can be derived by using a common underlying probability, with this probability having a growing level of complexity as it reproduces more precisely
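    The Monte Carlo integration step can be sketched as follows: the efficiency is the mean of an underlying direction-dependent detection probability over sampled emission directions. The toy probability used here (a cone of acceptance with an exponential self-attenuation term) is illustrative, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, thickness = 0.12, 2.0                 # 1/cm attenuation; cm of source material
cos_t = rng.uniform(-1.0, 1.0, 200_000)   # isotropic emission directions

def detection_probability(cos_theta):
    """Toy direction-dependent probability: cone acceptance x self-attenuation."""
    inside = cos_theta > 0.95             # within the solid angle seen by the detector
    path = np.where(cos_theta > 0,
                    thickness / np.maximum(cos_theta, 1e-9), np.inf)
    return inside * np.exp(-mu * path)    # survive attenuation, then hit detector

efficiency = detection_probability(cos_t).mean()
print(f"Monte Carlo efficiency estimate: {efficiency:.4e}")
```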

  16. Toward Exosome-Based Therapeutics: Isolation, Heterogeneity, and Fit-for-Purpose Potency

    Directory of Open Access Journals (Sweden)

    Gareth R. Willis

    2017-10-01

    Full Text Available Exosomes are defined as submicron (30–150 nm), lipid bilayer-enclosed extracellular vesicles (EVs), specifically generated by the late endosomal compartment through fusion of multivesicular bodies with the plasma membrane. Produced by almost all cells, exosomes were originally considered to represent just a mechanism for jettisoning unwanted cellular moieties. Although this may be a major function in most cells, evolution has recruited the endosomal membrane-sorting pathway to duties beyond mere garbage disposal, one of the most notable examples being its cooption by retroviruses for the generation of Trojan virions. It is, therefore, tempting to speculate that certain cell types have evolved an exosome subclass active in intracellular communication. We term this EV subclass "signalosomes" and define them as exosomes that are produced by the "signaling" cells upon specific physiological or environmental cues and harbor cargo capable of modulating the programming of recipient cells. Our recent studies have established that signalosomes released by mesenchymal stem/stromal cells (MSCs) represent the main vector of MSC immunomodulation and therapeutic action in animal models of lung disease. The efficacy of MSC-exosome treatments in a number of preclinical models of cardiovascular and pulmonary disease supports the promise of application of exosome-based therapeutics across a wide range of pathologies within the near future. However, the full realization of exosome therapeutic potential has been hampered by the absence of standardization in EV isolation, and procedures for purification of signalosomes from the main exosome population. This is mainly due to immature methodologies for exosome isolation and characterization and our incomplete understanding of the specific characteristics and molecular composition of signalosomes. In addition, difficulties in defining metrics for potency of exosome preparations and the challenges of industrial

  17. The evolution of process-based hydrologic models

    NARCIS (Netherlands)

    Clark, Martyn P.; Bierkens, Marc F.P.; Samaniego, Luis; Woods, Ross A.; Uijlenhoet, Remko; Bennett, Katrina E.; Pauwels, Valentijn R.N.; Cai, Xitian; Wood, Andrew W.; Peters-Lidard, Christa D.

    2017-01-01

    The diversity in hydrologic models has historically led to great controversy on the "correct" approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this

  18. TU-G-210-02: TRANS-FUSIMO - An Integrative Approach to Model-Based Treatment Planning of Liver FUS

    Energy Technology Data Exchange (ETDEWEB)

    Preusser, T. [Fraunhofer MEVIS & Jacobs University (Germany)

    2015-06-15

    Modeling can play a vital role in predicting, optimizing and analyzing the results of therapeutic ultrasound treatments. Simulating the propagating acoustic beam in various targeted regions of the body allows for the prediction of the resulting power deposition and temperature profiles. In this session we will apply various modeling approaches to breast, abdominal organ and brain treatments. Of particular interest is the effectiveness of procedures for correcting for phase aberrations caused by intervening irregular tissues, such as the skull in transcranial applications or inhomogeneous breast tissues. Also described are methods to compensate for motion in targeted abdominal organs such as the liver or kidney. Douglas Christensen – Modeling for Breast and Brain HIFU Treatment Planning Tobias Preusser – TRANS-FUSIMO – An Integrative Approach to Model-Based Treatment Planning of Liver FUS Learning Objectives: Understand the role of acoustic beam modeling for predicting the effectiveness of therapeutic ultrasound treatments. Apply acoustic modeling to specific breast, liver, kidney and transcranial anatomies. Determine how to obtain appropriate acoustic modeling parameters from clinical images. Understand the separate role of absorption and scattering in energy delivery to tissues. See how organ motion can be compensated for in ultrasound therapies. Compare simulated data with clinical temperature measurements in transcranial applications. Supported by NIH R01 HL172787 and R01 EB013433 (DC); EU Seventh Framework Programme (FP7/2007-2013) under 270186 (FUSIMO) and 611889 (TRANS-FUSIMO)(TP); and P01 CA159992, GE, FUSF and InSightec (UV)

  19. TU-G-210-02: TRANS-FUSIMO - An Integrative Approach to Model-Based Treatment Planning of Liver FUS

    International Nuclear Information System (INIS)

    Preusser, T.

    2015-01-01

    Modeling can play a vital role in predicting, optimizing and analyzing the results of therapeutic ultrasound treatments. Simulating the propagating acoustic beam in various targeted regions of the body allows for the prediction of the resulting power deposition and temperature profiles. In this session we will apply various modeling approaches to breast, abdominal organ and brain treatments. Of particular interest is the effectiveness of procedures for correcting for phase aberrations caused by intervening irregular tissues, such as the skull in transcranial applications or inhomogeneous breast tissues. Also described are methods to compensate for motion in targeted abdominal organs such as the liver or kidney. Douglas Christensen – Modeling for Breast and Brain HIFU Treatment Planning Tobias Preusser – TRANS-FUSIMO – An Integrative Approach to Model-Based Treatment Planning of Liver FUS Learning Objectives: Understand the role of acoustic beam modeling for predicting the effectiveness of therapeutic ultrasound treatments. Apply acoustic modeling to specific breast, liver, kidney and transcranial anatomies. Determine how to obtain appropriate acoustic modeling parameters from clinical images. Understand the separate role of absorption and scattering in energy delivery to tissues. See how organ motion can be compensated for in ultrasound therapies. Compare simulated data with clinical temperature measurements in transcranial applications. Supported by NIH R01 HL172787 and R01 EB013433 (DC); EU Seventh Framework Programme (FP7/2007-2013) under 270186 (FUSIMO) and 611889 (TRANS-FUSIMO)(TP); and P01 CA159992, GE, FUSF and InSightec (UV)

  20. Targeting therapeutics to the glomerulus with nanoparticles.

    Science.gov (United States)

    Zuckerman, Jonathan E; Davis, Mark E

    2013-11-01

    Nanoparticles are an enabling technology for the creation of tissue-/cell-specific therapeutics and have been investigated extensively as targeted therapeutics for cancer. The kidney, specifically the glomerulus, is another accessible site for nanoparticle delivery that has been relatively overlooked as a target organ. Given the medical need for more potent, kidney-targeted therapies, nanoparticle-based therapeutics may be one solution to this problem. Here, we review the literature on nanoparticle targeting of the glomerulus. Specifically, we provide a broad overview of nanoparticle-based therapeutics and how the unique structural characteristics of the glomerulus allow for selective nanoparticle targeting of this area of the kidney. We then summarize literature examples of nanoparticle delivery to the glomerulus and elaborate on the appropriate nanoparticle design criteria for glomerular targeting. Finally, we discuss the behavior of nanoparticles in animal models of diseased glomeruli and review examples of nanoparticle therapeutic approaches that have shown promise in animal models of glomerulonephritic disease. Copyright © 2013 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.

  1. 19 CFR 142.50 - Line Release data base corrections or changes.

    Science.gov (United States)

    2010-04-01

    19 CFR Customs Duties, Vol. 2 (2010-04-01); DEPARTMENT OF THE TREASURY (CONTINUED); ENTRY PROCESS; Line Release; § 142.50 Line Release data base corrections or changes: "... numbers or bond information on a Line Release Data Loading Sheet as soon as possible. Notification shall..."

  2. Testing effort dependent software reliability model for imperfect debugging process considering both detection and correction

    International Nuclear Information System (INIS)

    Peng, R.; Li, Y.F.; Zhang, W.J.; Hu, Q.P.

    2014-01-01

    This paper studies the fault detection process (FDP) and fault correction process (FCP) with the incorporation of testing effort function and imperfect debugging. In order to ensure high reliability, it is essential for software to undergo a testing phase, during which faults can be detected and corrected by debuggers. The testing resource allocation during this phase, which is usually depicted by the testing effort function, considerably influences not only the fault detection rate but also the time to correct a detected fault. In addition, testing is usually far from perfect such that new faults may be introduced. In this paper, we first show how to incorporate testing effort function and fault introduction into FDP and then develop FCP as delayed FDP with a correction effort. Various specific paired FDP and FCP models are obtained based on different assumptions of fault introduction and correction effort. An illustrative example is presented. The optimal release policy under different criteria is also discussed
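
    As a sketch of one common model pairing of this kind (a hedged illustration, not necessarily the exact formulation of the paper), the mean value functions of a testing-effort-dependent FDP and a delayed FCP can be written as:

```latex
m_d(t) = a\left(1 - e^{-b\,W(t)}\right), \qquad m_c(t) = m_d(t - \Delta)
```

    where W(t) is the cumulative testing effort, a the expected initial fault content, b the detection rate per unit effort, and Δ the correction delay; imperfect debugging can then be modeled by letting the fault content grow with corrections, e.g. a(t) = a_0 + α·m_c(t).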

  3. Evaluation of NWP-based Satellite Precipitation Error Correction with Near-Real-Time Model Products and Flood-inducing Storms

    Science.gov (United States)

    Zhang, X.; Anagnostou, E. N.; Schwartz, C. S.

    2017-12-01

    Satellite precipitation products tend to have significant biases over complex terrain. Our research investigates a statistical approach to satellite precipitation adjustment based solely on numerical weather simulations. This approach has been evaluated in two mid-latitude (Zhang et al. 2013*1, Zhang et al. 2016*2) and three tropical mountainous regions by using the WRF model to adjust two high-resolution satellite products: (i) the National Oceanic and Atmospheric Administration (NOAA) Climate Prediction Center morphing technique (CMORPH) and (ii) the Global Satellite Mapping of Precipitation (GSMaP). Results show the adjustment effectively reduces the satellite underestimation of high rain rates, which provides a solid proof-of-concept for continuing research on NWP-based satellite correction. In this study we investigate the feasibility of using NCAR Real-time Ensemble Forecasts*3 for adjusting near-real-time satellite precipitation datasets over complex terrain areas of the Continental United States (CONUS) such as the Olympic Peninsula, the California coastal mountain ranges, the Rocky Mountains and the southern Appalachians. The research will focus on flood-inducing storms that occurred from May 2015 to December 2016 and four satellite precipitation products (CMORPH, GSMaP, PERSIANN-CCS and IMERG). The error correction performance evaluation will be based on comparisons against the gauge-adjusted Stage IV precipitation data. *1 Zhang, Xinxuan, et al. "Using NWP simulations in satellite rainfall estimation of heavy precipitation events over mountainous areas." Journal of Hydrometeorology 14.6 (2013): 1844-1858. *2 Zhang, Xinxuan, et al. "Hydrologic Evaluation of NWP-Adjusted CMORPH Estimates of Hurricane-Induced Precipitation in the Southern Appalachians." Journal of Hydrometeorology 17.4 (2016): 1087-1099. *3 Schwartz, Craig S., et al. "NCAR's experimental real-time convection-allowing ensemble prediction system." Weather and Forecasting 30.6 (2015): 1645-1654.
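
    A simplified stand-in for the NWP-based adjustment idea (an assumption for illustration, not the error model of the cited papers) is to match the empirical distribution of satellite rain rates to that of the model-simulated rain rates:

```python
import numpy as np

def quantile_map(sat_rates, sat_train, nwp_train):
    """CDF-match satellite rain rates onto the NWP-simulated distribution.

    sat_train / nwp_train: collocated training samples of satellite and
    model rain rates; sat_rates: new satellite values to adjust.
    """
    sat_sorted = np.sort(sat_train)
    # Empirical percentile of each new satellite value in the training CDF
    ranks = np.searchsorted(sat_sorted, sat_rates, side="right") / sat_sorted.size
    ranks = np.clip(ranks, 0.0, 1.0)
    # Map that percentile onto the NWP-simulated distribution
    return np.quantile(np.sort(nwp_train), ranks)

# Example with synthetic rain rates (mm/h), purely illustrative
rng = np.random.default_rng(1)
adjusted = quantile_map(rng.gamma(2, 2, 100),
                        rng.gamma(2, 2, 5000),
                        rng.gamma(2, 3, 5000))
```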

  4. Bayesian based Diagnostic Model for Condition based Maintenance of Offshore Wind Farms

    DEFF Research Database (Denmark)

    Asgarpour, Masoud; Sørensen, John Dalsgaard

    2018-01-01

    Operation and maintenance costs are a major contributor to the Levelized Cost of Energy for electricity produced by offshore wind and can be significantly reduced if existing corrective actions are performed as efficiently as possible and if future corrective actions are avoided by performing sufficient preventive actions. This paper presents an applied and generic diagnostic model for fault detection and condition based maintenance of offshore wind components. The diagnostic model is based on two probabilistic matrices; first, a confidence matrix, representing the probability of detection using ... for a wind turbine component based on vibration, temperature, and oil particle fault detection methods. The last part of the paper discusses the case study results and presents conclusions.
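
    The role of a confidence (probability-of-detection) matrix can be sketched with a single Bayes update; all numbers below are hypothetical and purely illustrative, not values from the paper.

```python
import numpy as np

states = ["healthy", "degraded", "faulty"]
prior = np.array([0.90, 0.08, 0.02])     # assumed prior state probabilities
# One column of a hypothetical confidence matrix: P(vibration alarm | state)
p_alarm = np.array([0.05, 0.40, 0.95])

def bayes_update(prior, likelihood):
    """Posterior state probabilities after an alarm is observed."""
    unnorm = prior * likelihood
    return unnorm / unnorm.sum()

posterior = bayes_update(prior, p_alarm)
for s, p in zip(states, posterior):
    print(f"P({s} | vibration alarm) = {p:.3f}")
```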

  5. Multifactorial causal model of brain (dis)organization and therapeutic intervention: Application to Alzheimer's disease.

    Science.gov (United States)

    Iturria-Medina, Yasser; Carbonell, Félix M; Sotero, Roberto C; Chouinard-Decorte, Francois; Evans, Alan C

    2017-05-15

    Generative models focused on multifactorial causal mechanisms in brain disorders are scarce and generally based on limited data. Despite the biological importance of the multiple interacting processes, their effects remain poorly characterized from an integrative analytic perspective. Here, we propose a spatiotemporal multifactorial causal model (MCM) of brain (dis)organization and therapeutic intervention that accounts for local causal interactions, effects propagation via physical brain networks, cognitive alterations, and identification of optimum therapeutic interventions. In this article, we focus on describing the model and applying it at the population-based level for studying late-onset Alzheimer's disease (LOAD). By interrelating six different neuroimaging modalities and cognitive measurements, this model accurately predicts spatiotemporal alterations in brain amyloid-β (Aβ) burden, glucose metabolism, vascular flow, resting state functional activity, structural properties, and cognitive integrity. The results suggest that a vascular dysregulation may be the most likely initial pathologic event leading to LOAD. Nevertheless, they also suggest that LOAD is not caused by a unique dominant biological factor (e.g. vascular or Aβ) but by the complex interplay among multiple relevant direct interactions. Furthermore, using theoretical control analysis of the identified population-based multifactorial causal network, we show the crucial advantage of using combinatorial over single-target treatments, explain why one-target Aβ-based therapies might fail to improve clinical outcomes, and propose an efficiency ranking of possible LOAD interventions. Although still requiring further validation at the individual level, this work presents the first analytic framework for dynamic multifactorial brain (dis)organization that may both explain the pathologic evolution of progressive neurological disorders and operationalize the influence of multiple interventional

  6. A new digitized reverse correction method for hypoid gears based on a one-dimensional probe

    Science.gov (United States)

    Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo

    2017-12-01

    In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on the measurement data from a one-dimensional probe. The minimization of tooth surface geometric deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles with the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretical designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed by combining numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are demonstrated through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.
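
    The iterative reverse solution can be sketched as a small Gauss-Newton loop; `surface_model` is a hypothetical stand-in for the mapping from machine-tool settings to predicted tooth-surface deviations at the measured grid points, and all details beyond the abstract are assumptions.

```python
import numpy as np

def reverse_correct(settings0, measured_dev, surface_model,
                    eps=1e-6, max_iter=10, tol=1e-9):
    """Solve for machine-tool settings that minimize tooth-surface deviations.

    surface_model(settings) -> predicted deviations on the measurement grid.
    Returns settings whose predicted surface matches the measurement; the
    correction to apply is the difference from the nominal settings.
    """
    x = np.array(settings0, dtype=float)
    for _ in range(max_iter):
        r = measured_dev - surface_model(x)           # remaining deviation
        J = np.empty((r.size, x.size))                # finite-difference Jacobian
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (surface_model(x + dx) - surface_model(x)) / eps
        step, *_ = np.linalg.lstsq(J, r, rcond=None)  # least-squares update
        x += step
        if np.linalg.norm(step) < tol:
            break
    return x
```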

  7. Toxin-Based Therapeutic Approaches

    Directory of Open Access Journals (Sweden)

    Itai Benhar

    2010-10-01

    Full Text Available Protein toxins confer a defense against predation/grazing or a superior pathogenic competence upon the producing organism. Such toxins have been perfected through evolution in poisonous animals/plants and pathogenic bacteria. Over the past five decades, considerable effort has been invested in studying their mechanism of action, the way they contribute to pathogenicity, and the development of antidotes that neutralize their action. In parallel, many research groups have turned to exploring the pharmaceutical potential of such toxins when they are used to efficiently impair essential cellular processes and/or damage the integrity of their target cells. The following review summarizes major advances in the field of toxin-based therapeutics and offers a comprehensive description of the mode of action of each applied toxin.

  8. Quantum spin correction scheme based on spin-correlation functional for Kohn-Sham spin density functional theory

    International Nuclear Information System (INIS)

    Yamanaka, Shusuke; Takeda, Ryo; Nakata, Kazuto; Takada, Toshikazu; Shoji, Mitsuo; Kitagawa, Yasutaka; Yamaguchi, Kizashi

    2007-01-01

    We present a simple quantum correction scheme for ab initio Kohn-Sham spin density functional theory (KS-SDFT). This scheme is based on a mapping from ab initio results to a Heisenberg model Hamiltonian. The effective exchange integral is estimated by using energies and spin correlation functionals calculated by ab initio KS-SDFT. The quantum-corrected spin-correlation functional can be designed to cover specific quantum spin fluctuations. In this article, we present a simple correction for dinuclear compounds having multiple bonds. The computational results are discussed in relation to multireference (MR) DFT, by which we treat the quantum many-body effects explicitly
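
    One widely used estimator of this kind (the approximate spin-projection formula associated with Yamaguchi and co-workers, shown here for orientation rather than as the paper's exact scheme) maps broken-symmetry KS-SDFT energies onto the Heisenberg Hamiltonian H = -2J·S_a·S_b via:

```latex
J_{ab} = \frac{E_{\mathrm{BS}} - E_{\mathrm{HS}}}
              {\langle \hat{S}^{2} \rangle_{\mathrm{HS}} - \langle \hat{S}^{2} \rangle_{\mathrm{BS}}}
```

    where HS and BS denote the high-spin and broken-symmetry solutions and the ⟨S²⟩ values are the spin expectation values computed from the corresponding KS-SDFT determinants.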

  9. Self-corrected chip-based dual-comb spectrometer.

    Science.gov (United States)

    Hébert, Nicolas Bourbeau; Genest, Jérôme; Deschênes, Jean-Daniel; Bergeron, Hugo; Chen, George Y; Khurmi, Champak; Lancaster, David G

    2017-04-03

    We present a dual-comb spectrometer based on two passively mode-locked waveguide lasers integrated in a single Er-doped ZBLAN chip. This original design yields two free-running frequency combs having a high level of mutual stability. In parallel, we developed a self-correction algorithm that compensates residual relative fluctuations and yields mode-resolved spectra without the help of any reference laser or control system. Fluctuations are extracted directly from the interferograms using the concept of the ambiguity function, which leads to a significant simplification of the instrument that will greatly ease its widespread adoption and commercial deployment. Comparison with a correction algorithm relying on a single-frequency laser indicates discrepancies of only 50 attoseconds in optical timings. The capabilities of this instrument are finally demonstrated with the acquisition of a high-resolution molecular spectrum covering 20 nm. This new chip-based multi-laser platform is ideal for the development of high-repetition-rate, compact and fieldable comb spectrometers in the near- and mid-infrared.

  10. Beam-Based Error Identification and Correction Methods for Particle Accelerators

    CERN Document Server

    AUTHOR|(SzGeCERN)692826; Tomas, Rogelio; Nilsson, Thomas

    2014-06-10

    Modern particle accelerators have tight tolerances on the acceptable deviation from their desired machine parameters. The control of the parameters is of crucial importance for safe machine operation and performance. This thesis focuses on beam-based methods and algorithms to identify and correct errors in particle accelerators. The optics measurements and corrections of the Large Hadron Collider (LHC), which resulted in an unprecedentedly low β-beat for a hadron collider, are described. The transverse coupling is another parameter which is of importance to control. Improvement in the reconstruction of the coupling from turn-by-turn data has resulted in a significant decrease of the measurement uncertainty. An automatic coupling correction method, which is based on the injected beam oscillations, has been successfully used in normal operation of the LHC. Furthermore, a new method to measure and correct chromatic coupling that was applied to the LHC is described. It resulted in a decrease of the chromatic coupli...

  11. An alternative ionospheric correction model for global navigation satellite systems

    Science.gov (United States)

    Hoque, M. M.; Jakowski, N.

    2015-04-01

    The ionosphere is recognized as a major error source for single-frequency operations of global navigation satellite systems (GNSS). To enhance single-frequency operations the global positioning system (GPS) uses an ionospheric correction algorithm (ICA) driven by 8 coefficients broadcast in the navigation message every 24 h. Similarly, the global navigation satellite system Galileo uses the electron density NeQuick model for ionospheric correction. The Galileo satellite vehicles (SVs) transmit 3 ionospheric correction coefficients as driver parameters of the NeQuick model. In the present work, we propose an alternative ionospheric correction algorithm called the Neustrelitz TEC broadcast model NTCM-BC that is also applicable for global satellite navigation systems. Like the GPS ICA or Galileo NeQuick, the NTCM-BC can be optimized on a daily basis by utilizing GNSS data obtained at monitor stations on the previous day. To drive the NTCM-BC, 9 ionospheric correction coefficients need to be uploaded to the SVs for broadcasting in the navigation message. Our investigation using GPS data from about 200 worldwide ground stations shows that the 24-h-ahead prediction performance of the NTCM-BC is better than that of the GPS ICA and comparable to that of the Galileo NeQuick model. We have found that the 95th percentiles of the prediction error are about 16.1, 16.1 and 13.4 TECU for the GPS ICA, Galileo NeQuick and NTCM-BC, respectively, during a selected quiet ionospheric period, whereas the corresponding numbers are about 40.5, 28.2 and 26.5 TECU during a selected geomagnetically perturbed period. However, in terms of complexity the NTCM-BC is easier to handle than the Galileo NeQuick and in this respect comparable to the GPS ICA.

  12. A Conceptually Simple Modeling Approach for Jason-1 Sea State Bias Correction Based on 3 Parameters Exclusively Derived from Altimetric Information

    Directory of Open Access Journals (Sweden)

    Nelson Pires

    2016-07-01

    Full Text Available A conceptually simple formulation is proposed for a new empirical sea state bias (SSB) model using information retrieved entirely from altimetric data. Nonparametric regression techniques are used, based on penalized smoothing splines adjusted to each predictor and then combined by a Generalized Additive Model. In addition to the significant wave height (SWH) and wind speed (U10), a mediator parameter, the mean wave period derived from radar altimetry, has proven to improve the model performance in explaining some of the SSB variability, especially in swell ocean regions with medium-high SWH and low U10. A collinear analysis of scaled sea level anomaly (SLA) variance differences shows conformity between the proposed model and the established SSB models. The new formulation aims to be a fast, reliable and flexible SSB model, in line with the well-settled SSB corrections, depending exclusively on altimetric information. The suggested method is computationally efficient and capable of generating a stable model with a small training dataset, a useful feature for forthcoming missions.
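
    In the additive structure described above, the correction takes the form below (a schematic rendering of the model, with the smooth terms f_i estimated as penalized smoothing splines; the notation is assumed for illustration):

```latex
\mathrm{SSB}(\mathrm{SWH},\, U_{10},\, \mathrm{MWP})
  = \alpha + f_{1}(\mathrm{SWH}) + f_{2}(U_{10}) + f_{3}(\mathrm{MWP})
```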

  13. A golden A5 model of leptons with a minimal NLO correction

    International Nuclear Information System (INIS)

    Cooper, Iain K.; King, Stephen F.; Stuart, Alexander J.

    2013-01-01

    We propose a new A5 model of leptons which corrects the LO predictions of Golden Ratio mixing via a minimal NLO Majorana mass correction which completely breaks the original Klein symmetry of the neutrino mass matrix. The minimal nature of the NLO correction leads to a restricted and correlated range of the mixing angles allowing agreement within the one sigma range of recent global fits following the reactor angle measurement by Daya Bay and RENO. The minimal NLO correction also preserves the LO inverse neutrino mass sum rule leading to a neutrino mass spectrum that extends into the quasi-degenerate region allowing the model to be accessible to the current and future neutrinoless double beta decay experiments
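
    For reference, golden ratio mixing fixes the LO solar angle as (a standard relation for this class of models, shown here for orientation):

```latex
\tan\theta_{12} = \frac{1}{\varphi}, \qquad
\varphi = \frac{1+\sqrt{5}}{2} \approx 1.618
\;\;\Rightarrow\;\; \theta_{12} \approx 31.7^{\circ}
```

    with the minimal NLO correction perturbing this prediction into the experimentally allowed region.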

  14. The Impact of CRISPR/Cas9 Technology on Cardiac Research: From Disease Modelling to Therapeutic Approaches

    Science.gov (United States)

    Pramstaller, Peter P.; Hicks, Andrew A.; Rossini, Alessandra

    2017-01-01

    Genome-editing technology has emerged as a powerful method that enables the generation of genetically modified cells and organisms necessary to elucidate gene function and the mechanisms of human diseases. The clustered regularly interspaced short palindromic repeats- (CRISPR-) associated 9 (Cas9) system has rapidly become one of the most popular approaches for genome editing in basic biomedical research over recent years because of its simplicity and adaptability. CRISPR/Cas9 genome editing has been used to correct DNA mutations ranging from a single base pair to large deletions in both in vitro and in vivo model systems. CRISPR/Cas9 has been used to increase the understanding of many aspects of cardiovascular disorders, including lipid metabolism, electrophysiology and genetic inheritance. The CRISPR/Cas9 technology has proven effective in creating gene knockouts (KO) or knockins in human cells and is particularly useful for editing induced pluripotent stem cells (iPSCs). Despite this progress, some biological, technical, and ethical issues are limiting the therapeutic potential of genome editing in cardiovascular diseases. This review will focus on various applications of CRISPR/Cas9 genome editing in the cardiovascular field, for both disease research and the prospect of in vivo genome-editing therapies in the future. PMID:29434642

  15. Quantum-corrected drift-diffusion models for transport in semiconductor devices

    International Nuclear Information System (INIS)

    De Falco, Carlo; Gatti, Emilio; Lacaita, Andrea L.; Sacco, Riccardo

    2005-01-01

    In this paper, we propose a unified framework for Quantum-corrected drift-diffusion (QCDD) models in nanoscale semiconductor device simulation. QCDD models are presented as a suitable generalization of the classical drift-diffusion (DD) system, each particular model being identified by the constitutive relation for the quantum-correction to the electric potential. We examine two special, and relevant, examples of QCDD models; the first one is the modified DD model named Schroedinger-Poisson-drift-diffusion, and the second one is the quantum-drift-diffusion (QDD) model. For the decoupled solution of the two models, we introduce a functional iteration technique that extends the classical Gummel algorithm widely used in the iterative solution of the DD system. We discuss the finite element discretization of the various differential subsystems, with special emphasis on their stability properties, and illustrate the performance of the proposed algorithms and models on the numerical simulation of nanoscale devices in two spatial dimensions
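
    In the QDD case, for example, the quantum correction to the electric potential is a Bohm-type term (shown here schematically; the precise prefactor is model-dependent and varies between QDD variants):

```latex
G[n] = -\frac{\hbar^{2}}{2m}\,\frac{\Delta\sqrt{n}}{\sqrt{n}}
```

    with the classical DD fluxes retained in form but driven by the corrected potential φ + G[n]/q, up to the sign and scaling conventions of the particular model.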

  16. Ionospheric correction for spaceborne single-frequency GPS based ...

    Indian Academy of Sciences (India)

    A modified ionospheric correction method and the corresponding approximate algorithm for spaceborne single-frequency Global Positioning System (GPS) users are proposed in this study. Single Layer Model (SLM) mapping function for spaceborne GPS was analyzed. SLM mapping functions at different altitudes were ...

  17. WE-DE-207B-12: Scatter Correction for Dedicated Cone Beam Breast CT Based On a Forward Projection Model

    Energy Technology Data Exchange (ETDEWEB)

    Shi, L; Zhu, L [Georgia Institute of Technology, Atlanta, GA (United States); Vedantham, S; Karellas, A [University of Massachusetts Medical School, Worcester, MA (United States)

    2016-06-15

    Purpose: The image quality of dedicated cone-beam breast CT (CBBCT) is fundamentally limited by substantial x-ray scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose to suppress x-ray scatter in CBBCT images using a deterministic forward projection model. Method: We first use the 1st-pass FDK-reconstructed CBBCT images to segment fibroglandular and adipose tissue. Attenuation coefficients are assigned to the two tissues based on the x-ray spectrum used for image acquisition, and the segmented volume is forward projected to simulate scatter-free primary projections. We estimate the scatter by subtracting the simulated primary projection from the measured projection, and the resultant scatter map is then further refined by a Fourier-domain fitting algorithm after discarding untrusted scatter information. The final scatter estimate is subtracted from the measured projection for effective scatter correction. In our implementation, the proposed scatter correction takes 0.5 seconds for each projection. The method was evaluated using the overall image spatial non-uniformity (SNU) metric and the contrast-to-noise ratio (CNR) with 5 clinical datasets of BI-RADS 4/5 subjects. Results: For the 5 clinical datasets, our method reduced the SNU from 7.79% to 1.68% in the coronal view and from 6.71% to 3.20% in the sagittal view. The average CNR is improved by a factor of 1.38 in the coronal view and 1.26 in the sagittal view. Conclusion: The proposed scatter correction approach requires no additional scans or prior images and uses a deterministic model for efficient calculation. Evaluation with clinical datasets demonstrates the feasibility and stability of the method. These features are attractive for clinical CBBCT and make our method distinct from other approaches. Supported partly by NIH R21EB019597, R21CA134128
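
    The per-view correction loop can be sketched as follows; `forward_project` and `lowpass_fit` are hypothetical stand-ins for the ray-tracer over the segmented volume and the Fourier-domain fitting step, respectively.

```python
import numpy as np

def correct_view(measured, segmented_volume, mu, forward_project, lowpass_fit):
    """Scatter-correct one CBBCT projection view.

    forward_project(volume, mu) -> simulated scatter-free primary view.
    lowpass_fit(img) -> smoothed scatter estimate (scatter is low-frequency).
    """
    primary = forward_project(segmented_volume, mu)  # deterministic primary model
    scatter = measured - primary                     # raw scatter estimate
    scatter = lowpass_fit(scatter)                   # refine, discard untrusted detail
    scatter = np.clip(scatter, 0.0, measured)        # keep the estimate physical
    return measured - scatter                        # corrected projection
```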

  18. Gauge threshold corrections for local string models

    International Nuclear Information System (INIS)

    Conlon, Joseph P.

    2009-01-01

    We study gauge threshold corrections for local brane models embedded in a large compact space. A large bulk volume gives important contributions to the Konishi and super-Weyl anomalies and the effective field theory analysis implies the unification scale should be enhanced in a model-independent way from M_s to RM_s. For local D3/D3 models this result is supported by the explicit string computations. In this case the scale RM_s comes from the necessity of global cancellation of RR tadpoles sourced by the local model. We also study D3/D7 models and discuss discrepancies with the effective field theory analysis. We comment on phenomenological implications for gauge coupling unification and for the GUT scale.

  19. Correction of count losses due to deadtime on a DST-XLi (SMVi-GE) camera during dosimetric studies in patients injected with iodine-131

    International Nuclear Information System (INIS)

    Delpon, G.; Ferrer, L.; Lisbona, A.; Bardies, M.

    2002-01-01

    In dosimetric studies performed after therapeutic injection, it is essential to correct count losses due to deadtime on the gamma camera. This note describes four deadtime correction methods: one based on the use of a standard source without preliminary calibration, and three requiring specific calibration and based on the count rate observed in different spectrometric windows (20%, 20% plus a lower energy window, and the full spectrum of 50-750 keV). Experiments were conducted on a phantom at increasing count rates to check the correction accuracy of the different methods. The error was less than +7% with a standard source, whereas the count-rate-based methods gave more accurate results. On the assumption that the model was paralysable, preliminary calibration allowed an observed count rate curve to be plotted as a function of the real count rate. The use of the full spectrum led to a 3.0% underestimation for the highest activity imaged. As count losses depend on the photon flux independently of energy, the use of the full spectrum during measurement allowed scatter conditions to be taken into account. A protocol was developed to apply this correction method to whole-body acquisitions. (author)
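
    For the paralyzable model mentioned above, the observed rate m relates to the true rate n as m = n·exp(-n·τ); the sketch below recovers n numerically (the deadtime and rates are illustrative values, not the paper's calibration data).

```python
import numpy as np
from scipy.optimize import brentq

def true_rate_paralyzable(m, tau):
    """Invert m = n * exp(-n * tau) for the true rate n (paralyzable model).

    The observed rate peaks at n = 1/tau with m_max = 1/(e*tau); we return
    the low-rate root, the physically relevant branch at clinical rates.
    """
    if m >= 1.0 / (np.e * tau):
        raise ValueError("observed rate exceeds the paralyzable-model maximum")
    return brentq(lambda n: n * np.exp(-n * tau) - m, 0.0, 1.0 / tau)

# Example: 50 kcps observed with a 5 microsecond deadtime
print(true_rate_paralyzable(5.0e4, 5.0e-6))   # true rate in counts/s
```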

  20. Gravity loop corrections to the standard model Higgs in Einstein gravity

    International Nuclear Information System (INIS)

    Yugo Abe; Masaatsu Horikoshi; Takeo Inami

    2016-01-01

    We study one-loop quantum gravity corrections to the standard model Higgs potential V(φ) à la Coleman-Weinberg and examine the stability question of V(φ) in the energy region of the Planck mass scale, μ ≃ M_Pl (M_Pl = 1.22x10^19 GeV). We calculate the gravity one-loop corrections to V(φ) in Einstein gravity by using the momentum cut-off Λ. We have found that even small gravity corrections compete with the standard model term of V(φ) and affect the stability argument based on the latter alone. This is because the standard model term is nearly zero in the energy region of M_Pl. (author)

  1. TLS FIELD DATA BASED INTENSITY CORRECTION FOR FOREST ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    J. Heinzel

    2016-06-01

    Full Text Available Terrestrial laser scanning (TLS) is increasingly used for forestry applications. Besides the three-dimensional point coordinates, the 'intensity' of the reflected signal plays an important role in forestry and vegetation studies. The benefit of the signal intensity is caused by the wavelength of the laser, which is within the near infrared (NIR) for most scanners. The NIR is highly indicative of various vegetation characteristics. However, the intensity as recorded by most terrestrial scanners is distorted by both external and scanner-specific factors. Since details about system-internal alteration of the signal are often unknown to the user, model-driven approaches are impractical. On the other hand, existing data-driven calibration procedures require laborious acquisition of separate reference datasets or areas of homogeneous reflection characteristics from the field data. In order to fill this gap, the present study introduces an approach to correct unwanted intensity variations directly from the point cloud of the field data. The focus is on the variation over range and sensor-specific distortions. Instead of an absolute calibration of the values, a relative correction within the dataset is sufficient for most forestry applications. Finally, a method similar to time series detrending is presented, with the only pre-condition being a relatively equal distribution of forest objects and materials over range. Our test data cover 50 terrestrial scans captured with a FARO Focus 3D S120 scanner using a laser wavelength of 905 nm. Practical tests demonstrate that our correction method removes range- and scanner-based alterations of the intensity.
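
    A minimal version of such range detrending (an illustrative sketch under the stated equal-distribution assumption, not the exact procedure of the study) could look like:

```python
import numpy as np

def detrend_intensity(intensity, rng_m, n_bins=100):
    """Relative range-correction of TLS intensities.

    Assumes forest materials are on average equally distributed over range,
    so the per-bin mean intensity is a range trend that can be divided out.
    """
    edges = np.linspace(rng_m.min(), rng_m.max(), n_bins + 1)
    idx = np.clip(np.digitize(rng_m, edges) - 1, 0, n_bins - 1)
    trend = np.full(n_bins, np.nan)
    for b in range(n_bins):
        sel = idx == b
        if sel.any():
            trend[b] = intensity[sel].mean()
    # Interpolate across empty bins so every return gets a trend value
    valid = np.flatnonzero(~np.isnan(trend))
    trend = np.interp(np.arange(n_bins), valid, trend[valid])
    corrected = intensity / trend[idx]      # dimensionless relative intensity
    return corrected * intensity.mean()     # rescale to the original level
```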

  2. How to simplify transmission-based scatter correction for clinical application

    International Nuclear Information System (INIS)

    Baccarne, V.; Hutton, B.F.

    1998-01-01

    Full text: The performance of ordered subsets (OS) EM reconstruction including attenuation, scatter and spatial resolution correction is evaluated using cardiac Monte Carlo data. We demonstrate how simplifications in the scatter model allow one to correct SPECT data for scatter, in terms of quantitation and quality, in a reasonable time. Initial reconstruction of the 20% window is performed including attenuation correction (broad beam μ values) to estimate the activity quantitatively (accuracy 3%), but not spatially. A rough reconstruction with 2 iterations (subset size: 8) is sufficient for subsequent scatter correction. Estimation of primary photons is obtained by projecting the previous distribution including attenuation (narrow beam μ values). Estimation of the scatter is obtained by convolving the primary estimates with a depth-dependent scatter kernel and scaling the result by a factor calculated from the attenuation map. The correction can be accelerated by convolving several adjacent planes with the same kernel and using an average scaling factor. Simulation of the effects of the collimator during the scatter correction was demonstrated to be unnecessary. Final reconstruction is performed using 6 iterations of OSEM, including attenuation (narrow beam μ values) and spatial resolution correction. Scatter correction is implemented by incorporating the estimated scatter as a constant offset in the forward projection step. The total correction + reconstruction (64 proj., 40x128 pixels) takes 38 minutes on a Sun Sparc 20. Quantitatively, the accuracy is 7% in a reconstructed slice. The SNR inside the whole myocardium (defined from the original object) is equal to 2.1 and 2.3 in the corrected and primary slices, respectively. The scatter correction preserves the myocardium-to-ventricle contrast (primary: 0.79, corrected: 0.82). These simplifications allow acceleration of the correction without influencing the quality of the result

  3. [Therapeutic strategy for different types of epicanthus].

    Science.gov (United States)

    Gaofeng, Li; Jun, Tan; Zihan, Wu; Wei, Ding; Huawei, Ouyang; Fan, Zhang; Mingcan, Luo

    2015-11-01

    To explore a reasonable therapeutic strategy for different types of epicanthus. Patients with epicanthus were classified according to the shape, extent and inner canthal distance and treated appropriately with different methods. A modified asymmetric Z-plasty with two-curve method was used for lower eyelid type epicanthus, inner canthus type epicanthus and severe upper eyelid type epicanthus. Moderate upper eyelid epicanthus underwent the '-' shape method. Mild upper eyelid epicanthus required no corrective surgery in two situations: when nasal augmentation was performed and when double eyelid formation was performed with a normal inner canthal distance. All other mild epicanthus underwent the '-' shape method. A total of 66 cases underwent the classification and the appropriate treatment. All wounds healed well. During the 3- to 12-month follow-up period, all epicanthus were corrected completely, with natural contours and inconspicuous scars. All patients were satisfied with the results. Classification of epicanthus based on the shape, extent and inner canthal distance, and correction with appropriate methods, is a reasonable therapeutic strategy.

  4. Effect of an Ergonomics-Based Educational Intervention Based on Transtheoretical Model in Adopting Correct Body Posture Among Operating Room Nurses

    OpenAIRE

    Moazzami, Zeinab; Dehdari, Tahere; Taghdisi, Mohammad Hosein; Soltanian, Alireza

    2015-01-01

    Background: One of the preventive strategies for chronic low back pain among operating room nurses is instructing proper body mechanics and postural behavior, for which the use of the Transtheoretical Model (TTM) has been recommended. Methods: Eighty-two nurses who were in the contemplation and preparation stages for adopting correct body posture were randomly selected (control group = 40, intervention group = 42). TTM variables and body posture were measured at baseline and again after 1 and...

  5. A Physical Model-based Correction for Charge Traps in the Hubble Space Telescope ’s Wide Field Camera 3 Near-IR Detector and Its Applications to Transiting Exoplanets and Brown Dwarfs

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Yifan; Apai, Dániel; Schneider, Glenn [Department of Astronomy/Steward Observatory, The University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721 (United States); Lew, Ben W. P., E-mail: yzhou@as.arizona.edu [Department of Planetary Science/Lunar and Planetary Laboratory, The University of Arizona, 1640 E. University Boulevard, Tucson, AZ 85718 (United States)

    2017-06-01

    The Hubble Space Telescope Wide Field Camera 3 (WFC3) near-IR channel is extensively used in time-resolved observations, especially for transiting exoplanet spectroscopy as well as brown dwarf and directly imaged exoplanet rotational phase mapping. The ramp effect is the dominant source of systematics in the WFC3 for time-resolved observations, which limits its photometric precision. Current mitigation strategies are based on empirical fits and require additional orbits to help the telescope reach thermal equilibrium. We show that the ramp-effect profiles can be explained and corrected with high fidelity using charge-trapping theories. We also present a model for this process that can be used to predict and to correct charge trap systematics. Our model is based on a very small number of parameters that are intrinsic to the detector. We find that these parameters are very stable between the different data sets, and we provide best-fit values. Our model is tested with more than 120 orbits (∼40 visits) of WFC3 observations and is shown to provide near-photon-noise-limited corrections for observations made with both staring and scanning modes of transiting exoplanets as well as for staring-mode observations of brown dwarfs. After our model correction, the light curve of the first orbit in each visit has the same photometric precision as subsequent orbits, so data from the first orbit no longer need to be discarded. Near-IR arrays with the same physical characteristics (e.g., JWST/NIRCam) may also benefit from the extension of this model if similar systematic profiles are observed.
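
    The charge-trapping picture can be illustrated with a toy trapping/release rate equation; the parameter values and functional form below are placeholders for demonstration, not the fitted detector constants or exact model of the paper.

```python
import numpy as np

def ramp_profile(flux, dt, n_traps=500.0, eta=0.01, t_release=1.0e4):
    """Toy forward model of the ramp effect.

    Traps capture a fraction of the incoming flux proportional to the
    number of still-empty traps and release charge on a fixed timescale;
    the observed rate is the flux minus the charge currently being trapped.
    """
    trapped = 0.0
    observed = np.empty(len(flux))
    for i, f in enumerate(flux):
        capture = eta * f * (1.0 - trapped / n_traps)  # fewer empty traps -> less loss
        release = trapped / t_release
        trapped += (capture - release) * dt
        observed[i] = f - capture
    return observed

# A flat light curve shows the characteristic upward ramp; a model-based
# correction divides these factors out of the measured data.
model = ramp_profile(np.full(300, 100.0), dt=10.0)
correction_factors = model / 100.0
```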

  6. Practical considerations in the development of hemoglobin-based oxygen therapeutics.

    Science.gov (United States)

    Kim, Hae Won; Estep, Timothy N

    2012-09-01

    The development of hemoglobin-based oxygen therapeutics (HBOCs) requires consideration of a number of factors. While the enabling technology derives from fundamental research on protein biochemistry and biological interactions, translation of these research insights into usable medical therapeutics demands considerable technical expertise and the reconciliation of myriad manufacturing, medical, and regulatory requirements. The HBOC development challenge is further exacerbated by the extremely high intravenous doses required for many of the indications contemplated for these products, which in turn implies that an extremely high level of purity is required. This communication discusses several of the important product-configuration and developmental considerations that impact the translation of fundamental research discoveries on HBOCs into usable medical therapeutics.

  7. Inventory of Novel Animal Models Addressing Etiology of Preeclampsia in the Development of New Therapeutic/Intervention Opportunities.

    Science.gov (United States)

    Erlandsson, Lena; Nääv, Åsa; Hennessy, Annemarie; Vaiman, Daniel; Gram, Magnus; Åkerström, Bo; Hansson, Stefan R

    2016-03-01

    Preeclampsia is a pregnancy-related disease afflicting 3-7% of pregnancies worldwide that leads to maternal and infant morbidity and mortality. The disease is of placental origin and is commonly described as a disease of two stages. A variety of preeclampsia animal models have been proposed, but all of them have limitations in fully recapitulating the human disease. Based on the research question at hand, different or multiple models might be suitable. Multiple animal models, in combination with in vitro or ex vivo studies on human placenta, together offer a synergistic platform to further our understanding of the etiology of preeclampsia and potential therapeutic interventions. The described animal models of preeclampsia fall into four categories: (i) spontaneous, (ii) surgically induced, (iii) pharmacologically/substance induced, and (iv) transgenic. This review aims to provide an inventory of novel models addressing the etiology of the disease and/or therapeutic/intervention opportunities. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  8. Experimental demonstration of passive acoustic imaging in the human skull cavity using CT-based aberration corrections.

    Science.gov (United States)

    Jones, Ryan M; O'Reilly, Meaghan A; Hynynen, Kullervo

    2015-07-01

    To experimentally verify a previously described technique for performing passive acoustic imaging through an intact human skull using noninvasive, computed tomography (CT)-based aberration corrections [Jones et al., Phys. Med. Biol. 58, 4981-5005 (2013)]. A sparse hemispherical receiver array (30 cm diameter) consisting of 128 piezoceramic discs (2.5 mm diameter, 612 kHz center frequency) was used to passively listen through ex vivo human skullcaps (n = 4) to acoustic emissions from a narrow-band fixed source (1 mm diameter, 516 kHz center frequency) and from ultrasound-stimulated (5 cycle bursts, 1 Hz pulse repetition frequency, estimated in situ peak negative pressure 0.11-0.33 MPa, 306 kHz driving frequency) Definity™ microbubbles flowing through a thin-walled tube phantom. Initial in vivo feasibility testing of the method was performed. The performance of the method was assessed through comparisons to images generated without skull corrections, with invasive source-based corrections, and with water-path control images. For source locations at least 25 mm from the inner skull surface, the modified reconstruction algorithm successfully restored a single focus within the skull cavity at a location within 1.25 mm of the true position of the narrow-band source. The results obtained from imaging single bubbles are in good agreement with numerical simulations of point source emitters and the authors' previous experimental measurements using source-based skull corrections [O'Reilly et al., IEEE Trans. Biomed. Eng. 61, 1285-1294 (2014)]. In a rat model, microbubble activity was mapped through an intact human skull at pressure levels below and above the threshold for focused ultrasound-induced blood-brain barrier opening. During bursts that led to coherent bubble activity, the location of maximum intensity in images generated with CT-based skull corrections was found to deviate by less than 1 mm, on average, from the position obtained using source-based corrections. Taken
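
    The skeleton of such CT-corrected passive beamforming can be sketched as a minimal delay-and-sum loop (an illustrative sketch, not the authors' reconstruction algorithm; the per-receiver skull delays are assumed to come from a CT-based model).

```python
import numpy as np

def passive_map(signals, fs, receivers, grid, c=1500.0, skull_delays=None):
    """Delay-and-sum passive map over candidate source points.

    signals: (n_rx, n_samples) received waveforms, fs: sampling rate (Hz)
    receivers, grid: (n_rx, 3) and (n_pts, 3) positions in meters
    skull_delays: optional (n_rx, n_pts) extra delays in seconds
    """
    n_rx, n_s = signals.shape
    image = np.zeros(len(grid))
    for k, r in enumerate(grid):
        tof = np.linalg.norm(receivers - r, axis=1) / c   # water-path delays
        if skull_delays is not None:
            tof = tof + skull_delays[:, k]                # CT-based correction
        shifts = np.round((tof - tof.min()) * fs).astype(int)
        n_keep = n_s - shifts.max()
        aligned = np.stack([signals[i, shifts[i]:shifts[i] + n_keep]
                            for i in range(n_rx)])
        image[k] = np.sum(aligned.sum(axis=0) ** 2)       # coherent energy
    return image
```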

  9. Spherical aberration correction with an in-lens N-fold symmetric line currents model.

    Science.gov (United States)

    Hoque, Shahedul; Ito, Hiroyuki; Nishi, Ryuji

    2018-04-01

    In our previous works, we have proposed N-SYLC (N-fold symmetric line currents) models for aberration correction. In this paper, we propose "in-lens N-SYLC" model, where N-SYLC overlaps rotationally symmetric lens. Such overlap is possible because N-SYLC is free of magnetic materials. We analytically prove that, if certain parameters of the model are optimized, an in-lens 3-SYLC (N = 3) doublet can correct 3rd order spherical aberration. By computer simulation, we show that the required excitation current for correction is less than 0.25 AT for beam energy 5 keV, and the beam size after correction is smaller than 1 nm at the corrector image plane for initial slope less than 4 mrad. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Establishment and correction of an Echelle cross-prism spectrogram reduction model

    Science.gov (United States)

    Zhang, Rui; Bayanheshig; Li, Xiaotian; Cui, Jicheng

    2017-11-01

    The accuracy of an echelle cross-prism spectrometer depends on the degree of matching between the spectrum reduction model and the actual state of the spectrometer. However, adjustment errors can change the actual state of the spectrometer and result in a reduction model that no longer matches, producing an inaccurate wavelength calibration. Therefore, calibration of the spectrogram reduction model is important for the analysis of any echelle cross-prism spectrometer. In this study, the spectrogram reduction model of an echelle cross-prism spectrometer was established. The laws governing how image positions vary with the system parameters were simulated to assess the influence of changes in the prism refractive index, focal length and other parameters on the calculation results. The model was divided into different wavebands. An iterative method, the least-squares principle and element lamps with known characteristic wavelengths were used to calibrate the spectral model in the different wavebands and obtain the actual values of the system parameters. After correction, the deviations between the actual x- and y-coordinates and the coordinates calculated by the model are less than one pixel. A model corrected by this method thus reflects the system parameters in the current spectrometer state, assists in accurate wavelength extraction, and can guide instrument installation and adjustment through repeated correction, reducing the difficulty of equipment alignment.

  11. Validation of the Two-Layer Model for Correcting Clear Sky Reflectance Near Clouds

    Science.gov (United States)

    Wen, Guoyong; Marshak, Alexander; Evans, K. Frank; Vamal, Tamas

    2014-01-01

    A two-layer model was developed in our earlier studies to estimate the clear-sky reflectance enhancement near clouds. This simple model accounts for the radiative interaction between boundary layer clouds and the molecular layer above, the major contribution to the reflectance enhancement near clouds at short wavelengths. We use LES/SHDOM-simulated 3D radiation fields to validate the two-layer model for the reflectance enhancement at 0.47 micrometers. We find: (a) the simple model captures the viewing angle dependence of the reflectance enhancement near clouds, suggesting the physics of this model is correct; and (b) the magnitude of the two-layer modeled enhancement agrees reasonably well with the "truth", with some expected underestimation. We further extend our model to include cloud-surface interaction using the Poisson model for broken clouds. We found that including cloud-surface interaction improves the correction, though it can introduce some overcorrection for large cloud albedo, large cloud optical depth, large cloud fraction, and large cloud aspect ratio. This overcorrection can be reduced by excluding scenes (10 km x 10 km) with large cloud fraction, for which the Poisson model is not designed. Further research is underway to account for the contribution of cloud-aerosol radiative interaction to the enhancement.

  12. Bayesian based Prognostic Model for Predictive Maintenance of Offshore Wind Farms

    DEFF Research Database (Denmark)

    Asgarpour, Masoud; Sørensen, John Dalsgaard

    2018-01-01

    The operation and maintenance costs of offshore wind farms can be significantly reduced if existing corrective actions are performed as efficiently as possible and if future corrective actions are avoided by performing sufficient preventive actions. In this paper a prognostic model for degradation monitoring, fault prediction and predictive maintenance of offshore wind components is defined. The diagnostic model defined in this paper is based on degradation, remaining useful lifetime and hybrid inspection threshold models. The defined degradation model is based on an exponential distribution...

  13. Bayesian based Prognostic Model for Predictive Maintenance of Offshore Wind Farms

    DEFF Research Database (Denmark)

    Asgarpour, Masoud; Sørensen, John Dalsgaard

    2018-01-01

    The operation and maintenance costs of offshore wind farms can be significantly reduced if existing corrective actions are performed as efficiently as possible and if future corrective actions are avoided by performing sufficient preventive actions. In this paper a prognostic model for degradation monitoring, fault prediction and predictive maintenance of offshore wind components is defined. The diagnostic model defined in this paper is based on degradation, remaining useful lifetime and hybrid inspection threshold models. The defined degradation model is based on an exponential distribution...

  14. Bayesian based Prognostic Model for Predictive Maintenance of Offshore Wind Farms

    DEFF Research Database (Denmark)

    Asgarpour, Masoud

    2017-01-01

    The operation and maintenance costs of offshore wind farms can be significantly reduced if existing corrective actions are performed as efficiently as possible and if future corrective actions are avoided by performing sufficient preventive actions. In this paper a prognostic model for degradation monitoring, fault detection and predictive maintenance of offshore wind components is defined. The diagnostic model defined in this paper is based on degradation, remaining useful lifetime and hybrid inspection threshold models. The defined degradation model is based on an exponential distribution...
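
    Assuming the exponential degradation model above refers to the component's time to failure T with rate λ (an interpretive assumption; the truncated abstract does not spell this out), the basic quantities follow in closed form:

```latex
P(T \le t) = 1 - e^{-\lambda t}, \qquad
\mathrm{E}\!\left[\,T - t \mid T > t\,\right] = \frac{1}{\lambda}
```

    the second identity being the memoryless property; condition-monitoring information, such as an inspection threshold, is what lets the remaining-useful-lifetime estimate depart from this constant baseline.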

  15. Direct Reconstruction of CT-based Attenuation Correction Images for PET with Cluster-Based Penalties

    Science.gov (United States)

    Kim, Soo Mee; Alessio, Adam M.; De Man, Bruno; Asma, Evren; Kinahan, Paul E.

    2015-01-01

    Extremely low-dose CT acquisitions for the purpose of PET attenuation correction will have a high level of noise and biasing artifacts due to factors such as photon starvation. This work explores a priori knowledge appropriate for CT iterative image reconstruction for PET attenuation correction. We investigate the maximum a posteriori (MAP) framework with cluster-based, multinomial priors for the direct reconstruction of the PET attenuation map. The objective function for direct iterative attenuation map reconstruction was modeled as a Poisson log-likelihood with prior terms consisting of quadratic (Q) and mixture (M) distributions. The attenuation map is assumed to have values in 4 clusters: air+background, lung, soft tissue, and bone. Under this assumption, the mixture prior was a mixture probability density function consisting of one exponential and three Gaussian distributions. The relative proportion of each cluster was jointly estimated during each voxel update of the direct iterative coordinate descent (dICD) method. Noise-free data were generated from the NCAT phantom and Poisson noise was added. Reconstruction with FBP (ramp filter) was performed on the noise-free (ground truth) and noisy data. For the noisy data, dICD reconstruction was performed with combinations of different prior strength parameters (β and γ) for the Q- and M-penalties. The combined quadratic and mixture penalties reduce the RMSE by 18.7% compared to post-smoothed iterative reconstruction and by only 0.7% compared to the quadratic penalty alone. For direct PET attenuation map reconstruction from ultra-low-dose CT acquisitions, the combination of quadratic and mixture priors offers regularization of both variance and bias and is a potential method to derive attenuation maps with negligible patient dose. However, the small improvement in quantitative accuracy relative to the substantial increase in algorithm complexity does not currently justify the use of mixture-based PET attenuation priors for reconstruction of CT
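
    Schematically, the reconstruction described above maximizes a penalized Poisson log-likelihood of the following form (the notation is assumed here for illustration):

```latex
\hat{\mu} = \arg\max_{\mu}\; L(y \mid \mu)
  \;-\; \beta\, \Phi_{Q}(\mu)
  \;-\; \gamma\, \Phi_{M}(\mu),
\qquad
\Phi_{M}(\mu) = -\sum_{j} \log \sum_{k=1}^{4} \pi_{k}\, p_{k}(\mu_{j})
```

    where L is the Poisson log-likelihood of the CT data y, Φ_Q a quadratic neighborhood roughness penalty, p_k the per-cluster densities (one exponential for air+background, three Gaussians for lung, soft tissue and bone), and π_k the cluster proportions re-estimated during the dICD iterations.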

  16. A variable age of onset segregation model for linkage analysis, with correction for ascertainment, applied to glioma

    DEFF Research Database (Denmark)

    Sun, Xiangqing; Vengoechea, Jaime; Elston, Robert

    2012-01-01

    We propose a 2-step model-based approach, with correction for ascertainment, to linkage analysis of a binary trait with variable age of onset and apply it to a set of multiplex pedigrees segregating for adult glioma....

  17. Towards quantitative PET/MRI: a review of MR-based attenuation correction techniques.

    Science.gov (United States)

    Hofmann, Matthias; Pichler, Bernd; Schölkopf, Bernhard; Beyer, Thomas

    2009-03-01

    Positron emission tomography (PET) is a fully quantitative technology for imaging metabolic pathways and dynamic processes in vivo. Attenuation correction of raw PET data is a prerequisite for quantification and is typically based on separate transmission measurements. In PET/CT, however, attenuation correction is performed routinely based on the available CT transmission data. Recently, combined PET/magnetic resonance (MR) has been proposed as a viable alternative to PET/CT. Current concepts of PET/MRI do not include CT-like transmission sources and, therefore, alternative methods of PET attenuation correction must be found. This article reviews existing approaches to MR-based attenuation correction (MR-AC). Most groups have proposed MR-AC algorithms for brain PET studies and more recently also for torso PET/MR imaging. Most MR-AC strategies require the use of complementary MR and transmission images, or morphology templates generated from transmission images. We review and discuss these algorithms and point out challenges for using MR-AC in clinical routine. MR-AC is work-in-progress with potentially promising results from a template-based approach applicable to both brain and torso imaging. While efforts are ongoing in making clinically viable MR-AC fully automatic, further studies are required to realize the potential benefits of MR-based motion compensation and partial volume correction of the PET data.

  20. Winner's Curse Correction and Variable Thresholding Improve Performance of Polygenic Risk Modeling Based on Genome-Wide Association Study Summary-Level Data.

    Directory of Open Access Journals (Sweden)

    Jianxin Shi

    2016-12-01

    Full Text Available Recent heritability analyses have indicated that genome-wide association studies (GWAS) have the potential to improve genetic risk prediction for complex diseases based on the polygenic risk score (PRS), a simple modelling technique that can be implemented using summary-level data from the discovery samples. We herein propose modifications to improve the performance of PRS. We introduce threshold-dependent winner's-curse adjustments for the marginal association coefficients that are used to weight the single-nucleotide polymorphisms (SNPs) in PRS. Further, as a way to incorporate external functional/annotation knowledge that could identify subsets of SNPs highly enriched for associations, we propose variable thresholds for SNP selection. We applied our methods to GWAS summary-level data for 14 complex diseases. Across all diseases, a simple winner's curse correction uniformly enhanced the performance of the models, whereas incorporation of functional SNPs was beneficial only for selected diseases. Compared with the standard PRS algorithm, the proposed methods in combination led to a notable gain in efficiency (a 25-50% increase in the prediction R2) for 5 of the 14 diseases. As an example, for a GWAS of type 2 diabetes, winner's curse correction improved the prediction R2 from 2.29% with the standard PRS to 3.10% (P = 0.0017), and incorporating functional annotation data further improved R2 to 3.53% (P = 2×10-5). Our simulation studies illustrate why differential treatment of certain categories of functional SNPs, even when they are shown to be highly enriched for GWAS heritability, does not lead to a proportionate improvement in genetic risk prediction because of the non-uniform linkage disequilibrium structure.
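    The threshold-dependent winner's-curse adjustment can be illustrated with a conditional-expectation correction: given an observed z-score that survived a significance threshold, solve for the true effect whose truncated-normal mean equals the observed value. This is a minimal sketch of one standard correction, not necessarily the paper's exact estimator; the threshold and input numbers are hypothetical.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def winners_curse_correct(beta_hat, se, z_thresh):
    """Shrink a marginal GWAS effect that was selected because |z| > z_thresh.

    Solves E[Z | |Z| > z_thresh; mu] = z_obs for mu (truncated-normal mean),
    then returns mu * se as the corrected effect size.
    """
    z_obs = beta_hat / se

    def cond_mean(mu):
        # P(|Z| > c) and E[Z ; |Z| > c] for Z ~ N(mu, 1)
        p = norm.sf(z_thresh - mu) + norm.cdf(-z_thresh - mu)
        ez = mu * p + norm.pdf(z_thresh - mu) - norm.pdf(-z_thresh - mu)
        return ez / p

    # cond_mean is monotone in mu, so the root is safely bracketed.
    mu = brentq(lambda m: cond_mean(m) - z_obs, -abs(z_obs) - 5, abs(z_obs) + 5)
    return mu * se

# Example: an effect just past a genome-wide threshold is shrunk noticeably.
print(winners_curse_correct(beta_hat=0.06, se=0.01, z_thresh=5.45))
```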

  1. Radar Rainfall Bias Correction based on Deep Learning Approach

    Science.gov (United States)

    Song, Yang; Han, Dawei; Rico-Ramirez, Miguel A.

    2017-04-01

    Radar rainfall measurement errors can be attributed in considerable part to various sources, including intricate synoptic regimes. Temperature, humidity, and wind are typically acknowledged as critical meteorological factors inducing precipitation discrepancies aloft and on the ground. Conventional practice mainly uses radar-gauge or geostatistical techniques with direct weighted interpolation algorithms as bias correction schemes, and rarely considers atmospheric effects. This study aims to comprehensively quantify the impacts of those meteorological elements on radar-gauge rainfall bias correction based on a deep learning approach. The deep learning approach employs deep convolutional neural networks to automatically extract three-dimensional meteorological features for target recognition based on high range resolution profiles. The complex nonlinear relationships between input and target variables can be implicitly detected by such a scheme, which is validated on the test dataset. The proposed bias correction scheme is expected to be a promising improvement in systematically minimizing the synthesized atmospheric effects on rainfall discrepancies between radar and rain gauges, which can be useful in many meteorological and hydrological applications (e.g., real-time flood forecasting), especially for regions with complex atmospheric conditions.
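    As a rough illustration of the kind of model the abstract describes, the sketch below defines a small convolutional network that maps gridded meteorological fields (temperature, humidity, and wind as input channels) to a radar-gauge bias estimate. The architecture, channel counts, and grid size are all assumptions for illustration; the paper's actual network is not specified in the abstract.

```python
import torch
import torch.nn as nn

class BiasCorrectionCNN(nn.Module):
    """Toy CNN: 3 meteorological channels on a 32x32 grid -> scalar bias."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.head = nn.Linear(32 * 8 * 8, 1)

    def forward(self, x):
        h = self.features(x)
        return self.head(h.flatten(start_dim=1))

# One training step on synthetic data (stand-ins for real radar/gauge pairs).
model = BiasCorrectionCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 3, 32, 32)                    # temperature/humidity/wind
y = torch.randn(8, 1)                            # observed radar-gauge bias
loss = nn.functional.mse_loss(model(x), y)
loss.backward(); opt.step()
print(float(loss))
```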

  2. Determination of dose correction factor for energy and directional dependence of the MOSFET dosimeter in an anthropomorphic phantom

    International Nuclear Information System (INIS)

    Cho, Sung Koo; Choi, Sang Hyoun; Kim, Chan Hyeong; Na, Seong Ho

    2006-01-01

    In recent years, the MOSFET dosimeter has been widely used in various medical applications such as dose verification in therapeutic and diagnostic radiation applications. The MOSFET dosimeter is, however, mainly made of silicon and shows some energy dependence for low energy photons. Therefore, the MOSFET dosimeter tends to overestimate the dose for low energy scattered photons in a phantom. This study determines the correction factors to compensate for these dependences of the MOSFET dosimeter in the ATOM phantom. For this, we first constructed a computational model of the ATOM phantom based on the 3D CT image data of the phantom. The voxel phantom was then implemented in a Monte Carlo simulation code and used to calculate the energy spectrum of the photon field at each of the MOSFET dosimeter locations in the phantom. Finally, the correction factors were calculated based on the energy spectrum of the photon field at the dosimeter locations and the pre-determined energy and directional dependence of the MOSFET dosimeter. Our results for 60Co and 137Cs photon fields show that the correction factors are distributed within the range of 0.89 to 0.97, considering all the MOSFET dosimeter locations in the phantom.
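    The final step the abstract describes, combining a simulated photon energy spectrum at a dosimeter location with the dosimeter's pre-measured energy dependence, amounts to a fluence-weighted average. A minimal sketch with made-up spectrum and response values:

```python
import numpy as np

# Hypothetical Monte Carlo output: photon fluence spectrum at one MOSFET
# location, binned by energy (MeV), plus the dosimeter's measured relative
# response in each bin (response = 1.0 means no over- or under-estimation).
energy_bins = np.array([0.05, 0.10, 0.20, 0.50, 1.00, 1.25])
fluence     = np.array([0.08, 0.12, 0.15, 0.20, 0.25, 0.20])
response    = np.array([1.60, 1.35, 1.10, 1.02, 1.00, 1.00])

# The correction factor rescales the reading back to true dose: a response
# above 1 (low-energy over-response) yields a factor below 1.
correction_factor = 1.0 / (np.sum(fluence * response) / np.sum(fluence))
print(round(correction_factor, 3))   # ~0.9, consistent with the 0.89-0.97 range
```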

  3. Markov random field and Gaussian mixture for segmented MRI-based partial volume correction in PET

    International Nuclear Information System (INIS)

    Bousse, Alexandre; Thomas, Benjamin A; Erlandsson, Kjell; Hutton, Brian F; Pedemonte, Stefano; Ourselin, Sébastien; Arridge, Simon

    2012-01-01

    In this paper we propose a segmented magnetic resonance imaging (MRI) prior-based maximum penalized likelihood deconvolution technique for positron emission tomography (PET) images. The model assumes the existence of activity classes that behave like a hidden Markov random field (MRF) driven by the segmented MRI. We utilize a mean field approximation to compute the likelihood of the MRF. We tested our method on both simulated and clinical data (brain PET) and compared our results with PET images corrected with the re-blurred Van Cittert (VC) algorithm, the simplified Guven (SG) algorithm and the region-based voxel-wise (RBV) technique. We demonstrate that our algorithm outperforms the VC algorithm, and that it outperforms the SG and RBV corrections when the segmented MRI is inconsistent with the PET image (e.g. mis-segmentation, lesions). (paper)

  4. Real-time distortion correction for visual inspection systems based on FPGA

    Science.gov (United States)

    Liang, Danhua; Zhang, Zhaoxia; Chen, Xiaodong; Yu, Daoyin

    2008-03-01

    Visual inspection is a new technology based on computer vision research that focuses on measuring an object's geometry and location. It can be widely used in online measurement and other real-time measurement processes. Because of the defects of traditional visual inspection, a new visual detection mode, all-digital intelligent acquisition and transmission, is presented. The image processing, including filtering, image compression, binarization, edge detection, and distortion correction, can be completed in a programmable device (FPGA). As a wide-field-angle lens is adopted in the system, the output images have serious distortion. Limited by the computing speed of computers, software can only correct the distortion of static images, not that of dynamic images. To meet the real-time requirement, we design a distortion correction system based on FPGA. In this hardware distortion correction method, the spatial correction data are first calculated in software, then converted into hardware storage addresses and stored in a hardware look-up table, from which data are read out to correct the gray levels. The major benefit of using FPGA is that the same circuit can be used for other circularly symmetric wide-angle lenses without modification.
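    The offline LUT construction plus online lookup split that the abstract describes can be sketched in a few lines: precompute, for every output pixel, the distorted source coordinate, then correct each frame by a pure table lookup (nearest-neighbor here for brevity; the paper uses bilinear interpolation). The radial distortion model and its coefficient are assumptions for illustration.

```python
import numpy as np

H = W = 256
cy, cx = H / 2, W / 2
K1 = 2.5e-6          # hypothetical radial distortion coefficient

# Offline (software) stage: build the look-up table of source coordinates.
ys, xs = np.mgrid[0:H, 0:W].astype(float)
r2 = (ys - cy) ** 2 + (xs - cx) ** 2
scale = 1.0 + K1 * r2                      # simple r^2 radial model
src_y = np.clip(cy + (ys - cy) * scale, 0, H - 1).round().astype(int)
src_x = np.clip(cx + (xs - cx) * scale, 0, W - 1).round().astype(int)

# Online (hardware) stage: per frame, correction is a pure table lookup.
def correct(frame):
    return frame[src_y, src_x]

frame = np.random.randint(0, 256, (H, W), dtype=np.uint8)
print(correct(frame).shape)                # (256, 256)
```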

  5. Correction of oral contrast artifacts in CT-based attenuation correction of PET images using an automated segmentation algorithm

    International Nuclear Information System (INIS)

    Ahmadian, Alireza; Ay, Mohammad R.; Sarkar, Saeed; Bidgoli, Javad H.; Zaidi, Habib

    2008-01-01

    Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium with high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μmap), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, high CT number object segmentation using combined region- and boundary-based segmentation and second, object classification to bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of generated μmaps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique in

  6. Statistical reconstruction for cone-beam CT with a post-artifact-correction noise model: application to high-quality head imaging

    International Nuclear Information System (INIS)

    Dang, H; Stayman, J W; Sisniega, A; Xu, J; Zbijewski, W; Siewerdsen, J H; Wang, X; Foos, D H; Aygun, N; Koliatsos, V E

    2015-01-01

    Non-contrast CT reliably detects fresh blood in the brain and is the current front-line imaging modality for intracranial hemorrhage such as that occurring in acute traumatic brain injury (contrast ∼40–80 HU, size > 1 mm). We are developing flat-panel detector (FPD) cone-beam CT (CBCT) to facilitate such diagnosis in a low-cost, mobile platform suitable for point-of-care deployment. Such a system may offer benefits in the ICU, urgent care/concussion clinic, ambulance, and sports and military theatres. However, current FPD-CBCT systems face significant challenges that confound low-contrast, soft-tissue imaging. Artifact correction can overcome major sources of bias in FPD-CBCT but imparts noise amplification in filtered backprojection (FBP). Model-based reconstruction improves soft-tissue image quality compared to FBP by leveraging a high-fidelity forward model and image regularization. In this work, we develop a novel penalized weighted least-squares (PWLS) image reconstruction method with a noise model that includes accurate modeling of the noise characteristics associated with the two dominant artifact corrections (scatter and beam-hardening) in CBCT and utilizes modified weights to compensate for noise amplification imparted by each correction. Experiments included real data acquired on an FPD-CBCT test-bench and an anthropomorphic head phantom emulating intra-parenchymal hemorrhage. The proposed PWLS method demonstrated superior noise-resolution tradeoffs in comparison to FBP and PWLS with conventional weights (viz. at matched 0.50 mm spatial resolution, CNR = 11.9 compared to CNR = 5.6 and CNR = 9.9, respectively) and substantially reduced image noise, especially in challenging regions such as the skull base. The results support the hypothesis that with high-fidelity artifact correction and statistical reconstruction using an accurate post-artifact-correction noise model, FPD-CBCT can achieve image quality allowing reliable detection of

  7. Forward and correctional OFDM-based visible light positioning

    Science.gov (United States)

    Li, Wei; Huang, Zhitong; Zhao, Runmei; He, Peixuan; Ji, Yuefeng

    2017-09-01

    Visible light positioning (VLP) has attracted much attention in both academic and industrial areas due to the extensive deployment of light-emitting diodes (LEDs) as next-generation green lighting. Generally, the coverage of a single LED lamp is limited, so LED arrays are often utilized to achieve uniform illumination within large-scale indoor environments. However, in such a dense LED deployment scenario, the superposition of the light signals becomes an important challenge for accurate VLP. To solve this problem, we propose a forward and correctional orthogonal frequency division multiplexing (OFDM)-based VLP (FCO-VLP) scheme with low complexity in the generation and processing of signals. In the first, forward procedure of FCO-VLP, an initial position is obtained by the trilateration method based on OFDM subcarriers. The positioning accuracy is further improved in the second, correctional procedure based on a database of reference points. As demonstrated in our experiments, our approach yields an improved average positioning error of 4.65 cm and enhances positioning accuracy by 24.2% compared with the trilateration method alone.
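    The first, forward stage is standard trilateration: with distances to three or more LEDs at known positions, the receiver position follows from a linearized least-squares solve. A minimal sketch (coordinates and ranges are made up; the paper's OFDM-based ranging is not reproduced here):

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares 2D position from >= 3 (x, y) anchors and ranges.

    Subtracting the first anchor's circle equation from the others
    linearizes the problem into A p = b.
    """
    x0, y0 = anchors[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(xi**2 - x0**2 + yi**2 - y0**2 + dists[0]**2 - di**2)
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return p

leds = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]    # LED positions (m)
true = np.array([1.2, 2.7])
d = [np.linalg.norm(true - np.array(l)) for l in leds]     # ideal ranges
print(trilaterate(leds, d))                                # ~[1.2, 2.7]
```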

  8. A scheme for PET data normalization in event-based motion correction

    International Nuclear Information System (INIS)

    Zhou, Victor W; Kyme, Andre Z; Fulton, Roger; Meikle, Steven R

    2009-01-01

    Line of response (LOR) rebinning is an event-based motion-correction technique for positron emission tomography (PET) imaging that has been shown to compensate effectively for rigid motion. It involves the spatial transformation of LORs to compensate for motion during the scan, as measured by a motion tracking system. Each motion-corrected event is then recorded in the sinogram bin corresponding to the transformed LOR. It has been shown previously that the corrected event must be normalized using a normalization factor derived from the original LOR, that is, based on the pair of detectors involved in the original coincidence event. In general, due to data compression strategies (mashing), sinogram bins record events detected on multiple LORs. The number of LORs associated with a sinogram bin determines the relative contribution of each LOR. This paper provides a thorough treatment of event-based normalization during motion correction of PET data using LOR rebinning. We demonstrate theoretically and experimentally that normalization of the corrected event during LOR rebinning should account for the number of LORs contributing to the sinogram bin into which the motion-corrected event is binned. Failure to account for this factor may cause artifactual slice-to-slice count variations in the transverse slices and visible horizontal stripe artifacts in the coronal and sagittal slices of the reconstructed images. The theory and implementation of normalization in conjunction with the LOR rebinning technique is described in detail, and experimental verification of the proposed normalization method in phantom studies is presented.

  9. A model of diffraction scattering with unitary corrections

    International Nuclear Information System (INIS)

    Etim, E.; Malecki, A.; Satta, L.

    1989-01-01

    The inability of the multiple scattering model of Glauber, and of similar geometrical-picture models, to fit data at Collider energies, to fit low-energy data at large momentum transfers, and to explain the absence of multiple diffraction dips in the data is noted. It is argued and shown that a unitary correction to the multiple scattering amplitude gives rise to a better model and makes it possible to fit all available data on nucleon-nucleon and nucleus-nucleus collisions at all energies and all momentum transfers. There are no multiple diffraction dips.

  10. Multisite bias correction of precipitation data from regional climate models

    Czech Academy of Sciences Publication Activity Database

    Hnilica, Jan; Hanel, M.; Puš, V.

    2017-01-01

    Vol. 37, No. 6 (2017), pp. 2934-2946. ISSN 0899-8418. R&D Projects: GA ČR GA16-05665S. Grant - others: Grantová agentura ČR - GA ČR (CZ) 16-16549S. Institutional support: RVO:67985874. Keywords: bias correction * regional climate model * correlation * covariance * multivariate data * multisite correction * principal components * precipitation. Subject RIV: DA - Hydrology; Limnology. OECD field: Climatic research. Impact factor: 3.760, year: 2016

  11. Quantitative Evaluation of 2 Scatter-Correction Techniques for 18F-FDG Brain PET/MRI in Regard to MR-Based Attenuation Correction.

    Science.gov (United States)

    Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika

    2017-10-01

    In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC on scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementations on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18F-FDG PET

  12. Cell-based therapeutic strategies for multiple sclerosis

    DEFF Research Database (Denmark)

    Scolding, Neil J; Pasquini, Marcelo; Reingold, Stephen C

    2017-01-01

    and none directly promotes repair. Cell-based therapies, including immunoablation followed by autologous haematopoietic stem cell transplantation, mesenchymal and related stem cell transplantation, pharmacologic manipulation of endogenous stem cells to enhance their reparative capabilities......, and transplantation of oligodendrocyte progenitor cells, have generated substantial interest as novel therapeutic strategies for immune modulation, neuroprotection, or repair of the damaged central nervous system in multiple sclerosis. Each approach has potential advantages but also safety concerns and unresolved...

  13. Therapeutic Potency of Nanoformulations of siRNAs and shRNAs in Animal Models of Cancers

    Directory of Open Access Journals (Sweden)

    Md. Emranul Karim

    2018-05-01

    Full Text Available RNA interference (RNAi) has brought revolutionary transformations to cancer management in the past two decades. RNAi-based therapeutics, including siRNA and shRNA, have immense scope to silence the expression of mutant cancer genes specifically in a therapeutic context. Although tremendous progress has been made in establishing catalytic RNA as a new class of biologics for cancer management, many extracellular and intracellular barriers still pose a long-lasting challenge on the way to clinical approval. A series of chemically suitable, safe and effective viral and non-viral carriers have emerged to overcome physiological barriers and ensure targeted delivery of RNAi. The newly invented carriers, delivery techniques and gene editing technology have made current treatment protocols stronger in the fight against cancer. This review chronicles siRNA development and the challenges of translating RNAi therapeutics from laboratory to bedside, focusing on recent advances in siRNA delivery vehicles and their limitations. Furthermore, an overview of several animal model studies of siRNA- or shRNA-based cancer gene therapy over the past 15 years is presented, highlighting the roles of genes in multiple cancers, pharmacokinetic parameters and critical evaluation. The review concludes with future directions for the development of catalytic RNA vehicles and design strategies to make RNAi-based cancer gene therapy more promising in surmounting cancer gene delivery challenges.

  14. MAGNETIC SYMPATHICUS CORRECTION IN TREATMENT ENURESIS IN CHILDREN

    Directory of Open Access Journals (Sweden)

    T. V. Otpuschennikova

    2014-01-01

    Full Text Available Based on an investigation of 92 children aged 6 to 15 years with enuresis, the use of physiotherapy in combination with a minimum dose of oxybutynin (2.5 mg at night) was shown to be effective. The physiotherapy combined two types of magnetic therapy: transcranial therapy applied bitemporally, and therapy of the cervical sympathetic ganglia applied by the neck-collar method. A therapeutic effect, relief of enuresis, was achieved in 76.6% of the children. The positive dynamics of enuresis were associated with correction of hypersympathicotonia and improved adaptation of the autonomic nervous system.

  15. Study on fitness functions of genetic algorithm for dynamically correcting nuclide atmospheric diffusion model

    International Nuclear Information System (INIS)

    Ji Zhilong; Ma Yuanwei; Wang Dezhong

    2014-01-01

    Background: In models of the atmospheric diffusion of radioactive nuclides, the empirical dispersion coefficients were deduced under certain experimental conditions, whose difference from nuclear accident conditions is a source of deviation. A better estimate of a radioactive nuclide's actual dispersion process can be obtained by correcting the dispersion coefficients with observation data, and the Genetic Algorithm (GA) is an appropriate method for this correction procedure. Purpose: This study analyzes the influence of the fitness function on the correction procedure and on the forecast ability of the diffusion model. Methods: GA, coupled with a Lagrangian dispersion model, was used in a numerical simulation to compare the impact of 4 fitness functions on the correction result. Results: In the numerical simulation, the fitness function that takes observation deviation into consideration stands out when significant deviation exists in the observed data. After performing the correction procedure on the Kincaid experiment data, a significant boost was observed in the diffusion model's forecast ability. Conclusion: As the results show, in order to improve dispersion models' forecast ability using GA, observation data should be given different weights in the fitness function corresponding to their errors. (authors)
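    The paper's conclusion, that observations should be weighted by their error in the fitness function, corresponds to an inverse-variance-weighted misfit. A minimal sketch contrasting an unweighted fitness with a deviation-aware one (the data and error values are placeholders, not the paper's Lagrangian model output):

```python
import numpy as np

def fitness_unweighted(pred, obs):
    """Plain sum of squared residuals (all observations treated equally)."""
    return -np.sum((pred - obs) ** 2)          # GA maximizes fitness

def fitness_weighted(pred, obs, sigma):
    """Inverse-variance weighting: noisy observations count for less."""
    return -np.sum(((pred - obs) / sigma) ** 2)

obs   = np.array([1.0, 2.0, 3.0, 10.0])        # last value is an outlier
sigma = np.array([0.1, 0.1, 0.1, 5.0])         # ...with a large known error
pred  = np.array([1.1, 1.9, 3.1, 3.0])         # candidate model output

print(fitness_unweighted(pred, obs))           # dominated by the outlier
print(fitness_weighted(pred, obs, sigma))      # outlier correctly downweighted
```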

  16. Challenges to oligonucleotides-based therapeutics for Duchenne muscular dystrophy

    Directory of Open Access Journals (Sweden)

    Goyenvalle Aurélie

    2011-02-01

    Full Text Available Abstract Antisense oligonucleotides are short nucleic acids designed to bind to specific messenger RNAs in order to modulate splicing patterns or inhibit protein translation. As such, they represent promising therapeutic tools for many disorders and have been actively developed for more than 20 years as a form of molecular medicine. Although significant progress has been made in developing these agents as drugs, they are not yet recognized as effective therapeutics and several hurdles remain to be overcome. Within the last few years, however, the prospect of successful oligonucleotide-based therapies has moved a step closer, in particular for Duchenne muscular dystrophy. Clinical trials have recently been conducted for this myopathy, where exon skipping is being used to achieve therapeutic outcomes. In this review, the recent developments and clinical trials using antisense oligonucleotides for Duchenne muscular dystrophy are discussed, with emphasis on the challenges ahead for this type of therapy, especially with regard to delivery and regulatory issues.

  17. Research and implementation of the algorithm for unwrapped and distortion correction basing on CORDIC for panoramic image

    Science.gov (United States)

    Zhang, Zhenhai; Li, Kejie; Wu, Xiaobing; Zhang, Shujiang

    2008-03-01

    An unwrapping and distortion-correction algorithm based on the Coordinate Rotation Digital Computer (CORDIC) and bilinear interpolation is presented in this paper, for the purpose of processing dynamic panoramic annular images. An original annular panoramic image captured by a panoramic annular lens (PAL) can be unwrapped and corrected to a conventional rectangular image without distortion, which matches human vision far better. The algorithm for panoramic image processing is modeled in VHDL and implemented in an FPGA. The experimental results show that the proposed unwrapping and distortion-correction algorithm has low computational complexity, and that the architecture for dynamic panoramic image processing has low hardware cost and power consumption. The proposed algorithm is valid.
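    The geometric core of the unwrapping is a polar-to-rectangular mapping: each pixel of the output rectangle corresponds to a (radius, angle) position in the annulus, sampled with bilinear interpolation. A minimal NumPy sketch under assumed ring radii; the CORDIC part, which computes the sine/cosine in hardware, is replaced by NumPy trigonometry here.

```python
import numpy as np

def unwrap_annulus(img, r_in, r_out, out_h, out_w):
    """Map an annular panoramic image to a rectangle with bilinear sampling."""
    cy, cx = (np.array(img.shape[:2]) - 1) / 2.0
    rows = np.arange(out_h)[:, None]           # output row -> radius
    cols = np.arange(out_w)[None, :]           # output col -> angle
    r = r_in + (r_out - r_in) * rows / (out_h - 1)
    theta = 2 * np.pi * cols / out_w
    y = cy + r * np.sin(theta)                 # source coordinates (float)
    x = cx + r * np.cos(theta)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    wy, wx = y - y0, x - x0                    # bilinear weights
    y0 = np.clip(y0, 0, img.shape[0] - 2)
    x0 = np.clip(x0, 0, img.shape[1] - 2)
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x0 + 1]
            + wy * (1 - wx) * img[y0 + 1, x0] + wy * wx * img[y0 + 1, x0 + 1])

annular = np.random.rand(512, 512)
print(unwrap_annulus(annular, r_in=60, r_out=250, out_h=190, out_w=1024).shape)
```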

  18. Highly accurate fluorogenic DNA sequencing with information theory-based error correction.

    Science.gov (United States)

    Chen, Zitian; Zhou, Wenxiong; Qiao, Shuo; Kang, Li; Duan, Haifeng; Xie, X Sunney; Huang, Yanyi

    2017-12-01

    Eliminating errors in next-generation DNA sequencing has proved challenging. Here we present error-correction code (ECC) sequencing, a method to greatly improve sequencing accuracy by combining fluorogenic sequencing-by-synthesis (SBS) with an information theory-based error-correction algorithm. ECC embeds redundancy in sequencing reads by creating three orthogonal degenerate sequences, generated by alternate dual-base reactions. This is similar to encoding and decoding strategies that have proved effective in detecting and correcting errors in information communication and storage. We show that, when combined with a fluorogenic SBS chemistry with raw accuracy of 98.1%, ECC sequencing provides single-end, error-free sequences up to 200 bp. ECC approaches should enable accurate identification of extremely rare genomic variations in various applications in biology and medicine.

  19. One loop electro-weak radiative corrections in the standard model

    International Nuclear Information System (INIS)

    Kalyniak, P.; Sundaresan, M.K.

    1987-01-01

    This paper reports on the effect of radiative corrections in the standard model. A sensitive test of the three-gauge-boson vertices is expected to come from the work at LEPII, in which the reaction e⁺e⁻ → W⁺W⁻ can occur. Two calculations of radiative corrections to the reaction e⁺e⁻ → W⁺W⁻ exist at present. The results of the calculations, although very similar, disagree with one another as to the actual magnitude of the correction. Some of the reasons for the disagreement are understood. However, due to the reasons mentioned below, another look must be taken at these lengthy calculations to resolve the differences between the two previous calculations. This is what is being done in the present work. There are a number of reasons why we must take another look at the calculation of the radiative corrections. The previous calculations were carried out before the UA1 and UA2 data on W and Z bosons were obtained. Experimental groups require a computer program which can readily calculate the radiative corrections ab initio for various experimental conditions. The normalization of sin²θW in the previous calculations was done in a way which is not convenient for use in the experimental work. It would be desirable to have the analytical expressions for the corrections available so that the renormalization scheme dependence of the corrections could be studied.

  20. Correction of dental artifacts within the anatomical surface in PET/MRI using active shape models and k-nearest-neighbors

    DEFF Research Database (Denmark)

    Ladefoged, Claes N.; Andersen, Flemming L.; Keller, Sune H.

    2014-01-01

    In combined PET/MR, attenuation correction (AC) is performed indirectly based on the available MR image information. Metal-implant-induced susceptibility artifacts and subsequent signal voids challenge MR-based AC. Several papers acknowledge the problem in PET attenuation correction when dental...... artifacts are ignored, but none of them attempts to solve the problem. We propose a clinically feasible correction method which combines Active Shape Models (ASM) and k-Nearest-Neighbors (kNN) into a simple approach which finds and corrects the dental artifacts within the surface boundaries of the patient...... anatomy. ASM is used to locate a number of landmarks in the T1-weighted MR image of a new patient. We calculate a vector of offsets from each voxel within a signal void to each of the landmarks. We then use kNN to classify each voxel as belonging to an artifact or an actual signal void using this offset...

  1. Two-loop corrections for nuclear matter in the Walecka model

    International Nuclear Information System (INIS)

    Furnstahl, R.J.; Perry, R.J.; Serot, B.D.; Department of Physics, The Ohio State University, Columbus, Ohio 43210; Physics Department and Nuclear Theory Center, Indiana University, Bloomington, Indiana 47405)

    1989-01-01

    Two-loop corrections for nuclear matter, including vacuum polarization, are calculated in the Walecka model to study the loop expansion as an approximation scheme for quantum hadrodynamics. Criteria for useful approximation schemes are discussed, and the concepts of strong and weak convergence are introduced. The two-loop corrections are evaluated first with one-loop parameters and mean fields and then by minimizing the total energy density with respect to the scalar field and refitting parameters to empirical nuclear matter saturation properties. The size and nature of the corrections indicate that the loop expansion is not convergent at two-loop order in either the strong or weak sense. Prospects for alternative approximation schemes are discussed

  2. Lowered threshold energy for femtosecond laser induced optical breakdown in a water based eye model by aberration correction with adaptive optics.

    Science.gov (United States)

    Hansen, Anja; Géneaux, Romain; Günther, Axel; Krüger, Alexander; Ripken, Tammo

    2013-06-01

    In femtosecond laser ophthalmic surgery, tissue dissection is achieved by photodisruption based on laser-induced optical breakdown. In order to minimize collateral damage to the eye, laser surgery systems should be optimized towards the lowest possible energy threshold for photodisruption. However, optical aberrations of the eye and the laser system distort the irradiance distribution from an ideal profile, which causes a rise in breakdown threshold energy even if great care is taken to minimize the aberrations of the system during design and alignment. In this study we used a water chamber with an achromatic focusing lens and a scattering sample as an eye model and determined the breakdown threshold in single-pulse plasma transmission loss measurements. Due to aberrations, the precise lower limit for breakdown threshold irradiance in water is still unknown. Here we show that the threshold energy can be substantially reduced when using adaptive optics to improve the irradiance distribution by spatial beam shaping. We found that for initial aberrations with a root-mean-square wavefront error of only one third of the wavelength, the threshold energy can still be reduced by a factor of three if the aberrations are corrected to the diffraction limit by adaptive optics. The transmitted pulse energy is reduced by 17% at twice the threshold. Furthermore, the gas bubble motions after breakdown for pulse trains at a 5 kHz repetition rate show a more transverse direction in the corrected case compared to the more spherical distribution without correction. Our results demonstrate how both applied and transmitted pulse energy could be reduced during ophthalmic surgery when correcting for aberrations. As a consequence, the risk of retinal damage by transmitted energy and the extent of collateral damage to the focal volume could be minimized accordingly when using adaptive optics in fs-laser surgery.

  3. Simple liquid models with corrected dielectric constants

    Science.gov (United States)

    Fennell, Christopher J.; Li, Libo; Dill, Ken A.

    2012-01-01

    Molecular simulations often use explicit-solvent models. Sometimes explicit-solvent models can give inaccurate values for basic liquid properties, such as the density, heat capacity, and permittivity, as well as inaccurate values for molecular transfer free energies. Such errors have motivated the development of more complex solvents, such as polarizable models. We describe an alternative here. We give new fixed-charge models of solvents for molecular simulations – water, carbon tetrachloride, chloroform and dichloromethane. Normally, such solvent models are parameterized to agree with experimental values of the neat liquid density and enthalpy of vaporization. Here, in addition to those properties, our parameters are chosen to give the correct dielectric constant. We find that these new parameterizations also happen to give better values for other properties, such as the self-diffusion coefficient. We believe that parameterizing fixed-charge solvent models to fit experimental dielectric constants may provide better and more efficient ways to treat solvents in computer simulations. PMID:22397577

  4. Improved Model for Depth Bias Correction in Airborne LiDAR Bathymetry Systems

    Directory of Open Access Journals (Sweden)

    Jianhu Zhao

    2017-07-01

    Full Text Available Airborne LiDAR bathymetry (ALB) is efficient and cost-effective in obtaining shallow water topography, but often produces a low-accuracy sounding solution due to the effects of ALB measurements and ocean hydrological parameters. In bathymetry estimates, peak shifting of the green bottom return caused by pulse stretching induces depth bias, which is the largest error source in ALB depth measurements. The traditional depth bias model is often applied to reduce the depth bias, but it is insufficient when used with various ALB system parameters and ocean environments. Therefore, an accurate model that considers all of the influencing factors must be established. In this study, an improved depth bias model is developed through stepwise regression in consideration of the water depth, laser beam scanning angle, sensor height, and suspended sediment concentration. The proposed improved model and a traditional one are used in an experiment. The results show that the systematic deviation of depth bias corrected by the traditional and improved models is reduced significantly. Standard deviations of 0.086 and 0.055 m are obtained with the traditional and improved models, respectively. The accuracy of the ALB-derived depth corrected by the improved model is better than that corrected by the traditional model.
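    The improved model is built by stepwise regression over candidate predictors (water depth, scanning angle, sensor height, suspended sediment concentration). A minimal forward-selection sketch using statsmodels, with synthetic data standing in for ALB measurements; the paper's actual selection criterion may differ.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X = {"depth": rng.uniform(1, 20, n), "scan_angle": rng.uniform(0, 20, n),
     "sensor_height": rng.uniform(300, 500, n), "ssc": rng.uniform(0, 50, n)}
# Synthetic depth bias: depends on depth and SSC only (plus noise).
y = 0.02 * X["depth"] + 0.004 * X["ssc"] + rng.normal(0, 0.02, n)

selected, remaining = [], list(X)
while remaining:
    # Try adding each remaining predictor; keep the best if significant.
    pvals = {}
    for cand in remaining:
        cols = np.column_stack([X[k] for k in selected + [cand]])
        fit = sm.OLS(y, sm.add_constant(cols)).fit()
        pvals[cand] = fit.pvalues[-1]          # p-value of the new term
    best = min(pvals, key=pvals.get)
    if pvals[best] > 0.05:                     # entry threshold
        break
    selected.append(best); remaining.remove(best)

print(selected)                                # expect ['depth', 'ssc']
```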

  5. Quantification of hepatic steatosis with T1-independent, T2-corrected MR imaging with spectral modeling of fat: blinded comparison with MR spectroscopy.

    Science.gov (United States)

    Meisamy, Sina; Hines, Catherine D G; Hamilton, Gavin; Sirlin, Claude B; McKenzie, Charles A; Yu, Huanzhou; Brittain, Jean H; Reeder, Scott B

    2011-03-01

    To prospectively compare an investigational version of a complex-based chemical shift-based fat fraction magnetic resonance (MR) imaging method with MR spectroscopy for the quantification of hepatic steatosis. This study was approved by the institutional review board and was HIPAA compliant. Written informed consent was obtained before all studies. Fifty-five patients (31 women, 24 men; age range, 24-71 years) were prospectively imaged at 1.5 T with quantitative MR imaging and single-voxel MR spectroscopy, each within a single breath hold. The effects of T2 correction, spectral modeling of fat, and magnitude fitting for eddy current correction on fat quantification with MR imaging were investigated by reconstructing fat fraction images from the same source data with different combinations of error correction. Single-voxel T2-corrected MR spectroscopy was used to measure fat fraction and served as the reference standard. All MR spectroscopy data were postprocessed at a separate institution by an MR physicist who was blinded to MR imaging results. Fat fractions measured with MR imaging and MR spectroscopy were compared statistically to determine the correlation (r²), and the slope and intercept as measures of agreement between MR imaging and MR spectroscopy fat fraction measurements, to determine whether MR imaging can help quantify fat, and to examine the importance of T2 correction, spectral modeling of fat, and eddy current correction. Two-sided t tests (significance level, P = .05) were used to determine whether estimated slopes and intercepts were significantly different from 1.0 and 0.0, respectively. Sensitivity and specificity for the classification of clinically significant steatosis were evaluated. Overall, there was excellent correlation between MR imaging and MR spectroscopy for all reconstruction combinations. However, agreement was only achieved when T2 correction, spectral modeling of fat, and magnitude fitting for eddy current correction were used (r²

  6. A rank-based approach for correcting systematic biases in spatial disaggregation of coarse-scale climate simulations

    Science.gov (United States)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2017-07-01

    Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation; at finer spatial scales biases are re-introduced by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias especially for both low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank-space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages especially in the simulation of particularly low and high downscaled precipitation amounts.
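    The key modification is to model the multiplicative anomaly (fine-scale over disaggregated precipitation) as a function of rank rather than using a single mean field. A minimal sketch of a rank-dependent correction for one grid cell, with synthetic values; the operational BCSD details are not reproduced.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(1)
coarse = rng.gamma(2.0, 5.0, 1000)             # disaggregated precipitation
fine = coarse * rng.lognormal(0.1, 0.4, 1000)  # observed fine-scale values

anomaly = fine / coarse                        # multiplicative anomaly series
ranks = rankdata(coarse) / len(coarse)         # empirical rank of each value

# Bin anomalies by rank so dry and wet extremes get their own correction.
bins = np.linspace(0, 1, 11)
idx = np.clip(np.digitize(ranks, bins) - 1, 0, 9)
rank_anomaly = np.array([anomaly[idx == b].mean() for b in range(10)])

def downscale(new_coarse):
    """Apply the rank-dependent anomaly instead of a single mean anomaly."""
    r = np.searchsorted(np.sort(coarse), new_coarse) / len(coarse)
    b = np.clip(np.digitize(r, bins) - 1, 0, 9)
    return new_coarse * rank_anomaly[b]

print(downscale(np.array([1.0, 10.0, 40.0])))
```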

  7. Imputation of Housing Rents for Owners Using Models With Heckman Correction

    Directory of Open Access Journals (Sweden)

    Beat Hulliger

    2012-07-01

    Full Text Available The direct incomes of owners and tenants of dwellings are not comparable, since owners have a hidden income from the investment in their dwelling. This hidden income is considered part of the disposable income of owners. It may be predicted with the help of a linear model of the rent. Since such a model must be developed and estimated for tenants with observed market rents, a selection bias may occur. The selection bias can be minimised through a Heckman correction. The paper applies the Heckman correction to data from the Swiss Statistics on Income and Living Conditions. The Heckman method is adapted to the survey context, the modelling process including the choice of covariates is explained, and the effect of prediction using the model is discussed.
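    Heckman's two-step correction first models the selection (being a tenant with an observed market rent) with a probit, then adds the inverse Mills ratio as a regressor in the rent equation. A self-contained sketch with simulated data; the variable names are illustrative, not those of the Swiss SILC, and in practice the selection equation would include an exclusion restriction.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 5000
rooms = rng.uniform(1, 6, n)
urban = rng.binomial(1, 0.5, n)
u = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], n)  # correlated errors

# Selection: tenancy status; rent is observed only for tenants.
tenant = (0.3 + 0.8 * urban - 0.2 * rooms + u[:, 0] > 0)
log_rent = 6.0 + 0.25 * rooms + 0.15 * urban + 0.3 * u[:, 1]

# Step 1: probit for selection on the full sample.
Z = sm.add_constant(np.column_stack([urban, rooms]))
probit = sm.Probit(tenant.astype(int), Z).fit(disp=False)
zb = Z @ probit.params
mills = norm.pdf(zb) / norm.cdf(zb)            # inverse Mills ratio

# Step 2: OLS on the selected sample, augmented with the Mills ratio.
Xs = sm.add_constant(np.column_stack([rooms, urban, mills]))[tenant]
ols = sm.OLS(log_rent[tenant], Xs).fit()
print(ols.params)   # slope estimates close to the true 0.25 and 0.15
```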

  8. Radiative corrections for semileptonic decays of hyperons: the 'model independent' part

    International Nuclear Information System (INIS)

    Toth, K.; Szegoe, K.; Margaritis, T.

    1984-04-01

    The 'model independent' part of the order α radiative correction due to virtual photon exchanges and inner bremsstrahlung is studied for semileptonic decays of hyperons. Numerical results of high accuracy are given for the relative correction to the branching ratio, the electron energy spectrum and the (E_e, E_f) Dalitz distribution in the case of four different decays. (author)

  9. Correcting abnormal speaking through communication partners ...

    African Journals Online (AJOL)

    The listed characteristics are called speech disorders. Abnormal speaking attracts penalties for the speaker. The penalties are usually so disturbing to the speaker that undertaking therapeutic measures becomes inevitable. The communication partners strategy is a speech correction approach which makes use of ...

  10. Therapeutic Effects of Extinction Learning as a Model of Exposure Therapy in Rats

    Science.gov (United States)

    Fucich, Elizabeth A; Paredes, Denisse; Morilak, David A

    2016-01-01

    Current treatments for stress-related psychiatric disorders, such as depression and posttraumatic stress disorder (PTSD), are inadequate. Cognitive behavioral psychotherapies, including exposure therapy, are an alternative to pharmacotherapy, but the neurobiological mechanisms are unknown. Preclinical models demonstrating therapeutic effects of behavioral interventions are required to investigate such mechanisms. Exposure therapy bears similarity to extinction learning. Thus, we investigated the therapeutic effects of extinction learning as a behavioral intervention to model exposure therapy in rats, testing its effectiveness in reversing chronic stress-induced deficits in cognitive flexibility and coping behavior that resemble dimensions of depression and PTSD. Rats were fear-conditioned by pairing a tone with footshock, and then exposed to chronic unpredictable stress (CUS) that induces deficits in cognitive set-shifting and active coping behavior. They then received an extinction learning session as a therapeutic intervention by repeated exposure to the tone with no shock. Effects on cognitive flexibility and coping behavior were assessed 24 h later on the attentional set-shifting test or shock-probe defensive burying test, respectively. Extinction reversed the CUS-induced deficits in cognitive flexibility and coping behavior, and increased phosphorylation of ribosomal protein S6 in the medial prefrontal cortex (mPFC) of stress-compromised rats, suggesting a role for activity-dependent protein synthesis in the therapeutic effect. Inhibiting protein synthesis by microinjecting anisomycin into mPFC blocked the therapeutic effect of extinction on cognitive flexibility. These results demonstrate the utility of extinction as a model by which to study mechanisms underlying exposure therapy, and suggest these mechanisms involve protein synthesis in the mPFC, the further study of which may identify novel therapeutic targets. PMID:27417516

  11. An identification problem: correcting an analytical model using experimental data

    Directory of Open Access Journals (Sweden)

    Gabriela Covatariu

    2009-01-01

    Full Text Available The procedure for correcting an analytical model adopted for a building structure is preceded by a comparison between the experimental data set and the analytical one, as a preliminary check that the two correspond reasonably well. For dynamic parameter identification, various methods for correcting the stiffness and damping matrices have been developed, based on the least-squares method in the frequency domain. The proposed algorithm results in the correction of the stiffness matrix of a computational model, using as input data only those recorded during the experimental tests.

  12. A theoretical Markov chain model for evaluating correctional ...

    African Journals Online (AJOL)

    In this paper a stochastic method is applied to the study of the long-term effect of confinement in a correctional institution on the behaviour of a person with criminal tendencies. The approach used is a Markov chain, which uses past history to predict the state of a system in the future. A model is developed for comparing the ...
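    A Markov-chain model of this kind reduces to a transition matrix over behavioural states; long-run behaviour follows from matrix powers or the stationary distribution. A minimal sketch with hypothetical states and transition probabilities (the paper's actual states and estimates are not given in the abstract):

```python
import numpy as np

# Hypothetical yearly transition probabilities between behavioural states
# while confined: 0 = high criminal tendency, 1 = moderate, 2 = reformed.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.0, 0.1, 0.9]])

state = np.array([1.0, 0.0, 0.0])      # everyone starts in the high state

for year in range(1, 6):               # distribution after each year
    state = state @ P
    print(year, np.round(state, 3))

# Long-run (stationary) distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
print(np.round(pi / pi.sum(), 3))
```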

  13. An effort allocation model considering different budgetary constraint on fault detection process and fault correction process

    Directory of Open Access Journals (Sweden)

    Vijay Kumar

    2016-01-01

    Full Text Available The fault detection process (FDP) and fault correction process (FCP) are important phases of the software development life cycle (SDLC). It is essential for software to undergo a testing phase, during which faults are detected and corrected. The main goal of this article is to allocate the testing resources in an optimal manner so as to minimize the cost during the testing phase, using the FDP and FCP under a dynamic environment. In this paper, we first assume there is a time lag between fault detection and fault correction; thus, removal of a fault is performed after the fault is detected. In addition, the detection process and correction process are taken to be independent simultaneous activities with different budgetary constraints. A structured optimal policy based on optimal control theory is proposed for software managers to optimize the allocation of the limited resources under the reliability criteria. Furthermore, the release policy for the proposed model is also discussed. A numerical example is given in support of the theoretical results.

  14. Photometric correction for an optical CCD-based system based on the sparsity of an eight-neighborhood gray gradient.

    Science.gov (United States)

    Zhang, Yuzhong; Zhang, Yan

    2016-07-01

    In an optical measurement and analysis system based on a CCD, optical vignetting and natural vignetting cause photometric distortion, in which the intensity falls off away from the image center; this severely affects subsequent processing and measurement precision. To deal with this problem, an easy and straightforward method for photometric distortion correction is presented in this paper. The method introduces a simple polynomial fitting model of the photometric distortion function and employs a particle swarm optimization algorithm to obtain the model parameters by minimizing the eight-neighborhood gray gradient. Compared with conventional calibration methods, this method can obtain the profile of the photometric distortion from only a single common image captured by the optical CCD-based system, with no need for a uniform-luminance area source as a standard reference or for prior knowledge of the relevant optical and geometric parameters. To illustrate the applicability of this method, numerical simulations and photometric distortions with different lens parameters are evaluated in this paper. Moreover, an application example of temperature field correction for casting billets also demonstrates the effectiveness of the method. The experimental results show that the proposed method achieves a maximum absolute error for vignetting estimation of 0.0765 and a relative error for vignetting estimation from different background images of 3.86%.
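    The idea of fitting a vignetting polynomial by minimizing the eight-neighborhood gray gradient can be sketched as follows. Here a generic optimizer (Nelder-Mead) stands in for the paper's particle swarm optimization, and the two-term radial polynomial and synthetic image are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def make_radius2(shape):
    cy, cx = (shape[0] - 1) / 2, (shape[1] - 1) / 2
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    r2 = (ys - cy) ** 2 + (xs - cx) ** 2
    return r2 / r2.max()                           # normalized squared radius

def neighborhood_gradient(img):
    """Mean absolute gray gradient over the 8-neighborhood (interior pixels)."""
    c = img[1:-1, 1:-1]
    total = np.zeros_like(c)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy or dx:
                total += np.abs(c - img[1 + dy:img.shape[0] - 1 + dy,
                                        1 + dx:img.shape[1] - 1 + dx])
    return total.mean()

rng = np.random.default_rng(3)
flat = np.full((64, 64), 100.0) + rng.normal(0, 1, (64, 64))
r2 = make_radius2(flat.shape)
observed = flat * (1 - 0.4 * r2 + 0.1 * r2**2)     # synthetic vignetting

def objective(params):
    a, b = params
    gain = 1 - a * r2 + b * r2**2                  # polynomial vignetting model
    return neighborhood_gradient(observed / gain)  # small gradient when correct

res = minimize(objective, x0=[0.0, 0.0], method="Nelder-Mead")
print(np.round(res.x, 2))                          # roughly recovers [0.4, 0.1]
```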

  15. The L0 Regularized Mumford-Shah Model for Bias Correction and Segmentation of Medical Images.

    Science.gov (United States)

    Duan, Yuping; Chang, Huibin; Huang, Weimin; Zhou, Jiayin; Lu, Zhongkang; Wu, Chunlin

    2015-11-01

    We propose a new variant of the Mumford-Shah model for simultaneous bias correction and segmentation of images with intensity inhomogeneity. First, based on the model of images with intensity inhomogeneity, we introduce an L0 gradient regularizer to model the true intensity and a smooth regularizer to model the bias field. In addition, we derive a new data fidelity using the local intensity properties to allow the bias field to be influenced by its neighborhood. Second, we use a two-stage segmentation method, where the fast alternating direction method is implemented in the first stage for the recovery of true intensity and bias field and a simple thresholding is used in the second stage for segmentation. Different from most of the existing methods for simultaneous bias correction and segmentation, we estimate the bias field and true intensity without fixing either the number of the regions or their values in advance. Our method has been validated on medical images of various modalities with intensity inhomogeneity. Compared with the state-of-art approaches and the well-known brain software tools, our model is fast, accurate, and robust with initializations.

  16. Imaging enabled platforms for development of therapeutics

    Science.gov (United States)

    Celli, Jonathan; Rizvi, Imran; Blanden, Adam R.; Evans, Conor L.; Abu-Yousif, Adnan O.; Spring, Bryan Q.; Muzikansky, Alona; Pogue, Brian W.; Finkelstein, Dianne M.; Hasan, Tayyaba

    2011-03-01

    Advances in imaging and spectroscopic technologies have enabled the optimization of many therapeutic modalities in cancer and noncancer pathologies either by earlier disease detection or by allowing therapy monitoring. Amongst the therapeutic options benefiting from developments in imaging technologies, photodynamic therapy (PDT) is exceptional. PDT is a photochemistry-based therapeutic approach where a light-sensitive molecule (photosensitizer) is activated with light of appropriate energy (wavelength) to produce reactive molecular species such as free radicals and singlet oxygen. These molecular entities then react with biological targets such as DNA, membranes and other cellular components to impair their function and lead to eventual cell and tissue death. Development of PDT-based imaging also provides a platform for rapid screening of new therapeutics in novel in vitro models prior to expensive and labor-intensive animal studies. In this study we demonstrate how an imaging platform can be used for strategizing a novel combination treatment strategy for multifocal ovarian cancer. Using an in vitro 3D model for micrometastatic ovarian cancer in conjunction with quantitative imaging we examine dose and scheduling strategies for PDT in combination with carboplatin, a chemotherapeutic agent presently in clinical use for management of this deadly form of cancer.

  17. Predicting Social Anxiety Treatment Outcome Based on Therapeutic Email Conversations.

    Science.gov (United States)

    Hoogendoorn, Mark; Berger, Thomas; Schulz, Ava; Stolz, Timo; Szolovits, Peter

    2017-09-01

    Predicting therapeutic outcome in the mental health domain is of utmost importance to enable therapists to provide the most effective treatment to a patient. The writings of a patient can potentially be a valuable source of information, especially now that more and more treatments involve computer-based exercises or electronic conversations between patient and therapist. In this paper, we study predictive modeling using the writings of patients under treatment for a social anxiety disorder. We extract a wealth of information from the text written by patients, including their usage of words, the topics they talk about, the sentiment of the messages, and the style of writing. In addition, we study trends over time with respect to those measures. We then apply machine learning algorithms to generate the predictive models. Based on a dataset of 69 patients, we are able to show that we can predict therapy outcome with an area under the curve of 0.83 halfway through the therapy and with a precision of 0.78 when using the full data (i.e., the entire treatment period). Due to the limited number of participants, it is hard to generalize the results, but they do show the great potential of this type of information.
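    A typical pipeline for this kind of prediction vectorizes the patients' messages and feeds them to a classifier evaluated by AUC. A minimal scikit-learn sketch on placeholder text; the paper's exact features (topics, sentiment, style, temporal trends) and model are not reproduced here.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder corpus: one concatenated message history per patient,
# labeled 1 for treatment response and 0 otherwise.
texts = ["i felt anxious at the party but stayed", "i avoided the meeting again",
         "i practiced the exercise and it helped", "i could not leave the house",
         "talking to strangers got easier", "my fear of judgement is the same",
         "i joined a group discussion today", "i cancelled all my plans"]
labels = np.array([1, 0, 1, 0, 1, 0, 1, 0])

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))

# Cross-validated AUC; with real data, word/topic/sentiment/style features
# and per-period trends would be added alongside the TF-IDF terms.
auc = cross_val_score(model, texts, labels, cv=4, scoring="roc_auc")
print(auc.mean())
```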

  18. Aligning Animal Models of Clinical Germinal Matrix Hemorrhage, From Basic Correlation to Therapeutic Approach.

    Science.gov (United States)

    Lekic, Tim; Klebe, Damon; Pichon, Pilar; Brankov, Katarina; Sultan, Sally; McBride, Devin; Casel, Darlene; Al-Bayati, Alhamza; Ding, Yan; Tang, Jiping; Zhang, John H

    2017-01-01

    Germinal matrix hemorrhage is a leading cause of mortality and morbidity from prematurity. This brain region is vulnerable to bleeding and re-bleeding within the first 72 hours of preterm life. Cerebroventricular expansion of blood products contributes to the mechanisms of brain injury. Consequences include lifelong hydrocephalus, cerebral palsy, and intellectual disability. Unfortunately, little is known about the therapeutic needs of this patient population. This review discusses the mechanisms of germinal matrix hemorrhage, the animal models utilized, and the potential therapeutic targets. Potential therapeutic approaches identified in pre-clinical investigations include corticosteroid therapy, iron chelator administration, and transforming growth factor-β pathway modulation, which all warrant further investigation. Thus, effective preclinical modeling is essential for elucidating and evaluating novel therapeutic approaches, ahead of clinical consideration. Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  19. Novel therapeutic strategies against AIDS progression based on the ...

    African Journals Online (AJOL)

    Novel therapeutic strategies against AIDS progression based on the pathogenic effects of HIV-1 and Vpr proteins. Ahmed A Azad. Abstract: No abstract. Discovery and Innovation Vol. 17, 2005: 52-60.

  20. Therapeutic potential of gel-based injectables for vocal fold regeneration

    Science.gov (United States)

    Bartlett, Rebecca S.; Thibeault, Susan L.; Prestwich, Glenn D.

    2012-01-01

    Vocal folds are anatomically and biomechanically unique, thus complicating the design and implementation of tissue engineering strategies for repair and regeneration. Integration of an enhanced understanding of tissue biomechanics, wound healing dynamics and innovative gel-based therapeutics has generated enthusiasm for the notion that an efficacious treatment for vocal fold scarring could be clinically attainable within several years. Fibroblast phenotype and gene expression are mediated by the three-dimensional mechanical and chemical microenvironment at an injury site. Thus, therapeutic approaches need to coordinate spatial and temporal aspects of the wound healing response in an injured vocal tissue to achieve an optimal clinical outcome. Successful gel-based injectables for vocal fold scarring will require a keen understanding of how the native inflammatory response sets into motion the later extracellular matrix remodeling, which in turn will determine the ultimate biomechanical properties of the tissue. We present an overview of the challenges associated with this translation as well as the proposed gel-based injectable solutions. PMID:22456756

  1. Therapeutic potential of gel-based injectables for vocal fold regeneration

    International Nuclear Information System (INIS)

    Bartlett, Rebecca S; Thibeault, Susan L; Prestwich, Glenn D

    2012-01-01

    Vocal folds are anatomically and biomechanically unique, thus complicating the design and implementation of tissue engineering strategies for repair and regeneration. Integration of an enhanced understanding of tissue biomechanics, wound healing dynamics and innovative gel-based therapeutics has generated enthusiasm for the notion that an efficacious treatment for vocal fold scarring could be clinically attainable within several years. Fibroblast phenotype and gene expression are mediated by the three-dimensional mechanical and chemical microenvironment at an injury site. Thus, therapeutic approaches need to coordinate spatial and temporal aspects of the wound healing response in an injured vocal tissue to achieve an optimal clinical outcome. Successful gel-based injectables for vocal fold scarring will require a keen understanding of how the native inflammatory response sets into motion the later extracellular matrix remodeling, which in turn will determine the ultimate biomechanical properties of the tissue. We present an overview of the challenges associated with this translation as well as the proposed gel-based injectable solutions. (paper)

  2. Parton distribution functions with QED corrections in the valon model

    Science.gov (United States)

    Mottaghizadeh, Marzieh; Taghavi Shahri, Fatemeh; Eslami, Parvin

    2017-10-01

    The parton distribution functions (PDFs) with QED corrections are obtained by solving the QCD ⊗ QED DGLAP evolution equations in the framework of the “valon” model at the next-to-leading-order QCD and the leading-order QED approximations. Our results for the PDFs with QED corrections in this phenomenological model are in good agreement with the recently released CT14QED global fit code [Phys. Rev. D 93, 114015 (2016), 10.1103/PhysRevD.93.114015] and the APFEL (NNPDF2.3QED) program [Comput. Phys. Commun. 185, 1647 (2014), 10.1016/j.cpc.2014.03.007] over the wide range x = [10^-5, 1] and Q^2 = [0.283, 10^8] GeV^2. The model calculations agree rather well with those codes. We also proposed a new method for studying the symmetry breaking of the sea quark distribution functions inside the proton.

  3. Quadratic Regression-based Non-uniform Response Correction for Radiochromic Film Scanners

    International Nuclear Information System (INIS)

    Jeong, Hae Sun; Kim, Chan Hyeong; Han, Young Yih; Kum, O Yeon

    2009-01-01

    In recent years, several types of radiochromic films have been extensively used for two-dimensional dose measurements such as dosimetry in radiotherapy as well as imaging and radiation protection applications. One of the critical aspects of radiochromic film dosimetry is an accurate readout from the scanner without dose distortion. However, most charge-coupled device (CCD) scanners used for the optical density readout of the film employ a fluorescent lamp or a cold-cathode lamp as the light source, which leads to a significant amount of light scattering on the active layer of the film. Due to this light scattering, dose distortions with non-uniform responses are produced even when the film is uniformly irradiated. In order to correct the distorted doses, a method based on correction factors (CF) has been reported and used. However, predicting the real incident doses is difficult when arbitrary doses are delivered to the film, since the CF-based correction is applicable only when the incident doses are already known. In a previous study, therefore, a pixel-based algorithm with linear regression was developed to correct the dose distortion of a flatbed scanner and to estimate the initial doses. The result, however, was not very good in some cases, especially when the incident dose was under approximately 100 cGy. In the present study, the problem was addressed by replacing the linear regression with quadratic regression. The doses corrected with this method were also compared with the results of other conventional methods.
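
    A minimal sketch of the per-pixel quadratic fit, assuming calibration scans of K >= 3 uniformly irradiated films with known delivered doses; the array layout and all names are illustrative, not the authors' implementation.

        import numpy as np

        def fit_quadratic_correction(readings, delivered_doses):
            """Fit, for every pixel, a quadratic map from the distorted scanner
            reading to the known uniformly delivered dose.
            readings:        (K, H, W) scans of K uniformly irradiated films
            delivered_doses: (K,) dose delivered to each calibration film
            """
            K, H, W = readings.shape
            r = readings.reshape(K, -1).astype(float)
            coeffs = np.empty((3, H * W))
            for j in range(H * W):           # explicit per-pixel least squares
                coeffs[:, j] = np.polyfit(r[:, j], delivered_doses, 2)
            a, b, c = (m.reshape(H, W) for m in coeffs)
            return a, b, c

        def correct_scan(reading, a, b, c):
            """Map a distorted readout back to an estimated incident dose."""
            return a * reading**2 + b * reading + c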

  4. Sociopathic Knowledge Bases: Correct Knowledge Can Be Harmful Even Given Unlimited Computation

    Science.gov (United States)

    1989-08-01

    Sociopathic Knowledge Bases: Correct Knowledge Can Be Harmful Even Given Unlimited Computation, by David C. Wilkins and Yong... Probabilistic rules are shown to be sociopathic and so this problem is very widespread. Sociopathicity has important consequences for rule induction.

  5. Ectromelia Virus Infections of Mice as a Model to Support the Licensure of Anti-Orthopoxvirus Therapeutics

    Directory of Open Access Journals (Sweden)

    R. Mark Buller

    2010-09-01

    Full Text Available The absence of herd immunity to orthopoxviruses and the concern that variola or monkeypox viruses could be used for bioterroristic activities have stimulated the development of therapeutics and safer prophylactics. One major limitation in this process is the lack of accessible human orthopoxvirus infections for clinical efficacy trials; however, drug licensure can be based on orthopoxvirus animal challenge models as described in the “Animal Efficacy Rule”. One such challenge model uses ectromelia virus, an orthopoxvirus whose natural host is the mouse and which is the etiological agent of mousepox. The genetic similarity of ectromelia virus to variola and monkeypox viruses, the common features of the resulting disease, and the convenience of the mouse as a laboratory animal underscore its utility in the study of orthopoxvirus pathogenesis and in the development of therapeutics and prophylactics. In this review we outline how mousepox has been used as a model for smallpox. We also discuss mousepox in the context of mouse strain, route of infection, infectious dose, disease progression, and recovery from infection.

  6. An introduction to Bartlett correction and bias reduction

    CERN Document Server

    Cordeiro, Gauss M

    2014-01-01

    This book presents a concise introduction to Bartlett and Bartlett-type corrections of statistical tests and bias correction of point estimators. The underlying idea behind both groups of corrections is to obtain higher accuracy in small samples. While the main focus is on corrections that can be analytically derived, the authors also present alternative strategies for improving estimators and tests based on bootstrap, a data resampling technique, and discuss concrete applications to several important statistical models.

  7. Empirical Correction to the Likelihood Ratio Statistic for Structural Equation Modeling with Many Variables.

    Science.gov (United States)

    Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu

    2015-06-01

    Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of an SEM model is T_ML, a slight modification to the likelihood ratio statistic. Under the normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not too small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correct T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that the empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict the type I errors of T_ML reported in the literature, and they perform well.
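
    The correction principle can be illustrated in a few lines: choose a factor so the mean of the rescaled statistic matches the nominal degrees of freedom. The simulated values below are hypothetical placeholders, not the paper's formulation.

        import numpy as np

        def empirical_bartlett_factor(t_ml_sims, df):
            """Bartlett-type factor c chosen so that mean(T_ML / c) = df."""
            return float(np.mean(t_ml_sims)) / df

        # Toy illustration: pretend T_ML simulated under the correct model is
        # a 20% inflated chi-square (hypothetical numbers).
        rng = np.random.default_rng(1)
        df = 50
        t_sims = 1.2 * rng.chisquare(df, size=5000)
        c = empirical_bartlett_factor(t_sims, df)
        t_corrected = t_sims / c             # mean is now approximately df
        print(round(c, 3), round(t_corrected.mean(), 1))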

  8. Towards Compensation Correctness in Interactive Systems

    Science.gov (United States)

    Vaz, Cátia; Ferreira, Carla

    One fundamental idea of service-oriented computing is that applications should be developed by composing already available services. Due to the long-running nature of service interactions, a main challenge in service composition is ensuring correctness of failure recovery. In this paper, we use a process calculus suitable for modelling long-running transactions with a recovery mechanism based on compensations. Within this setting, we discuss and formally state correctness criteria for compositions of compensable processes, assuming that each process is correct with respect to failure recovery. Under our theory, we formally interpret self-healing compositions, which can detect and recover from failures, as correct compositions of compensable processes.

  9. Renormalisation group corrections to the littlest seesaw model and maximal atmospheric mixing

    International Nuclear Information System (INIS)

    King, Stephen F.; Zhang, Jue; Zhou, Shun

    2016-01-01

    The Littlest Seesaw (LS) model involves two right-handed neutrinos and a very constrained Dirac neutrino mass matrix, involving one texture zero and two independent Dirac masses, leading to a highly predictive scheme in which all neutrino masses and the entire PMNS matrix are successfully predicted in terms of just two real parameters. We calculate the renormalisation group (RG) corrections to the LS predictions, with and without supersymmetry, including also the threshold effects induced by the decoupling of heavy Majorana neutrinos, both analytically and numerically. We find that the predictions for neutrino mixing angles and mass ratios are rather stable under RG corrections. For example, we find that the LS model with RG corrections predicts close to maximal atmospheric mixing, θ_23 = 45° ± 1°, in most considered cases, in tension with the latest NOvA results. The techniques used here apply to other seesaw models with a strong normal mass hierarchy.

  10. Renormalisation group corrections to the littlest seesaw model and maximal atmospheric mixing

    Energy Technology Data Exchange (ETDEWEB)

    King, Stephen F. [School of Physics and Astronomy, University of Southampton,SO17 1BJ Southampton (United Kingdom); Zhang, Jue [Center for High Energy Physics, Peking University,Beijing 100871 (China); Zhou, Shun [Center for High Energy Physics, Peking University,Beijing 100871 (China); Institute of High Energy Physics, Chinese Academy of Sciences,Beijing 100049 (China)

    2016-12-06

    The Littlest Seesaw (LS) model involves two right-handed neutrinos and a very constrained Dirac neutrino mass matrix, involving one texture zero and two independent Dirac masses, leading to a highly predictive scheme in which all neutrino masses and the entire PMNS matrix are successfully predicted in terms of just two real parameters. We calculate the renormalisation group (RG) corrections to the LS predictions, with and without supersymmetry, including also the threshold effects induced by the decoupling of heavy Majorana neutrinos, both analytically and numerically. We find that the predictions for neutrino mixing angles and mass ratios are rather stable under RG corrections. For example, we find that the LS model with RG corrections predicts close to maximal atmospheric mixing, θ_23 = 45° ± 1°, in most considered cases, in tension with the latest NOvA results. The techniques used here apply to other seesaw models with a strong normal mass hierarchy.

  11. Imaging findings and therapeutic alternatives for peripheral vascular malformations

    International Nuclear Information System (INIS)

    Monsignore, Lucas Moretti; Nakiri, Guilherme Seizem; Santos, Daniela dos; Abud, Thiago Giansante; Abud, Daniel Giansante

    2010-01-01

    Peripheral vascular malformations represent a spectrum of lesions that appear throughout life and can be found in the whole body. Such lesions are uncommon and are frequently confused with infantile hemangioma, a common benign neoplastic lesion. In the presence of such lesions, the correlation between the clinical and radiological findings is extremely important to achieve a correct diagnosis, which will guide the best therapeutic approach. The most recent classifications for peripheral vascular malformations are based on the blood flow (low or high) and on the main vascular components (arterial, capillary, lymphatic or venous). Peripheral vascular malformations represent a diagnostic and therapeutic challenge, and complementary methods such as computed tomography, Doppler ultrasonography and magnetic resonance imaging, in association with clinical findings, can provide information regarding blood flow characteristics and lesion extent. Arteriography and venography confirm the diagnosis, evaluate the lesion extent and guide the therapeutic decision making. Generally, low flow vascular malformations are treated percutaneously with injection of sclerosing agents, while in high flow lesions the approach is endovascular, with permanent liquid or solid embolization agents. (author)

  12. Preasymptotical corrections to the pomeron exchange

    International Nuclear Information System (INIS)

    Volkovitskij, P.E.

    1985-01-01

    In the framework of the quark-gluon model of strong interactions, based on the topological expansion and the string model, planar diagrams are connected with Regge poles and cylinder diagrams correspond to the pomeron. Earlier work showed that strong exchange degeneracy has to take place in this approach; for a pomeron with intercept α_D(0) > 1, this is in disagreement with experiment. In the present paper the preasymptotical corrections to the pomeron exchange are calculated. It is shown that these corrections remove the disagreement.

  13. Bayesian Based Diagnostic Model for Condition Based Maintenance of Offshore Wind Farms

    Directory of Open Access Journals (Sweden)

    Masoud Asgarpour

    2018-01-01

    Full Text Available Operation and maintenance costs are a major contributor to the Levelized Cost of Energy for electricity produced by offshore wind and can be significantly reduced if existing corrective actions are performed as efficiently as possible and if future corrective actions are avoided by performing sufficient preventive actions. This paper presents an applied and generic diagnostic model for fault detection and condition based maintenance of offshore wind components. The diagnostic model is based on two probabilistic matrices: first, a confidence matrix, representing the probability of detection using each fault detection method, and second, a diagnosis matrix, representing the individual outcome of each fault detection method. Once the confidence and diagnosis matrices of a component are defined, the individual diagnoses of each fault detection method are combined into a final verdict on the fault state of that component. Furthermore, this paper introduces a Bayesian updating model based on observations collected by inspections to decrease the uncertainty of the initial confidence matrix. The framework and implementation of the presented diagnostic model are further explained within a case study for a wind turbine component based on vibration, temperature, and oil particle fault detection methods. The paper closes with a discussion of the case study results and conclusions.
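
    One way to read the two matrices is as inputs to a naive-Bayes style combination: per-method detection and false-alarm probabilities (the confidence matrix) weight each method's verdict (a diagnosis-matrix row). The sketch below uses hypothetical numbers; the paper's exact combination and updating rules may differ.

        def fault_posterior(prior_fault, p_detect, p_false_alarm, outcomes):
            """Combine independent detection-method verdicts into a posterior
            fault probability, assuming conditional independence.
            p_detect[i]:      P(method i flags | fault)
            p_false_alarm[i]: P(method i flags | no fault)
            outcomes[i]:      1 if method i flagged a fault, else 0
            """
            like_fault, like_ok = prior_fault, 1.0 - prior_fault
            for pd, pf, o in zip(p_detect, p_false_alarm, outcomes):
                like_fault *= pd if o else (1.0 - pd)
                like_ok *= pf if o else (1.0 - pf)
            return like_fault / (like_fault + like_ok)

        # Vibration, temperature and oil-particle methods (hypothetical values):
        print(fault_posterior(0.05, [0.9, 0.7, 0.8], [0.1, 0.2, 0.05], [1, 0, 1]))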

  14. Accuracy of radiotherapy dose calculations based on cone-beam CT: comparison of deformable registration and image correction based methods

    Science.gov (United States)

    Marchant, T. E.; Joshi, K. D.; Moore, C. J.

    2018-03-01

    Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate due to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT, and direct correction of CBCT image values, are two methods proposed to allow heterogeneity-corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used, including pelvis, lung and head & neck sites. CBCT HU were corrected using a ‘shading correction’ algorithm and via deformable registration of planning CT to CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT, and several relevant dose metrics for target and OAR volumes were calculated. Accuracy of CBCT-based dose metrics was determined using an ‘override ratio’ method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient’s planning CT as a gold standard. Similar performance is achieved by shading-corrected CBCT and both deformable registration algorithms, with mean and standard deviation of dose metric error less than 1% for all sites studied. For lung images, use of deformed CT leads to a slightly larger standard deviation of dose metric error than shading-corrected CBCT, with more dose metric errors greater than 2% observed (7% versus 1%).

  15. Therapeutic action of ghrelin in a mouse model of colitis.

    Science.gov (United States)

    Gonzalez-Rey, Elena; Chorny, Alejo; Delgado, Mario

    2006-05-01

    Ghrelin is a novel growth hormone-releasing peptide with potential endogenous anti-inflammatory activities ameliorating some pathologic inflammatory conditions. Crohn's disease is a chronic debilitating disease characterized by severe T helper cell (Th)1-driven inflammation of the colon. The aim of this study was to investigate the therapeutic effect of ghrelin in a murine model of colitis. We examined the anti-inflammatory action of ghrelin in the colitis induced by intracolonic administration of trinitrobenzene sulfonic acid. Diverse clinical signs of the disease were evaluated, including weight loss, diarrhea, colitis, and histopathology. We also investigated the mechanisms involved in the potential therapeutic effect of ghrelin, such as inflammatory cytokines and chemokines, Th1-type response, and regulatory factors. Ghrelin significantly ameliorated the clinical and histopathologic severity of the trinitrobenzene sulfonic acid-induced colitis, abrogating body weight loss, diarrhea, and inflammation, and increasing survival. The therapeutic effect was associated with down-regulation of both the inflammatory and the Th1-driven autoimmune response through the regulation of a wide spectrum of inflammatory mediators. In addition, a partial involvement of interleukin-10/transforming growth factor-beta1-secreting regulatory T cells in this therapeutic effect was demonstrated. Importantly, the ghrelin treatment was therapeutically effective in established colitis and avoided the recurrence of the disease. Our data demonstrate novel anti-inflammatory actions for ghrelin in the gastrointestinal tract, i.e., the capacity to deactivate the intestinal inflammatory response and to restore mucosal immune tolerance at multiple levels. Consequently, ghrelin administration represents a novel possible therapeutic approach for the treatment of Crohn's disease and other Th1-mediated inflammatory diseases, such as rheumatoid arthritis and multiple sclerosis.

  16. NTCP modelling of lung toxicity after SBRT comparing the universal survival curve and the linear quadratic model for fractionation correction

    International Nuclear Information System (INIS)

    Wennberg, Berit M.; Baumann, Pia; Gagliardi, Giovanna

    2011-01-01

    Background. In SBRT of lung tumours, no established relationship between dose-volume parameters and the incidence of lung toxicity has been found. The aim of this study is to compare the LQ model and the universal survival curve (USC) for calculating biologically equivalent doses in SBRT, to see if this improves knowledge of this relationship. Material and methods. Toxicity data on radiation pneumonitis grade 2 or more (RP2+) from 57 patients were used; 10.5% were diagnosed with RP2+. The lung DVHs were corrected for fractionation (LQ and USC) and analysed with the Lyman-Kutcher-Burman (LKB) model. In the LQ correction α/β = 3 Gy was used, and the USC parameters were: α/β = 3 Gy, D_0 = 1.0 Gy, n = 10, α = 0.206 Gy^-1 and d_T = 5.8 Gy. In order to understand the relative contribution of different dose levels to the calculated NTCP, the concept of fractional NTCP was used. This may give insight into the question of whether 'high doses to small volumes' or 'low doses to large volumes' are most important for lung toxicity. Results and Discussion. NTCP analysis with the LKB model using parameters m = 0.4 and D_50 = 30 Gy gave a volume-dependence parameter n = 0.87 with LQ correction and n = 0.71 with USC correction; using parameters m = 0.3 and D_50 = 20 Gy, n = 0.93 with LQ correction and n = 0.83 with USC correction. In SBRT of lung tumours, NTCP modelling of lung toxicity comparing the LQ and USC models for fractionation correction shows that low doses contribute less and high doses more to the NTCP when the USC model is used. Comparing NTCP modelling of SBRT data with data from breast cancer, lung cancer and whole-lung irradiation implies that the response of the lung is treatment specific. More data are, however, needed for more reliable modelling.
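
    For reference, the LKB model used above has the standard form below (stated from the general literature, not transcribed from the paper); v_i are the fractional volumes of the fractionation-corrected DVH bins receiving dose D_i, and n is the fitted volume-dependence parameter.

        % Lyman-Kutcher-Burman NTCP model
        \[
        \mathrm{NTCP} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-x^{2}/2}\, dx,
        \qquad
        t = \frac{D_{\mathrm{eff}} - D_{50}}{m\, D_{50}},
        \qquad
        D_{\mathrm{eff}} = \Bigl( \sum_i v_i D_i^{1/n} \Bigr)^{n}.
        \]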

  17. The potential of prison-based democratic therapeutic communities.

    Science.gov (United States)

    Bennett, Jamie; Shuker, Richard

    2017-03-13

    Purpose The purpose of this paper is to describe the work of HMP Grendon, the only prison in the UK to operate entirely as a series of democratic therapeutic communities, and to summarise the research on its effectiveness. Design/methodology/approach The paper is both descriptive, providing an overview of the work of a prison-based therapeutic community, and offers a literature review regarding evidence of effectiveness. Findings The work of HMP Grendon has a wide range of positive benefits including reduced levels of disruption in prison, reduced self-harm, improved well-being, an environment that is experienced as more humane and reduced levels of reoffending. Originality/value The work of HMP Grendon offers a well-established and evidenced approach to managing men who have committed serious violent and sexually violent offences. It also promotes and embodies a progressive approach to managing prisons rooted in the welfare tradition.

  18. Radiative corrections to the Higgs couplings in the triplet model

    International Nuclear Information System (INIS)

    KIKUCHI, M.

    2014-01-01

    The features of extended Higgs models can appear as a pattern of deviations from the Standard Model (SM) predictions for the coupling constants of the SM-like Higgs boson (h). We can thus discriminate between extended Higgs models by precisely measuring the pattern of deviations in the coupling constants of h, even when extra bosons are not found directly. In order to compare the theoretical predictions to future precision data at the ILC, we must evaluate the theoretical predictions with radiative corrections in various extended Higgs models. In this paper, we give a comprehensive study of radiative corrections to the various couplings of h in the minimal Higgs triplet model (HTM). First, we define renormalization conditions in the model, and we calculate the Higgs boson couplings hγγ, hWW, hZZ and hhh at the one-loop level. We then evaluate deviations in the coupling constants of the SM-like Higgs boson from the SM predictions. We find that the one-loop contributions to these couplings are substantial compared to their expected measurement accuracies at the ILC. Therefore the HTM could be distinguished from other models by comparing the pattern of deviations in the Higgs boson couplings.

  19. A Judgement-Based Model of Workplace Learning

    Science.gov (United States)

    Athanasou, James A.

    2004-01-01

    The purpose of this paper is to outline a judgement-based model of adult learning. This approach is set out as a Perceptual-Judgemental-Reinforcement approach to social learning under conditions of complexity and where there is no single, clearly identified correct response. The model builds upon the Hager-Halliday thesis of workplace learning and…

  20. Comparison of MR-based attenuation correction and CT-based attenuation correction of whole-body PET/MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Izquierdo-Garcia, David [Mount Sinai School of Medicine, Translational and Molecular Imaging Institute, New York, NY (United States); Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA (United States); Sawiak, Stephen J. [University of Cambridge, Wolfson Brain Imaging Centre, Cambridge (United Kingdom); Knesaurek, Karin; Machac, Joseph [Mount Sinai School of Medicine, Division of Nuclear Medicine, Department of Radiology, New York, NY (United States); Narula, Jagat [Mount Sinai School of Medicine, Department of Cardiology, Zena and Michael A. Weiner Cardiovascular Institute and Marie-Josee and Henry R. Kravis Cardiovascular Health Center, New York, NY (United States); Fuster, Valentin [Mount Sinai School of Medicine, Department of Cardiology, Zena and Michael A. Weiner Cardiovascular Institute and Marie-Josee and Henry R. Kravis Cardiovascular Health Center, New York, NY (United States); The Centro Nacional de Investigaciones Cardiovasculares (CNIC), Madrid (Spain); Fayad, Zahi A. [Mount Sinai School of Medicine, Translational and Molecular Imaging Institute, New York, NY (United States); Mount Sinai School of Medicine, Department of Cardiology, Zena and Michael A. Weiner Cardiovascular Institute and Marie-Josee and Henry R. Kravis Cardiovascular Health Center, New York, NY (United States); Mount Sinai School of Medicine, Department of Radiology, New York, NY (United States)

    2014-08-15

    The objective of this study was to evaluate the performance of the built-in MR-based attenuation correction (MRAC) included in the combined whole-body Ingenuity TF PET/MR scanner and compare it to the performance of CT-based attenuation correction (CTAC) as the gold standard. Included in the study were 26 patients who underwent clinical whole-body FDG PET/CT imaging and subsequently PET/MR imaging (mean delay 100 min). Patients were separated into two groups: the alpha group (14 patients) without MR coils during PET/MR imaging and the beta group (12 patients) with MR coils present (neurovascular, spine, cardiac and torso coils). All images were coregistered to the same space (PET/MR). The two PET images from PET/MR reconstructed using MRAC and CTAC were compared by voxel-based and region-based methods (with ten regions of interest, ROIs). Lesions were also compared by an experienced clinician. Body mass index and lung density showed significant differences between the alpha and beta groups. Right and left lung densities were also significantly different within each group. The percentage differences in uptake values using MRAC in relation to those using CTAC were greater in the beta group than in the alpha group (alpha group -0.2 ± 33.6 %, R^2 = 0.98, p < 0.001; beta group 10.31 ± 69.86 %, R^2 = 0.97, p < 0.001). In comparison to CTAC, MRAC led to underestimation of the PET values by less than 10 % on average, although some ROIs and lesions did differ by more (including the spine, lung and heart). The beta group (imaged with coils present) showed increased overall PET quantification as well as increased variability compared to the alpha group (imaged without coils). PET data reconstructed with MRAC and CTAC showed some differences, mostly in relation to air pockets, metallic implants and attenuation differences in large bone areas (such as the pelvis and spine) due to the segmentation limitation of the MRAC method. (orig.)

  1. Comparison of MR-based attenuation correction and CT-based attenuation correction of whole-body PET/MR imaging

    International Nuclear Information System (INIS)

    Izquierdo-Garcia, David; Sawiak, Stephen J.; Knesaurek, Karin; Machac, Joseph; Narula, Jagat; Fuster, Valentin; Fayad, Zahi A.

    2014-01-01

    The objective of this study was to evaluate the performance of the built-in MR-based attenuation correction (MRAC) included in the combined whole-body Ingenuity TF PET/MR scanner and compare it to the performance of CT-based attenuation correction (CTAC) as the gold standard. Included in the study were 26 patients who underwent clinical whole-body FDG PET/CT imaging and subsequently PET/MR imaging (mean delay 100 min). Patients were separated into two groups: the alpha group (14 patients) without MR coils during PET/MR imaging and the beta group (12 patients) with MR coils present (neurovascular, spine, cardiac and torso coils). All images were coregistered to the same space (PET/MR). The two PET images from PET/MR reconstructed using MRAC and CTAC were compared by voxel-based and region-based methods (with ten regions of interest, ROIs). Lesions were also compared by an experienced clinician. Body mass index and lung density showed significant differences between the alpha and beta groups. Right and left lung densities were also significantly different within each group. The percentage differences in uptake values using MRAC in relation to those using CTAC were greater in the beta group than in the alpha group (alpha group -0.2 ± 33.6 %, R^2 = 0.98, p < 0.001; beta group 10.31 ± 69.86 %, R^2 = 0.97, p < 0.001). In comparison to CTAC, MRAC led to underestimation of the PET values by less than 10 % on average, although some ROIs and lesions did differ by more (including the spine, lung and heart). The beta group (imaged with coils present) showed increased overall PET quantification as well as increased variability compared to the alpha group (imaged without coils). PET data reconstructed with MRAC and CTAC showed some differences, mostly in relation to air pockets, metallic implants and attenuation differences in large bone areas (such as the pelvis and spine) due to the segmentation limitation of the MRAC method. (orig.)

  2. Image-based modeling of tumor shrinkage in head and neck radiation therapy

    International Nuclear Information System (INIS)

    Chao Ming; Xie Yaoqin; Moros, Eduardo G.; Le, Quynh-Thu; Xing Lei

    2010-01-01

    Purpose: Understanding the kinetics of tumor growth/shrinkage represents a critical step in quantitative assessment of therapeutics and realization of adaptive radiation therapy. This article presents a novel framework for image-based modeling of tumor change and demonstrates its performance with synthetic images and clinical cases. Methods: Due to significant tumor tissue content changes, similarity-based models are not suitable for describing the process of tumor volume changes. Under the hypothesis that tissue features in a tumor volume or at the boundary region are partially preserved, the kinetic change was modeled in two steps: (1) Autodetection of homologous tissue features shared by two input images using the scale invariance feature transformation (SIFT) method; and (2) establishment of a voxel-to-voxel correspondence between the images for the remaining spatial points by interpolation. The correctness of the tissue feature correspondence was assured by a bidirectional association procedure, where SIFT features were mapped from template to target images and reversely. A series of digital phantom experiments and five head and neck clinical cases were used to assess the performance of the proposed technique. Results: The proposed technique can faithfully identify the known changes introduced when constructing the digital phantoms. The subsequent feature-guided thin plate spline calculation reproduced the "ground truth" with accuracy better than 1.5 mm. For the clinical cases, the new algorithm worked reliably for a volume change as large as 30%. Conclusions: An image-based tumor kinetic algorithm was developed to model the tumor response to radiation therapy. The technique provides a practical framework for future application in adaptive radiation therapy.
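
    A rough OpenCV sketch of the two steps named above — bidirectional SIFT matching (via cross-checked brute-force matching) followed by a thin-plate-spline fit through the matched points. It assumes opencv-python >= 4.4; the pipeline is a simplified stand-in for the authors' method, and the warp direction of OpenCV's TPS transformer should be verified for a given use.

        import cv2
        import numpy as np

        def feature_guided_tps(template, target):
            """Bidirectionally matched SIFT features + thin-plate-spline warp."""
            sift = cv2.SIFT_create()
            k1, d1 = sift.detectAndCompute(template, None)
            k2, d2 = sift.detectAndCompute(target, None)
            # crossCheck=True keeps only matches that agree in both directions,
            # mirroring the paper's bidirectional association step.
            matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
            matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
            src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(1, -1, 2)
            dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(1, -1, 2)
            tps = cv2.createThinPlateSplineShapeTransformer()
            tps.estimateTransformation(
                dst, src, [cv2.DMatch(i, i, 0.0) for i in range(len(matches))])
            return tps  # tps.warpImage(...) then interpolates between features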

  3. An Automated Baseline Correction Method Based on Iterative Morphological Operations.

    Science.gov (United States)

    Chen, Yunliang; Dai, Liankui

    2018-05-01

    Raman spectra usually suffer from baseline drift caused by fluorescence or other reasons. Therefore, baseline correction is a necessary and crucial step that must be performed before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method can adaptively determine the structuring element first and then gradually remove the spectral peaks during iteration to get an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible for handling different kinds of baselines in various practical situations. The comparison of the proposed method with some state-of-the-art baseline correction methods demonstrates its advantages over the existing methods in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method can hopefully be applied to the baseline correction of other analytical instrumental signals, such as IR spectra and chromatograms.
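
    A compact sketch of the core idea — peaks removed by grey-scale openings with a growing structuring element until the estimate stabilizes; the window sizes and stopping rule here are illustrative assumptions, not the paper's adaptive scheme.

        import numpy as np
        from scipy.ndimage import grey_opening

        def morphological_baseline(spectrum, start=5, max_window=201):
            """Iteratively open the spectrum with growing windows so that
            progressively wider peaks are removed from the baseline estimate."""
            baseline = np.asarray(spectrum, dtype=float)
            w = start
            while w <= max_window:
                opened = grey_opening(baseline, size=w)
                if np.max(np.abs(opened - baseline)) < 1e-6:
                    break                     # estimate has stabilized
                baseline = opened
                w += 2                        # keep the window odd
            return baseline

        # corrected = spectrum - morphological_baseline(spectrum)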

  4. Predicting the Uncertain Future of Aptamer-Based Diagnostics and Therapeutics.

    Science.gov (United States)

    Bruno, John G

    2015-04-16

    Despite the great promise of nucleic acid aptamers in the areas of diagnostics and therapeutics for their facile in vitro development, lack of immunogenicity and other desirable properties, few truly successful aptamer-based products exist in the clinical or other markets. Core reasons for these commercial deficiencies probably stem from industrial commitment to antibodies including a huge financial investment in humanized monoclonal antibodies and a general ignorance about aptamers and their performance among the research and development community. Given the early failures of some strong commercial efforts to gain government approval and bring aptamer-based products to market, it may seem that aptamers are doomed to take a backseat to antibodies forever. However, the key advantages of aptamers over antibodies coupled with niche market needs that only aptamers can fill and more recent published data still point to a bright commercial future for aptamers in areas such as infectious disease and cancer diagnostics and therapeutics. As more researchers and entrepreneurs become familiar with aptamers, it seems inevitable that aptamers will at least be considered for expanded roles in diagnostics and therapeutics. This review also examines new aptamer modifications and attempts to predict new aptamer applications that could revolutionize biomedical technology in the future and lead to marketed products.

  5. Pencil kernel correction and residual error estimation for quality-index-based dose calculations

    International Nuclear Information System (INIS)

    Nyholm, Tufve; Olofsson, Joergen; Ahnesjoe, Anders; Georg, Dietmar; Karlsson, Mikael

    2006-01-01

    Experimental data from 593 photon beams were used to quantify the errors in dose calculations using a previously published pencil kernel model. A correction of the kernel was derived in order to remove the observed systematic errors. The remaining residual error for individual beams was modelled through uncertainty associated with the kernel model. The methods were tested against an independent set of measurements. No significant systematic error was observed in the calculations using the derived correction of the kernel, and the remaining random errors were found to be adequately predicted by the proposed method.

  6. Mobility-based correction for accurate determination of binding constants by capillary electrophoresis-frontal analysis.

    Science.gov (United States)

    Qian, Cheng; Kovalchik, Kevin A; MacLennan, Matthew S; Huang, Xiaohua; Chen, David D Y

    2017-06-01

    Capillary electrophoresis frontal analysis (CE-FA) can be used to determine the binding affinity of molecular interactions. However, its current data processing method mandates specific requirements on the mobilities of the binding pair in order to obtain accurate binding constants. This work shows that significant errors result when the mobilities of the interacting species do not meet these requirements, so the applicability of CE-FA to many real-world applications becomes questionable. An electrophoretic mobility-based correction method is developed in this work based on the flux of each species. A simulation program and a pair of model compounds are used to verify the new equations and evaluate the effectiveness of the method. Ibuprofen and hydroxypropyl-β-cyclodextrin are used to demonstrate the differences in the binding constant obtained by CE-FA when different calculation methods are used, and the results are compared with those obtained by affinity capillary electrophoresis (ACE). The results suggest that CE-FA, with the mobility-based correction method, can be a generally applicable method for a much wider range of applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Correction of Misclassifications Using a Proximity-Based Estimation Method

    Directory of Open Access Journals (Sweden)

    Shmulevich Ilya

    2004-01-01

    Full Text Available An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial) information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies and the degree of improvement in classification accuracy that is obtained by the proposed method is assessed statistically using Kappa analysis.
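
    The window-based nonlinear operation can be sketched as a "proximity medoid": at each position, choose the class minimizing the summed proximity distance to the labels in the window. The proximity values below are hypothetical.

        import numpy as np

        def proximity_correct(labels, distance, half_window=2):
            """labels: 1-D integer class sequence (classes may be nominal);
            distance: (C, C) proximity matrix, small when classes are close."""
            labels = np.asarray(labels)
            out = labels.copy()
            n = len(labels)
            for i in range(n):
                lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
                costs = distance[:, labels[lo:hi]].sum(axis=1)
                out[i] = np.argmin(costs)     # cheapest candidate class
            return out

        # Toy run: class 2 is close to 1 and far from 0, so the isolated 0 flips.
        D = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 0.2], [1.0, 0.2, 0.0]])
        print(proximity_correct([1, 1, 0, 1, 1], D))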

  8. Yukawa corrections from PGBs in OGTC model to the process γγ→bb-bar

    International Nuclear Information System (INIS)

    Huang Jinshu; Song Taiping; Song Haizhen; Lu gongru

    2000-01-01

    The Yukawa corrections from the pseudo-Goldstone bosons (PGBs) in the one-generation technicolor (OGTC) model to the process γγ→bb̄ are calculated. The authors find that the corrections from the PGBs to the cross section of γγ→bb̄ exceed 10% in a certain region of parameter values. The maximum relative correction to the process e+e-→γγ→bb̄ may reach -51% in the laser back-scattering photon mode, but is only -17.9% in the beamstrahlung photon mode. These corrections are considerably larger than the contributions from the relevant particles in the standard model and the supersymmetric model, and can be considered a signature of technicolor at next-generation high-energy photon collisions.

  9. Next-to-leading order corrections to the valon model

    Indian Academy of Sciences (India)

    Next-to-leading order corrections to the valon model. G R Boroun and E Esfandyari, Physics Department, Razi University, Kermanshah 67149, Iran. Corresponding author e-mail: grboroun@gmail.com; boroun@razi.ac.ir. MS received 17 January 2014; revised 31 October 2014; accepted 21 November 2014.

  10. Quantification of Hepatic Steatosis with T1-independent, T2*-corrected MR Imaging with Spectral Modeling of Fat: Blinded Comparison with MR Spectroscopy

    Science.gov (United States)

    Hines, Catherine D. G.; Hamilton, Gavin; Sirlin, Claude B.; McKenzie, Charles A.; Yu, Huanzhou; Brittain, Jean H.; Reeder, Scott B.

    2011-01-01

    Purpose: To prospectively compare an investigational version of a complex-based chemical shift–based fat fraction magnetic resonance (MR) imaging method with MR spectroscopy for the quantification of hepatic steatosis. Materials and Methods: This study was approved by the institutional review board and was HIPAA compliant. Written informed consent was obtained before all studies. Fifty-five patients (31 women, 24 men; age range, 24–71 years) were prospectively imaged at 1.5 T with quantitative MR imaging and single-voxel MR spectroscopy, each within a single breath hold. The effects of T2* correction, spectral modeling of fat, and magnitude fitting for eddy current correction on fat quantification with MR imaging were investigated by reconstructing fat fraction images from the same source data with different combinations of error correction. Single-voxel T2-corrected MR spectroscopy was used to measure fat fraction and served as the reference standard. All MR spectroscopy data were postprocessed at a separate institution by an MR physicist who was blinded to MR imaging results. Fat fractions measured with MR imaging and MR spectroscopy were compared statistically to determine the correlation (r^2), and the slope and intercept as measures of agreement between MR imaging and MR spectroscopy fat fraction measurements, to determine whether MR imaging can help quantify fat, and to examine the importance of T2* correction, spectral modeling of fat, and eddy current correction. Two-sided t tests (significance level, P = .05) were used to determine whether estimated slopes and intercepts were significantly different from 1.0 and 0.0, respectively. Sensitivity and specificity for the classification of clinically significant steatosis were evaluated. Results: Overall, there was excellent correlation between MR imaging and MR spectroscopy for all reconstruction combinations. However, agreement was only achieved when T2* correction, spectral modeling of fat, and magnitude fitting for eddy current correction were all applied.

  11. Temporal high-pass non-uniformity correction algorithm based on grayscale mapping and hardware implementation

    Science.gov (United States)

    Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo

    2015-08-01

    In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are: (1) detector fabrication inaccuracies; (2) non-linearity and variations in the read-out electronics; and (3) optical path effects. Non-uniformity is reduced by non-uniformity correction (NUC) algorithms, which are commonly divided into calibration-based (CBNUC) and scene-based (SBNUC) algorithms. As non-uniformity drifts temporally, CBNUC algorithms must be repeated by inserting a uniform radiation source into the view, which SBNUC algorithms do not require, so SBNUC algorithms are an essential part of an infrared imaging system. The poor robustness of SBNUC algorithms often leads to two defects: artifacts and over-correction. Moreover, due to the complicated calculation process and large storage consumption, hardware implementation of SBNUC algorithms is difficult, especially on a Field Programmable Gate Array (FPGA) platform. The THP and GM algorithm proposed in this paper can eliminate the non-uniformity without causing these defects. Its hardware implementation based solely on an FPGA has two advantages: (1) low resource consumption, and (2) small hardware delay of less than 20 lines. It can be transplanted to a variety of infrared detectors equipped with an FPGA image processing module, and it reduces both stripe non-uniformity and ripple non-uniformity.
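
    The temporal high-pass core can be sketched per pixel as a running low-pass estimate subtracted from each frame; the time constant is an illustrative choice, and the grayscale-mapping refinement of the THP and GM algorithm is not reproduced here.

        import numpy as np

        class TemporalHighPassNUC:
            """Track each pixel's slowly varying fixed-pattern component with a
            recursive low-pass filter and subtract it from the incoming frame."""

            def __init__(self, time_constant=32.0):
                self.m = time_constant
                self.lowpass = None

            def correct(self, frame):
                frame = frame.astype(float)
                if self.lowpass is None:
                    self.lowpass = frame.copy()
                # f_n = (x_n + (M - 1) * f_{n-1}) / M, per pixel
                self.lowpass = (frame + (self.m - 1.0) * self.lowpass) / self.m
                return frame - self.lowpass  # high-pass output, offsets removed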

  12. Hydrological Modeling in Northern Tunisia with Regional Climate Model Outputs: Performance Evaluation and Bias-Correction in Present Climate Conditions

    Directory of Open Access Journals (Sweden)

    Asma Foughali

    2015-07-01

    Full Text Available This work aims to evaluate the performance of a hydrological balance model in a watershed located in northern Tunisia (wadi Sejnane, 378 km2) under present climate conditions, using input variables provided by four regional climate models. A modified version (MBBH) of the lumped and single-layer surface model BBH (Bucket with Bottom Hole), in which pedo-transfer parameters estimated using watershed physiographic characteristics are introduced, is adopted to simulate the water balance components. Only two parameters, representing respectively the water retention capacity of the soil and the vegetation resistance to evapotranspiration, are calibrated using rainfall-runoff data. The evaluation criteria for the MBBH model calibration are: relative bias, mean square error and the ratio of mean actual evapotranspiration to mean potential evapotranspiration. Daily air temperature, rainfall and runoff observations are available from 1960 to 1984. The period 1960–1971 is selected for calibration while the period 1972–1984 is chosen for validation. Air temperature and precipitation series are provided by four regional climate models (DMI, ARP, SMH and ICT) from the European program ENSEMBLES, forced by two global climate models (GCM: ECHAM and ARPEGE). The regional climate model outputs (precipitation and air temperature) are compared to the observations in terms of statistical distribution; the analysis was performed at the seasonal scale for precipitation. We found that RCM precipitation must be corrected before being introduced as MBBH input. Thus, a non-parametric quantile-quantile bias correction method together with a dry-day correction is employed. Finally, simulated runoff generated using corrected precipitation from the regional climate model SMH is found to be the most acceptable, by comparison with runoff simulated using observed precipitation data, at reproducing the temporal variability of mean monthly runoff. The SMH model is the most accurate to
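
    A minimal sketch of non-parametric quantile-quantile mapping, assuming daily precipitation arrays; the fixed quantile grid and the simple wet-day threshold are illustrative stand-ins for the paper's dry-day correction.

        import numpy as np

        def quantile_map(model_cal, obs_cal, model_new, wet_threshold=0.1):
            """Replace each new model value by the observed value at the same
            empirical quantile of the calibration period."""
            model_new = np.asarray(model_new, dtype=float)
            q = np.linspace(0.0, 1.0, 101)
            mapped = np.interp(model_new,
                               np.quantile(model_cal, q),   # model quantiles
                               np.quantile(obs_cal, q))     # observed quantiles
            mapped[model_new < wet_threshold] = 0.0          # crude dry-day rule
            return mapped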

  13. Correction of self-reported BMI based on objective measurements: a Belgian experience.

    Science.gov (United States)

    Drieskens, S; Demarest, S; Bel, S; De Ridder, K; Tafforeau, J

    2018-01-01

    Based on successive Health Interview Surveys (HIS), it has been demonstrated that also in Belgium obesity, measured by means of a self-reported body mass index (BMI in kg/m^2), is a growing public health problem that needs to be monitored as accurately as possible. Studies have shown that a self-reported BMI can be biased. Consequently, if the aim is to rely on a self-reported BMI, adjustment is recommended. Data on measured and self-reported BMI, derived from the Belgian Food Consumption Survey (FCS) 2014, offer the opportunity to do so. The HIS and FCS are cross-sectional surveys based on representative population samples. This study focused on adults aged 18-64 years (sample HIS = 6545 and FCS = 1213). Measured and self-reported BMI collected in the FCS were used to assess possible misreporting. Using FCS data, correction factors (measured BMI/self-reported BMI) were calculated as a function of a combination of background variables (region, gender, educational level and age group). Individual self-reported BMIs from the HIS 2013 were then multiplied by the corresponding correction factors to produce a corrected BMI classification. When compared with the measured BMI, the self-reported BMI in the FCS was underestimated (by a mean of 0.97 kg/m^2). 28% of the obese people underestimated their BMI. After applying the correction factors, the prevalence of obesity based on HIS data significantly increased (from 13% based on the original HIS data to 17% based on the corrected HIS data) and approximated the measured prevalence derived from the FCS data. Since self-reported BMI is underestimated, it is recommended to adjust it to obtain accurate estimates, which are important for decision making.
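
    Applying stratum-specific correction factors is mechanically simple; below is a hedged pandas sketch with hypothetical factor values (the study derives them from FCS strata defined by region, gender, education and age group).

        import pandas as pd

        # Hypothetical correction factors, mean(measured)/mean(self-reported):
        factors = pd.DataFrame({
            "gender": ["M", "M", "F", "F"],
            "age_group": ["18-39", "40-64", "18-39", "40-64"],
            "factor": [1.02, 1.04, 1.03, 1.05],
        })

        his = pd.DataFrame({
            "gender": ["F", "M"],
            "age_group": ["40-64", "18-39"],
            "self_reported_bmi": [27.5, 24.0],
        })

        # Multiply each respondent's self-reported BMI by the matching factor.
        his = his.merge(factors, on=["gender", "age_group"])
        his["corrected_bmi"] = his["self_reported_bmi"] * his["factor"]
        print(his[["gender", "age_group", "corrected_bmi"]])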

  14. Radiative corrections to e+e- → W+W- in the Weinberg model

    NARCIS (Netherlands)

    Veltman, M.J.G.; Lemoine, M.

    1980-01-01

    The one-loop radiative corrections to the process e+e- → W+W- are calculated in the Weinberg model. The corrections are computed in a c.m. energy range of 180-1000 GeV. The dependence on the Higgs mass is studied in detail; it is found that variations in the Higgs mass from 10-1000 GeV give rise

  15. A national prediction model for PM2.5 component exposures and measurement error-corrected health effect inference.

    Science.gov (United States)

    Bergen, Silas; Sheppard, Lianne; Sampson, Paul D; Kim, Sun-Young; Richards, Mark; Vedal, Sverre; Kaufman, Joel D; Szpiro, Adam A

    2013-09-01

    Studies estimating health effects of long-term air pollution exposure often use a two-stage approach: building exposure models to assign individual-level exposures, which are then used in regression analyses. This requires accurate exposure modeling and careful treatment of exposure measurement error. To illustrate the importance of accounting for exposure model characteristics in two-stage air pollution studies, we considered a case study based on data from the Multi-Ethnic Study of Atherosclerosis (MESA). We built national spatial exposure models that used partial least squares and universal kriging to estimate annual average concentrations of four PM2.5 components: elemental carbon (EC), organic carbon (OC), silicon (Si), and sulfur (S). We predicted PM2.5 component exposures for the MESA cohort and estimated cross-sectional associations with carotid intima-media thickness (CIMT), adjusting for subject-specific covariates. We corrected for measurement error using recently developed methods that account for the spatial structure of predicted exposures. Our models performed well, with cross-validated R^2 values ranging from 0.62 to 0.95. Naïve analyses that did not account for measurement error indicated statistically significant associations between CIMT and exposure to OC, Si, and S. EC and OC exhibited little spatial correlation, and the corrected inference was unchanged from the naïve analysis. The Si and S exposure surfaces displayed notable spatial correlation, resulting in corrected confidence intervals (CIs) that were 50% wider than the naïve CIs, but that were still statistically significant. The impact of correcting for measurement error on health effect inference is concordant with the degree of spatial correlation in the exposure surfaces. Exposure model characteristics must be considered when performing two-stage air pollution epidemiologic analyses because naïve health effect inference may be inappropriate.

  16. Radiative Corrections for W → e ν̄ Decay in the Weinberg-Salam Model

    Science.gov (United States)

    Inoue, K.; Kakuto, A.; Komatsu, H.; Takeshita, S.

    1980-09-01

    The one-loop corrections to the W → e ν̄ decay rate are calculated in the Weinberg-Salam model with an arbitrary number of generations. The on-shell renormalization prescription and the 't Hooft-Feynman gauge are employed. Divergences are treated by the dimensional regularization method. Some numerical estimates of the decay rate are given in the three-generation model. It is found that there are significant corrections, mainly owing to fermion-mass singularities.

  17. Statistical bias correction modelling for seasonal rainfall forecast for the case of Bali island

    Science.gov (United States)

    Lealdi, D.; Nurdiati, S.; Sopaheluwakan, A.

    2018-04-01

    Rainfall is an element of climate which is highly influential for the agricultural sector. Rainfall pattern and distribution largely determine the sustainability of agricultural activities; therefore, information on rainfall is very useful for the agriculture sector and for farmers in anticipating possible extreme events, which often cause failures of agricultural production. This research aims to identify the biases in seasonal rainfall forecast products from ECMWF (European Centre for Medium-Range Weather Forecasts) and to build a transfer function that corrects the distribution biases, yielding a new prediction model based on a quantile mapping approach. We apply this approach to the case of Bali Island and find that using bias correction to remove the systematic biases of the model gives better results: the corrected prediction model outperforms the raw forecasts. We found that, in general, the bias correction approach performs better during the rainy season than in the dry season.

  18. A novel correction factor based on extended volume to complement the conformity index.

    Science.gov (United States)

    Jin, F; Wang, Y; Wu, Y-Z

    2012-08-01

    We propose a modified conformity index (MCI), based on extended volume, that improves on existing indices by correcting for the insensitivity of previous conformity indices to the shape of the reference dose, in order to assess the quality of high-precision radiation therapy, and we present an evaluation of its application. The MCI is similar to the conformity index suggested by Paddick (CI(Paddick)), but with a different correction factor. It is computed for three cases: with an extended target volume, with an extended reference dose volume, and without an extended volume. The extended volume is generated by expanding the original volume by 0.1-1.1 cm isotropically. In the simulation model, measurements of MCI employ a sphere target and three types of reference doses: a sphere, an ellipsoid and a cube. We assess the potential advantage of the new index by comparing MCI with CI(Paddick). Measurements of MCI in head-neck cancers treated with intensity-modulated radiation therapy and volumetric-modulated arc therapy illustrate its use in clinical practice. The results of MCI for the simulation model and for clinical practice are presented, and the measurements are corrected for limited spatial resolution. The three types of MCI agree with each other, and comparisons between MCI and CI(Paddick) are also provided. Our analysis shows that the proposed MCI provides a more objective and accurate conformity measurement for high-precision radiation therapy. In combination with a dose-volume histogram, it will be a more useful conformity index.
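
    Since the record does not give the exact form of MCI's correction factor, the sketch below shows only the published baseline it modifies, the Paddick conformity index CI = TV_PIV² / (TV × PIV); the example volumes are hypothetical.

        def paddick_ci(tv, piv, tv_piv):
            """Paddick conformity index: CI = TV_PIV**2 / (TV * PIV), where TV is
            the target volume, PIV the prescription isodose volume, and TV_PIV
            their intersection (all in the same units, e.g. cm^3)."""
            return tv_piv ** 2 / (tv * piv)

        # Hypothetical plan: 10 cm^3 target, 12 cm^3 isodose volume, 9 cm^3 overlap
        print(paddick_ci(tv=10.0, piv=12.0, tv_piv=9.0))  # 0.675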

  19. Bias Correction in a Stable AD (1,1) Model: Weak versus Strong Exogeneity

    NARCIS (Netherlands)

    van Giersbergen, N.P.A.

    2001-01-01

    This paper compares the behaviour of a bias-corrected estimator assuming strongly exogenous regressors to the behaviour of a bias-corrected estimator assuming weakly exogenous regressors, when in fact the marginal model contains a feedback mechanism. To this end, the effects of a feedback mechanism

  20. A Novel Optimal Control Method for Impulsive-Correction Projectile Based on Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Ruisheng Sun

    2016-01-01

    Full Text Available This paper presents a new parametric optimization approach based on a modified particle swarm optimization (PSO) to design a class of impulsive-correction projectiles with discrete, flexible-time-interval, and finite-energy control. In terms of optimal control theory, the task is formulated as minimizing the number of working impulses and the control error, which involves reference model linearization, boundary conditions, and a discontinuous objective function. These features make it difficult to find the global optimum by directly applying other optimization approaches, for example, the hp-adaptive pseudospectral method. Consequently, a PSO mechanism is employed for the optimal setting of the impulsive control, treating the time intervals between two neighboring lateral impulses as design variables, which keeps the optimization process compact. The basic PSO algorithm is modified to improve convergence speed by linearly decreasing the inertia weight. In addition, a suboptimal control and guidance law based on the PSO technique is put forward for real-time online design in practice. Finally, a simulation case coupled with a nonlinear flight dynamic model is used to validate the modified PSO control algorithm. The results of a comparative study illustrate that the proposed optimal control algorithm obtains the optimal control efficiently and accurately and provides a reference approach for handling such impulsive-correction problems.
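
    A minimal sketch of the modified PSO ingredient named in the abstract, the linearly decreasing inertia weight, is shown below on a stand-in cost function; the projectile dynamics, impulse-timing encoding, and all parameter values are assumptions, not the paper's implementation.

        import numpy as np

        def pso(f, dim, n=30, iters=100, w_max=0.9, w_min=0.4, c1=2.0, c2=2.0,
                lo=-5.0, hi=5.0, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.uniform(lo, hi, (n, dim))          # particle positions
            v = np.zeros((n, dim))                     # velocities
            pbest = x.copy()
            pval = np.apply_along_axis(f, 1, x)
            g = pbest[np.argmin(pval)]                 # global best
            for t in range(iters):
                # Linearly decreasing inertia weight, as described in the abstract
                w = w_max - (w_max - w_min) * t / (iters - 1)
                r1, r2 = rng.random((n, dim)), rng.random((n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                val = np.apply_along_axis(f, 1, x)
                better = val < pval
                pbest[better], pval[better] = x[better], val[better]
                g = pbest[np.argmin(pval)]
            return g, pval.min()

        # Sphere function stands in for the impulse-timing cost (assumption)
        best, cost = pso(lambda z: float(np.sum(z ** 2)), dim=4)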

  1. Publisher Correction: Oncolytic viruses as engineering platforms for combination immunotherapy.

    Science.gov (United States)

    Twumasi-Boateng, Kwame; Pettigrew, Jessica L; Kwok, Y Y Eunice; Bell, John C; Nelson, Brad H

    2018-05-04

    In the online html version of this article, the affiliations for Jessica L. Pettigrew and John C. Bell were not correct. Jessica L. Pettigrew is at the Department of Medicine, University of British Columbia, Vancouver, British Columbia, Canada and John C. Bell is at the Center for Innovative Cancer Therapeutics, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada. This is correct in the print and PDF versions of the article and has been corrected in the html version.

  2. Robust recurrent neural network modeling for software fault detection and correction prediction

    International Nuclear Information System (INIS)

    Hu, Q.P.; Xie, M.; Ng, S.H.; Levitin, G.

    2007-01-01

    Software fault detection and correction processes are related although different, and they should be studied together. A practical approach is to apply software reliability growth models to model fault detection and to assume that fault correction is a delayed process. On the other hand, the artificial neural network model, as a data-driven approach, tries to model these two processes together with no assumptions. Specifically, feedforward backpropagation networks have shown advantages over analytical models in fault number predictions. In this paper, the following approach is explored. First, recurrent neural networks are applied to model these two processes together. Within this framework, a systematic network configuration approach is developed with a genetic algorithm according to the prediction performance. In order to provide robust predictions, an extra factor characterizing the dispersion of prediction repetitions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are made on a real data set.
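
    The analytical baseline the networks are compared against can be sketched directly: a software reliability growth model for detection (here the common Goel-Okumoto form, an assumption since the record does not name one) with correction modeled as detection delayed by a fixed lag. All parameter values are illustrative.

        import numpy as np

        def m_detect(t, a, b):
            """Goel-Okumoto mean-value function: cumulative detected faults."""
            return a * (1.0 - np.exp(-b * t))

        def m_correct(t, a, b, delay):
            """Delayed-correction assumption: correction lags detection by a
            fixed time, so m_c(t) = m_d(t - delay) for t >= delay."""
            return m_detect(np.maximum(t - delay, 0.0), a, b)

        t = np.linspace(0, 50, 101)                       # test weeks (illustrative)
        detected = m_detect(t, a=120, b=0.08)             # hypothetical parameters
        corrected = m_correct(t, a=120, b=0.08, delay=3.0)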

  3. Can faith-based correctional programs work? An outcome evaluation of the innerchange freedom initiative in Minnesota.

    Science.gov (United States)

    Duwe, Grant; King, Michelle

    2013-07-01

    This study evaluated the effectiveness of the InnerChange Freedom Initiative (InnerChange), a faith-based prisoner reentry program, by examining recidivism outcomes among 732 offenders released from Minnesota prisons between 2003 and 2009. Results from the Cox regression analyses revealed that participating in InnerChange significantly reduced reoffending (rearrest, reconviction, and new-offense reincarceration), although it did not have a significant impact on reincarceration for a technical violation revocation. The findings further suggest that the beneficial recidivism outcomes for InnerChange participants may have been due, in part, to the continuum of mentoring support some offenders received in the institution and the community. The results imply that faith-based correctional programs can reduce recidivism, but only if they apply evidence-based practices that focus on providing a behavioral intervention within a therapeutic community, addressing the criminogenic needs of participants, and delivering a continuum of care from the institution to the community. Given that InnerChange relies heavily on volunteers and program costs are privately funded, the program imposes no additional costs on the State of Minnesota. Yet, because InnerChange lowers recidivism, and with it reincarceration and victimization costs, the program may be especially advantageous from a cost-benefit perspective.

  4. Developing Formal Correctness Properties from Natural Language Requirements

    Science.gov (United States)

    Nikora, Allen P.

    2006-01-01

    This viewgraph presentation reviews the rationale of the program to transform natural language specifications into formal notation; specifically, to automate the generation of Linear Temporal Logic (LTL) correctness properties from natural language temporal specifications. There are several reasons for this approach: (1) model-based techniques are becoming more widely accepted; (2) analytical verification techniques (e.g., model checking, theorem proving) are significantly more effective at detecting certain types of specification design errors (e.g., race conditions, deadlock) than manual inspection; (3) many requirements are still written in natural language, and the high learning curve for specification languages and associated tools, together with schedule and budget pressure that reduces training opportunities for engineers, limits their adoption; and (4) the formulation of correctness properties for system models can be a difficult problem. This is relevant to NASA in that it would simplify the development of formal correctness properties, lead to more widespread use of model-based specification and design techniques, assist in earlier identification of defects, and reduce the residual defect content of space mission software systems. The presentation also discusses potential applications, accomplishments, technology transfer potential, and next steps.
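
    As a toy illustration of mapping natural-language temporal requirements to LTL, the snippet below uses a small pattern table (response, absence, precedence); the actual generator's rules are not given in the record, so the table and names are hypothetical.

        # Hypothetical pattern table; the real generator's rules are not given here.
        LTL_PATTERNS = {
            # "Whenever P occurs, Q shall eventually occur."  (response)
            "response":   "G({p} -> F({q}))",
            # "P shall never occur."                          (absence)
            "absence":    "G(!{p})",
            # "Q shall not occur until P occurs."             (precedence)
            "precedence": "(!{q}) U {p}",
        }

        def to_ltl(pattern, **props):
            """Instantiate an LTL template with proposition names."""
            return LTL_PATTERNS[pattern].format(**props)

        # "Whenever the valve is commanded open, it shall eventually open."
        print(to_ltl("response", p="cmd_open", q="valve_open"))
        # -> G(cmd_open -> F(valve_open))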

  5. An overview of correctional psychiatry.

    Science.gov (United States)

    Metzner, Jeffrey; Dvoskin, Joel

    2006-09-01

    Supermax facilities may be an unfortunate and unpleasant necessity in modern corrections. Because of the serious dangers posed by prison gangs, they are unlikely to disappear completely from the correctional landscape any time soon. But such units should be carefully reserved for those inmates who pose the most serious danger to the prison environment. Further, the constitutional duty to provide medical and mental health care does not end at the supermax door. There is a great deal of common ground between the opponents of such environments and those who view them as a necessity. No one should want these expensive beds to be used for people who could be more therapeutically and safely managed in mental health treatment environments. No one should want people with serious mental illnesses to be punished for their symptoms. Finally, no one wants these units to make people more, instead of less, dangerous. It is in everyone's interests to learn as much as possible about the potential of these units for good and for harm. Corrections is a profession, and professions base their practices on data. If we are to avoid the most egregious and harmful effects of supermax confinement, we need to understand them far better than we currently do. Though there is a role for advocacy from those supporting or opposed to such environments, there is also a need for objective, scientifically rigorous study of these units and the people who live there.

  6. MRI-Based Nonrigid Motion Correction in Simultaneous PET/MRI

    Science.gov (United States)

    Chun, Se Young; Reese, Timothy G.; Ouyang, Jinsong; Guerin, Bastien; Catana, Ciprian; Zhu, Xuping; Alpert, Nathaniel M.; El Fakhri, Georges

    2014-01-01

    Respiratory and cardiac motion is the most serious limitation to whole-body PET, resulting in spatial resolution close to 1 cm. Furthermore, motion-induced inconsistencies in the attenuation measurements often lead to significant artifacts in the reconstructed images. Gating can remove motion artifacts at the cost of increased noise. This paper presents an approach to respiratory motion correction using simultaneous PET/MRI to demonstrate initial results in phantoms, rabbits, and nonhuman primates and discusses the prospects for clinical application. Methods: Studies with a deformable phantom, a free-breathing primate, and rabbits implanted with radioactive beads were performed with simultaneous PET/MRI. Motion fields were estimated from concurrently acquired tagged MR images using 2 B-spline nonrigid image registration methods and incorporated into a PET list-mode ordered-subsets expectation maximization algorithm. Using the measured motion fields to transform both the emission data and the attenuation data, we could use all the coincidence data to reconstruct any phase of the respiratory cycle. We compared the resulting signal-to-noise ratio (SNR) and the channelized Hotelling observer (CHO) detection SNR in the motion-corrected reconstruction with the results obtained from standard gating and uncorrected studies. Results: Motion correction virtually eliminated motion blur without reducing SNR, yielding images with SNR comparable to those obtained by gating with 5–8 times longer acquisitions in all studies. The CHO study in dynamic phantoms demonstrated a significant improvement (166%–276%) in lesion detection SNR with MRI-based motion correction as compared with gating (P < 0.001). This improvement was 43%–92% for large motion compared with lesion detection without motion correction (P < 0.001). CHO SNR in the rabbit studies confirmed these results. Conclusion: Tagged MRI motion correction in simultaneous PET/MRI significantly improves lesion detection

  7. Polyhedral shape model for terrain correction of gravity and gravity gradient data based on an adaptive mesh

    Science.gov (United States)

    Guo, Zhikui; Chen, Chao; Tao, Chunhui

    2016-04-01

    Since 2007, four China Dayang cruises (CDCs) have been carried out to investigate polymetallic sulfides in the southwest Indian ridge (SWIR), acquiring both gravity data and bathymetry data on the corresponding survey lines (Tao et al., 2014). Sandwell et al. (2014) published a new global marine gravity model including the free-air gravity data and its first-order vertical gradient (Vzz). Gravity data and their gradients can be used to extract unknown density-structure information (e.g. crust thickness) beneath the surface of the earth, but they contain the effects of all mass below the observation point. Therefore, how to obtain the accurate gravity and gravity-gradient effects of the known density structure (e.g. terrain) has been a key issue. Using the bathymetry data or the ETOPO1 model (http://www.ngdc.noaa.gov/mgg/global/global.html) at full resolution to calculate the terrain effect could require too much computation time. We expect to develop an effective method that takes less time but can still yield the desired accuracy. In this study, a constant-density polyhedral model is used to calculate the gravity field and its vertical gradient, based on the work of Tsoulis (2012). According to the attenuation of the gravity field with distance and the variance of the bathymetry, we present adaptive mesh refinement and coarsening strategies to merge both global topography data and multi-beam bathymetry data. The local coarsening or size of the mesh depends on user-defined accuracy and terrain variation (Davis et al., 2011). To depict the terrain better, triangular surface elements and rectangular surface elements are used in the fine and coarse meshes, respectively. This strategy can also be applied in spherical coordinates at large regional and global scales. Finally, we applied this method to calculate the Bouguer gravity anomaly (BGA), mantle Bouguer anomaly (MBA) and their vertical gradients in the SWIR. Further, we compared the result with previous results in the literature. Both synthetic model

  8. Evidence of novel miR-34a-based therapeutic approaches for multiple myeloma treatment

    Czech Academy of Sciences Publication Activity Database

    Zarone, M.R.; Misso, G.; Grimaldi, A.; Zappavigna, S.; Russo, M.; Amler, Evžen; Di Martino, M.T.; Amodio, N.; Tagliaferri, P.; Tassone, P.; Caraglia, M.

    2017-01-01

    Vol. 7, Dec (2017), Art. 17949 ISSN 2045-2322 Institutional support: RVO:68378041 Keywords: gamma-secretase inhibitors * tumor-suppressor network * breast cancer Subject RIV: FP - Other Medical Disciplines OBOR OECD: Technologies involving identifying the functioning of DNA, proteins and enzymes and how they influence the onset of disease and maintenance of well-being (gene-based diagnostics and therapeutic interventions (pharmacogenomics, gene-based therapeutics)) Impact factor: 4.259, year: 2016

  9. Recent Trends in Nanotechnology-Based Drugs and Formulations for Targeted Therapeutic Delivery.

    Science.gov (United States)

    Iqbal, Hafiz M N; Rodriguez, Angel M V; Khandia, Rekha; Munjal, Ashok; Dhama, Kuldeep

    2017-01-01

    In the recent past, a wide spectrum of nanotechnology-based drugs or drug-loaded devices and systems has been engineered and investigated with great interest. The key objective is to improve patients' quality of life safely by avoiding or limiting drug misuse and the severe adverse effects of some traditional therapies in current practice. Various methodological approaches, including in vitro, in vivo, and ex vivo techniques, have been exploited so far. Among them, nanoparticle-based therapeutic agents are of supreme interest for enhanced and efficient delivery in the current biomedical sector of the modern world. The development of new types of novel, effective and highly reliable therapeutic drug delivery systems (DDS) for multipurpose applications is essential and a core demand for tackling many human health-related diseases. In this context, several advanced nanotechnology-based DDS have been engineered with novel characteristics for biomedical, pharmaceutical and cosmeceutical applications, including, but not limited to, enhanced/improved bioactivity, bioavailability, drug efficacy and targeted delivery, and therapeutic safety, with the extra advantage of overcoming the demerits of traditional drug formulations/designs. This review focuses on recent trends/advances in nanotechnology-based drugs and formulations designed for targeted therapeutic delivery. Moreover, information from recent patents is reviewed and summarized or illustrated diagrammatically for better understanding. Recent patents covering various nanotechnology-based approaches for several applications have also been reviewed. Drug-loaded nanoparticles are among the versatile candidates with multifunctional characteristics for potential applications in the biomedical and tissue engineering sectors. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  10. An Angular Leakage Correction for Modeling a Hemisphere, Using One-Dimensional Spherical Coordinates

    International Nuclear Information System (INIS)

    Schwinkendorf, K.N.; Eberle, C.S.

    2003-01-01

    A radially dependent, angular leakage correction was applied to a one-dimensional, multigroup neutron diffusion theory computer code to accurately model hemispherical geometry. This method allows the analyst to model hemispherical geometry, important in nuclear criticality safety analyses, with one-dimensional computer codes, which execute very quickly. Rapid turnaround times for scoping studies thus may be realized. This method uses an approach analogous to an axial leakage correction in a one-dimensional cylinder calculation. The two-dimensional Laplace operator was preserved in spherical geometry using a leakage correction proportional to 1/r², which was folded into the one-dimensional spherical calculation on a mesh-by-mesh basis. Hemispherical geometry is of interest to criticality safety because of its similarity to piles of spilled fissile material and accumulations of fissile material in process containers. A hemisphere also provides a more realistic calculational model for spilled fissile material than does a sphere.
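
    The fold-in can be sketched as a mesh-by-mesh modification of the removal term: add D·B_t²(r) with B_t² ∝ 1/r². The proportionality constant below is a placeholder; in the method it is derived so that the two-dimensional Laplace operator is preserved. All material values are illustrative.

        import numpy as np

        # Mesh-by-mesh transverse-leakage correction for a 1-D spherical diffusion
        # calculation: add D * B_t^2(r) to the removal term, with B_t^2 = C / r^2.
        r = np.linspace(0.5, 10.0, 40)        # mesh-center radii (cm), avoiding r = 0
        D = np.full_like(r, 0.9)              # diffusion coefficient (cm), illustrative
        sigma_a = np.full_like(r, 0.12)       # absorption cross section (1/cm)
        C = 2.0                               # hypothetical proportionality constant
        sigma_removal_eff = sigma_a + D * C / r ** 2   # folded-in angular leakage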

  11. Confirming the RNAi-mediated mechanism of action of siRNA-based cancer therapeutics in mice.

    Science.gov (United States)

    Judge, Adam D; Robbins, Marjorie; Tavakoli, Iran; Levi, Jasna; Hu, Lina; Fronda, Anna; Ambegia, Ellen; McClintock, Kevin; MacLachlan, Ian

    2009-03-01

    siRNAs that specifically silence the expression of cancer-related genes offer a therapeutic approach in oncology. However, it remains critical to determine the true mechanism of their therapeutic effects. Here, we describe the preclinical development of chemically modified siRNA targeting the essential cell-cycle proteins polo-like kinase 1 (PLK1) and kinesin spindle protein (KSP) in mice. siRNA formulated in stable nucleic acid lipid particles (SNALP) displayed potent antitumor efficacy in both hepatic and subcutaneous tumor models. This was correlated with target gene silencing following a single intravenous administration that was sufficient to cause extensive mitotic disruption and tumor cell apoptosis. Our siRNA formulations induced no measurable immune response, minimizing the potential for nonspecific effects. Additionally, RNAi-specific mRNA cleavage products were found in tumor cells, and their presence correlated with the duration of target mRNA silencing. Histological biomarkers confirmed that RNAi-mediated gene silencing effectively inhibited the target's biological activity. This report supports an RNAi-mediated mechanism of action for siRNA antitumor effects, suggesting a new methodology for targeting other key genes in cancer development with siRNA-based therapeutics.

  12. What is the best pre-therapeutic dosimetry for successful radioiodine therapy of multifocal autonomy?

    Energy Technology Data Exchange (ETDEWEB)

    Gotthardt, M. [Radboud Univ. Nijmegen Medical Center, Nijmegen (Netherlands). Dept. of Nuclear Medicine; Philipps Univ., Marburg (Germany). Dept. of Nuclear Medicine; Rubner, C. [Philipps Univ., Marburg (Germany). Dept. of Nuclear Medicine; Bauhofer, A. [Philipps Univ., Marburg (DE). Inst. of Theoretical Surgery] (and others)

    2006-07-01

    Purpose: Dose calculation for radioiodine therapy (RIT) of multifocal autonomies (MFA) is a problem, as therapeutic outcome may be worse than in other kinds of autonomies. We compared different dosimetric concepts in our patients. Patients, methods: Data from 187 patients who had undergone RIT for MFA (Marinelli algorithm, volumetric compromise) were included in the study. For calculation, either a standard or a measured half-life had been used, together with the dosimetric compromise (150 Gy, total thyroid volume). Therapeutic activities were calculated by 2 alternative concepts and compared to the therapeutic success achieved (TcTUs-based calculation of the autonomous volume with 300 Gy, and TcTUs-based adaptation of the target dose to the total thyroid volume). Results: When a standard half-life was used, therapeutic success was achieved in 90.2% (hypothyroidism 23.1%, n=143). When a measured half-life was used, the success rate was 93.1% (hypothyroidism 13.6%, n=44). These differences were not statistically significant, either for all patients together or for the subgroups that were eu-, hypo-, or hyperthyroid after therapy (ANOVA, all p>0.05). The alternative dosimetric concepts would have resulted either in significantly lower organ doses (TcTUs-based calculation of the autonomous volume; 80.76±80.6 Gy versus 125.6±46.3 Gy; p<0.0001) or in systematic over-treatment with significantly higher doses (TcTUs-adapted concept; 164.2±101.7 Gy versus 125.6±46.3 Gy; p=0.0097). Conclusions: TcTUs-based determination of the autonomous volume should not be performed, and TcTUs-based adaptation of the target dose will only increase the rate of hypothyroidism. A standard half-life may be used in pre-therapeutic dosimetry for RIT of MFA. If so, individual therapeutic activities may be calculated based on thyroid size corrected to the 24h ITUs without using Marinelli's algorithm. (orig.)

  13. What is the best pre-therapeutic dosimetry for successful radioiodine therapy of multifocal autonomy?

    International Nuclear Information System (INIS)

    Gotthardt, M.; Philipps Univ., Marburg; Rubner, C.; Bauhofer, A.

    2006-01-01

    Purpose: Dose calculation for radioiodine therapy (RIT) of multifocal autonomies (MFA) is a problem, as therapeutic outcome may be worse than in other kinds of autonomies. We compared different dosimetric concepts in our patients. Patients, methods: Data from 187 patients who had undergone RIT for MFA (Marinelli algorithm, volumetric compromise) were included in the study. For calculation, either a standard or a measured half-life had been used, together with the dosimetric compromise (150 Gy, total thyroid volume). Therapeutic activities were calculated by 2 alternative concepts and compared to the therapeutic success achieved (TcTUs-based calculation of the autonomous volume with 300 Gy, and TcTUs-based adaptation of the target dose to the total thyroid volume). Results: When a standard half-life was used, therapeutic success was achieved in 90.2% (hypothyroidism 23.1%, n=143). When a measured half-life was used, the success rate was 93.1% (hypothyroidism 13.6%, n=44). These differences were not statistically significant, either for all patients together or for the subgroups that were eu-, hypo-, or hyperthyroid after therapy (ANOVA, all p>0.05). The alternative dosimetric concepts would have resulted either in significantly lower organ doses (TcTUs-based calculation of the autonomous volume; 80.76±80.6 Gy versus 125.6±46.3 Gy; p<0.0001) or in systematic over-treatment with significantly higher doses (TcTUs-adapted concept; 164.2±101.7 Gy versus 125.6±46.3 Gy; p=0.0097). Conclusions: TcTUs-based determination of the autonomous volume should not be performed, and TcTUs-based adaptation of the target dose will only increase the rate of hypothyroidism. A standard half-life may be used in pre-therapeutic dosimetry for RIT of MFA. If so, individual therapeutic activities may be calculated based on thyroid size corrected to the 24h ITUs without using Marinelli's algorithm. (orig.)

  14. A DSP-based neural network non-uniformity correction algorithm for IRFPA

    Science.gov (United States)

    Liu, Chong-liang; Jin, Wei-qi; Cao, Yang; Liu, Xiu

    2009-07-01

    An effective neural network non-uniformity correction (NUC) algorithm based on a DSP is proposed in this paper. The non-uniform response of infrared focal plane array (IRFPA) detectors produces corrupted images with fixed-pattern noise (FPN). We introduce and analyze the artificial neural network scene-based non-uniformity correction (SBNUC) algorithm. A design of a DSP-based NUC development platform for IRFPA is described. The DSP hardware platform has low power consumption, with a 32-bit fixed-point DSP (TMS320DM643) as the core processor. The dependability and extensibility of the software have been improved by the DSP/BIOS real-time operating system and Reference Framework 5. In order to achieve real-time performance, the calibration-parameter update is set at a lower task priority than video input and output in DSP/BIOS; in this way, updating the calibration parameters does not affect the video streams. The work flow of the system and the strategy for real-time realization are introduced. Experiments on real infrared image sequences demonstrate that this algorithm requires only a few frames to obtain high-quality corrections. It is computationally efficient and suitable for all kinds of non-uniformity.
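
    A simplified sketch of the scene-based neural-network NUC idea (in the spirit of the retina-like LMS schemes this family of algorithms uses, not the paper's DSP implementation): per-pixel gain and offset are nudged so the corrected pixel approaches a local spatial mean. The learning rate and stand-in video are assumptions.

        import numpy as np

        def nuc_lms_step(frame, gain, offset, lr=1e-5):
            """One LMS update of per-pixel gain/offset toward a local spatial
            mean (a simplified scene-based NUC sketch)."""
            y = gain * frame + offset                 # corrected frame
            # 4-neighbour spatial mean as the "desired" retina-like output
            pad = np.pad(y, 1, mode="edge")
            desired = (pad[:-2, 1:-1] + pad[2:, 1:-1]
                       + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
            err = y - desired
            gain = gain - lr * err * frame            # LMS gradient steps
            offset = offset - lr * err
            return y, gain, offset

        h, w = 64, 64
        gain, offset = np.ones((h, w)), np.zeros((h, w))
        video = np.random.default_rng(0).normal(100, 10, (16, h, w))  # stand-in frames
        for frame in video:
            corrected, gain, offset = nuc_lms_step(frame, gain, offset)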

  15. General rigid motion correction for computed tomography imaging based on locally linear embedding

    Science.gov (United States)

    Chen, Mianyi; He, Peng; Feng, Peng; Liu, Baodong; Yang, Qingsong; Wei, Biao; Wang, Ge

    2018-02-01

    Patient motion can damage the quality of computed tomography images, which are typically acquired in cone-beam geometry. Rigid patient motion is characterized by six geometric parameters and is more challenging to correct in cone-beam than in fan-beam geometry. We extend our previous rigid patient motion correction method based on the principle of locally linear embedding (LLE) from fan-beam to cone-beam geometry and accelerate the computational procedure with the graphics processing unit (GPU)-based all scale tomographic reconstruction Antwerp toolbox. The major merit of our method is that we need neither fiducial markers nor motion-tracking devices. The numerical and experimental studies show that the LLE-based patient motion correction is capable of calibrating the six parameters of the patient motion simultaneously, reducing patient motion artifacts significantly.
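
    The parameter-interpolation idea behind LLE-based estimation can be sketched as follows: a measured projection is reconstructed barycentrically from its nearest reference projections (simulated at known poses), and the same weights interpolate the six motion parameters. This is a schematic of the principle only; the reference data below are synthetic placeholders.

        import numpy as np

        def lle_weights(x, neighbors, reg=1e-3):
            """Barycentric LLE weights reconstructing x from its k nearest
            reference projections (rows of `neighbors`)."""
            d = neighbors - x                        # k x m differences
            G = d @ d.T                              # local Gram matrix
            G += reg * np.trace(G) * np.eye(len(G))  # regularize for stability
            w = np.linalg.solve(G, np.ones(len(G)))
            return w / w.sum()

        rng = np.random.default_rng(0)
        ref_proj = rng.normal(size=(50, 256))        # 50 reference projections
        ref_pose = rng.normal(size=(50, 6))          # their known 6-DOF parameters
        meas = ref_proj[7] + rng.normal(scale=0.01, size=256)  # measured projection

        k = 5
        idx = np.argsort(np.linalg.norm(ref_proj - meas, axis=1))[:k]
        w = lle_weights(meas, ref_proj[idx])
        pose_est = w @ ref_pose[idx]                 # interpolated motion parameters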

  16. Corpus-Based Websites to Promote Learner Autonomy in Correcting Writing Collocation Errors

    Directory of Open Access Journals (Sweden)

    Pham Thuy Dung

    2016-12-01

    Full Text Available The recent yet powerful emergence of e-learning and of using online resources in learning EFL (English as a Foreign Language) has helped promote learner autonomy in language acquisition, including self-correcting one's mistakes. This pilot study, despite being conducted on a modest sample of 25 second-year students majoring in Business English at Hanoi Foreign Trade University, is an initial attempt to investigate the feasibility of using corpus-based websites to promote learner autonomy in correcting collocation errors in EFL writing. The data were collected using a pre-questionnaire and a post-interview aiming to find out the participants' change in belief and attitude toward learner autonomy in correcting collocation errors in writing, the extent of their success in using the corpus-based websites to self-correct the errors, and the change in their confidence in self-correcting the errors using the websites. The findings show that a significant majority of students shifted their belief and attitude toward a more autonomous mode of learning, enjoyed fair success in using the websites to self-correct the errors, and became more confident. The study also implies that face-to-face training in how to use these online tools is vital to the later confidence and success of the learners

  17. Review of Therapeutic Education: Working Alongside Troubled and Troublesome Children (Book Review)

    OpenAIRE

    Bigger, Stephen

    2008-01-01

    Therapeutic education requires a move from “a punitive, blame-based, unfairly competitive and deviant-defined culture” to “one that celebrates diversity and cultural differences” (p.11), and from a deficit model of SEN and a deviant model of challenging behaviour to “a more humane and therapeutic approach to education and learning generally” (p.12). Therapeutic education is holistic and encourages agency and responsibility. How adults relate to learners is viewed as more important than what is taugh...

  18. Graphene-based platforms for cancer therapeutics

    Science.gov (United States)

    Patel, Sunny C; Lee, Stephen; Lalwani, Gaurav; Suhrland, Cassandra; Chowdhury, Sayan Mullick; Sitharaman, Balaji

    2016-01-01

    Graphene is a multifunctional carbon nanomaterial and could be utilized to develop platform technologies for cancer therapies. Its surface can be covalently and noncovalently functionalized with anticancer drugs and with functional groups that target cancer cells and tissue to improve treatment efficacy. Furthermore, its physicochemical properties can be harnessed to facilitate stimulus-responsive therapeutics and drug delivery. This review article summarizes the recent literature specifically focused on the development of graphene technologies to treat cancer. We focus on advances at the interface of graphene-based drug/gene delivery, photothermal/photodynamic therapy, and combinations of these techniques. We also discuss the current understanding of cytocompatibility and biocompatibility issues related to graphene formulations and their implications for clinical cancer management. PMID:26769305

  19. Speciation in Metal Toxicity and Metal-Based Therapeutics

    Directory of Open Access Journals (Sweden)

    Douglas M. Templeton

    2015-04-01

    Full Text Available Metallic elements, ions and compounds produce varying degrees of toxicity in organisms with which they come into contact. Metal speciation is critical to understanding these adverse effects; the adjectives “heavy” and “toxic” are not helpful in describing the biological properties of individual elements, but detailed chemical structures are. As a broad generalization, the metallic form of an element is inert, and the ionic salts are the species that show more significant bioavailability. Yet the salts and other chelates of a metal ion can give rise to quite different toxicities, as exemplified by a range of carcinogenic potential for various nickel species. Another important distinction comes when a metallic element is organified, increasing its lipophilicity and hence its ability to penetrate the blood brain barrier, as is seen, for example, with organic mercury and tin species. Some metallic elements, such as gold and platinum, are themselves useful therapeutic agents in some forms, while other species of the same element can be toxic, thus focusing attention on species interconversions in evaluating metal-based drugs. The therapeutic use of metal-chelating agents introduces new species of the target metal in vivo, and this can affect not only its desired detoxification, but also introduce a potential for further mechanisms of toxicity. Examples of therapeutic iron chelator species are discussed in this context, as well as the more recent aspects of development of chelation therapy for uranium exposure.

  20. Performance Evaluation of Blind Tropospheric Delay correction ...

    African Journals Online (AJOL)

    lekky

    and Temperature 2 wet (GPT2w) models) for tropospheric delay correction, ... In practice, a user often employs a certain troposphere model based on the popularity ... comparisons between some of the models have been carried out in the past for .... prediction of meteorological parameter values, which are then used to ...

  1. Evaluation of Ocean Tide Models Used for Jason-2 Altimetry Corrections

    DEFF Research Database (Denmark)

    Fok, H.S.; Baki Iz, H.; Shum, C. K.

    2010-01-01

    It has been more than a decade since the last comprehensive accuracy assessment of global ocean tide models. Here, we conduct an evaluation of the barotropic ocean tide corrections, which were computed using FES2004 and GOT00.2, and other models on the Jason-2 altimetry Geophysical Data Record (GDR)...

  2. Evidence Based Digoxin Therapeutic Monitoring - A Lower and Narrower Therapeutic Range

    Directory of Open Access Journals (Sweden)

    Amine BENLMOUDEN

    2016-06-01

    Full Text Available Cardiac glycosides have been used for congestive heart failure and certain cardiac arrhythmias for more than 200 years. Despite the introduction of a variety of new classes of drugs for the management of heart failure, specifically angiotensin-converting enzyme (ACE) inhibitors, β-adrenergic antagonists (β-blockers), and the aldosterone antagonist spironolactone, digoxin continues to have an important role in long-term outpatient management. However, a narrow margin exists between therapeutic and toxic doses of digoxin, resulting in a high incidence of digoxin toxicity in clinical practice. A wide variety of placebo-controlled clinical trials have unequivocally shown that treatment with digoxin can improve symptoms, quality of life, and exercise tolerance in patients with mild, moderate, or severe heart failure. The clinical relevance of digoxin therapeutic monitoring is also proven, but the serum digoxin concentration (SDC) required for optimal clinical efficacy and acceptable toxicity remains controversial. In recent years, international guidelines have recommended 1.2 ng/mL as the acceptable upper level. In this bibliographic synthesis, we aim to collect pertinent information from the MEDLINE database about the exposure-effect relationship in order to assess the scientific evidence level for new digoxin therapeutic monitoring.

  3. X-ray-based attenuation correction for positron emission tomography/computed tomography scanners.

    Science.gov (United States)

    Kinahan, Paul E; Hasegawa, Bruce H; Beyer, Thomas

    2003-07-01

    A synergy of positron emission tomography (PET)/computed tomography (CT) scanners is the use of the CT data for x-ray-based attenuation correction of the PET emission data. Current methods of measuring transmission use positron sources, gamma-ray sources, or x-ray sources. Each of the types of transmission scans involves different trade-offs of noise versus bias, with positron transmission scans having the highest noise but lowest bias, whereas x-ray scans have negligible noise but the potential for increased quantitative errors. The use of x-ray-based attenuation correction, however, has other advantages, including a lack of bias introduced from post-injection transmission scanning, which is an important practical consideration for clinical scanners, as well as reduced scan times. The sensitivity of x-ray-based attenuation correction to artifacts and quantitative errors depends on the method of translating the CT image from the effective x-ray energy of approximately 70 keV to attenuation coefficients at the PET energy of 511 keV. These translation methods are usually based on segmentation and/or scaling techniques. Errors in the PET emission image arise from positional mismatches caused by patient motion or respiration differences between the PET and CT scans; incorrect calculation of attenuation coefficients for CT contrast agents or metallic implants; or keeping the patient's arms in the field of view, which leads to truncation and/or beam-hardening (or x-ray scatter) artifacts. Proper interpretation of PET emission images corrected for attenuation by using the CT image relies on an understanding of the potential artifacts. In cases where an artifact or bias is suspected, careful inspection of all three available images (CT and PET emission with and without attenuation correction) is recommended. Copyright 2003 Elsevier Inc. All rights reserved.
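
    A common segmentation/scaling translation is the bilinear mapping of CT numbers to 511 keV attenuation coefficients; the sketch below uses typical literature values for water and bone as assumptions, and is one instance of the class of methods the article surveys, not its prescription.

        import numpy as np

        def hu_to_mu511(hu, mu_water=0.096, mu_bone=0.172, hu_bone=1000.0):
            """Piecewise-linear ("bilinear") translation of CT numbers (HU) to
            511 keV linear attenuation coefficients (1/cm). The break point at
            0 HU and the coefficients are typical literature values, used here
            as assumptions."""
            hu = np.asarray(hu, dtype=float)
            soft = mu_water * (1.0 + hu / 1000.0)                  # air-water mixtures
            bone = mu_water + hu * (mu_bone - mu_water) / hu_bone  # water-bone mixtures
            return np.where(hu <= 0, soft, bone)

        # Air, water, trabecular-like, and cortical-like example values
        mu_map = hu_to_mu511([-1000, 0, 500, 1500])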

  4. LiDAR-based 2D Localization and Mapping System using Elliptical Distance Correction Models for UAV Wind Turbine Blade Inspection

    DEFF Research Database (Denmark)

    Nikolov, Ivan Adriyanov; Madsen, Claus B.

    2017-01-01

    for on-site outdoor localization and mapping in low-feature environments using the inexpensive RPLIDAR and a 9-DOF IMU. Our algorithm geometrically simplifies the wind turbine blade 2D cross-section to an elliptical model and uses it for distance and shape correction. We show that the proposed algorithm

  5. A MONEY DEMAND MODEL FOR INDONESIA USING THE VECTOR ERROR CORRECTION MODEL APPROACH

    Directory of Open Access Journals (Sweden)

    imam mukhlis

    2016-09-01

    Full Text Available This research aims to estimate a money demand model for Indonesia over 2005:2-2015:12. The variables used are the demand for money, the interest rate, inflation, and the exchange rate (IDR/US$). The ADF stationarity test was used to test for unit roots in the data, and a cointegration test was applied to estimate the long-run relationship between the variables. The research employed the Vector Error Correction Model (VECM) to estimate the money demand model for Indonesia. The results showed that all the series were stationary in first differences (at the 1% level). There was a long-run relationship between the interest rate, inflation, and the exchange rate and the demand for money in Indonesia. In the short run, however, the VECM could not establish a relationship between the interest rate, inflation, or the exchange rate and the demand for money over 2005:2-2015:12.
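
    A VECM of this form can be estimated with statsmodels, as sketched below; the random-walk data and column names are placeholders for the study's monthly series, and the lag order and deterministic terms are assumptions.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

        # Placeholder monthly series standing in for the study's variables
        rng = np.random.default_rng(0)
        df = pd.DataFrame(rng.normal(size=(130, 4)).cumsum(axis=0),
                          columns=["m_demand", "interest", "inflation", "idr_usd"])

        rank = select_coint_rank(df, det_order=0, k_ar_diff=2)  # Johansen trace test
        model = VECM(df, k_ar_diff=2, coint_rank=rank.rank, deterministic="ci")
        res = model.fit()
        print(res.beta)    # long-run cointegrating vector(s)
        print(res.alpha)   # short-run adjustment (error-correction) coefficients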

  6. The importance of topographically corrected null models for analyzing ecological point processes.

    Science.gov (United States)

    McDowall, Philip; Lynch, Heather J

    2017-07-01

    Analyses of point process patterns and related techniques (e.g., MaxEnt) make use of the expected number of occurrences per unit area and second-order statistics based on the distance between occurrences. Ecologists working with point process data often assume that points exist on a two-dimensional x-y plane or within a three-dimensional volume, when in fact many observed point patterns are generated on a two-dimensional surface existing within three-dimensional space. For many surfaces, however, such as the topography of landscapes, the projection from the surface to the x-y plane preserves neither area nor distance. As such, when these point patterns are implicitly projected to and analyzed in the x-y plane, our expectations of the point pattern's statistical properties may not be met. When used in hypothesis testing, we find that the failure to account for the topography of the generating surface may bias statistical tests that incorrectly identify clustering and, furthermore, may bias coefficients in inhomogeneous point process models that incorporate slope as a covariate. We demonstrate the circumstances under which this bias is significant, and present simple methods that allow point processes to be simulated with corrections for topography. These point patterns can then be used to generate "topographically corrected" null models against which observed point processes can be compared. © 2017 by the Ecological Society of America.
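
    One simple way to build such a topographically corrected null model is to simulate points with cell probabilities proportional to true surface area, using the slope-derived expansion factor sqrt(1 + zx² + zy²); the sketch below (toy DEM, hypothetical cell size) illustrates the idea rather than the authors' exact procedure.

        import numpy as np

        def simulate_surface_csr(dem, dx, n_pts, seed=0):
            """Simulate complete spatial randomness ON a terrain surface: a
            cell's inclusion probability is proportional to its true surface
            area per unit planimetric area, sqrt(1 + zx^2 + zy^2)."""
            rng = np.random.default_rng(seed)
            zy, zx = np.gradient(dem, dx)                 # terrain slopes
            area = np.sqrt(1.0 + zx ** 2 + zy ** 2)       # area-expansion factor
            p = (area / area.sum()).ravel()
            idx = rng.choice(area.size, size=n_pts, p=p)  # sample cells by area
            rows, cols = np.unravel_index(idx, dem.shape)
            # Jitter points uniformly within each chosen cell
            return (cols + rng.random(n_pts)) * dx, (rows + rng.random(n_pts)) * dx

        dem = np.random.default_rng(1).normal(0, 5, (100, 100)).cumsum(axis=0)  # toy DEM
        x, y = simulate_surface_csr(dem, dx=30.0, n_pts=500)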

  7. A new approach for beam hardening correction based on the local spectrum distributions

    International Nuclear Information System (INIS)

    Rasoulpour, Naser; Kamali-Asl, Alireza; Hemmati, Hamidreza

    2015-01-01

    Energy dependence of material absorption and the polychromatic nature of the x-ray beams in Computed Tomography (CT) cause a phenomenon called “beam hardening”. The purpose of this study is to provide a novel approach for Beam Hardening (BH) correction. This approach is based on the linear attenuation coefficients of Local Spectrum Distributions (LSDs) at various depths in a phantom. The proposed method includes two steps. Firstly, the hardened spectra at various depths in the phantom (the LSDs) are estimated with the Expectation Maximization (EM) algorithm for an arbitrary thickness interval of known materials in the phantom. The performance of the LSD estimation technique is evaluated by applying random Gaussian noise to the transmission data. The linear attenuation coefficients at the mean energies of the LSDs are then obtained. Secondly, a correction function based on the calculated attenuation coefficients is derived in order to correct the polychromatic raw data. Since the correction function converts the polychromatic data to monochromatic data, the effect of BH in the proposed reconstruction is reduced in comparison with the polychromatic reconstruction. The proposed approach was assessed in phantoms involving at most two materials, and the correction function was extended for use in phantoms constructed of more than two materials. The relative mean energy difference in the LSD estimates based on noise-free transmission data was less than 1.5%, and it remains acceptable when random Gaussian noise is applied to the transmission data. The cupping artifact in the proposed reconstruction is effectively reduced, and the proposed reconstruction profile is more uniform than the polychromatic reconstruction profile. - Highlights: • A novel Beam Hardening (BH) correction approach was described. • A new concept named Local Spectrum Distributions (LSDs) was used for BH correction.
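
    The correction-function mechanism can be illustrated with the generic water-equivalent linearization below: a polynomial is calibrated to map polychromatic measurements q = -ln(I/I0) onto monochromatic line integrals. The paper instead derives its correction function from LSD-based attenuation coefficients; the simulated hardening and all constants here are assumptions.

        import numpy as np

        # Calibrate a polynomial that maps the polychromatic measurement
        # q = -ln(I/I0) to the monochromatic equivalent mu_mono * t, using
        # known thicknesses t of a reference material.
        t = np.linspace(0.5, 20.0, 40)                    # calibration thicknesses (cm)
        mu_mono = 0.2                                     # target monochromatic mu (1/cm)
        q_poly = 0.2 * t - 0.002 * t ** 2                 # simulated hardened data
        coeffs = np.polyfit(q_poly, mu_mono * t, deg=3)   # correction polynomial
        q_corrected = np.polyval(coeffs, q_poly)          # ~ linear in thickness again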

  8. A new approach for beam hardening correction based on the local spectrum distributions

    Energy Technology Data Exchange (ETDEWEB)

    Rasoulpour, Naser; Kamali-Asl, Alireza, E-mail: a_kamali@sbu.ac.ir; Hemmati, Hamidreza

    2015-09-11

    Energy dependence of material absorption and the polychromatic nature of the x-ray beams in Computed Tomography (CT) cause a phenomenon called “beam hardening”. The purpose of this study is to provide a novel approach for Beam Hardening (BH) correction. This approach is based on the linear attenuation coefficients of Local Spectrum Distributions (LSDs) at various depths in a phantom. The proposed method includes two steps. Firstly, the hardened spectra at various depths in the phantom (the LSDs) are estimated with the Expectation Maximization (EM) algorithm for an arbitrary thickness interval of known materials in the phantom. The performance of the LSD estimation technique is evaluated by applying random Gaussian noise to the transmission data. The linear attenuation coefficients at the mean energies of the LSDs are then obtained. Secondly, a correction function based on the calculated attenuation coefficients is derived in order to correct the polychromatic raw data. Since the correction function converts the polychromatic data to monochromatic data, the effect of BH in the proposed reconstruction is reduced in comparison with the polychromatic reconstruction. The proposed approach was assessed in phantoms involving at most two materials, and the correction function was extended for use in phantoms constructed of more than two materials. The relative mean energy difference in the LSD estimates based on noise-free transmission data was less than 1.5%, and it remains acceptable when random Gaussian noise is applied to the transmission data. The cupping artifact in the proposed reconstruction is effectively reduced, and the proposed reconstruction profile is more uniform than the polychromatic reconstruction profile. - Highlights: • A novel Beam Hardening (BH) correction approach was described. • A new concept named Local Spectrum Distributions (LSDs) was used for BH correction.

  9. On Model Based Synthesis of Embedded Control Software

    OpenAIRE

    Alimguzhin, Vadim; Mari, Federico; Melatti, Igor; Salvo, Ivano; Tronci, Enrico

    2012-01-01

    Many Embedded Systems are indeed Software Based Control Systems (SBCSs), that is, control systems whose controller consists of control software running on a microcontroller device. This motivates investigation of Formal Model Based Design approaches for control software. Given the formal model of a plant as a Discrete Time Linear Hybrid System and the implementation specifications (that is, the number of bits in the Analog-to-Digital (AD) conversion), correct-by-construction control software can be...

  10. Pig models on intestinal development and therapeutics.

    Science.gov (United States)

    Yin, Lanmei; Yang, Huansheng; Li, Jianzhong; Li, Yali; Ding, Xueqing; Wu, Guoyao; Yin, Yulong

    2017-12-01

    The gastrointestinal tract plays a vital role in nutrient supply, digestion, and absorption, and has a crucial impact on the entire organism. Much attention is being paid to the use of animal models to study the pathogenesis of gastrointestinal diseases in relation to intestinal development and health. The piglet has a body size similar to that of the human and is an omnivorous animal with comparable anatomy, nutritional requirements, and digestive and associated inflammatory processes, and it displays similarities to the human intestinal microbial ecosystem, which makes piglets more appropriate as an animal model for humans than other non-primate animals. Therefore, the objective of this review is to summarize key attributes of the piglet model with which to study human intestinal development and intestinal health by probing the etiology of several gastrointestinal diseases, thus providing a theoretical, and hopefully practical, basis for further studies on mammalian nutrition, health, and disease, and therapeutics. Given the comparable nutritional requirements and strikingly similar brain developmental patterns of young piglets and humans, the piglet has been used as an important translational model for studying neurodevelopmental outcomes influenced by pediatric nutrition. Because of the similarities in anatomy and physiology between pigs and humans, increasing emphasis is being placed on using the pig model for human organ transplantation research.

  11. Measurement correction method for force sensor used in dynamic pressure calibration based on artificial neural network optimized by genetic algorithm

    Science.gov (United States)

    Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing

    2017-12-01

    We present an optimization algorithm to obtain low-uncertainty dynamic pressure measurements from a force-transducer-based device. In this paper, the advantages and disadvantages of the methods that are commonly used to measure propellant powder gas pressure, the applicable scope of dynamic pressure calibration devices, and the shortcomings of the traditional comparison calibration method based on the drop-weight device are first analysed in detail. Then, a dynamic calibration method for measuring pressure using a force sensor based on a drop-weight device is introduced. This method can effectively save time when many pressure sensors are calibrated simultaneously and extends the life of expensive reference sensors. However, the force sensor is installed between the drop-weight and the hammerhead by transition pieces through bolt fastening, which causes adverse effects such as additional pretightening and inertia forces. To address these effects, the influence mechanisms of the pretightening force, the inertia force and other factors on the force measurement are theoretically analysed. A measurement correction method for the force measurement is then proposed, based on an artificial neural network optimized by a genetic algorithm. The training and testing data sets are obtained from calibration tests, and the selection criteria for the key parameters of the correction model are discussed. The evaluation results for the test data show that the correction model can effectively improve the force measurement accuracy of the force sensor. Compared with the traditional high-accuracy comparison calibration method, the percentage difference of the impact-force-based measurement is less than 0.6% and the relative uncertainty of the corrected force value is 1.95%, which meets the requirements of engineering applications.
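
    As a sketch of the ANN-plus-GA idea (the record does not specify the encoding, so this example evolves two hyperparameters of a small scikit-learn network rather than the paper's actual correction model; the data and GA settings are toy assumptions):

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        # Toy calibration data: (raw force, impact velocity) -> reference force
        rng = np.random.default_rng(0)
        X = rng.uniform(0, 1, (400, 2))
        y = 3.0 * X[:, 0] + 0.5 * np.sin(6 * X[:, 1]) + rng.normal(0, 0.02, 400)
        Xtr, Xval, ytr, yval = train_test_split(X, y, random_state=0)

        def fitness(genome):
            hidden, lr = int(genome[0]), float(genome[1])
            net = MLPRegressor(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                               max_iter=800, random_state=0)
            net.fit(Xtr, ytr)
            return -np.mean((net.predict(Xval) - yval) ** 2)   # maximize -MSE

        # Simple GA: truncation selection plus mutation of both genes
        pop = np.column_stack([rng.integers(4, 40, 12), rng.uniform(1e-4, 1e-1, 12)])
        for gen in range(5):
            scores = np.array([fitness(g) for g in pop])
            parents = pop[np.argsort(scores)[-6:]]
            children = parents[rng.integers(0, 6, 6)].copy()
            children[:, 0] = np.clip(children[:, 0] + rng.integers(-3, 4, 6), 4, 40)
            children[:, 1] = np.clip(children[:, 1] * rng.uniform(0.5, 2.0, 6),
                                     1e-4, 1e-1)
            pop = np.vstack([parents, children])
        best = pop[np.argmax([fitness(g) for g in pop])]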

  12. Hypothyroidism after primary radiotherapy for head and neck squamous cell carcinoma: Normal tissue complication probability modeling with latent time correction

    International Nuclear Information System (INIS)

    Rønjom, Marianne Feen; Brink, Carsten; Bentzen, Søren M.; Hegedüs, Laszlo; Overgaard, Jens; Johansen, Jørgen

    2013-01-01

    Background and purpose: To develop a normal tissue complication probability (NTCP) model of radiation-induced biochemical hypothyroidism (HT) after primary radiotherapy for head and neck squamous cell carcinoma (HNSCC) with adjustment for latency and clinical risk factors. Patients and methods: Patients with HNSCC receiving definitive radiotherapy with 66–68 Gy without surgery were followed up with serial post-treatment thyrotropin (TSH) assessment. HT was defined as TSH >4.0 mU/l. Data were analyzed with both a logistic and a mixture model (correcting for latency) to determine risk factors for HT and develop an NTCP model based on mean thyroid dose (MTD) and thyroid volume. Results: 203 patients were included. Median follow-up: 25.1 months. Five-year estimated risk of HT was 25.6%. In the mixture model, the only independent risk factors for HT were thyroid volume (cm³) (OR = 0.75 [95% CI: 0.64–0.85]) and MTD, with the estimated risk of HT varying with thyroid volume (cm³). Conclusions: Comparing the logistic and mixture models demonstrates the importance of latent-time correction in NTCP modeling. Thyroid dose constraints in treatment planning should be individualized based on thyroid volume.

  13. Comparative evaluation of scatter correction techniques in 3D positron emission tomography

    CERN Document Server

    Zaidi, H

    2000-01-01

    Much research and development has been concentrated on the scatter compensation required for quantitative 3D PET. Increasingly sophisticated scatter correction procedures are under investigation, particularly those based on accurate scatter models and iterative reconstruction-based scatter compensation approaches. The main difference among the correction methods is the way in which the scatter component in the selected energy window is estimated. Monte Carlo methods give further insight and might in themselves offer a possible correction procedure. Methods: Five scatter correction methods are compared in this paper, where applicable: the dual-energy window (DEW) technique, the convolution-subtraction (CVS) method, two variants of the Monte Carlo-based scatter correction technique (MCBSC1 and MCBSC2), and our newly developed statistical reconstruction-based scatter correction (SRBSC) method. These scatter correction techniques are evaluated using Monte Carlo simulation studies, experimental phantom measurements...
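
    Of the methods listed, convolution-subtraction is the easiest to sketch: scatter is estimated as a scaled, blurred copy of the projection and subtracted, iterating so the estimate is based on the unscattered component. The scatter fraction and kernel width below are hypothetical.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def convolution_subtraction(projection, k=0.35, sigma=12.0, iters=3):
            """Convolution-subtraction scatter correction: estimate scatter as
            a scaled, blurred copy of the (ideally unscattered) projection and
            subtract it, iterating to refine the estimate. k and sigma are
            hypothetical scatter-fraction / kernel-width values."""
            unscattered = projection.copy()
            for _ in range(iters):
                scatter = k * gaussian_filter(unscattered, sigma)
                unscattered = np.maximum(projection - scatter, 0.0)
            return unscattered

        proj = np.random.default_rng(0).poisson(200, (128, 128)).astype(float)
        corrected = convolution_subtraction(proj)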

  14. Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors

    Science.gov (United States)

    Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.

    2018-04-01

    The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman's concept of “Control and Observation” is used. A versatile multi-function laser interferometer is used as the Observer in order to measure the machine's error functions. A systematic error map of the machine's workspace is produced from the error-function measurements, and the error map feeds the error correction strategy. The article proposes a new method of forming the error correction strategy. The method is based on the error distribution within the machine's workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within the maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.

  15. Oral Immunization with a Multivalent Epitope-Based Vaccine, Based on NAP, Urease, HSP60, and HpaA, Provides Therapeutic Effect on H. pylori Infection in Mongolian gerbils.

    Science.gov (United States)

    Guo, Le; Yang, Hua; Tang, Feng; Yin, Runting; Liu, Hongpeng; Gong, Xiaojuan; Wei, Jun; Zhang, Ying; Xu, Guangxian; Liu, Kunmei

    2017-01-01

    Epitope-based vaccine is a promising strategy for therapeutic vaccination against Helicobacter pylori (H. pylori) infection. A multivalent subunit vaccine containing various antigens from H. pylori is superior to a univalent subunit vaccine. However, whether a multivalent epitope-based vaccine is superior to a univalent epitope-based vaccine in therapeutic vaccination against H. pylori remains unclear. In this study, a multivalent epitope-based vaccine named CWAE against H. pylori urease, neutrophil-activating protein (NAP), heat shock protein 60 (HSP60) and H. pylori adhesin A (HpaA) was constructed based on the mucosal adjuvant cholera toxin B subunit (CTB), the Th1-type adjuvant NAP, multiple copies of selected B and Th cell epitopes (UreA27–53, UreA183–203, HpaA132–141, and HSP60189–203), and also the epitope-rich regions of the urease B subunit (UreB158–251 and UreB321–385) predicted by bioinformatics. Immunological properties of the CWAE vaccine were characterized in a BALB/c mouse model. Its therapeutic effect was evaluated in an H. pylori-infected Mongolian gerbil model by comparison with a univalent epitope-based vaccine, CTB-UE, against H. pylori urease that was constructed in our previous studies. Both CWAE and CTB-UE could induce similar levels of specific antibodies against H. pylori urease, and had a similar inhibitory effect on H. pylori urease activity. However, only CWAE could induce high levels of specific antibodies to NAP, HSP60, HpaA, and also the synthetic peptide epitopes (UreB158–172, UreB181–195, UreB211–225, UreB349–363, HpaA132–141, and HSP60189–203). In addition, oral therapeutic immunization with CWAE significantly reduced the number of H. pylori colonies in the stomach of Mongolian gerbils, compared with oral immunization using CTB-UE or H. pylori urease. The protection of CWAE was associated with higher levels of mixed CD4+ T cell (Th cell) response, IgG, and secretory IgA (sIgA) antibodies to H. pylori. These results indicate...

  16. Oral Immunization with a Multivalent Epitope-Based Vaccine, Based on NAP, Urease, HSP60, and HpaA, Provides Therapeutic Effect on H. pylori Infection in Mongolian gerbils

    Directory of Open Access Journals (Sweden)

    Le Guo

    2017-08-01

    Full Text Available Epitope-based vaccine is a promising strategy for therapeutic vaccination against Helicobacter pylori (H. pylori) infection. A multivalent subunit vaccine containing various antigens from H. pylori is superior to a univalent subunit vaccine. However, whether a multivalent epitope-based vaccine is superior to a univalent epitope-based vaccine in therapeutic vaccination against H. pylori remains unclear. In this study, a multivalent epitope-based vaccine named CWAE against H. pylori urease, neutrophil-activating protein (NAP), heat shock protein 60 (HSP60) and H. pylori adhesin A (HpaA) was constructed based on the mucosal adjuvant cholera toxin B subunit (CTB), the Th1-type adjuvant NAP, multiple copies of selected B and Th cell epitopes (UreA27–53, UreA183–203, HpaA132–141, and HSP60189–203), and also the epitope-rich regions of the urease B subunit (UreB158–251 and UreB321–385) predicted by bioinformatics. Immunological properties of the CWAE vaccine were characterized in a BALB/c mouse model. Its therapeutic effect was evaluated in an H. pylori-infected Mongolian gerbil model by comparison with a univalent epitope-based vaccine, CTB-UE, against H. pylori urease that was constructed in our previous studies. Both CWAE and CTB-UE could induce similar levels of specific antibodies against H. pylori urease, and had a similar inhibitory effect on H. pylori urease activity. However, only CWAE could induce high levels of specific antibodies to NAP, HSP60, HpaA, and also the synthetic peptide epitopes (UreB158–172, UreB181–195, UreB211–225, UreB349–363, HpaA132–141, and HSP60189–203). In addition, oral therapeutic immunization with CWAE significantly reduced the number of H. pylori colonies in the stomach of Mongolian gerbils, compared with oral immunization using CTB-UE or H. pylori urease. The protection of CWAE was associated with higher levels of mixed CD4+ T cell (Th cell) response, IgG, and secretory IgA (sIgA) antibodies to H. pylori. These results indicate...

  17. Image-based modeling of tumor shrinkage in head and neck radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Chao Ming; Xie Yaoqin; Moros, Eduardo G.; Le, Quynh-Thu; Xing Lei [Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, California 94305-5847 and Department of Radiation Oncology, University of Arkansas for Medical Sciences, 4301 W. Markham Street, Little Rock, Arkansas 72205-1799 (United States); Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, California 94305-5847 (United States); Department of Radiation Oncology, University of Arkansas for Medical Sciences, 4301 W. Markham Street, Little Rock, Arkansas 72205-1799 (United States); Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, California 94305-5847 (United States)

    2010-05-15

    Purpose: Understanding the kinetics of tumor growth/shrinkage represents a critical step in quantitative assessment of therapeutics and realization of adaptive radiation therapy. This article presents a novel framework for image-based modeling of tumor change and demonstrates its performance with synthetic images and clinical cases. Methods: Due to significant tumor tissue content changes, similarity-based models are not suitable for describing the process of tumor volume changes. Under the hypothesis that tissue features in a tumor volume or at the boundary region are partially preserved, the kinetic change was modeled in two steps: (1) Autodetection of homologous tissue features shared by two input images using the scale-invariant feature transform (SIFT) method; and (2) establishment of a voxel-to-voxel correspondence between the images for the remaining spatial points by interpolation. The correctness of the tissue feature correspondence was assured by a bidirectional association procedure, where SIFT features were mapped from template to target images and in reverse. A series of digital phantom experiments and five head and neck clinical cases were used to assess the performance of the proposed technique. Results: The proposed technique can faithfully identify the known changes introduced when constructing the digital phantoms. The subsequent feature-guided thin plate spline calculation reproduced the "ground truth" with accuracy better than 1.5 mm. For the clinical cases, the new algorithm worked reliably for a volume change as large as 30%. Conclusions: An image-based tumor kinetic algorithm was developed to model the tumor response to radiation therapy. The technique provides a practical framework for future application in adaptive radiation therapy.

  18. Error Correction of Measured Unstructured Road Profiles Based on Accelerometer and Gyroscope Data

    Directory of Open Access Journals (Sweden)

    Jinhua Han

    2017-01-01

    Full Text Available This paper describes a noncontact acquisition system composed of several time-synchronized laser height sensors, accelerometers, a gyroscope, and so forth, used to collect the road profiles of a vehicle riding on unstructured roads. A method of correcting road profiles based on the accelerometer and gyroscope data is proposed to eliminate the adverse impacts of vehicle vibration and attitude changes. Because the power spectral density (PSD) of gyro attitudes concentrates in the low-frequency band, a method called frequency division is presented to divide the road profiles into two parts: a high-frequency part and a low-frequency part. The vibration error of the road profiles is corrected using displacement data obtained through double integration of the measured acceleration data. After building the mathematical model between gyro attitudes and road profiles, the gyro attitude signals are separated from the low-frequency road profile by the method of sliding-block overlap based on correlation analysis. The accuracy and limitations of the system have been analyzed, and its validity has been verified by implementing the system on wheeled equipment for road-profile measurement at a vehicle testing ground. The paper offers an accurate and practical approach to obtaining unstructured road profiles for road simulation tests.
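
    The record names two concrete steps: double integration of the measured acceleration to recover the vibration displacement, and a frequency split so that only the high-frequency band is corrected. A minimal Python sketch under stated assumptions (sample rate, crossover frequency and filter order are illustrative, not taken from the paper):

```python
import numpy as np
from scipy import signal

def correct_profile(profile, accel, fs=500.0, fc=0.5):
    """Correct a laser-measured road profile for vehicle vibration.

    profile : raw laser height profile (m), sampled at fs (Hz)
    accel   : time-synchronized vertical body acceleration (m/s^2)
    fc      : crossover frequency (Hz) between the low band (dominated
              by gyro attitude drift) and the high band (vibration).
    fs, fc and the filter order are placeholders, not the paper's values.
    """
    dt = 1.0 / fs
    b_hp, a_hp = signal.butter(2, fc / (fs / 2), btype="highpass")
    b_lp, a_lp = signal.butter(2, fc / (fs / 2), btype="lowpass")

    # Double integration of acceleration -> body displacement; high-pass
    # at each stage so low-frequency integration drift does not accumulate.
    vel = np.cumsum(signal.filtfilt(b_hp, a_hp, accel)) * dt
    disp = np.cumsum(signal.filtfilt(b_hp, a_hp, vel)) * dt

    # Frequency division: only the high band is corrected by the vibration
    # displacement; the low band is left to the gyro-attitude model.
    low = signal.filtfilt(b_lp, a_lp, profile)
    high = profile - low
    return low + high - signal.filtfilt(b_hp, a_hp, disp)
```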

  19. Sub therapeutic drug levels among HIV/TB co-infected patients ...

    African Journals Online (AJOL)

    Daniel W. Gunda

    2016-11-01

    Nov 1, 2016 ... NVP-based regimen was associated with sub-therapeutic drug levels on uni- ... a number of important challenges including induction of sub-therapeutic levels of .... ARV plasma levels in the univariate model with p-values less than 0.05 .... clearance of ARVs. This may be one of the explanations that.

  20. Pion-cloud corrections to the relativistic S + V harmonic potential model

    International Nuclear Information System (INIS)

    Palladino, B.E.; Ferreira, P.L.

    1988-01-01

    Pionic corrections to the mass spectrum of low-lying s-wave baryons are incorporated in a relativistic independent quark model with equally mixed Lorentz scalar and vector harmonic potentials. (M.W.O.) [pt

  1. Emergence of spacetime dynamics in entropy corrected and braneworld models

    International Nuclear Information System (INIS)

    Sheykhi, A.; Dehghani, M.H.; Hosseini, S.E.

    2013-01-01

    A very interesting new proposal on the origin of the cosmic expansion was recently suggested by Padmanabhan [arXiv:1206.4916]. He argued that the difference between the surface degrees of freedom and the bulk degrees of freedom in a region of space drives the accelerated expansion of the universe, as well as the standard Friedmann equation, through the relation ΔV = Δt(N_sur − N_bulk). In this paper, we first present the general expression for the number of degrees of freedom on the holographic surface, N_sur, using the general entropy-corrected formula S = A/(4L_p^2) + s(A). Then, as two examples, by applying Padmanabhan's idea we extract the corresponding Friedmann equations in the presence of power-law and logarithmic correction terms in the entropy. We also extend the study to RS II and DGP braneworld models and successfully derive the correct form of the Friedmann equations in these theories. Our study further supports the viability of Padmanabhan's proposal
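
    The relations quoted in the abstract lose their subscripts in extraction; in standard notation (units with c = ℏ = k_B = 1; a transcription of the commonly cited form of Padmanabhan's proposal, not necessarily the paper's exact conventions) they read:

```latex
% Emergence relation and entropy-corrected surface degrees of freedom,
% in units c = hbar = k_B = 1 (factors follow the usual presentation).
\begin{align}
  \frac{\Delta V}{\Delta t} &= L_p^{2}\left(N_{\mathrm{sur}} - N_{\mathrm{bulk}}\right), \\
  S &= \frac{A}{4 L_p^{2}} + s(A), \qquad N_{\mathrm{sur}} = 4S, \\
  N_{\mathrm{bulk}} &= -\frac{2\,E_{\mathrm{Komar}}}{T}, \qquad T = \frac{H}{2\pi}.
\end{align}
```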

  2. Spectral matching research for light-emitting diode-based neonatal jaundice therapeutic device light source

    Science.gov (United States)

    Gan, Ruting; Guo, Zhenning; Lin, Jieben

    2015-09-01

    To decrease the risk of bilirubin encephalopathy and minimize the need for exchange transfusions, we report a novel design for the light source of a light-emitting diode (LED)-based neonatal jaundice therapeutic device (NJTD). The in vivo bilirubin absorption spectrum was regarded as the target. Based on spectral constructing theory, we used commercially available LEDs with different peak wavelengths and full widths at half maximum as matching light sources. A simple genetic algorithm was first proposed as the spectral matching method. The required number of LEDs at each peak wavelength was calculated, and a commercial light source sample model of the device was then fabricated to confirm the spectral matching technology. In addition, the corresponding spectrum was measured and the effect was analyzed. The results showed that the fitted spectrum was very similar to the target spectrum, with a 98.86 % matching degree, and the actual device model has a spectrum close to the target, with a 96.02 % matching degree. With its high fitting degree and efficiency, this matching algorithm is well suited to light-source matching for LED-based spectral distributions, and the in vivo bilirubin absorption spectrum is an auspicious candidate for the target spectrum of new LED-based NJTD light sources.
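
    The paper matches the target spectrum with a simple genetic algorithm; as a rough illustration of the underlying problem only, a nonnegative least-squares fit over a hypothetical LED inventory (Gaussian spectra with invented peak/FWHM values, and a Gaussian stand-in for the bilirubin target) shows how per-wavelength LED weights can be computed:

```python
import numpy as np
from scipy.optimize import nnls

wl = np.linspace(400.0, 520.0, 241)              # wavelength grid (nm)

def led_spectrum(peak, fwhm):
    """Gaussian approximation of one LED's normalized emission spectrum."""
    sigma = fwhm / 2.355
    return np.exp(-0.5 * ((wl - peak) / sigma) ** 2)

# Hypothetical LED inventory (peak nm, FWHM nm) -- not the paper's parts.
leds = [(450, 20), (460, 25), (470, 20), (480, 30), (490, 25)]
basis = np.column_stack([led_spectrum(p, f) for p, f in leds])

# Target spectrum: the in-vivo bilirubin absorption band is approximated
# here by a broad Gaussian purely for illustration.
target = led_spectrum(478, 60)

weights, _ = nnls(basis, target)                 # nonnegative LED weights
fit = basis @ weights
match = 1.0 - np.linalg.norm(fit - target) / np.linalg.norm(target)
print("LED weights:", np.round(weights, 2), "matching degree:", round(match, 4))
```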

  3. A method of measuring and correcting tilt of anti-vibration wind turbines based on a screening algorithm

    Science.gov (United States)

    Xiao, Zhongxiu

    2018-04-01

    A method of measuring and correcting the tilt of anti-vibration wind turbines based on a screening algorithm is proposed in this paper. First, we design a device whose core is the ADXL203 acceleration sensor; the inclination is measured by installing the device on the tower of the wind turbine as well as on the nacelle. Next, a Kalman filter is used to filter the signal effectively by establishing a state-space model for the signal and noise, and MATLAB is used for simulation. Considering the impact of tower and nacelle vibration on the collected data, the original data and the filtered data are classified and stored by the screening algorithm, and the filtered data are filtered again to make the output more accurate. Finally, installation errors are eliminated algorithmically to achieve the tilt correction. A device based on this method has the advantages of high precision, low cost and vibration resistance, and has a wide range of application and promotion value.
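
    A minimal scalar Kalman filter of the kind described, with the inclination modeled as a random walk observed through noisy accelerometer-derived angles; the noise variances are illustrative, not tuned to the ADXL203 device:

```python
import numpy as np

def kalman_1d(z, q=1e-5, r=1e-2, x0=0.0, p0=1.0):
    """Scalar Kalman filter: tilt angle modeled as a random walk.

    z : measured tilt angles (deg) derived from the acceleration sensor
    q : process-noise variance, r : measurement-noise variance
    (q and r are chosen for illustration only).
    """
    x, p = x0, p0
    out = np.empty_like(np.asarray(z, dtype=float))
    for i, zi in enumerate(z):
        p = p + q                    # predict (state transition is identity)
        k = p / (p + r)              # Kalman gain
        x = x + k * (zi - x)         # update with the measurement residual
        p = (1.0 - k) * p
        out[i] = x
    return out

# usage: a true tilt of 2 degrees buried in vibration noise
rng = np.random.default_rng(0)
meas = 2.0 + 0.5 * rng.standard_normal(1000)
print(kalman_1d(meas)[-1])           # converges near 2.0
```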

  4. Integration of laboratory bioassays into the risk-based corrective action process

    International Nuclear Information System (INIS)

    Edwards, D.; Messina, F.; Clark, J.

    1995-01-01

    Recent data generated by the Gas Research Institute (GRI) and others indicate that residual hydrocarbon may be bound/sequestered in soil such that it is unavailable for microbial degradation, and thus possibly not bioavailable to human/ecological receptors. A reduction in bioavailability would directly equate to reduced exposure and, therefore, potentially less-conservative risk-based cleanup soil goals. Laboratory bioassays which measure bioavailability/toxicity can be cost-effectively integrated into the risk-based corrective action process. However, in order to maximize the cost-effective application of bioassays several site-specific parameters should be addressed up front. This paper discusses (1) the evaluation of parameters impacting the application of bioassays to soils contaminated with metals and/or petroleum hydrocarbons and (2) the cost-effective integration of bioassays into a tiered ASTM type framework for risk-based corrective action

  5. Improvement of Klobuchar model for GNSS single-frequency ionospheric delay corrections

    Science.gov (United States)

    Wang, Ningbo; Yuan, Yunbin; Li, Zishen; Huo, Xingliang

    2016-04-01

    Broadcast ionospheric models are currently an effective approach to mitigate the ionospheric time delay for real-time Global Navigation Satellite System (GNSS) single-frequency users. Klobuchar coefficients transmitted in the Global Positioning System (GPS) navigation message have been widely used in various GNSS positioning and navigation applications; however, this model can only reduce the ionospheric error by approximately 50% in mid-latitudes. With the emerging BeiDou and Galileo, as well as the modernization of GPS and GLONASS, more precise ionospheric correction models or algorithms are required by GNSS single-frequency users. Numerical analysis of the initial phase and nighttime term in the Klobuchar algorithm demonstrates that more parameters should be introduced to better describe the variation of nighttime ionospheric total electron content (TEC). In view of this, several schemes are proposed for the improvement of the Klobuchar algorithm. Performance of these improved Klobuchar-like models is validated over continental and oceanic regions during high (2002) and low (2006) levels of solar activity, respectively. Over the continental region, GPS TEC generated from 35 International GNSS Service (IGS) and Crust Movement Observation Network of China (CMONOC) stations are used as references. Over the oceanic region, TEC data from the TOPEX/Poseidon and JASON-1 altimeters are used for comparison. A ten-parameter Klobuchar-like model, which describes the nighttime term as a linear function of geomagnetic latitude, is finally proposed for GNSS single-frequency ionospheric corrections. Compared to GPS TEC, while the GPS broadcast model can correct for 55.0% and 49.5% of the ionospheric delay for the years 2002 and 2006, respectively, the proposed ten-parameter Klobuchar-like model can reduce the ionospheric error by 68.4% and 64.7% for the same period. Compared to TOPEX/Poseidon and JASON-1 TEC, the improved ten-parameter Klobuchar-like model can mitigate the ionospheric
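
    For concreteness, a sketch of the standard eight-parameter Klobuchar vertical-delay computation with the fixed 5 ns nighttime term replaced by a linear function of geomagnetic latitude, which is the shape of the ten-parameter extension described; the extra coefficients c0 and c1 are placeholders, not the paper's fitted values:

```python
import numpy as np

def klobuchar_like(phi, lam, el, az, t_gps, alpha, beta, c0=5e-9, c1=0.0):
    """Klobuchar-style L1 ionospheric delay (seconds).

    phi, lam : user latitude/longitude (semicircles)
    el, az   : satellite elevation (semicircles) and azimuth (radians)
    t_gps    : GPS seconds of week
    alpha, beta : broadcast amplitude/period coefficients (4 each)
    The nighttime term is c0 + c1*phi_m instead of the fixed 5e-9 s;
    c0, c1 stand in for the two extra fitted parameters.
    """
    psi = 0.0137 / (el + 0.11) - 0.022                  # earth-centred angle
    phi_i = np.clip(phi + psi * np.cos(az), -0.416, 0.416)
    lam_i = lam + psi * np.sin(az) / np.cos(phi_i * np.pi)
    phi_m = phi_i + 0.064 * np.cos((lam_i - 1.617) * np.pi)  # geomagnetic lat
    t = (4.32e4 * lam_i + t_gps) % 86400.0              # local time (s)
    amp = max(sum(a * phi_m**n for n, a in enumerate(alpha)), 0.0)
    per = max(sum(b * phi_m**n for n, b in enumerate(beta)), 72000.0)
    x = 2.0 * np.pi * (t - 50400.0) / per
    f = 1.0 + 16.0 * (0.53 - el) ** 3                   # obliquity factor
    night = c0 + c1 * phi_m                             # ten-parameter shape
    day = amp * (1 - x**2 / 2 + x**4 / 24) if abs(x) < 1.57 else 0.0
    return f * (night + day)
```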

  6. Effect of therapeutic touch on brain activation of preterm infants in response to sensory punctate stimulus: a near-infrared spectroscopy-based study.

    Science.gov (United States)

    Honda, Noritsugu; Ohgi, Shohei; Wada, Norihisa; Loo, Kek Khee; Higashimoto, Yuji; Fukuda, Kanji

    2013-05-01

    The purpose of this study was to determine whether therapeutic touch in preterm infants can ameliorate their sensory punctate stimulus response in terms of brain activation measured by near-infrared spectroscopy. The study included 10 preterm infants at 34-40 weeks' corrected age. Oxyhaemoglobin (oxy-Hb) concentration, heart rate (HR), arterial oxygen saturation (SaO2) and body movements were recorded during low-intensity sensory punctate stimulation for 1 s with and without therapeutic touch by a neonatal development specialist nurse. Each stimulation was followed by a resting phase of 30 s. All measurements were performed with the infants asleep in the prone position. Sensory punctate stimulus exposure significantly increased the oxy-Hb concentration but did not affect HR, SaO2 or body movements. The infants receiving therapeutic touch had significantly decreased oxy-Hb concentrations over time. Therapeutic touch in preterm infants can ameliorate their sensory punctate stimulus response in terms of brain activation, indicated by increased cerebral oxygenation. Therefore, therapeutic touch may have a protective effect on the autoregulation of cerebral blood flow during sensory punctate stimulus in neonates.

  7. Population pharmacokinetics analysis of olanzapine for Chinese psychotic patients based on clinical therapeutic drug monitoring data with assistance of meta-analysis.

    Science.gov (United States)

    Yin, Anyue; Shang, Dewei; Wen, Yuguan; Li, Liang; Zhou, Tianyan; Lu, Wei

    2016-08-01

    The aim of this study was to build an eligible population pharmacokinetic (PK) model for olanzapine in Chinese psychotic patients based on therapeutic drug monitoring (TDM) data, with assistance of meta-analysis, to facilitate individualized therapy. Population PK analysis for olanzapine was performed using NONMEM software (version 7.3.0). TDM data were collected from Guangzhou Brain Hospital (China). Because of the limitations of TDM data, model-based meta-analysis was performed to construct a structural model to assist the modeling of TDM data as prior estimates. After analyzing related covariates, a simulation was performed to predict concentrations for different types of patients under common dose regimens. A two-compartment model with first-order absorption and elimination was developed for olanzapine oral tablets, based on 23 articles with 390 data points. The model was then applied to the TDM data. Gender and smoking habits were found to be significant covariates that influence the clearance of olanzapine. To achieve a blood concentration of 20 ng/mL (the lower boundary of the recommended therapeutic range), simulation results indicated that the dose regimen of olanzapine should be 5 mg BID (twice a day), ≥ 5 mg QD (every day) plus 10 mg QN (every night), or >10 mg BID for female nonsmokers, male nonsmokers and male smokers, respectively. The population PK model, built using meta-analysis, could facilitate the modeling of TDM data collected from Chinese psychotic patients. The factors that significantly influence olanzapine disposition were determined and the final model could be used for individualized treatment.

  8. Therapeutic exercises for the control of temporomandibular disorders

    Directory of Open Access Journals (Sweden)

    Alberto da Rocha Moraes

    2013-10-01

    Full Text Available INTRODUCTION: Temporomandibular disorder (TMD) is a multifactorial disease. For this reason, it is difficult to obtain an accurate and correct diagnosis. In this context, conservative treatments, including therapeutic exercises classified as stretching, relaxation, coordination, strengthening and endurance, are oftentimes prescribed. OBJECTIVE: Thus, the aim of the present article was to conduct a literature review concerning the types of exercises available and their efficacy for the treatment of muscular TMD. METHODS: The review included studies carried out between 2000 and 2010, indexed on Web of Science, PubMed, LILACS and BBO. Moreover, the following keywords were used: exercise, physical therapy, facial pain, myofascial pain syndrome, and temporomandibular joint dysfunction syndrome. Studies that did not consider the subject "TMD and exercises", used post-surgery exercises or did not use validated criteria for the diagnosis of TMD (RDC/TMD) were not included. RESULTS: The results comprised seven articles which proved therapeutic exercises to be effective for the treatment of muscular TMD. However, these studies are seen as limited, since the therapeutic exercises were not applied alone, but in association with other conservative procedures. In addition, they present some drawbacks, such as small samples, lack of a control group, and no detailed exercise description, which should have included intensity, repetition, frequency and duration. CONCLUSION: Although therapeutic exercises are considered effective in the management of muscular TMD, the development of randomized clinical trials is necessary, since many existing studies are still based on the clinical experience of professionals.

  9. Analytical model for relativistic corrections to the nuclear magnetic shielding constant in atoms

    International Nuclear Information System (INIS)

    Romero, Rodolfo H.; Gomez, Sergio S.

    2006-01-01

    We present a simple analytical model for calculating and rationalizing the main relativistic corrections to the nuclear magnetic shielding constant in atoms. It provides good estimates for those corrections and their trends, in reasonable agreement with accurate four-component calculations and perturbation methods. The origin of the effects in deep core atomic orbitals is manifestly shown

  10. Analytical model for relativistic corrections to the nuclear magnetic shielding constant in atoms

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Rodolfo H. [Facultad de Ciencias Exactas, Universidad Nacional del Nordeste, Avenida Libertad 5500 (3400), Corrientes (Argentina)]. E-mail: rhromero@exa.unne.edu.ar; Gomez, Sergio S. [Facultad de Ciencias Exactas, Universidad Nacional del Nordeste, Avenida Libertad 5500 (3400), Corrientes (Argentina)

    2006-04-24

    We present a simple analytical model for calculating and rationalizing the main relativistic corrections to the nuclear magnetic shielding constant in atoms. It provides good estimates for those corrections and their trends, in reasonable agreement with accurate four-component calculations and perturbation methods. The origin of the effects in deep core atomic orbitals is manifestly shown.

  11. A Web-Based Therapeutic Workplace for the Treatment of Drug Addiction and Chronic Unemployment

    Science.gov (United States)

    Silverman, Kenneth; Wong, Conrad J.; Grabinski, Michael J.; Hampton, Jacqueline; Sylvest, Christine E.; Dillon, Erin M.; Wentland, R. Daniel

    2005-01-01

    This article describes a Web-based therapeutic workplace intervention designed to promote heroin and cocaine abstinence and train and employ participants as data entry operators. Patients are paid to participate in training and then to perform data entry jobs in a therapeutic workplace business. Salary is linked to abstinence by requiring patients…

  12. Model-based sensor diagnosis

    International Nuclear Information System (INIS)

    Milgram, J.; Dormoy, J.L.

    1994-09-01

    Running a nuclear power plant involves monitoring data provided by the installation's sensors. Operators and computerized systems then use these data to establish a diagnostic of the plant. However, the instrumentation system is complex, and is not immune to faults and failures. This paper presents a system for detecting sensor failures using a topological description of the installation and a set of component models. This model of the plant implicitly contains relations between sensor data. These relations must always be checked if all the components are functioning correctly. The failure detection task thus consists of checking these constraints. The constraints are extracted in two stages. Firstly, a qualitative model of their existence is built using structural analysis. Secondly, the models are formally handled according to the results of the structural analysis, in order to establish the constraints on the sensor data. This work constitutes an initial step in extending model-based diagnosis, as the information on which it is based is suspect. This work will be followed by surveillance of the detection system. When the instrumentation is assumed to be sound, the unverified constraints indicate errors on the plant model. (authors). 8 refs., 4 figs

  13. Atmospheric Error Correction of the Laser Beam Ranging

    Directory of Open Access Journals (Sweden)

    J. Saydi

    2014-01-01

    Full Text Available Atmospheric models based on surface measurements of pressure, temperature, and relative humidity have been used to increase laser ranging accuracy by ray tracing. Atmospheric refraction can cause significant errors in laser ranging systems. In the present research, the atmospheric effects on the laser beam were investigated by using the principles of laser ranging. The atmospheric correction was calculated for the 0.532, 1.3, and 10.6 micron wavelengths under the weather conditions of Tehran, Isfahan, and Bushehr in Iran from March 2012 to March 2013, using monthly means of meteorological data received from the meteorological stations in those cities. The atmospheric correction was calculated for laser beam propagation over 11, 100, and 200 kilometers at rising angles of 30°, 60°, and 90° for each propagation. The results of the study showed that, for the same months and beam emission angles, the atmospheric correction was most accurate for the 10.6 micron wavelength. The laser ranging error decreased as the laser emission angle increased. For the 0.532 micron wavelength, the atmospheric corrections from the Marini-Murray and Mendes-Pavlis models were compared.

  14. Optical conductivity calculation of a k.p model semiconductor GaAs incorporating first-order electron-hole vertex correction

    Science.gov (United States)

    Nurhuda, Maryam; Aziz Majidi, Muhammad

    2018-04-01

    Excitons in semiconducting materials carry potential applications. Experimental results show that excitonic signals appear in the optical absorption spectra of narrow-gap semiconductor systems such as gallium arsenide (GaAs). On the theoretical side, however, calculation of optical spectra based purely on density functional theory (DFT), without taking electron-hole (e-h) interactions into account, does not lead to the appearance of any excitonic signal. Meanwhile, existing DFT-based algorithms that include a full vertex correction through the Bethe-Salpeter equation may reveal an excitonic signal, but they have not provided a way to analyze the excitonic signal further. Motivated to provide a way to isolate the excitonic effect in the optical response theoretically, we develop a method of calculating the optical conductivity of the narrow-band-gap semiconductor GaAs within the 8-band k.p model that includes electron-hole interactions through a first-order electron-hole vertex correction. Our calculation confirms that the first-order e-h vertex correction reveals an excitonic signal around 1.5 eV (the band gap edge), consistent with the experimental data.

  15. Avastin exhibits therapeutic effects on collagen-induced arthritis in rat model.

    Science.gov (United States)

    Wang, Yong; Da, Gula; Li, Hongbin; Zheng, Yi

    2013-12-01

    Avastin is a monoclonal antibody against vascular endothelial growth factor (VEGF). This study aimed to investigate the therapeutic effect of Avastin on type II collagen-induced arthritis. Type II chicken collagen was injected into the tails of Wistar rats, and 60 modeled female rats were randomly divided into three groups (n = 20): Avastin group, Etanercept group, and control group. Arthritis index and joint pad thickness were scored, and the pathology of back metapedes was analyzed. The results showed that, compared to the control group, the arthritis index, target-to-non-target ratio, synovial pathological injury index, serum levels of VEGF and tumor necrosis factor alpha, and VEGF staining were decreased significantly 14 days after Avastin or Etanercept treatment, but there were no significant differences between the Avastin group and the Etanercept group. These data provide evidence that Avastin exhibits effects similar to Etanercept in relieving rheumatoid arthritis in a rat model, and suggest that Avastin is a promising therapeutic agent for rheumatoid arthritis.

  16. Copula-based assimilation of radar and gauge information to derive bias-corrected precipitation fields

    Directory of Open Access Journals (Sweden)

    S. Vogl

    2012-07-01

    Full Text Available This study addresses the problem of combining radar information and gauge measurements. Gauge measurements are the best available source of absolute rainfall intensity, although their spatial availability is limited. Precipitation information obtained by radar mimics the spatial patterns well but is biased in its absolute values.

    In this study copula models are used to describe the dependence structure between gauge observations and rainfall derived from radar reflectivity at the corresponding grid cells. After appropriate time series transformation to generate "iid" variates, only the positive pairs (radar > 0, gauge > 0) of the residuals are considered. As not every grid cell can be assigned to a gauge, the integration of point information, i.e. gauge rainfall intensities, is achieved by considering the structure and the strength of dependence between the radar pixels and all the gauges within the radar image. Two different approaches, namely Maximum Theta and Multiple Theta, are presented. They finally allow for generating precipitation fields that mimic the spatial patterns of the radar fields and correct them for biases in their absolute rainfall intensities. The performance of the approach, which can be seen as a bias correction for radar fields, is demonstrated for the Bavarian Alps. The bias-corrected rainfall fields are compared to a field of interpolated gauge values (ordinary kriging) and are validated with available gauge measurements. The simulated precipitation fields are compared to an operationally corrected radar precipitation field (RADOLAN). The copula-based approach performs similarly well as indicated by different validation measures and successfully corrects for errors in the radar precipitation.
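
    One elementary ingredient of such copula modeling is estimating the dependence strength theta from the positive radar-gauge pairs; a method-of-moments sketch for a Gumbel copula (the family and the Kendall's-tau estimator are assumptions for illustration, not the paper's Maximum/Multiple Theta procedures):

```python
import numpy as np
from scipy.stats import kendalltau

def gumbel_theta(radar, gauge):
    """Estimate the Gumbel copula parameter from positive pairs.

    Uses the method-of-moments relation theta = 1 / (1 - tau), valid for
    positive dependence (tau > 0). Only pairs with radar > 0 and gauge > 0
    are used, as in the record.
    """
    radar, gauge = np.asarray(radar), np.asarray(gauge)
    mask = (radar > 0) & (gauge > 0)
    tau, _ = kendalltau(radar[mask], gauge[mask])
    if tau <= 0:
        raise ValueError("Gumbel copula requires positive dependence")
    return 1.0 / (1.0 - tau)

# usage with synthetic positively dependent intensities
rng = np.random.default_rng(1)
g = rng.gamma(2.0, 2.0, 500)
r = 0.7 * g + rng.gamma(1.0, 1.0, 500)      # radar: biased, noisy version
print("theta ~", round(gumbel_theta(r, g), 2))
```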

  17. Bias correction in the realized stochastic volatility model for daily volatility on the Tokyo Stock Exchange

    Science.gov (United States)

    Takaishi, Tetsuya

    2018-06-01

    The realized stochastic volatility model has been introduced to estimate more accurate volatility by using both daily returns and realized volatility. The main advantage of the model is that no special bias-correction factor for the realized volatility is required a priori. Instead, the model introduces a bias-correction parameter responsible for the bias hidden in realized volatility. We empirically investigate the bias-correction parameter for realized volatilities calculated at various sampling frequencies for six stocks on the Tokyo Stock Exchange, and then show that the dynamic behavior of the bias-correction parameter as a function of sampling frequency is qualitatively similar to that of the Hansen-Lunde bias-correction factor although their values are substantially different. Under the stochastic diffusion assumption of the return dynamics, we investigate the accuracy of estimated volatilities by examining the standardized returns. We find that while the moments of the standardized returns from low-frequency realized volatilities are consistent with the expectation from the Gaussian variables, the deviation from the expectation becomes considerably large at high frequencies. This indicates that the realized stochastic volatility model itself cannot completely remove bias at high frequencies.
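
    The realized stochastic volatility model referred to is commonly written as follows, with ξ the bias-correction parameter responsible for the bias hidden in realized volatility (a transcription of the standard form, not the paper's exact notation):

```latex
% Realized stochastic volatility: daily return r_t, latent log-volatility
% h_t, realized volatility RV_t; \xi absorbs the bias of RV_t.
\begin{align}
  r_t &= \varepsilon_t \exp(h_t/2), & \varepsilon_t &\sim N(0,1), \\
  h_{t+1} &= \mu + \phi\,(h_t - \mu) + \eta_t, & \eta_t &\sim N(0,\sigma_\eta^2), \\
  \log \mathrm{RV}_t &= \xi + h_t + u_t, & u_t &\sim N(0,\sigma_u^2).
\end{align}
```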

  18. Static properties of the nucleon octet in a relativistic potential model with center-of-mass correction

    International Nuclear Information System (INIS)

    Barik, N.; Dash, B.K.; Das, M.

    1985-01-01

    The static properties, such as magnetic moment, charge radius, and axial-vector coupling constants, of the quark core of baryons in the nucleon octet have been studied in an independent-quark model based on the Dirac equation with equally mixed scalar-vector potential in harmonic form in the current quark mass limit. The results obtained with the corrections due to center-of-mass motion are in reasonable agreement with experimental values

  19. Linear lattice and trajectory reconstruction and correction at FAST linear accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Romanov, A. [Fermilab; Edstrom, D. [Fermilab; Halavanau, A. [Northern Illinois U.

    2017-07-16

    The low-energy part of the FAST linear accelerator, based on 1.3 GHz superconducting RF cavities, was successfully commissioned [1]. During commissioning, beam-based, model-dependent methods were used to correct the linear lattice and trajectory. The lattice correction algorithm is based on analysis of beam shapes from profile monitors and of trajectory responses to dipole correctors. Trajectory responses to field gradient variations in quadrupoles and phase variations in superconducting RF cavities were used to correct bunch offsets in quadrupoles and accelerating cavities relative to their magnetic axes. Details of the methods used and experimental results are presented.
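
    Beam-based trajectory correction of this kind typically reduces to a least-squares inversion of the measured orbit response matrix; a minimal sketch with a synthetic matrix (illustrative dimensions, not FAST data):

```python
import numpy as np

def corrector_settings(R, x, rcond=1e-3):
    """Solve R @ dtheta = -x in the least-squares sense.

    R : response matrix (BPM reading change per unit corrector kick),
        measured by exciting each dipole corrector in turn.
    x : current trajectory readings at the BPMs (mm).
    A truncated pseudo-inverse (rcond) suppresses poorly determined modes.
    """
    return -np.linalg.pinv(R, rcond=rcond) @ x

# synthetic example: 12 BPMs, 6 correctors
rng = np.random.default_rng(2)
R = rng.normal(size=(12, 6))
x = R @ rng.normal(size=6) + 0.05 * rng.normal(size=12)   # orbit + noise
dtheta = corrector_settings(R, x)
print("residual rms:", np.sqrt(np.mean((x + R @ dtheta) ** 2)))
```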

  20. Tools for predicting the PK/PD of therapeutic proteins.

    Science.gov (United States)

    Diao, Lei; Meibohm, Bernd

    2015-07-01

    Assessments of the pharmacokinetic/pharmacodynamic (PK/PD) characteristics are an integral part in the development of novel therapeutic agents. Compared with traditional small molecule drugs, therapeutic proteins possess many distinct PK/PD features that necessitate the application of modified or separate approaches for assessing their PK/PD relationships. In this review, the authors discuss tools that are utilized to describe and predict the PK/PD features of therapeutic proteins and that are valuable additions in the armamentarium of drug development approaches to facilitate and accelerate their successful preclinical and clinical development. A variety of state-of-the-art PK/PD tools is currently being applied and has been adjusted to support the development of proteins as therapeutics, including allometric scaling approaches, target-mediated disposition models, first-in-man dose calculations, physiologically based PK models and empirical and semi-mechanistic PK/PD modeling. With the advent of the next generation of biologics including bioengineered antibody constructs being developed, these tools will need to be further refined and adapted to ensure their applicability and successful facilitation of the drug development process for these novel scaffolds.

  1. Model-Based Individualized Treatment of Chemotherapeutics: Bayesian Population Modeling and Dose Optimization.

    Directory of Open Access Journals (Sweden)

    Devaraj Jayachandran

    Full Text Available 6-Mercaptopurine (6-MP) is one of the key drugs in the treatment of many pediatric cancers, autoimmune diseases and inflammatory bowel disease. 6-MP is a prodrug, converted to an active metabolite, 6-thioguanine nucleotide (6-TGN), through an enzymatic reaction involving thiopurine methyltransferase (TPMT). Pharmacogenomic variation observed in the TPMT enzyme produces a significant variation in drug response among the patient population. Despite 6-MP's widespread use and observed variation in treatment response, efforts at quantitative optimization of dose regimens for individual patients are limited. In addition, research efforts devoted to pharmacogenomics to predict clinical responses are proving far from ideal. In this work, we present a Bayesian population modeling approach to develop a pharmacological model for 6-MP metabolism in humans. In the face of scarcity of data in clinical settings, a global sensitivity analysis-based model reduction approach is used to minimize the parameter space. For accurate estimation of sensitive parameters, robust optimal experimental design based on the D-optimality criterion was exploited. With the patient-specific model, a model predictive control algorithm is used to optimize the dose scheduling with the objective of maintaining the 6-TGN concentration within its therapeutic window. More importantly, for the first time, we show how the incorporation of information from different levels of the biological chain of response (i.e., gene expression-enzyme phenotype-drug phenotype) plays a critical role in determining the uncertainty in predicting the therapeutic target. The model and the control approach can be utilized in the clinical setting to individualize 6-MP dosing based on the patient's ability to metabolize the drug, instead of the traditional standard-dose-for-all approach.

  2. Model-Based Individualized Treatment of Chemotherapeutics: Bayesian Population Modeling and Dose Optimization

    Science.gov (United States)

    Jayachandran, Devaraj; Laínez-Aguirre, José; Rundell, Ann; Vik, Terry; Hannemann, Robert; Reklaitis, Gintaras; Ramkrishna, Doraiswami

    2015-01-01

    6-Mercaptopurine (6-MP) is one of the key drugs in the treatment of many pediatric cancers, autoimmune diseases and inflammatory bowel disease. 6-MP is a prodrug, converted to an active metabolite 6-thioguanine nucleotide (6-TGN) through an enzymatic reaction involving thiopurine methyltransferase (TPMT). Pharmacogenomic variation observed in the TPMT enzyme produces a significant variation in drug response among the patient population. Despite 6-MP's widespread use and observed variation in treatment response, efforts at quantitative optimization of dose regimens for individual patients are limited. In addition, research efforts devoted to pharmacogenomics to predict clinical responses are proving far from ideal. In this work, we present a Bayesian population modeling approach to develop a pharmacological model for 6-MP metabolism in humans. In the face of scarcity of data in clinical settings, a global sensitivity analysis-based model reduction approach is used to minimize the parameter space. For accurate estimation of sensitive parameters, robust optimal experimental design based on the D-optimality criterion was exploited. With the patient-specific model, a model predictive control algorithm is used to optimize the dose scheduling with the objective of maintaining the 6-TGN concentration within its therapeutic window. More importantly, for the first time, we show how the incorporation of information from different levels of the biological chain of response (i.e. gene expression-enzyme phenotype-drug phenotype) plays a critical role in determining the uncertainty in predicting the therapeutic target. The model and the control approach can be utilized in the clinical setting to individualize 6-MP dosing based on the patient's ability to metabolize the drug instead of the traditional standard-dose-for-all approach. PMID:26226448

  3. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    International Nuclear Information System (INIS)

    Berry, Tyrus; Harlim, John

    2016-01-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.

  4. Therapeutic enhancement: nursing intervention category for patients diagnosed with Readiness for Therapeutic Regimen Management.

    Science.gov (United States)

    Kelly, Cynthia W

    2008-04-01

    To present a new nursing intervention category called therapeutic enhancement. Fewer than half of North Americans follow their physician's recommendations for diet and exercise, even when these are crucial to their health or recovery. It is imperative that nurses consider new ways to promote healthy behaviours. Therapeutic enhancement is intended to provide such a fresh approach. Traditional intervention techniques focusing on education, contracts, social support and more frequent interaction with physicians appear not to be effective when used alone. Successful strategies have been multidisciplinary, and have included interventions by professional nurses who assist patients in understanding their disease and the disease process and in developing disease-management and self-management skills. Therapeutic enhancement incorporates the Stages of Change Theory, Commitment to Health Theory, Motivational Interviewing techniques and instrumentation specifically designed for process evaluation of health-promoting interventions. This is a critical review of approaches that, heretofore, have not been synthesised in a single published article. Based on the commonly used Stages of Change model, therapeutic enhancement is useful for patients who are at the action stage of change. Using therapeutic enhancement as well as therapeutic strategies identified in Stages of Change Theory, such as contingency management, helping relationships, counterconditioning, stimulus control and Motivational Interviewing techniques, nursing professionals can significantly increase the chances of patients moving from the action to the maintenance stage of change for a specific health behaviour. Using the nursing intervention category therapeutic enhancement can increase caregivers' success in helping patients maintain healthy behaviours.

  5. Ab initio thermochemistry using optimal-balance models with isodesmic corrections: The ATOMIC protocol

    Science.gov (United States)

    Bakowies, Dirk

    2009-04-01

    A theoretical composite approach, termed ATOMIC for Ab initio Thermochemistry using Optimal-balance Models with Isodesmic Corrections, is introduced for the calculation of molecular atomization energies and enthalpies of formation. Care is taken to achieve optimal balance in accuracy and cost between the various components contributing to high-level estimates of the fully correlated energy at the infinite-basis-set limit. To this end, the energy at the coupled-cluster level of theory including single, double, and quasiperturbational triple excitations is decomposed into Hartree-Fock, low-order correlation (MP2, CCSD), and connected-triples contributions and into valence-shell and core contributions. Statistical analyses for 73 representative neutral closed-shell molecules containing hydrogen and at least three first-row atoms (CNOF) are used to devise basis-set and extrapolation requirements for each of the eight components to maintain a given level of accuracy. Pople's concept of bond-separation reactions is implemented in an ab initio framework, providing for a complete set of high-level precomputed isodesmic corrections which can be used for any molecule for which a valence structure can be drawn. Use of these corrections is shown to lower basis-set requirements dramatically for each of the eight components of the composite model. A hierarchy of three levels is suggested for isodesmically corrected composite models which reproduce atomization energies at the reference level of theory to within 0.1 kcal/mol (A), 0.3 kcal/mol (B), and 1 kcal/mol (C). Large-scale statistical analysis shows that corrections beyond the CCSD(T) reference level of theory, including coupled-cluster theory with fully relaxed connected triple and quadruple excitations, first-order relativistic and diagonal Born-Oppenheimer corrections can normally be dealt with using a greatly simplified model that assumes thermoneutral bond-separation reactions and that reduces the estimate of these

  6. Cell-based therapeutic strategies for multiple sclerosis.

    Science.gov (United States)

    Scolding, Neil J; Pasquini, Marcelo; Reingold, Stephen C; Cohen, Jeffrey A

    2017-11-01

    The availability of multiple disease-modifying medications with regulatory approval to treat multiple sclerosis illustrates the substantial progress made in therapy of the disease. However, all are only partially effective in preventing inflammatory tissue damage in the central nervous system and none directly promotes repair. Cell-based therapies, including immunoablation followed by autologous haematopoietic stem cell transplantation, mesenchymal and related stem cell transplantation, pharmacologic manipulation of endogenous stem cells to enhance their reparative capabilities, and transplantation of oligodendrocyte progenitor cells, have generated substantial interest as novel therapeutic strategies for immune modulation, neuroprotection, or repair of the damaged central nervous system in multiple sclerosis. Each approach has potential advantages but also safety concerns and unresolved questions. Moreover, clinical trials of cell-based therapies present several unique methodological and ethical issues. We summarize here the status of cell-based therapies to treat multiple sclerosis and make consensus recommendations for future research and clinical trials. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain.

  7. [Beat therapeutic inertia in dyslipidemic patient management: A challenge in daily clinical practice] [corrected].

    Science.gov (United States)

    Morales, Clotilde; Mauri, Marta; Vila, Lluís

    2014-01-01

    Beat therapeutic inertia in dyslipidemic patient management: a challenge in daily clinical practice. In patients with dyslipidemia, the therapeutic goals must be reached in order to obtain the maximum benefit in reducing the risk of cardiovascular events, especially myocardial infarction. Even with guidelines and powerful hypolipidemic drugs available, the low-density lipoprotein cholesterol (LDL-c) goals are often not reached, particularly in patients at high cardiovascular risk. One of the causes is therapeutic inertia. Tools exist to plan treatment and ease decision-making. One challenge in everyday clinical practice is knowing the required percentage reduction in LDL-c; another is choosing the treatment, both at the start of therapy and when the desired objective is not reached. This article proposes a practical method that can help resolve these questions. Copyright © 2013 Sociedad Española de Arteriosclerosis. Published by Elsevier España. All rights reserved.

  8. Comparison of prostate set-up accuracy and margins with off-line bony anatomy corrections and online implanted fiducial-based corrections.

    Science.gov (United States)

    Greer, P B; Dahl, K; Ebert, M A; Wratten, C; White, M; Denham, J W

    2008-10-01

    The aim of the study was to determine prostate set-up accuracy and set-up margins with off-line bony anatomy-based imaging protocols, compared with online implanted fiducial marker-based imaging with daily corrections. Eleven patients were treated with implanted prostate fiducial markers and online set-up corrections. Pretreatment orthogonal electronic portal images were acquired to determine couch shifts and verification images were acquired during treatment to measure residual set-up error. The prostate set-up errors that would result from skin marker set-up, off-line bony anatomy-based protocols and online fiducial marker-based corrections were determined. Set-up margins were calculated for each set-up technique using the percentage of encompassed isocentres and a margin recipe. The prostate systematic set-up errors in the medial-lateral, superior-inferior and anterior-posterior directions for skin marker set-up were 2.2, 3.6 and 4.5 mm (1 standard deviation). For our bony anatomy-based off-line protocol the prostate systematic set-up errors were 1.6, 2.5 and 4.4 mm. For the online fiducial based set-up the results were 0.5, 1.4 and 1.4 mm. A prostate systematic error of 10.2 mm was uncorrected by the off-line bone protocol in one patient. Set-up margins calculated to encompass 98% of prostate set-up shifts were 11-14 mm with bone off-line set-up and 4-7 mm with online fiducial markers. Margins from the van Herk margin recipe were generally 1-2 mm smaller. Bony anatomy-based set-up protocols improve the group prostate set-up error compared with skin marks; however, large prostate systematic errors can remain undetected or systematic errors increased for individual patients. The margin required for set-up errors was found to be 10-15 mm unless implanted fiducial markers are available for treatment guidance.
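
    The margin recipe cited is, in its common van Herk form, M = 2.5Σ + 0.7σ, with Σ the standard deviation of the per-patient systematic errors and σ the root-mean-square of the per-patient random errors; a small sketch with synthetic numbers (not the study's data):

```python
import numpy as np

def van_herk_margin(errors_by_patient):
    """Set-up margin (mm) from the van Herk recipe M = 2.5*Sigma + 0.7*sigma.

    errors_by_patient : list of 1-D arrays, each holding one patient's
    daily set-up errors (mm) along a single axis.
    Sigma = SD of per-patient means (systematic component);
    sigma = RMS of per-patient SDs (random component).
    """
    means = np.array([e.mean() for e in errors_by_patient])
    sds = np.array([e.std(ddof=1) for e in errors_by_patient])
    Sigma = means.std(ddof=1)
    sigma = np.sqrt(np.mean(sds ** 2))
    return 2.5 * Sigma + 0.7 * sigma

# illustrative use with synthetic anterior-posterior errors, three patients
rng = np.random.default_rng(3)
patients = [rng.normal(loc=m, scale=2.0, size=30) for m in (1.0, -2.0, 3.0)]
print(round(van_herk_margin(patients), 1), "mm")
```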

  9. A software-based x-ray scatter correction method for breast tomosynthesis

    OpenAIRE

    Jia Feng, Steve Si; Sechopoulos, Ioannis

    2011-01-01

    Purpose: To develop a software-based scatter correction method for digital breast tomosynthesis (DBT) imaging and investigate its impact on the image quality of tomosynthesis reconstructions of both phantoms and patients.

  10. Clearing the waters: Evaluating the need for site-specific field fluorescence corrections based on turbidity measurements

    Science.gov (United States)

    Saraceno, John F.; Shanley, James B.; Downing, Bryan D.; Pellerin, Brian A.

    2017-01-01

    In situ fluorescent dissolved organic matter (fDOM) measurements have gained increasing popularity as a proxy for dissolved organic carbon (DOC) concentrations in streams. One challenge to accurate fDOM measurements in many streams is light attenuation due to suspended particles. Downing et al. (2012) evaluated the need for corrections to compensate for particle interference on fDOM measurements using a single sediment standard in a laboratory study. The application of those results to a large river improved unfiltered field fDOM accuracy. We tested the same correction equation in a headwater tropical stream and found that it overcompensated fDOM when turbidity exceeded ∼300 formazin nephelometric units (FNU). Therefore, we developed a site-specific, field-based fDOM correction equation through paired in situ fDOM measurements of filtered and unfiltered streamwater. The site-specific correction increased fDOM accuracy up to a turbidity as high as 700 FNU, the maximum observed in this study. The difference in performance between the laboratory-based correction equation of Downing et al. (2012) and our site-specific, field-based correction equation likely arises from differences in particle size distribution between the sediment standard used in the lab (silt) and that observed in our study (fine to medium sand), particularly during high flows. Therefore, a particle interference correction equation based on a single sediment type may not be ideal when field sediment size is significantly different. Given that field fDOM corrections for particle interference under turbid conditions are a critical component in generating accurate DOC estimates, we describe a way to develop site-specific corrections.
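
    A site-specific correction of the kind described can be fit by regressing the log ratio of filtered to unfiltered fDOM on turbidity; the exponential attenuation form below is an assumption for illustration, not necessarily the functional form the authors adopted:

```python
import numpy as np

def fit_fdom_correction(fdom_unfiltered, fdom_filtered, turbidity):
    """Fit fdom_true ~ fdom_obs * exp(k * turbidity) from paired sensors.

    Assumes exponential particle attenuation (one plausible form among
    several); k is the slope of log(filtered/unfiltered) vs turbidity (FNU).
    """
    y = np.log(fdom_filtered / fdom_unfiltered)
    k = np.polyfit(turbidity, y, 1)[0]
    return k

def correct_fdom(fdom_obs, turbidity, k):
    """Apply the fitted site-specific attenuation correction."""
    return fdom_obs * np.exp(k * turbidity)

# synthetic demonstration: true attenuation coefficient 0.001 per FNU
rng = np.random.default_rng(4)
turb = rng.uniform(0, 700, 200)                 # up to ~700 FNU, as observed
true = 50.0 + rng.normal(0, 1, 200)             # filtered-water fDOM
obs = true * np.exp(-0.001 * turb)              # particle-attenuated signal
print("fitted k:", round(fit_fdom_correction(obs, true, turb), 5))
```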

  11. Specification Search for Identifying the Correct Mean Trajectory in Polynomial Latent Growth Models

    Science.gov (United States)

    Kim, Minjung; Kwok, Oi-Man; Yoon, Myeongsun; Willson, Victor; Lai, Mark H. C.

    2016-01-01

    This study investigated the optimal strategy for model specification search under the latent growth modeling (LGM) framework, specifically on searching for the correct polynomial mean or average growth model when there is no a priori hypothesized model in the absence of theory. In this simulation study, the effectiveness of different starting…

  12. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    International Nuclear Information System (INIS)

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees

    2015-01-01

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts
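
    The core mapping step can be sketched as learning an MRI-intensity-to-HU lookup on an artifact-free slice and applying it to the corrupted slice; the binned-median map below is a deliberate simplification of the paper's more comprehensive analysis:

```python
import numpy as np

def mri_to_hu_map(mri_clean, ct_clean, n_bins=64):
    """Median HU per MRI-intensity bin, learned on an artifact-free slice."""
    edges = np.linspace(mri_clean.min(), mri_clean.max(), n_bins + 1)
    idx = np.clip(np.digitize(mri_clean.ravel(), edges) - 1, 0, n_bins - 1)
    hu = np.full(n_bins, np.nan)
    for b in range(n_bins):
        vals = ct_clean.ravel()[idx == b]
        if vals.size:
            hu[b] = np.median(vals)
    return edges, hu

def replace_corrupted(mri_bad, edges, hu):
    """Predict HU for a corrupted slice from its coregistered MRI slice."""
    idx = np.clip(np.digitize(mri_bad, edges) - 1, 0, hu.size - 1)
    return hu[idx]

# synthetic usage: HU roughly linear in MRI intensity plus noise
rng = np.random.default_rng(5)
mri = rng.uniform(0.0, 1.0, (64, 64))
ct = 1000.0 * mri + rng.normal(0.0, 20.0, (64, 64))
edges, hu = mri_to_hu_map(mri, ct)
print(replace_corrupted(mri, edges, hu).shape)   # (64, 64) predicted HU map
```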

  13. [Predictors of the therapeutic discharge in patients with dual pathology admitted to a therapeutic community with a psychiatric unit].

    Science.gov (United States)

    Madoz-Gúrpide, Agustín; García Vicent, Vicente; Luque Fuentes, Encarnación; Ochoa Mangado, Enriqueta

    2013-01-01

    This study aims to analyze the variables on which depends therapeutic discharge, in patients with a severe dual diagnosis admitted to a professional therapeutic community where their pathology is treated. 325 patients admitted between June 2000 and June 2009 to the therapeutic community. This is a retrospective, cross-sectional study with no control group, based on the detailed analysis of the information collected in a model of semi-structured clinical interview designed in the therapeutic community. The 29.5% of the individuals included in the sample were therapeutically discharged. Of all the variables introduced in this analysis the most significant ones were gender, age at the beginning of treatment, education level, opiate dependence, polidrug abuse, and the presence of psychotic disorders and borderline personality disorder. In our study, gender determines the type of discharge, being therapeutic discharge more frequent among women. A higher educational also increases a better prognosis with a higher rate of therapeutic discharge among individuals with higher education level. A later age at the beginning of the treatment reduces the likelihood of therapeutic discharge. Likewise, polidrug abuse, diagnosis of psychotic disorders and borderline personality disorder are associated to a lower rate of therapeutic discharge. Recognizing these characteristics will allow the early identification of those patients more at risk of dropping treatment hastily, while trying to prevent it by increasing the therapeutic intensity.

  14. Correcting for catchment area nonresidency in studies based on tumor-registry data

    International Nuclear Information System (INIS)

    Sposto, R.; Preston, D.L.

    1993-05-01

    We discuss the effect of catchment area nonresidency on estimates of cancer incidence from a tumor-registry-based cohort study and demonstrate that a relatively simple correction is possible in the context of Poisson regression analysis if individual residency histories or the probabilities of residency are known. A comparison of a complete data maximum likelihood analysis with several Poisson regression analyses demonstrates the adequacy of the simple correction in a large simulated data set. We compare analyses of stomach-cancer incidence from the Radiation Effects Research Foundation tumor registry with and without the correction. We also discuss some implications of including cases identified only on the basis of death certificates. (author)
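
    The "relatively simple correction" amounts to scaling the person-years at risk by the probability of catchment-area residency, which enters a Poisson regression as an offset; a sketch with statsmodels and invented stratum-level numbers (variable names are illustrative, not from the paper):

```python
import numpy as np
import statsmodels.api as sm

# Invented stratum-level data: registry case counts, person-years, assumed
# residency probabilities, and a covariate of interest (e.g. dose).
cases = np.array([12, 30, 25, 9])
py = np.array([4000.0, 9000.0, 7000.0, 2500.0])
p_res = np.array([0.95, 0.80, 0.70, 0.60])
dose = np.array([0.0, 0.5, 1.0, 2.0])

X = sm.add_constant(dose)
# Offset log(py * p_res): only resident person-time can yield registry cases,
# so nonresidency deflates the expected counts multiplicatively.
model = sm.GLM(cases, X, family=sm.families.Poisson(),
               offset=np.log(py * p_res))
print(model.fit().summary())
```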

  15. A Phillips curve interpretation of error-correction models of the wage and price dynamics

    DEFF Research Database (Denmark)

    Harck, Søren H.

    This paper presents a model of employment, distribution and inflation in which a modern error correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error-correction setting, which actually seems to capture the wage and price dynamics of many large-scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably...

  16. A Phillips curve interpretation of error-correction models of the wage and price dynamics

    DEFF Research Database (Denmark)

    Harck, Søren H.

    2009-01-01

    This paper presents a model of employment, distribution and inflation in which a modern error correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error-correction setting, which actually seems to capture the wage and price dynamics of many large-scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably...

  17. An Enhanced MWR-Based Wet Tropospheric Correction for Sentinel-3: Inheritance from Past ESA Altimetry Missions

    Science.gov (United States)

    Lazaro, Clara; Fernandes, Joanna M.

    2015-12-01

    The GNSS-derived Path Delay (GPD) and the Data Combination (DComb) algorithms were developed by the University of Porto (U.Porto), in the scope of different projects funded by ESA, to compute a continuous and improved wet tropospheric correction (WTC) for use in satellite altimetry. Both algorithms are mission independent and are based on a linear space-time objective analysis procedure that combines various wet path delay data sources. A new algorithm that gets the best of each aforementioned algorithm (GNSS-derived Path Delay Plus, GPD+) has been developed at U.Porto in the scope of the SL_cci project, where the use of consistent datasets that are stable in time is of major importance. The algorithm has been applied to the main eight altimetric missions (TOPEX/Poseidon, Jason-1, Jason-2, ERS-1, ERS-2, Envisat, CryoSat-2 and SARAL). The upcoming Sentinel-3 carries a two-channel on-board radiometer similar to those deployed on ERS-1/2 and Envisat. Consequently, fine-tuning the GPD+ algorithm to these missions' datasets will enrich it by increasing its capability to deal quickly with Sentinel-3 data. Foreseeing that the computation of an improved MWR-based WTC for use with Sentinel-3 data will be required, this study focuses on the results obtained for the ERS-1/2 and Envisat missions, which are expected to give insight into the computation of this correction for the upcoming ESA altimetric mission. The various WTC corrections available for each mission (in general, the original correction derived from the on-board MWR, the model correction and the one derived from GPD+) are inter-compared either directly or using various sea level anomaly variance statistical analyses. Results show that the GPD+ algorithm is efficient in generating global and continuous datasets, corrected for land and ice contamination and for spurious measurements of instrumental origin, with significant impacts on all ESA missions.

  18. Petri net-based prediction of therapeutic targets that recover abnormally phosphorylated proteins in muscle atrophy.

    Science.gov (United States)

    Jung, Jinmyung; Kwon, Mijin; Bae, Sunghwa; Yim, Soorin; Lee, Doheon

    2018-03-05

    Muscle atrophy, an involuntary loss of muscle mass, is involved in various diseases and sometimes leads to mortality. However, therapeutics for muscle atrophy thus far have had limited effects. Here, we present a new approach for therapeutic target prediction using Petri net simulation of the status of phosphorylation, with a reasonable assumption that the recovery of abnormally phosphorylated proteins can be a treatment for muscle atrophy. The Petri net model was employed to simulate phosphorylation status in three states, i.e. reference, atrophic and each gene-inhibited state, based on the myocyte-specific phosphorylation network. Here, we newly devised a phosphorylation-specific Petri net that involves two types of transitions (phosphorylation or de-phosphorylation) and two types of places (activation with or without phosphorylation). Before predicting therapeutic targets, the simulation results in reference and atrophic states were validated by Western blotting experiments detecting five marker proteins, i.e. RELA, SMAD2, SMAD3, FOXO1 and FOXO3. Finally, we determined 37 potential therapeutic targets whose inhibition recovers the phosphorylation status from an atrophic state as indicated by the five validated marker proteins. In the evaluation, we confirmed that the 37 potential targets were enriched for muscle atrophy-related terms such as actin and muscle contraction processes, and they were also significantly overlapping with the genes associated with muscle atrophy reported in the Comparative Toxicogenomics Database (p-value < …) based on the Petri net. We generated a list of the potential therapeutic targets whose inhibition recovers abnormally phosphorylated proteins in an atrophic state. They were evaluated by various approaches, such as Western blotting, GO terms, literature, known muscle atrophy-related genes and shortest path analysis. We expect the new proposed strategy to provide an understanding of phosphorylation status in muscle atrophy and to provide assistance towards…
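
    The two-transition-type Petri net described above can be illustrated with a token game. Below is a minimal sketch in Python; the places, transitions and firing schedule are invented toy examples, not the authors' myocyte-specific network:

        # Token-game Petri net with the record's two transition types
        # (phosphorylation / de-phosphorylation). Toy network: the place
        # and transition names are hypothetical, not from the paper.
        places = {"A": 1, "A_p": 0, "B": 1, "B_p": 0}   # _p = phosphorylated

        transitions = [
            {"name": "phos_A",   "in": ["A"],        "out": ["A_p"]},
            {"name": "phos_B",   "in": ["A_p", "B"], "out": ["A_p", "B_p"]},
            {"name": "dephos_B", "in": ["B_p"],      "out": ["B"]},
        ]

        def enabled(t):
            return all(places[p] > 0 for p in t["in"])

        def fire(t):
            for p in t["in"]:
                places[p] -= 1
            for p in t["out"]:
                places[p] += 1

        for step in range(10):                 # bounded token game
            ready = [t for t in transitions if enabled(t)]
            if not ready:
                break
            fire(ready[step % len(ready)])     # simple deterministic schedule
            print(step, places)

    Inhibiting a candidate target would then correspond to deleting its transition and replaying the game, comparing the resulting marking against the reference state.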

  19. Therapeutic dosage assessment based on population pharmacokinetics of a novel single-dose transdermal donepezil patch in healthy volunteers.

    Science.gov (United States)

    Choi, Hee Youn; Kim, Yo Han; Hong, Donghyun; Kim, Seong Su; Bae, Kyun-Seop; Lim, Hyeong-Seok

    2015-08-01

    We performed population pharmacokinetic (PK) analysis of a novel transdermal donepezil patch in healthy subjects who participated in a phase I trial. We also studied the optimal dosage regimen with repeated patch application for achieving a therapeutic range using a PK simulation model. This study used data from a randomized, single-dose escalation phase I clinical trial conducted in Korea. The population PK analysis was performed using NONMEM software, version 7.3. From the final PK model, we simulated repeat patch application results assuming various transdermal absorption rates. Based on the clinical trial data, novel donepezil patches with doses of 43.75 mg/12.5 cm², 87.5 mg/25 cm², and 175 mg/50 cm² were placed on each subject. A linear one-compartment, first-order elimination model with sequential zero- and first-order absorption best described the donepezil plasma concentrations after patch application. Simulated results on the basis of the PK model showed that repeat application of the 87.5 mg/25 cm² and 175 mg/50 cm² patches every 72 h would cover the therapeutic range of donepezil and reach steady state faster, with fewer fluctuations in concentration, compared to typical oral administration. A linear one-compartment model with sequential zero- and first-order absorption was effective for describing the PK of donepezil after patch application. Based on this analysis, application of the 87.5 mg/25 cm² or 175 mg/50 cm² patch every 72 h is expected to achieve the desired plasma concentration of donepezil.
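
    The model structure named in this record (one compartment, first-order elimination, sequential zero- then first-order absorption, patches repeated every 72 h) is straightforward to prototype. A minimal Euler-integration sketch in Python; every parameter value below is an invented placeholder, not a published population estimate:

        import numpy as np

        dose, tau = 87.5, 72.0      # mg per patch, h between applications
        frac0, t_zero = 0.4, 12.0   # zero-order fraction and duration (h)
        ka = 0.05                   # 1/h, first-order absorption rate
        CL, V = 10.0, 800.0         # L/h clearance, L volume (hypothetical)

        dt = 0.1
        t = np.arange(0, 14 * 24, dt)            # two weeks of dosing
        depot = dose * (1 - frac0)               # first patch at t = 0
        C = np.zeros_like(t)                     # plasma conc., mg/L

        for i in range(1, len(t)):
            t_in = t[i] % tau
            if t_in < dt:                        # new patch applied
                depot += dose * (1 - frac0)
            rate0 = dose * frac0 / t_zero if t_in < t_zero else 0.0
            rate1 = ka * depot                   # first-order input
            depot -= rate1 * dt
            C[i] = C[i - 1] + (rate0 + rate1 - CL * C[i - 1]) / V * dt

        print(f"approximate steady-state trough: {C[-1] * 1000:.0f} ng/mL")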

  20. Determining spherical lens correction for astronaut training underwater.

    Science.gov (United States)

    Porter, Jason; Gibson, C Robert; Strauss, Samuel

    2011-09-01

    To develop a model that will accurately predict the distance spherical lens correction needed to be worn by National Aeronautics and Space Administration astronauts while training underwater. The replica space suit's helmet contains curved visors that induce refractive power when submerged in water. Anterior surface powers and thicknesses were measured for the helmet's protective and inside visors. The impact of each visor on the helmet's refractive power in water was analyzed using thick lens calculations and Zemax optical design software. Using geometrical optics approximations, a model was developed to determine the optimal distance spherical power needed to be worn underwater based on the helmet's total induced spherical power underwater and the astronaut's manifest spectacle plane correction in air. The validity of the model was tested using data from both eyes of 10 astronauts who trained underwater. The helmet's visors induced a total power of -2.737 D when placed underwater. The required underwater spherical correction (FW) was linearly related to the spectacle plane spherical correction in air (FAir): FW = FAir + 2.356 D. The mean magnitude of the difference between the actual correction worn underwater and the calculated underwater correction was 0.20 ± 0.11 D. The actual and calculated values were highly correlated (r = 0.971), with 70% of eyes having a difference in magnitude of < … D across the astronauts. The model accurately predicts the actual values worn underwater and can be applied (more generally) to determine a suitable spectacle lens correction to be worn behind other types of masks when submerged underwater.
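
    Because the fitted relation is stated explicitly (FW = FAir + 2.356 D), applying it is a one-liner; the sample prescription below is hypothetical:

        def underwater_correction(f_air: float) -> float:
            """Distance spherical correction (D) to wear underwater,
            per the linear model reported above: FW = FAir + 2.356 D."""
            return f_air + 2.356

        # Hypothetical manifest refraction of -1.50 D in air:
        print(f"{underwater_correction(-1.50):+.2f} D")   # +0.86 D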

  1. Cannabinoid receptor 2 participates in amyloid-β processing in a mouse model of Alzheimer's disease but plays a minor role in the therapeutic properties of a cannabis-based medicine

    OpenAIRE

    Aso Pérez, Ester; Andrés Benito, Pol; Carmona, Margarita; Maldonado, Rafael, 1961-; Ferrer, Isidre

    2016-01-01

    The endogenous cannabinoid system represents a promising therapeutic target to modify neurodegenerative pathways linked to Alzheimer's disease (AD). The aim of the present study was to evaluate the specific contribution of the CB2 receptor to the progression of AD-like pathology and its role in the positive effect of a cannabis-based medicine (1:1 combination of Δ9-tetrahydrocannabinol and cannabidiol) previously demonstrated to be beneficial in the AβPP/PS1 transgenic model of the disease. A new...

  2. Promising Therapeutic Strategies for Mesenchymal Stem Cell-Based Cardiovascular Regeneration: From Cell Priming to Tissue Engineering

    Directory of Open Access Journals (Sweden)

    Seung Taek Ji

    2017-01-01

    Full Text Available The primary causes of death among chronic diseases worldwide are ischemic cardiovascular diseases, such as stroke and myocardial infarction. Recent evidence indicates that adult stem cell therapies involving cardiovascular regeneration represent promising strategies to treat cardiovascular diseases. Owing to their immunomodulatory properties and vascular repair capabilities, mesenchymal stem cells (MSCs) are strong candidate therapeutic stem cells for use in cardiovascular regeneration. However, major limitations must be overcome, including their very low survival rate in ischemic lesions. Various attempts have been made to improve the poor survival and longevity of engrafted MSCs. In order to develop novel therapeutic strategies, it is necessary to first identify stem cell modulators for intracellular signal triggering or niche activation. One promising therapeutic strategy is the priming of therapeutic MSCs with stem cell modulators before transplantation. Another is a tissue engineering-based therapeutic strategy involving a cell scaffold, a cell-protein-scaffold architecture made of biomaterials such as ECM or hydrogel, and cell patch- and 3D printing-based tissue engineering. This review focuses on the current clinical applications of MSCs for treating cardiovascular diseases and highlights several therapeutic strategies for promoting their therapeutic efficacy in vitro or in vivo, from cell priming to tissue engineering strategies, for use in cardiovascular regeneration.

  3. Volterra Filtering for ADC Error Correction

    Directory of Open Access Journals (Sweden)

    J. Saliga

    2001-09-01

    Full Text Available Dynamic non-linearity of analog-to-digital converters (ADC) contributes significantly to the distortion of digitized signals. This paper introduces a new, effective method for compensating such distortion, based on the application of Volterra filtering. Considering an a priori error model of the ADC allows finding an efficient inverse Volterra model for error correction. The efficiency of the proposed method is demonstrated on experimental results.
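
    The idea of inverting an a priori error model can be shown with a memoryless polynomial nonlinearity (the simplest Volterra case). A toy Python sketch with invented coefficients; the paper's inverse Volterra filter also includes memory terms:

        import numpy as np

        c2, c3 = 0.02, -0.01                    # hypothetical ADC error model
        x = np.linspace(-1.0, 1.0, 1001)        # ideal input, full scale = 1
        y = x + c2 * x**2 + c3 * x**3           # distorted ADC output

        # First-order ("p-th order") inverse: pass the output through
        # the negated nonlinearity to cancel the distortion.
        x_hat = y - c2 * y**2 - c3 * y**3

        print("max error before:", np.abs(y - x).max())
        print("max error after: ", np.abs(x_hat - x).max())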

  4. 76 FR 39006 - Medicare Program; Hospital Inpatient Value-Based Purchasing Program; Correction

    Science.gov (United States)

    2011-07-05

    ... and 480 [CMS-3239-CN] RIN 0938-AQ55 Medicare Program; Hospital Inpatient Value-Based Purchasing... Value-Based Purchasing Program." DATES: Effective Date: These corrections are effective on July 1, 2011... for the hospital value-based purchasing program. Therefore, in section III. 6. and 7. of this notice...

  5. The controversial origin of pericytes during angiogenesis - Implications for cell-based therapeutic angiogenesis and cell-based therapies.

    Science.gov (United States)

    Blocki, Anna; Beyer, Sebastian; Jung, Friedrich; Raghunath, Michael

    2018-01-01

    Pericytes reside within the basement membrane of small vessels and are often in direct cellular contact with endothelial cells, fulfilling important functions during blood vessel formation and homeostasis. Recently, these pericytes have also been identified as mesenchymal stem cells. Mesenchymal stem cells, and especially their specialized subpopulation of pericytes, represent promising candidates for therapeutic angiogenesis applications, and have already been widely applied in pre-clinical and clinical trials. However, cell-based therapies of ischemic diseases (especially of myocardial infarction) have not resulted in significant long-term improvement. Interestingly, pericytes from a hematopoietic origin were observed in embryonic skin, and a pericyte sub-population expressing leukocyte and monocyte markers was described during adult angiogenesis in vivo. Since mesenchymal stem cells do not express hematopoietic markers, the latter cell type might represent an alternative pericyte population relevant to angiogenesis. Therefore, we sourced blood-derived angiogenic cells (BDACs) from monocytes that closely resembled hematopoietic pericytes, which had only been observed in vivo thus far. BDACs displayed many pericytic features and exhibited enhanced revascularization and functional tissue regeneration in a pre-clinical model of critical limb ischemia. Comparison between BDACs and mesenchymal pericytes indicated that BDACs (while resembling hematopoietic pericytes) enhanced early stages of angiogenesis, such as endothelial cell sprouting. In contrast, mesenchymal pericytes were responsible for blood vessel maturation and homeostasis, while reducing endothelial sprouting. Since the formation of new blood vessels is crucial during therapeutic angiogenesis or during integration of implants into the host tissue, hematopoietic pericytes (and therefore BDACs) might offer an advantageous addition or even an alternative for cell-based therapies.

  6. Image-based modeling of tumor shrinkage in head and neck radiation therapy

    Science.gov (United States)

    Chao, Ming; Xie, Yaoqin; Moros, Eduardo G.; Le, Quynh-Thu; Xing, Lei

    2010-01-01

    Purpose: Understanding the kinetics of tumor growth/shrinkage represents a critical step in the quantitative assessment of therapeutics and the realization of adaptive radiation therapy. This article presents a novel framework for image-based modeling of tumor change and demonstrates its performance with synthetic images and clinical cases. Methods: Due to significant changes in tumor tissue content, similarity-based models are not suitable for describing the process of tumor volume change. Under the hypothesis that tissue features in a tumor volume or at the boundary region are partially preserved, the kinetic change was modeled in two steps: (1) autodetection of homologous tissue features shared by two input images using the scale-invariant feature transform (SIFT) method; and (2) establishment of a voxel-to-voxel correspondence between the images for the remaining spatial points by interpolation. The correctness of the tissue feature correspondence was assured by a bidirectional association procedure, in which SIFT features were mapped from template to target images and in reverse. A series of digital phantom experiments and five head and neck clinical cases were used to assess the performance of the proposed technique. Results: The proposed technique can faithfully identify the known changes introduced when constructing the digital phantoms. The subsequent feature-guided thin plate spline calculation reproduced the “ground truth” with accuracy better than 1.5 mm. For the clinical cases, the new algorithm worked reliably for a volume change as large as 30%. Conclusions: An image-based tumor kinetic algorithm was developed to model the tumor response to radiation therapy. The technique provides a practical framework for future application in adaptive radiation therapy. PMID:20527569

  7. Geological Corrections in Gravimetry

    Science.gov (United States)

    Mikuška, J.; Marušiak, I.

    2015-12-01

    Applying corrections for the known geology to gravity data can be traced back to the first quarter of the 20th century. Later on, mostly in areas with sedimentary cover, at local and regional scales, the correction known as gravity stripping has been in use since the mid 1960s, provided that there was enough geological information. Stripping at regional to global scales became possible after the release of the CRUST 2.0 and, later, CRUST 1.0 models in the years 2000 and 2013, respectively. Especially the latter model provides quite a new view on the relevant geometries and on the topographic and crustal densities, as well as on the crust/mantle density contrast. Thus, the isostatic corrections, which have often been used in the past, can now be replaced by procedures working with independent information interpreted primarily from seismic studies. We have developed software for performing geological corrections in the space domain, based on a priori geometry and density grids which can be of either rectangular or spherical/ellipsoidal types with cells of the shapes of rectangles, tesseroids or triangles. It enables us to calculate the required gravitational effects not only in the form of surface maps or profiles but, for instance, also along vertical lines, which can shed some additional light on the nature of the geological correction. The software can work at a variety of scales and considers the input information out to an optional distance from the calculation point, up to the antipodes. Our main objective is to treat the geological correction as an alternative to accounting for the topography with varying densities, since the bottoms of the topographic masses, namely the geoid or ellipsoid, generally do not represent geological boundaries. We would also like to call attention to the possible distortions of the corrected gravity anomalies. This work was supported by the Slovak Research and Development Agency under contract APVV-0827-12.

  8. Event-based motion correction for PET transmission measurements with a rotating point source

    International Nuclear Information System (INIS)

    Zhou, Victor W; Kyme, Andre Z; Meikle, Steven R; Fulton, Roger

    2011-01-01

    Accurate attenuation correction is important for quantitative positron emission tomography (PET) studies. When performing transmission measurements using an external rotating radioactive source, object motion during the transmission scan can distort the attenuation correction factors computed as the ratio of the blank to transmission counts, and cause errors and artefacts in reconstructed PET images. In this paper we report a compensation method for rigid body motion during PET transmission measurements, in which list mode transmission data are motion corrected event-by-event, based on known motion, to ensure that all events which traverse the same path through the object are recorded on a common line of response (LOR). As a result, the motion-corrected transmission LOR may record a combination of events originally detected on different LORs. To ensure that the corresponding blank LOR records events from the same combination of contributing LORs, the list mode blank data are spatially transformed event-by-event based on the same motion information. The number of counts recorded on the resulting blank LOR is then equivalent to the number of counts that would have been recorded on the corresponding motion-corrected transmission LOR in the absence of any attenuating object. The proposed method has been verified in phantom studies with both stepwise movements and continuous motion. We found that attenuation maps derived from motion-corrected transmission and blank data agree well with those of the stationary phantom and are significantly better than uncorrected attenuation data.
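
    The central operation, mapping both endpoints of each list-mode event through the known rigid-body motion so that events sharing a path land on a common LOR, takes a few lines with homogeneous coordinates. A numpy sketch with invented endpoints and motion (scanner geometry and LOR rebinning omitted):

        import numpy as np

        def rigid_matrix(theta_deg, translation):
            """4x4 homogeneous transform: rotation about z, then shift."""
            th = np.radians(theta_deg)
            M = np.eye(4)
            M[:3, :3] = [[np.cos(th), -np.sin(th), 0.0],
                         [np.sin(th),  np.cos(th), 0.0],
                         [0.0, 0.0, 1.0]]
            M[:3, 3] = translation
            return M

        def correct_event(p1, p2, M):
            """Transform both detector endpoints of one event."""
            q1 = M @ np.append(p1, 1.0)
            q2 = M @ np.append(p2, 1.0)
            return q1[:3], q2[:3]

        M = rigid_matrix(5.0, [2.0, -1.0, 0.0])   # hypothetical motion
        p1 = np.array([300.0, 0.0, 10.0])         # endpoints in mm
        p2 = np.array([-300.0, 20.0, 10.0])
        print(correct_event(p1, p2, M))

    A full implementation would then rebin the transformed endpoints to the nearest physical LOR and, as the record describes, apply the same transform to the blank-scan events.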

  9. Model-Based Optimization of Scaffold Geometry and Operating Conditions of Radial Flow Packed-Bed Bioreactors for Therapeutic Applications

    Directory of Open Access Journals (Sweden)

    Danilo Donato

    2014-01-01

    Full Text Available Radial flow perfusion of cell-seeded hollow cylindrical porous scaffolds may overcome the transport limitations of pure diffusion and direct axial perfusion in the realization of bioengineered substitutes of failing or missing tissues. Little has been reported on the optimization criteria of such bioreactors. A steady-state model was developed, combining convective and dispersive transport of dissolved oxygen with Michaelis-Menten cellular consumption kinetics. Dimensional analysis was used to combine more effectively geometric and operational variables in the dimensionless groups determining bioreactor performance. The effectiveness of cell oxygenation was expressed in terms of non-hypoxic fractional construct volume. The model permits the optimization of the geometry of hollow cylindrical constructs, and direction and magnitude of perfusion flow, to ensure cell oxygenation and culture at controlled oxygen concentration profiles. This may help engineer tissues suitable for therapeutic and drug screening purposes.
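
    Dropping dispersion, the radial convection-consumption balance reduces to a 1-D marching problem that already reproduces the paper's figure of merit. A dispersion-free Python sketch with invented construct and culture parameters:

        import numpy as np

        R_in, R_out, H = 0.002, 0.010, 0.020  # m: lumen, outer radius, height
        Q, C_in = 1.0e-7, 0.20                # m^3/s flow, mol/m^3 inlet O2
        Vmax, Km = 2.0e-4, 6.0e-3             # Michaelis-Menten parameters
        C_hyp = 0.02                          # mol/m^3 hypoxia threshold

        r = np.linspace(R_out, R_in, 2000)    # march inward with the flow
        C = np.empty_like(r)
        C[0] = C_in
        for i in range(1, len(r)):
            u = Q / (2 * np.pi * r[i - 1] * H)           # radial speed
            ds = abs(r[i] - r[i - 1])                    # step along the flow
            uptake = Vmax * C[i - 1] / (Km + C[i - 1])   # O2 consumption
            C[i] = max(C[i - 1] - uptake / u * ds, 0.0)

        # Non-hypoxic fractional construct volume (weight by 2*pi*r).
        frac = np.trapz((C > C_hyp) * r, r) / np.trapz(r, r)
        print(f"non-hypoxic fractional volume ~= {frac:.2f}")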

  10. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    Full Text Available This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  11. Sensor-Based Model Driven Control Strategy for Precision Irrigation

    Directory of Open Access Journals (Sweden)

    Camilo Lozoya

    2016-01-01

    Full Text Available Improving the efficiency of agricultural irrigation systems substantially contributes to sustainable water management. This improvement can be achieved through an automated irrigation system that includes a real-time control strategy based on the water, soil, and crop relationship. This paper presents a model-driven control strategy applied to an irrigation system, in order to make efficient use of water for large crop fields, that is, applying the correct amount of water in the correct place at the right moment. The proposed model uses a predictive algorithm that senses soil moisture and weather variables to determine the optimal amount of water required by the crop. This proposed approach is evaluated against a traditional irrigation system based on the empirical definition of time periods and against a basic soil moisture control system. Results indicate that the use of model predictive control in an irrigation system achieves higher efficiency and significantly reduces water consumption.
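
    The receding-horizon logic sketched in the abstract, predict soil moisture from the sensed state and forecast weather, then apply only the water needed, fits in a few lines. A toy Python sketch; the daily water-balance model and all numbers are illustrative assumptions:

        # Daily soil-water balance: theta[k+1] = theta[k]
        #   + (irrigation + rain - ET) / root_zone_depth.  Illustrative only.
        theta_fc, theta_target = 0.32, 0.25      # field capacity, target
        z_root = 400.0                           # mm of root zone

        def predict(theta, irrig, rain, et):
            return min(theta + (irrig + rain - et) / z_root, theta_fc)

        def irrigation_needed(theta_now, rain_fc, et_fc, horizon=3):
            """Smallest dose (mm, applied today) keeping the predicted
            moisture above target over the forecast horizon."""
            for dose in range(0, 55, 5):
                th, ok = theta_now, True
                for day in range(horizon):
                    th = predict(th, dose if day == 0 else 0.0,
                                 rain_fc[day], et_fc[day])
                    ok = ok and th >= theta_target
                if ok:
                    return dose
            return 50

        # Sensed moisture plus a hypothetical 3-day forecast:
        print(irrigation_needed(0.24, rain_fc=[0, 2, 0], et_fc=[5, 5, 6]))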

  12. Aligning observed and modelled behaviour based on workflow decomposition

    Science.gov (United States)

    Wang, Lu; Du, YuYue; Liu, Wei

    2017-09-01

    When business processes are mostly supported by information systems, the availability of event logs generated from these systems, as well as the requirement of appropriate process models, are increasing. Business processes can be discovered, monitored and enhanced by extracting process-related information. However, some events cannot be correctly identified because of the explosion in the volume of event logs. Therefore, a new process mining technique is proposed in this paper, based on a workflow decomposition method. Petri nets (PNs) are used to describe business processes, and conformance checking of event logs and process models is then investigated. A decomposition approach is proposed to divide large process models and event logs into several separate parts that can be analysed independently, while an alignment approach based on a state equation method in PN theory enhances the performance of conformance checking. Both approaches are implemented in the process mining framework ProM. The correctness and effectiveness of the proposed methods are illustrated through experiments.

  13. Reconstructing interacting entropy-corrected holographic scalar field models of dark energy in the non-flat universe

    Energy Technology Data Exchange (ETDEWEB)

    Karami, K; Khaledian, M S [Department of Physics, University of Kurdistan, Pasdaran Street, Sanandaj (Iran, Islamic Republic of); Jamil, Mubasher, E-mail: KKarami@uok.ac.ir, E-mail: MS.Khaledian@uok.ac.ir, E-mail: mjamil@camp.nust.edu.pk [Center for Advanced Mathematics and Physics (CAMP), National University of Sciences and Technology (NUST), Islamabad (Pakistan)

    2011-02-15

    Here we consider the entropy-corrected version of the holographic dark energy (DE) model in the non-flat universe. We obtain the equation of state parameter in the presence of interaction between DE and dark matter. Moreover, we reconstruct the potential and the dynamics of the quintessence, tachyon, K-essence and dilaton scalar field models according to the evolutionary behavior of the interacting entropy-corrected holographic DE model.
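
    For orientation, the entropy-corrected holographic dark energy density that such reconstructions start from is commonly written as follows (IR cutoff L, reduced Planck mass M_p, model constants α and β; conventions vary between papers):

        \rho_{\Lambda} = 3c^{2}M_{p}^{2}L^{-2}
                       + \alpha\,L^{-4}\ln\!\left(M_{p}^{2}L^{2}\right)
                       + \beta\,L^{-4},

    where the first term is the ordinary holographic density and the L^{-4} terms are the quantum (entropy) corrections, which vanish for α = β = 0.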

  14. Cannabinoid Receptor 2 Participates in Amyloid-β Processing in a Mouse Model of Alzheimer's Disease but Plays a Minor Role in the Therapeutic Properties of a Cannabis-Based Medicine.

    Science.gov (United States)

    Aso, Ester; Andrés-Benito, Pol; Carmona, Margarita; Maldonado, Rafael; Ferrer, Isidre

    2016-01-01

    The endogenous cannabinoid system represents a promising therapeutic target to modify neurodegenerative pathways linked to Alzheimer's disease (AD). The aim of the present study was to evaluate the specific contribution of the CB2 receptor to the progression of AD-like pathology and its role in the positive effect of a cannabis-based medicine (1:1 combination of Δ9-tetrahydrocannabinol and cannabidiol) previously demonstrated to be beneficial in the AβPP/PS1 transgenic model of the disease. A new mouse strain was generated by crossing AβPP/PS1 transgenic mice with CB2 knockout mice. Results show that lack of CB2 exacerbates cortical Aβ deposition and increases the levels of soluble Aβ40. However, CB2 receptor deficiency does not affect the viability of AβPP/PS1 mice, does not accelerate their memory impairment, does not modify tau hyperphosphorylation in dystrophic neurites associated with Aβ plaques, and does not attenuate the positive cognitive effect induced by the cannabis-based medicine in these animals. These findings suggest a minor role for the CB2 receptor in the therapeutic effect of the cannabis-based medicine in AβPP/PS1 mice, but also constitute evidence of a link between the CB2 receptor and Aβ processing.

  15. Magnetic corrections to π -π scattering lengths in the linear sigma model

    Science.gov (United States)

    Loewe, M.; Monje, L.; Zamora, R.

    2018-03-01

    In this article, we consider the magnetic corrections to π-π scattering lengths in the framework of the linear sigma model. For this, we consider all the one-loop corrections in the s, t, and u channels, associated to the insertion of a Schwinger propagator for charged pions, working in the region of small values of the magnetic field. Our calculation relies on an appropriate expansion for the propagator. It turns out that the leading scattering length, l = 0 in the S channel, increases with an increasing value of the magnetic field, in the isospin I = 2 case, whereas the opposite effect is found for the I = 0 case. The isospin symmetry is valid because the insertion of the magnetic field occurs through the absolute value of the electric charges. The channel I = 1 does not receive any corrections. These results, for the channels I = 0 and I = 2, are opposite with respect to the thermal corrections found previously in the literature.

  16. Evaluation of scoring models for identifying the need for therapeutic intervention of upper gastrointestinal bleeding: A new prediction score model for Japanese patients.

    Science.gov (United States)

    Iino, Chikara; Mikami, Tatsuya; Igarashi, Takasato; Aihara, Tomoyuki; Ishii, Kentaro; Sakamoto, Jyuichi; Tono, Hiroshi; Fukuda, Shinsaku

    2016-11-01

    Multiple scoring systems have been developed to predict outcomes in patients with upper gastrointestinal bleeding. We determined how well these and a newly established scoring model predict the need for therapeutic intervention, excluding transfusion, in Japanese patients with upper gastrointestinal bleeding. We reviewed data from 212 consecutive patients with upper gastrointestinal bleeding. Patients requiring endoscopic intervention, operation, or interventional radiology were allocated to the therapeutic intervention group. Firstly, we compared areas under the curve for the Glasgow-Blatchford, Clinical Rockall, and AIMS65 scores. Secondly, the scores and factors likely associated with upper gastrointestinal bleeding were analyzed with a logistic regression analysis to form a new scoring model. Thirdly, the new model and the existing model were investigated to evaluate their usefulness. Therapeutic intervention was required in 109 patients (51.4%). The Glasgow-Blatchford score was superior to both the Clinical Rockall and AIMS65 scores for predicting therapeutic intervention need (area under the curve, 0.75 [95% confidence interval, 0.69-0.81] vs 0.53 [0.46-0.61] and 0.52 [0.44-0.60], respectively). Multivariate logistic regression analysis retained seven significant predictors in the model, including systolic blood pressure < …, in patients with upper gastrointestinal bleeding. © 2016 Japan Gastroenterological Endoscopy Society.
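
    Ranking such scores by area under the ROC curve is a one-liner per score once the intervention outcome is coded. A sketch with simulated stand-in data (scikit-learn assumed available; the numbers are not the study's):

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        y = rng.integers(0, 2, size=212)   # therapeutic intervention yes/no

        # Simulated scores: one informative, two near-uninformative.
        gbs     = y * 4 + rng.normal(6, 3, 212)
        rockall = rng.normal(3, 2, 212)
        aims65  = rng.normal(1, 1, 212)

        for name, s in [("GBS", gbs), ("Rockall", rockall), ("AIMS65", aims65)]:
            print(f"{name:8s} AUC = {roc_auc_score(y, s):.2f}")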

  17. Combine TV-L1 model with guided image filtering for wide and faint ring artifacts correction of in-line x-ray phase contrast computed tomography.

    Science.gov (United States)

    Ji, Dongjiang; Qu, Gangrong; Hu, Chunhong; Zhao, Yuqing; Chen, Xiaodong

    2018-01-01

    In practice, mis-calibrated detector pixels give rise to wide and faint ring artifacts in reconstructed images from in-line phase-contrast computed tomography (IL-PC-CT). Ring artifact correction is essential in IL-PC-CT. In this study, a novel method for correcting wide and faint ring artifacts is presented, based on combining a TV-L1 model with guided image filtering (GIF) in the reconstructed image domain. The new correction method includes two main steps, namely the GIF step and the TV-L1 step. To validate the performance of this method, simulation data and real experimental synchrotron data are provided. The results demonstrate that the TV-L1 model combined with the GIF step can effectively correct the wide and faint ring artifacts in IL-PC-CT.
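
    The two building blocks are available off the shelf; a rough Python sketch on a synthetic ring-contaminated slice, assuming opencv-contrib-python (for cv2.ximgproc) and scikit-image are installed. Chambolle's TV denoiser is used here as an L2-fidelity stand-in for the paper's TV-L1 model, and the coupling of the two steps is simplified:

        import numpy as np
        import cv2                      # needs opencv-contrib-python
        from skimage.restoration import denoise_tv_chambolle

        # Synthetic slice: smooth object plus faint concentric rings.
        n = 256
        y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
        r = np.hypot(x, y)
        obj = np.exp(-4 * r**2)
        img = (obj + 0.05 * np.sin(60 * r)).astype(np.float32)

        # Step 1: guided image filtering (self-guided, edge-preserving).
        gif = cv2.ximgproc.guidedFilter(img, img, 8, 1e-3)
        # Step 2: total-variation denoising of the filtered image.
        tv = denoise_tv_chambolle(gif, weight=0.05)

        print("ring residual before/after:",
              np.std(img - obj), np.std(tv - obj))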

  18. Pressure correction schemes for compressible flows: application to barotropic Navier-Stokes equations and to the drift-flux model

    Energy Technology Data Exchange (ETDEWEB)

    Gastaldo, L

    2007-11-15

    We develop in this PhD thesis a simulation tool for bubbly flows encountered in some late phases of a core-melt accident in pressurized water reactors, when the flow of molten core and vessel structures comes to interact chemically with the concrete of the containment floor. The physical modelling is based on the so-called drift-flux model, consisting of mass balance and momentum balance equations for the mixture (Navier-Stokes equations) and a mass balance equation for the gaseous phase. First, we propose a pressure correction scheme for the compressible Navier-Stokes equations based on mixed non-conforming finite elements. An ad hoc discretization of the advection operator, by a finite volume technique based on a dual mesh, ensures the stability of the velocity prediction step. A priori estimates for the velocity and the pressure yield the existence of the solution. We prove that this scheme is stable, in the sense that the discrete entropy is decreasing. For the conservation equation of the gaseous phase, we build a finite volume discretization which satisfies a discrete maximum principle. From this last property, we deduce the existence and the uniqueness of the discrete solution. Finally, on the basis of these works, a conservative and monotone scheme, stable in the low Mach number limit, is built for the drift-flux model. This scheme enjoys, moreover, the following property: the algorithm preserves a constant pressure and velocity through moving interfaces between phases (i.e. contact discontinuities of the underlying hyperbolic system). In order to satisfy this property at the discrete level, we build an original pressure correction step which couples the mass balance equation with the transport terms of the gas mass balance equation, the remaining terms of the gas mass balance being taken into account with a splitting method. We prove the existence of a discrete solution for the pressure correction step. Numerical results are presented; they…
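
    As background, a generic incremental pressure-correction step for the barotropic Navier-Stokes equations can be written as follows (a textbook-style sketch, not the thesis's mixed finite-element/finite-volume scheme). First, a velocity prediction with the lagged pressure,

        \frac{\rho^{n}\tilde{u}-\rho^{n}u^{n}}{\Delta t}
          + \nabla\cdot\left(\rho^{n}u^{n}\otimes\tilde{u}\right)
          - \mu\,\Delta\tilde{u} + \nabla p^{n} = 0,

    followed by a correction that couples the velocity update to the mass balance through the equation of state \rho = \varrho(p):

        u^{n+1} = \tilde{u} - \frac{\Delta t}{\rho^{n}}\,
                  \nabla\left(p^{n+1}-p^{n}\right),
        \qquad
        \frac{\varrho(p^{n+1})-\varrho(p^{n})}{\Delta t}
          + \nabla\cdot\left(\varrho(p^{n+1})\,u^{n+1}\right) = 0.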

  19. Spheroidal corrections to the spherical and parabolic bases of the hydrogen atom

    International Nuclear Information System (INIS)

    Mardyan, L.G.; Pogosyan, G.S.; Sisakyan, A.N.

    1986-01-01

    This paper introduces the spheroidal bases of the hydrogen atom and obtains recursion relations that determine the expansion of the spheroidal basis with respect to the parabolic basis. The leading spheroidal corrections to the spherical and parabolic bases are calculated by perturbation theory.

  20. Cell-based therapeutics from an economic perspective: primed for a commercial success or a research sinkhole?

    Science.gov (United States)

    McAllister, Todd N; Dusserre, Nathalie; Maruszewski, Marcin; L'heureux, Nicolas

    2008-11-01

    Despite widespread hype and significant investment through the late 1980s and 1990s, cell-based therapeutics have largely failed from both a clinical and financial perspective. While the early pioneers were able to create clinically efficacious products, small margins coupled with small initial indications made it impossible to produce a reasonable return on the huge initial investments that had been made to support widespread research activities. Even as US FDA clearance opened up larger markets, investor interest waned, and the crown jewels of cell-based therapeutics went bankrupt or were rescued by corporate bailout. Despite the hard lessons learned from these pioneering companies, many of today's regenerative medicine companies are supporting nearly identical strategies. It remains to be seen whether or not our proposed tenets for investment and commercialization strategy yield an economic success or whether the original model can produce a return on investment sufficient to justify the large up-front investments. Irrespective of which approach yields a success, it is critically important that more of the second-generation products establish profitability if the field is to enjoy continued investment from both public and private sectors.

  1. A software-based x-ray scatter correction method for breast tomosynthesis

    International Nuclear Information System (INIS)

    Jia Feng, Steve Si; Sechopoulos, Ioannis

    2011-01-01

    Purpose: To develop a software-based scatter correction method for digital breast tomosynthesis (DBT) imaging and investigate its impact on the image quality of tomosynthesis reconstructions of both phantoms and patients. Methods: A Monte Carlo (MC) simulation of x-ray scatter, with geometry matching that of the cranio-caudal (CC) view of a DBT clinical prototype, was developed using the Geant4 toolkit and used to generate maps of the scatter-to-primary ratio (SPR) of a number of homogeneous standard-shaped breasts of varying sizes. Dimension-matched SPR maps were then deformed and registered to DBT acquisition projections, allowing for the estimation of the primary x-ray signal acquired by the imaging system. Noise filtering of the estimated projections was then performed to reduce the impact of the quantum noise of the x-ray scatter. Three dimensional (3D) reconstruction was then performed using the maximum likelihood-expectation maximization (MLEM) method. This process was tested on acquisitions of a heterogeneous 50/50 adipose/glandular tomosynthesis phantom with embedded masses, fibers, and microcalcifications and on acquisitions of patients. The image quality of the reconstructions of the scatter-corrected and uncorrected projections was analyzed by studying the signal-difference-to-noise ratio (SDNR), the integral of the signal in each mass lesion (integrated mass signal, IMS), and the modulation transfer function (MTF). Results: The reconstructions of the scatter-corrected projections demonstrated superior image quality. The SDNR of masses embedded in a 5 cm thick tomosynthesis phantom improved 60%-66%, while the SDNR of the smallest mass in an 8 cm thick phantom improved by 59% (p < 0.01). The IMS of the masses in the 5 cm thick phantom also improved by 15%-29%, while the IMS of the masses in the 8 cm thick phantom improved by 26%-62% (p < 0.01). Some embedded microcalcifications in the tomosynthesis phantoms were visible only in the scatter-corrected

  2. Corrective Action Investigation Plan for Corrective Action Unit 232: Area 25 Sewage Lagoons, Nevada Test Site, Nevada, Revision 0

    International Nuclear Information System (INIS)

    1999-01-01

    The Corrective Action Investigation Plan for Corrective Action Unit 232, Area 25 Sewage Lagoons, has been developed in accordance with the Federal Facility Agreement and Consent Order that was agreed to by the U.S. Department of Energy, Nevada Operations Office; the State of Nevada Division of Environmental Protection; and the U.S. Department of Defense. Corrective Action Unit 232 consists of Corrective Action Site 25-03-01, Sewage Lagoon. Corrective Action Unit 232, Area 25 Sewage Lagoons, received sanitary effluent from four buildings within the Test Cell "C" Facility from the mid-1960s through approximately 1996. The Test Cell "C" Facility was used to develop nuclear propulsion technology by conducting nuclear test reactor studies. Based on the site history collected to support the Data Quality Objectives process, contaminants of potential concern include volatile organic compounds, semivolatile organic compounds, Resource Conservation and Recovery Act metals, petroleum hydrocarbons, polychlorinated biphenyls, pesticides, herbicides, gamma-emitting radionuclides, isotopic plutonium, isotopic uranium, and strontium-90. A detailed conceptual site model is presented in Section 3.0 and Appendix A of this Corrective Action Investigation Plan. The conceptual model serves as the basis for the sampling strategy. Under the Federal Facility Agreement and Consent Order, the Corrective Action Investigation Plan will be submitted to the Nevada Division of Environmental Protection for approval. Field work will be conducted following approval of the plan. The results of the field investigation will support a defensible evaluation of corrective action alternatives in the Corrective Action Decision Document.

  3. Corrective Action Investigation Plan for Corrective Action Unit 232: Area 25 Sewage Lagoons, Nevada Test Site, Nevada, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    USDOE/NV

    1999-05-01

    The Corrective Action Investigation Plan for Corrective Action Unit 232, Area 25 Sewage Lagoons, has been developed in accordance with the Federal Facility Agreement and Consent Order that was agreed to by the U.S. Department of Energy, Nevada Operations Office; the State of Nevada Division of Environmental Protection; and the U.S. Department of Defense. Corrective Action Unit 232 consists of Corrective Action Site 25-03-01, Sewage Lagoon. Corrective Action Unit 232, Area 25 Sewage Lagoons, received sanitary effluent from four buildings within the Test Cell "C" Facility from the mid-1960s through approximately 1996. The Test Cell "C" Facility was used to develop nuclear propulsion technology by conducting nuclear test reactor studies. Based on the site history collected to support the Data Quality Objectives process, contaminants of potential concern include volatile organic compounds, semivolatile organic compounds, Resource Conservation and Recovery Act metals, petroleum hydrocarbons, polychlorinated biphenyls, pesticides, herbicides, gamma-emitting radionuclides, isotopic plutonium, isotopic uranium, and strontium-90. A detailed conceptual site model is presented in Section 3.0 and Appendix A of this Corrective Action Investigation Plan. The conceptual model serves as the basis for the sampling strategy. Under the Federal Facility Agreement and Consent Order, the Corrective Action Investigation Plan will be submitted to the Nevada Division of Environmental Protection for approval. Field work will be conducted following approval of the plan. The results of the field investigation will support a defensible evaluation of corrective action alternatives in the Corrective Action Decision Document.

  4. Fecal Microbiota-based Therapeutics for Recurrent Clostridium difficile Infection, Ulcerative Colitis and Obesity

    Directory of Open Access Journals (Sweden)

    Christian Carlucci

    2016-11-01

    Full Text Available The human gut microbiome is a complex ecosystem of fundamental importance to human health. Our increased understanding of gut microbial composition and functional interactions in health and disease states has spurred research efforts examining the gut microbiome as a valuable target for therapeutic intervention. This review provides updated insight into the state of the gut microbiome in recurrent Clostridium difficile infection (CDI), ulcerative colitis (UC), and obesity while addressing the rationale for the modulation of the gut microbiome using fecal microbiota transplant (FMT)-based therapies. Current microbiome-based therapeutics in pre-clinical or clinical development are discussed. We end by putting this within the context of the current regulatory framework surrounding FMT and related therapies.

  5. Ultrafast cone-beam CT scatter correction with GPU-based Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    Yuan Xu

    2014-03-01

    Full Text Available Purpose: Scatter artifacts severely degrade image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to finish the whole process, including both scatter correction and reconstruction, automatically within 30 seconds. Methods: The method consists of six steps: (1) FDK reconstruction using raw projection data; (2) rigid registration of the planning CT to the FDK results; (3) MC scatter calculation at sparse view angles using the planning CT; (4) interpolation of the calculated scatter signals to the other angles; (5) removal of scatter from the raw projections; (6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC noise from the simulated scatter images caused by low photon numbers. The method is validated on one simulated head-and-neck case with 364 projection angles. Results: We have examined the variation of the scatter signal among projection angles using Fourier analysis. It is found that scatter images at 31 angles are sufficient to restore those at all angles with < 0.1% error. For the simulated patient case with a resolution of 512 × 512 × 100, we simulated 5 × 10^6 photons per angle. The total computation time is 20.52 seconds on a Nvidia GTX Titan GPU, and the time at each step is 2.53, 0.64, 14.78, 0.13, 0.19, and 2.25 seconds, respectively. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. It accomplishes the whole procedure of scatter correction and reconstruction within 30 seconds.
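
    The angular-interpolation idea behind steps (3)-(5), simulate scatter at a few sparse angles, interpolate to all angles, subtract, can be mocked up without the GPU Monte Carlo engine. A numpy sketch with synthetic projections standing in for the measured and simulated data:

        import numpy as np

        n_ang, n_pix = 364, 256
        ang = np.linspace(0, 2 * np.pi, n_ang, endpoint=False)

        primary = np.random.rand(n_ang, n_pix)                 # stand-in
        scatter = 0.3 * (1 + np.cos(ang))[:, None] * np.ones(n_pix)
        raw = primary + scatter                                # "measured"

        # Steps (3)-(4): scatter known at 31 sparse angles, interpolated
        # to the remaining angles (a real pipeline gets these from MC).
        sparse = np.linspace(0, n_ang - 1, 31).astype(int)
        est = np.empty_like(raw)
        for j in range(n_pix):
            est[:, j] = np.interp(np.arange(n_ang), sparse, scatter[sparse, j])

        # Step (5): remove scatter; step (6) would feed this to FDK.
        corrected = raw - est
        print("max residual scatter error:", np.abs(corrected - primary).max())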

  6. A correction for Dupuit-Forchheimer interface flow models of seawater intrusion in unconfined coastal aquifers

    Science.gov (United States)

    Koussis, Antonis D.; Mazi, Katerina; Riou, Fabien; Destouni, Georgia

    2015-06-01

    Interface flow models that use the Dupuit-Forchheimer (DF) approximation for assessing the freshwater lens and the seawater intrusion in coastal aquifers lack representation of the gap through which fresh groundwater discharges to the sea. In these models, the interface outcrops unrealistically at the same point as the free surface, is too shallow and intersects the aquifer base too far inland, thus overestimating an intruding seawater front. To correct this shortcoming of DF-type interface solutions for unconfined aquifers, we here adapt the outflow gap estimate of an analytical 2-D interface solution for infinitely thick aquifers to fit the 50%-salinity contour of variable-density solutions for finite-depth aquifers. We further improve the accuracy of the interface toe location predicted with depth-integrated DF interface solutions by ∼20% (relative to the 50%-salinity contour of variable-density solutions) by combining the outflow-gap adjusted aquifer depth at the sea with a transverse-dispersion adjusted density ratio (Pool and Carrera, 2011), appropriately modified for unconfined flow. The effectiveness of the combined correction is exemplified for two regional Mediterranean aquifers, the Israel Coastal and Nile Delta aquifers.
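
    The density ratio that the cited correction adjusts enters through the classical sharp-interface (Ghyben-Herzberg) relation, reproduced here for reference:

        z_{s} = \frac{\rho_{f}}{\rho_{s}-\rho_{f}}\,h_{f}
              \;\approx\; 40\,h_{f},

    with z_s the interface depth below sea level, h_f the freshwater head above sea level, and ρ_f, ρ_s the freshwater and seawater densities; the Pool and Carrera adjustment mentioned above replaces the ratio (ρ_s − ρ_f)/ρ_f with a transverse-dispersion-corrected value.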

  7. Pedagogical Knowledge Base Underlying EFL Teachers' Provision of Oral Corrective Feedback in Grammar Instruction

    Science.gov (United States)

    Atai, Mahmood Reza; Shafiee, Zahra

    2017-01-01

    The present study investigated the pedagogical knowledge base underlying EFL teachers' provision of oral corrective feedback in grammar instruction. More specifically, we explored the consistent thought patterns guiding the decisions of three Iranian teachers regarding oral corrective feedback on grammatical errors. We also examined the potential…

  8. Long range Debye-Hückel correction for computation of grid-based electrostatic forces between biomacromolecules

    International Nuclear Information System (INIS)

    Mereghetti, Paolo; Martinez, Michael; Wade, Rebecca C

    2014-01-01

    Brownian dynamics (BD) simulations can be used to study very large molecular systems, such as models of the intracellular environment, using atomic-detail structures. Such simulations require strategies to contain the computational costs, especially for the computation of interaction forces and energies. A common approach is to compute interaction forces between macromolecules by precomputing their interaction potentials on three-dimensional discretized grids. For long-range interactions, such as electrostatics, grid-based methods are subject to finite size errors. We describe here the implementation of a Debye-Hückel correction to the grid-based electrostatic potential used in the SDA BD simulation software that was applied to simulate solutions of bovine serum albumin and of hen egg white lysozyme. We found that the inclusion of the long-range electrostatic correction increased the accuracy of both the protein-protein interaction profiles and the protein diffusion coefficients at low ionic strength. An advantage of this method is the low additional computational cost required to treat long-range electrostatic interactions in large biomacromolecular systems. Moreover, the implementation described here for BD simulations of protein solutions can also be applied in implicit solvent molecular dynamics simulations that make use of gridded interaction potentials
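
    The long-range term itself, the Debye-Hückel screened Coulomb potential with the inverse Debye length computed from ionic strength, is compact to write down. An illustrative Python sketch (SI constants; this is not the SDA implementation):

        import numpy as np

        e, NA, kB, eps0 = 1.602e-19, 6.022e23, 1.381e-23, 8.854e-12

        def kappa(I_molar, eps_r=78.5, T=298.15):
            """Inverse Debye length (1/m) from ionic strength (mol/L)."""
            n = I_molar * 1e3 * NA                    # ions per m^3
            return np.sqrt(2 * e**2 * n / (eps0 * eps_r * kB * T))

        def dh_energy(q1, q2, r, I_molar, eps_r=78.5):
            """Screened Coulomb energy (J); q1, q2 in elementary
            charges, r in metres."""
            k = kappa(I_molar, eps_r)
            return (q1 * q2 * e**2 * np.exp(-k * r)
                    / (4 * np.pi * eps0 * eps_r * r))

        # Two unit charges 5 nm apart at 50 mM ionic strength:
        print(dh_energy(+1, -1, 5e-9, 0.05) / (kB * 298.15), "kT")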

  9. The modified version of the centre-of-mass correction to the bag model

    International Nuclear Information System (INIS)

    Bartelski, J.; Tatur, S.

    1986-01-01

    We propose an improvement of the recently considered version of the centre-of-mass correction to the bag model. We identify a nucleon bag with the physical nucleon confined in an external fictitious spherical well potential, with an additional external fictitious pressure characterized by the parameter b. The introduction of such a pressure restores the conservation of the canonical energy-momentum tensor, which was lost in the former model. We propose several methods to determine the numerical value of b. We calculate the Roper resonance mass as well as the static electroweak parameters of a nucleon with centre-of-mass corrections taken into account. 7 refs., 1 tab. (author)

  10. Perturbative corrections for approximate inference in gaussian latent variable models

    DEFF Research Database (Denmark)

    Opper, Manfred; Paquet, Ulrich; Winther, Ole

    2013-01-01

    Expectation Propagation (EP) provides a framework for approximate inference. When the model under consideration is over a latent Gaussian field, with the approximation being Gaussian, we show how these approximations can systematically be corrected. A perturbative expansion is made of the exact but intractable correction, which we illustrate on tree-structured Ising model approximations. Furthermore, these corrections provide a polynomial-time assessment of the approximation error. We also provide both theoretical and practical insights on the exactness of the EP solution. © 2013 Manfred Opper, Ulrich Paquet and Ole Winther.

  11. The accuracy of CT-based inhomogeneity corrections and in vivo dosimetry for the treatment of lung cancer

    International Nuclear Information System (INIS)

    Essers, M.; Lanson, J.H.; Leunens, G.; Schnabel, T.; Mijnheer, B.J.

    1995-01-01

    Purpose: To determine the accuracy of dose calculations based on CT-densities for lung cancer patients irradiated with an anteroposterior parallel-opposed treatment technique and to evaluate, for this technique, the use of diodes and an Electronic Portal Imaging Device (EPID) for absolute exit dose and relative transmission dose verification, respectively. Materials and methods: Dose calculations were performed using a 3-dimensional treatment planning system, using CT-densities or assuming the patient to be water-equivalent. A simple inhomogeneity correction model was used to take CT-densities into account. For 22 patients, entrance and exit dose calculations at the central beam axis and at several off-axis positions were compared with diode measurements. For 12 patients, diode exit dose measurements and exit dose calculations were compared with EPID transmission dose values. Results: Using water-equivalent calculations, the actual exit dose value under lung was, on average, underestimated by 30%, with an overall spread of 10% (1 SD) in the ratio of measurement and calculation. Using inhomogeneity corrections, the exit dose was, on average, overestimated by 4%, with an overall spread of 6% (1 SD). Only 2% of the average deviation was due to the inhomogeneity correction model. The other 2% resulted from a small inaccuracy in beam fit parameters and the fact that lack of backscatter is not taken into account by the calculation model. Organ motion, resulting from the ventilatory or cardiac cycle, caused an estimated uncertainty in calculated exit dose of 2.5% (1 SD). The most important reason for the large overall spread was, however, the inaccuracy involved in point measurements, of about 4% (1 SD), which resulted from the systematic and random deviation in patient set-up and therefore in the diode position with respect to patient anatomy. Transmission and exit dose values agreed with an average difference of 1.1%. Transmission dose profiles also showed good…

  12. Recoil corrected bag model calculations for semileptonic weak decays

    International Nuclear Information System (INIS)

    Lie-Svendsen, Oe.; Hoegaasen, H.

    1987-02-01

    Recoil corrections to various model results for strangeness-changing weak decay amplitudes have been developed. It is shown that the spurious reference frame dependence of earlier calculations is reduced. The second class currents are generally less important than obtained by calculations in the static approximation. Theoretical results are compared to observations. The agreement is quite good, although the values for the Cabibbo angle obtained by fits to the decay rates are somewhat too large.

  13. One-loop radiative correction to the triple Higgs coupling in the Higgs singlet model

    Directory of Open Access Journals (Sweden)

    Shi-Ping He

    2017-01-01

    Full Text Available Though the 125 GeV Higgs boson is consistent with the standard model (SM) prediction until now, the triple Higgs coupling can deviate from the SM value in physics beyond the SM (BSM). In this paper, the radiative correction to the triple Higgs coupling is calculated in the minimal extension of the SM by adding a real gauge singlet scalar. In this model there are two scalars, h and H, and both of them are mixing states of the doublet and singlet. Provided that the mixing angle is set to be zero, namely the SM limit, h is the pure left-over of the doublet and its behavior is the same as that of the SM at the tree level. However, the loop corrections can alter h-related couplings. In this SM limit case, the effect of the singlet H may show up in the h-related couplings, especially the triple h coupling. Our numerical results show that the deviation is sizable. For λΦS=1 (see text for the parameter definition), the deviation δhhh(1) can be 40%. For λΦS=1.5, δhhh(1) can reach 140%. The sizable radiative correction is mainly caused by three reasons: the magnitude of the coupling λΦS, the light mass of the additional scalar, and the threshold enhancement. The radiative corrections for the hVV and hff couplings are from the counter-terms, which are the universal correction in this model and always at O(1%). The hZZ coupling, which can be precisely measured, may be a complementary probe to the triple h coupling in the search for BSM physics. In the optimal case, the triple h coupling is very sensitive to the BSM physics, and this model can be tested at future high-luminosity hadron colliders and electron–positron colliders.
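
    The deviation δhhh quoted in the abstract is conventionally defined against the SM tree-level triple Higgs coupling; up to normalization conventions,

        \lambda_{hhh}^{\mathrm{SM}} = \frac{3\,m_{h}^{2}}{v},
        \qquad
        \delta_{hhh} \equiv
        \frac{\lambda_{hhh}-\lambda_{hhh}^{\mathrm{SM}}}
             {\lambda_{hhh}^{\mathrm{SM}}},

    with v ≈ 246 GeV and m_h ≈ 125 GeV.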

  14. Ellipsoidal terrain correction based on multi-cylindrical equal-area map projection of the reference ellipsoid

    Science.gov (United States)

    Ardalan, A. A.; Safari, A.

    2004-09-01

    An operational algorithm for computation of terrain correction (or local gravity field modeling) based on application of the closed-form solution of the Newton integral in terms of Cartesian coordinates in a multi-cylindrical equal-area map projection of the reference ellipsoid is presented. The multi-cylindrical equal-area map projection of the reference ellipsoid has been derived and is described in detail for the first time. Ellipsoidal mass elements with various sizes on the surface of the reference ellipsoid are selected and the gravitational potential and vector of gravitational intensity (i.e. gravitational acceleration) of the mass elements are computed via numerical solution of the Newton integral in terms of geodetic coordinates {λ,ϕ,h}. Four base-edge points of the ellipsoidal mass elements are transformed into the multi-cylindrical equal-area map projection surface to build Cartesian mass elements by associating the height of the corresponding ellipsoidal mass elements to the transformed area elements. Using the closed-form solution of the Newton integral in terms of Cartesian coordinates, the gravitational potential and vector of gravitational intensity of the transformed Cartesian mass elements are computed and compared with those of the numerical solution of the Newton integral for the ellipsoidal mass elements in terms of geodetic coordinates. Numerical tests indicate that the difference between the two computations, i.e. the numerical solution of the Newton integral for ellipsoidal mass elements in terms of geodetic coordinates and the closed-form solution of the Newton integral in terms of Cartesian coordinates in a multi-cylindrical equal-area map projection, is less than 1.6×10^-8 m^2/s^2 for a mass element with a cross-section area of 10 m × 10 m and a height of 10,000 m. For a mass element with a cross-section area of 1 km × 1 km and a height of 10,000 m the difference is less than 1.5×10^-4 m^2/s^2. Since 1.5×10^-4 m^2/s^2 is equivalent to 1.5×10^-5 m in the vertical…

  15. Animal models and therapeutic molecular targets of cancer: utility and limitations

    Directory of Open Access Journals (Sweden)

    Cekanova M

    2014-10-01

    Full Text Available Maria Cekanova, Kusum Rathore Department of Small Animal Clinical Sciences, College of Veterinary Medicine, The University of Tennessee, Knoxville, TN, USA Abstract: Cancer is the term used to describe over 100 diseases that share several common hallmarks. Despite prevention, early detection, and novel therapies, cancer is still the second leading cause of death in the USA. Successful bench-to-bedside translation of basic scientific findings about cancer into therapeutic interventions for patients depends on the selection of appropriate animal experimental models. Cancer research uses animal and human cancer cell lines in vitro to study biochemical pathways in these cancer cells. In this review, we summarize the important animal models of cancer with focus on their advantages and limitations. Mouse cancer models are well known, and are frequently used for cancer research. Rodent models have revolutionized our ability to study gene and protein functions in vivo and to better understand their molecular pathways and mechanisms. Xenograft and chemically or genetically induced mouse cancers are the most commonly used rodent cancer models. Companion animals with spontaneous neoplasms are still an underexploited tool for making rapid advances in human and veterinary cancer therapies by testing new drugs and delivery systems that have shown promise in vitro and in vivo in mouse models. Companion animals have a relatively high incidence of cancers, with biological behavior, response to therapy, and response to cytotoxic agents similar to those in humans. Shorter overall lifespan and more rapid disease progression are factors contributing to the advantages of a companion animal model. In addition, the current focus is on discovering molecular targets for new therapeutic drugs to improve survival and quality of life in cancer patients. Keywords: mouse cancer model, companion animal cancer model, dogs, cats, molecular targets

  16. Evaluation of metal artifacts in MVCT systems using a model based correction method

    Energy Technology Data Exchange (ETDEWEB)

    Paudel, M. R.; Mackenzie, M.; Fallone, B. G.; Rathee, S. [Department of Oncology, Medical Physics Division, University of Alberta, 11560 University Avenue, Edmonton, Alberta T6G 1Z2 (Canada); Department of Medical Physics, Cross Cancer Institute, 11560 University Avenue, Edmonton, Alberta T6G 1Z2 (Canada); Department of Physics, University of Alberta, 11322-89 Avenue, Edmonton, Alberta T6G 2G7 (Canada)]

    2012-10-15

    Purpose: To evaluate the performance of a model based image reconstruction method in reducing metal artifacts in the megavoltage computed tomography (MVCT) images of a phantom representing bilateral hip prostheses and to compare with the filtered-backprojection (FBP) technique. Methods: An iterative maximum likelihood polychromatic algorithm for CT (IMPACT) is used with an additional model for the pair/triplet production process and the energy dependent response of the detectors. The beam spectra for an in-house bench-top and TomoTherapy™ MVCTs are modeled for use in IMPACT. The empirical energy dependent response of detectors is calculated using a constrained optimization technique that predicts the measured attenuation of the beam by various thicknesses (0-24 cm) of solid water slabs. A cylindrical (19.1 cm diameter) plexiglass phantom containing various cylindrical inserts of relative electron densities 0.295-1.695, positioned between two steel rods (2.7 cm diameter), is scanned in the bench-top MVCT that utilizes the bremsstrahlung radiation from a 6 MeV electron beam passed through 4 cm solid water on the Varian Clinac 2300C, and in the imaging beam of the TomoTherapy™ MVCT. The FBP technique in the bench-top MVCT reconstructs images from the raw signal normalized to an air scan and corrected for beam hardening using a uniform plexiglass cylinder (20 cm diameter). The IMPACT starts with a FBP reconstructed seed image and reconstructs the final image in 150 iterations. Results: In both MVCTs, FBP produces visible dark shading in the image connecting the steel rods. In the IMPACT reconstructed images this shading is nearly removed and the uniform background is restored. The average attenuation coefficients of the inserts and the background are very close to the corresponding values in the absence of the steel inserts. In the FBP images of the bench-top MVCT, the shading causes 4%-9.5% underestimation of electron density at the central inserts

  17. Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions

    Science.gov (United States)

    Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara

    2012-01-01

    This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…

  18. Integrals of random fields treated by the model correction factor method

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  19. Splice-correcting oligonucleotides restore BTK function in X-linked agammaglobulinemia model

    DEFF Research Database (Denmark)

    Bestas, Burcu; Moreno, Pedro M D; Blomberg, K Emelie M

    2014-01-01

    …splice-correcting oligonucleotides (SCOs) targeting mutated BTK transcripts for treating XLA. Both the SCO structural design and chemical properties were optimized using 2'-O-methyl, locked nucleic acid, or phosphorodiamidate morpholino backbones. In order to have access to an animal model of XLA, we...

  20. Self-correcting quantum computers

    International Nuclear Information System (INIS)

    Bombin, H; Chhajlany, R W; Horodecki, M; Martin-Delgado, M A

    2013-01-01

    Is the notion of a quantum computer (QC) resilient to thermal noise unphysical? We address this question from a constructive perspective and show that local quantum Hamiltonian models provide self-correcting QCs. To this end, we first give a sufficient condition on the connectedness of excitations for a stabilizer code model to be a self-correcting quantum memory. We then study the two main examples of topological stabilizer codes in arbitrary dimensions and establish their self-correcting capabilities. Also, we address the transversality properties of topological color codes, showing that six-dimensional color codes provide a self-correcting model that allows the transversal and local implementation of a universal set of operations in seven spatial dimensions. Finally, we give a procedure for initializing such quantum memories at finite temperature. (paper)

  1. ABI Base Recall: Automatic Correction and Ends Trimming of DNA Sequences.

    Science.gov (United States)

    Elyazghi, Zakaria; Yazouli, Loubna El; Sadki, Khalid; Radouani, Fouzia

    2017-12-01

    Automated DNA sequencers produce chromatogram files in ABI format. When viewing chromatograms, ambiguities appear at various sites along the DNA sequences, because the base-calling program implemented in the sequencing machine cannot always precisely determine the right nucleotide, especially when it is represented by a broad peak or a set of overlapping peaks. In such cases, a letter other than A, C, G, or T is recorded, most commonly N. Thus, DNA sequencing chromatograms need manual examination: checking for mis-calls and truncating the sequence when errors become too frequent. The purpose of this paper is to develop a program allowing the automatic correction of these ambiguities. The application is a Web-based program powered by Shiny that runs on the R platform for easy use. As part of the interface, we added an automatic end-clipping option, alignment against reference sequences, and BLAST. To develop and test our tool, we collected several bacterial DNA sequences from different laboratories within Institut Pasteur du Maroc and performed both manual and automatic correction, and the two methods were compared. As a result, we note that our program, ABI Base Recall, accomplishes correction with high accuracy. Indeed, it increases the rates of identity and coverage and minimizes the number of mismatches and gaps; hence it provides a solution to sequencing ambiguities and saves biologists' time and labor.
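
    As an illustration of the kind of end-trimming described above (a sketch, not the ABI Base Recall implementation), the following assumes Biopython is installed and that the .ab1 trace stores per-base quality values; the file name and the Q20 threshold are placeholders.

```python
# Sketch: read an ABI chromatogram and clip low-quality ends.
# Assumptions: Biopython installed; trace carries per-base Phred qualities.
from Bio import SeqIO

record = SeqIO.read("sample.ab1", "abi")            # placeholder file name
quals = record.letter_annotations["phred_quality"]

def trim_ends(seq, quals, q_min=20):
    """Clip leading and trailing bases whose quality is below q_min."""
    start = next((i for i, q in enumerate(quals) if q >= q_min), len(seq))
    end = next((i for i in range(len(seq) - 1, -1, -1) if quals[i] >= q_min), -1)
    return seq[start:end + 1]

trimmed = trim_ends(record.seq, quals)
print(len(record.seq), "->", len(trimmed))
```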

  2. Correction of rotational distortion for catheter-based en face OCT and OCT angiography

    Science.gov (United States)

    Ahsen, Osman O.; Lee, Hsiang-Chieh; Giacomelli, Michael G.; Wang, Zhao; Liang, Kaicheng; Tsai, Tsung-Han; Potsaid, Benjamin; Mashimo, Hiroshi; Fujimoto, James G.

    2015-01-01

    We demonstrate a computationally efficient method for correcting the nonuniform rotational distortion (NURD) in catheter-based imaging systems to improve endoscopic en face optical coherence tomography (OCT) and OCT angiography. The method performs nonrigid registration using fiducial markers on the catheter to correct rotational speed variations. Algorithm performance is investigated with an ultrahigh-speed endoscopic OCT system and micromotor catheter. Scan nonuniformity is quantitatively characterized, and artifacts from rotational speed variations are significantly reduced. Furthermore, we present endoscopic en face OCT and OCT angiography images of human gastrointestinal tract in vivo to demonstrate the image quality improvement using the correction algorithm. PMID:25361133
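
    A toy version of the underlying resampling step (the published registration is more elaborate): the A-line indices at which catheter fiducials, known to sit at uniform physical angles, are detected define a map from scan index to true angle, so each depth column can be resampled onto a uniform angular grid. All data below are synthetic.

```python
import numpy as np

n_alines, depth = 1000, 256
frame = np.random.rand(n_alines, depth)                    # one frame: angle x depth
marker_idx = np.array([0, 230, 480, 760, 1000])            # detected fiducial A-lines
marker_ang = np.linspace(0.0, 2 * np.pi, len(marker_idx))  # their true angles

# angle actually sampled by each A-line (piecewise linear between fiducials)
aline_ang = np.interp(np.arange(n_alines), marker_idx, marker_ang)

# resample every depth column onto a uniform angular grid
uniform_ang = np.linspace(0.0, 2 * np.pi, n_alines, endpoint=False)
corrected = np.empty_like(frame)
for z in range(depth):
    corrected[:, z] = np.interp(uniform_ang, aline_ang, frame[:, z])
```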

  3. Climate projections and extremes in dynamically downscaled CMIP5 model outputs over the Bengal delta: a quartile based bias-correction approach with new gridded data

    Science.gov (United States)

    Hasan, M. Alfi; Islam, A. K. M. Saiful; Akanda, Ali Shafqat

    2017-11-01

    In the era of global warming, insight into the future climate and its changing extremes is critical for climate-vulnerable regions of the world. In this study, we have conducted a robust assessment of Regional Climate Model (RCM) results in a monsoon-dominated region within the new Coupled Model Intercomparison Project Phase 5 (CMIP5) and the latest Representative Concentration Pathways (RCP) scenarios. We have applied an advanced bias-correction approach to five RCM simulations in order to project future climate and associated extremes over Bangladesh, a critically climate-vulnerable country with a complex monsoon system. We have also generated a new gridded product that performed better in capturing observed climatic extremes than existing products. The bias-correction approach provided a notable improvement in capturing precipitation extremes as well as the mean climate. The majority of the projected multi-model RCMs indicate an increase of rainfall, while one model shows contrary results during the 2080s (2071-2100) era. The multi-model mean shows that nighttime temperatures will increase much faster than daytime temperatures and that average annual temperatures are projected to be as hot as present-day summer temperatures. The expected increases of precipitation and temperature over the hilly areas are higher compared with other parts of the country. Overall, the projected extremes of future rainfall are more variable than those of temperature. According to the majority of the models, the number of heavy rain days will increase in future years. The severity of summer-day temperatures will be alarming, especially over hilly regions, where winters are relatively warm. The projected rise of both precipitation and temperature extremes over the intense rainfall-prone northeastern region of the country creates the possibility of devastating flash floods with harmful impacts on agriculture. Moreover, the effect of bias-correction, as presented in probable changes of both bias-corrected
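
    The specific "advanced" correction of the paper is not reproduced here, but the family it belongs to, empirical quantile mapping, fits in a few lines; the gamma-distributed "rainfall" below is synthetic:

```python
import numpy as np

def quantile_map(model_fut, model_hist, obs_hist):
    """Empirical quantile mapping: place each future model value at its
    quantile within the historical model distribution, then read off the
    observed value at that same quantile."""
    model_hist = np.sort(model_hist)
    p = np.searchsorted(model_hist, model_fut) / len(model_hist)
    return np.quantile(obs_hist, np.clip(p, 0.0, 1.0))

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 8.0, 5000)    # "observed" daily rainfall, mm (synthetic)
hist = rng.gamma(2.0, 6.0, 5000)   # biased RCM output, historical period
fut = rng.gamma(2.2, 6.0, 5000)    # biased RCM output, future period
print(obs.mean(), fut.mean(), quantile_map(fut, hist, obs).mean())
```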

  4. Significance of forecasting factors for correction of therapeutic tactics in children with lymphogranulomatosis

    International Nuclear Information System (INIS)

    Kobikov, S.Kh.

    1989-01-01

    Ways of reducing the frequency of lymphogranulomatosis recurrences in 347 children after radio and chemotherapy are considered. Children are irradiated using the Rocus gamma therapeutic device and accelerators applying 15-25 MeV energy electron beam. Integral focal doses make up 40-45 Gy

  5. Increased Plasma Colloid Osmotic Pressure Facilitates the Uptake of Therapeutic Macromolecules in a Xenograft Tumor Model

    Directory of Open Access Journals (Sweden)

    Matthias Hofmann

    2009-08-01

    Full Text Available Elevated tumor interstitial fluid pressure (TIFP) is a characteristic of most solid tumors. Clinically, TIFP may hamper the uptake of chemotherapeutic drugs into the tumor tissue, reducing their therapeutic efficacy. In this study, a means of modulating TIFP to increase the flux of macromolecules into tumor tissue is presented, based on the rationale that elevated plasma colloid osmotic pressure (COP) pulls water from the tumor interstitium, lowering the TIFP. Concentrated human serum albumin (20% HSA), used as an agent to enhance COP, reduced the TIFP time-dependently from 8 to 2 mm Hg in human tumor xenograft models bearing A431 epidermoid vulva carcinomas. To evaluate whether this reduction facilitates the uptake of macromolecules, the intratumoral distribution of fluorescently conjugated dextrans (2.5 mg/ml) and cetuximab (2.0 mg/ml) was probed using novel time-domain near-infrared fluorescence imaging. This method permitted discrimination and semiquantification of tumor-accumulated conjugate from background and unspecific probe fluorescence. The coadministration of 20% HSA together with either dextrans or cetuximab was found to lower the TIFP significantly and increase the concentration of the substances within the tumor tissue in comparison to control tumors. Furthermore, combined administration of 20% HSA plus cetuximab reduced tumor growth significantly in comparison to standard cetuximab treatment. These data demonstrate that increased COP lowers the TIFP within hours and increases the uptake of therapeutic macromolecules into the tumor interstitium, leading to reduced tumor growth. This model represents a novel approach to facilitate the delivery of therapeutics into tumor tissue, particularly monoclonal antibodies.

  6. Correction of hyperkalemia in dogs with chronic kidney disease consuming commercial renal therapeutic diets by a potassium-reduced home-prepared diet.

    Science.gov (United States)

    Segev, G; Fascetti, A J; Weeth, L P; Cowgill, L D

    2010-01-01

    Hyperkalemia occurs in dogs with chronic kidney disease (CKD). The aims were (1) to determine the incidence of hyperkalemia in dogs with CKD, (2) to determine the proportion of hyperkalemic dogs that required modification of dietary potassium intake, and (3) to evaluate the response to dietary modification. The hospital database was reviewed retrospectively to identify dogs with CKD and persistent (>5.3 mmol/L on at least 3 occasions) or severe (K ≥ 6.5 mmol/L) hyperkalemia while consuming a therapeutic renal diet. Records of dogs with hyperkalemia that were prescribed a home-prepared, potassium-reduced diet were evaluated further. Response was evaluated by changes in body weight, BCS, and serum potassium concentration. One hundred and fifty-two dogs were diagnosed with CKD, of which 47% had ≥1 documented episode of hyperkalemia, 25% had ≥3 episodes of hyperkalemia, and 16% had ≥1 episode of severe hyperkalemia (K > 6.5 mmol/L). Twenty-six dogs (17.2%) with CKD and hyperkalemia were prescribed a potassium-reduced, home-prepared diet. The potassium concentration of all hyperkalemic dogs on therapeutic diets (potassium content, 1.6 +/- 0.23 g/1,000 kcal of metabolizable energy [ME]) was 6.5 +/- 0.5 mmol/L but decreased significantly to 5.1 +/- 0.5 mmol/L in 18 dogs available for follow-up in response to the dietary modification (0.91 +/- 0.14 g/1,000 kcal of ME). Hyperkalemia can develop in dogs with CKD consuming commercial renal therapeutic diets and could restrict use of these diets. Appropriately formulated, potassium-reduced diets are an effective alternative to correct hyperkalemia.

  7. Pixel-based CTE Correction of ACS/WFC: Modifications To The ACS Calibration Pipeline (CALACS)

    Science.gov (United States)

    Smith, Linda J.; Anderson, J.; Armstrong, A.; Avila, R.; Bedin, L.; Chiaberge, M.; Davis, M.; Ferguson, B.; Fruchter, A.; Golimowski, D.; Grogin, N.; Hack, W.; Lim, P. L.; Lucas, R.; Maybhate, A.; McMaster, M.; Ogaz, S.; Suchkov, A.; Ubeda, L.

    2012-01-01

    The Advanced Camera for Surveys (ACS) was installed on the Hubble Space Telescope (HST) nearly ten years ago. Over the last decade, continuous exposure to the harsh radiation environment has degraded the charge transfer efficiency (CTE) of the CCDs. The worsening CTE impacts the science that can be obtained by altering the photometric, astrometric and morphological characteristics of sources, particularly those farthest from the readout amplifiers. To ameliorate these effects, Anderson & Bedin (2010, PASP, 122, 1035) developed a pixel-based empirical approach to correcting ACS data by characterizing the CTE profiles of trails behind warm pixels in dark exposures. The success of this technique means that it is now possible to correct full-frame ACS/WFC images for CTE degradation in the standard data calibration and reduction pipeline CALACS. Over the past year, the ACS team at STScI has developed, refined and tested the new software. The details of this work are described in separate posters. The new code is more effective at low flux levels, and the updated pipeline applies both an electronic bias correction (related to the repaired ACS electronics) and the pixel-based CTE correction. In addition to the standard cosmic ray corrected, flat-fielded and drizzled data products (crj, flt and drz files) there are three new equivalent files (crc, flc and drc) which contain the CTE-corrected data products. The user community will be able to choose whether to use the standard or the CTE-corrected products.

  8. Comparison of prostate set-up accuracy and margins with off-line bony anatomy corrections and online implanted fiducial-based corrections

    International Nuclear Information System (INIS)

    Greer, P. B.; Dahl, K.; Ebert, M. A.; Wratten, C.; White, M.; Denham, K. W.

    2008-01-01

    Full text: The aim of the study was to determine prostate set-up accuracy and set-up margins with off-line bony anatomy-based imaging protocols, compared with online implanted fiducial marker-based imaging with daily corrections. Eleven patients were treated with implanted prostate fiducial markers and online set-up corrections. Pretreatment orthogonal electronic portal images were acquired to determine couch shifts, and verification images were acquired during treatment to measure residual set-up error. The prostate set-up errors that would result from skin marker set-up, off-line bony anatomy-based protocols and online fiducial marker-based corrections were determined. Set-up margins were calculated for each set-up technique using the percentage of encompassed isocentres and a margin recipe. The prostate systematic set-up errors in the medial-lateral, superior-inferior and anterior-posterior directions for skin marker set-up were 2.2, 3.6 and 4.5 mm (1 standard deviation). For our bony anatomy-based off-line protocol the prostate systematic set-up errors were 1.6, 2.5 and 4.4 mm. For the online fiducial-based set-up the results were 0.5, 1.4 and 1.4 mm. A prostate systematic error of 10.2 mm was uncorrected by the off-line bone protocol in one patient. Set-up margins calculated to encompass 98% of prostate set-up shifts were 11-14 mm with bone off-line set-up and 4-7 mm with online fiducial markers. Margins from the van Herk margin recipe were generally 1-2 mm smaller. Bony anatomy-based set-up protocols improve the group prostate set-up error compared with skin marks; however, large prostate systematic errors can remain undetected, or systematic errors can increase for individual patients. The margin required for set-up errors was found to be 10-15 mm unless implanted fiducial markers are available for treatment guidance.

  9. Correcting for cryptic relatedness by a regression-based genomic control method

    Directory of Open Access Journals (Sweden)

    Yang Yaning

    2009-12-01

    Full Text Available Abstract Background The genomic control (GC) method is a useful tool to correct for cryptic relatedness in population-based association studies. It was originally proposed for correcting the variance inflation of the Cochran-Armitage additive trend test by using information from unlinked null markers, and was later generalized to be applicable to other tests with the additional requirement that the null markers be matched with the candidate marker in allele frequencies. However, matching allele frequencies limits the number of available null markers and thus limits the applicability of the GC method. On the other hand, errors in genotype/allele frequencies may cause further bias and variance inflation and thereby aggravate the effect of the GC correction. Results In this paper, we propose a regression-based GC method using null markers that are not necessarily matched in allele frequencies with the candidate marker. Variation in the allele frequencies of the null markers is adjusted for by a regression method. Conclusion The proposed method can be readily applied to Cochran-Armitage trend tests other than the additive trend test, to Pearson's chi-square test, and to other robust efficiency tests. Simulation results show that the proposed method is effective in controlling type I error in the presence of population substructure.
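
    For orientation, the classic genomic control correction that this paper generalizes can be sketched as follows: the inflation factor λ is estimated from trend-test statistics at unlinked null markers and used to deflate the candidate statistic. The numbers are synthetic, and the paper's regression-based adjustment for unmatched allele frequencies is not reproduced.

```python
import numpy as np
from scipy import stats

# synthetic null-marker trend-test statistics, inflated by cryptic relatedness
null_chi2 = stats.chi2.rvs(df=1, size=2000, random_state=1) * 1.3

# median-based estimate of the variance inflation factor lambda (floored at 1)
lam = max(np.median(null_chi2) / stats.chi2.ppf(0.5, df=1), 1.0)

candidate_chi2 = 12.0                     # statistic at the candidate marker
corrected = candidate_chi2 / lam          # GC-corrected statistic
print(f"lambda = {lam:.2f}, corrected p = {stats.chi2.sf(corrected, df=1):.2e}")
```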

  10. Distortion correction algorithm for UAV remote sensing image based on CUDA

    International Nuclear Information System (INIS)

    Wenhao, Zhang; Yingcheng, Li; Delong, Li; Changsheng, Teng; Jin, Liu

    2014-01-01

    In China, natural disasters are characterized by wide distribution, severe destruction and high impact range, and they cause significant property damage and casualties every year. Following a disaster, timely and accurate acquisition of geospatial information can provide an important basis for disaster assessment, emergency relief, and reconstruction. In recent years, Unmanned Aerial Vehicle (UAV) remote sensing systems have played an important role in major natural disasters, with UAVs becoming an important means of obtaining disaster information. UAVs are equipped with non-metric digital cameras whose lens distortion results in large geometric deformation of the acquired images, affecting the accuracy of subsequent processing. The slow speed of the traditional CPU-based distortion correction algorithm cannot meet the requirements of disaster emergencies. Therefore, we propose a Compute Unified Device Architecture (CUDA)-based image distortion correction algorithm for UAV remote sensing, which takes advantage of the powerful parallel processing capability of the GPU, greatly improving the efficiency of distortion correction. Our experiments show that, compared with the traditional CPU algorithm and disregarding image loading and saving times, the maximum acceleration ratio using our proposed algorithm reaches 58 times that of the traditional algorithm. Thus, data processing time can be reduced by one to two hours, thereby considerably improving disaster emergency response capability
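
    The per-pixel mapping that a CUDA kernel would parallelize can be illustrated on the CPU with a simple Brown-model radial correction by inverse mapping (the coefficients and nearest-neighbour sampling below are illustrative simplifications, not the paper's calibration):

```python
import numpy as np

def undistort(img, k1, k2, cx, cy):
    """For every pixel of the corrected image, evaluate the radial
    distortion model r_d = r * (1 + k1*r^2 + k2*r^4) and sample the raw
    image there (nearest neighbour). Each CUDA thread would handle one
    output pixel in exactly this way."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    x, y = xs - cx, ys - cy
    r2 = (x * x + y * y) / (cx * cx + cy * cy)     # normalised squared radius
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = np.clip(np.round(cx + x * scale), 0, w - 1).astype(int)
    yd = np.clip(np.round(cy + y * scale), 0, h - 1).astype(int)
    return img[yd, xd]

img = np.random.rand(480, 640)                      # stand-in aerial image
out = undistort(img, k1=-0.12, k2=0.01, cx=320.0, cy=240.0)
```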

  11. [Therapeutic algorithm of idiopathic scoliosis in children].

    Science.gov (United States)

    Ciortan, Ionica; Goţia, D G

    2008-01-01

    Acquired deformities of the spine (scoliosis, kyphosis, lordosis) represent a frequent pathology in children; their treatment is complex, with variable results that depend on various parameters. Mild scoliosis, with an angle of less than 30 degrees, is treated with physiotherapy and regular follow-up. If the angle is greater than 30 degrees, an orthopedic corset is required; an angle over 45 degrees calls for surgical correction. The indications for each therapeutic method depend on many factors; the main target of the treatment is to prevent aggravation of the curvature. Concerning surgery, the goal is to obtain a correction of the spinal axis that is as close to normal as possible.

  12. Structured Therapeutic Games for Nonoffending Caregivers of Children Who Have Experienced Sexual Abuse.

    Science.gov (United States)

    Springer, Craig I; Colorado, Giselle; Misurell, Justin R

    2015-01-01

    Game-based cognitive-behavioral therapy group model for nonoffending caregivers utilizes structured therapeutic games to assist parents following child sexual abuse. Game-based cognitive-behavioral therapy group model is a manualized group treatment approach that integrates evidence-based cognitive-behavioral therapy components with structured play therapy to teach parenting and coping skills, provide psychoeducation, and process trauma. Structured therapeutic games were designed to allow nonoffending caregivers to process their children's abuse experiences and learn skills necessary to overcome trauma in a nonthreatening, fun, and engaging manner. The implementation of these techniques allow clinicians to address a variety of psychosocial difficulties that are commonly found among nonoffending caregivers of children who have experienced sexual abuse. In addition, structured therapeutic games help caregivers develop strengths and abilities that they can use to help their children cope with abuse and trauma and facilitates the development of positive posttraumatic growth. Techniques and procedures for treatment delivery along with a description of core components and therapeutic modules are discussed. An illustrative case study is provided.

  13. Testing and inference in nonlinear cointegrating vector error correction models

    DEFF Research Database (Denmark)

    Kristensen, D.; Rahbek, A.

    2013-01-01

    We analyze estimators and tests for a general class of vector error correction models that allows for asymmetric and nonlinear error correction. For a given number of cointegration relationships, general hypothesis testing is considered, where testing for linearity is of particular interest. Under the null of linearity, parameters of nonlinear components vanish, leading to a nonstandard testing problem. We apply so-called sup-tests to resolve this issue, which requires development of new (uniform) functional central limit theory and results for convergence of stochastic integrals. We provide a full asymptotic theory for estimators and test statistics. The derived asymptotic results prove to be nonstandard compared to results found elsewhere in the literature due to the impact of the estimated cointegration relations. This complicates implementation of tests, motivating the introduction of bootstrap...

  14. A Hierarchical Bayes Error Correction Model to Explain Dynamic Effects of Price Changes

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard); C. Horváth (Csilla); Ph.H.B.F. Franses (Philip Hans)

    2005-01-01

    The authors put forward a sales response model to explain the differences in immediate and dynamic effects of promotional prices and regular prices on sales. The model consists of a vector autoregression rewritten in error-correction format, which allows one to disentangle the immediate

  15. Atmospheric Attenuation Correction Based on a Constant Reference for High-Precision Infrared Radiometry

    Directory of Open Access Journals (Sweden)

    Zhiguo Huang

    2017-11-01

    Full Text Available Infrared (IR) radiometry technology is an important method for characterizing the IR signature of targets such as aircraft or rockets. However, the received signal from targets can be reduced by a combination of atmospheric molecular absorption and aerosol scattering. Therefore, atmospheric correction is a requisite step for obtaining the real radiance of targets. Conventionally, the atmospheric transmittance and the air path radiance are calculated by atmospheric radiative transfer software. In this paper, an improved IR radiometric method based on constant-reference correction of atmospheric attenuation is proposed. The basic principle and procedure of this method are introduced, and a linear high-speed calibration model that takes the integration time into consideration is employed and validated, making the method applicable in various complex conditions. To eliminate stochastic errors, radiometric experiments were conducted for multiple integration times. Finally, several experiments were performed on a mid-wave IR system with a Φ600 mm aperture. The radiometry results indicate that the radiation inversion precision of the novel method is 4.78–4.89%, while the precision of the conventional method is 10.86–13.81%.
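
    A heavily simplified sketch of the constant-reference idea, assuming a linear detector and a reference observable in two known radiance states through approximately the same path, so that transmittance and path radiance can be solved jointly; all numbers are invented:

```python
import numpy as np

# Reference of known radiance, seen through the same atmosphere as the target.
L_ref_true = np.array([50.0, 120.0])   # known reference radiances (two states)
L_ref_meas = np.array([42.0, 98.0])    # measured apparent radiances

# Measurement model L_meas = tau * L_true + L_path: fit the two unknowns.
tau, L_path = np.polyfit(L_ref_true, L_ref_meas, 1)

L_target_meas = 75.0                   # apparent radiance of the target
L_target = (L_target_meas - L_path) / tau
print(f"tau = {tau:.3f}, path radiance = {L_path:.2f}, target = {L_target:.2f}")
```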

  16. Assessment of a non-uniform heat flux correction model for predicting CHF in PWR rod bundles

    International Nuclear Information System (INIS)

    Dae-Hyun, Hwang; Sung-Quun, Zee

    2001-01-01

    The full text follows. The prediction of CHF (critical heat flux) has, in most cases, been based on empirical correlations. For PWR fuel assemblies the local parameter correlation requires the local thermal-hydraulic conditions, usually calculated by a subchannel analysis code. The cross-sectional averaged fluid conditions of the subchannel, however, are not sufficient for determining CHF, especially in cases of non-uniform axial heat flux distributions. Many investigators have studied the effect of the upstream heat flux on the CHF. In terms of the upstream memory effect, two different approaches have been considered as the limiting cases. The 'local conditions' hypothesis assumes that there is a unique relationship between the CHF and the local thermal-hydraulic conditions, and consequently that there is no memory effect. In the 'overall power' hypothesis, on the other hand, it is assumed that the total power which can be fed into a tube with nonuniform heating will be the same as that for a uniformly heated tube of the same heated length with the same inlet conditions. Thus the CHF is totally influenced by the upstream heat flux distribution. In view of some experimental investigations, such as DeBortoli's test, it emerged that the two approaches are inadequate in general. This means that the local critical heat flux may be affected to some extent by the heat flux distribution upstream of the CHF location. Some correction-factor models have been suggested to take the upstream memory effect into account. Typically, Tong devised a correction factor on the basis of the heat balance of the superheated liquid layer that is spread underneath a highly viscous bubbly layer along the heated surface. His physical model suggested that the fluid enthalpy obtained from an energy balance of the superheated liquid layer is a representative quantity for the onset of DNB (departure from nucleate boiling). A theoretically based correction factor model has been proposed by the
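
    Tong's correction factor referred to above is commonly written in the following form (notation varies between references; ℓ_DNB is the axial DNB location, q''(z) the local heat flux, and C an empirically correlated coefficient):

```latex
F \;=\; \frac{C \int_{0}^{\ell_{\mathrm{DNB}}} q''(z)\,
              e^{-C\,(\ell_{\mathrm{DNB}} - z)}\,\mathrm{d}z}
             {q''(\ell_{\mathrm{DNB}})\,\bigl(1 - e^{-C\,\ell_{\mathrm{DNB}}}\bigr)},
\qquad
q''_{\mathrm{CHF,nonuniform}} \;=\; \frac{q''_{\mathrm{CHF,uniform}}}{F}
```

    A uniform axial profile gives F = 1, while upstream-peaked profiles give F > 1 and hence a reduced predicted non-uniform CHF.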

  17. NNLO QCD corrections to the Drell-Yan cross section in models of TeV-scale gravity

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, Taushif; Banerjee, Pulak; Dhani, Prasanna K.; Rana, Narayan [The Institute of Mathematical Sciences, Chennai, Tamil Nadu (India); Homi Bhabha National Institute, Mumbai (India); Kumar, M.C. [Indian Institute of Technology Guwahati, Department of Physics, Guwahati (India); Mathews, Prakash [Saha Institute of Nuclear Physics, Kolkata, West Bengal (India); Ravindran, V. [The Institute of Mathematical Sciences, Chennai, Tamil Nadu (India)

    2017-01-15

    The first results on the complete next-to-next-to-leading order (NNLO) Quantum Chromodynamic (QCD) corrections to the production of di-leptons at hadron colliders in large extra dimension models with spin-2 particles are reported in this article. In particular, we have computed these corrections to the invariant mass distribution of the di-leptons taking into account all the partonic sub-processes that contribute at NNLO. In these models, spin-2 particles couple through the energy-momentum tensor of the Standard Model with the universal coupling strength. The tensorial nature of the interaction and the presence of both quark annihilation and gluon fusion channels at the Born level make it challenging computationally and interesting phenomenologically. We have demonstrated numerically the importance of our results at Large Hadron Collider energies. The two-loop corrections contribute an additional 10% to the total cross section. We find that the QCD corrections are not only large but also important to make the predictions stable under renormalisation and factorisation scale variations, providing an opportunity to stringently constrain the parameters of the models with a spin-2 particle. (orig.)

  18. Influence of the partial volume correction method on (18)F-fluorodeoxyglucose brain kinetic modelling from dynamic PET images reconstructed with resolution model based OSEM.

    Science.gov (United States)

    Bowen, Spencer L; Byars, Larry G; Michel, Christian J; Chonde, Daniel B; Catana, Ciprian

    2013-10-21

    Kinetic parameters estimated from dynamic (18)F-fluorodeoxyglucose ((18)F-FDG) PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For ordered subsets expectation maximization (OSEM), image resolution convergence is local and influenced significantly by the number of iterations, the count density, and background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting (18)F-FDG dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation geometric transfer matrix PVC with PSF based OSEM produced the lowest magnitude bias kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in cerebral metabolic rate of glucose estimates, although by less than 5% in
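
    The geometric transfer matrix (GTM) machinery referred to above reduces, for regional means, to a small linear solve. A one-dimensional toy version (synthetic PSF and regions; not the perturbation variant used in the paper):

```python
import numpy as np

def gtm_correct(obs_means, rois, rsfs):
    """GTM PVC: omega[i, j] is the mean of region j's regional spread
    function (its mask convolved with the scanner PSF) over ROI i;
    solving omega @ t = b recovers the true regional means t from the
    observed means b."""
    n = len(rois)
    omega = np.empty((n, n))
    for i, roi in enumerate(rois):
        for j, rsf in enumerate(rsfs):
            omega[i, j] = rsf[roi].mean()
    return np.linalg.solve(omega, obs_means)

# toy 1-D "image": two adjacent regions blurred by a Gaussian-like PSF
x = np.arange(100)
rois = [x < 50, x >= 50]
kernel = np.exp(-0.5 * (np.arange(-10, 11) / 4.0) ** 2)
kernel /= kernel.sum()
rsfs = [np.convolve(r.astype(float), kernel, mode="same") for r in rois]

true = np.array([10.0, 2.0])
img = true[0] * rsfs[0] + true[1] * rsfs[1]
obs = np.array([img[r].mean() for r in rois])
print(gtm_correct(obs, rois, rsfs))   # approximately [10, 2]
```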

  19. Generation of Unbiased Ionospheric Corrections in Brazilian Region for GNSS positioning based on SSR concept

    Science.gov (United States)

    Monico, J. F. G.; De Oliveira, P. S., Jr.; Morel, L.; Fund, F.; Durand, S.; Durand, F.

    2017-12-01

    Mitigation of ionospheric effects on GNSS (Global Navigation Satellite System) signals is very challenging, especially for GNSS positioning applications based on the SSR (State Space Representation) concept, which requires knowledge of spatially correlated errors with a considerable level of accuracy (centimeter). The presence of satellite and receiver hardware biases in GNSS measurements makes the proper estimation of ionospheric corrections difficult, reducing their physical meaning. This problem can lead to ionospheric corrections that are biased by several meters and often take negative values, which is physically impossible. In this contribution, we discuss a strategy to obtain SSR ionospheric corrections based on GNSS measurements from CORS (Continuous Operation Reference Stations) networks with minimal presence of hardware biases and, consequently, with physical meaning. Preliminary results are presented on the generation and application of such corrections for simulated users located in the Brazilian region under a high level of ionospheric activity.

  20. Effect of tubing length on the dispersion correction of an arterially sampled input function for kinetic modeling in PET.

    Science.gov (United States)

    O'Doherty, Jim; Chilcott, Anna; Dunn, Joel

    2015-11-01

    Arterial sampling with dispersion correction is routinely performed for kinetic analysis of PET studies. With the advent of PET-MRI systems, non-MR-safe instrumentation must be kept outside the scan room, which requires the length of the tubing between the patient and the detector to increase, thus worsening the effects of dispersion. We examined the effects of dispersion in idealized radioactive blood studies using various lengths of tubing (1.5, 3, and 4.5 m) and applied a well-known transmission-dispersion model to attempt to correct the resulting traces. A simulation study was also carried out to examine the noise characteristics of the model. The model was applied to patient traces using 1.5 m acquisition tubing and extended to its use at 3 m. Satisfactory dispersion correction of the blood traces was achieved for the 1.5 m line. Predictions on the basis of experimental measurements, numerical simulations and noise analysis of the resulting traces show that corrections of blood data can also be achieved using the 3 m tubing. The effects of dispersion could not be corrected for the 4.5 m line by the selected transmission-dispersion model. On the basis of our setup, correction of dispersion in arterial sampling tubing up to 3 m can be performed with the transmission-dispersion model.
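
    The paper's specific transmission-dispersion model is not reproduced here; as a generic illustration, the widely used monoexponential dispersion model, in which the measured curve is the true curve convolved with (1/τ)·exp(−t/τ), inverts analytically to g_true(t) = g_meas(t) + τ·dg_meas/dt:

```python
import numpy as np

def dispersion_correct(t, g_meas, tau):
    """Invert monoexponential dispersion: g_true = g_meas + tau * dg/dt."""
    return g_meas + tau * np.gradient(g_meas, t)

t = np.linspace(0.0, 120.0, 601)             # seconds
g_true = np.exp(-((t - 30.0) / 8.0) ** 2)    # synthetic arterial bolus
tau = 5.0                                    # dispersion time constant, s

# forward-model the dispersion numerically for the demonstration
kernel = np.exp(-t / tau) / tau
g_meas = np.convolve(g_true, kernel)[: len(t)] * (t[1] - t[0])

g_rec = dispersion_correct(t, g_meas, tau)
print(np.abs(g_rec - g_true).max())          # small discretisation residual
```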

  1. Predictive markers of efficacy for an angiopoietin-2 targeting therapeutic in xenograft models.

    Directory of Open Access Journals (Sweden)

    Gallen Triana-Baltzer

    Full Text Available The clinical efficacy of anti-angiogenic therapies has been difficult to predict, and biomarkers that can predict responsiveness are sorely needed in this era of personalized medicine. CVX-060 is an angiopoietin-2 (Ang2) targeting therapeutic, consisting of two peptides that bind Ang2 with high affinity and specificity, covalently fused to a scaffold antibody. In order to optimize the use of this compound in the clinic, the construction of a predictive model is described, based on the efficacy of CVX-060 in 13 cell line and 2 patient-derived xenograft models. Pretreatment tumors from each of the models were profiled for the levels of 27 protein markers of angiogenesis, SNP haplotype in 5 angiogenesis genes, and somatic mutation status for 11 genes implicated in tumor growth and/or vascularization. CVX-060 efficacy was determined as tumor growth inhibition (TGI%) at termination of each study. A predictive statistical model was constructed based on the correlation of these efficacy data with the marker profiles, and the model was subsequently tested by prospective analysis in 11 additional models. The results reveal a range of CVX-060 efficacy in xenograft models of diverse tissue types (0-64% TGI, median = 27%) and define a subset of 3 proteins (Ang1, EGF, Emmprin), the levels of which may be predictive of TGI by Ang2 blockade. The direction of the associations is such that better efficacy correlates with high levels of target and low levels of compensatory/antagonizing molecules. This effort has revealed a set of candidate predictive markers for CVX-060 efficacy that will be further evaluated in ongoing clinical trials.

  2. Whole-body bone segmentation from MRI for PET/MRI attenuation correction using shape-based averaging

    DEFF Research Database (Denmark)

    Arabi, Hossein; Zaidi, H.

    2016-01-01

    Purpose: The authors evaluate the performance of the shape-based averaging (SBA) technique for whole-body bone segmentation from MRI in the context of MRI-guided attenuation correction (MRAC) in hybrid PET/MRI. To enhance the performance of the SBA scheme, the authors propose to combine it with statistical atlas fusion techniques. Moreover, a fast and efficient shape comparison-based atlas selection scheme was developed and incorporated into the SBA method. Methods: Clinical studies consisting of PET/CT and MR images of 21 patients were used to assess the performance of the SBA method. In addition, the majority voting (MV) atlas fusion scheme was also evaluated as a conventional and commonly used method. MRI-guided attenuation maps were generated using the different segmentation methods. Thereafter, quantitative analysis of PET attenuation correction was performed using CT-based attenuation correction...

  3. Utility of immunodeficient mouse models for characterizing the preclinical pharmacokinetics of immunogenic antibody therapeutics.

    Science.gov (United States)

    Myzithras, Maria; Bigwarfe, Tammy; Li, Hua; Waltz, Erica; Ahlberg, Jennifer; Giragossian, Craig; Roberts, Simon

    Prior to clinical studies, the pharmacokinetics (PK) of antibody-based therapeutics are characterized in preclinical species; however, those species can elicit immunogenic responses that can lead to an inaccurate estimation of PK parameters. Immunodeficient (SCID) transgenic hFcRn and C57BL/6 mice were used to characterize the PK of three antibodies that were previously shown to be immunogenic in mice and cynomolgus monkeys. Four mouse strains, Tg32 hFcRn SCID, Tg32 hFcRn, SCID and C57BL/6, were administered adalimumab (Humira®), mAbX and mAbX-YTE at 1 mg/kg, and in SCID strains there was no incidence of immunogenicity. In non-SCID strains, drug-clearing ADAs appeared after 4-7 days, which affected the ability to accurately calculate PK parameters. Single species allometric scaling of PK data for Humira® in SCID and hFcRn SCID mice resulted in improved human PK predictions compared to C57BL/6 mice. Thus, the SCID mouse model was demonstrated to be a useful tool for assessing the preclinical PK of immunogenic therapeutics.

  4. Possibility of psychological correction of sexual anomalies in hospital

    Directory of Open Access Journals (Sweden)

    Babina S.V.

    2014-09-01

    Full Text Available The article examines the possibilities of psychological correction of sexual anomalies in the hospital. We reviewed the modern Russian and foreign literature on the treatment of disorders of sexual preference and singled out the main directions of therapy for these disorders. We present a comparative analysis of three therapeutic approaches for the treatment of sexual anomalies (psychopharmacological treatment, cognitive-behavioral therapy, psychotherapy) to determine their effectiveness and assess the role of the psychologist in conducting therapy. These approaches are discussed according to the following criteria: therapy target, therapy aims, the extent and depth of changes, and specific treatments. The positive and negative aspects of the different treatments are indicated. The review supports conclusions on the correct organization of maximally effective treatment of sexual disorders and on the role of the psychologist in the creation and implementation of therapeutic schemes. We also fill some of the gaps in Russian studies on the treatment of disorders of sexual behavior.

  5. An error correction model approach as a determinant of stock prices

    Directory of Open Access Journals (Sweden)

    David Kaluge

    2017-03-01

    Full Text Available This research investigated the effect of profitability, the interest rate, GDP, and the foreign exchange rate on stock prices. The approach used was an error correction model. Profitability was indicated by the variables EPS and ROI, while the SBI (1-month) rate was used to represent the interest rate. This research found that all variables simultaneously affected stock prices significantly. Partially, EPS, PER, and the foreign exchange rate significantly affected the prices both in the short run and the long run. Interestingly, SBI and GDP did not affect the prices at all. The variable ROI had only a long-run impact on the prices.
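
    As a generic illustration of the error-correction framework in this and the preceding record (not the authors' exact single-equation specification), statsmodels can fit a vector error correction model to two synthetic cointegrated series:

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(42)
trend = np.cumsum(rng.normal(size=500))              # common stochastic trend
y1 = trend + rng.normal(scale=0.5, size=500)         # e.g. stock price proxy
y2 = 0.8 * trend + rng.normal(scale=0.5, size=500)   # e.g. macro driver
data = np.column_stack([y1, y2])

res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print("adjustment coefficients (alpha):", res.alpha.ravel())
print("cointegrating vector (beta):", res.beta.ravel())
```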

  6. Correction of the angular dependence of satellite retrieved LST at global scale using parametric models

    Science.gov (United States)

    Ermida, S. L.; Trigo, I. F.; DaCamara, C.; Ghent, D.

    2017-12-01

    Land surface temperature (LST) values retrieved from satellite measurements in the thermal infrared (TIR) may be strongly affected by spatial anisotropy. This effect introduces significant discrepancies among LST estimations from different sensors, overlapping in space and time, that are not related to uncertainties in the methodologies or input data used. Furthermore, these directional effects deviate LST products from an ideally defined LST, which should represent the ensemble of directional radiometric temperatures of all surface elements within the FOV. Angular effects on LST are here conveniently estimated by means of a parametric model of the surface thermal emission, which describes the angular dependence of LST as a function of viewing and illumination geometry. Two models are analyzed consistently to evaluate their performance and to assess their respective potential to correct directional effects on LST for a wide range of surface conditions in terms of tree coverage, vegetation density, and surface emissivity. We also propose an optimization of the correction of directional effects through a synergistic use of both models. The models are calibrated using LST data provided by two sensors: MODIS, on board NASA's TERRA and AQUA, and SEVIRI, on board EUMETSAT's MSG. As shown in our previous feasibility studies, the sampling of illumination and view angles has a high impact on the model parameters. This impact may be mitigated when the sample size is increased by aggregating pixels with similar surface conditions. Here we propose a methodology where the land surface is stratified by means of a cluster analysis using information on land cover type, fraction of vegetation cover, and topography. The models are then adjusted to LST data corresponding to each cluster. It is shown that the quality of the cluster-based models is very close to that of the pixel-based ones. Furthermore, the reduced number of parameters allows improving the model through the incorporation of a

  7. Automatic low-order aberration correction based on geometrical optics for slab lasers.

    Science.gov (United States)

    Yu, Xin; Dong, Lizhi; Lai, Boheng; Yang, Ping; Liu, Yong; Kong, Qingfeng; Yang, Kangjian; Tang, Guomao; Xu, Bing

    2017-02-20

    In this paper, we present a method based on geometrical optics to simultaneously correct low-order aberrations and reshape the beams of slab lasers. A coaxial optical system with three lenses is adopted. The positions of the three lenses are directly calculated based on the beam parameters detected by wavefront sensors. The initial size of the input beams is 1.8 mm × 11 mm, and peak-to-valley (PV) values of the wavefront range up to several tens of microns. After automatic correction, the dimensions reach nearly 22 mm × 22 mm, as expected, and PV values of the wavefront are less than 2 μm. The effectiveness and precision of this method are verified with experiments.

  8. Assessment of relative individual renal function based on DMSA uptake corrected for renal size

    International Nuclear Information System (INIS)

    Estorch, M.; Camacho, V.; Tembl, A.; Mena, I.; Hernandez, A.; Flotats, A.; Carrio, I.; Torres, G.; Prat, L.

    2002-01-01

    Decreased relative renal DMSA uptake can be a consequence of abnormal kidney size, associated with normal or impaired renal function. The quantification of relative renal function based on DMSA uptake in both kidneys is an established method for the assessment of individual renal function. Aim: To assess relative renal function by means of quantification of renal DMSA uptake corrected for kidney size. Results were compared with relative renal DMSA uptake without size correction and were validated against the absolute renal DMSA uptake. Material and Methods: Four hundred forty-four consecutive patients (147 adults; mean age 14 years) underwent a DMSA study for several renal diseases. The relative renal function, based on the relative DMSA uptake uncorrected and corrected for renal size, and the absolute renal DMSA uptake were calculated. In order to relate the relative DMSA uptake, uncorrected and corrected for renal size, with the absolute DMSA uptake, the subtraction of uncorrected (SU) and corrected (SC) relative uptake percentages of each pair of kidneys was obtained, and these values were correlated with the matched subtraction percentages of absolute uptake (SA). If the individual relative renal function is normal (45%-55%), the subtraction value is less than or equal to 10%. Results: In 227 patients (51%) the relative renal DMSA uptake value was normal either uncorrected or corrected for renal size (A), and in 149 patients (34%) it was abnormal by both quantification methods (B). Seventy-seven patients (15%) had the relative renal DMSA uptake abnormal only by the uncorrected method (C). The subtraction value of absolute DMSA uptake percentages was not significantly different from the subtraction value of relative DMSA uptake percentages corrected for renal size when the relative uncorrected uptake was abnormal and the corrected uptake normal (p = NS). Conclusion: When uncorrected and corrected relative DMSA uptake are abnormal, the absolute uptake is also impaired, while when

  9. Pressure correction schemes for compressible flows: application to barotropic Navier-Stokes equations and to the drift-flux model

    International Nuclear Information System (INIS)

    Gastaldo, L.

    2007-11-01

    We develop in this PhD thesis a simulation tool for bubbly flows encountered in some late phases of a core-melt accident in pressurized water reactors, when the flow of molten core and vessel structures comes to chemically interact with the concrete of the containment floor. The physical modelling is based on the so-called drift-flux model, consisting of mass balance and momentum balance equations for the mixture (Navier-Stokes equations) and a mass balance equation for the gaseous phase. First, we propose a pressure correction scheme for the compressible Navier-Stokes equations based on mixed non-conforming finite elements. An ad hoc discretization of the advection operator, by a finite volume technique based on a dual mesh, ensures the stability of the velocity prediction step. A priori estimates for the velocity and the pressure yield the existence of the solution. We prove that this scheme is stable, in the sense that the discrete entropy is decreasing. For the conservation equation of the gaseous phase, we build a finite volume discretization which satisfies a discrete maximum principle. From this last property, we deduce the existence and the uniqueness of the discrete solution. Finally, on the basis of these works, a conservative and monotone scheme which is stable in the low Mach number limit is built for the drift-flux model. This scheme enjoys, moreover, the following property: the algorithm preserves a constant pressure and velocity through moving interfaces between phases (i.e. contact discontinuities of the underlying hyperbolic system). In order to satisfy this property at the discrete level, we build an original pressure correction step which couples the mass balance equation with the transport terms of the gas mass balance equation, the remaining terms of the gas mass balance being taken into account with a splitting method. We prove the existence of a discrete solution for the pressure correction step. Numerical results are presented; they

  10. Vectorization of morpholino oligomers by the (R-Ahx-R)4 peptide allows efficient splicing correction in the absence of endosomolytic agents.

    Science.gov (United States)

    Abes, Saïd; Moulton, Hong M; Clair, Philippe; Prevot, Paul; Youngblood, Derek S; Wu, Rebecca P; Iversen, Patrick L; Lebleu, Bernard

    2006-12-01

    The efficient and non-toxic nuclear delivery of steric-block oligonucleotides (ON) is a prerequisite for therapeutic strategies involving splice correction or exon skipping. Cationic cell-penetrating peptides (CPPs) have attracted much interest for the intracellular delivery of biomolecules, but their efficiency in promoting cytoplasmic or nuclear delivery of oligonucleotides has been hampered by endocytic sequestration and subsequent degradation of most internalized material in endocytic compartments. In the present study, we compared the splice correction activity of three different CPPs conjugated to PMO(705), a steric-block ON targeted against the mutated splicing site of human beta-globin pre-mRNA in the HeLa pLuc705 splice correction model. In contrast to Tat48-60 (Tat) and oligoarginine (R(9)F(2)) PMO(705) conjugates, the 6-aminohexanoic-spaced oligoarginine (R-Ahx-R)(4)-PMO(705) conjugate was able to promote efficient splice correction in the absence of endosomolytic agents. Our mechanistic investigations of its uptake lead to the conclusion that these three vectors are internalized through the same endocytic route involving proteoglycans, but that the (R-Ahx-R)(4)-PMO(705) conjugate has the unique ability to escape its lysosomal fate and to access the nuclear compartment. This vector displays extremely low cytotoxicity and the ability to function without chloroquine addition and in the presence of serum proteins. It thus offers a promising lead for the development of vectors able to enhance the delivery of therapeutic steric-block ON in clinically relevant models.

  11. A web-based resource for designing therapeutics against Ebola Virus

    OpenAIRE

    Sandeep Kumar Dhanda; Kumardeep Chaudhary; Sudheer Gupta; Samir Kumar Brahmachari; Gajendra P. S. Raghava

    2016-01-01

    In this study, we describe a web-based resource, developed for assisting the scientific community in designing an effective therapeutics against the Ebola virus. Firstly, we predicted and identified experimentally validated epitopes in each of the antigens/proteins of the five known ebolaviruses. Secondly, we generated all the possible overlapping 9mer peptides from the proteins of ebolaviruses. Thirdly, conserved peptides across all the five ebolaviruses (four human pathogenic species) with ...

  12. Protein based therapeutic delivery agents: Contemporary developments and challenges.

    Science.gov (United States)

    Yin, Liming; Yuvienco, Carlo; Montclare, Jin Kim

    2017-07-01

    As unique biopolymers, proteins can be employed for therapeutic delivery. They bear important features such as bioavailability, biocompatibility, and biodegradability with low toxicity, serving as a platform for delivery of various small-molecule therapeutics, gene therapies, protein biologics, and cells. Depending on the size and characteristics of the therapeutic, a variety of natural and engineered proteins or peptides have been developed. This, coupled with recent advances in synthetic and chemical biology, has led to the creation of tailor-made protein materials for delivery. This review highlights strategies employing proteins to facilitate the delivery of therapeutic matter, addressing the challenges for small-molecule, gene, protein, and cell transport. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Permutation importance: a corrected feature importance measure.

    Science.gov (United States)

    Altmann, André; Toloşi, Laura; Sander, Oliver; Lengauer, Thomas

    2010-05-15

    In life sciences, interpretability of machine learning models is as important as their prediction accuracy. Linear models are probably the most frequently used methods for assessing feature relevance, despite their relative inflexibility. However, in the past years effective estimators of feature relevance have been derived for highly complex or non-parametric models such as support vector machines and RandomForest (RF) models. Recently, it has been observed that RF models are biased in such a way that categorical variables with a large number of categories are preferred. In this work, we introduce a heuristic for normalizing feature importance measures that can correct the feature importance bias. The method is based on repeated permutations of the outcome vector for estimating the distribution of measured importance for each variable in a non-informative setting. The P-value of the observed importance provides a corrected measure of feature importance. We apply our method to simulated data and demonstrate that (i) non-informative predictors do not receive significant P-values, (ii) informative variables can successfully be recovered among non-informative variables and (iii) P-values computed with permutation importance (PIMP) are very helpful for deciding the significance of variables, and therefore improve model interpretability. Furthermore, PIMP was used to correct RF-based importance measures for two real-world case studies. We propose an improved RF model that uses the significant variables with respect to the PIMP measure and show that its prediction accuracy is superior to that of other existing models. R code for the method presented in this article is available at http://www.mpi-inf.mpg.de/~altmann/download/PIMP.R. CONTACT: altmann@mpi-inf.mpg.de, laura.tolosi@mpi-inf.mpg.de. Supplementary data are available at Bioinformatics online.
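
    A compact Python transcription of the PIMP idea (the authors distribute an R implementation at the URL above); the dataset and the permutation count below are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=10, n_informative=3,
                           random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
observed = rf.feature_importances_

# null distribution: refit on permuted outcomes to break the X-y association
rng = np.random.default_rng(0)
n_perm = 50
null = np.empty((n_perm, X.shape[1]))
for s in range(n_perm):
    y_perm = rng.permutation(y)
    null[s] = RandomForestClassifier(n_estimators=200,
                                     random_state=s).fit(X, y_perm).feature_importances_

# empirical P-value: fraction of null importances >= the observed importance
p_values = (1 + (null >= observed).sum(axis=0)) / (n_perm + 1)
print(np.round(p_values, 3))
```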

  14. One loop corrections to the lightest Higgs mass in the minimal η model with a heavy Z'

    International Nuclear Information System (INIS)

    Comelli, D.

    1992-06-01

    We have evaluated the one loop correction to the bound on the lightest Higgs mass valid in the minimal, E6-based, supersymmetric η model in the presence of a 'heavy' Z', M_Z' ≥ 1 TeV. The dominant contribution from the fermion-sfermion sector increases the 108 GeV tree level value by an amount that depends on the top mass in a way that is largely reminiscent of minimal SUSY models. For M_t ≤ 150 GeV and a stop mass M_t̃ = 1 TeV, the 'light' Higgs mass is always ≤ 130 GeV. (orig.)

  15. Curvature correction of retinal OCTs using graph-based geometry detection

    Science.gov (United States)

    Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D.; Sonka, Milan

    2013-05-01

    In this paper, we present a new algorithm as an enhancement and preprocessing step for acquired optical coherence tomography (OCT) images of the retina. The proposed method is composed of two steps: the first is a denoising algorithm with wavelet diffusion based on a circular symmetric Laplacian model, and the second is a graph-based geometry detection and curvature correction according to the hyper-reflective complex layer in the retina. The proposed denoising algorithm showed an improvement of contrast-to-noise ratio from 0.89 to 1.49 and an increase of signal-to-noise ratio (OCT image SNR) from 18.27 to 30.43 dB. By applying the proposed method for estimation of the interpolated curve in a fully automatic manner, the mean ± SD unsigned border positioning error was calculated for normal and abnormal cases. Error values of 2.19 ± 1.25 and 8.53 ± 3.76 µm were obtained for 200 randomly selected slices without pathological curvature and 50 randomly selected slices with pathological curvature, respectively. The important aspect of this algorithm is its ability to detect curvature in strongly pathological images, which surpasses previously introduced methods; the method is also fast compared to the relatively low speed of similar methods.

  16. Implementation and Application of PSF-Based EPI Distortion Correction to High Field Animal Imaging

    Directory of Open Access Journals (Sweden)

    Dominik Paul

    2009-01-01

    Full Text Available The purpose of this work is to demonstrate the functionality and performance of a PSF-based geometric distortion correction for high-field functional animal EPI. The EPI method was extended to measure the PSF and a postprocessing chain was implemented in Matlab for offline distortion correction. The correction procedure was applied to phantom and in vivo imaging of mice and rats at 9.4T using different SE-EPI and DWI-EPI protocols. Results show the significant improvement in image quality for single- and multishot EPI. Using a reduced FOV in the PSF encoding direction clearly reduced the acquisition time for PSF data by an acceleration factor of 2 or 4, without affecting the correction quality.

  17. Iteration of ultrasound aberration correction methods

    Science.gov (United States)

    Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond

    2004-05-01

    Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array. This filter process is here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimation of the TDA filter, and performing correction on transmit and receive, has proven difficult. It has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimating until a convergence criterion is fulfilled (adaptive imaging). Two methods for estimating time-delay and amplitude variations in receive signals from random scatterers have been developed. One method correlates each element signal with a reference signal. The other uses eigenvalue decomposition of the receive cross-spectrum matrix, based upon a receive energy-maximizing criterion. Simulations of iterating aberration correction with a TDA filter have been investigated to study its convergence properties. Weak and strong human body-wall models generated the aberration; both emulated the human abdominal wall. Results after iteration improve aberration correction substantially, and both estimation methods converge, even for the case of strong aberration.
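
    As a concrete illustration of the first estimation method (correlating each element signal with a reference), the sketch below estimates per-element delays from the lag of the cross-correlation peak and amplitudes from an energy ratio. Using the beamsum as the reference signal and an RMS energy ratio for amplitude are our assumptions, not details given in the abstract (Python/NumPy):

        import numpy as np

        def estimate_tda(element_signals, fs):
            # element_signals: (n_elements, n_samples); fs: sampling rate in Hz.
            reference = element_signals.mean(axis=0)  # beamsum as reference
            n = element_signals.shape[1]
            lags = np.arange(-n + 1, n)

            delays, amps = [], []
            for sig in element_signals:
                xcorr = np.correlate(sig, reference, mode="full")
                delays.append(lags[np.argmax(xcorr)] / fs)  # peak-lag delay
                amps.append(np.sqrt(np.sum(sig ** 2) / np.sum(reference ** 2)))
            return np.array(delays), np.array(amps)

    In adaptive imaging, these estimates would be applied on retransmit and the procedure repeated until the delay updates fall below a convergence threshold.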

  18. The generation algorithm of arbitrary polygon animation based on dynamic correction

    Directory of Open Access Journals (Sweden)

    Hou Ya Wei

    2016-01-01

    Full Text Available Based on a key-frame polygon sequence, this paper proposes a method that uses dynamic correction to produce continuous animation. Firstly, we use a quadratic Bezier curve to interpolate the corresponding side vectors of consecutive frames in the polygon sequence, achieving continuity of the animation. Then, exploiting the characteristics of the Bezier curve, we dynamically adjust the interpolation parameters to smooth the transitions. Meanwhile, we use the Lagrange multiplier method to correct the polygon so that it closes. Finally, we give the concrete algorithm flow and present numerical experiment results, which show that the algorithm achieves excellent results.
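
    The interpolation step rests on the standard quadratic Bezier form B(t) = (1-t)^2 P0 + 2(1-t)t P1 + t^2 P2. A minimal sketch under two stated assumptions of ours: the control vector is taken midway between the key-frame vectors, and closure is restored by spreading the residual evenly rather than by the paper's Lagrange multiplier correction (Python/NumPy):

        import numpy as np

        def quadratic_bezier(p0, p1, p2, t):
            # B(t) = (1-t)^2*P0 + 2(1-t)t*P1 + t^2*P2, t in [0, 1]
            return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

        def interpolate_sides(sides_a, sides_b, t):
            # sides_a, sides_b: (n_sides, 2) side vectors of two key frames.
            control = 0.5 * (sides_a + sides_b)        # assumed control point
            sides_t = quadratic_bezier(sides_a, control, sides_b, t)
            # Closure: side vectors of a closed polygon must sum to zero.
            sides_t -= sides_t.sum(axis=0) / len(sides_t)
            return sides_t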

  19. Temperature effects on pitfall catches of epigeal arthropods: a model and method for bias correction.

    Science.gov (United States)

    Saska, Pavel; van der Werf, Wopke; Hemerik, Lia; Luff, Martin L; Hatten, Timothy D; Honek, Alois; Pocock, Michael

    2013-02-01

    Carabids and other epigeal arthropods make important contributions to biodiversity, food webs and biocontrol of invertebrate pests and weeds. Pitfall trapping is widely used for sampling carabid populations, but this technique yields biased estimates of abundance ('activity-density') because individual activity - which is affected by climatic factors - affects the rate of catch. To date, the impact of temperature on pitfall catches, while suspected to be large, has not been quantified, and no method is available to account for it. This lack of knowledge and the unavailability of a method for bias correction affect the confidence that can be placed on results of ecological field studies based on pitfall data. Here, we develop a simple model for the effect of temperature, assuming a constant proportional change in the rate of catch per °C change in temperature, r, consistent with an exponential Q10 response to temperature. We fit this model to 38 time series of pitfall catches and accompanying temperature records from the literature, using first differences and other detrending methods to account for seasonality. We use meta-analysis to assess consistency of the estimated parameter r among studies. The mean rate of increase in total catch across data sets was 0.0863 ± 0.0058 per °C of maximum temperature and 0.0497 ± 0.0107 per °C of minimum temperature. Multiple regression analyses of 19 data sets showed that temperature is the key climatic variable affecting total catch. Relationships between temperature and catch were also identified at species level. Correction for temperature bias had substantial effects on seasonal trends of carabid catches. Synthesis and applications: the effect of temperature on pitfall catches is shown here to be substantial and worthy of consideration when interpreting results of pitfall trapping. The exponential model can be used both for effect estimation and for bias correction of observed data. Correcting for temperature
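
    The fitted exponential model implies a simple multiplicative bias correction: if the rate of catch changes by a constant proportion r per °C, dividing the observed catch by exp(r * (T - T_ref)) rescales it to a common reference temperature. A sketch using the mean estimate quoted above; the 15 °C reference temperature is an arbitrary illustrative choice of ours (Python/NumPy):

        import numpy as np

        def correct_pitfall_catch(catch, temperature, r=0.0863, t_ref=15.0):
            # Exponential (Q10-type) activity response: catch scales as
            # exp(r * T), so dividing by exp(r * (T - t_ref)) removes the
            # temperature-driven component relative to t_ref.
            catch = np.asarray(catch, dtype=float)
            temperature = np.asarray(temperature, dtype=float)
            return catch / np.exp(r * (temperature - t_ref))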

  20. A 2 × 2 taxonomy of multilevel latent contextual models: accuracy-bias trade-offs in full and partial error correction models.

    Science.gov (United States)

    Lüdtke, Oliver; Marsh, Herbert W; Robitzsch, Alexander; Trautwein, Ulrich

    2011-12-01

    In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data when estimating contextual effects are distinguished: unreliability that is due to measurement error and unreliability that is due to sampling error. The fact that studies may or may not correct for these 2 types of error can be translated into a 2 × 2 taxonomy of multilevel latent contextual models comprising 4 approaches: an uncorrected approach, partial correction approaches correcting for either measurement or sampling error (but not both), and a full correction approach that adjusts for both sources of error. It is shown mathematically and with simulated data that the uncorrected and partial correction approaches can result in substantially biased estimates of contextual effects, depending on the number of L1 individuals per group, the number of groups, the intraclass correlation, the number of indicators, and the size of the factor loadings. However, the simulation study also shows that partial correction approaches can outperform full correction approaches when the data provide only limited information in terms of the L2 construct (i.e., small number of groups, low intraclass correlation). A real-data application from educational psychology is used to illustrate the different approaches.

  1. Planning corrective osteotomy of the femoral bone using three-dimensional modeling. Part II

    Directory of Open Access Journals (Sweden)

    Vladimir E. Baskov

    2017-10-01

    Full Text Available Introduction. Three-dimensional (3D) modeling and prototyping are increasingly being used in various branches of surgery for planning and performing surgical interventions. In orthopedics, this technology was first used in 1990 for knee-joint surgery; this was followed by the development of protocols for creating and applying individual patterns for navigation in surgical interventions on various bones. Aim. The study aimed to develop a new 3D method for planning and performing corrective osteotomy of the femoral bone using an individual pattern, and to identify the advantages of the proposed method in comparison with the standard method of planning and performing the surgical intervention. Materials and methods. A new method for planning and performing corrective osteotomy of the femoral bone in children with various pathologies of the hip joint is presented. The outcomes of planning and performing corrective osteotomy of the femoral bone in 27 patients aged 5 to 18 years (32 hip joints) with congenital and acquired deformity of the femoral bone were analyzed. Conclusion. The use of computer 3D modeling for planning and implementing corrective interventions on the femoral bone improves treatment results owing to the near-perfect accuracy achieved by minimizing possible human error, a shorter surgery duration, and reduced radiation exposure for the patient.

  2. Improving quantitative dosimetry in (177)Lu-DOTATATE SPECT by energy window-based scatter corrections

    DEFF Research Database (Denmark)

    de Nijs, Robin; Lagerburg, Vera; Klausen, Thomas L

    2014-01-01

    and the activity, which depends on the collimator type, the utilized energy windows and the applied scatter correction techniques. In this study, energy window subtraction-based scatter correction methods are compared experimentally and quantitatively. MATERIALS AND METHODS: (177)Lu SPECT images of a phantom...... technique, the measured ratio was close to the real ratio, and the differences between spheres were small. CONCLUSION: For quantitative (177)Lu imaging MEGP collimators are advised. Both energy peaks can be utilized when the ESSE correction technique is applied. The difference between the calculated...

  3. Error Correction of Meteorological Data Obtained with Mini-AWSs Based on Machine Learning

    Directory of Open Access Journals (Sweden)

    Ji-Hun Ha

    2018-01-01

    Full Text Available Severe weather events occur more frequently due to climate change; therefore, accurate weather forecasts are necessary, in addition to the development of numerical weather prediction (NWP of the past several decades. A method to improve the accuracy of weather forecasts based on NWP is the collection of more meteorological data by reducing the observation interval. However, in many areas, it is economically and locally difficult to collect observation data by installing automatic weather stations (AWSs. We developed a Mini-AWS, much smaller than AWSs, to complement the shortcomings of AWSs. The installation and maintenance costs of Mini-AWSs are lower than those of AWSs; Mini-AWSs have fewer spatial constraints with respect to the installation than AWSs. However, it is necessary to correct the data collected with Mini-AWSs because they might be affected by the external environment depending on the installation area. In this paper, we propose a novel error correction of atmospheric pressure data observed with a Mini-AWS based on machine learning. Using the proposed method, we obtained corrected atmospheric pressure data, reaching the standard of the World Meteorological Organization (WMO; ±0.1 hPa, and confirmed the potential of corrected atmospheric pressure data as an auxiliary resource for AWSs.

  4. Information filtering based on corrected redundancy-eliminating mass diffusion.

    Science.gov (United States)

    Zhu, Xuzhen; Yang, Yujie; Chen, Guilin; Medo, Matus; Tian, Hui; Cai, Shi-Min

    2017-01-01

    Methods used in information filtering and recommendation often rely on quantifying the similarity between objects or users. The similarity metrics used often suffer from similarity redundancies arising from correlations between objects' attributes. Based on an unweighted undirected object-user bipartite network, we propose a Corrected Redundancy-Eliminating similarity index (CRE) which is based on a spreading process on the network. Extensive experiments on three benchmark data sets - MovieLens, Netflix and Amazon - show that when used in recommendation, the CRE yields significant improvements in terms of recommendation accuracy and diversity. A detailed analysis is presented to unveil the origins of the observed differences between the CRE and mainstream similarity indices.
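
    The abstract does not spell out the corrected index itself, but the underlying spreading process is the familiar two-step mass diffusion on the user-object bipartite network, sketched below for orientation only; the redundancy-eliminating correction of the CRE is not reproduced here (Python/NumPy):

        import numpy as np

        def mass_diffusion(adj):
            # adj: (n_users, n_objects) 0/1 adjacency matrix.
            user_deg = adj.sum(axis=1, keepdims=True)  # objects per user
            obj_deg = adj.sum(axis=0, keepdims=True)   # users per object
            # Step 1: each object splits its resource equally among its users.
            to_users = adj / np.where(obj_deg == 0, 1, obj_deg)
            # Step 2: each user splits the received resource among its objects.
            to_objects = adj / np.where(user_deg == 0, 1, user_deg)
            # (n_objects, n_objects) object-to-object diffusion matrix.
            return to_objects.T @ to_users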

  5. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    Science.gov (United States)

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure performs well and avoids the non-convergence problem when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. CORRECTION OF FAULTY LINES IN MUSCLE MODEL, TO BE USED IN 3D BUILDING NETWORK CONSTRUCTION

    Directory of Open Access Journals (Sweden)

    I. R. Karas

    2012-07-01

    Full Text Available This paper describes the use of the MUSCLE (Multidirectional Scanning for Line Extraction) model for automatic generation of 3D networks in CityGML format from raster floor plans. MUSCLE is a conversion method developed to vectorize straight lines from raster images, including floor plans, maps for GIS, architectural drawings, and machine plans. The model allows the user to define specific criteria which are crucial to the vectorization process. Unlike traditional vectorization, this model generates straight lines based on a line-thinning algorithm, without performing the line-following/chain-coding and vector-reduction stages. In this method, nearly vertical lines are obtained by scanning the image horizontally, while nearly horizontal lines are obtained by scanning the image vertically. Where two or more consecutive lines are nearly horizontal or nearly vertical, the raster data become unmanageable and the process generates wrongly vectorized lines; in this situation, to obtain the precise lines, the image with the wrongly vectorized lines is scanned diagonally. Using the MUSCLE model, the network models are topologically structured in CityGML format. After the generation process, it is possible to perform 3D network analysis based on these models, and a geodatabase of the models can be established using software designed around the generated models. This paper presents the correction application in MUSCLE and explains the 3D network construction in detail.
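
    The horizontal-scanning stage that seeds the nearly vertical lines can be pictured as follows; the run-length threshold and the simple run-center output are simplifications of ours, not the published MUSCLE algorithm (Python/NumPy):

        import numpy as np

        def scan_horizontal(image, max_run=5):
            # image: 2D binary raster, nonzero = ink. Short horizontal runs
            # of ink mark crossings of nearly vertical strokes; their centers
            # can later be chained across rows into line segments. Long runs
            # belong to nearly horizontal lines, left to the vertical scan.
            centers = []
            for y, row in enumerate(np.asarray(image)):
                x = 0
                while x < len(row):
                    if row[x]:
                        start = x
                        while x < len(row) and row[x]:
                            x += 1
                        if x - start <= max_run:
                            centers.append((y, (start + x - 1) / 2.0))
                    else:
                        x += 1
            return centers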

  7. Assessing climate change impacts on the rape stem weevil, Ceutorhynchus napi Gyll., based on bias- and non-bias-corrected regional climate change projections

    Science.gov (United States)

    Junk, J.; Ulber, B.; Vidal, S.; Eickermann, M.

    2015-11-01

    Agricultural production is directly affected by projected increases in air temperature and changes in precipitation. A multi-model ensemble of regional climate change projections indicated shifts towards higher air temperatures and changing precipitation patterns during the summer and winter seasons up to the year 2100 for the region of Goettingen (Lower Saxony, Germany). A second major controlling factor of the agricultural production is the infestation level by pests. Based on long-term field surveys and meteorological observations, a calibration of an existing model describing the migration of the pest insect Ceutorhynchus napi was possible. To assess the impacts of climate on pests under projected changing environmental conditions, we combined the results of regional climate models with the phenological model to describe the crop invasion of this species. In order to reduce systematic differences between the output of the regional climate models and observational data sets, two different bias correction methods were applied: a linear correction for air temperature and a quantile mapping approach for precipitation. Only the results derived from the bias-corrected output of the regional climate models showed satisfying results. An earlier onset, as well as a prolongation of the possible time window for the immigration of Ceutorhynchus napi, was projected by the majority of the ensemble members.
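
    Both bias-correction methods named above are simple to state. The sketch below pairs an additive shift of the mean bias (the linear temperature correction) with empirical quantile mapping for precipitation; the empirical-CDF variant and the 101-point quantile grid are our illustrative assumptions, and operational setups often fit parametric distributions and treat wet-day frequency separately (Python/NumPy):

        import numpy as np

        def linear_correction(model_future, model_hist, obs_hist):
            # Additive correction, as applied to air temperature.
            return model_future + (np.mean(obs_hist) - np.mean(model_hist))

        def quantile_map(model_future, model_hist, obs_hist, n_q=101):
            # Quantile mapping, as applied to precipitation: each future value
            # is mapped to the observed value at the same quantile it occupies
            # in the historical model distribution.
            q = np.linspace(0.0, 1.0, n_q)
            model_q = np.quantile(model_hist, q)
            obs_q = np.quantile(obs_hist, q)
            p = np.interp(model_future, model_q, q)  # value -> quantile
            return np.interp(p, q, obs_q)            # quantile -> value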

  9. A Time-Walk Correction Method for PET Detectors Based on Leading Edge Discriminators.

    Science.gov (United States)

    Du, Junwei; Schmall, Jeffrey P; Judenhofer, Martin S; Di, Kun; Yang, Yongfeng; Cherry, Simon R

    2017-09-01

    The leading edge timing pick-off technique is the simplest timing extraction method for PET detectors. Due to the inherent time-walk of the leading edge technique, corrections should be made to improve timing resolution, especially for time-of-flight PET. Time-walk correction can be done by utilizing the relationship between the threshold crossing time and the event energy on an event-by-event basis. In this paper, a time-walk correction method is proposed and evaluated using timing information from two identical detectors, both using leading edge discriminators. This differs from other techniques that use an external dedicated reference detector, such as a fast PMT-based detector using constant fraction techniques to pick off timing information. In our proposed method, one detector was used as the reference detector to correct the time-walk of the other detector. Time-walk in the reference detector was minimized by using events within a small energy window (508.5-513.5 keV). To validate this method, a coincidence detector pair was assembled using two SensL MicroFB SiPMs and two 2.5 mm × 2.5 mm × 20 mm polished LYSO crystals. Coincidence timing resolutions using different time pick-off techniques were obtained at a bias voltage of 27.5 V and a fixed temperature of 20 °C. The coincidence timing resolutions without time-walk correction were 389.0 ± 12.0 ps (425-650 keV energy window) and 670.2 ± 16.2 ps (250-750 keV energy window). The timing resolution with time-walk correction improved to 367.3 ± 0.5 ps (425-650 keV) and 413.7 ± 0.9 ps (250-750 keV). For comparison, timing resolutions were 442.8 ± 12.8 ps (425-650 keV) and 476.0 ± 13.0 ps (250-750 keV) using constant fraction techniques, and 367.3 ± 0.4 ps (425-650 keV) and 413.4 ± 0.9 ps (250-750 keV) using a reference detector based on the constant fraction technique. These results show that the proposed leading-edge-based time-walk correction method works well. Timing resolution obtained
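
    In essence, the correction learns the threshold-crossing time as a function of event energy against the low-walk reference detector and subtracts it event by event. The sketch below assumes a power-law-plus-constant walk curve; the functional form and the least-squares fit are our illustrative choices, since the paper derives the relationship empirically (Python/SciPy):

        import numpy as np
        from scipy.optimize import curve_fit

        def walk_model(energy, a, b, c):
            # Assumed empirical walk curve: larger pulses cross the fixed
            # leading-edge threshold earlier, so walk falls with energy.
            return a * energy ** (-b) + c

        def time_walk_correction(delta_t, energy):
            # delta_t: coincidence time differences vs. the reference detector
            # (its own walk minimized by a narrow ~511 keV energy window).
            params, _ = curve_fit(walk_model, energy, delta_t, p0=(1e3, 1.0, 0.0))
            return delta_t - walk_model(energy, *params)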

  10. Clinical neurocardiology: defining the value of neuroscience-based cardiovascular therapeutics

    Science.gov (United States)

    Ajijola, Olujimi A.; Anand, Inder; Armour, J. Andrew; Chen, Peng‐Sheng; Esler, Murray; De Ferrari, Gaetano M.; Fishbein, Michael C.; Goldberger, Jeffrey J.; Harper, Ronald M.; Joyner, Michael J.; Khalsa, Sahib S.; Kumar, Rajesh; Lane, Richard; Mahajan, Aman; Po, Sunny; Schwartz, Peter J.; Somers, Virend K.; Valderrabano, Miguel; Vaseghi, Marmar; Zipes, Douglas P.

    2016-01-01

    The autonomic nervous system regulates all aspects of normal cardiac function, and is recognized to play a critical role in the pathophysiology of many cardiovascular diseases. As such, the value of neuroscience-based cardiovascular therapeutics is increasingly evident. This White Paper reviews the current state of understanding of human cardiac neuroanatomy, neurophysiology, pathophysiology in specific disease conditions, autonomic testing, risk stratification, and neuromodulatory strategies to mitigate the progression of cardiovascular diseases. PMID:27114333

  11. List-mode-based reconstruction for respiratory motion correction in PET using non-rigid body transformations

    International Nuclear Information System (INIS)

    Lamare, F; Carbayo, M J Ledesma; Cresson, T; Kontaxakis, G; Santos, A; Rest, C Cheze Le; Reader, A J; Visvikis, D

    2007-01-01

    Respiratory motion in emission tomography leads to reduced image quality. Correction methodology developed to date has concentrated on the use of respiration-synchronized acquisitions leading to gated frames. Such frames, however, have a low signal-to-noise ratio as a result of their reduced statistics. In this work, we describe the implementation of an elastic transformation within a list-mode-based reconstruction for the correction of respiratory motion over the thorax, allowing the use of all data available throughout a respiration-averaged acquisition. The developed algorithm was evaluated using datasets of the NCAT phantom generated at different points throughout the respiratory cycle. List-mode-data-based PET-simulated frames were subsequently produced by combining the NCAT datasets with Monte Carlo simulation. A non-rigid registration algorithm based on B-spline basis functions was employed to derive transformation parameters accounting for the respiratory motion using the NCAT dynamic CT images. The displacement matrices derived were subsequently applied during the image reconstruction of the original emission list-mode data. Two different implementations for the incorporation of the elastic transformations within the one-pass list-mode EM (OPL-EM) algorithm were developed and evaluated. The corrected images were compared with those produced using an affine transformation of list-mode data prior to reconstruction, as well as with uncorrected respiratory-motion-averaged images. Results demonstrate that although both correction techniques considered lead to significant improvements in accounting for respiratory motion artefacts in the lung fields, the elastic-transformation-based correction leads to a more uniform improvement across the lungs for different lesion sizes and locations.

  12. Uncertainty estimation with bias-correction for flow series based on rating curve

    Science.gov (United States)

    Shao, Quanxi; Lerat, Julien; Podger, Geoff; Dutta, Dushmanta

    2014-03-01

    Streamflow discharge constitutes one of the fundamental data required to perform water balance studies and develop hydrological models. A rating curve, designed on the basis of a series of concurrent stage and discharge measurements at a gauging location, provides a way to generate complete discharge time series of reasonable quality if sufficient measurement points are available. However, the associated uncertainty is frequently not available, even though it has a significant impact on hydrological modelling. In this paper, we identify discrepancies in the hydrographers' rating curves used to derive the historical discharge series and propose a modification by bias correction, which takes the same power-function form as the traditional rating curve. To obtain the uncertainty estimation, we propose a further Box-Cox transformation applied to both sides of the regression, stabilizing the residuals as close to the normal distribution as possible so that a proper uncertainty can be attached to the whole discharge series in the ensemble generation. We demonstrate the proposed method by applying it to gauging stations on the Flinders and Gilbert rivers in north-west Queensland, Australia.
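
    The traditional rating curve is the power function Q = a(h - h0)^b, which becomes a straight line in log space; the proposed bias correction takes the same functional form. Below is a sketch of the basic log-space fit on which such corrections build; fixing h0 and using ordinary least squares are simplifying assumptions of ours (Python/NumPy):

        import numpy as np

        def fit_rating_curve(stage, discharge, h0=0.0):
            # log Q = log a + b * log(h - h0): a straight line in log space.
            x = np.log(stage - h0)
            y = np.log(discharge)
            b, log_a = np.polyfit(x, y, 1)   # slope, intercept
            residuals = y - (log_a + b * x)  # to be Box-Cox stabilized
            return np.exp(log_a), b, residuals

    The ensemble generation described above would then resample these (transformed) residuals to attach an uncertainty band to the whole discharge series.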

  13. Establishing a compulsory drug treatment prison: Therapeutic policy, principles, and practices in addressing offender rights and rehabilitation.

    Science.gov (United States)

    Birgden, Astrid; Grant, Luke

    2010-01-01

    A Compulsory Drug Treatment Correctional Center (CDTCC) was established in Australia in 2006 for repeat drug-related male offenders. Compulsory treatment law is inconsistent with a therapeutic jurisprudence approach. Despite the compulsory law, a normative offender rehabilitation framework has been established based on offender moral rights. Within moral rights, the offender rehabilitation framework addresses the core values of freedom (supporting autonomous decision-making) and well-being (supporting physical, social, and psychological needs). Moral rights are underpinned by a theory or principle which, in this instance, is a humane approach to offender rehabilitation. While a law that permits offenders to choose drug treatment and rehabilitation is preferable, the article discusses the establishment of a prison based on therapeutic policy, principles, and practices that respond to participants as both rights-violators and rights-holders. The opportunity for accelerated community access and a therapeutic alliance with staff has resulted in offenders actively seeking to be ordered into compulsory drug treatment and rehabilitation. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.

  14. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy.

    Science.gov (United States)

    Jagetic, Lydia J; Newhauser, Wayne D

    2015-06-21

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models.

  15. Spatial uncertainty in bias corrected climate change projections and hydrogeological impacts

    DEFF Research Database (Denmark)

    Seaby, Lauren Paige; Refsgaard, Jens Christian; Sonnenborg, Torben

    2015-01-01

    The question of which climate model bias correction methods and spatial scales for correction are optimal both for projecting future hydrological changes and for removing initial model bias has so far received little attention. For 11 climate models (CMs), or GCM/RCM – Global/Regional Climate Model – pairings, this paper analyses the relationship between complexity and robustness of three distribution-based scaling (DBS) bias correction methods applied to daily precipitation at various spatial scales. Hydrological simulations are forced by CM inputs to assess the spatial uncertainty ... signals. The magnitude of spatial bias seen in precipitation inputs does not necessarily correspond to the magnitude of biases seen in hydrological outputs. Variables that integrate basin responses over time and space are more sensitive to mean spatial biases and less so to extremes. Hydrological...

  16. Introduction to thematic minireview series: Development of human therapeutics based on induced pluripotent stem cell (iPSC) technology.

    Science.gov (United States)

    Rao, Mahendra; Gottesfeld, Joel M

    2014-02-21

    With the advent of human induced pluripotent stem cell (hiPSC) technology, it is now possible to derive patient-specific cell lines that are of great potential in both basic research and the development of new therapeutics for human diseases. Not only do hiPSCs offer unprecedented opportunities to study cellular differentiation and model human diseases, but the differentiated cell types obtained from iPSCs may become therapeutics themselves. These cells can also be used in the screening of therapeutics and in toxicology assays for potential liabilities of therapeutic agents. The remarkable achievement of transcription factor reprogramming to generate iPSCs was recognized by the award of the Nobel Prize in Medicine to Shinya Yamanaka in 2012, just 6 years after the first publication of reprogramming methods to generate hiPSCs (Takahashi, K., Tanabe, K., Ohnuki, M., Narita, M., Ichisaka, T., Tomoda, K., and Yamanaka, S. (2007) Cell 131, 861-872). This minireview series highlights both the promises and challenges of using iPSC technology for disease modeling, drug screening, and the development of stem cell therapeutics.

  17. Bias correction for the estimation of sensitivity indices based on random balance designs

    International Nuclear Information System (INIS)

    Tissot, Jean-Yves; Prieur, Clémentine

    2012-01-01

    This paper deals with the random balance design method (RBD) and its hybrid approach, RBD-FAST. Both of these global sensitivity analysis methods originate from the Fourier amplitude sensitivity test (FAST) and consequently face the main problems inherent to discrete harmonic analysis. We present a general way to correct a bias which occurs when estimating sensitivity indices (SIs) of any order - except the total SI of a single factor or group of factors - by the random balance design method (RBD) and its hybrid version, RBD-FAST. In the RBD case, this positive bias was recently identified in a paper by Xu and Gertner [1]. Following their work, we propose a bias correction method for first-order SI estimates in RBD. We then extend the correction method to SIs of any order in RBD-FAST. Finally, we suggest an efficient strategy to estimate all the first- and second-order SIs using RBD-FAST. Highlights: We provide a bias correction method for the global sensitivity analysis methods RBD and RBD-FAST. In RBD, first-order sensitivity estimates are corrected. In RBD-FAST, sensitivity indices of any order and closed sensitivity indices are corrected. We propose an efficient strategy to estimate all the first- and second-order indices of a model.

  18. Apparent resistivity for transient electromagnetic induction logging and its correction in radial layer identification

    Science.gov (United States)

    Meng, Qingxin; Hu, Xiangyun; Pan, Heping; Xi, Yufei

    2018-04-01

    We propose an algorithm for calculating all-time apparent resistivity from transient electromagnetic induction logging. The algorithm is based on the whole-space transient electric field expression of the uniform model and Halley's optimisation. In trial calculations for uniform models, the all-time algorithm is shown to have high accuracy. We use the finite-difference time-domain method to simulate the transient electromagnetic field in radial two-layer models without wall rock and convert the simulation results to apparent resistivity using the all-time algorithm. The time-varying apparent resistivity reflects the radially layered geoelectrical structure of the models and the apparent resistivity of the earliest time channel follows the true resistivity of the inner layer; however, the apparent resistivity at larger times reflects the comprehensive electrical characteristics of the inner and outer layers. To accurately identify the outer layer resistivity based on the series relationship model of the layered resistance, the apparent resistivity and diffusion depth of the different time channels are approximately replaced by related model parameters; that is, we propose an apparent resistivity correction algorithm. By correcting the time-varying apparent resistivity of radial two-layer models, we show that the correction results reflect the radially layered electrical structure and the corrected resistivities of the larger time channels follow the outer layer resistivity. The transient electromagnetic fields of radially layered models with wall rock are simulated to obtain the 2D time-varying profiles of the apparent resistivity and corrections. The results suggest that the time-varying apparent resistivity and correction results reflect the vertical and radial geoelectrical structures. For models with small wall-rock effect, the correction removes the effect of the low-resistance inner layer on the apparent resistivity of the larger time channels.
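
    Halley's optimisation named above is the cubically convergent root-finding iteration x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f''). In the all-time algorithm it would be applied to the equation matching the measured transient field to the whole-space uniform-model expression at a trial resistivity; the generic sketch below omits that specific field expression (Python):

        def halley(f, df, d2f, x0, tol=1e-10, max_iter=50):
            # Halley's method: solve f(x) = 0 with cubic convergence,
            # given the first (df) and second (d2f) derivatives.
            x = x0
            for _ in range(max_iter):
                fx, dfx, d2fx = f(x), df(x), d2f(x)
                step = 2.0 * fx * dfx / (2.0 * dfx ** 2 - fx * d2fx)
                x -= step
                if abs(step) < tol:
                    return x
            raise RuntimeError("Halley iteration did not converge")

        # Example: cube root of 2 via f(x) = x**3 - 2:
        # halley(lambda x: x**3 - 2, lambda x: 3*x**2, lambda x: 6*x, 1.0)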

  19. Asteroseismic modelling of solar-type stars: internal systematics from input physics and surface correction methods

    Science.gov (United States)

    Nsamba, B.; Campante, T. L.; Monteiro, M. J. P. F. G.; Cunha, M. S.; Rendle, B. M.; Reese, D. R.; Verma, K.

    2018-04-01

    Asteroseismic forward modelling techniques are being used to determine fundamental properties (e.g. mass, radius, and age) of solar-type stars. The need to take into account all possible sources of error is of paramount importance towards a robust determination of stellar properties. We present a study of 34 solar-type stars for which high signal-to-noise asteroseismic data are available from multi-year Kepler photometry. We explore the internal systematics on the stellar properties, that is, those associated with the uncertainty in the input physics used to construct the stellar models. In particular, we explore the systematics arising from: (i) the inclusion of the diffusion of helium and heavy elements; and (ii) the uncertainty in the solar metallicity mixture. We also assess the systematics arising from (iii) different surface correction methods used in optimisation/fitting procedures. The systematics arising from comparing results of models with and without diffusion are found to be 0.5%, 0.8%, 2.1%, and 16% in mean density, radius, mass, and age, respectively. The internal systematics in age are significantly larger than the statistical uncertainties. We find the internal systematics resulting from the uncertainty in the solar metallicity mixture to be 0.7% in mean density, 0.5% in radius, 1.4% in mass, and 6.7% in age. The surface correction method by Sonoi et al. and Ball & Gizon's two-term correction produce the lowest internal systematics among the different correction methods, namely ~1%, ~1%, ~2%, and ~8% in mean density, radius, mass, and age, respectively. Stellar masses obtained using the surface correction methods by Kjeldsen et al. and Ball & Gizon's one-term correction are systematically higher than those obtained using frequency ratios.

  20. Importance of Lorentz structure in the parton model: Target mass corrections, transverse momentum dependence, positivity bounds

    International Nuclear Information System (INIS)

    D'Alesio, U.; Leader, E.; Murgia, F.

    2010-01-01

    We show that respecting the underlying Lorentz structure in the parton model has very strong consequences. Failure to insist on the correct Lorentz covariance is responsible for the existence of contradictory results in the literature for the polarized structure function g_2(x), whereas with the correct imposition we are able to derive the Wandzura-Wilczek relation for g_2(x) and the target-mass corrections for polarized deep inelastic scattering without recourse to the operator product expansion. We comment briefly on the problem of threshold behavior in the presence of target-mass corrections. Careful attention to the Lorentz structure also has profound implications for the structure of the transverse momentum dependent parton densities often used in parton model treatments of hadron production, allowing the k_T dependence to be derived explicitly. It also leads to stronger positivity and Soffer-type bounds than usually utilized for the collinear densities.